Search Results

Search found 5380 results on 216 pages for 'webmasters'.


  • Google Analytics Goal Tracking for Sub-Domains?

    - by Hasan Khan
    I am trying to track goals in Google Analytics for a website that has the goal URL on a sub-domain. The main domain, for example, is domain.com and the sub-domain is my.domain.com. I have Google Analytics configured to track the domain and all sub-domains, and I've even set up an advanced filter so I can see traffic to my sub-domains in Analytics. However, for goal tracking you're supposed to enter only the part of the URL that comes after the domain (so if it were domain.com/conversions/ you'd put in just /conversions/). Since for me it would be my.domain.com/conversions/, how would I input that URL into Analytics to track it? Would Analytics automatically determine the URL to be on the sub-domain? Thanks!
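
    For reference, here is a minimal sketch of the classic ga.js cross-sub-domain setup described above (UA-XXXXXX-X and domain.com are placeholders). With this setup Analytics records only the request path, so the goal URL stays /conversions/ no matter which sub-domain served the page; if an advanced filter prepends the hostname to the page path, the goal would instead need to match my.domain.com/conversions/ (for example via a regular-expression match).

      // Classic ga.js tracker shared by domain.com and my.domain.com
      // (UA-XXXXXX-X is a placeholder for the property ID).
      var _gaq = _gaq || [];
      _gaq.push(['_setAccount', 'UA-XXXXXX-X']);
      _gaq.push(['_setDomainName', 'domain.com']); // one cookie shared by the root domain and every sub-domain
      _gaq.push(['_trackPageview']);

      (function () {
        var ga = document.createElement('script');
        ga.type = 'text/javascript';
        ga.async = true;
        ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
        var s = document.getElementsByTagName('script')[0];
        s.parentNode.insertBefore(ga, s);
      })();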

    Read the article

  • How to use Google Analytics to track development and production versions of the same site on different servers?

    - by Abe
    I have a website with two versions, one for production and one for development (testing new features). All of the code is under version control and the websites are on separate servers. Currently, I have the same Google Analytics tracking code on both sites. Since the code is under version control, it would be ideal to have an "if on the production server, use this code; else if on the development server, use that code" clause. But I suspect that Google makes it easier to do something like this. I see that there are many ways to configure a GA tracking code, e.g. "a single domain" vs. "multiple top-level domains", but it is not clear to me how to set this up. Also, if tracking code configured for a single domain has been on the development server, have I been picking up traffic to both sites, or does GA just ignore the second domain that I haven't registered?
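
    One version-control-friendly sketch of the "if production, use this code" idea is to pick the property ID from the hostname at runtime; the hostnames and the UA-XXXXXX-1 / UA-XXXXXX-2 IDs below are placeholders, not anything Google prescribes.

      // Choose the Analytics property based on which host is serving the page,
      // so the same committed snippet works on both servers.
      var gaAccount = (window.location.hostname === 'www.example.com')
        ? 'UA-XXXXXX-1'   // production property
        : 'UA-XXXXXX-2';  // development property

      var _gaq = _gaq || [];
      _gaq.push(['_setAccount', gaAccount]);
      _gaq.push(['_trackPageview']);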

    Read the article

  • Redirect a URL to another URL in IIS 7.5

    - by Jason White
    I have no idea why this isn't working. I've tried creating map rules and then rewriting and redirecting the URL. I've tried just redirecting it with a simple rewrite rule, and no matter what, the only time I can get it to work is if I set the match URL to the regex .* — I'm trying to redirect webmail.example.com to mail.example.com. Seemed like it would have taken but a couple of seconds; boy was I wrong. I'm thinking I must be doing something wrong with the regex, but I'm not sure what, as when I test it, it seems to work fine.

      <rule name="webmail" patternSyntax="ECMAScript" stopProcessing="true">
        <match url=".*webmail.*" />
        <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
        </conditions>
        <action type="Redirect" url="https://mail.example.com:8000" appendQueryString="false" logRewrittenUrl="true" />
      </rule>

    Thanks
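
    A possible culprit: in IIS URL Rewrite, the url attribute of <match> is tested against the request path only, never the host name, so a pattern containing "webmail" can never match on the sub-domain (which would explain why only .* works). A hedged sketch of the same rule with the host matched in a condition instead, keeping the original action and port:

      <rule name="webmail" patternSyntax="ECMAScript" stopProcessing="true">
        <match url=".*" />
        <conditions logicalGrouping="MatchAll" trackAllCaptures="false">
          <add input="{HTTP_HOST}" pattern="^webmail\.example\.com$" />
        </conditions>
        <action type="Redirect" url="https://mail.example.com:8000/{R:0}" appendQueryString="false" logRewrittenUrl="true" />
      </rule>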

    Read the article

  • What do I do if a user uploads child pornography?

    - by Tom Marthenal
    If my website allows uploading images (which are not moderated), what action do I take if a user uploads child pornography? I already make it easy to report images, and have never had this problem before, but am wondering what the appropriate response is. My initial thought is to:

    1. Immediately delete (not just make inaccessible) the image.
    2. File a report with the National Center for Missing and Exploited Children with all the information I have on the user (IP, URL, user-agent, etc.), identifying myself as the website operator and providing contact information.
    3. Check any other images uploaded by that IP or user and prevent them from uploading in the future (fully preventing this is impossible, but I can at least block their account).

    This seems like a good way to be responsible in reporting, but does this satisfy all of my legal and moral responsibilities? Would it be better not to delete the image and to just make it inaccessible, so that it can be sent to the National Center for Missing & Exploited Children, the police, the FBI, etc.?

    Read the article

  • Why are the tags on my WordPress site being indexed instead of the pages?

    - by Bernard
    I can't figure out why my tags are being indexed by Google and not my actual posts. In Google, my posts are showing up as mysite.com/tags/post, and of course I want them to look like mysite.com/category/actualpost. Any ideas what could be wrong? My domain is 3 years old and I just started a new focus for an existing site. I can't figure this out! There is no duplicate content, I have a sitemap submitted to Webmaster Tools and a robots.txt... I have everything I need. This is the first time something like this has happened to me. Let me know if anyone has any ideas.
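
    If the aim is simply to keep crawlers focused on the posts rather than the tag archives, one common mitigation is to block the tag archives from crawling (or noindex them, e.g. via an SEO plugin). A minimal robots.txt sketch using the /tags/ path from the question:

      User-agent: *
      Disallow: /tags/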

    Read the article

  • Adding my face to my website in Google's search results

    - by Roman Matveev
    I'm trying to add a rich snippet to the template of my future website. The data format is a review, and I used microdata formatting to add all the necessary information to the web page. The Structured Data Testing Tool shows the rating, author information and review date. However, my face image is missing and the sections related to authorship are empty, even though I did everything recommended to link my Google+ profile to the website. Did I do something wrong? Or will I never see my face in the testing tool, and it will only appear in the real SERPs?
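
    For reference, the profile link Google recommended for authorship at the time usually took one of these two forms (the Google+ profile ID below is a placeholder):

      <!-- in the <head>, pointing at the author's Google+ profile -->
      <link rel="author" href="https://plus.google.com/111111111111111111111"/>

      <!-- or as a visible byline link anywhere on the page -->
      <a href="https://plus.google.com/111111111111111111111?rel=author">Roman Matveev</a>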

    Read the article

  • Two different domains as one user session

    - by Mathew Foscarini
    I have two websites that are run as the same service. Each domain offers articles from a different market. At the top of each page the two domains are shown as menu options; if a user clicks one, they can switch to the other domain. See here: http://www.cgtag.com Each domain has a different Google Analytics account, and when a user switches domains Google counts this as a new session, listing the other domain as the "referral" for that new session. When the user switches back to the first domain, Google counts this as a returning visitor. This is messing up my reports, showing returning-visitor values that are higher than reality. It also increases hits on landing pages when the user switches, and lists the other domain as a referral site. I've found tips on how to treat two domains as one website, but that results in merging the data. I want to keep the two domains separate so that I can track each one's performance, but I don't want to count domain changes as new sessions. Maybe something like treating the two domains as subdomains.
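
    For what it's worth, a sketch of the classic ga.js cross-domain "linker" mechanism, which carries the visitor's existing Analytics cookies across the hop so the switch isn't treated as a brand-new visit (the property ID is a placeholder; each domain would still call _setAccount with its own ID, and the stitched-together session only shows up in a property that both domains report to):

      // On both domains, in the standard snippet:
      var _gaq = _gaq || [];
      _gaq.push(['_setAccount', 'UA-XXXXXX-X']);
      _gaq.push(['_setDomainName', 'none']);
      _gaq.push(['_setAllowLinker', true]);
      _gaq.push(['_trackPageview']);

      // On the menu link that switches domains, pass the cookies along in the URL:
      document.getElementById('switch-domain').onclick = function () {
        _gaq.push(['_link', this.href]); // _link appends the GA cookies and performs the navigation
        return false;
      };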

    Read the article

  • DNS configuration to force root domain to www

    - by kolosy
    We have an app running on Heroku. The DNS setup is like this: an A record for domain.com pointing at Heroku's front-end IP addresses, and a CNAME for www.domain.com pointing at the specific host name provided for our app by Heroku. We also have an SSL cert for www.domain.com. The issue is that if someone goes to https://domain.com/secure_stuff, they will get Heroku's SSL cert instead of ours, causing lots of fear. We can do things on our end to make sure that all of our URLs point to https://www.domain.com, but that still won't solve this specific issue. Is there a way to configure the DNS records to redirect all root-domain traffic to the www subdomain?
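
    Worth noting: DNS by itself cannot issue a redirect; the apex-to-www hop has to happen at the HTTP layer, either in the app or at a DNS/redirect provider that answers for the bare domain. Purely as an illustration, since the app's stack isn't stated, a hypothetical Node/Express-style middleware that does it in the app:

      // Hypothetical sketch: send any request for the bare apex
      // to the canonical https://www host before anything else runs.
      const express = require('express');
      const app = express();

      app.use(function (req, res, next) {
        if (req.hostname === 'domain.com') {
          return res.redirect(301, 'https://www.domain.com' + req.originalUrl);
        }
        next();
      });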

    Read the article

  • FTP file access problem

    - by Fahad Uddin
    I recently got malware on my website. I have made a backup of the website on my computer and am trying to wipe my FTP space clean. I am trying to delete the root folder but I'm getting this error message on all of the malicious files: Response: 550 Could not delete index.php: Permission denied. I am the sole administrator of the FTP account, so permissions should not have been an issue. My host provider does not seem to suffer from this problem, as his websites are running well without any malware. I have also tried changing the root folder's permissions to 777 to see if that would help me delete the files, but I am still getting the same error. Please help out. Thanks

    Read the article

  • Chrome "refusing to execute script"

    - by TestSubject528491
    In the head of my HTML page, I have: <script src="https://raw.github.com/cloudhead/less.js/master/dist/less-1.3.3.js"></script> When I load the page in my browser (Google Chrome v 27.0.1453.116) and enable the developer tools, it says: Refused to execute script from 'https://raw.github.com/cloudhead/less.js/master/dist/less-1.3.3.js' because its MIME type ('text/plain') is not executable, and strict MIME type checking is enabled. Indeed, the script won't run. Why does Chrome think this is a plain-text file? It clearly has a .js file extension. Since I'm using HTML5, I omitted the type attribute, so I thought that might be causing the problem. So I added type="text/javascript" to the <script> tag and got the same result. I even tried type="application/javascript" and still got the same error. Then I tried changing it to type="text/plain" just out of curiosity. The browser did not return an error, but of course the JavaScript did not run. Finally, I thought the periods in the filename might be throwing the browser off, so in my HTML I changed all the periods to the URL escape character %2E: <script src="https://raw.github.com/cloudhead/less%2Ejs/master/dist/less-1%2E3%2E3.js"></script> This still did not work. The only thing that truly works (i.e. the browser does not give an error and the JS successfully runs) is if I download the file, upload it to a local directory, and then change the src value to the local file. I'd rather not do this, since I'm trying to save space on my own website. How do I get Chrome to recognize that the linked file is actually JavaScript?

    Read the article

  • Fetch as Googlebot works but Submit to Index does not for AJAX URLs

    - by Jennifer
    First I fetch as Googlebot, then I am prompted to Submit to Index. This I want to do, but the tool just re-prompts me. This does not happen when I am submitting a standard URL; for those URLs I get a confirmation that they were submitted to the index. It only occurs when I am submitting an AJAX URL. I know the URLs are searchable, as I have performed many tests and can see the results using /?_escaped_fragment_= Here is an example URL: http://www.townbeam.com/#!events Can someone shed some light on this? Thank you

    Read the article

  • Is there a good [and modern] reason not to have static HTML pages with AJAX content, rather than generated pages?

    - by user1725
    Assumptions: we don't care about IE6 or NoScript users. Let's pretend we have the following design concept: all your pages are HTML/CSS that create the aesthetics, layout, colours, and general design-related things. Let's pretend this basic code below is that:

      <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
      <html>
        <head>
          <link href="/example.css" rel="stylesheet" type="text/css"/>
          <script src="example.js" type="text/javascript"></script>
        </head>
        <body>
          <div class="left"> </div>
          <div class="mid"> </div>
          <div class="right"> </div>
        </body>
      </html>

    Which, with the right CSS, should produce three vertical columns on the web page. Now, here's the root of the question: what are the serious advantages and/or disadvantages of loading the content of these columns (let's assume they are all indeed dynamic content, not static) via AJAX requests, versus having the content pre-rendered by a scripting language? So in the AJAX case (let's assume jQuery is used on load) we would have either:

      // Multiple HTTP requests
      $("body > div.left").load("./script.php?content=news");
      $("body > div.right").load("./script.php?content=blogs");
      $("body > div.mid").load("./script.php?content=links");

    or:

      // Single HTTP request
      $.ajax({
        url: './script.php?content=news|blogs|links',
        dataType: 'json',  // expect a JSON response from script.php
        success: function (data) {
          $("body > div.left").html(data.news);
          $("body > div.right").html(data.blogs);
          $("body > div.mid").html(data.links);
        }
      });

    Versus doing this:

      <body>
        <div class="left"> <?php echo function_returning_news(); ?> </div>
        <div class="mid"> <?php echo function_returning_blogs(); ?> </div>
        <div class="right"> <?php echo function_returning_links(); ?> </div>
      </body>

    I'm personally thinking right now that static HTML pages are the better method. My reasoning is:

    1. I've separated my data, logic, and presentation (i.e. "MVC") code.
    2. I can make changes to one without touching the others.
    3. Browser caches mean I'm mostly hitting the server for the content, not the presentation wrapped around it.
    4. I could turn my "script.php" into a more robust API for the website.

    But I'm not certain these are legitimately good reasons, and I'm not confident I'm aware of other issues that could come up, so I would like to know the pros and cons, so to speak.

    Read the article

  • How to enable customers to use their own domain for sites hosted by me

    - by Scott
    I am thinking of running a self-serve site builder, but I was wondering how I would allow customers to use domains that they already own. Is that even possible? Let's say my site is www.bestsitebuildingwebsite.com and each customer has URLs like this: www.bestsitebuildingwebsite.com/frances www.bestsitebuildingwebsite.com/eden www.bestsitebuildingwebsite.com/john And a customer has a domain called widgets.com. Is it actually possible for widgets.com to point to my site somehow and still have hashes in the URL work (my site makes use of hashes for AJAX queries)? And can their site still have good SEO with Google? Thanks Scott
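
    The usual pattern is to have each customer point a CNAME (or A record) at your server and then choose which site to render from the Host header of the incoming request; the hash part of the URL never reaches the server, so #-based AJAX keeps working as-is. A hypothetical Node/Express sketch (all names are made up):

      const express = require('express');
      const app = express();

      // e.g. the customer "frances" registered widgets.com with the service
      const domainToSite = { 'widgets.com': 'frances', 'www.widgets.com': 'frances' };

      app.use(function (req, res, next) {
        req.site = domainToSite[req.hostname] || null; // which builder site to render
        if (!req.site) return res.status(404).send('Unknown domain');
        next();
      });

      app.get('/', function (req, res) {
        res.send('Rendering the site for customer: ' + req.site);
      });

      app.listen(80);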

    Read the article

  • Set Up Of Common Name Of SSL Certificate To Protect Plesk Panel

    - by Cbomb
    A PCI compliance scanner is balking that the self-signed SSL certificate protecting secure access to Plesk Panel contains a name mismatch between the location of the Plesk Panel and the name on the certificate; namely, the self-signed cert's name is "Parallels" and the domain used to reach Plesk is 'ip address:8443'. So I figured I would go ahead and get a free SSL certificate to try to fix this error. But when I generated the certificate, I used my server's domain name as the site name. So if I visit 'domain name:8443' all is fine, with no SSL warning. But if I visit 'ip address:8443' (which I believe is what the scanner does) I get the certificate name mismatch error; DigiCert's SSL checker says that the certificate name should be the IP address. Can I even generate a certificate whose common name is an IP address? I am tempted to say I should just do whatever the PCI scanner accepts, but what is really the correct common name to use? Has anybody run into this issue before?

    Read the article

  • Is this form of cloaking likely to be penalised?

    - by Flo
    I'm looking to create a website which is considerably JavaScript-heavy, built with backbone.js, with most content being passed as JSON and loaded via Backbone. I just need some advice or opinions on the likelihood of my website being penalised for serving plain HTML (text, images, everything) to search engine bots and a JS front-end version to normal users. This is my basic plan for my site: the first request to any page will be HTML, which will give only about 1/4 of the page, and thereafter the last 3/4 is loaded with Backbone JS. Therefore non-JavaScript users get a 'bit' of the experience. Once a new user has visited and been detected to have JS, a cookie is saved on their machine, and requests from then on are AJAX only. Example:

      if (AJAX || HasJSCookie) {
        // Pass JSON
      }

    Search-engine-served content: the entire AJAX loading experience is stripped if, for example, a Google bot is detected; the same content is served, but all as HTML. I thought about just allowing search engines to index the first 1/4 of the content, but as I'm concerned about internal links and picking up every bit of content, I thought it would be better to give search engines the entire content. I plan to do this by just checking against a list of user agents to know whether it's a bot or not:

      if (Bot) {
        // Serve plain HTML
      }

    In addition, I plan to use clean URLs for the entire website despite the full AJAX, so providing AJAX content at www.example.com/#/page and normal HTML at www.example.com/page is kind of out of the question. I'd rather avoid the practice of using # when technology such as HTML5 pushState is around. So my question is really just asking the opinion of the masses on whether it's likely that my website will be penalised. And can you suggest an alternative that avoids the 'noscript' method?

    Read the article

  • Highlight current page tab [migrated]

    - by Jose David Garcia Llanos
    I am making a website, and I am currently setting up highlighting for the current page's tab. One particular page is not highlighting its tab, and I have checked the code about 5 times but I can't find anything wrong with it. The website is auto-sal.es. Here is the code:

      style.css:

        body#home a.hometab, body#cars a.cartab, body#feedback a.feedtab, body#contact a.contacttab, body#members a.memberstab {background: #7D0000;}

      contactus.html:

        <body id="contact">

      navigation:

        <ul id="menu">
          <li><a href="index.html" target="_self" class="hometab">Home</a></li>
          <li><a href="cars.html" target="_self" class="cartab">Cars</a></li>
          <li><a href="feedback.html" target="_self" class="feedtab">Feedback</a></li>
          <li><a href="contactus.html" target="_self" class="cotacttab">Contact Us</a></li>
          <li><a href="members.html" target="_self" class="memberstab">Members</a></li>
        </ul>

    Again, the issue is that it is not highlighting the tab for Contact Us.

    Read the article

  • Preventing indexing duplicate content by search engines

    - by umesh awasthi
    I am in the process of migrating my old domain (www.oldurl.com) to a new domain (www.newurl.com). Almost all of the content and the URL structure, as well as the database, are the same; only a few URLs differ, and the main difference will be the domain name. I have made entries in Apache's .htaccess file to set up 301 redirects, and I have currently blocked all search engines from crawling my new domain via the robots.txt file. I am not sure how I will handle the duplicate content issue when I make the new domain go live. Should I block search engines from indexing/crawling my old domain? I am new to this field and not sure whether this is actually a duplicate content issue or not.
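
    For reference, the kind of blanket 301 described usually comes down to a few lines in the old domain's .htaccess (a sketch assuming mod_rewrite, with oldurl.com/newurl.com as in the question):

      # Send every request on the old domain to the same path on the new domain
      RewriteEngine On
      RewriteCond %{HTTP_HOST} ^(www\.)?oldurl\.com$ [NC]
      RewriteRule ^(.*)$ http://www.newurl.com/$1 [R=301,L]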

    Read the article

  • Is it safe to block redirected (but still linked) URLs with robots.txt?

    - by Edgar Quintero
    I have a website where all the URLs have been optimized and 301 redirected from nasty URLs to clean ones. However, throughout the site the unclean URLs are still linked everywhere: in menus, content, products, etc. Google currently has all the clean URLs indexed, along with a few unclean URLs too. So the old URLs are still linked all over the site (ideally this wouldn't be the case, but this is how it is at the moment). I would like to block the unclean URLs with robots.txt. The question: if I block these unclean URLs with robots.txt while the entire website still links to them (but they all redirect to the clean version), will this affect the indexing status at all?

    Read the article

  • SSL Certificate Works in Monit - But Not in Keystore

    - by Bart Silverstrim
    I have a situation where there's a keystore file with the various root/intermediate certificates stored in it in a way that seems to work for most browsers. The problem is that when mobile browsers hit the site, there's a break in the chain and they complain. I used the SSL checker at http://www.sslshopper.com/ssl-checker.html and it states that "The certificate is not trusted in all web browsers. You may need to install an Intermediate/chain certificate to link it to a trusted root certificate." So... the desktop browsers must already have the intermediate certs and can make the chain connections, I'm assuming, while the mobile browsers can't. The thing is that I had used Portecle to export certificates from the keystore and cobble them together to create a .PEM certificate to run the Monit utility. When I check that application with the SSL checker, it works fine! The person who originally created the keystore said he couldn't follow the SSL provider's directions for creating the keystore, because he had created the CSR using openssl, so the cert and private key had to be converted to DER format and imported with importkey to get them to work; according to the directions he found online, importkey seems to use only a fixed keystore file as its output, and it would erase anything already in that file if it existed. So is there a way to take the certificate I created for Monit and create a working keystore for the Tomcat website? What would be causing the chain to be broken in the current keystore, but work for Monit? I have the SSL cert provider's intermediate and cross certificates, and the website's certificate, but what else would I need to create a working chain of certs for a keystore?

    Read the article

  • What liability concerns do advertising vendors raise, and how can I address them?

    - by Beofett
    One of the websites I administer wants to provide free advertising in the form of direct links to vendors at an event they are running. Up until now, there has been no advertising whatsoever on the site (or any of our other sites). The site is for a for-profit business. The idea of implicit endorsement of any vendors we advertise has been raised, which brought up the question of what we need to do, if anything, to protect ourselves from any potential problems such endorsement might create. I know that many sites have clauses in their Terms of Service that state that (in a nutshell) they are not responsible for any problems or grievances between the visitors to the site and any vendor advertised or linked. Are there other steps that a website typically takes when considering advertising, such as getting the advertiser to provide some sort of certification that their ad will not violate any trademarks or copyrighted material?

    Read the article

  • 410 Responses when your CMS host doesn't support them?

    - by leeand00
    Sending a 410 response for a page that no longer exists should make Google stop crawling that page. The site I am working on has recently been migrated, and very little of the content was migrated. I've already turned the existing content into 301 redirects (the content that is on both the old and the new site), but now I would like to flush the old content from Google's memory by serving 410 responses in its path when it returns to crawl it and would otherwise find a 404. However, I asked our CMS host about it, and they said that our CMS does not support 410 responses. Is there some other way to produce a 410 response, like making a dead link 301 redirect to a page that declares a 410 in the form of a meta tag?
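
    For comparison, on a host that does support it, a 410 is normally a one-line web-server directive rather than anything inside the page itself. An Apache sketch with placeholder paths:

      # mod_alias form: answer 410 Gone for a single retired page
      Redirect gone /old-page.html

      # mod_rewrite form: answer 410 Gone for a whole retired section
      RewriteEngine On
      RewriteRule ^old-section/ - [G,L]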

    Read the article

  • Problem with user generated content

    - by grasshopper
    In general, what do you think is better with regard to adding content to a site: should I allow users to add content and provide a flag button so it can be reported if it doesn't fit the site, or should only I add the content and remove that option? It will be a small site, but I don't know if I'll manage to scan the site constantly or deal with the flags; on the other hand, I'm worried that the site won't move forward because there will be a lot less content. Thoughts?

    Read the article

  • 500 error on Joomla website

    - by Rachel Sparks
    PHP Fatal error: Call to a member function setQuery() on a non-object in /home/josh/public_html/administrator/components/com_jfusion/plugins/phpbb3/forum.php on line 226

    I just moved over to a new server. Does anyone have ideas as to what is wrong? Is this a database issue? Line 226 is in the following block:

      //get permissions for all forums in case more than one module/plugin is present with different settings
      $db = & JFusionFactory::getDatabase($this->getJname());
      $query = "SELECT forum_id FROM #__forums WHERE forum_type = 1 ORDER BY left_id";
      $db->setQuery($query); //226
      $forumids = $db->loadResultArray();

    Read the article

  • Root Domain Redirects Incorrectly To Https instead of to WWW

    - by Ari
    TL;DR - Why do visits to my website's homepage work without "www", but not visits to specific pages on it? I recently moved my website (Zappable.com) to a new web host, Red Hat's OpenShift (a PaaS). It requires using CNAME records to set up custom domains, something my domain name registrar (1&1) does not support without a hosting plan. So instead I set up CloudFlare between my domain and web host, and set up a CNAME record there. I then pointed a 1&1 "www" sub-domain to CloudFlare, and pointed my 1&1 root domain to the "www" sub-domain. This works fine for visits to my homepage, but for some reason it does not work when visiting a specific page without "www". Instead of adding "www", it goes to HTTPS, which is strange.

    Read the article

  • SEO when loading items through AJAX

    - by Qmal
    Let's say I have the standard scenario of a commerce site with categories on the left and items on the right. What I would like to do is this: when a user clicks on a category, its ID is passed to the JS, the JS gets all the items for that ID from the API, and they are loaded very prettily into my content area. It looks all cool and pro, but what is the situation from an SEO point of view? AFAIK the Google bot enters my site, sees that I have a span with categories, and that's all?
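
    A sketch of the click-to-load flow described above, assuming jQuery; the /api/items endpoint, the markup hooks and the /category/ URLs are hypothetical. Giving each category a real URL via pushState (and serving the same list as HTML at that URL) is what usually keeps such a setup crawlable:

      $('#categories').on('click', 'a', function (e) {
        e.preventDefault();
        var id = $(this).data('category-id');

        // fetch the items for this category and render them into the content area
        $.getJSON('/api/items', { category: id }, function (items) {
          var html = items.map(function (item) {
            return '<li>' + item.name + '</li>';
          }).join('');
          $('#content').html('<ul>' + html + '</ul>');
        });

        // give the state a real, linkable URL such as /category/12
        history.pushState({ category: id }, '', '/category/' + id);
      });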

    Read the article
