Search Results

Search found 9717 results on 389 pages for 'it pro'.

Page 237/389 | < Previous Page | 233 234 235 236 237 238 239 240 241 242 243 244  | Next Page >

  • 100% APC Fragmentation - Cacherouter & Pressflow install

    - by granttoth
    My APC cache has 100% fragmentation. I'm not quite sure I understand what is going on here. For testing I jacked the available memory up to 512 MB. After a day the total available free memory shows 73%, but I still have 100% fragmentation. Would you gurus please look at my settings and offer your advice? I have read suggestions to disable apc.stat when possible, but when I do, the site crashes. I am using the Pressflow build of Drupal 6 with the Cacherouter module installed. Edit: (added screenshot) http://i.imgur.com/DqZEX.png
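
    For reference, a sketch of the apc.ini directives that usually come up when chasing fragmentation; the values shown are illustrative only, not the asker's actual settings (those are only in the screenshot):

        ; shared memory available to APC (the asker raised this to 512 MB for testing)
        apc.shm_size = 512M
        ; check file mtimes on every request; the asker reports the site crashes with this off
        apc.stat = 1
        ; lifetimes for cached entries; frequent expiry and re-insertion of
        ; differently sized entries is a common source of fragmentation
        apc.ttl = 0
        apc.user_ttl = 0
        apc.gc_ttl = 3600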

    Read the article

  • Will using HTTPS hurt my site's SEO or other statistics?

    - by yannbane
    I've set up a WordPress blog. Since I have to log into it from many different locations/machines, I've also got an SSL certificate and set up Apache to redirect HTTP to HTTPS. It all works, but I'm wondering whether that's overkill. Since most people who visit my site don't have to log in, I'm starting to wonder whether HTTPS has some drawbacks. If so, should I look for a way to make HTTPS optional?
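
    For context, a minimal sketch of the kind of Apache redirect the asker describes; the server name is a placeholder, not taken from the question:

        <VirtualHost *:80>
            ServerName example.com
            # send every plain-HTTP request to the HTTPS version of the same site
            Redirect permanent / https://example.com/
        </VirtualHost>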

    Read the article

  • Is there a way to setup Clicktale tag in Google Tag Manager?

    - by Cubius
    Since GTM doesn't support the document.write() method, the standard ClickTale code doesn't work. Is there a workaround for this? A ClickTale employee sent me these instructions: replace the document.write JS line with the following: document.body.appendChild(externalScript); Example:

        <!-- ClickTale Bottom part -->
        <script type='text/javascript'>
        var externalScript = document.createElement('script');
        var scrSrc = document.location.protocol == 'https:' ?
            'https://clicktalecdn.sslcs.cdngc.net/' : 'http://cdn.clicktale.net/';
        scrSrc += 'www11/ptc/xxx-xxx-xxx-xxx.js';
        externalScript.src = scrSrc;
        externalScript.type = 'text/javascript';
        document.body.appendChild(externalScript);
        </script>
        <!-- ClickTale end of Bottom part -->

    I am not sure what to do with this. Has anyone tried something like this?

    Read the article

  • In addition to Google's First Click Free, should you whitelist search engine bots past a paywall?

    - by tobek
    Our site has subscription-only pages - non-subscribed visitors see a snippet preview. As per Google's FCF requirements, for your first 5 hits to subscriber-only pages with .google. as the referrer, you see the full page. In addition to this, should we whitelist search engine bots so that they can index the full content? I assume this is not required for Google, which can use FCF to index our content, but what about other search engines? Is this considered cloaking? My gut says that whitelisting bots past the paywall is bad practice, but I wanted to confirm - any evidence or references would be amazing.

    Read the article

  • Is the structure of my site's navigation (via price/service tables) considered 'Duplicate Content' by Google?

    - by James Gadsby
    As I'm building my business website, I'm using service/price tables at the bottom of each service page to show customers and potential clients my other offerings. Given that there are 7 or 8 service pages, each with (according to Google) the same service descriptions below the original content for that service, would this count as duplicate content? If so, what could I do about it?

    Read the article

  • How to set up a local WebDAV server for use with GoodReader on the iPad? [migrated]

    - by confused-about-webdav
    I need to know how to set up a local WebDAV server on my PC so that GoodReader on the iPad can automatically sync with it over the local Wi-Fi network. I am really a rookie when it comes to setting up a web server and have tried various guides on the internet. I tried setting up a WebDAV server using IIS, forwarded the required ports and enabled WebDAV publishing, but GoodReader cannot find it on the local Wi-Fi network automatically, nor is it able to connect even after manually entering the credentials. So I'll be really grateful if someone who has successfully set up a WebDAV server for use with GoodReader can point me to how to do it.

    Read the article

  • Google Scholar Related Question

    - by Art
    I have just asked Google Scholar to collect papers from my personal web site: http://cs.uic.edu/~asmirnov/publications.html I was wondering if I did everything right: I submitted a request on the form provided on the Scholar web site, and I published the papers in PDF on my web site. Is there anything else needed for Google to index my site? Other questions: 1. The first link is not to just the paper, but to the whole issue. 2. Are there any tags to be added to my web site? If so, which ones, and how do I add them? 3. What are those export options available on the Google Scholar web site and how do they work? Thank you very much for being patient with me and my questions as well.
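
    On question 2, Google Scholar's inclusion guidelines describe a set of citation_* meta tags (the Highwire Press tags) that help it identify papers. A hedged sketch with placeholder values, since the asker's actual metadata is unknown:

        <meta name="citation_title" content="Title of the Paper">
        <meta name="citation_author" content="Smirnov, A.">
        <meta name="citation_publication_date" content="2012">
        <meta name="citation_pdf_url" content="http://cs.uic.edu/~asmirnov/papers/example.pdf">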

    Read the article

  • Is there a way to forward emails associated with a domain without a mail server?

    - by MeltingDog
    A client owns example1.com but wants to also purchase example2.com and have it point to their original site at example1.com. No problem there. But they also want any emails going to example2.com to be forwarded to their counterparts at example1.com. E.g. if someone emails [email protected] it will be forwarded to [email protected]. The only way I can think of doing this at the moment is to set up hosting for example2.com and then set up mail forwarders in cPanel. But this seems a bit excessive and costly. Does anyone know another, cheaper way of doing this?

    Read the article

  • Why are new pages not being indexed while old pages stay in the index?

    - by ZakGottlieb
    I currently have a site that was recently restructured, causing much of its content to be reposted and creating new URLs for each page. To avoid duplicates, all of the existing pages were added to the robots file. That said, it has now been over a week - I know Google has recrawled the site - and when I search for term X, it is still the old page that is ranking, with the new one nowhere to be seen. I'm assuming it's a cached version, but why are so many of the old pages still appearing in the index? Furthermore, all "tags" pages (it's a Q&A site, like this one) were also added to the robots file a few months ago, yet I think they are all still appearing in the index. Anyone got any ideas about why this is happening, and how I can get my new pages indexed?

    Read the article

  • Free web management control panel

    - by Thorn007
    Hey guys, I need some help. I will also apologize for not being able to be more specific. I'm looking for a specific web admin panel that uses a login page via port 2222 or 4444. This is not Vanilla Forums or any other forum. So the only way I know how to make this a legit question is to ask: what "free" control panels do you use to manage your web sites, meaning files and domains? Why do you use it? Where is it located?

    Read the article

  • How do I recover a site after WordPress' Automatic Update has failed?

    - by Metacom
    I manage several WordPress websites, and recently used the Automatic Upgrade feature successfully to bring them up to 3.1. However, on one of the sites, I received a 503 (I believe it was 503). After that I was presented with "The service is unavailable." whenever I tried to access any page, including the index page, wp-admin, etc. I had similar problems before when a WordPress site got stuck in maintenance mode, and all I needed to do was log in via FTP and rename or delete the .maintenance file. I tried that in this case, but it didn't do the trick. I am now presented with "Fatal error: Call to undefined function require_wp_db() in **\wp-settings.php on line 71". I can't figure out how to fix this problem, and I was wondering if anyone else had any ideas. Any suggestions are appreciated! Thank you for reading.

    Read the article

  • Comparisons of Javascript 'data grids'?

    - by Joe
    I've found plenty of questions between here and StackExchange of people asking for the 'best' data grid / data table, or one that has a particular feature, and plenty of lists out there (of various ages) listing the various data grid implementations ... but is anyone aware of any matrix of what features the various solutions implement? (eg, allow shift-click to select multiple; support checkboxes for selection; can update a regular table in-place; allow editing of cells; support websql or indexeddb for local caching; which browsers they support; infinite scroll; etc.) There's a generic 'javascript framework' comparison on wikipedia, which would be the sort of thing I'm looking for, but it doesn't go into detail on data grids. (which makes sense, as so many are extensions, not core features of those frameworks, and in the case of jQuery, there's lots of 'em.)

    Read the article

  • What causes Google Analytics tags to work on some machines but not on others?

    - by Dallas
    The title of this question says it all. I am trying to update my code from the deprecated _getTracker() method to _createTracker(), but am experiencing inconsistent results. I have tried the traditional and async methods using a JSP include, but they all have the same result. My pageviews, and those of others in the office, all show up in Analytics. I have tried various test cases, but the client's visits are just not registering at all, while mine show up. The client has tried on multiple machines, and I have walked through it with them step by step, so I know it's not just user error. I know that JavaScript being turned off will cause the tags not to work, but I am wondering what else might cause the tags to not be recognized. I would appreciate any and all ideas.
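
    For reference, the standard asynchronous ga.js snippet the asker refers to looks roughly like this (UA-XXXXX-Y is a placeholder account ID):

        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-XXXXX-Y']);
        _gaq.push(['_trackPageview']);

        (function() {
          // load ga.js asynchronously so the tracker cannot block page rendering
          var ga = document.createElement('script');
          ga.type = 'text/javascript';
          ga.async = true;
          ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
          var s = document.getElementsByTagName('script')[0];
          s.parentNode.insertBefore(ga, s);
        })();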

    Read the article

  • I think there is a problem with my URL encoding

    - by TheGateKeeper
    I took someone's advice and started encoding my image sources, but Google doesn't seem to be able to decode them. I probably did something wrong, because basically Google is taking the full path as the image's name. See this page as an example. If you go to the topmost thumbnail and do "Save as", you will see the path is not being decoded. Should I stop encoding, or am I doing it wrong? Should I encode only the image name itself? Thanks!
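
    For what it's worth, a small JavaScript sketch of the difference the asker may be running into; the directory and file name are made up for illustration:

        var dir  = '/uploads/photos/';
        var name = 'my picture (1).png';

        // encoding only the file name keeps the path separators intact
        var good = dir + encodeURIComponent(name);   // "/uploads/photos/my%20picture%20(1).png"

        // encoding the whole path also escapes the slashes, which a crawler
        // may then treat as part of one long file name
        var bad  = encodeURIComponent(dir + name);   // "%2Fuploads%2Fphotos%2Fmy%20picture%20(1).png"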

    Read the article

  • SEO - folder or file [closed]

    - by ErmSo
    Possible Duplicate: Should I use a file extension or not?
    I'm creating a website with a number of pricing options. Each price plan has its own page and there is also a comparison page. As far as SEO is concerned, which of the following is better, or does it not make a difference?
    Option one - folders:
        /pricing/plans
        /pricing/plans/free
    Option two - files:
        /pricing/plans.php
        /pricing/free-plan.php

    Read the article

  • Should I use Heroku or should I have my own SSL? [closed]

    - by user1744649
    Based on your experience, can you please advise what will be better for me?
    Issue: I build applications and there are 2 major constraints. 1. SSL is needed since I use Facebook APIs, so only Heroku is a good option. 2. My web components tend to hit the max_execution_time limit very often, since I pull a lot of data using the Facebook API.
    Future possible purposes of this site: 1. Will use more APIs from Google, Twitter, etc. in the future. 2. Might request donations. 3. Just a hobby.
    I have two options: 1. Create the web site on Heroku itself by converting all the PHP components to a background worker in Python using Django. 2. Don't use Heroku at all; do the complete hosting with GoDaddy (shared plan) and buy an SSL certificate so that I can use the Facebook APIs, etc.
    In this scenario, what do you suggest I do?

    Read the article

  • Google not showing any pages from my site in the index after three months [on hold]

    - by Alex Coisman
    Despite having a sitemap and using Google Webmaster Tools, it has been over 3 months and my site has not been added to the Google index at all. Here's the site: www.famouslefthandedpeople.com As far as I know, I have done everything correctly. However, there must be something I am overlooking that is preventing Google from indexing the site. I do not have a robots.txt file, so allow/disallow isn't the issue. Although the content of the site is sparse, it is original and not duplicated internally or externally so Panda/Penguin should not be a problem. I have reviewed the answers at Why isn't my website in Google search results? and I don't think it applies here. If it matters, I am using WordPress to create the site. What other factors should I be looking at in order to troubleshoot this?

    Read the article

  • Why am I getting this message "Some important page is blocked by robots.txt"?

    - by Rounak
    My site's URL is www.hackinguniverse.org. For some days now, Google Webmaster Tools has been showing a message that says "Some important page is blocked by robots.txt". My robots.txt is:

        User-agent: Mediapartners-Google
        Disallow:

        User-agent: *
        Disallow: /search
        Allow: /

        Sitemap: http://www.hackinguniverse.org/feeds/posts/default?orderby=updated

    For other information: I host this website on Blogger. I have some other sites too where I only included "/Search" in Disallow, as in this robots.txt file, but those sites are OK - I mean no message is showing on those. So why am I getting that message telling me that I have blocked some important page via robots.txt?

    Read the article

  • Simple mod_rewrite Question

    - by user5358
    Hello, I want everything that looks like this:
        /1/2/3/4/5/[...]
    to redirect to this:
        /index.php?u=/1/2/3/4/5/[...]
    unless the requested string is a specific file. So anything that doesn't have a "." in it, I want to redirect to "index.php?u=[...]". I'll then parse the URI segments in PHP to determine what the user is requesting. I've been looking around for how to do this, but I have only a very rough understanding of regular expressions and have been unable to find an example of how to do it. Thanks!
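
    A minimal .htaccess sketch of the rule being described, assuming mod_rewrite is enabled; untested against the asker's actual setup:

        RewriteEngine On
        # leave requests containing a dot (real files such as .css, .js, images) alone
        RewriteCond %{REQUEST_URI} !\.
        # hand everything else to index.php as a single "u" parameter
        RewriteRule ^(.*)$ index.php?u=/$1 [L,QSA]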

    Read the article

  • Google Analytics campaign advice

    - by Drewsdesign
    I am buying traffic from a broker, not a single source, and sending it to various landing pages. I would like to know the best way to structure a campaign so I can find out which referring site/URL is performing best (time on site, bounce rate, etc.). Should utm_campaign be the broker name and utm_source be the landing page name, or should it be the other way around? Also, what would be the best way to create a custom report showing each referrer's metrics by landing page? Thanks guys, I really appreciate any help on this.
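
    Either way, a tagged landing-page URL would look something like this; the names are placeholders, and this example happens to put the broker in utm_source and the landing page in utm_campaign:

        http://www.example.com/landing-page-a/?utm_source=broker-name&utm_medium=cpc&utm_campaign=landing-page-a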

    Read the article

  • What to use for an event listing site? [on hold]

    - by Vykintas
    I have a site which lets users buy and sell tickets, but it's built on WordPress and BuddyPress, so it's very heavy and messy. I would like to redo the whole site on something lighter, cleaner and more solid. The main functionality for users would have to be as follows:
        Register or log in via Facebook.
        Create events and sell tickets to them.
        See ticket sales statistics.
        Upload photos and associate them with events.
        Buy event tickets, print a PDF ticket.
        Comment on, favourite and like events.
    What would be your suggestions? A PHP framework? A CMS? A CMF? I must say that I'm a front-end dev, so building a system from scratch on my own would take a while. I'd be more interested in a "skeleton" app solution or something similar.

    Read the article

  • A relatively new blog seems to be getting very poor Google indexing

    - by Genadinik
    I have a new blog that is 2 months old. In the first few weeks, it was getting indexed nicely and my Google Webmaster reports showed that it was getting crawled and beginning to rank for some terms. Then, as I kept writing, the Google Webmaster report thinned out and showed fewer and fewer terms that this blog ranks for. Now there are only 4 terms, one of them being my name. Is there something I need to do to keep the old posts indexed and crawled? Thanks, Alex

    Read the article

  • Exclude a sub directory in a protected directory

    - by user1351358
    I need to exclude one folder from protection inside a directory protected with .htaccess. I put the .htaccess here: /home/mysite/public_html/new/administrator/.htaccess The directory that needs to be excluded from protection is: /home/mysite/public_html/new/administrator/components/com_phocagallery/ My .htaccess file:

        AuthUserFile "/home/mysite/.htpasswds/public_html/new/administrator/passwd"
        AuthType Basic
        AuthName "admin"
        require valid-user
        SetEnvIf Request_URI "(/components/com_phocagallery/)$" allow
        Order allow,deny
        Allow from env=allow
        Satisfy any

    I tried this, but it is not working for my purpose. I suspect my path to the excluded directory may contain a mistake. Please advise me. Thanks.
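
    One thing worth checking, offered only as a guess rather than a confirmed fix: the trailing "$" anchors the match to requests that end exactly in that folder name, so files inside the folder never set the variable. A sketch of the same line without the anchor:

        # match any request whose URI contains the gallery component's path
        SetEnvIf Request_URI "/components/com_phocagallery/" allow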

    Read the article

  • A drop in SERP after following webmaster guidelines [on hold]

    - by digiwig
    So here's a puzzle for all you SEO gurus out there. I recently launched my own site. I had target keywords which ranked very well for about 1 month, within the top five and even appearing in first place. In an attempt to maintain good positioning, I followed the guidelines by:
        adding robots.txt and an XML sitemap
        redirecting non-www to www
        redirecting index.php to the root domain
        adding an htaccess 301 redirect for old pages
        adding rich snippets
        creating a Google+ account and verifying my picture so it appears
        going through each of the webmaster issues with duplicate titles and meta descriptions and improving header tag document outlines
        even creating a few more blog posts to keep the content fresh and moving
    So now my website appears on page 2 for my target keywords - and all because I followed the guidelines. What is happening? I see competitors with stagnant content superglued to position 1.

    Read the article

  • Two different domains for specific languages pointing to one site

    - by user25599
    I am developing a client's blog and he needs it to be bilingual (English and Spanish). What he wants is for users to get to the content based on the domain, e.g. Jhon enters www.domain.com and gets the English version, and Juan enters www.elsenordominio.com to get the Spanish version. All content will be handled by PHP so that users and search engines only read the domain-related language. Do I need to use a header redirect or a 301? Is it bad for SEO? Will Google penalize me? I hope you guys can help me, and forgive me if my English is not good.

    Read the article
