Search Results


  • Ubuntu server MySQL remote access from MySQL Workbench

    - by goodseller
    I have a newly installed Ubuntu server running MySQL. After the basic configuration I edited the my.cnf file, commented out the bind-address line, confirmed the server starts, and added an iptables rule for port 3306. I also added privileges on the MySQL server as follows:

        GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'P@ssw0rd' WITH GRANT OPTION;
        FLUSH PRIVILEGES;
        exit

    However, after connecting from MySQL Workbench, no databases are shown, even though the login itself seems to succeed. Has anyone seen this, or can anyone help? Thx!
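
    A minimal checklist sketch for this kind of setup, assuming a stock Ubuntu MySQL layout (the paths and commands below are the usual defaults, not verified against this particular server):

        # /etc/mysql/my.cnf -- comment the line out or bind to all interfaces, then restart
        # bind-address = 127.0.0.1
        sudo service mysql restart
        sudo netstat -tlnp | grep 3306   # confirm mysqld listens on 0.0.0.0:3306, not 127.0.0.1

        -- from the mysql client: confirm the remote account exists and what it is allowed to see
        SELECT user, host FROM mysql.user;
        SHOW GRANTS FOR 'root'@'%';

    If the grants look right but Workbench still lists no schemas, an account entry with a more specific host pattern (or an anonymous-user entry) may be the one actually matching the connection, so checking SHOW GRANTS for the exact user/host pair Workbench reports usually narrows it down.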

  • Software for video subscription service

    - by Clinton Blackmore
    I'd like to sell instructional videos over the web. Primarily, I'd like users to subscribe to the site and get access to videos over the internet. Secondarily, I might sell DVDs for those who have poor internet connections or would like a physical copy, and possibly eBooks and the like in the future. Regarding the subscriptions:

    I'd like a system that automatically sends out e-mails when it is time to renew
    I'd like to be able to offer free trials
    Users without a free trial or subscription should not be able to access the content

    Incidentally, I plan to host videos on my current web host and move them to a CDN when volume (and capital) make this a good idea. While I have no intention of going crazy with DRM, it seems expedient not to link directly to the files -- how can I link to them indirectly? It would be nice to support multiple payment processors -- specifically, I'd like to avoid a PayPal-only approach. Are there any web applications (or plugins) you'd recommend for something like this? While I've set up and administered several web technologies, I've never done anything with e-commerce. I see there are possibilities like osCommerce, one friend recommends WordPress with plugins, and it appears that for any given CMS you can graft on components like this, although I imagine that not all are created equal. As I'm not tied to a particular web application (and open-source software that can run on a LAMP stack, where the P may be Perl, Python, or PHP, is preferable), I'd like to make a good choice at the beginning.

  • Bizarre image loading problem from apache2

    - by NateDSaint
    Users have complained a few times about seeing a bizarre set of pink or green stripes on our webpage. At first I thought there was a rash of video card outages, but then someone sent me a screenshot from their browser (IE8). I later saw the same thing, but with slightly different colors, in Chrome. Users have experienced this on their iPads and iPhones (iOS Safari) as well. Because I've optimized the site to cache images, the bad image stays around until you clear your cache, so once you do, it resolves itself. My assumption is that the transmission of the image is being cut off mid-stream and then cached that way, but I can't for the life of me figure out why. Here's what I've checked:

    Header length is being sent, and the transmission looks okay (wget sample below):

        wget http://www.superiorlivestock.com/templates/sla2/images/wallbg2.jpg
        --2012-04-05 08:46:00--  http://www.superiorlivestock.com/templates/sla2/images/wallbg2.jpg
        Resolving www.superiorlivestock.com (www.superiorlivestock.com)... [ip redacted]
        Connecting to www.superiorlivestock.com (www.superiorlivestock.com)|[ip redacted]|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 45926 (45K) [image/jpeg]
        Saving to: `wallbg2.jpg'

    Images are not being served gzipped (apache conf below):

        SetOutputFilter DEFLATE
        SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png)$ no-gzip dont-vary

    The site is www.superiorlivestock.com, and here's a sample of the bad page load: [screenshot omitted]. Is there something obvious I'm missing? Am I saving my images in the wrong format somehow?
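
    One hedged thing to try, since truncated images that then live on in caches often come from how Apache hands static files to the kernel rather than from the images themselves: turn off sendfile and memory-mapping for the affected directory and see whether the stripes stop appearing for new visitors. This is a sketch, not a confirmed fix for this site:

        # In the relevant <Directory> block or vhost; both are standard Apache core directives.
        # sendfile/mmap can serve stale or truncated file contents on some filesystems and virtualised hosts.
        EnableSendfile Off
        EnableMMAP Off

    If the truncation only shows up behind a proxy or load balancer, comparing the Content-Length header with the bytes actually received for an affected client (for example with curl -v -o /dev/null) would help narrow down where the stream is being cut.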

  • My site works with www.example.com but not example.com

    - by toocool
    This is the site that I am developing: http://www.juve-news.com/ - it works like this, but it doesn't when I try to open it without the www prefix; it gives me a 400 Bad Request. I have used another web host before; now I am trying a new one and I have to add some DNS entries like in the picture here: http://cloudcontrol.com/developers/documentation/add-ons/aliases/ I don't know what I have done wrong. If anyone knows what the problem could be, please give me a tip.
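
    A 400 from the bare domain usually means the request reaches a server that has no host entry for juve-news.com (only www.juve-news.com). Two hedged things to check, assuming the alias setup linked above: first, that both juve-news.com and www.juve-news.com are added as aliases and have matching DNS records (an A or ALIAS record for the bare domain, a CNAME for www); second, if the host will only serve the www name, a redirect from the bare domain along these lines:

        # .htaccess sketch -- only applicable if the platform serves the bare domain at all
        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^juve-news\.com$ [NC]
        RewriteRule ^(.*)$ http://www.juve-news.com/$1 [R=301,L]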

  • Web Hosting Backup/Disaster Recovery Plan - Which Company?

    - by Harry Muscle
    I've been asked to look after consolidating all of our various company websites onto one host and also to provide a disaster recovery plan in case the chosen host goes down, goes out of business, etc. We're most likely going to go with HostGator as our chosen host; however, I'm not sure who to pick for our backup host. HostGator uses cPanel and can produce regular full backups (i.e. including configuration) of all the sites we host. Ideally I'm looking for a solution where we can provide these backups to another company and, within a short period of time, they restore all the sites onto their servers and we're back up and running. The whole disaster recovery process has to be fairly straightforward from the point of view of what needs to be done in case I am unavailable to assist and no one else overly technical is available (i.e. "take these backup files, send them to this company, and ask them to do this"). Any suggestions on which company would be a good choice for this backup solution would be highly appreciated. Thanks, Harry
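
    Whichever backup host is chosen, the hand-off step can be scripted so a non-technical person never has to do it. A rough sketch, assuming cPanel full-backup archives land in the account's home directory under their default backup-*.tar.gz names (the hostnames and paths here are placeholders):

        #!/bin/sh
        # cron job on a machine outside the primary host: pull the newest full backup offsite nightly
        LATEST=$(ssh user@primary-host 'ls -t ~/backup-*.tar.gz | head -n 1')
        scp "user@primary-host:$LATEST" /srv/offsite-backups/
        # keep only the most recent 14 copies
        ls -t /srv/offsite-backups/backup-*.tar.gz | tail -n +15 | xargs -r rm --

    With the archives already sitting offsite, the written instruction reduces to "give the folder /srv/offsite-backups to the standby host and ask them to restore the cPanel full backups".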

  • URL rewrite and domain frame

    - by Dennis
    I have registered the domain www.posti.sh at nic.sh. The website lives on the server at www.myskoob.com/postish. Unfortunately, nic.sh does not support frames, i.e. forwarding where the address bar keeps showing posti.sh while the content comes from www.myskoob.com/postish - so I thought about a URL rewrite on the server. Unfortunately I have no idea how rewriting works - I am thankful for explanations - but I would also like to ask whether this is generally possible. What I need is:

    The server needs to recognize that the folder postish is being accessed
    Depending on the file that is opened, it needs to rewrite the URL to www.posti.sh/<filename here>
    The server needs to understand that a link to www.posti.sh/about.php maps to www.myskoob.com/postish/about.php, and likewise for other files - at the moment, when I type in posti.sh/about.php it redirects to http://www.myskoob.com/postishabout.php, which does not exist
    All this should work irrespective of whether the URL contains a "www" at the beginning or not
    A plus, but not necessary, would be that it does not display the .php extensions

    Would that generally be possible? If not, what would be the alternatives? If anyone knows how to do it, any code and/or way to do it would be much appreciated!
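
    Rewriting on myskoob.com by itself cannot change what the visitor's address bar shows; for the bar to read posti.sh, the DNS for posti.sh has to point at the myskoob.com server and that server has to answer for the name. A hedged sketch of the usual Apache setup (assuming access to the server's vhost configuration; the directory path is a placeholder):

        <VirtualHost *:80>
            ServerName posti.sh
            ServerAlias www.posti.sh
            # serve the existing postish folder as the document root for this domain
            DocumentRoot /var/www/myskoob.com/postish
        </VirtualHost>

        # optional, in a .htaccess file inside the postish folder: allow URLs without .php
        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME}.php -f
        RewriteRule ^(.*)$ $1.php [L]

    With that in place, posti.sh/about.php (with or without www) is served from the postish folder directly, so no per-file rewriting back and forth is needed; if only domain forwarding is available and the server cannot be configured, the alternatives are a masking frame or moving the site to a host where such a vhost can be set up.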

  • How can I cap cloud costs?

    - by Joe Simpson
    I am looking into launching a cloud-based product for consumers where, with a prepaid account, they can start a server with a simple click, load up the software, and access it remotely. The technical side of that I can manage, but I am worried about the costs escalating ridiculously high for both me and my customers. Is there a way I can:

    Limit how much each server can cost me before it is deactivated
    See how much a server is currently costing me (so I can deduct it from the customer's account)

    It needs to be extremely reliable, as I don't want there to be any possibility of ending up with a giant bill.
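
    Most providers only expose usage figures with some delay, so one common approach is to track accrued cost on your own side from each instance's start time and hourly price, and stop the instance once the customer's prepaid balance is exhausted. A minimal sketch of that bookkeeping (the shutdown call is a placeholder for whatever API the chosen provider offers):

        // Hedged sketch: per-server accrued cost plus a balance check.
        function accruedCost(startedAtMs, hourlyRate, nowMs) {
          // most clouds bill per started hour, so round up
          var hours = Math.ceil((nowMs - startedAtMs) / (1000 * 60 * 60));
          return hours * hourlyRate;
        }

        function checkServer(server, customer, nowMs) {
          var cost = accruedCost(server.startedAtMs, server.hourlyRate, nowMs);
          if (cost >= customer.prepaidBalance) {
            stopServer(server.id); // placeholder: provider-specific API call
          }
          return cost;
        }

    Running a check like this every few minutes caps the worst case at roughly one extra billing hour per server, but it is only as reliable as the job that runs it, so a hard spending alert on the provider account is still worth having as a backstop.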

  • How do I handle having too many links on a webpage because of my menu

    - by RandomBen
    I am developing a website that has a drop-down menu at the top of it. The menu has around 100 links in it that are repeated on every page. Every page also has some number of links below the menu that may or may not be in the menu itself. My issue is that Google says they generally don't like pages with more than 100 links on them. Is there any way to change the links in the menu so that they no longer "count" towards my maximum of 100 links? It seems like there should be an easy way to do this, but there really doesn't seem to be. rel=nofollow still counts towards the number of links on the page, at least according to Google, so what other options do I have? I looked into where the 100 comes from and found that it used to be documented here: http://www.google.com/support/webmasters/bin/answer.py?hl=en&answer=35769#2 but that is no longer the case. I found a more definitive, and frankly muddier, answer here: http://www.seomoz.org/blog/questions-answers-with-googles-spam-guru from Matt Cutts, from 2007. Long story short, in 2007 they still felt 100 links was a good number, but they stated you could go far beyond that. In fact, they said that pages with high PageRank could have 200-300. It did sound like having many links could reduce the PageRank of the page containing all of the links, or possibly of all of the items linked to. Also, I know IIS7's SEO Toolkit 1.0 suggests that pages should have no more than 250 links.

  • How to fix "[Errno 13] Permission denied" in Mailman mailing lists

    - by Michael
    After migrating domains from one Plesk server onto another, I get several of those mails every day (the target mailbox does not exist, so I receive them as undeliverable mail bounces):

        Return-Path: <[email protected]>
        Received: (qmail 26460 invoked by uid 38); 26 May 2012 12:00:02 +0200
        Date: 26 May 2012 12:00:02 +0200
        Message-ID: <20120526100002.xyzxx.qmail@lvpsxxx-xx-xx-xx.dedicated.hosteurope.de>
        From: [email protected] (Cron Daemon)
        To: [email protected]
        Subject: Cron <list@lvpsxxx-xx-xx-xx> [ -x /usr/lib/mailman/cron/senddigests ] && /usr/lib/mailman/cron/senddigests
        Content-Type: text/plain; charset=ANSI_X3.4-1968
        X-Cron-Env: <SHELL=/bin/sh>
        X-Cron-Env: <HOME=/var/list>
        X-Cron-Env: <PATH=/usr/bin:/bin>
        X-Cron-Env: <LOGNAME=list>

        List: xyzxyz: problem processing /var/lib/mailman/lists/xyzxyz/digest.mbox:
            [Errno 13] Permission denied: '/var/lib/mailman/archives/private/xyzxyz'

    I tried to fix the permissions myself, but the problem still exists.
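
    The error says the cron job, which runs as the list user, cannot read the private archive directory that was copied over from the old server. A hedged fix, assuming the Debian/Plesk-style layout shown in the bounce (the user and group names can differ per distribution):

        # let Mailman itself report and repair ownership/permission problems
        /usr/lib/mailman/bin/check_perms        # report only
        /usr/lib/mailman/bin/check_perms -f     # fix what it finds

        # or repair the archive tree by hand
        chown -R list:list /var/lib/mailman/archives/private
        chmod -R g+rwX /var/lib/mailman/archives/private

    check_perms ships with Mailman and is usually the safer option, since it knows which directories need the setgid bit and which group Mailman was installed with.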

  • Web developer has become uncooperative, what should we do to rescue our site? [closed]

    - by TOM
    What can an individual or a company do if the web designer who designed their website becomes completely uncooperative? In our case:

    He refuses to meet with us for discussions.
    He refuses to give us training on the effective use and management of the website he has designed for us (and has been paid in full for), despite this training being part of the original contract.
    Last year he disappeared for a number of months, refused to answer emails or phone calls, and was totally unavailable to help us.
    He refuses to give us the details of the hosting of our website.

    We have totally lost faith in this arrogant and unreasonable guy and would like to break off all relations with him, but it appears he's got us over a barrel.

  • How can I screen clients that try to register multiple times?

    - by Aba Dov
    My company offers a bonus to every client that registers. We would like to prevent people from abusing this by registering several times. We thought about filtering clients by:

    IP address (there is a problem with workplaces where all stations share the same IP)
    Cookies (if cookies are not allowed we might lose a client)

    I would like your opinions on these two methods and will be glad to hear about new ones. Thanks

  • Email provider - suggestions needed

    - by Christian Fazzini
    We are looking for a good way to handle email. First, we need to allow end-users to send emails directly to support and careers, i.e. support@domain_name_here.com and careers@domain_name_here.com. Second, we need to provide mailboxes to our staff, so each staff member has their own email address, e.g. joe@domain_name_here.com, meghan@domain_name_here.com, etc. Google Apps is one option we are considering; however, they charge $50 per user per year. Not so bad, considering the quality and the features they offer, but there are also cheaper alternatives, e.g. my domain registrar offers an email plan for $20/year for 10 mailboxes, and GoDaddy has a number of plans that are still a lot more affordable than Google Apps. So far Namecheap and GoDaddy are the only ones I have looked at for email plans. Is it worth signing up with Google Apps, or are there better alternatives? Your thoughts?

  • Why are the tags on my WordPress site being indexed instead of the pages?

    - by Bernard
    I can't figure out why my tags are being indexed by Google instead of my actual posts. In Google, my posts are showing up as mysite.com/tags/post when of course I want them to look like mysite.com/category/actualpost. Any ideas what could be wrong? My domain is 3 years old and I just started a new focus for an existing site. I can't figure this out! There is no duplicate content, I have a sitemap submitted to Webmaster Tools, and a robots.txt... I have everything I need. This is the first time something like this has happened to me. Let me know if anyone has any ideas.
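
    One hedged thing to check on a WordPress setup like this is whether the tag archives are allowed to be indexed at all; if they are, Google sometimes prefers them over the posts they link to. Keeping tag (and other archive) pages crawlable but marked noindex usually shifts rankings back to the posts; most SEO plugins have a checkbox for this. As a cruder alternative, tag URLs can be kept from being crawled (note this stops crawling, not the indexing of URLs Google already knows):

        # robots.txt sketch -- assumes tag archives live under /tag/
        User-agent: *
        Disallow: /tag/

    Whichever route is used, it is also worth confirming that the posts themselves return 200 and appear in the submitted sitemap, since Google will fall back to whatever crawlable URL mentions the content.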

  • AdSense click bot is click-bombing my site

    - by Graham
    I have a site that gets roughly 7,000-10,000 page views per day right now. Starting around 1 AM on 7/1/12 I noticed the CTR rising dramatically. These clicks would be credited and then de-credited soon after, so they were obviously fraudulent clicks. The next day I had about 200 clicks in the account, with about 100 of them being fraudulent. It's about 3-8 per hour, evenly dispersed across each of the three ads, 24 hours a day. This leads me to believe that it's some sort of AdSense click bot. Also, I removed the ads last evening, then put them back up around 3 AM, and the invalid clicks started within 10 minutes. I signed up for statcounter.com to analyze the exit links on the AdSense units. Then I conditionally blocked ads for the IP address of the person/bot I suspected of doing this, but I think the bot has several proxies to choose from and can refresh IP addresses. I've notified Google through the invalid click form/email 4 times over the past two days to let them know I'm aware of the situation and am working on a solution. I've also temporarily removed all ads on that site. How can I block a bot like this? Thank you.

  • Retroactively applying a Piwik goal to visitors

    - by Andrew Aylett
    I started receiving a large (for me) amount of traffic on one of my pages yesterday. Today, I thought it would be useful to track goals from that page -- there's a link to my blog on it. I added a 'visited external link' goal to Piwik, and new visits are being recorded. However, it seems to me that there must be enough data in the database to retroactively apply the goal to past users -- is there a way to achieve that?

  • Unindexing my Tumblr blog's content and moving it to another Tumblr blog

    - by sam
    I've been writing a Tumblr blog for the past year or so and have written about 300 articles, but now I need to move the blog to another site (before, it was running under blog.mysite.com and I now want it to run under blog.mynewsite.com). I want to keep the archived articles and have them on the new site, so what I was hoping to do was export the blog from Tumblr, go into Webmaster Tools and remove all the blog's indexed URLs from Google, then make a new Tumblr blog and import the posts.

    Would Google see this as new content, since I've deleted their indexed copy?
    Could I just move the mapping of the Tumblr blog to the new subdomain? But in doing this I would lose all the PR, and it would still look like duplicate content.

    What's the best way to approach this?

  • Parsing Google site speed in Analytics

    - by Kevin Burke
    I'm having a hard time making heads or tails of the Site Speed graphs in Google Analytics. Our site speed fluctuates wildly from month to month, despite a large sample (the report is "based on 100,000's of visits") and a consistent web setup (static files served from an EC2 instance running nginx behind a load balancer). Here's our site speed, with each data point representing a week's worth of data: [graph omitted]. Over this time period we modified our source and HTTP headers to increase our cache hits on static resources by 5x. Why would it fluctuate so much? Is there any way to get more reliable information from these graphs?

  • Images within noscript

    - by Guilherme Nascimento
    Note: My question is not about JavaScript.
    Note: My question is about how to make the HTML accessible to search engines.
    Note: My question is not about hiding text; it is about blocking the loading of images in order to use LazyLoad.

    I tested various techniques for blocking the loading of images so I can apply a LazyLoad effect (I'm developing it in JavaScript), and the only efficient one was <noscript>. This is the HTML structure I would use; with LazyLoad, the loading of images is triggered via the viewport (the visible area of the website in the browser):

        <p>Lorem ipsum dolor sit amet,
          <span class="lazyload">
            <noscript><img src="foto-m0101.jpg" alt="image description"></noscript>
          </span>
          consectetur adipiscing elit.
        </p>
        <p>Lorem ipsum dolor sit amet,
          <span class="lazyload">
            <noscript><img src="foto-m0201.jpg" alt="image description"></noscript>
          </span>
          consectetur adipiscing elit.
        </p>
        <p>Lorem ipsum dolor sit amet,
          <span class="lazyload">
            <noscript><img src="foto-m0301.jpg" alt="image description"></noscript>
          </span>
          consectetur adipiscing elit.
        </p>

    Is this a bad practice for search engines? If it is a bad practice, could you give an example of good practice? If there is any other issue with images inside noscript, forgive me - I did not find any existing questions about noscript with images.
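
    For context, a minimal sketch of the JavaScript side this markup implies (the class name matches the example above; everything else is an assumption about how the lazy-loading script might work, not a finished implementation):

        // When a .lazyload span comes near the viewport, copy the <img> markup out of
        // its <noscript> child, so the image only loads for JavaScript users on demand.
        function lazyLoadCheck() {
          var spans = document.querySelectorAll('span.lazyload');
          var limit = (window.innerHeight || document.documentElement.clientHeight) + 200;
          for (var i = 0; i < spans.length; i++) {
            var noscript = spans[i].querySelector('noscript');
            if (noscript && spans[i].getBoundingClientRect().top < limit) {
              // browsers with JS enabled expose <noscript> content as plain text
              spans[i].innerHTML = noscript.textContent || noscript.innerText;
            }
          }
        }
        if (window.addEventListener) {
          window.addEventListener('scroll', lazyLoadCheck, false);
        } else {
          window.attachEvent('onscroll', lazyLoadCheck);
        }
        lazyLoadCheck();

    Because the real <img> tags sit inside <noscript>, crawlers that do not execute JavaScript still see them, which is the main reason this pattern is usually considered reasonably safe rather than cloaking.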

  • Is this form of cloaking likely to be penalised?

    - by Flo
    I'm looking to create a website which is considerably JavaScript-heavy, built with Backbone.js, with most content being passed as JSON and loaded via Backbone. I just need some advice or opinions on the likelihood of my website being penalised for serving plain HTML (text, images, everything) to search engine bots and a JS front-end version to normal users. This is my basic plan for the site: the first request to any page returns HTML which gives only about 1/4 of the page, and the remaining 3/4 is then loaded with Backbone.js, so non-JavaScript users get a 'bit' of the experience. Once a new user has visited and been detected as having JS, a cookie is saved on their machine, and requests from then on are AJAX only. Example:

        If (AJAX || HasJSCookie) {
            // Pass JSON
        }

    Search engine content: that entire AJAX loading experience is stripped if, for example, a Google bot is detected; the same content is served, but all as HTML. I thought about just allowing search engines to index the first 1/4 of content, but as I'm concerned about internal links and picking up every bit of content, I thought it would be better to give search engines the entire content. I plan to do this by checking against a list of user agents and deciding whether the visitor is a bot or not:

        If (Bot) {
            // Serve plain HTML
        }

    In addition, I plan to use clean URLs for the entire website despite it being fully AJAX, so serving AJAX content at www.example.com/#/page and normal HTML at www.example.com/page is rather out of the question; I would rather avoid the practice of using # when technology such as the HTML5 push state is around. So my question is really just asking the opinion of the masses: is it likely that my website will be penalised? And can you suggest an alternative that avoids the 'noscript' method?

  • What do I do if a user uploads child pornography?

    - by Tom Marthenal
    If my website allows uploading images (which are not moderated), what action do I take if a user uploads child pornography? I already make it easy to report images, and have never had this problem before, but am wondering what the appropriate response is. My initial thought is to:

    Immediately delete (not just make inaccessible) the image
    File a report with the National Center for Missing and Exploited Children with all the information I have on the user (IP, URL, user-agent, etc.), identifying myself as the website operator and providing contact information
    Check any other images uploaded by that IP/user and prevent them from uploading in the future (completely preventing this is impossible, but I can at least block their account)

    This seems like a good way to be responsible in reporting, but does it satisfy all of my legal and moral responsibilities? Would it be better not to delete the image and to just make it inaccessible, so that it can be sent to the National Center for Missing & Exploited Children, the police, the FBI, etc.?

  • Remove IP address from the URL of a website using Apache

    - by sapatos
    I'm on an EC2 instance and have a domain, domain.com, linked to the EC2 nameservers, and it happily serves my pages if I type domain.com in the URL. However, when a page is served, the URL resolves to 1.1.1.10/directory/page.php. Using Apache, I've set up the following VirtualHost, following the examples provided at http://httpd.apache.org/docs/2.0/dns-caveats.html

        Listen 80
        NameVirtualHost 1.1.1.10:80
        <VirtualHost 1.1.1.10:80>
            DocumentRoot /var/www/html/directory
            ServerName domain.com
            # Other directives here ...
            <FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf)$">
                Header set Cache-Control "max-age=290304000, public"
            </FilesMatch>
        </VirtualHost>

    However, I'm not getting any change in how the URL is displayed. This is the only VirtualHost configured on this site, and I've confirmed it's the one being used, as I've managed to break it a number of times whilst experimenting with the configuration. The Route 53 entries I have are:

        domain.com  A    1.1.1.10
        domain.com  NS   ns-11.awsdns-11.com ns-111.awsdns-11.net ns-1111.awsdns-11.org ns-1111.awsdns-11.co.uk
        domain.com  SOA  ns-11.awsdns-11.com. awsdns-hostmaster.amazon.com. 1 1100 100 1101100 11100
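
    The VirtualHost itself never rewrites what appears in the address bar; if visitors end up on 1.1.1.10/directory/page.php, something is redirecting or linking to the IP - typically an application setting (a base URL configured as the IP) or a self-referential redirect built from the incoming address rather than from ServerName. Two hedged things to try, sketched under the assumption that the vhost above is otherwise correct:

        # 1. make Apache build self-referential URLs from ServerName, not from the incoming address
        UseCanonicalName On

        # 2. catch any request that still arrives under the bare IP and send it to the domain
        <VirtualHost 1.1.1.10:80>
            ServerName 1.1.1.10
            Redirect permanent / http://domain.com/
        </VirtualHost>

    If the IP keeps appearing with these in place, the next place to look is the application's own configuration (for example a hard-coded site URL), since Apache cannot change URLs that the application writes into its pages.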

  • Google Analytics iframe code measuring one visitor as two visitors

    - by Maarten
    I'm trying to measure visitors in an iframe and in the site containing the iframe. What I would like is for visitors' clicks in the iframe to be seen as coming from the same visitor as on the containing site, but somehow they are counted as two separate visitors. I followed the examples from http://www.blastam.com/blog/index.php/2011/02/google-analytics-cross-domain-tracking/, trimmed down to an even simpler version based on the comments about setDomainName not being needed anymore, but with setDomainName I get the same result: a click on a page and a click in the iframe are counted as 2 clicks by 2 separate visitors. This is the code in my iframe:

        if (_gaq && gaAccount.length > 0) {
          _gaq.push(['_setAccount', gaAccount]);
          _gaq.push(['_setAllowLinker', true]);
          //_gaq.push(['_setDomainName', 'none']);
          _gaq.push(['_trackPageview', 'mytestcountername']);
        }

    And this is the code in the containing page:

        <script type="text/javascript">
          var _gaq = _gaq || [];
          _gaq.push(['_setAccount', 'UA-9605474-4']);
          _gaq.push(['_setAllowLinker', true]);
          //_gaq.push(['_setDomainName', '.domain.nl']);
          _gaq.push(['_trackPageview']);
          (function() {
            var ga = document.createElement('script');
            ga.type = 'text/javascript';
            ga.async = true;
            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
            var s = document.getElementsByTagName('script')[0];
            s.parentNode.insertBefore(ga, s);
          })();
        </script>
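
    One thing _setAllowLinker alone does not do is actually pass the visitor's cookies into the iframe: with the legacy ga.js tracker, the containing page has to decorate the iframe URL with the linker parameters so the tracker inside the iframe reuses them instead of starting a new visitor. A hedged sketch of that step (the iframe id and URL below are placeholders):

        // on the containing page, after the _gaq setup above
        _gaq.push(function() {
          var tracker = _gat._getTrackerByName();            // the default tracker created above
          var frame = document.getElementById('statsFrame'); // hypothetical iframe element
          frame.src = tracker._getLinkerUrl('http://otherdomain.nl/iframe-page.html');
        });

    This only helps when the iframe is on a different domain than the parent; if both are on the same domain, setting the same _setDomainName value in both snippets (rather than 'none') is usually what makes them share one visitor.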

  • Rich snippets ignored by Google [closed]

    - by Thoir Fáidh
    Possible Duplicate: Why would Google Rich Snippets work for one site author but not another?

    I'm facing a problem here. I added rich snippets (microdata) to the website, but Google ignores all of them, even though the testing tool parses them and doesn't detect any errors. I've read that Google ignores microdata in hidden fields. Unfortunately this is partially the case, since I use jQuery to interact with the content, but nevertheless it is not hidden everywhere, and I believe Google should recognise at least the microdata that is permanently visible to the user. Am I missing something here? It is now about 3 weeks since I updated the website with rich snippets.

  • Strategy for managing lots of pictures for a website

    - by Nate
    I'm starting a new website that will (hopefully) have a lot of user-generated pictures. I'm trying to figure out the best way to store and serve these pictures. The CMS I'm using (Umbraco) has a media library that puts a folder on the server for each image; inside of it you can have different sizes of that same image. That folder has an ID on it, and the database has additional information for that image along with the ID of the folder. This works great for small sites, but what if the pictures get up to 10,000, 100,000 or 1,000,000? It seems like the lookup on the directory would take a long time to find the correct folder. I'm on Windows 2008 if that makes a difference. I'm not so worried about load: I can load balance my server pretty easily and replicate the images across the servers. The nature of the site won't put a lot of users on it either, but it could have a lot of pics. Thanks. -Nate

    EDIT: After some thought, I think I'm going to create a directory for each user under a root image folder, then keep each user's pictures under that. I would be pretty stoked if I had even 5,000 users, so that shouldn't be too bad of a linear lookup. If it does get slow, I will break it down into folders like /media/a/adam/image123.png. If it ever gets really big, I will expand the above method to build a bigger tree. That would take a LOT of content though.
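
    The /media/a/adam/... idea generalises nicely if the shard folders are derived from a hash of the username rather than its first letter, since hashing spreads users evenly even when many names start with the same character. A language-agnostic sketch of the idea (shown here in JavaScript; the two-level depth and folder width are arbitrary choices, not a recommendation for this CMS):

        // Build a sharded path such as media/3f/a2/adam/image123.png
        var crypto = require('crypto');

        function mediaPath(username, filename) {
          var hash = crypto.createHash('md5').update(username).digest('hex');
          return ['media', hash.slice(0, 2), hash.slice(2, 4), username, filename].join('/');
        }

        console.log(mediaPath('adam', 'image123.png'));

    With two hex characters per level there are 256 buckets at each of the two levels, so even a million users leaves only a handful of user folders per bucket, and the path for any image can be recomputed from the username alone, with nothing extra stored in the database.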

  • Canonicalization of single, small pages like reviews or product categories

    - by Valorized
    In general I pretty much like the idea of canonicalization, and in most cases Google explains the possible procedures in a clear way. For example: if I have duplicates because of parameters (e.g. &sort=desc), it's clear to use the canonical for the page, provided within the head tag. However, I'm wondering how to handle "small" - not to say thin-content - sites. What's my definition of a small site? An example: on one of my main sites, we use a directory-based URL structure:

        example.com/ (root)
        example.com/category-abc/
        example.com/category-abc/produkt-xy/

    Moreover, we provide one page that includes all products:

        example.com/all-categories/ (lists all products the same way as in the categories)

    In the case of reviews, we use a similar structure:

        example.com/reviews/product-xy/ (shows all reviews for one certain product)
        example.com/reviews/product-xy/abc-your-product-is-great/ (shows one certain review)
        example.com/reviews/ (shows all reviews for all products, latest first)

    To make it even more complicated: on every product page, the latest 2 reviews appear at the end of the page. So you see, a lot of potential duplicates.

    Q1: Should I create canonicals for:
    a) example.com/category-abc/ to example.com/all-categories/
    b) example.com/reviews/product-xy/abc-your-product-is-great/ to example.com/reviews/product-xy/ or to example.com/reviews/
    or none of them?

    Q2: Can I link the collection of categories (all-categories/) and the collections of reviews (reviews/ and reviews/product-xy/) to the single category and the single review respectively? Example: example.com/reviews/ includes, let's say, 100 reviews. Can I somehow use markup that tells search engines: "Hey, wait, you are now looking at a collection of 100 reviews - do not index this collection, you should rather prefer indexing every single review as a single page!"? In HTML it might be something like this (which, of course, does not work; it's only to show you what I mean):

        <div class="review" rel="canonical" href="http://example.com/reviews/product-xz/abc-your-product-is-great/">HERE GOES THE REVIEW</div>

    Reason: I don't think it is a great user experience if the user searches for "your product is great" and lands on example.com/reviews/ instead of example.com/reviews/product-xy/abc-your-product-is-great/. On the first page, they will have to search and might stop because of frustration; the second result, however, might lead to a conversion. The same applies to categories: if the user is searching for category Z, they might land on the all-categories page and have to scroll down to the (last) category to find what they searched for (Z). So what's best practice? What should I do?
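
    For what it's worth, rel="canonical" is a page-level signal that lives in the <head>, so there is no supported per-<div> version of the Q2 idea; the usual substitutes are keeping the listing pages crawlable but pointing users (and internal links) at the detail pages, or marking the listing pages noindex,follow if they should never rank. A hedged sketch of the head tags that would mean, using the example URLs above:

        <!-- on example.com/reviews/product-xy/?sort=desc and similar parameter variants -->
        <link rel="canonical" href="http://example.com/reviews/product-xy/">

        <!-- on example.com/reviews/ if the single reviews should rank instead of the collection -->
        <meta name="robots" content="noindex,follow">

    Canonicalising example.com/category-abc/ to example.com/all-categories/ would tell search engines the category page is a duplicate of the all-products page, which it is not, so that particular canonical is probably best left out.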
