Search Results

Search found 10444 results on 418 pages for 'macbook pro retina'.


  • Tracking the state of a one-time event on a big website

    - by Mattis
    Assume a website with 250 million active users. I add a new feature to the website, and once a user visits I want to show a short tutorial that teaches them how to use said feature. I only want them to complete the tutorial once (or actively click it away). What is the smart way to code the verification check for this? How do I track the progress in the database? Having a separate table with something like NewTutorial_completed = 1 for user_id = 21312315 would just snowball. It also feels intuitively bad to check every one-time event for every user on every page view. While writing the question I got one idea: keep a separate event log that is checked periodically for any new action the user needs to see or perform. I push events to this log, and once they are completed they are removed from the log. No need to store NewTutorial_completed = 1-type flags this way. I am sure this is a common problem, and I would appreciate any input on what best practice is.
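
    A minimal sketch of the event-log idea described in the question, assuming a hypothetical user_events table and a PDO connection (the names are placeholders, not an established schema). A row exists only while an event is still pending for a user, so the common-case check returns nothing and stays cheap:

      <?php
      // Hypothetical schema: user_events(user_id BIGINT, event VARCHAR(64), PRIMARY KEY (user_id, event)).
      // A row exists only while the event is still pending for that user.

      function pendingEvents(PDO $pdo, int $userId): array {
          $stmt = $pdo->prepare('SELECT event FROM user_events WHERE user_id = ?');
          $stmt->execute([$userId]);
          return $stmt->fetchAll(PDO::FETCH_COLUMN);   // usually empty
      }

      function completeEvent(PDO $pdo, int $userId, string $event): void {
          $stmt = $pdo->prepare('DELETE FROM user_events WHERE user_id = ? AND event = ?');
          $stmt->execute([$userId, $event]);           // drop the row once the tutorial is finished or dismissed
      }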

    Read the article

  • Recovering a lost website with no backup?

    - by Jeff Atwood
    Unfortunately, our hosting provider experienced 100% data loss, so I've lost all content for two hosted blog websites: http://blog.stackoverflow.com and http://www.codinghorror.com (Yes, yes, I absolutely should have done complete offsite backups. Unfortunately, all my backups were on the server itself. So save the lecture; you're 100% absolutely right, but that doesn't help me at the moment. Let's stay focused on the question here!) I am beginning the slow, painful process of recovering the website from web crawler caches. There are a few automated tools for recovering a website from internet web spider (Yahoo, Bing, Google, etc.) caches, like Warrick, but I had some bad results using it: my IP address was quickly banned from Google, I got lots of 500 and 503 errors and "waiting 5 minutes…" responses, and ultimately I could recover the text content faster by hand. I've had much better luck using a list of all blog posts, clicking through to the Google cache and saving each individual file as HTML. While there are a lot of blog posts, there aren't that many, and I figure I deserve some self-flagellation for not having a better backup strategy. Anyway, the important thing is that I've had good luck getting the blog post text this way, and I am definitely able to get the text of the web pages out of the Internet caches. Based on what I've done so far, I am confident I can recover all the lost blog post text and comments. However, the images that go with each blog post are proving…more difficult. Any general tips for recovering website pages from Internet caches, and in particular, places to recover archived images from website pages? (And, again, please, no backup lectures. You're totally, completely, utterly right! But being right isn't solving my immediate problem… Unless you have a time machine…)
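
    One way to semi-automate the click-through-and-save approach described above is sketched below; it assumes the usual http://webcache.googleusercontent.com/search?q=cache:<url> form of Google's cache URLs and a hypothetical posts.txt file listing one post URL per line, and it throttles hard, since hammering the cache is exactly what got the automated tools banned:

      <?php
      // Sketch only: save the Google-cached copy of each known post URL as a local HTML file.
      $posts = file('posts.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
      @mkdir('recovered', 0755, true);

      foreach ($posts as $i => $url) {
          $cacheUrl = 'http://webcache.googleusercontent.com/search?q=cache:' . urlencode($url);
          $html = @file_get_contents($cacheUrl);
          if ($html !== false) {
              file_put_contents(sprintf('recovered/%04d.html', $i), $html);
          }
          sleep(30);   // be far gentler than a crawler, or expect 503s and an IP ban
      }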

    Read the article

  • Website URL suddenly disappeared from Google search results

    - by Ragavendran Ramesh
    I have a website in which a particular page URL was indexed in the first 10 Google search results, but it suddenly disappeared; now that page is not even in the top 100 results. What could be the reason? I have a feeling the page has been spammed by our competitors. Is it possible to avoid that, or to find out whether the page has been spammed or not? Is it possible to tell whether a particular page on a website is spammy or malicious?

    Read the article

  • Domains with similar names and issues

    - by abel
    I recently purchased one of those domain names like del.icio.us. While registering I found that delicious.com was already in use. Argument: delicious.com belongs to the same category as my to-be website; it serves premium delicious dishes. Counter-argument: my to-be domain, though belonging to the same category, specializes in serving free but delicious dishes, or in giving out (affiliate) links to other sites serving premium delicious dishes. Additional counter-arguments: 1. delicious.com is not in English. 2. The del.icio.us in my domain name, though having the same spelling, is not going to be used in the same fashion. For example (this may not make sense, because the names have been changed), the "d" in delicious on my website actually stands for the Greek letter delta (Δ/δ), and since internationalized domains are still not easily typable, I am going for the English equivalent. The prefix holds importance for the theme of the service which my website intends to offer. My question: Can I use the domain name del.icio.us for my website? How are these kinds of matters dealt with? (The domain names used are fictitious. I have already registered the domain but have not started using it. I chanced upon this domain name because it was short, easy to remember and suited the theme of my website.)

    Read the article

  • Facebook share and Open Graph

    - by hannit cohen
    I'm managing a video blog. The blog contains a main YouTube video on the homepage and several thumbnail images of other videos elsewhere. I have a share button for the main page and the single post pages, and have added Open Graph tags to specify the image to use. For some reason Facebook ignores my Open Graph image and uses other images it finds in the page... The header looks like this (for the homepage):

      <!-- Facebook Opengraph -->
      <meta property="fb:app_id" content="155967927783206" />
      <meta property="og:url" content="http://mttv.co.il/2010/12/%d7%9e%d7%a4%d7%92%d7%a9-%d7%a1%d7%99%d7%99%d7%a2%d7%95%d7%aa-%d7%92%d7%a0%d7%a0%d7%95%d7%aa-%d7%9c%d7%93%d7%95%d7%a8%d7%aa%d7%99%d7%94%d7%9d-1959-2010-%d7%91%d7%9e%d7%a2%d7%9c%d7%95%d7%aa/"/>
      <meta property="og:title" content="???? ?????? ????? ???????? 1959-2010 ??????" />
      <meta property="og:description" content="???? ????? ?? ?????? ????? ?????? ????????? ???? ?????? 1959 ??? 2010 ?????? ????? ??????? ???' ??? ??? ????? ???? ??????? ???????? ?????? ?&quot;? ???? ??? ?&quot;? ????? ?????? ????? ???? 08.12.10 " />
      <meta property="og:type" content="article" />
      <meta property="og:image" content="http://www.mttv.co.il/wp-content/uploads/2010/12/Gvi91UEjCAw_mid-135x77.jpg" />

    The website address is: http://mttv.co.il. Any help will be appreciated.

    Read the article

  • Point domain to 3rd Party DNS

    - by PhilCK
    I have a few domain names and a rather simple website (small-company type thing). We are in the process of having a web designer create a new website for us, but I don't want to give him access to the control panel for the domain names (and there seems to be no way to limit what he could do with it), while at the same time I don't want to be the go-between guy for the settings. Is there a way, or a service, for me to point the domains at a third-party DNS system that I can then give the web designer access to, without worrying that he can find my personal info or try to transfer my domains out? Thanks.

    Read the article

  • 301 redirecting a blog's RSS feed URL?

    - by Marc Charbonneau
    I moved my personal blog from WordPress to Ghost this weekend, which changes the RSS feed URL from /feed/ to /rss/. By default Ghost returns a 301 redirect for /feed/, which I've verified by checking the response header and looking at the logs. In Feedly, though, new posts aren't being picked up (at least after 24 hours; I'm not sure if they have a waiting period before updating the URL). What's the correct thing to do in this situation? Do I need to keep /feed/ alive instead of returning a 301? If so, is there a rewrite rule that would let me do this in nginx instead of having to modify the Ghost source code?
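
    If Ghost sits behind nginx (the usual setup), one possible way to keep /feed/ alive is sketched below; it assumes the common configuration where Ghost listens on 127.0.0.1:2368 inside an existing server block, so the old feed path returns the /rss/ content with a 200 instead of Ghost's 301:

      # Hypothetical fragment of the blog's existing nginx server block.
      location = /feed/ {
          proxy_pass http://127.0.0.1:2368/rss/;
      }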

    Read the article

  • How to Configure Name Servers using Webmin on an Unmanaged CentOS VPS

    - by John
    I want to configure my site's name servers and all the related stuff, but I'm not able to find any good documentation on the steps to do it straightforwardly, without getting into the nitty-gritty of it. I wish I could afford a managed VPS; I feel like I'm the odd one out looking for this documentation. I've followed the docs at these places: http://www.webtop.com.au/blog/how-to-setup-dns-using-webmin-2009052848 , http://www.beer.org.uk/bsacdns and https://www.virtacoresupport.com/index.php?_m=knowledgebase&_a=viewarticle&kbarticleid=134

    Read the article

  • Managing OpenX campaigns for several advertisers

    - by Mauricio Scheffer
    We're running our own OpenX instance with several advertisers. Each month, the advertiser's campaign should reset its available impressions, based on whatever plan the advertiser paid for. I think this is a pretty common scenario. The question is: how do you manage this? Do you expire the old campaign and create a new one with the new number of impressions? Or do you just add the impressions to the existing campaign?

    Read the article

  • What will be the impact on SEO if we remove our SSL certificate (URLs become http instead of https)?

    - by pixeline
    For some weird reason, our domain's content is returned for any HTTPS request sent to any of our server's hosted domain names. https://domain.com leads to our website, with a proper SSL certificate (so, no warning). https://domain2.com, also hosted on our server but without an SSL certificate, leads to a certificate warning and, if accepted, to our website's content! The problem is that any search for our keywords in Google shows these "fake websites" on top of ours, with the warning and all. It seems unsolvable, so we are thinking about switching back to non-secure HTTP. I'm just afraid of losing whatever indexing we have. How can I avoid that? Thanks, a.

    Read the article

  • FTP file access problem

    - by Fahad Uddin
    I recently got malware on my website. I have made a backup of the website on my computer and am trying to wipe the files off my FTP space. I am trying to delete the root folder but am getting this error message on all of the malicious files: Response: 550 Could not delete index.php: Permission denied. I am the sole administrator of the FTP account, so permissions should not be an issue. My host provider doesn't seem to suffer from this problem, as his websites are running fine without any malware. I have also tried changing the root folder's permissions to 777 to see if that would let me delete the files, but I still get the same error. Please help out. Thanks
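
    A common cause of this symptom is that the malicious files were created by the web server user (for example www-data) rather than by the FTP account, so the FTP user does not own them and cannot remove them. One hedged workaround is to let PHP do the deleting, since it runs as that same web server user: upload something like the sketch below (the infected-folder path is a placeholder), open it once in a browser, then delete the script itself immediately:

      <?php
      // Sketch: recursively delete a directory tree while running as the web server user.
      function removeTree(string $dir): void {
          $items = new RecursiveIteratorIterator(
              new RecursiveDirectoryIterator($dir, FilesystemIterator::SKIP_DOTS),
              RecursiveIteratorIterator::CHILD_FIRST
          );
          foreach ($items as $item) {
              $item->isDir() ? rmdir($item->getPathname()) : unlink($item->getPathname());
          }
          rmdir($dir);
      }

      removeTree(__DIR__ . '/infected-folder');   // hypothetical target path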

    Read the article

  • PayPal PDT and IPN: how do they work?

    - by slow diver
    PDT: Payment Data Transfer means getting the data of the transaction that was made on the PayPal site so that you can fetch it on your own site, display it to the user, and perhaps store it in your database for archiving and tracking purposes. But I cannot exactly follow the documentation here. What I am not getting is this: "Once you have activated PDT, every time a buyer makes a website payment and is redirected to your return URL, a transaction token will be passed along as a "GET" variable to this return URL. In order to properly use PDT and display transaction details to your customer, you should fetch the transaction token, variable name "tx", and retrieve transaction details from PayPal by constructing an HTTP POST to PayPal. Your POST should be sent to https://www.paypal.com/cgi-bin/webscr. You must post the transaction token using the variable "tx" and the value of the transaction token previously received (e.g. "tx=transaction_token"), and the special identity token using the variable "at" and the value of your PDT identity token (e.g. "at=identity_token"). You will also need to append a variable named "cmd" with the value "_notify-synch", for example "cmd=_notify-synch", to the POST string." IPN: I have set up Instant Payment Notification through the settings, according to the documentation. This is basically logging into your PayPal account and enabling IPN while specifying a URL where the notification will be sent. It is used to complete an order so that the product can be shipped. What I did is set up a PHP page; I created a table, and whenever that page is called (or hit) it registers an entry in the table, so I know a notification came from PayPal. But it does not work either. What am I really doing wrong? The first thing I want to troubleshoot, though, is this: when the buyer pays the amount, he should be automatically redirected to my site. I have enabled this, but the automatic redirection just does not work; instead he is shown the URL as an option after the payment confirmation. Can someone guide me through how the PDT process goes? Where do I make the request for PDT: is it along with the very first request (the Buy Now button), or is it sent later? Addition: I found some good sample code for how everything should work, but it still does not work. I use this code for IPN: http://officetrio.com/modules/free-php-paypal-ipn-script.php . I am using this one for PDT: http://ykyuen.wordpress.com/2010/02/17/paypal-payment-data-transfer-sample-code/ (it uses SSL; I changed SSL to regular HTTP (copied the PayPal version), and it still does not work).
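
    Putting the PDT documentation quoted above into code, a minimal sketch of the return-URL page could look like this (the identity token is a placeholder; the flow is: read the "tx" GET parameter PayPal appends to the return URL, POST it back to https://www.paypal.com/cgi-bin/webscr together with "cmd=_notify-synch" and the "at" identity token, then parse the response, which begins with the word SUCCESS when the lookup works):

      <?php
      // Minimal PDT sketch for the return URL page, based on the documentation quoted in the question.
      $identityToken = 'YOUR_PDT_IDENTITY_TOKEN';   // placeholder: the PDT identity token from your PayPal profile

      $body = http_build_query([
          'cmd' => '_notify-synch',
          'tx'  => isset($_GET['tx']) ? $_GET['tx'] : '',
          'at'  => $identityToken,
      ]);

      $context = stream_context_create(['http' => [
          'method'  => 'POST',
          'header'  => "Content-Type: application/x-www-form-urlencoded\r\n",
          'content' => $body,
      ]]);

      $response = file_get_contents('https://www.paypal.com/cgi-bin/webscr', false, $context);

      if ($response !== false && strpos($response, 'SUCCESS') === 0) {
          // The rest of the response is key=value lines (payer_email, mc_gross, ...);
          // parse them here, show the details to the buyer, and store them if needed.
      }

    Note that the redirect back to the site mentioned in the question is governed by a separate PayPal profile setting (Auto Return); if the buyer only sees the return URL offered as a link after payment, that setting is worth double-checking alongside PDT.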

    Read the article

  • PCI compliance when using third-party processing

    - by Moses
    My company is outsourcing the development of our new e-commerce site to a third-party web development company. The way they set up our site to handle transactions is by having the user enter the necessary payment info, then passing that data to a third-party merchant that processes the payment, then completing the transaction if everything is good. When the issue of PCI DSS compliance was raised, they said: "You won't need PCI certification because the client's browser will send the sensitive information directly to the third-party merchant when the transaction is processed. However, the process will be transparent to the user because all interfaces and displays are controlled by us. The only server required to be compliant is the third-party merchant's, because no sensitive card data ever touches your server or web app." Even though I very much trust and respect the knowledge of our web developers, what they are saying is raising some serious red flags for me. The way the site is described, I am sure we will not be using a hosted payment page like PayPal or Google Checkout offers (how could we maintain control over the UI if we were?). And while my knowledge of e-commerce is laughable at best, it seems like the only other option for us would be to use XML Direct to communicate with our third-party merchant for processing. My two questions are as follows: 1. Based on everything you've read, is "XML Direct" the only option they could conceivably be using, or is there another method I don't know of which they could be implementing? 2. Most importantly, is it true our site does not need PCI certification? As I understand it, using the XML Direct method means that we do have to be PCI DSS certified, and the only way around getting certified is through a hosted payment page (i.e. PayPal).

    Read the article

  • Disqus-like comment server

    - by wxs
    Hi all, I'm looking at setting up a blog, and I think I want to go the static website compiler route, rather than the perhaps more conventional Wordpress route. I'm looking at using blogofile, but could use jekyll as well. These tools recommend using disqus to embed a javascript comment widget on blog posts. I'd go that route, but I'd rather host the comments myself, rather than use a third party. I could certainly write my own dirt-simple comment server, but I was wondering if anyone knew of one that already exists (of the open source variety). Thanks!

    Read the article

  • SEO Keyword Research

    - by James
    Hi everyone, I'm new at SEO and keyword research. I am using Market Samurai as my research tool, and I was wondering if I could ask for your help to identify the best keyword to target for my niche. I do plan on incorporating all of them into my site, but I wanted to start with one. If you could give me your input on these keywords, I would appreciate it. This is all new to me :) I'm too new to post pictures, but here are my keywords (Searches, SEO Traffic, and SEO Value are per day):

      Keyword | Searches | SEO Traffic | PBR | SEO Value | Average PR / Backlinks of Current Top 10
      1       | 730      | 307         | 20% | 2311.33   | 1.9 / 7k-60k
      2       | 325      | 137         | 24% | 822.94    | 2.3 / 7k-60k
      3       | 398      | 167         | 82% | 589.79    | 1.6 / 7k-60k

    I'm wondering if the PBR (phrase-to-broad) value of #1 is too low. It seems like the best value because the SEOV is crazy high; that is like $70k a month. #3 has the highest PBR, but also the lowest SEOV. #2 doesn't seem worth it because of the PR competition; it might be a little too hard to get onto the top page of Google. I'm wondering which keywords to target, and if I should be looking at any other metric to see if this is a profitable niche to jump into. Thanks.

    Read the article

  • Share on Facebook does not show thumbnail images

    - by matt_tm
    I have a PHP application which has a "Share on Facebook" button that, on the development server, shows the thumbnail images correctly and allows the user to select between them; on the live server, it does NOT show the thumbnail images at all. The relevant portion of the .htaccess file is:

      # Set up caching on media files for 2 days
      <FilesMatch "\.(gif|jpg|jpeg|png|flv)$">
          ExpiresDefault A172800
          Header append Cache-Control "public"
      </FilesMatch>

    I'm using the exact same set of PHP files and .htaccess, but the server configuration is different. What could be causing this? Note that the text appears fine. Edit 1: We are also doing some URL rewriting related to images in the .htaccess (on both servers):

      ...
      RewriteRule ^.*/content/image/(.*)$ content/image/$1 [L]
      ...
      RewriteRule ^.*/images/(.*)$ images/$1 [L]
      ...

    Would that somehow be making a difference? Images appear fine all throughout the site. (I posted this question earlier as http://stackoverflow.com/questions/4142597/share-on-facebook-does-not-show-thumbnail-images)

    Read the article

  • Log oddities: 404s for client-garbled image URLs

    - by Chris Adams
    I've noticed some odd 404s which appear to come from broken URL-rewriting code. Our deep zoom view generates image URLs like this: /media/204/service/dzi/1/1_files/7/0_0.jpg. I see some requests (well under 1%) for slightly altered URLs: /media/204/s/rvice/d/i/1/1_files/7/0_0.jpg. These requests come from IP addresses all over the world (US, Canada, China, Russia, India, Israel, etc.), from desktop and mobile users with multiple user agents (Chrome, IE, Firefox, Mobile Safari, etc.), and there is plenty of normal activity in the same session, so I'm assuming this is either widespread malware or some broken proxy service. I have not seen them for anything other than images, which suggests that this may be some sort of content filter. Has anyone else seen this? My CDN logs show the first request on June 8th, ramping up from several dozen to several hundred per day.

    Read the article

  • Are copyright notices really required?

    - by Alasdair
    Ever since I made my first web page 13 years ago I have followed the pattern of showing a copyright notice in the footer of each page. Over the years the format of this notice has changed in the following ways:

      Copyright © <NAME> yyyy. All rights are reserved.
      Copyright © <NAME> yyyy
      © yyyy <NAME>
      © <NAME>

    This has generally mirrored the format used by Google. However, I recently noticed that they no longer display a copyright notice on their home page, nor have one in their source code/meta tags. I see they still display it on most (if not all) other pages. I understand that Google is very keen to keep the word count down on its homepage, which could be the reason for this sacrifice, but my question is more general and relates to all websites. Since I've always just done it out of habit, I'm hoping someone can explain if/when a copyright notice is actually required to protect your content and rights. Also, when it is required, is there a format the notice must adhere to in order to be valid?

    Read the article

  • A record DNS, nameserver help

    - by Josip Gòdly Zirdum
    I just installed Kloxo on my VPS and I want to link my domain to that server... which it sort of already is: I made it connect via an A record, and that works (the IP is pointing to my server), but how do I make a website using it? I tried adding the domain, but this doesn't work... I feel I'm not explaining this well. On my server it asked me to create a DNS template, so I did, and I created the nameservers ns1.mydomain.com and ns2.mydomain.com. Then I added the domain to the panel: mydomain.com. I go to the folder it creates for it, but no matter what file is there it won't work... any ideas? Is there a way to possibly not even add a domain to Kloxo and just treat the IP of the server as the domain, since the A record points there anyway? I don't intend to host another website on the server anyway...

    Read the article

  • Content theft - Where can I go from here?

    - by Toby
    I am the webmaster of a very successful blog in a fairly small niche. Recently our success has started to bite us, with people copying posts from the site without consent and trying to pass them off as their own work. Most sites stop as soon as you contact them, but there is one in particular, a Blogger site, that persists in passing off our content as its own. Every post we find we report to Google, and they have been fairly good at taking the posts offline within a day or two, but this isn't good enough as a long-term solution. Given the nature of what is being blogged about, after 24 hours the post is pretty much useless, so I need some way to just stop them from taking our content. Any ideas? I don't want to go down the route of using a third party for people to get our RSS feed, but I guess that is one option?

    Read the article

  • Multiple values for a specific custom variable in Google Analytics

    - by Nicola Pacini
    We're trying to get an answer to this question: is it possible to set more than one value for a custom variable in Google Analytics at page level? E.g.: _gaq.push(['_setCustomVar',3,'Tag','Custom Variables',3]); We'd like to track the most popular tags on a website that publishes news, articles and other content. Content is categorized (each piece of content belongs to one category) and tagged (1 or more tags for each article). So we'd like to apply this code: _gaq.push(['_setCustomVar',3,'Tag','Custom Variables',3]); _gaq.push(['_setCustomVar',3,'Tag','Google Analytics',3]); in a page that shows an article with these two tags assigned. What do you think? Honestly, I didn't find anything about this in the documentation from Google or on other example sites. Many thanks! Nicola

    Read the article

  • Should a link validator report 302 redirects as broken links?

    - by Kevin Vermeer
    A while ago, sparkfun.com changed their URL structure from /commerce/product_info.php?products_id=9266 to /products/9266. This is nice, right? We don't need to know that it is (or was) a PHP page, and commerce, product_info, and products_id all tell us that we're looking at some products. The latter form seems like a great improvement. However, the change would have broken existing links, so, nicely, they stuck in 302 redirects. Visit http://www.sparkfun.com/commerce/product_info.php?products_id=9266 and your browser will issue GET /commerce/product_info.php?products_id=9266 HTTP/1.1, to which Sparkfun's servers reply HTTP/1.1 302 Found with Location: http://www.sparkfun.com/products/9266. This 302 redirect is caught by Stack Exchange's link validator as a broken link. It's not broken; it works just fine. Here, try it: http://www.sparkfun.com/commerce/product_info.php?products_id=9266. I understand that a 302 redirect is intended to be a temporary redirect, while a 301 should be used for permanent changes, per RFC 2616. That said, Wikipedia and common practice use it as a redirect anyway. Who is in error in this situation? Is this an error in Sparkfun's redirect implementation or in Stack Exchange's URL validator?

    Read the article

  • IE6 Support - When to drop it? [closed]

    - by Scott Brown
    Possible Duplicate: Should I bother supporting IE6? I'm thinking about IE6 support and when to give up on it. Do you have a percentage-of-total-visitors figure in mind for when to drop support? Would you let a trend develop past this figure, or are you just going to take the first opportunity? I've seen a 44% drop in IE6 visitors in the past 12 months, from 23%(ish) of visitors down to 13%(ish). Even if it were 5%, it still seems too early to me to drop support (it's still 1 in every 20 users). What are people's thoughts on this?

    Read the article

  • Website development: where to start? [closed]

    - by hopefulLLl
    Hello everyone. I am a computer science student and I know the C language. I want to learn how to make websites but don't know how to go about it. I have learned some HTML and am currently learning CSS from www.w3schools.com. Can anyone tell me what I should learn next, and which languages I need to learn to start making websites? Please also point me to study material if you can. Thanks. Also, how long will it take me to make some nice websites?

    Read the article

  • Are Meta tags useful for SEO?

    - by Lynda
    Reading the search results for this or a similarly phrased question can lead to a lot of conflicting answers. Are there any meta tags that matter in SEO? From what I have read, I know that meta keywords are no longer used (or carry so little weight that they are not worth bothering with). Meta description tags are not used for page ranking, but they can affect click-through rates, so they should be used and kept under 160 characters. I know the following meta tags exist:

      author - the author's name and possibly email address
      robots - to allow or disallow indexing by robots
      copyright - the copyright date of the page

    How much do these meta tags matter, and are there other meta tags (including newer ones that may not be widely used yet, or that are used by only one of the big players like Google or Bing) that should be included? Note: even if a meta tag doesn't matter for SEO but helps with click-through rates, as the description tag does, feel free to include it in your answer.

    Read the article
