Search Results

Search found 5380 results on 216 pages for 'webmasters'.


  • Suggestions for a self-serve advertising service

    - by Mystere Man
    I am seeking a self-serve advertising service for my websites, but I have a few restrictions that make what I'm looking for hard to find. Specifically, I want to place "advertise here" links on my pages and allow end users to purchase advertising on that site, page, and location. These ads will not be part of a national network. My requirements:
    - Supports multi-tenancy. That is, I have a number of domains using the same web application but with customized content per domain. When a customer wants to advertise on a given domain, the ads will only appear on that domain and on that page of the domain (even though the page name may be the same across multiple domains).
    - Supports fixed ad prices, not just CPC. I need monthly and quarterly pricing regardless of performance.
    - Integrates with OpenX and other ad networks, so that if there is no self-serve ad in a given zone, it will fall back to national or direct advertising.
    Shiny Ads has much of this, but I'm looking for alternatives, as their prices are a bit crazy (20%) and they can only do PayPal.

    Read the article

  • Specific website not working on my connection [closed]

    - by Tsury
    For some reason, the website http://www.woopra.com/ doesn't work well (it shows as plain HTML, with no CSS/images/JS) on my computer or my Android tablet (they both share the same internet connection). It does, however, work on my WP 7.5 phone (over 3G). When I set up a Wi-Fi hotspot and use the tablet, it still doesn't work (I tried clearing the cache). I tried FF, Chrome and IE (multiple versions). What is going on here? I'm clueless! Thanks.

    Read the article

  • Guest blogging, reciprocal links & nofollow

    - by sam
    When I write a guest post for another site and link back to myself within that post, that counts as an inbound link. If I then write something on my own blog, like "have a look at this post I wrote for _", and link to it, the links would be reciprocal, correct? Thus canceling each other out. If I were to make the link back to my article a nofollow link, would I still get the link juice? And if I write a guest post and the site later wants to write a guest post on my blog, what is the best way to handle it, since won't those links also cancel each other out and have no effect?

    Read the article

  • Port forward based on external IP (for VPS hosting)

    - by Ben Alter
    What I want to do is host a VPS. First, I'd like to set up a static IP address that forwards to my home IP address (so I can have more than one IP coming into my house). How can I do this without contacting my ISP, and is it even possible? (I don't mind paying for something that does this.) Once I have the extra external IP address, how can I forward it to my VPS? How is my router supposed to differentiate between two separate external IP addresses?

    Read the article

  • Should I pass link juice to my pages on other websites that are already high PR domains?

    - by huzzah
    I am starting a new website for a local business and have entries listed for it on places like Urbanspoon, Yelp, Google+ Local, etc. I am thinking of linking to these citation sites from my business website to encourage visitors to go and review the business there. If I dofollow the links, I will pass link juice to my pages on those sites, but doesn't that mean that the very little PageRank I have will be leached away from me? Is it better to nofollow them?

    Read the article

  • Optimising website IP for location

    - by Liam Sorsby
    From my understanding of SEO, websites are optimised for the location their IP address resolves to. For example, if xxx.xxx.xxx.xx resolves to the UK, then you are more likely to get higher rankings in the UK than in the USA. However, my query is about CDNs: with a CDN you are storing a cached version of your website across multiple servers at strategic locations across the globe to reduce load time in the locations you're trying to target. Now, if you use a CDN and geo-locate the website URL, it only resolves back to the USA (where our IP address resolves to); it doesn't resolve to any other countries. As far as I know you can have multiple IP addresses (from different countries) resolving to one domain. Do CDNs really help to optimise the location of your website, or are they solely meant to optimise load time? Is there a better way to optimise for multiple countries with regard to the resolution of the IP address? Are VPNs, as per this post here, relevant to this? Any advice would be helpful.

    Read the article

  • Installing Ruby on Rails without access to command line

    - by Darwin
    I'm VERY new to this whole web dev thing, but I can program, and I liked Ruby when I used it before. Now I've got web hosting, a domain, and a site on there that's currently run under Joomla, but I'd like to experiment with Rails. The most access I can get to the server is FTP and maybe a setting here and there in the control panel. Definitely no command line. Is there a way to just, I don't know, upload Ruby on Rails to a folder and run it in a browser? That's how Joomla works, I think. Literally every article I read about this starts with "you just do sudo get..." mumbo jumbo.

    Read the article

  • Password protect an alias virtual directory

    - by Jason
    I have a main domain hosted through cPanel. I also have a sub-domain that I would like to appear as a path under the main domain instead of as a sub-domain. So I have http://example.com/ pointing to the main hosted files and http://example.com/mydir pointing to the subdomain files. This is achieved by an httpd.conf include in the main domain section that sets an alias:
    Alias /mydir /path/to/subdomain/files/
    Now, that works fine so far. The problem is that if a .htaccess file under /path/to/the/subdomain/files/ contains an error, the alias is completely skipped, and /mydir goes instead to the main host files. That is kind of surprising to me; I would expect an error to return an error instead. Now the killer: if I try to password protect /path/to/subdomain/files/, then trying to access http://example.com/mydir will again deliver from under the main hosted files and not from /path/to/subdomain/files/. I am not seeing any errors reported for the .htaccess file in the Apache error log, so I am assuming the .htaccess is valid:
    AuthUserFile /path/to/valid/readable/.htpasswd
    AuthName "Secure Access"
    AuthType Basic
    Require valid-user
    This kind of behaviour does not seem right to me. Is there something obvious that could be causing it? Or is this just the way it works? Perhaps using an alias is the wrong way to go?
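    One configuration sketch worth comparing against (not a diagnosis): the password protection can live directly in the httpd.conf include next to the alias, inside a <Directory> block, so it does not depend on the .htaccess file being picked up at all. The paths below are the placeholders from the question.
    Alias /mydir "/path/to/subdomain/files/"
    <Directory "/path/to/subdomain/files/">
        # Password-protect the aliased directory in the server config
        # instead of relying on a per-directory .htaccess file.
        AuthUserFile /path/to/valid/readable/.htpasswd
        AuthName "Secure Access"
        AuthType Basic
        Require valid-user
    </Directory>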

    Read the article

  • Why does Google not ignore the word "languages", although I have set it to be ignored in advanced search settings

    - by jitendra1234
    Why does Google not ignore the word "languages", although I have set it to be ignored in the advanced search settings? Here is the term I am using in the Google search:
    -languages site:http://en.wikipedia.org/wiki/
    And here is the first result, where the word "languages" is still present (you can do a quick Ctrl+F to find it):
    http://en.wikipedia.org/wiki/Walter_Bedell_Smith
    I am just curious to know why Google has not ignored the word "languages".

    Read the article

  • Getting link to abstract indexed in Google Scholar

    - by JordanReiter
    We have a large digital library with thousands of papers indexed in Google Scholar. We allow Google Scholar to index our PDFs but they're blocked unless you have a subscription. So Google has full-text indexing/searching of our PDFs (great!) but then the links point just to those PDFs (boo!) instead of the more helpful abstract pages. Does anyone know what could cause an issue like this? I am, to the best of my knowledge, following all of the guidelines laid out in their Inclusion Guidelines. Here's some example meta data:
    <meta name="citation_title" content="Sample Title"/>
    <meta name="citation_author" content="LastName, FirstName"/>
    <meta name="citation_publication_date" content="2012/06/26"/>
    <meta name="citation_volume" content="1"/>
    <meta name="citation_issue" content="1"/>
    <meta name="citation_firstpage" content="10"/>
    <meta name="citation_lastpage" content="20"/>
    <meta name="citation_conference_title" content="Name of the Conference"/>
    <meta name="citation_isbn" content="1-234567-89-X"/>
    <meta name="citation_pdf_url" content="http://www.example.org/p/1234/proceeding_1234.pdf"/>
    <meta name="citation_fulltext_html_url" content="http://www.example.org/f/1234/"/>
    <meta name="citation_abstract_html_url" content="http://www.example.org/p/1234/"/>
    <link rel="canonical" href="http://www.example.org/p/1234/" />
    example.org/p/1234 is the abstract page for the article; example.org/f/1234 is the fulltext link accessible to subscribers only (and to Google Scholar). example.org/p/1234/proceeding_1234.pdf is the fulltext PDF link.

    Read the article

  • If Variable = True show this, else show this [on hold]

    - by Tim Marshall
    If my variable $userLoggedIn is true, the <?php if ($userLoggedIn == 'true') : ?> block works perfectly; however, if the variable is false, the <?php if ($userLoggedIn == 'false') : ?> block does not work at all. How do I resolve this problem?
    <?php
    $EditingWagesPage = false;
    $userLoggedIn = false;
    ?>
    <!DOCTYPE html>
    <!--[if IE 8]> <html lang="en" class="ie8 no-js"> <![endif]-->
    <!--[if IE 9]> <html lang="en" class="ie9 no-js"> <![endif]-->
    <!--[if !IE]><!--> <html lang="en" class="no-js"> <!--<![endif]-->
    <!-- BEGIN HEAD -->
    <?php if ($userLoggedIn == 'true') : ?>
    <head>
    ** HEADER STUFF **
    </head>
    <body>
    ** BODY STUFF **
    </body>
    <?php endif;?>
    <?php if ($userLoggedIn == 'false') : ?>
    <head>
    </head>
    <body>
    hiya
    </body>
    <?php endif;?>
    </html>
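    The comparison is the likely sticking point: $userLoggedIn holds a boolean, but it is being compared against the strings 'true' and 'false', and in PHP's loose comparison a non-empty string such as 'false' behaves as true, so ($userLoggedIn == 'false') never matches an actual boolean false. A minimal sketch of the usual fix, testing the boolean directly (the markup is a placeholder):
    <?php
    $userLoggedIn = false;
    ?>
    <?php if ($userLoggedIn): ?>
        <!-- HEAD/BODY markup for logged-in visitors goes here -->
    <?php else: ?>
        <!-- HEAD/BODY markup for logged-out visitors goes here -->
    <?php endif; ?>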

    Read the article

  • My Sites Were Hacked. What To Do?

    - by Vad
    I host multiple domains with a very popular hosting provider, and I just went into one of my sites and... I see a black page with the message "Hacked by...". I checked, and all my sites with the provider are showing this same page. Inside the file system I can see the hacker placed default.* and index.* files with this message, overwriting all the index pages, and the new pages are under every, I say again, every folder. Cleaning this up will be close to a most horrible job. What should I do (right now I am awaiting the restore of files from the hosting provider)? How do I prevent this? Whom should I blame?

    Read the article

  • Google Analytics Content Experiments for non-simultaneous tests

    - by mnort9
    I really like how Google Analytics displays the results of content experiments. However, it seems the tool only works for simultaneous tests. I'd like to use the tool without implementing the page variation code on my site. For example, I want to test copy on an ecommerce category page. The original variation would be the current page over the past 2,500 visits; after making the copy changes, the new variation would be the next 2,500 visits. I realize I can simply record the metrics before and after each variation, but I'd like to take advantage of Google's presentation of the experiment. Is it possible to use Content Experiments in this way?

    Read the article

  • Working with different URL structures

    - by Dane411
    As I'm quite a newbie to this field, I have some doubts, including a few I couldn't resolve with Google. For example: If I'm not wrong, index.html makes it possible to avoid adding the filename to the URL, so www.example.com/ is equal to www.example.com/index.html, and that works for subdirectories as well, right (e.g. www.example.com/music/)? Is there any other way to achieve this without using an index.html file? (I've read something about converting dynamic URLs to static ones: ./?var1=value1&varN=valueN - ./value1/valueN.) Also, how can I convert www.example.com/music/ to music.example.com/, and why should that be used? Thanks in advance!
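    For the first two points, a minimal .htaccess sketch (assuming Apache with mod_rewrite; the var1/varN names are the placeholders from the question): DirectoryIndex is what makes a bare directory URL serve an index file, and a rewrite rule is the usual way to map a static-looking path onto a query string.
    # Serve index.html (then index.php) when a bare directory URL is requested.
    DirectoryIndex index.html index.php
    # Map /value1/valueN onto index.php?var1=value1&varN=valueN,
    # but only when the request is not a real file or directory.
    RewriteEngine On
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule ^([^/]+)/([^/]+)/?$ index.php?var1=$1&varN=$2 [L,QSA]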

    Read the article

  • What's the canonical process for backing up a website?

    - by Walkerneo
    This is going to sound terrible, but bear with me. I currently have a cron job that does a mysql dump, a git add-all and commit, and a git push to Bitbucket. I set this up almost a year ago, when I didn't know much about git, backups, and general web development and administration. I haven't had the time to fix this and do it properly, but the repo has now grown quite big from accumulating large temporary files from my forum, so now I have to do something and I want to do it properly this time around. What processes do semi-large websites and personal site admins use for backing up server content? Based on what I've learned since I set this up, what I'm currently thinking of doing is:
    - Making changes on a development domain and committing the code frequently
    - Archiving the entire site after a successful deployment from the development domain
    - Having automatic daily database and user-content backups
    I still like the idea of backing up SQL dumps with git, though. I know git isn't a backup tool and that this is beyond its purpose, but the textual queries that are exported would be easily managed by git and would save a lot of space in archives.

    Read the article

  • Question on business connections and page rank?

    - by Viveta
    I just want to ask this question to get a yes/no answer on something I've been wondering about lately. Countless sites now use nofollow, which makes it harder to get ranking for your page even if your website's information is useful and will get traffic but maybe isn't something your business connections share. What I am trying to find out is whether a bunch of, say, "likes" on your Facebook page passes any benefit to your main website, or whether the connections to your Facebook page pass nothing to your main page. Are you then competing with your own website in the SERPs, your Facebook page against your home page? Am I correct that if your Facebook page starts doing really well in terms of connections and likes (helping bump up its PageRank), and you have links on that page with certain optimized keywords, there is still no benefit to your website (other than people getting to your Facebook page and then being more likely to click through to your site)? I hope I explained what I am asking well. I just wanted to get a better picture of this so I know what to focus on as far as how I'll be linking to my desired landing pages in the future.

    Read the article

  • List of freely available SEO tools (software) for keyword rank checking? [closed]

    - by Craig
    Possible Duplicate: can anyone recommend a Google SERP tracker?
    Requirements:
    - Analysis of the site's positions for a list of keywords in different search engines
    - Tracking keyword positions in search engines, so I can see whether my keyword rankings have moved up or down
    - Creating reports
    I use Excel plus the Rank Checker addon for Firefox to analyze the position of the site in search engines for my keyword list. Are there any tools that are tested and working properly? Thanks.

    Read the article

  • Jokes search engine / PHP based search engine on database

    - by matt_tm
    I'm looking for a script, functioning like the Google homepage, that fetches data from a database rather than the internet. This is not intended to be a general search engine, but a repository of jokes that can be pulled up depending on the keywords typed. No sophisticated search techniques are required; keyword-based matching is perfectly fine. If some mechanism for up/down-voting jokes can be incorporated, that would be fantastic, but I'm presuming that will be an entirely different game.
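    For a sense of how small the keyword-matching core can be, here is a hedged PHP sketch, assuming a hypothetical MySQL jokes table with a FULLTEXT index on its body column (the table, column and credential names are all illustrative):
    <?php
    // search.php - hypothetical sketch of a keyword lookup over a jokes table.
    $pdo = new PDO(
        'mysql:host=localhost;dbname=jokes_db;charset=utf8mb4',
        'db_user',
        'db_password'
    );
    $stmt = $pdo->prepare(
        'SELECT id, body
           FROM jokes
          WHERE MATCH(body) AGAINST (:q IN NATURAL LANGUAGE MODE)
          LIMIT 20'
    );
    $stmt->execute([':q' => $_GET['q'] ?? '']);
    foreach ($stmt->fetchAll(PDO::FETCH_ASSOC) as $joke) {
        // Escape before output; an up/down-vote feature would key off the id column.
        echo htmlspecialchars($joke['body']) . "<br>\n";
    }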

    Read the article

  • Which CMS for photo-blog website?

    - by Gacek
    I need to add a photo-blog to a site that I've recently been working on. It is a very simple site, so the blog doesn't have to be very sophisticated. What I need is:
    - A CMS that allows me to create simple blog-like news posts with one (or more) images at the beginning and some description/comment below. Preferably, I would like to create something that works like these two sites: http://www.photoblog.com/dreamie or http://www.photoblog.pl/mending/
    - It must be customizable. I want to integrate its look as much as possible with the current page: http://saviorforest.tk
    - Preferably, it should provide some mechanism for uploading and storing images on the server.
    I thought about WordPress, but it seems to be a little bit too complicated for such a simple task. Do you know any simple and easy-to-use CMS that would work here?

    Read the article

  • Is it possible to mod_rewrite BASED on the existence of a file/directory and uniqueID?

    - by JM4
    My site currently forces all non-www pages to use www. Ultimately, I am able to handle all unique subdomains and parse them correctly, but I am trying to achieve the following (ideally with mod_rewrite): when a consumer visits www.site.com/john4, the server processes that request as www.site.com?Agent=john4. Our requirements are:
    - The URL should continue to show www.site.com/john4 even though it was rewritten to www.site.com/index.php?Agent=john4
    - If a file (of any extension) or a directory exists with that name, the entire process stops and it tries to pull that file instead. For example, www.site.com/file would pull up www.site.com/file.php if file.php existed on the server, and www.site.com/pages would go to www.site.com/pages/index.php if the pages directory exists.
    Thank you ahead of time. I am completely at a loss right now.
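    A hedged .htaccess sketch of one way to express those requirements (assuming Apache with mod_rewrite; only the Agent parameter and the example paths come from the question, the rest is illustrative):
    RewriteEngine On
    # If appending .php to the requested name yields a real file,
    # serve that file (www.site.com/file -> file.php).
    RewriteCond %{REQUEST_FILENAME}.php -f
    RewriteRule ^(.+)$ $1.php [L]
    # Leave existing files and directories alone
    # (www.site.com/pages is then handled by DirectoryIndex).
    RewriteCond %{REQUEST_FILENAME} -f [OR]
    RewriteCond %{REQUEST_FILENAME} -d
    RewriteRule ^ - [L]
    # Everything else is treated as an agent name: the browser keeps
    # showing /john4 while index.php receives Agent=john4.
    RewriteRule ^([A-Za-z0-9_-]+)/?$ index.php?Agent=$1 [L,QSA]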

    Read the article

  • Usefulness of the Backlinks shown in Webmaster Tools

    - by Ewan Heming
    Is the list of links for a site shown in Google Webmaster Tools a complete list or just a sample? I've noticed that the links in there appear to be all the ones I didn't think would have any real value, either because they were nofollow or from irrelevant sites. The few I did think would be of some use have never shown up, and there are also some links that are sometimes there and sometimes not (such as my LinkedIn profile). Does this mean that the missing links don't, or no longer, carry any value? It almost appears that the list is there for Google to either inform you about problems (there was a useful list there when someone tried to spam my site) or misinform you about which link-building strategies work (to keep people guessing about what works or not).

    Read the article

  • Webmaster Tools showing 404s for non-existent folder pages

    - by Jody
    Google Webmaster Tools is reporting some/many 404 URLs that don't exist on my site. The links are things such as domain.com/xyz/. That URL does 404, but domain.com/xyz/index.html does exist, and the "linked from" pages all show proper links to "/xyz/index.html". Why is Google even trying these URLs if they are not linked to? My real question: is there a way to have Google stop attempting to load these pages, and ultimately remove them from the crawl errors report? Thanks.
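    If the site runs on Apache, a small .htaccess sketch of two alternatives (use one or the other) that stop the bare directory URLs from returning 404, so the crawl errors eventually clear:
    # Option 1: serve /xyz/index.html whenever /xyz/ is requested.
    DirectoryIndex index.html
    # Option 2: 301-redirect /xyz/ to /xyz/index.html for existing directories,
    # so crawlers consolidate on the URL that exists.
    RewriteEngine On
    RewriteCond %{REQUEST_FILENAME} -d
    RewriteRule ^(.+[^/])/$ /$1/index.html [R=301,L]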

    Read the article

  • Clarity around Advanced Segment definition

    - by Btibert3
    I am hoping to get some clarity around an advanced segment I created. For context, our website spans multiple domains. For reasons I won't get into, I created an advanced segment that looks for pages containing my subdomain of interest (subdomain.site.com). I want to ensure that my interpretation of this advanced segment is accurate. Simply put, does it flag all visits to our entire domain that viewed at least one page on my subdomain of interest? If I am off, what does this advanced segment represent? Many thanks in advance!

    Read the article

  • How do sites avoid SEO issues / legalities with subdomain unique ids?

    - by JM4
    I was looking through a few websites recently and noticed a trend I'm not sure I understand. Sites are creating unique referral URLs for customers in the form of http://customname.site.com (if somebody were to use http://www.site.com/customname it would function the same way). I can see (using Google Chrome) that the sites are doing a 302 redirect at some point, with some sort of .htaccess redirect taking the subdomain name (customname), applying it as a referral parameter, and then keeping it in session during the entire process. However, there must be thousands of these custom URLs that people are typing in. How is each one of these "subdomains" not treated as a separate URL which in turn is redirected to the same page (in short, generating tons of links all pointing to the same page, which Google would normally frown upon)? Additionally, the links also appear on the sites themselves as clickable links, so I'm not sure how these are not tracked. Similarly, the "unique" URL is not indexed or cached in any Google search results. How is this capability handled? It does NOT highlight the referral aspect, but a true example of this is visiting http://sfgiants.com, which does a 302 redirect to the much longer proper San Francisco Giants MLB homepage. I am wondering how sfgiants.com is not indexed (assuming that direct shortened link appears on several MLB pages)?
    1. I know these are 302 redirects; I can see this in the site's network flow.
    2. These links do in fact appear on the page itself, because in some areas (for example, the bottom of the page) it may say: "send this page to a friend! http://name.site.com/", which in turn would again redirect to something like http://www.site.com?id=name so the id value could be stored in session.
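    A hedged sketch of how the observed behaviour is typically wired up (assuming Apache; site.com stands in for the real domain as in the question): a wildcard virtual host catches every custom subdomain and issues the 302, carrying the name as a query parameter for the application to store in session.
    <VirtualHost *:80>
        ServerName site.com
        ServerAlias *.site.com
        RewriteEngine On
        # Capture the subdomain (anything except www) and hand it to the
        # main site as the id parameter via a temporary redirect.
        RewriteCond %{HTTP_HOST} !^www\.site\.com$ [NC]
        RewriteCond %{HTTP_HOST} ^([^.]+)\.site\.com$ [NC]
        RewriteRule ^/?$ http://www.site.com/?id=%1 [R=302,L]
    </VirtualHost>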

    Read the article

  • How to integrate a PHP CMS with PayPal so that only users who completed a payment can register and authenticate?

    - by ibiza
    I am currently using a PHP CMS, CMS Made Simple, to build a website where services will be sold. I intend to use PayPal 'Buy Now' buttons to offer a few packages, renewable every 1 month or every 3 months, that grant access to the secure content of the website for a given period of time. Everything is going well so far, but I am somewhat at a loss for the user registration process, as I have a few constraints I would like to enforce, and it would be nice to automate the process if possible. Here are the constraints:
    - Users should be able to register on my website and choose a password themselves
    - Only users that paid should be able to register
    - Access permissions should be disabled automatically after the service period if the package is not renewed
    And here is the process I am thinking of:
    - User clicks 'buy' on my website
    - User is redirected to PayPal and completes the payment
    - The PayPal email used to pay should be returned to my server and somehow stored
    - If it is a new email, the user needs to register on my website (else, if it is a returning customer, the "payment stopped" deactivation flag should be removed to restore access)
    - If a user does not renew their subscription, a deactivation flag should automatically be set on the email used, locking access until the next payment
    Ideally, no human intervention is needed. What is the best way to implement all this? I am a bit at a loss. I found this article that explained a few things and even has a nice code snippet, except that I'm not sure where to plug it in. Thanks all.
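    The piece that usually automates the middle steps is a PayPal IPN listener: PayPal POSTs each payment notification to a URL you configure, the script echoes the data back for verification, and on success records the payer's email and an expiry date. A hedged PHP sketch follows; activateSubscription() is a hypothetical placeholder for whatever the CMS would use to create or unlock the account and set its expiry.
    <?php
    // ipn.php - hypothetical IPN listener sketch.
    // Echo the notification back to PayPal with cmd=_notify-validate.
    $postData = 'cmd=_notify-validate';
    foreach ($_POST as $key => $value) {
        $postData .= '&' . $key . '=' . urlencode($value);
    }
    $ch = curl_init('https://ipnpb.paypal.com/cgi-bin/webscr');
    curl_setopt_array($ch, [
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => $postData,
        CURLOPT_RETURNTRANSFER => true,
    ]);
    $response = curl_exec($ch);
    curl_close($ch);
    // Only act on verified, completed payments.
    if ($response === 'VERIFIED' && ($_POST['payment_status'] ?? '') === 'Completed') {
        $payerEmail = $_POST['payer_email'];
        // Placeholder: create the account (or clear its deactivation flag)
        // and record when access should lapse if not renewed.
        activateSubscription($payerEmail, strtotime('+1 month'));
    }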

    Read the article
