Search Results

Search found 27632 results on 1106 pages for 'site definition'.


  • Web Host for Small Rails-based CMS site [closed]

    - by clem
    Possible Duplicate: How to find web hosting that meets my requirements? I am building a site for someone that uses a Rails-based content management system I built myself. All of the Rails deployment experience I have so far has been on small intranets. I'm looking at web hosts like Rackspace, because it seems like they're well suited for Rails deployment. However, for a site that's not going to get more than a couple of hundred hits a month (if even that), I'm not sure it's necessary. I've also used Dreamhost's Phusion Passenger deployment for small projects before, but it seems barely functional and not well supported, and I've used Heroku for deployment too, but I think a regular web host may do a little better, as they'll need things like Google Apps for Gmail set up. If anyone could provide some guidance on this, I'd greatly appreciate it. I get confused when I see things on Rackspace like "1.5c/hour", because I'm not sure how that gets computed.
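
    On the pricing question, a rough sanity check (assuming the hourly rate applies to an always-on instance and ignoring bandwidth charges) is just hours-per-month arithmetic:

        1.5c/hour x 24 hours x 30 days = 1,080c, i.e. roughly $10.80/month

    That puts an entry-level cloud server in roughly the same price band as typical shared hosting.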

    Read the article

  • Access denied 403 errors after migrating my site

    - by AgA
    I've recently migrated my Joomla site from one shared host to another with Hostgator. GWT notified me about many 403 access denied pages. I've checked with Firebug too: even though the browser displays the full page correctly, the HTTP status returned is 403. I've checked the home page and it correctly returns a 200 response. The same is shown by Fetch as Google in GWT (pasted at the bottom). The site is 3 years old and I regularly do such migrations. I've copied the files and database "AS IS". I've even cleared all the caches, but no luck. There is only one change: previously the site was the primary domain, but now it's an add-on one. What could be the issue? This is how Googlebot fetched the page:

        Fetch as Google
        URL: http://MYSITE.COM/-----------------REMOVED.html
        Date: Thursday, June 20, 2013 at 10:32:14 PM PDT
        Googlebot Type: Web
        Download Time (in milliseconds): 3899

        HTTP/1.1 403 Forbidden
        Date: Fri, 21 Jun 2013 05:32:15 GMT
        Server: Apache
        P3P: CP="NOI ADM DEV PSAi COM NAV OUR OTRo STP IND DEM"
        Expires: Mon, 1 Jan 2001 00:00:00 GMT
        Cache-Control: post-check=0, pre-check=0
        Pragma: no-cache
        Set-Cookie: 0e4f6b53991c80cf39d57a6db58bb58d=ee2d880e8db0f1fc03c5612ea5a77004; path=/
        Last-Modified: Fri, 21 Jun 2013 05:32:19 GMT
        Keep-Alive: timeout=5, max=75
        Connection: Keep-Alive
        Transfer-Encoding: chunked
        Content-Type: text/html; charset=utf-8

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en-gb" lang="en-gb" >
        <head>
        <base href="http://www.mysite.com/-----------------rajiv-yuva-shakthi-programme-finance-planning.html" />
        <meta http-equiv="content-type" content="text/html; charset=utf-8" />
        <meta name="robots" content="index, follow" />
        <meta name="keywords" content="" />
        <<<<<<TRIMMED>>>>>>>>>>>>>>
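
    One quick way to reproduce what Fetch as Google sees (a sketch; substitute the real URL) is to request the page once with a Googlebot User-Agent and once with a browser User-Agent, since shared hosts sometimes ship mod_security or .htaccess rules that return 403 based on the User-Agent string:

        curl -I -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" http://MYSITE.COM/page.html
        curl -I -A "Mozilla/5.0 (Windows NT 6.1; rv:21.0) Gecko/20100101 Firefox/21.0" http://MYSITE.COM/page.html

    If the first command returns 403 and the second 200, the block is User-Agent based rather than anything inside Joomla.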

    Read the article

  • MySQL tech writer position on Oracle jobs site

    - by stefanhinz
    Just in case you missed this: last week I announced that my team is looking for an experienced technical writer. Now the job offer has gone live on the Oracle website. Have a look! That's the EMEA job site, but the position is actually available in Europe or North America. The job offer should appear on the American site soon, too. If you want to join a great team, or know someone suitable who might, don't hesitate to contact me!

    Read the article

  • How to prevent a 404 error when installing a subdomain using a WordPress multi-site installation

    - by Chris
    I have installed a multi-site installation of WordPress onto my domain. I then added the necessary code to the wp-config.php file and .htaccess as instructed by WordPress. I also installed a plugin called Quick Page/Post Redirect Plugin, which allowed me to place a 301 redirect on the main domain, as I only want to use the subdomain and not the main domain. Then I added the following line of code to the wp-config.php file to redirect the main domain:

        define( 'NOBLOGREDIRECT', 'URL Redirect Address' );

    The site works fine with a redirect on the main domain, and my subdomain runs fine when you type in subdomain.domain.com or http://subdomain.domain.com. However, when I enter www.subdomain.domain.com or http://www.subdomain.domain.com, the following error message is returned:

        Not Found
        The requested URL / was not found on this server.
        Apache/2.4.9 (Unix) Server at www.subdomain.domain.com Port 80

    Any help with this would be much appreciated.
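
    The error page shows Apache answering for www.subdomain.domain.com but with no site configured for that exact host, so the request falls through to a default document root. One common fix, sketched here assuming Apache with mod_rewrite (all names are placeholders), is to redirect the www form back to the bare subdomain in the subdomain's .htaccess:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^www\.(subdomain\.domain\.com)$ [NC]
        RewriteRule ^(.*)$ http://%1/$1 [R=301,L]

    Alternatively, adding www.subdomain.domain.com as a ServerAlias (or a parked domain in the host's control panel) makes the vhost answer for both forms, though WordPress multisite would then also need to recognize the www host.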

    Read the article

  • AdSense ads are not a good fit for my site

    - by Ryan Grush
    I run an academic network for college students to communicate at particular universities, and we run Google AdSense. The site pulls in a decent amount for a side project, but our CTR is horrible (<0.2%) and our RPM is equally low. The problem lies in the fact that Google pegs us as an education site (which we are) but shows our users ads for U of Phoenix, DeVry U, and other for-profit universities. All of our users are students of higher-caliber institutions and therefore have no use for these ads. I've known about this problem for some time, but I don't know what to do to show more relevant ads instead (e.g. spring break, school apparel, poker, sports, etc.). What would be the best way to change this?

    Read the article

  • Site Description h2 vs p

    - by user1010609
    I tend to follow this HTML structure for the main page when creating a new site:

        <div class="header">
          <img alt="keyword" title="keyword logo" src="keyword.png" />
          <div>
            <h1>keyword</h1>
            <p><b>keyword + hierarchy keywords</b></p>
          </div>
        </div>

    As you can see, I'm using <p><b></b></p> to hold a short description of the site, but I was wondering if an <h2> would be better to use here?
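
    For comparison, the <h2> variant being considered would simply promote the tagline to a heading (a sketch of the same structure):

        <div class="header">
          <img alt="keyword" title="keyword logo" src="keyword.png" />
          <div>
            <h1>keyword</h1>
            <h2>keyword + hierarchy keywords</h2>
          </div>
        </div>

    Semantically, <h2> marks a section heading in the document outline while <p> marks body copy, so the choice hinges on whether the tagline is meant to act as a heading.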

    Read the article

  • Creating date based back entries for a blog and its site registration

    - by open_sourse
    So I am showing a blog to a colleague and telling him how the author has been blogging regularly for over ten years now. My colleague tells me that anyone can register a domain name and start entries dated from, say, circa 2000. When I argued that the site registration date can easily show that the registration was done recently, he put forward two arguments:
    1. The author can claim that he moved from an old domain name which was registered many years ago and lapsed, so he took the data and rebuilt it on the new site.
    2. The author can buy an expired domain which was on the internet for many years.
    I am not sure if these ways can work to con someone into believing you have been a blogger for over a decade, but I do not have enough expertise in the topic to refute him. So I thought I would ask the wise community here at StackExchange. Can anyone give me some insight?
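
    For what it's worth, the current registration date itself is easy to check from a terminal (a sketch; example.com is a placeholder, and the field name varies slightly by registry):

        whois example.com | grep -i "creation date"

    This shows when the present registration began, but as the two arguments above suggest, it cannot by itself prove or disprove where the content lived before that date.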

    Read the article

  • Redirect from the main page on multilanguage site

    - by Pluto
    I've decided to start developing a multilingual web site. Now I'm thinking about the structure. The idea is to use a simple algorithm to find out which language is most interesting to the visitor on his first visit to the root page, and then forward him to the corresponding address: for example, http://example.com/en/ for an English-speaking audience and http://example.com/ru/ for Russians. The question is, how do search engines react to such a redirect? If this approach does not take away the site's reputation, which redirect should I choose: 301 or 302? I would be very grateful for a reasoned response.
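
    A minimal sketch of such a redirect, assuming Apache with mod_rewrite and just these two languages (the root URL redirects based on the Accept-Language header, defaulting to English):

        RewriteEngine On
        # Russian-preferring browsers go to /ru/, everyone else to /en/
        RewriteCond %{HTTP:Accept-Language} ^ru [NC]
        RewriteRule ^$ /ru/ [R=302,L]
        RewriteRule ^$ /en/ [R=302,L]

    Because the destination varies per visitor, a 302 is the natural status for the root URL here; a 301 would tell crawlers the root has permanently moved to whichever language version they happened to be served.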

    Read the article

  • Estimate of Hits / Visits / Uniques in order to fall within a given Alexa Tier?

    - by Alex C
    Hi there! I was wondering if anyone could offer up rough estimates of how many hits a day move you into a given Alexa rank:
    * Top 5,000
    * Top 10,000
    * Top 50,000
    * Top 100,000
    * Top 500,000
    * Top 1,000,000
    I know this is incredibly subjective, and thus the broad brush strokes with the number ranges... BUT I've got a site currently ranked just over 1.2M worldwide and over 500k in the USA (http://www.alexa.com/siteinfo/fstr.net). Pretty cool for something hand-built on weekends (pat self on back). I was applying to an ad platform and was told that their program doesn't accept webmasters who have an Alexa rank greater than 100,000. (Time to take back that pat on the back, I guess.) I know that my traffic in the last 30 days is somewhere on the order of 15,000 uniques and 20,000 pageviews. So I'm wondering how much harder I have to work to achieve my next "goals"? I'd like to break into the top million, then re-evaluate from there. It'd be nice to know what those targets translate into (very roughly, of course). I imagine that Alexa ranks and tiers become very much exponential as you move up the ranks, but even hearing anecdotal evidence from other webmasters would be really useful to me. (i.e.: I have a site that is ranked X and it got Y hits in the last 30 days.) Thanks :) - Alex

    Read the article

  • Fiddler - A useful free tool for checking Web Services (and web site) traffic

    - by TATWORTH
    Recently I had reason to be very glad that I had Fiddler. I was able to record some web service traffic and identify a problem, as Fiddler can record both the call to a web service and the response from it. By seeing the actual data traffic, I was able to resolve a problem found in testing in less time than it has taken me to write this blog entry! This tool is also useful for studying general web site traffic. Fiddler is available from http://www.fiddler2.com/fiddler2/ and there are training videos available on the same site.

    Read the article

  • Significant number of non-HTTP requests hitting my site

    - by Mark Westling
    I'm seeing a significant number of non-HTTP requests hitting a site I just launched. They show up in the server (nginx) logs as non-ASCII and get rejected (correctly) with a 400 status. Here are some lines from the log:

        95.132.198.189 - - [09/Jan/2011:13:53:30 -0500] "œ$A\x10õœ²É9J" 400 173 "-" "-"
        79.100.145.126 - - [09/Jan/2011:13:57:42 -0500] "#§i²¸oYi á¹„\x13VJ—x·—œ\x04N \x1DÔvbÛè½\x10§¬\x1E0œ_^¼+\x09ÜÅ\x08DÌÃiJeT€¿æ]œr\x1EëîyIÐ/ßýúê5Ǹ" 400 173 "-" "-"
        79.100.145.126 - - [09/Jan/2011:13:58:33 -0500] "¯Ú%ø=Œ›D@\x12¼\x1C†ÄÀe\x015mˆàd˜Û%pÛÿ" 400 173 "-" "-"

    What should I make of this? Is this some sort of scripted attack? Or could these be correct requests that have somehow been garbled? They're not affecting the performance of the site and I'm not seeing any other signs of attacks (e.g., no strange POSTs) so at this point I'm more curious than afraid.
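
    Those byte sequences look like binary protocol probes against the open port (or clients speaking something other than HTTP, such as TLS, to the plain-HTTP listener), and a 400 is a reasonable answer to them. One common nginx hardening sketch, assuming the real site declares its own server_name, is a catch-all server that drops anything not addressed to the site by hostname (444 is nginx's special "close the connection without replying" code):

        server {
            listen 80 default_server;
            server_name _;
            return 444;
        }

    This won't change the 400s for requests nginx cannot parse at all, but it keeps random probes aimed at the bare IP away from the real site.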

    Read the article

  • 302 redirect issue for a Joomla 2.5.7 site

    - by DDD
    For my site I am using Joomla 2.5.7 and a Facebook comments tool for the articles. I am getting the 302 redirect problem for the FB comments on the articles I post. I have checked the URLs here http://www.webconfs.com/http-header-check.php and got the following result, showing a 302 redirect for http://www.fijoo.com:

        HTTP/1.1 302 Moved Temporarily =
        Date = Wed, 21 Nov 2012 09:46:39 GMT
        Server = Apache/2.2.22 (Unix) mod_ssl/2.2.22 OpenSSL/1.0.0-fips mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 mod_perl/2.0.6 Perl/v5.10.1
        X-Powered-By = PHP/5.3.16
        Set-Cookie = =en-GB; expires=Wed, 21-Nov-2012 10:46:40 GMT
        LOCATION = /
        Content-Length = 0
        Connection = close
        Content-Type = text/html

    How can I overcome this? Anyone, please help.

    Read the article

  • Application for Google AdSense rejected twice due to "Unacceptable site content"

    - by Bootcamp
    I have a technical blog at techaxe.com where I put up technical articles quite frequently. I applied for Google AdSense and got my application rejected twice. The issue reported both times was "Unacceptable site content". I read Google's content policy and found nothing that would indicate that the content on my blog is unacceptable to them. Can someone please guide me as to what should be done so that I can get my AdSense application accepted?

    Read the article

  • Site is working with http:// not with http://www

    - by Forza
    My site works if I type domain.com, but if I type www.domain.com I get a 404 error page. The domain is registered with Google Apps for Business and the hosting is done with another company. I have some A records pointing the site to this server. However, it appears the A records are only working halfway. What do I have to do to make it work for both http:// and http://www?
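
    The missing piece is usually a record for the www hostname itself (a sketch in zone-file notation; most DNS control panels expose the same thing as a form, and 203.0.113.10 stands in for the real server IP):

        domain.com.      IN  A      203.0.113.10
        www.domain.com.  IN  CNAME  domain.com.

    The web server must also be configured to answer for the www name (e.g. an Apache ServerAlias or the equivalent in the host's panel), otherwise the record will resolve but still land on a 404.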

    Read the article

  • Site returning 404 header to Google, not sure why

    - by Damon
    A Drupal site that works fine for regular users returns a 404 Not Found error when I try to use the W3C validator on it; it is also not being indexed by Google at all (which is the main issue, but I suspect there is a connection). It is an https:// site with an .htaccess rule to redirect any http:// request to https://. I had it running in Google Webmaster Tools and thought it was fine, but it turns out I had not added the https domain. After adding the https domain it's also returning this header:

        HTTP/1.1 404 Not Found
        Date: Mon, 15 Oct 2012 19:37:43 GMT
        Server: Apache
        Expires: Sun, 19 Nov 1978 05:00:00 GMT
        Cache-Control: no-cache, must-revalidate, post-check=0, pre-check=0

    Robots.txt just has:

        User-agent: *
        Crawl-delay: 10
        # Files
        Disallow: /cron.php

    How can I check what the issue is here?

    Read the article

  • Any good site that teaches C++?

    - by Shinmaru
    I am searching for any good site that teaches C++, one that can explain most to all things about it in general and has a decent, active community. About me: I am new to programming (I know nothing of it, so please bear with me); I have only learned, to a very basic level, the Lua scripting language, so yes, I am your complete newbie. I got interested in programming through scripting in Lua, so you can say it was my small stepping stone. One would normally take a course in college, but not everyone is well funded for that, and I can't buy books for the same reason (yes, I'm somewhat poor, with only money for essentials and bills). I'm not trying to get sympathy, just stating my conditions. Please, something along the lines of 'Programming for Dummies' (I'm not the brightest crayon in the box). I know this will not be easy, but your help will be most appreciated. I would learn for either Windows and/or Linux/Unix; I use both. Sites I know: Cplusplus.org (quite inactive, and the tutorials are a little geared toward the programming-savvy).

    Read the article

  • How to improve a single-paged site search result [closed]

    - by Trisism
    Possible Duplicate: How to SEO a Single-Page website I created an online CV a couple of weeks ago and it has had quite a few visits. Now I want to improve the chance that it will appear in Google search results; however, my web CV is a one-page site containing only internal links (those with a hash #), so I can't really submit a sitemap. I could have changed the internal links to normal links processed on the server side, but there's no point in doing so. I'm very new to web SEO, so I would really appreciate it if somebody could show me what I should do with a single-page site with internal links so that it is effectively indexed by crawlers.

    Read the article

  • Ping and crawling not working, site still resolving

    - by Andrew Alexander
    Ok, so we're trying to figure out why the site of one of our clients isn't being crawled by Google (we've ruled out robots.txt and meta tags). When we go to the site, by either IP address or domain name, the site resolves and everything works. However, Google is getting a 302 redirect (which it apparently isn't following for crawling), and when we ping the address, it times out (note: the site is still resolving in the browser throughout all of this). The site is built in ASP.NET (I assume C#), so my thought was that it was an errant redirect rule, or some other sort of server-side issue. We also thought that it might be due to incorrect domain pointing (but if we try to ping the IP, it doesn't work, so that sorta rules that out). We're really not sure what is causing all of these errors, or even if they have one single source. Anyone have any ideas what could be going on? Do you need any more information? To boil it down in a TL;DR:
    * Site resolving in browser, both IP and domain name. No problems here.
    * Site not being crawled by Google (gets a 302 it doesn't seem to follow) - it is not due to robots.txt or meta tags.
    * Ping is not working for the IP address. This is very odd, because again, the IP address seems to work fine in the browser.
    * Our thoughts are either a redirect rule issue, a domain pointing issue, or possibly some errant code - or some combination of the three.
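
    Two details worth separating here (a sketch; example.com stands in for the client's domain): ping uses ICMP, which many servers and firewalls block outright, so a failed ping says nothing about HTTP. And the 302 can be traced from the command line to see exactly where the crawler is being sent:

        curl -sIL -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" http://example.com/

    The -L flag follows the redirect chain and -I prints each hop's status line and headers, so the Location values show what Googlebot is actually handed.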

    Read the article

  • Definition of wait-free (referring to parallel programming)

    - by tecuhtli
    In Maurice Herlihy's paper "Wait-free synchronization" he defines wait-free: "A wait-free implementation of a concurrent data object is one that guarantees that any process can complete any operation in a finite number of steps, regardless of the execution speeds of the other processes." www.cs.brown.edu/~mph/Herlihy91/p124-herlihy.pdf Let's take one operation op from the universe. (1) Does the definition mean: "Every process completes a certain operation op in the same finite number n of steps."? (2) Or does it mean: "Every process completes a certain operation op in any finite number of steps. So a process can complete op in k steps and another process in j steps, where k != j."? Just by reading the definition I would understand meaning (2). However, this makes no sense to me, since a process executing op in k steps one time and in k + m steps another time meets the definition, but the m steps could be a waiting loop. If meaning (2) is right, can anybody explain why this describes wait-free? In contrast to (2), meaning (1) would guarantee that op is executed in the same number of steps k, so there can't be any additional m steps that are necessary, e.g., in a waiting loop. Which meaning is right, and why? Thanks a lot, sebastian
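
    For a concrete reference point, a sketch in C# (the paper itself is language-neutral; Interlocked maps to atomic hardware instructions): the plain atomic increment is the textbook wait-free operation, while the retry loop below it is not, which is exactly the waiting-loop worry raised above:

        using System.Threading;

        class WaitFreeCounter
        {
            private long value;

            // Wait-free: completes in a bounded number of its own steps
            // (a single atomic read-modify-write), no matter what the
            // other threads are doing or how fast they run.
            public long Increment()
            {
                return Interlocked.Increment(ref value);
            }

            // Lock-free but NOT wait-free: the loop can retry arbitrarily
            // many times if other threads keep changing 'value' between
            // the read and the CompareExchange, so this thread's step
            // count depends on the other threads.
            public long IncrementWithRetry()
            {
                while (true)
                {
                    long current = Interlocked.Read(ref value);
                    if (Interlocked.CompareExchange(ref value, current + 1, current) == current)
                        return current + 1;
                }
            }
        }

    This matches meaning (2): different calls may take different (but bounded) numbers of steps. What wait-freedom rules out is a step count that other processes can inflate without limit - under an adversarial schedule the retry loop never finishes, so it is not "a finite number of steps" in every execution.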

    Read the article

  • Creating a Wiki site in SharePoint 2010

    - by Ben
    I have a SharePoint 2010 site set up and would like to add a wiki site to it. Normally, to add a subsite to a parent site you would click "Site Actions" - "New Site", choose the site type (e.g. Team Site), and it would work fine. In the case of adding a wiki site, I choose "Enterprise Wiki" and get "Error An Unexpected Error has occured Correlation Id:...". So I took the correlation ID and checked the ULS logs to see what the error is, and the first bad sign seems to be:

        SharePoint Foundation Feature Infrastructure
        The element of type 'ContentTypeBinding' for feature 'EnterpriseWiki' (id: 76d688ad-c16e-4cec-9b71-7b7f0d79b9cd) threw an exception during activation: Specified argument was out of the range of valid values. Parameter name: Content type not found (Id: '0x010100C568DB52D9D0A14D9B2FDCC96666E9F2007948130EC3DB064584E219954237AF39004C1F8B46085B4D22B1CDC3DE08CFFB9C')

    There are more errors in the log, but this seems to be the first one. Any ideas on what's going on? Thanks
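
    A "Content type not found" during EnterpriseWiki activation is the kind of error that appears when a feature the template depends on, typically the publishing infrastructure, is not active. One possible check from the SharePoint 2010 Management Shell (a sketch; the URL is a placeholder and the dependency is an assumption to verify against the ULS output):

        # List features already active on the site collection
        Get-SPFeature -Site http://yourserver/sites/yoursite

        # Hypothesis: activate the publishing features the wiki template binds to
        Enable-SPFeature -Identity "PublishingSite" -Url http://yourserver/sites/yoursite
        Enable-SPFeature -Identity "PublishingWeb" -Url http://yourserver/sites/yoursite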

    Read the article

  • Need full news article on my site

    - by Kamlesh Ghate
    I'm working on a news site (developed in PHP) where I'm displaying news from different feeds. I'm parsing the XML and storing the content in my database. But when a reader comes to the site and clicks on any title to read the full article, it redirects to the site the feed came from. I want the full article so that the user can read it on my site without leaving it. How can I get the full article to display on my site?
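
    One thing worth checking first (a sketch; feeds vary): many RSS feeds already carry the full article body in a content:encoded element alongside the summary in description, so the parser may only need to read a different field:

        <!-- a sketch of one feed item; the content namespace is
             xmlns:content="http://purl.org/rss/1.0/modules/content/" -->
        <item>
          <title>Example headline</title>
          <link>http://example.com/story</link>
          <description>Short summary most aggregators display...</description>
          <content:encoded><![CDATA[<p>The full article body, when the feed provides it.</p>]]></content:encoded>
        </item>

    If the feeds only publish summaries, there is nothing more to parse: getting full text would mean fetching and republishing the source pages, which is a scraping and licensing question rather than a feed-parsing one.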

    Read the article

  • Extending a legacy site with another server-side programming platform: best practice

    - by Andrew Florko
    The company I work for has a site developed 6-8 years ago by a team that was enthusiastic enough to use their own private PHP-based CMS. I have to put dynamic data from one intranet company database onto this site within one week: 2-3 pages. I contacted the company site administrator and she showed me the administrative part - the CMS only allows inserting HTML blocks and managing the site map (the site is deployed on a machine inside the company and is fully accessible and upgradable). I'm not a PHP guy and I don't want to dive into a legacy CMS engine hardly anyone has ever heard of. I also don't want to contact the developer team, 'cos I'm not sure they are still around or capable of extending this old site, and it would take too much time anyway. I am about to deploy a helper ASP.NET site on IIS with the 2-3 required pages and reference the helper site via an iframe from the present site. The new pages will also allow downloading some dynamic content from the present site. Is this OK, and what are the pitfalls of the iframe approach?
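
    For what it's worth, the embed itself is a snippet pasted into one of the CMS's HTML blocks (a sketch; helper.example.com and Projects.aspx stand in for wherever the ASP.NET pages end up):

        <iframe src="http://helper.example.com/Projects.aspx"
                width="100%" height="600" frameborder="0"
                title="Project data"></iframe>

    The usual pitfalls are styling (the iframe won't inherit the host page's CSS), a fixed height against variable content, and browsers treating the helper as a different origin if it lives on another host.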

    Read the article

  • What is a proper way to store site-level global variables in a SharePoint site?

    - by ccomet
    One thing that has driven me nuts about SharePoint 2007 is the apparent inability to have definable settings that apply specifically to a site or site collection itself, and not the content. I mean, you have some pre-defined settings like the site logo, the site name, and various other things, but there doesn't appear to be anywhere to add new kinds of settings. The application I am working on needs to be able to create multiple kinds of "project site collections" that all follow a basic template but have certain additional settings that apply specifically to that site collection and that one alone. In addition to the standard site name, we also need to define the project number, the project name, and the client name. And given the requests of some of our clients, we also reach a point where we have to have configurable settings that alter how some of the workflows work, like whether files are marked with letters or numbers. Our current solution, which I'm hesitant about, has been to store an XML file on the SharePoint server. This file contains one node for each site collection, identified by the URL of the root site. Inside the node are all of the elements that need to be defined for that site collection. When we need them, we have to access the XML file (which always requires SPSecurity.RunWithElevatedPrivileges to access files right on the server) every time to load it and retrieve the data. There are a lot of automated processes that will have to do this, and I'm hesitant about the stability of this method when we reach hundreds of sites with thousands of files running tens of thousands of workflows, all wanting to access this file. Maybe these are unfounded worries, but I'd rather worry than risk everything breaking in a couple of years. I was looking into the SPWeb object and found the AllProperties hashtable. It looks like just the kind of thing that might work, but I don't know how safe it is to modify it. I read through both MSDN and the WSS SDK but found nothing that clarified adding completely new properties into AllProperties. Is it safe to use AllProperties for this kind of thing? Or is there yet another feature I am missing that could handle the concept of global variables at the site collection or site scope?
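
    For reference, a minimal sketch of the property-bag pattern in question (the key name "ProjectNumber" is hypothetical, chosen for this example): set a value in AllProperties, call Update() to persist it, and read it back later.

        using System;
        using Microsoft.SharePoint;

        class PropertyBagDemo
        {
            // A sketch against the WSS 3.0 / SharePoint 2007 object model.
            static void Main()
            {
                using (SPSite site = new SPSite("http://server/sites/project"))
                using (SPWeb web = site.OpenWeb())
                {
                    // Write: AllProperties is a plain Hashtable keyed by string;
                    // SPWeb.Update() persists the change to the web's property bag.
                    web.AllProperties["ProjectNumber"] = "P-1042";
                    web.Update();

                    // Read it back (null if the key was never set).
                    string projectNumber = web.AllProperties["ProjectNumber"] as string;
                    Console.WriteLine(projectNumber);
                }
            }
        }

    A property stored this way lives with the web itself, so storing such keys on the root web of each project site collection would give per-site-collection settings readable without any file I/O or RunWithElevatedPrivileges.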

    Read the article
