Search Results

  • Domain forwarding (GoDaddy) - Forward only / Forward with masking

    - by jan
    I am trying to configure my domain to forward to my App Engine application. For the forwarding I can choose between "forward only" and "forward with masking". Assume my domain is called "myDomain.com" and my app is located at myApp.appspot.com. If I choose "forward only" and I go to myDomain.com, I get redirected to myApp.appspot.com (myApp.appspot.com also showing in the address bar - but I want myDomain.com to show, of course). If I choose "forward with masking", "myDomain.com" is always shown in the address bar, even if I navigate to some subpage. The URL should then look like e.g. "myDomain.com#!page:xyz", but it still shows just "myDomain.com". Is there some middle way?
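
    Neither forwarding mode gives both a clean address bar and working deep links; the usual approach at the time was to skip GoDaddy forwarding for www entirely and map the custom domain to App Engine, so the app is served directly on www.myDomain.com. A rough sketch of the DNS side, assuming the domain has first been added in the App Engine / Google Apps settings (the record names and the ghs hostname are assumptions to check against Google's current instructions):

        ; hypothetical GoDaddy DNS zone entries
        www    CNAME    ghs.google.com.
        ; the naked domain (myDomain.com) can then use "forward only" to http://www.myDomain.com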

    Read the article

  • Hiding php includes from search spiders?

    - by 21stcn
    Quick and simple question. I have 80+ html files which I want to be crawled. They are individual product pages. Each of these pages calls its content using php includes. These php include files are in a separate folder on the server and contain the core content for the individual product pages. I just wanted to ask, if I use robots.txt or .htaccess to prevent crawling of the directory that holds the php content files, will there be no issue crawling the html pages which include these files? What I want to achieve is have the html files indexed with the php content included in them, but I don't want visitors landing on the php content pages, nor have these php files indexed as duplicate content. Just clarification needed as to whether it is safe to block spiders from accessing the php folder, without this affecting the html files being indexed with the included content. Is this the best way to do things? Or should I just leave the content php files to be crawled?
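
    For what it's worth, as long as the includes are executed server-side, blocking the includes directory in robots.txt does not affect how the HTML pages are indexed: the PHP include runs on the server before the page is sent, so crawlers only ever see the finished HTML. A minimal robots.txt sketch, assuming the include files live in a folder called /includes/ (a placeholder name):

        User-agent: *
        Disallow: /includes/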

    Read the article

  • News Portal CMS

    - by George Grigorita
    I am looking for a specific news portal CMS. I know all the major "general" CMSes (like WordPress, Drupal or Joomla) and even the less known ones (like TYPO3, ExpressionEngine, Textpattern or Concrete5). I'm already working with a Drupal distribution called OpenPublish and another WordPress installation to determine which would be better, but these are more of a Plan B. I would like to work directly with a CMS that was built exactly for the kind of tasks specific to a news / media portal. It doesn't matter if the CMS is commercial (however, I don't want to pay a monthly fee) or free, but I need to be able to use it on my own server / hosting and I need to be able to access its source code (not to modify it, but to integrate it with future plugins / modules). If you know any CMS that qualifies for this job, please let me know. In the last few days I have been all over Google but I couldn't find anything worth mentioning.

    Read the article

  • Getting fingerprint from Apache certificate (combined with key)

    - by Alois Mahdal
    I have just created a certificate for my Apache SSL host using:

        make-ssl-cert /usr/share/ssl-cert/ssleay.cnf /etc/ssl/private/myhost.crt

    Now what is the correct way to get the fingerprint out of it? (So I can keep it somewhere else for visual comparison - in case I need to connect and really don't trust the network.)

        openssl sha1 /etc/ssl/private/myhost.crt

    returns a different SHA1 than Opera tells me about the cert. Is this because it's combined with the key? (...or am I spoofed already? :-))
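
    A likely explanation: openssl sha1 hashes the whole PEM file (the certificate plus whatever key is bundled with it), not the certificate itself, which is why it never matches the fingerprint the browser shows. A sketch of the usual way to print the certificate's own SHA-1 fingerprint:

        openssl x509 -noout -fingerprint -sha1 -in /etc/ssl/private/myhost.crt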

    Read the article

  • Setup site folders on Apache and PHP [closed]

    - by Cobus Kruger
    I'm trying to set up my first Apache server on my Windows PC at home and I have real trouble finding out which configuration settings go where. I downloaded and installed XAMPP, which seemed to get everything nicely set up, and I can see a working website on http://localhost. So far so good. The point of this is to develop a website, of course, and to make my life easier (irony?) I wanted to let the web site root point to my Eclipse project folder. So I opened httpd.conf, uncommented a VirtualHost block and changed its DocumentRoot to my local path. Now when I try to load http://localhost I get a 403 (Access denied) error. So where do I configure permissions for my folder? And is that all I need to let my site run from the folder specified, or am I going to have to clear another hurdle?
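
    A minimal sketch of the kind of block that usually resolves the 403, assuming XAMPP's Apache 2.2 and a made-up workspace path - both the DocumentRoot and a matching <Directory> section need to point at the project folder so Apache is actually allowed to serve from it:

        <VirtualHost *:80>
            DocumentRoot "C:/workspace/mysite"
            <Directory "C:/workspace/mysite">
                Options Indexes FollowSymLinks
                AllowOverride All
                # Apache 2.2 syntax; on Apache 2.4 use "Require all granted" instead
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>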

    Read the article

  • Quoting people for website dev. work

    - by Jason
    Hi All, I have recently given some quotes to a few people, and I need some advice about how things should be done... Q1: I've seen, heard of and read about a lot of developers using free resource sites online to obtain free Privacy Policy, Disclaimer etc. documents for their/customers' websites. A customer I quoted the other day expected me to write/get a disclaimer for their site. Who in their right mind would expect a document like that from a Web Developer? I just told them that they need to sort that stuff out themselves with a Lawyer or something, and then to send it to me so I can paste it on a webpage for them. Q2: If you're charging per hour, and you estimate that the project would take 1 week to finish (including testing/releasing), but you soon realise that you'll require more time, do you re-quote them? Or do you just finish off the site at the original quote price? Q3: How do you figure out how much you will charge your customers? Do you charge per feature, or per hour, or per day, or all of the above? Thanks :)

    Read the article

  • What are some reputable merchant account providers for high risk payment web sites?

    - by GregH
    I am helping to set up an online cigar web site. However, it has become a real pain to take payments online since tobacco is considered a "high-risk" item and nobody will provide a merchant account to process the payments. It looks like there are companies that specialize in high-risk merchant accounts. I was wondering if anybody could recommend a high-risk merchant account and payment processing provider?

    Read the article

  • Tracking click conversions with Google Analytics

    - by Joel
    Is there any way I can use Google Analytics to track click conversions on a link? For example, if I have a link to www.a.com, is it possible for Google to track the number of times that particular link was shown on my page and then track how many times it was actually clicked? The problem is that I do not show the link to www.a.com every time the page loads. I am using a random function (server side) to generate a different link every time. I would like Google Analytics to provide me with the click conversion for each of the links I choose to show the user. Thanks, Joel
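
    One possible approach with the classic asynchronous tracker (ga.js) is to record an event both when the rotating link is rendered and when it is clicked, labelling each event with the destination that was actually shown, then comparing the two counts as a conversion rate. A sketch - the category, action and label names are placeholders:

        <script type="text/javascript">
          // fired when the page renders this particular link (marked as a non-interaction event)
          _gaq.push(['_trackEvent', 'RotatingLinks', 'shown', 'a.com', 1, true]);
        </script>

        <a href="http://www.a.com"
           onclick="_gaq.push(['_trackEvent', 'RotatingLinks', 'clicked', 'a.com']);">a.com</a>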

    Read the article

  • Which shopping cart / ecommerce platform to choose?

    - by fabien7474
    I need to build an ecommerce website within a tight budget and schedule. Of course, I have never done that before, so I have googled what my options are and I have concluded that the following are no longer valid candidates:

        Magento: steep learning curve
        osCommerce: old, bad design, buggy and not user-friendly
        Zen Cart, CRE Loaded, CubeCart: based on osCommerce
        VirtueMart, Ubercart, eCart: based on a CMS (Joomla, Drupal, WordPress) that is not necessary for my use case

    So I finally narrowed down my choices to these solutions:

        PrestaShop: easy to use, great templating engine (Smarty), but many modules are not free yet are indispensable
        OpenCart: security issues and not great support from the main developer. See here and here.

    So, as you can see, I am a little bit confused, and if you can help me choose an easy-to-use, lightweight and cheap (not necessarily free) ecommerce solution, I would really appreciate it. By the way, I am a Java/Grails programmer, but I am also familiar with PHP and .NET (not with Python or Ruby/Rails). EDIT: It seems that this question is more appropriate for the Webmasters Stack Exchange site, so please move it to where it belongs (I cannot do that) instead of downvoting it. BTW, I have found a quite similar question on SO (http://stackoverflow.com/questions/3315638/php-ecommerce-system-which-one-is-easiest-to-modify) which is quite popular.

    Read the article

  • Webhosting for a TV channel with streaming video

    - by Murtez
    Hi guys, I'm making a website for a web-based TV channel, so I'm assuming it will be heavy on bandwidth usage, but I'm no good at calculating bandwidth. Couple of questions: Assuming the site streams HD video 24/7 to 1000 people, how much bandwidth is that? Where should something like this be hosted? The channel will have a fiber optic internet connection, but I don't know the limit on their bandwidth; would it be better for them to get their own server or to host online? In either case in question 2, any recommendations? I'm usually a regular web designer for minor businesses, so this is a new level. Your help is appreciated.
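
    A rough back-of-the-envelope estimate, assuming roughly 5 Mbps per HD stream (actual bitrates vary a lot with codec and resolution):

        5 Mbps per viewer x 1000 concurrent viewers = 5,000 Mbps = 5 Gbps sustained
        5 Gbps x ~2,592,000 seconds per month = ~13,000,000 Gb = ~1,620 TB (about 1.6 PB) transferred per month

    Numbers at that scale generally point towards a CDN or a dedicated streaming provider rather than ordinary shared hosting or a single office connection.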

    Read the article

  • Legal responsibility for embedding code

    - by Tom Gullen
    On our website we have an HTML5 arcade. Each game has an "embed this game on your website" copy + paste code box. We've done the approval process of games as strictly and safely as possible; we don't actually think it is possible to have any malicious code in the games. However, we are aware that there's a bunch of people out there smarter than us and they might be able to exploit it. For webmasters wanting to copy + paste our games onto their websites, we want to warn them that they are doing it at their own risk - but could we be held responsible if, say for instance, a malicious game was hosted on an important website and it stole their users' credentials and caused them damage? I'm wondering if having an HTML comment in the copy + paste code saying "Use at your own risk" is sufficient.

    Read the article

  • Duplicating someone's content legitimately & writing HTML to support that

    - by Codecraft
    I want to add content from other blogs to my own (with the authors' permission) to help build additional relevant content and to support articles I've found useful that others have written. I'm looking into how to do this responsibly - i.e., by giving the original content author a boost and not competing against them for search traffic, which should go to their site. In order to keep my duplicate content out of search, and to hint to the search engines where the original content is to be found, I've implemented:

        <head>
            <meta name='robots' content='noindex, follow'>
            <link rel='canonical' href='http://www.originalblog.com/original-post.html' />
        </head>

    Additionally, to boost the original article and to let readers know where it came from, I'll be adding something like this:

        <div>
            Article originally written by
            <a href='http://www.authorswebsite.com'>Authors Name</a>
            and reproduced with permission.<br/>
            <a href='http://www.originalblog.com/original-post.html' target='new'>
                Read the original article here.
            </a>
        </div>

    All that remains is a way to 'officially' credit the original author in the HTML for the search spiders to see. Can anyone tell me a way to do this, possibly using rel="author" (as far as I can see that's only good for my own original content), or perhaps it doesn't matter given that the reproduced pages will be kept out of search engines? Also, have I overlooked anything in the approach?

    Read the article

  • Windows 7 - traceroute hop with high latency! [closed]

    - by Mac
    I've been experiencing this problem for quite a while, and it's quite frustrating. I'll do a traceroute to www.l.google.com, for example. This is the result (please note: I will replace some parts of personal information with text - i.e. ISP.IP is in reality an actual IP address, and ISPNAME replaces the actual ISP name):

        Tracing route to www.l.google.com [173.194.34.212] over a maximum of 30 hops:

          1     1 ms     1 ms    <1 ms  192.168.1.1
          2     9 ms     8 ms    10 ms  ISP.EXCHANGE.NAME [ISP.IP.172.205]
          3   161 ms   171 ms   177 ms  host-ISP.IP.215.246.ISPNAME.net [ISP.IP.215.246]
          4    12 ms     9 ms    10 ms  host-ISP.IP.215.246.ISPNAME.net [ISP.IP.215.246]
          5    10 ms     9 ms    17 ms  host-ISP.IP.224.165.ISPNAME.net [ISP.IP.224.165]
          6    10 ms     9 ms    10 ms  10.42.0.3
          7     9 ms     9 ms    10 ms  host-ISP.IP.202.129.ISPNAME.net [ISP.IP.202.129]
          8    10 ms     9 ms     9 ms  host-ISP.IP.209.33.ISPNAME.net [ISP.IP.209.33]
          9    77 ms   129 ms   164 ms  host-ISP.IP.198.162.ISPNAME.net [ISP.IP.198.162]
         10    43 ms    42 ms    43 ms  72.14.212.13
         11    42 ms    42 ms    42 ms  209.85.252.36
         12    59 ms    59 ms    59 ms  209.85.241.210
         13    60 ms    76 ms    68 ms  72.14.237.124
         14    59 ms    59 ms    58 ms  mad01s08-in-f20.1e100.net [173.194.34.212]

        Trace complete.

    Notice that there is a spike on the 3rd hop, but also notice that the 3rd and 4th hops are to the exact same destination. Furthermore, when I ping the offending hop separately, I get the low latency I would expect to that server:

        Pinging ISP.IP.215.246 with 32 bytes of data:
        Reply from ISP.IP.215.246: bytes=32 time=10ms TTL=253
        Reply from ISP.IP.215.246: bytes=32 time=9ms TTL=253
        Reply from ISP.IP.215.246: bytes=32 time=12ms TTL=253
        Reply from ISP.IP.215.246: bytes=32 time=9ms TTL=253
        Reply from ISP.IP.215.246: bytes=32 time=10ms TTL=253
        Reply from ISP.IP.215.246: bytes=32 time=9ms TTL=253
        Reply from ISP.IP.215.246: bytes=32 time=10ms TTL=253
        Reply from ISP.IP.215.246: bytes=32 time=9ms TTL=253
        Reply from ISP.IP.215.246: bytes=32 time=10ms TTL=253
        Reply from ISP.IP.215.246: bytes=32 time=10ms TTL=253

        Ping statistics for ISP.IP.215.246:
            Packets: Sent = 10, Received = 10, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 9ms, Maximum = 12ms, Average = 9ms

    I'm baffled as to why or how this is happening, and it seems to "fix itself" at random times. Here is an example of where it was working as expected: http://i.imgur.com/bysno.png Notice how many fewer hops were taken. Please note that all the posted results occurred within 10 minutes of testing. I've tried contacting my ISP, and they seem clueless; in their eyes, as long as "the download speed is not slow", they're doing everything right. Any insight would be very much appreciated, and thanks in advance!

    Read the article

  • fsockopen() error: Network is unreachable (port 43) in PHP [closed]

    - by hamid
    I've written some PHP code that looks up whois information for a domain, but it fails. This is some of my code:

        function checkdomain($server, $domain){
            global $response;
            $connection = fsockopen($server, 43);
            fputs($connection, "domain " . $domain . "\r\n");
            while(!feof($connection)){
                $response .= fgets($connection, 4096);
            }
            fclose($connection);
        }

        checkdomain("whois.crsnic.net", "www.example.com");

    The code works on my localhost (Apache, PHP, MySQL, OS - Win XP), but when I uploaded it to my host (Linux) it failed, and I always see the error/message below:

        Warning: fsockopen() [function.fsockopen]: unable to connect to whois.crsnic.net:43 (Network is unreachable) in /home/hamid0011/public_html/whois/whois.php on line 37

    What should I do? Is this my host's problem, the whois server (but it works on localhost), or my code? TNX
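
    A "Network is unreachable" warning on shared hosting usually means the host blocks outbound connections on port 43 rather than anything being wrong in the PHP, so a support ticket asking for outbound whois (port 43) to be allowed is often the actual fix. In the meantime, a minimal sketch of the same call with explicit error handling - the timeout value and the message wording are just placeholders:

        $connection = @fsockopen($server, 43, $errno, $errstr, 10); // 10-second timeout
        if (!$connection) {
            // $errno / $errstr describe why the connection failed
            die("Could not connect to " . $server . ": " . $errstr . " (" . $errno . ")");
        }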

    Read the article

  • Best tutorial ever! Is there one just like it for XHTML and CSS...?

    - by Joshua C
    I have been learning Ruby on Rails using www.railstutorial.org, and I LOVE it! My only problem? Well, I can build the applications just fine, but my knowledge of designing the skin (CSS) of the application is limited. Is there a really good XHTML and CSS tutorial which is very similar to the Ruby on Rails Tutorial by Michael Hartl? If not, perhaps you can point me towards some of the best? Thanks, Joshua Collins P.S. If only Michael would create a CSS and XHTML tutorial himself... sigh

    Read the article

  • Author Bio on all pages - Is it duplicate content?

    - by Rana Prathap
    On a website with user-generated content, I provide an author bio under every article on the site. The author bio will be the same under every article the same author wrote. For some authors, the author bio is no longer than a couple of sentences, but for some descriptive writers, it is a good 100 words. These 100 words get repeated on almost 15 pages, some of them without substantial original content (such as haikus). Will this lead to duplicate content?

    Read the article

  • Tracking multiple subdomains and domains going to the same site, separately in Google Analytics

    - by miles
    I have a new site that has multiple top-level domains and subdomains all going to it: www.domain.com, campaign.domain.com, chicago.domain.com, domain2.com - all go to the same site/site directory. Right now I have one Google Analytics account profile set up for it, but I want to be able to track the traffic that is hitting those different URLs separately. The domains are being routed on the server-side (not .htaccess). How can I do this in Google Analytics? Do I need to create filters? Or create different profiles for each domain?
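
    With the classic tracker, one common pattern is to keep a single profile but make the hostname visible in reports - for example by setting the domain name in the snippet and adding a profile filter that prefixes the hostname to the request URI. The sketch below covers the subdomain case (campaign.domain.com, chicago.domain.com); the separate domain2.com would additionally need cross-domain linking, and the account ID is a placeholder:

        _gaq.push(['_setAccount', 'UA-XXXXXXX-1']);   // placeholder account ID
        _gaq.push(['_setDomainName', '.domain.com']);  // one cookie shared across all subdomains
        _gaq.push(['_trackPageview']);

    Duplicate profiles with hostname-based include filters are the other common route when each domain's traffic has to be reported completely separately.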

    Read the article

  • Submitting a sitemap to take care of inherited Google crawler errors

    - by leeand00
    I have an awful lot of Google Crawler errors (1000 or so) after I inherited a site that the previous owner migrated without moving much of their content. Would generating a map of the current site and submitting it to Google help fix this? Is there any quicker, automated way to eliminate errors other than clicking each and every site error? Note: I have already tried automating this on my own.
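
    Submitting a sitemap helps Google rediscover the URLs that do exist, but it will not make the inherited errors disappear by itself; those generally age out of the report once the dead URLs consistently return 404 or 410. If you do submit one, a minimal sitemap.xml sketch looks like this (example.com and the path are placeholders):

        <?xml version="1.0" encoding="UTF-8"?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <url>
            <loc>http://www.example.com/some-existing-page/</loc>
          </url>
        </urlset>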

    Read the article

  • How to solve "Login Only" rejection?

    - by Renan
    Recently, a site of mine was rejected due to "Login Only": "Login Only: During our review of your website, we found that the majority of pages on your site are behind a login, or there is restricted access. Please note that we will not approve applications for login-protected pages, as we are not able to review their content for acceptance into the program." Although the site does require a login to submit content, it doesn't require any login to view any page. How do I tell Googlebot (or whatever AdSense uses to crawl pages) that all the content is publicly available and that registration is only needed to post?

    Read the article

  • GWT: reporting crawl errors for non-existent links

    - by pixeline
    Google Webmaster Tools is reporting crawl errors for links that never existed, and if I check the "Linked from" tab for a given error link, it shows another URL that never existed. They all mention joomla/, which is not the CMS used on this domain (it's WordPress, FYI). Example:

        http://example.com/joomla/index.php/component/user/register
        Linked from: http://example.com/joomla/component/user/login?return=L2######

    What is going on?

    UPDATE 1: I tried something: I provided one of the faulty URLs to the "Fetch as Google" functionality. Instead of returning a 404, it returns a 301 to another Joomla page.

        HTTP/1.1 301 Moved Permanently
        Server: Apache/2.4.3
        X-Powered-By: PHP/5.4.4-10
        X-Pingback: http://example.com/xmlrpc.php
        Expires: Wed, 11 Jan 1984 05:00:00 GMT
        Cache-Control: no-cache, must-revalidate, max-age=0
        Pragma: no-cache
        Set-Cookie: PHPSESSID=1fgr5v2oip39miibuptd51s8h0; path=/
        Set-Cookie: woocommerce_items_in_cart=0; expires=Sat, 12-Jan-2013 11:44:01 GMT; path=/
        Location: http://example.com/joomla/component/user/register
        Content-Type: text/html; charset=iso-8859-1
        Content-Length: 387
        Date: Sat, 12 Jan 2013 12:44:01 GMT
        Via: 1.1 varnish
        Connection: keep-alive
        Accept-Ranges: bytes
        Age: 0

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>301 Moved Permanently</title>
        </head><body>
        <h1>Moved Permanently</h1>
        <p>The document has moved <a href="http://example.com/joomla/component/user/register">here</a>.</p>
        <p>Additionally, a 301 Moved Permanently error was encountered while trying to use an ErrorDocument to handle the request.</p>
        </body></html>
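
    Since that 301 is what keeps Google treating the phantom /joomla/ URLs as live, one hedged option (assuming WordPress's own rewrite rules live in the same .htaccess) is to short-circuit anything under /joomla/ with a 410 Gone before the WordPress front-controller rule runs, so the errors can age out:

        RewriteEngine On
        # return 410 Gone for the non-existent Joomla paths reported by GWT
        RewriteRule ^joomla/ - [G,L]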

    Read the article

  • SEO consequences for merging country sites in a .com

    - by Pekka
    I am in the process of refactoring a number of rental portals I've built for a company with locations in Austria, Germany, Switzerland, and the Netherlands. Instead of the current setting of each country site running under its own domain name: www.companyname.de www.companyname.ch www.companyname.at I would love to merge them all in this way: www.companyname.com/de www.companyname.com/ch www.companyname.com/at with the country TLDs doing a 301 redirect to the respective .com address. However, I have been repeatedly told not to do this due to likely problems with SEO - the business is very SEO dependent, and being a rental chain, needs to be strong in local results. So the question is: Is there an unavoidable hit in Search Engine Optimization when redirecting to a central .com domain? What measures can be taken to soften the blow? What comes to my mind is explicitly specifying a lang attribute in the html tag. Are there any other ways to specifically point out geographical location for sub-directories?
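
    On the last point, hreflang annotations are the usual way to mark up which directory targets which country once everything lives under the .com, and each subdirectory can also be geotargeted separately in Google Webmaster Tools (something the ccTLDs previously did implicitly). A sketch of what the alternate links might look like in each page's head - the exact language-region codes are assumptions to adapt:

        <link rel="alternate" hreflang="de-DE" href="http://www.companyname.com/de/" />
        <link rel="alternate" hreflang="de-AT" href="http://www.companyname.com/at/" />
        <link rel="alternate" hreflang="de-CH" href="http://www.companyname.com/ch/" />
        <link rel="alternate" hreflang="nl-NL" href="http://www.companyname.com/nl/" />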

    Read the article

  • Is there a good [and modern] reason to not have static HTML pages with AJAX content, rather than generate pages?

    - by user1725
    Assumptions: We don't care about IE6 and NoScript users. Let's pretend we have the following design concept: all your pages are HTML/CSS that create the aesthetics, layout, colours, and general design-related things. Let's pretend this basic code below is that:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
        <html>
        <head>
            <link href="/example.css" rel="stylesheet" type="text/css"/>
            <script src="example.js" type="text/javascript"></script>
        </head>
        <body>
            <div class="left"> </div>
            <div class="mid"> </div>
            <div class="right"> </div>
        </body>
        </html>

    Which, with the right CSS, should in theory produce three vertical columns on the web page. Now, here's the root of the question: what are the serious advantages and/or disadvantages of loading the content of these columns (let's assume they are all indeed dynamic content, not static) via AJAX requests, versus having the content pre-rendered by a scripting language? So for instance, in the AJAX example we would have (let's assume jQuery is used on load):

        // Multiple HTTP requests
        $("body > div.left").load("./script.php?content=news");
        $("body > div.right").load("./script.php?content=blogs");
        $("body > div.mid").load("./script.php?content=links");

    or:

        // Single HTTP request
        $.ajax({
            url: './script.php?content=news|blogs|links',
            type: 'GET',
            dataType: 'json',
            success: function (data) {
                $("body > div.left").html(data.news);
                $("body > div.right").html(data.blogs);
                $("body > div.mid").html(data.links);
            }
        });

    versus doing this:

        <body>
            <div class="left"> <?php echo function_returning_news(); ?> </div>
            <div class="mid"> <?php echo function_returning_blogs(); ?> </div>
            <div class="right"> <?php echo function_returning_links(); ?> </div>
        </body>

    I'm personally thinking right now that static HTML pages are the better method. My reasoning is:

        I've separated my data, logic, and presentation (i.e. "MVC") code, and I can make changes to one without the others.
        Browser caches mean I'm mostly getting server load for the content, not the presentation wrapped around it.
        I could turn my "script.php" into a more robust API for the website.

    But I'm not certain or clear that these are legitimately good reasons, and I'm not confidently aware of other issues that could happen, so I would like to know the pros and cons, so to speak.

    Read the article

  • Customizing Google Sites look and feel

    - by David Parunakian
    I find the site layout and theming capabilities in Google Sites (found in the Manage Site screen) very limited; for instance, I do not seem to be able to place the horizontal navigation buttons directly to the right of the logo and to customize their style, as well as to use the standard trick of making a horizontally stretchable background image of a box with rounded corners by splitting it into three parts and replicating the middle one, etc. Am I missing something? Are there any advanced settings available? Thanks.

    Read the article

  • Why is nobody talking about an alternative to HTML & CSS? [closed]

    - by Nic
    HTML is such an old and cumbersome language, which was intended just to mark up text. Today it's very rare to see a static HTML website, or a site with only text or a very simple layout. As a web developer I find HTML & CSS inconvenient to use, very repetitive and cumbersome, and I think that for a lot of websites it could be simplified a lot. Tim Berners-Lee (W3) wrote a document named "The World Wide Web: Past, Present and Future" in August 1996: "... though HTML will be considered part of the established infrastructure (rather than an exciting new toy), there will always be new formats coming along, and it may be that a more powerful and perhaps a more consistent set of formats will eventually displace HTML." So, more than 15 years later, HTML is still here and it's here to stay. Why? Why does searching for XML alternatives bring up so many relevant results, while searching for HTML alternatives brings up almost none? Answers like "it's too hard to change a standard" don't answer the question, since a lot of new standards have emerged since the beginning of the web. I'm also not looking for answers that suggest using tools to simplify the process, or formats that depend in any way on HTML or CSS; technologies that currently require a plugin and aren't even trying to become open standards (like Flash) aren't an answer either. BTW, here are two articles written more than two years ago as food for thought; they might help with writing better answers: "HTML, CSS, and Web Development Practices: Past, Present, and Future" by Jens O. Meiert, describing a very related problem, and "A Brief History of HTML" by Scott Reynen. Here is a quote from the end of the latter: "So now you can answer questions about HTML5 without even looking at the draft, which is handy, because the draft is 400+ pages long. Why is there a new tag in HTML5? Because some browser vendor (maybe the one that also owns a large video site) wanted it. Why are there so many scriptable interface elements in HTML5? Because some browser vendor (maybe the one selling phones without Flash support) wants them. Why is there no support for RDFa in HTML5? Apparently no browser vendor wanted it." Is that the future?

    Read the article

  • Apache redirecting: reason unknown

    - by Sinan
    I have a simple PHP script. The script is not important; it just prints out $_SERVER. When I request a URL like www.server.com/?ref=bar everything is fine. However, if the request contains something like www.server.com/?ref=http://www.test.com (?ref=http%3A%2F%2Fwww.test.com), the server redirects to 403.shtml. There is no redirect for http://x, but it redirects for http://x.y. As far as I can understand, somehow the server doesn't like a full URL of the form "http://x.y" in the query string: it always redirects to 403.shtml when the query string contains a valid URL. My .htaccess file is the same on both my server and my local test server, and the local test server behaves as expected (no redirects), so I don't think it is related to .htaccess. I'm on shared hosting at Hostgator. Can anyone help?

    Edit: Here's the .htaccess file:

        Options +FollowSymLinks
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php?$1 [L,QSA]

    When there is an http://xx.x in the query string it redirects to 403 even if there is a physical file. However, if I remove the .htaccess the redirect to 403 also disappears. But I need the above .htaccess file. Is there a way to get around this?
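
    For what it's worth, behaviour like this on shared hosts is commonly caused by mod_security filtering query strings that contain full URLs, rather than by mod_rewrite itself. If that is the case here, and only if the host permits overriding it in .htaccess (many do not, so confirming with Hostgator first is the safer route), a hedged sketch of the classic mod_security 1.x override would be:

        <IfModule mod_security.c>
            SecFilterEngine Off
            SecFilterScanPOST Off
        </IfModule>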

    Read the article
