Search Results

Search found 5380 results on 216 pages for 'webmasters'.


  • Active Directory auto login to website for domain users

    - by Darkcat Studios
    I am putting together an intranet for a company. I have set up authentication into the intranet from a login box linked to AD via LDAP. However, the client wants (if possible) users to be authenticated into the intranet automatically if they are logged into the domain. AD and IIS 7.5 are on separate servers (on the same network). I believe I need to use Windows Authentication to do this - but will that work, given that the web server is not part of the domain? Do I need to tell IIS where the AD server is? The next part could be more complex: once the user has authenticated, I need to pull details about the user from AD, I guess via LDAP - but I will need to know the user's username to do that, won't I? As the user hasn't had to type it in, how do I get it? The intranet site is in ASP.NET 4 (VB).
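
    For reference, the moving parts would look roughly like this - a sketch only, since whether Windows Authentication works from a web server outside the domain is exactly the open question here. In web.config:

      <system.web>
        <authentication mode="Windows" />
        <authorization>
          <deny users="?" /> <!-- refuse anonymous visitors -->
        </authorization>
      </system.web>

    Once a user is authenticated this way, the negotiated account name is available server-side without the user typing anything:

      ' VB.NET: returns e.g. "MYDOMAIN\jsmith"
      Dim username As String = HttpContext.Current.User.Identity.Name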

    Read the article

  • Keeping rackspace vserver alive

    - by mit
    It appears to me that Rackspace somehow freezes cloud VMs after some idle time. This means the first request to a PHP page takes much longer to respond than subsequent requests. In some cases that's fine; in others it's not acceptable. I am currently querying the machine with wget from a different host to keep it "alive", but I wonder what frequency is necessary. Does anyone know the time period after which they send a VM to "sleep"? I guess it would be some minutes. EDIT: There is absolutely no caching involved on the PHP site. It recently moved from another vhost, and there was never such latency on the first request there.
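
    For what it's worth, the keep-alive polling could be put on cron - a sketch with a guessed 5-minute interval and a hypothetical URL:

      # crontab entry: quietly fetch a cheap PHP page every 5 minutes, discard the output
      */5 * * * * wget -q -O /dev/null http://www.example.com/keepalive.php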

    Read the article

  • Opinions on .gr (Greek) registrars?

    - by Marc Bollinger
    None of the previous questions tackle the smaller country registries beyond .co.uk, .it, et al., or else I'd have found an answer myself. I'm just looking for information for a vanity domain, so obviously I'm alright without an answer, but it's an unasked (or at least unanswered) question, and I'm not exactly in a hurry to give my credit card information across country lines, sight unseen.

    Read the article

  • Sidebar for Navigation in Website

    - by Johnson Smith
    I want to have a sidebar in my website with navigation in it. I will use scripts like phpBB etc., but I want the sidebar to be displayed on every page. So I am thinking about making a sidebar in HTML and then using a frame tag to display the other pages/scripts. But as frames are getting obsolete, is there any other method to display a sidebar on every page without using frames and without adding HTML code to every page?
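
    The usual frame-free answer is a server-side include, so the sidebar lives in one file but is rendered into every page. A minimal PHP sketch (sidebar.php is a hypothetical file holding the navigation markup):

      <?php include 'sidebar.php'; // shared navigation, maintained in one place ?>
      <div id="content">
        <!-- page-specific content here -->
      </div>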

    Read the article

  • What did I do wrong when buying my domain through Google Apps?

    - by Michael
    Several times in the past I've bought a domain through Google Apps, and every year Google automatically renews the domain for $10. Somehow when I did this today, I ended up additionally being signed up for an annual Google Apps for Business plan at $12/license/yr. I don't need this; I'm only one user. I'd like to cancel this $12/yr Google Apps plan without cancelling the automatic $10/yr domain renewal. However, there is no "cancel" or "downgrade" link, apparently (according to the help docs) because I have an annual pre-paid account. If I choose not to auto-renew my annual account, it also declines auto-renewal of my domain. How can I make this domain just like my others -- auto-renewed at $10/yr, with a free Google Apps account in front of it?

    Read the article

  • Google suddenly only indexes https and not http

    - by spender
    So all of a sudden, searches for our site "radiotuna" return an HTTPS link as the result: https://www.google.com/?q=radiotuna#hl=en&safe=off&output=search&sclient=psy-ab&q=radiotuna&oq=radiotuna&gs_l=hp.12...0.0.0.3499.0.0.0.0.0.0.0.0..0.0.les%3B..0.0...1c.LnOvBvgDOBk&pbx=1&bav=on.2,or.r_gc.r_pw.r_qf.&fp=177c7ff705652ec3&biw=1366&bih=602 We only use HTTPS for the download of two specific files (these URLs are resources used for the auto-update functionality of an app we distribute). All other parts of the site should be served over HTTP. We'd rather not see any other traffic over HTTPS, nor have any of our site links appear in search engines as HTTPS. I'd like to address this issue. It seems the following solutions are available: hand out an HTTPS-specific robots.txt, as such:

      User-agent: *
      Disallow: /

    and/or, at app level, 301 permanent redirect all requests (except the two above) to HTTP if they come in as HTTPS. My concern with the robots method is that if (for some reason) Google decided not to index HTTP pages, disallowing the HTTPS pages might leave Google nothing to index at all, with disastrous consequences for our ranking. This means I'm inclined to go with a 301 redirect. Any thoughts?
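
    For the 301 option, an Apache mod_rewrite sketch (assuming Apache; the /downloads/ prefix standing in for the two update files is a hypothetical placeholder):

      RewriteEngine On
      RewriteCond %{HTTPS} on
      RewriteCond %{REQUEST_URI} !^/downloads/
      RewriteRule ^ http://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]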

    Read the article

  • SEO: Make hashtag links look static

    - by elias94xx
    So I have a website which displays all my content vertically (like modern websites often do these days), so I can't create static links to each section. I'm currently handling the scrolling with JavaScript. My navigation looks like this:

      <ul>
        <li><a href="#services">Services</a></li>
        <li><a href="#references">References</a></li>
        <li><a href="#blog">Blog</a></li>
        <li><a href="#contact">Contact</a></li>
      </ul>

    I also created 301 redirects with .htaccess, e.g. /services, which leads to /#services. If I were to use them in my navigation, I'd have to trigger the scrolling with the onpopstate event. That's not really a problem, but would search engines accept that kind of setup? I also created a sitemap and submitted it to Google, but the indexing is still pending.
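
    For illustration, the onpopstate setup described would look something like this (a sketch; scrollToSection() stands in for the existing scroll handler, and the ul is assumed to get id="nav"):

      // intercept nav clicks: scroll, then rewrite the address bar to the static URL
      document.querySelectorAll('#nav a').forEach(function (link) {
        link.addEventListener('click', function (e) {
          e.preventDefault();
          var section = this.getAttribute('href').slice(1); // "services", "blog", ...
          scrollToSection(section);
          history.pushState({ section: section }, '', '/' + section);
        });
      });
      // restore the right section on back/forward navigation
      window.onpopstate = function (e) {
        if (e.state) scrollToSection(e.state.section);
      };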

    Read the article

  • PHP code works on Chrome, but not on Firefox or IE (send email via HTML form) [on hold]

    - by Cachirro
    My brother has this form:

      <form id="lista" action="lista2.php" method="post">
        <input name="cf_name" type="text" size="50" hidden="yes" class="obscure">
        <input name="cf_email" type="text" size="50" hidden="yes" class="obscure">
        <textarea name="cf_message" cols="45" rows="10" hidden="yes" class="obscure"> </textarea>
        <input type="image" name="submit" value="Enviar Lista por Email" src="imagens/lista_email.png" width="40" height="40" onclick="this.form.elements['cf_message'].value = lista_mail; this.form.elements['cf_name'].value = prompt('Escreva o seu nome:', ''); this.form.elements['cf_email'].value = prompt('Escreva o seu email:', '');">
        <input name="submit2" type="submit" value="Enviar" hidden="yes" class="obscure">
      </form>

    That calls this PHP file:

      <?php
      if (isset($_POST['submit'])) {
          // SMTP authentication details
          $smtpinfo['host'] = 'localhost';
          $smtpinfo['port'] = '25';
          $smtpinfo['auth'] = true;
          $smtpinfo['username'] = 'xxx';
          $smtpinfo['password'] = 'xxx';
          // Data received from the form
          $nome = $_POST['cf_name'];
          $email = $_POST['cf_email'];
          $mensagem = $_POST['cf_message'];
          // PEAR include - make sure PEAR is enabled on your hosting
          require_once "Mail.php";
          // Message body
          $body = "Nome: " . $nome;
          $body .= "\n\n";
          $body .= nl2br($mensagem);
          $headers = array('From' => $email, 'To' => $smtpinfo["username"], 'Subject' => 'Encomenda Website');
          $mail_object = Mail::factory('smtp', $smtpinfo);
          $mail = $mail_object->send($smtpinfo["username"], $headers, $body);
          if (PEAR::isError($mail)) {
              echo ("<p>" . $mail->getMessage() . "</p>");
          } else {
              echo ('<b><font color="FFFF00">Mensagem enviada com sucesso.<br><br></b>Seu email: ' . $email . '<br><br></font>');
          }
      }
      ?>

    This basically sends an email with some selected products, plus a name and email address. The problem is that it works perfectly in Chrome, but not in FF or IE. When the submit image is pressed, the URL changes to the PHP file, but it displays a blank page. After turning display errors on:

      ini_set('display_errors', 1);
      ini_set('display_startup_errors', 1);
      error_reporting(-1);

    FF/IE still display a blank page and the email isn't sent; Chrome sends the email and displays this:

      Strict Standards: Non-static method Mail::factory() should not be called statically in /var/www/vhosts/[site url]/httpdocs/lista2.php on line 33
      Strict Standards: Non-static method PEAR::isError() should not be called statically, assuming $this from incompatible context in /usr/share/php/Mail/smtp.php

    I don't know if that helps. So, what is causing the email to be sent in Chrome and not in FF or IE? Thank you.

    Read the article

  • Access denied 403 errors after migrating my site

    - by AgA
    I've recently migrated my Joomla site from one shared host to another with Hostgator. GWT notified me about many 403 access-denied pages. I've checked with Firebug too: even though the browser displays the full page correctly, the HTTP status returned is 403. I've checked the home page, but it correctly returns a 200 response. The same is shown by Fetch as Google in GWT (pasted at the bottom). The site is 3 years old and I regularly do such migrations. I copied the files and database "AS IS". I've even cleared all the caches, but no luck. There is only one change: previously the site was the primary domain, but now it's an add-on one. What could be the issue? This is how Googlebot fetched the page:

      Fetch as Google
      URL: http://MYSITE.COM/-----------------REMOVED.html
      Date: Thursday, June 20, 2013 at 10:32:14 PM PDT
      Googlebot Type: Web
      Download Time (in milliseconds): 3899

      HTTP/1.1 403 Forbidden
      Date: Fri, 21 Jun 2013 05:32:15 GMT
      Server: Apache
      P3P: CP="NOI ADM DEV PSAi COM NAV OUR OTRo STP IND DEM"
      Expires: Mon, 1 Jan 2001 00:00:00 GMT
      Cache-Control: post-check=0, pre-check=0
      Pragma: no-cache
      Set-Cookie: 0e4f6b53991c80cf39d57a6db58bb58d=ee2d880e8db0f1fc03c5612ea5a77004; path=/
      Last-Modified: Fri, 21 Jun 2013 05:32:19 GMT
      Keep-Alive: timeout=5, max=75
      Connection: Keep-Alive
      Transfer-Encoding: chunked
      Content-Type: text/html; charset=utf-8

      <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
      <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en-gb" lang="en-gb">
      <head>
      <base href="http://www.mysite.com/-----------------rajiv-yuva-shakthi-programme-finance-planning.html" />
      <meta http-equiv="content-type" content="text/html; charset=utf-8" />
      <meta name="robots" content="index, follow" />
      <meta name="keywords" content="" />
      <<<<<<TRIMMED>>>>>>>>>>>>>>
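
    One common cause of blanket 403s after copying files between hosts is ownership/permission drift, so a first check might be (a sketch - paths and values vary by host):

      # typical shared-hosting permissions: 755 for directories, 644 for files
      find ~/public_html -type d -exec chmod 755 {} \;
      find ~/public_html -type f -exec chmod 644 {} \;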

    Read the article

  • Usefulness of the Backlinks shown in Webmaster Tools

    - by Ewan Heming
    Is the list of links for a site shown in Google Webmaster Tools a complete list, or just a sample? I've noticed that the links shown there appear to be all the ones I didn't think would have any real value - either because they were nofollow or from irrelevant sites. The few I did think would be of some use have never shown up, and there are also links that are sometimes there and sometimes not (such as my LinkedIn profile). Does this mean that the missing links don't (or no longer) carry any value? It almost appears that the list is there for Google either to inform you about problems (there was a useful list there when someone tried to spam my site) or to misinform you about which link-building strategies work (to keep people guessing about what works).

    Read the article

  • High quality/performance shared hosting (in northern Europe)

    - by Bente
    I work as a web developer at almost all levels. However, my typical customer is 1-5 guys running some sort of consulting business. They have (or want) a web page with some kind of CMS so they can perform most (or all) editing themselves. I normally opt for Concrete5 as my default CMS because it's the most user-friendly (and free) CMS I have found. My good recurring customers I host on my own server as a service, but I need a good host for the customers where I want to deliver a product and not be responsible for whatever may happen in the future. However, I still struggle with hosting! Experience shows that the typical ~$1 shared hosting is far too slow to run Concrete5 smoothly, and a VPS is out of the question because I don't want to maintain it. So, where can I find a fast (from northern Europe), reliable shared host where I can put a site and not have to worry about the server going down or being unmaintained? I expect this to cost around $10-$20, but I'm open to all kinds of suggestions because different customers have different budgets.

    Read the article

  • Redirect error in Google Webmaster Tools report

    - by Aurelio De Rosa
    I built a CMS and used it to create the website http://www.tkdmontecatini.com . After some days, Google Webmaster Tools started giving me several "Redirect error" reports on pages like the following:

      http://www.tkdmontecatini.com/it/photogallery
      http://www.tkdmontecatini.com/it/pagina/9/Informazioni/Corsi/Chi-Siamo
      http://www.tkdmontecatini.com/it/pagina/2/Informazioni/Eventi/Eventi

    The funny things are: if I access those links from a browser, everything is fine and there are no redirect loops or similar issues; and if I use the "Fetch as Googlebot" function, I get a "Success" result. Question: any idea why this happens and how I can fix it?
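
    One way to watch the response exactly as a crawler sees it is curl with Googlebot's user agent (a diagnostic sketch using one of the affected URLs):

      curl -sIL -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" \
        http://www.tkdmontecatini.com/it/photogallery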

    Read the article

  • After changing web host, I get a 'file does not exist' error

    - by Jordan
    I run a WordPress blog and have recently changed web hosts. When changing hosts, I copied all files and exported/imported the database etc. as explained by lots of tutorials easily found on Google. The blog home page works fine. What goes wrong: when I click on any link from the home page, the browser gets stuck in a redirect loop. Looking at the error log, I see: File does not exist: /usr/local/apache/htdocs/index.php. The directory /usr doesn't even exist for my website - so perhaps this is looking for a file that was present with my old web host and is no longer present with my new one? What is going on, and how might I resolve it?
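
    One likely culprit: hidden files such as .htaccess are easy to miss when copying, and WordPress permalinks break in exactly this way without its rewrite rules. The stock WordPress block, for comparison with what is actually on the new host:

      # BEGIN WordPress
      <IfModule mod_rewrite.c>
      RewriteEngine On
      RewriteBase /
      RewriteRule ^index\.php$ - [L]
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule . /index.php [L]
      </IfModule>
      # END WordPress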

    Read the article

  • How to structure my AdWords campaign for testing and different groups of keywords?

    - by Romain Dorange
    I am starting an AdWords campaign and will measure conversion rates using the AdWords conversion tracking pixel. A conversion might be an account creation or a concrete sale. As it will be a test campaign to gain insights into CTR, CR, etc. for the future, I am likely to try several configurations: two different ads with different landing URLs and messages (one focused on the product, the other containing a discount embedded in the URL), and 4 different groups or themes of keywords. I guess I have to:

      build 4 ad groups based on the keywords
      write the 2 ads with the different messages
      assign the two ads to each ad group
      follow the campaign closely in the Ads tab, where I can see the effectiveness of each ad per ad group (for a total of 8 lines of reporting)

    Also, what are the key performance indicators I can get from an AdWords campaign to measure overall effectiveness?

      return on investment from concrete sales (tracking pixel with e-commerce tag on the confirmation page)
      return on investment from lead acquisition (tracking pixel on account creation)
      traffic increase from the campaign

    Read the article

  • Does the EU cookie law apply to an EU site that is hosted outside of the EU?

    - by mickburkejnr
    I have been reading up on the EU cookie law, and have also had in-depth conversations with my girlfriend, who is a solicitor/lawyer, and with colleagues while building websites. While we are now working towards implementing a way to abide by the law, I have thought of something which no one really knows the answer to, and it has caused a few arguments. It's my understanding that any website in the EU must abide by these cookie laws, which is understandable. However, say I were to have a .co.uk or .eu domain name pointing to a website hosted in America: do I still need to abide by the EU laws even though the website is hosted outside the EU? One person I asked said that because the domain name is .co.uk or .eu (a European TLD), the website is still accountable under EU law. Another person said that because the actual website is hosted outside the EU, it doesn't have to bother with this law.

    Read the article

  • Separate urls for a set of pages sharing 80% duplicate content

    - by user131003
    Issue: Currently my site has one particular page with country-specific data, so I have URLs like:

      mysite.com/sale-united-states
      mysite.com/sale-united-kingdom
      mysite.com/sale-sweden

    etc. All these pages have 80-90% common content and 10-20% country-specific content. Currently all these pages canonically point to mysite.com/sale-united-states. The problem is that when someone searches for "sale Sweden", Google shows the mysite.com/sale-united-states page, which does not feel right, as it shows the US page instead of the Sweden one. Now I'm thinking of not using the canonical URL, so that the country-specific URLs appear in Google search. But I'm not sure how 80% duplicate content is going to affect SEO. What is the recommended approach for this situation? A friend of mine suggested a "separate subdomain per country" approach, but that seems overkill for one page.
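
    For reference, the setup described amounts to every country page carrying the same tag:

      <link rel="canonical" href="http://mysite.com/sale-united-states" />

    Removing it (or pointing each page at its own URL) is what would let the country-specific URLs appear in search results.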

    Read the article

  • PHP accessible shared content between two websites on the same VPS on different domains/IPs

    - by Lee Fentress
    I have two ecommerce websites, selling music digital downloads, on the same VPS, currently using cPanel/WHM (but thinking of switching to Virtualmin). They have separate domains and IPs, of course. They both sell from the same set of music files, so I have duplicate copies in each website directory, which takes up a lot of disk space. How might I go about sharing the same set of music files across both sites, allowing PHP access, so that it does not break my shopping cart's functionality of serving customers the downloads after they have paid for them? I thought of maybe using symlinks or something, but I don't know if that's possible, or if it would have to somehow circumvent built-in security features of the server. I'm new to VPS management.
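
    A symlink would be the usual first attempt - a sketch with hypothetical paths (note that PHP restrictions such as open_basedir may need to allow the shared directory for both sites):

      # keep one real copy under site1, link it into site2's docroot
      ln -s /home/site1/public_html/music /home/site2/public_html/music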

    Read the article

  • How to point one sub-domain to another sub-domain and they can be used interchangeably

    - by Talon
    I'm trying to make secure.domain2.com load its content from secure.domain1.com, so if somebody goes to secure.domain2.com it will load the content of secure.domain1.com. Note that I don't want a redirect: if someone goes to secure.domain2.com, the address bar should still say secure.domain2.com even though it's loading content from secure.domain1.com. I've read that it's possible with a CNAME or something like that. What is the best way to do it?
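
    The DNS half would look like this in domain2.com's zone file (a sketch; note that a CNAME only aliases the hostname - the server behind secure.domain1.com must also be configured to answer for secure.domain2.com, or it will serve its default site):

      ; in the domain2.com zone
      secure  IN  CNAME  secure.domain1.com.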

    Read the article

  • How can I get cross-browser consistent behavior for TR heights within a table with a set height? [migrated]

    - by Dan
    I have an arbitrary number of tables with an arbitrary number of rows in each, and all tables are the same height. My initial approach was to just set the overall height of the table and hope the rows were smart enough to distribute themselves appropriately. That's not the case. I have 4 different behaviors going on in 4 browsers, but I need them all to render in at least a similar way. Safari & Chrome (WebKit): all rows are equal height, creating scroll bars as needed and fitting within the table height. Firefox: all rows are the height necessary to fit their content, with the remaining rows overflowing out of the table. Additionally, if the content of the rows does not take up all of the height, only the part of the table with content in it takes the background (though it seems, through use of Firebug, that the actual table [and TR] extend to the bottom of the proper table height). IE: all rows are the height necessary to fit their content, with the remaining rows overflowing out of the table. Obviously this covers only one version of each browser, and additional variation would likely appear with more being tested. Ideally, the browser would render TRs with less content smaller than those with more content, while still scrolling within the variable-height TRs when the overall height of the table is not enough. I could potentially see a solution achieving that with JS, but can it be done with CSS? Or, if not, can the behavior that WebKit displays be made to work across browsers? Thanks! PS: Example can be found here.
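
    A CSS-only sketch of forcing the WebKit-style behavior (equal rows that scroll internally) is to fix the height of a wrapper inside each cell rather than the table itself - assuming the markup can be changed, with illustrative class names:

      table.fixed { table-layout: fixed; width: 100%; }
      table.fixed td div.cell {
        height: 100px;    /* tableHeight / rowCount, chosen per table */
        overflow-y: auto; /* scroll overflow instead of growing the row */
      }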

    Read the article

  • Should I prevent search engines indexing tag/category pages?

    - by Macha
    On my site, I currently have no special rules for search engines. It is a blog, statically generated using a Python program. When I search for some of my articles on Google, a tag or category page is usually included in the results. Sometimes it even ranks ahead of the article itself. Obviously, as these links aren't always going to have the article on them, these aren't the results I want people to click on. So, I'm thinking of setting noindex on these pages. Is there any possible downside to doing so? And is this possible via robots.txt, or do I have to add it to all the relevant templates? All I can find for robots.txt are ways to stop the search engine crawling those pages, which isn't what I want - while I don't want them indexed, crawling them is still the only surefire way to find all my blog posts.
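
    For the template route, the usual tag (added only to the tag/category templates) is:

      <meta name="robots" content="noindex, follow" />

    The "follow" part keeps crawlers walking through these pages to the articles, which matches the goal of leaving them crawlable but unindexed.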

    Read the article

  • Website signaled as containing malware

    - by Bakaburg
    I've got a nasty problem with one of our websites. Google and other agencies have flagged it as containing malware. We weren't able to work out how to deal with the problem. Could anyone point us in the right direction? UPDATE: I used Google Webmaster Tools to request a review of the suspicious website, and now it says it's OK - even though I didn't change anything! How could that be? A false alarm?

    Read the article

  • My First robots.txt

    - by Whitechapel
    I'm creating my first robots.txt and wanted a second opinion on it. Basically, I have an FTP setup on my board for some special users to transfer files between each other, and I do NOT want that included in searches by bots. I also want to point to my sitemap, which is auto-generated by a PHP page (it links to xmlsitemap.php because that page generates the sitemap when called). My goal is to allow any search bot to crawl the forums to grab metadata. So here is what I have - what else should I include, and does anything need fixing?

      User-agent: *
      Disallow: /admin/
      Disallow: /ali/
      Disallow: /benny/
      Disallow: /cgi-bin/
      Disallow: /ders/
      Disallow: /empire/
      Disallow: /komodo_117/
      Disallow: /xanxan/
      Disallow: /zeroordie/
      Disallow: /tmp/
      Sitemap: http://www.vivalanation.com/forums/xmlsitemap.php

    Edit: I'm not sure how to handle all the users' folders under /public_html/, since the robots.txt will be going in /public_html.

    Read the article

  • How can I avoid a 302 for Fetch as Bot?

    - by CookieMonster
    I originally posted this on Stack Overflow, but I believe this is a better place to ask. My web application is very similar to notepad.cc, which redirects to a randomly generated URL upon access, e.g. http://myapp.com/roTr94h4Gd. (Please note that notepad.cc is not my site.) Probably because of this redirect feature, when I do "Fetch as Google" or "Fetch as Bingbot", I get a 302 and no HTML content - not even an <html></html> tag:

      HTTP/1.1 302 Moved Temporarily
      Server: nginx/1.4.1
      Date: Tue, 01 Oct 2013 04:37:37 GMT
      Content-Type: text/html
      Transfer-Encoding: chunked
      Connection: keep-alive
      X-Powered-By: PHP/5.4.17-1~dotdeb.1
      Set-Cookie: PHPSESSID=vp99q5e5t5810e3bnnnvi6sfo2; expires=Thu, 03-Oct-2013 04:37:37 GMT; path=/
      Expires: Thu, 19 Nov 1981 08:52:00 GMT
      Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
      Pragma: no-cache
      Location: /roTr94h4Gd

    How should I avoid the 302 in this case? I suppose I could modify my site to prevent the redirect, but generating a random URL on each access is a necessary feature of my web app. I added a <meta name="fragment" content="!"> tag to my index page and set it to return a static snapshot of the page when the flag is set, but this still returns a 302. I also added a header to return 200 before redirecting, but this had no effect either. Could someone suggest a good way to solve this problem?
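
    With the fragment meta tag approach, the snapshot check has to run before the redirect is issued; a PHP sketch (render_snapshot() and generate_random_id() are hypothetical stand-ins for the app's own code):

      <?php
      // serve crawlers a 200 snapshot *before* the redirect logic runs
      if (isset($_GET['_escaped_fragment_'])) {
          echo render_snapshot($_GET['_escaped_fragment_']); // hypothetical helper
          exit;
      }
      // normal visitors still get the random-URL redirect
      header('Location: /' . generate_random_id(), true, 302); // hypothetical helper
      exit;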

    Read the article

  • Two different websites in one remote hosting

    - by Kor
    My client asked me that a website hosted on one server (and pointed at by a domain) should also be accessible (as a specific directory) from another domain which does not point there. For example: http://www.foo.com, hosted at GoDaddy, holds the full website; http://www.bar.com, hosted at Bluehost, needs to serve http://www.foo.com/bar as if it were http://www.bar.com's root. So, if anybody enters through http://www.bar.com, it should internally load http://www.foo.com/bar, without visually changing the URL. I am not sure if this is possible using .htaccess or anything like it. Could anybody shed some light? Thanks in advance.
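
    If the host serving bar.com runs Apache with mod_proxy enabled (often disabled on shared hosting, so this is only a sketch), a reverse proxy via .htaccess would keep the URL unchanged:

      RewriteEngine On
      RewriteRule ^(.*)$ http://www.foo.com/bar/$1 [P,L]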

    Read the article

  • Is there a class or id that will cause an ad to be blocked by most major adblockers?

    - by Moak
    Is there a general class or ID for an HTML element that a high majority of popular ad blockers will block on a website they have no specific filter information for? My intention is to have my advertisement blocked - avoiding automatic blocking is easy enough. I was thinking of maybe borrowing some IDs or classes from big advertising companies that are already being actively blocked. Right now my HTML is:

      <ul id="partners">
        <li class="advertisment"><a href="#" class="sponsor"><img class="banner"></a></li>
      </ul>

    Will this work, or is there a more solid approach?
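
    One pattern borrowed from ad-block detection scripts is a bait element stuffed with generic class names that appear in common filter lists such as EasyList - hedged, since filter lists change and this should be tested against current rules:

      <!-- bait: generic class names commonly matched by filter lists -->
      <div class="adsbox ad-banner textads banner-ads">
        <a href="#">Sponsored link</a>
      </div>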

    Read the article
