Search Results

  • Github Feed affecting my WordPress installation?

    - by saul
    Any idea how this fork is affecting my site? I went to check my website log stats and realized it may be the cause of a strange redirect that keeps happening on my WordPress installation. Here's a line I found in my log:

        54.81.91.95 - - [07/May/2014:22:52:08 -0400] "GET /category/selfie/feed/ HTTP/1.1" 200 1826 "-" "feedzirra http://github.com/pauldix/feedzirra/tree/master"

    And this is the GitHub fork (or whatever these are called): https://github.com/feedjira/feedjira/tree/master. Basically, I think every time I update my categories (selfie in this case), I get redirected to install.php, probably by triggering some GET request from that feed. To the best of my knowledge, this feed parses all URLs with this structure, blocking them, kind of like a DDoS attack. Any ideas how to go about it?
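
    For reference, one way to keep a misbehaving feed fetcher away is to deny requests by user agent at the web server. A minimal .htaccess sketch, assuming Apache 2.2 with mod_setenvif available; the "feedzirra" string comes from the log line above:

        # mark requests whose User-Agent contains "feedzirra" (case-insensitive)
        SetEnvIfNoCase User-Agent "feedzirra" bad_bot
        Order Allow,Deny
        Allow from all
        # deny any request carrying that marker
        Deny from env=bad_bot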

    Read the article

  • File access forbidden in htpasswd

    - by Nerd-Herd
    I have been using the htpasswd generated in this question and it seemed to be working well until recently. Since yesterday, I am not able to access the newest file created in the folder ChatLogs (named 10_07_2012.txt). The server returns a 403 Forbidden error saying: "Forbidden: You don't have permission to access /ChatLogs/2012/07/10_07_2012.txt on this server." I am still able to access older files (up to 09 July, 2012). At first I thought it might be because of file permissions, but they are the same as on the other nine files in the folder. What could be the problem? Please help.

    Read the article

  • No description for any page on the website is available in Google despite robots.txt allowing crawling

    - by Abhijit
    I seem to have the weirdest issue with search engine optimization. I have asked the IT folks at my university, asked people on the Joomla forums, and have been trying to sort this out with Google Webmaster Tools for more than two months, to little avail. I want to know whether I have some blatantly wrong configuration somewhere that is preventing search engines from indexing this site. I noticed a similar issue with another website I searched for online (ECEGSA at The University of British Columbia, gsa.ece.ubc.ca), which makes me believe this might be something other people are looking for an answer to. Here are the details. The website in question is http://gsa.ece.umd.edu/. It runs Joomla 2.5.x (latest). The site has been up since around mid-December 2013, and I noticed right from the start that it was not being indexed correctly on Google. Specifically, I see the following message when I search for the website on Google: "A description for this result is not available because of this site's robots.txt – learn more." The thing is, from December until around March I used the default Joomla robots.txt file, which is:

        User-agent: *
        Disallow: /administrator/
        Disallow: /cache/
        Disallow: /cli/
        Disallow: /components/
        Disallow: /images/
        Disallow: /includes/
        Disallow: /installation/
        Disallow: /language/
        Disallow: /libraries/
        Disallow: /logs/
        Disallow: /media/
        Disallow: /modules/
        Disallow: /plugins/
        Disallow: /templates/
        Disallow: /tmp/

    Nothing there should stop Google from indexing my website. Even more confusingly, in Google Webmaster Tools, under the "Blocked URLs" tab, when I test many of the links on the site they all show up as "Allowed". I then tried adding a sitemap and referencing it in the robots.txt file. That did not help: same search result, same behavior in the "Blocked URLs" tab. Additionally, the "Sitemaps" tab now reports an error for several links saying "URL is robotted out", yet when I test those exact links under "Blocked URLs" they are allowed. I then tried deleting the robots.txt file entirely. No use; the same problem remains. At this point neither I nor anyone in the IT department here can give a rational explanation for why this is happening, and no one on the Joomla forums seems to understand what is going on. Based on what I have described, does it seem that I have set something incorrectly in robots.txt, in .htaccess, or somewhere else?
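
    For reference, a sitemap is usually advertised with a top-level Sitemap line in robots.txt rather than inside a User-agent block. A minimal sketch, assuming the sitemap lives at /sitemap.xml (the path is a placeholder):

        Sitemap: http://gsa.ece.umd.edu/sitemap.xml

        User-agent: *
        Disallow: /administrator/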

    Read the article

  • Tumblr Custom URL Not Working

    - by user3177012
    I have bought a domain that I want to use as a standalone Tumblr website, but I can't get the URL to verify in the Tumblr settings. The domain is a .com registered with 123-reg. I've set the CNAME to the correct Tumblr hostname and also set the A record. When I visit the URL I get the Tumblr error page, so I know the domain is pointing at Tumblr; however, when I go to Settings in Tumblr and "Test" the URL, it says it's not pointing and I can't save it. What could be the problem?
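
    For context, Tumblr custom domains have typically been set up with either an A record on the bare domain or a CNAME on a subdomain, not both at once. A sketch of the two DNS records; the values below are the ones Tumblr documented at the time and should be checked against their current help pages, and example.com is a placeholder:

        ; bare domain pointed at Tumblr's IP
        example.com.        IN  A      66.6.44.4
        ; or a subdomain pointed at Tumblr by name
        www.example.com.    IN  CNAME  domains.tumblr.com.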

    Read the article

  • How to prevent Google Analytics from adding a second slash between domain and page specific URL when viewing a page?

    - by Jeromy Anglim
    I have a blog at http://foo.tumblr.com. I sometimes go to Site Content > All Pages in Google Analytics, navigate to a page listing, and then click the icon to open that page on my blog. However, instead of opening http://foo.tumblr.com/post/1234/blah.html, Google Analytics opens http://foo.tumblr.com//post/1234/blah.html (i.e., it adds a second slash between the domain and the page-specific component of the URL). How can I stop Google Analytics from doing this?

    Read the article

  • Can web applications running on IIS7 Windows Server 2008 R2 be forced to immediately detect changes to hosts file?

    - by Brenda Bell
    We have several web applications running on several load-balanced servers, and we want them to communicate with each other without first traversing outside the load balancer. For example:

        http://appA.example.com is running on 192.0.2.1 and 192.0.2.2
        http://appB.example.com is also running on 192.0.2.1 and 192.0.2.2
        The load balancer's public IP address is 198.51.100.3

    By default, when appA on 192.0.2.1 makes a call to a WCF service hosted in appB, the HTTP request is routed to 198.51.100.3; this establishes a new session and the load balancer will direct the call to either of the two servers. We want the call to be routed to the instance of appB running on the same server, so we add "192.0.2.1 appB.example.com" to the hosts file on 192.0.2.1. This eventually works, but we either have to wait for the app pool to recycle naturally or do a manual reset before appA sees the new address. Is there any way to have the change detected automatically without having to recycle the app pool?

    Read the article

  • How do I find information on who links to my sites?

    - by bobdobbs
    I'm trying to figure out if there's a free way to get information on backlinks to my site. I've had Webmaster Tools and Google Analytics set up for years, but I can't find access to data about backlinks in either toolset. Webmaster Tools, under "Traffic" > "Links to your site", gives me the same message for all of my sites: "No data available". I haven't been able to find anything in GA that gives any information on backlinks. I've heard of using "link:" as an operator in Google search, but for each of my sites this returns either zero or very few results, in cases where I know I have many backlinks; most of the links simply aren't shown. My thinking is that Google maintains a graph of who links to my site, so I figured they might let me see it, but I can't figure out how. I've found this tool on a spammy website: http://www.backlinkwatch.com. It offers more data on my backlinks than Google does, and offers more results in exchange for a paid subscription. The data it offers for free looks good, but the results are limited and the site has popups and obnoxious ads. So, in short: how do I get data on who links to me? Is there a free way?

    Read the article

  • Google Analytics Campaigns Not Tracking E-Commerce

    - by Paul
    I am running email campaigns via MailChimp and tracking the success of my campaigns via Google Analytics. I can successfully see data being tracked for Reporting > Conversions > Ecommerce (receiving data) and Reporting > Traffic Sources > Campaigns (receiving data). However, I am not receiving any Ecommerce data for the individual campaigns: Reporting > Traffic Sources > Campaigns > Ecommerce shows no data, so I see something like Visits: 18,501 and Revenue: $0.00. Everything I have read leads me to believe this should just "work" if Ecommerce is set up. Is there some additional action I need to take for this to work? Any help would be appreciated!
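
    For reference, a minimal sketch of the classic ga.js ecommerce calls the receipt page needs to fire; the transaction and item values are made-up placeholders, and campaign attribution only works if the visitor's session still carries the campaign's utm parameters when this runs:

        // assumes the standard asynchronous ga.js tracker (_gaq) is already loaded on the page
        _gaq.push(['_addTrans',
          '1234',            // transaction ID (placeholder)
          'Example Store',   // affiliation
          '49.99',           // total
          '4.00',            // tax
          '5.00'             // shipping
        ]);
        _gaq.push(['_addItem',
          '1234',            // transaction ID, must match the one above
          'SKU-001',         // SKU (placeholder)
          'Example product', // product name
          'Category',        // category
          '49.99',           // unit price
          '1'                // quantity
        ]);
        _gaq.push(['_trackTrans']);  // sends the transaction to Google Analytics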

    Read the article

  • Redirect public traffic to a different subfolder, while local traffic remains unchanged

    - by ecnepsnai
    I would like local (intranet) HTTP traffic to go to the /var/www/html folder, while any public traffic goes to the subfolder /var/www/html/public. I've tried this configuration, with some variation, in httpd.conf:

        <VirtualHost PRIVATE-IP>
            DocumentRoot /var/www/html
            ServerName ecn
            ErrorLog /var/www/logs/error/private
            CustomLog /var/www/logs/access/private common
        </VirtualHost>

        <VirtualHost PUBLIC-IP>
            DocumentRoot /var/www/html/public
            ServerName PUBLIC-DOMAIN-NAME
            ErrorLog /var/www/logs/error/public
            CustomLog /var/www/logs/access/public common
        </VirtualHost>

    PUBLIC-IP, PRIVATE-IP, and PUBLIC-DOMAIN-NAME are all replaced with the correct values in the actual file. The problem is that local traffic can browse fine, but remote traffic is directed to the root folder and gets a 403 (because I have that folder blocked off through my .htaccess file). If I append /public to the URL it works fine.
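
    For reference, when IP-based virtual hosts don't match as expected (for example, behind NAT all requests can arrive on the same interface), the split can also be expressed in a single vhost with mod_rewrite. A minimal, untested sketch, assuming DocumentRoot /var/www/html and that the intranet uses addresses in 192.168.0.0/16 (placeholder range):

        RewriteEngine On
        # requests NOT coming from the intranet range are served from /public
        RewriteCond %{REMOTE_ADDR} !^192\.168\.
        RewriteCond %{REQUEST_URI} !^/public/
        RewriteRule ^/?(.*)$ /public/$1 [L]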

    Read the article

  • Google Analytics: Why does "/" appear in goal funnel visualization?

    - by Lauren
    My question is about the goal funnel for checkout. Does anyone have any idea where the "/" step is coming from? The cart page is at site: game on glove dot com (I don't want this Stack Overflow page ranking particularly well in Google for it). Go to the site, click on the order button, make your selection, and click the button to enter the cart (it resolves to /Cart and /Shop-Cart). I believe I used regular expression matching to match "cart". So why the "/"? I don't know what would cause the home page to reload while users are on the cart page, which opens in a Colorbox lightbox where the only way back to home ("/") is to hit the exit button in the top right of the lightbox. Here's my one guess, though it doesn't seem likely: the "check out with PayPal" button, if you hover over it, defaults to the home page, which might be the "/"... but it actually redirects the user to paypal.com, so it shouldn't also load the home page.

    Read the article

  • Apache config that uses two document roots based on whether the requested resource exists in the first

    - by mattalexx
    Background: I have a client site that consists of a CakePHP installation and a Magento installation:

        /web/example.com/
        /web/example.com/app/                    <== CakePHP
        /web/example.com/app/webroot/            <== DocumentRoot
        /web/example.com/app/webroot/store/      <== Magento
        /web/example.com/config/                 <== Site-wide config
        /web/example.com/vendors/                <== Site-wide libraries

    The server runs Apache 2.2.3.

    The problem: the whole company has FTP access and got used to clogging up the /web/example.com/, /web/example.com/app/webroot/, and /web/example.com/app/webroot/store/ directories with their own files. Sometimes these files need HTTP access and sometimes they don't. In any case, this mess makes my job harder when it comes to maintaining the site: code merges, tarring the live code, and so on are very complicated and usually require a bunch of filters.

    Abandoned solution: at first I thought I would set up a new subdomain on the same server, move all of their files there, and change their FTP chroot. But that wouldn't work, for two reasons. First, I have no idea (and neither do they remember) what marketing materials they've sent out that contain URLs to resources they've uploaded to the server, using the main domain and also using arbitrary subdomains that hit the main virtual host because it has ServerAlias *.example.com, so suddenly restricting them to static.example.com isn't feasible. Second, the PHP scripts in their projects are potentially very non-portable; I want their files to stay in an environment as similar as possible to the one they were built in, and I do not want to debug their code to make it portable.

    Half-baked solution: after some thought, I decided to find a way to section off the actual website files into another directory that they would not touch. The company's uploaded files would stay where they were, which ensures I don't break any of their projects that need HTTP access. It would look something like this:

        /web/example.com/                          <== A bunch of their files are in here
        /web/example.com/app/webroot/              <== 1st DocumentRoot; a bunch of their files are in here
        /web/example.com/app/webroot/store/        <== Some more are in here
        /web/example.com/site/                     <== New dir; contains only site files
        /web/example.com/site/app/                 <== CakePHP
        /web/example.com/site/app/webroot/         <== 2nd DocumentRoot
        /web/example.com/site/app/webroot/store/   <== Magento
        /web/example.com/site/config/              <== Site-wide config
        /web/example.com/site/vendors/             <== Site-wide libraries

    After this change, I would only need to pay attention to the stuff within /web/example.com/site/, and my job would be a lot easier; I would be the only one changing things in there. So here's where the Apache magic would happen: an HTTP request to http://www.example.com/ should first use /web/example.com/app/webroot/ as the document root. If nothing is found there (no miscellaneous uploaded company projects), it should fall back to /web/example.com/site/app/webroot/. Another thing to keep in mind: the site might have problems if $_SERVER['DOCUMENT_ROOT'] reads /web/example.com/app/webroot/ while the actual files are within /web/example.com/site/app/webroot/; it would be better if the DOCUMENT_ROOT environment variable could be /web/example.com/site/app/webroot/ for anything served from that directory.

    Conclusion: is my half-baked solution possible with Apache 2.2.3? Is there a better way to solve this problem?
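
    For reference, the "try the first document root, fall back to the second" behaviour can be sketched with mod_rewrite plus an Alias. This is a minimal, untested outline against Apache 2.2, the /__site__ prefix is an arbitrary internal name, and it does not change what PHP sees in DOCUMENT_ROOT (the paths mirror the layout above):

        DocumentRoot /web/example.com/app/webroot

        # expose the clean copy of the site under an internal URL prefix
        Alias /__site__ /web/example.com/site/app/webroot

        RewriteEngine On
        # skip requests already rewritten to the internal prefix
        RewriteCond %{REQUEST_URI} !^/__site__/
        # if the request does not resolve to a real file or directory under the 1st docroot...
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-f
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-d
        # ...pass it through to the aliased 2nd docroot instead
        RewriteRule ^/(.*)$ /__site__/$1 [PT,L]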

    Read the article

  • Creating a tag-based website without programming?

    - by monodial
    I want to create a tag-based website, and I need a tool I could use for it (preferably without programming). It's a site where a user can pick tags for a certain item. All tags will be placed under a group they are logically linked to (I will do that by hand). On the other end, a visitor can choose a tag and then be directed to the few items on which that tag was selected the most. Besides this, I need to set up a registration form (for the visitors who want to select tags for a desired item). Stack Overflow may serve as an example of what I want to achieve; functionally it is quite a similar approach. I am not sure if further detail will bring me closer to getting development advice, but nevertheless, following this template, what I would be missing is:

    - the ability to categorize the tags so they all fit on one page (overall I assume fewer than 200 tags)
    - a box where a user could enter a new tag, which would stay pending until a certain number of users enter the same tag
    - the ability to limit the number of 'questions' that appear when a visitor chooses a tag ('question' stands for an item to which users are assigning tags); the items displayed would depend on how often the tag was assigned, say the top two items

    Which software should I try, and how should I go about it? Thank you. Lukas

    P.S. I have bought a hosting account through GoDaddy.com. This is the first website I am trying to build.

    Read the article

  • Recommend hosting with fast MySQL database please

    - by Keith Groben
    Possible duplicate: How to find web hosting that meets my requirements? I am frustrated to no end with my current hosting provider, mediaTemple. Yes, they are flashy, and they have a decent degree of flexibility with their GS plan, which I have. But any time I install a site that needs a database, it is slow, like really slow, taking anywhere from 10 to 15 seconds just to load a page. I would host in-house, but there are a lot of complications that come with a LAMP server that I don't want to deal with; honestly, I'd rather spend the time developing. What can you recommend?

    Read the article

  • SEO implications of blocking users viewing more than X pages

    - by Noam
    I'm considering blocking non-premium users after they view more than X pages; this basically means blocking the content after a fixed number of pageviews per session. I can either:

    - keep displaying the full content to search engines (can this be considered cloaking?), or
    - keep the real content in the background with a pop-up that makes it non-viewable, like Quora does (can this make pages rank lower?)

    Read the article

  • Is it safe to Block These URLs with Robots.txt?

    - by Edgar Quintero
    I have a website that has all URLs optimized and 301 redirected from nasty URLs to clean ones. However, the unclean URLs are still linked everywhere throughout the site, in menus, content, products, etc. Google currently has all clean URLs indexed, along with a few unclean URLs too. So the old URLs are still linked everywhere (ideally this wouldn't be the case, but this is how it is at the moment). I would like to block the unclean URLs with robots.txt. The question: if I block these unclean URLs with robots.txt while the entire website still links to them (but they all redirect to the clean version), will this affect the indexing status at all?
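
    For reference, a minimal robots.txt sketch of the kind of blocking being described; the patterns are hypothetical and would need to match the actual "unclean" URL structure, and note that once a URL is blocked in robots.txt, crawlers can no longer see its 301 redirect:

        User-agent: *
        # hypothetical examples of old, unclean URL patterns
        Disallow: /index.php?product_id=
        Disallow: /*?sessionid=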

    Read the article

  • XMLHttpRequest not working, trying to test database connection

    - by Frederick Marcoux
    I'm currently creating my own CMS for personal use, but I'm stuck on one piece of code. I'm trying to write an installation script, but the AJAX request that tests whether the database works doesn't work. Here's my JS code:

        function testDB() {
            "use strict";
            var host = document.getElementById('host').value;
            var username = document.getElementById('username').value;
            var password = document.getElementById('password').value;
            var db = document.getElementById('db_name').value;
            var xmlhttp = new XMLHttpRequest();
            var url = "test_db.php";
            var params = "host="+host+"&username="+username+"&password="+password+"&db="+db;
            xmlhttp.open("POST", url, true);
            xmlhttp.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
            xmlhttp.setRequestHeader("Content-length", params.length);
            xmlhttp.setRequestHeader("Connection", "close");
            xmlhttp.send(params);
            $('#loader').removeAttr('style');
            if (xmlhttp.responseText !== '') {
                if (xmlhttp.readyState===4 && xmlhttp.status===200) {
                    $('#next').removeAttr('disabled');
                    $('#test').attr('disabled', 'disabled');
                    $('#test').text('Connection Successful!');
                    $('#test').addClass('btn-success');
                    $('#login').addClass('success');
                    $('#login1').addClass('success');
                    $('#db').addClass('success');
                    $('#loader').attr('style', 'display: none;');
                } else {
                    $('#next').attr('disabled', 'disabled');
                    $('#test').removeClass('btn-success');
                    $('#test').removeAttr('disabled');
                    $('#test').text('Test Connection');
                    $('#login').removeClass('success');
                    $('#login1').removeClass('success');
                    $('#db').removeClass('success');
                    $('#loader').attr('style', 'display: none;');
                }
            } else {
                $('#next').attr('disabled', 'disabled');
                $('#next').attr('disabled', 'disabled');
                $('#test').removeClass('btn-success');
                $('#test').removeAttr('disabled');
                $('#test').text('Test Connection');
                $('#login').removeClass('success');
                $('#login1').removeClass('success');
                $('#db').removeClass('success');
                $('#loader').attr('style', 'display: none;');
            }
        }

    And here's my PHP code:

        <?php
        $link = mysql_connect($_POST['host'], $_POST['username'], $_POST['password']);
        if (!$link) {
            echo '';
        } else {
            if (mysql_select_db($_POST['db'])) {
                echo 'Connection Successful!';
            } else {
                echo '';
            }
        }
        mysql_close($link);
        ?>

    I don't know why it doesn't work; I also tried with jQuery $.ajax, $.get, and $.post, but nothing works.
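
    For reference, a likely cause is that the response is read immediately after send() even though the request is asynchronous, so responseText is still empty. A minimal sketch of the same call with the result handled in an onreadystatechange callback; the element IDs and the test_db.php endpoint are taken from the code above, and the Content-length and Connection headers are dropped because browsers set them automatically and refuse to let scripts override them:

        function testDB() {
          "use strict";
          var params = "host=" + encodeURIComponent(document.getElementById('host').value) +
                       "&username=" + encodeURIComponent(document.getElementById('username').value) +
                       "&password=" + encodeURIComponent(document.getElementById('password').value) +
                       "&db=" + encodeURIComponent(document.getElementById('db_name').value);
          var xmlhttp = new XMLHttpRequest();
          xmlhttp.open("POST", "test_db.php", true);
          xmlhttp.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
          xmlhttp.onreadystatechange = function () {
            // runs when the response actually arrives, not right after send()
            if (xmlhttp.readyState === 4 && xmlhttp.status === 200) {
              if (xmlhttp.responseText !== '') {
                // connection worked: enable the "next" button, mark fields green, etc.
              } else {
                // connection failed: reset the form state
              }
            }
          };
          xmlhttp.send(params);
        }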

    Read the article

  • .htaccess redirect to subfolder in different domain, maintaining old domain in the URL

    - by Naoise Golden
    Redirects have been widely discussed and most problems solved, so I am sorry for opening yet another post about this, but none of the approaches I have tried work. I have a WordPress site hosted at http://mydomain.com/clientsdomain.com/wordpress. I would like to temporarily point http://clientsdomain.com/ at the above URL while keeping the clientsdomain.com domain in the URL. So, for example, http://clientsdomain.com/some/page would be served by http://mydomain.com/clientsdomain.com/wordpress/some/page. Is this even possible with .htaccess? Or maybe with some configuration or plugin option in WordPress?
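
    For reference, keeping clientsdomain.com in the address bar while serving content from another host is reverse-proxying rather than redirecting. A minimal .htaccess sketch on the server clientsdomain.com points at, assuming mod_rewrite and mod_proxy are enabled there; WordPress would also need its site URL set to the proxied domain for internal links to come out right:

        RewriteEngine On
        # proxy every request to the WordPress install on the other domain,
        # keeping clientsdomain.com in the visitor's address bar
        RewriteRule ^(.*)$ http://mydomain.com/clientsdomain.com/wordpress/$1 [P,L]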

    Read the article

  • Is a 302 redirect to a random URL from the homepage an SEO problem?

    - by CookieMonster
    I originally posted this on Stack Overflow, but I believe this is a better place to ask. My web application is very similar to notepad.cc, which redirects to a randomly generated URL upon access, e.g. http://myapp.com/roTr94h4Gd. (Please note that notepad.cc is not my site.) Probably because of this redirect feature, when I do "Fetch as Google" or "Fetch as Bingbot", I get a 302 and no HTML content, not even an <html></html> tag:

        HTTP/1.1 302 Moved Temporarily
        Server: nginx/1.4.1
        Date: Tue, 01 Oct 2013 04:37:37 GMT
        Content-Type: text/html
        Transfer-Encoding: chunked
        Connection: keep-alive
        X-Powered-By: PHP/5.4.17-1~dotdeb.1
        Set-Cookie: PHPSESSID=vp99q5e5t5810e3bnnnvi6sfo2; expires=Thu, 03-Oct-2013 04:37:37 GMT; path=/
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma: no-cache
        Location: /roTr94h4Gd

    How should I avoid the 302 in this case? I suppose I could modify my site to prevent the redirect, but generating a random URL on each access is a necessary feature of my web app. I added a <meta name="fragment" content="!"> tag to my index page and set it to return a static snapshot of the page when the flag is set, but this still returns a 302. I also tried adding a header to return 200 before redirecting, but that had no effect either. Could someone suggest a good way to solve this problem?

    Read the article

  • Google nofollow, Disavow and Link Removal Requests

    - by PsychoDad
    I am the owner of http://www.YouReview.net and I constantly get requests from people asking me to remove links to their sites, or else they will disavow the links, and they threaten me with Google penalties. All of this is a bit frustrating, because first, I use nofollow on any link outside the YouReview.net domain, and second, I've never heard of Google penalizing a site for linking to other websites. My question is twofold: do disavowed links penalize the site that was disavowed? And does the "nofollow" attribute on anchor tags absolutely guarantee that the link is not followed and not counted for search engine ranking? Why don't more people know about nofollow?
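
    For reference, the markup being described is just the rel attribute on an ordinary anchor tag (the URL here is a placeholder):

        <!-- outbound link marked so search engines are asked not to pass ranking credit through it -->
        <a href="http://example.com/some-page" rel="nofollow">Example site</a>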

    Read the article

  • PHP application deployment to Red Hat

    - by Subhash Dike
    I am a .NET-background person, with most of my work on Windows and IIS. I have been given a task to deploy a PHP application to a Red Hat Linux box. All I have is the credentials and the server IP of that box. Can someone help me get started with creating a new website under Apache 2.0 on the Red Hat box and then putting the PHP code onto it? I do not have physical access to that server. I know these are very basic questions, but I am in the dark on these things. Even pointing me to some documentation is fine, or at least an analogy with IIS would help to a certain extent.
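
    For reference, a minimal sketch of what an Apache virtual host for a PHP application might look like on such a box; the domain and paths are placeholders, mod_php is assumed to be installed, and on Red Hat derivatives this kind of file typically lives under /etc/httpd/conf.d/:

        <VirtualHost *:80>
            ServerName example.com
            DocumentRoot /var/www/example.com/public
            # serve index.php as the directory index
            DirectoryIndex index.php index.html
            ErrorLog logs/example.com-error_log
            CustomLog logs/example.com-access_log common
        </VirtualHost>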

    Read the article

  • Responsive Design: Which Framework Should I Use? CSS3 & HTML5

    - by Jayhal
    I've been looking for a suitable set of HTML5/CSS3 foundation files to start new projects on. I started off piecing together my own files, but I believe I might be better served by finding a solid and fairly compatible (with me) CSS3/HTML5 framework and then tweaking the things that don't suit my own process. I'd love to find something that is responsive and that covers layout, type (horizontal and vertical baselines), form and interface components, and cross-browser issues, and that is preferably built on something more than just a simple CSS reset, but does include rebuilding elements consistently across browsers for a clean working slate. Extra features like polyfills are great, as are good documentation and examples. So far, off the top of my head I know of: Skeleton, 1140 Grid, 320 & Up (plus BP), HTML5 Boilerplate 2.0 and Mobile, Inuit.css, Less Framework, Fluir, Perkins.Less, and a few WP themes. Are there any great ones I don't know about? I work a lot in WP, and something that is easily incorporated (but can also stand alone) is ideal. Plugins and a wide feature set, while keeping the ability to cut it down when needed (flexibility), are also a big plus, as is a quick learning curve, since I want to start using whatever I find immediately. What are some of the better options you might be able to recommend? Systems, scripts, plugins, and other related tools are also welcome. Thanks!

    Read the article

  • I want to trace the activity of my customized link sent via email or chat to my customer

    - by anilkumble789
    I want to trace the activity of a customized link sent via email or chat to my customer. Activity like: whether they opened the link or not, and how much time they spent on the page. Example: I have decided to send a business proposal link to Mr. ABC and Mr. XYZ. For ABC the link would be something like www.mycompany.com/proposal ....abc..., and for XYZ the link would be www.mycompany.com/proposal ....xyz... It's like link analytics. How do I go about it?
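
    For reference, one common way to do this without writing code is to give each recipient a campaign-tagged URL and read the results in Google Analytics. A sketch of what the tagged links might look like; the utm values are arbitrary labels:

        www.mycompany.com/proposal?utm_source=email&utm_medium=proposal&utm_campaign=abc
        www.mycompany.com/proposal?utm_source=email&utm_medium=proposal&utm_campaign=xyz

    The campaign reports then show whether each tagged link was visited and how long the visit lasted, although they identify the link rather than the person, and only cover pages that carry the tracking code.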

    Read the article

  • Splitting a sitemap by content type

    - by James
    I am currently tasked with submitting our website's sitemap to the search engines every week. We have a module which offers sitemap generation, but we find it does not work very well: not all pages are included and it does not split the sitemap by content. I've used various tools (online and offline) to generate the sitemaps, which is not the problem. The problem is that after every generation (which takes most of each Monday) I have to manually go through the sitemap and categorise the links into products, pages, categories and sub-categories. I've experimented successfully with XSL to split the sitemap, but it is still a labour-intensive process. Does anyone know of a good method to split the sitemap? Currently there are around 20,000 links (iirc) in total.
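
    For reference, once the links are split into per-type files, the sitemaps.org protocol lets a single sitemap index point at all of them, so only the index needs to be submitted. A minimal sketch, with hypothetical file names:

        <?xml version="1.0" encoding="UTF-8"?>
        <sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <sitemap>
            <loc>http://www.example.com/sitemap-products.xml</loc>
          </sitemap>
          <sitemap>
            <loc>http://www.example.com/sitemap-categories.xml</loc>
          </sitemap>
          <sitemap>
            <loc>http://www.example.com/sitemap-pages.xml</loc>
          </sitemap>
        </sitemapindex>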

    Read the article

  • High Traffic Web Host Solution?

    - by Calsy
    Hi all, I'm currently shopping around for a web host for a website we are hoping to release in the near future. This is my first real step into this area, so I'm just wondering what I should be looking for. It is an ASP.NET MVC website with an MS SQL Server backend. I need to know that the server will not buckle if traffic booms. Currently I'm looking at a managed dedicated server from SingleHop. Does anyone know better or have any advice? Thanks in advance.

    Read the article

  • Disallow robots.txt from being accessed in a browser but still accessible by spiders?

    - by Michael Irigoyen
    We make use of the robots.txt file to prevent Google (and other search spiders) from crawling certain pages/directories on our domain. Some of these directories/files are secret, meaning they aren't linked (except perhaps on other pages covered by the robots.txt file). Some of these directories/files aren't secret; we just don't want them indexed. If somebody browses directly to www.mydomain.com/robots.txt, they can see the contents of the robots.txt file. From a security standpoint, this is not something we want publicly available to anybody. Any directories that contain secure information are behind authentication, but we still don't want them to be discoverable unless the user specifically knows about them. Is there a way to provide a robots.txt file but have its presence masked from John Doe accessing it from his browser? Perhaps by using PHP to generate the document based on certain criteria? Perhaps something I'm not thinking of? We'd prefer a way to do it centrally (meaning a <meta> tag solution is less than ideal).
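
    For reference, one central (if easily defeated, since any client can fake its User-Agent) way to do this at the web server is to serve robots.txt only to requests that identify as known crawlers. A minimal Apache mod_rewrite sketch, with the bot list as a placeholder:

        RewriteEngine On
        # return 403 for robots.txt unless the User-Agent looks like a known crawler
        RewriteCond %{HTTP_USER_AGENT} !(googlebot|bingbot|slurp) [NC]
        RewriteRule ^/?robots\.txt$ - [F]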

    Read the article
