Search Results

Search found 18126 results on 726 pages for 'core pro'.


  • Will search engines reindex a page that has been set to redirect to a newer site's page?

    - by Luke Duddridge
    We were asked by a client to change a website so that any pages/URLs we host on an older site now redirect to a newer site, hosted somewhere else and on a different domain name to boot. We did this by changing each page in the IIS site management console to redirect to a URL on the new domain instead of rendering a page locally. According to the redirect checker at http://www.webconfs.com/redirect-check.php, what we have done is search-engine friendly. The problem now is that the client has been on a course learning all about meta tags, and so thinks they have a better understanding of the "matrix" (remember, there is no spoon). As Google still shows the older site in searches, this isn't helping matters. I have tried to explain that we have to wait for Google to reindex. I'm not blowing smoke, am I? I'm now starting to wonder: will the older site always appear in a search, even though its pages no longer exist? Is there a better way I should be redirecting their site, to ensure Google stops indexing pages that no longer exist and replaces them with the content on the newer site? A suggestion on the site mentioned above is to use this code:

        Response.Status = "301 Moved Permanently"
        Response.AddHeader "Location", "http://www.new-url.com/"

    Does using the redirect option in the IIS management tool not do the same?
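
    For reference, the suggested snippet as a complete classic ASP page (the target URL is a placeholder for the matching page on the new domain). One point worth verifying: the plain redirect option in older IIS versions sends a temporary 302 Found unless the "permanent" setting is also ticked, and a 302 tells Google to keep the old URL in its index.

        <%
        ' A 301 tells crawlers the move is permanent, so the old URL
        ' should eventually be replaced by the new one in the index.
        Response.Status = "301 Moved Permanently"
        Response.AddHeader "Location", "http://www.new-url.com/"
        Response.End
        %>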

    Read the article

  • Website with sections in Drupal?

    - by Matt Hampel
    What is the best way to create a website with sections in Drupal? Users need to be able to add, remove, and nest pages fairly easily. Pages added to a section should get an appropriate URL, like "/[section name]/[page title]". This seems like a straightforward task, but I can't find the right combination of tools to do it. Subsite comes close but, for some odd reason, doesn't set up the correct content paths. The closest I got was creating a book for each subsection, but that feels like using the wrong tool for the job. Edited with my solution: I used Organic Groups with Pathauto, configured so that pages in groups get URLs of the form [group path]/[page title].

    Read the article

  • How does Wikipedia's SEO work?

    - by Josh Siegl
    I'm sorry if this question is misplaced or doesn't belong here. I'm currently developing an app for Android and iOS, and of course I'm thinking about the best ways to market it. Last night I Googled somebody else's app, and the third link in was a Wikipedia page on it. I had never even thought of apps having Wikipedia pages, but alas, there it was. And of course it was very helpful in explaining exactly what the app did and in what cases it was useful (something that's absolutely crucial for potential customers to understand). So then I got to thinking that I should create a Wikipedia page for my app. But how does Wikipedia apply SEO? I know the question could be overly complicated or specific; I'm just looking for general answers. For instance, when somebody Googles my app, where does the Wikipedia page appear in the results? When I create a page for my app, how do I ensure that it shows in the search results (is there any way to do that)? I'm sure I'll find all of this out later when I create the page; I guess I'm just asking out of curiosity. So how does Wikipedia's search engine optimization work, on a page-by-page basis?

    Read the article

  • How much does a website like bytes.com earn?

    - by robin das
    I am running a website similar to bytes.com (an IT Q&A site). My site attracts 600 unique visitors daily, and we are modeling ourselves on bytes.com. Can anyone at least tell me whether we can earn some serious money with this kind of website? Websiteoutlook estimates bytes.com's daily pageviews at 700,636. Can anyone please let me know what kind of earnings we can expect for a dotcom like bytes.com? Please shed some light on this topic, as a lot of energy, time and money goes into building this kind of website. Thanks, Robin Das

    Read the article

  • Redirecting from a 1und1 hosting solution, with URLs intact

    - by Jelmar
    I have done this before on GoDaddy without a hitch, but I cannot seem to figure out this particular case. I have a domain space with the temporary URL http://yogainun.mysubname.com/, and the domain name that is to be applied to it is hosted at 1und1.de. Right now I have set it up so that, from the 1und1 domain hosting, the address http://www.yoga-in-unternehmen.de/ is frame-redirected to the subdomain I just referred to. But this is not what I want: http://www.yoga-in-unternehmen.de/ is to be the domain. With the frame redirect, URLs like http://www.yoga-in-unternehmen.de/example-article do not show up, but that is what I want. With GoDaddy, in a similar case, I just turned on DNS and changed the name servers. That worked without problems, but not with 1und1. Is there something I am missing?

    Read the article

  • How to parse JSON data from the web faster [closed]

    - by Kaidul Islam Sazal
    I have a JSON inventory file, inventory.json, on the server, like this:

        [
          {
            "body" : "SUV",
            "color" : { "ext" : "White diamond pearl", "int" : "Taupe" },
            "id" : "276181",
            "make" : "Acura",
            "miles" : 35949,
            "model" : "RDX",
            "pic" : [ { "full" : "http://images1.dealercp.com/90961/000JNBD/001_0292.jpg" } ],
            "power" : { "drive" : "Front wheel drive", "eng" : "2.3L DOHC PGM-FI 16-VALVE", "trans" : "Automatic" },
            "price" : { "net" : 29488 },
            "stock" : "6942",
            "trim" : "AWD 4dr Tech Pkg SUV",
            "vin" : "5J8TB2H53BA000334",
            "year" : 2011
          },
          {
            "body" : "Sedan",
            "color" : { "ext" : "Premium white pearl", "int" : "Taupe" },
            "id" : "275622",
            "make" : "Acura",
            "miles" : 40923,
            "model" : "TSX",
            "pic" : [ { "full" : "http://images1.dealercp.com/90961/000JMC6/001_1765.jpg" } ],
            "power" : { "drive" : "Front wheel drive", "eng" : "2.4L L4 MPI DOHC 16V", "trans" : "Automatic" },
            "price" : { "net" : 22288 },
            "stock" : "6945",
            "trim" : "4dr Sdn I4 Auto Sedan",
            "vin" : "JH4CU2F66AC011933",
            "year" : 2010
          }
        ]

    Those are two entries; there are almost 5,000 like this. I parse the JSON like so:

        var url = "inventory/inventory.json";
        $.getJSON(url, function(data){
            $.each(data, function(index, item){ // straight-forward loop
                if(item.year == 2012) {
                    $('#desc').append(item.make + ' ' + item.model + ' ' + '<br/>'
                        + item.price.net + '<br/>' + item.pic[0].full);
                }
            });
        });

    This is working fine. The problem is that searching and fetching are a little slow, as there are already 5,000 entries and the number is increasing day by day. This is a straightforward loop over the data, a plain brute-force method. Is there a more time-efficient way to do this, i.e. any faster method than the straightforward loop?
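
    One possible speed-up, sketched in plain JavaScript to match the snippet above (the byYear name is mine; the same idea works for any field that is filtered on often): build an index keyed by year once, right after the JSON loads, so each later search is a direct lookup instead of a scan over all 5,000 entries. Filtering on the server and sending only the matching records would help even more.

        var url = "inventory/inventory.json";
        var byYear = {};
        $.getJSON(url, function(data){
            // One pass to group the inventory by year...
            $.each(data, function(index, item){
                (byYear[item.year] = byYear[item.year] || []).push(item);
            });
            // ...then each query is a lookup, not a full scan.
            $.each(byYear[2012] || [], function(index, item){
                $('#desc').append(item.make + ' ' + item.model + '<br/>'
                    + item.price.net + '<br/>' + item.pic[0].full);
            });
        });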

    Read the article

  • QapTcha error issue. Works locally, not on live server [migrated]

    - by BlassFemur
    I am adding QapTcha (http://demos.myjqueryplugins.com/qaptcha/) to a website that I am working on, and I'm getting the error "Uncaught TypeError: Cannot read property 'error' of null". What's weird to me is that everything works perfectly locally, with no errors or anything. Once I uploaded via FTP to the live server, I get the above error. Below is the block of code that seems to generate it (the failing line is marked):

        Slider.draggable({
            revert: function(){
                if(opts.autoRevert) {
                    if(parseInt(Slider.css("left")) > (bgSlider.width()-Slider.width()-10)) return false;
                    else return true;
                }
            },
            containment: bgSlider,
            axis: 'x',
            stop: function(event,ui){
                if(ui.position.left > (bgSlider.width()-Slider.width()-10))
                {
                    // set the SESSION iQaptcha in PHP file
                    $.post(opts.PHPfile, {
                        action : 'qaptcha',
                        qaptcha_key : inputQapTcha.attr('name')
                    },
                    function(data) {
                        if(!data.error)  // <-- Uncaught TypeError: Cannot read property 'error' of null
                        {
                            Slider.draggable('disable').css('cursor','default');
                            inputQapTcha.val('');
                            TxtStatus.text(opts.txtUnlock).addClass('dropSuccess').removeClass('dropError');
                            form.find('input[type=\'submit\']').removeAttr('disabled');
                            if(opts.autoSubmit)
                                form.find('input[type=\'submit\']').trigger('click');
                        }
                    },'json');
                }
            }
        });

    I'm not really sure why it works locally and not on the server. Any help or suggestions would be appreciated. Thanks
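
    The null points at the server's reply: jQuery hands the callback null when the response body is empty or cannot be parsed as JSON, which typically means opts.PHPfile does not resolve to the same path on the live server (case-sensitive paths are a common culprit when moving from a local Windows setup to a Linux host). A defensive sketch of the $.post call (the success branch is elided; this only localizes the problem, it does not fix the path):

        $.post(opts.PHPfile, {
            action : 'qaptcha',
            qaptcha_key : inputQapTcha.attr('name')
        }, function(data) {
            // data is null when the reply is empty or invalid JSON
            // (older jQuery parses "" to null); guard before data.error.
            if (data && !data.error) {
                // success path, as in the original code
            } else if (window.console) {
                console.log('QapTcha: bad reply from ' + opts.PHPfile);
            }
        }, 'json');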

    Read the article

  • WordPress .htaccess preventing subfolder access

    - by John K.
    This is sort of a goofy setup, but it's not in my power to reconfigure it at this time. I'm running in a shared hosting environment. The domain is example.com. This is an add-on domain on the host side, with example.com being redirected to the www/example.com sub-directory. That directory houses a standard WordPress site, which acts as the main site when you visit example.com. The .htaccess file within that directory is:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress

        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteRule ^wp-admin/profile\.php$ /ssm/welcome [R]
        </IfModule>

    I have a subdirectory at the root level, alongside the /example.com subdirectory, that houses a CakePHP application: /tracker. My problem is that when I attempt to browse to example.com/tracker, I get a 404 from WordPress because permalinks are on. What I think I need is either a rewrite rule in the WordPress .htaccess file that short-circuits the existing rewrite rules and permits example.com/tracker to work independently of the WordPress install, or a rewrite rule at the root level that short-circuits the redirect to the /example.com directory in the first place. Not sure how well I explained that, so here's a summary. The www/ directory structure:

        example.com/
        tracker/

    An add-on domain, www.example.com, redirects to the /example.com directory (WordPress), and the tracker/ directory runs CakePHP, which I would like to access via www.example.com/tracker. If you need further info or clarification, let me know!
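
    A sketch of the first idea (untested, and it assumes the tracker/ directory is, or can be made, reachable under the same document root as WordPress, e.g. moved or symlinked into example.com/): a pass-through rule placed above the WordPress block stops the catch-all from swallowing those requests.

        # Above the BEGIN WordPress block: leave /tracker requests alone.
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteRule ^tracker(/|$) - [L]
        </IfModule>

    If tracker/ must stay a sibling of example.com/, the pass-through would instead have to live wherever the host applies the add-on-domain redirect, before it takes effect.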

    Read the article

  • mod_rewrite works within a directory but not at the root

    - by Anvesh Saxena
    I am having a problem with my RewriteRule for the tags portion. What I have been able to work out so far is that the rule is being triggered, because the page tags.php is rendered, but without the URL parameters. The .htaccess file with the rules is in the root of my sub-domain and has the following content for the tags portion:

        # Rewrite rule for tags
        RewriteRule ^tags/(\w+)/(\d+)/?$ tags.php?tag_name=$1&tag_id=$2
        RewriteRule ^tags/(\w+)/?$ tags.php?tag_name=$1
        RewriteRule ^tags/?$ tags.php?tag_name=

    The part I can't work out is that a similar .htaccess file exists for a directory within my sub-domain and works as expected there, with the URL parameters available. The .htaccess file within the directory reads as follows:

        # Rewrite rule for tags
        RewriteRule ^tags/(\w+)/(\d+)/?$ restAPI.php?type=tags&tag_name=$1&tag_id=$2
        RewriteRule ^tags/(\w+)/?$ restAPI.php?type=tags&tag_name=$1
        RewriteRule ^tags/?$ restAPI.php?type=tags&tag_name=

    Could anyone point out the problem in my rewrite rules? I am also sometimes getting an internal server error, which I suspect is related. Note: I am on Apache 2.2.23 on shared hosting.
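
    One theory that fits the symptoms (a guess, worth testing): with MultiViews enabled, Apache's content negotiation maps /tags/... straight to tags.php before mod_rewrite runs, so the page renders with an empty query string; in the sub-directory the target is restAPI.php, which negotiation cannot derive from /tags, so the rewrites there run normally. A possible top-of-file fix for the sub-domain's .htaccess (note: if the host disallows Options overrides, this first line will itself cause a 500 error, and the host would have to disable MultiViews instead):

        # Stop content negotiation from short-circuiting the rewrites,
        # and flag each rule as final to avoid fall-through.
        Options -MultiViews
        RewriteEngine On
        RewriteRule ^tags/(\w+)/(\d+)/?$ tags.php?tag_name=$1&tag_id=$2 [L]
        RewriteRule ^tags/(\w+)/?$ tags.php?tag_name=$1 [L]
        RewriteRule ^tags/?$ tags.php?tag_name= [L]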

    Read the article

  • Best Text-to-Speech Solution for my Website

    - by Tim Marshall
    I'm working on the 'Ease of Access' section of my website, with options to increase the font size displayed on pages, invert colours, and whatnot. I wish to implement a plugin which, if enabled by the user, reads content on my website aloud. Presumably my best option is a website plugin; however, there might be some programming I've not come across which allows the likes of PHP to read content. I'm not entirely sure how this all works. Best Regards, Tim

    Read the article

  • Estimate of Hits / Visits / Uniques in order to fall within a given Alexa Tier?

    - by Alex C
    I was wondering if anyone could offer rough estimates of how many hits a day move you into a given Alexa rank: Top 5,000; Top 10,000; Top 50,000; Top 100,000; Top 500,000; Top 1,000,000. I know this is incredibly subjective, thus the broad brush strokes with the number ranges... but I've got a site currently ranked just over 1.2M worldwide and over 500k in the USA (http://www.alexa.com/siteinfo/fstr.net). Pretty cool for something hand-built on weekends (pat self on back). I was applying to an ad platform and was told that their program doesn't accept webmasters who have an Alexa rank greater than 100,000 (time to take back that pat on the back, I guess). I know that my hits in the last 30 days are somewhere on the order of 15,000 uniques and 20,000 pageviews, so I'm wondering how much harder I have to work to achieve my next "goals". I'd like to break into the top million, then re-evaluate from there. It'd be nice to know, very roughly of course, what those targets translate into. I imagine that Alexa ranks and tiers become very much exponential as you move up the ranks, but even anecdotal evidence from other webmasters would be really useful to me (i.e.: I have a site that is ranked X and it got Y hits in the last 30 days). Thanks :) - Alex

    Read the article

  • Tools for managing eCommerce backend

    - by rboarman
    I am working with an eCommerce company that has outgrown its hacked-together backend for managing inventory, pricing and feeds to various shopping engines (Yahoo, 3dcart, Amazon, etc.). They currently manage about 12,000 SKUs and are doing $40M in revenue. Their internal people are working on a new Magento solution, but that is six months away, and they need to replace/improve their current solution in order to hold them over. The current solution was developed by two people who have since left the company. What tools/architecture do other eCommerce sites use to manage their inventory, pricing, product descriptions and feed generation for the shopping engines? The current solution looks like this:

    1) Inventory, pricing and product descriptions are maintained in a database and in NetSuite by employees
    2) New products are added to the database via import
    3) Twice a week, data is extracted into a giant Excel spreadsheet
    4) The Excel file adjusts pricing based on some simple algorithms
    5) The Excel file exports about six different CSV feeds, which are manually uploaded to Amazon, 3dcart, Yahoo, Google and Merchant Advantage
       a. Each feed is a variant of the product data, with different field names and formatting
       b. Pricing levels differ between feeds
       c. Some products are not sent to all feeds
    6) Orders are manually parsed and inventory is adjusted as needed once product is sold

    The new solution should:

    1) Import data from ODBC, CSV and NetSuite (CSV via FTP)
    2) Apply pricing changes via simple threshold rules (e.g. under $80 add $10, over $200 add $25)
    3) Ensure margins are being met
    4) Format and generate a bunch of CSV and XML feeds
    5) Perhaps upload feeds to shopping engines automatically

    What I need to do is replace the Excel file with something that is maintainable and automated. Something in the .NET stack is preferable but not mandatory. I've been looking at BizTalk, but it may take too long to develop and deploy. Any suggestions?
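
    To make requirement 2 concrete, a sketch of the kind of rule the new tool has to apply, in C# since the .NET stack is preferred (class and method names are hypothetical; the thresholds are the two quoted above):

        using System;

        class PricingRules
        {
            // Simple threshold rules: under $80 add $10, over $200 add $25.
            static decimal AdjustPrice(decimal basePrice)
            {
                if (basePrice < 80m) return basePrice + 10m;
                if (basePrice > 200m) return basePrice + 25m;
                return basePrice;
            }

            static void Main()
            {
                Console.WriteLine(AdjustPrice(49.99m));  // 59.99
                Console.WriteLine(AdjustPrice(249.00m)); // 274.00
            }
        }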

    Read the article

  • Why am I seeing unexpected requests for "crossdomain.xml" in my logs?

    - by Bogdacutu
    I've been getting lots of 404 errors for crossdomain.xml. Here are the request details, as provided by Google App Engine:

        404 22ms 19cpu_ms 0kb Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.122 Safari/534.30
        69.130.*.* - - [24/Jul/2011:07:43:42 -0700] "GET /crossdomain.xml HTTP/1.1" 404 124 "http://s.nsdsvc.com/App/DddWrapper.swf?c=3" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.122 Safari/534.30" "app.*.*.*" ms=22 cpu_ms=19 api_cpu_ms=0 cpm_usd=0.000633 instance=00c61b117c557326bef77d341a345431e66b

    I'm not sure what is going on. Can anyone help me solve this issue?
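
    The referer in the log line is the clue: a Flash movie (DddWrapper.swf on s.nsdsvc.com) is loading resources from the app, and the Flash player automatically requests /crossdomain.xml to ask permission for cross-domain access. If that access is unwanted, the 404 is already the right answer and can be ignored; to grant it, a minimal policy file served at the web root would look like this (the domain value is an example):

        <?xml version="1.0"?>
        <!DOCTYPE cross-domain-policy SYSTEM
          "http://www.adobe.com/xml/dtds/cross-domain-policy.dtd">
        <cross-domain-policy>
          <!-- Allow only this SWF's host; "*" would allow any domain. -->
          <allow-access-from domain="s.nsdsvc.com" />
        </cross-domain-policy>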

    Read the article

  • MSSQL or MySQL: learning

    - by Yehuda
    I have been using MySQL for about 9 months now for websites, and I have become quite good at getting what I want out of the database. However, I am still missing most of the complicated parts. I have an excellent tutorial, but it is on SQL Server 2008. 1) Is it worth switching over to MSSQL (I understand the SQL dialect is different) so that I will learn all about SQL and databases in general? 2) Do most people use MySQL or MSSQL? 3) What is best practice? I am talking mainly about websites.

    Read the article

  • Reach Local Proxy Page - Duplicate content?

    - by Simon Bennett
    We have a client who has instructed Reach Local to manage their paid SEO work etc. Reach Local have created a proxy version of the site at http://example-px.rtrk.co.uk which mirrors the existing site completely. Would I be correct in assuming that this counts as duplicate content, and that one or both of the sites could be penalized because of it? And would adding a rel="canonical" link tag on the proxy site help with this? Many thanks in advance.
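
    For reference, the tag goes in the <head> of each proxy page and points at the original page it mirrors (example.com stands in for the client's real domain):

        <link rel="canonical" href="http://www.example.com/some-page" />

    This tells search engines which copy should be indexed, which is the standard remedy when a mirrored page has to stay live.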

    Read the article

  • Search ranking for important keywords has gone down drastically [duplicate]

    - by Vaivhav
    This question already has an answer here: How to diagnose a search engine ranking drop? (5 answers)

    Firstly, we are a small entrepreneurial team of 3 persons, and I am more of an amateur webmaster for the company's website, as we cannot really afford a technical guy/department right now. A few weeks ago, our website traffic and rankings for most keywords dropped overnight. I have done a lot of reading since and learned about Penguin 2.1, which people say is the reason for the drop. Something like this had never happened before. Now, I have gone through the entire Google Webmaster help section. It says there that if a manual penalty is taken against us, we would see a message on the Manual Actions page. So far, we haven't received any notice from Google for web spam. Some SEO guys I contacted said they found spam links in our backlink profile. I do believe I mistakenly purchased a cheap link/SEO scheme when I was still very new to SEO. This was more than a year back, but since then we have been legitimate. Moreover, how do I find out which links are spam and which are not? Our content is all original, refreshing and the best you will find in our niche. We also have a blog, but on a different domain (wordpress.com), from which we send anchored links to our business website. Is this a good thing to do? Now, how should we proceed to recover our traffic/rankings? I tried searching in Webmaster Tools for a way to reach Google and ask why the traffic decreased suddenly, but I couldn't find a contact form or anything similar. Can someone please go through our website and help clarify the reason for the drop, along with a solution? I will really appreciate it, as I can't figure this out and it is taking a lot of time. Vaivhav

    Read the article

  • Which is the best image hosting site for hosting a website's images? [closed]

    - by rahul dagli
    Possible Duplicate: How to find web hosting that meets my requirements? I currently have a website and blog on a limited web hosting plan. When I upload images to my hosting server, they consume a lot of bandwidth and space, so I was thinking of hosting the images on some other image hosting site and direct-linking them to my site. I found a few sites like ImageShack, Photobucket, TinyPic and Imgur; however, I see they all have certain restrictions. The features I am looking for are as follows:

    1. At least 10 GB space
    2. At least 500 GB bandwidth (because I have very high traffic)
    3. Very high speed even under heavy load, like 1,000 visitors accessing every hour
    4. Ultra-reliable servers (99.9% uptime)
    5. Privacy control
    6. Must never delete images, even if inactive
    7. Create and manage albums
    8. A company that will stay in business for at least the next 10 years
    9. Free of cost
    10. Hotlinking/direct-linking of images

    Read the article

  • Google ranking, page crawl

    - by Nawaf Mubarak
    Please don't mind my asking this newbie question about Google ranking. I know that in order to get ranked, a page has to be crawled by Google's bots, and I have an example page through which I am trying to get a better understanding of how the system works. I made a page on my website last month. It got indexed pretty quickly, and then I found it on page 15 of Google's results for my keyword as a start; the next day it made it to page 13, and after a week it was jumping back and forth between pages 17/18 and up to 20. Now a month has passed, and it isn't listed at any position for that keyword; sometimes I find it on page 30, but later I won't find it anywhere, and it keeps happening this way these days. Even when it isn't listed on any page for my keyword, if I do a search for "site:thepageaddress" it is listed, which means I'm not penalized and my page is there for Google to see; it just isn't in the search results for my keyword. But when I search "site:thepageaddress" and use the "Search tools" option to filter by "Past day" or "Past week", it isn't listed; it only appears under "Past month". I think that means Google indexed the page, looked at it once when I published it, and never looked at it again. Is that a fair statement? So, two questions come to mind. 1. Should Google keep revisiting a page even if I haven't changed any of its content, and is that an indication that my page is doing fine, or is it normal that Google sees it once and that's it? 2. Why does my page keep jumping back and forth in the ranking results for my keyword, sometimes not even being listed, and how do I fix this? Sorry for the long message; I hope to God that somebody can help me with this. Thank you!

    Read the article

  • MediaWiki: how to make DISPLAYTITLE be used in categories listings

    - by Konstantin Boyandin
    The problem: a MediaWiki-driven site uses subpages to build a page hierarchy. When I add something like Page1/Page2/Subpage, exactly that string appears in listings and looks clumsy. I can't simply use the short subpage title (Subpage in this example), since it can appear in different contexts and could confuse users. I can use the DISPLAYTITLE magic word, with proper values of $wgRestrictDisplayTitle and $wgAllowDisplayTitle, to reassign the page title and make it show on the page itself. However, when I look at the listings of the categories the page belongs to, I still see "Page1/Page2/Subpage" instead of the title assigned. Is there a simple way (through a 'hack' or via a relevant extension) to make the new title appear in every listing as well?

    Read the article

  • What is the typical example of old school website design?

    - by Pierre 303
    I want to build a website for a retro thing that was popular in the mid-90s (the beginning of the commercial internet), so I want to use old design elements that were very popular at that time. The first thing that comes to my mind is those "under construction" animated GIFs; people often put animated GIFs everywhere. But also those awful repeating backgrounds. So yes, I want my website to look exactly like the mid-nineties ;) (please suggest practical and usable features; I guess a Java applet menu would not work today, nor would saying at the bottom that this website is optimized for Netscape 3). EDIT: for those who want to see the result: Retrology

    Read the article

  • Shared hosting bandwidth limits

    - by mike
    I have a shared hosting account with a 20 GB monthly bandwidth limit. I have exceeded my monthly limit, and according to my host my counter is never reset; they say they use a continuous 30-day counter. So, for example, I make payment on the 1st of each month. Say I use 20 GB in the last week of the month: my bandwidth counter is not reset on the 1st of the new month, and my bandwidth only becomes available again in the last week of the new month. Is this common practice among shared hosting companies? It sounds a bit shady to me. Surely my counter should be reset on the 1st of every month when I make payment, with 20 GB of bandwidth available from the day payment is made?

    Read the article

  • Are shorter URLs better for SEO?

    - by articlestack
    Many people shorten their URLs, but as I understand it, this creates the overhead of an extra redirection, others cannot guess the target article from the URL, and it should be less friendly for "inurl:..."-type searches. Should I shorten the URLs of my sites? Is there any advantage to short URLs besides the fact that they take fewer characters in anchor tags on the page (good for page load)?

    Read the article

  • Why is a Joomla-based website that was copied off of a live server to localhost not showing pictures and throwing a 404 error?

    - by Darius
    I have copied a Joomla-based website via FTP onto my machine, and I am trying to make it run on my localhost, which is provided by the latest version of XAMPP. I have exported and imported the DB with no problems, and I have placed all the files and folders into the htdocs folder, but when I go to localhost/examplesite all I get is the text that is on the front page: no pictures, and it displays a 404 error. Do I need to make changes to .htaccess? If so, can someone point me in the right direction? Thanks
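
    Two usual suspects when a Joomla copy moves to a subfolder (guesses, since the exact paths are site-specific): configuration.php often carries a $live_site value and absolute log/tmp paths copied over from the live server, and if SEF URLs are enabled, the copied .htaccess needs its rewrite base pointed at the subfolder:

        # In the copied .htaccess (the folder name is an example):
        RewriteBase /examplesite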

    Read the article

  • Creating an encrypted, web-based proxy

    - by Jason
    I have moved to Asia, where my internet connection is censored, and I'd like to check my messages on social sites, which happen to be blocked. As virtually all proxy servers are blocked in this country, I've decided to attempt to roll my own encrypted proxy server. Please note, the key word here is encrypted: if the sniffer sees anything like f@c3b00k or w:k:p3d:ia travelling down the wire, I'm had. I have a website hosted with GoDaddy (Windows with PHP 5.2 & IIS 7). Is there any way I can set up an encrypted proxy through this service? If so, how, and what open source tools are available to use?

    Read the article

  • Google page events monitoring and analysis

    - by Homunculus Reticulli
    I have read the Google page event documentation, but I am not sure I understand it correctly. I am new to Google Analytics, and I have two questions. 1. Once I have Google Analytics enabled for my site (i.e. I have inserted the tracking code in my pages, etc.), do I need to set anything else up at the Google end, i.e. in my Google Analytics account? 2. It is not clear to me how the event data can be aggregated and analyzed. For instance, if I want to track an event under category category for click action action, I will use the following code snippet:

        <a href="some-uri.htm"
           onclick="_gaq.push(['_trackEvent', 'category', 'action', 'label']);">Do Something</a>

    For the sake of simplicity, let's say I am interested in monitoring click events in my header and footer, and I want to find out on which pages the header and/or footer is clicked most often. How would I set things up so that I can analyze the header/footer clicks aggregated at the page level?
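
    One common pattern (a sketch using the same classic _gaq API as above): keep the category fixed per region of the page and pass the current page path as the label, so the event reports can be broken down per page within each category.

        <a href="some-uri.htm"
           onclick="_gaq.push(['_trackEvent', 'Header', 'click', window.location.pathname]);">
          Do Something</a>
        <!-- Footer links would use 'Footer' as the category. Drilling
             into a category and viewing by label then gives per-page
             click counts. -->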

    Read the article
