Search Results

Search found 897 results on 36 pages for 'rank'.

Page 27/36

  • Web pages with mixed ownership photos

    - by dstonek
    I have a photo website. 15% of the photos belong to approved registered users, who have agreed to my terms for uploading their images to my web pages. I include a photographer credit in the bottom-right corner. To identify the site with Google, every page contains a Google+ button linking to MY Google+ page, and it also contains <link href="https://plus.google.com/nnnnnnnnnn/" rel="publisher" />. I need some advice on respecting Google's rules for pages containing other photographers' images, so that I am not penalized for content that could be seen as duplicated or stolen. My other concern is whether adding G+ links (to MY photo page) and a Google publisher ID would harm my site's rank because some pages contain third-party photos.

    Read the article

  • SEO: Is promoting your backlinks a good strategy for improving search results for my site's name?

    - by user4394
    I run a website that's been around for about three years in the sports space. I rank well for my targeted keywords, but searching for the name of the site itself returns very poor results: it shows my site, its FB/Twitter, and then 15 pages of unrelated spam that happens to contain the two words that, combined, form my website's name. After that, my backlinks begin to show up sporadically. As far as I can tell, I simply don't have enough backlinks, and the backlinks I do have rank worse than the spam. (Site Explorer lists 200 external links to any page on our domain and 20 external links directly to the front page.) To counter this, my strategy is to promote my backlinks so they earn a better page rank than the spam. Does that make sense? Am I going in the right direction, or should I just focus on getting more backlinks pointing directly to my site? Thanks in advance, and I'd be happy to answer any questions I can (without giving away my site, of course).

    Read the article

  • E-Commerce Website

    - by haargott
    I am planning to create an e-commerce website where users can buy products and services. I want users to register and also participate in something like a browser game: every user may receive questions to answer, they receive points for each question answered correctly, and the number of collected points decides their rank. Edit 2: Currently I am considering using only HTML, CSS, JavaScript, PHP and SQL to build this e-commerce website. I was also thinking about learning jQuery, as it may help me, but I am not sure whether I should code everything myself or use the library to move faster. 1) Are those languages sufficient for creating the website described? 2) What kinds of free software tools and frameworks are most appropriate to use when creating this e-commerce website?

    Read the article

  • Advantages of country TLD vs. .com

    - by Tschareck
    I want to get a domain for my site. The site's topic is Vienna, but the content will be in English. I am trying to decide whether to get a .com domain or a .at domain. .at is both much cheaper and easier to get (there is less chance that my desired phrase is already registered). Is there any disadvantage in terms of SEO and page rank if my domain does not end with .com? The site will be in English and targeted not just at Austria but globally, mostly at foreign tourists. I don't care whether the address is easy to remember; I expect most traffic to come from search engines anyway.

    Read the article

  • Why is my website not ranking on the first page of Google? [on hold]

    - by India SEO Analyst
    I am handling the website www.usamovingandstorage.com and targeting the keyword "chicago movers", but my website is on the third page. My website has good backlinks, and I recently removed irrelevant backlinks as well. I compared my competitors' websites, such as www.ampolmoving.com and www.chicagomovers.com; they have no such strong backlinks, yet they rank on the first page in Google. I compared the three websites in www.opensiteexplorer.org, and my site shows good results there. How did this happen? I need a full comparison: why is my site ranking on the third page, and what actions do I need to take to rank on the first page?

    Read the article

  • Can we 301 redirect to a new page, but still publish the old content somewhere else?

    - by KBS
    We have a page on the site which ranks well for an SEO term (top 5) but contains old information. We have added a new page, but Google doesn't rank it as well. The information on these pages is time-sensitive. Old: example.com/2013-related-information.html New: example.com/2014-related-information.html The obvious solution is to delete the old page and do a 301 redirect to the new page. But can we still keep the old page by giving it a new URL? That is, example.com/2013-related-information.html is redirected to example.com/2014-related-information.html, and the old content is recreated at a new address such as example.com/new-2013-related-information.html. What we are trying to do is send the user to the fresh page while still not destroying the record copy, in case someone wants to go and dig up the old information.

    Read the article

  • How to recover organic position in Google results after server down?

    - by ElHaix
    I have several sites that were doing quite well in terms of organic SEO rankings, and I have the important ones set up in Google's Webmaster Tools. Long story short, the server was down for about two weeks. Now, in AdSense and Analytics, I am seeing that page views are SLOWLY increasing, and I would like to know if there is anything I can do now to expedite regaining those positions. Since there were several errors from that server, is it possible that Google will now rank any site from that IP address lower due to those two weeks of errors? Is this something that I just have to let ride out? Thanks.

    Read the article

  • Quickest way to research a set of pages' backlinks

    - by JeremyB
    I have a list of 300+ pages (chosen because they rank for a keyword I'm interested in) and I want to compile a list of all the (known) inbound links to those pages. What's the fastest way to do this? It seems like the tools out there (Yahoo Site Explorer, SEOmoz, Majestic) require you to either a) manually export each set of links by hand, or b) get data at the domain level (e.g. Majestic's Clique Hunter). Does anyone know of an efficient way to do this? I ask because I'm about to write a bunch of code and I don't want to waste my time if there's another tool that will work. I know SEOmoz and Majestic have APIs, but I'm wondering if there's a more user-friendly option.

    Read the article

  • How to remove a page from a site without affecting Google SERPs

    - by Savas Zorlu
    I have a travel website. Just for information purposes, I had put up a weather page. Now I realize that this page is increasing my overall bounce rate, because people who are looking for the weather forecast land on that page, get what they want, and exit. What is the safest method to get rid of that page? Would it hurt my Google rank if I removed it completely, or is there a better way to handle this situation? Around 21 percent of my daily hits are on that page. I would have been happy if my aim were to provide weather data for the location; however, my site needs to focus on selling hotels, so I think I need to get rid of this weather page immediately. What do you think?

    Read the article

  • After replacing all tables in an old website with divs, what other steps should I take?

    - by guisasso
    I designed a website a few years back; it ranks pretty well, the customer is happy, no problems there. I took one of the pages, manually replaced all the tables with divs, used structured data, and got the page to look exactly the same. I would like to know what other steps I should take to improve, or at least not hurt, this page's rank, or whether I should just not bother altogether. What are best practices here? The page is not live yet. Thanks.

    Read the article

  • Alt text vs CSS sprites (SEO vs speed)

    - by leeoniya
    I'm reworking our site to reduce HTTP requests and blocking requests by concatenating JS and CSS, gzipping, loading all JS via LABjs, and using CSS sprites for images that were previously loaded individually via <img> tags. Progress has been great so far - a 5x page load performance improvement. However, we're in the top 5 organic search rankings in Google for many targeted keywords and phrases, and I'm afraid that eliminating so many img tags with alt attributes could hurt our SEO. Does anyone have experience with manipulating or removing alt attributes and the effect on SEO positions? Is a previous rank "sticky"?

    Read the article

  • How to choose a company to work for? [on hold]

    - by 0x90
    I would like to weigh the pros and cons of three jobs I can take. I thought of these parameters; should I rank each option according to all of them? What source control system do they use? What debugging tools do they have? What profiling tools do they use? Is there a validation team? How often do they build? What bug tracking system do they use? For silicon companies: what emulators, simulators, and pre-silicon platforms do they have? How supportive is the IT in the company? Salary/bonuses. What else should I take into consideration?

    Read the article

  • Things to Look For in Finding the Best SEO Company

    Preparing to employ the best SEO company? Given the impact of search engine optimisation (SEO) on search engine rankings, finding the best SEO company for your business is more crucial than ever. In a way, it's like finding the right shoe: it fits, it's easy to wear, yet it's resilient and lasts long. When SEO services are properly handled, websites and blogs rank very high on major search engines like Yahoo, Google, and Bing through on-page and off-page SEO techniques, and the best SEO company can assist you in this area.

    Read the article

  • SEO Marketing - How to Promote Your Website and Gain More Traffic?

    Having problems promoting your website? Do you risk everything by trying to put your website on top with a weak SEO marketing strategy? SEO marketing is a very important part of promoting your website and marketing your products. It will help you gain more traffic to your website and increase your page rank. However, it will only be a waste of money if your website has a weak SEO marketing strategy. Remember that people nowadays use the internet to find information on any website, possibly including yours.

    Read the article

  • Using Artificial Intelligence (AI) to predict Stock Prices

    - by akaphenom
    Given a set of data very similar to the Motley Fool CAPS system, where individual users enter BUY and SELL recommendations on various equities: what I would like to do is show each recommendation and somehow rate it (1-5) as to whether it was a good predictor (i.e. correlation coefficient = 1) of the future stock price (or EPS or whatever), a horrible predictor (i.e. correlation coefficient = -1), or somewhere in between. Each recommendation is tagged to a particular user, so that can be tracked over time. I can also track market direction (bullish/bearish) based on something like the S&P 500 price. The components I think would make sense in the model are: user direction (long/short), market direction, and the sector of the stock. The thought is that some users are better in bull markets than bear markets (and vice versa), some are better at shorts than longs, and then some combination of the above. I can automatically tag the market direction and sector (based on the market at the time and the equity being recommended). The idea is that I could present a series of screens and rank each individual recommendation by displaying the available data: absolute, market, and sector outperformance over a specific time period. I would follow a detailed list for ranking the stocks so that the ranking is as objective as possible. My assumption is that a single user is right no more than 57% of the time, but who knows. I could load the system and say "Let's rank the recommendation as a predictor of stock value 90 days forward", and that would represent a very explicit set of rankings. NOW here is the crux: I want to create some sort of machine learning algorithm that can identify patterns over time, so that as recommendations stream into the application we maintain a ranking of that stock (i.e. similar to a correlation coefficient) as to the likelihood that that recommendation (in addition to the past series of recommendations) will affect the price. Now here is the super crux: I have never taken an AI class or read an AI book, never mind anything specific to machine learning. So I am looking for guidance: a sample or description of a similar system I could adapt, places to look for info, or any general help. Or even a push in the right direction to get started. My hope is to implement this with F#, impress my friends with a new skill set in F# plus an implementation of machine learning, and potentially produce something (application/source) I can include in a tech portfolio or blog. Thank you for any advice in advance.
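
    A minimal sketch of the scoring idea above (my own illustration in Python rather than the poster's hoped-for F#; the recommendation records and the price_at lookup are hypothetical stand-ins for the real data feed): score each user by correlating the direction of their calls (+1 for BUY, -1 for SELL) with the realized forward return over a fixed horizon, which is essentially the correlation coefficient the poster describes.

        from datetime import timedelta
        from statistics import correlation  # Python 3.10+

        def forward_return(price_at, ticker, date, horizon_days=90):
            # Percentage change in price `horizon_days` after the recommendation date.
            p0 = price_at(ticker, date)
            p1 = price_at(ticker, date + timedelta(days=horizon_days))
            return (p1 - p0) / p0

        def user_score(recommendations, price_at, horizon_days=90):
            # Pearson correlation between call direction (+1 BUY, -1 SELL) and the
            # realized forward return: +1 = perfect predictor, -1 = reliably wrong.
            # Assumes the user has made both BUY and SELL calls; a constant
            # direction series makes the correlation undefined.
            directions, returns = [], []
            for rec in recommendations:  # each rec: {"ticker": ..., "date": ..., "action": "BUY" or "SELL"}
                directions.append(1.0 if rec["action"] == "BUY" else -1.0)
                returns.append(forward_return(price_at, rec["ticker"], rec["date"], horizon_days))
            return correlation(directions, returns)

    To capture the "better in bull markets / better at shorts" idea, the same score would be computed per bucket (market direction x sector) rather than once per user, and those per-bucket scores become the features a learning algorithm could then weight.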

    Read the article

  • Does this mean the quick-select algorithm?

    - by matin1234
    Hi, I have a question from my homework. I think my teacher wants an algorithm like quick-select for this question; is that correct? The question: a subroutine is given as a "black box" (we cannot see inside it) that finds the median of n elements in worst-case linear time. Using this black box, give a simple linear-time algorithm that takes an input i and finds the element whose rank is equal to i (among the n elements). Thanks.
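
    A minimal sketch of the intended routine (my own illustration in Python; the black-box median is stubbed with sorting here just so the sketch runs, whereas the homework assumes it is worst-case linear):

        def median(items):
            # Stand-in for the given black box; the real one finds the median
            # of n elements in worst-case O(n) time.
            s = sorted(items)
            return s[(len(s) - 1) // 2]

        def select(items, i):
            # Return the element of rank i (1-based) among items.
            m = median(items)
            smaller = [x for x in items if x < m]
            equal = [x for x in items if x == m]
            larger = [x for x in items if x > m]
            if i <= len(smaller):
                return select(smaller, i)  # rank i lies among the smaller elements
            if i <= len(smaller) + len(equal):
                return m                   # the median itself has rank i
            return select(larger, i - len(smaller) - len(equal))

        print(select([9, 1, 7, 3, 5, 8, 2], 4))  # 4th smallest -> 5

    Because the black box returns a true median, each recursive call keeps at most about half of the elements, so the running time satisfies T(n) <= T(n/2) + O(n) = O(n); this is the deterministic counterpart of quick-select.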

    Read the article

  • Most useful jQuery plugins

    - by Binoj Antony
    Which are the most useful jQuery plugins you have used? List one per answer (so the best plugins can be ranked individually), and describe what it does as well. BlockUI - can block certain elements (or the whole page) during Ajax requests. Form Plugin, jQuery UI, jQuery Validation, TableSorter, Taconite.

    Read the article

  • What software should I install on a new PC?

    - by Armentia
    What software would be a good idea to get for a new or freshly reformatted PC? Feel free to rank how vital each piece of software is, as well as possible alternatives. Microsoft Office Suite // OpenOffice is the first thing that comes to mind, but things such as the .NET Framework don't pop into mind, and things of that nature are a pain to deal with later on in certain situations.

    Read the article

  • SQL Select - adding field to Select is changing the results

    - by nycdan
    I'm stumped by this SQL problem, which I suspect will be easy pickings for someone out there. I have a table that contains rows representing several daily lists of ranked items. The relevant fields are: ID, ListID, ItemID, ItemName, ItemRank, Date. I have a query that returns the items that were on a list yesterday but not today ("items off list"), as follows:

        Select ItemID, ListID, ItemName,
               convert(varchar(10), MAX(date), 101) as date,
               COUNT(ItemName) as days_on_list
        From Table
        Group By ItemID, ListID, ItemName
        Having Max(date) = DATEADD("d", -1, convert(varchar(10), getdate(), 101)) and ListID = 1
        Order By ListID, ItemName, COUNT(ItemName)

    Basically I'm looking for records where the max date is yesterday. It works fine and shows the number of days each item was previously on the list (although not necessarily consecutively, but that's fine for now). The problem comes when I try to add the ranking, to see what yesterday's rank was. I tried the following:

        Select ItemID, ListID, ItemName, ranking,
               convert(varchar(10), MAX(date), 101) as date,
               COUNT(ItemName) as days_on_list
        From Table
        Group By ItemID, ListID, ItemName, ranking
        Having Max(date) = DATEADD("d", -1, convert(varchar(10), getdate(), 101)) and ListID = 1
        Order By ListID, ItemName, ranking, COUNT(ItemName)

    This returns a great deal more records than the previous query, so something isn't right with it. I want the same number of records, but with the ranking included. I can get the rank by doing a self-join with a subquery, selecting records where the ItemID occurs yesterday but not today, but then I don't know how to get the count any more. Appreciation in advance for any help with this.

    ======== SOLVED ==============

        Select ItemID, ListID, ItemName, ranking,
               convert(varchar(10), MAX(date), 101) as date,
               COUNT(ItemName) as days_on_list
        from Table T
        Where date = DATEADD("d", -1, convert(varchar(10), getdate(), 101))
          and ListID = 1
          and T.ItemID Not In (select T.ItemID
                               from Table T
                               join Table T2 on T.ItemID = T2.ItemID and T.ListID = T2.ListID
                               where T.date = DATEADD("d", -1, convert(varchar(10), getdate(), 101))
                                 and T2.date = convert(varchar(10), getdate(), 101)
                                 and T.ListID = 1)
        Group by ItemID, ListID, ItemName, ranking

    Basically, what I did was create a subquery that finds all items that appear on both days, and then select the items that appeared yesterday but are not in that set. That let me do the aggregate function and grouping correctly. I would NOT be surprised if this is more convoluted than necessary, but I understand it and can modify it as needed, and performance doesn't seem to be an issue. Thanks everyone for the assist.

    Read the article

  • xpath: string manipulation

    - by Jindan Zhou
    So in my Scrapy project I was able to isolate some particular fields; one of the fields returns something like: [Rank Info] on 2013-06-27 14:26 Read 174 Times which was selected by the expression: (//td[@class="show_content"]/text())[4] I usually do post-processing to extract the datetime information, i.e., 2013-06-27 14:26. Now that I've learned a little more about XPath substring manipulation, I am wondering whether it is even possible to extract that piece of information in the first place, i.e., in the XPath expression itself? Thanks,
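
    A minimal sketch of what that could look like (my own example, checked with lxml rather than Scrapy's selectors; the same XPath 1.0 string functions should also work in a Scrapy/parsel selector):

        from lxml import etree

        doc = etree.fromstring(
            '<td class="show_content">[Rank Info] on 2013-06-27 14:26 Read 174 Times</td>'
        )

        # substring-after/substring-before run inside the XPath engine, so the
        # expression itself returns the datetime string -- no Python post-processing.
        when = doc.xpath(
            "substring-before(substring-after(//td[@class='show_content']/text(), '] on '), ' Read')"
        )
        print(when)  # 2013-06-27 14:26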

    Read the article

  • Output on namespaced xpath in java

    - by user347928
    I have the following code and am having some trouble with a specific field and its output. The namespace is connected, but the required field doesn't seem to be output. Any info on this would be great.

        import org.w3c.dom.Document;
        import org.xml.sax.SAXException;

        import javax.xml.parsers.DocumentBuilderFactory;
        import javax.xml.parsers.DocumentBuilder;
        import javax.xml.parsers.ParserConfigurationException;
        import javax.xml.xpath.XPathFactory;
        import javax.xml.xpath.XPath;
        import javax.xml.xpath.XPathExpressionException;
        import java.io.ByteArrayInputStream;
        import java.io.IOException;

        public class test {
            public static void main(String args[]) {
                String xmlStr = "<aws:UrlInfoResponse xmlns:aws=\"http://alexa.amazonaws.com/doc/2005-10-05/\">\n" +
                        "  <aws:Response xmlns:aws=\"http://awis.amazonaws.com/doc/2005-07-11\">\n" +
                        "    <aws:OperationRequest>\n" +
                        "      <aws:RequestId>blah</aws:RequestId>\n" +
                        "    </aws:OperationRequest>\n" +
                        "    <aws:UrlInfoResult>\n" +
                        "      <aws:Alexa>\n" +
                        "        <aws:TrafficData>\n" +
                        "          <aws:DataUrl type=\"canonical\">harvard.edu/</aws:DataUrl>\n" +
                        "          <aws:Rank>1635</aws:Rank>\n" +
                        "        </aws:TrafficData>\n" +
                        "      </aws:Alexa>\n" +
                        "    </aws:UrlInfoResult>\n" +
                        "    <aws:ResponseStatus xmlns:aws=\"http://alexa.amazonaws.com/doc/2005-10-05/\">\n" +
                        "      <aws:StatusCode>Success</aws:StatusCode>\n" +
                        "    </aws:ResponseStatus>\n" +
                        "  </aws:Response>\n" +
                        "</aws:UrlInfoResponse>";

                DocumentBuilderFactory xmlFact = DocumentBuilderFactory.newInstance();
                xmlFact.setNamespaceAware(true);
                DocumentBuilder builder = null;
                try {
                    builder = xmlFact.newDocumentBuilder();
                } catch (ParserConfigurationException e) {
                    e.printStackTrace();
                }

                Document doc = null;
                try {
                    doc = builder.parse(new ByteArrayInputStream(xmlStr.getBytes()));
                } catch (SAXException e) {
                    e.printStackTrace();
                } catch (IOException e) {
                    e.printStackTrace();
                }

                System.out.println(doc.getDocumentElement().getNamespaceURI());
                System.out.println(xmlFact.isNamespaceAware());

                String xpathStr = "//aws:OperationRequest";
                XPathFactory xpathFact = XPathFactory.newInstance();
                XPath xpath = xpathFact.newXPath();
                String result = null;
                try {
                    result = xpath.evaluate(xpathStr, doc);
                } catch (XPathExpressionException e) {
                    e.printStackTrace();
                }
                System.out.println("XPath result is \"" + result + "\"");
            }
        }

    Read the article
