Search Results

Search found 9960 results on 399 pages for 'iwork pages'.


  • Google indexed the same page under two URLs (despite rel-canonical)

    - by unor
    The Super User question "Playing mp3 in quodlibet displays “GStreamer output pipeline could not be initialized” error" is indexed under two URLs in Google: http://superuser.com/questions/651591/playing-mp3-in-quodlibet-displays-gstreamer-output-pipeline-could-not-be-initia http://superuser.com/questions/651591/playing-mp3-in-quodlibet-displays-gstreamer-output-pipeline-could-not-be-initia/652058 The first one is the canonical one; the corresponding rel-canonical is included in both pages: <link rel="canonical" href="http://superuser.com/questions/651591/playing-mp3-in-quodlibet-displays-gstreamer-output-pipeline-could-not-be-initia" /> Google also indexed http://superuser.com/a/652058, which redirects to the answer: http://superuser.com/questions/651591/playing-mp3-in-quodlibet-displays-gstreamer-output-pipeline-could-not-be-initia/652058#652058 Now, the second URL from above is the same as this one minus the fragment #652058. So Google seems to strip the fragment, which results in exactly the same page under another URL (= containing the answer ID /652058 as suffix), and indexes it, too -- despite rel-canonical and duplicate content. Shouldn’t Google recognize this and only index the canonical variant? And what could be the reason why Stack Exchange includes the answer ID in the URL path, and not only in the fragment (resulting in various URL variants for the same page)?
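
    As an aside, a minimal sketch (TypeScript/Node, purely illustrative and not how Stack Exchange actually serves these pages) of collapsing the answer-id variant onto the canonical question URL with a 301, rather than relying on rel-canonical alone:

        import * as http from "http";

        const server = http.createServer((req, res) => {
          const url = req.url ?? "/";
          // Match /questions/<question-id>/<slug>/<answer-id> and strip the answer id.
          const variant = url.match(/^(\/questions\/\d+\/[^\/]+)\/\d+$/);
          if (variant) {
            res.writeHead(301, { Location: `http://superuser.com${variant[1]}` });
            res.end();
            return;
          }
          // Otherwise render the question page, which still carries the
          // <link rel="canonical" ...> hint shown above.
          res.writeHead(200, { "Content-Type": "text/html" });
          res.end("<!-- question page markup -->");
        });

        server.listen(8080);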

    Read the article

  • Making WatiN Wait for JQuery document.Ready() Functions to Complete

    - by Steve Wilkes
    WatiN's DomContainer.WaitForComplete() method pauses test execution until the DOM has finished loading, but if your page has functions registered with JQuery's ready() function, you'll probably want to wait for those to finish executing before testing it. Here's a WatiN extension method which pauses test execution until that happens. JQuery (as far as I can see) doesn't provide an event or other way of being notified when it has finished running your ready() functions, so you have to get around it another way. Luckily, because ready() executes the functions it's given in the order they're registered, you can simply register another one which adds a 'marker' div to the page, and tell WatiN to wait for that div to exist. Here's the code; I added the extension method to Browser rather than DomContainer (Browser derives from DomContainer) because it's the sort of thing you only execute once for each of the pages your test loads, so Browser seemed like a good place to put it.

        public static void WaitForJQueryDocumentReadyFunctionsToComplete(this Browser browser)
        {
            // Don't try this if JQuery isn't defined on the page:
            if (bool.Parse(browser.Eval("typeof $ == 'function'")))
            {
                const string jqueryCompleteId = "jquery-document-ready-functions-complete";

                // Register a ready() function which adds a marker div to the body:
                browser.Eval(
                    @"$(document).ready(function() { " +
                    @"$('body').append('<div id=""" + jqueryCompleteId + @""" />'); " +
                    @"});");

                // Wait for the marker div to exist or make the test fail:
                browser.Div(Find.ById(jqueryCompleteId))
                    .WaitUntilExistsOrFail(10, "JQuery document ready functions did not complete.");
            }
        }

    The code uses the Eval() method to send JavaScript to the browser to be executed; first to check that JQuery actually exists on the page, then to add the new ready() function. WaitUntilExistsOrFail() is another WatiN extension method I've written (I've ended up writing really quite a lot of them) which waits for the element on which it is invoked to exist, and uses Assert.Fail() to fail the test with the given message if it doesn't exist within the specified number of seconds. Here it is:

        public static void WaitUntilExistsOrFail(this Element element, int timeoutInSeconds, string failureMessage)
        {
            try
            {
                element.WaitUntilExists(timeoutInSeconds);
            }
            catch (WatinTimeoutException)
            {
                Assert.Fail(failureMessage);
            }
        }

    Read the article

  • What are solutions and tradeoffs to maintain search result consistency in a web application

    - by iammichael
    Consider a web application with a custom search function that must display the results in a paged manner (twenty per page with up to hundreds of thousands of total results) and the ability to drill down to individual results that maintain next/previous links to navigate through the results. Re-executing the search on each page request to get the appropriate results for that page of data can be too expensive (up to 15s per search). Also, since the underlying data can change frequently (e.g. addition of new results), re-executing could cause the next/previous functionality to result in inconsistent behavior (e.g. the same results reappearing on a later page after having been viewed on an earlier page). What options exist to ensure the search results can be viewed across multiple pages in a consistent manner, and what tradeoffs does each option have in terms of network, CPU, memory, and storage requirements? EDIT: I thought caching the query search results was an obvious necessity. The question is really asking about where to cache the result set and what tradeoffs each option carries. For example, storing the IDs of the entities in the result set on the client, or storing the IDs of the entities themselves in the user's session on the web server, or in a temporary table in the database. I'm not looking specifically for a single solution, as different scenarios may result in different approaches (and such a question would be better suited to stackoverflow.com than here), but more for a design comparison between the possible approaches.
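
    For reference, a minimal sketch (TypeScript, in-memory; the names and the Map-based storage are assumptions for illustration) of the snapshot approach the edit describes: capture the ordered result IDs once per search, then serve pages and next/previous links from that snapshot so newly added data cannot reshuffle what the user has already seen:

        // Run the expensive search once, freeze the ordered result IDs under a
        // token, and answer paging and next/previous requests from the snapshot.
        const snapshots = new Map<string, number[]>();

        function runSearch(query: string, executeQuery: (q: string) => number[]): string {
          const token = `${query}:${Date.now()}`;     // identifies this frozen result set
          snapshots.set(token, executeQuery(query));  // the 15-second search runs only once
          return token;
        }

        function getPage(token: string, page: number, pageSize = 20): number[] {
          const ids = snapshots.get(token) ?? [];
          return ids.slice(page * pageSize, (page + 1) * pageSize);
        }

        function neighbours(token: string, id: number): { prev?: number; next?: number } {
          const ids = snapshots.get(token) ?? [];
          const i = ids.indexOf(id);
          return { prev: i > 0 ? ids[i - 1] : undefined, next: ids[i + 1] };
        }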

    Read the article

  • How do sites avoid SEO issues / legalities with subdomain unique ids?

    - by JM4
    I was looking through a few websites recently and noticed a trend I'm not sure I understand. Sites are creating unique referral URLs for customers in the form of: http://customname.site.com (if somebody were to use http://www.site.com/customname it would function the same way). Watching the network traffic in Google Chrome, I can see the sites issue a 302 redirect at some point, presumably via some sort of .htaccess rule, taking the subdomain name (customname), applying it as a referral parameter, and then keeping it in the session for the rest of the process. However, there must be thousands of these custom URLs that people are typing in. How is it that each of these "subdomains" is not treated as a separate URL that in turn redirects to the same page (in short, generating tons of links all pointing to the same page, which Google would normally frown upon)? Additionally, the links also appear on the sites themselves as clickable links, so I'm not sure how these are not tracked. Similarly, the "unique" URL is not indexed or cached in any Google search results. How is this capability handled? It does NOT highlight the referral aspect, but a true example of this is visiting http://sfgiants.com, which does a 302 redirect to the much longer proper San Francisco Giants MLB homepage. I am wondering how SFgiants.com is not indexed (assuming that direct shortened link appears on several MLB pages)? 1 - I know these are 302 redirects; I can see this in the sites' network flow. 2 - These links do in fact appear on the page itself, because in some areas (for example, at the bottom of the page) it may say: send this page to a friend! http://name.site.com/, which in turn would again redirect to something like http://www.site.com?id=name so the id value could be stored in the session.
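
    For illustration, a rough sketch (TypeScript/Node; the host names and id parameter come from the example above, everything else is assumed) of the kind of subdomain-to-referral-parameter redirect being described:

        import * as http from "http";

        const server = http.createServer((req, res) => {
          const host = req.headers.host ?? "";
          const sub = host.endsWith(".site.com") ? host.slice(0, -".site.com".length) : "";

          if (sub && sub !== "www") {
            // customname.site.com -> temporary (302) redirect to the one real page,
            // carrying the referral name as a query parameter the application can
            // then keep in the visitor's session.
            res.writeHead(302, { Location: `http://www.site.com/?id=${encodeURIComponent(sub)}` });
            res.end();
            return;
          }
          res.writeHead(200, { "Content-Type": "text/html" });
          res.end("<!-- the single landing page; store the id value in the session here -->");
        });

        server.listen(8080);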

    Read the article

  • Why is Ubuntu unmounting my primary hard drive?

    - by Twisol
    I'm running Ubuntu 10.10 on my laptop (an Asus G73j), dual-booting Windows 7 if that matters. After using the computer for a couple of hours or so, I get a popup complaining that a file was unmounted, then my GNOME desktop panels disappear. I can't save any unsaved work (the file browser shows "Filesystem" as totally empty), and other programs break in odd ways (like Chrome can't browse to any new pages, but keeps current ones going... at least I still have Pandora to listen to when this happens!). I've tried looking in the system logs to no avail; I'm assuming it can't write any errors to the logs because, of course, the logs are on the primary hard drive. This started happening maybe a few days ago. Yesterday I upgraded from 10.04, but I believe it was happening before then. Any advice for figuring this out? EDIT: It just happened again, and I heard a little clicky sound from the hard drive about five seconds before things went south. I'm thinking I should start backing up ASAP. In response to a comment, here's the dmesg output: http://askubuntu.pastebin.com/uYGshBay Also, the SMART status says the disk has a few bad sectors, and the detailed data says there are 14. It says it passed the self-assessment, though. Lastly, this doesn't seem to be happening when I'm on Windows. I recently re-enabled ureadahead (which I disabled ages ago because it was causing Ubuntu to hang at the startup logo); could that be the source of the problem? I've disabled it again to see.

    Read the article

  • Lots of Internet browsing issues, all browsers

    - by dario_ramos
    Before the upgrade, everything was working fine. Now, however, I can connect to the Internet but a lot of stuff fails, and the weirdest thing is that it happens with Firefox, Chromium and Opera. Some of the things that fail: I can't log in to Stack Overflow - after entering my user/pass it loads for a long time on Firefox, and throws Error 408 (browser request timed out) on Chromium and Opera. I can't log in to Hotmail - similar symptoms. I can log in to Facebook, but when I try to write a comment, or just post something on my wall, it stays loading for a long time and then fails. The first two issues seem to be related to secure pages, and the third one is another issue altogether, I believe. However, they all happen with all browsers, which is really weird. Talking about weird: I connect using a Huawei SmartAX MT 810 USB modem, which cost me blood and tears to get working under Ubuntu. I ordered an ethernet modem/router from my ISP, and I'm still waiting, but this issue intrigues me anyway. Has anyone experienced this kind of problem? I Googled around, but couldn't find a similar case.

    Read the article

  • Lots of goodies

    - by wcoekaer
    We just issued a press release with a number of very good updates for everyone. There are a few things of importance: 1) As of right now, Oracle Linux 6 with the Unbreakable Kernel is certified with a number of Oracle products such as Oracle Database 11gR2 and Oracle Fusion Middleware. The certification pages in the Oracle Support portal will be updated with the latest certification status for the various products. As always, we have gone through a long period of very comprehensive testing and validation to ensure that the whole stack works really well together, with very large database workloads, middleware application workloads, etc. 2) Standard certification efforts for Oracle Linux 6 with the Red Hat Compatible Kernel are in progress, and we expect those to be completed in the next few months. Because of the compatibility between OL6 and RHEL6, we can then also state certification for RHEL6. 3) Oracle Linux binaries (and of course source code) have been free for download -and- use (including production, not just trial periods) since day one. You can freely redistribute the binaries, unlike with many other Linux vendors, where you need to pay a support subscription to even get access to the binaries. We offered both the base distribution release DVDs (OL4, OL5, OL6) and the update releases, such as 5.1, 5.2, etc., this way. Today, in this announcement, we also started to make available the bugfix and security updates released in between these update releases. So the errata streams (both binary and source code) for OL4, 5 and 6 are now free for download and use from http://public-yum.oracle.com. This includes uek and uek2. The nice thing is: if you want a complete, up-to-date system without support, use this; if you then need support, get a support subscription. Simple, convenient, effective. We have great SLAs in producing our update streams, consistency in release timing, and testing of all the components. Have at it!

    Read the article

  • How to ask developers to think about high resolution screens

    - by WhiteWind
    I just got an ASUS N56VZ laptop, and it's all good, but the screen resolution is 1920x1080 px. I am not asking how to scale UI elements. If someone is interested, I have found some tricks: increase the font size in the system settings; increase the Unity dock size in MyUnity, or in the system settings in modern Ubuntu versions; tweak userChrome.js in Firefox to make buttons/panels/icons larger; add the DefaultZoomLevel extension to Firefox to make it zoom pages initially. But all of it is miserable, because there are some big bugs: window decoration elements are way too small to pick with the mouse, so I can't resize a window easily and I can't position my cursor quickly on the close/maximize buttons; slider controls, like the one in the sound volume dialog, are hardly clickable at all; the Unity top panel (status panel & tray) can hardly contain the bigger font, so it looks ugly, while the icons stay the same size; sometimes I can't read text, as it is cropped (and I can't resize some dialogs, as they have fixed sizes); Chromium is not usable at all (OK, it's not an Ubuntu problem, but the problem still exists); Java applications are not scalable (same as above); in Firefox I am able to get decent results in most cases, but the multiplication of the system font increase and the browser font increase makes system controls (comboboxes, lists, drop-down lists) extremely big, so I can't even control the zoom level on the page. IMHO, we should post a bug report (but what kind of bug?) and vote for it! The problem is even deeper, but at least we should ask developers to think about it. So my question is: how can I post a report (the right words and the right place), and how can we (who already have this problem, or who want it solved before a hardware upgrade) vote for a faster solution? Any ideas?

    Read the article

  • 301 redirect from HTTP to HTTPS - how to be sure Google is fetching the correct information?

    - by user33692
    I'm hoping somebody might be able to provide a bit of advice on an issue I am having. I have one site where we implemented a 301 redirect on the homepage from HTTP to HTTPS. We have links on the homepage to other parts of the site that are not under SSL (in fact there is only one other page under SSL). When I go to our Webmaster Tools account I notice that we are not being provided with any webmaster information (e.g., search queries, backlinks, etc...) related to our homepage under SSL. I performed a Fetch as Google on the homepage and the information it returned is:

        HTTP/1.1 301 Moved Permanently
        Date: Fri, 08 Nov 2013 17:26:24 GMT
        Server: Apache/2.2.16 (Debian)
        Location: https://mysite.com/
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Content-Length: 242
        Keep-Alive: timeout=15, max=100
        Connection: Keep-Alive
        Content-Type: text/html; charset=iso-8859-1

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>301 Moved Permanently</title>
        </head><body>
        <h1>Moved Permanently</h1>
        <p>The document has moved <a href="https://mysite.com/">here</a>.</p>
        <hr>
        <address>Apache/2.2.16 (Debian) Server at mysite.com</address>
        </body></html>

    I am worried by the fact that Google fetch is not getting the correct Title tags and Meta information from our homepage and that this is hurting our search results. Additionally, I am worried that we need to do something specific with the sitemap to ensure that Google is correctly indexing all our pages and being able to flow from the HTTPS to the HTTP without issues. Does anybody have any advice on how we can correctly set this up or be sure that Google is fetching the correct information?
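
    For reference, a minimal sketch (TypeScript/Node rather than the poster's Apache setup) of the redirect being described, which is why a fetch of the HTTP homepage only ever sees the short 301 response above and no title or meta tags - those live on the HTTPS page it points to:

        import * as http from "http";

        // Every plain-HTTP request is answered with a 301 to the equivalent HTTPS URL.
        http.createServer((req, res) => {
          const host = req.headers.host ?? "mysite.com";
          res.writeHead(301, { Location: `https://${host}${req.url ?? "/"}` });
          res.end();
        }).listen(80);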

    Read the article

  • Working with Google Webmaster Tools

    - by com
    My first question is about Crawl errors in Google Webmaster Tools. Crawl errors are divided into a few sections, one of which is HTTP. I assume that the broken links listed under HTTP were somehow found by the crawler and are not links from the sitemap. If they were found by scanning all the sitemap pages for links, why doesn't it mention the source page, the way the Sitemaps section does with its Linked From column? And what is the meaning of Linked From there? I thought that if the name of the section is Sitemaps, then all the URLs should be taken from the sitemap, so why is there a Linked From column at all? The second question is what the best way is to treat searching on the site. How come the search result pages are getting indexed? Because all the search result pages are getting indexed, I have too many pages in Linked From. What's the right practice? Question three: in order to improve response time in WMT, can I redirect all crawler requests to a designated free web server? Is this good practice? Question four: how should I treat the Google Analytics code (with the PageView and PageLoadTime parameters) in the case where a user requests a non-existent page - should I render the Google code or not? Right now I use the Google Analytics code on the common template page, so that every page, including a non-existent page with an error message, contains the Google Analytics code; it seems like this has an influence on WMT.
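
    As an illustration of question four, a small sketch (TypeScript; the function and parameter names are made up) of the template decision being weighed, i.e. emitting or omitting the analytics snippet when the page is an error page:

        // The analytics snippet lives in a common template, so it is either emitted
        // on every response or skipped when the response is an error page.
        function renderPage(statusCode: number, body: string, trackErrorPages: boolean): string {
          const analytics =
            statusCode === 200 || trackErrorPages
              ? "<script>/* Google Analytics code, with PageView / PageLoadTime */</script>"
              : ""; // omit tracking on the 404 template if error hits should not be counted
          return `<!DOCTYPE html><html><body>${body}${analytics}</body></html>`;
        }

        // Example: rendering a request for a non-existent page without tracking it.
        console.log(renderPage(404, "<h1>Page not found</h1>", false));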

    Read the article

  • When trying to install Wine on 12.10, 'Sudo' command will not let me type in a password.

    - by Nocturnus
    As the title explains, I have been attempting to install Wine on my laptop, which is running 12.10. When I accessed the command terminal and entered "sudo add-apt-repository ppa:ubuntu-wine/ppa" I was of course met by a password prompt; when I attempted to enter my password, it flat out wouldn't let me type anything, and the only key that got a response from the terminal was "enter", which was met by "incorrect password". To bypass this issue I backed out and used the 'gksudo' command, and this new dialogue box seemed to give me access to sudo commands. I then entered "sudo apt-get update" and "sudo apt-get install wine1.5". Up until the installation everything went fine, but after entering the final command (still using gksudo) the terminal read "the following packages have unmet dependencies" and proceeded to list a bunch of "recommends". So my guess is that Wine hasn't been updated to run on 12.10... Is this true, and is there any other way to open .exe's? Also, what was with that funky password mishap? I'm totally new to Ubuntu so I've just been using support pages and tutorials; sorry if I'm a bit naive in these matters...

    Read the article

  • Focusing and Selecting the Text in ASP.NET TextBox Controls

    When a browser displays the HTML sent from a web server it parses the received markup into a Document Object Model, or DOM, which models the markup as a hierarchical structure. Each element in the markup - the <form> element, <div> elements, <p> elements, <input> elements, and so on - is represented as a node in the DOM and can be programmatically accessed from client-side script. What's more, the nodes that make up the DOM have functions that can be called to perform certain behaviors; what functions are available depends on what type of element the node represents. One function common to nearly all node types is focus, which gives keyboard focus to the corresponding element. The focus function is commonly used in data entry forms, search pages, and login screens to put the user's keyboard cursor in a particular textbox when the web page loads so that the user can start typing in his search query or username without having to first click the textbox with his mouse. Another useful function is select, which is available for <input> and <textarea> elements and selects the contents of the textbox. This article shows how to call an HTML element's focus and select functions. We'll look at calling these functions directly from client-side script as well as how to call these functions from server-side code. Read on to learn more!
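
    A small client-side sketch (TypeScript; the element id is made up for the example) of the two DOM calls the article describes:

        // Focus the search textbox and select its existing contents on page load.
        window.addEventListener("load", () => {
          const box = document.getElementById("searchQuery") as HTMLInputElement | null;
          if (box) {
            box.focus();  // give the textbox keyboard focus
            box.select(); // highlight any existing text so typing replaces it
          }
        });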

    Read the article

  • What's a good host for an active vBulletin site?

    - by Kyle
    I've been switching hosts, using a VPS each time, and I'm just really not sure I'm finding the right VPSes. I've used a VPS from burst.net & rubyringtech and I just feel like it's slowly killing my site because of the slow speed. I really don't know if it's the network or the VPS itself, but I really wish to fix this. When I top into the VPS at peak times it shows this:

        top - 03:18:56 up 16:33, 1 user, load average: 1.33, 1.40, 1.33
        Tasks: 30 total, 1 running, 29 sleeping, 0 stopped, 0 zombie
        Cpu(s): 27.2%us, 13.6%sy, 0.0%ni, 59.2%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 1048576k total, 679712k used, 368864k free, 0k buffers
        Swap: 0k total, 0k used, 0k free, 0k cached

    And pages take at least a good 2-3 minutes to load. I have only like 50-60 members on the forum also. I had a shared hosting account and the forum was lightning fast.... Is a VPS a bad idea? :\ What should I do to fix this? I'm running lighttpd with xcache, and the latest MySQL + PHP versions. The server is an Intel i7 2600 w/ 1gb uplink (I think the 1gb uplink is a lie, because I've tested the network and the highest download speed I've seen was 20mb/s from a code.google page). All in all I've seen people talking about Linode. Should I try them? I honestly don't need a dedicated server yet; it's only 50-70 members online. What should I do? I really want a VPS because I enjoy root access. Does anyone have any suggestions?

    Read the article

  • By what features and qualities are "free" and "premium" themes differentiated?

    - by Sinthia V
    I have a lot of time invested in creating WordPress templates. I want to release combinations of these templates, along with different styles and fancy front pages, as "Premium WordPress Themes". What I need to know is: what does "premium" mean? What do people expect of a GPL theme vs. a premium theme? Are there features that are considered required for a theme to be premium? Are there features that are in demand but considered "exceptional", i.e. not part of every premium theme? How can I tell the difference? I have heard tongue-in-cheek answers that say any theme that makes money is premium, but I mean to ask about what gives an outstanding theme its quality. Why is it worth more? I am technically able to do many things, but as a lone developer with a family to feed, I can't afford to spend time on features that no one cares about. I have to try to isolate the things that people want. This is serious food and rent to me. How can I get this kind of info so I can make my project successful?

    Read the article

  • What guidelines should be followed when implementing third-party tracking pixels?

    - by Strozykowski
    Background: I work on a website that gets a fair amount of traffic, and as such, we have implemented different tracking pixels and techniques across the site for various specific reasons. Because there are many agencies who are sending traffic our way through email campaigns, print ads and SEM, we have agreements with a variety of different outside agencies for tracking these page hits. Consequently, we have tracking pixels which span the entire site, as well as some that are on specific pages only. We have worked to reduce the total number of pixels available on any one page, but occasionally the site is rendered close to unusable when one of these third-party tracking pixels fails to load. This is a huge difficulty on parts of the site where JavaScript is needed for functionality built into the page but is unable to initialize until a 404 is returned on the external tracking pixel (sometimes up to 30 seconds later). I have spent some time attempting to research how other firms deal with this sort of instability with third-party components, but have come up a bit short. The plan currently is to implement our own stop-gap method to deal with these external outages, but rather than reinventing the wheel, we wanted to find out how this is dealt with on other sites. Question: Is there a good set of guidelines that should be followed when implementing third-party tracking pixels? I would love to see some white papers or other written documents about how other people have dealt with this issue.
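
    One possible stop-gap of the kind mentioned, sketched in TypeScript for the browser (the URLs and the timeout value are assumptions, not a recommendation from any white paper): inject each third-party pixel asynchronously and give up after a timeout, so the page's own JavaScript can initialize even if a tracker never answers.

        function loadPixel(src: string, timeoutMs = 3000): Promise<void> {
          return new Promise((resolve) => {
            const img = new Image();
            const timer = window.setTimeout(() => resolve(), timeoutMs); // don't wait forever
            img.onload = img.onerror = () => { window.clearTimeout(timer); resolve(); };
            img.src = src;
          });
        }

        function initializePage(): void {
          // ... the page's own functionality goes here ...
        }

        // Fire the pixels without blocking page initialization on any of them.
        ["https://tracker-one.example/pixel.gif", "https://tracker-two.example/pixel.gif"]
          .forEach((src) => { void loadPixel(src); });

        initializePage(); // no longer waits on the trackers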

    Read the article

  • Is "send us a page with code" a typical interview requirement?

    - by acm
    Recently I was asked to show "a page with code" for a job interview. Being mainly a back-end programmer (and that's the position I applied for), I first said to the person I was talking to exactly that: PHP is executed on the server and therefore not visible by just giving a "page". However, following their request, I sent links to the pages I've worked on before. Obviously they couldn't see anything except for the HTML, CSS, JS... They said it was not enough; they could not see the PHP. Understanding that they probably just wanted to know my skills and/or interest, I sent them my Stack Overflow profile. Among all my questions and answers, most of them with code, certainly the PHP is there. But it seems this is not what they wanted. Well, I don't have any code put together that I can simply publish for someone to see. And I would never do it for the code I have deployed, obviously. So my questions are: What does "send us a page with code" mean? What should I send? Is this a typical interview requirement?

    Read the article

  • Location-Based redirection and duplication in sub-directories affecting SEO

    - by Joshua
    I currently own the website www.xyz.com. The website has a sub-directory for each of the 3 target countries: .../en-US/ (United States), .../es-MX/ (Mexico), and .../es-DO/ (Dominican Republic). I have two main questions about this setup: Currently, the main domain/root (xyz.com) contains a blank index.php file, but I would like a user to be redirected to one of the sub-directories based on their regional location. What is the best way to accomplish this? I have looked at using browser language-based redirection, but how would I know whether to direct a user to the MX or DO site if the browser language is set to Spanish? Is there a way to detect a user's geographic location? Also, the 3 websites are practically identical, except that they have 3 unique color schemes and the US site is in English while the MX and DO sites are in Spanish. My problem is that I believe Googlebot is penalizing/banning my site because the Spanish text on the MX and DO pages is nearly identical and is thus marked as duplicate content/spam. Is there a way to avoid this?
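
    A rough sketch (TypeScript/Node; the lookupCountry helper is hypothetical and would wrap a real GeoIP database or geolocation service) of the geographic routing being asked about, using the three locales described above:

        import * as http from "http";

        const localeByCountry: Record<string, string> = {
          US: "/en-US/", // United States
          MX: "/es-MX/", // Mexico
          DO: "/es-DO/", // Dominican Republic
        };

        // Hypothetical GeoIP helper; a real implementation would look up the
        // visitor's IP address in a GeoIP database or web service.
        function lookupCountry(ip: string): string | undefined {
          return undefined; // stubbed out for the sketch
        }

        http.createServer((req, res) => {
          if (req.url === "/" || req.url === "/index.php") {
            const country = lookupCountry(req.socket.remoteAddress ?? "");
            res.writeHead(302, { Location: localeByCountry[country ?? ""] ?? "/en-US/" });
            res.end();
            return;
          }
          res.writeHead(200);
          res.end();
        }).listen(80);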

    Read the article

  • I am confused between PHP and ASP.NET as a career choice in the Indian software development context

    - by Confused_Guy
    I need your help (especially from the software professionals of India). I completed my MCA in 2009. After that, instead of joining a software company, I took a teaching job near my home. In the meantime I prepared myself for public sector (bank) jobs. I continued that job for one more year and left it in 2010. Now, in 2012, I feel that I should have taken the software job, so that I could earn my bread and butter and prepare for the other job on the side, because according to my qualification it would give me the best salary. Now I want to go back into the software industry, but everyone is asking for experience, and I don't have any... So which language should I learn? And what should I do, given that I have a two-year gap? Some of my friends suggested I go with PHP, as it's easier and quicker to get a job with it in India. But here the PHP guys are getting less salary compared to ASP.NET. I am planning to begin with PHP, but is it possible to switch to ASP.NET after two years of experience? Java: I know up to Servlets & JSP, which counts for little in the current market. ASP.NET: I know the basics of ASP.NET up to database connections, i.e. GridView. PHP: only the basics. So what should I do now? Which is most in demand? Is PHP good? I feel it's more like JSP pages. Please guide me; all your suggestions are needed.

    Read the article

  • Can too many 301 redirects cause a DNS error?

    - by Graham
    For a site, http://imageocd.com, that I just set up, I initially spelled the category "automobiles" as "autimobiles"... I know it's ridiculous. I then set up over 10,000 pages behind that category, e.g. http://imageocd.com/automobiles/hillman-minx-cabrio-pictures-and-wallpapers. So, I set up over 10,000 301 URL redirects to correct the spelling of "automobiles". I just checked my Google Webmasters report and got an error saying: "http://www.imageocd.com/: Googlebot can't access your site. Sep 7, 2012. Over the last 24 hours, Googlebot encountered 2 errors while attempting to retrieve DNS information for your site. The overall error rate for DNS queries for your site is 66.7%." Could the overabundance of 301 redirects be causing this? I host 13 sites on this dedicated server and all sites are running fine. I also contacted GoDaddy and they said the server is running fine. Any ideas on what might be going on? Also, I have "canonical" set up for every URL. Could this be part of the error? Thanks.
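
    As an aside, the spelling fix itself does not need 10,000 separate rules; a single pattern-based 301 can cover every page under the old category. A minimal sketch (TypeScript/Node, not the poster's Apache/.htaccess setup, and not a claim about the DNS errors):

        import * as http from "http";

        http.createServer((req, res) => {
          const url = req.url ?? "/";
          // One pattern-based rule covers every page under the misspelled category,
          // instead of 10,000 individually listed redirects.
          if (url.startsWith("/autimobiles/")) {
            res.writeHead(301, { Location: url.replace("/autimobiles/", "/automobiles/") });
            res.end();
            return;
          }
          res.writeHead(200);
          res.end();
        }).listen(80);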

    Read the article

  • Unity no longer loads in 13.04 for main user

    - by user152973
    When Ubuntu starts up, Unity fails to load (I can only see my desktop, with no Unity sidebar and no system bar in the top right). I tried the advice of "Unity does not start in Ubuntu 13.04", which recommended the following commands:

        dconf reset -f /org/compiz/
        unity --reset-icons &disown

    I ran the commands without errors and restarted the computer, but the problem persists. I am currently running GNOME. I have looked at other pages from the Google search "ubuntu unity failed to load 13.04", but the advice was similar to the above and seems to be concerned with a system upgrade from April 18, 2013. I suspect my issue is something far more recent. Please give me advice on how to restore Unity on my account, or at least figure out what the problem is. Thank you. Some information that might be relevant: - Unity has worked fine on 13.04 for the 6 months that I've had it, until today (November 10, 2013). - I have set up the update tool to automatically update when updates are available. It is very possible that the system applied some updates without my knowledge. - Interestingly, Unity works fine on the Guest account. - I have made it so the system automatically logs me in at start-up. - This is a personal laptop. No one else has access to it. - I was not doing anything with the system settings or the terminal and have not installed any new software for the past 3 days. - I am running the System76 native Linux laptop Ultra Lemur. I did not contact their support yet because it seemed unlikely that this is a System76-specific error.

    Read the article

  • Multiple domains for different products?

    - by alexandertr
    I have a website with software applications. Is it good for SEO to choose one keyword-rich domain name for each of our software products, or should we stick to a single domain? From a user's perspective I think it would be easier to remember a domain that is keyword-rich, as the user will instantly know what the product is for. But I have read articles saying that the latest trend in SEO is to stick to one domain for all of your products and invest in this single domain website. Is that true? What do you advise? Should I register a separate domain for each of our products, or should I use only one single domain? Should I do a 301 redirect with a .htaccess to a single domain? And what about the sitemaps? Should I register all sites in Google Webmaster Tools and post a separate sitemap for each one of them? Should my main site's sitemap include all pages, or should separate domains have their own sitemaps?

    Read the article

  • Why not XHTML5?

    - by eegg
    So, HTML5 is the Big Step Forward, I'm told. The last step forward we took that I'm aware of was the introduction of XHTML. The advantages were obvious: simplicity, strictness, the ability to use standard XML parsers and generators to work with web pages, and so on. How strange and frustrating, then, that HTML5 rolls all that back: once again we're working with a non-standard syntax; once again, we have to deal with historical baggage and parsing complexity; once again we can't use our standard XML libraries, parsers, generators, or transformers; and all the advantages introduced by XML (extensibility, namespaces, standardization, and so on), that the W3C spent a decade pushing for good reasons, are lost. Fine, we have XHTML5, but it seems like it has not gained popularity like the HTML5 encoding has. See this SO question, for example. Even the HTML5 specification says that HTML5, not XHTML5, "is the format suggested for most authors." Do I have my facts wrong? Otherwise, why am I the only one that feels this way? Why are people choosing HTML5 over XHTML5?

    Read the article

  • Transformation?

    - by Joe G
    I started working at Oracle in 1997. Since then, we (and most everyone) have been talking about transforming finance operations... but what does that mean exactly? From my perspective, I thought it meant eliminating waste and menial tasks and giving your finance team more time to work on more strategic things. That seems logical and simplistic, but how much progress have finance teams (and their IT departments) really made over the past fifteen years? I have yet to talk to a customer that doesn't have one amusing task that makes me chuckle. Sometimes they still print hard copies of transactions to "file," or sometimes they print 700 pages of data to "analyze," or sometimes they cut and paste from one or more reports into a spreadsheet. Upon hearing these things, my first question is always, "Why do you do that?" to which their response is rarely the same. Sometimes it's related to trust (in both the employee and the system). Sometimes it's habit-based. And sometimes it is just impossible to accomplish the end result without some manual effort. I will say that I used to print nearly everything that I needed to review - partly because I liked having the ability to scribble notes on the paper, and partly because it was uncomfortable to read online. However, I have changed. Rarely do I print anything anymore. It's easier for me to read and notate online, and, well, I guess I've just changed my habits. So where do you think our resistance to change comes from? Is it truly deficits in our systems, or is it our own personal resistance to change? What's your most annoying and untransformed task?

    Read the article

  • Massive 404 attack with non existent URLs. How to prevent this?

    - by tattvamasi
    The problem is a whole load of 404 errors, as reported by Google Webmaster Tools, for pages and queries that have never been there. One of them is viewtopic.php, and I've also noticed a scary number of attempts to check whether the site is a WordPress site (wp_admin) and for the cPanel login. I block TRACE already, and the server is equipped with some defense against scanning/hacking. However, this doesn't seem to stop it. The referrer is, according to Google Webmaster Tools, totally.me. I have looked for a solution to stop this, because it certainly isn't good for the poor real actual users, let alone the SEO concerns. I am using the Perishable Press mini blacklist (found here), a standard referrer blocker (for porn, herbal, casino sites), and even some software to protect the site (XSS blocking, SQL injection, etc.). The server is using other measures as well, so one would assume that the site is safe (hopefully), but it isn't ending. Does anybody else have the same problem, or am I the only one seeing this? Is it what I think, i.e., some sort of attack? Is there a way to fix it, or better, prevent this useless resource waste? EDIT: I've never used the question to thank for the answers, and hope this can be done. Thank you all for your insightful replies, which helped me find my way out of this. I have followed everyone's suggestions and implemented the following: a honeypot; a script that listens for suspect URLs on the 404 page and sends me an email with the user agent/IP, while returning a standard 404 header; and a script that rewards legitimate users, in the same custom 404 page, in case they end up clicking on one of those URLs. In less than 24 hours I have been able to isolate some suspect IPs, all listed in Spamhaus. All the IPs logged so far belong to spam VPS hosting companies. Thank you all again; I would have accepted all answers if I could.
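
    A rough sketch (TypeScript/Node; the patterns and the logging are assumptions) of the kind of 404 listener described in the edit, which records the probe while still returning a plain 404 to the scanner:

        import * as http from "http";

        // Patterns the probes mentioned above tend to hit; purely illustrative.
        const suspectPaths = [/viewtopic\.php/i, /wp[-_]?admin/i, /cpanel/i];

        http.createServer((req, res) => {
          const url = req.url ?? "/";
          if (suspectPaths.some((pattern) => pattern.test(url))) {
            // Record the probe; the poster emails these details instead of logging.
            console.log(`probe ${url} from ${req.socket.remoteAddress} ua=${req.headers["user-agent"]}`);
          }
          // The scanner still just gets an ordinary 404.
          res.writeHead(404, { "Content-Type": "text/html" });
          res.end("<h1>404 Not Found</h1>");
        }).listen(80);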

    Read the article

  • Setting up a LAMP VM server for Development and Testing?

    - by TdotThomas
    Info: I would like to set up a VM server on my local computer which will serve pages in exactly the same way as my current hosting (but only to me, on my local computer). I currently pay a big web hosting company to host my website & web store and they are doing a great job, but I would like to be able to work on my web site and its corresponding MySQL DB, HTML, and PHP code without being at risk of messing something up completely on the live servers. My current plan of action: Set up a VM web server with Debian, MySQL, PHP, Apache. Copy the web store (PHP/HTML) code to the VM server. Copy my current MySQL databases from my hosting provider and install them on the VM server. Modify and test new features on the VM server. Upload the MySQL DB and HTML/PHP code back to the web host's server, where it should work as before but with the new modifications. Questions: Now I'm pretty sure I have steps one and two down correctly, but I can't for the life of me figure out how to proceed next, so here are my questions. I have my /etc/hosts file set up so www.MySite.test redirects to the IP address of the local VM web server. Once I import my PHP/HTML files and MySQL file, what's the best way to navigate around the fact that all of my files and DBs will reference www.MySite.com? I can export my MySQL DBs, but do I also have to export my MySQL users and passwords to access those DBs, or are those coded into my HTML/PHP code?

    Read the article
