Search Results

Search found 459 results on 19 pages for 'unclear'.


  • How do I restore my system from a "Backup and Restore Center" backup?

    - by Daniel R Hicks
    The Windows (Vista) documentation and the available online info are comprehensively vague. If I have a moderately brain-dead system and want to restore it, and I have a "Backup and Restore Center" backup whose "delta" is not quite a week old (but with a "full backup" behind it), what steps do I go through to recover my box back to that backup point? It's totally unclear whether simply doing "restore all" from the (advanced) "Center" is sufficient, or whether I need to first take the box back to day zero with the system restore DVD and so on. (Just editing this to get my correct ID associated with it.)

    Read the article

  • What's the entry path towards a database administrator job?

    - by FarmBoy
    I've recently lost my job, and I'm working towards changing vocations. My degrees are in Mathematics, but I'm interested in IT, particularly working as a DBA or a programmer. I don't have IT experience, but I have the resources to be patient with the transition, and I'm currently learning SQL and Java. Obviously, I need some job experience. My question is this: What entry-level jobs might allow me to gain useful experience towards obtaining a DBA job? It seems to me that programmers often start as testers, and system administrators could start at a help-desk position, but it is unclear how one begins to work with a company's database.

    Read the article

  • LLVM-3.1 libLLVMSupport.a undefined reference to `dladdr'

    - by user91387
    I'm trying to compile using the llvm-3.1 package. I'm running 12.04 x64 (3.2.0-26 kernel) and 12.10 (3.5.0-4) x64 with llvm-3.1 backported from quantal, then from Debian experimental. Next I tried 12.10 with the native Ubuntu llvm-3.1 package; this failed as well.

        user@system:/tmp/llvm-test# make
        compiling cpp yacc file: decaf-llvm.y
        output file: decaf-llvm
        bison -b decaf-llvm -d decaf-llvm.y
        /bin/mv -f decaf-llvm.tab.c decaf-llvm.tab.cc
        flex -odecaf-llvm.lex.cc decaf-llvm.lex
        g++ -o ./decaf-llvm decaf-llvm.tab.cc decaf-llvm.lex.cc decaf-stdlib.c `llvm-config --cppflags --ldflags --libs core jit native` -ly -ll
        /usr/lib/llvm-3.1/lib/libLLVMSupport.a(Signals.o): In function `PrintStackTrace(void*)':
        (.text+0x6c): undefined reference to `dladdr'
        /usr/lib/llvm-3.1/lib/libLLVMSupport.a(Signals.o): In function `PrintStackTrace(void*)':
        (.text+0x18f): undefined reference to `dladdr'
        collect2: error: ld returned 1 exit status
        make: *** [decaf-llvm] Error 1

    I know the code works, as I've run it fine on CentOS using llvm-3.1-6.fc18 (rpm). Google was a bit helpful with this: "On some systems, including Ubuntu 11.10, linking may fail with message that libLLVMSupport.a in function PrintStackTrace(void*) has undefined reference to dladdr." "Workaround is to compile LLVM with cmake specifying the following variable: -DCMAKE_EXE_LINKER_FLAGS=-ldl" (http://svn.dsource.org/projects/bindings/trunk/llvm-3.0/Readme). I double-checked my ldflags and everything seems OK:

        user@system:/# llvm-config --ldflags
        -L/usr/lib/llvm-3.1/lib -lpthread -lffi -ldl -lm

    I'm unclear on what to do next; any suggestions?
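    Not from the original post, but a sketch of the usual fix when linking against the prebuilt static libraries (rather than rebuilding LLVM as the quoted readme suggests): append -ldl after the LLVM libraries so the linker resolves dladdr from libdl. The paths and file names below are the ones from the post; the trailing flag is the only change.

        # hedged sketch: same link line as above, with -ldl added at the very end
        g++ -o ./decaf-llvm decaf-llvm.tab.cc decaf-llvm.lex.cc decaf-stdlib.c \
            `llvm-config --cppflags --ldflags --libs core jit native` -ly -ll -ldl

    With GNU ld the order matters: a library that satisfies symbols used by libLLVMSupport.a has to appear after that archive on the command line, which is why the -ldl already present inside `llvm-config --ldflags` (emitted before the --libs list) does not help on its own.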

    Read the article

  • Managing JS and CSS for a static HTML web application

    - by Josh Kelley
    I'm working on a smallish web application that uses a little bit of static HTML and relies on JavaScript to load the application data as JSON and dynamically create the web page elements from that.

    First question: Is this a fundamentally bad idea? I'm unclear on how many web sites and web applications completely dispense with server-side generation of HTML. (There are obvious disadvantages of JS-only web apps in the areas of graceful degradation / progressive enhancement and being search-engine friendly, but I don't believe that these are an issue for this particular app.)

    Second question: What's the best way to manage the static HTML, JS, and CSS? For my "development build," I'd like non-minified third-party code, multiple JS and CSS files for easier organization, etc. For the "release build," everything should be minified, concatenated together, etc. If I were doing server-side generation of HTML, it'd be easy to have my web framework generate different development versus release HTML that includes multiple verbose files versus concatenated minified code. But given that I'm serving only static HTML, what's the best way to manage this? (I realize I could hack something together with ERB or Perl, but I'm wondering if there are any standard solutions.) In particular, since I'm not doing any server-side HTML generation, is there an easy, semi-standard way of setting up my static HTML so that it contains code like

        <script src="js/vendors/jquery.js"></script>
        <script src="js/class_a.js"></script>
        <script src="js/class_b.js"></script>
        <script src="js/main.js"></script>

    at development time and

        <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.8.2/jquery.min.js"></script>
        <script src="js/entire_app.min.js"></script>

    for release?
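    Not part of the question, but as a minimal sketch of the "hack something together" route, assuming Node's uglify-js is installed and using the file names above: concatenate the files in dependency order for the release bundle, and keep the development page pointing at the individual files.

        # hedged sketch: build the release bundle from the individual sources
        mkdir -p build
        cat js/class_a.js js/class_b.js js/main.js > build/entire_app.js
        uglifyjs build/entire_app.js -o js/entire_app.min.js

    Whether the HTML itself is then rewritten for release (by a small template step) or simply kept as two hand-maintained variants is exactly the part the off-the-shelf build tools differ on.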

    Read the article

  • Windows Upgrade vs Full Install

    - by James Atkinson
    I'm in the process of purchasing a netbook for use while traveling. The included OS is XP; however, I would like to upgrade(?) to Windows 7. My question: Does a Windows upgrade have the same physical footprint and performance as a full install? Does an upgrade leave behind unused files/resources that were originally included in XP? If so, are there ways to reduce this? I'm trying to reduce as much OS bloat as possible. Please let me know if my question is unclear. Thanks. Related to http://superuser.com/questions/60646/is-a-clean-install-really-better-than-an-upgrade; however, that question doesn't address the "leftovers" issue.

    Read the article

  • Scaling Scrum within a group of 100s of programmers

    - by blunders
    Most Scrum teams lean toward 7-15 people**, though it's not clear how to scale Scrum among 100s of people, or how the effectiveness of a given team might be compared to another team within the group; meaning that beyond just breaking the group into Scrum teams of 7-15 people, it's unclear how efforts between the teams are managed, compared, etc. Any suggestions related to either of these topics, or additional related topics that might be more important to account for when planning a large-scale Scrum grouping?

    ** In reviewing research related to the suggested size of software development teams, which appears to be the basis for the suggested Scrum team size, I found what appears to be an error in the research, which oddly appears to show that bigger teams (15+ people), not smaller teams (7 people), are better.

    UPDATE, "Re: Scrum doesn't scale": I've made a huge amount of progress researching the topic on my own, but thought I'd respond to the general belief of some that Scrum doesn't scale by citing a quote from Succeeding with Agile by Mike Cohn:

        Scrum Does Scale: You have to admire the intellectual honesty of the earliest agile authors. They were all very careful to say that agile methodologies like Scrum were for small projects. This conservatism wasn't because agile or Scrum turned out to be unsuited for large projects but because they hadn't used these processes on large projects and so were reluctant to advise their readers to do so. But, in the years since the Agile Manifesto and the books that came shortly before and after it, we have learned that the principles and practices of agile development can be scaled up and applied on large projects, albeit with a considerable amount of overhead. Fortunately, if large organizations use the techniques described regarding the role of the product owner, working with a shared product backlog, being mindful of dependencies, coordinating work among teams, and cultivating communities of practice, they can successfully scale a Scrum project.

    SOURCE: Succeeding with Agile, Mike Cohn. (I ran across the book thanks to Ladislav Mrnka's answer.)

    Read the article

  • In Varnish stats, what do "Backend conn. reuses" and "recycles" mean?

    - by electblake
    I have Varnish installed and I think it's working properly (not sure if it matters, but I am using the iptables reroute method to route ports: incoming:80 > varnish:8080 > apache:80). Anyway, in varnishstat I see a pretty high hitrate average (60-80%), which I am working on, but I am unclear about what all of the stats presented by varnishstat mean. Specifically the following Backend stats:

          380      0.00     0.26   Backend conn. success
        10122     15.00     6.85   Backend conn. reuses
          267      0.00     0.18   Backend conn. was closed
        10391     15.00     7.04   Backend conn. recycles

    I've read a blog post called "Varnishstat for dummies" which outlines a lot of the details of varnishstat (I recommend it for beginners), but it does not go over these Backend stats. Feel free to explain here or link to a resource I've missed :) thanks!
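    As a side note (not from the post): a quick way to pull only these counters out of a one-shot dump is to filter the full listing, for example:

        # print every backend-related counter once and exit
        varnishstat -1 | grep -i backend

    The exact counter names vary a little between Varnish versions, so the grep is deliberately loose.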

    Read the article

  • Connect wired-only devices to a remote wireless access point?

    - by billpg
    Hi everyone. In building A, I have a Netgear wireless access point using WPA2. Works great, no problems. In building B, I have some devices that only have wired Ethernet ports. They can't see my access point. What I need is a gizmo that connects over-the-air to my access point in building A, talks WPA2, and converts the packets to and from a wired Ethernet port.

        Netgear access point in building A
            (WPA2 WiFi)
        Wireless bridge device          <-- Looking for this.
            (Cat5 Ethernet)
        My devices in building B

    I've looked for devices on Amazon, but the descriptions are infuriatingly unclear. It says it supports WPA2, but does it support it as a client? Grrr... Any recommendations please?

    Read the article

  • New cloud development workflow using Github, Cloud9ide and CloudFoundry.

    - by weng
    So time is changing towards cloud development/computing. I'm trying to get the new "cloud" workflow based on the services I'm going to use: Github, Cloud9ide and CloudFoundry. Here is what is on my mind: Github acts as the central (main) repo, just like yesterday's local filesystem. Every service will base its work upon this main repo. Workflow:

        Github: I create a new Github repo that serves as the main repo for the project.
        Cloud9ide: I open my Github repo and write my tests and implementation (BDD/TDD). When I'm ready, I save (commit) it to the main repo on Github.
        X: A running instance of Jenkins detects that someone has committed, fetches the latest commit, builds, deploys, tests (yeti and/or selenium) and reports whether the tests passed or not. If not, I make another commit until all tests are passing.
        X: I run the CloudFoundry commands to push the main Github repo to CloudFoundry's server, and it will deploy my app automatically.

    What I'm still confused about is where this X environment will be. On a local server where I have to install Jenkins? Or could I install it on Cloud9ide (when Java is supported), or will it be on another cloud service? Also, that X environment has to be able to fetch (clone) the Github repo and run the build scripts. And since the concept of Cloud9ide is very new and there haven't been any other predecessors, I really wonder what the workflow will look like. We all know Github's workflow. We now know CloudFoundry's workflow (deploy/scale with a RESTful API/command line tool). But how Cloud9ide will operate is still somewhat unclear to me. Someone on Cloud9ide mentioned that there will be buttons like "deploy" so I can deploy with one click. But that, I guess, will depend on what services that deploy process hooks into, etc. Could someone enlighten this cloud workflow topic and fill in the gaps? Thanks.

    Read the article

  • Difference between *:80 and _default_:80 in Apache2

    - by Johannes Ernst
    I'm trying to understand the difference between the following two terms in the Apache configuration file:

        *:80
        _default_:80

    The documentation here is unclear to me, and the only mailing-list conversation that I could find does not shed any (comprehensible, to me) light on the matter either. I have a bunch of name-based virtual hosts declared like this:

        <VirtualHost *:80>
            ServerName example.com
            ...

    and I'd like to have an entry that fires when none of those match, i.e. when a request comes in without a virtual host name, or with a virtual host name that has not been declared. Should I use *:80 or _default_:80?
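    Not from the question, but a sketch of the behavior as commonly described for name-based hosting: when every site shares *:80, the first <VirtualHost *:80> block in the configuration acts as the catch-all for requests whose Host header matches no ServerName/ServerAlias, while _default_ only comes into play for address/port combinations not covered by any other vhost. The host names and paths below are illustrative.

        # hedged sketch: listed first, this block receives any unmatched request on *:80
        <VirtualHost *:80>
            ServerName catchall.example.com
            DocumentRoot /var/www/default
        </VirtualHost>

        <VirtualHost *:80>
            ServerName example.com
            DocumentRoot /var/www/example.com
        </VirtualHost>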

    Read the article

  • Diagnosing xmodmap errors

    - by intuited
    I'm getting this error when trying to use xmodmap to get rid of caps lock:

        $ xmodmap -e 'clear Lock'
        X Error of failed request:  BadValue (integer parameter out of range for operation)
          Major opcode of failed request:  118 (X_SetModifierMapping)
          Value in failed request:  0x17
          Serial number of failed request:  8
          Current serial number in output stream:  8

    I'm running Xfce on Maverick Meerkat (10.10). This problem did not occur before I added the Keyboard Layouts applet to a panel; before doing that, I was able to run my xmodmap script to swap Esc and CapsLock:

        ! Remap Caps_Lock as Escape
        remove Lock = Caps_Lock
        keysym Caps_Lock = Escape

    It may be relevant that I chose Alt-CapsLock as the keyboard switch combo in the Keyboard Layouts preferences. I've had a similar problem before, on a different machine, running Openbox. On that machine, this problem started when I upgraded to Lucid, and it has persisted in Maverick (10.10). I reported a bug in xorg. However, it remains unclear whether it's really a problem with xorg, or if I'm just doing something wrong with my configuration. Have other people experienced this problem? Can someone shed some light on what's going on here? It seems there are quite a few layers involved, and I don't understand any of them particularly well, so any information would be helpful.

    Update: I've discovered that the problem is specifically triggered by adding the Canada layout variant "Multilingual" (ca-multix). If I instead add the variant "Multilingual (first part)", the problem does not occur. I think this will probably end up being a usable workaround, but I don't yet know what the difference between these variants is. I've filed a freedesktop issue, and am commenting on a related Ubuntu issue.

    Read the article

  • What is the correct configuration for multiple apache2 vhosts and multiple php5-fpm pools?

    - by farinspace
    I have a group of sites (group A) which I would like to run using one php5-fpm pool, and a second group of sites (group B) which I would like to run using a second php5-fpm pool. I can define/create the pool in the fpm.conf file, and I have confirmed that it is running with the different user/group I've defined. However, I am unclear as to how to set up the Apache virtual host config. I've tried a few apache2 configurations, but I don't seem to be able to add the second pool. If you've done this, please help.
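    Not part of the question, but a minimal sketch of one common wiring, assuming Apache 2.4.10+ with mod_proxy_fcgi enabled and each FPM pool configured to listen on its own Unix socket (host names, paths and socket names below are illustrative):

        # group A vhosts hand PHP to pool A, group B vhosts to pool B
        <VirtualHost *:80>
            ServerName site1.group-a.example
            DocumentRoot /var/www/group-a/site1
            <FilesMatch "\.php$">
                SetHandler "proxy:unix:/var/run/php-fpm-pool-a.sock|fcgi://localhost"
            </FilesMatch>
        </VirtualHost>

        <VirtualHost *:80>
            ServerName site1.group-b.example
            DocumentRoot /var/www/group-b/site1
            <FilesMatch "\.php$">
                SetHandler "proxy:unix:/var/run/php-fpm-pool-b.sock|fcgi://localhost"
            </FilesMatch>
        </VirtualHost>

    Older setups often achieve the same split with mod_fastcgi's FastCgiExternalServer pointed at each pool's socket; either way the idea is one handler per vhost, each targeting its own pool.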

    Read the article

  • missing libjpeg.so.62 from ia32 shared library

    - by user170200
    I am trying to install a chemical/molecular biology modeling program called Molsoft ICM-Pro. Initially, after downloading the program and trying to open it, I got error messages saying I was missing shared libraries, and after talking with my network administrator he recommended I install the ia32 shared libraries using sudo apt-get install ia32-libs, which gives:

        sudo apt-get install ia32-libs
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        ia32-libs is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

    So I am assuming the libraries installed correctly, but now when I try to run the program I get this error:

        ubuntu:/home/reilly/icmd$ icm
        icm: error while loading shared libraries: libjpeg.so.62: cannot open shared object file: No such file or directory

    So my question is: where can I get the library containing libjpeg.so.62? Additionally, I was told I would need libXmu.so.6 and libtiff.so.3. Is there a shared library package that could be missing that would contain these files? I am an Ubuntu noob, so sorry if the information I provided was unclear. Any help would be immensely appreciated! BTW, I am using Ubuntu 12.04, dual-booted with Windows, on an HP Pavilion dv6.
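    Not from the original post, but a sketch of how one might track the first two libraries down on 12.04. The package names are my best guess, and if the icm binary is 32-bit the :i386 variants would be the ones needed; libtiff.so.3 is old enough that it may have to come from the vendor or from an older release.

        # find which package ships a given file (apt-file must be installed first)
        sudo apt-get install apt-file && sudo apt-file update
        apt-file search libjpeg.so.62
        # likely candidates for the first two libraries
        sudo apt-get install libjpeg62 libxmu6
        # or, if the program turns out to be a 32-bit binary on the 64-bit install:
        sudo apt-get install libjpeg62:i386 libxmu6:i386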

    Read the article

  • A website to compare software/hardware/electronics?

    - by liori
    Hello. When choosing software or hardware I often use Wikipedia's comparison pages, like the page on Comparison of project management software. However, the criteria for including items on such lists are sometimes unclear (e.g. a project has to be "notable", i.e. has to have a Wikipedia page), and the number of compared features is limited. Also, Wikipedia's software doesn't make it easy to edit such lists. I am looking for an external webapp/website that would do the task of comparing different solutions in the world of software, hardware, maybe even electronics, in this kind of social way. Is there any such service? It should allow users to add new entries, add new features to compare, and discuss items in a collaborative way. It should also be easy to browse already-entered items and filter by features. (This might be a good idea for a startup ;-))

    Read the article

  • Where is the central ZFS website now?

    - by Stefan Lasiewski
    Oracle dumped OpenSolaris in Fall 2010, and it is unclear whether Oracle will continue to publicly release updates to ZFS, except maybe after they release the next major version of Solaris. FreeBSD now has ZFS v28 available for testing. But where did v28 come from? I notice that the main ZFS website does not show version 28 as available. Has this website been abandoned? If so, where is the central website for the ZFS project, so that I can browse the repo, read the mailing lists, read the release notes, etc.? (I realize that OpenSolaris has been dumped by Oracle, and that they are limiting the ZFS releases they make available to the community.)

    Read the article

  • Port Forwarding a Specific Port (e.g. 22)

    - by Jerry Blair
    I'm still confused about establishing an SSH connection (port 22) between two computers on different internal networks. For example: I am on my computer with internal IP address IIP-1, connected to my router RT-1. There are 10 IIPs connected to RT-1. I want to establish an SSH connection to IIP-3 which is connected to router RT-2. There are 10 IIPs connected to RT-2. At any time, there can be multiple SSH connections between IIPs on RT-1 and RT-2. Since I only have port 22 available, I don't know which SSH session is talking between which IIPs. I looked at a couple of similar questions but am still unclear on the solution. Thanks much, Jerry
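    (Not part of the question, but a sketch of one common arrangement, with addresses and port numbers purely illustrative: have RT-2 forward a distinct external port to port 22 of each internal machine, then pick the target machine by the external port you connect to.)

        # forwarding rules configured on RT-2's admin interface (illustrative):
        #   external 2201 -> 192.168.2.11:22   (first machine behind RT-2)
        #   external 2203 -> 192.168.2.13:22   (IIP-3 behind RT-2)
        # from a machine behind RT-1, reach IIP-3 behind RT-2:
        ssh -p 2203 user@rt2.public.example.com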

    Read the article

  • Standards for how developers work on their own workstations

    - by Jon Hopkins
    We've just come across one of those situations which occasionally comes up when a developer goes off sick for a few days mid-project. There were a few questions about whether he'd committed the latest version of his code or whether there was something more recent on his local machine we should be looking at, and we had a delivery to a customer pending so we couldn't wait for him to return. One of the other developers logged on as him to check and found a mess of workspaces, many seemingly of the same projects, with timestamps that made it unclear which one was "current" (he was prototyping some bits on versions of the project other than his "core" one).

    Obviously this is a pain in the neck; however, the alternative (which would seem to be strict standards for how each developer works on their own machine, to ensure that any other developer can pick things up with a minimum of effort) is likely to break many developers' personal workflows and lead to inefficiency at an individual level. I'm not talking about standards for checked-in code, or even general development standards; I'm talking about how a developer works locally, a domain generally considered (in my experience) to be almost entirely under the developer's own control.

    So how do you handle situations like this? Is this one of those things that just happens and you have to deal with, the price you pay for developers being allowed to work in the way that best suits them? Or do you ask developers to adhere to standards in this area - use of specific directories, naming standards, notes on a wiki or whatever? And if so, what do your standards cover, how strict are they, how do you police them, and so on? Or is there another solution I'm missing?

    [Assume for the sake of argument that the developer cannot be contacted to talk through what he was doing here - even if he could, knowing and describing which workspace is which from memory isn't going to be simple and flawless, and sometimes people genuinely can't be contacted; I'd like a solution which covers all eventualities.]

    Read the article

  • 2D Collision masks for handling slopes

    - by JiminyCricket
    I've been looking at the example at http://create.msdn.com/en-US/education/catalog/tutorial/collision_2d_perpixel and am trying to figure out how to adjust the sprite once a collision has been detected. As David suggested at "XNA 4.0 2D sidescroller variable terrain heightmap for walking/collision", I made a few sensor points (feet, sides, bottom center, etc.) and can easily detect when these points actually collide with non-transparent portions of a second texture (a simple slope).

    I'm having trouble with the algorithm for how I would actually adjust the sprite position based on a collision. Say I detect a collision with the slope at the sprite's right foot. How can I scan the slope texture data to find the Y position to place the sprite's foot so it is no longer inside the slope? The way it is stored as a 1D array in the example is a bit confusing; should I try to store the data as a 2D array instead?

    For test purposes, I'm thinking of just using the slope texture alpha itself as a primitive and easy collision mask (no grass bits or anything besides a simple non-linear slope). Then, as in the example, I find the coordinates of any collisions between the slope texture and the sprite's sensors and mark these special sensor collisions as having occurred. Finally, in the case of moving up a slope, I would scan for the first transparent pixel above (in the texture's Ys at that X) the right foot collision point and set that as the new height of the sprite.

    I'm also a little unclear on when I should make these adjustments. Collisions are checked on every game.update(), so would I quickly change the position of the sprite before the next update is called? I also noticed several people mention that it's best to separate collision checks horizontally and vertically - why is that exactly? I'm open to any suggestions if this is an inefficient or inaccurate way of handling this. I wish MSDN had provided an example of something like this; I didn't know it would be so much more complex than NES-Mario-style pure box platforming!

    Read the article

  • How do I account for changed or forgotten tasks in an estimate?

    - by Andrew
    To handle task-level estimates and time reporting, I have been using (roughly) the technique that Steve McConnell describes in Chapter 10 of Software Estimation. Specifically, when the time comes for me to create task-level estimates (right before coding begins on a project), I determine the tasks at a fairly granular level so that, whenever possible, I have no tasks with a single-point, 50%-confidence estimate greater than four hours. That way, the task estimation process helps with constructing the software while helping me not to forget tasks during estimation. I come up with a range of hours possible for each task also, and using the statistical calculations that McConnell describes along with my historical accuracy data, I can generate estimates at other confidence levels when desired. I feel like this method has been working fairly well for me. We are required to put tasks and their estimates into TFS for tracking, so I use the estimates at the percentage of confidence I am told to use. I am unsure, however, what to do when I do forget a task, or I end up needing to do work that does not neatly fall within one of the tasks I estimated. Of course, trying to avoid this situation is best, but how do I account for forgotten/changed tasks? I want to have the best historical data I can to help me with future estimates, but right now, I basically am just calculating whether I made the 50%-confidence estimate and whether I made it inside the ranged estimate. I'll be happy to clarify what I'm asking if needed -- let me know what is unclear.

    Read the article

  • Linux software RAID 10 implementation

    - by fabrik
    Hello there! I don't want to force anybody to do this on my behalf, but trust me: I've looked at hundreds of sites and I can't find a good starting point for this. I have 4x500GB HDDs which I want to set up in RAID 10. The most promising description is here, but it's a little old and unclear to me; above all, I prefer Debian over Ubuntu (I know there are only slight differences, if any). Is it possible to build RAID 10 with Debian's installer, or do I need to build RAID 1 first in the installer and then use mdadm later? What is the best practice for building software RAID 10 under Linux (Debian)? Thanks for your time, fabrik
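    Not from the question, but for reference: recent versions of the Debian installer can create a RAID 10 array directly from the partitioner's "Configure software RAID" step, so the RAID 1 detour isn't needed there. A sketch of doing the same thing from a shell with mdadm, assuming the four disks are /dev/sdb through /dev/sde, each carrying a single partition reserved for the array (adjust device names to match your system):

        # create a 4-disk RAID 10 array, watch the initial sync, then format it
        sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 \
            /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
        cat /proc/mdstat
        sudo mkfs.ext4 /dev/md0
        # record the array so it is assembled automatically on boot
        sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf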

    Read the article

  • Creating sitemap for Googlebot - how to mark dynamic content / dynamic subpages?

    - by ojek
    I have a website that is an Internet forum. This forum has many categories, and a single category page contains a lot of subpages with listed threads. This Internet forum is brand new; about a week ago I filled it with a few hundred thousand threads. I then looked at my Google Webmaster Tools page to see any changes in indexing, but the index went up from 300 to about 1200, so that means it did not index my added threads (although it added something). The following is what my sitemap.xml contains, which I uploaded to their website. Of course there is a lot more code; this is just a snippet for a single category. In my real sitemap file I have all the categories listed as below:

        <url>
          <loc>http://mysite.com/Forums/Physics</loc>
          <changefreq>hourly</changefreq>
        </url>

    Now, I would expect Googlebot to go into mysite.com/Forums/Physics, crawl through all the subpages with thread links, and then crawl inside each thread and index its content. How can I achieve this? Also, if this is unclear, I will add a real link to my website.
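    A side note, not from the question: a sitemap only tells Google about the URLs it explicitly lists; it is not a request to crawl everything beneath them. The usual approach for a forum is therefore to list every thread URL (or at least every paginated listing page), splitting across multiple sitemap files with a sitemap index once past the 50,000-URL-per-file limit. The URLs below are illustrative:

        <url>
          <loc>http://mysite.com/Forums/Physics?page=2</loc>
          <changefreq>hourly</changefreq>
        </url>
        <url>
          <loc>http://mysite.com/Forums/Physics/thread-12345</loc>
          <changefreq>weekly</changefreq>
        </url>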

    Read the article

  • Documenting mathematical logic in code

    - by Kiril Raychev
    Sometimes, although not often, I have to include math logic in my code. The concepts used are mostly very simple, but the resulting code is not - a lot of variables with unclear purpose, and some operations with not-so-obvious intent. I don't mean that the code is unreadable or unmaintainable, just that it's waaaay harder to understand than the actual math problem. I try to comment the parts which are hardest to understand, but there is the same problem as in just coding them - text does not have the expressive power of math.

    I am looking for a more efficient and easy-to-understand way of explaining the logic behind some of the complex code, preferably in the code itself. I have considered TeX - writing the documentation and generating it separately from the code. But then I'd have to learn TeX, and the documentation would not be in the code itself. Another thing I thought of is taking a picture of the mathematical notation, equations and diagrams written on paper/whiteboard, and including it in the javadoc. Is there a simpler and clearer way?

    P.S. Giving descriptive names (timeOfFirstEvent instead of t1) to the variables actually makes the code more verbose and even harder to read.

    Read the article

  • Tracking feature requests for small-scale components

    - by DXM
    I'm curious how other development teams (especially those that work in moderate to large development groups) track "future" features/wishlist functionality for internally developed frameworks or components. I know the standard advice is that a development team should find one good tool for tracking bugs/features and use that for everything, and I agree with that if the future requests are for the product itself. In my company we have an engineering department, which is broken up into multiple groups, and within each there can be one to several agile teams. The bug tracking product we use has been "a leader since 1997" (its UI/usability seems to also be evaluated against that year even today), but my agile team, or even my group, doesn't really control what is being used by the whole department.

    What we are looking to track is not necessarily product features but expansion/nice-to-have functionality for internal components that go into our product. To name a few examples:

        - a framework/utility library on top of CppUnit which our developers share
        - a low-level IPC communications framework
        - a common development SDK that I and several other team leads started, to help share some common code/tools at the department-wide level (this SDK is released as an internal "product" to each of the groups)

    Is the standard practice to use the one bug tracking tool? Or would it make more sense to set up something more localized specifically for our needs and maintain it ourselves? It's also unclear how management will feel if developers start performing "IT" roles of maintaining software and servers. At the same time, right now we use Excel files, an internal wiki and MS OneNote for this kind of stuff, and that just doesn't feel right.

    (I'm afraid to ask for actual software recommendations, since that might make this question more localized or something. Also, developers need this way more than management, so it would be nice to find something either free or no more than the cost of a happy hour.)

    Read the article

  • Cannot install Ubuntu on an Acer Aspire One 756

    - by Byron807
    I have used Ubuntu before, in virtual machines, but today I decided to make the leap and I bought a netbook to install Ubuntu as a "real" OS alongside Windows. The netbook I bought is an Acer Aspire One 756, with a 64-bit Intel processor, 4GB RAM, and Windows 8 as the default OS. I have now encountered several obstacles that actually prevent me from installing Ubuntu 12.10. Here are all the things I have tried so far:

        - Used a live CD, in combination with a USB DVD drive. (I should point out that the Aspire One does not have an optical drive.) The computer does not boot into Ubuntu; the drive keeps spinning, but nothing happens, even though I changed the boot order in the BIOS.
        - Used a USB drive created via the tool available on pendrivelinux.com. Again, I've made changes to the BIOS to make sure the computer tries to boot from USB before using the built-in HDD. The results vary in this case: sometimes the computer keeps rebooting like crazy until I remove the USB drive, at which point the computer boots into Windows 8, as expected. If I use a different USB drive, I get an error message that says that the USB drive has been blocked due to "the current security policy".
        - Tried to install Ubuntu via Wubi. The program appears to install something, but at some point during the installation process I get an unspecified error message and nothing else happens.

    I am not sure if these are known issues; in any case, searching the forum has not yielded any results, so I thought I should simply describe my problem here in the hope that this question has not been answered before. I would greatly appreciate any help with this annoying problem. Of course, if anything is unclear, do not hesitate to ask for further details.

    Read the article

  • Update Google Sitemap for Mobile

    - by dimo414
    I have a series of utilities to generate Google sitemaps for my whole site. These files are massive and slow to build. We want to start telling Google that these pages are mobile-crawlable too, by adding them to mobile sitemaps, but the documentation is unclear about whether I need to specify physically different files for my mobile URLs than for my normal ones. If this is my current sitemap:

        <?xml version="1.0" encoding="UTF-8" ?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
          <url>
            <loc>http://mobile.example.com/article100.html</loc>
          </url>
        </urlset>

    can I simply change it to:

        <?xml version="1.0" encoding="UTF-8" ?>
        <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
                xmlns:mobile="http://www.google.com/schemas/sitemap-mobile/1.0">
          <url>
            <loc>http://mobile.example.com/article100.html</loc>
            <mobile:mobile/>
          </url>
        </urlset>

    Or do I need to create new files with the additional markup, alongside my existing files?

    Read the article
