Search Results

Search found 19305 results on 773 pages for 'above the gods'.

Page 122 of 773

  • Cannot install Ubuntu on Asus UX51VZ

    - by Andrey Frunt
    I removed Win8, changed the partition table from GPT to MBR, and installed Win7. The problem is that I don't see GRUB after installing Ubuntu, and only Win7 boots. Any ideas? I tried to reinstall everything listed above about 5 times, but with no success :( I tried both 13.04 and 13.10. UPD: I also tried Boot-Repair and it installed GRUB 2 for me, but unfortunately only with options to boot Ubuntu, so I repaired the boot record using Windows tools to be able to boot into Windows again. Anyway, I still cannot configure the dual boot. Please help :)
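
    A common recovery approach (a minimal sketch; device names such as /dev/sda and /dev/sda5 are assumptions and must be checked with sudo fdisk -l, and it assumes /boot is not on a separate partition) is to reinstall GRUB to the disk's MBR from the Ubuntu live USB and then regenerate the menu so os-prober picks up Windows 7:

        sudo mount /dev/sda5 /mnt              # the Ubuntu root partition (assumed)
        sudo mount --bind /dev /mnt/dev
        sudo mount --bind /proc /mnt/proc
        sudo mount --bind /sys /mnt/sys
        sudo chroot /mnt grub-install /dev/sda # install GRUB to the MBR, not to a partition
        sudo chroot /mnt update-grub           # runs os-prober and should add a Windows 7 entry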

    Read the article

  • Designing binary operations (AND, OR, NOT) in graph DBs like Neo4j

    - by Nicholas
    I'm trying to create a recipe website using a graph database, specifically Neo4j via spring-data-neo4j, to see what can be done in graph databases. My model so far is:

        (Chef)-[HAS_INGREDIENT]->(Ingredient)
        (Chef)-[HAS_VALUE]->(Value)
        (Ingredient)-[HAS_INGREDIENT_VALUE]->(Value)
        (Recipe)-[REQUIRES_INGREDIENT]->(Ingredient)
        (Recipe)-[REQUIRES_VALUE]->(Value)

    I have this set up so I can do things like have the "chef" enter ingredients they have on hand and suggest recipes, as well as suggest recipes that are close matches but missing one ingredient. Some recipes can get complex, utilizing AND, OR, and NOT type logic, something like (Milk AND (Butter OR spread OR (vegetable oil OR olive oil))), and I'm wondering if it would be sane to model this in a graph using a tree-type representation. An example of what I was thinking is to create three node types of AND, OR, and NOT and have each of them connect to the node values underneath. How else might this be represented in a graph database, or is my example above a decent representation?
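
    One possible shape for that tree (just a sketch, using the same arrow notation as above; the node labels AndGroup/OrGroup and the relationship names REQUIRES and HAS_OPERAND are made up for illustration) is to model each operator as its own node and point the recipe at the root of the expression:

        (Recipe)-[REQUIRES]->(AndGroup)
        (AndGroup)-[HAS_OPERAND]->(Ingredient {name: "Milk"})
        (AndGroup)-[HAS_OPERAND]->(OrGroup)
        (OrGroup)-[HAS_OPERAND]->(Ingredient {name: "Butter"})
        (OrGroup)-[HAS_OPERAND]->(Ingredient {name: "spread"})
        (OrGroup)-[HAS_OPERAND]->(Ingredient {name: "vegetable oil"})
        (OrGroup)-[HAS_OPERAND]->(Ingredient {name: "olive oil"})

    Matching then becomes a recursive walk: an AND node is satisfied when all of its operands are satisfied, an OR node when at least one is, and a NOT node when its single operand is not.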

    Read the article

  • Remote Control Holder Mod Stores Tablet Close At Hand

    - by Jason Fitzpatrick
    If you spend most of your iPad time lounging on your couch or in bed, this simple IKEA hack will keep your favorite tablet stowed right at your fingertips. IKEA's inexpensive remote control holder, the $4.99 Flort, is easy to hack from a remote holster into a tablet holder. You simply flip it around, sew up the edge of the back flap, and holster your tablet in it - your tablet fits all the way inside; in the above image the iPad is tucked in semi-precariously to demonstrate it sliding inside. Hit up the link below for step-by-step pictures. Smartest Way to Store Your iPad for $4.99 [IKEAHackers]

    Read the article

  • 301 url rewrite loop

    - by anyvendetta
    I need to do a 301 rewrite to force all URLs to become lowercase. In .htaccess I put (with RewriteMap lc int:tolower defined in httpd.conf):

        RewriteCond %{REQUEST_URI} [A-Z]
        RewriteRule . ${lc:%{REQUEST_URI}} [R=301,L]

    Everything works just fine except for URLs with subcategories, which in this case look like /category-1256-Product-page-example.html (the number 1256 refers to a "subcategory"). When I try to access /category-1256-Product-page-example.html I get a redirect loop error. I think other redirect rules are causing the loop, but I don't know how to fix it, because it is only these URL rewrite rules that don't work with the lowercase rewrite above:

        RewriteRule ^main-site-url/category-([0-9]*)-([-_a-zA-Z0-9]*)\.html$ /subcategories.php?idcategory_main=1&idcategory=$1&category=$2 [L]
        RewriteRule ^main-site-url/([0-9]*)-([-_a-zA-Z0-9]*)-([0-9]*)\.html$ /file.php?idcategory_main=1&idsubcategory=$1&product=$2&idproduct=$3 [L]
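
    One way to stop the loop (a sketch, assuming all of these rules live in the same per-directory .htaccess context): let the lowercase redirect fire only on the original client request, not on URLs that the internal rules above have already rewritten, by guarding it with the REDIRECT_STATUS environment variable that Apache sets after an internal rewrite:

        RewriteCond %{ENV:REDIRECT_STATUS} ^$
        RewriteCond %{REQUEST_URI} [A-Z]
        RewriteRule . ${lc:%{REQUEST_URI}} [R=301,L]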

    Read the article

  • Help needed in customizing Ubuntu 13 distribution

    - by Bilal Wajid
    I have been using Ubuntu for over 3 years. I want to build a custom Ubuntu distribution, and I am trying to do this with Ubuntu 13.04. I have tried the following:

        Remastersys - failed.
        Relinux - failed.
        Ubuntu Customization Kit - failed.
        LiveCDCustomizationFromScratch (help.ubuntu.com) - failed.

    Kindly guide me on how to do this, given the failed attempts above; a manual explaining how to do it would be very helpful. Thanks and Best Wishes, B. Wajid
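
    For reference, the manual approach behind the LiveCDCustomizationFromScratch guide boils down to unpacking the live filesystem, customizing it in a chroot, and repacking it. A rough sketch of the main steps (the ISO name and paths are assumptions, and the bind mounts plus manifest/md5sum housekeeping the wiki guide covers are omitted):

        mkdir iso
        sudo mount -o loop ubuntu-13.04-desktop-amd64.iso iso
        rsync -a iso/ extract/                       # working copy of the ISO contents
        sudo unsquashfs -d edit extract/casper/filesystem.squashfs
        sudo chroot edit /bin/bash                   # add/remove packages here, then exit
        sudo mksquashfs edit extract/casper/filesystem.squashfs -noappend
        sudo mkisofs -r -J -l -V "Custom Ubuntu" \
            -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot \
            -boot-load-size 4 -boot-info-table -o custom.iso extract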

    Read the article

  • Download the Mountain Views from Romania Theme for Windows 7 and 8

    - by Asian Angel
    Are you ready to add some serene and beautiful mountain scenery to your desktop? Then you will definitely want to grab a copy of the Mountain Views from Romania Theme for Windows 7 and 8. The theme comes with five wonderful images from photographer Mihai Despan to add a peaceful mood to your favorite computer. Special Note: The photos in the theme do not contain the black strip shown in the image above; that was added during the image editing process for our post. Uncovering Artists Through Windows Themes – Mihai Despan [7 Tutorials]

    Read the article

  • What package do I need to build a Qt 5 & CMake application?

    - by Kevin Reid
    I'm trying to build sdrangelove, which wants Qt 5 and uses CMake for its build system, on Ubuntu 13.10. What package do I need to install to give it the file it's asking for here? There are a lot of *qt5* packages, and I've tried installing the promising looking ones to no effect. All the discussions I've found either have things working fine or are talking about writing CMake build rules rather than executing them. I don't have a lot of experience with the organization of Debian/Ubuntu packaging.

        CMake Error at CMakeLists.txt:14 (find_package):
          By not providing "FindQt5Core.cmake" in CMAKE_MODULE_PATH this project has
          asked CMake to find a package configuration file provided by "Qt5Core", but
          CMake did not find one.

          Could not find a package configuration file provided by "Qt5Core"
          (requested version 5.0) with any of the following names:

            Qt5CoreConfig.cmake
            qt5core-config.cmake

          Add the installation prefix of "Qt5Core" to CMAKE_PREFIX_PATH or set
          "Qt5Core_DIR" to a directory containing one of the above files. If "Qt5Core"
          provides a separate development package or SDK, be sure it has been
          installed.
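
    On Ubuntu the CMake config files for Qt5Core ship with the Qt 5 development packages, so installing qtbase5-dev is usually enough (a suggestion, not a verified answer for sdrangelove specifically; the CMAKE_PREFIX_PATH value below is an assumption for a 64-bit 13.10 system and is only needed if CMake still cannot find Qt 5):

        sudo apt-get install qtbase5-dev
        cmake -DCMAKE_PREFIX_PATH=/usr/lib/x86_64-linux-gnu/cmake ..   # fallback only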

    Read the article

  • Dummy Guide to NetBeans Android Development

    - by Geertjan
    Start by setting up the Android SDK (fantastic Ubuntu instructions here), then install NBAndroid. Now you can create a new Android project: Having set up the Android SDK, you're able to select your Android platform in the IDE: The project structure created by the above templates is nice and easy to understand: Build the project and you have your APK file and everything else generated in the Files window: Nice features are included, such as code completion in Android XML files: Several other features are included, as described here, such as "Export Signed Android Package", as well as deployment to the Android emulator. Now that I have everything set up (took literally about 10 minutes from start to finish), I'm going to be experimenting a bit with Android development via NetBeans IDE.

    Read the article

  • IL and case-sensitivity

    - by Ali .NET
    Quoted from A Brief Introduction To IL code, CLR, CTS, CLS and JIT In .NET: "CLS stands for Common Language Specification. It is a subset of CTS. CLS is a set of rules or guidelines which, if followed, ensures that code written in one .NET language can be used by another .NET language. For example, one rule is that we cannot have member functions with the same name differing only in case, i.e. we should not have add() and Add(). This may work in C# because it is case-sensitive, but if we try to use that C# code from VB.NET it is not possible, because VB.NET is not case-sensitive." Based on the above text I want to confirm two points:

        1. Is the case-sensitivity rule a condition for member functions only, and not for member properties?
        2. Is it true that C# wouldn't be interoperable with VB.NET if it didn't take care of case sensitivity?
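
    A small illustration of the rule (a sketch added here, not from the original post): with assembly-level CLS compliance enabled, the C# compiler warns when public members differ only by case, because a case-insensitive consumer such as VB.NET could not tell them apart.

        using System;

        [assembly: CLSCompliant(true)]

        public class Calculator
        {
            // the compiler reports these as not CLS-compliant (warning CS3005),
            // since they differ only in case
            public int add(int a, int b) { return a + b; }
            public int Add(int a, int b) { return a + b; }
        }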

    Read the article

  • Stop bots from crawling old links with extensions

    - by Jared
    I've recently switched to MVC3, which uses extension-less URLs, but Google and Bing have a wealth of links that they are crawling which no longer exist. So I'm trying to find out if there is a way to format robots.txt (or use some other method) to tell Google/Bing that any link that ends in an extension isn't a valid link... Is this possible? On pages that I'm concerned about a user having saved as a favorite, I'm displaying a 404 page that lists the links to use once they are redirected to the new page (I decided not to just redirect them, as I don't want to maintain these forever). For Google/Bing's sake I do have the canonical tag in the header.

        User-agent: *
        Allow: /
        Disallow: /*.*

    EDIT: I just added the 3rd line (above) and it APPEARS to do what I'm wanting: allow a path, but disallow a file. Can anyone confirm this?
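
    A narrower variant worth considering (a sketch, not a confirmation): both Googlebot and Bingbot support * and the $ end-of-URL anchor in Disallow patterns, so specific old extensions can be blocked without also matching dots that appear elsewhere in a path; the extensions shown are just examples:

        User-agent: *
        Disallow: /*.aspx$
        Disallow: /*.html$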

    Read the article

  • Are libc versions tied to kernel versions?

    - by mathematician1975
    After reading the answers to my previous question I have come to the conclusion that an answer to the following question is what I was actually looking for: does a particular version of the kernel require a particular version of libc to run properly? Basically my problem stems from building an application on my 12.04 Ubuntu and trying to run it on 8.04. I have since learned from this and other Stack Exchange forums that it is the backward compatibility of libc that causes these problems. Therefore what I am, perhaps naively, trying to do is build the same version of libc that exists on my target and then link against that on my host when I build the application. Then, in an ideal world, when I copy the application to the target, having been linked against the "correct" libc, it should work (in my head at least). I have been totally unable to find a way to install an older libc on my system, and wondered if each version is tightly bound to a kernel version, hence the above question.
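
    One way to see what the real constraint is (a diagnostic sketch; ./myapp is a placeholder for the binary) is to list the glibc symbol versions the binary was linked against - it is these, far more than the kernel version, that decide whether it will load on the 8.04 target:

        objdump -T ./myapp | grep -o 'GLIBC_[0-9.]*' | sort -Vu
        # any version newer than what /lib/libc.so.6 on the 8.04 target provides
        # will make the dynamic linker refuse to start the program there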

    Read the article

  • Unity3d: Box collider attached to animated FBX models through scripts at run-time have wrong dimension

    - by Heisenbug
    I have several scripts attached to static and non-static models in my scene. All models are instantiated at run-time (and must be instantiated at run-time, because I'm procedurally building the scene). I'd like to add a BoxCollider or SphereCollider to my FBX models at runtime. With non-animated models it works: simply requiring the BoxCollider component from the script attached to my GameObject creates a BoxCollider of the right dimensions. Something like:

        [RequireComponent(typeof(BoxCollider))]
        public class AScript : MonoBehaviour
        {
        }

    If I do the same thing with animated models, the BoxCollider is created with the wrong dimensions. For example, if I attach the script above to the penelopeFBX model from the standard assets, the BoxCollider is created smaller than the mesh itself. How can I solve this?
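
    One workaround to try (a sketch, not a confirmed fix; it assumes the model has a SkinnedMeshRenderer in its children and roughly unit scale): size the collider at runtime from the renderer's world-space bounds instead of relying on the automatically created one.

        using UnityEngine;

        [RequireComponent(typeof(BoxCollider))]
        public class FitColliderToSkinnedMesh : MonoBehaviour
        {
            void Start()
            {
                var smr = GetComponentInChildren<SkinnedMeshRenderer>();
                if (smr == null) return;

                var box = GetComponent<BoxCollider>();
                // smr.bounds is an axis-aligned box in world space; map its centre
                // into local space and copy its size (assumes unit scale)
                box.center = transform.InverseTransformPoint(smr.bounds.center);
                box.size = smr.bounds.size;
            }
        }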

    Read the article

  • How to find which w3wp.exe to attach to when debugging your SharePoint 2010 project

    - by ybbest
    When debugging a SharePoint 2010 project you need to attach to a w3wp.exe process; however, there are often quite a few of them and it is very hard to figure out which one to attach to. Today I will show you how to find the right process using a tool called Process Explorer.

        1. Download Process Explorer and run it.
        2. Find the w3wp.exe processes under wininit.exe, right-click the column headers and click Select Columns.
        3. Include Command Line under Process Image.
        4. Now you can see your IIS site name next to w3wp.exe; in my case I'd like to attach to "SharePoint - BenDev80". You can see the PID of the process is 2920.
        5. From the above you know the process ID you'd like to attach to is 2920, so you can go ahead and attach to that process from Visual Studio.
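
    An alternative that skips the extra tool (a sketch; IIS 7 and later only, run from an elevated command prompt): the built-in appcmd utility lists every worker process together with its application pool name and PID, so the right w3wp.exe can be read straight off the console.

        %windir%\system32\inetsrv\appcmd list wp
        REM sample output: WP "2920" (applicationPool:SharePoint - BenDev80)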

    Read the article

  • Informed TDD – Kata "To Roman Numerals"

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/28/informed-tdd-ndash-kata-ldquoto-roman-numeralsrdquo.aspx

    In a comment on my article on what I call Informed TDD (ITDD), reader gustav asked how this approach would apply to the kata "To Roman Numerals", and whether ITDD wasn't a violation of TDD's principle of leaving out "advanced topics like mocks". I'd like to respond to his questions with this article. There's more to say than fits into a comment.

    Mocks and TDD

    I don't see how TDD avoids or opposes mocks. TDD and mocks are orthogonal. TDD is about process, mocks are about structure and costs. Maybe by moving forward in tiny red+green+refactor steps less need arises for mocks. But then… if the functionality you need to implement requires "expensive" resource access, you can't avoid using mocks, because you don't want to constantly run all your tests against the real resource. True, in ITDD mocks seem to be in almost inflationary use. That's not what you usually see in TDD demonstrations. However, there's a reason for that, as I tried to explain. I don't use mocks as proxies for "expensive" resources. Rather they are stand-ins for functionality not yet implemented. They allow me to get a test green on a high level of abstraction. That way I can move forward in a top-down fashion. But if you think of mocks as "advanced", or if you don't want to use a tool like JustMock, then you don't need to use mocks. You just need to stand the sight of red tests for a little longer ;-) Let me show you what I mean by that by doing a kata.

    ITDD for "To Roman Numerals"

    gustav asked for the kata "To Roman Numerals". I won't explain the requirements again; you can find descriptions and TDD demonstrations all over the internet, like this one from Corey Haines. Now here is how I would do this kata differently.

    1. Analyse

    A demonstration of TDD should never skip the analysis phase. It should be made explicit. The requirements should be formalized and acceptance test cases should be compiled. "Formalization" in this case to me means describing the API of the required functionality. "[D]esign a program to work with Roman numerals", as written in this "requirements document", is not enough to start software development. Coding should only begin if the interface between the "system under development" and its context is clear. If this interface is not readily recognizable from the requirements, it has to be developed first. Exploration of interface alternatives might be in order. It might be necessary to show several interface mock-ups to the customer - even if that's your fellow developer. Designing the interface is a task of its own. It should not be mixed with implementing the required functionality behind the interface. Unfortunately, though, this happens quite often in TDD demonstrations. TDD is used to explore the API and implement it at the same time. To me that's a violation of the Single Responsibility Principle (SRP), which should hold not only for software functional units but also for tasks or activities.

    In the case of this kata the API fortunately is obvious. Just one function is needed: string ToRoman(int arabic). And it lives in a class ArabicRomanConversions. Now what about acceptance test cases? There are hardly any stated in the kata descriptions. Roman numerals are explained, but there are no specific test cases from the point of view of a customer. So I just "invent" some acceptance test cases by picking Roman numerals from a Wikipedia article. They are supposed to be just "typical examples" without special meaning. Given the acceptance test cases, I then try to develop an understanding of the problem domain. I'll spare you that. The domain is trivial and is explained in almost all kata descriptions. How Roman numerals are built is not difficult to understand. What's more difficult, though, might be to find an efficient solution to convert into them automatically.

    2. Solve

    The usual TDD demonstration skips a solution-finding phase. Like the interface exploration, it's mixed in with the implementation. But I don't think this is how it should be done. I even think this is not how it really works for the people demonstrating TDD. They're simplifying their true software development process because they want to show a streamlined TDD process. I doubt this is helping anybody. Before you code, you'd better have a plan for what to code. This does not mean you have to do "Big Design Up-Front". It just means: have a clear picture of the logical solution in your head before you start to build a physical solution (code). Evidently such a solution can only be as good as your understanding of the problem. If that's limited, your solution will be limited, too. Fortunately, in the case of this kata your understanding does not need to be limited. Thus the logical solution does not need to be limited or preliminary or tentative. That does not mean you need to know every line of code in advance. It just means you know the rough structure of your implementation beforehand, because it should mirror the process described by the logical or conceptual solution.

    Here's my solution approach: The arabic "encoding" of numbers represents them as an ordered set of powers of 10. Each digit is a factor to multiply a power of ten with. The "encoding" 123 is the short form for a set like this: {1*10^2, 2*10^1, 3*10^0}. And the number is the sum of the set members. The roman "encoding" is different. There is no base (like 10 for arabic numbers); there are just digits of different value, and they have to be written in descending order. The "encoding" XVI is short for [10, 5, 1]. And the number is still the sum of the members of this list. The roman "encoding" thus is simpler than the arabic. Each "digit" can be taken at face value. No multiplication with a base is required. But what about IV, which looks like a contradiction to the above rule? It is not - if you accept roman "digits" not to be limited to single characters only. Usually I, V, X, L, C, D, M are viewed as "digits", and IV, IX etc. are viewed as nuisances preventing a simple solution. All looks different, though, once IV, IX etc. are taken as "digits". Then MCMLIV is just a sum: M+CM+L+IV, which is 1000+900+50+4. Whereas before it would have been understood as M-C+M+L-I+V - which is more difficult, because here some "digits" get subtracted. Here's the list of roman "digits" with their values:

        {1, I}, {4, IV}, {5, V}, {9, IX}, {10, X}, {40, XL}, {50, L}, {90, XC}, {100, C}, {400, CD}, {500, D}, {900, CM}, {1000, M}

    Since I take IV, IX etc. as "digits", translating an arabic number becomes trivial. I just need to find the values of the roman "digits" making up the number; e.g. 1954 is made up of 1000, 900, 50, and 4. I call those "digits" factors. If I move from the highest factor (M=1000) to the lowest (I=1), then translation is a two-phase process:

        Find all the factors
        Translate the factors found
        Compile the roman representation

    Translation is just a look-up. Finding, though, needs some calculation:

        Find the highest remaining factor fitting in the value
        Remember and subtract it from the value
        Repeat with the remaining value and remaining factors

    Please note: This is just an algorithm. It's not code, even though it might be close. Being so close to code in my solution approach is due to the triviality of the problem. In more realistic examples the conceptual solution would be on a higher level of abstraction. With this solution in hand I finally can do what TDD advocates: find and prioritize test cases. As I can see from the small process description above, there are two aspects to test:

        Test the translation
        Test the compilation
        Test finding the factors

    Testing the translation primarily means checking if the map of factors and digits is comprehensive. That's simple, even though it might be tedious. Testing the compilation is trivial. Testing factor finding, though, is a tad more complicated. I can think of several steps: First check if an arabic number equal to a factor is processed correctly (e.g. 1000=M). Then check if an arabic number consisting of two consecutive factors (e.g. 1900=[M,CM]) is processed correctly. Then check if a number consisting of the same factor twice is processed correctly (e.g. 2000=[M,M]). Finally check if an arabic number consisting of non-consecutive factors (e.g. 1400=[M,CD]) is processed correctly. I feel I can start an implementation now. If something becomes more complicated than expected, I can slow down and repeat this process.

    3. Implement

    First I write a test for the acceptance test cases. It's red because there's no implementation, not even of the API. That's in conformance with "TDD lore", I'd say. Next I implement the API. The acceptance test now is formally correct, but still red of course. This will not change even now that I zoom in, because my goal is not to satisfy these tests as quickly as possible, but to implement my solution in a stepwise manner. That I do by "faking" it: I just "assume" three functions to represent the transformation process of my solution. My hypothesis is that those three functions in conjunction produce correct results on the API level. I just have to implement them correctly. That's what I'm trying now - one by one. I start with a simple "detail function": Translate(). And I start with all the test cases in the obvious equivalence partition. As you can see, I dare to test a private method. Yes, that's a white-box test. But as you'll see, it won't make my tests brittle. It serves a purpose right here and now: it lets me focus on getting one aspect of my solution right. Here's the implementation to satisfy the test: it's as simple as possible. Right how TDD wants me to do it: KISS.

    Now for the second equivalence partition: translating multiple factors. (It's a pattern: if you need to do something repeatedly, separate the tests for doing it once and doing it multiple times.) In this partition I just need a single test case, I guess. Stepping up from a single translation to multiple translations is no rocket science. Usually I would have implemented the final code right away. Splitting it in two steps is just for "educational purposes" here. How small your implementation steps are is a matter of your programming competency. Some "see" the final code right away before their mental eye - others need to work their way towards it. Having two tests I find more important.

    Now for the next low-hanging fruit: compilation. It's even simpler than translation. A single test is enough, I guess. And normally I would not even have bothered to write that one, because the implementation is so simple. I don't need to test .NET framework functionality. But again: if it serves the educational purpose…

    Finally the most complicated part of the solution: finding the factors. There are several equivalence partitions. But still I decide to write just a single test, since the structure of the test data is the same for all partitions. Again, I'm faking the implementation first. I focus on just the first test case. No looping yet. Faking lets me stay on a high level of abstraction. I can write down the implementation of the solution without bothering myself with details of how to actually accomplish the feat. That's left for a drill-down with a test of the fake function. There are two main equivalence partitions, I guess: either the first factor is appropriate, or some later one is. The implementation seems easy. Both test cases are green. (Of course this only works on the premise that there's always a matching factor. Which is the case, since the smallest factor is 1.) And the first of the equivalence partitions on the higher level also is satisfied. Great, I can move on.

    Now for more than a single factor: Interestingly not just one test becomes green now, but all of them. Great! You might say, then I must not have done the simplest thing possible. And I would reply: I don't care. I did the most obvious thing. But I also find this loop very simple. Even simpler than a recursion, of which I had thought briefly during the problem-solving phase. And by the way: the acceptance tests also went green. Mission accomplished - at least functionality-wise.

    Now I have to tidy things up a bit. TDD calls for refactoring. Not much refactoring is needed, because I wrote the code in a top-down fashion. I faked it until I made it. I endured red tests on higher levels while lower levels weren't perfected yet. But this way I saved myself from refactoring tediousness. At the end, though, some refactoring is required. But maybe in a different way than you would expect. That's why I'd rather call it "cleanup". First I remove duplication. There are two places where factors are defined: in Translate() and in Find_factors(). So I factor the map out into a class constant. Which leads to a small conversion in Find_factors(). And now for the big cleanup: I remove all tests of private methods. They are scaffolding tests to me. They only have temporary value. They are brittle. Only acceptance tests need to remain. However, I carry over the single-"digit" tests from Translate() to the acceptance test. I find them valuable to keep, since the other acceptance tests only exercise a subset of all roman "digits". This then is my final test class, and this is the final production code. Test coverage as reported by NCrunch is 100%.

    Reflexion

    Is this the smallest possible code base for this kata? Surely not. You'll find more concise solutions on the internet. But LOC are of relatively little concern - as long as I can understand the code quickly. So-called "elegant" code, however, often is not easy to understand. The same goes for KISS code - especially if left unrefactored, as is often the case. That's why I progressed from requirements to final code the way I did. I first understood and solved the problem on a conceptual level. Then I implemented it top-down according to my design. I also could have implemented it bottom-up, since I knew some bottom of the solution - the leaves of the functional decomposition tree. Where things became fuzzy, because the design did not cover any more details, as with Find_factors(), I repeated the process in the small, so to speak: fake some top level and endure red high-level tests while first solving a simpler problem.

    Using scaffolding tests (to be thrown away at the end) brought two advantages:

        Encapsulation of the implementation details was not compromised. Naturally private methods could stay private. I did not need to make them internal or public just to be able to test them.
        I was able to write focused tests for small aspects of the solution. No need to test everything through the solution root, the API.

    The bottom line thus for me is: Informed TDD produces cleaner code in a systematic way. It conforms to core principles of programming: the Single Responsibility Principle and/or Separation of Concerns. Distinct roles in development - being a researcher, being an engineer, being a craftsman - are represented as different phases. First find out what there is. Then devise a solution. Then code the solution, manifest the solution in code. Writing tests first is a good practice. But it should not be taken dogmatically. And above all it should not be overloaded with purposes. And finally: moving from top to bottom through a design produces refactored code right away. Clean code thus is almost inevitable - and not left to a refactoring step at the end, which is often skipped for various reasons.

    PS: Yes, I have done this kata several times. But that has only an impact on the time needed for phases 1 and 2. I won't skip them because of that. And there are no shortcuts during implementation because of that.
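
    Since the post's code appears here only as screenshots, the following is a compact C# sketch of the solution structure described above (my reconstruction for illustration, not the author's exact code): a factor table, factor finding, translation, and compilation behind the string ToRoman(int arabic) API.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class ArabicRomanConversions
        {
            // roman "digits" (factors) in descending order, with IV, IX, ... included
            private static readonly (int Value, string Digit)[] Factors =
            {
                (1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
                (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
                (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")
            };

            public string ToRoman(int arabic)
            {
                var factors = Find_factors(arabic);
                var digits = Translate(factors);
                return Compile(digits);
            }

            // find the highest remaining factor fitting in the value, subtract, repeat
            private static IEnumerable<int> Find_factors(int value)
            {
                foreach (var f in Factors)
                {
                    while (value >= f.Value)
                    {
                        yield return f.Value;
                        value -= f.Value;
                    }
                }
            }

            // translation is just a look-up
            private static IEnumerable<string> Translate(IEnumerable<int> factors) =>
                factors.Select(v => Factors.First(f => f.Value == v).Digit);

            private static string Compile(IEnumerable<string> digits) =>
                string.Concat(digits);
        }

    For example, new ArabicRomanConversions().ToRoman(1954) finds the factors 1000, 900, 50 and 4 and compiles them to "MCMLIV".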

    Read the article

  • How do I resolve "No JSON object could be decoded" on mythbuntu live CD?

    - by Neil
    I have been running a MythTV frontend on my laptop for some time against a MythTV backend installed in Linux Mint 12 on another computer, and everything works fine. Now, I'm trying out the Mythbuntu Live CD (12.04.1 32-bit) on the laptop, to turn it into a dedicated front end. It's connecting to the network just fine, and I can see my server. When I click on the frontend icon on the desktop, it asks me for the security code, which I've verified against mythtv-setup on the server. However, when I test that code, it shows the error message "No JSON object could be decoded". I've looked in the control center to see if there's something else I should be setting up. The message above implies to me that it can't find the server, but I can find no place in the control center to tell it where to find my myth backend, which I find a little odd. Does the live CD not work against a backend server on another machine?

    Read the article

  • Dofollow or Nofollow trackbacks?

    - by Ernesto Marrero
    Dofollow trackbacks: yes or no? Authoritative sources, please. Thinking about SEO, is it better for trackbacks to be dofollow or nofollow? When I write a post on my blog, a trackback to my site is automatically sent from http://bitacoras.com. That link is dofollow. Is it advisable to give relevance to this link by also making it dofollow? For example: a domain's homepage has PageRank 3, while the internal page of that site that I send trackbacks to has PageRank 0. Is it worthwhile to send dofollow links to that page to increase its PageRank? Will they increase my PageRank if I perform this operation?
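
    For reference (a generic illustration, not taken from the question): a link is "nofollow" when its anchor carries the rel attribute below, which asks search engines not to pass ranking credit through it; a link without that attribute is what is informally called "dofollow".

        <a href="http://example.com/post" rel="nofollow">trackback link</a>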

    Read the article

  • Switch encoding of terminal with a command

    - by Tomas Lycken
    One of the servers I quite often ssh to uses Western encoding instead of UTF-8 (and there's no way I can change that). I've started writing a bash script to connect to this server, so I won't have to type out the entire address every time, but I would like to improve this script so it also changes the encoding of the terminal window correctly. The change I need can be performed with the mouse by navigating to "Terminal" - "Set Character Encoding..." - "Western (ISO-8859-1)". Is there a terminal command that does the same thing for the current terminal window/screen? To clarify: I'm not interested in ways of switching the locale of the system on the remote side - that system is administered by someone else, and I have no idea what might depend on the Latin-1 encoding there. What I want is to let the terminal window on my side switch its character encoding to the one mentioned above, the same way I can with the mouse and the menus.
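
    One approach worth trying from inside such a connect script (a sketch, not a verified answer for this setup; the host name is a placeholder): wrap the ssh session in luit, which translates between the remote side's ISO-8859-1 and the local UTF-8 terminal, so the terminal profile itself never has to change.

        luit -encoding ISO-8859-1 ssh user@legacy-server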

    Read the article

  • Formatted Ubuntu partition & now grub says "error: no such partition" - can't enter windows

    - by qwBJ
    I had installed Ubuntu (current version: 11.xx) alongside Windows Vista. Then I formatted the Ubuntu partition and merged it with another partition (without thinking, obviously). Now, when I restart the computer, GRUB probably tries to find the old partition (which no longer exists) and says:

        error: no such partition.
        grub rescue>

    Now I don't know what to do (I'm a total beginner). I tried to re-install Ubuntu on the newly formatted partition, but this won't work, because after removing the install USB (which I am told to do during installation) I see the above error message again. I guess I need some way to reconfigure GRUB, OR to reinstall GRUB/Ubuntu (on the newly formatted partition), OR to reinstall the Windows boot manager (without reinstalling Windows), but I have no idea how to do any of these things.

    Read the article

  • initramfs - Unable to find a medium containing a live file system

    - by LittleBobbyTables
    I'm desperately trying to install Ubuntu 12.10 64-bit on my new Ultrabook. It's a Sony T13 with 8 GB RAM, 256 GB SSD, an i7, and Windows 8. I have an extra partition, D: "UBUNTU", already created with about 30 GB of space using FAT32. The Ubuntu image is MD5-checked and on a previously working USB stick written with UNetbootin. GRUB loads fine. When I ask to test out Ubuntu ("Try Ubuntu without installing") it shows the purple loading screen for a bit, then brings up this error in a BusyBox terminal:

        initramfs - Unable to find a medium containing a live file system

    Things I've tried that don't work:

        Different versions of Linux (Fedora, Arch, SL, even GParted)
        Using USB 2/3 - no difference
        Legacy or UEFI - different interface, but same error
        BIOS has no option for anything "AHCI" related

    I have read through tons of other people having this problem and diligently tried all the above solutions, with no luck.
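
    One thing that often clears this particular error (a hedged suggestion, not a confirmed fix; /dev/sdX is a placeholder for the USB stick and must be double-checked, since dd overwrites the target device): write the ISO to the stick as a raw image instead of through UNetbootin, then boot from it again.

        sudo dd if=ubuntu-12.10-desktop-amd64.iso of=/dev/sdX bs=4M
        sync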

    Read the article

  • Portable Class Library even better in .NET 4.5

    - by nmarun
    Visual Studio 2012 makes cross-platform development even easier. It comes with a feature called Portable Class Library (PCL). This feature was available in Visual Studio 2010 as well, but it required an additional install, as opposed to being out of the box in 2012. It's also worth noting that PCL is available only in the Pro and higher editions of 2012, so it's not available with the Express edition of Visual Studio 2012. Let's get started. In Visual Studio 2012 you can see a template called Portable Class...(read more)

    Read the article

  • Window decoration of emacs23 window on fluxbox is outside screen

    - by mit
    I am starting emacs remotely over an ssh connection, but I cannot find a way to resize or move the emacs window. There is no fluxbox title bar visible, and I guess the title bar is above the visible viewport, because emacs starts with more height than the screen has. The lower border of the emacs window is also below the viewport border, so I cannot resize the window. I am starting emacs like this:

        emacs23

    This is the emacs version:

        This is GNU Emacs 23.1.1 (x86_64-pc-linux-gnu, GTK+ Version 2.20.0) of 2010-03-29 on yellow, modified by Debian

    The remote system that runs emacs is 10.04 Lucid Lynx amd64. The local system is running 9.10 Karmic Koala 32-bit and Fluxbox 1.1.1-2.
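
    One workaround to try (a sketch; the size and position values are arbitrary examples): ask Emacs for an explicit X geometry that fits the local screen, so the frame comes up small enough for its borders and any decoration to stay reachable.

        emacs23 -geometry 80x40+0+0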

    Read the article

  • Sphere Texture Mapping shows visible seams

    - by AvengerDr
    As you can see from the above picture there is a visible seam in the texture mapping. The underlying mesh is a geosphere based on octahedron subdivisions. On that particular latitude, vertices have been duplicated. However there still is a visible seam. Here is how I calculate the UV coordinates:

        float longitude = (float)Math.Atan2(normal.X, -normal.Z);
        float latitude = (float)Math.Acos(normal.Y);
        float u = (float)(longitude / (Math.PI * 2.0) + 0.5);
        float v = (float)(latitude / Math.PI);

    Is this a problem in the coordinates or a mipmapping issue?

    Read the article

  • Any valid reason to Nest Master Pages in ASP.Net rather than Inherit?

    - by James P. Wright
    I am currently in a debate at work and I cannot fathom why someone would intentionally avoid inheritance with master pages. For reference, here is the project setup:

        BaseProject
            MainMasterPage
                SomeEvent
        SiteProject
            SiteMasterPage (nests MainMasterPage)
        OtherSiteProject
            MainMasterPage (from BaseProject)

    The debate came up because some code in BaseProject needs to know about "SomeEvent". With the setup above, the code in BaseProject needs to call this.Master.Master. That same code in BaseProject also applies to OtherSiteProject, where it is just accessed as this.Master. SiteMasterPage has no code differences, only HTML differences. If SiteMasterPage inherits MainMasterPage rather than nests it, then all code is valid as this.Master. Can anyone think of a reason to use a nested master page here instead of an inherited one?

    Read the article

  • Crashes while playing Mp3 songs

    - by sid
    I downloaded and installed Ubuntu last month. While downloading codecs for playing music and video formats, my laptop (Dell XPS) crashed. Later I started the system again, and now the problems I face are:

        1) After signing in as user/admin, the wallpaper loads while all other windows disappear; no UI (task bar and dock) is displayed even after, say, 30 minutes.
        2) I uninstalled and reinstalled Ubuntu and there were no problems at first, but when I play music files the laptop crashes and the same sequence as above follows; this has happened the last 6 times.
        3) Whenever the UI disappears after logging in, the hard disk starts to heat up and there is a considerable increase in the power usage of the system; the power drain is notable.

    Please suggest any changes or how to rectify the issue. Regards, Sid

    Read the article

  • Minimal Ubuntu remastering

    - by kapitanluffy
    So I was trying to remaster an Ubuntu Mini Remix using the tool called 'customizer'. My goal is to create a version with a GUI that is capable of networking too. I don't like all the Tomboy Notes, Evolution, and Unity stuff that Natty comes with. http://www.ubuntu-mini-remix.org/ http://u-customizer.sourceforge.net/ The packages I installed are just xorg and gnome-desktop-environment:

        apt-get install --no-install-recommends xorg gnome-desktop-environment

    Well, it worked quite well. I just want to ask if there are any minimum packages other than the two mentioned above? The lubuntu-desktop is great too, but I want GNOME more because it has a wider community (IMO). Oh, and please don't refer me to LFS - I'm still too much of a noob for that xD

    Read the article
