Search Results


  • Finding work as a college student

    - by lightburst
    I'm a 3rd year CS student. I'm currently really, really bored and tired of cheap school programming (I go to a fairly respectable, albeit not top, university in my country, but, really, it's not MIT), so I've been thinking about getting a part-time dev job for a long while now. I'm not some hotshot programmer or anything, but "Add/Delete XYZ class objects to list" or "Do this web feature/pattern in PHP" does get old after a few semesters. I also only learned (heard?) of programming when I entered college, so I haven't been in contact with code for very long.

    I can't really apply as an intern, as I have not accumulated the necessary credits yet to do that, so I was thinking of selling myself as a part-time dev. I still need to go to school, and don't want to subject myself to living two lives. Plus, I think I'd have better chances, considering my lack of things to write on the resume. The only language I know at heart is C (I've written several pointer-oriented things in my freshman year, which is apparently pretty leet, for some reason, or so Joel says; that sort of boosted my morale a bit), but I've worked with several other languages only for the sake of coursework, such as C#/Java/PHP/ASM. My only user-worthy project was a recursive quicksort simulator I wrote for class using GTK+ that used a textbox to output the progress. I also have not taken the compiler theory class yet, or done my thesis.

    All that being said, I wonder if any legitimate software company (big or small) would hire somebody like me, considering all that. If there are companies that would accept somebody like me, would I be doing programming, or maybe just be a tester or something? Would anybody hire me as a dev at all? I know I don't have much (not even a degree), but what I lack in experience, I compensate for with interest. I am less interested in websites and 'management' software (no offense or anything; also, most of the places here ONLY have those), and more into 'process-driven' (I'm not sure what to call it) and system software. I have my eyes on a startup that sells basically an extension of Google Drive, but I feel like I'm too 'risky' for them.

    Oh, I'm also 19, if that means anything. We're not K-12, so kids go to college earlier than in the US.


  • State / Screen management in Entity Component Systems

    - by David Lively
    My entity/component system is happily humming along and, despite some performance concerns I initially had, everything is working fine. However, I've realized that I missed a crucial point when starting this thing: how do you handle different screens?

    At the moment, I have a GameManager class which owns a component manager and an entity manager. When I create an entity, the entity manager assigns it an ID and makes sure it's tracked. When I modify the components that are assigned to an entity, an UpdateEntity method is called, which alerts each of the systems that they may need to add or remove the entity from their respective entity lists.

    A problem with this is that the collection of entities operated on by each system is determined solely by the individual systems, typically based on a "required component" filter. (An entity has to have a Renderable component to be rendered, for instance.) In this situation, I can't just keep collections of entities per screen and only Update/Draw those collections. Entities would either have to be added and removed depending on their applicability to the current screen, which would cause their associated components to be removed, or enabled/disabled in groups per screen to hide what's not supposed to be visible. These approaches seem like really, really crappy kludges. What's a good way to handle this?

    A pretty straightforward way that comes to mind is to create a separate GameManager (which in my implementation owns all of the systems, entities, etc.) per screen, which means that everything outside of the device context would be duplicated. That's bothersome because some things are always visible, or I might want to continue to display the game under a translucent menu window.

    Another option would be to add a "layer" key to the GameManager class, which could be checked against a displayable layer stack held by the game manager. *System.Draw() would be called for each active layer, in the required order as determined by the stack. When the systems request an iterator for their respective entity collections, it would be pre-filtered to a (cached) set of those entities that participate in the active layer. Those collections could be updated from the same UpdateEntity event that's already used to maintain each system's entity collections. Still, it kinda feels like a hack.

    If I've coded myself into a corner, feel free to throw tomatoes as long as they're labeled with a helpful suggestion. Hooray for learning curves.
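
    As a rough illustration of that "layer" option, here is a minimal sketch in C++ of a layer-mask pre-filter on a system's cached entity collection. The Entity and RenderSystem types here are hypothetical, purely for illustration, and not from the original post:

        #include <cstdint>
        #include <vector>

        using LayerMask = std::uint32_t; // one bit per screen/layer

        struct Entity {
            int id;
            LayerMask layers; // which screens/layers this entity participates in
        };

        class RenderSystem {
        public:
            // Maintained by the existing UpdateEntity event: entities that
            // pass the "required component" filter land in this collection.
            void Track(const Entity* e) { entities_.push_back(e); }

            // Called once per active layer, in stack order.
            void Draw(LayerMask activeLayer) {
                for (const Entity* e : entities_) {
                    if (e->layers & activeLayer) {
                        // draw *e; entities outside the layer keep their
                        // components and are simply skipped
                    }
                }
            }

        private:
            std::vector<const Entity*> entities_;
        };

    In a real implementation the per-layer subset would be cached rather than re-tested every frame, as the post suggests, but the point is that layer membership can live alongside the component filter without adding or removing any components.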


  • Forms & Reports upgrade characterset issues

    - by Lukasz Romaszewski
    Hello! This quick post is based on my findings during recent IMC workshops, especially those related to upgrading Forms 6i/9i/10g applications to the Forms 11g platform.

    The upgrade process itself is pretty straightforward and basically requires recompiling your Forms application with the latest version of the frmcmp tool. In some cases though, especially when you migrate from Forms 6i, which is a client-server architecture, to a 3-tier web solution (Forms 11g), you need to rewrite some parts of your code to make it run on the new platform. The things you need to change range from reimplementing (using the webutil library) typical client-side functionality like local IO operations, access to WinAPI, invoking DLLs, etc., to changing deprecated or obsolete APIs like RUN_PRODUCT to RUN_REPORT_OBJECT. To automate those changes Oracle provides a complete Java API (JDAPI) which allows you to manipulate the code and structure of your modules. To make it even easier, we can use the Forms Migration Assistant tool (written in Java using JDAPI), which is able to replace all occurrences of old API entries with their 11g equivalents, or warn you when the replacement is not possible. You can also add your own replacement definitions in the search_replace.properties file. But you need to be aware of some issues that can be encountered using this tool.

    First of all, if you are using some hard-coded text inside your triggers, you may notice that after processing them with the Migration Assistant tool the national characters may be lost. This is due to the fact that you need to explicitly tell the Java application (which MA really is) what kind of characterset it should use to read those texts properly. In order to do that, just add the following line to the script calling MA:

        export JAVA_TOOL_OPTIONS=-Dfile.encoding=<JAVA_ISO_ENCODING>

    where the particular encoding must match the NLS_LANG in your Forms Builder environment (for example, for a Polish characterset you need to use ISO-8859-2).

    The second issue you can encounter related to national charactersets is a lack of national symbols in your reports after migration. This can be solved by adding an appropriate NLS_LANG entry in your reports environment. Sometimes, instead of a particular characterset, you see "Greek characters" in your reports. This is just the default font used by the reports engine instead of the one defined in your report. To solve it you must copy the font definitions from your old environment (e.g. a Forms 10g installation) to the appropriate directory in the new installation (usually the AFM folder).

    For more information about this and other issues please refer to https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&doctype=BULLETIN&id=1297012.1 at the My Oracle Support site. That's all for today, stay tuned for more posts on this topic! Lukasz


  • Smooth animation when using fixed time step

    - by sythical
    I'm trying to implement the game loop where the physics is independent from rendering, but my animation isn't as smooth as I would like it to be and it seems to periodically jump. Here is my code:

        // alpha is used for interpolation
        double alpha = 0, counter_old_time = 0;
        double accumulator = 0, delta_time = 0, current_time = 0, previous_time = 0;
        unsigned frame_counter = 0, current_fps = 0;
        const unsigned physics_rate = 40, max_step_count = 5;
        const double step_duration = 1.0 / 40.0, accumulator_max = step_duration * 5;

        // information about the circle (position and velocity)
        int old_pos_x = 100, new_pos_x = 100, render_pos_x = 100, velocity_x = 60;

        previous_time = al_get_time();

        while(true)
        {
            current_time = al_get_time();
            delta_time = current_time - previous_time;
            previous_time = current_time;

            accumulator += delta_time;
            if(accumulator > accumulator_max)
            {
                accumulator = accumulator_max;
            }

            while(accumulator >= step_duration)
            {
                if(new_pos_x > 1330) velocity_x = -15;
                else if(new_pos_x < 70) velocity_x = 15;

                old_pos_x = new_pos_x;
                new_pos_x += velocity_x;

                accumulator -= step_duration;
            }

            alpha = accumulator / static_cast<double>(step_duration);
            render_pos_x = old_pos_x + (new_pos_x - old_pos_x) * alpha;

            al_clear_to_color(al_map_rgb(20, 20, 40)); // clears the screen
            al_draw_textf(font, al_map_rgb(255, 255, 255), 20, 20, 0,
                          "current_fps: %i", current_fps); // print fps
            al_draw_filled_circle(render_pos_x, 400, 15,
                                  al_map_rgb(255, 255, 255)); // draw circle

            // I've added this to test how the program will behave when rendering
            // takes considerably longer than updating the game.
            al_rest(0.008);

            al_flip_display(); // swaps the buffers

            frame_counter++;
            if(al_get_time() - counter_old_time >= 1)
            {
                current_fps = frame_counter;
                frame_counter = 0;
                counter_old_time = al_get_time();
            }
        }

    I have added a pause during the rendering part because I wanted to see how the code would behave when a lot of rendering is involved. Removing it makes the animation smooth, but then I'd have to make sure that I don't let the frame rate drop too much, and that doesn't seem like a good solution. I've been trying to fix this for a week and have had no luck, so I'd be very grateful if someone could read through my code. Thank you!

    Edit: I added the following code to work out the actual velocity (pixels per second) of the ball each time the ball is rendered, and surprisingly it's not constant, so I'm guessing that's the issue. I'm not sure why it's not constant.

        alpha = accumulator / static_cast<double>(step_duration);
        render_pos_x = old_pos_x + (new_pos_x - old_pos_x) * alpha;

        cout << (render_pos_x - old_render_pos) / delta_time << endl;
        old_render_pos = render_pos_x;


  • What can a Service do on Windows?

    - by Akemi Iwaya
    If you open up Task Manager or Process Explorer on your system, you will see many services running. But how much of an impact can a service have on your system, especially if it is 'corrupted' by malware? Today's SuperUser Q&A post has the answers to a curious reader's questions.

    Today's Question & Answer session comes to us courtesy of SuperUser, a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    The Question

    SuperUser reader Forivin wants to know how much impact a service can have on a Windows system, especially if it is 'corrupted' by malware:

    What kind of malware/spyware could someone put into a service that does not have its own process on Windows? I mean services that use svchost.exe, for example. Could a service spy on my keyboard input? Take screenshots? Send and/or receive data over the internet? Infect other processes or files? Delete files? Kill processes? How much impact could a service have on a Windows installation? Are there any limits to what a malware-'corrupted' service could do?

    The Answer

    SuperUser contributor Keltari has the answer for us:

    What is a service? A service is an application, no more, no less. The advantage is that a service can run without a user session. This allows things like databases, backups, and the ability to log in to run when needed, without a user logged in.

    What is svchost? According to Microsoft: "svchost.exe is a generic host process name for services that run from dynamic-link libraries". Could we have that in English please? Some time ago, Microsoft started moving all of the functionality from internal Windows services into .dll files instead of .exe files. From a programming perspective, this makes more sense for reusability... but the problem is that you can not launch a .dll file directly from Windows; it has to be loaded up from a running executable (.exe). Thus the svchost.exe process was born.

    So, essentially a service which uses svchost is just calling a .dll and can do pretty much anything with the right credentials and/or permissions. If I remember correctly, there are viruses and other malware that do hide behind the svchost process, or name their executable svchost.exe to avoid detection.

    Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.
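
    As an aside, the "a .dll has to be loaded up from a running executable" point is easy to see with the real Win32 LoadLibrary/GetProcAddress APIs. Here is a minimal C++ sketch; the DLL name and the exported ServiceMain function are hypothetical, purely for illustration:

        #include <windows.h>
        #include <cstdio>

        // Assumed signature of the hypothetical exported entry point.
        typedef void (*ServiceEntry)(void);

        int main() {
            // A .dll cannot run by itself; some host .exe must map it into a
            // process, which is exactly the job svchost.exe does for
            // DLL-based services.
            HMODULE mod = LoadLibraryA("someservice.dll"); // hypothetical DLL name
            if (mod == NULL) {
                std::printf("could not load DLL\n");
                return 1;
            }

            // Look up an exported function and call it inside this process,
            // with this process's credentials and permissions.
            ServiceEntry entry =
                (ServiceEntry)GetProcAddress(mod, "ServiceMain"); // hypothetical export
            if (entry != NULL) {
                entry();
            }

            FreeLibrary(mod);
            return 0;
        }

    Once the DLL's code runs inside the host process, it can do whatever that process's credentials allow, which is why a corrupted service is as dangerous as any other application running under the same account.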


  • Determining cause of random latency and loading issues

    - by Sherwin Flight
    I'm not sure exactly what details to post in regards to my issue, because I'm not sure what is relevant.

    Prior to the end of September my websites all loaded quickly, in almost all cases; loading time wasn't usually more than a few seconds. However, since the end of September I have noticed a big increase in page loading times. In some cases pages are taking 30 seconds or more to load. I have a remote monitoring service watching some of the sites as well, and the image below shows the response times over the past month. The response times shown at the beginning of this graph are what the usual response times were prior to this issue occurring. You can see that there has been a significant increase in response times from the beginning to the end of this graph.

    The thing is, the problem is not happening 100% of the time. If I click through the site, or even just keep refreshing the page, about 25% of the time the pages load quickly; the remaining 75% of the time they load slowly. Sometimes the pages take so long to load that they time out and don't load at all. I have contacted my hosting provider, and they said things at their end are fine. I don't believe the problem is my home internet provider, because all other websites load without a problem.

    The server is located in Texas, USA. This also raises another interesting point. My remote monitor checks my site from two locations: California, USA, and London, England. As you can see in the chart below, the response time is actually shorter when checked from London, which doesn't seem to make sense, since the server is physically closer to the California monitoring location. I would have expected the London monitoring location to have higher response times, since it is physically farther away.

    I should also point out that in some traceroute tests I've done, it seems like the first connection to the server takes the longest; after that, the rest of the page loads quickly. Below is a little chart showing the times for the first connection to the server. Sending the request to the server is very quick, and receiving the reply back seems pretty quick, but the WAIT time is really long. So it connects, sends the request, but then waits close to 30 seconds before it starts receiving data back.

    So, what could be causing this problem, and what steps can I take to resolve it, or at least narrow it down? I am aware that there are things I can do to speed up page loading times, like reducing the number of CSS and JS files used on a page, compressing images, etc. That is not really the source of the problem, though, because nothing has really changed on the site since before the problem started, and other sites on the same server are loading slowly as well.


  • The value of money

    - by ambreesh
    A dictionary definition of money is "any circulating medium of exchange, including coins, paper money, and demand deposits". If you ask an economist for a definition of money, you will be introduced to terms like M1, M2, M3, all of which denote tangible assets - currency, and anything that is liquid enough to be used as currency; checks, stamps, and now mobile minutes being examples.

    The macroeconomic theory of money is fascinating - the effect of money supply on exchange rates and interest rates, and the concept of the "money multiplier": if I deposit $10 into a bank, the bank will likely loan $8 of it to someone else, who will then give it to someone else in exchange for goods and services, who will then likely deposit it again, which will result in the bank loaning it again, and so on - making that $10 of money supply worth a lot more ($10 + $8 + ...). But all this depends on money supply - in other words, money that is printed by the mint. The Treasury Department spends a lot of time figuring out how much money to print; there is a lot being written on QE2 nowadays, which is intended to increase the money supply. Money is used to purchase goods and services, and yes, it is saved too, but only so one can purchase goods and services later.

    Completely unrelated, there is a sea change occurring in the web world, dominated by, I believe, Facebook. With 500M active users and growing, FB has the ability to introduce a "money supply" which is completely unrelated to today's "money". Using today's money, a FB user can buy a certain number of FB$s, and then use the FB$s within FB to purchase goods and services - with the money multiplier kicking in. I remember talking with a colleague about this a few years ago; the true way to monetize the web is to introduce an alternative system to the existing one, and FB has the ability to do just that. There is enough momentum, enough mass, for FB to start to monetize its user base. And completely screw up the economists at the Treasury, not to mention disintermediating the banks completely.

    The only other ubiquitous asset is mobile minutes. People exchanging mobile minutes for tangible goods and services happens today; the big difference, however, is the demographic. While Safaricom offers this ability in Kenya today, FB has the 15-40 year old middle class user as their user. And the next generation is growing up with FB as a standard channel for communicating with their peers. Virtual flowers when going in for the kill? If your target is an avid FB user, why not? It certainly is a lot more green - no pun intended!
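
    To make that multiplier concrete: assuming, as in the deposit example above, that the bank re-lends 80% of every deposit, the initial $10 supports a geometric series of deposits:

        \[
        10 + 10(0.8) + 10(0.8)^2 + \cdots = \frac{10}{1 - 0.8} = 50
        \]

    so $10 of printed money can end up backing roughly $50 of circulating money supply.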


  • Referencing a picture in another DLL in Silverlight and Windows Phone 7

    - by Laurent Bugnion
    This one has burned me a few times, so here is how it works, for future reference.

    Usually, when I add an Image control to a Silverlight application, and the picture it shows is local (as opposed to loaded from the web), I set the picture's Build Action to Content, and Copy to Output Directory to Copy if Newer. What the compiler does then is copy the picture to the bin\Debug folder, and then pack it into the XAP file. In XAML, the syntax to refer to this local picture is:

        <Image Source="/Images/mypicture.jpg" Width="100" Height="100" />

    And in C#:

        return new BitmapImage(new Uri(
            "/Images/mypicture.jpg", UriKind.Relative));

    One of the features of Silverlight is to allow referencing content (pictures, resource dictionaries, sound files, movies, etc.) located in a DLL directly. This is very handy because, just by using the right syntax in the URI, you can do this in XAML directly, for example with:

        <Image Source="/MyApplication;component/Images/mypicture.jpg" Width="100" Height="100" />

    In C#, this becomes:

        return new BitmapImage(new Uri(
            "/MyApplication;component/Images/mypicture.jpg", UriKind.Relative));

    Side note: This kind of URI is called a pack URI, and they have been around since the early days of WPF. There is a good tutorial about pack URIs on MSDN. Even though it refers to WPF, it also applies to Silverlight.

    Side note 2: With the Build Action set to Content, you can rename the XAP file to ZIP, extract all the files, change the picture (but keep the same name), rezip the whole thing and rename it again to XAP. This is not possible if the picture is embedded in an assembly!

    So what's the catch? Well, the catch is that this does not work if you set the Build Action to Content. It's actually pretty simple to explain: the pack URI above tells the Silverlight runtime to look within an assembly named MyApplication for a file named mypicture.jpg in the Images folder. If the file is included as Content, however, it is not in the assembly. Silverlight does not find it, and silently returns nothing. The image is not displayed.

    And the fix? The fix, for class libraries, is to set the Build Action to Resource. With this, the picture gets packed into the DLL itself. Of course, this will increase the size of the DLL, and any change to the picture will require recompiling the class library, which is not ideal. But in the cases where you want to distribute pictures (icons, etc.) together with a plug-in assembly, well, this is a good way to have everything in the same place.

    Happy coding,
    Laurent

    Laurent Bugnion (GalaSoft)
    Subscribe | Twitter | Facebook | Flickr | LinkedIn


  • Developing for 2005 using VS2008!

    - by Vincent Grondin
    I joined a fairly large project recently and it has a particularity... Once finished, everything has to be sent to the client under VS2005 using VB.Net, and can target either framework 2.0 or 3.0... A long time ago, the decision to use VS2008 and to target framework 3.0 was taken, but people knew they would need to establish a few rules to ensure that each dev would use VS2008 as if it were VS2005... Why is that so? Well, simply because the compiler in VS2005 is different from the compiler inside VS2008... I thought it might be a good idea to note the things that you cannot use in VS2008 if you plan on going back to VS2005. Who knows, this might save someone the headache of going over all their code to fix errors...

    - Do not use LinQ keywords (from, in, select, orderby...).
    - Do not use LinQ standard operators under the form of extension methods.
    - Do not use type inference (in VB.Net you can switch it OFF in each project's properties). This means you cannot use XML Literals.
    - Do not use nullable types under the declarative form Dim myInt As Integer? - but Dim myInt As Nullable(Of Integer) is perfectly fine.
    - Do not test nullable types with Is Nothing; use myInt.HasValue instead.
    - Do not use Lambda expressions (there are no Lambda statements in VB9), so you cannot use the keyword "Function".
    - Pay attention not to use relaxed delegates, because this one is easy to miss in VS2008.
    - Do not use Object Initializers.
    - Do not use the "ternary If operator"... not the IIf method, but this one: If(condition, truepart, falsepart).

    As a side note, I talked about not using LinQ keywords or the extension methods, but this doesn't mean you can't use LinQ in this scenario. LinQ is perfectly accessible from inside VS2005. All you need to do is reference System.Core, use namespace System.Linq, and use the class "Enumerable" as a helper class... This is one of the many classes containing various methods that VS2008 sees as extensions. The trick is you can use them too! Simply remember that the first parameter of the method is the object you want to query on, and then pass in the other parameters needed...

    That's pretty much all I see, but I could have missed a few... If you know other things that are specific to the VS2008 compiler and which do not work under VS2005, feel free to leave a comment and I'll modify my list accordingly (and notify our team here...)! Happy coding all!


  • TechCast Live: "Java and Oracle, One Year Later" Replay Now Available

    - by Justin Kestelyn
    Earlier this week I had the opportunity to chat with Ajay Patel, Oracle's VP leading the Java Evangelist team, about "the state of the union" with respect to Oracle and Java. Take a look!

    Here are some choice quotes, some paraphrased, as helpfully transcribed by Java evangelist Terrence Barr:

    "One key thing we have learned ... Java is not just a platform, it is also an ecosystem, and you can't have an ecosystem without a community."

    "The objectives, strategically [for Java at Oracle] have been pretty clear: How do we drive adoption, how do we build a larger, stronger developer community, how do we really make the platform much more competitive."

    "It's about transparency, involvement. IBM, RedHat, Apple have all agreed to working with us to make OpenJDK the best platform for open source development ... it is a sign that the community has been waiting to move the Java platform forward."

    "It's not just about Oracle anymore, it's about Java, the technology, the community, the developer base, and how we work with them to move the innovation forward."

    "Java is strategic to Oracle, and the community is strategic for Java to be successful ... it is critical to our business."

    On JavaFX 2.0: "... is coming to beta soon, with a release planned in the second half [of 2011] ... will give you a new, high-performance graphics engine, the new API for JavaFX ... you will see a very strong, relevant platform for leveraging rich media platforms."

    On the JDK and SE: "... aggressively moving forward, JDK 7 is now code complete ... looking good for getting JDK 7 out by summer as we promised. Started work on JDK 8; Jigsaw and Lambda are moving along nicely, on track for the JDK 8 release next year ... good progress."

    On Java EE and Glassfish: "... Very excited to have Glassfish 3.1 released, with clustering and management capabilities ... working with the JCP to shortly submit a number of JSRs for Java EE 7 ... You'll see Java EE 7 becoming the platform for cloud-based development."

    "You will see Oracle continue to step up to this role of Java steward, making sure that the language, the technology, the platform ... is competitive, relevant, and widely adopted."

    Making progress!


  • What are the arguments against parsing the Cthulhu way?

    - by smarmy53
    I have been assigned the task of implementing a Domain Specific Language for a tool that may become quite important for the company. The language is simple but not trivial; it already allows nested loops, string concatenation, etc., and it is practically certain that other constructs will be added as the project advances.

    I know by experience that writing a lexer/parser by hand - unless the grammar is trivial - is a time-consuming and error-prone process. So I was left with two options: a parser generator à la yacc, or a combinator library like Parsec. The former was good as well, but I picked the latter for various reasons, and implemented the solution in a functional language. The result is pretty spectacular to my eyes: the code is very concise, elegant and readable/fluent. I concede it may look a bit weird if you have never programmed in anything other than Java/C#, but then this would be true of anything not written in Java/C#.

    At some point, however, I was literally attacked by a co-worker. After a quick glance at my screen he declared that the code is incomprehensible and that I should not reinvent parsing, but just use a stack and String.Split like everybody does. He made a lot of noise, and I could not convince him, partially because I was taken by surprise and had no clear explanation, partially because his opinion was immutable (no pun intended). I even offered to explain the language to him, but to no avail.

    I'm positive the discussion is going to resurface in front of management, so I'm preparing some solid arguments. These are the first few reasons that come to my mind to avoid a String.Split-based solution:

    - you need lots of ifs to handle special cases, and things quickly spiral out of control
    - lots of hardcoded array indexes make maintenance painful
    - it is extremely difficult to handle things like a function call as a method argument (e.g. add(add(a, b), c))
    - it is very difficult to provide meaningful error messages in case of syntax errors (which are very likely to happen)

    I'm all for simplicity, clarity and avoiding unnecessary smart-cryptic stuff, but I also believe it's a mistake to dumb down every part of the codebase so that even a burger flipper can understand it. It's the same argument I hear for not using interfaces, not adopting separation of concerns, copying-pasting code around, etc. A minimum of technical competence and willingness to learn is required to work on a software project, after all. (I won't use this argument, as it will probably sound offensive, and starting a war is not going to help anybody.)

    What are your favorite arguments against parsing the Cthulhu way?*

    *Of course, if you can convince me he's right, I'll be perfectly happy as well.
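
    For a sense of scale, here is a minimal sketch (in C++ rather than the original functional implementation, and with a toy grammar invented for illustration) of the recursive-descent structure that even the nested-call case add(add(a, b), c) demands - recursion that a flat String.Split simply cannot express:

        #include <cctype>
        #include <iostream>
        #include <memory>
        #include <string>
        #include <vector>

        // Minimal AST: an identifier, optionally applied to arguments.
        struct Expr {
            std::string name;
            std::vector<std::unique_ptr<Expr>> args;
        };

        struct Parser {
            std::string src;
            size_t pos = 0;

            void skipWs() {
                while (pos < src.size() && std::isspace((unsigned char)src[pos])) ++pos;
            }

            std::string ident() {
                skipWs();
                size_t start = pos;
                while (pos < src.size() && std::isalnum((unsigned char)src[pos])) ++pos;
                return src.substr(start, pos - start);
            }

            // expr := ident [ '(' expr { ',' expr } ')' ]
            std::unique_ptr<Expr> expr() {
                auto node = std::make_unique<Expr>();
                node->name = ident();
                skipWs();
                if (pos < src.size() && src[pos] == '(') {
                    ++pos; // consume '('
                    node->args.push_back(expr()); // recursion handles nesting
                    skipWs();
                    while (pos < src.size() && src[pos] == ',') {
                        ++pos;
                        node->args.push_back(expr());
                        skipWs();
                    }
                    if (pos < src.size() && src[pos] == ')') {
                        ++pos; // consume ')'
                    } else {
                        // a real parser reports position and expectation
                        std::cerr << "expected ')' at position " << pos << "\n";
                    }
                }
                return node;
            }
        };

        int main() {
            Parser p{"add(add(a, b), c)"};
            auto ast = p.expr();
            std::cout << ast->name << " takes " << ast->args.size()
                      << " arguments\n"; // prints: add takes 2 arguments
        }

    Even this toy grammar needs recursion and position tracking for error reporting; a combinator library gives you both essentially for free, which is the point a stack-and-Split approach misses.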


  • Antenna Aligner Part 5: Devil is in the detail

    - by Chris George
    "The first 90% of a project takes 90% of the time and the last 10% takes the another 200%"  (excerpt from onista) Now that I have a working app (more or less), it's time to make it pretty and slick. I can't stress enough how useful it is to get other people using your software, and my simple app is no exception. I handed my iPhone to a couple of my colleagues at Red Gate and asked them to use it and give me feedback. Immediately it became apparent that the delay between the list page being shown and the list being drawn was too long, and everyone who tried the app clicked on the "Recalculate" button before it had finished. Similarly, selecting a transmitter heralded a delay before the compass page appeared with similar consequences. All users expected there to be some sort of feedback/spinny etc. to show them it is actually doing something. In a similar vein although for opposite reasons, clicking the Recalculate button did indeed recalculate the available transmitters and redraw them, but it did this too fast! One or two users commented that they didn't know if it had done anything. All of these issues resulted in similar solutions; implement a waiting spinny. Thankfully, jquery mobile has one built in, primarily used for ajax operations. Not wishing to bore you with the many many iterations I went through trying to get this to work, I'll just give you my solution! (Seriously, I was working on this most evenings for at least a week!) The final solution for the recalculate problem came in the form of the code below. $(document).on("click", ".show-page-loading-msg", function () {            var $this = $(this),                theme = $this.jqmData("theme") ||                        $.mobile.loadingMessageTheme;            $.mobile.showPageLoadingMsg(theme, "recalculating", false);            setTimeout(function ()                           { $.mobile.hidePageLoadingMsg(); }, 2000);            getLocationData();        })        .on("click", ".hide-page-loading-msg", function () {              $.mobile.hidePageLoadingMsg();        }); The spinny is activated by setting the class of a button (for example) to the 'show-page-loading-msg' class. Recalculate This means the code above is fired, calling the showPageLoadingMsg on the document.mobile object. Then, after a 2 second timeout, it calls the hidePageLoadingMsg() function. Supposedly, it should show "recalculating" underneath the spinny, but I've not got that to work. I'm wondering if there is a problem with the jquery mobile implementation. Anyway, it doesn't really matter, it's the principle I'm after, and I now have spinnys!


  • Height Map Mapping to "Chunked" Quadrilateralized Spherical Cube

    - by user3684950
    I have been working on a procedural spherical terrain generator for a few months; it has a quadtree LOD system which splits the six faces of a quadrilateralized spherical cube into smaller "quads" or "patches" as the player approaches those faces. What I can't figure out is how to generate height maps for these patches.

    To generate the heights I am using a 3D ridged multifractal algorithm. For now I can only displace the vertices of the patches directly using the output from the ridged multifractal. I don't understand how to generate height maps that allow the vertices of a terrain patch to be mapped to pixels in the height map. The only thing I can think of is taking each vertex in a patch, plugging it into the RMF, taking that position and translating it into (u,v) coordinates, then determining the pixel position directly from the (u,v) coordinates and determining the grayscale color based on the height. I feel as if this is the right approach, but there are a few other things that may further complicate my problem.

    First of all, I intend to use height maps with a pixel resolution of 192x192, while the vertex "resolution" of each terrain patch is only 16x16 - meaning that I don't have any vertices to sample for the RMF for most of the pixels. The main reason the height map resolution is higher is so that I can use it to generate a normal map (otherwise the height maps serve little purpose, as I can just directly displace vertices as I currently am). I am pretty much following this paper very closely. This is, essentially, the part I am having trouble with:

    "Using the cube-to-sphere mapping and the ridged multifractal algorithm previously described, a normalized height value ([0, 1]) is calculated. Using this height value, the terrain position is calculated and stored in the first three channels of the position map (RGB) - this will be used to calculate the normal map. The fourth channel (A) is used to store the height value itself, to be used in the heightmap."

    The steps in the first sentence are my primary problem. I don't understand how the pixel positions correspond to positions on the sphere, and what positions are sampled for the RMF to generate the pixels if vertices cannot be used.
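
    One possible reading of that passage, sketched in C++ under the assumption that the simple "normalize the cube point" form of the cube-to-sphere mapping is used (the paper may use a different variant), with a placeholder standing in for the actual ridged multifractal sampler:

        #include <cmath>
        #include <vector>

        // Placeholder for the 3D ridged multifractal sampler, returning a
        // normalized height in [0, 1]. The real implementation goes here.
        double ridgedMultifractal(double x, double y, double z) {
            return 0.5 + 0.5 * std::sin(10.0 * x) * std::sin(10.0 * y) * std::sin(10.0 * z);
        }

        // Fill a 192x192 height map for one patch on the +Z cube face.
        // (u0,v0)-(u1,v1) is the patch's rectangle within the face's
        // [-1,1]^2 domain, as defined by the quadtree subdivision.
        std::vector<double> buildHeightMap(double u0, double v0,
                                           double u1, double v1) {
            const int N = 192;
            std::vector<double> height(N * N);
            for (int j = 0; j < N; ++j) {
                for (int i = 0; i < N; ++i) {
                    // Each *pixel* gets its own (u,v): no vertex is needed.
                    double u = u0 + (u1 - u0) * i / (N - 1);
                    double v = v0 + (v1 - v0) * j / (N - 1);
                    // Cube-to-sphere: project the cube-face point onto the
                    // unit sphere by normalizing it.
                    double len = std::sqrt(u * u + v * v + 1.0);
                    double x = u / len, y = v / len, z = 1.0 / len;
                    height[j * N + i] = ridgedMultifractal(x, y, z);
                }
            }
            return height;
        }

    The key point is that the sampling runs pixel -> (u,v) on the face -> sphere position -> RMF, so the 192x192 map is filled independently of the 16x16 vertex grid; the vertices simply sample the same mapping at coarser (u,v) positions.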


  • NuGet JustMock

    - by mehfuzh
    As most of us already know, JustMock now has a free edition. The free edition is not stripped down in features compared to the full edition; I would rather say it is stripped down in the types you can mock. Technically, the free version runs on a proxy, while the full version runs on a proxy plus a profiler. The full version switches to the profiler when you are mocking final methods, sealed classes, or anything else that cannot be done using inheritance. For instance, in the full version you can mock non-public methods; in the free version you can still do it, but a protected method has to be virtual, and internal virtual methods must be reached through the InternalsVisibleTo attribute (if you have access to the source and can apply the attribute).

    Now, you can get a copy of the free edition from the product page. Install it and off you go. But it is also exposed to NuGet. For those of you not familiar with NuGet (that would be odd): NuGet is the centralized package manager from Microsoft that cuts out the workflow of manually including libraries in your project. I think NuGet will in future limit the scope of ".vsi" packages and installers because of its ease (except in some cases). It's similar to Ruby gems: in Ruby, you can install virtually any library with "gems install <target_library>" and you are off to go. It will check the dependencies and install them, or else prompt you with the steps you need to take.

    Now, sticking to the post: to get started you first need to install the NuGet package manager. Once you have completed that step, pressing "Ctrl + W, Ctrl + Z" will bring up a console. Once you are there, you just have to type:

        install-package justmock

    Next, it should print a confirmation when the installation is complete, and moving to the Visual Studio solution explorer, you will now see the new reference.

    Finally, NuGet is still in its early days, and the steps shown here may not remain the same in coming releases, but feel free to enjoy what is out there right now. Regarding the JustMock free edition, there is a nice post by Phil Japikse at Introducing JustMock Free Edition. I think it's worth checking out if you haven't already.

    Have fun and happy holidays!


  • Introducing Agile development after traditional project inception

    - by Riggy
    About a year and a half ago, I entered a workplace that claimed to do Agile development. What I learned was that this place had adopted several agile practices (such as daily standups, sprint plannings and sprint reviews) but none of the principles (just-in-time / just-good-enough mentality, exposing failure early, rich communication).

    I've now been tasked with making the team more agile, and I've been assured that I have complete buy-in from the devs and the business team. As a pilot program, they've given me a project that just completed 15 months of requirements gathering, has a 110-page Analysis & Design document (to be considered as "written in stone"), and where I have no access to the end users (only to the committee made up of the users' managers, who won't actually be using the product).

    I started small, giving them a list of expected deliverables for the first 5 sprints (leaving the future sprints undefined) and a list of goals for the first sprint, and I dissected the A&D doc to get enough user stories to meet the first sprint's goals. Since then, they've asked why we don't have all the requirements for all the sprints, why I haven't started working on stuff for the third sprint (which they consider more important, but which is based off of the deliverables of the first 2 sprints), and they are pressing for even more documentation that my entire IT team considers busy-work or unrelated to us (such as writing the user manual up front, documenting all the data fields from all the sprints up front, and more "up-front" work).

    This has been pretty rough for me as a new project manager, but there are improvements I have effectively implemented, such as scrumban for story management, pair programming, and having the business give us customer acceptance tests up front (as part of the requirements documentation). So my questions are:

    - What can I do to more effectively introduce change to a resistant business?
    - Are there other practices that I can introduce on the IT side to help show the business the benefits of agile?
    - The burden of documentation is strangling us - the business still sees it as a risk management strategy instead of as a risk. What can we do to alleviate their documentation concerns and demands (specifically the quantity of documentation, and their need for all of it up front)?
    - We are in a separate building from our business, about 3 blocks away, and they refuse to have their people on the project co-habitate because that person "won't be able to work on their other projects while they're at our building." They expect us to always go over there and to bundle our questions so that we can ask them all at once and not waste that person's time with "constant interruptions." What can we do to get richer communication from them?

    Any additional advice would also be appreciated. Thanks!


  • Diving into Scala with Cay Horstmann

    - by Janice J. Heiss
    A new interview with Java Champion Cay Horstmann, now up on otn/java, titled "Diving into Scala: A Conversation with Java Champion Cay Horstmann," explores Horstmann's ideas about Scala as reflected in his much-lauded new book, Scala for the Impatient. None other than Martin Odersky, the inventor of Scala, called it "a joy to read" and the "best introduction to Scala". Odersky was so enthused by the book that he asked Horstmann if the first section could be made available as a free download on the Typesafe website, something Horstmann graciously assented to.

    Horstmann acknowledges that some aspects of Scala are very complex, but he encourages developers to simply stay away from those parts of the language. He points to several ways Java developers can benefit from Scala: "For example," he says, "you can write classes with less boilerplate, file and XML handling is more concise, and you can replace tedious loops over collections with more elegant constructs. Typically, programmers at this level report that they write about half the number of lines of code in Scala that they would in Java, and that's nothing to sneeze at. Another entry point can be if you want to use a Scala-based framework such as Akka or Play; you can use these with Java, but the Scala API is more enjoyable."

    Horstmann observes that developers can do fine with Scala without grasping the theory behind it. He argues that most of us learn best through examples and not through trying to comprehend abstract theories. He also believes that Scala is the most attractive choice for developers who want to move beyond Java and C++. When asked about other choices, he comments: "Clojure is pretty nice, but I found its Lisp syntax a bit off-putting, and it seems very focused on software transactional memory, which isn't all that useful to me. And it's not statically typed. I wanted to like Groovy, but it really bothers me that the semantics seems under-defined and in flux. And it's not statically typed. Yes, there is Groovy++, but that's in even sketchier shape. There are a couple of contenders such as Kotlin and Ceylon, but so far they aren't real. So, if you want to do work with a statically typed language on the JVM that exists today, Scala is simply the pragmatic choice. It's a good thing that it's such a nice choice."

    Learn more about Scala by going to the interview here.


  • Canonicalization issue regarding academic URL vs. blog URL

    - by user5395
    I'm sorry if what I am about to write is long-winded. I only wish to be clear.

    I am an academic in the scientific community. I maintain a web site for my research, teaching, and other professional activities. Until recently, the content for this site was hosted in a directory on my university department's own server. The address is of the typical form (universityname).edu/~(myusername). I decided that I wanted to use WordPress to host and manage my page, so I set up a WordPress.com blog and then replaced the index.html file in (universityname).edu/~(myusername) with a new one consisting of a single frame containing the WordPress.com blog. Now when a user visits (universityname).edu/~(myusername), he or she sees the blog instead. This has been pretty nice because, even when the user clicks on links between pages or posts in the blog, the only thing showing up in the address bar of the browser is www.(universityname).edu/~(myusername), because the blog is constrained to a frame.

    However, the effect of this change on the search side of things has not been so kind to me. Before, when someone searched for my name in Google, the first result was always (universityname).edu/~(myusername). This is the most desirable outcome, for professional reasons. (Having my academic URL come up first suggests that I am an accredited professional, and not just some crank with a blog!) But now, Google seems to have canonicalized my web presence under the blog's WordPress.com address. It has completely forgotten about my academic URL and considers the WordPress.com address to be the best address representing me on the web. Unfortunately, WordPress.com doesn't support the canonical tag, so I can't tell the blog to advertise itself as my academic URL in the header. (It doesn't seem to help at all that I have used the WordPress.com dashboard to turn on no-indexing of the blog.)

    One obvious solution would be to use the departmental server to host my content again, with a local installation of the WordPress platform. For reasons beyond my control, the platform will not be deployed on the departmental server at this time. Another solution would be to use shared hosting with WordPress.org support, because the WordPress.org platform does support the canonical tag (albeit via a plug-in). But this seems to usually require purchasing a domain name and other fees, and there is no guarantee that Google will listen to the canonical tag (it might use whatever domain name I end up with instead).

    Is there a way I can more cleverly integrate the WordPress.com blog into a page hosted on my department's server? Is there some PHP code I can write to retrieve the blog's contents in a way that Google won't treat as a link / "perceive" the blog? Please note: I am a PHP novice at best. I just feel there should be a simpler solution to all this, within the constraints of what I have described above. Thanks!


  • Robust line of sight test on the inside of a polygon with tolerance

    - by David Gouveia
    Foreword

    This is a follow-up to this question and the main problem I'm trying to solve. My current solution is a hack which involves inflating the polygon and doing most calculations on the inflated polygon instead. My goal is to remove this step completely, and correctly solve the problem with calculations only.

    Problem

    Given a concave polygon, and treating all of its edges as if they were walls in a level, determine whether two points A and B are in line of sight of each other, while accounting for some degree of floating-point error. I'm currently basing my solution on a series of line-segment intersection tests. In other words:

    - If any of the end points are outside the polygon, they are not in line of sight.
    - If both end points are inside the polygon, and the line segment from A to B crosses any of the edges of the polygon, then they are not in line of sight.
    - If both end points are inside the polygon, and the line segment from A to B does not cross any of the edges of the polygon, then they are in line of sight.

    But the problem is dealing correctly with all the edge cases. In particular, it must be able to deal with all the situations depicted below, where red lines are examples that should be rejected, and green lines are examples that should be accepted. I probably missed a few other situations, such as when the line segment from A to B is collinear with an edge, but one of the end points is outside the polygon.

    One point of particular interest is the difference between cases 1 and 9. In both cases, both end points are vertices of the polygon, and there are no edges being intersected, but 1 should be rejected while 9 should be accepted. How do I distinguish these two? I could check some middle point within the segment to see if it falls inside or not, but it's easy to come up with situations in which that would fail. Case 7 was also pretty tricky, and I had to treat it as a special case which checks whether the two points are adjacent vertices of the polygon directly. But there are also other chances of line segments being collinear with the edges of the polygon, and I'm still not entirely sure how I should handle those cases.

    Is there any well-known solution to this problem?
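
    Not an answer, but for reference, a minimal C++ sketch of the usual building block for this kind of tolerance handling: an orientation test with an epsilon band, so "touching" and "crossing" cases (like examples 1 vs. 9 above) can be told apart explicitly instead of being left to floating-point luck. The EPS value is an assumption and would need tuning to the coordinate scale:

        #include <cmath>

        struct Vec2 { double x, y; };

        const double EPS = 1e-9; // tolerance; scale-dependent assumption

        // Twice the signed area of triangle (a,b,c):
        // > 0 left turn, < 0 right turn, ~0 collinear.
        double cross(const Vec2& a, const Vec2& b, const Vec2& c) {
            return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
        }

        // Classify with tolerance instead of trusting an exact zero.
        int orient(const Vec2& a, const Vec2& b, const Vec2& c) {
            double d = cross(a, b, c);
            if (d > EPS)  return 1;
            if (d < -EPS) return -1;
            return 0; // treated as collinear within tolerance
        }

        // True only for a *proper* crossing of segments ab and cd, i.e. their
        // interiors intersect. Touching endpoints and collinear overlaps
        // return false here, and must be handled as the separate edge cases
        // discussed in the post.
        bool properIntersect(const Vec2& a, const Vec2& b,
                             const Vec2& c, const Vec2& d) {
            int o1 = orient(a, b, c), o2 = orient(a, b, d);
            int o3 = orient(c, d, a), o4 = orient(c, d, b);
            return o1 * o2 < 0 && o3 * o4 < 0;
        }

    With orientation results quantized to {-1, 0, 1}, each of the depicted situations maps to a specific combination of values, which makes the special cases explicit rather than accidental.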


  • How do I prevent having to log in on 3 separate prompts every time I start my machine?

    - by JC
    Ubuntu 11.04 Natty Narwhal, Ubuntu Classic desktop. Each time I start my machine, I have to log in 3 times. I spent a week in Freenode IRC #ubuntu and got nothing but condescension. I've searched the official Ubuntu fora for similar problems, tried every recommendation, and still get 3 login screens. As a workaround, I have reset login such that I get a login screen at startup, which I'd prefer not to get, since no one but me has physical access to this machine.

    I have gone into System > Preferences > Passwords and Encryption Keys, set 'Passwords: default' to 'Default' and unlocked it, and unlocked the 'Passwords: login' key too. Next, since that changed nothing, I set 'Passwords: login' to 'Default' and checked to make sure it was still unlocked. Again, no change; I still get 3 login prompts at startup. I've checked twice to ensure that I am the owner of the files; I am. At the suggestion of several people in #ubuntu, I've deleted first one, then the other password key in 'Passwords and Encryption Keys'. Still 3 login prompts.

    I changed from the Unity desktop to Ubuntu Classic. While that didn't fix the above problem, it is a much more elegant desktop than Unity, and I'll keep it. From what I've read, this seems to be a Seahorse issue, but beyond that, no one seems to have a solution that works. I'm lost. This shouldn't be this difficult or annoying.

    I'm trying to help our local Old Time music collective get their machines switched over to Ubuntu in order to save them some money, which they can then use to promote their DRM-free music. But from what I've seen of Ubuntu so far on my own machine, I can't really recommend that they make this switch. I hope to be proved wrong on that point. But as it stands, if I were out of town or out of country and they ran into a problem, they'd have no way of fixing it, as they're all less experienced than even I am.

    I'm not trying to cast aspersions on Ubuntu or Linux, but it seems pretty clear that KNOWLEDGEABLE, HELPFUL support for Ubuntu is hard to find unless the problem-experiencing user is willing to put up with condescension. Having worked with, and run, several non-profits over the past 20 years, I know that getting volunteers to act professionally can be like herding cats. But an organization's reputation can be denigrated by sarcastic behavior on the part of those who serve, effectively, as its public face.

    Thank you all for your help and support. Now... does anyone have a solution to my problem?


  • Good, simple reasons for having a multiple environments

    - by smp7d
    Throughout my career I have worked at companies that had a collection of different environments for different purposes. We always had, more or less, a desktop environment, a test environment, a QA environment, a staging environment and a production environment. This went for both servers/applications and any data sources we were using.

    When I started at my current company, I found that 90% of the apps were either developed in a desktop environment against production data sources, or developed directly on the production server, depending on the platform. I wasn't fazed, because I was hired in part to make changes to improve the way the development team functioned, which was clear from my interview process. We slowly started to turn the philosophy around, and pretty soon most of the apps could be run in either a desktop, test or production environment. Not too long after that, staging came around as well. Now most of our developers see the benefit of this methodology and defend it vigorously.

    However, we have a number of legacy apps that never got migrated. We also have a number of legacy programmers who think of this as a waste of time. Unfortunately, we got lip service but never full buy-in from management. We got what we thought was a commitment to invest substantially in this about a year ago, but nothing materialized, despite the considerable planning that we put into it. Now we are finding that we need more and more environments. We need help from the server/network administration teams for setup, and we need participation from the business stakeholders to support the release cycle. We are at a place now where a project can function what I consider "normally" only if you have the right people on the project and the time to set up the proper environments.

    I'd love to present a complete argument, but management really has no time and no interest in hearing me out until there is a critical issue. I can't really articulate the benefits simply, as it has always just seemed second nature to me. I was wondering if there are any good, simple, irrefutable reasons for the separation of environments that would get managers with no development experience behind this idea. Are there any good resources/literature on the topic?


  • Shelving – What is it – and more importantly, can it help me?

    - by Chris Skardon
    Since we shifted to TFS we've had the ability to perform what is known as 'shelving'. Shelving (whilst not a wholly new topic in the world of SCC) is new to us, and didn't exist in our previous SCC solution, SVN. Soo... what is it?

    What?

    Shelving is a way to check in but not check in your code. By shelving you submit a copy of your 'pending changes' to the SCC server (which maintains a list of the shelvesets), and once that is done you can either continue working, or undo your changes, safe in the knowledge that a backup copy exists on the server. You can unshelve your code at any time and get back to the state you were in when you shelved.

    Yer, that is great, but why not just check it in??

    Shelvesets don't have to build. The shelveset you put in there could be entirely broken, or it might solve every bug in the system - shelves aren't continuously integrated, so you can shelve anything.

    Hmmmm... What else?

    Shelving allows us to do some pretty cool stuff that beforehand was quite frankly a pain. For instance, gated check-ins are implemented via the shelving mechanism: when code is checked in, what you're actually doing is shelving it. The build controller will build the shelveset with the original code, and if it succeeds, the code will be committed; if it fails - well - it's only you that has to fix the code :)

    Other nice features are things like the ability to share code you are working on... For example, if I was having trouble with a particular piece of code, I could shelve it, and then you (yes, you) could get that shelveset and check out the problem for yourself. And if you fix it?? Well - you could check it in!

    Nice, but day-to-day shizzle?

    Let's say you've been working on your project and your project manager comes over to you and says: "Hey, errr, bad times, there is an urgent bug we need you to fix, it needs to go out now!" (for this to play out, we'll need to assume you're currently working in the 'release' branch for another bug fix, maybe)... You could undo all your current changes (obviously you'll probably back up your code using zip or something, I imagine), fix the bug, then re-copy your backup over the top - or you could shelve and unshelve.

    Perhaps some other uses will awaken the shelver in you... :) Before each check-in - if you shelve, you no longer need to worry (if indeed you do) about resolving conflicts and mysteriously losing your code... Going home at night? Not checking in straight away? Why not shelve; this way, should the worst come to the worst and your local PC gives up, you can just get the shelveset onto another machine and be up and running in literally seconds (well, minutes)...


  • Virtualized data centre – Part four: The design

    - by marc dekeyser
    Welcome back to the fourth post in this series! Today we will have a look at what Microsoft recommends as a "private cloud design" and what I will make of it. Whilst my own solution is based on the reference architecture, it is quite different indeed! An important thing to know is that, whilst I am using the private cloud as a reference, I am skipping most of the steps in designing a private cloud. If that is why you are here, please read the links at the end of the article and skim through my own content. A private cloud is much more process-driven than just building a virtual infrastructure...

    The architecture of it all...

    So imagine for a minute that you have unlimited funds to build this lab of yours... You'd want redundancy on all levels and separation of each network where possible! Unfortunately we don't have that luxury and, as you saw me hinting at in the previous article, our own design will be more limited, but still quite capable!

    Networking

    From the networking perspective, I will not have a fully redundant network; after all, this is but a lab environment! Thanks to Server 2012 I will be able to use bonding on my NICs and use LACP to improve the performance on that part.

    Storage

    As I mentioned in the previous article, a Synology DS1218+ will be used for iSCSI provisioning. This device has 2 NICs on board which can be bonded into one 2 Gbps interface, giving me a decent throughput and making the disks the most limiting factor in the storage design.

    Domain controllers and extra infrastructure

    Server 2012 completely supports running domain controllers virtualized, and has no need to actually have a reachable DC when booting... That being said, I need a remote access machine to power on the hosts (I have no need for them running 24/7) and possibly a System Center VMM 2012 box (although Server 2012 is not supported until SP1 :( ). I'm undecided on whether to install those boxes separately or as virtual machines...

    Which amounts to... something like this pretty picture!

    Sources

    Microsoft Private Cloud Solutions Repository (en-US): http://social.technet.microsoft.com/wiki/contents/articles/12131.microsoft-private-cloud-solutions-repository-en-us.aspx

    Reference Architecture: http://social.technet.microsoft.com/wiki/contents/articles/3819.reference-architecture-for-private-cloud.aspx

    Private Cloud Reference Model: http://social.technet.microsoft.com/wiki/contents/articles/4399.private-cloud-reference-model.aspx

    Read the article

  • How do I get dual monitors to work properly in Ubuntu 11.10 on a Dell Latitude D630?

    - by wes cook
    I have spent a lot of time trying to get dual monitors to work on Ubuntu 11.10 on my Dell Latitude D630 (nVidia NVS 135m video card).

    - For starters, the System Displays settings app always showed only one unknown monitor, even though I had the external Acer monitor connected.
    - So I downloaded and installed the nVidia drivers. According to what I read, I would need to use only the nVidia driver app (nVidia X Server Settings), so that's what I've done. (System Displays settings continued to show only a single monitor anyway.)
    - The nVidia settings app only showed one monitor until I changed the BIOS setting to use the onboard video for the external monitor (not the dock video, which it was set to, even though I don't have a docking station).
    - The nVidia settings app now recognized both monitors, so I set up the X Server display config as a separate X screen for each monitor. My laptop screen shows up as AUO 1440x900 and my external monitor as Acer E211H 1920x1080.
    - Everything seemed like it would work, but the external monitor was just a complete white screen. It was non-functional, even though sometimes it would show the background image – still nothing would show up over there.
    - So, I checked the Enable Xinerama box.
    - Now, after logging out and back in, the wallpaper extends to both screens but I get no taskbar at the bottom or top and no system menus, and I have to press the power button to restart or log off.
    - After experimenting with all the shells, the only one that shows the menus and taskbars when I log in is GNOME Classic.
    - These are pretty much the same symptoms as found here: How do I fix 11.10 GUI?
    - So, I resign myself to the older shell.
    - Everything works fine until… I unplug the external monitor… this is a laptop, after all.
    - Anyway, after doing some work on the road, I plug back in and I still see both screens and it's functional, except…
    - Now the laptop screen (with the taskbar and menu bar) has 4 black bars at the top that windows cannot cover. The top bar is the menu bar (with Applications, Places, the date and time and the system menu on the right), but the next 3 bars (the same height as the top menu bar) are empty and are just reducing the maximum size of windows on that screen.
    - See screenshot here: http://i39.tinypic.com/35d2kh1.png

    So…
    1. How do I get rid of those extra 3 black bars? They're taking valuable screen space.
    2. (less critical) How do I successfully use both screens in the Ubuntu or Ubuntu 2D shell?
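    One possible avenue (an untested sketch, not a confirmed answer): with the proprietary nVidia driver of that era, TwinView tends to behave much better with GNOME than separate X screens plus Xinerama, because the desktop sees one large screen and the panels render normally. The resolutions below come from the question; the layout offsets are assumed:

        Section "Device"
            Identifier "nvidia"
            Driver     "nvidia"
            # One X screen spanning both monitors, so panels work normally
            Option     "TwinView"  "true"
            # Laptop panel on the left, Acer to its right
            Option     "MetaModes" "1440x900 +0+180, 1920x1080 +1440+0"
        EndSection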

    Read the article

  • Why CoffeeScript is tough to maintain

    - by Renso
    I recently started trying out CoffeeScript, only to find that it caused more headaches. The abstraction level of jQuery was perfect: it did not dictate to coders how to design their code, it just works. However, I recently posted a request to the CoffeeScript team to consider introducing curly braces to help control the flow of logic in more complex code. For example, an if-then-else with many nested levels can be near impossible to debug without tracing through it when using CoffeeScript. With IDEs like Visual Studio, regular JavaScript IntelliSense and auto-formatting make it easy to appropriately indent nested levels without any work on the part of the developer, and reading the result is not that hard, especially with extensions that show vertical lines in the code editor to help you see what is nested within which part of the code.

    However, with CoffeeScript that is not the case. The samples given on the CoffeeScript web site are of course just simple examples to explain the features, and one gets excited pretty quickly over the powerful shortcuts. I tried to convert a piece of JavaScript over to CoffeeScript and gave up, since you first need to remove ALL non-CoffeeScript coding constructs for it to even compile (js2coffee can help with that). Keeping track of nested levels, however, turned out to be simply unmanageable in CoffeeScript.

    Furthermore, any coding language that controls the flow of logic by indentation is extremely dangerous, for obvious reasons. I liked CoffeeScript a lot, but the fact that the logical flow of the code is controlled by how much you indent – spaces or tabs – is not reliable, as there is no easy way for the programmer to know which parts of the code will get hit when the code spans a page (see the sketch below).

    When I suggested introducing curly braces to the CoffeeScript team, one contributor advised me that my code needed to be redesigned! Needless to say, that is absurd. When I included a piece of the code, he asked me if it was legacy code. It's like saying to a Java programmer: sorry, you cannot use Java, because we don't agree with how you write your code. jashkenas from the CoffeeScript blog gave some great suggestions and made the point that introducing curly braces would be very problematic for them, as they use braces to denote objects. That makes sense, but I would still love to see the flow control via spaces and indentation replaced with something more concrete and human-readable.
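    A contrived CoffeeScript sketch of the complaint (the function and property names are hypothetical): the `else` pairs with whichever `if` its indentation says it does, so a single accidental dedent silently rewires the logic, and no brace or compiler error will catch it:

        # Flow here is governed entirely by indentation
        if user.isAdmin
          if record.isLocked
            unlockRecord record
          else                # pairs with the inner `if`...
            logAttempt user   # ...dedent the `else` one level and it
                              # pairs with the outer `if` instead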

    Read the article

  • Can I prevent an IDENTIFY PACKET DEVICE command to a specific device at boot?

    - by Brian Spisak
    This is related to a previous question about installation that is now resolved. I'm opening a new question because I still need to get my DVD drive working.

    Problem: Failed boot when my ASUS DRW-24B1/ST DVD drive is attached to my ASMedia ASM1061.

    Symptom:

        ata8.00: exception Emask 0x52 SAct 0x0 SErr 0xffffffff action 0xe frozen
        ata8: SError: { blah blah }
        ata8.00: failed command: IDENTIFY PACKET DEVICE
        ata8.00: cmd blah blah res blah blah (ATA bus error)
        ata8.00: status: { DRDY }
        ata8: hard resetting link

    Background: The ASM1061 is a PCIe-to-SATA bridge providing 2 x 6Gb/s ports and is supposed to be fully compliant with the SATA specs. I just discovered in the fine print of my ASUS P8Z77-V Pro motherboard manual that "These SATA ports are for data hard drivers only. ATAPI devices are not supported." However, I have already installed Windows 7 using this drive, and I can run the Ubuntu 12.04 installer from it as well. The only time I have a problem is during Ubuntu boot, when it tries an IDENTIFY PACKET DEVICE, which seems to be an ATAPI command. I can't simply switch this device to another SATA port because they are already allocated to other devices. (My chipset's 2 x 6Gb/s ports are connected to my boot SSD and a fast HDD, while the 4 x 3Gb/s ports are running a RAID 5 array.) If this can't be fixed or worked around, I suppose I'll have to go buy a SATA add-in card. Blech.

    Thoughts: If this is indeed a device-specific issue (i.e. it doesn't support ATAPI discovery), then I can't expect – is it udev? – to work with it. But it seems that Windows, and even the Ubuntu installer, work just fine. So why does udev have a problem? At the end of the day, it would be nice to have the DVD working under Ubuntu, but I can live without it. However, as this is a dual-boot machine, I can't physically disconnect it, because I want it to work with Windows. (And physically disconnecting it every time I want to boot Ubuntu is NOT an option. ;-)

    Questions:
    1. Should this be considered a bug? My feeling is that if it works with other OSes, it should probably work with Ubuntu as well.
    2. How can I work around this problem? I have limited knowledge of Linux internals, but it seems I should be able to somehow tell udev (or whatever is doing the discovery) to ignore that device. Is there a way?
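    One idea worth testing (an assumption on my part, not a verified fix): the kernel's libata.force parameter can disable a specific port/device at boot, which should stop the IDENTIFY PACKET DEVICE probe on ata8 at the cost of hiding the drive from Linux entirely – Windows would be unaffected. Roughly:

        # /etc/default/grub - "8.00" meaning port ata8, device 00, per the log
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash libata.force=8.00:disable"

        # Then regenerate the boot config and reboot
        sudo update-grub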

    Read the article
