Search Results

Search found 260 results on 11 pages for 'albeit'.

  • What is "2LUN" mode in connection with RAID?

    - by naxa
    I've come across RAID products that also list JBOD (just a bunch of disks) mode and 2LUN mode. What the heck is 2LUN mode? I could not find a description; the closest thing seems to be LUN ('logical unit number'), but I don't get the 2LUN part. UPDATE 1 This is what Wikipedia has to say about JBOD: JBOD (derived from "just a bunch of disks"): an architecture involving multiple hard drives, while making them accessible either as independent hard drives, or as a combined (spanned) single logical volume with no actual RAID functionality. So JBOD can actually mean two different (albeit related) things. Guest's answer says 2LUN means no spanning. Does this suggest that 2LUN simply means the JBOD variant with no spanning?

    Read the article

  • How can I check whether a volume is mounted where it is supposed to be using Python?

    - by Ben Hymers
    I've got a backup script written in Python which creates the destination directory before copying the source directory to it. I've configured it to use /external-backup as the destination, which is where I mount an external hard drive. I just ran the script without the hard drive being turned on (or being mounted) and found that it was working as normal, albeit making a backup on the internal hard drive, which has nowhere near enough space to back itself up. My question is: how can I check whether the volume is mounted in the right place before writing to it? If I can detect that /external-backup isn't mounted, I can prevent writing to it. The bonus question is why was this allowed, when the OS knows that directory is supposed to live on another device, and what would happen to the data (on the internal hard drive) should I later mount that device (the external hard drive)? Clearly there can't be two copies on different devices at the same path! Thanks in advance!
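
    For what it's worth, a minimal sketch of the check being asked about, using only Python's standard library (the path is the one from the question; the message and exit behaviour are illustrative):

        import os
        import sys

        BACKUP_DEST = "/external-backup"  # the mount point from the question

        # os.path.ismount() is True only when a volume is actually mounted at
        # the path; a plain directory on the internal drive returns False.
        if not os.path.ismount(BACKUP_DEST):
            sys.exit("%s is not a mount point - refusing to back up." % BACKUP_DEST)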

    Read the article

  • Chrome developer tools - network panel gaps

    - by Chris Nicholson
    In the Chrome developer tools, under the network tab, I'm curious to know what is happening during the gaps. If you look at my image below, I have highlighted in orange the areas where these gaps exist. Since I'm able to load a lot of my page from cache, it's a shame these large gaps occur, as they make up most of my page load time. What exactly is happening in this time? EDIT Okay, I found this answer which essentially sums up my question, so a different question: does anyone know a good method to reduce the length of these gaps? Presumably (albeit a rather extreme measure) if I inlined all my CSS on the page there wouldn't be a delay after loading the CSS file before the images were loaded.

    Read the article

  • Cannot increase my screen resolution

    - by Patrick Beardmore
    I am trying to install my new monitor, but my graphics adapter (Mobile Intel(R) 915GM/GMS, 910GML Express Chipset Family) does not offer a resolution of 1920x1080 in the [Display Properties Settings] window. It only offers up to 1360x768. Can anyone explain how I can increase this to the correct resolution? The monitor does show the Windows desktop, albeit at a lower resolution which is being stretched to fill the screen, making it look very blurry. I have installed the "Monitor Drivers" I found on the disk supplied with the new monitor, but these do not appear to have made any difference. The Intel software that comes with the graphics card has an information window containing lots of info about the card and the monitor itself. I have placed this on a webpage so you can examine it if helpful. Many thanks for your help in getting my Christmas present to work! Patrick P.S.: Before I got this screen I checked to see if my graphics card could cope with such a large screen.

    Read the article

  • Need an idiot-proof picture resizing program for Windows.

    - by marcusw
    My friend needs to resize some pictures as part of a web publishing job, but he is rather clueless when it comes to computers. I am in charge of teaching him how to do this, but only have Linux (albeit with Wine installed) at my disposal for testing. Could you guys recommend a fast, easy, batch-capable, and hopefully open-source program that will resize pictures to the resolution he wants? It doesn't have to be anything fancy, but it needs to be quick and easy to use. Thanks!

    Read the article

  • Do I need to recompile PHP to make use of CURL API?

    - by amn
    I have both Apache and PHP set up manually, albeit the latter without CURL. There is a jungle of instructions and explanations on extensions for PHP. I have a very straightforward question: what do I need to do to enable CURL in a more dynamic way? I resent the idea of static linking; in fact I hate and avoid static linking like the plague. Is it possible to have my Apache and PHP understand that there is CURL in town? I can compile CURL if necessary. Package management may be out of the question, because I built PHP myself - I am on Ubuntu, and it does not provide PHP without Suhosin, so I removed it and built PHP myself. The whole slew of related questions simply proposes installing the "php5-curl" package, which is exactly the one thing I CANNOT do, since it installs into a completely unrelated directory which my PHP does not even seem to link against.
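
    For reference, the usual route to a dynamically loaded (not statically linked) extension is to build just ext/curl from the PHP source tree - a sketch, assuming the source tree used to build PHP is still around and the CURL development headers are installed:

        cd php-src/ext/curl        # the ext/curl dir of the source tree PHP was built from
        phpize                     # generate build files against the installed PHP
        ./configure --with-curl    # point --with-curl at a prefix if CURL lives elsewhere
        make && sudo make install  # drops curl.so into PHP's extension directory
        # then add to php.ini and restart Apache:
        #   extension=curl.so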

    Read the article

  • ATI Radeon 4350 lower resolutions do not fill screen in Windows 7

    - by rlandster
    I have a new Windows 7 64-bit machine with an ATI HD4350 card. The card is connected via VGA (DSUB) to a SyncMaster 205 BW LCD monitor. If I set the resolution to 1680x1050, the LCD screen is completely filled and everything looks fine. But if I try to set the resolution to 1280x768 (my preferred resolution), only about 2/3 of the monitor screen is used. There are large black bars above and below the image (but not on the sides). I have successfully used this monitor at the 1280x768 resolution for many years without this problem (albeit in Windows XP and with a different video card). Can anyone suggest how I can adjust things so that the entire screen is used at the 1280x768 resolution?

    Read the article

  • Any examples of a complex, changeable system modelled with a spreadsheet?

    - by andygrunt
    When I asked this question using games as the example (hoping it would be more likely to have been done), it was closed as being off topic, so let me ask it this way... Has anyone used a spreadsheet to model a complex, changeable system (something like crowd behaviour, weather systems, a closed ecology, evolution or whatever) and if so, can you point me at it? I'm hoping for a normal (albeit complex) spreadsheet using the spreadsheet's inbuilt formulae and functions rather than something specially coded. I'm also after something where it's possible to change the variables and see the changed outcome - perhaps the variables change using random numbers or the like.
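
    By way of a tiny worked example of what built-in formulae alone can do here (the logistic map, a classic toy model of population growth; cell placement is illustrative): put a growth rate r in B1 (try values between 2.5 and 4), a starting population between 0 and 1 in A1, and in A2 enter

        =$B$1*A1*(1-A1)

    then fill A2 down a few hundred rows and chart column A. Changing the single variable in B1 flips the model between stable, oscillating and chaotic behaviour.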

    Read the article

  • Samba file shares - ownership of a folder accessible to one group, verified by MS Active Directory

    - by jackweirdy
    I have a machine set up to share a folder /srv/sambashare; here's an excerpt of the config file:
      [share]
      path = /srv/sambashare
      writable = yes
    The permissions of that folder are set to 700 and it is owned by nobody:nogroup at the moment. The problem I face is probably a simple one, but I'm fairly new to Samba so I'm not sure what to do. The contents of the share should be accessible to a particular user who will authenticate with domain credentials, checked against Active Directory via Kerberos. I haven't got Kerberos configured yet, as I wanted to test the share as soon as Samba was configured, albeit basically, to ensure that it works. I've noticed that I can only access & write to the share when the folder is either owned by the user logging in or made world-writable. The key issues are that this folder can't be world-writable, as it contains sensitive stuff, but at the same time can't be owned by a user or group, since they come from the AD server. Anyone know what I should do?
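
    Not an authoritative answer, but once the Kerberos/Winbind side is in place, one common pattern is to gate access with an AD group and map writers onto a single local identity, so the directory never has to be world-writable - a sketch (the domain, group and account names are placeholders; valid users, force user and force group are standard smb.conf options):

        [share]
        path = /srv/sambashare
        writable = yes
        # allow only members of an AD group (name is a placeholder)
        valid users = @"DOMAIN\share-users"
        # map every authenticated user onto one local account, so the
        # folder can be owned by that account instead of nobody:nogroup
        force user = shareowner
        force group = shareowner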

    Read the article

  • Restoring Dell recovery disks while preserving Linux partition

    - by Flup
    I have a Dell laptop dual-booting to Windows 7 and Linux. I have through my own stupidity royally stuffed the Windows partition. I have the set of recovery DVDs that I created when I first got the laptop, and I've successfully booted from them in a VirtualBox VM and ended up with a fresh (albeit virtualised) installation of Windows 7. When I started the recovery process, there was mention of other partitions being preserved, but it was unclear as to whether non-NTFS partitions would survive the process. The question is: can I run the recovery procedure without risking my Linux partition?

    Read the article

  • Does the .NET Framework need to be reoptimized after upgrading to a new CPU microarchitecture?

    - by Louis
    I believe that the .NET Framework will optimize certain binaries targeting features specific to the machine it's installed on. After changing the CPU from an Intel Nehalem to a Haswell chip, should the optimization be run again manually? If so, what is the process for that? Between generations, here are some notable additions:
      - Westmere: AES instruction set
      - Sandy Bridge: Advanced Vector Extensions
      - Ivy Bridge: RdRand (hardware random number generator), F16C (16-bit floating-point conversion instructions)
      - Haswell: Haswell New Instructions (includes Advanced Vector Extensions 2 (AVX2), gather, BMI1, BMI2, ABM and FMA3 support)
    So my, albeit naive, thought process was that the optimizations could take advantage of these in general cases. For example, perhaps calls to the Random library could utilize the hardware RNG on Ivy Bridge and later models.
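
    For what it's worth, the native images NGen produces are machine-specific, and a manual refresh after a hardware change looks like this sketch (run from an elevated command prompt; the version directory shown is the usual .NET 4.x one and may differ per machine):

        REM regenerate native images whose stored versions are out of date
        %WINDIR%\Microsoft.NET\Framework64\v4.0.30319\ngen.exe update
        REM or run any queued background compilation work immediately
        %WINDIR%\Microsoft.NET\Framework64\v4.0.30319\ngen.exe executeQueuedItems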

    Read the article

  • Navigating the Unpredictable Swinging of the Financial Regulation Pendulum

    - by Sylvie MacKenzie, PMP
    Written by Guest Blogger: Maureen Clifford, Sr Product Marketing Manager, Oracle The pendulum of the regulatory clock is constantly in motion, albeit often not in any particular rhythm. Nevertheless, given what many insurers have been through economically, any movement can send shock waves through critical innovation and operational plans. As pointed out in Deloitte’s 2012 Global Insurance Outlook, the impact of regulatory reform can cause major uncertainty in the area of costs. As the reality of increasing government regulation settles in, the change that comes along with it creates more challenges in compliance and ultimately in delivering the optimum return on investment. The result of this changing environment is a proliferation of compliance projects that must be executed with an already constrained set of resources, budget and time. Insurers are confronted by the need to gain visibility into all of their compliance efforts and proactively manage them. Currently that is very difficult to do, as these projects are often managed by groups across the enterprise that lack a way to coordinate their efforts and drive greater synergies. With limited visibility and equally limited resources, it is no surprise that reporting on project status and determining realistic completion of these projects is only a dream. As a result, compliance deadlines are missed, penalties are incurred, credibility with key stakeholders and the public is jeopardized, and returns and competitive advantage go unrealized. Insurers need to ask themselves some key questions:
      - Do I have "one stop" visibility into all of my compliance efforts? If not, what can I do to change that?
      - What is top priority and how does that impact my already taxed resources?
      - How can I figure out how to best balance my resources to get these compliance projects done as well as keep key innovation and operational efforts on track?
      - How can I ensure that I have all the requisite documentation for each compliance project I undertake?
    Complying with regulatory requirements is a necessary evil. Don't let the regulatory pendulum sideline your efforts to generate the greatest return on investment for your key stakeholders.

    Read the article

  • Why are data structures so important in interviews?

    - by Vamsi Emani
    I am a newbie in the corporate world, recently graduated in computers. I am a Java/Groovy developer. I am a quick learner and I can learn new frameworks, APIs or even programming languages within a considerably short amount of time. Albeit that, I must confess that I was not so strong in data structures when I graduated out of college. Throughout the campus placements during my graduation, I've witnessed that most of the big tech companies like Amazon, Microsoft etc. focused mainly on data structures. It appears as if data structures are the only thing that they expect from a graduate. Adding to this, I see that there is a general perspective that a good programmer is necessarily one with good knowledge of data structures. To be honest, I felt bad about that. I write good code. I follow standard design patterns, and I do use data structures, but at a superficial level, via the APIs Java exposes like ArrayList, LinkedList etc. But the companies usually focused on the intricate aspects of data structures, like pointer-based memory manipulation and time complexities. Probably because of my Java-ish background, back then I understood code efficiency and logic only when talked about in terms of object-oriented programming - objects, instances, etc. - but I never drilled down to the level of bits and bytes. I did not want people to look down upon me for this knowledge deficit of mine in data structures. So really, why all this emphasis on data structures? Does not having knowledge of data structures really affect one's career in programming? Or is knowledge of this subject really a sufficient basis to differentiate a good and a bad programmer?
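
    As a tiny illustration of the kind of complexity intuition being probed (Python for brevity; the same contrast exists between Java's ArrayList.contains and HashSet.contains):

        import time

        items = list(range(1_000_000))
        lookup = set(items)

        t = time.perf_counter()
        -1 in items                      # linear scan: O(n)
        print("list:", time.perf_counter() - t)

        t = time.perf_counter()
        -1 in lookup                     # hash lookup: O(1) on average
        print("set: ", time.perf_counter() - t)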

    Read the article

  • As a C# developer, would you learn Java to develop for Android or use MonoDroid instead?

    - by Dan Tao
    I'd consider myself pretty well versed in C#. It's my language of choice at the moment, and it's where basically all my professional experience lies. Still, I'm puzzled by the existence of the MonoDroid project. My understanding has always been that C# and Java are very close. Like, if you know one, you can learn the other really quickly. So, as I've considered developing my first Android app, I just assumed I would familiarize myself with Java enough to get started and then just sort of learn as I go. Wouldn't this make more sense than using MonoDroid, which is likely to be less feature-rich than the Java Android SDK, and requires learning its own API (albeit a .NET API) anyway? I just feel like it would be better to learn a new language (and an extremely popular one at that) and get some experience in it—when it's so close to what you already know anyway—rather than stick with a technology you're experienced with, without gaining any more valuable skills. Maybe I'm grossly misrepresenting the average potential MonoDroid user. Maybe it's more for people who are experienced in Java and .NET and just prefer .NET. Or maybe (in fact it's likely) there are other factors I just haven't considered. I'm just wondering, why would you use MonoDroid instead of just developing for Android using Java?

    Read the article

  • Scaling Scrum within a group of 100s of programmers

    - by blunders
    Most Scrum teams lean toward 7-15 people**, though it's not clear how to scale Scrum among 100s of people, or how the effectiveness of a given team might be compared to another team within the group; meaning beyond just breaking the group into Scrum teams of 7-15 people, it's unclear how efforts between the teams are managed, compared, etc. Any suggestions related to either of these topics, or additional related topics that might be more important to account for in planning a large-scale Scrum grouping? ** In reviewing research related to the suggested size of software development teams, which appears to be the basis for the suggested Scrum team size, I found what appears to be an error in the research, which oddly appears to show that bigger teams (15+ people), not smaller teams (7 people), are better. UPDATE, "Re: Scrum doesn't scale": Made huge amounts of progress personally researching the topic, but thought I'd respond to the general belief of some that Scrum doesn't scale by citing a quote from Succeeding with Agile by Mike Cohn: Scrum Does Scale: You have to admire the intellectual honesty of the earliest agile authors. They were all very careful to say that agile methodologies like Scrum were for small projects. This conservatism wasn't because agile or Scrum turned out to be unsuited for large projects but because they hadn't used these processes on large projects and so were reluctant to advise their readers to do so. But, in the years since the Agile Manifesto and the books that came shortly before and after it, we have learned that the principles and practices of agile development can be scaled up and applied on large projects, albeit with a considerable amount of overhead. Fortunately, if large organizations use the techniques described regarding the role of the product owner, working with a shared product backlog, being mindful of dependencies, coordinating work among teams, and cultivating communities of practice, they can successfully scale a Scrum project. SOURCE: (ran across the book thanks to Ladislav Mrnka's answer)

    Read the article

  • Need help in determining what, if any, tools can be used to create a free Flash game

    - by ReaperOscuro
    Yes, I proudly -and sadly- declare that I am a complete nincompoop when it comes to Flash, and I have been fishing around the big wide web for information. The reason for this is that I have been contracted to create a game (or games) for a website - the usual Flash-based games caveat. Please note I do not mean games made by those game-generator websites; I mean small yet professional games. The caveat, as always, is that impossible dream: it needs to be done all for free. The budget... well, imagine it as not there. Annoyingly, I am a game designer, yes, but with a ridiculously tight deadline I haven't got much time to re-learn (ah, the heady days of programming at uni) everything by the end of March, so I'd like to ask some people who know their stuff rather than keep looking at a gazillion different things. This is my understanding: with the Flash SDK you can create a game, albeit you need to be pretty programming-savvy. FlashDevelop helps there - yet I am not entirely sure how. Yes, it's undeniably powerful, but as I said there is the unattainable demand of no money. The million dollar question: what, if any, tools can I use to create a free Flash game?

    Read the article

  • How many of you *really* surf around without JavaScript enabled? [closed]

    - by Stephen
    I've decided to rephrase the question. After some deliberation on Meta, I've realized that my question needs to be a bit more focused. The question: Should we (web developers) continue to spend effort progressively enhancing our web applications with JavaScript, ensuring that features gracefully degrade, thereby ensuring accessibility? Or should we spend that time focused on new features or other areas of development? The subtext of that question would be: How many of our customers/clients/users utilize our websites or applications with JavaScript disabled? Do you have any projects with requirements that specifically demand JavaScript functionality (almost all of mine do), and do those requirements also demand graceful degradation? For the sake of asking this question, I pulled up programmers.stackexchange.com without JavaScript enabled, and I was greeted with this message: "Programmers - Stack Exchange works best with JavaScript enabled". It was difficult to log in, albeit the site seemed to generally work okay. (I wasn't able to vote up any questions.) I think this is a satisfactory approach to development. Imagine the effort involved in making all of the site's features work with plain old HTML and server-side logic. OTOH, I wonder how many users have been alienated by this approach. We've all been trained (at least the good developers among us) to use progressive enhancement and to ensure our web applications' dynamic features degrade gracefully. Is this progressive enhancement just pissing into the wind, or do some of our customers actually utilize certain web services without JavaScript enabled? I mean, like really, not figuratively or presumptuously.

    Read the article

  • Is there a resource that explains the benefits of layered programming?

    - by P.Brian.Mackey
    Some developers I know favor what I would call a procedural programming style. I recognize that procedural programming has its uses, albeit not in the business application world of .NET programming. So let's say we have a WinForms application with a button-click event. The button click handles everything from the UI configuration to the database call and data manipulation. So you end up with a method that is hundreds of lines of code long. Beyond the fact that this code can't be considered testable for various reasons, this style of programming is fragile in the face of change. I can talk about OO, anti-patterns, etc. The problem is that any distinct topic I can dream up requires a great deal of explanation to understand the potential benefits. Outside of finding a new job (lots of businesses program this way), how can I teach these kinds of developers to write better code? Obviously we can't sit around a round table and discuss pros and cons all day, due to time constraints and real work that has to be done, although intense training is the only thing I can think of to fix these problems. Not that I write perfect code, I most certainly do not. I do believe there are certain best practices that should be followed as a rule, e.g. OO in the context of .NET. The most common excuse I hear is "we can't write code fast enough if we do it like that".
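
    A minimal sketch of the contrast being described (Python for neutrality; the form dictionary, the db handle and all names are invented for illustration, not any particular framework's API):

        # Monolithic: the click handler owns validation, data access and UI feedback.
        def on_save_clicked(form, db):
            if not form["name"]:
                print("Name required")                       # UI concern
                return
            db.execute("UPDATE customers SET name=? WHERE id=?",
                       (form["name"], form["id"]))           # data concern
            print("Saved")

        # Layered: each piece is small and testable on its own.
        class CustomerRepository:                            # data access layer
            def __init__(self, db):
                self.db = db
            def rename(self, cid, name):
                self.db.execute("UPDATE customers SET name=? WHERE id=?", (name, cid))

        class CustomerService:                               # business rules layer
            def __init__(self, repo):
                self.repo = repo
            def rename(self, cid, name):
                if not name:
                    raise ValueError("Name required")
                self.repo.rename(cid, name)

        def on_save_clicked_layered(form, service):          # UI layer only
            try:
                service.rename(form["id"], form["name"])
                print("Saved")
            except ValueError as err:
                print(err)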

    Read the article

  • Internet Explorer 9 Preview 2 link + webcasts for developers

    - by Eric Nelson
    At Web Directions last week in London (10th and 11th June 2010) I promised several folks I would put up a blog post with more information on IE 9.0. True to my word (albeit a little later than I had hoped), here is what I was thinking of. Install: first up, install Preview 2 and try out the demos I was showing at the conference. Remember that IE9 Preview installs side by side with IE8/7 etc. It is not a beta, nor is it intended to be a full browser. It is a … preview :-) Including good old SVG-oids :-) Learn: then check out the following webcasts, which were recorded in March this year at MIX:
      - In-Depth Look At Internet Explorer 9 (Ted Johnson & John Hrvatin) - http://live.visitmix.com/MIX10/Sessions/CL28
      - High Performance Best Practices For Web Sites (Jason Weber) - http://live.visitmix.com/MIX10/Sessions/CL29
      - HTML5: Cross Browser Best Practices (Tony Ross) - http://live.visitmix.com/MIX10/Sessions/CL27
      - Internet Explorer Developer Tools (Jon Seitel) - http://live.visitmix.com/MIX10/Sessions/FT51
      - SVG: The Past, Present And Future of Vector Graphics For The Web (Patrick Dengler, Doug Schepers) - http://live.visitmix.com/MIX10/Sessions/EX30
      - Day 2 Keynote containing IE9 (Dean Hachamovitch) - http://live.visitmix.com/MIX10/Sessions/KEY02
    Slides and video downloads (MP4, small and large WMV) for each session are available at the links above.

    Read the article

  • Screen Resolution stuck at 640x480 after installing Bumblebee

    - by Saurabh Agarwal
    I have a Dell XPS 15z laptop. As you can see here, there are some issues with NVidia drivers. The site recommends installation of Bumblebee (instructions given in the link). I am posting them again for ease:
      $ sudo add-apt-repository ppa:bumblebee/stable
      $ sudo apt-get update && sudo apt-get upgrade
      $ sudo apt-get install bumblebee bumblebee-nvidia
      $ sudo usermod -a -G bumblebee $USER
    After restarting the computer, however, the screen resolution was stuck at 640x480 and I got the following error message as soon as I logged in:
      Could not apply the stored configuration for monitors
      none of the selected modes were compatible with the possible modes:
      Trying modes for CRTC 63
      CRTC 63: trying mode 640x480@60Hz with output at 1366x768@60Hz (pass 0)
      CRTC 63: trying mode 640x480@60Hz with output at 1366x768@60Hz (pass 1)
      Trying modes for CRTC 64
      CRTC 64: trying mode 640x480@60Hz with output at 1366x768@60Hz (pass 0)
      CRTC 64: trying mode 640x480@60Hz with output at 1366x768@60Hz (pass 1)
    Prior to the update the display was absolutely normal, so there is no doubt about the cause. Albeit, there was no support for the graphics drivers. In case it helps, some features of the graphics drivers seem to be functional after Bumblebee, i.e., all features are in order except for the resolution. And if the resolution can't be fixed, please suggest a way to revert the changes so that at least the prior state may be restored. Any help in the matter would be highly appreciated.

    Read the article

  • How important is graceful degradation of JavaScript? [closed]

    - by Stephen
    Should web developers continue to spend effort progressively enhancing our web applications with JavaScript, ensuring that features gracefully degrade, thereby ensuring accessibility? Or should we spend that time focused on new features or other areas of development? The subtext of that question would be: How many of our customers/clients/users utilize our websites or applications with JavaScript disabled? Do you have any projects with requirements that specifically demand JavaScript functionality (almost all of mine do), and do those requirements also demand graceful degradation? For the sake of asking this question, I pulled up programmers.stackexchange.com without JavaScript enabled, and I was greeted with this message: "Programmers - Stack Exchange works best with JavaScript enabled". It was difficult to log in, albeit the site seemed to generally work okay. (I wasn't able to vote up any questions.) I think this is a satisfactory approach to development. Imagine the effort involved in making all of the site's features work with plain old HTML and server-side logic. On the other hand, I wonder how many users have been alienated by this approach. We've all been trained (at least the good developers among us) to use progressive enhancement and to ensure our web applications' dynamic features degrade gracefully. Is this progressive enhancement just pissing into the wind, or do some of our customers actually utilize certain web services without JavaScript enabled?

    Read the article

  • Edd strikes again - IronRuby for Rubyists on InfoQ

    - by Eric Nelson
    Colleague, friend and generally top guy on IronRuby, Edd Morgan, has just been published over on InfoQ. To whet the appetite… a snippet or three. IronRuby for Rubyists: IronRuby is Microsoft's implementation of the Ruby language we all know and love, with the added bonus of interoperability with the .NET framework — the Iron in the name is actually an acronym for 'Implementation running on .NET'. It's supported by the .NET Common Language Runtime as well as, albeit unofficially, the Mono project. You'd be forgiven for harbouring some question in your mind about running a dynamic language such as Ruby atop the CLR - that's where the DLR (Dynamic Language Runtime) comes in. The DLR is Microsoft's way of providing dynamic language capability on top of the CLR. Both IronRuby and the DLR are, as part of Microsoft's commitment to open source software, available under the Microsoft Public License on GitHub and CodePlex respectively… And Metaprogramming with IronRuby: The art and science of metaprogramming — especially in Ruby, where it's an absolute joy — is something that could very easily span an entire article. As you would hope, IronRuby code is fully able to manipulate itself, allowing you to bend your classes to your whim just as you would expect with a good dynamic language… And Riding the irails? So let's get to the point. I think it's a solid bet to make that a large proportion of Ruby programmers are familiar with the Rails framework - perhaps it's even safe to assume that most were first led to the Ruby language by the siren song of the Rails framework itself. Long story short, IronRuby is compatible enough to run your Rails app… Now… get yourself over to the full article and also check out some of Edd's other work below. Related links:
      - 5 Steps to getting started with IronRuby
      - Mini Book Review of IronRuby Unleashed by Shay Friedman
      - Guest Post: Using IronRuby and .NET to produce the 'Hello World of WPF' - also by Edd
      - Getting PHP and Ruby working on Windows Azure and SQL Azure
      - Guest Post: What's IronRuby, and how do I put it on Rails? - also by Edd

    Read the article

  • Unity not running on startup

    - by Dan
    OK, so Firefox was running extremely slowly; I ran it in safe mode and it was still slow, so I rebooted, and when the machine came back on I wasn't at the regular Unity login - it was like a classic Windows login (where I had to enter my username and password manually, not pick from a list of users). I logged in and only my desktop was visible (with icons and my wallpaper). Nothing else. I was able to open a terminal with Ctrl+Alt+T and typed... sudo unity ...which got it up (albeit with the default icons on the launcher, e.g. I had unlocked LibreOffice and it was back). In "Startup Applications..." there was absolutely nothing at all... This happens every time I reboot. Thunderbird de-synced from my Gmail but Pidgin is still connected. When I do Ctrl+Alt+L it locks the screen as normal, but I have no option to switch user. I have the only account on this computer but I cannot access the main login screen to get to my Guest account. I'm not very Ubuntu-savvy, but it's pretty clear that I'm starting in some sort of safe mode. I am on a fresh install of Ubuntu 12.04.1 LTS (just installed it last night).

    Read the article

  • Is the development of CLI apps considered "backward"?

    - by user61852
    I am a fledgling DBA with a lot of experience in programming. I have developed several CLI, non-interactive apps that solve some daily repetitive tasks or eliminate human error from more complex, albeit not so daily, tasks. These tools are now part of our toolbox. I find CLI apps are great because you can include them in an automated workflow. Also, the Unix philosophy of doing a single thing but doing it well, and letting the output of one process be the input of another, is a great way of building a set of tools that would consolidate into a strategic advantage. My boss recently commented that developing CLI tools is "backward", or constitutes a "regression". I told him I disagreed, because most CLI tools that exist now are not legacy but are live projects with improved versions being released all the time. Is this kind of development considered "backwards" in the market? Does it look bad on a résumé? I also consider that all solutions, whether they are web or desktop, should have command-line, non-interactive options. Some people consider this a waste of programming resources. Is this goal a worthy one in a software project?

    Read the article
