Search Results

Search found 46749 results on 1870 pages for 'system preferences'.

  • Why does Ubuntu 13.10 not detect my Win7 partition?

    - by goutham
    I'm trying to install Ubuntu 13.10 alongside Windows 7 on my DELL INSPIRON 14z 5423 laptop, and I'm new to all of this. I'm using the Ubuntu 13.10 64-bit ISO burned onto a CD. The first time I tried to install it, Ubuntu said it did not detect any other OS, which meant I only had four options:

    - Erase disk and install Ubuntu (I don't want to do this)
    - Encrypt new Ubuntu
    - Use LVM
    - Something else

    If I choose the Something else option, it brings me to the partition menu and says that I have one disk with 500 GB of free space, but that's not true because I have Windows 7. I restarted the laptop several times and booted the CD again, and I got exactly the same result as before. How do I fix this problem and install Ubuntu alongside Windows 7?

    After executing sudo fdisk -l in a terminal:

        ubuntu@ubuntu:~$ sudo fdisk -l

        WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.

        Disk /dev/sda: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0xd2b811c5

           Device Boot      Start        End     Blocks  Id  System
        /dev/sda1   *        2048     206847     102400   7  HPFS/NTFS/exFAT
        /dev/sda2          206848  314574847  157184000   7  HPFS/NTFS/exFAT
        /dev/sda3       314574848  629147647  157286400   7  HPFS/NTFS/exFAT
        /dev/sda4       629147648  976771071  173811712   7  HPFS/NTFS/exFAT

    After removing one partition, I executed the command once again:

        ubuntu@ubuntu:~$ sudo fdisk -l

        WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.

        Disk /dev/sda: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 4096 bytes
        I/O size (minimum/optimal): 4096 bytes / 4096 bytes
        Disk identifier: 0xd2b811c5

           Device Boot      Start        End     Blocks  Id  System
        /dev/sda1   *        2048     206847     102400   7  HPFS/NTFS/exFAT
        /dev/sda2          206848  629145599  314469376   7  HPFS/NTFS/exFAT
        /dev/sda3       629147648  976771071  173811712   7  HPFS/NTFS/exFAT

  • Whether to separate out methods or not

    - by Skippy
    I am new to Java and want to learn best coding practices and understand why one method is better than another, in terms of efficiency and as the code becomes more complicated. This is just an example, but I can take the principles from here and apply them elsewhere. I need an option to display stuff, and have put the stuff method separate from the method that asks whether the user wants to display it, as stuff has a lot of lines of code. For readability I have done this:

        public static void displayStuff () {
            String input = getInput("Display stuff? Y/N \n");
            if (input.equalsIgnoreCase("Y")) {
                stuff();
            } else if (input.equalsIgnoreCase("N")) {
                // quit program
            } else {
                // throw error
                System.out.print("Error! Enter Y or N: \n");
            }
        }

        private static String stuff () {
            String stuff = "";
            // do lots of things here
            return stuff;
        }

    Or:

        public static void displayStuff () {
            String input = getInput("Display stuff? Y/N \n");
            if (input.equalsIgnoreCase("Y")) {
                // do lots of things here
            } else if (input.equalsIgnoreCase("N")) {
                // quit program
            } else {
                // throw error
                System.out.print("Error! Enter Y or N: \n");
            }
        }

    Is it better to keep them together, and why? Also, should the second method be private or public, if I am asking for data within the class? I am not sure if this is on topic for here; please advise.

  • Naming a class that processes orders

    - by p.campbell
    I'm in the midst of refactoring a project. I've recently read Clean Code and want to heed some of the advice within, with particular interest in the Single Responsibility Principle (SRP).

    Currently there's a class called OrderProcessor, in the context of a manufacturing product order system. This class currently performs the following routine every n minutes:

    - check the database for newly submitted and unprocessed orders (via a data-layer class already, phew!)
    - gather all the details of the orders
    - mark them as in-process
    - iterate through each to:
      - perform some integrity checking
      - call a web service on a 3rd-party system to place the order
      - check the status return value of the web service for success/fail
      - email somebody if the web service returns fail
    - constantly log to a text file on each operation or possible fail point

    I've started by breaking out this class into new classes like:

    - OrderService - poor name. This is the one that wakes up every n minutes
    - OrderGatherer - calls the DL to get the orders from the database
    - OrderIterator (? seems too forced or poorly named)
    - OrderPlacer - calls the web service to place the order
    - EmailSender
    - Logger

    I'm struggling to find good names for each class and to implement SRP in a reasonable way. How could this class be separated into new classes with discrete responsibilities?
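
    For concreteness, here is a minimal Java sketch of one possible split along the class names proposed above. Everything in it is illustrative -- the method names, the Order type, and the choice to fold integrity checking into the placer are assumptions, not details from the original system:

        import java.util.List;
        import java.util.logging.Logger;

        // Hypothetical sketch only -- all names and signatures are illustrative.
        class Order { long id; }

        interface OrderGatherer { List<Order> gatherUnprocessed(); }   // wraps the data layer; marks orders in-process
        interface OrderPlacer { boolean place(Order order); }          // integrity check + 3rd-party web service call
        interface FailureNotifier { void notifyFailure(Order order); } // emails somebody on failure

        class OrderProcessingService {
            private static final Logger LOG = Logger.getLogger("orders");
            private final OrderGatherer gatherer;
            private final OrderPlacer placer;
            private final FailureNotifier notifier;

            OrderProcessingService(OrderGatherer g, OrderPlacer p, FailureNotifier n) {
                gatherer = g;
                placer = p;
                notifier = n;
            }

            // Invoked every n minutes by whatever scheduler wakes the service up;
            // the coordinator owns sequencing and logging, nothing else.
            void runOnce() {
                for (Order order : gatherer.gatherUnprocessed()) {
                    LOG.info("placing order " + order.id);
                    if (!placer.place(order)) {
                        notifier.notifyFailure(order);
                    }
                }
            }
        }

    With this shape, "OrderService" can shrink to the scheduling concern alone, and the awkward "OrderIterator" disappears: iteration is just a loop inside the coordinator, not a responsibility needing its own class.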

  • Launcher icons are invisible after upgrade from 11.10 to 12.04

    - by Clo Knibbe
    I am re-purposing an old laptop. I installed 11.10 on it and then immediately upgraded to 12.04. (I could not directly install 12.04, as my system does not support PAE.) When my system was (briefly) 11.10, the desktop appeared as expected. However, after the upgrade to 12.04, the icons in the launcher area are invisible. If I hover over the spot where an icon should be, the little popup window showing the tool's name appears, and I can click to invoke the tool. I just cannot see the icons.

    (screenshot: invisible icons in the launcher)

    The icons do appear as expected in other contexts, for example in the Home folder and in Dash Home. My theme is "Ambiance (default)". I do not have a ~/.icons folder. This is the top-level contents of /usr/share/icons:

        default  DMZ-Black  DMZ-White  gnome  handhelds  hicolor
        HighContrast  HighContrastInverse  Humanity  Humanity-Dark
        locolor  LoginIcons  LowContrast  redglass  ubuntu-mono-dark
        ubuntu-mono-light  unity-icon-theme  whiteglass

    I suspect that the launcher isn't looking for the icons in the right place, but I don't know how to confirm that, or how to correct it. This is my first foray into Linux, although I used to use Unix a few decades ago. This doesn't look much like my old Sun workstation, though! Does anyone have any suggestions or insights for me? Thanks.

  • Hybrid Graphics on Ubuntu 12.04 switching to discrete

    - by cfstras
    I have a Sony Vaio VPCCB-27FX with hybrid graphics. Using vgaswitcheroo enables me to switch my discrete card off to save power. But when I want to switch to the discrete card for performance, my system freezes. I already tried logging out and killing X with service lightdm stop, but it still freezes as soon as I run echo DIS > switch. Typing blindly, echo IGD > switch returns me to my console, where it reads:

        [  179.555171] i915: switched off

    but it seems the discrete card never gets switched on. Running echo DDIS > switch gives me the following:

        [540....] [drm:atop_op_jump] *ERROR* atombios stuck in loop for more than 5secs aborting
        [540....] [drm:atom_execute_table_locked] *ERROR* atombios stuck executing CEE2 (len 62, WS 0, PS 0) @ 0xCEFE
        [540....] [drm:atom_execute_table_locked] *ERROR* atombios stuck executing BBF6 (len 1036, WS 4, PS 0) @ 0xBCF3
        [540....] [drm:atom_execute_table_locked] *ERROR* atombios stuck executing BB8C (len 76, WS 0, PS 0) @ 0xBB94
        [541....] [drm:r600_RING_TEST] *ERROR* radeon: ring test failed (scratch(0x8504)=0xFFFFFFFF)
        [541....] [drm:evergreen_resume] *ERROR* evergreen startup failed on resume

    After that, the atombios part repeats a few times. Also, the terminal locks up again and SysRq+REISUB is my only rescue. Does anybody have an idea how I can switch to my discrete card without the system locking up?

        # uname -srvmpio
        Linux 3.2.0-24-generic #39-Ubuntu SMP Mon May 21 16:52:17 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
        # lsb_release -r
        Description: Ubuntu 12.04 LTS

  • If you develop on multiple operating systems, is it better to have multiple computers + displays?

    - by dan
    I develop for iOS and Linux. My preferred OS is Ubuntu. Now my software shop (me and a partner) is developing for Windows too. The question is: is it more efficient to have multiple workstations, one for each target OS? Efficiency and productivity are a higher priority than saving money.

    I have a 3.4GHz i7 desktop workstation running Ubuntu and virtualized Windows with two displays, and I'm putting together an even more powerful i7 Hackintosh with 16GB RAM (to replace my weak 2.2GHz i5 MacBook Pro). My specific dilemma is whether I should sell the first computer and triple-boot on the second one, or buy two more displays and run both desktop systems simultaneously.

    I would appreciate answers from developers who write software for multiple OSes. Running guest OSes in VirtualBox on one system is not ideal, because in my experience performance is seriously degraded under virtualization. So the choice is between dual/triple booting on one system versus having two systems, one for OS X+iOS/Windows (dual boot) and the other for Ubuntu (which I prefer to use as my main OS). For much of our work, I write a server-side application for Linux and a client for iOS (or for Windows or OS X) simultaneously.

  • I can't enable extra effects in Ubuntu 10.10. Please help?

    - by jasoncruz98
    I installed Ubuntu 10.10 alongside Ubuntu 11.10 to use an older version of Compiz. On Ubuntu 11.10, Compiz was enabled by default and I didn't need to install any graphics driver to enjoy the effects. All I had to do was install CompizConfig Settings Manager and enable those extra effects. That was Compiz 0.9.6.

    Now, after installing Ubuntu 10.10, when I first logged in, the graphics were slow. When I dragged a window from one end of the screen to the other, the whole screen would blur up and pixelate, and it would be very laggy. I tried going to System > Preferences > Appearance and selecting Extra effects on the Visual Effects tab, but all I got was "Desktop effects could not be enabled". I don't know whether I should install the additional (proprietary) drivers, because my Internet is slow and it would take a long time. Furthermore, in Ubuntu 11.10, after I installed the proprietary graphics driver, I immediately went into fallback mode and wasn't even offered an option to set my desktop session to Ubuntu 3D. I didn't need the driver to run Compiz in Ubuntu 11.10; it just ran smoothly. But in Ubuntu 10.10, everything is laggy.

    Should I install the ATI/AMD proprietary FGLRX graphics driver for Ubuntu 10.10 to enable extra effects? Or is there something else wrong with my system?

    Here is the output of lspci -nn | grep VGA:

        00:02.0 VGA compatible controller [0300]: Intel Corporation Sandy Bridge Integrated Graphics Controller [8086:0116] (rev 09)
        01:00.0 VGA compatible controller [0300]: ATI Technologies Inc Device [1002:6760]

    Here is the output of the same command in Ubuntu 11.10 (in this case the one which is correct, because I don't have the Sandy Bridge integrated graphics controller):

        00:02.0 VGA compatible controller [0300]: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller [8086:0116] (rev 09)
        01:00.0 VGA compatible controller [0300]: ATI Technologies Inc NI Seymour [AMD Radeon HD 6470M] [1002:6760]

  • Why does Unity not extend to my 2nd monitor, even when it is displaying an X-Screen?

    - by Gridwalker
    I recently added a 2nd video card to my system, but Unity refuses to extend my desktop over to the second screen. Although the secondary monitor initialises when I boot and I can move the mouse cursor over to the 2nd screen, the screen is otherwise blank (showing no wallpaper or interface elements) and I am unable to move any windows to this monitor. Moving the mouse cursor over to the 2nd monitor changes it from the default cursor to the old-style X cursor, such as the one that appears when you run xkill, indicating that this screen is initialised in the X server but that Unity is not recognising it.

    Although the Nvidia X Server Settings application can see both monitors, the Unity system settings application does not detect the 2nd adapter. Sometimes the Additional Drivers application can see both adapters, but it doesn't consistently show options for them both. xrandr also fails to detect the 2nd monitor, but iNex lists both adapters. I have experimented with several different drivers for each adapter and with setting each of the graphics cards as the primary adapter in the BIOS, but this has made little difference.

    The two adapters are an onboard GeForce 8200 and a PCIE GeForce 7200 GX. The onboard adapter is currently set as the primary; however, this adapter crashes whenever I use the Nouveau driver, and I have to switch over to the PCIE card as primary whenever I purge the proprietary drivers (switching back when the 304 driver has been reinstalled). It doesn't matter which adapter I set as my primary, the results are the same: one screen showing the Unity interface and one screen showing an X screen that only displays the mouse cursor.

    All I want is to be able to run this system in a dual-screen configuration. I am not a gamer, nor do I require 3D rendering capabilities. Anything you can suggest to get the desktop to extend across both screens will be massively appreciated!

  • Two-part dice pool mechanic

    - by bythenumbers
    I'm working on a dice mechanic/resolution system based on the Ghost/Echo (hereafter shortened to G/E) tabletop RPG. Specifically, since G/E can be a little harsh in dealing out consequences and failure, I was hoping to soften the system and add a little more player control, as well as offer the chance for players to evolve their characters into something unique, right from creation.

    So, here's the mechanic: players roll 2d12 against the two statistics for their character (each is a number from 2-11, and may be rolled above or below depending on the nature of the action attempted; rolling your stat exactly always fails). Depending on the successes from that roll, they add dice to the pool rolled for a modified G/E-style action. The acting player gets two dice anyhow, and I am debating offering a bonus die for each success, or a single bonus die for succeeding on both of the statistic-compared rolls. Once the size of the dice pool is set, the entire pool is rolled, and the players are allowed to assign rolled dice to a goal and a danger. Assigned results are judged as follows:

    - 1-4 means the attempted goal fails, or the danger comes true.
    - 5-8 is a partial success at the goal, or partially avoiding the danger.
    - 9-12 means the goal is achieved, or the danger avoided.

    My concerns are twofold: firstly, that the two-stage action is too complicated, with two rolls to judge separately before anything can happen. Secondly, that the statistics involved go too far in softening the game. I've run some basic simulations, and the approximate statistics follow:

                  2 dice   (up to) 3 dice   (up to) 4 dice
        failure    ~33%       ~25%             ~20%
        partial    ~33%       ~35%             ~35%
        success    ~33%       ~40%             ~45%

    I'd appreciate any advice that addresses my concerns or offers to refine my simulation (right now the first roll is statistically modeled as sign(1d12-1d12), where 0 is a success).
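
    For anyone who wants to poke at the numbers, here is a self-contained Monte Carlo sketch of the mechanic in Java. The stats of 7, the roll-over direction, one bonus die per first-stage success, and the policy of assigning the highest die to the goal and the second highest to the danger are all assumptions to vary, not part of the design above:

        import java.util.Arrays;
        import java.util.Random;

        class DicePoolSim {
            static final Random RNG = new Random();

            // First-stage 1d12 check against a 2-11 stat, assumed roll-over;
            // rolling the stat exactly fails (r > stat is false when r == stat).
            static boolean statCheck(int stat) {
                return RNG.nextInt(12) + 1 > stat;
            }

            // 0 = failure (1-4), 1 = partial (5-8), 2 = success (9-12)
            static int band(int die) {
                return die <= 4 ? 0 : die <= 8 ? 1 : 2;
            }

            public static void main(String[] args) {
                int trials = 1_000_000;
                int[] goal = new int[3];
                int[] danger = new int[3];
                for (int t = 0; t < trials; t++) {
                    int pool = 2;                 // acting player always gets two dice
                    if (statCheck(7)) pool++;     // assumed: one bonus die per success
                    if (statCheck(7)) pool++;
                    int[] dice = new int[pool];
                    for (int i = 0; i < pool; i++) dice[i] = RNG.nextInt(12) + 1;
                    Arrays.sort(dice);
                    goal[band(dice[pool - 1])]++;    // highest die to the goal
                    danger[band(dice[pool - 2])]++;  // next highest to the danger
                }
                System.out.printf("goal:   fail %.3f  partial %.3f  success %.3f%n",
                        goal[0] / (double) trials, goal[1] / (double) trials,
                        goal[2] / (double) trials);
                System.out.printf("danger: fail %.3f  partial %.3f  success %.3f%n",
                        danger[0] / (double) trials, danger[1] / (double) trials,
                        danger[2] / (double) trials);
            }
        }

    Swapping in the single-bonus-die variant, or a different assignment policy, is a one-line change each, which makes it easy to compare the two first-stage reward schemes side by side.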

  • Should Developers Perform All Tasks or Should They Specialize?

    - by Bob Horn
    Disclaimer: The intent of this question isn't to discern what is better for the individual developer, but for the system as a whole.

    I've worked in environments where small teams managed certain areas. For example, there would be a small team for every one of these functions: UI, framework code, business/application logic, and database. I've also worked on teams where the developers were responsible for all of these areas and more (QA, analyst, etc.). My current environment promotes agile development (specifically Scrum), and everyone has their hands in every area mentioned above. While there are pros and cons to each approach, I'd be curious to know if there are more pros and cons than I list below, and also what the general feeling is about which approach is better.

    Devs do it all

    Pros:
    1. Developers may be more well-rounded
    2. Developers know more of the system

    Cons:
    1. Everyone has their hands in all areas, increasing the probability of creating less-than-optimal results in that area
    2. It can take longer to do something with which you are unfamiliar (jack of all trades, master of none)

    Devs specialize

    Pros:
    1. Developers can create policies and procedures for their area of expertise and more easily enforce them
    2. Developers have more of a chance to become deeply knowledgeable about their specific area and make it the best it can be
    3. Other developers don't cross boundaries and degrade another area

    Cons:
    1. As one colleague put it: "Why would you want to pigeon-hole yourself like that?" (Meaning some developers won't get a chance to work in certain areas.)

    It's easy to say how wonderful agile is, and that we should do it all, but I'm somewhat of a fan of having areas of expertise. Without that expertise, I've seen code degrade, database schemas become difficult to manage, hacked-up UI code, etc. Let's face it, some people make careers out of doing just UI work, or just database work. It's not that easy to just fill in and do as good a job as an expert in that area.

  • How to convince a client to switch to a framework *now*; also, examples of great, large-scale PHP applications

    - by cbrandolino
    Hi everybody. I'm about to start working on a very ambitious project that, in my opinion, has some great potential for what concerns the basic concept and the implementation ideas (implementation as in how these ideas will be implemented, not as in programming).

    The state of the code right now is unfortunately subpar: it's vanilla PHP, no framework, no separation between application and visualization logic. It's been done mostly by amateur students (I know great amateur/student programmers, don't get me wrong: this was not the case, though). The clients are really great, and they know the system won't scale and needs a redesign. The problem is, they would like to launch a beta ASAP and then think of rebuilding. Since just the basic functionalities are present now, I suggested it would be a great idea if we (we're a three-person shop, all very proficient) ported that code to some framework (we like CodeIgniter) before launching. We could reasonably do that in < 10 days.

    The problem is, they don't think PHP would be a valid long-term solution anyway, so they would prefer to just let it be, fix the bugs for now (there are quite a few), and then directly switch to some Ruby/Python-based system. Porting to CI now would make future improvements incredibly easier, the current code more secure, and changing the style - still being discussed with the designers - a breeze (reminder: there are database calls in template files right now). The biggest obstacle is the lack of trust in PHP as a valid, scalable technology.

    So, I need some examples of great PHP applications (apart from Facebook) and some suggestions on how to try to convince them to port soon. Again, they're great people - it's not like they want Ruby because it's so hot right now; they just don't trust PHP since us cool programmers like bashing it, I suppose. But I'm sure going on like this for even one more day would be a mistake. Also, we have some weight in the decision process.

  • You may be tempted by IaaS, but you should PaaS on that or your database cloud journey will be a short one

    - by B R Clouse
    Before we examine consolidation, the next step in the journey to cloud, let's take a short detour to address a critical choice you will face at the outset of your journey: whether to deploy your databases in virtual machines or not.

    A common misconception we've encountered is the belief that moving to cloud computing can be accomplished by simply hosting one's current operating environment as-is within virtual machines, and then stacking those VMs together in a consolidated environment. This solution is often described as "Infrastructure as a Service" (IaaS) because the building block for deployments is a VM, which behaves like a full complement of infrastructure. This approach is easy to understand and may feel like a good first step, but it won't take your databases very far in the journey to cloud computing. In fact, if you follow the IaaS fork in the road, your journey will end quickly, without realizing the full benefits of cloud computing.

    The better option is to rationalize the deployment stack so that VMs are needed only for exceptional cases. By settling on a standard operating system and patch level, you create an infrastructure that potentially all of your databases can share. Now the building block will be database instances, or possibly schemas within databases. These components are the platforms on which you will deploy workloads, hence this is known as "Platform as a Service" (PaaS).

    PaaS opens the door to higher degrees of consolidation than IaaS, because with PaaS you will not need to accommodate the footprint (operating system, hypervisor, processes, ...) that each VM brings with it. You will also reduce your maintenance overhead if you move forward without the VMs and their OSes to patch and monitor. So while IaaS simply shuffles complex and varied environments into VMs, PaaS actually reduces complexity by rationalizing to the smallest possible set of components. Now we're ready to look at the consolidation options that PaaS provides -- in our next blog posting.

  • How can I triple boot Xubuntu, Ubuntu and Windows?

    - by ag.restringere
    I'm an avid Xubuntu (Ubuntu + XFCE) user, but I also dual boot with Windows XP. I originally created 3 partitions and wanted to use the empty one as a storage volume, but now I want to install Ubuntu 12.04 LTS (the one with Unity) to do advanced testing and packaging. Ideally I would love to keep these two totally separate, as I had problems in the past with conflicts between Unity and XFCE. This way I could wipe the Ubuntu-with-Unity installation if there are problems and really mess around with it.

    My disk looks like this (/dev/sda1 is Windows XP; /dev/sda2 is the spare partition):

        Disk /dev/sda: 200.0 GB, 200049647616 bytes
        255 heads, 63 sectors/track, 24321 cylinders, total 390721968 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes

           Device Boot      Start        End     Blocks  Id  System
        /dev/sda1   *          63   78139454   39069696   7  HPFS/NTFS/exFAT
        /dev/sda2        78141440  156280831   39069696  83  Linux
        /dev/sda3       156282878  386533375  115125249   5  Extended
        /dev/sda4       386533376  390721535    2094080  82  Linux swap / Solaris
        /dev/sda5       156282880  386533375  115125248  83  Linux

    I want to keep each system in its own partition, totally separate, and be able to select each of the three from the GRUB boot menu:

        sda1      --- [Windows XP]
        sda2      --- [Ubuntu 12.04] "Unity"
        sda3(4,5) --- [Xubuntu 12.02] "Primary XFCE"

    What is the safest and easiest way to do this without messing my system up and requiring invasive activity?

  • Accessing live USB files from a new HD Ubuntu install

    - by Robin Bailey
    After my live USB (Ubuntu 12.04 LTS) refused to boot, I installed the same Ubuntu version on the laptop hard drive (a dual boot next to Windows XP). This all went well without a hitch. Before this, I spent several weeks enjoying and exploring Ubuntu from the USB pendrive. During this time I changed lots of settings and customized Firefox and more.

    Now I'd like to import the home folder from the USB drive into the new install's home folder on the hard disk, since to my knowledge the home folder is what holds all those special settings. Unfortunately, being familiar only with Windows file systems, I find the view of the USB drive's file system from the new hard-drive install totally perplexing. I can't find anything that looks anywhere close to the original file system. Worse, I can't find any of the files I had created and stored there, like the LibreOffice Calc file that has all my passwords (this one is really discouraging), which was stored on the Ubuntu desktop.

    Help me find this file alone and I'll bow down with full apologies to any and all computer gods. Being able to import all those customized settings into the new install would be a major bonus also, but hey, I'm not greedy. I'll take the passwords file and be happy! And humble! I would be very grateful for some clear, understandable help on this. Thanks.

  • JBox2D applyLinearImpulse doesn't work

    - by Romeo
    So I have this line of code:

        if (input.isKeyDown(Input.KEY_W) && canJump()) {
            body.applyLinearImpulse(new Vec2(0, 30), cam.screenToWorld(body.getPosition()));
            System.out.println("I can jump!");
        }

    My problem is that the console displays "I can jump!" but the body doesn't jump. Can you explain to me if I'm doing something wrong?

    Some more code. This function creates my 'hero', the one that is supposed to jump:

        private Body setDynamic(float width, float height, float x, float y) {
            PolygonShape shape = new PolygonShape();
            shape.setAsBox(width/2, height/2);
            BodyDef bd = new BodyDef();
            bd.allowSleep = true;
            bd.position = new Vec2(cam.screenToWorld(new Vec2(x + width / 2, y + height / 2)));
            bd.type = BodyType.DYNAMIC;
            bd.userData = new BodyInfo(width, height);
            Body body = world.createBody(bd);
            body.createFixture(shape, 10);
            return body;
        }

    And this is the main update loop:

        if (input.isKeyDown(Input.KEY_A)) {
            body.setLinearVelocity(new Vec2(-10*delta, body.getLinearVelocity().y));
        } else if (input.isKeyDown(Input.KEY_D)) {
            body.setLinearVelocity(new Vec2(10*delta, body.getLinearVelocity().y));
        } else {
            body.setLinearVelocity(new Vec2(0, body.getLinearVelocity().y));
        }
        if (input.isKeyDown(Input.KEY_W) && canJump()) {
            body.applyLinearImpulse(new Vec2(0, 30), body.getPosition());
            System.out.println("I can jump!");
        }
        world.step(delta * 0.001f, 10, 5);
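
    One avenue worth checking, offered strictly as an assumption rather than a confirmed diagnosis: in Box2D-style engines an impulse changes velocity by impulse/mass, so with a fixture density of 10 the body may simply be too heavy for (0, 30) to move it visibly. Scaling the impulse by the body's mass, and applying it at the world center so it adds no spin, gives a fixed takeoff speed regardless of body size (jumpSpeed is an invented tuning constant, not from the original code):

        // Hypothetical replacement for the jump line in the update loop above.
        float jumpSpeed = 7f;  // upward velocity to add, in world units per second
        Vec2 impulse = new Vec2(0, body.getMass() * jumpSpeed);
        body.applyLinearImpulse(impulse, body.getWorldCenter()); // centre of mass: no torque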

  • How can I refresh/reinstall/clear/set-to-default my bootup process?

    - by Tchalvak
    I'm currently having a problem with my boot process that is growing progressively worse as time goes on. While booting, the machine does a few minutes of hard-drive reading. During that, instead of showing a boot splash screen, it shows various dashes and dots, as if the video card isn't recognized. The garbled output actually has colors similar to the splash screen (purple); it is simply garbled. It then does a few minutes of hard-drive reads, and if I leave it long enough, it sometimes boots into the desktop (and auto-logs-in). Sometimes, unfortunately, it just hangs on that garbled screen and reads from the hard drive forever.

    Notably, I've also stopped being able to access GRUB during bootup (perhaps it is just not displayed correctly by the video; hard to tell).

    This is a symptom that has grown over the course of various Ubuntu upgrades; at least, I suspect that the upgrade process is leaving behind cruft. So, is there a safe way for me to "refresh" the boot system so that it is clean, new, fast, and reliable? For example, to test out a cleanly configured boot, make sure that it works (try before I buy), and then apply it to the system to eliminate as much of this problem as possible?

    Edit: Here is the requested bootchart: http://imgur.com/9jocF

  • How should I architect a personal schedule manager that runs 24/7?

    - by Crawford Comeaux
    I've developed an ADHD management system for myself that attempts to change multiple habits at once. I know this is counter to conventional wisdom, but I've tried the conventional for years and am now trying it my way. (Just wanted to say that to keep it from distracting people from the actual question.)

    Anyway, I'd like to write something to run on a remote server that monitors me, helps me build/avoid certain habits, etc. What this amounts to is a system that:

    - runs 24/7
    - may have multiple independent tasks to run at once
    - may have tasks that require other tasks to run first
    - lets tasks be scheduled by specific time, recurrence (i.e. "run every 5 mins"), or interval (i.e. "run from 2pm to 3pm")

    My first naive attempt at this was just a single PHP script scheduled to run every minute by cron (the language was chosen in order to use a certain library, but is no longer necessary). The logic behind when to run this or that portion of code got hairy pretty quickly. So my question is, how should I approach this from here? I'm not tied to any one language, though I'm partial to Python/JavaScript.

    Thoughts:

    - It could be done as a set of scripts that include a scheduling mechanism, with one script per bit of logic... but the idea just feels wrong to me.
    - Building it as a daemon could be helpful, but I'm still unsure what to do about dozens of if-else statements for detecting the current time.
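
    Since no language is fixed here, the following is a minimal sketch of the daemon idea in Java, where a standard-library scheduler replaces the hand-rolled if-else time checks: one scheduled job per recurrence rule, with interval windows checked inside the task. Task dependencies are left out; the simplest route there is to have a finishing task submit its dependents. The task bodies and timings are placeholders:

        import java.time.LocalTime;
        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;

        class HabitScheduler {
            public static void main(String[] args) {
                // Worker pool: multiple independent tasks may run at once.
                ScheduledExecutorService exec = Executors.newScheduledThreadPool(4);

                // Recurrence: "run every 5 mins", starting immediately.
                exec.scheduleAtFixedRate(
                        () -> System.out.println("check in " + LocalTime.now()),
                        0, 5, TimeUnit.MINUTES);

                // Interval: "run from 2pm to 3pm" -- poll once a minute and
                // act only inside the window.
                exec.scheduleAtFixedRate(() -> {
                    if (LocalTime.now().getHour() == 14) {
                        System.out.println("afternoon block " + LocalTime.now());
                    }
                }, 0, 1, TimeUnit.MINUTES);

                // The pool's threads are non-daemon, so the process stays up 24/7.
            }
        }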

  • Thoughts on exception handling.

    - by AndyScott
    I was working on a Windows Forms app (something I haven't done in a while), adding threading and logging so that it would work a little more smoothly and have a record of who did what. I was just about at the point where I was going to check it into source control when I noticed that the Output window was showing "A first chance exception of type 'System.InvalidCastException' occurred in mscorlib.dll", so I googled it. In reading some threads about the error, I came across the following comment, and it got me thinking:

        "In addition, while they should be avoided if possible, exceptions are a quite legitimate part of program execution. It's their going unhandled that is a real issue, because that means crashy, crashy."

    How do you normally use exception handling? I feel that exceptions are intended to handle errors in code (in my experience generally related to bad data making its way into the system). Now don't get me wrong, I understand that exceptions happen and should be dealt with, but I feel that they are a "last resort" to keep a program from crashing, and should never be a way to pass data or continue logical processing that could be handled in standard code flow. I mention this because I have seen it done. What do you think?
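
    To make the distinction concrete, here is a small illustration of my own (in Java rather than .NET; none of it is from the app above). Expected bad input is handled in the normal flow, while exceptions are reserved for failures the normal flow genuinely cannot absorb:

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;

        class ExceptionStyle {
            // Expected case: bad user input is routine, so it is handled in
            // normal control flow instead of by throwing.
            static Integer parseQuantity(String raw) {
                if (raw == null || !raw.matches("\\d{1,9}")) {
                    return null; // caller re-prompts; no exception involved
                }
                return Integer.valueOf(raw);
            }

            // Exceptional case: a missing or unreadable required file is a
            // genuine failure, so the exception propagates to a caller that
            // can actually handle it.
            static String loadConfig(Path path) throws IOException {
                return Files.readString(path); // Java 11+
            }

            public static void main(String[] args) {
                System.out.println(parseQuantity("42"));  // 42
                System.out.println(parseQuantity("4x2")); // null
            }
        }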

  • Can't install the wireless driver in HP Pavilion dv2419us

    - by maqtanim
    I've just installed Ubuntu 13.04 on an old HP Pavilion dv2419us. The problem is, Ubuntu doesn't detect the wireless card, but it works fine in Windows 7. The following command returns nothing!

        lspci -vvnn | grep 14e4

    And the lspci output is:

        00:00.1 RAM memory: NVIDIA Corporation C51 Memory Controller 0 (rev a2)
        00:00.2 RAM memory: NVIDIA Corporation C51 Memory Controller 1 (rev a2)
        00:00.3 RAM memory: NVIDIA Corporation C51 Memory Controller 5 (rev a2)
        00:00.4 RAM memory: NVIDIA Corporation C51 Memory Controller 4 (rev a2)
        00:00.5 RAM memory: NVIDIA Corporation C51 Host Bridge (rev a2)
        00:00.6 RAM memory: NVIDIA Corporation C51 Memory Controller 3 (rev a2)
        00:00.7 RAM memory: NVIDIA Corporation C51 Memory Controller 2 (rev a2)
        00:02.0 PCI bridge: NVIDIA Corporation C51 PCI Express Bridge (rev a1)
        00:03.0 PCI bridge: NVIDIA Corporation C51 PCI Express Bridge (rev a1)
        00:05.0 VGA compatible controller: NVIDIA Corporation C51 [GeForce Go 6150] (rev a2)
        00:09.0 RAM memory: NVIDIA Corporation MCP51 Host Bridge (rev a2)
        00:0a.0 ISA bridge: NVIDIA Corporation MCP51 LPC Bridge (rev a3)
        00:0a.1 SMBus: NVIDIA Corporation MCP51 SMBus (rev a3)
        00:0a.3 Co-processor: NVIDIA Corporation MCP51 PMU (rev a3)
        00:0b.0 USB controller: NVIDIA Corporation MCP51 USB Controller (rev a3)
        00:0b.1 USB controller: NVIDIA Corporation MCP51 USB Controller (rev a3)
        00:0d.0 IDE interface: NVIDIA Corporation MCP51 IDE (rev f1)
        00:0e.0 IDE interface: NVIDIA Corporation MCP51 Serial ATA Controller (rev f1)
        00:10.0 PCI bridge: NVIDIA Corporation MCP51 PCI Bridge (rev a2)
        00:10.1 Audio device: NVIDIA Corporation MCP51 High Definition Audio (rev a2)
        00:14.0 Bridge: NVIDIA Corporation MCP51 Ethernet Controller (rev a3)
        00:18.0 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] HyperTransport Technology Configuration
        00:18.1 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Address Map
        00:18.2 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] DRAM Controller
        00:18.3 Host bridge: Advanced Micro Devices [AMD] K8 [Athlon64/Opteron] Miscellaneous Control
        05:09.0 FireWire (IEEE 1394): Ricoh Co Ltd R5C832 IEEE 1394 Controller
        05:09.1 SD Host controller: Ricoh Co Ltd R5C822 SD/SDIO/MMC/MS/MSPro Host Adapter (rev 19)
        05:09.2 System peripheral: Ricoh Co Ltd R5C592 Memory Stick Bus Host Adapter (rev 0a)
        05:09.3 System peripheral: Ricoh Co Ltd xD-Picture Card Controller (rev 05)

    The command lspci -nn | grep 0280 gives no output. Any suggestion regarding this?

  • Why does my root filesystem keep becoming read-only?

    - by Scott Severance
    I've lately been having an issue with my root filesystem becoming read-only. It happens some amount of time after boot. I don't know exactly when it happens, as I don't usually notice it until something such as suspending the computer or printing fails. It seems to be fairly random. Since most of my system is on that partition, I can't re-mount it without rebooting. After this happens, the system runs a fsck. Sometimes it prompts to fix problems; other times it apparently finds none.

    To troubleshoot, I've searched through the logs but found nothing relevant. This might be due in part to not knowing when the actual errors took place. The filesystem is apparently good to begin with, as when fsck runs its fixes it doesn't report any errors. I've scanned the disk with SpinRite. A while ago, SpinRite found and recovered from some bad sectors on the hard drive. I ran a level 4 scan (a thorough scan) after this problem appeared, but SpinRite found nothing. The SMART data reports that the disk is OK, with 63 bad sectors. The number of bad sectors hasn't changed recently.

    I realize that the disk isn't in the best of conditions, and I have complete backups in case of catastrophic failure. Yet the lack of errors in the logs, combined with SpinRite's test results and the unchanged SMART data, makes me think that this problem has some cause other than disk failure. Other than disk failure, what could cause my symptoms?

  • Oracle NoSQL Database: Cleaner Performance

    - by Charles Lamb
    In an earlier post I noted that Berkeley DB Java Edition cleaner performance had improved significantly in release 5.x. From an Oracle NoSQL Database point of view, this is important because Berkeley DB Java Edition is the core storage engine for Oracle NoSQL Database.

    Many contemporary NoSQL databases utilize log-based (i.e. append-only) storage systems, and it is well understood that these architectures also require a "cleaning" or "compaction" mechanism (effectively a garbage collector) to free up unused space. Ten years ago, when we set out to write a new Berkeley DB storage architecture for the BDB Java Edition ("JE"), we knew that the corresponding compaction mechanism would take years to perfect. "Cleaning", or GC, is a hard problem to solve, and it has taken all of those years of experience, bug fixes, tuning exercises, user deployment, and user feedback to bring it to the mature point it is at today. Reports like Vinoth Chandar's, where he observes a 20x improvement, validate the maturity of JE's cleaner.

    Cleaner performance has a direct impact on predictability and throughput in Oracle NoSQL Database. A cleaner that is too aggressive will consume too many resources and negatively affect system throughput. A cleaner that is not aggressive enough will allow the disk storage to become inefficient over time. It has to:

    - work well out of the box, and
    - be configurable so that customers can tune it for their specific workloads and requirements.

    The JE cleaner has been field-tested in production for many years, managing instances with hundreds of GBs to TBs of data. The maturity of the cleaner and the entire underlying JE storage system is one of the key advantages that Oracle NoSQL Database brings to the table -- we haven't had to reinvent the wheel.

  • Network print to Brother MFC-7420

    - by trampster
    I am trying to print to a Brother MFC-7420 from my Ubuntu 10.04 machine. The Brother is attached to a Windows XP machine and is shared. This is what I have tried: System > Administration > Printing, Add, expand Network Printer, Windows Printer via SAMBA, Browse (I can find the printer no problem here), Forward, Choose Driver dialog, Brother - but my printer is not in this list.

    So the next thing I tried was to download the printer driver from http://welcome.solutions.brother.com/bsc/public_s/id/linux/en/download_prn.html. The driver installed fine, but my printer still does not appear in the list. I also tried installing the CUPS wrapper, but that gave the following error:

        Restarting Common Unix Printing System: cupsd        [ OK ]
        cp: cannot stat `/usr/share/cups/model/MFC7420.ppd': No such file or directory
        dpkg: error processing cupswrappermfc7420 (--install):
         subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
         cupswrappermfc7420

    I tried connecting the printer directly, but even though I have installed the driver, when I go to Printers and click on the printer (it shows up fine as a USB printer), it says it is searching for drivers and then gives me a list - the same list as before, which doesn't have my printer.

    It really shouldn't be this hard. On Windows you don't have to install anything, it just works, and the same is true for my brother's Mac. How do I print to my printer?

  • Ubuntu 10.04 hung on unattended apt-get upgrade?

    - by hafichuk
    I'm looking into why our Ubuntu web server hung this morning, and I see that there were some package upgrades a few hours prior to the problem. I was able to ssh into the system and get a snapshot from top:

        top - 08:13:54 up 210 days, 8:25, 2 users, load average: 433.30, 422.40, 375.70
        Tasks: 1192 total, 381 running, 810 sleeping, 0 stopped, 1 zombie
        Cpu(s): 0.5%us, 6.1%sy, 0.0%ni, 93.4%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem:  49549772k total, 48518392k used, 1031380k free, 960152k buffers
        Swap: 11595768k total, 279368k used, 11316400k free, 39355664k cached

    This is a 16-processor system, so I would typically expect a load in the low teens. I tried to restart Apache, which didn't work, and subsequently had to do a hard reboot to get it working again (which it is). One thing I found was that the server did an unattended package update. Is it possible that upgrading PHP or curl (which our web sites use) might have caused Apache to stop responding? Here is the snippet from unattended-upgrades.log from this morning:

        2012-09-18 06:48:30,076 INFO Initial blacklisted packages:
        2012-09-18 06:48:30,076 INFO Starting unattended upgrades script
        2012-09-18 06:48:30,077 INFO Allowed origins are: ["['Ubuntu', 'lucid-security']"]
        2012-09-18 06:49:37,017 INFO Packages that are upgraded: gnupg-curl php5-dev linux-server dhcp3-common linux-libc-dev php5-curl gpgv gnupg linux-headers-server linux-image-server php5 php5-mysql php-pear php5-cli php5-common libapache2-mod-php5 dhcp3-client
        2012-09-18 06:49:37,018 INFO Writing dpkg log to '/var/log/unattended-upgrades/unattended-upgrades-dpkg_2012-09-18_06:49:37.017909.log'

  • Laptop keyboard and touchpad disabled on startup

    - by JAM
    I use Ubuntu 14.04 LTS on my Toshiba Satellite L775D laptop; 14.04 is the only operating system installed. I am new to Linux and only barely scratching the surface of doing things in the terminal.

    When I boot, my laptop's keyboard and touchpad are disabled (99.99% of the time) if I do nothing. The only direct effect I can have is to keep pressing the Num Lock key during boot when I notice the Num Lock light go off. If I do this, then I have a 95% chance of the keyboard and touchpad working once I am in the operating system. I am able to use my wireless mouse regardless. I have not seen any messages during boot.

    Previously I have tried playing with input method settings and utilities, as well as language support settings. This same problem existed with the 12.x and 13.x versions of Ubuntu. With everything I have tried (from looking at other posts/suggestions), it seems I can have only a temporary effect. Please help me find a permanent solution to this problem. Thank you.

  • Installed Ubuntu over Windows Vista, can't reinstall Windows

    - by Marcuz J Hinojoz
    I recently used the "compress hard drive" option within Windows and got the horrid "BOOTMGR is compressed" error after the restart. I tried booting my system back into Windows Vista, but it doesn't read the CD that came with my computer. I tried going into system recovery and restoring to a previous date, but that didn't work. I kept pressing F8, but nothing. So I installed Ubuntu (the Ubuntu CD worked, but the Windows one didn't?) just so I could at least get into my computer, and I still wasn't able to install Windows from there. My hard drive got reformatted to ext4, and Windows can't install because it doesn't read it? I'm not sure, but it's very frustrating.

    My computer is a Gateway GT5668E running Windows Vista Home Premium with SP1. I'm a graphic designer and use programs such as Photoshop and Cinema 4D for my projects. I have been at an unfortunate halt with my work, and I am really bummed out and don't know what to do... any help?
