Search Results

Search found 25447 results on 1018 pages for 'chester is back'.


  • Of transactions and Mongo

    - by Nuri Halperin
    Originally posted on: http://geekswithblogs.net/nuri/archive/2014/05/20/of-transactions-and-mongo-again.aspx
    What's the first thing you hear about NoSQL databases? That they lose your data? That there are no transactions? No joins? No hope for "real" applications? Well, you *should* be wondering whether a certain type of database is the right one for your job. But if you do so, you should be wondering that about "traditional" databases as well! In the spirit of exploration let's take a look at a common challenge: You are a bank. You have customers with accounts. Customer A wants to pay B. You want to allow that only if A can cover the amount being transferred. Let's look at the problem without the context of any particular database engine in mind. What would you do? How would you ensure that the transfer is done "properly"? Would you prevent a "transaction" from taking place unless A can cover the amount? There are several options: 1. Prevent any change to A's account while the transfer is taking place. That boils down to locking. 2. Apply the change, and allow A's balance to go below zero. Charge person A some interest on the negative balance. Not friendly, but certainly a choice. 3. Do neither. Options 1 and 2 are difficult to attain in the NoSQL world. Mongo won't save you headaches here either. Option 3 looks a bit harsh. But here's where this can go: a ledger. See, an account doesn't need to be represented by a single row in a table of all accounts with only the current balance on it. More often than not, accounting systems use ledgers. And entries in ledgers - as it turns out - don't actually get updated. Once a ledger entry is written, it is not removed or altered. A transaction is represented by an entry in the ledger stating an amount withdrawn from A's account and an entry in the ledger stating an addition of said amount to B's account. To save space, that can be captured in a single entry. Think {Timestamp, FromAccountId, ToAccountId, Amount}. The original question - "how do you enforce a non-negative balance rule?" - then boils down to: insert an entry in the ledger; run validation over the recent entries; insert a reverse entry to roll back the transaction if validation failed. What is validation? Sum up the transactions that A's account has (all deposits and debits), and ensure the balance is positive. For the sake of efficiency, one can roll up transactions and "close the book" with a pseudo entry stating the balance as of midnight or something. This lets you avoid doing math on the fly over too many transactions: you simply run from the latest "approved balance" marker to date. But that's an optimization, and premature optimizations are the root of (some? most?) evil. Back to some nagging questions though: "But Mongo is only eventually consistent!" Well, yes, kind of. It's not actually true that Mongo has no transactions. It would be more accurate to say that Mongo's transaction scope is a single document in a single collection. A write to a Mongo document happens completely or not at all. So although it is true that you can't update more than one document "at the same time" under a "transaction" umbrella as an atomic update, it is NOT true that there is no isolation. A competition between two concurrent updates is completely coherent and the writes will be serialized. They will not scribble on the same document at the same time.
    In our case - in choosing a ledger approach - we're not even trying to "update" a document; we're simply adding a document to a collection. So there goes the "no transactions" issue. Now let's turn our attention to consistency. What you should know about Mongo is that at any given moment, only one member of a replica set is writable. This means that the writable instance in a set of replicated instances always has "the truth". There could be a replication lag such that a reader going to one of the replicas still sees the "old" state of a collection or document. But in our ledger case, things fall nicely into place: run your validation against the writable instance. It is guaranteed to have a ledger either with (after) or without (before) the ledger entry having been written. No funky states. Again, the ledger writing *adds* a document, so there's no inconsistent document state to be had either way. Next, we might worry about data loss. Here, Mongo offers several write concerns. A write concern in Mongo is a mode that marshals how uptight you want the db engine to be about actually persisting a document write to disk before it reports to the application that it is "done". The most volatile is to say you don't care. In that case, Mongo would just accept your write command and say back "thanks" with no guarantee of persistence. If the server loses power at the wrong moment, it may have said "ok" but not actually written the data to disk. That's kind of bad. Don't do that with data you care about. It may be good for votes in a poll about how cute a furry animal is, but not so good for business. There are several other write concerns, varying from flushing the write to the disk of the writable instance, to flushing to disk on several members of the replica set, a majority of the replica set, or all of the members of a replica set. The first choice is the quickest, as no network coordination is required beyond the main writable instance. The others impose extra network and time cost. Depending on your tolerance for latency and read lag, you will face a choice of what works for you. It's really important to understand that no data loss occurs once a document is flushed to an instance. The record is on disk at that point. From that point on, backup strategies and disaster recovery are your worry, not loss of power to the writable machine. This scenario is no different from a relational database at that point. Where does this leave us? Oh, yes. Eventual consistency. By now, we have ensured that the "source of truth" instance has the correct data, persisted and coherent. But because of lag, the app may have gone to the writable instance, performed the update, and then gone to a replica and looked at the ledger there before the transaction replicated. Here are two options to deal with this. Similar to write concerns, Mongo supports read preferences. An app may choose to read only from the writable instance. This is not an awesome choice to make for every read, because it just burdens the one instance and doesn't make use of the other read-only servers. But this choice can be made on a query-by-query basis. So for the app that our person A is using, we can have person A issue the transfer command to B, and then if that same app is going to immediately ask "are we there yet?", we'll query that same writable instance. But B and anyone else in the world can just chill and read from a read-only instance. They have no basis to expect that the ledger has just been written to.
    So as far as they know, the transaction hasn't happened until they see it appear later. We can further relax the demand by creating an application UI that reacts to a write command with "thank you, we will post it shortly" instead of "thank you, we just did everything and here's the new balance". This is a very powerful thing. UI design for highly scalable systems can't insist that all databases be locked just to paint an "all done" on screen. People understand. They were trained by many online businesses already that placing an order does not mean that your product is already outside your door waiting (yes, I know, large retailers are working on it... but we're not there yet). The second thing we can do is add some artificial delay to a transaction's visibility on the ledger. The way that works is simply adding some logic such that the query against the ledger never returns a transaction to customers that is newer than, say, 15 minutes and whose validation flag is not set. This buys us time in two ways: replication can catch up to all instances by then, and validation rules can run and determine whether this transaction should be "negated" with a compensating transaction. In case we do need to "roll back" the transaction, the back-end system can place the timestamp of the compensating transaction at the exact same time or 1ms after the original one. Effectively, once A or B visits their ledger, both transactions would be visible and the overall balance "as of now" would reflect no change. The two transactions (attempted/reverted) would be visible, since we do actually account for the attempt. Hold on a second. There's a hole in the story: what if several transfers from A to some accounts are registered, and two independent validators attempt to compute the balance concurrently? Is there a chance that both would conclude non-sufficient-funds even though rolling back transaction 100 would free up enough for transaction 117 (some random later transaction)? Yes, there is that chance. But the integrity of the business rule is not compromised, since the prime rule is: don't dispense money you don't have. To minimize or eliminate this scenario, we can also assign a single validation process per origin account. This may seem non-scalable, but it can easily be done as a "sharded" distribution. Say we have 11 validation threads (or processing nodes, etc.). We divide the account number space such that each validator is exclusively responsible for a certain range of account numbers. Sounds cunningly similar to Mongo's sharding strategy, doesn't it? Each validator then works in isolation. More capacity needed? Chop the account space into more chunks. So where are we now with the nagging questions? "No joins": Huh? What are those for? "No transactions": You mean no cross-collection and no cross-document transactions? Granted - but you don't always need them either. "No hope for real applications": well... There are more issues and edge cases to slog through, I'm sure. But hopefully this gives you some ideas of how to solve common problems without distributed locking and relational databases. But then again, you can choose relational databases if they suit your problem.
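
    To make the flow above concrete, here is a minimal C# sketch of the ledger approach using the official MongoDB .NET driver (MongoDB.Driver). The collection name, field names, and the simple sum-based validation are illustrative assumptions rather than the author's actual implementation; the write concern and read preference are set explicitly only to mirror the choices discussed in the post.

        using System;
        using System.Linq;
        using MongoDB.Driver;

        // One document per transfer; ledger entries are never updated in place.
        public class LedgerEntry
        {
            public DateTime Timestamp { get; set; }
            public string FromAccountId { get; set; }
            public string ToAccountId { get; set; }
            public decimal Amount { get; set; }   // positive = money moving From -> To
        }

        public class LedgerService
        {
            private readonly IMongoCollection<LedgerEntry> _ledger;

            public LedgerService(IMongoDatabase db)
            {
                // Acknowledge writes on a majority of the replica set and
                // validate against the primary (the writable instance).
                _ledger = db.GetCollection<LedgerEntry>("ledger")
                            .WithWriteConcern(WriteConcern.WMajority)
                            .WithReadPreference(ReadPreference.Primary);
            }

            public void Transfer(string fromId, string toId, decimal amount)
            {
                // 1. Insert the ledger entry (a single-document, atomic write).
                var entry = new LedgerEntry
                {
                    Timestamp = DateTime.UtcNow,
                    FromAccountId = fromId,
                    ToAccountId = toId,
                    Amount = amount
                };
                _ledger.InsertOne(entry);

                // 2. Validate: sum everything in and out of the source account.
                if (Balance(fromId) < 0)
                {
                    // 3. Compensating entry: reverse the transfer rather than deleting it.
                    _ledger.InsertOne(new LedgerEntry
                    {
                        Timestamp = entry.Timestamp,   // same instant, so both appear together
                        FromAccountId = toId,
                        ToAccountId = fromId,
                        Amount = amount
                    });
                }
            }

            private decimal Balance(string accountId)
            {
                var entries = _ledger.Find(e => e.FromAccountId == accountId
                                             || e.ToAccountId == accountId)
                                     .ToList();
                return entries.Sum(e => e.ToAccountId == accountId ? e.Amount : -e.Amount);
            }
        }

    Because the compensating entry carries the same timestamp as the original, a reader who sees one sees both, which is exactly the "no net change" behaviour described above.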

    Read the article

  • Multiple Monitors using nvidia-prime or bumblebee on Ubuntu 13.10

    - by user205626
    I've been unable to get multiple monitors to work with Ubuntu 13.10 using nvidia-prime or bumblebee. Could someone point me in the right direction? With nvidia-prime, I've tried the xorg.conf here http://us.download.nvidia.com/XFree86/Linux-x86/319.12/README/randr14.html, but I boot into "low graphics" mode and have to revert to get a desktop back. Any suggestions would be appreciated. Thanks. Edit: I've given up on nvidia-prime; I missed the fact that it never turns off the discrete card... So, I'm back to trying to get VIRTUAL displays working with Bumblebee.

    Read the article

  • How necessary is it to learn JavaScript before jQuery?

    - by benhowdle89
    When I first looked at JavaScript, it didn't seem like my cup of tea. When I came across jQuery, I loved it. I sat and watched Nettuts+'s 15 days of jQuery screencasts; one year later, I'm fairly confident I wouldn't develop a website without including the jQuery library. I have never felt this has held me back, but my question is: will the fact that I didn't have a solid JavaScript foundation before jumping feet first into one of its best (if not the best) frameworks come back and bite me in the ass one day? Did anyone else take this approach?

    Read the article

  • Can not get to login screen, background starts with terminal prompt only

    - by Doug
    My uncle has Ubuntu on his work PC. Basically I came in to work today, and he had lost his Unity side bar. I told him to start with just rebooting it. He rebooted it... now it does not even get to the login screen. It gets to the background with the word UBUNTU and the 6 or 7 dots, does its little loading-dot thing... then stops, and a black terminal opens in the top left with the background still in place. Personally, I think he screwed it up himself. He always swears he didn't touch anything, but I know better... Either way, I can't get him back into the desktop to even see if the sidebar is back. He's always screwing around pressing the wrong buttons on the login screen, hitting admin things and such... Any ideas?

    Read the article

  • Why should I not do a masters degree

    - by aurel
    I left university in July 2010, where I studied web design (as we all know, you learn more by yourself, but that's not the issue at the moment). Since then I have not managed to find a job (apart from one month of work experience). From the way things are going, and taking into account the fact that all my university friends are in the same situation, I don't think I am going to find a job (within the industry) soon. Now, even though I don't have a job, I am still working on personal projects and trying to keep up to date (I don't need a job or uni to do this) - but I am thinking: because there is no work available, would it be worth going back to uni for a master's degree? I know I don't need it and I know it is unlikely that I will learn anything important, as I believe in self-learning, and in most cases it is a lot more effective (but I have to say I don't mind going back to school). The only reason I am thinking of doing the master's is (and this is where I need your help): if it takes me a year to get a job, then at the interview, would the employer think "what the hell did this guy do since he left university"? If I go to university, that would solve this problem. Or am I making up a problem that does not exist? Plus, I know that employers need examples of sites that I have been working on, and at the moment I only have 3 (when working on personal projects with no time limit, I tend to drag things out in order to get them perfect, and they never get perfect) - so by going back to uni, this problem may be solved. I say all this because I have read a lot about the fact that you don't need a master's degree to work in the web design market (and I totally agree), but considering my concerns, the question is: should I do a master's course to avoid just spending hours in my room working and learning on my own (given that it would be hard to convince employers that I was really learning in my room)? Maybe it's because I'm still young (age 22, not that old anyway :)), but I don't have the "dream" of being rich, so to tell the truth I don't really care that I don't have a job at the moment, because regardless, I am working on what I love every day. But I know that in the future, when I do need a job, I may find it harder to get one if I neglect this now. Every time I ask a question that I'm not sure about, I keep going on and on, but I really hope you get what I am trying to get across. By the way, the course that I am looking at for a master's says that it would teach me how to do these: e-commerce, e-government, e-science, e-learning. I don't know any of them apart from e-commerce. Thanks

    Read the article

  • What is the application that displays notifications in Lubuntu 12.04 and how to remove it?

    - by cipricus
    I get notifications all the time in the upper right corner, and as a tray icon, for all kinds of actions, especially downloads. What app is that in Lubuntu? (It is not notify-osd.) Is it possible to set which notifications to see? If not, how can I remove this completely? Update edit: Removing notification-daemon (which, it seems, is the culprit, as an answer suggests) involves removing some important applications: lubuntu-desktop is a metapackage, I guess. But what about removing deluge and update-notifier? Can I add these back without getting back the notifier? I do not want to remove these two in any case.

    Read the article

  • As a tooling/automation developer, can I be making better use of OOP?

    - by Tom Pickles
    My time as a developer (~8 yrs) has been spent creating tooling/automation of one sort or another. The tools I develop usually interface with one or more APIs. These APIs could be win32, WMI, VMWare, a help-desk application, LDAP, you get the picture. The apps I develop could be just to pull back data and store/report it. It could be to provision groups of VMs to create live-like mock environments, update a trouble ticket, etc. I've been developing in .NET and I'm currently reading into design patterns and trying to think about how I can improve my skills to make better use of, and increase my understanding of, OOP. For example, I've never used an interface of my own making in anger (which is probably not a good thing), because I honestly cannot identify where using one would benefit later on when modifying my code. My classes are usually very specific and I don't create similar classes with similar properties/methods which could use a common interface (like perhaps a car dealership or shop application might). I generally use an n-tier approach to my apps, having a presentation layer and a business logic/manager layer which interfaces with layer(s) that make calls to the APIs I'm working with. My business entities are always just method-less container objects, which I populate with data and pass back and forth to my API-interfacing layer, using static methods to proxy/validate between the front and the back end. My code, by the nature of my work, has few common components, at least from what I can see. So I'm struggling to see how I can make better use of OOP design and perhaps reusable patterns. Am I right to be concerned that I could be smarter about how I work, or is what I'm doing now right for my line of work? Or am I missing something fundamental in OOP? EDIT: Here is some basic code to show how my manager and API-facing layers work. I use static classes as they do not persist any data, only facilitate moving it between layers.

        public static class MgrClass
        {
            public static bool PowerOnVM(string VMName)
            {
                // Perform logic to validate or apply biz logic
                // call APIClass to do the work
                return APIClass.PowerOnVM(VMName);
            }
        }

        public static class APIClass
        {
            public static bool PowerOnVM(string VMName)
            {
                // Calls to 3rd party API to power on a virtual machine
                // returns true or false if it was successful, for example
                return true;
            }
        }
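
    One place an interface of the asker's own making could pay off is between the manager and API-facing layers: put the API calls behind an interface so the manager depends on an abstraction and can be exercised against a fake in tests. The sketch below is only an illustration; IVirtualMachineApi, VMWareApi, VMManager and the validation inside them are invented names, not part of the original code.

        // Hypothetical interface extracted from the static APIClass above.
        public interface IVirtualMachineApi
        {
            bool PowerOnVM(string vmName);
        }

        // Production implementation: wraps the real third-party API calls.
        public class VMWareApi : IVirtualMachineApi
        {
            public bool PowerOnVM(string vmName)
            {
                // Calls to the 3rd party API to power on a virtual machine would live here.
                return true;
            }
        }

        // The manager takes the dependency in its constructor instead of calling a
        // static class, so the business logic can be tested without a hypervisor.
        public class VMManager
        {
            private readonly IVirtualMachineApi _api;

            public VMManager(IVirtualMachineApi api)
            {
                _api = api;
            }

            public bool PowerOnVM(string vmName)
            {
                if (string.IsNullOrWhiteSpace(vmName))
                    return false;                // example of validation/biz logic

                return _api.PowerOnVM(vmName);
            }
        }

        // A throwaway fake for unit tests.
        public class FakeVMApi : IVirtualMachineApi
        {
            public bool PowerOnVM(string vmName) { return true; }
        }

    The layering stays the same; the only change is that the seam between layers is an interface rather than a static class, which is where the reuse and testability benefits of OOP tend to show up.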

    Read the article

  • Ubuntu 12.04. Compiz Failure. Computer has nothing to use

    - by Teddy
    I updated to Ubuntu 12.04 with Compiz... now Compiz is failing and I have no toolbars or ways of getting to applications, except for Firefox via a link presented when sending an error report. How am I supposed to fix this mess with no navigation to get to the terminal or Software Center? When will this problem be fixed? When I updated the system, why didn't it change me back to the crappy Unity interface (no offense)? If it can't be fixed, how do I get my files back?

    Read the article

  • The Internet from a 1990s Point of View [Video]

    - by Asian Angel
    Are you ready for a retro look at the Internet? Then prepare to journey back in time to 1995 with this video and its view of the early days of the Internet. From YouTube: Trine Gallegos hosts this segment, shot in 1995 when the Internet was first becoming an icon. This is an interesting look back at how clunky the applications were. I don't even think they were using a computer mouse yet. Internet - from the 1990s point of view [via Fail Desk]

    Read the article

  • Easiest Way To Implement "Slow Motion" and variable game speed in XNA?

    - by TerryB
    I have an XNA 4.0 game that I want to be able to switch into slow motion and back again to full speed every now and then. So if you kill an enemy, the game switches into slow motion as they explode and then goes back to normal. What is the easiest way to do this in XNA 4.0 without having to alter all my existing code that relies on GameTime? I have some code that relies on TotalGameTime, which will be wrong unless I get XNA to slow down. Is there any way to avoid refactoring that code? Thanks!
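
    A common way to do this with minimal refactoring is to scale elapsed time yourself, keep your own running total, and pass the scaled GameTime to the code that currently receives the real one. The sketch below is a rough illustration for XNA 4.0; GameClock and its members are invented names, and anything reading the real TotalGameTime directly would still need to be pointed at the scaled clock instead.

        using System;
        using Microsoft.Xna.Framework;

        // Keeps a scaled notion of time. Hand ScaledTime to the code that currently
        // receives GameTime, so ElapsedGameTime and TotalGameTime slow down together.
        public class GameClock
        {
            private TimeSpan _scaledTotal;

            // 1.0f = normal speed, 0.25f = slow motion, 2.0f = fast forward.
            public float TimeScale { get; set; }

            public GameTime ScaledTime { get; private set; }

            public GameClock()
            {
                _scaledTotal = TimeSpan.Zero;
                TimeScale = 1.0f;
                ScaledTime = new GameTime();
            }

            // Call once per frame from Game.Update with the real GameTime.
            public void Update(GameTime realTime)
            {
                TimeSpan scaledElapsed = TimeSpan.FromTicks(
                    (long)(realTime.ElapsedGameTime.Ticks * TimeScale));

                _scaledTotal += scaledElapsed;

                // XNA 4.0's GameTime has a (totalGameTime, elapsedGameTime) constructor.
                ScaledTime = new GameTime(_scaledTotal, scaledElapsed);
            }
        }

    In Game.Update you would call clock.Update(gameTime), pass clock.ScaledTime down instead of gameTime, set TimeScale to something like 0.25f while the explosion plays, and set it back to 1.0f afterwards.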

    Read the article

  • Why would you dual-run an app on Azure and AWS?

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/11/10/why-would-you-dual-run-an-app-on-azure-and-aws.aspxI had this question from a viewer of my Pluralsight course, Implementing the Reactive Manifesto with Azure and AWS, and thought I’d publish the response. So why would you dual-run your cloud app by hosting it on Azure and AWS? Sounds like a lot of extra development and management overhead. Well the most compelling reasons are reliability and portability. In 2012 I was working for a client who was making a big investment in the cloud, and at the end of the year we published their first external API for business partners. It was hosted in Azure and used some really nice features to route back into existing on-premise services. We were able to publish a clean, simple API to partners, and hide away the underlying complexity of the internal services while still leveraging them to do all the work. Two days after we went live, we were hit by the Azure SSL certificate expiry outage, and our API was unavailable for the best part of 3 days. Fortunately we had planned a gradual roll-out to partners, so the impact was minimal, but we’d been intending to ramp up quickly, and if the outage had happened a week or two later we would have been in a very bad place. Not least because our app could only run on Azure, we couldn’t package it up for another service without going back and reworking the code. More recently AWS had an issue with a networking device in one of their data centres which caused an outage that took the best part of a day to resolve. In both scenarios the SLAs are worthless, as you’ll get back a small percentage of your cloud expenditure, which is going to be negligible compared to your costs in dealing with the outage. And if your app is built specifically for AWS or Azure then if there’s an extended outage you can’t just deploy it onto a new set of kit from a different supplier. And the chances are pretty good there will be another extended outage, both for Microsoft and for Amazon. But the chances are small that it will happen to both at the same time. So my basic guidance has been: ignore the SLAs, go for better uptime by using two clouds. As soon as you need to scale beyond a single instance, start by scaling out to another cloud. Then scale out to different data centres in both clouds. Then you’ve got dual-cloud, quadruple-datacentre redundancy, so any more scaling you need can be left to the clouds to auto-scale themselves. By running in both clouds, you’ve made your app portable, so in the highly unlikely event that both AWS and Azure go down in multiple regions, you’ll have a deployment package which will let you spin up a new stack on yet another cloud, without having to rework your solution.

    Read the article

  • Iterative and Incremental Principle Series 2: Finding Focus

    - by llowitz
    Welcome back to the second blog in a five-part series where I recount my personal experience with applying the Iterative and Incremental principle to my daily life. As you recall from part one of the series, a conversation with my son prompted me to think about practical applications of the Iterative and Incremental approach, and I realized I had incorporated this principle into my exercise regime. I have been a runner since college, but about a year ago I sustained an injury that prevented me from exercising. When I was sufficiently healed, I decided to pick it up again. Knowing it was unrealistic to pick up where I left off, I set a goal of running 3 miles, or approximately 30 minutes. I was excited to get back into running and determined to meet my goal. Unfortunately, after what felt like a lifetime, I looked at my watch and realized that I had 27 agonizing minutes to go! My determination waned and my positive "I can do it" attitude was overridden by thoughts of "This is impossible". My initial focus and excitement were not sustained, so I never met my goal. Understanding that the 30-minute run was simply too much for me mentally, I changed my approach. I decided to try interval training. For each interval, I planned to walk for 3 minutes, then jog for 2 minutes, and finally sprint for 1 minute, and I planned to repeat this pattern 5 times. I found that each interval set was challenging, yet achievable, leaving me excited and invigorated for my next interval. I easily completed five intervals - or 30 minutes! My sense of accomplishment soared. What does this have to do with OUM? Have you heard the saying "How do you eat an elephant? One bite at a time!"? This adage certainly applies in my example and in an OUM systems implementation. It is easier to manage, track progress, and maintain team focus for weeks at a time rather than for months at a time. With shorter milestones, the project team focuses on the iteration goal. Once the iteration goal is met, a sense of accomplishment is experienced and the team can be re-focused on a fresh, yet achievable, new challenge. Join me tomorrow as I expand the concept of Iterative and Incremental by taking a step back to explore the recommended approach for planning your iterations.

    Read the article

  • Which hidden files and directories do I need?

    - by Sammy Black
    In a previous question, I explained my situation/plan: backing up home directory on external drive, reformatting laptop drive, installing 14.04, putting home directory back. (It hasn't happened yet because I can't seem to find the down time, in case things aren't working right away.) It occurred to me that maybe I don't want all of those hidden files and directories (e.g. .local/share/ubuntuone/syncdaemon/, .cache/google-chrome/, etc.) Just judging by the amount of time in copying, I can tell that some of these hidden directories are large. Question: Are there any hidden directories that I obviously don't need/want when I have the laptop running an updated distribution? Will they cause conflicts? (I plan on copying the backed-up directory tree back onto the laptop with the --no-clobber option.)

    Read the article

  • Can't disable giant cursors (from accessibility mode)

    - by jackweirdy
    I've just installed ubuntu 12.04 from a livecd. Out of curiosity, I enabled the accessibility options for people who are hard of sight. As you can guess this does the usual stuff of inverting colours, increasing text size and making the cursor larger. Having finished the installation I booted into the new system to find accessibility mode was still installed. From the lightdm login screen I disabled this which switched colours and text size back to default, however it's only the pointer cursor that has gone back to default. To put it another way, the "hand" icon that you get when hovering over a link, the cursor which appears when typing and pretty much every other cursor on the system are still large. I've looked on the Universal Access menu, but there's no option to disable large cursors. I've tried toggling accessibility on and off but to no avail.

    Read the article

  • Web development for people who mainly do client side..

    - by kamziro
    Okay, I'm sure there are a lot of us who have plenty of experience developing C++/OpenGL/Objective-C on the iPhone, Java on Android, Python games, etc. (any client-side stuff), while having little to no experience in web-based development. So what skillset should one learn in order to be able to work on web projects, say, to make a Facebook clone (I kid), or maybe a startup that specializes in connecting random fashionistas with pics, etc.? I actually do have some experience with C#/VB.NET back-end development from a while back, but as part of a team I had a lot of support from the senior devs. Is C# considered a decent web development language?

    Read the article

  • add packages to squid-deb-proxy cache

    - by zpletan
    To save bandwidth and data on my Internet plan, I have installed squid-deb-proxy on a desktop, and the client on it and a few other machines I've got. However, based on the post that put me onto this, it sounds like if I take my laptop to a different network and update it there, the downloaded updates will NOT be automatically copied back to the squid-deb-proxy server when I get back on my network. Assuming that this is correct (I will be testing later), is there a way I can stick these packages into the cache so I don't have to download them one more time for the other machines in the network?

    Read the article

  • Mouse and keyboard stop working after suspend or screensaver lock

    - by LEo
    If I leave the computer and let it run into the screensaver and lock the screen, the mouse left click won't go back to working. If I suspend the computer, the keyboard won't get back to working. It started after upgrading to Ubuntu 11.04. Any tips to solve this problem? The following lines I got in dmesg after the problem happened:
        [30536.564415] psmouse.c: TouchPad at isa0060/serio1/input0 lost sync at byte 1
        [30536.565725] psmouse.c: TouchPad at isa0060/serio1/input0 lost sync at byte 1
        [30536.568466] psmouse.c: TouchPad at isa0060/serio1/input0 lost sync at byte 1
        [30536.569790] psmouse.c: TouchPad at isa0060/serio1/input0 lost sync at byte 1
        [30536.571123] psmouse.c: TouchPad at isa0060/serio1/input0 lost sync at byte 1
        [30536.571126] psmouse.c: issuing reconnect request
    And this after I tried to plug my USB mouse in again:
        [31570.040088] usb 6-1: USB disconnect, address 2
        [31573.490095] usb 6-1: new low speed USB device using uhci_hcd and address 3
        [31573.687376] input: Microsoft Basic Optical Mouse as /devices/pci0000:00/0000:00:1d.1/usb6/6-1/6-1:1.0/input/input12
        [31573.687544] generic-usb 0003:045E:0084.0002: input,hidraw0: USB HID v1.10 Mouse [Microsoft Basic Optical Mouse] on usb-0000:00:1d.1-1/input0

    Read the article

  • Client-Server connection response timeout issues

    - by Srikar
    The user creates a folder in the client, and in the client-side code I hit an API on the server to make this persistent for that user. But in some cases my server is so busy that the request times out. The server has executed my request but timed out before sending a response back to the client. The timeout set in the client is 10 seconds. At this point the client thinks that the server has not executed its request (of creating a folder) and ends up sending it again. Now I have 2 folders on the server but the user has created only 1 folder in the client. How do I prevent this? One way to solve this is to use a unique ID with each new request. The ID then acts as a distinguisher between old and new requests from the client. But this leads to storing these IDs on my server and doing a lookup for each API call, which I want to avoid. Another way is to increase the timeout duration, but I don't want to change this from 10 seconds. Something tells me that there are better solutions. I have posted this question on Stack Overflow but I think it's better suited here. UPDATE: I will make my problem even more explicit. The client is a web browser and the server is running nginx + Django + MySQL (a standard stack). The user creates a folder in the web browser. As a result I need to hit a server API. The API call responds, so the client knows the API call was a success. This is the normal scenario. Sometimes, though, the server successfully completes the API request but the client-side (web browser) connection times out before the server can respond. The client has no clue at this point. The user thinks the request failed and clicks again. This time it was a success, but when the UI refreshes he sees 2 folders. I want to remedy this situation.
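
    For what it's worth, here is a rough sketch of the unique-request-ID idea mentioned above. The real stack here is nginx + Django + MySQL, so this C# sketch only shows the shape of the pattern, and FolderService and its members are invented: the client generates one ID per logical action and reuses it on every retry, and the server returns the already-created folder when it sees a request ID it has handled before, so a retry can never create a second folder. In a real deployment the handled-request map would be a database table with a unique index on the request ID rather than an in-memory dictionary.

        using System;
        using System.Collections.Generic;

        // Server-side sketch: an idempotent "create folder" handler keyed by a
        // client-generated request ID.
        public class FolderService
        {
            private readonly object _gate = new object();
            private readonly Dictionary<Guid, Guid> _handled = new Dictionary<Guid, Guid>();

            public Guid CreateFolder(Guid requestId, string userId, string folderName)
            {
                lock (_gate)
                {
                    Guid existing;
                    if (_handled.TryGetValue(requestId, out existing))
                        return existing;          // duplicate retry: same answer, no new folder

                    Guid folderId = Guid.NewGuid();
                    // ... persist (userId, folderName, folderId) here ...
                    _handled[requestId] = folderId;
                    return folderId;
                }
            }
        }

        // Client-side sketch: generate the ID once, reuse it on every retry.
        public static class Client
        {
            public static void Main()
            {
                var service = new FolderService();
                Guid requestId = Guid.NewGuid();   // one ID per user action

                Guid first = service.CreateFolder(requestId, "user-42", "Holidays");
                // Timeout happens, the client retries with the SAME requestId:
                Guid second = service.CreateFolder(requestId, "user-42", "Holidays");

                Console.WriteLine(first == second); // True - only one folder exists
            }
        }

    An alternative that avoids storing IDs at all is to make the operation naturally idempotent, e.g. a unique constraint on (user, folder path), so a retried create either succeeds or returns the folder that already exists.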

    Read the article

  • Disable incognito in chrome or chromium

    - by TheIronKnuckle
    I'm addicted to certain websites to the point where it's interfering with my life regularly, and I'm sick of it. I want to install website blockers that aren't easy to circumvent. In Chrome, incognito mode is easily accessible with Ctrl+Shift+N. That is ridiculous. Whenever I feel an urge to go on an addictive website, it doesn't matter what blockers and regulators I've got installed; three keys can get round them in a second. Simply uninstalling Chrome isn't an option either, as it's way too easy to sudo apt-get install it right back. So yes, I want to disable incognito mode completely (and if possible make it totally impossible to get it back). I note that some guy has figured out how to do it on Windows with a registry entry: http://wmwood.net/software/incognito-gone-get-rid-of-private-browsing/ If it can be done on Windows, it can be done on Ubuntu!

    Read the article

  • failure to restore backup from deja dup

    - by Layla Kosakov
    I had Ubuntu 12.04.1 and I made a backup of the home folder with Deja Dup on an external hard disk. Today I installed Ubuntu 14.04 and erased the Ubuntu 12.04.1. Now I'm trying to restore my backup. First it asks where the backup to restore is, then it asks which date to restore, and then it starts; after a while it asks for the password. I put in the password and it says "Restoring" but stays at "Preparing" without any progress. It doesn't show any error, it just stays preparing. The details window is blank. I had all my documents there... it's very bad for me, all my personal data... lost? Thanks for any help, Layla.

    Read the article

  • How can I backup my PPAs?

    - by Scaine
    Related to this question. But my concern is that over the past year, most of my more interesting (or used) applications are from PPAs, and just backing up my sources list won't add the associated Launchpad keys the way that add-apt-repository does. So I'm looking for a way to list all the PPA URLs (like ppa:chromium-daily/stable) so that I can easily script a series of add-apt-repository commands to add them to a new installation gracefully. Short of dumping my bash history, of course, which might be feasible depending on how far back that file goes?

    Read the article

  • A bacon- (and module-) saving PowerShell incident

    - by AaronBertrand
    Earlier today I made a big goof. I opened a module in Notepad, intending to use it as the basis for a new module. I was in the process of using "File > Save As" when my phone rang just at the precise instant that, for some reason, made me click on "File > Save" by mistake. After hitting Ctrl+Z 30 times to try to get the old version of the module back, I remembered that Notepad has never had more than one level of Undo. Back when I was coding ASP by hand, I was very well aware of this, but I...(read more)

    Read the article

  • Architecture for a template-building, WYSIWYG application

    - by Sam Selikoff
    I'm building a WYSIWYG designer in Ember.js. The designer will allow users to create campaigns - think MailChimp. To build a campaign, users will choose an existing template. The template will have a defined layout. The user will then be taken to the designer, where he will be able to edit the text and style, and additionally change some layout options. I've been thinking about how best to go about structuring this app, and there are a few hurdles. Specifically, the output of the campaign will be dynamic: eventually, it will be published somewhere, and when the consumers (not my users, but the people clicking on the campaign that my user created) visit the campaign, certain pieces of data will change, depending on the type of consumer viewing the campaign. That means the ultimate output of the designer will be a dynamic site. The data that is dynamic for this site - the end product - will not be manipulated by the user in the designer. However, the data that will be manipulated by the user in the designer are things like copy, styles, layout options, etc. I'll call the first set of variables server-side data, and the second client-side data. It seems, then, that the process will go something like this: I'll need to create templates for this designer that have two dynamic segments. For instance, the server-side data could be Liquid expressions, and the client-side data Handlebars expressions. When the user creates a campaign, I would compile the template on the back end using some dummy data for the server-side variables, and serve up a handlebars template to the Ember app. The user would then edit the template, and the Ember app would save all his edits to the JS variables that were powering the template. This way he'd be able to preview the template. When he saves, he'll send back the selected template, along with all the data and options he's made. When it comes time to publish, the back-end system will have to do two things: compile the template with Handlebars using the campaign data, and then compile the template with Liquid using the server-side data Is my thinking roughly accurate about this, or is there a simpler way?

    Read the article

  • Downgrading from ubuntu 11.10 to 10.10, keeping installed programs

    - by Peter
    I recently upgraded from 10.10 to 11.04 and then 11.10, and I'd like to revert back to 10.10. I understand that you cannot downgrade a version as easily as you can upgrade, and that I'll probably have to get the boot CD again and reinstall the whole thing. I know that I can keep most of my files by saving the /home directory, so 2 questions: Once I've gone back to 10.10, can I just copy my old version of home over the freshly installed one? Is there a way to keep all of my installed programs, or some sort of way of getting the new install to automatically install them? Will I have to go through the tricky setups of things like TeX all over again? Thanks

    Read the article
