Search Results

Search found 24117 results on 965 pages for 'write through'.


  • What is the correct way to restart udev in Ubuntu?

    - by zerkms
    I've changed the name of my eth1 interface to eth0. How do I ask udev to re-read the config now? service udev restart and udevadm control --reload-rules don't help. So is there any valid way other than rebooting? (Yes, a reboot fixes the issue.) UPD: yes, I know I should prepend the commands with sudo, but neither of the commands above changes anything in the ifconfig -a output: I still see eth1, not eth0. UPD 2: I just changed the NAME property of the udev rule line. I don't know any reason for this to be ineffective. Neither of the commands I posted above produces an error; they just don't change the actual interface name in the ifconfig -a output. If I reboot, the interface name changes as expected. UPD 3: let me explain the whole case better ;-) For development purposes I'm writing a script that clones virtual machines (VirtualBox-driven) and pre-configures them. I run a command to clone a VM and start it, and since the network interface MAC has changed, udev adds a second rule to the persistent network rules. Right after the machine boots for the first time there are two rules: eth0, which does not exist, since it refers to the MAC of the original VM image; and eth1, which exists, but all the configuration in all the files refers to eth0, so it is not much use to me. So with sed I delete the line with eth0 (it is obsolete and useless in the cloned image) and rename eth1 to eth0. So now I have a valid persistent rule, but there is still eth1 in /dev. The issue: I don't want to reboot the machine (it takes extra time, which is bad at the VM-building stage) and just want my /dev rebuilt by some command so I have a ready-to-use VM without any reboots.
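
    A sketch of one way to get the new name without a reboot, assuming the interface is idle and iproute2 is available (whether this is enough depends on what else still holds the old name; treat it as a starting point, not a guaranteed fix):

        # Re-read the already-corrected persistent rules
        sudo udevadm control --reload-rules

        # Rename the interface in place; it must be down first
        sudo ip link set eth1 down
        sudo ip link set eth1 name eth0
        sudo ip link set eth0 up

        # Alternatively, ask udev to replay add events for net devices
        sudo udevadm trigger --subsystem-match=net --action=add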


  • How are lookaheads propagated in the "channel" method of building an LALR parser?

    - by greenoldman
    The method is described in the Dragon Book; however, I read about it in "Parsing Techniques" by D. Grune and C.J.H. Jacobs. I start from my understanding of building channels for the NFA: channels are built once; they are like water channels with a current; you "drop" lookahead symbols in the right places (sources) of the channel, and they propagate with the "current"; when a symbol propagates there are no barriers (the only things sufficient for propagation are the presence of a channel and its direction/current), i.e. a lookahead cannot just die out of the blue. Is that right? If I am correct, then the eof lookahead should be present in all states, because its source is the start production, and all other production states are reachable from the start state. How the DFA is made out of this NFA is not perfectly clear to me -- the authors of the mentioned book write about preserving channels, but I see no purpose if you have already propagated the lookaheads. If the channels have to be preserved, are they cut off from the source if the DFA state does not include the source NFA state? I assume not -- the channels still run between DFA states, not only within a given DFA state. In effect, eof should still be present in all items in all states. But when you look at the DFA presented in the book (the pdf is from the errata): DFA for LALR (fig. 9.34 in the book, p. 301), you will see there are items without eof in the lookahead. The grammar for this DFA is:

        S -> E
        E -> E - T
        E -> T
        T -> ( E )
        T -> n

    So how was it computed? When was eof dropped, and under what condition? Update: It is a textual pdf, so here are the two interesting states (in the DFA; # is eof):

        State 1:
        S -> •E [#]
        E -> •E-T [#-]
        E -> •T [#-]
        T -> •n [#-]
        T -> •(E) [#-]

        State 6:
        T -> (•E) [#-)]
        E -> •E-T [-)]
        E -> •T [-)]
        T -> •n [-)]
        T -> •(E) [-)]

    The arc from 1 to 6 is labeled (.


  • Perform shell operation through secure shell

    - by Ben
    Is it possible to perform a shell operation from a bash script through a secure shell? Here is an example of why you may want to do this. Let's say you have a simple Unix machine that you only need to build and run on, but you want to do all of the development on another machine. I want to write a bash script that has the following functionality:

    scp the file to a location on the other machine
    ssh to the other machine
    cd into the correct directory
    make
    run the program
    scp the results to a file on the original computer
    exit ssh

    Is this remotely possible? (Pardon the pun :p)
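
    A minimal sketch of that flow, assuming key-based login; the host alias remote, the directory ~/proj and the file names are placeholders, not from the question:

        #!/bin/bash
        # Copy the source to the build machine
        scp main.c remote:~/proj/

        # Everything between the heredoc markers runs on the remote
        # machine in one ssh session, which exits at EOF
        ssh remote <<'EOF'
        cd ~/proj
        make
        ./program > result.txt
        EOF

        # Copy the results back to the original computer
        scp remote:~/proj/result.txt ./result.txt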


  • SQL Server 2008 R2: These are a Few of My Favorite Things

    - by smisner
    This month's T-SQL Tuesday is hosted by Jorge Segarra (blog | twitter), who decided that we should write about our favorite new feature in SQL Server 2008 R2. The majority of my published work concentrates on Reporting Services, so the obvious answer for my favorite new feature is... Reporting Services. I can't pick just one thing in Reporting Services, so instead I thought I'd compile a list of my posts on the new features in Reporting Services 2008 R2:

    - Map Wizard for spatial data (The World is But a Stage)
    - Pagination features (I've Got Your Page Number)
    - Lookup functions (Look Up, Look Down, Look All Around - Part I, Part II, Part III)
    - Test Connection button (Testing, Testing 1-2-3)
    - Conditional formatting based on format, i.e. RenderFormat (As You Like It)

    And I wrote an overview of the business intelligence features in SQL Server 2008 R2 for Microsoft Press in the free e-book, Introducing Microsoft SQL Server 2008 R2, if you're curious about what else is new in both the BI platform and the relational engine.


  • How to give a Linux user permission to create backups, but not permission to delete them?

    - by ChocoDeveloper
    I want to set up automated backups that are kept safe from myself (in case a virus pwns me). The problem is that the "create" and "delete" permissions are the same thing: write permission. So what can I do about it? Is it possible to decouple the create/delete permissions? Another option could be to let the user "root" make the backups. The problem is that my home directory is encrypted, and I don't want to back up everything. Any ideas? For the backups I'm using Deja Dup, which is installed by default in Fedora and Ubuntu.
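
    One hedged idea, assuming an ext3/ext4 filesystem (/backups is a placeholder path): the append-only attribute decouples create from delete, since an append-only directory can gain entries but not lose them, for everyone short of root.

        # Mark the backup directory append-only (needs root)
        sudo chattr +a /backups

        # Creating new backups still works...
        touch /backups/backup-2013-06-01.tar.gz

        # ...but deleting or renaming existing ones fails
        rm /backups/backup-2013-06-01.tar.gz    # Operation not permitted

        # Root can lift the attribute when old backups must be rotated
        sudo chattr -a /backups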


  • Use Oracle Product Hub Business Events to Integrate Additional Logic into Your Business Flows

    - by ToddAC-Oracle
    Business events provide a mechanism to plug in and integrate additional business processes or custom code into standard business flows. You could send a notification to a business user, write to advanced queues or perform some custom processing. Built-in business events are available for each flow, like Item Creation, Item Updation, User-Defined Attribute Changes, Change Order Creation, Change Order Status Changes and others. To get a list of business events, refer to the PIM Implementation Guide or Using Business Events in PLM and PIM Data Librarian (Doc ID 372814.1). If you are planning to use business events, Doc ID 1074754.1 walks you through a setup with examples: How to Subscribe and Use Product Hub (PIM / APC) Business Events [Video] (Doc ID 1074754.1). Review the 'Presentation' section of Doc ID 1074754.1 for complete information and best practices to follow while implementing code for subscriptions. Learn things you might want to avoid, like commit statements, for instance. Doc ID 1074754.1 also provides sample code for testing, and can be used to troubleshoot missing setups or frequently experienced issues. Take advantage and run a test ahead of time with the sample code to isolate any issues from within business-specific subscription code. Get more out of Oracle Product Hub by using Business Events!


  • Questioning the motivation for dependency injection: Why is creating an object graph hard?

    - by oberlies
    Dependency injection frameworks like Google Guice give the following motivation for their usage (source): To construct an object, you first build its dependencies. But to build each dependency, you need its dependencies, and so on. So when you build an object, you really need to build an object graph. Building object graphs by hand is labour intensive (...) and makes testing difficult. But I don't buy this argument: Even without dependency injection, I can write classes which are both easy to instantiate and convenient to test. E.g. the example from the Guice motivation page could be rewritten in the following way:

        class BillingService {
            private final CreditCardProcessor processor;
            private final TransactionLog transactionLog;

            // constructor for tests, taking all collaborators as parameters
            BillingService(CreditCardProcessor processor, TransactionLog transactionLog) {
                this.processor = processor;
                this.transactionLog = transactionLog;
            }

            // constructor for production, calling the (productive) constructors of the collaborators
            public BillingService() {
                this(new PaypalCreditCardProcessor(), new DatabaseTransactionLog());
            }

            public Receipt chargeOrder(PizzaOrder order, CreditCard creditCard) { ... }
        }

    So dependency injection may really be an advantage in advanced use cases, but I don't need it for easy construction and testability, do I?


  • Windows Despooler Requires Administrator to Print

    - by Software Monkey
    Does anyone know what changes I might need to make to allow restricted users to print using a printer configured for spooling? My Windows XP SP3 system currently requires me to use an Admin account for printing if the printer is configured to spool documents before printing. If the printer is configured for direct printing, it works for all accounts. This used to work; some months back it just stopped, and I can't pin down why. The printer itself is configured for all users to have complete authority. My system is locked down for restricted users, giving them only read authority to the entire file system except their data directories, which is how I have run my systems for years. I assume there may be a directory somewhere that I need to allow users to write to.


  • Small change in MVVM Light Toolkit templates for Blend 4 RC

    - by Laurent Bugnion
    Ah, the joy of new releases… You will find that the MVVM Light Toolkit works fine with Visual Studio 2010 RTM and Blend 4 RC except for a few adjustments. Blend templates: The path to the Expression Blend 4 project templates changed. If you start Expression Blend 4 RC now, you will likely not see the MVVM Light templates in the New Project dialog. [Image: New Project dialog with MVVM Light] To restore the templates, follow these steps:

    1. Open Windows Explorer.
    2. Navigate to C:\Users\[username]\Documents\Expression (or simply type My Documents in Windows Explorer and then open the Expression folder).
    3. Change the name of the "Blend 4 beta" folder to "Blend 4".

    That's it; you should now see the templates in the New Project dialog in Blend 4. Note that since the new name is "Blend 4", I hope that I won't need to do the same exercise when Blend 4 RTM is released! Windows Phone 7 templates: Since the Windows Phone 7 tools are not ready yet for Visual Studio 2010 RTM and Blend 4 RC, the templates in the Silverlight for Windows Phone folders will not work. You will get an error if you try to create such a project in the newly released environment. I hesitated to remove these templates from the current packages, but honestly that is a lot of trouble for a very short time before the tools for Windows Phone 7 are released (note: I don't have any information as to when these tools will be released). In the meantime, just don't create a WinPhone7 application. Reminder: If you want to write code for Windows Phone 7, you need to keep Visual Studio 2010 RC as well as Expression Blend 4 beta. Updated package: I uploaded an update to the Blend 4 templates. It is available as before on the "Install manually" page and on the CodePlex page.


  • Accurate Windows equivalent of the Unix which(1) command

    - by SamB
    It's easy enough to write a simple script that works like the which(1) command from unix, which searches for a given command along the PATH. Unfortunately, the CreateProcess function is not so simple, so this type of script does not give accurate results: CreateProcess looks in a number of directories not in the PATH, looks for files with all of the extensions listed in PATHEXT, etc. Worse, who knows what might be added in future versions of Windows? Anyway, my question is: is there a robust, accurate which(1) equivalent for Windows, which always tells you what file CreateProcess would find?


  • Best and Proper Permissions Settings for Directory

    - by Dr. DOT
    I am interested in knowing the proper, yet security-conscious, settings for a directory. Here's my scenario: I have a username for FTP access to my server called "user". For the purposes of the scenario, PHP runs as "nobody" on my server. I have a directory off the document root called "sample". The "sample" directory is chmod'd at 0755 (drwxr-xr-x), and "sample" is owned by "user" with the group set to "user". The above is all very straightforward and standard. Now, I want a script to be able to create (mkdir) and delete (rmdir) directories under "sample". Yet I obviously don't want to overly expose my server by opening up the permissions (I could easily chmod sample to 0777 and make it world-writable). What is the best combination of permissions, owner settings and/or group settings to allow my script to create and delete directories under "sample" while retaining the ability for "user" to FTP into the directory? Thanks.
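
    One hedged combination, assuming you can manage groups on the server (the group name web is illustrative): share a group between "user" and the PHP user, make "sample" group-writable with the setgid bit, and leave everyone else read-only.

        # Put the FTP user and the PHP user in a common group
        sudo groupadd web
        sudo usermod -aG web user
        sudo usermod -aG web nobody

        # Owner stays "user"; group gets write; setgid makes new
        # subdirectories inherit the group automatically
        sudo chown user:web /path/to/docroot/sample
        sudo chmod 2775 /path/to/docroot/sample

    PHP can then mkdir and rmdir under sample through group write permission, "user" keeps full FTP access as the owner, and the directory is never world-writable.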


  • Is there a network "tee"-alike with one leg returning to /dev/null?

    - by Steff Davies
    I've just built a new PostgreSQL server for my employers, which is happily replicating using WALs. I'm now left with the problem of verifying its performance. One nice way which came up in conversation is to break replication with the slave caught up and then direct all production traffic to both servers, discarding the responses from the new server and returning those from the current one to the clients. Once we're sure performance is OK, we re-sync the slave and can fail over with confidence. Bliss. This would require a TCP proxy capable of opening two outgoing connections for each incoming one, and discarding the data returned from one of them, which is a tricky thing to google for, it seems. Do the assembled brains know of such a thing, before I dive into libevent and write one?
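
    A rough sketch of the idea with socat, where oldhost, newhost and port 5432 are placeholders; the SYSTEM address hands the byte stream to a shell, and the process substitution needs bash, so treat this as a starting point to adapt rather than finished plumbing:

        # Accept clients locally; tee each stream to both servers.
        # Replies from newhost are discarded; oldhost's replies are
        # returned to the client.
        socat TCP-LISTEN:5432,fork,reuseaddr \
          SYSTEM:'bash -c "tee >(socat - TCP\:newhost\:5432 >/dev/null) | socat - TCP\:oldhost\:5432"'

    The colons inside the SYSTEM command are escaped because socat would otherwise read them as address separators.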


  • How to "translate" interdependent object states in code?

    - by Earl Grey
    I have the following problem. My UI interface contains several buttons, labels, and other visual information. I am able to describe every possible workflow scenario that should be allowed on that UI. That means I can describe it like this: when button A is pressed, the following should follow. In the case of that A button, there are three independent factors that influence the possible result when pushing the A button: the state of the session (blank, single, multi, multi special), the actual work being done by the system at the moment of pressing the A button (nothing was happening, work was being done, work was paused) and a separate UI element that has two states (on, off). This gives me a 3-dimensional cube with 24 possible outcomes. I could write code for this using if statements, switch statements, etc., but the problem is that I have another 7 buttons on that UI, I can enter this UI from different states, some buttons change the state, some change parameters... To sum up, the combinations are mind-boggling and I am not able to come up with a methodology that scales and is systematically reliable. I am able to describe EVERY workflow with words, and I am sure my description is complete and without logical errors. But I am not able to translate that into code. I was trying to draw flowcharts, but it soon became visually too complicated due to too many if "semaphores". Can you advise how to proceed?


  • How to get initial API right using TDD?

    - by Vytautas Mackonis
    This might be a rather silly question, as I am making my first attempts at TDD. I loved the sense of confidence it brings and the generally better structure of my code, but when I started to apply it to something bigger than one-class toy examples, I ran into difficulties. Suppose you are writing a library of sorts. You know what it has to do, and you know a general way of how it is supposed to be implemented (architecture-wise), but you keep "discovering" that you need to make changes to your public API as you code. Perhaps you need to transform this private method into a strategy pattern (and now need to pass a mocked strategy in your tests); perhaps you misplaced a responsibility here and there and split an existing class. When you are improving upon existing code, TDD seems a really good fit, but when you are writing everything from scratch, the API you write tests for is a bit "blurry" unless you do a big design up front. What do you do when you already have 30 tests on a method that had its signature (and, for that matter, behavior) changed? That is a lot of tests to change once they add up.


  • How Mature is Your Database Change Management Process?

    - by Ben Rees
    How do you get your database schema changes live, on to your production system? As your team of developers and DBAs works on the changes to the database to support your business-critical applications, how do these updates wend their way through from dev environments, possibly to QA, hopefully through pre-production and eventually to production in a controlled, reliable and repeatable way? In this article, I describe a model we use to try and understand the different stages that customers go through as their database change management processes mature, from the very basic and manual through to advanced continuous delivery practices. I also provide a simple chart that will help you determine "How mature is our database change management process?" This process of managing changes to the database – which all of us who have worked in application/database development have had to deal with in one form or another – is sometimes known as Database Change Management (even if we've never used the term ourselves). And it's a difficult process, often painfully so. Some developers take the approach of "I've no idea how my changes get live – I just write the stored procedures and add columns to the tables. It's someone else's problem to get this stuff live. I think we've got a DBA somewhere who deals with it – I don't know, I've never met him/her". I know I used to work that way. I worked that way because I assumed that making the updates to production was a trivial task – how hard can it be? Pause the application for half an hour in the middle of the night, copy over the changes to the app and the database, and switch it back on again? Voila! But somehow it never seemed that easy. And it certainly was never that easy for database changes. Why? Because you can't just overwrite the old database with the new version. Databases have a state – more specifically, 4Tb of critical data built up over the last 12 years of running your business, and if your quick hotfix happened to accidentally delete that 4Tb of data, then you're "looking for a new role" pretty quickly after the failed release. There are a lot of other reasons why a managed database change management process is important for organisations, besides job security, not least:

    - Frequency of releases. Many business managers are feeling the pressure to get functionality out to their users sooner, quicker and more reliably. The new book (which I highly recommend) Lean Enterprise by Jez Humble, Barry O'Reilly and Joanne Molesky provides a great discussion of how many enterprises are having to move towards a leaner, more frequent release cycle to maintain their competitive advantage. It's no longer acceptable to release once per year, leaving your customers waiting all year for changes they desperately need (and expect).
    - Auditing and compliance. SOX, HIPAA and other compliance frameworks have demanded that companies implement proper processes for managing changes to their databases, whether managing schema changes, making sure that the data itself is being looked after correctly, or other mechanisms that provide an audit trail of changes.
    We've found, at Red Gate, that we have a very wide range of customers using every possible form of database change management imaginable. Everything from "Nothing – I just fix the schema on production from my laptop when things go wrong, and write it down in my notebook" to "A full Continuous Delivery process – any change made by a dev gets checked in and recorded, fully tested (including performance tests) before a (tested) release is made available to our Release Management system, ready for live deployment!". And everything in between, of course. Because of the vast number of customers using so many different approaches, we found ourselves struggling to keep on top of what everyone was doing – struggling to identify patterns in customers' behavior. This is useful for us, because we want to try and fit the products we have to different needs – different products are relevant to different customers, and we waste everyone's time (most notably, our customers') if we're suggesting products that aren't appropriate for them. If someone visited a sports store looking to embark on a new fitness program, and the store assistant suggested the latest $10,000 multi-gym, complete with multiple weights mechanisms, dumb-bells, pull-up bars and so on, then he's likely to lose that customer. All he needed was a pair of running shoes! To solve this issue – in an attempt to simplify how we understand our customers and our offerings – we built a model. This is an attempt at classifying our customers into some sort of model or "Customer Maturity Framework", as we rather grandly term it, which somehow simplifies our understanding of what our customers are doing. The great statistician George Box (amongst other things, the "Box" in the Box-Jenkins time series model) gave us the famous quote: "Essentially all models are wrong, but some are useful". We've taken this quote to heart – we know it's a gross over-simplification of the real world of how users work with complex legacy and new database developments. Almost nobody precisely fits into one of our categories. But we hope it's useful and interesting. There are actually a number of similar models for more general application delivery. We've found these from ThoughtWorks/Forrester, from InfoQ and others, and initially we tried just taking these models and replacing the word "application" with "database". However, we hit a problem. From talking to our customers we know that users are far less advanced along the road of mature database change management than they are in application development. As a simple example, no application developer who wants to keep his/her job would develop an application for an organisation without source controlling that code. Sure, he/she might not be using an advanced Gitflow branching methodology, but they'll certainly be making sure their code gets managed in a repo somewhere, with all the benefits of history, auditing and so on. But this certainly isn't the case (yet) for the database – a very large segment of the people we speak to have no source control set up for their databases whatsoever, even at the most basic level (for example, keeping change scripts in a source control system somewhere). By the way, if this is you, Red Gate has a great whitepaper here on the barriers people face getting a source control process implemented at their organisations.
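
    As a hedged aside, that most basic level is tiny to set up. A sketch, where the repository layout, file names and the staging server name are invented for the example: numbering change scripts in a repository and applying them in order already buys history, auditing and repeatability.

        # Keep each schema change as a numbered script under source control
        git init db-migrations && cd db-migrations
        mkdir scripts
        cp ~/001_create_orders_table.sql scripts/
        git add scripts && git commit -m "001: create orders table"

        # Apply any outstanding scripts to an environment, in order
        for f in scripts/*.sql; do
            sqlcmd -S staging -d MyDb -i "$f"
        done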
    This difference in maturity is the same as you move into areas such as continuous integration (common amongst app developers, relatively rare for database developers) and automated release management (growing amongst app developers, very rare for the database). So, when we created the model we started from scratch and biased the levels of maturity towards what we actually see amongst our customers. But what are these stages? And what level are you? The table below describes our definitions for four levels of maturity – Baseline, Beginner, Intermediate and Advanced. As I say, this is a model – you won't fit any of these categories perfectly, but hopefully one will ring true more than others. We've also created a PDF with a flow chart to help you find which of these groups most closely matches your team: Download the Database Delivery Maturity Framework PDF here

    Level D1 – Baseline
    - Work directly on live databases
    - Sometimes work directly in production
    - Generate manual scripts for releases; sometimes use a product like SQL Compare or similar to do this
    - Any tests that we might have are run manually

    Level D2 – Beginner
    - Have some ad-hoc DB version control, such as manually adding upgrade scripts to a version control system
    - An attempt is made to keep production in sync with development environments
    - There is some documentation and planning of manual deployments
    - Some basic automated DB testing is in process

    Level D3 – Intermediate
    - The database is fully version-controlled with a product like Red Gate SQL Source Control or SSDT
    - Database environments are managed
    - The production environment schema is reproducible from the source control system
    - There are some automated tests
    - Have looked at using migration scripts for difficult database refactoring cases

    Level D4 – Advanced
    - Using continuous integration for database changes
    - Build, testing and deployment of DB changes carried out through a proper database release process
    - Fully automated tests
    - The production system is monitored for fast feedback to developers

    Does this model reflect your team at all? Where are you on this journey? We'd be very interested to know how you get on. We're doing a lot of work at the moment, at Red Gate, trying to help people progress through these stages. For example, if you're currently not source controlling your database, then this is a natural next step. If you are already source controlling your database, what about the next stage – continuous integration and automated release management? To help understand these issues, there's a summary of the Red Gate Database Delivery learning program on our site, alongside a Patterns and Practices library here on Simple-Talk and a Training Academy section on our documentation site to help you get up and running with the tools you need to progress. All feedback is welcome and it would be great to hear where you find yourself on this journey! This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.


  • How can I artificially create a slow query in MySQL?

    - by Gray Race
    I'm giving a hands-on presentation in a couple of weeks. Part of this demo is basic MySQL troubleshooting, including use of the slow query log. I've generated a database and installed our app, but it's a clean database and therefore difficult to generate enough problems. I've tried the following to get queries into the slow query log:

    - Set the slow query time to 1 second.
    - Deleted multiple indexes.
    - Stressed the system: stress --cpu 100 --io 100 --vm 2 --vm-bytes 128M --timeout 1m
    - Scripted some basic webpage calls using wget.

    None of this has generated slow queries. Is there another way of artificially stressing the database to generate problems? I don't have enough skill to write a complex JMeter test or other load generator. I'm hoping perhaps for something built into MySQL or another Linux trick beyond stress.
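
    A couple of hedged tricks that reliably land in the slow query log, assuming long_query_time is 1 (mydb and the orders table are placeholders):

        # SLEEP() makes the statement take the given number of seconds,
        # guaranteeing it exceeds long_query_time
        mysql mydb -e "SELECT SLEEP(2);"

        # A self cross join explodes the examined row count, which is
        # slow on even a modest table and exercises a real query plan
        mysql mydb -e "SELECT COUNT(*) FROM orders a, orders b, orders c;"

    SLEEP() is logged with almost no rows examined, so the cross join usually makes the more convincing demo.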


  • Is TrueCrypt robust against data corruption?

    - by Dimitri C.
    I would expect a TrueCrypt volume to be fragile when it suffers data corruption. This could happen, for example, because a hard disk, CD or DVD starts to deteriorate, or when a USB stick is unplugged while a write is in progress. The TrueCrypt FAQ mentions that this problem is limited because the data is encrypted in blocks of 16 bytes. However, I'd like to know whether this is really so in practice. Has anyone experienced severe data loss due to only small corruptions?


  • Goal Tracking data seems to be inaccurate?

    - by Khuram Malik
    I set up some goal tracking about one week ago. I had multiple goals in one set. The goal itself was the "send" button being pressed on the callback form (I did that by pushing a pageview to Google Analytics every time the send button is pressed). For each goal, I listed the first step as a required step. So, for example, the ILR page was step 1 and set as required, and the goal was "/CallbackFormFilled". Looking at the stats a week later, I'm getting some very inflated numbers, especially when comparing them to my manually filled Excel spreadsheet, and I'm struggling to understand the cause of this behaviour. I'm unable to attach screenshots, unfortunately, since my StackExchange account for this site is brand new. My own thoughts: maybe it's because I have set up multiple goals with the same end-goal URL, but I thought that was a valid setup, since I want to track multiple routes, so to speak(?). I've disabled all other goals for now to confirm this, but I'm waiting for stats to come in as I write this. I also wonder if the contact form I'm using in WordPress is causing a problem, but I've simply added one JavaScript line on the send button that pushes a pageview, so I'm not sure that should cause an issue. Here is a link to setting up analytics on this contact form plugin in WordPress for reference (see the JavaScript action hook section): http://ideasilo.wordpress.com/2009/05/31/contact-form-7-1-10/


  • Assign PowerShell script to run at startup using PowerShell on Windows Server 2012

    - by James Toyer
    I'm trying to write a PowerShell script that will run when a Windows 2012 instance is created on AWS using the configuration tools provided by AWS. My problem is that I want to change the name of the machine once it has started up, restart the machine, and carry on the set-up process afterwards. The main reason for this is that one of the applications installed in the set-up process, Boundary, takes the name of the server when first installed. It then doesn't seem possible to change its name in their portal. Ideally I would have two PowerShell scripts: one to start the set-up process, initialised through AWS, and another that runs the first time the machine restarts. This second script would ideally be queued to run on the next start by the initial set-up script. So I guess my questions are: Is this possible? How would I go about doing it? My Google foo is letting me down here, so any answers would be appreciated.


  • How to boot RHEL with no bash?

    - by nmelmun
    How can I boot a RHEL VM if I deleted /bin/bash? When trying to boot, I now get the following error: "INIT: Cannot execute "/etc/rc.d/rc.sysinit"". The next thing I tried was to modify the kernel boot parameters by adding init=/bin/ksh at the end of the line, which gave me a functional shell. After this, in order to get write permissions, I remounted the root partition with: mount -o remount,rw / Then I tried to boot using ksh as the shell by tricking the system into thinking it's bash: ln -s /bin/ksh /bin/bash Then I restarted the system normally. Unfortunately this didn't work, since ksh is not compatible and /etc/rc.d/rc.sysinit uses several bash-specific tricks. Does anyone else have a suggestion on how I could get the system to boot normally without reinstalling bash?
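
    If reinstalling the whole system is the thing being avoided (rather than the bash package itself), a sketch from the init=/bin/ksh shell; the DVD mount point is a placeholder and the rpm file name will differ per release:

        # Root already remounted read-write, as in the question
        mount -o remount,rw /

        # Remove the ksh symlink masquerading as bash
        rm /bin/bash

        # Reinstall just the bash package from the install media
        mount /dev/cdrom /mnt
        rpm -Uvh --replacepkgs /mnt/Server/bash-*.rpm

        # Or copy the binary from an identical machine instead:
        # scp user@otherbox:/bin/bash /bin/bash && chmod 755 /bin/bash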


  • Any Practical Alternative to the Signals + Slots model for GUI Programming?

    - by IntermediateHacker
    The majority of GUI toolkits nowadays use the signals + slots model. It was Qt and GTK+, if I am not wrong, that pioneered it. You know, the widgets or graphical objects (sometimes even ones that aren't displayed) send signals to the main-loop handler. The main-loop handler then calls the events, callbacks or slots assigned to that widget / graphical object. There are usually default (and in most cases virtual) event handlers already provided by the toolkit for handling all pre-defined signals; therefore, unlike previous designs where the developer had to write the entire main loop and a handler for each and every message himself (think WINAPI), the developer only has to worry about the signals he needs to implement new functionality on. Now this design is used in most modern toolkits as far as I know: there are Qt, GTK+, FLTK, etc.; there is Java Swing; C# even has a language feature for it (events and delegates), and Windows Forms has been developed on this design. In fact, over the last decade, this design for GUI programming has become a kind of unwritten standard, since it increases productivity and provides greater abstraction. However, my question is: Is there any alternative design that is parallel or practical for modern GUI programming? I.e., is the signals + slots design the only practical one in town? Is it feasible to do GUI programming with any other design? Are any modern (preferably successful and popular) GUI toolkits built on an alternative design?


  • USB wifi dongle on Ubuntu server, cannot install Realtek driver RTL 8188cus

    - by Sandro Dzneladze
    I got a cheap eBay wifi dongle from Hong Kong, and I'm trying to set it up on my Ubuntu server. I occasionally need to move the server, so it cannot always be connected to the router via LAN. Anyhow, the USB wifi came with a driver CD. I uploaded the files to my home directory and tried to run the install script (RTL 8188cus): sudo bash install.sh. But I get an error:

        Authentication requested [root] for make driver:
        make ARCH=x86_64 CROSS_COMPILE= -C /lib/modules/2.6.38-8-server/build M=/home/minime/RTL 8188cus/Linux/driver/rtl8192CU_linux_v2.0.1324.20110126 modules
        make[1]: Entering directory `/usr/src/linux-headers-2.6.38-8-server'
        make[1]: *** No rule to make target `8188cus/Linux/driver/rtl8192CU_linux_v2.0.1324.20110126'. Stop.
        make[1]: Leaving directory `/usr/src/linux-headers-2.6.38-8-server'
        make: *** [modules] Error 2
        Compile make driver error: 2, Please check error Mesg

    Any ideas what I'm doing wrong? There is another driver folder for Linux called RTL 81XX, which doesn't have install.sh at all! I tried to use the make command, but I get: make: *** No targets specified and no makefile found. Stop. Any help? This is the first time I'm installing a driver from source. I'm on Ubuntu 11.04 server.

    lsusb:
    Bus 001 Device 002: ID 0bda:8176 Realtek Semiconductor Corp.

    lspci -nn:
    00:00.0 Host bridge [0600]: Intel Corporation N10 Family DMI Bridge [8086:a000] (rev 02)
    00:02.0 VGA compatible controller [0300]: Intel Corporation N10 Family Integrated Graphics Controller [8086:a001] (rev 02)
    00:1b.0 Audio device [0403]: Intel Corporation N10/ICH 7 Family High Definition Audio Controller [8086:27d8] (rev 02)
    00:1c.0 PCI bridge [0604]: Intel Corporation N10/ICH 7 Family PCI Express Port 1 [8086:27d0] (rev 02)
    00:1d.0 USB Controller [0c03]: Intel Corporation N10/ICH 7 Family USB UHCI Controller #1 [8086:27c8] (rev 02)
    00:1d.1 USB Controller [0c03]: Intel Corporation N10/ICH 7 Family USB UHCI Controller #2 [8086:27c9] (rev 02)
    00:1d.2 USB Controller [0c03]: Intel Corporation N10/ICH 7 Family USB UHCI Controller #3 [8086:27ca] (rev 02)
    00:1d.3 USB Controller [0c03]: Intel Corporation N10/ICH 7 Family USB UHCI Controller #4 [8086:27cb] (rev 02)
    00:1d.7 USB Controller [0c03]: Intel Corporation N10/ICH 7 Family USB2 EHCI Controller [8086:27cc] (rev 02)
    00:1e.0 PCI bridge [0604]: Intel Corporation 82801 Mobile PCI Bridge [8086:2448] (rev e2)
    00:1f.0 ISA bridge [0601]: Intel Corporation NM10 Family LPC Controller [8086:27bc] (rev 02)
    00:1f.2 IDE interface [0101]: Intel Corporation N10/ICH7 Family SATA IDE Controller [8086:27c0] (rev 02)
    00:1f.3 SMBus [0c05]: Intel Corporation N10/ICH 7 Family SMBus Controller [8086:27da] (rev 02)
    01:00.0 Ethernet controller [0200]: Atheros Communications Device [1969:1083] (rev c0)

    sudo lshw:
    description: Desktop Computer product: To Be Filled By O.E.M. (To Be Filled By O.E.M.) vendor: To Be Filled By O.E.M. version: To Be Filled By O.E.M. serial: To Be Filled By O.E.M. width: 64 bits capabilities: smbios-2.6 dmi-2.6 vsyscall64 vsyscall32 configuration: boot=normal chassis=desktop family=To Be Filled By O.E.M. sku=To Be Filled By O.E.M. uuid=00020003-0004-0005-0006-000700080009 *-core description: Motherboard product: AD525PV3 vendor: ASRock physical id: 0 *-firmware description: BIOS vendor: American Megatrends Inc.
physical id: 0 version: P1.20 date: 04/01/2011 size: 64KiB capacity: 448KiB capabilities: pci upgrade shadowing cdboot bootselect socketedrom edd int13floppy1200 int13floppy720 int13floppy2880 int5printscreen int9keyboard int14serial int17printer int10video acpi usb ls120boot zipboot biosbootspecification netboot *-cpu description: CPU product: Intel(R) Atom(TM) CPU D525 @ 1.80GHz vendor: Intel Corp. physical id: 4 bus info: cpu@0 version: Intel(R) Atom(TM) CPU D525 @ 1.80GHz serial: To Be Filled By O.E.M. slot: CPUSocket size: 1800MHz capacity: 1800MHz width: 64 bits clock: 200MHz capabilities: x86-64 fpu fpu_exception wp vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx constant_tsc arch_perfmon pebs bts rep_good nopl aperfmperf pni dtes64 monitor ds_cpl tm2 ssse3 cx16 xtpr pdcm movbe lahf_lm configuration: cores=2 enabledcores=2 threads=4 *-cache:0 description: L1 cache physical id: 5 slot: L1-Cache size: 48KiB capacity: 48KiB capabilities: internal write-back data *-cache:1 description: L2 cache physical id: 6 slot: L2-Cache size: 1MiB capacity: 1MiB capabilities: internal write-back unified *-memory description: System Memory physical id: c slot: System board or motherboard size: 2GiB *-bank:0 description: SODIMM DDR2 Synchronous 800 MHz (1.2 ns) product: ModulePartNumber00 vendor: Manufacturer00 physical id: 0 serial: SerNum00 slot: DIMM0 size: 2GiB width: 64 bits clock: 800MHz (1.2ns) *-bank:1 description: DIMM [empty] product: ModulePartNumber01 vendor: Manufacturer01 physical id: 1 serial: SerNum01 slot: DIMM1 *-pci description: Host bridge product: N10 Family DMI Bridge vendor: Intel Corporation physical id: 100 bus info: pci@0000:00:00.0 version: 02 width: 32 bits clock: 33MHz configuration: driver=agpgart-intel resources: irq:0 *-display description: VGA compatible controller product: N10 Family Integrated Graphics Controller vendor: Intel Corporation physical id: 2 bus info: pci@0000:00:02.0 version: 02 width: 32 bits clock: 33MHz capabilities: msi pm vga_controller bus_master cap_list rom configuration: driver=i915 latency=0 resources: irq:41 memory:fea80000-feafffff ioport:dc00(size=8) memory:e0000000-efffffff memory:fe900000-fe9fffff *-multimedia description: Audio device product: N10/ICH 7 Family High Definition Audio Controller vendor: Intel Corporation physical id: 1b bus info: pci@0000:00:1b.0 version: 02 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list configuration: driver=HDA Intel latency=0 resources: irq:43 memory:fea78000-fea7bfff *-pci:0 description: PCI bridge product: N10/ICH 7 Family PCI Express Port 1 vendor: Intel Corporation physical id: 1c bus info: pci@0000:00:1c.0 version: 02 width: 32 bits clock: 33MHz capabilities: pci pciexpress msi pm normal_decode bus_master cap_list configuration: driver=pcieport resources: irq:40 ioport:e000(size=4096) memory:feb00000-febfffff ioport:80000000(size=2097152) *-network description: Ethernet interface product: Atheros Communications vendor: Atheros Communications physical id: 0 bus info: pci@0000:01:00.0 logical name: eth0 version: c0 serial: XX size: 100Mbit/s capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress vpd bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=atl1c driverversion=1.0.1.0-NAPI duplex=full firmware=N/A ip=192.168.1.99 latency=0 link=yes multicast=yes port=twisted pair 
speed=100Mbit/s resources: irq:42 memory:febc0000-febfffff ioport:ec00(size=128) *-usb:0 description: USB Controller product: N10/ICH 7 Family USB UHCI Controller #1 vendor: Intel Corporation physical id: 1d bus info: pci@0000:00:1d.0 version: 02 width: 32 bits clock: 33MHz capabilities: uhci bus_master configuration: driver=uhci_hcd latency=0 resources: irq:23 ioport:d880(size=32) *-usb:1 description: USB Controller product: N10/ICH 7 Family USB UHCI Controller #2 vendor: Intel Corporation physical id: 1d.1 bus info: pci@0000:00:1d.1 version: 02 width: 32 bits clock: 33MHz capabilities: uhci bus_master configuration: driver=uhci_hcd latency=0 resources: irq:19 ioport:d800(size=32) *-usb:2 description: USB Controller product: N10/ICH 7 Family USB UHCI Controller #3 vendor: Intel Corporation physical id: 1d.2 bus info: pci@0000:00:1d.2 version: 02 width: 32 bits clock: 33MHz capabilities: uhci bus_master configuration: driver=uhci_hcd latency=0 resources: irq:18 ioport:d480(size=32) *-usb:3 description: USB Controller product: N10/ICH 7 Family USB UHCI Controller #4 vendor: Intel Corporation physical id: 1d.3 bus info: pci@0000:00:1d.3 version: 02 width: 32 bits clock: 33MHz capabilities: uhci bus_master configuration: driver=uhci_hcd latency=0 resources: irq:16 ioport:d400(size=32) *-usb:4 description: USB Controller product: N10/ICH 7 Family USB2 EHCI Controller vendor: Intel Corporation physical id: 1d.7 bus info: pci@0000:00:1d.7 version: 02 width: 32 bits clock: 33MHz capabilities: pm debug ehci bus_master cap_list configuration: driver=ehci_hcd latency=0 resources: irq:23 memory:fea77c00-fea77fff *-pci:1 description: PCI bridge product: 82801 Mobile PCI Bridge vendor: Intel Corporation physical id: 1e bus info: pci@0000:00:1e.0 version: e2 width: 32 bits clock: 33MHz capabilities: pci subtractive_decode bus_master cap_list *-isa description: ISA bridge product: NM10 Family LPC Controller vendor: Intel Corporation physical id: 1f bus info: pci@0000:00:1f.0 version: 02 width: 32 bits clock: 33MHz capabilities: isa bus_master cap_list configuration: latency=0 *-ide description: IDE interface product: N10/ICH7 Family SATA IDE Controller vendor: Intel Corporation physical id: 1f.2 bus info: pci@0000:00:1f.2 logical name: scsi0 version: 02 width: 32 bits clock: 66MHz capabilities: ide pm bus_master cap_list emulated configuration: driver=ata_piix latency=0 resources: irq:19 ioport:1f0(size=8) ioport:3f6 ioport:170(size=8) ioport:376 ioport:ff90(size=16) memory:80200000-802003ff *-disk description: ATA Disk product: WDC WD10TPVT-11U vendor: Western Digital physical id: 0.0.0 bus info: scsi@0:0.0.0 logical name: /dev/sda version: 01.0 serial: WD-WXC1A80P0314 size: 931GiB (1TB) capabilities: partitioned partitioned:dos configuration: ansiversion=5 signature=00088c47 *-volume:0 description: EXT4 volume vendor: Linux physical id: 1 bus info: scsi@0:0.0.0,1 logical name: /dev/sda1 logical name: /media/private version: 1.0 serial: 042daf2d-350c-4640-a76a-4554c9d98c59 size: 300GiB capacity: 300GiB capabilities: primary journaled extended_attributes large_files huge_files dir_nlink recover extents ext4 ext2 initialized configuration: created=2011-11-06 11:05:03 filesystem=ext4 label=Private lastmountpoint=/media/private modified=2012-04-13 20:01:16 mount.fstype=ext4 mount.options=rw,relatime,barrier=1,stripe=1,data=ordered mounted=2012-04-13 20:01:16 state=mounted *-volume:1 description: Extended partition physical id: 2 bus info: scsi@0:0.0.0,2 logical name: /dev/sda2 size: 625GiB capacity: 625GiB 
capabilities: primary extended partitioned partitioned:extended *-logicalvolume:0 description: Linux filesystem partition physical id: 5 logical name: /dev/sda5 logical name: /media/storage capacity: 600GiB configuration: mount.fstype=ext4 mount.options=rw,relatime,barrier=1,stripe=1,data=ordered state=mounted *-logicalvolume:1 description: Linux filesystem partition physical id: 6 logical name: /dev/sda6 logical name: /media/dropbox capacity: 24GiB configuration: mount.fstype=ext4 mount.options=rw,relatime,barrier=1,stripe=1,data=ordered state=mounted *-volume:2 description: EXT4 volume vendor: Linux physical id: 3 bus info: scsi@0:0.0.0,3 logical name: /dev/sda3 logical name: /media/www version: 1.0 serial: 9b0a27b4-05d8-40d5-bfc7-4aeba198db7b size: 2570MiB capacity: 2570MiB capabilities: primary journaled extended_attributes large_files huge_files dir_nlink recover extents ext4 ext2 initialized configuration: created=2011-11-06 11:05:11 filesystem=ext4 label=www lastmountpoint=/media/www modified=2012-04-15 11:31:12 mount.fstype=ext4 mount.options=rw,relatime,barrier=1,stripe=1,data=ordered mounted=2012-04-15 11:31:12 state=mounted *-volume:3 description: Linux swap volume physical id: 4 bus info: scsi@0:0.0.0,4 logical name: /dev/sda4 version: 1 serial: 6ed1130e-3aad-4fa6-890b-77e729121e3b size: 4098MiB capacity: 4098MiB capabilities: primary nofs swap initialized configuration: filesystem=swap pagesize=4096 *-serial UNCLAIMED description: SMBus product: N10/ICH 7 Family SMBus Controller vendor: Intel Corporation physical id: 1f.3 bus info: pci@0000:00:1f.3 version: 02 width: 32 bits clock: 33MHz configuration: latency=0 resources: ioport:400(size=32) *-scsi physical id: 1 bus info: usb@1:4 logical name: scsi2 capabilities: emulated scsi-host configuration: driver=usb-storage *-disk description: SCSI Disk physical id: 0.0.0 bus info: scsi@2:0.0.0 logical name: /dev/sdb size: 3864MiB (4051MB) capabilities: partitioned partitioned:dos configuration: signature=000b4c55 *-volume description: EXT4 volume vendor: Linux physical id: 1 bus info: scsi@2:0.0.0,1 logical name: /dev/sdb1 logical name: / version: 1.0 serial: 33926e39-4685-4f63-b83c-f2a67824b69a size: 3862MiB capacity: 3862MiB capabilities: primary bootable journaled extended_attributes large_files huge_files dir_nlink recover extents ext4 ext2 initialized configuration: created=2011-10-11 14:03:46 filesystem=ext4 lastmountpoint=/ modified=2012-03-19 11:47:29 mount.fstype=ext4 mount.options=rw,noatime,errors=remount-ro,barrier=1,data=ordered mounted=2012-04-15 11:31:11 state=mounted rfkill list all Doesnt show anything! dmesg | grep -i firmware [ 0.715481] pci 0000:00:1f.0: [Firmware Bug]: TigerPoint LPC.BM_STS cleared
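
    A hedged reading of the error above: the make line expands to M=/home/minime/RTL 8188cus/..., and the space inside "RTL 8188cus" splits the path in two, which is why make then complains about the orphaned half `8188cus/Linux/...`. A sketch of the workaround, assuming install.sh sits where it was originally run from:

        # Move the driver tree to a path without spaces, then retry
        cd ~
        mv "RTL 8188cus" RTL8188cus
        cd RTL8188cus
        sudo bash install.sh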


  • Unity3D: How to make the camera focus a moving game object with ITween?

    - by nathan
    I'm trying to write a solar system with Unity3D. Planets are sphere game objects rotating around another sphere game object representing the star. What I want to achieve is to let the user click on a planet, zoom the camera in on this planet, and then make the camera follow it and keep it centered on the screen while it keeps moving around the star. I decided to use the iTween library, and so far I was able to create the zoom effect using iTween.MoveUpdate. My problem is that the focused planet does not stay properly centered as it moves. Here is the relevant part of my script:

        void Update () {
            if (Input.GetButtonDown("Fire1")) {
                Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
                RaycastHit hit;
                if (Physics.Raycast(ray, out hit, Mathf.Infinity, concernedLayers)) {
                    selectedPlanet = hit.collider.gameObject;
                }
            }
        }

        void LateUpdate() {
            if (selectedPlanet != null) {
                Vector3 pos = selectedPlanet.transform.position;
                pos.z = selectedPlanet.transform.position.z - selectedPlanet.transform.localScale.z;
                pos.y = selectedPlanet.transform.position.y;
                iTween.MoveUpdate(Camera.main.gameObject, pos, 2);
            }
        }

    What do I need to add to this script to make the selected planet stay centered on the screen? I hosted my current project as a webplayer application so you can see what's going wrong. You can access it here.


  • Linux LVM snapshot commit or revert?

    - by Shewfig
    Hi, I'm about to perform an experimental upgrade on my CentOS 5 server. If the upgrade fails, I want to be able to back out the changes to the filesystem. This scenario seems similar to the example in Section 3.8 of the LVM HOWTO for LVM2 read-write snapshots - but the example is rather lacking in actual how-to.

    1) How would I commit the changes, merging them back into the original partition?
    2) How would I revert the changes, restoring the filesystem back to its original state? Should I assume that I'll need to restart several services, if not outright reboot?
    3) Is it possible to snapshot only certain directories on a partition, or is it a partition-wide operation?

    Thanks...
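
    A hedged sketch of the usual answers, with vg0/lv_root as placeholder names; note that merging (lvconvert --merge) needs a reasonably recent lvm2 and kernel, so verify it exists on CentOS 5 before relying on it:

        # Snapshot before the experiment; size must hold the changes
        lvcreate --size 5G --snapshot --name lv_snap /dev/vg0/lv_root

        # COMMIT: the origin already holds the upgraded state, so
        # simply drop the snapshot
        lvremove /dev/vg0/lv_snap

        # REVERT: merge the snapshot back; the origin returns to its
        # state at snapshot time (finishes at next activation/reboot
        # if the origin is in use)
        lvconvert --merge /dev/vg0/lv_snap

    On question 3: snapshots cover whole logical volumes, not directories, so per-directory rollback requires those directories to live on their own LV.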


  • What's the best self-tracking software for Linux?

    - by trench
    I'm looking for a way to track myself and receive quality data upon which I can write future scripts/programs. For example, I use Google Reader a lot. I'd like to track the hrefs that garner my clicks. Further, I'd like to drop all of the words of each href into a database where they can be stacked in a hierarchical manner. At the end of the week I want to know that "Ubuntu" garnered 448 clicks and "Cheetos" garnered 2. :) That's just one example... I'd like this tracking and data-collecting to extend beyond my browser. I know writing something to do this myself wouldn't be too awfully difficult but if something already exists I'd happily use it. Thanks in advance. Primary OS: Ubuntu 10.04
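
    As a hedged DIY starting point rather than a finished tool, assuming Firefox (places.sqlite and moz_places are Firefox history internals; the profile path is a placeholder): word frequencies can be pulled straight out of the browser history.

        # Count word frequency across all URLs in Firefox history
        sqlite3 ~/.mozilla/firefox/PROFILE/places.sqlite \
            "SELECT url FROM moz_places;" |
          tr -cs '[:alnum:]' '\n' |
          tr '[:upper:]' '[:lower:]' |
          sort | uniq -c | sort -rn | head -20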

