Search Results


  • NEC Corporation uPD720200 USB 3.0 controller doesn't run at full speed

    - by Radek Zyskowski
    I have a fresh install of Ubuntu 10.10 and an external HD on USB 3.0 that I am trying to connect through a PCI Express NEC controller. dmesg shows:

    ```
    [ 8966.820078] usb 6-3: new high speed USB device using xhci_hcd and address 0
    [ 8966.839831] xhci_hcd 0000:02:00.0: WARN: short transfer on control ep
    [ 8966.840580] xhci_hcd 0000:02:00.0: WARN: short transfer on control ep
    [ 8966.841329] xhci_hcd 0000:02:00.0: WARN: short transfer on control ep
    [ 8966.842079] xhci_hcd 0000:02:00.0: WARN: short transfer on control ep
    [ 8966.843343] scsi8 : usb-storage 6-3:1.0
    [ 8967.847144] scsi 8:0:0:0: Direct-Access SAMSUNG HD204UI 1AQ1 PQ: 0 ANSI: 5
    [ 8967.847589] sd 8:0:0:0: Attached scsi generic sg2 type 0
    [ 8967.847923] sd 8:0:0:0: [sdb] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB)
    [ 8967.848341] xhci_hcd 0000:02:00.0: WARN: Stalled endpoint
    [ 8967.850959] sd 8:0:0:0: [sdb] Write Protect is off
    [ 8967.850963] sd 8:0:0:0: [sdb] Mode Sense: 23 00 00 00
    [ 8967.850966] sd 8:0:0:0: [sdb] Assuming drive cache: write through
    [ 8967.851818] xhci_hcd 0000:02:00.0: WARN: Stalled endpoint
    [ 8967.852365] sd 8:0:0:0: [sdb] Assuming drive cache: write through
    [ 8967.852370] sdb: sdb1
    [ 8967.871315] xhci_hcd 0000:02:00.0: WARN: Stalled endpoint
    [ 8967.871853] sd 8:0:0:0: [sdb] Assuming drive cache: write through
    [ 8967.871856] sd 8:0:0:0: [sdb] Attached SCSI disk
    [ 8967.950728] xhci_hcd 0000:02:00.0: WARN: Stalled endpoint
    [ 8967.951355] sd 8:0:0:0: [sdb] Sense Key : Recovered Error [current] [descriptor]
    [ 8967.951361] Descriptor sense data with sense descriptors (in hex):
    [ 8967.951363]         72 01 04 1d 00 00 00 0e 09 0c 00 00 00 00 00 00
    [ 8967.951375]         00 00 00 00 00 50
    [ 8967.951380] sd 8:0:0:0: [sdb] ASC=0x4 ASCQ=0x1d
    [ 8968.790076] xhci_hcd 0000:02:00.0: HC died; cleaning up
    [ 8968.790076] usb 6-3: USB disconnect, address 2
    [ 8999.008554] scsi 8:0:0:0: [sdb] Unhandled error code
    [ 8999.008558] scsi 8:0:0:0: [sdb] Result: hostbyte=DID_TIME_OUT driverbyte=DRIVER_OK
    [ 8999.008562] scsi 8:0:0:0: [sdb] CDB: Read(10): 28 00 74 70 97 39 00 00 3e 00
    [ 8999.008573] end_request: I/O error, dev sdb, sector 1953535801
    [ 8999.008578] Buffer I/O error on device sdb1, logical block 1953535738
    [ 8999.008582] Buffer I/O error on device sdb1, logical block 1953535739
    [ 8999.008585] Buffer I/O error on device sdb1, logical block 1953535740
    [ 8999.008589] Buffer I/O error on device sdb1, logical block 1953535741
    [ 8999.008592] Buffer I/O error on device sdb1, logical block 1953535742
    [ 8999.008595] Buffer I/O error on device sdb1, logical block 1953535743
    [ 8999.008600] Buffer I/O error on device sdb1, logical block 1953535744
    [ 8999.008603] Buffer I/O error on device sdb1, logical block 1953535745
    [ 8999.008606] Buffer I/O error on device sdb1, logical block 1953535746
    [ 8999.008609] Buffer I/O error on device sdb1, logical block 1953535747
    [ 8999.008642] scsi 8:0:0:0: rejecting I/O to offline device
    [ 8999.008747] scsi 8:0:0:0: [sdb] Unhandled error code
    [ 8999.008749] scsi 8:0:0:0: [sdb] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
    [ 8999.008752] scsi 8:0:0:0: [sdb] CDB: Read(10): 28 00 74 70 97 77 00 00 3e 00
    [ 8999.008760] end_request: I/O error, dev sdb, sector 1953535863
    ```

    sudo lspci -v shows:

    ```
    02:00.0 USB Controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 03) (prog-if 30)
            Physical Slot: 32
            Flags: bus master, fast devsel, latency 0, IRQ 16
            Memory at fe9fe000 (64-bit, non-prefetchable) [size=8K]
            Capabilities: [50] Power Management version 3
            Capabilities: [70] MSI: Enable- Count=1/8 Maskable- 64bit+
            Capabilities: [90] MSI-X: Enable- Count=8 Masked-
            Capabilities: [a0] Express Endpoint, MSI 00
            Capabilities: [100] Advanced Error Reporting
            Capabilities: [140] Device Serial Number ff-ff-ff-ff-ff-ff-ff-ff
            Capabilities: [150] #18
            Kernel driver in use: xhci_hcd
            Kernel modules: xhci-hcd
    ```

    If I plug any USB 2.0 device into this controller, it works fine, but USB 3.0 devices do not. Any idea?

    Read the article

  • JSR 360 and JSR 361: A Big Leap for Java ME 8

    - by terrencebarr
    It might have gone unnoticed by some, but Java ME took a big leap forward a couple of weeks ago with the filing of two new JSRs:

    - JSR 360: "Connected Limited Device Configuration 8" (aka CLDC 8)
    - JSR 361: "Java ME Embedded Profile" (aka ME EP)

    Together, these two JSRs will significantly update, enhance, and modernize the Java ME platform, and specifically small embedded Java, with a host of new features and functionality.

    JSR 360 – Connected Limited Device Configuration 8

    CLDC 8 is based on JSR 139 (CLDC 1.1) and updates the core Java ME VM, language support, libraries, and features to align with Java SE 8. This will include:

    - A VM updated to comply with the JVM language specification version 2
    - Support for SE 7/8 language features like Generics, Assertions, Annotations, Try-with-Resources, and more
    - New libraries such as Collections, an NIO subset, and a Logging API subset
    - A consolidated and enhanced Generic Connection Framework for multi-protocol I/O

    With CLDC 8, Java ME and Java SE are entering their next phase of alignment, making Java the only technology today that truly scales application development, code re-use, and tooling across the whole range of IT platforms, from small embedded to large enterprise.

    JSR 361 – Java ME Embedded Profile

    ME EP is based on JSR 228 (IMP-NG) and updates the specification in key areas to provide a powerful and flexible application environment for small embedded Java platforms, building on the features of CLDC 8:

    - A new, lightweight component and services model
    - Shared libraries
    - Multi-application concurrency, inter-application communication, and an event system
    - Application management
    - API optionality, to address low-footprint use cases

    With ME EP, application developers will have a modern application environment that allows development and deployment of modular, robust, sophisticated, and footprint-optimized solutions for a wide range of embedded use cases and devices.

    Summary

    While these JSRs are still under development, it's clear that there are exciting new times ahead for Java ME: it is turning into a serious application platform while maintaining its focus on resource-constrained devices, to address the expected explosion of small, smart, and connected embedded platforms. To learn more, click on the above links for JSR 360 and JSR 361, or review the JavaOne 2012 online presentations on the topic:

    - CON11300: Expanding the reach of the Java ME Platform
    - CON5943: Java ME 8 Service Platform

    And stay tuned for more in this space! Cheers, – Terrence

    Filed under: Mobile & Embedded Tagged: "jsr 360", "jsr 361", "me 8", embedded, Embedded Java, JCP

    Read the article

  • MVVM Light V4 preview 2 (BL0015) #mvvmlight

    - by Laurent Bugnion
    Over the past few weeks, I have worked hard on a few new features for MVVM Light V4. Here is a second early preview (consider this pre-alpha if you wish). The features are unit-tested, but I am now looking for feedback, and there might be bugs!

    Bug correction: Messenger.CleanupList is now thread safe

    This was an annoying bug that is now corrected: in some circumstances, an exception could be thrown when the Messenger's recipients list was cleaned up (i.e. the "dead" instances were removed). The method is called now and then, and the exception was thrown apparently at random. In fact, it was really a multi-threading issue, which is now corrected.

    Bug correction: AllowPartiallyTrustedCallers prevents EventToCommand from working

    This is a particularly annoying regression bug that was introduced in BL0014. In order to allow MVVM Light to work in XBAPs too, I added the AllowPartiallyTrustedCallers attribute to the assemblies. However, we just found out that this causes issues when using EventToCommand. In order to allow EventToCommand to continue working, I reverted to the previous state by removing the AllowPartiallyTrustedCallers attribute for now. I will work with my friends at Microsoft to try and find a solution. Stay tuned.

    Bug correction: XML documentation file is now generated in Release configuration

    The XML documentation file was not generated for the Release configuration. This was a simple flag in the project file that I had forgotten to set. This is corrected now.

    Applying EventToCommand to non-FrameworkElements

    This feature was requested in order to execute a command when a Storyboard completes. I implemented it, but unfortunately found out that EventToCommand can only be added to Storyboards in Silverlight 3 and Silverlight 4, not in WPF or in Windows Phone 7. This obviously limits the usefulness of the change, but I decided to publish it anyway, because it is pretty damn useful in Silverlight...

    Why not in WPF? In WPF, Storyboards added to a resource dictionary are frozen. This is a feature of WPF that makes it possible to optimize certain objects for performance: by freezing them, we enter a contract that says "this object will not be modified anymore, so do your perf optimizations on it without worrying too much". Unfortunately, adding a Trigger (such as EventTrigger) to an object in resources does not work if this object is frozen... and unfortunately, there is no way to tell WPF not to freeze the Storyboard in the resources... so there is no way around that (at least none I can see). In Silverlight, objects are not frozen, so an EventTrigger can be added without problems.

    Why not in WP7? In Windows Phone 7, there is a totally different issue: a Trigger can only be added to a FrameworkElement, which Storyboard is not. Here I think we might see a change in a future version of the framework, so maybe this small trick will work in the future.

    Workaround? Since you cannot use EventToCommand on a Storyboard in WPF and in WP7, the workaround is pretty obvious: handle the Completed event in the code-behind, and call the Command on the ViewModel from there (a short sketch follows at the end of this post). The ViewModel can be obtained by casting the DataContext to the ViewModel type. This means that the View needs to know about the ViewModel, but I never had issues with that anyway.

    New class: NotifyPropertyChanged

    Sometimes when you implement a model object (for example Customer), you would like it to implement INotifyPropertyChanged, but without all the frills of a ViewModelBase. A new class named NotifyPropertyChanged allows you to do just that. This class is a simple implementation of INotifyPropertyChanged (with all the overloads of RaisePropertyChanged that were implemented in BL0014). In fact, ViewModelBase now inherits from NotifyPropertyChanged.

    ViewModelBase does not implement IDisposable anymore

    The IDisposable interface and the Dispose method had already been marked obsolete in the ViewModelBase class in V3. Now they have been removed. Note: by this, I do not mean that IDisposable is a bad interface, or that it shouldn't be used on viewmodels. On the contrary, I know that this interface is very useful in certain circumstances. However, I think that having it by default on every instance of ViewModelBase was sending the wrong message. This interface has a strong meaning in .NET: after Dispose has been executed, the instance should not be used anymore and should be ready for garbage collection. What I really wanted on ViewModelBase was a simple cleanup method, something that can be executed now and then during runtime. This is fulfilled by the ICleanup interface and its Cleanup method. If your ViewModels need IDisposable, you can still use it! You will just have to implement the interface on the class itself, because it is not available on ViewModelBase anymore.

    What's next?

    I have a couple of exciting new features implemented already that need more testing before they go live... Just stay tuned, and by MIX11 (12-14 April 2011) we should see at least one major addition to the MVVM Light Toolkit, as well as another smaller feature which is pretty cool nonetheless. More about this later!

    Happy Coding, Laurent

    Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn
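    To make the Storyboard workaround concrete, here is a minimal code-behind sketch. The view and viewmodel names (MainView, MainViewModel) and the command name (StoryboardCompletedCommand) are placeholder names for your own types, not part of the toolkit:

    ```csharp
    // Hypothetical code-behind for the View: handle Completed and forward
    // it to a command on the ViewModel obtained from the DataContext.
    public partial class MainView
    {
        private void Storyboard_Completed(object sender, EventArgs e)
        {
            // The View casts its DataContext to the known ViewModel type.
            var vm = (MainViewModel)DataContext;

            if (vm.StoryboardCompletedCommand.CanExecute(null))
            {
                vm.StoryboardCompletedCommand.Execute(null);
            }
        }
    }
    ```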

    Read the article

  • Unification of TPL TaskScheduler and RX IScheduler

    - by JoshReuben
    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Reactive.Concurrency;
    using System.Security;
    using System.Threading;
    using System.Threading.Tasks;
    using System.Windows.Threading;

    namespace TPLRXSchedulerIntegration
    {
        public class MyScheduler : TaskScheduler, IScheduler
        {
            private readonly Dispatcher _dispatcher;
            private readonly DispatcherScheduler _rxDispatcherScheduler;
            //private readonly TaskScheduler _tplDispatcherScheduler;
            private readonly SynchronizationContext _synchronizationContext;

            public MyScheduler(Dispatcher dispatcher)
            {
                _dispatcher = dispatcher;
                _rxDispatcherScheduler = new DispatcherScheduler(dispatcher);
                //_tplDispatcherScheduler = FromCurrentSynchronizationContext();
                _synchronizationContext = SynchronizationContext.Current;
            }

            #region RX

            public DateTimeOffset Now
            {
                get { return _rxDispatcherScheduler.Now; }
            }

            public IDisposable Schedule<TState>(TState state, DateTimeOffset dueTime, Func<IScheduler, TState, IDisposable> action)
            {
                return _rxDispatcherScheduler.Schedule(state, dueTime, action);
            }

            public IDisposable Schedule<TState>(TState state, TimeSpan dueTime, Func<IScheduler, TState, IDisposable> action)
            {
                return _rxDispatcherScheduler.Schedule(state, dueTime, action);
            }

            public IDisposable Schedule<TState>(TState state, Func<IScheduler, TState, IDisposable> action)
            {
                return _rxDispatcherScheduler.Schedule(state, action);
            }

            #endregion

            #region TPL

            /// Simply posts the tasks to be executed on the associated SynchronizationContext
            [SecurityCritical]
            protected override void QueueTask(Task task)
            {
                _dispatcher.BeginInvoke((Action)(() => TryExecuteTask(task)));
                //TryExecuteTaskInline(task, false);
                //task.Start(_tplDispatcherScheduler);
                //m_synchronizationContext.Post(s_postCallback, (object)task);
            }

            /// The task will be executed inline only if the call happens within the associated SynchronizationContext
            [SecurityCritical]
            protected override bool TryExecuteTaskInline(Task task, bool taskWasPreviouslyQueued)
            {
                if (SynchronizationContext.Current != _synchronizationContext)
                {
                    SynchronizationContext.SetSynchronizationContext(_synchronizationContext);
                }
                return TryExecuteTask(task);
            }

            // not implemented
            [SecurityCritical]
            protected override IEnumerable<Task> GetScheduledTasks()
            {
                return null;
            }

            /// Implements the MaximumConcurrencyLevel property for this scheduler class.
            /// By default it returns 1, because a <see cref="T:System.Threading.SynchronizationContext"/> based
            /// scheduler only supports execution on a single thread.
            public override Int32 MaximumConcurrencyLevel
            {
                get { return 1; }
            }

            //// preallocated SendOrPostCallback delegate
            //private static SendOrPostCallback s_postCallback = new SendOrPostCallback(PostCallback);

            //// this is where the actual task invocation occurs
            //private static void PostCallback(object obj)
            //{
            //    Task task = (Task) obj;
            //    // calling ExecuteEntry with double execute check enabled because a user implemented SynchronizationContext could be buggy
            //    task.ExecuteEntry(true);
            //}

            #endregion
        }
    }
    ```

    What design pattern did I use here?
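    For context, here is a hedged usage sketch of how such a combined scheduler might be consumed; the class essentially delegates both APIs onto one Dispatcher. UpdateUi and someObservable are placeholders, and this assumes a WPF app where Application.Current.Dispatcher is available:

    ```csharp
    // Hypothetical usage: one object serves as both a TPL TaskScheduler
    // and an Rx IScheduler, marshaling work onto the same Dispatcher.
    var scheduler = new MyScheduler(Application.Current.Dispatcher);

    // TPL side: queue a task onto the dispatcher thread.
    Task.Factory.StartNew(
        () => UpdateUi(),
        CancellationToken.None,
        TaskCreationOptions.None,
        scheduler);

    // Rx side (requires System.Reactive.Linq): observe a sequence
    // on the same dispatcher thread.
    someObservable
        .ObserveOn(scheduler)
        .Subscribe(value => UpdateUi(value));
    ```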

    Read the article

  • Cannot install ia32-libs

    - by Marcos Junior
    I don't know why I can't install ia32-libs. It complains about a dependency that cannot be found in the repos.

    ```
    junior@mediacenter:~$ sudo apt-get install ia32-libs
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Some packages could not be installed. This may mean that you have
    requested an impossible situation or if you are using the unstable
    distribution that some required packages have not yet been created
    or been moved out of Incoming.
    The following information may help to resolve the situation:

    The following packages have unmet dependencies:
     ia32-libs : Depends: ia32-libs-multiarch
    E: Unable to correct problems, you have held broken packages.

    junior@mediacenter:~$ sudo apt-get install ia32-libs-multiarch
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Some packages could not be installed. This may mean that you have
    requested an impossible situation or if you are using the unstable
    distribution that some required packages have not yet been created
    or been moved out of Incoming.
    The following information may help to resolve the situation:

    The following packages have unmet dependencies:
     ia32-libs-multiarch:i386 : Depends: gstreamer0.10-plugins-good:i386 but it is not going to be installed
                                Depends: gtk2-engines:i386 but it is not going to be installed
                                Depends: gtk2-engines-murrine:i386 but it is not going to be installed
                                Depends: gtk2-engines-pixbuf:i386 but it is not going to be installed
                                Depends: gtk2-engines-oxygen:i386 but it is not going to be installed
                                Depends: ibus-gtk:i386 but it is not going to be installed
                                Depends: libcanberra-gtk-module:i386 but it is not going to be installed
                                Depends: libcurl3:i386 but it is not going to be installed
                                Depends: libgail-common:i386 but it is not going to be installed
                                Depends: libglapi-mesa:i386 but it is not going to be installed
                                Depends: libglu1-mesa:i386 but it is not going to be installed
                                Depends: libgtk2.0-0:i386 but it is not going to be installed
                                Depends: libqt4-opengl:i386 but it is not going to be installed
                                Depends: librsvg2-common:i386 but it is not going to be installed
                                Recommends: libgl1-mesa-glx:i386 but it is not going to be installed
                                Recommends: libgl1-mesa-dri:i386 but it is not going to be installed
    E: Unable to correct problems, you have held broken packages.
    ```

    I'm running Ubuntu Precise:

    ```
    junior@mediacenter:~$ uname -a
    Linux mediacenter 3.2.0-24-generic #37-Ubuntu SMP Wed Apr 25 08:43:22 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
    ```

    Synaptic's "fix broken packages" does nothing. Any tips? Thanks. I need this package to install other apps like TeamViewer 7.

    Read the article

  • Built-in network card not working?

    - by Zeeshan
    Hi, I am new to Ubuntu. I have installed Ubuntu 9.04 (Jaunty). After installation I found that the network card is not working, and it does not show up in "System > Preferences > Network Connections". So I got another card from my friend and tried searching the internet about my problem, but I still can't find a solution. Here is some command output that may help:

    ```
    root@mzeeshan-desktop:/home/mzeeshan# uname -r
    2.6.28-11-generic
    root@mzeeshan-desktop:/home/mzeeshan# ifconfig -a
    eth0      Link encap:Ethernet  HWaddr 00:02:44:4a:45:12
              inet addr:192.168.5.37  Bcast:192.168.5.255  Mask:255.255.255.0
              inet6 addr: fe80::202:44ff:fe4a:4512/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:3774 errors:0 dropped:0 overruns:0 frame:0
              TX packets:3611 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:4307045 (4.3 MB)  TX bytes:583067 (583.0 KB)
              Interrupt:22 Base address:0x1000

    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:4 errors:0 dropped:0 overruns:0 frame:0
              TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:240 (240.0 B)  TX bytes:240 (240.0 B)

    pan0      Link encap:Ethernet  HWaddr 5e:25:17:a1:18:ac
              BROADCAST MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

    root@mzeeshan-desktop:/home/mzeeshan# lspci
    00:00.0 Host bridge: Intel Corporation Device 0069 (rev 12)
    00:01.0 PCI bridge: Intel Corporation Auburndale/Havendale PCI Express x16 Root Port (rev 12)
    00:19.0 Ethernet controller: Intel Corporation Device 10f0 (rev 05)
    00:1a.0 USB Controller: Intel Corporation Ibex Peak USB2 Enhanced Host Controller (rev 05)
    00:1c.0 PCI bridge: Intel Corporation Ibex Peak PCI Express Root Port 1 (rev 05)
    00:1c.4 PCI bridge: Intel Corporation Ibex Peak PCI Express Root Port 5 (rev 05)
    00:1c.6 PCI bridge: Intel Corporation Ibex Peak PCI Express Root Port 7 (rev 05)
    00:1c.7 PCI bridge: Intel Corporation Ibex Peak PCI Express Root Port 8 (rev 05)
    00:1d.0 USB Controller: Intel Corporation Ibex Peak USB2 Enhanced Host Controller (rev 05)
    00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev a5)
    00:1f.0 ISA bridge: Intel Corporation Ibex Peak LPC Interface Controller (rev 05)
    00:1f.2 IDE interface: Intel Corporation Ibex Peak 4 port SATA IDE Controller (rev 05)
    00:1f.3 SMBus: Intel Corporation Ibex Peak SMBus Controller (rev 05)
    00:1f.5 IDE interface: Intel Corporation Ibex Peak 2 port SATA IDE Controller (rev 05)
    01:00.0 VGA compatible controller: nVidia Corporation GeForce 8400 GS (rev a1)
    06:00.0 Multimedia audio controller: Creative Labs SB Live! EMU10k1 (rev 07)
    06:00.1 Input device controller: Creative Labs SB Live! Game Port (rev 07)
    06:01.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 10)
    06:03.0 FireWire (IEEE 1394): Texas Instruments TSB43AB22/A IEEE-1394a-2000 Controller (PHY/Link)
    ```

    The motherboard is an Intel DP55WG. I don't know what to do next. Any help will be greatly appreciated. Thanks.

    Read the article

  • What does your Python development workbench look like?

    - by Fabian Fagerholm
    First, a scene-setter to this question: several questions on this site have to do with selection and comparison of Python IDEs (the top one currently is "What IDE to use for Python"). In the answers you can see that many Python programmers use simple text editors, many use sophisticated text editors, and many use a variety of what I would call "actual" integrated development environments – a single program in which all development is done: managing project files, interfacing with a version control system, writing code, refactoring code, making build configurations, writing and executing tests, "drawing" GUIs, and so on. Through its GUI, an IDE supports different kinds of workflows to accomplish different tasks during the journey of writing a program or making changes to an existing one. The exact features vary, but a good IDE has sensible workflows and automates things to let the programmer concentrate on the creative parts of writing software.

    The non-IDE way of writing large programs relies on a collection of tools that are typically single-purpose; they do "one thing well" as per the Unix philosophy. This "non-integrated development environment" can be thought of as a workbench, supported by the OS and generic interaction through a text or graphical shell. The programmer creates workflows in their mind (or in a wiki?), automates parts, and builds a personal workbench, often gradually and as experience accumulates. The learning curve is often steeper than with an IDE, but those who have taken the time to do this can often claim deeper understanding of their tools. (Whether they are better programmers is not part of this question.) With advanced editor-platforms like Emacs, the pieces can be integrated into a whole, while with simpler editors like gedit or TextMate, the shell/terminal is typically the "command center" to drive the workbench. Sometimes people extend an existing IDE to suit their needs.

    What does your Python development workbench look like? What workflows have you developed, and how do they work?

    For the first question, please give:

    - the main "driving" program – the one that you use to control the rest (Emacs, shell, etc.)
    - the "small tools" – the programs you reach for when doing different tasks

    For the second question, please describe:

    - what the goal of the workflow is (e.g. "set up a new project", "doing initial code design", "adding a feature", or "executing tests")
    - what steps are in the workflow and what commands you run for each step (e.g. in the shell or in Emacs)

    Also, please describe the context of your work: do you write small one-off scripts, do you do web development (with what framework?), do you write data-munching applications (what kind of data and for what purpose?), do you do scientific computing, desktop apps, or something else?

    Note: A good answer addresses the perspectives above – it doesn't just list a bunch of tools. It will typically be a long answer, not a short one, and will take some thinking to produce; maybe even observing yourself working.

    Read the article

  • SQL SERVER – OLEDB – Link Server – Wait Type – Day 23 of 28

    - by pinaldave
    When I decided to start writing about this wait type, the very first question that came to my mind was, "What does 'OLEDB' stand for?" A quick search on Wikipedia tells me that OLEDB means Object Linking and Embedding Database. (How many of you knew this?) Anyway, I found it very interesting that this wait type was among the top 10 wait types in many of the systems I have come across in my performance tuning experience.

    Books On-Line: OLEDB occurs when SQL Server calls the SQL Server Native Client OLE DB Provider. This wait type is not used for synchronization. Instead, it indicates the duration of calls to the OLE DB provider.

    OLEDB Explanation: This wait type primarily happens when a Linked Server or Remote Query is executed. The most common case in which this wait type is visible is during the execution of a Linked Server query. When SQL Server retrieves data from the remote server, it uses the OLEDB API. It is possible that the remote system is not quick enough, or that the connection between them is not fast enough, leading SQL Server to wait for the results to return from the remote (or external) server. This is when the OLEDB wait type occurs.

    Reducing OLEDB wait:

    - Check the Linked Server configuration.
    - Check disk-related Perfmon counters:
      - Average Disk sec/Read (a value consistently higher than 4-8 milliseconds is not good)
      - Average Disk sec/Write (a value consistently higher than 4-8 milliseconds is not good)
      - Average Disk Read/Write Queue Length (a value consistently higher than your benchmark is not good)

    At this point in time, I am not able to think of any more ways of reducing this wait type. Do you have any opinion about this subject? Please share it here and I will share your comment with the rest of the Community, and of course, with due credit unto you. Please read all the posts in the Wait Types and Queue series.

    Note: The information presented here is from my experience, and there is no way that I claim it to be accurate. I suggest reading Books On-Line for further clarification. All the discussion of Wait Stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • How do I get my blacked out ttys back?

    - by con-f-use
    Original Question: After I replaced my Ubuntu 10.10 with 11.04, all I get when I Ctrl+Alt+F1-6 into a tty is a black screen. Also, when I boot, there's a while of black screen after the grub2 menu is displayed; it then stays black until just before GNOME starts. I have an Nvidia GeForce Quadro FX 770M in my HP EliteBook 8530w. How do I get my ttys (aka 'virtual terminals') to work again?

    My efforts, in chronological order:

    - So grub and gfxpayload seemed to be the problem, I figured. I went along with this guide for higher tty resolution, which led to the grub2 menu displaying in my native resolution rather than 800x600. The black screen problem remains.
    - I found some bug reports on other Nvidia cards having this problem. I tried uninstalling the nvidia driver. No effect. Also tried different resolutions.
    - With an older version of the kernel it works, though not perfectly. The ttys are usable; the black screen between the grub2 menu and GNOME start remains. Not really a solution.
    - Tried so much that I lost track.
    - Reinstalled grub2 and linux-image-2.6.38-8-generic, then did this to my /etc/default/grub in accordance with the aforementioned guide (/etc/grub.d/00_header edited as well):

    ```
    GRUB_DEFAULT=0
    GRUB_HIDDEN_TIMEOUT=0
    #GRUB_HIDDEN_TIMEOUT_QUIET=true
    GRUB_TIMEOUT=3
    GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
    #GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
    GRUB_CMDLINE_LINUX=""
    GRUB_GFXMODE=1680x1050x32
    ```

    To my surprise, I can now use my ttys in native resolution. The black screen between the grub2 menu and the GNOME login screen is still there, though. That is annoying, since I also use an encrypted disk and thus have to enter my passphrase in total darkness... Still looking for a solution, but urgency is low.

    - Downloaded and installed a later version of the nvidia driver. No difference from the last edit.
    - Tried the GRUB_CMDLINE_LINUX="vga=" parameter. No effect. nomodeset has no effect, not even in combination with vga=...
    - Tried echo FRAMEBUFFER=y > /etc/initramfs-tools/conf.d/splash. No effect (see comment).

    On the verge of resignation... Bounty period soon to end.

    Read the article

  • ATI Radeon HD 4650 AGP Video card not recognized properly

    - by PastorLarry
    I have an ASUS ATI Radeon HD 4650 AGP in this system (yeah, I know how old it is). I've been on Ubuntu since 10.04, and the system has never properly recognized the card; I have always had the VESA drivers installed. Now that I have the time to address the problem: 12.04 was listing the card as "Unknown" under System Settings. Meanwhile, Sysinfo recognizes the card as:

    ```
    Advanced Micro Devices [AMD] nee ATI RV730 Pro AGP [Radeon HD 4600 Series] (prog-if 00 [VGA controller])
    Subsystem: ASUSTeK Computer Inc. Device 0028
    ```

    So I know that this card should be using the radeon driver (or even the radeonhd driver). However, when I installed the mesa-utils package, the card was suddenly reported as:

    ```
    Gallium 0.4 on llvmpipe (LLVM 0x300)
    ```

    So now I'm completely at a loss. It seems that the llvmpipe stuff has to do with OpenGL, but it still appears that I don't have the proper video driver installed. That being said, does anyone know what I can do to force the system to recognize the card and use the radeon driver?

    [EDIT 05.28] I looked at some other information, including glxinfo and a couple of other commands (it was REALLY late, so I don't remember the other commands), and I got this:

    ```
    $ glxinfo | grep vendor
    server glx vendor string: SGI
    client glx vendor string: Mesa Project and SGI
    OpenGL vendor string: X.org
    $ glxinfo | grep renderer
    OpenGL renderer string: Gallium 0.4 on AMD RV730
    ```

    One of the other commands gave a whole lot of info and near the end stated that the activation string for the radeon driver was "modprobe radeon". I've tried that with sudo and as root, but it doesn't seem to change anything. I'm at a complete loss. I've even added the xorg-edgers PPA to my Software Sources and updated and rebooted the system, but nothing has changed. Most of all, I can't seem to find any documentation on this issue, as it seems to be assumed that the radeon driver will install automatically, no questions asked. I feel like such a newbie. Does anyone have any ideas on this?

    [edit 05.28] Results of lsmod | grep radeon (in a more readable format than the comment below):

    ```
    radeon                733693  3
    ttm                    65344  1 radeon
    drm_kms_helper         45466  1 radeon
    drm                   197692  5 radeon,ttm,drm_kms_helper
    i2c_algo_bit           13199  1 radeon
    ```

    [edit 05.29] This is my /etc/X11/xorg.conf:

    ```
    Section "ServerLayout"
        Identifier     "aticonfig Layout"
        Screen      0  "aticonfig-Screen[0]-0" 0 0
    EndSection

    Section "Module"
    EndSection

    Section "Monitor"
        Identifier     "aticonfig-Monitor[0]-0"
        Option         "VendorName" "ATI Proprietary Driver"
        Option         "ModelName" "Generic Autodetecting Monitor"
        Option         "DPMS" "true"
    EndSection

    Section "Device"
        Identifier     "aticonfig-Device[0]-0"
        Driver         "fglrx"
        BusID          "PCI:1:0:0"
    EndSection

    Section "Screen"
        Identifier     "aticonfig-Screen[0]-0"
    ```

    So here is my question: can I simply change the name of the driver in the Device section to "radeon" instead of "fglrx" and have the radeon driver work? Or is there a way to use this as a template, change the appropriate lines, and activate the radeon driver through this file?
    Read the article

  • ArchBeat Link-o-Rama for August 1, 2013

    - by OTN ArchBeat
    Performance Tuning – Systems Running BPEL Processes | Ravi Saraswathi and Jaswant Singh
    Ravi Saraswathi and Jaswant Singh, the authors of "Oracle SOA BPEL Process Manager 11gR1 - A Hands-on Tutorial", explain performance tuning of SOA composite applications for optimal performance and scalability.

    Steps to configure SAML 2.0 with Weblogic Server | Puneeth
    The blogger known only as Puneeth shares an illustrated technical post that will be of interest to those working with Oracle WebLogic and the Security Assertion Markup Language (SAML).

    Video: Planning and Getting Started - Developer PCs | Chris Muir
    Tune in to the latest episode of ADF Architecture TV to see Chris Muir explain why you don't have to buy the most expensive PCs in order to run JDeveloper.

    Key User Experience Design Principles for working with Big Data | John Fuller
    User Experience Designer John Fuller shares 6 core design principles for working with big data that focus on "helping people bring together a variety of data types in a fast and flexible way."

    Event: OTN Developer Day: ADF Mobile - Burlington, MA - Aug 28
    Through six sessions, including a hands-on workshop, you'll learn a simpler way to leverage your existing skills to develop enterprise mobile applications using Oracle ADF Mobile. Registration is free, but seating is limited.

    Optimizing WebCenter Portal Mobile Delivery | Jeevan Joseph
    FMW solution architect Jeevan Joseph "walks you through identifying and analyzing some common WebCenter Portal performance bottlenecks related to page weight and describes a generic approach that can streamline your portal while improving the performance and response times."

    Customizing specific instances of a WebCenter task flow | Jeevan Joseph
    Fusion Middleware A-Team solution architect Jeevan Joseph strikes again with this article that explains "how to set up parameters on MDS customization so that it is applied only under certain conditions...making it possible to customize individual instances of task flows."

    Exalogic Virtual Tea Break Snippets – Modifying Memory, CPU and Storage on a vServer | Andrew Hopkinson
    FMW solution architect Andrew Hopkinson walks you through "the simple process of resizing the resources associated with an already existing Exalogic vServer."

    Oracle ADF Mobile Virtual Developer Day - Next Week | Shay Shmeltzer
    JDeveloper product team lead Shay Shmeltzer shares agenda information for the OTN Virtual Developer Day event covering Mobile Application Development for iOS and Android, coming up one week from today, on August 7, 2013, 9am PT/Noon ET/1pm BRT.

    What's New In Oracle Enterprise Pack for Eclipse 12.1.2.1.0?
    New features and updates on the newly-released Oracle Enterprise Pack for Eclipse 12.1.2.1.0, now available for download from OTN.

    IOUG Cloud Builders Unite | Jeff Erickson
    Check out this great Oracle Magazine article by Jeff Erickson about IOUG members organizing around their common interest in building private clouds.

    Thought for the Day
    "Stuff that's hidden and murky and ambiguous is scary because you don't know what it does." — Jerry Garcia (August 1, 1942 – August 9, 1995)
    Source: brainyquote.com

    Read the article

  • Distribution upgrade problem "No new release found"

    - by fefe
    I'm using Ubuntu 11.04. The update manager once found the new release 'oneiric', and this screen still shows up when I log in via ssh:

    ```
    Welcome to Ubuntu 11.04 (GNU/Linux 2.6.38-14-generic x86_64)

     * Documentation: https://help.ubuntu.com/

    0 packages can be updated.
    0 updates are security updates.

    New release 'oneiric' available.
    Run 'do-release-upgrade' to upgrade to it.

    Last login: Wed Apr 25 16:22:48 2012 from ***
    ```

    But I didn't upgrade then, and I changed my apt sources. Now I cannot upgrade to 'oneiric'. do-release-upgrade shows:

    ```
    $ sudo do-release-upgrade
    Checking for a new ubuntu release
    No new release found
    $
    ```

    And apt-get dist-upgrade shows:

    ```
    $ sudo apt-get dist-upgrade
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Calculating upgrade... Done
    0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
    $
    ```

    I can successfully update all my packages. File contents of sources.list:

    ```
    $ cat /etc/apt/sources.list
    ## See sources.list(5) for more information, especialy
    # Remember that you can only use http, ftp or file URIs
    deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ natty main universe restricted multiverse
    deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ natty main universe restricted multiverse
    deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ natty-security universe main multiverse restricted
    deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ natty-security universe main multiverse restricted
    deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ natty-updates universe main multiverse restricted
    deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ natty-updates universe main multiverse restricted
    deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ natty-backports universe main multiverse restricted
    deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ natty-backports universe main multiverse restricted
    # deb http://ubuntu.dormforce.net/ubuntu/ lucid main universe restricted multiverse
    # deb-src http://ubuntu.dormforce.net/ubuntu/ lucid main universe restricted multiverse
    # deb http://ubuntu.dormforce.net/ubuntu/ lucid-security universe main multiverse restricted
    # deb-src http://ubuntu.dormforce.net/ubuntu/ lucid-security universe main multiverse restricted
    # deb http://ubuntu.dormforce.net/ubuntu/ lucid-updates universe main multiverse restricted
    # deb-src http://ubuntu.dormforce.net/ubuntu/ lucid-updates universe main multiverse restricted
    # CDROMs are managed through the apt-cdrom tool.
    # deb http://archive.canonical.com lucid partner
    # deb http://archive.canonical.com lucid-security partner
    # deb http://archive.canonical.com lucid-updates partner
    # deb-src http://archive.canonical.com lucid partner
    # deb-src http://archive.canonical.com lucid-security partner
    # deb-src http://archive.canonical.com lucid-updates partner
    #medibuntu repo
    # deb http://packages.medibuntu.org/ lucid free non-free
    # deb-src http://packages.medibuntu.org/ lucid free non-free
    # deb http://extras.ubuntu.com/ubuntu maverick main #Third party developers repository
    deb http://mirrors.sohu.com/ubuntu/ natty main restricted multiverse universe
    deb-src http://mirrors.sohu.com/ubuntu/ natty main universe restricted multiverse
    #Added by software-properties
    deb http://security.ubuntu.com/ubuntu/ natty-security universe main multiverse restricted
    deb-src http://mirrors.sohu.com/ubuntu/ natty-security universe main multiverse restricted
    deb http://mirrors.sohu.com/ubuntu/ natty-updates universe main multiverse restricted
    deb-src http://mirrors.sohu.com/ubuntu/ natty-updates universe main multiverse restricted
    ```

    File contents of /etc/update-manager/meta-release:

    ```
    $ cat /etc/update-manager/meta-release
    # default location for the meta-release file
    [METARELEASE]
    URI = http://changelogs.ubuntu.com/meta-release
    URI_LTS = http://changelogs.ubuntu.com/meta-release-lts
    URI_UNSTABLE_POSTFIX = -development
    URI_PROPOSED_POSTFIX = -proposed
    ```

    What may be the problem?

    Read the article

  • Atheros AR9285 / Lenovo G560 wireless not working after installing 13.04

    - by teyi
    I had Ubuntu 12.04 initially installed on my laptop, and I upgraded to 12.10, then 13.04. Everything worked fine, including wireless. After adding a new memory card (I only had 2 GB and one memory slot free), my wireless stopped working. I backed up all my data and reinstalled Ubuntu 13.04. Everything works fine except wireless. I bought this laptop in 2010 from Japan. It has an Intel Core i5 CPU M 450 @ 2.40 GHz x 4, 3.7 GB RAM, OS type 64-bit.

    The output of iwconfig:

    ```
    eth0      no wireless extensions.

    lo        no wireless extensions.

    wlan0     IEEE 802.11bgn  ESSID:off/any
              Mode:Managed  Access Point: Not-Associated   Tx-Power=15 dBm
              Retry long limit:7   RTS thr:off   Fragment thr:off
              Power Management:off
    ```

    The output of rfkill list all:

    ```
    0: ideapad_wlan: Wireless LAN
            Soft blocked: no
            Hard blocked: no
    1: phy0: Wireless LAN
            Soft blocked: no
            Hard blocked: no
    ```

    The output of lshw -C network:

    ```
      *-network
           description: Wireless interface
           product: AR9285 Wireless Network Adapter (PCI-Express)
           vendor: Atheros Communications Inc.
           physical id: 0
           bus info: pci@0000:05:00.0
           logical name: wlan0
           version: 01
           serial: 78:e4:00:7d:fe:fa
           width: 64 bits
           clock: 33MHz
           capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
           configuration: broadcast=yes driver=ath9k driverversion=3.8.0-19-generic firmware=N/A latency=0 link=no multicast=yes wireless=IEEE 802.11bgn
           resources: irq:17 memory:d6400000-d640ffff
      *-network
           description: Ethernet interface
           product: RTL8101E/RTL8102E PCI Express Fast Ethernet controller
           vendor: Realtek Semiconductor Co., Ltd.
           physical id: 0
           bus info: pci@0000:06:00.0
           logical name: eth0
           version: 02
           serial: 88:ae:1d:2b:36:ac
           size: 100Mbit/s
           capacity: 100Mbit/s
           width: 64 bits
           clock: 33MHz
           capabilities: pm msi pciexpress msix vpd bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation
           configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full ip=192.168.2.2 latency=0 link=yes multicast=yes port=MII speed=100Mbit/s
           resources: irq:41 ioport:2000(size=256) memory:d2410000-d2410fff memory:d2400000-d240ffff memory:d2420000-d243ffff
    ```

    The Wi-Fi network appears as disconnected (it's greyed out). Strangely enough, I see one Wi-Fi network (not mine), but not mine or the rest. That network doesn't require a password. I click on it, try to connect, and I get an error message: failed to connect to xxxxx ... (32) The access point /org/freedesktop/NetworkManager/AccessPoint/0 was not in the scan list. Someone help please.

    Read the article

  • SQL SERVER – How to Force New Cardinality Estimation or Old Cardinality Estimation

    - by Pinal Dave
    After reading my initial two blog posts on New Cardinality Estimation, I received quite a few questions. Once I received them, I felt I should have clarified a few things earlier, when I started to write about cardinality. Before continuing this blog, if you have not read them before, I suggest you read the following two blog posts: SQL SERVER – Simple Demo of New Cardinality Estimation Features of SQL Server 2014, and SQL SERVER – Cardinality Estimation and Performance – SQL in Sixty Seconds #072.

    Q: Will the new cardinality estimation improve the performance of all of my queries?
    A: Remember, there is no 0-or-1 logic when it is about estimation. The general assumption is that most queries will benefit from the new cardinality estimation introduced in SQL Server 2014. That is why the generic advice is to set the compatibility level of the database to 120, which is for SQL Server 2014.

    Q: Is it possible that after changing cardinality estimation to the new logic, by setting the compatibility level to 120, I get degraded performance for a few queries?
    A: Yes, it is possible. However, the number of such queries should be very small.

    Q: Can I still run my database at the older compatibility level and force a few queries to the newer cardinality estimation logic? If yes, how?
    A: Yes, you can do that. You will need to force your query with trace flag 2312 to use the newer cardinality estimation logic.

    ```sql
    USE AdventureWorks2014
    GO
    -- Old Cardinality Estimation
    ALTER DATABASE AdventureWorks2014 SET COMPATIBILITY_LEVEL = 110
    GO
    -- Using New Cardinality Estimation
    SELECT [AddressID], [AddressLine1], [City]
    FROM [Person].[Address]
    OPTION (QUERYTRACEON 2312);
    -- Using Old Cardinality Estimation
    SELECT [AddressID], [AddressLine1], [City]
    FROM [Person].[Address];
    GO
    ```

    Q: Can I run my database at the newer compatibility level and force a few queries to the older cardinality estimation logic? If yes, how?
    A: Yes, you can do that. You will need to force your query with trace flag 9481 to use the older cardinality estimation logic.

    ```sql
    USE AdventureWorks2014
    GO
    -- New Cardinality Estimation
    ALTER DATABASE AdventureWorks2014 SET COMPATIBILITY_LEVEL = 120
    GO
    -- Using New Cardinality Estimation
    SELECT [AddressID], [AddressLine1], [City]
    FROM [Person].[Address];
    -- Using Old Cardinality Estimation
    SELECT [AddressID], [AddressLine1], [City]
    FROM [Person].[Address]
    OPTION (QUERYTRACEON 9481);
    GO
    ```

    I guess I have covered most of the questions I have received so far. If I have missed any, please send them again and I will include them. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • AI for a mixed Turn Based + Real Time battle system - Is something "Gambit-like" the right approach?

    - by Jason L.
    This is maybe a question that's been asked 100 times in 1,000 different ways. I apologize for that :)

    I'm in the process of building the AI for a game I'm working on. The game is turn based, in the vein of Final Fantasy, but also has a set of things that happen in real time (reactions). I've experimented with FSMs, HFSMs, and behavior trees. None of them felt "right" to me, and all felt either too limiting or too generic/big.

    The idea I'm toying with now is something like a "rules engine" that could be likened to the Gambit system from Final Fantasy 12. I would have a set of predefined personalities. Each of these personalities would have a set of conditions it would check on each event (turn start, time to react, etc). These conditions would be priority ordered, and the first one that returns true would be the action I take. These conditions can also point to a "choice" action, which is just an action that makes a choice based on some utility function. Sort of a mix of an FSM/HFSM and a utility-function approach (see the sketch after the examples below).

    So, a "gambit" with the personality of "Healer" may look something like this:

    - (ON) Ally HP = 0% - Choose "Relife" spell
    - (ON) Ally HP < 50% - Choose Heal spell
    - (ON) Self HP < 65% - Choose Heal spell
    - (ON) Ally Debuff - Choose Debuff Removal spell
    - (ON) Ally Lost Buff - Choose Buff spell

    Likewise, a "gambit" with the personality of "Aggressor" may look like this:

    - (ON) Foe HP < 10% - Choose Attack skill
    - (ON) Foe any - Choose target - Choose Attack skill
    - (ON) Self Lost Buff - Choose Buff spell
    - (ON) Foe HP = 0% - Taunt the player

    What I like about this approach is that it makes sense in my head. It would also be extremely easy to build an "AI editor" with an approach like this. What I'm worried about is: would it be too limiting? Would it maybe get too complicated? Does anyone have experience with AI in turn-based games who could provide some insight into this approach, or suggest a different one? Many thanks in advance!!!
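    Since the design above is essentially a prioritized list of condition/action pairs, here is a minimal sketch of how it might look in C#. All type and member names (BattleContext, GambitRule, Personality) are hypothetical:

    ```csharp
    using System;
    using System.Collections.Generic;

    // Placeholder for whatever battle state the conditions need to inspect.
    public class BattleContext { /* allies, foes, current event, ... */ }

    public class GambitRule
    {
        public Func<BattleContext, bool> Condition { get; set; }
        public Action<BattleContext> Action { get; set; }
    }

    // A personality is just rules in priority order; the first match wins.
    public class Personality
    {
        private readonly List<GambitRule> _rules = new List<GambitRule>();

        public Personality Add(Func<BattleContext, bool> condition, Action<BattleContext> action)
        {
            _rules.Add(new GambitRule { Condition = condition, Action = action });
            return this; // fluent, so a gambit reads top-down like the lists above
        }

        public void React(BattleContext ctx)
        {
            foreach (var rule in _rules)
            {
                if (rule.Condition(ctx))
                {
                    rule.Action(ctx); // only the highest-priority matching rule fires
                    return;
                }
            }
        }
    }
    ```

    Under this sketch, a "choice" action is just an Action<BattleContext> that runs a utility function over the candidates before acting, and an AI editor only has to serialize the ordered rule list.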

    Read the article

  • DELL Inspiron 9400 SD Card Reader not working on Ubuntu 11.10

    - by Mario Martz
    I'm new to Linux. I just installed Ubuntu 11.10 on a Dell Inspiron 9400, and everything works fine with the exception of the SD card reader. Every time I insert a card the computer doesn't do anything; it's like the SD card reader is not there.

    I did an lspci and it shows these devices:

    ```
    03:01.0 FireWire (IEEE 1394): Ricoh Co Ltd R5C832 IEEE 1394 Controller
    03:01.1 SD Host controller: Ricoh Co Ltd R5C822 SD/SDIO/MMC/MS/MSPro Host Adapter (rev 19)
    03:01.2 System peripheral: Ricoh Co Ltd R5C592 Memory Stick Bus Host Adapter (rev 0a)
    03:01.3 System peripheral: Ricoh Co Ltd xD-Picture Card Controller (rev 05)
    ```

    Every time I insert a memory card, dmesg shows the following:

    ```
    ...d status 0x600b00
    [ 2687.227351] end_request: I/O error, dev mmcblk0, sector 64
    [ 2687.229436] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00
    [ 2687.229440] end_request: I/O error, dev mmcblk0, sector 65
    [ 2687.230512] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00
    [ 2687.230515] end_request: I/O error, dev mmcblk0, sector 66
    [ 2687.231588] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00
    [ 2687.231592] end_request: I/O error, dev mmcblk0, sector 67
    [ 2687.232674] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00
    [ 2687.232678] end_request: I/O error, dev mmcblk0, sector 68
    [ 2687.234763] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00
    [ 2687.234766] end_request: I/O error, dev mmcblk0, sector 69
    [ 2687.236864] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00
    [ 2687.236868] end_request: I/O error, dev mmcblk0, sector 70
    [ 2687.238942] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00
    [ 2687.238946] end_request: I/O error, dev mmcblk0, sector 71
    [ 2687.238949] Buffer I/O error on device mmcblk0, logical block 8
    [ 2687.241028] mmcblk0: retrying using single block read
    [ 2687.243104] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00
    [ 2687.243108] end_request: I/O error, dev mmcblk0, sector 64
    [ 2687.245212] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00
    [ 2687.245215] end_request: I/O error, dev mmcblk0, sector 65
    [ 2687.247298] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00
    [ 2687.247302] end_request: I/O error, dev mmcblk0, sector 66
    [ 2687.248389] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00
    [ 2687.248393] end_request: I/O error, dev mmcblk0, sector 67
    [ 2687.250476] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00
    [ 2687.250480] end_request: I/O error, dev mmcblk0, sector 68
    [ 2687.252617] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00
    [ 2687.252621] end_request: I/O error, dev mmcblk0, sector 69
    [ 2687.254737] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00
    ```

    ...and more of the same, but with different sector numbers. I'm using kernel 3.0.0-12-generic.

    By the way, when I was installing Ubuntu, on the screen that asks about installation options (whether to install it alongside Windows, delete something, or change the partitions of the HDD), Linux detects the SD card (if there's one inserted, of course). Any help with this would be appreciated. Sorry for my English. Thank you.

    Read the article

  • USB drives not recognized all of a sudden (module usb_storage not loading)

    - by Siddharth
    I am very close to the solution; I just need to know how to get usb-storage to load. I have tried most of the advice on Ask Ubuntu and other sites, from enabling usb_storage to fdisk -l, but I am unable to find steps to get it working again.

    sudo lsusb results:

    ```
    Bus.... (skipped 4 lines)
    Bus 004 Device 002: ID 413c:3012 Dell Computer Corp. Optical Wheel Mouse
    Bus 005 Device 002: ID 413c:2105 Dell Computer Corp. Model L100 Keyboard
    Bus 001 Device 005: ID 8564:1000
    ```

    sudo dmesg | tail reports:

    ```
    [   69.567948] usb 1-4: USB disconnect, device number 4
    [   74.084041] usb 1-6: new high-speed USB device number 5 using ehci_hcd
    [   74.240484] Initializing USB Mass Storage driver...
    [   74.256033] scsi5 : usb-storage 1-6:1.0
    [   74.256145] usbcore: registered new interface driver usb-storage
    [   74.256147] USB Mass Storage support registered.
    [   74.257290] usbcore: deregistering interface driver usb-storage
    ```

    fdisk -l reports:

    ```
       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *        2048   972656639   486327296   83  Linux
    /dev/sda2       972658686   976771071     2056193    5  Extended
    /dev/sda5       972658688   976771071     2056192   82  Linux swap / Solaris
    ```

    I think I need steps to install usb_storage and get the module working.

    Edit: I tried sudo modprobe -v usb-storage, which reports:

    ```
    insmod /lib/modules/3.2.0-48-generic-pae/kernel/drivers/usb/storage/usb-storage.ko
    ```

    Edit:

    ```
    jsiddharth@siddharth-desktop:~$ sudo udevadm monitor --udev
    monitor will print the received events for:
    UDEV - the event which udev sends out after rule processing

    UDEV  [4757.144372] add      /module/usb_storage (module)
    UDEV  [4757.146558] remove   /module/usb_storage (module)
    UDEV  [4757.148707] add      /devices/pci0000:00/0000:00:1d.7/usb1/1-6 (usb)
    UDEV  [4757.149699] add      /bus/usb/drivers/usb-storage (drivers)
    UDEV  [4757.151214] remove   /bus/usb/drivers/usb-storage (drivers)
    UDEV  [4757.156873] add      /devices/pci0000:00/0000:00:1d.7/usb1/1-6/1-6:1.0 (usb)
    UDEV  [4757.160903] add      /devices/pci0000:00/0000:00:1d.7/usb1/1-6/1-6:1.0/host9 (scsi)
    UDEV  [4757.164672] add      /devices/pci0000:00/0000:00:1d.7/usb1/1-6/1-6:1.0/host9/scsi_host/host9 (scsi_host)
    UDEV  [4757.165163] remove   /devices/pci0000:00/0000:00:1d.7/usb1/1-6/1-6:1.0/host9/scsi_host/host9 (scsi_host)
    UDEV  [4757.165440] remove   /devices/pci0000:00/0000:00:1d.7/usb1/1-6/1-6:1.0/host9 (scsi)
    ```

    Narrowing down more: it seems like I need usb_storage to load as a module.

    ```
    jsiddharth@siddharth-desktop:~$ lsmod | grep usb
    usbserial              37201  0
    usbhid                 41937  0
    hid                    77428  1 usbhid
    ```

    Still no USB driver mounted, nor does a device show up in /dev. Any step-by-step process to debug and fix this would be really helpful.

    Read the article

  • openssl/rand.h header file not found

    - by Arun Reddy Kandoor
    I have installed the libssl-dev package, but that did not install the include files. How do I get the OpenSSL include files? Appreciate your help.

    ```
    Checking for program g++ or c++          : /usr/bin/g++
    Checking for program cpp                 : /usr/bin/cpp
    Checking for program ar                  : /usr/bin/ar
    Checking for program ranlib              : /usr/bin/ranlib
    Checking for g++                         : ok
    Checking for node path                   : ok /usr/bin/node
    Checking for node prefix                 : ok /usr
    Checking for header openssl/rand.h       : not found
    /home/arun/Documents/webserver/node_modules/bcrypt/wscript:30: error: the configuration failed (see '/home/arun/Documents/webserver/node_modules/bcrypt/build/config.log')
    npm ERR! error installing [email protected]
    npm ERR! [email protected] preinstall: `node-waf clean || (exit 0); node-waf configure build`
    npm ERR! `sh "-c" "node-waf clean || (exit 0); node-waf configure build"` failed with 1
    npm ERR!
    npm ERR! Failed at the [email protected] preinstall script.
    npm ERR! This is most likely a problem with the bcrypt package,
    npm ERR! not with npm itself.
    npm ERR! Tell the author that this fails on your system:
    npm ERR!     node-waf clean || (exit 0); node-waf configure build
    npm ERR! You can get their info via:
    npm ERR!     npm owner ls bcrypt
    npm ERR! There is likely additional logging output above.
    npm ERR!
    npm ERR! System Linux 3.8.0-32-generic
    npm ERR! command "node" "/usr/bin/npm" "install"
    npm ERR! cwd /home/arun/Documents/webserver
    npm ERR! node -v v0.6.12
    npm ERR! npm -v 1.1.4
    npm ERR! code ELIFECYCLE
    npm ERR! message [email protected] preinstall: `node-waf clean || (exit 0); node-waf configure build`
    npm ERR! message `sh "-c" "node-waf clean || (exit 0); node-waf configure build"` failed with 1
    npm ERR! errno {}
    npm ERR!
    npm ERR! Additional logging details can be found in:
    npm ERR!     /home/arun/Documents/webserver/npm-debug.log
    npm not ok
    ```

    Read the article

  • How to design a scalable notification system?

    - by Trent
    I need to write a notification system manager. Here are my requirements:

    - I need to be able to send a notification on different platforms, which may be totally different (for example, I need to be able to send either an SMS or an e-mail).
    - Sometimes the notification may be the same for all recipients for a given platform, but sometimes it may be a notification per recipient (or several) per platform.
    - Each notification can contain a platform-specific payload (for example, an MMS can contain a sound or an image).
    - The system needs to be scalable: I need to be able to send a very large number of notifications without crashing either the application or the server.

    It is a two-step process. First, a customer may type a message and choose a platform to send to, and the notification(s) should be created to be processed either in real time or later. Then, the system needs to send the notification to the platform provider.

    For now I have a rough design, but I don't know how scalable it will be or if it is a good design. I've thought of the following objects (in a pseudo-language), starting with a generic Notification object:

      class Notification {
          String $message;
          Payload $payload;
          Collection<Recipient> $recipients;
      }

    The problem with this object is: what if I have 1,000,000 recipients? Even if the Recipient object is very small, it will take too much memory. I could also create one Notification per recipient, but some platform providers require me to send in batches, meaning I need to define one Notification with several Recipients. Each created notification could be stored in a persistent store like a DB or Redis. Would it be a good idea to aggregate them later to make sure the design stays scalable?

    In the second step, I need to process the notifications. But how do I route each notification to the right platform provider? Should I use an object like MMSNotification extending an abstract Notification, or something like Notification.setType('MMS')?

    To process a lot of notifications at the same time, I think a messaging queue system like RabbitMQ may be the right tool. Is it? It would allow me to queue a lot of notifications and have several workers pop notifications and process them. But what if I need to batch the recipients as seen above?

    Then I imagine a NotificationProcessor object to which I could add NotificationHandlers; each NotificationHandler would be in charge of connecting to the platform provider and performing the notification. I can also use an EventManager to allow pluggable behavior. (A rough sketch of this shape follows below.)

    Any feedback or ideas? Thanks for giving your time. Note: I'm used to working in PHP and it is likely to be the language of my choice.
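    One way to picture the batching and routing described above is the minimal sketch below. It is written in C# purely for illustration (the asker prefers PHP), uses an in-process queue as a stand-in for RabbitMQ, and every type and handler name (Notification, SmsHandler, and so on) is hypothetical:

      // Sketch of the batched-queue design: one queued message per
      // recipient batch, routed to a platform handler by a string key
      // rather than by subclassing Notification per platform.
      using System;
      using System.Collections.Generic;

      class Notification
      {
          public string Platform;          // e.g. "sms", "email", "mms"
          public string Message;
          public Dictionary<string, object> Payload = new Dictionary<string, object>();
          public List<string> Recipients = new List<string>();
      }

      interface INotificationHandler { void Send(Notification n); }

      class SmsHandler : INotificationHandler
      {
          public void Send(Notification n) =>
              Console.WriteLine($"SMS to {n.Recipients.Count} recipients: {n.Message}");
      }

      class Program
      {
          // Chunk a huge recipient list so no single queued message
          // (or DB row) ever holds 1,000,000 recipients at once.
          static IEnumerable<List<string>> Batches(List<string> all, int size)
          {
              for (int i = 0; i < all.Count; i += size)
                  yield return all.GetRange(i, Math.Min(size, all.Count - i));
          }

          static void Main()
          {
              // In production this would be RabbitMQ; a local queue stands in here.
              var queue = new Queue<Notification>();
              var handlers = new Dictionary<string, INotificationHandler>
              {
                  ["sms"] = new SmsHandler()
              };

              var recipients = new List<string>();
              for (int i = 0; i < 10; i++) recipients.Add($"user{i}");

              // Producer: one Notification per batch, typed by a string key.
              foreach (var batch in Batches(recipients, 4))
                  queue.Enqueue(new Notification
                  {
                      Platform = "sms", Message = "Hello!", Recipients = batch
                  });

              // Worker: pop batches and dispatch by platform.
              while (queue.Count > 0)
              {
                  var n = queue.Dequeue();
                  handlers[n.Platform].Send(n);
              }
          }
      }

    The string platform key keeps the queue payload serializable and lets new handlers be registered without touching the Notification type, which is the property the pluggable NotificationProcessor/EventManager idea is after.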

    Read the article

  • Dynamically loading Assemblies to reduce Runtime Dependencies

    - by Rick Strahl
    I've been working on a request to the West Wind Application Configuration library to add JSON support. The config library is a very easy-to-use, code-first approach to configuration: you create a class that holds the configuration data and inherits from a base configuration class, and then assign a persistence provider at runtime that determines where and how the configuration data is stored. Currently the library supports .NET configuration stores (web.config/app.config), XML files, SQL records and string storage.

    About once a week somebody asks me about JSON support, and I've deflected this question for the longest time because frankly I think that JSON as a configuration store doesn't really buy a heck of a lot over XML. Both formats require the user to perform some fixup of the plain configuration data - in XML into XML tags, with JSON using JSON delimiters for properties and property formatting rules. Sure, JSON is a little less verbose and maybe a little easier to read if you have hierarchical data, but overall the differences are pretty minor in my opinion. And yet - the requests keep rolling in.

    Hard Link Issues in a Component Library

    Another reason I've been hesitant is that I really didn't want to pull a dependency on an external JSON library - in this case JSON.NET - into the core library. If you're not using JSON.NET elsewhere, I don't want a user to have to take a hard dependency on JSON.NET unless they want to use the JSON feature. JSON.NET is also sensitive to versions and doesn't play nice with multiple versions when hard linked. For example, when you have a reference to V4.4 in your project but the host application has a reference to version 4.5, you can run into assembly load problems. NuGet's Update-Package can solve some of this *if* you can recompile, but that's not ideal for a component that's supposed to be just plug and play. This is no criticism of JSON.NET - this really applies to any dependency that might change. So hard linking the DLL can be problematic for a number of reasons, but the primary reason is to not force loading of JSON.NET unless you actually need it when you use the JSON configuration features of the library.

    Enter Dynamic Loading

    So rather than adding an assembly reference to the project, I decided that it would be better to dynamically load the DLL at runtime and then use dynamic typing to access the various classes. This allows me to run without a hard assembly reference and allows more flexibility with version number differences now and in the future. But there are also a couple of downsides:

    - No assembly reference means only dynamic access - no compiler type checking or IntelliSense.
    - The host application must have a reference to JSON.NET or else you get runtime errors.

    The former is minor, but the latter can be problematic. Runtime errors are always painful, but in this case I'm willing to live with it. If you want to use JSON configuration settings, JSON.NET needs to be loaded in the project. If this is a Web project, it'll likely be there already. So there are a few things that are needed to make this work:

    - Dynamically create an instance and optionally attempt to load the assembly (if not loaded)
    - Load types into dynamic variables
    - Use Reflection for a few tasks like statics/enums

    The dynamic keyword in C# makes the formerly most difficult Reflection part - method calls and property assignments - fairly painless. But as cool as dynamic is, it doesn't handle all aspects of Reflection. Specifically it doesn't deal with object activation, truly dynamic (string-based) member activation, or accessing of non-instance members, so there's still a little bit of work left to do with Reflection.

    Dynamic Object Instantiation

    The first step in getting the process rolling is to instantiate the type you need to work with. This might be a two-step process - loading the instance from a string value, since we don't have a hard type reference, and potentially having to load the assembly. Although the host project might have a reference to JSON.NET, that assembly might not have been loaded yet since it hasn't been accessed yet. In ASP.NET this won't be a problem, since ASP.NET preloads all referenced assemblies on AppDomain startup, but in other executable projects assemblies are just-in-time loaded only when they are accessed. Instantiating a type is a two-step process: finding the type reference and then activating it. Here's the generic code out of my ReflectionUtils library I use for this:

      /// <summary>
      /// Creates an instance of a type based on a string.
      /// </summary>
      /// <param name="typeName">Common name of the type</param>
      /// <param name="args">Any constructor parameters</param>
      public static object CreateInstanceFromString(string typeName, params object[] args)
      {
          object instance = null;
          Type type = null;
          try
          {
              type = GetTypeFromName(typeName);
              if (type == null)
                  return null;
              instance = Activator.CreateInstance(type, args);
          }
          catch
          {
              return null;
          }
          return instance;
      }

      /// <summary>
      /// Helper routine that looks up a type name and tries to retrieve the
      /// full type reference in the actively executing assemblies.
      /// </summary>
      public static Type GetTypeFromName(string typeName)
      {
          Type type = null;

          // Let default name binding find it
          type = Type.GetType(typeName, false);
          if (type != null)
              return type;

          // Look through the assembly list
          var assemblies = AppDomain.CurrentDomain.GetAssemblies();

          // Try to find manually
          foreach (Assembly asm in assemblies)
          {
              type = asm.GetType(typeName, false);
              if (type != null)
                  break;
          }
          return type;
      }

    To use this for loading JSON.NET, I have a small factory function that instantiates JSON.NET and sets a bunch of configuration settings on the generated object. The startup code also looks for failure and tries loading up the assembly when it fails, since that's the main reason the load would fail. Finally it also caches the loaded instance for reuse (according to James, the JSON.NET instance is thread safe and quite a bit faster when cached). Here's what the factory function looks like in JsonSerializationUtils:

      /// <summary>
      /// Dynamically creates an instance of JSON.NET
      /// </summary>
      /// <param name="throwExceptions">If true throws exceptions otherwise returns null</param>
      /// <returns>Dynamic JsonSerializer instance</returns>
      public static dynamic CreateJsonNet(bool throwExceptions = true)
      {
          if (JsonNet != null)
              return JsonNet;

          lock (SyncLock)
          {
              if (JsonNet != null)
                  return JsonNet;

              // Try to create an instance
              dynamic json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer");
              if (json == null)
              {
                  try
                  {
                      var ass = AppDomain.CurrentDomain.Load("Newtonsoft.Json");
                      json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer");
                  }
                  catch (Exception ex)
                  {
                      if (throwExceptions)
                          throw;
                      return null;
                  }
              }
              if (json == null)
                  return null;

              json.ReferenceLoopHandling = (dynamic)ReflectionUtils.GetStaticProperty(
                  "Newtonsoft.Json.ReferenceLoopHandling", "Ignore");

              // Enums as strings in JSON
              dynamic enumConverter = ReflectionUtils.CreateInstanceFromString(
                  "Newtonsoft.Json.Converters.StringEnumConverter");
              json.Converters.Add(enumConverter);

              JsonNet = json;
          }
          return JsonNet;
      }

    This code's purpose is to return a fully configured JsonSerializer instance. As you can see, the code tries to create an instance, and when that fails tries to load the assembly and then re-tries the instantiation. Once the instance is loaded, some configuration occurs on it. Specifically I set the ReferenceLoopHandling option to not blow up immediately when circular references are encountered. There are a host of other small config settings that might be useful to set, but the defaults seem to be good enough in recent versions.

    Note that I'm setting ReferenceLoopHandling, which requires an enum value to be set. There's no real easy way (short of using the cardinal numeric value) to set a property or pass parameters from static values or enums. This means I still need to use Reflection to make this work. I'm using the same ReflectionUtils class I previously used to handle this for me; the function looks up the type and then uses Type.InvokeMember() to read the static property. Another feature I need is to have enum values serialized as strings rather than numeric values, which is the default. To do this I can use the StringEnumConverter to convert enums to strings by adding it to the Converters collection. As you can see, there's still a bit of Reflection to be done even in C# 4+ with dynamic, but with a few helpers this process is relatively painless.

    Doing the actual JSON Conversion

    Finally I need to actually do my JSON conversions. For the utility class I need serialization that works for both strings and files, so I created four methods that handle these tasks - two each for serialization and deserialization, for string and file. Here's what the file serialization looks like:

      /// <summary>
      /// Serializes an object instance to a JSON file.
      /// </summary>
      /// <param name="value">the value to serialize</param>
      /// <param name="fileName">Full path to the file to write out with JSON.</param>
      /// <param name="throwExceptions">Determines whether exceptions are thrown or false is returned</param>
      /// <param name="formatJsonOutput">if true pretty-formats the JSON with line breaks</param>
      /// <returns>true or false</returns>
      public static bool SerializeToFile(object value, string fileName,
                                         bool throwExceptions = false, bool formatJsonOutput = false)
      {
          dynamic writer = null;
          FileStream fs = null;
          try
          {
              Type type = value.GetType();

              var json = CreateJsonNet(throwExceptions);
              if (json == null)
                  return false;

              fs = new FileStream(fileName, FileMode.Create);
              var sw = new StreamWriter(fs, Encoding.UTF8);

              writer = Activator.CreateInstance(JsonTextWriterType, sw);

              if (formatJsonOutput)
                  writer.Formatting = (dynamic)Enum.Parse(FormattingType, "Indented");

              writer.QuoteChar = '"';
              json.Serialize(writer, value);
          }
          catch (Exception ex)
          {
              Debug.WriteLine("JsonSerializer Serialize error: " + ex.Message);
              if (throwExceptions)
                  throw;
              return false;
          }
          finally
          {
              if (writer != null)
                  writer.Close();
              if (fs != null)
                  fs.Close();
          }
          return true;
      }

    You can see more of the dynamic invocation in this code. First I grab the dynamic JsonSerializer instance using the CreateJsonNet() method shown earlier, which returns a dynamic. I then create a JsonTextWriter, configure a couple of enum settings on it, and call Serialize() on the serializer instance with the JsonTextWriter, which writes the output to disk. Although this code is dynamic, it's still fairly short and readable. For full-circle operation, here's the DeserializeFromFile() version:

      /// <summary>
      /// Deserializes an object from file and returns a reference.
      /// </summary>
      /// <param name="fileName">name of the file to deserialize from</param>
      /// <param name="objectType">The Type of the object. Use typeof(yourobject class)</param>
      /// <param name="throwExceptions">determines whether failure will throw rather than return null on failure</param>
      /// <returns>Instance of the deserialized object or null. Must be cast to your object type</returns>
      public static object DeserializeFromFile(string fileName, Type objectType,
                                               bool throwExceptions = false)
      {
          dynamic json = CreateJsonNet(throwExceptions);
          if (json == null)
              return null;

          object result = null;
          dynamic reader = null;
          FileStream fs = null;
          try
          {
              fs = new FileStream(fileName, FileMode.Open, FileAccess.Read);
              var sr = new StreamReader(fs, Encoding.UTF8);

              reader = Activator.CreateInstance(JsonTextReaderType, sr);
              result = json.Deserialize(reader, objectType);
              reader.Close();
          }
          catch (Exception ex)
          {
              Debug.WriteLine("JsonNetSerialization Deserialization Error: " + ex.Message);
              if (throwExceptions)
                  throw;
              return null;
          }
          finally
          {
              if (reader != null)
                  reader.Close();
              if (fs != null)
                  fs.Close();
          }
          return result;
      }

    This code is a little more compact since there are no prettifying options to set. Here the JsonTextReader is created dynamically, and it feeds the Deserialize() operation on the serializer. You can take a look at the full JsonSerializationUtils.cs file on GitHub to see the rest of the operations, but the string operations are very similar - the code is fairly repetitive. These generic serialization utilities isolate the dynamic serialization logic that has to deal with the dynamic nature of JSON.NET, and any code that uses these functions is none the wiser that JSON.NET is dynamically loaded.

    Using the JsonSerializationUtils Wrapper

    The final consumer of the SerializationUtils wrapper is an actual ConfigurationProvider that is responsible for reading and writing JSON values to and from files. The provider is simply a small wrapper around the SerializationUtils component, and there's very little code needed to make this work now. The whole provider looks like this:

      /// <summary>
      /// Reads and writes configuration settings in JSON files. Allows
      /// reading and writing to default or external files.
      /// </summary>
      public class JsonFileConfigurationProvider<TAppConfiguration> : ConfigurationProviderBase<TAppConfiguration>
          where TAppConfiguration : AppConfiguration, new()
      {
          /// <summary>
          /// Optional - the configuration file where configuration settings are
          /// stored. If not specified uses the default Configuration Manager
          /// and its default store.
          /// </summary>
          public string JsonConfigurationFile
          {
              get { return _JsonConfigurationFile; }
              set { _JsonConfigurationFile = value; }
          }
          private string _JsonConfigurationFile = string.Empty;

          public override bool Read(AppConfiguration config)
          {
              var newConfig = JsonSerializationUtils.DeserializeFromFile(
                                  JsonConfigurationFile, typeof(TAppConfiguration)) as TAppConfiguration;
              if (newConfig == null)
              {
                  if (Write(config))
                      return true;
                  return false;
              }

              DecryptFields(newConfig);
              DataUtils.CopyObjectData(newConfig, config, "Provider,ErrorMessage");
              return true;
          }

          /// <summary>
          /// Reads configuration settings into a new instance.
          /// </summary>
          public override TAppConfig Read<TAppConfig>()
          {
              var result = JsonSerializationUtils.DeserializeFromFile(
                               JsonConfigurationFile, typeof(TAppConfig)) as TAppConfig;
              if (result != null)
                  DecryptFields(result);
              return result;
          }

          /// <summary>
          /// Write configuration to the JsonConfigurationFile location.
          /// </summary>
          public override bool Write(AppConfiguration config)
          {
              EncryptFields(config);

              bool result = JsonSerializationUtils.SerializeToFile(
                                config, JsonConfigurationFile, false, true);

              // Have to decrypt again to make sure the properties are readable afterwards
              DecryptFields(config);
              return result;
          }
      }

    This incidentally demonstrates how easy it is to create a new provider for the West Wind Application Configuration component - simply implementing three methods will do in most cases. Note that this code doesn't have any dynamic dependencies; all of that is abstracted away in JsonSerializationUtils. From here on, serializing JSON is just a matter of calling the static methods on the SerializationUtils class. Already, there are several other places in some other tools where this JSON serialization is coming in very handy. With a couple of lines of code I was able to add JSON.NET support to an older AJAX library that I use, replacing quite a bit of code that was previously in use. And for any other manual JSON operations (in a couple of apps I use JSON serialization for 'blob'-like document storage), this is also going to be handy.

    Performance?

    Some of you might be thinking that using dynamic and Reflection can't be good for performance. And you'd be right… In performing some informal testing, it looks like the native code is nearly twice as fast as the dynamic code; most of the slowness is attributable to type lookups. To test, I created a native class that uses an actual reference to JSON.NET, and performance was consistently around 85-90% faster with the referenced code. That being said, I serialized 10,000 objects in 80ms vs. 45ms, so this is hardly sluggish. For the configuration component, speed is not that important because both read and write operations typically happen once on first access and then only every once in a while. But for other operations - say a serializer handling AJAX requests on a web server - one would be well served to create a hard dependency.

    Dynamic Loading - Worth it?

    On occasion dynamic loading makes sense. There's a price to be paid in added code complexity and a performance hit, but for operations that are not pivotal to a component or application and are only used under certain circumstances, dynamic loading can be beneficial by avoiding having to ship extra files and weighing down distributions. These days, when new projects in Visual Studio pull in 30 assemblies before you even add your own code, trying to keep file counts under control seems a good idea. It's not the kind of thing you do on a regular basis, but when needed it can be a useful tool. Hopefully some of you find this information useful…

    © Rick Strahl, West Wind Technologies, 2005-2013. Posted in .NET, C#.
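    For reference, here is a short hypothetical usage sketch of the two utility methods shown above; the ServerConfig class and the file name are made up for illustration, and the calls assume JSON.NET is available to the host project:

      // Illustrative only - ServerConfig is a hypothetical config class.
      public class ServerConfig
      {
          public string Host { get; set; } = "localhost";
          public int Port { get; set; } = 8080;
      }

      public static class ConfigDemo
      {
          public static void Run()
          {
              // Write the config out as indented JSON, then read it back.
              var config = new ServerConfig();
              bool ok = JsonSerializationUtils.SerializeToFile(
                  config, "serverconfig.json",
                  throwExceptions: false, formatJsonOutput: true);

              var loaded = JsonSerializationUtils.DeserializeFromFile(
                  "serverconfig.json", typeof(ServerConfig)) as ServerConfig;
          }
      }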

    Read the article

  • How to Assign a Default Signature in Outlook 2013

    - by Lori Kaufman
    If you sign most of your emails the same way, you can easily specify a default signature to insert automatically into new email messages, replies, and forwards. This can be done directly in the Signature editor in Outlook 2013.

    We recently showed you how to create a new signature. You can also create multiple signatures for each email account and define a different default signature for each account. When you change your sending account while composing a new email message, the signature will change automatically as well.

    NOTE: To have a signature added automatically to new email messages and replies and forwards, you must have a default signature assigned in each email account. If you don't want a signature in every account, you can create a signature with just a space, a full stop, dashes, or other generic characters.

    To assign a default signature, open Outlook and click the File tab. Click Options in the menu list on the left side of the Account Information screen. On the Outlook Options dialog box, click Mail in the list of options on the left side of the dialog box. On the Mail screen, click Signatures in the Compose messages section.

    To change the default signature for an email account, select the account from the E-mail account drop-down list on the top, right side of the dialog box under Choose default signature. Then, select the signature you want to use by default for New messages and for Replies/forwards from the other two drop-down lists. Click OK to accept your changes and close the dialog box. Click OK on the Outlook Options dialog box to close it.

    You can also access the Signatures and Stationery dialog box from the Message window for new emails and drafts. Click New Email on the Home tab, or double-click an email in the Drafts folder, to access the Message window. Then click Signature in the Include section of the New Mail Message window and select Signatures from the drop-down menu.

    In the next few days, we will be covering how to use the features of the signature editor, and then how to insert and change signatures manually, back up and restore your signatures, and modify a signature for use in plain text emails.

    Read the article

  • apt-get broken + dependency issues + many packages uninstalled

    - by vnc786
    OS: Ubuntu 12.04 64-bit, kernel 3.2.0-29-generic.

    What did I do? I ran apt-get purge libre*, which I thought would remove LibreOffice 3.5 - a big mistake. After a couple of moments I realised it was removing lots of other software as well, so I pressed Ctrl-C, which stopped the apt process. After restarting my system I found the following software was missing: apt-get, evince (PDF viewer), cheese, etc. Here is the full list: http://pastebin.com/CWHrw10y

    I managed to install the apt deb file through dpkg, but now I am not able to do any installation. I removed /var/lib/dpkg/status and created a new one, but that didn't help:

      apt-get -f install
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      Correcting dependencies... Done
      The following extra packages will be installed:
        apt-utils coreutils debconf debconf-i18n dpkg libacl1 libapt-inst1.4
        libapt-pkg4.12 libattr1 libbz2-1.0 libdb5.1 libgcc1 liblocale-gettext-perl
        liblzma5 libselinux1 libstdc++6 libtext-charwidth-perl libtext-iconv-perl
        libtext-wrapi18n-perl perl-base tar tzdata xz-utils zlib1g
      Suggested packages:
        debconf-doc debconf-utils whiptail dialog gnome-utils
        libterm-readline-gnu-perl libgtk2-perl libnet-ldap-perl libqtgui4-perl
        libqtcore4-perl apt bzip2 ncompress xz-lzma
      The following NEW packages will be installed:
        apt-utils coreutils debconf debconf-i18n dpkg libacl1 libapt-inst1.4
        libapt-pkg4.12 libattr1 libbz2-1.0 libdb5.1 libgcc1 liblocale-gettext-perl
        liblzma5 libselinux1 libstdc++6 libtext-charwidth-perl libtext-iconv-perl
        libtext-wrapi18n-perl perl-base tar tzdata xz-utils zlib1g
      0 upgraded, 24 newly installed, 0 to remove and 0 not upgraded.
      2 not fully installed or removed.
      Need to get 0 B/9,246 kB of archives.
      After this operation, 29.9 MB of additional disk space will be used.
      Do you want to continue [Y/n]? y
      E: Cannot get debconf version. Is debconf installed?
      debconf: apt-extracttemplates failed: No such file or directory
      dpkg: regarding .../libgcc1_1%3a4.6.3-1ubuntu5_amd64.deb containing libgcc1, pre-dependency problem:
       libgcc1 pre-depends on multiarch-support
       multiarch-support is unpacked, but has never been configured.
      dpkg: error processing /var/cache/apt/archives/libgcc1_1%3a4.6.3-1ubuntu5_amd64.deb (--unpack):
       pre-dependency problem - not installing libgcc1
      No apport report written because MaxReports is reached already
      Errors were encountered while processing:
       /var/cache/apt/archives/libgcc1_1%3a4.6.3-1ubuntu5_amd64.deb
      E: Internal Error, No file name for libc6
      W: Could not perform immediate configuration on 'multiarch-support:amd64'. Please see man 5 apt.conf under APT::Immediate-Configure for details. (2)
      E: Sub-process /usr/bin/dpkg returned an error code (1)

      # apt-get -u dist-upgrade
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      You might want to run 'apt-get -f install' to correct these.
      The following packages have unmet dependencies:
       libc6 : Depends: libgcc1 but it is not installed
               Depends: tzdata but it is not installed
      E: Unmet dependencies. Try using -f.

    I have tried the links below, but they didn't help: "Unable to install due to debconf problem" and "How do I resolve unmet dependencies?". Thanks.

    Read the article

  • Wireless not working with a RaLink RT3090

    - by Promather
    I recently bought a new HP DV6-3118SA laptop, but I am having a very discouraging problem with wireless LAN. It simply doesn't work! Could you please help me with this? Output of lspci -k:

      00:00.0 Host bridge: Intel Corporation Core Processor DRAM Controller (rev 02)
              Subsystem: Hewlett-Packard Company Device 144a
              Kernel driver in use: agpgart-intel
              Kernel modules: intel-agp
      00:01.0 PCI bridge: Intel Corporation Core Processor PCI Express x16 Root Port (rev 02)
              Kernel driver in use: pcieport
              Kernel modules: shpchp
      00:02.0 VGA compatible controller: Intel Corporation Core Processor Integrated Graphics Controller (rev 02)
              Subsystem: Hewlett-Packard Company Device 144a
              Kernel driver in use: i915
              Kernel modules: i915
      00:16.0 Communication controller: Intel Corporation 5 Series/3400 Series Chipset HECI Controller (rev 06)
              Subsystem: Hewlett-Packard Company Device 144a
      00:1a.0 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 05)
              Subsystem: Hewlett-Packard Company Device 144a
              Kernel driver in use: ehci_hcd
      00:1b.0 Audio device: Intel Corporation 5 Series/3400 Series Chipset High Definition Audio (rev 05)
              Subsystem: Hewlett-Packard Company Device 144a
              Kernel driver in use: HDA Intel
              Kernel modules: snd-hda-intel
      00:1c.0 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 1 (rev 05)
              Kernel driver in use: pcieport
              Kernel modules: shpchp
      00:1c.1 PCI bridge: Intel Corporation 5 Series/3400 Series Chipset PCI Express Root Port 2 (rev 05)
              Kernel driver in use: pcieport
              Kernel modules: shpchp
      00:1d.0 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 05)
              Subsystem: Hewlett-Packard Company Device 144a
              Kernel driver in use: ehci_hcd
      00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev a5)
      00:1f.0 ISA bridge: Intel Corporation Mobile 5 Series Chipset LPC Interface Controller (rev 05)
              Subsystem: Hewlett-Packard Company Device 144a
              Kernel modules: iTCO_wdt
      00:1f.2 SATA controller: Intel Corporation 5 Series/3400 Series Chipset 4 port SATA AHCI Controller (rev 05)
              Subsystem: Hewlett-Packard Company Device 144a
              Kernel driver in use: ahci
              Kernel modules: ahci
      00:1f.3 SMBus: Intel Corporation 5 Series/3400 Series Chipset SMBus Controller (rev 05)
              Subsystem: Hewlett-Packard Company Device 144a
              Kernel modules: i2c-i801
      00:1f.6 Signal processing controller: Intel Corporation 5 Series/3400 Series Chipset Thermal Subsystem (rev 05)
              Subsystem: Hewlett-Packard Company Device 144a
              Kernel driver in use: intel ips
              Kernel modules: intel_ips
      01:00.0 VGA compatible controller: ATI Technologies Inc Manhattan [Mobility Radeon HD 5000 Series]
              Subsystem: Hewlett-Packard Company Device 144a
              Kernel driver in use: radeon
              Kernel modules: radeon
      01:00.1 Audio device: ATI Technologies Inc Manhattan HDMI Audio [Mobility Radeon HD 5000 Series]
              Subsystem: Hewlett-Packard Company Device 144a
              Kernel driver in use: HDA Intel
              Kernel modules: snd-hda-intel
      02:00.0 Network controller: RaLink RT3090 Wireless 802.11n 1T/1R PCIe
              Subsystem: Hewlett-Packard Company Device 1453
              Kernel driver in use: rt2800pci
              Kernel modules: rt2860sta, rt2800pci
      03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 03)
              Subsystem: Hewlett-Packard Company Device 144a
              Kernel driver in use: r8169
              Kernel modules: r8169
      7f:00.0 Host bridge: Intel Corporation Core Processor QuickPath Architecture Generic Non-core Registers (rev 02)
              Subsystem: Hewlett-Packard Company Device 144a
      7f:00.1 Host bridge: Intel Corporation Core Processor QuickPath Architecture System Address Decoder (rev 02)
              Subsystem: Hewlett-Packard Company Device 144a
      7f:02.0 Host bridge: Intel Corporation Core Processor QPI Link 0 (rev 02)
              Subsystem: Hewlett-Packard Company Device 144a
      7f:02.1 Host bridge: Intel Corporation Core Processor QPI Physical 0 (rev 02)
              Subsystem: Hewlett-Packard Company Device 144a
      7f:02.2 Host bridge: Intel Corporation Core Processor Reserved (rev 02)
              Subsystem: Hewlett-Packard Company Device 144a
      7f:02.3 Host bridge: Intel Corporation Core Processor Reserved (rev 02)
              Subsystem: Hewlett-Packard Company Device 144a

    Read the article

  • GPL vs plugin interfaces not designed with a specific application in mind

    - by Kristóf Marussy
    I am not seeking or in need of legal advice, but an interesting thought experiment came to my mind. Imagine the following situation (I cannot really think of a concrete example and I am unsure whether a real manifestation even exists): there is a free (libre) API A, licensed under some permissive license or even the LGPL. Non-free application B implements this API in order to host plugins, but there is other free software doing the same thing. Moreover, there is plugin C acting as a plugin under API A. It links to library D, which is under the GPL, so C is also under the GPL. Plugins using A are loaded into hosts via a dlopen-like mechanism and use complex data structures for host-plugin communication. Neither B nor C distributes any files that may be required for A to function properly (like headers containing the structure definitions of A, or dynamic libraries containing helper functions for A written by the authors of A), but such things may exist.

    Now some user installs application B and plugin C on his machine, along with anything that may be required for API A to function properly. Then he loads C into B and creates some intellectual property with B which is not a piece of software. Did a GPL violation happen at some point, and if so, who violated the GPL and why?

    - The authors of C, by making it possible for C to be used in non-free host B? This is a possibility because they can't grant a GPL exception (like the ones described in http://www.gnu.org/licenses/gpl-faq.html#GPLPluginsInNF or http://www.gnu.org/licenses/gpl-faq.html#LinkingOverControlledInterface) due to D's license terms.
    - The authors of B, by making it possible for C to be loaded in B? This is a possibility because http://www.gnu.org/licenses/gpl-faq.html#NFUseGPLPlugins disallows the mechanisms A uses for communication between the free and non-free modules.
    - The authors of A, because the API may be used (and in this case, was used) for communication between GPL'd and non-free software? This would be extremely absurd.
    - The user, because at the moment of loading C into B he made a derived work of C? I think this is impossible, because he does not distribute it. But would the situation change if he decided to release a configuration file of B which makes B load C as a plugin?
    - Nobody, because A counts as a 'system library' and both B and C directly interact only with A, not each other?

    In a sane world, the last option would hold... A concrete example of A could be some kind of audio (think LADSPA) or image-processing API. However, I could find no such interface that is free software, generic, and also implemented by commercial tools. A real-world example would be quite enlightening.

    Read the article
