Search Results

Search found 6357 results on 255 pages for 'generic relations'.

  • How to add items to the Document Types context menu.

    - by Vizioz Limited
I am currently working on an extension to Umbraco that needed an extra menu item on the Document Types menu in the Settings section. My first thought was to use the BeforeNodeRender event to add the menu item, but this event only fires on the Content and Media menu trees. See: Codeplex Issue 21623.

The "temporary" solution has been to extend the loadNodeTypes class. I say temporary because I assume the core team may well extend the event functionality to the entire menu tree in the future, which would be much better and would save you from having to completely override the menu items. This was suggested by Slace on my "is it possible to add an item to the context menu in tree's other than media content" post on the our.umbraco.org site.

There are three things you need to do:

1) Override loadNodeTypes:

    using System.Collections.Generic;
    using umbraco.interfaces;
    using umbraco.BusinessLogic.Actions;

    namespace Vizioz.xMind2Umbraco
    {
        // Note: remember these menus might change in future versions of Umbraco
        public class loadNodeTypesMenu : umbraco.loadNodeTypes
        {
            public loadNodeTypesMenu(string application) : base(application) { }

            protected override void CreateRootNodeActions(ref List<IAction> actions)
            {
                actions.Clear();
                actions.Add(ActionNew.Instance);
                actions.Add(xMindImportAction.Instance);
                actions.Add(ContextMenuSeperator.Instance);
                actions.Add(ActionImport.Instance);
                actions.Add(ContextMenuSeperator.Instance);
                actions.Add(ActionRefresh.Instance);
            }

            protected override void CreateAllowedActions(ref List<IAction> actions)
            {
                actions.Clear();
                actions.Add(ActionCopy.Instance);
                actions.Add(xMindImportAction.Instance);
                actions.Add(ContextMenuSeperator.Instance);
                actions.Add(ActionExport.Instance);
                actions.Add(ContextMenuSeperator.Instance);
                actions.Add(ActionDelete.Instance);
            }
        }
    }

2) Create a new Action to be used on the menu:

    using umbraco.interfaces;

    namespace Vizioz.xMind2Umbraco
    {
        public class xMindImportAction : IAction
        {
            #region Implementation of IAction

            public static xMindImportAction Instance
            {
                get { return umbraco.Singleton<xMindImportAction>.Instance; }
            }

            public char Letter { get { return 'p'; } }
            public bool ShowInNotifier { get { return true; } }
            public bool CanBePermissionAssigned { get { return true; } }
            public string Icon { get { return "editor/OPEN.gif"; } }
            public string Alias { get { return "Import from xMind"; } }

            public string JsFunctionName
            {
                get { return "openModal('dialogs/xMindImport.aspx?id=' + nodeID, 'Publish Management', 550, 480);"; }
            }

            public string JsSource { get { return string.Empty; } }

            #endregion
        }
    }

3) Update the UmbracoAppTree table. In my case I changed the following:

    TreeHandlerType: from loadNodeTypes to loadNodeTypesMenu
    TreeHandlerAssembly: from umbraco to Vizioz.xMind2Umbraco

And the result is the new "Import from xMind" item appearing on the Document Types context menu.
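If you would rather script step 3 than edit the row by hand, a minimal T-SQL sketch might look like the following; the table and column casing are assumptions based on a default Umbraco install, so verify them against your own database first:

```sql
-- Hypothetical sketch: point the Document Types tree at the custom handler.
-- Table/column names assumed from a default Umbraco install; verify before running.
UPDATE umbracoAppTree
SET    treeHandlerType     = 'loadNodeTypesMenu',
       treeHandlerAssembly = 'Vizioz.xMind2Umbraco'
WHERE  treeHandlerType     = 'loadNodeTypes'
  AND  treeHandlerAssembly = 'umbraco';
```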

    Read the article

  • Do unit tests sometimes break encapsulation?

    - by user1288851
I very often hear the following: "If you want to test private methods, you'd better put that in another class and expose it." While sometimes that's the case and we have a hiding concept inside our class, other times you end up with classes that have the same attributes (or, worse, every attribute of one class becomes an argument on a method in the other class) and that expose functionality which is, in fact, an implementation detail. Especially in TDD, when you refactor public methods out of a previously tested class, the new class is now part of your interface, but has no tests of its own (since you refactored it, and it is an implementation detail). Now, I may not be seeing an obvious better answer, but if my answer is "correct", that means that sometimes writing unit tests can break encapsulation and divide the same responsibility into different classes. A simple example would be testing a setter method when a getter is not actually needed for anything in the real code. When answering, please don't provide simple answers to the specific cases I may have written. Rather, try to explain the generic case and the theoretical approach. And this is not language-specific. Thanks in advance. EDIT: The answer given by Matthew Flynn was really insightful, but didn't quite answer the question. Although he made the fair point that you either don't test private methods or extract them because they really are another concern and responsibility (or at least that was what I could understand from his answer), I think there are situations where unit testing private methods is useful. My primary example is when you have a class that has one responsibility, but the output (or input) it gives (takes) is just too complex. For example, a hashing function. There's no good way to break a hashing function apart and maintain cohesion and encapsulation. However, testing a hashing function can be really tough, since you would need to calculate the hash by hand (you can't use code calculation to test code calculation!) and test multiple cases where the hash changes. In that way (and this may be a question worthy of its own topic) I think private method testing is the best way to handle it. Now, I'm not sure if I should ask another question, or ask it here, but is there any better way to test such complex output (input)? OBS: Please, if you think I should ask another question on that topic, leave a comment. :)
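One common way out of the hand-calculation bind is to pin the hash to published test vectors through the public API alone. A minimal sketch in C# (NUnit is assumed, HashUtil.ComputeHash is a hypothetical method, and the expected values are the published SHA-256 vectors for the empty string and "abc"):

```csharp
using NUnit.Framework;

[TestFixture]
public class HashUtilTests
{
    // Known input/output pairs taken from the algorithm's specification,
    // so no hash ever has to be calculated by hand.
    [TestCase("", "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855")]
    [TestCase("abc", "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad")]
    public void ComputeHash_MatchesPublishedVectors(string input, string expectedHex)
    {
        Assert.AreEqual(expectedHex, HashUtil.ComputeHash(input));
    }
}
```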

    Read the article

  • Getting Started With nServiceBus on VAN Mar 31

    - by van
Topic: nServiceBus is a mature and powerful open source framework that enables you to design robust, scalable, message-based, service-oriented architectures. The latest improvements in the configuration API enable developers to get started quickly and build a simple working system that uses messaging infrastructure. The goal of this session is to give a jump start with the framework, introduce basic concepts such as message handlers, Sagas, Pub/Sub and the Generic Host, and also to create a working demo application that uses publish/subscribe messaging. The content of the session is addressed to developers who are interested in learning how to get started with nServiceBus in order to design and build distributed systems.

Bio: Bernard Kowalski is currently a Software Developer at Microdesk, one of Autodesk's leading partners in providing a variety of Geospatial and Computer-Aided Design solutions. Bernard has experience developing .NET framework-based applications utilizing Windows Forms, Windows Services, ASP.NET MVC, and Web services. In a recent project, Bernard architected and implemented a distributed system based on SOA principles using an open source implementation of an Enterprise Service Bus. Bernard develops software with Agile patterns and practices using Domain Driven Design combined with TDD (Test Driven Development). He is familiar with all of the following APIs: Autodesk Vault/Product Stream API, AutoCAD ActiveX/VBA/.NET API, AutoCAD Mechanical API, Autodesk Inventor API, Autodesk MapGuide Enterprise. Prior to joining Microdesk, Bernard worked as a researcher and teacher at the University of Science and Technology in Krakow, Poland, where he was awarded a PhD in Computer Methods in Materials Science. He also participated in research projects where he developed applications for the analysis of hot compression test results using advanced optimization techniques, and developed Finite Element Method-based programs for thermal and stress analysis using C++ and FORTRAN. Bernard is a member of the Domain Driven Design and ALT.NET user groups in NYC.

Virtual ALT.NET (VAN) is the online gathering place of the ALT.NET community. Through conversations, presentations, pair programming and dojos, we strive to improve, explore, and challenge the way we create software. Using net conferencing technology such as Skype and LiveMeeting, we hold regular meetings, open to anyone, usually taking the form of a presentation or an Open Space Technology-style conversation. Please see the Calendar (http://www.virtualaltnet.com/Home/Calendar) to find a VAN group that meets at a time convenient to you, and feel welcome to join a meeting. Past sessions can be found on the Recording page. To stay informed about VAN activities, you can subscribe to the Virtual ALT.NET Google Group and follow the Virtual ALT.NET blog.

Times below are Central Standard Time
Start Time: Wed, Mar 31, 2010 8:00 PM UTC/GMT -5 hours
End Time: Wed, Mar 31, 2010 10:00 PM UTC/GMT -5 hours
Attendee URL: http://www.virtualaltnet.com/van
Zach Young
http://www.virtualaltnet.com
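To give a flavor of the message-handler concept covered in the session, here is a minimal sketch against the NServiceBus 2.x-era API (current at the time of this talk); the OrderPlaced and OrderPlacedHandler names are illustrative, not taken from the session:

```csharp
using System;
using NServiceBus;

// A message contract; in the 2.x-era API, messages implement IMessage.
public class OrderPlaced : IMessage
{
    public string OrderId { get; set; }
}

// A handler that the Generic Host discovers and wires up automatically.
public class OrderPlacedHandler : IHandleMessages<OrderPlaced>
{
    public IBus Bus { get; set; } // injected by the framework

    public void Handle(OrderPlaced message)
    {
        Console.WriteLine("Order received: " + message.OrderId);
        // Follow-up messages would be sent or published via Bus here.
    }
}
```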

    Read the article

  • How to perform feature upgrade in SharePoint2010 part1

    - by ybbest
Once your custom SharePoint solution has gone into production, any changes made to it require you to perform a feature upgrade. Today, I'd like to show you how to perform a feature upgrade. For the first version of my solution, I deploy a document library with a custom document set content type. You can download the solution here. Once you extract the solution, the first version is in the original folder. In order to deploy the original solution, you need to run sitecreation.ps1 in the script folder. Next, I will modify the solution so that it indexes the application number in the document library I just created in my original solution.

1. Modify the ApplicationLibrary.Template.xml as highlighted below:

2. Add the following code to the feature event receiver:

    public override void FeatureUpgrading(SPFeatureReceiverProperties properties, string upgradeActionName, System.Collections.Generic.IDictionary<string, string> parameters)
    {
        base.FeatureUpgrading(properties, upgradeActionName, parameters);
        SPWeb web = GetFeatureWeb(properties);
        switch (upgradeActionName)
        {
            case "IndexApplicationNumber":
                SPList applicationLibrary = web.Lists.TryGetList(ApplicationLibraryNamesConstant.ApplicationLibraryName);
                if (applicationLibrary != null)
                {
                    SPField queueField = applicationLibrary.Fields["ApplicationNumber"];
                    queueField.Indexed = true;
                    queueField.Update();
                }
                break;
        }
    }

3. Package your solution and run the feature upgrade PowerShell script:

    $wspFolder = "v1.1"
    $scriptPath = Split-Path $myInvocation.MyCommand.Path
    $siteUrl = "http://ybbest"
    $featureToCheckGuid = "1b9d84cd-227d-45f1-92d4-a43008aa8fe7"
    $requiredFeatureVersion = "0.0.0.0"
    $siteUrlOfFeatureToBeChecked = "http://ybbest"
    AppendLog "Starting Solution UpgradeSolutionAndFeatures.ps1" Magenta
    & "$scriptPath\UpgradeSolutionAndFeatures.ps1" $siteUrl $wspFolder $featureToCheckGuid $requiredFeatureVersion $siteUrlOfFeatureToBeChecked
    Write-Host
    AppendLog "All features updated" "Green"

Note: If you have not versioned your feature explicitly, your feature version will be 0.0.0.0.

References: Feature upgrade.
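Since the highlighted template change in step 1 exists only as a screenshot in the original post, here is a hedged sketch of what the feature manifest's upgrade hook typically looks like in the SharePoint 2010 feature schema (the version range and surrounding attributes are assumptions, not the author's actual file):

```xml
<Feature xmlns="http://schemas.microsoft.com/sharepoint/" Version="2.0.0.0">
  <UpgradeActions>
    <VersionRange BeginVersion="1.0.0.0" EndVersion="2.0.0.0">
      <!-- Name must match the case label handled in FeatureUpgrading -->
      <CustomUpgradeAction Name="IndexApplicationNumber" />
    </VersionRange>
  </UpgradeActions>
</Feature>
```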

    Read the article

  • How to Run Apache Commands From Oracle HTTP Server 11g Home

    - by Daniel Mortimer
Every now and then you come across a problem where there is nothing in the "troubleshooting manual" that can help you. Instead you need to think outside the box. This happened to me two or three years back. Oracle HTTP Server (OHS) 11g did not start. The error reported back by OPMN was generic and gave no clue, and worse, the HTTP Server error log was empty, and remained so even after I had increased the OPMN and HTTP Server log levels. After checking configuration files, operating system resources, etc., I was still no nearer the solution. And then the light bulb moment: OHS is based on Apache, so what happens if I attempt to start HTTP Server using the native apache command? Trouble was, the OHS 11g solution has its binaries and configuration files in separate "home" directories: ORACLE_HOME contains the binaries, ORACLE_INSTANCE contains the configuration files. How to set the environment so that native apache commands run without error? Eventually, with help from a colleague, the knowledge article How to Start Oracle HTTP Server 11g Without Using opmnctl [ID 946532.1] was born!

To be honest, I cannot remember the exact cause and solution to that OHS problem two or three years ago. But I do remember that an attempt to start HTTP Server using the native apache command threw back an error to the console, which led me to discover the culprit was some unusual filesystem fault. The other day, I was asked to review and publish a new knowledge article which described how to use the apache command to dump a list of static and shared loaded modules. This got me thinking that it was time [ID 946532.1] was given an update. The result: How To Run Native Apache Commands in an Oracle HTTP Server 11g Environment [ID 946532.1].

Highlights:
- Title change
- Improved environment setting scripts
- Interactive; there should be no need to manually edit the scripts (although readers are welcome to do so)
- Automatically dump out some diagnostic information
- Inclusion of some links to other troubleshooting collateral

To view the knowledge article you need a My Oracle Support login. For convenience, you can obtain the scripts via the links below.

MS Windows:
- Wrapper cmd script - calls the main cmd script [after download, remove the ".txt" file extension]
- Main cmd script - sets the OHS 11g environment to run Apache commands [after download, remove the ".txt" file extension]

Unix:
- Shell script - sets the OHS 11g environment to run Apache commands on Unix

Please note: I cannot guarantee that the scripts held in the blog repository will be maintained. Any enhancements or fixes will be applied to the scripts attached to the knowledge article. Lastly, to find out more about native apache commands, refer to the Apache documentation:
- apachectl - Apache HTTP Server Control Interface [http://httpd.apache.org/docs/2.2/programs/apachectl.html]
- httpd - Apache Hypertext Transfer Protocol Server [http://httpd.apache.org/docs/2.2/programs/httpd.html]
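As a rough illustration of the idea (this is a hedged sketch, not the scripts from the knowledge article; every path below is an assumption that varies per install):

```sh
# Hypothetical sketch: run the native apachectl against an OHS 11g instance.
export ORACLE_HOME=/u01/app/oracle/middleware/ohs_home        # binaries
export ORACLE_INSTANCE=/u01/app/oracle/middleware/ohs_inst    # config files
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH

# Point the binary at the instance's httpd.conf and dump its settings.
$ORACLE_HOME/ohs/bin/apachectl -V -f $ORACLE_INSTANCE/config/OHS/ohs1/httpd.conf
```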

    Read the article

  • SQL SERVER – LOGBUFFER – Wait Type – Day 18 of 28

    - by pinaldave
At first, I was not planning to write about this wait type. The reason was simple: I have faced this only once in my lifetime so far, perhaps because it is not one of the top 5 wait types. I am not sure if it is a common wait type or not, but in the samples I had it really looks rare to me.

From Book On-Line: LOGBUFFER occurs when a task is waiting for space in the log buffer to store a log record. Consistently high values may indicate that the log devices cannot keep up with the amount of log being generated by the server.

LOGBUFFER Explanation: The Book On-Line definition of LOGBUFFER seems to be very accurate. On the system where I faced this wait type, the log file (LDF) was put on a local disk, and the data files (MDF, NDF) were put on SAN drives. My client at the time was not familiar with how the files were supposed to be distributed. Once we moved the LDF to a faster drive, this wait type disappeared.

Reducing LOGBUFFER wait: There are several suggestions to reduce this wait stat:

- Move the transaction log to a separate disk from the mdf and other files. (Make sure the drive holding your LDF has no IO bottleneck issues.)
- Avoid cursor-like coding methodologies and frequent commit statements.
- Find the most active file based on IO stall time, as shown in the script written over here.
- You can also use fn_virtualfilestats to find IO-related issues using the script mentioned over here.
- Check the IO-related counters (PhysicalDisk: Avg. Disk Queue Length, PhysicalDisk: Disk Read Bytes/sec and PhysicalDisk: Disk Write Bytes/sec) for additional details. Read about them over here.

If you have noticed, my suggestions for reducing LOGBUFFER are very similar to those for WRITELOG. Although the procedures for reducing them are alike, I am not suggesting that LOGBUFFER and WRITELOG are the same wait type. From the definitions of the two, you will find their difference. However, they are both related to the log, and both of them can severely degrade performance.

Note: The information presented here is from my experience, and there is no way that I claim it to be accurate. I suggest reading Book On-Line for further clarification. All the discussion of Wait Stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
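To see how much of this wait your own system has accumulated, a quick check against the standard wait-stats DMV (nothing here is assumed beyond the wait type name):

```sql
-- Cumulative LOGBUFFER waits since the last restart or stats clear.
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       max_wait_time_ms
FROM   sys.dm_os_wait_stats
WHERE  wait_type = 'LOGBUFFER';
```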

    Read the article

  • Func Delegate in C#

    - by Jalpesh P. Vadgama
We already know about delegates in C#, and I have previously posted about the basics of delegates. Following are the posts about the basics of delegates that I have written:

- Delegates in C#
- Multicast Delegates in C#

In this post we are going to learn about Func delegates in C#. As per MSDN, the definition is: "Encapsulates a method that has one parameter and returns a value of the type specified by the TResult parameter."

Func can handle multiple arguments. The Func delegate is a parameterized type: it takes any valid C# types as parameters, you can have multiple parameters, and you specify the return type as the last type parameter. The following are some example signatures:

    Func<T, TResult>
    Func<T1, T2, TResult>

Now let's take a string concatenation example. I am going to create two Func delegates which will concatenate two strings and three strings. Following is the code:

    using System;
    using System.Collections.Generic;

    namespace FuncExample
    {
        class Program
        {
            static void Main(string[] args)
            {
                Func<string, string, string> concatTwo = (x, y) => string.Format("{0} {1}", x, y);
                Func<string, string, string, string> concatThree = (x, y, z) => string.Format("{0} {1} {2}", x, y, z);

                Console.WriteLine(concatTwo("Hello", "Jalpesh"));
                Console.WriteLine(concatThree("Hello", "Jalpesh", "Vadgama"));
                Console.ReadLine();
            }
        }
    }

As you can see in the above example, I have created two delegates, 'concatTwo' and 'concatThree'. The first concatenates two strings and the other, three strings. If you look at the Func statements, the last type parameter is the return type; here the output is a string, so I have written string as the last type parameter in both statements. Now it's time to run the example, and as expected, following is the output. That's it. Hope you like it. Stay tuned for more updates.
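To make the "last type parameter is the return type" rule concrete with mixed types, a tiny sketch (the names here are mine, not from the original post):

```csharp
// Takes a string and an int, returns a bool: the bool comes last.
Func<string, int, bool> isLongerThan = (text, length) => text.Length > length;
Console.WriteLine(isLongerThan("Hello", 3)); // True
```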

    Read the article

  • NEC Corporation uPD720200 USB 3.0 controller doesn't run at full speed

    - by Radek Zyskowski
I have a fresh install of Ubuntu 10.10 and an external HD on USB 3.0, which I am trying to connect via a PCI Express NEC controller. dmesg:

    [ 8966.820078] usb 6-3: new high speed USB device using xhci_hcd and address 0
    [ 8966.839831] xhci_hcd 0000:02:00.0: WARN: short transfer on control ep
    [ 8966.840580] xhci_hcd 0000:02:00.0: WARN: short transfer on control ep
    [ 8966.841329] xhci_hcd 0000:02:00.0: WARN: short transfer on control ep
    [ 8966.842079] xhci_hcd 0000:02:00.0: WARN: short transfer on control ep
    [ 8966.843343] scsi8 : usb-storage 6-3:1.0
    [ 8967.847144] scsi 8:0:0:0: Direct-Access SAMSUNG HD204UI 1AQ1 PQ: 0 ANSI: 5
    [ 8967.847589] sd 8:0:0:0: Attached scsi generic sg2 type 0
    [ 8967.847923] sd 8:0:0:0: [sdb] 3907029168 512-byte logical blocks: (2.00 TB/1.81 TiB)
    [ 8967.848341] xhci_hcd 0000:02:00.0: WARN: Stalled endpoint
    [ 8967.850959] sd 8:0:0:0: [sdb] Write Protect is off
    [ 8967.850963] sd 8:0:0:0: [sdb] Mode Sense: 23 00 00 00
    [ 8967.850966] sd 8:0:0:0: [sdb] Assuming drive cache: write through
    [ 8967.851818] xhci_hcd 0000:02:00.0: WARN: Stalled endpoint
    [ 8967.852365] sd 8:0:0:0: [sdb] Assuming drive cache: write through
    [ 8967.852370] sdb: sdb1
    [ 8967.871315] xhci_hcd 0000:02:00.0: WARN: Stalled endpoint
    [ 8967.871853] sd 8:0:0:0: [sdb] Assuming drive cache: write through
    [ 8967.871856] sd 8:0:0:0: [sdb] Attached SCSI disk
    [ 8967.950728] xhci_hcd 0000:02:00.0: WARN: Stalled endpoint
    [ 8967.951355] sd 8:0:0:0: [sdb] Sense Key : Recovered Error [current] [descriptor]
    [ 8967.951361] Descriptor sense data with sense descriptors (in hex):
    [ 8967.951363] 72 01 04 1d 00 00 00 0e 09 0c 00 00 00 00 00 00
    [ 8967.951375] 00 00 00 00 00 50
    [ 8967.951380] sd 8:0:0:0: [sdb] ASC=0x4 ASCQ=0x1d
    [ 8968.790076] xhci_hcd 0000:02:00.0: HC died; cleaning up
    [ 8968.790076] usb 6-3: USB disconnect, address 2
    [ 8999.008554] scsi 8:0:0:0: [sdb] Unhandled error code
    [ 8999.008558] scsi 8:0:0:0: [sdb] Result: hostbyte=DID_TIME_OUT driverbyte=DRIVER_OK
    [ 8999.008562] scsi 8:0:0:0: [sdb] CDB: Read(10): 28 00 74 70 97 39 00 00 3e 00
    [ 8999.008573] end_request: I/O error, dev sdb, sector 1953535801
    [ 8999.008578] Buffer I/O error on device sdb1, logical block 1953535738
    [ 8999.008582] Buffer I/O error on device sdb1, logical block 1953535739
    [ 8999.008585] Buffer I/O error on device sdb1, logical block 1953535740
    [ 8999.008589] Buffer I/O error on device sdb1, logical block 1953535741
    [ 8999.008592] Buffer I/O error on device sdb1, logical block 1953535742
    [ 8999.008595] Buffer I/O error on device sdb1, logical block 1953535743
    [ 8999.008600] Buffer I/O error on device sdb1, logical block 1953535744
    [ 8999.008603] Buffer I/O error on device sdb1, logical block 1953535745
    [ 8999.008606] Buffer I/O error on device sdb1, logical block 1953535746
    [ 8999.008609] Buffer I/O error on device sdb1, logical block 1953535747
    [ 8999.008642] scsi 8:0:0:0: rejecting I/O to offline device
    [ 8999.008747] scsi 8:0:0:0: [sdb] Unhandled error code
    [ 8999.008749] scsi 8:0:0:0: [sdb] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
    [ 8999.008752] scsi 8:0:0:0: [sdb] CDB: Read(10): 28 00 74 70 97 77 00 00 3e 00
    [ 8999.008760] end_request: I/O error, dev sdb, sector 1953535863

sudo lspci -v:

    02:00.0 USB Controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 03) (prog-if 30)
    Physical Slot: 32
    Flags: bus master, fast devsel, latency 0, IRQ 16
    Memory at fe9fe000 (64-bit, non-prefetchable) [size=8K]
    Capabilities: [50] Power Management version 3
    Capabilities: [70] MSI: Enable- Count=1/8 Maskable- 64bit+
    Capabilities: [90] MSI-X: Enable- Count=8 Masked-
    Capabilities: [a0] Express Endpoint, MSI 00
    Capabilities: [100] Advanced Error Reporting
    Capabilities: [140] Device Serial Number ff-ff-ff-ff-ff-ff-ff-ff
    Capabilities: [150] #18
    Kernel driver in use: xhci_hcd
    Kernel modules: xhci-hcd

If I plug any USB 2.0 device into this controller, it works fine, but USB 3.0 does not. Any ideas?

    Read the article

  • JSR 360 and JSR 361: A Big Leap for Java ME 8

    - by terrencebarr
It might have gone unnoticed to some, but Java ME took a big leap forward a couple of weeks ago with the filing of two new JSRs:

- JSR 360: "Connected Limited Device Configuration 8" (aka CLDC 8)
- JSR 361: "Java ME Embedded Profile" (aka ME EP)

Together, these two JSRs will significantly update, enhance, and modernize the Java ME platform, and specifically small embedded Java, with a host of new features and functionality.

JSR 360 – Connected Limited Device Configuration 8

CLDC 8 is based on JSR 139 (CLDC 1.1) and updates the core Java ME VM, language support, libraries, and features to be aligned with Java SE 8. This will include:

- VM updated to comply with the JVM language specification version 2
- Support for SE 7/8 language features like Generics, Assertions, Annotations, Try-with-Resources, and more
- New libraries such as Collections, NIO subset, Logging API subset
- A consolidated and enhanced Generic Connection Framework for multi-protocol I/O

With CLDC 8, Java ME and Java SE are entering their next phase of alignment – making Java the only technology today that truly scales application development, code re-use, and tooling across the whole range of IT platforms, from small embedded to large enterprise.

JSR 361 – Java ME Embedded Profile

ME EP is based on JSR 228 (IMP-NG) and updates the specification in key areas to provide a powerful and flexible application environment for small embedded Java platforms, building on the features of CLDC 8:

- A new, lightweight component and services model
- Shared libraries
- Multi-application concurrency, inter-application communication, and an event system
- Application management
- API optionality, to address low-footprint use cases

With ME EP, application developers will have a modern application environment which allows development and deployment of modular, robust, sophisticated, and footprint-optimized solutions for a wide range of embedded use cases and devices.

Summary

While these JSRs are still under development, it's clear that there are exciting new times ahead for Java ME – turning into a serious application platform while maintaining the focus on resource-constrained devices to address the expected explosion of small, smart, and connected embedded platforms. To learn more, click on the above links for JSR 360 and JSR 361, or review the JavaOne 2012 online presentations on the topic:

- CON11300: Expanding the reach of the Java ME Platform
- CON5943: Java ME 8 Service Platform

And stay tuned for more in this space! Cheers, – Terrence

Filed under: Mobile & Embedded Tagged: "jsr 360", "jsr 361", "me 8", embedded, Embedded Java, JCP
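As a taste of the SE-style language alignment on a constrained device, here is a hedged sketch pairing try-with-resources (an SE 7 feature CLDC 8 is slated to support) with the Generic Connection Framework; whether connections are auto-closeable on a given CLDC 8 implementation, and the URL used, are assumptions for illustration only:

```java
import java.io.OutputStream;
import javax.microedition.io.Connector;
import javax.microedition.io.SocketConnection;

public class GcfSketch {
    public static void main(String[] args) throws Exception {
        // try-with-resources closes both the stream and the connection,
        // even if the write fails.
        try (SocketConnection conn =
                 (SocketConnection) Connector.open("socket://example.com:80");
             OutputStream out = conn.openOutputStream()) {
            out.write("ping".getBytes());
        }
    }
}
```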

    Read the article

  • Built-in network card not working?

    - by Zeeshan
Hi, I am new to Ubuntu. I have installed Ubuntu 9.04 (Jaunty). After installation I found that the network card is not working, and it does not get listed in System > Preferences > Network Connections. So I got another card from my friend and tried to search the internet about my problem, but I still can't find a solution. Here is some command output which may help to solve the problem:

    root@mzeeshan-desktop:/home/mzeeshan# uname -r
    2.6.28-11-generic

    root@mzeeshan-desktop:/home/mzeeshan# ifconfig -a
    eth0  Link encap:Ethernet  HWaddr 00:02:44:4a:45:12
          inet addr:192.168.5.37  Bcast:192.168.5.255  Mask:255.255.255.0
          inet6 addr: fe80::202:44ff:fe4a:4512/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3774 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3611 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:4307045 (4.3 MB)  TX bytes:583067 (583.0 KB)
          Interrupt:22 Base address:0x1000

    lo    Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:4 errors:0 dropped:0 overruns:0 frame:0
          TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:240 (240.0 B)  TX bytes:240 (240.0 B)

    pan0  Link encap:Ethernet  HWaddr 5e:25:17:a1:18:ac
          BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

    root@mzeeshan-desktop:/home/mzeeshan# lspci
    00:00.0 Host bridge: Intel Corporation Device 0069 (rev 12)
    00:01.0 PCI bridge: Intel Corporation Auburndale/Havendale PCI Express x16 Root Port (rev 12)
    00:19.0 Ethernet controller: Intel Corporation Device 10f0 (rev 05)
    00:1a.0 USB Controller: Intel Corporation Ibex Peak USB2 Enhanced Host Controller (rev 05)
    00:1c.0 PCI bridge: Intel Corporation Ibex Peak PCI Express Root Port 1 (rev 05)
    00:1c.4 PCI bridge: Intel Corporation Ibex Peak PCI Express Root Port 5 (rev 05)
    00:1c.6 PCI bridge: Intel Corporation Ibex Peak PCI Express Root Port 7 (rev 05)
    00:1c.7 PCI bridge: Intel Corporation Ibex Peak PCI Express Root Port 8 (rev 05)
    00:1d.0 USB Controller: Intel Corporation Ibex Peak USB2 Enhanced Host Controller (rev 05)
    00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev a5)
    00:1f.0 ISA bridge: Intel Corporation Ibex Peak LPC Interface Controller (rev 05)
    00:1f.2 IDE interface: Intel Corporation Ibex Peak 4 port SATA IDE Controller (rev 05)
    00:1f.3 SMBus: Intel Corporation Ibex Peak SMBus Controller (rev 05)
    00:1f.5 IDE interface: Intel Corporation Ibex Peak 2 port SATA IDE Controller (rev 05)
    01:00.0 VGA compatible controller: nVidia Corporation GeForce 8400 GS (rev a1)
    06:00.0 Multimedia audio controller: Creative Labs SB Live! EMU10k1 (rev 07)
    06:00.1 Input device controller: Creative Labs SB Live! Game Port (rev 07)
    06:01.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 10)
    06:03.0 FireWire (IEEE 1394): Texas Instruments TSB43AB22/A IEEE-1394a-2000 Controller (PHY/Link)
    root@mzeeshan-desktop:/home/mzeeshan#

The motherboard is an Intel DP55WG. I don't know what to do next. Any help will be greatly appreciated. Thanks

    Read the article

  • Unification of TPL TaskScheduler and RX IScheduler

    - by JoshReuben
    using System;
    using System.Collections.Generic;
    using System.Reactive.Concurrency;
    using System.Security;
    using System.Threading;
    using System.Threading.Tasks;
    using System.Windows.Threading;

    namespace TPLRXSchedulerIntegration
    {
        public class MyScheduler : TaskScheduler, IScheduler
        {
            private readonly Dispatcher _dispatcher;
            private readonly DispatcherScheduler _rxDispatcherScheduler;
            //private readonly TaskScheduler _tplDispatcherScheduler;
            private readonly SynchronizationContext _synchronizationContext;

            public MyScheduler(Dispatcher dispatcher)
            {
                _dispatcher = dispatcher;
                _rxDispatcherScheduler = new DispatcherScheduler(dispatcher);
                //_tplDispatcherScheduler = FromCurrentSynchronizationContext();
                _synchronizationContext = SynchronizationContext.Current;
            }

            #region RX

            public DateTimeOffset Now
            {
                get { return _rxDispatcherScheduler.Now; }
            }

            public IDisposable Schedule<TState>(TState state, DateTimeOffset dueTime, Func<IScheduler, TState, IDisposable> action)
            {
                return _rxDispatcherScheduler.Schedule(state, dueTime, action);
            }

            public IDisposable Schedule<TState>(TState state, TimeSpan dueTime, Func<IScheduler, TState, IDisposable> action)
            {
                return _rxDispatcherScheduler.Schedule(state, dueTime, action);
            }

            public IDisposable Schedule<TState>(TState state, Func<IScheduler, TState, IDisposable> action)
            {
                return _rxDispatcherScheduler.Schedule(state, action);
            }

            #endregion

            #region TPL

            /// Simply posts the tasks to be executed on the associated SynchronizationContext
            [SecurityCritical]
            protected override void QueueTask(Task task)
            {
                _dispatcher.BeginInvoke((Action)(() => TryExecuteTask(task)));
                //TryExecuteTaskInline(task, false);
                //task.Start(_tplDispatcherScheduler);
                //m_synchronizationContext.Post(s_postCallback, (object)task);
            }

            /// The task will be executed inline only if the call happens within the associated SynchronizationContext
            [SecurityCritical]
            protected override bool TryExecuteTaskInline(Task task, bool taskWasPreviouslyQueued)
            {
                if (SynchronizationContext.Current != _synchronizationContext)
                {
                    SynchronizationContext.SetSynchronizationContext(_synchronizationContext);
                }
                return TryExecuteTask(task);
            }

            // not implemented
            [SecurityCritical]
            protected override IEnumerable<Task> GetScheduledTasks()
            {
                return null;
            }

            /// Implements the MaximumConcurrencyLevel property for this scheduler class.
            /// By default it returns 1, because a <see cref="T:System.Threading.SynchronizationContext"/> based
            /// scheduler only supports execution on a single thread.
            public override Int32 MaximumConcurrencyLevel
            {
                get { return 1; }
            }

            //// preallocated SendOrPostCallback delegate
            //private static SendOrPostCallback s_postCallback = new SendOrPostCallback(PostCallback);
            //// this is where the actual task invocation occurs
            //private static void PostCallback(object obj)
            //{
            //    Task task = (Task)obj;
            //    // calling ExecuteEntry with double execute check enabled because a user implemented SynchronizationContext could be buggy
            //    task.ExecuteEntry(true);
            //}

            #endregion
        }
    }

What design pattern did I use here?
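A quick usage sketch for the class above (hypothetical driver code, assuming it runs on a thread that owns a WPF Dispatcher; Observable.Range and ObserveOn come from System.Reactive):

```csharp
// One scheduler instance serves both the TPL and RX sides.
var scheduler = new MyScheduler(Dispatcher.CurrentDispatcher);

// TPL: queue a task onto the dispatcher through the TaskScheduler side.
Task.Factory.StartNew(
    () => Console.WriteLine("TPL work on the dispatcher"),
    CancellationToken.None, TaskCreationOptions.None, scheduler);

// RX: observe a sequence on the dispatcher through the IScheduler side.
Observable.Range(1, 3)
    .ObserveOn(scheduler)
    .Subscribe(i => Console.WriteLine("RX item " + i));
```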

    Read the article

  • Important Note for Enablement Service Pack 1 for UPK 3.6.1

    - by marc.santosusso
The following was originally posted to one of the UPK communities on LinkedIn. Since this post generated some feedback that this information was not well-known, I thought it would be good to repost, which I've done with permission from Earl Sullivan.

This is an FYI for those who have UPK 3.6.1 and applied Enablement Pack 1: there is a manual database update that needs to be run. Here is the information:

To correct an issue with permissioning in the Library, this Service Pack, issued in March 2010, also contains scripts to update the database on the Oracle Database or Microsoft SQL Server. Once you have run the Setup.exe file for the Service Pack, the necessary script files can be found at the root of the folder where the Developer is installed. These scripts must be run manually according to the instructions below.

To update a database located on an Oracle Database server manually:
1. Run the Setup.exe to install the files for the Service Pack.
2. Start SQL*Plus and log in with the system account.
3. At the command prompt, enter the path to the AlterSchemaObjects.sql script located at the root of the folder where the Developer is installed, and append the following parameters: schema_owner (there is a limit of 20 characters on the schema owner name; you can find this information in the web.config file located in Repository.WS in the folder where the server is installed) and password (the existing schema owner password). Statement with generic parameters: @C:\AlterSchemaObjects.sql schema_owner password
4. Run the AlterSchemaObjects.sql script.

To update a database located on a Microsoft SQL Server manually:
1. Run the Setup.exe to install the files for the Service Pack.
2. Log in to the database using the database administrator account.
3. Open and edit the AlterDBObjects.sql file located at the root of the folder where the Developer is installed. Replace the ODServer text with the username used when the database was installed. You can find this information in the web.config file located in the Repository.WS folder in the folder where the server is installed.
4. Change the database from master to the name of the existing Developer database and run the AlterDBObjects.sql script.

Note: The database name is the initial catalog in the connection string in the web.config file.

Editor's note: The database update fixes a problem with permissions where the permissions for a user would be incorrectly updated when a group that the user was removed from has its permissions changed.
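For illustration, a hypothetical SQL*Plus run of the Oracle-side script (the schema owner name and password below are placeholders, not values from the Service Pack):

```sql
-- C:\> sqlplus system@orcl
-- Then run the script, passing the schema owner and its password
-- as the two positional arguments:
SQL> @C:\AlterSchemaObjects.sql UPK_OWNER upk_password
```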

    Read the article

  • Cannot install ia32-libs

    - by Marcos Junior
I don't know why I can't install ia32-libs. It complains about a dependency that cannot be found in the repos.

    junior@mediacenter:~$ sudo apt-get install ia32-libs
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Some packages could not be installed. This may mean that you have
    requested an impossible situation or if you are using the unstable
    distribution that some required packages have not yet been created
    or been moved out of Incoming.
    The following information may help to resolve the situation:

    The following packages have unmet dependencies:
     ia32-libs : Depends: ia32-libs-multiarch
    E: Unable to correct problems, you have held broken packages.

    junior@mediacenter:~$ sudo apt-get install ia32-libs-multiarch
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Some packages could not be installed. This may mean that you have
    requested an impossible situation or if you are using the unstable
    distribution that some required packages have not yet been created
    or been moved out of Incoming.
    The following information may help to resolve the situation:

    The following packages have unmet dependencies:
     ia32-libs-multiarch:i386 : Depends: gstreamer0.10-plugins-good:i386 but it is not going to be installed
                                Depends: gtk2-engines:i386 but it is not going to be installed
                                Depends: gtk2-engines-murrine:i386 but it is not going to be installed
                                Depends: gtk2-engines-pixbuf:i386 but it is not going to be installed
                                Depends: gtk2-engines-oxygen:i386 but it is not going to be installed
                                Depends: ibus-gtk:i386 but it is not going to be installed
                                Depends: libcanberra-gtk-module:i386 but it is not going to be installed
                                Depends: libcurl3:i386 but it is not going to be installed
                                Depends: libgail-common:i386 but it is not going to be installed
                                Depends: libglapi-mesa:i386 but it is not going to be installed
                                Depends: libglu1-mesa:i386 but it is not going to be installed
                                Depends: libgtk2.0-0:i386 but it is not going to be installed
                                Depends: libqt4-opengl:i386 but it is not going to be installed
                                Depends: librsvg2-common:i386 but it is not going to be installed
                                Recommends: libgl1-mesa-glx:i386 but it is not going to be installed
                                Recommends: libgl1-mesa-dri:i386 but it is not going to be installed
    E: Unable to correct problems, you have held broken packages.

Running Ubuntu Precise:

    junior@mediacenter:~$ uname -a
    Linux mediacenter 3.2.0-24-generic #37-Ubuntu SMP Wed Apr 25 08:43:22 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

Synaptic's "fix broken packages" does nothing. Any tips? Thanks. I need this package to install other apps like teamviewer7.
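One avenue worth checking (a hedged suggestion, not a confirmed fix for this exact system) is whether the i386 foreign architecture is enabled, since ia32-libs-multiarch pulls in :i386 packages:

```sh
# On 12.04, foreign-architecture lines live under /etc/dpkg/dpkg.cfg.d/
grep -r foreign-architecture /etc/dpkg/dpkg.cfg.d/

# If i386 is missing, add it, refresh the indexes, and retry:
echo "foreign-architecture i386" | sudo tee /etc/dpkg/dpkg.cfg.d/multiarch
sudo apt-get update
sudo apt-get install ia32-libs
```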

    Read the article

  • SBUG Session: The Enterprise Cache

    - by EltonStoneman
[Source: http://geekswithblogs.net/EltonStoneman] I did a session on "The Enterprise Cache" at the UK SOA/BPM User Group yesterday which generated some useful discussion. The proposal was for a dedicated caching layer which all app servers and service providers can hook into, sharing resources and common data. The architecture might end up like this: I'll update this post with a link to the slide deck once it's available. The next session will have Udi Dahan walking through nServiceBus; register on EventBrite if you want to come along.

Synopsis
- Looked at the benefits and drawbacks of app-centric isolated caches, compared to an enterprise-wide shared cache running on dedicated nodes
- Suggested issues and risks around caching, including staleness of data, resource usage, performance and testing
- Walked through a generic service cache implemented as a WCF behaviour (suitable for IIS- or BizTalk-hosted services) which I'll be releasing on CodePlex shortly
- Listed common options for cache providers and their offerings

Discussion
- Cache usage. Different value propositions for utilising the cache: improved performance, isolation from underlying systems (e.g. service output caching can have a TTL large enough to cover downtime), and reduced resource impact (CPU, memory, SQL) and cost (e.g. caching results of paid-for services).
- Dedicated cache nodes. Preferred over in-host caching provided latency is acceptable. Depending on the cache provider, can offer easy scalability and global replication so cache clients always use local nodes. Restriction of AppFabric Caching to Windows Server 2008 not viewed as a concern.
- Security. Limited security model in most cache providers. Options for securing cache content suggested as custom implementations. Obfuscating keys and serialized values may mean additional security is not needed. Depending on security requirements and architecture, can ensure cache servers are only accessible to cache clients via IPsec.
- Staleness. Generally thought to be an overrated problem. Thinking in line with eventual consistency, serving up stale data may not be a significant issue. Good technical arguments support this, although I suspect business users will be harder to persuade.
- Providers. Positive feedback for AppFabric Caching: speed, configurability and richness of the distributed model making it a good enterprise choice. The .NET port of memcached is well thought of for performance, but lack of replication makes it less suitable for these shared scenarios. The replicated fork, repcached, is untried and less active than memcached. NCache is also well thought of, but the Express version is too limited for enterprise scenarios, and the commercial versions look costly compared to AppFabric.
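The WCF-behaviour implementation itself isn't shown in this summary, but the cache-aside pattern at its heart can be sketched in a few lines of C#; MemoryCache stands in for whichever provider you choose, and the names are illustrative rather than the CodePlex code:

```csharp
using System;
using System.Runtime.Caching;

public static class ServiceCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    // Cache-aside: return a cached value if present; otherwise compute it,
    // store it with a TTL, and return it. Assumes factory returns non-null.
    public static T GetOrAdd<T>(string key, Func<T> factory, TimeSpan ttl)
    {
        object cached = Cache.Get(key);
        if (cached != null)
            return (T)cached;

        T value = factory();
        Cache.Set(key, value, DateTimeOffset.UtcNow.Add(ttl));
        return value;
    }
}
```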

    Read the article

  • What does your Python development workbench look like?

    - by Fabian Fagerholm
    First, a scene-setter to this question: Several questions on this site have to do with selection and comparison of Python IDEs. (The top one currently is What IDE to use for Python). In the answers you can see that many Python programmers use simple text editors, many use sophisticated text editors, and many use a variety of what I would call "actual" integrated development environments – a single program in which all development is done: managing project files, interfacing with a version control system, writing code, refactoring code, making build configurations, writing and executing tests, "drawing" GUIs, and so on. Through its GUI, an IDE supports different kinds of workflows to accomplish different tasks during the journey of writing a program or making changes to an existing one. The exact features vary, but a good IDE has sensible workflows and automates things to let the programmer concentrate on the creative parts of writing software. The non-IDE way of writing large programs relies on a collection of tools that are typically single-purpose; they do "one thing well" as per the Unix philosophy. This "non-integrated development environment" can be thought of as a workbench, supported by the OS and generic interaction through a text or graphical shell. The programmer creates workflows in their mind (or in a wiki?), automates parts and builds a personal workbench, often gradually and as experience accumulates. The learning curve is often steeper than with an IDE, but those who have taken the time to do this can often claim deeper understanding of their tools. (Whether they are better programmers is not part of this question.) With advanced editor-platforms like Emacs, the pieces can be integrated into a whole, while with simpler editors like gedit or TextMate, the shell/terminal is typically the "command center" to drive the workbench. Sometimes people extend an existing IDE to suit their needs. What does your Python development workbench look like? What workflows have you developed and how do they work? For the first question, please give the main "driving" program – the one that you use to control the rest (Emacs, shell, etc.) the "small tools" -- the programs you reach for when doing different tasks For the second question, please describe what the goal of the workflow is (eg. "set up a new project" or "doing initial code design" or "adding a feature" or "executing tests") what steps are in the workflow and what commands you run for each step (eg. in the shell or in Emacs) Also, please describe the context of your work: do you write small one-off scripts, do you do web development (with what framework?), do you write data-munching applications (what kind of data and for what purpose), do you do scientific computing, desktop apps, or something else? Note: A good answer addresses the perspectives above – it doesn't just list a bunch of tools. It will typically be a long answer, not a short one, and will take some thinking to produce; maybe even observing yourself working.

    Read the article

  • SQL SERVER – OLEDB – Link Server – Wait Type – Day 23 of 28

    - by pinaldave
When I decided to start writing about this wait type, the very first question that came to my mind was, "What does 'OLEDB' stand for?" A quick search on Wikipedia tells me that OLEDB means Object Linking and Embedding Database. (How many of you knew this?) Anyway, I found it very interesting that this wait type was one of the top 10 wait types in many of the systems I have come across in my performance tuning experience.

From Books On-Line: OLEDB occurs when SQL Server calls the SQL Server Native Client OLE DB Provider. This wait type is not used for synchronization. Instead, it indicates the duration of calls to the OLE DB provider.

OLEDB Explanation: This wait type primarily happens when a linked server or remote query has been executed. The most common case wherein this wait type is visible is during the execution of a linked server query. When SQL Server is retrieving data from a remote server, it uses the OLEDB API to retrieve the data. It is possible that the remote system is not quick enough or the connection between them is not fast enough, leading SQL Server to wait for the results to return from the remote (or external) server. This is when the OLEDB wait type occurs.

Reducing OLEDB wait:
- Check the linked server configuration.
- Check the disk-related Perfmon counters:
  Average Disk sec/Read (a value consistently higher than 4-8 milliseconds is not good)
  Average Disk sec/Write (a value consistently higher than 4-8 milliseconds is not good)
  Average Disk Read/Write Queue Length (a value consistently higher than your benchmark is not good)

At this point in time, I am not able to think of any more ways of reducing this wait type. Do you have any opinion about this subject? Please share it here, and I will share your comment with the rest of the Community, and of course, with due credit unto you.

Please read all the posts in the Wait Types and Queue series.

Note: The information presented here is from my experience, and there is no way that I claim it to be accurate. I suggest reading Books On-Line for further clarification. All the discussion of Wait Stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
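For context, OLEDB waits typically accumulate while a query like the following is running (the linked server name and remote table are hypothetical):

```sql
-- Remote execution through the OLE DB provider; the local session
-- registers OLEDB waits while it waits on the remote server.
SELECT *
FROM OPENQUERY(RemoteServer01, 'SELECT TOP (100) * FROM dbo.Orders');
```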

    Read the article

  • ArchBeat Link-o-Rama for August 1, 2013

    - by OTN ArchBeat
Performance Tuning – Systems Running BPEL Processes | Ravi Saraswathi and Jaswant Singh
Ravi Saraswathi and Jaswant Singh, the authors of "Oracle SOA BPEL Process Manager 11gR1 - A Hands-on Tutorial", explain performance tuning of SOA composite applications for optimal performance and scalability.

Steps to configure SAML 2.0 with Weblogic Server | Puneeth
The blogger known only as Puneeth shares an illustrated technical post that will be of interest to those working with Oracle WebLogic and the Security Assertion Markup Language (SAML).

Video: Planning and Getting Started - Developer PCs | Chris Muir
Tune in to the latest episode of ADF Architecture TV to see Chris Muir explain why you don't have to buy the most expensive PCs in order to run JDeveloper.

Key User Experience Design Principles for working with Big Data | John Fuller
User Experience Designer John Fuller shares 6 core design principles for working with big data that focus on "helping people bring together a variety of data types in a fast and flexible way."

Event: OTN Developer Day: ADF Mobile - Burlington, MA - Aug 28
Through six sessions, including a hands-on workshop, you'll learn a simpler way to leverage your existing skills to develop enterprise mobile applications using Oracle ADF Mobile. Registration is free, but seating is limited.

Optimizing WebCenter Portal Mobile Delivery | Jeevan Joseph
FMW solution architect Jeevan Joseph "walks you through identifying and analyzing some common WebCenter Portal performance bottlenecks related to page weight and describes a generic approach that can streamline your portal while improving the performance and response times."

Customizing specific instances of a WebCenter task flow | Jeevan Joseph
Fusion Middleware A-Team solution architect Jeevan Joseph strikes again with this article that explains "how to set up parameters on MDS customization so that it is applied only under certain conditions...making it possible to customize individual instances of task flows."

Exalogic Virtual Tea Break Snippets – Modifying Memory, CPU and Storage on a vServer | Andrew Hopkinson
FMW solution architect Andrew Hopkinson walks you through "the simple process of resizing the resources associated with an already existing Exalogic vServer."

Oracle ADF Mobile Virtual Developer Day - Next Week | Shay Shmeltzer
JDeveloper product team lead Shay Shmeltzer shares agenda information for the OTN Virtual Developer Day event covering Mobile Application Development for iOS and Android, coming up one week from today, on August 7, 2013, 9am PT/Noon ET/1pm BRT.

What's New In Oracle Enterprise Pack for Eclipse 12.1.2.1.0?
New features and updates in the newly-released Oracle Enterprise Pack for Eclipse 12.1.2.1.0, now available for download from OTN.

IOUG Cloud Builders Unite | Jeff Erickson
Check out this great Oracle Magazine article by Jeff Erickson about IOUG members organizing around their common interest in building private clouds.

Thought for the Day
"Stuff that's hidden and murky and ambiguous is scary because you don't know what it does." — Jerry Garcia (August 1, 1942 – August 9, 1995)
Source: brainyquote.com

    Read the article

  • ATI Radeon HD 4650 AGP Video card not recognized properly

    - by PastorLarry
I have an ASUS ATI Radeon HD 4650 AGP in this system (yeah, I know how old it is). I've been on Ubuntu since 10.04, and the system has never properly recognized the card; I have always had the VESA drivers installed. Now that I have the time to address the problem: 12.04 lists the card as "Unknown" under System Settings. Meanwhile, Sysinfo recognizes the card as:

    Advanced Micro Devices [AMD] nee ATI RV730 Pro AGP [Radeon HD 4600 Series] (prog-if 00 [VGA controller])
    Subsystem: ASUSTeK Computer Inc. Device 0028

So I know that this card should be using the radeon driver (or even the radeonhd driver). However, when I installed the mesa-utils package, the card was suddenly reported as:

    Gallium 0.4 on llvmpipe (LLVM 0x300)

So now I'm completely at a loss. It seems that the llvmpipe stuff has to do with OpenGL, but it still appears that I don't have the proper video driver installed. That being said, does anyone know what I can do to force the system to recognize the card and use the radeon driver?

[EDIT 05.28] I looked at some other information, including glxinfo and a couple of other commands (it was REALLY late, so I don't remember the other commands), and I got this:

    glxinfo | grep vendor:
    server glx vendor string: SGI
    client glx vendor string: Mesa Project and SGI
    OpenGL vendor string: X.org

    glxinfo | grep renderer:
    OpenGL renderer string: Gallium 0.4 on AMD RV730

One of the other commands gave a whole lot of info and stated near the end that the activation string for the radeon driver was "modprobe radeon". I've tried that with sudo and as root, but it doesn't seem to change anything. I'm at a complete loss. I've even added the xorg-edgers PPA to my Software Sources and updated and rebooted the system, but nothing has changed. Most of all, I can't seem to find any documentation on this issue, as it seems to be assumed that the radeon driver will install automatically, no questions asked. I feel like such a newbie. Does anyone have any ideas on this?

[edit 05.28] Results of lsmod | grep radeon (in a more readable format than the comment below):

    radeon         733693  3
    ttm             65344  1  radeon
    drm_kms_helper  45466  1  radeon
    drm            197692  5  radeon,ttm,drm_kms_helper
    i2c_algo_bit    13199  1  radeon

[edit 05.29] This is my /etc/X11/xorg.conf:

    Section "ServerLayout"
        Identifier "aticonfig Layout"
        Screen 0 "aticonfig-Screen[0]-0" 0 0
    EndSection

    Section "Module"
    EndSection

    Section "Monitor"
        Identifier "aticonfig-Monitor[0]-0"
        Option "VendorName" "ATI Proprietary Driver"
        Option "ModelName" "Generic Autodetecting Monitor"
        Option "DPMS" "true"
    EndSection

    Section "Device"
        Identifier "aticonfig-Device[0]-0"
        Driver "fglrx"
        BusID "PCI:1:0:0"
    EndSection

    Section "Screen"
        Identifier "aticonfig-Screen[0]-0"

So here is my question: can I simply change the name of the driver in the Device section to "radeon" instead of "fglrx" and have the radeon driver work? Or is there a way to use this as a template, change the appropriate lines, and activate the radeon driver through this file?
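For what it's worth, the edit being contemplated would look like the snippet below. This is only a sketch of the one-line driver swap being asked about; whether it works will also depend on the fglrx packages being removed cleanly (an assumption, not a confirmed fix):

```
Section "Device"
    Identifier "aticonfig-Device[0]-0"
    Driver     "radeon"
    BusID      "PCI:1:0:0"
EndSection
```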

    Read the article

  • How do I get my blacked out ttys back?

    - by con-f-use
Original Question: After I replaced my Ubuntu 10.10 with 11.04, all I get when I Ctrl+Alt+F1-6 into a tty is a black screen. Also, when I boot, there's a period of black screen after the grub2 menu is displayed; then it stays black until just before GNOME starts. I have an Nvidia GeForce Quadro FX 770M in my HP EliteBook 8530w. How do I get my ttys (aka 'virtual terminals') to work again?

My efforts, in chronological order:

So grub and gfxpayload seemed to be the problem, I figured. I went along with this guide for higher tty resolution, which led to the grub2 menu displaying in my native resolution rather than 800x600. The black screen problem remained.

I found some bug reports via Google about other Nvidia cards having that problem. I tried uninstalling the Nvidia driver. No effect. Also tried different resolutions.

With an older version of the kernel it works, though not perfectly: the ttys are usable, but the black screen between the grub2 menu and GNOME start remains. Not really a solution.

Tried so much that I lost track. Reinstalled grub2 and linux-image-2.6.38-8-generic. Then did this to my /etc/default/grub in accordance with the aforementioned guide (/etc/grub.d/00_header edited as well):

    GRUB_DEFAULT=0
    GRUB_HIDDEN_TIMEOUT=0
    #GRUB_HIDDEN_TIMEOUT_QUIET=true
    GRUB_TIMEOUT=3
    GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
    #GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
    GRUB_CMDLINE_LINUX=""
    GRUB_GFXMODE=1680x1050x32

To my surprise, I can now use my ttys in native resolution. The black screen between the grub2 menu and the GNOME login screen is still there, though. That is annoying, since I also use an encrypted disk and thus have to enter my passphrase in total darkness... Still looking for a solution, but urgency is low.

Downloaded and installed a later version of the Nvidia driver. No difference from the last edit.

Tried the GRUB_CMDLINE_LINUX="vga=" parameter. No effect. nomodeset has no effect, not even in combination with vga=...

Tried echo FRAMEBUFFER=y > /etc/initramfs-tools/conf.d/splash. No effect (see comment).

On the verge of resignation... Bounty period soon to end.

    Read the article

  • SQL SERVER – How to Force New Cardinality Estimation or Old Cardinality Estimation

    - by Pinal Dave
    After reading my initial two blog posts on the New Cardinality Estimation, I received quite a few questions. Once I received these questions, I felt I should have clarified a few things earlier, back when I started to write about cardinality. Before continuing with this blog, if you have not read them before, I suggest you read the following two blog posts. SQL SERVER – Simple Demo of New Cardinality Estimation Features of SQL Server 2014 SQL SERVER – Cardinality Estimation and Performance – SQL in Sixty Seconds #072

    Q: Will the new cardinality estimation improve the performance of all of my queries?
    A: Remember, there is no 0 or 1 logic when it is about estimation. The general assumption is that most queries will benefit from the new cardinality estimation introduced in SQL Server 2014. That is why the generic advice is to set the compatibility level of the database to 120, which is for SQL Server 2014.

    Q: Is it possible that after changing cardinality estimation to the new logic by setting the compatibility level to 120, I get degraded performance for a few queries?
    A: Yes, it is possible. However, the number of queries impacted this way should be very small.

    Q: Can I still run my database at an older compatibility level and force a few queries to use the newer cardinality estimation logic? If yes, how?
    A: Yes, you can do that. You will need to force your query with trace flag 2312 to use the newer cardinality estimation logic.

      USE AdventureWorks2014
      GO
      -- Old Cardinality Estimation
      ALTER DATABASE AdventureWorks2014 SET COMPATIBILITY_LEVEL = 110
      GO
      -- Using New Cardinality Estimation
      SELECT [AddressID],[AddressLine1],[City]
      FROM [Person].[Address]
      OPTION(QUERYTRACEON 2312);
      -- Using Old Cardinality Estimation
      SELECT [AddressID],[AddressLine1],[City]
      FROM [Person].[Address];
      GO

    Q: Can I run my database at the newer compatibility level and force a few queries to use the older cardinality estimation logic? If yes, how?
    A: Yes, you can do that. You will need to force your query with trace flag 9481 to use the older cardinality estimation logic.

      USE AdventureWorks2014
      GO
      -- New Cardinality Estimation
      ALTER DATABASE AdventureWorks2014 SET COMPATIBILITY_LEVEL = 120
      GO
      -- Using New Cardinality Estimation
      SELECT [AddressID],[AddressLine1],[City]
      FROM [Person].[Address];
      -- Using Old Cardinality Estimation
      SELECT [AddressID],[AddressLine1],[City]
      FROM [Person].[Address]
      OPTION(QUERYTRACEON 9481);
      GO

    I guess I have covered most of the questions I have received so far. If I have missed any questions, please send them again and I will include them. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
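    As a quick sanity check to go with the scripts above (a sketch against the standard catalog views; AdventureWorks2014 is the sample database used throughout), you can confirm which compatibility level a database is currently running at before and after the ALTER DATABASE statements:

      -- 110 = SQL Server 2012 (old estimator), 120 = SQL Server 2014 (new estimator)
      SELECT name, compatibility_level
      FROM sys.databases
      WHERE name = 'AdventureWorks2014';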

    Read the article

  • Distribution upgrade problem "No new release found"

    - by fefe
    I'm using Ubuntu 11.04. The update manager once found the new release 'oneiric', and it still shows this message when I log on via ssh:

      Welcome to Ubuntu 11.04 (GNU/Linux 2.6.38-14-generic x86_64)
      * Documentation: https://help.ubuntu.com/
      0 packages can be updated.
      0 updates are security updates.
      New release 'oneiric' available.
      Run 'do-release-upgrade' to upgrade to it.
      Last login: Wed Apr 25 16:22:48 2012 from ***

    But I didn't upgrade then, and I later changed my apt sources. Now I cannot upgrade to 'oneiric'. do-release-upgrade shows:

      $ sudo do-release-upgrade
      Checking for a new ubuntu release
      No new release found
      $

    And apt-get dist-upgrade shows:

      $ sudo apt-get dist-upgrade
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      Calculating upgrade... Done
      0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
      $

    I can successfully update all my packages. File contents of sources.list:

      $ cat /etc/apt/sources.list
      ## See sources.list(5) for more information, especialy
      # Remember that you can only use http, ftp or file URIs
      deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ natty main universe restricted multiverse
      deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ natty main universe restricted multiverse
      deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ natty-security universe main multiverse restricted
      deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ natty-security universe main multiverse restricted
      deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ natty-updates universe main multiverse restricted
      deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ natty-updates universe main multiverse restricted
      deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ natty-backports universe main multiverse restricted
      deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu/ natty-backports universe main multiverse restricted
      # deb http://ubuntu.dormforce.net/ubuntu/ lucid main universe restricted multiverse
      # deb-src http://ubuntu.dormforce.net/ubuntu/ lucid main universe restricted multiverse
      # deb http://ubuntu.dormforce.net/ubuntu/ lucid-security universe main multiverse restricted
      # deb-src http://ubuntu.dormforce.net/ubuntu/ lucid-security universe main multiverse restricted
      # deb http://ubuntu.dormforce.net/ubuntu/ lucid-updates universe main multiverse restricted
      # deb-src http://ubuntu.dormforce.net/ubuntu/ lucid-updates universe main multiverse restricted
      # CDROMs are managed through the apt-cdrom tool.
      # deb http://archive.canonical.com lucid partner
      # deb http://archive.canonical.com lucid-security partner
      # deb http://archive.canonical.com lucid-updates partner
      # deb-src http://archive.canonical.com lucid partner
      # deb-src http://archive.canonical.com lucid-security partner
      # deb-src http://archive.canonical.com lucid-updates partner
      #medibuntu repo
      # deb http://packages.medibuntu.org/ lucid free non-free
      # deb-src http://packages.medibuntu.org/ lucid free non-free
      # deb http://extras.ubuntu.com/ubuntu maverick main #Third party developers repository
      deb http://mirrors.sohu.com/ubuntu/ natty main restricted multiverse universe
      deb-src http://mirrors.sohu.com/ubuntu/ natty main universe restricted multiverse
      #Added by software-properties
      deb http://security.ubuntu.com/ubuntu/ natty-security universe main multiverse restricted
      deb-src http://mirrors.sohu.com/ubuntu/ natty-security universe main multiverse restricted
      deb http://mirrors.sohu.com/ubuntu/ natty-updates universe main multiverse restricted
      deb-src http://mirrors.sohu.com/ubuntu/ natty-updates universe main multiverse restricted

    File contents of /etc/update-manager/meta-release:

      $ cat /etc/update-manager/meta-release
      # default location for the meta-release file
      [METARELEASE]
      URI = http://changelogs.ubuntu.com/meta-release
      URI_LTS = http://changelogs.ubuntu.com/meta-release-lts
      URI_UNSTABLE_POSTFIX = -development
      URI_PROPOSED_POSTFIX = -proposed

    What may be the problem?
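    One plausible cause worth checking (a sketch, not a confirmed diagnosis for this machine): do-release-upgrade only offers non-LTS releases such as 'oneiric' when the upgrade policy allows it, so it is worth verifying /etc/update-manager/release-upgrades as well:

      # The Prompt value must be "normal" for an 11.04 -> 11.10 upgrade to be offered
      cat /etc/update-manager/release-upgrades
      # If it shows Prompt=lts or Prompt=never, change it and retry:
      sudo sed -i 's/^Prompt=.*/Prompt=normal/' /etc/update-manager/release-upgrades
      sudo do-release-upgrade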

    Read the article

  • Atheros AR9285 / Lenovo G560 wireless not working after installing 13.04

    - by teyi
    I had Ubuntu 12.04 initially installed on my laptop. I upgraded to 12.10, then 13.04. Everything worked fine, including wireless. After adding a new memory card (I only had 2 GB and one memory slot free) my wireless stopped working. I backed up all my data and reinstalled Ubuntu 13.04. Everything works fine except wireless. I bought this laptop in 2010 in Japan. It has an Intel Core i5 CPU M 450 @ 2.40 GHz x 4 and 3.7 GB RAM; OS type is 64-bit.

    The output of iwconfig:

      eth0    no wireless extensions.
      lo      no wireless extensions.
      wlan0   IEEE 802.11bgn  ESSID:off/any
              Mode:Managed  Access Point: Not-Associated  Tx-Power=15 dBm
              Retry long limit:7  RTS thr:off  Fragment thr:off
              Power Management:off

    The output of rfkill list all:

      0: ideapad_wlan: Wireless LAN
          Soft blocked: no
          Hard blocked: no
      1: phy0: Wireless LAN
          Soft blocked: no
          Hard blocked: no

    The output of lshw -C network:

      *-network
          description: Wireless interface
          product: AR9285 Wireless Network Adapter (PCI-Express)
          vendor: Atheros Communications Inc.
          physical id: 0
          bus info: pci@0000:05:00.0
          logical name: wlan0
          version: 01
          serial: 78:e4:00:7d:fe:fa
          width: 64 bits
          clock: 33MHz
          capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
          configuration: broadcast=yes driver=ath9k driverversion=3.8.0-19-generic firmware=N/A latency=0 link=no multicast=yes wireless=IEEE 802.11bgn
          resources: irq:17 memory:d6400000-d640ffff
      *-network
          description: Ethernet interface
          product: RTL8101E/RTL8102E PCI Express Fast Ethernet controller
          vendor: Realtek Semiconductor Co., Ltd.
          physical id: 0
          bus info: pci@0000:06:00.0
          logical name: eth0
          version: 02
          serial: 88:ae:1d:2b:36:ac
          size: 100Mbit/s
          capacity: 100Mbit/s
          width: 64 bits
          clock: 33MHz
          capabilities: pm msi pciexpress msix vpd bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation
          configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full ip=192.168.2.2 latency=0 link=yes multicast=yes port=MII speed=100Mbit/s
          resources: irq:41 ioport:2000(size=256) memory:d2410000-d2410fff memory:d2400000-d240ffff memory:d2420000-d243ffff

    The wi-fi network appears as disconnected (it's greyed out). Strangely enough, I see one wifi network (not mine), but not mine or the rest. That network doesn't require a password. I click on it, try to connect, and I get an error message: failed to connect to xxxxx ... (32) The access point /org/freedesktop/NetworkManager/AccessPoint/0 was not in the scan list. Someone help please.
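    A diagnostic sketch that is sometimes useful with ath9k (these are generic commands, not a guaranteed fix for this adapter): reload the driver module and check whether the card can scan at all:

      sudo modprobe -r ath9k                 # unload the Atheros driver
      sudo modprobe ath9k                    # load it again
      sudo iwlist wlan0 scan | grep ESSID    # see which networks the card reports
      dmesg | tail -n 20                     # look for ath9k errors after the reload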

    Read the article

  • Dynamically loading Assemblies to reduce Runtime Depencies

    - by Rick Strahl
    I've been working on a request to the West Wind Application Configuration library to add JSON support. The config library is a very easy to use code-first approach to configuration: You create a class that holds the configuration data and inherits from a base configuration class, and then assign a persistence provider at runtime that determines where and how the configuration data is stored. Currently the library supports .NET Configuration stores (web.config/app.config), XML files, SQL records and string storage. About once a week somebody asks me about JSON support and I've deflected this question for the longest time because frankly I think that JSON as a configuration store doesn't really buy a heck of a lot over XML. Both formats require the user to perform some fixup of the plain configuration data - in XML into XML tags, with JSON using JSON delimiters for properties and property formatting rules. Sure JSON is a little less verbose and maybe a little easier to read if you have hierarchical data, but overall the differences are pretty minor in my opinion. And yet - the requests keep rolling in.

    Hard Link Issues in a Component Library

    Another reason I've been hesitant is that I really didn't want to pull a dependency on an external JSON library - in this case JSON.NET - into the core library. If you're not using JSON.NET elsewhere, I don't want a user to have to take a hard dependency on JSON.NET unless they want to use the JSON feature. JSON.NET is also sensitive to versions and doesn't play nice with multiple versions when hard linked. For example, when you have a reference to V4.4 in your project but the host application has a reference to version 4.5, you can run into assembly load problems. NuGet's Update-Package can solve some of this *if* you can recompile, but that's not ideal for a component that's supposed to be just plug and play. This is no criticism of JSON.NET - this really applies to any dependency that might change. So hard linking the DLL can be problematic for a number of reasons, but the primary reason is to not force loading of JSON.NET unless you actually need it when you use the JSON configuration features of the library.

    Enter Dynamic Loading

    So rather than adding an assembly reference to the project, I decided that it would be better to dynamically load the DLL at runtime and then use dynamic typing to access the various classes. This allows me to run without a hard assembly reference and allows more flexibility with version number differences now and in the future. But there are also a couple of downsides:

    - No assembly reference means only dynamic access - no compiler type checking or Intellisense
    - Requirement for the host application to have a reference to JSON.NET or else get runtime errors

    The former is minor, but the latter can be problematic. Runtime errors are always painful, but in this case I'm willing to live with it. If you want to use JSON configuration settings, JSON.NET needs to be loaded in the project. If this is a Web project, it'll likely be there already. So there are a few things that are needed to make this work:

    - Dynamically create an instance and optionally attempt to load an Assembly (if not loaded)
    - Load types into dynamic variables
    - Use Reflection for a few tasks like statics/enums

    The dynamic keyword in C# makes the formerly most difficult Reflection part - method calls and property assignments - fairly painless. But as cool as dynamic is, it doesn't handle all aspects of Reflection.
    Specifically it doesn't deal with object activation, truly dynamic (string based) member activation or accessing of non-instance members, so there's still a little bit of work left to do with Reflection.

    Dynamic Object Instantiation

    The first step in getting the process rolling is to instantiate the type you need to work with. This might be a two step process - loading the instance from a string value, since we don't have a hard type reference, and potentially having to load the assembly. Although the host project might have a reference to JSON.NET, that instance might not have been loaded yet since it hasn't been accessed yet. In ASP.NET this won't be a problem, since ASP.NET preloads all referenced assemblies on AppDomain startup, but in other executable projects, assemblies are just-in-time loaded only when they are accessed. Instantiating a type is a two step process: finding the type reference and then activating it. Here's the generic code out of my ReflectionUtils library I use for this:

      /// <summary>
      /// Creates an instance of a type based on a string. Assumes that the type's
      /// </summary>
      /// <param name="typeName">Common name of the type</param>
      /// <param name="args">Any constructor parameters</param>
      /// <returns></returns>
      public static object CreateInstanceFromString(string typeName, params object[] args)
      {
          object instance = null;
          Type type = null;
          try
          {
              type = GetTypeFromName(typeName);
              if (type == null)
                  return null;
              instance = Activator.CreateInstance(type, args);
          }
          catch
          {
              return null;
          }
          return instance;
      }

      /// <summary>
      /// Helper routine that looks up a type name and tries to retrieve the
      /// full type reference in the actively executing assemblies.
      /// </summary>
      /// <param name="typeName"></param>
      /// <returns></returns>
      public static Type GetTypeFromName(string typeName)
      {
          Type type = null;

          // Let default name binding find it
          type = Type.GetType(typeName, false);
          if (type != null)
              return type;

          // look through assembly list
          var assemblies = AppDomain.CurrentDomain.GetAssemblies();

          // try to find manually
          foreach (Assembly asm in assemblies)
          {
              type = asm.GetType(typeName, false);
              if (type != null)
                  break;
          }
          return type;
      }

    To use this for loading JSON.NET I have a small factory function that instantiates JSON.NET and sets a bunch of configuration settings on the generated object. The startup code also looks for failure and tries loading the assembly when it fails, since that's the main reason the load would fail. Finally it also caches the loaded instance for reuse (according to James the JSON.NET instance is thread safe and quite a bit faster when cached).
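    As a brief aside, a hypothetical usage note for CreateInstanceFromString() above (the type names here are examples only): the type name must be resolvable in an already-loaded assembly, or the lookup returns null.

      // Resolves without extra work because mscorlib is always loaded
      var sb = ReflectionUtils.CreateInstanceFromString("System.Text.StringBuilder") as System.Text.StringBuilder;

      // Returns null unless the Newtonsoft.Json assembly is already loaded into the AppDomain
      dynamic json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer");

    With that in mind, back to the factory function.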
    Here's what the factory function looks like in JsonSerializationUtils:

      /// <summary>
      /// Dynamically creates an instance of JSON.NET
      /// </summary>
      /// <param name="throwExceptions">If true throws exceptions otherwise returns null</param>
      /// <returns>Dynamic JsonSerializer instance</returns>
      public static dynamic CreateJsonNet(bool throwExceptions = true)
      {
          if (JsonNet != null)
              return JsonNet;

          lock (SyncLock)
          {
              if (JsonNet != null)
                  return JsonNet;

              // Try to create instance
              dynamic json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer");

              if (json == null)
              {
                  try
                  {
                      var ass = AppDomain.CurrentDomain.Load("Newtonsoft.Json");
                      json = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.JsonSerializer");
                  }
                  catch (Exception ex)
                  {
                      if (throwExceptions)
                          throw;
                      return null;
                  }
              }

              if (json == null)
                  return null;

              json.ReferenceLoopHandling = (dynamic) ReflectionUtils.GetStaticProperty("Newtonsoft.Json.ReferenceLoopHandling", "Ignore");

              // Enums as strings in JSON
              dynamic enumConverter = ReflectionUtils.CreateInstanceFromString("Newtonsoft.Json.Converters.StringEnumConverter");
              json.Converters.Add(enumConverter);

              JsonNet = json;
          }
          return JsonNet;
      }

    This code's purpose is to return a fully configured JsonSerializer instance. As you can see, the code tries to create an instance and when it fails tries to load the assembly, and then re-tries loading. Once the instance is loaded some configuration occurs on it. Specifically I set the ReferenceLoopHandling option to not blow up immediately when circular references are encountered. There are a host of other small config settings that might be useful to set, but the defaults seem to be good enough in recent versions. Note that I'm setting ReferenceLoopHandling, which requires an Enum value to be set. There's no real easy way (short of using the cardinal numeric value) to set a property or pass parameters from static values or enums. This means I still need to use Reflection to make this work. I'm using the same ReflectionUtils class I previously used to handle this for me. The function looks up the type and then uses Type.InvokeMember() to read the static property. Another feature I need is to have Enum values serialized as strings rather than numeric values, which is the default. To do this I can use the StringEnumConverter to convert enums to strings by adding it to the Converters collection. As you can see there's still a bit of Reflection to be done even in C# 4+ with dynamic, but with a few helpers this process is relatively painless.

    Doing the actual JSON Conversion

    Finally I need to actually do my JSON conversions. For the Utility class I need serialization that works for both strings and files, so I created four methods that handle these tasks - two each for serialization and deserialization, for string and file. Here's what the File Serialization looks like:

      /// <summary>
      /// Serializes an object instance to a JSON file.
      /// </summary>
      /// <param name="value">the value to serialize</param>
      /// <param name="fileName">Full path to the file to write out with JSON.</param>
      /// <param name="throwExceptions">Determines whether exceptions are thrown or false is returned</param>
      /// <param name="formatJsonOutput">if true pretty-formats the JSON with line breaks</param>
      /// <returns>true or false</returns>
      public static bool SerializeToFile(object value, string fileName, bool throwExceptions = false, bool formatJsonOutput = false)
      {
          dynamic writer = null;
          FileStream fs = null;
          try
          {
              Type type = value.GetType();
              var json = CreateJsonNet(throwExceptions);
              if (json == null)
                  return false;

              fs = new FileStream(fileName, FileMode.Create);
              var sw = new StreamWriter(fs, Encoding.UTF8);

              writer = Activator.CreateInstance(JsonTextWriterType, sw);

              if (formatJsonOutput)
                  writer.Formatting = (dynamic)Enum.Parse(FormattingType, "Indented");

              writer.QuoteChar = '"';
              json.Serialize(writer, value);
          }
          catch (Exception ex)
          {
              Debug.WriteLine("JsonSerializer Serialize error: " + ex.Message);
              if (throwExceptions)
                  throw;
              return false;
          }
          finally
          {
              if (writer != null)
                  writer.Close();
              if (fs != null)
                  fs.Close();
          }
          return true;
      }

    You can see more of the dynamic invocation in this code. First I grab the dynamic JsonSerializer instance using the CreateJsonNet() method shown earlier, which returns a dynamic. I then create a JsonTextWriter and configure a couple of enum settings on it, and then call Serialize() on the serializer instance with the JsonTextWriter that writes the output to disk. Although this code is dynamic it's still fairly short and readable. For full circle operation here's the DeserializeFromFile() version:

      /// <summary>
      /// Deserializes an object from file and returns a reference.
      /// </summary>
      /// <param name="fileName">name of the file to serialize to</param>
      /// <param name="objectType">The Type of the object. Use typeof(yourobject class)</param>
      /// <param name="throwExceptions">determines whether failure will throw rather than return null on failure</param>
      /// <returns>Instance of the deserialized object or null. Must be cast to your object type</returns>
      public static object DeserializeFromFile(string fileName, Type objectType, bool throwExceptions = false)
      {
          dynamic json = CreateJsonNet(throwExceptions);
          if (json == null)
              return null;

          object result = null;
          dynamic reader = null;
          FileStream fs = null;
          try
          {
              fs = new FileStream(fileName, FileMode.Open, FileAccess.Read);
              var sr = new StreamReader(fs, Encoding.UTF8);
              reader = Activator.CreateInstance(JsonTextReaderType, sr);
              result = json.Deserialize(reader, objectType);
              reader.Close();
          }
          catch (Exception ex)
          {
              Debug.WriteLine("JsonNetSerialization Deserialization Error: " + ex.Message);
              if (throwExceptions)
                  throw;
              return null;
          }
          finally
          {
              if (reader != null)
                  reader.Close();
              if (fs != null)
                  fs.Close();
          }
          return result;
      }

    This code is a little more compact since there are no prettifying options to set.
    Here the JsonTextReader is created dynamically and handed to the Deserialize() operation on the serializer, which produces the result object. You can take a look at the full JsonSerializationUtils.cs file on GitHub to see the rest of the operations, but the string operations are very similar - the code is fairly repetitive. These generic serialization utilities isolate the dynamic serialization logic that has to deal with the dynamic nature of JSON.NET, and any code that uses these functions is none the wiser that JSON.NET is dynamically loaded.

    Using the JsonSerializationUtils Wrapper

    The final consumer of the SerializationUtils wrapper is an actual ConfigurationProvider that is responsible for handling reading and writing JSON values to and from files. The provider is simply a small wrapper around the SerializationUtils component and there's very little code to make this work now. The whole provider looks like this:

      /// <summary>
      /// Reads and Writes configuration settings in .NET config files and
      /// sections. Allows reading and writing to default or external files
      /// and specification of the configuration section that settings are
      /// applied to.
      /// </summary>
      public class JsonFileConfigurationProvider<TAppConfiguration> : ConfigurationProviderBase<TAppConfiguration>
          where TAppConfiguration : AppConfiguration, new()
      {
          /// <summary>
          /// Optional - the Configuration file where configuration settings are
          /// stored in. If not specified uses the default Configuration Manager
          /// and its default store.
          /// </summary>
          public string JsonConfigurationFile
          {
              get { return _JsonConfigurationFile; }
              set { _JsonConfigurationFile = value; }
          }
          private string _JsonConfigurationFile = string.Empty;

          public override bool Read(AppConfiguration config)
          {
              var newConfig = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfiguration)) as TAppConfiguration;
              if (newConfig == null)
              {
                  if (Write(config))
                      return true;
                  return false;
              }

              DecryptFields(newConfig);
              DataUtils.CopyObjectData(newConfig, config, "Provider,ErrorMessage");
              return true;
          }

          /// <summary>
          /// Return
          /// </summary>
          /// <typeparam name="TAppConfig"></typeparam>
          /// <returns></returns>
          public override TAppConfig Read<TAppConfig>()
          {
              var result = JsonSerializationUtils.DeserializeFromFile(JsonConfigurationFile, typeof(TAppConfig)) as TAppConfig;
              if (result != null)
                  DecryptFields(result);
              return result;
          }

          /// <summary>
          /// Write configuration to XmlConfigurationFile location
          /// </summary>
          /// <param name="config"></param>
          /// <returns></returns>
          public override bool Write(AppConfiguration config)
          {
              EncryptFields(config);
              bool result = JsonSerializationUtils.SerializeToFile(config, JsonConfigurationFile, false, true);

              // Have to decrypt again to make sure the properties are readable afterwards
              DecryptFields(config);
              return result;
          }
      }

    This incidentally demonstrates how easy it is to create a new provider for the West Wind Application Configuration component. Simply implementing 3 methods will do in most cases. Note this code doesn't have any dynamic dependencies - all that's abstracted away in JsonSerializationUtils. From here on, serializing JSON is just a matter of calling the static methods on the SerializationUtils class. Already there are several other places in some other tools where I use JSON serialization, and this is coming in very handy. With a couple of lines of code I was able to add JSON.NET support to an older AJAX library that I use, replacing quite a bit of code that was previously in use.
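    For reference, a hypothetical wiring sketch for the provider above (the MyAppConfig class and file name are made up for illustration; the provider, its Read/Write methods and the AppConfiguration base class come from the code shown):

      // Hypothetical configuration class used only for this example
      public class MyAppConfig : AppConfiguration
      {
          public string ApplicationName { get; set; }
      }

      var provider = new JsonFileConfigurationProvider<MyAppConfig> { JsonConfigurationFile = "app.json" };
      var config = new MyAppConfig();
      provider.Read(config);               // loads the file, or writes defaults on first run
      config.ApplicationName = "demo";
      provider.Write(config);              // persists the updated settings as JSON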
    And for any other manual JSON operations (in a couple of apps I use JSON serialization for 'blob'-like document storage) this is also going to be handy.

    Performance?

    Some of you might be thinking that using dynamic and Reflection can't be good for performance. And you'd be right… In performing some informal testing it looks like the native code is nearly twice as fast as the dynamic code. Most of the slowness is attributable to type lookups. To test, I created a native class that uses an actual reference to JSON.NET, and performance was consistently around 85-90% faster with the referenced code. That being said though - I serialized 10,000 objects in 80ms vs. 45ms, so this is hardly slouchy. For the configuration component speed is not that important, because both read and write operations typically happen once on first access and then only every once in a while. But for other operations - say a serializer trying to handle AJAX requests on a Web Server - one would be well served to create a hard dependency.

    Dynamic Loading - Worth it?

    On occasion dynamic loading makes sense. But there's a price to be paid in added code complexity and a performance hit. For some operations that are not pivotal to a component or application and are only used under certain circumstances, dynamic loading can be beneficial, avoiding the need to ship extra files and load down distributions. These days, when you create new projects in Visual Studio with 30 assemblies before you even add your own code, trying to keep file counts under control seems a good idea. It's not the kind of thing you do on a regular basis, but when needed it can be a useful tool. Hopefully some of you find this information useful…

    © Rick Strahl, West Wind Technologies, 2005-2013. Posted in .NET, C#

    Read the article

  • DELL Inspiron 9400 SD Card Reader not working on Ubuntu 11.10

    - by Mario Martz
    I'm new to Linux. I just installed Ubuntu 11.10 on a Dell Inspiron 9400, and everything works fine with the exception of the SD card reader: every time I insert a card the computer doesn't do anything, it's like the SD card reader is not there. I did an lspci and it shows the following devices:

      03:01.0 FireWire (IEEE 1394): Ricoh Co Ltd R5C832 IEEE 1394 Controller
      03:01.1 SD Host controller: Ricoh Co Ltd R5C822 SD/SDIO/MMC/MS/MSPro Host Adapter (rev 19)
      03:01.2 System peripheral: Ricoh Co Ltd R5C592 Memory Stick Bus Host Adapter (rev 0a)
      03:01.3 System peripheral: Ricoh Co Ltd xD-Picture Card Controller (rev 05)

    Every time I insert a memory card, dmesg shows the following:

      d status 0x600b00
      [ 2687.227351] end_request: I/O error, dev mmcblk0, sector 64
      [ 2687.229436] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00
      [ 2687.229440] end_request: I/O error, dev mmcblk0, sector 65
      [ 2687.230512] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00
      [ 2687.230515] end_request: I/O error, dev mmcblk0, sector 66
      [ 2687.231588] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00
      [ 2687.231592] end_request: I/O error, dev mmcblk0, sector 67
      [ 2687.232674] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00
      [ 2687.232678] end_request: I/O error, dev mmcblk0, sector 68
      [ 2687.234763] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00
      [ 2687.234766] end_request: I/O error, dev mmcblk0, sector 69
      [ 2687.236864] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00
      [ 2687.236868] end_request: I/O error, dev mmcblk0, sector 70
      [ 2687.238942] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00
      [ 2687.238946] end_request: I/O error, dev mmcblk0, sector 71
      [ 2687.238949] Buffer I/O error on device mmcblk0, logical block 8
      [ 2687.241028] mmcblk0: retrying using single block read
      [ 2687.243104] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00
      [ 2687.243108] end_request: I/O error, dev mmcblk0, sector 64
      [ 2687.245212] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00
      [ 2687.245215] end_request: I/O error, dev mmcblk0, sector 65
      [ 2687.247298] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00
      [ 2687.247302] end_request: I/O error, dev mmcblk0, sector 66
      [ 2687.248389] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00
      [ 2687.248393] end_request: I/O error, dev mmcblk0, sector 67
      [ 2687.250476] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00
      [ 2687.250480] end_request: I/O error, dev mmcblk0, sector 68
      [ 2687.252617] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00
      [ 2687.252621] end_request: I/O error, dev mmcblk0, sector 69
      [ 2687.254737] mmcblk0: error -110 sending read/write command, response 0x0, card status 0x600b00

    ...and more of the same but with different sector numbers. I'm using kernel 3.0.0-12-generic. By the way, when I was installing it and Ubuntu asked about the installation (whether I wanted to install it alongside Windows, delete something or change the partitions of the HDD), if I went to the window to change the partitions of the disk, Linux detected the SD card (if there was one inserted, of course). Any help with this would be appreciated - sorry for my English. Thank you
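    A diagnostic sketch (generic commands, not a confirmed fix for the Ricoh R5C822): reload the SDHCI driver stack and watch the kernel log while re-inserting the card:

      sudo modprobe -r sdhci_pci sdhci    # unload the SD host controller drivers
      sudo modprobe sdhci sdhci_pci       # load them again
      dmesg | tail -n 30                  # check for mmc/sdhci messages after inserting a card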

    Read the article

< Previous Page | 173 174 175 176 177 178 179 180 181 182 183 184  | Next Page >