Search Results

Search found 14283 results on 572 pages for 'django generic views'.


  • Some New .NET Downloads and Resources

    - by Kevin Grossnicklaus
    Last week I was fortunate enough to spend time in Redmond on Microsoft’s campus for the 2011 Microsoft MVP Summit. It was great to hang out with a number of old friends and get the opportunity to talk tech with the various product teams at Microsoft. The weather wasn’t exactly sunny, but Microsoft always does a great job with the Summit and everyone had a blast (heck, I even got to run the bases at Safeco Field). While much of what we saw is covered under NDA, there are a ton of great things in the pipeline from Microsoft, and many things that are already available (or just became so) that I wasn’t necessarily aware of. The purpose of this post is to share some of the info I learned on resources and tools available to .NET developers today. Please let me know if you have any questions (or if you know of something else cool which might benefit others). Enjoy!

    Visual Studio 2010 SP1
    Microsoft has issued the RTM release of Visual Studio 2010 SP1. You can download the full SP1 on MSDN as of today (March 10th to the general public) and take advantage of such things as:
    - Silverlight 4 is included in the box (as opposed to a separate install)
    - Silverlight 4 profiling
    - WCF RIA Services SP1
    - IntelliTrace for 64-bit and SharePoint
    - ASP.NET now easily supports IIS Express and SQL CE
    Want a description of all that’s new beyond the above biased list (which arguably only contains items I think are important)? Check out this KB article.

    Portable Library Tools CTP
    Without much fanfare Microsoft has released a CTP of a new add-in to Visual Studio 2010 which simplifies code sharing between projects targeting different runtimes (i.e. Silverlight, WPF, Windows Phone 7, Xbox). With this add-in installed you can add a new project of type “Portable Library” and specify which platforms you wish to target. Once that is done, any code added to this library will be limited to features which are common to all selected frameworks. Other projects can now reference this portable library and be provided assemblies custom built for their environment. This greatly simplifies the current process of sharing linked files between platforms like WPF and Silverlight. You can find out more about this CTP and how it works in this great blog post.

    Visual Studio Async CTP
    Microsoft has also released a CTP of a set of language and framework enhancements to provide a much more powerful asynchronous programming model. Given the focus on async programming on all types of platforms (and it being the ONLY option in Silverlight and Windows Phone 7), a move towards a simpler and more understandable model is always a good thing. This CTP (called the Visual Studio Async CTP) can be downloaded here, and you can read more about it on this blog post. A small sketch of the new model appears at the end of this post.

    MSDN Code Samples Gallery
    Microsoft has also launched a new code samples gallery on their MSDN site: http://code.msdn.microsoft.com/. This site allows you to easily search for small samples of code related to a particular technology or platform. If a sample of code you are looking for is not found, you can request one via the site; other developers can see your request and provide a sample to suit your needs. You can also peruse requested samples and, if you find a scenario where you can provide value, upload your own sample for the benefit of others. Samples are packaged into the VS .vsix format and include any necessary references/dependencies. By using .vsix as the deployment mechanism, samples installed from the site are kept in your Visual Studio 2010 Samples Gallery for future reference. If you get a chance, check out the site and see how it is done. Although a somewhat simple concept, I was very impressed with their implementation and the way they went about trying to suit a need. I’ll definitely be looking there in the future as I need something or want to share something.

    MSDN Search Capabilities
    Another item I learned recently and was not aware of (and that might seem trivial to some) is the power of the MSDN site’s search capabilities. Between the Code Samples Gallery described above and the search enhancements on MSDN, Microsoft is definitely investing in their platform to help provide developers of all skill levels the tools and resources they need to be successful. What do I mean by the MSDN search capability and why should you care? If you go to the MSDN home page (http://msdn.microsoft.com) and use the “Search MSDN with Bing” box at the very top of the page, you will see some very interesting results. First, the search doesn’t just search the MSDN Library; it searches:
    - MSDN Library
    - All Microsoft blogs
    - CodePlex
    - StackOverflow
    - Downloads
    - MSDN Magazine
    - Support Knowledge Base
    (I’m not sure it even ends there, but the above are all I know of.) Beyond just searching all the above locations, the results are formatted very nicely to give some contextual information based on where the result came from. For example, if a keyword search returns results from CodePlex, each row in the search results screen includes a large amount of information specific to CodePlex, everything from the page views to the CodePlex ratings. All in all, knowing that this much information is indexed and available from a single search location will lead me to utilize this as one of my initial searches for development information.
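    To give a flavor of the async model mentioned above, here is a minimal sketch written in the final C# 5 / .NET 4.5 form for clarity; the CTP itself shipped slightly different helper APIs (TaskEx, for example), so treat this as illustrative rather than CTP-exact code, and the URL is just an example:

    ```csharp
    using System;
    using System.Net;
    using System.Threading.Tasks;

    class AsyncSketch
    {
        // "async" marks the method as resumable; "await" yields control to the
        // caller until the download finishes, instead of blocking a thread.
        static async Task<int> GetPageLengthAsync(string url)
        {
            using (var client = new WebClient())
            {
                string content = await client.DownloadStringTaskAsync(url);
                return content.Length;
            }
        }

        static void Main()
        {
            // Blocking on .Result is fine for a console sketch like this one.
            int length = GetPageLengthAsync("http://msdn.microsoft.com").Result;
            Console.WriteLine("Downloaded {0} characters.", length);
        }
    }
    ```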

    Read the article

  • Mouse takes a while to start working after boot

    - by warkior
    I just recently installed Ubuntu 12.04 (64-bit) and a number of my USB devices have stopped working. At least, they don't work for the first 3-5 minutes. I have two mice (one wireless, one wired) and a camera, which seem to take Ubuntu 3-5 minutes to recognize after booting up. Eventually they do start to work, but it takes ages!

    lsusb results (when the mice are working...):

        $ lsusb
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 003 Device 002: ID 046d:c512 Logitech, Inc. LX-700 Cordless Desktop Receiver
        Bus 003 Device 003: ID 03f0:3f11 Hewlett-Packard PSC-1315/PSC-1317
        Bus 006 Device 002: ID 046d:c00c Logitech, Inc. Optical Wheel Mouse
        Bus 006 Device 003: ID 046d:c52b Logitech, Inc. Unifying Receiver

    syslog entries for what seems (to my very untrained eye) to be the problem:

        Oct 12 20:12:51 REMOVED-GA-MA785GM-US2H kernel: [ 17.420117] usb 2-3: device descriptor read/64, error -110
        Oct 12 20:12:57 REMOVED-GA-MA785GM-US2H goa[1879]: goa-daemon version 3.4.0 starting [main.c:112, main()]
        Oct 12 20:13:06 REMOVED-GA-MA785GM-US2H kernel: [ 32.636107] usb 2-3: device descriptor read/64, error -110
        Oct 12 20:13:06 REMOVED-GA-MA785GM-US2H kernel: [ 32.852122] usb 2-3: new high-speed USB device number 3 using ehci_hcd
        Oct 12 20:13:21 REMOVED-GA-MA785GM-US2H kernel: [ 47.964131] usb 2-3: device descriptor read/64, error -110
        Oct 12 20:13:37 REMOVED-GA-MA785GM-US2H kernel: [ 63.180115] usb 2-3: device descriptor read/64, error -110
        Oct 12 20:13:37 REMOVED-GA-MA785GM-US2H kernel: [ 63.396126] usb 2-3: new high-speed USB device number 4 using ehci_hcd
        Oct 12 20:13:47 REMOVED-GA-MA785GM-US2H kernel: [ 73.804158] usb 2-3: device not accepting address 4, error -110
        Oct 12 20:13:47 REMOVED-GA-MA785GM-US2H kernel: [ 73.916190] usb 2-3: new high-speed USB device number 5 using ehci_hcd
        Oct 12 20:13:58 REMOVED-GA-MA785GM-US2H kernel: [ 84.324160] usb 2-3: device not accepting address 5, error -110
        Oct 12 20:13:58 REMOVED-GA-MA785GM-US2H kernel: [ 84.324197] hub 2-0:1.0: unable to enumerate USB device on port 3
        Oct 12 20:13:58 REMOVED-GA-MA785GM-US2H udev-configure-printer: failed to claim interface
        Oct 12 20:13:58 REMOVED-GA-MA785GM-US2H udev-configure-printer: Failed to get parent
        Oct 12 20:13:58 REMOVED-GA-MA785GM-US2H udev-configure-printer: device devpath is /devices/pci0000:00/0000:00:12.0/usb3/3-3
        Oct 12 20:13:58 REMOVED-GA-MA785GM-US2H udev-configure-printer: MFG:hp MDL:psc 1310 series SERN:CN47CB60BJO2 serial:CN47CB60BJO2
        Oct 12 20:13:58 REMOVED-GA-MA785GM-US2H kernel: [ 84.768132] usb 5-3: new full-speed USB device number 2 using ohci_hcd
        Oct 12 20:14:01 REMOVED-GA-MA785GM-US2H udev-configure-printer: no corresponding CUPS device found
        Oct 12 20:14:13 REMOVED-GA-MA785GM-US2H kernel: [ 99.904185] usb 5-3: device descriptor read/64, error -110
        Oct 12 20:14:29 REMOVED-GA-MA785GM-US2H kernel: [ 115.144188] usb 5-3: device descriptor read/64, error -110
        Oct 12 20:14:29 REMOVED-GA-MA785GM-US2H kernel: [ 115.384178] usb 5-3: new full-speed USB device number 3 using ohci_hcd
        Oct 12 20:14:44 REMOVED-GA-MA785GM-US2H kernel: [ 130.520196] usb 5-3: device descriptor read/64, error -110
        Oct 12 20:14:59 REMOVED-GA-MA785GM-US2H kernel: [ 145.760179] usb 5-3: device descriptor read/64, error -110
        Oct 12 20:14:59 REMOVED-GA-MA785GM-US2H kernel: [ 146.000173] usb 5-3: new full-speed USB device number 4 using ohci_hcd
        Oct 12 20:15:10 REMOVED-GA-MA785GM-US2H kernel: [ 156.408168] usb 5-3: device not accepting address 4, error -110
        Oct 12 20:15:10 REMOVED-GA-MA785GM-US2H kernel: [ 156.544188] usb 5-3: new full-speed USB device number 5 using ohci_hcd
        Oct 12 20:15:20 REMOVED-GA-MA785GM-US2H kernel: [ 166.952181] usb 5-3: device not accepting address 5, error -110
        Oct 12 20:15:20 REMOVED-GA-MA785GM-US2H kernel: [ 166.952215] hub 5-0:1.0: unable to enumerate USB device on port 3
        Oct 12 20:15:21 REMOVED-GA-MA785GM-US2H kernel: [ 167.216164] usb 6-2: new low-speed USB device number 2 using ohci_hcd
        Oct 12 20:15:21 REMOVED-GA-MA785GM-US2H mtp-probe: checking bus 6, device 2: "/sys/devices/pci0000:00/0000:00:13.1/usb6/6-2"
        Oct 12 20:15:21 REMOVED-GA-MA785GM-US2H mtp-probe: bus: 6, device: 2 was not an MTP device
        Oct 12 20:15:21 REMOVED-GA-MA785GM-US2H kernel: [ 167.396138] input: Logitech USB Mouse as /devices/pci0000:00/0000:00:13.1/usb6/6-2/6-2:1.0/input/input16
        Oct 12 20:15:21 REMOVED-GA-MA785GM-US2H kernel: [ 167.396442] generic-usb 0003:046D:C00C.0003: input,hidraw2: USB HID v1.10 Mouse [Logitech USB Mouse] on usb-0000:00:13.1-2/input0
        Oct 12 20:15:21 REMOVED-GA-MA785GM-US2H kernel: [ 167.660187] usb 6-3: new full-speed USB device number 3 using ohci_hcd
        Oct 12 20:15:21 REMOVED-GA-MA785GM-US2H mtp-probe: checking bus 6, device 3: "/sys/devices/pci0000:00/0000:00:13.1/usb6/6-3"
        Oct 12 20:15:21 REMOVED-GA-MA785GM-US2H mtp-probe: bus: 6, device: 3 was not an MTP device
        Oct 12 20:15:21 REMOVED-GA-MA785GM-US2H kernel: [ 167.859045] logitech-djreceiver 0003:046D:C52B.0006: hiddev0,hidraw3: USB HID v1.11 Device [Logitech USB Receiver] on usb-0000:00:13.1-3/input2
        Oct 12 20:15:21 REMOVED-GA-MA785GM-US2H kernel: [ 167.865086] input: Logitech Unifying Device. Wireless PID:400a as /devices/pci0000:00/0000:00:13.1/usb6/6-3/6-3:1.2/0003:046D:C52B.0006/input/input17
        Oct 12 20:15:21 REMOVED-GA-MA785GM-US2H kernel: [ 167.865291] logitech-djdevice 0003:046D:C52B.0007: input,hidraw4: USB HID v1.11 Mouse [Logitech Unifying Device. Wireless PID:400a] on usb-0000:00:13.1-3:1
        Oct 12 20:15:24 REMOVED-GA-MA785GM-US2H colord: io/hpmud/musb.c 139: unable get_string_descriptor -1: Operation not permitted
        Oct 12 20:15:24 REMOVED-GA-MA785GM-US2H colord: io/hpmud/musb.c 2040: invalid product id string ret=-1
        Oct 12 20:15:24 REMOVED-GA-MA785GM-US2H colord: io/hpmud/musb.c 139: unable get_string_descriptor -1: Operation not permitted
        Oct 12 20:15:24 REMOVED-GA-MA785GM-US2H colord: io/hpmud/musb.c 2045: invalid serial id string ret=-1
        Oct 12 20:15:24 REMOVED-GA-MA785GM-US2H colord: io/hpmud/musb.c 139: unable get_string_descriptor -1: Operation not permitted
        Oct 12 20:15:24 REMOVED-GA-MA785GM-US2H colord: io/hpmud/musb.c 2050: invalid manufacturer string ret=-1

    Read the article

  • CD/DVD drive not mounted when inserted with Disc of any kind

    - by Cisco Sán
    I just noticed that if I insert a CD or a DVD of any kind, the drive will start spinning but it will not show the mounted disc. Before, it used to ask me what to do with the media inserted; now it doesn't even do that. I ran this in the terminal: eject -n, and it displays this: "eject: device is `/dev/sr0'". What can I do to get the functionality back on my drive? I also ran this command: sudo mount -o ro,unhide,uid=1000 /dev/cdrom /mnt/cdrom, but in return I get this: "mount: mount point /mnt/cdrom does not exist". Running Ubuntu 11.10.

    HERE IS THE HISTORY UNTIL NOW

    Thanks, waltinator. I ran 'dmesg' but don't know what I'm looking for; I'm a newbie at this. The same thing with the 'ls -rlt /var/log' command. Should I create the directory for the mount? At this point I really don't know what to do. – Cisco Sán 7 hours ago

    Here are 3 lines from my dmesg after I successfully inserted a CD:
        [ 4804.416018] wlan0: no IPv6 routers present
        [ 8214.125450] ISO 9660 Extensions: Microsoft Joliet Level 3
        [ 8214.136556] ISO 9660 Extensions: RRIP_1991A
    The first line is a previous event, my wireless going online. The next 2 lines are a good result. The number in square brackets is "seconds since boot"; the rest of the line is usually helpful. And no, you should NOT create the mount point. Let's try to get the automatic mounting to work. – waltinator 7 hours ago

    OK, these are my last 3 lines in 'dmesg':
        [ 18.130819] init: plymouth-stop pre-start process (1396) terminated with status 1
        [ 28.780011] wlan0: no IPv6 routers present
        [ 505.632119] CE: hpet increased min_delta_ns to 20113 nsec
    – Cisco Sán 6 hours ago

    It looks like your CD/DVD drive is not connected to the data bus, and not causing an interrupt when you insert a platter. – waltinator 6 hours ago

    Try dmesg | grep -A8 CD-ROM which should show you what the system thought was available when it came up. – waltinator 6 hours ago

    Here is my printout:
        [0.774351] scsi 0:0:0:0: CD-ROM HL-DT-ST DVD+-RW GSA-T40N A100 PQ: 0 ANSI: 5
        [0.778117] sr0: scsi3-mmc drive: 24x/24x writer dvd-ram cd/rw xa/form2 cdda tray
        [0.778122] cdrom: Uniform CD-ROM driver Revision: 3.20
        [0.778282] sr 0:0:0:0: Attached scsi CD-ROM sr0
        [0.778340] sr 0:0:0:0: Attached scsi generic sg0 type 5
        [0.780416] Freeing unused kernel memory: 984k freed
        [0.780732] Write protecting the kernel read-only data: 10240k
        [0.780986] Freeing unused kernel memory: 20k freed
        [0.786331] Freeing unused kernel memory: 1400k freed
        [0.804912] udevd[90]: starting version 173
        [0.874178] r8169 Gigabit Ethernet driver 2.3LK-NAPI loaded
        [0.874208] r8169 0000:02:00.0: PCI INT A - GSI 16 (level, low) - IRQ 16

    OK, your system sees the drive. Can you open and close the tray with eject and eject -t? Run udevadm monitor while you insert a CD (type ^C when done) and see if you get "change" and "add" messages. – waltinator 6 hours ago

    OK, "eject" works perfectly; "eject -t" does nothing. This is the message for "udevadm monitor":
        KERNEL[13771.009267] change /devices/pci0000:00/0000:00:1f.1/host0/target0:0:0/0:0:0:0/block/sr0 (block)
        UDEV  [13773.878887] change /devices/pci0000:00/0000:00:1f.1/host0/target0:0:0/0:0:0:0/block/sr0 (block)
    – Cisco Sán 6 hours ago

    sudo hwinfo --cdrom (the hwinfo package is installable through Software Center) describes my CD-ROM; try it. – waltinator 4 hours ago

    My readout from "sudo hwinfo --cdrom" is the following:
        hal.1: read hal dataprocess 2753: arguments to dbus_move_error() were incorrect, assertion "(dest) == NULL || !dbus_error_is_set ((dest))" failed in file ../../dbus/dbus-errors.c line 280. This is normally a bug in some application using the D-Bus library.
        libhal.c 3483 : Error unsubscribing to signals, error=The name org.freedesktop.Hal was not provided by any .service files
        22: SCSI 00.0: 10602 CD-ROM (DVD)
        [Created at block.247]
        Unique ID: KD9E.JgkxTS4hgl2
        Parent ID: 3p2J.gdUMCD83e+E
        SysFS ID: /class/block/sr0
        SysFS BusID: 0:0:0:0
        SysFS Device Link: /devices/pci0000:00/0000:00:1f.1/host0/target0:0:0/0:0:0:0
        Hardware Class: cdrom
        Model: "HL-DT-ST DVD+-RW GSA-T40N"
        Vendor: "HL-DT-ST"
        Device: "DVD+-RW GSA-T40N"
        Revision: "A100"
        Driver: "ata_piix", "sr"
        Driver Modules: "ata_piix"
        Device File: /dev/sr0 (/dev/sg0)
        Device Files: /dev/sr0, /dev/scd0, /dev/disk/by-id/ata-HL-DT-ST_DVD+_-RW_GSA-T40N_K048BJ74257, /dev/disk/by-path/pci-0000:00:1f.1-scsi-0:0:0:0, /dev/cdrom, /dev/cdrw, /dev/dvd, /dev/dvdrw
        Device Number: block 11:0 (char 21:0)
        Features: DVD
        Config Status: cfg=new, avail=yes, need=no, active=unknown
        Attached to: #17 (IDE interface)
        Drive Speed: 31
        Volume ID: "Movie"
        Publisher: "INTERVIDEO"
        Creation date: "20050424162207000"
    Thanks for the help. To Castro: hope this is what you meant, and sorry for the comments. – Cisco Sán

    Read the article

  • Building a plug-in for Windows Live Writer

    - by mbcrump
    This tutorial will show you how to build a plug-in for Windows Live Writer. Windows Live Writer is a blogging tool that Microsoft provides for free. It includes an open API for .NET developers to create custom plug-ins. In this tutorial, I will show you how easy it is to build one.

    Open VS2008 or VS2010 and create a new project. Set the target framework to 2.0 and the Application Type to Class Library, and give it a name. In this tutorial, we are going to create a plug-in that generates a Twitter message with your blog post name and a TinyUrl link to the blog post. It will do all of this automatically after you publish your post.

    Once we have a new project created, we need to set up the references. Add a reference to WindowsLive.Writer.Api.dll, located in the C:\Program Files (x86)\Windows Live\Writer\ folder if you are using the x64 version of Windows. You will also need to add references to System.Windows.Forms and System.Web from the .NET tab. Once that is complete, add your "using" statements so that it looks like what's shown below:

        using System;
        using System.Collections.Generic;
        using System.Text;
        using WindowsLive.Writer.Api;
        using System.Web;

    Now we are going to set up some build events to make it easier to test our custom class. Go into the Properties of your project, select Build Events, click Edit Post-build, and copy/paste the following line:

        XCOPY /D /Y /R "$(TargetPath)" "C:\Program Files (x86)\Windows Live\Writer\Plugins\"

    Next, we are going to launch an external program on debug. Click the Debug tab and enter:

        C:\Program Files (x86)\Windows Live\Writer\WindowsLiveWriter.exe

    Now we have a blank project and we need to add some code. We start by adding the attributes for the Live Writer plug-in. Before creating the attributes, we need a GUID; this GUID will uniquely identify our plug-in. To create one in VS2008/2010, click Tools from the VS menu -> Create GUID. It will generate a GUID like the one listed below:

        <Guid("56ED8A2C-F216-420D-91A1-F7541495DBDA")>

    We only want what's inside the quotes, so the final product should be: "56ED8A2C-F216-420D-91A1-F7541495DBDA". Go ahead and paste this snippet into your class just above the public class declaration:

        [WriterPlugin("56ED8A2C-F216-420D-91A1-F7541495DBDA",
           "Generate Twitter Message",
           Description = "After your new post has been published, this plug-in will attempt to generate a Twitter status message with the Title and TinyUrl link.",
           HasEditableOptions = false,
           Name = "Generate Twitter Message",
           PublisherUrl = "http://michaelcrump.net")]
        [InsertableContentSource("Generate Twitter Message")]

    Next, we need to inherit from the PublishNotificationHook class and override OnPostPublish. I'm not going to dive into what the code is doing, as you should be able to follow pretty easily. The code below is the entire code used in the project.

        public class Class1 : PublishNotificationHook
        {
            public override void OnPostPublish(System.Windows.Forms.IWin32Window dialogOwner, IProperties properties, IPublishingContext publishingContext, bool publish)
            {
                if (!publish) return;

                if (string.IsNullOrEmpty(publishingContext.PostInfo.Permalink))
                {
                    PluginDiagnostics.LogError("Live Tweet didn't execute, due to blank permalink");
                }
                else
                {
                    var strBlogName = HttpUtility.UrlEncode("#blogged : " + publishingContext.PostInfo.Title); // Blog post title
                    var strUrlFinal = getTinyUrl(publishingContext.PostInfo.Permalink); // Blog permalink URL converted to TinyURL
                    System.Diagnostics.Process.Start("http://twitter.com/home?status=" + strBlogName + strUrlFinal);
                }
            }

    We are going to go ahead and create a method to create the short URL (TinyURL):

            private static string getTinyUrl(string url)
            {
                var cmpUrl = System.Globalization.CultureInfo.InvariantCulture.CompareInfo;
                if (!cmpUrl.IsPrefix(url, "http://tinyurl.com"))
                {
                    var address = "http://tinyurl.com/api-create.php?url=" + url;
                    var client = new System.Net.WebClient();
                    return (client.DownloadString(address));
                }
                return (url);
            }
        }

    Go ahead and build your project; it should have copied the .DLL into the Windows Live Writer plugin directory. If it did not, then you will want to check your configuration. Once that is complete, open Windows Live Writer, select Tools -> Options -> Plug-ins, and enable the plug-in that you just created. Click OK and publish your blog post. You should get a pop-up; hit OK and it should open Twitter and either ask for a login or fill in your status. That should do it. You can do so many other things with the API; if you want to build something really useful, I suggest you consult the MSDN pages. This plug-in that I created was perfect for what I needed and I hope someone finds it useful.
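    One small refinement worth considering: the snippet above concatenates the encoded title and the TinyUrl with no separator, so the two can run together in the tweet. A minimal sketch of a helper that joins and encodes the pieces explicitly (the helper name is mine, not part of the Live Writer API):

    ```csharp
    using System.Web;

    static class TweetBuilder
    {
        // Builds the twitter.com/home?status= URL from a post title and short link.
        // Encoding the whole status, separator included, keeps title and link distinct.
        public static string BuildStatusUrl(string title, string shortUrl)
        {
            string status = "#blogged : " + title + " " + shortUrl;
            return "http://twitter.com/home?status=" + HttpUtility.UrlEncode(status);
        }
    }
    ```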

    Read the article

  • Disaster Recovery Discovery

    - by Rodney Landrum
    Last weekend I joined several of my IT staff on a mission to perform a DR test in our remote CoLo center in a large South East city of the US. Can I be more obtuse? The goal was simple for me as the sole DBA in a throng of Windows, Storage, Network and SAN admins: restore the databases and make them work. There were 4 applications that back-ended to 7 SQL Server databases on 4 different SQL Server instances. We would maintain the original server names, but beyond that it was fair game.

    We had time to prepare, so I was able to script out or otherwise automate the recovery process. I used sp_help_revlogin for three of the servers, a bit of a cheat actually, because restoring the Master database on the target DR servers was the specified course of action according to the DR procedures (the caveat "IF REQUIRED" left it open to interpretation). I really wanted to avoid the step of restoring Master for a number of reasons, but mainly because I did not want to deal with issues starting SQL services afterward. Having to account for the location of TempDB and the version conflicts of the resource DBs were just two of the battles I chose not to fight, not to mention other system database location problems that might arise and prevent SQL from starting. I was going to have to restore all of the user databases anyway, so I would not really gain any benefit, outside of logins, from taking the time to restore the source Master database over the newly installed one on the fresh server.

    What I wanted was the ability to restore the Master database as a user database, call it Master_Mine, from a backup on the source system, and then use that restored database to script the SQL logins and passwords on the DR systems. While I did not attempt this on the trip, the thought stuck in my mind, and this past week I succeeded at scripting user accounts and passwords using only a restored copy of the Master database. Granted, there were several challenges to overcome. Also, as is usual for any work like this, the usual disclaimers apply: this is not something that I would imagine Microsoft would condone or support, and this was really only an experiment for me to learn if it was even possible. While I have tested the process with success, I do not know that I would use this technique in a documented procedure, because future updates for SQL Server will render this technique non-functional.

    I thought at first, incorrectly of course, that I could use sp_help_revlogin on a restored copy of the Master database I named Master_Mine. Since sp_help_revlogin uses the system schema objects sys.syslogins and sys.server_principals, this was not going to work, because all results would come from the main Master database. To test this I added a SQL login via SSMS, backed up Master, restored it as Master_Mine, and then deleted the login. The test account was presumably still in the Master_Mine database, but I could not get to it to script out its creation with its password hash. The hash matters because with it I would not need to know the password, and any applications that stored that password would not have to be altered in the DR scenario; they would just work as expected.

    Once I realized that would not work, I began looking deeper. Knowing that sys.syslogins and sys.server_principals are system views, their underlying code should be available with sp_helptext, right? It was, and this led me to discover the two tables sys.sysxlgns and sys.sysprivs, where the data I needed was stored. These tables existed in both the real Master and the restored copy, Master_Mine. I used this information to tweak the sp_help_revlogin stored procedure to use these tables instead to create the logins cursor used in sp_help_revlogin.

    For the password hash, sp_help_revlogin uses the function LOGINPROPERTY(), which takes a user name and the option 'passwordhash' to return the hash for the user. Unfortunately, it requires the login to exist in the Master database, so this would not work. Another slight modification I had to make was to pull the password hash itself (pwdhash from sys.sysxlgns) into the logins cursor and comment out the section of sp_help_revlogin that uses LOGINPROPERTY. Instead, I pass the pwdhash value as the variable @PWD_varbinary to the sp_hexadecimal stored procedure, which is also created by and used within the code provided by Microsoft for sp_help_revlogin.

    The final challenge: sys.sysxlgns and sys.sysprivs are visible only within a Dedicated Administrator Connection (DAC) query window in SSMS or within SQLCMD. To open a DAC connection you have to be logged in on the SQL Server itself, via RDP in my case, and you preface the server name in the query connection with ADMIN:, so that the server connection looks like ADMIN:ServerName. From there you can create the modified stored procedure in the restored copy of a Master database from a source system under whatever name you like, and then run it. I named my new stored procedure usp_help_revlogin_MyMaster. Upon execution I was happy to see the logins and password hashes that I needed to apply from the source Master database, without having to restore over the new Master system database and without the need to access the original server (assuming it was down due to whatever disaster put it in that state).

    You will note that I am not providing full code samples here of the modifications. I will say that it was a slight bit of work, and anyone who needed to do this for whatever reason could fairly easily roll their own solution with the information provided herein. My goal, as I said, was to prove that this could be done and to provide another option, if required, to ease the burden of getting SQL Servers up and available in an emergency situation where alternatives may be more challenging or otherwise unavailable.

    Read the article

  • Atheros Wireless card shows up as two different models?

    - by geermc4
    Hi, I've been fighting these wireless drivers for a few days, and just recently I noticed that the model the wireless controller appears as in lspci is different sometimes. This is the data I have after installing Ubuntu Server 64-bit:

        ~# lspci -k
        ....
        04:00.0 Network controller: Atheros Communications Inc. AR9285 Wireless Network Adapter (PCI-Express) (rev 01)
        Subsystem: AzureWave Device 1d89
        Kernel driver in use: ath9k
        Kernel modules: ath9k

    I ran some updates and restarted, and all was good, although it did say that linux-headers-server, linux-image-server and linux-server were being kept back. After that I installed ubuntu-desktop (aptitude install ubuntu-desktop --without-recommends) and restarted. Now not only is the wireless not working anymore, but the hardware is listed as a different card:

        ~# lspci -k
        ....
        04:00.0 Ethernet controller: Atheros Communications Inc. AR5008 Wireless Network Adapter (rev 01)

    It has no available drivers. Still, I tried to modprobe ath9k; the modules show up in lsmod as loaded, but iw list shows nothing. This is what it looked like before the ubuntu-desktop installation:

        Wiphy phy0
        Band 1:
        Capabilities: 0x11ce HT20/HT40 SM Power Save disabled RX HT40 SGI TX STBC RX STBC 1-stream
        Max AMSDU length: 3839 bytes DSSS/CCK HT40
        Maximum RX AMPDU length 65535 bytes (exponent: 0x003)
        Minimum RX AMPDU time spacing: 8 usec (0x06)
        HT TX/RX MCS rate indexes supported: 0-7
        Frequencies: * 2412 MHz [1] (14.0 dBm) * 2417 MHz [2] (15.0 dBm) * 2422 MHz [3] (15.0 dBm) * 2427 MHz [4] (15.0 dBm) * 2432 MHz [5] (15.0 dBm) * 2437 MHz [6] (15.0 dBm) * 2442 MHz [7] (15.0 dBm) * 2447 MHz [8] (15.0 dBm) * 2452 MHz [9] (15.0 dBm) * 2457 MHz [10] (15.0 dBm) * 2462 MHz [11] (15.0 dBm) * 2467 MHz [12] (15.0 dBm) (passive scanning) * 2472 MHz [13] (14.0 dBm) (passive scanning) * 2484 MHz [14] (17.0 dBm) (passive scanning)
        Bitrates (non-HT): * 1.0 Mbps * 2.0 Mbps (short preamble supported) * 5.5 Mbps (short preamble supported) * 11.0 Mbps (short preamble supported) * 6.0 Mbps * 9.0 Mbps * 12.0 Mbps * 18.0 Mbps * 24.0 Mbps * 36.0 Mbps * 48.0 Mbps * 54.0 Mbps
        max # scan SSIDs: 4
        max scan IEs length: 2257 bytes
        Coverage class: 0 (up to 0m)
        Supported Ciphers: * WEP40 (00-0f-ac:1) * WEP104 (00-0f-ac:5) * TKIP (00-0f-ac:2) * CCMP (00-0f-ac:4) * CMAC (00-0f-ac:6)
        Available Antennas: TX 0x1 RX 0x3
        Configured Antennas: TX 0x1 RX 0x3
        Supported interface modes: * IBSS * managed * AP * AP/VLAN * WDS * monitor * mesh point * P2P-client * P2P-GO
        software interface modes (can always be added): * AP/VLAN * monitor
        interface combinations are not supported
        Supported commands: * new_interface * set_interface * new_key * new_beacon * new_station * new_mpath * set_mesh_params * set_bss * authenticate * associate * deauthenticate * disassociate * join_ibss * join_mesh * remain_on_channel * set_tx_bitrate_mask * action * frame_wait_cancel * set_wiphy_netns * set_channel * set_wds_peer * connect * disconnect
        Supported TX frame types: * IBSS: 0x0000 0x0010 0x0020 0x0030 0x0040 0x0050 0x0060 0x0070 0x0080 0x0090 0x00a0 0x00b0 0x00c0 0x00d0 0x00e0 0x00f0 * managed: 0x0000 0x0010 0x0020 0x0030 0x0040 0x0050 0x0060 0x0070 0x0080 0x0090 0x00a0 0x00b0 0x00c0 0x00d0 0x00e0 0x00f0 * AP: 0x0000 0x0010 0x0020 0x0030 0x0040 0x0050 0x0060 0x0070 0x0080 0x0090 0x00a0 0x00b0 0x00c0 0x00d0 0x00e0 0x00f0 * AP/VLAN: 0x0000 0x0010 0x0020 0x0030 0x0040 0x0050 0x0060 0x0070 0x0080 0x0090 0x00a0 0x00b0 0x00c0 0x00d0 0x00e0 0x00f0 * mesh point: 0x0000 0x0010 0x0020 0x0030 0x0040 0x0050 0x0060 0x0070 0x0080 0x0090 0x00a0 0x00b0 0x00c0 0x00d0 0x00e0 0x00f0 * P2P-client: 0x0000 0x0010 0x0020 0x0030 0x0040 0x0050 0x0060 0x0070 0x0080 0x0090 0x00a0 0x00b0 0x00c0 0x00d0 0x00e0 0x00f0 * P2P-GO: 0x0000 0x0010 0x0020 0x0030 0x0040 0x0050 0x0060 0x0070 0x0080 0x0090 0x00a0 0x00b0 0x00c0 0x00d0 0x00e0 0x00f0
        Supported RX frame types: * IBSS: 0x00d0 * managed: 0x0040 0x00d0 * AP: 0x0000 0x0020 0x0040 0x00a0 0x00b0 0x00c0 0x00d0 * AP/VLAN: 0x0000 0x0020 0x0040 0x00a0 0x00b0 0x00c0 0x00d0 * mesh point: 0x00b0 0x00c0 0x00d0 * P2P-client: 0x0040 0x00d0 * P2P-GO: 0x0000 0x0020 0x0040 0x00a0 0x00b0 0x00c0 0x00d0
        Device supports RSN-IBSS.

    What's with the hardware change? If it has 2, how can I make the AR9285 always load and disable the AR5008? Or is it the same card and it's just showing up differently? :| Oh, and I've tried this on Ubuntu 10.04 Server, Xubuntu 12.04, and Ubuntu 12.04 Desktop and Server. Thanks in advance.

    Here's some more info. I have it set up on 2 hard drives, one where it works and another I'm using to figure it out. The one that works:

        # lshw -class network
        *-network
            description: Ethernet interface
            product: RTL8111/8168B PCI Express Gigabit Ethernet controller
            vendor: Realtek Semiconductor Co., Ltd.
            physical id: 0
            bus info: pci@0000:03:00.0
            logical name: eth0
            version: 06
            serial: 54:04:a6:a3:3b:96
            size: 1Gbit/s
            capacity: 1Gbit/s
            width: 64 bits
            clock: 33MHz
            capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
            configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl_nic/rtl8168e-2.fw ip=192.168.2.147 latency=0 link=yes multicast=yes port=MII speed=1Gbit/s
            resources: irq:43 ioport:e000(size=256) memory:d0004000-d0004fff memory:d0000000-d0003fff
        *-network
            description: Wireless interface
            product: AR9285 Wireless Network Adapter (PCI-Express)
            vendor: Atheros Communications Inc.
            physical id: 0
            bus info: pci@0000:04:00.0
            logical name: wlan0
            version: 01
            serial: 74:2f:68:4a:26:73
            width: 64 bits
            clock: 33MHz
            capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
            configuration: broadcast=yes driver=ath9k driverversion=3.2.0-18-generic-pae firmware=N/A latency=0 link=no multicast=yes wireless=IEEE 802.11bgn
            resources: irq:18 memory:fea00000-fea0ffff

    Here's where it doesn't:

        # lshw -class network
        *-network
            description: Ethernet interface
            product: RTL8111/8168B PCI Express Gigabit Ethernet controller
            vendor: Realtek Semiconductor Co., Ltd.
            physical id: 0
            bus info: pci@0000:03:00.0
            logical name: eth0
            version: 06
            serial: 54:04:a6:a3:3b:96
            size: 1Gbit/s
            capacity: 1Gbit/s
            width: 64 bits
            clock: 33MHz
            capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation
            configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl_nic/rtl8168e-2.fw ip=192.168.2.160 latency=0 link=yes multicast=yes port=MII speed=1Gbit/s
            resources: irq:43 ioport:e000(size=256) memory:d0004000-d0004fff memory:d0000000-d0003fff
        *-network UNCLAIMED
            description: Ethernet controller
            product: AR5008 Wireless Network Adapter
            vendor: Atheros Communications Inc.
            physical id: 0
            bus info: pci@0000:04:00.0
            version: 01
            width: 64 bits
            clock: 33MHz
            capabilities: pm msi pciexpress bus_master cap_list
            configuration: latency=0
            resources: memory:fea00000-fea0ffff

    Update: I've noticed that if I blacklist the ath9k and ath9k_common modules, lspci gives me the AR9285, but then I need to modprobe ath9k for it to work. Does this make any sense? If so, why?

    Read the article

  • Extended FindWindow

    - by João Angelo
    The Win32 API provides the FindWindow function that supports finding top-level windows by their class name and/or title. However, the title search does not work if you are trying to match partial text at the middle or the end of the full window title. You can, however, implement support for these extended search features by using another set of Win32 APIs, like EnumWindows and GetWindowText. A possible implementation follows:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Runtime.InteropServices;
        using System.Text;

        public class WindowInfo
        {
            private IntPtr handle;
            private string className;

            internal WindowInfo(IntPtr handle, string title)
            {
                if (handle == IntPtr.Zero)
                    throw new ArgumentException("Invalid handle.", "handle");

                this.Handle = handle;
                this.Title = title ?? string.Empty;
            }

            public string Title { get; private set; }

            public string ClassName
            {
                get
                {
                    if (className == null)
                    {
                        className = GetWindowClassNameByHandle(this.Handle);
                    }
                    return className;
                }
            }

            public IntPtr Handle
            {
                get
                {
                    if (!NativeMethods.IsWindow(this.handle))
                        throw new InvalidOperationException("The handle is no longer valid.");
                    return this.handle;
                }
                private set { this.handle = value; }
            }

            public static WindowInfo[] EnumerateWindows()
            {
                var windows = new List<WindowInfo>();
                NativeMethods.EnumWindowsProcessor processor = (hwnd, lParam) =>
                {
                    windows.Add(new WindowInfo(hwnd, GetWindowTextByHandle(hwnd)));
                    return true;
                };
                bool succeeded = NativeMethods.EnumWindows(processor, IntPtr.Zero);
                if (!succeeded)
                    return new WindowInfo[] { };
                return windows.ToArray();
            }

            public static WindowInfo FindWindow(Predicate<WindowInfo> predicate)
            {
                WindowInfo target = null;
                NativeMethods.EnumWindowsProcessor processor = (hwnd, lParam) =>
                {
                    var current = new WindowInfo(hwnd, GetWindowTextByHandle(hwnd));
                    if (predicate(current))
                    {
                        target = current;
                        return false;
                    }
                    return true;
                };
                NativeMethods.EnumWindows(processor, IntPtr.Zero);
                return target;
            }

            private static string GetWindowTextByHandle(IntPtr handle)
            {
                if (handle == IntPtr.Zero)
                    throw new ArgumentException("Invalid handle.", "handle");

                int length = NativeMethods.GetWindowTextLength(handle);
                if (length == 0)
                    return string.Empty;

                var buffer = new StringBuilder(length + 1);
                NativeMethods.GetWindowText(handle, buffer, buffer.Capacity);
                return buffer.ToString();
            }

            private static string GetWindowClassNameByHandle(IntPtr handle)
            {
                if (handle == IntPtr.Zero)
                    throw new ArgumentException("Invalid handle.", "handle");

                const int WindowClassNameMaxLength = 256;
                var buffer = new StringBuilder(WindowClassNameMaxLength);
                NativeMethods.GetClassName(handle, buffer, buffer.Capacity);
                return buffer.ToString();
            }
        }

        internal class NativeMethods
        {
            public delegate bool EnumWindowsProcessor(IntPtr hwnd, IntPtr lParam);

            [DllImport("user32.dll")]
            [return: MarshalAs(UnmanagedType.Bool)]
            public static extern bool EnumWindows(
                EnumWindowsProcessor lpEnumFunc, IntPtr lParam);

            [DllImport("user32.dll", SetLastError = true, CharSet = CharSet.Auto)]
            public static extern int GetWindowText(
                IntPtr hWnd, StringBuilder lpString, int nMaxCount);

            [DllImport("user32.dll", SetLastError = true, CharSet = CharSet.Auto)]
            public static extern int GetWindowTextLength(IntPtr hWnd);

            [DllImport("user32.dll", SetLastError = true, CharSet = CharSet.Auto)]
            public static extern int GetClassName(
                IntPtr hWnd, StringBuilder lpClassName, int nMaxCount);

            [DllImport("user32.dll")]
            [return: MarshalAs(UnmanagedType.Bool)]
            public static extern bool IsWindow(IntPtr hWnd);
        }

    Access to the window's handle is preceded by a sanity check to assert that it's still valid, but if you are dealing with windows outside of your control the window can be destroyed right after the check, so it's not guaranteed that you'll get a valid handle. Finally, to wrap this up, a usage example:

        static void Main(string[] args)
        {
            var w = WindowInfo.FindWindow(wi => wi.Title.Contains("Test.docx"));
            if (w != null)
            {
                Console.Write(w.Title);
            }
        }

    Read the article

  • When is my View too smart?

    - by Kyle Burns
    In this posting, I will discuss the motivation behind keeping View code as thin as possible when using patterns such as MVC, MVVM, and MVP. Once the motivation is identified, I will examine some ways to determine whether a View contains logic that belongs in another part of the application. While the concepts that I will discuss are applicable to most any pattern which favors a thin View, any concrete examples that I present will center on ASP.NET MVC.

    Design patterns that include a Model, a View, and other components such as a Controller, ViewModel, or Presenter are not new to application development. These patterns have, in fact, been around since the early days of building applications with graphical interfaces. The reason that these patterns emerged is simple: the code running closest to the user tends to be littered with logic and library calls that center around implementation details of showing and manipulating user interface widgets, and when this type of code is interspersed with application domain logic it becomes difficult to understand and much more difficult to adequately test. By removing domain logic from the View, we ensure that the View has the single responsibility of drawing the screen, which, in turn, makes our application easier to understand and maintain.

    I was recently asked to take a look at an ASP.NET MVC View because the developer reviewing it thought that it possibly had too much going on in it. I looked at the .CSHTML file, and the first thing that occurred to me was that it began with 40 lines of code declaring member variables and performing the calculations necessary to populate those variables, which were later either output directly to the page or used to control some conditional rendering action (such as adding a class name to an HTML element or not rendering another element at all). This exhibited both of what I consider the primary heuristics (or code smells) indicating that the View is too smart:

    Member variables - in general, variables in View code are an indication that the Model to which the View is being bound is not sufficient for the needs of the View and that the View has had to augment that Model. Notable exceptions to this guideline include variables used to hold information specifically related to rendering (such as a dynamically determined CSS class name or the depth within a recursive structure for indentation purposes) and variables which are used to facilitate looping through collections while binding.

    Arithmetic - as with member variables, the presence of arithmetic operators within View code is an indication that the Model servicing the View is insufficient for its needs. For example, if the Model represents a line item in a sales order, it might seem perfectly natural to "normalize" the Model by storing the quantity and unit price in the Model and multiplying these within the View to show the line total. While this does seem natural, it introduces a business rule to the View code and makes it impossible to test that the rounding of the result meets the requirement of the business without executing the View. Within View code, arithmetic should only be used for activities such as incrementing loop counters and calculating element widths.

    In addition to the two characteristics of a "Smart View" that I've discussed already, this View also exhibited another heuristic that commonly indicates to me the need to refactor a View and make it a bit less smart. That characteristic is the existence of Boolean logic that either does not work directly with properties of the Model or works with too many properties of the Model. Consider the following code, and consider how logic that does not work directly with properties of the Model is just another form of the "member variable" heuristic covered earlier:

        @if(DateTime.Now.Hour < 12)
        {
            <div>Good Morning!</div>
        }
        else
        {
            <div>Greetings</div>
        }

    This code performs business logic to determine whether it is morning. A possible refactoring would be to add an IsMorning property to the Model, but in this particular case there is enough similarity between the branches that the entire branching structure could be collapsed by adding a Greeting property to the Model and using it similarly to the following:

        <div>@Model.Greeting</div>

    Now let's look at some complex logic around multiple Model properties:

        @if (Model.PageNumber + Model.NumbersToDisplay == Model.PageCount
                || (Model.PageCount != Model.CurrentPage
                    && !Model.DisplayValues.Contains(Model.PageCount)))
        {
            <div>There's more to see!</div>
        }

    In this scenario, not only is the View code difficult to read (you shouldn't have to play "human compiler" to determine the purpose of the code), but it is also complex enough to be at risk for logical errors that cannot be detected without executing the View. Conditional logic that requires more than a single logical operator should be looked at more closely to determine whether the condition should be evaluated elsewhere and exposed as a single property of the Model. Moving the logic above outside of the View and exposing a new Model property would simplify the View code to:

        @if(Model.HasMoreToSee)
        {
            <div>There's more to see!</div>
        }

    In this posting I have briefly discussed some of the more prominent heuristics that indicate a need to push code from the View into other pieces of the application. You should now be able to recognize these symptoms when building or maintaining Views (or the Models that support them) in your applications.
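    To make the last refactoring concrete, here is a minimal sketch of the enriched Model; the class is hypothetical, with property names mirroring the snippets above rather than code from the original post:

    ```csharp
    using System.Collections.Generic;

    public class PagingViewModel
    {
        public int PageNumber { get; set; }
        public int NumbersToDisplay { get; set; }
        public int PageCount { get; set; }
        public int CurrentPage { get; set; }
        public ICollection<int> DisplayValues { get; set; }

        // The Boolean logic moves out of the Razor view and becomes unit-testable.
        public bool HasMoreToSee
        {
            get
            {
                return PageNumber + NumbersToDisplay == PageCount
                    || (PageCount != CurrentPage && !DisplayValues.Contains(PageCount));
            }
        }
    }
    ```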

    Read the article

  • WIF-less claim extraction from ACS: SWT

    - by Elton Stoneman
    WIF with SAML is solid and flexible, but unless you need the power it can be overkill for simple claim assertion, and in the REST world WIF doesn't have support for the latest token formats. Simple Web Token (SWT) may not be around forever, but while it's here it's a nice easy format which you can manipulate in .NET without having to go down the WIF route.

    Assuming you have set up a Relying Party in ACS, specifying SWT as the token format: when ACS redirects to your login page, it will POST the SWT in the first form variable. It comes through in the BinarySecurityToken element of a RequestSecurityTokenResponse XML payload; the SWT type is specified with a TokenType of http://schemas.xmlsoap.org/ws/2009/11/swt-token-profile-1.0:

        <t:RequestSecurityTokenResponse xmlns:t="http://schemas.xmlsoap.org/ws/2005/02/trust">
          <t:Lifetime>
            <wsu:Created xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">2012-08-31T07:31:18.655Z</wsu:Created>
            <wsu:Expires xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">2012-08-31T09:11:18.655Z</wsu:Expires>
          </t:Lifetime>
          <wsp:AppliesTo xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy">
            <EndpointReference xmlns="http://www.w3.org/2005/08/addressing">
              <Address>http://localhost/x.y.z</Address>
            </EndpointReference>
          </wsp:AppliesTo>
          <t:RequestedSecurityToken>
            <wsse:BinarySecurityToken wsu:Id="uuid:fc8d3332-d501-4bb0-84ba-d31aa95a1a6c" ValueType="http://schemas.xmlsoap.org/ws/2009/11/swt-token-profile-1.0" EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"> [ base64string ] </wsse:BinarySecurityToken>
          </t:RequestedSecurityToken>
          <t:TokenType>http://schemas.xmlsoap.org/ws/2009/11/swt-token-profile-1.0</t:TokenType>
          <t:RequestType>http://schemas.xmlsoap.org/ws/2005/02/trust/Issue</t:RequestType>
          <t:KeyType>http://schemas.xmlsoap.org/ws/2005/05/identity/NoProofKey</t:KeyType>
        </t:RequestSecurityTokenResponse>

    Reading the SWT is as simple as base64-decoding, then URL-decoding, the element value:

        var wrappedToken = XDocument.Parse(HttpContext.Current.Request.Form[1]);
        var binaryToken = wrappedToken.Root.Descendants("{http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd}BinarySecurityToken").First();
        var tokenBytes = Convert.FromBase64String(binaryToken.Value);
        var token = Encoding.UTF8.GetString(tokenBytes);
        var tokenType = wrappedToken.Root.Descendants("{http://schemas.xmlsoap.org/ws/2005/02/trust}TokenType").First().Value;

    The decoded token contains the claims as key/value pairs, along with the issuer, audience (ACS realm), expiry date and an HMAC hash, all in query string format. Separate them on the ampersand, and you can write out the claim values in your logged-in page:

        var decoded = HttpUtility.UrlDecode(token);
        foreach (var part in decoded.Split('&'))
        {
            Response.Write("<pre>" + part + "</pre><br/>");
        }

    which will produce something like this:

        http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationinstant=2012-08-31T06:57:01.855Z
        http://schemas.microsoft.com/ws/2008/06/identity/claims/authenticationmethod=http://schemas.microsoft.com/ws/2008/06/identity/authenticationmethod/windows
        http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname=XYZ
        http://schemas.xmlsoap.org/ws/2005/05/identity/claims/[email protected]
        http://schemas.xmlsoap.org/ws/2005/05/identity/claims/[email protected]
        http://schemas.microsoft.com/accesscontrolservice/2010/07/claims/identityprovider=http://fs.svc.xyz.com/adfs/services/trust
        Audience=http://localhost/x.y.z
        ExpiresOn=1346402225
        Issuer=https://x-y-z.accesscontrol.windows.net/
        HMACSHA256=oDCeEDDAWEC8x+yBnTaCLnzp4L6jI0Z/xNK95PdZTts=

    The HMAC hash lets you validate the token to ensure it hasn't been tampered with. You'll need the token signing key from ACS; then you can re-sign the token and compare hashes. There's a full implementation of an SWT parser and validator here: How To Request SWT Token From ACS And How To Validate It At The REST WCF Service Hosted In Windows Azure, and a cut-down claim inspector on my github code gallery: ACS Claim Inspector. Interestingly, ACS lets you have a value for your logged-in page which has no relation to the realm for authentication, so you can put this code into a generic claim inspector page, and set that to be your logged-in page for any relying party where you want to check what's being sent through. Particularly handy with ADFS, when you're modifying the claims provided and want to quickly see the results.
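    As a sketch of that re-sign-and-compare step, assuming the raw (still URL-encoded) token string and a base64-encoded ACS signing key, something like the following works; the class and method names are mine, and the linked article covers the production-grade details:

    ```csharp
    using System;
    using System.Security.Cryptography;
    using System.Text;
    using System.Web;

    static class SwtValidator
    {
        // Re-signs everything before the trailing &HMACSHA256= pair with the ACS
        // token signing key and compares the result to the hash the token carries.
        public static bool IsSignatureValid(string rawToken, string base64SigningKey)
        {
            const string hmacPair = "&HMACSHA256=";
            int index = rawToken.LastIndexOf(hmacPair, StringComparison.Ordinal);
            if (index < 0) return false;

            string unsignedContent = rawToken.Substring(0, index);
            // The hash value is URL-encoded in the raw token, so decode it first.
            string claimedHash = HttpUtility.UrlDecode(rawToken.Substring(index + hmacPair.Length));

            using (var hmac = new HMACSHA256(Convert.FromBase64String(base64SigningKey)))
            {
                byte[] computed = hmac.ComputeHash(Encoding.UTF8.GetBytes(unsignedContent));
                return Convert.ToBase64String(computed) == claimedHash;
            }
        }
    }
    ```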

    Read the article

  • Training a 'replacement', how to enforce standards?

    - by Mohgeroth
    Not sure that this is the right Stack Exchange site to ask this on, but here goes...

    Scope
    I work for a small company that employs a few hundred people. The development team for the company is small and works out of Visual FoxPro. A specific department in the company hired me as a 'lone gunman' to fix and enhance a pre-existing invoicing system. I've successfully taken an Access application that suffered from a lot of risks and limitations and converted it into a C# application driven off of a SQL Server backend. I have recently obtained my undergraduate degree and am no expert by any means. To help make up for that, I've felt that earning Microsoft certifications will force me to understand more about .NET and how it functions. So, after giving my notice 9 months in advance, a replacement finally showed up 3 months ago. Their role is to learn what I have been designing, in an attempt to support the applications designed in C#.

    The Replacement
    Fresh out of college with no real-world work experience, their first instinct for anything involving data was, and still is, listboxes... any time data is mentioned, the listbox is the control of choice for the replacement. This has gotten to the point, no matter how many times I discuss other controls, where I've seen 5 listboxes on a single form. Their classroom experience was almost all C++ console development.

    So, an example of where I have concern, in a WinForms application: users need to key Reasons into a table to select from later. Given that I know that a strongly typed dataset exists, I can just drag the data source from the toolbox and it would create all of this for me. I realize this is a simple example, but using databinding is the key. For the past few months now we have been talking about the strongly typed dataset: how to use it, where it interacts with other controls, and how datasets work in relation to binding sources, adapters and data grid views. After handing this project off, I expected questions about how to implement these, since for me this is the way to do it.

    What happened next simply floors me: an instance of an adapter from the strongly typed dataset was created in the Activate event of the form, and a table was created and filled with data. Then a loop was made to manually add rows to a listbox from this table. Finally, a variable was kept to do lookups to figure out what ID the record was, for updates if required. How do they modify records, you ask? That was my first question too. You won't believe how simple it is: all you do is double-click, and they type the new value into a pop-up prompt. As a data entry operator, all the modal popups would drive me absolutely insane. The final solution exceeds 100 lines of code that must be maintained. (A sketch of the databinding alternative appears at the end of this question.)

    So my concern is that none of this is sinking in... the department is only allowed 20 hours a week of their time. Up until last week, we've only been given 4-5 hours a week if I'm lucky. The past week or so, I've been lucky to get 10.

    Question
    WHAT DO I DO?! I have 4 weeks left until I leave, and then they fully 'support' this application. I love this job and the opportunity it has given me, but it's time for me to spread my wings and find something new. I am in no way, shape or form convinced that they are ready to take over. I do feel that the replacement has the technical ability to 'figure it out', but instead of learning they just write code to do all of this stuff manually. If the replacement wants to code differently in the end, as long as it works I'm fine with that, as horrifying as it looks. However, to support what I have designed they MUST understand how it works and how I have used controls and the framework to make 'magic' happen. This project has about 40 forms, a database with over 30-some-odd tables, triggers and stored procedures. It relates labor to invoices to contracts to projections... it's not as simple as it was three years ago when I began this project, and the department is now in a position where they cannot survive without it.

    How in the world can I accomplish any of the following?
    - Enforce standards or understanding in consistent design when the department manager keeps telling them they can do it however they want to
    - Find a way to engage the replacement in active learning of the framework and system design that support must be given for
    - Gracefully inform sr. management that 5-9 hours a week is simply not enough time to learn about the department, pre-existing processes, applications that need to be supported AND determine where potential enhancements to the system go...

    Yes, I know this is a wall of text; thanks for reading through. I simply don't know what I should be doing. For me, this job is a monster of a reference, and things would look extremely bad if I left and things fell apart. How do I handle this?
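    For readers less familiar with the databinding approach contrasted above, here is a minimal, self-contained WinForms sketch of binding a Reasons table to a grid; the table and column names are illustrative, and a plain DataTable stands in for the strongly typed dataset described in the question:

    ```csharp
    using System;
    using System.Data;
    using System.Windows.Forms;

    static class ReasonsBindingSketch
    {
        [STAThread]
        static void Main()
        {
            // Build a table of Reasons; in the real application this would be a
            // strongly typed dataset filled by its generated table adapter.
            var reasons = new DataTable("Reasons");
            reasons.Columns.Add("Id", typeof(int));
            reasons.Columns.Add("Description", typeof(string));
            reasons.Rows.Add(1, "Damaged in transit");
            reasons.Rows.Add(2, "Wrong item shipped");

            // Binding replaces the manual listbox-filling loop and the
            // hand-tracked ID variable: grid edits update the rows directly.
            var bindingSource = new BindingSource { DataSource = reasons };
            var grid = new DataGridView { DataSource = bindingSource, Dock = DockStyle.Fill };

            var form = new Form { Text = "Reasons" };
            form.Controls.Add(grid);
            Application.Run(form);
        }
    }
    ```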

    Read the article

  • Generically correcting data before save with Entity Framework

    - by koevoeter
    Been working with Entity Framework (.NET 4.0) for a week now for a data migration job and needed some code that generically corrects string values in the database. You have probably also seen things like empty strings instead of NULL or non-trimmed texts ("United States       ") in "old" databases, and you don't want to apply a correcting function on every column you migrate. Here's how I've done this (extending the partial class of my ObjectContext):

        public partial class MyDatacontext
        {
            partial void OnContextCreated()
            {
                SavingChanges += OnSavingChanges;
            }

            private void OnSavingChanges(object sender, EventArgs e)
            {
                foreach (var entity in GetPersistingEntities(sender))
                {
                    foreach (var propertyInfo in GetStringProperties(entity))
                    {
                        var value = (string)propertyInfo.GetValue(entity, null);

                        if (value == null)
                        {
                            continue;
                        }

                        if (value.Trim().Length == 0 && IsNullable(propertyInfo))
                        {
                            propertyInfo.SetValue(entity, null, null);
                        }
                        else if (value != value.Trim())
                        {
                            propertyInfo.SetValue(entity, value.Trim(), null);
                        }
                    }
                }
            }

            private IEnumerable<object> GetPersistingEntities(object sender)
            {
                return ((ObjectContext)sender).ObjectStateManager
                    .GetObjectStateEntries(EntityState.Added | EntityState.Modified)
                    .Select(e => e.Entity);
            }

            private IEnumerable<PropertyInfo> GetStringProperties(object entity)
            {
                return entity.GetType().GetProperties()
                    .Where(pi => pi.PropertyType == typeof(string));
            }

            private bool IsNullable(PropertyInfo propertyInfo)
            {
                return ((EdmScalarPropertyAttribute)propertyInfo
                    .GetCustomAttributes(typeof(EdmScalarPropertyAttribute), false)
                    .Single()).IsNullable;
            }
        }

    Obviously you can use similar code for other generic corrections.
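    As one example of the kind of additional generic correction the closing sentence hints at, here is a sketch of a method, in the same style, that normalizes DateTime properties to an explicit UTC kind; the method is hypothetical (not from the original post) and assumes the same usings and class as above:

    ```csharp
    // Hypothetical companion to OnSavingChanges: call it per persisting entity
    // to stamp an explicit UTC kind on any unspecified DateTime property.
    private void NormalizeDateTimes(object entity)
    {
        var dateProperties = entity.GetType().GetProperties()
            .Where(pi => pi.PropertyType == typeof(DateTime));

        foreach (var propertyInfo in dateProperties)
        {
            var value = (DateTime)propertyInfo.GetValue(entity, null);
            if (value.Kind == DateTimeKind.Unspecified)
            {
                propertyInfo.SetValue(entity,
                    DateTime.SpecifyKind(value, DateTimeKind.Utc), null);
            }
        }
    }
    ```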

    Read the article

  • YouTube Scalability Lessons

    - by Bertrand Matthelié
    Very interesting blog post by Todd Hoff at highscalability.com presenting "7 Years of YouTube Scalability Lessons in 30 min", based on a presentation from Mike Solomon, one of the original engineers at YouTube. The key takeaway of the talk for me was doing a lot with really simple tools. While many teams are moving on to more complex ecosystems, YouTube really does keep it simple. They program primarily in Python, use MySQL as their database, they've stuck with Apache, and even new features for such a massive site start as a very simple Python program. That doesn't mean YouTube doesn't do cool stuff, they do, but what makes everything work together is more a philosophy or a way of doing things than technological hocus pocus. What made YouTube into one of the world's largest websites? Read on and see...

    Stats
    - 4 billion views a day
    - 60 hours of video uploaded every minute
    - 350+ million devices are YouTube enabled
    - Revenue doubled in 2010
    - The number of videos has gone up 9 orders of magnitude, while the number of developers has only gone up two orders of magnitude
    - 1 million lines of Python code

    Stack
    - Python - most of the lines of code for YouTube are still in Python. Every time you watch a YouTube video you are executing a bunch of Python code.
    - Apache - when you think you need to get rid of it, you don't. Apache is a real rockstar technology at YouTube because they keep it simple. Every request goes through Apache.
    - Linux - the benefit of Linux is that there's always a way to get in and see how your system is behaving. No matter how badly your app is behaving, you can take a look at it with Linux tools like strace and tcpdump.
    - MySQL - is used a lot. When you watch a video you are getting data from MySQL. Sometimes it's used as a relational database and sometimes as a blob store. It's about tuning and making choices about how you organize your data.
    - Vitess - a new project released by YouTube, written in Go; it's a frontend to MySQL. It does a lot of optimization on the fly, rewrites queries and acts as a proxy. Currently it serves every YouTube database request. It's RPC based.
    - Zookeeper - a distributed lock server, used for configuration. A really interesting piece of technology. Hard to use correctly, so read the manual.
    - Wiseguy - a CGI servlet container.
    - Spitfire - a templating system. It has an abstract syntax tree that lets them do transformations to make things go faster.
    - Serialization formats - no matter which one you use, they are all expensive. Measure. Don't use pickle; not a good choice. They found protocol buffers slow. They wrote their own BSON implementation, which is 10-15 times faster than the one you can download.

    ...Continues. Read the blog. Watch the video.

    Read the article

  • Intel Centrino Wireless-N 1000 Again ! Ubuntu 13.04 x64

    - by vafa
    First I have to say that I tried everything written about this problem. It stops working randomly, in 3 main forms:
    1 - sometimes it disconnects from the wireless network and reconnects automatically;
    2 - sometimes it disconnects and won't connect no matter what (needs a reboot);
    3 - sometimes it's still connected but cannot ping or surf or whatever.
    I already tried disabling N mode using these commands:
    sudo modprobe -r iwlwifi
    sudo modprobe iwlwifi 11n_disable=1 (or 0, whatever)
    It didn't help. These are the results of lspci, sudo lshw -C network, ifconfig, iwconfig and rfkill list when it disconnected and wouldn't connect till reboot:
    ifconfig:
    eth0 Link encap:Ethernet HWaddr c8:0a:a9:34:65:77 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:1563213476557380 errors:9379306629148050 dropped:3126435543049350 overruns:1563217771524675 frame:7816088857623375 TX packets:1563217771524675 errors:6252871086098700 dropped:0 overruns:1563217771524675 carrier:3126435543049350 collisions:7816088857623375 txqueuelen:1000 RX bytes:1563217771524675 (1.5 PB) TX bytes:1563217771524675 (1.5 PB)
    ham0 Link encap:Ethernet HWaddr 7a:79:19:a5:e4:93 inet addr:25.165.228.147 Bcast:25.255.255.255 Mask:255.0.0.0 inet6 addr: fe80::7879:19ff:fea5:e493/64 Scope:Link inet6 addr: 2620:9b::19a5:e493/96 Scope:Global UP BROADCAST RUNNING MULTICAST MTU:1404 Metric:1 RX packets:7743 errors:0 dropped:0 overruns:0 frame:0 TX packets:1250 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:500 RX bytes:665642 (665.6 KB) TX bytes:204056 (204.0 KB)
    lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:65536 Metric:1 RX packets:41138 errors:0 dropped:0 overruns:0 frame:0 TX packets:41138 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:6420962 (6.4 MB) TX bytes:6420962 (6.4 MB)
    wlan0 Link encap:Ethernet HWaddr 00:1e:64:45:fb:70 inet6 addr: fe80::21e:64ff:fe45:fb70/64 Scope:Link UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:286999 errors:0 dropped:0 overruns:0 frame:0 TX packets:226966 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:324386887 (324.3 MB) TX bytes:30674804 (30.6 MB)
    iwconfig:
    ham0 no wireless extensions.
    eth0 no wireless extensions.
    lo no wireless extensions.
wlan0 IEEE 802.11bg ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=14 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off sudo lshw -C network: *-network description: Wireless interface product: Centrino Wireless-N 1000 [Condor Peak] vendor: Intel Corporation physical id: 0 bus info: pci@0000:07:00.0 logical name: wlan0 version: 00 serial: 00:1e:64:45:fb:70 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=iwlwifi driverversion=3.8.0-30-generic firmware=39.31.5.1 build 35138 latency=0 link=no multicast=yes wireless=IEEE 802.11bg resources: irq:46 memory:c0400000-c0401fff *-network description: Ethernet interface product: AR8131 Gigabit Ethernet vendor: Qualcomm Atheros physical id: 0 bus info: pci@0000:09:00.0 logical name: eth0 version: c0 serial: c8:0a:a9:34:65:77 capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress vpd cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=atl1c driverversion=1.0.1.1-NAPI latency=0 link=no multicast=yes port=twisted pair resources: irq:47 memory:c0900000-c093ffff ioport:5000(size=128) *-network description: Ethernet interface physical id: 2 logical name: ham0 serial: 7a:79:19:a5:e4:93 size: 10Mbit/s capabilities: ethernet physical configuration: autonegotiation=off broadcast=yes driver=tun driverversion=1.6 duplex=full ip=25.165.228.147 link=yes multicast=yes port=twisted pair speed=10Mbit/s lspci: 00:00.0 Host bridge: Intel Corporation Mobile 4 Series Chipset Memory Controller Hub (rev 07) 00:01.0 PCI bridge: Intel Corporation Mobile 4 Series Chipset PCI Express Graphics Port (rev 07) 00:1a.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 03) 00:1a.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 03) 00:1a.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 (rev 03) 00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 03) 00:1c.0 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 1 (rev 03) 00:1c.3 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 4 (rev 03) 00:1c.5 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 6 (rev 03) 00:1d.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 03) 00:1d.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 03) 00:1d.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 03) 00:1d.3 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 03) 00:1d.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 03) 00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev 93) 00:1f.0 ISA bridge: Intel Corporation ICH9M LPC Interface Controller (rev 03) 00:1f.2 SATA controller: Intel Corporation 82801IBM/IEM (ICH9M/ICH9M-E) 4 port SATA Controller [AHCI mode] (rev 03) 00:1f.3 SMBus: Intel Corporation 82801I (ICH9 Family) SMBus Controller (rev 03) 01:00.0 VGA compatible controller: NVIDIA Corporation G98M [GeForce G 105M] (rev a1) 07:00.0 Network controller: Intel Corporation Centrino Wireless-N 1000 [Condor Peak] 09:00.0 Ethernet controller: Qualcomm Atheros AR8131 Gigabit Ethernet (rev c0) rfkill list : 1: acer-wireless: Wireless LAN Soft blocked: 
no Hard blocked: no 2: acer-bluetooth: Bluetooth Soft blocked: yes Hard blocked: no 9: phy0: Wireless LAN Soft blocked: no Hard blocked: no any help will be REALLLYYYY appreciated

    Read the article

  • Best Practices Generating WebService Proxies for Oracle Sales Cloud (Fusion CRM)

    - by asantaga
    I've recently been building a REST service wrapper for Oracle Sales Cloud, and initially all was going well; however, as soon as I added all of my web service proxies I started to get weird errors. My project structure is shown in the original post (screenshot omitted). What I found out was that if I only had the InteractionsService & OpportunityService web service proxies, all worked OK, but as soon as I added the LocationsService proxy I would start to see strange JAXB errors.

    Example of the error message:
    Exception in thread "main" javax.xml.ws.WebServiceException: Unable to create JAXBContext
    at com.sun.xml.ws.model.AbstractSEIModelImpl.createJAXBContext(AbstractSEIModelImpl.java:164)
    at com.sun.xml.ws.model.AbstractSEIModelImpl.postProcess(AbstractSEIModelImpl.java:94)
    at com.sun.xml.ws.model.RuntimeModeler.buildRuntimeModel(RuntimeModeler.java:281)
    at com.sun.xml.ws.client.WSServiceDelegate.buildRuntimeModel(WSServiceDelegate.java:762)
    at weblogic.wsee.jaxws.spi.WLSProvider$ServiceDelegate.buildRuntimeModel(WLSProvider.java:982)
    at com.sun.xml.ws.client.WSServiceDelegate.createSEIPortInfo(WSServiceDelegate.java:746)
    at com.sun.xml.ws.client.WSServiceDelegate.addSEI(WSServiceDelegate.java:737)
    at com.sun.xml.ws.client.WSServiceDelegate.getPort(WSServiceDelegate.java:361)
    at weblogic.wsee.jaxws.spi.WLSProvider$ServiceDelegate.internalGetPort(WLSProvider.java:934)
    at weblogic.wsee.jaxws.spi.WLSProvider$ServiceDelegate$PortClientInstanceFactory.createClientInstance(WLSProvider.java:1039)
    ......

    Looking further down, I see the error message is related to JAXB not being able to find an ObjectFactory for one of its types:
    Caused by: java.security.PrivilegedActionException: com.sun.xml.bind.v2.runtime.IllegalAnnotationsException: 6 counts of IllegalAnnotationExceptions
    There's no ObjectFactory with an @XmlElementDecl for the element {http://xmlns.oracle.com/apps/crmCommon/activities/activitiesService/}AssigneeRsrcOrgId
    this problem is related to the following location:
    at protected javax.xml.bind.JAXBElement com.oracle.xmlns.apps.crmcommon.activities.activitiesservice.ActivityAssignee.assigneeRsrcOrgId
    at com.oracle.xmlns.apps.crmcommon.activities.activitiesservice.ActivityAssignee

    This is very strange... My first thought was that when I generated the web service proxy I entered the package name as "oracle.demo.pts.fusionproxy.servicename" and left the generated types package blank. This way all the generated types get put into the same package hierarchy, and when deployed they get merged. Sounds reasonable and appears to work, but not in this case. To resolve this I regenerated the proxy, but this time setting:
    - Package name: the name of my package, e.g. oracle.demo.pts.fusionproxy.interactions
    - Root Package for Generated Types: the package where the types will be generated to, e.g. oracle.demo.pts.fusionproxy.SalesParty.types

    When I ran the application now, it all worked. Awesome, eh? Alas, no: there is a serious side effect. To help with coding I've created a collection of helper classes, and these helper classes take parameters which use some of the "generic" datatypes, like FindCriteria. For example, this won't work any more:
    public static FindCriteria createCustomFindCriteria(FindCriteria pFc, String pAttributes)
    Here lies a gremlin of a problem: I can't use this method anymore, because the FindCriteria datatype is now defined two, or more, times in the generated code for my project. If you leave the Root Package for Generated Types blank it gets generated to com.oracle.xmlns, and if you populate it then it gets generated to your custom package. The two datatypes look the same and sound the same (if this were a duck, it would quack the same), but THEY ARE NOT THE SAME... Speaking to development, they recommend you should not enter anything in the Root Package section, so the mystery thickens: why does it work? Well, after spending some time with colleagues of mine in development, we've identified the issue. Alas, different parts of Oracle Fusion development have multiple schemas with the same namespace; when the web service generator generates its classes, it doesn't see the other schemas properly and doesn't generate the ObjectFactories correctly. Thankfully, I've found a workaround.

    Solution Overview
    - When generating the proxies, leave the Root Package for Generated Types BLANK
    - When you have finished generating your proxies, use the JAXB tool XJC to generate Java classes for all datatypes
    - Create a project within your JDeveloper 11g workspace and import the Java classes into this project
    - Final bit: within the project dependencies, ensure that the JAXB/XJC-generated classes are FIRST in the classpath

    Solution Details
    Generate the web service SOAP proxies. When generating the proxies, your generation dialog should look like the one in the original post (screenshot omitted). Ensure the "unwrap" parameter is selected; if it isn't, that's OK, it simply means when issuing a "get" you need to extract out the Element.
    Generate the JAXB classes using XJC. XJC provides a command line switch called -wsdl which (although experimental/beta) accepts an HTTP WSDL and will generate the relevant classes. You can put these into a single batch/shell script:
    xjc -wsdl https://fusionservername:443/appCmmnCompInteractions/InteractionService?wsdl
    xjc -wsdl https://fusionservername:443/opptyMgmtOpportunities/OpportunityService?wsdl
    Create a project in JDeveloper to store the XJC-generated JAXB classes. Within the project folder, create a filesystem folder called "src" and copy the generated files into this folder. JDeveloper 11g should then see the classes and display them; if it doesn't, try clicking the "refresh" button. In your main project, ensure that the JDeveloper XJC project is selected as a dependency and, IMPORTANT, make sure it is at the top of the list. This ensures that the classes are at the front of the classpath.
    And voilà... hopefully you won't see any JAXB generation errors, and you can use common datatypes interchangeably in your project (e.g. FindCriteria etc.)

    Read the article

  • Multiple Audio Issues

    - by Lerp
    I am having issues with my audio on Ubuntu 12.04, I will try and give as much detail as possible so sorry if there's too much detail. The Problem Audio plays from both speakers and headphone regardless of what connector I choose and regardless of the profile I use. The microphone is constantly being played through headphones & speakers. The headphone audio is extremely quiet but plays from both ears when I select "Headphones" for the connector in Sound Settings. The headphone audio only plays from one ear and is quiet (but not as quiet as above) when I select "Analogue Output" for the connector in Sound Settings. I can only select "Headphones" as the connector in Sound Settings if I set the profile to either "Analogue Stereo Output/Duplex", all others only allow me to choose "Analogue Output" for the connector. Despite the headphone sound issues, the speaker sound is fine apart from the fact that I am not able to select which output is used, they just both play. My headphone and microphone are plugged into the front and my speakers are plugged into the back. What I have tried I have put everything in alsamixer to 100 apart from "Front Mic Boost" which I have set to 0. Command Output aplay -l **** List of PLAYBACK Hardware Devices **** card 0: Intel [HDA Intel], device 0: AD198x Analog [AD198x Analog] Subdevices: 0/1 Subdevice #0: subdevice #0 card 0: Intel [HDA Intel], device 1: AD198x Digital [AD198x Digital] Subdevices: 1/1 Subdevice #0: subdevice #0 card 0: Intel [HDA Intel], device 2: AD198x Headphone [AD198x Headphone] Subdevices: 1/1 Subdevice #0: subdevice #0 arecord -l **** List of CAPTURE Hardware Devices **** card 0: Intel [HDA Intel], device 0: AD198x Analog [AD198x Analog] Subdevices: 2/3 Subdevice #0: subdevice #0 Subdevice #1: subdevice #1 Subdevice #2: subdevice #2 cat /proc/asound/cards 0 [Intel ]: HDA-Intel - HDA Intel HDA Intel at 0xf7ff8000 irq 70 cat /proc/asound/modules 0 snd_hda_intel cat /proc/asound/card*/codec* | grep "Codec" Codec: Analog Devices AD1989B cat /etc/modprobe.d/alsa-base.conf # autoloader aliases install sound-slot-0 /sbin/modprobe snd-card-0 install sound-slot-1 /sbin/modprobe snd-card-1 install sound-slot-2 /sbin/modprobe snd-card-2 install sound-slot-3 /sbin/modprobe snd-card-3 install sound-slot-4 /sbin/modprobe snd-card-4 install sound-slot-5 /sbin/modprobe snd-card-5 install sound-slot-6 /sbin/modprobe snd-card-6 install sound-slot-7 /sbin/modprobe snd-card-7 # Cause optional modules to be loaded above generic modules install snd /sbin/modprobe --ignore-install snd $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-ioctl32 ; /sbin/modprobe --quiet --use-blacklist snd-seq ; } # # Workaround at bug #499695 (reverted in Ubuntu see LP #319505) install snd-pcm /sbin/modprobe --ignore-install snd-pcm $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-pcm-oss ; : ; } install snd-mixer /sbin/modprobe --ignore-install snd-mixer $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-mixer-oss ; : ; } install snd-seq /sbin/modprobe --ignore-install snd-seq $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-seq-midi ; /sbin/modprobe --quiet --use-blacklist snd-seq-oss ; : ; } # install snd-rawmidi /sbin/modprobe --ignore-install snd-rawmidi $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-seq-midi ; : ; } # Cause optional modules to be loaded above sound card driver modules install snd-emu10k1 /sbin/modprobe --ignore-install snd-emu10k1 $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist 
snd-emu10k1-synth ; } install snd-via82xx /sbin/modprobe --ignore-install snd-via82xx $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-seq ; } # Load saa7134-alsa instead of saa7134 (which gets dragged in by it anyway) install saa7134 /sbin/modprobe --ignore-install saa7134 $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist saa7134-alsa ; : ; } # Prevent abnormal drivers from grabbing index 0 options bt87x index=-2 options cx88_alsa index=-2 options saa7134-alsa index=-2 options snd-atiixp-modem index=-2 options snd-intel8x0m index=-2 options snd-via82xx-modem index=-2 options snd-usb-audio index=-2 options snd-usb-caiaq index=-2 options snd-usb-ua101 index=-2 options snd-usb-us122l index=-2 options snd-usb-usx2y index=-2 # Ubuntu #62691, enable MPU for snd-cmipci options snd-cmipci mpu_port=0x330 fm_port=0x388 # Keep snd-pcsp from being loaded as first soundcard options snd-pcsp index=-2 # Keep snd-usb-audio from beeing loaded as first soundcard options snd-usb-audio index=-2 Hopefully I have provided enough information, I will happily provide anymore information needed. Thank you. Update Reinstalling alsa-base and pulseaudio fixed the headphone issues I was having.

    Read the article

  • My Feelings About Microsoft Surface

    - by Valter Minute
    Advice: read the title carefully. I'm talking about "feelings", not about advanced technical points proved in a scientific and objective way. I still haven't had a chance to play with a MS Surface tablet (I would love to, of course), so my ideas just come from reading different articles on the net and MS official statements. Remember also that the MVP motto begins with "Independent" ("Independent Experts. Real World Answers.") and this is just my humble opinion about a product and a technology. I know that, being an MS MVP, you can be called an "MS fanboy"; I don't care. I hope that people can appreciate my opinion, even if it doesn't match theirs.

    The "Surface" brand can be confusing for techies who knew the "original" Surface concept, but I think it will be a fresh new brand name for most of the people out there. Marketing departments are there to confuse people... so I can understand this recycling of an existing name.

    So Microsoft is entering the hardware arena... for me this is good news. Microsoft developed some nice hardware in the past: the Xbox, the Zune (even if its commercial success was quite limited) and, last but not least, the two Arc mice (old and new models) that I use and appreciate. In the past Microsoft worked with OEMs, and that model led to good and bad things. The good thing (for Microsoft, at least) is market domination by Windows-based PCs, which only in recent years has been reduced by the return of the Mac and by tablets. Google is also moving into the hardware business with its acquisition of Motorola, and Apple leveraged its control of both the hardware and software sides to develop innovative products. Microsoft can scare OEMs and make them fly away from Windows (but to where?) or just lead the pack, showing how devices should be designed to compete in the market, and bring back some of the innovation that disappeared from recent PC products (look at the shelves of your favorite electronics store and try to distinguish one laptop from the huge mass of anonymous PCs on display... only Macs shine out there...). Having to compete with MS "official" hardware will force OEMs to develop better products and bring back some real competition to a market that was ruled only by prices (the lower the better, even when that means low quality) and no innovative features at all (when was the last time a new PC surprised you?). Moving into a new market is a big and risky move, but with Windows 8 Microsoft is playing a crucial move for its future, trying to get back in the innovation race against Apple and Google. MS can't afford to fail this time.

    I saw the new devices (the WinRT and Pro) and the specifications are scarce, misleading and confusing. The first impression is that the device looks like an iPad with a nice keyboard cover... Using "HD" and "full HD" to define display resolution instead of using the real figures, and reviving the "ClearType" brand (now dead on Win8, as reported here, and missed by people who hate reading blurry text on displays, like myself) without providing clear figures (couldn't you count those damned pixels?), seems to imply that MS was caught by surprise by Apple's recent "retina" displays that brought very high definition screens to tablets. Also, there are no specifications about the processors used (even if some sources report NVidia Tegra for the ARM tablet and i5 for the x86 one) or expected battery life (a critical point for tablets, and the point that killed Windows 7 x86-based tablets). Also nothing about the price, and this will be another critical point, because other platforms out there already provide lots of applications and have a good user base; if MS wants to enter this market, tablet pricing must be competitive.

    There are some expansion ports (SD and USB), so no fixed storage model (even if the specs talk about 32-64GB for RT and 128-256GB for Pro). I like this, and I don't like the Apple model where flash memory (which is dirt cheap when used in thumb drives or SD cards) is as expensive as gold (or cocaine, to have a more accurate per-gram measurement) when mounted inside a tablet/phone. For big files you'll be able to use external media, and an SD card could be used to store files that don't require super-fast SSD-like access times, I hope.

    To be honest, I really don't like the marketplace model and the limitations of the Windows RT APIs (no local database? from a company that based a good share of its success on VB6+Access!), nor the lack of desktop support on ARM (even if the support is there and has been used to port Office). It's a step toward the consumer market (where competitors are making big money), but it may impact enterprise (and embedded) users who may not appreciate Windows 8's new UI or the limitations of the new app model (if you aren't connected, you are dead). Not having compatibility with the desktop will require brand new applications, and it honestly makes all the CPU cycles spent converting .NET IL into real machine code in the past seem like a huge waste of time... as soon as a new processor architecture is supported by Windows, you still have to rewrite part of your application (and MS is pushing HTML5+JS and native code more than .NET, in my perception). On the other side, I believe that the development experience provided by Visual Studio is still miles (or kilometres) ahead of the competition, and even the all-uppercase menu of VS2012 hasn't changed this situation.

    The new Metro UI got mixed reviews. On my side, I should say that it is very pleasant to use on a touch screen; I like the minimalist design (even if sometimes it is too minimal and hides stuff that, in my opinion, should be visible). But I should also say that using it with mouse and keyboard is like trying to pick your nose with boxing gloves... Metro is also very interesting for embedded devices where touch screen usage is quite common and where having an application take up the whole screen is the norm. For devices like kiosks, vending machines etc., this kind of UI can be a great selling point. I don't need a new tablet (to be honest, I'm pretty happy with my wife's iPad and with my PC), but I may change my opinion after having a chance to play a little bit with those new devices and understand what's hidden under all these mysterious and generic announcements and specifications!

    Read the article

  • Take Two: Comparing JVMs on ARM/Linux

    - by user12608080
    Although the intent of the previous article, entitled Comparing JVMs on ARM/Linux, was to introduce and highlight the availability of the HotSpot server compiler (referred to as c2) for Java SE-Embedded ARM v7, it seems, based on feedback, that everyone was more interested in the OpenJDK comparisons to Java SE-E. In fact, there were two main concerns:

    - The fact that the previous article compared Java SE-E 7 against OpenJDK 6 might be construed as an uneven playing field, because version 7 is newer and therefore potentially more optimized.
    - That the generic compiler settings chosen to build the OpenJDK implementations did not put those versions in a particularly favorable light.

    With those considerations in mind, we'll institute the following changes for this round of benchmarking:

    - In order to help alleviate an additional concern that there is some sort of benchmark bias, we'll use a different suite, called DaCapo. Funded and supported by many prestigious organizations, DaCapo's aim is to benchmark real-world applications. Further information about DaCapo can be found at http://dacapobench.org.
    - At the suggestion of Xerxes Ranby, who has been a great help through this entire exercise, a newer Linux distribution will be used to assure that the OpenJDK implementations were built with more optimal compiler settings. The Linux distribution in this instance is Ubuntu 11.10 Oneiric Ocelot.
    - Having experienced difficulties getting Ubuntu 11.10 to run on the original D2Plug ARMv7 platform, for these benchmarks we'll switch to an embedded system that has a supported Ubuntu 11.10 release. That platform is the Freescale i.MX53 Quick Start Board. It has an ARMv7 Cortex-A8 processor running at 1GHz with 1GB RAM.
    - We'll limit comparisons to 4 JVM implementations:
      - Java SE-E 7 Update 2 c1 compiler (default)
      - Java SE-E 6 Update 30 (c1 compiler is the only option)
      - OpenJDK 6 IcedTea6 1.11pre 6b23~pre11-0ubuntu1.11.10.2 CACAO build 1.1.0pre2
      - OpenJDK 6 IcedTea6 1.11pre 6b23~pre11-0ubuntu1.11.10.2 JamVM build-1.6.0-devel

    Certain OpenJDK implementations were eliminated from this round of testing for the simple reason that their performance was not competitive. The Java SE 7u2 c2 compiler was also removed because, although quite respectable, it did not perform as well as the c1 compilers. Recall that c2 works optimally in long-lived situations, and many of these benchmarks completed in a relatively short period of time. To get a feel for where c2 shines, take a look at the first chart in this blog.

    The first chart that follows includes performance of all benchmark runs on all platforms. Later on we'll look more at individual tests. In all runs, smaller means faster. The DaCapo aficionado may notice that only 10 of the 14 DaCapo tests for this version were executed. The reason for this is that these 10 tests represent the only ones successfully completed by all 4 JVMs. Only Java SE-E 6u30 could successfully run all of the tests. Both OpenJDK instances not only failed to complete certain tests, but also experienced VM aborts.

    One of the first observations that can be made between Java SE-E 6 and 7 is that, for all intents and purposes, they are on par with regard to performance. While it is a fact that successive Java SE releases add additional optimizations, it is also true that Java SE 7 introduces additional complexity to the Java platform, thus balancing out any potential performance gains at this point. We are still early into Java SE 7. We would expect further performance enhancements for Java SE-E 7 in future updates.

    In comparing Java SE-E to OpenJDK performance among both OpenJDK VMs, Cacao results are respectable in 4 of the 10 tests. The charts that follow show the individual results of those four tests. Both Java SE-E versions win every test and outperform Cacao in the range of 9% to 55%. For the remaining 6 tests, Java SE-E significantly outperforms Cacao, in the range of 114% to 311%.

    So it looks like OpenJDK results are mixed for this round of benchmarks. In some cases, performance looks to have improved. But in a majority of instances, OpenJDK still lags behind Java SE-Embedded considerably.

    Time to put on my asbestos suit. Let the flames begin...

    Read the article

  • Better way to load level content in XNA?

    - by user2002495
    Currently I load all my assets in XNA in the main Game class. What I want to achieve later is to load only specific assets for specific levels (the game will consist of many levels). Here is how I load my main assets in the main class:

    protected override void LoadContent()
    {
        spriteBatch = new SpriteBatch(GraphicsDevice);

        plane = new Player(Content.Load<Texture2D>(@"Player/playerSprite"), 6, 8);
        plane.animation = "down";
        plane.pos = new Vector2(400, 500);
        plane.fps = 15;
        Global.currentPos = plane.pos;

        lvl1 = new Level1(Content.Load<Texture2D>(@"Levels/bgLvl1"),
                          Content.Load<Texture2D>(@"Levels/bgLvl1-other"),
                          new Vector2(0, 0), new Vector2(0, -600));

        CommonBullet.LoadContent(Content);
        CommonEnemyBullet.LoadContent(Content);
    }

    protected override void UnloadContent()
    {
    }

    protected override void Update(GameTime gameTime)
    {
        if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed)
            this.Exit();

        plane.Update(gameTime);
        lvl1.Update(gameTime);

        foreach (CommonEnemy ce in cel)
        {
            if (ce.CollidesWith(plane))
            {
                ce.hasSpawn = false;
            }
            foreach (CommonBullet b in plane.commonBulletList)
            {
                if (b.CollidesWith(ce))
                {
                    ce.hasSpawn = false;
                }
            }
            ce.Update(gameTime);
        }

        LoadCommonEnemy();
        base.Update(gameTime);
    }

    private void LoadCommonEnemy()
    {
        int randY = rand.Next(-600, -10);
        int randX = rand.Next(0, 750);
        if (cel.Count < 3)
        {
            cel.Add(new CommonEnemy(Content.Load<Texture2D>(@"Enemy/Common/commonEnemySprite"), 7, 2, "left", randX, randY));
        }
        for (int i = 0; i < cel.Count; i++)
        {
            if (!cel[i].hasSpawn)
            {
                cel.RemoveAt(i);
                i--;
            }
        }
    }

    protected override void Draw(GameTime gameTime)
    {
        GraphicsDevice.Clear(Color.Black);
        spriteBatch.Begin();
        lvl1.Draw(spriteBatch);
        plane.Draw(spriteBatch);
        foreach (CommonEnemy ce in cel)
        {
            ce.Draw(spriteBatch);
        }
        spriteBatch.End();
        base.Draw(gameTime);
    }

    I wish to load my player, enemies, and everything else in the Level1 class. However, when I move my player & enemy code into the Level1 class, the gameTime returns null. Here is my Level1 class:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Audio;
    using Microsoft.Xna.Framework.Content;
    using Microsoft.Xna.Framework.Graphics;
    using Microsoft.Xna.Framework.Media;
    using Microsoft.Xna.Framework.Input;
    using SpaceShooter_Beta.Animation.PlayerCollection;
    using SpaceShooter_Beta.Animation.EnemyCollection.Common;

    namespace SpaceShooter_Beta.Levels
    {
        public class Level1
        {
            public Texture2D bgTexture1, bgTexture2;
            public Vector2 bgPos1, bgPos2;
            public float speed = 5f;
            Player plane;

            public Level1(Texture2D texture1, Texture2D texture2, Vector2 pos1, Vector2 pos2)
            {
                this.bgTexture1 = texture1;
                this.bgTexture2 = texture2;
                this.bgPos1 = pos1;
                this.bgPos2 = pos2;
            }

            public void LoadContent(ContentManager cm)
            {
                plane = new Player(cm.Load<Texture2D>(@"Player/playerSprite"), 6, 8);
                plane.animation = "down";
                plane.pos = new Vector2(400, 500);
                plane.fps = 15;
                Global.currentPos = plane.pos;
            }

            public void Draw(SpriteBatch sb)
            {
                sb.Draw(bgTexture1, bgPos1, Color.White);
                sb.Draw(bgTexture2, bgPos2, Color.White);
                plane.Draw(sb);
            }

            public void Update(GameTime gt)
            {
                bgPos1.Y += speed;
                bgPos2.Y += speed;
                if (bgPos1.Y >= 600)
                {
                    bgPos1.Y = -600;
                }
                if (bgPos2.Y >= 600)
                {
                    bgPos2.Y = -600;
                }
                plane.Update(gt);
            }
        }
    }

    Of course, when I did this I deleted all my player code in the main Game class. All of that works fine (no errors) except that the game cannot start.
    The debugger says that plane.Update(gt); in the Level1 class has a null GameTime, and the same thing happens with the Draw method in the Level class. Please help; I appreciate your time. [EDIT] I know that using a switch in the main class can be a solution, but I prefer a cleaner solution than that, since using a switch still means I need to load all the assets through the main class, and the code will be A LOT later on for each level.
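    For what it's worth, a likely cause (offered as a sketch, not a confirmed fix): nothing in the refactored code ever calls Level1.LoadContent, so the plane field inside Level1 is still null when Update and Draw run, and the reported "null GameTime" is really a null plane being dereferenced. Under that assumption, the minimal fix is to have the main Game class ask the level to load its own assets:

    // In the main Game class, after constructing the level, let it load its own
    // content. Assumes the Level1.LoadContent(ContentManager) method shown above.
    protected override void LoadContent()
    {
        spriteBatch = new SpriteBatch(GraphicsDevice);

        lvl1 = new Level1(Content.Load<Texture2D>(@"Levels/bgLvl1"),
                          Content.Load<Texture2D>(@"Levels/bgLvl1-other"),
                          new Vector2(0, 0), new Vector2(0, -600));
        lvl1.LoadContent(Content);   // without this call, Level1.plane stays null
    }

    This also keeps the per-level asset loading inside each level class, which is the separation the question is aiming for.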

    Read the article

  • What's My Problem? What's Your Problem?

    - by Jacek Ziabicki
    Software installers are not made for building demo environments. I can say this much after 12 years (on and off) of supporting my fellow sales consultants with environments for software demonstrations. When we release software, we include installation programs and procedures that are designed for use by our clients – to build a production environment and a limited number of testing, training and development environments.

    Different Objectives
    Your priorities when building an environment for client use vs. building a demo environment are very different. In a production environment, security, stability, and performance concerns are paramount. These environments are built on a specific server and rarely, if ever, moved to a different server or different network address. There is typically just one application running on a particular server (physical or virtual). Once built, the environment will be used for months or years at a time. Because of security considerations, the installation program wants to make these environments very specific to the organization using the software and the use case, encoding a fully qualified name of the server, or even the IP address on the network, in the configuration. So you either go through the installation procedure for each environment, or learn how to clone and reconfigure the software as a separate instance to build all your non-production environments. This may not matter much if the installation is as simple as clicking on the Setup program. But for enterprise applications, you have a number of configuration settings that you need to get just right – so whether you are installing from scratch or reconfiguring an existing installation, this requires both time and expertise in the particular piece of software.

    If you need a setup of several applications that are integrated to talk to one another, it is a whole new level of complexity. Now you need the expertise in all of the applications involved (plus the supporting technology products), and in addition to making each application work, you also have to configure the integration endpoints. Each application needs the URLs and credentials to call the integration layer, and the integration must be able to call each application. Then you have to make sure that each app has the right data so a business process initiated in one application can continue in the next. And, you will need to check that each application has the correct version and patch level for the integration to work.

    When building demo environments, your #1 concern is agility. If you can get away with a small number of long-running environments, you are lucky. More likely, you may get a request for a dedicated environment for a demonstration that is two weeks away: how quickly can you make this available so we still have the time to build the client-specific data? Typical requests sound like this:
    - "We are running a hands-on workshop next month, and we'll need 15 instances of the application X environment so each student can have a separate server for the exercises."
    - "We cannot connect to our data center from the client site, the client's security policy won't allow our VPN to go through – so we need a portable environment that we can bring with us."
    - "Our consultants need to be able to work at the hotel, the airport, and on the airplane, so we really want an environment that can run on a laptop."
    - "The client will need two playpen environments running in the cloud, accessible from their network, for a series of workshops that start two weeks from now."
We have seen all of these scenarios and more. Here you would be much better served by a generic installation that would be easy to clone. Welcome to the Wonder Machine The reason I started this blog is to share a particular design of a demo environment, a special way to install software, that can address the above requirements, even for integrated setups. This design was created by a team at Oracle Utilities Global Business Unit, and we are using this setup for most of our demo environments. In a bout of modesty we called it the Wonder Machine. Over the next few posts – think of it as a novel in parts – I will tell you about the big idea, how it was implemented and what you can do with it. After we have laid down the groundwork, I would like to share some tips and tricks for users of our Wonder Machine implementation, as well as things I am learning about building portable, cloneable environments. The Wonder Machine is by no means a closed specification, it is under active development! I am hoping this blog will be of interest to two groups of readers – the users of the Wonder Machine we have built at Oracle Utilities, who want to get the most out of their demo environments and be able to reconfigure it to their needs – and to people who need to build environments for demonstration, testing, training, development and would like to make them cloneable and portable to maximize the reuse of their effort. Surely we are not the only ones facing this problem? If you can think of a better way to solve it, or if you can help us improve on our concept, I will appreciate your comments!

    Read the article

  • Business Case for investing time developing Stubs and BizUnit Tests

    - by charlie.mott
    I was recently in a position where I had to justify why effort should be spent developing Stubbed Integration Tests for BizTalk solutions. These tests are usually developed using the BizUnit framework. I assumed that most seasoned BizTalk developers would consider this best practice. Even though Microsoft suggests the use of BizUnit on MSDN, I've not found a single site listing the justifications for investing time writing stubs and BizUnit tests.

    Stubs
    Stubs should be developed to isolate your development team from external dependencies. This is described by Michael Stephenson here. Failing to do this can result in the following problems:

    - In contract-first scenarios, the external system interface will have been defined. But the interface may not have been set up, or even developed yet, for the BizTalk developers to work with.
    - By the time you open the target location to see the data BizTalk has sent, it may have been swept away.
    - If you are relying on the UI of the target system to see the data BizTalk has sent, what do you do if it fails to arrive? It may take time for the data to be processed, or it may be scheduled to be processed later.
    - Learning how to use the source\target systems, and investigating where things go wrong in those systems, will slow down the BizTalk development effort.
    - By the time the data is visible in a UI it may have undergone further transformations.
    - In larger development teams working together, do you all use the same source and target instances? How do you know which data was created by whose tests? How do you know which event log error messages are whose? Another developer may have "cleaned up" your data.
    - It is harder to write BizUnit tests that clean up the data\logs after each test run.
    - What if your B2B partners' source or target system cannot support the sort of testing you want to do? They may not even have a development or test instance that you can work with. Their single test instance may be used by the SIT\UAT teams.
    - There may be licensing costs for setting up instances of the external system.

    The stubs I like to use are generic stubs that can accept\return any message type (a minimal sketch of such a stub appears at the end of this post). Usually I need to create one per protocol. They should be driven by BizUnit steps to: validate the data received; and select a response message (or an error response). Once built, they can be re-used for many integration tests and from project to project.

    I'm not saying that developers should never test against a real instance. Every so often, you still need to connect to real developer or test instances of the source and target endpoints\services. The interface developers may ask you to send them some data to see if everything still works. Or you might want some messages sent to BizTalk to get confidence that everything still works beyond BizTalk.

    Tests
    Automated "Stubbed Integration Tests" are usually built using the BizUnit framework. These facilitate testing of the entire integration process from source stub to target stub. They will ensure that all of the BizTalk components are configured together correctly to meet all the requirements. More fine-grained unit testing of individual BizTalk components is still encouraged, but BizUnit provides by far the easiest way to test some component types (e.g. orchestrations).

    Using BizUnit with the Behaviour Driven Development approach described by Mike Stephenson delivers the following benefits (source: http://biztalkbddsample.codeplex.com – Video 1):

    - Requirements can be easily defined using Given/When/Then
    - Requirements are close to the code, so they are easier to manage as features and scenarios
    - Requirements are defined in domain language
    - The feature files can be used as part of the documentation
    - The documentation is accurate to the build of code and can be published with a release
    - The scenarios are effective at documenting the behaviour and are not over-excessive
    - The scenarios are maintained with the code
    - There's an abstraction between the intention and implementation of tests, making them easier to understand
    - The requirements drive the testing

    These same tests can also be used to drive load testing, as described here.

    If you don't do this ...
    If you don't follow the above "Stubbed Integration Tests" approach, the developer will need to manually trigger the tests. This has the following risks:

    - Developers are unlikely to check all the scenarios each time, and all the expected conditions each time.
    - After the developer leaves, these manual test steps may be lost. What test scenarios are there? What test messages did they use for each scenario?
    - There is no mechanism to prove adequate test coverage.

    A test team may attempt to automate integration test scenarios in a test environment through the triggering of tests from a source system UI. If this is a replacement for BizUnit tests, then this carries the following risks:

    - It moves the tests downstream, so problems will be found later in the process.
    - Testers may not check all the expected conditions within the BizTalk infrastructure, such as event logs, suspended messages, etc.
    - These automated tests may also get in the way of manual tests run on these environments.
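    As an illustration only (this sketch is not from the article and deliberately avoids the BizUnit API; the endpoint URL, drop folder and canned response are all assumptions), a generic self-hosted HTTP stub of the kind described could be as simple as:

    // Minimal generic stub sketch: accepts any message, logs it for later
    // assertions by a test step, and returns a canned response.
    // The endpoint, folder and response content below are hypothetical.
    using System;
    using System.IO;
    using System.Net;

    class GenericHttpStub
    {
        static void Main()
        {
            Directory.CreateDirectory(@"C:\StubDrops");   // assumed drop folder

            var listener = new HttpListener();
            listener.Prefixes.Add("http://localhost:8099/stub/");   // assumed endpoint
            listener.Start();
            Console.WriteLine("Stub listening; Ctrl+C to quit.");

            while (true)
            {
                HttpListenerContext context = listener.GetContext();

                // Capture whatever BizTalk sent so a test step can validate it later.
                string received = new StreamReader(context.Request.InputStream).ReadToEnd();
                File.WriteAllText(Path.Combine(@"C:\StubDrops", Guid.NewGuid() + ".xml"), received);

                // Return a canned response; a real stub would select one per test scenario.
                byte[] response = System.Text.Encoding.UTF8.GetBytes("<Ack>OK</Ack>");
                context.Response.ContentType = "text/xml";
                context.Response.OutputStream.Write(response, 0, response.Length);
                context.Response.Close();
            }
        }
    }

    Because the stub neither knows nor cares about message schemas, the same executable can stand in for any HTTP endpoint across projects, which is exactly the re-use property the article argues for.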

    Read the article

  • In 10.10, USB 3.0 PCI Express card recognized by lspci but not lsusb or dmesg. How to fix?

    - by Paul
    Asus N PC, runs 10.10 x86_64 The Asus N comes with 4 usb 2.0 ports, each labelled 2.0 on the case. Attempting to add two usb 3.0 ports to be provided by a generic usb 3.0 pci express card installed in the pci expres slot. The new card says usb 3.0 and has the blue ports. The card is installed into the laptop unpowered, then the laptop is powered on and boots normally. Nothing happens when a USB 3.0 flash drive is inserted into the usb 3.0 port. uname -a Linux drpaulbrewer-N90SV 2.6.35.8 #1 SMP Fri Jan 14 15:54:11 EST 2011 x86_64 GNU/Linux lspci -v 00:00.0 Host bridge: Silicon Integrated Systems [SiS] 671MX Subsystem: ASUSTeK Computer Inc. Device 1b27 Flags: bus master, medium devsel, latency 64 Kernel modules: sis-agp 00:01.0 PCI bridge: Silicon Integrated Systems [SiS] PCI-to-PCI bridge (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=01, subordinate=01, sec-latency=0 I/O behind bridge: 0000d000-0000dfff Memory behind bridge: fa000000-fdefffff Prefetchable memory behind bridge: 00000000d0000000-00000000dfffffff Capabilities: [d0] Express Root Port (Slot+), MSI 00 Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit- Capabilities: [f4] Power Management version 2 Capabilities: [70] Subsystem: Silicon Integrated Systems [SiS] PCI-to-PCI bridge Kernel driver in use: pcieport 00:02.0 ISA bridge: Silicon Integrated Systems [SiS] SiS968 [MuTIOL Media IO] (rev 01) Flags: bus master, medium devsel, latency 0 00:02.5 IDE interface: Silicon Integrated Systems [SiS] 5513 [IDE] (rev 01) (prog-if 80 [Master]) Subsystem: ASUSTeK Computer Inc. Device 1b27 Flags: bus master, medium devsel, latency 128 I/O ports at 01f0 [size=8] I/O ports at 03f4 [size=1] I/O ports at 0170 [size=8] I/O ports at 0374 [size=1] I/O ports at ffe0 [size=16] Capabilities: [58] Power Management version 2 Kernel driver in use: pata_sis 00:03.0 USB Controller: Silicon Integrated Systems [SiS] USB 1.1 Controller (rev 0f) (prog-if 10 [OHCI]) Subsystem: ASUSTeK Computer Inc. Device 1b27 Flags: bus master, medium devsel, latency 64, IRQ 20 Memory at f9fff000 (32-bit, non-prefetchable) [size=4K] Kernel driver in use: ohci_hcd 00:03.1 USB Controller: Silicon Integrated Systems [SiS] USB 1.1 Controller (rev 0f) (prog-if 10 [OHCI]) Subsystem: ASUSTeK Computer Inc. Device 1b27 Flags: bus master, medium devsel, latency 64, IRQ 21 Memory at f9ffe000 (32-bit, non-prefetchable) [size=4K] Kernel driver in use: ohci_hcd 00:03.3 USB Controller: Silicon Integrated Systems [SiS] USB 2.0 Controller (prog-if 20 [EHCI]) Subsystem: ASUSTeK Computer Inc. Device 1b27 Flags: bus master, medium devsel, latency 64, IRQ 22 Memory at f9ffd000 (32-bit, non-prefetchable) [size=4K] Capabilities: [50] Power Management version 2 Kernel driver in use: ehci_hcd 00:04.0 Ethernet controller: Silicon Integrated Systems [SiS] 191 Gigabit Ethernet Adapter (rev 02) Subsystem: ASUSTeK Computer Inc. Device 11f5 Flags: bus master, medium devsel, latency 0, IRQ 19 Memory at f9ffcc00 (32-bit, non-prefetchable) [size=128] I/O ports at cc00 [size=128] Capabilities: [40] Power Management version 2 Kernel driver in use: sis190 Kernel modules: sis190 00:05.0 IDE interface: Silicon Integrated Systems [SiS] SATA Controller / IDE mode (rev 03) (prog-if 8f [Master SecP SecO PriP PriO]) Subsystem: ASUSTeK Computer Inc. 
Device 1b27 Flags: bus master, medium devsel, latency 64, IRQ 17 I/O ports at c800 [size=8] I/O ports at c400 [size=4] I/O ports at c000 [size=8] I/O ports at bc00 [size=4] I/O ports at b800 [size=16] I/O ports at b400 [size=128] Capabilities: [58] Power Management version 2 Kernel driver in use: sata_sis Kernel modules: sata_sis 00:06.0 PCI bridge: Silicon Integrated Systems [SiS] PCI-to-PCI bridge (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=02, subordinate=02, sec-latency=0 Memory behind bridge: fdf00000-fdffffff Capabilities: [b0] Subsystem: Silicon Integrated Systems [SiS] Device 0004 Capabilities: [c0] MSI: Enable+ Count=1/1 Maskable- 64bit+ Capabilities: [d0] Express Root Port (Slot+), MSI 00 Capabilities: [f4] Power Management version 2 Kernel driver in use: pcieport 00:07.0 PCI bridge: Silicon Integrated Systems [SiS] PCI-to-PCI bridge (prog-if 00 [Normal decode]) Flags: bus master, fast devsel, latency 0 Bus: primary=00, secondary=03, subordinate=06, sec-latency=0 I/O behind bridge: 0000e000-0000efff Memory behind bridge: fe000000-febfffff Prefetchable memory behind bridge: 00000000f6000000-00000000f8ffffff Capabilities: [b0] Subsystem: Silicon Integrated Systems [SiS] Device 0004 Capabilities: [c0] MSI: Enable+ Count=1/1 Maskable- 64bit+ Capabilities: [d0] Express Root Port (Slot+), MSI 00 Capabilities: [f4] Power Management version 2 Kernel driver in use: pcieport 00:0f.0 Audio device: Silicon Integrated Systems [SiS] Azalia Audio Controller Subsystem: ASUSTeK Computer Inc. Device 17b3 Flags: bus master, medium devsel, latency 0, IRQ 18 Memory at f9ff4000 (32-bit, non-prefetchable) [size=16K] Capabilities: [50] Power Management version 2 Kernel driver in use: HDA Intel Kernel modules: snd-hda-intel 01:00.0 VGA compatible controller: nVidia Corporation G96 [GeForce GT 130M] (rev a1) (prog-if 00 [VGA controller]) Subsystem: ASUSTeK Computer Inc. Device 2021 Flags: bus master, fast devsel, latency 0, IRQ 16 Memory at fc000000 (32-bit, non-prefetchable) [size=16M] Memory at d0000000 (64-bit, prefetchable) [size=256M] Memory at fa000000 (64-bit, non-prefetchable) [size=32M] I/O ports at dc00 [size=128] [virtual] Expansion ROM at fde80000 [disabled] [size=512K] Capabilities: [60] Power Management version 3 Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+ Capabilities: [78] Express Endpoint, MSI 00 Capabilities: [b4] Vendor Specific Information: Len=14 <?> Kernel driver in use: nvidia Kernel modules: nvidia-current, nouveau, nvidiafb 02:00.0 Network controller: Atheros Communications Inc. AR928X Wireless Network Adapter (PCI-Express) (rev 01) Subsystem: Device 1a3b:1067 Flags: bus master, fast devsel, latency 0, IRQ 16 Memory at fdff0000 (64-bit, non-prefetchable) [size=64K] Capabilities: [40] Power Management version 2 Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit- Capabilities: [60] Express Legacy Endpoint, MSI 00 Capabilities: [90] MSI-X: Enable- Count=1 Masked- Kernel driver in use: ath9k Kernel modules: ath9k 03:00.0 USB Controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 03) (prog-if 30) Flags: bus master, fast devsel, latency 0, IRQ 10 Memory at febfe000 (64-bit, non-prefetchable) [size=8K] Capabilities: [50] Power Management version 3 Capabilities: [70] MSI: Enable- Count=1/8 Maskable- 64bit+ Capabilities: [90] MSI-X: Enable- Count=8 Masked- Capabilities: [a0] Express Endpoint, MSI 00 lsusb Bus 003 Device 002: ID 0b05:1751 ASUSTek Computer, Inc. 
    BT-253 Bluetooth Adapter
Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 001 Device 004: ID 0bda:0158 Realtek Semiconductor Corp. USB 2.0 multicard reader
Bus 001 Device 002: ID 04f2:b071 Chicony Electronics Co., Ltd 2.0M UVC Webcam / CNF7129
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
dmesg: posting the full dmesg output would exceed the Stack Exchange posting limit of 30K... but nothing in it relates to USB 3.0

    Read the article

  • Application Composer Series: Where and When to use Groovy

    - by Richard Bingham
    This brief post is really intended as more of a reference than an article. The list below highlights two things for each Application Composer feature: when you might use it (the description), and whether you can add your own custom logic via groovy code (the Groovy flag). Obviously this applies only where Application Composer exists, namely Fusion CRM and Oracle Sales Cloud, and is based on current (release 8) functionality.

    - Field Triggers (Groovy: Y). React to run-time data changes. Only fired when the field is changed and upon submit.
    - Object Triggers (Groovy: Y). To extend the standard processing logic for an object, based on record creation, updates and deletes. There is a split between these firing events, with some related to UI/ADF actions and others originating in the database.
      UI trigger points:
      - After Create: fires when a new object record is created. Commonly used to set default values for fields.
      - Before Modify: fires when the end-user tries to modify a field value. Could be used for generic warnings or extra security logic.
      - Before Invalidate: fires on the parent object when one of its child object records is created, updated, or deleted. For building in relationship logic.
      - Before Remove: fires when an attempt is made to delete an object record. Can be used to create conditions that prevent deletes.
      Database trigger points:
      - Before Insert in Database: fires before a new object is inserted into the database. Can be used to ensure a dependent record exists or check for duplicates.
      - After Insert in Database: fires after a new object is inserted into the database. Could be used to create a complementary record.
      - Before Update in Database: fires before an existing object is modified in the database. Could be used to check dependent record values.
      - After Update in Database: fires after an existing object is modified in the database. Could be used to update a complementary record.
      - Before Delete in Database: fires before an existing object is deleted from the database. Could be used to check dependent record values.
      - After Delete in Database: fires after an existing object is deleted from the database. Could be used to remove dependent records.
      - After Commit in Database: fires after the change pending for the current object (insert, update, delete) is made permanent in the current transaction. Could be used when committed data that has passed all validation is required.
      - After Changes Posted to Database: fires after all changes have been posted to the database, but before they are permanently committed. Could be used to make additional changes that will be saved as part of the current transaction.
    - Field Validation (Groovy: Y). Displays a user-entered error message based on groovy logic validating the field value. The message is shown only when the validation logic returns false, and the logic is triggered only when tabbing out of the field on the user interface.
    - Object Validation (Groovy: Y). Commonly used where validation is needed across multiple related fields on the object. Triggered on the submit UI action.
    - Object Workflows (Groovy: Y). All Object Workflows are fired upon either record creation or update, along with the option of adding a custom groovy firing condition. The available workflow actions are:
      - Field Updates (Groovy: Y): change another field when a specified one changes. Intended as an easy way to set different run-time values (e.g. pick values for LOVs); the value field also permits groovy logic entry.
      - E-Mail Notification (Groovy: N): sends an email notification to specified users/roles. Templates support run-time value tokens and rich text.
      - Task Creation (Groovy: N): for adding standard tasks for use in the worklist functionality.
      - Outbound Message (Groovy: N): creates and sends an XML payload of the related object SDO to a specified endpoint.
      - Business Process Flow (Groovy: N): intended for approval using the seeded process, however it can also trigger custom BPMN flows.
    - Global Functions (Groovy: Y). Utility functions that can be called from any groovy code in Application Composer (across applications).
    - Object Functions (Groovy: Y). Utility functions that are local to the parent object. Usually triggered from within 'Buttons and Actions' definitions in Application Composer, although they can be called from other code for that object (e.g. from a trigger).
    - Add Custom Fields (Groovy: Y). When adding custom fields there are a few places you can include groovy logic:
      - Default Value (Groovy: Y): logic that sets the default value when new records are entered.
      - Conditionally Updateable (Groovy: Y): logic that sets the field to read-only or not.
      - Conditionally Required (Groovy: Y): logic that sets the field to required or not.
      - Formula Field (Groovy: Y): provides a new aggregate field that is entirely based on groovy logic and other field values.
    - Simplified UI Layouts - Advanced Expressions (Groovy: Y). Used for creating dynamic layouts for simplified UI pages where fields and regions show/hide based on run-time context values and logic. Also includes support for the depends-on feature as a trigger.

    Related References: This Blog: Application Composer Series; Extending Sales Guide: Using Groovy Scripts; Groovy Scripting Reference Guide.
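    To give a flavour of the scripting these hooks accept, here is a minimal field validation sketch. It is not from the table above, and the field name DiscountPercent_c is purely illustrative. The rule returns true when the value is acceptable; returning false raises the error message configured alongside it.

    // Field Validation rule (groovy): return true if the value is valid.
    // DiscountPercent_c is a hypothetical custom field, not a product field.
    if (DiscountPercent_c == null) {
        return true // leave empty values to the field's 'required' setting
    }
    return DiscountPercent_c >= 0 && DiscountPercent_c <= 40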

    Read the article

  • Web Services Example - Part 2: Programmatic

    - by Denis T
    In this edition of the ADF Mobile blog we'll tackle part 2 of our Web Service examples. In this posting we'll take a look at using a SOAP Web Service, but calling it programmatically in code and parsing the return into a bean.

    Getting the sample code: Just click here to download a zip of the entire project. You can unzip it and load it into JDeveloper and deploy it either to iOS or Android. Please follow the previous blog posts if you need help getting JDeveloper or ADF Mobile installed. Note: this is a different workspace than WS-Part1.

    Defining our Web Service: Just like our first installment, we are using the same public weather forecast web service provided free by CDYNE Corporation. Sometimes this service goes down, so please ensure you know it's up before reporting that this example isn't working. We're going to concentrate on the same two web service methods, GetCityForecastByZIP and GetWeatherInformation.

    Defining the Application: The application setup is identical to the Weather1 version. There are some improvements to the data that is displayed as part of this example, though. Now we are able to show the associated image along with each forecast line when using the Forecast By Zip feature. We've also added the temperature Hi/Low values into the UI.

    Summary of Fundamental Changes in This Application: The most fundamental change is that we're binding the UI to the Bean Data Controls instead of directly to the Web Service Data Controls. This gives us much more flexibility to control the shape of the data and allows us to cache the data outside of the Web Service. This way, if your application is, say, offline, your bean could still populate with data from a local cache and still show you some UI, as opposed to completely failing because you don't have any connectivity. In general we promote this type of programming technique with ADF Mobile to insulate your application from any issues with network connectivity.

    What's different with this example? We have set up the Web Service DC the same way, but now we have managed beans to process the data. The following classes define the "Model" of our application: CityInformation-CityForecast-Forecast and WeatherInformation-WeatherDescription. We use WeatherBean for UI interaction with the model layer. If you look through this example, we don't really do that much with the Java code except use it to grab the image URL from the weather description. In a more realistic example, you might be using some JDBC classes to persist the data to a local database. For a good architecture it is always best to keep your model and UI layers separate. This gets muddied if you start to use bindings on a page invoked from Java code, and this Java code starts to become your "model" layer. Since bindings are page specific, your model layer starts to become entwined with your UI. Not good! To help with this, we've added some utility functions that let you invoke DC methods without having a binding and thus execute methods from your "model" layer without requiring a binding in your page definition. We do this with the invokeDataControlMethod of the AdfmfJavaUtilities class. An example of this method call is available in line 95 of WeatherInformation.java and line 93 of CityInformation.java.

    What's a GenericType? Because Web Service Data Controls (and also URL Data Controls, AKA REST) use generic name/value pairs to define their structure and don't have strongly typed objects, these are actually stored internally as GenericType objects. The GenericType class is simply a property map of name/value pairs that can be hierarchical. There are methods like getAttribute where you supply the index of the attribute or its string property name. Why is this important to know? Because invokeDataControlMethod returns GenericType objects, and developers either need to parse these GenericType objects themselves or use one of our helper functions.

    GenericTypeBeanSerializationHelper: This class does exactly what its name implies. It's a helper class for developers to aid in serialization of GenericTypes to/from Java objects. This is extremely handy if you have a large GenericType object with many attributes (or you're just lazy like me!) and you just want to parse it out into a real Java object you can use more easily. Here you would use the fromGenericType method. This method takes the class of the Java object you wish to return and the GenericType as parameters. The method then parses through each attribute in the GenericType and uses reflection to set that same attribute in the Java class. Then the method returns that new object of the class you specified. This is obviously very handy to avoid a lot of shuffling code between GenericType and your own Java classes. The reverse method, toGenericType, is also available when you want to go the other way. In this case you supply the string that represents the package location in the DataControl definition (for example: "MyDC.myParams.MyCollection") and then pass in the Java object that holds the data, and a GenericType is returned to you. Again, it will use reflection to calculate the attributes that match between the Java class and the GenericType and call the getters/setters on those.

    Issues and Possible Improvements: In the next installment we'll show you how to make your web service calls asynchronously, so your UI fills dynamically when the service call returns while in the meantime showing the data you already have locally in your bean, fed from some local cache. This gives your users instant delivery of some data while you fetch other data in the background.
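    Putting those two helpers together, here is a rough sketch of the call pattern. The data control name "WeatherDC" and the parameter details are illustrative rather than lifted from the sample, and the import locations should be verified against your ADF Mobile version:

    import java.util.ArrayList;
    import java.util.List;
    import oracle.adfmf.framework.api.AdfmfJavaUtilities;
    import oracle.adfmf.framework.api.GenericTypeBeanSerializationHelper;
    import oracle.adfmf.util.GenericType;

    public CityForecast fetchForecast(String zip) throws Exception {
        // Build the parameter name/value/type lists for the DC operation.
        List names = new ArrayList();
        names.add("ZIP");
        List params = new ArrayList();
        params.add(zip);
        List types = new ArrayList();
        types.add(String.class);

        // Invoke the web service data control from the model layer;
        // no page binding is required.
        GenericType result = (GenericType) AdfmfJavaUtilities.invokeDataControlMethod(
                "WeatherDC", null, "GetCityForecastByZIP", names, params, types);

        // Reflectively copy the GenericType's name/value pairs into a typed bean.
        return (CityForecast) GenericTypeBeanSerializationHelper.fromGenericType(
                CityForecast.class, result);
    }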

    Read the article

  • How do I ensure that a JPanel shrinks when the parent frame is resized?

    - by dah
    I have a basic notes panel whose width I want to shrink when the parent JFrame is resized, but it isn't happening. I'm using nested GridBagLayouts.

    package com.protocase.notes.views;

    import com.protocase.notes.controller.NotesController;
    import com.protocase.notes.model.Subject;
    import com.protocase.notes.model.Note;
    import com.protocase.notes.model.database.PMSNotesAdapter;
    import java.awt.Color;
    import java.awt.GridBagConstraints;
    import java.awt.GridBagLayout;
    import javax.swing.BorderFactory;
    import javax.swing.JButton;
    import javax.swing.JLabel;
    import javax.swing.JPanel;
    import javax.swing.JScrollPane;

    /**
     * @author DavidH
     */
    public class NotesViewer extends JPanel {

        // <editor-fold defaultstate="collapsed" desc="Attributes">
        private Subject subject;
        private NotesController controller;
        //</editor-fold>

        // <editor-fold defaultstate="collapsed" desc="Getters N' Setters">
        /**
         * Gets back the current subject.
         * @return
         */
        public Subject getSubject() {
            return subject;
        }

        public NotesController getController() {
            return controller;
        }

        public void setController(NotesController controller) {
            this.controller = controller;
        }

        /**
         * Should clear the panel of the current subject and load the details for
         * the other object.
         * @param subject
         */
        public void setSubject(Subject subject) {
            this.subject = subject;
        }
        //</editor-fold>

        // <editor-fold defaultstate="collapsed" desc="Constructor">
        /**
         * -- Sets up a note viewer with a subject and a controller. Likely this
         * would be the constructor used if you were passing off from another
         * NoteViewer or something else that used a notes adapter or controller.
         * @param subject
         * @param controller
         */
        public NotesViewer(Subject subject, NotesController controller) {
            this.subject = subject;
            this.controller = controller;
            initComponents();
        }

        /**
         * -- Sets up a note view with a subject and creates a new controller. This
         * would be the constructor typically chosen if choosing notes was
         * infrequent and only one or two notes needs to be displayed.
         * @param subject
         */
        public NotesViewer(Subject subject) {
            this(subject, new NotesController(new PMSNotesAdapter()));
        }

        /**
         * -- Sets up a note view without a subject and creates a new controller.
         * This would be for a note viewer without any notes, perhaps populating
         * as you choose values in another form.
         * @param subject
         */
        public NotesViewer() {
            this(null);
        }
        //</editor-fold>

        // <editor-fold defaultstate="collapsed" desc="initComponents()">
        /**
         * Sets up the view for the NotesViewer
         */
        private void initComponents() {
            // -- Make a new panel for the header
            JPanel panel = new JPanel();
            panel.setLayout(new GridBagLayout());
            GridBagConstraints c = new GridBagConstraints();
            c.gridx = 0;
            c.fill = GridBagConstraints.HORIZONTAL;
            c.gridy = 0;
            c.weightx = .5;
            //c.anchor = GridBagConstraints.NORTHWEST;
            JLabel label = new JLabel("Viewing Notes for [Subject]");
            label.setAlignmentX(JLabel.LEFT_ALIGNMENT);
            label.setBorder(BorderFactory.createLineBorder(Color.YELLOW));
            panel.add(label);

            JButton newNoteButton = new JButton("New");
            c = new GridBagConstraints();
            // c.fill = GridBagConstraints.HORIZONTAL;
            c.gridx = 1;
            c.gridy = 0;
            c.weightx = .5;
            c.anchor = GridBagConstraints.EAST;
            panel.add(newNoteButton, c);

            // -- NotePanels
            c = new GridBagConstraints();
            c.fill = GridBagConstraints.HORIZONTAL;
            c.weightx = 1;
            c.weighty = 1;
            c.gridx = 0;
            c.gridwidth = 2;
            int y = 1;
            for (Note n : subject.getNotes()) {
                c.gridy = y++;
                panel.add(new NotesPanel(n, controller), c);
            }

            this.setLayout(new GridBagLayout());
            GridBagConstraints pc = new GridBagConstraints();
            pc.gridx = 0;
            pc.gridy = 0;
            pc.weightx = 1;
            pc.weighty = 1;
            pc.fill = GridBagConstraints.BOTH;
            panel.setBackground(Color.blue);
            JScrollPane scroll = new JScrollPane();
            scroll.setViewportView(panel);
            //scroll.setHorizontalScrollBarPolicy(JScrollPane.HORIZONTAL_SCROLLBAR_NEVER);
            this.add(scroll, pc);
            //this.add(panel, pc);
            // -- Add it all to the layout
        }
        //</editor-fold>

        // <editor-fold defaultstate="collapsed" desc="private methods">
        //</editor-fold>
    }

    package com.protocase.notes.views;

    import com.protocase.notes.controller.NotesController;
    import com.protocase.notes.model.Note;
    import java.awt.CardLayout;
    import java.awt.Color;
    import java.awt.Component;
    import java.awt.Dimension;
    import java.awt.GridBagConstraints;
    import java.awt.GridBagLayout;
    import java.awt.event.ActionEvent;
    import java.awt.event.ActionListener;
    import java.text.DateFormat;
    import java.text.SimpleDateFormat;
    import javax.swing.BorderFactory;
    import javax.swing.JButton;
    import javax.swing.JLabel;
    import javax.swing.JPanel;
    import javax.swing.JScrollPane;
    import javax.swing.JTextArea;
    import javax.swing.JTextField;
    import javax.swing.border.BevelBorder;
    import javax.swing.border.Border;
    import javax.swing.border.MatteBorder;

    /**
     * @author dah01
     */
    public class NotesPanel extends JPanel {

        // <editor-fold defaultstate="collapsed" desc="Attributes">
        private Note note;
        private NotesController controller;
        private CardLayout cardLayout;
        private JTextArea viewTextArea;
        private JTextArea editTextArea;
        //</editor-fold>

        // <editor-fold defaultstate="collapsed" desc="Getters N' Setters">
        public NotesController getController() {
            return controller;
        }

        public void setController(NotesController controller) {
            this.controller = controller;
        }

        public Note getNote() {
            return note;
        }

        public void setNote(Note note) {
            this.note = note;
        }
        //</editor-fold>

        // <editor-fold defaultstate="collapsed" desc="Constructor">
        /**
         * Sets up a note panel that shows everything about the note.
         * @param note
         */
        public NotesPanel(Note note, NotesController controller) {
            this.note = note;
            cardLayout = new CardLayout();
            this.setLayout(cardLayout); // -- Setup the layout manager.
            this.setBackground(new Color(199, 187, 192));
            this.setBorder(new BevelBorder(BevelBorder.RAISED));
            // -- ViewPanel
            this.add("ViewPanel", initViewPanel());
            this.add("EditPanel", initEditPanel());
        }
        //</editor-fold>

        // <editor-fold defaultstate="collapsed" desc="EditPanel">
        private JPanel initEditPanel() {
            JPanel editPanel = new JPanel();
            editPanel.setLayout(new GridBagLayout());
            GridBagConstraints c = new GridBagConstraints();
            c.fill = GridBagConstraints.HORIZONTAL;
            c.gridy = 0;
            c.weightx = 1;
            c.weighty = 0.3;
            editPanel.add(initCreatorLabel(), c);
            c.gridy++;
            editPanel.add(initEditTextScroll(), c);
            c.gridy++;
            c.anchor = GridBagConstraints.WEST;
            c.fill = GridBagConstraints.NONE;
            editPanel.add(initEditorLabel(), c);
            c.gridx++;
            c.anchor = GridBagConstraints.EAST;
            editPanel.add(initSaveButton(), c);
            return editPanel;
        }

        private JScrollPane initEditTextScroll() {
            this.editTextArea = new JTextArea(note.getContents());
            editTextArea.setLineWrap(true);
            editTextArea.setWrapStyleWord(true);
            JScrollPane scrollPane = new JScrollPane(editTextArea);
            scrollPane.setAlignmentX(JScrollPane.LEFT_ALIGNMENT);
            Border b = scrollPane.getViewportBorder();
            MatteBorder mb = BorderFactory.createMatteBorder(2, 2, 2, 2, Color.BLUE);
            scrollPane.setBorder(mb);
            return scrollPane;
        }

        private JButton initSaveButton() {
            final CardLayout l = this.cardLayout;
            final JPanel p = this;
            final NotesController c = this.controller;
            final Note n = this.note;
            final JTextArea noteText = this.viewTextArea;
            final JTextArea textToSubmit = this.editTextArea;
            ActionListener al = new ActionListener() {
                @Override
                public void actionPerformed(ActionEvent e) {
                    //controller.saveNote(n);
                    noteText.setText(textToSubmit.getText());
                    l.next(p);
                }
            };
            JButton saveButton = new JButton("Save");
            saveButton.addActionListener(al);
            saveButton.setPreferredSize(new Dimension(62, 26));
            return saveButton;
        }
        //</editor-fold>

        // <editor-fold defaultstate="collapsed" desc="ViewPanel">
        private JPanel initViewPanel() {
            JPanel viewPanel = new JPanel();
            viewPanel.setLayout(new GridBagLayout());
            GridBagConstraints c = new GridBagConstraints();
            c.fill = GridBagConstraints.HORIZONTAL;
            c.gridy = 0;
            c.weightx = 1;
            c.weighty = 0.3;
            viewPanel.add(initCreatorLabel(), c);
            c.gridy++;
            viewPanel.add(this.initNoteTextArea(), c);
            c.fill = GridBagConstraints.NONE;
            c.anchor = GridBagConstraints.WEST;
            c.gridy++;
            viewPanel.add(initEditorLabel(), c);
            c.gridx++;
            c.anchor = GridBagConstraints.EAST;
            viewPanel.add(initEditButton(), c);
            return viewPanel;
        }

        private JLabel initCreatorLabel() {
            DateFormat formatter = new SimpleDateFormat("yyyy-MM-dd");
            if (note != null) {
                String noteBy = "Note by " + note.getCreator();
                String noteCreated = formatter.format(note.getDateCreated());
                JLabel creatorLabel = new JLabel(noteBy + " @ " + noteCreated);
                creatorLabel.setAlignmentX(JLabel.LEFT_ALIGNMENT);
                return creatorLabel;
            } else {
                System.out.println("NOTE IS NULL");
                return null;
            }
        }

        private JScrollPane initNoteTextArea() {
            // -- Setup the notes area.
            this.viewTextArea = new JTextArea(note.getContents());
            viewTextArea.setEditable(false);
            viewTextArea.setLineWrap(true);
            viewTextArea.setWrapStyleWord(true);
            JScrollPane scrollPane = new JScrollPane(viewTextArea);
            scrollPane.setAlignmentX(JScrollPane.LEFT_ALIGNMENT);
            return scrollPane;
        }

        private JLabel initEditorLabel() {
            // -- Setup the edited by label.
            JLabel editorLabel = new JLabel(" -- Last edited by " + note.getLastEdited()
                    + " at " + note.getDateModified());
            editorLabel.setAlignmentX(Component.LEFT_ALIGNMENT);
            return editorLabel;
        }

        private JButton initEditButton() {
            final CardLayout l = this.cardLayout;
            final JPanel p = this;
            ActionListener ar = new ActionListener() {
                @Override
                public void actionPerformed(ActionEvent e) {
                    l.next(p);
                }
            };
            JButton editButton = new JButton("Edit");
            editButton.setPreferredSize(new Dimension(62, 26));
            editButton.addActionListener(ar);
            return editButton;
        }
        //</editor-fold>

        // <editor-fold defaultstate="collapsed" desc="Grow Width When Resized">
        @Override
        public Dimension getPreferredSize() {
            int fw = this.getParent().getSize().width;
            int fh = super.getPreferredSize().height;
            return new Dimension(fw, fh);
        }
        //</editor-fold>
    }
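    Not part of the original question, but a plausible direction for an answer: a component inside a JScrollPane is laid out at its own preferred width, so the scrolled panel never gets narrower when the frame does. One common fix is to have the scrolled panel implement Scrollable and track the viewport width, roughly as sketched below (TrackingPanel is an illustrative name; with this in place, a getPreferredSize override like the one above is unnecessary for the width):

    import java.awt.Dimension;
    import java.awt.Rectangle;
    import javax.swing.JPanel;
    import javax.swing.Scrollable;

    public class TrackingPanel extends JPanel implements Scrollable {
        @Override
        public Dimension getPreferredScrollableViewportSize() {
            return getPreferredSize();
        }

        @Override
        public boolean getScrollableTracksViewportWidth() {
            return true; // the viewport dictates our width, so we shrink with the frame
        }

        @Override
        public boolean getScrollableTracksViewportHeight() {
            return false; // height still comes from the layout, so vertical scrolling works
        }

        @Override
        public int getScrollableUnitIncrement(Rectangle visible, int orientation, int direction) {
            return 16;
        }

        @Override
        public int getScrollableBlockIncrement(Rectangle visible, int orientation, int direction) {
            return 64;
        }
    }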

    Read the article

  • How to get distinct values from the List<T> with LINQ

    - by Vincent Maverick Durano
    Recently I was working with data from a generic List<T>, and one of my objectives was to get the distinct values found in the List. Consider that we have this simple class that holds the following properties:

    public class Product
    {
        public string Make { get; set; }
        public string Model { get; set; }
    }

    Now in the page code-behind we will create a list of products by doing the following:

    private List<Product> GetProducts()
    {
        List<Product> products = new List<Product>();
        Product p = new Product();
        p.Make = "Samsung"; p.Model = "Galaxy S 1"; products.Add(p);
        p = new Product(); p.Make = "Samsung"; p.Model = "Galaxy S 2"; products.Add(p);
        p = new Product(); p.Make = "Samsung"; p.Model = "Galaxy Note"; products.Add(p);
        p = new Product(); p.Make = "Apple"; p.Model = "iPhone 4"; products.Add(p);
        p = new Product(); p.Make = "Apple"; p.Model = "iPhone 4s"; products.Add(p);
        p = new Product(); p.Make = "HTC"; p.Model = "Sensation"; products.Add(p);
        p = new Product(); p.Make = "HTC"; p.Model = "Desire"; products.Add(p);
        p = new Product(); p.Make = "Nokia"; p.Model = "Some Model"; products.Add(p);
        p = new Product(); p.Make = "Nokia"; p.Model = "Some Model"; products.Add(p);
        p = new Product(); p.Make = "Sony Ericsson"; p.Model = "800i"; products.Add(p);
        p = new Product(); p.Make = "Sony Ericsson"; p.Model = "800i"; products.Add(p);
        return products;
    }

    And then let's bind the products to the GridView:

    protected void Page_Load(object sender, EventArgs e)
    {
        if (!IsPostBack)
        {
            Gridview1.DataSource = GetProducts();
            Gridview1.DataBind();
        }
    }

    Running the code will display something like this in the page. Now what I want is to get the distinct rows from the list, so what I did was to use the LINQ Distinct operator, and unfortunately it didn't work. To make it work you must use the overload of the Distinct operator that takes an IEqualityComparer<T>, which gets you the desired results. So I added this comparer class:

    class ProductComparer : IEqualityComparer<Product>
    {
        public bool Equals(Product x, Product y)
        {
            if (Object.ReferenceEquals(x, y)) return true;
            if (Object.ReferenceEquals(x, null) || Object.ReferenceEquals(y, null)) return false;
            return x.Make == y.Make && x.Model == y.Model;
        }

        public int GetHashCode(Product product)
        {
            if (Object.ReferenceEquals(product, null)) return 0;
            int hashProductName = product.Make == null ? 0 : product.Make.GetHashCode();
            int hashProductCode = product.Model.GetHashCode();
            return hashProductName ^ hashProductCode;
        }
    }

    After that you can then bind the GridView like this:

    protected void Page_Load(object sender, EventArgs e)
    {
        if (!IsPostBack)
        {
            Gridview1.DataSource = GetProducts().Distinct(new ProductComparer());
            Gridview1.DataBind();
        }
    }

    Running the page will give you the desired output: as you'll notice, it now eliminates the duplicate rows in the GridView.

    Now what if we only want the distinct values for a certain field? For example, I want to get the distinct "Make" values such as Samsung, Apple, HTC, Nokia and Sony Ericsson and populate them in a DropDownList control for filtering purposes. I was hoping that the Distinct operator had an overload that could compare values based on a property, like GetProducts().Distinct(o => o.PropertyToCompare), but unfortunately it doesn't provide that overload, so as a workaround I used the GroupBy, Select and First LINQ query operators to achieve what I want. Here's the code to get the distinct values of a certain field:

    protected void Page_Load(object sender, EventArgs e)
    {
        if (!IsPostBack)
        {
            DropDownList1.DataSource = GetProducts().GroupBy(o => o.Make).Select(o => o.First());
            DropDownList1.DataTextField = "Make";
            DropDownList1.DataValueField = "Model";
            DropDownList1.DataBind();
        }
    }

    Running the code will display the distinct Make values in the DropDownList. That's it! I hope someone finds this post useful!
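    As a closing aside (not from the original post): the GroupBy/Select/First trick generalizes into a small extension method, so the intent reads directly at the call site. The name DistinctBy below is a label chosen for readability, not a built-in LINQ operator here:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public static class EnumerableExtensions
    {
        // Distinct by an arbitrary key: keeps the first element of each group.
        public static IEnumerable<T> DistinctBy<T, TKey>(
            this IEnumerable<T> source, Func<T, TKey> keySelector)
        {
            return source.GroupBy(keySelector).Select(g => g.First());
        }
    }

    With that helper in place, the DropDownList binding above becomes simply GetProducts().DistinctBy(o => o.Make).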

    Read the article
