Search Results

Search found 4241 results on 170 pages for 'dual nic'.


  • External USB attached drive works in Windows XP but not in Windows 7. How to fix?

    - by irrational John
    Earlier this week I purchased this "N52300 EZQuest Pro" external hard drive enclosure from here. I can connect the enclosure using USB 2.0 and access the files in both NTFS partitions on the MBR partitioned drive when I use either Windows XP (SP3) or Mac OS X 10.6. So it works as expected in XP & Snow Leopard. However, the enclosure does not work in Windows 7 (Home Premium), either 64-bit or 32-bit, or in Ubuntu 10.04 (kernel 2.6.32-23-generic). I'm thinking this must be a Windows 7 driver problem because the enclosure works in XP & Snow Leopard. I do know that no special drivers are required to use this enclosure. It is supported using the USB mass storage drivers included with XP and OS X. It should also work fine using the mass storage support in Windows 7, no? FWIW, I have also tried using 32-bit Windows 7 on both my desktop, a Gigabyte GA-965P-DS3 with a Pentium Dual-Core E6500 @ 2.93GHz, and on my early 2008 MacBook. I see the same failure in both cases that I see with 64-bit Windows 7. So it doesn't appear to be specific to one hardware platform. I'm hoping someone out there can help me either get the enclosure to work in Windows 7 or convince me that the enclosure hardware is bad and should be RMAed. At the moment, though, an RMA seems pointless since this appears to be a (Windows 7) device driver problem. I have tried to track down any updates to the mass storage drivers included with Windows 7 but have so far come up empty. Heck, I can't even figure out how to place a bug report with Microsoft since apparently the grace period for Windows 7 email support is only a few months. I came across a link to some USB troubleshooting steps in another question. I haven't had a chance to look over the suggestions on that site or try them yet. Maybe tomorrow if I have time ... ;-) I'll finish up with some more details about the problem. When I connect the enclosure using USB to Windows 7, at first it appears everything works. Windows detects the drive and installs a driver for it. Looking in Device Manager there is an entry under the Hard Drives section with the title, Hitachi HDT721010SLA360 USB Device. When you open Windows Disk Management the first time after the enclosure has been attached, the drive appears as "Not Initialized" and I'm prompted to initialize it. This is bogus. After all, the drive worked fine in XP so I know it has already been initialized, partitioned, and formatted. So of course I never try to initialize it "again". (It's a 1 TB drive and I don't want to lose the data on it.) Except for this first time, the drive never shows up in Disk Management again unless I uninstall the Hitachi HDT721010SLA360 USB Device entry under Hard Drives, unplug, and then replug the enclosure. If I do that then the process in the previous paragraph repeats. In Ubuntu the enclosure never shows up at all at the file system level. Below are an excerpt from kern.log and an excerpt from the result of lsusb -v after attaching the enclosure. It appears that Ubuntu at first recognizes the enclosure and attempts to attach it, but encounters errors which prevent it from doing so. Unfortunately, I don't know whether any of this info is useful or not.
excerpt from kern.log [ 2684.240015] usb 1-2: new high speed USB device using ehci_hcd and address 22 [ 2684.393618] usb 1-2: configuration #1 chosen from 1 choice [ 2684.395399] scsi17 : SCSI emulation for USB Mass Storage devices [ 2684.395570] usb-storage: device found at 22 [ 2684.395572] usb-storage: waiting for device to settle before scanning [ 2689.390412] usb-storage: device scan complete [ 2689.390894] scsi 17:0:0:0: Direct-Access Hitachi HDT721010SLA360 ST6O PQ: 0 ANSI: 4 [ 2689.392237] sd 17:0:0:0: Attached scsi generic sg7 type 0 [ 2689.395269] sd 17:0:0:0: [sde] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB) [ 2689.395632] sd 17:0:0:0: [sde] Write Protect is off [ 2689.395636] sd 17:0:0:0: [sde] Mode Sense: 11 00 00 00 [ 2689.395639] sd 17:0:0:0: [sde] Assuming drive cache: write through [ 2689.412003] sd 17:0:0:0: [sde] Assuming drive cache: write through [ 2689.412009] sde: sde1 sde2 [ 2689.455759] sd 17:0:0:0: [sde] Assuming drive cache: write through [ 2689.455765] sd 17:0:0:0: [sde] Attached SCSI disk [ 2692.620017] usb 1-2: reset high speed USB device using ehci_hcd and address 22 [ 2707.740014] usb 1-2: device descriptor read/64, error -110 [ 2722.970103] usb 1-2: device descriptor read/64, error -110 [ 2723.200027] usb 1-2: reset high speed USB device using ehci_hcd and address 22 [ 2738.320019] usb 1-2: device descriptor read/64, error -110 [ 2753.550024] usb 1-2: device descriptor read/64, error -110 [ 2753.780020] usb 1-2: reset high speed USB device using ehci_hcd and address 22 [ 2758.810147] usb 1-2: device descriptor read/8, error -110 [ 2763.940142] usb 1-2: device descriptor read/8, error -110 [ 2764.170014] usb 1-2: reset high speed USB device using ehci_hcd and address 22 [ 2769.200141] usb 1-2: device descriptor read/8, error -110 [ 2774.330137] usb 1-2: device descriptor read/8, error -110 [ 2774.440069] usb 1-2: USB disconnect, address 22 [ 2774.440503] sd 17:0:0:0: Device offlined - not ready after error recovery [ 2774.590023] usb 1-2: new high speed USB device using ehci_hcd and address 23 [ 2789.710020] usb 1-2: device descriptor read/64, error -110 [ 2804.940020] usb 1-2: device descriptor read/64, error -110 [ 2805.170026] usb 1-2: new high speed USB device using ehci_hcd and address 24 [ 2820.290019] usb 1-2: device descriptor read/64, error -110 [ 2835.520027] usb 1-2: device descriptor read/64, error -110 [ 2835.750018] usb 1-2: new high speed USB device using ehci_hcd and address 25 [ 2840.780085] usb 1-2: device descriptor read/8, error -110 [ 2845.910079] usb 1-2: device descriptor read/8, error -110 [ 2846.140023] usb 1-2: new high speed USB device using ehci_hcd and address 26 [ 2851.170112] usb 1-2: device descriptor read/8, error -110 [ 2856.300077] usb 1-2: device descriptor read/8, error -110 [ 2856.410027] hub 1-0:1.0: unable to enumerate USB device on port 2 [ 2856.730033] usb 3-2: new full speed USB device using uhci_hcd and address 11 [ 2871.850017] usb 3-2: device descriptor read/64, error -110 [ 2887.080014] usb 3-2: device descriptor read/64, error -110 [ 2887.310011] usb 3-2: new full speed USB device using uhci_hcd and address 12 [ 2902.430021] usb 3-2: device descriptor read/64, error -110 [ 2917.660013] usb 3-2: device descriptor read/64, error -110 [ 2917.890016] usb 3-2: new full speed USB device using uhci_hcd and address 13 [ 2922.911623] usb 3-2: device descriptor read/8, error -110 [ 2928.051753] usb 3-2: device descriptor read/8, error -110 [ 2928.280013] usb 3-2: new full speed USB device using uhci_hcd and 
address 14 [ 2933.301876] usb 3-2: device descriptor read/8, error -110 [ 2938.431993] usb 3-2: device descriptor read/8, error -110 [ 2938.540073] hub 3-0:1.0: unable to enumerate USB device on port 2 excerpt from lsusb -v Bus 001 Device 017: ID 0dc4:0000 Macpower Peripherals, Ltd Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 idVendor 0x0dc4 Macpower Peripherals, Ltd idProduct 0x0000 bcdDevice 0.01 iManufacturer 1 EZ QUEST iProduct 2 USB Mass Storage iSerial 3 220417 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 32 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 5 Config0 bmAttributes 0xc0 Self Powered MaxPower 0mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 2 bInterfaceClass 8 Mass Storage bInterfaceSubClass 6 SCSI bInterfaceProtocol 80 Bulk (Zip) iInterface 4 Interface0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x01 EP 1 OUT bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Device Qualifier (for other device speed): bLength 10 bDescriptorType 6 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 bNumConfigurations 1 Device Status: 0x0001 Self Powered Update: Results using Firewire to connect. Today I recieved a 1394b 9 pin to 1394a 6 pin cable which allowed me to connect the "EZQuest Pro" via Firewire. Everything works. When I use Firewire I can connect whether I'm using Windows 7 or Ubuntu 10.04. I even tried booting my Gigabyte desktop as an OS X 10.6.3 Hackintosh and it worked there as well. (Though if I recall correctly, it also worked when using USB 2.0 and booting OS X on the desktop. Certainly it works with USB 2.0 and my MacBook.) I believe the firmware on the device is at the latest level available, v1.07. I base this on the excerpt below from the OS X System Profiler which shows Firmware Revision: 0x107. Bottom line: It's nice that the enclosure is actually usable when I connect with Firewire. But I am still searching for an answer as to why it does not work correctly when using USB 2.0 in Windows 7 (and Ubuntu ... but really Windows 7 is my biggest concern). OXFORD IDE Device 1: Manufacturer: EZ QUEST Model: 0x0 GUID: 0x1D202E0220417 Maximum Speed: Up to 800 Mb/sec Connection Speed: Up to 400 Mb/sec Sub-units: OXFORD IDE Device 1 Unit: Unit Software Version: 0x10483 Unit Spec ID: 0x609E Firmware Revision: 0x107 Product Revision Level: ST6O Sub-units: OXFORD IDE Device 1 SBP-LUN: Capacity: 1 TB (1,000,204,886,016 bytes) Removable Media: Yes BSD Name: disk3 Partition Map Type: MBR (Master Boot Record) S.M.A.R.T. status: Not Supported
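    Since most of the Linux-side evidence here is log excerpts like the one above, a short helper script can summarize how often each USB port is resetting and timing out instead of eyeballing the raw lines. This is only a convenience sketch in Python (my own addition, not from the original post); it assumes the stock Ubuntu log location /var/log/kern.log and the message format shown above.

    import re
    from collections import Counter

    LOG = "/var/log/kern.log"  # default kernel log location on Ubuntu

    resets = Counter()
    errors = Counter()

    with open(LOG) as fh:
        for line in fh:
            # Matches lines such as:
            #   usb 1-2: reset high speed USB device using ehci_hcd and address 22
            #   usb 1-2: device descriptor read/64, error -110
            m = re.search(r"usb (\d+-[\d.]+): (reset .* USB device|device descriptor read/\d+, error (-\d+))", line)
            if not m:
                continue
            port, what, errno = m.group(1), m.group(2), m.group(3)
            if what.startswith("reset"):
                resets[port] += 1
            else:
                errors[(port, errno)] += 1

    print("resets per port:", dict(resets))
    print("descriptor read errors per (port, errno):", dict(errors))

    Run against the excerpt above, it simply confirms what is visible by eye: port 1-2 (and later 3-2) keeps cycling through resets followed by "error -110" (a timeout) descriptor reads until the hub gives up enumerating the device.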

    Read the article

  • Multiple routers, subnets, gateways etc

    - by allentown
    My current setup is: Cable modem dishes out 13 static IP's (/28), a GB switch is plugged into the cable modem, and has access to those 13 static IP's, I have about 6 "servers" in use right now. The cable modem is also a firewall, DHCP server, and 3 port 10/100 switch. I am using it as a firewall, but not currently as a DHCP server. I have plugged two network cables into the cable modem, one of which goes to the WAN port of a Linksys Dual Band Wireless 10/100/1000 router/switch. Into the Linksys are a few workstations, a few printers, and some laptops connecting to wifi. I set the Linksys to take a static IP, and enabled DHCP for the workstations, printers, etc. in 192.168.1.1/24. The network for the Linksys is mostly self contained, backups go to a SAN, on that network, it all happens through that switch, over GB. But I also get internet access from it as well via the cable modem using one static IP. This all works, however, I can not "see" the static IP machines when I am on the Linksys. I can get to them via ssh and other protocols, and if I want to from "outside", I open holes, like 80, 25, 587, 143, 22, etc. The second wire, from the cable modem/firewall/switch just uplinks to the managed GB switch. What are the pros and cons of this? I do not like giving up the static IP to the Linksys. I basically have a mixed network of public servers, and internal workstations. I want the public servers on public IP's because I do not want to mess with port forwarding and mappings. Is it correct also, that if someone breaches the Linksys wifi, they still would have a hard time getting to the static IP range, just by nature of the network topology? Today, just for a test, I toggled on the DHCP in the firewall/cable modem at the 10.1.10.1/24 range; the Linksys is in the 192.168.1.100/24 range. At that point, all the static IP machines still had in and out access, but the Linksys was unreachable. The cable modem only has 10/100 ports, so I will not plug anything but the network drop into it, which is 50Mb/10Mb. Which makes me think this could be less than ideal, as transfers from the workstation network to the server network will be bottlenecked at 100Mb when I have 1000Mb available. I may not need to solve that, if isolation is better though. I do not move a lot of data, if any, from the Linksys network to the server network, so for it to pretend to be remote is ok. Should I approach this any differently? I could enable DHCP on the cable modem/firewall, it should still send out the statics to the GB switch, but will also be a DHCP in the 10.1.10.1/24 range? I can then plug the Linksys into the GB switch, which is now picking up statics and the 10.1.10.1/24 ranges, and tell the Linksys to use 10.1.10.5 or so. Now, do I disable DHCP on the Linksys, and the cable modem/firewall will pass through the statics and the 10.1.10.1/24 ranges as well? Or, could I open a second DHCP pool on the Linksys? I guess doing so gives me network isolation again, but it is just the reverse of what I have now. But I get out of the bottleneck, not that the Linksys could ever really touch real GB speeds anyway, but the managed switch certainly can. This is all because 13 statics are not that many. Right now, 6 "servers", the Linksys, a managed switch, a few SSL certs, and I am running out. I do not want to waste a static IP on the managed GB switch, or the Linksys, unless it provides me some type of benefit.
Final question: under my current setup, if I am on a workstation, sitting at 192.168.1.109, on the Linksys, with GB, and I send a file over ssh to a static IP machine, is that traffic literally leaving for the internet and coming back in, or does it stay local? To me it seems like: Workstation (192.168.1.109) -> Linksys DHCP -> Linksys Static IP -> Cable Modem -> Server (and it hits the 10/100 ports on the cable modem, slowing me down). But does it round trip the network, leave and come back in, limiting me to the 50/10 internet speeds? *These are all made up numbers, I do not use default router IP's as I will one day add a VPN, and do not want collisions. I need some recommendations: do I want one big network, or two isolated ones? Printers these days need an IP, everything does, and I can not get autoconf/bonjour to be reliable on most printers. But I am also not sure I want the "server" side of my operation to be polluted by the workstation side of my operation. Unless there is some magic subnetting I have not learned yet, here is what I am thinking: Cable modem 10/100, has 13 static IP, publicly accessible -> Enable DHCP on the cable modem -> Cable modem plugs into managed switch -> Managed switch gets 10.1.10.1 ssh, telnet, https admin management address -> Managed switch sends static IP's to the servers -> Plug Linksys into managed switch, giving it 10.1.10.2 static internally in Linksys admin -> Linksys gets assigned 10.1.10.x as its DHCP sending range -> Local printers, workstations, iPhones, etc. connect to this -> ( Do I enable DHCP or disable it on the Linksys, just define a non-overlapping range, or create an entirely new DHCP at 10.1.50.0/24? I think I am back to isolated again with that method too? ) Thank you for any suggestions. This is the first time I have had to deal with less than a /24, and most are larger than that, but it is just a drop to a cabinet. Otherwise, it's a router, a few repeaters, and SOHO stuff that is simple, with one IP. I know a few may suggest going all DHCP on the servers, and I may one day, just not now; there has been too much moving of gear for me to be interested in that, and I would want something in the Catalyst series to deal with that.
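    For reasoning about the address math in a question like this, a short Python sketch (my own illustration; the private prefixes are the made-up ones from the question and the public /28 is a placeholder from the documentation range) shows what a /28 actually provides and whether the two private pools can collide:

    import ipaddress

    # The ISP's /28: 16 addresses, 14 usable hosts, which leaves 13 once the
    # modem/gateway takes one for itself - matching the "13 statics" above.
    public_block = ipaddress.ip_network("203.0.113.0/28")  # placeholder prefix
    print(public_block.num_addresses, "addresses,", len(list(public_block.hosts())), "usable hosts")

    # The two private pools mentioned in the question.
    linksys_pool = ipaddress.ip_network("192.168.1.0/24")
    modem_pool = ipaddress.ip_network("10.1.10.0/24")
    print("pools overlap:", linksys_pool.overlaps(modem_pool))

    # Quick membership check for a given workstation address.
    workstation = ipaddress.ip_address("192.168.1.109")
    print("in Linksys pool:", workstation in linksys_pool, "- in modem pool:", workstation in modem_pool)

    Running it confirms the two private ranges are disjoint, so traffic between the workstation pool and the static/server range always has to be routed (through the Linksys and/or the cable modem) rather than switched, which is exactly where the 10/100 ports become the bottleneck described above.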

    Read the article

  • SQL IO and SAN troubles

    - by James
    We are running two servers with identical software setup but different hardware. The first one is a VM on VMWare on a normal tower server with dual core xeons, 16 GB RAM and a 7200 RPM drive. The second one is a VM on XenServer on a powerful brand new rack server, with 4 core xeons and shared storage. We are running Dynamics AX 2012 and SQL Server 2008 R2. When I insert 15 000 records into a table on the slow tower server (as a test), it does so in 13 seconds. On the fast server it takes 33 seconds. I re-ran these tests several times with the same results. I have a feeling it is some sort of IO bottleneck, so I ran SQLIO on both. Here are the results for the slow tower server: C:\Program Files (x86)\SQLIO>test.bat C:\Program Files (x86)\SQLIO>sqlio -kW -t8 -s120 -o8 -frandom -b8 -BH -LS C:\Tes tFile.dat sqlio v1.5.SG using system counter for latency timings, 14318180 counts per second 8 threads writing for 120 secs to file C:\TestFile.dat using 8KB random IOs enabling multiple I/Os per thread with 8 outstanding buffering set to use hardware disk cache (but not file cache) using current size: 5120 MB for file: C:\TestFile.dat initialization done CUMULATIVE DATA: throughput metrics: IOs/sec: 226.97 MBs/sec: 1.77 latency metrics: Min_Latency(ms): 0 Avg_Latency(ms): 281 Max_Latency(ms): 467 histogram: ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+ %: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 99 C:\Program Files (x86)\SQLIO>sqlio -kR -t8 -s120 -o8 -frandom -b8 -BH -LS C:\Tes tFile.dat sqlio v1.5.SG using system counter for latency timings, 14318180 counts per second 8 threads reading for 120 secs from file C:\TestFile.dat using 8KB random IOs enabling multiple I/Os per thread with 8 outstanding buffering set to use hardware disk cache (but not file cache) using current size: 5120 MB for file: C:\TestFile.dat initialization done CUMULATIVE DATA: throughput metrics: IOs/sec: 91.34 MBs/sec: 0.71 latency metrics: Min_Latency(ms): 14 Avg_Latency(ms): 699 Max_Latency(ms): 1124 histogram: ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+ %: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 100 C:\Program Files (x86)\SQLIO>sqlio -kW -t8 -s120 -o8 -fsequential -b64 -BH -LS C :\TestFile.dat sqlio v1.5.SG using system counter for latency timings, 14318180 counts per second 8 threads writing for 120 secs to file C:\TestFile.dat using 64KB sequential IOs enabling multiple I/Os per thread with 8 outstanding buffering set to use hardware disk cache (but not file cache) using current size: 5120 MB for file: C:\TestFile.dat initialization done CUMULATIVE DATA: throughput metrics: IOs/sec: 1094.50 MBs/sec: 68.40 latency metrics: Min_Latency(ms): 0 Avg_Latency(ms): 58 Max_Latency(ms): 467 histogram: ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+ %: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 100 C:\Program Files (x86)\SQLIO>sqlio -kR -t8 -s120 -o8 -fsequential -b64 -BH -LS C :\TestFile.dat sqlio v1.5.SG using system counter for latency timings, 14318180 counts per second 8 threads reading for 120 secs from file C:\TestFile.dat using 64KB sequential IOs enabling multiple I/Os per thread with 8 outstanding buffering set to use hardware disk cache (but not file cache) using current size: 5120 MB for file: C:\TestFile.dat initialization done CUMULATIVE DATA: throughput metrics: IOs/sec: 1155.31 MBs/sec: 72.20 latency metrics: Min_Latency(ms): 17 Avg_Latency(ms): 55 Max_Latency(ms): 205 histogram: ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 
17 18 19 20 21 22 23 24+ %: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 100 Here are the results of the fast rack server: C:\Program Files (x86)\SQLIO>test.bat C:\Program Files (x86)\SQLIO>sqlio -kW -t8 -s120 -o8 -frandom -b8 -BH -LS E:\Tes tFile.dat sqlio v1.5.SG using system counter for latency timings, 62500000 counts per second 8 threads writing for 120 secs to file E:\TestFile.dat using 8KB random IOs enabling multiple I/Os per thread with 8 outstanding buffering set to use hardware disk cache (but not file cache) open_file: CreateFile (E:\TestFile.dat for write): The system cannot find the pa th specified. exiting C:\Program Files (x86)\SQLIO>sqlio -kR -t8 -s120 -o8 -frandom -b8 -BH -LS E:\Tes tFile.dat sqlio v1.5.SG using system counter for latency timings, 62500000 counts per second 8 threads reading for 120 secs from file E:\TestFile.dat using 8KB random IOs enabling multiple I/Os per thread with 8 outstanding buffering set to use hardware disk cache (but not file cache) open_file: CreateFile (E:\TestFile.dat for read): The system cannot find the pat h specified. exiting C:\Program Files (x86)\SQLIO>sqlio -kW -t8 -s120 -o8 -fsequential -b64 -BH -LS E :\TestFile.dat sqlio v1.5.SG using system counter for latency timings, 62500000 counts per second 8 threads writing for 120 secs to file E:\TestFile.dat using 64KB sequential IOs enabling multiple I/Os per thread with 8 outstanding buffering set to use hardware disk cache (but not file cache) open_file: CreateFile (E:\TestFile.dat for write): The system cannot find the pa th specified. exiting C:\Program Files (x86)\SQLIO>sqlio -kR -t8 -s120 -o8 -fsequential -b64 -BH -LS E :\TestFile.dat sqlio v1.5.SG using system counter for latency timings, 62500000 counts per second 8 threads reading for 120 secs from file E:\TestFile.dat using 64KB sequential IOs enabling multiple I/Os per thread with 8 outstanding buffering set to use hardware disk cache (but not file cache) open_file: CreateFile (E:\TestFile.dat for read): The system cannot find the pat h specified. 
exiting C:\Program Files (x86)\SQLIO>test.bat C:\Program Files (x86)\SQLIO>sqlio -kW -t8 -s120 -o8 -frandom -b8 -BH -LS c:\Tes tFile.dat sqlio v1.5.SG using system counter for latency timings, 62500000 counts per second 8 threads writing for 120 secs to file c:\TestFile.dat using 8KB random IOs enabling multiple I/Os per thread with 8 outstanding buffering set to use hardware disk cache (but not file cache) using current size: 5120 MB for file: c:\TestFile.dat initialization done CUMULATIVE DATA: throughput metrics: IOs/sec: 2575.77 MBs/sec: 20.12 latency metrics: Min_Latency(ms): 1 Avg_Latency(ms): 24 Max_Latency(ms): 655 histogram: ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+ %: 0 0 0 5 8 9 9 9 8 5 3 1 1 1 1 0 0 0 0 0 0 0 0 0 37 C:\Program Files (x86)\SQLIO>sqlio -kR -t8 -s120 -o8 -frandom -b8 -BH -LS c:\Tes tFile.dat sqlio v1.5.SG using system counter for latency timings, 62500000 counts per second 8 threads reading for 120 secs from file c:\TestFile.dat using 8KB random IOs enabling multiple I/Os per thread with 8 outstanding buffering set to use hardware disk cache (but not file cache) using current size: 5120 MB for file: c:\TestFile.dat initialization done CUMULATIVE DATA: throughput metrics: IOs/sec: 1141.39 MBs/sec: 8.91 latency metrics: Min_Latency(ms): 1 Avg_Latency(ms): 55 Max_Latency(ms): 652 histogram: ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+ %: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 91 C:\Program Files (x86)\SQLIO>sqlio -kW -t8 -s120 -o8 -fsequential -b64 -BH -LS c :\TestFile.dat sqlio v1.5.SG using system counter for latency timings, 62500000 counts per second 8 threads writing for 120 secs to file c:\TestFile.dat using 64KB sequential IOs enabling multiple I/Os per thread with 8 outstanding buffering set to use hardware disk cache (but not file cache) using current size: 5120 MB for file: c:\TestFile.dat initialization done CUMULATIVE DATA: throughput metrics: IOs/sec: 341.37 MBs/sec: 21.33 latency metrics: Min_Latency(ms): 5 Avg_Latency(ms): 186 Max_Latency(ms): 120037 histogram: ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+ %: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 100 C:\Program Files (x86)\SQLIO>sqlio -kR -t8 -s120 -o8 -fsequential -b64 -BH -LS c :\TestFile.dat sqlio v1.5.SG using system counter for latency timings, 62500000 counts per second 8 threads reading for 120 secs from file c:\TestFile.dat using 64KB sequential IOs enabling multiple I/Os per thread with 8 outstanding buffering set to use hardware disk cache (but not file cache) using current size: 5120 MB for file: c:\TestFile.dat initialization done CUMULATIVE DATA: throughput metrics: IOs/sec: 1024.07 MBs/sec: 64.00 latency metrics: Min_Latency(ms): 5 Avg_Latency(ms): 61 Max_Latency(ms): 81632 histogram: ms: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+ %: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 100 Three of the four tests are, to my mind, within reasonable parameters for the rack server. However, the 64 write test is incredibly slow on the rack server. (68 mb/sec on the slow tower vs 21 mb/s on the rack). The read speed for 64k also seems slow. Is this enough to say there is some sort of bottleneck with the shared storage? I need to know if I can take this evidence and say we need to launch an investigation into this. Any help is appreciated.
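    As a sanity check on the figures above, the MBs/sec line in SQLIO output is just IOs/sec multiplied by the block size, so the reported numbers can be cross-checked (and the two servers compared) with a few lines. This is a back-of-the-envelope sketch in Python using the values quoted above, not part of SQLIO itself:

    # MB/s ~= IOs/sec * block size (KB) / 1024
    results = [
        # (label, IOs/sec, block KB, reported MB/s)
        ("tower  8K random write", 226.97, 8, 1.77),
        ("tower 64K sequential write", 1094.50, 64, 68.40),
        ("rack   8K random write", 2575.77, 8, 20.12),
        ("rack  64K sequential write", 341.37, 64, 21.33),
    ]
    for label, iops, block_kb, reported in results:
        computed = iops * block_kb / 1024
        print("%-28s computed %6.2f MB/s, reported %6.2f MB/s" % (label, computed, reported))

    The interesting pair is the last one: the rack server delivers roughly ten times the tower's random 8K write IOPS, yet its 64K sequential write throughput drops to about a third of the tower's (and its maximum latencies reach 80-120 seconds), which is what points the finger at the shared storage rather than at SQL Server or AX.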

    Read the article

  • SWIG & C/C++ Python API connected - SEGFAULT

    - by user289637
    Hello, my task is to create a dual program. At the beginning I start a C program that calls a Python method through the Python C/C++ API. The called method then calls a function that is wrapped with SWIG. I show my sample below, along with a backtrace from gdb after I get a segmentation fault. main.c: #include <Python.h> #include <stdio.h> #include "utils.h" int main(int argc, char** argv) { printf("Calling from C !\n"); increment(); int i; for(i = 0; i < 11; ++i) { Py_Initialize(); PyObject *pname = PyString_FromString("py_function"); PyObject *module = PyImport_Import(pname); PyObject *dict = PyModule_GetDict(module); PyObject *func = PyDict_GetItemString(dict, "ink"); PyObject_CallObject(func, NULL); Py_DECREF(module); Py_DECREF(pname); printf("\tbefore finalize\n"); Py_Finalize(); printf("\tafter finalize\n"); } return 0; } utils.c #include <stdio.h> #include "utils.h" void increment(void) { printf("Incremention counter to: %u\n", ++counter); } py_function.py #!/usr/bin/python2.6 '''py_function.py - Python source designed to demonstrate the use of python embedding''' import utils def ink(): print 'I am gonna increment !' utils.increment() and the last thing is my Makefile & SWIG configuration file. Makefile: CC=gcc CFLAGS=-c -g -Wall -std=c99 all: main main: main.o utils.o utils_wrap.o $(CC) main.o utils.o -lpython2.6 -o sample swig -Wall -python -o utils_wrap.c utils.i $(CC) utils.o utils_wrap.o -shared -o _utils.so main.o: main.c $(CC) $(CFLAGS) main.c -I/usr/include/python2.6 -o main.o utils.o: utils.c utils.h $(CC) $(CFLAGS) -fPIC utils.c -o $@ utils_wrap.o: utils_wrap.c $(CC) -c -fPIC utils_wrap.c -I/usr/include/python2.6 -o $@ clean: rm -rf *.o The program is started with ./main, and here is the output: (gdb) run Starting program: /home/marxin/Programming/python2/sample [Thread debugging using libthread_db enabled] Calling from C ! Incremention counter to: 1 I am gonna increment ! Incremention counter to: 2 before finalize after finalize I am gonna increment ! Incremention counter to: 3 before finalize after finalize I am gonna increment ! Incremention counter to: 4 before finalize after finalize Program received signal SIGSEGV, Segmentation fault. 0xb7ed3e4e in PyObject_Malloc () from /usr/lib/libpython2.6.so.1.0 Backtrace: (gdb) backtrace #0 0xb7ed3e4e in PyObject_Malloc () from /usr/lib/libpython2.6.so.1.0 #1 0xb7ca2b2c in ?? () #2 0xb7f8dd40 in ?? () from /usr/lib/libpython2.6.so.1.0 #3 0xb7eb014c in ?? () from /usr/lib/libpython2.6.so.1.0 #4 0xb7f86ff4 in ?? () from /usr/lib/libpython2.6.so.1.0 #5 0xb7f99820 in ?? () from /usr/lib/libpython2.6.so.1.0 #6 0x00000001 in ?? () #7 0xb7f8dd40 in ?? () from /usr/lib/libpython2.6.so.1.0 #8 0xb7f4f014 in _PyObject_GC_Malloc () from /usr/lib/libpython2.6.so.1.0 #9 0xb7f99820 in ?? () from /usr/lib/libpython2.6.so.1.0 #10 0xb7f4f104 in _PyObject_GC_NewVar () from /usr/lib/libpython2.6.so.1.0 #11 0xb7ee8760 in _PyType_Lookup () from /usr/lib/libpython2.6.so.1.0 #12 0xb7f99820 in ?? () from /usr/lib/libpython2.6.so.1.0 #13 0x00000001 in ?? () #14 0xb7f8dd40 in ?? () from /usr/lib/libpython2.6.so.1.0 #15 0xb7ef13ed in ?? () from /usr/lib/libpython2.6.so.1.0 #16 0xb7f86ff4 in ?? () from /usr/lib/libpython2.6.so.1.0 #17 0x00000001 in ?? () #18 0xbfff0c34 in ?? () #19 0xb7e993c3 in ?? () from /usr/lib/libpython2.6.so.1.0 #20 0x00000001 in ?? () #21 0xbfff0c70 in ?? () #22 0xb7f99da0 in ?? () from /usr/lib/libpython2.6.so.1.0 #23 0xb7f86ff4 in ?? () from /usr/lib/libpython2.6.so.1.0 #24 0xb7f86ff4 in ??
() from /usr/lib/libpython2.6.so.1.0 #25 0x080a6b0c in ?? () #26 0x080a6b0c in ?? () #27 0xb7e99420 in PyObject_CallFunctionObjArgs () from /usr/lib/libpython2.6.so.1.0 #28 0xb7f86ff4 in ?? () from /usr/lib/libpython2.6.so.1.0 #29 0xb7f86ff4 in ?? () from /usr/lib/libpython2.6.so.1.0 #30 0x800e55eb in ?? () #31 0x080a6b0c in ?? () #32 0xb7e9958c in PyObject_IsSubclass () from /usr/lib/libpython2.6.so.1.0 #33 0xb7f8dd40 in ?? () from /usr/lib/libpython2.6.so.1.0 #34 0x080a9020 in ?? () #35 0xb7fb78f0 in PyFPE_counter () from /usr/lib/libpython2.6.so.1.0 #36 0xb7f86ff4 in ?? () from /usr/lib/libpython2.6.so.1.0 #37 0x00000000 in ?? () Thanks for your help and advices, marxin

    Read the article

  • Image/"most resembling pixel" search optimization?

    - by SigTerm
    The situation: Let's say I have an image A, say, 512x512 pixels, and an image B, 5x5 or 7x7 pixels. Both images are 24-bit RGB, and B has a 1-bit alpha mask (so each pixel is either completely transparent or completely solid). I need to find within image A a pixel which (with its neighbors) most closely resembles image B, OR the pixel that probably most closely resembles image B. Resemblance is calculated as a "distance", which is the sum of "distances" between B's non-transparent pixels and the corresponding A pixels, divided by the number of B's non-transparent pixels. Here is a sample SDL code for explanation: struct Pixel{ unsigned char b, g, r, a; }; void fillPixel(int x, int y, SDL_Surface* dst, SDL_Surface* src, int dstMaskX, int dstMaskY){ Pixel& dstPix = *((Pixel*)((char*)(dst->pixels) + sizeof(Pixel)*x + dst->pitch*y)); int xMin = x + texWidth - searchWidth; int xMax = xMin + searchWidth*2; int yMin = y + texHeight - searchHeight; int yMax = yMin + searchHeight*2; int numFilled = 0; for (int curY = yMin; curY < yMax; curY++) for (int curX = xMin; curX < xMax; curX++){ Pixel& cur = *((Pixel*)((char*)(dst->pixels) + sizeof(Pixel)*(curX & texMaskX) + dst->pitch*(curY & texMaskY))); if (cur.a != 0) numFilled++; } if (numFilled == 0){ int srcX = rand() % src->w; int srcY = rand() % src->h; dstPix = *((Pixel*)((char*)(src->pixels) + sizeof(Pixel)*srcX + src->pitch*srcY)); dstPix.a = 0xFF; return; } int storedSrcX = rand() % src->w; int storedSrcY = rand() % src->h; float lastDifference = 3.40282347e+37F; //unsigned char mask = for (int srcY = searchHeight; srcY < (src->h - searchHeight); srcY++) for (int srcX = searchWidth; srcX < (src->w - searchWidth); srcX++){ float curDifference = 0; int numPixels = 0; for (int tmpY = -searchHeight; tmpY < searchHeight; tmpY++) for(int tmpX = -searchWidth; tmpX < searchWidth; tmpX++){ Pixel& tmpSrc = *((Pixel*)((char*)(src->pixels) + sizeof(Pixel)*(srcX+tmpX) + src->pitch*(srcY+tmpY))); Pixel& tmpDst = *((Pixel*)((char*)(dst->pixels) + sizeof(Pixel)*((x + dst->w + tmpX) & dstMaskX) + dst->pitch*((y + dst->h + tmpY) & dstMaskY))); if (tmpDst.a){ numPixels++; int dr = tmpSrc.r - tmpDst.r; int dg = tmpSrc.g - tmpDst.g; int db = tmpSrc.b - tmpDst.b; curDifference += dr*dr + dg*dg + db*db; } } if (numPixels) curDifference /= (float)numPixels; if (curDifference < lastDifference){ lastDifference = curDifference; storedSrcX = srcX; storedSrcY = srcY; } } dstPix = *((Pixel*)((char*)(src->pixels) + sizeof(Pixel)*storedSrcX + src->pitch*storedSrcY)); dstPix.a = 0xFF; } This thing is supposed to be used for texture generation. Now, the question: The easiest way to do this is a brute force search (which is what the example routine uses). But it is slow - even using GPU acceleration and a dual core CPU won't make it much faster. It looks like I can't use a modified binary search because of B's mask. So, how can I find the desired pixel faster? Additional Info: It is allowed to use 2 cores, GPU acceleration, CUDA, and 1.5..2 gigabytes of RAM for the task. I would prefer to avoid some kind of lengthy preprocessing phase that will take 30 minutes to finish. Ideas?
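    For reference, the resemblance metric described above (masked squared RGB difference, averaged over B's solid pixels) and the brute-force scan condense to only a few lines; here they are restated as a small Python/NumPy sketch (my own illustration of the metric, not a faster algorithm):

    import numpy as np

    def masked_distance(A, B, alpha, x, y):
        """Mean squared RGB distance between patch B and the same-sized region
        of A whose top-left corner is (x, y), counting only B's solid pixels.
        A: (H, W, 3) uint8, B: (h, w, 3) uint8, alpha: (h, w) bool."""
        h, w = alpha.shape
        region = A[y:y + h, x:x + w].astype(np.int32)
        sq = ((region - B.astype(np.int32)) ** 2).sum(axis=2)  # per-pixel squared RGB distance
        n = int(alpha.sum())
        return sq[alpha].sum() / n if n else np.inf

    def brute_force_best(A, B, alpha):
        """The slow exhaustive scan the question wants to speed up."""
        H, W, _ = A.shape
        h, w = alpha.shape
        best, best_xy = np.inf, (0, 0)
        for y in range(H - h + 1):
            for x in range(W - w + 1):
                d = masked_distance(A, B, alpha, x, y)
                if d < best:
                    best, best_xy = d, (x, y)
        return best_xy, best

    The sketch only restates the metric and the exhaustive scan; the open question above is how to do better than that scan.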

    Read the article

  • Setting up and using Bing Translate API Service for Machine Translation

    - by Rick Strahl
    Last week I spent quite a bit of time trying to set up the Bing Translate API service. I can honestly say this was one of the most screwed up developer experiences I've had in a long while - specifically related to the byzantine sign up process that Microsoft has in place. Not only is it nearly impossible to find decent documentation on the required signup process, but some of the links in the docs are just plain wrong, and some of the account pages you need in order to access the actual account information once signed up are not linked anywhere from the administration UI. To make things even harder, the APIs changed a while back, introducing a completely new authentication scheme that's described in a documentation topic which isn't directly linked, which also made for a very frustrating search experience. It's a bummer that this is the case too, because the actual API itself is easy to use and works very well - fast and reasonably accurate (as accurate as you can expect machine translation to be). But the sign up process is a pain in the ass, doubtlessly leaving many people to give up in frustration. In this post I'll try to hit, in one place, all the points needed to set up and use the Bing Translate API, since such a document seems to be missing from Microsoft. Hopefully the API folks at Microsoft will get their shit together and actually provide this sort of info on their site… Signing Up The first step required is to create a Windows Azure MarketPlace account. Go to https://datamarket.azure.com/ and sign in with your Windows Live Id. If you don't have an account you will be taken to a registration page which you have to fill out; follow the links and complete the registration. Once you're signed in you can start adding services: click on the Data link on the main page and select Microsoft Translator from the list. This adds the Microsoft Bing Translator to your services. Pricing The page shows the pricing matrix and the free service, which provides 2 megabytes of translations a month. Prices go up steeply from there. Pricing is determined by the actual bytes of the result translations used. Max translations are 1000 characters, so at minimum this means you get around 2000 translations a month for free. However, most translations are probably much shorter, so you can expect a larger number of translations to go through. For testing or low volume translations this should be just fine. Once signed up there are no further instructions and you're left in limbo on the MS site. Register your Application Once you've created the Data association with Translator, the next step is registering your application. To do this you need to access your developer account. Go to https://datamarket.azure.com/developer/applications/register and provide a ClientId, which is effectively the unique string identifier for your application (not your customer id!), and your name. The client secret is auto-created and becomes your 'password'. For the redirect url provide any https url: https://microsoft.com works. Give this application a description of your choice so you can identify it in the list of apps. Now, once you've registered your application, keep track of the ClientId and ClientSecret - those are the two keys you need to authenticate before you can call the Translate API. Oddly, the applications page is hidden from the Azure Portal UI. I couldn't find a direct link from anywhere on the site back to this page where I can examine my developer application keys.
To find them you can go to: https://datamarket.azure.com/developer/applications You can come back here to look at your registered applications and pick up the ClientID and ClientSecret. Fun eh? But we're now ready to actually call the API and do some translating. Using the Bing Translate API The good news is that after this signup hell, using the API is pretty straightforward. To use the translation API you'll need to actually use two services: You need to call an authentication API service first, before you can call the actual translator API. These two APIs live on different domains, and the authentication API returns JSON data while the translator service returns XML. So much for consistency. Authentication The first step is authentication. The service uses oAuth authentication with a  bearer token that has to be passed to the translator API. The authentication call retrieves the oAuth token that you can then use with the translate API call. The bearer token has a short 10 minute life time, so while you can cache it for successive calls, the token can't be cached for long periods. This means for Web backend requests you typically will have to authenticate each time unless you build a more elaborate caching scheme that takes the timeout into account (perhaps using the ASP.NET Cache object). For low volume operations you can probably get away with simply calling the auth API for every translation you do. To call the Authentication API use code like this:/// /// Retrieves an oAuth authentication token to be used on the translate /// API request. The result string needs to be passed as a bearer token /// to the translate API. /// /// You can find client ID and Secret (or register a new one) at: /// https://datamarket.azure.com/developer/applications/ /// /// The client ID of your application /// The client secret or password /// public string GetBingAuthToken(string clientId = null, string clientSecret = null) { string authBaseUrl = https://datamarket.accesscontrol.windows.net/v2/OAuth2-13; if (string.IsNullOrEmpty(clientId) || string.IsNullOrEmpty(clientSecret)) { ErrorMessage = Resources.Resources.Client_Id_and_Client_Secret_must_be_provided; return null; } var postData = string.Format("grant_type=client_credentials&client_id={0}" + "&client_secret={1}" + "&scope=http://api.microsofttranslator.com", HttpUtility.UrlEncode(clientId), HttpUtility.UrlEncode(clientSecret)); // POST Auth data to the oauth API string res, token; try { var web = new WebClient(); web.Encoding = Encoding.UTF8; res = web.UploadString(authBaseUrl, postData); } catch (Exception ex) { ErrorMessage = ex.GetBaseException().Message; return null; } var ser = new JavaScriptSerializer(); var auth = ser.Deserialize<BingAuth>(res); if (auth == null) return null; token = auth.access_token; return token; } private class BingAuth { public string token_type { get; set; } public string access_token { get; set; } } This code basically takes the client id and secret and posts it at the oAuth endpoint which returns a JSON string. Here I use the JavaScript serializer to deserialize the JSON into a custom object I created just for deserialization. You can also use JSON.NET and dynamic deserialization if you are already using JSON.NET in your app in which case you don't need the extra type. In my library that houses this component I don't, so I just rely on the built in serializer. The auth method returns a long base64 encoded string which can be used as a bearer token in the translate API call. 
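    For comparison, the two calls can be condensed into a short Python sketch (my own illustration - the article's code is C#). It uses only the endpoints, parameters and headers described in this post: the token request above and the Translate request covered in the next section. A real ClientId and ClientSecret from the registration step are required for it to actually run.

    import json
    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET

    AUTH_URL = "https://datamarket.accesscontrol.windows.net/v2/OAuth2-13"
    TRANSLATE_URL = "http://api.microsofttranslator.com/V2/Http.svc/Translate"

    def get_bing_auth_token(client_id, client_secret):
        # POST the client credentials; the JSON response carries the bearer token.
        post_data = urllib.parse.urlencode({
            "grant_type": "client_credentials",
            "client_id": client_id,
            "client_secret": client_secret,
            "scope": "http://api.microsofttranslator.com",
        }).encode("utf-8")
        with urllib.request.urlopen(AUTH_URL, post_data) as resp:
            return json.loads(resp.read().decode("utf-8"))["access_token"]

    def translate_bing(text, from_culture, to_culture, access_token):
        # GET the translation; the token goes into an "Authorization: Bearer ..." header
        # and the response is a single XML element containing the translated string.
        query = urllib.parse.urlencode({
            "text": text, "from": from_culture, "to": to_culture,
            "contentType": "text/plain",
        })
        req = urllib.request.Request(TRANSLATE_URL + "?" + query,
                                     headers={"Authorization": "Bearer " + access_token})
        with urllib.request.urlopen(req) as resp:
            return ET.fromstring(resp.read()).text

    # token = get_bing_auth_token("my-client-id", "my-client-secret")
    # print(translate_bing("Hello World we're back home!", "en", "de", token))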
Translation Once you have the authentication token you can use it to pass to the translate API. The auth token is passed as an Authorization header and the value is prefixed with a 'Bearer ' prefix for the string. Here's what the simple Translate API call looks like:/// /// Uses the Bing API service to perform translation /// Bing can translate up to 1000 characters. /// /// Requires that you provide a CLientId and ClientSecret /// or set the configuration values for these two. /// /// More info on setup: /// http://www.west-wind.com/weblog/ /// /// Text to translate /// Two letter culture name /// Two letter culture name /// Pass an access token retrieved with GetBingAuthToken. /// If not passed the default keys from .config file are used if any /// public string TranslateBing(string text, string fromCulture, string toCulture, string accessToken = null) { string serviceUrl = "http://api.microsofttranslator.com/V2/Http.svc/Translate"; if (accessToken == null) { accessToken = GetBingAuthToken(); if (accessToken == null) return null; } string res; try { var web = new WebClient(); web.Headers.Add("Authorization", "Bearer " + accessToken); string ct = "text/plain"; string postData = string.Format("?text={0}&from={1}&to={2}&contentType={3}", HttpUtility.UrlEncode(text), fromCulture, toCulture, HttpUtility.UrlEncode(ct)); web.Encoding = Encoding.UTF8; res = web.DownloadString(serviceUrl + postData); } catch (Exception e) { ErrorMessage = e.GetBaseException().Message; return null; } // result is a single XML Element fragment var doc = new XmlDocument(); doc.LoadXml(res); return doc.DocumentElement.InnerText; } The first of this code deals with ensuring the auth token exists. You can either pass the token into the method manually or let the method automatically retrieve the auth code on its own. In my case I'm using this inside of a Web application and in that situation I simply need to re-authenticate every time as there's no convenient way to manage the lifetime of the auth cookie. The auth token is added as an Authorization HTTP header prefixed with 'Bearer ' and attached to the request. The text to translate, the from and to language codes and a result format are passed on the query string of this HTTP GET request against the Translate API. The translate API returns an XML string which contains a single element with the translated string. Using the Wrapper Methods It should be pretty obvious how to use these two methods but here are a couple of test methods that demonstrate the two usage scenarios:[TestMethod] public void TranslateBingWithAuthTest() { var translate = new TranslationServices(); string clientId = DbResourceConfiguration.Current.BingClientId; string clientSecret = DbResourceConfiguration.Current.BingClientSecret; string auth = translate.GetBingAuthToken(clientId, clientSecret); Assert.IsNotNull(auth); string text = translate.TranslateBing("Hello World we're back home!", "en", "de",auth); Assert.IsNotNull(text, translate.ErrorMessage); Console.WriteLine(text); } [TestMethod] public void TranslateBingIntegratedTest() { var translate = new TranslationServices(); string text = translate.TranslateBing("Hello World we're back home!","en","de"); Assert.IsNotNull(text, translate.ErrorMessage); Console.WriteLine(text); } Other API Methods The Translate API has a number of methods available and this one is the simplest one but probably also the most common one that translates a single string. 
You can find additional methods for this API here: http://msdn.microsoft.com/en-us/library/ff512419.aspx Soap and AJAX APIs are also available and documented on MSDN: http://msdn.microsoft.com/en-us/library/dd576287.aspx These links will be your starting points for calling other methods in this API. Dual Interface I've talked about my database driven localization provider here in the past, and it's for this tool that I added the Bing localization support. Basically I have a localization administration form that allows me to translate individual strings right out of the UI, using both the Google and Bing APIs. As you can see in this example, the results from Google and Bing can vary quite a bit - in this case Google is stumped while Bing actually generated a valid translation. At other times it's the other way around - it's pretty useful to see multiple translations at the same time. Here I can choose one of the values and directly embed it into the translated text field. Lost in Translation There you have it. As I mentioned, once you have all the bureaucratic crap out of the way, calling the APIs is fairly straightforward and reasonably fast, even if you have to call the Auth API for every call. Hopefully this post will help out a few of you trying to navigate the Microsoft bureaucracy, at least until next time Microsoft upends everything and introduces new ways to sign up again. Until then - happy translating… Related Posts: Translation method Source on Github; Translating with Google Translate without Google API Keys; Creating a data-driven ASP.NET Resource Provider. © Rick Strahl, West Wind Technologies, 2005-2013. Posted in Localization, ASP.NET, .NET

    Read the article

  • CodePlex Daily Summary for Thursday, April 01, 2010

    CodePlex Daily Summary for Thursday, April 01, 2010New ProjectsASP.NET Bing Maps: Extensible and easy to use, this is ASP.NET Bing Maps Control. Drag & Drop and is ready to go. You can configure map mode, map style, add a PushPin...Bricks' Bane: Bricks' Bane is a brick breaker game developed using XNA and published on XBox Live Indy Games. Source code includes a C# library useful for game d...cURL for dotnet: Another dotnet binding for libcurl see http://curl.haxx.se for more info about cURL/libcurlCustom Functoid que acessa o banco de dados SQL: Functoid para Biztalk Server 2006 utilizando dados do SQL Server 2005FEI STU Pharmacy e-shop: Elektronicky obchod s liekmi Vytvorte jednoduchú klient-server aplikáciu, ktorá bude realizovať elektronický obchod s liekmi. Moduly: 1. e-shop f...Flavours of Wix: Investigating building DSL's to create installers based on WIXFulcrum: Fulcrum is a code generation framework built on top of the T4 technology in Visual Studio. GreviousAngel: New team projectHabanero Inferno: Habanero Inferno coming soon.Kawo Pounga !: A useless game !!!LetsXNA!!: This is a project created by members of Linked In group Lets XNA!! to build a XNA game and have fun in the process. The goal is to build a simple ...Linq To Naver , Custom Linq Provider for Naver searchengine OpenAPI: <project name>Linq to Naver </project name> <programming language>C#, CSharp</programming language>LocoSync: LocoSync is a file Syncronization/Backup/Archiver program, which is easily extendable. It is easy to add new syncronization methods using C# code.Natural Language Processing: Natural Language ProcessingNop Commerce Azure: Ce projet vous permet de mettre en place rapidement et simplement votre site d'e-commerce en ligne en bénéficiant de tous les avantages de la plate...Nwinsock: Nwinsock is a component for network , Object Transfer, Pocket Compression, Support TCP,UDP Protocol, Thread Base OnTime: OnTime is a simple program from that matching game back in the day just to bring light to programming techniques. It's developed in C#.?OpenGL ES 2.0 Compact Framework Wrapper: OpenGL ES 2.0 wrapper for .NET Compact Framework. Developed on HTC HD 2 device but should run on any Windows Mobile device that has the correct lib...ortaknokta: bu proje: birkaç kişinin bir araya gelip, istedikleri konularda tartışma yapmalarına olanak saglamak icin hazırlanmaya çalışıl maktadır. P-Data: P-Data es una herramienta que permite obtener información procedente de archivos de datos (Data Profiling) a través de consultas SQL, automatizando...PowerAuras: Addon for World of Warcraft - Displays effects on screen at different conditionsPowerShell ToodleDo Module: PowerShell Module for interacting with toodledo.com online To-Do list site. RSS Reader for Windows Phone 7: This RSS Reader application for windows 7Streamlet Containers: This is my implement of STL-style containers, including a dynamic array, a double-linked list and an r-b-tree. Just for practice. Please feel free...Troav: Social encyclopedia built using c# and the Orchard frameworkUmbraco App_Code/Usercontrol Editor: Package for Umbraco to add App_Code and usercontrol editing to the Developer section of the Umbraco administration system. Will support GeSHi editi...Vczh Reactive Programming Library: Reactive programming library provide a stream or state machine view to use .NET eventsWhoIs XML API: The project uses the public WhoIs XML API service (http://www.whoisxmlapi.com/) to obtain detailed details. 
The project is written in C# and serial...WPF FlowDocument Examples for VS2008 and VS2010: WPF Text Samples (especially FlowDocument) on the various possible effects: sub- and super-script, ruby (a.k.a. furigana), and various others...You are here (for Windows Mobile): This sample shows you how to play a *.wav file on your device with Compact Framework 2.0. There is better support for playing music on Compact F...New Releases( λunula ): Lunula 0.4.0: Changelog Implemented a virtual machine. Implemented a compiler for the virtual machine. Added first-class continuations (call/cc) Removed co...Alter gear SQL index Management: Setup 1.0.1: Changes Test connection - successful message Connection string timeout property added Setup Project added to project source code Possible issu...ASP.NET Bing Maps: ASP.NET Bing Maps 0.1b: Project Description Extensible and easy to use, this is ASP.NET Bing Maps Control. Drag & Drop and is ready to go. You can configure map mode, map ...ASP.NET MVC Validation Library: ASP.NET MVC Validation Library 1.3: Changes since 1.2: - Support remote validation - Support custom server-side validation - The design of validation attribute is improved Note: test...BigDays 2010: HelfenHelfen - v1: PLEASE NOTE: This project is published under the Microsoft Public License (Ms-PL). http://bigdays10.codeplex.com/license IT IS A DEMO SOLUTION FOR...Caps - Manage your collection!: Caps Console 0.1.4.0 Alpha: This is preview release (Alpha quality). This release contains only limited amount of fixes and new features from user point of view. Major focus f...CSharpQuery: Version 1.0: This version is stable. Please report any possible bugs. The next release will include a sample project and index management tools. Until then pl...Custom Functoid que acessa o banco de dados SQL: Custom Functoid SQL Server: Solução do Visual Studio com código fonte e script SQL do functoid em BiztalkDawf: Dual Audio Workflow: Beta 3: Suppose if two good audio events overlap in time with a videoevent of interest. (This can only happen if PluralEyes isn't used on everything). Befo...Dirac codec user interface: Dirac User Interface (checkin 37132): Same as 36795 version, but done with the last source code.DotNetNuke® Blog: 04.00.00 RC 3: PLEASE NOTE: You may upgrade an RC 2 install. But please do not upgrade previous version of the Beta releases - please start from RC 2 or 03.05.0...DotNetNuke® Skinning Extensions: SimpleTitle Skin Object: This is an example skin object that only renders the "page name" if used in a skin and the "module title" if used in a container. No extra spans, c...Fulcrum: Fulcrum v0.9: Initial release of FulcrumHelloTipi Photos Uploader: Version 2010.03.31: De toute petites corrections : - Correction du bouton envoyer - Impossible d'interagir avec l'application quand on uploadkdar: KDAR 0.0.18: KDAR - Kernel Debugger Anti Rootkit - dispacth table's signature bases updated ( many driver's) - scripts refactored - some bug fixedLegend: Legend Libraries: The latest release.Linq To Naver , Custom Linq Provider for Naver searchengine OpenAPI: Linq to Naver: Linq to NaverLive at Education Meta Web-Service: Live at Education Meta Web Service v. 1.0: We're happy to publish final version of Live at Education Meta Web Serivce (LAEMWS). In this release: Huge list of Windows Live ID enabled servic...Live@edu SSO WebPart for MOSS 2007: WebPart 2.0: This release is based on Live@edu Meta Web Service (laemws - http://laemws.codeplex.com). 
It is highly recomended to use laemws version of webpart,...LocoSync: LocoSync v0.1r2010.03.31 installer: This is the first public release. Unzip and run setup. Or if you have .net 3.5 runtime available download the exetutable and try...Natural Language Processing: test1: testNop Commerce Azure: Nop Commerce Azure: Nop Commerce Full Sources with additionnals Azure Projects.Nwinsock: NWinsock: Nwinsock version 1.0 is hereOpen NFe: DANFe v1.9.8: Correção CSTOpenGL ES 2.0 Compact Framework Wrapper: v0.1 Sources: First rough release. It has a working sample application which renders a triangle with rotation. Don't expect anything great. Just a very early ...patterns & practices - Windows Azure Guidance: Code Drop 3: Second iteration of a-Expense on Azure. This release builds on the previous one and mainly focuses on replacing SQL Azure by Table Storage. We hav...Posh4DNN: Posh4DNN Scripts 2.0: This release greatly increases the speed of installation and incorporates the use of IIS and SQL Server Snap-ins for managing those services. Inst...Process Enactment Tool Framework: PET 1.1: PET Core new intermediate model with arbitrary "clean" relations among objects and several updates of the object fields (see DependencyInterfacesA...Project Tru Tiên: Elements-test V1-fix (v1): Là Elements-test V1 đã được fix các vấn đề sau: - Fix lỗi hiển thị thú cưỡi Hổ Kỳ Lân - Fix hiển thị tab tiếng trung --> sang tiếng việt - Fix hiể...Sentinel - Log Viewer: Sentinel 0.8.1 (nLog support): Build of the 0.8.1 code (svn revision 36823) which included support for both nLog and log4net that has been in SVN for a while but didn't have a bi...sgMotion Animation Library: SgMotion v1.1 (For Sunburn 1.3.1): SgMotion v1.1 (For Sunburn 1.3.1) This release includes both a Windows & Xbox sample. The sample is set to default at Forward rendering, but can e...sTASKedit: sTASKedit 44538 (Developer Alpha): + nearly all fields are viewed in this release for task verification and identifying of unknownsTest Project (ignore): asdf asdf asdf asdf asdf asdf asdf sadf sdf asdf a: ;dlf jkasdf ;lkasjdf ;dlf jkasdf ;lkasjdf ;dlf jkasdf ;lkasjdf ;dlf jkasdf ;lkasjdf ;dlf jkasdf ;lkasjdf ;dlf jkasdf ;lkasjdf ;dlf jkasdf ;lkasjdf ...Test Project (ignore): cdscs: csdcacacTroav: Traov20100331 Source Pre-Alpah: This is some experiements with implementing custom modules with Microsoft's Orchard frame work. This is very preliminary, and subject to change.Weather Report WebControls: WebWeatherReport: 主要文件的源代码WhoIs XML API: Initial Release: Initial ReleaseYou are here (for Windows Mobile): CAB file and Source Code: You can find more Controls and samples for Windows Mobile developers at: http://www.beemobile4.netMost Popular ProjectshmrEngineRawrWBFS ManagerASP.NET Ajax LibraryMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitAJAX Control ToolkitWindows Presentation Foundation (WPF)ASP.NETLiveUpload to FacebookMost Active ProjectsRawrGraffiti CMSBase Class LibrariesjQuery Library for SharePoint Web ServicesBlogEngine.NETMicrosoft Biology FoundationN2 CMSLINQ to TwitterManaged Extensibility FrameworkFarseer Physics Engine

    Read the article

  • Using Oracle Proxy Authentication with JPA (eclipselink-Style)

    - by olaf.heimburger
    Security is a very intriguing topic. You will find it everywhere and you need to implement it everywhere. Yes, you do. Unfortunately, one can easily forget it while implementing the last mile. The Last Mile In a multi-tier application it is a common practice to use connection pools between the business layer and the database layer. Connection pools are quite useful to speed up database connection creation and to split the load. Another very common practice is to use a specific user, often called a technical user, to connect to the database. This user has authentication and authorization rules that apply to all application users. Imagine you've put every effort into defining roles for the different types of users that use your application. These roles are necessary to differentiate between normal users, premium users, and administrators (I bet you will find or already have more roles in your application). While these user roles are pretty well used within your application, once the flow of execution enters the database everything is gone. Each and every user just has one role and is the same database user. Issues? What Issues? As long as things go well, this is not a real issue. However, things do not go well all the time. Once your application becomes famous, performance decreases in certain situations, or, more importantly, current and upcoming regulations and laws require that your application must be able to apply different security measures on a per user role basis at every stage of your application. If you only have a bunch of users with the same name and role, you are not able to find the application usage profile that causes the performance issue, or which user has accessed data that he/she is not allowed to. Another threat to your role concept is that databases tend to be used by different applications and tools. These tools can be developer tools like SQL*Plus, SQL Developer, etc. or end user applications like BI Publisher, Oracle Forms and so on. These tools have no idea of your application's role concept and access the database the way they think is appropriate. A big oversight for your perfect role model and a big nightmare for your Chief Security Officer. Speaking of the CSO brings up another issue: password management. Once your technical user account is compromised, every user is able to do things that he/she is not expected to do from the design of your application. Counter Measures In the Oracle world a common counter measure is to use Virtual Private Database (VPD). This restricts the values a database user can see to the allowed minimum. However, it doesn't help with regard to a connection pool user, because this one is still not the real user. Oracle Proxy Authentication Another feature of the Oracle database is Proxy Authentication. First introduced with version 9i, it is a quite useful feature for nearly every situation. The main idea behind Proxy Authentication is to create a crippled database user who has only connect rights. Even if this user is compromised the risks are well understood and fairly limited. This user can be used in every situation in which you need to connect to the database, no matter which tool or application (see above) you use. The proxy user is perfect for multi-tier connection pools. CREATE USER app_user IDENTIFIED BY abcd1234; GRANT CREATE SESSION TO app_user; But what if you need to access real data? Well, this is the primary use case, isn't it? Now is the time to bring the application's role concept into play.
You define database roles that define the grants for your identified user groups. Once you have these groups you grant access through the proxy user with the application role to the specific user. CREATE ROLE app_role_a; GRANT app_role_a TO hr; ALTER USER hr GRANT CONNECT THROUGH app_user WITH ROLE app_role_a; Now, hr has permission to connect to the database through the proxy user. Through the role you can restrict hr's rights to those that are needed for the application only. If hr connects to the database directly, all assigned roles and permissions apply. Testing the Setup: To test the setup you can use SQL*Plus and connect to your database: $ sqlplus app_user[hr]/abcd1234 Java Persistence API: The Java Persistence API (JPA) is a fairly easy means to build applications that retrieve data from the database and put it into Java objects. You use plain old Java objects (POJOs) and mix in some Java annotations that define how the attributes of the object are used for storing data from the database into the Java object. Here is a sample for objects from the HR sample schema EMPLOYEES table. When using Java annotations you only specify what cannot be deduced from the code. If your Java class name is Employee but the table name is EMPLOYEES, you need to specify the table name, otherwise it will fail. package demo.proxy.ejb; import java.io.Serializable; import java.sql.Timestamp; import java.util.List; import javax.persistence.Column; import javax.persistence.Entity; import javax.persistence.Id; import javax.persistence.JoinColumn; import javax.persistence.ManyToOne; import javax.persistence.NamedQueries; import javax.persistence.NamedQuery; import javax.persistence.OneToMany; import javax.persistence.Table; @Entity @NamedQueries({ @NamedQuery(name = "Employee.findAll", query = "select o from Employee o") }) @Table(name = "EMPLOYEES") public class Employee implements Serializable { @Column(name="COMMISSION_PCT") private Double commissionPct; @Column(name="DEPARTMENT_ID") private Long departmentId; @Column(nullable = false, unique = true, length = 25) private String email; @Id @Column(name="EMPLOYEE_ID", nullable = false) private Long employeeId; @Column(name="FIRST_NAME", length = 20) private String firstName; @Column(name="HIRE_DATE", nullable = false) private Timestamp hireDate; @Column(name="JOB_ID", nullable = false, length = 10) private String jobId; @Column(name="LAST_NAME", nullable = false, length = 25) private String lastName; @Column(name="PHONE_NUMBER", length = 20) private String phoneNumber; private Double salary; @ManyToOne @JoinColumn(name = "MANAGER_ID") private Employee employee; @OneToMany(mappedBy = "employee") private List<Employee> employeeList; public Employee() { } public Employee(Double commissionPct, Long departmentId, String email, Long employeeId, String firstName, Timestamp hireDate, String jobId, String lastName, Employee employee, String phoneNumber, Double salary) { this.commissionPct = commissionPct; this.departmentId = departmentId; this.email = email; this.employeeId = employeeId; this.firstName = firstName; this.hireDate = hireDate; this.jobId = jobId; this.lastName = lastName; this.employee = employee; this.phoneNumber = phoneNumber; this.salary = salary; } public Double getCommissionPct() { return commissionPct; } public void setCommissionPct(Double commissionPct) { this.commissionPct = commissionPct; } public Long getDepartmentId() { return departmentId; } public void setDepartmentId(Long departmentId) { this.departmentId = departmentId; } public String getEmail() { 
return email; } public void setEmail(String email) { this.email = email; } public Long getEmployeeId() { return employeeId; } public void setEmployeeId(Long employeeId) { this.employeeId = employeeId; } public String getFirstName() { return firstName; } public void setFirstName(String firstName) { this.firstName = firstName; } public Timestamp getHireDate() { return hireDate; } public void setHireDate(Timestamp hireDate) { this.hireDate = hireDate; } public String getJobId() { return jobId; } public void setJobId(String jobId) { this.jobId = jobId; } public String getLastName() { return lastName; } public void setLastName(String lastName) { this.lastName = lastName; } public String getPhoneNumber() { return phoneNumber; } public void setPhoneNumber(String phoneNumber) { this.phoneNumber = phoneNumber; } public Double getSalary() { return salary; } public void setSalary(Double salary) { this.salary = salary; } public Employee getEmployee() { return employee; } public void setEmployee(Employee employee) { this.employee = employee; } public List<Employee> getEmployeeList() { return employeeList; } public void setEmployeeList(List<Employee> employeeList) { this.employeeList = employeeList; } public Employee addEmployee(Employee employee) { getEmployeeList().add(employee); employee.setEmployee(this); return employee; } public Employee removeEmployee(Employee employee) { getEmployeeList().remove(employee); employee.setEmployee(null); return employee; } } JPA can be used in standalone applications and in Java EE containers. In both worlds you normally create a Facade to retrieve or store the values of the Entities to or from the database. The Facade does this via an EntityManager which will be injected by the Java EE container. Here is a sample Facade Session Bean for a Java EE container. 
package demo.proxy.ejb; import java.util.HashMap; import java.util.List; import javax.ejb.Local; import javax.ejb.Remote; import javax.ejb.Stateless; import javax.persistence.EntityManager; import javax.persistence.PersistenceContext; import javax.persistence.Query; import javax.interceptor.AroundInvoke; import javax.interceptor.InvocationContext; import oracle.jdbc.driver.OracleConnection; import org.eclipse.persistence.config.EntityManagerProperties; import org.eclipse.persistence.internal.jpa.EntityManagerImpl; @Stateless(name = "DataFacade", mappedName = "ProxyUser-TestEJB-DataFacade") @Remote(DataFacade.class) @Local(DataFacadeLocal.class) public class DataFacadeBean implements DataFacade, DataFacadeLocal { @PersistenceContext(unitName = "TestEJB") private EntityManager em; private String username; public Object queryByRange(String jpqlStmt, int firstResult, int maxResults) { // setSessionUser(); Query query = em.createQuery(jpqlStmt); if (firstResult > 0) { query = query.setFirstResult(firstResult); } if (maxResults > 0) { query = query.setMaxResults(maxResults); } return query.getResultList(); } public Employee persistEmployee(Employee employee) { // setSessionUser(); em.persist(employee); return employee; } public Employee mergeEmployee(Employee employee) { // setSessionUser(); return em.merge(employee); } public void removeEmployee(Employee employee) { // setSessionUser(); employee = em.find(Employee.class, employee.getEmployeeId()); em.remove(employee); } /** select o from Employee o */ public List<Employee> getEmployeeFindAll() { Query q = em.createNamedQuery("Employee.findAll"); return q.getResultList(); } Putting Both Together: To use Proxy Authentication with JPA and within a Java EE container you have to take care of two additional requirements: use an OCI JDBC driver, and provide the user name that connects through the proxy user. Use an OCI JDBC driver: To use the OCI JDBC driver you need to set up your JDBC data source file to use the correct JDBC URL. The data source in this example is named hr, uses the URL jdbc:oracle:oci8:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521))(CONNECT_DATA=(SID=XE))), the driver oracle.jdbc.OracleDriver, the user app_user with an encrypted password (62C32F70E98297522AD97E15439FAC0E), the connection test statement SQL SELECT 1 FROM DUAL, and the JNDI name jdbc/hrDS with Application scope. Additionally you need to make sure that the version of the shared libraries of the OCI driver matches the version of the JDBC driver in your Java EE container or Java application and that they are within your PATH (on Windows) or LD_LIBRARY_PATH (on most Unix-based systems). Installing the Oracle Database Instant Client software works perfectly. Provide the user name that connects through the proxy user: This part needs some modification of your application software and session facade. Session Facade Changes: In the Session Facade we must ensure that every call that goes through the EntityManager is prepared correctly and uniquely assigned to this session. The second point is really important, as the EntityManager works with a connection pool and cannot guarantee that we set the proxy user on the connection that will be used for the database activities. To avoid changing every method call of the Session Facade we provide a method to set the username of the user that connects through the proxy user. This method needs to be called by the Facade client before doing anything else. public void setUsername(String name) { username = name; } Next we provide a means to instruct the TopLink EntityManager Delegate to use Oracle Proxy Authentication. (I love small helper methods to hide the nitty-gritty details and avoid repeating myself.) 
private void setSessionUser() { setSessionUser(username); } private void setSessionUser(String user) { if (user != null && !user.isEmpty()) { EntityManagerImpl emDelegate = ((EntityManagerImpl)em.getDelegate()); emDelegate.setProperty(EntityManagerProperties.ORACLE_PROXY_TYPE, OracleConnection.PROXYTYPE_USER_NAME); emDelegate.setProperty(OracleConnection.PROXY_USER_NAME, user); emDelegate.setProperty(EntityManagerProperties.EXCLUSIVE_CONNECTION_MODE, "Always"); } } The final step is to use the EJB 3.0 AroundInvoke interceptor. This interceptor will be called around every method invocation. We therefore check whether the Facade methods will be called or not. If so, we set the user for proxy authentication and the normal method flow continues. @AroundInvoke public Object proxyInterceptor(InvocationContext invocationCtx) throws Exception { if (invocationCtx.getTarget() instanceof DataFacadeBean) { setSessionUser(); } return invocationCtx.proceed(); } Benefits: Using Oracle Proxy Authentication has a number of additional benefits apart from implementing the role model of your application: fine-grained access control for temporary users of the account, without compromising the original password; enabling database auditing and logging; better identification of performance bottlenecks. References: Effective Oracle Database 10g Security by Design, David Knox; TopLink Developer's Guide, Chapter 98
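As a closing usage illustration (my sketch, not part of the original article): whatever calls the facade has to hand over the real user name before the first persistence call, otherwise the EntityManager keeps operating as the plain pool user. The servlet below is hypothetical; it assumes that setUsername and queryByRange are exposed on the DataFacadeLocal business interface shown above, that the container has authenticated the caller, and that the principal name matches a database user that has been granted CONNECT THROUGH app_user.

    // Hedged sketch: a hypothetical Java EE client of the DataFacade session bean.
    // Servlet mapping omitted (web.xml or @WebServlet, depending on the container version).
    import java.io.IOException;
    import java.io.PrintWriter;
    import java.util.List;
    import javax.ejb.EJB;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import demo.proxy.ejb.DataFacadeLocal;
    import demo.proxy.ejb.Employee;

    public class EmployeeListServlet extends HttpServlet {

        @EJB
        private DataFacadeLocal facade; // local business interface of DataFacadeBean

        @Override
        protected void doGet(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            // Tell the facade which real user this call belongs to. This must happen first,
            // so the AroundInvoke interceptor can switch the proxy session accordingly.
            facade.setUsername(request.getUserPrincipal().getName());

            // Every subsequent call now runs in the database as that user.
            @SuppressWarnings("unchecked")
            List<Employee> employees =
                    (List<Employee>) facade.queryByRange("select o from Employee o", 0, 10);

            PrintWriter out = response.getWriter();
            for (Employee e : employees) {
                out.println(e.getLastName() + ", " + e.getFirstName());
            }
        }
    }

One design note: with a @Stateless bean there is no guarantee that consecutive calls from the client reach the same bean instance, so in practice the user name is often passed per call or the facade is made stateful; the sketch simply follows the calling convention the article describes.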

    Read the article

  • How do you set up the driver for a Philips-based TV capture card?

    - by user8270
    Hi, I have a TV card that I have not managed to install with Ubuntu 10.10 i386. I have tried various topics in various forums and I could not install it. I hope you can help me to install it thank you. lspci 01:07.0 Multimedia controller: Philips Semiconductors SAA7130 Video Broadcast Decoder (rev 01) dmesg [10299.516344] saa7134 ALSA driver for DMA sound unloaded [11385.340661] Linux video capture interface: v2.00 [11385.384278] saa7130/34: v4l2 driver version 0.2.16 loaded [11385.384390] saa7130[0]: found at 0000:01:07.0, rev: 1, irq: 17, latency: 32, mmio: 0x0 [11385.384403] saa7130[0]: subsystem: 1131:0000, board: LifeView/Typhoon FlyVIDEO2000 [card=3,insmod option] [11385.384412] saa7130[0]: can't get MMIO memory @ 0x0 [11385.384431] saa7134: probe of 0000:01:07.0 failed with error -16 [11385.401174] saa7134 ALSA driver for DMA sound loaded [11385.401182] saa7134 ALSA: no saa7134 cards found [11477.797019] tvtime[12534]: segfault at 6b0 ip 0804cf64 sp bf928a4c error 4 in tvtime[8048000+76000] [11626.141821] tvtime[12549]: segfault at 6b0 ip 0804cf64 sp bfec357c error 4 in tvtime[8048000+76000] [12218.120632] saa7134 ALSA driver for DMA sound unloaded [12464.993061] Linux video capture interface: v2.00 [12465.028285] saa7130/34: v4l2 driver version 0.2.16 loaded [12465.028392] saa7130[0]: found at 0000:01:07.0, rev: 1, irq: 17, latency: 32, mmio: 0x0 [12465.028404] saa7134: <rant> [12465.028406] saa7134: Congratulations! Your TV card vendor saved a few [12465.028408] saa7134: cents for a eeprom, thus your pci board has no [12465.028411] saa7134: subsystem ID and I can't identify it automatically [12465.028414] saa7134: </rant> [12465.028416] saa7134: I feel better now. Ok, here are the good news: [12465.028418] saa7134: You can use the card=<nr> insmod option to specify [12465.028421] saa7134: which board do you have. 
The list: [12465.028428] saa7134: card=0 -> UNKNOWN/GENERIC [12465.028435] saa7134: card=1 -> Proteus Pro [philips reference design] 1131:2001 1131:2001 [12465.028447] saa7134: card=2 -> LifeView FlyVIDEO3000 5168:0138 4e42:0138 [12465.028457] saa7134: card=3 -> LifeView/Typhoon FlyVIDEO2000 5168:0138 4e42:0138 [12465.028467] saa7134: card=4 -> EMPRESS 1131:6752 [12465.028475] saa7134: card=5 -> SKNet Monster TV 1131:4e85 [12465.028484] saa7134: card=6 -> Tevion MD 9717 [12465.028491] saa7134: card=7 -> KNC One TV-Station RDS / Typhoon TV Tune 1131:fe01 1894:fe01 [12465.028501] saa7134: card=8 -> Terratec Cinergy 400 TV 153b:1142 [12465.028510] saa7134: card=9 -> Medion 5044 [12465.028517] saa7134: card=10 -> Kworld/KuroutoShikou SAA7130-TVPCI [12465.028523] saa7134: card=11 -> Terratec Cinergy 600 TV 153b:1143 [12465.028532] saa7134: card=12 -> Medion 7134 16be:0003 16be:5000 [12465.028542] saa7134: card=13 -> Typhoon TV+Radio 90031 [12465.028548] saa7134: card=14 -> ELSA EX-VISION 300TV 1048:226b [12465.028557] saa7134: card=15 -> ELSA EX-VISION 500TV 1048:226a [12465.028565] saa7134: card=16 -> ASUS TV-FM 7134 1043:4842 1043:4830 1043:4840 [12465.028576] saa7134: card=17 -> AOPEN VA1000 POWER 1131:7133 [12465.028585] saa7134: card=18 -> BMK MPEX No Tuner [12465.028592] saa7134: card=19 -> Compro VideoMate TV 185b:c100 [12465.028600] saa7134: card=20 -> Matrox CronosPlus 102b:48d0 [12465.028608] saa7134: card=21 -> 10MOONS PCI TV CAPTURE CARD 1131:2001 [12465.028617] saa7134: card=22 -> AverMedia M156 / Medion 2819 1461:a70b [12465.028625] saa7134: card=23 -> BMK MPEX Tuner [12465.028632] saa7134: card=24 -> KNC One TV-Station DVR 1894:a006 [12465.028640] saa7134: card=25 -> ASUS TV-FM 7133 1043:4843 [12465.028648] saa7134: card=26 -> Pinnacle PCTV Stereo (saa7134) 11bd:002b [12465.028657] saa7134: card=27 -> Manli MuchTV M-TV002 [12465.028663] saa7134: card=28 -> Manli MuchTV M-TV001 [12465.028670] saa7134: card=29 -> Nagase Sangyo TransGear 3000TV 1461:050c [12465.028679] saa7134: card=30 -> Elitegroup ECS TVP3XP FM1216 Tuner Card( 1019:4cb4 [12465.028687] saa7134: card=31 -> Elitegroup ECS TVP3XP FM1236 Tuner Card 1019:4cb5 [12465.028695] saa7134: card=32 -> AVACS SmartTV [12465.028702] saa7134: card=33 -> AVerMedia DVD EZMaker 1461:10ff [12465.028710] saa7134: card=34 -> Noval Prime TV 7133 [12465.028717] saa7134: card=35 -> AverMedia AverTV Studio 305 1461:2115 [12465.028725] saa7134: card=36 -> UPMOST PURPLE TV 12ab:0800 [12465.028734] saa7134: card=37 -> Items MuchTV Plus / IT-005 [12465.028740] saa7134: card=38 -> Terratec Cinergy 200 TV 153b:1152 [12465.028749] saa7134: card=39 -> LifeView FlyTV Platinum Mini 5168:0212 4e42:0212 5169:1502 [12465.028760] saa7134: card=40 -> Compro VideoMate TV PVR/FM 185b:c100 [12465.028768] saa7134: card=41 -> Compro VideoMate TV Gold+ 185b:c100 [12465.028776] saa7134: card=42 -> Sabrent SBT-TVFM (saa7130) [12465.028783] saa7134: card=43 -> :Zolid Xpert TV7134 [12465.028790] saa7134: card=44 -> Empire PCI TV-Radio LE [12465.028796] saa7134: card=45 -> Avermedia AVerTV Studio 307 1461:9715 [12465.028805] saa7134: card=46 -> AVerMedia Cardbus TV/Radio (E500) 1461:d6ee [12465.028813] saa7134: card=47 -> Terratec Cinergy 400 mobile 153b:1162 [12465.028821] saa7134: card=48 -> Terratec Cinergy 600 TV MK3 153b:1158 [12465.028830] saa7134: card=49 -> Compro VideoMate Gold+ Pal 185b:c200 [12465.028838] saa7134: card=50 -> Pinnacle PCTV 300i DVB-T + PAL 11bd:002d [12465.028847] saa7134: card=51 -> ProVideo PV952 1540:9524 [12465.028855] saa7134: card=52 
-> AverMedia AverTV/305 1461:2108 [12465.028863] saa7134: card=53 -> ASUS TV-FM 7135 1043:4845 [12465.028871] saa7134: card=54 -> LifeView FlyTV Platinum FM / Gold 5168:0214 5168:5214 1489:0214 5168:0304 [12465.028884] saa7134: card=55 -> LifeView FlyDVB-T DUO / MSI TV@nywhere D 5168:0306 4e42:0306 [12465.028894] saa7134: card=56 -> Avermedia AVerTV 307 1461:a70a [12465.028903] saa7134: card=57 -> Avermedia AVerTV GO 007 FM 1461:f31f [12465.028911] saa7134: card=58 -> ADS Tech Instant TV (saa7135) 1421:0350 1421:0351 1421:0370 1421:1370 [12465.028924] saa7134: card=59 -> Kworld/Tevion V-Stream Xpert TV PVR7134 [12465.028931] saa7134: card=60 -> LifeView/Typhoon/Genius FlyDVB-T Duo Car 5168:0502 4e42:0502 1489:0502 [12465.028942] saa7134: card=61 -> Philips TOUGH DVB-T reference design 1131:2004 [12465.028951] saa7134: card=62 -> Compro VideoMate TV Gold+II [12465.028958] saa7134: card=63 -> Kworld Xpert TV PVR7134 [12465.028964] saa7134: card=64 -> FlyTV mini Asus Digimatrix 1043:0210 [12465.028973] saa7134: card=65 -> V-Stream Studio TV Terminator [12465.028980] saa7134: card=66 -> Yuan TUN-900 (saa7135) [12465.028986] saa7134: card=67 -> Beholder BeholdTV 409 FM 0000:4091 [12465.028995] saa7134: card=68 -> GoTView 7135 PCI 5456:7135 [12465.029003] saa7134: card=69 -> Philips EUROPA V3 reference design 1131:2004 [12465.029011] saa7134: card=70 -> Compro Videomate DVB-T300 185b:c900 [12465.029020] saa7134: card=71 -> Compro Videomate DVB-T200 185b:c901 [12465.029028] saa7134: card=72 -> RTD Embedded Technologies VFG7350 1435:7350 [12465.029036] saa7134: card=73 -> RTD Embedded Technologies VFG7330 1435:7330 [12465.029045] saa7134: card=74 -> LifeView FlyTV Platinum Mini2 14c0:1212 [12465.029053] saa7134: card=75 -> AVerMedia AVerTVHD MCE A180 1461:1044 [12465.029062] saa7134: card=76 -> SKNet MonsterTV Mobile 1131:4ee9 [12465.029070] saa7134: card=77 -> Pinnacle PCTV 40i/50i/110i (saa7133) 11bd:002e [12465.029078] saa7134: card=78 -> ASUSTeK P7131 Dual 1043:4862 [12465.029087] saa7134: card=79 -> Sedna/MuchTV PC TV Cardbus TV/Radio (ITO [12465.029094] saa7134: card=80 -> ASUS Digimatrix TV 1043:0210 [12465.029102] saa7134: card=81 -> Philips Tiger reference design 1131:2018 [12465.029110] saa7134: card=82 -> MSI TV@Anywhere plus 1462:6231 1462:8624 [12465.029120] saa7134: card=83 -> Terratec Cinergy 250 PCI TV 153b:1160 [12465.029128] saa7134: card=84 -> LifeView FlyDVB Trio 5168:0319 [12465.029137] saa7134: card=85 -> AverTV DVB-T 777 1461:2c05 1461:2c05 [12465.029147] saa7134: card=86 -> LifeView FlyDVB-T / Genius VideoWonder D 5168:0301 1489:0301 [12465.029156] saa7134: card=87 -> ADS Instant TV Duo Cardbus PTV331 0331:1421 [12465.029165] saa7134: card=88 -> Tevion/KWorld DVB-T 220RF 17de:7201 [12465.029173] saa7134: card=89 -> ELSA EX-VISION 700TV 1048:226c [12465.029182] saa7134: card=90 -> Kworld ATSC110/115 17de:7350 17de:7352 [12465.029191] saa7134: card=91 -> AVerMedia A169 B 1461:7360 [12465.029200] saa7134: card=92 -> AVerMedia A169 B1 1461:6360 [12465.029208] saa7134: card=93 -> Medion 7134 Bridge #2 16be:0005 [12465.029216] saa7134: card=94 -> LifeView FlyDVB-T Hybrid Cardbus/MSI TV 5168:3306 5168:3502 5168:3307 4e42:3502 [12465.029229] saa7134: card=95 -> LifeView FlyVIDEO3000 (NTSC) 5169:0138 [12465.029238] saa7134: card=96 -> Medion Md8800 Quadro 16be:0007 16be:0008 16be:000d [12465.029249] saa7134: card=97 -> LifeView FlyDVB-S /Acorp TV134DS 5168:0300 4e42:0300 [12465.029259] saa7134: card=98 -> Proteus Pro 2309 0919:2003 [12465.029267] saa7134: card=99 -> AVerMedia TV 
Hybrid A16AR 1461:2c00 [12465.029276] saa7134: card=100 -> Asus Europa2 OEM 1043:4860 [12465.029284] saa7134: card=101 -> Pinnacle PCTV 310i 11bd:002f [12465.029293] saa7134: card=102 -> Avermedia AVerTV Studio 507 1461:9715 [12465.029301] saa7134: card=103 -> Compro Videomate DVB-T200A [12465.029308] saa7134: card=104 -> Hauppauge WinTV-HVR1110 DVB-T/Hybrid 0070:6700 0070:6701 0070:6702 0070:6703 0070:6704 0070:6705 [12465.029324] saa7134: card=105 -> Terratec Cinergy HT PCMCIA 153b:1172 [12465.029332] saa7134: card=106 -> Encore ENLTV 1131:2342 1131:2341 3016:2344 [12465.029344] saa7134: card=107 -> Encore ENLTV-FM 1131:230f [12465.029352] saa7134: card=108 -> Terratec Cinergy HT PCI 153b:1175 [12465.029360] saa7134: card=109 -> Philips Tiger - S Reference design [12465.029367] saa7134: card=110 -> Avermedia M102 1461:f31e [12465.029375] saa7134: card=111 -> ASUS P7131 4871 1043:4871 [12465.029384] saa7134: card=112 -> ASUSTeK P7131 Hybrid 1043:4876 [12465.029392] saa7134: card=113 -> Elitegroup ECS TVP3XP FM1246 Tuner Card 1019:4cb6 [12465.029401] saa7134: card=114 -> KWorld DVB-T 210 17de:7250 [12465.029409] saa7134: card=115 -> Sabrent PCMCIA TV-PCB05 0919:2003 [12465.029418] saa7134: card=116 -> 10MOONS TM300 TV Card 1131:2304 [12465.029426] saa7134: card=117 -> Avermedia Super 007 1461:f01d [12465.029435] saa7134: card=118 -> Beholder BeholdTV 401 0000:4016 [12465.029443] saa7134: card=119 -> Beholder BeholdTV 403 0000:4036 [12465.029451] saa7134: card=120 -> Beholder BeholdTV 403 FM 0000:4037 [12465.029459] saa7134: card=121 -> Beholder BeholdTV 405 0000:4050 [12465.029468] saa7134: card=122 -> Beholder BeholdTV 405 FM 0000:4051 [12465.029476] saa7134: card=123 -> Beholder BeholdTV 407 0000:4070 [12465.029484] saa7134: card=124 -> Beholder BeholdTV 407 FM 0000:4071 [12465.029493] saa7134: card=125 -> Beholder BeholdTV 409 0000:4090 [12465.029501] saa7134: card=126 -> Beholder BeholdTV 505 FM 5ace:5050 [12465.029510] saa7134: card=127 -> Beholder BeholdTV 507 FM / BeholdTV 509 5ace:5070 5ace:5090 [12465.029520] saa7134: card=128 -> Beholder BeholdTV Columbus TVFM 0000:5201 [12465.029528] saa7134: card=129 -> Beholder BeholdTV 607 FM 5ace:6070 [12465.029537] saa7134: card=130 -> Beholder BeholdTV M6 5ace:6190 [12465.029545] saa7134: card=131 -> Twinhan Hybrid DTV-DVB 3056 PCI 1822:0022 [12465.029554] saa7134: card=132 -> Genius TVGO AM11MCE [12465.029560] saa7134: card=133 -> NXP Snake DVB-S reference design [12465.029567] saa7134: card=134 -> Medion/Creatix CTX953 Hybrid 16be:0010 [12465.029576] saa7134: card=135 -> MSI TV@nywhere A/D v1.1 1462:8625 [12465.029584] saa7134: card=136 -> AVerMedia Cardbus TV/Radio (E506R) 1461:f436 [12465.029592] saa7134: card=137 -> AVerMedia Hybrid TV/Radio (A16D) 1461:f936 [12465.029601] saa7134: card=138 -> Avermedia M115 1461:a836 [12465.029609] saa7134: card=139 -> Compro VideoMate T750 185b:c900 [12465.029617] saa7134: card=140 -> Avermedia DVB-S Pro A700 1461:a7a1 [12465.029626] saa7134: card=141 -> Avermedia DVB-S Hybrid+FM A700 1461:a7a2 [12465.029634] saa7134: card=142 -> Beholder BeholdTV H6 5ace:6290 [12465.029642] saa7134: card=143 -> Beholder BeholdTV M63 5ace:6191 [12465.029651] saa7134: card=144 -> Beholder BeholdTV M6 Extra 5ace:6193 [12465.029659] saa7134: card=145 -> AVerMedia MiniPCI DVB-T Hybrid M103 1461:f636 1461:f736 [12465.029669] saa7134: card=146 -> ASUSTeK P7131 Analog [12465.029676] saa7134: card=147 -> Asus Tiger 3in1 1043:4878 [12465.029684] saa7134: card=148 -> Encore ENLTV-FM v5.3 1a7f:2008 [12465.029693] saa7134: 
card=149 -> Avermedia PCI pure analog (M135A) 1461:f11d [12465.029701] saa7134: card=150 -> Zogis Real Angel 220 [12465.029708] saa7134: card=151 -> ADS Tech Instant HDTV 1421:0380 [12465.029716] saa7134: card=152 -> Asus Tiger Rev:1.00 1043:4857 [12465.029725] saa7134: card=153 -> Kworld Plus TV Analog Lite PCI 17de:7128 [12465.029733] saa7134: card=154 -> Avermedia AVerTV GO 007 FM Plus 1461:f31d [12465.029742] saa7134: card=155 -> Hauppauge WinTV-HVR1150 ATSC/QAM-Hybrid 0070:6706 0070:6708 [12465.029752] saa7134: card=156 -> Hauppauge WinTV-HVR1120 DVB-T/Hybrid 0070:6707 0070:6709 0070:670a [12465.029763] saa7134: card=157 -> Avermedia AVerTV Studio 507UA 1461:a11b [12465.029772] saa7134: card=158 -> AVerMedia Cardbus TV/Radio (E501R) 1461:b7e9 [12465.029780] saa7134: card=159 -> Beholder BeholdTV 505 RDS 0000:505b [12465.029789] saa7134: card=160 -> Beholder BeholdTV 507 RDS 0000:5071 [12465.029797] saa7134: card=161 -> Beholder BeholdTV 507 RDS 0000:507b [12465.029806] saa7134: card=162 -> Beholder BeholdTV 607 FM 5ace:6071 [12465.029815] saa7134: card=163 -> Beholder BeholdTV 609 FM 5ace:6090 [12465.029823] saa7134: card=164 -> Beholder BeholdTV 609 FM 5ace:6091 [12465.029832] saa7134: card=165 -> Beholder BeholdTV 607 RDS 5ace:6072 [12465.029840] saa7134: card=166 -> Beholder BeholdTV 607 RDS 5ace:6073 [12465.029849] saa7134: card=167 -> Beholder BeholdTV 609 RDS 5ace:6092 [12465.029857] saa7134: card=168 -> Beholder BeholdTV 609 RDS 5ace:6093 [12465.029866] saa7134: card=169 -> Compro VideoMate S350/S300 185b:c900 [12465.029874] saa7134: card=170 -> AverMedia AverTV Studio 505 1461:a115 [12465.029883] saa7134: card=171 -> Beholder BeholdTV X7 5ace:7595 [12465.029892] saa7134: card=172 -> RoverMedia TV Link Pro FM 19d1:0138 [12465.029900] saa7134: card=173 -> Zolid Hybrid TV Tuner PCI 1131:2004 [12465.029909] saa7134: card=174 -> Asus Europa Hybrid OEM 1043:4847 [12465.029917] saa7134: card=175 -> Leadtek Winfast DTV1000S 107d:6655 [12465.029926] saa7134: card=176 -> Beholder BeholdTV 505 RDS 0000:5051 [12465.029934] saa7134: card=177 -> Hawell HW-404M7 [12465.029941] saa7134: card=178 -> Beholder BeholdTV H7 [12465.029948] saa7134: card=179 -> Beholder BeholdTV A7 [12465.029955] saa7134: card=180 -> Avermedia PCI M733A 1461:4155 1461:4255 [12465.029967] saa7130[0]: subsystem: 1131:0000, board: UNKNOWN/GENERIC [card=0,autodetected] [12465.030033] saa7130[0]: can't get MMIO memory @ 0x0 [12465.030051] saa7134: probe of 0000:01:07.0 failed with error -16 [12465.053892] saa7134 ALSA driver for DMA sound loaded [12465.053900] saa7134 ALSA: no saa7134 cards found tvtime-scanner Leyendo la configuración de /etc/tvtime/tvtime.xml Leyendo la configuración de /home/ricardo/.tvtime/tvtime.xml Escaneando usando la norma de TV NTSC. /home/ricardo/.tvtime/stationlist.xml: No existing NTSC station list "Custom". videoinput: Cannot open capture device /dev/video0: No existe el dispositivo o la dirección

    Read the article

  • Problems with a TV capture card

    - by user8270
    Hi, I have a TV card that I have not managed to install with Ubuntu 10.10 i386. I have tried various topics in various forums and I could not install it. I hope you can help me to install it thank you. lspci 01:07.0 Multimedia controller: Philips Semiconductors SAA7130 Video Broadcast Decoder (rev 01) dmesg [10299.516344] saa7134 ALSA driver for DMA sound unloaded [11385.340661] Linux video capture interface: v2.00 [11385.384278] saa7130/34: v4l2 driver version 0.2.16 loaded [11385.384390] saa7130[0]: found at 0000:01:07.0, rev: 1, irq: 17, latency: 32, mmio: 0x0 [11385.384403] saa7130[0]: subsystem: 1131:0000, board: LifeView/Typhoon FlyVIDEO2000 [card=3,insmod option] [11385.384412] saa7130[0]: can't get MMIO memory @ 0x0 [11385.384431] saa7134: probe of 0000:01:07.0 failed with error -16 [11385.401174] saa7134 ALSA driver for DMA sound loaded [11385.401182] saa7134 ALSA: no saa7134 cards found [11477.797019] tvtime[12534]: segfault at 6b0 ip 0804cf64 sp bf928a4c error 4 in tvtime[8048000+76000] [11626.141821] tvtime[12549]: segfault at 6b0 ip 0804cf64 sp bfec357c error 4 in tvtime[8048000+76000] [12218.120632] saa7134 ALSA driver for DMA sound unloaded [12464.993061] Linux video capture interface: v2.00 [12465.028285] saa7130/34: v4l2 driver version 0.2.16 loaded [12465.028392] saa7130[0]: found at 0000:01:07.0, rev: 1, irq: 17, latency: 32, mmio: 0x0 [12465.028404] saa7134: <rant> [12465.028406] saa7134: Congratulations! Your TV card vendor saved a few [12465.028408] saa7134: cents for a eeprom, thus your pci board has no [12465.028411] saa7134: subsystem ID and I can't identify it automatically [12465.028414] saa7134: </rant> [12465.028416] saa7134: I feel better now. Ok, here are the good news: [12465.028418] saa7134: You can use the card=<nr> insmod option to specify [12465.028421] saa7134: which board do you have. 
The list: [12465.028428] saa7134: card=0 -> UNKNOWN/GENERIC [12465.028435] saa7134: card=1 -> Proteus Pro [philips reference design] 1131:2001 1131:2001 [12465.028447] saa7134: card=2 -> LifeView FlyVIDEO3000 5168:0138 4e42:0138 [12465.028457] saa7134: card=3 -> LifeView/Typhoon FlyVIDEO2000 5168:0138 4e42:0138 [12465.028467] saa7134: card=4 -> EMPRESS 1131:6752 [12465.028475] saa7134: card=5 -> SKNet Monster TV 1131:4e85 [12465.028484] saa7134: card=6 -> Tevion MD 9717 [12465.028491] saa7134: card=7 -> KNC One TV-Station RDS / Typhoon TV Tune 1131:fe01 1894:fe01 [12465.028501] saa7134: card=8 -> Terratec Cinergy 400 TV 153b:1142 [12465.028510] saa7134: card=9 -> Medion 5044 [12465.028517] saa7134: card=10 -> Kworld/KuroutoShikou SAA7130-TVPCI [12465.028523] saa7134: card=11 -> Terratec Cinergy 600 TV 153b:1143 [12465.028532] saa7134: card=12 -> Medion 7134 16be:0003 16be:5000 [12465.028542] saa7134: card=13 -> Typhoon TV+Radio 90031 [12465.028548] saa7134: card=14 -> ELSA EX-VISION 300TV 1048:226b [12465.028557] saa7134: card=15 -> ELSA EX-VISION 500TV 1048:226a [12465.028565] saa7134: card=16 -> ASUS TV-FM 7134 1043:4842 1043:4830 1043:4840 [12465.028576] saa7134: card=17 -> AOPEN VA1000 POWER 1131:7133 [12465.028585] saa7134: card=18 -> BMK MPEX No Tuner [12465.028592] saa7134: card=19 -> Compro VideoMate TV 185b:c100 [12465.028600] saa7134: card=20 -> Matrox CronosPlus 102b:48d0 [12465.028608] saa7134: card=21 -> 10MOONS PCI TV CAPTURE CARD 1131:2001 [12465.028617] saa7134: card=22 -> AverMedia M156 / Medion 2819 1461:a70b [12465.028625] saa7134: card=23 -> BMK MPEX Tuner [12465.028632] saa7134: card=24 -> KNC One TV-Station DVR 1894:a006 [12465.028640] saa7134: card=25 -> ASUS TV-FM 7133 1043:4843 [12465.028648] saa7134: card=26 -> Pinnacle PCTV Stereo (saa7134) 11bd:002b [12465.028657] saa7134: card=27 -> Manli MuchTV M-TV002 [12465.028663] saa7134: card=28 -> Manli MuchTV M-TV001 [12465.028670] saa7134: card=29 -> Nagase Sangyo TransGear 3000TV 1461:050c [12465.028679] saa7134: card=30 -> Elitegroup ECS TVP3XP FM1216 Tuner Card( 1019:4cb4 [12465.028687] saa7134: card=31 -> Elitegroup ECS TVP3XP FM1236 Tuner Card 1019:4cb5 [12465.028695] saa7134: card=32 -> AVACS SmartTV [12465.028702] saa7134: card=33 -> AVerMedia DVD EZMaker 1461:10ff [12465.028710] saa7134: card=34 -> Noval Prime TV 7133 [12465.028717] saa7134: card=35 -> AverMedia AverTV Studio 305 1461:2115 [12465.028725] saa7134: card=36 -> UPMOST PURPLE TV 12ab:0800 [12465.028734] saa7134: card=37 -> Items MuchTV Plus / IT-005 [12465.028740] saa7134: card=38 -> Terratec Cinergy 200 TV 153b:1152 [12465.028749] saa7134: card=39 -> LifeView FlyTV Platinum Mini 5168:0212 4e42:0212 5169:1502 [12465.028760] saa7134: card=40 -> Compro VideoMate TV PVR/FM 185b:c100 [12465.028768] saa7134: card=41 -> Compro VideoMate TV Gold+ 185b:c100 [12465.028776] saa7134: card=42 -> Sabrent SBT-TVFM (saa7130) [12465.028783] saa7134: card=43 -> :Zolid Xpert TV7134 [12465.028790] saa7134: card=44 -> Empire PCI TV-Radio LE [12465.028796] saa7134: card=45 -> Avermedia AVerTV Studio 307 1461:9715 [12465.028805] saa7134: card=46 -> AVerMedia Cardbus TV/Radio (E500) 1461:d6ee [12465.028813] saa7134: card=47 -> Terratec Cinergy 400 mobile 153b:1162 [12465.028821] saa7134: card=48 -> Terratec Cinergy 600 TV MK3 153b:1158 [12465.028830] saa7134: card=49 -> Compro VideoMate Gold+ Pal 185b:c200 [12465.028838] saa7134: card=50 -> Pinnacle PCTV 300i DVB-T + PAL 11bd:002d [12465.028847] saa7134: card=51 -> ProVideo PV952 1540:9524 [12465.028855] saa7134: card=52 
-> AverMedia AverTV/305 1461:2108 [12465.028863] saa7134: card=53 -> ASUS TV-FM 7135 1043:4845 [12465.028871] saa7134: card=54 -> LifeView FlyTV Platinum FM / Gold 5168:0214 5168:5214 1489:0214 5168:0304 [12465.028884] saa7134: card=55 -> LifeView FlyDVB-T DUO / MSI TV@nywhere D 5168:0306 4e42:0306 [12465.028894] saa7134: card=56 -> Avermedia AVerTV 307 1461:a70a [12465.028903] saa7134: card=57 -> Avermedia AVerTV GO 007 FM 1461:f31f [12465.028911] saa7134: card=58 -> ADS Tech Instant TV (saa7135) 1421:0350 1421:0351 1421:0370 1421:1370 [12465.028924] saa7134: card=59 -> Kworld/Tevion V-Stream Xpert TV PVR7134 [12465.028931] saa7134: card=60 -> LifeView/Typhoon/Genius FlyDVB-T Duo Car 5168:0502 4e42:0502 1489:0502 [12465.028942] saa7134: card=61 -> Philips TOUGH DVB-T reference design 1131:2004 [12465.028951] saa7134: card=62 -> Compro VideoMate TV Gold+II [12465.028958] saa7134: card=63 -> Kworld Xpert TV PVR7134 [12465.028964] saa7134: card=64 -> FlyTV mini Asus Digimatrix 1043:0210 [12465.028973] saa7134: card=65 -> V-Stream Studio TV Terminator [12465.028980] saa7134: card=66 -> Yuan TUN-900 (saa7135) [12465.028986] saa7134: card=67 -> Beholder BeholdTV 409 FM 0000:4091 [12465.028995] saa7134: card=68 -> GoTView 7135 PCI 5456:7135 [12465.029003] saa7134: card=69 -> Philips EUROPA V3 reference design 1131:2004 [12465.029011] saa7134: card=70 -> Compro Videomate DVB-T300 185b:c900 [12465.029020] saa7134: card=71 -> Compro Videomate DVB-T200 185b:c901 [12465.029028] saa7134: card=72 -> RTD Embedded Technologies VFG7350 1435:7350 [12465.029036] saa7134: card=73 -> RTD Embedded Technologies VFG7330 1435:7330 [12465.029045] saa7134: card=74 -> LifeView FlyTV Platinum Mini2 14c0:1212 [12465.029053] saa7134: card=75 -> AVerMedia AVerTVHD MCE A180 1461:1044 [12465.029062] saa7134: card=76 -> SKNet MonsterTV Mobile 1131:4ee9 [12465.029070] saa7134: card=77 -> Pinnacle PCTV 40i/50i/110i (saa7133) 11bd:002e [12465.029078] saa7134: card=78 -> ASUSTeK P7131 Dual 1043:4862 [12465.029087] saa7134: card=79 -> Sedna/MuchTV PC TV Cardbus TV/Radio (ITO [12465.029094] saa7134: card=80 -> ASUS Digimatrix TV 1043:0210 [12465.029102] saa7134: card=81 -> Philips Tiger reference design 1131:2018 [12465.029110] saa7134: card=82 -> MSI TV@Anywhere plus 1462:6231 1462:8624 [12465.029120] saa7134: card=83 -> Terratec Cinergy 250 PCI TV 153b:1160 [12465.029128] saa7134: card=84 -> LifeView FlyDVB Trio 5168:0319 [12465.029137] saa7134: card=85 -> AverTV DVB-T 777 1461:2c05 1461:2c05 [12465.029147] saa7134: card=86 -> LifeView FlyDVB-T / Genius VideoWonder D 5168:0301 1489:0301 [12465.029156] saa7134: card=87 -> ADS Instant TV Duo Cardbus PTV331 0331:1421 [12465.029165] saa7134: card=88 -> Tevion/KWorld DVB-T 220RF 17de:7201 [12465.029173] saa7134: card=89 -> ELSA EX-VISION 700TV 1048:226c [12465.029182] saa7134: card=90 -> Kworld ATSC110/115 17de:7350 17de:7352 [12465.029191] saa7134: card=91 -> AVerMedia A169 B 1461:7360 [12465.029200] saa7134: card=92 -> AVerMedia A169 B1 1461:6360 [12465.029208] saa7134: card=93 -> Medion 7134 Bridge #2 16be:0005 [12465.029216] saa7134: card=94 -> LifeView FlyDVB-T Hybrid Cardbus/MSI TV 5168:3306 5168:3502 5168:3307 4e42:3502 [12465.029229] saa7134: card=95 -> LifeView FlyVIDEO3000 (NTSC) 5169:0138 [12465.029238] saa7134: card=96 -> Medion Md8800 Quadro 16be:0007 16be:0008 16be:000d [12465.029249] saa7134: card=97 -> LifeView FlyDVB-S /Acorp TV134DS 5168:0300 4e42:0300 [12465.029259] saa7134: card=98 -> Proteus Pro 2309 0919:2003 [12465.029267] saa7134: card=99 -> AVerMedia TV 
Hybrid A16AR 1461:2c00 [12465.029276] saa7134: card=100 -> Asus Europa2 OEM 1043:4860 [12465.029284] saa7134: card=101 -> Pinnacle PCTV 310i 11bd:002f [12465.029293] saa7134: card=102 -> Avermedia AVerTV Studio 507 1461:9715 [12465.029301] saa7134: card=103 -> Compro Videomate DVB-T200A [12465.029308] saa7134: card=104 -> Hauppauge WinTV-HVR1110 DVB-T/Hybrid 0070:6700 0070:6701 0070:6702 0070:6703 0070:6704 0070:6705 [12465.029324] saa7134: card=105 -> Terratec Cinergy HT PCMCIA 153b:1172 [12465.029332] saa7134: card=106 -> Encore ENLTV 1131:2342 1131:2341 3016:2344 [12465.029344] saa7134: card=107 -> Encore ENLTV-FM 1131:230f [12465.029352] saa7134: card=108 -> Terratec Cinergy HT PCI 153b:1175 [12465.029360] saa7134: card=109 -> Philips Tiger - S Reference design [12465.029367] saa7134: card=110 -> Avermedia M102 1461:f31e [12465.029375] saa7134: card=111 -> ASUS P7131 4871 1043:4871 [12465.029384] saa7134: card=112 -> ASUSTeK P7131 Hybrid 1043:4876 [12465.029392] saa7134: card=113 -> Elitegroup ECS TVP3XP FM1246 Tuner Card 1019:4cb6 [12465.029401] saa7134: card=114 -> KWorld DVB-T 210 17de:7250 [12465.029409] saa7134: card=115 -> Sabrent PCMCIA TV-PCB05 0919:2003 [12465.029418] saa7134: card=116 -> 10MOONS TM300 TV Card 1131:2304 [12465.029426] saa7134: card=117 -> Avermedia Super 007 1461:f01d [12465.029435] saa7134: card=118 -> Beholder BeholdTV 401 0000:4016 [12465.029443] saa7134: card=119 -> Beholder BeholdTV 403 0000:4036 [12465.029451] saa7134: card=120 -> Beholder BeholdTV 403 FM 0000:4037 [12465.029459] saa7134: card=121 -> Beholder BeholdTV 405 0000:4050 [12465.029468] saa7134: card=122 -> Beholder BeholdTV 405 FM 0000:4051 [12465.029476] saa7134: card=123 -> Beholder BeholdTV 407 0000:4070 [12465.029484] saa7134: card=124 -> Beholder BeholdTV 407 FM 0000:4071 [12465.029493] saa7134: card=125 -> Beholder BeholdTV 409 0000:4090 [12465.029501] saa7134: card=126 -> Beholder BeholdTV 505 FM 5ace:5050 [12465.029510] saa7134: card=127 -> Beholder BeholdTV 507 FM / BeholdTV 509 5ace:5070 5ace:5090 [12465.029520] saa7134: card=128 -> Beholder BeholdTV Columbus TVFM 0000:5201 [12465.029528] saa7134: card=129 -> Beholder BeholdTV 607 FM 5ace:6070 [12465.029537] saa7134: card=130 -> Beholder BeholdTV M6 5ace:6190 [12465.029545] saa7134: card=131 -> Twinhan Hybrid DTV-DVB 3056 PCI 1822:0022 [12465.029554] saa7134: card=132 -> Genius TVGO AM11MCE [12465.029560] saa7134: card=133 -> NXP Snake DVB-S reference design [12465.029567] saa7134: card=134 -> Medion/Creatix CTX953 Hybrid 16be:0010 [12465.029576] saa7134: card=135 -> MSI TV@nywhere A/D v1.1 1462:8625 [12465.029584] saa7134: card=136 -> AVerMedia Cardbus TV/Radio (E506R) 1461:f436 [12465.029592] saa7134: card=137 -> AVerMedia Hybrid TV/Radio (A16D) 1461:f936 [12465.029601] saa7134: card=138 -> Avermedia M115 1461:a836 [12465.029609] saa7134: card=139 -> Compro VideoMate T750 185b:c900 [12465.029617] saa7134: card=140 -> Avermedia DVB-S Pro A700 1461:a7a1 [12465.029626] saa7134: card=141 -> Avermedia DVB-S Hybrid+FM A700 1461:a7a2 [12465.029634] saa7134: card=142 -> Beholder BeholdTV H6 5ace:6290 [12465.029642] saa7134: card=143 -> Beholder BeholdTV M63 5ace:6191 [12465.029651] saa7134: card=144 -> Beholder BeholdTV M6 Extra 5ace:6193 [12465.029659] saa7134: card=145 -> AVerMedia MiniPCI DVB-T Hybrid M103 1461:f636 1461:f736 [12465.029669] saa7134: card=146 -> ASUSTeK P7131 Analog [12465.029676] saa7134: card=147 -> Asus Tiger 3in1 1043:4878 [12465.029684] saa7134: card=148 -> Encore ENLTV-FM v5.3 1a7f:2008 [12465.029693] saa7134: 
card=149 -> Avermedia PCI pure analog (M135A) 1461:f11d [12465.029701] saa7134: card=150 -> Zogis Real Angel 220 [12465.029708] saa7134: card=151 -> ADS Tech Instant HDTV 1421:0380 [12465.029716] saa7134: card=152 -> Asus Tiger Rev:1.00 1043:4857 [12465.029725] saa7134: card=153 -> Kworld Plus TV Analog Lite PCI 17de:7128 [12465.029733] saa7134: card=154 -> Avermedia AVerTV GO 007 FM Plus 1461:f31d [12465.029742] saa7134: card=155 -> Hauppauge WinTV-HVR1150 ATSC/QAM-Hybrid 0070:6706 0070:6708 [12465.029752] saa7134: card=156 -> Hauppauge WinTV-HVR1120 DVB-T/Hybrid 0070:6707 0070:6709 0070:670a [12465.029763] saa7134: card=157 -> Avermedia AVerTV Studio 507UA 1461:a11b [12465.029772] saa7134: card=158 -> AVerMedia Cardbus TV/Radio (E501R) 1461:b7e9 [12465.029780] saa7134: card=159 -> Beholder BeholdTV 505 RDS 0000:505b [12465.029789] saa7134: card=160 -> Beholder BeholdTV 507 RDS 0000:5071 [12465.029797] saa7134: card=161 -> Beholder BeholdTV 507 RDS 0000:507b [12465.029806] saa7134: card=162 -> Beholder BeholdTV 607 FM 5ace:6071 [12465.029815] saa7134: card=163 -> Beholder BeholdTV 609 FM 5ace:6090 [12465.029823] saa7134: card=164 -> Beholder BeholdTV 609 FM 5ace:6091 [12465.029832] saa7134: card=165 -> Beholder BeholdTV 607 RDS 5ace:6072 [12465.029840] saa7134: card=166 -> Beholder BeholdTV 607 RDS 5ace:6073 [12465.029849] saa7134: card=167 -> Beholder BeholdTV 609 RDS 5ace:6092 [12465.029857] saa7134: card=168 -> Beholder BeholdTV 609 RDS 5ace:6093 [12465.029866] saa7134: card=169 -> Compro VideoMate S350/S300 185b:c900 [12465.029874] saa7134: card=170 -> AverMedia AverTV Studio 505 1461:a115 [12465.029883] saa7134: card=171 -> Beholder BeholdTV X7 5ace:7595 [12465.029892] saa7134: card=172 -> RoverMedia TV Link Pro FM 19d1:0138 [12465.029900] saa7134: card=173 -> Zolid Hybrid TV Tuner PCI 1131:2004 [12465.029909] saa7134: card=174 -> Asus Europa Hybrid OEM 1043:4847 [12465.029917] saa7134: card=175 -> Leadtek Winfast DTV1000S 107d:6655 [12465.029926] saa7134: card=176 -> Beholder BeholdTV 505 RDS 0000:5051 [12465.029934] saa7134: card=177 -> Hawell HW-404M7 [12465.029941] saa7134: card=178 -> Beholder BeholdTV H7 [12465.029948] saa7134: card=179 -> Beholder BeholdTV A7 [12465.029955] saa7134: card=180 -> Avermedia PCI M733A 1461:4155 1461:4255 [12465.029967] saa7130[0]: subsystem: 1131:0000, board: UNKNOWN/GENERIC [card=0,autodetected] [12465.030033] saa7130[0]: can't get MMIO memory @ 0x0 [12465.030051] saa7134: probe of 0000:01:07.0 failed with error -16 [12465.053892] saa7134 ALSA driver for DMA sound loaded [12465.053900] saa7134 ALSA: no saa7134 cards found tvtime-scanner Leyendo la configuración de /etc/tvtime/tvtime.xml Leyendo la configuración de /home/ricardo/.tvtime/tvtime.xml Escaneando usando la norma de TV NTSC. /home/ricardo/.tvtime/stationlist.xml: No existing NTSC station list "Custom". videoinput: Cannot open capture device /dev/video0: No existe el dispositivo o la dirección ls /dev/ No video directory here

    Read the article

  • Grub 'Read Error' - Only Loads with LiveCD

    - by Ryan Sharp
    Problem After installing Ubuntu to complete my Windows 7/Ubuntu 12.04 dual-boot setup, Grub just wouldn't load at all unless I boot from the LiveCD. Afterwards, everything works completely normal. However, this workaround isn't a solution and I'd like to be able to boot without the aid of a disc. Fdisk -l Using the fdisk -l command, I am given the following: Disk /dev/sda: 64.0 GB, 64023257088 bytes 255 heads, 63 sectors/track, 7783 cylinders, total 125045424 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x324971d1 Device Boot Start End Blocks Id System /dev/sda1 2048 206847 102400 7 HPFS/NTFS/exFAT /dev/sda2 208896 48957439 24374272 7 HPFS/NTFS/exFAT /dev/sda3 * 48959486 124067839 37554177 5 Extended /dev/sda5 48959488 124067839 37554176 83 Linux Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xc0ee6a69 Device Boot Start End Blocks Id System /dev/sdb1 1024208894 1953523711 464657409 5 Extended /dev/sdb3 * 2048 1024206847 512102400 7 HPFS/NTFS/exFAT /dev/sdb5 1024208896 1937897471 456844288 83 Linux /dev/sdb6 1937899520 1953523711 7812096 82 Linux swap / Solaris Partition table entries are not in disk order Disk /dev/sdc: 320.1 GB, 320072933376 bytes 255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x292eee23 Device Boot Start End Blocks Id System /dev/sdc1 2048 625141759 312569856 7 HPFS/NTFS/exFAT Bootinfoscript I've used the BootInfoScript, and received the following output: Boot Info Script 0.61 [1 April 2012] ============================= Boot Info Summary: =============================== => Grub2 (v1.99) is installed in the MBR of /dev/sda and looks at sector 1 of the same hard drive for core.img. core.img is at this location and looks for (,msdos5)/boot/grub on this drive. => Grub2 (v1.99) is installed in the MBR of /dev/sdb and looks at sector 1 of the same hard drive for core.img. core.img is at this location and looks for (,msdos5)/boot/grub on this drive. => Windows is installed in the MBR of /dev/sdc. sda1: __________________________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files: /bootmgr /Boot/BCD sda2: __________________________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: No errors found in the Boot Parameter Block. 
Operating System: Windows 7 Boot files: /bootmgr /Boot/BCD /Windows/System32/winload.exe sda3: __________________________________________________________________________ File system: Extended Partition Boot sector type: Unknown Boot sector info: sda5: __________________________________________________________________________ File system: ext4 Boot sector type: - Boot sector info: Operating System: Ubuntu 12.04.1 LTS Boot files: /boot/grub/grub.cfg /etc/fstab /boot/grub/core.img sdb1: __________________________________________________________________________ File system: Extended Partition Boot sector type: - Boot sector info: sdb5: __________________________________________________________________________ File system: ext4 Boot sector type: - Boot sector info: Operating System: Boot files: sdb6: __________________________________________________________________________ File system: swap Boot sector type: - Boot sector info: sdb3: __________________________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: According to the info in the boot sector, sdb3 starts at sector 200744960. But according to the info from fdisk, sdb3 starts at sector 2048. According to the info in the boot sector, sdb3 has 823461887 sectors, but according to the info from fdisk, it has 1024204799 sectors. Operating System: Boot files: sdc1: __________________________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files: ============================ Drive/Partition Info: ============================= Drive: sda _____________________________________________________________________ Disk /dev/sda: 64.0 GB, 64023257088 bytes 255 heads, 63 sectors/track, 7783 cylinders, total 125045424 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start Sector End Sector # of Sectors Id System /dev/sda1 2,048 206,847 204,800 7 NTFS / exFAT / HPFS /dev/sda2 208,896 48,957,439 48,748,544 7 NTFS / exFAT / HPFS /dev/sda3 * 48,959,486 124,067,839 75,108,354 5 Extended /dev/sda5 48,959,488 124,067,839 75,108,352 83 Linux Drive: sdb _____________________________________________________________________ Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start Sector End Sector # of Sectors Id System /dev/sdb1 1,024,208,894 1,953,523,711 929,314,818 5 Extended /dev/sdb5 1,024,208,896 1,937,897,471 913,688,576 83 Linux /dev/sdb6 1,937,899,520 1,953,523,711 15,624,192 82 Linux swap / Solaris /dev/sdb3 * 2,048 1,024,206,847 1,024,204,800 7 NTFS / exFAT / HPFS Drive: sdc _____________________________________________________________________ Disk /dev/sdc: 320.1 GB, 320072933376 bytes 255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start Sector End Sector # of Sectors Id System /dev/sdc1 2,048 625,141,759 625,139,712 7 NTFS / exFAT / HPFS "blkid" output: ________________________________________________________________ Device UUID TYPE LABEL /dev/sda1 A48056DF8056B80E ntfs System Reserved /dev/sda2 A8C6D6A4C6D671D4 ntfs Windows /dev/sda5 fd71c537-3715-44e1-b1fe-07537e22b3dd 
ext4 /dev/sdb3 6373D03D0A3747A8 ntfs Steam /dev/sdb5 6f5a6eb3-a932-45aa-893e-045b57708270 ext4 /dev/sdb6 469848c8-867a-41b7-b0e1-b813a43c64af swap /dev/sdc1 725D7B961CF34B1B ntfs backup ================================ Mount points: ================================= Device Mount_Point Type Options /dev/sda5 / ext4 (rw,noatime,nodiratime,discard,errors=remount-ro) /dev/sdb5 /home ext4 (rw) =========================== sda5/boot/grub/grub.cfg: =========================== -------------------------------------------------------------------------------- # # DO NOT EDIT THIS FILE # # It is automatically generated by grub-mkconfig using templates # from /etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### if [ -s $prefix/grubenv ]; then set have_grubenv=true load_env fi set default="0" if [ "${prev_saved_entry}" ]; then set saved_entry="${prev_saved_entry}" save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z "${boot_once}" ]; then saved_entry="${chosen}" save_env saved_entry fi } function recordfail { set recordfail=1 if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi } function load_video { insmod vbe insmod vga insmod video_bochs insmod video_cirrus } insmod part_msdos insmod ext2 set root='(hd0,msdos5)' search --no-floppy --fs-uuid --set=root fd71c537-3715-44e1-b1fe-07537e22b3dd if loadfont /usr/share/grub/unicode.pf2 ; then set gfxmode=auto load_video insmod gfxterm insmod part_msdos insmod ext2 set root='(hd0,msdos5)' search --no-floppy --fs-uuid --set=root fd71c537-3715-44e1-b1fe-07537e22b3dd set locale_dir=($root)/boot/grub/locale set lang=en_GB insmod gettext fi terminal_output gfxterm if [ "${recordfail}" = 1 ]; then set timeout=-1 else set timeout=10 fi ### END /etc/grub.d/00_header ### ### BEGIN /etc/grub.d/05_debian_theme ### set menu_color_normal=white/black set menu_color_highlight=black/light-gray if background_color 44,0,30; then clear fi ### END /etc/grub.d/05_debian_theme ### ### BEGIN /etc/grub.d/10_linux ### function gfxmode { set gfxpayload="${1}" if [ "${1}" = "keep" ]; then set vt_handoff=vt.handoff=7 else set vt_handoff= fi } if [ "${recordfail}" != 1 ]; then if [ -e ${prefix}/gfxblacklist.txt ]; then if hwmatch ${prefix}/gfxblacklist.txt 3; then if [ ${match} = 0 ]; then set linux_gfx_mode=keep else set linux_gfx_mode=text fi else set linux_gfx_mode=text fi else set linux_gfx_mode=keep fi else set linux_gfx_mode=text fi export linux_gfx_mode if [ "${linux_gfx_mode}" != "text" ]; then load_video; fi menuentry 'Ubuntu, with Linux 3.2.0-29-generic' --class ubuntu --class gnu-linux --class gnu --class os { recordfail gfxmode $linux_gfx_mode insmod gzio insmod part_msdos insmod ext2 set root='(hd0,msdos5)' search --no-floppy --fs-uuid --set=root fd71c537-3715-44e1-b1fe-07537e22b3dd linux /boot/vmlinuz-3.2.0-29-generic root=UUID=fd71c537-3715-44e1-b1fe-07537e22b3dd ro quiet splash $vt_handoff initrd /boot/initrd.img-3.2.0-29-generic } menuentry 'Ubuntu, with Linux 3.2.0-29-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod gzio insmod part_msdos insmod ext2 set root='(hd0,msdos5)' search --no-floppy --fs-uuid --set=root fd71c537-3715-44e1-b1fe-07537e22b3dd echo 'Loading Linux 3.2.0-29-generic ...' linux /boot/vmlinuz-3.2.0-29-generic root=UUID=fd71c537-3715-44e1-b1fe-07537e22b3dd ro recovery nomodeset echo 'Loading initial ramdisk ...' 
initrd /boot/initrd.img-3.2.0-29-generic } ### END /etc/grub.d/10_linux ### ### BEGIN /etc/grub.d/20_linux_xen ### ### END /etc/grub.d/20_linux_xen ### ### BEGIN /etc/grub.d/20_memtest86+ ### menuentry "Memory test (memtest86+)" { insmod part_msdos insmod ext2 set root='(hd0,msdos5)' search --no-floppy --fs-uuid --set=root fd71c537-3715-44e1-b1fe-07537e22b3dd linux16 /boot/memtest86+.bin } menuentry "Memory test (memtest86+, serial console 115200)" { insmod part_msdos insmod ext2 set root='(hd0,msdos5)' search --no-floppy --fs-uuid --set=root fd71c537-3715-44e1-b1fe-07537e22b3dd linux16 /boot/memtest86+.bin console=ttyS0,115200n8 } ### END /etc/grub.d/20_memtest86+ ### ### BEGIN /etc/grub.d/30_os-prober ### menuentry "Windows 7 (loader) (on /dev/sda1)" --class windows --class os { insmod part_msdos insmod ntfs set root='(hd0,msdos1)' search --no-floppy --fs-uuid --set=root A48056DF8056B80E chainloader +1 } menuentry "Windows 7 (loader) (on /dev/sda2)" --class windows --class os { insmod part_msdos insmod ntfs set root='(hd0,msdos2)' search --no-floppy --fs-uuid --set=root A8C6D6A4C6D671D4 chainloader +1 } ### END /etc/grub.d/30_os-prober ### ### BEGIN /etc/grub.d/40_custom ### # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. ### END /etc/grub.d/40_custom ### ### BEGIN /etc/grub.d/41_custom ### if [ -f $prefix/custom.cfg ]; then source $prefix/custom.cfg; fi ### END /etc/grub.d/41_custom ### -------------------------------------------------------------------------------- =============================== sda5/etc/fstab: ================================ -------------------------------------------------------------------------------- # /etc/fstab: static file system information. # # Use 'blkid' to print the universally unique identifier for a # device; this may be used with UUID= as a more robust way to name devices # that works even if disks are added and removed. See fstab(5). 
# # <file system> <mount point> <type> <options> <dump> <pass> proc /proc proc nodev,noexec,nosuid 0 0 # / was on /dev/sda5 during installation UUID=fd71c537-3715-44e1-b1fe-07537e22b3dd / ext4 noatime,nodiratime,discard,errors=remount-ro 0 1 # /home was on /dev/sdb5 during installation UUID=6f5a6eb3-a932-45aa-893e-045b57708270 /home ext4 defaults 0 2 # swap was on /dev/sdb6 during installation UUID=469848c8-867a-41b7-b0e1-b813a43c64af none swap sw 0 0 tmpfs /tmp tmpfs defaults,noatime,mode=1777 0 0 -------------------------------------------------------------------------------- =================== sda5: Location of files loaded by Grub: ==================== GiB - GB File Fragment(s) = boot/grub/core.img 1 = boot/grub/grub.cfg 1 = boot/initrd.img-3.2.0-29-generic 2 = boot/vmlinuz-3.2.0-29-generic 1 = initrd.img 2 = vmlinuz 1 ======================== Unknown MBRs/Boot Sectors/etc: ======================== Unknown BootLoader on sda3 00000000 63 6f 70 69 61 20 65 20 63 6f 6c 61 41 63 65 64 |copia e colaAced| 00000010 65 72 20 61 20 74 6f 64 6f 20 6f 20 74 65 78 74 |er a todo o text| 00000020 6f 20 66 61 6c 61 64 6f 20 75 74 69 6c 69 7a 61 |o falado utiliza| 00000030 6e 64 6f 20 61 20 63 6f 6e 76 65 72 73 c3 a3 6f |ndo a convers..o| 00000040 20 64 65 20 74 65 78 74 6f 20 70 61 72 61 20 76 | de texto para v| 00000050 6f 7a 4d 61 6e 69 70 75 6c 61 72 20 61 73 20 64 |ozManipular as d| 00000060 65 66 69 6e 69 c3 a7 c3 b5 65 73 20 71 75 65 20 |efini....es que | 00000070 63 6f 6e 74 72 6f 6c 61 6d 20 6f 20 61 63 65 73 |controlam o aces| 00000080 73 6f 20 64 65 20 57 65 62 73 69 74 65 73 20 61 |so de Websites a| 00000090 20 63 6f 6f 6b 69 65 73 2c 20 4a 61 76 61 53 63 | cookies, JavaSc| 000000a0 72 69 70 74 20 65 20 70 6c 75 67 2d 69 6e 73 4d |ript e plug-insM| 000000b0 61 6e 69 70 75 6c 61 72 20 61 73 20 64 65 66 69 |anipular as defi| 000000c0 6e 69 c3 a7 c3 b5 65 73 20 72 65 6c 61 63 69 6f |ni....es relacio| 000000d0 6e 61 64 61 73 20 63 6f 6d 20 70 72 69 76 61 63 |nadas com privac| 000000e0 69 64 61 64 65 41 63 65 64 65 72 20 61 6f 73 20 |idadeAceder aos | 000000f0 73 65 75 73 20 70 65 72 69 66 c3 a9 72 69 63 6f |seus perif..rico| 00000100 73 20 55 53 42 55 74 69 6c 69 7a 61 72 20 6f 20 |s USBUtilizar o | 00000110 73 65 75 20 6d 69 63 72 6f 66 6f 6e 65 55 74 69 |seu microfoneUti| 00000120 6c 69 7a 61 72 20 61 20 73 75 61 20 63 c3 a2 6d |lizar a sua c..m| 00000130 61 72 61 55 74 69 6c 69 7a 61 72 20 6f 20 73 65 |araUtilizar o se| 00000140 75 20 6d 69 63 72 6f 66 6f 6e 65 20 65 20 61 20 |u microfone e a | 00000150 63 c3 a2 6d 61 72 61 4e c3 a3 6f 20 66 6f 69 20 |c..maraN..o foi | 00000160 70 6f 73 73 c3 ad 76 65 6c 20 65 6e 63 6f 6e 74 |poss..vel encont| 00000170 72 61 72 20 6f 20 63 61 6d 69 6e 68 6f 20 61 62 |rar o caminho ab| 00000180 73 6f 6c 75 74 6f 20 70 61 72 61 20 6f 20 64 69 |soluto para o di| 00000190 72 65 63 74 c3 b3 72 69 6f 20 61 20 65 6d 70 61 |rect..rio a empa| 000001a0 63 6f 74 61 72 2e 4f 20 64 69 72 65 63 74 c3 b3 |cotar.O direct..| 000001b0 72 69 6f 20 64 65 20 65 6e 74 72 61 64 61 00 fe |rio de entrada..| 000001c0 ff ff 83 fe ff ff 02 00 00 00 00 10 7a 04 00 00 |............z...| 000001d0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| * 000001f0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 55 aa |..............U.| 00000200 =============================== StdErr Messages: =============================== xz: (stdin): Compressed data is corrupt xz: (stdin): Compressed data is corrupt awk: cmd. line:36: Math support is not compiled in awk: cmd. 
line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in
Begging / Appreciation ;) If anything else is required to solve my problem, please ask. My only hopes are that I can solve this, that doing so won't require re-installing Grub (given how complicated those procedures are), and that I won't need to reinstall the OSes, as I have already done that about six times since Friday due to several other issues I've encountered. Thank you, and good day.
System: Ubuntu 12.04 64-bit / Windows 7 SP1 64-bit, 64GB SSD as boot/OS drive, 1TB HDD as /home, swap and Steam drive.

    Read the article

  • CodePlex Daily Summary for Friday, June 03, 2011

    CodePlex Daily Summary for Friday, June 03, 2011Popular ReleasesMedia Companion: MC 3.404b Weekly: Extract the entire archive to a folder which has user access rights, eg desktop, documents etc. Refer to the documentation on this site for the Installation & Setup Guide Important! *** Due to an issue where the date added & the full genre information was not being read into the Movie cache, it is highly recommended that you perform a Rebuild Movies when first running this latest version. This will read in the information from the nfo's & repopulate the cache used by MC during operation. Fi...Terraria Map Generator: TerrariaMapTool 1.0.0.4 Beta: 1) Fixed the generated map.html file so that the file:/// is included in the base path. 2) Added the ability to use parallelization during generation. This will cause the program to use as many threads as there are physical cores. 3) Fixed some background overdraw.DotRas: DotRas v1.2 (Version 1.2.4168): This release includes compiled (and signed) versions of the binaries, PDBs, CHM help documentation, along with both C# and VB.NET examples. Please don't forget to rate the release! If you find a bug, please open a work item and give as much description as possible. Stack traces, which operating system(s) you're targeting, and build type is also very helpful for bug hunting. If you find something you believe to be a bug but are not sure, create a new discussion on the discussions board. Thank...Ajax Minifier: Microsoft Ajax Minifier 4.21: Fixed issues #15949 and #15868. Tweaked output in multi-line (pretty-print) mode so that var-statements in the initializer of for-statements break multiple variables onto multiple lines. Added TreeModification flag (and can now therefore turn off) the behavior of removing quotes around object literal property names if the names can be valid identifiers. Convert true literals to !0 and false literals to !1. Added a Visitor class approach to iterating over a generated abstract syntax tree to he...Caliburn Micro: WPF, Silverlight and WP7 made easy.: Caliburn.Micro v1.1 RTW: Download ContentsDebug and Release Assemblies Samples Changes.txt License.txt Release Highlights For WP7A new Tombstoning API based on ideas from Fluent NHibernate A new Launcher/Chooser API Strongly typed Navigation SimpleContainer included The full phone lifecycle is made easy to work with ie. we figure out whether your app is actually Resurrecting or just Continuing for you For WPFSupport for the Client Profile Better support for WinForms integration All PlatformsA power...VidCoder: 0.9.1: Added color coding to the Log window. Errors are highlighted in red, HandBrake logs are in black and VidCoder logs are in dark blue. Moved enqueue button to the right with the other control buttons. Added logic to report failures when errors are logged during the encode or when the encode finishes prematurely. Added Copy button to Log window. Adjusted audio track selection box to always show the full track name. Changed encode job progress bar to also be colored yellow when the enco...AutoLoL: AutoLoL v2.0.1: - Fixed a small bug in Auto Login - Fixed the updaterEPPlus-Create advanced Excel 2007 spreadsheets on the server: EPPlus 2.9.0.1: EPPlus-Create advanced Excel 2007 spreadsheets on the server This version has been updated to .Net Framework 3.5 New Features Data Validation. PivotTables (Basic functionalliy...) Support for worksheet data sources. Column, Row, Page and Data fields. Date and Numeric grouping Build in styles. 
...and more And some minor new features... Ranges Text-Property|Get the formated value AutofitColumns-method to set the column width from the content of the range LoadFromCollection-metho...jQuery ASP.Net MVC Controls: Version 1.4.0.0: Version 1.4.0.0 contains the following additions: Upgraded to MVC 3.0 Upgraded to jQuery 1.6.1 (Though the project supports all jQuery version from 1.4.x onwards) Upgraded to jqGrid 3.8 Better Razor View-Engine support Better Pager support, includes support for custom pagers Added jqGrid toolbar buttons support Search module refactored, with full suport for multiple filters and ordering And Code cleanup, bug-fixes and better controller configuration support.Nearforums - ASP.NET MVC forum engine: Nearforums v6.0: Version 6.0 of Nearforums, the ASP.NET MVC Forum Engine, containing new features: Authentication using Membership Provider for SQL Server and MySql Spam prevention: Flood Control Moderation: Flag messages Content management: Pages: Create pages (about us/contact/texts) through web administration Allow nearforums to run as an IIS subapp Migrated Facebook Connect to OAuth 2.0 Visit the project Roadmap for more details.NetOffice - The easiest way to use Office in .NET: NetOffice Release 0.8b: Changes: - fix critical issue 15922(AccessViolationException) once and for all ...update is strongly recommended Known Issues: - some addin ribbon examples has a COM Register function with missing codebase entry(win32 registry) ...the problem is only affected to c# examples. fixed in latest source code. NetOffice\ReleaseTags\NetOffice Release 0.8.rar Includes: - Runtime Binaries and Source Code for .NET Framework:......v2.0, v3.0, v3.5, v4.0 - Tutorials in C# and VB.Net:...................Facebook Graph Toolkit: Facebook Graph Toolkit 1.5.4186: Updates the API in response to Facebook's recent change of policy: All Graph Api accessing feeds or posts must provide a AccessToken.Serviio for Windows Home Server: Beta Release 0.5.2.0: Ready for widespread beta. Synchronized build number to Serviio version to avoid confusion.AcDown????? - Anime&Comic Downloader: AcDown????? v3.0 Beta4: ??AcDown?????????????,??????????????,????、????。?????Acfun????? ????32??64? Windows XP/Vista/7 ????????????? ??:????????Windows XP???,?????????.NET Framework 2.0???(x86)?.NET Framework 2.0???(x64),?????"?????????"??? ??v3.0 Beta4 2011-5-31?? ???Bilibili.us????? ???? ?? ???"????" ???Bilibili.us??? ??????? ?? ??????? ?? ???????? ?? ?? ???Bilibili.us?????(??????????????????) ??????(6.cn)?????(????) ?? ?????Acfun?????????? ?????????????? ???QQ???????? ????????????Discussion...CodeCopy Auto Code Converter: Code Copy v0.1: Full add-in, setup project source code and setup fileWordpress Theme 'Windows Metro': v1.00.0017: v1.00.0017 Dies ist eine Vorab-Version aus der Entwickler-Phase. This is a preview version from the development phase.TerrariViewer: TerrariViewer v2.4.1: Added Piggy Bank editor and fixed some minor bugs.Kooboo CMS: Kooboo CMS 3.02: Updated the Kooboo_CMS.zip at 2011-06-02 11:44 GMT 1.Fixed: Adding data rule issue on page. 2.Fixed: Caching issue for higher performance. Updated the Kooboo_CMS.zip at 2011-06-01 10:00 GMT 1. Fixed the published package throwed a compile error under WebSite mode. 2. Fixed the ContentHelper.NewMediaFolderObject return TextFolder issue. 3. Shorten the name of ContentHelper API. NewMediaFolderObject=>MediaFolder, NewTextFolderObject=> TextFolder, NewSchemaObject=>Schema. 
Also update the C...mojoPortal: 2.3.6.6: see release notes on mojoportal.com http://www.mojoportal.com/mojoportal-2366-released Note that we have separate deployment packages for .NET 3.5 and .NET 4.0 The deployment package downloads on this page are pre-compiled and ready for production deployment, they contain no C# source code. To download the source code see the Source Code Tab I recommend getting the latest source code using TortoiseHG, you can get the source code corresponding to this release here.Terraria World Creator: Terraria World Creator: Version 1.01 Fixed a bug that would cause the application to crash. Re-named the Application.New Projects"Background Transfer Service Sample" From WP7 Sample: This project is an extension to existing Windows Phone7 sample available from MS sight ("Background Transfer Service Sample"). This project extends functionality to display the background downloaded item to verify download conten. Agile project and development management dashboard - Meter Console: Agile project and development management dashboard application. The focus of this application to bridge the gap between the management, clients and developer and chive best possible transparency. Ajax Multi-ListBox: Ajax Multi-ListBox is .NET C# written user control with a dual listbox that uses ajax to move items from left to right and vice versa.aLOG : Asynchronous logging (Logging Library): aLog is an asynchronous logging component built on Microsoft Enterprise Exception handling and Logging Component block. It allows application to asynchronously log the data in multiple data sources such as file system, database , XML. Application Base Component Library: Application Base Component LibraryConcise Service Bus: A compact but comprehensive WCF service bus framework.Contour provider for Umbraco Courier: Transfer Umbraco Contour forms between Umbraco instances with Courier.CoveyNet MidWest: CoveyNet MidWestCSharp Samples: This contain C#- wpf, wcf, linqtosql, some basic oops stuff etc.DotNet1: DotNet1FireWeb: fefaesfrafasGeoCoder: Basic server side GeoCoder which accesses Google, YAHOO etc. This is developed in C# and is a mix of language styles, just depended what I was doing that week. All the libraries include tests which can be used to check if the libraries still work, it's worth doing this as the consumed webservices may change from time to time. GrandReward: GrandReward is designed to help young and old people to learn by answer questions. It is easy to use and the community can design own topic files for own questions to learn. The project is develop in C# and contains some WPF views. The project homepage is: http://grand-reward.com/helferlein: The helferlein library contains some web controls and tools I used in other projects.helferlein_BabelFish: helferlein_BabelFish is a DNN module to support multilingual features for other helferlein modules. It is developed in C#.helferlein_Form: helferlein_Form is an easy-to-use module for DotNetNuke that allows creating forms. The submissions can be sent by e-mail and/or stored in a database. It's developed in C#.ILSpy/Silverlight — Proof Of Concept: ILSpy refactoring (dire hacking) to make it work in Silverlight.ISGProject: ISGProjectMarcelo Rocha: Trabalhos acadêmicos, Apresentações, Projetos e outras Invencionices de Marcelo Rocha.MosquitoOnline: MosquitoOnline ProjectNET.Studio: Ein kleines Visual Studio für die SchuleOmegle Desktop Client: A desktop client for the anonymous chat site, Omegle. 
Designed to combat some of the shortcomings of the webpage, the desktop client will have no adverts, and be able to flash the taskbar window, and play a sound on message received. Online PHP IDE: Free Open Source online editor for working with files on FTP. Free and easy deployment.Open JGL: A utillity library using OpenGL, which focus' on graphics and vertex control.Open Tokenization Server: Open Tokenization Server is a simplistic implementation of a tokenization data security service. It allows users to create and retrieve tokens that represent sensitive information.OurCodeNights: Just some small projects we "work" on at our codenights. A good excuse to meet and have a good time ;)Raple: Raple is simple programming languageSea: Set of C# Extension libraries that make life easier.Shai Chi: Shai ChiSharePoint Sandbox Logging: SharePoint Sandbox Logging enables developers and solution designers to easily log events to logging list in your site collection, allowing you to easily trace errors on custom solutions within your site collection, with no need for server access. Perfect for sandbox / Office365. This project contains one sandbox solution with a site collection logging feature. When feature is active - it creates the list and logs. If it is not active - it does not log. Simple. It also contains a sampl...SharePoint Stock Ticker: SharePoint Stock Ticker is a SharePoint 2010 project which consists of a SharePoint Stock Ticker Web Part driven off of Google's Stock API. The width of this Web Part was developed to fit within the default SharePoint 2010 Quick Launch navigation. This has also been tested to run over http and https SharePoint 2010 farms and will not throw a mixed content warning with Internet Explorer.SharpLinx: SharpLinx is an open source social networking application on ASP.Net.Single.Net - less Copy & Paste,less code lines,more easier: make your code life easierSqlMyAdmin: A projecto to create a PHPMyAdmin clone app that access Sql Server.TDB: TDB is a NOSQL Key-value store entirely designed to run in windows environment. It's directly inspired by Redis project and use Redis' command sematincs and Redis' communication protocol to maintain a base compatibility with it.Tomasulo: A simulator of Tomasulo CDB dynamic instruction scheduling algorithm in Java. Authors: Mengyu Zhou @J85 Zhenyao Zhu @J85 Jun Li @J81 @Tsinghua UniversityTransport Broker Management System: Transport Management system will serve the small freight brokers manage the carriers and orders. It has features like manage truck owners, carriers, source, destination, orders. Verification system for MCBS: Verification system for mobile communication base station.VietCard: VietCardVkontakte File converter: ????????? ?????? ??? ???????? ?? ? "?????????" vkontakte.ru. VSGesture for Visual Studio: VSGesture can execute command via mouse gestures within Visual Studio2010. If you have any feedback, please send me an email to powerumc at gmail.com.WCF-Silverlight Helper: Class library to generate WCF DataContractSerializer-serializable and Silverlight (SL) -compatible classes. Useful, for example, if you have an assembly with classes that fail to serialize with DataContractSerializer. T4 Templates are used to auto-generate code.Windows Phone eBook Samples: Windows Phone eBook Samples is a collection of samples associated with the Windows Phone eBoom community initiative.WinPreciousMetal: WinPreciousMetal helps people know the precious metal's states. 
Because I am a Chinese, my soft mainly focus at the price of metal in RMB.

    Read the article

  • CodePlex Daily Summary for Tuesday, June 11, 2013

    CodePlex Daily Summary for Tuesday, June 11, 2013Popular ReleasesToolbox for Dynamics CRM 2011: XrmToolBox (v1.2013.6.11): XrmToolbox improvement Add exception handling when loading plugins Updated information panel for displaying two lines of text Tools improvementMetadata Document Generator (v1.2013.6.10)New tool Web Resources Manager (v1.2013.6.11)Retrieve list of unused web resources Retrieve web resources from a solution All tools listAccess Checker (v1.2013.2.5) Attribute Bulk Updater (v1.2013.1.17) FetchXml Tester (v1.2013.3.4) Iconator (v1.2013.1.17) Metadata Document Generator (v1.2013.6.10) Privilege...Document.Editor: 2013.23: What's new for Document.Editor 2013.23: New Insert Emoticon support Improved Format support Minor Bug Fix's, improvements and speed upsChristoc's DotNetNuke Module Development Template: DotNetNuke 7 Project Templates V2.4 for VS2012: V2.4 - Release Date 6/10/2013 Items addressed in this 2.4 release Updated MSBuild Community Tasks reference to 1.4.0.61 Setting up your DotNetNuke Module Development Environment Installing Christoc's DotNetNuke Module Development Templates Customizing the latest DotNetNuke Module Development Project TemplatesLayered Architecture Sample for .NET: Leave Sample - June 2013 (for .NET 4.5): Thank You for downloading Layered Architecture Sample. Please read the accompanying README.txt file for setup and installation instructions. This is the first set of a series of revised samples that will be released to illustrate the layered architecture design pattern. This version is only supported on Visual Studio 2012. This samples illustrates the use of ASP.NET Web Forms, ASP.NET Model Binding, Windows Communications Foundation (WCF), Windows Workflow Foundation (WF) and Microsoft Ente...Papercut: Papercut 2013-6-10: Feature: Shows From, To, Date and Subject of Email. Feature: Async UI and loading spinner. Enhancement: Improved speed when loading large attachments. Enhancement: Decoupled SMTP server into secondary assembly. Enhancement: Upgraded to .NET v4. Fix: Messages lost when received very fast. Fix: Email encoding issues on display/Automatically detect message EncodingMapWinGIS ActiveX Map and GIS Component: MapWinGIS v4.8.8 Release Candidate - 32 Bit: This is the first release candidate of MapWinGIS. Please test it thoroughly.MapWindow 4: MapWindow GIS v4.8.8 - Release Candidate - 32Bit: Download the release notes here: http://svn.mapwindow.org/svnroot/MapWindow4Dev/Bin/MapWindowNotes.rtfVR Player: VR Player 0.3 ALPHA: New plugin system with individual folders TrackIR support Maya and 3ds max formats support Dual screen support Mono layouts (left and right) Cylinder height parameter Barel effect factor parameter Razer hydra filter parameter VRPN bug fixes UI improvements Performances improvements Stabilization and logging with Log4Net New default values base on users feedback CTRL key to open menuSimCityPak: SimCityPak 0.1.0.8: SimCityPak 0.1.0.8 New features: Import BMP color palettes for vehicles Import RASTER file (uncompressed 8.8.8.8 DDS files) View different channels of RASTER files or preview of all layers combined Find text in javascripts TGA viewer Ground textures added to lot editor Many additional identified instances and propertiesWsus Package Publisher: Release v1.2.1306.09: Add more verifications on certificate validation. WPP will not let user to try publishing an update until the certificate is valid. Add certificate expiration date on the 'About' form. 
Filter Approbation to avoid a user to try to approve an update for uninstallation when the update do not support uninstallation. Add the server and console version on the 'About' form. WPP will not let user to publish an update until the server and console are not at the same level. WPP do not let user ...AJAX Control Toolkit: June 2013 Release: AJAX Control Toolkit Release Notes - June 2013 Release Version 7.0607June 2013 release of the AJAX Control Toolkit. AJAX Control Toolkit .NET 4.5 – AJAX Control Toolkit for .NET 4.5 and sample site (Recommended). AJAX Control Toolkit .NET 4 – AJAX Control Toolkit for .NET 4 and sample site (Recommended). AJAX Control Toolkit .NET 3.5 – AJAX Control Toolkit for .NET 3.5 and sample site (Recommended). Notes: - Instructions for using the AJAX Control Toolkit with ASP.NET 4.5 can be found at...Rawr: Rawr 5.2.1: This is the Downloadable WPF version of Rawr!For web-based version see http://elitistjerks.com/rawr.php You can find the version notes at: http://rawr.codeplex.com/wikipage?title=VersionNotes Rawr Addon (NOT UPDATED YET FOR MOP)We now have a Rawr Official Addon for in-game exporting and importing of character data hosted on Curse. The Addon does not perform calculations like Rawr, it simply shows your exported Rawr data in wow tooltips and lets you export your character to Rawr (including ba...VG-Ripper & PG-Ripper: PG-Ripper 1.4.13: changes NEW: Added Support for "ImageJumbo.com" links FIXED: Ripping of Threads with multiple pagesASP.NET MVC Forum: MVCForum v1.3.5: This is a bug release version, with a couple of small usability features and UI changes. All the small amount of bugs reported in v1.3 have been fixed, no upgrade needed just overwrite the files and everything should just work.Json.NET: Json.NET 5.0 Release 6: New feature - Added serialized/deserialized JSON to verbose tracing New feature - Added support for using type name handling with ISerializable content Fix - Fixed not using default serializer settings with primitive values and JToken.ToObject Fix - Fixed error writing BigIntegers with JsonWriter.WriteToken Fix - Fixed serializing and deserializing flag enums with EnumMember attribute Fix - Fixed error deserializing interfaces with a valid type converter Fix - Fixed error deser...QlikView Extension - Animated Scatter Chart: Animated Scatter Chart - v1.0: Version 1.0 including Source Code qar File Example QlikView application Tested With: Browser Firefox 20 (x64) Google Chrome 27 (x64) Internet Explorer 9 QlikView QlikView Desktop 11 - SR2 (x64) QlikView Desktop 11.2 - SR1 (x64) QlikView Ajax Client 11.2 - SR2 (based on x64)BarbaTunnel: BarbaTunnel 7.2: Warning: HTTP Tunnel is not compatible with version 6.x and prior, HTTP packet format has been changed. Check Version History for more information about this release.SuperWebSocket, a .NET WebSocket Server: SuperWebSocket 0.8: This release includes these changes below: Upgrade SuperSocket to 1.5.3 which is much more stable Added handshake request validating api (WebSocketServer.ValidateHandshake(TWebSocketSession session, string origin)) Fixed a bug that the m_Filters in the SubCommandBase can be null if the command's method LoadSubCommandFilters(IEnumerable<SubCommandFilterAttribute> globalFilters) is not invoked Fixed the compatibility issue on Origin getting in the different version protocols Marked ISub...BlackJumboDog: Ver5.9.0: 2013.06.04 Ver5.9.0 (1) ?????????????????????????????????($Remote.ini Tmp.ini) (2) ThreadBaseTest?? (3) ????POP3??????SMTP???????????????? 
(4) Web???????、?????????URL??????????????? (5) Ftp???????、LIST?????????????? (6) ?????????????????????Media Companion: Media Companion MC3.569b: New* Movies - Autoscrape/Batch Rescrape extra fanart and or extra thumbs. * Movies - Alternative editor can add manually actors. * TV - Batch Rescraper, AutoScrape extrafanart, if option enabled. Fixed* Movies - Slow performance switching to movie tab by adding option 'Disable "Not Matching Rename Pattern"' to Movie Preferences - General. * Movies - Fixed only actors with images were scraped and added to nfo * Movies - Fixed filter reset if selected tab was above Home Movies. * Updated Medi...New Projects.Net Framework Detector: This application displays all the .Net framework installed on a PC.Adhayayan: Online learning management system designed for small as having 50 students and large as 1000+ students. This is community version having most needed features.Akad SPPortal: Project portal Reserved for unit with features the basic. Been Development in the Background platform SharePoint foundation 2010. Combined EXT.net FrameworkANGF Autumn's Novel Game Framework: This is a novel game framework project. ANGF was use by a game of "1980 Otaku no Hideo", before Codeplex project started. auto check in: help users to login and check in for certain websites automatically,such as xiami and taobaoBasic Expression Evaluator C#: Mathematical Expression Evaluation with symbols and normal bodmas order of precedence. Supports basic arithmetic operations and access to a symbol dictionaryCaloriesCalculator: CaloriesCalculator, the code refactor exampleCamadanPortal: Just a simple company website...charity organization1: charity organizationControl de Asistencia - Grupo Azar2: Control de asistenciaExpenses Manager: Expenses Manager for Windows Phone 8File Sync (Desktop Application): This is a simple "File Synchronizing" / "File Back Up" software, uses Microsift Sync 2.0Furcadia Heimdall Tester: An application that helps Furcadia technicians test the integrity of the game server. It checks for availability of each heimdall, its connectivity to the rest of the system (horton/tribble) and how often it receives a user compared to the rest of them.Furcadia Installer Browser: A program that can access files within a Furcadia installer and allow the user to open them from within the install package, extract some or all the files inside the package, check data integrity of each file and compare the content of two installers.Furcadia Map Normalizer: Furcadia Map Normalizer is a small tool that helps recover a damaged Furcadia map after a live-edit bug. It restores out-of-range elements within back to zero.MyHistory: .NexusCamera: Nexus Camera is a control for Windows Phone 7 & 8, which can be used as a menu on the Camera. The idea in making this control when we use a camera nexus. ThanksNUnit Test Runner: Allows you to run NUnit tests in a more flexible way.Photo Printer Load Balancing: This was developed as a small utility allowing for multiple printers to be used in a DIY photobooth. 
It monitors a folder for images and load balances a printerResource Translator: A utility that allows you to quickly translate .NET resource files.SCCM Powershell Module: A powershell module to simplify the administration of a System Center Configuration Manager (SCCM) environment.ShardMath: A simple cross platform maths library with expression parsing written without any dependencies on existing pre installed .NET libraries.SoftwareEngineering: Software Engineering Project that named SakhtemanPezeshkan Sprint TinCan LRS: Using the TinCan specifications, this C# MVC project attempts to provide endpoints for TinCan compliant clients.threadpool: a high performance threadpool good for tiny / small / medium processor work, which is the base of event_comb procedural programming framework.TouchProxy (for Windows 8): A remote touch injection client for Windows 8 using standard TUIO+OSC protocols, variable input calibration, and integrated hosted Wi-Fi networking for devices.UGSF Design StarterKit: Information for helping users to create SharePoint Design.UNREAL GAME BY FREITAG: Unreal action game!Win8DemoAcceleratore: This project is a demo project for a lab on Windows 8 Store App development Windows 8 App Design Reference Template: Education Dark Banner: Education Dark Banner Template help if you want to build an app which has an option for group category display and education detailsWindows 8 App Design Reference Template: Photo Viewer: Photo Viewer template will help if you want a photo gallery app to showcase images from various folders. ????: ????WP7??????

    Read the article

  • .NET Code Evolution

    - by Alois Kraus
    Originally posted on: http://geekswithblogs.net/akraus1/archive/2013/07/24/153504.aspx
At my day job I do look at a lot of code written by other people. Most of the code is quite good and some is even a masterpiece. And there is also code which makes you think WTF… oh, it was written by me. Hm, not so bad after all. There are many excuses (reasons) for bad code. Most often it is time pressure, followed by not enough ambition (who cares) or insufficient training. Normally I do care about code quality quite a lot, which makes me a (perceived) slow worker who does write many tests and refines the code quite a lot because of the design deficiencies. Most of the deficiencies I do find by putting my design under stress while checking for invariants. It does also help a lot to step into the code with a debugger (sometimes also Windbg). I do this much more often when my tests are red. That way I do get a much better understanding of what my code really does and not what I think it should be doing. This time I do want to show you how code can evolve over the years with different .NET Framework versions.
Once there was a time when .NET 1.1 was new and many C++ programmers did switch over to get rid of uninitialized pointers and memory leaks. There were also nice new data structures available, such as the Hashtable, which is a fast lookup table with O(1) time complexity. All was good and much code was written since then. In 2005 a new version of the .NET Framework did arrive which did bring many new things like generics and new data structures. The “old” fashioned Hashtable was coming to an end and everyone used the new Dictionary<xx,xx> type instead, which was type safe and faster because the object to type conversion (aka boxing) was no longer necessary. I think 95% of all Hashtables and dictionaries use string as key. Often it is convenient to ignore casing to make it easy to look up values which the user did enter. An often followed route is to convert the string to upper case before putting it into the Hashtable.

Hashtable Table = new Hashtable();

void Add(string key, string value)
{
    Table.Add(key.ToUpper(), value);
}

This is valid and working code but it has problems. First, we can pass the Hashtable a custom IEqualityComparer to do the string matching case insensitively. Second, we can switch over to the now also old Dictionary type to become a little faster, and we can keep the original keys (not upper cased) in the dictionary.

Dictionary<string, string> DictTable = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);

void AddDict(string key, string value)
{
    DictTable.Add(key, value);
}

Many people do not use the other ctors of Dictionary because they do shy away from the overhead of writing their own comparer. They do not know that .NET already has predefined comparers for strings at hand which you can directly use.
Today, in the many-core era, we do use threads all over the place. Sometimes things break in subtle ways but most of the time it is sufficient to place a lock around the offender. Threading has become so mainstream that it may sound weird that in the year 2000 some guy got a huge incentive for the idea to reduce the time to process calibration data from 12 hours to 6 hours by using two threads on a dual core machine. Threading does make it easy to become faster at the expense of correctness. Correct and scalable multithreading can be arbitrarily hard to achieve depending on the problem you are trying to solve.
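Before moving on to the threading examples, here is a minimal, self-contained sketch of the predefined string comparers mentioned above in action. The class and key names are invented for this illustration; it only shows that, with StringComparer.OrdinalIgnoreCase, lookups succeed no matter which casing the user typed, while the stored keys keep their original casing.

using System;
using System.Collections.Generic;

class CaseInsensitiveLookupSketch
{
    static void Main()
    {
        // One of the comparers that ship with the framework - no custom
        // IEqualityComparer implementation is required.
        var settings = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
        {
            { "Timeout", "30" },
            { "RetryCount", "5" }
        };

        // The keys keep their original casing, but lookups ignore case.
        Console.WriteLine(settings["timeout"]);                 // 30
        Console.WriteLine(settings.ContainsKey("RETRYCOUNT"));  // True
    }
}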
Let's suppose we want to process millions of items with two threads and count the items processed by all threads. A typical beginner's code might look like this:

int Counter;

void IJustLearnedToUseThreads()
{
    var t1 = new Thread(ThreadWorkMethod);
    t1.Start();
    var t2 = new Thread(ThreadWorkMethod);
    t2.Start();
    t1.Join();
    t2.Join();

    if (Counter != 2 * Increments)
        throw new Exception("Hmm " + Counter + " != " + 2 * Increments);
}

const int Increments = 10 * 1000 * 1000;

void ThreadWorkMethod()
{
    for (int i = 0; i < Increments; i++)
    {
        Counter++;
    }
}

It does throw an exception with a message like “Hmm 10.222.287 != 20.000.000” and does never finish. The code does fail because the assumption that Counter++ is an atomic operation is wrong. The ++ operator is just a shortcut for

Counter = Counter + 1

This does involve reading the counter from a memory location into the CPU, incrementing the value on the CPU and writing the new value back to the memory location. When we do look at the generated assembly code we will see only

inc dword ptr [ecx+10h]

which is only one instruction. Yes, it is one instruction, but it is not atomic. All modern CPUs have several layers of caches (L1, L2, L3) which try to hide how slow actual main memory accesses are. Since cache is just another word for redundant copy, it can happen that one CPU does read a value from main memory into the cache, modifies it and writes it back to the main memory. The problem is that at least the L1 cache is not shared between CPUs, so it can happen that one CPU does make changes to values which did change in the meantime in main memory. From the exception you can see we did increment the value 20 million times, but half of the changes were lost because we did overwrite the already changed value from the other thread. This is a very common case and people do learn to protect their data with proper locking.

void Intermediate()
{
    var time = Stopwatch.StartNew();
    Action acc = ThreadWorkMethod_Intermediate;
    var ar1 = acc.BeginInvoke(null, null);
    var ar2 = acc.BeginInvoke(null, null);
    ar1.AsyncWaitHandle.WaitOne();
    ar2.AsyncWaitHandle.WaitOne();

    if (Counter != 2 * Increments)
        throw new Exception(String.Format("Hmm {0:N0} != {1:N0}", Counter, 2 * Increments));

    Console.WriteLine("Intermediate did take: {0:F1}s", time.Elapsed.TotalSeconds);
}

void ThreadWorkMethod_Intermediate()
{
    for (int i = 0; i < Increments; i++)
    {
        lock (this)
        {
            Counter++;
        }
    }
}

This is better and does use the .NET Threadpool to get rid of manual thread management. It does give the expected result, but it can result in deadlocks because you do lock on this. This is in general a bad idea since it can lead to deadlocks when other threads use your class instance as lock object. It is therefore recommended to create a private object as lock object to ensure that nobody else can lock your lock object. When you read more about threading you will read about lock free algorithms. They are nice and can improve performance quite a lot, but you need to pay close attention to the CLR memory model. It does make quite weak guarantees in general, but it can still work because your CPU architecture does give you more invariants than the CLR memory model. For a simple counter there is an easy lock free alternative present with the Interlocked class in .NET. As a general rule you should not try to write lock free algos since most likely you will fail to get it right on all CPU architectures.
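The private lock object recommended a few sentences above is not shown in the article, so here is a rough sketch of that pattern (the class and field names are mine, not the author's):

class SafeCounter
{
    // A private, readonly lock object that no code outside this class can see,
    // so callers cannot accidentally take "our" lock and deadlock us the way
    // lock(this) allows.
    private readonly object _sync = new object();

    private int _counter;

    public void Increment()
    {
        lock (_sync)   // never lock(this) or lock(typeof(...))
        {
            _counter++;
        }
    }

    public int Value
    {
        get { lock (_sync) { return _counter; } }
    }
}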
void Experienced()
{
    var time = Stopwatch.StartNew();
    Task t1 = Task.Factory.StartNew(ThreadWorkMethod_Experienced);
    Task t2 = Task.Factory.StartNew(ThreadWorkMethod_Experienced);
    t1.Wait();
    t2.Wait();

    if (Counter != 2 * Increments)
        throw new Exception(String.Format("Hmm {0:N0} != {1:N0}", Counter, 2 * Increments));

    Console.WriteLine("Experienced did take: {0:F1}s", time.Elapsed.TotalSeconds);
}

void ThreadWorkMethod_Experienced()
{
    for (int i = 0; i < Increments; i++)
    {
        Interlocked.Increment(ref Counter);
    }
}

Since time does move forward we do not use threads explicitly anymore but the much nicer Task abstraction, which was introduced with .NET 4 in 2010. It is educational to look at the generated assembly code. The Interlocked.Increment method must be called, which does wondrous things, right? Let's see:

lock inc dword ptr [eax]

The first thing to note is that there is no method call at all. Why? Because the JIT compiler does know very well about CPU intrinsic functions. Atomic operations, which do lock the memory bus to prevent other processors from reading stale values, are such things. Second: this is the same increment call prefixed with a lock instruction. The only reason for the existence of the Interlocked class is that the JIT compiler can compile it to the matching CPU intrinsic functions, which can not only increment by one but can also do an add, an exchange and a combined compare-and-exchange operation. But be warned that the correct usage of its methods can be tricky. If you try to be clever and look at the generated IL code and try to reason about its efficiency you will fail. Only the generated machine code counts. Is this the best code we can write? Perhaps. It is nice and clean. But can we make it any faster? Let's see how good we are doing currently.

Level                      Time in s
IJustLearnedToUseThreads   Flawed code
Intermediate               1,5 (lock)
Experienced                0,3 (Interlocked.Increment)
Master                     0,1 (1,0 for int[2])

That lock free thing is really a nice thing. But if you read more about CPU caches, cache coherency and false sharing you can do even better.

int[] Counters = new int[12]; // Cache line size is 64 bytes on my machine with an 8 way associative cache; try for yourself, e.g. 64 on more modern CPUs

void Master()
{
    var time = Stopwatch.StartNew();
    Task t1 = Task.Factory.StartNew(ThreadWorkMethod_Master, 0);
    Task t2 = Task.Factory.StartNew(ThreadWorkMethod_Master, Counters.Length - 1);
    t1.Wait();
    t2.Wait();

    Counter = Counters[0] + Counters[Counters.Length - 1];
    if (Counter != 2 * Increments)
        throw new Exception(String.Format("Hmm {0:N0} != {1:N0}", Counter, 2 * Increments));

    Console.WriteLine("Master did take: {0:F1}s", time.Elapsed.TotalSeconds);
}

void ThreadWorkMethod_Master(object number)
{
    int index = (int) number;
    for (int i = 0; i < Increments; i++)
    {
        Counters[index]++;
    }
}

The key insight here is to give each core its own value. But if you simply use an integer array of two items, one for each core, and add the items at the end, you will be much slower than the lock free version (factor 3). Each CPU core has its own cache line size, which is something in the range of 16-256 bytes. When you do access a value from one location the CPU does not only fetch one value from main memory but a complete cache line (e.g. 16 bytes). This means that you do not pay for the next 15 bytes when you access them. This can lead to dramatic performance improvements and non obvious code which is faster although it does have many more memory reads than another algorithm. So what have we done here?
We have started with correct code, but it was lacking the knowledge of how to use the .NET Base Class Libraries optimally. Then we did try to get fancy, used threads for the first time and failed. Our next try was better, but it still had non obvious issues (the lock object was exposed to the outside). Knowledge has increased further and we have found a lock free version of our counter, which is a nice, clean and perfectly valid solution. The last example is only here to show you how you can get the most out of threading by paying close attention to the data structures you use and to CPU cache coherency. Although we are working in a virtual execution environment, in a high level language with automatic memory management, it does pay off to know the details down to the assembly level. Only if you continue to learn and to dig deeper can you come up with solutions no one else was even considering. I have studied particle physics, which does help at the digging deeper part. Have you ever tried to solve Quantum Chromodynamics equations? Compared to that the rest must be easy ;-). Although I am no longer working in the science field, I take pride in discovering non obvious things. This can be a very hard-to-find bug or a new way to restructure data to make something 10 times faster. Now I need to get some sleep ….
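As an aside to the last example: instead of relying on the spacing inside an int[12], the cache-line padding can be made explicit with a struct layout. This is only a sketch, under the assumption of a 64-byte cache line; the type and member names are invented here and do not come from the article.

using System;
using System.Runtime.InteropServices;
using System.Threading.Tasks;

// Two counters forced onto different cache lines (64 bytes assumed), so the
// two writer threads do not keep invalidating each other's cached copy.
[StructLayout(LayoutKind.Explicit, Size = 128)]
struct PaddedCounters
{
    [FieldOffset(0)]  public int First;
    [FieldOffset(64)] public int Second;
}

class FalseSharingSketch
{
    const int Increments = 10 * 1000 * 1000;
    static PaddedCounters Counters;

    static void Main()
    {
        // Each task increments only its own, padded field.
        Task t1 = Task.Factory.StartNew(() => { for (int i = 0; i < Increments; i++) Counters.First++; });
        Task t2 = Task.Factory.StartNew(() => { for (int i = 0; i < Increments; i++) Counters.Second++; });
        Task.WaitAll(t1, t2);

        Console.WriteLine(Counters.First + Counters.Second); // 20.000.000, no lost updates
    }
}

Whether this is actually faster than the int[2] or int[12] variants depends on the real cache line size of the machine, which is why the article suggests experimenting with the spacing yourself.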

    Read the article

  • ?Oracle Database 12c????Information Lifecycle Management ILM?Storage Enhancements

    - by Liu Maclean(???)
    Oracle Database 12c????Information Lifecycle Management ILM ?????????Storage Enhancements ???????? Lifecycle Management ILM ????????? Automatic Data Placement ??????, ??ADP? ?????? 12c???????Datafile??? Online Move Datafile, ????????????????datafile???????,??????????????? ????(12.1.0.1)Automatic Data Optimization?heat map????????: ????????? (CDB)?????Automatic Data Optimization?heat map Row-level policies for ADO are not supported for Temporal Validity. Partition-level ADO and compression are supported if partitioned on the end-time columns. Row-level policies for ADO are not supported for in-database archiving. Partition-level ADO and compression are supported if partitioned on the ORA_ARCHIVE_STATE column. Custom policies (user-defined functions) for ADO are not supported if the policies default at the tablespace level. ADO does not perform checks for storage space in a target tablespace when using storage tiering. ADO is not supported on tables with object types or materialized views. ADO concurrency (the number of simultaneous policy jobs for ADO) depends on the concurrency of the Oracle scheduler. If a policy job for ADO fails more than two times, then the job is marked disabled and the job must be manually enabled later. Policies for ADO are only run in the Oracle Scheduler maintenance windows. Outside of the maintenance windows all policies are stopped. The only exceptions are those jobs for rebuilding indexes in ADO offline mode. ADO has restrictions related to moving tables and table partitions. ??????row,segment???????????ADO??,?????create table?alter table?????? ????ADO??,??????????????,???????????????? storage tier , ?????????storage tier?????????, ??????????????ADO??????????? segment?row??group? ?CREATE TABLE?ALERT TABLE???ILM???,??????????????????ADO policy? ??ILM policy???????????????? ??????? ????ADO policy, ?????alter table  ???????,?????????????? CREATE TABLE sales_ado (PROD_ID NUMBER NOT NULL, CUST_ID NUMBER NOT NULL, TIME_ID DATE NOT NULL, CHANNEL_ID NUMBER NOT NULL, PROMO_ID NUMBER NOT NULL, QUANTITY_SOLD NUMBER(10,2) NOT NULL, AMOUNT_SOLD NUMBER(10,2) NOT NULL ) ILM ADD POLICY COMPRESS FOR ARCHIVE HIGH SEGMENT AFTER 6 MONTHS OF NO ACCESS; SQL> SELECT SUBSTR(policy_name,1,24) AS POLICY_NAME, policy_type, enabled 2 FROM USER_ILMPOLICIES; POLICY_NAME POLICY_TYPE ENABLED -------------------- -------------------------- -------------- P41 DATA MOVEMENT YES ALTER TABLE sales MODIFY PARTITION sales_1995 ILM ADD POLICY COMPRESS FOR ARCHIVE HIGH SEGMENT AFTER 6 MONTHS OF NO ACCESS; SELECT SUBSTR(policy_name,1,24) AS POLICY_NAME, policy_type, enabled FROM USER_ILMPOLICIES; POLICY_NAME POLICY_TYPE ENABLE ------------------------ ------------- ------ P1 DATA MOVEMENT YES P2 DATA MOVEMENT YES /* You can disable an ADO policy with the following */ ALTER TABLE sales_ado ILM DISABLE POLICY P1; /* You can delete an ADO policy with the following */ ALTER TABLE sales_ado ILM DELETE POLICY P1; /* You can disable all ADO policies with the following */ ALTER TABLE sales_ado ILM DISABLE_ALL; /* You can delete all ADO policies with the following */ ALTER TABLE sales_ado ILM DELETE_ALL; /* You can disable an ADO policy in a partition with the following */ ALTER TABLE sales MODIFY PARTITION sales_1995 ILM DISABLE POLICY P2; /* You can delete an ADO policy in a partition with the following */ ALTER TABLE sales MODIFY PARTITION sales_1995 ILM DELETE POLICY P2; ILM ???????: ?????ILM ADP????,???????: ?????? ???? 
activity tracking, ????2????????,???????????????????: SEGMENT-LEVEL???????????????????? ROW-LEVEL????????,??????? ????????: 1??????? SEGMENT-LEVEL activity tracking ALTER TABLE interval_sales ILM  ENABLE ACTIVITY TRACKING SEGMENT ACCESS ???????INTERVAL_SALES??segment level  activity tracking,?????????????????? 2? ??????????? ALTER TABLE emp ILM ENABLE ACTIVITY TRACKING (CREATE TIME , WRITE TIME); 3????????? ALTER TABLE emp ILM ENABLE ACTIVITY TRACKING  (READ TIME); ?12.1.0.1.0?????? ??HEAT_MAP??????????, ?????system??session?????heap_map????????????? ?????????HEAT MAP??,? ALTER SYSTEM SET HEAT_MAP = ON; ?HEAT MAP??????,??????????????????????????  ??SYSTEM?SYSAUX????????????? ???????HEAT MAP??: ALTER SYSTEM SET HEAT_MAP = OFF; ????? HEAT_MAP????, ?HEAT_MAP??? ?????????????????????? ?HEAT_MAP?????????Automatic Data Optimization (ADO)??? ??ADO??,Heat Map ?????????? ????V$HEAT_MAP_SEGMENT ??????? HEAT MAP?? SQL> select * from V$heat_map_segment; no rows selected SQL> alter session set heat_map=on; Session altered. SQL> select * from scott.emp; EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO ---------- ---------- --------- ---------- --------- ---------- ---------- ---------- 7369 SMITH CLERK 7902 17-DEC-80 800 20 7499 ALLEN SALESMAN 7698 20-FEB-81 1600 300 30 7521 WARD SALESMAN 7698 22-FEB-81 1250 500 30 7566 JONES MANAGER 7839 02-APR-81 2975 20 7654 MARTIN SALESMAN 7698 28-SEP-81 1250 1400 30 7698 BLAKE MANAGER 7839 01-MAY-81 2850 30 7782 CLARK MANAGER 7839 09-JUN-81 2450 10 7788 SCOTT ANALYST 7566 19-APR-87 3000 20 7839 KING PRESIDENT 17-NOV-81 5000 10 7844 TURNER SALESMAN 7698 08-SEP-81 1500 0 30 7876 ADAMS CLERK 7788 23-MAY-87 1100 20 7900 JAMES CLERK 7698 03-DEC-81 950 30 7902 FORD ANALYST 7566 03-DEC-81 3000 20 7934 MILLER CLERK 7782 23-JAN-82 1300 10 14 rows selected. SQL> select * from v$heat_map_segment; OBJECT_NAME SUBOBJECT_NAME OBJ# DATAOBJ# TRACK_TIM SEG SEG FUL LOO CON_ID -------------------- -------------------- ---------- ---------- --------- --- --- --- --- ---------- EMP 92997 92997 23-JUL-13 NO NO YES NO 0 ??v$heat_map_segment???,?v$heat_map_segment??????????????X$HEATMAPSEGMENT V$HEAT_MAP_SEGMENT displays real-time segment access information. Column Datatype Description OBJECT_NAME VARCHAR2(128) Name of the object SUBOBJECT_NAME VARCHAR2(128) Name of the subobject OBJ# NUMBER Object number DATAOBJ# NUMBER Data object number TRACK_TIME DATE Timestamp of current activity tracking SEGMENT_WRITE VARCHAR2(3) Indicates whether the segment has write access: (YES or NO) SEGMENT_READ VARCHAR2(3) Indicates whether the segment has read access: (YES or NO) FULL_SCAN VARCHAR2(3) Indicates whether the segment has full table scan: (YES or NO) LOOKUP_SCAN VARCHAR2(3) Indicates whether the segment has lookup scan: (YES or NO) CON_ID NUMBER The ID of the container to which the data pertains. Possible values include:   0: This value is used for rows containing data that pertain to the entire CDB. This value is also used for rows in non-CDBs. 1: This value is used for rows containing data that pertain to only the root n: Where n is the applicable container ID for the rows containing data The Heat Map feature is not supported in CDBs in Oracle Database 12c, so the value in this column can be ignored. ??HEAP MAP??????????????????,????DBA_HEAT_MAP_SEGMENT???????? ???????HEAT_MAP_STAT$?????? ??Automatic Data Optimization??????: ????1: SQL> alter system set heat_map=on; ?????? ????????????? scott?? http://www.askmaclean.com/archives/scott-schema-script.html SQL> grant all on dbms_lock to scott; ????? 
SQL> grant dba to scott; ????? @ilm_setup_basic C:\APP\XIANGBLI\ORADATA\MACLEAN\ilm.dbf @tktgilm_demo_env_setup SQL> connect scott/tiger ; ???? SQL> select count(*) from scott.employee; COUNT(*) ---------- 3072 ??? 1 ?? SQL> set serveroutput on SQL> exec print_compression_stats('SCOTT','EMPLOYEE'); Compression Stats ------------------ Uncmpressed : 3072 Adv/basic compressed : 0 Others : 0 PL/SQL ???????? ???????3072?????? ????????? ????policy ???????????? alter table employee ilm add policy row store compress advanced row after 3 days of no modification / SQL> set serveroutput on SQL> execute list_ilm_policies; -------------------------------------------------- Policies defined for SCOTT -------------------------------------------------- Object Name------ : EMPLOYEE Subobject Name--- : Object Type------ : TABLE Inherited from--- : POLICY NOT INHERITED Policy Name------ : P1 Action Type------ : COMPRESSION Scope------------ : ROW Compression level : ADVANCED Tier Tablespace-- : Condition type--- : LAST MODIFICATION TIME Condition days--- : 3 Enabled---------- : YES -------------------------------------------------- PL/SQL ???????? SQL> select sysdate from dual; SYSDATE -------------- 29-7? -13 SQL> execute set_back_chktime(get_policy_name('EMPLOYEE',null,'COMPRESSION','ROW','ADVANCED',3,null,null),'EMPLOYEE',null,6); Object check time reset ... -------------------------------------- Object Name : EMPLOYEE Object Number : 93123 D.Object Numbr : 93123 Policy Number : 1 Object chktime : 23-7? -13 08.13.42.000000 ?? Distnt chktime : 0 -------------------------------------- PL/SQL ???????? ?policy?chktime???6??, ????set_back_chktime???????????????“????”?,?????????,???????? ?????? alter system flush buffer_cache; alter system flush buffer_cache; alter system flush shared_pool; alter system flush shared_pool; SQL> execute set_window('MONDAY_WINDOW','OPEN'); Set Maint. Window OPEN ----------------------------- Window Name : MONDAY_WINDOW Enabled? : TRUE Active? : TRUE ----------------------------- PL/SQL ???????? SQL> exec dbms_lock.sleep(60) ; PL/SQL ???????? SQL> exec print_compression_stats('SCOTT', 'EMPLOYEE'); Compression Stats ------------------ Uncmpressed : 338 Adv/basic compressed : 2734 Others : 0 PL/SQL ???????? ??????????????? Adv/basic compressed : 2734 ??????? SQL> col object_name for a20 SQL> select object_id,object_name from dba_objects where object_name='EMPLOYEE'; OBJECT_ID OBJECT_NAME ---------- -------------------- 93123 EMPLOYEE SQL> execute list_ilm_policy_executions ; -------------------------------------------------- Policies execution details for SCOTT -------------------------------------------------- Policy Name------ : P22 Job Name--------- : ILMJOB48 Start time------- : 29-7? -13 08.37.45.061000 ?? End time--------- : 29-7? -13 08.37.48.629000 ?? ----------------- Object Name------ : EMPLOYEE Sub_obj Name----- : Obj Type--------- : TABLE ----------------- Exec-state------- : SELECTED FOR EXECUTION Job state-------- : COMPLETED SUCCESSFULLY Exec comments---- : Results comments- : --- -------------------------------------------------- PL/SQL ???????? ILMJOB48?????policy?JOB,?12.1.0.1??J00x???? ?MMON_SLAVE???M00x???15????????? select sample_time,program,module,action from v$active_session_history where action ='KDILM background EXEcution' order by sample_time; 29-7? -13 08.16.38.369000000 ?? ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.17.38.388000000 ?? ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.17.39.390000000 ?? 
ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.23.38.681000000 ?? ORACLE.EXE (M002) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.32.38.968000000 ?? ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.33.39.993000000 ?? ORACLE.EXE (M003) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.33.40.993000000 ?? ORACLE.EXE (M003) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.36.40.066000000 ?? ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.37.42.258000000 ?? ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.37.43.258000000 ?? ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.37.44.258000000 ?? ORACLE.EXE (M000) MMON_SLAVE KDILM background EXEcution 29-7? -13 08.38.42.386000000 ?? ORACLE.EXE (M001) MMON_SLAVE KDILM background EXEcution select distinct action from v$active_session_history where action like 'KDILM%' KDILM background CLeaNup KDILM background EXEcution SQL> execute set_window('MONDAY_WINDOW','CLOSE'); Set Maint. Window CLOSE ----------------------------- Window Name : MONDAY_WINDOW Enabled? : TRUE Active? : FALSE ----------------------------- PL/SQL ???????? SQL> drop table employee purge ; ????? ???? ????? spool ilm_usecase_1_cleanup.lst @ilm_demo_cleanup ; spool off

    Read the article

  • Routing a PPTP client and VMware Server instance running on the same box

    - by servermanfail
    I have a Windows 2003 SBS box. It has 2 physical NIC's: WAN and LAN. The WAN is a public IP. The LAN is a simple 192.168.2.x subnet with Microsoft DHCP Server. Microsoft Routing and Remote Access Service is used to provide NAT to LAN. The box also runs VMware Server with a virtual machine running Windows XP. I want people to be able to VPN into the box, and connect to these virtual machines on the MSRDP port. I can VPN (PPTP) into the 2003 SBS box fine, as well as ping other machines on the LAN. I can ping the VM from a physical workstation on the LAN and vice-versa. I can ping the VPN client from the a physical workstation on the LAN and vice-versa. I can ping the VPN client from the Server console and vice-versa. I can ping the VM client from the Server console and vice-versa. But I cannot ping the VPN client from the VM and vice-versa. I was hoping to set up 2 or 3 Windows XP virtual machines on our only server, so that a couple of people can remote in to work without having to leave a physical machine on in the office. You could this attempted set up a "poor mans terminal server". On the 2003 SBS Server:- C:\Documents and Settings\Administrator>route print IPv4 Route Table =========================================================================== Interface List 0x1 ........................... MS TCP Loopback interface 0x2 ...00 50 56 c0 00 08 ...... VMware Virtual Ethernet Adapter for VMnet8 0x3 ...00 50 56 c0 00 01 ...... VMware Virtual Ethernet Adapter for VMnet1 0x10004 ...00 53 45 00 00 00 ...... WAN (PPP/SLIP) Interface 0x10005 ...00 11 43 d4 69 13 ...... Broadcom NetXtreme Gigabit Ethernet 0x10006 ...00 11 43 d4 69 14 ...... Broadcom NetXtreme Gigabit Ethernet #2 =========================================================================== =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 81.123.144.22 81.123.144.21 1 81.123.144.20 255.255.255.252 81.123.144.21 81.123.144.21 1 81.123.144.21 255.255.255.255 127.0.0.1 127.0.0.1 1 81.255.255.255 255.255.255.255 81.123.144.21 81.123.144.21 1 86.135.78.235 255.255.255.255 81.123.144.22 81.123.144.21 1 109.152.62.236 255.255.255.255 81.123.144.22 81.123.144.21 1 127.0.0.0 255.0.0.0 127.0.0.1 127.0.0.1 1 192.168.2.0 255.255.255.0 192.168.2.3 192.168.2.3 1 192.168.2.3 255.255.255.255 127.0.0.1 127.0.0.1 1 192.168.2.26 255.255.255.255 192.168.2.32 192.168.2.32 1 192.168.2.28 255.255.255.255 192.168.2.32 192.168.2.32 1 192.168.2.32 255.255.255.255 127.0.0.1 127.0.0.1 50 192.168.2.50 255.255.255.255 127.0.0.1 127.0.0.1 1 192.168.2.255 255.255.255.255 192.168.2.3 192.168.2.3 1 192.168.10.0 255.255.255.0 192.168.10.1 192.168.10.1 20 192.168.10.1 255.255.255.255 127.0.0.1 127.0.0.1 20 192.168.10.255 255.255.255.255 192.168.10.1 192.168.10.1 20 192.168.96.0 255.255.255.0 192.168.96.1 192.168.96.1 20 192.168.96.1 255.255.255.255 127.0.0.1 127.0.0.1 20 192.168.96.255 255.255.255.255 192.168.96.1 192.168.96.1 20 224.0.0.0 240.0.0.0 81.123.144.21 81.123.144.21 1 224.0.0.0 240.0.0.0 192.168.2.3 192.168.2.3 1 224.0.0.0 240.0.0.0 192.168.10.1 192.168.10.1 20 224.0.0.0 240.0.0.0 192.168.96.1 192.168.96.1 20 255.255.255.255 255.255.255.255 81.123.144.21 81.123.144.21 1 255.255.255.255 255.255.255.255 192.168.2.3 192.168.2.3 1 255.255.255.255 255.255.255.255 192.168.10.1 192.168.10.1 1 255.255.255.255 255.255.255.255 192.168.96.1 192.168.96.1 1 Default Gateway: 81.123.144.22 
=========================================================================== Persistent Routes: None C:\Documents and Settings\Administrator>ipconfig /all Windows IP Configuration Host Name . . . . . . . . . . . . : 2003server Primary Dns Suffix . . . . . . . : mycompany.local Node Type . . . . . . . . . . . . : Unknown IP Routing Enabled. . . . . . . . : Yes WINS Proxy Enabled. . . . . . . . : Yes DNS Suffix Search List. . . . . . : mycompany.local gateway.2wire.net Ethernet adapter VMware Network Adapter VMnet8: Connection-specific DNS Suffix . : Description . . . . . . . . . . . : VMware Virtual Ethernet Adapter for VMnet 8 Physical Address. . . . . . . . . : 00-50-56-C0-00-08 DHCP Enabled. . . . . . . . . . . : No IP Address. . . . . . . . . . . . : 192.168.10.1 Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : Ethernet adapter VMware Network Adapter VMnet1: Connection-specific DNS Suffix . : Description . . . . . . . . . . . : VMware Virtual Ethernet Adapter for VMnet 1 Physical Address. . . . . . . . . : 00-50-56-C0-00-01 DHCP Enabled. . . . . . . . . . . : No IP Address. . . . . . . . . . . . : 192.168.96.1 Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : PPP adapter RAS Server (Dial In) Interface: Connection-specific DNS Suffix . : Description . . . . . . . . . . . : WAN (PPP/SLIP) Interface Physical Address. . . . . . . . . : 00-53-45-00-00-00 DHCP Enabled. . . . . . . . . . . : No IP Address. . . . . . . . . . . . : 192.168.2.32 Subnet Mask . . . . . . . . . . . : 255.255.255.255 Default Gateway . . . . . . . . . : NetBIOS over Tcpip. . . . . . . . : Disabled Ethernet adapter LAN: Connection-specific DNS Suffix . : Description . . . . . . . . . . . : Broadcom NetXtreme Gigabit Ethernet Physical Address. . . . . . . . . : 00-11-43-D4-69-13 DHCP Enabled. . . . . . . . . . . : No IP Address. . . . . . . . . . . . : 192.168.2.50 Subnet Mask . . . . . . . . . . . : 255.255.255.0 IP Address. . . . . . . . . . . . : 192.168.2.3 Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : DNS Servers . . . . . . . . . . . : 192.168.2.3 Primary WINS Server . . . . . . . : 192.168.2.3 Ethernet adapter WAN: Connection-specific DNS Suffix . : gateway.2wire.net Description . . . . . . . . . . . : Broadcom NetXtreme Gigabit Ethernet #2 Physical Address. . . . . . . . . : 00-11-43-D4-69-14 DHCP Enabled. . . . . . . . . . . : Yes Autoconfiguration Enabled . . . . : Yes IP Address. . . . . . . . . . . . : 81.123.144.21 Subnet Mask . . . . . . . . . . . : 255.255.255.252 Default Gateway . . . . . . . . . : 81.123.144.22 DHCP Server . . . . . . . . . . . : 10.0.0.1 DNS Servers . . . . . . . . . . . : 10.0.0.1 Primary WINS Server . . . . . . . : 192.168.2.3 NetBIOS over Tcpip. . . . . . . . : Disabled Lease Obtained. . . . . . . . . . : 25 February 2011 22:56:59 Lease Expires . . . . . . . . . . : 25 February 2011 23:06:59 C:\Documents and Settings\Administrator>ping 192.168.2.11 Pinging 192.168.2.11 with 32 bytes of data: Reply from 192.168.2.11: bytes=32 time<1ms TTL=128 Reply from 192.168.2.11: bytes=32 time<1ms TTL=128 Reply from 192.168.2.11: bytes=32 time<1ms TTL=128 Reply from 192.168.2.11: bytes=32 time<1ms TTL=128

    Read the article

  • vmware network installation problem

    - by shantanu
    After installation from vmware_bunddle it shows network device error during configuration(First run). Log File: 2012-04-03T20:01:24.881+06:00| vthread-3| I120: Log for VMware Workstation pid=5766 version=8.0.2 build=build-591240 option=Release 2012-04-03T20:01:24.881+06:00| vthread-3| I120: The process is 64-bit. 2012-04-03T20:01:24.881+06:00| vthread-3| I120: Host codepage=UTF-8 encoding=UTF-8 2012-04-03T20:01:24.881+06:00| vthread-3| I120: Host is Linux 3.2.0-19-generic Ubuntu precise (development branch) 2012-04-03T20:01:24.880+06:00| vthread-3| I120: Msg_Reset: 2012-04-03T20:01:24.880+06:00| vthread-3| I120: [msg.dictionary.load.openFailed] Cannot open file "/usr/lib/vmware/settings": No such file or directory. 2012-04-03T20:01:24.880+06:00| vthread-3| I120: ---------------------------------------- 2012-04-03T20:01:24.880+06:00| vthread-3| I120: PREF Optional preferences file not found at /usr/lib/vmware/settings. Using default values. 2012-04-03T20:01:24.880+06:00| vthread-3| I120: Msg_Reset: 2012-04-03T20:01:24.880+06:00| vthread-3| I120: [msg.dictionary.load.openFailed] Cannot open file "/root/.vmware/config": No such file or directory. 2012-04-03T20:01:24.880+06:00| vthread-3| I120: ---------------------------------------- 2012-04-03T20:01:24.880+06:00| vthread-3| I120: PREF Optional preferences file not found at /root/.vmware/config. Using default values. 2012-04-03T20:01:24.880+06:00| vthread-3| I120: Msg_Reset: 2012-04-03T20:01:24.880+06:00| vthread-3| I120: [msg.dictionary.load.openFailed] Cannot open file "/root/.vmware/preferences": No such file or directory. 2012-04-03T20:01:24.880+06:00| vthread-3| I120: ---------------------------------------- 2012-04-03T20:01:24.881+06:00| vthread-3| I120: PREF Failed to load user preferences. 2012-04-03T20:01:24.881+06:00| vthread-3| W110: Logging to /tmp/vmware-root/modconfig-5766.log 2012-04-03T20:01:25.200+06:00| vthread-3| I120: modconf query interface initialized 2012-04-03T20:01:25.201+06:00| vthread-3| I120: modconf library initialized 2012-04-03T20:01:25.269+06:00| vthread-3| I120: Your GCC version: 4.6 2012-04-03T20:01:25.278+06:00| vthread-3| I120: Validating path /lib/modules/preferred/build/include for kernel release 3.2.0-19-generic 2012-04-03T20:01:25.278+06:00| vthread-3| I120: Failed to find /lib/modules/preferred/build/include/linux/version.h 2012-04-03T20:01:25.278+06:00| vthread-3| I120: Failed version test: /lib/modules/preferred/build/include/linux/version.h not found. 2012-04-03T20:01:25.278+06:00| vthread-3| I120: Validating path /lib/modules/3.2.0-19-generic/build/include for kernel release 3.2.0-19-generic 2012-04-03T20:01:25.284+06:00| vthread-3| I120: Your GCC version: 4.6 2012-04-03T20:01:25.306+06:00| vthread-3| I120: Your GCC version: 4.6 2012-04-03T20:01:25.355+06:00| vthread-3| I120: Header path /lib/modules/3.2.0-19-generic/build/include for kernel release 3.2.0-19-generic is valid. 2012-04-03T20:01:25.355+06:00| vthread-3| I120: Validating path /lib/modules/3.2.0-19-generic/build/include for kernel release 3.2.0-19-generic 2012-04-03T20:01:25.362+06:00| vthread-3| I120: Your GCC version: 4.6 2012-04-03T20:01:25.383+06:00| vthread-3| I120: Your GCC version: 4.6 2012-04-03T20:01:25.434+06:00| vthread-3| I120: Header path /lib/modules/3.2.0-19-generic/build/include for kernel release 3.2.0-19-generic is valid. 2012-04-03T20:01:25.502+06:00| vthread-3| I120: Trying to find a suitable PBM set for kernel 3.2.0-19-generic. 
2012-04-03T20:01:25.507+06:00| vthread-3| I120: Trying to find a suitable PBM set for kernel 3.2.0-19-generic. 2012-04-03T20:01:25.511+06:00| vthread-3| I120: Trying to find a suitable PBM set for kernel 3.2.0-19-generic. 2012-04-03T20:01:25.516+06:00| vthread-3| I120: Trying to find a suitable PBM set for kernel 3.2.0-19-generic. 2012-04-03T20:01:25.521+06:00| vthread-3| I120: Trying to find a suitable PBM set for kernel 3.2.0-19-generic. 2012-04-03T20:01:25.561+06:00| vthread-3| I120: Trying to find a suitable PBM set for kernel 3.2.0-19-generic. 2012-04-03T20:01:25.566+06:00| vthread-3| I120: Trying to find a suitable PBM set for kernel 3.2.0-19-generic. 2012-04-03T20:01:25.571+06:00| vthread-3| I120: Trying to find a suitable PBM set for kernel 3.2.0-19-generic. 2012-04-03T20:01:25.576+06:00| vthread-3| I120: Trying to find a suitable PBM set for kernel 3.2.0-19-generic. 2012-04-03T20:01:25.581+06:00| vthread-3| I120: Trying to find a suitable PBM set for kernel 3.2.0-19-generic. 2012-04-03T20:01:25.586+06:00| vthread-3| I120: Validating path /lib/modules/preferred/build/include for kernel release 3.2.0-19-generic 2012-04-03T20:01:25.586+06:00| vthread-3| I120: Failed to find /lib/modules/preferred/build/include/linux/version.h 2012-04-03T20:01:25.586+06:00| vthread-3| I120: Failed version test: /lib/modules/preferred/build/include/linux/version.h not found. 2012-04-03T20:01:25.586+06:00| vthread-3| I120: Validating path /lib/modules/3.2.0-19-generic/build/include for kernel release 3.2.0-19-generic 2012-04-03T20:01:25.593+06:00| vthread-3| I120: Your GCC version: 4.6 2012-04-03T20:01:25.614+06:00| vthread-3| I120: Your GCC version: 4.6 2012-04-03T20:01:25.663+06:00| vthread-3| I120: Header path /lib/modules/3.2.0-19-generic/build/include for kernel release 3.2.0-19-generic is valid. 2012-04-03T20:01:25.740+06:00| vthread-3| I120: Trying to find a suitable PBM set for kernel 3.2.0-19-generic. 2012-04-03T20:01:25.747+06:00| vthread-3| I120: Trying to find a suitable PBM set for kernel 3.2.0-19-generic. 2012-04-03T20:01:25.752+06:00| vthread-3| I120: Trying to find a suitable PBM set for kernel 3.2.0-19-generic. 2012-04-03T20:01:25.757+06:00| vthread-3| I120: Trying to find a suitable PBM set for kernel 3.2.0-19-generic. 2012-04-03T20:01:25.762+06:00| vthread-3| I120: Trying to find a suitable PBM set for kernel 3.2.0-19-generic. 2012-04-03T20:01:25.767+06:00| vthread-3| I120: Validating path /lib/modules/preferred/build/include for kernel release 3.2.0-19-generic 2012-04-03T20:01:25.767+06:00| vthread-3| I120: Failed to find /lib/modules/preferred/build/include/linux/version.h 2012-04-03T20:01:25.767+06:00| vthread-3| I120: Failed version test: /lib/modules/preferred/build/include/linux/version.h not found. 2012-04-03T20:01:25.767+06:00| vthread-3| I120: Validating path /lib/modules/3.2.0-19-generic/build/include for kernel release 3.2.0-19-generic 2012-04-03T20:01:25.772+06:00| vthread-3| I120: Your GCC version: 4.6 2012-04-03T20:01:25.792+06:00| vthread-3| I120: Your GCC version: 4.6 2012-04-03T20:01:25.843+06:00| vthread-3| I120: Header path /lib/modules/3.2.0-19-generic/build/include for kernel release 3.2.0-19-generic is valid. 2012-04-03T20:01:26.838+06:00| vthread-3| I120: Trying to find a suitable PBM set for kernel 3.2.0-19-generic. 2012-04-03T20:01:26.848+06:00| vthread-3| I120: Trying to find a suitable PBM set for kernel 3.2.0-19-generic. 2012-04-03T20:01:26.853+06:00| vthread-3| I120: Trying to find a suitable PBM set for kernel 3.2.0-19-generic. 
2012-04-03T20:01:26.858+06:00| vthread-3| I120: Trying to find a suitable PBM set for kernel 3.2.0-19-generic. 2012-04-03T20:01:26.863+06:00| vthread-3| I120: Trying to find a suitable PBM set for kernel 3.2.0-19-generic. 2012-04-03T20:01:28.460+06:00| vthread-3| I120: Trying to find a suitable PBM set for kernel 3.2.0-19-generic. 2012-04-03T20:01:28.460+06:00| vthread-3| I120: Validating path /lib/modules/3.2.0-19-generic/build/include for kernel release 3.2.0-19-generic 2012-04-03T20:01:28.466+06:00| vthread-3| I120: Your GCC version: 4.6 2012-04-03T20:01:28.488+06:00| vthread-3| I120: Your GCC version: 4.6 2012-04-03T20:01:28.542+06:00| vthread-3| I120: Header path /lib/modules/3.2.0-19-generic/build/include for kernel release 3.2.0-19-generic is valid. 2012-04-03T20:01:28.542+06:00| vthread-3| I120: Building module vmmon. 2012-04-03T20:01:28.553+06:00| vthread-3| I120: Extracting the sources of the vmmon module. 2012-04-03T20:01:28.615+06:00| vthread-3| I120: Building module with command: /usr/bin/make -j -C /tmp/vmware-root/modules/vmmon-only auto-build SUPPORT_SMP=1 HEADER_DIR=/lib/modules/3.2.0-19-generic/build/include CC=/usr/bin/gcc GREP=/usr/bin/make IS_GCC_3=no VMCCVER=4.6 2012-04-03T20:01:36.499+06:00| vthread-3| I120: Installing module vmmon from /tmp/vmware-root/modules/vmmon.o to /lib/modules/3.2.0-19-generic/misc. 2012-04-03T20:01:36.507+06:00| vthread-3| I120: Registering file: /usr/lib/vmware-installer/2.0/vmware-installer --register-file vmware-vmx regular /lib/modules/3.2.0-19-generic/misc/vmmon.ko 2012-04-03T20:01:58.314+06:00| vthread-3| I120: Trying to find a suitable PBM set for kernel 3.2.0-19-generic. 2012-04-03T20:01:58.315+06:00| vthread-3| I120: Validating path /lib/modules/3.2.0-19-generic/build/include for kernel release 3.2.0-19-generic 2012-04-03T20:01:58.336+06:00| vthread-3| I120: Your GCC version: 4.6 2012-04-03T20:01:58.379+06:00| vthread-3| I120: Your GCC version: 4.6 2012-04-03T20:01:58.431+06:00| vthread-3| I120: Header path /lib/modules/3.2.0-19-generic/build/include for kernel release 3.2.0-19-generic is valid. 2012-04-03T20:01:58.431+06:00| vthread-3| I120: Building module vmnet. 2012-04-03T20:01:58.431+06:00| vthread-3| I120: Extracting the sources of the vmnet module. 2012-04-03T20:01:58.541+06:00| vthread-3| I120: Building module with command: /usr/bin/make -j -C /tmp/vmware-root/modules/vmnet-only auto-build SUPPORT_SMP=1 HEADER_DIR=/lib/modules/3.2.0-19-generic/build/include CC=/usr/bin/gcc GREP=/usr/bin/make IS_GCC_3=no VMCCVER=4.6 2012-04-03T20:02:05.973+06:00| vthread-3| I120: Failed to compile module vmnet! 2012-04-03T20:02:05.984+06:00| vthread-3| I120: Trying to find a suitable PBM set for kernel 3.2.0-19-generic. 2012-04-03T20:02:05.984+06:00| vthread-3| I120: Validating path /lib/modules/3.2.0-19-generic/build/include for kernel release 3.2.0-19-generic 2012-04-03T20:02:05.990+06:00| vthread-3| I120: Your GCC version: 4.6 2012-04-03T20:02:06.015+06:00| vthread-3| I120: Your GCC version: 4.6 2012-04-03T20:02:06.067+06:00| vthread-3| I120: Header path /lib/modules/3.2.0-19-generic/build/include for kernel release 3.2.0-19-generic is valid. 2012-04-03T20:02:06.067+06:00| vthread-3| I120: Building module vmblock. 2012-04-03T20:02:06.067+06:00| vthread-3| I120: Extracting the sources of the vmblock module. 
2012-04-03T20:02:06.141+06:00| vthread-3| I120: Building module with command: /usr/bin/make -j -C /tmp/vmware-root/modules/vmblock-only auto-build SUPPORT_SMP=1 HEADER_DIR=/lib/modules/3.2.0-19-generic/build/include CC=/usr/bin/gcc GREP=/usr/bin/make IS_GCC_3=no VMCCVER=4.6 2012-04-03T20:02:13.531+06:00| vthread-3| I120: Installing module vmblock from /tmp/vmware-root/modules/vmblock.o to /lib/modules/3.2.0-19-generic/misc. 2012-04-03T20:02:13.532+06:00| vthread-3| I120: Registering file: /usr/lib/vmware-installer/2.0/vmware-installer --register-file vmware-vmx regular /lib/modules/3.2.0-19-generic/misc/vmblock.ko 2012-04-03T20:02:19.090+06:00| vthread-3| I120: Trying to find a suitable PBM set for kernel 3.2.0-19-generic. 2012-04-03T20:02:19.090+06:00| vthread-3| I120: Validating path /lib/modules/3.2.0-19-generic/build/include for kernel release 3.2.0-19-generic 2012-04-03T20:02:19.097+06:00| vthread-3| I120: Your GCC version: 4.6 2012-04-03T20:02:19.117+06:00| vthread-3| I120: Your GCC version: 4.6 2012-04-03T20:02:19.173+06:00| vthread-3| I120: Header path /lib/modules/3.2.0-19-generic/build/include for kernel release 3.2.0-19-generic is valid. 2012-04-03T20:02:19.173+06:00| vthread-3| I120: Building module vmci. 2012-04-03T20:02:19.174+06:00| vthread-3| I120: Extracting the sources of the vmci module. 2012-04-03T20:02:19.284+06:00| vthread-3| I120: Building module with command: /usr/bin/make -j -C /tmp/vmware-root/modules/vmci-only auto-build SUPPORT_SMP=1 HEADER_DIR=/lib/modules/3.2.0-19-generic/build/include CC=/usr/bin/gcc GREP=/usr/bin/make IS_GCC_3=no VMCCVER=4.6 2012-04-03T20:02:28.525+06:00| vthread-3| I120: Installing module vmci from /tmp/vmware-root/modules/vmci.o to /lib/modules/3.2.0-19-generic/misc. 2012-04-03T20:02:28.526+06:00| vthread-3| I120: Registering file: /usr/lib/vmware-installer/2.0/vmware-installer --register-file vmware-vmx regular /lib/modules/3.2.0-19-generic/misc/vmci.ko 2012-04-03T20:02:31.760+06:00| vthread-3| I120: Trying to find a suitable PBM set for kernel 3.2.0-19-generic. 2012-04-03T20:02:31.760+06:00| vthread-3| I120: Validating path /lib/modules/3.2.0-19-generic/build/include for kernel release 3.2.0-19-generic 2012-04-03T20:02:31.766+06:00| vthread-3| I120: Your GCC version: 4.6 2012-04-03T20:02:31.786+06:00| vthread-3| I120: Your GCC version: 4.6 2012-04-03T20:02:31.838+06:00| vthread-3| I120: Header path /lib/modules/3.2.0-19-generic/build/include for kernel release 3.2.0-19-generic is valid. 2012-04-03T20:02:31.838+06:00| vthread-3| I120: Building module vmci. 2012-04-03T20:02:31.839+06:00| vthread-3| I120: Extracting the sources of the vmci module. 2012-04-03T20:02:31.864+06:00| vthread-3| I120: Building module with command: /usr/bin/make -j -C /tmp/vmware-root/modules/vmci-only auto-build SUPPORT_SMP=1 HEADER_DIR=/lib/modules/3.2.0-19-generic/build/include CC=/usr/bin/gcc GREP=/usr/bin/make IS_GCC_3=no VMCCVER=4.6 2012-04-03T20:02:33.684+06:00| vthread-3| I120: Building module vsock. 2012-04-03T20:02:33.685+06:00| vthread-3| I120: Extracting the sources of the vsock module. 2012-04-03T20:02:33.809+06:00| vthread-3| I120: Building module with command: /usr/bin/make -j -C /tmp/vmware-root/modules/vsock-only auto-build SUPPORT_SMP=1 HEADER_DIR=/lib/modules/3.2.0-19-generic/build/include CC=/usr/bin/gcc GREP=/usr/bin/make IS_GCC_3=no VMCCVER=4.6 2012-04-03T20:02:41.050+06:00| vthread-3| I120: Installing module vsock from /tmp/vmware-root/modules/vsock.o to /lib/modules/3.2.0-19-generic/misc. 
2012-04-03T20:02:41.051+06:00| vthread-3| I120: Registering file: /usr/lib/vmware-installer/2.0/vmware-installer --register-file vmware-vmx regular /lib/modules/3.2.0-19-generic/misc/vsock.ko 2012-04-03T20:03:02.757+06:00| vthread-3| I120: Trying to find a suitable PBM set for kernel 3.2.0-19-generic. 2012-04-03T20:03:02.762+06:00| vthread-3| I120: Trying to find a suitable PBM set for kernel 3.2.0-19-generic. 2012-04-03T20:03:02.767+06:00| vthread-3| I120: Trying to find a suitable PBM set for kernel 3.2.0-19-generic. 2012-04-03T20:03:02.771+06:00| vthread-3| I120: Trying to find a suitable PBM set for kernel 3.2.0-19-generic. 2012-04-03T20:03:02.776+06:00| vthread-3| I120: Trying to find a suitable PBM set for kernel 3.2.0-19-generic. 2012-04-03T20:03:02.782+06:00| vthread-3| I120: Validating path /lib/modules/preferred/build/include for kernel release 3.2.0-19-generic 2012-04-03T20:03:02.782+06:00| vthread-3| I120: Failed to find /lib/modules/preferred/build/include/linux/version.h 2012-04-03T20:03:02.782+06:00| vthread-3| I120: Failed version test: /lib/modules/preferred/build/include/linux/version.h not found. 2012-04-03T20:03:02.782+06:00| vthread-3| I120: Validating path /lib/modules/3.2.0-19-generic/build/include for kernel release 3.2.0-19-generic 2012-04-03T20:03:02.790+06:00| vthread-3| I120: Your GCC version: 4.6 2012-04-03T20:03:02.814+06:00| vthread-3| I120: Your GCC version: 4.6 2012-04-03T20:03:02.865+06:00| vthread-3| I120: Header path /lib/modules/3.2.0-19-generic/build/include for kernel release 3.2.0-19-generic is valid. 2012-04-03T20:03:02.958+06:00| vthread-3| I120: Trying to find a suitable PBM set for kernel 3.2.0-19-generic. 2012-04-03T20:03:02.968+06:00| vthread-3| I120: Trying to find a suitable PBM set for kernel 3.2.0-19-generic. 2012-04-03T20:03:02.973+06:00| vthread-3| I120: Trying to find a suitable PBM set for kernel 3.2.0-19-generic. 2012-04-03T20:03:02.978+06:00| vthread-3| I120: Trying to find a suitable PBM set for kernel 3.2.0-19-generic. 2012-04-03T20:03:02.983+06:00| vthread-3| I120: Trying to find a suitable PBM set for kernel 3.2.0-19-generic. 2012-04-03T20:03:04.372+06:00| vthread-3| I120: Trying to find a suitable PBM set for kernel 3.2.0-19-generic. 2012-04-03T20:03:04.372+06:00| vthread-3| I120: Validating path /lib/modules/3.2.0-19-generic/build/include for kernel release 3.2.0-19-generic 2012-04-03T20:03:04.378+06:00| vthread-3| I120: Your GCC version: 4.6 2012-04-03T20:03:04.399+06:00| vthread-3| I120: Your GCC version: 4.6 2012-04-03T20:03:04.452+06:00| vthread-3| I120: Header path /lib/modules/3.2.0-19-generic/build/include for kernel release 3.2.0-19-generic is valid. 2012-04-03T20:03:04.452+06:00| vthread-3| I120: Building module vmmon. 2012-04-03T20:03:04.452+06:00| vthread-3| I120: Extracting the sources of the vmmon module. 2012-04-03T20:03:04.486+06:00| vthread-3| I120: Building module with command: /usr/bin/make -j -C /tmp/vmware-root/modules/vmmon-only auto-build SUPPORT_SMP=1 HEADER_DIR=/lib/modules/3.2.0-19-generic/build/include CC=/usr/bin/gcc GREP=/usr/bin/make IS_GCC_3=no VMCCVER=4.6 2012-04-03T20:03:05.976+06:00| vthread-3| I120: Installing module vmmon from /tmp/vmware-root/modules/vmmon.o to /lib/modules/3.2.0-19-generic/misc. 2012-04-03T20:03:05.977+06:00| vthread-3| I120: Registering file: /usr/lib/vmware-installer/2.0/vmware-installer --register-file vmware-vmx regular /lib/modules/3.2.0-19-generic/misc/vmmon.ko 2012-04-03T20:03:09.056+06:00| vthread-3| I120: Trying to find a suitable PBM set for kernel 3.2.0-19-generic. 
2012-04-03T20:03:09.057+06:00| vthread-3| I120: Validating path /lib/modules/3.2.0-19-generic/build/include for kernel release 3.2.0-19-generic 2012-04-03T20:03:09.065+06:00| vthread-3| I120: Your GCC version: 4.6 2012-04-03T20:03:09.090+06:00| vthread-3| I120: Your GCC version: 4.6 2012-04-03T20:03:09.142+06:00| vthread-3| I120: Header path /lib/modules/3.2.0-19-generic/build/include for kernel release 3.2.0-19-generic is valid. 2012-04-03T20:03:09.142+06:00| vthread-3| I120: Building module vmnet. 2012-04-03T20:03:09.142+06:00| vthread-3| I120: Extracting the sources of the vmnet module. 2012-04-03T20:03:09.169+06:00| vthread-3| I120: Building module with command: /usr/bin/make -j -C /tmp/vmware-root/modules/vmnet-only auto-build SUPPORT_SMP=1 HEADER_DIR=/lib/modules/3.2.0-19-generic/build/include CC=/usr/bin/gcc GREP=/usr/bin/make IS_GCC_3=no VMCCVER=4.6 2012-04-03T20:03:12.072+06:00| vthread-3| I120: Failed to compile module vmnet! 2012-04-03T20:03:12.090+06:00| vthread-3| I120: Trying to find a suitable PBM set for kernel 3.2.0-19-generic. 2012-04-03T20:03:12.090+06:00| vthread-3| I120: Validating path /lib/modules/3.2.0-19-generic/build/include for kernel release 3.2.0-19-generic 2012-04-03T20:03:12.098+06:00| vthread-3| I120: Your GCC version: 4.6 2012-04-03T20:03:12.121+06:00| vthread-3| I120: Your GCC version: 4.6 2012-04-03T20:03:12.179+06:00| vthread-3| I120: Header path /lib/modules/3.2.0-19-generic/build/include for kernel release 3.2.0-19-generic is valid. 2012-04-03T20:03:12.179+06:00| vthread-3| I120: Building module vmblock. 2012-04-03T20:03:12.179+06:00| vthread-3| I120: Extracting the sources of the vmblock module. 2012-04-03T20:03:12.205+06:00| vthread-3| I120: Building module with command: /usr/bin/make -j -C /tmp/vmware-root/modules/vmblock-only auto-build SUPPORT_SMP=1 HEADER_DIR=/lib/modules/3.2.0-19-generic/build/include CC=/usr/bin/gcc GREP=/usr/bin/make IS_GCC_3=no VMCCVER=4.6 2012-04-03T20:03:15.340+06:00| vthread-3| I120: Installing module vmblock from /tmp/vmware-root/modules/vmblock.o to /lib/modules/3.2.0-19-generic/misc. 2012-04-03T20:03:15.341+06:00| vthread-3| I120: Registering file: /usr/lib/vmware-installer/2.0/vmware-installer --register-file vmware-vmx regular /lib/modules/3.2.0-19-generic/misc/vmblock.ko 2012-04-03T20:03:18.451+06:00| vthread-3| I120: Trying to find a suitable PBM set for kernel 3.2.0-19-generic. 2012-04-03T20:03:18.451+06:00| vthread-3| I120: Validating path /lib/modules/3.2.0-19-generic/build/include for kernel release 3.2.0-19-generic 2012-04-03T20:03:18.457+06:00| vthread-3| I120: Your GCC version: 4.6 2012-04-03T20:03:18.480+06:00| vthread-3| I120: Your GCC version: 4.6 2012-04-03T20:03:18.531+06:00| vthread-3| I120: Header path /lib/modules/3.2.0-19-generic/build/include for kernel release 3.2.0-19-generic is valid. 2012-04-03T20:03:18.531+06:00| vthread-3| I120: Building module vmci. 2012-04-03T20:03:18.531+06:00| vthread-3| I120: Extracting the sources of the vmci module. 2012-04-03T20:03:18.569+06:00| vthread-3| I120: Building module with command: /usr/bin/make -j -C /tmp/vmware-root/modules/vmci-only auto-build SUPPORT_SMP=1 HEADER_DIR=/lib/modules/3.2.0-19-generic/build/include CC=/usr/bin/gcc GREP=/usr/bin/make IS_GCC_3=no VMCCVER=4.6 2012-04-03T20:03:19.787+06:00| vthread-3| I120: Installing module vmci from /tmp/vmware-root/modules/vmci.o to /lib/modules/3.2.0-19-generic/misc. 
2012-04-03T20:03:19.789+06:00| vthread-3| I120: Registering file: /usr/lib/vmware-installer/2.0/vmware-installer --register-file vmware-vmx regular /lib/modules/3.2.0-19-generic/misc/vmci.ko 2012-04-03T20:03:22.933+06:00| vthread-3| I120: Trying to find a suitable PBM set for kernel 3.2.0-19-generic. 2012-04-03T20:03:22.933+06:00| vthread-3| I120: Validating path /lib/modules/3.2.0-19-generic/build/include for kernel release 3.2.0-19-generic 2012-04-03T20:03:22.939+06:00| vthread-3| I120: Your GCC version: 4.6 2012-04-03T20:03:22.959+06:00| vthread-3| I120: Your GCC version: 4.6 2012-04-03T20:03:23.009+06:00| vthread-3| I120: Header path /lib/modules/3.2.0-19-generic/build/include for kernel release 3.2.0-19-generic is valid. 2012-04-03T20:03:23.009+06:00| vthread-3| I120: Building module vmci. 2012-04-03T20:03:23.009+06:00| vthread-3| I120: Extracting the sources of the vmci module. 2012-04-03T20:03:23.034+06:00| vthread-3| I120: Building module with command: /usr/bin/make -j -C /tmp/vmware-root/modules/vmci-only auto-build SUPPORT_SMP=1 HEADER_DIR=/lib/modules/3.2.0-19-generic/build/include CC=/usr/bin/gcc GREP=/usr/bin/make IS_GCC_3=no VMCCVER=4.6 2012-04-03T20:03:24.227+06:00| vthread-3| I120: Building module vsock. 2012-04-03T20:03:24.227+06:00| vthread-3| I120: Extracting the sources of the vsock module. 2012-04-03T20:03:24.254+06:00| vthread-3| I120: Building module with command: /usr/bin/make -j -C /tmp/vmware-root/modules/vsock-only auto-build SUPPORT_SMP=1 HEADER_DIR=/lib/modules/3.2.0-19-generic/build/include CC=/usr/bin/gcc GREP=/usr/bin/make IS_GCC_3=no VMCCVER=4.6 2012-04-03T20:03:26.125+06:00| vthread-3| I120: Installing module vsock from /tmp/vmware-root/modules/vsock.o to /lib/modules/3.2.0-19-generic/misc. 2012-04-03T20:03:26.126+06:00| vthread-3| I120: Registering file: /usr/lib/vmware-installer/2.0/vmware-installer --register-file vmware-vmx regular /lib/modules/3.2.0-19-generic/misc/vsock.ko My System details: cpu : AMD APU dual core E450 ram: 2GB ubuntu: 12.04 (64 bit) I have downloaded Latest vmware version. Thanks in advance

    Read the article

  • No sound after video card replaced (AMD Radeon HD 7770)

    - by Sean
    Issue: no sound.
System: dual boot, Windows 7 (sda) and Ubuntu 12.04 (sdb), 2 hard drives, Dell XPS 730.
Video card: AMD Radeon HD 7770, Diamond Multimedia.
Sound card: Creative Labs SB X-Fi.
Additional info: My sound used to work. Then, my old video card (NVIDIA GeForce 280) died. I bought and installed a new video card: Radeon HD 7770. After this, my sound no longer worked in Ubuntu (Win7 audio still works). Everything else in Ubuntu, such as video, works fine. I suspect it has something to do with the fact that the Radeon card includes sound capability.
Problem Details:
1. If I click on System Settings - Sound, the panel freezes and stops responding indefinitely.
2. The sound volume icon at the top of the screen (by the clock) shows 3 dashes beside it "---", and an empty drop-down box shows if I click on it.
3. (Possibly related to 1.) When I reboot my machine, I get the message: "gnome settings daemon not responding". I have to force the reboot.
I reinstalled Ubuntu (preserving my home directory) and the problem persists.
Diagnostics info: Following the procedure outlined here: https://help.ubuntu.com/community/SoundTroubleshooting
The following is a list of terminal commands and their output:

$ aplay -l
List of PLAYBACK Hardware Devices
There is no listing beyond that, and the command freezes until I hit control-c.

$ lspci -v | grep -A7 -i "audio"
00:0f.1 Audio device: NVIDIA Corporation MCP55 High Definition Audio (rev a2)
Subsystem: Dell Device 0224
Flags: bus master, 66MHz, fast devsel, latency 0, IRQ 23
Memory at dfff0000 (32-bit, non-prefetchable) [size=16K]
Capabilities: <access denied>
Kernel driver in use: snd_hda_intel
Kernel modules: snd-hda-intel
--
01:00.1 Audio device: Advanced Micro Devices [AMD] nee ATI Device aab0
Subsystem: Diamond Multimedia Systems Device aab0
Flags: bus master, fast devsel, latency 0, IRQ 43
Memory at dfefc000 (64-bit, non-prefetchable) [size=16K]
Capabilities: <access denied>
Kernel driver in use: snd_hda_intel
Kernel modules: snd-hda-intel
--
03:0a.0 Audio device: Creative Labs SB X-Fi
Subsystem: Creative Labs Device 6002
Flags: bus master, medium devsel, latency 32, IRQ 18
Memory at dbff4000 (32-bit, non-prefetchable) [size=16K]
Memory at dbc00000 (64-bit, non-prefetchable) [size=2M]
Memory at d4000000 (64-bit, non-prefetchable) [size=64M]
I/O ports at 8c00 [size=32]
Capabilities: <access denied>

Notice the Diamond Multimedia Systems device - that seems to be my video card's sound. My video card is Diamond Multimedia. Also there's the weird NVIDIA device in there. That must either be a remnant of my now removed NVIDIA graphics card, or else some kind of on-board thing. Not sure which.

$ killall pulseaudio
This allows me to open System Settings - Sound. But the "Test Sound" button makes no sound, and the output volume + mute controls are greyed / disabled at 0 volume. It also allows me to click on the sound control in the "task bar" (beside the clock), and a volume slider drops down, but it is disabled / greyed at 0 volume.
$ find /lib/modules/uname -r | grep snd /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-88pm860x.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-tlv320aic3x.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8900.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8978.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-tlv320dac33.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm9090.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-sta32x.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-max98088.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-max9850.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-rt5631.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8903.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8580.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8523.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-max9877.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-ads117x.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8955.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8804.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-sgtl5000.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8750.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm2000.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-tlv320aic32x4.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-ak4642.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-ad193x.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8753.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-ak4535.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8985.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8350.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-dfbmcs320.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-cs42l51.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-tlv320aic26.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8737.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-uda1380.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8776.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8995.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-tpa6130a2.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8727.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm5100.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8991.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8510.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-jz4740-codec.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8400.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-lm4857.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8960.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-alc5623.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-cs4270.ko 
/lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-tlv320aic23.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8993.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8961.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8940.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-uda134x.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-ad1836.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8994.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8782.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-cs4271.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8974.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8983.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8962.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-ak4641.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm-hubs.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8971.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8996.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wl1273.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-adav80x.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-spdif.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-pcm3008.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-cx20442.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-ak4671.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8711.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-ad73311.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-max98095.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm9081.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8741.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm1250-ev1.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8988.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-adau1373.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8731.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-l3.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-ssm2602.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-da7210.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-ak4104.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8904.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8728.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8770.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/codecs/snd-soc-wm8990.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/soc/snd-soc-core.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/synth/emux/snd-emux-synth.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/synth/snd-util-mem.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/core/snd-hrtimer.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/core/snd-hwdep.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/core/snd-pcm.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/core/snd-rawmidi.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/core/oss/snd-mixer-oss.ko 
/lib/modules/3.2.0-29-generic-pae/kernel/sound/core/snd-page-alloc.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/core/seq/snd-seq-midi.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/core/seq/snd-seq-dummy.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/core/seq/snd-seq-virmidi.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/core/seq/snd-seq-device.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/core/seq/snd-seq-midi-event.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/core/seq/snd-seq.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/core/seq/snd-seq-midi-emul.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/core/snd.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/core/snd-timer.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pcmcia/pdaudiocf/snd-pdaudiocf.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pcmcia/vx/snd-vxpocket.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/usb/6fire/snd-usb-6fire.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/usb/snd-usbmidi-lib.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/usb/caiaq/snd-usb-caiaq.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/usb/usx2y/snd-usb-usx2y.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/usb/usx2y/snd-usb-us122l.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/usb/snd-usb-audio.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/usb/misc/snd-ua101.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/drivers/opl3/snd-opl3-synth.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/drivers/opl3/snd-opl3-lib.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/drivers/opl4/snd-opl4-lib.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/drivers/opl4/snd-opl4-synth.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/drivers/snd-portman2x4.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/drivers/snd-serial-u16550.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/drivers/snd-mts64.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/drivers/snd-mtpav.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/drivers/mpu401/snd-mpu401.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/drivers/mpu401/snd-mpu401-uart.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/drivers/vx/snd-vx-lib.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/drivers/snd-dummy.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/drivers/snd-aloop.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/drivers/pcsp/snd-pcsp.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/drivers/snd-virmidi.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/firewire/snd-firewire-lib.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/firewire/snd-firewire-speakers.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/firewire/snd-isight.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/i2c/snd-tea6330t.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/i2c/other/snd-tea575x-tuner.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/i2c/other/snd-ak4113.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/i2c/other/snd-pt2258.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/i2c/other/snd-ak4117.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/i2c/other/snd-ak4xxx-adda.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/i2c/other/snd-ak4114.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/i2c/snd-cs8427.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/i2c/snd-i2c.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/emu10k1/snd-emu10k1-synth.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/emu10k1/snd-emu10k1.ko 
/lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/emu10k1/snd-emu10k1x.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/korg1212/snd-korg1212.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/au88x0/snd-au8830.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/au88x0/snd-au8820.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/au88x0/snd-au8810.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/aw2/snd-aw2.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/snd-sis7019.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/snd-ens1371.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/vx222/snd-vx222.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/snd-via82xx.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/snd-es1968.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/snd-atiixp-modem.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/snd-cs4281.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/snd-sonicvibes.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/snd-intel8x0.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/snd-maestro3.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/ac97/snd-ac97-codec.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/snd-es1938.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/snd-fm801.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/nm256/snd-nm256.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/hda/snd-hda-codec-realtek.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/hda/snd-hda-codec-cmedia.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/hda/snd-hda-codec-conexant.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/hda/snd-hda-intel.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/hda/snd-hda-codec-analog.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/hda/snd-hda-codec-hdmi.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/hda/snd-hda-codec-idt.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/hda/snd-hda-codec-ca0110.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/hda/snd-hda-codec-cirrus.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/hda/snd-hda-codec-via.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/hda/snd-hda-codec-ca0132.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/hda/snd-hda-codec-si3054.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/hda/snd-hda-codec.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/riptide/snd-riptide.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/snd-ens1370.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/snd-als4000.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/snd-intel8x0m.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/ca0106/snd-ca0106.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/snd-cs5530.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/cs5535audio/snd-cs5535audio.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/snd-rme32.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/ymfpci/snd-ymfpci.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/ctxfi/snd-ctxfi.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/snd-azt3328.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/cs46xx/snd-cs46xx.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/lx6464es/snd-lx6464es.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/ice1712/snd-ice1712.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/ice1712/snd-ice17xx-ak4xxx.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/ice1712/snd-ice1724.ko 
/lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/mixart/snd-mixart.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/ali5451/snd-ali5451.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/lola/snd-lola.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/oxygen/snd-oxygen-lib.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/oxygen/snd-oxygen.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/oxygen/snd-virtuoso.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/snd-via82xx-modem.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/pcxhr/snd-pcxhr.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/echoaudio/snd-indigo.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/echoaudio/snd-echo3g.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/echoaudio/snd-mona.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/echoaudio/snd-layla20.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/echoaudio/snd-gina20.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/echoaudio/snd-layla24.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/echoaudio/snd-mia.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/echoaudio/snd-indigoiox.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/echoaudio/snd-darla24.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/echoaudio/snd-indigoio.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/echoaudio/snd-indigodjx.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/echoaudio/snd-gina24.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/echoaudio/snd-darla20.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/echoaudio/snd-indigodj.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/snd-cmipci.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/asihpi/snd-asihpi.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/snd-ad1889.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/rme9652/snd-rme9652.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/rme9652/snd-hdspm.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/rme9652/snd-hdsp.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/trident/snd-trident.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/snd-atiixp.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/snd-als300.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/snd-bt87x.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/pci/snd-rme96.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/opti9xx/snd-miro.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/opti9xx/snd-opti92x-ad1848.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/opti9xx/snd-opti93x.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/opti9xx/snd-opti92x-cs4231.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/gus/snd-gusextreme.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/gus/snd-interwave.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/gus/snd-gusmax.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/gus/snd-interwave-stb.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/gus/snd-gus-lib.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/gus/snd-gusclassic.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/sb/snd-emu8000-synth.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/sb/snd-sb16-dsp.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/sb/snd-sbawe.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/sb/snd-sb8-dsp.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/sb/snd-sb-common.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/sb/snd-sb16.ko 
/lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/sb/snd-sb16-csp.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/sb/snd-sb8.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/sb/snd-jazz16.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/snd-es18xx.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/snd-azt2320.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/snd-cmi8330.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/snd-als100.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/msnd /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/msnd/snd-msnd-classic.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/msnd/snd-msnd-pinnacle.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/msnd/snd-msnd-lib.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/cs423x/snd-cs4231.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/cs423x/snd-cs4236.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/es1688/snd-es1688-lib.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/es1688/snd-es1688.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/snd-adlib.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/ad1848/snd-ad1848.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/ad1816a/snd-ad1816a.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/galaxy/snd-azt1605.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/galaxy/snd-azt2316.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/wavefront/snd-wavefront.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/wss/snd-wss-lib.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/snd-sc6000.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/snd-sscape.ko /lib/modules/3.2.0-29-generic-pae/kernel/sound/isa/snd-opl3sa2.ko

    Read the article

  • Tips on Migrating from AquaLogic .NET Accelerator to WebCenter WSRP Producer for .NET

    - by user647124
    This year I embarked on a journey to migrate a group of ASP.NET web applications developed to integrate with WebLogic Portal 9.2 via the AquaLogic® Interaction .NET Application Accelerator 1.0 to instead use the Oracle WebCenter WSRP Producer for .NET and integrate with WebLogic Portal 10.3.4. It has been a very winding path and this blog entry is intended to share both the lessons learned and the relevant approaches that led to those learnings. Like most journeys of discovery, it was not a direct path, and there are notes to let you know when it is practical to skip a section if you are in a hurry to get from here to there.

For the Curious
From the perspective of necessity, this section would be better at the end. If it were there, though, it would probably be read by far fewer people, including those that are actually interested in these types of sections. Those in a hurry may skip past and be none the worse for it in dealing with the hands-on bits of performing a migration from .NET Accelerator to WSRP Producer. For others who want to talk about why they did what they did after they did it, or just want to know for themselves, enjoy.

A Brief (and edited) History of the WSRP for .NET Technologies (as Relevant to this Post)
Note: This section is for those who are curious about why the migration path is not as simple as many other Oracle technologies. You can skip this section in its entirety and still be just as competent in performing a migration as if you had read it.

The currently deployed architecture that was to be migrated and upgraded achieved initial integration between .NET and J2EE over the WSRP protocol through the use of the AquaLogic Interaction .NET Application Accelerator. The .NET Accelerator allowed the applications that were written in ASP.NET and deployed on a Microsoft Internet Information Server (IIS) to interact with a WebLogic Portal application deployed on a WebLogic (J2EE application) Server (both version 9.2, the state of the art at the time of its creation). At the time this architectural decision for the application was made, both the AquaLogic and WebLogic brands were owned by BEA Systems. The AquaLogic brand included products acquired by BEA through the acquisition of Plumtree, whose flagship product was a portal platform available in both J2EE and .NET versions. As part of this dual technology support an adapter was created to facilitate the use of WSRP as a communication protocol where customers wished to integrate components from both versions of the Plumtree portal. The adapter evolved over several product generations to include a broad array of both standard and proprietary WSRP integration capabilities. Later, BEA Systems was acquired by Oracle. Over the course of several years Oracle has acquired a large number of portal applications and has taken the strategic direction to migrate users of these myriad (and formerly competitive) products to the Oracle WebCenter technology stack. As part of Oracle's strategic technology roadmap, older portal products are being scheduled for end of life, including the portal products that were part of the BEA acquisition. The .NET Accelerator has been modified over a very long period of time with features driven by users of that product and developed under three different vendors (each a direct competitor in the same solution space prior to merger).
The Oracle WebCenter WSRP Producer for .NET was introduced much more recently, with the key objective of specifically addressing the needs of WebCenter customers developing solutions accessible through both J2EE and .NET platforms utilizing the WSRP specifications. The Oracle Product Development Team also provides these insights on the drivers for developing the WSRP Producer:
*****************************************
Support for ASP.NET AJAX. Controls using the ASP.NET AJAX script manager do not function properly in the Application Accelerator for .NET.
Support 2-way SSL in WLP. This was not possible with the proxy/bridge set up in the existing Application Accelerator for .NET.
Allow developers to code portlets (Web Parts) using the .NET framework rather than a proprietary framework. Developers had to use the Application Accelerator for .NET plug-ins to Visual Studio to manage preferences and profile data. This is now replaced with the .NET Framework Personalization (for preferences) and Profile providers.
The WSRP Producer for .NET was created as a new way of developing .NET portlets. It was never designed to be an upgrade path for the Application Accelerator for .NET. .NET developers would create new .NET portlets with the WSRP Producer for .NET and leave any existing .NET portlets running in the Application Accelerator for .NET.
*****************************************
The advantage to creating a new solution for WSRP is a product that is far easier for Oracle to maintain and support, which in turn improves quality, reliability and maintainability for their customers. No changes to J2EE applications consuming the WSRP portlets previously rendered by the .NET Accelerator are required to migrate from the AquaLogic WSRP solution. For some customers using the .NET Accelerator the challenge is adapting their current .NET applications to work with the WSRP Producer (or any other WSRP adapter, as they are proprietary by nature). Part of this adaptation is the need to deploy the .NET applications as children of the WSRP Producer web application, which serves as the root.

Differences between .NET Accelerator and WSRP Producer
Note: This section is for those who are curious about why the migration is not as pluggable as something such as changing security providers in WebLogic Server. You can skip this section in its entirety and still be just as competent in performing a migration as if you had read it.

The basic terminology used to describe the participating applications is the same whether applied to the .NET Accelerator or the WSRP Producer: Producer and Consumer. In both cases the .NET application serves as what is referred to in a WSRP environment as the Producer. The difference lies in how the two adapters create the WSRP translation of the .NET application. The .NET Accelerator, as the name implies, is meant to serve as a quick way of adding WSRP capability to a .NET application. As such, at a high level, the .NET Accelerator behaves as a proxy for requests between the .NET application and the WSRP Consumer. A WSRP request is sent from the Consumer to the .NET Accelerator; the .NET Accelerator transforms this request into an ASP.NET request, receives the response, and then transforms the response into a WSRP response. The .NET Accelerator is deployed as a stand-alone application on IIS. The WSRP Producer is deployed as a parent application on IIS, and all ASP.NET modules that will be made available over WSRP are deployed as children of the WSRP Producer application. In this manner, the WSRP Producer acts more as a request filter than a proxy in the WSRP transactions between Producer and Consumer.

Highly Recommended
Enabling Logging
Note: You can skip this section now, but you will most likely want to come back to it later, so why not just read it now?
Logging is very helpful in tracking down the causes of any anomalies during testing of migrated portlets. To enable the WSRP Producer logging, update the Application_Start method in the Global.asax.cs for your .NET application by adding log4net.Config.XmlConfigurator.Configure();
IIS logs will usually (in a standard configuration) be in a sub folder under C:\WINDOWS\system32\LogFiles\W3SVC. WSRP Producer logs will be found at C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault\Logs\WSRPProducer.log. InputTrace.webinfo and OutputTrace.webinfo are located under C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault and can be useful in debugging issues related to markup transformations.
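For reference, a minimal sketch of the Global.asax.cs change described above: only the Configure() call comes from this article; the class and handler signature are the stock ASP.NET ones, and the namespace name is purely illustrative.

using System;
using System.Web;

namespace LegacyWebSite // hypothetical namespace for the migrated application
{
    public class Global : HttpApplication
    {
        protected void Application_Start(object sender, EventArgs e)
        {
            // Load the log4net configuration so WSRP Producer activity is
            // written to the WSRPProducer.log location noted above.
            log4net.Config.XmlConfigurator.Configure();
        }
    }
}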
Things You Must Do
Merge Web.Config
Note: If you have been skipping all the sections that you can, now is the time to stop and pay attention.
Because the existing .NET application will become a sub-application to the WSRP Producer, you will want to merge the required settings from the existing Web.Config into the one in the WSRP Producer.

Use the WSRP Producer Master Page
The Master Page installed for the WSRP Producer provides common hidden form fields and JavaScripts to facilitate portlet instance management and display configuration when the child page is being rendered over WSRP. You add the Master Page by including it in the <%@ Page %> declaration with MasterPageFile="~/portlets/Resources/MasterPages/WSRP.Master". You then replace:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" >
<HTML>
<HEAD>
with
<asp:Content ID="ContentHead1" ContentPlaceHolderID="wsrphead" Runat="Server">
and
</HEAD>
<body>
<form id="theForm" method="post" runat="server">
with
</asp:Content>
<asp:Content ID="ContentBody1" ContentPlaceHolderID="Main" Runat="Server">
and finally
</form>
</body>
</HTML>
with
</asp:Content>
In the event you already use Master Pages, adapt your existing Master Pages to be sub-masters. See Nested ASP.NET Master Pages for a detailed reference on how to do this.

It Happened to Me, It Might Happen to You…Or Not
Watch for Use of Session or Request in OnInit
In the event the .NET application being modified has pages developed to assume the user has been authenticated in an earlier page request, there may be direct or indirect references in the OnInit method to Request or Session objects that may not have been created yet. This will vary from application to application, so the recommended approach is to test first. If there is an issue with a page running as a WSRP portlet, then check for potential references in the OnInit method (including references by methods called within OnInit) to Session or Request objects. If there are, the simplest solution is to create a new method and then call that method once the necessary object(s) are fully available. I find doing this at the start of the Page_Load method to be the simplest solution, as sketched below.
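To illustrate the pattern only (the page class, the "CurrentUser" session key, and the InitFromSession helper are hypothetical names, not taken from the application described here), the deferral might look like this:

using System;
using System.Web.UI;

// Hypothetical portlet page that previously read Session during OnInit.
public partial class LegacyReportPage : Page
{
    private object userObj;

    protected override void OnInit(EventArgs e)
    {
        base.OnInit(e);
        // Nothing here (or in anything called from here) touches Session
        // or Request any more; that work is deferred to Page_Load.
    }

    protected void Page_Load(object sender, EventArgs e)
    {
        InitFromSession();   // Session and Request are available by now.
        // ...the original Page_Load logic continues here...
    }

    private void InitFromSession()
    {
        // The initialization that used to live in OnInit.
        userObj = Session["CurrentUser"];
    }
}

The only functional change from the original page is the point at which the session-dependent initialization runs.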
Case Sensitivity
.NET languages are not case sensitive, but Java is. This means it is possible to have many variations of SRC= and src=, or .JPG and .jpg. The preferred solution is to make these markup instances all lower case in your .NET application. This will allow the default Rewriter rules in wsrp-producer.xml to work as is. If this is not practical, then make duplicates of any rules where an issue is occurring due to upper or mixed case usage in the .NET application markup, and match the case in use in the duplicate rule. For example:
<RewriterRule>
<LookFor>(href=\"([^\"]+)</LookFor>
<ChangeToAbsolute>true</ChangeToAbsolute>
<ApplyTo>.axd,.css</ApplyTo>
<MakeResource>true</MakeResource>
</RewriterRule>
may need to be duplicated as:
<RewriterRule>
<LookFor>(HREF=\"([^\"]+)</LookFor>
<ChangeToAbsolute>true</ChangeToAbsolute>
<ApplyTo>.axd,.css</ApplyTo>
<MakeResource>true</MakeResource>
</RewriterRule>
While it is possible to write a regular expression that will handle mixed case usage, it would be long and strenuous to test and maintain, so the recommendation is to use duplicate rules.

Is it Still Relative?
Some .NET applications base relative paths on a fixed root location. With the introduction of the WSRP Producer, the root has moved up one level. References to ~/ will need to be updated to ~/portlets, and many ../ paths will need another ../ in front.

I Can See You But I Can't Find You
This issue was first discovered while debugging modules with code that referenced the form on a page from the code-behind by name and/or id. The initial error presented itself as a run-time error that was difficult to interpret over WSRP but seemed clear when run as straight ASP.NET, as it indicated that the object with the form name did not exist. Since the form name was no longer valid after implementing the WSRP Master Page, the likely fix seemed to be simply updating the references in the code. However, as the WSRP Master Page is external to the code, a compile-time error resulted:
Error 155: The name 'form1' does not exist in the current context. C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault\portlets\legacywebsite\module\Screens\Reporting.aspx.cs 51 52 legacywebsite.module
Much hair-pulling research later, it was discovered that it was the use of the FindControl method causing the issue. FindControl doesn't work quite as expected once a Master Page has been introduced: the controls become embedded in other controls, requiring a recursion to find them that is not part of the FindControl method. In code where the page form is referenced by name, there are two steps to the solution. First, the form needs to be referenced in code generically with Page.Form. For example, this:
ToggleControl ctrl = new ToggleControl(frmManualEntry, FunctionLibrary.ParseArrayLst(userObj.Roles));
becomes this:
ToggleControl ctrl = new ToggleControl(Page.Form, FunctionLibrary.ParseArrayLst(userObj.Roles));
Generally the form id is referenced in most ASP.NET applications as a path to a control on the form. To reach the control once a Master Page has been added requires an additional method to recurse through the controls collections within the form and find the control ID. The following method (found at Rick Strahl's Web Log) corrects this very nicely:
public static Control FindControlRecursive(Control Root, string Id)
{
    if (Root.ID == Id)
        return Root;
    foreach (Control Ctl in Root.Controls)
    {
        Control FoundCtl = FindControlRecursive(Ctl, Id);
        if (FoundCtl != null)
            return FoundCtl;
    }
    return null;
}
Where the form name is not referenced, simply using the FindControlRecursive method in place of FindControl will be all that is necessary.
Following the second part of the example referenced earlier, the method called with Page.Form changes its value extraction code block from this: Label lblErrMsg = (Label)frmRef.FindControl("lblBRMsg" To this: Label lblErrMsg = (Label) FunctionLibrary.FindControlRecursive(frmRef, "lblBRMsg" The Master That Won’t Step Aside In most migrations it is preferable to make as few changes as possible. In one case I ran across an existing Master Page that would not function as a sub-Master Page. While it would probably have been educational to trace down why, the expedient process of updating it to take the place of the WSRP Master Page is the route I took. The changes are highlighted below: … <asp:ContentPlaceHolder ID="wsrphead" runat="server"></asp:ContentPlaceHolder> </head> <body leftMargin="0" topMargin="0"> <form id="TheForm" runat="server"> <input type="hidden" name="key" id="key" value="" /> <input type="hidden" name="formactionurl" id="formactionurl" value="" /> <input type="hidden" name="handle" id="handle" value="" /> <asp:ScriptManager ID="ScriptManager1" runat="server" EnablePartialRendering="true" > </asp:ScriptManager> This approach did not work for all existing Master Pages, but fortunately all of the other existing Master Pages I have run across worked fine as a sub-Master to the WSRP Master Page. Moving On In Enterprise Portals, even after you get everything working, the work is not finished. Next you need to get it where everyone will work with it. Migration Planning Providing that the server where IIS is running is adequately sized, it is possible to run both the .NET Accelerator and the WSRP Producer on the same server during the upgrade process. The upgrade can be performed incrementally, i.e., one portlet at a time, if server administration processes support it. Those processes would include the ability to manage a second producer in the consuming portal and to change over individual portlet instances from one provider to the other. If processes or requirements demand that all portlets be cut over at the same time, it needs to be determined if this cut over should include a new producer, updating all of the portlets in the consumer, or if the WSRP Producer portlet configuration must maintain the naming conventions used by the .NET Accelerator and simply change the WSRP end point configured in the consumer. In some enterprises it may even be necessary to maintain the same WSDL end point, at which point the IIS configuration will be where the updates occur. The downside to such a requirement is that it makes rolling back very difficult, should the need arise. Location, Location, Location Not everyone wants the web application to have the descriptively obvious wsrpdefault location, or needs to create a second WSRP site on the same server. The instructions below are from the product team and, while targeted towards making a second site, will work for creating a site with a different name and then remove the old site. You can also change just the name in IIS. Manually Creating a WSRP Producer Site Instructions (NOTE: all executables used are the same ones used by the installer and “wsrpdev” will be the name of the new instance): 1. Copy C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdefault to C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev. 2. Bring up a command window as an administrator 3. 
Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\IISAppAccelSiteCreator.exe install WSRPProducers wsrpdev "C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev" 8678 2.0.50727 4. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\PermManage.exe add FileSystem C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev "NETWORK SERVICE" 3 1 5. Run C:\Oracle\Middleware\WSRPProducerForDotNet\uninstall_resources\PermManage.exe add FileSystem C:\Oracle\Middleware\WSRPProducerForDotNet\wsrpdev EVERYONE 1 1 6. Open up C:\Oracle\Middleware\WSRPProducerForDotNet\wsdl\1.0\WSRPService.wsdl and replace wsrpdefault with wsrpdev 7. Open up C:\Oracle\Middleware\WSRPProducerForDotNet\wsdl\2.0\WSRPService.wsdl and replace wsrpdefault with wsrpdev Tests: 1. Bring up a browser on the host itself and go to http://localhost:8678/wsrpdev/wsdl/1.0/WSRPService.wsdl and make sure that the URLs in the XML returned include the wsrpdev changes you made in step 6. 2. Bring up a browser on the host itself and see if the default sample comes up: http://localhost:8678/wsrpdev/portlets/ASPNET_AJAX_sample/default.aspx 3. Register the producer in WLP and test the portlet. Changing the Port used by WSRP Producer The pre-configured port for the WSRP Producer is 8678. You can change this port by updating both the IIS configuration and C:\Oracle\Middleware\WSRPProducerForDotNet\[WSRP_APP_NAME]\wsdl\1.0\WSRPService.wsdl. Do You Need to Migrate? Oracle Premier Support ended in November of 2010 for AquaLogic Interaction .NET Application Accelerator 1.x and Extended Support ends in November 2012 (see http://www.oracle.com/us/support/lifetime-support/lifetime-support-software-342730.html for other related dates). This means that integration with products released after November of 2010 is not supported. If having such support is the policy within your enterprise, you do indeed need to migrate. If changes in your enterprise cause your current solution with the .NET Accelerator to no longer function properly, you may need to migrate. Migration is a choice, and if the goals of your enterprise are to take full advantage of newer technologies then migration is certainly one activity you should be planning for.
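For reference, the two small code-level changes called out earlier, bootstrapping log4net in Application_Start and deferring session-dependent OnInit work until Page_Load, look roughly like the following. This is only an illustrative sketch: the page class, method names, and the Session key are invented, and an actual application still needs a log4net configuration section in its Web.Config.

// Global.asax.cs -- enable WSRP Producer logging when the application starts (sketch).
using System;

public class Global : System.Web.HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        log4net.Config.XmlConfigurator.Configure();
    }
}

// A page that previously touched Session in OnInit (hypothetical example).
public partial class LegacyPage : System.Web.UI.Page
{
    protected override void OnInit(EventArgs e)
    {
        base.OnInit(e);
        // InitFromSession();   // moved out: Session/Request may not exist yet over WSRP
    }

    protected void Page_Load(object sender, EventArgs e)
    {
        InitFromSession();      // Session and Request are reliably available by now
    }

    private void InitFromSession()
    {
        var roles = Session["Roles"];   // hypothetical session lookup
        // ... initialize controls based on roles ...
    }
}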

    Read the article

  • Developing Spring Portlet for use inside Weblogic Portal / Webcenter Portal

    - by Murali Veligeti
    We need to understand the main difference between portlet workflow and servlet workflow.The main difference between portlet workflow and servlet workflow is that, the request to the portlet can have two distinct phases: 1) Action phase 2) Render phase. The Action phase is executed only once and is where any 'backend' changes or actions occur, such as making changes in a database. The Render phase then produces what is displayed to the user each time the display is refreshed. The critical point here is that for a single overall request, the action phase is executed only once, but the render phase may be executed multiple times. This provides a clean separation between the activities that modify the persistent state of your system and the activities that generate what is displayed to the user.The dual phases of portlet requests are one of the real strengths of the JSR-168 specification. For example, dynamic search results can be updated routinely on the display without the user explicitly re-running the search. Most other portlet MVC frameworks attempt to completely hide the two phases from the developer and make it look as much like traditional servlet development as possible - we think this approach removes one of the main benefits of using portlets. So, the separation of the two phases is preserved throughout the Spring Portlet MVC framework. The primary manifestation of this approach is that where the servlet version of the MVC classes will have one method that deals with the request, the portlet version of the MVC classes will have two methods that deal with the request: one for the action phase and one for the render phase. For example, where the servlet version of AbstractController has the handleRequestInternal(..) method, the portlet version of AbstractController has handleActionRequestInternal(..) and handleRenderRequestInternal(..) methods.The Spring Portlet Framework is designed around a DispatcherPortlet that dispatches requests to handlers, with configurable handler mappings and view resolution, just as the DispatcherServlet in the Spring Web Framework does.  Developing portlet.xml Let's start the sample development by creating the portlet.xml file in the /WebContent/WEB-INF/ folder as shown below: <?xml version="1.0" encoding="UTF-8"?> <portlet-app version="2.0" xmlns="http://java.sun.com/xml/ns/portlet/portlet-app_2_0.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <portlet> <portlet-name>SpringPortletName</portlet-name> <portlet-class>org.springframework.web.portlet.DispatcherPortlet</portlet-class> <supports> <mime-type>text/html</mime-type> <portlet-mode>view</portlet-mode> </supports> <portlet-info> <title>SpringPortlet</title> </portlet-info> </portlet> </portlet-app> DispatcherPortlet is responsible for handling every client request. When it receives a request, it finds out which Controller class should be used for handling this request, and then it calls its handleActionRequest() or handleRenderRequest() method based on the request processing phase. The Controller class executes business logic and returns a View name that should be used for rendering markup to the user. The DispatcherPortlet then forwards control to that View for actual markup generation. As you can see, DispatcherPortlet is the central dispatcher for use within Spring Portlet MVC Framework. Note that your portlet application can define more than one DispatcherPortlet. 
If it does so, then each of these portlets operates its own namespace, loading its application context and handler mapping. The DispatcherPortlet is also responsible for loading application context (Spring configuration file) for this portlet. First, it tries to check the value of the configLocation portlet initialization parameter. If that parameter is not specified, it takes the portlet name (that is, the value of the <portlet-name> element), appends "-portlet.xml" to it, and tries to load that file from the /WEB-INF folder. In the portlet.xml file, we did not specify the configLocation initialization parameter, so let's create SpringPortletName-portlet.xml file in the next section. Developing SpringPortletName-portlet.xml Create the SpringPortletName-portlet.xml file in the /WebContent/WEB-INF folder of your application as shown below: <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd"> <bean id="viewResolver" class="org.springframework.web.servlet.view.InternalResourceViewResolver"> <property name="viewClass" value="org.springframework.web.servlet.view.JstlView"/> <property name="prefix" value="/jsp/"/> <property name="suffix" value=".jsp"/> </bean> <bean id="pointManager" class="com.wlp.spring.bo.internal.PointManagerImpl"> <property name="users"> <list> <ref bean="point1"/> <ref bean="point2"/> <ref bean="point3"/> <ref bean="point4"/> </list> </property> </bean> <bean id="point1" class="com.wlp.spring.bean.User"> <property name="name" value="Murali"/> <property name="points" value="6"/> </bean> <bean id="point2" class="com.wlp.spring.bean.User"> <property name="name" value="Sai"/> <property name="points" value="13"/> </bean> <bean id="point3" class="com.wlp.spring.bean.User"> <property name="name" value="Rama"/> <property name="points" value="43"/> </bean> <bean id="point4" class="com.wlp.spring.bean.User"> <property name="name" value="Krishna"/> <property name="points" value="23"/> </bean> <bean id="messageSource" class="org.springframework.context.support.ResourceBundleMessageSource"> <property name="basename" value="messages"/> </bean> <bean name="/users.htm" id="userController" class="com.wlp.spring.controller.UserController"> <property name="pointManager" ref="pointManager"/> </bean> <bean name="/pointincrease.htm" id="pointIncreaseController" class="com.wlp.spring.controller.IncreasePointsFormController"> <property name="sessionForm" value="true"/> <property name="pointManager" ref="pointManager"/> <property name="commandName" value="pointIncrease"/> <property name="commandClass" value="com.wlp.spring.bean.PointIncrease"/> <property name="formView" value="pointincrease"/> <property name="successView" value="users"/> </bean> <bean id="parameterMappingInterceptor" class="org.springframework.web.portlet.handler.ParameterMappingInterceptor" /> <bean id="portletModeParameterHandlerMapping" class="org.springframework.web.portlet.handler.PortletModeParameterHandlerMapping"> <property name="order" value="1" /> <property name="interceptors"> <list> <ref bean="parameterMappingInterceptor" /> </list> </property> <property name="portletModeParameterMap"> <map> <entry key="view"> <map> <entry key="pointincrease"> <ref bean="pointIncreaseController" /> </entry> <entry key="users"> <ref bean="userController" /> </entry> </map> </entry> </map> </property> </bean> 
<bean id="portletModeHandlerMapping" class="org.springframework.web.portlet.handler.PortletModeHandlerMapping"> <property name="order" value="2" /> <property name="portletModeMap"> <map> <entry key="view"> <ref bean="userController" /> </entry> </map> </property> </bean> </beans> The SpringPortletName-portlet.xml file is an application context file for your MVC portlet. It has a couple of bean definitions: viewController. At this point, remember that the viewController bean definition points to the com.ibm.developerworks.springmvc.ViewController.java class. portletModeHandlerMapping. As we discussed in the last section, whenever DispatcherPortlet gets a client request, it tries to find a suitable Controller class for handling that request. That is where PortletModeHandlerMapping comes into the picture. The PortletModeHandlerMapping class is a simple implementation of the HandlerMapping interface and is used by DispatcherPortlet to find a suitable Controller for every request. The PortletModeHandlerMapping class uses Portlet mode for the current request to find a suitable Controller class to use for handling the request. The portletModeMap property of portletModeHandlerMapping bean is the place where we map the Portlet mode name against the Controller class. In the sample code, we show that viewController is responsible for handling View mode requests. Developing UserController.java In the preceding section, you learned that the viewController bean is responsible for handling all the View mode requests. Your next step is to create the UserController.java class as shown below: public class UserController extends AbstractController { private PointManager pointManager; public void handleActionRequest(ActionRequest request, ActionResponse response) throws Exception { } public ModelAndView handleRenderRequest(RenderRequest request, RenderResponse response) throws ServletException, IOException { String now = (new java.util.Date()).toString(); Map<String, Object> myModel = new HashMap<String, Object>(); myModel.put("now", now); myModel.put("users", this.pointManager.getUsers()); return new ModelAndView("users", "model", myModel); } public void setPointManager(PointManager pointManager) { this.pointManager = pointManager; } } Every controller class in Spring Portlet MVC Framework must implement the org.springframework.web. portlet.mvc.Controller interface directly or indirectly. To make things easier, Spring Framework provides AbstractController class, which is the default implementation of the Controller interface. As a developer, you should always extend your controller from either AbstractController or one of its more specific subclasses. Any implementation of the Controller class should be reusable, thread-safe, and capable of handling multiple requests throughout the lifecycle of the portlet. In the sample code, we create the ViewController class by extending it from AbstractController. Because we don't want to do any action processing in the HelloSpringPortletMVC portlet, we override only the handleRenderRequest() method of AbstractController. Now, the only thing that HelloWorldPortletMVC should do is render the markup of View.jsp to the user when it receives a user request to do so. To do that, return the object of ModelAndView with a value of view equal to View. Developing web.xml According to Portlet Specification 1.0, every portlet application is also a Servlet Specification 2.3-compliant Web application, and it needs a Web application deployment descriptor (that is, web.xml). 
Let’s create the web.xml file in the /WEB-INF/ folder as shown in listing 4. Follow these steps: Open the existing web.xml file located at /WebContent/WEB-INF/web.xml. Replace the contents of this file with the code as shown below: <servlet> <servlet-name>ViewRendererServlet</servlet-name> <servlet-class>org.springframework.web.servlet.ViewRendererServlet</servlet-class> </servlet> <servlet-mapping> <servlet-name>ViewRendererServlet</servlet-name> <url-pattern>/WEB-INF/servlet/view</url-pattern> </servlet-mapping> <context-param> <param-name>contextConfigLocation</param-name> <param-value>/WEB-INF/applicationContext.xml</param-value> </context-param> <listener> <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class> </listener> The web.xml file for the sample portlet declares two things: ViewRendererServlet. The ViewRendererServlet is the bridge servlet for portlet support. During the render phase, DispatcherPortlet wraps PortletRequest into ServletRequest and forwards control to ViewRendererServlet for actual rendering. This process allows Spring Portlet MVC Framework to use the same View infrastructure as that of its servlet version, that is, Spring Web MVC Framework. ContextLoaderListener. The ContextLoaderListener class takes care of loading Web application context at the time of the Web application startup. The Web application context is shared by all the portlets in the portlet application. In case of duplicate bean definition, the bean definition in the portlet application context takes precedence over the Web application context. The ContextLoader class tries to read the value of the contextConfigLocation Web context parameter to find out the location of the context file. If the contextConfigLocation parameter is not set, then it uses the default value, which is /WEB-INF/applicationContext.xml, to load the context file. The Portlet Controller interface requires two methods that handle the two phases of a portlet request: the action request and the render request. The action phase should be capable of handling an action request and the render phase should be capable of handling a render request and returning an appropriate model and view. While the Controller interface is quite abstract, Spring Portlet MVC offers a lot of controllers that already contain a lot of the functionality you might need – most of these are very similar to controllers from Spring Web MVC. The Controller interface just defines the most common functionality required of every controller - handling an action request, handling a render request, and returning a model and a view. How rendering works As you know, when the user tries to access a page with PointSystemPortletMVC portlet on it or when the user performs some action on any other portlet on that page or tries to refresh that page, a render request is sent to the PointSystemPortletMVC portlet. In the sample code, because DispatcherPortlet is the main portlet class, Weblogic Portal / Webcenter Portal calls its render() method and then the following sequence of events occurs: The render() method of DispatcherPortlet calls the doDispatch() method, which in turn calls the doRender() method. After the doRenderService() method gets control, first it tries to find out the locale of the request by calling the PortletRequest.getLocale() method. This locale is used while making all the locale-related decisions for choices such as which resource bundle should be loaded or which JSP should be displayed to the user based on the locale. 
After that, the doRenderService() method starts iterating through all the HandlerMapping classes configured for this portlet, calling their getHandler() method to identify the appropriate Controller for handling this request. In the sample code, we have configured only PortletModeHandlerMapping as a HandlerMapping class. The PortletModeHandlerMapping class reads the value of the current portlet mode, and based on that, it finds out, the Controller class that should be used to handle this request. In the sample code, ViewController is configured to handle the View mode request so that the PortletModeHandlerMapping class returns the object of ViewController. After the object of ViewController is returned, the doRenderService() method calls its handleRenderRequestInternal() method. Implementation of the handleRenderRequestInternal() method in ViewController.java is very simple. It logs a message saying that it got control, and then it creates an instance of ModelAndView with a value equal to View and returns it to DispatcherPortlet. After control returns to doRenderService(), the next task is to figure out how to render View. For that, DispatcherPortlet starts iterating through all the ViewResolvers configured in your portlet application, calling their resolveViewName() method. In the sample code we have configured only one ViewResolver, InternalResourceViewResolver. When its resolveViewName() method is called with viewName, it tries to add /WEB-INF/jsp as a prefix to the view name and to add JSP as a suffix. And it checks if /WEB-INF/jsp/View.jsp exists. If it does exist, it returns the object of JstlView wrapping View.jsp. After control is returned to the doRenderService() method, it creates the object PortletRequestDispatcher, which points to /WEB-INF/servlet/view – that is, ViewRendererServlet. Then it sets the object of JstlView in the request and dispatches the request to ViewRendererServlet. After ViewRendererServlet gets control, it reads the JstlView object from the request attribute and creates another RequestDispatcher pointing to the /WEB-INF/jsp/View.jsp URL and passes control to it for actual markup generation. The markup generated by View.jsp is returned to user. At this point, you may question the need for ViewRendererServlet. Why can't DispatcherPortlet directly forward control to View.jsp? Adding ViewRendererServlet in between allows Spring Portlet MVC Framework to reuse the existing View infrastructure. You may appreciate this more when we discuss how easy it is to integrate Apache Tiles Framework with your Spring Portlet MVC Framework. The attached project SpringPortlet.zip should be used to import the project in to your OEPE Workspace. SpringPortlet_Jars.zip contains jar files required for the application. Project is written on Spring 2.5.  The same JSR 168 portlet should work on Webcenter Portal as well.  Downloads: Download WeblogicPotal Project which consists of Spring Portlet. Download Spring Jars In-addition to above you need to download Spring.jar (Spring2.5)
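The attached project is not reproduced in this excerpt, so for orientation, the supporting classes referenced by the bean definitions and by UserController would need roughly the following shape. This is an assumption inferred from the configuration (the name/points properties and the pointManager.getUsers() call), not the actual project source:

// com/wlp/spring/bean/User.java (assumed shape)
package com.wlp.spring.bean;

public class User {
    private String name;
    private int points;
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public int getPoints() { return points; }
    public void setPoints(int points) { this.points = points; }
}

// com/wlp/spring/bo/PointManager.java (assumed shape)
package com.wlp.spring.bo;

import java.util.List;
import com.wlp.spring.bean.User;

public interface PointManager {
    List<User> getUsers();
}

// com/wlp/spring/bo/internal/PointManagerImpl.java (assumed shape)
package com.wlp.spring.bo.internal;

import java.util.List;
import com.wlp.spring.bean.User;
import com.wlp.spring.bo.PointManager;

public class PointManagerImpl implements PointManager {
    private List<User> users;                       // injected via the <property name="users"> list
    public void setUsers(List<User> users) { this.users = users; }
    public List<User> getUsers() { return users; }
}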

    Read the article

  • Using Stub Objects

    - by user9154181
    Having told the long and winding tale of where stub objects came from and how we use them to build Solaris, I'd like to focus now on the the nuts and bolts of building and using them. The following new features were added to the Solaris link-editor (ld) to support the production and use of stub objects: -z stub This new command line option informs ld that it is to build a stub object rather than a normal object. In this mode, it accepts the same command line arguments as usual, but will quietly ignore any objects and sharable object dependencies. STUB_OBJECT Mapfile Directive In order to build a stub version of an object, its mapfile must specify the STUB_OBJECT directive. When producing a non-stub object, the presence of STUB_OBJECT causes the link-editor to perform extra validation to ensure that the stub and non-stub objects will be compatible. ASSERT Mapfile Directive All data symbols exported from the object must have an ASSERT symbol directive in the mapfile that declares them as data and supplies the size, binding, bss attributes, and symbol aliasing details. When building the stub objects, the information in these ASSERT directives is used to create the data symbols. When building the real object, these ASSERT directives will ensure that the real object matches the linking interface presented by the stub. Although ASSERT was added to the link-editor in order to support stub objects, they are a general purpose feature that can be used independently of stub objects. For instance you might choose to use an ASSERT directive if you have a symbol that must have a specific address in order for the object to operate properly and you want to automatically ensure that this will always be the case. The material presented here is derived from a document I originally wrote during the development effort, which had the dual goals of providing supplemental materials for the stub object PSARC case, and as a set of edits that were eventually applied to the Oracle Solaris Linker and Libraries Manual (LLM). The Solaris 11 LLM contains this information in a more polished form. Stub Objects A stub object is a shared object, built entirely from mapfiles, that supplies the same linking interface as the real object, while containing no code or data. Stub objects cannot be used at runtime. However, an application can be built against a stub object, where the stub object provides the real object name to be used at runtime, and then use the real object at runtime. When building a stub object, the link-editor ignores any object or library files specified on the command line, and these files need not exist in order to build a stub. Since the compilation step can be omitted, and because the link-editor has relatively little work to do, stub objects can be built very quickly. Stub objects can be used to solve a variety of build problems: Speed Modern machines, using a version of make with the ability to parallelize operations, are capable of compiling and linking many objects simultaneously, and doing so offers significant speedups. However, it is typical that a given object will depend on other objects, and that there will be a core set of objects that nearly everything else depends on. It is necessary to impose an ordering that builds each object before any other object that requires it. This ordering creates bottlenecks that reduce the amount of parallelization that is possible and limits the overall speed at which the code can be built. 
Complexity/Correctness In a large body of code, there can be a large number of dependencies between the various objects. The makefiles or other build descriptions for these objects can become very complex and difficult to understand or maintain. The dependencies can change as the system evolves. This can cause a given set of makefiles to become slightly incorrect over time, leading to race conditions and mysterious rare build failures. Dependency Cycles It might be desirable to organize code as cooperating shared objects, each of which draws on the resources provided by the other. Such cycles cannot be supported in an environment where objects must be built before the objects that use them, even though the runtime linker is fully capable of loading and using such objects if they could be built. Stub shared objects offer an alternative method for building code that sidesteps the above issues. Stub objects can be quickly built for all the shared objects produced by the build. Then, all the real shared objects and executables can be built in parallel, in any order, using the stub objects to stand in for the real objects at link-time. Afterwards, the executables and real shared objects are kept, and the stub shared objects are discarded. Stub objects are built from a mapfile, which must satisfy the following requirements. The mapfile must specify the STUB_OBJECT directive. This directive informs the link-editor that the object can be built as a stub object, and as such causes the link-editor to perform validation and sanity checking intended to guarantee that an object and its stub will always provide identical linking interfaces. All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile. The mapfile must use symbol scope reduction ('*') to remove any symbols not explicitly listed from the external interface. All global data exported from the object must have an ASSERT symbol attribute in the mapfile to specify the symbol type, size, and bss attributes. In the case where there are multiple symbols that reference the same data, the ASSERT for one of these symbols must specify the TYPE and SIZE attributes, while the others must use the ALIAS attribute to reference this primary symbol. Given such a mapfile, the stub and real versions of the shared object can be built using the same command line for each, adding the '-z stub' option to the link for the stub object, and omitting the option from the link for the real object. To demonstrate these ideas, the following code implements a shared object named idx5, which exports data from a 5-element array of integers, with each element initialized to contain its zero-based array index. This data is available as a global array, via an alternative alias data symbol with weak binding, and via a functional interface. % cat idx5.c int _idx5[5] = { 0, 1, 2, 3, 4 }; #pragma weak idx5 = _idx5 int idx5_func(int index) { if ((index < 0) || (index > 4)) return (-1); return (_idx5[index]); } A mapfile is required to describe the interface provided by this shared object. % cat mapfile $mapfile_version 2 STUB_OBJECT; SYMBOL_SCOPE { _idx5 { ASSERT { TYPE=data; SIZE=4[5] }; }; idx5 { ASSERT { BINDING=weak; ALIAS=_idx5 }; }; idx5_func; local: *; }; The following main program is used to print all the index values available from the idx5 shared object.
% cat main.c #include <stdio.h> extern int _idx5[5], idx5[5], idx5_func(int); int main(int argc, char **argv) { int i; for (i = 0; i < 5; i++) (void) printf("[%d] %d %d %d\n", i, _idx5[i], idx5[i], idx5_func(i)); return (0); } The following commands create a stub version of this shared object in a subdirectory named stublib. elfdump is used to verify that the resulting object is a stub. The command used to build the stub differs from that of the real object only in the addition of the -z stub option, and the use of a different output file name. This demonstrates the ease with which stub generation can be added to an existing makefile. % cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o stublib/libidx5.so.1 -zstub % ln -s libidx5.so.1 stublib/libidx5.so % elfdump -d stublib/libidx5.so | grep STUB [11] FLAGS_1 0x4000000 [ STUB ] The main program can now be built, using the stub object to stand in for the real shared object, and setting a runpath that will find the real object at runtime. However, as we have not yet built the real object, this program cannot yet be run. Attempts to cause the system to load the stub object are rejected, as the runtime linker knows that stub objects lack the actual code and data found in the real object, and cannot execute. % cc main.c -L stublib -R '$ORIGIN/lib' -lidx5 -lc % ./a.out ld.so.1: a.out: fatal: libidx5.so.1: open failed: No such file or directory Killed % LD_PRELOAD=stublib/libidx5.so.1 ./a.out ld.so.1: a.out: fatal: stublib/libidx5.so.1: stub shared object cannot be used at runtime Killed We build the real object using the same command as we used to build the stub, omitting the -z stub option, and writing the results to a different file. % cc -Kpic -G -M mapfile -h libidx5.so.1 idx5.c -o lib/libidx5.so.1 Once the real object has been built in the lib subdirectory, the program can be run. % ./a.out [0] 0 0 0 [1] 1 1 1 [2] 2 2 2 [3] 3 3 3 [4] 4 4 4 Mapfile Changes The version 2 mapfile syntax was extended in a number of places to accommodate stub objects. Conditional Input The version 2 mapfile syntax has the ability to conditionalize mapfile input using the $if control directive. As you might imagine, these directives are used frequently with ASSERT directives for data, because a given data symbol will frequently have a different size in 32-bit or 64-bit code, or on differing hardware such as x86 versus sparc. The link-editor maintains an internal table of names that can be used in the logical expressions evaluated by $if and $elif. At startup, this table is initialized with items that describe the class of object (_ELF32 or _ELF64) and the type of the target machine (_sparc or _x86). We found that there were a small number of cases in the Solaris code base in which we needed to know what kind of object we were producing, so we added the following new predefined items in order to address that need: _ET_DYN (shared object), _ET_EXEC (executable object), and _ET_REL (relocatable object). STUB_OBJECT Directive The new STUB_OBJECT directive informs the link-editor that the object described by the mapfile can be built as a stub object. STUB_OBJECT; A stub shared object is built entirely from the information in the mapfiles supplied on the command line. When the -z stub option is specified to build a stub object, the presence of the STUB_OBJECT directive in a mapfile is required, and the link-editor uses the information in symbol ASSERT attributes to create global symbols that match those of the real object.
When the real object is built, the presence of STUB_OBJECT causes the link-editor to verify that the mapfiles accurately describe the real object interface, and that a stub object built from them will provide the same linking interface as the real object it represents. All function and data symbols that make up the external interface to the object must be explicitly listed in the mapfile. The mapfile must use symbol scope reduction ('*'), to remove any symbols not explicitly listed from the external interface. All global data in the object is required to have an ASSERT attribute that specifies the symbol type and size. If the ASSERT BIND attribute is not present, the link-editor provides a default assertion that the symbol must be GLOBAL. If the ASSERT SH_ATTR attribute is not present, or does not specify that the section is one of BITS or NOBITS, the link-editor provides a default assertion that the associated section is BITS. All data symbols that describe the same address and size are required to have ASSERT ALIAS attributes specified in the mapfile. If aliased symbols are discovered that do not have an ASSERT ALIAS specified, the link fails and no object is produced. These rules ensure that the mapfiles contain a description of the real shared object's linking interface that is sufficient to produce a stub object with a completely compatible linking interface. SYMBOL_SCOPE/SYMBOL_VERSION ASSERT Attribute The SYMBOL_SCOPE and SYMBOL_VERSION mapfile directives were extended with a symbol attribute named ASSERT. The syntax for the ASSERT attribute is as follows: ASSERT { ALIAS = symbol_name; BINDING = symbol_binding; TYPE = symbol_type; SH_ATTR = section_attributes; SIZE = size_value; SIZE = size_value[count]; }; The ASSERT attribute is used to specify the expected characteristics of the symbol. The link-editor compares the symbol characteristics that result from the link to those given by ASSERT attributes. If the real and asserted attributes do not agree, a fatal error is issued and the output object is not created. In normal use, the link editor evaluates the ASSERT attribute when present, but does not require them, or provide default values for them. The presence of the STUB_OBJECT directive in a mapfile alters the interpretation of ASSERT to require them under some circumstances, and to supply default assertions if explicit ones are not present. See the definition of the STUB_OBJECT Directive for the details. When the -z stub command line option is specified to build a stub object, the information provided by ASSERT attributes is used to define the attributes of the global symbols provided by the object. ASSERT accepts the following: ALIAS Name of a previously defined symbol that this symbol is an alias for. An alias symbol has the same type, value, and size as the main symbol. The ALIAS attribute is mutually exclusive to the TYPE, SIZE, and SH_ATTR attributes, and cannot be used with them. When ALIAS is specified, the type, size, and section attributes are obtained from the alias symbol. BIND Specifies an ELF symbol binding, which can be any of the STB_ constants defined in <sys/elf.h>, with the STB_ prefix removed (e.g. GLOBAL, WEAK). TYPE Specifies an ELF symbol type, which can be any of the STT_ constants defined in <sys/elf.h>, with the STT_ prefix removed (e.g. OBJECT, COMMON, FUNC). In addition, for compatibility with other mapfile usage, FUNCTION and DATA can be specified, for STT_FUNC and STT_OBJECT, respectively. 
TYPE is mutually exclusive to ALIAS, and cannot be used in conjunction with it. SH_ATTR Specifies attributes of the section associated with the symbol. The section_attributes that can be specified are given in the following table: Section AttributeMeaning BITSSection is not of type SHT_NOBITS NOBITSSection is of type SHT_NOBITS SH_ATTR is mutually exclusive to ALIAS, and cannot be used in conjunction with it. SIZE Specifies the expected symbol size. SIZE is mutually exclusive to ALIAS, and cannot be used in conjunction with it. The syntax for the size_value argument is as described in the discussion of the SIZE attribute below. SIZE The SIZE symbol attribute existed before support for stub objects was introduced. It is used to set the size attribute of a given symbol. This attribute results in the creation of a symbol definition. Prior to the introduction of the ASSERT SIZE attribute, the value of a SIZE attribute was always numeric. While attempting to apply ASSERT SIZE to the objects in the Solaris ON consolidation, I found that many data symbols have a size based on the natural machine wordsize for the class of object being produced. Variables declared as long, or as a pointer, will be 4 bytes in size in a 32-bit object, and 8 bytes in a 64-bit object. Initially, I employed the conditional $if directive to handle these cases as follows: $if _ELF32 foo { ASSERT { TYPE=data; SIZE=4 } }; bar { ASSERT { TYPE=data; SIZE=20 } }; $elif _ELF64 foo { ASSERT { TYPE=data; SIZE=8 } }; bar { ASSERT { TYPE=data; SIZE=40 } }; $else $error UNKNOWN ELFCLASS $endif I found that the situation occurs frequently enough that this is cumbersome. To simplify this case, I introduced the idea of the addrsize symbolic name, and of a repeat count, which together make it simple to specify machine word scalar or array symbols. Both the SIZE, and ASSERT SIZE attributes support this syntax: The size_value argument can be a numeric value, or it can be the symbolic name addrsize. addrsize represents the size of a machine word capable of holding a memory address. The link-editor substitutes the value 4 for addrsize when building 32-bit objects, and the value 8 when building 64-bit objects. addrsize is useful for representing the size of pointer variables and C variables of type long, as it automatically adjusts for 32 and 64-bit objects without requiring the use of conditional input. The size_value argument can be optionally suffixed with a count value, enclosed in square brackets. If count is present, size_value and count are multiplied together to obtain the final size value. Using this feature, the example above can be written more naturally as: foo { ASSERT { TYPE=data; SIZE=addrsize } }; bar { ASSERT { TYPE=data; SIZE=addrsize[5] } }; Exported Global Data Is Still A Bad Idea As you can see, the additional plumbing added to the Solaris link-editor to support stub objects is minimal. Furthermore, about 90% of that plumbing is dedicated to handling global data. We have long advised against global data exported from shared objects. There are many ways in which global data does not fit well with dynamic linking. Stub objects simply provide one more reason to avoid this practice. It is always better to export all data via a functional interface. You should always hide your data, and make it available to your users via a function that they can call to acquire the address of the data item. 
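To make that last point concrete, here is a small illustration of my own (not from the original post) of wrapping the idx5 example's array behind a functional interface so that no data symbol needs to be exported at all; the idx5_array accessor name is invented for this sketch:

/* Keep the storage private to the object and export only functions. */
static int _idx5[5] = { 0, 1, 2, 3, 4 };

int
idx5_func(int index)
{
        if ((index < 0) || (index > 4))
                return (-1);
        return (_idx5[index]);
}

/* Callers that genuinely need the storage ask for its address. */
const int *
idx5_array(void)
{
        return (_idx5);
}

With this shape, the mapfile exports only idx5_func and idx5_array, and none of the ASSERT data plumbing described above is needed for the stub.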
However, if you do have to support global data for a stub, perhaps because you are working with an existing object, it is still easily done, as shown above. Oracle does not like us to discuss hypothetical new features that don't exist in shipping product, so I'll end this section with a speculation. It might be possible to do more in this area to ease the difficulty of dealing with objects that have global data that the users of the library don't need. Perhaps someday... Conclusions It is easy to create stub objects for most objects. If your library only exports function symbols, all you have to do to build a faithful stub object is to add STUB_OBJECT; and then to use the same link command you're currently using, with the addition of the -z stub option. Happy Stubbing!

    Read the article

  • PTLQueue : a scalable bounded-capacity MPMC queue

    - by Dave
Title: Fast concurrent MPMC queue -- I've used the following concurrent queue algorithm enough that it warrants a blog entry. I'll sketch out the design of a fast and scalable multiple-producer multiple-consumer (MPMC) concurrent queue called PTLQueue. The queue has bounded capacity and is implemented via a circular array. Bounded capacity can be a useful property if there's a mismatch between producer rates and consumer rates where an unbounded queue might otherwise result in excessive memory consumption by virtue of the container nodes that -- in some queue implementations -- are used to hold values. A bounded-capacity queue can provide flow control between components. Beware, however, that bounded collections can also result in resource deadlock if abused. The put() and take() operators are partial and wait for the collection to become non-full or non-empty, respectively. Put() and take() do not allocate memory, and are not vulnerable to the ABA pathologies. The PTLQueue algorithm can be implemented equally well in C/C++ and Java. Partial operators are often more convenient than total methods. In many use cases, if the preconditions aren't met, there's nothing else useful the thread can do, so it may as well wait via a partial method. An exception is the case of work-stealing queues, where a thief might scan a set of queues from which it could potentially steal. Total methods return ASAP with a success-failure indication. (It's tempting to describe a queue or API as blocking or non-blocking instead of partial or total, but non-blocking is already an overloaded concurrency term. Perhaps waiting/non-waiting or patient/impatient might be better terms.) It's also trivial to construct partial operators by busy-waiting via total operators, but such constructs may be less efficient than an operator explicitly and intentionally designed to wait. A PTLQueue instance contains an array of slots, where each slot has volatile Turn and MailBox fields. The array has power-of-two length, allowing mod/div operations to be replaced by masking. We assume sensible padding and alignment to reduce the impact of false sharing. (On x86 I recommend 128-byte alignment and padding because of the adjacent-sector prefetch facility.) Each queue also has PutCursor and TakeCursor cursor variables, each of which should be sequestered as the sole occupant of a cache line or sector. You can opt to use 64-bit integers if concerned about wrap-around aliasing in the cursor variables. Put(null) is considered illegal, but the caller or implementation can easily check for and convert null to a distinguished non-null proxy value if null happens to be a value you'd like to pass. Take() will accordingly convert the proxy value back to null. An advantage of PTLQueue is that you can use atomic fetch-and-increment for the partial methods. We initialize each slot at index I with (Turn=I, MailBox=null). Both cursors are initially 0. All shared variables are considered "volatile" and atomics such as CAS and AtomicFetchAndIncrement are presumed to have bidirectional fence semantics. Finally, T is the templated type. I've sketched out a total tryTake() method below that allows the caller to poll the queue. tryPut() has an analogous construction. Zebra striping : alternating row colors for nice-looking code listings. See also google code "prettify" : https://code.google.com/p/google-code-prettify/ Prettify is a JavaScript module that yields the HTML/CSS/JS equivalent of pretty-print.
-- pre:nth-child(odd) { background-color:#ff0000; } pre:nth-child(even) { background-color:#0000ff; } border-left: 11px solid #ccc; margin: 1.7em 0 1.7em 0.3em; background-color:#BFB; font-size:12px; line-height:65%; " // PTLQueue : Put(v) : // producer : partial method - waits as necessary assert v != null assert Mask = 1 && (Mask & (Mask+1)) == 0 // Document invariants // doorway step // Obtain a sequence number -- ticket // As a practical concern the ticket value is temporally unique // The ticket also identifies and selects a slot auto tkt = AtomicFetchIncrement (&PutCursor, 1) slot * s = &Slots[tkt & Mask] // waiting phase : // wait for slot's generation to match the tkt value assigned to this put() invocation. // The "generation" is implicitly encoded as the upper bits in the cursor // above those used to specify the index : tkt div (Mask+1) // The generation serves as an epoch number to identify a cohort of threads // accessing disjoint slots while s-Turn != tkt : Pause assert s-MailBox == null s-MailBox = v // deposit and pass message Take() : // consumer : partial method - waits as necessary auto tkt = AtomicFetchIncrement (&TakeCursor,1) slot * s = &Slots[tkt & Mask] // 2-stage waiting : // First wait for turn for our generation // Acquire exclusive "take" access to slot's MailBox field // Then wait for the slot to become occupied while s-Turn != tkt : Pause // Concurrency in this section of code is now reduced to just 1 producer thread // vs 1 consumer thread. // For a given queue and slot, there will be most one Take() operation running // in this section. // Consumer waits for producer to arrive and make slot non-empty // Extract message; clear mailbox; advance Turn indicator // We have an obvious happens-before relation : // Put(m) happens-before corresponding Take() that returns that same "m" for T v = s-MailBox if v != null : s-MailBox = null ST-ST barrier s-Turn = tkt + Mask + 1 // unlock slot to admit next producer and consumer return v Pause tryTake() : // total method - returns ASAP with failure indication for auto tkt = TakeCursor slot * s = &Slots[tkt & Mask] if s-Turn != tkt : return null T v = s-MailBox // presumptive return value if v == null : return null // ratify tkt and v values and commit by advancing cursor if CAS (&TakeCursor, tkt, tkt+1) != tkt : continue s-MailBox = null ST-ST barrier s-Turn = tkt + Mask + 1 return v The basic idea derives from the Partitioned Ticket Lock "PTL" (US20120240126-A1) and the MultiLane Concurrent Bag (US8689237). The latter is essentially a circular ring-buffer where the elements themselves are queues or concurrent collections. You can think of the PTLQueue as a partitioned ticket lock "PTL" augmented to pass values from lock to unlock via the slots. Alternatively, you could conceptualize of PTLQueue as a degenerate MultiLane bag where each slot or "lane" consists of a simple single-word MailBox instead of a general queue. Each lane in PTLQueue also has a private Turn field which acts like the Turn (Grant) variables found in PTL. Turn enforces strict FIFO ordering and restricts concurrency on the slot mailbox field to at most one simultaneous put() and take() operation. PTL uses a single "ticket" variable and per-slot Turn (grant) fields while MultiLane has distinct PutCursor and TakeCursor cursors and abstract per-slot sub-queues. Both PTL and MultiLane advance their cursor and ticket variables with atomic fetch-and-increment. 
PTLQueue borrows from both PTL and MultiLane and has distinct put and take cursors and per-slot Turn fields. Instead of a per-slot queues, PTLQueue uses a simple single-word MailBox field. PutCursor and TakeCursor act like a pair of ticket locks, conferring "put" and "take" access to a given slot. PutCursor, for instance, assigns an incoming put() request to a slot and serves as a PTL "Ticket" to acquire "put" permission to that slot's MailBox field. To better explain the operation of PTLQueue we deconstruct the operation of put() and take() as follows. Put() first increments PutCursor obtaining a new unique ticket. That ticket value also identifies a slot. Put() next waits for that slot's Turn field to match that ticket value. This is tantamount to using a PTL to acquire "put" permission on the slot's MailBox field. Finally, having obtained exclusive "put" permission on the slot, put() stores the message value into the slot's MailBox. Take() similarly advances TakeCursor, identifying a slot, and then acquires and secures "take" permission on a slot by waiting for Turn. Take() then waits for the slot's MailBox to become non-empty, extracts the message, and clears MailBox. Finally, take() advances the slot's Turn field, which releases both "put" and "take" access to the slot's MailBox. Note the asymmetry : put() acquires "put" access to the slot, but take() releases that lock. At any given time, for a given slot in a PTLQueue, at most one thread has "put" access and at most one thread has "take" access. This restricts concurrency from general MPMC to 1-vs-1. We have 2 ticket locks -- one for put() and one for take() -- each with its own "ticket" variable in the form of the corresponding cursor, but they share a single "Grant" egress variable in the form of the slot's Turn variable. Advancing the PutCursor, for instance, serves two purposes. First, we obtain a unique ticket which identifies a slot. Second, incrementing the cursor is the doorway protocol step to acquire the per-slot mutual exclusion "put" lock. The cursors and operations to increment those cursors serve double-duty : slot-selection and ticket assignment for locking the slot's MailBox field. At any given time a slot MailBox field can be in one of the following states: empty with no pending operations -- neutral state; empty with one or more waiting take() operations pending -- deficit; occupied with no pending operations; occupied with one or more waiting put() operations -- surplus; empty with a pending put() or pending put() and take() operations -- transitional; or occupied with a pending take() or pending put() and take() operations -- transitional. The partial put() and take() operators can be implemented with an atomic fetch-and-increment operation, which may confer a performance advantage over a CAS-based loop. In addition we have independent PutCursor and TakeCursor cursors. Critically, a put() operation modifies PutCursor but does not access the TakeCursor and a take() operation modifies the TakeCursor cursor but does not access the PutCursor. This acts to reduce coherence traffic relative to some other queue designs. It's worth noting that slow threads or obstruction in one slot (or "lane") does not impede or obstruct operations in other slots -- this gives us some degree of obstruction isolation. PTLQueue is not lock-free, however. The implementation above is expressed with polite busy-waiting (Pause) but it's trivial to implement per-slot parking and unparking to deschedule waiting threads. 
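The description above maps fairly directly onto Java. The following is a compact sketch of the partial put() and take() operators as I read the pseudocode, not the author's implementation; padding/alignment, per-slot parking, and the total tryPut()/tryTake() variants are deliberately omitted:

import java.util.concurrent.atomic.AtomicLong;

final class PTLQueue<T> {
    private static final class Slot {
        volatile long turn;        // generation/ticket that currently owns this slot
        volatile Object mailbox;   // single-word message box
    }

    private final Slot[] slots;
    private final int mask;
    private final AtomicLong putCursor = new AtomicLong();
    private final AtomicLong takeCursor = new AtomicLong();

    PTLQueue(int capacity) {       // capacity must be a power of two
        if (Integer.bitCount(capacity) != 1) throw new IllegalArgumentException();
        slots = new Slot[capacity];
        mask = capacity - 1;
        for (int i = 0; i < capacity; i++) {
            slots[i] = new Slot();
            slots[i].turn = i;     // slot i is initially owned by ticket i
        }
    }

    void put(T v) {                // partial method: waits while the queue is full
        if (v == null) throw new IllegalArgumentException("null not allowed");
        long tkt = putCursor.getAndIncrement();     // doorway: ticket + slot selection
        Slot s = slots[(int) (tkt & mask)];
        while (s.turn != tkt) Thread.onSpinWait();  // wait for our generation's turn
        s.mailbox = v;                              // deposit; the matching take() clears it
    }

    @SuppressWarnings("unchecked")
    T take() {                     // partial method: waits while the queue is empty
        long tkt = takeCursor.getAndIncrement();
        Slot s = slots[(int) (tkt & mask)];
        while (s.turn != tkt) Thread.onSpinWait();  // acquire "take" access to the slot
        Object v;
        while ((v = s.mailbox) == null) Thread.onSpinWait();  // wait for the producer
        s.mailbox = null;
        s.turn = tkt + mask + 1;   // unlock the slot for the next producer and consumer
        return (T) v;
    }
}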
It's also easy to convert the queue to a more general deque by replacing the PutCursor and TakeCursor cursors with Left/Front and Right/Back cursors that can move either direction. Specifically, to push and pop from the "left" side of the deque we would decrement and increment the Left cursor, respectively, and to push and pop from the "right" side of the deque we would increment and decrement the Right cursor, respectively. We used a variation of PTLQueue for message passing in our recent OPODIS 2013 paper. ul { list-style:none; padding-left:0; padding:0; margin:0; margin-left:0; } ul#myTagID { padding: 0px; margin: 0px; list-style:none; margin-left:0;} -- -- There's quite a bit of related literature in this area. I'll call out a few relevant references: Wilson's NYU Courant Institute UltraComputer dissertation from 1988 is classic and the canonical starting point : Operating System Data Structures for Shared-Memory MIMD Machines with Fetch-and-Add. Regarding provenance and priority, I think PTLQueue or queues effectively equivalent to PTLQueue have been independently rediscovered a number of times. See CB-Queue and BNPBV, below, for instance. But Wilson's dissertation anticipates the basic idea and seems to predate all the others. Gottlieb et al : Basic Techniques for the Efficient Coordination of Very Large Numbers of Cooperating Sequential Processors Orozco et al : CB-Queue in Toward high-throughput algorithms on many-core architectures which appeared in TACO 2012. Meneghin et al : BNPVB family in Performance evaluation of inter-thread communication mechanisms on multicore/multithreaded architecture Dmitry Vyukov : bounded MPMC queue (highly recommended) Alex Otenko : US8607249 (highly related). John Mellor-Crummey : Concurrent queues: Practical fetch-and-phi algorithms. Technical Report 229, Department of Computer Science, University of Rochester Thomasson : FIFO Distributed Bakery Algorithm (very similar to PTLQueue). Scott and Scherer : Dual Data Structures I'll propose an optimization left as an exercise for the reader. Say we wanted to reduce memory usage by eliminating inter-slot padding. Such padding is usually "dark" memory and otherwise unused and wasted. But eliminating the padding leaves us at risk of increased false sharing. Furthermore lets say it was usually the case that the PutCursor and TakeCursor were numerically close to each other. (That's true in some use cases). We might still reduce false sharing by incrementing the cursors by some value other than 1 that is not trivially small and is coprime with the number of slots. Alternatively, we might increment the cursor by one and mask as usual, resulting in a logical index. We then use that logical index value to index into a permutation table, yielding an effective index for use in the slot array. The permutation table would be constructed so that nearby logical indices would map to more distant effective indices. (Open question: what should that permutation look like? Possibly some perversion of a Gray code or De Bruijn sequence might be suitable). As an aside, say we need to busy-wait for some condition as follows : "while C == 0 : Pause". Lets say that C is usually non-zero, so we typically don't wait. But when C happens to be 0 we'll have to spin for some period, possibly brief. We can arrange for the code to be more machine-friendly with respect to the branch predictors by transforming the loop into : "if C == 0 : for { Pause; if C != 0 : break; }". 
Critically, we want to restructure the loop so there's one branch that controls entry and another that controls loop exit. A concern is that your compiler or JIT might be clever enough to transform this back into "while C == 0 : Pause". You can sometimes avoid that by inserting a call to some very cheap "opaque" method that the compiler can't elide or reorder. On Solaris, for instance, you could use: "if C == 0 : { gethrtime(); for { Pause; if C != 0 : break; }}".

It's worth noting the obvious duality between locks and queues. If you have a strict FIFO lock implementation with local spinning and succession by direct handoff, such as MCS or CLH, then you can usually transform that lock into a queue.

Hidden commentary and annotations (invisible in the original post):
* And of course there's a well-known duality between queues and locks, but I'll leave that topic for another blog post.
* Compare and contrast: PTLQ vs PTL and MultiLane.
* Equivalent terms: Turn; seq; sequence; pos; position; ticket.
* Put = lock; deposit. Take = identify and reserve slot; wait; extract and clear; unlock.
* Conceptualize: distinct PutLock and TakeLock implemented as a ticket lock or PTL. The arrival cursors are distinct, but the shared per-slot "Turn" variable provides exclusive role-based access to the slot's mailbox field. put() acquires exclusive access to a slot for purposes of "deposit": it assigns a slot round-robin and then acquires deposit access rights to that slot. take() acquires exclusive access to a slot for purposes of "withdrawal": it assigns a slot round-robin and then acquires withdrawal access rights to that slot. At any given time only one thread can have withdrawal access to a slot, and at any given time only one thread can have deposit access to a slot. It is permissible for T1 to have deposit access and T2 to simultaneously have withdrawal access.
* Vocabulary: round-robin for the purposes of; role-based; access mode; access role; mailslot; mailbox; allocate/assign/identify slot; rights; permission; license; access permission.
* PTL/ticket hybrid: asymmetric usage; owner-oblivious lock-unlock pairing; K-exclusion; add a Grant cursor; pass message m from lock to unlock via the Slots[] array. The cursor performs 2 functions: it serves as the PTL ticket, and it assigns the request to a slot in round-robin fashion.
* Deconstructed protocol (a cleaned-up sketch appears after these notes):
  put() : allocate a slot in round-robin fashion; acquire the PTL for "put" access; store the message into the slot associated with the PTL index.
  take() :
    // doorway step
    seq = FetchAdd(&Grant, 1)
    s = &Slots[seq & Mask]
    // waiting phase
    while s->Turn != seq : Pause
    // extract : wait for s->mailbox to be full
    v = s->mailbox
    s->mailbox = null
    // release the PTL for both "put" and "take" access
    s->Turn = seq + Mask + 1
* Slot round-robin assignment and the lock "doorway" protocol leverage the same cursor and the same FetchAdd operation on that cursor: FetchAdd(&Cursor, 1) provides both round-robin slot assignment/dispersal and the PTL/ticket lock "doorway" step, while the waiting phase is via the "Turn" field in the slot.
* PTLQueue uses 2 cursors -- put and take. Acquire "put" access to a slot via a PTL-like lock; acquire "take" access to a slot via a PTL-like lock. 2 locks, put and take: at most one thread can access a slot's mailbox, and both locks use the same "Turn" field. Like MultiLane there are 2 cursors, put and take, but each slot is a simple 1-capacity mailbox instead of a queue. Borrow the per-slot turn/grant from PTL; this provides strict FIFO. Lock the slot put-vs-put and take-vs-take: at most one put accesses a slot at any one time, and at most one take accesses a slot at any one time -- a reduction to 1-vs-1 instead of N-vs-M concurrency. Per-slot locks for put/take; release put/take by advancing the turn.
* is instrumental in ...
* P-V semaphore vs lock vs K-exclusion.
* See also: FastQueues-excerpt.java and dice-etc/queue-mpmc-bounded-blocking-circular-xadd/
* PTLQueue is the same as PTLQB -- identical.
* Expedient return; ASAP; prompt; immediately.
* Lamport's Bakery algorithm: a doorway step, then a waiting phase. Threads arriving at the doorway obtain a unique ticket number; threads enter in ticket order.
* In the terminology of Reed and Kanodia, a ticket lock corresponds to the busy-wait implementation of a semaphore using an eventcount and a sequencer. It can also be thought of as an optimization of Lamport's bakery lock, which was designed for fault-tolerance rather than performance. Instead of spinning on the release counter, processors using a bakery lock repeatedly examine the tickets of their peers.

    Read the article

  • How can I add headers to DualList control wpf

    - by devnet247
Hi all, I am trying to write a Dual List usercontrol in WPF. I am new to WPF and I am finding it quite difficult. This is something I have put together in a couple of hours. It's not that good, but it's a start. I would be extremely grateful if somebody with WPF experience could improve it. The aim is to simplify the usage as much as possible, but I am kind of stuck. I would like the user of the DualList control to be able to set up headers -- how do you do that? Do I need to expose some dependency properties in my control? At the moment, when loading, the user has to pass an ObservableCollection -- is there a better way? Could you have a look and possibly make some suggestions with some code? Thanks a lot!
xaml
<Grid ShowGridLines="False"> <Grid.ColumnDefinitions> <ColumnDefinition Width="*"/> <ColumnDefinition Width="25px"></ColumnDefinition> <ColumnDefinition Width="*"/> </Grid.ColumnDefinitions> <Grid.RowDefinitions> <RowDefinition Height="*"></RowDefinition> </Grid.RowDefinitions> <StackPanel Orientation="Vertical" Grid.Column="0" Grid.Row="0"> <Label Name="lblLeftTitle" Content="Available"></Label> <ListView Name="lvwLeft"> </ListView> </StackPanel> <WrapPanel Grid.Column="1" Grid.Row="0"> <Button Name="btnMoveRight" Content=">" Width="25" Margin="0,35,0,0" Click="btnMoveRight_Click" /> <Button Name="btnMoveAllRight" Content=">>" Width="25" Margin="0,05,0,0" Click="btnMoveAllRight_Click" /> <Button Name="btnMoveLeft" Content="&lt;" Width="25" Margin="0,25,0,0" Click="btnMoveLeft_Click" /> <Button Name="btnMoveAllLeft" Content="&lt;&lt;" Width="25" Margin="0,05,0,0" Click="btnMoveAllLeft_Click" /> </WrapPanel> <StackPanel Orientation="Vertical" Grid.Column="2" Grid.Row="0"> <Label Name="lblRightTitle" Content="Selected"></Label> <ListView Name="lvwRight"> </ListView> </StackPanel> </Grid>
Client
public partial class DualListTest { public ObservableCollection<ListViewItem> LeftList { get; set; } public ObservableCollection<ListViewItem> RightList { get; set; } public DualListTest() { InitializeComponent(); LoadCustomers(); LoadDualList(); } private void LoadDualList() { dualList1.Load(LeftList, RightList); } private void LoadCustomers() { //Pretend we are getting a list of Customers from a repository. //Some go in the left List(Good Customers) some go in the Right List(Bad Customers).
LeftList = new ObservableCollection<ListViewItem>(); RightList = new ObservableCollection<ListViewItem>(); var customers = GetCustomers(); foreach (var customer in customers) { if (customer.Status == CustomerStatus.Good) { LeftList.Add(new ListViewItem { Content = customer }); } else { RightList.Add(new ListViewItem{Content=customer }); } } } private static IEnumerable<Customer> GetCustomers() { return new List<Customer> { new Customer {Name = "Jo Blogg", Status = CustomerStatus.Good}, new Customer {Name = "Rob Smith", Status = CustomerStatus.Good}, new Customer {Name = "Michel Platini", Status = CustomerStatus.Good}, new Customer {Name = "Roberto Baggio", Status = CustomerStatus.Good}, new Customer {Name = "Gio Surname", Status = CustomerStatus.Bad}, new Customer {Name = "Diego Maradona", Status = CustomerStatus.Bad} }; } } UserControl public partial class DualList:UserControl { public ObservableCollection<ListViewItem> LeftListCollection { get; set; } public ObservableCollection<ListViewItem> RightListCollection { get; set; } public DualList() { InitializeComponent(); } public void Load(ObservableCollection<ListViewItem> leftListCollection, ObservableCollection<ListViewItem> rightListCollection) { LeftListCollection = leftListCollection; RightListCollection = rightListCollection; lvwLeft.ItemsSource = leftListCollection; lvwRight.ItemsSource = rightListCollection; EnableButtons(); } public static DependencyProperty LeftTitleProperty = DependencyProperty.Register("LeftTitle", typeof(string), typeof(Label)); public static DependencyProperty RightTitleProperty = DependencyProperty.Register("RightTitle", typeof(string), typeof(Label)); public static DependencyProperty LeftListProperty = DependencyProperty.Register("LeftList", typeof(ListView), typeof(DualList)); public static DependencyProperty RightListProperty = DependencyProperty.Register("RightList", typeof(ListView), typeof(DualList)); public string LeftTitle { get { return (string)lblLeftTitle.Content; } set { lblLeftTitle.Content = value; } } public string RightTitle { get { return (string)lblRightTitle.Content; } set { lblRightTitle.Content = value; } } public ListView LeftList { get { return lvwLeft; } set { lvwLeft = value; } } public ListView RightList { get { return lvwRight; } set { lvwRight = value; } } private void EnableButtons() { if (lvwLeft.Items.Count > 0) { btnMoveRight.IsEnabled = true; btnMoveAllRight.IsEnabled = true; } else { btnMoveRight.IsEnabled = false; btnMoveAllRight.IsEnabled = false; } if (lvwRight.Items.Count > 0) { btnMoveLeft.IsEnabled = true; btnMoveAllLeft.IsEnabled = true; } else { btnMoveLeft.IsEnabled = false; btnMoveAllLeft.IsEnabled = false; } if (lvwLeft.Items.Count != 0 || lvwRight.Items.Count != 0) return; btnMoveLeft.IsEnabled = false; btnMoveAllLeft.IsEnabled = false; btnMoveRight.IsEnabled = false; btnMoveAllRight.IsEnabled = false; } private void MoveRight() { while (lvwLeft.SelectedItems.Count > 0) { var selectedItem = (ListViewItem)lvwLeft.SelectedItem; LeftListCollection.Remove(selectedItem); RightListCollection.Add(selectedItem); } lvwRight.ItemsSource = RightListCollection; lvwLeft.ItemsSource = LeftListCollection; EnableButtons(); } private void MoveAllRight() { while (lvwLeft.Items.Count > 0) { var item = (ListViewItem)lvwLeft.Items[lvwLeft.Items.Count - 1]; LeftListCollection.Remove(item); RightListCollection.Add(item); } lvwRight.ItemsSource = RightListCollection; lvwLeft.ItemsSource = LeftListCollection; EnableButtons(); } private void MoveAllLeft() { while (lvwRight.Items.Count > 
0) { var item = (ListViewItem)lvwRight.Items[lvwRight.Items.Count - 1]; RightListCollection.Remove(item); LeftListCollection.Add(item); } lvwRight.ItemsSource = RightListCollection; lvwLeft.ItemsSource = LeftListCollection; EnableButtons(); } private void MoveLeft() { while (lvwRight.SelectedItems.Count > 0) { var selectedCustomer = (ListViewItem)lvwRight.SelectedItem; LeftListCollection.Add(selectedCustomer); RightListCollection.Remove(selectedCustomer); } lvwRight.ItemsSource = RightListCollection; lvwLeft.ItemsSource = LeftListCollection; EnableButtons(); } private void btnMoveLeft_Click(object sender, RoutedEventArgs e) { MoveLeft(); } private void btnMoveAllLeft_Click(object sender, RoutedEventArgs e) { MoveAllLeft(); } private void btnMoveRight_Click(object sender, RoutedEventArgs e) { MoveRight(); } private void btnMoveAllRight_Click(object sender, RoutedEventArgs e) { MoveAllRight(); } }

    Read the article

  • Help with Java Program for Prime numbers

    - by Ben
Hello everyone, I was wondering if you can help me with this program. I have been struggling with it for hours and have just trashed my code because the TA doesn't like how I executed it. I am completely hopeless and if anyone can help me out step by step, I would greatly appreciate it.

In this project you will write a Java program that reads a positive integer n from standard input, then prints out the first n prime numbers.

We say that an integer m is divisible by a non-zero integer d if there exists an integer k such that m = k·d, i.e. if d divides evenly into m. Equivalently, m is divisible by d if the remainder of m upon (integer) division by d is zero. We would also express this by saying that d is a divisor of m. A positive integer p is called prime if its only positive divisors are 1 and p. The one exception to this rule is the number 1 itself, which is considered to be non-prime. A positive integer that is not prime is called composite. Euclid showed that there are infinitely many prime numbers. The prime and composite sequences begin as follows:

Primes: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, …
Composites: 1, 4, 6, 8, 9, 10, 12, 14, 15, 16, 18, 20, 21, 22, 24, 25, 26, 27, 28, …

There are many ways to test a number for primality, but perhaps the simplest is to simply do trial divisions. Begin by dividing m by 2, and if it divides evenly, then m is not prime. Otherwise, divide by 3, then 4, then 5, etc. If at any point m is found to be divisible by a number d in the range 2 ≤ d ≤ m-1, then halt, and conclude that m is composite. Otherwise, conclude that m is prime. A moment's thought shows that one need not do any trial divisions by numbers d which are themselves composite. For instance, if a trial division by 2 fails (i.e. has non-zero remainder, so m is odd), then a trial division by 4, 6, or 8, or any even number, must also fail. Thus to test a number m for primality, one need only do trial divisions by prime numbers less than m. Furthermore, it is not necessary to go all the way up to m-1. One need only do trial divisions of m by primes p in the range 2 ≤ p ≤ √m. To see this, suppose m > 1 is composite. Then there exist positive integers a and b such that 1 < a < m, 1 < b < m, and m = ab. But if both a > √m and b > √m, then ab > m, contradicting that m = ab. Hence one of a or b must be less than or equal to √m.

To implement this process in Java you will write a function called isPrime() with the following signature:

static boolean isPrime(int m, int[] P)

This function will return true or false according to whether m is prime or composite. The array argument P will contain a sufficient number of primes to do the testing. Specifically, at the time isPrime() is called, array P must contain (at least) all primes p in the range 2 ≤ p ≤ √m. For instance, to test m = 53 for primality, one must do successive trial divisions by 2, 3, 5, and 7. We go no further since 11 > √53. Thus a precondition for the function call isPrime(53, P) is that P[0] = 2, P[1] = 3, P[2] = 5, and P[3] = 7. The return value in this case would be true since all these divisions fail. Similarly, to test m = 143, one must do trial divisions by 2, 3, 5, 7, and 11 (since 13 > √143). The precondition for the function call isPrime(143, P) is therefore P[0] = 2, P[1] = 3, P[2] = 5, P[3] = 7, and P[4] = 11. The return value in this case would be false since 11 divides 143. Function isPrime() should contain a loop that steps through array P, doing trial divisions.
This loop should terminate either when a trial division succeeds, in which case false is returned, or when the next prime in P is greater than √m, in which case true is returned.

Function main() in this project will read the command line argument n, allocate an int array of length n, fill the array with primes, then print the contents of the array to stdout according to the format described below. In the context of function main(), we will refer to this array as Primes[]. Thus array Primes[] plays a dual role in this project. On the one hand, it is used to collect, store, and print the output data. On the other hand, it is passed to function isPrime() to test new integers for primality. Whenever isPrime() returns true, the newly discovered prime will be placed at the appropriate position in array Primes[]. This process works since, as explained above, the primes needed to test an integer m range only up to √m, and all of these primes (and more) will already be stored in array Primes[] when m is tested. Of course it will be necessary to initialize Primes[0] = 2 manually, then proceed to test 3, 4, … for primality using function isPrime().

The following is an outline of the steps to be performed in function main().
• Check that the user supplied exactly one command line argument which can be interpreted as a positive integer n. If the command line argument is not a single positive integer, your program will print a usage message as specified in the examples below, then exit.
• Allocate array Primes[] of length n and initialize Primes[0] = 2.
• Enter a loop which will discover subsequent primes and store them as Primes[1], Primes[2], Primes[3], …, Primes[n-1]. This loop should contain an inner loop which walks through successive integers and tests them for primality by calling function isPrime() with appropriate arguments.
• Print the contents of array Primes[] to stdout, 10 to a line separated by single spaces. In other words Primes[0] through Primes[9] will go on line 1, Primes[10] through Primes[19] will go on line 2, and so on. Note that if n is not a multiple of 10, then the last line of output will contain fewer than 10 primes.

Your program, which will be called Prime.java, will produce output identical to that of the sample runs below. (As usual % signifies the unix prompt.)

% java Prime
Usage: java Prime [PositiveInteger]
% java Prime xyz
Usage: java Prime [PositiveInteger]
% java Prime 10 20
Usage: java Prime [PositiveInteger]
% java Prime 75
2 3 5 7 11 13 17 19 23 29
31 37 41 43 47 53 59 61 67 71
73 79 83 89 97 101 103 107 109 113
127 131 137 139 149 151 157 163 167 173
179 181 191 193 197 199 211 223 227 229
233 239 241 251 257 263 269 271 277 281
283 293 307 311 313 317 331 337 347 349
353 359 367 373 379
%

As you can see, inappropriate command line argument(s) generate a usage message which is similar to that of many unix commands. (Try doing the more command with no arguments to see such a message.) Your program will include a function called Usage() having signature static void Usage() that prints this message to stderr, then exits. Thus your program will contain three functions in all: main(), isPrime(), and Usage(). Each should be preceded by a comment block giving its name, a short description of its operation, and any necessary preconditions (such as those for isPrime()). See examples on the webpage.
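For reference, here is a minimal Java sketch of the isPrime() trial-division loop described in the assignment above. It is one illustrative reading of the spec, not code from the original post, and it assumes any unfilled tail of P is zero:

    // Sketch of the trial-division test from the spec above.
    // Precondition: P already contains every prime p with 2 <= p <= sqrt(m).
    static boolean isPrime(int m, int[] P) {
        for (int i = 0; i < P.length && P[i] != 0; i++) {
            int p = P[i];
            if (p > m / p) {       // p > sqrt(m) without overflow, since p > m/p  <=>  p*p > m
                return true;       // no prime divisor found up to sqrt(m), so m is prime
            }
            if (m % p == 0) {
                return false;      // trial division succeeded: m is composite
            }
        }
        return true;               // exhausted the stored primes without finding a divisor
    }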

    Read the article

  • Trying to randomise a jQuery content slider

    - by alecrust
    Hi everyone, I'm using a very nice jQuery content slider called Easy Slider on my site that I downloaded from Css Globe. The script is excellent and does just what I want - except I can't make it randomise the list, it always scrolls from left to right or right to left! I'm far from good with JavaScript, so my attempts at solving this have been feeble. Although I'm sure it must be an easy fix! If anyone wouldn't mind taking a glance over the script to see if they can spot what I need to change to make it random it would be greatly appreciated! I've tried contacting the original plugin developer but have had no response yet. The comments on the Easy Slider page didn't bear much fruit either unfortunately. I've pasted the script I'm using on my site below: /* * Easy Slider 1.7 - jQuery plugin * written by Alen Grakalic * http://cssglobe.com/post/4004/easy-slider-15-the-easiest-jquery-plugin-for-sliding * * Copyright (c) 2009 Alen Grakalic (http://cssglobe.com) * Dual licensed under the MIT (MIT-LICENSE.txt) * and GPL (GPL-LICENSE.txt) licenses. * * Built for jQuery library * http://jquery.com * */ (function($) { $.fn.easySlider = function(options){ // default configuration properties var defaults = { prevId: 'prevBtn', prevText: 'Previous', nextId: 'nextBtn', nextText: 'Next', controlsShow: true, controlsBefore: '', controlsAfter: '', controlsFade: true, firstId: 'firstBtn', firstText: 'First', firstShow: false, lastId: 'lastBtn', lastText: 'Last', lastShow: false, vertical: false, speed: 800, auto: false, pause: 7000, continuous: false, numeric: false, numericId: 'controls' }; var options = $.extend(defaults, options); this.each(function() { var obj = $(this); var s = $("li", obj).length; var w = $("li", obj).width(); var h = $("li", obj).height(); var clickable = true; obj.width(w); obj.height(h); obj.css("overflow","hidden"); var ts = s-1; var t = 0; $("ul", obj).css('width',s*w); if(options.continuous){ $("ul", obj).prepend($("ul li:last-child", obj).clone().css("margin-left","-"+ w +"px")); $("ul", obj).append($("ul li:nth-child(2)", obj).clone()); $("ul", obj).css('width',(s+1)*w); }; if(!options.vertical) $("li", obj).css('float','left'); if(options.controlsShow){ var html = options.controlsBefore; if(options.numeric){ html += '<ol id="'+ options.numericId +'"></ol>'; } else { if(options.firstShow) html += '<span id="'+ options.firstId +'"><a href=\"javascript:void(0);\">'+ options.firstText +'</a></span>'; html += ' <span id="'+ options.prevId +'"><a href=\"javascript:void(0);\">'+ options.prevText +'</a></span>'; html += ' <span id="'+ options.nextId +'"><a href=\"javascript:void(0);\">'+ options.nextText +'</a></span>'; if(options.lastShow) html += ' <span id="'+ options.lastId +'"><a href=\"javascript:void(0);\">'+ options.lastText +'</a></span>'; }; html += options.controlsAfter; $(obj).after(html); }; if(options.numeric){ for(var i=0;i<s;i++){ $(document.createElement("li")) .attr('id',options.numericId + (i+1)) .html('<a rel='+ i +' href=\"javascript:void(0);\">'+ (i+1) +'</a>') .appendTo($("#"+ options.numericId)) .click(function(){ animate($("a",$(this)).attr('rel'),true); }); }; } else { $("a","#"+options.nextId).click(function(){ animate("next",true); }); $("a","#"+options.prevId).click(function(){ animate("prev",true); }); $("a","#"+options.firstId).click(function(){ animate("first",true); }); $("a","#"+options.lastId).click(function(){ animate("last",true); }); }; function setCurrent(i){ i = parseInt(i)+1; $("li", "#" + options.numericId).removeClass("current"); 
$("li#" + options.numericId + i).addClass("current"); }; function adjust(){ if(t>ts) t=0; if(t<0) t=ts; if(!options.vertical) { $("ul",obj).css("margin-left",(t*w*-1)); } else { $("ul",obj).css("margin-left",(t*h*-1)); } clickable = true; if(options.numeric) setCurrent(t); }; function animate(dir,clicked){ if (clickable){ clickable = false; var ot = t; switch(dir){ case "next": t = (ot>=ts) ? (options.continuous ? t+1 : ts) : t+1; break; case "prev": t = (t<=0) ? (options.continuous ? t-1 : 0) : t-1; break; case "first": t = 0; break; case "last": t = ts; break; default: t = dir; break; }; var diff = Math.abs(ot-t); var speed = diff*options.speed; if(!options.vertical) { p = (t*w*-1); $("ul",obj).animate( { marginLeft: p }, { queue:false, duration:speed, complete:adjust } ); } else { p = (t*h*-1); $("ul",obj).animate( { marginTop: p }, { queue:false, duration:speed, complete:adjust } ); }; if(!options.continuous && options.controlsFade){ if(t==ts){ $("a","#"+options.nextId).hide(); $("a","#"+options.lastId).hide(); } else { $("a","#"+options.nextId).show(); $("a","#"+options.lastId).show(); }; if(t==0){ $("a","#"+options.prevId).hide(); $("a","#"+options.firstId).hide(); } else { $("a","#"+options.prevId).show(); $("a","#"+options.firstId).show(); }; }; if(clicked) clearTimeout(timeout); if(options.auto && dir=="next" && !clicked){; timeout = setTimeout(function(){ animate("next",false); },diff*options.speed+options.pause); }; }; }; // init var timeout; if(options.auto){; timeout = setTimeout(function(){ animate("next",false); },options.pause); }; if(options.numeric) setCurrent(0); if(!options.continuous && options.controlsFade){ $("a","#"+options.prevId).hide(); $("a","#"+options.firstId).hide(); }; }); }; })(jQuery); Many thanks again! Alec

    Read the article

< Previous Page | 165 166 167 168 169 170  | Next Page >