Search Results

Search found 1344 results on 54 pages for 'it 07'.

Page 50/54

  • high load average, high wait, dmesg raid error messages (debian nfs server)

    - by John Stumbles
    Debian 6 on HP proliant (2 CPU) with raid (2*1.5T RAID1 + 2*2T RAID1 joined RAID0 to make 3.5T) running mainly nfs & imapd (plus samba for windows share & local www for previewing web pages); with local ubuntu desktop client mounting $HOME, laptops accessing imap & odd files (e.g. videos) via nfs/smb; boxes connected 100baseT or wifi via home router/switch uname -a Linux prole 2.6.32-5-686 #1 SMP Wed Jan 11 12:29:30 UTC 2012 i686 GNU/Linux Setup has been working for months but prone to intermittently going very slow (user experience on desktop mounting $HOME from server, or laptop playing videos) and now consistently so bad I've had to delve into it to try to find what's wrong(!) Server seems OK at low load e.g. (laptop) client (with $HOME on local disk) connecting to server's imapd and nfs mounting RAID to access 1 file: top shows load ~ 0.1 or less, 0 wait but when (desktop) client mounts $HOME and starts user KDE session (all accessing server) then top shows e.g. top - 13:41:17 up 3:43, 3 users, load average: 9.29, 9.55, 8.27 Tasks: 158 total, 1 running, 157 sleeping, 0 stopped, 0 zombie Cpu(s): 0.4%us, 0.4%sy, 0.0%ni, 49.0%id, 49.7%wa, 0.0%hi, 0.5%si, 0.0%st Mem: 903856k total, 851784k used, 52072k free, 171152k buffers Swap: 0k total, 0k used, 0k free, 476896k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 3935 root 20 0 2456 1088 784 R 2 0.1 0:00.02 top 1 root 20 0 2028 680 584 S 0 0.1 0:01.14 init 2 root 20 0 0 0 0 S 0 0.0 0:00.00 kthreadd 3 root RT 0 0 0 0 S 0 0.0 0:00.00 migration/0 4 root 20 0 0 0 0 S 0 0.0 0:00.12 ksoftirqd/0 5 root RT 0 0 0 0 S 0 0.0 0:00.00 watchdog/0 6 root RT 0 0 0 0 S 0 0.0 0:00.00 migration/1 7 root 20 0 0 0 0 S 0 0.0 0:00.16 ksoftirqd/1 8 root RT 0 0 0 0 S 0 0.0 0:00.00 watchdog/1 9 root 20 0 0 0 0 S 0 0.0 0:00.42 events/0 10 root 20 0 0 0 0 S 0 0.0 0:02.26 events/1 11 root 20 0 0 0 0 S 0 0.0 0:00.00 cpuset 12 root 20 0 0 0 0 S 0 0.0 0:00.00 khelper 13 root 20 0 0 0 0 S 0 0.0 0:00.00 netns 14 root 20 0 0 0 0 S 0 0.0 0:00.00 async/mgr 15 root 20 0 0 0 0 S 0 0.0 0:00.00 pm 16 root 20 0 0 0 0 S 0 0.0 0:00.02 sync_supers 17 root 20 0 0 0 0 S 0 0.0 0:00.02 bdi-default 18 root 20 0 0 0 0 S 0 0.0 0:00.00 kintegrityd/0 19 root 20 0 0 0 0 S 0 0.0 0:00.00 kintegrityd/1 20 root 20 0 0 0 0 S 0 0.0 0:00.02 kblockd/0 21 root 20 0 0 0 0 S 0 0.0 0:00.08 kblockd/1 22 root 20 0 0 0 0 S 0 0.0 0:00.00 kacpid 23 root 20 0 0 0 0 S 0 0.0 0:00.00 kacpi_notify 24 root 20 0 0 0 0 S 0 0.0 0:00.00 kacpi_hotplug 25 root 20 0 0 0 0 S 0 0.0 0:00.00 kseriod 28 root 20 0 0 0 0 S 0 0.0 0:04.19 kondemand/0 29 root 20 0 0 0 0 S 0 0.0 0:02.93 kondemand/1 30 root 20 0 0 0 0 S 0 0.0 0:00.00 khungtaskd 31 root 20 0 0 0 0 S 0 0.0 0:00.18 kswapd0 32 root 25 5 0 0 0 S 0 0.0 0:00.00 ksmd 33 root 20 0 0 0 0 S 0 0.0 0:00.00 aio/0 34 root 20 0 0 0 0 S 0 0.0 0:00.00 aio/1 35 root 20 0 0 0 0 S 0 0.0 0:00.00 crypto/0 36 root 20 0 0 0 0 S 0 0.0 0:00.00 crypto/1 203 root 20 0 0 0 0 S 0 0.0 0:00.00 ksuspend_usbd 204 root 20 0 0 0 0 S 0 0.0 0:00.00 khubd 205 root 20 0 0 0 0 S 0 0.0 0:00.00 ata/0 206 root 20 0 0 0 0 S 0 0.0 0:00.00 ata/1 207 root 20 0 0 0 0 S 0 0.0 0:00.14 ata_aux 208 root 20 0 0 0 0 S 0 0.0 0:00.01 scsi_eh_0 dmesg suggests there's a disk problem: .............. 
(previous episode) [13276.966004] raid1:md0: read error corrected (8 sectors at 489900360 on sdc7) [13276.966043] raid1: sdb7: redirecting sector 489898312 to another mirror [13279.569186] ata4.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x0 [13279.569211] ata4.00: irq_stat 0x40000008 [13279.569230] ata4.00: failed command: READ FPDMA QUEUED [13279.569257] ata4.00: cmd 60/08:00:00:6a:05/00:00:23:00:00/40 tag 0 ncq 4096 in [13279.569262] res 41/40:00:05:6a:05/00:00:23:00:00/40 Emask 0x409 (media error) <F> [13279.569306] ata4.00: status: { DRDY ERR } [13279.569321] ata4.00: error: { UNC } [13279.575362] ata4.00: configured for UDMA/133 [13279.575388] ata4: EH complete [13283.169224] ata4.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x0 [13283.169246] ata4.00: irq_stat 0x40000008 [13283.169263] ata4.00: failed command: READ FPDMA QUEUED [13283.169289] ata4.00: cmd 60/08:00:00:6a:05/00:00:23:00:00/40 tag 0 ncq 4096 in [13283.169294] res 41/40:00:07:6a:05/00:00:23:00:00/40 Emask 0x409 (media error) <F> [13283.169331] ata4.00: status: { DRDY ERR } [13283.169345] ata4.00: error: { UNC } [13283.176071] ata4.00: configured for UDMA/133 [13283.176104] ata4: EH complete [13286.224814] ata4.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x0 [13286.224837] ata4.00: irq_stat 0x40000008 [13286.224853] ata4.00: failed command: READ FPDMA QUEUED [13286.224879] ata4.00: cmd 60/08:00:00:6a:05/00:00:23:00:00/40 tag 0 ncq 4096 in [13286.224884] res 41/40:00:06:6a:05/00:00:23:00:00/40 Emask 0x409 (media error) <F> [13286.224922] ata4.00: status: { DRDY ERR } [13286.224935] ata4.00: error: { UNC } [13286.231277] ata4.00: configured for UDMA/133 [13286.231303] ata4: EH complete [13288.802623] ata4.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x0 [13288.802646] ata4.00: irq_stat 0x40000008 [13288.802662] ata4.00: failed command: READ FPDMA QUEUED [13288.802688] ata4.00: cmd 60/08:00:00:6a:05/00:00:23:00:00/40 tag 0 ncq 4096 in [13288.802693] res 41/40:00:05:6a:05/00:00:23:00:00/40 Emask 0x409 (media error) <F> [13288.802731] ata4.00: status: { DRDY ERR } [13288.802745] ata4.00: error: { UNC } [13288.808901] ata4.00: configured for UDMA/133 [13288.808927] ata4: EH complete [13291.380430] ata4.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x0 [13291.380453] ata4.00: irq_stat 0x40000008 [13291.380470] ata4.00: failed command: READ FPDMA QUEUED [13291.380496] ata4.00: cmd 60/08:00:00:6a:05/00:00:23:00:00/40 tag 0 ncq 4096 in [13291.380501] res 41/40:00:05:6a:05/00:00:23:00:00/40 Emask 0x409 (media error) <F> [13291.380577] ata4.00: status: { DRDY ERR } [13291.380594] ata4.00: error: { UNC } [13291.386517] ata4.00: configured for UDMA/133 [13291.386543] ata4: EH complete [13294.347147] ata4.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x0 [13294.347169] ata4.00: irq_stat 0x40000008 [13294.347186] ata4.00: failed command: READ FPDMA QUEUED [13294.347211] ata4.00: cmd 60/08:00:00:6a:05/00:00:23:00:00/40 tag 0 ncq 4096 in [13294.347217] res 41/40:00:06:6a:05/00:00:23:00:00/40 Emask 0x409 (media error) <F> [13294.347254] ata4.00: status: { DRDY ERR } [13294.347268] ata4.00: error: { UNC } [13294.353556] ata4.00: configured for UDMA/133 [13294.353583] sd 3:0:0:0: [sdc] Unhandled sense code [13294.353590] sd 3:0:0:0: [sdc] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE [13294.353599] sd 3:0:0:0: [sdc] Sense Key : Medium Error [current] [descriptor] [13294.353610] Descriptor sense data with sense descriptors (in hex): [13294.353616] 72 03 11 04 00 00 00 0c 00 0a 80 00 00 00 00 00 [13294.353635] 23 05 
6a 06 [13294.353644] sd 3:0:0:0: [sdc] Add. Sense: Unrecovered read error - auto reallocate failed [13294.353657] sd 3:0:0:0: [sdc] CDB: Read(10): 28 00 23 05 6a 00 00 00 08 00 [13294.353675] end_request: I/O error, dev sdc, sector 587557382 [13294.353726] ata4: EH complete [13294.366953] raid1:md0: read error corrected (8 sectors at 489900544 on sdc7) [13294.366992] raid1: sdc7: redirecting sector 489898496 to another mirror and they're happening quite frequently, which I guess is liable to account for the performance problem(?) # dmesg | grep mirror [12433.561822] raid1: sdc7: redirecting sector 489900464 to another mirror [12449.428933] raid1: sdb7: redirecting sector 489900504 to another mirror [12464.807016] raid1: sdb7: redirecting sector 489900512 to another mirror [12480.196222] raid1: sdb7: redirecting sector 489900520 to another mirror [12495.585413] raid1: sdb7: redirecting sector 489900528 to another mirror [12510.974424] raid1: sdb7: redirecting sector 489900536 to another mirror [12526.374933] raid1: sdb7: redirecting sector 489900544 to another mirror [12542.619938] raid1: sdc7: redirecting sector 489900608 to another mirror [12559.431328] raid1: sdc7: redirecting sector 489900616 to another mirror [12576.553866] raid1: sdc7: redirecting sector 489900624 to another mirror [12592.065265] raid1: sdc7: redirecting sector 489900632 to another mirror [12607.621121] raid1: sdc7: redirecting sector 489900640 to another mirror [12623.165856] raid1: sdc7: redirecting sector 489900648 to another mirror [12638.699474] raid1: sdc7: redirecting sector 489900656 to another mirror [12655.610881] raid1: sdc7: redirecting sector 489900664 to another mirror [12672.255617] raid1: sdc7: redirecting sector 489900672 to another mirror [12672.288746] raid1: sdc7: redirecting sector 489900680 to another mirror [12672.332376] raid1: sdc7: redirecting sector 489900688 to another mirror [12672.362935] raid1: sdc7: redirecting sector 489900696 to another mirror [12674.201177] raid1: sdc7: redirecting sector 489900704 to another mirror [12698.045050] raid1: sdc7: redirecting sector 489900712 to another mirror [12698.089309] raid1: sdc7: redirecting sector 489900720 to another mirror [12698.111999] raid1: sdc7: redirecting sector 489900728 to another mirror [12698.134006] raid1: sdc7: redirecting sector 489900736 to another mirror [12719.034376] raid1: sdc7: redirecting sector 489900744 to another mirror [12734.545775] raid1: sdc7: redirecting sector 489900752 to another mirror [12734.590014] raid1: sdc7: redirecting sector 489900760 to another mirror [12734.624050] raid1: sdc7: redirecting sector 489900768 to another mirror [12734.647308] raid1: sdc7: redirecting sector 489900776 to another mirror [12734.664657] raid1: sdc7: redirecting sector 489900784 to another mirror [12734.710642] raid1: sdc7: redirecting sector 489900792 to another mirror [12734.721919] raid1: sdc7: redirecting sector 489900800 to another mirror [12734.744732] raid1: sdc7: redirecting sector 489900808 to another mirror [12734.779330] raid1: sdc7: redirecting sector 489900816 to another mirror [12782.604564] raid1: sdb7: redirecting sector 1242934216 to another mirror [12798.264153] raid1: sdc7: redirecting sector 1242935080 to another mirror [13245.832193] raid1: sdb7: redirecting sector 489898296 to another mirror [13261.376929] raid1: sdb7: redirecting sector 489898304 to another mirror [13276.966043] raid1: sdb7: redirecting sector 489898312 to another mirror [13294.366992] raid1: sdc7: redirecting sector 489898496 to another 
mirror although the arrays are still running on all disks - they haven't given up on any yet: # cat /proc/mdstat Personalities : [raid1] [raid0] md10 : active raid0 md0[0] md1[1] 3368770048 blocks super 1.2 512k chunks md1 : active raid1 sde2[2] sdd2[1] 1464087824 blocks super 1.2 [2/2] [UU] md0 : active raid1 sdb7[0] sdc7[2] 1904684920 blocks super 1.2 [2/2] [UU] unused devices: <none> So I think I have some idea what the problem is but I am not a linux sysadmin expert by the remotest stretch of the imagination and would really appreciate some clue checking here with my diagnosis and what do I need to do: obviously I need to source another drive for sdc. (I'm guessing I could buy a larger drive if the price is right: I'm thinking that one day I'll need to grow the size of the array and that would be one less drive to replace with a larger one) then use mdadm to fail out the existing sdc, remove it and fit the new drive fdisk the new drive with the same size partition for the array as the old one had use mdadm to add the new drive into the array that sound OK?
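
    Assuming that replacement plan is followed, here is a hedged sketch of the mdadm sequence it describes (a sketch only: device names are the ones from the question and should be re-checked with lsblk and mdadm --detail before doing anything destructive, and the new disk is assumed to reappear as /dev/sdc):

      # fail and remove the bad member from the RAID1 array
      mdadm /dev/md0 --fail /dev/sdc7
      mdadm /dev/md0 --remove /dev/sdc7
      # power down, swap the drive, boot, then give it an sdc7 partition at least as large as before;
      # copying sdb's partition table only makes sense if the replacement is the same size or larger
      sfdisk -d /dev/sdb | sfdisk /dev/sdc
      # add the new partition back and watch the resync
      mdadm /dev/md0 --add /dev/sdc7
      watch cat /proc/mdstat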

    Read the article

  • need help with automating a CMD java tool which queries alexa AWS using batch

    - by Eli.C
    Hi everyone, I need to get all available info on 600 URLs from "Alexa Web Information Service". I downloaded the java tool and I'm able to run a single query each time with a single switch/Response Group. I would like to ask how to write a batch file that would automate the process. The java tool runs from the CMD with the following:
      C:\>java UrlInfo (key1) (key2) (URL) (Response Group)
    UrlInfo - constant
    key1 - constant
    key2 - constant
    URL - variable (I guess I need to use the "<" sign to read from a file)
    Response Group - variable (14 total, and I need to run each Response Group on each of the URLs once)
    The app returns data in clear text formatted as XML after each query; here is an example:
      C:\>java UrlInfo (key1) (key2) www.url.com Rank
    Response:
      <?xml version="1.0"?>
      <aws:UrlInfoResponse xmlns:aws="http://alexa.amazonaws.com/doc/2005-10-05/">
        <aws:Response xmlns:aws="http://awis.amazonaws.com/doc/2005-07-11">
          <aws:OperationRequest>
            <aws:RequestId>ec2b6-e8ae-b392</aws:RequestId>
          </aws:OperationRequest>
          <aws:UrlInfoResult>
            <aws:Alexa>
              <aws:TrafficData>
                <aws:DataUrl type="canonical">url.com/</aws:DataUrl>
                <aws:Rank>472906</aws:Rank>
              </aws:TrafficData>
            </aws:Alexa>
          </aws:UrlInfoResult>
          <aws:ResponseStatus xmlns:aws="http://alexa.amazonaws.com/doc/2005-10-05/">
            <aws:StatusCode>Success</aws:StatusCode>
          </aws:ResponseStatus>
        </aws:Response>
      </aws:UrlInfoResponse>
    Any help would be really appreciated. Thanks and regards, Eli.C
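
    A minimal sketch of the batch file being asked about, assuming the two keys are pasted in place of KEY1/KEY2 and that the 600 URLs and the 14 Response Group names are listed one per line in urls.txt and groups.txt (both hypothetical file names); each query's XML output is appended to results.xml:

      @echo off
      rem run every Response Group against every URL, collecting all XML in one file
      for /f "usebackq delims=" %%U in ("urls.txt") do (
          for /f "usebackq delims=" %%G in ("groups.txt") do (
              java UrlInfo KEY1 KEY2 %%U %%G >> results.xml
          )
      )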

    Read the article

  • android app does not show up on my device or the emulator in eclipse

    - by Sam
    hey everyone, I have no errors in my app-code what so ever, but when i try to run in on either my cell or my emulator/the avd in eclipse i can't run it because it doesn't show up on either one. this is my console output: [2011-02-04 08:14:58 - Versuch] Uploading Versuch.apk onto device 'CB511L2WTB' [2011-02-04 08:14:58 - Versuch] Installing Versuch.apk... [2011-02-04 08:15:01 - Versuch] Success! [2011-02-04 08:15:01 - Versuch] \Versuch\bin\Versuch.apk installed on device [2011-02-04 08:15:01 - Versuch] Done! and this is my LogCat output, which tells me nothing, but you are the experts ;) 02-04 08:18:10.020: DEBUG/dalvikvm(22167): GC freed 2576 objects / 559120 bytes in 37ms 02-04 08:18:10.700: DEBUG/dalvikvm(6709): GC freed 7692 objects / 478912 bytes in 41ms 02-04 08:18:11.170: DEBUG/dalvikvm(31774): GC freed 3367 objects / 163464 bytes in 122ms 02-04 08:18:13.230: DEBUG/dalvikvm(22167): GC freed 2790 objects / 552328 bytes in 38ms 02-04 08:18:14.650: DEBUG/dalvikvm(6709): GC freed 8443 objects / 540440 bytes in 39ms 02-04 08:18:16.260: DEBUG/dalvikvm(31921): GC freed 214 objects / 9824 bytes in 216ms 02-04 08:18:16.670: DEBUG/dalvikvm(22167): GC freed 3232 objects / 561256 bytes in 40ms 02-04 08:18:18.600: DEBUG/dalvikvm(6709): GC freed 7718 objects / 481952 bytes in 39ms 02-04 08:18:19.210: DEBUG/dalvikvm(1129): GC freed 6898 objects / 275328 bytes in 109ms 02-04 08:18:19.690: DEBUG/dalvikvm(22167): GC freed 2968 objects / 571232 bytes in 39ms 02-04 08:18:21.440: DEBUG/dalvikvm(1212): GC freed 1020 objects / 49328 bytes in 395ms 02-04 08:18:22.570: DEBUG/dalvikvm(6709): GC freed 7893 objects / 495616 bytes in 40ms 02-04 08:18:23.060: DEBUG/dalvikvm(22167): GC freed 3117 objects / 561912 bytes in 41ms 02-04 08:18:25.860: DEBUG/dalvikvm(22167): GC freed 2924 objects / 558448 bytes in 36ms 02-04 08:18:26.350: DEBUG/dalvikvm(32098): GC freed 4662 objects / 495496 bytes in 290ms 02-04 08:18:26.410: DEBUG/dalvikvm(22167): GC freed 1077 objects / 130680 bytes in 33ms 02-04 08:18:27.080: DEBUG/dalvikvm(6709): GC freed 7912 objects / 485368 bytes in 40ms 02-04 08:18:28.190: DEBUG/dalvikvm(22167): GC freed 953 objects / 767272 bytes in 33ms 02-04 08:18:29.500: DEBUG/dalvikvm(1129): GC freed 6756 objects / 270480 bytes in 105ms 02-04 08:18:30.500: WARN/System.err(22536): java.lang.Exception: You must call com.mercuryintermedia.productconfiguration.initialize() first 02-04 08:18:30.670: WARN/System.err(22536): at com.mercuryintermedia.ProductConfiguration.getProductName(ProductConfiguration.java:136) 02-04 08:18:30.670: WARN/System.err(22536): at com.mercuryintermedia.api.rest.Item.getPublishingContainersItems(Item.java:15) 02-04 08:18:30.670: WARN/System.err(22536): at com.mercuryintermedia.mflow.ContainerHelper.getContainerFromServer(ContainerHelper.java:68) 02-04 08:18:30.670: WARN/System.err(22536): at com.mercuryintermedia.mflow.ContainerHelper.run(ContainerHelper.java:46) 02-04 08:18:31.090: DEBUG/dalvikvm(6709): GC freed 10545 objects / 682480 bytes in 49ms 02-04 08:18:31.120: DEBUG/dalvikvm(1813): GC freed 5970 objects / 310912 bytes in 60ms 02-04 08:18:31.320: DEBUG/dalvikvm(22167): GC freed 2468 objects / 539520 bytes in 39ms 02-04 08:18:34.110: DEBUG/dalvikvm(22167): GC freed 2879 objects / 569008 bytes in 35ms 02-04 08:18:34.920: DEBUG/dalvikvm(6709): GC freed 7029 objects / 424632 bytes in 35ms 02-04 08:18:36.150: DEBUG/dalvikvm(9060): GC freed 564 objects / 27840 bytes in 89ms 02-04 08:18:36.630: DEBUG/dalvikvm(22167): GC freed 2437 objects / 554000 bytes in 35ms 02-04 
08:18:38.760: DEBUG/dalvikvm(6709): GC freed 8309 objects / 545032 bytes in 36ms 02-04 08:18:39.270: DEBUG/dalvikvm(1129): GC freed 6958 objects / 278352 bytes in 107ms 02-04 08:18:39.970: DEBUG/dalvikvm(22167): GC freed 2915 objects / 560312 bytes in 38ms 02-04 08:18:41.260: DEBUG/dalvikvm(6184): GC freed 373 objects / 26152 bytes in 205ms 02-04 08:18:42.780: DEBUG/dalvikvm(6709): GC freed 7212 objects / 447696 bytes in 36ms 02-04 08:18:43.160: DEBUG/dalvikvm(22167): GC freed 3106 objects / 561824 bytes in 39ms 02-04 08:18:46.310: DEBUG/dalvikvm(22167): GC freed 3110 objects / 564080 bytes in 45ms 02-04 08:18:46.650: DEBUG/dalvikvm(6709): GC freed 7508 objects / 468832 bytes in 36ms 02-04 08:18:48.820: DEBUG/dalvikvm(31712): GC freed 13795 objects / 828232 bytes in 203ms 02-04 08:18:49.040: DEBUG/dalvikvm(1129): GC freed 6918 objects / 276224 bytes in 109ms 02-04 08:18:49.640: DEBUG/dalvikvm(22167): GC freed 2952 objects / 562168 bytes in 37ms 02-04 08:18:50.630: DEBUG/dalvikvm(6709): GC freed 8332 objects / 549680 bytes in 35ms 02-04 08:18:52.770: DEBUG/dalvikvm(22167): GC freed 3108 objects / 563192 bytes in 37ms 02-04 08:18:54.400: DEBUG/dalvikvm(6709): GC freed 7509 objects / 469016 bytes in 35ms 02-04 08:18:55.900: DEBUG/dalvikvm(22167): GC freed 3121 objects / 572920 bytes in 38ms 02-04 08:18:58.150: DEBUG/dalvikvm(6709): GC freed 7408 objects / 465456 bytes in 35ms 02-04 08:18:58.710: DEBUG/dalvikvm(1129): GC freed 6908 objects / 276440 bytes in 107ms 02-04 08:18:59.190: DEBUG/dalvikvm(22167): GC freed 3160 objects / 563144 bytes in 38ms 02-04 08:19:02.080: DEBUG/dalvikvm(6709): GC freed 7436 objects / 468040 bytes in 36ms 02-04 08:19:02.380: DEBUG/dalvikvm(22167): GC freed 3104 objects / 557600 bytes in 39ms 02-04 08:19:05.050: DEBUG/dalvikvm(22167): GC freed 2860 objects / 570072 bytes in 35ms 02-04 08:19:05.810: DEBUG/dalvikvm(6709): GC freed 7508 objects / 469080 bytes in 35ms 02-04 08:19:06.500: DEBUG/skia(22167): --- decoder->decode returned false 02-04 08:19:07.960: DEBUG/dalvikvm(22167): GC freed 2747 objects / 520008 bytes in 36ms 02-04 08:19:08.180: DEBUG/dalvikvm(1129): GC freed 7866 objects / 317304 bytes in 107ms 02-04 08:19:09.540: DEBUG/dalvikvm(6709): GC freed 8220 objects / 539688 bytes in 36ms 02-04 08:19:10.810: DEBUG/dalvikvm(22167): GC freed 2898 objects / 596824 bytes in 37ms 02-04 08:19:13.360: DEBUG/dalvikvm(22167): GC freed 2503 objects / 398936 bytes in 35ms 02-04 08:19:13.370: INFO/dalvikvm-heap(22167): Grow heap (frag case) to 5.029MB for 570264-byte allocation 02-04 08:19:13.400: DEBUG/dalvikvm(22167): GC freed 702 objects / 24976 bytes in 31ms 02-04 08:19:13.400: DEBUG/skia(22167): --- decoder->decode returned false 02-04 08:19:13.540: DEBUG/dalvikvm(6709): GC freed 7481 objects / 466544 bytes in 36ms 02-04 08:19:15.600: DEBUG/WifiService(1129): got ACTION_DEVICE_IDLE 02-04 08:19:15.960: INFO/wpa_supplicant(2522): CTRL-EVENT-DRIVER-STATE STOPPED 02-04 08:19:15.960: VERBOSE/WifiMonitor(1129): Event [CTRL-EVENT-DRIVER-STATE STOPPED] 02-04 08:19:17.270: DEBUG/dalvikvm(22167): GC freed 2372 objects / 1266992 bytes in 36ms 02-04 08:19:17.520: DEBUG/dalvikvm(6709): GC freed 7996 objects / 519128 bytes in 37ms 02-04 08:19:18.150: DEBUG/dalvikvm(1129): GC freed 7110 objects / 285032 bytes in 108ms 02-04 08:19:20.460: DEBUG/dalvikvm(22167): GC freed 3327 objects / 565264 bytes in 36ms 02-04 08:19:21.250: DEBUG/dalvikvm(6709): GC freed 7632 objects / 486024 bytes in 37ms 02-04 08:19:26.470: DEBUG/dalvikvm(31774): GC freed 345 objects / 16160 bytes in 96ms 02-04 
08:19:30.423: WARN/System.err(22536): java.lang.Exception: You must call com.mercuryintermedia.productconfiguration.initialize() first 02-04 08:19:30.423: WARN/System.err(22536): at com.mercuryintermedia.ProductConfiguration.getProductName(ProductConfiguration.java:136) 02-04 08:19:30.423: WARN/System.err(22536): at com.mercuryintermedia.api.rest.Item.getPublishingContainersItems(Item.java:15) 02-04 08:19:30.423: WARN/System.err(22536): at com.mercuryintermedia.mflow.ContainerHelper.getContainerFromServer(ContainerHelper.java:68) 02-04 08:19:30.423: WARN/System.err(22536): at com.mercuryintermedia.mflow.ContainerHelper.run(ContainerHelper.java:46) 02-04 08:20:05.280: DEBUG/dalvikvm(1813): GC freed 741 objects / 36840 bytes in 91ms 02-04 08:20:23.580: DEBUG/WifiService(1129): ACTION_BATTERY_CHANGED pluggedType: 2 02-04 08:20:30.423: WARN/System.err(22536): java.lang.Exception: You must call com.mercuryintermedia.productconfiguration.initialize() first 02-04 08:20:30.423: WARN/System.err(22536): at com.mercuryintermedia.ProductConfiguration.getProductName(ProductConfiguration.java:136) 02-04 08:20:30.423: WARN/System.err(22536): at com.mercuryintermedia.api.rest.Item.getPublishingContainersItems(Item.java:15) 02-04 08:20:30.423: WARN/System.err(22536): at com.mercuryintermedia.mflow.ContainerHelper.getContainerFromServer(ContainerHelper.java:68) 02-04 08:20:30.423: WARN/System.err(22536): at com.mercuryintermedia.mflow.ContainerHelper.run(ContainerHelper.java:46) 02-04 08:20:53.970: INFO/FastDormancyManager(1129): Fast Dormant executed. ExecuteCount:2683 NonExecuteCount:25773 I really hope you can help me.
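
    The console output shows the .apk installing successfully, so one hedged guess is the manifest rather than the code: without a MAIN/LAUNCHER intent filter on some activity, an app installs but never appears in the launcher on either the device or the emulator. A sketch of the relevant AndroidManifest.xml fragment (the activity name here is hypothetical):

      <activity android:name=".MainActivity" android:label="@string/app_name">
          <intent-filter>
              <action android:name="android.intent.action.MAIN" />
              <category android:name="android.intent.category.LAUNCHER" />
          </intent-filter>
      </activity>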

    Read the article

  • Setting up Apache 2.2 + FastCGI + SuExec + PHP-FPM on Centos 6

    - by mr1031011
    I'm trying to follow this very detailed instruction here, I simply changed from www-data user to apache user, and is using /var/www/hosts/sitename/public_html instead of /home/user/public_html However, I spent the whole day trying to figure out why the php file content is displayed without being parsed correctly. I just cant's seem to figure this out. Below is my current config: /etc/httpd/conf.d/fastcgi.conf User apache Group apache LoadModule fastcgi_module modules/mod_fastcgi.so # dir for IPC socket files FastCgiIpcDir /var/run/mod_fastcgi # wrap all fastcgi script calls in suexec FastCgiWrapper On # global FastCgiConfig can be overridden by FastCgiServer options in vhost config FastCgiConfig -idle-timeout 20 -maxClassProcesses 1 # sample PHP config # see /usr/share/doc/mod_fastcgi-2.4.6 for php-wrapper script # don't forget to disable mod_php in /etc/httpd/conf.d/php.conf! # # to enable privilege separation, add a "SuexecUserGroup" directive # and chown the php-wrapper script and parent directory accordingly # see also http://www.brandonturner.net/blog/2009/07/fastcgi_with_php_opcode_cache/ # FastCgiServer /var/www/www-data/php5-fcgi #AddType application/x-httpd-php .php AddHandler php-fcgi .php Action php-fcgi /fcgi-bin/php5-fcgi Alias /fcgi-bin/ /var/www/www-data/ #FastCgiExternalServer /usr/lib/cgi-bin/php5-fcgi -socket /tmp/php5-fpm.sock -pass-header Authorization #DirectoryIndex index.php # <Location /fcgi-bin/> # Order Deny,Allow # Deny from All # Allow from env=REDIRECT_STATUS SetHandler fcgid-script Options +ExecCGI </Location> /etc/httpd/conf.d/vhost.conf <VirtualHost> DirectoryIndex index.php index.html index.shtml index.cgi SuexecUserGroup www.mysite.com mygroup Alias /fcgi-bin/ /var/www/www-data/www.mysite.com/ DocumentRoot /var/www/hosts/mysite.com/w/w/w/www/ <Directory /var/www/hosts/mysite.com/w/w/w/www/> Options -Indexes FollowSymLinks AllowOverride None Order allow,deny allow from all </Directory> </VirtualHost> PS: 1. Also, with PHP5.5, do I even need FPM or is it already included? 2. I'm using mod_fastcgi, not sure if this is the problem and it I should switch to mod_fcgid? There seems to be conflicting records on the internet considering which one is better. I have many virtual hosts running on the machine and hope to be able to provide each user with their own opcache
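
    On the two PS questions: FPM has been bundled with PHP itself since 5.3.3, so with 5.5 it only needs the php-fpm package/SAPI enabled rather than a separate download; and mod_fastcgi versus mod_fcgid is not, by itself, why the source comes back unparsed. Raw PHP source in the browser usually means no handler ends up attached to .php at all. A hedged sketch of a simpler wiring that skips the suexec wrapper and points mod_fastcgi straight at a PHP-FPM socket (Apache 2.2 syntax; the socket path and the /var/www/cgi-bin location are assumptions, and mod_php must stay disabled in php.conf):

      AddHandler php-fcgi .php
      Action php-fcgi /fcgi-bin/php5-fcgi
      Alias /fcgi-bin/php5-fcgi /var/www/cgi-bin/php5-fcgi
      FastCgiExternalServer /var/www/cgi-bin/php5-fcgi -socket /var/run/php5-fpm.sock -pass-header Authorization
      <Directory /var/www/cgi-bin>
          Options +ExecCGI
          Order allow,deny
          Allow from all
      </Directory>

    It is also worth double-checking the SetHandler fcgid-script lines at the end of fastcgi.conf: fcgid-script is mod_fcgid's handler, not mod_fastcgi's, so if that block is active it can shadow the php-fcgi handler above.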

    Read the article

  • BES Express - configure MDS to push messages from 3rd party web application

    - by Max Gontar
    Hi! I have developed IIS web service to send PAP messages using Blackberry Push API over MDS. And there is an application installed on device, configured to receive push messages on appropriate port. Everything works well on MDS simulator. But it's not working well in real environment: I have installed BES Express and register several devices. I can browse MDS url with appropriate port, so url is correct. Also port enabled for reliable pushes is used in push message and in device application. Here is MDS simulator log: <2011-01-12 14:00:03.456 EET>:[272]:<MDS-CS_MDS>:<DEBUG>:<LAYER = SCM, EVENT = PapServlet: request from 0:0:0:0:0:0:0:1 564 bytes...> <2011-01-12 14:00:03.476 EET>:[273]:<MDS-CS_MDS>:<DEBUG>:<LAYER = SCM, EVENT = Mapping PAP request to push request for pushID:pushID:asdas> <2011-01-12 14:00:03.479 EET>:[274]:<MDS-CS_MDS>:<DEBUG>:<LAYER = SCM, EVENT = PushServlet: POST request from [UNKNOWN @ 0:0:0:0:0:0:0:1] to [PAPDEST=WAPPUSH%3D2100000A%253A100%2FTYPE%3DUSER%40rim.net&PORT=100&REQUESTURI=/] : -1 bytes...> <2011-01-12 14:00:03.480 EET>:[275]:<MDS-CS_MDS>:<DEBUG>:<LAYER = SCM, EVENT = submitting push message with id:pushID:asdas> <2011-01-12 14:00:03.482 EET>:[276]:<MDS-CS_MDS>:<DEBUG>:<LAYER = SCM, EVENT = Executing push submit command for pushID:pushID:asdas> <2011-01-12 14:00:03.483 EET>:[278]:<MDS-CS_MDS>:<DEBUG>:<LAYER = SCM, EVENT = Pushing message to: 2100000a> <2011-01-12 14:00:03.484 EET>:[279]:<MDS-CS_MDS>:<DEBUG>:<LAYER = SCM, EVENT = Number of active push connections:1> <2011-01-12 14:00:03.489 EET>:[280]:<MDS-CS_MDS>:<DEBUG>:<LAYER = SCM, EVENT = added server-initiated connection = -872546301, push id = pushID:asdas> <2011-01-12 14:00:03.491 EET>:[281]:<MDS-CS_MDS>:<DEBUG>:<LAYER = SCM, EVENT = Available threads in DefaultJobPool = 9 running JobRunner: DefaultJobRunner-7> <2011-01-12 14:00:03.494 EET>:[282]:<MDS-CS_MDS>:<DEBUG>:<LAYER = IPPP, HANDLER = HTTP, EVENT = ReceivedFromServer, DEVICEPIN = 2100000a, CONNECTIONID = -872546301, HTTPTRANSMISSION => <2011-01-12 14:00:03.494 EET>:[282]:<MDS-CS_MDS>:<DEBUG>:<LAYER = IPPP, HANDLER = HTTP, EVENT = ReceivedFromServer, DEVICEPIN = 2100000a, CONNECTIONID = -872546301, HTTPTRANSMISSION = [Transmission Line Section]:> <2011-01-12 14:00:03.494 EET>:[282]:<MDS-CS_MDS>:<DEBUG>:<LAYER = IPPP, HANDLER = HTTP, EVENT = ReceivedFromServer, DEVICEPIN = 2100000a, CONNECTIONID = -872546301, HTTPTRANSMISSION = POST / HTTP/1.1> <2011-01-12 14:00:03.494 EET>:[282]:<MDS-CS_MDS>:<DEBUG>:<LAYER = IPPP, HANDLER = HTTP, EVENT = ReceivedFromServer, DEVICEPIN = 2100000a, CONNECTIONID = -872546301, HTTPTRANSMISSION = [Headers Section]: 8 headers> <2011-01-12 14:00:03.494 EET>:[282]:<MDS-CS_MDS>:<DEBUG>:<LAYER = IPPP, HANDLER = HTTP, EVENT = ReceivedFromServer, DEVICEPIN = 2100000a, CONNECTIONID = -872546301, HTTPTRANSMISSION = [Parameters Section]: 3 parameters> <2011-01-12 14:00:03.499 EET>:[283]:<MDS-CS_MDS>:<DEBUG>:<LAYER = IPPP, HANDLER = HTTP, EVENT = SentToDevice, DEVICEPIN = 2100000a, CONNECTIONID = -872546301, HTTPTRANSMISSION => <2011-01-12 14:00:03.499 EET>:[283]:<MDS-CS_MDS>:<DEBUG>:<LAYER = IPPP, HANDLER = HTTP, EVENT = SentToDevice, DEVICEPIN = 2100000a, CONNECTIONID = -872546301, HTTPTRANSMISSION = [Transmission Line Section]:> <2011-01-12 14:00:03.499 EET>:[283]:<MDS-CS_MDS>:<DEBUG>:<LAYER = IPPP, HANDLER = HTTP, EVENT = SentToDevice, DEVICEPIN = 2100000a, CONNECTIONID = -872546301, HTTPTRANSMISSION = POST / HTTP/1.1> <2011-01-12 14:00:03.499 EET>:[283]:<MDS-CS_MDS>:<DEBUG>:<LAYER = IPPP, HANDLER = HTTP, EVENT = 
SentToDevice, DEVICEPIN = 2100000a, CONNECTIONID = -872546301, HTTPTRANSMISSION = [Headers Section]: 9 headers> <2011-01-12 14:00:03.499 EET>:[283]:<MDS-CS_MDS>:<DEBUG>:<LAYER = IPPP, HANDLER = HTTP, EVENT = SentToDevice, DEVICEPIN = 2100000a, CONNECTIONID = -872546301, HTTPTRANSMISSION = [Parameters Section]: 3 parameters> <2011-01-12 14:00:03.501 EET>:[284]:<MDS-CS_MDS>:<DEBUG>:<LAYER = SCM, EVENT = Finished JobRunner: DefaultJobRunner-7, available threads in DefaultJobPool = 10, time spent = 8ms> <2011-01-12 14:00:03.521 EET>:[287]:<MDS-CS_MDS>:<DEBUG>:<LAYER = IPPP, EVENT = CreatedSendingQueue, DEVICEPIN = 2100000a> <2011-01-12 14:00:03.526 EET>:[290]:<MDS-CS_MDS>:<DEBUG>:<LAYER = IPPP, EVENT = Sending, TAG = 1288699908, DEVICEPIN = 2100000a, VERSION = 16, CONNECTIONID = -872546301, SEQUENCE = 0, TYPE = NOTIFY-REQUEST, CONNECTIONHANDLER = http, PROTOCOL = TCP, PARAMETERS = [MGONTAR/10.10.0.35:100], SIZE = 339> <2011-01-12 14:00:03.531 EET>:[291]:<MDS-CS_MDS>:<DEBUG>:<LAYER = SCM, EVENT = Number of active push connections:0> <2011-01-12 14:00:03.591 EET>:[292]:<MDS-CS_MDS>:<DEBUG>:<LAYER = IPPP, EVENT = Notification, TAG = 1288699908, STATE = DELIVERED> <2011-01-12 14:00:03.600 EET>:[296]:<MDS-CS_MDS>:<DEBUG>:<LAYER = SCM, EVENT = Device connections: AVG latency (msecs)79> <2011-01-12 14:00:03.600 EET>:[297]:<MDS-CS_MDS>:<DEBUG>:<LAYER = IPPP, Removed push connection:-872546301> <2011-01-12 14:00:07.015 EET>:[298]:<MDS-CS_MDS>:<DEBUG>:<LAYER = IPPP, EVENT = RemovedSendingQueue, DEVICEPIN = 2100000a> And here is real MDS log: <2011-01-12 11:35:02.763 GMT>:[3932]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = SCM, PapServlet: request from 192.168.1.241 583 bytes...> <2011-01-12 11:35:02.897 GMT>:[3933]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = SCM, Mapping PAP request to push request for pushID:pushID:sdfsdfwerwer> <2011-01-12 11:35:02.909 GMT>:[3934]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = SCM, PushServlet: POST request from [UNKNOWN @ 192.168.1.241] to [PAPDEST=WAPPUSH%3D22D7F6BD%253A7874%2FTYPE%3DUSER%40rim.net&PORT=7874&REQUESTURI=/]> <2011-01-12 11:35:02.909 GMT>:[3934]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<push id: pushID:sdfsdfwerwer> <2011-01-12 11:35:02.910 GMT>:[3935]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = SCM, submitting push message with id:pushID:sdfsdfwerwer> <2011-01-12 11:35:02.910 GMT>:[3936]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = SCM, Executing push submit command for pushID:pushID:sdfsdfwerwer> <2011-01-12 11:35:02.911 GMT>:[3937]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = SCM, Pushing message to: 22d7f6bd> <2011-01-12 11:35:02.912 GMT>:[3938]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = SCM, Number of active push connections:1> <2011-01-12 11:35:02.931 GMT>:[3939]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = SCM, added server-initiated connection = -1848311806, push id = pushID:sdfsdfwerwer> <2011-01-12 11:35:03.240 GMT>:[3940]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = IPPP, EVENT = CreatedSendingQueue, DEVICEPIN = 22d7f6bd, USERID = u3> <2011-01-12 11:35:03.241 GMT>:[3941]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = IPPP, EVENT = Sending, TAG = 536543251, DEVICEPIN = 22d7f6bd, USERID = u3, VERSION = 16, CONNECTIONID = -1848311806, SEQUENCE = 0, TYPE = NOTIFY-REQUEST, CONNECTIONHANDLER = http, PROTOCOL = TCP, PARAMETERS = [LDN-Server1/192.168.1.240:7874], SIZE = 383> <2011-01-12 11:35:03.241 GMT>:[3942]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = SCM, Number of active push connections:0> <2011-01-12 11:35:03.253 
GMT>:[3943]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = SRP, SRPID = S27700165[LDN-SERVER1:3200], EVENT = Sending, VERSION = 1, COMMAND = SEND, TAG = 536543251, SIZE = 570> <2011-01-12 11:35:03.838 GMT>:[3944]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = SRP, SRPID = S27700165[LDN-SERVER1:3200], EVENT = Receiving, VERSION = 1, COMMAND = STATUS, TAG = 536543251, SIZE = 10, STATE = DELIVERED> <2011-01-12 11:35:04.104 GMT>:[3945]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = IPPP, EVENT = Notification, TAG = 536543251, STATE = DELIVERED> <2011-01-12 11:35:04.121 GMT>:[3946]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = SCM, Device connections: AVG latency (msecs)893> <2011-01-12 11:35:04.135 GMT>:[3947]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<INFO >:<LAYER = IPPP, DEVICEPIN = 22d7f6bd, DOMAINNAME = LDN-Server1/192.168.1.240, CONNECTION_TYPE = PUSH_CONN, ConnectionId = -1848311806, DURATION(ms) = 1151, MFH_KBytes = 0, MTH_KBytes = 0.374, MFH_PACKET_COUNT = 0, MTH_PACKET_COUNT = 1> <2011-01-12 11:35:04.144 GMT>:[3948]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = IPPP, Removed push connection:-1848311806> <2011-01-12 11:35:09.264 GMT>:[3949]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = IPPP, EVENT = RemovedSendingQueue, DEVICEPIN = 22d7f6bd, USERID = u3> <2011-01-12 11:35:58.187 GMT>:[3950]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = SRP, SRPID = S27700165[LDN-SERVER1:3200], EVENT = Sending, VERSION = 1, COMMAND = INFO, SIZE = 46> <2011-01-12 11:35:58.187 GMT>:[3951]:<MDS-CS_LDN-SERVER1_MDS-CS_1>:<DEBUG>:<LAYER = SCM, Sent health to S27700165[LDN-SERVER1:3200] Health=[0x 0000 0007 0000 0000],Mask=[0x 0000 0007 0000 0000],Load=[60]> As you can see, logs not really differs, message is marked as delivered. But my app on device not really gets this message (as it works in mds simulator) Please advice me, what may be wrong? Is there some certificate to install or security settings I should configure to make this push message came to device application? Thank you! same question on bbforums

    Read the article

  • Windows 2012 Server: Enable .NET 3.5

    - by Meengla
    I have a Windows 2012 Server which needs .NET 3.5 installed. For background info/solutions, please see this: http://sqlblog.com/blogs/sergio_govoni/archive/2013/07/13/how-to-install-netfx3-on-windows-server-2012-required-by-sql-server-2012.aspx I have tried this but it doesn't work for me. Here is what I am trying: using a Windows 2012 ISO file, I mount it on the 2012 Server as the D: drive and then try both the GUI and the command prompt. In the case of the GUI, I specify the 'alternate source' path as the D: drive's sources\sxs folder, but that fails without giving enough info. In the case of the command prompt, here is what's happening:
      dism /online /enable-feature /featurename:NetFx3 /source:d:\sources\sxs
    I get the error: Installed but Parent feature not enabled. So I tried another approach:
      dism /online /enable-feature /featurename:NetFx3 /all /source:d:\sources\sxs
    The above command, per http://blogs.technet.com/b/askcore/archive/2012/05/14/windows-8-and-net-framework-3-5.aspx, is supposed to enable the parent features, but running it I get an error like 'source not found'. Is there some error in my second command? What else could I do? This is Windows 2012 Server Standard edition. Thanks!
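
    Since the /all attempt still complains about the source, one hedged thing left to try is adding /LimitAccess, which stops DISM from consulting Windows Update (or a WSUS/Group Policy source) and makes it use only the mounted ISO; this assumes the ISO really is mounted as the D: drive:

      dism /online /enable-feature /featurename:NetFx3 /All /LimitAccess /Source:D:\sources\sxs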

    Read the article

  • trying to allow domain admins access in apache

    - by sharif
    I am trying to authenticate domain admins through apache and it is not working. Error i get is as follows [Mon Sep 24 14:54:45 2012] [debug] src/mod_auth_kerb.c(1432): [client 172.16.0.85] kerb_authenticate_user entered with user (NULL) and auth_type Kerberos [Mon Sep 24 14:54:45 2012] [debug] src/mod_auth_kerb.c(915): [client 172.16.0.85] Using HTTP/[email protected] as server principal for password verification [Mon Sep 24 14:54:45 2012] [debug] src/mod_auth_kerb.c(655): [client 172.16.0.85] Trying to get TGT for user [email protected] [Mon Sep 24 14:54:45 2012] [debug] src/mod_auth_kerb.c(569): [client 172.16.0.85] Trying to verify authenticity of KDC using principal HTTP/[email protected] [Mon Sep 24 14:54:45 2012] [debug] src/mod_auth_kerb.c(994): [client 172.16.0.85] kerb_authenticate_user_krb5pwd ret=0 [email protected] authtype=Basic [Mon Sep 24 14:54:45 2012] [debug] mod_authnz_ldap.c(561): [client 172.16.0.85] ldap authorize: Creating LDAP req structure [Mon Sep 24 14:54:45 2012] [debug] mod_authnz_ldap.c(573): [client 172.16.0.85] auth_ldap authorise: User DN not found, LDAP: ldap_simple_bind_s() failed Below is what I have in my httpd file Alias /compass "/data/intranet/html/compass" <Directory "/data/intranet/html/compass"> AuthType Kerberos AuthName KerberosLogin KrbServiceName HTTP/intranet.xxx.com KrbMethodNegotiate On KrbMethodK5Passwd On KrbAuthRealms xxx.COM Krb5KeyTab /etc/httpd/conf/intranet.keytab # require valid-user # Options Indexes MultiViews FollowSymLinks # AllowOverride All # Order allow,deny # Allow from all # SetOutputFilter DEFLATE # taken from http://blogs.freebsdish.org/tmclaugh/2010/07/15/mod_auth_kerb-ad-and-ldap-authorization/ # download extra module and install # Strip the kerberos realm from the principle. # MapUsernameRule (.*)@(.*) "$1" AuthLDAPURL "ldap://echo.uk.xxx.com akhutan.usa.xxx.com/dc=xxx,dc=com?sAMAccountName" AuthLDAPBindDN cn=Administrator,ou=Users,dc=xxx,dc=com AuthLDAPBindPassword *** Require ldap-group cn=Domain Admins,ou=Users,dc=xxx,dc=com </Directory> I have followed this guide. I have download and install the tarball. when I try to uncomment MapUsernameRule i get failed error when restarting apache Reloading httpd: not reloading due to configuration syntax error I am using centos 5 64bit. I have added the following line but i still get syntax error LoadModule mod_map_user modules/mod_map_user.so
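
    The "User DN not found" line suggests it is the LDAP authorization lookup, not Kerberos, that fails: mod_auth_kerb hands mod_authnz_ldap a user of the form someone@XXX.COM, which never matches sAMAccountName. A hedged alternative to the MapUsernameRule patch is to search on userPrincipalName instead, which does carry the realm suffix in Active Directory (hosts and DNs below are the ones already in the config, and the bind password stays elided):

      AuthLDAPURL "ldap://echo.uk.xxx.com akhutan.usa.xxx.com/dc=xxx,dc=com?userPrincipalName?sub"
      AuthLDAPBindDN cn=Administrator,ou=Users,dc=xxx,dc=com
      AuthLDAPBindPassword ***
      Require ldap-group cn=Domain Admins,ou=Users,dc=xxx,dc=com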

    Read the article

  • TSQL Conditionally Select Specific Value

    - by Dzejms
    This is a follow-up to #1644748 where I successfully answered my own question, but Quassnoi helped me to realize that it was the wrong question. He gave me a solution that worked for my sample data, but I couldn't plug it back into the parent stored procedure because I fail at SQL 2005 syntax. So here is an attempt to paint the broader picture and ask what I actually need. This is part of a stored procedure that returns a list of items in a bug tracking application I've inherited. There are are over 100 fields and 26 joins so I'm pulling out only the mostly relevant bits. SELECT tickets.ticketid, tickets.tickettype, tickets_tickettype_lu.tickettypedesc, tickets.stage, tickets.position, tickets.sponsor, tickets.dev, tickets.qa, DATEDIFF(DAY, ticket_history_assignment.savedate, GETDATE()) as 'daysinqueue' FROM dbo.tickets WITH (NOLOCK) LEFT OUTER JOIN dbo.tickets_tickettype_lu WITH (NOLOCK) ON tickets.tickettype = tickets_tickettype_lu.tickettypeid LEFT OUTER JOIN dbo.tickets_history_assignment WITH (NOLOCK) ON tickets_history_assignment.ticketid = tickets.ticketid AND tickets_history_assignment.historyid = ( SELECT MAX(historyid) FROM dbo.tickets_history_assignment WITH (NOLOCK) WHERE tickets_history_assignment.ticketid = tickets.ticketid GROUP BY tickets_history_assignment.ticketid ) WHERE tickets.sponsor = @sponsor The area of interest is the daysinqueue subquery mess. The tickets_history_assignment table looks roughly as follows declare @tickets_history_assignment table ( historyid int, ticketid int, sponsor int, dev int, qa int, savedate datetime ) insert into @tickets_history_assignment values (1521402, 92774,20,14, 20, '2009-10-27 09:17:59.527') insert into @tickets_history_assignment values (1521399, 92774,20,14, 42, '2009-08-31 12:07:52.917') insert into @tickets_history_assignment values (1521311, 92774,100,14, 42, '2008-12-08 16:15:49.887') insert into @tickets_history_assignment values (1521336, 92774,100,14, 42, '2009-01-16 14:27:43.577') Whenever a ticket is saved, the current values for sponsor, dev and qa are stored in the tickets_history_assignment table with the ticketid and a timestamp. So it is possible for someone to change the value for qa, but leave sponsor alone. What I want to know, based on all of these conditions, is the historyid of the record in the tickets_history_assignment table where the sponsor value was last changed so that I can calculate the value for daysinqueue. If a record is inserted into the history table, and only the qa value has changed, I don't want that record. So simply relying on MAX(historyid) won't work for me. Quassnoi came up with the following which seemed to work with my sample data, but I can't plug it into the larger query, SQL Manager bitches about the WITH statement. ;WITH rows AS ( SELECT *, ROW_NUMBER() OVER (PARTITION BY ticketid ORDER BY savedate DESC) AS rn FROM @Table ) SELECT rl.sponsor, ro.savedate FROM rows rl CROSS APPLY ( SELECT TOP 1 rc.savedate FROM rows rc JOIN rows rn ON rn.ticketid = rc.ticketid AND rn.rn = rc.rn + 1 AND rn.sponsor <> rc.sponsor WHERE rc.ticketid = rl.ticketid ORDER BY rc.rn ) ro WHERE rl.rn = 1 I played with it yesterday afternoon and got nowhere because I don't fundamentally understand what is going on here and how it should fit into the larger context. So, any takers? UPDATE Ok, here's the whole thing. I've been switching some of the table and column names in an attempt to simplify things so here's the full unedited mess. 
snip - old bad code Here are the errors: Msg 102, Level 15, State 1, Procedure usp_GetProjectRecordsByAssignment, Line 159 Incorrect syntax near ';'. Msg 102, Level 15, State 1, Procedure usp_GetProjectRecordsByAssignment, Line 179 Incorrect syntax near ')'. Line numbers are of course not correct but refer to ;WITH rows AS And the ')' char after the WHERE rl.rn = 1 ) Respectively Is there a tag for extra super long question? UPDATE #2 Here is the finished query for anyone who may need this: CREATE PROCEDURE [dbo].[usp_GetProjectRecordsByAssignment] ( @assigned numeric(18,0), @assignedtype numeric(18,0) ) AS SET NOCOUNT ON WITH rows AS ( SELECT *, ROW_NUMBER() OVER (PARTITION BY recordid ORDER BY savedate DESC) AS rn FROM projects_history_assignment ) SELECT projects_records.recordid, projects_records.recordtype, projects_recordtype_lu.recordtypedesc, projects_records.stage, projects_stage_lu.stagedesc, projects_records.position, projects_position_lu.positiondesc, CASE projects_records.clientrequested WHEN '1' THEN 'Yes' WHEN '0' THEN 'No' END AS clientrequested, projects_records.reportingmethod, projects_reportingmethod_lu.reportingmethoddesc, projects_records.clientaccess, projects_clientaccess_lu.clientaccessdesc, projects_records.clientnumber, projects_records.project, projects_lu.projectdesc, projects_records.version, projects_version_lu.versiondesc, projects_records.projectedversion, projects_version_lu_projected.versiondesc AS projectedversiondesc, projects_records.sitetype, projects_sitetype_lu.sitetypedesc, projects_records.title, projects_records.module, projects_module_lu.moduledesc, projects_records.component, projects_component_lu.componentdesc, projects_records.loginusername, projects_records.loginpassword, projects_records.assistedusername, projects_records.browsername, projects_browsername_lu.browsernamedesc, projects_records.browserversion, projects_records.osname, projects_osname_lu.osnamedesc, projects_records.osversion, projects_records.errortype, projects_errortype_lu.errortypedesc, projects_records.gsipriority, projects_gsipriority_lu.gsiprioritydesc, projects_records.clientpriority, projects_clientpriority_lu.clientprioritydesc, projects_records.scheduledstartdate, projects_records.scheduledcompletiondate, projects_records.projectedhours, projects_records.actualstartdate, projects_records.actualcompletiondate, projects_records.actualhours, CASE projects_records.billclient WHEN '1' THEN 'Yes' WHEN '0' THEN 'No' END AS billclient, projects_records.billamount, projects_records.status, projects_status_lu.statusdesc, CASE CAST(projects_records.assigned AS VARCHAR(5)) WHEN '0' THEN 'N/A' WHEN '10000' THEN 'Unassigned' WHEN '20000' THEN 'Client' WHEN '30000' THEN 'Tech Support' WHEN '40000' THEN 'LMI Tech Support' WHEN '50000' THEN 'Upload' WHEN '60000' THEN 'Spider' WHEN '70000' THEN 'DB Admin' ELSE rtrim(users_assigned.nickname) + ' ' + rtrim(users_assigned.lastname) END AS assigned, CASE CAST(projects_records.assigneddev AS VARCHAR(5)) WHEN '0' THEN 'N/A' WHEN '10000' THEN 'Unassigned' ELSE rtrim(users_assigneddev.nickname) + ' ' + rtrim(users_assigneddev.lastname) END AS assigneddev, CASE CAST(projects_records.assignedqa AS VARCHAR(5)) WHEN '0' THEN 'N/A' WHEN '10000' THEN 'Unassigned' ELSE rtrim(users_assignedqa.nickname) + ' ' + rtrim(users_assignedqa.lastname) END AS assignedqa, CASE CAST(projects_records.assignedsponsor AS VARCHAR(5)) WHEN '0' THEN 'N/A' WHEN '10000' THEN 'Unassigned' ELSE rtrim(users_assignedsponsor.nickname) + ' ' + 
rtrim(users_assignedsponsor.lastname) END AS assignedsponsor, projects_records.clientcreated, CASE projects_records.clientcreated WHEN '1' THEN 'Yes' WHEN '0' THEN 'No' END AS clientcreateddesc, CASE projects_records.clientcreated WHEN '1' THEN rtrim(clientusers_createuser.firstname) + ' ' + rtrim(clientusers_createuser.lastname) + ' (Client)' ELSE rtrim(users_createuser.nickname) + ' ' + rtrim(users_createuser.lastname) END AS createuser, projects_records.createdate, projects_records.savedate, projects_resolution.sitesaffected, projects_sitesaffected_lu.sitesaffecteddesc, DATEDIFF(DAY, projects_history_assignment.savedate, GETDATE()) as 'daysinqueue', projects_records.iOnHitList, projects_records.changetype FROM dbo.projects_records WITH (NOLOCK) LEFT OUTER JOIN dbo.projects_recordtype_lu WITH (NOLOCK) ON projects_records.recordtype = projects_recordtype_lu.recordtypeid LEFT OUTER JOIN dbo.projects_stage_lu WITH (NOLOCK) ON projects_records.stage = projects_stage_lu.stageid LEFT OUTER JOIN dbo.projects_position_lu WITH (NOLOCK) ON projects_records.position = projects_position_lu.positionid LEFT OUTER JOIN dbo.projects_reportingmethod_lu WITH (NOLOCK) ON projects_records.reportingmethod = projects_reportingmethod_lu.reportingmethodid LEFT OUTER JOIN dbo.projects_lu WITH (NOLOCK) ON projects_records.project = projects_lu.projectid LEFT OUTER JOIN dbo.projects_version_lu WITH (NOLOCK) ON projects_records.version = projects_version_lu.versionid LEFT OUTER JOIN dbo.projects_version_lu projects_version_lu_projected WITH (NOLOCK) ON projects_records.projectedversion = projects_version_lu_projected.versionid LEFT OUTER JOIN dbo.projects_sitetype_lu WITH (NOLOCK) ON projects_records.sitetype = projects_sitetype_lu.sitetypeid LEFT OUTER JOIN dbo.projects_module_lu WITH (NOLOCK) ON projects_records.module = projects_module_lu.moduleid LEFT OUTER JOIN dbo.projects_component_lu WITH (NOLOCK) ON projects_records.component = projects_component_lu.componentid LEFT OUTER JOIN dbo.projects_browsername_lu WITH (NOLOCK) ON projects_records.browsername = projects_browsername_lu.browsernameid LEFT OUTER JOIN dbo.projects_osname_lu WITH (NOLOCK) ON projects_records.osname = projects_osname_lu.osnameid LEFT OUTER JOIN dbo.projects_errortype_lu WITH (NOLOCK) ON projects_records.errortype = projects_errortype_lu.errortypeid LEFT OUTER JOIN dbo.projects_resolution WITH (NOLOCK) ON projects_records.recordid = projects_resolution.recordid LEFT OUTER JOIN dbo.projects_sitesaffected_lu WITH (NOLOCK) ON projects_resolution.sitesaffected = projects_sitesaffected_lu.sitesaffectedid LEFT OUTER JOIN dbo.projects_gsipriority_lu WITH (NOLOCK) ON projects_records.gsipriority = projects_gsipriority_lu.gsipriorityid LEFT OUTER JOIN dbo.projects_clientpriority_lu WITH (NOLOCK) ON projects_records.clientpriority = projects_clientpriority_lu.clientpriorityid LEFT OUTER JOIN dbo.projects_status_lu WITH (NOLOCK) ON projects_records.status = projects_status_lu.statusid LEFT OUTER JOIN dbo.projects_clientaccess_lu WITH (NOLOCK) ON projects_records.clientaccess = projects_clientaccess_lu.clientaccessid LEFT OUTER JOIN dbo.users users_assigned WITH (NOLOCK) ON projects_records.assigned = users_assigned.userid LEFT OUTER JOIN dbo.users users_assigneddev WITH (NOLOCK) ON projects_records.assigneddev = users_assigneddev.userid LEFT OUTER JOIN dbo.users users_assignedqa WITH (NOLOCK) ON projects_records.assignedqa = users_assignedqa.userid LEFT OUTER JOIN dbo.users users_assignedsponsor WITH (NOLOCK) ON projects_records.assignedsponsor = 
users_assignedsponsor.userid LEFT OUTER JOIN dbo.users users_createuser WITH (NOLOCK) ON projects_records.createuser = users_createuser.userid LEFT OUTER JOIN dbo.clientusers clientusers_createuser WITH (NOLOCK) ON projects_records.createuser = clientusers_createuser.userid LEFT OUTER JOIN dbo.projects_history_assignment WITH (NOLOCK) ON projects_history_assignment.recordid = projects_records.recordid AND projects_history_assignment.historyid = ( SELECT ro.historyid FROM rows rl CROSS APPLY ( SELECT TOP 1 rc.historyid FROM rows rc JOIN rows rn ON rn.recordid = rc.recordid AND rn.rn = rc.rn + 1 AND rn.assigned <> rc.assigned WHERE rc.recordid = rl.recordid ORDER BY rc.rn ) ro WHERE rl.rn = 1 AND rl.recordid = projects_records.recordid ) WHERE (@assignedtype='0' and projects_records.assigned = @assigned) OR (@assignedtype='1' and projects_records.assigneddev = @assigned) OR (@assignedtype='2' and projects_records.assignedqa = @assigned) OR (@assignedtype='3' and projects_records.assignedsponsor = @assigned) OR (@assignedtype='4' and projects_records.createuser = @assigned)
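
    For anyone trying to follow what the WITH ... CROSS APPLY part is doing, here is a trimmed-down sketch of Quassnoi's pattern run against just the sample @tickets_history_assignment table declared above: rn numbers each ticket's history newest-first, and the CROSS APPLY walks down to the most recent row whose sponsor differs from the row saved just before it, i.e. the save on which sponsor last changed (see the comment at the end for the expected result).

      ;WITH rows AS (
          SELECT *, ROW_NUMBER() OVER (PARTITION BY ticketid ORDER BY savedate DESC) AS rn
          FROM @tickets_history_assignment
      )
      SELECT rl.ticketid, ro.historyid, ro.savedate
      FROM rows rl
      CROSS APPLY (
          SELECT TOP 1 rc.historyid, rc.savedate
          FROM rows rc
          JOIN rows rn ON rn.ticketid = rc.ticketid
                      AND rn.rn = rc.rn + 1          -- rn is the save just before rc
                      AND rn.sponsor <> rc.sponsor   -- ...with a different sponsor
          WHERE rc.ticketid = rl.ticketid
          ORDER BY rc.rn                             -- most recent such change first
      ) ro
      WHERE rl.rn = 1;                               -- one row per ticket
      -- with the four sample rows this returns historyid 1521399 (2009-08-31), the save where
      -- sponsor last changed, rather than the plain MAX(historyid) of 1521402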

    Read the article

  • Amazon VPC NAT not working

    - by rpkelly
    I'm trying to create a NAT instance for my VPC to allow instances on private subnets connect to the internet (most importantly, S3). I tried following the instructions here: http://docs.amazonwebservices.com/AmazonVPC/2011-07-15/UserGuide/index.html?VPC_NAT_Instance.html . Unfortunately, the instances in the private subnet (call it 10.10.2.0/24) cannot reach the internet. I have done the following: Create a NAT instance (Amazon's ami-vpc-nat-1.0.0-beta.i386-ebs (ami-d8699bb1)) in public subnet (call it 10.10.1.0/24). Changed "Source / Dest Check" to disabled. Created a new entry in the default routing table (which is used by 10.10.2.0/24) and had it point to the ID of the newly created instance. Associated an Elastic IP address with the NAT instance. Allowed all outbound traffic on the security group of the NAT instance. Ensured that all traffic could pass between the two subnets. I've tried also doing this with an existing instance using iptables, but had no luck. And I have verified that sys.net.ipv4.ip_forward is 1, just in case anyone was wondering. And I still have no internet connectivity from the instances on 10.10.2.0/24. Does anyone have any suggestions?
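
    One hedged addition to that checklist: the NAT instance's security group also needs inbound rules permitting traffic from 10.10.2.0/24 (allowing all outbound is not enough on its own). On the instance itself the usual remaining suspect is the iptables MASQUERADE rule (eth0 and the subnet below are taken from the question; ip_forward is already confirmed to be 1):

      iptables -t nat -L POSTROUTING -n -v    # expect a MASQUERADE rule covering the private subnet
      # if it is missing, something along these lines restores it:
      iptables -t nat -A POSTROUTING -o eth0 -s 10.10.2.0/24 -j MASQUERADE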

    Read the article

  • Exchange 2010 Recovery: Mailbox not found using Restore-Mailbox

    - by user146665
    Exchange 2010 SP1 Update Rollup 5 server information store database was restored to a Recovery Database using EMC Networker successfully. The Recovery Database is in a mounted state with mailboxes listed within in it. However, when restoring the mailbox content using the following command: Restore-Mailbox –Identity MYMAILBOX –RecoveryDatabase MYRECOVERYDB –RecoveryMailbox LOSTMAILBOX –TargetFolder FOLDERFORLOSTMAILBOX Returns the following error: Mailbox "LOSTMAILBOX" doesn't exist on database "MYRECOVERYDB". + CategoryInfo : NotSpecified: (0:Int32) [Restore-Mailbox], ManagementObjectNotFoundException + FullyQualifiedErrorId : 66265C53,Microsoft.Exchange.Management.RecipientTasks.RestoreMailbox Note: I've used the correct alias name for the mailbox name; i've also tried combinations such as first name, or last name or both and so forth. Issuing a Get-MailboxStatistics -Database MYRECOVERYDB to see if the mailbox is there and it is as shown below: DisplayName ItemCount StorageLimitStatus LOSTMAILBOX 39495 MailboxDisabled Note: The StorageLimitStatus shows a strange output of MaibloxDisabled. Perhaps this may be the culprit. Going by the article's documentation I cannot complete the restore of the mailbox as I'm stuck at the restore-mailbox error that it cannot be found. Please advise & Thank you! Source of article: http://www.testlabs.se/blog/2012/07/05/exchange-2010-restore-to-recovery-database-using-emc-networker/
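
    A hedged alternative on SP1, given that Restore-Mailbox keeps rejecting the name: New-MailboxRestoreRequest also works against a recovery database and identifies the source mailbox by the exact DisplayName that Get-MailboxStatistics printed (the names below are the placeholders from the question):

      New-MailboxRestoreRequest -SourceDatabase "MYRECOVERYDB" -SourceStoreMailbox "LOSTMAILBOX" `
          -TargetMailbox MYMAILBOX -TargetRootFolder FOLDERFORLOSTMAILBOX -AllowLegacyDNMismatch
      Get-MailboxRestoreRequest | Get-MailboxRestoreRequestStatistics    # watch the copy progress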

    Read the article

  • NullPointerException, Collections not storing data?

    - by Elliott
    Hi there, I posted this question earlier but not with the code in its entirety. The coe below also calls to other classes Background and Hydro which I have included at the bottom. I have a Nullpointerexception at the line indicate by asterisks. Which would suggest to me that the Collections are not storing data properly. Although when I check their size they seem correct. Thanks in advance. PS: If anyone would like to give me advice on how best to format my code to make it readable, it would be appreciated. Elliott package exam0607; import java.io.BufferedReader; import java.io.IOException; import java.io.InputStream; import java.io.InputStreamReader; import java.net.URL; import java.util.Collection; import java.util.Scanner; import java.util.Vector; import exam0607.Hydro; import exam0607.Background;// this may not be necessary???? FIND OUT public class HydroAnalysis { public static void main(String[] args) { Collection hydroList = null; Collection backList = null; try{hydroList = readHydro("http://www.hep.ucl.ac.uk/undergrad/3459/exam_data/2006-07/final/hd_data.dat");} catch (IOException e){ e.getMessage();} try{backList = readBackground("http://www.hep.ucl.ac.uk/undergrad/3459/exam_data/2006-07/final/hd_bgd.dat"); //System.out.println(backList.size()); } catch (IOException e){ e.getMessage();} for(int i =0; i <=14; i++ ){ String nameroot = "HJK"; String middle = Integer.toString(i); String hydroName = nameroot + middle + "X"; System.out.println(hydroName); ALGO_1(hydroName, backList, hydroList); } } public static Collection readHydro(String url) throws IOException { URL u = new URL(url); InputStream is = u.openStream(); InputStreamReader isr = new InputStreamReader(is); BufferedReader b = new BufferedReader(isr); String line =""; Collection data = new Vector(); while((line = b.readLine())!= null){ Scanner s = new Scanner(line); String name = s.next(); System.out.println(name); double starttime = Double.parseDouble(s.next()); System.out.println(+starttime); double increment = Double.parseDouble(s.next()); System.out.println(+increment); double p = 0; double nterms = 0; while(s.hasNextDouble()){ p = Double.parseDouble(s.next()); System.out.println(+p); nterms++; System.out.println(+nterms); } Hydro SAMP = new Hydro(name, starttime, increment, p); data.add(SAMP); } return data; } public static Collection readBackground(String url) throws IOException { URL u = new URL(url); InputStream is = u.openStream(); InputStreamReader isr = new InputStreamReader(is); BufferedReader b = new BufferedReader(isr); String line =""; Vector data = new Vector(); while((line = b.readLine())!= null){ Scanner s = new Scanner(line); String name = s.next(); //System.out.println(name); double starttime = Double.parseDouble(s.next()); //System.out.println(starttime); double increment = Double.parseDouble(s.next()); //System.out.println(increment); double sum = 0; double p = 0; double nterms = 0; while((s.hasNextDouble())){ p = Double.parseDouble(s.next()); //System.out.println(p); nterms++; sum += p; } double pbmean = sum/nterms; Background SAMP = new Background(name, starttime, increment, pbmean); //System.out.println(SAMP); data.add(SAMP); } return data; } public static void ALGO_1(String hydroName, Collection backgs, Collection hydros){ //double aMin = Double.POSITIVE_INFINITY; //double sum = 0; double intensity = 0; double numberPN_SIG = 0; double POSITIVE_PN_SIG =0; //int numberOfRays = 0; for(Hydro hd: hydros){ System.out.println(hd.H_NAME); for(Background back : backgs){ System.out.println(back.H_NAME); 
    if(back.H_NAME.equals(hydroName)){//ERROR HERE double PN_SIG = Math.max(0.0, hd.PN - back.PBMEAN); numberPN_SIG ++; if(PN_SIG > 0){ intensity += PN_SIG; POSITIVE_PN_SIG ++; } } } double positive_fraction = POSITIVE_PN_SIG/numberPN_SIG; if(positive_fraction < 0.5){ System.out.println( hydroName + "is faulty" ); } else{System.out.println(hydroName + "is not faulty");} System.out.println(hydroName + "has instensity" + intensity); } } } THE BACKGROUND CLASS package exam0607; public class Background { String H_NAME; double T_START; double DT; double PBMEAN; public Background(String name, double starttime, double increment, double pbmean) { name = H_NAME; starttime = T_START; increment = DT; pbmean = PBMEAN; }} AND THE HYDRO CLASS public class Hydro { String H_NAME; double T_START; double DT; double PN; public double n; public Hydro(String name, double starttime, double increment, double p) { name = H_NAME; starttime = T_START; increment = DT; p = PN; } }
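
    A likely culprit for the NullPointerException, judging only from the posted code: the Background and Hydro constructors assign in the wrong direction (name = H_NAME instead of H_NAME = name), so every field keeps its default value and H_NAME stays null, which is exactly what back.H_NAME.equals(hydroName) then trips over. A minimal corrected sketch of the two constructors, using the field names from the question:

    public Background(String name, double starttime, double increment, double pbmean) {
        H_NAME = name;        // field = parameter, not the other way round
        T_START = starttime;
        DT = increment;
        PBMEAN = pbmean;
    }

    public Hydro(String name, double starttime, double increment, double p) {
        H_NAME = name;
        T_START = starttime;
        DT = increment;
        PN = p;
    }

    Declaring the collections with type parameters (Collection<Hydro>, Collection<Background>) rather than raw Collection would also let the enhanced for loops in ALGO_1 type-check cleanly.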

    Read the article

  • How could Load average numbers from 'htop' exceed 100% CPU utilization?

    - by Joe Huang
    I use 'htop' to monitor my web server. It has been quite loaded recently, and the load average shows something like this: Load average: 3.10 2.56 1.63 I searched the web about these numbers and found an article about them: http://blog.scoutapp.com/articles/2009/07/31/understanding-load-averages The article says that if I have 2 CPUs, 2.0 means 100% CPU utilization. My VPS has two CPUs, so what does 3.1 mean? How could it exceed 100% CPU utilization? And from these numbers, should I be wary about the load now? The performance seems totally fine, and this is a managed VPS; the hosting company has not sent me any warning about it. During the daytime the load average always shows these high numbers... here is another snapshot taken while writing. Load average: 3.03 2.77 1.97 Load average: 0.41 1.29 1.60 <---- 5 more minutes later So I am wondering how much room is left for this site to grow with the current configuration? What kind of proactive actions should I take in advance? I don't want to wait until the server bursts. Thanks.
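
    A quick way to put those numbers in context (a generic shell sketch, nothing specific to this VPS): on Linux the load average is roughly the count of tasks that are running or waiting to run (plus tasks in uninterruptible I/O wait), so it only means "100%" relative to how many logical CPUs the machine has.

    # count the logical CPUs the kernel sees
    nproc
    grep -c ^processor /proc/cpuinfo

    # 1-, 5- and 15-minute load averages, to compare against that CPU count
    uptime
    cat /proc/loadavg

    On a 2-CPU box, a 1-minute load of 3.1 means that, on average, roughly one task was queued for a CPU while two were running; it is sustained values well above the CPU count, combined with real latency, that usually signal it is time to investigate or scale.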

    Read the article

  • Limit copssh users to home directory Windows 7

    - by Siriss
    Hello all- I have found these two sites below: CopSSH SFTP -- limit users access to their home directory only and http://blogs.windowsnetworking.com/wnadmin/2006/11/07/copssh-restricting-users-access/ as well as the Copssh website, but upon completion they do not seem to work. I have copssh installed and I have a separate Windows account "sftpuser" created that is used to connect. The connection works just fine, but I want to limit that user to just their home directory and sub folders. I have 3 hard drives, the C:, a W: and an S: and I want the FTP account to only be able to access the W: drive and its contents (the root of the W: drive is the FTP home directory). Right now "sftpuser" can access all folders, including jump drives to C:, and S:. The linked tutorials do not seem to work, because it seems when I create a group "ftpusersgroup" and add "sftpuser" to the group, and then deny "ftpusersgroup" access to the C: drive, the service breaks and I can no longer login. I have undone everything and am ready to start fresh. Does anyone know how to do this, or is there a better tutorial that someone has or has found? I hope this makes sense. Thank you very much for any help!

    Read the article

  • Has anyone managed to build php5-xapian on Ubuntu 12.04?

    - by jetboy
    As Xapian's been dropped from the Ubuntu repositories, I'm attempting to build my own .deb from the instructions here: http://article.gmane.org/gmane.comp.search.xapian.general/8855 http://beeznest.wordpress.com/2011/07/06/howto-build-your-own-binaries-of-php-xapian-bindings-for-debian/ I can only get things to progress beyond the first few seconds by leaving out 'rm debian/control', but if I do, it looks as if the Python and Ruby bindings are building and passing their versions of smoketest correctly. However, the PHP part of the build is failing with this error: /home/charlie/xapian-bindings-1.2.8/php/smoketest.php:38: include(xapian.php): failed to open stream: No such file or directory FAIL: smoketest.php There's a xapian.php file in /home/charlie/xapian-bindings-1.2.8/php/php5/ but if I copy it to /home/charlie/xapian-bindings-1.2.8/php/ or change the path to it in smoketest.php, the build fails right near the start with: dpkg-source: error: aborting due to unexpected upstream changes Unfortunately I'm out of my comfort zone building from source. Anyone got any ideas? Edit post James' answer: Builds fine if I follow instructions exactly. I built it on a test VM initially, but that didn't build the PHP package as PHP itself wasn't installed. Obvious gotcha, but worth mentioning. Installing generated the following error: Setting up php5-xapian (1.2.8-1) ... Processing triggers for libapache2-mod-php5 ... dpkg (subprocess): unable to execute installed post-installation script (/var/lib/dpkg/info/libapache2-mod-php5.postinst): Permission denied dpkg: error processing libapache2-mod-php5 (--install): subprocess installed post-installation script returned error exit status 2 Errors were encountered while processing: libapache2-mod-php5 It's only a script for restarting Apache. Stopping Apache before running sudo dpkg -i php5-xapian_*.deb prevents the error. Xapian now shows up in phpinfo(). Job done. Thanks.
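
    For anyone hitting the same postinst failure, the fix described above boils down to a few commands (a sketch only; the package filename is taken from the question):

    # stop Apache so the package's post-installation trigger (which restarts Apache)
    # doesn't fail while mod_php is loaded
    sudo service apache2 stop
    sudo dpkg -i php5-xapian_*.deb
    sudo service apache2 start

    # confirm the extension is visible to PHP
    php -m | grep -i xapian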

    Read the article

  • Ubuntu 13.10 - How to disable LVM and cryptsetup? cryptsetup: evms_activate is not available

    - by NeverEndingQueue
    I am trying to remove whole drive encryption from my Ubuntu installation. I've run Ubuntu from a Live CD, mounted the crypt partition and copied it to another partition, /dev/sda3. sudo cryptsetup luksOpen /dev/sda5 crypt1 sudo dd if=/dev/ubuntu-vg/root of=/dev/sda3 bs=1M After that I've run boot-repair: https://help.ubuntu.com/community/Boot-Repair Added an entry to /etc/fstab: UUID=<uuid> / ext4 errors=remount-ro 0 1 Of course I've replaced <uuid> with the blkid result for my /dev/sda3. I've also deleted the overlayfs and tmpfs lines from /etc/fstab. (I just compared it to the contents of /etc/fstab in a non-encrypted Ubuntu installation and could not find overlayfs or tmpfs there.) I've chrooted from the Live CD into my system and rebuilt the initramfs: http://blog.leenix.co.uk/2012/07/evmsactivate-is-not-available-on-boot.html I've also removed cryptsetup using apt-get remove. Basically I can easily mount my system partition from the Live CD (without setting up the encryption and LVM stuff), but cannot boot from it. Instead I see: cryptsetup: evms_activate is not available When I've chosen the Recovery mode I've seen this: Begin: Mounting root file system ... Begin: Running /script/local-top ... Reading all physical volumes. This may take a while ... No volume groups found cryptsetup: evms_activate is not available Begin: Waiting for encrytpted source device ... My /etc/crypttab is empty. I am pretty sure the system still tries to find an encrypted partition, searches for LVM volumes, etc. Do you have any ideas what the problem could be, or how I can fix it? Thanks
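
    A rough sketch of the steps that usually stop the initramfs from waiting for a crypt/LVM root, assuming /dev/sda3 is already the root entry in /etc/fstab and nothing else on the machine still needs LVM (run these from the chroot):

    # remove the packages whose initramfs hooks keep probing for crypt/LVM roots
    sudo apt-get purge cryptsetup lvm2

    # rebuild the initramfs for every installed kernel and refresh GRUB
    sudo update-initramfs -u -k all
    sudo update-grub

    The "evms_activate is not available" message comes from the cryptroot script baked into the old initramfs, so it keeps appearing until an initramfs is built without those hooks.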

    Read the article

  • Need help displaying a table dynamically based on a drop down date selection

    - by Gideon
    I am designing a job rota planner for a company and need help displaying a dynamic table containing the staff details. I have the following tables in MySQL database: Staff, Event, and Job. The staff table holds staff details (staffed, name, address...etc), the Event table (eventide, eventName, Fromdate, Todate...etc) and the Job table holds (Jobid, Jobdate, Eventid(fk), Staffid (fk)). I need to dynamically display the available staff list from the staff table when the user selects the event and the date (3 drop downs: date, month, and year) from a PHP form. I need to display staff members that have not been assigned work on the selected date by checking the Jobdate in the Job table. I have been at this for all day and can't get around it. I am still learning PHP and would surely appreciate any help I can get. My current code displays all staff members when an event is selected: //Create the day pull-down menu. $days = range (1, 31); echo "<Select Name=day Value=''><Option>Day</option>"; foreach ($days as $value) { echo '<option value="'.$value.'">'.$value.'</option>\n'; } echo "</Select>"; echo "</td><td>"; //Create the month pull-down menu echo "<Select Name=month Value=''><Option>Month</option>"; echo "<option value='01'>Jan</option>"; echo "<option value='02'>Feb</option>"; echo "<option value='03'>Mar</option>"; echo "<option value='04'>Apr</option>"; echo "<option value='05'>May</option>"; echo "<option value='06'>Jun</option>"; echo "<option value='07'>Jul</option>"; echo "<option value='08'>Aug</option>"; echo "<option value='09'>Sep</option>"; echo "<option value='10'>Oct</option>"; echo "<option value='11'>Nov</option>"; echo "<option value='12'>Dec</option>"; echo "</select>"; echo "</td><td>"; //Create the year pull-down menu $currentYear = date ("Y"); $years = range ($currentYear, 2020); echo "<Select Name=year Value=''><Option>Year</option>"; foreach ($years as $value) { echo '<option value="'.$value.'">'.$value.'</option>\n'; } echo "</Select>"; echo "</td></tr></table>"; echo "</td><td>"; //echo "<img src='../ETMSimages/etms_staff.png'</img></td><td>"; //construct the available staff list $staffsql = "SELECT StaffId, LastName, FirstName FROM Staff order by StaffId"; $staffResult = mysql_query($staffsql); if ($staffResult){ echo "<p><table cellspacing='1' cellpadding='3'>"; echo "<th colspan=6>List of Available Staff</th>"; echo "</tr><tr><th> Select</th><th>Id</th><th></th><th>Last Name </th><th></th><th>First Name </th></tr>"; while ($staffarray = mysql_fetch_array($staffResult)) { echo "<tr onMouseOver= this.bgColor = 'red' onMouseOut =this.bgColor = 'white' bgcolor= '#FFFFFF'> <td align=center><input type='checkbox' name='selectbox[]' id='selectbox[]' value=".$staffarray['StaffId']."> </td><td align=left>".$staffarray['StaffId']." </td><td>&nbsp&nbsp</td><td align=center>".$staffarray['LastName']." </td><td>&nbsp&nbsp</td><td align=center>".$staffarray['FirstName']." </td></tr>"; } echo "</table>"; } else { echo "<br> The Staff list can not be displayed!"; } echo "</td></tr>"; echo "<tr><td></td>"; echo "<td align=center><input type='submit' name='Submit' value='Assign Staff'>&nbsp&nbsp"; echo "<input type='reset' value='Start Over'>"; echo "</td></tr>"; echo "</table>";
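
    One way to get only the unassigned staff is to let MySQL do the filtering with a LEFT JOIN against Job on the selected date. This is a hedged sketch using the table and column names given in the question; the $_POST field names are assumed to match the drop-downs above:

    // Build the selected date as YYYY-MM-DD from the three drop-downs
    $jobdate = sprintf('%04d-%02d-%02d', (int)$_POST['year'], (int)$_POST['month'], (int)$_POST['day']);

    // Staff with no Job row on that date are available
    $staffsql = "SELECT s.StaffId, s.LastName, s.FirstName
                 FROM Staff s
                 LEFT JOIN Job j
                        ON j.Staffid = s.StaffId
                       AND j.Jobdate = '" . mysql_real_escape_string($jobdate) . "'
                 WHERE j.Jobid IS NULL
                 ORDER BY s.StaffId";
    $staffResult = mysql_query($staffsql);

    The table-building loop from the question can stay as it is; only the query changes (Jobdate is assumed to be a DATE column).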

    Read the article

  • Parsing HTML using HTML Agility Pack

    - by Pajci
    Here is one table out of 5: <h3>marec - maj 2009</h3> <div class="graf_table"> <table summary="layout table"> <tr> <th>DATUM</th> <td class="datum">10.03.2009</td> <td class="datum">24.03.2009</td> <td class="datum">07.04.2009</td> <td class="datum">21.04.2009</td> <td class="datum">05.05.2009</td> <td class="datum">06.05.2009</td> </tr> <tr> <th>Maloprodajna cena [EUR/L]</th> <td>0,96000</td> <td>0,97000</td> <td>0,99600</td> <td>1,00800</td> <td>1,00800</td> <td>1,01000</td> </tr> <tr> <th>Maloprodajna cena [SIT/L]</th> <td>230,054</td> <td>232,451</td> <td>238,681</td> <td>241,557</td> <td>241,557</td> <td>242,036</td> </tr> <tr> <th>Prodajna cena brez dajatev</th> <td>0,33795</td> <td>0,34628</td> <td>0,36795</td> <td>0,37795</td> <td>0,37795</td> <td>0,37962</td> </tr> <tr> <th>Trošarina</th> <td>0,46205</td> <td>0,46205</td> <td>0,46205</td> <td>0,46205</td> <td>0,46205</td> <td>0,46205</td> </tr> <tr> <th>DDV</th> <td>0,16000</td> <td>0,16167</td> <td>0,16600</td> <td>0,16800</td> <td>0,16800</td> <td>0,16833</td> </tr> </table> </div> I have to extract out values, where table header is DATUM and Maloprodajna cena [EUR/L]. I am using Agility HTML pack. this.htmlDoc = new HtmlAgilityPack.HtmlDocument(); this.htmlDoc.OptionCheckSyntax = true; this.htmlDoc.OptionFixNestedTags = true; this.htmlDoc.OptionAutoCloseOnEnd = true; this.htmlDoc.OptionOutputAsXml = true; // is this necessary ?? this.htmlDoc.OptionDefaultStreamEncoding = System.Text.Encoding.Default; I had a lot of trouble with getting those values out. I started with: var query = from html in doc.DocumentNode.SelectNodes("//div[@class='graf_table']").Cast<HtmlNode>() from table in html.SelectNodes("//table").Cast<HtmlNode>() from row in table.SelectNodes("tr").Cast<HtmlNode>() from cell in row.SelectNodes("th|td").Cast<HtmlNode>() select new { Table = table.Id, CellText = cell.InnerHtml }; but could not figure out a way to select only values where table header is DATUM and Maloprodajna cena[EUR/L]. Is it possible to do that with where clause? Then I ended with those two queries: var date = (from d in htmlDoc.DocumentNode.SelectNodes("//div[@class='graf_table']//table//tr[1]/td") select DateTime.Parse(d.InnerText)).ToArray(); var price = (from p in htmlDoc.DocumentNode.SelectNodes("//div[@class='graf_table']//table//tr[2]/td") select double.Parse(p.InnerText)).ToArray(); Is it possible to combine those two queries? And how would I convert that to lambda expression? I just started to learn those things and I would like to know how it is done so that in the future I would not have those question. O, one more question ... does anybody know any graph control, cause I have to show those values in graph. I started with Microsoft Chart Controls, but I am having trouble with setting it. So if anyone has any experience with it I would like to know how to set it, so that x axle will show all values not every second ... example: if I have: 10.03.2009, 24.03.2009, 07.04.2009, 21.04.2009, 05.05.2009, 06.05.2009 it show only: 10.03.2009, 07.04.2009, 05.05.2009, ect. I bind data to graph like that: chart1.Series["Series1"].Points.DataBindXY(date, price); I lot of questions for my fist post ... hehe, hope that I was not indistinct or something. Thank's for any reply!
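
    On the "can the two queries be combined" part of the question: one possible lambda-only version pairs the two selections with Enumerable.Zip. This is a sketch; it assumes both rows contain the same number of <td> cells, that System.Linq is imported, and that the project targets .NET 4 or later (Zip was added there):

    var dates = htmlDoc.DocumentNode
        .SelectNodes("//div[@class='graf_table']//table//tr[1]/td")
        .Select(d => DateTime.Parse(d.InnerText));

    var prices = htmlDoc.DocumentNode
        .SelectNodes("//div[@class='graf_table']//table//tr[2]/td")
        .Select(p => double.Parse(p.InnerText));

    // one anonymous-typed point per column of the table
    var points = dates.Zip(prices, (date, price) => new { Date = date, Price = price }).ToArray();

    Note that DateTime.Parse and double.Parse use the current culture, so values like "10.03.2009" and "0,96000" only parse cleanly under a matching culture (passing an explicit CultureInfo is safer). For the Chart Controls label question, setting ChartArea.AxisX.Interval = 1 is the usual way to force a label on every point, though that part is offered from memory rather than from the question.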

    Read the article

  • Server on blacklist

    - by Cudos
    I have a Debian Wheezy server with several websites on separate domains. Some of these websites use WordPress and in turn use PHP's mail function to send mail. I installed "sendmail" so that the server can send mail from PHP. We use Google Apps for our customers, so there is no need to set up a regular mail server. Now the server is blacklisted at www.spamhaus.org and I get this message: This IP address is HELO'ing as "localhost.localdomain" which violates the relevant standards (specifically: RFC5321). I have tried to follow the instructions on these websites with no luck: http://www.cardiothink.com/downloads/README.spamhaus-and-blocked-email.html http://centosbeginer.wordpress.com/2011/07/12/how-to-remove-ip-in-cbl-spamhaus/ Can you please help me figure out how to configure the server? File: /etc/hosts # nameserver config # IPv4 127.0.0.1 somedomain.dk xxx.xxx.xxx.xxx server.somedomain.dk bigby # # IPv6 ::1 ip6-localhost ip6-loopback xxxx::0 ip6-localnet xxxx::0 ip6-mcastprefix xxxx::1 ip6-allnodes xxxx::2 ip6-allrouters xxxx::3 ip6-allhosts xxxx:xxx:xxx:xxxx::2 Debian-76-wheezy-64-minimal File: /etc/hostname bigby somedomain.dk is a made-up domain; in reality it is another domain name I have on this server along with other domains. bigby is also a made-up name; it is something else in reality.
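
    A hedged starting point (generic Debian sendmail housekeeping, not verified against this exact box): the listing means sendmail announces itself as localhost.localdomain, which usually happens when the machine cannot resolve its own fully-qualified name. In the /etc/hosts shown above, 127.0.0.1 maps to somedomain.dk rather than localhost; fixing that, putting the FQDN first on the public-IP line, and regenerating the sendmail configuration is typically enough, after which the CBL/Spamhaus entry can be asked to be removed.

    # what sendmail will use as its canonical name; it should NOT be localhost.localdomain
    hostname --fqdn

    # /etc/hosts: keep 127.0.0.1 for localhost only, FQDN first on the public IP line, e.g.
    # 127.0.0.1       localhost
    # xxx.xxx.xxx.xxx server.somedomain.dk bigby

    # regenerate sendmail's generated config and restart it
    sudo sendmailconfig
    sudo service sendmail restart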

    Read the article

  • An Xml Serializable PropertyBag Dictionary Class for .NET

    - by Rick Strahl
    I don't know about you but I frequently need property bags in my applications to store and possibly cache arbitrary data. Dictionary<T,V> works well for this although I always seem to be hunting for a more specific generic type that provides a string key based dictionary. There's string dictionary, but it only works with strings. There's Hashset<T> but it uses the actual values as keys. In most key value pair situations for me string is key value to work off. Dictionary<T,V> works well enough, but there are some issues with serialization of dictionaries in .NET. The .NET framework doesn't do well serializing IDictionary objects out of the box. The XmlSerializer doesn't support serialization of IDictionary via it's default serialization, and while the DataContractSerializer does support IDictionary serialization it produces some pretty atrocious XML. What doesn't work? First off Dictionary serialization with the Xml Serializer doesn't work so the following fails: [TestMethod] public void DictionaryXmlSerializerTest() { var bag = new Dictionary<string, object>(); bag.Add("key", "Value"); bag.Add("Key2", 100.10M); bag.Add("Key3", Guid.NewGuid()); bag.Add("Key4", DateTime.Now); bag.Add("Key5", true); bag.Add("Key7", new byte[3] { 42, 45, 66 }); TestContext.WriteLine(this.ToXml(bag)); } public string ToXml(object obj) { if (obj == null) return null; StringWriter sw = new StringWriter(); XmlSerializer ser = new XmlSerializer(obj.GetType()); ser.Serialize(sw, obj); return sw.ToString(); } The error you get with this is: System.NotSupportedException: The type System.Collections.Generic.Dictionary`2[[System.String, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089],[System.Object, mscorlib, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089]] is not supported because it implements IDictionary. Got it! BTW, the same is true with binary serialization. 
Running the same code above against the DataContractSerializer does work: [TestMethod] public void DictionaryDataContextSerializerTest() { var bag = new Dictionary<string, object>(); bag.Add("key", "Value"); bag.Add("Key2", 100.10M); bag.Add("Key3", Guid.NewGuid()); bag.Add("Key4", DateTime.Now); bag.Add("Key5", true); bag.Add("Key7", new byte[3] { 42, 45, 66 }); TestContext.WriteLine(this.ToXmlDcs(bag)); } public string ToXmlDcs(object value, bool throwExceptions = false) { var ser = new DataContractSerializer(value.GetType(), null, int.MaxValue, true, false, null); MemoryStream ms = new MemoryStream(); ser.WriteObject(ms, value); return Encoding.UTF8.GetString(ms.ToArray(), 0, (int)ms.Length); } This DOES work but produces some pretty heinous XML (formatted with line breaks and indentation here): <ArrayOfKeyValueOfstringanyType xmlns="http://schemas.microsoft.com/2003/10/Serialization/Arrays" xmlns:i="http://www.w3.org/2001/XMLSchema-instance"> <KeyValueOfstringanyType> <Key>key</Key> <Value i:type="a:string" xmlns:a="http://www.w3.org/2001/XMLSchema">Value</Value> </KeyValueOfstringanyType> <KeyValueOfstringanyType> <Key>Key2</Key> <Value i:type="a:decimal" xmlns:a="http://www.w3.org/2001/XMLSchema">100.10</Value> </KeyValueOfstringanyType> <KeyValueOfstringanyType> <Key>Key3</Key> <Value i:type="a:guid" xmlns:a="http://schemas.microsoft.com/2003/10/Serialization/">2cd46d2a-a636-4af4-979b-e834d39b6d37</Value> </KeyValueOfstringanyType> <KeyValueOfstringanyType> <Key>Key4</Key> <Value i:type="a:dateTime" xmlns:a="http://www.w3.org/2001/XMLSchema">2011-09-19T17:17:05.4406999-07:00</Value> </KeyValueOfstringanyType> <KeyValueOfstringanyType> <Key>Key5</Key> <Value i:type="a:boolean" xmlns:a="http://www.w3.org/2001/XMLSchema">true</Value> </KeyValueOfstringanyType> <KeyValueOfstringanyType> <Key>Key7</Key> <Value i:type="a:base64Binary" xmlns:a="http://www.w3.org/2001/XMLSchema">Ki1C</Value> </KeyValueOfstringanyType> </ArrayOfKeyValueOfstringanyType> Ouch! That seriously hurts the eye! :-) Worse though it's extremely verbose with all those repetitive namespace declarations. It's good to know that it works in a pinch, but for a human readable/editable solution or something lightweight to store in a database it's not quite ideal. Why should I care? As a little background, in one of my applications I have a need for a flexible property bag that is used on a free form database field on an otherwise static entity. Basically what I have is a standard database record to which arbitrary properties can be added in an XML based string field. I intend to expose those arbitrary properties as a collection from field data stored in XML. The concept is pretty simple: When loading write the data to the collection, when the data is saved serialize the data into an XML string and store it into the database. When reading the data pick up the XML and if the collection on the entity is accessed automatically deserialize the XML into the Dictionary. (I'll talk more about this in another post). While the DataContext Serializer would work, it's verbosity is problematic both for size of the generated XML strings and the fact that users can manually edit this XML based property data in an advanced mode. A clean(er) layout certainly would be preferable and more user friendly. Custom XMLSerialization with a PropertyBag Class So… after a bunch of experimentation with different serialization formats I decided to create a custom PropertyBag class that provides for a serializable Dictionary. 
It's basically a custom Dictionary<TType,TValue> implementation with the keys always set as string keys. The result are PropertyBag<TValue> and PropertyBag (which defaults to the object type for values). The PropertyBag<TType> and PropertyBag classes provide these features: Subclassed from Dictionary<T,V> Implements IXmlSerializable with a cleanish XML format ToXml() and FromXml() methods to export and import to and from XML strings Static CreateFromXml() method to create an instance It's simple enough as it's merely a Dictionary<string,object> subclass but that supports serialization to a - what I think at least - cleaner XML format. The class is super simple to use: [TestMethod] public void PropertyBagTwoWayObjectSerializationTest() { var bag = new PropertyBag(); bag.Add("key", "Value"); bag.Add("Key2", 100.10M); bag.Add("Key3", Guid.NewGuid()); bag.Add("Key4", DateTime.Now); bag.Add("Key5", true); bag.Add("Key7", new byte[3] { 42,45,66 } ); bag.Add("Key8", null); bag.Add("Key9", new ComplexObject() { Name = "Rick", Entered = DateTime.Now, Count = 10 }); string xml = bag.ToXml(); TestContext.WriteLine(bag.ToXml()); bag.Clear(); bag.FromXml(xml); Assert.IsTrue(bag["key"] as string == "Value"); Assert.IsInstanceOfType( bag["Key3"], typeof(Guid)); Assert.IsNull(bag["Key8"]); //Assert.IsNull(bag["Key10"]); Assert.IsInstanceOfType(bag["Key9"], typeof(ComplexObject)); } This uses the PropertyBag class which uses a PropertyBag<string,object> - which means it returns untyped values of type object. I suspect for me this will be the most common scenario as I'd want to store arbitrary values in the PropertyBag rather than one specific type. The same code with a strongly typed PropertyBag<decimal> looks like this: [TestMethod] public void PropertyBagTwoWayValueTypeSerializationTest() { var bag = new PropertyBag<decimal>(); bag.Add("key", 10M); bag.Add("Key1", 100.10M); bag.Add("Key2", 200.10M); bag.Add("Key3", 300.10M); string xml = bag.ToXml(); TestContext.WriteLine(bag.ToXml()); bag.Clear(); bag.FromXml(xml); Assert.IsTrue(bag.Get("Key1") == 100.10M); Assert.IsTrue(bag.Get("Key3") == 300.10M); } and produces typed results of type decimal. The types can be either value or reference types the combination of which actually proved to be a little more tricky than anticipated due to null and specific string value checks required - getting the generic typing right required use of default(T) and Convert.ChangeType() to trick the compiler into playing nice. Of course the whole raison d'etre for this class is the XML serialization. You can see in the code above that we're doing a .ToXml() and .FromXml() to serialize to and from string. 
The XML produced for the first example looks like this: <?xml version="1.0" encoding="utf-8"?> <properties> <item> <key>key</key> <value>Value</value> </item> <item> <key>Key2</key> <value type="decimal">100.10</value> </item> <item> <key>Key3</key> <value type="___System.Guid"> <guid>f7a92032-0c6d-4e9d-9950-b15ff7cd207d</guid> </value> </item> <item> <key>Key4</key> <value type="datetime">2011-09-26T17:45:58.5789578-10:00</value> </item> <item> <key>Key5</key> <value type="boolean">true</value> </item> <item> <key>Key7</key> <value type="base64Binary">Ki1C</value> </item> <item> <key>Key8</key> <value type="nil" /> </item> <item> <key>Key9</key> <value type="___Westwind.Tools.Tests.PropertyBagTest+ComplexObject"> <ComplexObject> <Name>Rick</Name> <Entered>2011-09-26T17:45:58.5789578-10:00</Entered> <Count>10</Count> </ComplexObject> </value> </item> </properties>   The format is a bit cleaner than the DataContractSerializer. Each item is serialized into <key> <value> pairs. If the value is a string no type information is written. Since string tends to be the most common type this saves space and serialization processing. All other types are attributed. Simple types are mapped to XML types so things like decimal, datetime, boolean and base64Binary are encoded using their Xml type values. All other types are embedded with a hokey format that describes the .NET type preceded by a three underscores and then are encoded using the XmlSerializer. You can see this best above in the ComplexObject encoding. For custom types this isn't pretty either, but it's more concise than the DCS and it works as long as you're serializing back and forth between .NET clients at least. The XML generated from the second example that uses PropertyBag<decimal> looks like this: <?xml version="1.0" encoding="utf-8"?> <properties> <item> <key>key</key> <value type="decimal">10</value> </item> <item> <key>Key1</key> <value type="decimal">100.10</value> </item> <item> <key>Key2</key> <value type="decimal">200.10</value> </item> <item> <key>Key3</key> <value type="decimal">300.10</value> </item> </properties>   How does it work As I mentioned there's nothing fancy about this solution - it's little more than a subclass of Dictionary<T,V> that implements custom Xml Serialization and a couple of helper methods that facilitate getting the XML in and out of the class more easily. But it's proven very handy for a number of projects for me where dynamic data storage is required. Here's the code: /// <summary> /// Creates a serializable string/object dictionary that is XML serializable /// Encodes keys as element names and values as simple values with a type /// attribute that contains an XML type name. Complex names encode the type /// name with type='___namespace.classname' format followed by a standard xml /// serialized format. The latter serialization can be slow so it's not recommended /// to pass complex types if performance is critical. /// </summary> [XmlRoot("properties")] public class PropertyBag : PropertyBag<object> { /// <summary> /// Creates an instance of a propertybag from an Xml string /// </summary> /// <param name="xml">Serialize</param> /// <returns></returns> public static PropertyBag CreateFromXml(string xml) { var bag = new PropertyBag(); bag.FromXml(xml); return bag; } } /// <summary> /// Creates a serializable string for generic types that is XML serializable. /// /// Encodes keys as element names and values as simple values with a type /// attribute that contains an XML type name. 
Complex names encode the type /// name with type='___namespace.classname' format followed by a standard xml /// serialized format. The latter serialization can be slow so it's not recommended /// to pass complex types if performance is critical. /// </summary> /// <typeparam name="TValue">Must be a reference type. For value types use type object</typeparam> [XmlRoot("properties")] public class PropertyBag<TValue> : Dictionary<string, TValue>, IXmlSerializable { /// <summary> /// Not implemented - this means no schema information is passed /// so this won't work with ASMX/WCF services. /// </summary> /// <returns></returns> public System.Xml.Schema.XmlSchema GetSchema() { return null; } /// <summary> /// Serializes the dictionary to XML. Keys are /// serialized to element names and values as /// element values. An xml type attribute is embedded /// for each serialized element - a .NET type /// element is embedded for each complex type and /// prefixed with three underscores. /// </summary> /// <param name="writer"></param> public void WriteXml(System.Xml.XmlWriter writer) { foreach (string key in this.Keys) { TValue value = this[key]; Type type = null; if (value != null) type = value.GetType(); writer.WriteStartElement("item"); writer.WriteStartElement("key"); writer.WriteString(key as string); writer.WriteEndElement(); writer.WriteStartElement("value"); string xmlType = XmlUtils.MapTypeToXmlType(type); bool isCustom = false; // Type information attribute if not string if (value == null) { writer.WriteAttributeString("type", "nil"); } else if (!string.IsNullOrEmpty(xmlType)) { if (xmlType != "string") { writer.WriteStartAttribute("type"); writer.WriteString(xmlType); writer.WriteEndAttribute(); } } else { isCustom = true; xmlType = "___" + value.GetType().FullName; writer.WriteStartAttribute("type"); writer.WriteString(xmlType); writer.WriteEndAttribute(); } // Actual deserialization if (!isCustom) { if (value != null) writer.WriteValue(value); } else { XmlSerializer ser = new XmlSerializer(value.GetType()); ser.Serialize(writer, value); } writer.WriteEndElement(); // value writer.WriteEndElement(); // item } } /// <summary> /// Reads the custom serialized format /// </summary> /// <param name="reader"></param> public void ReadXml(System.Xml.XmlReader reader) { this.Clear(); while (reader.Read()) { if (reader.NodeType == XmlNodeType.Element && reader.Name == "key") { string xmlType = null; string name = reader.ReadElementContentAsString(); // item element reader.ReadToNextSibling("value"); if (reader.MoveToNextAttribute()) xmlType = reader.Value; reader.MoveToContent(); TValue value; if (xmlType == "nil") value = default(TValue); // null else if (string.IsNullOrEmpty(xmlType)) { // value is a string or object and we can assign TValue to value string strval = reader.ReadElementContentAsString(); value = (TValue) Convert.ChangeType(strval, typeof(TValue)); } else if (xmlType.StartsWith("___")) { while (reader.Read() && reader.NodeType != XmlNodeType.Element) { } Type type = ReflectionUtils.GetTypeFromName(xmlType.Substring(3)); //value = reader.ReadElementContentAs(type,null); XmlSerializer ser = new XmlSerializer(type); value = (TValue)ser.Deserialize(reader); } else value = (TValue)reader.ReadElementContentAs(XmlUtils.MapXmlTypeToType(xmlType), null); this.Add(name, value); } } } /// <summary> /// Serializes this dictionary to an XML string /// </summary> /// <returns>XML String or Null if it fails</returns> public string ToXml() { string xml = null; SerializationUtils.SerializeObject(this, 
out xml); return xml; } /// <summary> /// Deserializes from an XML string /// </summary> /// <param name="xml"></param> /// <returns>true or false</returns> public bool FromXml(string xml) { this.Clear(); // if xml string is empty we return an empty dictionary if (string.IsNullOrEmpty(xml)) return true; var result = SerializationUtils.DeSerializeObject(xml, this.GetType()) as PropertyBag<TValue>; if (result != null) { foreach (var item in result) { this.Add(item.Key, item.Value); } } else // null is a failure return false; return true; } /// <summary> /// Creates an instance of a propertybag from an Xml string /// </summary> /// <param name="xml"></param> /// <returns></returns> public static PropertyBag<TValue> CreateFromXml(string xml) { var bag = new PropertyBag<TValue>(); bag.FromXml(xml); return bag; } } } The code uses a couple of small helper classes SerializationUtils and XmlUtils for mapping Xml types to and from .NET, both of which are from the WestWind,Utilities project (which is the same project where PropertyBag lives) from the West Wind Web Toolkit. The code implements ReadXml and WriteXml for the IXmlSerializable implementation using old school XmlReaders and XmlWriters (because it's pretty simple stuff - no need for XLinq here). Then there are two helper methods .ToXml() and .FromXml() that basically allow your code to easily convert between XML and a PropertyBag object. In my code that's what I use to actually to persist to and from the entity XML property during .Load() and .Save() operations. It's sweet to be able to have a string key dictionary and then be able to turn around with 1 line of code to persist the whole thing to XML and back. Hopefully some of you will find this class as useful as I've found it. It's a simple solution to a common requirement in my applications and I've used the hell out of it in the  short time since I created it. Resources You can find the complete code for the two classes plus the helpers in the Subversion repository for Westwind.Utilities. You can grab the source files from there or download the whole project. You can also grab the full Westwind.Utilities assembly from NuGet and add it to your project if that's easier for you. PropertyBag Source Code SerializationUtils and XmlUtils Westwind.Utilities Assembly on NuGet (add from Visual Studio) © Rick Strahl, West Wind Technologies, 2005-2011Posted in .NET  CSharp   Tweet (function() { var po = document.createElement('script'); po.type = 'text/javascript'; po.async = true; po.src = 'https://apis.google.com/js/plusone.js'; var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(po, s); })();

    Read the article

  • Problem rendering VBO

    - by Onno
    I'm developing a game engine using OpenTK. I'm trying to get to grips with the use of VBO's. I've run into some trouble because somehow it doesn't render correctly. Thus far I've used immediate mode to render a test object, a test cube with a texture. namespace SharpEngine.Utility.Mesh { using System; using System.Collections.Generic; using OpenTK; using OpenTK.Graphics; using OpenTK.Graphics.OpenGL; using SharpEngine.Utility; using System.Drawing; public class ImmediateFaceBasedCube : IMesh { private IList<Face> faces = new List<Face>(); public ImmediateFaceBasedCube() { IList<Vector3> allVertices = new List<Vector3>(); //rechtsbovenvoor allVertices.Add(new Vector3(1.0f, 1.0f, 1.0f)); //0 //rechtsbovenachter allVertices.Add(new Vector3(1.0f, 1.0f, -1.0f)); //1 //linksbovenachter allVertices.Add(new Vector3(-1.0f, 1.0f, -1.0f)); //2 //linksbovenvoor allVertices.Add(new Vector3(-1.0f, 1.0f, 1.0f)); //3 //rechtsondervoor allVertices.Add(new Vector3(1.0f, -1.0f, 1.0f)); //4 //rechtsonderachter allVertices.Add(new Vector3(1.0f, -1.0f, -1.0f)); //5 //linksonderachter allVertices.Add(new Vector3(-1.0f, -1.0f, -1.0f)); //6 //linksondervoor allVertices.Add(new Vector3(-1.0f, -1.0f, 1.0f)); //7 IList<Vector2> textureCoordinates = new List<Vector2>(); textureCoordinates.Add(new Vector2(0, 0)); //AA - 0 textureCoordinates.Add(new Vector2(0, 0.3333333f)); //AB - 1 textureCoordinates.Add(new Vector2(0, 0.6666666f)); //AC - 2 textureCoordinates.Add(new Vector2(0, 1)); //AD - 3 textureCoordinates.Add(new Vector2(0.3333333f, 0)); //BA - 4 textureCoordinates.Add(new Vector2(0.3333333f, 0.3333333f)); //BB - 5 textureCoordinates.Add(new Vector2(0.3333333f, 0.6666666f)); //BC - 6 textureCoordinates.Add(new Vector2(0.3333333f, 1)); //BD - 7 textureCoordinates.Add(new Vector2(0.6666666f, 0)); //CA - 8 textureCoordinates.Add(new Vector2(0.6666666f, 0.3333333f)); //CB - 9 textureCoordinates.Add(new Vector2(0.6666666f, 0.6666666f)); //CC -10 textureCoordinates.Add(new Vector2(0.6666666f, 1)); //CD -11 textureCoordinates.Add(new Vector2(1, 0)); //DA -12 textureCoordinates.Add(new Vector2(1, 0.3333333f)); //DB -13 textureCoordinates.Add(new Vector2(1, 0.6666666f)); //DC -14 textureCoordinates.Add(new Vector2(1, 1)); //DD -15 Vector3 copy1 = new Vector3(-2.0f, -2.5f, -3.5f); IList<Vector3> normals = new List<Vector3>(); normals.Add(new Vector3(0, 1.0f, 0)); //0 normals.Add(new Vector3(0, 0, 1.0f)); //1 normals.Add(new Vector3(1.0f, 0, 0)); //2 normals.Add(new Vector3(0, 0, -1.0f)); //3 normals.Add(new Vector3(-1.0f, 0, 0)); //4 normals.Add(new Vector3(0, -1.0f, 0)); //5 //todo: move vertex normal and texture data to datastructure //todo: VBO based rendering //top face //1 IList<VertexData> verticesT1 = new List<VertexData>(); VertexData T1a = new VertexData(); T1a.Normal = normals[0]; T1a.TexCoord = textureCoordinates[5]; T1a.Position = allVertices[3]; verticesT1.Add(T1a); VertexData T1b = new VertexData(); T1b.Normal = normals[0]; T1b.TexCoord = textureCoordinates[9]; T1b.Position = allVertices[0]; verticesT1.Add(T1b); VertexData T1c = new VertexData(); T1c.Normal = normals[0]; T1c.TexCoord = textureCoordinates[10]; T1c.Position = allVertices[1]; verticesT1.Add(T1c); Face F1 = new Face(verticesT1); faces.Add(F1); //2 IList<VertexData> verticesT2 = new List<VertexData>(); VertexData T2a = new VertexData(); T2a.Normal = normals[0]; T2a.TexCoord = textureCoordinates[10]; T2a.Position = allVertices[1]; verticesT2.Add(T2a); VertexData T2b = new VertexData(); T2b.Normal = normals[0]; T2b.TexCoord = 
textureCoordinates[6]; T2b.Position = allVertices[2]; verticesT2.Add(T2b); VertexData T2c = new VertexData(); T2c.Normal = normals[0]; T2c.TexCoord = textureCoordinates[5]; T2c.Position = allVertices[3]; verticesT2.Add(T2c); Face F2 = new Face(verticesT2); faces.Add(F2); //front face //3 IList<VertexData> verticesT3 = new List<VertexData>(); VertexData T3a = new VertexData(); T3a.Normal = normals[1]; T3a.TexCoord = textureCoordinates[1]; T3a.Position = allVertices[3]; verticesT3.Add(T3a); VertexData T3b = new VertexData(); T3b.Normal = normals[1]; T3b.TexCoord = textureCoordinates[0]; T3b.Position = allVertices[7]; verticesT3.Add(T3b); VertexData T3c = new VertexData(); T3c.Normal = normals[1]; T3c.TexCoord = textureCoordinates[5]; T3c.Position = allVertices[0]; verticesT3.Add(T3c); Face F3 = new Face(verticesT3); faces.Add(F3); //4 IList<VertexData> verticesT4 = new List<VertexData>(); VertexData T4a = new VertexData(); T4a.Normal = normals[1]; T4a.TexCoord = textureCoordinates[5]; T4a.Position = allVertices[0]; verticesT4.Add(T4a); VertexData T4b = new VertexData(); T4b.Normal = normals[1]; T4b.TexCoord = textureCoordinates[0]; T4b.Position = allVertices[7]; verticesT4.Add(T4b); VertexData T4c = new VertexData(); T4c.Normal = normals[1]; T4c.TexCoord = textureCoordinates[4]; T4c.Position = allVertices[4]; verticesT4.Add(T4c); Face F4 = new Face(verticesT4); faces.Add(F4); //right face //5 IList<VertexData> verticesT5 = new List<VertexData>(); VertexData T5a = new VertexData(); T5a.Normal = normals[2]; T5a.TexCoord = textureCoordinates[2]; T5a.Position = allVertices[0]; verticesT5.Add(T5a); VertexData T5b = new VertexData(); T5b.Normal = normals[2]; T5b.TexCoord = textureCoordinates[1]; T5b.Position = allVertices[4]; verticesT5.Add(T5b); VertexData T5c = new VertexData(); T5c.Normal = normals[2]; T5c.TexCoord = textureCoordinates[6]; T5c.Position = allVertices[1]; verticesT5.Add(T5c); Face F5 = new Face(verticesT5); faces.Add(F5); //6 IList<VertexData> verticesT6 = new List<VertexData>(); VertexData T6a = new VertexData(); T6a.Normal = normals[2]; T6a.TexCoord = textureCoordinates[1]; T6a.Position = allVertices[4]; verticesT6.Add(T6a); VertexData T6b = new VertexData(); T6b.Normal = normals[2]; T6b.TexCoord = textureCoordinates[5]; T6b.Position = allVertices[5]; verticesT6.Add(T6b); VertexData T6c = new VertexData(); T6c.Normal = normals[2]; T6c.TexCoord = textureCoordinates[6]; T6c.Position = allVertices[1]; verticesT6.Add(T6c); Face F6 = new Face(verticesT6); faces.Add(F6); //back face //7 IList<VertexData> verticesT7 = new List<VertexData>(); VertexData T7a = new VertexData(); T7a.Normal = normals[3]; T7a.TexCoord = textureCoordinates[4]; T7a.Position = allVertices[5]; verticesT7.Add(T7a); VertexData T7b = new VertexData(); T7b.Normal = normals[3]; T7b.TexCoord = textureCoordinates[9]; T7b.Position = allVertices[2]; verticesT7.Add(T7b); VertexData T7c = new VertexData(); T7c.Normal = normals[3]; T7c.TexCoord = textureCoordinates[5]; T7c.Position = allVertices[1]; verticesT7.Add(T7c); Face F7 = new Face(verticesT7); faces.Add(F7); //8 IList<VertexData> verticesT8 = new List<VertexData>(); VertexData T8a = new VertexData(); T8a.Normal = normals[3]; T8a.TexCoord = textureCoordinates[9]; T8a.Position = allVertices[2]; verticesT8.Add(T8a); VertexData T8b = new VertexData(); T8b.Normal = normals[3]; T8b.TexCoord = textureCoordinates[4]; T8b.Position = allVertices[5]; verticesT8.Add(T8b); VertexData T8c = new VertexData(); T8c.Normal = normals[3]; T8c.TexCoord = textureCoordinates[8]; 
T8c.Position = allVertices[6]; verticesT8.Add(T8c); Face F8 = new Face(verticesT8); faces.Add(F8); //left face //9 IList<VertexData> verticesT9 = new List<VertexData>(); VertexData T9a = new VertexData(); T9a.Normal = normals[4]; T9a.TexCoord = textureCoordinates[8]; T9a.Position = allVertices[6]; verticesT9.Add(T9a); VertexData T9b = new VertexData(); T9b.Normal = normals[4]; T9b.TexCoord = textureCoordinates[13]; T9b.Position = allVertices[3]; verticesT9.Add(T9b); VertexData T9c = new VertexData(); T9c.Normal = normals[4]; T9c.TexCoord = textureCoordinates[9]; T9c.Position = allVertices[2]; verticesT9.Add(T9c); Face F9 = new Face(verticesT9); faces.Add(F9); //10 IList<VertexData> verticesT10 = new List<VertexData>(); VertexData T10a = new VertexData(); T10a.Normal = normals[4]; T10a.TexCoord = textureCoordinates[8]; T10a.Position = allVertices[6]; verticesT10.Add(T10a); VertexData T10b = new VertexData(); T10b.Normal = normals[4]; T10b.TexCoord = textureCoordinates[12]; T10b.Position = allVertices[7]; verticesT10.Add(T10b); VertexData T10c = new VertexData(); T10c.Normal = normals[4]; T10c.TexCoord = textureCoordinates[13]; T10c.Position = allVertices[3]; verticesT10.Add(T10c); Face F10 = new Face(verticesT10); faces.Add(F10); //bottom face //11 IList<VertexData> verticesT11 = new List<VertexData>(); VertexData T11a = new VertexData(); T11a.Normal = normals[5]; T11a.TexCoord = textureCoordinates[10]; T11a.Position = allVertices[7]; verticesT11.Add(T11a); VertexData T11b = new VertexData(); T11b.Normal = normals[5]; T11b.TexCoord = textureCoordinates[9]; T11b.Position = allVertices[6]; verticesT11.Add(T11b); VertexData T11c = new VertexData(); T11c.Normal = normals[5]; T11c.TexCoord = textureCoordinates[14]; T11c.Position = allVertices[4]; verticesT11.Add(T11c); Face F11 = new Face(verticesT11); faces.Add(F11); //12 IList<VertexData> verticesT12 = new List<VertexData>(); VertexData T12a = new VertexData(); T12a.Normal = normals[5]; T12a.TexCoord = textureCoordinates[13]; T12a.Position = allVertices[5]; verticesT12.Add(T12a); VertexData T12b = new VertexData(); T12b.Normal = normals[5]; T12b.TexCoord = textureCoordinates[14]; T12b.Position = allVertices[4]; verticesT12.Add(T12b); VertexData T12c = new VertexData(); T12c.Normal = normals[5]; T12c.TexCoord = textureCoordinates[9]; T12c.Position = allVertices[6]; verticesT12.Add(T12c); Face F12 = new Face(verticesT12); faces.Add(F12); } public void draw() { GL.Begin(BeginMode.Triangles); foreach (Face face in faces) { foreach (VertexData datapoint in face.verticesWithTexCoords) { GL.Normal3(datapoint.Normal); GL.TexCoord2(datapoint.TexCoord); GL.Vertex3(datapoint.Position); } } GL.End(); } } } Gets me this very nice picture: The immediate mode cube renders nicely and taught me a bit on how to use OpenGL, but VBO's are the way to go. Since I read on the OpenTK forums that OpenTK has problems doing VA's or DL's, I decided to skip using those. Now, I've tried to change this cube to a VBO by using the same vertex, normal and tc collections, and making float arrays from them by using the coordinates in combination with uint arrays which contain the index numbers from the immediate cube. 
(see the private functions at end of the code sample) Somehow this only renders two triangles namespace SharpEngine.Utility.Mesh { using System; using System.Collections.Generic; using OpenTK; using OpenTK.Graphics; using OpenTK.Graphics.OpenGL; using SharpEngine.Utility; using System.Drawing; public class VBOFaceBasedCube : IMesh { private int VerticesVBOID; private int VerticesVBOStride; private int VertexCount; private int ELementBufferObjectID; private int textureCoordinateVBOID; private int textureCoordinateVBOStride; //private int textureCoordinateArraySize; private int normalVBOID; private int normalVBOStride; public VBOFaceBasedCube() { IList<Vector3> allVertices = new List<Vector3>(); //rechtsbovenvoor allVertices.Add(new Vector3(1.0f, 1.0f, 1.0f)); //0 //rechtsbovenachter allVertices.Add(new Vector3(1.0f, 1.0f, -1.0f)); //1 //linksbovenachter allVertices.Add(new Vector3(-1.0f, 1.0f, -1.0f)); //2 //linksbovenvoor allVertices.Add(new Vector3(-1.0f, 1.0f, 1.0f)); //3 //rechtsondervoor allVertices.Add(new Vector3(1.0f, -1.0f, 1.0f)); //4 //rechtsonderachter allVertices.Add(new Vector3(1.0f, -1.0f, -1.0f)); //5 //linksonderachter allVertices.Add(new Vector3(-1.0f, -1.0f, -1.0f)); //6 //linksondervoor allVertices.Add(new Vector3(-1.0f, -1.0f, 1.0f)); //7 IList<Vector2> textureCoordinates = new List<Vector2>(); textureCoordinates.Add(new Vector2(0, 0)); //AA - 0 textureCoordinates.Add(new Vector2(0, 0.3333333f)); //AB - 1 textureCoordinates.Add(new Vector2(0, 0.6666666f)); //AC - 2 textureCoordinates.Add(new Vector2(0, 1)); //AD - 3 textureCoordinates.Add(new Vector2(0.3333333f, 0)); //BA - 4 textureCoordinates.Add(new Vector2(0.3333333f, 0.3333333f)); //BB - 5 textureCoordinates.Add(new Vector2(0.3333333f, 0.6666666f)); //BC - 6 textureCoordinates.Add(new Vector2(0.3333333f, 1)); //BD - 7 textureCoordinates.Add(new Vector2(0.6666666f, 0)); //CA - 8 textureCoordinates.Add(new Vector2(0.6666666f, 0.3333333f)); //CB - 9 textureCoordinates.Add(new Vector2(0.6666666f, 0.6666666f)); //CC -10 textureCoordinates.Add(new Vector2(0.6666666f, 1)); //CD -11 textureCoordinates.Add(new Vector2(1, 0)); //DA -12 textureCoordinates.Add(new Vector2(1, 0.3333333f)); //DB -13 textureCoordinates.Add(new Vector2(1, 0.6666666f)); //DC -14 textureCoordinates.Add(new Vector2(1, 1)); //DD -15 Vector3 copy1 = new Vector3(-2.0f, -2.5f, -3.5f); IList<Vector3> normals = new List<Vector3>(); normals.Add(new Vector3(0, 1.0f, 0)); //0 normals.Add(new Vector3(0, 0, 1.0f)); //1 normals.Add(new Vector3(1.0f, 0, 0)); //2 normals.Add(new Vector3(0, 0, -1.0f)); //3 normals.Add(new Vector3(-1.0f, 0, 0)); //4 normals.Add(new Vector3(0, -1.0f, 0)); //5 //todo: VBO based rendering uint[] vertexElements = { 3,0,1, //01 1,2,3, //02 3,7,0, //03 0,7,4, //04 0,4,1, //05 4,5,1, //06 5,2,1, //07 2,5,6, //08 6,3,2, //09 6,7,5, //10 7,6,4, //11 5,4,6 //12 }; VertexCount = vertexElements.Length; IList<uint> vertexElementList = new List<uint>(vertexElements); uint[] normalElements = { 0,0,0, 0,0,0, 1,1,1, 1,1,1, 2,2,2, 2,2,2, 3,3,3, 3,3,3, 4,4,4, 4,4,4, 5,5,5, 5,5,5 }; IList<uint> normalElementList = new List<uint>(normalElements); uint[] textureIndexArray = { 5,9,10, 10,6,5, 1,0,5, 5,0,4, 2,1,6, 1,5,6, 4,9,5, 9,4,8, 8,13,9, 8,12,13, 10,9,14, 13,14,9 }; //textureCoordinateArraySize = textureIndexArray.Length; IList<uint> textureIndexList = new List<uint>(textureIndexArray); LoadVBO(allVertices, normals, textureCoordinates, vertexElements, normalElementList, textureIndexList); } public void draw() { //bind vertices //bind elements //bind 
normals //bind texture coordinates GL.EnableClientState(ArrayCap.VertexArray); GL.EnableClientState(ArrayCap.NormalArray); GL.EnableClientState(ArrayCap.TextureCoordArray); GL.BindBuffer(BufferTarget.ArrayBuffer, VerticesVBOID); GL.VertexPointer(3, VertexPointerType.Float, VerticesVBOStride, 0); GL.BindBuffer(BufferTarget.ArrayBuffer, normalVBOID); GL.NormalPointer(NormalPointerType.Float, normalVBOStride, 0); GL.BindBuffer(BufferTarget.ArrayBuffer, textureCoordinateVBOID); GL.TexCoordPointer(2, TexCoordPointerType.Float, textureCoordinateVBOStride, 0); GL.BindBuffer(BufferTarget.ElementArrayBuffer, ELementBufferObjectID); GL.DrawElements(BeginMode.Polygon, VertexCount, DrawElementsType.UnsignedShort, 0); } //loads a static VBO void LoadVBO(IList<Vector3> vertices, IList<Vector3> normals, IList<Vector2> texcoords, uint[] elements, IList<uint> normalIndices, IList<uint> texCoordIndices) { int size; //todo // To create a VBO: // 1) Generate the buffer handles for the vertex and element buffers. // 2) Bind the vertex buffer handle and upload your vertex data. Check that the buffer was uploaded correctly. // 3) Bind the element buffer handle and upload your element data. Check that the buffer was uploaded correctly. float[] verticesArray = convertVector3fListToFloatArray(vertices); float[] normalsArray = createFloatArrayFromListOfVector3ElementsAndIndices(normals, normalIndices); float[] textureCoordinateArray = createFloatArrayFromListOfVector2ElementsAndIndices(texcoords, texCoordIndices); GL.GenBuffers(1, out VerticesVBOID); GL.BindBuffer(BufferTarget.ArrayBuffer, VerticesVBOID); Console.WriteLine("load 1 - vertices"); VerticesVBOStride = BlittableValueType.StrideOf(verticesArray); GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(verticesArray.Length * sizeof(float)), verticesArray, BufferUsageHint.StaticDraw); GL.GetBufferParameter(BufferTarget.ArrayBuffer, BufferParameterName.BufferSize, out size); if (verticesArray.Length * BlittableValueType.StrideOf(verticesArray) != size) { throw new ApplicationException("Vertex data not uploaded correctly"); } else { Console.WriteLine("load 1 finished ok"); size = 0; } Console.WriteLine("load 2 - elements"); GL.GenBuffers(1, out ELementBufferObjectID); GL.BindBuffer(BufferTarget.ElementArrayBuffer, ELementBufferObjectID); GL.BufferData(BufferTarget.ElementArrayBuffer, (IntPtr)(elements.Length * sizeof(uint)), elements, BufferUsageHint.StaticDraw); GL.GetBufferParameter(BufferTarget.ElementArrayBuffer, BufferParameterName.BufferSize, out size); if (elements.Length * sizeof(uint) != size) { throw new ApplicationException("Element data not uploaded correctly"); } else { size = 0; Console.WriteLine("load 2 finished ok"); } GL.GenBuffers(1, out normalVBOID); GL.BindBuffer(BufferTarget.ArrayBuffer, normalVBOID); Console.WriteLine("load 3 - normals"); normalVBOStride = BlittableValueType.StrideOf(normalsArray); GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(normalsArray.Length * sizeof(float)), normalsArray, BufferUsageHint.StaticDraw); GL.GetBufferParameter(BufferTarget.ArrayBuffer, BufferParameterName.BufferSize, out size); Console.WriteLine("load 3 - pre check"); if (normalsArray.Length * BlittableValueType.StrideOf(normalsArray) != size) { throw new ApplicationException("Normal data not uploaded correctly"); } else { Console.WriteLine("load 3 finished ok"); size = 0; } GL.GenBuffers(1, out textureCoordinateVBOID); GL.BindBuffer(BufferTarget.ArrayBuffer, textureCoordinateVBOID); Console.WriteLine("load 4- texture coordinates"); 
textureCoordinateVBOStride = BlittableValueType.StrideOf(textureCoordinateArray); GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(textureCoordinateArray.Length * textureCoordinateVBOStride), textureCoordinateArray, BufferUsageHint.StaticDraw); GL.GetBufferParameter(BufferTarget.ArrayBuffer, BufferParameterName.BufferSize, out size); if (textureCoordinateArray.Length * BlittableValueType.StrideOf(textureCoordinateArray) != size) { throw new ApplicationException("texture coordinate data not uploaded correctly"); } else { Console.WriteLine("load 3 finished ok"); size = 0; } } //used to convert vertex arrayss for use with VBO's private float[] convertVector3fListToFloatArray(IList<Vector3> input) { int arrayElementCount = input.Count * 3; float[] output = new float[arrayElementCount]; int fillCount = 0; foreach (Vector3 v in input) { output[fillCount] = v.X; output[fillCount + 1] = v.Y; output[fillCount + 2] = v.Z; fillCount += 3; } return output; } //used for converting texture coordinate arrays for use with VBO's private float[] convertVector2List_to_floatArray(IList<Vector2> input) { int arrayElementCount = input.Count * 2; float[] output = new float[arrayElementCount]; int fillCount = 0; foreach (Vector2 v in input) { output[fillCount] = v.X; output[fillCount + 1] = v.Y; fillCount += 2; } return output; } //used to create an array of floats from private float[] createFloatArrayFromListOfVector3ElementsAndIndices(IList<Vector3> inputVectors, IList<uint> indices) { int arrayElementCount = inputVectors.Count * indices.Count * 3; float[] output = new float[arrayElementCount]; int fillCount = 0; foreach (int i in indices) { output[fillCount] = inputVectors[i].X; output[fillCount + 1] = inputVectors[i].Y; output[fillCount + 2] = inputVectors[i].Z; fillCount += 3; } return output; } private float[] createFloatArrayFromListOfVector2ElementsAndIndices(IList<Vector2> inputVectors, IList<uint> indices) { int arrayElementCount = inputVectors.Count * indices.Count * 2; float[] output = new float[arrayElementCount]; int fillCount = 0; foreach (int i in indices) { output[fillCount] = inputVectors[i].X; output[fillCount + 1] = inputVectors[i].Y; fillCount += 2; } return output; } } } This code will only render two triangles and they're nothing like I had in mind: I've done some searching. In some other questions I read that, if I did something wrong, I'd get no rendering at all. Clearly, something gets sent to the GFX card, but it might be that I'm not sending the right data. I've tried altering the sequence in which the triangles are rendered by swapping some of the index numbers in the vert, tc and normal index arrays, but this doesn't seem to be of any effect. I'm slightly lost here. What am I doing wrong here?
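
    Two hedged observations on the VBO code above (guesses from the posted source, not a tested fix). First, the element buffer is filled with 32-bit uint values and the data describes triangles, yet the draw call asks for BeginMode.Polygon with DrawElementsType.UnsignedShort. Second, OpenGL uses a single index per vertex that addresses every attribute array at the same position, so expanding the normal and texture-coordinate arrays per corner while leaving the position array and element indices un-expanded means the attributes no longer line up; each unique (position, normal, texcoord) combination needs its own slot in all three arrays. With that layout the draw call would look like one of these:

    // if every attribute array is expanded per corner (36 entries), no element buffer is needed:
    GL.DrawArrays(BeginMode.Triangles, 0, VertexCount);

    // if indexed drawing is kept, the index type must match the uint[] element data
    // and the primitive type should be triangles, not polygons:
    GL.DrawElements(BeginMode.Triangles, VertexCount, DrawElementsType.UnsignedInt, 0);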

    Read the article

  • xen 4.1 host periodically dropping network packets of domU

    - by Dyutiman Chakraborty
    I have xen 4.1 Host running on a ubuntu 12.04 LTS Server with ip 153.x.x.54. I have setup 2 VMs on it, namely, "dev.mydomain.com" and "web.mydomain.com" with ips 195.X.X.2 and 195.x.x.3 respectively. For network the VMs connect through xendbr0 (xen-bridge), and can accces the network properly. I can also login to the VMs with ssh with no issue. However when I ping any of the VMs, there is a high amount of periodic packet drop. If I the ping the xen host (dom0) there is no packet drop. Following is a output of "tcpdump | grep ICMP" on dOM0 while I was pinging one of the domU tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes 05:19:55.682493 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 30, length 64 05:19:56.691144 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 31, length 64 05:19:57.698776 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 32, length 64 05:19:58.706784 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 33, length 64 05:19:59.714751 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 34, length 64 05:20:00.723144 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 35, length 64 05:20:01.730349 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 36, length 64 05:20:02.739017 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 37, length 64 05:20:03.746806 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 38, length 64 05:20:06.770326 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 41, length 64 05:20:07.778801 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 42, length 64 05:20:08.786481 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 43, length 64 05:20:09.794720 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 44, length 64 05:20:10.802395 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 45, length 64 05:20:11.810770 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 46, length 64 05:20:12.818511 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 47, length 64 05:20:13.826817 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 48, length 64 05:20:14.835125 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 49, length 64 05:20:15.842138 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3460, seq 50, length 64 05:20:18.274072 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 1, length 64 05:20:19.282347 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 
2, length 64 05:20:20.290746 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 3, length 64 05:20:21.297910 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 4, length 64 05:20:22.305656 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 5, length 64 05:20:23.314369 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 6, length 64 05:20:24.322055 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 7, length 64 05:20:25.329782 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 8, length 64 05:20:26.338473 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 9, length 64 05:20:27.346411 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 10, length 64 05:20:28.354175 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 11, length 64 05:20:29.361640 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 12, length 64 05:20:30.370026 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 13, length 64 05:20:31.377696 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 14, length 64 05:20:32.386151 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 15, length 64 05:20:33.394118 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 16, length 64 05:20:34.402058 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 17, length 64 05:20:35.409002 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 18, length 64 05:20:36.417692 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > web.mydomain.com: ICMP echo request, id 3461, seq 19, length 64 05:20:36.496916 IP6 fe80::3285:a9ff:feec:fc69 > ip6-allnodes: HBH ICMP6, multicast listener querymax resp delay: 1000 addr: ::, length 24 05:20:36.499112 IP6 fe80::21c:c0ff:fe6c:c091 > ff02::1:ff6c:c091: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff6c:c091, length 24 05:20:36.507041 IP6 fe80::227:eff:fe11:fa3f > ff02::1:ff00:2: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff00:2, length 24 05:20:36.523919 IP6 fe80::21c:c0ff:fe77:6257 > ff02::1:ff77:6257: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff77:6257, length 24 05:20:36.544785 IP6 fe80::54:ff:fe12:ea9a > ff02::1:ff12:ea9a: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff12:ea9a, length 24 05:20:36.581740 IP6 fe80::5604:a6ff:fef1:6da7 > ff02::1:fff1:6da7: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:fff1:6da7, length 24 05:20:36.600103 IP6 fe80::8a8:8aa0:5e18:917a > ff02::1:ff18:917a: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff18:917a, length 24 05:20:36.601989 IP6 fe80::227:eff:fe11:fa3e > ff02::1:ff11:fa3e: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff11:fa3e, length 24 05:20:36.611090 IP6 
fe80::dcad:56ff:fe57:3bbe > ff02::1:ff57:3bbe: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff57:3bbe, length 24 05:20:36.660521 IP6 fe80::54:ff:fe02:1d31 > ff02::1:ff00:6: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff00:6, length 24 05:20:36.698871 IP6 fe80::21e:8cff:feb4:9f89 > ff02::1:ffb4:9f89: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ffb4:9f89, length 24 05:20:36.776548 IP6 fe80::54:ff:fe12:ea9a > ff02::1:ff01:7: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff01:7, length 24 05:20:36.781910 IP6 fe80::54:ff:fe8f:6dd > ff02::1:ff00:3: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff00:3, length 24 05:20:36.865475 IP6 fe80::21c:c0ff:fe4a:ae9f > ff02::1:ff4a:ae9f: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff4a:ae9f, length 24 05:20:36.908333 IP6 fe80::dcad:45ff:fe90:84db > ff02::1:ff90:84db: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff90:84db, length 24 05:20:36.919653 IP6 fe80::54:ff:fe12:ea9a > ff02::1:ff00:7: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff00:7, length 24 05:20:36.924276 IP6 fe80::59a2:2a4a:2082:6dee > ff02::1:ff82:6dee: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff82:6dee, length 24 05:20:37.001905 IP6 fe80::54:ff:fe8f:6dd > ff02::1:ff8f:6dd: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff8f:6dd, length 24 05:20:37.042403 IP6 fe80::54:ff:fe95:54f2 > ff02::1:ff95:54f2: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff95:54f2, length 24 05:20:37.090992 IP6 fe80::21c:c0ff:fe77:62ac > ff02::1:ff77:62ac: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff77:62ac, length 24 05:20:37.098118 IP6 fe80::d63d:7eff:fe01:b67f > ff02::1:ff01:b67f: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff01:b67f, length 24 05:20:37.118784 IP6 fe80::54:ff:fe12:ea9a > ff02::202: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::202, length 24 05:20:37.168548 IP6 fe80::54:ff:fe02:1d31 > ff02::1:ff02:1d31: HBH ICMP6, multicast listener reportmax resp delay: 0 addr: ff02::1:ff02:1d31, length 24 05:20:41.743286 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 1, length 64 05:20:41.743542 IP dev.mydomain.com > ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in: ICMP echo reply, id 3463, seq 1, length 64 05:20:42.743859 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 2, length 64 05:20:42.743952 IP dev.mydomain.com > ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in: ICMP echo reply, id 3463, seq 2, length 64 05:20:43.745689 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 3, length 64 05:20:43.745777 IP dev.mydomain.com > ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in: ICMP echo reply, id 3463, seq 3, length 64 05:20:44.746706 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 4, length 64 05:20:44.746796 IP dev.mydomain.com > ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in: ICMP echo reply, id 3463, seq 4, length 64 05:20:45.747986 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 5, length 64 05:20:45.748082 IP dev.mydomain.com > ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in: ICMP echo reply, id 3463, 
seq 5, length 64 05:20:46.749834 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 6, length 64 05:20:46.749920 IP dev.mydomain.com > ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in: ICMP echo reply, id 3463, seq 6, length 64 05:20:47.750838 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 7, length 64 05:20:47.751182 IP dev.mydomain.com > ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in: ICMP echo reply, id 3463, seq 7, length 64 05:20:48.751909 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 8, length 64 05:20:48.751991 IP dev.mydomain.com > ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in: ICMP echo reply, id 3463, seq 8, length 64 05:20:49.752542 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 9, length 64 05:20:49.752620 IP dev.mydomain.com > ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in: ICMP echo reply, id 3463, seq 9, length 64 05:20:50.754246 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 10, length 64 05:20:51.753856 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 11, length 64 05:20:52.752868 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 12, length 64 05:20:53.754174 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 13, length 64 05:20:54.753972 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 14, length 64 05:20:55.753814 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 15, length 64 05:20:56.753391 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 16, length 64 05:20:57.753683 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 17, length 64 05:20:58.753487 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 18, length 64 05:20:59.754013 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 19, length 64 05:21:00.753169 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 20, length 64 05:21:01.753757 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 21, length 64 05:21:02.753307 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 22, length 64 05:21:03.753021 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 23, length 64 05:21:04.753628 IP ABTS-North-Dynamic-226.X.X.122.airtelbroadband.in > dev.mydomain.com: ICMP echo request, id 3463, seq 24, length 64 ^C479 packets captured 718 packets received by filter 238 packets dropped by kernel 3 packets dropped by interface You can see that the ping requests are not responded to initially, then for a moment they are answered, and then again there is no reply. I have tried everything (to the best of my knowledge) to fix this, but can't find an answer. Any help will be greatly appreciated. Thanks.
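    One way to narrow down where the replies get lost is to capture on the bridge and on the guest's backend interface at the same time. A minimal diagnostic sketch, assuming the bridge is the xendbr0 mentioned above and the affected guest's backend interface is vif1.0 (both names are assumptions -- check brctl show and xl list for the real ones):

        # capture ICMP for the affected guest on the bridge and on its vif in parallel
        # (xendbr0 / vif1.0 are assumed names; 195.x.x.3 is web.mydomain.com from above)
        tcpdump -ni xendbr0 'icmp and host 195.x.x.3' -w bridge.pcap &
        tcpdump -ni vif1.0 'icmp and host 195.x.x.3' -w vif.pcap &
        # ping from outside for a minute, stop both captures, then compare:
        # requests on vif.pcap with no replies -> the guest (or its firewall) drops them;
        # replies on vif.pcap but not on bridge.pcap -> the bridge/forwarding path is at fault.

    Using a capture filter (icmp) also keeps the kernel from handing every packet to tcpdump, which matters given the "238 packets dropped by kernel" in the summary above.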

    Read the article

  • Converting linear colors to SRGB shows banding in FFmpeg

    - by user1863947
    When I convert an EXR file sequence with x264 using FFmpeg and convert the colorspace from linear to SRGB (with gamma 0.45454545) I get some heavy banding issues (most visible on a dark gradient). Here is the ffmpeg command I use: C:/ffmpeg.exe -y -i C:/seq_v001.%04d.exr -vf lutrgb=r=gammaval(0.45454545):g=gammaval(0.45454545):b=gammaval(0.45454545) -vcodec libx264 -pix_fmt yuv420p -preset slow -crf 18 -r 25 C:/out.mov Here is the output: ffmpeg version N-47062-g26c531c Copyright (c) 2000-2012 the FFmpeg developers built on Nov 25 2012 12:25:21 with gcc 4.7.2 (GCC) configuration: --enable-gpl --enable-version3 --disable-pthreads --enable-runtime-cpudetect --enable-avisynth --enable-bzlib --enable-frei0r --enable-libass --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-libnut --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libutvideo --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib libavutil 52. 9.100 / 52. 9.100 libavcodec 54. 77.100 / 54. 77.100 libavformat 54. 37.100 / 54. 37.100 libavdevice 54. 3.100 / 54. 3.100 libavfilter 3. 23.102 / 3. 23.102 libswscale 2. 1.102 / 2. 1.102 libswresample 0. 17.101 / 0. 17.101 libpostproc 52. 2.100 / 52. 2.100 Input #0, image2, from 'C:/seq_v001.%04d.exr': Duration: 00:00:09.60, start: 0.000000, bitrate: N/A Stream #0:0: Video: exr, rgb48le, 960x540 [SAR 1:1 DAR 16:9], 25 fps, 25 tbr, 25 tbn, 25 tbc [libx264 @ 0000000004d11540] using SAR=1/1 [libx264 @ 0000000004d11540] using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE4.2 [libx264 @ 0000000004d11540] profile High, level 3.1 [libx264 @ 0000000004d11540] 264 - core 128 r2216 198a7ea - H.264/MPEG-4 AVC codec - Copyleft 2003-2012 - http://www.videolan.org/x264.html - options: cabac=1 ref=5 deblock=1:0:0 analyse=0x3:0x113 me=umh subme=8 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=18 lookahead_threads=3 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=2 b_bias=0 direct=3 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=50 rc=crf mbtree=1 crf=18.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00 Output #0, mov, to 'C:/out.mov': Metadata: encoder : Lavf54.37.100 Stream #0:0: Video: h264 (avc1 / 0x31637661), yuv420p, 960x540 [SAR 1:1 DAR 16:9], q=-1--1, 12800 tbn, 25 tbc Stream mapping: Stream #0:0 -> #0:0 (exr -> libx264) Press [q] to stop, [?] 
for help [exr @ 000000000dff8c40] Found more than one compression attribute [exr @ 000000000dff90c0] Found more than one compression attribute [exr @ 000000000dff9520] Found more than one compression attribute [exr @ 000000000dff9960] Found more than one compression attribute [exr @ 000000000dff9dc0] Found more than one compression attribute [exr @ 000000000dffa200] Found more than one compression attribute [exr @ 000000000dffa660] Found more than one compression attribute [exr @ 000000000dffaaa0] Found more than one compression attribute [exr @ 000000000dffaf00] Found more than one compression attribute [exr @ 000000000dffb340] Found more than one compression attribute [exr @ 000000000dffb7a0] Found more than one compression attribute [exr @ 000000000dffbbe0] Found more than one compression attribute [exr @ 000000000dffc040] Found more than one compression attribute [exr @ 000000000dff8c40] Found more than one compression attribute [exr @ 000000000dff90c0] Found more than one compression attribute frame= 16 fps=0.0 q=0.0 size= 0kB time=00:00:00.00 bitrate= 0.0kbits/s Found more than one compression attribute [exr @ 000000000dff9960] Found more than one compression attribute [exr @ 000000000dff9dc0] Found more than one compression attribute [exr @ 000000000dffa200] Found more than one compression attribute [exr @ 000000000dffa660] Found more than one compression attribute [exr @ 000000000dffaaa0] Found more than one compression attribute [exr @ 000000000dffaf00] Found more than one compression attribute [exr @ 000000000dffb340] Found more than one compression attribute [exr @ 000000000dffb7a0] Found more than one compression attribute [exr @ 000000000dffbbe0] Found more than one compression attribute [exr @ 000000000dffc040] Found more than one compression attribute [exr @ 000000000dff8c40] Found more than one compression attribute [exr @ 000000000dff90c0] Found more than one compression attribute [exr @ 000000000dff9520] Found more than one compression attribute [exr @ 000000000dff9960] Found more than one compression attribute [exr @ 000000000dff9dc0] Found more than one compression attribute [exr @ 000000000dffa200] Found more than one compression attribute [exr @ 000000000dffa660] Found more than one compression attribute frame= 34 fps= 33 q=0.0 size= 0kB time=00:00:00.00 bitrate= 0.0kbits/s Found more than one compression attribute [exr @ 000000000dffaf00] Found more than one compression attribute [exr @ 000000000dffb340] Found more than one compression attribute [exr @ 000000000dffb7a0] Found more than one compression attribute [exr @ 000000000dffbbe0] Found more than one compression attribute [exr @ 000000000dffc040] Found more than one compression attribute [exr @ 000000000dff8c40] Found more than one compression attribute [exr @ 000000000dff90c0] Found more than one compression attribute [exr @ 000000000dff9520] Found more than one compression attribute [exr @ 000000000dff9960] Found more than one compression attribute [exr @ 000000000dff9dc0] Found more than one compression attribute [exr @ 000000000dffa200] Found more than one compression attribute [exr @ 000000000dffa660] Found more than one compression attribute [exr @ 000000000dffaaa0] Found more than one compression attribute [exr @ 000000000dffaf00] Found more than one compression attribute [exr @ 000000000dffb340] Found more than one compression attribute [exr @ 000000000dffb7a0] Found more than one compression attribute [exr @ 000000000dffbbe0] Found more than one compression attribute frame= 52 fps= 34 q=0.0 size= 0kB 
time=00:00:00.00 bitrate= 0.0kbits/s Found more than one compression attribute [exr @ 000000000dff8c40] Found more than one compression attribute [exr @ 000000000dff90c0] Found more than one compression attribute [exr @ 000000000dff9520] Found more than one compression attribute [exr @ 000000000dff9960] Found more than one compression attribute [exr @ 000000000dff9dc0] Found more than one compression attribute [exr @ 000000000dffa200] Found more than one compression attribute [exr @ 000000000dffa660] Found more than one compression attribute [exr @ 000000000dffaaa0] Found more than one compression attribute [exr @ 000000000dffaf00] Found more than one compression attribute [exr @ 000000000dffb340] Found more than one compression attribute [exr @ 000000000dffb7a0] Found more than one compression attribute [exr @ 000000000dffbbe0] Found more than one compression attribute [exr @ 000000000dffc040] Found more than one compression attribute [exr @ 000000000dff8c40] Found more than one compression attribute [exr @ 000000000dff90c0] Found more than one compression attribute frame= 68 fps= 34 q=0.0 size= 0kB time=00:00:00.00 bitrate= 0.0kbits/s Found more than one compression attribute [exr @ 000000000dff9960] Found more than one compression attribute [exr @ 000000000dff9dc0] Found more than one compression attribute [exr @ 000000000dffa200] Found more than one compression attribute [exr @ 000000000dffa660] Found more than one compression attribute [exr @ 000000000dffaaa0] Found more than one compression attribute [exr @ 000000000dffaf00] Found more than one compression attribute [exr @ 000000000dffb340] Found more than one compression attribute [exr @ 000000000dffb7a0] Found more than one compression attribute [exr @ 000000000dffbbe0] Found more than one compression attribute [exr @ 000000000dffc040] Found more than one compression attribute [exr @ 000000000dff8c40] Found more than one compression attribute [exr @ 000000000dff90c0] Found more than one compression attribute [exr @ 000000000dff9520] Found more than one compression attribute [exr @ 000000000dff9960] Found more than one compression attribute [exr @ 000000000dff9dc0] Found more than one compression attribute [exr @ 000000000dffa200] Found more than one compression attribute frame= 85 fps= 33 q=23.0 size= 47kB time=00:00:00.44 bitrate= 867.5kbits/s Found more than one compression attribute [exr @ 000000000dffaaa0] Found more than one compression attribute [exr @ 000000000dffaf00] Found more than one compression attribute [exr @ 000000000dffb340] Found more than one compression attribute [exr @ 000000000dffb7a0] Found more than one compression attribute [exr @ 000000000dffbbe0] Found more than one compression attribute [exr @ 000000000dffc040] Found more than one compression attribute [exr @ 000000000dff8c40] Found more than one compression attribute [exr @ 000000000dff90c0] Found more than one compression attribute [exr @ 000000000dff9520] Found more than one compression attribute [exr @ 000000000dff9960] Found more than one compression attribute [exr @ 000000000dff9dc0] Found more than one compression attribute [exr @ 000000000dffa200] Found more than one compression attribute [exr @ 000000000dffa660] Found more than one compression attribute [exr @ 000000000dffaaa0] Found more than one compression attribute [exr @ 000000000dffaf00] Found more than one compression attribute [exr @ 000000000dffb340] Found more than one compression attribute [exr @ 000000000dffb7a0] Found more than one compression attribute [exr @ 000000000dffbbe0] 
Found more than one compression attribute frame= 104 fps= 34 q=23.0 size= 94kB time=00:00:01.20 bitrate= 640.3kbits/s Found more than one compression attribute [exr @ 000000000dff8c40] Found more than one compression attribute [exr @ 000000000dff90c0] Found more than one compression attribute [exr @ 000000000dff9520] Found more than one compression attribute [exr @ 000000000dff9960] Found more than one compression attribute [exr @ 000000000dff9dc0] Found more than one compression attribute [exr @ 000000000dffa200] Found more than one compression attribute [exr @ 000000000dffa660] Found more than one compression attribute [exr @ 000000000dffaaa0] Found more than one compression attribute [exr @ 000000000dffaf00] Found more than one compression attribute [exr @ 000000000dffb340] Found more than one compression attribute [exr @ 000000000dffb7a0] Found more than one compression attribute [exr @ 000000000dffbbe0] Found more than one compression attribute [exr @ 000000000dffc040] Found more than one compression attribute [exr @ 000000000dff8c40] Found more than one compression attribute [exr @ 000000000dff90c0] Found more than one compression attribute [exr @ 000000000dff9520] Found more than one compression attribute frame= 121 fps= 34 q=23.0 size= 133kB time=00:00:01.88 bitrate= 577.8kbits/s Found more than one compression attribute [exr @ 000000000dff9dc0] Found more than one compression attribute [exr @ 000000000dffa200] Found more than one compression attribute [exr @ 000000000dffa660] Found more than one compression attribute [exr @ 000000000dffaaa0] Found more than one compression attribute [exr @ 000000000dffaf00] Found more than one compression attribute [exr @ 000000000dffb340] Found more than one compression attribute [exr @ 000000000dffb7a0] Found more than one compression attribute [exr @ 000000000dffbbe0] Found more than one compression attribute [exr @ 000000000dffc040] Found more than one compression attribute [exr @ 000000000dff8c40] Found more than one compression attribute [exr @ 000000000dff90c0] Found more than one compression attribute [exr @ 000000000dff9520] Found more than one compression attribute [exr @ 000000000dff9960] Found more than one compression attribute [exr @ 000000000dff9dc0] Found more than one compression attribute [exr @ 000000000dffa200] Found more than one compression attribute [exr @ 000000000dffa660] Found more than one compression attribute [exr @ 000000000dffaaa0] Found more than one compression attribute frame= 139 fps= 34 q=23.0 size= 172kB time=00:00:02.60 bitrate= 543.4kbits/s Found more than one compression attribute [exr @ 000000000dffb340] Found more than one compression attribute [exr @ 000000000dffb7a0] Found more than one compression attribute [exr @ 000000000dffbbe0] Found more than one compression attribute [exr @ 000000000dffc040] Found more than one compression attribute [exr @ 000000000dff8c40] Found more than one compression attribute [exr @ 000000000dff90c0] Found more than one compression attribute [exr @ 000000000dff9520] Found more than one compression attribute [exr @ 000000000dff9960] Found more than one compression attribute [exr @ 000000000dff9dc0] Found more than one compression attribute [exr @ 000000000dffa200] Found more than one compression attribute [exr @ 000000000dffa660] Found more than one compression attribute [exr @ 000000000dffaaa0] Found more than one compression attribute [exr @ 000000000dffaf00] Found more than one compression attribute [exr @ 000000000dffb340] Found more than one compression attribute [exr @ 
000000000dffb7a0] Found more than one compression attribute [exr @ 000000000dffbbe0] Found more than one compression attribute [exr @ 000000000dffc040] Found more than one compression attribute frame= 157 fps= 34 q=23.0 size= 213kB time=00:00:03.32 bitrate= 525.6kbits/s Found more than one compression attribute [exr @ 000000000dff90c0] Found more than one compression attribute [exr @ 000000000dff9520] Found more than one compression attribute [exr @ 000000000dff9960] Found more than one compression attribute [exr @ 000000000dff9dc0] Found more than one compression attribute [exr @ 000000000dffa200] Found more than one compression attribute [exr @ 000000000dffa660] Found more than one compression attribute [exr @ 000000000dffaaa0] Found more than one compression attribute [exr @ 000000000dffaf00] Found more than one compression attribute [exr @ 000000000dffb340] Found more than one compression attribute [exr @ 000000000dffb7a0] Found more than one compression attribute [exr @ 000000000dffbbe0] Found more than one compression attribute [exr @ 000000000dffc040] Found more than one compression attribute [exr @ 000000000dff8c40] Found more than one compression attribute [exr @ 000000000dff90c0] Found more than one compression attribute [exr @ 000000000dff9520] Found more than one compression attribute [exr @ 000000000dff9960] Found more than one compression attribute [exr @ 000000000dff9dc0] Found more than one compression attribute frame= 175 fps= 34 q=23.0 size= 254kB time=00:00:04.04 bitrate= 516.0kbits/s Found more than one compression attribute [exr @ 000000000dffa660] Found more than one compression attribute [exr @ 000000000dffaaa0] Found more than one compression attribute [exr @ 000000000dffaf00] Found more than one compression attribute [exr @ 000000000dffb340] Found more than one compression attribute [exr @ 000000000dffb7a0] Found more than one compression attribute [exr @ 000000000dffbbe0] Found more than one compression attribute [exr @ 000000000dffc040] Found more than one compression attribute [exr @ 000000000dff8c40] Found more than one compression attribute [exr @ 000000000dff90c0] Found more than one compression attribute [exr @ 000000000dff9520] Found more than one compression attribute [exr @ 000000000dff9960] Found more than one compression attribute [exr @ 000000000dff9dc0] Found more than one compression attribute [exr @ 000000000dffa200] Found more than one compression attribute [exr @ 000000000dffa660] Found more than one compression attribute [exr @ 000000000dffaaa0] Found more than one compression attribute [exr @ 000000000dffaf00] Found more than one compression attribute [exr @ 000000000dffb340] Found more than one compression attribute frame= 193 fps= 35 q=23.0 size= 287kB time=00:00:04.76 bitrate= 494.6kbits/s Found more than one compression attribute [exr @ 000000000dffbbe0] Found more than one compression attribute [exr @ 000000000dffc040] Found more than one compression attribute [exr @ 000000000dff8c40] Found more than one compression attribute [exr @ 000000000dff90c0] Found more than one compression attribute [exr @ 000000000dff9520] Found more than one compression attribute [exr @ 000000000dff9960] Found more than one compression attribute [exr @ 000000000dff9dc0] Found more than one compression attribute [exr @ 000000000dffa200] Found more than one compression attribute [exr @ 000000000dffa660] Found more than one compression attribute [exr @ 000000000dffaaa0] Found more than one compression attribute [exr @ 000000000dffaf00] Found more than one 
compression attribute [exr @ 000000000dffb340] Found more than one compression attribute [exr @ 000000000dffb7a0] Found more than one compression attribute [exr @ 000000000dffbbe0] Found more than one compression attribute [exr @ 000000000dffc040] Found more than one compression attribute [exr @ 000000000dff8c40] Found more than one compression attribute [exr @ 000000000dff90c0] Found more than one compression attribute frame= 211 fps= 35 q=23.0 size= 332kB time=00:00:05.48 bitrate= 496.4kbits/s Found more than one compression attribute [exr @ 000000000dff9960] Found more than one compression attribute [exr @ 000000000dff9dc0] Found more than one compression attribute [exr @ 000000000dffa200] Found more than one compression attribute [exr @ 000000000dffa660] Found more than one compression attribute [exr @ 000000000dffaaa0] Found more than one compression attribute [exr @ 000000000dffaf00] Found more than one compression attribute [exr @ 000000000dffb340] Found more than one compression attribute [exr @ 000000000dffb7a0] Found more than one compression attribute [exr @ 000000000dffbbe0] Found more than one compression attribute [exr @ 000000000dffc040] Found more than one compression attribute [exr @ 000000000dff8c40] Found more than one compression attribute [exr @ 000000000dff90c0] Found more than one compression attribute [exr @ 000000000dff9520] Found more than one compression attribute [exr @ 000000000dff9960] Found more than one compression attribute [exr @ 000000000dff9dc0] Found more than one compression attribute [exr @ 000000000dffa200] Found more than one compression attribute frame= 228 fps= 34 q=23.0 size= 421kB time=00:00:06.16 bitrate= 559.8kbits/s frame= 240 fps= 32 q=-1.0 Lsize= 708kB time=00:00:09.52 bitrate= 609.3kbits/s video:705kB audio:0kB subtitle:0 global headers:0kB muxing overhead 0.505636% [libx264 @ 0000000004d11540] frame I:2 Avg QP:15.07 size: 18186 [libx264 @ 0000000004d11540] frame P:73 Avg QP:16.51 size: 3719 [libx264 @ 0000000004d11540] frame B:165 Avg QP:18.38 size: 2502 [libx264 @ 0000000004d11540] consecutive B-frames: 2.5% 3.3% 42.5% 51.7% [libx264 @ 0000000004d11540] mb I I16..4: 46.2% 33.3% 20.4% [libx264 @ 0000000004d11540] mb P I16..4: 6.8% 2.0% 0.6% P16..4: 29.4% 10.5% 4.6% 0.0% 0.0% skip:46.1% [libx264 @ 0000000004d11540] mb B I16..4: 1.8% 0.7% 0.2% B16..8: 40.9% 6.5% 0.3% direct: 1.2% skip:48.5% L0:52.0% L1:47.5% BI: 0.5% [libx264 @ 0000000004d11540] 8x8 transform intra:24.7% inter:81.3% [libx264 @ 0000000004d11540] direct mvs spatial:93.3% temporal:6.7% [libx264 @ 0000000004d11540] coded y,uvDC,uvAC intra: 10.7% 31.4% 24.9% inter: 2.3% 9.0% 2.9% [libx264 @ 0000000004d11540] i16 v,h,dc,p: 83% 11% 6% 1% [libx264 @ 0000000004d11540] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 9% 9% 52% 6% 4% 4% 5% 5% 5% [libx264 @ 0000000004d11540] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 22% 11% 44% 5% 4% 3% 3% 4% 3% [libx264 @ 0000000004d11540] i8c dc,h,v,p: 69% 15% 15% 2% [libx264 @ 0000000004d11540] Weighted P-Frames: Y:0.0% UV:0.0% [libx264 @ 0000000004d11540] ref P L0: 48.9% 0.1% 16.8% 17.0% 11.3% 5.8% [libx264 @ 0000000004d11540] ref B L0: 57.7% 21.9% 13.9% 6.4% [libx264 @ 0000000004d11540] ref B L1: 82.4% 17.6% [libx264 @ 0000000004d11540] kb/s:600.61 For me it looks like it converts the video first and afterwards applies the gamma correction on 8-bit clipped video. Does someone have an idea?
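    If the suspicion above is right and the LUT ends up being applied after a conversion to 8 bits, one thing worth trying is to pin the pixel format explicitly so the gamma runs on the 16-bit RGB data before any downconversion. A sketch under that assumption -- the only change from the original command is the format=rgb48le inserted in front of lutrgb:

        rem force the LUT to run on 16-bit RGB; everything else unchanged
        C:/ffmpeg.exe -y -i C:/seq_v001.%04d.exr -vf "format=rgb48le,lutrgb=r=gammaval(0.45454545):g=gammaval(0.45454545):b=gammaval(0.45454545)" -vcodec libx264 -pix_fmt yuv420p -preset slow -crf 18 -r 25 C:/out.mov

    If banding persists even with the LUT applied in 16-bit, the 8-bit yuv420p target itself may simply be too shallow for a dark gradient, in which case adding dither or encoding with a 10-bit x264 build is the usual next step.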

    Read the article

  • Nginx and client certificates from hierarchical OpenSSL-based certification authorities

    - by Fmy Oen
    I'm trying to set up root certification authority, subordinate certification authority and to generate the client certificates signed by any of this CA that nginx 0.7.67 on Debian Squeeze will accept. My problem is that root CA signed client certificate works fine while subordinate CA signed one results in "400 Bad Request. The SSL certificate error". Step 1: nginx virtual host configuration: server { server_name test.local; access_log /var/log/nginx/test.access.log; listen 443 default ssl; keepalive_timeout 70; ssl_protocols SSLv3 TLSv1; ssl_ciphers AES128-SHA:AES256-SHA:RC4-SHA:DES-CBC3-SHA:RC4-MD5; ssl_certificate /etc/nginx/ssl/server.crt; ssl_certificate_key /etc/nginx/ssl/server.key; ssl_client_certificate /etc/nginx/ssl/client.pem; ssl_verify_client on; ssl_session_cache shared:SSL:10m; ssl_session_timeout 5m; location / { proxy_pass http://testsite.local/; } } Step 2: PKI infrastructure organization for both root and subordinate CA (based on this article): # mkdir ~/pki && cd ~/pki # mkdir rootCA subCA # cp -v /etc/ssl/openssl.cnf rootCA/ # cd rootCA/ # mkdir certs private crl newcerts; touch serial; echo 01 > serial; touch index.txt; touch crlnumber; echo 01 > crlnumber # cp -Rvp * ../subCA/ Almost no changes was made to rootCA/openssl.cnf: [ CA_default ] dir = . # Where everything is kept ... certificate = $dir/certs/rootca.crt # The CA certificate ... private_key = $dir/private/rootca.key # The private key and to subCA/openssl.cnf: [ CA_default ] dir = . # Where everything is kept ... certificate = $dir/certs/subca.crt # The CA certificate ... private_key = $dir/private/subca.key # The private key Step 3: Self-signed root CA certificate generation: # openssl genrsa -out ./private/rootca.key -des3 2048 # openssl req -x509 -new -key ./private/rootca.key -out certs/rootca.crt -config openssl.cnf Enter pass phrase for ./private/rootca.key: You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. ----- Country Name (2 letter code) [AU]: State or Province Name (full name) [Some-State]: Locality Name (eg, city) []: Organization Name (eg, company) [Internet Widgits Pty Ltd]: Organizational Unit Name (eg, section) []: Common Name (eg, YOUR name) []:rootca Email Address []: Step 4: Subordinate CA certificate generation: # cd ../subCA # openssl genrsa -out ./private/subca.key -des3 2048 # openssl req -new -key ./private/subca.key -out subca.csr -config openssl.cnf Enter pass phrase for ./private/subca.key: You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. 
----- Country Name (2 letter code) [AU]: State or Province Name (full name) [Some-State]: Locality Name (eg, city) []: Organization Name (eg, company) [Internet Widgits Pty Ltd]: Organizational Unit Name (eg, section) []: Common Name (eg, YOUR name) []:subca Email Address []: Please enter the following 'extra' attributes to be sent with your certificate request A challenge password []: An optional company name []: Step 5: Subordinate CA certificate signing by root CA certificate: # cd ../rootCA/ # openssl ca -in ../subCA/subca.csr -extensions v3_ca -config openssl.cnf Using configuration from openssl.cnf Enter pass phrase for ./private/rootca.key: Check that the request matches the signature Signature ok Certificate Details: Serial Number: 1 (0x1) Validity Not Before: Feb 4 10:49:43 2013 GMT Not After : Feb 4 10:49:43 2014 GMT Subject: countryName = AU stateOrProvinceName = Some-State organizationName = Internet Widgits Pty Ltd commonName = subca X509v3 extensions: X509v3 Subject Key Identifier: C9:E2:AC:31:53:81:86:3F:CD:F8:3D:47:10:FC:E5:8E:C2:DA:A9:20 X509v3 Authority Key Identifier: keyid:E9:50:E6:BF:57:03:EA:6E:8F:21:23:86:BB:44:3D:9F:8F:4A:8B:F2 DirName:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca serial:9F:FB:56:66:8D:D3:8F:11 X509v3 Basic Constraints: CA:TRUE Certificate is to be certified until Feb 4 10:49:43 2014 GMT (365 days) Sign the certificate? [y/n]:y 1 out of 1 certificate requests certified, commit? [y/n]y ... # cd ../subCA/ # cp -v ../rootCA/newcerts/01.pem certs/subca.crt Step 6: Server certificate generation and signing by root CA (for nginx virtual host): # cd ../rootCA # openssl genrsa -out ./private/server.key -des3 2048 # openssl req -new -key ./private/server.key -out server.csr -config openssl.cnf Enter pass phrase for ./private/server.key: You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. ----- Country Name (2 letter code) [AU]: State or Province Name (full name) [Some-State]: Locality Name (eg, city) []: Organization Name (eg, company) [Internet Widgits Pty Ltd]: Organizational Unit Name (eg, section) []: Common Name (eg, YOUR name) []:test.local Email Address []: Please enter the following 'extra' attributes to be sent with your certificate request A challenge password []: An optional company name []: # openssl ca -in server.csr -out certs/server.crt -config openssl.cnf Step 7: Client #1 certificate generation and signing by root CA: # openssl genrsa -out ./private/client1.key -des3 2048 # openssl req -new -key ./private/client1.key -out client1.csr -config openssl.cnf Enter pass phrase for ./private/client1.key: You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. 
----- Country Name (2 letter code) [AU]: State or Province Name (full name) [Some-State]: Locality Name (eg, city) []: Organization Name (eg, company) [Internet Widgits Pty Ltd]: Organizational Unit Name (eg, section) []: Common Name (eg, YOUR name) []:Client #1 Email Address []: Please enter the following 'extra' attributes to be sent with your certificate request A challenge password []: An optional company name []: # openssl ca -in client1.csr -out certs/client1.crt -config openssl.cnf Step 8: Client #1 certificate converting to PKCS12 format: # openssl pkcs12 -export -out certs/client1.p12 -inkey private/client1.key -in certs/client1.crt -certfile certs/rootca.crt Step 9: Client #2 certificate generation and signing by subordinate CA: # cd ../subCA/ # openssl genrsa -out ./private/client2.key -des3 2048 # openssl req -new -key ./private/client2.key -out client2.csr -config openssl.cnf Enter pass phrase for ./private/client2.key: You are about to be asked to enter information that will be incorporated into your certificate request. What you are about to enter is what is called a Distinguished Name or a DN. There are quite a few fields but you can leave some blank For some fields there will be a default value, If you enter '.', the field will be left blank. ----- Country Name (2 letter code) [AU]: State or Province Name (full name) [Some-State]: Locality Name (eg, city) []: Organization Name (eg, company) [Internet Widgits Pty Ltd]: Organizational Unit Name (eg, section) []: Common Name (eg, YOUR name) []:Client #2 Email Address []: Please enter the following 'extra' attributes to be sent with your certificate request A challenge password []: An optional company name []: # openssl ca -in client2.csr -out certs/client2.crt -config openssl.cnf Step 10: Client #2 certificate converting to PKCS12 format: # openssl pkcs12 -export -out certs/client2.p12 -inkey private/client2.key -in certs/client2.crt -certfile certs/subca.crt Step 11: Passing server certificate and private key to nginx (performed with OS superuser privileges): # cd ../rootCA/ # cp -v certs/server.crt /etc/nginx/ssl/ # cp -v private/server.key /etc/nginx/ssl/ Step 12: Passing root and subordinate CA certificates to nginx (performed with OS superuser privileges): # cat certs/rootca.crt > /etc/nginx/ssl/client.pem # cat ../subCA/certs/subca.crt >> /etc/nginx/ssl/client.pem client.pem file look like this: # cat /etc/nginx/ssl/client.pem -----BEGIN CERTIFICATE----- MIID6TCCAtGgAwIBAgIJAJ/7VmaN048RMA0GCSqGSIb3DQEBBQUAMFYxCzAJBgNV BAYTAkFVMRMwEQYDVQQIEwpTb21lLVN0YXRlMSEwHwYDVQQKExhJbnRlcm5ldCBX aWRnaXRzIFB0eSBMdGQxDzANBgNVBAMTBnJvb3RjYTAeFw0xMzAyMDQxMDM1NTda ... -----END CERTIFICATE----- Certificate: Data: Version: 3 (0x2) Serial Number: 1 (0x1) ... -----BEGIN CERTIFICATE----- MIID4DCCAsigAwIBAgIBATANBgkqhkiG9w0BAQUFADBWMQswCQYDVQQGEwJBVTET MBEGA1UECBMKU29tZS1TdGF0ZTEhMB8GA1UEChMYSW50ZXJuZXQgV2lkZ2l0cyBQ dHkgTHRkMQ8wDQYDVQQDEwZyb290Y2EwHhcNMTMwMjA0MTA0OTQzWhcNMTQwMjA0 ... -----END CERTIFICATE----- It looks like everything is working fine: # service nginx reload # Reloading nginx configuration: Enter PEM pass phrase: # nginx. # Step 13: Installing *.p12 certificates in browser (Firefox in my case) gives the problem I've mentioned above. Client #1 = 200 OK, Client #2 = 400 Bad request/The SSL certificate error. Any ideas what should I do? 
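    Before digging further into nginx, a quick sanity check is to verify the subordinate-signed client certificate with openssl itself, using the same files as in the steps above (run from ~/pki/rootCA; a sketch, paths assumed from the layout described earlier):

        # verify Client #2 against the root CA, supplying the subordinate CA as an untrusted intermediate
        openssl verify -CAfile certs/rootca.crt -untrusted ../subCA/certs/subca.crt ../subCA/certs/client2.crt
        # and against the concatenated file handed to nginx
        openssl verify -CAfile /etc/nginx/ssl/client.pem ../subCA/certs/client2.crt

    If both report OK, the chain itself is sound and the failure is on the nginx side of the verification.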
Update 1: Results of SSL connection test attempts: # openssl s_client -connect test.local:443 -CAfile ~/pki/rootCA/certs/rootca.crt -cert ~/pki/rootCA/certs/client1.crt -key ~/pki/rootCA/private/client1.key -showcerts Enter pass phrase for tmp/testcert/client1.key: CONNECTED(00000003) depth=1 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = rootca verify return:1 depth=0 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = test.local verify return:1 --- Certificate chain 0 s:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=test.local i:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca -----BEGIN CERTIFICATE----- MIIDpjCCAo6gAwIBAgIBAjANBgkqhkiG9w0BAQUFADBWMQswCQYDVQQGEwJBVTET MBEGA1UECBMKU29tZS1TdGF0ZTEhMB8GA1UEChMYSW50ZXJuZXQgV2lkZ2l0cyBQ dHkgTHRkMQ8wDQYDVQQDEwZyb290Y2EwHhcNMTMwMjA0MTEwNjAzWhcNMTQwMjA0 ... -----END CERTIFICATE----- 1 s:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca i:/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca -----BEGIN CERTIFICATE----- MIID6TCCAtGgAwIBAgIJAJ/7VmaN048RMA0GCSqGSIb3DQEBBQUAMFYxCzAJBgNV BAYTAkFVMRMwEQYDVQQIEwpTb21lLVN0YXRlMSEwHwYDVQQKExhJbnRlcm5ldCBX aWRnaXRzIFB0eSBMdGQxDzANBgNVBAMTBnJvb3RjYTAeFw0xMzAyMDQxMDM1NTda ... -----END CERTIFICATE----- --- Server certificate subject=/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=test.local issuer=/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca --- Acceptable client certificate CA names /C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca /C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=subca --- SSL handshake has read 3395 bytes and written 2779 bytes --- New, TLSv1/SSLv3, Cipher is AES256-SHA Server public key is 2048 bit Secure Renegotiation IS supported Compression: zlib compression Expansion: zlib compression SSL-Session: Protocol : TLSv1 Cipher : AES256-SHA Session-ID: 15BFC2029691262542FAE95A48078305E76EEE7D586400F8C4F7C516B0F9D967 Session-ID-ctx: Master-Key: 23246CF166E8F3900793F0A2561879E5DB07291F32E99591BA1CF53E6229491FEAE6858BFC9AACAF271D9C3706F139C7 Key-Arg : None PSK identity: None PSK identity hint: None SRP username: None TLS session ticket: 0000 - c2 5e 1d d2 b5 6d 40 23-b2 40 89 e4 35 75 70 07 .^...m@#[email protected]. 0010 - 1b bb 2b e6 e0 b5 ab 10-10 bf 46 6e aa 67 7f 58 ..+.......Fn.g.X 0020 - cf 0e 65 a4 67 5a 15 ba-aa 93 4e dd 3d 6e 73 4c ..e.gZ....N.=nsL 0030 - c5 56 f6 06 24 0f 48 e6-38 36 de f1 b5 31 c5 86 .V..$.H.86...1.. ... 0440 - 4c 53 39 e3 92 84 d2 d0-e5 e2 f5 8a 6a a8 86 b1 LS9.........j... Compression: 1 (zlib compression) Start Time: 1359989684 Timeout : 300 (sec) Verify return code: 0 (ok) --- Everything seems fine with Client #2 and root CA certificate but request returns 400 Bad Request error: # openssl s_client -connect test.local:443 -CAfile ~/pki/rootCA/certs/rootca.crt -cert ~/pki/subCA/certs/client2.crt -key ~/pki/subCA/private/client2.key -showcerts Enter pass phrase for tmp/testcert/client2.key: CONNECTED(00000003) depth=1 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = rootca verify return:1 depth=0 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = test.local verify return:1 ... 
Compression: 1 (zlib compression) Start Time: 1359989989 Timeout : 300 (sec) Verify return code: 0 (ok) --- GET / HTTP/1.0 HTTP/1.1 400 Bad Request Server: nginx/0.7.67 Date: Mon, 04 Feb 2013 15:00:43 GMT Content-Type: text/html Content-Length: 231 Connection: close <html> <head><title>400 The SSL certificate error</title></head> <body bgcolor="white"> <center><h1>400 Bad Request</h1></center> <center>The SSL certificate error</center> <hr><center>nginx/0.7.67</center> </body> </html> closed Verification fails with Client #2 certificate and subordinate CA certificate: # openssl s_client -connect test.local:443 -CAfile ~/pki/subCA/certs/subca.crt -cert ~/pki/subCA/certs/client2.crt -key ~/pki/subCA/private/client2.key -showcerts Enter pass phrase for tmp/testcert/client2.key: CONNECTED(00000003) depth=1 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = rootca verify error:num=19:self signed certificate in certificate chain verify return:0 ... Compression: 1 (zlib compression) Start Time: 1359990354 Timeout : 300 (sec) Verify return code: 19 (self signed certificate in certificate chain) --- GET / HTTP/1.0 HTTP/1.1 400 Bad Request ... Still getting 400 Bad Request error with concatenated CA certificates and Client #2 (but still everything ok with Client #1): # cat certs/rootca.crt ../subCA/certs/subca.crt > certs/concatenatedca.crt # openssl s_client -connect test.local:443 -CAfile ~/pki/rootCA/certs/concatenatedca.crt -cert ~/pki/subCA/certs/client2.crt -key ~/pki/subCA/private/client2.key -showcerts Enter pass phrase for tmp/testcert/client2.key: CONNECTED(00000003) depth=1 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = rootca verify return:1 depth=0 C = AU, ST = Some-State, O = Internet Widgits Pty Ltd, CN = test.local verify return:1 --- ... Compression: 1 (zlib compression) Start Time: 1359990772 Timeout : 300 (sec) Verify return code: 0 (ok) --- GET / HTTP/1.0 HTTP/1.1 400 Bad Request ... Update 2: I've managed to recompile nginx with enabled debug. 
Here is the part of successfull conection by Client #1 track: 2013/02/05 14:08:23 [debug] 38701#0: *119 accept: <MY IP ADDRESS> fd:3 2013/02/05 14:08:23 [debug] 38701#0: *119 event timer add: 3: 60000:2856497512 2013/02/05 14:08:23 [debug] 38701#0: *119 kevent set event: 3: ft:-1 fl:0025 2013/02/05 14:08:23 [debug] 38701#0: *119 malloc: 28805200:660 2013/02/05 14:08:23 [debug] 38701#0: *119 malloc: 28834400:1024 2013/02/05 14:08:23 [debug] 38701#0: *119 posix_memalign: 28860000:4096 @16 2013/02/05 14:08:23 [debug] 38701#0: *119 http check ssl handshake 2013/02/05 14:08:23 [debug] 38701#0: *119 https ssl handshake: 0x16 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL server name: "test.local" 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_do_handshake: -1 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_get_error: 2 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL handshake handler: 0 2013/02/05 14:08:23 [debug] 38701#0: *119 verify:1, error:0, depth:1, subject:"/C=AU /ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca",issuer: "/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca" 2013/02/05 14:08:23 [debug] 38701#0: *119 verify:1, error:0, depth:0, subject:"/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=Client #1",issuer: "/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca" 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_do_handshake: 1 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL: TLSv1, cipher: "AES256-SHA SSLv3 Kx=RSA Au=RSA Enc=AES(256) Mac=SHA1" 2013/02/05 14:08:23 [debug] 38701#0: *119 http process request line 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_read: -1 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_get_error: 2 2013/02/05 14:08:23 [debug] 38701#0: *119 http process request line 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_read: 1 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_read: 524 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_read: -1 2013/02/05 14:08:23 [debug] 38701#0: *119 SSL_get_error: 2 2013/02/05 14:08:23 [debug] 38701#0: *119 http request line: "GET / HTTP/1.1" And here is the part of unsuccessfull conection by Client #2 track: 2013/02/05 13:51:34 [debug] 38701#0: *112 accept: <MY_IP_ADDRESS> fd:3 2013/02/05 13:51:34 [debug] 38701#0: *112 event timer add: 3: 60000:2855488975 2013/02/05 13:51:34 [debug] 38701#0: *112 kevent set event: 3: ft:-1 fl:0025 2013/02/05 13:51:34 [debug] 38701#0: *112 malloc: 28805200:660 2013/02/05 13:51:34 [debug] 38701#0: *112 malloc: 28834400:1024 2013/02/05 13:51:34 [debug] 38701#0: *112 posix_memalign: 28860000:4096 @16 2013/02/05 13:51:34 [debug] 38701#0: *112 http check ssl handshake 2013/02/05 13:51:34 [debug] 38701#0: *112 https ssl handshake: 0x16 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL server name: "test.local" 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_do_handshake: -1 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_get_error: 2 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL handshake handler: 0 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_do_handshake: -1 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_get_error: 2 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL handshake handler: 0 2013/02/05 13:51:34 [debug] 38701#0: *112 verify:0, error:20, depth:1, subject:"/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=subca",issuer: "/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca" 2013/02/05 13:51:34 [debug] 38701#0: *112 verify:0, error:27, depth:1, subject:"/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=subca",issuer: "/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=rootca" 2013/02/05 13:51:34 [debug] 
38701#0: *112 verify:1, error:27, depth:0, subject:"/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=Client #2",issuer: "/C=AU/ST=Some-State/O=Internet Widgits Pty Ltd/CN=subca" 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_do_handshake: 1 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL: TLSv1, cipher: "AES256-SHA SSLv3 Kx=RSA Au=RSA Enc=AES(256) Mac=SHA1" 2013/02/05 13:51:34 [debug] 38701#0: *112 http process request line 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_read: 1 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_read: 524 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_read: -1 2013/02/05 13:51:34 [debug] 38701#0: *112 SSL_get_error: 2 2013/02/05 13:51:34 [debug] 38701#0: *112 http request line: "GET / HTTP/1.1" So I'm getting OpenSSL error #20 and then #27. According to verify documentation: 20 X509_V_ERR_UNABLE_TO_GET_ISSUER_CERT_LOCALLY: unable to get local issuer certificate the issuer certificate could not be found: this occurs if the issuer certificate of an untrusted certificate cannot be found. 27 X509_V_ERR_CERT_UNTRUSTED: certificate not trusted the root CA is not marked as trusted for the specified purpose.
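    Given that the Client #2 chain is two links deep (client cert -> subca -> rootca), one setting worth checking is nginx's ssl_verify_depth, which defaults to 1 and would cut verification off before the root CA is reached -- consistent with errors 20 and 27 above. A minimal change to try in the server block from Step 1:

        # allow the verification chain client cert -> subca -> rootca (default depth is 1)
        ssl_verify_depth 2;

    After adding the directive next to ssl_verify_client on and reloading nginx, Client #2 should be re-tested; if it still fails, the problem lies elsewhere in the chain handling.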

    Read the article
