Search Results

Search found 25093 results on 1004 pages for 'console output'.


  • Debian virtual memory reaching limit

    - by Gregor
As a relative newbie to systems, I inherited a Debian server and I've noticed that virtual memory usage is very high (around 95%!). The server has been running slowly for around 6 months, and I was wondering if any of you had tips on things I could try, particularly on freeing up memory. The server hosts various websites and also a Postfix email server. Here are the details:

        Operating system        Debian Linux 5.0
        Webmin version          1.580
        Time on system          Thu Apr 12 11:12:21 2012
        Kernel and CPU          Linux 2.6.18-6-amd64 on x86_64
        Processor information   Intel(R) Core(TM)2 Duo CPU E7400 @ 2.80GHz, 2 cores
        System uptime           229 days, 12 hours, 50 minutes
        Running processes       138
        CPU load averages       0.10 (1 min) 0.28 (5 mins) 0.36 (15 mins)
        CPU usage               14% user, 1% kernel, 0% IO, 85% idle
        Real memory             2.94 GB total, 1.69 GB used
        Virtual memory          3.93 GB total, 3.84 GB used
        Local disk space        142.84 GB total, 116.13 GB used

    free -m output:

                     total       used       free     shared    buffers     cached
        Mem:          3010       2517        492          0        107        996
        -/+ buffers/cache:       1413       1596
        Swap:         4024       3930         93

    top output:

        top - 11:59:57 up 229 days, 13:38, 1 user, load average: 0.26, 0.24, 0.26
        Tasks: 136 total, 2 running, 134 sleeping, 0 stopped, 0 zombie
        Cpu(s): 3.8%us, 0.5%sy, 0.0%ni, 95.0%id, 0.7%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem:   3082544k total, 2773160k used, 309384k free, 111496k buffers
        Swap:  4120632k total, 4024712k used,  95920k free, 1036136k cached

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
        28796 www-data  16   0  304m  68m 6188 S    8  2.3   0:03.13 apache2
            1 root      15   0 10304  592  564 S    0  0.0   0:00.76 init
            2 root      RT   0     0    0    0 S    0  0.0   0:04.06 migration/0
            3 root      34  19     0    0    0 S    0  0.0   0:05.67 ksoftirqd/0
            4 root      RT   0     0    0    0 S    0  0.0   0:00.00 watchdog/0
            5 root      RT   0     0    0    0 S    0  0.0   0:00.06 migration/1
            6 root      34  19     0    0    0 S    0  0.0   0:01.26 ksoftirqd/1
            7 root      RT   0     0    0    0 S    0  0.0   0:00.00 watchdog/1
            8 root      10  -5     0    0    0 S    0  0.0   0:00.12 events/0
            9 root      10  -5     0    0    0 S    0  0.0   0:00.00 events/1
           10 root      10  -5     0    0    0 S    0  0.0   0:00.00 khelper
           11 root      10  -5     0    0    0 S    0  0.0   0:00.02 kthread
           16 root      10  -5     0    0    0 S    0  0.0   0:15.51 kblockd/0
           17 root      10  -5     0    0    0 S    0  0.0   0:01.32 kblockd/1
           18 root      15  -5     0    0    0 S    0  0.0   0:00.00 kacpid
          127 root      10  -5     0    0    0 S    0  0.0   0:00.00 khubd
          129 root      10  -5     0    0    0 S    0  0.0   0:00.00 kseriod
          180 root      10  -5     0    0    0 S    0  0.0  70:09.05 kswapd0
          181 root      17  -5     0    0    0 S    0  0.0   0:00.00 aio/0
          182 root      17  -5     0    0    0 S    0  0.0   0:00.00 aio/1
          780 root      16  -5     0    0    0 S    0  0.0   0:00.00 ata/0
          782 root      16  -5     0    0    0 S    0  0.0   0:00.00 ata/1
          783 root      16  -5     0    0    0 S    0  0.0   0:00.00 ata_aux
          802 root      10  -5     0    0    0 S    0  0.0   0:00.00 scsi_eh_0
          803 root      10  -5     0    0    0 S    0  0.0   0:00.00 scsi_eh_1
          804 root      10  -5     0    0    0 S    0  0.0   0:00.00 scsi_eh_2
          805 root      10  -5     0    0    0 S    0  0.0   0:00.00 scsi_eh_3
         1013 root      10  -5     0    0    0 S    0  0.0  49:27.78 kjournald
         1181 root      15  -4 16912  452  448 S    0  0.0   0:00.05 udevd
         1544 root      14  -5     0    0    0 S    0  0.0   0:00.00 kpsmoused
         1706 root      13  -5     0    0    0 S    0  0.0   0:00.00 kmirrord
         1995 root      18   0  193m 3324 1688 S    0  0.1   8:52.77 rsyslogd
         2031 root      15   0 48856  732  608 S    0  0.0   0:01.86 sshd
         2071 root      25   0 17316 1072 1068 S    0  0.0   0:00.00 mysqld_safe
         2108 mysql     15   0  320m  72m 4368 S    0  2.4 1923:25   mysqld
         2109 root      18   0  3776  500  496 S    0  0.0   0:00.00 logger
         2180 postgres  15   0 99504 3016 2880 S    0  0.1   1:24.15 postgres
         2184 postgres  15   0 99504 3596 3420 S    0  0.1   0:02.08 postgres
         2185 postgres  15   0 99504  696  628 S    0  0.0   0:00.65 postgres
         2186 postgres  15   0 99640  892  648 S    0  0.0   0:01.18 postgres
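    A quick way to see which processes are actually holding that swap, as a minimal sketch (assumption: the kernel exposes a VmSwap field in /proc/<pid>/status, which modern kernels do; on a kernel as old as 2.6.18 the field may be missing, in which case top's SWAP column is the fallback):

        # list the top 10 swap consumers, largest first (values in kB)
        for f in /proc/[0-9]*/status; do
            awk '/^Name/{n=$2} /^VmSwap/{print $2, n}' "$f"
        done | sort -rn | head -10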


  • Wi-Fi Stick with ZD1211 chip refuses to work on Ubuntu >8.10. No clue.

    - by Benjamin Maus
I have a machine running Ubuntu 9.10 (Karmic, x86_64). Everything is running smoothly so far, except for the Wi-Fi USB stick. The same device worked perfectly in 8.10. The wireless device is a GW-US54GXS using the ZyDAS ZD1211 chipset. dmesg output after plugging it in:

        [  196.303436] phy0: Selected rate control algorithm 'minstrel'
        [  196.304209] zd1211rw 2-1:1.0: phy0
        [  196.304227] usbcore: registered new interface driver zd1211rw
        [  196.334137] usb 2-1: firmware: requesting zd1211/zd1211b_ub
        [  196.357463] usb 2-1: firmware: requesting zd1211/zd1211b_uphr
        [  196.402643] zd1211rw 2-1:1.0: firmware version 4725
        [  196.442611] zd1211rw 2-1:1.0: zd1211b chip 2019:5303 v4810 high 00-90-cc AL2230_RF pa0 ---N-
        [  196.463814] usb 2-1: firmware: requesting zd1211/zd1211b_ub
        [  196.466823] usb 2-1: firmware: requesting zd1211/zd1211b_uphr

    Syslog output:

        Nov 5 11:20:24 somesystem kernel: [  196.303436] phy0: Selected rate control algorithm 'minstrel'
        Nov 5 11:20:24 kierkegaard NetworkManager: <info> Found radio killswitch rfkill0 (at /sys/devices/pci0000:00/0000:00:1d.7/usb2/2-1/2-1:1.0/ieee80211/phy0/rfkill0) (driver <unknown>)
        Nov 5 11:20:24 somesystem kernel: [  196.304209] zd1211rw 2-1:1.0: phy0
        Nov 5 11:20:24 somesystem kernel: [  196.304227] usbcore: registered new interface driver zd1211rw
        Nov 5 11:20:24 somesystem NetworkManager: SCPlugin-Ifupdown: devices added (path: /sys/devices/pci0000:00/0000:00:1d.7/usb2/2-1/2-1:1.0/net/wmaster0, iface: wmaster0)
        Nov 5 11:20:24 somesystem NetworkManager: SCPlugin-Ifupdown: device added (path: /sys/devices/pci0000:00/0000:00:1d.7/usb2/2-1/2-1:1.0/net/wmaster0, iface: wmaster0): no ifupdown configuration found.
        Nov 5 11:20:24 somesystem NetworkManager: SCPlugin-Ifupdown: devices added (path: /sys/devices/pci0000:00/0000:00:1d.7/usb2/2-1/2-1:1.0/net/wlan0, iface: wlan0)
        Nov 5 11:20:24 somesystem NetworkManager: SCPlugin-Ifupdown: device added (path: /sys/devices/pci0000:00/0000:00:1d.7/usb2/2-1/2-1:1.0/net/wlan0, iface: wlan0): no ifupdown configuration found.
        Nov 5 11:20:24 somesystem NetworkManager: <info> (wlan0): driver supports SSID scans (scan_capa 0x01).
        Nov 5 11:20:24 somesystem NetworkManager: <info> (wlan0): new 802.11 WiFi device (driver: 'zd1211rw')
        Nov 5 11:20:24 somesystem NetworkManager: <info> (wlan0): exported as /org/freedesktop/NetworkManager/Devices/2
        Nov 5 11:20:24 somesystem NetworkManager: <info> (wlan0): now managed
        Nov 5 11:20:24 somesystem NetworkManager: <info> (wlan0): device state change: 1 -> 2 (reason 2)
        Nov 5 11:20:24 somesystem NetworkManager: <info> (wlan0): bringing up device.
        Nov 5 11:20:24 somesystem kernel: [  196.334137] usb 2-1: firmware: requesting zd1211/zd1211b_ub
        Nov 5 11:20:24 somesystem kernel: [  196.357463] usb 2-1: firmware: requesting zd1211/zd1211b_uphr
        Nov 5 11:20:24 somesystem kernel: [  196.402643] zd1211rw 2-1:1.0: firmware version 4725
        Nov 5 11:20:24 somesystem kernel: [  196.442611] zd1211rw 2-1:1.0: zd1211b chip 2019:5303 v4810 high 00-90-cc AL2230_RF pa0 ---N-
        Nov 5 11:20:24 somesystem NetworkManager: <WARN> nm_device_hw_bring_up(): (wlan0): device not up after timeout!
        Nov 5 11:20:24 somesystem NetworkManager: <info> (wlan0): deactivating device (reason: 2).
        Nov 5 11:20:24 somesystem kernel: [  196.463814] usb 2-1: firmware: requesting zd1211/zd1211b_ub
        Nov 5 11:20:24 somesystem kernel: [  196.466823] usb 2-1: firmware: requesting zd1211/zd1211b_uphr
        Nov 5 11:20:29 somesystem wpa_supplicant[978]: Could not set interface 'wlan0' UP
        Nov 5 11:20:29 somesystem wpa_supplicant[978]: Failed to initialize driver interface
        Nov 5 11:20:29 somesystem NetworkManager: <WARN> nm_supplicant_interface_add_cb(): Unexpected supplicant error getting interface: wpa_supplicant couldn't grab this interface.

    Gnome tells me in the network menu that the device is "not ready". It appears in iwconfig but not in ifconfig. The same symptoms appear when I boot from the live CD. How can I solve this dilemma?
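    One low-risk thing worth trying first, as a sketch rather than a confirmed fix (assumptions: the failure lies in the firmware loading step for the zd1211rw driver, and zd1211-firmware is the Ubuntu package that ships the images the log requests):

        sudo apt-get install --reinstall zd1211-firmware
        sudo modprobe -r zd1211rw    # unload the driver
        sudo modprobe zd1211rw       # reload it so the firmware request runs again
        dmesg | tail                 # check whether "device not up after timeout" returns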


  • FFMPEG dropping frames while encoding JPEG sequence at color change

    - by Matt
I'm trying to put together a slide show using ImageMagick and FFmpeg. I use ImageMagick to expand a single photo into 30 fps video (ImageMagick also handles things like putting text captions on the frames along the way). When I let ffmpeg digest it into a video, it clips along nicely on the color parts of the video, but when it gets to a black-and-white section it reports

        frame= 2030 fps=102 q=32766.0 Lsize=    5203kB time=00:01:07.60 bitrate= 630.5kbits/s dup=0 drop=703

    and drops every frame of video until it hits something with color. As you can imagine, this results in entire photos being removed from the slideshow. Here is my latest dump:

        ffmpeg -y -r 30 -i "teststream/%06d.jpg" -c:v libx264 -r 30 newffmpeg.mp4

        ffmpeg version git-2012-12-10-c3bb333 Copyright (c) 2000-2012 the FFmpeg developers
          built on Dec 10 2012 22:02:04 with gcc 4.6.1 (Ubuntu/Linaro 4.6.1-9ubuntu3)
          configuration: --enable-gpl --enable-libfaac --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-librtmp --enable-libtheora --enable-libvorbis --enable-libx264 --enable-nonfree --enable-version3
          libavutil      52. 12.100 / 52. 12.100
          libavcodec     54. 79.101 / 54. 79.101
          libavformat    54. 49.100 / 54. 49.100
          libavdevice    54.  3.102 / 54.  3.102
          libavfilter     3. 26.101 /  3. 26.101
          libswscale      2.  1.103 /  2.  1.103
          libswresample   0. 17.102 /  0. 17.102
          libpostproc    52.  2.100 / 52.  2.100
        Input #0, image2, from 'teststream/%06d.jpg':
          Duration: 00:12:02.80, start: 0.000000, bitrate: N/A
            Stream #0:0: Video: mjpeg, yuvj444p, 720x480 [SAR 72:72 DAR 3:2], 25 fps, 25 tbr, 25 tbn, 25 tbc
        [libx264 @ 0x3450140] using SAR=1/1
        [libx264 @ 0x3450140] using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE4.2
        [libx264 @ 0x3450140] profile High, level 3.0
        [libx264 @ 0x3450140] 264 - core 129 r2 1cffe9f - H.264/MPEG-4 AVC codec - Copyleft 2003-2012 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
        Output #0, mp4, to 'newffmpeg.mp4':
          Metadata:
            encoder         : Lavf54.49.100
            Stream #0:0: Video: h264 ([33][0][0][0] / 0x0021), yuvj420p, 720x480 [SAR 1:1 DAR 3:2], q=-1--1, 15360 tbn, 30 tbc
        Stream mapping:
          Stream #0:0 -> #0:0 (mjpeg -> libx264)
        Press [q] to stop, [?] for help
        Input stream #0:0 frame changed from size:720x480 fmt:yuvj444p to size:720x480 fmt:yuvj422p
        Input stream #0:0 frame changed from size:720x480 fmt:yuvj422p to size:720x480 fmt:yuvj444pp=584
        frame= 2030 fps=102 q=32766.0 Lsize=    5203kB time=00:01:07.60 bitrate= 630.5kbits/s dup=0 drop=703
        video:5179kB audio:0kB subtitle:0 global headers:0kB muxing overhead 0.472425%
        [libx264 @ 0x3450140] frame I:9     Avg QP:20.10  size: 33933
        [libx264 @ 0x3450140] frame P:636   Avg QP:24.12  size:  6737
        [libx264 @ 0x3450140] frame B:1385  Avg QP:27.04  size:   514
        [libx264 @ 0x3450140] consecutive B-frames:  2.5% 15.2% 13.2% 69.2%
        [libx264 @ 0x3450140] mb I  I16..4:  8.3% 80.3% 11.5%
        [libx264 @ 0x3450140] mb P  I16..4:  1.5%  2.5%  0.2%  P16..4: 41.7% 18.0% 10.3%  0.0%  0.0%  skip:25.9%
        [libx264 @ 0x3450140] mb B  I16..4:  0.0%  0.0%  0.0%  B16..8: 26.6%  0.6%  0.1%  direct: 0.2%  skip:72.3%  L0:35.0% L1:60.3% BI: 4.7%
        [libx264 @ 0x3450140] 8x8 transform intra:64.1% inter:75.1%
        [libx264 @ 0x3450140] coded y,uvDC,uvAC intra: 51.6% 78.0% 43.7% inter: 10.6% 14.9% 2.1%
        [libx264 @ 0x3450140] i16 v,h,dc,p: 29% 19%  6% 46%
        [libx264 @ 0x3450140] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 23% 15% 17%  5%  9% 10%  7%  8%  6%
        [libx264 @ 0x3450140] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 31% 18% 11%  5%  9% 10%  6%  6%  4%
        [libx264 @ 0x3450140] i8c dc,h,v,p: 46% 18% 24% 12%
        [libx264 @ 0x3450140] Weighted P-Frames: Y:20.1% UV:18.7%
        [libx264 @ 0x3450140] ref P L0: 59.2% 23.2% 13.1%  4.3%  0.2%
        [libx264 @ 0x3450140] ref B L0: 88.7%  8.3%  3.0%
        [libx264 @ 0x3450140] ref B L1: 95.0%  5.0%
        [libx264 @ 0x3450140] kb/s:626.88
        Received signal 2: terminating.

    One last note: if I remove the -r 30 from the input and output, it works flawlessly. I have no idea why the -r 30 is causing it to freak out.
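    For what it's worth, with the image2 demuxer the usual way to declare the rate of an image sequence is -framerate rather than an input -r; a sketch of the same command using it (assumption: the frames written by ImageMagick really are meant to play at 30 fps, so no output resampling is needed and libx264 simply inherits the rate):

        ffmpeg -y -framerate 30 -i "teststream/%06d.jpg" -c:v libx264 newffmpeg.mp4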


  • Linux: Apple Wireless A1314 Fn key not registered, looks like software bug

    - by ramplank
I'm trying to set up my Apple Wireless Keyboard with my Kubuntu systems. They are PC hardware, powered by an Intel Atom and an Intel i5 respectively. The keyboard has a US layout and has model number A1314 written on the back. It takes two AA batteries; I mention that because it appears there are multiple types of model A1314. I have tried this on 10.04, 11.04, 11.10 and 12.04 systems with no success. Every time, using a Bluetooth dongle and the KDE Bluetooth notification tray applet, the keyboard can be connected. In both cases it shows up as "Apple Wireless Keyboard". Almost everything works as expected; in fact, I'm typing on it right now. But one thing doesn't: the Fn key.

    I'd like to use Fn + Down Arrow as PgDn / Page Down; I understand this is default behaviour on Apple keyboards. And of course I'd like the same for Page Up, Home and End. I'll stick to Page Down in my example. I used the xev tool to see the keycodes the system receives: if I press Fn, nothing happens and nothing is registered; if I press Fn + Down Arrow, xev only registers the down arrow. Here's the output from my 11.04 system to illustrate.

    Press just the Fn key: no output.

    Press the Down Arrow key:

        KeyPress event, serial 36, synthetic NO, window 0x4400001,
            root 0x15d, subw 0x4400002, time 2699773, (44,45), root:(1352,298),
            state 0x10, keycode 116 (keysym 0xff54, Down), same_screen YES,
            XLookupString gives 0 bytes:
            XmbLookupString gives 0 bytes:
            XFilterEvent returns: False

        KeyRelease event, serial 36, synthetic NO, window 0x4400001,
            root 0x15d, subw 0x4400002, time 2699860, (44,45), root:(1352,298),
            state 0x10, keycode 116 (keysym 0xff54, Down), same_screen YES,
            XLookupString gives 0 bytes:
            XFilterEvent returns: False

    Press the Fn + Down Arrow keys together:

        KeyPress event, serial 36, synthetic NO, window 0x4400001,
            root 0x15d, subw 0x4400002, time 2701548, (44,45), root:(1352,298),
            state 0x10, keycode 116 (keysym 0xff54, Down), same_screen YES,
            XLookupString gives 0 bytes:
            XmbLookupString gives 0 bytes:
            XFilterEvent returns: False

        KeyRelease event, serial 36, synthetic NO, window 0x4400001,
            root 0x15d, subw 0x4400002, time 2701623, (44,45), root:(1352,298),
            state 0x10, keycode 116 (keysym 0xff54, Down), same_screen YES,
            XLookupString gives 0 bytes:
            XFilterEvent returns: False

    I've been searching this forum and other Linux-related forums for hours, but I still have not found a solution. I mostly found advice for an actual Apple laptop or desktop, but I don't have that. They said to try something like the following:

        echo 2 > /sys/module/hid_apple/ ...

    But since there's no hid_apple directory present on my systems, I needed to modprobe hid_apple first. That didn't help either. I'm cool with changing some config files, or compiling my own patched kernel if that's necessary. I currently have a 10.04 and a 12.04 system available to test.

    The same issue occurs when hooked up to Windows 7: the Fn key still does nothing, by itself or in combination with other keys. With some AutoHotkey fiddling I was able to confirm the key is registered as pressed but ignored by default; a custom AutoHotkey script can fix that. But AutoHotkey is only for Windows, and I want my problem fixed on Linux. Hooked up to an iPad 2, it only works in combination with the F1-F12 keys, not with the arrow keys. If the screen of the iPad is off and I press just the Fn key, the screen comes on, so the key itself is registered as pressed.

    So to sum up my question: can anyone help me get Page Up, Page Down, Home and End to work on this keyboard, when that requires me to use an Fn key which is currently not registered?
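    For readers with the same symptom: the knob usually cited for Apple keyboards is the hid_apple module's fnmode parameter (2 makes Fn act as a modifier). Whether it applies to this particular A1314 over Bluetooth is an assumption, since, as noted above, the module may not even bind to the device:

        # hypothetical fix, only meaningful if hid_apple actually drives the keyboard
        echo 2 | sudo tee /sys/module/hid_apple/parameters/fnmode
        # make it persistent across reboots:
        echo "options hid_apple fnmode=2" | sudo tee /etc/modprobe.d/hid_apple.conf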


  • Reconstructing the disk order in RAID 6 with 7 disks

    - by rkotulla
A little background to this question first: I am running a RAID-6 within a QNAP TS869L external RAID/NAS system. I started with 5 disks of 3 TB each back in the day, and later added another 2 disks of 3 TB to the RAID. The QNAP internals handled the growing and re-syncing etc., and everything seemed to be perfectly fine.

    About 2 weeks ago, I had one of the disks fail (disk #5; disk #2 had gone bad in the meantime), and somehow (I have no idea why) disks 1 and 2 also got kicked out of the array. I replaced disk #5, but the RAID didn't start working again. After some calls to QNAP technical support, they re-created the array (using mdadm --create --force --assume-clean ...), but the resulting array couldn't find a filesystem, and I was kindly referred to a data recovery company that I can't afford. After some digging through old log files, resetting the disk to factory default, etc., I found a few errors that were made during this re-create. I wish I still had some of the original metadata, but unfortunately I don't (I definitely learned that lesson).

    I'm currently at the point where I know the correct chunk size (64K) and metadata version (1.0; the factory default was 0.9, but from what I read 0.9 doesn't handle disks over 2 TB, and mine are 3 TB), and I can now find the ext4 filesystem that should be on the disks. The only variable left to determine is the right disk order!

    I started using the description found in answer #4 of "Recover RAID 5 data after created new array instead of re-using", but am a little confused on what the order should be for a proper RAID-6. RAID-5 is pretty well documented in a number of places, but RAID-6 much less so. Also, does the layout, i.e. the distribution of parity and data chunks across the disks, change after growing the array from 5 to 7 disks, or does the re-sync re-organize them the way a native 7-disk RAID-6 would have been laid out?

    Thanks! Some more mdadm output that might be helpful.

    mdadm version:

        [~] # mdadm --version
        mdadm - v2.6.3 - 20th August 2007

    mdadm details from one of the disks in the array:

        [~] # mdadm --examine /dev/sda3
        /dev/sda3:
                  Magic : a92b4efc
                Version : 1.0
            Feature Map : 0x0
             Array UUID : 1c1614a5:e3be2fbb:4af01271:947fe3aa
                   Name : 0
          Creation Time : Tue Jun 10 10:27:58 2014
             Raid Level : raid6
           Raid Devices : 7
          Used Dev Size : 5857395112 (2793.02 GiB 2998.99 GB)
             Array Size : 29286975360 (13965.12 GiB 14994.93 GB)
              Used Size : 5857395072 (2793.02 GiB 2998.99 GB)
           Super Offset : 5857395368 sectors
                  State : clean
            Device UUID : 7c572d8f:20c12727:7e88c888:c2c357af
            Update Time : Tue Jun 10 13:01:06 2014
               Checksum : d275c82d - correct
                 Events : 7036
             Chunk Size : 64K
             Array Slot : 0 (0, 1, failed, 3, failed, 5, 6)
            Array State : Uu_u_uu 2 failed

    mdadm details for the array in the current disk order (my best guess, reconstructed from old log files):

        [~] # mdadm --detail /dev/md0
        /dev/md0:
                Version : 01.00.03
          Creation Time : Tue Jun 10 10:27:58 2014
             Raid Level : raid6
             Array Size : 14643487680 (13965.12 GiB 14994.93 GB)
          Used Dev Size : 2928697536 (2793.02 GiB 2998.99 GB)
           Raid Devices : 7
          Total Devices : 5
        Preferred Minor : 0
            Persistence : Superblock is persistent
            Update Time : Tue Jun 10 13:01:06 2014
                  State : clean, degraded
         Active Devices : 5
        Working Devices : 5
         Failed Devices : 0
          Spare Devices : 0
             Chunk Size : 64K
                   Name : 0
                   UUID : 1c1614a5:e3be2fbb:4af01271:947fe3aa
                 Events : 7036

            Number   Major   Minor   RaidDevice State
               0       8        3        0      active sync   /dev/sda3
               1       8       19        1      active sync   /dev/sdb3
               2       0        0        2      removed
               3       8       51        3      active sync   /dev/sdd3
               4       0        0        4      removed
               5       8       99        5      active sync   /dev/sdg3
               6       8       83        6      active sync   /dev/sdf3

    Output from /proc/mdstat (md8, md9, and md13 are internally used RAIDs holding swap etc.; the one I'm after is md0):

        [~] # more /proc/mdstat
        Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
        md0 : active raid6 sdf3[6] sdg3[5] sdd3[3] sdb3[1] sda3[0]
              14643487680 blocks super 1.0 level 6, 64k chunk, algorithm 2 [7/5] [UU_U_UU]

        md8 : active raid1 sdg2[2](S) sdf2[3](S) sdd2[4](S) sdc2[5](S) sdb2[6](S) sda2[1] sde2[0]
              530048 blocks [2/2] [UU]

        md13 : active raid1 sdg4[3] sdf4[4] sde4[5] sdd4[6] sdc4[2] sdb4[1] sda4[0]
              458880 blocks [8/7] [UUUUUUU_]
              bitmap: 21/57 pages [84KB], 4KB chunk

        md9 : active raid1 sdg1[6] sdf1[5] sde1[4] sdd1[3] sdc1[2] sda1[0] sdb1[1]
              530048 blocks [8/7] [UUUUUUU_]
              bitmap: 37/65 pages [148KB], 4KB chunk

        unused devices: <none>
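    Not from the question, but a sketch of how one candidate order can be tested (assumptions: metadata 1.0 and 64K chunks as determined above, and that rewriting the superblocks is acceptable because the original metadata is already lost; keep the two dead slots as "missing" so no parity is ever rewritten, and only run read-only checks):

        candidate="/dev/sda3 /dev/sdb3 missing /dev/sdd3 missing /dev/sdg3 /dev/sdf3"
        mdadm --stop /dev/md0
        mdadm --create /dev/md0 --assume-clean --level=6 --raid-devices=7 \
              --metadata=1.0 --chunk=64 $candidate
        fsck.ext4 -n /dev/md0    # -n = read-only check; a clean result suggests the right order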


  • Make exact mp4 (H264) format for uploading to YouTube

    - by WHITECOLOR
With ffmpeg I'm converting a video from an mp3 and a picture to upload it to YouTube. After upload, the conversion fails; the reasons are unknown. I believe the problem is in the format. By the way, if I upload a file 5 minutes long, it fails; if I upload 30 seconds of the same file, it succeeds.

    I have downloaded an mp4 file from YouTube and then uploaded it again; that is processed very fast. So a nice solution would be to convert my videos to the same format that Google produces. I got the following output from ffmpeg for the downloaded file:

        ffmpeg version N-44264-g070b0e1 Copyright (c) 2000-2012 the FFmpeg developers
          built on Sep 7 2012 17:38:57 with gcc 4.7.1 (GCC)
          configuration: --enable-gpl --enable-version3 --disable-pthreads --enable-runtime-cpudetect --enable-avisynth --enable-bzlib --enable-frei0r --enable-libass --enable-libcelt --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-libnut --enable-libopenjpeg --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libutvideo --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib
          libavutil      51. 72.100 / 51. 72.100
          libavcodec     54. 55.100 / 54. 55.100
          libavformat    54. 25.105 / 54. 25.105
          libavdevice    54.  2.100 / 54.  2.100
          libavfilter     3. 16.100 /  3. 16.100
          libswscale      2.  1.101 /  2.  1.101
          libswresample   0. 15.100 /  0. 15.100
          libpostproc    52.  0.100 / 52.  0.100
        Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'youtubetrack0.mp4':
          Metadata:
            major_brand     : mp42
            minor_version   : 0
            compatible_brands: isommp42
            creation_time   : 2012-10-02 22:58:57
          Duration: 00:06:46.66, start: 0.000000, bitrate: 176 kb/s
            Stream #0:0(und): Video: h264 (Constrained Baseline) (avc1 / 0x31637661), yuv420p, 450x360, 78 kb/s, 6 fps, 6 tbr, 12 tbn, 12 tbc
            Metadata:
              creation_time   : 1970-01-01 00:00:00
              handler_name    : VideoHandler
            Stream #0:1(und): Audio: aac (mp4a / 0x6134706D), 44100 Hz, stereo, s16, 95 kb/s
            Metadata:
              creation_time   : 2012-10-02 22:58:57
              handler_name    : IsoMedia File Produced by Google, 5-11-2011

    Is it possible to construct ffmpeg parameters that would give the same format Google produces internally? Is the information above sufficient? I couldn't construct the needed params; for example, I don't understand how to set tbn, or what "95 kb/s" means in "Stream #0:1(und): Audio:". Now I just do:

        ffmpeg -i videoimage.jpg -i audio.mp3 video.mp4

    Info I've got:

        ffmpeg version N-44998-gdf82454 Copyright (c) 2000-2012 the FFmpeg developers
          built on Oct 2 2012 23:03:12 with gcc 4.7.1 (GCC)
          configuration: --disable-static --enable-shared --enable-gpl --enable-version3 --disable-pthreads --enable-runtime-cpudetect --enable-avisynth --enable-bzlib --enable-frei0r --enable-libass --enable-libcelt --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-libnut --enable-libopenjpeg --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libutvideo --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib
          libavutil      51. 73.101 / 51. 73.101
          libavcodec     54. 63.100 / 54. 63.100
          libavformat    54. 29.105 / 54. 29.105
          libavdevice    54.  3.100 / 54.  3.100
          libavfilter     3. 19.102 /  3. 19.102
          libswscale      2.  1.101 /  2.  1.101
          libswresample   0. 16.100 /  0. 16.100
          libpostproc    52.  1.100 / 52.  1.100
        Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'video.mp4':
          Metadata:
            major_brand     : isom
            minor_version   : 512
            compatible_brands: isomiso2avc1mp41
            encoder         : Lavf54.25.105
          Duration: 00:06:46.81, start: 0.000000, bitrate: 129 kb/s
            Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuvj420p, 450x360, 3392 kb/s, 25 fps, 25 tbr, 25 tbn, 50 tbc
            Metadata:
              handler_name    : VideoHandler
            Stream #0:1(und): Audio: aac (mp4a / 0x6134706D), 44100 Hz, stereo, s16, 127 kb/s
            Metadata:
              handler_name    : SoundHandler

    This video fails the conversion on YouTube. I also tried other vcodec params and extensions of the output file (mp4, wmv, avi), but those failed too. Would be grateful for help.
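    A sketch of an invocation closer to what players and YouTube's ingest generally expect (assumptions: yuv420p is the pixel format to force, -loop 1 keeps the single image on screen, -shortest ends the video with the audio, and with a 2012-era build the native AAC encoder needs -strict experimental):

        ffmpeg -loop 1 -i videoimage.jpg -i audio.mp3 \
               -c:v libx264 -pix_fmt yuv420p -r 30 \
               -c:a aac -strict experimental -b:a 128k \
               -shortest video.mp4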


  • nconf nagios config no services defined

    - by user1508056
I've set up Nagios Core on OS X 10.7 Server via MacPorts fine. It seems to load fine, and the sample config files all copied over to /opt/local/etc/nagios/objects/ fine and are specified correctly in the nagios.cfg file. I then installed nconf manually and got it running without much fight. Then I clicked on "Generate Nagios config" in nconf and got 1 warning and 4 errors. When I expand the error box, here's what I see:

        Nagios Core 3.5.0
        Copyright (c) 2009-2011 Nagios Core Development Team and Community Contributors
        Copyright (c) 1999-2009 Ethan Galstad
        Last Modified: 03-15-2013
        License: GPL

        Website: http://www.nagios.org
        Reading configuration data...
           Read main config file okay...
           Read object config files okay...

        Running pre-flight check on configuration data...

        Checking services...
        Error: There are no services defined!
            Checked 0 services.
        Checking hosts...
        Error: There are no hosts defined!
            Checked 0 hosts.
        Checking host groups...
            Checked 0 host groups.
        Checking service groups...
            Checked 0 service groups.
        Checking contacts...
        Error: There are no contacts defined!
            Checked 0 contacts.
        Checking contact groups...
            Checked 0 contact groups.
        Checking service escalations...
            Checked 0 service escalations.
        Checking service dependencies...
            Checked 0 service dependencies.
        Checking host escalations...
            Checked 0 host escalations.
        Checking host dependencies...
            Checked 0 host dependencies.
        Checking commands...
            Checked 0 commands.
        Checking time periods...
            Checked 0 time periods.
        Checking for circular paths between hosts...
        Checking for circular host and service dependencies...
        Checking global event handlers...
        Checking obsessive compulsive processor commands...
        Checking misc settings...
        Warning: Nothing specified for illegal_macro_output_chars variable!

        Total Warnings: 1
        Total Errors:   3

    I've tried several different things (played with cache settings, changed file permissions/ownership, edited some config files manually, etc.) but nothing gets me past this step. The thing is, when I run 'sudo nagios -v /opt/local/etc/nagios/nagios.cfg' the output shows it is reading a number of services, a localhost, and a contact in the .cfg files... so I'm pretty confident those are OK and the problem is that nconf isn't reading the correct .cfg files, or something like that. Any ideas what to double-check? I did lots of googling and found nothing on this specific issue, so either I'm special (I'm not) or I'm overlooking something really simple.

    The path to the nagios binary is listed as /opt/local/bin/nagios, if that matters. Also, all the nagios files are owned by nagios:nagios, whereas the nconf files are owned by user, with only the directories/files specified in the nconf docs belonging to the _www user and/or group (things like output, temp, config, etc.). Thanks.
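    One thing worth double-checking, as a sketch (the two directory names are nconf's defaults; the deploy location is an assumption for this install): nconf generates its files into global/ and Default_collector/ directories, and nagios.cfg has to point at the deployed copies rather than at the sample objects/ files it ships with:

        # in /opt/local/etc/nagios/nagios.cfg (hypothetical deploy location)
        cfg_dir=/opt/local/etc/nagios/global
        cfg_dir=/opt/local/etc/nagios/Default_collector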


  • Can't get IPv6 address using radvd on wireless interface on Windows, wired works fine.

    - by AndrejaKo
I set up my router with OpenWRT and configured IPv6 using a tunnel from SixXS. I'm having problems with stateless autoconfiguration using radvd. On my computer, the wired connection gets its IPv6 address fine, but wireless can't. After spending some time on the OpenWRT forums, I'm pretty much sure now that the router is set up fine and that the problem is with my Windows settings. Also, I have no problems getting an IPv6 address on openSUSE 11.3. So what should I do to solve this, and what information should I post?

    Here's the radvdump output for the wired interface:

        interface br-lan
        {
            AdvSendAdvert on;
            # Note: {Min,Max}RtrAdvInterval cannot be obtained with radvdump
            AdvManagedFlag on;
            AdvOtherConfigFlag on;
            AdvReachableTime 0;
            AdvRetransTimer 0;
            AdvCurHopLimit 64;
            AdvDefaultLifetime 1800;
            AdvHomeAgentFlag off;
            AdvDefaultPreference medium;
            AdvSourceLLAddress on;

            prefix 2001:15c0:67d0::/64
            {
                AdvValidLifetime 86400;
                AdvPreferredLifetime 14400;
                AdvOnLink on;
                AdvAutonomous on;
                AdvRouterAddr off;
            }; # End of prefix definition

        }; # End of interface definition

    Here's the radvdump output for the wireless interface (the identical block is printed twice, once for each received advertisement):

        #
        # radvd configuration generated by radvdump 1.6
        # based on Router Advertisement from fe80::a0b7:deff:fef0:5b34
        # received by interface br-lan
        #

        interface br-lan
        {
            AdvSendAdvert on;
            # Note: {Min,Max}RtrAdvInterval cannot be obtained with radvdump
            AdvManagedFlag on;
            AdvOtherConfigFlag on;
            AdvReachableTime 0;
            AdvRetransTimer 0;
            AdvCurHopLimit 64;
            AdvDefaultLifetime 1800;
            AdvHomeAgentFlag off;
            AdvDefaultPreference medium;
            AdvSourceLLAddress on;

            prefix 2001:15c0:67d0::/64
            {
                AdvValidLifetime 86400;
                AdvPreferredLifetime 14400;
                AdvOnLink on;
                AdvAutonomous on;
                AdvRouterAddr off;
            }; # End of prefix definition

        }; # End of interface definition
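    On the Windows side, a minimal sketch of what to check with netsh (the interface name is an assumption; substitute whatever `netsh interface ipv6 show interfaces` reports for the wireless adapter):

        netsh interface ipv6 show interfaces
        rem router discovery must be enabled for radvd's advertisements to be honored:
        netsh interface ipv6 set interface "Wireless Network Connection" routerdiscovery=enabled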


  • MySQL reserves too much RAM

    - by Buddy
I have a cheap VPS with 128MB RAM and 256MB burst. MySQL starts and reserves about 110MB, but uses no more than 20MB of it. My VPS control panel shows that I'm using 127MB (I'm also running nginx and Sphinx). I know it shows reserved RAM, but when I go over 128MB my VPS reboots automatically every 4 hours. So I want to force MySQL to reserve less RAM. How can I do that? I made some tweaks to my.cnf, but they didn't help much.

    top output:

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
            1 root      15   0  2156  668  572 S  0.0  0.3   0:00.03 init
        11311 root      15   0 11212  356  228 S  0.0  0.1   0:00.00 vzctl
        11312 root      18   0  3712 1484 1248 S  0.0  0.6   0:00.01 bash
        11347 root      18   0  2284  916  732 R  0.0  0.3   0:00.00 top
        13978 root      17  -4  2248  552  344 S  0.0  0.2   0:00.00 udevd
        14262 root      15   0  1812  564  472 S  0.0  0.2   0:00.03 syslogd
        14293 sphinx    15   0 11816 1172  672 S  0.0  0.4   0:00.07 searchd
        14305 root      25   0  7192 1036  636 S  0.0  0.4   0:00.00 sshd
        14321 root      25   0  2832  836  668 S  0.0  0.3   0:00.00 xinetd
        15389 root      18   0  3708 1300 1132 S  0.0  0.5   0:00.00 mysqld_safe
        15441 mysql     15   0  113m  16m 4440 S  0.0  6.4   0:00.15 mysqld
        15489 root      21   0 13056 1456  340 S  0.0  0.6   0:00.00 nginx
        15490 nginx     18   0 13328 2388  992 S  0.0  0.9   0:00.06 nginx
        15507 nginx     25   0 19520 5888 4244 S  0.0  2.2   0:00.00 php-cgi
        15508 nginx     18   0 19636 4876 2748 S  0.0  1.9   0:00.12 php-cgi
        15509 nginx     15   0 19668 4872 2716 S  0.0  1.9   0:00.11 php-cgi
        15518 root      18   0  4492 1116  568 S  0.0  0.4   0:00.01 crond

    MySQLTuner output:

        >>  MySQLTuner 1.0.1 - Major Hayden <[email protected]>
        >>  Bug reports, feature requests, and downloads at http://mysqltuner.com/
        >>  Run with '--help' for additional options and output filtering
        Please enter your MySQL administrative login: root
        Please enter your MySQL administrative password:

        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.0.77
        [OK] Operating on 32-bit architecture with less than 2GB RAM

        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in InnoDB tables: 1M (Tables: 1)
        [OK] Total fragmented tables: 0

        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 38m 43s (37 q [0.016 qps], 20 conn, TX: 4M, RX: 3K)
        [--] Reads / Writes: 100% / 0%
        [--] Total buffers: 28.1M global + 832.0K per thread (100 max threads)
        [OK] Maximum possible memory usage: 109.4M (42% of installed RAM)
        [OK] Slow queries: 0% (0/37)
        [OK] Highest usage of available connections: 1% (1/100)
        [OK] Key buffer size / total MyISAM indexes: 128.0K/64.0K
        [OK] Query cache efficiency: 42.1% (8 cached / 19 selects)
        [OK] Query cache prunes per day: 0
        [!!] Temporary tables created on disk: 27% (3 on disk / 11 total)
        [!!] Thread cache is disabled
        [OK] Table cache hit rate: 57% (8 open / 14 opened)
        [OK] Open file limit used: 1% (12/1K)
        [OK] Table locks acquired immediately: 100% (22 immediate / 22 locks)
        [!!] Connections aborted: 10%
        [OK] InnoDB data size / buffer pool: 1.5M/8.0M

        -------- Recommendations -----------------------------------------------------
        General recommendations:
            MySQL started within last 24 hours - recommendations may be inaccurate
            Enable the slow query log to troubleshoot bad queries
            When making adjustments, make tmp_table_size/max_heap_table_size equal
            Reduce your SELECT DISTINCT queries without LIMIT clauses
            Set thread_cache_size to 4 as a starting value
            Your applications are not closing MySQL connections properly
        Variables to adjust:
            tmp_table_size (> 32M)
            max_heap_table_size (> 16M)
            thread_cache_size (start at 4)

    I think if I do what MySQLTuner says, MySQL will use more RAM.
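    Going by the tuner output, the ~110MB ceiling is mostly 28.1M of global buffers plus 832K for each of 100 possible threads, so capping max_connections is the biggest lever. A minimal my.cnf sketch (the values are illustrative assumptions for a 128MB VPS, not tested recommendations):

        [mysqld]
        max_connections         = 20      # 832K per thread adds up fast
        thread_cache_size       = 4       # what MySQLTuner suggests
        innodb_buffer_pool_size = 8M
        key_buffer_size         = 128K    # total MyISAM indexes are only 64K here
        query_cache_size        = 8M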


  • linux raid 1: right after replacing and syncing one drive, the other disk fails - understanding what is going on with mdstat/mdadm

    - by devicerandom
We have an old RAID 1 Linux server (Ubuntu Lucid 10.04) with four partitions. A few days ago /dev/sdb failed, and today we noticed /dev/sda had ominous pre-failure SMART signs (~4000 reallocated sector count). We replaced /dev/sdb this morning and rebuilt the RAID on the new drive, following this guide: http://www.howtoforge.com/replacing_hard_disks_in_a_raid1_array

    Everything went smoothly until the very end. When it looked like it was finishing synchronizing the last partition, the other old drive failed. At this point I am very unsure of the state of the system. Everything seems to work and the files all seem accessible, just as if it synchronized everything, but I'm new to RAID and I'm worried about what is going on.

    The /proc/mdstat output is:

        Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
        md3 : active raid1 sdb4[2](S) sda4[0]
              478713792 blocks [2/1] [U_]

        md2 : active raid1 sdb3[1] sda3[2](F)
              244140992 blocks [2/1] [_U]

        md1 : active raid1 sdb2[1] sda2[2](F)
              244140992 blocks [2/1] [_U]

        md0 : active raid1 sdb1[1] sda1[2](F)
              9764800 blocks [2/1] [_U]

        unused devices: <none>

    The order of [_U] vs [U_]: why isn't it consistent across all the arrays? Is the first U /dev/sda or /dev/sdb? (I tried looking on the web for this trivial information, but I found no explicit indication.) If I read correctly, for md0, [_U] should be /dev/sda1 (down) and /dev/sdb1 (up). But if /dev/sda has failed, how can it be the opposite for md3? I understand /dev/sdb4 is now a spare because it probably failed to synchronize 100%, but why does it show /dev/sda4 as up? Shouldn't it be [__]? Or [_U] anyway? The /dev/sda drive apparently cannot even be accessed by SMART anymore, so I wouldn't expect it to be up. What is wrong with my interpretation of the output?

    I attach also the outputs of mdadm --detail for the four partitions:

        /dev/md0:
                Version : 00.90
          Creation Time : Fri Jan 21 18:43:07 2011
             Raid Level : raid1
             Array Size : 9764800 (9.31 GiB 10.00 GB)
          Used Dev Size : 9764800 (9.31 GiB 10.00 GB)
           Raid Devices : 2
          Total Devices : 2
        Preferred Minor : 0
            Persistence : Superblock is persistent
            Update Time : Tue Nov 5 17:27:33 2013
                  State : clean, degraded
         Active Devices : 1
        Working Devices : 1
         Failed Devices : 1
          Spare Devices : 0
                   UUID : a3b4dbbd:859bf7f2:bde36644:fcef85e2
                 Events : 0.7704

            Number   Major   Minor   RaidDevice State
               0       0        0        0      removed
               1       8       17       1       active sync   /dev/sdb1
               2       8        1       -       faulty spare  /dev/sda1

        /dev/md1:
                Version : 00.90
          Creation Time : Fri Jan 21 18:43:15 2011
             Raid Level : raid1
             Array Size : 244140992 (232.83 GiB 250.00 GB)
          Used Dev Size : 244140992 (232.83 GiB 250.00 GB)
           Raid Devices : 2
          Total Devices : 2
        Preferred Minor : 1
            Persistence : Superblock is persistent
            Update Time : Tue Nov 5 17:39:06 2013
                  State : clean, degraded
         Active Devices : 1
        Working Devices : 1
         Failed Devices : 1
          Spare Devices : 0
                   UUID : 8bcd5765:90dc93d5:cc70849c:224ced45
                 Events : 0.1508280

            Number   Major   Minor   RaidDevice State
               0       0        0        0      removed
               1       8       18       1       active sync   /dev/sdb2
               2       8        2       -       faulty spare  /dev/sda2

        /dev/md2:
                Version : 00.90
          Creation Time : Fri Jan 21 18:43:19 2011
             Raid Level : raid1
             Array Size : 244140992 (232.83 GiB 250.00 GB)
          Used Dev Size : 244140992 (232.83 GiB 250.00 GB)
           Raid Devices : 2
          Total Devices : 2
        Preferred Minor : 2
            Persistence : Superblock is persistent
            Update Time : Tue Nov 5 17:46:44 2013
                  State : clean, degraded
         Active Devices : 1
        Working Devices : 1
         Failed Devices : 1
          Spare Devices : 0
                   UUID : 2885668b:881cafed:b8275ae8:16bc7171
                 Events : 0.2289636

            Number   Major   Minor   RaidDevice State
               0       0        0        0      removed
               1       8       19       1       active sync   /dev/sdb3
               2       8        3       -       faulty spare  /dev/sda3

        /dev/md3:
                Version : 00.90
          Creation Time : Fri Jan 21 18:43:22 2011
             Raid Level : raid1
             Array Size : 478713792 (456.54 GiB 490.20 GB)
          Used Dev Size : 478713792 (456.54 GiB 490.20 GB)
           Raid Devices : 2
          Total Devices : 2
        Preferred Minor : 3
            Persistence : Superblock is persistent
            Update Time : Tue Nov 5 17:19:20 2013
                  State : clean, degraded
         Active Devices : 1
        Working Devices : 2
         Failed Devices : 0
          Spare Devices : 1

            Number   Major   Minor   RaidDevice State
               0       8        4        0      active sync   /dev/sda4
               1       0        0        1      removed
               2       8       20       -       spare         /dev/sdb4

    The active sync on /dev/sda4 baffles me. I am worried because if tomorrow morning I have to replace /dev/sda, I want to be sure about what I should sync with what, and what is going on. I am also quite baffled by the fact that /dev/sda decided to fail exactly when the RAID finished resyncing. I'd like to understand what is really happening. Thanks a lot for your patience and help. Massimo
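    To answer the "which U is which" worry empirically: the position inside [U_] follows the RaidDevice column of mdadm --detail, left to right. A quick sketch to print that mapping for every array:

        for md in /dev/md0 /dev/md1 /dev/md2 /dev/md3; do
            echo "== $md =="
            mdadm --detail "$md" | awk '/RaidDevice State/{p=1} p'   # slot table, in [U_] order
        done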


  • Is there a bug with Apache 2.2 and content filters (and maybe mod_proxy)?

    - by asciiphil
I'm running Apache 2.2.15-29 on RHEL 6 (actually Scientific Linux 6.4) and I'm trying to set up a reverse proxy with content rewriting so all of the links on the proxied web pages are rewritten to reference the proxy host. I'm running into a problem with some of the content rewriting and I'd like to know if this is a bug or if I'm doing something wrong (and how to do it right, if applicable).

    I'm proxying a subdirectory on an internal host (internal.example.com/foo) onto the root of an external host (external.example.com). I need to rewrite HTML, CSS, and Javascript content to fix all of the URLs. I'm also hosting some content locally on the external host, which I don't think is a problem but I'm mentioning here for completeness. My httpd.conf looks roughly like this:

        <VirtualHost *:80>
            ServerName external.example.com
            ServerAlias example.com

            # Serve all local content directly, reverse-proxy all unknown URIs.
            RewriteEngine On
            RewriteRule ^(/(index.html?)?)?$ http://internal.example.com/foo/ [P]
            RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} -f [OR]
            RewriteCond %{DOCUMENT_ROOT}%{REQUEST_FILENAME} -d
            RewriteRule ^.*$ - [L]
            RewriteRule ^/~ - [L]
            RewriteRule ^(.*)$ http://internal.example.com$1 [P]

            # Standard header rewriting.
            ProxyPassReverse / http://internal.example.com/foo/
            ProxyPassReverseCookieDomain internal.example.com external.example.com
            ProxyPassReverseCookiePath /foo/ /

            # Strip any Accept-Encoding: headers from the client so we can process
            # the pages as plain text.
            RequestHeader unset Accept-Encoding

            # Use mod_proxy_html to fix URLs in text/html content.
            ProxyHTMLEnable On
            ProxyHTMLURLMap http://internal.example.com/foo/ /
            ProxyHTMLURLMap http://internal.example.com/foo /
            ProxyHTMLURLMap /foo/ /

            ## Use mod_substitute to fix URLs in CSS and Javascript
            #<Location />
            #    AddOutputFilterByType SUBSTITUTE text/css
            #    AddOutputFilterByType SUBSTITUTE text/javascript
            #    Substitute "s|http://internal.example.com/foo/|/|nq"
            #</Location>

            # Use mod_ext_filter to fix URLs in CSS and Javascript
            ExtFilterDefine fixurlcss mode=output intype=text/css cmd="/bin/sed -rf /etc/httpd/fixurls"
            ExtFilterDefine fixurljs mode=output intype=text/javascript cmd="/bin/sed -rf /etc/httpd/fixurls"
            <Location />
                SetOutputFilter fixurlcss;fixurljs
            </Location>
        </VirtualHost>

    The text/html rewriting works just fine. When I use either mod_substitute or mod_ext_filter, the external server sends the pages as Transfer-Encoding: chunked, sends all of the data, and then closes the connection without sending the final, zero-length chunk. Some HTTP clients are unhappy with this. (Chrome won't process any content sent in this way, for example, so the pages don't get CSS applied to them.) Here's a sample wget session:

        $ wget -O /dev/null -S http://external.example.com/include/jquery.js
        --2013-11-01 11:36:36--  http://external.example.com/include/jquery.js
        Resolving external.example.com (external.example.com)... 192.168.0.1
        Connecting to external.example.com (external.example.com)|192.168.0.1|:80... connected.
        HTTP request sent, awaiting response...
          HTTP/1.1 200 OK
          Date: Fri, 01 Nov 2013 15:36:36 GMT
          Server: Apache
          Last-Modified: Tue, 29 Oct 2013 13:09:10 GMT
          ETag: "1d60026-187b8-4e9e0ec273e35"
          Accept-Ranges: bytes
          Vary: Accept-Encoding
          X-UA-Compatible: IE=edge,chrome=1
          Content-Type: text/javascript;charset=utf-8
          Connection: close
          Transfer-Encoding: chunked
        Length: unspecified [text/javascript]
        Saving to: `/dev/null'

            [                 <=>    ] 100,280     --.-K/s   in 0.005s

        2013-11-01 11:36:37 (19.8 MB/s) - Read error at byte 100280 (Success).Retrying.

        --2013-11-01 11:36:38--  (try: 2)  http://external.example.com/include/jquery.js
        Connecting to external.example.com (external.example.com)|192.168.0.1|:80... connected.
        HTTP request sent, awaiting response...
          HTTP/1.1 416 Requested Range Not Satisfiable
          Date: Fri, 01 Nov 2013 15:36:38 GMT
          Server: Apache
          Vary: Accept-Encoding
          Content-Type: text/html;charset=utf-8
          Content-Length: 260
          Connection: close

        The file is already fully retrieved; nothing to do.

    Am I doing something wrong? Am I hitting some sort of Apache bug? What do I need to do to get it working? (Note that I'd prefer solutions that work within RHEL-6-packaged RPMs, and upgrading to Apache 2.4 would be a last resort, as we have a lot of infrastructure built around 2.2 on this system at the moment.)
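    For completeness, the question never shows /etc/httpd/fixurls; a hypothetical reconstruction consistent with the ProxyHTMLURLMap rules above might look like this (sed -r syntax, matching the -rf flags in the config):

        # /etc/httpd/fixurls (hypothetical, not from the question)
        s|http://internal\.example\.com/foo/|/|g
        s|http://internal\.example\.com/foo|/|g
        s|/foo/|/|g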


  • Why am I unable to telnet to a local port that has a listening service?

    - by Skip Huffman
I suspect this is either a very simple question or a very complex one. I have a headless server running Ubuntu 10.04 that I can ssh into, and I have full root access to the system. I am trying to set up an ssh tunnel to allow me to VNC to the system (but that isn't my question). I have VNC running on port 5903; here is the netstat output for that:

        Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
        tcp        0      0 0.0.0.0:5903            0.0.0.0:*               LISTEN      7173/Xtightvnc
        tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      465/sshd

    But when I try to telnet to that port, from within the same system and login, I get unable-to-connect errors:

        # telnet localhost 5903
        Trying ::1...
        Trying 127.0.0.1...
        telnet: Unable to connect to remote host: Connection timed out

    I am able to telnet to port 22 (as a verification):

        ~# telnet localhost 22
        Trying ::1...
        Connected to localhost.
        Escape character is '^]'.
        SSH-2.0-OpenSSH_5.3p1 Debian-3ubuntu7

    I have tried to open up any possible ports using ufw (probably in clumsy fashion):

        # ufw status numbered
        Status: active

             To                         Action      From
             --                         ------      ----
        [ 1] 5903                       ALLOW IN    Anywhere
        [ 2] 22                         ALLOW IN    Anywhere

    What else might be blocking this connection locally? Thank you.

    Edit: The only reference to port 5903 in iptables -L -n is this:

        Chain ufw-user-input (1 references)
        target     prot opt source               destination
        ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:5903
        ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0           udp dpt:5903
        ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:22
        ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0           udp dpt:22
        ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:8080
        ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0           udp dpt:8080

    I can post the whole output if that will be useful. hosts.allow and hosts.deny both contain only comments.

    Re-edit: Some other questions pointed me to nmap, so I ran a port scan through that utility:

        # nmap -v -sT localhost -p1-65535

        Starting Nmap 5.00 ( http://nmap.org ) at 2011-11-09 09:58 PST
        NSE: Loaded 0 scripts for scanning.
        Warning: Hostname localhost resolves to 2 IPs. Using 127.0.0.1.
        Initiating Connect Scan at 09:58
        Scanning localhost (127.0.0.1) [65535 ports]
        Discovered open port 22/tcp on 127.0.0.1
        Connect Scan Timing: About 18.56% done; ETC: 10:01 (0:02:16 remaining)
        Connect Scan Timing: About 44.35% done; ETC: 10:00 (0:01:17 remaining)
        Completed Connect Scan at 10:00, 112.36s elapsed (65535 total ports)
        Host localhost (127.0.0.1) is up (0.00s latency).
        Interesting ports on localhost (127.0.0.1):
        Not shown: 65533 filtered ports
        PORT   STATE  SERVICE
        22/tcp open   ssh
        80/tcp closed http

        Read data files from: /usr/share/nmap
        Nmap done: 1 IP address (1 host up) scanned in 112.43 seconds
                   Raw packets sent: 0 (0B) | Rcvd: 0 (0B)

    I think this shows that 5903 is blocked somehow, which I pretty much knew. The question remains what is blocking it and how to change that.

    Re-re-edit: To check Paul Lathrop's suggested answer, I first verified my IP address with ifconfig:

        eth0      Link encap:Ethernet  HWaddr 02:16:3e:42:28:8f
                  inet addr:10.0.10.3  Bcast:10.0.10.255  Mask:255.255.255.0

    Then tried to telnet to 5903 from that address:

        # telnet 10.0.10.3 5903
        Trying 10.0.10.3...
        telnet: Unable to connect to remote host: Connection timed out

    No luck.

    Re-re-re-re-edit: OK, I think I have isolated it a bit to vncserver, not the firewall, darn it. I shut off vncserver and had netcat listen on port 5903. My VNC client was then able to establish a connection and sit and wait for a response. Looks like I should be chasing a VNC problem. At least that is progress. Thanks for the help.
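    A compact version of the isolation test described in the last edit, as a sketch (assumptions: display :3 is what maps to TCP 5903, and the nc flags are for OpenBSD netcat; traditional netcat wants nc -l -p 5903 instead):

        vncserver -kill :3        # stop the Xtightvnc instance on display :3 (port 5903)
        nc -l 5903 &              # listen on the same port with netcat
        telnet localhost 5903     # if this connects, the firewall is fine and VNC is the culprit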


  • How to install 'version.h' in Ubuntu?

    - by user252098
I am trying to install the Jungo WinDriver on Ubuntu 13.10, but I am puzzled by its manual's instructions for installing version.h:

        Install version.h:
        The file version.h is created when you first compile the Linux kernel source code.
        Some distributions provide a compiled kernel without the file version.h. Look under
        /usr/src/linux/include/linux to see whether you have this file. If you do not,
        follow these steps:
          1. Become super user:                              $ su
          2. Change directory to the Linux source directory: # cd /usr/src/linux
          3. Type:                                           # make xconfig
          4. Save the configuration by choosing Save and Exit.
          5. Type:                                           # make dep
          6. Exit super user mode:                           # exit

    But the shell says:

        warning: make dep is unnecessary now.

    Then I found out there is a version.h in /usr/src/linux-headers-3.11.0.12-generic, so I typed:

        /usr/src/windriver/redist# ./configure --with-kernel-source=/usr/src/linux-headers-3.11.0.12-generic

    But the WinDriver configure run fails:

        USE_KBUILD = yes
        checking for cpu architecture... x86_64
        checking for WinDriver root directory... /usr/src/WinDriver
        checking for linux kernel source... found at /usr/src/linux
        checking for lib directory... ln -sf $(ROOT_DIR)/lib/$(SHARED_OBJECT)_32.so /usr/lib/$(SHARED_OBJECT).so; ln -s /usr/lib /usr/lib64; ln -sf $(ROOT_DIR)/lib/$(SHARED_OBJECT).so /usr/lib64/$(SHARED_OBJECT).so
        checking which directories to include... -I/usr/src/linux/include
        checking linux kernel version... 3.11.10.6
        checking for modules installation directory... /lib/modules/3.11.0-12-generic/kernel/drivers/misc
        checking output directory... LINUX.3.11.0-12-generic.x86_64
        checking target... LINUX.3.11.0-12-generic.x86_64/windrvr6_usb.ko
        checking for regparm kernel option... find: `/usr/src/WinDriver/redist/.tmp_driver/.tmp_versions': No such file or directory
        0
        checking for modpost location... /usr/src/linux/scripts/mod/modpost
        configure.usb: creating ./config.status
        config.status: creating makefile.usb.kbuild
        checking for cpu architecture... x86_64
        checking for WinDriver root directory... /usr/src/WinDriver
        checking for linux kernel source... found at /usr/src/linux
        checking for lib directory... ln -sf $(ROOT_DIR)/lib/$(SHARED_OBJECT)_32.so /usr/lib/$(SHARED_OBJECT).so; ln -s /usr/lib /usr/lib64; ln -sf $(ROOT_DIR)/lib/$(SHARED_OBJECT).so /usr/lib64/$(SHARED_OBJECT).so
        checking which directories to include... -I/usr/src/linux/include
        checking linux kernel version... 3.11.10.6
        checking for modules installation directory... /lib/modules/3.11.0-12-generic/kernel/drivers/misc
        checking output directory... LINUX.3.11.0-12-generic.x86_64
        checking target... LINUX.3.11.0-12-generic.x86_64/windrvr6.ko
        checking for regparm kernel option... find: `/usr/src/WinDriver/redist/.tmp_driver/.tmp_versions': No such file or directory
        0
        checking for right linked object... windrvr_gcc_v3.a
        checking for modpost location... /usr/src/linux/scripts/mod/modpost
        configure.wd: creating ./config.status
        config.status: creating makefile.wd.kbuild

    What is the problem?
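    A sketch of the shortcut that usually makes such out-of-tree build systems happy (assumptions: WinDriver only needs a populated kernel tree at /usr/src/linux, and the Ubuntu headers package is sufficient; note that on 3.x kernels version.h lives under include/generated/uapi/linux/ rather than include/linux/):

        sudo apt-get install linux-headers-$(uname -r)
        sudo ln -sfn /usr/src/linux-headers-$(uname -r) /usr/src/linux
        ls /usr/src/linux/include/generated/uapi/linux/version.h   # confirm the file exists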


  • Unity completely broken after upgrade to 12.10?

    - by NlightNFotis
    I am facing a very frustrating issue with my computer right now. I successfully upgraded to Ubuntu 12.10 this afternoon, but after the upgrade the graphical user interface seems completely broken. To be more specific, I cannot get the Unity bar to appear on the right. I have tried many things, including (but not limited to): purging and then reinstalling the fglrx drivers, apt-get install --reinstall ubuntu-desktop, apt-get install --reinstall unity, removing the Xorg and Compiz configurations, and checking in ccsm that the Ubuntu Unity wall was enabled (it was) - all to no avail. Could someone help me troubleshoot and essentially fix this issue? NOTE: This is the output when I try to enable unity via a terminal (identical lines that repeat many times are collapsed below): compiz (core) - Info: Loading plugin: core compiz (core) - Info: Starting plugin: core unity-panel-service: no process found compiz (core) - Info: Loading plugin: reset compiz (core) - Error: Failed to load plugin: reset compiz (core) - Info: Loading plugin: ccp compiz (core) - Info: Starting plugin: ccp compizconfig - Info: Backend : gsettings compizconfig - Info: Integration : true compizconfig - Info: Profile : unity compiz (core) - Info: Loading plugin: composite compiz (core) - Info: Starting plugin: composite compiz (core) - Info: Loading plugin: opengl X Error of failed request: BadRequest (invalid request code or no such operation) Major opcode of failed request: 153 (GLX) Minor opcode of failed request: 19 (X_GLXQueryServerString) Serial number of failed request: 22 Current serial number in output stream: 22 compiz (core) - Info: Unity is not supported by your hardware. Enabling software rendering instead (slow). compiz (core) - Info: Starting plugin: opengl Compiz (opengl) - Fatal: glXQueryExtensionsString is NULL for screen 0 compiz (core) - Error: Plugin initScreen failed: opengl compiz (core) - Error: Failed to start plugin: opengl compiz (core) - Info: Unloading plugin: opengl compiz (core) - Info: Loading plugin: decor compiz (core) - Info: Starting plugin: decor compiz (decor) - Warn: requested a pixmap type decoration when compositing isn't available [repeated 11 times] Compiz (opengl) - Fatal: glXQueryExtensionsString is NULL for screen 0 [this line repeats dozens of times throughout the rest of the log and is omitted from here on]
compiz (core) - Info: Loading plugin: imgpng compiz (core) - Info: Starting plugin: imgpng compiz (core) - Info: Loading plugin: vpswitch compiz (core) - Info: Starting plugin: vpswitch compiz (core) - Info: Loading plugin: resize compiz (core) - Info: Starting plugin: resize compiz (core) - Info: Loading plugin: compiztoolbox compiz (core) - Info: Starting plugin: compiztoolbox compiz (core) - Error: Plugin 'opengl' not loaded. compiz (core) - Info: Loading plugin: move compiz (core) - Info: Starting plugin: move compiz (core) - Info: Loading plugin: gnomecompat compiz (core) - Info: Starting plugin: gnomecompat compiz (core) - Info: Loading plugin: mousepoll compiz (core) - Info: Starting plugin: mousepoll compiz (core) - Info: Loading plugin: wall compiz (core) - Info: Starting plugin: wall compiz (core) - Error: Plugin 'opengl' not loaded. compiz (core) - Error: Plugin init failed: wall compiz (core) - Error: Failed to start plugin: wall compiz (core) - Info: Unloading plugin: wall compiz (core) - Info: Loading plugin: regex compiz (core) - Info: Starting plugin: regex compiz (core) - Info: Loading plugin: snap compiz (core) - Info: Starting plugin: snap compiz (core) - Info: Loading plugin: unitymtgrabhandles compiz (core) - Info: Starting plugin: unitymtgrabhandles compiz (core) - Error: Plugin 'opengl' not loaded. compiz (core) - Error: Plugin init failed: unitymtgrabhandles compiz (core) - Error: Failed to start plugin: unitymtgrabhandles compiz (core) - Info: Unloading plugin: unitymtgrabhandles compiz (core) - Info: Loading plugin: place compiz (core) - Info: Starting plugin: place compiz (core) - Info: Loading plugin: grid compiz (core) - Info: Starting plugin: grid compiz (core) - Info: Loading plugin: animation compiz (core) - Info: Starting plugin: animation compiz (core) - Error: Plugin 'opengl' not loaded. compiz (core) - Error: Plugin init failed: animation compiz (core) - Error: Failed to start plugin: animation compiz (core) - Info: Unloading plugin: animation compiz (core) - Info: Loading plugin: fade compiz (core) - Info: Starting plugin: fade compiz (core) - Error: Plugin 'opengl' not loaded. compiz (core) - Error: Plugin init failed: fade compiz (core) - Error: Failed to start plugin: fade compiz (core) - Info: Unloading plugin: fade compiz (core) - Info: Loading plugin: session compiz (core) - Info: Starting plugin: session compiz (core) - Info: Loading plugin: expo compiz (core) - Info: Starting plugin: expo compiz (core) - Error: Plugin 'opengl' not loaded. compiz (core) - Error: Plugin init failed: expo compiz (core) - Error: Failed to start plugin: expo compiz (core) - Info: Unloading plugin: expo compiz (core) - Info: Loading plugin: ezoom compiz (core) - Info: Starting plugin: ezoom compiz (core) - Error: Plugin 'opengl' not loaded. compiz (core) - Error: Plugin init failed: ezoom compiz (core) - Error: Failed to start plugin: ezoom compiz (core) - Info: Unloading plugin: ezoom compiz (core) - Info: Loading plugin: workarounds compiz (core) - Info: Starting plugin: workarounds compiz (core) - Error: Plugin 'opengl' not loaded. compiz (core) - Info: Loading plugin: scale compiz (core) - Info: Starting plugin: scale compiz (core) - Error: Plugin 'opengl' not loaded. compiz (core) - Error: Plugin init failed: scale compiz (core) - Error: Failed to start plugin: scale compiz (core) - Info: Unloading plugin: scale Segmentation fault (core dumped)
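For reference, the reset steps mentioned above usually amount to something like the following (a sketch only - the package names and the dconf path are assumptions for a stock 12.10 install with the ATI driver, so double-check before running):
sudo apt-get purge fglrx fglrx-amdcccle    # drop the proprietary ATI driver and fall back to the open one
sudo apt-get install --reinstall ubuntu-desktop unity
dconf reset -f /org/compiz/    # 12.10 stores Compiz/Unity settings in dconf; this wipes them
setsid unity    # relaunch the shell from a terminal without killing the session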

    Read the article

  • Setting up and using Bing Translate API Service for Machine Translation

    - by Rick Strahl
    Last week I spent quite a bit of time trying to set up the Bing Translate API service. I can honestly say this was one of the most screwed up developer experiences I've had in a long while - specifically related to the byzantine sign up process that Microsoft has in place. Not only is it nearly impossible to find decent documentation on the required signup process, some of the links in the docs are just plain wrong, and some of the account pages you need to access the actual account information once signed up are not linked anywhere from the administration UI. To make things even harder, the APIs changed a while back to a completely new authentication scheme that's described in a documentation topic that isn't directly linked anywhere, which also made for a very frustrating search experience. It's a bummer that this is the case too, because the actual API itself is easy to use and works very well - fast and reasonably accurate (as accurate as you can expect machine translation to be). But the sign up process is a pain in the ass, doubtlessly leaving many people giving up in frustration. In this post I'll try to hit all the points needed to sign up for and use the Bing Translate API in one place, since such a document seems to be missing from Microsoft. Hopefully the API folks at Microsoft will get their shit together and actually provide this sort of info on their site… Signing Up The first step required is to create a Windows Azure MarketPlace account. Go to: https://datamarket.azure.com/ Sign in with your Windows Live Id. If you don't have an account you will be taken to a registration page which you have to fill out. Follow the links and complete the registration. Once you're signed in you can start adding services. Click on the Data link on the main page. Select Microsoft Translator from the list. This adds the Microsoft Bing Translator to your services. Pricing The page shows the pricing matrix and the free service, which provides 2 megabytes of translations a month for free. Prices go up steeply from there. Pricing is determined by actual bytes of the result translations used. Translations max out at 1000 characters each, so at minimum this means you get around 2000 translations a month for free. However, most translations are probably much shorter, so you can expect a larger number of translations to go through. For testing or low volume translations this should be just fine. Once signed up there are no further instructions and you're left in limbo on the MS site. Register your Application Once you've created the Data association with Translator, the next step is registering your application. To do this you need to access your developer account. Go to https://datamarket.azure.com/developer/applications/register Provide a ClientId, which is effectively the unique string identifier for your application (not your customer id!) Provide your name. The client secret was auto-created and this becomes your 'password'. For the redirect url provide any https url: https://microsoft.com works. Give this application a description of your choice so you can identify it in the list of apps. Now, once you've registered your application, keep track of the ClientId and ClientSecret - those are the two keys you need to authenticate before you can call the Translate API. Oddly, the applications page is hidden from the Azure Portal UI. I couldn't find a direct link from anywhere on the site back to this page where I can examine my developer application keys.
To find them you can go to: https://datamarket.azure.com/developer/applications You can come back here to look at your registered applications and pick up the ClientID and ClientSecret. Fun eh? But we're now ready to actually call the API and do some translating. Using the Bing Translate API The good news is that after this signup hell, using the API is pretty straightforward. To use the translation API you'll need to actually use two services: You need to call an authentication API service first, before you can call the actual translator API. These two APIs live on different domains, and the authentication API returns JSON data while the translator service returns XML. So much for consistency. Authentication The first step is authentication. The service uses oAuth authentication with a bearer token that has to be passed to the translator API. The authentication call retrieves the oAuth token that you can then use with the translate API call. The bearer token has a short 10 minute lifetime, so while you can cache it for successive calls, the token can't be cached for long periods. This means for Web backend requests you typically will have to authenticate each time, unless you build a more elaborate caching scheme that takes the timeout into account (perhaps using the ASP.NET Cache object). For low volume operations you can probably get away with simply calling the auth API for every translation you do. To call the Authentication API use code like this:

/// <summary>
/// Retrieves an oAuth authentication token to be used on the translate
/// API request. The result string needs to be passed as a bearer token
/// to the translate API.
///
/// You can find client ID and Secret (or register a new one) at:
/// https://datamarket.azure.com/developer/applications/
/// </summary>
/// <param name="clientId">The client ID of your application</param>
/// <param name="clientSecret">The client secret or password</param>
/// <returns></returns>
public string GetBingAuthToken(string clientId = null, string clientSecret = null)
{
    // oAuth token endpoint for the Azure DataMarket
    string authBaseUrl = "https://datamarket.accesscontrol.windows.net/v2/OAuth2-13";

    if (string.IsNullOrEmpty(clientId) || string.IsNullOrEmpty(clientSecret))
    {
        ErrorMessage = Resources.Resources.Client_Id_and_Client_Secret_must_be_provided;
        return null;
    }

    var postData = string.Format("grant_type=client_credentials&client_id={0}" +
                                 "&client_secret={1}" +
                                 "&scope=http://api.microsofttranslator.com",
                                 HttpUtility.UrlEncode(clientId),
                                 HttpUtility.UrlEncode(clientSecret));

    // POST Auth data to the oauth API
    string res, token;
    try
    {
        var web = new WebClient();
        web.Encoding = Encoding.UTF8;
        res = web.UploadString(authBaseUrl, postData);
    }
    catch (Exception ex)
    {
        ErrorMessage = ex.GetBaseException().Message;
        return null;
    }

    var ser = new JavaScriptSerializer();
    var auth = ser.Deserialize<BingAuth>(res);
    if (auth == null)
        return null;

    token = auth.access_token;
    return token;
}

private class BingAuth
{
    public string token_type { get; set; }
    public string access_token { get; set; }
}

This code basically takes the client id and secret and posts them to the oAuth endpoint, which returns a JSON string. Here I use the JavaScript serializer to deserialize the JSON into a custom object I created just for deserialization. You can also use JSON.NET and dynamic deserialization if you are already using JSON.NET in your app, in which case you don't need the extra type. In my library that houses this component I don't, so I just rely on the built-in serializer. The auth method returns a long base64 encoded string which can be used as a bearer token in the translate API call.
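Since both services are plain HTTP endpoints, the whole exchange can also be sketched with curl - the URLs are the same ones used in the code here, while the credentials and token are placeholders you have to substitute:

curl -s -X POST "https://datamarket.accesscontrol.windows.net/v2/OAuth2-13" \
  -d "grant_type=client_credentials" \
  -d "client_id=YOUR_CLIENT_ID" \
  -d "client_secret=YOUR_CLIENT_SECRET" \
  -d "scope=http://api.microsofttranslator.com"
# the JSON response carries access_token (good for roughly ten minutes); pass it as a bearer token:
curl -s -H "Authorization: Bearer YOUR_ACCESS_TOKEN" \
  "http://api.microsofttranslator.com/V2/Http.svc/Translate?text=Hello%20World&from=en&to=de&contentType=text%2Fplain"
# the response is a small XML fragment whose inner text is the translated string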
Translation Once you have the authentication token you can pass it to the translate API. The auth token is passed in an Authorization header, with the value prefixed by 'Bearer '. Here's what the simple Translate API call looks like:

/// <summary>
/// Uses the Bing API service to perform translation.
/// Bing can translate up to 1000 characters.
///
/// Requires that you provide a ClientId and ClientSecret
/// or set the configuration values for these two.
///
/// More info on setup:
/// http://www.west-wind.com/weblog/
/// </summary>
/// <param name="text">Text to translate</param>
/// <param name="fromCulture">Two letter culture name</param>
/// <param name="toCulture">Two letter culture name</param>
/// <param name="accessToken">Pass an access token retrieved with GetBingAuthToken.
/// If not passed the default keys from .config file are used if any</param>
/// <returns></returns>
public string TranslateBing(string text, string fromCulture, string toCulture,
                            string accessToken = null)
{
    string serviceUrl = "http://api.microsofttranslator.com/V2/Http.svc/Translate";

    if (accessToken == null)
    {
        accessToken = GetBingAuthToken();
        if (accessToken == null)
            return null;
    }

    string res;
    try
    {
        var web = new WebClient();
        web.Headers.Add("Authorization", "Bearer " + accessToken);

        string ct = "text/plain";
        string postData = string.Format("?text={0}&from={1}&to={2}&contentType={3}",
                                        HttpUtility.UrlEncode(text),
                                        fromCulture, toCulture,
                                        HttpUtility.UrlEncode(ct));

        web.Encoding = Encoding.UTF8;
        res = web.DownloadString(serviceUrl + postData);
    }
    catch (Exception e)
    {
        ErrorMessage = e.GetBaseException().Message;
        return null;
    }

    // result is a single XML element fragment
    var doc = new XmlDocument();
    doc.LoadXml(res);
    return doc.DocumentElement.InnerText;
}

The first part of this code deals with ensuring the auth token exists. You can either pass the token into the method manually or let the method automatically retrieve the auth token on its own. In my case I'm using this inside of a Web application, and in that situation I simply need to re-authenticate every time as there's no convenient way to manage the lifetime of the auth token. The auth token is added as an Authorization HTTP header prefixed with 'Bearer ' and attached to the request. The text to translate, the from and to language codes and a result format are passed on the query string of this HTTP GET request against the Translate API. The translate API returns an XML string which contains a single element with the translated string. Using the Wrapper Methods It should be pretty obvious how to use these two methods, but here are a couple of test methods that demonstrate the two usage scenarios:

[TestMethod]
public void TranslateBingWithAuthTest()
{
    var translate = new TranslationServices();

    string clientId = DbResourceConfiguration.Current.BingClientId;
    string clientSecret = DbResourceConfiguration.Current.BingClientSecret;

    string auth = translate.GetBingAuthToken(clientId, clientSecret);
    Assert.IsNotNull(auth);

    string text = translate.TranslateBing("Hello World we're back home!", "en", "de", auth);
    Assert.IsNotNull(text, translate.ErrorMessage);
    Console.WriteLine(text);
}

[TestMethod]
public void TranslateBingIntegratedTest()
{
    var translate = new TranslationServices();
    string text = translate.TranslateBing("Hello World we're back home!", "en", "de");
    Assert.IsNotNull(text, translate.ErrorMessage);
    Console.WriteLine(text);
}

Other API Methods The Translate API has a number of methods available, and this one is the simplest, but probably also the most common: it translates a single string.
You can find additional methods for this API here: http://msdn.microsoft.com/en-us/library/ff512419.aspx Soap and AJAX APIs are also available and documented on MSDN: http://msdn.microsoft.com/en-us/library/dd576287.aspx These links will be your starting points for calling other methods in this API. Dual Interface I've talked about my database driven localization provider here in the past, and it's for this tool that I added the Bing localization support. Basically I have a localization administration form that allows me to translate individual strings right out of the UI, using both the Google and Bing APIs: As you can see in this example, the results from Google and Bing can vary quite a bit - in this case Google is stumped while Bing actually generated a valid translation. At other times it's the other way around - it's pretty useful to see multiple translations at the same time. Here I can choose one of the values and directly embed it into the translated text field. Lost in Translation There you have it. As I mentioned, once you have all the bureaucratic crap out of the way, calling the APIs is fairly straightforward and reasonably fast, even if you have to call the Auth API for every call. Hopefully this post will help out a few of you trying to navigate the Microsoft bureaucracy, at least until next time Microsoft upends everything and introduces new ways to sign up again. Until then - happy translating… Related Posts Translation method Source on Github Translating with Google Translate without Google API Keys Creating a data-driven ASP.NET Resource Provider © Rick Strahl, West Wind Technologies, 2005-2013 Posted in Localization  ASP.NET  .NET

    Read the article

  • How to resolve Unmet dependencies error?

    - by dandelion
    On my new install of Ubuntu I haven't been able to download anything from the Software Center except the maryo game; everything else fails with the following error: The following packages have unmet dependencies: vlc: Depends: vlc-nox (= 1.1.12-2~oneiric1) but 1.1.12-2~oneiric1 is to be installed Depends: libaa1 (>= 1.4p5) but 1.4p5-38build1 is to be installed Depends: libavcodec-extra-53 (>= 4:0.7-1) but 4:0.7.3ubuntu0.11.10.1 is to be installed Depends: libavutil-extra-51 (>= 4:0.7-1) but 4:0.7.3ubuntu0.11.10.1 is to be installed Depends: libc6 (>= 2.8) but 2.13-20ubuntu5.1 is to be installed Depends: libfreetype6 (>= 2.2.1) but 2.4.4-2ubuntu1.1 is to be installed Depends: libgcc1 (>= 1:4.1.1) but 1:4.6.1-9ubuntu3 is to be installed Depends: libqtcore4 (>= 4:4.7.0~beta1) but 4:4.7.4-0ubuntu8.1 is to be installed Depends: libqtgui4 (>= 4:4.5.3) but 4:4.7.4-0ubuntu8.1 is to be installed Depends: libsdl-image1.2 (>= 1.2.10) but 1.2.10-2.1 is to be installed Depends: libsdl1.2debian (>= 1.2.10-1) but 1.2.14-6.1ubuntu4 is to be installed Depends: libstdc++6 (>= 4.6) but 4.6.1-9ubuntu3 is to be installed Depends: libva-x11-1 (> 1.0.12~) but it is not going to be installed Depends: libva1 (> 1.0.12~) but it is not going to be installed Depends: libxcb-randr0 (>= 1.1) but it is not going to be installed Depends: libxcb-xv0 (>= 1.2) but it is not going to be installed Depends: zlib1g (>= 1:1.2.3.3.dfsg) but 1:1.2.3.4.dfsg-3ubuntu3 is to be installed My system specs: Ubuntu 11.10, 64-bit, a ge-g41m-es2l motherboard, an AMD 5770 video card and a WDC Green 500 GB hard drive. I have recently changed the motherboard, but otherwise have not changed my computer from when I used to be running the same version of Ubuntu. Edit: still unable to download. Output of sudo apt-get update: ~$ sudo apt-get update Ign http://extras.ubuntu.com oneiric InRelease Ign http://security.ubuntu.com oneiric-security InRelease Ign http://archive.canonical.com oneiric InRelease Ign http://ppa.launchpad.net oneiric InRelease Ign http://us.archive.ubuntu.com oneiric InRelease Ign http://us.archive.ubuntu.com oneiric-updates InRelease Ign http://us.archive.ubuntu.com oneiric-backports InRelease Hit http://extras.ubuntu.com oneiric Release.gpg Hit http://archive.canonical.com oneiric Release.gpg Hit http://security.ubuntu.com oneiric-security Release.gpg Hit http://ppa.launchpad.net oneiric Release.gpg Ign http://us.archive.ubuntu.com oneiric-proposed InRelease Hit http://us.archive.ubuntu.com oneiric Release.gpg Hit http://extras.ubuntu.com oneiric Release Hit http://archive.canonical.com oneiric Release Hit http://security.ubuntu.com oneiric-security Release Hit http://ppa.launchpad.net oneiric Release Hit http://us.archive.ubuntu.com oneiric-updates Release.gpg Hit http://us.archive.ubuntu.com oneiric-backports Release.gpg Hit http://extras.ubuntu.com oneiric/main Sources Hit http://archive.canonical.com oneiric/partner i386 Packages Hit http://security.ubuntu.com oneiric-security/main Sources Hit http://ppa.launchpad.net oneiric/main Sources Hit http://us.archive.ubuntu.com oneiric-proposed Release.gpg Hit http://extras.ubuntu.com oneiric/main i386 Packages Ign http://extras.ubuntu.com oneiric/main TranslationIndex Hit http://ppa.launchpad.net oneiric/main i386 Packages Ign http://ppa.launchpad.net oneiric/main TranslationIndex Ign http://archive.canonical.com oneiric/partner TranslationIndex Hit http://security.ubuntu.com oneiric-security/restricted Sources Hit http://security.ubuntu.com oneiric-security/universe
Sources Hit http://security.ubuntu.com oneiric-security/multiverse Sources Hit http://security.ubuntu.com oneiric-security/main i386 Packages Hit http://security.ubuntu.com oneiric-security/restricted i386 Packages Hit http://us.archive.ubuntu.com oneiric Release Hit http://us.archive.ubuntu.com oneiric-updates Release Hit http://security.ubuntu.com oneiric-security/universe i386 Packages Hit http://security.ubuntu.com oneiric-security/multiverse i386 Packages Hit http://security.ubuntu.com oneiric-security/main TranslationIndex Hit http://security.ubuntu.com oneiric-security/multiverse TranslationIndex Hit http://security.ubuntu.com oneiric-security/restricted TranslationIndex Hit http://security.ubuntu.com oneiric-security/universe TranslationIndex Hit http://us.archive.ubuntu.com oneiric-backports Release Hit http://security.ubuntu.com oneiric-security/main Translation-en Hit http://security.ubuntu.com oneiric-security/multiverse Translation-en Hit http://us.archive.ubuntu.com oneiric-proposed Release Hit http://us.archive.ubuntu.com oneiric/main Sources Hit http://us.archive.ubuntu.com oneiric/restricted Sources Hit http://us.archive.ubuntu.com oneiric/universe Sources Hit http://us.archive.ubuntu.com oneiric/multiverse Sources Hit http://security.ubuntu.com oneiric-security/restricted Translation-en Hit http://us.archive.ubuntu.com oneiric/restricted i386 Packages Hit http://us.archive.ubuntu.com oneiric/universe i386 Packages Hit http://us.archive.ubuntu.com oneiric/multiverse i386 Packages Hit http://us.archive.ubuntu.com oneiric/main TranslationIndex Hit http://us.archive.ubuntu.com oneiric/multiverse TranslationIndex Hit http://us.archive.ubuntu.com oneiric/restricted TranslationIndex Hit http://us.archive.ubuntu.com oneiric/universe TranslationIndex Hit http://us.archive.ubuntu.com oneiric-updates/main Sources Hit http://us.archive.ubuntu.com oneiric-updates/restricted Sources Hit http://security.ubuntu.com oneiric-security/universe Translation-en Hit http://us.archive.ubuntu.com oneiric-updates/universe Sources Hit http://us.archive.ubuntu.com oneiric-updates/multiverse Sources Hit http://us.archive.ubuntu.com oneiric-updates/main i386 Packages Hit http://us.archive.ubuntu.com oneiric-updates/restricted i386 Packages Hit http://us.archive.ubuntu.com oneiric-updates/universe i386 Packages Hit http://us.archive.ubuntu.com oneiric-updates/multiverse i386 Packages Hit http://us.archive.ubuntu.com oneiric-updates/main TranslationIndex Hit http://us.archive.ubuntu.com oneiric-updates/multiverse TranslationIndex Hit http://us.archive.ubuntu.com oneiric-updates/restricted TranslationIndex Hit http://us.archive.ubuntu.com oneiric-updates/universe TranslationIndex Hit http://us.archive.ubuntu.com oneiric-backports/main Sources Hit http://us.archive.ubuntu.com oneiric-backports/restricted Sources Hit http://us.archive.ubuntu.com oneiric-backports/universe Sources Hit http://us.archive.ubuntu.com oneiric-backports/multiverse Sources Hit http://us.archive.ubuntu.com oneiric-backports/main i386 Packages Hit http://us.archive.ubuntu.com oneiric-backports/restricted i386 Packages Hit http://us.archive.ubuntu.com oneiric-backports/universe i386 Packages Hit http://us.archive.ubuntu.com oneiric-backports/multiverse i386 Packages Hit http://us.archive.ubuntu.com oneiric-backports/main TranslationIndex Hit http://us.archive.ubuntu.com oneiric-backports/multiverse TranslationIndex Hit http://us.archive.ubuntu.com oneiric-backports/restricted TranslationIndex Hit http://us.archive.ubuntu.com 
oneiric-backports/universe TranslationIndex Ign http://extras.ubuntu.com oneiric/main Translation-en_US Ign http://ppa.launchpad.net oneiric/main Translation-en_US Hit http://us.archive.ubuntu.com oneiric-proposed/restricted i386 Packages Hit http://us.archive.ubuntu.com oneiric-proposed/main i386 Packages Hit http://us.archive.ubuntu.com oneiric-proposed/multiverse i386 Packages Hit http://us.archive.ubuntu.com oneiric-proposed/universe i386 Packages Hit http://us.archive.ubuntu.com oneiric-proposed/main TranslationIndex Hit http://us.archive.ubuntu.com oneiric-proposed/multiverse TranslationIndex Hit http://us.archive.ubuntu.com oneiric-proposed/restricted TranslationIndex Hit http://us.archive.ubuntu.com oneiric-proposed/universe TranslationIndex Ign http://archive.canonical.com oneiric/partner Translation-en_US Ign http://extras.ubuntu.com oneiric/main Translation-en Ign http://ppa.launchpad.net oneiric/main Translation-en Ign http://archive.canonical.com oneiric/partner Translation-en Get:1 http://us.archive.ubuntu.com oneiric/main i386 Packages [1,583 kB] Hit http://us.archive.ubuntu.com oneiric/main Translation-en Hit http://us.archive.ubuntu.com oneiric/multiverse Translation-en Hit http://us.archive.ubuntu.com oneiric/restricted Translation-en Hit http://us.archive.ubuntu.com oneiric/universe Translation-en Hit http://us.archive.ubuntu.com oneiric-updates/main Translation-en Hit http://us.archive.ubuntu.com oneiric-updates/multiverse Translation-en Hit http://us.archive.ubuntu.com oneiric-updates/restricted Translation-en Hit http://us.archive.ubuntu.com oneiric-updates/universe Translation-en Hit http://us.archive.ubuntu.com oneiric-backports/main Translation-en Hit http://us.archive.ubuntu.com oneiric-backports/multiverse Translation-en Hit http://us.archive.ubuntu.com oneiric-backports/restricted Translation-en Hit http://us.archive.ubuntu.com oneiric-backports/universe Translation-en Hit http://us.archive.ubuntu.com oneiric-proposed/main Translation-en Hit http://us.archive.ubuntu.com oneiric-proposed/multiverse Translation-en Hit http://us.archive.ubuntu.com oneiric-proposed/restricted Translation-en Hit http://us.archive.ubuntu.com oneiric-proposed/universe Translation-en Err http://us.archive.ubuntu.com oneiric/main i386 Packages 404 Not Found [IP: 91.189.92.179 80] Fetched 1 B in 2s (0 B/s) W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/oneiric/main/binary-i386/Packages 404 Not Found [IP: 91.189.92.179 80] E: Some index files failed to download. They have been ignored, or old ones used instead.
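The 404 on oneiric/main/binary-i386/Packages at the end is the actual failure: the us. mirror is serving a stale or broken index. One common first step (a sketch only - the mirror swap is an assumption, and sed keeps a .bak copy of sources.list in case you need to revert):

sudo sed -i.bak 's|http://us.archive.ubuntu.com|http://archive.ubuntu.com|g' /etc/apt/sources.list    # try the main archive instead of the us. mirror
sudo apt-get clean       # drop any partially fetched index files
sudo apt-get update
sudo apt-get install -f  # then re-try resolving the unmet dependencies

If the update then completes without the 404, re-try the VLC install from the Software Center.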

    Read the article

  • How to avoid the Portlet Skin mismatch

    - by Martin Deh
    There are probably many ongoing debates on whether to use portlets or task flows in a WebCenter custom portal application.  Usually the main battle in these debates is centered around which technology enables better performance.  The good news is that both of my colleagues, Maiko Rocha and George Maggessy, have posted their respective views on this topic, so I will not have to further the discussion.  However, if you do plan to use portlets in a WebCenter custom portal application, this post will help you avoid the "portlet skin mismatch" issue.   An example of the presence of the mismatch can be seen in the application's log: The skin customsharedskin.desktop specified on the requestMap will be used even though the consumer's skin's styleSheetDocumentId on the requestMap does not match the local skin's styleSheetDocument's id. This will impact performance since the consumer and producer stylesheets cannot be shared. The producer styleclasses will not be compressed to avoid conflicts. A reason the ids do not match may be the jars are not identical on the producer and the consumer. For example, one might have trinidad-skins.xml's skin-additions in a jar file on the class path that the other does not have. Notice that due to the mismatch the portlet's CSS cannot be compressed, which will most likely impact performance in the portlet's consuming portal. The first part of the blog will define the portlet mismatch and cover some debugging tips that can help you solve it.  Following that, I will give a complete example of creating, using and sharing a shared skin in both a portlet producer and the consumer application. Portlet Mismatch Defined  In general, when you consume/render an ADF page (or task flow) using the ADF Portlet bridge, the portlet (producer) will try to use the skin of the consumer page - this is called skin-sharing. When the producer cannot match the consumer skin, the portlet generates its own stylesheet and references it from its markup - this is called mismatched-skin. This can happen because: The consumer and producer use different versions of ADF Faces, or The consumer has additional skin-additions that the producer doesn't have or vice versa, or The producer does not have the consumer skin For cases (1) & (2) above, the producer still uses the consumer skin ID to render its markup. For case (3), the producer defaults to using the portlet skin. If there is a skin mismatch then there may be a performance hit because: The browser needs to fetch this extra stylesheet (though it should be cached unless expires caching is turned off) The generated portlet markup uses uncompressed styles, resulting in a larger markup It is often not obvious when a skin mismatch occurs, unless you look for either of these indicators: The log messages in the producer log, for example: The skin blafplus-rich.desktop specified on the requestMap will not be used because the styleSheetDocument id on the requestMap does not match the local skin's styleSheetDocument's id. It could mean the jars are not identical. For example, one might have trinidad-skins.xml's skin-additions in a jar file on the class path that the other does not have.
View the portlet markup inside the iframe; there should be a <link> tag to the portlet stylesheet resource like this (note the CSS is proxied through the consumer's resourceproxy): <link rel=\"stylesheet\" charset=\"UTF-8\" type=\"text/css\" href=\"http:.../resourceproxy/portletId...252525252Fadf%252525252Fstyles%252525252Fcache%252525252Fblafplus-rich-portlet-d1062g-en-ltr-gecko.css... Using an HTTP monitoring tool (e.g., Firebug, HttpWatch), you can see a request is made to the portlet stylesheet resource (see the URL above) There are a number of reasons for a mismatched skin. For skins to match, the producer and consumer must match the following configurations: The ADF Faces version (different versions may have different style selectors) Style compression, which is defined in the web.xml (default value is false, i.e. compression is ON) Tonal styles or themes, also defined in the web.xml via context-params The same skin additions (jars with skins) are available to both producer and consumer.  Skin additions are defined in the trinidad-skins.xml, using the <skin-addition> tags. These are then aggregated from all the jar files in the classpath. If there's any jar that exists on the producer but not the consumer, or vice versa, you get a mismatch. Debugging Tips  Ensure the style compression and tonal styles/themes match on the consumer and producer by looking at the web.xml documents for the consumer & producer applications. It is a bit more involved to determine whether the jars match.  However, you can enable the Trinidad logging to show which skin-addition it is processing.  To enable this feature, update the logging.xml log level of both the producer and consumer WLS to FINEST.  For example, in the case of the WebLogic server used by JDeveloper: $JDEV_USER_DIR/system<version number>/DefaultDomain/config/fmwconfig/servers/DefaultServer/logging.xml Add a new entry: <logger name="org.apache.myfaces.trinidadinternal.skin.SkinUtils" level="FINEST"/> Restart WebLogic.  Run the consumer page; you should see the following logging in both the consumer and producer log files. Any entries that don't match are the cause of the mismatch; a quick way to compare the two logs is sketched just below.
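For example, one way to isolate the differing jars (a sketch only - the diagnostic log file names are placeholders for whatever your producer and consumer servers actually write):

grep 'Processing skin URL' producer-diagnostic.log | sed 's|.*/\([^/!]*\.jar\).*|\1|' | sort -u > producer-skins.txt
grep 'Processing skin URL' consumer-diagnostic.log | sed 's|.*/\([^/!]*\.jar\).*|\1|' | sort -u > consumer-skins.txt
diff producer-skins.txt consumer-skins.txt    # any jar unique to one side is a mismatch suspect

The sed step keeps only the jar file name, since the full zip: URLs include server-specific temp directories that would never match between the two servers.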
The following is an example of what the log will produce with this setting: [SRC_CLASS: org.apache.myfaces.trinidadinternal.skin.SkinUtils] [APP: WebCenter] [SRC_METHOD: _getMetaInfSkinsNodeList] Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/announcement-skin.jar!/META-INF/trinidad-skins.xml Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/calendar-skin.jar!/META-INF/trinidad-skins.xml Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/custComps-skin.jar!/META-INF/trinidad-skins.xml Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/forum-skin.jar!/META-INF/trinidad-skins.xml Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/page-service-skin.jar!/META-INF/trinidad-skins.xml Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/peopleconnections-kudos-skin.jar!/META-INF/trinidad-skins.xml Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/peopleconnections-wall-skin.jar!/META-INF/trinidad-skins.xml Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/portlet-client-adf-skin.jar!/META-INF/trinidad-skins.xml Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/rtc-skin.jar!/META-INF/trinidad-skins.xml Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/serviceframework-skin.jar!/META-INF/trinidad-skins.xml Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/smarttag-skin.jar!/META-INF/trinidad-skins.xml Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.skin/in1ar8/APP-INF/lib/spaces-service-skins.jar!/META-INF/trinidad-skins.xml Processing skin URL:zip:/tmp/_WL_user/oracle.webcenter.composer/3yo7j/WEB-INF/lib/custComps-skin.jar!/META-INF/trinidad-skins.xml Processing skin URL:zip:/tmp/_WL_user/adf.oracle.domain.webapp/q433f9/WEB-INF/lib/adf-richclient-impl-11.jar!/META-INF/trinidad-skins.xml Processing skin URL:zip:/tmp/_WL_user/adf.oracle.domain.webapp/q433f9/WEB-INF/lib/dvt-faces.jar!/META-INF/trinidad-skins.xml Processing skin URL:zip:/tmp/_WL_user/adf.oracle.domain.webapp/q433f9/WEB-INF/lib/dvt-trinidad.jar!/META-INF/trinidad-skins.xml   The Complete Example The first step is to create the shared library.  The WebCenter documentation covering this is located here in section 15.7.  In addition, our ADF guru Frank Nimphius also covers this in his blog.  Here are my steps (in JDeveloper) to create the skin that will be used as the shared library for both the portlet producer and consumer. Create a new Generic Application Give the application a name (i.e. MySharedSkin) Give a project name (i.e. MySkinProject) Leave Project Technologies blank (none selected), and click Finish Create the trinidad-skins.xml Right-click on the MySkinProject node in the Application Navigator and select "New" In the New Gallery, click on "General", select "File" from the Items, and click OK In the Create File dialog, name the file trinidad-skins.xml, and (IMPORTANT) give the directory path to MySkinProject\src\META-INF In the trinidad-skins.xml, complete the skin entry.
for example: <?xml version="1.0" encoding="windows-1252" ?> <skins xmlns="http://myfaces.apache.org/trinidad/skin">   <skin>     <id>mysharedskin.desktop</id>     <family>mysharedskin</family>     <extends>fusionFx-v1.desktop</extends>     <style-sheet-name>css/mysharedskin.css</style-sheet-name>   </skin> </skins> Create CSS file In the Application Navigator, right click on the META-INF folder (where the trinidad-skins.xml is located), and select "New" In the New Gallery, select Web-Tier-> HTML, CSS File from the Items and click OK In the Create Cascading Style Sheet dialog, give the name (i.e. mysharedskin.css) Ensure that the Directory path is under the META-INF (i.e. MySkinProject\src\META-INF\css) Once the new CSS opens in the editor, add in a style selector.  For example, this selector will style the background of a particular panelGroupLayout: af|panelGroupLayout.customPGL{     background-color:Fuchsia; } Create the MANIFEST.MF (used for deployment JAR) In the Application Navigator, right click on the META-INF folder (where the trinidad-skins.xml is located), and select "New" In the New Gallery, click on "General", select "File" from the Items, and click OK In the Create File dialog, name the file MANIFEST.MF, and (IMPORTANT) ensure that the directory path is to MySkinProject\src\META-INF Complete the MANIFEST.MF, where the extension name is the shared library name Manifest-Version: 1.1 Created-By: Martin Deh Implementation-Title: mysharedskin Extension-Name: mysharedskin.lib.def Specification-Version: 1.0.1 Implementation-Version: 1.0.1 Implementation-Vendor: MartinDeh Create new Deployment Profile Right click on the MySkinProject node, and select New From the New Gallery, select General->Deployment Profiles, Shared Library JAR File from Items, and click OK In the Create Deployment Profile dialog, give a name (i.e. mysharedskinlib) and click OK In the Edit JAR Deployment dialog, un-check the Include Manifest File option  Select Project Output->Contributors, and check Project Source Path Select Project Output->Filters, ensure that all items under the META-INF folder are selected Click OK to exit the Project Properties dialog Deploy the shared lib to WebLogic (start server before steps) Right click on MySkinProject and select Deploy For this example, I will deploy to JDeveloper WLS In the Deploy dialog, select Deploy to Weblogic Application Server and click Next Choose IntegratedWebLogicServer and click Next Select Deploy to selected instances in the domain radio, select Default Server (note: server must be already started), and ensure Deploy as a shared Library radio is selected Click Finish Open the WebLogic console to see the deployed shared library The following are the steps to create a simple test Portlet Create a new WebCenter Portal - Portlet Producer Application In the Create Portlet Producer dialog, select default settings and click Finish Right click on the Portlets node and select New In the New Gallery, select Web-Tier->Portlets, Standards-based Java Portlet (JSR 286) and click OK In the General Portlet information dialog, give the portlet a name (i.e.
MyPortlet) and click Next 2 times, stopping at Step 3 In the Content Types, select the "view" node, in the Implementation Method, select the Generate ADF-Faces JSPX radio and click Finish Once the portlet code is generated, open the view.jspx in the source editor Based on the simple CSS entry, which sets the background color of a panelGroupLayout, replace the <af:form/> tag with the example code <af:form>         <af:panelGroupLayout id="pgl1" styleClass="customPGL">           <af:outputText value="background from shared lib skin" id="ot1"/>         </af:panelGroupLayout>  </af:form> Since this portlet is to use the shared library skin, in the generated trinidad-config.xml, remove both the skin-family tag and the skin-version tag In the Application Resources view, under Descriptors->META-INF, double-click to open the weblogic-application.xml Add a library reference to the shared skin library (note: the library-name must match the extension-name declared in the MANIFEST.MF):  <library-ref>     <library-name>mysharedskin.lib.def</library-name>  </library-ref> Notice that a reference to oracle.webcenter.skin exists.  This is important if this portlet is going to be consumed by a WebCenter Portal application.  If this tag is not present, the portlet skin mismatch will happen.  Configure the portlet for deployment Create Portlet deployment WAR Right click on the Portlets node and select New In the New Gallery, select Deployment Profiles, WAR file from Items and click OK In the Create Deployment Profile dialog, give name (i.e. myportletwar), click OK Keep all of the defaults, however, remember the Context Root entry (i.e. MyPortlet4SharedLib-Portlets-context-root, this will be needed to obtain the producer WSDL URL) Click OK, then OK again to exit from the Properties dialog Since the weblogic-application.xml has to be included in the deployment, the portlet must be deployed as a WAR, within an EAR In the Application dropdown, select Deploy->New Deployment Profile... By default EAR File has been selected, click OK Give Deployment Profile (EAR) a name (i.e. MyPortletProducer) and click OK In the Properties dialog, select Application Assembly and ensure that the myportletwar is checked Keep all of the other defaults and click OK For this demo, un-check the Auto Generate ..., and all of the Security Deployment Options, click OK Save All In the Application dropdown, select Deploy->MyPortletProducer In the Deployment Action, select Deploy to Application Server, click Next Choose IntegratedWebLogicServer and click Next Select Deploy to selected instances in the domain radio, select Default Server (note: server must be already started), and ensure Deploy as a standalone Application radio is selected The select deployment type (identifying the deployment as a JSR 286 portlet) dialog appears.  Keep default radio "Yes" selection and click OK Open the WebLogic console to see the deployed Portlet The last step is to create the test portlet consuming application.  This will be done using the OOTB WebCenter Portal - Framework Application.  Create the Portlet Producer Connection In the JDeveloper Deployment log, copy the URL of the portlet deployment (i.e. http://localhost:7101/MyPortlet4SharedLib-Portlets-context-root Open a browser and paste in the URL.  The Portlet information page should appear.  Click on the WSRP v2 WSDL link Copy the URL from the browser (i.e. 
http://localhost:7101/MyPortlet4SharedLib-Portlets-context-root/portlets/wsrp2?WSDL) In the Application Resources view, right click on the Connections folder and select New Connection->WSRP Connection Give the producer a name or accept the default, click Next Enter (paste in) the WSDL URL, click Next If the connection to the portlet is successful, Step 3 (Specify Additional ...) should appear.  Accept defaults and click Finish Add the portlet to a test page Open the home.jspx.  Note in the visual editor the orange dashed border, which identifies the panelCustomizable tag. From the Application Resources, select the MyPortlet portlet node, and drag and drop the node into the panelCustomizable section.  A Confirm Portlet Type dialog appears; keep the default ADF Rich Portlet and click OK Configure the portlet to use the shared skin library Open the weblogic-application.xml and add the library-ref entry (mysharedskin.lib.def) for the shared skin library.  See the create-portlet example above for the steps Since, by default, the custom portal uses a managed bean to (dynamically) determine the skin family, the default trinidad-config.xml will need to be altered Open the trinidad-config.xml in the editor and replace the EL (preferenceBean) for the skin-family tag with mysharedskin (this is the skin-family name defined in the trinidad-skins.xml) Remove the skin-version tag Right click on the index.html to test the application   Notice that the JDeveloper log view does not have any reporting of a skin mismatch.  In addition, since I have configured the extra logging outlined in the debugging section above, I can see the processed skin jar in both the producer and consumer logs: <SkinUtils> <_getMetaInfSkinsNodeList> Processing skin URL:zip:/JDeveloper/system11.1.1.6.38.61.92/DefaultDomain/servers/DefaultServer/upload/mysharedskin.lib.def/[email protected]/app/mysharedskinlib.jar!/META-INF/trinidad-skins.xml

    Read the article

  • Solaris 10 branded zone VM Templates for Solaris 11 on OTN

    - by jsavit
    Early this year I wrote the article Ours Goes To 11, which describes the ability to import Solaris 10 systems into a "Solaris 10 branded zone" under Oracle Solaris 11. I did this using Solaris 11 Express, and the capability remains in Solaris 11 with only slight changes. This important tool lets you painlessly inhale a Solaris Container from Solaris 10, or an entire Solaris 10 system ("the global zone"), into a virtualized environment on a Solaris 11 OS. Just recently, Oracle provided Oracle VM Templates for Oracle Solaris 10 Zones to let you create Solaris 10 branded zones for Solaris 11 even if you don't currently have access to install media or a running Solaris 10 system. To use this, just download the Oracle VM Template for Oracle Solaris Zone 10 from OTN at http://www.oracle.com/technetwork/server-storage/solaris11/downloads/virtual-machines-1355605.html. This page contains images of Oracle Solaris 10 8/11 (the recent update to Solaris 10) in SPARC and x86 formats suitable for creating branded zones. The same page also has a VirtualBox image you can download for a complete Solaris 10 install in a guest virtual machine you can run on any host OS that supports VirtualBox. Both sets of downloads provide a quick - and extremely easy - way to set up a virtual Solaris 10 environment. In the case of the Oracle VM Templates, they illustrate several advanced features of Solaris 11. To start, just go to the above link, download the template for the hardware platform (SPARC or x86) you want, and download the README file also linked from that page. Install prerequisites The README file tells you to install the prerequisite Solaris 11 package that implements the Solaris 10 brand. Then you can install instances of zones with that brand. # pkg install pkg:/system/zones/brand/brand-solaris10 Packages to install: 1 Create boot environment: No Create backup boot environment: Yes DOWNLOAD PKGS FILES XFER (MB) Completed 1/1 44/44 0.4/0.4 PHASE ACTIONS Install Phase 74/74 PHASE ITEMS Package State Update Phase 1/1 Image State Update Phase 2/2 That took only a few minutes, and didn't require a reboot. Install the Solaris 10 zone Now it's time to run the downloaded template file. First make it executable via the chmod command, of course. I found that (contrary to what the README states) there was no need to rename the downloaded file to remove the .bin. When you run it you provide several parameters to describe the zone configuration: -a IP address - the IP address and optional netmask for the zone. This is the only mandatory parameter. -z zonename - the name of the zone you would like to create. -i interface - the package will create an exclusive-IP zone using a virtual NIC (vnic) based on this physical interface. In my case, I have a NIC called rge0. -p PATH - specifies the path in which you want the zoneroot to be placed. In my case, I have a ZFS dataset mounted at /zones, and this will create a zoneroot at /zones/s10u10. Kicking it off, you will see a copyright message, and then messages showing progress building the zone, which only takes a few minutes. # ./solaris-10u10-x86.bin -p /zones -a 192.168.1.100 -i rge0 -z s10u10 ... ... Checking disk-space for extraction Ok Extracting in /export/home/CDimages/s10zone/bootimage.ihaqvh ... 100% [===============================] Checking data integrity Ok Checking platform compatibility The host and the image do not have the same Solaris release: host Solaris release: 5.11 image Solaris release: 5.10 Will create a Solaris 10 branded zone.
Warning: could not find a defaultrouter Zone won't have any defaultrouter configured IMAGE: ./solaris-10u10-x86.bin ZONE: s10u10 ZONEPATH: /zones/s10u10 INTERFACE: rge0 VNIC: vnicZBI13379 MAC ADDR: 2:8:20:5c:1a:cc IP ADDR: 192.168.1.100 NETMASK: 255.255.255.0 DEFROUTER: NONE TIMEZONE: US/Arizona Checking disk-space for installation Ok Installing in /zones/s10u10 ... 100% [===============================] Using a static exclusive-IP Attaching s10u10 Booting s10u10 Waiting for boot to complete booting... booting... booting... Zone s10u10 booted The zone's root password has been set using the root password of the local host. You can change the zone's root password to further harden the security of the zone: being root, log into the zone from the local host with the command 'zlogin s10u10'. Once logged in, change the root password with the command 'passwd'. The nifty part in my opinion (besides being so easy) is that the zone was created as an exclusive-IP zone on a virtual NIC. This network configuration lets you enforce traffic isolation from other zones, enforce network Quality of Service, and even lets the zone set its own characteristics like IP address and packet size. Independence of the zone's network characteristics from the global zone is one of the enhancements in Solaris 11 that make it easier to consolidate zones while preserving their autonomy, yet provide control in a consolidated environment. Let's see what the virtual network environment looks like by issuing commands from the Solaris 11 global zone. First I'll use Old School ifconfig, and then I'll use the new ipadm and dladm commands. # ifconfig -a4 lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1 inet 127.0.0.1 netmask ff000000 rge0: flags=1004943<UP,BROADCAST,RUNNING,PROMISC,MULTICAST,DHCP,IPv4> mtu 1500 index 2 inet 192.168.1.3 netmask ffffff00 broadcast 192.168.1.255 ether 0:14:d1:18:ac:bc vboxnet0: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 3 inet 192.168.56.1 netmask ffffff00 broadcast 192.168.56.255 ether 8:0:27:f8:62:1c # dladm show-phys LINK MEDIA STATE SPEED DUPLEX DEVICE yge0 Ethernet unknown 0 unknown yge0 yge1 Ethernet unknown 0 unknown yge1 rge0 Ethernet up 1000 full rge0 vboxnet0 Ethernet up 1000 full vboxnet0 # dladm show-link LINK CLASS MTU STATE OVER yge0 phys 1500 unknown -- yge1 phys 1500 unknown -- rge0 phys 1500 up -- vboxnet0 phys 1500 up -- vnicZBI13379 vnic 1500 up rge0 s10u10/vnicZBI13379 vnic 1500 up rge0 s10u10/net0 vnic 1500 up rge0 # dladm show-vnic LINK OVER SPEED MACADDRESS MACADDRTYPE VID vnicZBI13379 rge0 1000 2:8:20:5c:1a:cc random 0 s10u10/vnicZBI13379 rge0 1000 2:8:20:5c:1a:cc random 0 s10u10/net0 rge0 1000 2:8:20:9d:d0:79 random 0 # ipadm show-addr ADDROBJ TYPE STATE ADDR lo0/v4 static ok 127.0.0.1/8 rge0/_a dhcp ok 192.168.1.3/24 vboxnet0/_a static ok 192.168.56.1/24 lo0/v6 static ok ::1/128 Log into the zone The install step already booted the zone, so let's log into it. Notice how you have to be appropriately privileged to log into a zone. This is my home system so I'm being a bit cavalier, but in a production environment you can give granular control of who can log in to which zones. Voila! A Solaris 10 environment under a Solaris 11 kernel. Notice the output from the uname -a and ifconfig commands, and the output from a ping to a nearby host.
$ zlogin s10u10 zlogin: You lack sufficient privilege to run this command (all privs required) savit@home:~$ sudo zlogin s10u10 Password: [Connected to zone 's10u10' pts/5] Oracle Corporation SunOS 5.10 Generic Patch January 2005 # uname -a SunOS s10u10 5.10 Generic_Virtual i86pc i386 i86pc # ifconfig -a4 lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1 inet 127.0.0.1 netmask ff000000 vnicZBI13379: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2 inet 192.168.1.100 netmask ffffff00 broadcast 192.168.1.255 ether 2:8:20:5c:1a:cc # bash bash-3.2# ifconfig -a lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1 inet 127.0.0.1 netmask ff000000 vnicZBI13379: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2 inet 192.168.1.100 netmask ffffff00 broadcast 192.168.1.255 ether 2:8:20:5c:1a:cc bash-3.2# ping 192.168.1.2 192.168.1.2 is alive For fun, I configured Apache (setting its configuration file in /etc/apache2) and brought it up. Easy - took just a few minutes. bash-3.2# svcs apache2 STATE STIME FMRI disabled 12:38:46 svc:/network/http:apache2 bash-3.2# svcadm enable apache2 Summary In just a few minutes, I built a functioning virtual Solaris 10 environment under my Solaris 11 system. It was... easy! While I can still do it the manual way (creating and using a system archive), this is a low-effort way to create a Solaris 10 zone on Solaris 11.
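To sanity-check the result from the Solaris 11 global zone, a quick sketch (the zone name is the one created above; column layout may vary by update):

# zoneadm list -cv               (the new zone should be listed with the solaris10 brand)
# zonecfg -z s10u10 info brand   (prints the zone's brand property)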

    Read the article

  • 12.04 upgrade broke grub? (not wubi related)

    - by kaare
    I just updated from 11.10 to 12.04, with no major problems (it took a while to get past a request to restart ssh, mysql and some other services, but I did no fiddling by myself, everything was done by the installer). However, after restarting, grub can't do anything. Picking the new Linux installation (first entry), I just get error: no such partition error: no such partition error: no such partition and picking the recovery version just gives 5 lines instead of 3. I have Windows 7 installed on a different drive, and can run it by booting from that drive instead. Picking it from the grub menu gives the same error as above (can't remember how many lines, though). I'll be honest and say that I don't remember if Windows 7 could be booted from grub before the update, though. In short, nothing on the grub menu works. Any solutions? The grub menu changed appearance - before it was on a purple background, small letters; now it's white-on-black, big letters, looking very basic. The original installation was from a USB drive, and I hadn't heard about wubi until I started googling this problem, so I doubt there's any connection. I really hope there are some grub-savvy people out there :) EDIT: OK, so I made a bootable USB and am running from that right now. When I ran the bootinfoscript, it warned me that "gawk" could not be found, using "busybox awk" instead. This may lead to unreliable results. Just so you know. The contents of RESULTS.txt are: Boot Info Script 0.61 [1 April 2012] ============================= Boot Info Summary: =============================== => Windows is installed in the MBR of /dev/sda. => Grub2 (v1.99) is installed in the MBR of /dev/sdb and looks at sector 1 of the same hard drive for core.img. core.img is at this location and looks for (,msdos3)/boot/grub on this drive. => Syslinux MBR (4.04 and higher) is installed in the MBR of /dev/sdc. sda1: __________________________________________ File system: vfat Boot sector type: Dell Utility: FAT16 Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files: /DELLBIO.BIN /DELLRMK.BIN /COMMAND.COM sda2: __________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files: sda3: __________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: No errors found in the Boot Parameter Block. Operating System: Windows 7 Boot files: /bootmgr /Boot/BCD /Windows/System32/winload.exe sda4: __________________________________________ File system: Extended Partition Boot sector type: - Boot sector info: sda5: __________________________________________ File system: vfat Boot sector type: Windows 7: FAT32 Boot sector info: No errors found in the Boot Parameter Block. Operating System: Windows XP Boot files: /boot.ini /bootmgr /ntldr /NTDETECT.COM sdb1: __________________________________________ File system: ntfs Boot sector type: Windows XP: NTFS Boot sector info: No errors found in the Boot Parameter Block. Operating System: Boot files: sdb2: __________________________________________ File system: swap Boot sector type: - Boot sector info: sdb3: __________________________________________ File system: ext4 Boot sector type: Grub2 (v1.99) Boot sector info: Grub2 (v1.99) is installed in the boot sector of sdb3 and looks at sector 375893584 of the same hard drive for core.img.
core.img is at this location and looks for (,msdos3)/boot/grub on this drive. Operating System: Ubuntu 12.04 LTS Boot files: /boot/grub/grub.cfg /etc/fstab /boot/grub/core.img sdb4: __________________________________________ File system: ext4 Boot sector type: - Boot sector info: Operating System: Boot files: sdc1: __________________________________________ File system: ntfs Boot sector type: SYSLINUX 4.06 4.06-pre1 Boot sector info: Syslinux looks at sector 4649656 of /dev/sdc1 for its second stage. SYSLINUX is installed in the directory. The integrity check of the ADV area failed. No errors found in the Boot Parameter Block. Operating System: Boot files: /boot/grub/grub.cfg /syslinux/syslinux.cfg /ldlinux.sys ============================ Drive/Partition Info: ============================= Drive: sda _______________________________________ Disk /dev/sda: 250.1 GB, 250059350016 bytes 255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start Sector End Sector # of Sectors Id System /dev/sda1 63 240,974 240,912 de Dell Utility /dev/sda2 241,664 21,213,183 20,971,520 7 NTFS / exFAT / HPFS /dev/sda3 * 21,213,184 483,151,863 461,938,680 7 NTFS / exFAT / HPFS /dev/sda4 483,151,872 488,394,751 5,242,880 f W95 Extended (LBA) /dev/sda5 483,153,920 488,394,751 5,240,832 dd Dell Media Direct Drive: sdb _______________________________________ Disk /dev/sdb: 250.1 GB, 250059350016 bytes 255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start Sector End Sector # of Sectors Id System /dev/sdb1 63 345,886,749 345,886,687 7 NTFS / exFAT / HPFS /dev/sdb2 345,888,768 361,510,911 15,622,144 82 Linux swap / Solaris /dev/sdb3 * 361,510,912 390,807,786 29,296,875 83 Linux /dev/sdb4 390,809,600 488,394,751 97,585,152 83 Linux Drive: sdc _______________________________________ Disk /dev/sdc: 8015 MB, 8015282176 bytes 255 heads, 63 sectors/track, 974 cylinders, total 15654848 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start Sector End Sector # of Sectors Id System /dev/sdc1 * 2,048 15,652,863 15,650,816 7 NTFS / exFAT / HPFS "blkid" output: ____________________________________ Device UUID TYPE LABEL /dev/loop0 squashfs /dev/sda1 07D8-0411 vfat DellUtility /dev/sda2 E2765BBC765B9061 ntfs RECOVERY /dev/sda3 98DC5E54DC5E2D2E ntfs OS /dev/sda5 7061-9DF5 vfat MEDIADIRECT /dev/sdb1 01CBBB4C3374C3B0 ntfs Data1 /dev/sdb2 1ca45f3f-f888-43d1-8137-02699597189a swap /dev/sdb3 6bc1b599-ad4b-403c-a155-a5bc81211f5e ext4 /dev/sdb4 58e2b257-8608-4b11-b20b-dc162bb80b62 ext4 /dev/sdc1 0C02B64402B63316 ntfs PENDRIVE ================================ Mount points: ================================= Device Mount_Point Type Options /dev/loop0 /rofs squashfs (ro,noatime) /dev/sdb4 /media/58e2b257-8608-4b11-b20b-dc162bb80b62 ext4 (rw,nosuid,nodev,uhelper=udisks) /dev/sdc1 /cdrom fuseblk (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other,blksize=4096) ================================ sda5/boot.ini: ================================ [boot loader] timeout=0 default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS [operating systems] multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Embedded" /fastdetect /KERNEL=NTOSBOOT.EXE /maxmem=1024 =========================== sdb3/boot/grub/grub.cfg: 
=========================== -------------------------------------------------------------------------------- # # DO NOT EDIT THIS FILE # # It is automatically generated by grub-mkconfig using templates # from /etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### if [ -s $prefix/grubenv ]; then set have_grubenv=true load_env fi set default="0" if [ "${prev_saved_entry}" ]; then set saved_entry="${prev_saved_entry}" save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z "${boot_once}" ]; then saved_entry="${chosen}" save_env saved_entry fi } function recordfail { set recordfail=1 if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi } function load_video { insmod vbe insmod vga insmod video_bochs insmod video_cirrus } insmod part_msdos insmod ext2 set root='(hd1,msdos3)' search --no-floppy --fs-uuid --set=root 6bc1b599-ad4b-403c-a155-a5bc81211f5e if loadfont /usr/share/grub/unicode.pf2 ; then set gfxmode=auto load_video insmod gfxterm insmod part_msdos insmod ext2 set root='(hd1,msdos3)' search --no-floppy --fs-uuid --set=root 6bc1b599-ad4b-403c-a155-a5bc81211f5e set locale_dir=($root)/boot/grub/locale set lang=en_US insmod gettext fi terminal_output gfxterm if [ "${recordfail}" = 1 ]; then set timeout=-1 else set timeout=10 fi ### END /etc/grub.d/00_header ### ### BEGIN /etc/grub.d/05_debian_theme ### set menu_color_normal=white/black set menu_color_highlight=black/light-gray if background_color 44,0,30; then clear fi ### END /etc/grub.d/05_debian_theme ### ### BEGIN /etc/grub.d/10_linux ### function gfxmode { set gfxpayload="$1" if [ "$1" = "keep" ]; then set vt_handoff=vt.handoff=7 else set vt_handoff= fi } if [ ${recordfail} != 1 ]; then if [ -e ${prefix}/gfxblacklist.txt ]; then if hwmatch ${prefix}/gfxblacklist.txt 3; then if [ ${match} = 0 ]; then set linux_gfx_mode=keep else set linux_gfx_mode=text fi else set linux_gfx_mode=text fi else set linux_gfx_mode=keep fi else set linux_gfx_mode=text fi export linux_gfx_mode if [ "$linux_gfx_mode" != "text" ]; then load_video; fi menuentry 'Ubuntu, with Linux 3.2.0-24-generic' --class ubuntu --class gnu-linux --class gnu --class os { recordfail gfxmode $linux_gfx_mode insmod gzio insmod part_msdos insmod ext2 set root='(hd1,msdos3)' search --no-floppy --fs-uuid --set=root 6bc1b599-ad4b-403c-a155-a5bc81211f5e linux /boot/vmlinuz-3.2.0-24-generic root=UUID=6bc1b599-ad4b-403c-a155-a5bc81211f5e ro quiet splash $vt_handoff initrd /boot/initrd.img-3.2.0-24-generic } menuentry 'Ubuntu, with Linux 3.2.0-24-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod gzio insmod part_msdos insmod ext2 set root='(hd1,msdos3)' search --no-floppy --fs-uuid --set=root 6bc1b599-ad4b-403c-a155-a5bc81211f5e echo 'Loading Linux 3.2.0-24-generic ...' linux /boot/vmlinuz-3.2.0-24-generic root=UUID=6bc1b599-ad4b-403c-a155-a5bc81211f5e ro recovery nomodeset echo 'Loading initial ramdisk ...' 
initrd /boot/initrd.img-3.2.0-24-generic } submenu "Previous Linux versions" { menuentry 'Ubuntu, with Linux 3.0.0-19-generic' --class ubuntu --class gnu-linux --class gnu --class os { recordfail gfxmode $linux_gfx_mode insmod gzio insmod part_msdos insmod ext2 set root='(hd1,msdos3)' search --no-floppy --fs-uuid --set=root 6bc1b599-ad4b-403c-a155-a5bc81211f5e linux /boot/vmlinuz-3.0.0-19-generic root=UUID=6bc1b599-ad4b-403c-a155-a5bc81211f5e ro quiet splash $vt_handoff initrd /boot/initrd.img-3.0.0-19-generic } menuentry 'Ubuntu, with Linux 3.0.0-19-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod gzio insmod part_msdos insmod ext2 set root='(hd1,msdos3)' search --no-floppy --fs-uuid --set=root 6bc1b599-ad4b-403c-a155-a5bc81211f5e echo 'Loading Linux 3.0.0-19-generic ...' linux /boot/vmlinuz-3.0.0-19-generic root=UUID=6bc1b599-ad4b-403c-a155-a5bc81211f5e ro recovery nomodeset echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-3.0.0-19-generic } } ### END /etc/grub.d/10_linux ### ### BEGIN /etc/grub.d/20_linux_xen ### ### END /etc/grub.d/20_linux_xen ### ### BEGIN /etc/grub.d/20_memtest86+ ### menuentry "Memory test (memtest86+)" { insmod part_msdos insmod ext2 set root='(hd1,msdos3)' search --no-floppy --fs-uuid --set=root 6bc1b599-ad4b-403c-a155-a5bc81211f5e linux16 /boot/memtest86+.bin } menuentry "Memory test (memtest86+, serial console 115200)" { insmod part_msdos insmod ext2 set root='(hd1,msdos3)' search --no-floppy --fs-uuid --set=root 6bc1b599-ad4b-403c-a155-a5bc81211f5e linux16 /boot/memtest86+.bin console=ttyS0,115200n8 } ### END /etc/grub.d/20_memtest86+ ### ### BEGIN /etc/grub.d/30_os-prober ### menuentry "Windows 7 (loader) (on /dev/sda3)" --class windows --class os { insmod part_msdos insmod ntfs set root='(hd0,msdos3)' search --no-floppy --fs-uuid --set=root 98DC5E54DC5E2D2E chainloader +1 } menuentry "Microsoft Windows XP Embedded (on /dev/sda5)" --class windows --class os { insmod part_msdos insmod fat set root='(hd0,msdos5)' search --no-floppy --fs-uuid --set=root 7061-9DF5 drivemap -s (hd0) ${root} chainloader +1 } ### END /etc/grub.d/30_os-prober ### ### BEGIN /etc/grub.d/40_custom ### # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. ### END /etc/grub.d/40_custom ### ### BEGIN /etc/grub.d/41_custom ### if [ -f $prefix/custom.cfg ]; then source $prefix/custom.cfg; fi ### END /etc/grub.d/41_custom ### =============================== sdb3/etc/fstab: ================================ # /etc/fstab: static file system information. # # Use 'blkid' to print the universally unique identifier for a # device; this may be used with UUID= as a more robust way to name devices # that works even if disks are added and removed. See fstab(5). 
# # <file system> <mount point> <type> <options> <dump> <pass> proc /proc proc nodev,noexec,nosuid 0 0 # / was on /dev/sdb3 during installation UUID=6bc1b599-ad4b-403c-a155-a5bc81211f5e / ext4 errors=remount-ro 0 1 # /home was on /dev/sdb4 during installation UUID=58e2b257-8608-4b11-b20b-dc162bb80b62 /home ext4 defaults,user_xattr 0 2 # swap was on /dev/sdb2 during installation UUID=1ca45f3f-f888-43d1-8137-02699597189a none swap sw 0 0 =================== sdb3: Location of files loaded by Grub: ==================== GiB - GB File Fragment(s) = boot/grub/core.img 1 = boot/grub/grub.cfg 1 = boot/initrd.img-3.0.0-19-generic 2 = boot/initrd.img-3.2.0-24-generic 2 = boot/vmlinuz-3.0.0-19-generic 2 = boot/vmlinuz-3.2.0-24-generic 1 = vmlinuz 1 = vmlinuz.old 2 =========================== sdc1/boot/grub/grub.cfg: =========================== if loadfont /boot/grub/font.pf2 ; then set gfxmode=auto insmod efi_gop insmod efi_uga insmod gfxterm terminal_output gfxterm fi set menu_color_normal=white/black set menu_color_highlight=black/light-gray menuentry "Try Ubuntu without installing" { set gfxpayload=keep linux /casper/vmlinuz file=/cdrom/preseed/ubuntu.seed boot=casper quiet splash -- initrd /casper/initrd.lz } menuentry "Install Ubuntu" { set gfxpayload=keep linux /casper/vmlinuz file=/cdrom/preseed/ubuntu.seed boot=casper only-ubiquity quiet splash -- initrd /casper/initrd.lz } menuentry "Check disc for defects" { set gfxpayload=keep linux /casper/vmlinuz boot=casper integrity-check quiet splash -- initrd /casper/initrd.lz } ========================= sdc1/syslinux/syslinux.cfg: ========================== # D-I config version 2.0 include menu.cfg default vesamenu.c32 prompt 0 timeout 50 # If you would like to use the new menu and be presented with the option to install or run from USB at startup, remove # from the following line. This line was commented out (by request of many) to allow the old menu to be presented and to enable booting straight into the Live Environment! # ui gfxboot bootlogo =================== sdc1: Location of files loaded by Grub: ==================== GiB - GB File Fragment(s) ?? = ?? boot/grub/grub.cfg 0 ================= sdc1: Location of files loaded by Syslinux: ================== GiB - GB File Fragment(s) ?? = ?? ldlinux.sys 1 ?? = ?? syslinux/chain.c32 1 ?? = ?? syslinux/gfxboot.c32 1 ?? = ?? syslinux/syslinux.cfg 0 ?? = ?? syslinux/vesamenu.c32 1 ============== sdc1: Version of COM32(R) files used by Syslinux: =============== syslinux/chain.c32 : COM32R module (v4.xx) syslinux/gfxboot.c32 : COM32R module (v4.xx) syslinux/vesamenu.c32 : COM32R module (v4.xx) =============================== StdErr Messages: =============================== xz: (stdin): Compressed data is corrupt xz: (stdin): Compressed data is corrupt awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in ./bootinfoscript: line 1646: [: 2.73495e+09: integer expression expected
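    One common repair path, sketched here with hedges: the boot info above says the Ubuntu root is /dev/sdb3 and that GRUB owns the MBR of /dev/sdb, so reinstalling GRUB from the live session usually re-syncs core.img with the partition it can no longer find. The device names below are assumptions taken from RESULTS.txt; adjust them if your layout differs:

    sudo mount /dev/sdb3 /mnt
    for d in /dev /dev/pts /proc /sys; do sudo mount --bind $d /mnt$d; done
    sudo chroot /mnt grub-install /dev/sdb
    sudo chroot /mnt update-grub
    for d in /sys /proc /dev/pts /dev; do sudo umount /mnt$d; done
    sudo umount /mnt

    If the BIOS actually boots from the Windows disk (sda), pointing grub-install at /dev/sda instead would put GRUB first in the chain; that choice depends on the BIOS boot order, which the script output cannot show.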

    Read the article

  • Package Dependencies Error in almost every install

    - by Betaxpression
    New to Ubuntu. In the Other Software sources I have "Debian 4.0 etch" (officially supported), "non-us.debian.org/", etc., and "ppa.launchpad.net", and installing applications has stopped working. I think I first came across this problem after installing Blender 2.58. When using Update Manager it prompts for a partial upgrade, and almost every application I try to install shows the same Package Dependencies Error or a missing GPG public key; I tried fixing them but had no luck. Output of sudo apt-get update && sudo apt-get upgrade (links disabled: http:// is written as http:/, since a new user can't post this many hyperlinks):

    Ign http:/non-us.debian.org stable/non-US InRelease
    Ign http:/non-us.debian.org stable/non-US Release.gpg
    Ign http:/non-us.debian.org stable/non-US Release
    Ign http:/non-us.debian.org stable/non-US/contrib TranslationIndex
    Ign http:/non-us.debian.org stable/non-US/main TranslationIndex
    Ign http:/non-us.debian.org stable/non-US/non-free TranslationIndex
    Err http:/non-us.debian.org stable/non-US/main Sources 503 Service Unavailable
    Err http:/non-us.debian.org stable/non-US/contrib Sources 503 Service Unavailable
    Err http:/non-us.debian.org stable/non-US/non-free Sources 503 Service Unavailable
    Err http:/non-us.debian.org stable/non-US/main amd64 Packages 503 Service Unavailable
    Err http:/non-us.debian.org stable/non-US/contrib amd64 Packages 503 Service Unavailable
    Err http:/non-us.debian.org stable/non-US/non-free amd64 Packages 503 Service Unavailable
    Ign http:/non-us.debian.org stable/non-US/contrib Translation-en_IN
    Ign http:/non-us.debian.org stable/non-US/contrib Translation-en
    Ign http:/non-us.debian.org stable/non-US/main Translation-en_IN
    Ign http:/non-us.debian.org stable/non-US/main Translation-en
    Ign http:/non-us.debian.org stable/non-US/non-free Translation-en_IN
    Ign http:/non-us.debian.org stable/non-US/non-free Translation-en
    Ign http:/archive.ubuntu.com natty InRelease
    Ign http:/archive.canonical.com natty InRelease
    Ign http:/extras.ubuntu.com natty InRelease
    Ign http:/http.us.debian.org stable InRelease
    Ign http:/ftp.us.debian.org etch InRelease
    Ign http:/archive.ubuntu.com natty-updates InRelease
    Hit http:/archive.canonical.com natty Release.gpg
    Get:1 http:/extras.ubuntu.com natty Release.gpg [72 B]
    Ign http:/ppa.launchpad.net natty InRelease
    Get:2 http:/http.us.debian.org stable Release.gpg [1,672 B]
    Ign http:/linux.dropbox.com natty InRelease
    Ign http:/ftp.us.debian.org etch Release.gpg
    Ign http:/archive.ubuntu.com natty-security InRelease
    Hit http:/archive.canonical.com natty Release
    Hit http:/extras.ubuntu.com natty Release
    Ign http:/ppa.launchpad.net natty InRelease
    Get:3 http:/linux.dropbox.com natty Release.gpg [489 B]
    Ign http:/ftp.us.debian.org etch Release
    Ign http:/dl.google.com stable InRelease
    Get:4 http:/archive.ubuntu.com natty Release.gpg [198 B]
    Ign http:/ppa.launchpad.net natty InRelease
    Hit http:/archive.canonical.com natty/partner Sources
    Hit http:/extras.ubuntu.com natty/main Sources
    Get:5 http:/linux.dropbox.com natty Release [2,599 B]
    Get:6 http:/archive.ubuntu.com natty-updates Release.gpg [198 B]
    Ign http:/ppa.launchpad.net natty InRelease
    Hit http:/archive.canonical.com natty/partner amd64 Packages
    Hit http:/extras.ubuntu.com natty/main amd64 Packages
    Get:7 http:/linux.dropbox.com natty/main amd64 Packages [784 B]
    Get:8 http:/archive.ubuntu.com natty-security Release.gpg [198 B]
    Ign http:/ppa.launchpad.net natty InRelease
    Ign http:/archive.canonical.com natty/partner TranslationIndex
    Ign http:/extras.ubuntu.com natty/main TranslationIndex
Get:9 http:/http.us.debian.org stable Release [104 kB] Ign http:/linux.dropbox.com natty/main TranslationIndex Hit http:/archive.ubuntu.com natty Release Ign http:/ppa.launchpad.net natty InRelease Ign http:/http.us.debian.org stable Release Hit http:/archive.ubuntu.com natty-updates Release Get:10 http:/ppa.launchpad.net natty InRelease [316 B] Ign http:/ppa.launchpad.net natty InRelease Hit http:/archive.ubuntu.com natty-security Release Get:11 http:/ppa.launchpad.net natty InRelease [316 B] Ign http:/ppa.launchpad.net natty InRelease Hit http:/archive.ubuntu.com natty/restricted Sources Get:12 http:/ppa.launchpad.net natty Release.gpg [316 B] Ign http:/http.us.debian.org stable/main Sources/DiffIndex Get:13 http:/ppa.launchpad.net natty Release.gpg [316 B] Hit http:/archive.ubuntu.com natty/main Sources Ign http:/ftp.us.debian.org etch/contrib TranslationIndex Ign http:/http.us.debian.org stable/contrib Sources/DiffIndex Get:14 http:/ppa.launchpad.net natty Release.gpg [1,502 B] Ign http:/http.us.debian.org stable/non-free Sources/DiffIndex Ign http:/ftp.us.debian.org etch/main TranslationIndex Get:15 http:/ppa.launchpad.net natty Release.gpg [1,928 B] Ign http:/http.us.debian.org stable/main amd64 Packages/DiffIndex Ign http:/ftp.us.debian.org etch/non-free TranslationIndex Ign http:/ppa.launchpad.net natty Release.gpg Hit http:/http.us.debian.org stable/contrib amd64 Packages/DiffIndex W: GPG error: http:/http.us.debian.org stable Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY AED4B06F473041FA NO_PUBKEY 64481591B98321F9 W: GPG error: http:/ppa.launchpad.net natty InRelease: File /var/lib/apt/lists/partial/ppa.launchpad.net_sunab_kdenlive-release_ubuntu_dists_natty_InRelease doesn't start with a clearsigned message W: GPG error: http:/ppa.launchpad.net natty InRelease: File /var/lib/apt/lists/partial/ppa.launchpad.net_ubuntu-wine_ppa_ubuntu_dists_natty_InRelease doesn't start with a clearsigned message E: Could not open file /var/lib/apt/lists/http.us.debian.org_debian_dists_stable_contrib_binary-amd64_Packages.IndexDiff - open (2: No such file or directory) output to: sudo cat /etc/apt/sources.list # deb cdrom:[Ubuntu 11.04 _Natty Narwhal_ - Release amd64 (20110427.1)]/ natty main restricted # See http:/help.ubuntu.com/community/UpgradeNotes for how to upgrade to # newer versions of the distribution. deb http:/archive.ubuntu.com/ubuntu natty main restricted deb-src http:/archive.ubuntu.com/ubuntu natty restricted main multiverse universe #Added by software-properties ## Major bug fix updates produced after the final release of the ## distribution. deb http:/archive.ubuntu.com/ubuntu natty-updates main restricted deb-src http:/archive.ubuntu.com/ubuntu natty-updates restricted main multiverse universe #Added by software-properties ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team. Also, please note that software in universe WILL NOT receive any ## review or updates from the Ubuntu security team. deb http:/archive.ubuntu.com/ubuntu natty universe deb http:/archive.ubuntu.com/ubuntu natty-updates universe ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team, and may not be under a free licence. Please satisfy yourself as to ## your rights to use the software. Also, please note that software in ## multiverse WILL NOT receive any review or updates from the Ubuntu ## security team. 
deb http:/archive.ubuntu.com/ubuntu natty multiverse deb http:/archive.ubuntu.com/ubuntu natty-updates multiverse ## Uncomment the following two lines to add software from the 'backports' ## repository. ## N.B. software from this repository may not have been tested as ## extensively as that contained in the main release, although it includes ## newer versions of some applications which may provide useful features. ## Also, please note that software in backports WILL NOT receive any review ## or updates from the Ubuntu security team. # deb http:/us.archive.ubuntu.com/ubuntu/ natty-backports main restricted universe multiverse # deb-src http:/us.archive.ubuntu.com/ubuntu/ natty-backports main restricted universe multiverse deb http:/archive.ubuntu.com/ubuntu natty-security main restricted deb-src http:/archive.ubuntu.com/ubuntu natty-security restricted main multiverse universe #Added by software-properties deb http:/archive.ubuntu.com/ubuntu natty-security universe deb http:/archive.ubuntu.com/ubuntu natty-security multiverse ## Uncomment the following two lines to add software from Canonical's ## 'partner' repository. ## This software is not part of Ubuntu, but is offered by Canonical and the ## respective vendors as a service to Ubuntu users. deb http:/archive.canonical.com/ubuntu natty partner deb-src http:/archive.canonical.com/ubuntu natty partner ## This software is not part of Ubuntu, but is offered by third-party ## developers who want to ship their latest software. deb http:/extras.ubuntu.com/ubuntu natty main deb-src http:/extras.ubuntu.com/ubuntu natty main deb http:/ftp.us.debian.org/debian/ etch main contrib non-free deb-src http:/ftp.us.debian.org/debian/ etch main contrib non-free deb http:/http.us.debian.org/debian stable main contrib non-free deb-src http:/http.us.debian.org/debian stable main contrib non-free deb http:/non-us.debian.org/debian-non-US stable/non-US main contrib non-free deb-src http:/non-us.debian.org/debian-non-US stable/non-US main contrib non-free Thanks But after removing Debian repositories still getting this error: W:GPG error: http://ppa.launchpad.net natty Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 9BDB3D89CE49EC21, W:GPG error: http://ppa.launchpad.net natty Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 80E7349A06ED541C, W:GPG error: http://ppa.launchpad.net natty Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 8C851674F96FD737, W:GPG error: http://ppa.launchpad.net natty Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 94E58C34A8670E8C, E:Unable to parse package file /var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_natty-updates_multiverse_i18n_Index (1) I actually tried this before, but i am always getting this error --Executing: gpg --ignore-time-conflict --no-options --no-default-keyring --secret-keyring /etc/apt/secring.gpg --trustdb-name /etc/apt/trustdb.gpg --keyring /etc/apt/trusted.gpg --primary-keyring /etc/apt/trusted.gpg --keyserver keyserver.ubuntu.com --recv-keys 8C851674F96FD737 gpg: requesting key F96FD737 from hkp server keyserver.ubuntu.com ?: keyserver.ubuntu.com: Connection refused gpgkeys: HTTP fetch error 7: couldn't connect: Connection refused gpg: no valid OpenPGP data found. gpg: Total number processed: 0
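    A hedged sketch of the usual cleanup for this situation: the Debian etch and non-US sources are long dead and should not be mixed into an Ubuntu 11.04 install, and each remaining NO_PUBKEY error is fixed by importing the listed key. The key id below is one of those quoted in the question; the port-80 keyserver form is a common workaround when the default hkp port is firewalled:

    sudo sed -i '/debian\.org/s/^/# /' /etc/apt/sources.list    # comment out every Debian line
    sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 8C851674F96FD737
    sudo rm -rf /var/lib/apt/lists/*                            # discard the half-broken index files
    sudo apt-get update

    Repeat the apt-key line for each NO_PUBKEY id that apt-get update still reports.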

    Read the article

  • Multi-tenant ASP.NET MVC - Views

    - by zowens
    Part I – Introduction
    Part II – Foundation
    Part III – Controllers

    So far we have covered the basic premise of tenants and how they will be delegated. Now comes a big issue with multi-tenancy: the views. In some applications, you will not have to override views for each tenant. However, one of my requirements is to add extra views (and controller actions) along with overriding views from the core structure. This presents a bit of a problem in locating views for each tenant request. I have chosen quite an opinionated approach at present, but I will come back to the "views" issue in a later post.

    What's the deal?

    The path I've chosen is to use precompiled Spark views. I really love Spark View Engine and was planning on using it in my project anyway. However, I ran across a really neat aspect of the source when I was having a look under the hood. There's an easy way to hook in embedded views from your project. There are solutions that provide this, but they implement a special Virtual Path Provider. While I think this is a great solution, I would rather just have Spark take care of the view resolution. The magic actually happens during the compilation of the views into a bin-deployable DLL. After the views are compiled, they are simply pulled out of the views DLL. Each tenant has its own views DLL that, as a convention, just has ".Views" appended after the assembly name.

    The list of reasons for this approach is quite long. The primary motivation is performance. I've had quite a few performance issues in the past and I would like to increase my application's performance in any way that I can. My customized build of Spark removes insignificant whitespace from the HTML output, so I can save some bandwidth and load time without having to deal with whitespace removal at runtime.

    How to set up Tenants for the Host

    In the source, I've provided a single tenant as a sample (Sample1). This will serve as a template for subsequent tenants in your application. The first step is to add a "PostBuildStep" installer into the project. I've defined one in the source that will eventually change as we focus more on the construction of dependency containers. The next step is to tell the project to run the installer and copy the DLL output to a folder that the host will pick up as a tenant. Here's the code that will achieve it (this belongs in the Post-build event command line field in the Build Events tab of the project settings):

    %systemroot%\Microsoft.NET\Framework\v4.0.30319\installutil "$(TargetPath)"
    copy /Y "$(TargetDir)$(TargetName)*.dll" "$(SolutionDir)Web\Tenants\"
    copy /Y "$(TargetDir)$(TargetName)*.pdb" "$(SolutionDir)Web\Tenants\"

    The DLLs with a name starting with the target assembly name will be copied to the "Tenants" folder in the web project. This means something like MultiTenancy.Tenants.Sample1.dll and MultiTenancy.Tenants.Sample1.Views.dll will both be copied along with the debug symbols. This is probably the simplest way to go about this, but it is a tad inflexible. For example, what if you have dependencies? The preferred method would probably be to use ILMerge to merge your dependencies with your target DLL. This would have to be added in the build events. Another way to achieve that would be to simply bypass Visual Studio events and use MSBuild.

    I also got a question about how I was setting up the controller factory.
    Here are the basics of how I'm setting up tenants inside the host (Global.asax):

    protected void Application_Start()
    {
        RegisterRoutes(RouteTable.Routes);

        // create a container just to pull in tenants
        var topContainer = new Container();
        topContainer.Configure(config =>
        {
            config.Scan(scanner =>
            {
                scanner.AssembliesFromPath(Path.Combine(Server.MapPath("~/"), "Tenants"));
                scanner.AddAllTypesOf<IApplicationTenant>();
            });
        });

        // create selectors
        var tenantSelector = new DefaultTenantSelector(topContainer.GetAllInstances<IApplicationTenant>());
        var containerSelector = new TenantContainerResolver(tenantSelector);

        // clear view engines, we don't want anything other than spark
        ViewEngines.Engines.Clear();

        // set view engine
        ViewEngines.Engines.Add(new TenantViewEngine(tenantSelector));

        // set controller factory
        ControllerBuilder.Current.SetControllerFactory(new ContainerControllerFactory(containerSelector));
    }

    The code to set up the tenants isn't actually that hard. I'm utilizing assembly scanners in StructureMap as a simple way to pull in DLLs that are not in the AppDomain. Remember that the tenants have a dependency on the host, so a tenant cannot simply be referenced by the host because of circular dependencies.

    Tenant View Engine

    TenantViewEngine is a simple delegator to the tenant's specified view engine. You might have noticed that a tenant has to define a view engine:

    public interface IApplicationTenant
    {
        ....
        IViewEngine ViewEngine { get; }
    }

    The trick comes in specifying the view engine on the tenant side. Here's some of the code that will pull views from the DLL:

    protected virtual IViewEngine DetermineViewEngine()
    {
        var factory = new SparkViewFactory();
        var file = GetType().Assembly.CodeBase.Without("file:///").Replace(".dll", ".Views.dll").Replace('/', '\\');
        var assembly = Assembly.LoadFile(file);
        factory.Engine.LoadBatchCompilation(assembly);
        return factory;
    }

    This code resides in an abstract Tenant where the fields are set up in the constructor. This method (inside the abstract class) will load the Views assembly and load the compilation into Spark's "Descriptors" that will be used to determine views. There is some trickery in determining the file location... but it works just fine.

    Up Next

    There are just a few big things left, such as having StructureMap configure controllers by convention instead of specifying types directly during container construction, and content resolution. I will also try to find a way to use the Web Forms View Engine in the multi-tenant way we achieved with the Spark View Engine, without using a virtual path provider. I will probably not use the Web Forms View Engine personally, but I'm sure some people would prefer WebForms because of the maturity of the engine. As always, I love to take questions by email or on twitter. Suggestions are always welcome as well! (Oh, and here's another link to the source code.)

    Read the article

  • Create PDF document using iTextSharp in ASP.Net 4.0 and MemoryMappedFile

    - by sreejukg
    In this article I am going to demonstrate how ASP.NET developers can programmatically create PDF documents using iTextSharp. iTextSharp is a software component that allows developers to programmatically create or manipulate PDF documents. This article also discusses creating an in-memory file and reading/writing data from/to it, utilizing the new MemoryMappedFile feature.

    I have a database of users, and I need to send a notice to all my users as a PDF document. (The mail-sending part of it is not covered in this article.) The PDF document will contain the company letterhead, to make it more official. I have a list of users stored in a database table named "tblusers". For each user I need to send a customized message addressed to them personally. The database structure for the users is given below.

    id | Title | Full Name
    1  | Mr.   | Sreeju Nair K. G.
    2  | Dr.   | Alberto Mathews
    3  | Prof. | Venketachalam

    Now I am going to generate the pdf document that contains some message for the user, in the following format:

    Dear <Title> <FullName>,
    The message for the user.
    Regards,
    Administrator

    Also, I have an image, bg.jpg, that contains the background for the generated document.

    I have created a .NET 4.0 empty web application project named "iTextSharpSample". The first thing I need to do is download the iTextSharp dll from SourceForge. You can find the URL for the download here: http://sourceforge.net/projects/itextsharp/files/

    I extracted the Zip file and added itextsharp.dll as a reference to my project. I also added a web form named default.aspx to my project. After doing all this, the solution explorer has the following view.

    In the default.aspx page, I inserted a grid view and associated it with a SQL data source control that binds data from tblusers. I added a button column to the grid view with the text "generate pdf". The output of the page in the browser is as follows.

    Now I am going to create a pdf document when the user clicks on the Generate PDF button. As I mentioned before, I am going to work with the file in memory; I am not going to create a file on disk. I added an event handler for the button by specifying the onrowcommand attribute. My gridview source looks like:

    <asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False"
        DataSourceID="SqlDataSource1" Width="481px" CellPadding="4"
        ForeColor="#333333" GridLines="None" onrowcommand="Generate_PDF" >
        …………………………………………………………………………..
        …………………………………………………………………………..
    </asp:GridView>

    In the code behind, I wrote the corresponding event handler:

    protected void Generate_PDF(object sender, GridViewCommandEventArgs e)
    {
        // The button click event handler code.
        // I am going to explain the code for this section in the remaining part of the article
    }

    The Generate_PDF method is straightforward: it gets the title, full name and message into some variables, then creates the pdf using these variables. The code for getting data from the grid view is as follows:

    // get the row index stored in the CommandArgument property
    int index = Convert.ToInt32(e.CommandArgument);
    // get the GridViewRow where the command is raised
    GridViewRow selectedRow = ((GridView)e.CommandSource).Rows[index];
    string title = selectedRow.Cells[1].Text;
    string fullname = selectedRow.Cells[2].Text;
    string msg = @"There are some changes in the company policy, due to this matter you need to submit your latest address to us. Please update your contact details / personal details by visiting the member area of the website. ...................................";

    Since I don't want to save the file on disk, I am going to use a new feature introduced in .NET Framework 4 called Memory-Mapped Files. Using a memory-mapped file, you can create non-persisted files that live in memory and are not associated with a file on disk. So I am going to create a temporary file in memory, add the pdf content to it, then write it to the output stream. To read more about MemoryMappedFile, read this msdn article: http://msdn.microsoft.com/en-us/library/dd997372.aspx

    The portion of code below uses a MemoryMappedFile object to create a test pdf document in memory and performs read/write operations on the file. The CreateViewStream() method will give you a stream that can be used to read or write data to/from the file. The code is very straightforward, and I have included comments so that you can understand it.

    using (MemoryMappedFile mmf = MemoryMappedFile.CreateNew("test1.pdf", 1000000))
    {
        // Create a new pdf document object using the constructor. The parameters passed are
        // document size, left margin, right margin, top margin and bottom margin.
        iTextSharp.text.Document d = new iTextSharp.text.Document(PageSize.A4, 72, 72, 172, 72);

        // get an instance of the memory mapped file as a stream object so that user can write to this
        using (MemoryMappedViewStream stream = mmf.CreateViewStream())
        {
            // associate the document to the stream.
            PdfWriter.GetInstance(d, stream);

            /* add an image as bg */
            iTextSharp.text.Image jpg = iTextSharp.text.Image.GetInstance(Server.MapPath("Image/bg.png"));
            jpg.Alignment = iTextSharp.text.Image.UNDERLYING;
            jpg.SetAbsolutePosition(0, 0);
            // this is the size of my background letterhead image. the size is in points. this will fit an A4 size document.
            jpg.ScaleToFit(595, 842);

            d.Open();
            d.Add(jpg);
            d.Add(new Paragraph(String.Format("Dear {0} {1},", title, fullname)));
            d.Add(new Paragraph("\n"));
            d.Add(new Paragraph(msg));
            d.Add(new Paragraph("\n"));
            d.Add(new Paragraph(String.Format("Administrator")));
            d.Close();
        }

        // read the file data
        byte[] b;
        using (MemoryMappedViewStream stream = mmf.CreateViewStream())
        {
            BinaryReader rdr = new BinaryReader(stream);
            b = new byte[mmf.CreateViewStream().Length];
            rdr.Read(b, 0, (int)mmf.CreateViewStream().Length);
        }

        Response.Clear();
        Response.ContentType = "Application/pdf";
        Response.BinaryWrite(b);
        Response.End();
    }

    Press Ctrl+F5 to run the application. First I get the user list. Click on the generate pdf icon. The created document looks as follows.

    Summary: Creating pdf documents using iTextSharp is easy, and you will find a lot of information on the web. Some useful resources and references are mentioned below:

    http://itextsharp.com/
    http://www.mikesdotnetting.com/Article/82/iTextSharp-Adding-Text-with-Chunks-Phrases-and-Paragraphs
    http://somewebguy.wordpress.com/2009/05/08/itextsharp-simplify-your-html-to-pdf-creation/

    Hope you enjoyed the article.

    Read the article

  • My nVidia GeForce 8600M GT card stopped working: "extension GLX missing on display 0:0"

    - by Wolter Hellmund
    So I turned on my computer and logged into my gnome-shell session to find that the shell is gone; there is instead a very thick panel with a vertically repeated pattern of the gradient the Unity panel displays. I ran some tests and found out that my graphics acceleration was failing. I have attached the output of some commands to help helpers help me.

    Some command outputs

    Video card: nVidia GeForce 8600M GT (note that it is not an Optimus card).

    Output of lspci | grep VGA:

    01:00.0 VGA compatible controller: NVIDIA Corporation G84 [GeForce 8600M GT] (rev a1)

    Output of glxinfo:

    Xlib: extension "GLX" missing on display ":0.0".
    Xlib: extension "GLX" missing on display ":0.0".
    Xlib: extension "GLX" missing on display ":0.0".
    Xlib: extension "GLX" missing on display ":0.0".
    Xlib: extension "GLX" missing on display ":0.0".
    Error: couldn't find RGB GLX visual or fbconfig
    Xlib: extension "GLX" missing on display ":0.0".
    Xlib: extension "GLX" missing on display ":0.0".
    Xlib: extension "GLX" missing on display ":0.0".
    Xlib: extension "GLX" missing on display ":0.0".
    Xlib: extension "GLX" missing on display ":0.0".

    Contents of /var/log/Xorg.0.log:

    NVIDIA: Failed to load the NVIDIA kernel module. Please check your
    NVIDIA: system's kernel log for additional error messages.
    UnloadModule: "nvidia"
    Unloading nvidia
    Failed to load module "nvidia" (module-specific error, 0)

    If I use the Additional Drivers system application, I see none of the drivers active, and when I try to activate any of them, I get an error telling me to check the jockey.log file. The contents of /var/log/jockey.log are:

    2012-09-20 23:11:32,690 DEBUG: NVidia(nvidia_173).enabled(): target_alt None current_alt /usr/lib/nvidia-current/ld.so.conf other target alt None other current alt /usr/lib/nvidia-current/alt_ld.so.conf
    2012-09-20 23:11:32,690 DEBUG: nvidia_173 is not the alternative in use
    2012-09-20 23:11:34,606 DEBUG: BroadcomWLHandler enabled(): kmod disabled, bcm43xx: blacklisted, b43: enabled, b43legacy: enabled
    2012-09-20 23:11:34,657 DEBUG: NVidia(nvidia_173_updates).enabled(): target_alt None current_alt /usr/lib/nvidia-current/ld.so.conf other target alt None other current alt /usr/lib/nvidia-current/alt_ld.so.conf
    2012-09-20 23:11:34,657 DEBUG: nvidia_173_updates is not the alternative in use
    2012-09-20 23:11:36,647 DEBUG: NVidia(nvidia_current).enabled(): target_alt /usr/lib/nvidia-current/ld.so.conf current_alt /usr/lib/nvidia-current/ld.so.conf other target alt /usr/lib/nvidia-current/alt_ld.so.conf other current alt /usr/lib/nvidia-current/alt_ld.so.conf
    2012-09-20 23:11:36,648 DEBUG: KMH enabled: False
    2012-09-20 23:11:38,642 DEBUG: NVidia(nvidia_current_updates).enabled(): target_alt None current_alt /usr/lib/nvidia-current/ld.so.conf other target alt None other current alt /usr/lib/nvidia-current/alt_ld.so.conf
    2012-09-20 23:11:38,642 DEBUG: nvidia_current_updates is not the alternative in use
    2012-09-20 23:11:40,546 DEBUG: NVidia(nvidia_173).enabled(): target_alt None current_alt /usr/lib/nvidia-current/ld.so.conf other target alt None other current alt /usr/lib/nvidia-current/alt_ld.so.conf
    2012-09-20 23:11:40,546 DEBUG: nvidia_173 is not the alternative in use
    2012-09-20 23:11:40,620 DEBUG: NVidia(nvidia_173).enabled(): target_alt None current_alt /usr/lib/nvidia-current/ld.so.conf other target alt None other current alt /usr/lib/nvidia-current/alt_ld.so.conf
    2012-09-20 23:11:40,620 DEBUG: nvidia_173 is not the alternative in use
    2012-09-20 23:11:40,629 DEBUG: BroadcomWLHandler enabled(): kmod disabled, bcm43xx:
blacklisted, b43: enabled, b43legacy: enabled 2012-09-20 23:11:40,698 DEBUG: NVidia(nvidia_173_updates).enabled(): target_alt None current_alt /usr/lib/nvidia-current/ld.so.conf other target alt None other current alt /usr/lib/nvidia-current/alt_ld.so.conf 2012-09-20 23:11:40,698 DEBUG: nvidia_173_updates is not the alternative in use 2012-09-20 23:11:40,759 DEBUG: NVidia(nvidia_current).enabled(): target_alt /usr/lib/nvidia-current/ld.so.conf current_alt /usr/lib/nvidia-current/ld.so.conf other target alt /usr/lib/nvidia-current/alt_ld.so.conf other current alt /usr/lib/nvidia-current/alt_ld.so.conf 2012-09-20 23:11:40,760 DEBUG: KMH enabled: False 2012-09-20 23:11:40,826 DEBUG: NVidia(nvidia_current_updates).enabled(): target_alt None current_alt /usr/lib/nvidia-current/ld.so.conf other target alt None other current alt /usr/lib/nvidia-current/alt_ld.so.conf 2012-09-20 23:11:40,826 DEBUG: nvidia_current_updates is not the alternative in use 2012-09-20 23:11:45,415 DEBUG: NVidia(nvidia_current).enabled(): target_alt /usr/lib/nvidia-current/ld.so.conf current_alt /usr/lib/nvidia-current/ld.so.conf other target alt /usr/lib/nvidia-current/alt_ld.so.conf other current alt /usr/lib/nvidia-current/alt_ld.so.conf 2012-09-20 23:11:45,415 DEBUG: KMH enabled: False 2012-09-20 23:11:47,367 DEBUG: NVidia(nvidia_current).enabled(): target_alt /usr/lib/nvidia-current/ld.so.conf current_alt /usr/lib/nvidia-current/ld.so.conf other target alt /usr/lib/nvidia-current/alt_ld.so.conf other current alt /usr/lib/nvidia-current/alt_ld.so.conf 2012-09-20 23:11:47,367 DEBUG: KMH enabled: False 2012-09-20 23:11:55,961 WARNING: modinfo for module nvidia_current failed: ERROR: modinfo: could not find module nvidia_current 2012-09-20 23:11:55,962 ERROR: XorgDriverHandler.enable(): package or module not installed, aborting 2012-09-20 23:12:27,780 DEBUG: NVidia(nvidia_current).enabled(): target_alt /usr/lib/nvidia-current/ld.so.conf current_alt /usr/lib/nvidia-current/ld.so.conf other target alt /usr/lib/nvidia-current/alt_ld.so.conf other current alt /usr/lib/nvidia-current/alt_ld.so.conf 2012-09-20 23:12:27,781 DEBUG: KMH enabled: False 2012-09-20 23:12:27,840 DEBUG: NVidia(nvidia_current).enabled(): target_alt /usr/lib/nvidia-current/ld.so.conf current_alt /usr/lib/nvidia-current/ld.so.conf other target alt /usr/lib/nvidia-current/alt_ld.so.conf other current alt /usr/lib/nvidia-current/alt_ld.so.conf 2012-09-20 23:12:27,840 DEBUG: KMH enabled: False 2012-09-20 23:12:43,196 DEBUG: NVidia(nvidia_173).enabled(): target_alt None current_alt /usr/lib/nvidia-current/ld.so.conf other target alt None other current alt /usr/lib/nvidia-current/alt_ld.so.conf 2012-09-20 23:12:43,197 DEBUG: nvidia_173 is not the alternative in use 2012-09-20 23:12:43,208 DEBUG: BroadcomWLHandler enabled(): kmod disabled, bcm43xx: blacklisted, b43: enabled, b43legacy: enabled 2012-09-20 23:12:43,269 DEBUG: NVidia(nvidia_173_updates).enabled(): target_alt None current_alt /usr/lib/nvidia-current/ld.so.conf other target alt None other current alt /usr/lib/nvidia-current/alt_ld.so.conf 2012-09-20 23:12:43,269 DEBUG: nvidia_173_updates is not the alternative in use 2012-09-20 23:12:43,333 DEBUG: NVidia(nvidia_current).enabled(): target_alt /usr/lib/nvidia-current/ld.so.conf current_alt /usr/lib/nvidia-current/ld.so.conf other target alt /usr/lib/nvidia-current/alt_ld.so.conf other current alt /usr/lib/nvidia-current/alt_ld.so.conf 2012-09-20 23:12:43,333 DEBUG: KMH enabled: False 2012-09-20 23:12:43,393 DEBUG: NVidia(nvidia_current).enabled(): 
    target_alt /usr/lib/nvidia-current/ld.so.conf current_alt /usr/lib/nvidia-current/ld.so.conf other target alt /usr/lib/nvidia-current/alt_ld.so.conf other current alt /usr/lib/nvidia-current/alt_ld.so.conf
    2012-09-20 23:12:43,394 DEBUG: KMH enabled: False
    2012-09-20 23:12:43,465 DEBUG: NVidia(nvidia_current_updates).enabled(): target_alt None current_alt /usr/lib/nvidia-current/ld.so.conf other target alt None other current alt /usr/lib/nvidia-current/alt_ld.so.conf
    2012-09-20 23:12:43,465 DEBUG: nvidia_current_updates is not the alternative in use
    2012-09-20 23:12:43,531 DEBUG: NVidia(nvidia_current).enabled(): target_alt /usr/lib/nvidia-current/ld.so.conf current_alt /usr/lib/nvidia-current/ld.so.conf other target alt /usr/lib/nvidia-current/alt_ld.so.conf other current alt /usr/lib/nvidia-current/alt_ld.so.conf
    2012-09-20 23:12:43,531 DEBUG: KMH enabled: False
    2012-09-20 23:12:43,587 DEBUG: NVidia(nvidia_current).enabled(): target_alt /usr/lib/nvidia-current/ld.so.conf current_alt /usr/lib/nvidia-current/ld.so.conf other target alt /usr/lib/nvidia-current/alt_ld.so.conf other current alt /usr/lib/nvidia-current/alt_ld.so.conf
    2012-09-20 23:12:43,587 DEBUG: KMH enabled: False
    2012-09-20 23:12:43,663 DEBUG: NVidia(nvidia_173).enabled(): target_alt None current_alt /usr/lib/nvidia-current/ld.so.conf other target alt None other current alt /usr/lib/nvidia-current/alt_ld.so.conf
    2012-09-20 23:12:43,663 DEBUG: nvidia_173 is not the alternative in use
    2012-09-20 23:12:43,672 DEBUG: BroadcomWLHandler enabled(): kmod disabled, bcm43xx: blacklisted, b43: enabled, b43legacy: enabled
    2012-09-20 23:12:43,738 DEBUG: NVidia(nvidia_173_updates).enabled(): target_alt None current_alt /usr/lib/nvidia-current/ld.so.conf other target alt None other current alt /usr/lib/nvidia-current/alt_ld.so.conf
    2012-09-20 23:12:43,739 DEBUG: nvidia_173_updates is not the alternative in use
    2012-09-20 23:12:43,803 DEBUG: NVidia(nvidia_current).enabled(): target_alt /usr/lib/nvidia-current/ld.so.conf current_alt /usr/lib/nvidia-current/ld.so.conf other target alt /usr/lib/nvidia-current/alt_ld.so.conf other current alt /usr/lib/nvidia-current/alt_ld.so.conf
    2012-09-20 23:12:43,803 DEBUG: KMH enabled: False
    2012-09-20 23:12:43,863 DEBUG: NVidia(nvidia_current).enabled(): target_alt /usr/lib/nvidia-current/ld.so.conf current_alt /usr/lib/nvidia-current/ld.so.conf other target alt /usr/lib/nvidia-current/alt_ld.so.conf other current alt /usr/lib/nvidia-current/alt_ld.so.conf
    2012-09-20 23:12:43,863 DEBUG: KMH enabled: False
    2012-09-20 23:12:43,930 DEBUG: NVidia(nvidia_current_updates).enabled(): target_alt None current_alt /usr/lib/nvidia-current/ld.so.conf other target alt None other current alt /usr/lib/nvidia-current/alt_ld.so.conf
    2012-09-20 23:12:43,930 DEBUG: nvidia_current_updates is not the alternative in use
    2012-09-20 23:24:16,305 DEBUG: Shutting down

    Things I have tried:
    - Checked the video card on a Windows session.
    - Reran nvidia-xconfig.
    - Purged all nvidia software from my computer and reinstalled it.
    - Ran nvidia-xconfig again.
    - Tried a different kernel.
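    The jockey log's "modinfo: could not find module nvidia_current" suggests the DKMS module was never built for the running kernel, so one more hedged avenue is to make sure the matching kernel headers exist and force a rebuild. The package names below are the usual ones for releases of this era and are assumptions here:

    sudo apt-get install linux-headers-$(uname -r)
    sudo apt-get install --reinstall nvidia-current
    dkms status                      # the nvidia module should now be listed as built
    sudo modprobe nvidia-current     # modprobe treats dashes and underscores the same

    If modprobe succeeds, rerunning sudo nvidia-xconfig and rebooting should bring GLX back.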

    Read the article

  • Creating and maintaining Orchard translations

    - by Bertrand Le Roy
    Many volunteers have already stepped up to provide translations for Orchard. There are many challenges to overcome with translating such a project. Orchard is a very modular CMS, so the translation mechanism needs to account for the core as well as first and third party modules and themes. Another issue is that every new version of Orchard or of a module changes some localizable strings and adds new ones as others enter obsolescence. In order to address those problems, I've built a small Orchard module that automates some of the most complex tasks that maintaining a translation implies. In this post, I'll walk you through the operations I had to do to update the French translation for Orchard 1.0.

    In order to make sure you translate all the first party modules, I would recommend that you start from a full source code enlistment. The reason is that I'll show how you can extract the default en-US translation from any source code enlistment. That enables you to create a translation that is even more up-to-date than what is currently on the site. Alternatively, you could start by downloading the current en-US translation. If you decide to do so, just skip the relevant paragraphs.

    First, let's install the Orchard Translation Manager. I'm starting from a vanilla clone of the latest in the code repository. After you've setup the site, go into the dashboard and click on Gallery. Locate the Orchard Translation Manager in the list of modules and click "Install". Once the module is installed, you need to enable its one feature by going into Configuration/Features and clicking "Enable" next to Vandelay.TranslationManager.

    We're done with the setup that we need in order to start our translation work. We'll now switch to the command-line and to our favorite text editor.

    Open a command-line on the Orchard web site folder. I found the easiest way to do this is to do a SHIFT+right-click on the Orchard.Web folder in Windows Explorer and to click "Open command window here". Type bin\orchard to enter the Orchard command-line environment. If you do a "help commands" you should see four commands in the list that came from the module we just installed: extract default translation, install translation, package translation and sync translation.

    First, we're going to generate the default translation. Note that it is possible to generate that default translation for a specific list of modules and themes by using the /Extensions: switch, which should facilitate the translation of third party extensions, but in this tutorial we're going to generate it for the whole of the Orchard source code.

    extract default translation /Output:\temp

    This should have created an Orchard.en-us.po.zip file in the temp directory. Extract that archive into an orchard.po folder under \temp. The next step depends on whether you have an existing translation that you want to update or not.
If you do have an existing translation, just extract it into the same \temp\orchard.po directory. That should result in a file structure where you have the default en-US translation alongside your own. If you don't have an existing translation, just continue, the commands will be the same. We are now going to synchronize those translations (or generate the stub for a new one if you didn't start from an existing translation). sync translation /Input:\temp\orchard.po /Culture:fr-FR After this command (where you should of course substitute fr-FR with the culture you're working on), we now have updated files that contain a few useful flags. Open each of the .po files under the culture you are working on (there should be around 36) with your favorite text editor. For all the strings that are still valid in the latest version, nothing changes and you don't need to do anything. For all the strings that disappeared from the default culture, the old translation will still be there but they will be prefixed with the following comment: # Obsolete translation Conveniently, all the obsolete strings will be grouped at the end of the file. You can select all those and delete them. For all the new strings, you will see the following comment: # Untranslated string This is where the hard work begins. You'll need to translate each of those new strings by entering the translation between the quotes in: msgstr "" Don't introduce hard carriage returns in the strings, just stay on one line (your text editor should do some reasonable wrapping so this shouldn't be a big deal). Once you're done with a file, save it. Make sure, and this is very important, that your text editor is saving using the UTF-8 encoding. In Notepad, that setting can be found in the file saving dialog by doing a "Save As" rather than a plain "Save": When all the po files have been edited, you are ready to package the translation for submission (a.k.a. sending e-mail to the localization mailing list). package translation /Culture:fr-FR /Input:\temp\orchard.po /Output:\temp You should now see a Orchard.fr-FR.po.zip file in temp that is ready to be submitted. That is, once you've tested it, which can be done by deploying it into the site: install translation \temp\orchard.fr-fr.po.zip Once this is done you can go into the dashboard under Configuration/Settings and click on "Add or remove supported cultures for the site". Choose your culture and click "Add". You can go back to settings and set the default culture. Save. You may now take a tour of the application and verify that everything works as expected: And that's it really. Creating a translation for Orchard is a matter of a few hours. If you don't see a translation for your culture, please consider creating it.
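    For reference, here is a minimal illustrative sketch of what a filled-in entry in one of those .po files looks like once translated; the msgid is the dashboard string quoted above, the French rendering is mine, and the context lines that Orchard emits around each entry (the #: references and msgctxt) are left out for brevity:

    # Untranslated string
    msgid "Add or remove supported cultures for the site"
    msgstr ""

    becomes:

    msgid "Add or remove supported cultures for the site"
    msgstr "Ajouter ou supprimer des cultures pour le site"

    Only msgstr changes; the msgid must stay exactly as extracted or the string will no longer match at runtime.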

    Read the article

  • Using Radio Button in GridView with Validation

    - by Vincent Maverick Durano
    A developer asked how to select one radio button at a time when the radio buttons are inside a GridView. As you may know, setting the GroupName attribute of a radio button will not work when the radio button is located within a data representation control like the GridView. This is because a radio button inside a gridview behaves differently: since a gridview is rendered as a table element, at run time a different "name" is assigned to each radio button, and hence you are able to select multiple rows. In this post I'm going to demonstrate how to select one radio button at a time in a gridview and how to add simple validation to it.

    To get started, let's fire up Visual Studio and create a new web application / website project. Add a WebForm and then add a gridview. The markup would look something like this:

    <asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="false" >
        <Columns>
            <asp:TemplateField>
                <ItemTemplate>
                    <asp:RadioButton ID="rb" runat="server" />
                </ItemTemplate>
            </asp:TemplateField>
            <asp:BoundField DataField="RowNumber" HeaderText="Row Number" />
            <asp:BoundField DataField="Col1" HeaderText="First Column" />
            <asp:BoundField DataField="Col2" HeaderText="Second Column" />
        </Columns>
    </asp:GridView>

    Notice that I've added a TemplateField column so that we can put the radio button there. I have also set up some BoundField columns and set the DataFields to RowNumber, Col1 and Col2. These are just dummy columns that I used for the simplicity of this example. Now, where did these columns come from? They are created by hand in the code-behind file of the ASPX. Here's the code:

    private DataTable FillData()
    {
        DataTable dt = new DataTable();
        DataRow dr = null;

        // Create DataTable columns
        dt.Columns.Add(new DataColumn("RowNumber", typeof(string)));
        dt.Columns.Add(new DataColumn("Col1", typeof(string)));
        dt.Columns.Add(new DataColumn("Col2", typeof(string)));

        // Create a row for each column
        dr = dt.NewRow();
        dr["RowNumber"] = 1;
        dr["Col1"] = "A";
        dr["Col2"] = "B";
        dt.Rows.Add(dr);

        dr = dt.NewRow();
        dr["RowNumber"] = 2;
        dr["Col1"] = "AA";
        dr["Col2"] = "BB";
        dt.Rows.Add(dr);

        dr = dt.NewRow();
        dr["RowNumber"] = 3;
        dr["Col1"] = "A";
        dr["Col2"] = "B";
        dt.Rows.Add(dr);

        dr = dt.NewRow();
        dr["RowNumber"] = 4;
        dr["Col1"] = "A";
        dr["Col2"] = "B";
        dt.Rows.Add(dr);

        dr = dt.NewRow();
        dr["RowNumber"] = 5;
        dr["Col1"] = "A";
        dr["Col2"] = "B";
        dt.Rows.Add(dr);

        return dt;
    }

    And here's the code for binding the GridView with the dummy data above:

    protected void Page_Load(object sender, EventArgs e)
    {
        if (!IsPostBack)
        {
            GridView1.DataSource = FillData();
            GridView1.DataBind();
        }
    }

    Okay, we now have a GridView with a radio button on each row. Now let's switch back to the ASPX markup. In this example I'm going to use JavaScript to validate the radio buttons so that only one can be selected at a time.
    Here's the JavaScript code:

    function CheckOtherIsCheckedByGVID(rb)
    {
        var isChecked = rb.checked;
        var row = rb.parentNode.parentNode;
        if (isChecked)
        {
            row.style.backgroundColor = '#B6C4DE';
            row.style.color = 'black';
        }
        var currentRdbID = rb.id;
        var parent = document.getElementById("<%= GridView1.ClientID %>");
        var items = parent.getElementsByTagName('input');
        for (var i = 0; i < items.length; i++)
        {
            if (items[i].id != currentRdbID && items[i].type == "radio")
            {
                if (items[i].checked)
                {
                    items[i].checked = false;
                    items[i].parentNode.parentNode.style.backgroundColor = 'white';
                    items[i].parentNode.parentNode.style.color = '#696969';
                }
            }
        }
    }

    The function above styles the row of the currently selected radio button to show that the row is selected, then loops through the radio buttons in the gridview, de-selects the previously selected one, and sets its row style back to the default. You can then call the JavaScript function in the onclick event of the radio button like this:

    <asp:RadioButton ID="rb" runat="server" onclick="javascript:CheckOtherIsCheckedByGVID(this);" />

    Here's the output below:

    On Load:

    After Selecting a Radio Button:

    As you may have noticed, on initial load there is no radio button selected by default in the GridView. Now let's add simple validation for that: we will display an error message if a user clicks a button that triggers a postback without selecting a radio button in the GridView. Here's the JavaScript for the validation:

    function ValidateRadioButton(sender, args)
    {
        var gv = document.getElementById("<%= GridView1.ClientID %>");
        var items = gv.getElementsByTagName('input');
        for (var i = 0; i < items.length; i++)
        {
            if (items[i].type == "radio")
            {
                if (items[i].checked)
                {
                    args.IsValid = true;
                    return;
                }
                else
                {
                    args.IsValid = false;
                }
            }
        }
    }

    The function above loops through the rows in the gridview, finds all the radio buttons within it, and checks each radio button's checked property. If a radio button is checked, IsValid is set to true; otherwise it is set to false. The reason I'm using IsValid is that I'm using an ASP.NET validator control for the validation. Now add the following markup below the GridView declaration:

    <br />
    <asp:Label ID="lblMessage" runat="server" />
    <br />
    <asp:Button ID="btn" runat="server" Text="POST" onclick="btn_Click" ValidationGroup="GroupA" />
    <asp:CustomValidator ID="CustomValidator1" runat="server" ErrorMessage="Please select row in the grid."
        ClientValidationFunction="ValidateRadioButton" ValidationGroup="GroupA" style="display:none"></asp:CustomValidator>
    <asp:ValidationSummary ID="ValidationSummary1" runat="server" ValidationGroup="GroupA"
        HeaderText="Error List:" DisplayMode="BulletList" ForeColor="Red" />

    Then, in the button's Click event, add this simple code just to test that the validation works:

    protected void btn_Click(object sender, EventArgs e)
    {
        lblMessage.Text = "Postback at: " + DateTime.Now.ToString("hh:mm:ss tt");
    }

    Here's the output that you can see in the browser:

    That's it! I hope someone finds this post useful!

    Technorati Tags: ASP.NET, JavaScript, GridView
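    If the page also needs to know which row was picked after the postback, here is a hedged sketch of reading it back on the server side by extending the click handler above. FindControl("rb") matches the TemplateField markup shown earlier, while the Cells[1] index is an assumption based on this example's column order:

    protected void btn_Click(object sender, EventArgs e)
    {
        foreach (GridViewRow row in GridView1.Rows)
        {
            // the radio button declared in the TemplateField is reachable via FindControl
            RadioButton rb = (RadioButton)row.FindControl("rb");
            if (rb != null && rb.Checked)
            {
                // Cells[1] holds the RowNumber BoundField in this example
                lblMessage.Text = "Selected row: " + row.Cells[1].Text + " at " + DateTime.Now.ToString("hh:mm:ss tt");
                return;
            }
        }
    }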

    Read the article

< Previous Page | 523 524 525 526 527 528 529 530 531 532 533 534  | Next Page >