Search Results

Search found 11130 results on 446 pages for 'solaris 11'.


  • arp -n responds with (incomplete) on the wrong subnet, can't remove it

    - by Hannes
    Context: there are two servers.

      server1 - eth0 10.129.76.16,  eth0.2 192.168.0.103
      server2 - eth0 10.129.79.1,   eth0.2 192.168.62.101

    The 192.x.x.x addresses are connected to the same VLAN (vlan2) and can see each other. The 10.x.x.x addresses are connected to different VLANs which cannot see each other.

    At the request of David Swartz, the routing table on server 1 is:

      ~$ sudo route -n
      Kernel IP routing table
      Destination     Gateway          Genmask          Flags Metric Ref Use Iface
      10.129.76.0     0.0.0.0          255.255.255.0    U     0      0   0   eth0
      192.168.0.0     0.0.0.0          255.255.192.0    U     0      0   0   eth0.2
      0.0.0.0         192.168.61.254   0.0.0.0          UG    100    0   0   eth0.2

    The routing table on server 2 is:

      ~$ sudo route -n
      Kernel IP routing table
      Destination     Gateway          Genmask           Flags Metric Ref Use Iface
      0.0.0.0         <public IP gw>   0.0.0.0           UG    100    0   0   eth0.11
      10.129.79.0     0.0.0.0          255.255.255.0     U     0      0   0   eth0
      <public IP>     0.0.0.0          255.255.255.128   U     0      0   0   eth0.11
      192.168.0.0     0.0.0.0          255.255.192.0     U     0      0   0   eth0.2

    Problem: when I ping from server 1 to server 2, no packets seem to arrive, and vice versa. When I check the routes (route -n) I see the default gateway uses eth0.2 on both servers. But when I use arping, I get a response one way (from server 2 to server 1) and no response the other way:

      arping 192.168.62.101
      ARPING 192.168.62.101 from 10.129.76.16 eth0
      ^CSent 2 probes (2 broadcast(s))
      Received 0 response(s)

    As you can see, it uses the 10.x.x.x address instead of the 192.x.x.x one, and as mentioned above, the 10.x.x.x address is unreachable from the other server. When I force arping to use eth0.2, it does work. I have no problems pinging other servers from either of these two servers.

    I did see this in the ARP tables:

      ~# arp -n | grep 192.168.0.103
      192.168.0.103            (incomplete)                  eth0

    and

      ~# arp -n | grep 192.168.62.101

    Question: quite obvious... how can I make these servers see each other again?

    Things I've tried: clearing the appropriate entries in the ARP table to get rid of the (incomplete) entry. But I think the bigger problem is that eth0 is used instead of eth0.2 for the packets from server 1 to server 2.

    Because of David Swartz's remark about the routing tables, I added host routes:

      192.168.0.103   0.0.0.0   255.255.255.255   UH   0   0   0   eth0.2
      192.168.62.101  0.0.0.0   255.255.255.255   UH   0   0   0   eth0.2

    to the appropriate servers, but this didn't solve the problem, so I presume the problem is not in the routing.

    My guess: the problem lies in the following.

      ~$ arp -n | grep 192.168.0.103
      192.168.0.103            (incomplete)                  eth0

    But I'm unable to remove this entry (arp -d 192.168.0.103 has no effect).

    Thanks for reading and even more thanks for answering!
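    In case it helps others hitting the same symptom, below is a minimal sketch (not the accepted fix) of how a stale (incomplete) neighbour entry is usually cleared with the iproute2 tools when arp -d has no effect, and how the peer's 192.x address can be pinned to the VLAN interface so ARP requests leave on eth0.2; the addresses and interface names are taken from the question.

      # On the host that shows the (incomplete) entry: delete it explicitly
      sudo ip neigh del 192.168.0.103 dev eth0

      # Pin the peer's 192.x address to the VLAN interface so ARP goes out eth0.2
      sudo ip route add 192.168.0.103/32 dev eth0.2

      # Verify that resolution now happens on the VLAN interface
      arping -I eth0.2 192.168.0.103
      ip neigh show dev eth0.2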


  • Has this server been compromised?

    - by Griffo
    A friend is running a VPS (CentOS). His business partner was the sysadmin but has left him high and dry to look after the system, so I've been asked to help fix an apparent spam problem: his IP address got blacklisted for unsolicited mail.

    I'm not sure where to look for the problem, but I started with netstat to see what open connections were running. It looks to me like he has remote hosts connected to his SMTP server. Here's the output:

      Active Internet connections (w/o servers)
      Proto Recv-Q Send-Q Local Address               Foreign Address               State
      tcp        0      0 78.153.208.195:imap         86-40-60-183-dynamic.:10029   ESTABLISHED
      tcp        0      0 78.153.208.195:imap         86-40-60-183-dynamic.:10010   ESTABLISHED
      tcp        0      1 78.153.208.195:35563        news.avanport.pt:smtp         SYN_SENT
      tcp        0      0 78.153.208.195:35559        vip-us-br-mx.terra.com:smtp   TIME_WAIT
      tcp        0      0 78.153.208.195:35560        vip-us-br-mx.terra.com:smtp   TIME_WAIT
      tcp        1      1 78.153.208.195:imaps        86-40-60-183-dynamic.:11647   CLOSING
      tcp        1      1 78.153.208.195:imaps        86-40-60-183-dynamic.:11645   CLOSING
      tcp        0      0 78.153.208.195:35562        mx.a.locaweb.com.br:smtp      TIME_WAIT
      tcp        0      0 78.153.208.195:35561        mx.a.locaweb.com.br:smtp      TIME_WAIT
      tcp        0      0 78.153.208.195:imap         86-41-8-64-dynamic.b-:49446   ESTABLISHED

    Does this indicate that his server may be acting as an open relay? Mail should only be outgoing from localhost. Apologies for my lack of knowledge, but I don't work on Linux in my day job.

    EDIT: Here's some output from /var/log/maillog which looks like it may be the result of spam. If others agree, where should I look next to investigate the root cause? I put the server IP through www.checkor.com and it came back clean.

      Jun 29 00:02:13 vps-1001108-595 qmail: 1309302133.721674 status: local 0/10 remote 9/20
      Jun 29 00:02:13 vps-1001108-595 qmail: 1309302133.886182 delivery 74116: deferral: 200.147.36.15_does_not_like_recipient./Remote_host_said:_450_4.7.1_Client_host_rejected:_cannot_find_your_hostname,_[78.153.208.195]/Giving_up_on_200.147.36.15./
      Jun 29 00:02:13 vps-1001108-595 qmail: 1309302133.886255 status: local 0/10 remote 8/20
      Jun 29 00:02:13 vps-1001108-595 qmail: 1309302133.898266 delivery 74115: deferral: 187.31.0.11_does_not_like_recipient./Remote_host_said:_450_4.7.1_Client_host_rejected:_cannot_find_your_hostname,_[78.153.208.195]/Giving_up_on_187.31.0.11./
      Jun 29 00:02:13 vps-1001108-595 qmail: 1309302133.898327 status: local 0/10 remote 7/20
      Jun 29 00:02:14 vps-1001108-595 qmail: 1309302134.137833 delivery 74111: deferral: Sorry,_I_wasn't_able_to_establish_an_SMTP_connection._(#4.4.1)/
      Jun 29 00:02:14 vps-1001108-595 qmail: 1309302134.137914 status: local 0/10 remote 6/20
      Jun 29 00:02:19 vps-1001108-595 qmail: 1309302139.903536 delivery 74000: failure: 209.85.143.27_failed_after_I_sent_the_message./Remote_host_said:_550-5.7.1_[78.153.208.195_______1]_Our_system_has_detected_an_unusual_rate_of/550-5.7.1_unsolicited_mail_originating_from_your_IP_address._To_protect_our/550-5.7.1_users_from_spam,_mail_sent_from_your_IP_address_has_been_blocked./550-5.7.1_Please_visit_http://www.google.com/mail/help/bulk_mail.html_to_review/550_5.7.1_our_Bulk_Email_Senders_Guidelines._e25si1385223wes.137/
      Jun 29 00:02:19 vps-1001108-595 qmail: 1309302139.903606 status: local 0/10 remote 5/20
      Jun 29 00:02:19 vps-1001108-595 qmail-queue-handlers[15501]: Handlers Filter before-queue for qmail started ...

    EDIT #2: Here's the output of netstat -p with the imap and imaps lines removed. I also removed my own ssh session.

      Active Internet connections (w/o servers)
      Proto Recv-Q Send-Q Local Address               Foreign Address               State       PID/Program name
      tcp        0      1 78.153.208.195:40076        any-in-2015.1e100.net:smtp    SYN_SENT    24096/qmail-remote.
      tcp        0      1 78.153.208.195:40077        any-in-2015.1e100.net:smtp    SYN_SENT    24097/qmail-remote.
      udp        0      0 78.153.208.195:48515        125.64.11.158:4225            ESTABLISHED 20435/httpd
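    For readers triaging a similar box, a short diagnostic sketch using the stock qmail queue tools and the process table; it assumes a standard qmail layout (the queue tools may live in /var/qmail/bin) and an Apache log path that may differ on this VPS, so it is a starting point rather than a verdict on this server.

      # How much mail is waiting? A huge local queue usually means something on the box is injecting it.
      qmail-qstat                      # may need the full path, e.g. /var/qmail/bin/qmail-qstat
      qmail-qread | head -50           # envelope senders/recipients of queued messages

      # The UDP socket owned by httpd (PID 20435) in EDIT #2 is worth a look
      ps -fp 20435
      ls -l /proc/20435/cwd            # often points at the vhost or script directory involved

      # Correlate injection times in /var/log/maillog with web requests (log path may differ)
      grep -i 'POST' /var/log/httpd/access_log | tail -50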


  • FFMPEG dropping frames while encoding JPEG sequence at color change

    - by Matt
    I'm trying to put together a slide show using ImageMagick and FFmpeg. I use ImageMagick to expand a single photo into 30 fps video (ImageMagick also handles things like putting text captions on the frames along the way). When I let ffmpeg digest it into a video, it clips along nicely on the color parts of the video, but when it gets to a black-and-white section it reports

      frame= 2030 fps=102 q=32766.0 Lsize=    5203kB time=00:01:07.60 bitrate= 630.5kbits/s dup=0 drop=703

    and drops every frame of video until it hits something with color. As you can imagine, this results in entire photos being removed from the slideshow. Here is my latest dump:

      ffmpeg -y -r 30 -i "teststream/%06d.jpg" -c:v libx264 -r 30 newffmpeg.mp4
      ffmpeg version git-2012-12-10-c3bb333 Copyright (c) 2000-2012 the FFmpeg developers
        built on Dec 10 2012 22:02:04 with gcc 4.6.1 (Ubuntu/Linaro 4.6.1-9ubuntu3)
        configuration: --enable-gpl --enable-libfaac --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-librtmp --enable-libtheora --enable-libvorbis --enable-libx264 --enable-nonfree --enable-version3
        libavutil      52. 12.100 / 52. 12.100
        libavcodec     54. 79.101 / 54. 79.101
        libavformat    54. 49.100 / 54. 49.100
        libavdevice    54.  3.102 / 54.  3.102
        libavfilter     3. 26.101 /  3. 26.101
        libswscale      2.  1.103 /  2.  1.103
        libswresample   0. 17.102 /  0. 17.102
        libpostproc    52.  2.100 / 52.  2.100
      Input #0, image2, from 'teststream/%06d.jpg':
        Duration: 00:12:02.80, start: 0.000000, bitrate: N/A
        Stream #0:0: Video: mjpeg, yuvj444p, 720x480 [SAR 72:72 DAR 3:2], 25 fps, 25 tbr, 25 tbn, 25 tbc
      [libx264 @ 0x3450140] using SAR=1/1
      [libx264 @ 0x3450140] using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE4.2
      [libx264 @ 0x3450140] profile High, level 3.0
      [libx264 @ 0x3450140] 264 - core 129 r2 1cffe9f - H.264/MPEG-4 AVC codec - Copyleft 2003-2012 - http://www.videolan.org/x264.html - options: cabac=1 ref=3 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=7 psy=1 psy_rd=1.00:0.00 mixed_ref=1 me_range=16 chroma_me=1 trellis=1 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=-2 threads=12 lookahead_threads=2 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=2 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=40 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 ip_ratio=1.40 aq=1:1.00
      Output #0, mp4, to 'newffmpeg.mp4':
        Metadata:
          encoder         : Lavf54.49.100
        Stream #0:0: Video: h264 ([33][0][0][0] / 0x0021), yuvj420p, 720x480 [SAR 1:1 DAR 3:2], q=-1--1, 15360 tbn, 30 tbc
      Stream mapping:
        Stream #0:0 -> #0:0 (mjpeg -> libx264)
      Press [q] to stop, [?] for help
      Input stream #0:0 frame changed from size:720x480 fmt:yuvj444p to size:720x480 fmt:yuvj422p
      Input stream #0:0 frame changed from size:720x480 fmt:yuvj422p to size:720x480 fmt:yuvj444pp=584
      frame= 2030 fps=102 q=32766.0 Lsize=    5203kB time=00:01:07.60 bitrate= 630.5kbits/s dup=0 drop=703
      video:5179kB audio:0kB subtitle:0 global headers:0kB muxing overhead 0.472425%
      [libx264 @ 0x3450140] frame I:9     Avg QP:20.10  size: 33933
      [libx264 @ 0x3450140] frame P:636   Avg QP:24.12  size:  6737
      [libx264 @ 0x3450140] frame B:1385  Avg QP:27.04  size:   514
      [libx264 @ 0x3450140] consecutive B-frames:  2.5% 15.2% 13.2% 69.2%
      [libx264 @ 0x3450140] mb I  I16..4:  8.3% 80.3% 11.5%
      [libx264 @ 0x3450140] mb P  I16..4:  1.5%  2.5%  0.2%  P16..4: 41.7% 18.0% 10.3%  0.0%  0.0%  skip:25.9%
      [libx264 @ 0x3450140] mb B  I16..4:  0.0%  0.0%  0.0%  B16..8: 26.6%  0.6%  0.1%  direct: 0.2%  skip:72.3%  L0:35.0% L1:60.3% BI: 4.7%
      [libx264 @ 0x3450140] 8x8 transform intra:64.1% inter:75.1%
      [libx264 @ 0x3450140] coded y,uvDC,uvAC intra: 51.6% 78.0% 43.7% inter: 10.6% 14.9% 2.1%
      [libx264 @ 0x3450140] i16 v,h,dc,p: 29% 19%  6% 46%
      [libx264 @ 0x3450140] i8 v,h,dc,ddl,ddr,vr,hd,vl,hu: 23% 15% 17%  5%  9% 10%  7%  8%  6%
      [libx264 @ 0x3450140] i4 v,h,dc,ddl,ddr,vr,hd,vl,hu: 31% 18% 11%  5%  9% 10%  6%  6%  4%
      [libx264 @ 0x3450140] i8c dc,h,v,p: 46% 18% 24% 12%
      [libx264 @ 0x3450140] Weighted P-Frames: Y:20.1% UV:18.7%
      [libx264 @ 0x3450140] ref P L0: 59.2% 23.2% 13.1%  4.3%  0.2%
      [libx264 @ 0x3450140] ref B L0: 88.7%  8.3%  3.0%
      [libx264 @ 0x3450140] ref B L1: 95.0%  5.0%
      [libx264 @ 0x3450140] kb/s:626.88
      Received signal 2: terminating.

    One last note: if I remove the -r 30 from the input and output, it works flawlessly. I have no idea why the -r 30 is causing it to freak out.
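    The "frame changed ... fmt:yuvj444p to ... fmt:yuvj422p" messages right before the drops suggest the grayscale JPEGs decode to a different pixel format than the colour ones, and that the forced input -r then interacts badly with that switch. As a hedged workaround (an assumption, not a confirmed fix), the sequence can be read at the intended rate with the image2 demuxer's -framerate option (on older builds, -r 30 before -i is the equivalent) while every frame is normalized to one pixel format before x264 sees it:

      # Read the JPEG sequence at 30 fps and force a single pixel format for all frames
      ffmpeg -y -framerate 30 -i "teststream/%06d.jpg" \
             -vf "format=yuv420p" \
             -c:v libx264 -r 30 newffmpeg.mp4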


  • dual boot install--no GRUB

    - by Jim Syyap
    My computer recently had a hardware upgrade and now runs on Windows 7. I decided to install Ubuntu 11.04 as dual boot using the ISO I got from ubuntu.com, downloaded onto my USB stick. Restarting with the USB stick, I was able to install Ubuntu 11.04 choosing the option "Install Ubuntu 11.04 side by side with Windows 7" (or something like that). No errors were encountered during installation. However, on restarting there was no GRUB; the system went straight into Windows 7. Looking for answers, I found these:

    http://essayboard.com/2011/07/12/how-to-dual-boot-ubuntu-11-04-and-windows-7-the-traditional-way-through-grub-2/
    http://ubuntuforums.org/showthread.php?t=1774523

    Following their instructions, I got:

      Boot Info Script 0.60    from 17 May 2011

      ============================= Boot Info Summary: ===============================

      => Windows is installed in the MBR of /dev/sda.
      => Syslinux MBR (3.61-4.03) is installed in the MBR of /dev/sdb.
      => Grub2 (v1.99) is installed in the MBR of /dev/sdc and looks at sector 1 of the same hard drive for core.img. core.img is at this location and looks for (,msdos7)/boot/grub on this drive.

      sda1: File system: ntfs
            Boot sector type: Windows Vista/7
            Boot sector info: No errors found in the Boot Parameter Block.
            Operating System:
            Boot files: /grldr /bootmgr /Boot/BCD /grldr

      sda2: File system: ntfs
            Boot sector type: Windows Vista/7
            Boot sector info: No errors found in the Boot Parameter Block.
            Operating System: Windows 7
            Boot files: /Windows/System32/winload.exe

      sdb1: File system: vfat
            Boot sector type: SYSLINUX 4.02 debian-20101016
            Boot sector info: Syslinux looks at sector 1437504 of /dev/sdb1 for its second stage. SYSLINUX is installed in the directory. The integrity check of the ADV area failed. According to the info in the boot sector, sdb1 starts at sector 0. But according to the info from fdisk, sdb1 starts at sector 62.
            Operating System:
            Boot files: /boot/grub/grub.cfg /syslinux/syslinux.cfg /ldlinux.sys

      sdc1: File system: ntfs
            Boot sector type: Windows XP
            Boot sector info: No errors found in the Boot Parameter Block.
            Operating System:
            Boot files:

      sdc2: File system: Extended Partition

      sdc5: File system: swap

      sdc6: File system: swap

      sdc7: File system: ext4
            Operating System: Ubuntu 11.04
            Boot files: /boot/grub/grub.cfg /etc/fstab /boot/grub/core.img

      sdc8: File system: swap

    Going back into Ubuntu and running sudo fdisk -l, I got this:

      ubuntu@ubuntu:~$ sudo fdisk -l

      Disk /dev/sda: 160.0 GB, 160041885696 bytes
      255 heads, 63 sectors/track, 19457 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x0002f393

         Device Boot      Start         End      Blocks   Id  System
      /dev/sda1   *           1          13      102400    7  HPFS/NTFS
      Partition 1 does not end on cylinder boundary.
      /dev/sda2              13       19458   156185600    7  HPFS/NTFS

      Disk /dev/sdb: 2011 MB, 2011168768 bytes
      62 heads, 62 sectors/track, 1021 cylinders
      Units = cylinders of 3844 * 512 = 1968128 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x000f2ab9

         Device Boot      Start         End      Blocks   Id  System
      /dev/sdb1   *           1        1021     1962331    c  W95 FAT32 (LBA)

      Disk /dev/sdc: 1000.2 GB, 1000202043392 bytes
      255 heads, 63 sectors/track, 121600 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00261ddd

         Device Boot      Start         End      Blocks   Id  System
      /dev/sdc1   *           1       60657   487222656+   7  HPFS/NTFS
      /dev/sdc2           60657      121600   489527681    5  Extended
      /dev/sdc5          120563      121600     8337703+  82  Linux swap / Solaris
      /dev/sdc6          120073      120562     3930112   82  Linux swap / Solaris
      /dev/sdc7           60657      119584   473328640   83  Linux
      /dev/sdc8          119584      120072     3923968   82  Linux swap / Solaris

    Should I proceed and do the following?

    1. Assuming Ubuntu 11.04 was installed on device sdb1, do this: sudo mount /dev/sdb1 /mnt
    2. Then do this: sudo grub-install --root-directory=/mnt /dev/sdb
       Notice there are two dashes in front of the root directory, and I'm not using sdb1 but sdb.
    3. Since the command in step 15 had reinstalled GRUB 2, we now need to unmount /mnt (i.e. sdb1) to clean up: sudo umount /mnt
    4. Reboot and remove the Ubuntu 11.04 CD/DVD from the disk tray.
    5. Log into Ubuntu 11.04 (you have no choice; it will make you log into Ubuntu 11.04 at this point).
    6. Open up a terminal in Ubuntu 11.04 (using the real installation, not the live CD/DVD).
    7. Execute this command: sudo update-grub
    8. Reboot the machine.
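    For reference, a variant of the same repair adjusted to what the Boot Info Script actually reports here (Ubuntu's boot files on sdc7, Windows in the MBR of sda): this is a sketch only, since which disk the BIOS boots first is an assumption that should be checked in the BIOS boot order.

      # From the live USB session: mount the partition that holds /boot/grub (sdc7 per the boot info)
      sudo mount /dev/sdc7 /mnt

      # Reinstall GRUB 2 to the MBR of the disk the BIOS boots (sda here, if it boots the Windows disk first)
      sudo grub-install --root-directory=/mnt /dev/sda

      # Clean up, reboot into the installed Ubuntu, then regenerate the menu
      sudo umount /mnt
      # (inside the installed system)  sudo update-grub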


  • MySQL reserves too much RAM

    - by Buddy
    I have a cheap VPS with 128 MB RAM and 256 MB burst. MySQL starts and reserves about 110 MB, but uses no more than 20 MB of it. My VPS control panel shows that I'm using 127 MB (I'm also running nginx and Sphinx). I know that it shows reserved RAM, but when I go over 128 MB, my VPS reboots automatically every 4 hours. So I want to force MySQL to reserve less RAM. How can I do that? I did some tweaks in my.cnf but it didn't help much.

    top output:

      PID   USER   PR  NI  VIRT  RES  SHR  S %CPU %MEM  TIME+    COMMAND
      1     root   15   0   2156  668  572 S  0.0  0.3  0:00.03  init
      11311 root   15   0  11212  356  228 S  0.0  0.1  0:00.00  vzctl
      11312 root   18   0   3712 1484 1248 S  0.0  0.6  0:00.01  bash
      11347 root   18   0   2284  916  732 R  0.0  0.3  0:00.00  top
      13978 root   17  -4   2248  552  344 S  0.0  0.2  0:00.00  udevd
      14262 root   15   0   1812  564  472 S  0.0  0.2  0:00.03  syslogd
      14293 sphinx 15   0  11816 1172  672 S  0.0  0.4  0:00.07  searchd
      14305 root   25   0   7192 1036  636 S  0.0  0.4  0:00.00  sshd
      14321 root   25   0   2832  836  668 S  0.0  0.3  0:00.00  xinetd
      15389 root   18   0   3708 1300 1132 S  0.0  0.5  0:00.00  mysqld_safe
      15441 mysql  15   0   113m  16m 4440 S  0.0  6.4  0:00.15  mysqld
      15489 root   21   0  13056 1456  340 S  0.0  0.6  0:00.00  nginx
      15490 nginx  18   0  13328 2388  992 S  0.0  0.9  0:00.06  nginx
      15507 nginx  25   0  19520 5888 4244 S  0.0  2.2  0:00.00  php-cgi
      15508 nginx  18   0  19636 4876 2748 S  0.0  1.9  0:00.12  php-cgi
      15509 nginx  15   0  19668 4872 2716 S  0.0  1.9  0:00.11  php-cgi
      15518 root   18   0   4492 1116  568 S  0.0  0.4  0:00.01  crond

    MySQL tuner:

      >> MySQLTuner 1.0.1 - Major Hayden <[email protected]>
      >> Bug reports, feature requests, and downloads at http://mysqltuner.com/
      >> Run with '--help' for additional options and output filtering
      Please enter your MySQL administrative login: root
      Please enter your MySQL administrative password:

      -------- General Statistics --------------------------------------------------
      [--] Skipped version check for MySQLTuner script
      [OK] Currently running supported MySQL version 5.0.77
      [OK] Operating on 32-bit architecture with less than 2GB RAM

      -------- Storage Engine Statistics -------------------------------------------
      [--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
      [--] Data in InnoDB tables: 1M (Tables: 1)
      [OK] Total fragmented tables: 0

      -------- Performance Metrics -------------------------------------------------
      [--] Up for: 38m 43s (37 q [0.016 qps], 20 conn, TX: 4M, RX: 3K)
      [--] Reads / Writes: 100% / 0%
      [--] Total buffers: 28.1M global + 832.0K per thread (100 max threads)
      [OK] Maximum possible memory usage: 109.4M (42% of installed RAM)
      [OK] Slow queries: 0% (0/37)
      [OK] Highest usage of available connections: 1% (1/100)
      [OK] Key buffer size / total MyISAM indexes: 128.0K/64.0K
      [OK] Query cache efficiency: 42.1% (8 cached / 19 selects)
      [OK] Query cache prunes per day: 0
      [!!] Temporary tables created on disk: 27% (3 on disk / 11 total)
      [!!] Thread cache is disabled
      [OK] Table cache hit rate: 57% (8 open / 14 opened)
      [OK] Open file limit used: 1% (12/1K)
      [OK] Table locks acquired immediately: 100% (22 immediate / 22 locks)
      [!!] Connections aborted: 10%
      [OK] InnoDB data size / buffer pool: 1.5M/8.0M

      -------- Recommendations -----------------------------------------------------
      General recommendations:
          MySQL started within last 24 hours - recommendations may be inaccurate
          Enable the slow query log to troubleshoot bad queries
          When making adjustments, make tmp_table_size/max_heap_table_size equal
          Reduce your SELECT DISTINCT queries without LIMIT clauses
          Set thread_cache_size to 4 as a starting value
          Your applications are not closing MySQL connections properly
      Variables to adjust:
          tmp_table_size (> 32M)
          max_heap_table_size (> 16M)
          thread_cache_size (start at 4)

    I think if I do what MySQLTuner says, MySQL will use even more RAM.
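    As a hedged illustration of pushing the reserved footprint down on a 128 MB VPS, the settings below cap the largest buffers and the per-connection cost; the numbers are arbitrary starting points rather than tuned values, and the config path and service name are assumptions that vary by distro.

      # Append a low-memory profile to the server config, then restart MySQL
      sudo tee -a /etc/my.cnf <<'EOF'
      [mysqld]
      # tuner shows ~1.5M of InnoDB data against the default 8M pool
      innodb_buffer_pool_size = 4M
      key_buffer_size = 2M
      query_cache_size = 4M
      # each connection costs ~832K of per-thread buffers here
      max_connections = 20
      thread_cache_size = 4
      tmp_table_size = 8M
      max_heap_table_size = 8M
      EOF
      sudo /etc/init.d/mysqld restart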


  • Trying to prevent Windows from hibernating/sleeping automatically

    - by user328821
    My Dell XPS 8700 (Win 7) suddenly began putting itself to sleep at 6 pm daily, even if I'm typing. I don't know what caused this to occur, except possibly a Windows update that took place in the middle of the night. I initially went into the power settings and saw two plans set up, one from Dell and the other the Windows Power saver plan. I set both to "never" for sleep and hibernate, yet it still occurred. I have current drivers and a fairly new UPS whose software is set to shut down only after power loss. Dell is of little help; can anyone point me in the right direction? I did run powercfg -energy and came up with this:

      Power Efficiency Diagnostics Report

      Scan Time                 2014-05-08T19:21:48Z
      Scan Duration             60 seconds
      System Manufacturer       Dell Inc.
      System Product Name       XPS 8700
      BIOS Date                 08/23/2013
      BIOS Version              A04
      OS Build                  7601
      Platform Role             PlatformRoleDesktop
      Plugged In                true
      Process Count             115
      Thread Count              1631
      Report GUID               {097caf99-039b-44c3-b154-d797bfbfdfcc}

      Analysis Results: Errors

      Power Policy: Sleep timeout is disabled (Plugged In)
        The computer is not configured to automatically sleep after a period of inactivity.

      System Availability Requests: System Required Request
        The device or driver has made a request to prevent the system from automatically entering sleep.
        Requesting Driver Instance  HDAUDIO\FUNC_01&VEN_10EC&DEV_0899&SUBSYS_102805B7&REV_1000\4&220b1bbc&0&0001
        Requesting Driver Device    Realtek High Definition Audio

      CPU Utilization: Processor utilization is high
        The average processor utilization during the trace was high. The system will consume less power when the average processor utilization is very low. Review processor utilization for individual processes to determine which applications and services contribute the most to total processor utilization.
        Average Utilization (%)  9.48

      Warnings

      Platform Timer Resolution: Platform Timer Resolution
        The default platform timer resolution is 15.6ms (15625000ns) and should be used whenever the system is idle. If the timer resolution is increased, processor power management technologies may not be effective. The timer resolution may be increased due to multimedia playback or graphical animations.
        Current Timer Resolution (100ns units)  10000
        Maximum Timer Period (100ns units)      156001

      Platform Timer Resolution: Outstanding Kernel Timer Request
        A kernel component or device driver has requested a timer resolution smaller than the platform maximum timer resolution.
        Requested Period  10000
        Request Count     2

      Platform Timer Resolution: Outstanding Timer Request
        A program or service has requested a timer resolution smaller than the platform maximum timer resolution.
        Requested Period         10000
        Requesting Process ID    8672
        Requesting Process Path  \Device\HarddiskVolume3\Program Files (x86)\Mozilla Firefox\firefox.exe

      Platform Timer Resolution: Outstanding Timer Request
        A program or service has requested a timer resolution smaller than the platform maximum timer resolution.
        Requested Period         100000
        Requesting Process ID    1212
        Requesting Process Path  \Device\HarddiskVolume3\Windows\System32\svchost.exe

      Power Policy: 802.11 Radio Power Policy is Maximum Performance (Plugged In)
        The current power policy for 802.11-compatible wireless network adapters is not configured to use low-power modes.

      CPU Utilization: Individual process with significant processor utilization.
        This process is responsible for a significant portion of the total processor utilization recorded during the trace.
        Process Name             audiodg.exe
        PID                      1304
        Average Utilization (%)  4.73
        \Device\HarddiskVolume3\Windows\System32\msvcrt.dll              1.88
        \Device\HarddiskVolume3\Windows\System32\MaxxAudioAPO5064.dll    1.77
        \Device\HarddiskVolume3\Windows\System32\AudioEng.dll            0.80

      CPU Utilization: Individual process with significant processor utilization.
        This process is responsible for a significant portion of the total processor utilization recorded during the trace.
        Process Name             thunderbird.exe
        PID                      6036
        Average Utilization (%)  0.35
        \Device\HarddiskVolume3\Program Files (x86)\Mozilla Thunderbird\xul.dll     0.16
        \Device\HarddiskVolume3\Program Files (x86)\Mozilla Thunderbird\mozjs.dll   0.05
        \SystemRoot\System32\win32k.sys                                             0.03

      CPU Utilization: Individual process with significant processor utilization.
        This process is responsible for a significant portion of the total processor utilization recorded during the trace.
        Process Name             dwm.exe
        PID                      1340
        Average Utilization (%)  0.25
        \Device\HarddiskVolume3\Windows\System32\dwmcore.dll        0.08
        \Device\HarddiskVolume3\Windows\System32\nvwgf2umx.dll      0.05
        \SystemRoot\system32\ntoskrnl.exe                           0.03

      CPU Utilization: Individual process with significant processor utilization.
        This process is responsible for a significant portion of the total processor utilization recorded during the trace.


  • Installing EclipseFP on Mac OS X

    - by Dom Kennedy
    I am trying to install EclipseFP. I'm running OS X Mavericks. I've tried following both the official installation instructions and the advice in this answer on SU, but I'm still having the same problem. I can get the plugin itself installed painlessly using Help -> Install New Software..., but when I restart and switch to the Haskell perspective, things start to go wrong.

    The installation instructions tell me that I should receive a prompt to install BuildWrapper and Scion Browser. I do not receive this prompt. Furthermore, if I create a new Haskell project, my code has no syntax highlighting, and the Hoogle search feature does not appear to do anything. It's clear that the plugin is not set up correctly yet. I've tried running cabal update in Terminal, but this does not change anything.

    After several attempts going round in circles with this on Eclipse Juno, I uninstalled Eclipse and the Haskell Platform and performed a clean install of Eclipse Luna and the latest Haskell Platform. However, the problems are persisting.

    I've tried going into Preferences to see if I could sort any of this out manually. I should initially point out that my GHC installation seems to be correctly referenced under Preferences -> Haskell Implementations. Under Haskell -> Helper executables, there are areas for configuring the options of both BuildWrapper and Scion Browser. At present, both are blank. I tried clicking the "Install from Hackage..." button beside each of them with no success; I receive an error message saying

      Expected executable <workspace>/.metadata/.plugins/net.sf.eclipsefp.haskell.ui/sandbox/.cabal-sandbox/bin/buildwrapper not found!

    (replace buildwrapper with scion-browser and the message is the same). The Eclipse console displays the following exception after doing the above with BuildWrapper:

      src/Language/Haskell/BuildWrapper/GHCStorage.hs:313:32:
          Not in scope: data constructor 'MatchGroup'
      cabal.real: Error: some packages failed to install:
      buildwrapper-0.7.4 failed during the building phase. The exception was:
      ExitFailure 1

    and after doing it for Scion Browser:

      zip-archive-0.2.3.4 (reinstall) changes: text-1.1.0.0 -> 0.11.3.1
      pandoc-1.12.3.3 (latest: 1.13) -http-conduit (new version)
      Graphalyze-0.14.1.0 (reinstall) changes: pandoc-1.12.4.2 -> 1.12.3.3, text-1.1.0.0 -> 0.11.3.1
      cabal.real: The following packages are likely to be broken by the reinstalls:
      pandoc-1.12.4.2
      unordered-containers-0.2.4.0
      aeson-0.7.0.4
      scientific-0.2.0.2
      case-insensitive-1.1.0.3
      HTTP-4000.2.10
      Use --force-reinstalls if you want to install anyway.

    After receiving similar results on previous attempts, I've tried using --force-reinstalls and ended up at more dead ends. I am at a loss as to what is wrong and how to solve this. I should point out that my GHC installation appears to be correctly configured under Preferences -> Haskell -> Haskell Implementations. Apologies if any of this information is irrelevant; I'm just not really sure what is important and what isn't at this point. Any help anyone could provide would be greatly appreciated.
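    One workaround that sometimes gets these helpers in place is to build them outside Eclipse and point the Helper executables page at the results; a sketch only, assuming cabal from the Haskell Platform is on the PATH and installs binaries to ~/.cabal/bin (both assumptions worth checking).

      # Build the two helper executables EclipseFP expects
      cabal update
      cabal install buildwrapper scion-browser

      # Confirm where the binaries landed, then enter these paths under
      # Preferences -> Haskell -> Helper executables
      ls ~/.cabal/bin/buildwrapper ~/.cabal/bin/scion-browser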


  • Information about a pen drive with libudev on Linux

    - by Catanzaro
    I'm writing a little app in C that reads the device information of my pen drive. I plugged it in and typed dmesg:

      [ 7676.243994] scsi 7:0:0:0: Direct-Access     Lexar    USB Flash Drive  1100 PQ: 0 ANSI: 0 CCS
      [ 7676.248359] sd 7:0:0:0: Attached scsi generic sg2 type 0
      [ 7676.256733] sd 7:0:0:0: [sdb] 7831552 512-byte logical blocks: (4.00 GB/3.73 GiB)
      [ 7676.266559] sd 7:0:0:0: [sdb] Write Protect is off
      [ 7676.266566] sd 7:0:0:0: [sdb] Mode Sense: 43 00 00 00
      [ 7676.266569] sd 7:0:0:0: [sdb] Assuming drive cache: write through
      [ 7676.285373] sd 7:0:0:0: [sdb] Assuming drive cache: write through
      [ 7676.285383] sdb: sdb1
      [ 7676.298661] sd 7:0:0:0: [sdb] Assuming drive cache: write through
      [ 7676.298667] sd 7:0:0:0: [sdb] Attached SCSI removable disk

    With "udevadm info -q all -n /dev/sdb":

      P: /devices/pci0000:00/0000:00:11.0/0000:02:03.0/usb1/1-1/1-1:1.0/host7/target7:0:0/7:0:0:0/block/sdb
      N: sdb
      W: 36
      S: block/8:16
      S: disk/by-id/usb-Lexar_USB_Flash_Drive_AA5OCYQII8PSQXBB-0:0
      S: disk/by-path/pci-0000:02:03.0-usb-0:1:1.0-scsi-0:0:0:0
      E: UDEV_LOG=3
      E: DEVPATH=/devices/pci0000:00/0000:00:11.0/0000:02:03.0/usb1/1-1/1-1:1.0/host7/target7:0:0/7:0:0:0/block/sdb
      E: MAJOR=8
      E: MINOR=16
      E: DEVNAME=/dev/sdb
      E: DEVTYPE=disk
      E: SUBSYSTEM=block
      E: ID_VENDOR=Lexar
      E: ID_VENDOR_ENC=Lexar\x20\x20\x20
      E: ID_VENDOR_ID=05dc
      E: ID_MODEL=USB_Flash_Drive
      E: ID_MODEL_ENC=USB\x20Flash\x20Drive\x20
      E: ID_MODEL_ID=a813
      E: ID_REVISION=1100
      E: ID_SERIAL=Lexar_USB_Flash_Drive_AA5OCYQII8PSQXBB-0:0
      E: ID_SERIAL_SHORT=AA5OCYQII8PSQXBB
      E: ID_TYPE=disk
      E: ID_INSTANCE=0:0
      E: ID_BUS=usb
      E: ID_USB_INTERFACES=:080650:
      E: ID_USB_INTERFACE_NUM=00
      E: ID_USB_DRIVER=usb-storage
      E: ID_PATH=pci-0000:02:03.0-usb-0:1:1.0-scsi-0:0:0:0
      E: ID_PART_TABLE_TYPE=dos
      E: UDISKS_PRESENTATION_NOPOLICY=0
      E: UDISKS_PARTITION_TABLE=1
      E: UDISKS_PARTITION_TABLE_SCHEME=mbr
      E: UDISKS_PARTITION_TABLE_COUNT=1
      E: DEVLINKS=/dev/block/8:16 /dev/disk/by-id/usb-Lexar_USB_Flash_Drive_AA5OCYQII8PSQXBB-0:0 /dev/disk/by-path/pci-0000:02:03.0-usb-0:1:1.0-scsi-0:0:0:0

    And my program is:

      #include <stdio.h>
      #include <libudev.h>
      #include <stdlib.h>
      #include <locale.h>
      #include <unistd.h>

      int main(void)
      {
          struct udev_enumerate *enumerate;
          struct udev_list_entry *devices, *dev_list_entry;
          struct udev_device *dev;

          /* Create the udev object */
          struct udev *udev = udev_new();
          if (!udev) {
              printf("Can't create udev\n");
              exit(0);
          }

          enumerate = udev_enumerate_new(udev);
          udev_enumerate_add_match_subsystem(enumerate, "scsi_generic");
          udev_enumerate_scan_devices(enumerate);
          devices = udev_enumerate_get_list_entry(enumerate);

          udev_list_entry_foreach(dev_list_entry, devices) {
              const char *path;

              /* Get the filename of the /sys entry for the device and create a
                 udev_device object (dev) representing it */
              path = udev_list_entry_get_name(dev_list_entry);
              dev = udev_device_new_from_syspath(udev, path);

              /* usb_device_get_devnode() returns the path to the device node
                 itself in /dev. */
              printf("Device Node Path: %s\n", udev_device_get_devnode(dev));

              /* The device pointed to by dev contains information about the hidraw
                 device. In order to get information about the USB device, get the
                 parent device with the subsystem/devtype pair of "usb"/"usb_device".
                 This will be several levels up the tree, but the function will
                 find it. */
              dev = udev_device_get_parent_with_subsystem_devtype(dev, "block", "disk");
              if (!dev) {
                  printf("Errore\n");
                  exit(1);
              }

              /* From here, we can call get_sysattr_value() for each file in the
                 device's /sys entry. The strings passed into these functions
                 (idProduct, idVendor, serial, etc.) correspond directly to the files
                 in the directory which represents the USB device. Note that USB
                 strings are Unicode, UCS2 encoded, but the strings returned from
                 udev_device_get_sysattr_value() are UTF-8 encoded. */
              printf("  VID/PID: %s %s\n",
                     udev_device_get_sysattr_value(dev, "idVendor"),
                     udev_device_get_sysattr_value(dev, "idProduct"));
              printf("  %s\n  %s\n",
                     udev_device_get_sysattr_value(dev, "manufacturer"),
                     udev_device_get_sysattr_value(dev, "product"));
              printf("  serial: %s\n",
                     udev_device_get_sysattr_value(dev, "serial"));

              udev_device_unref(dev);
          }

          /* Free the enumerator object */
          udev_enumerate_unref(enumerate);
          udev_unref(udev);
          return 0;
      }

    The problem is that I get this output:

      Device Node Path: /dev/sg0
      Errore

    and I don't see the device information. I think the subsystem and devtype are set correctly: "block" and "disk". Thanks for the help. Bye.
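    The comment in the code itself suggests asking for the "usb"/"usb_device" parent rather than "block"/"disk"; a /dev/sg* node from the scsi_generic subsystem has no block/disk ancestor, which would explain the NULL parent and the "Errore" exit. A quick way to confirm which ancestor actually carries idVendor/idProduct/serial, assuming udevadm is available:

      # Walk the parent chain of the pen drive and find the node exposing the USB attributes
      udevadm info --attribute-walk --name=/dev/sdb | grep -B5 'ATTRS{idVendor}'

      # The same walk for the scsi_generic node the program enumerates
      udevadm info --attribute-walk --name=/dev/sg2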


  • How do bots access directories on a server that are not the DocumentRoot of the public IP address? How do I stop them?

    - by tmsimont
    I have a local network set up with apache2 and named running on openSUSE 13.1 Linux. I used the named service to make my computer a domain server: I set up my router to send domain lookups to my computer, which lets me rewrite a bunch of domains on my network to its own local IP, 192.168.0.111. This works great. I use virtual host configuration to let various domains and subdomains (re-routed to the same IP via named) pull up different directories on my computer. For example:

      <VirtualHost *:80>
          ServerName 192.168.0.111
          ServerAlias fmb.wa.net
          DocumentRoot /home/work/wa.net/fmb
      </VirtualHost>

      <VirtualHost *:80>
          ServerName 192.168.0.111
          ServerAlias postrecord.wa.net
          DocumentRoot /home/work/wa.net/postrecord
      </VirtualHost>

      <VirtualHost *:80>
          ServerName 192.168.0.111
          ServerAlias cvalley.wa.net
          DocumentRoot /home/work/wa.net/cvalley_local
      </VirtualHost>

    This makes it possible to hit cvalley.wa.net from any device on my network and get the site that lives in /home/work/wa.net/cvalley_local.

    I decided to forward port 80 to this computer so I could share a few development sites with coworkers. I can't control which site they see with the same named service, because they'd have to use my computer as their domain name server... so I added a block like this:

      <VirtualHost *:80>
          ServerName 192.168.0.111
          ServerAlias MY.IP.XXX.XX
          DocumentRoot /home/work/wa.net/cvalley
      </VirtualHost>

    where "MY.IP.XXX.XX" is my public IP address. This works as expected: when you hit my IP address from a public network, you see the site that lives in /home/work/wa.net/cvalley.

    What confuses me is that there are public IP addresses in the logs of my other sites. I would have expected it to be impossible to access the other sites in my network unless a public user somehow figured out what I'm calling my ServerAliases and is mimicking my domain setup... How can public traffic be hitting my other local sites? How can I recreate this kind of access?

    Here are some examples of public IPs hitting my VirtualHost sites:

      162.253.66.76 - - [15/Aug/2014:19:20:47 -0600] "GET /xmlrpc.php HTTP/1.0" 404 1004 "-" "-"
      162.253.66.74 - - [16/Aug/2014:10:50:28 -0600] "GET / HTTP/1.0" 200 262 "-" "masscan/1.0 (https://github.com/robertdavidgraham/masscan)"
      185.4.227.194 - - [16/Aug/2014:11:16:45 -0600] "GET http://24x7-allrequestsallowed.com/?PHPSESSID=1rysxtj500143WQMVT%5E_NAZ%5BQ HTTP/1.1" 200 262 "-" "-"
      101.226.254.138 - - [16/Aug/2014:13:32:14 -0600] "HEAD / HTTP/1.0" 200 - "-" "-"
      162.253.66.74 - - [16/Aug/2014:14:26:19 -0600] "GET / HTTP/1.0" 200 262 "-" "masscan/1.0 (https://github.com/robertdavidgraham/masscan)"
      212.129.2.119 - - [16/Aug/2014:16:00:51 -0600] "HEAD / HTTP/1.0" 200 - "-" "-"
      91.240.163.111 - - [16/Aug/2014:18:34:32 -0600] "GET / HTTP/1.0" 200 262 "-" "masscan/1.0 (https://github.com/robertdavidgraham/masscan)"
      162.253.66.74 - - [16/Aug/2014:19:02:53 -0600] "GET / HTTP/1.0" 200 262 "-" "masscan/1.0 (https://github.com/robertdavidgraham/masscan)"
      122.226.223.69 - - [17/Aug/2014:05:53:09 -0600] "GET http://www.k2proxy.com//hello.html HTTP/1.1" 404 1006 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/6.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E)"
      ::1 - - [17/Aug/2014:10:19:26 -0600] "OPTIONS * HTTP/1.0" 200 - "-" "Apache/2.4.6 (Linux/SUSE) OpenSSL/1.0.1e PHP/5.4.20 (internal dummy connection)"
      162.209.65.196 - - [17/Aug/2014:15:31:53 -0600] "HEAD / HTTP/1.0" 200 - "-" "-"
      111.206.199.163 - - [18/Aug/2014:11:12:56 -0600] "HEAD / HTTP/1.0" 200 - "-" "-"
      37.187.180.168 - - [18/Aug/2014:15:40:00 -0600] "HEAD / HTTP/1.0" 200 - "-" "-"
      62.210.38.226 - - [18/Aug/2014:18:35:16 -0600] "HEAD / HTTP/1.0" 200 - "-" "-"

    Is there anything I can do to reliably deny public access by default, but allow it only in one VirtualHost?
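    A common way to get "deny by default, allow one site" behaviour is to make the first virtual host Apache loads a catch-all that refuses everything, so requests that arrive by bare IP or an unknown Host header never fall through to the named sites. A sketch for Apache 2.4, assuming openSUSE's /etc/apache2/vhosts.d/*.conf layout and that the chosen file name sorts before the real vhost files:

      sudo mkdir -p /srv/www/empty
      sudo tee /etc/apache2/vhosts.d/000-default.conf <<'EOF'
      # First vhost loaded = default for any request whose Host header matches nothing else
      <VirtualHost *:80>
          ServerName default.invalid
          DocumentRoot /srv/www/empty
          <Directory /srv/www/empty>
              Require all denied
          </Directory>
      </VirtualHost>
      EOF
      sudo apachectl configtest && sudo systemctl reload apache2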


  • JavaFX TableView: get selected data from an ObservableList

    - by user3717821
    i am working on a javafx project and i need your help . while i am trying to get selected data from table i can get selected data from normal cell but can't get data from ObservableList inside tableview. code for my database: -- phpMyAdmin SQL Dump -- version 4.0.4 -- http://www.phpmyadmin.net -- -- Host: localhost -- Generation Time: Jun 10, 2014 at 06:20 AM -- Server version: 5.1.33-community -- PHP Version: 5.4.12 SET SQL_MODE = "NO_AUTO_VALUE_ON_ZERO"; SET time_zone = "+00:00"; /*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */; /*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */; /*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */; /*!40101 SET NAMES utf8 */; -- -- Database: `test` -- -- -------------------------------------------------------- -- -- Table structure for table `customer` -- CREATE TABLE IF NOT EXISTS `customer` ( `col0` int(11) NOT NULL, `col1` varchar(255) DEFAULT NULL, `col2` int(11) DEFAULT NULL, PRIMARY KEY (`col0`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1; -- -- Dumping data for table `customer` -- INSERT INTO `customer` (`col0`, `col1`, `col2`) VALUES (12, 'adasdasd', 231), (22, 'adasdasd', 231), (212, 'adasdasd', 231); /*!40101 SET CHARACTER_SET_CLIENT=@OLD_CHARACTER_SET_CLIENT */; /*!40101 SET CHARACTER_SET_RESULTS=@OLD_CHARACTER_SET_RESULTS */; /*!40101 SET COLLATION_CONNECTION=@OLD_COLLATION_CONNECTION */; my javafx codes: import java.sql.Connection; import java.sql.DriverManager; import java.sql.ResultSet; import java.sql.SQLException; import java.util.Map; import javafx.application.Application; import javafx.beans.property.SimpleStringProperty; import javafx.beans.value.ChangeListener; import javafx.beans.value.ObservableValue; import javafx.collections.FXCollections; import javafx.collections.ObservableList; import javafx.event.ActionEvent; import javafx.event.EventHandler; import javafx.scene.Scene; import javafx.scene.control.Button; import javafx.scene.control.TableCell; import javafx.scene.control.TableColumn; import javafx.scene.control.TableColumn.CellDataFeatures; import javafx.scene.control.TablePosition; import javafx.scene.control.TableView; import javafx.scene.control.TableView.TableViewSelectionModel; import javafx.scene.control.cell.ChoiceBoxTableCell; import javafx.scene.control.cell.TextFieldTableCell; import javafx.scene.layout.BorderPane; import javafx.stage.Stage; import javafx.util.Callback; import javafx.util.StringConverter; class DBConnector { private static Connection conn; private static String url = "jdbc:mysql://localhost/test"; private static String user = "root"; private static String pass = "root"; public static Connection connect() throws SQLException{ try{ Class.forName("com.mysql.jdbc.Driver").newInstance(); }catch(ClassNotFoundException cnfe){ System.err.println("Error: "+cnfe.getMessage()); }catch(InstantiationException ie){ System.err.println("Error: "+ie.getMessage()); }catch(IllegalAccessException iae){ System.err.println("Error: "+iae.getMessage()); } conn = DriverManager.getConnection(url,user,pass); return conn; } public static Connection getConnection() throws SQLException, ClassNotFoundException{ if(conn !=null && !conn.isClosed()) return conn; connect(); return conn; } } public class DynamicTable extends Application{ Object newValue; //TABLE VIEW AND DATA private ObservableList<ObservableList> data; private TableView<ObservableList> tableview; //MAIN EXECUTOR public static void main(String[] args) { launch(args); } //CONNECTION DATABASE public void buildData(){ 
tableview.setEditable(true); Callback<TableColumn<Map, String>, TableCell<Map, String>> cellFactoryForMap = new Callback<TableColumn<Map, String>, TableCell<Map, String>>() { @Override public TableCell call(TableColumn p) { return new TextFieldTableCell(new StringConverter() { @Override public String toString(Object t) { return t.toString(); } @Override public Object fromString(String string) { return string; } }); } }; Connection c ; data = FXCollections.observableArrayList(); try{ c = DBConnector.connect(); //SQL FOR SELECTING ALL OF CUSTOMER String SQL = "SELECT * from CUSTOMer"; //ResultSet ResultSet rs = c.createStatement().executeQuery(SQL); /********************************** * TABLE COLUMN ADDED DYNAMICALLY * **********************************/ for(int i=0 ; i<rs.getMetaData().getColumnCount(); i++){ //We are using non property style for making dynamic table final int j = i; TableColumn col = new TableColumn(rs.getMetaData().getColumnName(i+1)); if(j==1){ final ObservableList<String> logLevelList = FXCollections.observableArrayList("FATAL", "ERROR", "WARN", "INFO", "INOUT", "DEBUG"); col.setCellFactory(ChoiceBoxTableCell.forTableColumn(logLevelList)); tableview.getColumns().addAll(col); } else{ col.setCellValueFactory(new Callback<CellDataFeatures<ObservableList,String>,ObservableValue<String>>(){ public ObservableValue<String> call(CellDataFeatures<ObservableList, String> param) { return new SimpleStringProperty(param.getValue().get(j).toString()); } }); tableview.getColumns().addAll(col); } if(j!=1) col.setCellFactory(cellFactoryForMap); System.out.println("Column ["+i+"] "); } /******************************** * Data added to ObservableList * ********************************/ while(rs.next()){ //Iterate Row ObservableList<String> row = FXCollections.observableArrayList(); for(int i=1 ; i<=rs.getMetaData().getColumnCount(); i++){ //Iterate Column row.add(rs.getString(i)); } System.out.println("Row [1] added "+row ); data.add(row); } //FINALLY ADDED TO TableView tableview.setItems(data); }catch(Exception e){ e.printStackTrace(); System.out.println("Error on Building Data"); } } @Override public void start(Stage stage) throws Exception { //TableView Button showDataButton = new Button("Add"); showDataButton.setOnAction(new EventHandler<ActionEvent>() { public void handle(ActionEvent event) { ObservableList<String> row = FXCollections.observableArrayList(); for(int i=1 ; i<=3; i++){ //Iterate Column row.add("asdasd"); } data.add(row); //FINALLY ADDED TO TableView tableview.setItems(data); } }); tableview = new TableView(); buildData(); //Main Scene BorderPane root = new BorderPane(); root.setCenter(tableview); root.setBottom(showDataButton); Scene scene = new Scene(root,500,500); stage.setScene(scene); stage.show(); tableview.getSelectionModel().selectedItemProperty().addListener(new ChangeListener() { @Override public void changed(ObservableValue observableValue, Object oldValue, Object newValue) { //Check whether item is selected and set value of selected item to Label if (tableview.getSelectionModel().getSelectedItem() != null) { TableViewSelectionModel selectionModel = tableview.getSelectionModel(); ObservableList selectedCells = selectionModel.getSelectedCells(); TablePosition tablePosition = (TablePosition) selectedCells.get(0); Object val = tablePosition.getTableColumn().getCellData(newValue); System.out.println("Selected Value " + val); System.out.println("Selected row " + newValue); } } }); } } please help me..


  • Why do I have to run aptitude update twice to install Ruby?

    - by Willie Wheeler
    Summary. I have a fresh EC2 Precise 64-bit instance (ami-82fa58eb). After launching the instance, I want to install ruby1.9.1 (among others). This doesn't work: aptitude update && apt-get -o Dpkg::Options::="--force-confnew" --force-yes -fuy dist-upgrade && aptitude install -y ruby1.9.1 ruby1.9.1-dev make as Aptitude can't find the Ruby package. But this works: aptitude update && aptitude update && apt-get -o Dpkg::Options::="--force-confnew" --force-yes -fuy dist-upgrade && aptitude install -y ruby1.9.1 ruby1.9.1-dev make I would like to understand why I need to run aptitude update twice. Details. The first and second runs look pretty different. First run: Ign http://security.ubuntu.com precise-security InRelease Ign http://archive.ubuntu.com precise InRelease Get: 1 http://security.ubuntu.com precise-security Release.gpg [198 B] Ign http://archive.ubuntu.com precise-updates InRelease Get: 2 http://security.ubuntu.com precise-security Release [49.6 kB] Hit http://archive.ubuntu.com precise Release.gpg Get: 3 http://archive.ubuntu.com precise-updates Release.gpg [198 B] Hit http://archive.ubuntu.com precise Release Get: 4 http://security.ubuntu.com precise-security/main amd64 Packages [161 kB] Get: 5 http://archive.ubuntu.com precise-updates Release [49.6 kB] Get: 6 http://security.ubuntu.com precise-security/restricted amd64 Packages [3,969 B] Hit http://archive.ubuntu.com precise/main amd64 Packages Get: 7 http://security.ubuntu.com precise-security/universe amd64 Packages [43.8 kB] Hit http://archive.ubuntu.com precise/restricted amd64 Packages Hit http://archive.ubuntu.com precise/universe amd64 Packages Get: 8 http://security.ubuntu.com precise-security/multiverse amd64 Packages [2,180 B] Hit http://archive.ubuntu.com precise/multiverse amd64 Packages Get: 9 http://security.ubuntu.com precise-security/main i386 Packages [165 kB] Hit http://archive.ubuntu.com precise/main i386 Packages Hit http://archive.ubuntu.com precise/restricted i386 Packages Hit http://archive.ubuntu.com precise/universe i386 Packages Hit http://archive.ubuntu.com precise/multiverse i386 Packages Get: 10 http://security.ubuntu.com precise-security/restricted i386 Packages [3,968 B] Hit http://archive.ubuntu.com precise/main TranslationIndex Get: 11 http://security.ubuntu.com precise-security/universe i386 Packages [44.0 kB] Hit http://archive.ubuntu.com precise/multiverse TranslationIndex Get: 12 http://security.ubuntu.com precise-security/multiverse i386 Packages [2,369 B] Get: 13 http://security.ubuntu.com precise-security/main TranslationIndex [73 B] Hit http://archive.ubuntu.com precise/restricted TranslationIndex Get: 14 http://security.ubuntu.com precise-security/multiverse TranslationIndex [71 B] Hit http://archive.ubuntu.com precise/universe TranslationIndex Get: 15 http://security.ubuntu.com precise-security/restricted TranslationIndex [71 B] Get: 16 http://archive.ubuntu.com precise-updates/main amd64 Packages [382 kB] Get: 17 http://security.ubuntu.com precise-security/universe TranslationIndex [73 B] Get: 18 http://security.ubuntu.com precise-security/main Translation-en [76.5 kB] Get: 19 http://security.ubuntu.com precise-security/multiverse Translation-en [995 B] Get: 20 http://security.ubuntu.com precise-security/restricted Translation-en [978 B] Get: 21 http://security.ubuntu.com precise-security/universe Translation-en [27.2 kB] Get: 22 http://archive.ubuntu.com precise-updates/restricted amd64 Packages [6,755 B] Get: 23 http://archive.ubuntu.com precise-updates/universe amd64 Packages [129 
kB] Get: 24 http://archive.ubuntu.com precise-updates/multiverse amd64 Packages [8,677 B] Get: 25 http://archive.ubuntu.com precise-updates/main i386 Packages [387 kB] Get: 26 http://archive.ubuntu.com precise-updates/restricted i386 Packages [6,732 B] Get: 27 http://archive.ubuntu.com precise-updates/universe i386 Packages [130 kB] Get: 28 http://archive.ubuntu.com precise-updates/multiverse i386 Packages [9,672 B] Get: 29 http://archive.ubuntu.com precise-updates/main TranslationIndex [3,564 B] Get: 30 http://archive.ubuntu.com precise-updates/multiverse TranslationIndex [2,605 B] Get: 31 http://archive.ubuntu.com precise-updates/restricted TranslationIndex [2,461 B] Get: 32 http://archive.ubuntu.com precise-updates/universe TranslationIndex [2,850 B] Get: 33 http://archive.ubuntu.com precise/main Translation-en [726 kB] Get: 34 http://archive.ubuntu.com precise/multiverse Translation-en [93.4 kB] Get: 35 http://archive.ubuntu.com precise/restricted Translation-en [2,395 B] Get: 36 http://archive.ubuntu.com precise/universe Translation-en [3,341 kB] Get: 37 http://archive.ubuntu.com precise-updates/main Translation-en [188 kB] Get: 38 http://archive.ubuntu.com precise-updates/multiverse Translation-en [5,414 B] Get: 39 http://archive.ubuntu.com precise-updates/restricted Translation-en [1,484 B] Get: 40 http://archive.ubuntu.com precise-updates/universe Translation-en [77.3 kB] Ign http://archive.ubuntu.com precise/main Translation-en_US Ign http://archive.ubuntu.com precise/multiverse Translation-en_US Ign http://archive.ubuntu.com precise/restricted Translation-en_US Ign http://archive.ubuntu.com precise/universe Translation-en_US Fetched 6,137 kB in 11s (538 kB/s) Reading package lists... Second run: Ign http://us-east-1.ec2.archive.ubuntu.com precise InRelease Ign http://us-east-1.ec2.archive.ubuntu.com precise-updates InRelease Get: 1 http://us-east-1.ec2.archive.ubuntu.com precise Release.gpg [198 B] Get: 2 http://us-east-1.ec2.archive.ubuntu.com precise-updates Release.gpg [198 B] Ign http://security.ubuntu.com precise-security InRelease Get: 3 http://us-east-1.ec2.archive.ubuntu.com precise Release [49.6 kB] Get: 4 http://us-east-1.ec2.archive.ubuntu.com precise-updates Release [49.6 kB] Get: 5 http://us-east-1.ec2.archive.ubuntu.com precise/main Sources [934 kB] Hit http://security.ubuntu.com precise-security Release.gpg Hit http://security.ubuntu.com precise-security Release Get: 6 http://us-east-1.ec2.archive.ubuntu.com precise/universe Sources [5,019 kB] Get: 7 http://security.ubuntu.com precise-security/main Sources [42.8 kB] Get: 8 http://security.ubuntu.com precise-security/universe Sources [13.5 kB] Hit http://security.ubuntu.com precise-security/main amd64 Packages Hit http://security.ubuntu.com precise-security/universe amd64 Packages Hit http://security.ubuntu.com precise-security/main i386 Packages Get: 9 http://us-east-1.ec2.archive.ubuntu.com precise/main amd64 Packages [1,273 kB] Hit http://security.ubuntu.com precise-security/universe i386 Packages Get: 10 http://us-east-1.ec2.archive.ubuntu.com precise/universe amd64 Packages [4,786 kB] Hit http://security.ubuntu.com precise-security/main TranslationIndex Hit http://security.ubuntu.com precise-security/universe TranslationIndex Hit http://security.ubuntu.com precise-security/main Translation-en Hit http://security.ubuntu.com precise-security/universe Translation-en Get: 11 http://us-east-1.ec2.archive.ubuntu.com precise/main i386 Packages [1,274 kB] Get: 12 http://us-east-1.ec2.archive.ubuntu.com precise/universe 
i386 Packages [4,796 kB] Get: 13 http://us-east-1.ec2.archive.ubuntu.com precise/main TranslationIndex [3,706 B] Get: 14 http://us-east-1.ec2.archive.ubuntu.com precise/universe TranslationIndex [2,922 B] Get: 15 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main Sources [163 kB] Get: 16 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe Sources [50.8 kB] Get: 17 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main amd64 Packages [382 kB] Get: 18 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe amd64 Packages [129 kB] Get: 19 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main i386 Packages [387 kB] Get: 20 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe i386 Packages [129 kB] Get: 21 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main TranslationIndex [3,564 B] Get: 22 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe TranslationIndex [2,850 B] Get: 23 http://us-east-1.ec2.archive.ubuntu.com precise/main Translation-en [726 kB] Get: 24 http://us-east-1.ec2.archive.ubuntu.com precise/universe Translation-en [3,341 kB] Get: 25 http://us-east-1.ec2.archive.ubuntu.com precise-updates/main Translation-en [188 kB] Get: 26 http://us-east-1.ec2.archive.ubuntu.com precise-updates/universe Translation-en [77.1 kB] Fetched 23.8 MB in 23s (1,026 kB/s) Reading package lists... Note. My question is almost exactly the same as Running 'apt-get upgrade' on Amazon EC2 AMI twice in succession upgrades very different packages except that I'm seeing this issue with aptitude updates rather than apt-get upgrades.
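    For what it's worth, a hedged way to make a provisioning script robust to this, rather than relying on running the update exactly twice, is to retry until the package index actually resolves the package; the loop bound and sleep below are arbitrary, and apt-cache policy is used only as a visibility check.

      #!/bin/sh
      # Retry 'aptitude update' until ruby1.9.1 has a candidate version, then install as before
      for i in 1 2 3 4 5; do
          aptitude update
          if apt-cache policy ruby1.9.1 | grep -q 'Candidate: [0-9]'; then
              break
          fi
          sleep 5
      done
      apt-get -o Dpkg::Options::="--force-confnew" --force-yes -fuy dist-upgrade
      aptitude install -y ruby1.9.1 ruby1.9.1-dev make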


  • How to resolve Unmet dependencies error?

    - by dandelion
    Using my new install of Ubuntu I haven't been able to download anything from the software center except the maryo game without the following error: The following packages have unmet dependencies: vlc: Depends: vlc-nox (= 1.1.12-2~oneiric1) but 1.1.12-2~oneiric1 is to be installed Depends: libaa1 (>= 1.4p5) but 1.4p5-38build1 is to be installed Depends: libavcodec-extra-53 (>= 4:0.7-1) but 4:0.7.3ubuntu0.11.10.1 is to be installed Depends: libavutil-extra-51 (>= 4:0.7-1) but 4:0.7.3ubuntu0.11.10.1 is to be installed Depends: libc6 (>= 2.8) but 2.13-20ubuntu5.1 is to be installed Depends: libfreetype6 (>= 2.2.1) but 2.4.4-2ubuntu1.1 is to be installed Depends: libgcc1 (>= 1:4.1.1) but 1:4.6.1-9ubuntu3 is to be installed Depends: libqtcore4 (>= 4:4.7.0~beta1) but 4:4.7.4-0ubuntu8.1 is to be installed Depends: libqtgui4 (>= 4:4.5.3) but 4:4.7.4-0ubuntu8.1 is to be installed Depends: libsdl-image1.2 (>= 1.2.10) but 1.2.10-2.1 is to be installed Depends: libsdl1.2debian (>= 1.2.10-1) but 1.2.14-6.1ubuntu4 is to be installed Depends: libstdc++6 (>= 4.6) but 4.6.1-9ubuntu3 is to be installed Depends: libva-x11-1 (> 1.0.12~) but it is not going to be installed Depends: libva1 (> 1.0.12~) but it is not going to be installed Depends: libxcb-randr0 (>= 1.1) but it is not going to be installed Depends: libxcb-xv0 (>= 1.2) but it is not going to be installed Depends: zlib1g (>= 1:1.2.3.3.dfsg) but 1:1.2.3.4.dfsg-3ubuntu3 is to be installed My system specs are version 11.10 64 bit. ge-g41m-es2l mother board amd 5770 video card wdc green 500 gig hard drive I have recently changed the motherboard, but otherwise have not changed my computer from when I used to be running the same version of Ubuntu. edit still unable to download output of sudo apt-get update output of sudo apt-get update ~$ sudo apt-get update Ign http://extras.ubuntu.com oneiric InRelease Ign http://security.ubuntu.com oneiric-security InRelease Ign http://archive.canonical.com oneiric InRelease Ign http://ppa.launchpad.net oneiric InRelease Ign http://us.archive.ubuntu.com oneiric InRelease Ign http://us.archive.ubuntu.com oneiric-updates InRelease Ign http://us.archive.ubuntu.com oneiric-backports InRelease Hit http://extras.ubuntu.com oneiric Release.gpg Hit http://archive.canonical.com oneiric Release.gpg Hit http://security.ubuntu.com oneiric-security Release.gpg Hit http://ppa.launchpad.net oneiric Release.gpg Ign http://us.archive.ubuntu.com oneiric-proposed InRelease Hit http://us.archive.ubuntu.com oneiric Release.gpg Hit http://extras.ubuntu.com oneiric Release Hit http://archive.canonical.com oneiric Release Hit http://security.ubuntu.com oneiric-security Release Hit http://ppa.launchpad.net oneiric Release Hit http://us.archive.ubuntu.com oneiric-updates Release.gpg Hit http://us.archive.ubuntu.com oneiric-backports Release.gpg Hit http://extras.ubuntu.com oneiric/main Sources Hit http://archive.canonical.com oneiric/partner i386 Packages Hit http://security.ubuntu.com oneiric-security/main Sources Hit http://ppa.launchpad.net oneiric/main Sources Hit http://us.archive.ubuntu.com oneiric-proposed Release.gpg Hit http://extras.ubuntu.com oneiric/main i386 Packages Ign http://extras.ubuntu.com oneiric/main TranslationIndex Hit http://ppa.launchpad.net oneiric/main i386 Packages Ign http://ppa.launchpad.net oneiric/main TranslationIndex Ign http://archive.canonical.com oneiric/partner TranslationIndex Hit http://security.ubuntu.com oneiric-security/restricted Sources Hit http://security.ubuntu.com oneiric-security/universe 
Sources Hit http://security.ubuntu.com oneiric-security/multiverse Sources Hit http://security.ubuntu.com oneiric-security/main i386 Packages Hit http://security.ubuntu.com oneiric-security/restricted i386 Packages Hit http://us.archive.ubuntu.com oneiric Release Hit http://us.archive.ubuntu.com oneiric-updates Release Hit http://security.ubuntu.com oneiric-security/universe i386 Packages Hit http://security.ubuntu.com oneiric-security/multiverse i386 Packages Hit http://security.ubuntu.com oneiric-security/main TranslationIndex Hit http://security.ubuntu.com oneiric-security/multiverse TranslationIndex Hit http://security.ubuntu.com oneiric-security/restricted TranslationIndex Hit http://security.ubuntu.com oneiric-security/universe TranslationIndex Hit http://us.archive.ubuntu.com oneiric-backports Release Hit http://security.ubuntu.com oneiric-security/main Translation-en Hit http://security.ubuntu.com oneiric-security/multiverse Translation-en Hit http://us.archive.ubuntu.com oneiric-proposed Release Hit http://us.archive.ubuntu.com oneiric/main Sources Hit http://us.archive.ubuntu.com oneiric/restricted Sources Hit http://us.archive.ubuntu.com oneiric/universe Sources Hit http://us.archive.ubuntu.com oneiric/multiverse Sources Hit http://security.ubuntu.com oneiric-security/restricted Translation-en Hit http://us.archive.ubuntu.com oneiric/restricted i386 Packages Hit http://us.archive.ubuntu.com oneiric/universe i386 Packages Hit http://us.archive.ubuntu.com oneiric/multiverse i386 Packages Hit http://us.archive.ubuntu.com oneiric/main TranslationIndex Hit http://us.archive.ubuntu.com oneiric/multiverse TranslationIndex Hit http://us.archive.ubuntu.com oneiric/restricted TranslationIndex Hit http://us.archive.ubuntu.com oneiric/universe TranslationIndex Hit http://us.archive.ubuntu.com oneiric-updates/main Sources Hit http://us.archive.ubuntu.com oneiric-updates/restricted Sources Hit http://security.ubuntu.com oneiric-security/universe Translation-en Hit http://us.archive.ubuntu.com oneiric-updates/universe Sources Hit http://us.archive.ubuntu.com oneiric-updates/multiverse Sources Hit http://us.archive.ubuntu.com oneiric-updates/main i386 Packages Hit http://us.archive.ubuntu.com oneiric-updates/restricted i386 Packages Hit http://us.archive.ubuntu.com oneiric-updates/universe i386 Packages Hit http://us.archive.ubuntu.com oneiric-updates/multiverse i386 Packages Hit http://us.archive.ubuntu.com oneiric-updates/main TranslationIndex Hit http://us.archive.ubuntu.com oneiric-updates/multiverse TranslationIndex Hit http://us.archive.ubuntu.com oneiric-updates/restricted TranslationIndex Hit http://us.archive.ubuntu.com oneiric-updates/universe TranslationIndex Hit http://us.archive.ubuntu.com oneiric-backports/main Sources Hit http://us.archive.ubuntu.com oneiric-backports/restricted Sources Hit http://us.archive.ubuntu.com oneiric-backports/universe Sources Hit http://us.archive.ubuntu.com oneiric-backports/multiverse Sources Hit http://us.archive.ubuntu.com oneiric-backports/main i386 Packages Hit http://us.archive.ubuntu.com oneiric-backports/restricted i386 Packages Hit http://us.archive.ubuntu.com oneiric-backports/universe i386 Packages Hit http://us.archive.ubuntu.com oneiric-backports/multiverse i386 Packages Hit http://us.archive.ubuntu.com oneiric-backports/main TranslationIndex Hit http://us.archive.ubuntu.com oneiric-backports/multiverse TranslationIndex Hit http://us.archive.ubuntu.com oneiric-backports/restricted TranslationIndex Hit http://us.archive.ubuntu.com 
oneiric-backports/universe TranslationIndex Ign http://extras.ubuntu.com oneiric/main Translation-en_US Ign http://ppa.launchpad.net oneiric/main Translation-en_US Hit http://us.archive.ubuntu.com oneiric-proposed/restricted i386 Packages Hit http://us.archive.ubuntu.com oneiric-proposed/main i386 Packages Hit http://us.archive.ubuntu.com oneiric-proposed/multiverse i386 Packages Hit http://us.archive.ubuntu.com oneiric-proposed/universe i386 Packages Hit http://us.archive.ubuntu.com oneiric-proposed/main TranslationIndex Hit http://us.archive.ubuntu.com oneiric-proposed/multiverse TranslationIndex Hit http://us.archive.ubuntu.com oneiric-proposed/restricted TranslationIndex Hit http://us.archive.ubuntu.com oneiric-proposed/universe TranslationIndex Ign http://archive.canonical.com oneiric/partner Translation-en_US Ign http://extras.ubuntu.com oneiric/main Translation-en Ign http://ppa.launchpad.net oneiric/main Translation-en Ign http://archive.canonical.com oneiric/partner Translation-en Get:1 http://us.archive.ubuntu.com oneiric/main i386 Packages [1,583 kB] Hit http://us.archive.ubuntu.com oneiric/main Translation-en Hit http://us.archive.ubuntu.com oneiric/multiverse Translation-en Hit http://us.archive.ubuntu.com oneiric/restricted Translation-en Hit http://us.archive.ubuntu.com oneiric/universe Translation-en Hit http://us.archive.ubuntu.com oneiric-updates/main Translation-en Hit http://us.archive.ubuntu.com oneiric-updates/multiverse Translation-en Hit http://us.archive.ubuntu.com oneiric-updates/restricted Translation-en Hit http://us.archive.ubuntu.com oneiric-updates/universe Translation-en Hit http://us.archive.ubuntu.com oneiric-backports/main Translation-en Hit http://us.archive.ubuntu.com oneiric-backports/multiverse Translation-en Hit http://us.archive.ubuntu.com oneiric-backports/restricted Translation-en Hit http://us.archive.ubuntu.com oneiric-backports/universe Translation-en Hit http://us.archive.ubuntu.com oneiric-proposed/main Translation-en Hit http://us.archive.ubuntu.com oneiric-proposed/multiverse Translation-en Hit http://us.archive.ubuntu.com oneiric-proposed/restricted Translation-en Hit http://us.archive.ubuntu.com oneiric-proposed/universe Translation-en Err http://us.archive.ubuntu.com oneiric/main i386 Packages 404 Not Found [IP: 91.189.92.179 80] Fetched 1 B in 2s (0 B/s) W: Failed to fetch http://us.archive.ubuntu.com/ubuntu/dists/oneiric/main/binary-i386/Packages 404 Not Found [IP: 91.189.92.179 80] E: Some index files failed to download. They have been ignored, or old ones used instead.

    Read the article

  • Microsoft TypeScript : A Typed Superset of JavaScript

    - by shiju
JavaScript is gradually becoming a ubiquitous programming language for the web, and its popularity is increasing day by day. JavaScript used to be just a language for the browser, but now we can write JavaScript apps for the browser, the server and mobile devices. With the advent of Node.js, you can build scalable, high-performance apps on the server with JavaScript. Still, many developers, especially those coming from statically typed languages, dislike JavaScript because of its lack of structure and the maintainability problems that come with it. Microsoft TypeScript tries to solve some of these problems for building scalable JavaScript apps. Microsoft TypeScript TypeScript is Microsoft's solution for writing scalable JavaScript programs with the help of static types, interfaces, modules and classes, along with greater tooling support. TypeScript is a typed superset of JavaScript that compiles to plain JavaScript. This makes developers coming from statically typed languages more productive: you can write scalable JavaScript apps in TypeScript in a more productive and maintainable manner, and then compile them to plain JavaScript that runs on any browser and any OS. TypeScript works with browser-based JavaScript apps as well as JavaScript apps that follow the CommonJS specification, so you can use it for building HTML5 apps, Node.js apps and WinRT apps. TypeScript provides tooling support for Visual Studio, Sublime Text, Vi and Emacs. Microsoft has open sourced the TypeScript language on CodePlex at http://typescript.codeplex.com/ Install TypeScript You can install the TypeScript compiler as a Node.js package via npm, or as a Visual Studio 2012 plug-in which enables better tooling support within the Visual Studio IDE. Since TypeScript is distributed as a Node.js package, it can also be installed on other operating systems such as Linux and Mac OS. The following command installs the TypeScript compiler via npm: npm install -g typescript TypeScript also provides a Visual Studio 2012 plug-in as an MSI file which installs TypeScript and adds rich tooling support within Visual Studio, letting developers write TypeScript apps with greater productivity and better maintainability. You can download the Visual Studio plug-in from here. Building JavaScript apps with TypeScript You can write typed JavaScript programs with TypeScript and then compile them to plain JavaScript code. The beauty of TypeScript is that it is already JavaScript: normal JavaScript programs are valid TypeScript programs, which means you can write plain JavaScript code and use the typed features whenever you want. TypeScript files use the extension .ts and are compiled with a compiler named tsc.
The following is a sample program written in TypeScript, greeter.ts:

class Greeter {
    greeting: string;
    constructor (message: string) {
        this.greeting = message;
    }
    greet() {
        return "Hello, " + this.greeting;
    }
}

var greeter = new Greeter("world");

var button = document.createElement('button')
button.innerText = "Say Hello"
button.onclick = function() {
    alert(greeter.greet())
}

document.body.appendChild(button)

The above program is compiled with the TypeScript compiler as shown in the picture below. The TypeScript compiler generates a JavaScript file after compiling the TypeScript program. If your TypeScript program references other TypeScript files, the compiler automatically generates a JavaScript file for each referenced file. The following code block shows the compiled plain JavaScript for the above greeter.ts, greeter.js:

var Greeter = (function () {
    function Greeter(message) {
        this.greeting = message;
    }
    Greeter.prototype.greet = function () {
        return "Hello, " + this.greeting;
    };
    return Greeter;
})();
var greeter = new Greeter("world");
var button = document.createElement('button');
button.innerText = "Say Hello";
button.onclick = function () {
    alert(greeter.greet());
};
document.body.appendChild(button);

Tooling Support with Visual Studio TypeScript provides a plug-in for Visual Studio which offers excellent support for writing TypeScript programs within the IDE. The following screenshot shows the Visual Studio template for TypeScript apps. The following are a few screenshots of the Visual Studio IDE for TypeScript apps. Summary TypeScript is Microsoft's solution for writing scalable JavaScript apps and solves a lot of the problems involved in larger JavaScript apps. I hope that this solution will attract developers who are looking to write maintainable, structured code in JavaScript without losing any productivity. TypeScript lets developers write JavaScript apps with the help of static types, interfaces, modules and classes, and also provides better productivity.
I am a passionate Node.js developer and will definitely try TypeScript for building Node.js apps on the Windows Azure cloud. I am really excited about writing Node.js apps in TypeScript from my favorite development IDE, Visual Studio. You can follow me on Twitter at @shijucv

    Read the article

  • BNF – how to read syntax?

    - by Piotr Rodak
A few days ago I read a post by Jen McCown (blog) about her idea of blogging about random articles from Books Online. I think this is a great idea, even if Jen says that it's not exciting or sexy. I noticed that many of the questions that appear on forums and other media arise from the simple fact that the people asking didn't bother to read and understand the manual – Books Online. Jen came up with a brilliant, concise acronym that describes this category of posts about Books Online very well – RTFM365. I take the liberty of tagging this post with the same acronym. I often come across questions of the type – 'Hey, I am trying to create a table, but I am getting an error'. The error often says that the syntax is invalid.

1 CREATE TABLE dbo.Employees
2 (guid uniqueidentifier CONSTRAINT DEFAULT Guid_Default NEWSEQUENTIALID() ROWGUIDCOL,
3 Employee_Name varchar(60)
4 CONSTRAINT Guid_PK PRIMARY KEY (guid) );

The answer is usually(1), 'Ok, let me check it out.. Ah yes – you have to put the name of the DEFAULT constraint before the type of the constraint:

1 CREATE TABLE dbo.Employees
2 (guid uniqueidentifier CONSTRAINT Guid_Default DEFAULT NEWSEQUENTIALID() ROWGUIDCOL,
3 Employee_Name varchar(60)
4 CONSTRAINT Guid_PK PRIMARY KEY (guid) );

Why do many people stumble on syntax errors? Is the syntax poorly documented? No, the issue is that the correct syntax of the CREATE TABLE statement is documented very well in Books Online and is.. intimidating. Many people can be taken aback by the rather complex block of code that describes all the intricacies of the statement. However, I don't know a better way of defining the syntax of a statement or command. The notation that is used to describe syntax in Books Online is a form of Backus-Naur notation, called BNF for short. This notation was invented around 50 years ago, and some say even earlier, around 400 BC – would you believe? Originally it was used to define the syntax of the, rather ancient now, ALGOL programming language (in the 1950s, not in ancient India). If you look closer at the definition of BNF, it turns out that the principles of this syntax are pretty simple. Here are a few bullet points:

italic_text is a placeholder for your identifier
<italic_text_in_angle_brackets> is a definition which is described further.
[everything in square brackets] is optional
{everything in curly brackets} is obligatory
everything | separated | by | operator is an alternative
::= "assigns" a definition to an identifier

Yes, these six simple points give you the key to understanding even the most complicated syntax definitions in Books Online. Books Online contains an article about syntax conventions – have you ever read it? Let's have a look at a fragment of the CREATE TABLE statement:

1 CREATE TABLE
2 [ database_name . [ schema_name ] . | schema_name . ] table_name
3 ( { <column_definition> | <computed_column_definition>
4 | <column_set_definition> }
5 [ <table_constraint> ] [ ,...n ] )
6 [ ON { partition_scheme_name ( partition_column_name ) | filegroup
7 | "default" } ]
8 [ { TEXTIMAGE_ON { filegroup | "default" } ]
9 [ FILESTREAM_ON { partition_scheme_name | filegroup
10 | "default" } ]
11 [ WITH ( <table_option> [ ,...n ] ) ]
12 [ ; ]

Let's look at line 2 of the above snippet: this line uses rules 3 and 5 from the list. So you know that you can create a table for which you have specified one of the following.
just name – the table will be created in the default user schema
schema name and table name – the table will be created in the specified schema
database name, schema name and table name – the table will be created in the specified database, in the specified schema
database name, .., table name – the table will be created in the specified database, in the default schema of the user.

Note that this single line of the notation describes each of the naming schemes in a deterministic way. The 'optionality' of the schema_name element is nested within the database_name.. section. You can use either database_name and an optional schema name, or just a schema name – this is specified by the pipe character '|'. The error that the user gets when executing the first script fragment in this post is as follows:

Msg 156, Level 15, State 1, Line 2
Incorrect syntax near the keyword 'DEFAULT'.

Ok, let's have a look at how to find out the correct syntax. Line number 3 of the BNF fragment above contains a reference to <column_definition>. Since column_definition is in angle brackets, we know that this is a reference to a notion described further in the code. And indeed, the very next fragment of BNF contains the syntax of the column definition.

1 <column_definition> ::=
2 column_name <data_type>
3 [ FILESTREAM ]
4 [ COLLATE collation_name ]
5 [ NULL | NOT NULL ]
6 [
7 [ CONSTRAINT constraint_name ] DEFAULT constant_expression ]
8 | [ IDENTITY [ ( seed ,increment ) ] [ NOT FOR REPLICATION ]
9 ]
10 [ ROWGUIDCOL ] [ <column_constraint> [ ...n ] ]
11 [ SPARSE ]

Look at line 7 in the above fragment. It says that the column can have a DEFAULT constraint which, if you want to name it, has to be prepended with the [CONSTRAINT constraint_name] sequence. The name of the constraint is optional, but I strongly recommend you make the effort of coming up with a meaningful name yourself. So the correct syntax of the CREATE TABLE statement from the beginning of the article is like this:

1 CREATE TABLE dbo.Employees
2 (guid uniqueidentifier CONSTRAINT Guid_Default DEFAULT NEWSEQUENTIALID() ROWGUIDCOL,
3 Employee_Name varchar(60)
4 CONSTRAINT Guid_PK PRIMARY KEY (guid) );

That is practically everything you should know about BNF. I encourage you to study the syntax definitions of various statements and commands in Books Online; you can find really interesting things hidden there. Technorati Tags: SQL Server,t-sql,BNF,syntax (1) No, my answer usually is a question – 'What error message? What does it say?'. You'd be surprised to know how many people think I can travel through time and space and look at their screen at the moment they received the error.

    Read the article

  • Soapi.CS : A fully relational fluent .NET Stack Exchange API client library

    - by Sky Sanders
    Soapi.CS for .Net / Silverlight / Windows Phone 7 / Mono as easy as breathing...: var context = new ApiContext(apiKey).Initialize(false); Question thisPost = context.Official .StackApps .Questions.ById(386) .WithComments(true) .First(); Console.WriteLine(thisPost.Title); thisPost .Owner .Questions .PageSize(5) .Sort(PostSort.Votes) .ToList() .ForEach(q=> { Console.WriteLine("\t" + q.Score + "\t" + q.Title); q.Timeline.ToList().ForEach(t=> Console.WriteLine("\t\t" + t.TimelineType + "\t" + t.Owner.DisplayName)); Console.WriteLine(); }); // if you can think it, you can get it. Output Soapi.CS : A fully relational fluent .NET Stack Exchange API client library 21 Soapi.CS : A fully relational fluent .NET Stack Exchange API client library Revision code poet Revision code poet Votes code poet Votes code poet Revision code poet Revision code poet Revision code poet Votes code poet Votes code poet Votes code poet Revision code poet Revision code poet Revision code poet Revision code poet Revision code poet Revision code poet Revision code poet Revision code poet Revision code poet Revision code poet Votes code poet Comment code poet Revision code poet Votes code poet Revision code poet Revision code poet Revision code poet Answer code poet Revision code poet Revision code poet 14 SOAPI-WATCH: A realtime service that notifies subscribers via twitter when the API changes in any way. Votes code poet Revision code poet Votes code poet Comment code poet Comment code poet Comment code poet Votes lfoust Votes code poet Comment code poet Comment code poet Comment code poet Comment code poet Revision code poet Comment lfoust Votes code poet Revision code poet Votes code poet Votes lfoust Votes code poet Revision code poet Comment Dave DeLong Revision code poet Revision code poet Votes code poet Comment lfoust Comment Dave DeLong Comment lfoust Comment lfoust Comment Dave DeLong Revision code poet 11 SOAPI-EXPLORE: Self-updating single page JavaSript API test harness Votes code poet Votes code poet Votes code poet Votes code poet Votes code poet Comment code poet Revision code poet Votes code poet Revision code poet Revision code poet Revision code poet Comment code poet Revision code poet Votes code poet Comment code poet Question code poet Votes code poet 11 Soapi.JS V1.0: fluent JavaScript wrapper for the StackOverflow API Comment George Edison Comment George Edison Comment George Edison Comment George Edison Comment George Edison Comment George Edison Answer George Edison Votes code poet Votes code poet Votes code poet Votes code poet Revision code poet Revision code poet Answer code poet Comment code poet Revision code poet Comment code poet Comment code poet Comment code poet Revision code poet Revision code poet Votes code poet Votes code poet Votes code poet Votes code poet Comment code poet Comment code poet Comment code poet Comment code poet Comment code poet 9 SOAPI-DIFF: Your app broke? Check SOAPI-DIFF to find out what changed in the API Votes code poet Revision code poet Comment Dennis Williamson Answer Dennis Williamson Votes code poet Votes Dennis Williamson Comment code poet Question code poet Votes code poet About A robust, fully relational, easy to use, strongly typed, end-to-end StackOverflow API Client Library. Out of the box, Soapi provides you with a robust client library that abstracts away most all of the messy details of consuming the API and lets you concentrate on implementing your ideas. 
A few features include: A fully relational model of the API data set exposed via a fully 'dot navigable' IEnumerable (LINQ) implementation. Simply tell Soapi what you want and it will get it for you. e.g. "On my first question, from the author of the first comment, get the first page of comments by that person on any post" my.Questions.First().Comments.First().Owner.Comments.ToList(); (yes this is a real expression that returns the data as expressed!) Full coverage of the API, all routes and all parameters with an intuitive syntax. Strongly typed Domain Data Objects for all API data structures. Eager and Lazy Loading of 'stub' objects. Eager\Lazy loading may be disabled. When finer grained control of requests is desired, the core RouteMap objects may be leveraged to request data from any of the API paths using all available parameters as documented on the help pages. A rich Asynchronous implementation. A configurable request cache to reduce unnecessary network traffic and to simplify your usage logic. There is no need to go out of your way to be frugal. You may set a distinct cache duration for any particular route. A configurable request throttle to ensure compliance with the api terms of usage and to simplify your code in that you do not have to worry about and respond to 50X errors. The RequestCache and Throttled Queue are thread-safe, so can make as many requests as you like from as many threads as you like as fast as you like and not worry about abusing the api or having to write reams of management/compensation code. Configurable retry threshold that will, by default, make up to 3 attempts to retrieve a request before failing. Every request made by Soapi is properly formed and directed so most any http error will be the result of a timeout or other network infrastructure. A retry buffer provides a level of fault tolerance that you can rely on. An almost identical javascript library, Soapi.JS, and it's full figured big brother, Soapi.JS2, that will enable you to leverage your server cycles and bandwidth for only those tasks that require it and offload things like status updates to the client's browser. License Licensed GPL Version 2 license. Why is Soapi.CS GPL? Can I get an LGPL license for Soapi.CS? (hint: probably) Platforms .NET 3.5 .NET 4.0 Silverlight 3 Silverlight 4 Windows Phone 7 Mono Download Source code lives @ http://soapics.codeplex.com. Binary releases are forthcoming. codeplex is acting up again. get the source and binaries @ http://bitbucket.org/bitpusher/soapi.cs/downloads The source is C# 3.5. and includes projects and solutions for the following IDEs Visual Studio 2008 Visual Studio 2010 ModoDevelop 2.4 Documentation Full documentation is available at http://soapi.info/help/cs/index.aspx Sample Code / Usage Examples Sample code and usage examples will be added as answers to this question. Full API Coverage all API routes are covered Full Parameter Parity If the API exposes it, Soapi giftwraps it for you. Building a simple app with Soapi.CS - a simple app that gathers all traces of a user in the whole stackiverse. Fluent Configuration - Setting up a Soapi.ApiContext could not be easier Bulk Data Import - A tiny app that quickly loads a SQLite data file with all users in the stackiverse. Paged Results - Soapi.CS transparently handles multi-page operations. Asynchronous Requests - Soapi.CS provides a rich asynchronous model that is especially useful when writing api apps in Silverlight or Windows Phone 7. 
Caching and Throttling - how and why Apps that use Soapi.CS Soapi.FindUser - .net utility for locating a user anywhere in the stackiverse Soapi.Explore - The entire API at your command Soapi.LastSeen - List users by last access time Add your app/site here - I know you are out there ;-) if you are not comfortable editing this post, simply add a comment and I will add it. The CS/SL/WP7/MONO libraries all compile the same code and with the exception of environmental considerations of Silverlight, the code samples are valid for all libraries. You may also find guidance in the test suites. More information on the SOAPI eco-system. Contact This library is currently the effort of me, Sky Sanders (code poet) and can be reached at gmail - sky.sanders Any who are interested in improving this library are welcome. Support Soapi You can help support this project by voting for Soapi's Open Source Ad post For more information about the origins of Soapi.CS and the rest of the Soapi eco-system see What is Soapi and why should I care?

    Read the article

  • Sending Messages to SignalR Hubs from the Outside

    - by Ricardo Peres
    Introduction You are by now probably familiarized with SignalR, Microsoft’s API for real-time web functionality. This is, in my opinion, one of the greatest products Microsoft has released in recent time. Usually, people login to a site and enter some page which is connected to a SignalR hub. Then they can send and receive messages – not just text messages, mind you – to other users in the same hub. Also, the server can also take the initiative to send messages to all or a specified subset of users on its own, this is known as server push. The normal flow is pretty straightforward, Microsoft has done a great job with the API, it’s clean and quite simple to use. And for the latter – the server taking the initiative – it’s also quite simple, just involves a little more work. The Problem The API for sending messages can be achieved from inside a hub – an instance of the Hub class – which is something that we don’t have if we are the server and we want to send a message to some user or group of users: the Hub instance is only instantiated in response to a client message. The Solution It is possible to acquire a hub’s context from outside of an actual Hub instance, by calling GlobalHost.ConnectionManager.GetHubContext<T>(). This API allows us to: Broadcast messages to all connected clients (possibly excluding some); Send messages to a specific client; Send messages to a group of clients. So, we have groups and clients, each is identified by a string. Client strings are called connection ids and group names are free-form, given by us. The problem with client strings is, we do not know how these map to actual users. One way to achieve this mapping is by overriding the Hub’s OnConnected and OnDisconnected methods and managing the association there. Here’s an example: 1: public class MyHub : Hub 2: { 3: private static readonly IDictionary<String, ISet<String>> users = new ConcurrentDictionary<String, ISet<String>>(); 4:  5: public static IEnumerable<String> GetUserConnections(String username) 6: { 7: ISet<String> connections; 8:  9: users.TryGetValue(username, out connections); 10:  11: return (connections ?? Enumerable.Empty<String>()); 12: } 13:  14: private static void AddUser(String username, String connectionId) 15: { 16: ISet<String> connections; 17:  18: if (users.TryGetValue(username, out connections) == false) 19: { 20: connections = users[username] = new HashSet<String>(); 21: } 22:  23: connections.Add(connectionId); 24: } 25:  26: private static void RemoveUser(String username, String connectionId) 27: { 28: users[username].Remove(connectionId); 29: } 30:  31: public override Task OnConnected() 32: { 33: AddUser(this.Context.Request.User.Identity.Name, this.Context.ConnectionId); 34: return (base.OnConnected()); 35: } 36:  37: public override Task OnDisconnected() 38: { 39: RemoveUser(this.Context.Request.User.Identity.Name, this.Context.ConnectionId); 40: return (base.OnDisconnected()); 41: } 42: } As you can see, I am using a static field to store the mapping between a user and its possibly many connections – for example, multiple open browser tabs or even multiple browsers accessing the same page with the same login credentials. The user identity, as is normal in .NET, is obtained from the IPrincipal which in SignalR hubs case is stored in Context.Request.User. Of course, this property will only have a meaningful value if we enforce authentication. 
Another way to go is by creating a group for each user that connects: 1: public class MyHub : Hub 2: { 3: public override Task OnConnected() 4: { 5: this.Groups.Add(this.Context.ConnectionId, this.Context.Request.User.Identity.Name); 6: return (base.OnConnected()); 7: } 8:  9: public override Task OnDisconnected() 10: { 11: this.Groups.Remove(this.Context.ConnectionId, this.Context.Request.User.Identity.Name); 12: return (base.OnDisconnected()); 13: } 14: } In this case, we will have a one-to-one equivalence between users and groups. All connections belonging to the same user will fall in the same group. So, if we want to send messages to a user from outside an instance of the Hub class, we can do something like this, for the first option – user mappings stored in a static field: 1: public void SendUserMessage(String username, String message) 2: { 3: var context = GlobalHost.ConnectionManager.GetHubContext<MyHub>(); 4: 5: foreach (String connectionId in HelloHub.GetUserConnections(username)) 6: { 7: context.Clients.Client(connectionId).sendUserMessage(message); 8: } 9: } And for using groups, its even simpler: 1: public void SendUserMessage(String username, String message) 2: { 3: var context = GlobalHost.ConnectionManager.GetHubContext<MyHub>(); 4:  5: context.Clients.Group(username).sendUserMessage(message); 6: } Using groups has the advantage that the IHubContext interface returned from GetHubContext has direct support for groups, no need to send messages to individual connections. Of course, you can wrap both mapping options in a common API, perhaps exposed through IoC. One example of its interface might be: 1: public interface IUserToConnectionMappingService 2: { 3: //associate and dissociate connections to users 4:  5: void AddUserConnection(String username, String connectionId); 6:  7: void RemoveUserConnection(String username, String connectionId); 8: } SignalR has built-in dependency resolution, by means of the static GlobalHost.DependencyResolver property: 1: //for using groups (in the Global class) 2: GlobalHost.DependencyResolver.Register(typeof(IUserToConnectionMappingService), () => new GroupsMappingService()); 3:  4: //for using a static field (in the Global class) 5: GlobalHost.DependencyResolver.Register(typeof(IUserToConnectionMappingService), () => new StaticMappingService()); 6:  7: //retrieving the current service (in the Hub class) 8: var mapping = GlobalHost.DependencyResolver.Resolve<IUserToConnectionMappingService>(); Now all you have to do is implement GroupsMappingService and StaticMappingService with the code I shown here and change SendUserMessage method to rely in the dependency resolver for the actual implementation. Stay tuned for more SignalR posts!
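One addendum to the dependency-resolution idea above (my own sketch, not part of the original post): once either mapping service is registered with GlobalHost.DependencyResolver, the hub itself can resolve the common interface and delegate the bookkeeping to it, so switching between the static-field and groups strategies requires no change to the hub code. This assumes the IUserToConnectionMappingService interface shown earlier; GroupsMappingService and StaticMappingService are the two implementations described in the text.

using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

public class MyHub : Hub
{
    //resolved from SignalR's built-in dependency resolver, as registered in the Global class
    private readonly IUserToConnectionMappingService mapping =
        GlobalHost.DependencyResolver.Resolve<IUserToConnectionMappingService>();

    public override Task OnConnected()
    {
        //delegate the user-to-connection bookkeeping to whichever implementation is registered
        mapping.AddUserConnection(this.Context.Request.User.Identity.Name, this.Context.ConnectionId);
        return (base.OnConnected());
    }

    public override Task OnDisconnected()
    {
        mapping.RemoveUserConnection(this.Context.Request.User.Identity.Name, this.Context.ConnectionId);
        return (base.OnDisconnected());
    }
}

With this in place, the sender on the outside only needs the hub context from GetHubContext, exactly as in the SendUserMessage examples above.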

    Read the article

  • Using Unity – Part 2

    - by nmarun
    In the first part of this series, we created a simple project and learned how to implement IoC pattern using Unity. In this one, I’ll show how you can instantiate other types that implement our IProduct interface. One place where this one would want to use this feature is to create mock types for testing purposes. Alright, let’s dig in. I added another class – Product2.cs  to the ProductModel project. 1: public class Product2 : IProduct 2: { 3: public string Name { get; set;} 4: public Category Category { get; set; } 5: public DateTime MfgDate { get;set; } 6:  7: public Product2() 8: { 9: Name = "Canon Digital Rebel XTi"; 10: Category = new Category {Name = "Electronics", SubCategoryName = "Digital Cameras"}; 11: MfgDate = DateTime.Now; 12: } 13:  14: public string WriteProductDetails() 15: { 16: return string.Format("Name: {0}<br/>Category: {1}<br/>Mfg Date: {2}", 17: Name, Category, MfgDate.ToShortDateString()); 18: } 19: } Highlights of this class are that it implements IProduct interface and it has some different properties than the Product class. The Category class looks like below: 1: public class Category 2: { 3: public string Name { get; set; } 4: public string SubCategoryName { get; set; } 5:  6: public override string ToString() 7: { 8: return string.Format("{0} - {1}", Name, SubCategoryName); 9: } 10: } We’ll go to our web.config file to add the configuration information about this new class – Product2 that we created. Let’s first add a typeAlias element. 1: <typeAlias alias="Product2" type="ProductModel.Product2, ProductModel"/> That’s all that is needed for us to get an instance of Product2 in our application. I have a new button added to the .aspx page and the click event of this button is where all the magic happens: 1: private IUnityContainer unityContainer; 2: protected void Page_Load(object sender, EventArgs e) 3: { 4: unityContainer = Application["UnityContainer"] as IUnityContainer; 5: 6: if (unityContainer == null) 7: { 8: productDetailsLabel.Text = "ERROR: Unity Container not populated in Global.asax.<p />"; 9: } 10: else 11: { 12: if (!IsPostBack) 13: { 14: IProduct productInstance = unityContainer.Resolve<IProduct>(); 15: productDetailsLabel.Text = productInstance.WriteProductDetails(); 16: } 17: } 18: } 19:  20: protected void Product2Button_Click(object sender, EventArgs e) 21: { 22: unityContainer.RegisterType<IProduct, Product2>(); 23: IProduct product2Instance = unityContainer.Resolve<IProduct>(); 24: productDetailsLabel.Text = product2Instance.WriteProductDetails(); 25: } The unityContainer instance is set in the Page_Load event. Line 22 in the click event of the Product2Button registers a type mapping in the container. In English, this means that when unityContainer tries to resolve for IProduct, it gets an instance of Product2. Once this code runs, following output is rendered: There’s another way of doing this. You can resolve an instance of the requested type with a name from the container. 
We'll have to update the container element of our web.config file to include the following:

1: <container name="unityContainer">
2: <types>
3: <type type="IProduct" mapTo="Product"/>
4: <!-- Named mapping for IProduct to Product -->
5: <type type="IProduct" mapTo="Product" name="LegacyProduct" />
6: <!-- Named mapping for IProduct to Product2 -->
7: <type type="IProduct" mapTo="Product2" name="NewProduct" />
8: </types>
9: </container>

I've added a DropDownList and a button to the design page:

1: <asp:DropDownList ID="ModelTypesList" runat="server">
2: <asp:ListItem Text="Legacy Product" Value="LegacyProduct" />
3: <asp:ListItem Text="New Product" Value="NewProduct" />
4: </asp:DropDownList>
5: <br />
6: <asp:Button ID="SelectedModelButton" Text="Get Selected Instance" runat="server"
7: onclick="SelectedModelButton_Click" />

1: protected void SelectedModelButton_Click(object sender, EventArgs e)
2: {
3: // get the selected value: LegacyProduct or NewProduct
4: string modelType = ModelTypesList.SelectedValue;
5: // pass the modelType to the Resolve method
6: IProduct customModel = unityContainer.Resolve<IProduct>(modelType);
7: productDetailsLabel.Text = customModel.WriteProductDetails();
8: }

Pretty straightforward, right? The only thing to note here is that the values of the dropdown list items need to match the name attribute of the type. Depending on what you select, you'll get an instance of either the Product class or the Product2 class, and the corresponding WriteProductDetails() method is called. Now you see how either of these approaches can be used to create mock objects for your test project. See the code here. I'll continue to share more of Unity in the next blog.
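A side note of my own (not from the original post): if you would rather not touch web.config, the same named mappings can be registered directly on the container in code, and the resolving code above stays exactly the same. This is only a minimal sketch assuming the IProduct, Product and Product2 types from this series; the ContainerSetup class name is made up for illustration.

using Microsoft.Practices.Unity;

public static class ContainerSetup
{
    //equivalent to the named <type> entries in the web.config container element
    public static void RegisterNamedMappings(IUnityContainer unityContainer)
    {
        unityContainer.RegisterType<IProduct, Product>("LegacyProduct");
        unityContainer.RegisterType<IProduct, Product2>("NewProduct");
    }

    //modelType is "LegacyProduct" or "NewProduct", matching the dropdown values
    public static IProduct ResolveByName(IUnityContainer unityContainer, string modelType)
    {
        return unityContainer.Resolve<IProduct>(modelType);
    }
}

ResolveAll<IProduct>() would additionally return every named registration in one go, which can be handy if you want to build the dropdown list from the container itself.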

    Read the article

  • Building services with the .NET framework Cont’d

    - by Allan Rwakatungu
In my previous blog I wrote an introductory post on services and how you can build services using the .NET framework's Windows Communication Foundation (WCF). In this post I will show how to develop a real-world application using WCF. The problem During the last meeting we realized developers in Uganda are not so cool – they don't use Twitter, so they may not get the latest news and updates from the technology world. We also noticed they mostly use kabiriti phones (jokes). With their kabiriti phones they are unable to access the Twitter web client or alternative Twitter mobile clients like tweetdeck, twirl or tweetie. However, the kabiriti phones support SMS (Yeeeeeeei). So what we are going to do to make these developers cool and keep them updated is enable them to receive tweets via SMS. We shall also enable them to develop their own applications that can extend this functionality. Analysis Thanks to services and open APIs, solving our problem is going to be easy. 1. To get tweets we can use the Twitter service for FREE. 2. To send SMS we shall use www.clickatell.com/ as they can send SMS to any country in the world. Besides, we could not find any local service that offers APIs for sending SMS :(. 3. To enable developers to integrate with our application so that they can extend it and build even cooler applications we use WCF. In addition, because connectivity might be an issue, we decided to use WCF for its inbuilt queuing features. We also chose WCF because this is a post about .NET and WCF :). The Code Accessing the tweets To consume Twitter's REST API we shall use the WCF REST starter kit. Like its name indicates, the REST starter kit is a set of .NET framework classes that enable developers to create and access REST-style services (like the Twitter service).
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Microsoft.Http;
using System.Net;
using System.Xml.Linq;

namespace UG.Demo
{
    public class TwitterService
    {
        public IList<TwitterStatus> SomeMethodName()
        {
            //Connect to the twitter service (HttpClient is part of the REST starter kit classes)
            HttpClient cl = new HttpClient("http://api.twitter.com/1/statuses/friends_timeline.xml");
            //Supply your basic authentication credentials
            cl.TransportSettings.Credentials = new NetworkCredential("ourusername", "ourpassword");
            //issue an http GET request
            HttpResponseMessage resp = cl.Get();
            //ensure we got response 200
            resp.EnsureStatusIsSuccessful();
            //use XLinq to parse the REST XML
            var statuses = from r in resp.Content.ReadAsXElement().Descendants("status")
                           select new TwitterStatus
                           {
                               User = r.Element("user").Element("screen_name").Value,
                               Status = r.Element("text").Value
                           };
            return statuses.ToList();
        }
    }

    public class TwitterStatus
    {
        public string User { get; set; }
        public string Status { get; set; }
    }
}

Sending SMS

public class SMSService
{
    public void Send(string phone, string message)
    {
        HttpClient cl1 = new HttpClient();
        //the clickatell XML format for sending SMS
        string xml = String.Format("<clickAPI><sendMsg><api_id>3239621</api_id><user>ourusername</user><password>ourpassword</password><to>{0}</to><text>{1}</text></sendMsg></clickAPI>", phone, message);
        //Post form data
        HttpUrlEncodedForm form = new HttpUrlEncodedForm();
        form.Add("data", xml);
        System.Net.ServicePointManager.Expect100Continue = false;
        string uri = @"http://api.clickatell.com/xml/xml";
        HttpResponseMessage resp = cl1.Post(uri, form.CreateHttpContent());
        resp.EnsureStatusIsSuccessful();
    }
}
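To round the sample off, the two helper classes can be exposed to other developers through a WCF service contract, which is what the Analysis section above promises. The sketch below is my own illustration rather than code from the article - the ITweetSmsService name and its single operation are invented - and it simply reuses the TwitterService and SMSService classes shown above:

using System.ServiceModel;

[ServiceContract]
public interface ITweetSmsService
{
    //sends the latest tweets from the configured account to the given phone number
    [OperationContract]
    void SendLatestTweets(string phone);
}

public class TweetSmsService : ITweetSmsService
{
    public void SendLatestTweets(string phone)
    {
        var twitter = new TwitterService();
        var sms = new SMSService();

        foreach (var status in twitter.SomeMethodName())
        {
            sms.Send(phone, string.Format("{0}: {1}", status.User, status.Status));
        }
    }
}

Hosting the implementation (for example in IIS over basicHttpBinding, or over a queued binding if connectivity is the concern mentioned earlier) is standard WCF configuration and is omitted here.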

    Read the article

  • The new workflow management of Oracle´s Hyperion Planning: Define more details with Planning Unit Hierarchies and Promotional Paths

    - by Alexandra Georgescu
    After having been almost unchanged for several years, starting with the 11.1.2 release of Oracle´s Hyperion Planning the Process Management has not only got a new name: “Approvals” now is offering the possibility to further split Planning Units (comprised of a unique Scenario-Version-Entity combination) into more detailed combinations along additional secondary dimensions, a so called Planning Unit Hierarchy, and also to pre-define a path of planners, reviewers and approvers, called Promotional Path. I´d like to introduce you to changes and enhancements in this new process management and arouse your curiosity for checking out more details on it. One reason of using the former process management in Planning was to limit data entry rights to one person at a time based on the assignment of a planning unit. So the lowest level of granularity for this assignment was, for a given Scenario-Version combination, the individual entity. Even if in many cases one person wasn´t responsible for all data being entered into that entity, but for only part of it, it was not possible to split the ownership along another additional dimension, for example by assigning ownership to different accounts at the same time. By defining a so called Planning Unit Hierarchy (PUH) in Approvals this gap is now closed. Complementing new Shared Services roles for Planning have been created in order to manage set up and use of Approvals: The Approvals Administrator consisting of the following roles: Approvals Ownership Assigner, who assigns owners and reviewers to planning units for which Write access is assigned (including Planner responsibilities). Approvals Supervisor, who stops and starts planning units and takes any action on planning units for which Write access is assigned. Approvals Process Designer, who can modify planning unit hierarchy secondary dimensions and entity members for which Write access is assigned, can also modify scenarios and versions that are assigned to planning unit hierarchies and can edit validation rules on data forms for which access is assigned. (this includes as well Planner and Ownership Assigner responsibilities) Set up of a Planning Unit Hierarchy is done under the Administration menu, by selecting Approvals, then Planning Unit Hierarchy. Here you create new PUH´s or edit existing ones. The following window displays: After providing a name and an optional description, a pre-selection of entities can be made for which the PUH will be defined. Available options are: All, which pre-selects all entities to be included for the definitions on the subsequent tabs None, manual entity selections will be made subsequently Custom, which offers the selection for an ancestor and the relative generations, that should be included for further definitions. Finally a pattern needs to be selected, which will determine the general flow of ownership: Free-form, uses the flow/assignment of ownerships according to Planning releases prior to 11.1.2 In Bottom-up, data input is done at the leaf member level. Ownership follows the hierarchy of approval along the entity dimension, including refinements using a secondary dimension in the PUH, amended by defined additional reviewers in the promotional path. Distributed, uses data input at the leaf level, while ownership starts at the top level and then is distributed down the organizational hierarchy (entities). After ownership reaches the lower levels, budgets are submitted back to the top through the approval process. 
Proceeding to the next step, now a secondary dimension and the respective members from that dimension might be selected, in order to create more detailed combinations underneath each entity. After selecting the Dimension and a Parent Member, the definition of a Relative Generation below this member assists in populating the field for Selected Members, while the Count column shows the number of selected members. For refining this list, you might click on the icon right beside the selected member field and use the check-boxes in the appearing list for deselecting members. -------------------------------------------------------------------------------------------------------- TIP: In order to reduce maintenance of the PUH due to changes in the dimensions included (members added, moved or removed) you should consider to dynamically link those dimensions in the PUH with the dimension hierarchies in the planning application. For secondary dimensions this is done using the check-boxes in the Auto Include column. For the primary dimension, the respective selection criteria is applied by right-clicking the name of an entity activated as planning unit, then selecting an item of the shown list of include or exclude options (children, descendants, etc.). Anyway in order to apply dimension changes impacting the PUH a synchronization must be run. If this is really necessary or not is shown on the first screen after selecting from the menu Administration, then Approvals, then Planning Unit Hierarchy: under Synchronized you find the statuses Yes, No or Locked, where the last one indicates, that another user is just changing or synchronizing the PUH. Select one of the not synchronized PUH´s (status No) and click the Synchronize option in order to execute. -------------------------------------------------------------------------------------------------------- In the next step owners and reviewers are assigned to the PUH. Using the icons with the magnifying glass right besides the columns for Owner and Reviewer the respective assignments can be made in the ordermthat you want them to review the planning unit. While it is possible to assign only one owner per entity or combination of entity+ member of the secondary dimension, the selection for reviewers might consist of more than one person. The complete Promotional Path, including the defined owners and reviewers for the entity parents, can be shown by clicking the icon. In addition optional users might be defined for being notified about promotions for a planning unit. -------------------------------------------------------------------------------------------------------- TIP: Reviewers cannot change data, but can only review data according to their data access permissions and reject or promote planning units. -------------------------------------------------------------------------------------------------------- In order to complete your PUH definitions click Finish - this saves the PUH and closes the window. As a final step, before starting the approvals process, you need to assign the PUH to the Scenario-Version combination for which it should be used. From the Administration menu select Approvals, then Scenario and Version Assignment. Expand the PUH in order to see already existing assignments. Under Actions click the add icon and select scenarios and versions to be assigned. If needed, click the remove icon in order to delete entries. After these steps, set up is completed for starting the approvals process. 
Start, stop and control of the approvals process is now done under the Tools menu, then Manage Approvals. The new PUH feature is complemented by various additional settings and features; at least some of them should be mentioned here: Export/Import of PUHs; an Out of Office agent; validation rules that change the promotional/approval path if violated (including the use of User-defined Attributes (UDAs)); and various new and helpful reviewer actions with corresponding approval states. About the Author: Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007, where he is a Principal Education Consultant. Based on these many years of working with Hyperion products, he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning and Hyperion Web Analysis.

    Read the article

  • Adaptive ADF/WebCenter template for the iPad

    - by Maiko Rocha
    One of my WebCenter Portal customers was asking about adaptive design with ADF/WebCenter Portal and how they could go about creating an adaptive iPad template for their WebCenter Portal application. They were looking not only for the out-of-the-box support for mobile Safari which is certified against PS5+ (11.1.1.6) for ADF/WebCenter - but also to create a specific template to streamline their workflow on the iPad. Seems like they wanted something in the lines of Yahoo! Mail provides for the iPad - so the example I will use is shamelessly inspired by Y! Mail's iPad UI.  But first, let's quickly understand how can we bake in some adaptive goodness into ADF Faces. First thing we need to understand is, yes, there are a couple of constraints that we will need to work around, namely, the use or layout managers and skins. Please also keep in mind that I'm not and I don't pretend to be a web designer, much less an UX specialist, so feel free to leave your thoughts on the matter in the comments section. Now, back to the limitations. Layout Managers ADF Faces layout managers create an abstraction on top of the generated HTML code for a page so a developer doesn't need to be worried about how to size and dimension the UI layout (eg, af:panelStretchLayout). Although layout managers are very helpful, in this specific situation we will need to know a little bit more of how the final HTML is being rendered so we can apply the CSS class accordingly and create transition containers where the media queries will be applied - now, if you're using 11gR2 (11.1.2.2.3) there's the new component af:panelGridLayout (here and here) that will greatly improve creating responsive templates and pages because it is based on the grid/fluid systems and will generate straight out to DIVs on your final page. For now, I'm limited to PS5 and the af:panelStretchLayout component as a starting point because that's the release my customer is on. Skins You won't be able to use media queries, or use anything with "@" notation on the skin CSS file - the skin pre-processor will remove all extraneous "@" from the CSS file. The solution is to split your CSS in two separate files: a skin CSS file and plain CSS where you will add the media queries. The issue here is that you won't be able to use media queries for any faces components. We can, though, still apply the media queries for the components like af:panelGroupLayout and af:panelBorderLayout through their styleClass property to enable these components to be responsive to to the iPad orientation, by changing its dimensions, font sizes, hide/show areas, etc. Difference between responsive and adaptive design The best definition of adaptive vs responsive web design I could find is this: “Responsive web design,” as coined by Ethan Marcotte, means “fluid grids, fluid images/media & media queries.” “Adaptive web design,” as I use it, is about creating interfaces that adapt to the user’s capabilities (in terms of both form and function). To me, “adaptive web design” is just another term for “progressive enhancement” of which responsive web design can (an often should) be an integral part, but is a more holistic approach to web design in that it also takes into account varying levels of markup, CSS, JavaScript and assistive technology support. Responsive/adapative web design is much more than slapping an HTML template with CSS around your content or application. 
The content and application themselves are part of your web design - in other words, a responsive template is just an afterthought if it is not originating from a responsive design the involves the whole web application/s. Tips on responsive / adapative design with ADF/WebCenter Some of the tips listed below were already mentioned in multiple blog posts about ADF layout and skinning, but it is still worth remembering: a simple guideline for ADF/WebCenter apps would be to first create a high-level group of devices, for example: smartphones, tablets,  and desktop. For each of these large groups, create the basic structure to provide responsiveness: a page template, a skin, and an external CSS: pagetemplate_smartphone.jspx, smartphone_skin.css, smartphone-responsive.css pagetemplate_tablet.jspx, tablet_skin.css, tablet-responsive.css pagetemplate_desktop.jspx, desktop_skin.css, desktop-responsive.css These three assets can be changed on the fly through an user-agent check on the server side, delivering the right UI to the right device. Within each of the assets, you can make fine adjustments for each subgroup of devices with media queries - for example, smart phones with different screen dimensions and pixel density. Having these three groups and the corresponding assets per group seem to be a good compromise between trying to put everything on a single set of assets - specially considering the constraints above - and going to the other side of the spectrum to create assets per discrete device (iPhone4, iPhone5, Nexus, S3, etc.). Keep in mind that these are my rules and are not in any shape or form a best practice - this is how it fits best for the scenarios I've been working with. If you need to use HTML tags on your page, surround them with af:group to protect the DOM structure For stretchable/fluid layouts: Use non-stretching containers: panelGroupLayout, panelBorderLayout, … panelBorderLayout can be used to approximate HTML table component To avoid multiple scroll bars, do not nest scrolling PanelGroupLayout components. Consider layout="vertical" For stretchable/fluid layouts: Most stretchable ADF components also work in flowing context with dimensionsFrom="auto" To stretch a component horizontally, use styleClass="AFStretchWidth" instead of  "width:100%" Skinning Don't use CSS3 @media, @import, animations, etc. on skin css files. They will be removed. CSS3 properties within a class (box-shadow, transition, etc.) work just fine. Consider resetting some skin classes to better control their rendering: body {color: inherit;font: inherit;} af|document {-tr-inhibit: all;} af|commandLink {-tr-inhibit: all;} af|goLink {-tr-inhibit: all;} af|inputText::content {font: inherit;} Specific meta tags and CSS properties: Use  <meta name="viewport" content="width=device-width, initial-scale=1.0, minimum-scale=1.0, maximum-scale=1.0"/> to avoid zooming (if you want) Use -webkit-overflow-scrolling: touch to enable native momentum scrolling within overflown areas (here) Use text-rendering: optmizeLegibility to improve readability. (here) User text-overflow: ellipsis to gracefully crop overflown text. (here) The meta-tags are included in each and every page in the metaContainer facet of af:document tag. You can also use a javascript to inject the meta-tags from the template. For the purpose of the example, I wanted to use as few workarounds as possible.   
The iPad template and sample application
This sample application has been built as a WebCenter Portal application, but you will also be able to reuse the template and techniques in your vanilla ADF application. Keep in mind that I'm neither a designer nor a CSS specialist, so please don't bash me too much about the messy CSS file you'll find in the application. I've extended the provided PreferencesBean class that comes with WebCenter Portal and added code to dynamically change the template and skin on the fly.
This is the sample application in landscape orientation:
This is the sample application in portrait orientation - the left side menu hides automatically based on a CSS media query:
Another screenshot with a skinned popup opened:
This is a sample application for you to play with - ideally you shouldn't use it as a starting point. On the left side bar you will find links rendered from a WebCenter Portal navigation model - the links trigger a full request through an af:goLink, while the light blue PPR button triggers a PPR navigation. The dark blue toolbar buttons at the top don't have any function, while the Approve and Reject buttons show a skinned popup. The search box, of course, doesn't have any behavior attached to it either. There's a known issue right now with some PPR calls randomly generating a 403 error that redirects to the login page - I didn't have time to investigate whether this is iOS6 specific or not - if you have any insights please let me know your findings. You can download the sample here.

    Read the article

  • How Oracle Data Integration Customers Differentiate Their Business in Competitive Markets

    - by Irem Radzik
With data being a central force in driving innovation and competing effectively, data integration has become a key IT approach to remove silos and ensure working with consistent and trusted data. Especially with the release of 12c version, Oracle Data Integrator and Oracle GoldenGate offer easy-to-use and high-performance solutions that help companies with their critical data initiatives, including big data analytics, moving to cloud architectures, modernizing and connecting transactional systems and more. In a recent press release we announced the great momentum and analyst recognition Oracle Data Integration products have achieved in the data integration and replication market. In this press release we described some of the key new features of Oracle Data Integrator 12c and Oracle GoldenGate 12c. In addition, a few from our 4500+ customers explained how Oracle’s data integration platform helped them achieve their business goals. In this blog post I would like to go over what these customers shared about their experience. Land O’Lakes is one of America’s premier member-owned cooperatives, and offers an extensive line of agricultural supplies, as well as production and business services. Rich Bellefeuille, manager, ETL & data warehouse for Land O’Lakes told us how GoldenGate helped them modernize their critical ERP system without impacting service and how they are moving to new projects with Oracle Data Integrator 12c: “With Oracle GoldenGate 11g, we've been able to migrate our enterprise-wide implementation of Oracle’s JD Edwards EnterpriseOne, ERP system, to a new database and application server platform with minimal downtime to our business. Using Oracle GoldenGate 11g we reduced database migration time from nearly 30 hours to less than 30 minutes. Given our quick success, we are considering expansion of our Oracle GoldenGate 12c footprint. We are also in the midst of deploying a solution leveraging Oracle Data Integrator 12c to manage our pricing data to handle orders more effectively and provide a better relationship with our clients. We feel we are gaining higher productivity and flexibility with Oracle's data integration products." ICON, a global provider of outsourced development services to the pharmaceutical, biotechnology and medical device industries, highlighted the competitive advantage that a solid data integration foundation brings. Diarmaid O’Reilly, enterprise data warehouse manager, ICON plc said “Oracle Data Integrator enables us to align clinical trials intelligence with the information needs of our sponsors. It helps differentiate ICON’s services in an increasingly competitive drug-development industry."  You can find more info on ICON's implementation here. A popular use case for Oracle GoldenGate’s real-time data integration is offloading operational reporting from critical transaction processing systems. SolarWorld, one of the world’s largest solar-technology producers and the largest U.S. solar panel manufacturer, implemented Oracle GoldenGate for real-time data integration of manufacturing data for fast analysis. Russ Toyama, U.S. 
senior database administrator for SolarWorld, told us real-time data helps their operations and GoldenGate’s solution supports high performance of their manufacturing systems: “We use Oracle GoldenGate for real-time data integration into our decision support system, which performs real-time analysis for manufacturing operations to continuously improve product quality, yield and efficiency. With reliable and low-impact data movement capabilities, Oracle GoldenGate also helps ensure that our critical manufacturing systems are stable and operate with high performance."  You can watch the full interview with SolarWorld's Russ Toyama here. Starwood Hotels and Resorts is one of the many customers that found out how well Oracle Data Integration products work with Oracle Exadata. Gordon Light, senior director of information technology for StarWood Hotels, says they had a notable performance gain in loading their Oracle Exadata reporting environment: “We leverage Oracle GoldenGate to replicate data from our central reservations systems and other OLTP databases – significantly decreasing the overall ETL duration. Moving forward, we plan to use Oracle GoldenGate to help the company achieve near-real-time reporting.” You can listen to Starwood Hotels' implementation story here. Many companies combine the power of Oracle GoldenGate with Oracle Data Integrator to have a single, integrated data integration platform for a variety of use cases across the enterprise. Ufone is another good example of that. The leading mobile communications service provider of Pakistan has improved customer service using timely customer data in its data warehouse. Atif Aslam, head of management information systems for Ufone, says: “Oracle Data Integrator and Oracle GoldenGate help us integrate information from various systems and provide up-to-date and real-time CRM data updates hourly, rather than daily. The applications have simplified data warehouse operations and allowed business users to make faster and better informed decisions to protect revenue in the fast-moving Pakistani telecommunications market.” You can read more about Ufone's use case here. In our Oracle Data Integration 12c launch webcast back in November we also heard from BT’s CTO Surren Parthab about their use of GoldenGate for moving to a private cloud architecture. Surren also shared his perspectives on the Oracle Data Integrator 12c and Oracle GoldenGate 12c releases. You can watch the video here. These are only a few examples of leading companies that have made data integration and real-time data access a key part of their data governance and IT modernization initiatives. They have seen real improvements in how their businesses operate and differentiate in today’s competitive markets. 
You can read about other customer examples in our Ebook: The Path to the Future and access resources including white papers, data sheets, podcasts and more via our Oracle Data Integration resource kit.

    Read the article

  • Web Site Performance and Assembly Versioning – Part 3 Versioning Combined Files Using Mercurial

    - by capgpilk
Minification and Concatenation of JavaScript and CSS Files
Versioning Combined Files Using Subversion
Versioning Combined Files Using Mercurial – this post

I have worked on a project recently where there was a need to version the system (library dll, css and javascript files) by date and Mercurial revision number. This was in the format 0.12.524.407, i.e. {major}.{year}.{month}{date}.{mercurial revision}. Each time there is an internal build using the CI server, it would label the files using this format. When it came time to do a major release, it became v1.{year}.{month}{date}.{mercurial revision}, with each public release having a major version increment. As a requirement, each assembly also had to have a new GUID on each build. So, as in previous posts, we need to edit the csproj file and add a couple of default targets.

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" DefaultTargets="Hg-Revision;AssemblyInfo;Build"
         xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>

Right below the closing tag of the entire project we add our two targets; the first is to get the Mercurial revision number. We first need to import the tasks for MSBuild, which can be downloaded from http://msbuildhg.codeplex.com/

<Import Project="..\Tools\MSBuild.Mercurial\MSBuild.Mercurial.Tasks" />

<Target Name="Hg-Revision">
  <HgVersion LocalPath="$(MSBuildProjectDirectory)" Timeout="5000"
             LibraryLocation="C:\TortoiseHg\">
    <Output TaskParameter="Revision" PropertyName="Revision" />
  </HgVersion>
  <Message Text="Last revision from HG: $(Revision)" />
</Target>

Here the main Mercurial files are located at c:\TortoiseHg. To get a valid GUID we need to escape from the csproj markup and call some C# code, which we put in a property group for later reference.

<PropertyGroup>
  <GuidGenFunction>
    <![CDATA[
      public static string ScriptMain() {
        return System.Guid.NewGuid().ToString().ToUpper();
      }
    ]]>
  </GuidGenFunction>
</PropertyGroup>

Now we add in our target for generating the GUID. 
<Target Name="AssemblyInfo">
  <Script Language="C#" Code="$(GuidGenFunction)">
    <Output TaskParameter="ReturnValue" PropertyName="NewGuid" />
  </Script>
  <Time Format="yy">
    <Output TaskParameter="FormattedTime" PropertyName="year" />
  </Time>
  <Time Format="Mdd">
    <Output TaskParameter="FormattedTime" PropertyName="daymonth" />
  </Time>
  <AssemblyInfo CodeLanguage="CS" OutputFile="Properties\AssemblyInfo.cs"
                AssemblyTitle="name" AssemblyDescription="description"
                AssemblyCompany="none" AssemblyProduct="product"
                AssemblyCopyright="Copyright ©"
                ComVisible="false" CLSCompliant="true" Guid="$(NewGuid)"
                AssemblyVersion="$(Major).$(year).$(daymonth).$(Revision)"
                AssemblyFileVersion="$(Major).$(year).$(daymonth).$(Revision)" />
</Target>

So this will give us an AssemblyInfo.cs file like this just prior to calling the Build task:

using System;
using System.Reflection;
using System.Runtime.CompilerServices;
using System.Runtime.InteropServices;

[assembly: AssemblyTitle("name")]
[assembly: AssemblyDescription("description")]
[assembly: AssemblyCompany("none")]
[assembly: AssemblyProduct("product")]
[assembly: AssemblyCopyright("Copyright ©")]
[assembly: ComVisible(false)]
[assembly: CLSCompliant(true)]
[assembly: Guid("9C2C130E-40EF-4A20-B7AC-A23BA4B5F2B7")]
[assembly: AssemblyVersion("0.12.524.407")]
[assembly: AssemblyFileVersion("0.12.524.407")]

This gives us the correct version for the assembly. It can be referenced within your project, whether web or Windows based, like this:

public static string AppVersion()
{
    return Assembly.GetExecutingAssembly().GetName().Version.ToString();
}

As mentioned in previous posts in this series, you can label css and javascript files using this version number and the GetAssemblyIdentity task from the main MSBuild task library built into the .NET Framework.

<GetAssemblyIdentity AssemblyFiles="bin\TheAssemblyFile.dll">
  <Output TaskParameter="Assemblies" ItemName="MyAssemblyIdentities" />
</GetAssemblyIdentity>

Then use this to write out the files:

<WriteLinesToFile
    File="Client\site-style-%(MyAssemblyIdentities.Version).combined.min.css"
    Lines="@(CSSLinesSite)" Overwrite="true" />
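As a closing illustration (not part of the original build script), here is a small, hypothetical helper of the kind you could use to reference the versioned stylesheet from code rather than hard-coding the build number in your pages. The folder and file naming are assumptions that simply mirror the WriteLinesToFile pattern above:

using System.Reflection;

public static class VersionedAssets
{
    // Builds e.g. "Client/site-style-0.12.524.407.combined.min.css"
    // from the executing assembly's version, so markup that links the
    // stylesheet always follows the current build number.
    public static string CssPath(string baseName)
    {
        var version = Assembly.GetExecutingAssembly().GetName().Version;
        return string.Format("Client/{0}-{1}.combined.min.css", baseName, version);
    }
}

A page or layout can then call VersionedAssets.CssPath("site-style") when emitting the link tag.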

    Read the article

  • C#/.NET Little Pitfalls: The Dangers of Casting Boxed Values

    - by James Michael Hare
Starting a new series to parallel the Little Wonders series.  In this series, I will examine some of the small pitfalls that can occasionally trip up developers.

Introduction: Of Casts and Conversions
What happens when we try to assign from an int to a double and vice-versa?

double pi = 3.14;
int theAnswer = 42;

// implicit widening conversion, compiles!
double doubleAnswer = theAnswer;

// implicit narrowing conversion, compiler error!
int intPi = pi;

As you can see from the comments above, a conversion from one value type to another where there is no potential data loss can be done with an implicit conversion.  However, when a conversion from one value type to another may result in a loss of data, you must make the conversion explicit so the compiler knows you accept this risk.  That is why the conversion from double to int will not compile with an implicit conversion; we can make the conversion explicit by adding a cast:

// explicit narrowing conversion using a cast, compiler
// succeeds, but results may have data loss:
int intPi = (int)pi;

So for value types, the conversions (implicit and explicit) both convert the original value to a new value of the given type.  With widening and narrowing reference conversions, however, this is not the case.  Converting reference types is a bit different from converting value types.  First of all, when you perform a widening or narrowing conversion you don't really convert the instance of the object, you just convert the reference itself to the wider or narrower reference type, and both the original and the new reference refer back to the same object.  Secondly, widening and narrowing for reference types refers to going down and up the class hierarchy instead of referring to precision as in value types.  That is, a narrowing conversion for a reference type means you are going down the class hierarchy (for example from Shape to Square) whereas a widening conversion means you are going up the class hierarchy (from Square to Shape).

var square = new Square();

// implicitly converts because all squares are shapes
// (that is, all subclasses can be referenced by a superclass reference)
Shape myShape = square;

// implicit conversion not possible, not all shapes are squares!
// (that is, not all superclasses can be referenced by a subclass reference)
Square mySquare = (Square) myShape;

So we had to cast the Shape back to Square because the compiler has no way of knowing until runtime whether the Shape in question is truly a Square.  But, because the compiler knows that it's possible for a Shape to be a Square, it will compile.  However, if the object referenced by myShape is not truly a Square at runtime, you will get an invalid cast exception.  Of course, there are other forms of conversions as well, such as user-specified conversions and helper class conversions, which are beyond the scope of this post.  The main thing we want to focus on is this seemingly innocuous casting method of widening and narrowing conversions that we come to depend on every day and that, in some cases, can bite us if we don't fully understand what is going on!

The Pitfall: Conversions on Boxed Value Types Can Fail
What if you saw the following code and – knowing nothing else – you were asked whether it was legal or not; what would you think?

// assuming x is defined above this and this
// assignment is syntactically legal.
x = 3.14;

// convert 3.14 to int.
int truncated = (int)x;

You may think x is obviously a double (it can't be a float, since 3.14 is a double literal), but this is inaccurate.  Our x could also be dynamic and this would work as well, or there could be user-defined conversions in play.  But there is another, even simpler option that can often bite us: what if x is object?

object x;

x = 3.14;

int truncated = (int) x;

On the surface, this seems fine.  We have a double and we place it into an object, which can be done implicitly through boxing (no cast) because all types inherit from object.  Then we cast it to int.  This theoretically should be possible because we know we can explicitly convert a double to an int through a conversion process which involves truncation.  But here's the pitfall: when casting an object to another type, we are casting a reference type, not a value type!  This means that it will attempt to see at runtime whether the value boxed and referred to by x is of type int or derived from type int.  Since it obviously isn't (it's a double, after all) we get an invalid cast exception!  Now, you may say this looks awfully contrived, but in truth we can run into this a lot if we're not careful.  Consider using an IDataReader to read from a database, and then attempting to read a result column of a particular type:

using (var connection = new SqlConnection("some connection string"))
using (var command = new SqlCommand("select * from employee", connection))
using (var reader = command.ExecuteReader())
{
    while (reader.Read())
    {
        // if the salary is not an int32 in the SQL database, this is an error!
        // doesn't matter if short, long, double, float - reader[] returns object!
        total += (int) reader["annual_salary"];
    }
}

Notice that since the reader indexer returns object, if we attempt to convert using a cast to a type, we have to make darn sure we use the true, actual type or this will fail!  If the SQL database column is a double, float, short, etc., this will fail at runtime with an invalid cast exception because it attempts to convert the object reference!  So, how do you get around this?  There are two ways: you could first cast the object to its actual type (double) and then do a narrowing cast on the value to int, or you could use a helper class like Convert, which analyzes the actual run-time type and will perform a conversion as long as the type implements IConvertible.

object x;

x = 3.14;

// if you want to cast, you must cast out of object to double, then
// cast-convert to int.
int truncated = (int)(double) x;

// or you can call a helper class like Convert, which examines the runtime
// type of the value being converted
int anotherTruncated = Convert.ToInt32(x);

Summary
You should always be careful when performing a conversion cast on values boxed in object to make sure you are actually casting to the true type (or a sub-type).  Since casting from object is a narrowing of the reference, be careful that you either know the exact, explicit type you expect to be held in the object, or instead avoid the cast and use a helper class to perform a safe conversion to the type you desire.

Technorati Tags: C#,.NET,Pitfalls,Little Pitfalls,BlackRabbitCoder
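Following up on the data-reader example above, here is a minimal sketch of that loop rewritten with the helper-class approach; the connection string, table and column names are the same hypothetical ones used in the post, not a real schema:

using System;
using System.Data.SqlClient;

static class SalaryTotals
{
    public static int TotalAnnualSalaries()
    {
        var total = 0;

        using (var connection = new SqlConnection("some connection string"))
        using (var command = new SqlCommand("select annual_salary from employee", connection))
        {
            connection.Open();

            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Convert inspects the runtime type of the boxed value (int, double,
                    // decimal, ...) and converts through IConvertible instead of unboxing
                    // with a cast, so a non-int column no longer throws an invalid cast exception.
                    total += Convert.ToInt32(reader["annual_salary"]);
                }
            }
        }

        return total;
    }
}

This keeps the original intent (summing the column) while tolerating whatever numeric type the column actually is.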

    Read the article

  • Randomly displayed flashing lines, no response to all shortcuts, just power off. [syslog included]

    - by B. Roland
Hello! I have an old machine that I want to use to teach employees how to use Ubuntu and to make it easier for them to switch from Windows. I installed 10.04 and updated it, but this strange stuff happened. The graphical installation failed with the same strange thing; the alternate installer worked. Sometimes when I boot up, a boot message is displayed: Keyboard failure... It is often displayed after a reboot, or after a shutdown when I haven't unplugged the machine from AC power. I have already replaced the keyboard; same failure... If I power off and unplug from AC, no keyboard problem is displayed at boot time.
Details
Configuration: Dell OptiPlex GX60 - in original cover, no changes. 256 MB DDR 166 MHz Intel® Celeron® Processor 2.40 GHz Dell 0C3207 Base Board
I know that is not enough, but I have three other NEC computers with a nearly similar config, and they work well with 9.10, 10.04, 10.10. I tried live CDs with 10.04 and 10.10, but the problem appears there too. With 9.10 no strange things were displayed, but it froze during a simple apt-get install.
Syslog
An error loop is logged here, but I have pasted the whole startup and error lines. The flashing lines are displayed sometimes immediately after login, sometimes after 10 minutes, and once it happened that nothing appeared at all. The strange thing displayed immediately after login: here. On another boot, after some minutes, strange lines and the loop appeared in the log: here. The loop looks like this: Jan 23 00:20:08 machine_name kernel: [ 46.782212] [drm:i915_gem_entervt_ioctl] *ERROR* Reenabling wedged hardware, good luck Jan 23 00:20:08 machine_name kernel: [ 47.100033] [drm:i915_hangcheck_elapsed] *ERROR* Hangcheck timer elapsed... GPU hung Jan 23 00:20:08 machine_name kernel: [ 47.100045] render error detected, EIR: 0x00000000 Jan 23 00:20:08 machine_name kernel: [ 47.101487] [drm:i915_do_wait_request] *ERROR* i915_do_wait_request returns -5 (awaiting 16 at 9) Jan 23 00:20:11 machine_name kernel: [ 49.152020] [drm:i915_gem_idle] *ERROR* hardware wedged Jan 23 00:20:11 machine_name gdm-simple-slave[1245]: WARNING: Unable to load file '/etc/gdm/custom.conf': No such file or directory Jan 23 00:20:11 machine_name acpid: client 1239[0:0] has disconnected Jan 23 00:20:11 machine_name acpid: client connected from 1247[0:0] Jan 23 00:20:11 machine_name acpid: 1 client rule loaded
UPDATE: Added syslog entries from before the errors, the error loop, and the complete shutdown (after the big updates): Jan 28 20:40:30 machine_name rtkit-daemon[1339]: Sucessfully called chroot. Jan 28 20:40:30 machine_name rtkit-daemon[1339]: Sucessfully dropped privileges. Jan 28 20:40:30 machine_name rtkit-daemon[1339]: Sucessfully limited resources. Jan 28 20:40:30 machine_name rtkit-daemon[1339]: Running. Jan 28 20:40:30 machine_name rtkit-daemon[1339]: Watchdog thread running. Jan 28 20:40:30 machine_name rtkit-daemon[1339]: Canary thread running. Jan 28 20:40:30 machine_name rtkit-daemon[1339]: Sucessfully made thread 1337 of process 1337 (n/a) owned by '1001' high priority at nice level -11. Jan 28 20:40:30 machine_name rtkit-daemon[1339]: Supervising 1 threads of 1 processes of 1 users. Jan 28 20:40:32 machine_name rtkit-daemon[1339]: Sucessfully made thread 1345 of process 1337 (n/a) owned by '1001' RT at priority 5. Jan 28 20:40:32 machine_name rtkit-daemon[1339]: Supervising 2 threads of 1 processes of 1 users. Jan 28 20:40:32 machine_name rtkit-daemon[1339]: Sucessfully made thread 1349 of process 1337 (n/a) owned by '1001' RT at priority 5. 
Jan 28 20:40:32 machine_name rtkit-daemon[1339]: Supervising 3 threads of 1 processes of 1 users. Jan 28 20:40:37 machine_name pulseaudio[1337]: ratelimit.c: 2 events suppressed Jan 28 20:41:33 machine_name AptDaemon: INFO: Initializing daemon Jan 28 20:41:44 machine_name kernel: [ 167.691563] lo: Disabled Privacy Extensions Jan 28 20:47:33 machine_name AptDaemon: INFO: Quiting due to inactivity Jan 28 20:47:33 machine_name AptDaemon: INFO: Shutdown was requested Jan 28 20:59:50 machine_name kernel: [ 1253.840513] lo: Disabled Privacy Extensions Jan 28 21:17:02 machine_name CRON[1874]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) Jan 28 21:17:38 machine_name kernel: [ 2321.553239] lo: Disabled Privacy Extensions Jan 28 22:07:44 machine_name kernel: [ 5327.840254] lo: Disabled Privacy Extensions Jan 28 22:17:02 machine_name CRON[2665]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) Jan 28 22:32:38 machine_name sudo: pam_sm_authenticate: Called Jan 28 22:32:38 machine_name sudo: pam_sm_authenticate: username = [some_user] Jan 28 22:32:38 machine_name sudo: pam_sm_authenticate: /home/some_user is already mounted Jan 28 22:57:03 machine_name kernel: [ 8286.641472] lo: Disabled Privacy Extensions Jan 28 22:57:24 machine_name sudo: pam_sm_authenticate: Called Jan 28 22:57:24 machine_name sudo: pam_sm_authenticate: username = [some_user] Jan 28 22:57:24 machine_name sudo: pam_sm_authenticate: /home/some_user is already mounted Jan 28 23:07:42 machine_name kernel: [ 8925.272030] [drm:i915_hangcheck_elapsed] *ERROR* Hangcheck timer elapsed... GPU hung Jan 28 23:07:42 machine_name kernel: [ 8925.272048] render error detected, EIR: 0x00000000 Jan 28 23:07:42 machine_name kernel: [ 8925.272093] [drm:i915_do_wait_request] *ERROR* i915_do_wait_request returns -5 (awaiting 171453 at 171452) Jan 28 23:07:45 machine_name kernel: [ 8928.868041] [drm:i915_gem_idle] *ERROR* hardware wedged Jan 28 23:08:10 machine_name acpid: client 925[0:0] has disconnected Jan 28 23:08:10 machine_name acpid: client connected from 8127[0:0] Jan 28 23:08:10 machine_name acpid: 1 client rule loaded Jan 28 23:08:11 machine_name kernel: [ 8955.046248] [drm:i915_gem_entervt_ioctl] *ERROR* Reenabling wedged hardware, good luck Jan 28 23:08:12 machine_name kernel: [ 8955.364016] [drm:i915_hangcheck_elapsed] *ERROR* Hangcheck timer elapsed... GPU hung Jan 28 23:08:12 machine_name kernel: [ 8955.364027] render error detected, EIR: 0x00000000 Jan 28 23:08:12 machine_name kernel: [ 8955.364407] [drm:i915_do_wait_request] *ERROR* i915_do_wait_request returns -5 (awaiting 171457 at 171452) Jan 28 23:08:14 machine_name kernel: [ 8957.472025] [drm:i915_gem_idle] *ERROR* hardware wedged Jan 28 23:08:14 machine_name acpid: client 8127[0:0] has disconnected Jan 28 23:08:14 machine_name acpid: client connected from 8141[0:0] Jan 28 23:08:14 machine_name acpid: 1 client rule loaded Jan 28 23:08:15 machine_name kernel: [ 8958.671722] [drm:i915_gem_entervt_ioctl] *ERROR* Reenabling wedged hardware, good luck Jan 28 23:08:15 machine_name kernel: [ 8958.988015] [drm:i915_hangcheck_elapsed] *ERROR* Hangcheck timer elapsed... 
GPU hung Jan 28 23:08:15 machine_name kernel: [ 8958.988026] render error detected, EIR: 0x00000000 Jan 28 23:08:15 machine_name kernel: [ 8958.989400] [drm:i915_do_wait_request] *ERROR* i915_do_wait_request returns -5 (awaiting 171459 at 171452) Jan 28 23:08:16 machine_name init: tty4 main process (848) killed by TERM signal Jan 28 23:08:16 machine_name init: tty5 main process (856) killed by TERM signal Jan 28 23:08:16 machine_name NetworkManager: nm_signal_handler(): Caught signal 15, shutting down normally. Jan 28 23:08:16 machine_name init: tty2 main process (874) killed by TERM signal Jan 28 23:08:16 machine_name init: tty3 main process (875) killed by TERM signal Jan 28 23:08:16 machine_name init: tty6 main process (877) killed by TERM signal Jan 28 23:08:16 machine_name init: cron main process (890) killed by TERM signal Jan 28 23:08:16 machine_name init: tty1 main process (1146) killed by TERM signal Jan 28 23:08:16 machine_name avahi-daemon[644]: Got SIGTERM, quitting. Jan 28 23:08:16 machine_name avahi-daemon[644]: Leaving mDNS multicast group on interface eth0.IPv4 with address 10.238.11.134. Jan 28 23:08:16 machine_name acpid: exiting Jan 28 23:08:16 machine_name init: avahi-daemon main process (644) terminated with status 255 Jan 28 23:08:17 machine_name kernel: Kernel logging (proc) stopped. Jan 28 23:09:00 machine_name kernel: imklog 4.2.0, log source = /proc/kmsg started. Jan 28 23:09:00 machine_name rsyslogd: [origin software="rsyslogd" swVersion="4.2.0" x-pid="516" x-info="http://www.rsyslog.com"] (re)start Jan 28 23:09:00 machine_name rsyslogd: rsyslogd's groupid changed to 103 Jan 28 23:09:00 machine_name rsyslogd: rsyslogd's userid changed to 101 Jan 28 23:09:00 machine_name rsyslogd-2039: Could no open output file '/dev/xconsole' [try http://www.rsyslog.com/e/2039 ] When I hit the On/Off button, the system shuts down normally. Maybe it's a hardware problem, but I don't know... Can you suggest anything useful to solve my problem?

    Read the article

  • bluetooth not working on Ubuntu 13.10

    - by iacopo
    I upgrated ubuntu from 13.4 to 13.10 and my bluetooth stopped working. When I open bluetooth I'm able to put it ON but the visibility doesn't show anything and didn't detect any device. when I: dmesg | grep Blue [ 2.046249] usb 3-1: Product: Bluetooth V2.0 Dongle [ 2.046252] usb 3-1: Manufacturer: Bluetooth v2.0 [ 15.255710] Bluetooth: Core ver 2.16 [ 15.255748] Bluetooth: HCI device and connection manager initialized [ 15.255759] Bluetooth: HCI socket layer initialized [ 15.255765] Bluetooth: L2CAP socket layer initialized [ 15.255776] Bluetooth: SCO socket layer initialized [ 20.110379] Bluetooth: BNEP (Ethernet Emulation) ver 1.3 [ 20.110386] Bluetooth: BNEP filters: protocol multicast [ 20.110400] Bluetooth: BNEP socket layer initialized [ 20.120635] Bluetooth: RFCOMM TTY layer initialized [ 20.120656] Bluetooth: RFCOMM socket layer initialized [ 20.120660] Bluetooth: RFCOMM ver 1.11 when I digit: lsusb Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 002 Device 002: ID 0bc2:2300 Seagate RSS LLC Expansion Portable Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 006 Device 002: ID 0e6a:6001 Megawin Technology Co., Ltd GEMBIRD Flexible keyboard KB-109F-B-DE Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 005 Device 002: ID 13ee:0001 MosArt Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub Bus 003 Device 002: ID 0a12:0001 Cambridge Silicon Radio, Ltd Bluetooth Dongle (HCI mode) Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub when I: hciconfig -a hci0: Type: BR/EDR Bus: USB BD Address: 00:1B:10:00:2A:EC ACL MTU: 1017:8 SCO MTU: 64:0 DOWN RX bytes:457 acl:0 sco:0 events:16 errors:0 TX bytes:68 acl:0 sco:0 commands:16 errors:0 Features: 0xff 0xff 0x8d 0xfe 0x9b 0xf9 0x00 0x80 Packet type: DM1 DM3 DM5 DH1 DH3 DH5 HV1 HV2 HV3 Link policy: Link mode: SLAVE ACCEPT when I digit: rfkill list 0: phy0: Wireless LAN Soft blocked: yes Hard blocked: no 1: hci0: Bluetooth Soft blocked: no Hard blocked: no when I digit: sudo gedit /etc/bluetooth/main.conf [General] # List of plugins that should not be loaded on bluetoothd startup #DisablePlugins = network,input # Default adaper name # %h - substituted for hostname # %d - substituted for adapter id Name = %h-%d # Default device class. Only the major and minor device class bits are # considered. Class = 0x000100 # How long to stay in discoverable mode before going back to non-discoverable # The value is in seconds. Default is 180, i.e. 3 minutes. # 0 = disable timer, i.e. stay discoverable forever DiscoverableTimeout = 0 # How long to stay in pairable mode before going back to non-discoverable # The value is in seconds. Default is 0. # 0 = disable timer, i.e. stay pairable forever PairableTimeout = 0 # Use some other page timeout than the controller default one # which is 16384 (10 seconds). PageTimeout = 8192 # Automatic connection for bonded devices driven by platform/user events. # If a platform plugin uses this mechanism, automatic connections will be # enabled during the interval defined below. Initially, this feature # intends to be used to establish connections to ATT channels. AutoConnectTimeout = 60 # What value should be assumed for the adapter Powered property when # SetProperty(Powered, ...) hasn't been called yet. 
Defaults to true InitiallyPowered = true # Remember the previously stored Powered state when initializing adapters RememberPowered = false # Use vendor id source (assigner), vendor, product and version information for # DID profile support. The values are separated by ":" and assigner, VID, PID # and version. # Possible vendor id source values: bluetooth, usb (defaults to usb) #DeviceID = bluetooth:1234:5678:abcd # Do reverse service discovery for previously unknown devices that connect to # us. This option is really only needed for qualification since the BITE tester # doesn't like us doing reverse SDP for some test cases (though there could in # theory be other useful purposes for this too). Defaults to true. ReverseServiceDiscovery = true # Enable name resolving after inquiry. Set it to 'false' if you don't need # remote devices name and want shorter discovery cycle. Defaults to 'true'. NameResolving = true # Enable runtime persistency of debug link keys. Default is false which # makes debug link keys valid only for the duration of the connection # that they were created for. DebugKeys = false # Enable the GATT functionality. Default is false EnableGatt = false when I digit: dmesg | grep Bluetooth [ 2.013041] usb 3-1: Product: Bluetooth V2.0 Dongle [ 2.013049] usb 3-1: Manufacturer: Bluetooth v2.0 [ 13.798293] Bluetooth: Core ver 2.16 [ 13.798338] Bluetooth: HCI device and connection manager initialized [ 13.798352] Bluetooth: HCI socket layer initialized [ 13.798357] Bluetooth: L2CAP socket layer initialized [ 13.798368] Bluetooth: SCO socket layer initialized [ 20.184162] Bluetooth: BNEP (Ethernet Emulation) ver 1.3 [ 20.184173] Bluetooth: BNEP filters: protocol multicast [ 20.184197] Bluetooth: BNEP socket layer initialized [ 20.238947] Bluetooth: RFCOMM TTY layer initialized [ 20.238983] Bluetooth: RFCOMM socket layer initialized [ 20.239018] Bluetooth: RFCOMM ver 1.11 When I digit: uname -a Linux casa-desktop 3.11.0-13-generic #20-Ubuntu SMP Wed Oct 23 07:38:26 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux When I digit: lsmod Module Size Used by parport_pc 32701 0 rfcomm 69070 4 bnep 19564 2 ppdev 17671 0 ip6t_REJECT 12910 1 xt_hl 12521 6 ip6t_rt 13507 3 nf_conntrack_ipv6 18938 9 nf_defrag_ipv6 34616 1 nf_conntrack_ipv6 ipt_REJECT 12541 1 xt_LOG 17718 8 xt_limit 12711 11 xt_tcpudp 12884 32 xt_addrtype 12635 4 nf_conntrack_ipv4 15012 9 nf_defrag_ipv4 12729 1 nf_conntrack_ipv4 xt_conntrack 12760 18 ip6table_filter 12815 1 ip6_tables 27025 1 ip6table_filter nf_conntrack_netbios_ns 12665 0 nf_conntrack_broadcast 12589 1 nf_conntrack_netbios_ns nf_nat_ftp 12741 0 nf_nat 26653 1 nf_nat_ftp kvm_amd 59958 0 nf_conntrack_ftp 18608 1 nf_nat_ftp kvm 431315 1 kvm_amd nf_conntrack 91736 8 nf_nat_ftp,nf_conntrack_netbios_ns,nf_nat,xt_conntrack,nf_conntrack_broadcast,nf_conntrack_ftp,nf_conntrack_ipv4,nf_conntrack_ipv6 iptable_filter 12810 1 crct10dif_pclmul 14289 0 crc32_pclmul 13113 0 ip_tables 27239 1 iptable_filter snd_hda_codec_realtek 55704 1 ghash_clmulni_intel 13259 0 aesni_intel 55624 0 aes_x86_64 17131 1 aesni_intel snd_hda_codec_hdmi 41117 1 x_tables 34059 13 ip6table_filter,xt_hl,ip_tables,xt_tcpudp,xt_limit,xt_conntrack,xt_LOG,iptable_filter,ip6t_rt,ipt_REJECT,ip6_tables,xt_addrtype,ip6t_REJECT lrw 13257 1 aesni_intel snd_hda_intel 48171 5 gf128mul 14951 1 lrw glue_helper 13990 1 aesni_intel ablk_helper 13597 1 aesni_intel joydev 17377 0 cryptd 20329 3 ghash_clmulni_intel,aesni_intel,ablk_helper snd_hda_codec 188738 3 snd_hda_codec_realtek,snd_hda_codec_hdmi,snd_hda_intel arc4 12608 2 
snd_hwdep 13602 1 snd_hda_codec rt2800pci 18690 0 snd_pcm 102033 3 snd_hda_codec_hdmi,snd_hda_codec,snd_hda_intel radeon 1402449 3 rt2800lib 79963 1 rt2800pci btusb 28267 0 rt2x00pci 13287 1 rt2800pci rt2x00mmio 13603 1 rt2800pci snd_page_alloc 18710 2 snd_pcm,snd_hda_intel rt2x00lib 55238 4 rt2x00pci,rt2800lib,rt2800pci,rt2x00mmio snd_seq_midi 13324 0 mac80211 596969 3 rt2x00lib,rt2x00pci,rt2800lib snd_seq_midi_event 14899 1 snd_seq_midi ttm 83995 1 radeon snd_rawmidi 30095 1 snd_seq_midi cfg80211 479757 2 mac80211,rt2x00lib drm_kms_helper 52651 1 radeon snd_seq 61560 2 snd_seq_midi_event,snd_seq_midi bluetooth 371880 12 bnep,btusb,rfcomm microcode 23518 0 eeprom_93cx6 13344 1 rt2800pci snd_seq_device 14497 3 snd_seq,snd_rawmidi,snd_seq_midi crc_ccitt 12707 1 rt2800lib snd_timer 29433 2 snd_pcm,snd_seq snd 69141 21 snd_hda_codec_realtek,snd_hwdep,snd_timer,snd_hda_codec_hdmi,snd_pcm,snd_seq,snd_rawmidi,snd_hda_codec,snd_hda_intel,snd_seq_device,snd_seq_midi psmouse 97626 0 drm 296739 5 ttm,drm_kms_helper,radeon k10temp 13126 0 soundcore 12680 1 snd serio_raw 13413 0 i2c_algo_bit 13413 1 radeon i2c_piix4 22106 0 video 19318 0 mac_hid 13205 0 lp 17759 0 parport 42299 3 lp,ppdev,parport_pc hid_generic 12548 0 usbhid 53014 0 hid 105818 2 hid_generic,usbhid pata_acpi 13038 0 usb_storage 62062 1 r8169 67341 0 sdhci_pci 18985 0 sdhci 42630 1 sdhci_pci mii 13934 1 r8169 pata_atiixp 13242 0 ohci_pci 13561 0 ahci 25819 2 libahci 31898 1 ahci Someone can help me?

    Read the article
