Search Results

Search found 34306 results on 1373 pages for 'ubuntu 14 04'.

  • connections on port 80 suddenly refused / server not responding

    - by user1394013
    My dedicated server suddenly stopped responding to requests on port 80 today; I haven't touched anything in more than a month. It's Ubuntu 10, running varnish + nginx + php-fpm, with only one website. Load is at 0. I asked my ISP whether they changed something, but no reply yet. I tried to access the site via http://web-sniffer.net/ and it times out on port 80, but if I connect directly to nginx on port 8080 it loads just fine. For normal users it doesn't load on either port in a normal browser. Any tips on what to check or what could be causing this?
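
    A hedged first round of checks (standard commands, not from the original post): since nginx answers directly on 8080 but nothing answers on 80, see whether varnish is still listening and whether a firewall rule is in the way.

        sudo netstat -tlnp | grep ':80 '       # is anything bound to port 80?
        sudo /etc/init.d/varnish status        # is varnish running at all?
        sudo iptables -L -n -v                 # any DROP/REJECT rules hitting port 80?
        grep -i varnish /var/log/syslog | tail -n 20   # recent crashes or OOM kills?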

    Read the article

  • Is there a proper way to clear logs?

    - by John H.
    Is there a proper way to clear logs in general? I'm new to Ubuntu and trying to set up Postfix. The log in question is /var/log/mail.log. Is there a correct way to clear it, rather than opening it, deleting all the lines, and saving it? I find that sometimes errors don't get written to it immediately after I clear the log that way. Side note: I'm having trouble setting up Postfix and am trying to make the logs easier to read in the hope that they help me debug, instead of having to scroll all the way down.
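
    One hedged approach: truncate the file in place so rsyslog keeps writing to the same inode; deleting and recreating the file is what makes new entries seem to disappear until the logger is restarted.

        sudo truncate -s 0 /var/log/mail.log   # empty the file, keep the inode
        sudo sh -c '> /var/log/mail.log'       # equivalent alternative
        tail -f /var/log/mail.log              # then follow new Postfix entries live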

    Read the article

  • Writing directory: permission denied even though dir seems to be chmodded correctly

    - by Aron Rotteveel
    I am having some trouble creating files in a directory on my Ubuntu machine: I added myself to the www-data group so that I can easily edit stuff in my /var/www dir on my development machine. stat /var/www shows the following: File: ‘/var/www’ Size: 4096 Blocks: 8 IO Block: 4096 directory Device: 808h/2056d Inode: 142853 Links: 3 Access: (0775/drwxrwxr-x) Uid: ( 33/www-data) Gid: ( 33/www-data) Access: 2010-12-30 16:03:18.563998000 +0100 Modify: 2010-12-30 16:02:52.663998000 +0100 Change: 2010-12-30 16:03:13.111998001 +0100 Still, it is impossible for me to create anything below /var/www (the only way to make it work is to chmod it to 777). What am I missing?
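
    A hedged explanation (an assumption, not stated in the post): group changes only take effect in new login sessions, so a shell opened before the group was added still lacks www-data.

        id                      # does the current session actually list www-data?
        newgrp www-data         # pick up the group in this shell, or log out and back in
        touch /var/www/test     # should succeed once the group is effective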

    Read the article

  • Can't connect to web-server on local host behind NAT

    - by eyeinthebrick
    I have Ubuntu as the host. I'm running a web server on http://192.168.199.8:80. It is accessible from the local network, but when I try to reach it by external IP, I land on my router's web page. I set up port forwarding on the router for port 80 to my local IP 192.168.199.8. Unfortunately the web server is still unavailable via the external IP. I checked whether the port is open via http://www.canyouseeme.org/. Since it showed the port as unavailable, I changed the port to 3659 (not forgetting to update the port forwarding rule). Although http://www.canyouseeme.org/ now shows that port 3659 is open, I still can't reach my web server. Where could the problem be?
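
    A hedged interpretation: since canyouseeme.org can reach port 3659 but you cannot, the test is probably being made from inside the LAN, and many consumer routers do not support NAT loopback (hairpinning). Two quick checks to separate the cases:

        sudo netstat -tlnp | grep 3659          # on the server: bound to all interfaces?
        curl -v http://YOUR.EXTERNAL.IP:3659/   # from a host genuinely outside the LAN,
                                                # e.g. a phone on mobile data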

    Read the article

  • How Do I Get To Preferences In Firefox

    - by user123
    I'm currently trying to get to Preferences in Firefox on Ubuntu Linux. Because I don't have 10 reputation, I can't post an image (otherwise I would). All I see in the browser is the address bar, Downloads and Home. If I right-click (or left-click) on any of these, I don't get further options (other than on Home, which only allows me to add items to the toolbar, none of which are Preferences/Options/etc.). I tried visiting a website to see if more options would open; nothing. I tried right-clicking on the main page and each toolbar item (listed) to see if there was another option like "Preferences" or "Options"; nothing. I tried entering "Preferences" in the address bar, thinking maybe it would open automatically; nothing. I tried right-clicking on Firefox in the Linux menu to see if I could open options without opening the program (and tried this even when the program was open); nothing.

    Read the article

  • Windows 7 Extend C Volume to Unallocated Space

    - by user327777
    A while back I installed Ubuntu and later uninstalled it by (I think) deleting its partitions and restoring the Windows 7 boot loader. I am not that experienced with partitioning yet. As you can see in the screenshots below, there are two partitions that are now unallocated. The 9 GB one is a recovery partition or something that came with the computer. How can I extend my C partition to use both of them? I don't want that much storage just sitting there wasted. Currently, when I right-click on C and hit Extend, the wizard pops up but there is no available space to extend. http://i.imgur.com/VxEkdyR.png http://i.imgur.com/DdFZWX9.png Thanks everyone!

    Read the article

  • intermittent SSH with ssh_exchange_identification error

    - by rafamvc
    My SSH connection to my server works for around 10 minutes out of every 30. Things I have figured out that might be relevant: The server is under load (it is a database server), but in those windows when I can connect it is still under the same load, which doesn't make sense. The server runs Ubuntu, and consolekit was using a lot of virtual memory; I restarted consolekit and it seems to be using a reasonable amount of memory now. It is not hosts.allow or hosts.deny; those are set up properly. It is not a firewall problem; those settings were working before, and the same settings work for other similar machines. The server is on EC2, Amazon's cloud.
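
    A hedged hypothesis (not from the post): intermittent ssh_exchange_identification failures on a busy box are often sshd dropping new unauthenticated connections once its MaxStartups limit is hit. A quick check:

        grep -i maxstartups /etc/ssh/sshd_config   # the default limit is low (e.g. 10)
        # raising it, e.g.  MaxStartups 50:30:200  and then:
        sudo service ssh restart
        ssh -vvv user@server                       # watch where the handshake actually dies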

    Read the article

  • How to avoid specifying full path in sudoers file?

    - by s g
    I am trying to add a NOPASSWD entry for sudotest.sh (or any script/binary that requires sudo) in my /etc/sudoers file (on an Ubuntu 12.04 LTS server), but in order to make it work I must specify the full path. The following entry works just fine: %jenkins ALL=(ALL)NOPASSWD:/home/vts_share/test/sudotest.sh The problem is that the script might move to a different directory. This seems like a great chance to use the * wildcard in the path (i.e. /*/sudotest.sh) so that my script could be in any directory, but the manual states that wildcards will not match the / character when used in a path, and I've confirmed that it doesn't work. I know that I can use the word ALL in place of my script, but that means no password prompt for any command, which seems unsafe. How do I solve this?
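
    One hedged workaround (the wrapper path and name are invented for illustration): keep a root-owned wrapper at a fixed path, point the sudoers entry at that, and let the wrapper call the real script wherever it currently lives.

        # /usr/local/bin/sudotest-wrapper  (hypothetical; root-owned, chmod 755)
        #!/bin/sh
        exec /home/vts_share/test/sudotest.sh "$@"

    The sudoers entry then stays fixed: %jenkins ALL=(ALL) NOPASSWD: /usr/local/bin/sudotest-wrapper. Note that neither the wrapper nor the script it calls may be writable by the jenkins group, or the NOPASSWD grant effectively becomes a root shell.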

    Read the article

  • Missed something? Can't upload files to server (permissions)

    - by Camran
    I can upload files as "root" to the Ubuntu server. Then I created a user (me), added the user to the www-data group, and assigned rwx permissions to www-data. But when I try to upload, delete or modify files via FileZilla, I can't. Via the terminal I can change files using sudo. What should I do to be able to upload files without getting "permission denied" in FileZilla? If you need more input let me know. Thanks
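
    A hedged checklist (paths and the username are illustrative, not from the post): the group must actually own and be able to write the tree, and an SFTP session only picks up new group memberships on a fresh login.

        id myuser                          # hypothetical username; is www-data listed?
        sudo chgrp -R www-data /var/www    # give the group ownership of the tree
        sudo chmod -R g+rwX /var/www       # and group write (X: execute on dirs only)
        # then disconnect and reconnect in FileZilla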

    Read the article

  • FTP client hangs after a while

    - by lfbn
    I'm using Ubuntu 12.04 with curlftpfs to connect to a remote server. After mounting a remote FTP share I can open and save files with vim and list directories and files for a while, but after no more than 30 minutes it hangs for no apparent reason. Any terminal tabs I then open remain idle... yet FileZilla can still reach the server and work with no problem, without restarting the computer. When using Nautilus instead of curlftpfs I have the same problem: after a while it hangs. Can anyone help me please?
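
    A hedged recovery sketch (the mount point is illustrative): a lazy unmount detaches the wedged FUSE mount without a reboot, after which it can be remounted.

        sudo umount -l /mnt/remote-ftp       # lazy-detach the hung mount
        fusermount -u /mnt/remote-ftp        # alternative for FUSE mounts
        curlftpfs ftp.example.com /mnt/remote-ftp -o user=USER:PASSWORD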

    Read the article

  • Samsung netbook OS for clean install

    - by Alex
    Hello - I recently had a problem with a corrupted registry on a Samsung N120. When all else failed I reformatted the drive. However, having bought the machine with Windows Home Edition pre-installed, I didn't have an original Windows disc for the clean install, so I installed another edition of Windows XP (Pro this time). Now Windows boots, but several key functions are missing: screen resolution will not allow me anything but 800x600, and native buttons such as Fn + screen brightness are not working at all. Any suggestions? Is there a way to get the Samsung OS online (since I do have the manufacturer's installed product key)? Thanks. PS: It has been my intention to install Ubuntu, but I need to know I will not lose functions like screen brightness, volume, and the trackpad's scrolling; I'd be happy to bypass the Windows option if I was sure to have full keyboard/Samsung functionality.

    Read the article

  • After software update, why is webmin showing wrong mySQL version?

    - by teleute00
    I did a full OS/package update on a server running Webmin. Now when I go into Webmin it shows the correct new versions of Ubuntu, Apache, etc., but it still shows the old version of MySQL. At the command line, if I enter mysql -V, it shows the correct new one; it's just not being recognized in Webmin. I found a file at /etc/webmin/mysql called "version", and it's just a text file with the version number in it. So theoretically I could just change this and it would be fine. However, that obviously doesn't seem like how this should go. How does this file normally get updated? ETA: Services have all been restarted (in fact, there's been a full reboot). Sorry for not specifying this... it just seemed like the obvious first thing to do and not worth mentioning. :-)
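
    A hedged sketch of refreshing the cached value by hand (the version string below is a placeholder, not the real one):

        mysql -V                                            # what is actually installed
        echo "5.5.35" | sudo tee /etc/webmin/mysql/version  # placeholder version string
        sudo /etc/init.d/webmin restart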

    Read the article

  • ssh (openSSH) questions

    - by Camran
    I have an Ubuntu 9.10 server. Firstly, is OpenSSH the same as sshd? Secondly, in the terminal, typing whereis sshd gives me: /usr/sbin/sshd. Typing whereis openssh gives me: /usr/lib/openssh. How do I know if I have OpenSSH? Also, some tutorials online suggest opening sshd_config, but whereis sshd_config only gives me: /usr/share/man/man5/sshd_config.5.gz. What should I do? As you answered in my other question about security, you pointed out that it is how you configure your SSH etc. that is important. Is there any guide for this? How should I configure it? I will be the only user of this server, btw... If you need more input let me know and I will update this question. Thanks
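
    A hedged orientation sketch: sshd is the OpenSSH server daemon, and the live configuration lives in /etc/ssh; whereis only found the man page because it searches binary and man paths, not /etc.

        dpkg -l | grep openssh           # are openssh-server / openssh-client installed?
        ls -l /etc/ssh/sshd_config       # the actual server configuration file
        sudoedit /etc/ssh/sshd_config    # e.g. set: PermitRootLogin no
        sudo /etc/init.d/ssh restart     # apply the changes (9.10 uses init scripts)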

    Read the article

  • Slow data transfer using SSH

    - by Floste
    The server is an Ubuntu 11.04 server running sshd. SSH works fine for console programs, but data transfer is slow, which is very annoying when transferring large files. I tried two different client programs and changed the port, but the speed is always the same. I know the server can transfer data a lot faster over SSL, which AFAIK uses AES. I configured my SSH client to use AES too, but with no effect. Why is SSH multiple times slower than SSL, and is there a way to improve SSH transfer speed?
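
    A hedged experiment (file names are illustrative) to see whether cipher/MAC overhead is the bottleneck; another common cause on high-latency links is SSH's fixed flow-control window rather than the crypto itself.

        scp -c arcfour -C bigfile user@server:/tmp/    # cheapest cipher of that era, plus compression
        scp -c aes128-ctr bigfile user@server:/tmp/    # compare against AES-CTR
        # persist the winner in ~/.ssh/config:
        #   Host myserver
        #       Ciphers arcfour,aes128-ctr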

    Read the article

  • ATI proprietary drivers: installed latest 12.8, broke my kernel. Stuck on kernel 3.2.0-26

    - by user66987
    I messed up a bit. Hoping some here can help me. I tried to install the newest catalyst 12.8. Sadly, this broke my system. I was stuck in low graphics mode. I finally managed to restore the proprietary drivers, and get into ubuntu again. But now I am stuck on kernel 3.2.0.26. I had installed kernel 3.2.0-30, but the system no longer sees it. I have kernel 3.2.0-29 too, but the system cannot see that as well. In the grub menu. When I use sudo update-grub, they are both listed. Here are the output I get: Searching for GRUB installation directory ... found: /boot/grub Cannot determine root device. Assuming /dev/hda1 This error is probably caused by an invalid /etc/fstab Searching for default file ... found: /boot/grub/default Testing for an existing GRUB menu.lst file ... found: /boot/grub/menu.lst Searching for splash image ... none found, skipping ... Found kernel: /boot/vmlinuz-3.2.0-30-generic Found kernel: /boot/vmlinuz-3.2.0-29-generic Found kernel: /boot/vmlinuz-3.2.0-27-generic Found kernel: /boot/vmlinuz-3.2.0-26-generic Found GRUB 2: /boot/grub/core.img Found kernel: /boot/memtest86+.bin Updating /boot/grub/menu.lst ... done I have searched everywhere to find a solution to my problem, but can't find any solutions. If you need any log outputs to figure out the problem, please let me know which ones. Update: here is the output for grub.cfg # # DO NOT EDIT THIS FILE # # It is automatically generated by grub-mkconfig using templates # from /etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### if [ -s $prefix/grubenv ]; then set have_grubenv=true load_env fi set default="0" if [ "${prev_saved_entry}" ]; then set saved_entry="${prev_saved_entry}" save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z "${boot_once}" ]; then saved_entry="${chosen}" save_env saved_entry fi } function recordfail { set recordfail=1 if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi } function load_video { insmod vbe insmod vga insmod video_bochs insmod video_cirrus } insmod part_msdos insmod ext2 set root='(hd2,msdos1)' search --no-floppy --fs-uuid --set=root 270c7c58-06d8-4e6b-b9bb-8d92f46adc0b if loadfont /usr/share/grub/unicode.pf2 ; then set gfxmode=auto load_video insmod gfxterm insmod part_msdos insmod ext2 set root='(hd2,msdos1)' search --no-floppy --fs-uuid --set=root 270c7c58-06d8-4e6b-b9bb-8d92f46adc0b set locale_dir=($root)/boot/grub/locale set lang=nb_NO insmod gettext fi terminal_output gfxterm if [ "${recordfail}" = 1 ]; then set timeout=-1 else set timeout=10 fi ### END /etc/grub.d/00_header ### ### BEGIN /etc/grub.d/05_debian_theme ### set menu_color_normal=white/black set menu_color_highlight=black/light-gray if background_color 44,0,30; then clear fi ### END /etc/grub.d/05_debian_theme ### ### BEGIN /etc/grub.d/10_linux ### function gfxmode { set gfxpayload="${1}" if [ "${1}" = "keep" ]; then set vt_handoff=vt.handoff=7 else set vt_handoff= fi } if [ "${recordfail}" != 1 ]; then if [ -e ${prefix}/gfxblacklist.txt ]; then if hwmatch ${prefix}/gfxblacklist.txt 3; then if [ ${match} = 0 ]; then set linux_gfx_mode=keep else set linux_gfx_mode=text fi else set linux_gfx_mode=text fi else set linux_gfx_mode=keep fi else set linux_gfx_mode=text fi export linux_gfx_mode if [ "${linux_gfx_mode}" != "text" ]; then load_video; fi menuentry 'Ubuntu, med Linux 3.2.0-26-generic' --class ubuntu --class gnu-linux --class gnu --class os { recordfail gfxmode 
$linux_gfx_mode insmod gzio insmod part_msdos insmod ext2 set root='(hd2,msdos1)' search --no-floppy --fs-uuid --set=root 270c7c58-06d8-4e6b-b9bb-8d92f46adc0b linux /boot/vmlinuz-3.2.0-26-generic root=UUID=270c7c58-06d8-4e6b-b9bb-8d92f46adc0b ro quiet splash $vt_handoff initrd /boot/initrd.img-3.2.0-26-generic } menuentry 'Ubuntu, med Linux 3.2.0-26-generic (gjenopprettelsesmodus)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod gzio insmod part_msdos insmod ext2 set root='(hd2,msdos1)' search --no-floppy --fs-uuid --set=root 270c7c58-06d8-4e6b-b9bb-8d92f46adc0b echo 'Laster Linux 3.2.0-26-generic ...' linux /boot/vmlinuz-3.2.0-26-generic root=UUID=270c7c58-06d8-4e6b-b9bb-8d92f46adc0b ro recovery nomodeset echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-3.2.0-26-generic } submenu "Previous Linux versions" { menuentry 'Ubuntu, med Linux 3.2.0-25-generic' --class ubuntu --class gnu-linux --class gnu --class os { recordfail gfxmode $linux_gfx_mode insmod gzio insmod part_msdos insmod ext2 set root='(hd2,msdos1)' search --no-floppy --fs-uuid --set=root 270c7c58-06d8-4e6b-b9bb-8d92f46adc0b linux /boot/vmlinuz-3.2.0-25-generic root=UUID=270c7c58-06d8-4e6b-b9bb-8d92f46adc0b ro quiet splash $vt_handoff initrd /boot/initrd.img-3.2.0-25-generic } menuentry 'Ubuntu, med Linux 3.2.0-25-generic (gjenopprettelsesmodus)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod gzio insmod part_msdos insmod ext2 set root='(hd2,msdos1)' search --no-floppy --fs-uuid --set=root 270c7c58-06d8-4e6b-b9bb-8d92f46adc0b echo 'Laster Linux 3.2.0-25-generic ...' linux /boot/vmlinuz-3.2.0-25-generic root=UUID=270c7c58-06d8-4e6b-b9bb-8d92f46adc0b ro recovery nomodeset echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-3.2.0-25-generic } menuentry 'Ubuntu, med Linux 3.2.0-24-generic' --class ubuntu --class gnu-linux --class gnu --class os { recordfail gfxmode $linux_gfx_mode insmod gzio insmod part_msdos insmod ext2 set root='(hd2,msdos1)' search --no-floppy --fs-uuid --set=root 270c7c58-06d8-4e6b-b9bb-8d92f46adc0b linux /boot/vmlinuz-3.2.0-24-generic root=UUID=270c7c58-06d8-4e6b-b9bb-8d92f46adc0b ro quiet splash $vt_handoff initrd /boot/initrd.img-3.2.0-24-generic } menuentry 'Ubuntu, med Linux 3.2.0-24-generic (gjenopprettelsesmodus)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod gzio insmod part_msdos insmod ext2 set root='(hd2,msdos1)' search --no-floppy --fs-uuid --set=root 270c7c58-06d8-4e6b-b9bb-8d92f46adc0b echo 'Laster Linux 3.2.0-24-generic ...' linux /boot/vmlinuz-3.2.0-24-generic root=UUID=270c7c58-06d8-4e6b-b9bb-8d92f46adc0b ro recovery nomodeset echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-3.2.0-24-generic } menuentry 'Ubuntu, med Linux 3.2.0-23-generic' --class ubuntu --class gnu-linux --class gnu --class os { recordfail gfxmode $linux_gfx_mode insmod gzio insmod part_msdos insmod ext2 set root='(hd2,msdos1)' search --no-floppy --fs-uuid --set=root 270c7c58-06d8-4e6b-b9bb-8d92f46adc0b linux /boot/vmlinuz-3.2.0-23-generic root=UUID=270c7c58-06d8-4e6b-b9bb-8d92f46adc0b ro quiet splash $vt_handoff initrd /boot/initrd.img-3.2.0-23-generic } menuentry 'Ubuntu, med Linux 3.2.0-23-generic (gjenopprettelsesmodus)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod gzio insmod part_msdos insmod ext2 set root='(hd2,msdos1)' search --no-floppy --fs-uuid --set=root 270c7c58-06d8-4e6b-b9bb-8d92f46adc0b echo 'Laster Linux 3.2.0-23-generic ...' 
linux /boot/vmlinuz-3.2.0-23-generic root=UUID=270c7c58-06d8-4e6b-b9bb-8d92f46adc0b ro recovery nomodeset echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-3.2.0-23-generic } } ### END /etc/grub.d/10_linux ### ### BEGIN /etc/grub.d/20_linux_xen ### ### END /etc/grub.d/20_linux_xen ### ### BEGIN /etc/grub.d/20_memtest86+ ### menuentry "Memory test (memtest86+)" { insmod part_msdos insmod ext2 set root='(hd2,msdos1)' search --no-floppy --fs-uuid --set=root 270c7c58-06d8-4e6b-b9bb-8d92f46adc0b linux16 /boot/memtest86+.bin } menuentry "Memory test (memtest86+, serial console 115200)" { insmod part_msdos insmod ext2 set root='(hd2,msdos1)' search --no-floppy --fs-uuid --set=root 270c7c58-06d8-4e6b-b9bb-8d92f46adc0b linux16 /boot/memtest86+.bin console=ttyS0,115200n8 } ### END /etc/grub.d/20_memtest86+ ### ### BEGIN /etc/grub.d/30_os-prober ### menuentry "Windows 7 (loader) (on /dev/sdb1)" --class windows --class os { insmod part_msdos insmod ntfs set root='(hd1,msdos1)' search --no-floppy --fs-uuid --set=root 448AF3CE8AF3BA8E chainloader +1 } ### END /etc/grub.d/30_os-prober ### ### BEGIN /etc/grub.d/40_custom ### # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. ### END /etc/grub.d/40_custom ### ### BEGIN /etc/grub.d/41_custom ### if [ -f $prefix/custom.cfg ]; then source $prefix/custom.cfg; fi ### END /etc/grub.d/41_custom ### How can I set kernel 3.2.0.30 as the default kernel? According to this file, kernel 3.2.0-30 does not exist.
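
    A hedged reading of the outputs above (not from the original post): the "Searching for GRUB installation directory ... menu.lst" messages come from GRUB legacy's update-grub, while the machine actually boots from GRUB 2's grub.cfg, whose newest entry stops at 3.2.0-26. Regenerating the GRUB 2 config directly should pick up the newer kernels:

        sudo grub-mkconfig -o /boot/grub/grub.cfg   # rebuild the GRUB 2 menu
        dpkg -S $(which update-grub)                # which package's update-grub is being run?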

    Read the article

  • Spam Activity From my computer

    - by Bnymn
    I'm using Ubuntu 12.04 64bit. I'm using HTTP proxy over ssh as mentioned here. If I do not start TinyProxy, everything is OK. But, when I start TinyProxy, I'm getting the following. I think there is an application running on my machine and watching the proxy to start. But I could not decide which one it could be. ps ax PID TTY STAT TIME COMMAND 1 ? Ss 0:01 /sbin/init 2 ? S 0:00 [kthreadd] 3 ? S 0:00 [ksoftirqd/0] 6 ? S 0:00 [migration/0] 7 ? S 0:00 [watchdog/0] 21 ? S< 0:00 [cpuset] 22 ? S< 0:00 [khelper] 23 ? S 0:00 [kdevtmpfs] 24 ? S< 0:00 [netns] 26 ? S 0:00 [sync_supers] 27 ? S 0:00 [bdi-default] 28 ? S< 0:00 [kintegrityd] 29 ? S< 0:00 [kblockd] 30 ? S< 0:00 [ata_sff] 31 ? S 0:00 [khubd] 32 ? S< 0:00 [md] 34 ? S 0:00 [khungtaskd] 35 ? S 0:00 [kswapd0] 36 ? SN 0:00 [ksmd] 37 ? SN 0:00 [khugepaged] 38 ? S 0:00 [fsnotify_mark] 39 ? S 0:00 [ecryptfs-kthrea] 40 ? S< 0:00 [crypto] 48 ? S< 0:00 [kthrotld] 49 ? S 0:00 [scsi_eh_0] 50 ? S 0:00 [scsi_eh_1] 51 ? S 0:00 [scsi_eh_2] 52 ? S 0:00 [scsi_eh_3] 75 ? S< 0:00 [devfreq_wq] 240 ? S< 0:00 [xfs_mru_cache] 241 ? S< 0:00 [xfslogd] 242 ? S< 0:00 [xfsdatad] 243 ? S< 0:00 [xfsconvertd] 245 ? S 0:00 [xfsbufd/sda3] 246 ? S 0:01 [xfsaild/sda3] 330 ? S 0:00 upstart-udev-bridge --daemon 333 ? Ss 0:00 /sbin/udevd --daemon 472 ? S< 0:00 [cfg80211] 479 ? S< 0:00 [kpsmoused] 671 ? S 0:00 upstart-socket-bridge --daemon 779 ? S 0:00 [xfsbufd/sda4] 781 ? S 0:01 [xfsaild/sda4] 785 ? S< 0:00 [ttm_swap] 800 ? S< 0:00 [hd-audio0] 803 ? S< 0:00 [hd-audio1] 857 ? Sl 0:00 rsyslogd -c5 869 ? Ss 0:04 dbus-daemon --system --fork --activation=upstart 881 ? Ss 0:00 /usr/sbin/modem-manager 883 ? Ss 0:00 /usr/sbin/bluetoothd 905 ? Ssl 0:02 NetworkManager 906 ? Ss 0:00 /usr/sbin/cupsd -F 910 ? Sl 0:02 /usr/lib/policykit-1/polkitd --no-debug 918 ? S 0:00 avahi-daemon: running [bunyamin-hp.local] 919 ? S 0:00 avahi-daemon: chroot helper 920 ? S< 0:00 [krfcommd] 956 ? Ss 0:00 /sbin/wpa_supplicant -B -P /run/sendsigs.omit.d/wpasupplicant.pid -u -s -O /var/run/wpa_supplicant 980 tty4 Ss+ 0:00 /sbin/getty -8 38400 tty4 985 tty5 Ss+ 0:00 /sbin/getty -8 38400 tty5 1000 tty2 Ss+ 0:00 /sbin/getty -8 38400 tty2 1006 tty3 Ss+ 0:00 /sbin/getty -8 38400 tty3 1009 tty6 Ss+ 0:00 /sbin/getty -8 38400 tty6 1024 ? Ss 0:00 acpid -c /etc/acpi/events -s /var/run/acpid.socket 1025 ? Ss 0:00 atd 1026 ? Ss 0:00 cron 1029 ? Ss 0:01 /usr/sbin/irqbalance 1034 ? Ssl 0:00 whoopsie 1091 ? Ssl 0:00 lightdm 1216 tty1 Ss+ 0:00 /sbin/getty -8 38400 tty1 1224 ? Sl 0:00 /usr/lib/accountsservice/accounts-daemon 1241 ? Sl 0:00 /usr/sbin/console-kit-daemon --no-daemon 1356 ? Sl 0:00 /usr/lib/upower/upowerd 1447 ? Sl 0:00 /usr/lib/x86_64-linux-gnu/colord/colord 1539 ? SNl 0:00 /usr/lib/rtkit/rtkit-daemon 1723 ? Sl 0:00 /usr/lib/udisks/udisks-daemon 1724 ? S 0:00 udisks-daemon: not polling any devices 2077 ? Z 0:00 [lightdm] <defunct> 2433 ? Z 0:00 [lightdm] <defunct> 3491 ? S 0:00 [flush-8:0] 4023 ? S 0:00 [kworker/u:14] 4034 ? S 0:00 [migration/1] 4035 ? S 0:00 [kworker/1:3] 4036 ? S 0:00 [ksoftirqd/1] 4037 ? S 0:00 [watchdog/1] 4038 ? S 0:00 [migration/2] 4040 ? S 0:00 [ksoftirqd/2] 4041 ? S 0:00 [watchdog/2] 4042 ? S 0:00 [migration/3] 4043 ? S 0:00 [kworker/3:1] 4044 ? S 0:00 [ksoftirqd/3] 4045 ? S 0:00 [watchdog/3] 4047 ? S 0:00 [irq/43-mei] 4070 ? S 0:00 [kworker/3:0] 4072 ? S 0:00 [kworker/1:0] 4164 ? Ss 0:00 anacron -s 4549 tty7 Ss+ 1:13 /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch 4683 ? Sl 0:00 lightdm --session-child 12 47 4718 ? 
Sl 0:00 /usr/bin/gnome-keyring-daemon --daemonize --login 4729 ? Ssl 0:00 gnome-session --session=gnome-fallback 4765 ? Ss 0:00 /usr/bin/ssh-agent /usr/bin/dbus-launch --exit-with-session gnome-session --session=gnome-fallback 4768 ? S 0:00 /usr/bin/dbus-launch --exit-with-session gnome-session --session=gnome-fallback 4769 ? Ss 0:00 //bin/dbus-daemon --fork --print-pid 5 --print-address 7 --session 4779 ? Sl 0:01 /usr/lib/gnome-settings-daemon/gnome-settings-daemon 4786 ? S 0:00 /usr/lib/gvfs/gvfsd 4788 ? Sl 0:00 /usr/lib/gvfs//gvfs-fuse-daemon -f /home/bunyamin/.gvfs 4797 ? Sl 0:00 /usr/lib/gnome-settings-daemon/gsd-printer 4799 ? Sl 0:03 metacity 4805 ? S 0:00 /usr/lib/x86_64-linux-gnu/gconf/gconfd-2 4811 ? Sl 0:10 gnome-panel 4814 ? S 0:00 syndaemon -i 2.0 -K -R -t 4819 ? S<l 0:00 /usr/bin/pulseaudio --start --log-target=syslog 4821 ? Sl 0:00 /usr/lib/dconf/dconf-service 4826 ? Sl 0:00 /usr/lib/gnome-settings-daemon/gnome-fallback-mount-helper 4828 ? Sl 0:06 nautilus -n 4830 ? Sl 0:02 nm-applet 4832 ? Sl 0:00 /usr/lib/policykit-1-gnome/polkit-gnome-authentication-agent-1 4835 ? Sl 0:00 bluetooth-applet 4851 ? S 0:00 /usr/lib/pulseaudio/pulse/gconf-helper 4854 ? Sl 0:04 /usr/lib/indicator-applet/indicator-applet-complete 4859 ? S 0:00 /usr/lib/gvfs/gvfs-gdu-volume-monitor 4863 ? S 0:00 /usr/lib/gvfs/gvfs-gphoto2-volume-monitor 4865 ? Sl 0:00 /usr/lib/gvfs/gvfs-afc-volume-monitor 4871 ? S 0:00 /usr/lib/gvfs/gvfsd-trash --spawner :1.6 /org/gtk/gvfs/exec_spaw/0 4874 ? Sl 0:00 /usr/lib/indicator-application/indicator-application-service 4876 ? Sl 0:00 /usr/lib/indicator-datetime/indicator-datetime-service 4878 ? Sl 0:00 /usr/lib/indicator-messages/indicator-messages-service 4887 ? Sl 0:00 /usr/lib/indicator-printers/indicator-printers-service 4888 ? Sl 0:00 /usr/lib/indicator-session/indicator-session-service 4889 ? Sl 0:00 /usr/lib/indicator-sound/indicator-sound-service 4906 ? S 0:00 /usr/lib/geoclue/geoclue-master 4929 ? S 0:00 /usr/lib/ubuntu-geoip/ubuntu-geoip-provider 4938 ? Sl 0:11 /usr/lib/gnome-applets/multiload-applet-2 4939 ? Sl 0:01 /usr/lib/gnome-applets/cpufreq-applet 4953 ? S 0:00 /usr/lib/gvfs/gvfsd-metadata 4955 ? S 0:00 /usr/lib/gvfs/gvfsd-burn --spawner :1.6 /org/gtk/gvfs/exec_spaw/1 4957 ? Sl 3:22 /usr/lib/firefox/firefox 4973 ? Sl 0:00 /usr/lib/x86_64-linux-gnu/at-spi2-core/at-spi-bus-launcher 4997 ? Sl 0:00 /usr/lib/gnome-disk-utility/gdu-notification-daemon 5000 ? Sl 0:00 telepathy-indicator 5007 ? Sl 0:00 /usr/lib/telepathy/mission-control-5 5012 ? Sl 0:00 /usr/lib/gnome-online-accounts/goa-daemon 5018 ? Sl 0:00 gnome-screensaver 5019 ? Sl 0:01 zeitgeist-datahub 5025 ? Sl 0:00 /usr/bin/zeitgeist-daemon 5033 ? Sl 0:00 /usr/lib/zeitgeist/zeitgeist-fts 5041 ? S 0:00 /bin/cat 5052 ? Sl 0:08 /usr/bin/gnome-terminal -x /bin/sh -c '/home/bunyamin/Desktop/SSH Tunnel' 5058 ? S 0:00 gnome-pty-helper 5067 ? Sl 0:00 update-notifier 5090 ? S 0:00 /usr/bin/python /usr/lib/system-service/system-service-d 5130 ? Sl 0:00 /usr/lib/deja-dup/deja-dup/deja-dup-monitor 5135 ? S 0:00 /bin/sh -c nice run-parts --report /etc/cron.daily 5136 ? SN 0:00 run-parts --report /etc/cron.daily 5358 pts/4 Ss 0:00 bash 5482 ? S 0:00 [kworker/0:1] 5487 ? S 0:01 [kworker/2:0] 5550 ? Sl 1:15 /usr/lib/firefox/plugin-container /usr/lib/flashplugin-installer/libflashplayer.so -greomni /usr/lib/firefox/omni.ja 4957 true plugin 5717 ? S 0:00 /usr/lib/cups/notifier/dbus dbus:// 5824 ? SN 0:00 /bin/sh /etc/cron.daily/update-notifier-common 5825 ? 
SN 0:00 /usr/bin/python /usr/lib/update-notifier/package-data-downloader 5872 ? Sl 0:00 /usr/lib/notify-osd/notify-osd 5888 ? S 0:00 /sbin/udevd --daemon 5889 ? S 0:00 /sbin/udevd --daemon 5909 ? S 0:00 /sbin/dhclient -d -4 -sf /usr/lib/NetworkManager/nm-dhcp-client.action -pf /var/run/sendsigs.omit.d/network-manager.dhclient-eth1.pid -lf /var/lib/dhcp/dhclient-f5f0 5912 ? S 0:00 /usr/sbin/dnsmasq --no-resolv --keep-in-foreground --no-hosts --bind-interfaces --pid-file=/var/run/sendsigs.omit.d/network-manager.dnsmasq.pid --listen-address=127. 5975 pts/1 Ss+ 0:00 /bin/sh -c '/home/bunyamin/Desktop/SSH Tunnel' 5976 pts/1 S+ 0:00 /bin/sh /home/bunyamin/Desktop/SSH Tunnel 5977 pts/1 S+ 0:00 ssh -p443 [email protected] -L 8000:127.0.0.1:8000 5980 ? Sl 0:00 /usr/lib/gvfs/gvfsd-http --spawner :1.6 /org/gtk/gvfs/exec_spaw/2 6034 ? S 0:00 [kworker/u:0] 6054 ? S 0:00 [kworker/2:2] 6070 ? S 0:00 [kworker/0:3] 6094 ? Sl 0:02 gedit /home/bunyamin/Desktop/a.html 6101 ? S 0:00 [kworker/0:2] 6130 pts/4 R+ 0:00 ps ax TinyProxy LOG connect to ad.adserverplus.com:80 mx1.u4gf.com - - [17/Oct/2012 07:38:53] "GET http://ad.tagjunction.com/imp?Z=160x600&s=2959021&T=3&_salt=1516586745&B=12&m=2&u=http%3A%2F%2Fsunshinefelling.com%2Findex.php%3Fview%3Darticle%26catid%3D45%253Aplus-size-dresses%26id%3D7512%253A2012-01-25-22-42-00%26format%3Dpdf%26option%3Dcom_content%26Itemid%3D101&r=1 HTTP/1.0" - - bye bye bye connect to ad.adserverplus.com:80 connect to ad.bharatstudent.com:80 connect to ad.yieldmanager.com:80 142.91.199.250.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.adserverplus.com/imp?Z=0x0&y=29&s=2913320&_salt=2228719469&B=12&m=2&r=1 HTTP/1.0" - - 173.208.94.117 - - [17/Oct/2012 07:38:53] "GET http://ad.adserverplus.com/imp?Z=0x0&y=29&s=3187816&_salt=462045326&B=12&m=2&r=1 HTTP/1.0" - - mx1.a54m.com - - [17/Oct/2012 07:38:53] "GET http://ad.adserverplus.com/imp?Z=300x250&s=2887338&T=3&_salt=2925281520&B=12&m=2&u=http%3A%2F%2Fsecretskirt.com%2Findex.php%3Foption%3Dcom_contact%26view%3Dcontact%26id%3D1%26Itemid%3D95&r=1 HTTP/1.0" - - 108.62.75.54.rdns.ubiquityservers.com - - [17/Oct/2012 07:38:53] "GET http://ad.yieldmanager.com/imp?Z=300x250&s=3218437&T=3&_salt=2939054384&B=12&m=2&u=http%3A%2F%2Fwww.vifinances.com%2Ffinance-investing%2Finsurance-investment%2Fis-life-insurance-investment-necessarily-the-way-to-go.html&r=1 HTTP/1.0" - - connect to ad.yieldmanager.com:80 connect to ad.globe7.com:80 bye connect to ad.globe7.com:80 connect to ad.globe7.com:80 bye 173.208.94.22 - - [17/Oct/2012 07:38:53] "GET http://ad.yieldmanager.com/imp?Z=728x90&s=2922824&T=3&_salt=705371051&B=12&m=2&u=%3A%2F%2Fsunshinefelling.com%2Findex.php%3Fview%3Darticle%26catid%3D44%3Amature-womens-fashion%26id%3D6917%3A2012-01-25-22-37-27%26tmpl%3Dcomponent%26print%3D1%26layout%3Ddefault%26page%3D&r=1 HTTP/1.0" - - bye 23.19.10.44.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.globe7.com/st?ad_type=iframe&ad_size=160x600&section=3512129&pub_url=${PUB_URL} HTTP/1.0" - - connect to ad.yieldmanager.com:80 bye 142.91.189.27.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.globe7.com/imp?Z=0x0&y=29&s=3660215&_salt=2921537966&B=12&m=2&r=1 HTTP/1.0" - - connect to ad.scanmedios.com:80 bye 142.91.217.158.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.globaltakeoff.net/st?ad_type=iframe&ad_size=160x600&section=2077929&pub_url=${PUB_URL} HTTP/1.0" - - 23.19.76.194.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET 
http://ad.yieldmanager.com/imp?Z=728x90&s=3127996&T=3&_salt=1952612979&B=12&m=2&u=http%3A%2F%2Fwww.oseey.com%2Fpure-core-watch%2Fcarbon-fiber-watch%2Fcarbon-monoxide-poisoning-awareness.html&r=1 HTTP/1.0" - - mx1.e6sb.com - - [17/Oct/2012 07:38:53] "GET http://ad.scanmedios.com/imp?Z=728x90&s=3522638&T=3&_salt=3444993091&B=12&m=2&u=http%3A%2F%2Fsunshinefelling.com%2Findex.php%3Foption%3Dcom_content%26view%3Darticle%26id%3D6013%3A2012-01-25-22-25-54%26catid%3D40%3Abig-beautiful-women-fashion%26Itemid%3D96&r=1 HTTP/1.0" - - connect to ad.tagjunction.com:80 connect to ad.yieldmanager.com:80 bye connect to ad.yieldmanager.com:80 23.19.76.154.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.adserverplus.com/st?ad_type=iframe&ad_size=300x250&section=2569393 HTTP/1.0" - - connect to ads.creafi-online-media.com:80 bye 108.62.109.115.rdns.ubiquityservers.com - - [17/Oct/2012 07:38:53] "GET http://ad.yieldmanager.com/imp?Z=0x0&y=29&s=3315330&_salt=2385926515&B=12&m=2&r=1 HTTP/1.0" - - 142.91.217.214.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.yieldmanager.com/imp?Z=160x600&s=3634166&T=3&_salt=1590442300&B=12&m=2&u=http%3A%2F%2Fwealthterritory.com%2Findex.php%3Foption%3Dcom_mailto%26tmpl%3Dcomponent%26link%3DaHR0cDovL3dlYWx0aHRlcnJpdG9yeS5jb20vaW5kZXgucGhwP29wdGlvbj1jb21fY29udGVudCZ2aWV3PWFydGljbGUmaWQ9NDY2NDoyMDExLTA3LTA2LTEzLTI2LTUwJmNhdGlkPTQxOnNlcnZpY2VzJkl0ZW1pZ&r=1 HTTP/1.0" - - 108.62.185.184.rdns.ubiquityservers.com - - [17/Oct/2012 07:38:53] "GET http://ads.creafi-online-media.com/imp?Z=728x90&s=2885766&T=3&_salt=107120374&B=12&m=2&u=http%3A%2F%2Feconomicccore.com%2Findex.php%3Foption%3Dcom_content%26view%3Dcategory%26layout%3Dblog%26id%3D48%26Itemid%3D98%26limitstart%3D45&r=1 HTTP/1.0" - - bye bye bye connect to ad.adserverplus.com:80 connect to ad.yieldmanager.com:80 connect to ad.tagjunction.com:80 bye 108.62.75.252.rdns.ubiquityservers.com - - [17/Oct/2012 07:38:53] "GET http://ad.adserverplus.com/st?ad_type=iframe&ad_size=728x90&section=3213387&pub_url=${PUB_URL} HTTP/1.0" - - bye connect to ad.tagjunction.com:80 bye connect to ad.yieldmanager.com:80 173.208.94.29 - - [17/Oct/2012 07:38:53] "GET http://ad.tagjunction.com/st?ad_type=iframe&ad_size=728x90&section=3006024&pub_url=${PUB_URL} HTTP/1.0" - - 23.19.31.84.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.yieldmanager.com/imp?Z=0x0&y=29&s=2586703&_salt=2905995697&B=12&m=2&r=1 HTTP/1.0" - - oxx-ef-Words.ipwagon.net - - [17/Oct/2012 07:38:53] "GET http://ad.tagjunction.com/imp?Z=0x0&y=29&s=3630499&_salt=4037530564&B=12&m=2&r=1 HTTP/1.0" - - 142.91.185.53.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.tagjunction.com/imp?Z=0x0&y=29&s=3512541&_salt=1134875077&B=12&m=2&r=1 HTTP/1.0" - - connect to ad.globe7.com:80 108.177.187.37.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.yieldmanager.com/imp?Z=300x250&s=3168350&T=3&_salt=548860046&B=12&m=2&u=http%3A%2F%2Flifehealthyliving.com%2Findex.php%3Fview%3Darticle%26catid%3D34%253Ahealthy-food%26id%3D4681%253A2012-05-16-20-40-19%26tmpl%3Dcomponent%26print%3D1%26layout%3Ddefault%26page%3D%26option%3Dcom_content%26Itemid%3D53&r=1 HTTP/1.0" - - connect to ad.adserverplus.com:80 bye connect to ads.creafi-online-media.com:80 108.177.223.180.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.adserverplus.com/imp?Z=300x250&s=3331290&T=3&_salt=1270334669&B=12&m=2&u=http%3A%2F%2Fwww.vegls.com%2Faccident-attorneys-firms%2Fauto-accident-attorney%2Ffind-the-correct-auto-accident-attorney.html&r=1 HTTP/1.0" - - bye 
142.91.185.38.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.globe7.com/st?ad_type=iframe&ad_size=160x600&section=818253 HTTP/1.0" - - connect to ad.yieldmanager.com:80 bye bye bye 108.62.75.230.rdns.ubiquityservers.com - - [17/Oct/2012 07:38:53] "GET http://ads.creafi-online-media.com/st?ad_type=pop&ad_size=0x0&section=3323456&banned_pop_types=29&pop_times=1&pop_frequency=86400&pub_url=${PUB_URL} HTTP/1.0" - - connect to ad.adserverplus.com:80 bye connect to ad.adserverplus.com:80 bye 142.91.217.194.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.yieldmanager.com/imp?Z=300x250&s=3068801&T=3&_salt=1246107431&B=12&m=2&u=http%3A%2F%2Fmoodoffashionandbeauty.com%2Findex.php%3Foption%3Dcom_content%26view%3Darticle%26id%3D756%3A2011-07-13-13-13-43%26catid%3D36%3Afashion-clothes%26Itemid%3D55&r=1 HTTP/1.0" - - connect to ad.smxchange.com:80 108.62.185.235.rdns.ubiquityservers.com - - [17/Oct/2012 07:38:53] "GET http://ad.adserverplus.com/st?ad_type=iframe&ad_size=300x250&section=3307618&pub_url=${PUB_URL} HTTP/1.0" - - connect to ad.globe7.com:80 bye connect to ad.yieldmanager.com:80 bye bye connect to ad.adserverplus.com:80 connect to ad.yieldmanager.com:80 connect to ad.adserverplus.com:80 connect to ad.yieldmanager.com:80 108.177.168.183.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.globe7.com/imp?Z=300x250&s=3582877&T=3&_salt=3271923155&B=12&m=2&u=http%3A%2F%2Fwomenhealthroad.com%2Findex.php%3Foption%3Dcom_content%26view%3Darticle%26id%3D5780%3A2011-12-12-16-56-53%26catid%3D40%3Ahealth-issues%26Itemid%3D96&r=1 HTTP/1.0" - - 23.19.3.100.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.yieldmanager.com/imp?Z=160x600&s=2895969&T=3&_salt=207805714&B=12&m=2&u=http%3A%2F%2Feconomicccore.com%2Findex.php%3Fview%3Darticle%26catid%3D46%253Aeconomic-news%26id%3D6079%253A2011-09-29-07-39-13%26format%3Dpdf%26option%3Dcom_content%26Itemid%3D96&r=1 HTTP/1.0" - - bye 142.91.199.212.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.adserverplus.com/st?ad_type=iframe&ad_size=300x250&section=2956039&pub_url=${PUB_URL} HTTP/1.0" - - bye 142.91.189.169.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.yieldmanager.com/imp?Z=728x90&s=3004691&T=3&_salt=2747591679&B=12&m=2&u=http%3A%2F%2Fwww.qtsfinancial.com%2Findex.php%3Foption%3Dcom_content%26view%3Darticle%26id%3D5406%3Afinancial-statement-english-page%26catid%3D43%3Afinancial-analysis%26Itemid%3D99&r=1 HTTP/1.0" - - connect to ad.adserverplus.com:80 23.19.31.58.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.yieldmanager.com/imp?Z=0x0&y=29&s=3323560&_salt=3172064457&B=12&m=2&r=1 HTTP/1.0" - - connect to ad.adserverplus.com:80 iei-ix-Words.ipwagon.net - - [17/Oct/2012 07:38:53] "GET http://ad.adserverplus.com/imp?Z=728x90&s=3187813&T=3&_salt=1110944041&B=12&m=2&u=http%3A%2F%2Fwww.workinhouses.com%2Fhtml%2Fwallingford-ct-connecticuts-best-places-for-your-home.html&r=1 HTTP/1.0" - - connect to cookex.amp.yahoo.com:80 173.208.94.116 - - [17/Oct/2012 07:38:53] "GET http://ad.adserverplus.com/st?ad_type=iframe&ad_size=300x250&section=3213592&pub_url=${PUB_URL} HTTP/1.0" - - bye bye connect to ad.yieldmanager.com:80 connect to ads.creafi-online-media.com:80 bye 108.62.75.99.rdns.ubiquityservers.com - - [17/Oct/2012 07:38:53] "GET http://ad.adserverplus.com/imp?Z=160x600&s=2913321&T=3&_salt=333033369&B=12&m=2&u=http%3A%2F%2Ffashionstreetlight.com%2Findex.php%3Foption%3Dcom_content%26view%3Darticle%26id%3D28850%3A2011-12-20-12-59-39%26catid%3D45%3Afashion-accessories%26Itemid%3D101&r=1 HTTP/1.0" - 
- bye 142.91.217.208.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://cookex.amp.yahoo.com/v2/cexposer/SIG=18kthu27g/*http%3A//ad.yieldmanager.com/imp?Z=300x250&s=2682517&T=3&_salt=1378331643&B=12&m=2&u=http%3A%2F%2Fwww.economicwindows.com%2Findex.php%3Fview%3Darticle%26catid%3D40%253Afinancial-info%26id%3D3854%253A2011-07-06-13-25-37%26format%3Dpdf%26option%3Dcom_content%26Itemid%3D96&r=1 HTTP/1.0" - - bye bye bye 108.62.185.228.rdns.ubiquityservers.com - - [17/Oct/2012 07:38:53] "GET http://ad.yieldmanager.com/imp?Z=0x0&y=29&s=3315448&_salt=4241487555&B=12&m=2&r=1 HTTP/1.0" - - 108.62.185.220.rdns.ubiquityservers.com - - [17/Oct/2012 07:38:53] "GET http://ads.creafi-online-media.com/st?ad_type=iframe&ad_size=728x90&section=3269968 HTTP/1.0" - - connect to ad.tagjunction.com:80 bye connect to ad.globe7.com:80 bye 142.91.185.47.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.tagjunction.com/st?ad_type=pop&ad_size=0x0&section=2958317&banned_pop_types=29&pop_times=1&pop_frequency=0&pub_url=${PUB_URL} HTTP/1.0" - - bye 108.177.168.183.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.globe7.com/imp?Z=160x600&s=3582877&T=3&_salt=1313872999&B=12&m=2&u=http%3A%2F%2Fwomenhealthroad.com%2Findex.php%3Foption%3Dcom_content%26view%3Darticle%26id%3D5753%3A2011-12-12-16-56-46%26catid%3D40%3Ahealth-issues%26Itemid%3D96&r=1 HTTP/1.0" - - connect to ad.tagjunction.com:80 bye connect to ad.globe7.com:80 bye connect to ad.adserverplus.com:80 108.62.75.53.rdns.ubiquityservers.com - - [17/Oct/2012 07:38:53] "GET http://ad.tagjunction.com/imp?Z=300x250&s=3127172&T=3&_salt=2152278771&B=12&m=2&u=http%3A%2F%2Fwww.oslims.com%2Ffashion-coffee%2Ffashion-slimming-coffee%2Fso-whats-your-poison-coffee-or-tea.html&r=1 HTTP/1.0" - - connect to ad.yieldmanager.com:80 bye bye 108.62.75.170.rdns.ubiquityservers.com - - [17/Oct/2012 07:38:53] "GET http://ad.adserverplus.com/imp?Z=0x0&y=29&s=2909210&_salt=1773835502&B=12&m=2&r=1 HTTP/1.0" - - 23.19.79.3.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.globe7.com/st?ad_type=iframe&ad_size=728x90&section=3571505&pub_url=${PUB_URL} HTTP/1.0" - - 142.91.217.216.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.yieldmanager.com/imp?Z=160x600&s=3630472&T=3&_salt=462936220&B=12&m=2&u=http%3A%2F%2Fwww.economicwindows.com%2Findex.php%3Fview%3Darticle%26catid%3D41%253Afinancial-services%26id%3D4854%253A2011-07-06-13-26-56%26tmpl%3Dcomponent%26print%3D1%26layout%3Ddefault%26page%3D%26option%3Dcom_content%26Itemid%3D97&r=1 HTTP/1.0" - - connect to ad.yieldmanager.com:80 connect to ad.adserverplus.com:80 connect to ad.yieldmanager.com:80 bye connect to ad.yieldmanager.com:80 bye connect to ad.yieldmanager.com:80 142.91.189.176.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.yieldmanager.com/imp?Z=160x600&s=3187822&T=3&_salt=325267799&B=12&m=2&u=http%3A%2F%2Feconomysea.com%2Findex.php%3Foption%3Dcom_mailto%26tmpl%3Dcomponent%26link%3DaHR0cDovL2Vjb25vbXlzZWEuY29tL2luZGV4LnBocD9vcHRpb249Y29tX2NvbnRlbnQmdmlldz1hcnRpY2xlJmlkPTYzNDk6MjAxMS0wOS0yOC0yMC0wNC0xOSZjYXRpZD00NzplY29ub21pYy1uZXdzJkl0ZW1pZD05Nw&r=1 HTTP/1.0" - - connect to ad.adserverplus.com:80 142.91.190.240.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.yieldmanager.com/imp?Z=160x600&s=2956040&T=3&_salt=3354730349&B=12&m=2&u=http%3A%2F%2Fdomarketings.com%2Findex.php%3Foption%3Dcom_content%26view%3Darticle%26id%3D279%3AWhy-Contractor-Leads-Are-Best-For-Getting-Ideal-Construction-Prospects%26catid%3D2%3Abusiness&r=1 HTTP/1.0" - - bye 
108.62.75.6.rdns.ubiquityservers.com - - [17/Oct/2012 07:38:53] "GET http://ad.yieldmanager.com/imp?Z=160x600&s=3323456&T=3&_salt=1244915826&B=12&m=2&u=http%3A%2F%2Fdomarketings.com%2Findex.php%3Foption%3Dcom_content%26view%3Darticle%26id%3D989%3AThe-Basics-of-Failure-Mode-and-Effective-Analysis%26catid%3D2%3Abusiness&r=1 HTTP/1.0" - - bye 142.91.217.220.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.yieldmanager.com/imp?Z=728x90&s=2921135&T=3&_salt=1337464905&B=12&m=2&u=http%3A%2F%2Financezone.com%2Findex.php%3Foption%3Dcom_content%26view%3Darticle%26id%3D7236%3A2011-09-05-19-56-54%26catid%3D49%3Acareer-banking%26Itemid%3D99&r=1 HTTP/1.0" - - bye connect to ad.yieldmanager.com:80 108.62.178.229.rdns.ubiquityservers.com - - [17/Oct/2012 07:38:53] "GET http://ad.adserverplus.com/st?ad_type=iframe&ad_size=160x600&section=3168350&pub_url=${PUB_URL} HTTP/1.0" - - connect to ad.yieldmanager.com:80 108.177.168.187.rdns.ubiquity.io - - [17/Oct/2012 07:38:53] "GET http://ad.smxchange.com/st?ad_type=iframe&ad_size=300x250&section=3285387&pop_nofreqcap=1&pub_url=${PUB_URL} HTTP/1.0" - - skg-wr-Words.ipwagon.net - - [17/Oct/2012 07:38:53] "GET http://ad.yieldmanager.com/imp?Z=0x0&y=29&s=3153972&_salt=3512711469&B=12&m=2&r=1 HTTP/1.0" - - bye connect to ad.yieldmanager.com:80 bye connect to ad.yieldmanager.com:80 mx1.u4gf.com - - [17/Oct/2012 07:38:53] "GET http://ad.yieldmanager.com/imp?Z=160x600&s=2959021&T=3&_salt=1516586745&B=12&m=2&u=http%3A%2F%2Fsunshinefelling.com%2Findex.php%3Fview%3Darticle%26catid%3D45%253Aplus-size-dresses%26id%3D7512%253A2012-01-25-22-42-00%26format%3Dpdf%26option%3Dcom_content%26Itemid%3D101&r=1 HTTP/1.0" - -
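
    A hedged reading of the TinyProxy log above: the requests come from remote hosts (*.rdns.ubiquityservers.com, ipwagon.net, and so on), not from a local application, so the proxy is most likely listening on all interfaces and being abused as an open relay. Restricting it to loopback in /etc/tinyproxy.conf should stop the traffic:

        # in /etc/tinyproxy.conf:
        #   Listen 127.0.0.1      # accept connections on loopback only
        #   Allow 127.0.0.1       # and only from localhost
        sudo service tinyproxy restart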

    Read the article

  • Installing Yaws server on Ubuntu 12.04 (Using a cloud service)

    - by Lee Torres
    I'm trying to get a Yaws web server working on a cloud service (Amazon AWS). I've compiled and installed a local copy on the server. My problem is that I can't get Yaws working on either port 8000 or port 80. I have the following configuration in yaws.conf: port = 8000 listen = 0.0.0.0 docroot = /home/ubuntu/yaws/www/test dir_listings = true This produces the following successful launch/result: Eshell V5.8.5 (abort with ^G) =INFO REPORT==== 16-Sep-2012::17:21:06 === Yaws: Using config file /home/ubuntu/yaws.conf =INFO REPORT==== 16-Sep-2012::17:21:06 === Ctlfile : /home/ubuntu/.yaws/yaws/default/CTL =INFO REPORT==== 16-Sep-2012::17:21:06 === Yaws: Listening to 0.0.0.0:8000 for <3> virtual servers: - http://domU-12-31-39-0B-1A-F6:8000 under /home/ubuntu/yaws/www/trial - =INFO REPORT==== 16-Sep-2012::17:21:06 === Yaws: Listening to 0.0.0.0:4443 for <1> virtual servers: - When I try to access the URL (http://ec2-72-44-47-235.compute-1.amazonaws.com), it never connects. I've tried using paping (http://code.google.com/p/paping/) to check whether port 80 or 8000 is open, and I get a "Host can not be resolved" error, so obviously something isn't working. I've also tried setting yaws.conf so it listens on port 80, like this: port = 80 listen = 0.0.0.0 docroot = /home/ubuntu/yaws/www/test dir_listings = true and I get the following error: =ERROR REPORT==== 16-Sep-2012::17:24:47 === Yaws: Failed to listen 0.0.0.0:80 : {error,eacces} =ERROR REPORT==== 16-Sep-2012::17:24:47 === Can't listen to socket: {error,eacces} =ERROR REPORT==== 16-Sep-2012::17:24:47 === Top proc died, terminate gserv =ERROR REPORT==== 16-Sep-2012::17:24:47 === Top proc died, terminate gserv =INFO REPORT==== 16-Sep-2012::17:24:47 === application: yaws exited: {shutdown,{yaws_app,start,[normal,[]]}} type: permanent {"Kernel pid terminated",application_controller," {application_start_failure,yaws,>>>>>>{shutdown,>{yaws_app,start,[normal,[]]}}}"} I've also opened up port 80 using iptables. Running sudo iptables -L gives this output: Chain INPUT (policy ACCEPT) target prot opt source destination ACCEPT tcp -- ip-192-168-2-0.ec2.internal ip-192-168-2-16.ec2.internal tcp dpt:http ACCEPT tcp -- 0.0.0.0 anywhere tcp dpt:http ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED ACCEPT tcp -- anywhere anywhere tcp dpt:http ACCEPT tcp -- anywhere anywhere tcp dpt:http Chain FORWARD (policy ACCEPT) target prot opt source destination Chain OUTPUT (policy ACCEPT) target prot opt source destination In addition, I've gone to the security group panel in the Amazon AWS configuration area and added ports 80, 8000, and 8080 with IP source 0.0.0.0. Please note: if you try to access the URL of the virtual server now, it likely won't connect, because I'm not currently running the yaws daemon. I've tested it when running yaws either through yaws or yaws -i. Thanks for your patience
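
    Two hedged observations on the errors above (the workarounds below are standard techniques, not from the post): {error,eacces} is the usual "unprivileged process cannot bind a port below 1024" failure, and a port that stays unreachable despite an open security group often points at the instance's own firewall or the listener itself. A sketch (the beam path is an assumption; adjust it to the local Erlang install):

        # let requests arriving on 80 reach yaws on 8000 without running as root:
        sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8000
        # or grant the Erlang VM the low-port capability (path is an assumption):
        sudo setcap 'cap_net_bind_service=+ep' /usr/lib/erlang/erts-5.8.5/bin/beam.smp
        # sanity-check the listener from the instance itself first:
        curl -v http://127.0.0.1:8000/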

    Read the article

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspxMany people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with easy-to-use API through .NET SDK and HTTP REST. For example, we can store JavaScript files, images, documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure. Or we can store our VHD files in blob and mount it as a hard drive in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blob and block blob. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunk read and write, which has more common usage. Since we can upload block blob in blocks through BlockBlob.PutBlock, and them commit them as a whole blob with invoking the BlockBlob.PutBlockList, it is very powerful to upload large files, as we can upload blocks in parallel, and provide pause-resume feature. There are many documents, articles and blog posts described on how to upload a block blob. Most of them are focus on the server side, which means when you had received a big file, stream or binaries, how to upload them into blob storage in blocks through .NET SDK.  But the problem is, how can we upload these large files from client side, for example, a browser. This questioned to me when I was working with a Chinese customer to help them build a network disk production on top of azure. The end users upload their files from the web portal, and then the files will be stored in blob storage from the Web Role. My goal is to find the best way to transform the file from client (end user’s machine) to the server (Web Role) through browser. In this post I will demonstrate and describe what I had done, to upload large file in chunks with high speed, and save them as blocks into Windows Azure Blob Storage.   Traditional Upload, Works with Limitation The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button. 1: @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" })) 2: { 3: <input type="file" name="file" /> 4: <input type="submit" value="upload" /> 5: } And then in the backend controller, we retrieve the whole content of this file and upload it in to the blob storage through .NET SDK. We can split the file in blocks and upload them in parallel and commit. The code had been well blogged in the community. 
1: [HttpPost] 2: public ActionResult About(HttpPostedFileBase file) 3: { 4: var container = _client.GetContainerReference("test"); 5: container.CreateIfNotExists(); 6: var blob = container.GetBlockBlobReference(file.FileName); 7: var blockDataList = new Dictionary<string, byte[]>(); 8: using (var stream = file.InputStream) 9: { 10: var blockSizeInKB = 1024; 11: var offset = 0; 12: var index = 0; 13: while (offset < stream.Length) 14: { 15: var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset); 16: var blockData = new byte[readLength]; 17: offset += stream.Read(blockData, 0, readLength); 18: blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData); 19:  20: index++; 21: } 22: } 23:  24: Parallel.ForEach(blockDataList, (bi) => 25: { 26: blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null); 27: }); 28: blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray()); 29:  30: return RedirectToAction("About"); 31: } This works perfect if we selected an image, a music or a small video to upload. But if I selected a large file, let’s say a 6GB HD-movie, after upload for about few minutes the page will be shown as below and the upload will be terminated. In ASP.NET there is a limitation of request length and the maximized request length is defined in the web.config file. It’s a number which less than about 4GB. So if we want to upload a really big file, we cannot simply implement in this way. Also, in Windows Azure, a cloud service network load balancer will terminate the connection if exceed the timeout period. From my test the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitation mentioned above, the simple HTML file upload cannot provide rich upload experience such as chunk upload, pause and pause-resume. So we need to find a better way to upload large file from the client to the server.   Upload in Chunks through HTML5 and JavaScript In order to break those limitation mentioned above we will try to upload the large file in chunks. This takes some benefit to us such as - No request size limitation: Since we upload in chunks, we can define the request size for each chunks regardless how big the entire file is. - No timeout problem: The size of chunks are controlled by us, which means we should be able to make sure request for each chunk upload will not exceed the timeout period of both ASP.NET and Windows Azure load balancer. It was a big challenge to upload big file in chunks until we have HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.   In HTML5, the File interface had been improved with a new method called “slice”. It can be used to read part of the file by specifying the start byte index and the end byte index. For example if the entire file was 1024 bytes, file.slice(512, 768) will read the part of this file from the 512nd byte to 768th byte, and return a new object of interface called "Blob”, which you can treat as an array of bytes. In fact,  a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob is very useful to implement the chunk upload. 
We will use File interface to represent the file the user selected from the browser and then use File.slice to read the file in chunks in the size we wanted. For example, if we wanted to upload a 10MB file with 512KB chunks, then we can read it in 512KB blobs by using File.slice in a loop.   Assuming we have a web page as below. User can select a file, an input box to specify the block size in KB and a button to start upload. 1: <div> 2: <input type="file" id="upload_files" name="files[]" /><br /> 3: Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br /> 4: <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" /> 5: </div> Then we can have the JavaScript function to upload the file in chunks when user clicked the button. 1: <script type="text/javascript"> 1: 2: $(function () { 3: $("#upload_button_blob").click(function () { 4: }); 5: });</script> Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke the File, Blob and FormData from the “window” object. If any of them is “undefined” the condition result will be “false” which means your browser doesn’t support these premium feature and it’s time for you to get your browser updated. FormData is another new feature we are going to use in the future. It could generate a temporary form for us. We will use this interface to create a form with chunk and associated metadata when invoked the service through ajax. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: if (window.File && window.Blob && window.FormData) { 4: alert("Your brwoser is awesome, let's rock!"); 5: } 6: else { 7: alert("Oh man plz update to a modern browser before try is cool stuff out."); 8: return; 9: } 10: }); Each browser supports these interfaces by their own implementation and currently the Blob, File and File.slice are supported by Chrome 21, FireFox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we worked on the files the user selected one by one since in HTML5, user can select multiple files in one file input box. 1: var files = $("#upload_files")[0].files; 2: for (var i = 0; i < files.length; i++) { 3: var file = files[i]; 4: var fileSize = file.size; 5: var fileName = file.name; 6: } Next, we calculated the start index and end index for each chunks based on the size the user specified from the browser. We put them into an array with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks since we need to specify the target blob name and the block index. At the same time we will store the list of all indexes into another variant which will be used to commit blocks into blob in Azure Storage once all chunks had been uploaded successfully. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 
4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10:  11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: var blockSizeInKB = $("#block_size").val(); 14: var blockSize = blockSizeInKB * 1024; 15: var blocks = []; 16: var offset = 0; 17: var index = 0; 18: var list = ""; 19: while (offset < fileSize) { 20: var start = offset; 21: var end = Math.min(offset + blockSize, fileSize); 22:  23: blocks.push({ 24: name: fileName, 25: index: index, 26: start: start, 27: end: end 28: }); 29: list += index + ","; 30:  31: offset = end; 32: index++; 33: } 34: } 35: }); Now we have all chunks’ information ready. The next step should be upload them one by one to the server side, and at the server side when received a chunk it will upload as a block into Blob Storage, and finally commit them with the index list through BlockBlobClient.PutBlockList. But since all these invokes are ajax calling, which means not synchronized call. So we need to introduce a new JavaScript library to help us coordinate the asynchronize operation, which named “async.js”. You can download this JavaScript library here, and you can find the document here. I will not explain this library too much in this post. We will put all procedures we want to execute as a function array, and pass into the proper function defined in async.js to let it help us to control the execution sequence, in series or in parallel. Hence we will define an array and put the function for chunk upload into this array. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4:  5: // start to upload each files in chunks 6: var files = $("#upload_files")[0].files; 7: for (var i = 0; i < files.length; i++) { 8: var file = files[i]; 9: var fileSize = file.size; 10: var fileName = file.name; 11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: ... ... 14:  15: // define the function array and push all chunk upload operation into this array 16: blocks.forEach(function (block) { 17: putBlocks.push(function (callback) { 18: }); 19: }); 20: } 21: }); 22: }); As you can see, I used File.slice method to read each chunks based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then post this form to the backend server through jQuery.ajax. This is the key part of our solution. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 
Then we invoke these functions one by one through async.js, and once all of them have executed successfully we invoke another ajax call to the backend service to commit all the chunks (blocks) as one blob in Windows Azure Storage.

$("#upload_button_blob").click(function () {
    // assert the browser supports html5
    ... ...
    // start to upload each file in chunks
    var files = $("#upload_files")[0].files;
    for (var i = 0; i < files.length; i++) {
        var file = files[i];
        var fileSize = file.size;
        var fileName = file.name;
        // calculate the start and end byte index for each block (chunk)
        // with the index, file name and index list for future use
        ... ...
        // define the function array and push all chunk upload operations into it
        ... ...
        // invoke the functions one by one,
        // then invoke the commit ajax call to put the blocks into a blob in azure storage
        async.series(putBlocks, function (error, result) {
            var data = {
                name: fileName,
                list: list
            };
            $.post("/Home/Commit", data, function (result) {
                if (!result.success) {
                    alert(result.error);
                }
                else {
                    alert("done!");
                }
            });
        });
    }
});

That's all on the client side. The outline of our logic is:
- Calculate the start and end byte index of each chunk based on the block size.
- Define the functions that read each chunk from the file and upload its content to the backend service through ajax.
- Execute those functions with "async.js" (a short standalone illustration of its contract follows this list).
- Finally, commit the chunks into Windows Azure Storage by invoking the backend service.
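If you have not used async.js before, this minimal illustration — mine, not from the original post — shows the contract we rely on: each task receives a callback(err, result), and the final function runs exactly once, after every task has called back or as soon as one reports an error.

var async = require("async"); // in the browser, async is available as a global instead

async.series([
    function (callback) { callback(null, "first"); },
    function (callback) { callback(null, "second"); }
], function (error, results) {
    console.log(error, results); // null [ 'first', 'second' ]
});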
Save Chunks as Blocks into Blob Storage

Above we finished the client-side JavaScript code; it uploads the file in chunks to the backend service, which we are going to implement in this step. We will use ASP.NET MVC as our backend service: it receives the chunks, uploads them into Windows Azure Blob Storage as blocks, and finally commits them as one blob. Since the client side uploads chunks by invoking an ajax call to the URL "/Home/UploadInFormData", I created a new action on the Home controller which only accepts HTTP POST requests.

[HttpPost]
public JsonResult UploadInFormData()
{
    var error = string.Empty;
    try
    {
    }
    catch (Exception e)
    {
        error = e.ToString();
    }

    return new JsonResult()
    {
        Data = new
        {
            success = string.IsNullOrWhiteSpace(error),
            error = error
        }
    };
}

Then I retrieved the file name and index from the Request.Form object and the chunk content from Request.Files, both passed from our client side. Then I used the Windows Azure SDK to create a blob container (in this case named "test") and a blob reference with the blob name (the same as the file name), and uploaded the chunk as a block of this blob with its index. In Blob Storage each block must have an ID associated with it so that we can finally assemble all the blocks into one blob by specifying their block ID list; note that Azure requires all block IDs within one blob to have the same length, which is why the code encodes the 4-byte integer index as a Base64 string.

[HttpPost]
public JsonResult UploadInFormData()
{
    var error = string.Empty;
    try
    {
        var name = Request.Form["name"];
        var index = int.Parse(Request.Form["index"]);
        var file = Request.Files[0];
        var id = Convert.ToBase64String(BitConverter.GetBytes(index));

        var container = _client.GetContainerReference("test");
        container.CreateIfNotExists();
        var blob = container.GetBlockBlobReference(name);
        blob.PutBlock(id, file.InputStream, null);
    }
    catch (Exception e)
    {
        error = e.ToString();
    }

    return new JsonResult()
    {
        Data = new
        {
            success = string.IsNullOrWhiteSpace(error),
            error = error
        }
    };
}

Next, I created another action to commit the blocks into the blob once all chunks have been uploaded. Similarly, I retrieved the blob name from Request.Form, together with the chunk ID list — the block ID list — as a comma-separated string. I split it into a list and invoked the CloudBlockBlob.PutBlockList method. After that our blob shows up in the container, ready to be downloaded.

[HttpPost]
public JsonResult Commit()
{
    var error = string.Empty;
    try
    {
        var name = Request.Form["name"];
        var list = Request.Form["list"];
        var ids = list
            .Split(',')
            .Where(id => !string.IsNullOrWhiteSpace(id))
            .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id))))
            .ToArray();

        var container = _client.GetContainerReference("test");
        container.CreateIfNotExists();
        var blob = container.GetBlockBlobReference(name);
        blob.PutBlockList(ids);
    }
    catch (Exception e)
    {
        error = e.ToString();
    }

    return new JsonResult()
    {
        Data = new
        {
            success = string.IsNullOrWhiteSpace(error),
            error = error
        }
    };
}
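For readers following along on the client side, here is a hedged JavaScript rendering of the same ID transform both actions above perform; it is purely illustrative, since the real encoding happens on the server, and the function name blockId is mine.

function blockId(index) {
    // 4-byte little-endian integer, matching BitConverter.GetBytes(int) on x86
    var bytes = new Uint8Array(4);
    new DataView(bytes.buffer).setInt32(0, index, true);
    return btoa(String.fromCharCode.apply(null, bytes));
}

console.log(blockId(0)); // "AAAAAA==" — every ID has the same length, as Azure requires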
Now we have finished all the code we need. Below is the full client-side JavaScript code (with the same two fixes applied as above: contentType set to false, and errors propagated to the async.js callback).

<script type="text/javascript" src="~/Scripts/async.js"></script>
<script type="text/javascript">
    $(function () {
        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            if (window.File && window.Blob && window.FormData) {
                alert("Your browser is awesome, let's rock!");
            }
            else {
                alert("Oh man, please update to a modern browser before trying this cool stuff out.");
                return;
            }

            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;

                // calculate the start and end byte index for each block (chunk)
                // with the index, file name and index list for future use
                var blockSizeInKB = $("#block_size").val();
                var blockSize = blockSizeInKB * 1024;
                var blocks = [];
                var offset = 0;
                var index = 0;
                var list = "";
                while (offset < fileSize) {
                    var start = offset;
                    var end = Math.min(offset + blockSize, fileSize);

                    blocks.push({
                        name: fileName,
                        index: index,
                        start: start,
                        end: end
                    });
                    list += index + ",";

                    offset = end;
                    index++;
                }

                // define the function array and push all chunk upload operations into it
                var putBlocks = [];
                blocks.forEach(function (block) {
                    putBlocks.push(function (callback) {
                        // load the blob based on the start and end index of each chunk
                        var blob = file.slice(block.start, block.end);
                        // put the file name, index and blob into a temporary form
                        var fd = new FormData();
                        fd.append("name", block.name);
                        fd.append("index", block.index);
                        fd.append("file", blob);
                        // post the form to the backend service (asp.net mvc controller action)
                        $.ajax({
                            url: "/Home/UploadInFormData",
                            data: fd,
                            processData: false,
                            // let the browser set the multipart content type and boundary
                            contentType: false,
                            type: "POST",
                            success: function (result) {
                                if (!result.success) {
                                    alert(result.error);
                                }
                                // report success or failure back to async.js
                                callback(result.success ? null : result.error, block.index);
                            },
                            error: function (xhr) {
                                // surface transport-level failures so the run does not stall
                                callback(xhr.statusText || "upload failed");
                            }
                        });
                    });
                });

                // invoke the functions one by one,
                // then invoke the commit ajax call to put the blocks into a blob in azure storage
                async.series(putBlocks, function (error, result) {
                    var data = {
                        name: fileName,
                        list: list
                    };
                    $.post("/Home/Commit", data, function (result) {
                        if (!result.success) {
                            alert(result.error);
                        }
                        else {
                            alert("done!");
                        }
                    });
                });
            }
        });
    });
</script>
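One small tidy-up worth considering — my suggestion, not part of the original post: building the index list with map and join avoids the trailing comma, so the server-side filter for empty entries becomes unnecessary.

// "0,1,2" instead of "0,1,2," — no empty entry for the server to filter out
var list = blocks.map(function (b) { return b.index; }).join(",");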
And below is the full ASP.NET MVC controller code.

public class HomeController : Controller
{
    private CloudStorageAccount _account;
    private CloudBlobClient _client;

    public HomeController()
        : base()
    {
        _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString"));
        _client = _account.CreateCloudBlobClient();
    }

    public ActionResult Index()
    {
        ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application.";

        return View();
    }

    [HttpPost]
    public JsonResult UploadInFormData()
    {
        var error = string.Empty;
        try
        {
            var name = Request.Form["name"];
            var index = int.Parse(Request.Form["index"]);
            var file = Request.Files[0];
            var id = Convert.ToBase64String(BitConverter.GetBytes(index));

            var container = _client.GetContainerReference("test");
            container.CreateIfNotExists();
            var blob = container.GetBlockBlobReference(name);
            blob.PutBlock(id, file.InputStream, null);
        }
        catch (Exception e)
        {
            error = e.ToString();
        }

        return new JsonResult()
        {
            Data = new
            {
                success = string.IsNullOrWhiteSpace(error),
                error = error
            }
        };
    }

    [HttpPost]
    public JsonResult Commit()
    {
        var error = string.Empty;
        try
        {
            var name = Request.Form["name"];
            var list = Request.Form["list"];
            var ids = list
                .Split(',')
                .Where(id => !string.IsNullOrWhiteSpace(id))
                .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id))))
                .ToArray();

            var container = _client.GetContainerReference("test");
            container.CreateIfNotExists();
            var blob = container.GetBlockBlobReference(name);
            blob.PutBlockList(ids);
        }
        catch (Exception e)
        {
            error = e.ToString();
        }

        return new JsonResult()
        {
            Data = new
            {
                success = string.IsNullOrWhiteSpace(error),
                error = error
            }
        };
    }
}

If we select a file in the browser, the application uploads the chunks to the server in the size we specified through background ajax calls, and then commits all chunks into one blob. We can then find the blob in our Windows Azure Blob Storage.

Optimized by Parallel Upload

In the previous example we uploaded our file in chunks. This solved both the ASP.NET MVC request content size limitation and the Windows Azure load balancer timeout, but it might introduce a performance problem since the chunks were uploaded in sequence. To improve the upload performance we can modify our client-side code a bit so the upload operations run in parallel. The good news is that "async.js" provides a parallel execution function as well. If you recall, the code that invoked the service to upload chunks used "async.series", which executes all functions in sequence. Now we simply change this call to "async.parallel", which invokes all functions in parallel. Note that because the commit step sends the ordered block ID list, the blob content still comes out in the right order even when chunks finish uploading out of order.

$("#upload_button_blob").click(function () {
    // assert the browser supports html5
    ... ...
    // start to upload each file in chunks
    var files = $("#upload_files")[0].files;
    for (var i = 0; i < files.length; i++) {
        var file = files[i];
        var fileSize = file.size;
        var fileName = file.name;
        // calculate the start and end byte index for each block (chunk)
        // with the index, file name and index list for future use
        ... ...
        // define the function array and push all chunk upload operations into it
        ... ...
        // invoke the functions in parallel,
        // then invoke the commit ajax call to put the blocks into a blob in azure storage
        async.parallel(putBlocks, function (error, result) {
            var data = {
                name: fileName,
                list: list
            };
            $.post("/Home/Commit", data, function (result) {
                if (!result.success) {
                    alert(result.error);
                }
                else {
                    alert("done!");
                }
            });
        });
    }
});
In this way all chunks will be uploaded to the server side at the same time to maximize the bandwidth usage. This should work well if the file is not very large and the chunk size is not very small. But for a large file it might introduce another problem: too many ajax calls sent to the server at the same time. So the better solution is to upload the chunks in parallel with a maximum concurrency limit. The code below sets the concurrency limit to 4, which means at most 4 ajax calls can be in flight at the same time.

$("#upload_button_blob").click(function () {
    // assert the browser supports html5
    ... ...
    // start to upload each file in chunks
    var files = $("#upload_files")[0].files;
    for (var i = 0; i < files.length; i++) {
        var file = files[i];
        var fileSize = file.size;
        var fileName = file.name;
        // calculate the start and end byte index for each block (chunk)
        // with the index, file name and index list for future use
        ... ...
        // define the function array and push all chunk upload operations into it
        ... ...
        // invoke the functions in parallel with a concurrency limit of 4,
        // then invoke the commit ajax call to put the blocks into a blob in azure storage
        async.parallelLimit(putBlocks, 4, function (error, result) {
            var data = {
                name: fileName,
                list: list
            };
            $.post("/Home/Commit", data, function (result) {
                if (!result.success) {
                    alert(result.error);
                }
                else {
                    alert("done!");
                }
            });
        });
    }
});
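A related hardening idea, sketched here as my own addition rather than something from the original post: more recent versions of async.js also ship async.retry, which can wrap each upload task so a transient network failure is retried a few times before the whole run aborts. The uploadChunk helper below is hypothetical — it stands for the $.ajax call shown earlier.

blocks.forEach(function (block) {
    putBlocks.push(function (callback) {
        // retry each chunk up to 3 times before reporting failure
        async.retry(3, function (retryCallback) {
            uploadChunk(block, retryCallback); // hypothetical wrapper around the $.ajax call above
        }, callback);
    });
});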
Summary

In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leveraged three new features introduced in HTML5:
- File.slice: read part of the file by specifying the start and end byte index.
- Blob: a file-like interface which contains part of the file content.
- FormData: a temporary form element through which we can pass each chunk along with some metadata to the backend service.

Then we discussed the performance considerations of chunk uploading. Sequential upload cannot maximize upload speed, while unlimited parallel upload might overload the browser and the server when there are too many chunks. So we finally came up with the solution of uploading chunks in parallel with a concurrency limit. We also demonstrated how to utilize the "async.js" JavaScript library to help us control the asynchronous calls and the parallel limit.

Regarding the chunk size and the parallel limit there is no single "best" value. You need to test various combinations and find the best one for your particular scenario. It depends on the local bandwidth, the client machine cores, and the server side (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below is one of my performance test results. The client machine was Windows 8 with IE 10 and 4 cores, on the Microsoft corporate network. The web site was hosted in the Windows Azure China North data center (in Beijing) with one small web role (1 core CPU at 1.7GHz, 1.75GB memory, 100Mbps bandwidth).

The test cases were:
- Chunk size: 512KB, 1MB, 2MB, 4MB.
- Upload mode: sequential; parallel (unlimited); parallel with limit (4 threads, 8 threads).
- Chunk format: base64 string, binaries.
- Target file: 100MB.
- Each case was tested 3 times.

(Test result chart omitted.)

Some thoughts, but not guidance or best practice:
- Parallel gets better performance than sequential.
- No significant performance improvement between 4 and 8 parallel threads.
- Transferring binaries provides better performance than base64 strings.
- In all cases, a chunk size of 1MB - 2MB gets better performance.

Hope this helps,
Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Linux ls[] command

    - by pedro
    ubuntu@ubuntu:/$ ls /dev/tty[15-24]* /dev/tty1 /dev/tty13 /dev/tty17 /dev/tty40 /dev/tty44 /dev/tty48 /dev/tty10 /dev/tty14 /dev/tty18 /dev/tty41 /dev/tty45 /dev/tty49 /dev/tty11 /dev/tty15 /dev/tty19 /dev/tty42 /dev/tty46 what is wrong?

    Read the article

  • Why are Maven Goals not added by IntelliJ?

    - by Jasper
    I have produced a new Maven project from gae-archetype-gwt from within IntelliJ, and everything is generated well, but the gae:... goals won't show up in the Maven view, and if I try to update the repository indices, I get errors for everything except the local repository. When I run gae:unpack from the terminal, everything works fine. I'm running Ubuntu 10.04 Beta 1 and using OpenJDK, for which IntelliJ is also configured. UPDATE: WORKS FINE WITH UBUNTU 10.04 FINAL + JDK FROM PARTNER REPOSITORY

    Read the article

  • KDevelop has no build menu.

    - by Brian Hooper
    I have just installed KDevelop on my Ubuntu machine (KDevelop 3.9.95 on Ubuntu 9.10) with sudo apt-get install kdevelop. I created a new project with the "Hello World" program in it, but there doesn't appear to be any way to compile anything. The manuals refer to the build menu but there isn't one, and all compile options on the other menus are greyed out. Does anyone know what I have done wrong?

    Read the article

  • Why does passing arguments to the command in an env invocation not work?

    - by timdisney
    I have a shell script to run node with some arguments like so: #!/usr/bin/env node --harmony_proxies ... This works fine under OS X but in Ubuntu it errors with: /usr/bin/env: node --harmony_proxies: No such file or directory Node is definitely installed and on the PATH since if I remove the --harmony_proxies flag it works fine. Is there some different way of passing arguments when using env in Ubuntu?

    Read the article
