Search Results

Search found 2617 results on 105 pages for 'mr man'.


  • dnsmasq Client TTL

    - by user548971
    I have a situation where my hosts file is constantly changing. Because of this I don't want clients to cache ip addresses resolved using the hosts file. Here is the command that starts dnsmasq for me: /usr/sbin/dnsmasq -K -R -y -Z -b -E -S 8.8.8.8 -l /tmp/dhcp.leases -r /tmp/resolv.conf.auto --stop-dns-rebind --rebind-localhost-ok --dhcp-range=lan,192.168.2.2,192.168.2.249,255.255.255.0,12h -2 eth0 In looking at this site: http://www.thekelleys.org.uk/dnsmasq/docs/dnsmasq-man.html I see that the -T option has this description: -T, --local-ttl=<time> When replying with information from /etc/hosts or the DHCP leases file dnsmasq by default sets the time-to-live field to zero, meaning that the requester should not itself cache the information. This is the correct thing to do in almost all situations. This option allows a time-to-live (in seconds) to be given for these replies. This will reduce the load on the server at the expense of clients using stale data under some circumstances. My command doesn't have the -T option. Do I need it or does dnsmasq default TTL to zero without it?
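
    One way to check rather than guess, as a hedged sketch (192.168.2.1 and the host name are placeholders for the router's own address and an entry from the hosts file):

      # ask dnsmasq directly and look at the TTL column of the answer;
      # with no -T/--local-ttl it should read 0, i.e. "do not cache"
      dig @192.168.2.1 somehost.lan +noall +answer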

    Read the article

  • How do I restore tab-completion on shell variables on the bash command-line?

    - by Eric
    I've long set my most-recently visited directories to shell variables d1, d2, etc. On an ancient Fedora machine I could type a command like $ cp $d1/ and the shell would replace $d1 with text like /home/acctname/projects/blog/ and would then show me the contents of .../blog, like any tab-completion. Now, both Ubuntu wheezy/sid and Fedora 16 just backslash-escape the '$', and naturally there are no completions to show. You can see this behavior in action in an OSX Terminal window. On 10.8, do something like ls $HOME/ to see what I mean. Is there a bash shell variable or option that can restore the old behavior? man bash suggests this is a bug: complete (TAB) Attempt to perform completion on the text before point. Bash attempts completion treating the text as a variable (if the text begins with $), username (if the text begins with ~), hostname (if the text begins with @), or command (including aliases and functions) in turn. If none of these produces a match, filename completion is attempted. I get the above-described completion when a token starts with '~' or a letter. It's just '$'-completion that's broken.
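
    A hedged note for context: on bash 4.2 and later this behavior is usually governed by the direxpand shell option rather than being a bug; the sketch below assumes a bash new enough to know the option (if the first command prints nothing, it does not).

      # see whether this bash has the option at all
      shopt | grep direxpand

      # turn the old expansion behavior back on, and keep it for future shells
      shopt -s direxpand
      echo 'shopt -s direxpand' >> ~/.bashrc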

    Read the article

  • Securing NTP: which method to use?

    - by Harry
    Can someone good at NTP configuration please share which method is the best/easiest to implement a secure, tamper-proof version of NTP? Here are some difficulties... I don't have the luxury of having my own stratum 0 time source, so must rely on external time servers. Should I read up on the AutoKey method or should I try to go the MD5 route? Based on what I know about symmetric cryptography, it seems that the MD5 method relies on a pre-agreed set of keys (symmetric cryptography) between the client and the server, and, so, is prone to man-in-the-middle attack. AutoKey, on the other hand, does not appear to work behind a NAT or a masquerading host. Is this still true, by the way? (This reference link is dated 2004, so I'm not sure what is the state of art today.) 4.1 Are public AutoKey-talking time servers available? I browsed through the NTP book by David Mills. The book looks excellent in a way (coming from the NTP creator after all), but the information therein is also overwhelming. I just need to first configure a secure version of NTP and then may be later worry about its architectural and engineering underpinnings. Can someone please wade me through these drowning NTP waters? Don't necessarily need a working config from you, just info on which NTP mode/config to try and may be also a public time server that supports that mode/config. Many thanks, /HS
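
    For the MD5/symmetric route, a minimal client-side sketch, assuming a time server whose operator has agreed to share key 1 with you; the host name and secret are placeholders, and the commands need to run as root:

      # shared key file, format "<keyid> <type> <secret>", where M = MD5
      printf '1 M MySharedSecret\n' > /etc/ntp/keys
      chmod 600 /etc/ntp/keys

      # client side of /etc/ntp.conf: load the key file, trust key 1,
      # and authenticate one upstream server with that key
      printf 'keys /etc/ntp/keys\ntrustedkey 1\nserver ntp.example.org key 1\n' >> /etc/ntp.conf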

    Read the article

  • Windows 8 preinstall downgrade to Win 7: UEFI problems :/

    - by Tim Flemming
    I have a friend who has an HP laptop with UEFI booting. He locked himself out of Windows 8 (forgot his password)... so he called me. Well, after trying to use the latest version of chntpw in UBCD 5.2.6, I came to the conclusion I needed to wipe the drive and install Windows 7. A little googling suggested using certain versions of chntpw, but I decided not to waste the time burning endless CDs. Anyway, he agreed to installing Win7 ("I hate the big tile thingys"). So I attacked with GParted: wiped the whole drive, set flags to boot, set the partition table as MBR. However, upon trying to install Win7, it still claimed I had a GPT-style disk and was unable to install. The only thing I can think of that I'm doing wrong is not changing the BIOS/UEFI to legacy... My question is: if I enable legacy, somehow get the drive to actually be MBR, and use my x86 DVD, should I be good? Are there some common glitches people miss when dealing with GPT/EFI/UEFI? I need answers because he now has a blank laptop which I am responsible for. Another question: Windows 8 came installed on a GPT disk, so why, when I boot the install DVD, does it say it can't install to a GPT-style disk??? WTF??? Talk about the bleeding edge, man.
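
    For what it's worth, a hedged sketch of flattening the GPT layout from the Windows 7 installer itself (Shift+F10 opens a command prompt during setup; this erases the entire disk, which is already the intent here):

      rem from the command prompt inside Windows 7 setup (Shift+F10):
      diskpart
      rem ...then, at the DISKPART> prompt:
      list disk
      select disk 0
      clean
      convert mbr
      exit

    With the firmware set so the DVD boots in legacy/CSM mode, setup expects an MBR disk; booted in UEFI mode the same installer expects GPT, which is most likely where the contradictory messages come from.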

    Read the article

  • centos install / partitioning

    - by ServerSideX
    I'm using NOC-PS to remotely install Centos 6.2 via KVM / IPMI. I'm going to install cPanel as well and they recommend this layout /boot (99MB) swap (2x server RAM) / (remainder) In the o/s install profile within NOC-PS software, it shows as this: part /boot --fstype ext2 --size 250 part pv.01 --size 1 --grow volgroup vg pv.01 logvol / --vgname=vg --size=1 --grow --fstype ext4 --fsoptions=discard,noatime --name=root logvol /tmp --vgname=vg --size=1024 --fstype ext4 --fsoptions=discard,noatime --name=tmp logvol swap --vgname=vg --recommended --name=swap By the time the default partition setup was done installing Centos, I get this [root@server005 ~]# df -h Filesystem Size Used Avail Use% Mounted on /dev/mapper/vg-root 532G 907M 504G 1% / tmpfs 7.8G 0 7.8G 0% /dev/shm /dev/sda1 243M 28M 202M 13% /boot /dev/mapper/vg-tmp 1008M 34M 924M 4% /tmp [root@server005 ~]# cat /etc/fstab # # /etc/fstab # Created by anaconda on Fri Dec 7 18:47:24 2012 # # Accessible filesystems, by reference, are maintained under '/dev/disk' # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info # /dev/mapper/vg-root / ext4 discard,noatime 1 1 UUID=58b31aaf-5072-4fb1-a858-33bc316fa793 /boot ext2 defaults 1 2 /dev/mapper/vg-tmp /tmp ext4 discard,noatime 1 2 /dev/mapper/vg-swap swap swap defaults 0 0 tmpfs /dev/shm tmpfs defaults 0 0 devpts /dev/pts devpts gid=5,mode=620 0 0 sysfs /sys sysfs defaults 0 0 proc /proc proc defaults 0 0 My question is, how should the NOC-PS install profile look like to get the recommended cPanel partitioning? The server has 16GB RAM, dual 600GB SAS drives and will be used for cPanel shared hosting.
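
    A hedged sketch of a profile fragment that follows the quoted /boot, swap, / layout rather than the LVM default, using standard kickstart directives (which NOC-PS appears to accept, given the snippet above); 32768 MB is 2 x 16 GB RAM, and the sizes are examples, not something cPanel prescribes verbatim:

      part /boot --fstype ext2 --size=250
      part swap  --size=32768
      part /     --fstype ext4 --size=1 --grow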

    Read the article

  • Safe to remove Python2.6 files?

    - by darkfeline
    I'm using Linux Mint 11 (will upgrade soon), and I've noticed that, even though I don't have any python2.6 packages installed with apt, there's a bunch of residual python2.6 files scattered around my drive, including, but not limited to, dist-packages in /usr/lib/python2.6 and various /usr/share stuff. Is there any way to test if these files are still being used? I'm tempted to sudo rm -rf the lot of them, but I'm scared it'll break stuff. Also, does anyone have any idea where these files could have come from? I believe I had python2.6 installed once upon a time, but I made sure to --purge them, so there shouldn't be any trace of them left, right? EDIT: after using a quick script to check all of the files, it appears most of them belong to important packages, so I won't try weeding out the few which I know are probably useless. Although I am curious why so many packages have python2.6 files when I don't even have it installed. These files are not associated with any packages and I'm not sure if they are safe to remove: /usr/bin/ipython2.6 /usr/lib/python2.6/dist-packages/distribute-0.6.15.egg-info /usr/lib/python2.6/dist-packages/easy_install.py /usr/lib/python2.6/dist-packages/IPython /usr/lib/python2.6/dist-packages/ipython-0.10.1.egg-info /usr/lib/python2.6/dist-packages/setuptools /usr/lib/python2.6/dist-packages/setuptools.egg-info /usr/lib/python2.6/dist-packages/setuptools.pth /usr/lib/python2.6/dist-packages/site.py /usr/lib/python2.6/dist-packages/wx.pth /usr/local/lib/python2.6 /usr/local/lib/python2.6/dist-packages /usr/local/lib/python2.6/site-packages /usr/share/man/man1/ipython2.6.1.gz
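
    One way to check ownership before deleting anything, as a small sketch (dpkg -S exits non-zero for paths that no installed package claims):

      # list the leftovers that no installed package owns; only those are
      # candidates for manual removal
      for f in /usr/bin/ipython2.6 /usr/lib/python2.6/dist-packages/* \
               /usr/local/lib/python2.6 /usr/share/man/man1/ipython2.6.1.gz; do
          dpkg -S "$f" >/dev/null 2>&1 || echo "unowned: $f"
      done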

    Read the article

  • How to know who accessed a file or if a file has 'access' monitor in linux

    - by J L
    I'm a noob and have some questions about viewing who accessed a file. I found there are ways to see if a file was accessed (not modified/changed) through the audit subsystem and inotify. However, from what I have read online, according to http://www.cyberciti.biz/tips/linux-audit-files-to-see-who-made-changes-to-a-file.html, to watch/monitor a file I have to set a watch with a command like: # auditctl -w /etc/passwd -p war -k password-file So if I create a new file or directory, do I have to use an audit/inotify command to set a watch first in order to see who accessed the new file? Also, is there a way to know whether a directory is being watched through the audit subsystem or inotify? How/where can I check the log for a file? Edit: from further googling, I found this page: http://www.kernel.org/doc/man-pages/online/pages/man7/inotify.7.html which says the inotify API provides no information about the user or process that triggered the inotify event. So I guess this means I can't figure out which user accessed a file with inotify? Can only the audit subsystem be used to figure out who accessed a file?
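
    A small sketch of the auditd side (the path and key are placeholders; rules added this way do not survive a reboot unless also written into /etc/audit/audit.rules or audit.rules.d):

      # watch a file for reads and tag matching records with a key
      auditctl -w /path/to/file -p r -k file-access

      # list the watches/rules currently loaded, i.e. what is being watched
      auditctl -l

      # later: pull the matching records, which include uid, pid and exe
      ausearch -k file-access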

    Read the article

  • TCP/UDP hole punching from and to the same NAT network

    - by Luc
    I was wondering if tcp/udp hole punching would still work when you are in the same network (behind a NAT), and what the packet's path would be. What happens when using hole punching on the same network, is that it will send a packet out with the same destination and source address. Only the source and destination port would differ. I imagine a router with NAT loopback enabled will handle this as it should, but how about other routers? Would they drop the packet, or would a router (the first?) from the ISP bounce the packet back after which it gets handled okay? I'm wondering because I was thinking about using this technique to circumvent a block between peers in a network (like a school network where clients can only access the internet, but any contact with each other is blocked). The only other option is to use a man in the middle as proxy (tunnel?). The disadvantage of this is that you have to have a server with significantly more bandwidth than one that would only do hole punching. Also the latency would increase significantly.

    Read the article

  • How to add a user to Wheel group?

    - by Natasha Thapa
    I am trying to add a user to the wheel group on an Ubuntu server. sudo usermod -aG wheel john I get: usermod: group 'wheel' does not exist In my /etc/sudoers I have this: > cat /etc/sudoers > # sudoers file. > # > # This file MUST be edited with the 'visudo' command as root. > # > # See the sudoers man page for the details on how to write a sudoers file. > # > > # Host alias specification > > # User alias specification > > # Cmnd alias specification > > # Defaults specification > > # User privilege specification root ALL=(ALL) ALL %root ALL=(ALL) NOPASSWD: ALL > > %wheel ALL=(ALL) NOPASSWD: ALL Do I have to do a groupadd of wheel first?
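
    For reference, a short sketch of the two usual ways out (Ubuntu ships no wheel group by default; 'john' is the user from the question):

      # either create the group the sudoers file already references...
      sudo groupadd wheel
      sudo usermod -aG wheel john

      # ...or use the group Ubuntu already grants sudo rights to
      sudo usermod -aG sudo john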

    Read the article

  • What I should know about memory management?

    - by bua
    first of all: I don't use stackadmin or similar so please don't vote for moving there, I'm reading man top and paper "what every programmer should know about memory ..." I need really simple explanation like for retard ;) Having following top dump: top - 11:21:19 up 37 days, 21:16, 4 users, load average: 0.41, 0.75, 1.09 Tasks: 313 total, 5 running, 308 sleeping, 0 stopped, 0 zombie Cpu(s): 0.4%us, 0.6%sy, 0.9%ni, 96.2%id, 0.1%wa, 0.0%hi, 1.9%si, 0.0%st Mem: 132103848k total, 131916948k used, 186900k free, 54000k buffers Swap: 73400944k total, 73070884k used, 330060k free, 13931192k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 3305 tudb 25 10 144m 52m 940 R 6.0 0.0 1306:09 app 3011 tudb 15 0 71528 19m 604 S 3.3 0.0 171:57.83 app 3373 tudb 25 10 209m 93m 940 S 3.0 0.1 1074:53 app 3338 tudb 25 10 144m 47m 940 R 2.7 0.0 780:48.48 app 4227 tudb 25 10 208m 99m 904 S 1.3 0.1 198:56.01 app 8506 tudb 25 10 80.7g 49g 932 S 2.0 39.6 458:31.22 app I'm wondering what is: RES (my expl. physical memory consumption ? see 49GB) VIRT (memory mapped disk to cache? see 80GB) SHR (shared pages?) Swap: (is this cached label - for memory mapped disk into swap cache?) Should sum of RES give MEM: X used? or maybe sum of VIRT?
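
    For cross-checking the same process outside top, a small sketch (ps reports VSZ and RSS in kB, which correspond to top's VIRT and RES; the PID is taken from the dump above):

      # virtual size vs. resident (physical) size for the big 'app' process
      ps -o pid,user,vsz,rss,cmd -p 8506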

    Read the article

  • if I define `my_domain`, postfix does not expand mail aliases

    - by Norky
    I have postfix v2.6.6 running on CentOS 6.3, hostname priest.ocsl.local (private, internal domain) with a number of aliases supportpeople: [email protected], [email protected], [email protected] requests: "|/opt/rt4/bin/rt-mailgate --queue 'general' --action correspond --url http://localhost/", supportpeople help: "|/opt/rt4/bin/rt-mailgate --queue 'help' --action correspond --url http://localhost/", supportpeople If I leave postfix with its default configuration, then the aliases are resolved correctly/as I expect, so that incoming mail to, say, [email protected] will be piped through the rt-mailgate mailgate command and also be delivered (via the mail server for ocsl.co.uk (a publicly resolvable domain)) to [email protected], user2, etc. The problem comes when I define mydomain = ocsl.co.uk in /etc/postfix/main.cf (with the intention that outgoing mail come from, for example, [email protected]). When I do this, postfix continues to run the piped command correctly, however it no longer expands the nested aliases as I expect: instead of trying to deliver to [email protected], user2 etc, it tries to send to [email protected], which does not exist on the upstream mail server and generates NDRs. postconf -n for the non-working configuration follows (the working configuration differs only by the "mydomain" line. alias_database = hash:/etc/aliases alias_maps = hash:/etc/aliases command_directory = /usr/sbin config_directory = /etc/postfix daemon_directory = /usr/libexec/postfix data_directory = /var/lib/postfix debug_peer_level = 2 html_directory = no inet_interfaces = all inet_protocols = all mail_owner = postfix mailq_path = /usr/bin/mailq.postfix manpage_directory = /usr/share/man mydestination = $myhostname, localhost.$mydomain, localhost mydomain = ocsl.co.uk newaliases_path = /usr/bin/newaliases.postfix queue_directory = /var/spool/postfix readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES sample_directory = /usr/share/doc/postfix-2.6.6/samples sendmail_path = /usr/sbin/sendmail.postfix setgid_group = postdrop unknown_local_recipient_reject_code = 550 We did have things working as we expected/wanted previously on an older system running Sendmail.
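
    A hedged thought in code form, not a confirmed fix: if the only goal of the change is that outgoing mail is stamped user@ocsl.co.uk, the knob for that is usually myorigin rather than mydomain; verify the alias expansion still behaves before keeping it.

      # rewrite the origin of locally submitted mail without setting mydomain
      postconf -e 'myorigin = ocsl.co.uk'
      postfix reload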

    Read the article

  • Giving the root user priority to maintain Debian (while server collapsing under heavy load)

    - by Saix
    Is there any way to set up Debian to prioritize any or all of root's activity ahead of everything else? For instance, several times per year something goes wrong (usually human fault, by overstressing apache/mysql) and the system becomes unresponsive under a heavy load like 200 (8-core CPU). I know there are limits to make PHP scripts run and then be killed, but that's not the way, because this limit has to be at least 45 minutes long. The problem is that, until I'm able to log in via SSH and restart apache/mysql under this server stress, it nearly hits those 45 minutes anyway. Also, a hardware restart usually causes fsck to run at boot time on all hard drives, since it's usually been a long time since the box was restarted. I was told it's really not a good idea to disable fsck, but then again, it takes more than an hour to complete. What is the fastest way to restart apache/mysql? Is there any way to give SSH users or the root user higher priority, so that logging in and completing these restart (or rather stop) commands wouldn't take so long? One idea comes to mind: use nice for apache/mysql, but no way, I can't risk limiting those two vital apps 24/7. Or could I? I'm a little bit scared that some other system process would then slow the pages down too much: any backup process, swap (if any), etc. There is a pretty heavy PHP framework with 20k visits a day, so it needs every hardware/software resource available. I can't throttle it the whole time, just at certain points when the system gets unresponsive, so I can maintain it.
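
    One narrow, hedged idea, useful only if the contention is CPU rather than swapping or disk I/O: keep sshd ahead of the pack so a login still gets scheduled under load.

      # raise sshd's scheduling priority; -10 is an arbitrary example value
      sudo renice -n -10 -p $(pgrep -d ' ' sshd)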

    Read the article

  • What is the --daemon option?

    - by Pascal Dimassimo
    I was installing Solr with Jetty using these instructions. Basically, those instructions have you download the Jetty startup script and copy it to /etc/init.d/jetty. But it was not working. Each time I started Jetty, I got a "FAILED" message and nothing to explain why it was happening. I decided to open up the /etc/init.d/jetty script to understand what was going on. I saw that this script was using start-stop-daemon to launch Jetty. After some time debugging, I discovered that removing the --daemon option at the end of the start-stop-daemon call fixed my problem. I did some research and discovered that this guy had the same problem and resolved it like I did: by removing the --daemon option. What is weird is that the switch does not seem to be specific to start-stop-daemon, because it is not documented in the man page. Also, I've seen it used for other commands. So what is that --daemon option doing? And why did removing it resolve my problem? Note that I am working on Ubuntu 10.04.2 LTS.

    Read the article

  • less maximum buffer size?

    - by Tyzoid
    I was messing around with my system and found a novel way to use up memory, but it seems that the less command only holds a limited amount of data before stopping/killing the command. To test, run (careful! uses lots of system memory very fast!) $ cat /dev/zero | less From my testing, it looks like the command is killed after less reaches 2.5 gigabytes of memory, but I can't find anything in the man page that suggests that it would limit it in such a way. In addition, I couldn't find any documentation via the google on the subject. Any light to this quite surprising discovery would be great! System Information: Quad core intel i7, 8gb ram. $ uname -a Linux Tyler-Work 3.13.0-32-generic #57-Ubuntu SMP Tue Jul 15 03:51:08 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux $ less --version less 458 (GNU regular expressions) Copyright (C) 1984-2012 Mark Nudelman less comes with NO WARRANTY, to the extent permitted by law. For information about the terms of redistribution, see the file named README in the less distribution. Homepage: http://www.greenwoodsoftware.com/less $ lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 14.04 LTS Release: 14.04 Codename: trusty

    Read the article

  • How to search file for matching whole lines?

    - by WilliamKF
    I have a command which sends a series of numbers to stdout, each on a new line. I need to determine whether a particular number exists in the list. The match needs to be exact, not a subset. For example, a simple way to approach this that does not work would be: /run/command/outputing/numbers | grep -c <numberToSearch> This gives a false positive on the following list when searching for '456': 1234567 98765 23 1771 If the count is non-zero, a match was found; if it is zero, the number is not in the list. The issue is that the numberToSearch could match a subsequence of digits on a line, whereas I only want hits on the whole line. I looked at the man page for grep and did not see any way to match whole lines only. Is there a way to do this, or would I be better off using awk or sed or some other tool instead? I need a binary answer as to whether the number being searched for is present or not.
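
    For reference, a small sketch using the whole-line flags grep already has (-x matches complete lines only, -F keeps the number from being read as a regex, -q turns the result into an exit status):

      if /run/command/outputing/numbers | grep -qxF '456'; then
          echo "present"
      else
          echo "absent"
      fi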

    Read the article

  • locale: What is the LANGUAGE variable used for? (and when?)

    - by seya
    I am trying to understand the locales used in Linux. On my Ubuntu 11.10 system locale puts out the following: LANG=en_DK.UTF-8 LANGUAGE=en_GB:en LC_CTYPE=en_GB.UTF-8 LC_NUMERIC="en_DK.UTF-8" LC_TIME="en_DK.UTF-8" LC_COLLATE=en_GB.UTF-8 LC_MONETARY="en_DK.UTF-8" LC_MESSAGES=en_GB.UTF-8 LC_PAPER="en_DK.UTF-8" LC_NAME="en_DK.UTF-8" LC_ADDRESS="en_DK.UTF-8" LC_TELEPHONE="en_DK.UTF-8" LC_MEASUREMENT="en_DK.UTF-8" LC_IDENTIFICATION="en_DK.UTF-8" LC_ALL= (en_DK is for the international date format, continental European number formatting (1.234,56), etc.) I think I understand what the LC_* family does, that LANG is the fallback if one of them is not set, and that LC_ALL sets all of the LC_* variables to its value. What I don't know yet is what LANGUAGE is used for. The notation en_GB:en reminds me of the Accept-Language HTTP header. With the settings above it would mean British English is used if a translation for it exists; otherwise any existing English translation (en_US, en_AU, ..., whatever) would be used. Am I right so far? Also, what programs actually obey the LANGUAGE setting? How is it different from LC_MESSAGES? Unfortunately, man locale only documents the LC_* family, and searching the web for 'linux locale LANGUAGE' or similar is a moot point. (Of course "language" is a word often used when talking about locales, and it may also be shown in the output of locale without being discussed.) Can anybody help me out here?

    Read the article

  • Upstart multiple instances of service not working.

    - by Dax
    I started playing with MongoDB on Lucid. Now I would like to run a DB and a Config server on the same box. They both use the same binary to launch, but with different config files and running on different ports. All directories for log and lib are split, so one set goes to mongodb and the other to mongoconf. Each process can be started without any problems on its own. start mongodb stop mongodb start mongoconf stop mongoconf But if I try to start both, the second one just starts and exits. Using 'initctl log-priority debug' I got the following in the logs. Jan 6 12:44:12 mongo4 init: event_finished: Finished started event Jan 6 12:44:12 mongo4 init: job_process_handler: Ignored event 1 (1) for process 5690 Jan 6 12:44:12 mongo4 init: mongoconf (mongoconf) main process (5690) terminated with status 1 Jan 6 12:44:12 mongo4 init: mongoconf (mongoconf) goal changed from start to stop Jan 6 12:44:12 mongo4 init: mongoconf (mongoconf) state changed from running to stopping man 5 init shows that you can use instance names to differentiate the two. I tried using 'instance mongoconf' in the one upstart script and 'instance mongodb' in the other one, and it still fails. I can manually start the other process, so there are definitely no conflicts on port numbers or directories. Any ideas on what to try or how to get output on why it is 'terminated with status 1'? Thanx

    Read the article

  • package manager cannot create users? installation script fails?

    - by RapidWebs
    It seems that when trying to install any package which requires the creation of a system user and group, the installer fails with the error: install: invalid user 'username' When it happened the first time, I assumed it was related to the package, or its installation script. But now I am attempting a different package, and alas, the same issue arises! Note: /etc/passwd and /etc/group are not marked immutable (no chattr +i). Here is example output from an attempted installation (of varnish): Selecting previously unselected package varnish. Unpacking varnish (from .../varnish_3.0.2-1ubuntu0.1_amd64.deb) ... Processing triggers for man-db ... Setting up libvarnishapi1 (3.0.2-1ubuntu0.1) ... Setting up varnish (3.0.2-1ubuntu0.1) ... install: invalid user `varnish' dpkg: error processing varnish (--configure): subprocess installed post-installation script returned error exit status 1 Processing triggers for libc-bin ... ldconfig deferred processing now taking place Errors were encountered while processing: varnish Note: running Ubuntu 12.04. For now, I have worked around the issue by manually creating the users and running apt-get install -f, but this does not resolve the actual issue. Any ideas?
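
    The workaround mentioned at the end, spelled out as a sketch (the home directory and shell are guesses for illustration, not necessarily what the varnish package itself would choose):

      # create the system account the postinst script expects, then let dpkg
      # finish the half-configured package
      sudo groupadd --system varnish
      sudo useradd --system -g varnish -d /var/lib/varnish -s /bin/false varnish
      sudo apt-get install -f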

    Read the article

  • rsync via cron. How do I enable logging?

    - by tetranz
    Hi all, I'm backing up a remote server to another computer using rsync. In cron.daily I have a file with this: rsync -avz -e ssh [email protected]:/ /mybackup/ It uses a public/private key pair to log in. This seems to work well most of the time; however, I've (foolishly) only ever really checked it by looking at the dates on some important files (MySQL dumps) that I know change every day. Obviously, an error could occur after that file. Sometimes it fails. When I run it manually, something like "client reset" sometimes happens. What is the best way to log it so that I can check with certainty whether it completed or not? The cron log doesn't indicate any errors. I haven't tried it, but the rsync man page on the oldish version of CentOS on the backup machine doesn't show the --log-file option. I guess I could redirect stdout to a file, but I don't really want to know about every file. I just want to know if it all worked or not. Thanks
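
    A hedged sketch of a cron.daily wrapper that records a one-line verdict instead of every file (dropping -v is what keeps the log small; the log path is arbitrary, and the remote address placeholder is kept as in the question):

      #!/bin/sh
      LOG=/var/log/rsync-backup.log
      if rsync -az -e ssh [email protected]:/ /mybackup/ >>"$LOG" 2>&1; then
          echo "$(date '+%F %T') backup OK" >>"$LOG"
      else
          echo "$(date '+%F %T') backup FAILED, rsync exit status $?" >>"$LOG"
      fi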

    Read the article

  • How to route 1 VPN through another on OS X?

    - by Eeep
    Hi everyone. Thanks a lot for your help! I've been tinkering with this for a while and have read many posts along with Googling for help, but my knowledge of TCP/IP is really weak... I have access to two different VPN servers. 1 Is set up in Network Settings and connects through PPP 2 Is set up through Tunnelblick and uses OpenVPN. I can connect to either tunnel #1 or tunnel #2, but not both one after the other... One of my major to-do's this year is study TCP/IP, but for now, would you be super-helpful and help me fix this really clearly? I have no experience with routing, DNS, gateways or any of that. If you tell me, "Set your gateway to XXX.XXX.XX.XXX" can you specify how I get that IP, off of what interface so I don't get messed up? I can figure out the terminal just fine if you let me know what to type, and I WILL read the man pages on everything you help me with. Thanks a million!

    Read the article

  • Apache mod_disk_cache on a seperate drive

    - by pavs
    how can I set Apache mod_disk_cache on a separate drive from where the OS/Apache is installed? I have set up this on my apache2.conf: <IfModule mod_cache_disk.c> # cache cleaning is done by htcacheclean, which can be configured in # /etc/default/apache2 # # For further information, see the comments in that file, # /usr/share/doc/apache2/README.Debian, and the htcacheclean(8) # man page. # This path must be the same as the one in /etc/default/apache2 CacheRoot /media/cacheHD # This will also cache local documents. It usually makes more sense to # put this into the configuration for just one virtual host. CacheEnable disk / # The result of CacheDirLevels * CacheDirLength must not be higher than # 20. Moreover, pay attention on file system limits. Some file systems # do not support more than a certain number of inodes and # subdirectories (e.g. 32000 for ext3) CacheDirLevels 2 CacheDirLength 1 </IfModule> and it doesn't seem to be caching anything at all. The drive itself is a freshly installed ssd drive formatted with ext4.
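
    A hedged checklist in shell form, assuming Debian/Ubuntu conventions (the www-data user, and an HTCACHECLEAN_PATH variable in /etc/default/apache2, which the comment in the config above also points at; the exact variable name can differ between releases):

      # enable the module pair and give Apache write access to the new cache dir
      sudo a2enmod cache cache_disk
      sudo chown -R www-data:www-data /media/cacheHD

      # keep htcacheclean pointed at the same directory, then restart
      sudo sed -i 's|^HTCACHECLEAN_PATH=.*|HTCACHECLEAN_PATH=/media/cacheHD|' /etc/default/apache2
      sudo service apache2 restart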

    Read the article

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspxMany people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with easy-to-use API through .NET SDK and HTTP REST. For example, we can store JavaScript files, images, documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure. Or we can store our VHD files in blob and mount it as a hard drive in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blob and block blob. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunk read and write, which has more common usage. Since we can upload block blob in blocks through BlockBlob.PutBlock, and them commit them as a whole blob with invoking the BlockBlob.PutBlockList, it is very powerful to upload large files, as we can upload blocks in parallel, and provide pause-resume feature. There are many documents, articles and blog posts described on how to upload a block blob. Most of them are focus on the server side, which means when you had received a big file, stream or binaries, how to upload them into blob storage in blocks through .NET SDK.  But the problem is, how can we upload these large files from client side, for example, a browser. This questioned to me when I was working with a Chinese customer to help them build a network disk production on top of azure. The end users upload their files from the web portal, and then the files will be stored in blob storage from the Web Role. My goal is to find the best way to transform the file from client (end user’s machine) to the server (Web Role) through browser. In this post I will demonstrate and describe what I had done, to upload large file in chunks with high speed, and save them as blocks into Windows Azure Blob Storage.   Traditional Upload, Works with Limitation The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button. 1: @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" })) 2: { 3: <input type="file" name="file" /> 4: <input type="submit" value="upload" /> 5: } And then in the backend controller, we retrieve the whole content of this file and upload it in to the blob storage through .NET SDK. We can split the file in blocks and upload them in parallel and commit. The code had been well blogged in the community. 
1: [HttpPost] 2: public ActionResult About(HttpPostedFileBase file) 3: { 4: var container = _client.GetContainerReference("test"); 5: container.CreateIfNotExists(); 6: var blob = container.GetBlockBlobReference(file.FileName); 7: var blockDataList = new Dictionary<string, byte[]>(); 8: using (var stream = file.InputStream) 9: { 10: var blockSizeInKB = 1024; 11: var offset = 0; 12: var index = 0; 13: while (offset < stream.Length) 14: { 15: var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset); 16: var blockData = new byte[readLength]; 17: offset += stream.Read(blockData, 0, readLength); 18: blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData); 19:  20: index++; 21: } 22: } 23:  24: Parallel.ForEach(blockDataList, (bi) => 25: { 26: blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null); 27: }); 28: blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray()); 29:  30: return RedirectToAction("About"); 31: } This works perfect if we selected an image, a music or a small video to upload. But if I selected a large file, let’s say a 6GB HD-movie, after upload for about few minutes the page will be shown as below and the upload will be terminated. In ASP.NET there is a limitation of request length and the maximized request length is defined in the web.config file. It’s a number which less than about 4GB. So if we want to upload a really big file, we cannot simply implement in this way. Also, in Windows Azure, a cloud service network load balancer will terminate the connection if exceed the timeout period. From my test the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitation mentioned above, the simple HTML file upload cannot provide rich upload experience such as chunk upload, pause and pause-resume. So we need to find a better way to upload large file from the client to the server.   Upload in Chunks through HTML5 and JavaScript In order to break those limitation mentioned above we will try to upload the large file in chunks. This takes some benefit to us such as - No request size limitation: Since we upload in chunks, we can define the request size for each chunks regardless how big the entire file is. - No timeout problem: The size of chunks are controlled by us, which means we should be able to make sure request for each chunk upload will not exceed the timeout period of both ASP.NET and Windows Azure load balancer. It was a big challenge to upload big file in chunks until we have HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.   In HTML5, the File interface had been improved with a new method called “slice”. It can be used to read part of the file by specifying the start byte index and the end byte index. For example if the entire file was 1024 bytes, file.slice(512, 768) will read the part of this file from the 512nd byte to 768th byte, and return a new object of interface called "Blob”, which you can treat as an array of bytes. In fact,  a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob is very useful to implement the chunk upload. 
We will use File interface to represent the file the user selected from the browser and then use File.slice to read the file in chunks in the size we wanted. For example, if we wanted to upload a 10MB file with 512KB chunks, then we can read it in 512KB blobs by using File.slice in a loop.   Assuming we have a web page as below. User can select a file, an input box to specify the block size in KB and a button to start upload. 1: <div> 2: <input type="file" id="upload_files" name="files[]" /><br /> 3: Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br /> 4: <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" /> 5: </div> Then we can have the JavaScript function to upload the file in chunks when user clicked the button. 1: <script type="text/javascript"> 1: 2: $(function () { 3: $("#upload_button_blob").click(function () { 4: }); 5: });</script> Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke the File, Blob and FormData from the “window” object. If any of them is “undefined” the condition result will be “false” which means your browser doesn’t support these premium feature and it’s time for you to get your browser updated. FormData is another new feature we are going to use in the future. It could generate a temporary form for us. We will use this interface to create a form with chunk and associated metadata when invoked the service through ajax. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: if (window.File && window.Blob && window.FormData) { 4: alert("Your brwoser is awesome, let's rock!"); 5: } 6: else { 7: alert("Oh man plz update to a modern browser before try is cool stuff out."); 8: return; 9: } 10: }); Each browser supports these interfaces by their own implementation and currently the Blob, File and File.slice are supported by Chrome 21, FireFox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we worked on the files the user selected one by one since in HTML5, user can select multiple files in one file input box. 1: var files = $("#upload_files")[0].files; 2: for (var i = 0; i < files.length; i++) { 3: var file = files[i]; 4: var fileSize = file.size; 5: var fileName = file.name; 6: } Next, we calculated the start index and end index for each chunks based on the size the user specified from the browser. We put them into an array with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks since we need to specify the target blob name and the block index. At the same time we will store the list of all indexes into another variant which will be used to commit blocks into blob in Azure Storage once all chunks had been uploaded successfully. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 
4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10:  11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: var blockSizeInKB = $("#block_size").val(); 14: var blockSize = blockSizeInKB * 1024; 15: var blocks = []; 16: var offset = 0; 17: var index = 0; 18: var list = ""; 19: while (offset < fileSize) { 20: var start = offset; 21: var end = Math.min(offset + blockSize, fileSize); 22:  23: blocks.push({ 24: name: fileName, 25: index: index, 26: start: start, 27: end: end 28: }); 29: list += index + ","; 30:  31: offset = end; 32: index++; 33: } 34: } 35: }); Now we have all chunks’ information ready. The next step should be upload them one by one to the server side, and at the server side when received a chunk it will upload as a block into Blob Storage, and finally commit them with the index list through BlockBlobClient.PutBlockList. But since all these invokes are ajax calling, which means not synchronized call. So we need to introduce a new JavaScript library to help us coordinate the asynchronize operation, which named “async.js”. You can download this JavaScript library here, and you can find the document here. I will not explain this library too much in this post. We will put all procedures we want to execute as a function array, and pass into the proper function defined in async.js to let it help us to control the execution sequence, in series or in parallel. Hence we will define an array and put the function for chunk upload into this array. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4:  5: // start to upload each files in chunks 6: var files = $("#upload_files")[0].files; 7: for (var i = 0; i < files.length; i++) { 8: var file = files[i]; 9: var fileSize = file.size; 10: var fileName = file.name; 11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: ... ... 14:  15: // define the function array and push all chunk upload operation into this array 16: blocks.forEach(function (block) { 17: putBlocks.push(function (callback) { 18: }); 19: }); 20: } 21: }); 22: }); As you can see, I used File.slice method to read each chunks based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then post this form to the backend server through jQuery.ajax. This is the key part of our solution. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 
13: // define the function array and push all chunk upload operation into this array 14: blocks.forEach(function (block) { 15: putBlocks.push(function (callback) { 16: // load blob based on the start and end index for each chunks 17: var blob = file.slice(block.start, block.end); 18: // put the file name, index and blob into a temporary from 19: var fd = new FormData(); 20: fd.append("name", block.name); 21: fd.append("index", block.index); 22: fd.append("file", blob); 23: // post the form to backend service (asp.net mvc controller action) 24: $.ajax({ 25: url: "/Home/UploadInFormData", 26: data: fd, 27: processData: false, 28: contentType: "multipart/form-data", 29: type: "POST", 30: success: function (result) { 31: if (!result.success) { 32: alert(result.error); 33: } 34: callback(null, block.index); 35: } 36: }); 37: }); 38: }); 39: } 40: }); Then we will invoke these functions one by one by using the async.js. And once all functions had been executed successfully I invoked another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.series(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); That’s all in the client side. The outline of our logic would be - Calculate the start and end byte index for each chunks based on the block size. - Defined the functions of reading the chunk form file and upload the content to the backend service through ajax. - Execute the functions defined in previous step with “async.js”. - Commit the chunks by invoking the backend service in Windows Azure Storage finally.   Save Chunks as Blocks into Blob Storage In above we finished the client size JavaScript code. It uploaded the file in chunks to the backend service which we are going to implement in this step. We will use ASP.NET MVC as our backend service, and it will receive the chunks, upload into Windows Azure Bob Storage in blocks, then finally commit as one blob. As in the client side we uploaded chunks by invoking the ajax call to the URL "/Home/UploadInFormData", I created a new action under the Index controller and it only accepts HTTP POST request. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: } 8: catch (Exception e) 9: { 10: error = e.ToString(); 11: } 12:  13: return new JsonResult() 14: { 15: Data = new 16: { 17: success = string.IsNullOrWhiteSpace(error), 18: error = error 19: } 20: }; 21: } Then I retrieved the file name, index and the chunk content from the Request.Form object, which was passed from our client side. 
And then, used the Windows Azure SDK to create a blob container (in this case we will use the container named “test”.) and create a blob reference with the blob name (same as the file name). Then uploaded the chunk as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with so that finally we can put all blocks as one blob by specifying their block ID list. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var index = int.Parse(Request.Form["index"]); 9: var file = Request.Files[0]; 10: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 11:  12: var container = _client.GetContainerReference("test"); 13: container.CreateIfNotExists(); 14: var blob = container.GetBlockBlobReference(name); 15: blob.PutBlock(id, file.InputStream, null); 16: } 17: catch (Exception e) 18: { 19: error = e.ToString(); 20: } 21:  22: return new JsonResult() 23: { 24: Data = new 25: { 26: success = string.IsNullOrWhiteSpace(error), 27: error = error 28: } 29: }; 30: } Next, I created another action to commit the blocks into blob once all chunks had been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunks ID list, which is the block ID list from the Request.Form in a string format, split them as a list, then invoked the BlockBlob.PutBlockList method. After that our blob will be shown in the container and ready to be download. 1: [HttpPost] 2: public JsonResult Commit() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var list = Request.Form["list"]; 9: var ids = list 10: .Split(',') 11: .Where(id => !string.IsNullOrWhiteSpace(id)) 12: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 13: .ToArray(); 14:  15: var container = _client.GetContainerReference("test"); 16: container.CreateIfNotExists(); 17: var blob = container.GetBlockBlobReference(name); 18: blob.PutBlockList(ids); 19: } 20: catch (Exception e) 21: { 22: error = e.ToString(); 23: } 24:  25: return new JsonResult() 26: { 27: Data = new 28: { 29: success = string.IsNullOrWhiteSpace(error), 30: error = error 31: } 32: }; 33: } Now we finished all code we need. The whole process of uploading would be like this below. Below is the full client side JavaScript code. 
1: <script type="text/javascript" src="~/Scripts/async.js"></script> 2: <script type="text/javascript"> 3: $(function () { 4: $("#upload_button_blob").click(function () { 5: // assert the browser support html5 6: if (window.File && window.Blob && window.FormData) { 7: alert("Your brwoser is awesome, let's rock!"); 8: } 9: else { 10: alert("Oh man plz update to a modern browser before try is cool stuff out."); 11: return; 12: } 13:  14: // start to upload each files in chunks 15: var files = $("#upload_files")[0].files; 16: for (var i = 0; i < files.length; i++) { 17: var file = files[i]; 18: var fileSize = file.size; 19: var fileName = file.name; 20:  21: // calculate the start and end byte index for each blocks(chunks) 22: // with the index, file name and index list for future using 23: var blockSizeInKB = $("#block_size").val(); 24: var blockSize = blockSizeInKB * 1024; 25: var blocks = []; 26: var offset = 0; 27: var index = 0; 28: var list = ""; 29: while (offset < fileSize) { 30: var start = offset; 31: var end = Math.min(offset + blockSize, fileSize); 32:  33: blocks.push({ 34: name: fileName, 35: index: index, 36: start: start, 37: end: end 38: }); 39: list += index + ","; 40:  41: offset = end; 42: index++; 43: } 44:  45: // define the function array and push all chunk upload operation into this array 46: var putBlocks = []; 47: blocks.forEach(function (block) { 48: putBlocks.push(function (callback) { 49: // load blob based on the start and end index for each chunks 50: var blob = file.slice(block.start, block.end); 51: // put the file name, index and blob into a temporary from 52: var fd = new FormData(); 53: fd.append("name", block.name); 54: fd.append("index", block.index); 55: fd.append("file", blob); 56: // post the form to backend service (asp.net mvc controller action) 57: $.ajax({ 58: url: "/Home/UploadInFormData", 59: data: fd, 60: processData: false, 61: contentType: "multipart/form-data", 62: type: "POST", 63: success: function (result) { 64: if (!result.success) { 65: alert(result.error); 66: } 67: callback(null, block.index); 68: } 69: }); 70: }); 71: }); 72:  73: // invoke the functions one by one 74: // then invoke the commit ajax call to put blocks into blob in azure storage 75: async.series(putBlocks, function (error, result) { 76: var data = { 77: name: fileName, 78: list: list 79: }; 80: $.post("/Home/Commit", data, function (result) { 81: if (!result.success) { 82: alert(result.error); 83: } 84: else { 85: alert("done!"); 86: } 87: }); 88: }); 89: } 90: }); 91: }); 92: </script> And below is the full ASP.NET MVC controller code. 
1: public class HomeController : Controller 2: { 3: private CloudStorageAccount _account; 4: private CloudBlobClient _client; 5:  6: public HomeController() 7: : base() 8: { 9: _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString")); 10: _client = _account.CreateCloudBlobClient(); 11: } 12:  13: public ActionResult Index() 14: { 15: ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application."; 16:  17: return View(); 18: } 19:  20: [HttpPost] 21: public JsonResult UploadInFormData() 22: { 23: var error = string.Empty; 24: try 25: { 26: var name = Request.Form["name"]; 27: var index = int.Parse(Request.Form["index"]); 28: var file = Request.Files[0]; 29: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 30:  31: var container = _client.GetContainerReference("test"); 32: container.CreateIfNotExists(); 33: var blob = container.GetBlockBlobReference(name); 34: blob.PutBlock(id, file.InputStream, null); 35: } 36: catch (Exception e) 37: { 38: error = e.ToString(); 39: } 40:  41: return new JsonResult() 42: { 43: Data = new 44: { 45: success = string.IsNullOrWhiteSpace(error), 46: error = error 47: } 48: }; 49: } 50:  51: [HttpPost] 52: public JsonResult Commit() 53: { 54: var error = string.Empty; 55: try 56: { 57: var name = Request.Form["name"]; 58: var list = Request.Form["list"]; 59: var ids = list 60: .Split(',') 61: .Where(id => !string.IsNullOrWhiteSpace(id)) 62: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 63: .ToArray(); 64:  65: var container = _client.GetContainerReference("test"); 66: container.CreateIfNotExists(); 67: var blob = container.GetBlockBlobReference(name); 68: blob.PutBlockList(ids); 69: } 70: catch (Exception e) 71: { 72: error = e.ToString(); 73: } 74:  75: return new JsonResult() 76: { 77: Data = new 78: { 79: success = string.IsNullOrWhiteSpace(error), 80: error = error 81: } 82: }; 83: } 84: } And if we selected a file from the browser we will see our application will upload chunks in the size we specified to the server through ajax call in background, and then commit all chunks in one blob. Then we can find the blob in our Windows Azure Blob Storage.   Optimized by Parallel Upload In previous example we just uploaded our file in chunks. This solved the problem that ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout. But it might introduce the performance problem since we uploaded chunks in sequence. In order to improve the upload performance we could modify our client side code a bit to make the upload operation invoked in parallel. The good news is that, “async.js” library provides the parallel execution function. If you remembered the code we invoke the service to upload chunks, it utilized “async.series” which means all functions will be executed in sequence. Now we will change this code to “async.parallel”. This will invoke all functions in parallel. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 
15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallel(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); In this way all chunks will be uploaded to the server side at the same time to maximize the bandwidth usage. This should work if the file was not very large and the chunk size was not very small. But for large file this might introduce another problem that too many ajax calls are sent to the server at the same time. So the best solution should be, upload the chunks in parallel with maximum concurrency limitation. The code below specified the concurrency limitation to 4, which means at the most only 4 ajax calls could be invoked at the same time. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallelLimit(putBlocks, 4, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: });   Summary In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leverage three new feature introduced in HTML 5 which are - File.slice: Read part of the file by specifying the start and end byte index. - Blob: File-like interface which contains the part of the file content. - FormData: Temporary form element that we can pass the chunk alone with some metadata to the backend service. Then we discussed the performance consideration of chunk uploading. Sequence upload cannot provide maximized upload speed, but the unlimited parallel upload might crash the browser and server if too many chunks. So we finally came up with the solution to upload chunks in parallel with the concurrency limitation. We also demonstrated how to utilize “async.js” JavaScript library to help us control the asynchronize call and the parallel limitation.   Regarding the chunk size and the parallel limitation value there is no “best” value. You need to test vary composition and find out the best one for your particular scenario. It depends on the local bandwidth, client machine cores and the server side (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below is one of my performance test result. The client machine was Windows 8 IE 10 with 4 cores. I was using Microsoft Cooperation Network. The web site was hosted on Windows Azure China North data center (in Beijing) with one small web role (1.7GB 1 core CPU, 1.75GB memory with 100Mbps bandwidth). 
The test cases were - Chunk size: 512KB, 1MB, 2MB, 4MB. - Upload Mode: Sequence, parallel (unlimited), parallel with limit (4 threads, 8 threads). - Chunk Format: base64 string, binaries. - Target file: 100MB. - Each case was tested 3 times. Below is the test result chart. Some thoughts, but not guidance or best practice: - Parallel gets better performance than series. - No significant performance improvement between parallel 4 threads and 8 threads. - Transform with binaries provides better performance than base64. - In all cases, chunk size in 1MB - 2MB gets better performance.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Create PDF document using iTextSharp in ASP.Net 4.0 and MemoryMappedFile

    - by sreejukg
    In this article I am going to demonstrate how ASP.Net developers can programmatically create PDF documents using iTextSharp. iTextSharp is a software component, that allows developers to programmatically create or manipulate PDF documents. Also this article discusses the process of creating in-memory file, read/write data from/to the in-memory file utilizing the new feature MemoryMappedFile. I have a database of users, where I need to send a notice to all my users as a PDF document. The sending mail part of it is not covered in this article. The PDF document will contain the company letter head, to make it more official. I have a list of users stored in a database table named “tblusers”. For each user I need to send customized message addressed to them personally. The database structure for the users is give below. id Title Full Name 1 Mr. Sreeju Nair K. G. 2 Dr. Alberto Mathews 3 Prof. Venketachalam Now I am going to generate the pdf document that contains some message to the user, in the following format. Dear <Title> <FullName>, The message for the user. Regards, Administrator Also I have an image, bg.jpg that contains the background for the document generated. I have created .Net 4.0 empty web application project named “iTextSharpSample”. First thing I need to do is to download the iTextSharp dll from the source forge. You can find the url for the download here. http://sourceforge.net/projects/itextsharp/files/ I have extracted the Zip file and added the itextsharp.dll as a reference to my project. Also I have added a web form named default.aspx to my project. After doing all this, the solution explorer have the following view. In the default.aspx page, I inserted one grid view and associated it with a SQL Data source control that bind data from tblusers. I have added a button column in the grid view with text “generate pdf”. The output of the page in the browser is as follows. Now I am going to create a pdf document when the user clicking on the Generate PDF button. As I mentioned before, I am going to work with the file in memory, I am not going to create a file in the disk. I added an event handler for button by specifying onrowcommand event handler. My gridview source looks like <asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False" DataSourceID="SqlDataSource1" Width="481px" CellPadding="4" ForeColor="#333333" GridLines="None" onrowcommand="Generate_PDF" > ………………………………………………………………………….. ………………………………………………………………………….. </asp:GridView> In the code behind, I wrote the corresponding event handler. protected void Generate_PDF(object sender, GridViewCommandEventArgs e) { // The button click event handler code. // I am going to explain the code for this section in the remaining part of the article } The Generate_PDF method is straight forward, It get the title, fullname and message to some variables, then create the pdf using these variables. The code for getting data from the grid view is as follows // get the row index stored in the CommandArgument property int index = Convert.ToInt32(e.CommandArgument); // get the GridViewRow where the command is raised GridViewRow selectedRow = ((GridView)e.CommandSource).Rows[index]; string title = selectedRow.Cells[1].Text; string fullname = selectedRow.Cells[2].Text; string msg = @"There are some changes in the company policy, due to this matter you need to submit your latest address to us. Please update your contact details / personnal details by visiting the member area of the website. ................................... 
"; since I don’t want to save the file in the disk, I am going the new feature introduced in .Net framework 4, called Memory-Mapped Files. Using Memory-Mapped mapped file, you can created non-persisted memory mapped files, that are not associated with a file in a disk. So I am going to create a temporary file in memory, add the pdf content to it, then write it to the output stream. To read more about MemoryMappedFile, read this msdn article http://msdn.microsoft.com/en-us/library/dd997372.aspx The below portion of the code using MemoryMappedFile object to create a test pdf document in memory and perform read/write operation on file. The CreateViewStream() object will give you a stream that can be used to read or write data to/from file. The code is very straight forward and I included comment so that you can understand the code. using (MemoryMappedFile mmf = MemoryMappedFile.CreateNew("test1.pdf", 1000000)) { // Create a new pdf document object using the constructor. The parameters passed are document size, left margin, right margin, top margin and bottom margin. iTextSharp.text.Document d = new iTextSharp.text.Document(PageSize.A4, 72,72,172,72); //get an instance of the memory mapped file to stream object so that user can write to this using (MemoryMappedViewStream stream = mmf.CreateViewStream()) { // associate the document to the stream. PdfWriter.GetInstance(d, stream); /* add an image as bg*/ iTextSharp.text.Image jpg = iTextSharp.text.Image.GetInstance(Server.MapPath("Image/bg.png")); jpg.Alignment = iTextSharp.text.Image.UNDERLYING; jpg.SetAbsolutePosition(0, 0); //this is the size of my background letter head image. the size is in points. this will fit to A4 size document. jpg.ScaleToFit(595, 842); d.Open(); d.Add(jpg); d.Add(new Paragraph(String.Format("Dear {0} {1},", title, fullname))); d.Add(new Paragraph("\n")); d.Add(new Paragraph(msg)); d.Add(new Paragraph("\n")); d.Add(new Paragraph(String.Format("Administrator"))); d.Close(); } //read the file data byte[] b; using (MemoryMappedViewStream stream = mmf.CreateViewStream()) { BinaryReader rdr = new BinaryReader(stream); b = new byte[mmf.CreateViewStream().Length]; rdr.Read(b, 0, (int)mmf.CreateViewStream().Length); } Response.Clear(); Response.ContentType = "Application/pdf"; Response.BinaryWrite(b); Response.End(); } Press ctrl + f5 to run the application. First I got the user list. Click on the generate pdf icon. The created looks as follows. Summary: Creating pdf document using iTextSharp is easy. You will get lot of information while surfing the www. Some useful resources and references are mentioned below http://itextsharp.com/ http://www.mikesdotnetting.com/Article/82/iTextSharp-Adding-Text-with-Chunks-Phrases-and-Paragraphs http://somewebguy.wordpress.com/2009/05/08/itextsharp-simplify-your-html-to-pdf-creation/ Hope you enjoyed the article.

    Read the article

  • Microsoft TechEd 2010 - Day 3 @ Bangalore

    - by sathya
    Sorry for my delayed post on day 3; I had to travel from Bangalore to Chennai, so I couldn't write for the past two days. On day 3, as usual, we had a lot of simultaneous tracks with various sessions. This day I chose the "Your Data, Our Platform" track. It had sessions on the following 5 topics:

Developing Data-tier Applications in Visual Studio 2010 - by Sanjay Nagamangalam
SQL Server Query Optimization, Execution and Debugging Query Performance - by Vinod Kumar M
SQL Server Utility - It's about more than 1 SQL Server - by Vinod Kumar Jagannathan
Data Recovery / Consistency with CheckDB - by Vinod Kumar M
Developing with SQL Server Spatial and Deep dive into Spatial Indexing - by Pinal Dave

Developing Data-tier Applications in Visual Studio 2010 - by Sanjay Nagamangalam
This was one of the best sessions I attended. He explained all the concepts in detail with a demo. The important thing here is the Data-tier Application project, newly introduced in VS2010, with which we can manage all our data along with our application inside Visual Studio itself. We can create the database, tables, procs, views etc. right there, and on deployment it creates a compressed file called .dacpac which stores all the changes in table schema, created procs, etc. in that single file, reducing the developer's effort in preparing deployment scripts and handing them over to the DBA. It also has policy configurations which can be managed easily by checking rules, much like rules in Outlook. For example: if the SQL Server version > 10 then deploy, else don't. This rule specifies that if we try to deploy to a SQL Server database with a version less than 10, the deployment will not happen. And if we deploy a .dacpac to a SQL Server production database with the option to upgrade the DB with this dacpac, it reports success once everything completes successfully, otherwise it rolls back to the prior version. Even if it deploys successfully and at a later point you wish to revert to the prior version, you can delete that dacpac version so the database reverts to the older version of the changes. T-shirts were given for the good questions asked during the session.

SQL Server Query Optimization, Execution and Debugging Query Performance - by Vinod Kumar M
This too was a superb session. The speaker, Vinod, explained everything very clearly. It was a really useful session; believe it or not, to my knowledge, across all 3 days of TechEd this was the only session apart from the keynote where the hall was completely full (house full), and people were even standing outside to attend it. The speaker did a deep dive into the query plan and showed what actually causes problems. It is all about understanding how SQL Server executes queries: we think one way, but SQL Server never executes that way, and we need to understand that first. He also mentioned that there might be two plans generated for a single query at a point in time because of parallel processors in the system. The key lies in every query plan: there is an Estimated Row Count and an Actual Row Count, and if the row count estimated by SQL Server tallies with the actual row count, your performance will be great. He shared some tweaks to achieve this. After this, as usual, we had lunch.

SQL Server Utility - It's about more than 1 SQL Server - by Vinod Kumar Jagannathan
This was more of a DBA's session.
I am really sorry, I was totally blank and not interested in attending this session, so I walked out to attend "Migrating to the Cloud" by Harish Ranganathan (my favourite speaker), but unfortunately that turned out to be some other person's session. There the speaker was explaining how to configure connection strings so that we can connect to the SQL Azure platform from Visual Studio, and he also showed us how to deploy the same to Windows Azure. In between there were a lot of technical problems, like the laptop hanging, a locked user account and switching between systems, and since I came in halfway I wasn't able to follow it fully. Meanwhile, since I have an MCTS certification they gave me a T-shirt with the line "I am Certified. Are you?" and asked me to wear it. If we wear it we might get spotted and be given some goodies, so on the 3rd day I was wearing that T-shirt. I got spotted by Tarun, who was coordinating the certification activities; he was accompanied by a cameraman, they interviewed me about the certification, and I was shown live at TechEd to around 60,000 live viewers. I was really happy about that.

Data Recovery / Consistency with CheckDB - by Vinod Kumar M
This was also one of the best sessions of TechEd. This guy is really amazing. In front of us he crashed a database and showed how to recover it in 6 different ways for different kinds of failures. He covered topics such as:

the different types of error messages, like 823, 824 and 825
msdb..suspect_pages
DBCC CheckDB (and the different parameters it takes)

I am really waiting for his session recording to be uploaded to the TechEd website. Here is his contact info if you wish to connect with him:
Twitter: @vinodk_sql
Website: www.ExtremeExperts.com
Blog: http://blogs.sqlxml.org/vinodkumar

Developing with SQL Server Spatial and Deep dive into Spatial Indexing - by Pinal Dave
Pinal Dave is a king of SQL, a SQL MVP and the owner of SQLAuthority.com. He took the session on spatial databases from the beginning, covering the two spatial types, geometric and geographic:

Geometric: x and y axes on a planar surface.
Geographic: a spherical surface, with 360° as the maximum, used to represent geographic points on the earth and to easily draw maps of different kinds.

He had a lot of obstacles during his session, like rain coming inside the hall, mic wires failing because of the rain, and the display screens going off. In spite of that he asked the audience to move to the front rows and managed to deliver a good session without slides, and when the displays finally came back he showed demos of what he had explained orally. It was a really fun-filled, informative session. He gave away books to the people who asked good questions and answered his questions well, and I got one too (a book on Data Mining from Wrox Publishers).

Finally, after all these sessions, there was the closing keynote of TechEd. We all assembled in a big hall where Mr. Ashok Soota, a man of around 70 and co-founder of MindTree, was invited to speak about his successes. He talked about his past, the companies he moved between and why, his successes, his failures, the lessons he learned from those failures, and his successes and failures in partnerships. There were some questions for him, such as:

What is your suggestion for young entrepreneurs?
How did you learn from past failures?
What keeps your success going?
What is your suggestion on partnerships? How do you choose partners?

And they said that at 7.30 PM there would be a party night, but unfortunately I was not able to attend because I had to catch my train and pack my things before that, so I left at 7 itself. That's it about TechEd! Stay tuned for further technology updates.

    Read the article

  • Building extensions for Expression Blend 4 using MEF

    - by Timmy Kokke
    Introduction
Although it was possible to write extensions for Expression Blend and Expression Design, it wasn't very easy, and out of the box only one add-in could be used. With Expression Blend 4 it is possible to write extensions using MEF, the Managed Extensibility Framework. Until today there is no documentation on how to build these extensions, so looking through the code with Reflector is something you'll have to do very often. Because Blend and Design are built using WPF, searching the visual tree with Snoop and Mole is something you'll be doing a lot while exploring the possibilities.

Configuring the extension project
Extensions are regular .NET class libraries. To create one, load up Visual Studio 2010 and start a new project. Because Blend is built using WPF, choose a WPF User Control Library from the Windows section and give it a name and location. I named mine DemoExtension1. Because Blend looks for add-ins named *.extension.dll, you'll have to tell Visual Studio to use that in the assembly name. To change the assembly name, right-click your project and go to Properties. On the Application tab, add .Extension to the name already in the Assembly name text field.

To be able to debug this extension, I prefer to set the output path on the Build tab to the extensions folder of Expression Blend. This means that everything that used to go into the Debug folder is placed in the extensions folder, including all referenced assemblies that have the Copy Local property set to false.

One last setting: to debug your extension you could start Blend and attach the debugger by hand, but I like to be able to just hit F5. Go to the Debug tab and add the full path to Blend.exe in the "Start external program" text field.

Extension Class
Add a new class to the project. This class needs to implement the IPackage interface, which can be found in the Microsoft.Expression.Extensibility namespace. To get access to this namespace, add Microsoft.Expression.Extensibility.dll to your references. This file can be found in the same folder as the (Expression Blend 4 Beta) Blend.exe file. Make sure the Copy Local property is set to false on this reference. After implementing the interface the class would look something like:

using Microsoft.Expression.Extensibility;

namespace DemoExtension1
{
    public class DemoExtension1 : IPackage
    {
        public void Load(IServices services)
        {
        }

        public void Unload()
        {
        }
    }
}

These two methods are called when your add-in is loaded and unloaded. The parameter passed to the Load method, IServices services, is your main entry point into Blend. The IServices interface exposes the GetService<T> method, and you will be using this method a lot: almost every part of Blend can be accessed through a service. For example, you can get to the commanding services of Blend by calling GetService<ICommandService>(), or to the windowing services by calling GetService<IWindowService>().

To get Blend to load the extension we have to implement MEF. (You can get up to speed on MEF on the community site or read the blog of Mr. MEF, Glenn Block.) In the case of Blend extensions, all that needs to be done is to mark the class with an Export attribute and pass it the type of IPackage. The Export attribute can be found in the System.ComponentModel.Composition namespace, which is part of the .NET 4 framework. You need to add this to your references.
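As a quick illustration (this sketch is mine, not from the article), resolving services inside Load simply means calling GetService<T> for each service you need. IWindowService is the service used later in the article's own code; the commanding service is only mentioned by name above, so its namespace and usage are left as comments rather than assumed:

using Microsoft.Expression.Extensibility;

namespace DemoExtension1
{
    public class DemoExtension1 : IPackage
    {
        public void Load(IServices services)
        {
            // Each part of Blend is reached through a service resolved via GetService<T>.
            IWindowService windowService = services.GetService<IWindowService>();
            // Other services, e.g. the commanding services, are resolved the same way:
            // var commandService = services.GetService<ICommandService>();
            // ... use the resolved services to register palettes, commands, etc.
        }

        public void Unload()
        {
        }
    }
}

With the System.ComponentModel.Composition reference in place, the exported package class from the article looks like this: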
using System.ComponentModel.Composition;
using Microsoft.Expression.Extensibility;

namespace DemoExtension1
{
    [Export(typeof(IPackage))]
    public class DemoExtension1 : IPackage
    {
        // ... the rest of the class stays the same
    }
}

Blend is able to find your add-in now.

Adding UI
The add-in doesn't do very much at this point. The WPF User Control Library came with a UserControl, so let's use that in this example. I just dropped a Button and a TextBlock onto the surface of the control to have something to show in the demo. To get the UserControl to work in Blend it has to be registered with the window service. Call GetService<IWindowService>() on the IServices interface to get access to the windowing services. The UserControl will be used in Blend on a palette and has to be registered to enable it. This is done by calling RegisterPalette on the IWindowService interface and passing it an identifier, an instance of the UserControl and a caption for the palette.

public void Load(IServices services)
{
    IWindowService windowService = services.GetService<IWindowService>();
    UserControl1 uc = new UserControl1();
    windowService.RegisterPalette("DemoExtension", uc, "Demo Extension");
}

After hitting F5 to start debugging, Expression Blend will start. You should be able to find the add-in in the Window menu now. Activating this window will show the "Demo Extension" palette with the UserControl, styled according to the settings of Blend.

Now what?
Because little is publicly known about how to access different parts of Blend, adding breakpoints in debug mode and browsing through objects using the Quick Watch feature of Visual Studio is something you have to do very often. This demo extension can be used for that purpose very easily. Add the click event handler to the button on the UserControl, change the constructor to take the IServices interface and store it in a field, and set a breakpoint in the button1_Click method.

public partial class UserControl1 : UserControl
{
    private readonly IServices _services;

    public UserControl1(IServices services)
    {
        _services = services;
        InitializeComponent();
    }

    private void button1_Click(object sender, RoutedEventArgs e)
    {
    }
}

Change the call to the constructor in the Load method to pass it the services instance.

public void Load(IServices services)
{
    IWindowService service = services.GetService<IWindowService>();
    UserControl1 uc = new UserControl1(services);
    service.RegisterPalette("DemoExtension", uc, "Demo Extension");
}

Hit F5 to compile and start Blend. Go to the Window menu and show the add-in. Click on the button to hit the breakpoint. Now place the caret on the _services text in the code window and hit Shift+F9 to show the Quick Watch window, and start exploring and discovering where to find everything you need.

More Information
There are no official resources available yet. Microsoft has released one extension for Expression Blend that is very useful as a reference: the Microsoft Expression Blend® Add-in Preview for Windows® Phone. This installs a .extension.dll file in the extensions folder of Blend. You can load this file with Reflector and have a peek at how Microsoft builds its add-ins.

Conclusion
I hope this gives you something to get started building extensions for Expression Blend. Until Microsoft releases the final version, which hopefully includes more information about building extensions, we'll have to work on documenting it in the community.

    Read the article

< Previous Page | 82 83 84 85 86 87 88 89 90 91 92 93  | Next Page >