Search Results

Search found 6963 results on 279 pages for 'ia 32'.


  • Windows 7 on a 64-bit computer

    - by GetFree
    I read on Wikipedia that Windows 7 on a 64-bit PC needs twice as much RAM as on a 32-bit PC. I understand why that is: every number stored in memory takes 8 bytes rather than just 4. In simple terms, that means your effective amount of RAM is halved when you use Windows 7 on a 64-bit computer. Now, I have an Intel Core 2 Duo laptop running Windows Vista (2 GB of RAM). My question is: since Core 2 is a 64-bit architecture, if I upgrade to Windows 7 will my laptop behave as if it had just 1 GB of RAM? Or, to put it another way: on a 64-bit PC with Windows 7, do you need twice as much RAM as you would on a 32-bit PC to get the same performance? If I am right, then I'd say it's a terrible deal to have a 64-bit computer with Windows 7 on it (I hope I am mistaken, though). Follow-up: after some answers, I'm realizing that having a 32-bit OS on a 64-bit PC is not the same thing as having a 64-bit OS on a 64-bit PC. Apparently, the claim that Windows 7 requires twice as much RAM on 64-bit architectures applies when both the OS and the PC are 64-bit. I'd like new answers to address this case. Also, is it possible to use more than 4 GB of RAM on a 64-bit PC running a 32-bit version of Windows?
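
    As an aside, a quick way to see the 4-byte vs 8-byte difference the question refers to is to check the native pointer width of a running process. A minimal Python sketch (assuming a Python interpreter is installed; it is the pointer width that changes between 32- and 64-bit builds):

      import struct
      import sys

      # Prints 4 on a 32-bit build and 8 on a 64-bit build of the interpreter.
      print(struct.calcsize("P"), "byte pointers")
      # True only on a 64-bit interpreter (sys.maxsize is 2**63 - 1 there).
      print(sys.maxsize > 2**32)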

    Read the article

  • Where do vendors publish internal transfer rates of HDDs?

    - by red888
    So I've started to dig into storage fundamentals and found that in order to calculate the IOPS of a HDD you need to know the internal transfer rate of the drive (time it takes data to move from the platters to internal disk's cache). I went on newegg and even a few vendor sites and could not find this info published for any HDDs. Is it sometimes called something else? Take this link to a seagate HDD for instance. Nowhere do I see "internal transfer rate", but I do see something called "Sustained Data Rate OD"- is that the same thing? Just so you know where I'm getting this info (Book: "Information Storage and Management Storing, Managing..."): Consider an example with the following specifications provided for a disk: The average seek time is 5 ms in a random I/O environment; therefore, T = 5 ms. Disk rotation speed of 15,000 revolutions per minute or 250 revolutions per second — from which rotational latency (L) can be determined, which is one-half of the time taken for a full rotation or L = (0.5/250 rps expressed in ms). 40 MB/s internal data transfer rate, from which the internal transfer time (X) is derived based on the block size of the I/O — for example, an I/O with a block size of 32 KB; therefore X = 32 KB/40 MB. Consequently, the time taken by the I/O controller to serve an I/O of block size 32 KB is (TS) = 5 ms + (0.5/250) + 32 KB/40 MB = 7.8 ms. Therefore, the maximum number of I/Os serviced per second or IOPS is (1/TS) = 1/(7.8 × 10^-3) = 128 IOPS.
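
    For clarity, the same arithmetic from the book excerpt as a small Python sketch (function and parameter names are just illustrative):

      def disk_service_time_ms(avg_seek_ms, rpm, block_kb, internal_mb_s):
          rotational = 0.5 / (rpm / 60.0) * 1000.0               # L: half a rotation, in ms
          transfer = (block_kb / 1024.0) / internal_mb_s * 1000.0  # X: block size / internal rate, in ms
          return avg_seek_ms + rotational + transfer              # TS = T + L + X

      # 5 ms seek, 15,000 rpm, 32 KB blocks, 40 MB/s internal transfer rate
      ts = disk_service_time_ms(5, 15000, 32, 40)   # ~7.78 ms; the book rounds to 7.8 ms
      print(f"TS = {ts:.2f} ms, max IOPS = {1000.0 / ts:.1f}")   # ~128 IOPS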

    Read the article

  • Not able to access Silverlight.net and ONLY Silverlight.net - All other domains work!

    - by Sootah
    Alrighty folks, I have an extremely odd problem. I am able to surf the web fine with one odd (and really annoying at the moment) exception: Microsoft's Silverlight.net. Every other site that I go to works just fine. This is quite frustrating because I'm in the middle of programming a web app in Silverlight 4.0, and whenever I do a search for any code examples, tutorials, or whatnot, at least 50% of the results are hosted in the silverlight.net forums. The error message that I get is:

      Oops! Google Chrome could not find www.silverlight.net

    It doesn't work in my other browsers either (both IE and Firefox). What's odd is that, while the error message would lead me to assume it's a DNS error, I can ping the URL just fine:

      C:\Users\The Doot>ping silverlight.net
      Reply from 206.72.125.201: bytes=32 time=106ms TTL=106
      Reply from 206.72.125.201: bytes=32 time=106ms TTL=106
      Reply from 206.72.125.201: bytes=32 time=106ms TTL=106
      Reply from 206.72.125.201: bytes=32 time=106ms TTL=106
      Ping statistics for 206.72.125.201:
          Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
      Approximate round trip times in milli-seconds:
          Minimum = 103ms, Maximum = 110ms, Average = 106ms

    I've checked my HOSTS file, and there's nothing that refers to ANY Microsoft URL in there. What could be causing this!? More importantly, how do I fix it? Just for kicks, I've even included the results of a traceroute here for your enjoyment. OS: Windows 7 Ultimate. Thanks in advance! -Sootah

    Read the article

  • Debian grub2 update removed Windows boot option.

    - by Wrikken
    Since I updated grub to grub 2 I no longer get the option to boot to Windows (which is unfortunately sometimes necessary for proprietary MSIE browser plugins I need to use for work). Relevant /boot/grub/menu.lst portion:

      ### END DEBIAN AUTOMAGIC KERNELS LIST
      # This is a divider, added to separate the menu items below from the Debian
      # ones.
      title Other operating systems:
      root

      # This entry automatically added by the Debian installer for a non-linux OS
      # on /dev/hda1
      title Windows NT/2000/XP
      root (hd0,0)
      savedefault
      makeactive
      chainloader +1

    This however does not appear anymore. I do have some entries in /boot/grub/grub.cfg like these:

      menuentry 'Debian GNU/Linux, with Linux 2.6.32-5-amd64' --class debian --class gnu-linux --class gnu --class os {
          insmod part_msdos
          insmod ext2
          set root='(hd1,msdos1)'
          search --no-floppy --fs-uuid --set e638c434-4884-412f-a141-2c194f881fae
          echo 'Loading Linux 2.6.32-5-amd64 ...'
          linux /boot/vmlinuz-2.6.32-5-amd64 root=UUID=e638c434-4884-412f-a141-2c194f881fae ro quiet
          echo 'Loading initial ramdisk ...'
          initrd /boot/initrd.img-2.6.32-5-amd64
      }

    Do I have to alter that file? If so, what is the correct syntax for a Windows boot? If not, what could be the problem?
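
    Not an authoritative answer, but for reference: under GRUB 2 on Debian, custom entries normally go in /etc/grub.d/40_custom and get picked up by update-grub (os-prober can also detect Windows automatically). A chainload entry of roughly this shape is the usual equivalent of the old menu.lst stanza; it is a sketch that assumes Windows still lives on the first partition of the first disk, which GRUB 2 calls (hd0,msdos1) - adjust to the actual partition:

      menuentry "Windows NT/2000/XP" {
          insmod part_msdos
          set root='(hd0,msdos1)'
          chainloader +1
      }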

    Read the article

  • Windows Server doesn't connect to a network share

    - by Dmitriy N. Laykom
    Windows Server doesn't connect to a network share. The network share itself is working.

      Pinging 109.123.146.223 with 32 bytes of data:
      Reply from 109.123.146.223: bytes=32 time<1ms TTL=63
      Reply from 109.123.146.223: bytes=32 time<1ms TTL=63
      Reply from 109.123.146.223: bytes=32 time<1ms TTL=63
      Ping statistics for 109.123.146.223:
          Packets: Sent = 3, Received = 3, Lost = 0 (0% loss),
      Approximate round trip times in milli-seconds:
          Minimum = 0ms, Maximum = 0ms, Average = 0ms

    net view \shareaddress returns:

      System error 53 has occurred.
      The network path was not found.

    When I connected the network share I observed this error message:

      "Mapped disk letter" refers to a location that is unavailable. It could be on a hard drive on this computer, or on a network. Check to make sure that the disk is properly inserted, or that you are connected to the Internet or your network, and then try again. If it still cannot be located, the information might have been moved to a different location.

    The network share was mounted via Group Policy. Does anyone know how I can avoid this error? After the OS was restored from disk, the problem was solved.

    Read the article

  • LVS / IPVS difference in ActiveConn since upgrading

    - by Hans
    I've recently migrated from an old version of LVS / ldirectord (Ultra Monkey) to a new Debian install with ldirectord. Now the number of Active Connections is usually higher than the number of Inactive Connections; it used to be the other way around. Basically, on the old load balancer the connections looked something like:

      -> RemoteAddress:Port    Forward  Weight  ActiveConn  InActConn
      -> 10.84.32.21:0         Masq     1       12          252
      -> 10.84.32.22:0         Masq     1       18          368

    However, since migrating to the new load balancer it looks more like:

      -> RemoteAddress:Port    Forward  Weight  ActiveConn  InActConn
      -> 10.84.32.21:0         Masq     1       313         141
      -> 10.84.32.22:0         Masq     1       276         183

    Old load balancer: Debian 3.1, ipvsadm 1.24, ldirectord 1.2.3
    New load balancer: Debian 6.0.5, ipvsadm 1.25, ldirectord 1.0.3 (I guess the versioning system changed)

    Is it because the old load balancer was running a kernel from 2005, and ldirectord from 2004, and things have simply changed in the past 7-8 years? Did I miss some sysctl settings that I should be enforcing for it to behave in the same way? Everything appears to be working fine, but can anyone see an issue with this behaviour? Thanks in advance! Additional info: I'm using LVS in masquerading mode, and the real servers have the load balancer as their gateway. The real servers are running Apache, which hasn't changed during the upgrade. The boxes themselves show roughly the same number of Inactive Connections as shown in ipvsadm.

    Read the article

  • Can't Connect SQL server - process being used by another process. Conflict with IIS?

    - by shinya
    I'm having a problem connecting to MS SQL Server (2012 Express) after accessing a database through IIS (a web site). I can access the data through the web site with no problem, but I can't access the data from any other program (i.e. SSMS) until I reboot the SQL server. It seems that the connection stays open even if I close the browser. Here is the error message I'm getting:

      Unable to open the physical file "C:---------". Operating system error 32: "32(The process cannot access the file because it is being used by another process.)".
      Unable to open the physical file "C:-------". Operating system error 32: "32(The process cannot access the file because it is being used by another process.)".
      Cannot open user default database. Login failed.
      Login failed for user 'Myserver\myname'. (.Net SqlClient Data Provider)
      Server Name: MYPC\SQLEXPRESS
      Error Number: 5120
      Severity: 16
      State: 101
      Line Number: 65536

    I followed the help link and it told me to move TCP before named pipes in the protocol order list. I tried that, but it didn't help at all. What are the proper settings on SQL Server or IIS to release the process after closing a browser? How do I avoid getting this error? Thank you for your help.

    Read the article

  • On Ubuntu get: "-bash: ./flume No such file or directory" BUT flume is there and executable. Same binary OK on RHEL

    - by lcbrevard
    This is already posted on Server Fault - and may be more appropriate there. Reworked a bit from the original posting. We have a product built on CentOS 4 32-bit Linux that runs unmodified on 32- and 64-bit CentOS/RHEL 4 and 5 and SLES 10. It also runs unmodified on SLES 9 64-bit. [SLES 9 32-bit requires a different libstdc++.] The name of the main binary executable is 'flume'. Yesterday we tried to put this on 64-bit Ubuntu 10 and, even though the file is there and the right size, we get: -bash: ./flume: No such file or directory. 'file flume' shows it to be a 32-bit ELF (can't remember the exact output and the system is on an isolated network). If put into /usr/local/bin, then 'which flume' returns: /usr/local/bin/flume. The file is marked as executable (did 'chmod +x flume') and lsattr shows no problems with attribute bits. I was not able to try 'ldd flume' yet. I have also not tried 'strace flume'. Currently I am dealing with an air conditioning failure. [It's been that kind of week!] I now suspect that some library is not there. This is a profoundly unhelpful message and one I have never seen before. Is this peculiar to Ubuntu, or perhaps just to this installation? We gave up and moved to a RHEL 4 system and everything is fine. But I sure would like to know what causes this.
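
    For what it's worth, the classic cause of "No such file or directory" for a 32-bit ELF that is plainly present on a 64-bit Ubuntu install is a missing 32-bit dynamic loader: bash reports the interpreter it cannot find (/lib/ld-linux.so.2), not the binary itself. A rough diagnostic sketch along the lines the poster already started (the ia32-libs package name is an assumption for Ubuntu 10.x):

      file ./flume              # confirm it is a 32-bit ELF executable
      ldd ./flume               # "not a dynamic executable" hints the 32-bit runtime is absent
      ls -l /lib/ld-linux.so.2  # the 32-bit loader; missing on a bare 64-bit install
      sudo apt-get install ia32-libs   # assumed package providing 32-bit compatibility libraries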

    Read the article

  • Why does 64-bit Windows need a separate "Program Files (x86)" folder?

    - by Stephen Jennings
    I know that on a 64-bit version of Windows the "Program Files" folder is for 64-bit programs and the "Program Files (x86)" folder is for 32-bit programs, but why is this even necessary? By "necessary", I don't mean "why could Microsoft not have made any other design decisions?" because of course they could have. Rather, I mean, "why, given the current design of 64-bit Windows, must 32-bit programs have a separate top-level folder from 64-bit programs?" There are plenty of questions on Super User and elsewhere that assert "one is for 32-bit programs, one is for 64-bit programs", but none that I can find give the reason. From my experience, it doesn't seem to matter whether a 32-bit program is installed in the correct place or not. Does Windows somehow present itself differently to a program running out of "Program Files (x86)"? Is there a description that shows exactly what's different for a program installed in "Program Files (x86)" instead of "Program Files"? I think it's unlikely that Microsoft would introduce a new folder without a legitimate technical reason.

    Read the article

  • Windows Server doesn't connect to a network share

    - by user104775
    Windows Server doesn't connect to a network share. The network share itself is working. Ping output:

      Pinging 109.123.146.223 with 32 bytes of data:
      Reply from 109.123.146.223: bytes=32 time<1ms TTL=63
      Reply from 109.123.146.223: bytes=32 time<1ms TTL=63
      Reply from 109.123.146.223: bytes=32 time<1ms TTL=63
      Ping statistics for 109.123.146.223:
          Packets: Sent = 3, Received = 3, Lost = 0 (0% loss),
      Approximate round trip times in milli-seconds:
          Minimum = 0ms, Maximum = 0ms, Average = 0ms

    net view \shareaddress returns:

      System error 53 has occurred.
      The network path was not found.

    When the network share was connected, I got this error message:

      "Mapped disk letter" refers to a location that is unavailable. It could be on a hard drive on this computer, or on a network. Check to make sure that the disk is properly inserted, or that you are connected to the Internet or your network, and then try again. If it still cannot be located, the information might have been moved to a different location.

    The network share is mounted via Group Policy. Any ideas?

    Read the article

  • .NET 3.5 Installation Problems in Windows 8

    - by Rick Strahl
    Windows 8 installs with .NET 4.5. A default installation of Windows 8 doesn't seem to include .NET 3.0 or 3.5, although .NET 2.0 does seem to be available by default (presumably because Windows has app dependencies on that). I ran into some pretty nasty compatibility issues regarding .NET 3.5 which I'll describe in this post. I'll preface this by saying that depending on how you install Windows 8 you may not run into these issues. In fact, it's probably a special case, but one that might be common with developer folks reading my blog. Specifically it's the install order that screwed things up for me -  installing Visual Studio before explicitly installing .NET 3.5 from Windows Features - in particular. If you install Visual Studio 2010 I highly recommend you install .NET 3.5 from Windows features BEFORE you install Visual Studio 2010 and save yourself the trouble I went through. So when I installed Windows 8, and then looked at the Windows Features to install after the fact in the Windows Feature dialog, I thought - .NET 3.5 - who needs it. I'd be happy to not have to install .NET 3.5, but unfortunately I found out quite a while after initial installation that one of my applications/tools (DevExpress's awesome CodeRush) depends on it and won't install without it. Enabling .NET 3.5 in Windows 8 If you want to run .NET 3.5 on Windows 8, don't download an installer - those installers don't work on Windows 8, and you don't need to do this because you can use the Windows Features dialog to enable .NET 3.5: And that *should* do the trick. If you do this before you install other apps that require .NET 3.5 and install a non-SP1 one version of it, you are going to have no problems. Unfortunately for me, even after I've installed the above, when I run the CodeRush installer I still get this lovely dialog: Now I double checked to see if .NET 3.5 is installed - it is, both for 32 bit and 64 bit. I went as far as creating a small .NET Console app and running it to verify that it actually runs. And it does… So naturally I thought the CodeRush installer is a little whacky. After some back and forth Alex Skorkin on Twitter pointed me in the right direction: He asked me to look in the registry for exact info on which version of .NET 3.5 is installed here: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP where I found that .NET 3.5 SP1 was installed. This is the 64 bit key which looks all correct. However, when I looked under the 32 bit node I found: HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\NET Framework Setup\NDP\v3.5 Notice that the service pack number is set to 0, rather than 1 (which it was for the 64 bit install), which is what the installer requires. So to summarize: the 64 bit version is installed with SP1, the 32 bit version is not. Uhm, Ok… thanks for that! Easy to fix, you say - just install SP1. Nope, not so easy because the standalone installer doesn't work on Windows 8. I can't get either .NET 3.5 installer or the SP 1 installer to even launch. They simply start and hang (or exit immediately) without messages. I also tried to get Windows to update .NET 3.5 by checking for Windows Updates, which should pick up on the dated version of .NET 3.5 and pull down SP1, but that's also no go. Check for Updates doesn't bring down any updates for me yet. I'm sure at some random point in the future Windows will deem it necessary to update .NET 3.5 to SP1, but at this point it's not letting me coerce it to do it explicitly. 
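
    If you'd rather script the registry check described above than eyeball regedit, a minimal sketch like this (Python's winreg module, run on the Windows 8 box; the key path and the "SP" value name are the ones quoted in the post) reads the service pack level from both the 64-bit and the 32-bit registry views:

      import winreg

      NDP_V35 = r"SOFTWARE\Microsoft\NET Framework Setup\NDP\v3.5"

      def sp_level(view_flag):
          # Open the v3.5 key in the requested registry view and read its "SP" value.
          with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, NDP_V35, 0,
                              winreg.KEY_READ | view_flag) as key:
              value, _type = winreg.QueryValueEx(key, "SP")
              return value

      print("64-bit view SP =", sp_level(winreg.KEY_WOW64_64KEY))  # 1 once SP1 is present
      print("32-bit view SP =", sp_level(winreg.KEY_WOW64_32KEY))  # was 0 in the broken state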
    How did this happen?

    I'm not sure exactly whether this is cause and effect, but I suspect the story goes like this:

    - Installed Windows 8 without support for .NET 3.5
    - Installed Visual Studio 2010, which installs .NET 3.5 (no SP)

    I now had .NET 3.5 installed but without SP1. I then:

    - Tried to install CodeRush - Error: .NET 3.5 SP1 required
    - Enabled .NET 3.5 in Windows Features

    I figured enabling the .NET 3.5 Windows feature would do the trick. But still no go. Now I suspect Visual Studio installed the 32-bit version of .NET 3.5 on my machine, and Windows Features detected the previous install and didn't reinstall it. This left the 32-bit install, at least, with no SP1 installed.

    How to Fix it

    My final solution was to completely uninstall .NET 3.5 *and* to reboot:

    - Go to Windows Features
    - Uncheck the .NET Framework 3.5
    - Restart Windows
    - Go to Windows Features
    - Check .NET Framework 3.5

    and voila, I now have a proper installation of .NET 3.5. I tried this before, but without the reboot step in between, which did not work. Make sure you reboot between uninstalling and reinstalling .NET 3.5!

    More Problems

    The above fixed me right up, but in looking for a solution it seems that a lot of people are also having problems with .NET 3.5 installing properly from the Windows Features dialog. The problem there is that the feature wasn't properly loading from the installer disks, or wasn't downloading the proper components for updates. It turns out you can explicitly install Windows features using the DISM tool in Windows:

      dism.exe /online /enable-feature /featurename:NetFX3 /Source:f:\sources\sxs

    You can try this without the /Source flag first, which uses the hidden Windows installer files if you kept those. Otherwise insert the DVD or ISO and point at the \sources\sxs path where the installer lives. This also gives you a little more information if something does go wrong.

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Windows, .NET

    Read the article

  • Virtual host in Apache Zend

    - by llocani
    I'd like to ask you if you can tell me why I can't get Vhost in Apache to work my Vhostconf is: NameVirtualHost *:80 <VirtualHost _default_:80> ServerAdmin [email protected] DocumentRoot "E:/Archivos de programa/Zend/Apache2/htdocs" ServerName localhost <Directory "E:/Archivos de programa/Zend/Apache2/htdocs"> Options FollowSymLinks AllowOverride All Order allow,deny Allow from all </Directory> #AllowOveride all </VirtualHost> <VirtualHost *:80> ServerAdmin [email protected] DocumentRoot "E:/Documents and Settings/dvieira/Mis documentos/NetBeansProjects/HealingHands" ServerName healinghands.loc <Directory "E:/Documents and Settings/dvieira/Mis documentos/NetBeansProjects/HealingHands"> Options FollowSymLinks AllowOverride All Order allow,deny Allow from all </Directory> ErrorLog "E:/Documents and Settings/dvieira/Mis documentos/NetBeansProjects/HealingHands/logs/error.log" CustomLog "E:/Documents and Settings/dvieira/Mis documentos/NetBeansProjects/HealingHands/logs/access.log" common #AllowOveride all </VirtualHost> <VirtualHost *:80> ServerAdmin [email protected] DocumentRoot "E:/Documents and Settings/dvieira/Mis documentos/NetBeansProjects" ServerName dev.loc <Directory "E:/Documents and Settings/dvieira/Mis documentos/NetBeansProjects"> Options FollowSymLinks AllowOverride All Order allow,deny Allow from all </Directory> ErrorLog "E:/Documents and Settings/dvieira/Mis documentos/NetBeansProjects/logs/error.log" CustomLog "E:/Documents and Settings/dvieira/Mis documentos/NetBeansProjects/logs/access.log" common #AllowOveride all </VirtualHost> My httpd.conf is: ServerRoot "E:\Archivos de programa\Zend\Apache2" Listen 80 LoadModule actions_module modules/mod_actions.so LoadModule alias_module modules/mod_alias.so LoadModule asis_module modules/mod_asis.so LoadModule auth_basic_module modules/mod_auth_basic.so LoadModule auth_digest_module modules/mod_auth_digest.so LoadModule authn_default_module modules/mod_authn_default.so LoadModule authn_file_module modules/mod_authn_file.so LoadModule authz_default_module modules/mod_authz_default.so LoadModule authz_groupfile_module modules/mod_authz_groupfile.so LoadModule authz_host_module modules/mod_authz_host.so LoadModule authz_user_module modules/mod_authz_user.so LoadModule autoindex_module modules/mod_autoindex.so LoadModule cgi_module modules/mod_cgi.so LoadModule dir_module modules/mod_dir.so LoadModule env_module modules/mod_env.so LoadModule filter_module modules/mod_filter.so LoadModule headers_module modules/mod_headers.so LoadModule imagemap_module modules/mod_imagemap.so LoadModule include_module modules/mod_include.so LoadModule info_module modules/mod_info.so LoadModule isapi_module modules/mod_isapi.so LoadModule log_config_module modules/mod_log_config.so LoadModule mime_module modules/mod_mime.so LoadModule mime_magic_module modules/mod_mime_magic.so LoadModule negotiation_module modules/mod_negotiation.so LoadModule rewrite_module modules/mod_rewrite.so LoadModule setenvif_module modules/mod_setenvif.so LoadModule ssl_module modules/mod_ssl.so LoadModule status_module modules/mod_status.so LoadModule userdir_module modules/mod_userdir.so <IfModule !mpm_netware_module> <IfModule !mpm_winnt_module> User daemon Group daemon </IfModule> </IfModule> ServerAdmin [email protected] DocumentRoot "E:\Archivos de programa\Zend\Apache2/htdocs" <Directory /> Options FollowSymLinks AllowOverride all Order allow,deny Allow from all </Directory> <IfModule dir_module> DirectoryIndex index.php index.html home.php </IfModule> <FilesMatch 
"^\.ht"> Order allow,deny Deny from all Satisfy All </FilesMatch> ErrorLog "logs/error.log" LogLevel warn <IfModule log_config_module> LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %b" common <IfModule logio_module> LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %I %O" combinedio </IfModule> CustomLog "logs/access.log" common </IfModule> <IfModule alias_module> Alias /NetBeansProjects "E:\Documents and Settings\dvieira\Mis documentos\NetBeansProjects" ScriptAlias /cgi-bin/ "E:\Archivos de programa\Zend\Apache2/cgi-bin/" </IfModule> <IfModule cgid_module> </IfModule> <Directory "E:\Archivos de programa\Zend\Apache2/cgi-bin"> AllowOverride None Options None Order allow,deny Allow from all </Directory> DefaultType text/plain <IfModule mime_module> TypesConfig conf/mime.types AddType application/x-compress .Z AddType application/x-gzip .gz .tgz </IfModule> Include conf/extra/httpd-vhosts.conf <IfModule ssl_module> SSLRandomSeed startup builtin SSLRandomSeed connect builtin </IfModule> Include "conf/zend.conf" NameVirtualHost *:80 <VirtualHost *:80> Include "E:\Archivos de programa\Zend\ZendServer/etc/sites.d/zend-default-vhost-80.conf" </VirtualHost> Include "E:\Archivos de programa\Zend\ZendServer/etc/sites.d/globals-*.conf" Include "E:\Archivos de programa\Zend\ZendServer/etc/sites.d/vhost_*.conf" And my host in Windows: 127.0.0.1 localhost 127.0.0.1 healinghands.loc 127.0.0.1 dev.loc And I can't get any of the browser to recognize dev.loc or healinghands.loc but a ping does it. Localhost is working fine. I've spent 3 days now traying to solve this for my one but I finally quit and have to ask. The error should be this Error Code 11002: host not found. Background: this error indicates that the gateway could not find an authoritative DNS server for the website you are trying to access. Date: 5/20/2013 5:51:03 PM Server: Source: DNS problem. i'd like to add this ping Haciendo ping a healinghands.loc [127.0.0.1] con 32 bytes de datos: Respuesta desde 127.0.0.1: bytes=32 tiempo<1m TTL=128 Respuesta desde 127.0.0.1: bytes=32 tiempo<1m TTL=128 Respuesta desde 127.0.0.1: bytes=32 tiempo<1m TTL=128 Respuesta desde 127.0.0.1: bytes=32 tiempo<1m TTL=128 Estadísticas de ping para 127.0.0.1: Paquetes: enviados = 4, recibidos = 4, perdidos = 0 (0% perdidos), Tiempos aproximados de ida y vuelta en milisegundos: Mínimo = 0ms, Máximo = 0ms, Media = 0ms Today i've tryed something: i've add this domains into the exceptions of mi ie proxy config. This worked for healinghands.loc but not for dev.loc i really do not understand why, both config are exactly the same except for de documentroot. I will continue searching

    Read the article

  • Wireless Problem on Acer Aspire 5610z

    - by Ugur Can Yalaki
    I installed ubuntu 12.04 on my machine, but I can't get wireless connection to work. My computer is Acer Aspire 5610z. I found that some other people that have same computer, face the same problem. Here is some information about it: ****** info trace ****** * uname -a * Linux ucy-Aspire-5610Z 3.8.0-32-generic #47~precise1-Ubuntu SMP Wed Oct 2 16:22:28 UTC 2013 i686 i686 i386 GNU/Linux * lsb_release * Distributor ID: Ubuntu Description: Ubuntu 12.04.3 LTS Release: 12.04 Codename: precise * lspci * 05:00.0 Network controller [0280]: Broadcom Corporation BCM4311 802.11b/g WLAN [14e4:4311] (rev 01) Subsystem: AMBIT Microsystem Corp. Device [1468:0422] Kernel driver in use: b43-pci-bridge 06:01.0 Ethernet controller [0200]: Broadcom Corporation BCM4401-B0 100Base-TX [14e4:170c] (rev 02) Subsystem: Acer Incorporated [ALI] Device [1025:0090] Kernel driver in use: b44 * lsusb * Bus 001 Device 004: ID 04e8:6863 Samsung Electronics Co., Ltd Bus 001 Device 002: ID 5986:0100 Acer, Inc Orbicam Bus 002 Device 002: ID 046d:c52f Logitech, Inc. Wireless Mouse M305 Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub * PCMCIA Card Info * PRODID_1="" PRODID_2="" PRODID_3="" PRODID_4="" MANFID=0000,0000 FUNCID=255 * iwconfig * * rfkill * 0: acer-wireless: Wireless LAN Soft blocked: no Hard blocked: no * lsmod * ssb_hcd 12781 0 ssb 51554 2 ssb_hcd,b44 * nm-tool * NetworkManager Tool State: connected (global) Device: usb0 [Wired connection 2] ------------------------------------------- Type: Wired Driver: rndis_host State: connected Default: yes HW Address: Capabilities: Carrier Detect: yes Wired Properties Carrier: on IPv4 Settings: Address: 192.168.42.7 Prefix: 24 (255.255.255.0) Gateway: 192.168.42.129 DNS: 192.168.42.129 IPv6 Settings: Address: ::a05d:a1ff:fea4:1738 Prefix: 64 Gateway: fe80::504d:76ff:fe86:db04 Address: fe80::a05d:a1ff:fea4:1738 Prefix: 64 Gateway: fe80::504d:76ff:fe86:db04 DNS: fe80::504d:76ff:fe86:db04 Device: eth2 ----------------------------------------------------------------- Type: Wired Driver: b44 State: unavailable Default: no HW Address: Capabilities: Carrier Detect: yes Wired Properties Carrier: off * NetworkManager.state * [main] NetworkingEnabled=true WirelessEnabled=true WWANEnabled=true WimaxEnabled=true * NetworkManager.conf * [main] plugins=ifupdown,keyfile dns=dnsmasq [ifupdown] managed=false * interfaces * auto lo iface lo inet loopback * iwlist * * resolv.conf * nameserver 127.0.0.1 * blacklist * [/etc/modprobe.d/blacklist-ath_pci.conf] blacklist ath_pci [/etc/modprobe.d/blacklist-bcm43.conf] blacklist b43 blacklist b43legacy blacklist ssb blacklist bcm43xx blacklist brcm80211 blacklist brcmfmac blacklist brcmsmac blacklist bcma [/etc/modprobe.d/blacklist.conf] blacklist evbug blacklist usbmouse blacklist usbkbd blacklist eepro100 blacklist de4x5 blacklist eth1394 blacklist snd_intel8x0m blacklist snd_aw2 blacklist i2c_i801 blacklist prism54 blacklist bcm43xx blacklist garmin_gps blacklist asus_acpi blacklist snd_pcsp blacklist pcspkr blacklist amd76x_edac * modinfo * filename: /lib/modules/3.8.0-32-generic/kernel/drivers/usb/host/ssb-hcd.ko license: GPL description: Common USB driver for SSB Bus author: Hauke Mehrtens srcversion: E127A51EDC8F44D2C2A8F15 alias: ssb:v4243id0819rev* alias: ssb:v4243id0817rev* 
alias: ssb:v4243id0808rev* depends: ssb intree: Y vermagic: 3.8.0-32-generic SMP mod_unload modversions 686 filename: /lib/modules/3.8.0-32-generic/kernel/drivers/ssb/ssb.ko license: GPL description: Sonics Silicon Backplane driver srcversion: 14621F6EC014F731244437C alias: pci:v000014E4d00004350sv*sd*bc*sc*i* alias: pci:v000014E4d0000432Csv*sd*bc*sc*i* alias: pci:v000014E4d0000432Bsv*sd*bc*sc*i* alias: pci:v000014E4d00004329sv*sd*bc*sc*i* alias: pci:v000014E4d00004328sv*sd*bc*sc*i* alias: pci:v000014E4d00004325sv*sd*bc*sc*i* alias: pci:v000014E4d00004324sv*sd*bc*sc*i* alias: pci:v000014E4d0000A8D6sv*sd*bc*sc*i* alias: pci:v000014E4d00004322sv*sd*bc*sc*i* alias: pci:v000014E4d00004321sv*sd*bc*sc*i* alias: pci:v000014E4d00004320sv*sd*bc*sc*i* alias: pci:v000014E4d00004319sv*sd*bc*sc*i* alias: pci:v000014A4d00004318sv*sd*bc*sc*i* alias: pci:v000014E4d00004318sv*sd*bc*sc*i* alias: pci:v000014E4d00004315sv*sd*bc*sc*i* alias: pci:v000014E4d00004312sv*sd*bc*sc*i* alias: pci:v000014E4d00004311sv*sd*bc*sc*i* alias: pci:v000014E4d00004307sv*sd*bc*sc*i* alias: pci:v000014E4d00004306sv*sd*bc*sc*i* alias: pci:v000014E4d00004301sv*sd*bc*sc*i* depends: intree: Y vermagic: 3.8.0-32-generic SMP mod_unload modversions 686 * udev rules * PCI device 0x14e4:/sys/devices/pci0000:00/0000:00:1e.0/0000:06:01.0/ssb1:0 (b44) SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0" PCI device 0x14e4:/sys/devices/pci0000:00/0000:00:1e.0/0000:06:01.0/ssb2:0 (b44) SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1" PCI device 0x14e4:/sys/devices/pci0000:00/0000:00:1e.0/0000:06:01.0/ssb3:0 (b44) SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth2" * dmesg * [ 2.385241] ssb: Found chip with id 0x4311, rev 0x01 and package 0x00 [ 2.385256] ssb: Core 0 found: ChipCommon (cc 0x800, rev 0x11, vendor 0x4243) [ 2.385266] ssb: Core 1 found: IEEE 802.11 (cc 0x812, rev 0x0A, vendor 0x4243) [ 2.385276] ssb: Core 2 found: USB 1.1 Host (cc 0x817, rev 0x03, vendor 0x4243) [ 2.385286] ssb: Core 3 found: PCI-E (cc 0x820, rev 0x01, vendor 0x4243) [ 2.448147] ssb: Sonics Silicon Backplane found on PCI device 0000:05:00.0 [ 2.468112] ssb: Found chip with id 0x4401, rev 0x02 and package 0x00 [ 2.468124] ssb: Core 0 found: Fast Ethernet (cc 0x806, rev 0x07, vendor 0x4243) [ 2.468132] ssb: Core 1 found: V90 (cc 0x807, rev 0x03, vendor 0x4243) [ 2.468140] ssb: Core 2 found: PCI (cc 0x804, rev 0x0A, vendor 0x4243) [ 2.508230] ssb: Sonics Silicon Backplane found on PCI device 0000:06:01.0 [ 2.528620] b44 ssb1:0 eth0: Broadcom 44xx/47xx 10/100 PCI ethernet driver ******** done ******** Thank you already for your help.

    Read the article

  • The Joy Of Hex

    - by Jim Giercyk
    While working on a mainframe integration project, it occurred to me that some basic computer concepts are slipping into obscurity. For example, just about anyone can tell you that a 64-bit processor is faster than a 32-bit processer. A grade school child could tell you that a computer “speaks” in ‘1’s and ‘0’s. Some people can even tell you that there are 8 bits in a byte. However, I have found that even the most seasoned developers often can’t explain the theory behind those statements. That is not a knock on programmers; in the age of IntelliSense, what reason do we have to work with data at the bit level? Many computer theory classes treat bit-level programming as a thing of the past, no longer necessary now that storage space is plentiful. The trouble with that mindset is that the world is full of legacy systems that run programs written in the 1970’s.  Today our jobs require us to extract data from those systems, regardless of the format, and that often involves low-level programming. Because it seems knowledge of the low-level concepts is waning in recent times, I thought a review would be in order.       CHARACTER: See Spot Run HEX: 53 65 65 20 53 70 6F 74 20 52 75 6E DECIMAL: 83 101 101 32 83 112 111 116 32 82 117 110 BINARY: 01010011 01100101 01100101 00100000 01010011 01110000 01101111 01110100 00100000 01010010 01110101 01101110 In this example, I have broken down the words “See Spot Run” to a level computers can understand – machine language.     CHARACTER:  The character level is what is rendered by the computer.  A “Character Set” or “Code Page” contains 256 characters, both printable and unprintable.  Each character represents 1 BYTE of data.  For example, the character string “See Spot Run” is 12 Bytes long, exclusive of the quotation marks.  Remember, a SPACE is an unprintable character, but it still requires a byte.  In the example I have used the default Windows character set, ASCII, which you can see here:  http://www.asciitable.com/ HEX:  Hex is short for hexadecimal, or Base 16.  Humans are comfortable thinking in base ten, perhaps because they have 10 fingers and 10 toes; fingers and toes are called digits, so it’s not much of a stretch.  Computers think in Base 16, with numeric values ranging from zero to fifteen, or 0 – F.  Each decimal place has a possible 16 values as opposed to a possible 10 values in base 10.  Therefore, the number 10 in Hex is equal to the number 16 in Decimal.  DECIMAL:  The Decimal conversion is strictly for us humans to use for calculations and conversions.  It is much easier for us humans to calculate that [30 – 10 = 20] in decimal than it is for us to calculate [1E – A = 14] in Hex.  In the old days, an error in a program could be found by determining the displacement from the entry point of a module.  Since those values were dumped from the computers head, they were in hex. A programmer needed to convert them to decimal, do the equation and convert back to hex.  This gets into relative and absolute addressing, a topic for another day.  BINARY:  Binary, or machine code, is where any value can be expressed in 1s and 0s.  It is really Base 2, because each decimal place can have a possibility of only 2 characters, a 1 or a 0.  In Binary, the number 10 is equal to the number 2 in decimal. Why only 1s and 0s?  Very simply, computers are made up of lots and lots of transistors which at any given moment can be ON ( 1 ) or OFF ( 0 ).  
    Each transistor is a bit, and the order in which the transistors fire (or don't fire) is what distinguishes one value from another in the computer's head (or CPU). Consider 32-bit vs 64-bit processing: a 64-bit processor has the capability to read 64 transistors at a time. A 32-bit processor can only read half as many at a time, so in theory the 64-bit processor should be much faster. There are many more factors involved in CPU performance, but that is the fundamental difference.

      DECIMAL  HEX  BINARY
      0        0    0000
      1        1    0001
      2        2    0010
      3        3    0011
      4        4    0100
      5        5    0101
      6        6    0110
      7        7    0111
      8        8    1000
      9        9    1001
      10       A    1010
      11       B    1011
      12       C    1100
      13       D    1101
      14       E    1110
      15       F    1111

    Remember that each character is a BYTE, there are 2 HEX characters in a byte (called nibbles), and there are 8 BITS in a byte. I hope you enjoyed reading about the theory of data processing. This is just a high-level explanation, and there is much more to be learned. It is safe to say that, no matter how advanced our programming languages and Visual Studios become, they are nothing more than a way to interpret bits and bytes. There is nothing like the joy of hex to get the mind racing.
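
    If you want to play along, a small Python sketch reproduces the "See Spot Run" breakdown shown at the top of the post (ASCII encoding, one byte per character, space included):

      text = "See Spot Run"
      data = text.encode("ascii")   # 12 bytes, one per character

      print("HEX:    ", " ".join(f"{b:02X}" for b in data))
      print("DECIMAL:", " ".join(str(b) for b in data))
      print("BINARY: ", " ".join(f"{b:08b}" for b in data))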

    Read the article

  • Checksum Transformation

    The Checksum Transformation computes a hash value, the checksum, across one or more columns, returning the result in the Checksum output column. The transformation provides functionality similar to the T-SQL CHECKSUM function, but is encapsulated within SQL Server Integration Services, for use within the pipeline without code or a SQL Server connection. As featured in The Microsoft Data Warehouse Toolkit by Joy Mundy and Warren Thornthwaite from the Kimball Group. Have a look at the book samples, especially the sample package for custom SCD handling. All input columns are passed through the transformation unaltered; those selected are used to generate the checksum, which is passed out through a single output column, Checksum. This does not restrict the number of columns available downstream from the transformation, as columns always flow through a transformation. The Checksum output column is in addition to all existing columns within the pipeline buffer. The Checksum Transformation uses an algorithm based on the .NET Framework GetHashCode method; it is not consistent with the T-SQL CHECKSUM() or BINARY_CHECKSUM() functions. The transformation does not support the following Integration Services data types: DT_NTEXT, DT_IMAGE and DT_BYTES.

    ChecksumAlgorithm Property

    The ChecksumAlgorithm property is defined with an enumeration. It was first added in v1.3.0, when the FrameworkChecksum was added. All previous algorithms are still supported for backward compatibility as ChecksumAlgorithm.Original (0).

    - Original - Original checksum function, with known issues around column separators and null columns. This was deprecated in the first SQL Server 2005 RTM release.
    - FrameworkChecksum - The hash function is based on the .NET Framework GetHash method for object types. This is based on the .NET Object.GetHashCode() method, which unfortunately differs between x86 and x64 systems. For that reason we now default to the CRC32 option.
    - CRC32 - Using a standard 32-bit cyclic redundancy check (CRC), this provides a more open implementation.

    The component is provided as an MSI file; however, to complete the installation you will have to add the transformation to the Visual Studio toolbox by hand. This process has been described in detail in the related FAQ entry How do I install a task or transform component? - just select Checksum from the SSIS Data Flow Items list in the Choose Toolbox Items window.

    Downloads

    The Checksum Transformation is available for SQL Server 2005, SQL Server 2008 (including R2) and SQL Server 2012. Please choose the version to match your SQL Server version, or you can install multiple versions and use them side by side if you have more than one version of SQL Server installed.

    - Checksum Transformation for SQL Server 2005
    - Checksum Transformation for SQL Server 2008
    - Checksum Transformation for SQL Server 2012

    Version History

    SQL Server 2012
    - Version 3.0.0.27 - SQL Server 2012 release. Includes upgrade support for both 2005 and 2008 packages to 2012. (5 Jun 2010)

    SQL Server 2008
    - Version 2.0.0.27 - Fix for CRC-32 algorithm that inadvertently made it sort dependent. Fix for race condition which sometimes led to the error "Item has already been added. Key in dictionary: '79764919'". Fix for upgrade mappings between 2005 and 2008. (19 Oct 2010)
    - Version 2.0.0.24 - SQL Server 2008 release. Introduces the new CRC-32 algorithm, which is consistent across x86 and x64. The default algorithm is now CRC32. (29 Oct 2008)
    - Version 2.0.0.6 - SQL Server 2008 pre-release. This version was released by mistake as part of the site migration, and had known issues. (20 Oct 2008)

    SQL Server 2005
    - Version 1.5.0.43 - Fix for CRC-32 algorithm that inadvertently made it sort dependent. Fix for race condition which sometimes led to the error "Item has already been added. Key in dictionary: '79764919'". (19 Oct 2010)
    - Version 1.5.0.16 - Introduces the new CRC-32 algorithm, which is consistent across x86 and x64. The default algorithm is now CRC32. (20 Oct 2008)
    - Version 1.4.0.0 - Installer refresh only. (22 Dec 2007)
    - Version 1.4.0.0 - Refresh for minor UI enhancements. (5 Mar 2006)
    - Version 1.3.0.0 - SQL Server 2005 RTM. The checksum algorithm has changed to improve cardinality when calculating multiple column checksums. The original algorithm is still available for backward compatibility. Fixed custom UI bug with the Output column name not persisting. (10 Nov 2005)
    - Version 1.2.0.1 - SQL Server 2005 IDW 15 June CTP. A user interface is provided, as well as the ability to change the checksum output column name. (29 Aug 2005)
    - Version 1.0.0 - Public Release (Beta). (30 Oct 2004)
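
    As an illustration only (not the component's actual implementation), a CRC-32 over the selected columns' text, with an explicit separator so that ("ab", "c") and ("a", "bc") hash differently and NULLs are handled, gives the kind of platform-independent row checksum the CRC32 option aims for. A minimal Python sketch using the standard library:

      import zlib

      def row_checksum(*columns, sep="\x1f"):
          # Join column values with a unit-separator so adjacent columns can't collide,
          # treat NULL (None) as an empty string, and mask to an unsigned 32-bit value.
          payload = sep.join("" if c is None else str(c) for c in columns)
          return zlib.crc32(payload.encode("utf-8")) & 0xFFFFFFFF

      print(row_checksum("Jane", "Smith", None, "1975-05-01"))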

    Read the article

  • Managing text-maps in a 2D array on to be painted on HTML5 Canvas

    - by weka
    So, I'm making a HTML5 RPG just for fun. The map is a <canvas> (512px width, 352px height | 16 tiles across, 11 tiles top to bottom). I want to know if there's a more efficient way to paint the <canvas>. Here's how I have it right now. How tiles are loaded and painted on map The map is being painted by tiles (32x32) using the Image() piece. The image files are loaded through a simple for loop and put into an array called tiles[] to be PAINTED on using drawImage(). First, we load the tiles... and here's how it's being done: // SET UP THE & DRAW THE MAP TILES tiles = []; var loadedImagesCount = 0; for (x = 0; x <= NUM_OF_TILES; x++) { var imageObj = new Image(); // new instance for each image imageObj.src = "js/tiles/t" + x + ".png"; imageObj.onload = function () { console.log("Added tile ... " + loadedImagesCount); loadedImagesCount++; if (loadedImagesCount == NUM_OF_TILES) { // Onces all tiles are loaded ... // We paint the map for (y = 0; y <= 15; y++) { for (x = 0; x <= 10; x++) { theX = x * 32; theY = y * 32; context.drawImage(tiles[5], theY, theX, 32, 32); } } } }; tiles.push(imageObj); } Naturally, when a player starts a game it loads the map they last left off. But for here, it an all-grass map. Right now, the maps use 2D arrays. Here's an example map. [[4, 1, 4, 1, 4, 1, 4, 1, 4, 1, 4, 1, 1, 1, 1, 1], [1, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 1], [13, 13, 13, 13, 1, 1, 1, 1, 13, 13, 13, 13, 13, 13, 13, 1], [13, 13, 13, 13, 1, 13, 13, 1, 13, 13, 13, 13, 13, 13, 13, 1], [13, 13, 13, 13, 1, 13, 13, 1, 13, 13, 13, 13, 13, 13, 13, 1], [13, 13, 13, 13, 1, 13, 13, 1, 13, 13, 13, 13, 13, 13, 13, 1], [13, 13, 13, 13, 1, 1, 1, 1, 13, 13, 13, 13, 13, 13, 13, 1], [13, 13, 13, 13, 13, 13, 13, 1, 13, 13, 13, 13, 13, 13, 13, 1], [13, 13, 13, 13, 13, 11, 11, 11, 13, 13, 13, 13, 13, 13, 13, 1], [13, 13, 13, 1, 1, 1, 1, 1, 1, 1, 13, 13, 13, 13, 13, 1], [1, 1, 1, 13, 13, 13, 13, 13, 13, 13, 13, 13, 13, 1, 1, 1]]; I get different maps using a simple if structure. Once the 2d array above is return, the corresponding number in each array will be painted according to Image() stored inside tile[]. Then drawImage() will occur and paint according to the x and y and times it by 32 to paint on the correct x-y coordinate. How multiple map switching occurs With my game, maps have five things to keep track of: currentID, leftID, rightID, upID, and bottomID. currentID: The current ID of the map you are on. leftID: What ID of currentID to load when you exit on the left of current map. rightID: What ID of currentID to load when you exit on the right of current map. downID: What ID of currentID to load when you exit on the bottom of current map. upID: What ID of currentID to load when you exit on the top of current map. Something to note: If either leftID, rightID, upID, or bottomID are NOT specific, that means they are a 0. That means they cannot leave that side of the map. It is merely an invisible blockade. So, once a person exits a side of the map, depending on where they exited... for example if they exited on the bottom, bottomID will the number of the map to load and thus be painted on the map. Here's a representational .GIF to help you better visualize: As you can see, sooner or later, with many maps I will be dealing with many IDs. And that can possibly get a little confusing and hectic. The obvious pros is that it load 176 tiles at a time, refresh a small 512x352 canvas, and handles one map at time. The con is that the MAP ids, when dealing with many maps, may get confusing at times. 
My question Is this an efficient way to store maps (given the usage of tiles), or is there a better way to handle maps? I was thinking along the lines of a giant map. The map-size is big and it's all one 2D array. The viewport, however, is still 512x352 pixels. Here's another .gif I made (for this question) to help visualize: Sorry if you cannot understand my English. Please ask anything you have trouble understanding. Hopefully, I made it clear. Thanks.

    Read the article

  • Converting Celsius Processor Temperature to Fahrenheit

    - by WindowsEscapist
    I'm editing a Conky theme. I would like it to output the processor temperatures in degrees Fahrenheit instead of Celsius. In the ~/.conkyrc file, the command sensors | grep 'Core 0' | cut -c18-19 is used to find the temperature in Celsius for the first processor core. I want to use bc to compute this (give it outputvalue*9/5+32). Problem is, bc wants just absolute values, and I see no way to pass it program output. If I try to use something like temp=$(sensors | grep 'Core 0' | cut -c18-19) & echo 'temp*9/5+32' | bc, it ends up giving me 32 because it registers "temp" as a 0.
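
    For what it's worth, a sketch of the likely fix: the single quotes keep temp from being expanded, and the & runs the two commands in parallel rather than in sequence, so double quotes plus a ; (or a single pipeline) should give bc the real value. The commands are the same ones already used in the question:

      temp=$(sensors | grep 'Core 0' | cut -c18-19); echo "$temp * 9 / 5 + 32" | bc
      # or as one pipeline, without the intermediate variable:
      echo "$(sensors | grep 'Core 0' | cut -c18-19) * 9 / 5 + 32" | bc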

    Read the article

  • Xenserver 5.5 U2 a bit unstable with an unstable W2003 VM

    - by twistedbrain
    In the last week I had to reboot the host system twice and the second one by means of the power button. The system is a Dell PE 6950 (4 Opteron dual core, 2,8 Ghz, 16 GB RAM, 900 GB disk in a RAID 10 array composed by 4 450GB 15000 rpm disks) with XenServer 5.5 U2. We're installing it and at now there are working in production a 2003 32 bit Windows server VM with 2 GB RAM, 3 vcpu and about 200 GB disk in 4 partition (12 GB boot, 20 GB program, 80 GB user data, 80 GB other data). The first time I was compressing many Windows 2003 folders (some tens GB by means of W2003 compressed folder option) from a Windows remote console and some hours before my colleague installed Backup Exec agent that was alredy installed and that required a reboot (that was pending). The console stopped responding, it was no more possible to connect by means of remote console or by means of the console of the XenCentre, it was still possible from the network to use the shared folder of the VM and the programs on it (2 db and a GIS program), but the print server didn't work any more and I couldn't give remote reboot from other domain controller hosts. I couldn't stop the virtual machine neither from the XenCentre, neither from the command line of the host also forcing the reboot. I had to reboot the host server. Yesterday it has been worst. I installed a template of another VM, CentOS 5.3 and then, put the DVD in the drive of the host. Then, before the install and after the boot I checked for defect the DVD and the W2003 VM began to respond slowly (I was connected by means of an administration remote console) the task manager showed only mid or low load, it is, only the first of the 3 vcpu was loaded (about 70%), while the other two were about at 20% and also the disk I/O was not so heavy. Then the users were not so happy because they couldn't any more use the MS word docs on the server. I immediately stopped the check of the DVD (to do that I had to force the stop of such Centos 5.3 VM), then some users could again use their docs, but other had still problems, so I decided to reboot the VM, but it doesn't stopped, neither from XenCenter, neither from command line, neither forcing the reboot. Then I tried to reboot of the host, but it didn't worked, neither from the XenCentre, neither from the host prompt (shutdown -r now as root: it told that it was shutting down, but then it didn't did that). So I had to power off by means of the power button of the server (before I tried some Magic SyS REQ, but I saw that Xen isn't compiled with this option enabled). What could you suggest about my problem, what can I look, search and see? In the W2003 VM logs there are no errors or warning to explain what happened. 
Some more exciting, amusing and inspiring words of poetry (the facts happened around 11 am): \# egrep -i 'err|warning' xensource.log [20100219 10:32:05.597|debug|culo|6301 unix-RPC|VBD.plug R:c81bcda701f6|xenops] watch: watching xenstore paths: [ /xapi/0/frontend/vbd/51712/hotplug; /local/domain/0/backend/vbd/0/51712/tapdisk-error ] with timeout 1200.000000 seconds [20100219 10:32:05.597|debug|culo|6301 unix-RPC|VBD.plug R:c81bcda701f6|xenops] watch: fired on /local/domain/0/backend/vbd/0/51712/tapdisk-error [20100219 10:32:14.314|debug|culo|6335 unix-RPC|VBD.unplug R:9258f54578d6|xenops] watch: watching xenstore paths: [ /local/domain/0/backend/vbd/0/51712/shutdown-done; /local/domain/0/error/device/vbd/51712/error ] with timeout 1200.000000 seconds [20100219 10:32:14.337|debug|culo|6335 unix-RPC|VBD.unplug R:9258f54578d6|xenops] xenstore-rm /local/domain/0/error/backend/vbd/0 [20100219 10:32:14.337|debug|culo|6335 unix-RPC|VBD.unplug R:9258f54578d6|xenops] xenstore-rm /local/domain/0/error/device/vbd/51712 [20100219 10:32:14.338|debug|culo|6335 unix-RPC|VBD.unplug R:9258f54578d6|xenops] watch: fired on /local/domain/0/error/device/vbd/51712/error [20100219 10:53:48.903|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|helpers] Ignoring exception: INTERNAL_ERROR: [ Xb.Noent ] while Vmops.destroy_domain: Destroying domid 14 guest session [20100219 10:53:52.048|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/0/error/backend/tap/14 [20100219 10:53:52.048|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/14/error/device/vbd/51744 [20100219 10:53:52.085|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/0/error/backend/tap/14 [20100219 10:53:52.086|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/14/error/device/vbd/51728 [20100219 10:53:52.122|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/0/error/backend/tap/14 [20100219 10:53:52.122|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/14/error/device/vbd/51712 [20100219 10:53:52.127|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/0/error/backend/vbd/14 [20100219 10:53:52.128|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/14/error/device/vbd/51760 [20100219 10:53:52.496|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] Device.Vif.hard_shutdown about to blow away backend and error paths [20100219 10:53:52.497|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/0/error/backend/vif/14 [20100219 10:53:52.497|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/14/error/device/vif/0 [20100219 10:53:53.385|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] Device.Vif.hard_shutdown about to blow away backend and error paths [20100219 10:53:53.386|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/0/error/backend/vif/14 [20100219 10:53:53.386|debug|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|xenops] xenstore-rm /local/domain/14/error/device/vif/1 [20100219 10:53:53.389| warn|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|hotplug] Warning, deleting 'vif' entry from /xapi/14/hotplug/vif/0 [20100219 10:53:53.391| warn|culo|6418|Async.VM.hard_shutdown R:88d2095678f7|hotplug] Warning, deleting 'vif' entry from 
/xapi/14/hotplug/vif/1 [20100219 11:20:49.766|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|helpers] Ignoring exception: INTERNAL_ERROR: [ Xb.Noent ] while Vmops.destroy_domain: Destroying domid 11 guest session [20100219 11:20:50.339|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|xenops] xenstore-rm /local/domain/0/error/backend/tap/11 [20100219 11:20:50.339|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|xenops] xenstore-rm /local/domain/11/error/device/vbd/832 [20100219 11:20:50.360|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|xenops] xenstore-rm /local/domain/0/error/backend/tap/11 [20100219 11:20:50.360|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|xenops] xenstore-rm /local/domain/11/error/device/vbd/768 [20100219 11:20:50.366|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|xenops] xenstore-rm /local/domain/0/error/backend/vbd/11 [20100219 11:20:50.366|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|xenops] xenstore-rm /local/domain/11/error/device/vbd/5696 [20100219 11:20:50.753|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|xenops] Device.Vif.hard_shutdown about to blow away backend and error paths [20100219 11:20:50.754|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|xenops] xenstore-rm /local/domain/0/error/backend/vif/11 [20100219 11:20:50.754|debug|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|xenops] xenstore-rm /local/domain/11/error/device/vif/1 [20100219 11:20:50.757| warn|culo|6484|Async.VM.clean_shutdown R:78d3c3e28cb6|hotplug] Warning, deleting 'vif' entry from /xapi/11/hotplug/vif/1 [20100219 11:28:13.803|debug|culo|6610 inet-RPC|Connection to VM console R:e9f8b76e8975|console] error: INTERNAL_ERROR: [ Unix.Unix_error(63, "connect", "") ]

    Read the article

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspxMany people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with easy-to-use API through .NET SDK and HTTP REST. For example, we can store JavaScript files, images, documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure. Or we can store our VHD files in blob and mount it as a hard drive in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blob and block blob. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunk read and write, which has more common usage. Since we can upload block blob in blocks through BlockBlob.PutBlock, and them commit them as a whole blob with invoking the BlockBlob.PutBlockList, it is very powerful to upload large files, as we can upload blocks in parallel, and provide pause-resume feature. There are many documents, articles and blog posts described on how to upload a block blob. Most of them are focus on the server side, which means when you had received a big file, stream or binaries, how to upload them into blob storage in blocks through .NET SDK.  But the problem is, how can we upload these large files from client side, for example, a browser. This questioned to me when I was working with a Chinese customer to help them build a network disk production on top of azure. The end users upload their files from the web portal, and then the files will be stored in blob storage from the Web Role. My goal is to find the best way to transform the file from client (end user’s machine) to the server (Web Role) through browser. In this post I will demonstrate and describe what I had done, to upload large file in chunks with high speed, and save them as blocks into Windows Azure Blob Storage.   Traditional Upload, Works with Limitation The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button. 1: @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" })) 2: { 3: <input type="file" name="file" /> 4: <input type="submit" value="upload" /> 5: } And then in the backend controller, we retrieve the whole content of this file and upload it in to the blob storage through .NET SDK. We can split the file in blocks and upload them in parallel and commit. The code had been well blogged in the community. 
1: [HttpPost] 2: public ActionResult About(HttpPostedFileBase file) 3: { 4: var container = _client.GetContainerReference("test"); 5: container.CreateIfNotExists(); 6: var blob = container.GetBlockBlobReference(file.FileName); 7: var blockDataList = new Dictionary<string, byte[]>(); 8: using (var stream = file.InputStream) 9: { 10: var blockSizeInKB = 1024; 11: var offset = 0; 12: var index = 0; 13: while (offset < stream.Length) 14: { 15: var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset); 16: var blockData = new byte[readLength]; 17: offset += stream.Read(blockData, 0, readLength); 18: blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData); 19:  20: index++; 21: } 22: } 23:  24: Parallel.ForEach(blockDataList, (bi) => 25: { 26: blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null); 27: }); 28: blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray()); 29:  30: return RedirectToAction("About"); 31: } This works perfectly if we select an image, a song or a small video to upload. But if I select a large file, let's say a 6 GB HD movie, after uploading for a few minutes the page will be shown as below and the upload will be terminated. In ASP.NET there is a limit on the request length, and the maximum request length is defined in the web.config file. It is a value less than about 4 GB. So if we want to upload a really big file, we cannot simply implement it this way. Also, in Windows Azure, the cloud service network load balancer will terminate the connection if it exceeds the timeout period. From my tests the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitations mentioned above, the simple HTML file upload cannot provide a rich upload experience such as chunked upload, pause and pause-resume. So we need to find a better way to upload large files from the client to the server.   Upload in Chunks through HTML5 and JavaScript In order to break the limitations mentioned above we will try to upload the large file in chunks. This gives us some benefits, such as - No request size limitation: since we upload in chunks, we can define the request size for each chunk regardless of how big the entire file is. - No timeout problem: the size of the chunks is controlled by us, which means we can make sure the request for each chunk upload will not exceed the timeout period of either ASP.NET or the Windows Azure load balancer. It was a big challenge to upload a big file in chunks until we got HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.   In HTML5, the File interface has been improved with a new method called “slice”. It can be used to read part of the file by specifying the start byte index and the end byte index. For example, if the entire file is 1024 bytes, file.slice(512, 768) will read the part of this file from byte 512 up to (but not including) byte 768, and return a new object of an interface called "Blob”, which you can treat as an array of bytes. In fact, a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and extending it to support files on the user's system. For more information about Blob please refer here. File and Blob are very useful for implementing the chunked upload. 
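As a side note, the chunk-boundary arithmetic the client performs is simple. Below is a minimal C# sketch of the same calculation (a hypothetical helper of my own, not part of the article's code); it produces the same (start, end) byte ranges that the JavaScript further down builds and passes to File.slice, where the end index is exclusive:

// assumes using System; using System.Collections.Generic;
static IEnumerable<Tuple<long, long>> GetChunkRanges(long fileSize, long chunkSize)
{
    // walk the file in chunkSize steps; the last chunk may be shorter
    for (long start = 0; start < fileSize; start += chunkSize)
    {
        var end = Math.Min(start + chunkSize, fileSize);
        yield return Tuple.Create(start, end); // same semantics as file.slice(start, end)
    }
}

For example, a 1,331,200 byte (1,300 KB) file with 512 KB (524,288 byte) chunks yields the ranges (0, 524288), (524288, 1048576) and (1048576, 1331200).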
We will use File interface to represent the file the user selected from the browser and then use File.slice to read the file in chunks in the size we wanted. For example, if we wanted to upload a 10MB file with 512KB chunks, then we can read it in 512KB blobs by using File.slice in a loop.   Assuming we have a web page as below. User can select a file, an input box to specify the block size in KB and a button to start upload. 1: <div> 2: <input type="file" id="upload_files" name="files[]" /><br /> 3: Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br /> 4: <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" /> 5: </div> Then we can have the JavaScript function to upload the file in chunks when user clicked the button. 1: <script type="text/javascript"> 1: 2: $(function () { 3: $("#upload_button_blob").click(function () { 4: }); 5: });</script> Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke the File, Blob and FormData from the “window” object. If any of them is “undefined” the condition result will be “false” which means your browser doesn’t support these premium feature and it’s time for you to get your browser updated. FormData is another new feature we are going to use in the future. It could generate a temporary form for us. We will use this interface to create a form with chunk and associated metadata when invoked the service through ajax. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: if (window.File && window.Blob && window.FormData) { 4: alert("Your brwoser is awesome, let's rock!"); 5: } 6: else { 7: alert("Oh man plz update to a modern browser before try is cool stuff out."); 8: return; 9: } 10: }); Each browser supports these interfaces by their own implementation and currently the Blob, File and File.slice are supported by Chrome 21, FireFox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we worked on the files the user selected one by one since in HTML5, user can select multiple files in one file input box. 1: var files = $("#upload_files")[0].files; 2: for (var i = 0; i < files.length; i++) { 3: var file = files[i]; 4: var fileSize = file.size; 5: var fileName = file.name; 6: } Next, we calculated the start index and end index for each chunks based on the size the user specified from the browser. We put them into an array with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks since we need to specify the target blob name and the block index. At the same time we will store the list of all indexes into another variant which will be used to commit blocks into blob in Azure Storage once all chunks had been uploaded successfully. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 
4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10:  11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: var blockSizeInKB = $("#block_size").val(); 14: var blockSize = blockSizeInKB * 1024; 15: var blocks = []; 16: var offset = 0; 17: var index = 0; 18: var list = ""; 19: while (offset < fileSize) { 20: var start = offset; 21: var end = Math.min(offset + blockSize, fileSize); 22:  23: blocks.push({ 24: name: fileName, 25: index: index, 26: start: start, 27: end: end 28: }); 29: list += index + ","; 30:  31: offset = end; 32: index++; 33: } 34: } 35: }); Now we have all chunks’ information ready. The next step should be upload them one by one to the server side, and at the server side when received a chunk it will upload as a block into Blob Storage, and finally commit them with the index list through BlockBlobClient.PutBlockList. But since all these invokes are ajax calling, which means not synchronized call. So we need to introduce a new JavaScript library to help us coordinate the asynchronize operation, which named “async.js”. You can download this JavaScript library here, and you can find the document here. I will not explain this library too much in this post. We will put all procedures we want to execute as a function array, and pass into the proper function defined in async.js to let it help us to control the execution sequence, in series or in parallel. Hence we will define an array and put the function for chunk upload into this array. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4:  5: // start to upload each files in chunks 6: var files = $("#upload_files")[0].files; 7: for (var i = 0; i < files.length; i++) { 8: var file = files[i]; 9: var fileSize = file.size; 10: var fileName = file.name; 11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: ... ... 14:  15: // define the function array and push all chunk upload operation into this array 16: blocks.forEach(function (block) { 17: putBlocks.push(function (callback) { 18: }); 19: }); 20: } 21: }); 22: }); As you can see, I used File.slice method to read each chunks based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then post this form to the backend server through jQuery.ajax. This is the key part of our solution. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 
13: // define the function array and push all chunk upload operation into this array 14: blocks.forEach(function (block) { 15: putBlocks.push(function (callback) { 16: // load blob based on the start and end index for each chunks 17: var blob = file.slice(block.start, block.end); 18: // put the file name, index and blob into a temporary from 19: var fd = new FormData(); 20: fd.append("name", block.name); 21: fd.append("index", block.index); 22: fd.append("file", blob); 23: // post the form to backend service (asp.net mvc controller action) 24: $.ajax({ 25: url: "/Home/UploadInFormData", 26: data: fd, 27: processData: false, 28: contentType: "multipart/form-data", 29: type: "POST", 30: success: function (result) { 31: if (!result.success) { 32: alert(result.error); 33: } 34: callback(null, block.index); 35: } 36: }); 37: }); 38: }); 39: } 40: }); Then we will invoke these functions one by one by using the async.js. And once all functions had been executed successfully I invoked another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.series(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); That’s all in the client side. The outline of our logic would be - Calculate the start and end byte index for each chunks based on the block size. - Defined the functions of reading the chunk form file and upload the content to the backend service through ajax. - Execute the functions defined in previous step with “async.js”. - Commit the chunks by invoking the backend service in Windows Azure Storage finally.   Save Chunks as Blocks into Blob Storage In above we finished the client size JavaScript code. It uploaded the file in chunks to the backend service which we are going to implement in this step. We will use ASP.NET MVC as our backend service, and it will receive the chunks, upload into Windows Azure Bob Storage in blocks, then finally commit as one blob. As in the client side we uploaded chunks by invoking the ajax call to the URL "/Home/UploadInFormData", I created a new action under the Index controller and it only accepts HTTP POST request. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: } 8: catch (Exception e) 9: { 10: error = e.ToString(); 11: } 12:  13: return new JsonResult() 14: { 15: Data = new 16: { 17: success = string.IsNullOrWhiteSpace(error), 18: error = error 19: } 20: }; 21: } Then I retrieved the file name, index and the chunk content from the Request.Form object, which was passed from our client side. 
And then, used the Windows Azure SDK to create a blob container (in this case we will use the container named “test”.) and create a blob reference with the blob name (same as the file name). Then uploaded the chunk as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with so that finally we can put all blocks as one blob by specifying their block ID list. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var index = int.Parse(Request.Form["index"]); 9: var file = Request.Files[0]; 10: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 11:  12: var container = _client.GetContainerReference("test"); 13: container.CreateIfNotExists(); 14: var blob = container.GetBlockBlobReference(name); 15: blob.PutBlock(id, file.InputStream, null); 16: } 17: catch (Exception e) 18: { 19: error = e.ToString(); 20: } 21:  22: return new JsonResult() 23: { 24: Data = new 25: { 26: success = string.IsNullOrWhiteSpace(error), 27: error = error 28: } 29: }; 30: } Next, I created another action to commit the blocks into blob once all chunks had been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunks ID list, which is the block ID list from the Request.Form in a string format, split them as a list, then invoked the BlockBlob.PutBlockList method. After that our blob will be shown in the container and ready to be download. 1: [HttpPost] 2: public JsonResult Commit() 3: { 4: var error = string.Empty; 5: try 6: { 7: var name = Request.Form["name"]; 8: var list = Request.Form["list"]; 9: var ids = list 10: .Split(',') 11: .Where(id => !string.IsNullOrWhiteSpace(id)) 12: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 13: .ToArray(); 14:  15: var container = _client.GetContainerReference("test"); 16: container.CreateIfNotExists(); 17: var blob = container.GetBlockBlobReference(name); 18: blob.PutBlockList(ids); 19: } 20: catch (Exception e) 21: { 22: error = e.ToString(); 23: } 24:  25: return new JsonResult() 26: { 27: Data = new 28: { 29: success = string.IsNullOrWhiteSpace(error), 30: error = error 31: } 32: }; 33: } Now we finished all code we need. The whole process of uploading would be like this below. Below is the full client side JavaScript code. 
1: <script type="text/javascript" src="~/Scripts/async.js"></script> 2: <script type="text/javascript"> 3: $(function () { 4: $("#upload_button_blob").click(function () { 5: // assert the browser support html5 6: if (window.File && window.Blob && window.FormData) { 7: alert("Your brwoser is awesome, let's rock!"); 8: } 9: else { 10: alert("Oh man plz update to a modern browser before try is cool stuff out."); 11: return; 12: } 13:  14: // start to upload each files in chunks 15: var files = $("#upload_files")[0].files; 16: for (var i = 0; i < files.length; i++) { 17: var file = files[i]; 18: var fileSize = file.size; 19: var fileName = file.name; 20:  21: // calculate the start and end byte index for each blocks(chunks) 22: // with the index, file name and index list for future using 23: var blockSizeInKB = $("#block_size").val(); 24: var blockSize = blockSizeInKB * 1024; 25: var blocks = []; 26: var offset = 0; 27: var index = 0; 28: var list = ""; 29: while (offset < fileSize) { 30: var start = offset; 31: var end = Math.min(offset + blockSize, fileSize); 32:  33: blocks.push({ 34: name: fileName, 35: index: index, 36: start: start, 37: end: end 38: }); 39: list += index + ","; 40:  41: offset = end; 42: index++; 43: } 44:  45: // define the function array and push all chunk upload operation into this array 46: var putBlocks = []; 47: blocks.forEach(function (block) { 48: putBlocks.push(function (callback) { 49: // load blob based on the start and end index for each chunks 50: var blob = file.slice(block.start, block.end); 51: // put the file name, index and blob into a temporary from 52: var fd = new FormData(); 53: fd.append("name", block.name); 54: fd.append("index", block.index); 55: fd.append("file", blob); 56: // post the form to backend service (asp.net mvc controller action) 57: $.ajax({ 58: url: "/Home/UploadInFormData", 59: data: fd, 60: processData: false, 61: contentType: "multipart/form-data", 62: type: "POST", 63: success: function (result) { 64: if (!result.success) { 65: alert(result.error); 66: } 67: callback(null, block.index); 68: } 69: }); 70: }); 71: }); 72:  73: // invoke the functions one by one 74: // then invoke the commit ajax call to put blocks into blob in azure storage 75: async.series(putBlocks, function (error, result) { 76: var data = { 77: name: fileName, 78: list: list 79: }; 80: $.post("/Home/Commit", data, function (result) { 81: if (!result.success) { 82: alert(result.error); 83: } 84: else { 85: alert("done!"); 86: } 87: }); 88: }); 89: } 90: }); 91: }); 92: </script> And below is the full ASP.NET MVC controller code. 
1: public class HomeController : Controller 2: { 3: private CloudStorageAccount _account; 4: private CloudBlobClient _client; 5:  6: public HomeController() 7: : base() 8: { 9: _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString")); 10: _client = _account.CreateCloudBlobClient(); 11: } 12:  13: public ActionResult Index() 14: { 15: ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application."; 16:  17: return View(); 18: } 19:  20: [HttpPost] 21: public JsonResult UploadInFormData() 22: { 23: var error = string.Empty; 24: try 25: { 26: var name = Request.Form["name"]; 27: var index = int.Parse(Request.Form["index"]); 28: var file = Request.Files[0]; 29: var id = Convert.ToBase64String(BitConverter.GetBytes(index)); 30:  31: var container = _client.GetContainerReference("test"); 32: container.CreateIfNotExists(); 33: var blob = container.GetBlockBlobReference(name); 34: blob.PutBlock(id, file.InputStream, null); 35: } 36: catch (Exception e) 37: { 38: error = e.ToString(); 39: } 40:  41: return new JsonResult() 42: { 43: Data = new 44: { 45: success = string.IsNullOrWhiteSpace(error), 46: error = error 47: } 48: }; 49: } 50:  51: [HttpPost] 52: public JsonResult Commit() 53: { 54: var error = string.Empty; 55: try 56: { 57: var name = Request.Form["name"]; 58: var list = Request.Form["list"]; 59: var ids = list 60: .Split(',') 61: .Where(id => !string.IsNullOrWhiteSpace(id)) 62: .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id)))) 63: .ToArray(); 64:  65: var container = _client.GetContainerReference("test"); 66: container.CreateIfNotExists(); 67: var blob = container.GetBlockBlobReference(name); 68: blob.PutBlockList(ids); 69: } 70: catch (Exception e) 71: { 72: error = e.ToString(); 73: } 74:  75: return new JsonResult() 76: { 77: Data = new 78: { 79: success = string.IsNullOrWhiteSpace(error), 80: error = error 81: } 82: }; 83: } 84: } And if we selected a file from the browser we will see our application will upload chunks in the size we specified to the server through ajax call in background, and then commit all chunks in one blob. Then we can find the blob in our Windows Azure Blob Storage.   Optimized by Parallel Upload In previous example we just uploaded our file in chunks. This solved the problem that ASP.NET MVC request content size limitation as well as the Windows Azure load balancer timeout. But it might introduce the performance problem since we uploaded chunks in sequence. In order to improve the upload performance we could modify our client side code a bit to make the upload operation invoked in parallel. The good news is that, “async.js” library provides the parallel execution function. If you remembered the code we invoke the service to upload chunks, it utilized “async.series” which means all functions will be executed in sequence. Now we will change this code to “async.parallel”. This will invoke all functions in parallel. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 
15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallel(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); In this way all chunks will be uploaded to the server side at the same time to maximize the bandwidth usage. This should work if the file was not very large and the chunk size was not very small. But for large file this might introduce another problem that too many ajax calls are sent to the server at the same time. So the best solution should be, upload the chunks in parallel with maximum concurrency limitation. The code below specified the concurrency limitation to 4, which means at the most only 4 ajax calls could be invoked at the same time. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.parallelLimit(putBlocks, 4, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: });   Summary In this post we discussed how to upload files in chunks to the backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leverage three new feature introduced in HTML 5 which are - File.slice: Read part of the file by specifying the start and end byte index. - Blob: File-like interface which contains the part of the file content. - FormData: Temporary form element that we can pass the chunk alone with some metadata to the backend service. Then we discussed the performance consideration of chunk uploading. Sequence upload cannot provide maximized upload speed, but the unlimited parallel upload might crash the browser and server if too many chunks. So we finally came up with the solution to upload chunks in parallel with the concurrency limitation. We also demonstrated how to utilize “async.js” JavaScript library to help us control the asynchronize call and the parallel limitation.   Regarding the chunk size and the parallel limitation value there is no “best” value. You need to test vary composition and find out the best one for your particular scenario. It depends on the local bandwidth, client machine cores and the server side (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below is one of my performance test result. The client machine was Windows 8 IE 10 with 4 cores. I was using Microsoft Cooperation Network. The web site was hosted on Windows Azure China North data center (in Beijing) with one small web role (1.7GB 1 core CPU, 1.75GB memory with 100Mbps bandwidth). 
The test cases were: - Chunk size: 512 KB, 1 MB, 2 MB, 4 MB. - Upload mode: sequential, parallel (unlimited), parallel with limit (4 threads, 8 threads). - Chunk format: base64 string, binary. - Target file: 100 MB. - Each case was tested 3 times. Below is the test result chart. Some thoughts, but not guidance or best practice: - Parallel gets better performance than sequential. - There is no significant performance difference between parallel with 4 threads and with 8 threads. - Transferring binary chunks provides better performance than base64. - In all cases, a chunk size of 1 MB - 2 MB gets better performance.   Hope this helps, Shaun All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • How LINQ to Object statements work

    - by rajbk
    This post goes into detail as to now LINQ statements work when querying a collection of objects. This topic assumes you have an understanding of how generics, delegates, implicitly typed variables, lambda expressions, object/collection initializers, extension methods and the yield statement work. I would also recommend you read my previous two posts: Using Delegates in C# Part 1 Using Delegates in C# Part 2 We will start by writing some methods to filter a collection of data. Assume we have an Employee class like so: 1: public class Employee { 2: public int ID { get; set;} 3: public string FirstName { get; set;} 4: public string LastName {get; set;} 5: public string Country { get; set; } 6: } and a collection of employees like so: 1: var employees = new List<Employee> { 2: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 3: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 4: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 5: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" }, 6: }; Filtering We wish to  find all employees that have an even ID. We could start off by writing a method that takes in a list of employees and returns a filtered list of employees with an even ID. 1: static List<Employee> GetEmployeesWithEvenID(List<Employee> employees) { 2: var filteredEmployees = new List<Employee>(); 3: foreach (Employee emp in employees) { 4: if (emp.ID % 2 == 0) { 5: filteredEmployees.Add(emp); 6: } 7: } 8: return filteredEmployees; 9: } The method can be rewritten to return an IEnumerable<Employee> using the yield return keyword. 1: static IEnumerable<Employee> GetEmployeesWithEvenID(IEnumerable<Employee> employees) { 2: foreach (Employee emp in employees) { 3: if (emp.ID % 2 == 0) { 4: yield return emp; 5: } 6: } 7: } We put these together in a console application. 1: using System; 2: using System.Collections.Generic; 3: //No System.Linq 4:  5: public class Program 6: { 7: [STAThread] 8: static void Main(string[] args) 9: { 10: var employees = new List<Employee> { 11: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 12: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 13: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 14: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" }, 15: }; 16: var filteredEmployees = GetEmployeesWithEvenID(employees); 17:  18: foreach (Employee emp in filteredEmployees) { 19: Console.WriteLine("ID {0} First_Name {1} Last_Name {2} Country {3}", 20: emp.ID, emp.FirstName, emp.LastName, emp.Country); 21: } 22:  23: Console.ReadLine(); 24: } 25: 26: static IEnumerable<Employee> GetEmployeesWithEvenID(IEnumerable<Employee> employees) { 27: foreach (Employee emp in employees) { 28: if (emp.ID % 2 == 0) { 29: yield return emp; 30: } 31: } 32: } 33: } 34:  35: public class Employee { 36: public int ID { get; set;} 37: public string FirstName { get; set;} 38: public string LastName {get; set;} 39: public string Country { get; set; } 40: } Output: ID 2 First_Name Jim Last_Name Ashlock Country UK ID 4 First_Name Jill Last_Name Anderson Country AUS Our filtering method is too specific. Let us change it so that it is capable of doing different types of filtering and lets give our method the name Where ;-) We will add another parameter to our Where method. 
This additional parameter will be a delegate with the following declaration. public delegate bool Filter(Employee emp); The idea is that the delegate parameter in our Where method will point to a method that contains the logic to do our filtering thereby freeing our Where method from any dependency. The method is shown below: 1: static IEnumerable<Employee> Where(IEnumerable<Employee> employees, Filter filter) { 2: foreach (Employee emp in employees) { 3: if (filter(emp)) { 4: yield return emp; 5: } 6: } 7: } Making the change to our app, we create a new instance of the Filter delegate on line 14 with a target set to the method EmployeeHasEvenId. Running the code will produce the same output. 1: public delegate bool Filter(Employee emp); 2:  3: public class Program 4: { 5: [STAThread] 6: static void Main(string[] args) 7: { 8: var employees = new List<Employee> { 9: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 10: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 11: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 12: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" } 13: }; 14: var filterDelegate = new Filter(EmployeeHasEvenId); 15: var filteredEmployees = Where(employees, filterDelegate); 16:  17: foreach (Employee emp in filteredEmployees) { 18: Console.WriteLine("ID {0} First_Name {1} Last_Name {2} Country {3}", 19: emp.ID, emp.FirstName, emp.LastName, emp.Country); 20: } 21: Console.ReadLine(); 22: } 23: 24: static bool EmployeeHasEvenId(Employee emp) { 25: return emp.ID % 2 == 0; 26: } 27: 28: static IEnumerable<Employee> Where(IEnumerable<Employee> employees, Filter filter) { 29: foreach (Employee emp in employees) { 30: if (filter(emp)) { 31: yield return emp; 32: } 33: } 34: } 35: } 36:  37: public class Employee { 38: public int ID { get; set;} 39: public string FirstName { get; set;} 40: public string LastName {get; set;} 41: public string Country { get; set; } 42: } Lets use lambda expressions to inline the contents of the EmployeeHasEvenId method in place of the method. The next code snippet shows this change (see line 15).  For brevity, the Employee class declaration has been skipped. 1: public delegate bool Filter(Employee emp); 2:  3: public class Program 4: { 5: [STAThread] 6: static void Main(string[] args) 7: { 8: var employees = new List<Employee> { 9: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 10: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 11: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 12: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" } 13: }; 14: var filterDelegate = new Filter(EmployeeHasEvenId); 15: var filteredEmployees = Where(employees, emp => emp.ID % 2 == 0); 16:  17: foreach (Employee emp in filteredEmployees) { 18: Console.WriteLine("ID {0} First_Name {1} Last_Name {2} Country {3}", 19: emp.ID, emp.FirstName, emp.LastName, emp.Country); 20: } 21: Console.ReadLine(); 22: } 23: 24: static bool EmployeeHasEvenId(Employee emp) { 25: return emp.ID % 2 == 0; 26: } 27: 28: static IEnumerable<Employee> Where(IEnumerable<Employee> employees, Filter filter) { 29: foreach (Employee emp in employees) { 30: if (filter(emp)) { 31: yield return emp; 32: } 33: } 34: } 35: } 36:  The output displays the same two employees.  
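To make the relationship between the named method and the lambda explicit, here is a small illustration of my own (not part of the original post): all of the following produce an equivalent Filter delegate, which is why the lambda on line 15 above can stand in for new Filter(EmployeeHasEvenId).

Filter f1 = new Filter(EmployeeHasEvenId);                       // explicit delegate instantiation
Filter f2 = EmployeeHasEvenId;                                   // method group conversion
Filter f3 = delegate(Employee emp) { return emp.ID % 2 == 0; };  // anonymous method (C# 2.0)
Filter f4 = emp => emp.ID % 2 == 0;                              // lambda expression (C# 3.0)

Calling Where(employees, f1) through Where(employees, f4) returns the same filtered sequence in every case.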
Our Where method is too restricted since it works with a collection of Employees only. Lets change it so that it works with any IEnumerable<T>. In addition, you may recall from my previous post,  that .NET 3.5 comes with a lot of predefined delegates including public delegate TResult Func<T, TResult>(T arg); We will get rid of our Filter delegate and use the one above instead. We apply these two changes to our code. 1: public class Program 2: { 3: [STAThread] 4: static void Main(string[] args) 5: { 6: var employees = new List<Employee> { 7: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 8: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 9: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 10: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" } 11: }; 12:  13: var filteredEmployees = Where(employees, emp => emp.ID % 2 == 0); 14:  15: foreach (Employee emp in filteredEmployees) { 16: Console.WriteLine("ID {0} First_Name {1} Last_Name {2} Country {3}", 17: emp.ID, emp.FirstName, emp.LastName, emp.Country); 18: } 19: Console.ReadLine(); 20: } 21: 22: static IEnumerable<T> Where<T>(IEnumerable<T> source, Func<T, bool> filter) { 23: foreach (var x in source) { 24: if (filter(x)) { 25: yield return x; 26: } 27: } 28: } 29: } We have successfully implemented a way to filter any IEnumerable<T> based on a  filter criteria. Projection Now lets enumerate on the items in the IEnumerable<Employee> we got from the Where method and copy them into a new IEnumerable<EmployeeFormatted>. The EmployeeFormatted class will only have a FullName and ID property. 1: public class EmployeeFormatted { 2: public int ID { get; set; } 3: public string FullName {get; set;} 4: } We could “project” our existing IEnumerable<Employee> into a new collection of IEnumerable<EmployeeFormatted> with the help of a new method. We will call this method Select ;-) 1: static IEnumerable<EmployeeFormatted> Select(IEnumerable<Employee> employees) { 2: foreach (var emp in employees) { 3: yield return new EmployeeFormatted { 4: ID = emp.ID, 5: FullName = emp.LastName + ", " + emp.FirstName 6: }; 7: } 8: } The changes are applied to our app. 
1: public class Program 2: { 3: [STAThread] 4: static void Main(string[] args) 5: { 6: var employees = new List<Employee> { 7: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 8: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 9: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 10: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" } 11: }; 12:  13: var filteredEmployees = Where(employees, emp => emp.ID % 2 == 0); 14: var formattedEmployees = Select(filteredEmployees); 15:  16: foreach (EmployeeFormatted emp in formattedEmployees) { 17: Console.WriteLine("ID {0} Full_Name {1}", 18: emp.ID, emp.FullName); 19: } 20: Console.ReadLine(); 21: } 22:  23: static IEnumerable<T> Where<T>(IEnumerable<T> source, Func<T, bool> filter) { 24: foreach (var x in source) { 25: if (filter(x)) { 26: yield return x; 27: } 28: } 29: } 30: 31: static IEnumerable<EmployeeFormatted> Select(IEnumerable<Employee> employees) { 32: foreach (var emp in employees) { 33: yield return new EmployeeFormatted { 34: ID = emp.ID, 35: FullName = emp.LastName + ", " + emp.FirstName 36: }; 37: } 38: } 39: } 40:  41: public class Employee { 42: public int ID { get; set;} 43: public string FirstName { get; set;} 44: public string LastName {get; set;} 45: public string Country { get; set; } 46: } 47:  48: public class EmployeeFormatted { 49: public int ID { get; set; } 50: public string FullName {get; set;} 51: } Output: ID 2 Full_Name Ashlock, Jim ID 4 Full_Name Anderson, Jill We have successfully selected employees who have an even ID and then shaped our data with the help of the Select method so that the final result is an IEnumerable<EmployeeFormatted>.  Lets make our Select method more generic so that the user is given the freedom to shape what the output would look like. We can do this, like before, with lambda expressions. Our Select method is changed to accept a delegate as shown below. TSource will be the type of data that comes in and TResult will be the type the user chooses (shape of data) as returned from the selector delegate. 1:  2: static IEnumerable<TResult> Select<TSource, TResult>(IEnumerable<TSource> source, Func<TSource, TResult> selector) { 3: foreach (var x in source) { 4: yield return selector(x); 5: } 6: } We see the new changes to our app. On line 15, we use lambda expression to specify the shape of the data. In this case the shape will be of type EmployeeFormatted. 
1:  2: public class Program 3: { 4: [STAThread] 5: static void Main(string[] args) 6: { 7: var employees = new List<Employee> { 8: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 9: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 10: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 11: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" } 12: }; 13:  14: var filteredEmployees = Where(employees, emp => emp.ID % 2 == 0); 15: var formattedEmployees = Select(filteredEmployees, (emp) => 16: new EmployeeFormatted { 17: ID = emp.ID, 18: FullName = emp.LastName + ", " + emp.FirstName 19: }); 20:  21: foreach (EmployeeFormatted emp in formattedEmployees) { 22: Console.WriteLine("ID {0} Full_Name {1}", 23: emp.ID, emp.FullName); 24: } 25: Console.ReadLine(); 26: } 27: 28: static IEnumerable<T> Where<T>(IEnumerable<T> source, Func<T, bool> filter) { 29: foreach (var x in source) { 30: if (filter(x)) { 31: yield return x; 32: } 33: } 34: } 35: 36: static IEnumerable<TResult> Select<TSource, TResult>(IEnumerable<TSource> source, Func<TSource, TResult> selector) { 37: foreach (var x in source) { 38: yield return selector(x); 39: } 40: } 41: } The code outputs the same result as before. On line 14 we filter our data and on line 15 we project our data. What if we wanted to be more expressive and concise? We could combine both line 14 and 15 into one line as shown below. Assuming you had to perform several operations like this on our collection, you would end up with some very unreadable code! 1: var formattedEmployees = Select(Where(employees, emp => emp.ID % 2 == 0), (emp) => 2: new EmployeeFormatted { 3: ID = emp.ID, 4: FullName = emp.LastName + ", " + emp.FirstName 5: }); A cleaner way to write this would be to give the appearance that the Select and Where methods were part of the IEnumerable<T>. This is exactly what extension methods give us. Extension methods have to be defined in a static class. Let us make the Select and Where extension methods on IEnumerable<T> 1: public static class MyExtensionMethods { 2: static IEnumerable<T> Where<T>(this IEnumerable<T> source, Func<T, bool> filter) { 3: foreach (var x in source) { 4: if (filter(x)) { 5: yield return x; 6: } 7: } 8: } 9: 10: static IEnumerable<TResult> Select<TSource, TResult>(this IEnumerable<TSource> source, Func<TSource, TResult> selector) { 11: foreach (var x in source) { 12: yield return selector(x); 13: } 14: } 15: } The creation of the extension method makes the syntax much cleaner as shown below. We can write as many extension methods as we want and keep on chaining them using this technique. 1: var formattedEmployees = employees 2: .Where(emp => emp.ID % 2 == 0) 3: .Select (emp => new EmployeeFormatted { ID = emp.ID, FullName = emp.LastName + ", " + emp.FirstName }); Making these changes and running our code produces the same result. 
1: using System; 2: using System.Collections.Generic; 3:  4: public class Program 5: { 6: [STAThread] 7: static void Main(string[] args) 8: { 9: var employees = new List<Employee> { 10: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 11: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 12: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 13: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" } 14: }; 15:  16: var formattedEmployees = employees 17: .Where(emp => emp.ID % 2 == 0) 18: .Select (emp => 19: new EmployeeFormatted { 20: ID = emp.ID, 21: FullName = emp.LastName + ", " + emp.FirstName 22: } 23: ); 24:  25: foreach (EmployeeFormatted emp in formattedEmployees) { 26: Console.WriteLine("ID {0} Full_Name {1}", 27: emp.ID, emp.FullName); 28: } 29: Console.ReadLine(); 30: } 31: } 32:  33: public static class MyExtensionMethods { 34: static IEnumerable<T> Where<T>(this IEnumerable<T> source, Func<T, bool> filter) { 35: foreach (var x in source) { 36: if (filter(x)) { 37: yield return x; 38: } 39: } 40: } 41: 42: static IEnumerable<TResult> Select<TSource, TResult>(this IEnumerable<TSource> source, Func<TSource, TResult> selector) { 43: foreach (var x in source) { 44: yield return selector(x); 45: } 46: } 47: } 48:  49: public class Employee { 50: public int ID { get; set;} 51: public string FirstName { get; set;} 52: public string LastName {get; set;} 53: public string Country { get; set; } 54: } 55:  56: public class EmployeeFormatted { 57: public int ID { get; set; } 58: public string FullName {get; set;} 59: } Let’s change our code to return a collection of anonymous types and get rid of the EmployeeFormatted type. We see that the code produces the same output. 1: using System; 2: using System.Collections.Generic; 3:  4: public class Program 5: { 6: [STAThread] 7: static void Main(string[] args) 8: { 9: var employees = new List<Employee> { 10: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 11: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 12: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 13: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" } 14: }; 15:  16: var formattedEmployees = employees 17: .Where(emp => emp.ID % 2 == 0) 18: .Select (emp => 19: new { 20: ID = emp.ID, 21: FullName = emp.LastName + ", " + emp.FirstName 22: } 23: ); 24:  25: foreach (var emp in formattedEmployees) { 26: Console.WriteLine("ID {0} Full_Name {1}", 27: emp.ID, emp.FullName); 28: } 29: Console.ReadLine(); 30: } 31: } 32:  33: public static class MyExtensionMethods { 34: public static IEnumerable<T> Where<T>(this IEnumerable<T> source, Func<T, bool> filter) { 35: foreach (var x in source) { 36: if (filter(x)) { 37: yield return x; 38: } 39: } 40: } 41: 42: public static IEnumerable<TResult> Select<TSource, TResult>(this IEnumerable<TSource> source, Func<TSource, TResult> selector) { 43: foreach (var x in source) { 44: yield return selector(x); 45: } 46: } 47: } 48:  49: public class Employee { 50: public int ID { get; set;} 51: public string FirstName { get; set;} 52: public string LastName {get; set;} 53: public string Country { get; set; } 54: } To be more expressive, C# allows us to write our extension method calls as a query expression. 
Line 16 can be rewritten as a query expression like so: 1: var formattedEmployees = from emp in employees 2: where emp.ID % 2 == 0 3: select new { 4: ID = emp.ID, 5: FullName = emp.LastName + ", " + emp.FirstName 6: }; When the compiler encounters an expression like the above, it simply rewrites it as calls to our extension methods.  So far we have been using our own extension methods. The System.Linq namespace contains several extension methods for objects that implement IEnumerable<T>. You can see a listing of these methods in the Enumerable class in the System.Linq namespace. Let’s get rid of our extension methods (which I purposefully wrote to be of the same signature as the ones in the Enumerable class) and use the ones provided in the Enumerable class. Our final code is shown below: 1: using System; 2: using System.Collections.Generic; 3: using System.Linq; //Added 4:  5: public class Program 6: { 7: [STAThread] 8: static void Main(string[] args) 9: { 10: var employees = new List<Employee> { 11: new Employee { ID = 1, FirstName = "John", LastName = "Wright", Country = "USA" }, 12: new Employee { ID = 2, FirstName = "Jim", LastName = "Ashlock", Country = "UK" }, 13: new Employee { ID = 3, FirstName = "Jane", LastName = "Jackson", Country = "CHE" }, 14: new Employee { ID = 4, FirstName = "Jill", LastName = "Anderson", Country = "AUS" } 15: }; 16:  17: var formattedEmployees = from emp in employees 18: where emp.ID % 2 == 0 19: select new { 20: ID = emp.ID, 21: FullName = emp.LastName + ", " + emp.FirstName 22: }; 23:  24: foreach (var emp in formattedEmployees) { 25: Console.WriteLine("ID {0} Full_Name {1}", 26: emp.ID, emp.FullName); 27: } 28: Console.ReadLine(); 29: } 30: } 31:  32: public class Employee { 33: public int ID { get; set;} 34: public string FirstName { get; set;} 35: public string LastName {get; set;} 36: public string Country { get; set; } 37: } 38:  39: public class EmployeeFormatted { 40: public int ID { get; set; } 41: public string FullName {get; set;} 42: } This post has given you a basic overview of how LINQ to Objects works by showing how an expression is converted to a sequence of calls to extension methods when working directly with objects. It gets more interesting when working with LINQ to SQL, where an expression tree is constructed: an in-memory data representation of the expression. The C# compiler compiles these expressions into code that builds an expression tree at runtime. The provider can then traverse the expression tree and generate the appropriate SQL query. You can read more about expression trees in this MSDN article.
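As a quick illustration of that last point, here is a minimal sketch of my own (not taken from the MSDN article): assigning the same lambda to Expression<Func<Employee, bool>> instead of Func<Employee, bool> makes the compiler build an expression tree, a data structure describing the lambda, which a provider such as LINQ to SQL can traverse and translate, and which can also be compiled back into a delegate and executed locally.

// assumes using System; using System.Linq.Expressions;
Expression<Func<Employee, bool>> filterExpr = emp => emp.ID % 2 == 0;
Console.WriteLine(filterExpr.Body);                   // prints ((emp.ID % 2) == 0)

Func<Employee, bool> compiled = filterExpr.Compile(); // turn the tree back into executable code
Console.WriteLine(compiled(new Employee { ID = 2 })); // prints True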

    Read the article

  • Windows 7 upgrade on XP and Vista

    - by icc97
    I am upgrading a Windows XP (32-bit) machine and a Windows Vista (32-bit) machine to Windows 7 (32-bit). The most important files and accounts are on the Windows XP machine. What I would like to do is the following: back up the XP machine using Windows Easy Transfer; upgrade the Windows Vista machine to a fresh install of Windows 7; restore the XP backup on the Vista machine and see if everything is working. Is this possible? I would have thought it's possible, as once the Vista machine is upgraded to Windows 7 it should be the same as if I had upgraded the XP machine, but I don't want to waste my time if it's not. Thanks

    Read the article

  • How can I add usbip modules on Redhat 6 kernel?

    - by Gk.
    I have RHEL 6 with # uname -r 2.6.32-131.0.15.el6.x86_64 I'm trying to build the usbip modules from the staging driver tree. Everything is OK and I have all the needed *.ko files, but I cannot load those modules into the running kernel. # pwd /lib/modules/2.6.32-131.0.15.el6.x86_64 # ls | grep ko usbip_common_mod.ko usbip.ko vhci-hcd.ko # modprobe usbip FATAL: Error inserting usbip (/lib/modules/2.6.32-131.0.15.el6.x86_64/usbip.ko): Required key not available # insmod usbip.ko insmod: error inserting 'usbip.ko': -1 Required key not available How can I load them? Do I need to rebuild the whole kernel? TIA, giobuon

    Read the article

  • Siteminder webagent installation failed

    - by Sreekanth
    Hi All, I have tried to install a 32-bit Netegrity (CA) SiteMinder web agent on 64-bit Windows Server 2003. The IIS server is v6.0, so it is also 64-bit. The problem is that while configuring the agent, it shows a metabase error. Previously, a 64-bit web agent was installed on the same server and then removed. IIS 6.0 was also reinstalled on the server. But after that, when I tried to install the 32-bit SiteMinder web agent, it showed the error mentioned above. Is it possible to install a 32-bit SiteMinder web agent on 64-bit IIS? Please help me as soon as possible. Thanks in advance.

    Read the article

  • Installing Xen 4.0.1 from Source on Ubuntu 10.10

    - by markus
    I'm trying to install Xen 4.0.1 from source on Ubuntu 10.10 Server Edition. I started with a clean machine and followed the instructions from https://help.ubuntu.com/community/Xen. So I installed the packages mentioned there with: sudo apt-get install gettext bin86 bcc libc6-dev-i386 iasl texinfo git When building the source with make world I get this error: + git clone -o xen -n git://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen.git linux-2.6-pvops.git.tmp Initialized empty Git repository in /home/homer/xen/linux-2.6-pvops.git.tmp/.git/ remote: Counting objects: 1855434, done. remote: Compressing objects: 100% (291939/291939), done. Receiving objects: 100% (1855434/1855434), 368.49 MiB | 11.00 MiB/s, done. remote: Total 1855434 (delta 1553214), reused 1847760 (delta 1546656) Resolving deltas: 100% (1553214/1553214), done. + cd linux-2.6-pvops.git.tmp + git checkout -b xen/stable-2.6.32.x xen/xen/stable-2.6.32.x fatal: git checkout: branch xen/stable-2.6.32.x already exists make[3]: *** [linux-2.6-pvops.git/.valid-src] Error 128 Does anybody have an idea what I can do?

    Read the article

  • What is the fastest MD5 sum calculator?

    - by netvope
    I tested the speed of md5sum on a few Ubuntu 8.04 servers: Pentium III 700 MHz: 52 MB/s; Atom 1.6 GHz, 32-bit: 119 MB/s; Core 2 (Yorkfield) 2.5 GHz, 32-bit: 194 MB/s; Core 2 (Yorkfield) 2.5 GHz, 64-bit: 222 MB/s. Then I downloaded a tool (via apt-get install) called md5deep and found that it's roughly 20% faster (as tested on the 32-bit Core 2 server). This makes me feel that the "vanilla" md5sum included in Ubuntu isn't the fastest one. Questions: Other than md5deep, are you aware of any MD5 calculators that are potentially faster than md5sum? (Answers for software from other OSes are also welcome.) If I want to compile md5sum myself, where can I download the source? What compiler options would you suggest for the Core 2 server? (Note: gcc 4.2.4 in Ubuntu 8.04 does not seem to support -march=core2.)

    Read the article

< Previous Page | 37 38 39 40 41 42 43 44 45 46 47 48  | Next Page >