Search Results

Search found 2313 results on 93 pages for 'twice'.


  • Hard drive mounted at / , duplicate mounted hard drive after using MountManager

    - by HellHarvest
I'm running 12.04 64-bit. My system is a dual boot of Ubuntu and Windows 7, and both operating systems share the drive named "Elements". "Elements" is a 1TB SATA NTFS hard drive that shows up twice in the sidebar in Nautilus. One of the icons is functional and even has the convenient "eject" icon next to it. Below is a picture of the left menu in Nautilus, with the System Monitor File Systems tab open on top of it. Can someone advise me on how to get rid of this extra icon? I think the problem is much more deep-rooted than just a GUI glitch on Nautilus' part.

The other icon does nothing but spit out the error quoted below when I click on it. This only happened AFTER I tried using Mount Manager to automate mounting the drive at start-up. I've already uninstalled Mount Manager and restarted, but the problem didn't go away. The hard drive does mount automatically now, so I guess that's cool. But now, every time I boot up and open Nautilus, BOTH of these icons appear, one of which is fictitious and useless. According to the image above and the outputs of several other commands, it appears to be mounted at /. In that case, no matter where I am in Nautilus when I click on that icon, of course it will tell me that the drive is in use by another program... Nautilus. I'm afraid of trying to unmount this hard drive (sdb6) because of where it appears to be mounted. I'm kind of a noob, and I have this gut feeling that trying to unmount a drive at / will destroy my entire file system. That fear was further strengthened by the output of "$ fsck" at the very bottom of this post.

Error shown when that second "Elements" hard drive is clicked in Nautilus:

Unable to mount Elements
Mount is denied because the NTFS volume is already exclusively opened. The volume may be already mounted, or another software may use it which could be identified for example by the help of the 'fuser' command.

It's odd to me that the error message above claims it's an NTFS volume when everything else tells me it's an ext4 volume. The actual hard drive "Elements" is in fact an NTFS volume.
Here's the output of a few commands and configuration files that may be of interest:

$ fuser -a /
/: 2120r 2159rc 2160rc 2172r 2178rc 2180rc 2188r 2191rc 2200rc 2203rc 2205rc 2206r 2211r 2212r 2214r 2220r 2228r 2234rc 2246rc 2249rc 2254rc 2260rc 2261r 2262r 2277rc 2287rc 2291rc 2311rc 2313rc 2332rc 2334rc 2339rc 2343rc 2344rc 2352rc 2372rc 2389rc 2422r 2490r 2496rc 2501rc 2566r 2573rc 2581rc 2589rc 2592r 2603r 2611rc 2613rc 2615rc 2678rc 2927r 2981r 3104rc 4156rc 4196rc 4206rc 4213rc 4240rc 4297rc 5032rc 7609r 7613r 7648r 9593rc 18829r 18833r 19776r

$ sudo df -h
Filesystem                 Size  Used Avail Use% Mounted on
/dev/sdb6                  496G  366G  106G  78% /
udev                       2.0G  4.0K  2.0G   1% /dev
tmpfs                      791M  1.5M  790M   1% /run
none                       5.0M     0  5.0M   0% /run/lock
none                       2.0G  672K  2.0G   1% /run/shm
/dev/sda1                  932G  312G  620G  34% /media/Elements
/home/solderblob/.Private  496G  366G  106G  78% /home/solderblob
/dev/sdb2                  188G  100G   88G  54% /media/A2B24EACB24E852F
/dev/sdb1                  100M   25M   76M  25% /media/System Reserved

$ sudo fdisk -l
Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00093cab

Device Boot      Start          End      Blocks  Id  System
/dev/sda1         2048   1953519615   976758784   7  HPFS/NTFS/exFAT

Disk /dev/sdb: 750.2 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders, total 1465149168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000e8d9b

Device Boot      Start          End      Blocks  Id  System
/dev/sdb1   *     2048       206847      102400   7  HPFS/NTFS/exFAT
/dev/sdb2       206848    392378768   196085960+  7  HPFS/NTFS/exFAT
/dev/sdb3    392380414   1465147391   536383489   5  Extended
/dev/sdb5   1456762880   1465147391     4192256  82  Linux swap / Solaris
/dev/sdb6    392380416   1448374271   527996928  83  Linux
/dev/sdb7   1448376320   1456758783     4191232  82  Linux swap / Solaris

Partition table entries are not in disk order

$ cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
UUID=77039a2a-83d4-47a1-8a8c-a2ec4e4dfd0e /               ext4    defaults 0 1
UUID=F6549CC4549C88CF                     /media/Elements ntfs-3g users    0 0

$ sudo blkid
/dev/sda1: LABEL="Elements" UUID="F6549CC4549C88CF" TYPE="ntfs"
/dev/sdb1: LABEL="System Reserved" UUID="5CDE130FDE12E156" TYPE="ntfs"
/dev/sdb2: UUID="A2B24EACB24E852F" TYPE="ntfs"
/dev/sdb6: UUID="77039a2a-83d4-47a1-8a8c-a2ec4e4dfd0e" TYPE="ext4"

$ sudo blkid -c /dev/null
(appears to be exactly the same as above)
/dev/sda1: LABEL="Elements" UUID="F6549CC4549C88CF" TYPE="ntfs"
/dev/sdb1: LABEL="System Reserved" UUID="5CDE130FDE12E156" TYPE="ntfs"
/dev/sdb2: UUID="A2B24EACB24E852F" TYPE="ntfs"
/dev/sdb6: UUID="77039a2a-83d4-47a1-8a8c-a2ec4e4dfd0e" TYPE="ext4"

$ mount
/dev/sdb6 on / type ext4 (rw)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
none on /sys/fs/fuse/connections type fusectl (rw)
none on /sys/kernel/debug type debugfs (rw)
none on /sys/kernel/security type securityfs (rw)
udev on /dev type devtmpfs (rw,mode=0755)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755)
none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880)
none on /run/shm type tmpfs (rw,nosuid,nodev)
/dev/sda1 on /media/Elements type fuseblk (rw,noexec,nosuid,nodev,allow_other,blksize=4096)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,noexec,nosuid,nodev)
/home/solderblob/.Private on /home/solderblob type ecryptfs (ecryptfs_check_dev_ruid,ecryptfs_cipher=aes,ecryptfs_key_bytes=16,ecryptfs_unlink_sigs,ecryptfs_sig=76a47b0175afa48d,ecryptfs_fnek_sig=391b2d8b155215f7)
gvfs-fuse-daemon on /home/solderblob/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=solderblob)
/dev/sdb2 on /media/A2B24EACB24E852F type fuseblk (rw,nosuid,nodev,allow_other,default_permissions,blksize=4096)
/dev/sdb1 on /media/System Reserved type fuseblk (rw,nosuid,nodev,allow_other,default_permissions,blksize=4096)

$ ls -a
.   A2B24EACB24E852F   Ubuntu 12.04.1 LTS amd64
..  Elements           System Reserved

$ cat /proc/mounts
rootfs / rootfs rw 0 0
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
udev /dev devtmpfs rw,relatime,size=2013000k,nr_inodes=503250,mode=755 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,nosuid,relatime,size=809872k,mode=755 0 0
/dev/disk/by-uuid/77039a2a-83d4-47a1-8a8c-a2ec4e4dfd0e / ext4 rw,relatime,user_xattr,acl,barrier=1,data=ordered 0 0
none /sys/fs/fuse/connections fusectl rw,relatime 0 0
none /sys/kernel/debug debugfs rw,relatime 0 0
none /sys/kernel/security securityfs rw,relatime 0 0
none /run/lock tmpfs rw,nosuid,nodev,noexec,relatime,size=5120k 0 0
none /run/shm tmpfs rw,nosuid,nodev,relatime 0 0
/dev/sda1 /media/Elements fuseblk rw,nosuid,nodev,noexec,relatime,user_id=0,group_id=0,allow_other,blksize=4096 0 0
binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,nosuid,nodev,noexec,relatime 0 0
/home/solderblob/.Private /home/solderblob ecryptfs rw,relatime,ecryptfs_fnek_sig=391b2d8b155215f7,ecryptfs_sig=76a47b0175afa48d,ecryptfs_cipher=aes,ecryptfs_key_bytes=16,ecryptfs_unlink_sigs 0 0
gvfs-fuse-daemon /home/solderblob/.gvfs fuse.gvfs-fuse-daemon rw,nosuid,nodev,relatime,user_id=1000,group_id=1000 0 0
/dev/sdb2 /media/A2B24EACB24E852F fuseblk rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096 0 0
/dev/sdb1 /media/System\040Reserved fuseblk rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096 0 0
gvfs-fuse-daemon /root/.gvfs fuse.gvfs-fuse-daemon rw,nosuid,nodev,relatime,user_id=0,group_id=0 0 0

$ fsck
fsck from util-linux 2.20.1
e2fsck 1.42 (29-Nov-2011)
/dev/sdb6 is mounted.
WARNING!!!  The filesystem is mounted.  If you continue you ***WILL*** cause ***SEVERE*** filesystem damage.
Do you really want to continue<n>? no
check aborted.

    Read the article

  • MySQL for Excel 1.1.3 has been released

    - by Javier Treviño
The MySQL Windows Experience Team is proud to announce the release of MySQL for Excel version 1.1.3, the latest addition to the MySQL Installer for Windows.

MySQL for Excel is an application plug-in enabling data analysts to very easily access and manipulate MySQL data within Microsoft Excel. It enables you to work directly with a MySQL database from within Microsoft Excel so you can easily do tasks such as:

- Importing MySQL data into Excel
- Exporting Excel data directly into MySQL to a new or existing table
- Editing MySQL data directly within Excel

MySQL for Excel is installed using the MySQL Installer for Windows. The MySQL Installer comes in 2 versions:

- Full (150 MB), which includes a complete set of MySQL products with their binaries included in the download
- Web (1.5 MB - a network install), which will just pull MySQL for Excel over the web and install it when run

You can download MySQL Installer from our official Downloads page at http://dev.mysql.com/downloads/installer/.

MySQL for Excel 1.1.3 introduces the following features:

- Upon saving a Workbook containing Worksheets in Edit Mode, the user is asked whether to exit Edit Mode on all Worksheets before their parent Workbook is saved, so the Worksheets are saved unprotected; otherwise the Worksheets remain protected and users can unprotect them later by retrieving the passkeys from the application log after closing MySQL for Excel.
- Added background coloring to the column names header row of an Import Data operation to have the same look as the one in an Edit Data operation (i.e. gray-ish background).
- Connection passwords can be stored securely just like MySQL Workbench does, and these secured passwords are shared with Workbench in the same way connections are.
- Changed the way the MySQL for Excel ribbon toggle button works; instead of just showing or hiding the add-in, it actually opens and closes it.
- Added a connection test before any operation against the database (schema creation, data import, append, export or editing) so the operation dialog is not shown and a friendlier error message is displayed.

Also this release contains the following bug fixes:

- Added a check on every connection test for an expired password; if the password has expired, a dialog is now shown to the user to reset the password. Bug #17354118 - DON'T HANDLE EXPIRED PASSWORDS
- Added code to escape text values to be imported to an Excel worksheet that start with an equals sign so Excel does not treat those values as formulas that will fail evaluation. This is an option turned on by default that can be turned off by users if they wish to import values to be treated as Excel formulas. Bug #17354102 - ERROR IMPORTING TEXT VALUES TO EXCEL STARTING WITH AN EQUALS SIGN
- Added code to properly check the reason for a failing connection; if it's a failing password, the user gets a dialog to retry the connection with a different password until the connection succeeds, a connection error not related to the password is thrown, or the user cancels. If the failing connection is not related to a bad password, an error message is shown to the user indicating the reason for the failure. Bug #16239007 - CONNECTIONS TO MYSQL SERVICES NOT RUNNING DISPLAY A WRONG PASSWORD ERROR MESSAGE
- Added a global options dialog that can be accessed from the Schema Selection and DB Object Selection panels where the timeouts for the connection to the DB Server and for the query commands can be changed from their default values (15 seconds for the connection timeout and 30 seconds for the query timeout). MySQL Bug #68732, Bug #17191646 - QUERY TIMEOUT CANNOT BE ADJUSTED IN MYSQL FOR EXCEL
- Changed the Varchar(65,535) data type shown in the Export Data data type combo box to Text, since the maximum row size is 65,535 bytes and any autodetected column data type with a length greater than 4,000 should actually be set to Text for the table to be created successfully. MySQL Bug #69779, Bug #17191633 - EXPORT FAILS FOR EXCEL FILES CONTAINING > 4000 CHARACTERS OF TEXT PER CELL
- Removed code that was replacing all spaces typed by the user in an overridden data type for a new column in an Export Data operation; also improved the data type detection code to flag as invalid any data types with parentheses but no text inside, or where the contents inside the parentheses are not valid for the specific data type. Bug #17260260 - EXPORT DATA SET TYPE NOT WORKING WITH MEMBER VALUES CONTAINING SPACES
- Added support for the Year data type with a length of 2 or 4 and a validation that valid values are integers between 1901-2155 (for 4-digit years) or between 0-99 (for 2-digit years). Bug #17259915 - EXPORT DATA YEAR DATA TYPE NOT RECOGNIZED IF DECLARED WITH A DISPLAY WIDTH
- Fixed code for Export Data operations where users overrode the data type for columns by typing Text in the data type combo box, which is a valid data type but was not recognized as such. Bug #17259490 - EXPORT DATA TEXT DATA TYPE NOT RECOGNIZED AS A VALID DATA TYPE
- Changed the registry location where the MySQL for Excel add-in is installed to HKEY_LOCAL_MACHINE instead of HKEY_CURRENT_USER so the add-in is accessible by all users and not only the user that ran the installation. For this to work with Excel 2007 a hotfix may be required (see http://support.microsoft.com/kb/976477). MySQL Bug #68746, Bug #16675992 - EXCEL-ADD-IN IS ONLY INSTALLED FOR USER ACCOUNT THAT THE INSTALLATION RUNS UNDER
- Added support for the Excel 2013 Single Document Interface; now that Excel 2013 creates one window per workbook, the Excel add-in maintains an independent custom task pane in each window. MySQL Bug #68792, Bug #17272087 - MYSQL FOR EXCEL SIDEBAR DOES NOT APPEAR IN EXCEL 2013 (WITH WORKAROUND)
- Included the latest MySQL Utility with a code fix for the COM exception thrown when attempting to open Workbench in the Manage Connections window. Bug #17258966 - MYSQL WORKBENCH NOT OPENED BY CLICKING MANAGE CONNECTIONS HOTLABEL
- Fixed code for Append Data operations that was not applying a calculated automatic mapping correctly when the source and target tables had a different number of columns, with some columns sharing the same name but lying at column indexes beyond the limit of the other source/target table. MySQL Bug #69220, Bug #17278349 - APPEND DOESN'T AUTOMATICALLY DETECT EXCEL COL HEADER WITH SAME NAME AS SQL FIELD
- Fixed code for Edit Data operations that was escaping special characters twice (during editing in Excel and then upon sending the query to the MySQL server). MySQL Bug #68669, Bug #17271693 - A BACKSLASH IS INSERTED BEFORE AN APOSTROPHE EDITING TABLE WITH MYSQL FOR EXCEL
- Upgraded the MySQL Utility to the latest version, which encapsulates dialog base classes and introduces more classes to handle Workbench connections, and removed these from the Excel project. Bug #16500331 - CAN'T DELETE CONNECTIONS CREATED WITHIN ADDIN

You can access the MySQL for Excel documentation at http://dev.mysql.com/doc/refman/5.6/en/mysql-for-excel.html
You can find our team's blog at http://blogs.oracle.com/MySQLOnWindows.
You can also post questions on our MySQL for Excel forum found at http://forums.mysql.com/.
Enjoy and thanks for the support!

    Read the article

  • tile_static, tile_barrier, and tiled matrix multiplication with C++ AMP

    - by Daniel Moth
We ended the previous post with a mechanical transformation of the C++ AMP matrix multiplication example to the tiled model, and in the process introduced tiled_index and tiled_grid. This is part 2.

tile_static memory

You all know that in regular CPU code, static variables have the same value regardless of which thread accesses the static variable. This is in contrast with non-static local variables, where each thread has its own copy. Back to C++ AMP, the same rules apply and each thread has its own value for local variables in your lambda, whereas all threads see the same global memory, which is the data they have access to via the array and array_view. In addition, on an accelerator like the GPU, there is a programmable cache, a third kind of memory type if you'd like to think of it that way (some call it shared memory, others call it scratchpad memory). Variables stored in that memory share the same value for every thread in the same tile. So, when you use the tiled model, you can have variables where each thread in the same tile sees the same value for that variable, a value that threads from other tiles do not share. The new storage class for local variables introduced for this purpose is called tile_static. You can only use tile_static in restrict(direct3d) functions, and only when explicitly using the tiled model. What this looks like in code should be no surprise, but here is a snippet to confirm your mental image, using a good old regular C array:

// each tile of threads has its own copy of locA,
// shared among the threads of the tile
tile_static float locA[16][16];

Note that tile_static variables are scoped and have the lifetime of the tile, and they cannot have constructors or destructors.

tile_barrier

In amp.h one of the types introduced is tile_barrier. You cannot construct this object yourself (although if you had one, you could use a copy constructor to create another one). So how do you get one of these? You get it from a tiled_index object. Beyond the 4 properties returning index objects, tiled_index has another property, barrier, that returns a tile_barrier object. The tile_barrier class exposes a single member, the method wait.

15: // Given a tiled_index object named t_idx
16: t_idx.barrier.wait();
17: // more code

…in the code above, all threads in the tile will reach line 16 before a single one progresses to line 17. Note that all threads must be able to reach the barrier, i.e. if you had branchy code such that there is a chance that not all threads could reach line 16, then the code above would be illegal.

Tiled Matrix Multiplication Example – part 2

So now that we have added the concepts of tile_static and tile_barrier to our understanding, let me obfuscate, er, rewrite the matrix multiplication code so that it takes advantage of tiling. Before you start reading this, I suggest you get a cup of your favorite non-alcoholic beverage to enjoy while you try to fully understand the code.
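For reference, here is a sketch of what the non-tiled kernel from part 1 looks like. It is reconstructed from the tiled listing below rather than copied verbatim from part 1 (the name MatrixMultiplySimple is just a placeholder), so treat it as illustrative of the pre-release syntax (restrict(direct3d), grid, writeonly) rather than the exact earlier code:

void MatrixMultiplySimple(vector<float>& vC, const vector<float>& vA, const vector<float>& vB, int M, int N, int W)
{
  array_view<const float,2> a(M, W, vA);
  array_view<const float,2> b(W, N, vB);
  array_view<writeonly<float>,2> c(M, N, vC);
  parallel_for_each(c.grid,
    [=] (index<2> idx) restrict(direct3d)
    {
      int row = idx[0]; int col = idx[1];
      float sum = 0.0f;
      for (int i = 0; i < W; i++)      // every read of a and b goes to global memory
        sum += a(row, i) * b(i, col);
      c[idx] = sum;                    // one result per thread
    });
}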
01: void MatrixMultiplyTiled(vector<float>& vC, const vector<float>& vA, const vector<float>& vB, int M, int N, int W)
02: {
03:   static const int TS = 16;
04:   array_view<const float,2> a(M, W, vA);
05:   array_view<const float,2> b(W, N, vB);
06:   array_view<writeonly<float>,2> c(M, N, vC);
07:   parallel_for_each(c.grid.tile< TS, TS >(),
08:     [=] (tiled_index< TS, TS> t_idx) restrict(direct3d)
09:     {
10:       int row = t_idx.local[0]; int col = t_idx.local[1];
11:       float sum = 0.0f;
12:       for (int i = 0; i < W; i += TS) {
13:         tile_static float locA[TS][TS], locB[TS][TS];
14:         locA[row][col] = a(t_idx.global[0], col + i);
15:         locB[row][col] = b(row + i, t_idx.global[1]);
16:         t_idx.barrier.wait();
17:         for (int k = 0; k < TS; k++)
18:           sum += locA[row][k] * locB[k][col];
19:         t_idx.barrier.wait();
20:       }
21:       c[t_idx.global] = sum;
22:     });
23: }

Notice that all the code up to line 9 is the same as per the changes we made in part 1 of the tiling introduction. If you squint, the body of the lambda itself preserves the original algorithm on lines 10, 11, and 17, 18, and 21. The difference is that those lines use the new indexing and the tile_static arrays; the tile_static arrays are declared and initialized on the brand new lines 13-15. On those lines we copy from the global memory represented by the array_view objects (a and b) to the tile_static vanilla arrays (locA and locB) – we are copying enough to fit a tile.

Because in the code that follows on line 18 we expect the data for this tile to be in tile_static storage, we need to synchronize the threads within each tile with a barrier, which we do on line 16 (to avoid accessing uninitialized memory on line 18). We also need to synchronize the threads within a tile on line 19, again to avoid the race between lines 14, 15 (retrieving the next set of data for each tile and overwriting the previous set) and line 18 (not being done processing the previous set of data). Luckily, as part of the awesome C++ AMP debugger in Visual Studio there is an option that helps you find such races, but that is a story for another blog post another time.

May I suggest reading the next section, and then coming back to re-read and walk through this code with pen and paper to really grok what is going on, if you haven't already? Cool.

Why would I introduce this tiling complexity into my code?

Funny you should ask that, I was just about to tell you. There is only one reason we tiled our extent, had to deal with finding a good tile size, had to ensure that the number of threads we schedule is correctly divisible by the tile size, had to use a tiled_index instead of a normal index, had to understand tile_barrier and figure out where we need to use it, and doubled the size of our lambda in terms of lines of code: the reason is to be able to use tile_static memory.

Why do we want to use tile_static memory? Because accessing tile_static memory is around 10 times faster than accessing the global memory on an accelerator like the GPU, e.g. in the code above, if you can get 150GB/second accessing data from the array_view a, you can get 1500GB/second accessing the tile_static array locA. And since by definition you are dealing with really large data sets, the savings really pay off. We have seen tiled implementations being twice as fast as their non-tiled counterparts. Now, some algorithms will not have performance benefits from tiling (and in fact may deteriorate), e.g.
algorithms that require you to go only once to global memory will not benefit from tiling, since with tiling you already have to fetch the data once from global memory anyway! Other algorithms may benefit, but you may decide that you are happy with your code being 150 times faster than the serial version you had, and you do not need to invest to make it 250 times faster. Also, algorithms with more than 3 dimensions, which C++ AMP supports in the non-tiled model, cannot be tiled. Also note that in future releases we may invest in making the non-tiled model, which already uses tiling under the covers, go the extra step and use tile_static memory on your behalf, but it is obviously way too early to commit to anything like that, and we certainly don't do any of that today.

Comments about this post by Daniel Moth are welcome at the original blog.

    Read the article

  • Windows 8 Launch – Why OEM and Retailers Should STFU

    - by D'Arcy Lussier
Microsoft has gotten a lot of flack for the Surface from OEM/hardware partners who create Windows-based devices and, I'm sure, to an extent from retailers who normally stock and sell Windows-based devices. I mean, we all know how this is supposed to work – Microsoft makes the OS, partners make the hardware, retailers sell the hardware. Now Microsoft is breaking the rules by not only offering their own hardware but selling it online and through their Microsoft-branded stores! The thought has been that Microsoft is trying to set a standard for the other hardware companies to reach for. Maybe. I hope, at some level, Microsoft may be covertly responding to frustrations associated with trusting the OEMs and retailers to deliver on their part of the supply chain. I know as a consumer, I'm very frustrated with the Windows 8 launch. Aside from the Surface sales, there's nothing happening at the retail level. Let me back up and explain.

Over the weekend I visited a number of stores in hopes of trying out various Windows 8 devices. Out of three retailers (Staples, Best Buy, and Future Shop), not *one* met my expectations.

Let me be honest with you, Staples: I never really have high expectations from your computer department. If I need paper or pens, whatever, but computers – you're not the top of my list for price or selection. Still, considering you flaunted Win 8 devices in your flyer, I expected *something* – some sign of effort that you took the Windows 8 launch seriously. As I entered the 1910 Pembina Highway location in Winnipeg, there was nothing – no signage, no banners – nothing that would suggest Windows 8 had even launched. I made my way to the laptops. I had to play with each machine to determine which ones were running Windows 8. There wasn't anything on the placards that made it obvious which were Windows 8 machines and which ones were Windows 7. Likewise, there was no easy way to identify the touch screen laptop (the HP model) from the others without physically touching the screen to verify. Horrible experience.

In the same mall as the Staples I mentioned above, there's a Future Shop. Surely they would be more on the ball. I walked in to the 1910 Pembina Highway location and immediately realized I would not get a better experience. Except for the sign by the front door mentioning Windows 8, there was *nothing* in the computer department pointing you to the Windows 8 devices. Like in Staples, the Win 8 laptops were mixed in with the Win 7 ones and there was nothing notable calling out which ones were running Win 8. I happened to hit up the St. James Street location today, thinking that since it's a busier store they must have more options. To their credit, they did have two staff members decked out in Windows 8 shirts who were helping a customer understand Windows 8. But otherwise, there was nothing highlighting the Windows 8 devices and they were again mixed in with the rest of the Win 7 machines.

Finally, we have the St. James Street Best Buy location here in Winnipeg. I'm sure Best Buy will have their act together. Nope, not even close. Same story as the others: minimal signage (there was a sign as you walked in with a link to their schedule of demo days), Windows 8 hardware mixed with the rest of the PC offerings, and no visible call-outs identifying which were Win 8 based. This meant that, like Future Shop and Staples, if you wanted to know which machine had Windows 8 you had to go and scrutinize each machine. Also, there was nothing identifying which ones were touch based and which were not.

Just Another Day…

To these retailers, it seemed that the Windows 8 launch was just another day, with another product to add to the showroom floor. Meanwhile, Apple has their dedicated areas *in all three stores*. It was dead simple to find where the Apple products were compared to the Windows 8 products. No wonder Microsoft is starting to push their own retail stores. No wonder Microsoft is trying to funnel orders through them instead of relying on these bloated retail big box stores who obviously can't manage a product launch.

It's Not Just The Retailers…

Remember when the Acer CEO, Founder, and President of Computer Global Operations all weighed in on how Microsoft releasing the Surface would have a "huge negative impact for the ecosystem and other brands may take a negative reaction"? Also remember the CEO stating "[making hardware] is not something you are good at so please think twice"? Well, launch day has come and gone, and so far Microsoft is the only one that delivered on having hardware available on the October 26th date. Oh sure, there are laptops running Windows 8 – but all-in-one desktop PCs? I've only seen one or two! And tablets are *non existent*, with some showing an early to late November availability on Best Buy's website! So while the retailers could be doing more to make it easier to find Windows 8 devices, the manufacturers could help by *getting devices into stores*! That's supposedly something these companies are good at, according to the Acer CEO.

So Here's What the Retailers and Manufacturers Need To Do…

Get Product Out

The pivotal timeframe will be now to the end of November. We need to start seeing all these fantastic pieces of hardware ship – including the Samsung ATIV Smart PC Pro, the Acer Iconia, the Asus TAICHI 21, and the sexy Samsung Series 7 27" desktop. It's not enough to see product announcements, we need to see actual devices.

Make It Easy For Customers To Find Win8 Devices

You want to make it easy to sell these things? Make it easy for people to find them! Have staff on hand that really know how these devices run and what can be done with them. Don't just have a single demo day, have people who can demo it every day!

Make It Easy to See the Features

There are touch screen desktops, touch screen laptops, tablets, non-touch laptops, etc. People need to easily find the features of each machine. If I'm looking for a touch laptop, I shouldn't need to sift through all the non-touch laptops to find them – at the least, I need to be able to quickly see which ones are touch. I feel silly even typing this, because this should be retail 101 and I have no retail background (but I do have an extensive background as a customer).

In Summary…

Microsoft launching the Surface and selling it through their own channels isn't slapping its OEM and retail partners in the face; it's slapping them to wake the hell up and stop coasting through Windows launch events like they don't matter. Unless I see some improvements from vendors and retailers in November, I may just hold onto my money for a Surface Pro, even if I have to wait until early 2013. Your move, OEM/Retailers.

*Update – While my experience has been in Winnipeg, similar experiences have been voiced by colleagues in Calgary and Edmonton.

    Read the article

  • Rebuilding CoasterBuzz, Part III: The architecture using the "Web stack of love"

    - by Jeff
This is the third post in a series about rebuilding one of my Web sites, which has been around for 12 years. I hope to relaunch in the next month or two. More: Part I: Evolution, and death to WCF; Part II: Hot data objects.

I finally hit a point in the re-do of CoasterBuzz where I feel like the major pieces are in place... rewritten, ported and what not, so that I can now focus on front-end design and more interesting creative problems. I've been asked on more than one occasion (OK, just twice) what's going on under the covers, so I figure this might be a good time to explain the overall architecture. As it turns out, I'm using a whole lot of the "Web stack of love," as Scott Hanselman likes to refer to it. Oh, that Hanselman.

First off, at the center of it all, is BizTalk. Just kidding. That's "enterprise architecture" humor, where every discussion starts with how they'll use BizTalk. Here are the bigger moving parts: It's fairly straightforward. A common library lives in a number of Web apps, all of which are (or will be) powered by ASP.NET MVC 4. They all talk to the same database. There is the main Web site, which also has the endpoint for the Silverlight-based Feed app. The cstr.bz site handles redirects, which are generated when news items are published and sent to Twitter. Facebook publishing is handled via the RSS Graffiti Facebook app. The API site handles requests from the Windows Phone app.

The main site depends very heavily on POP Forums, the open source, MVC-based forum I maintain. It serves a number of functions, primarily handling users. These user objects serve in non-forum roles to handle things like news and database contributions, maintaining track records (coaster nerd for "list of rides I've been on") and, perhaps most importantly, paid club memberships.

Before I get into more specifics, note that the "glue" for everything is Ninject, the dependency injection framework. I actually prefer StructureMap these days, but I started with Ninject in POP Forums a long time ago. POP Forums has a static class, PopForumsActivation, that news up an instance of the container, and you can call it from wherever. The downside is that the forums require Ninject in your MVC app as the default dependency resolver. At some point I'll decouple it, but for now it's not in the way. In the general sense, the entire set of apps follows a repository-service-controller-view pattern. Repos just do data access, service classes do business logic, controllers compose and route, views view.

The forum also provides Scoring Game functionality. The Scoring Game is a reasonably abstract framework to award users points based on certain actions, and then award achievements when a certain number of point events happen. For example, the forum already awards a point when someone plus-one's a post you made. You can set up an achievement that says, "Give the user an award when they've had 100 posts plus'd." It also does zero-point entries into the ledger, so if you make a post, you could award an achievement based on 100 posts made. Wiring the Scoring Game into CoasterBuzz functionality is just a matter of going to the Ninject container, getting an instance of the event publisher, and passing it events.

Forum adapters were introduced into POP Forums a few versions ago; they can intercept the model generated for forum topic lists and threads and designate an alternate view. These are used to make the "Day in Pictures" forum, where users can upload photos as frame-by-frame photo threads.
Another adapter adds an association UI, so users can associate specific amusement parks with their trip report posts. The Silverlight-based Feed app talks to a simple JSON endpoint in the main app. This uses an underlying library I wrote ages ago, simply called Feeds, that aggregates event information. You inherit from a base class that creates instances of a publisher interface, and then use that class to send it an event type and any number of data fields. Feeds has two publishers: One is to the database, and that's used for the endpoint that talks to the Silverlight app. The second publisher publishes to Twitter, if the event is of the type "news." The wiring is a little strange, because for the new posts and topics events, I'm actually pulling out the forum repository classes from the Ninject container and replacing them with overridden methods to publish. I should probably be doing this at the service class level, but whatever. It's my mess. cstr.bz doesn't do anything interesting. It looks up the path, and if it has a match, does a 301 redirect to the long URL. The API site just serves up JSON for the Windows Phone app. The Windows Phone app is Silverlight, of course, and there isn't much to it. It does use the control toolkit, but beyond that, it relies on a simple class that creates a Webclient and calls the server for JSON to deserialize. The same class is now used by the Feed app, which used to use WCF. Simple is better. Data access in POP Forums is all straight SQL, because a lot of it was ported from the ASP.NET version. Most CoasterBuzz data access is handled by the Entity Framework, using the code-first model. The context class in this case does a lot of work to make sure that the table and key mapping works, since much of it breaks from the normal conventions of EF. One of the more powerful things you can do with EF, once you understand the little gotchas, is split tables by row into different entities. For example, a roller coaster photo has everything in the same row, including the metadata, the thumbnail bytes and the image itself. Obviously, if you want to get a list of photos to iterate over in a view, you don't want to get the image data. The use of navigation properties makes it easier to get just what you want. The front end includes Razor views in MVC, and jQuery is used for client-side goodness. I'm also using jQuery UI in a few places, for tabs, a dialog box and autocomplete. I'm also, tentatively, using jQuery Mobile. I've already ported most forum views to Mobile, but they need some work as v1.1 isn't finished yet. I'm not sure if I'll ship CoasterBuzz with mobile views or not yet. It's on the radar, but not something in my delivery criteria. That covers all of the big frameworks in play. Next time I hope to talk more about the front-end experience, which to me is where most of the fun is these days. Hoping to launch in the next month or two. Getting tired of looking at the old site!

    Read the article

  • Finally, upgrade from Nokia X3 to Samsung Galaxy S III

This time, something slightly different but nonetheless no less interesting, hopefully. Living on a remote island like Mauritius, the ill-praised 'Cyber Island' in the Indian Ocean, has its advantages in lifestyle and a relaxed environment to live in, but in terms of technology it can be quite a nightmare. Well, I guess this might be a different story to report about... one day.

Cyber Island Mauritius

Despite its shiny advertisement as Cyber Island and business ICT hub to Africa, Mauritius is not on the latest track of available models in computer hardware or, in the context of this article, cell phones or smart-phones, or communication technology in general. Okay, I have to admit that this statement is only partly true. Money can buy, even here in Mauritius. Luckily, there are ways and ways to deal with this outcry of modern, read: technological, civilisation issues. Online shopping, you might think? Yes, for sure, until you discover during your checkout procedure that a small island in the Indian Ocean isn't a preferred destination for delivery, and the precious time you spent putting your items into your cart and feeding your personal level of anticipation gets ruined on the last stint.

Ordering from abroad saves you money

Anyway, I got in touch with my personal courier and luckily there were some extra kilos left in the luggage. First obstacle sorted, we have a Transporter! Okay, on the next occasion off to Amazon online, using their Prime service for fast delivery. The order was placed on Saturday evening and everything got delivered on Tuesday morning - nice job in less than 72 hours. Among the items of that shopping rush I ordered a shiny Samsung Galaxy S III 16GB in oceanic blue - did I mention that you hardly get a blue model in Mauritius? - for my BWE. Interesting side-notes: First, Amazon Germany dropped the prices by roughly 30% on the S3, and we got the 16GB model for less than 500 Euro (or approx. Rs. 19.500,-) compared to the usual Rs. 27.000,- on the local market. It even varies whether the local price is inclusive or exclusive of VAT (15%). Second, for a while she had been bothering me to get an iPhone and an iPad for her; fair enough, I thought, decent hardware, posh design and reliable services. Until we watched the 'magical' introduction of Samsung's new models at the IFA exhibition, she read the bashing comments on Google+ about the iPhone 5, and I gave her a brief summary of the lawsuit between Apple and Samsung in the USA. So, yes, Samsung USA is right, the next big thing is already here - literally. My BWE loves the look and touch of the Galaxy S3. And for me it was more cost-effective in terms of purchases done at the App Store, ups, Play Store.

Transfer of contacts, text messages and media files

Okay, now that the hardware is in place, how to transfer all those contacts, text messages, media files, etc. between those two devices? In the past, I used to use the Nokia Communication Suite between various models, but now for Android? Well, as usual Google and Bing are reliable friends, and among the first hits I came across an article about How to Transfer Contacts from Nokia to Android. Couldn't be easier, right? Well, sort of... my main Windows systems are already running on Windows 8, and this actually caused problems with the mobile/smart-phone device drivers. The article provides the download for an older version 1.10 which upgrades to 2.11 (at the time of writing this entry), but neither could get the Galaxy S3 and the Nokia connected. Shame on me... the product page clearly doesn't mention Windows 8 (for now), and Windows 8 isn't available to the general audience at all yet... After I took a spare machine running Windows Vista, everything went smoothly. Software installed, upgrade done, device drivers for Android automatically downloaded and installed, and the same painless routine for the Nokia part. I think I rebooted the system twice during the whole setup procedure, but hey, it was more or less a distraction while coding some stuff in ASP.NET MVC and Telerik Kendo UI. The transfer of contacts and text messages was done via Wondershare MobileGo for Android, and all media files were moved by shifting the additional microSD card from one device to the other. But even without an external SD card, it would have been very easy to copy the files via Windows Explorer directly.

Little catch and excellent service

Fine, we are almost done and the only step left is to shift the SIM card... Ouch, gotcha! The X3 uses a standard-size SIM card while the S III only accepts the microSIM form factor. What an irony, a bigger smartphone needs a smaller SIM card. Luckily, the nearest Emtel showroom is just 5 mins away up the road, and the service staff over there know their job. Finally, after roughly 10 mins of paperwork, activation and small chit-chat, the S3 came to life on the mobile network. Owning a smart-phone now and knowing that my BWE would like to interact more on social networks away from home, especially to upload pictures and provide local 'check-ins', I activated a data package for her in advance, too. Even though it was Saturday, everything was already done and ready to be used. Nice bonus: the Emtel clerk directly offered to set up the configuration for the Emtel data services; yes sure, go ahead, this saves me from searching for that in the settings. Okay, spoiler alert here, setting a static APN to access the Emtel network and the internet wouldn't have been a challenge. But hey, she already had the phone in her hands and I could keep my eyes on the children. Well done, Emtel!

Resume

Thanks to the useful software package by Wondershare, it was a hands-free experience to transfer all the data from a Nokia mobile on Symbian S60 to a Samsung Galaxy S III on Android Ice Cream Sandwich (ICS). In the future, this won't be a serious issue at all anymore thanks to synchronisation services and cloud storage. And for now, I'm only waiting for the official upgrades for Jelly Bean.

    Read the article

  • Thoughts on Build 2013

    - by D'Arcy Lussier
Originally posted on: http://geekswithblogs.net/dlussier/archive/2013/06/30/153294.aspx

And so another Build conference has come to an end. Below are my thoughts/perspectives on various aspects of the event. I'll do a separate blog post on my thoughts on the Build message for developers.

The Good

Moscone Center was a great venue for Build! Easy to get around, easy to get to, and well maintained, it was a very comfortable conference venue.

Yeah, the free swag was nice. Build has built up an expectation that attendees will always get something; it'll be interesting to see how Microsoft maintains this expectation over the next few Build events. I still maintain that free swag should never be the main reason one attends an event, and for me this was definitely just an added bonus. I'm planning on trying to use the Surface as a dedicated 2nd device at work for meetings; I'll share my experiences over the next few months.

The hackathon event was a great idea, although personally I couldn't justify spending the money on a conference registration just to spend the entire conference coding. Still, the apps that were created were really great and there was a lot of passion and excitement around the hackathon. I wonder if they couldn't have had the hackathon on the Monday/Tuesday for those that wanted to participate, so they didn't miss any of the actual conference over Wed/Thurs.

San Francisco was a great city to host Build. Getting from hotels to the conference center was very easy (well, especially for me, I was only 3 blocks away) and the city itself felt very safe. However, if I never have to fly into SFO again I'll be alright with that! There were delays going into and out of SFO, and both apparently were due to the airport itself.

The Bad

Build is one of those oddities on the conference landscape where people will pay to commit to attending an event without knowing anything about the sessions. We got our list of conference sessions when we registered on Tuesday, not before. And even then, we only got titles and not descriptions (those were eventually made available via the conference's mobile application). I get it… they're going to make announcements and they don't want to give anything away through the session titles. But honestly, there wasn't anything in the session titles that I would have considered a surprise.

Breakfasts were brutal. High-carb pastries, donuts, and muffins with fruit and hard boiled eggs does not a conference breakfast make. I can't believe that the difference between a continental breakfast per person and a hot breakfast buffet would have been a huge impact to a conference fee that was already around $2000.

The vendor area was anemic. I don't know why Microsoft forces the vendors into cookie-cutter booth areas (this year they were all made of plywood material). WPC, TechEd – booth areas there allow the vendors to be creative with their displays. Not so much for Build. Really odd was the lack of Microsoft's own representation around Bing. In the day 1 keynote Microsoft made a big deal about Bing as an API. Yet there was nobody in the vendor area set up to provide more information or have discussions about the Bing API.

The Ugly

Our name badges were NFC enabled. The purpose of this, beyond the vendors being able to scan your info, wasn't really made clear. An attendee I talked to showed how you could get a reader app on your phone so you can scan other members' cards and collect their contact info – which is a kewl idea; business cards are so 1990's. But I was *shocked* at the amount of information that was on our name badges! Here's what's displayed on our name badge:

- Name
- Company
- Twitter Handle

I'm ok with that. But here's what actually gets read:

- Name
- Company
- Address Used for Registration
- Phone Number Used for Registration

So by sharing that info with another attendee, they get way more of my info than just how to find me on Twitter! Microsoft, you need to fix this for the future. If vendors want to collect information on attendees, they should be able to collect an ID from the badge, then get a report with the corresponding records afterwards. My personal information should not be so readily available, and without my knowledge!

Final Verdict

Maybe it's my older age, maybe it's where I'm at in life with family, maybe it's where I'm at in my career, but when I consider whether a conference experience was valuable I get to the core reasons I attend: opportunities to learn, opportunities to network, opportunities to engage with Microsoft.

Opportunities to Learn: Sessions I attended were generally OK, with some really stand-out ones on Day 2. I would love to see Microsoft adopt the Dojo format for a portion of their sessions. Hands On Labs are dull, and lecture-style sessions are great for information sharing. But a guided hands-on coding session (read: Dojo) provides the best of both worlds. Given that all content is publicly available online to everyone (Build attendee or not), the value of attending the conference sessions is decreased. The value, though, is in the discussions that take place in person afterwards, which leads to…

Opportunities to Network: I enjoyed getting together with old friends and connecting with Twitter friends in person for the first time. I also had an opportunity to meet total strangers. So from a networking perspective, Build was fantastic! I still think it would have been great to have an area for ad-hoc discussions – where speakers could announce they'd be available for more questions after their sessions, or attendees who wanted to discuss a topic more in depth with other attendees could arrange space. Some people have no problem being outgoing and making these things happen, but others do not, and a structured model is more attractive.

Opportunities to Engage with Microsoft: Hit and miss on this one. Outside of the vendor area, unless you cornered or reached out to a speaker, there wasn't any defined way to connect with blue badges. And as I mentioned above, Microsoft didn't have full representation in the vendor area (no Bing).

All in all, Build was a fun party where I was informed about some new stuff and got some free swag. Was it worth the time away from home and the hit to my PD budget? I'd say somewhat. Build is a great informational conference, but I wouldn't call it a learning conference. Considering that TechEd seems to be moving to more of an IT Pro focus, independent developer conferences seem to be the best value for those looking to learn and not just be informed. With the rapid development cycle Microsoft is embracing, we're already seeing Build happen twice within a 12-month period. If that continues, the value of attending Build in person starts to diminish – especially with so much content available online. If Microsoft wants Build to be a must-attend event in the future, they need to start incorporating aspects of TechEd, past PDCs, and other conferences so those that want to leave with more than free swag have something to attract them.

    Read the article

  • NUMA-aware placement of communication variables

    - by Dave
For classic NUMA-aware programming I'm typically most concerned about simple cold, capacity and compulsory misses and whether we can satisfy the miss from locally connected memory or whether we have to pull the line from its home node over the coherent interconnect -- we'd like to minimize channel contention and conserve interconnect bandwidth. That is, for this style of programming we're quite aware of where memory is homed relative to the threads that will be accessing it. Ideally, a page is collocated on the node with the thread that's expected to most frequently access the page, as simple misses on the page can then be satisfied without resorting to transferring the line over the interconnect. The default "first touch" NUMA page placement policy tends to work reasonably well in this regard. When a virtual page is first accessed, the operating system will attempt to provision and map that virtual page to a physical page allocated from the node where the accessing thread is running. It's worth noting that the node-level memory interleaving granularity is usually a multiple of the page size, so we can say that a given page P resides on some node N. That is, the memory underlying a page resides on just one node.

But when thinking about accesses to heavily-written communication variables, we normally consider which caches the lines underlying such variables might be resident in, and in what states. We want to minimize coherence misses, cache probe activity and interconnect traffic in general. I don't usually give much thought to the location of the home NUMA node underlying such highly shared variables. On a SPARC T5440, for instance, which consists of 4 T2+ processors connected by a central coherence hub, the home node and placement of heavily accessed communication variables has very little impact on performance. The variables are frequently accessed, so they are likely in M-state in some cache, and the location of the home node is of little consequence because a requester can use cache-to-cache transfers to get the line. Or at least that's what I thought.

Recently, though, I was exploring a simple shared memory point-to-point communication model where a client writes a request into a request mailbox and then busy-waits on a response variable. It's a simple example of delegation based on message passing. The server polls the request mailbox and, having fetched a new request value, performs some operation and then writes a reply value into the response variable. As noted above, on a T5440 performance is insensitive to the placement of the communication variables -- the request and response mailbox words. But on a Sun/Oracle X4800 I noticed that was not the case, and that NUMA placement of the communication variables was actually quite important.

For background, an X4800 system consists of 8 Intel X7560 Xeons. Each package (socket) has 8 cores with 2 contexts per core, so the system is 8x8x2. Each package is also a NUMA node and has locally attached memory. Every package has 3 point-to-point QPI links for cache coherence, and the system is configured with a twisted ladder "mobius" topology. The cache coherence fabric is glueless -- there's no central arbiter or coherence hub. The maximum distance between any two nodes is just 2 hops over the QPI links. For any given node, 3 other nodes are 1 hop distant and the remaining 4 nodes are 2 hops distant.
Using a single request (client) thread and a single response (server) thread, a benchmark harness explored all permutations of NUMA placement for the two threads and the two communication variables, measuring the average round-trip-time and throughput rate between the client and server. In this benchmark the server simply acts as a simple transponder, writing the request value plus 1 back into the reply field, so there's no particular computation phase and we're only measuring communication overheads. In addition to varying the placement of communication variables over pairs of nodes, we also explored variations where both variables were placed on one page (and thus on one node) -- either on the same cache line or different cache lines -- while varying the node where the variables reside along with the placement of the threads. The key observation was that if the client and server threads were on different nodes, then the best placement of variables was to have the request variable (written by the client and read by the server) reside on the same node as the client thread, and to place the response variable (written by the server and read by the client) on the same node as the server. That is, if you have a variable that's to be written by one thread and read by another, it should be homed with the writer thread. For our simple client-server model that means using split request and response communication variables with unidirectional message flow on a given page. This can yield up to twice the throughput of less favorable placement strategies. Our X4800 uses the QPI 1.0 protocol with source-based snooping. Briefly, when node A needs to probe a cache line it fires off snoop requests to all the nodes in the system. Those recipients then forward their response not to the original requester, but to the home node H of the cache line. H waits for and collects the responses, adjudicates and resolves conflicts and ensures memory-model ordering, and then sends a definitive reply back to the original requester A. If some node B needed to transfer the line to A, it will do so by cache-to-cache transfer and let H know about the disposition of the cache line. A needs to wait for the authoritative response from H. So if a thread on node A wants to write a value to be read by a thread on node B, the latency is dependent on the distances between A, B, and H. We observe the best performance when the written-to variable is co-homed with the writer A. That is, we want H and A to be the same node, as the writer doesn't need the home to respond over the QPI link, as the writer and the home reside on the very same node. With architecturally informed placement of communication variables we eliminate at least one QPI hop from the critical path. Newer Intel processors use the QPI 1.1 coherence protocol with home-based snooping. As noted above, under source-snooping a requester broadcasts snoop requests to all nodes. Those nodes send their response to the home node of the location, which provides memory ordering, reconciles conflicts, etc., and then posts a definitive reply to the requester. In home-based snooping the snoop probe goes directly to the home node and are not broadcast. The home node can consult snoop filters -- if present -- and send out requests to retrieve the line if necessary. The 3rd party owner of the line, if any, can respond either to the home or the original requester (or even to both) according to the protocol policies. 
There are myriad variations that have been implemented, and unfortunately terminology doesn't always agree between vendors or with the academic taxonomy papers. The key point is that home-snooping enables the use of a snoop filter to reduce interconnect traffic. And while home-snooping might have a longer critical path (latency) than source-based snooping, it may also require fewer messages and less overall bandwidth. It'll be interesting to reprise these experiments on a platform with home-based snooping. While collecting data I also noticed that there are placement concerns even in the seemingly trivial case when both threads and both variables reside on a single node. Internally, the cores on each X7560 package are connected by an internal ring. (Actually there are multiple contra-rotating rings.) And the last-level on-chip cache (LLC) is partitioned into banks or slices, with each slice associated with a core on the ring topology. A hardware hash function associates each physical address with a specific home bank. Thus we face distance and topology concerns even for intra-package communications, although the latencies are nowhere near the magnitude we see inter-package. I've not seen such communication distance artifacts on the T2+, where the cache banks are connected to the cores via a high-speed crossbar instead of a ring -- communication latencies seem more regular.
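To make the recommended placement concrete, below is a minimal C sketch of the split request/response mailbox pattern described above. It is an illustrative sketch under stated assumptions, not the actual benchmark harness: it assumes a first-touch placement policy and 8 KB pages, assumes the client and server threads have already been bound to different nodes by platform-specific means (for example processor_bind(2) on Solaris or pthread_setaffinity_np() on Linux), and uses plain volatile polling in place of proper atomics and fences to keep the code short.

/* Split mailboxes: each word is padded out to its own page so that,
 * under a first-touch policy, the request page is homed on the
 * client's node and the response page on the server's node --
 * i.e. each heavily written variable lives with its writer. */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE 8192                          /* assumed page size */
typedef struct {
    volatile uint64_t word;
    char pad[PAGE - sizeof(uint64_t)];     /* keep the two words on separate pages */
} __attribute__((aligned(PAGE))) mailbox_t;  /* GCC-style alignment attribute */

static mailbox_t request;   /* written by the client, read by the server */
static mailbox_t response;  /* written by the server, read by the client */

#define ROUNDS 1000000

static void *server(void *arg)
{
    uint64_t last = 0;
    (void)arg;
    response.word = 0;                     /* server first-touches its own page */
    for (;;) {
        uint64_t req;
        while ((req = request.word) == last)
            ;                              /* poll the request mailbox */
        if (req == UINT64_MAX)
            break;                         /* shutdown sentinel */
        last = req;
        response.word = req + 1;           /* simple transponder reply */
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;
    uint64_t i;
    request.word = 0;                      /* client first-touches its own page */
    pthread_create(&tid, NULL, server, NULL);
    for (i = 1; i <= ROUNDS; i++) {
        request.word = i;                  /* send request */
        while (response.word != i + 1)
            ;                              /* busy-wait for the reply */
    }
    request.word = UINT64_MAX;             /* tell the server to exit */
    pthread_join(tid, NULL);
    printf("%d round trips completed\n", ROUNDS);
    return 0;
}

Whether each page really ends up homed on its writer's node depends on the operating system's placement policy; on Solaris, interfaces such as lgrp_home(3LGRP) and meminfo(2) can be used to check where a thread and a page actually landed, so treat the first-touch behaviour here as an assumption to verify rather than a guarantee.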

    Read the article

  • Experiences with learning Chinese

    - by Greg Low
    I've had a few friends asking me about learning Chinese and what I've found works and doesn't work. I was answering a question on a mailing list today and I thought I should post this info where it might be useful to many. The question that was initially asked was whether Rosetta Stone was useful but I've provided much more info on learning the language here. I’ve used Rosetta Stone with Chinese but it’s really hard to know whether to recommend it or not. Rosetta Stone works the same way in all languages. They show you photos and then let you both see and hear the target language and get you to work out what they’re talking about. The thinking is that that’s how children learn. However, at first, I found it very frustrating. I’d be staring at photos trying to work out what they were really trying to get at. Sometimes it’s far from obvious. I could not have survived without Google Translate open at the same time. The other weird thing is that the photos are from a mixture of countries. While that’s good in a way, it also means that they are endlessly showing pictures of something that would never happen in the target language and culture. For any language, constant interaction with a speaker of the target language is needed. Rosetta Stone has a “Studio” option. That’s the best part of the program. In my case, it lets me connect around twice a week to a live online class from Beijing. Classes usually have the teacher plus two to four students. You get some Studio access with the initial packages but need to purchase it for ongoing use. I find it very inexpensive. It seems to work out to about $70 (AUD/USD) for six months. That’s a real bargain. The other downside to Rosetta Stone is that they tend to teach very formal language, but as with other languages, that’s not how the locals speak. It might have been correct at one point but no-one actually says that. As an example, Rosetta Stone teach Gonggòng qìche (pronounced roughly like “gong gong chee chure” for bus. Most of my friends from areas like Taiwan would just say Gongche. Google Translate says Zongxiàn (pronounced somewhat like “dzong sheean”) instead. Mind you, the Rosetta Stone option isn't really as bad as "omnibus"; it's more like saying "public bus". If you say the option they provide, people would understand you. I also listen to ChinesePod in the car. They also have SpanishPod. Each podcast is about five minutes of spoken conversation. It is very good for providing current language. Another resource I use is local Meetup groups. Most cities have these and for a variety of languages. It’s way less structured (just standard conversation) but good for getting interaction. The obvious challenge for Asian languages is reading/writing. The input editors for Chinese that are part of Windows are excellent. Many of my Chinese friends speak fluently but cannot read or write. I was determined to learn to do both. For writing, I’m talking about on a computer, not with a pen. (Mind you, I can barely write English with a pen nowadays). When using Rosetta Stone, you can choose to have the Chinese words displayed in pinyin (Wo xihuan xuéxí zhongguó) or in Chinese characters (???????) or both. This year, I’ve been forcing myself to just use the Chinese characters. I use a pinyin input editor in Windows though, as it’s very fast.  (The character recognition input in the iPad is also amazing). Notice from the example that I provided above that the pronunciation of the pinyin isn’t that obvious to us at first either.  
Since changing to only using characters, I find I can now read many more Chinese characters fluently. It’s a major challenge though. I can read about 300 now, and yet you need around 2,500 to be able to read a newspaper fairly well. Tones are a major issue for some Asian languages. Mandarin has four tones (plus a neutral tone) and there is a major difference in meaning between two words that are spelled the same in pinyin but carry different tones. For example, Ma (3rd tone, 马) is a horse, Ma (1st tone, 妈) is like “mom”, and ma (neutral tone, 吗) is a question marker, and so on. Clearly you don’t want to mix these up. As in English, they also have words that do sound the same but mean different things in different contexts. What’s interesting is that even though we see two words that differ only by tone as very similar, to a native speaker, if you say the right word with the wrong tone, you might as well have said a completely different word. My wife’s dialect of Chinese has eight tones. It’s much worse. The reason I’m so keen to learn to read/write Chinese is that even though the different dialects are pronounced so differently that speakers of one dialect often cannot understand another dialect, the writing is generally the same. The only difference is that many years ago, the Chinese government created a simplified set of characters for some of the most commonly used ones. Older Chinese and most Cantonese speakers often struggle with the simplified characters. This is the simplified form of “three apples”: 三个苹果. This is the traditional form of the same words: 三個蘋果. Note that two of the characters are the same but the middle two are quite different. For most languages, the best thing is to watch current movies in the target language but to watch them with the target language as subtitles, not your native language. You want to know what they actually said, not what it roughly means (which is what the English subtitle would give you). The difficulty with Asian languages like Chinese is that you have the added challenge of understanding the subtitles when they are written in the target language. I wish there were Mandarin Chinese movies with pinyin subtitles. For learning to read characters, I also recommend HanCard on the iPad. It is targeted at the HSK language proficiency levels. (I’m intending to take the first HSK exam as soon as I’m ready.) Hope that info helps someone get started.

    Read the article

  • Top 10 Browser Productivity Tips

    - by Renso
    Originally posted on: http://geekswithblogs.net/renso/archive/2013/10/14/top-10-browser-productivity-tips.aspx You don’t have to be a geek to be a productive browser user. The tips below cover actions users perform all the time when navigating a web site, but tend to get done with long-winded mouse or keyboard actions when there are keyboard shortcuts you can use instead. Since your hands are already on the keyboard, it is almost always faster to use a keyboard shortcut for something you would usually reach for the mouse to do. For example, right-clicking on something to copy it and then doing the same to paste it is very time consuming; keyboard shortcuts have been created that simplify the task. All it takes is a few brain cells to remember them. Here are the tips, in no particular order:   Tip 1: Holding down the spacebar on your keyboard to page to the end of your web page, rather than using your mouse, is really a slow way of doing it. If you want to page one screen at a time, hit the spacebar once, and again to page again. But if you want to go all the way to the end of the web page, simply hit Ctrl+End (that is, hold down the Ctrl key and hit the End key on your keyboard). To get to the top of your web page, simply hit Ctrl+Home. Tip 2: Where are my downloads? Some folks run downloads again and again because they do not know where the last one went and they do not see the popup, or the browser note in the footer of the web page, etc. Simply hit Ctrl+J. Works in most browsers. Tip 3: Selecting a US state from a drop-down box. Don’t use the mouse; it takes just way too long to scroll. When you tab to the drop-down box or click on it with your mouse, simply hit the first character of the state and it will be selected. For Texas, for example, hit the letter “T” twice on your keyboard to get to it. The same concept can be applied to any drop-down box that is alphabetically or numerically sorted. Tip 4: Fixing spelling errors. All modern-day browsers support this now. You see the red wavy line underscoring a word: yes, it is a spelling error. How do you fix it? Don’t overtype it or try to fix it manually; first right-click on it and a list of suggestions comes up. If a good suggestion does not show up, like my name “Renso” (and you know how to spell your name, as in this example), look further down the list of options (the little popup window that appears when you right-click) and you should see an option to “Add to Dictionary”. Be warned: when you add it, it only adds it to the dictionary of the browser you’re using. If you use Google Chrome, Firefox and IE, each one will have its own list. Tip 5: So you have trouble seeing the text on the screen. Or you are looking at a photo, for example in Facebook, and you want to zoom in to read better or zoom into the photo a bit more. Hit Ctrl and plus (hold down the Ctrl key and hit the plus key; actually it’s the equals key, but it is easier to remember plus for bigger). Hit the minus key to zoom out. Now you can’t remember what the original size was, since you were so excited to hit it 20 times, or was that 21… Simply hit Ctrl+0 (that is a zero) and it will reset to the default. Tip 6: So you closed a couple of tabs in your browser. Suddenly you remember something you wanted to double-check on one of the tabs, you cannot remember the URL, and the tab is gone forever, or is it? Simply hit Ctrl+Shift+T and it will bring back your tabs one by one each time you hit the T. 
This has also been a great way for me to quickly close some tabs because I don’t want my boss to see I’m shopping, and then hit Ctrl+Shift+T to quickly get them back and complete my check-out and purchase. Or, for parents, when you walk into your daughter’s room and you see she quickly clicks and closes a window/tab in her browser. Not to worry, my little darling, daddy will Ctrl+Shift+T and see what boys on Facebook you were talking to… Tip 7: The web browser is frozen on your PC/laptop/whatever; in this example it may be your Internet Explorer browser. I don’t mention Firefox or Chrome here because it probably never happens in their world. You cannot close it; it won’t respond to anything you have done so far, except for the next step you are about to take, which is to throw your two-day-old coffee on your keyboard. This happens especially on sites that want to force you to complete a purchase order. Hit Ctrl+Alt+Del on your keyboard on any version of Windows and select Task Manager. On the first tab, the Processes tab, look for the item in question. In this example you should see Internet Explorer. Right-click it and select “End Task”. It will force the process out of memory and terminate it. You can of course do this with any program running under your account. Tip 8: This is a personal favorite of mine: selecting words in a paragraph without using the mouse. You don’t want to select one character at a time (as when you use Shift+arrow), as it can be very slow if you want to select a lot of text; you want to select whole words. Simply use Ctrl+Shift+arrow (right or left, depending on which direction you want to go). Tip 9: I was a bit reluctant to add this one, but being in the professional services industry I still come across many-a-folk that simply can’t copy-and-paste them-all text or images that reside on them screens, y’all. Ctrl+C to copy and Ctrl+V to paste. Works a lot faster than using the mouse. You may be asking, “Well, why in the devil did they not use Ctrl+P for paste?” Because that is for printing. This is of course not limited to the browser world; it applies to almost any piece of software running on PC or Mac. Go try it on an image in your browser: right-click it and select copy, open a Word document and Ctrl+V to paste the image in there. Please consider copyright laws. Tip 10: Getting rid of annoying ads. Now this only works when you load a web page, meaning when you get back to the same page later you will have to do it again, and you will need to learn a tool to do it, but it is WELL WORTH IT. For example, I use GrooveShark to listen to music but I don’t like the ads they show. Install a tool like Firebug for Firefox, or use Ctrl+Shift+I in Chrome, to bring up the developer toolbar. It shows at the bottom of the page. With Firefox, once you have installed Firebug as an add-on, a yellow bug should appear on the top right-hand side of your browser; click on it to display the developer toolbar. You will need to learn how to use it, but once you know how to select an item/section on the page (usually just right-click the ad you don’t want to see and select “Inspect Element”; the developer toolbar will appear if it is not already there), simply hit Delete and it will remove the ad from the screen. If you don’t know HTML you may need to play with it a bit, but once you understand how it works, it can open up a whole new world for you on how web pages actually work. 
If you can think of any others that have saved you a ton of time please let me know so I can add them to a top 99 list.

    Read the article

  • Solaris 11 SRU / Update relationship explained, and blackout period on delivery of new bug fixes eliminated

    - by user12244672
    Relationship between SRUs and Update releases As you may know, Support Repository Updates (SRUs) for Oracle Solaris 11 are released monthly and are available to customers with an appropriate support contract.  SRUs primarily deliver bug fixes.  They may also deliver low risk feature enhancements. Solaris Update are typically released once or twice a year, containing support for new hardware, new software feature enhancements, and all bug fixes available at the time the Update content was finalized.  They also contain a significant number of new bug fixes, for issues found internally in Oracle and complex customer bug fixes which  require significant "soak" time to ensure their efficacy prior to release. Changes to SRU and Update Naming Conventions We're changing the naming convention of Update releases from a date based format such as Oracle Solaris 10 8/11 to a simpler "dot" version numbering, e.g. Oracle Solaris 11.1. Oracle Solaris 11 11/11 (i.e. the initial Oracle Solaris 11 release) may be referred to as 11.0. SRUs will simply be named as "dot.dot" releases, e.g. Oracle Solaris 11.1.1, for SRU1 after Oracle Solaris 11.1. Many Oracle products and infrastructure tools such as BugDB and MOS are tailored towards this "dot.dot" style of release naming, so these name changes align Oracle Solaris with these conventions. No Blackout Periods on Bug Fix Releases The Oracle Solaris 11 release process has been enhanced to eliminate blackout periods on the delivery of new bug fixes to customers. Previously, Oracle Solaris Updates were a superset of all preceding bug fix deliveries.  This made for a very simple update message - that which releases later is always a superset of that which was delivered previously. However, it had a downside.  Once the contents of an Update release were frozen prior to release, the release of new bug fixes for customer issues was also frozen to maintain the Update's superset relationship. Since the amount of change allowed into the final internal builds of an Update release is reduced to mitigate risk, this throttling back also impacted the release of new bug fixes to customers. This meant that there was effectively a 6 to 9 week hiatus on the release of new bug fixes prior to the release of each Update.  That wasn't good for customers awaiting critical bug fixes. We've eliminated this hiatus on the delivery of new bug fixes in Oracle Solaris 11 by allowing new bug fixes to continue to be released in SRUs even after the contents of the next Update release have been frozen. The release of SRUs will remain contiguous, with the first SRU released after the Update release effectively being a superset of both the the Update release and all preceding SRUs*.  That is, later SRUs are supersets of the content of previous SRUs. Therefore, the progression path from the final SRUs prior to the Update release is to the first SRU after the Update release, rather than to the Update release itself. The timeline / logical sequence of releases can be shown as follows: Updates: 11.0                                                11.1                               11.2     etc.                  \                                                         \                                    \ SRUs:       11.0.1, 11.0.2,...,11.0.12, 11.0.13, 11.1.1, 11.1.2,...,11.1.x, 11.2.1, etc. For example, for systems with Oracle Solaris 11 11/11 SRU12.4 or later installed, the recommended update path is to Oracle Solaris 11.1.1 (i.e. 
SRU1 after Solaris 11.1) or later rather than to the Solaris 11.1 release itself.  This will ensure no bug fixes are "lost" during the update. If for any reason you do wish to update from SRU12.4 or later to the 11.1 release itself - for example to update a test system - the instructions to do so are in the SRU12.4 README, https://updates.oracle.com/Orion/Services/download?type=readme&aru=15564533 For systems with Oracle Solaris 11 11/11 SRU11.4 or earlier installed, customers can update to either the 11.1 release or any 11.1 SRU as both will be supersets of their current version. Please do read the README of the SRU you are updating to, as it will contain important installation instructions which will save you time and effort. *Nerdy details: SRUs only contain the latest change delta relative to the Update on which they are based.  Their dependencies will, however, effectively pull in the Update content.  Customers maintaining a local Repo (e.g. behind their firewall), need to add both the 11.1 content and the relevant SRU content to their Repo, to enable the SRU's dependencies to be resolved.  Both will be available from the standard Support Repo and from MOS.  This is no different to existing SRUs for Oracle Solaris 11.0, whereby you may often get away with using just the SRU content to update, but the original 11.0 content may be needed in the Repo to resolve dependencies.

    Read the article

  • Real tortoises keep it slow and steady. How about the backups?

    - by Maria Zakourdaev
      … Four tortoises were playing in the backyard when they decided they needed hibiscus flower snacks. They pooled their money and sent the smallest tortoise out to fetch the snacks. Two days passed and there was no sign of the tortoise. "You know, she is taking a lot of time", said one of the tortoises. A little voice from just out side the fence said, "If you are going to talk that way about me I won't go." Is it too much to request from the quite expensive 3rd party backup tool to be a way faster than the SQL server native backup? Or at least save a respectable amount of storage by producing a really smaller backup files?  By saying “really smaller”, I mean at least getting a file in half size. After Googling the internet in an attempt to understand what other “sql people” are using for database backups, I see that most people are using one of three tools which are the main players in SQL backup area:  LiteSpeed by Quest SQL Backup by Red Gate SQL Safe by Idera The feedbacks about those tools are truly emotional and happy. However, while reading the forums and blogs I have wondered, is it possible that many are accustomed to using the above tools since SQL 2000 and 2005.  This can easily be understood due to the fact that a 300GB database backup for instance, using regular a SQL 2005 backup statement would have run for about 3 hours and have produced ~150GB file (depending on the content, of course).  Then you take a 3rd party tool which performs the same backup in 30 minutes resulting in a 30GB file leaving you speechless, you run to management persuading them to buy it due to the fact that it is definitely worth the price. In addition to the increased speed and disk space savings you would also get backup file encryption and virtual restore -  features that are still missing from the SQL server. But in case you, as well as me, don’t need these additional features and only want a tool that performs a full backup MUCH faster AND produces a far smaller backup file (like the gain you observed back in SQL 2005 days) you will be quite disappointed. SQL Server backup compression feature has totally changed the market picture. Medium size database. Take a look at the table below, check out how my SQL server 2008 R2 compares to other tools when backing up a 300GB database. It appears that when talking about the backup speed, SQL 2008 R2 compresses and performs backup in similar overall times as all three other tools. 3rd party tools maximum compression level takes twice longer. Backup file gain is not that impressive, except the highest compression levels but the price that you pay is very high cpu load and much longer time. Only SQL Safe by Idera was quite fast with it’s maximum compression level but most of the run time have used 95% cpu on the server. Note that I have used two types of destination storage, SATA 11 disks and FC 53 disks and, obviously, on faster storage have got my backup ready in half time. Looking at the above results, should we spend money, bother with another layer of complexity and software middle-man for the medium sized databases? I’m definitely not going to do so.  Very large database As a next phase of this benchmark, I have moved to a 6 terabyte database which was actually my main backup target. Note, how multiple files usage enables the SQL Server backup operation to use parallel I/O and remarkably increases it’s speed, especially when the backup device is heavily striped. 
SQL Server supports a maximum of 64 backup devices for a single backup operation, but the most speed is gained when using one file per CPU, in the case above 8 files for a 2 quad-core CPU server. The impact of additional files is minimal.  However, SQLsafe doesn’t show any speed improvement between 4 files and 8 files. Of course, with such huge databases every half percent of compression translates into noticeable numbers. Saving almost 470GB of space may turn the backup tool into quite a valuable purchase. Still, the backup speed and the high CPU load are the variables that should be taken into consideration. As for us, the backup speed is more critical than the storage and we cannot allow a production server to sustain 95% CPU for such a long time. Bottom line, 3rd party backup tool developers, we are waiting for some breakthrough release. There are a few unanswered questions, like the restore speed comparison between different tools and the impact of multiple backup files on the restore operation. Stay tuned for the next benchmarks.    Benchmark server: SQL Server 2008 R2 SP1, 2 quad-core CPUs. Database location: NetApp FC 15K aggregate, 53 discs. Backup statements: No matter how good that UI is, we need to run the backup tasks from inside SQL Server Agent to make sure they are covered by our monitoring systems. I have used the extended stored procedures (command line execution is also an option; I haven’t noticed any impact on the backup performance).
Native SQL backup: backup database <DBNAME> to disk= '\\<networkpath>\par1.bak', disk= '\\<networkpath>\par2.bak', disk= '\\<networkpath>\par3.bak' with format, compression
LiteSpeed: EXECUTE master.dbo.xp_backup_database @database = N'<DBName>', @backupname= N'<DBName> full backup', @desc = N'Test', @compressionlevel=8, @filename= N'\\<networkpath>\par1.bak', @filename= N'\\<networkpath>\par2.bak', @filename= N'\\<networkpath>\par3.bak', @init = 1
SQL Backup (Red Gate): EXECUTE master.dbo.sqlbackup '-SQL "BACKUP DATABASE <DBNAME> TO DISK= ''\\<networkpath>\par1.sqb'', DISK= ''\\<networkpath>\par2.sqb'', DISK= ''\\<networkpath>\par3.sqb'' WITH DISKRETRYINTERVAL = 30, DISKRETRYCOUNT = 10, COMPRESSION = 4, INIT"'
SQL safe (Idera): EXECUTE master.dbo.xp_ss_backup @database = 'UCMSDB', @filename = '\\<networkpath>\par1.bak', @backuptype = 'Full', @compressionlevel = 4, @backupfile = '\\<networkpath>\par2.bak', @backupfile = '\\<networkpath>\par3.bak'
If you still insist on using 3rd party tools for the backups in your production environment with the maximum compression level, you will definitely need to consider limiting CPU usage, which will increase the backup operation time even more: RedGate: use the THREADPRIORITY option (values 0 to 6); LiteSpeed: use @throttle (a percentage, like 70%); SQL safe: the only thing I have found was the @Threads option.   Yours, Maria

    Read the article

  • Projected Results

    - by Sylvie MacKenzie, PMP
    Excerpt from PROFIT - ORACLE - by Monica Mehta Yasser Mahmud has seen a revolution in project management over the past decade. During that time, the former Primavera product strategist (who joined Oracle when his company was acquired in 2008) has not only observed a transformation in the way IT systems support corporate projects but the role project portfolio management (PPM) plays in the enterprise. “15 years ago project management was the domain of project management office (PMO),” Mahmud recalls of earlier days. “But over the course of the past decade, we've seen it transform into a mission critical enterprise discipline, that has made Primavera indispensable in the board room. Now, as a senior manager, a board member, or a C-level executive you have direct and complete visibility into what’s kind of going on in the organization—at a level of detail that you're going to consume that information.” Now serving as Oracle’s vice president of product strategy and industry marketing, Mahmud shares his thoughts on how Oracle’s Primavera solutions have evolved and how best-in-class project portfolio management systems can help businesses stay competitive. Profit: What do you feel are the market dynamics that are changing project management today? Mahmud: First, the data explosion. We're generating data at twice the rate at which we can actually store it. The same concept applies for project-intensive organizations. A lot of data is gathered, but what are we really doing with it? Are we turning data into insight? Are we using that insight and turning it into foresight with analytics tools? This is a key driver that will separate the very good companies—the very competitive companies—from those that are not as competitive. Another trend is centered on the explosion of mobile computing. By the year 2013, an estimated 35 percent of the world’s workforce is going to be mobile. That’s one billion people. So the question is not if you're going to go mobile, it’s how fast you are going to go mobile. What kind of impact does that have on how the workforce participates in projects? What worked ten to fifteen years ago is not going to work today. It requires a real rethink around the interfaces and how data is actually presented. Profit: What is the role of project management in this new landscape? Mahmud: We recently conducted a PPM study with the Economist Intelligence Unit centered to determine how important project management is considered within organizations. Our target was primarily CFOs, CIOs, and senior managers and we discovered that while 95 percent of participants believed it critical to their business, only six percent were confident that projects were delivered on time and on budget. That’s a huge gap. Most organizations are looking for efficiency, especially in these volatile financial times. But senior management can’t keep track of every project in a large organization. As a result, executives are attempting to inventory the work being conducted under their watch. What is often needed is a very high-level assessment conducted at the board level to say, “Here are the 50 initiatives that we have underway. How do they line up with our strategic drivers?” This line of questioning can provide early warning that work and strategy are out of alignment; finding the gap between what the business needs to do and the actual performance scorecard. That’s low-hanging fruit for any executive looking to increase efficiency and save money. 
But it can only be obtained through proper assessment of existing projects—and you need a project system of record to get that done. Over the next decade or so, project management is going to transform into holistic work management. Business leaders will want make sure key projects align with corporate strategy, but also the ability to drill down into daily activity and smaller projects to make sure they line up as well. Keeping employees from working on tasks—even for a few hours—that don’t line up with corporate goals will, in many ways, become a competitive differentiator. Profit: How do all of these market challenges and shifting trends impact Oracle’s Primavera solutions and meeting customers’ needs? Mahmud: For Primavera, it’s a transformation from being a project management application to a PPM system in the enterprise. Also making that system a mission-critical application by connecting to other key applications within the ecosystem, such as the enterprise resource planning (ERP), supply chain, and CRM systems. Analytics have also become a huge component. Business analytics have made Oracle’s Primavera applications pertinent in the boardroom. Now, as a senior manager, a board member, a CXO, CIO, or CEO, you have direct visibility into what’s going on in the organization at a level that you're able to consume that information. In addition, all of this information pairs up really well with your financials and other data. Certainly, when you're an Oracle shop, you have that visibility that you didn’t have before from a project execution perspective. Profit: What new strategies and tools are being implemented to create a more efficient workplace for users? Mahmud: We believe very strongly that just because you call something an enterprise project portfolio management system doesn’t make it so—you have to get people to want to participate in the system. This can’t be mandated down from the top. It simply doesn’t work that way. A truly adoptable solution is one that makes it super easy for all types users to participate, by providing them interfaces where they live. Keeping that in mind, a major area of development has been alternative user interfaces. This is increasingly resulting in the creation of lighter weight, targeted interfaces such as iOS applications, and smartphones interfaces such as for iPhone and Android platform. Profit: How does this translate into the development of Oracle’s Primavera solutions? Mahmud: Let me give you a few examples. We recently announced the launch of our Primavera P6 Team Member application, which is a native iOS application for the iPhone. This interface makes it easier for team members to do their jobs quickly and effectively. Similarly, we introduced the Primavera analytics application, which can be consumed via mobile devices, and when married with Oracle Spatial capabilities, users can get a geographical view of what’s going on and which projects are occurring in various locations around the world. Lastly, we introduced advanced email integration that allows project team members to status work via E-mail. This functionality leverages the fact that users are in E-mail system throughout the day and allows them to status their work without the need to launch the Primavera application. It comes back to a mantra: provide as many alternative user interfaces as possible, so you can give people the ability to work, to participate, to raise issues, to create projects, in the places where they live. 
Do it in such a way that it’s non-intrusive, do it in such a way that it’s easy and intuitive and they can get it done in a short amount of time. If you do that, workers can get back to doing what they're actually getting paid for.

    Read the article

  • 5 Lessons learnt in localization / multi language support in WPF

    - by MarkPearl
    For the last few months I have been secretly working away at the second version of an application that we initially released a few years ago. It’s called MaxCut and it is a free panel/cut optimizer for the woodwork, glass and metal industry. One of the motivations for writing MaxCut was to get an end to end experience in developing an application for general consumption. From the early days of v1 of MaxCut I would get the odd email thanking me for the software and then listing a few suggestions on how to improve it. Two of the most dominant suggestions that we received were… Support for imperial measurements (the original program only supported the metric system) Multi language support (we had someone who volunteered to translate the program into Japanese for us). I am not going to dive into the Imperial to Metric support in todays blog post, but I would like to cover a few brief lessons we learned in adding support for multi-language functionality in the software. I have sectioned them below under different lessons. Lesson 1 – Build multi-language support in from the start So the first lesson I learnt was if you know you are going to do multi language support – build it in from the very beginning! One of the power points of WPF/Silverlight is data binding in XAML and so while it wasn’t to painful to retro fit multi language support into the programing, it was still time consuming and a bit tedious to go through mounds and mounds of views and would have been a minor job to have implemented this while the form was being designed. Lesson 2 – Accommodate for varying word lengths using Grids The next lesson was a little harder to learn and was learnt a bit further down the road in the development cycle. We developed everything in English, assuming that other languages would have similar character length words for equivalent meanings… don’t!. A word that is short in your language may be of varying character lengths in other languages. Some language like Dutch and German allow for concatenation of nouns which has the potential to create really long words. We picked up a few places where our views had been structured incorrectly so that if a word was to long it would get clipped off or cut out. To get around this we began using the WPF grid extensively with column widths that would automatically expand if they needed to. Generally speaking the grid replacement got round this hurdle, and if in future you have a choice between a stack panel or a grid – think twice before going for the easier option… often the grid will be a bit more work to setup, but will be more flexible. Lesson 3 – Separate the separators Our initial run through moving the words to a resource dictionary led us to make what I thought was one potential mistake. If we had a label like the following… “length : “ In the resource dictionary we put it as a single entry. This is fine until you start using a word more than once. For instance in our scenario we used the word “length’ frequently. with different variations of the word with grammar and separators included in the resource we ended up having what I would consider a bloated dictionary. When we removed the separators from the words and put them as their own resources we saw a dramatic reduction in dictionary size… so something that looked like this… “length : “ “length. 
“ “length?” was reduced to… “length” “:” “?” “.” While this may not seem like a reduction at first glance, consider that the separators “:?.” are used everywhere, and suddenly you see a real reduction in bloat. Lesson 4 – Centralize the Language Dictionary This lesson was learnt at the very end of the project, after we already had a release candidate out in the wild. Because our translations would be done on a volunteer basis and remotely, we wanted it to be really simple for someone to translate our program into another language. As a common design practice we had tiered the application so that we had a business logic layer, a UI layer, etc. The problem was that several of these layers had resource files specific to that layer. What this resulted in was us having multiple resource files that we would need to send to our translators. To add to our problems, some of the wordings were duplicated in different resource files, which resulted in additional frustration for our translators as they felt they were duplicating work. Eventually the workaround was to make a separate project in VS2010 with just the language translations. We then exposed the dictionary as public within this project and added it as a reference to the other projects within the solution. This solved our problem, as we now had a central dictionary and could remove any duplications. Lesson 5 – Make a dummy translation file to test that you haven’t missed anything The final lesson learnt about multi-language support in WPF: when checking whether you have forgotten to translate anything in the inline code, make a test resource file with dummy data. Ideally you want the data for each word to be identical. In our instance we made one which had all the resource key values pointing to a value of “test”. This allowed us to point the language file to our test resource file and very quickly browse through the program to see if we had missed any linking. The alternative to this approach is to have two language files and swap between the two while running the program to make sure that you haven’t missed anything, but the downside of the dual-language-file approach is that it is a lot harder to spot a mistake if everything is different: almost like playing Where’s Wally / Waldo. It is much easier to spot variance in uniformity, meaning that when you put the “test” keyword for everything, anything that didn’t say “test” stuck out like a sore thumb. So these are my top five lessons learnt on implementing multi-language support in WPF. Feel free to make any suggestions in the comments section if you feel maybe something is more important than one of these, or if I got it wrong!

    Read the article

  • MySQL Utility Users' Console Overview

    - by rudrap
    MySQL Utility Users' Console (mysqluc): The MySQL Utilities Users' Console is designed to make using the utilities easier via a dedicated console. It helps us to use the utilities without worrying about the python and utility paths. Why do we need a special console? - It does provide a unique shell environment with command completion, help for each utility, user defined variables, and type completion for options. - You no longer have to type out the entire name of the utility. - You don't need to remember the name of a database utility you want to use. - You can define variables and reuse them in your utility commands. - It is possible to run utility command along with mysqluc and come out of the mysqluc console. Console commands: mysqluc> help Command Description ----------------------           --------------------------------------------------- help utilities                     Display list of all utilities supported. help <utility>                  Display help for a specific utility. help or help commands   Show this list. exit or quit                       Exit the console. set <variable>=<value>  Store a variable for recall in commands. show options                   Display list of options specified by the user on launch. show variables                 Display list of variables. <ENTER>                       Press ENTER to execute command. <ESCAPE>                     Press ESCAPE to clear the command entry. <DOWN>                       Press DOWN to retrieve the previous command. <UP>                               Press UP to retrieve the next command in history. <TAB>                            Press TAB for type completion of utility, option,or variable names. <TAB><TAB>                Press TAB twice for list of matching type completion (context sensitive). How do I use it? Pre-requisites: - Download the latest version of MySQL Workbench. - Mysql Servers are running. - Your Pythonpath is set. (e.g. Export PYTHONPATH=/...../mysql-utilities/) Check the Version of mysqluc Utility: /usr/bin/python mysqluc.py –version It should display something like this MySQL Utilities mysqluc.py version 1.1.0 - MySQL Workbench Distribution 5.2.44 Copyright (c) 2010, 2012 Oracle and/or its affiliates. All rights reserved. This program is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE, to the extent permitted by law. 
Use of TAB to get the current utilities: mysqluc> mysqldb<TAB><TAB> Utility Description -------------        ------------------------------------------------------------ mysqldbcopy      copy databases from one server to another mysqldbexport    export metadata and data from databases mysqldbimport    import metadata and data from files mysqluc> mysqldbcopy –source=$se<TAB> Variable Value -------- ---------------------------------------------------------------------- server1 root@localhost:3306 server2 root@localhost:3307 you can see the variables starting with se and then decide which to use Run a utility via the console: /usr/bin/python mysqluc.py -e "mysqldbcopy --source=root@localhost:3306 --destination=root@localhost:3307 dbname" Get help for utilities in the console: mysqluc> help utilities Display help for a utility mysqluc> help mysqldbcopy Details about mysqldbcopy and its options set variables and use them in commands: mysqluc> set server1 = root@localhost:3306 mysqluc>show variables Variable Value -------- ---------------------------------------------------------------------- server1    root@localhost:3306 server2    root@localhost:3307 mysqluc> mysqldbcopy –source=$server1 –destination=$server2 dbname <Enter> Mysqldbcopy utility output will display. mysqluc>show options Display list of options specified by the user mysqluc SERVER=root@host123 VAR_A=57 -e "show variables" Variable Value -------- ----------------------------------------------------------------- SERVER root@host123 VAR_A 57 Finding option names for an Utility: mysqluc> mysqlserverclone --n Option Description ------------------- --------------------------------------------------------- --new-data=NEW_DATA the full path to the location of the data directory for the new instance --new-port=NEW_PORT the new port for the new instance - default=3307 --new-id=NEW_ID the server_id for the new instance - default=2 Limitations: User defined variables have a lifetime of the console run time.

    Read the article

  • How to trace a function array argument in DTrace

    - by uejio
    I still use dtrace just about every day in my job and found that I had to print an argument to a function which was an array of strings.  The array was variable length up to about 10 items.  I'm not sure if the is the right way to do it, but it seems to work and is not too painful if the array size is small.Here's an example.  Suppose in your application, you have the following function, where n is number of item in the array s.void arraytest(int n, char **s){    /* Loop thru s[0] to s[n-1] */}How do you use DTrace to print out the values of s[i] or of s[0] to s[n-1]?  DTrace does not have if-then blocks or for loops, so you can't do something like:    for i=0; i<arg0; i++        trace arg1[i]; It turns out that you can use probe ordering as a kind of iterator. Probes with the same name will fire in the order that they appear in the script, so I can save the value of "n" in the first probe and then use it as part of the predicate of the next probe to determine if the other probe should fire or not.  So the first probe for tracing the arraytest function is:pid$target::arraytest:entry{    self->n = arg0;}Then, if I want to print out the first few items of the array, I first check the value of n.  If it's greater than the index that I want to print out, then I can print that index.  For example, if I want to print out the 3rd element of the array, I would do something like:pid$target::arraytest:entry/self->n > 2/{    printf("%s",stringof(arg1 + 2 * sizeof(pointer)));}Actually, that doesn't quite work because arg1 is a pointer to an array of pointers and needs to be copied twice from the user process space to the kernel space (which is where dtrace is). Also, the sizeof(char *) is 8, but for some reason, I have to use 4 which is the sizeof(uint32_t). (I still don't know how that works.)  So, the script that prints the 3rd element of the array should look like:pid$target::arraytest:entry{    /* first, save the size of the array so that we don't get            invalid address errors when indexing arg1+n. */    self->n = arg0;}pid$target::arraytest:entry/self->n > 2/{    /* print the 3rd element (index = 2) of the second arg. */    i = 2;    size = 4;    self->a_t = copyin(arg1+size*i,size);    printf("%s: a[%d]=%s",probefunc,i,copyinstr(*(uint32_t *)self->a_t));}If your array is large, then it's quite painful since you have to write one probe for every array index.  For example, here's the full script for printing the first 5 elements of the array:#!/usr/sbin/dtrace -spid$target::arraytest:entry{        /* first, save the size of the array so that we don't get           invalid address errors when indexing arg1+n. 
*/        self->n = arg0;}pid$target::arraytest:entry/self->n > 0/{        i = 0;        size = sizeof(uint32_t);        self->a_t = copyin(arg1+size*i,size);        printf("%s: a[%d]=%s",probefunc,i,copyinstr(*(uint32_t *)self->a_t));}pid$target::arraytest:entry/self->n > 1/{        i = 1;        size = sizeof(uint32_t);        self->a_t = copyin(arg1+size*i,size);        printf("%s: a[%d]=%s",probefunc,i,copyinstr(*(uint32_t *)self->a_t));}pid$target::arraytest:entry/self->n > 2/{        i = 2;        size = sizeof(uint32_t);        self->a_t = copyin(arg1+size*i,size);        printf("%s: a[%d]=%s",probefunc,i,copyinstr(*(uint32_t *)self->a_t));}pid$target::arraytest:entry/self->n > 3/{        i = 3;        size = sizeof(uint32_t);        self->a_t = copyin(arg1+size*i,size);        printf("%s: a[%d]=%s",probefunc,i,copyinstr(*(uint32_t *)self->a_t));}pid$target::arraytest:entry/self->n > 4/{        i = 4;        size = sizeof(uint32_t);        self->a_t = copyin(arg1+size*i,size);        printf("%s: a[%d]=%s",probefunc,i,copyinstr(*(uint32_t *)self->a_t));} If the array is large, then your script will also have to be very long to print out all values of the array.
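If you want to try the script out, you need a target binary that actually calls arraytest(). The original post only gives the function signature, so here is a hypothetical C test harness of my own (the file name, the sample strings, and the dtrace -c invocation are assumptions, not part of the post). Building it as a 32-bit binary would be consistent with the 4-byte pointer stride used in the D script above, since pointers in a 32-bit process are 4 bytes even though sizeof(char *) evaluates to 8 in the 64-bit DTrace context; treat that explanation as my reading, not the author's.

/* arraytest.c -- hypothetical target for the D script above.
 * Compile without optimization so arraytest() is not inlined,
 * e.g. cc -m32 -o arraytest arraytest.c, then run it under
 * dtrace with:  dtrace -s arraytest.d -c ./arraytest */
#include <stdio.h>
#include <unistd.h>

void arraytest(int n, char **s)
{
    int i;
    /* Loop thru s[0] to s[n-1], as in the original example */
    for (i = 0; i < n; i++)
        printf("arraytest: s[%d] = %s\n", i, s[i]);
}

int main(void)
{
    char *words[] = { "alpha", "bravo", "charlie", "delta", "echo" };
    int iter;
    for (iter = 0; iter < 10; iter++) {
        arraytest(5, words);
        sleep(1);    /* slow the loop down so the probe firings are easy to watch */
    }
    return 0;
}

With five strings in the array, each of the five predicated probe clauses in the script above should fire on every call and print a[0] through a[4].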

    Read the article

  • Application Event Log keeps getting corrupted

    - by yakatz
    I recently asked about repairing a corrupt event log, because it seemed to be a one-off event. The event log has since exhibited the same behavior 3 times. We have been trying to find patterns, but so far we have found nothing. The server runs several ASP.NET applications and three scheduled tasks written in .NET. The last modified date of the event log once happened to be the same time as one of the scheduled tasks, but the others have not been. Any suggestions of where to look next or a way we can get any information out of a corrupt evtx file? The server is running critical e-commerce applications, so we want to keep the number of restarts required to a minimum. Edit: I ran DUMPEL and got very strange results. 1/9/2012 4:14:05 PM 1 100 1000 Application Error N/A SERVERNAME Faulting application name: w3wp.exe, version: 7.5.7601.17514, time stamp: 0x4ce7a5f8 Faulting module name: ntdll.dll, version: 6.1.7601.17514, time stamp: 0x4ce7ba58 Exception code: 0xc0000374 Fault offset: 0x000ce653 Faulting process id: 0x1070 Faulting application start time: 0x01cccf1386d30991 Faulting application path: C:\Windows\SysWOW64\inetsrv\w3wp.exe Faulting module path: C:\Windows\SysWOW64\ntdll.dll Report Id: dbf4f691-3b06-11e1-9025-005056a602e6 1/9/2012 4:14:07 PM 4 0 1001 Windows Error Reporting N/A SERVERNAME Fault bucket , type 0 Event Name: APPCRASH Response: Not available Cab Id: 0 Problem signature: P1: w3wp.exe P2: 7.5.7601.17514 P3: 4ce7a5f8 P4: StackHash_79d9 P5: 6.1.7601.17514 P6: 4ce7ba58 P7: c0000374 P8: 000ce653 P9: P10: Attached files: C:\Windows\Temp\WER975.tmp.appcompat.txt C:\Windows\Temp\WERA03.tmp.WERInternalMetadata.xml C:\Windows\Temp\WERA13.tmp.hdmp C:\Windows\Temp\WERD21.tmp.mdmp These files may be available here: C:\ProgramData\Microsoft\Windows\WER\ReportQueue\AppCrash_w3wp.exe_cd7d09dfc84119d82a2ac6a789038bd5661acfb_cab_128f0e67 Analysis symbol: Rechecking for solution: 0 Report Id: dbf4f691-3b06-11e1-9025-005056a602e6 Report Status: 4 1/9/2012 4:14:07 PM 4 0 1001 Windows Error Reporting N/A SERVERNAME Fault bucket , type 0 Event Name: APPCRASH Response: Not available Cab Id: 0 Problem signature: P1: w3wp.exe P2: 7.5.7601.17514 P3: 4ce7a5f8 P4: StackHash_79d9 P5: 6.1.7601.17514 P6: 4ce7ba58 P7: c0000374 P8: 000ce653 P9: P10: Attached files: C:\Windows\Temp\WER975.tmp.appcompat.txt C:\Windows\Temp\WERA03.tmp.WERInternalMetadata.xml C:\Windows\Temp\WERA13.tmp.hdmp C:\Windows\Temp\WERD21.tmp.mdmp These files may be available here: C:\ProgramData\Microsoft\Windows\WER\ReportQueue\AppCrash_w3wp.exe_cd7d09dfc84119d82a2ac6a789038bd5661acfb_cab_128f0e67 Analysis symbol: Rechecking for solution: 0 Report Id: dbf4f691-3b06-11e1-9025-005056a602e6 Report Status: 0 1/9/2012 4:14:12 PM 1 100 1000 Application Error N/A SERVERNAME Faulting application name: w3wp.exe, version: 7.5.7601.17514, time stamp: 0x4ce7a5f8 Faulting module name: ntdll.dll, version: 6.1.7601.17514, time stamp: 0x4ce7ba58 Exception code: 0xc0000374 Fault offset: 0x000ce653 Faulting process id: 0x16ac Faulting application start time: 0x01cccf139f475c0c Faulting application path: C:\Windows\SysWOW64\inetsrv\w3wp.exe Faulting module path: C:\Windows\SysWOW64\ntdll.dll Report Id: e03bae70-3b06-11e1-9025-005056a602e6 1/9/2012 4:14:16 PM 4 0 1001 Windows Error Reporting N/A SERVERNAME Fault bucket , type 0 Event Name: APPCRASH Response: Not available Cab Id: 0 Problem signature: P1: w3wp.exe P2: 7.5.7601.17514 P3: 4ce7a5f8 P4: StackHash_9c6c P5: 6.1.7601.17514 P6: 4ce7ba58 P7: c0000374 P8: 000ce653 P9: P10: Attached files: 
C:\Windows\Temp\WER2579.tmp.appcompat.txt C:\Windows\Temp\WER25F7.tmp.WERInternalMetadata.xml C:\Windows\Temp\WER25F8.tmp.hdmp C:\Windows\Temp\WER28F6.tmp.mdmp These files may be available here: C:\ProgramData\Microsoft\Windows\WER\ReportQueue\AppCrash_w3wp.exe_c49a67649524ad11b64bbf809211bc5ba742a3d6_cab_0b63321b Analysis symbol: Rechecking for solution: 0 Report Id: e03bae70-3b06-11e1-9025-005056a602e6 Report Status: 4 1/9/2012 4:14:16 PM 4 0 1001 Windows Error Reporting N/A SERVERNAME Fault bucket , type 0 Event Name: APPCRASH Response: Not available Cab Id: 0 Problem signature: P1: w3wp.exe P2: 7.5.7601.17514 P3: 4ce7a5f8 P4: StackHash_9c6c P5: 6.1.7601.17514 P6: 4ce7ba58 P7: c0000374 P8: 000ce653 P9: P10: Attached files: C:\Windows\Temp\WER2579.tmp.appcompat.txt C:\Windows\Temp\WER25F7.tmp.WERInternalMetadata.xml C:\Windows\Temp\WER25F8.tmp.hdmp C:\Windows\Temp\WER28F6.tmp.mdmp These files may be available here: C:\ProgramData\Microsoft\Windows\WER\ReportQueue\AppCrash_w3wp.exe_c49a67649524ad11b64bbf809211bc5ba742a3d6_cab_0b63321b Analysis symbol: Rechecking for solution: 0 Report Id: e03bae70-3b06-11e1-9025-005056a602e6 Report Status: 0 1/9/2012 4:14:21 PM 1 100 1000 Application Error N/A SERVERNAME Faulting application name: w3wp.exe, version: 7.5.7601.17514, time stamp: 0x4ce7a5f8 Faulting module name: ntdll.dll, version: 6.1.7601.17514, time stamp: 0x4ce7ba58 Exception code: 0xc0000374 Fault offset: 0x000ce653 Faulting process id: 0x17f8 Faulting application start time: 0x01cccf13a4ba5126 Faulting application path: C:\Windows\SysWOW64\inetsrv\w3wp.exe Faulting module path: C:\Windows\SysWOW64\ntdll.dll Report Id: e57a0a85-3b06-11e1-9025-005056a602e6 1/9/2012 4:14:21 PM 4 0 1001 Windows Error Reporting N/A SERVERNAME Fault bucket , type 0 Event Name: APPCRASH Response: Not available Cab Id: 0 Problem signature: P1: w3wp.exe P2: 7.5.7601.17514 P3: 4ce7a5f8 P4: StackHash_9c6c P5: 6.1.7601.17514 P6: 4ce7ba58 P7: c0000374 P8: 000ce653 P9: P10: Attached files: These files may be available here: C:\ProgramData\Microsoft\Windows\WER\ReportQueue\AppCrash_w3wp.exe_c49a67649524ad11b64bbf809211bc5ba742a3d6_1cfb4872 Analysis symbol: Rechecking for solution: 0 Report Id: e57a0a85-3b06-11e1-9025-005056a602e6 Report Status: 4 1/9/2012 4:14:21 PM 4 0 1001 Windows Error Reporting N/A SERVERNAME Fault bucket , type 0 Event Name: APPCRASH Response: Not available Cab Id: 0 Problem signature: P1: w3wp.exe P2: 7.5.7601.17514 P3: 4ce7a5f8 P4: StackHash_9c6c P5: 6.1.7601.17514 P6: 4ce7ba58 P7: c0000374 P8: 000ce653 P9: P10: Attached files: These files may be available here: C:\ProgramData\Microsoft\Windows\WER\ReportQueue\AppCrash_w3wp.exe_c49a67649524ad11b64bbf809211bc5ba742a3d6_1cfb4872 Analysis symbol: Rechecking for solution: 0 Report Id: e57a0a85-3b06-11e1-9025-005056a602e6 Report Status: 0 None of the files referenced actually exist (not even in WER ReportArchive). These should not be the only events mentioned. The log file has been cleared twice since January 9, so those events should not even be listed at all.

    Read the article

  • Weighted round robins via TTL - possible?

    - by Joe Hopfgartner
    I currently use DNS round robin for load balancing, which works great. The records look like this (I have a TTL of 120 seconds): ;; ANSWER SECTION: orion.2x.to. 116 IN A 80.237.201.41 orion.2x.to. 116 IN A 87.230.54.12 orion.2x.to. 116 IN A 87.230.100.10 orion.2x.to. 116 IN A 87.230.51.65 I learned that not every ISP / device treats such a response the same way. For example, some DNS servers rotate the addresses randomly or always cycle through them. Some just propagate the first entry, others try to determine which is best (regionally near) by looking at the IP address. However, if the user base is big enough (spread over multiple ISPs etc.), it balances pretty well. The discrepancy between the highest and lowest loaded server hardly ever exceeds 15%. However, I now have the problem that I am introducing more servers into the system, and they don't all have the same capacity. I currently only have 1 Gbps servers, but I want to work with 100 Mbit and also 10 Gbps servers. So what I want is to introduce a 10 Gbps server with a weight of 100, a 1 Gbps server with a weight of 10 and a 100 Mbit server with a weight of 1. I used to add servers twice to bring more traffic to them (which worked nicely; the bandwidth almost doubled), but adding a 10 Gbit server 100 times to DNS is a bit ridiculous. So I thought about using the TTL. If I give server A a TTL of 240 seconds and server B only 120 seconds (which is about the minimum to use for round robin, as a lot of DNS servers reportedly set it to 120 if a lower TTL is specified), I think something like this should occur in an ideal scenario: First 120 seconds: 50% of requests get server A -> keep it for 240 seconds; 50% of requests get server B -> keep it for 120 seconds. Second 120 seconds: 50% of requests still have server A cached -> keep it for another 120 seconds; 25% of requests get server A -> keep it for 240 seconds; 25% of requests get server B -> keep it for 120 seconds. Third 120 seconds: 25% will get server A (from the 50% of server A that now expired) -> cache 240 sec; 25% will get server B (from the 50% of server A that now expired) -> cache 120 sec; 25% will have server A cached for another 120 seconds; 12.5% will get server B (from the 25% of server B that now expired) -> cache 120 sec; 12.5% will get server A (from the 25% of server B that now expired) -> cache 240 sec. Fourth 120 seconds: 25% will have server A cached -> cache for another 120 secs; 12.5% will get server A (from the 25% of B that now expired) -> cache 240 secs; 12.5% will get server B (from the 25% of B that now expired) -> cache 120 secs; 12.5% will get server A (from the 25% of A that now expired) -> cache 240 secs; 12.5% will get server B (from the 25% of A that now expired) -> cache 120 secs; 6.25% will get server A (from the 12.5% of B that now expired) -> cache 240 secs; 6.25% will get server B (from the 12.5% of B that now expired) -> cache 120 secs; 12.5% will have server A cached -> cache another 120 secs ... I think I lost track at this point, but you get the idea. As you can see, this gets pretty complicated to predict, and it will certainly not work out exactly like this in practice, but it should definitely have an effect on the distribution! I know that weighted round robin exists and is controlled entirely by the authoritative DNS server: it just cycles through DNS records when responding and returns records with a probability that corresponds to the weighting. My DNS server does not support this, and my requirements are not that precise.
If it doesn't weight perfectly, that's okay, but it should go in the right direction. I think using the TTL field could be a more elegant and easier solution - and it doesn't require a DNS server that controls this dynamically, which saves resources - which is, in my opinion, the whole point of DNS load balancing vs. hardware load balancers. My question now is: are there any best practices / methods / rules of thumb for weighting round robin distribution using the TTL attribute of DNS records? Edit: The system is a forward proxy server system. The amount of bandwidth (not requests) exceeds what one single server with Ethernet can handle, so I need a balancing solution that distributes the bandwidth across several servers. Are there any alternative methods to using DNS? Of course I can use a load balancer with fibre channel etc., but the costs are ridiculous, and it also only widens the bottleneck rather than eliminating it. The only thing I can think of is anycast (is it anycast or multicast?) IP addresses, but I don't have the means to set up such a system.
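
    To make the cache arithmetic above easier to check, here is a small, purely illustrative simulation (not part of the original question; the class name and the resolver count are invented) of the scenario described: a population of resolvers that re-query a plain 50/50 round robin whenever their cached answer expires, with server A cached for 240 seconds and server B for 120 seconds. Under these simplifying assumptions the split settles at roughly two thirds for A and one third for B, i.e. the weighting ends up roughly proportional to the TTL, which suggests the TTL trick can bias traffic in the right direction but cannot by itself produce weights like 100:10:1 without correspondingly extreme TTLs.

    import java.util.Random;

    // Toy model of the caching behaviour sketched above: two servers, A cached
    // for 240 s and B for 120 s, resolvers re-querying a plain 50/50 round robin
    // whenever their cached entry expires. Class name and population size are
    // illustrative only.
    public class TtlWeightSimulation {
        public static void main(String[] args) {
            final int resolvers = 100000;         // hypothetical resolver population
            final int tick = 120;                 // seconds per simulation step
            final int ttlA = 240, ttlB = 120;     // TTLs from the scenario above
            int[] remaining = new int[resolvers]; // seconds left on each cached entry
            boolean[] hasA = new boolean[resolvers];
            Random rnd = new Random(42);

            for (int step = 1; step <= 20; step++) {
                int holdingA = 0;
                for (int i = 0; i < resolvers; i++) {
                    if (remaining[i] <= 0) {          // cache expired: re-resolve
                        hasA[i] = rnd.nextBoolean();  // plain round robin: 50/50
                        remaining[i] = hasA[i] ? ttlA : ttlB;
                    }
                    if (hasA[i]) holdingA++;
                    remaining[i] -= tick;             // advance this resolver's clock
                }
                System.out.printf("t=%4ds  A=%.1f%%  B=%.1f%%%n",
                        step * tick,
                        100.0 * holdingA / resolvers,
                        100.0 * (resolvers - holdingA) / resolvers);
            }
        }
    }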

    Read the article

  • Sporadic unspecific kernel panic

    - by koma
    I'm experiencing seldom (so far about once a month) hard crashes on our ubuntu server 10.04 LTS box. The box itself is quite old (Dell PowerEdge 750 from 2004, Pentium4 2.8 GHz). I set up netconsole after it crashed twice last thursday and was able to extract the following output: [ 9354.062473] invalid opcode: 0000 [#1] SMP [ 9354.062516] last sysfs file: /sys/devices/pci0000:00/0000:00:1d.0/usb2/2-2/2-2:1.0/uevent [ 9354.062555] Modules linked in: ppdev adm1026 hwmon_vid i2c_i801 bridge stp dcdbas psmouse serio_raw netconsole configfs shpchp lp parport usbhid hid e1000 [ 9354.062685] [ 9354.062704] Pid: 3988, comm: rsync Not tainted 2.6.38-12-generic-pae #51~lucid1-Ubuntu Dell Computer Corporation PowerEdge 750 /0R1479 [ 9354.062773] EIP: 0060:[<c104fef1>] EFLAGS: 00010046 CPU: 1 [ 9354.062802] EIP is at check_preempt_wakeup+0x181/0x250 [ 9354.062826] EAX: 00000002 EBX: f2a10ccc ECX: 00000000 EDX: 00000002 [ 9354.062850] ESI: f1db71cc EDI: f1db71a0 EBP: f1dbdea8 ESP: f1dbde8c [ 9354.062875] DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068 [ 9354.062900] Process rsync (pid: 3988, ti=f1dbc000 task=f1db71a0 task.ti=f1dbc000) [ 9354.062933] Stack: [ 9354.062951] 0053ea60 f7907680 f28da840 f2a10ca0 c153ea60 f7907680 c153ea60 f1dbdebc [ 9354.063019] c103f98a f2a10ca0 f7907680 00000001 f1dbdef8 c104f97f 00000000 f2f0bacc [ 9354.063088] f7904338 00000001 00000003 00000000 f2f0bacc 00000001 00000001 00000086 [ 9354.063157] Call Trace: [ 9354.063183] [<c103f98a>] check_preempt_curr+0x6a/0x80 [ 9354.063210] [<c104f97f>] try_to_wake_up+0x5f/0x3f0 [ 9354.063236] [<c1077a00>] ? hrtimer_wakeup+0x0/0x30 [ 9354.063261] [<c104fd64>] wake_up_process+0x14/0x20 [ 9354.063286] [<c1077a1d>] hrtimer_wakeup+0x1d/0x30 [ 9354.063310] [<c1077f4a>] __run_hrtimer+0x7a/0x1c0 [ 9354.063336] [<c107dbad>] ? ktime_get+0x6d/0x110 [ 9354.063360] [<c1078310>] hrtimer_interrupt+0x120/0x2b0 [ 9354.063390] [<c1535c36>] smp_apic_timer_interrupt+0x56/0x8a [ 9354.063418] [<c152f459>] apic_timer_interrupt+0x31/0x38 [ 9354.063446] [<c1520000>] ? mca_attach_bus+0x5/0xc0 [ 9354.063469] Code: 8b 9b 20 01 00 00 8b 86 24 01 00 00 3b 83 24 01 00 00 75 e6 85 db 0f 84 a3 00 00 00 89 da 89 f0 e8 75 f6 fe ff 83 f8 01 0f 85 00 <fe> ff ff 89 f8 e8 95 f9 fe ff 8b 5e 1c 85 db 0f 84 e4 fe ff ff [ 9354.063804] EIP: [<c104fef1>] check_preempt_wakeup+0x181/0x250 SS:ESP 0068:f1dbde8c [ 9354.064231] ---[ end trace 290689cea65aea7f ]--- [ 9354.064290] Kernel panic - not syncing: Fatal exception in interrupt [ 9354.064352] Pid: 3988, comm: rsync Tainted: G D 2.6.38-12-generic-pae #51~lucid1-Ubuntu [ 9354.064424] Call Trace: [ 9354.064481] [<c152c057>] ? panic+0x5c/0x15b [ 9354.064539] [<c15302bd>] ? oops_end+0xcd/0xd0 [ 9354.064539] [<c100d9e4>] ? die+0x54/0x80 [ 9354.064539] [<c152f926>] ? do_trap+0x96/0xc0 [ 9354.064539] [<c100ba00>] ? do_invalid_op+0x0/0xa0 [ 9354.064539] [<c100ba8b>] ? do_invalid_op+0x8b/0xa0 [ 9354.064539] [<c104fef1>] ? check_preempt_wakeup+0x181/0x250 [ 9354.064539] [<c144884d>] ? __kfree_skb+0x3d/0x90 [ 9354.064539] [<c1042ae7>] ? update_curr+0x247/0x2a0 [ 9354.064539] [<c10447bb>] ? update_cfs_load+0x11b/0x2d0 [ 9354.064539] [<c1042a25>] ? update_curr+0x185/0x2a0 [ 9354.064539] [<c152f6bf>] ? error_code+0x67/0x6c [ 9354.064539] [<c104fef1>] ? check_preempt_wakeup+0x181/0x250 [ 9354.064539] [<c103f98a>] ? check_preempt_curr+0x6a/0x80 [ 9354.064539] [<c104f97f>] ? try_to_wake_up+0x5f/0x3f0 [ 9354.064539] [<c1077a00>] ? hrtimer_wakeup+0x0/0x30 [ 9354.064539] [<c104fd64>] ? wake_up_process+0x14/0x20 [ 9354.064539] [<c1077a1d>] ? 
hrtimer_wakeup+0x1d/0x30 [ 9354.064539] [<c1077f4a>] ? __run_hrtimer+0x7a/0x1c0 [ 9354.064539] [<c107dbad>] ? ktime_get+0x6d/0x110 [ 9354.064539] [<c1078310>] ? hrtimer_interrupt+0x120/0x2b0 [ 9354.064539] [<c1535c36>] ? smp_apic_timer_interrupt+0x56/0x8a [ 9354.064539] [<c152f459>] ? apic_timer_interrupt+0x31/0x38 [ 9354.064539] [<c1520000>] ? mca_attach_bus+0x5/0xc0 Googling for this issue didn't really turn up anything useful (most stuff I found was related to btrfs, but I don't use that, although the module exists and is sometimes loaded). From experience it might have to do with relatively heavy I/O, as two of the panics happened during a backup procedure. Kernel is 2.6.38-12-generic-pae, but I'm pretty sure I also saw panics on 2.6.32. I meanwhile upgraded to 3.0.0-17-generic-pae and am waiting for the next crash ;-) I'm at a loss here, so any pointers where to look for the cause or what it could be would be great :-) Thanks !

    Read the article

  • ASP.NET MVC 2 - ViewModel Prefix

    - by Cosmo
    I want to use RenderPartial twice in my view, each call associated with a different model. The problem is that some properties are present in both models (nickname, password). They have no prefix, so even the IDs and names are equal in the output. Now, if I have model errors for nickname or password, both fields get highlighted. Main View: <div> <% Html.RenderPartial("Register", Model.RegisterModel); %> </div> <div> <% Html.RenderPartial("Login", Model.LoginModel); %> </div> Login PartialView: <% using (Html.BeginForm("Login", "Member")) { %> <fieldset> <legend>Login</legend> <p> <%= Html.LabelFor(x => x.Nickname) %> <%= Html.TextBoxFor(x => x.Nickname) %> </p> <p> <%= Html.LabelFor(x => x.Password) %> <%= Html.PasswordFor(x => x.Password) %> </p> <input type="submit" value="Login" /> </fieldset> <% } %> Register PartialView: <% using (Html.BeginForm("Register", "Member")) { %> <fieldset> <legend>Register</legend> <p> <%= Html.LabelFor(x => x.Nickname) %> <%= Html.TextBoxFor(x => x.Nickname) %> </p> <p> <%= Html.LabelFor(x => x.Email) %> <%= Html.TextBoxFor(x => x.Email) %> </p> <p> <%= Html.LabelFor(x => x.Password) %> <%= Html.PasswordFor(x => x.Password) %> </p> <p> <%= Html.LabelFor(x => x.PasswordRepeat) %> <%= Html.PasswordFor(x => x.PasswordRepeat) %> </p> <input type="submit" value="Register" /> </fieldset> <% } %> How can I change this?

    Read the article

  • OnSelectedIndexChange only fires on second click when using custom page validation script

    - by Kris P
    Okay.. this is hard to explain, so here it goes. I have an update panel which contains a number of controls. The update panel is triggered by the OnSelectedIndexChanged event of a dropdownlist called: ddlUSCitizenshipStatus. It works as expected when I selected a new item. However, if I leave ddlUSCitizenshipStatus with the default value, and then click "submit" button on the page, the requiredfieldvalidators say there is an error on ddlUSCitizenshipStatus (which it should, as I never selected a value). So I then choose a value, the error message goes away on ddlUSCitizenshipStatus, however the updatepanel does not refresh. I've debugged this locally and the OnSelectedIndexChanged event for ddlUSCitizenshipStatus does not fire. If I choose an item in the ddlUSCitizenshipStatus list a second time, the OnSelectedIndexChanged server event fires and the update panel refreshes and works as expected. The issue is, I have to select an item in ddlUSCitizenshipStatus twice, after failed validation, before the updatepanel it's sitting in updates. The submit button on the page looks like this: <asp:LinkButton ID="btnSubmitPage1" runat="server" CssClass="continueButton" OnClick="btnSubmitPage1_Click" CausesValidation="true" OnClientClick="javascript: return ValidatePage();" /> If I remove my custom OnClientClick script, making the submit button look like this: <asp:LinkButton ID="btnSubmitPage1" runat="server" CssClass="continueButton" OnClick="btnSubmitPage1_Click" CausesValidation="true" ValidationGroup="valGrpAdditionalInformation" /> The dropdownlist, update panel, and reguiredfieldvalidator all work as expected. However, I need to run that custom "ValidatePage()" script when the button is clicked. Below is what my ValidatePage script looks like. I've been troubleshooting this for more hours than I can count.... I hope someone is able to help me. Please let me know if you can figure out why ddlUSCitizenshipStatus doesn't update the updatepanel until the second click after a failed validation. function ValidatePage() { var blnDoPostBack = true; if (typeof(Page_ClientValidate) == 'function' ) { //Client side validation can occur, so lets do it. //Validate each validation group. for( var i = 0; i < Page_ValidationSummaries.length; i++ ) Page_ClientValidate( Page_ValidationSummaries[i].validationGroup.toString() ); //Validate every validation control on the page. for (var i = 0; i < Page_Validators.length; i++) ValidatorValidate(Page_Validators[i]); //Figure out which validation groups have errors, store a list of these validation groups in an array. var aryValGrpsWithErrors = []; for( var i = 0; i < Page_Validators.length; i++ ) { if( !Page_Validators[i].isvalid ) { //This particular validator has thrown an error. //Remeber to not do a postback, as we were able to catch this validation error client side. blnDoPostBack = false; //If we haven't already registered the validation group this erroring validation control is a part of, do so now. if( aryValGrpsWithErrors.indexOf( Page_Validators[i].validationGroup.toString() ) == -1 ) aryValGrpsWithErrors[aryValGrpsWithErrors.length++] = Page_Validators[i].validationGroup.toString(); } } //Now display every validation summary that has !isvalid validation controls in it. 
for( var i = 0; i < Page_ValidationSummaries.length; i++ ) { if( aryValGrpsWithErrors.indexOf( Page_ValidationSummaries[i].validationGroup.toString() ) != -1 ) { Page_ValidationSummaries[i].style.display = ""; document.getElementById( Page_ValidationSummaries[i].id.toString() + "Wrapper" ).style.display = ""; } else { //The current validation summary does not have any error messages in it, so make sure it's hidden. Page_ValidationSummaries[i].style.display = "none"; document.getElementById( Page_ValidationSummaries[i].id.toString() + "Wrapper" ).style.display = "none"; } } } return blnDoPostBack; }

    Read the article

  • How to "DRY up" C# attributes in Models and ViewModels?

    - by DanM
    This question was inspired by my struggles with ASP.NET MVC, but I think it applies to other situations as well. Let's say I have an ORM-generated Model and two ViewModels (one for a "details" view and one for an "edit" view): Model public class FooModel // ORM generated { public int Id { get; set; } public string FirstName { get; set; } public string LastName { get; set; } public string EmailAddress { get; set; } public int Age { get; set; } public int CategoryId { get; set; } } Display ViewModel public class FooDisplayViewModel // use for "details" view { [DisplayName("ID Number")] public int Id { get; set; } [DisplayName("First Name")] public string FirstName { get; set; } [DisplayName("Last Name")] public string LastName { get; set; } [DisplayName("Email Address")] [DataType("EmailAddress")] public string EmailAddress { get; set; } public int Age { get; set; } [DisplayName("Category")] public string CategoryName { get; set; } } Edit ViewModel public class FooEditViewModel // use for "edit" view { [DisplayName("First Name")] // not DRY public string FirstName { get; set; } [DisplayName("Last Name")] // not DRY public string LastName { get; set; } [DisplayName("Email Address")] // not DRY [DataType("EmailAddress")] // not DRY public string EmailAddress { get; set; } public int Age { get; set; } [DisplayName("Category")] // not DRY public SelectList Categories { get; set; } } Note that the attributes on the ViewModels are not DRY--a lot of information is repeated. Now imagine this scenario multiplied by 10 or 100, and you can see that it can quickly become quite tedious and error prone to ensure consistency across ViewModels (and therefore across Views). How can I "DRY up" this code? Before you answer, "Just put all the attributes on FooModel," I've tried that, but it didn't work because I need to keep my ViewModels "flat". In other words, I can't just compose each ViewModel with a Model--I need my ViewModel to have only the properties (and attributes) that should be consumed by the View, and the View can't burrow into sub-properties to get at the values. Update LukLed's answer suggests using inheritance. This definitely reduces the amount of non-DRY code, but it doesn't eliminate it. Note that, in my example above, the DisplayName attribute for the Category property would need to be written twice because the data type of the property is different between the display and edit ViewModels. This isn't going to be a big deal on a small scale, but as the size and complexity of a project scales up (imagine a lot more properties, more attributes per property, more views per model), there is still the potentially for "repeating yourself" a fair amount. Perhaps I'm taking DRY too far here, but I'd still rather have all my "friendly names", data types, validation rules, etc. typed out only once.

    Read the article

  • With Eclipselink/JPA, can I have a Foreign Composite Key that shares a field with a Primary Composit

    - by user107924
    My database has two entities: Company and Person. A Company can have many People, but a Person must have only one Company. The table structure looks as follows. COMPANY ---------- owner PK comp_id PK c_name PERSON ---------------- owner PK, FK1 personid PK comp_id FK1 p_fname p_sname It has occurred to me that I could remove PERSON.OWNER and derive it through the foreign key; however, I can't make this change without affecting legacy code. I have modeled these as JPA-annotated classes: @Entity @Table(name = "PERSON") @IdClass(PersonPK.class) public class Person implements Serializable { @Id private String owner; @Id private String personid; @ManyToOne @JoinColumns( {@JoinColumn(name = "owner", referencedColumnName = "OWNER", insertable = false, updatable = false), @JoinColumn(name = "comp_id", referencedColumnName = "COMP_ID", insertable = true, updatable = true)}) private Company company; private String p_fname; private String p_sname; ...and standard getters/setters... } @Entity @Table(name = "COMPANY") @IdClass(CompanyPK.class) public class Company implements Serializable { @Id private String owner; @Id private String comp_id; private String c_name; @OneToMany(mappedBy = "company", cascade=CascadeType.ALL) private List people; ...and standard getters/setters... } My PersonPK and CompanyPK classes are nothing special; they just serve as structs holding owner and the respective ID field, and they override hashCode and equals(o). So far so good. I run into a problem, however, when dealing with the association: it seems that if I take an existing Company, create a Person, associate the Person with the Company, and persist the Person, the association is not saved when the Person is inserted. For example, my main code looks like this: EntityManager em = emf.createEntityManager(); em.getTransaction().begin(); CompanyPK companyPK = new CompanyPK(); companyPK.owner="USA"; companyPK.comp_id="1102F3"; Company company = em.find(Company.class, companyPK); Person person = new Person(); person.setOwner("USA"); person.setPersonid("5116628123"); //some number that doesn't exist yet person.setP_fname("Hannah"); person.setP_sname("Montana"); person.setCompany(company); em.persist(person); This completes without error; however, in the database I find that the Person record was inserted with a null in the COMP_ID field. With EclipseLink debug logging set to FINE, the SQL query is shown as: INSERT INTO PERSON (PERSONID,OWNER,P_SNAME,P_FNAME) VALUES (?,?,?,?) bind = [5116628123,USA,Montana,Hannah,] I would have expected this to be saved, and the query to be equivalent to: INSERT INTO PERSON (PERSONID,OWNER,COMP_ID,P_SNAME,P_FNAME) VALUES (?,?,?,?,?) bind = [5116628123,USA,1102F3,Montana,Hannah,] What gives? Is it incorrect to set updatable/insertable=true for one half of a composite key and =false for the other half? If I set updatable/insertable=true for both parts of the foreign key, then EclipseLink fails to start up, saying that I cannot use the column twice without marking one as read-only via these options.
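
    For comparison, here is a hedged sketch of one mapping pattern that is sometimes used when a foreign-key column overlaps a composite primary key: map the shared owner column only through the @Id field, map comp_id as a plain writable column, and make the whole @ManyToOne read-only, keeping the two in sync inside the setter. This is illustrative only (it reuses the Company and PersonPK classes above and assumes a Company.getComp_id() getter, per the "standard getters/setters" note), not a confirmed fix for this particular EclipseLink version.

    import java.io.Serializable;
    import javax.persistence.*;

    @Entity
    @Table(name = "PERSON")
    @IdClass(PersonPK.class)
    public class Person implements Serializable {

        @Id
        private String owner;

        @Id
        private String personid;

        // The FK column is mapped as a plain, writable attribute, so it is
        // included in the INSERT generated for a new Person.
        @Column(name = "comp_id")
        private String compId;

        // The relationship itself is read-only: both join columns are marked
        // insertable = false / updatable = false, so the shared "owner" column
        // is only written once (through the @Id mapping above).
        @ManyToOne
        @JoinColumns({
            @JoinColumn(name = "owner", referencedColumnName = "OWNER",
                        insertable = false, updatable = false),
            @JoinColumn(name = "comp_id", referencedColumnName = "COMP_ID",
                        insertable = false, updatable = false)})
        private Company company;

        private String p_fname;
        private String p_sname;

        // Keep the writable column and the association in sync in one place.
        public void setCompany(Company company) {
            this.company = company;
            this.compId = (company == null) ? null : company.getComp_id();
        }

        // ...remaining getters/setters as in the original...
    }

    Whether this actually produces the expected INSERT INTO PERSON (...,COMP_ID,...) under this EclipseLink version can be confirmed from the same FINE-level SQL log shown above.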

    Read the article

  • Real-time graphing in Java

    - by thodinc
    I have an application which updates a variable between 5 and 50 times a second, and I am looking for some way of drawing a continuous XY plot of this change in real time. Though JFreeChart is not recommended for such a high update rate, many users still say that it works for them. I've tried using this demo and modified it to display a random variable, but it seems to use 100% CPU all the time. Even if I ignore that, I do not want to be restricted to JFreeChart's UI class for constructing forms (though I'm not sure what its capabilities are exactly). Would it be possible to integrate it with Java's "forms" and drop-down menus (as are available in VB)? Otherwise, are there any alternatives I could look into? EDIT: I'm new to Swing, so I've put together some code just to test the functionality of JFreeChart with it (while avoiding the use of JFree's ApplicationFrame class, since I'm not sure how that will work with Swing's combo boxes and buttons). Right now, the graph is being updated immediately and CPU usage is high. Would it be possible to buffer the value with new Millisecond() and update it maybe twice a second? Also, can I add other components to the rest of the JFrame without disrupting JFreeChart? How would I do that? frame.getContentPane().add(new Button("Click")) seems to overwrite the graph. package graphtest; import java.util.Random; import javax.swing.JFrame; import org.jfree.chart.ChartFactory; import org.jfree.chart.ChartPanel; import org.jfree.chart.JFreeChart; import org.jfree.chart.axis.ValueAxis; import org.jfree.chart.plot.XYPlot; import org.jfree.data.time.Millisecond; import org.jfree.data.time.TimeSeries; import org.jfree.data.time.TimeSeriesCollection; public class Main { static TimeSeries ts = new TimeSeries("data", Millisecond.class); public static void main(String[] args) throws InterruptedException { gen myGen = new gen(); new Thread(myGen).start(); TimeSeriesCollection dataset = new TimeSeriesCollection(ts); JFreeChart chart = ChartFactory.createTimeSeriesChart( "GraphTest", "Time", "Value", dataset, true, true, false ); final XYPlot plot = chart.getXYPlot(); ValueAxis axis = plot.getDomainAxis(); axis.setAutoRange(true); axis.setFixedAutoRange(60000.0); JFrame frame = new JFrame("GraphTest"); frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); ChartPanel label = new ChartPanel(chart); frame.getContentPane().add(label); //Suppose I add combo boxes and buttons here later frame.pack(); frame.setVisible(true); } static class gen implements Runnable { private Random randGen = new Random(); public void run() { while(true) { int num = randGen.nextInt(1000); System.out.println(num); ts.addOrUpdate(new Millisecond(), num); try { Thread.sleep(20); } catch (InterruptedException ex) { System.out.println(ex); } } } } }
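
    On the buffering question, a minimal sketch of the "update twice a second" idea follows (the class and method names BufferedFeed, publish and attach are invented for illustration): the producer thread only stores the latest sample, and a javax.swing.Timer, which fires its listener on the Event Dispatch Thread, pushes that sample into the TimeSeries every 500 ms.

    import java.awt.event.ActionEvent;
    import java.awt.event.ActionListener;
    import java.util.concurrent.atomic.AtomicInteger;
    import javax.swing.Timer;
    import org.jfree.data.time.Millisecond;
    import org.jfree.data.time.TimeSeries;

    // Illustrative buffer between a fast producer and the chart: the producer
    // overwrites "latest" as often as it likes, and a Swing Timer copies the
    // current value into the TimeSeries twice a second, on the EDT.
    public class BufferedFeed {

        private final AtomicInteger latest = new AtomicInteger();

        // Called from the producer thread (the gen loop above), at any rate.
        public void publish(int value) {
            latest.set(value);
        }

        // Call once after the chart has been built; javax.swing.Timer fires its
        // listener on the Event Dispatch Thread, so touching the series here is safe.
        public void attach(final TimeSeries ts) {
            new Timer(500, new ActionListener() {      // 500 ms -> about 2 updates/s
                public void actionPerformed(ActionEvent e) {
                    ts.addOrUpdate(new Millisecond(), latest.get());
                }
            }).start();
        }
    }

    With something like this, gen.run() would call feed.publish(num) instead of ts.addOrUpdate(...), and feed.attach(ts) would be called once after the dataset is created. As for the extra components: the frame's content pane uses a BorderLayout, so a plain add() puts every component into the CENTER slot and only the last one is laid out, which is likely why the extra Button appears to overwrite the graph; adding the ChartPanel and the other controls to different layout regions (or to a nested JPanel) avoids that.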

    Read the article

< Previous Page | 84 85 86 87 88 89 90 91 92 93  | Next Page >