Search Results

Search found 5991 results on 240 pages for 'iwork numbers'.

  • Nested loops break out unexpectedly

    - by Metju
    Hi guys, I'm trying to create a sudoku game. For those who do not know it: you have a 9x9 grid that needs to be filled with the numbers 1-9, and every number must be unique in its row, its column, and the 3x3 box it belongs to. I ended up doing loads of looping within a two-dimensional array, but at some point it just stops: no exceptions whatsoever, it just breaks out and nothing happens. It's not always at the same position, but it always gets past the halfway point. I was expecting a stack overflow exception at least. Here's my code: public class Engine { public int[,] Create() { int[,] outer = new int[9, 9]; for (int i = 0; i < 9; i++) { for (int j = 0; j < 9; j++) { outer[i, j] = GetRandom(GetUsed(outer, i, j)); } } return outer; } List<int> GetUsed(int[,] arr, int x, int y) { List<int> usedNums = new List<int>(); for (int i = 0; i < 9; i++) { if (arr[x, i] != 0 && i != y) { if(!usedNums.Contains(arr[x, i])) usedNums.Add(arr[x, i]); } } for (int i = 0; i < 9; i++) { if (arr[i, y] != 0 && i != x) { if (!usedNums.Contains(arr[i, y])) usedNums.Add(arr[i, y]); } } int x2 = 9 - (x + 1); int y2 = 9 - (y + 1); if (x2 <= 3) x2 = 2; else if (x2 > 3 && x2 <= 6) x2 = 5; else x2 = 8; if (y2 <= 3) y2 = 2; else if (y2 > 3 && y2 <= 6) y2 = 5; else y2 = 8; for (int i = x2 - 2; i < x2; i++) { for (int j = y2 - 2; j < y2; j++) { if (arr[i, j] != 0 && i != x && j != y) { if (!usedNums.Contains(arr[i, j])) usedNums.Add(arr[i, j]); } } } return usedNums; } int GetRandom(List<int> numbers) { Random r; int newNum; do { r = new Random(); newNum = r.Next(1, 10); } while (numbers.Contains(newNum)); return newNum; } }
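    The symptom is almost certainly an infinite loop rather than a silent crash: Create() fills the cells in order with random legal values and never backtracks, so sooner or later it reaches a cell whose row, column and box already contain all nine digits, and GetRandom() then spins forever because every value 1-9 is in the used list. Note also that the box check only scans a 2x2 window (the loops run from x2 - 2 to x2 - 1). A minimal sketch of the two helpers, assuming the surrounding Engine class; the helper names mirror the originals, but the 0 return value and the shared Random are additions of this sketch:

    ```csharp
    // Sketch of replacement helpers for the Engine class above (not tested against
    // the original project). Requires: using System; using System.Collections.Generic;
    // using System.Linq;
    static readonly Random Rng = new Random();       // one shared instance, not one per call

    List<int> GetUsed(int[,] arr, int x, int y)
    {
        var used = new HashSet<int>();
        for (int i = 0; i < 9; i++)
        {
            used.Add(arr[x, i]);                     // same row
            used.Add(arr[i, y]);                     // same column
        }
        int boxX = (x / 3) * 3;                      // top-left corner of the 3x3 box
        int boxY = (y / 3) * 3;
        for (int i = boxX; i < boxX + 3; i++)
            for (int j = boxY; j < boxY + 3; j++)
                used.Add(arr[i, j]);                 // same box
        used.Remove(0);                              // 0 marks an empty cell, it is not a digit
        return used.ToList();
    }

    int GetRandom(List<int> usedNumbers)
    {
        var candidates = Enumerable.Range(1, 9).Except(usedNumbers).ToList();
        if (candidates.Count == 0)
            return 0;                                // dead end: the caller must backtrack
        return candidates[Rng.Next(candidates.Count)];
    }
    ```

    Even with legal candidate sets, filling a grid cell by cell still dead-ends regularly, so the caller has to backtrack (or restart the row or grid) whenever 0 comes back; a small recursive solver over the 81 cells is the usual way to do that.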

  • Generate random number histogram using Java

    - by Chewart
    Histogram -------------------------------------------------------- 1 ****(4) 2 ******(6) 3 ***********(11) 4 *****************(17) 5 **************************(26) 6 *************************(25) 7 *******(7) 8 ***(3) 9 (0) 10 *(1) -------------------------------------------------------- Basically, the above is what my program needs to do. I'm missing something somewhere; any help would be great :) import java.util.Random; public class Histogram { /*This is a program to generate random number histogram between 1 and 100 and generate a table */ public static void main(String args[]) { int [] randarray = new int [80]; Random random = new Random(); System.out.println("Histogram"); System.out.println("---------"); int i ; for ( i = 0; i<randarray.length;i++) { int temp = random.nextInt(100); //random numbers up to number value 100 randarray[i] = temp; } int [] histo = new int [10]; for ( i = 0; i<10; i++) { /* %03d\t, this generates the random numbers to three decimal places so the numbers are generated with a full number or number with 00's or one 0*/ if (randarray[i] <= 10) { histo[i] = histo[i] + 1; //System.out.println("*"); } else if ( randarray[i] <= 20){ histo[i] = histo[i] + 1; } else if (randarray[i] <= 30){ histo[i] = histo[i] + 1; } else if ( randarray[i] <= 40){ histo[i] = histo[i] + 1; } else if (randarray[i] <= 50){ histo[i] = histo[i] + 1; } else if ( randarray[i] <=60){ histo[i] = histo[i] + 1; } else if ( randarray[i] <=70){ histo[i] = histo[i] + 1; } else if ( randarray[i] <=80){ histo[i] = histo[i] + 1; } else if ( randarray[i] <=90){ histo[i] = histo[i] + 1; } else if ( randarray[i] <=100){ histo[i] = histo[i] + 1; } switch (randarray[i]) { case 1: System.out.print("0-10 | "); break; case 2: System.out.print("11-20 | "); break; case 3: System.out.print("21-30 | "); break; case 4: System.out.print("31-40 | "); break; case 5: System.out.print("41-50 | "); break; case 6: System.out.print("51-60 | "); break; case 7: System.out.print("61-70 | "); break; case 8: System.out.print("71-80 | "); break; case 9: System.out.print("81-90 | "); break; case 10: System.out.print("91-100 | "); } for (int i = 0; i < 80; i++) { randomNumber = random.nextInt(100) index = (randomNumber - 1) / 2; histo[index]++; } } } }
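    Several things go wrong in the snippet: histo is indexed with the loop counter i rather than the bucket the value falls into, only the first 10 of the 80 values are ever examined, the switch on randarray[i] only matches the raw values 1-10 so the range labels almost never print, and the trailing loop uses randomNumber and index without declaring them, with (randomNumber - 1) / 2 producing indexes up to 49 for a 10-element array. The whole thing reduces to one line of bucketing: (value - 1) / 10 maps 1-10 to bucket 0, 11-20 to bucket 1, and so on. A compact sketch of the full program, written in C# rather than Java purely for illustration (the Java equivalent is histo[(temp - 1) / 10]++ with temp = random.nextInt(100) + 1):

    ```csharp
    // Illustrative sketch only; all names are placeholders, logic matches the goal above.
    using System;

    class HistogramSketch
    {
        static void Main()
        {
            var random = new Random();
            int[] buckets = new int[10];                  // 1-10, 11-20, ..., 91-100

            for (int i = 0; i < 80; i++)
            {
                int value = random.Next(1, 101);          // uniform in 1..100 inclusive
                buckets[(value - 1) / 10]++;              // 1-10 -> 0, 11-20 -> 1, ...
            }

            Console.WriteLine("Histogram");
            Console.WriteLine(new string('-', 40));
            for (int b = 0; b < buckets.Length; b++)
            {
                string label = string.Format("{0,3}-{1,3}", b * 10 + 1, (b + 1) * 10);
                Console.WriteLine("{0} | {1}({2})", label, new string('*', buckets[b]), buckets[b]);
            }
        }
    }
    ```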

  • Merge DataGrid ColumnHeaders

    - by Vishal
    I would like to merge two column-Headers. Before you go and mark this question as duplicate please read further. I don't want a super-Header. I just want to merge two column-headers. Take a look at image below: Can you see two columns with headers Mobile Number 1 and Mobile Number 2? I want to show there only 1 column header as Mobile Numbers. Here is the XAML used for creating above mentioned dataGrid: <DataGrid Grid.Row="1" Margin="0,10,0,0" ItemsSource="{Binding Ledgers}" IsReadOnly="True" AutoGenerateColumns="False"> <DataGrid.Columns> <DataGridTextColumn Header="Customer Name" Binding="{Binding LedgerName}" /> <DataGridTextColumn Header="City" Binding="{Binding City}" /> <DataGridTextColumn Header="Mobile Number 1" Binding="{Binding MobileNo1}" /> <DataGridTextColumn Header="Mobile Number 2" Binding="{Binding MobileNo2}" /> <DataGridTextColumn Header="Opening Balance" Binding="{Binding OpeningBalance}" /> </DataGrid.Columns> </DataGrid> Update1: Update2 I have created a converter as follows: public class MobileNumberFormatConverter : IValueConverter { public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { if (value != null && value != DependencyProperty.UnsetValue) { if (value.ToString().Length <= 15) { int spacesToAdd = 15 - value.ToString().Length; string s = value.ToString().PadRight(value.ToString().Length + spacesToAdd); return s; } return value.ToString().Substring(0, value.ToString().Length - 3) + "..."; } return ""; } public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { throw new NotImplementedException(); } } I have used it in XAML as follows: <DataGridTextColumn Header="Mobile Numbers"> <DataGridTextColumn.Binding> <MultiBinding StringFormat=" {0} {1}"> <Binding Path="MobileNo1" Converter="{StaticResource mobileNumberFormatConverter}"/> <Binding Path="MobileNo2" Converter="{StaticResource mobileNumberFormatConverter}"/> </MultiBinding> </DataGridTextColumn.Binding> </DataGridTextColumn> The output I got: Update3: At last I got the desired output. 
Here is the code for Converter: public class MobileNumberFormatConverter : IValueConverter { public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { if (value != null && value != DependencyProperty.UnsetValue) { if (parameter.ToString().ToUpper() == "N") { if (value.ToString().Length <= 15) { return value.ToString(); } else { return value.ToString().Substring(0, 12); } } else if (parameter.ToString().ToUpper() == "S") { if (value.ToString().Length <= 15) { int spacesToAdd = 15 - value.ToString().Length; string spaces = ""; return spaces.PadRight(spacesToAdd); } else { return "..."; } } } return ""; } public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { throw new NotImplementedException(); } } Here is my XAML: <DataGridTemplateColumn Header="Mobile Numbers"> <DataGridTemplateColumn.CellTemplate> <DataTemplate> <TextBlock> <Run Text="{Binding MobileNo1, Converter={StaticResource mobileNumberFormatConverter}, ConverterParameter=N}" /> <Run Text="{Binding MobileNo1, Converter={StaticResource mobileNumberFormatConverter}, ConverterParameter=S}" FontFamily="Consolas"/> <Run Text=" " FontFamily="Consolas"/> <Run Text="{Binding MobileNo2, Converter={StaticResource mobileNumberFormatConverter}, ConverterParameter=N}" /> <Run Text="{Binding MobileNo2, Converter={StaticResource mobileNumberFormatConverter}, ConverterParameter=S}" FontFamily="Consolas"/> </TextBlock> </DataTemplate> </DataGridTemplateColumn.CellTemplate> </DataGridTemplateColumn> Output:
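    A possible simplification, if splitting each number into a proportional Run plus a monospaced padding Run ever becomes hard to maintain: a single IMultiValueConverter can pad or truncate both numbers into one fixed-width string, as long as the cell uses a monospaced font (Consolas, as in the post) so the padding actually lines up. This is a sketch only, reusing the post's assumption of a 15-character display width; it has not been tested against the original project:

    ```csharp
    using System;
    using System.Globalization;
    using System.Windows;
    using System.Windows.Data;

    // Hypothetical alternative to the two converters above: one IMultiValueConverter
    // that pads or truncates both numbers into a single fixed-width string.
    public class MobileNumbersConverter : IMultiValueConverter
    {
        private const int Width = 15;   // assumed maximum display width per number

        public object Convert(object[] values, Type targetType, object parameter, CultureInfo culture)
        {
            string first = Fit(values.Length > 0 ? values[0] : null);
            string second = Fit(values.Length > 1 ? values[1] : null);
            return first + " " + second;
        }

        public object[] ConvertBack(object value, Type[] targetTypes, object parameter, CultureInfo culture)
        {
            throw new NotImplementedException();
        }

        private static string Fit(object value)
        {
            string s = (value == null || value == DependencyProperty.UnsetValue) ? "" : value.ToString();
            return s.Length <= Width
                ? s.PadRight(Width)                      // pad short numbers so columns align
                : s.Substring(0, Width - 3) + "...";     // truncate long ones, as the post does
        }
    }
    ```

    It would be bound from a plain DataGridTextColumn with a MultiBinding over MobileNo1 and MobileNo2, with a monospaced FontFamily applied to the column so the padding lines up.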

  • Why would a Linux VM in vSphere ESXi 5.5 show dramatically increased disk I/O latency?

    - by mhucka
    I'm stumped and I hope someone else will recognize the symptoms of this problem. Hardware: new Dell T110 II, dual-core Pentium G860 2.9 GHz, onboard SATA controller, one new 500 GB 7200 RPM cabled hard drive inside the box, other drives inside but not mounted yet. No RAID. Software: fresh CentOS 6.5 virtual machine under VMware ESXi 5.5.0 (build 174 + vSphere Client). 2.5 GB RAM allocated. The disk is how CentOS offered to set it up, namely as a volume inside an LVM Volume Group, except that I skipped having a separate /home and simply have / and /boot. CentOS is patched up, ESXi patched up, latest VMware tools installed in the VM. No users on the system, no services running, no files on the disk but the OS installation. I'm interacting with the VM via the VM virtual console in vSphere Client. Before going further, I wanted to check that I configured things more or less reasonably. I ran the following command as root in a shell on the VM: for i in 1 2 3 4 5 6 7 8 9 10; do dd if=/dev/zero of=/test.img bs=8k count=256k conv=fdatasync done I.e., just repeat the dd command 10 times, which results in printing the transfer rate each time. The results are disturbing. It starts off well: 262144+0 records in 262144+0 records out 2147483648 bytes (2.1 GB) copied, 20.451 s, 105 MB/s 262144+0 records in 262144+0 records out 2147483648 bytes (2.1 GB) copied, 20.4202 s, 105 MB/s ... but after 7-8 of these, it then prints 262144+0 records in 262144+0 records out 2147483648 bytes (2.1 GG) copied, 82.9779 s, 25.9 MB/s 262144+0 records in 262144+0 records out 2147483648 bytes (2.1 GB) copied, 84.0396 s, 25.6 MB/s 262144+0 records in 262144+0 records out 2147483648 bytes (2.1 GB) copied, 103.42 s, 20.8 MB/s If I wait a significant amount of time, say 30-45 minutes, and run it again, it again goes back to 105 MB/s, and after several rounds (sometimes a few, sometimes 10+), it drops to ~20-25 MB/s again. Plotting the disk latency in vSphere's interface, it shows periods of high disk latency hitting 1.2-1.5 seconds during the times that dd reports the low throughput. (And yes, things get pretty unresponsive while that's happening.) What could be causing this? I'm comfortable that it is not due to the disk failing, because I also had configured two other disks as an additional volume in the same system. At first I thought I did something wrong with that volume, but after commenting the volume out from /etc/fstab and rebooting, and trying the tests on / as shown above, it became clear that the problem is elsewhere. It is probably an ESXi configuration problem, but I'm not very experienced with ESXi. It's probably something stupid, but after trying to figure this out for many hours over multiple days, I can't find the problem, so I hope someone can point me in the right direction. (P.S.: yes, I know this hardware combo won't win any speed awards as a server, and I have reasons for using this low-end hardware and running a single VM, but I think that's besides the point for this question [unless it's actually a hardware problem].) ADDENDUM #1: Reading other answers such as this one made me try adding oflag=direct to dd. However, it makes no difference in the pattern of results: initially the numbers are higher for many rounds, then they drop to 20-25 MB/s. (The initial absolute numbers are in the 50 MB/s range.) ADDENDUM #2: Adding sync ; echo 3 > /proc/sys/vm/drop_caches into the loop does not make a difference at all. 
ADDENDUM #3: To take out further variables, I now run dd such that the file it creates is larger than the amount of RAM on the system. The new command is dd if=/dev/zero of=/test.img bs=16k count=256k conv=fdatasync oflag=direct. Initial throughput numbers with this version of the command are ~50 MB/s. They drop to 20-25 MB/s when things go south. ADDENDUM #4: Here is the output of iostat -d -m -x 1 running in another terminal window while performance is "good" and then again when it's "bad". (While this is going on, I'm running dd if=/dev/zero of=/test.img bs=16k count=256k conv=fdatasync oflag=direct.) First, when things are "good", it shows this: When things go "bad", iostat -d -m -x 1 shows this:

  • Java Thread execution on same data

    - by AR89
    First of all, here is the code; you can just copy and paste it: import java.util.ArrayList; public class RepetionCounter implements Runnable{ private int x; private int y; private int[][] matrix; private int xCounter; private int yCounter; private ArrayList<Thread> threadArray; private int rowIndex; private boolean[] countCompleted; public RepetionCounter(int x, int y, int [][]matrix) { this.x = x; this.y = y; this.matrix = matrix; this.threadArray = new ArrayList<Thread>(matrix.length); this.rowIndex = 0; for(int i = 0; i < matrix.length; i++){ threadArray.add(new Thread(this)); } countCompleted = new boolean[matrix.length]; } public void start(){ for (int i = 0; i < threadArray.size(); i++){ threadArray.get(i).start(); this.rowIndex++; } } public void count(int rowIndex) { for(int i = 0; i < matrix[rowIndex].length; i++){ if (matrix[rowIndex][i] == x){ this.xCounter++; } else if (matrix[rowIndex][i] == y){ this.yCounter++; } } } @Override public void run() { count(this.rowIndex); countCompleted[this.rowIndex] = true; } public int getxCounter() { return xCounter; } public void setxCounter(int xCounter) { this.xCounter = xCounter; } public int getyCounter() { return yCounter; } public void setyCounter(int yCounter) { this.yCounter = yCounter; } public boolean[] getCountCompleted() { return countCompleted; } public void setCountCompleted(boolean[] countCompleted) { this.countCompleted = countCompleted; } public static void main(String args[]){ int[][] matrix = {{0,2,1}, {2,3,4}, {3,2,0}}; RepetionCounter rc = new RepetionCounter(0, 2, matrix); rc.start(); boolean ready = false; while(!ready){ for(int i = 0; i < matrix.length; i++){ if (rc.getCountCompleted()[i]){ ready = true; } else { ready = false; } } } if (rc.getxCounter() > rc.getyCounter()){ System.out.println("Thre are more x than y"); } else {System.out.println("There are:"+rc.getxCounter()+" x and:"+rc.getyCounter()+" y"); } } } What I want this code to do: I give the object a matrix and two numbers, and I want to know how many times these two numbers occur in the matrix. I create as many threads as the number of rows of the matrix (that's why there is that ArrayList), so in this object I have k threads (supposing k is the number of rows), each of which counts the occurrences of the two numbers. The problem is: if I run it for the first time everything works, but if I try to execute it again I get an IndexOutOfBoundsException, or a bad count of the occurrences. The odd thing is that if I get the error and modify the code, after that it works again just once. Can you explain to me why this is happening?
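    The intermittent IndexOutOfBoundsException and the wrong counts both come from shared mutable state: every thread reads this.rowIndex inside run(), but start() keeps incrementing that same field while the threads are spinning up, so a thread can observe 3 (one past the last row, hence the exception) or process the same row twice. On top of that, xCounter++ and yCounter++ are not atomic, and the while(!ready) loop effectively tests only the last element of countCompleted. The usual shape of the fix is to hand each worker its own row index, make the counters atomic, and join the threads instead of busy-waiting on a flag array. A sketch of that structure, written in C# rather than Java purely for illustration (names are placeholders; the structure maps one-to-one onto Thread, AtomicInteger and join() in Java):

    ```csharp
    using System;
    using System.Threading;

    class RepetitionCounter
    {
        private readonly int x, y;
        private readonly int[][] matrix;
        private int xCounter, yCounter;

        public RepetitionCounter(int x, int y, int[][] matrix)
        {
            this.x = x;
            this.y = y;
            this.matrix = matrix;
        }

        public void CountAll()
        {
            var threads = new Thread[matrix.Length];
            for (int row = 0; row < matrix.Length; row++)
            {
                int myRow = row;                          // each thread captures its own index
                threads[row] = new Thread(() => CountRow(myRow));
                threads[row].Start();
            }
            foreach (var t in threads)
                t.Join();                                 // wait; no busy-wait flag array needed
        }

        private void CountRow(int row)
        {
            foreach (int value in matrix[row])
            {
                if (value == x) Interlocked.Increment(ref xCounter);       // atomic update
                else if (value == y) Interlocked.Increment(ref yCounter);
            }
        }

        public int XCount { get { return xCounter; } }
        public int YCount { get { return yCounter; } }

        static void Main()
        {
            int[][] matrix =
            {
                new[] { 0, 2, 1 },
                new[] { 2, 3, 4 },
                new[] { 3, 2, 0 }
            };
            var rc = new RepetitionCounter(0, 2, matrix);
            rc.CountAll();
            Console.WriteLine("There are {0} x and {1} y", rc.XCount, rc.YCount);
        }
    }
    ```

    For a 3x3 matrix the threads cost far more than they save; a plain nested loop, or a parallel loop over the rows for genuinely large matrices, gives the same answer with none of the coordination.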

  • CodePlex Daily Summary for Wednesday, October 02, 2013

    CodePlex Daily Summary for Wednesday, October 02, 2013Popular ReleasesEla, functional programming language: Ela, dynamic functional language (PDF, book, 0.6): A book about Ela, dynamic functional language in PDF format.Compact 2013 Tools: Managed Code Version of Apps 1.0: Compact13MinShell Download https://download-codeplex.sec.s-msft.com/Images/v20779/RuntimeBinary.gif Compact13MinShellV3.0.zip The Codeplex Project Downloads Page AboutCompact13Tools.zip: Each app as an OS Content Subproject. Includes CoreCon3 Subproject. Apps.zip: Just the apps in a a zip file AppInstallersx86.zip: The apps as separate x86 installers Compact13MinShell Download: (Separate Codeplex Project) The Minshell that implements the menu that includes these apps via registr...Application Architecture Guidelines: App Architecture Guidelines 3.0.8: This document is an overview of software qualities, principles, patterns, practices, tools and libraries.C# Intellisense for Notepad++: Release v1.0.7.0: - smart indentation - document formatting To avoid the DLLs getting locked by OS use MSI file for the installation.CS-Script for Notepad++: Release v1.0.7.0: - smart indentation - document formatting To avoid the DLLs getting locked by OS use MSI file for the installation.State of Decay Save Manager: Version 1.0.2: Added Start/Stop button for timer to manually enable/disable Quick save routine updated to force it to refresh the folder date Quick save added to backup listing Manual update button Lower level hooking for F5 and F9 buttons workingSharePoint Farm documentation tool: SPDocumentor 0.1: SPDocumentor 0.1 This is a POC version of the tool that will be implemented.DotNetNuke® Form and List: 06.00.06: DotNetNuke Form and List 06.00.06 Changes to 6.0.6•Add in Sql to remove 'text on row' setting for UserDefinedTable to make SQL Azure compatible. •Add new azureCompatible element to manifest. •Added a fix for importing templates. Changes to 6.0.2•Fix: MakeThumbnail was broken if the application pool was configured to .Net 4 •Change: Data is now stored in nvarchar(max) instead of ntext Changes to 6.0.1•Scripts now compatible with SQL Azure. Changes to 6.0.0•Icons are shown in module action b...BlackJumboDog: Ver5.9.6: 2013.09.30 Ver5.9.6 (1)SMTP???????、???????????????? (2)WinAPI??????? (3)Web???????CGI???????????????????????Microsoft Ajax Minifier: Microsoft Ajax Minifier 5.2: Mostly internal code tweaks. added -nosize switch to turn off the size- and gzip-calculations done after minification. removed the comments in the build targets script for the old AjaxMin build task (discussion #458831). Fixed an issue with extended Unicode characters encoded inside a string literal with adjacent \uHHHH\uHHHH sequences. Fixed an IndexOutOfRange exception when encountering a CSS identifier that's a single underscore character (_). In previous builds, the net35 and net20...AJAX Control Toolkit: September 2013 Release: AJAX Control Toolkit Release Notes - September 2013 Release (Updated) Version 7.1001September 2013 release of the AJAX Control Toolkit. AJAX Control Toolkit .NET 4.5 – AJAX Control Toolkit for .NET 4.5 and sample site (Recommended). AJAX Control Toolkit .NET 4 – AJAX Control Toolkit for .NET 4 and sample site (Recommended). AJAX Control Toolkit .NET 3.5 – AJAX Control Toolkit for .NET 3.5 and sample site (Recommended). 
Important UpdateThis release has been updated to fix two issues: Upda...WDTVHubGen - Adds Metadata, thumbnails and subtitles to WDTV Live Hubs: WDTVHubGen.v2.1.4.apifix-alpha: WDTVHubGen.v2.1.4.apifix-alpha is for testers to figure out if we got the NEW api plugged in ok. thanksVisual Log Parser: VisualLogParser: Portable Visual Log Parser for Dotnet 4.0Trace Reader for Microsoft Dynamics CRM: Trace Reader (1.2013.9.29): Initial releaseAudioWordsDownloader: AudioWordsDownloader 1.1 build 88: New features list of words (mp3 files) is available upon typing when a download path is defined list of download paths is added paths history settings added Bug fixed case mismatch in word search field fixed path not exist bug fixed when history has been used path, when filled from dialog, not stored refresh autocomplete list after path change word sought is deleted when path is changed at the end sought word list is deleted word list not refreshed download ends. word lis...Wsus Package Publisher: Release v1.3.1309.28: Fix a bug, where WPP crash when running on a computer where Windows was installed in another language than Fr, En or De, and launching the Update Creation Wizard. Fix a bug, where WPP crash if some Multi-Thread job are launch with more than 64 items. Add a button to abort "Install This Update" wizard. Allow WPP to remember which columns are shown last time. Make URL clickable on the Update Information Tab. Add a new feature, when Double-Clicking on an update, the default action exec...Tweetinvi a friendly Twitter C# API: Alpha 0.8.3.0: Version 0.8.3.0 emphasis on the FIlteredStream and ease how to manage Exceptions that can occur due to the network or any other issue you might encounter. Will be available through nuget the 29/09/2013. FilteredStream Features provided by the Twitter Stream API - Ability to track specific keywords - Ability to track specific users - Ability to track specific locations Additional features - Detect the reasons the tweet has been retrieved from the Filtered API. You have access to both the ma...AcDown?????: AcDown????? v4.5: ??●AcDown??????????、??、??、???????。????,????,?????????????????????????。???????????Acfun、????(Bilibili)、??、??、YouTube、??、???、??????、SF????、????????????。 ●??????AcPlay?????,??????、????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ??v4.5 ???? AcPlay????????v3.5 ????????,???????????30% ?? ???????GoodManga.net???? ?? ?????????? ?? ??Acfun?????????? ??Bilibili??????????? ?????????flvcd???????? ??SfAcg????????????? ???????????? ???????????????? ????32...Magick.NET: Magick.NET 6.8.7.001: Magick.NET linked with ImageMagick 6.8.7.0. Breaking changes: - ToBitmap method of MagickImage returns a png instead of a bmp. - Changed the value for full transparency from 255(Q8)/65535(Q16) to 0. - MagickColor now uses floats instead of Byte/UInt16.Media Companion: Media Companion MC3.578b: With the feedback received over the renaming of Movie Folders, and files, there has been some refinement done. As well as I would like to introduce Blu-Ray movie folder support, for Pre-Frodo and Frodo onwards versions of XBMC. To start with, Context menu option for renaming movies, now has three sub options: Movie & Folder, Movie only & Folder only. The option Manual Movie Rename needs to be selected from Movie Preferences, but the autoscrape boxes do not need to be selected. 
Blu Ray Fo...New ProjectsAll CRM Resources for Microsoft Dynamics CRM: Microsoft Dynamics CRM Resources Windows 8 App with News, Feeds, Forums, Blogs, Videos & Twitter updates, information, guides & resources #MSDynCRM community.BasiliskBugTracker: A sample teamwork project for the Telerik Academy's ASP.NET Course 2013.CagerAutoPilot: Programmatically control a toy helicopter with kinectClass Libraries & Database Management: ClassDBManager permette la sincronizzazione delle classi (creazione/modifica/cancellazione) in base alle tabelle contenute nel databaseCommand Line Utility: Enables fast, easy creation of object-oriented settings classes in C# that interface directly with command line input. Minimize code and increase robustness.Controles | Versa: Login Pagina Principal Cadastro UsuáriosDispage: DisPage is a system to hide a website under a different browser title (For example "Vimeo" could look like "Google" (I am working on a way of changing this)ExpressiveDataGenerators: Expressive and powerfull test data generators.Fabrikam Fiber: This project provides download and support to anyone (i.e. trainers) who want to access the Fabrikam Fiber sample application, setup scripts, notes, etc.Get all numbers in between a pair of numbers: Get all integers between two numbers. C#, VB.NETHungryCrowd food lovers market: food lovers market, food, marketsInvalid User Details for SharePoint 2007 and 2010 Sites: Client Based Utility to export invalid users from a SharePoint site (2007 and 2010), as a CSV file using native SP Web Services (UserGroup.asmx and People.asmx)Kh?o Sát Công Ngh?: 1. Tên d? tài: Th?c tr?ng và gi?i pháp h? tr? nâng cao nang l?c c?nh tranh c?a các doanh nghi?p nh? và v?a t?nh Thanh Hóa Lightning: Micro toolkit to make it easy to get content on your site, and serve it fast.LovelyCMS: LovelyCMS ist ein sehr einfaches Content Management System auf der Basis von ASP.NET MVC4.MVC Error Handler: Simple library that allows you to easily create error pages for common HTTP error and application exceptions.MVC Table Styling selection to CSS and demo table: Enter table styling by selection from drop-down list and both generated CSS and see effect of the CSS on a demo table.MvcWebApiFramework: main frameworkNoDemo: It is not only a demo.NumbersInWordsRU: ?????? ??? ??????????? ????? ??????? ? ????? ????????Omnifactotum: Omnifactotum is the .NET library intended to help .NET developers avoid writing the same helper types, methods and extension methods for different projects.Outlook Rules Offline Processor: A utility for organizing Microsoft Outlook rules. The utility uses the rules export file, *.RWZ, to make changes.SharePoint Farm documentation tool: The SPDocumentor (SharePoint Farm documentation tool) allows you to generate a word document that includes most of your farm settings. Startup Shutdown Mailer: This tool is a simple Windows Service which sends an e-mail to a specified account whenever your PC was started up or shut down.YüzKitabi: Daha güvenli ve etkilesimli YüzKitabi Uygulamasi

  • Performance Optimization – It Is Faster When You Can Measure It

    - by Alois Kraus
    Performance optimization in bigger systems is hard because the measured numbers can vary greatly depending on the measurement method of your choice. To measure execution timing of specific methods in your application you usually use Time Measurement Method Potential Pitfalls Stopwatch Most accurate method on recent processors. Internally it uses the RDTSC instruction. Since the counter is processor specific you can get greatly different values when your thread is scheduled to another core or the core goes into a power saving mode. But things do change luckily: Intel's Designer's vol3b, section 16.11.1 "16.11.1 Invariant TSC The time stamp counter in newer processors may support an enhancement, referred to as invariant TSC. Processor's support for invariant TSC is indicated by CPUID.80000007H:EDX[8]. The invariant TSC will run at a constant rate in all ACPI P-, C-. and T-states. This is the architectural behavior moving forward. On processors with invariant TSC support, the OS may use the TSC for wall clock timer services (instead of ACPI or HPET timers). TSC reads are much more efficient and do not incur the overhead associated with a ring transition or access to a platform resource." DateTime.Now Good but it has only a resolution of 16ms which can be not enough if you want more accuracy.   Reporting Method Potential Pitfalls Console.WriteLine Ok if not called too often. Debug.Print Are you really measuring performance with Debug Builds? Shame on you. Trace.WriteLine Better but you need to plug in some good output listener like a trace file. But be aware that the first time you call this method it will read your app.config and deserialize your system.diagnostics section which does also take time.   In general it is a good idea to use some tracing library which does measure the timing for you and you only need to decorate some methods with tracing so you can later verify if something has changed for the better or worse. In my previous article I did compare measuring performance with quantum mechanics. This analogy does work surprising well. When you measure a quantum system there is a lower limit how accurately you can measure something. The Heisenberg uncertainty relation does tell us that you cannot measure of a quantum system the impulse and location of a particle at the same time with infinite accuracy. For programmers the two variables are execution time and memory allocations. If you try to measure the timings of all methods in your application you will need to store them somewhere. The fastest storage space besides the CPU cache is the memory. But if your timing values do consume all available memory there is no memory left for the actual application to run. On the other hand if you try to record all memory allocations of your application you will also need to store the data somewhere. This will cost you memory and execution time. These constraints are always there and regardless how good the marketing of tool vendors for performance and memory profilers are: Any measurement will disturb the system in a non predictable way. Commercial tool vendors will tell you they do calculate this overhead and subtract it from the measured values to give you the most accurate values but in reality it is not entirely true. After falling into the trap to trust the profiler timings several times I have got into the habit to Measure with a profiler to get an idea where potential bottlenecks are. 
Measure again with tracing only the specific methods to check if this method is really worth optimizing. Optimize it Measure again. Be surprised that your optimization has made things worse. Think harder Implement something that really works. Measure again Finished! - Or look for the next bottleneck. Recently I have looked into issues with serialization performance. For serialization DataContractSerializer was used and I was not sure if XML is really the most optimal wire format. After looking around I have found protobuf-net which uses Googles Protocol Buffer format which is a compact binary serialization format. What is good for Google should be good for us. A small sample app to check out performance was a matter of minutes: using ProtoBuf; using System; using System.Diagnostics; using System.IO; using System.Reflection; using System.Runtime.Serialization; [DataContract, Serializable] class Data { [DataMember(Order=1)] public int IntValue { get; set; } [DataMember(Order = 2)] public string StringValue { get; set; } [DataMember(Order = 3)] public bool IsActivated { get; set; } [DataMember(Order = 4)] public BindingFlags Flags { get; set; } } class Program { static MemoryStream _Stream = new MemoryStream(); static MemoryStream Stream { get { _Stream.Position = 0; _Stream.SetLength(0); return _Stream; } } static void Main(string[] args) { DataContractSerializer ser = new DataContractSerializer(typeof(Data)); Data data = new Data { IntValue = 100, IsActivated = true, StringValue = "Hi this is a small string value to check if serialization does work as expected" }; var sw = Stopwatch.StartNew(); int Runs = 1000 * 1000; for (int i = 0; i < Runs; i++) { //ser.WriteObject(Stream, data); Serializer.Serialize<Data>(Stream, data); } sw.Stop(); Console.WriteLine("Did take {0:N0}ms for {1:N0} objects", sw.Elapsed.TotalMilliseconds, Runs); Console.ReadLine(); } } The results are indeed promising: Serializer Time in ms N objects protobuf-net   807 1000000 DataContract 4402 1000000 Nearly a factor 5 faster and a much more compact wire format. Lets use it! After switching over to protbuf-net the transfered wire data has dropped by a factor two (good) and the performance has worsened by nearly a factor two. How is that possible? We have measured it? Protobuf-net is much faster! As it turns out protobuf-net is faster but it has a cost: For the first time a type is de/serialized it does use some very smart code-gen which does not come for free. Lets try to measure this one by setting of our performance test app the Runs value not to one million but to 1. Serializer Time in ms N objects protobuf-net 85 1 DataContract 24 1 The code-gen overhead is significant and can take up to 200ms for more complex types. The break even point where the code-gen cost is amortized by its faster serialization performance is (assuming small objects) somewhere between 20.000-40.000 serialized objects. As it turned out my specific scenario involved about 100 types and 1000 serializations in total. That explains why the good old DataContractSerializer is not so easy to take out of business. The final approach I ended up was to reduce the number of types and to serialize primitive types via BinaryWriter directly which turned out to be a pretty good alternative. It sounded good until I measured again and found that my optimizations so far do not help much. After looking more deeper at the profiling data I did found that one of the 1000 calls did take 50% of the time. So how do I find out which call it was? 
Normal profilers do fail short at this discipline. A (totally undeserved) relatively unknown profiler is SpeedTrace which does unlike normal profilers create traces of your applications by instrumenting your IL code at runtime. This way you can look at the full call stack of the one slow serializer call to find out if this stack was something special. Unfortunately the call stack showed nothing special. But luckily I have my own tracing as well and I could see that the slow serializer call did happen during the serialization of a bool value. When you encounter after much analysis something unreasonable you cannot explain it then the chances are good that your thread was suspended by the garbage collector. If there is a problem with excessive GCs remains to be investigated but so far the serialization performance seems to be mostly ok.  When you do profile a complex system with many interconnected processes you can never be sure that the timings you just did measure are accurate at all. Some process might be hitting the disc slowing things down for all other processes for some seconds as well. There is a big difference between warm and cold startup. If you restart all processes you can basically forget the first run because of the OS disc cache, JIT and GCs make the measured timings very flexible. When you are in need of a random number generator you should measure cold startup times of a sufficiently complex system. After the first run you can try again getting different and much lower numbers. Now try again at least two times to get some feeling how stable the numbers are. Oh and try to do the same thing the next day. It might be that the bottleneck you found yesterday is gone today. Thanks to GC and other random stuff it can become pretty hard to find stuff worth optimizing if no big bottlenecks except bloatloads of code are left anymore. When I have found a spot worth optimizing I do make the code changes and do measure again to check if something has changed. If it has got slower and I am certain that my change should have made it faster I can blame the GC again. The thing is that if you optimize stuff and you allocate less objects the GC times will shift to some other location. If you are unlucky it will make your faster working code slower because you see now GCs at times where none were before. This is where the stuff does get really tricky. A safe escape hatch is to create a repro of the slow code in an isolated application so you can change things fast in a reliable manner. Then the normal profilers do also start working again. As Vance Morrison does point out it is much more complex to profile a system against the wall clock compared to optimize for CPU time. The reason is that for wall clock time analysis you need to understand how your system does work and which threads (if you have not one but perhaps 20) are causing a visible delay to the end user and which threads can wait a long time without affecting the user experience at all. Next time: Commercial profiler shootout.
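    Coming back to the protobuf-net break-even figure quoted above, it can be sanity-checked from the measurements themselves: per object, DataContractSerializer costs roughly 4402 ms / 1,000,000 ≈ 0.0044 ms and protobuf-net roughly 807 ms / 1,000,000 ≈ 0.0008 ms, a saving of about 0.0036 ms per object, while the one-off code-gen overhead is somewhere between ~60 ms (the 85 ms vs. 24 ms single-object run) and ~200 ms for complex types; 60 / 0.0036 ≈ 17,000 and 200 / 0.0036 ≈ 55,000 objects, which brackets the quoted 20,000-40,000. A cheap way to see the split in the sample app above is to time the first call separately from the loop; a sketch that slots into the sample's Main (the exact numbers will of course vary):

    ```csharp
    // Sketch: replaces the timing section of Main in the sample above, so the one-off
    // code-gen cost and the steady-state per-object cost are reported separately.
    var warmup = Stopwatch.StartNew();
    Serializer.Serialize<Data>(Stream, data);            // first call: includes protobuf-net code-gen
    warmup.Stop();

    int Runs = 1000 * 1000;
    var steady = Stopwatch.StartNew();
    for (int i = 0; i < Runs; i++)
    {
        Serializer.Serialize<Data>(Stream, data);        // steady-state serialization only
    }
    steady.Stop();

    Console.WriteLine("First call : {0:N2} ms", warmup.Elapsed.TotalMilliseconds);
    Console.WriteLine("Per object : {0:N5} ms", steady.Elapsed.TotalMilliseconds / Runs);
    ```

    When each type is only serialized a handful of times, as in the scenario described above, it can also be worth paying the code-gen cost up front (protobuf-net v2 exposes Serializer.PrepareSerializer<T>() for that) rather than letting the first serialization of every type absorb it.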

  • How to tell if SPARC T4 crypto is being used?

    - by danx
    A question that often comes up when running applications on SPARC T4 systems is "How can I tell if hardware crypto accleration is being used?" To review, the SPARC T4 processor includes a crypto unit that supports several crypto instructions. For hardware crypto these include 11 AES instructions, 4 xmul* instructions (for AES GCM carryless multiply), mont for Montgomery multiply (optimizes RSA and DSA), and 5 des_* instructions (for DES3). For hardware hash algorithm optimization, the T4 has the md5, sha1, sha256, and sha512 instructions (the last two are used for SHA-224 an SHA-384). First off, it's easy to tell if the processor T4 crypto instructions—use the isainfo -v command and look for "sparcv9" and "aes" (and other hash and crypto algorithms) in the output: $ isainfo -v 64-bit sparcv9 applications crc32c cbcond pause mont mpmul sha512 sha256 sha1 md5 camellia kasumi des aes ima hpc vis3 fmaf asi_blk_init vis2 vis popc These instructions are not-privileged, so are available for direct use in user-level applications and libraries (such as OpenSSL). Here is the "openssl speed -evp" command shown with the built-in t4 engine and with the pkcs11 engine. Both run the T4 AES instructions, but the t4 engine is faster than the pkcs11 engine because it has less overhead (especially for smaller packet sizes): t-4 $ /usr/bin/openssl version OpenSSL 1.0.0j 10 May 2012 t-4 $ /usr/bin/openssl engine (t4) SPARC T4 engine support (dynamic) Dynamic engine loading support (pkcs11) PKCS #11 engine support t-4 $ /usr/bin/openssl speed -evp aes-128-cbc # t4 engine used by default . . . The 'numbers' are in 1000s of bytes per second processed. type 16 bytes 64 bytes 256 bytes 1024 bytes 8192 bytes aes-128-cbc 487777.10k 816822.21k 986012.59k 1017029.97k 1053543.08k t-4 $ /usr/bin/openssl speed -engine pkcs11 -evp aes-128-cbc engine "pkcs11" set. . . . The 'numbers' are in 1000s of bytes per second processed. type 16 bytes 64 bytes 256 bytes 1024 bytes 8192 bytes aes-128-cbc 31703.58k 116636.39k 350672.81k 696170.50k 993599.49k Note: The "-evp" flag indicates use the OpenSSL "EnVeloPe" API, which gives more accurate results. That's because it tells OpenSSL to use the same API that external programs use when calling OpenSSL libcrypto functions, evp(3openssl). DTrace Shows if T4 Crypto Functions Are Used OK, good enough, the isainfo(1) command shows the instructions are present, but how does one know if they are being used? Chi-Chang Lin, who works on Oracle Solaris performance, wrote a Dtrace script to show if T4 instructions are being executed. To show the T4 instructions are being used, run the following Dtrace script. Look for functions named "t4" and "yf" in the output. The OpenSSL T4 engine uses functions named "t4" and the PKCS#11 engine uses functions named "yf". To demonstrate, I'll first run "openssl speed" with the built-in t4 engine then with the pkcs11 engine. The performance numbers are not valid due to dtrace probes slowing things down. t-4 # dtrace -Z -n ' pid$target::*yf*:entry,pid$target::*t4_*:entry{ @[probemod, probefunc] = count();}' \ -c "/usr/bin/openssl speed -evp aes-128-cbc" dtrace: description 'pid$target::*yf*:entry' matched 101 probes . . . 
dtrace: pid 2029 has exited libcrypto.so.1.0.0 ENGINE_load_t4 1 libcrypto.so.1.0.0 t4_DH 1 libcrypto.so.1.0.0 t4_DSA 1 libcrypto.so.1.0.0 t4_RSA 1 libcrypto.so.1.0.0 t4_destroy 1 libcrypto.so.1.0.0 t4_free_aes_ctr_NIDs 1 libcrypto.so.1.0.0 t4_init 1 libcrypto.so.1.0.0 t4_add_NID 3 libcrypto.so.1.0.0 t4_aes_expand128 5 libcrypto.so.1.0.0 t4_cipher_init_aes 5 libcrypto.so.1.0.0 t4_get_all_ciphers 6 libcrypto.so.1.0.0 t4_get_all_digests 59 libcrypto.so.1.0.0 t4_digest_final_sha1 65 libcrypto.so.1.0.0 t4_digest_init_sha1 65 libcrypto.so.1.0.0 t4_sha1_multiblock 126 libcrypto.so.1.0.0 t4_digest_update_sha1 261 libcrypto.so.1.0.0 t4_aes128_cbc_encrypt 1432979 libcrypto.so.1.0.0 t4_aes128_load_keys_for_encrypt 1432979 libcrypto.so.1.0.0 t4_cipher_do_aes_128_cbc 1432979 t-4 # dtrace -Z -n 'pid$target::*yf*:entry{ @[probemod, probefunc] = count();}   pid$target::*yf*:entry,pid$target::*t4_*:entry{ @[probemod, probefunc] = count();}' \ -c "/usr/bin/openssl speed -engine pkcs11 -evp aes-128-cbc" dtrace: description 'pid$target::*yf*:entry' matched 101 probes engine "pkcs11" set. . . . dtrace: pid 2033 has exited libcrypto.so.1.0.0 ENGINE_load_t4 1 libcrypto.so.1.0.0 t4_DH 1 libcrypto.so.1.0.0 t4_DSA 1 libcrypto.so.1.0.0 t4_RSA 1 libcrypto.so.1.0.0 t4_destroy 1 libcrypto.so.1.0.0 t4_free_aes_ctr_NIDs 1 libcrypto.so.1.0.0 t4_get_all_ciphers 1 libcrypto.so.1.0.0 t4_get_all_digests 1 libsoftcrypto.so.1 rijndael_key_setup_enc_yf 1 libsoftcrypto.so.1 yf_aes_expand128 1 libcrypto.so.1.0.0 t4_add_NID 3 libsoftcrypto.so.1 yf_aes128_cbc_encrypt 1542330 libsoftcrypto.so.1 yf_aes128_load_keys_for_encrypt 1542330 So, as shown above the OpenSSL built-in t4 engine executes t4_* functions (which are hand-coded assembly executing the T4 AES instructions) and the OpenSSL pkcs11 engine executes *yf* functions. Programmatic Use of OpenSSL T4 engine The OpenSSL t4 engine is used automatically with the /usr/bin/openssl command line. Chi-Chang Lin also points out that if you're calling the OpenSSL API (libcrypto.so) from a program, you must call ENGINE_load_built_engines(), otherwise the built-in t4 engine will not be loaded. You do not call ENGINE_set_default(). That's because "openssl speed -evp" test calls ENGINE_load_built_engines() even though the "-engine" option wasn't specified. OpenSSL T4 engine Availability The OpenSSL t4 engine is available with Solaris 11 and 11.1. For Solaris 10 08/11 (U10), you need to use the OpenSSL pkcs311 engine. The OpenSSL t4 engine is distributed only with the version of OpenSSL distributed with Solaris (and not third-party or self-compiled versions of OpenSSL). The OpenSSL engine implements the AES cipher for Solaris 11, released 11/2011. For Solaris 11.1, released 11/2012, the OpenSSL engine adds optimization for the MD5, SHA-1, and SHA-2 hash algorithms, and DES-3. Although the T4 processor has Camillia and Kasumi block cipher instructions, these are not implemented in the OpenSSL T4 engine. The following charts may help view availability of optimizations. The first chart shows what's available with Solaris CLIs and APIs, the second chart shows what's available in Solaris OpenSSL. Native Solaris Optimization for SPARC T4 This table is shows Solaris native CLI and API support. As such, they are all available with the OpenSSL pkcs11 engine. 
CLIs: "openssl -engine pkcs11", encrypt(1), decrypt(1), mac(1), digest(1), MD5sum(1), SHA1sum(1), SHA224sum(1), SHA256sum(1), SHA384sum(1), SHA512sum(1) APIs: PKCS#11 library libpkcs11(3LIB) (incluDES Openssl pkcs11 engine), libMD(3LIB), and Solaris kernel modules AlgorithmSolaris 1008/11 (U10)Solaris 11Solaris 11.1 AES-ECB, AES-CBC, AES-CTR, AES-CBC AES-CFB128 XXX DES3-ECB, DES3-CBC, DES2-ECB, DES2-CBC, DES-ECB, DES-CBC XXX bignum Montgomery multiply (RSA, DSA) XXX MD5, SHA-1, SHA-256, SHA-384, SHA-512 XXX SHA-224 X ARCFOUR (RC4) X Solaris OpenSSL T4 Engine Optimization This table is for the Solaris OpenSSL built-in t4 engine. Algorithms listed above are also available through the OpenSSL pkcs11 engine. CLI: openssl(1openssl) APIs: openssl(5), engine(3openssl), evp(3openssl), libcrypto crypto(3openssl) AlgorithmSolaris 11Solaris 11SRU2Solaris 11.1 AES-ECB, AES-CBC, AES-CTR, AES-CBC AES-CFB128 XXX DES3-ECB, DES3-CBC, DES-ECB, DES-CBC X bignum Montgomery multiply (RSA, DSA) X MD5, SHA-1, SHA-256, SHA-384, SHA-512 XX SHA-224 X Source Code Availability Solaris Most of the T4 assembly code that called the new T4 crypto instructions was written by Ferenc Rákóczi of the Solaris Security group, with assistance from others. You can download the Solaris source for this and other parts of Solaris as a few zip files at the Oracle Download website. The relevant source files are generally under directories usr/src/common/crypto/{aes,arcfour,des,md5,modes,sha1,sha2}}/sun4v/. and usr/src/common/bignum/sun4v/. Solaris 11 binary is available from the Oracle Solaris 11 download website. OpenSSL t4 engine The source for the OpenSSL t4 engine, which is based on the Solaris source above, is viewable through the OpenGrok source code browser in directory src/components/openssl/openssl-1.0.0/engines/t4 . You can download the source from the same website or through Mercurial source code management, hg(1). Conclusion Oracle Solaris with SPARC T4 provides a rich set of accelerated cryptographic and hash algorithms. Using the latest update, Solaris 11.1, provides the best set of optimized algorithms, but alternatives are often available, sometimes slightly slower, for releases back to Solaris 10 08/11 (U10). Reference See also these earlier blogs. SPARC T4 OpenSSL Engine by myself, Dan Anderson (2011), discusses the Openssl T4 engine and reviews the SPARC T4 processor for the Solaris 11 release. Exciting Crypto Advances with the T4 processor and Oracle Solaris 11 by Valerie Fenwick (2011) discusses crypto algorithms that were optimized for the T4 processor with the Solaris 11 FCS (11/11) and Solaris 10 08/11 (U10) release. T4 Crypto Cheat Sheet by Stefan Hinker (2012) discusses how to make T4 crypto optimization available to various consumers (such as SSH, Java, OpenSSL, Apache, etc.) High Performance Security For Oracle Database and Fusion Middleware Applications using SPARC T4 (PDF, 2012) discusses SPARC T4 and its usage to optimize application security. Configuring Oracle iPlanet WebServer / Oracle Traffic Director to use crypto accelerators on T4-1 servers by Meena Vyas (2012)

  • Table Variables: an empirical approach.

    - by Phil Factor
    It isn’t entirely a pleasant experience to publish an article only to have it described on Twitter as ‘Horrible’, and to have it criticized on the MVP forum. When this happened to me in the aftermath of publishing my article on Temporary tables recently, I was taken aback, because these critics were experts whose views I respect. What was my crime? It was, I think, to suggest that, despite the obvious quirks, it was best to use Table Variables as a first choice, and to use local Temporary Tables if you hit problems due to these quirks, or if you were doing complex joins using a large number of rows. What are these quirks? Well, table variables have advantages if they are used sensibly, but this requires some awareness by the developer about the potential hazards and how to avoid them. You can be hit by a badly-performing join involving a table variable. Table Variables are a compromise, and this compromise doesn’t always work out well. Explicit indexes aren’t allowed on Table Variables, so one cannot use covering indexes or non-unique indexes. The query optimizer has to make assumptions about the data rather than using column distribution statistics when a table variable is involved in a join, because there aren’t any column-based distribution statistics on a table variable. It assumes a reasonably even distribution of data, and is likely to have little idea of the number of rows in the table variables that are involved in queries. However complex the heuristics that are used might be in determining the best way of executing a SQL query, and they most certainly are, the Query Optimizer is likely to fail occasionally with table variables, under certain circumstances, and produce a Query Execution Plan that is frightful. The experienced developer or DBA will be on the lookout for this sort of problem. In this blog, I’ll be expanding on some of the tests I used when writing my article to illustrate the quirks, and include a subsequent example supplied by Kevin Boles. A simplified example. We’ll start out by illustrating a simple example that shows some of these characteristics. We’ll create two tables filled with random numbers and then see how many matches we get between the two tables. We’ll forget indexes altogether for this example, and use heaps. We’ll try the same Join with two table variables, two table variables with OPTION (RECOMPILE) in the JOIN clause, and with two temporary tables. It is all a bit jerky because of the granularity of the timing that isn’t actually happening at the millisecond level (I used DATETIME). However, you’ll see that the table variable is outperforming the local temporary table up to 10,000 rows. Actually, even without a use of the OPTION (RECOMPILE) hint, it is doing well. What happens when your table size increases? The table variable is, from around 30,000 rows, locked into a very bad execution plan unless you use OPTION (RECOMPILE) to provide the Query Analyser with a decent estimation of the size of the table. However, if it has the OPTION (RECOMPILE), then it is smokin’. Well, up to 120,000 rows, at least. It is performing better than a Temporary table, and in a good linear fashion. What about mixed table joins, where you are joining a temporary table to a table variable? You’d probably expect that the query analyzer would throw up its hands and produce a bad execution plan as if it were a table variable. After all, it knows nothing about the statistics in one of the tables so how could it do any better? 
Well, it behaves as if it were doing a recompile. And an explicit recompile adds no value at all. (we just go up to 45000 rows since we know the bigger picture now)   Now, if you were new to this, you might be tempted to start drawing conclusions. Beware! We’re dealing with a very complex beast: the Query Optimizer. It can come up with surprises What if we change the query very slightly to insert the results into a Table Variable? We change nothing else and just measure the execution time of the statement as before. Suddenly, the table variable isn’t looking so much better, even taking into account the time involved in doing the table insert. OK, if you haven’t used OPTION (RECOMPILE) then you’re toast. Otherwise, there isn’t much in it between the Table variable and the temporary table. The table variable is faster up to 8000 rows and then not much in it up to 100,000 rows. Past the 8000 row mark, we’ve lost the advantage of the table variable’s speed. Any general rule you may be formulating has just gone for a walk. What we can conclude from this experiment is that if you join two table variables, and can’t use constraints, you’re going to need that Option (RECOMPILE) hint. Count Dracula and the Horror Join. These tables of integers provide a rather unreal example, so let’s try a rather different example, and get stuck into some implicit indexing, by using constraints. What unusual words are contained in the book ‘Dracula’ by Bram Stoker? Here we get a table of all the common words in the English language (60,387 of them) and put them in a table. We put them in a Table Variable with the word as a primary key, a Table Variable Heap and a Table Variable with a primary key. We then take all the distinct words used in the book ‘Dracula’ (7,558 of them). We then create a table variable and insert into it all those uncommon words that are in ‘Dracula’. i.e. all the words in Dracula that aren’t matched in the list of common words. To do this we use a left outer join, where the right-hand value is null. The results show a huge variation, between the sublime and the gorblimey. If both tables contain a Primary Key on the columns we join on, and both are Table Variables, it took 33 Ms. If one table contains a Primary Key, and the other is a heap, and both are Table Variables, it took 46 Ms. If both Table Variables use a unique constraint, then the query takes 36 Ms. If neither table contains a Primary Key and both are Table Variables, it took 116383 Ms. Yes, nearly two minutes!! If both tables contain a Primary Key, one is a Table Variables and the other is a temporary table, it took 113 Ms. If one table contains a Primary Key, and both are Temporary Tables, it took 56 Ms.If both tables are temporary tables and both have primary keys, it took 46 Ms. Here we see table variables which are joined on their primary key again enjoying a  slight performance advantage over temporary tables. Where both tables are table variables and both are heaps, the query suddenly takes nearly two minutes! So what if you have two heaps and you use option Recompile? If you take the rogue query and add the hint, then suddenly, the query drops its time down to 76 Ms. If you add unique indexes, then you've done even better, down to half that time. Here are the text execution plans.So where have we got to? Without drilling down into the minutiae of the execution plans we can begin to create a hypothesis. 
If you are using table variables, and your tables are relatively small, they are faster than temporary tables, but as the number of rows increases you need to do one of two things: either you need to have a primary key on the column you are using to join on, or else you need to use option (RECOMPILE) If you try to execute a query that is a join, and both tables are table variable heaps, you are asking for trouble, well- slow queries, unless you give the table hint once the number of rows has risen past a point (30,000 in our first example, but this varies considerably according to context). Kevin’s Skew In describing the table-size, I used the term ‘relatively small’. Kevin Boles produced an interesting case where a single-row table variable produces a very poor execution plan when joined to a very, very skewed table. In the original, pasted into my article as a comment, a column consisted of 100000 rows in which the key column was one number (1) . To this was added eight rows with sequential numbers up to 9. When this was joined to a single-tow Table Variable with a key of 2 it produced a bad plan. This problem is unlikely to occur in real usage, and the Query Optimiser team probably never set up a test for it. Actually, the skew can be slightly less extreme than Kevin made it. The following test showed that once the table had 54 sequential rows in the table, then it adopted exactly the same execution plan as for the temporary table and then all was well. Undeniably, real data does occasionally cause problems to the performance of joins in Table Variables due to the extreme skew of the distribution. We've all experienced Perfectly Poisonous Table Variables in real live data. As in Kevin’s example, indexes merely make matters worse, and the OPTION (RECOMPILE) trick does nothing to help. In this case, there is no option but to use a temporary table. However, one has to note that once the slight de-skew had taken place, then the plans were identical across a huge range. Conclusions Where you need to hold intermediate results as part of a process, Table Variables offer a good alternative to temporary tables when used wisely. They can perform faster than a temporary table when the number of rows is not great. For some processing with huge tables, they can perform well when only a clustered index is required, and when the nature of the processing makes an index seek very effective. Table Variables are scoped to the batch or procedure and are unlikely to hang about in the TempDB when they are no longer required. They require no explicit cleanup. Where the number of rows in the table is moderate, you can even use them in joins as ‘Heaps’, unindexed. Beware, however, since, as the number of rows increase, joins on Table Variable heaps can easily become saddled by very poor execution plans, and this must be cured either by adding constraints (UNIQUE or PRIMARY KEY) or by adding the OPTION (RECOMPILE) hint if this is impossible. Occasionally, the way that the data is distributed prevents the efficient use of Table Variables, and this will require using a temporary table instead. Tables Variables require some awareness by the developer about the potential hazards and how to avoid them. If you are not prepared to do any performance monitoring of your code or fine-tuning, and just want to pummel out stuff that ‘just runs’ without considering namby-pamby stuff such as indexes, then stick to Temporary tables. 
If you are likely to slosh about large numbers of rows in temporary tables without considering the niceties of processing just what is required and no more, then temporary tables provide a safer and less fragile means-to-an-end for you.

  • CodePlex Daily Summary for Monday, October 24, 2011

    CodePlex Daily Summary for Monday, October 24, 2011Popular ReleasesPeople's Note: People's Note 0.31: Added note tag editing. Changed note edit conflict resolution to keep the latest version. To install: copy the appropriate CAB file onto your WM device and run it.Windows Azure Toolkit for Windows Phone: Windows Azure Toolkit for Windows Phone v1.3.1: Upgraded Windows Azure projects to Windows Azure Tools for Microsoft Visual Studio 2010 1.5 – September 2011 Upgraded the tools tools to support the Windows Phone Developer Tools RTW Update SQL Azure only scenarios to use ASP.NET Universal Providers (through the System.Web.Providers v1.0.1 NuGet package) Changed Shared Access Signature service interface to support more operations Refactored Blobs API to have a similar interface and usage to that provided by the Windows Azure SDK Stor...Workflow Automation (for Dynamics CRM 2011): Release 1.0: Initial release version 1.0.Window Manipulation with the Microsoft Touch Mouse: Window Touch v1.0: This is the initial release of the Window Touch software, which may have bugs and incomplete interactions. Please be patient with us as we work out all of the kinks, and feel free to send comments. To install and run the program download and double click the .msi file below. Make sure you already have a Microsoft Touch Mouse and can use it before installing.xUnit.net Contrib: xunitcontrib-resharper 0.4.4 (dotCover): xunitcontrib release 0.4.4 (ReSharper runner) This release provides a test runner plugin for Resharper 6.0 RTM, targetting all versions of xUnit.net. (See the xUnit.net project to download xUnit.net itself.) This release addresses the following issues:Support for dotCover code coverage 4132 Note that this build work against ALL VERSIONS of xunit. The files are compiled against xunit.dll 1.8 - DO NOT REPLACE THIS FILE. Thanks to xunit's version independent runner system, this package can r...BookShop: BookShop: BookShop WP7 clientRibbon Editor for Microsoft Dynamics CRM 2011: Ribbon Editor (0.1.2122.266): Added CodePlex and PayPal links New icon Bug fix: can't connect to an IFD deployment when the discovery service url has been customizedSiteMap Editor for Microsoft Dynamics CRM 2011: SiteMap Editor (1.0.921.340): Added CodePlex and PayPal links New iconMVCQuick: MVCQuick 0.3.1: Features??NHibernate 3.2??Repository(ORuM) ??Spring.Net 1.3.2??Container(IoC) ??Common.Logging 1.2??Logging ASP.NET Security Provider?? ??MVCQuick.Framework??MusicStoreDotNet.Framework.Common: DotNet.Framework.Common 4.0: ??????????,????????????XML Explorer: XML Explorer 4.0.5: Changes in 4.0.5: Added 'Copy Attribute XPath to Address Bar' feature. Added methods for decoding node text and value from Base64 encoded strings, and copying them to the clipboard. Added 'ChildNodeDefinitions' to the options, which allows for easier navigation of parent-child and ID-IDREF relationships. Discovery happens on-demand, as nodes are expanded and child nodes are added. Nodes can now have 'virtual' child nodes, defined by an xpath to select an identifier (usually relative to ...Media Companion: MC 3.419b Weekly: A couple of minor bug fixes, but the important fix in this release is to tackle the extremely long load times for users with large TV collections (issue #130). A note has been provided by developer Playos: "One final note, you will have to suffer one final long load and then it should be fixed... alternatively you can delete the TvCache.xml and rebuild your library... 
The fix was to include the file extension so it doesn't have to look for the video file (checking to see if a file exists is a...CODE Framework: 4.0.11021.0: This build adds a lot of our WPF components, including our MVVC and MVC components as well as a "Metro" and "Battleship" style.GridLibre para Visual FoxPro: GridLibre para Visual FoxPro v3.5: GridLibre Para Visual FoxPro: esta herramienta ayudara a los usuarios y programadores en los manejos de los datos, como Filtrar, multiseleccion y el autoformato a las columnas como la asignacion del controlsource.Umbraco CMS: Umbraco 5.0 CMS Alpha 3: Umbraco 5 Alpha 3Umbraco 5 (aka Jupiter) will be the next version of everyone's favourite, friendly ASP.NET CMS that already powers over 100,000 websites worldwide. Try out the Alpha of v5 today! If you're new to Umbraco and would like to get a low-down on our popular and easy-to-learn approach to content management, check out our intro video. What's Alpha 3?This is our third Alpha release. It's intended for developers looking to become familiar with the codebase & architecture, or for thos...Vkontakte WP: Vkontakte: source codeWay2Sms Applications for Android, Desktop/Laptop & Java enabled phones: Way2SMS Desktop App v2.0: 1. Fixed issue with sending messages due to changes to Way2Sms site 2. Updated the character limit to 160 from 140GART - Geo Augmented Reality Toolkit: 1.0.1: About Release 1.0.1 Release 1.0.1 is a service release that addresses several issues and improves performance. As always, check the Documentation tab for instructions on how to get started. If you don't have the Windows Phone SDK yet, grab it here. Breaking Change Please note: There is a breaking change in this release. As noted below, the WorldCalculationMode property of ARItem has been replaced by a user-definable function. ARItem is now automatically wired up with a function that perform...Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.32: Fix for issue #16710 - string literals in "constant literal operations" which contain ASP.NET substitutions should not be considered "constant." Move the JS1284 error (Misplaced Function Declaration) so it only fires when in strict mode. I got a couple complaints that people didn't like that error popping up in their existing code when they could verify that the location of that function, although not strict JS, still functions as expected cross-browser.Naked Objects: Naked Objects Release 4.0.110.0: Corresponds to the packaged version 4.0.110.0 available via NuGet. Please note that the easiest way to install and run the Naked Objects Framework is via the NuGet package manager: just search the Official NuGet Package Source for 'nakedobjects'. It is only necessary to download the source code (from here) if you wish to modify or re-build the framework yourself. If you do wish to re-build the framework, consul the file HowToBuild.txt in the release. Documentation Please note that after ...New Projects360zebra4en: 360zebra???AG: Web Framework that can leverage silverlight, but fall back on native html if silverlight is not available.BookShop: BookShop ????? ???? ? ????????????? 
?????????.CompendiumImport: Import data from Dungeons & Dragons Compendium to Masterplan librariesCS 6235 Arbiter Server: CS 6235 Arbiter ServerDual numbers for automatic differentiation: Dual numbers can be used to automatically calculate numerically stable derivatives of functions.Effort - Entity Framework Unit Testing Tool: Effort is a powerful unit testing tool that brings an easy way to create unit tests for Entity Framework based applications. It can manipulate the behavior of EntityConnection or ObjectContext objects, so that the data operations are executed by a fake in-process database, while omitting the real database completely. This mechanism makes possible to remove the dependency between your unit test and the real database.FlexiCache for ASP.NET applications: This library provides extended cache capabilities to the ASP.NET applications. It includes the MongoDB and SQL Server output cache providers extending ASP.NET Output Cache capabilities by allowing to store cached data outside of the application process that is especially important in web-farm scenario. This library provides "Session-On-Demand" functionality - ability to separate ASP.NET Session data to subsets that can be stored outside of the main ASP.NET session and loaded on demand w...HtmlAgilityPackContrib - Logical extension to HtmlAgilityPack: HtmlAgilityPackContrib - A logical extension to HtmlAgilityPack to parse HTML using jQuery like methods inspired by jSoupjsonhttphandler: The JsonHttpHandler is a simple JSON oriented Http handler to easily integrate JSON GET, POST, and JSONP web services into your application.My Recent Documents: This Webpart for Sharepoint 2010 is developed for users that need help finding the last number of documents they have been working on. The target user have trouble location where he/she placed their documents. This Webpart gives the following features... 1. Locate your last edited documents.. 2. Customize how many documents the webpart should find. 3. Should it look in subsites also? 4. Show the structure in where the documents are located and click easy link to either document lib...Option pricing for arbitrary distributions: European option pricing with arbitrary distributions using dual numbers to calculate greeks.Project Tracy: ????????? ????????? ????????10??????? ???????????????????????。。。。。 ???!Tracy?????????????!(?RegSharp: RegSharp provides server functionality for Sencha Extjs (http://www.sencha.com/products/extjs/) paging grid. RegSharp implements sorting, filtering and paging logic on the server side that is required for using the paging grid. Is't developed in C#SharePoint/TFS Continuous Integration Starter Pack: Provides a customized TFS Build workflow and PowerShell scripts to get started with Continuous Integration (automated builds) in SharePoint 2010/TFS 2010. This pack will allow you to automatically build and deploy WSPs using TFS, and optionally also include automated testing as part of the build such as Visual Studio 2010's Coded UI Tests. Sharp Investor: Sharp Investor pools various online sources as well as preforms a couple local calculation to return recommended stocks. The program is easy to use, and requires very little work to find profitable stocks online. 
It's developed in C#.SharpXML: Aims to be a simpler library for interacting with XML files that exposes attributes and children elements as properties and objects respectively.Splash: Splash is an interactive MediaCenter-style YouTube client written in WPF.TomCdc: TomCdc is a solution which makes tracking of sql databse changes easy. Quick and simple installation process allows to start using the solution in just a few minutes. It supports all versions of Microsoft Sql server. You'll no longer have to write triggers manually to find out what process is changing data in a table.Track your work: Another work-time tracker TumblePower: TumblePower is a simplified API library for Tumblr. Use it in your application to post Text, Photos, Quotes, Links, Chats, Audio and Videos as well as set functions such as tags, dates and choose whether the post should be private or not.VisualBASH: This project aims to extends Visual Studio's language support to include bash scripting. This includes syntax highlighting, code completion, and syntax error checking.Window Manipulation with the Microsoft Touch Mouse: Window Manipulation for the Microsoft Touch Mouse provides a set of simple gestures for moving and resizing windows.Workflow Automation (for Dynamics CRM 2011): Workflow Automation for Dynamics CRM 2011 allows user to automate or schedule workflow execution via Windows Task Scheduler. XNA Model Viewer: The XNA Model Viewer allows you to load FBX files and view them. It allows you to test that models will work in XNA, determine the effect of modifying bone transforms, and view animation clips. You can examine the bones and meshes and see the complete hierarchy.

    Read the article

  • Which programming idiom to choose for this open source library?

    - by Walkman
    I have an interesting question about which programming idiom is easier to use for beginner developers writing concrete file parsing classes. I'm developing an open source library, which one of the main functionality is to parse plain text files and get structured information from them. All of the files contains the same kind of information, but can be in different formats like XML, plain text (each of them is structured differently), etc. There are a common set of information pieces which is the same in all (e.g. player names, table names, some id numbers) There are formats which are very similar to each other, so it's possible to define a common Base class for them to facilitate concrete format parser implementations. So I can clearly define base classes like SplittablePlainTextFormat, XMLFormat, SeparateSummaryFormat, etc. Each of them hints the kind of structure they aim to parse. All of the concrete classes should have the same information pieces, no matter what. To be useful at all, this library needs to define at least 30-40 of these parsers. A couple of them are more important than others (obviously the more popular formats). Now my question is, which is the best programming idiom to choose to facilitate the development of these concrete classes? Let me explain: I think imperative programming is easy to follow even for beginners, because the flow is fixed, the statements just come one after another. Right now, I have this: class SplittableBaseFormat: def parse(self): "Parses the body of the hand history, but first parse header if not yet parsed." if not self.header_parsed: self.parse_header() self._parse_table() self._parse_players() self._parse_button() self._parse_hero() self._parse_preflop() self._parse_street('flop') self._parse_street('turn') self._parse_street('river') self._parse_showdown() self._parse_pot() self._parse_board() self._parse_winners() self._parse_extra() self.parsed = True So the concrete parser need to define these methods in order in any way they want. Easy to follow, but takes longer to implement each individual concrete parser. So what about declarative? In this case Base classes (like SplittableFormat and XMLFormat) would do the heavy lifting based on regex and line/node number declarations in the concrete class, and concrete classes have no code at all, just line numbers and regexes, maybe other kind of rules. Like this: class SplittableFormat: def parse_table(): "Parses TABLE_REGEX and get information" # set attributes here def parse_players(): "parses PLAYER_REGEX and get information" # set attributes here class SpecificFormat1(SplittableFormat): TABLE_REGEX = re.compile('^(?P<table_name>.*) other info \d* etc') TABLE_LINE = 1 PLAYER_REGEX = re.compile('^Player \d: (?P<player_name>.*) has (.*) in chips.') PLAYER_LINE = 16 class SpecificFormat2(SplittableFormat): TABLE_REGEX = re.compile(r'^Tournament #(\d*) (?P<table_name>.*) other info2 \d* etc') TABLE_LINE = 2 PLAYER_REGEX = re.compile(r'^Seat \d: (?P<player_name>.*) has a stack of (\d*)') PLAYER_LINE = 14 So if I want to make it possible for non-developers to write these classes the way to go seems to be the declarative way, however, I'm almost certain I can't eliminate the declarations of regexes, which clearly needs (senior :D) programmers, so should I care about this at all? Do you think it matters to choose one over another or doesn't matter at all? Maybe if somebody wants to work on this project, they will, if not, no matter which idiom I choose. 
Can I "convert" non-programmers to help developing these? What are your observations? Other considerations: Imperative will allow any kind of work; there is a simple flow, which they can follow but inside that, they can do whatever they want. It would be harder to force a common interface with imperative because of this arbitrary implementations. Declarative will be much more rigid, which is a bad thing, because formats might change over time without any notice. Declarative will be harder for me to develop and takes longer time. Imperative is already ready to release. I hope a nice discussion will happen in this thread about programming idioms regarding which to use when, which is better for open source projects with different scenarios, which is better for wide range of developer skills. TL; DR: Parsing different file formats (plain text, XML) They contains same kind of information Target audience: non-developers, beginners Regex probably cannot be avoided 30-40 concrete parser classes needed Facilitate coding these concrete classes Which idiom is better?

    Read the article

  • Parallel Programming in .NET Framework 4 – Part II

    - by anobre
    Hello everyone, how are you? This post is a continuation of the series started in an earlier post about parallel programming. My goal today is to introduce PLINQ, something you can start using in your projects right away.

    Parallel LINQ (PLINQ)
    PLINQ is nothing more than a parallel-programming implementation of our well-known LINQ, provided through extension methods. LINQ was released with version 3.0 of the .NET platform and introduced a much easier and safer way to manipulate IEnumerable and IEnumerable<T> collections. What we will look at today is the parallel counterpart of LINQ to Objects, which targets collections of objects in memory. The main difference between "regular" LINQ to Objects and the parallel version is that with the latter the processing tries to use all available resources, which can deliver a significant performance improvement.

    CAUTION: Not every operation gets faster when parallelism is used. Be sure to read the "Performance" section below.

    ParallelEnumerable
    Everything we need for this post is organized in the ParallelEnumerable class. This class contains the methods we will use in this post, and many more: AsParallel, AsSequential, AsOrdered, AsUnordered, WithCancellation, WithDegreeOfParallelism, WithMergeOptions, WithExecutionMode, ForAll, ...

    The most basic example of running PLINQ code uses the AsParallel method, as in this example:

    var source = Enumerable.Range(1, 10000);
    var evenNums = from num in source.AsParallel()
                   where Compute(num) > 0
                   select num;

    Just as interesting as this convenience is the fact that PLINQ does not always execute in parallel. Depending on the situation and on an analysis of several aspects of the execution scenario, it may be more appropriate to run the code sequentially, and PLINQ natively makes this choice on its own. It is possible to force execution to always use parallelism if necessary; use the WithExecutionMode method in your PLINQ code. A very simple test where we can see the difference is shown below:

    static void Main(string[] args)
    {
        IEnumerable<int> numbers = Enumerable.Range(1, 1000);
        IEnumerable<int> results = from n in numbers.AsParallel()
                                   where IsDivisibleByFive(n)
                                   select n;
        Stopwatch sw = Stopwatch.StartNew();
        IList<int> resultsList = results.ToList();
        Console.WriteLine("{0} itens", resultsList.Count());
        sw.Stop();
        Console.WriteLine("Tempo de execução: {0} ms", sw.ElapsedMilliseconds);
        Console.WriteLine("Fim...");
        Console.ReadKey(true);
    }

    static bool IsDivisibleByFive(int i)
    {
        Thread.SpinWait(2000000);
        return i % 5 == 0;
    }

    Just remove AsParallel from the LINQ statement and you will get a practical feel for the performance difference.
    1. Statement using AsParallel
    2. Statement without parallelism

    Performance
    Despite all the benefits, we cannot use PLINQ without knowing all of its details. Remember to ask the basic questions: Do I have enough work to justify parallelism? Even with PLINQ's overhead, will we see any benefit? For this reason, visit this link and get to know all the aspects before using these features.

    Conclusion
    Using parallelism is great: it increases performance and makes use of the investment made in hardware, all without a productivity cost. However, we cannot take advantage of any technology without knowing it in depth first. So make good use of it, but do not forget to keep knowledge ahead of excitement. Cheers.

    Read the article

  • Initializing and drawing a mesh using OpenTK

    - by Boreal
    I'm implementing a "Mesh" class to use in my OpenTK game. You pass in a vertex array and an index array, and then you can call Mesh.Draw() to draw it using a shader. I've heard VBO's and VAO's are the way to go for this approach, but nowhere have I found a guide that shows how to get Data Video Memory Shader. Can someone give me a quick rundown of how this works? EDIT: So far, I have this: struct Vertex { public Vector3 position; public Vector3 normal; public Vector3 color; public static int memSize = 9 * sizeof(float); public static byte[] memOffset = { 0, 3 * sizeof(float), 6 * sizeof(float) }; } class Mesh { private uint vbo; private uint ibo; // stores the numbers of vertices and indices private int numVertices; private int numIndices; public Mesh(int numVertices, Vertex[] vertices, int numIndices, ushort[] indices) { // set numbers this.numVertices = numVertices; this.numIndices = numIndices; // generate buffers GL.GenBuffers(1, out vbo); GL.GenBuffers(1, out ibo); GL.BindBuffer(BufferTarget.ArrayBuffer, vbo); GL.BindBuffer(BufferTarget.ElementArrayBuffer, ibo); // send data to the buffers GL.BufferData(BufferTarget.ArrayBuffer, new IntPtr(Vertex.memSize * numVertices), vertices, BufferUsageHint.StaticDraw); GL.BufferData(BufferTarget.ElementArrayBuffer, new IntPtr(sizeof(ushort) * numIndices), indices, BufferUsageHint.StaticDraw); } public void Render() { // bind buffers GL.BindBuffer(BufferTarget.ArrayBuffer, vbo); GL.BindBuffer(BufferTarget.ElementArrayBuffer, ibo); // define offsets GL.VertexPointer(3, VertexPointerType.Float, Vertex.memSize, new IntPtr(Vertex.memOffset[0])); GL.NormalPointer(NormalPointerType.Float, Vertex.memSize, new IntPtr(Vertex.memOffset[1])); GL.ColorPointer(3, ColorPointerType.Float, Vertex.memSize, new IntPtr(Vertex.memOffset[2])); // draw GL.DrawElements(BeginMode.Triangles, numIndices, DrawElementsType.UnsignedInt, (IntPtr)0); } } class Application : GameWindow { Mesh triangle; protected override void OnLoad(EventArgs e) { base.OnLoad(e); GL.ClearColor(0.1f, 0.2f, 0.5f, 0.0f); GL.Enable(EnableCap.DepthTest); GL.Enable(EnableCap.VertexArray); GL.Enable(EnableCap.NormalArray); GL.Enable(EnableCap.ColorArray); Vertex v0 = new Vertex(); v0.position = new Vector3(-1.0f, -1.0f, 4.0f); v0.normal = new Vector3(0.0f, 0.0f, -1.0f); v0.color = new Vector3(1.0f, 1.0f, 0.0f); Vertex v1 = new Vertex(); v1.position = new Vector3(1.0f, -1.0f, 4.0f); v1.normal = new Vector3(0.0f, 0.0f, -1.0f); v1.color = new Vector3(1.0f, 0.0f, 0.0f); Vertex v2 = new Vertex(); v2.position = new Vector3(0.0f, 1.0f, 4.0f); v2.normal = new Vector3(0.0f, 0.0f, -1.0f); v2.color = new Vector3(0.2f, 0.9f, 1.0f); Vertex[] va = { v0, v1, v2 }; ushort[] ia = { 0, 1, 2 }; triangle = new Mesh(3, va, 3, ia); } protected override void OnRenderFrame(FrameEventArgs e) { base.OnRenderFrame(e); GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit); Matrix4 modelview = Matrix4.LookAt(Vector3.Zero, Vector3.UnitZ, Vector3.UnitY); GL.MatrixMode(MatrixMode.Modelview); GL.LoadMatrix(ref modelview); triangle.Render(); SwapBuffers(); } } It doesn't draw anything.

    Read the article

  • Don’t miss this very popular presentation on Punchout in iProcurement on June 26th 2012

    - by user793553
    Don’t miss this very popular presentation on Punchout in iProcurement on June 26th.  See Doc ID 1448447.1 for the Webcast details. ADVISOR WEBCAST: Punchout in iProcurement PRODUCT FAMILY: EBZs- Procurement   June 26, 2012 at 14:00 UK / 15:00 Cairo / 6:00 am Pacific / 7:00 am Mountain / 9:00 am Eastern This one-hour session is recommended for technical and functional users who are maintaining and/or implementing the Punchout from iProcurement. The session will provide an overview of the different Punchout model, setup, and the Punchout to PO xml/cxml cycle. Also, it will provide tips in troubleshooting the common issues when new supplier is added to Punchout or the existing one stops working. TOPICS WILL INCLUDE: Overview of the Punchout Models. Provide the knowledge in the Punchout to PO Process cycle. Demo - Punchout. Certificates and setup. Learn the common issues and how to address in an efficient way. (Documentation and Notes) A short, live demonstration (only if applicable) and question and answer period will be included. Oracle Advisor Webcasts are dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services. Current Schedule can be found on Note 740966.1 Post Presentation Recordings can be found on Note 740964.1 WebEx Conference Details Topic: Advisor Webcast - Punchout in iProcuremen Date and Time: Tuesday, June 26, 2012 3:00 pm, Egypt Time (Cairo, GMT+02:00) Tuesday, June 26, 2012 2:00 pm, GMT Summer Time (London, GMT+01:00) Tuesday, June 26, 2012 9:00 am, Eastern Daylight Time (New York, GMT-04:00) Tuesday, June 26, 2012 7:00 am, Mountain Daylight Time (Denver, GMT-06:00) Event number: 597 373 155 -------------------------------------------------------  To register for this meeting  -------------------------------------------------------  1. Event address for attendees: https://oracleaw.webex.com/oracleaw/onstage/g.php?d=597373155&t=a 2. Register for the meeting.  Once the host approves your request, you will receive a confirmation email with instructions for joining the meeting. InterCall Audio Instructions A list of Toll-Free Numbers can be found below. 
VOICESTREAMING IS AVAILABLE teleconference ID: 70528713 UK standard International:+44 1452 562 665 US Free Call: 1866 230 1938 US Local call: 1845 608 8023 Global Toll-Free Numbers MOS doc#:  https://metalink3.oracle.com/od/faces/secure/km/DocumentDisplay.jspx?id=1148600.1 Designation Number Argentina Free Call 0800 444 1009 Australia Free Call 1800 763 650 Austria Free Call 0800 111 956 Austria Local Call 0192 865 72 Belgium Free Call 0800 724 46 Belgium Local Call 0817 000 60 Brazil Free Call 0800 761 0835 Bulgaria Free Call 0080 011 511 76 Canada Free Call 1866 984 6577 Columbia Free Call 0180 091 562 17 Croatia Free Call 0800 222 305 Cyprus Free Call 8009 6341 Czech Republic Free Call 8007 007 95 Denmark Free Call 8088 8467 Denmark Local Call 3272 7506 Finland Free Call 0800 112 398 Finland Local Call 0923 114 014 France Free Call 0805 110 463 France Local Call 0359 580 290 Germany Free Call 0800 101 4918 Germany Local Call 0692 222 161 19 Greece Free Call 0080 012 8135 Hong Kong Free Call 8009 661 55 Hungary Free Call 0680 018 839 Hungary Local Call 0180 889 97 India Free Call 0008 001 006 600 Ireland Free Call 1800 300 170 Ireland Local Call 0143 198 35 Israel Free Call 1809 431 440 Italy Free Call 8007 840 87 Italy Local Call 0236 009 700 Japan Free Call 0066 338 124 31 Latvia Free Call 8000 3680 Luxembourg Free Call 8002 7941 Malaysia Free Call 1800 814 528 Mexico Free Call 0018 666 864 905 Monaco Free Call 8009 3655 Netherlands Free Call 0800 949 4596 Netherlands Local Call 0207 168 000 New Zealand Free Call 0800 451 190 North China Free Call 1080 074 413 29 Norway Free Call 8001 8057 Norway Local Call 2151 0847 Poland Free Call 0080 012 135 73 Portugal Free Call 8007 894 20 Romania Free Call 0800 895 558 Russia Free Call 8108 002 385 2044 Slovenia Free Call 0800 804 55 South Africa Free Call 0800 982 794 South China Free Call 1080 044 111 82 South Korea Free Call 0079 814 800 7887 Spain Free Call 9009 389 85 Spain Local Call 9111 421 10 Sweden Free Call 0200 214 344 Sweden Local Call 0850 596 375 Switzerland Free Call 0800 835 040 Switzerland Local Call 0445 804 280 Thailand Free Call 0018 004 421 98 UK Free Call 0800 073 1830 UK Local Call 0844 871 9364 UK National Call 0871 700 0309 UK Standard International +44 (0) 1452 562 665 USA Free Call 1866 230 1938   Back to the top   Copyright? 2010, Oracle. All rights reserved. Contact Us | Legal Notices and Terms of Use | Privacy Statement

    Read the article

  • ASP.NET Web API - Screencast series with downloadable sample code - Part 1

    - by Jon Galloway
    There's a lot of great ASP.NET Web API content on the ASP.NET website at http://asp.net/web-api. I mentioned my screencast series in original announcement post, but we've since added the sample code so I thought it was worth pointing the series out specifically. This is an introductory screencast series that walks through from File / New Project to some more advanced scenarios like Custom Validation and Authorization. The screencast videos are all short (3-5 minutes) and the sample code for the series is both available for download and browsable online. I did the screencasts, but the samples were written by the ASP.NET Web API team. So - let's watch them together! Grab some popcorn and pay attention, because these are short. After each video, I'll talk about what I thought was important. I'm embedding the videos using HTML5 (MP4) with Silverlight fallback, but if something goes wrong or your browser / device / whatever doesn't support them, I'll include the link to where the videos are more professionally hosted on the ASP.NET site. Note also if you're following along with the samples that, since Part 1 just looks at the File / New Project step, the screencast part numbers are one ahead of the sample part numbers - so screencast 4 matches with sample code demo 3. Note: I started this as one long post for all 6 parts, but as it grew over 2000 words I figured it'd be better to break it up. Part 1: Your First Web API [Video and code on the ASP.NET site] This screencast starts with an overview of why you'd want to use ASP.NET Web API: Reach more clients (thinking beyond the browser to mobile clients, other applications, etc.) Scale (who doesn't love the cloud?!) Embrace HTTP (a focus on HTTP both on client and server really simplifies and focuses service interactions) Next, I start a new ASP.NET Web API application and show some of the basics of the ApiController. We don't write any new code in this first step, just look at the example controller that's created by File / New Project. using System; using System.Collections.Generic; using System.Linq; using System.Net.Http; using System.Web.Http; namespace NewProject_Mvc4BetaWebApi.Controllers { public class ValuesController : ApiController { // GET /api/values public IEnumerable<string> Get() { return new string[] { "value1", "value2" }; } // GET /api/values/5 public string Get(int id) { return "value"; } // POST /api/values public void Post(string value) { } // PUT /api/values/5 public void Put(int id, string value) { } // DELETE /api/values/5 public void Delete(int id) { } } } Finally, we walk through testing the output of this API controller using browser tools. There are several ways you can test API output, including Fiddler (as described by Scott Hanselman in this post) and built-in developer tools available in all modern browsers. For simplicity I used Internet Explorer 9 F12 developer tools, but you're of course welcome to use whatever you'd like. A few important things to note: This class derives from an ApiController base class, not the standard ASP.NET MVC Controller base class. They're similar in places where API's and HTML returning controller uses are similar, and different where API and HTML use differ. A good example of where those things are different is in the routing conventions. In an HTTP controller, there's no need for an "action" to be specified, since the HTTP verbs are the actions. 
We don't need to do anything to map verbs to actions; when a request comes in to /api/values/5 with the DELETE HTTP verb, it'll automatically be handled by the Delete method in an ApiController. The comments above the API methods show sample URL's and HTTP verbs, so we can test out the first two GET methods by browsing to the site in IE9, hitting F12 to bring up the tools, and entering /api/values in the URL: That sample action returns a list of values. To get just one value back, we'd browse to /values/5: That's it for Part 1. In Part 2 we'll look at getting data (beyond hardcoded strings) and start building out a sample application.
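    As a quick follow-up, if you'd rather exercise these endpoints from code than from the browser's F12 tools, a plain HttpClient console client is enough. This is only a sketch: the port number and the JSON Accept header are assumptions, and this client is not part of the official sample download.

    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Threading.Tasks;

    class ApiValuesClient
    {
        // Assumes the File / New Project Web API site is running locally on this
        // port; adjust the base address to whatever IIS Express assigns you.
        private static readonly Uri BaseAddress = new Uri("http://localhost:12345/");

        static async Task Main()
        {
            using (var client = new HttpClient { BaseAddress = BaseAddress })
            {
                client.DefaultRequestHeaders.Accept.Add(
                    new MediaTypeWithQualityHeaderValue("application/json"));

                // GET /api/values -> ["value1","value2"]
                string all = await client.GetStringAsync("api/values");
                Console.WriteLine(all);

                // GET /api/values/5 -> "value"
                string one = await client.GetStringAsync("api/values/5");
                Console.WriteLine(one);
            }
        }
    }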

    Read the article

  • Restricting Input in HTML Textboxes to Numeric Values

    - by Rick Strahl
    Ok, here’s a fairly basic one – how to force a textbox to accept only numeric input. Somebody asked me this today on a support call so I did a few quick lookups online and found the solutions listed rather unsatisfying. The main problem with most of the examples I could dig up was that they only include numeric values, but that provides a rather lame user experience. You need to still allow basic operational keys for a textbox – navigation keys, backspace and delete, tab/shift tab and the Enter key - to work or else the textbox will feel very different than a standard text box. Yes there are plug-ins that allow masked input easily enough but most are fixed width which is difficult to do with plain number input. So I took a few minutes to write a small reusable plug-in that handles this scenario. Imagine you have a couple of textboxes on a form like this: <div class="containercontent"> <div class="label">Enter a number:</div> <input type="text" name="txtNumber1" id="txtNumber1" value="" class="numberinput" /> <div class="label">Enter a number:</div> <input type="text" name="txtNumber2" id="txtNumber2" value="" class="numberinput" /> </div> and you want to restrict input to numbers. Here’s a small .forceNumeric() jQuery plug-in that does what I like to see in this case: [Updated thanks to Elijah Manor for a couple of small tweaks for additional keys to check for] <script type="text/javascript"> $(document).ready(function () { $(".numberinput").forceNumeric(); }); // forceNumeric() plug-in implementation jQuery.fn.forceNumeric = function () { return this.each(function () { $(this).keydown(function (e) { var key = e.which || e.keyCode; if (!e.shiftKey && !e.altKey && !e.ctrlKey && // numbers key >= 48 && key <= 57 || // Numeric keypad key >= 96 && key <= 105 || // comma, period and minus key == 190 || key == 188 || key == 109 || // Backspace and Tab and Enter key == 8 || key == 9 || key == 13 || // Home and End key == 35 || key == 36 || // left and right arrows key == 37 || key == 39 || // Del and Ins key == 46 || key == 45) return true; return false; }); }); } </script> With the plug-in in place in your page or an external .js file you can now simply use a selector to apply it: $(".numberinput").forceNumeric(); The plug-in basically goes through each selected element and hooks up a keydown() event handler. When a key is pressed the handler is fired and the keyCode of the event object is sent. Recall that jQuery normalizes the JavaScript Event object between browsers. The code basically white-lists a few key codes and rejects all others. It returns true to indicate the keypress is to go through or false to eat the keystroke and not process it which effectively removes it. Simple and low tech, and it works without too much change of typical text box behavior.© Rick Strahl, West Wind Technologies, 2005-2011Posted in JavaScript  jQuery  HTML  

    Read the article

  • From NaN to Infinity...and Beyond!

    - by Tony Davis
    It is hard to believe that it was once possible to corrupt a SQL Server Database by storing perfectly normal data values into a table; but it is true. In SQL Server 2000 and before, one could inadvertently load invalid data values into certain data types via RPC calls or bulk insert methods rather than DML. In the particular case of the FLOAT data type, this meant that common 'special values' for this type, namely NaN (not-a-number) and +/- infinity, could be quite happily plugged into the database from an application and stored as 'out-of-range' values. This was like a time-bomb. When one then tried to query this data; the values were unsupported and so data pages containing them were flagged as being corrupt. Any query that needed to read a column containing the special value could fail or return unpredictable results. Microsoft even had to issue a hotfix to deal with failures in the automatic recovery process, caused by the presence of these NaN values, which rendered the whole database inaccessible! This problem is history for those of us on more current versions of SQL Server, but its ghost still haunts us. Recently, for example, a developer on Red Gate’s SQL Response team reported a strange problem when attempting to load historical monitoring data into a SQL Server 2005 database via the C# ADO.NET provider. The ratios used in some of their reporting calculations occasionally threw out NaN or infinity values, and the subsequent attempts to load these values resulted in a nasty error. It turns out to be a different manifestation of the same problem. SQL Server 2005 still does not fully support the IEEE 754 standard for floating point numbers, in that the FLOAT data type still cannot handle NaN or infinity values. Instead, they just added validation checks that prevent the 'invalid' values from being loaded in the first place. For people migrating from SQL Server 2000 databases that contained out-of-range FLOAT (or DATETIME etc.) data, to SQL Server 2005, Microsoft have added to the latter's version of the DBCC CHECKDB (or CHECKTABLE) command a DATA_PURITY clause. When enabled, this will seek out the corrupt data, but won’t fix it. You have to do this yourself in what can often be a slow, painful manual process. Our development team, after a quizzical shrug of the shoulders, simply decided to represent NaN and infinity values as NULL, and move on, accepting the minor inconvenience of not being able to tell them apart. However, what of scientific, engineering and other applications that really would like the luxury of being able to both store and access these perfectly-reasonable floating point data values? The sticking point seems to be the stipulation in the IEEE 754 standard that, when NaN is compared to any other value including itself, the answer is "unequal" (i.e. FALSE). This is clearly different from normal number comparisons and has repercussions for such things as indexing operations. Even so, this hardly applies to infinity values, which are single definite values. In fact, there is some encouraging talk in the Connect note on this issue that they might be supported 'in the SQL Server 2008 timeframe'. If didn't happen; SQL 2008 doesn't support NaN or infinity values, though one could be forgiven for thinking otherwise, based on the MSDN documentation for the FLOAT type, which states that "The behavior of float and real follows the IEEE 754 specification on approximate numeric data types". 
However, the truth is revealed in the XPath documentation, which states that "…float (53) is not exactly IEEE 754. For example, neither NaN (Not-a-Number) nor infinity is used…". Is it really so hard to fix this problem the right way, and properly support in SQL Server the IEEE 754 standard for the floating point data type, NaNs, infinities and all? Oracle seems to have managed it quite nicely with its BINARY_FLOAT and BINARY_DOUBLE types, so it is technically possible. We have an enterprise-class database that is marketed as being part of an 'integrated' Windows platform. Absurdly, we have .NET and XPath libraries that fully support the standard for floating point numbers, and we can't even properly store these values, let alone query them, in the SQL Server database! Cheers, Tony.
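    A guard of the kind the team described (mapping NaN and infinity to NULL before the value ever reaches the provider) might look like the sketch below. The table and column names are hypothetical, and this is an illustration of the pattern, not the actual SQL Response code.

    using System;
    using System.Data;
    using System.Data.SqlClient;

    class MonitoringWriter
    {
        // Hypothetical table and column, used only for illustration.
        private const string InsertSql =
            "INSERT INTO dbo.MonitoringSample (MetricRatio) VALUES (@ratio)";

        public static void SaveRatio(SqlConnection conn, double ratio)
        {
            using (var cmd = new SqlCommand(InsertSql, conn))
            {
                // SQL Server's FLOAT type rejects NaN and +/- infinity, so map
                // them to NULL before the value reaches the provider.
                object value = (double.IsNaN(ratio) || double.IsInfinity(ratio))
                    ? (object)DBNull.Value
                    : ratio;

                cmd.Parameters.Add("@ratio", SqlDbType.Float).Value = value;
                cmd.ExecuteNonQuery();
            }
        }
    }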

    Read the article

  • Speed up SQL Server queries with PREFETCH

    - by Akshay Deep Lamba
    Problem The SAN data volume has a throughput capacity of 400MB/sec; however my query is still running slow and it is waiting on I/O (PAGEIOLATCH_SH). Windows Performance Monitor shows data volume speed of 4MB/sec. Where is the problem and how can I find the problem? Solution This is another summary of a great article published by R. Meyyappan at www.sqlworkshops.com.  In my opinion, this is the first article that highlights and explains with working examples how PREFETCH determines the performance of a Nested Loop join.  First of all, I just want to recall that Prefetch is a mechanism with which SQL Server can fire up many I/O requests in parallel for a Nested Loop join. When SQL Server executes a Nested Loop join, it may or may not enable Prefetch accordingly to the number of rows in the outer table. If the number of rows in the outer table is greater than 25 then SQL will enable and use Prefetch to speed up query performance, but it will not if it is less than 25 rows. In this section we are going to see different scenarios where prefetch is automatically enabled or disabled. These examples only use two tables RegionalOrder and Orders.  If you want to create the sample tables and sample data, please visit this site www.sqlworkshops.com. The breakdown of the data in the RegionalOrders table is shown below and the Orders table contains about 6 million rows. In this first example, I am creating a stored procedure against two tables and then execute the stored procedure.  Before running the stored proceudre, I am going to include the actual execution plan. --Example provided by www.sqlworkshops.com --Create procedure that pulls orders based on City --Do not forget to include the actual execution plan CREATE PROC RegionalOrdersProc @City CHAR(20) AS BEGIN DECLARE @OrderID INT, @OrderDetails CHAR(200) SELECT @OrderID = o.OrderID, @OrderDetails = o.OrderDetails       FROM RegionalOrders ao INNER JOIN Orders o ON (o.OrderID = ao.OrderID)       WHERE City = @City END GO SET STATISTICS time ON GO --Example provided by www.sqlworkshops.com --Execute the procedure with parameter SmallCity1 EXEC RegionalOrdersProc 'SmallCity1' GO After running the stored procedure, if we right click on the Clustered Index Scan and click Properties we can see the Estimated Numbers of Rows is 24.    If we right click on Nested Loops and click Properties we do not see Prefetch, because it is disabled. This behavior was expected, because the number of rows containing the value ‘SmallCity1’ in the outer table is less than 25.   Now, if I run the same procedure with parameter ‘BigCity’ will Prefetch be enabled? --Example provided by www.sqlworkshops.com --Execute the procedure with parameter BigCity --We are using cached plan EXEC RegionalOrdersProc 'BigCity' GO As we can see from the below screenshot, prefetch is not enabled and the query takes around 7 seconds to execute. This is because the query used the cached plan from ‘SmallCity1’ that had prefetch disabled. Please note that even if we have 999 rows for ‘BigCity’ the Estimated Numbers of Rows is still 24.   Finally, let’s clear the procedure cache to trigger a new optimization and execute the procedure again. DBCC freeproccache GO EXEC RegionalOrdersProc 'BigCity' GO This time, our procedure runs under a second, Prefetch is enabled and the Estimated Number of Rows is 999.   The RegionalOrdersProc can be optimized by using the below example where we are using an optimizer hint. I have also shown some other hints that could be used as well. 
    --Example provided by www.sqlworkshops.com
    --You can fix the issue by using any of the following hints
    --Create procedure that pulls orders based on City
    DROP PROC RegionalOrdersProc
    GO
    CREATE PROC RegionalOrdersProc @City CHAR(20)
    AS
    BEGIN
    DECLARE @OrderID INT, @OrderDetails CHAR(200)
    SELECT @OrderID = o.OrderID, @OrderDetails = o.OrderDetails
          FROM RegionalOrders ao INNER JOIN Orders o ON (o.OrderID = ao.OrderID)
          WHERE City = @City
          --Hinting optimizer to use SmallCity2 for estimation
          OPTION (optimize FOR (@City = 'SmallCity2'))
          --Hinting optimizer to estimate for the current parameters
          --option (recompile)
          --Hinting optimizer not to use the histogram, but rather
          --density for estimation (average of all 3 cities)
          --option (optimize for (@City UNKNOWN))
          --option (optimize for UNKNOWN)
    END
    GO

    Conclusion: this tip was mainly aimed at illustrating how Prefetch can speed up query execution and how the number of rows in the outer table can trigger it.

    Read the article

  • Filtering data in LINQ with the help of where clause

    - by vik20000in
    LINQ has brought with it the power of querying objects, databases, XML, SharePoint, and nearly any other data structure. The power of LINQ lies in the fact that it is managed code that lets you write SQL-like queries to fetch data. Whenever we work with data we need a way to filter it based on different conditions. In this post we will look at some of the different ways in which we can filter data in LINQ with the help of the where clause.

    Simple filter for an array. Let's say we have an array of numbers and we want to filter it based on some condition. Below is an example:

    int[] numbers = { 5, 4, 1, 3, 9, 8, 6, 7, 2, 0 };
    var lowNums =
        from num in numbers
        where num < 5
        select num;

    Filter based on one property of the class. With the help of LINQ we can also filter data from a list based on the value of some property.

    var soldOutProducts =
        from prod in products
        where prod.UnitsInStock == 0
        select prod;

    Filter based on multiple properties of the class.

    var expensiveInStockProducts =
        from prod in products
        where prod.UnitsInStock > 0 && prod.UnitPrice > 3.00M
        select prod;

    Filter based on the index of the item in the list. In the below example we can see that we are able to filter data based on the index of the item in the list.

    string[] digits = { "zero", "one", "two", "three", "four", "five", "six" };
    var shortDigits = digits.Where((digit, index) => digit.Length < index);

    There are many other ways in which we can filter data in LINQ. In the above post I have tried to show a few of them. Vikram
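    As a quick follow-up to that last example, here is a self-contained console sketch of the index-based Where overload, with the expected output noted in the comment:

    using System;
    using System.Linq;

    class WhereExamples
    {
        static void Main()
        {
            string[] digits = { "zero", "one", "two", "three", "four", "five", "six" };

            // Keep a digit name only when it is shorter than its index in the array.
            var shortDigits = digits.Where((digit, index) => digit.Length < index);

            foreach (string digit in shortDigits)
                Console.WriteLine(digit);   // prints: five, six
        }
    }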

    Read the article

  • Why your Netapp is so slow...

    - by Darius Zanganeh
    Have you ever wondered why your Netapp FAS box is slow and doesn't perform well at large block workloads?  In this blog entry I will give you a little bit of information that will probably help you understand why it’s so slow, why you shouldn't use it for applications that read and write in large blocks like 64k, 128k, 256k ++ etc..  Of course since I work for Oracle at this time, I will show you why the ZS3 storage boxes are excellent choices for these types of workloads. Netapp’s Fundamental Problem The fundamental problem you have running these workloads on Netapp is the backend block size of their WAFL file system.  Every application block on a Netapp FAS ends up in a 4k chunk on a disk. Reference:  Netapp TR-3001 Whitepaper Netapp has proven this lacking large block performance fact in at least two different ways. They have NEVER posted an SPC-2 Benchmark yet they have posted SPC-1 and SPECSFS, both recently. In 2011 they purchased Engenio to try and fill this GAP in their portfolio. Block Size Matters So why does block size matter anyways?  Many applications use large block chunks of data especially in the Big Data movement.  Some examples are SAS Business Analytics, Microsoft SQL, Hadoop HDFS is even 64MB! Now let me boil this down for you.  If an application such MS SQL is writing data in a 64k chunk then before Netapp actually writes it on disk it will have to split it into 16 different 4k writes and 16 different disk IOPS.  When the application later goes to read that 64k chunk the Netapp will have to again do 16 different disk IOPS.  In comparison the ZS3 Storage Appliance can write in variable block sizes ranging from 512b to 1MB.  So if you put the same MSSQL database on a ZS3 you can set the specific LUNs for this database to 64k and then when you do an application read/write it requires only a single disk IO.  That is 16x faster!  But, back to the problem with your Netapp, you will VERY quickly run out of disk IO and hit a wall.  Now all arrays will have some fancy pre fetch algorithm and some nice cache and maybe even flash based cache such as a PAM card in your Netapp but with large block workloads you will usually blow through the cache and still need significant disk IO.  Also because these datasets are usually very large and usually not dedupable they are usually not good candidates for an all flash system.  You can do some simple math in excel and very quickly you will see why it matters.  Here are a couple of READ examples using SAS and MSSQL.  Assume these are the READ IOPS the application needs even after all the fancy cache and algorithms.   Here is an example with 128k blocks.  Notice the numbers of drives on the Netapp! Here is an example with 64k blocks You can easily see that the Oracle ZS3 can do dramatically more work with dramatically less drives.  This doesn't even take into account that the ONTAP system will likely run out of CPU way before you get to these drive numbers so you be buying many more controllers.  So with all that said, lets look at the ZS3 and why you should consider it for any workload your running on Netapp today.  ZS3 World Record Price/Performance in the SPC-2 benchmark ZS3-2 is #1 in Price Performance $12.08ZS3-2 is #3 in Overall Performance 16,212 MBPS Note: The number one overall spot in the world is held by an AFA 33,477 MBPS but at a Price Performance of $29.79.  A customer could purchase 2 x ZS3-2 systems in the benchmark with relatively the same performance and walk away with $600,000 in their pocket.
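    To make the spreadsheet math above concrete, here is a small sketch of the same calculation. The per-drive IOPS figure is an assumed round number for illustration, not a value taken from the worked examples.

    using System;

    class BlockSizeMath
    {
        // Illustrative assumptions only: per-drive IOPS is a rough figure,
        // not a number from the article's spreadsheet.
        const int BackendBlockBytes = 4 * 1024;     // 4k backend block
        const int PerDriveIops = 175;

        static void Main()
        {
            int applicationBlockBytes = 64 * 1024;  // e.g. a 64k SQL Server read
            int applicationReadsPerSec = 10000;     // reads the app still needs after cache

            // Each 64k application read becomes 16 backend 4k disk IOs on a 4k
            // filesystem, versus a single IO when 64k blocks are stored natively.
            int splitFactor = applicationBlockBytes / BackendBlockBytes;
            long smallBlockDiskIops = (long)applicationReadsPerSec * splitFactor;
            long largeBlockDiskIops = applicationReadsPerSec;

            Console.WriteLine("4k backend:  {0} disk IOPS, ~{1} drives",
                smallBlockDiskIops, Math.Ceiling(smallBlockDiskIops / (double)PerDriveIops));
            Console.WriteLine("64k backend: {0} disk IOPS, ~{1} drives",
                largeBlockDiskIops, Math.Ceiling(largeBlockDiskIops / (double)PerDriveIops));
        }
    }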

    Read the article

  • Is your Credit Card Number valid?

    - by Rekha
    A credit card number may look like a random, unique 16-digit number, but those digits carry more information than you might think.

    The first digit of the card is the Major Industry Identifier:
    1 and 2 – Airlines
    3 – Travel and Entertainment
    4 and 5 – Banking and Financial
    6 – Merchandizing and Banking
    7 – Petroleum
    8 – Telecommunications
    9 – National assignment

    The first 6 digits represent the Issuer Identification Number:
    Visa – 4xxxxx
    Master Card – 51xxxx & 55xxxx

    The 7th and following digits, excluding the last digit, are the person's account number, which allows trillions of possible combinations if the maximum of 12 digits is used. Many cards only use 9 digits. The final digit is the checksum, or check digit. It is used to validate the card number using the Luhn algorithm.

    How To Validate a Credit Card Number
    Take any credit card number, for example 5588 3201 2345 6789.

    Step 1: Double every other digit from the right:
    5*2   8*2   3*2   0*2   2*2   4*2   6*2   8*2
    10    16     6     0     4     8    12    16

    Step 2: Add these new digits to the undoubled digits. All double-digit numbers are added as the sum of their digits, so 16 becomes 1+6 = 7:
    Undoubled digits:  5   8   2   1   3   5   7   9
    Doubled digits:   10  16   6   0   4   8  12  16
    Sum: 5+1+0+8+1+6+2+6+1+0+3+4+5+8+7+1+2+9+1+6 = 76

    If the final sum is divisible by 10, the credit card number is valid; if not, the number is invalid or fake. Since 76 is not divisible by 10, the example is not a valid card number. via mint. This article, "Is your Credit Card Number valid?", was originally published at Tech Dreams.
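    For readers who would rather run the check than work it out by hand, here is a small C# sketch of the Luhn validation described above; it reports the example card number as invalid, matching the sum of 76:

    using System;

    class LuhnCheck
    {
        // Returns true when the card number passes the Luhn checksum described above.
        static bool IsValid(string cardNumber)
        {
            int sum = 0;
            bool doubleIt = false;  // the rightmost digit is not doubled

            for (int i = cardNumber.Length - 1; i >= 0; i--)
            {
                if (!char.IsDigit(cardNumber[i]))
                    continue;       // skip spaces or dashes

                int digit = cardNumber[i] - '0';
                if (doubleIt)
                {
                    digit *= 2;
                    if (digit > 9)
                        digit -= 9; // same as adding the two digits of the doubled value
                }
                sum += digit;
                doubleIt = !doubleIt;
            }

            return sum % 10 == 0;
        }

        static void Main()
        {
            Console.WriteLine(IsValid("5588 3201 2345 6789")); // False - the sum is 76
        }
    }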

    Read the article

  • Solving Euler Project Problem Number 1 with Microsoft Axum

    - by Jeff Ferguson
    Note: The code below applies to version 0.3 of Microsoft Axum. If you are not using this version of Axum, then your code may differ from that shown here. I have just solved Problem 1 of Project Euler using Microsoft Axum. The problem statement is as follows: If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23. Find the sum of all the multiples of 3 or 5 below 1000. My Axum-based solution is as follows: namespace EulerProjectProblem1{ // http://projecteuler.net/index.php?section=problems&id=1 // // If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. // The sum of these multiples is 23. // Find the sum of all the multiples of 3 or 5 below 1000. channel SumOfMultiples { input int Multiple1; input int Multiple2; input int UpperBound; output int Sum; } agent SumOfMultiplesAgent : channel SumOfMultiples { public SumOfMultiplesAgent() { int Multiple1 = receive(PrimaryChannel::Multiple1); int Multiple2 = receive(PrimaryChannel::Multiple2); int UpperBound = receive(PrimaryChannel::UpperBound); int Sum = 0; for(int Index = 1; Index < UpperBound; Index++) { if((Index % Multiple1 == 0) || (Index % Multiple2 == 0)) Sum += Index; } PrimaryChannel::Sum <-- Sum; } } agent MainAgent : channel Microsoft.Axum.Application { public MainAgent() { var SumOfMultiples = SumOfMultiplesAgent.CreateInNewDomain(); SumOfMultiples::Multiple1 <-- 3; SumOfMultiples::Multiple2 <-- 5; SumOfMultiples::UpperBound <-- 1000; var Sum = receive(SumOfMultiples::Sum); System.Console.WriteLine(Sum); System.Console.ReadLine(); PrimaryChannel::ExitCode <-- 0; } }} Let’s take a look at the various parts of the code. I begin by setting up a channel called SumOfMultiples that accepts three inputs and one output. The first two of the three inputs will represent the two possible multiples, which are three and five in this case. The third input will represent the upper bound of the problem scope, which is 1000 in this case. The lone output of the channel represents the sum of all of the matching multiples: channel SumOfMultiples{ input int Multiple1; input int Multiple2; input int UpperBound; output int Sum;} I then set up an agent that uses the channel. The agent, called SumOfMultiplesAgent, received the three inputs from the channel sent to the agent, stores the results in local variables, and performs the for loop that iterates from 1 to the received upper bound. 
    The agent keeps track of the sum in a local variable and stores the sum in the output portion of the channel:

    agent SumOfMultiplesAgent : channel SumOfMultiples
    {
        public SumOfMultiplesAgent()
        {
            int Multiple1 = receive(PrimaryChannel::Multiple1);
            int Multiple2 = receive(PrimaryChannel::Multiple2);
            int UpperBound = receive(PrimaryChannel::UpperBound);
            int Sum = 0;
            for(int Index = 1; Index < UpperBound; Index++)
            {
                if((Index % Multiple1 == 0) || (Index % Multiple2 == 0))
                    Sum += Index;
            }
            PrimaryChannel::Sum <-- Sum;
        }
    }

    The application's main agent, therefore, simply creates a new SumOfMultiplesAgent in a new domain, prepares the channel with the inputs that we need, and then receives the Sum from the output portion of the channel:

    agent MainAgent : channel Microsoft.Axum.Application
    {
        public MainAgent()
        {
            var SumOfMultiples = SumOfMultiplesAgent.CreateInNewDomain();
            SumOfMultiples::Multiple1 <-- 3;
            SumOfMultiples::Multiple2 <-- 5;
            SumOfMultiples::UpperBound <-- 1000;
            var Sum = receive(SumOfMultiples::Sum);
            System.Console.WriteLine(Sum);
            System.Console.ReadLine();
            PrimaryChannel::ExitCode <-- 0;
        }
    }

    The result of the calculation (which, by the way, is 233,168) is sent to the console using good ol' Console.WriteLine().
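    For a quick sanity check of that result outside Axum, a plain C# LINQ version of the same calculation produces the same 233,168. This is just an illustrative comparison, not part of the original Axum sample:

    using System;
    using System.Linq;

    class EulerProblem1
    {
        static void Main()
        {
            // Sum of all natural numbers below 1000 that are multiples of 3 or 5.
            int sum = Enumerable.Range(1, 999)
                                .Where(n => n % 3 == 0 || n % 5 == 0)
                                .Sum();

            Console.WriteLine(sum); // 233168
        }
    }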

    Read the article

  • Technology vs. Antiquated Methods

    - by AreYouSerious
    So here I am talking with my program lead about technology, and about how, while my father is the VP of a major company, he still doesn't have a BlackBerry or a smartphone, and I think it's funny. Most people would say it's a generational thing: because he's older, he doesn't accept technology. I have trouble swallowing that, because this is the same man who bought a satellite radio for his car and made sure the printer for the house was networked so that his and my mom's laptops could print wirelessly from the living room. I think it has more to do with necessity, and partially with financial responsibility. My father is very financially conscientious. Think about it yourself: you pay for internet at your home, you have internet access at your office, but if you get a smartphone you're going to pay almost the same amount again just for that access. A lot of people take it as just another fixed cost... I'm one of those. I don't even think about it as I check my Facebook from the bus, the train, or even while sitting in traffic... The convenience of having a connection everywhere outweighs the financially responsible person screaming in the back of my mind. However, this conversation led us to another venue of discussion... what happens when the power dies? If you left your charger at home, or your phone or navi just stops working, are you going to be able to continue on as you did when it was working? Let's take the navi as an example: if your navi stops working, how many of you know how to use a map and navigate? Can you even find where you are on a map using the cross streets you're stopped at? This is a skill that unfortunately is overlooked these days in the child-rearing process. Most people don't see the value, while some others can't do it themselves, so how can they teach their offspring? Take another example... what if your phone gets lost, or stolen, or you drive over it? Do you have the numbers in there memorized? Are they recorded somewhere? I know that if it weren't for Google Sync I wouldn't have them backed up... not sufficiently. And what good does that do if you're in Timbuktu and your phone dies? Think you can get on the internet to look up those numbers? Don't get me wrong: I'm the first to see the value in technology, and I am willing to pay the price to not have to wait for prices to come down. I will pay extra to have the newest thing right now. But let me tell you what... I know that should I ever procreate, it will be a requirement for my offspring (children) to learn how to do something manually before I'll let them use the technology. Food for thought? Let everyone else know what you think... just sayin'

    Read the article

  • Are there too many qualified software development engineers chasing too few jobs?

    - by T Gregory
    I am trying to write this question in a non-argumentative way, but it is quite emotionally charged for some, so please bear with me. In the U.S., we hear constantly from CEOs that they cannot find enough qualified software engineers. In fact, it is the position of the U.S. government that demand for software engineering talent outpaces supply. This position can be clearly seen in the granting of tens of thousands of H1B visas, but also in the following excerpt from the official 2010-11 Bureau of Labor Statistics Occupational Outlook Handbook: Employment of computer software engineers is expected to increase by 32 percent from 2008-2018, which is much faster than the average for all occupations. In addition, this occupation will see a large number of new jobs, with more than 295,000 created between 2008 and 2018. Demand for computer software engineers will increase as computer networking continues to grow. For example, expanding Internet technologies have spurred demand for computer software engineers who can develop Internet, intranet, and World Wide Web applications. Likewise, electronic data-processing systems in business, telecommunications, healthcare, government, and other settings continue to become more sophisticated and complex. Implementing, safeguarding, and updating computer systems and resolving problems will fuel the demand for growing numbers of systems software engineers. New growth areas will also continue to arise from rapidly evolving technologies. The increasing uses of the Internet, the proliferation of Web sites, and mobile technology such as the wireless Internet have created a demand for a wide variety of new products. As more software is offered over the Internet, and as businesses demand customized software to meet their specific needs, applications and systems software engineers will be needed in greater numbers. In addition, the growing use of handheld computers will create demand for new mobile applications and software systems. As these devices become a larger part of the business environment, it will be necessary to integrate current computer systems with this new, more mobile technology. However, from the the employee side of the equation, we often hear the opposite. Many of the stories of SDEs with graduate degrees and decades of experience on the unemployment line, or the big tech interview war stories, are anecdotal, for sure. But, there is one piece of data that is neither anecdotal nor transitory, and that is the aggregate decisions of millions of undergraduates of what degree to pursue. Here, a different picture emerges from the data, and that picture is not good for the software profession. According the most recent Taulbee Survey from Computer Research Association, undergrad degree production in CS and CE has fallen nearly 60% since 2004. (Undergrad enrollments have ticked up in the past two years, but only modestly). Here we see that a basic disconnect between what corporate CEOs and the US government are saying and what potential employees really think about job prospects in software engineering. So my questions are these. Who are we to believe? Is there an acute talent shortage, or is there a long-term structural oversupply in the SDE labor market? Can anyone provide reliable data on long-term unemployment among SDEs? How many are leaving the profession due to lack of work? Real data is most helpful. Thanks.

    Read the article

  • Measuring Code Quality

    - by DotNetBlues
    Several months back, I was tasked with measuring the quality of code in my organization. Foolishly, I said, "No problem." I figured that Visual Studio has a built-in code metrics tool (Analyze -> Calculate Code Metrics) and that would be a fine place to start with. I was right, but also very wrong. The Visual Studio calculates five primary metrics: Maintainability Index, Cyclomatic Complexity, Depth of Inheritance, Class Coupling, and Lines of Code. The first two are figured at the method level, the second at (primarily) the class level, and the last is a simple count. The first question any reasonable person should ask is "Which one do I look at first?" The first question any manager is going to ask is, "What one number tells me about the whole application?" My answer to both, in a way, was "Maintainability Index." Why? Because each of the other numbers represent one element of quality while MI is a composite number that includes Cyclomatic Complexity. I'd be lying if I said no consideration was given to the fact that it was abstract enough that it's harder for some surly developer (I've been known to resemble that remark) to start arguing why a high coupling or inheritance is no big deal or how complex requirements are to blame for complex code. I should also note that I don't think there is one magic bullet metric that will tell you objectively how good a code base is. There are a ton of different metrics out there, and each one was created for a specific purpose in mind and has a pet theory behind it. When you've got a group of developers who aren't accustomed to measuring code quality, picking a 0-100 scale, non-controversial metric that can be easily generated by tools you already own really isn't a bad place to start. That sort of answers the question a developer would ask, but what about the management question; how do you dashboard this stuff when Visual Studio doesn't roll up the numbers to the solution level? Since VS does roll up the MI to the project level, I thought I could just figure out what sort of weighting Microsoft used to roll method scores up to the class level and then to the namespace and project levels. I was a bit surprised by the answer: there is no weighting. That means that a class with one 1300 line method (which will score a 0 MI) and one empty constructor (which will score a 100 MI) will have an overall MI of a respectable 50. Throw in a couple of DTOs that are nothing more than getters and setters (which tend to score 95 or better) and the project ends up looking really, really healthy. The next poor bastard who has to work on the application is probably not going to be singing the praises of its maintainability, though. For the record, that 1300 line method isn't a hypothetical, either. So, what does one do with that? Well, I decided to weight the average by the Lines of Code per method. For our above example, the formula for the class's MI becomes ((1300 * 0) + (1 * 100))/1301 = .077, rounded to 0. Sounds about right. Continue the pattern for namespace, project, solution, and even multi-solution application MI scores. This can be done relatively easily by using the "export to Excel" button and running a quick formula against the data. On the short list of follow-up questions would be, "How do I improve my application's score?" That's an answer for another time, though.
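    As a rough sketch of that LOC-weighted roll-up (the method names and numbers below are just the 1300-line example from above, not real project data):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class MethodMetrics
    {
        public string Name;
        public int MaintainabilityIndex;   // 0-100, as reported by Visual Studio
        public int LinesOfCode;
    }

    class WeightedMi
    {
        // Weight each method's MI by its lines of code, so one huge unmaintainable
        // method can't hide behind a pile of trivial getters and setters.
        static double Calculate(IEnumerable<MethodMetrics> methods)
        {
            long totalLines = methods.Sum(m => (long)m.LinesOfCode);
            if (totalLines == 0) return 100;

            double weightedSum = methods.Sum(m => (double)m.MaintainabilityIndex * m.LinesOfCode);
            return weightedSum / totalLines;
        }

        static void Main()
        {
            var example = new List<MethodMetrics>
            {
                new MethodMetrics { Name = "MonsterMethod", MaintainabilityIndex = 0,   LinesOfCode = 1300 },
                new MethodMetrics { Name = "EmptyCtor",     MaintainabilityIndex = 100, LinesOfCode = 1 }
            };

            Console.WriteLine(Calculate(example));  // ~0.077, instead of the unweighted 50
        }
    }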

    Read the article
