Search Results

Search found 1193 results on 48 pages for 'decimal'.


  • Passing multiple simple POST Values to ASP.NET Web API

    - by Rick Strahl
    A few weeks back I posted a blog post about what does and doesn't work with ASP.NET Web API when it comes to POSTing data to a Web API controller. One of the features that doesn't work out of the box - somewhat unexpectedly - is the ability to map POST form variables to simple parameters of a Web API method. For example, imagine you have this form and you want to post this data to a Web API endpoint like this via AJAX: <form> Name: <input type="text" name="name" value="Rick" /> Value: <input type="text" name="value" value="12" /> Entered: <input type="text" name="entered" value="12/01/2011" /> <input type="button" id="btnSend" value="Send" /> </form> <script type="text/javascript"> $("#btnSend").click( function() { $.post("samples/PostMultipleSimpleValues?action=kazam", $("form").serialize(), function (result) { alert(result); }); }); </script> or you might do this more explicitly by creating a simple client map and specifying the POST values directly by hand:$.post("samples/PostMultipleSimpleValues?action=kazam", { name: "Rick", value: 1, entered: "12/01/2012" }, function (result) { alert(result); }); On the wire this generates a simple POST request with Url Encoded values in the content:POST /AspNetWebApi/samples/PostMultipleSimpleValues?action=kazam HTTP/1.1 Host: localhost User-Agent: Mozilla/5.0 (Windows NT 6.2; WOW64; rv:15.0) Gecko/20100101 Firefox/15.0.1 Accept: application/json Connection: keep-alive Content-Type: application/x-www-form-urlencoded; charset=UTF-8 X-Requested-With: XMLHttpRequest Referer: http://localhost/AspNetWebApi/FormPostTest.html Content-Length: 41 Pragma: no-cache Cache-Control: no-cache name=Rick&value=12&entered=12%2F10%2F2011 Seems simple enough, right? We are basically posting 3 form variables and 1 query string value to the server. Unfortunately Web API can't handle this request out of the box. If I create a method like this:[HttpPost] public string PostMultipleSimpleValues(string name, int value, DateTime entered, string action = null) { return string.Format("Name: {0}, Value: {1}, Date: {2}, Action: {3}", name, value, entered, action); }You'll find that you get an HTTP 404 error and { "Message": "No HTTP resource was found that matches the request URI…"} Yes, it's possible to pass multiple POST parameters of course, but Web API expects you to use Model Binding for this - mapping the post parameters to a strongly typed .NET object, not to single parameters. Alternatively you can also accept a FormDataCollection parameter on your API method to get a name/value collection of all POSTed values. If you're using JSON only, using the dynamic JObject/JValue objects might also work. ModelBinding is fine in many use cases, but can quickly become overkill if you only need to pass a couple of simple parameters to many methods. Especially in applications with many, many AJAX callbacks the 'parameter mapping type' per method signature can lead to serious class pollution in a project very quickly. Simple POST variables are also commonly used in AJAX applications to pass data to the server, even in many complex public APIs. So this is not an uncommon use case, and - maybe more so - a behavior that I would have expected Web API to support natively. The question "Why aren't my POST parameters mapping to Web API method parameters" is already a frequent one… So this is something that I think is fairly important, but unfortunately missing in the base Web API installation. 
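    For reference, here is a minimal sketch of the two out-of-the-box alternatives mentioned above - binding the POST data to a strongly typed object, or accepting a FormDataCollection. The class and method names are illustrative, not from the original post:

```csharp
// Model binding: Web API maps the url-encoded form fields onto a POCO's properties.
public class SimpleValues
{
    public string Name { get; set; }
    public int Value { get; set; }
    public DateTime Entered { get; set; }
}

[HttpPost]
public string PostWithModelBinding(SimpleValues model, string action = null)
{
    return string.Format("Name: {0}, Value: {1}, Date: {2}, Action: {3}",
                         model.Name, model.Value, model.Entered, action);
}

// FormDataCollection: read the raw form values without declaring a dedicated type.
[HttpPost]
public string PostWithFormData(FormDataCollection form, string action = null)
{
    var values = form.ReadAsNameValueCollection();
    string name = values["name"];
    int value = int.Parse(values["value"]);
    DateTime entered = DateTime.Parse(values["entered"]);
    return string.Format("Name: {0}, Value: {1}, Date: {2}, Action: {3}",
                         name, value, entered, action);
}
```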
Creating a Custom Parameter Binder Luckily Web API is greatly extensible and there's a way to create a custom Parameter Binding to provide this functionality! Although this solution took me a long while to find and then only with the help of some folks Microsoft (thanks Hong Mei!!!), it's not difficult to hook up in your own projects. It requires one small class and a GlobalConfiguration hookup. Web API parameter bindings allow you to intercept processing of individual parameters - they deal with mapping parameters to the signature as well as converting the parameters to the actual values that are returned. Here's the implementation of the SimplePostVariableParameterBinding class:public class SimplePostVariableParameterBinding : HttpParameterBinding { private const string MultipleBodyParameters = "MultipleBodyParameters"; public SimplePostVariableParameterBinding(HttpParameterDescriptor descriptor) : base(descriptor) { } /// <summary> /// Check for simple binding parameters in POST data. Bind POST /// data as well as query string data /// </summary> public override Task ExecuteBindingAsync(ModelMetadataProvider metadataProvider, HttpActionContext actionContext, CancellationToken cancellationToken) { // Body can only be read once, so read and cache it NameValueCollection col = TryReadBody(actionContext.Request); string stringValue = null; if (col != null) stringValue = col[Descriptor.ParameterName]; // try reading query string if we have no POST/PUT match if (stringValue == null) { var query = actionContext.Request.GetQueryNameValuePairs(); if (query != null) { var matches = query.Where(kv => kv.Key.ToLower() == Descriptor.ParameterName.ToLower()); if (matches.Count() > 0) stringValue = matches.First().Value; } } object value = StringToType(stringValue); // Set the binding result here SetValue(actionContext, value); // now, we can return a completed task with no result TaskCompletionSource<AsyncVoid> tcs = new TaskCompletionSource<AsyncVoid>(); tcs.SetResult(default(AsyncVoid)); return tcs.Task; } private object StringToType(string stringValue) { object value = null; if (stringValue == null) value = null; else if (Descriptor.ParameterType == typeof(string)) value = stringValue; else if (Descriptor.ParameterType == typeof(int)) value = int.Parse(stringValue, CultureInfo.CurrentCulture); else if (Descriptor.ParameterType == typeof(Int32)) value = Int32.Parse(stringValue, CultureInfo.CurrentCulture); else if (Descriptor.ParameterType == typeof(Int64)) value = Int64.Parse(stringValue, CultureInfo.CurrentCulture); else if (Descriptor.ParameterType == typeof(decimal)) value = decimal.Parse(stringValue, CultureInfo.CurrentCulture); else if (Descriptor.ParameterType == typeof(double)) value = double.Parse(stringValue, CultureInfo.CurrentCulture); else if (Descriptor.ParameterType == typeof(DateTime)) value = DateTime.Parse(stringValue, CultureInfo.CurrentCulture); else if (Descriptor.ParameterType == typeof(bool)) { value = false; if (stringValue == "true" || stringValue == "on" || stringValue == "1") value = true; } else value = stringValue; return value; } /// <summary> /// Read and cache the request body /// </summary> /// <param name="request"></param> /// <returns></returns> private NameValueCollection TryReadBody(HttpRequestMessage request) { object result = null; // try to read out of cache first if (!request.Properties.TryGetValue(MultipleBodyParameters, out result)) { // parsing the string like firstname=Hongmei&lastname=Ge result = request.Content.ReadAsFormDataAsync().Result; 
request.Properties.Add(MultipleBodyParameters, result); } return result as NameValueCollection; } private struct AsyncVoid { } }   The ExecuteBindingAsync method is fired for each parameter that is mapped and sent for conversion. This custom binding is fired only if the incoming parameter is a simple type (that gets defined later when I hook up the binding), so this binding never fires on complex types or if the first type is not a simple type. For the first parameter of a request the Binding first reads the request body into a NameValueCollection and caches that in the request.Properties collection. The request body can only be read once, so the first parameter request reads it and then caches it. Subsequent parameters then use the cached POST value collection. Once the form collection is available the value of the parameter is read, and the value is translated into the target type requested by the Descriptor. SetValue writes out the value to be mapped. Once you have the ParameterBinding in place, the binding has to be assigned. This is done along with all other Web API configuration tasks at application startup in global.asax's Application_Start:GlobalConfiguration.Configuration.ParameterBindingRules .Insert(0, (HttpParameterDescriptor descriptor) => { var supportedMethods = descriptor.ActionDescriptor.SupportedHttpMethods; // Only apply this binder on POST and PUT operations if (supportedMethods.Contains(HttpMethod.Post) || supportedMethods.Contains(HttpMethod.Put)) { var supportedTypes = new Type[] { typeof(string), typeof(int), typeof(decimal), typeof(double), typeof(bool), typeof(DateTime) }; if (supportedTypes.Where(typ => typ == descriptor.ParameterType).Count() > 0) return new SimplePostVariableParameterBinding(descriptor); } // let the default bindings do their work return null; });   The ParameterBindingRules.Insert method takes a delegate that checks which type of requests it should handle. The logic here checks whether the request is POST or PUT and whether the parameter type is a simple type that is supported. Web API calls this delegate once for each method signature it tries to map and the delegate returns null to indicate it's not handling this parameter, or it returns a new parameter binding instance - in this case the SimplePostVariableParameterBinding. Once the parameter binding and this hook up code is in place, you can now pass simple POST values to methods with simple parameters. The examples I showed above should now work in addition to the standard bindings. Summary Clearly this is not easy to discover. I spent quite a bit of time digging through the Web API source trying to figure this out on my own without much luck. It took Hong Mei at Micrsoft to provide a base example as I asked around so I can't take credit for this solution :-). But once you know where to look, Web API is brilliantly extensible to make it relatively easy to customize the parameter behavior. I'm very stoked that this got resolved  - in the last two months I've had two customers with projects that decided not to use Web API in AJAX heavy SPA applications because this POST variable mapping wasn't available. This might actually change their mind to still switch back and take advantage of the many great features in Web API. I too frequently use plain POST variables for communicating with server AJAX handlers and while I could have worked around this (with untyped JObject or the Form collection mostly), having proper POST to parameter mapping makes things much easier. 
    I said this in my last post on POST data and say it again here: I think POST to method parameter mapping should have been shipped in the box with Web API, because without knowing about this limitation the expectation is that simple POST variables map to parameters just like query string values do. I hope Microsoft considers including this type of functionality natively in the next version of Web API, or at least as a built-in HttpParameterBinding that can simply be added. This is especially true since this binding doesn't affect existing bindings. Resources: SimplePostVariableParameterBinding Source on GitHub | Global.asax hookup source | Mapping URL Encoded Post Values in ASP.NET Web API. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Web Api, AJAX.

    Read the article

  • Solaris X86 64-bit Assembly Programming

    - by danx
    Solaris X86 64-bit Assembly Programming This is a simple example of writing, compiling, and debugging Solaris 64-bit x86 assembly language with a C program. This is also referred to as "AMD64" assembly. The term "AMD64" is used in an inclusive sense to refer to all X86 64-bit processors, whether AMD Opteron family or Intel 64 processor family. Both run Solaris x86. I'm keeping this example simple mainly to illustrate how everything comes together—compiler, assembler, linker, and debugger when using assembly language. The example I'm using here is a C program that calls an assembly language program passing a C string. The assembly language program takes the C string and calls printf() with it to print the string. AMD64 Register Usage But first let's review the use of AMD64 registers. AMD64 has several 64-bit registers, some special purpose (such as the stack pointer) and others general purpose. By convention, Solaris follows the AMD64 ABI in register usage, which is the same as that used by Linux, but different from Microsoft Windows (such as in which registers are used to pass parameters). This blog will only discuss conventions for Linux and Solaris. The following chart shows how AMD64 registers are used. The first six parameters to a function are passed through registers. If there are more than six parameters, parameter 7 and above are pushed on the stack before calling the function. The stack is also used to save temporary "stack" variables for use by a function. 64-bit Register Usage %rip Instruction Pointer points to the current instruction %rsp Stack Pointer %rbp Frame Pointer (saved stack pointer pointing to parameters on stack) %rdi Function Parameter 1 %rsi Function Parameter 2 %rdx Function Parameter 3 %rcx Function Parameter 4 %r8 Function Parameter 5 %r9 Function Parameter 6 %rax Function return value %r10, %r11 Temporary registers (need not be saved before use) %rbx, %r12, %r13, %r14, %r15 Temporary registers, but must be saved before use and restored before returning from the current function (usually with the push and pop instructions). 32-, 16-, and 8-bit registers To access the lower 32-, 16-, or 8-bits of a 64-bit register use the following: 64-bit register Least significant 32-bits Least significant 16-bits Least significant 8-bits %rax%eax%ax%al %rbx%ebx%bx%bl %rcx%ecx%cx%cl %rdx%edx%dx%dl %rsi%esi%si%sil %rdi%edi%di%dil %rbp%ebp%bp%bpl %rsp%esp%sp%spl %r8%r8d%r8w%r8b %r9%r9d%r9w%r9b %r10%r10d%r10w%r10b %r11%r11d%r11w%r11b %r12%r12d%r12w%r12b %r13%r13d%r13w%r13b %r14%r14d%r14w%r14b %r15%r15d%r15w%r15b There are other registers present, such as the 64-bit %mm registers, 128-bit %xmm registers, 256-bit %ymm registers, and 512-bit %zmm registers. Except for %mm registers, these registers may not be present on older AMD64 processors. Assembly Source The following is the source for a C program, helloas1.c, that calls an assembly function, hello_asm(). $ cat helloas1.c extern void hello_asm(char *s); int main(void) { hello_asm("Hello, World!"); } The assembly function called above, hello_asm(), is defined below. 
$ cat helloas2.s /* * helloas2.s * To build: * cc -m64 -o helloas2-cpp.s -D_ASM -E helloas2.s * cc -m64 -c -o helloas2.o helloas2-cpp.s */ #if defined(lint) || defined(__lint) /* ARGSUSED */ void hello_asm(char *s) { } #else /* lint */ #include <sys/asm_linkage.h> .extern printf ENTRY_NP(hello_asm) // Setup printf parameters on stack mov %rdi, %rsi // P2 (%rsi) is string variable lea .printf_string, %rdi // P1 (%rdi) is printf format string call printf ret SET_SIZE(hello_asm) // Read-only data .text .align 16 .type .printf_string, @object .printf_string: .ascii "The string is: %s.\n\0" #endif /* lint || __lint */ In the assembly source above, the C skeleton code under "#if defined(lint)" is optionally used for lint to check the interfaces with your C program--very useful to catch nasty interface bugs. The "asm_linkage.h" file includes some handy macros useful for assembly, such as ENTRY_NP(), used to define a program entry point, and SET_SIZE(), used to set the function size in the symbol table. The function hello_asm calls C function printf() by passing two parameters, Parameter 1 (P1) is a printf format string, and P2 is a string variable. The function begins by moving %rdi, which contains Parameter 1 (P1) passed hello_asm, to printf()'s P2, %rsi. Then it sets printf's P1, the format string, by loading the address the address of the format string in %rdi, P1. Finally it calls printf. After returning from printf, the hello_asm function returns itself. Larger, more complex assembly functions usually do more setup than the example above. If a function is returning a value, it would set %rax to the return value. Also, it's typical for a function to save the %rbp and %rsp registers of the calling function and to restore these registers before returning. %rsp contains the stack pointer and %rbp contains the frame pointer. Here is the typical function setup and return sequence for a function: ENTRY_NP(sample_assembly_function) push %rbp // save frame pointer on stack mov %rsp, %rbp // save stack pointer in frame pointer xor %rax, %r4ax // set function return value to 0. mov %rbp, %rsp // restore stack pointer pop %rbp // restore frame pointer ret // return to calling function SET_SIZE(sample_assembly_function) Compiling and Running Assembly Use the Solaris cc command to compile both C and assembly source, and to pre-process assembly source. You can also use GNU gcc instead of cc to compile, if you prefer. The "-m64" option tells the compiler to compile in 64-bit address mode (instead of 32-bit). $ cc -m64 -o helloas2-cpp.s -D_ASM -E helloas2.s $ cc -m64 -c -o helloas2.o helloas2-cpp.s $ cc -m64 -c helloas1.c $ cc -m64 -o hello-asm helloas1.o helloas2.o $ file hello-asm helloas1.o helloas2.o hello-asm: ELF 64-bit LSB executable AMD64 Version 1 [SSE FXSR FPU], dynamically linked, not stripped helloas1.o: ELF 64-bit LSB relocatable AMD64 Version 1 helloas2.o: ELF 64-bit LSB relocatable AMD64 Version 1 $ hello-asm The string is: Hello, World!. Debugging Assembly with MDB MDB is the Solaris system debugger. It can also be used to debug user programs, including assembly and C. The following example runs the above program, hello-asm, under control of the debugger. In the example below I load the program, set a breakpoint at the assembly function hello_asm, display the registers and the first parameter, step through the assembly function, and continue execution. 
$ mdb hello-asm # Start the debugger > hello_asm:b # Set a breakpoint > ::run # Run the program under the debugger mdb: stop at hello_asm mdb: target stopped at: hello_asm: movq %rdi,%rsi > $C # display function stack ffff80ffbffff6e0 hello_asm() ffff80ffbffff6f0 0x400adc() > $r # display registers %rax = 0x0000000000000000 %r8 = 0x0000000000000000 %rbx = 0xffff80ffbf7f8e70 %r9 = 0x0000000000000000 %rcx = 0x0000000000000000 %r10 = 0x0000000000000000 %rdx = 0xffff80ffbffff718 %r11 = 0xffff80ffbf537db8 %rsi = 0xffff80ffbffff708 %r12 = 0x0000000000000000 %rdi = 0x0000000000400cf8 %r13 = 0x0000000000000000 %r14 = 0x0000000000000000 %r15 = 0x0000000000000000 %cs = 0x0053 %fs = 0x0000 %gs = 0x0000 %ds = 0x0000 %es = 0x0000 %ss = 0x004b %rip = 0x0000000000400c70 hello_asm %rbp = 0xffff80ffbffff6e0 %rsp = 0xffff80ffbffff6c8 %rflags = 0x00000282 id=0 vip=0 vif=0 ac=0 vm=0 rf=0 nt=0 iopl=0x0 status=<of,df,IF,tf,SF,zf,af,pf,cf> %gsbase = 0x0000000000000000 %fsbase = 0xffff80ffbf782a40 %trapno = 0x3 %err = 0x0 > ::dis # disassemble the current instructions hello_asm: movq %rdi,%rsi hello_asm+3: leaq 0x400c90,%rdi hello_asm+0xb: call -0x220 <PLT:printf> hello_asm+0x10: ret 0x400c81: nop 0x400c85: nop 0x400c88: nop 0x400c8c: nop 0x400c90: pushq %rsp 0x400c91: pushq $0x74732065 0x400c96: jb +0x69 <0x400d01> > 0x0000000000400cf8/S # %rdi contains Parameter 1 0x400cf8: Hello, World! > [ # Step and execute 1 instruction mdb: target stopped at: hello_asm+3: leaq 0x400c90,%rdi > [ mdb: target stopped at: hello_asm+0xb: call -0x220 <PLT:printf> > [ The string is: Hello, World!. mdb: target stopped at: hello_asm+0x10: ret > [ mdb: target stopped at: main+0x19: movl $0x0,-0x4(%rbp) > :c # continue program execution mdb: target has terminated > $q # quit the MDB debugger $ In the example above, at the start of function hello_asm(), I display the stack contents with "$C", display the registers contents with "$r", then disassemble the current function with "::dis". The first function parameter, which is a C string, is passed by reference with the string address in %rdi (see the register usage chart above). The address is 0x400cf8, so I print the value of the string with the "/S" MDB command: "0x0000000000400cf8/S". I can also print the contents at an address in several other formats. Here's a few popular formats. For more, see the mdb(1) man page for details. address/S C string address/C ASCII character (1 byte) address/E unsigned decimal (8 bytes) address/U unsigned decimal (4 bytes) address/D signed decimal (4 bytes) address/J hexadecimal (8 bytes) address/X hexadecimal (4 bytes) address/B hexadecimal (1 bytes) address/K pointer in hexadecimal (4 or 8 bytes) address/I disassembled instruction Finally, I step through each machine instruction with the "[" command, which steps over functions. If I wanted to enter a function, I would use the "]" command. Then I continue program execution with ":c", which continues until the program terminates. MDB Basic Cheat Sheet Here's a brief cheat sheet of some of the more common MDB commands useful for assembly debugging. There's an entire set of macros and more powerful commands, especially some for debugging the Solaris kernel, but that's beyond the scope of this example. $C Display function stack with pointers $c Display function stack $e Display external function names $v Display non-zero variables and registers $r Display registers ::fpregs Display floating point (or "media" registers). Includes %st, %xmm, and %ymm registers. 
::status Display program status ::run Run the program (followed by optional command line parameters) $q Quit the debugger address:b Set a breakpoint address:d Delete a breakpoint $b Display breakpoints :c Continue program execution after a breakpoint [ Step 1 instruction, but step over function calls ] Step 1 instruction address::dis Disassemble instructions at an address ::events Display events Further Information "Assembly Language Techniques for Oracle Solaris on x86 Platforms" by Paul Lowik (2004). Good tutorial on Solaris x86 optimization with assembly. The Solaris Operating System on x86 Platforms An excellent, detailed tutorial on X86 architecture, with Solaris specifics. By an ex-Sun employee, Frank Hofmann (2005). "AMD64 ABI Features", Solaris 64-bit Developer's Guide contains rules on data types and register usage for Intel 64/AMD64-class processors. (available at docs.oracle.com) Solaris X86 Assembly Language Reference Manual (available at docs.oracle.com) SPARC Assembly Language Reference Manual (available at docs.oracle.com) System V Application Binary Interface (2003) defines the AMD64 ABI for UNIX-class operating systems, including Solaris, Linux, and BSD. Google for it—the original website is gone. cc(1), gcc(1), and mdb(1) man pages.

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #039

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles I have selected a few of my most favorite articles and have listed them here with additional notes below it. Let me know which one of the following is your favorite article from memory lane. 2007 FQL – Facebook Query Language Facebook list following advantages of FQL: Condensed XML reduces bandwidth and parsing costs. More complex requests can reduce the number of requests necessary. Provides a single consistent, unified interface for all of your data. It’s fun! UDF – Get the Day of the Week Function The day of the week can be retrieved in SQL Server by using the DatePart function. The value returned by the function is between 1 (Sunday) and 7 (Saturday). To convert this to a string representing the day of the week, use a CASE statement. UDF – Function to Get Previous And Next Work Day – Exclude Saturday and Sunday While reading ColdFusion blog of Ben Nadel Getting the Previous Day In ColdFusion, Excluding Saturday And Sunday, I realize that I use similar function on my SQL Server Database. This function excludes the Weekends (Saturday and Sunday), and it gets previous as well as next work day. Complete Series of SQL Server Interview Questions and Answers Data Warehousing Interview Questions and Answers – Introduction Data Warehousing Interview Questions and Answers – Part 1 Data Warehousing Interview Questions and Answers – Part 2 Data Warehousing Interview Questions and Answers – Part 3 Data Warehousing Interview Questions and Answers Complete List Download 2008 Introduction to Log Viewer In SQL Server all the windows event logs can be seen along with SQL Server logs. Interface for all the logs is same and can be launched from the same place. This log can be exported and filtered as well. DBCC SHRINKFILE Takes Long Time to Run If you are DBA who are involved with Database Maintenance and file group maintenance, you must have experience that many times DBCC SHRINKFILE operations takes a long time but any other operations with Database are relatively quicker. mssqlsystemresource – Resource Database The purpose of resource database is to facilitates upgrading to the new version of SQL Server without any hassle. In previous versions whenever version of SQL Server was upgraded all the previous version system objects needs to be dropped and new version system objects to be created. 2009 Puzzle – Write Script to Generate Primary Key and Foreign Key In SQL Server Management Studio (SSMS), there is no option to script all the keys. If one is required to script keys they will have to manually script each key one at a time. If database has many tables, generating one key at a time can be a very intricate task. I want to throw a question to all of you if any of you have scripts for the same purpose. Maximizing View of SQL Server Management Studio – Full Screen – New Screen I had explained the following two different methods: 1) Open Results in Separate Tab - This is a very interesting method as result pan shows up in a different tab instead of the splitting screen horizontally. 2) Open SSMS in Full Screen - This works always and to its best. Not many people are aware of this method; hence, very few people use it to enhance performance. 2010 Find Queries using Parallelism from Cached Plan T-SQL script gets all the queries and their execution plan where parallelism operations are kicked up. 
Pay attention there is TOP 10 is used, if you have lots of transactional operations, I suggest that you change TOP 10 to TOP 50 This is the list of the all the articles in the series of computed columns. SQL SERVER – Computed Column – PERSISTED and Storage This article talks about how computed columns are created and why they take more storage space than before. SQL SERVER – Computed Column – PERSISTED and Performance This article talks about how PERSISTED columns give better performance than non-persisted columns. SQL SERVER – Computed Column – PERSISTED and Performance – Part 2 This article talks about how non-persisted columns give better performance than PERSISTED columns. SQL SERVER – Computed Column and Performance – Part 3 This article talks about how Index improves the performance of Computed Columns. SQL SERVER – Computed Column – PERSISTED and Storage – Part 2 This article talks about how creating index on computed column does not grow the row length of table. SQL SERVER – Computed Columns – Index and Performance This article summarized all the articles related to computed columns. 2011 SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Data Warehousing Concepts – Day 21 of 31 What is Data Warehousing? What is Business Intelligence (BI)? What is a Dimension Table? What is Dimensional Modeling? What is a Fact Table? What are the Fundamental Stages of Data Warehousing? What are the Different Methods of Loading Dimension tables? Describes the Foreign Key Columns in Fact Table and Dimension Table? What is Data Mining? What is the Difference between a View and a Materialized View? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Data Warehousing Concepts – Day 22 of 31 What is OLTP? What is OLAP? What is the Difference between OLTP and OLAP? What is ODS? What is ER Diagram? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Data Warehousing Concepts – Day 23 of 31 What is ETL? What is VLDB? Is OLTP Database is Design Optimal for Data Warehouse? If denormalizing improves Data Warehouse Processes, then why is the Fact Table is in the Normal Form? What are Lookup Tables? What are Aggregate Tables? What is Real-Time Data-Warehousing? What are Conformed Dimensions? What is a Conformed Fact? How do you Load the Time Dimension? What is a Level of Granularity of a Fact Table? What are Non-Additive Facts? What is a Factless Facts Table? What are Slowly Changing Dimensions (SCD)? SQL SERVER – Interview Questions and Answers – Frequently Asked Questions – Data Warehousing Concepts – Day 24 of 31 What is Hybrid Slowly Changing Dimension? What is BUS Schema? What is a Star Schema? What Snow Flake Schema? Differences between the Star and Snowflake Schema? What is Difference between ER Modeling and Dimensional Modeling? What is Degenerate Dimension Table? Why is Data Modeling Important? What is a Surrogate Key? What is Junk Dimension? What is a Data Mart? What is the Difference between OLAP and Data Warehouse? What is a Cube and Linked Cube with Reference to Data Warehouse? What is Snapshot with Reference to Data Warehouse? What is Active Data Warehousing? What is the Difference between Data Warehousing and Business Intelligence? What is MDS? Explain the Paradigm of Bill Inmon and Ralph Kimball. SQL SERVER – Azure Interview Questions and Answers – Guest Post by Paras Doshi – Day 25 of 31 Paras Doshi has submitted 21 interesting question and answers for SQL Azure. 1.What is SQL Azure? 2.What is cloud computing? 
3.How is SQL Azure different than SQL server? 4.How many replicas are maintained for each SQL Azure database? 5.How can we migrate from SQL server to SQL Azure? 6.Which tools are available to manage SQL Azure databases and servers? 7.Tell me something about security and SQL Azure. 8.What is SQL Azure Firewall? 9.What is the difference between web edition and business edition? 10.How do we synchronize On Premise SQL server with SQL Azure? 11.How do we Backup SQL Azure Data? 12.What is the current pricing model of SQL Azure? 13.What is the current limitation of the size of SQL Azure DB? 14.How do you handle datasets larger than 50 GB? 15.What happens when the SQL Azure database reaches Max Size? 16.How many databases can we create in a single server? 17.How many servers can we create in a single subscription? 18.How do you improve the performance of a SQL Azure Database? 19.What is code near application topology? 20.What were the latest updates to SQL Azure service? 21.When does a workload on SQL Azure get throttled? SQL SERVER – Interview Questions and Answers – Guest Post by Malathi Mahadevan – Day 26 of 31 Malachi had asked a simple question which has several answers. Each answer makes you think and ponder about the reality of the IT world. Look at the simple question – ‘What is the toughest challenge you have faced in your present job and how did you handle it’? and its various answers. Each answer has its own story. SQL SERVER – Interview Questions and Answers – Guest Post by Rick Morelan – Day 27 of 31 Rick Morelan of Joes2Pros has written an excellent blog post on the subject how to find top N values. Most people are fully aware of how the TOP keyword works with a SELECT statement. After years preparing so many students to pass the SQL Certification I noticed they were pretty well prepared for job interviews too. Yes, they would do well in the interview but not great. There seemed to be a few questions that would come up repeatedly for almost everyone. Rick addresses similar questions in his lucid writing skills. 2012 Observation of Top with Index and Order of Resultset SQL Server has lots of things to learn and share. It is amazing to see how people evaluate and understand different techniques and styles differently when implementing. The real reason may be absolutely different but we may blame something totally different for the incorrect results. Read the blog post to learn more. How do I Record Video and Webcast How to Convert Hex to Decimal or INT Earlier I asked regarding a question about how to convert Hex to Decimal. I promised that I will post an answer with Due Credit to the author but never got around to post a blog post around it. Read the original post over here SQL SERVER – Question – How to Convert Hex to Decimal. Query to Get Unique Distinct Data Based on Condition – Eliminate Duplicate Data from Resultset The natural reaction will be to suggest DISTINCT or GROUP BY. However, not all the questions can be solved by DISTINCT or GROUP BY. Let us see the following example, where a user wanted only latest records to be displayed. Let us see the example to understand further. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • AWT TextField behaves weirdly with the Microsoft JVM

    - by AKh
    Hi, I am facing a weird problem when using MicroSoft JVM to run my Applet. I have an AWT panel with 4 textfields which is added to a dialog box. Everything goes fine until I enter a decimal value into the textfield and close the dialog box. When i reopen the dialog box the textfield inside the panel with all the decimal digits (entered in the previous step) behaves weird. The decimal values along with the WHITE area inside the textfield moves to the left and hides the digits. When I click inside the textfield it becomes normal. The Panel earlier had gridlayout and I even tried changing it to gridbaylayout and still the problem persist. NOTE: All Development are pertained to JRE1.1 to compatibility with MS JVM If any can help me with this it would be a great help. Thanks in advance. . . . . public MyPanel(Dialog myDialog) { Panel panel = new Panel(); this.dialog = myDialog; //Previous code with grid layout /* panel.setLayout(new GridLayout2(4,2,2,2)); panel.add(new Label("Symbol:")); panel.add(symbolField = new TextField("",20)); panel.add(new Label("Quantity:")); panel.add( qtyField = new TextField()); panel.add(new Label("Price per Share:")); panel.add( costField = new TextField()); panel.add(new Label("Date Acquired:")); panel.add( purchaseDate = new TextField() );*/ GridBagLayout gridbag = new GridBagLayout(); System.out.println("######## Created New GridBagLayout"); GridBagConstraints constraints = new GridBagConstraints(); panel.setLayout( gridbag ); constraints = buildConstraints( constraints, 0, 0, 1, 1, 1.5, 1 ); constraints.anchor = GridBagConstraints.WEST; constraints.fill = GridBagConstraints.HORIZONTAL; panel.add( new Label("Symbol:"), constraints); constraints = buildConstraints( constraints, 1, 0, 1, 1, 1.5, 1 ); constraints.anchor = GridBagConstraints.WEST; constraints.fill = GridBagConstraints.HORIZONTAL; panel.add( symbolField = new TextField("",20), constraints); constraints = buildConstraints( constraints, 0, 1, 1, 1, 1.5, 1 ); constraints.anchor = GridBagConstraints.WEST; constraints.fill = GridBagConstraints.HORIZONTAL; panel.add( new Label("Quantity:"), constraints); constraints = buildConstraints( constraints, 1, 1, 1, 1, 1.5, 1 ); constraints.anchor = GridBagConstraints.WEST; constraints.fill = GridBagConstraints.HORIZONTAL; panel.add( qtyField = new TextField(), constraints); constraints = buildConstraints( constraints, 0, 2, 1, 1, 1.5, 1 ); constraints.anchor = GridBagConstraints.WEST; constraints.fill = GridBagConstraints.HORIZONTAL; panel.add( new Label("Price per Share:"), constraints); constraints = buildConstraints( constraints, 1, 2, 1, 1, 1.5, 1 ); constraints.anchor = GridBagConstraints.WEST; constraints.fill = GridBagConstraints.HORIZONTAL; panel.add( costField = new TextField(), constraints); constraints = buildConstraints( constraints, 0, 3, 1, 1, 1.5, 1 ); constraints.anchor = GridBagConstraints.WEST; constraints.fill = GridBagConstraints.HORIZONTAL; panel.add( new Label("Date Acquired:"), constraints); constraints = buildConstraints( constraints, 1, 3, 1, 1, 1.5, 1 ); constraints.anchor = GridBagConstraints.WEST; constraints.fill = GridBagConstraints.HORIZONTAL; panel.add( purchaseDate = new TextField(), constraints); .............. ......... }

    Read the article

  • iPhone SDK - problem pasting into current text location

    - by norskben
    Hi guys I'm trying to paste text right into where the cursor currently is. I have been trying to do what it says at: - http://dev.ragfield.com/2009/09/insert-text-at-current-cursor-location.html The main deal is that I can't just go textbox1.text (etc) because the textfield is in the middle of a custom cell. I want to just have some text added to where the cursor is (when I press a custom key on a keyboard). -I just want to paste a decimal into the textbox... The error I get is: 2010-05-15 22:37:20.797 PageControl[37962:207] * -[MyDetailController paste:]: unrecognized selector sent to instance 0x1973d10 2010-05-15 22:37:20.797 PageControl[37962:207] Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '** -[MyDetailController paste:]: unrecognized selector sent to instance 0x1973d10' Note: I have access to the textfield tag (if that helps?) I'm a little past the beginner stage in objective-c, but still not great. My code is currently below, and at https://gist.github.com/d634329e5ddf52945989 Thanks all. MyDetailController.h @interface MyDetailController : UITableViewController <UITextFieldDelegate,UINavigationControllerDelegate> { //...(lots in here) } @end @interface UIResponder(UIResponderInsertTextAdditions) - (void) insertText: (NSString*) text; @end MyDetailController.m @implementation MyDetailController //.... (lots in here) - (void)addDecimal:(NSNotification *)notification { // Apend the Decimal to the TextField. //savedAmount.text = [savedAmount.text stringByAppendingString:@"."]; NSLog(@"Decimal Pressed"); NSLog(@"tagClicked: %d",tagClicked); switch (tagClicked) { case 7: //savedAmount.text = [savedAmount.text stringByAppendingString:@"."]; break; case 8: //goalAmount.text = [goalAmount.text stringByAppendingString:@"."]; break; case 9: //incrementAmount.text = [incrementAmount.text stringByAppendingString:@"."]; break; case 10: //incrementAmount.text = [incrementAmount.text stringByAppendingString:@"."]; break; } [self insertText:@"."]; } -(void)textFieldDidBeginEditing:(UITextField *)textfield{ //UITextField *theCell = (UITextField *)sender; tagClicked = textfield.tag; NSLog(@"textfield changed. tagClicked: %d",tagClicked); } @end @implementation UIResponder(UIResponderInsertTextAdditions) - (void) insertText: (NSString*) text { // Get a refererence to the system pasteboard because that's // the only one @selector(paste:) will use. UIPasteboard* generalPasteboard = [UIPasteboard generalPasteboard]; // Save a copy of the system pasteboard's items // so we can restore them later. NSArray* items = [generalPasteboard.items copy]; // Set the contents of the system pasteboard // to the text we wish to insert. generalPasteboard.string = text; // Tell this responder to paste the contents of the // system pasteboard at the current cursor location. [self paste: self]; // Restore the system pasteboard to its original items. generalPasteboard.items = items; // Free the items array we copied earlier. [items release]; } @end

    Read the article

  • Partial view is not rendering within the main view it's contained in (instead it's rendered on its own page)?

    - by JaJ
    I have a partial view that is contained in a simple index view. When I try to add a new object to my model and update my partial view to display that new object along with existing objects the partial view is rendered outside the page that it's contained? I'm using AJAX to update the partial view but what is wrong with the following code? Model: public class Product { public int ID { get; set; } public string Name { get; set; } [DataType(DataType.Currency)] public decimal Price { get; set; } } public class BoringStoreContext { List<Product> results = new List<Product>(); public BoringStoreContext() { Products = new List<Product>(); Products.Add(new Product() { ID = 1, Name = "Sure", Price = (decimal)(1.10) }); Products.Add(new Product() { ID = 2, Name = "Sure2", Price = (decimal)(2.10) }); } public List<Product> Products {get; set;} } public class ProductIndexViewModel { public Product NewProduct { get; set; } public IEnumerable<Product> Products { get; set; } } Index.cshtml View: @model AjaxPartialPageUpdates.Models.ProductIndexViewModel @using (Ajax.BeginForm("Index_AddItem", new AjaxOptions { UpdateTargetId = "productList" })) { <div> @Html.LabelFor(model => model.NewProduct.Name) @Html.EditorFor(model => model.NewProduct.Name) </div> <div> @Html.LabelFor(model => model.NewProduct.Price) @Html.EditorFor(model => model.NewProduct.Price) </div> <div> <input type="submit" value="Add Product" /> </div> } <div id='productList'> @{ Html.RenderPartial("ProductListControl", Model.Products); } </div> ProductListControl.cshtml Partial View @model IEnumerable<AjaxPartialPageUpdates.Models.Product> <table> <!-- Render the table headers. --> <tr> <th>Name</th> <th>Price</th> </tr> <!-- Render the name and price of each product. --> @foreach (var item in Model) { <tr> <td>@Html.DisplayFor(model => item.Name)</td> <td>@Html.DisplayFor(model => item.Price)</td> </tr> } </table> Controller: public class HomeController : Controller { public ActionResult Index() { BoringStoreContext db = new BoringStoreContext(); ProductIndexViewModel viewModel = new ProductIndexViewModel { NewProduct = new Product(), Products = db.Products }; return View(viewModel); } public ActionResult Index_AddItem(ProductIndexViewModel viewModel) { BoringStoreContext db = new BoringStoreContext(); db.Products.Add(viewModel.NewProduct); return PartialView("ProductListControl", db.Products); } }
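    One thing worth checking, offered as an assumption since it is not shown in the code above: Ajax.BeginForm only posts asynchronously and honors UpdateTargetId when jQuery plus the unobtrusive AJAX script are loaded and unobtrusive JavaScript is enabled. Without them the browser performs a normal form post and the action's partial result is rendered as its own page, which matches the symptom described. The script file names below are the MVC 3/4 defaults:

```html
<!-- In _Layout.cshtml (or Index.cshtml), after jQuery itself -->
<script src="@Url.Content("~/Scripts/jquery-1.8.2.min.js")" type="text/javascript"></script>
<script src="@Url.Content("~/Scripts/jquery.unobtrusive-ajax.min.js")" type="text/javascript"></script>

<!-- And in web.config, under <appSettings> -->
<!-- <add key="UnobtrusiveJavaScriptEnabled" value="true" /> -->
```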

    Read the article

  • How to create managed properties at site collection level in SharePoint2013

    - by ybbest
    In SharePoint2013, you can create managed properties at site collection. Today, I’d like to show you how to do so through PowerShell. 1. Define your managed properties and crawled properties and managed property Type in an external csv file. PowerShell script will read this file and create the managed and the mapping. 2. As you can see I also defined variant Type, this is because you need the variant type to create the crawled property. In order to have the crawled properties, you need to do a full crawl and also make sure you have data populated for your custom column. However, if you do not want to a full crawl to create those crawled properties, you can create them yourself by using the PowerShell; however you need to make sure the crawled properties you created have the same name if created by a full crawl. Managed properties type: Text = 1 Integer = 2 Decimal = 3 DateTime = 4 YesNo = 5 Binary = 6 Variant Type: Text = 31 Integer = 20 Decimal = 5 DateTime = 64 YesNo = 11 3. You can use the following script to create your managed properties at site collection level, the differences for creating managed property at site collection level is to pass in the site collection id. param( [string] $siteUrl="http://SP2013/", [string] $searchAppName = "Search Service Application", $ManagedPropertiesList=(IMPORT-CSV ".\ManagedProperties.csv") ) Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue $searchapp = $null function AppendLog { param ([string] $msg, [string] $msgColor) $currentDateTime = Get-Date $msg = $msg + " --- " + $currentDateTime if (!($logOnly -eq $True)) { # write to console Write-Host -f $msgColor $msg } # write to log file Add-Content $logFilePath $msg } $scriptPath = Split-Path $myInvocation.MyCommand.Path $logFilePath = $scriptPath + "\CreateManagedProperties_Log.txt" function CreateRefiner {param ([string] $crawledName, [string] $managedPropertyName, [Int32] $variantType, [Int32] $managedPropertyType,[System.GUID] $siteID) $cat = Get-SPEnterpriseSearchMetadataCategory –Identity SharePoint -SearchApplication $searchapp $crawledproperty = Get-SPEnterpriseSearchMetadataCrawledProperty -Name $crawledName -SearchApplication $searchapp -SiteCollection $siteID if($crawledproperty -eq $null) { Write-Host AppendLog "Creating Crawled Property for $managedPropertyName" Yellow $crawledproperty = New-SPEnterpriseSearchMetadataCrawledProperty -SearchApplication $searchapp -VariantType $variantType -SiteCollection $siteID -Category $cat -PropSet "00130329-0000-0130-c000-000000131346" -Name $crawledName -IsNameEnum $false } $managedproperty = Get-SPEnterpriseSearchMetadataManagedProperty -Identity $managedPropertyName -SearchApplication $searchapp -SiteCollection $siteID -ErrorAction SilentlyContinue if($managedproperty -eq $null) { Write-Host AppendLog "Creating Managed Property for $managedPropertyName" Yellow $managedproperty = New-SPEnterpriseSearchMetadataManagedProperty -Name $managedPropertyName -Type $managedPropertyType -SiteCollection $siteID -SearchApplication $searchapp -Queryable:$true -Retrievable:$true -FullTextQueriable:$true -RemoveDuplicates:$false -RespectPriority:$true -IncludeInMd5:$true } $mappedProperty = $crawledproperty.GetMappedManagedProperties() | ?{$_.Name -eq $managedProperty.Name } if($mappedProperty -eq $null) { Write-Host AppendLog "Creating Crawled -> Managed Property mapping for $managedPropertyName" Yellow New-SPEnterpriseSearchMetadataMapping -CrawledProperty $crawledproperty -ManagedProperty $managedproperty -SearchApplication 
$searchapp -SiteCollection $siteID } $mappedProperty = $crawledproperty.GetMappedManagedProperties() | ?{$_.Name -eq $managedProperty.Name } #Get-FASTSearchMetadataCrawledPropertyMapping -ManagedProperty $managedproperty } $searchapp = Get-SPEnterpriseSearchServiceApplication $searchAppName $site= Get-SPSite $siteUrl $siteId=$site.id Write-Host "Start creating Managed properties" $i = 1 FOREACH ($property in $ManagedPropertiesList) { $propertyName=$property.managedPropertyName $crawledName=$property.crawledName $managedPropertyType=$property.managedPropertyType $variantType=$property.variantType Write-Host $managedPropertyType Write-Host "Processing managed property $propertyName $($i)..." $i++ CreateRefiner $crawledName $propertyName $variantType $managedPropertyType $siteId Write-Host "Managed property created " $propertyName } Key Concepts Crawled Properties: Crawled properties are discovered by the search index service component when crawling content. Managed Properties: Properties that are part of the Search user experience, which means they are available for search results, advanced search, and so on, are managed properties. Mapping Crawled Properties to Managed Properties: To make a crawled property available for the Search experience—to make it available for Search queries and display it in Advanced Search and search results—you must map it to a managed property. References Administer search in SharePoint 2013 Preview Managing Metadata New-SPEnterpriseSearchMetadataCrawledProperty New-SPEnterpriseSearchMetadataManagedProperty Remove-SPEnterpriseSearchMetadataManagedProperty Overview of crawled and managed properties in SharePoint 2013 Preview Remove-SPEnterpriseSearchMetadataManagedProperty SharePoint 2013 – Search Service Application
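    For reference, the ManagedProperties.csv file the script imports would look something like the following. The column headers come from the property names the script reads ($property.managedPropertyName, crawledName, managedPropertyType, variantType); the rows are purely illustrative:

```text
managedPropertyName,crawledName,managedPropertyType,variantType
CustomerName,ows_CustomerName,1,31
OrderTotal,ows_OrderTotal,3,5
OrderDate,ows_OrderDate,4,64
```

Per the type tables above, that declares a Text, a Decimal, and a DateTime managed property with their matching variant types.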

    Read the article

  • Code Contracts: validating arrays and collections

    - by DigiMortal
    Validating collections before using them is a very common task when we use built-in generic types for our collections. In this posting I will show you how to validate collections using code contracts. It is cool how much awful-looking code you can avoid by using code contracts. Failing code Let's suppose we have a method that calculates the sum of all invoices in a collection. We have a class Invoice and one of its properties is Sum. I don't introduce here any complex calculations on invoices because we have another problem to solve in this topic. Here is our code. public static decimal CalculateTotal(IList<Invoice> invoices) {     var sum = invoices.Sum(p => p.Sum);     return sum; } This method is very simple but it fails when the invoices list contains at least one null. Of course, we can test if an invoice is null, but having nulls in lists like this is not a good idea – it opens the door to all kinds of coding bugs in the system. Our goal is to react to bugs ASAP, at the nearest place they occur. There is one more way to make our method fail. It happens when invoices is null. I think this is also a common bug during development and it even happens in production environments under conditions that are rarely met. Now let's protect our little calculation method with code contracts. We need two contracts: invoices cannot be null, and invoices cannot contain any nulls. Our first contract is easy, but how do we write the second one? Solution: Contract.ForAll Preconditions in code are checked using the Contract.Requires method (Contract.Ensures is for postconditions). This method takes a boolean argument that says whether the contract holds. There is also a method Contract.ForAll that takes a collection and a predicate that must hold for every element of that collection. The nice thing is that ForAll returns a boolean. So, we have a very simple solution. public static decimal CalculateTotal(IList<Invoice> invoices) {     Contract.Requires(invoices != null);     Contract.Requires(Contract.ForAll<Invoice>(invoices, p => p != null));       var sum = invoices.Sum(p => p.Sum);     return sum; } And here are some lines of code you can use to test the contracts quickly. var invoices = new List<Invoice>(); invoices.Add(new Invoice()); invoices.Add(null); invoices.Add(new Invoice()); //CalculateTotal(null); CalculateTotal(invoices); If your code is covered with unit tests then I suggest you write tests to check that these contracts hold for every code run. Conclusion Although it seemed at first that checking all elements in a collection might end up in for-loops that don't look so nice, we were able to solve our problem cleanly. The ForAll method of the Contract class offers a simple mechanism to check collections, and it does so smoothly, the code-contracts way. P.S. I suggest you also read the devlicio.us blog posting Validating Collections with Code Contracts by Derik Whittaker.
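    Following the unit test suggestion above, here is a minimal test sketch (NUnit-style; the test framework and the class holding CalculateTotal are assumptions, not from the post) that checks the null-element precondition actually fires when runtime contract checking is enabled:

```csharp
[Test]
public void CalculateTotal_rejects_a_list_containing_null()
{
    var invoices = new List<Invoice> { new Invoice(), null, new Invoice() };

    // A failed Contract.Requires surfaces as a (non-public) ContractException,
    // so catch by the Exception base type rather than a concrete contract type.
    Assert.Catch<Exception>(() => InvoiceCalculator.CalculateTotal(invoices));
}
```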

    Read the article

  • LINQ to Entities exceptions (ElementAtOrDefault and CompareObjectEqual)

    - by OffApps Cory
    I am working on a shipping platform which will eventually automate shipping through several major carriers. I have a ShipmentsView Usercontrol which displayes a list of Shipments (returned by EntityFramework), and when a user clicks on a shipment item, it spawns a ShipmentEditView and passes the ShipmentID (RecordKey) to that view. I initially wrestled with trying to get the context from the parent (ShipmentsView) and finally gave up resolving to get to it later. I wanted to do this to keep a single instance of the context. anyhow, I now create a new instance of the context in my ShipmentEditViewModel, and query against it for the Shipment record. I know I could just pass the record, but I wanted to use the Ocean Framework that Karl Shifflett wrote and don't want to muck about writing new transition methods. So anyhow, I query and when stepping through, I can see that it returns a record, as soon as execution reached the point where it assigned the query result to the e.Result property, it throws up the following exception depending on the query I used. LINQToEntities Dim RecordID As Decimal = CDec(e.Argument) Dim myResult = From ship In _Context.Shipment _ Where ship.ShipID = e.Argument _ Select ship Select Case myResult.Count Case 0 e.Result = New Shipment Case 1 e.Result = myResult(0) Case Else e.Result = Nothing End Select "LINQ to Entities does not recognize the method 'System.Object.CompareObjectEqual(System.Object, System.Object, Boolean)' method, and this method cannot be translated into a store expression. LINQToEntities via Method calls Dim RecordID As Decimal = CDec(e.Argument) Dim myResult = _Context.Shipment.Where(Function(s) s.ShipID = RecordID) Select Case myResult.Count Case 0 e.Result = New Shipment Case 1 e.Result = myResult(0) Case Else e.Result = Nothing End Select LINQ to Entities does not recognize the method 'SnazzyShippingDAL.Shipment ElementAtOrDefault[Shipment] (System.Linq.IQueryable`1[SnazzyShippingDAL.Shipment], Int32)' method, and this method cannot be translated into a store expression. I have been trying to get this thing to display a record for like three days. i am seriously thinking about going back and re=-engineering it without the MVVM pattern (which I realize I am only starting to learn and understand) if only to make the &$^%ed thing work. Any help will be muchly appreciated. Cory
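    One detail that may be relevant here (an observation, not a verified fix): the first query compares ship.ShipID to e.Argument, which is typed as Object, and VB compiles that comparison into CompareObjectEqual - exactly the call LINQ to Entities reports it cannot translate. Comparing against the strongly typed RecordID local that is already declared keeps the expression translatable, and materializing a single row with FirstOrDefault avoids the myResult(0) indexer, which VB turns into the untranslatable ElementAtOrDefault call. A sketch of the idea only (it drops the Count-based Select Case from the original):

```vb
Dim RecordID As Decimal = CDec(e.Argument)
' Compare against the Decimal local, not e.Argument (an Object).
Dim myResult = From ship In _Context.Shipment
               Where ship.ShipID = RecordID
               Select ship
' Use FirstOrDefault instead of myResult(0) to avoid ElementAtOrDefault.
e.Result = If(myResult.FirstOrDefault(), New Shipment())
```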

    Read the article

  • Methods for deep cloning objects in C#

    - by Anry
    I have a class:

        public class Order : BaseEPharmObject
        {
            public Order()
            {
            }

            public virtual Guid Id { get; set; }
            public virtual DateTime Created { get; set; }
            public virtual DateTime? Closed { get; set; }
            public virtual OrderResult OrderResult { get; set; }
            public virtual decimal Balance { get; set; }
            public virtual Customer Customer { get; set; }
            public virtual Shift Shift { get; set; }
            public virtual Order LinkedOrder { get; set; }
            public virtual User CreatedBy { get; set; }
            public virtual decimal TotalPayable { get; set; }
            public virtual IList<Transaction> Transactions { get; set; }
            public virtual IList<Payment> Payments { get; set; }
        }

    I need to clone objects of the Order class. How do I implement a deep copy right in the base class?
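    One commonly used approach is to serialize the object and deserialize it back, which copies the whole graph. The sketch below uses simplified stand-in classes rather than the Order class from the question, and it assumes every type in the graph can be marked [Serializable]; lazy-loaded ORM proxies (the virtual members suggest NHibernate) and delegate or event fields can make this approach unsuitable, so treat it as a starting point rather than a drop-in answer.

        using System;
        using System.IO;
        using System.Runtime.Serialization.Formatters.Binary;

        [Serializable]
        class Address { public string City; }

        [Serializable]
        class Person
        {
            public string Name;
            public Address Home;

            // Serialize the whole object graph into memory, then read it back as a new graph.
            public Person DeepClone()
            {
                using (var stream = new MemoryStream())
                {
                    var formatter = new BinaryFormatter();
                    formatter.Serialize(stream, this);
                    stream.Position = 0;
                    return (Person)formatter.Deserialize(stream);
                }
            }
        }

        class Program
        {
            static void Main()
            {
                var original = new Person { Name = "A", Home = new Address { City = "X" } };
                var copy = original.DeepClone();
                copy.Home.City = "Y";
                Console.WriteLine(original.Home.City);  // still "X": the Address was copied, not shared
            }
        }

    The alternative is a hand-written copy (a Clone method per class that news up child objects and copies value properties), which is more work but gives full control over which references are duplicated and which stay shared.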

    Read the article

  • How do I get SSIS Data Flow to put '0.00' in a flat file?

    - by theog
    I have an SSIS package with a Data Flow that takes an ADO.NET data source (just a small table), executes a select * query, and outputs the query results to a flat file (I've also tried just pulling the whole table and not using a SQL select). The problem is that the data source pulls a column that is a Money datatype, and if the value is not zero, it comes into the text flat file just fine (like '123.45'), but when the value is zero, it shows up in the destination flat file as '.00'. I need to know how to get the leading zero back into the flat file. I've tried various datatypes for the output (in the Flat File Connection Manager), including currency and string, but this seems to have no effect. I've tried a case statement in my select, like this: CASE WHEN columnValue = 0 THEN '0.00' ELSE columnValue END (still results in '.00') I've tried variations on that like this: CASE WHEN columnValue = 0 THEN convert(decimal(12,2), '0.00') ELSE convert(decimal(12,2), columnValue) END (Still results in '.00') and: CASE WHEN columnValue = 0 THEN convert(money, '0.00') ELSE convert(money, columnValue) END (results in '.0000000000000000000') This silly little issue is killin' me. Can anybody tell me how to get a zero Money datatype database value into a flat file as '0.00'?
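    Not an SSIS-specific answer, but it suggests where to attack the problem: the leading zero is most likely being dropped when the Money value is converted to text in the data flow, so formatting the value into a string yourself (for example in a Script Component, or by converting to varchar in the source query so the column already arrives as a string) keeps the '0.00'. A tiny C# sketch of the formatting side only, to illustrate the idea (the Script Component wiring is not shown):

        using System;
        using System.Globalization;

        class LeadingZeroDemo
        {
            static void Main()
            {
                decimal zero = 0m;
                decimal nonZero = 123.45m;

                // The "0.00" pattern forces at least one digit before the decimal point.
                Console.WriteLine(zero.ToString("0.00", CultureInfo.InvariantCulture));     // 0.00
                Console.WriteLine(nonZero.ToString("0.00", CultureInfo.InvariantCulture));  // 123.45
            }
        }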

    Read the article

  • WCF Datacontract, some fields do not deserialize

    - by jhersh
    Problem: I have a WCF service set up to be an endpoint for a call from an external system. The call is sending plain XML. I am testing the system by sending calls into the service from Fiddler using the RequestBuilder. The issue is that all of my fields are being deserialized with the exception of two fields: price_retail and price_wholesale. What am I missing? All of the other fields deserialize without an issue - the service responds. It is just these fields.

    XML Message:

        <widget_conclusion>
          <list_criteria_id>123</list_criteria_id>
          <list_type>consumer</list_type>
          <qty>500</qty>
          <price_retail>50.00</price_retail>
          <price_wholesale>40.00</price_wholesale>
          <session_id>123456789</session_id>
        </widget_conclusion>

    Service Method:

        public string WidgetConclusion(ConclusionMessage message)
        {
            var priceRetail = message.PriceRetail;
        }

    Message class:

        [DataContract(Name = "widget_conclusion", Namespace = "")]
        public class ConclusionMessage
        {
            [DataMember(Name = "list_criteria_id")]
            public int CriteriaId { get; set; }

            [DataMember(Name = "list_type")]
            public string ListType { get; set; }

            [DataMember(Name = "qty")]
            public int ListQuantity { get; set; }

            [DataMember(Name = "price_retail")]
            public decimal PriceRetail { get; set; }

            [DataMember(Name = "price_wholesale")]
            public decimal PriceWholesale { get; set; }

            [DataMember(Name = "session_id")]
            public string SessionId { get; set; }
        }
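    One likely cause (an assumption based on the symptoms, not something stated in the question): the DataContractSerializer is order-sensitive, and with no explicit Order it expects elements alphabetically by member name, which puts price_retail and price_wholesale before qty; since the incoming XML sends qty first, the two price elements arrive too late and are silently skipped. Declaring an Order that matches the XML fixes it. The sketch below reproduces the fix outside WCF by deserializing the sample XML directly:

        using System;
        using System.IO;
        using System.Runtime.Serialization;
        using System.Xml;

        [DataContract(Name = "widget_conclusion", Namespace = "")]
        public class ConclusionMessage
        {
            // Order values follow the element order of the incoming XML.
            [DataMember(Name = "list_criteria_id", Order = 1)] public int CriteriaId { get; set; }
            [DataMember(Name = "list_type", Order = 2)] public string ListType { get; set; }
            [DataMember(Name = "qty", Order = 3)] public int ListQuantity { get; set; }
            [DataMember(Name = "price_retail", Order = 4)] public decimal PriceRetail { get; set; }
            [DataMember(Name = "price_wholesale", Order = 5)] public decimal PriceWholesale { get; set; }
            [DataMember(Name = "session_id", Order = 6)] public string SessionId { get; set; }
        }

        class Program
        {
            static void Main()
            {
                const string xml =
                    "<widget_conclusion><list_criteria_id>123</list_criteria_id>" +
                    "<list_type>consumer</list_type><qty>500</qty>" +
                    "<price_retail>50.00</price_retail><price_wholesale>40.00</price_wholesale>" +
                    "<session_id>123456789</session_id></widget_conclusion>";

                var serializer = new DataContractSerializer(typeof(ConclusionMessage));
                using (var reader = XmlReader.Create(new StringReader(xml)))
                {
                    var message = (ConclusionMessage)serializer.ReadObject(reader);
                    Console.WriteLine(message.PriceRetail);  // 50.00 rather than 0
                }
            }
        }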

    Read the article

  • Specifying formatting for csv.writer in Python

    - by user248237
    I am using csv.DictWriter to output csv files from a set of dictionaries. I use the following function:

        def dictlist2file(dictrows, filename, fieldnames, delimiter='\t', lineterminator='\n'):
            out_f = open(filename, 'w')

            # Write out header
            header = delimiter.join(fieldnames) + lineterminator
            out_f.write(header)

            # Write out dictionary
            data = csv.DictWriter(out_f, fieldnames,
                                  delimiter=delimiter, lineterminator=lineterminator)
            data.writerows(dictrows)
            out_f.close()

    where dictrows is a list of dictionaries, and fieldnames provides the headers that should be serialized to file. Some of the values in my dictionary list (dictrows) are numeric -- e.g. floats, and I'd like to specify the formatting of these. For example, I might want floats to be serialized with "%.2f" rather than full precision. Ideally, I'd like to specify some kind of mapping that says how to format each type, e.g. {float: "%.2f"}, which says that if you see a float, format it with %.2f. Is there an easy way to do this? I don't want to subclass DictWriter or anything complicated like that -- this seems like very generic functionality. How can this be done? The only other solution I can think of is: instead of messing with the formatting of DictWriter, just use the decimal package to specify the decimal precision of floats to be %.2, which will cause them to be serialized as such. I don't know if this is a better solution. Thanks very much for your help.

    Read the article

  • Why are floating point values so prolific?

    - by Kibbee
    So, title says it all. Why are floating point values so prolific in computer programming? Due to problems like rounding errors, and not being able to even accurately represent numbers such as 0.1, I really can't see how they got as far as they did. I understand that computation is faster with floating point numbers; however, I can think of only a few cases where they are actually the right data type to be using. If you sit back and think about every time you used a floating point value, how many times did you say, well, some error would be OK, as long as the result was a few microseconds faster? It really makes me think, because Jeff was talking about NP completeness, and how heuristics give an answer that is kind of right. And well, computers shouldn't do that. They should give you the answer that is correct. Yet we see floating point values used in many applications where they are completely not valid. What really bugs me isn't that floating point exists, but that in many languages there isn't even a viable non-floating-point decimal alternative. A lot of programmers doing financial applications have to fall back to storing the number of cents in an integer field, which brings with it all kinds of other problems. Why do floats continue to be so prolific, even though they can't represent the real answer, and we expect computers to be accurate? [EDIT] Just to clarify, I was talking about base 2 floating point, not base 10 floating point. .NET offers the Decimal data type, which is a base 10 floating point value that offers a much better representation of the numbers we deal with on a daily basis in most computer programs. I find it hard to believe that even modern languages like Java don't support base 10 floating point values, unless you want to move into the realm of things like BigDecimal, which isn't really the right answer either in a lot of situations.
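    The 0.1 example in the question is easy to demonstrate. A minimal C# sketch (C# because the surrounding discussion is about .NET's Decimal type) comparing a base 2 double with a base 10 decimal:

        using System;

        class FloatVsDecimal
        {
            static void Main()
            {
                double doubleSum = 0.0;
                decimal decimalSum = 0.0m;

                for (int i = 0; i < 10; i++)
                {
                    doubleSum += 0.1;    // base 2 double: 0.1 has no exact representation
                    decimalSum += 0.1m;  // base 10 decimal: 0.1 is exact
                }

                Console.WriteLine(doubleSum == 1.0);        // False
                Console.WriteLine(decimalSum == 1.0m);      // True
                Console.WriteLine(doubleSum.ToString("R")); // prints something like 0.99999999999999989
            }
        }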

    Read the article

  • Double type returns -1.#IND/NaN error when calculating pi iteratively

    - by Draak
    I am working through a problem for my MCTS certification. The program has to calculate pi until the user presses a key, at which point the thread is aborted, the result returned to the main thread and printed in the console. Simple enough, right? This exercise is really meant to be about threading, but I'm running into another problem. The procedure that calculates pi returns -1.#IND. I've read some of the material on the web about this error, but I'm still not sure how to fix it. When I change double to Decimal type, I unsurprisingly get Overflow Exception very quickly. So, the question is how do I store the numbers correctly? Do I have to create a class to somehow store parts of the number when it gets too big to be contained in a Decimal?

        Class PiCalculator
            Dim a As Double = 1
            Dim b As Double = 1 / Math.Sqrt(2)
            Dim t As Double = 1 / 4
            Dim p As Double = 1
            Dim pi As Double
            Dim callback As DelegateResult

            Sub New(ByVal _callback As DelegateResult)
                callback = _callback
            End Sub

            Sub Calculate()
                Try
                    Do While True
                        Dim a1 = (a + b) / 2
                        Dim b1 = Math.Sqrt(a * b)
                        Dim t1 = t - p * (a - a1) ^ 2
                        Dim p1 = 2 * p
                        a = a1
                        b = b1
                        t = t1
                        p = p1
                        pi = ((a + b) ^ 2) / (4 * t)
                    Loop
                Catch ex As ThreadAbortException
                Finally
                    callback(pi)
                End Try
            End Sub
        End Class
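    A note on why the -1.#IND appears, plus a sketch: the code is the Gauss-Legendre iteration, which reaches full Double precision after only a handful of passes, but the Do While True loop never stops. My reading of the failure (an educated guess from the code shown): p doubles on every pass, so after roughly a thousand iterations it overflows to infinity, while (a - a1) has long since become zero, and infinity times zero is NaN, which then flows into t and pi. Switching to Decimal does not help, because Decimal throws on overflow instead of silently producing infinity, which is why the OverflowException appears so quickly. The fix is not a bigger number type but a stopping condition. A minimal C# version (C# rather than VB, and without the threading callback) that stops once the value settles:

        using System;

        class GaussLegendrePi
        {
            static double Calculate()
            {
                double a = 1.0;
                double b = 1.0 / Math.Sqrt(2.0);
                double t = 0.25;
                double p = 1.0;
                double pi = 0.0;

                // Each pass roughly doubles the number of correct digits, so a few
                // iterations reach full double precision; stop when the value settles.
                for (int i = 0; i < 10; i++)
                {
                    double previous = pi;
                    double a1 = (a + b) / 2.0;
                    double b1 = Math.Sqrt(a * b);
                    t -= p * (a - a1) * (a - a1);
                    p *= 2.0;
                    a = a1;
                    b = b1;
                    pi = (a + b) * (a + b) / (4.0 * t);
                    if (pi == previous) break;
                }

                return pi;
            }

            static void Main()
            {
                Console.WriteLine(Calculate());  // ~3.14159265358979
            }
        }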

    Read the article

  • How do I work out IEEE 754 64-bit Floating Point Double Precision?

    - by yousef gassar
    Hello, I have done this in 32 bits but I couldn't do it in 64 bits; please, I need help. I am stuck on this question and need help. I don't know how to work it out. This is the question. Below are two numbers represented in IEEE 754 64-bit Floating Point Double Precision, the bias of the signed exponent is -1023. Any particular real number 'N' represented in 64-bit form (i.e. with the following bit fields: 1-bit Sign, 11-bit Exponent, 52-bit Fraction) can be expressed in the form ±1.F2 × 2^X by substituting the bit-field values using formula (IV.I): N = (-1)^S × 1.F2 × 2^(E - 1023) for 0 < E < 2047 ....(IV.I), where N = the number represented, S = Sign bit-value, E = Exponent = X + 1023, and F = Fraction or Mantissa are the values in the 1, 11 and 52-bit fields respectively in the IEEE 754 64-bit FP representation. Using formula (IV.I), express the 64-bit FP representation of each number as: (i) a binary number of the form ±1.F2 × 2^X; (ii) a decimal number of the form ±0.F10 × 10^Y {limit F10 to 10 decimal places}.

    Number 1:
    Sign (1 bit): 0
    Exponent (11 bits): 1000 0001 001
    Fraction (52 bits): 1111 0111 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000

    Number 2:
    Sign (1 bit): 1
    Exponent (11 bits): 1000 0000 000
    Fraction (52 bits): 1001 0010 0001 1111 1011 0101 0100 0100 0100 0010 1101 0001 1000

    I know I have to use the formula for each of these, but how do I work it out? Is it like this? N = (-1)^S × 1.F2 × 2^(E - 1023) = +1.1111 0111 0000 ... 0000 (binary) × 2^(1000 0001 001 (binary) - 1023)?
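    For working it out by hand, the recipe is: convert the 11 exponent bits to decimal to get E, subtract 1023, prepend the implicit leading 1 to the 52 fraction bits, and apply the sign. The same arithmetic can be checked in a few lines of C#; the field values below are my reading of the first (garbled) bit pattern in the question, so treat them as an assumption and substitute your own:

        using System;

        class Ieee754Decode
        {
            static void Main()
            {
                // 1-bit sign, 11-bit exponent, 52-bit fraction.
                ulong sign = 0;
                ulong exponent = 0b10000001001;      // 1000 0001 001 = 1033, biased by 1023
                ulong fraction = 0xF700000000000UL;  // 1111 0111 followed by 44 zero bits

                // Let the hardware interpret the assembled bit pattern as a double...
                ulong bits = (sign << 63) | (exponent << 52) | fraction;
                double viaBits = BitConverter.Int64BitsToDouble((long)bits);

                // ...and apply the formula N = (-1)^S * 1.F * 2^(E - 1023) by hand.
                double mantissa = 1.0 + fraction / Math.Pow(2, 52);
                double viaFormula = Math.Pow(-1, sign) * mantissa * Math.Pow(2, (double)exponent - 1023);

                Console.WriteLine(viaBits);     // both lines print the same value
                Console.WriteLine(viaFormula);
            }
        }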

    Read the article

  • Excel Range Format: Number is automatically formatted when Range::Value2 is set

    - by A9S6
    I have an Excel add-in written in C# that imports a text file into an Excel worksheet. Some of the fields in the file are text and some are numbers. Problem steps:

    1. Change the system's regional settings to Dutch (Belgium).
    2. Open Excel and import the file into Excel.
    3. Records contain values such as 78,1118, which gets converted to 781.118. Note that in Dutch (Belgium), COMMA is the decimal character and DOT is the thousands character.

    I do not require the number to be formatted automatically but just want to display whatever I get from the file (78,1118). If I set the cell's NumberFormat to "@" i.e. Text, then it displays an error (SmartTag) saying "Number stored as Text". I know I can change the settings by going to the "Options" box but I don't want to change any user options in Excel for this. I have tried setting the cell's Value2 with an "'" (apostrophe) but the same error is displayed. If I set the cell's format to something else after the value is set then the actual value changes and I lose the decimal. Is there a way in Excel to just display the value and NOT display the "Number stored as Text" error in the cell?

    Read the article

  • Using datetime float representation as primary key

    - by devanalyst
    From my experience I have learned that using a surrogate INT column as the primary key, especially an IDENTITY key column, offers better performance than using a GUID or char/varchar column as the primary key. I try to use an IDENTITY key as the primary key wherever possible. But recently I came across a schema where the tables were horizontally partitioned and were managed via a partitioned view. So the tables could not have an IDENTITY column, since that would make the partitioned view non-updatable. One workaround for this was to create a dummy 'keygenerator' table with an identity column to generate IDs for the primary key. But this would mean having a 'keygenerator' table for each of the partitioned views. My next thought was to use float as a primary key. The reason is the following key algorithm that I devised:

        DECLARE @KEY FLOAT
        SET @KEY = CONVERT(FLOAT, GETDATE()) / 100000.0
        SET @KEY = @EMP_ID + @KEY

    Here's how it works. CONVERT(FLOAT, GETDATE()) gives the float representation of the current datetime, since internally all datetimes are represented by SQL as float values. CONVERT(FLOAT, GETDATE()) / 100000.0 converts the float representation into a pure decimal value, i.e. all digits are pushed to the right side of the ".". @KEY = @EMP_ID + @KEY adds the Employee ID, which is an integer, to this decimal value. The logic is that the Employee ID is guaranteed to be unique across sessions, since an employee cannot connect to an application more than once at the same time. And for the same employee, each time a key is generated the current datetime will be unique. In all, a unique key across all employee sessions and across time. So for Emp IDs 11 and 12, I have key values like 12.40046693321566357 and 11.40046693542361111. But my concern is whether a float data type as the primary key offers benefits compared to choosing GUID or char/varchar as primary keys. Another important thing: because of the partitioning, the float column is going to be part of a composite key.

    Read the article

  • Encoding / Error Correction Challenge

    - by emi1faber
    Is it mathematically feasible to encode an initial 4 byte message into 8 bytes and, if one of the 8 bytes is completely dropped and another is wrong, to reconstruct the initial 4 byte message? There would be no way to retransmit, nor would the location of the dropped byte be known. If one uses Reed-Solomon error correction with 4 "parity" bytes tacked on to the end of the 4 "data" bytes, such as DDDDPPPP, and you end up with DDDEPPP (where E is an error) and a parity byte has been dropped, I don't believe there's a way to reconstruct the initial message (although correct me if I am wrong)... What about multiplying (or performing another mathematical operation on) the initial 4 byte message by a constant, then utilizing properties of an inverse mathematical operation to determine what byte was dropped? Or impose some constraints on the structure of the message so every other byte needs to be odd and the others need to be even. Alternatively, instead of bytes, it could also be 4 decimal digits encoded in some fashion into 8 decimal digits where errors could be detected and corrected under the same circumstances mentioned above - no retransmission and the location of the dropped byte is not known. I'm looking for any crazy ideas anyone might have... Any ideas out there?

    Read the article

  • Understanding floating point problems

    - by Maxim Gershkovich
    Could someone here please help me understand how to determine when floating point limitations will cause errors in your calculations? For example, the following code:

        CalculateTotalTax = function (TaxRate, TaxFreePrice) {
            return ((parseFloat(TaxFreePrice) / 100) * parseFloat(TaxRate)).toFixed(4);
        };

    I have been unable to input any two values that have caused an incorrect result for this method. If I remove the toFixed(4) I can in fact see where the calculations start to lose accuracy (somewhere around the 6th decimal place). Having said that though, my understanding of floats is that even small numbers can sometimes fail to be represented, or have I misunderstood and can 4 decimal places (for example) always be represented accurately? MSDN explains floats as such: "This means they cannot hold an exact representation of any quantity that is not a binary fraction (of the form k / (2 ^ n) where k and n are integers)". Now I assume this applies to all floats (including those used in JavaScript). Fundamentally my question boils down to this: how can one determine if any specific method will be vulnerable to errors in floating point operations, at what precision will those errors materialize, and what inputs will be required to produce those errors? Hopefully what I am asking makes sense.
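    One way to put a number on "at what precision will those errors materialize": a single arithmetic operation on a double is off by at most half an ulp (the gap between one representable value and the next), and a short chain of operations by a few ulps, so rounding the result to 4 decimal places stops hiding the error roughly when the ulp at the result's magnitude exceeds 0.00005. The sketch below is C# rather than JavaScript, but both use the same IEEE 754 doubles, so the sizes it prints apply to the tax example too:

        using System;

        class UlpDemo
        {
            // Distance from x to the next representable double above it: the smallest
            // error a single rounding step can introduce at that magnitude.
            static double Ulp(double x)
            {
                long bits = BitConverter.DoubleToInt64Bits(x);
                return BitConverter.Int64BitsToDouble(bits + 1) - x;
            }

            static void Main()
            {
                foreach (var magnitude in new[] { 1.0, 1e4, 1e8, 1e12, 1e16 })
                {
                    // Once the ulp exceeds 0.00005, a toFixed(4)-style result can be wrong.
                    Console.WriteLine("{0:E0}: ulp = {1:E2}", magnitude, Ulp(magnitude));
                }
            }
        }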

    Read the article

  • How can I change the precision of printing with the stl?

    - by mmr
    This might be a repeat, but my google-fu failed to find it. I want to print numbers to a file using the STL, specifying the number of decimal places rather than the overall precision. So, if I do this:

        int precision = 16;
        std::vector<double> thePoint(3);
        thePoint[0] = 86.3671436;
        thePoint[1] = -334.8866574;
        thePoint[2] = 24.2814;
        ofstream file1(tempFileName, ios::trunc);
        file1 << std::setprecision(precision) << thePoint[0] << "\\"
              << thePoint[1] << "\\" << thePoint[2] << "\\";

    I'll get numbers like this:

        86.36714359999999\-334.8866574\24.28140258789063

    What I want is this:

        86.37\-334.89\24.28

    In other words, truncating at two decimal places. If I set precision to be 4, then I'll get

        86.37\-334.9\24.28

    i.e., the second number is improperly truncated. I do not want to have to manipulate each number explicitly to get the truncation, especially because I seem to be getting the occasional repeating 9 or 0000000001 or something like that left behind. I'm sure there's something obvious, like using printf("%.2f") or something like that, but I'm unsure how to mix that with the STL << and ofstream.

    Read the article

  • database design to speed up hibernate querying of large dataset

    - by paddydub
    I currently have the below tables, representing a bus network, mapped in Hibernate and accessed from a Spring MVC based bus route planner. I'm trying to make my route planner application perform faster; I load all the tables below into Lists to perform the route planner logic. I would appreciate it if anyone has any ideas of how to speed up my performance, or any suggestions of another method to approach this problem of handling a large set of data.

    Coordinate Connections Table (INT, INT, INT) (containing 50,000 coordinate connections):

        ID  FROMCOORDID  TOCOORDID
        1   1            2
        2   1            17
        3   1            63
        4   1            64
        5   1            65
        6   1            95

    Coordinate Table (INT, DECIMAL, DECIMAL) (containing 4,700 coordinates):

        ID  LAT        LNG
        0   59.352669  -7.264341
        1   59.352669  -7.264341
        2   59.350012  -7.260653
        3   59.337585  -7.189798
        4   59.339221  -7.193582
        5   59.341408  -7.205888

    Bus Stop Table (INT, INT, INT) (containing 15,000 stops):

        StopID      RouteID  COORDINATEID
        1000100001  100      17
        1000100002  100      18
        1000100003  100      19
        1000100004  100      20
        1000100005  100      21
        1000100006  100      22
        1000100007  100      23

    This is how long it takes to load all the data from each table:

        stop.findAll = 148ms, stops.size: 15670
        Hibernate: select coordinate0_.COORDINATEID as COORDINA1_2_, coordinate0_.LAT as LAT2_, coordinate0_.LNG as LNG2_ from COORDINATES coordinate0_
        coord.findAll = 51ms, coordinates.size: 4704
        Hibernate: select coordconne0_.COORDCONNECTIONID as COORDCON1_3_, coordconne0_.DISTANCE as DISTANCE3_, coordconne0_.FROMCOORDID as FROMCOOR3_3_, coordconne0_.TOCOORDID as TOCOORDID3_ from COORDCONNECTIONS coordconne0_
        coordinateConnectionDao.findAll = 238ms; coordConnectioninates.size: 48132

    Hibernate Annotations:

        @Entity
        @Table(name = "STOPS")
        public class Stop implements Serializable {
            @Id
            @GeneratedValue
            @Column(name = "COORDINATEID")
            private Integer CoordinateID;

            @Column(name = "LAT")
            private double latitude;

            @Column(name = "LNG")
            private double longitude;
        }

        @Table(name = "COORDINATES")
        public class Coordinate {
            @Id
            @GeneratedValue
            @Column(name = "COORDINATEID")
            private Integer CoordinateID;

            @Column(name = "LAT")
            private double latitude;

            @Column(name = "LNG")
            private double longitude;
        }

        @Entity
        @Table(name = "COORDCONNECTIONS")
        public class CoordConnection {
            @Id
            @GeneratedValue
            @Column(name = "COORDCONNECTIONID")
            private Integer CoordinateID;

            /**
             * From Coordinate_id value
             */
            @Column(name = "FROMCOORDID", nullable = false)
            private int fromCoordID;

            /**
             * To Coordinate_id value
             */
            @Column(name = "TOCOORDID", nullable = false)
            private int toCoordID;

            //private Coordinate toCoordID;
        }

    Read the article

  • Nhibernate/Hibernate, lookup tables and object design

    - by Simon G
    Hi, I've got two tables: Invoice, with columns CustomerID, InvoiceDate, Value, InvoiceTypeID (CustomerID and InvoiceDate make up a composite key), and InvoiceType, with InvoiceTypeID and InvoiceTypeName columns. I know I can create my objects like:

        public class Invoice
        {
            public virtual int CustomerID { get; set; }
            public virtual DateTime InvoiceDate { get; set; }
            public virtual decimal Value { get; set; }
            public virtual InvoiceType InvoiceType { get; set; }
        }

        public class InvoiceType
        {
            public virtual int InvoiceTypeID { get; set; }
            public virtual string InvoiceTypeName { get; set; }
        }

    So the generated SQL would look something like:

        SELECT CustomerID, InvoiceDate, Value, InvoiceTypeID
        FROM Invoice
        WHERE CustomerID = x AND InvoiceDate = y

        SELECT InvoiceTypeID, InvoiceTypeName
        FROM InvoiceType
        WHERE InvoiceTypeID = z

    But rather than having two select queries executed to retrieve the data, I would rather have one. I would also like to avoid using child objects for simple lookup lists. So my object would look something like:

        public class Invoice
        {
            public virtual int CustomerID { get; set; }
            public virtual DateTime InvoiceDate { get; set; }
            public virtual decimal Value { get; set; }
            public virtual int InvoiceTypeID { get; set; }
            public virtual string InvoiceTypeName { get; set; }
        }

    And my SQL would look something like:

        SELECT CustomerID, InvoiceDate, Value, InvoiceTypeID
        FROM Invoice
        INNER JOIN InvoiceType ON Invoice.InvoiceTypeID = InvoiceType.InvoiceTypeID
        WHERE CustomerID = x AND InvoiceDate = y

    My question is: how do I create the mapping for this? I've tried using join, but this tried to join using CustomerID and InvoiceDate. Am I missing something obvious? Thanks

    Read the article

  • Re-using aggregate level formulas in SQL - any good tactics?

    - by Cade Roux
    Imagine this case, but with a lot more component buckets and a lot more intermediates and outputs. Many of the intermediates are calculated at the detail level, but a few things are calculated at the aggregate level:

        DECLARE @Profitability AS TABLE
            (
             Cust INT NOT NULL
            ,Category VARCHAR(10) NOT NULL
            ,Income DECIMAL(10, 2) NOT NULL
            ,Expense DECIMAL(10, 2) NOT NULL
            ) ;

        INSERT INTO @Profitability VALUES ( 1, 'Software', 100, 50 ) ;
        INSERT INTO @Profitability VALUES ( 2, 'Software', 100, 20 ) ;
        INSERT INTO @Profitability VALUES ( 3, 'Software', 100, 60 ) ;
        INSERT INTO @Profitability VALUES ( 4, 'Software', 500, 400 ) ;
        INSERT INTO @Profitability VALUES ( 5, 'Hardware', 1000, 550 ) ;
        INSERT INTO @Profitability VALUES ( 6, 'Hardware', 1000, 250 ) ;
        INSERT INTO @Profitability VALUES ( 7, 'Hardware', 1000, 700 ) ;
        INSERT INTO @Profitability VALUES ( 8, 'Hardware', 5000, 4500 ) ;

        SELECT  Cust
               ,Profit = SUM(Income - Expense)
               ,Margin = SUM(Income - Expense) / SUM(Income)
        FROM    @Profitability
        GROUP BY Cust

        SELECT  Category
               ,Profit = SUM(Income - Expense)
               ,Margin = SUM(Income - Expense) / SUM(Income)
        FROM    @Profitability
        GROUP BY Category

        SELECT  Profit = SUM(Income - Expense)
               ,Margin = SUM(Income - Expense) / SUM(Income)
        FROM    @Profitability

    Notice how the same formulae have to be used at the different aggregation levels. This results in code duplication. I have thought of using UDFs (either scalar, or table-valued with an OUTER APPLY, since many of the final results may share intermediates which have to be calculated at the aggregate level), but in my experience scalar and multi-statement table-valued UDFs perform very poorly. I have also thought about using more dynamic SQL and applying the formulas by name, basically. Any other tricks, techniques or tactics for keeping these kinds of formulae, which need to be applied at different levels, in sync and/or organized?

    Read the article
