Search Results

Search found 1523 results on 61 pages for 'assignment'.

Page 18 of 61

  • Where to find my website's source code?

    - by Aamir Berni
    My company ordered a website and we were given all the usernames and passwords, but I can't find the PHP source files, and this is my first website assignment. I have no prior exposure to web technologies, although I've been programming for a decade and know computer usage inside out. I tried to use cPanel to find the .php files, but there aren't any. There are no MySQL databases either. I'm lost. I'd appreciate any help in this regard.

    Read the article

  • What is Google Page Rank and Why is it So Important?

    What exactly is PageRank? It is basically a link analysis algorithm influenced by citation analysis, which dates back to the fifties, when it was conceived by Eugene Garfield; related ideas were later developed by Massimo Marchiori. The algorithm takes a set of hyperlinked documents and assigns each one a numerical weight, which is presented on a scale from zero to ten.
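
    For readers who want the mechanics rather than the marketing, below is a minimal, hedged sketch of the classic PageRank power iteration in C#. The toy link graph, the fixed iteration count, and the 0.85 damping factor are textbook defaults and invented illustration data, not details taken from the article:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class PageRankSketch
        {
            static void Main()
            {
                // Hypothetical toy link graph: each page maps to the pages it links to.
                var links = new Dictionary<string, string[]>
                {
                    { "A", new[] { "B", "C" } },
                    { "B", new[] { "C" } },
                    { "C", new[] { "A" } },
                };

                const double damping = 0.85; // textbook default, not from the article
                List<string> pages = links.Keys.ToList();
                int n = pages.Count;

                // Start every page with an equal share of rank.
                var rank = pages.ToDictionary(p => p, p => 1.0 / n);

                // Power iteration: repeatedly redistribute rank along the links.
                for (int iter = 0; iter < 50; iter++)
                {
                    var next = pages.ToDictionary(p => p, p => (1 - damping) / n);
                    foreach (var kv in links)
                        foreach (string target in kv.Value)
                            next[target] += damping * rank[kv.Key] / kv.Value.Length;
                    rank = next;
                }

                // The raw scores sum to 1; the familiar zero-to-ten toolbar value
                // was a roughly logarithmic bucketing of scores like these.
                foreach (var kv in rank.OrderByDescending(kv2 => kv2.Value))
                    Console.WriteLine($"{kv.Key}: {kv.Value:F4}");
            }
        }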

    Read the article

  • Should the 12-String be in its own class, and why?

    - by MayNotBe
    This question is regarding a homework project in my first Java programming class (online program). The assignment is to create a "stringed instrument" class using (among other things) an array of String names representing the instrument string names ("A", "E", etc.). The idea for the 12-string is beyond the scope of the assignment (it doesn't have to be included at all), but now that I've thought of it, I want to figure out how to make it work. Part of me feels that the 12-string should have its own class, but another part of me feels that it should be in the Guitar class because it's a guitar. I suppose this will become clear as I progress, but I thought I would see what kind of response I get here. Also, why would they ask for a String[] for the instrument string names? It seems like a char[] makes more sense. Thank you for any insight. Here's my code so far (it's a work in progress):

        public class Guitar {
            private int numberOfStrings = 6;
            private static int numberOfGuitars = 0;
            // Standard tuning, low to high (the last note read "A" in the original, a typo for "E").
            private String[] stringNotes = {"E", "A", "D", "G", "B", "E"};
            private boolean tuned = false;
            private boolean playing = false;

            public Guitar() {
                numberOfGuitars++;
            }

            public Guitar(boolean twelveString) {
                numberOfGuitars++; // count this instance too
                if (twelveString) {
                    stringNotes[0] = "E, E";
                    stringNotes[1] = "A, A";
                    stringNotes[2] = "D, D";
                    stringNotes[3] = "G, G";
                    stringNotes[4] = "B, B";
                    stringNotes[5] = "E, E";
                    numberOfStrings = 12;
                }
            }

            public int getNumberOfStrings() {
                return numberOfStrings;
            }

            public void setNumberOfStrings(int strings) {
                if (strings == 12 || strings == 6) {
                    if (strings == 12) {
                        stringNotes[0] = "E, E";
                        stringNotes[1] = "A, A";
                        stringNotes[2] = "D, D";
                        stringNotes[3] = "G, G";
                        stringNotes[4] = "B, B";
                        stringNotes[5] = "E, E";
                        numberOfStrings = strings;
                    }
                    if (strings == 6)
                        numberOfStrings = strings;
                } else {
                    System.out.println("***ERROR***Guitar can only have 6 or 12 strings***ERROR***");
                }
            }

            // Prints the string notes as a comma-separated list.
            public void getStringNotes() {
                for (int i = 0; i < stringNotes.length; i++) {
                    if (i == stringNotes.length - 1)
                        System.out.println(stringNotes[i]);
                    else
                        System.out.print(stringNotes[i] + ", ");
                }
            }
        }
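
    For what it's worth, here is one hedged sketch of the subclass approach being weighed above, in the same Java as the question. The class name and the decision to reuse setNumberOfStrings() are invented for illustration, and keeping a numberOfStrings field inside Guitar (as the code above does) is just as defensible at homework scale:

        // A minimal sketch: a TwelveStringGuitar "is a" Guitar, so it only
        // changes what differs (string count and doubled notes) and inherits
        // the rest. Assumes the Guitar class above is available.
        public class TwelveStringGuitar extends Guitar {

            public TwelveStringGuitar() {
                super();                // counts the instance like any other Guitar
                setNumberOfStrings(12); // reuses the base class validation and note doubling
            }
        }

    Callers that only know about Guitar keep working: Guitar g = new TwelveStringGuitar(); g.getStringNotes(); prints the doubled courses.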

    Read the article

  • How Do SEO Consultancies Work?

    SEO consultancies work and function in very different ways. Enterprises that want their websites optimized approach a consultancy either to take up the assignment full time or to provide part-time consulting services.

    Read the article

  • Cannot Change "Log on through Terminal Services" in Local Security Policy XP from Server 2008 GP

    - by Campo
    This is a mixed AD environment: Server 2003 R2 and Server 2008 R2. I have a 2003 R2 AD and a 2008 R2 AD; GPO is usually managed from the 2008 R2 machine. I have an RD Gateway on another server as well. I set up the CAP and RAP to allow a normal user to log on to the department's workstations, and I also adjusted the GPO for that OU to allow "Log on through Remote Desktop Gateway" for the user group. This worked on my Windows 7 workstation. Unfortunately, the policy has a different name on XP: "Allow log on through Terminal Services". I can get right up to the machine, but when the logon actually happens on the local machine I get the "Cannot log on interactively" error. On the local machine this is set in secpol.msc (Local Security Policy > User Rights Assignment), but it is controlled by the GPO under Computer Configuration > Policies > Security Settings > Local Policies > User Rights Assignment. Do I simply need to adjust the same setting on the same GPO but with a Server 2003 GP editor? It feels like that could cause issues... Looking for some direction, or to hear from anyone who has run into this issue.

    UPDATE: Should this work? support.microsoft.com/kb/186529 It still seems like I will have the issue, as the actual GP setting for "Log on through Terminal Services" is still named differently between Server 2008 R2 and 2003 R2...

    Another thought: should I delete the GPO made for the department and remake it with the 2003 R2 server? I have no 2008-specific settings, as the whole department runs XP other than myself. If that's a solution, I will move my computer out of the department. Thoughts?

    Read the article

  • Group Policy installation failed error 1274

    - by David Thomas Garcia
    I'm trying to deploy an MSI via Group Policy in Active Directory, but these are the errors I'm getting in the System event log after logging in:

        The assignment of application XStandard from policy install failed. The error was : %%1274
        The removal of the assignment of application XStandard from policy install failed. The error was : %%2
        Failed to apply changes to software installation settings. The installation of software deployed through Group Policy for this user has been delayed until the next logon because the changes must be applied before the user logon. The error was : %%1274
        The Group Policy Client Side Extension Software Installation was unable to apply one or more settings because the changes must be processed before system startup or user logon. The system will wait for Group Policy processing to finish completely before the next startup or logon for this user, and this may result in slow startup and boot performance.

    When I reboot and log in again, I simply get the same messages about needing to perform the update before the next logon. I'm on a Windows Vista 32-bit laptop. I'm rather new to deploying via Group Policy, so what other information would be helpful in determining the issue? I tried a different MSI with the same results. I'm able to install the MSI using the command line and msiexec when logged into the computer, so I know the MSI is working, at least.

    Read the article

  • Unable to compile netmap on Fedora 32 bit

    - by John Elf
    This is the error every time I try to build netmap. Can someone let me know how to install the same on e1000e or ixgbe? I have the kernel headers and source installed.

        [root@localhost e1000]# make KSRC=/usr/src/kernels/2.6.35.6-45.fc14.i686/
        make -C /usr/src/kernels/2.6.35.6-45.fc14.i686/ M=/media/sf_Shared/netmap-linux/net/e1000 modules
        make[1]: Entering directory `/usr/src/kernels/2.6.35.6-45.fc14.i686'
          CC [M]  /media/sf_Shared/netmap-linux/net/e1000/e1000_main.o
        /media/sf_Shared/netmap-linux/net/e1000/e1000_main.c: In function ‘e1000_setup_tx_resources’:
        /media/sf_Shared/netmap-linux/net/e1000/e1000_main.c:1485:2: error: implicit declaration of function ‘vzalloc’
        /media/sf_Shared/netmap-linux/net/e1000/e1000_main.c:1485:20: warning: assignment makes pointer from integer without a cast
        /media/sf_Shared/netmap-linux/net/e1000/e1000_main.c: In function ‘e1000_setup_rx_resources’:
        /media/sf_Shared/netmap-linux/net/e1000/e1000_main.c:1680:20: warning: assignment makes pointer from integer without a cast
        /media/sf_Shared/netmap-linux/net/e1000/e1000_main.c: In function ‘e1000_tx_csum’:
        /media/sf_Shared/netmap-linux/net/e1000/e1000_main.c:2780:2: error: implicit declaration of function ‘skb_checksum_start_offset’
        /media/sf_Shared/netmap-linux/net/e1000/e1000_main.c: In function ‘e1000_rx_checksum’:
        /media/sf_Shared/netmap-linux/net/e1000/e1000_main.c:3689:2: error: implicit declaration of function ‘skb_checksum_none_assert’
        /media/sf_Shared/netmap-linux/net/e1000/e1000_main.c: In function ‘e1000_restore_vlan’:
        /media/sf_Shared/netmap-linux/net/e1000/e1000_main.c:4617:23: error: ‘VLAN_N_VID’ undeclared (first use in this function)
        /media/sf_Shared/netmap-linux/net/e1000/e1000_main.c:4617:23: note: each undeclared identifier is reported only once for each function it appears in
        make[2]: *** [/media/sf_Shared/netmap-linux/net/e1000/e1000_main.o] Error 1
        make[1]: *** [_module_/media/sf_Shared/netmap-linux/net/e1000] Error 2
        make[1]: Leaving directory `/usr/src/kernels/2.6.35.6-45.fc14.i686'
        make: *** [all] Error 2
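
    Those errors mean the patched e1000 driver is written against kernel APIs that the 2.6.35 kernel in Fedora 14 does not have yet: vzalloc, skb_checksum_none_assert and VLAN_N_VID arrived around 2.6.37, and skb_checksum_start_offset in 2.6.38. Besides moving to a newer kernel, one common workaround is a small compatibility header. The following is a hedged, untested backport shim written from the mainline definitions, in C to match the driver; the version guards are approximate, so verify them against your exact kernel source before relying on it:

        /* compat.h - backport shims so a 2.6.37+-era e1000 driver builds on ~2.6.35.
         * Include from e1000_main.c, or add "-include compat.h" to the module CFLAGS. */
        #include <linux/version.h>
        #include <linux/vmalloc.h>
        #include <linux/string.h>
        #include <linux/skbuff.h>

        #if LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 37)
        /* vzalloc() = vmalloc() + zeroing; added to mainline in 2.6.37. */
        static inline void *vzalloc(unsigned long size)
        {
            void *p = vmalloc(size);
            if (p)
                memset(p, 0, size);
            return p;
        }

        /* Debug-friendly wrapper mainline added around the same time. */
        static inline void skb_checksum_none_assert(struct sk_buff *skb)
        {
            skb->ip_summed = CHECKSUM_NONE;
        }
        #endif

        #ifndef VLAN_N_VID
        #define VLAN_N_VID 4096 /* 12-bit VLAN ID space; named constant added ~2.6.37 */
        #endif

        #if LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 38)
        /* Same value older code computed by hand; helper added in 2.6.38. */
        static inline int skb_checksum_start_offset(const struct sk_buff *skb)
        {
            return skb->csum_start - skb_headroom(skb);
        }
        #endif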

    Read the article

  • What pseudo-operators exist in Perl 5?

    - by Chas. Owens
    I am currently documenting all of Perl 5's operators (see the perlopref GitHub project) and I have decided to include Perl 5's pseudo-operators as well. To me, a pseudo-operator in Perl is anything that looks like an operator but is really more than one operator or some other piece of syntax. I have documented the four I am familiar with already:

        ()=   the countof operator
        =()=  the goatse/countof operator
        ~~    the scalar context operator
        }{    the Eskimo-kiss operator

    What other names exist for these pseudo-operators, and do you know of any pseudo-operators I have missed?

        =head1 Pseudo-operators

        There are idioms in Perl 5 that appear to be operators, but are really a
        combination of several operators or pieces of syntax. These
        pseudo-operators have the precedence of the constituent parts.

        =head2 ()= X

        =head3 Description

        This pseudo-operator is the list assignment operator (aka the countof
        operator). It is made up of two items: C<()> and C<=>. In scalar context
        it returns the number of items in the list X. In list context it returns
        an empty list. It is useful when you have something that returns a list
        and you want to know the number of items in that list and don't care
        about the list's contents. It is needed because the comma operator
        returns the last item in the sequence rather than the number of items in
        the sequence when it is placed in scalar context.

        It works because the assignment operator returns the number of items
        available to be assigned when its left-hand side has list context. In the
        following example there are five values in the list being assigned to the
        list C<($x, $y, $z)>, so C<$count> is assigned C<5>:

            my $count = my ($x, $y, $z) = qw/a b c d e/;

        The empty list (the C<()> part of the pseudo-operator) triggers this
        behavior.

        =head3 Example

            sub f { return qw/a b c d e/ }

            my $count = ()= f();              #$count is now 5

            my $string = "cat cat dog cat";
            my $cats = ()= $string =~ /cat/g; #$cats is now 3

            print scalar( ()= f() ), "\n";    #prints "5\n"

        =head3 See also

        L</X = Y> and L</X =()= Y>

        =head2 X =()= Y

        =head3 Description

        This pseudo-operator is often called the goatse operator, for reasons
        better left unexamined; it is also called the list assignment or countof
        operator. It is made up of three items: C<=>, C<()>, and C<=>. When X is
        a scalar variable, the number of items in the list Y is returned. If X
        is an array or a hash, it returns an empty list. It is useful when you
        have something that returns a list and you want to know the number of
        items in that list and don't care about the list's contents. It is
        needed because the comma operator returns the last item in the sequence
        rather than the number of items in the sequence when it is placed in
        scalar context.

        It works because the assignment operator returns the number of items
        available to be assigned when its left-hand side has list context. In the
        following example there are five values in the list being assigned to the
        list C<($x, $y, $z)>, so C<$count> is assigned C<5>:

            my $count = my ($x, $y, $z) = qw/a b c d e/;

        The empty list (the C<()> part of the pseudo-operator) triggers this
        behavior.

        =head3 Example

            sub f { return qw/a b c d e/ }

            my $count =()= f();               #$count is now 5

            my $string = "cat cat dog cat";
            my $cats =()= $string =~ /cat/g;  #$cats is now 3

        =head3 See also

        L</=> and L</()=>

        =head2 ~~X

        =head3 Description

        This pseudo-operator is named the scalar context operator. It is made up
        of two bitwise negation operators. It provides scalar context to the
        expression X. It works because the first bitwise negation operator
        provides scalar context to X and performs a bitwise negation of the
        result; since the result of two bitwise negations is the original item,
        the value of the original expression is preserved.

        With the addition of the smart match operator, this pseudo-operator is
        even more confusing. The C<scalar> function is much easier to understand
        and you are encouraged to use it instead.

        =head3 Example

            my @a = qw/a b c d/;

            print ~~@a, "\n"; #prints 4

        =head3 See also

        L</~X>, L</X ~~ Y>, and L<perlfunc/scalar>

        =head2 X }{ Y

        =head3 Description

        This pseudo-operator is called the Eskimo-kiss operator because it looks
        like two faces touching noses. It is made up of a closing brace and an
        opening brace. It is used when running C<perl> as a command-line program
        with the C<-n> or C<-p> options. It has the effect of running X inside
        the loop created by C<-n> or C<-p> and running Y at the end of the
        program.

        It works because the closing brace closes the loop created by C<-n> or
        C<-p> and the opening brace creates a new bare block that is closed by
        the loop's original ending. You can see this behavior by using the
        L<B::Deparse> module. Here is the command C<perl -ne 'print $_;'>
        deparsed:

            LINE: while (defined($_ = <ARGV>)) {
                print $_;
            }

        Notice how the original code was wrapped with the C<while> loop. Here is
        the deparsing of C<perl -ne '$count++ if /foo/; }{ print "$count\n"'>:

            LINE: while (defined($_ = <ARGV>)) {
                ++$count if /foo/;
            }
            {
                print "$count\n";
            }

        Notice how the C<while> loop is closed by the closing brace we added and
        the opening brace starts a new bare block that is closed by the closing
        brace that was originally intended to close the C<while> loop.

        =head3 Example

            # count unique lines in the file FOO
            perl -nle '$seen{$_}++ }{ print "$_ => $seen{$_}" for keys %seen' FOO

            # sum all of the lines until the user types control-d
            perl -nle '$sum += $_ }{ print $sum'

        =head3 See also

        L<perlrun> and L<perlsyn>

        =cut

    Read the article

  • New features of C# 4.0

    This article covers the new features of C# 4.0 and is divided into the following sections: Introduction; Dynamic Lookup; Named and Optional Arguments; Features for COM interop; Variance; Relationship with Visual Basic; Resources. Other interesting readings: 22 New Features of Visual Studio 2008 for .NET Professionals, 50 New Features of SQL Server 2008, IIS 7.0 New features.

    Introduction

    It is now close to a year since Microsoft Visual C# 3.0 shipped as part of Visual Studio 2008. In the VS Managed Languages team we are hard at work on creating the next version of the language (with the unsurprising working title of C# 4.0), and this document is a first public description of the planned language features as we currently see them. Please be advised that all this is in early stages of production and is subject to change. Part of the reason for sharing our plans in public so early is precisely to get the kind of feedback that will cause us to improve the final product before it rolls out.

    Simultaneously with the publication of this whitepaper, a first public CTP (community technology preview) of Visual Studio 2010 is going out as a Virtual PC image for everyone to try. Please use it to play and experiment with the features, and let us know of any thoughts you have. We ask for your understanding and patience working with very early bits, where especially new or newly implemented features do not have the quality or stability of a final product. The aim of the CTP is not to give you a productive work environment but to give you the best possible impression of what we are working on for the next release. The CTP contains a number of walkthroughs, some of which highlight the new language features of C# 4.0. Those are excellent for getting a hands-on guided tour through the details of some common scenarios for the features. You may consider this whitepaper a companion document to these walkthroughs, complementing them with a focus on the overall language features and how they work, as opposed to the specifics of the concrete scenarios.

    C# 4.0

    The major theme for C# 4.0 is dynamic programming. Increasingly, objects are "dynamic" in the sense that their structure and behavior is not captured by a static type, or at least not one that the compiler knows about when compiling your program. Some examples include:

        a. objects from dynamic programming languages, such as Python or Ruby
        b. COM objects accessed through IDispatch
        c. ordinary .NET types accessed through reflection
        d. objects with changing structure, such as HTML DOM objects

    While C# remains a statically typed language, we aim to vastly improve the interaction with such objects. A secondary theme is co-evolution with Visual Basic. Going forward we will aim to maintain the individual character of each language, but at the same time important new features should be introduced in both languages at the same time. They should be differentiated more by style and feel than by feature set.

    The new features in C# 4.0 fall into four groups:

    Dynamic lookup. Dynamic lookup allows you to write method, operator and indexer calls, property and field accesses, and even object invocations which bypass the C# static type checking and instead get resolved at runtime.

    Named and optional parameters. Parameters in C# can now be specified as optional by providing a default value for them in a member declaration. When the member is invoked, optional arguments can be omitted. Furthermore, any argument can be passed by parameter name instead of position.
    COM-specific interop features. Dynamic lookup as well as named and optional parameters both help make programming against COM less painful than today. On top of that, however, we are adding a number of other small features that further improve the interop experience.

    Variance. It used to be that an IEnumerable<string> wasn't an IEnumerable<object>. Now it is – C# embraces type-safe "co- and contravariance" and common BCL types are updated to take advantage of that.

    Dynamic Lookup

    Dynamic lookup allows you a unified approach to invoking things dynamically. With dynamic lookup, when you have an object in your hand you do not need to worry about whether it comes from COM, IronPython, the HTML DOM or reflection; you just apply operations to it and leave it to the runtime to figure out what exactly those operations mean for that particular object. This affords you enormous flexibility, and can greatly simplify your code, but it does come with a significant drawback: static typing is not maintained for these operations. A dynamic object is assumed at compile time to support any operation, and only at runtime will you get an error if it wasn't so. Oftentimes this will be no loss, because the object wouldn't have a static type anyway; in other cases it is a tradeoff between brevity and safety. In order to facilitate this tradeoff, it is a design goal of C# to allow you to opt in or opt out of dynamic behavior on every single call.

    The dynamic type

    C# 4.0 introduces a new static type called dynamic. When you have an object of type dynamic you can "do things to it" that are resolved only at runtime:

        dynamic d = GetDynamicObject(…);
        d.M(7);

    The C# compiler allows you to call a method with any name and any arguments on d because it is of type dynamic. At runtime the actual object that d refers to will be examined to determine what it means to "call M with an int" on it. The type dynamic can be thought of as a special version of the type object, which signals that the object can be used dynamically. It is easy to opt in or out of dynamic behavior: any object can be implicitly converted to dynamic, "suspending belief" until runtime. Conversely, there is an "assignment conversion" from dynamic to any other type, which allows implicit conversion in assignment-like constructs:

        dynamic d = 7; // implicit conversion
        int i = d;     // assignment conversion

    Dynamic operations

    Not only method calls, but also field and property accesses, indexer and operator calls and even delegate invocations can be dispatched dynamically:

        dynamic d = GetDynamicObject(…);
        d.M(7);              // calling methods
        d.f = d.P;           // getting and setting fields and properties
        d["one"] = d["two"]; // getting and setting through indexers
        int i = d + 3;       // calling operators
        string s = d(5,7);   // invoking as a delegate

    The role of the C# compiler here is simply to package up the necessary information about "what is being done to d", so that the runtime can pick it up and determine what the exact meaning of it is given an actual object d. Think of it as deferring part of the compiler's job to runtime. The result of any dynamic operation is itself of type dynamic.

    Runtime lookup

    At runtime a dynamic operation is dispatched according to the nature of its target object d:

    COM objects. If d is a COM object, the operation is dispatched dynamically through COM IDispatch. This allows calling to COM types that don't have a Primary Interop Assembly (PIA), and relying on COM features that don't have a counterpart in C#, such as indexed properties and default properties.
    Dynamic objects. If d implements the interface IDynamicObject, d itself is asked to perform the operation. Thus by implementing IDynamicObject a type can completely redefine the meaning of dynamic operations. This is used intensively by dynamic languages such as IronPython and IronRuby to implement their own dynamic object models. It will also be used by APIs, e.g. by the HTML DOM to allow direct access to the object's properties using property syntax.

    Plain objects. Otherwise d is a standard .NET object, and the operation will be dispatched using reflection on its type and a C# "runtime binder" which implements C#'s lookup and overload resolution semantics at runtime. This is essentially a part of the C# compiler running as a runtime component to "finish the work" on dynamic operations that was deferred by the static compiler.

    Example

    Assume the following code:

        dynamic d1 = new Foo();
        dynamic d2 = new Bar();
        string s;
        d1.M(s, d2, 3, null);

    Because the receiver of the call to M is dynamic, the C# compiler does not try to resolve the meaning of the call. Instead it stashes away information for the runtime about the call. This information (often referred to as the "payload") is essentially equivalent to: "Perform an instance method call of M with the following arguments: 1. a string, 2. a dynamic, 3. a literal int 3, 4. a literal object null."

    At runtime, assume that the actual type Foo of d1 is not a COM type and does not implement IDynamicObject. In this case the C# runtime binder picks up the job of finishing overload resolution based on runtime type information, proceeding as follows:

    1. Reflection is used to obtain the actual runtime types of the two objects, d1 and d2, that did not have a static type (or rather had the static type dynamic). The result is Foo for d1 and Bar for d2.
    2. Method lookup and overload resolution is performed on the type Foo with the call M(string,Bar,3,null) using ordinary C# semantics.
    3. If the method is found, it is invoked; otherwise a runtime exception is thrown.

    Overload resolution with dynamic arguments

    Even if the receiver of a method call is of a static type, overload resolution can still happen at runtime. This can happen if one or more of the arguments have the type dynamic:

        Foo foo = new Foo();
        dynamic d = new Bar();
        var result = foo.M(d);

    The C# runtime binder will choose between the statically known overloads of M on Foo, based on the runtime type of d, namely Bar. The result is again of type dynamic.

    The Dynamic Language Runtime

    An important component in the underlying implementation of dynamic lookup is the Dynamic Language Runtime (DLR), which is a new API in .NET 4.0. The DLR provides most of the infrastructure behind not only C# dynamic lookup but also the implementation of several dynamic programming languages on .NET, such as IronPython and IronRuby. Through this common infrastructure a high degree of interoperability is ensured, but just as importantly the DLR provides excellent caching mechanisms which serve to greatly enhance the efficiency of runtime dispatch. To the user of dynamic lookup in C#, the DLR is invisible except for the improved efficiency. However, if you want to implement your own dynamically dispatched objects, the IDynamicObject interface allows you to interoperate with the DLR and plug in your own behavior. This is a rather advanced task, which requires you to understand a good deal more about the inner workings of the DLR.
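
    As a taste of what plugging in your own behavior looks like, here is a minimal hedged sketch. One assumption to flag: the IDynamicObject interface named in this CTP-era whitepaper shipped in the final .NET 4.0 as IDynamicMetaObjectProvider, with System.Dynamic.DynamicObject as a convenience base class, so the sketch uses the shipped names rather than the whitepaper's:

        using System;
        using System.Collections.Generic;
        using System.Dynamic;

        // A property bag that redefines dynamic member access: reads and writes
        // are routed to a dictionary instead of to real CLR members.
        class Bag : DynamicObject
        {
            private readonly Dictionary<string, object> values =
                new Dictionary<string, object>();

            // Called for "bag.Foo" when Bag is used through a dynamic reference.
            public override bool TryGetMember(GetMemberBinder binder, out object result)
            {
                return values.TryGetValue(binder.Name, out result);
            }

            // Called for "bag.Foo = x".
            public override bool TrySetMember(SetMemberBinder binder, object value)
            {
                values[binder.Name] = value;
                return true;
            }
        }

        class Demo
        {
            static void Main()
            {
                dynamic bag = new Bag();
                bag.Greeting = "Hello";                      // routed to TrySetMember
                Console.WriteLine(bag.Greeting + ", world"); // routed to TryGetMember
            }
        }
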
    For API writers, however, it can definitely be worth the trouble in order to vastly improve the usability of, e.g., a library representing an inherently dynamic domain.

    Open issues

    There are a few limitations and things that might work differently than you would expect.

    · The DLR allows objects to be created from objects that represent classes. However, the current implementation of C# doesn't have syntax to support this.
    · Dynamic lookup will not be able to find extension methods. Whether extension methods apply or not depends on the static context of the call (i.e. which using clauses occur), and this context information is not currently kept as part of the payload.
    · Anonymous functions (i.e. lambda expressions) cannot appear as arguments to a dynamic method call. The compiler cannot bind (i.e. "understand") an anonymous function without knowing what type it is converted to.

    One consequence of these limitations is that you cannot easily use LINQ queries over dynamic objects:

        dynamic collection = …;
        var result = collection.Select(e => e + 5);

    If the Select method is an extension method, dynamic lookup will not find it. Even if it is an instance method, the above does not compile, because a lambda expression cannot be passed as an argument to a dynamic operation. There are no plans to address these limitations in C# 4.0.

    Named and Optional Arguments

    Named and optional parameters are really two distinct features, but are often useful together. Optional parameters allow you to omit arguments to member invocations, whereas named arguments are a way to provide an argument using the name of the corresponding parameter instead of relying on its position in the parameter list. Some APIs, most notably COM interfaces such as the Office automation APIs, are written specifically with named and optional parameters in mind. Up until now it has been very painful to call into these APIs from C#, with sometimes as many as thirty arguments having to be explicitly passed, most of which have reasonable default values and could be omitted. Even in APIs for .NET, however, you sometimes find yourself compelled to write many overloads of a method with different combinations of parameters in order to provide maximum usability to the callers. Optional parameters are a useful alternative for these situations.

    Optional parameters

    A parameter is declared optional simply by providing a default value for it:

        public void M(int x, int y = 5, int z = 7);

    Here y and z are optional parameters and can be omitted in calls:

        M(1, 2, 3); // ordinary call of M
        M(1, 2);    // omitting z – equivalent to M(1, 2, 7)
        M(1);       // omitting both y and z – equivalent to M(1, 5, 7)

    Named and optional arguments

    C# 4.0 does not permit you to omit arguments between commas, as in M(1,,3). This could lead to highly unreadable comma-counting code. Instead any argument can be passed by name. Thus if you want to omit only y from a call of M you can write:

        M(1, z: 3);    // passing z by name

    or

        M(x: 1, z: 3); // passing both x and z by name

    or even

        M(z: 3, x: 1); // reversing the order of arguments

    All forms are equivalent, except that arguments are always evaluated in the order they appear, so in the last example the 3 is evaluated before the 1. Optional and named arguments can be used not only with methods but also with indexers and constructors.
    Overload resolution

    Named and optional arguments affect overload resolution, but the changes are relatively simple: a signature is applicable if all its parameters are either optional or have exactly one corresponding argument (by name or position) in the call which is convertible to the parameter type. Betterness rules on conversions are only applied for arguments that are explicitly given – omitted optional arguments are ignored for betterness purposes. If two signatures are equally good, one that does not omit optional parameters is preferred.

        M(string s, int i = 1);
        M(object o);
        M(int i, string s = "Hello");
        M(int i);

        M(5);

    Given these overloads, we can see the working of the rules above. M(string,int) is not applicable because 5 doesn't convert to string. M(int,string) is applicable because its second parameter is optional, and so, obviously, are M(object) and M(int). M(int,string) and M(int) are both better than M(object) because the conversion from 5 to int is better than the conversion from 5 to object. Finally M(int) is better than M(int,string) because no optional arguments are omitted. Thus the method that gets called is M(int).

    Features for COM interop

    Dynamic lookup as well as named and optional parameters greatly improve the experience of interoperating with COM APIs such as the Office Automation APIs. In order to remove even more of the speed bumps, a couple of small COM-specific features are also added to C# 4.0.

    Dynamic import

    Many COM methods accept and return variant types, which are represented in the PIAs as object. In the vast majority of cases, a programmer calling these methods already knows the static type of a returned object from context, but explicitly has to perform a cast on the returned value to make use of that knowledge. These casts are so common that they constitute a major nuisance. In order to facilitate a smoother experience, you can now choose to import these COM APIs in such a way that variants are instead represented using the type dynamic. In other words, from your point of view, COM signatures now have occurrences of dynamic instead of object in them. This means that you can easily access members directly off a returned object, or you can assign it to a strongly typed local variable without having to cast. To illustrate, you can now say

        excel.Cells[1, 1].Value = "Hello";

    instead of

        ((Excel.Range)excel.Cells[1, 1]).Value2 = "Hello";

    and

        Excel.Range range = excel.Cells[1, 1];

    instead of

        Excel.Range range = (Excel.Range)excel.Cells[1, 1];

    Compiling without PIAs

    Primary Interop Assemblies are large .NET assemblies generated from COM interfaces to facilitate strongly typed interoperability. They provide great support at design time, where your experience of the interop is as good as if the types were really defined in .NET. However, at runtime these large assemblies can easily bloat your program, and also cause versioning issues because they are distributed independently of your application. The no-PIA feature allows you to continue to use PIAs at design time without having them around at runtime. Instead, the C# compiler will bake the small part of the PIA that a program actually uses directly into its assembly. At runtime the PIA does not have to be loaded.

    Omitting ref

    Because of a different programming model, many COM APIs contain a lot of reference parameters. Contrary to refs in C#, these are typically not meant to mutate a passed-in argument for the subsequent benefit of the caller, but are simply another way of passing value parameters.
    It therefore seems unreasonable that a C# programmer should have to create temporary variables for all such ref parameters and pass these by reference. Instead, specifically for COM methods, the C# compiler will allow you to pass arguments by value to such a method, and will automatically generate temporary variables to hold the passed-in values, subsequently discarding these when the call returns. In this way the caller sees value semantics, and will not experience any side effects, but the called method still gets a reference.

    Open issues

    A few COM interface features still are not surfaced in C#. Most notably these include indexed properties and default properties. As mentioned above these will be respected if you access COM dynamically, but statically typed C# code will still not recognize them. There are currently no plans to address these remaining speed bumps in C# 4.0.

    Variance

    An aspect of generics that often comes across as surprising is that the following is illegal:

        IList<string> strings = new List<string>();
        IList<object> objects = strings;

    The second assignment is disallowed because strings does not have the same element type as objects. There is a perfectly good reason for this. If it were allowed you could write:

        objects[0] = 5;
        string s = strings[0];

    allowing an int to be inserted into a list of strings and subsequently extracted as a string. This would be a breach of type safety. However, there are certain interfaces where the above cannot occur, notably where there is no way to insert an object into the collection. Such an interface is IEnumerable<T>. If instead you say:

        IEnumerable<object> objects = strings;

    there is no way we can put the wrong kind of thing into strings through objects, because objects doesn't have a method that takes an element in. Variance is about allowing assignments such as this in cases where it is safe. The result is that a lot of situations that were previously surprising now just work.

    Covariance

    In .NET 4.0 the IEnumerable<T> interface will be declared in the following way:

        public interface IEnumerable<out T> : IEnumerable
        {
            IEnumerator<T> GetEnumerator();
        }

        public interface IEnumerator<out T> : IEnumerator
        {
            bool MoveNext();
            T Current { get; }
        }

    The "out" in these declarations signifies that the T can only occur in output position in the interface – the compiler will complain otherwise. In return for this restriction, the interface becomes "covariant" in T, which means that an IEnumerable<A> is considered an IEnumerable<B> if A has a reference conversion to B. As a result, any sequence of strings is also, e.g., a sequence of objects. This is useful in many LINQ methods. Using the declarations above:

        var result = strings.Union(objects); // succeeds with an IEnumerable<object>

    This would previously have been disallowed, and you would have had to do some cumbersome wrapping to get the two sequences to have the same element type.

    Contravariance

    Type parameters can also have an "in" modifier, restricting them to occur only in input positions. An example is IComparer<T>:

        public interface IComparer<in T>
        {
            int Compare(T left, T right);
        }

    The somewhat baffling result is that an IComparer<object> can in fact be considered an IComparer<string>! It makes sense when you think about it: if a comparer can compare any two objects, it can certainly also compare two strings. This property is referred to as contravariance.
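
    To see contravariance pay off in practice, here is a small hedged sketch; the ByHashCode comparer and its ordering rule are invented purely for illustration. A single IComparer<object> sorts a List<string>, which compiles only because IComparer<T> is contravariant in T:

        using System;
        using System.Collections.Generic;

        // One comparer written once, against object...
        class ByHashCode : IComparer<object>
        {
            public int Compare(object left, object right)
            {
                // Invented ordering rule, purely for illustration.
                return left.GetHashCode().CompareTo(right.GetHashCode());
            }
        }

        class Demo
        {
            static void Main()
            {
                var words = new List<string> { "pear", "apple", "plum" };

                // ...used where an IComparer<string> is expected. The implicit
                // conversion below is exactly the contravariant one described
                // above; without the "in" annotation it would not compile.
                words.Sort(new ByHashCode());

                Console.WriteLine(string.Join(", ", words));
            }
        }
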
    A generic type can have both in and out modifiers on its type parameters, as is the case with the Func<…> delegate types:

        public delegate TResult Func<in TArg, out TResult>(TArg arg);

    Obviously the argument only ever comes in, and the result only ever comes out. Therefore a Func<object,string> can in fact be used as a Func<string,object>.

    Limitations

    Variant type parameters can only be declared on interfaces and delegate types, due to a restriction in the CLR. Variance only applies when there is a reference conversion between the type arguments. For instance, an IEnumerable<int> is not an IEnumerable<object> because the conversion from int to object is a boxing conversion, not a reference conversion. Also please note that the CTP does not contain the new versions of the .NET types mentioned above. In order to experiment with variance you have to declare your own variant interfaces and delegate types.

    COM Example

    Here is a larger Office automation example that shows many of the new C# features in action:

        using System;
        using System.Diagnostics;
        using System.Linq;
        using Excel = Microsoft.Office.Interop.Excel;
        using Word = Microsoft.Office.Interop.Word;

        class Program
        {
            static void Main(string[] args)
            {
                var excel = new Excel.Application();
                excel.Visible = true;

                excel.Workbooks.Add();                    // optional arguments omitted

                excel.Cells[1, 1].Value = "Process Name"; // no casts; Value dynamically
                excel.Cells[1, 2].Value = "Memory Usage"; // accessed

                var processes = Process.GetProcesses()
                    .OrderByDescending(p => p.WorkingSet)
                    .Take(10);

                int i = 2;
                foreach (var p in processes)
                {
                    excel.Cells[i, 1].Value = p.ProcessName; // no casts
                    excel.Cells[i, 2].Value = p.WorkingSet;  // no casts
                    i++;
                }

                Excel.Range range = excel.Cells[1, 1];       // no casts

                Excel.Chart chart = excel.ActiveWorkbook.Charts.
                    Add(After: excel.ActiveSheet);           // named and optional arguments

                chart.ChartWizard(
                    Source: range.CurrentRegion,
                    Title: "Memory Usage in " + Environment.MachineName); // named+optional

                chart.ChartStyle = 45;
                chart.CopyPicture(Excel.XlPictureAppearance.xlScreen,
                    Excel.XlCopyPictureFormat.xlBitmap,
                    Excel.XlPictureAppearance.xlScreen);

                var word = new Word.Application();
                word.Visible = true;

                word.Documents.Add();    // optional arguments
                word.Selection.Paste();
            }
        }

    The code is much more terse and readable than the C# 3.0 counterpart. Note especially how the Value property is accessed dynamically. This is actually an indexed property, i.e. a property that takes an argument, something which C# does not understand. However, the argument is optional. Since the access is dynamic, it goes through the runtime COM binder, which knows to substitute the default value and call the indexed property. Thus, dynamic COM allows you to avoid accesses to the puzzling Value2 property of Excel ranges.

    Relationship with Visual Basic

    A number of the features introduced to C# 4.0 already exist or will be introduced in some form or other in Visual Basic:

    · Late binding in VB is similar in many ways to dynamic lookup in C#, and can be expected to make more use of the DLR in the future, leading to further parity with C#.
    · Named and optional arguments have been part of Visual Basic for a long time, and the C# version of the feature is explicitly engineered with maximal VB interoperability in mind.
    · No-PIA and variance are both being introduced to VB and C# at the same time.

    VB in turn is adding a number of features that have hitherto been a mainstay of C#.
    As a result, future versions of C# and VB will have much better feature parity, for the benefit of everyone.

    Resources

    All available resources concerning C# 4.0 can be accessed through the C# Dev Center. Specifically, this white paper and other resources can be found at the Code Gallery site. Enjoy!

    Read the article

  • Dynamic Type to do away with Reflection

    - by Rick Strahl
    The dynamic type in C# 4.0 is a welcome addition to the language. One thing I've been doing a lot with it is to remove explicit Reflection code that's often necessary when you 'dynamically' need to walk an object hierarchy. In the past I've had a number of ReflectionUtils that used string-based expressions to walk an object hierarchy. With the introduction of dynamic, much of the ReflectionUtils code can be removed for cleaner code that runs considerably faster to boot.

    The old way – Reflection

    Here's a really contrived example, but assume for a second you'd want to dynamically retrieve Page.Request.Url.AbsolutePath based on a Page instance in an ASP.NET web page request. The strongly typed version looks like this:

        string path = Page.Request.Url.AbsolutePath;

    Now assume for a second that Page wasn't available as a strongly typed instance, and all you had was an object reference to start with, and you couldn't cast it (right, I said this was contrived :-)). If you're using raw Reflection code to retrieve this, you'd end up writing three sets of Reflection calls using GetValue(). Here's some internal code I use to retrieve property values as part of ReflectionUtils:

        /// <summary>
        /// Retrieve a property value from an object dynamically. This is a simple version
        /// that uses Reflection calls directly. It doesn't support indexers.
        /// </summary>
        /// <param name="instance">Object to make the call on</param>
        /// <param name="property">Property to retrieve</param>
        /// <returns>Object - cast to proper type</returns>
        public static object GetProperty(object instance, string property)
        {
            return instance.GetType()
                           .GetProperty(property, ReflectionUtils.MemberAccess)
                           .GetValue(instance, null);
        }

    If you want more control over properties and support both fields and properties as well as array indexers, a little more work is required:

        /// <summary>
        /// Parses Properties and Fields including Array and Collection references.
        /// Used internally for the 'Ex' Reflection methods.
        /// </summary>
        private static object GetPropertyInternal(object Parent, string Property)
        {
            if (Property == "this" || Property == "me")
                return Parent;

            object result = null;
            string pureProperty = Property;
            string indexes = null;
            bool isArrayOrCollection = false;

            // Deal with Array Property
            if (Property.IndexOf("[") > -1)
            {
                pureProperty = Property.Substring(0, Property.IndexOf("["));
                indexes = Property.Substring(Property.IndexOf("["));
                isArrayOrCollection = true;
            }

            // Get the member
            MemberInfo member = Parent.GetType().GetMember(pureProperty, ReflectionUtils.MemberAccess)[0];
            if (member.MemberType == MemberTypes.Property)
                result = ((PropertyInfo)member).GetValue(Parent, null);
            else
                result = ((FieldInfo)member).GetValue(Parent);

            if (isArrayOrCollection)
            {
                indexes = indexes.Replace("[", string.Empty).Replace("]", string.Empty);

                if (result is Array)
                {
                    int Index = -1;
                    int.TryParse(indexes, out Index);
                    result = CallMethod(result, "GetValue", Index);
                }
                else if (result is ICollection)
                {
                    if (indexes.StartsWith("\""))
                    {
                        // String Index
                        indexes = indexes.Trim('\"');
                        result = CallMethod(result, "get_Item", indexes);
                    }
                    else
                    {
                        // assume numeric index
                        int index = -1;
                        int.TryParse(indexes, out index);
                        result = CallMethod(result, "get_Item", index);
                    }
                }
            }

            return result;
        }

        /// <summary>
        /// Returns a property or field value using a base object and sub members including . syntax.
        /// For example, you can access: oCustomer.oData.Company with (this,"oCustomer.oData.Company")
        /// This method also supports indexers in the Property value such as:
        /// Customer.DataSet.Tables["Customers"].Rows[0]
        /// </summary>
        /// <param name="Parent">Parent object to 'start' parsing from. Typically this will be the Page.</param>
        /// <param name="Property">The property to retrieve. Example: 'Customer.Entity.Company'</param>
        /// <returns></returns>
        public static object GetPropertyEx(object Parent, string Property)
        {
            Type type = Parent.GetType();

            int at = Property.IndexOf(".");
            if (at < 0)
            {
                // Complex parse of the property
                return GetPropertyInternal(Parent, Property);
            }

            // Walk the . syntax - split into current object (Main) and further parsed objects (Subs)
            string main = Property.Substring(0, at);
            string subs = Property.Substring(at + 1);

            // Retrieve the next . section of the property
            object sub = GetPropertyInternal(Parent, main);

            // Now go parse the left over sections
            return GetPropertyEx(sub, subs);
        }

    As you can see, there's a fair bit of code involved in retrieving a property or field value reliably, especially if you want to support array indexer syntax. This method is then used by a variety of routines to retrieve individual properties, including one called GetPropertyEx() which can walk the dot-syntax hierarchy easily. Anyway, with ReflectionUtils I can retrieve Page.Request.Url.AbsolutePath using code like this:

        string url = ReflectionUtils.GetPropertyEx(Page, "Request.Url.AbsolutePath") as string;

    This works fine, but is bulky to write and of course requires that I use my custom routines. It's also quite slow, as the code in GetPropertyEx does all sorts of string parsing to figure out which members to walk in the hierarchy.

    Enter dynamic – way easier!

    .NET 4.0's dynamic type makes the above really easy. The following code is all that it takes:

        object objPage = Page;    // force to object for contrivance :)
        dynamic page = objPage;   // convert to dynamic from untyped object
        string scriptUrl = page.Request.Url.AbsolutePath;

    The dynamic type assignment in the first two lines turns the strongly typed Page object into a dynamic. The first assignment is just part of the contrived example, forcing the strongly typed Page reference into an untyped value to demonstrate the dynamic member access. The next line then just creates the dynamic type from the Page reference, which allows you to access any public properties and methods easily. It also lets you access any child properties as dynamic types; in IntelliSense, typing "Request." shows the members coming back as dynamic (the original post illustrates this with a screenshot). In other words, any dynamic value access on an object returns another dynamic object, which is what allows the walking of the hierarchy chain. Note also that the result value doesn't have to be explicitly cast as string in the code above – the compiler is perfectly happy without the cast in this case, inferring the target type based on the type being assigned to. The dynamic conversion automatically handles the cast when making the final assignment, which makes for natural syntax that looks *exactly* like the fully typed syntax, but is completely dynamic.
    Note that you can also use indexers in the same natural syntax, so the following also works on the dynamic page instance:

        string scriptUrl = page.Request.ServerVariables["SCRIPT_NAME"];

    The dynamic type is going to make a lot of Reflection code go away, as it's simply so much nicer to be able to use natural syntax to write out code that previously required nasty Reflection syntax. Another interesting thing about the dynamic type is that it actually works considerably faster than Reflection. Check out the following methods that check performance:

        void Reflection()
        {
            Stopwatch stop = new Stopwatch();
            stop.Start();

            for (int i = 0; i < reps; i++)
            {
                // string url = ReflectionUtils.GetProperty(Page,"Title") as string; // "Request.Url.AbsolutePath") as string;
                string url = Page.GetType()
                                 .GetProperty("Title", ReflectionUtils.MemberAccess)
                                 .GetValue(Page, null) as string;
            }

            stop.Stop();
            Response.Write("Reflection: " + stop.ElapsedMilliseconds.ToString());
        }

        void Dynamic()
        {
            Stopwatch stop = new Stopwatch();
            stop.Start();

            dynamic page = Page;
            for (int i = 0; i < reps; i++)
            {
                string url = page.Title; // Request.Url.AbsolutePath;
            }

            stop.Stop();
            Response.Write("Dynamic: " + stop.ElapsedMilliseconds.ToString());
        }

    The dynamic code runs in 4-5 milliseconds, while the Reflection code runs around 200+ milliseconds! There's a bit of overhead in the first dynamic object call, but subsequent calls are blazing fast and performance is actually much better than manual Reflection. Dynamic is definitely a huge win-win situation when you need dynamic access to objects at runtime.

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in .NET, CSharp.

    Read the article

  • CodePlex Daily Summary for Tuesday, October 16, 2012

    CodePlex Daily Summary for Tuesday, October 16, 2012

    Popular Releases

    • Magelia WebStore Open-source Ecommerce software – Magelia WebStore 2.1: Scheduler Import & Export feature; UTC datetime and timezone support; .NET 4.5 and Visual Studio 2012 migration; global client refactoring; NuGet package https://nuget.org/packages/Magelia.Webstore.Client/2.1.254.1; burst optimisation and burst time improvement (multithreading, indexing, ...); the current burst stays active while a new burst is generating; bugfixes (version 2.1.254.1).
    • DevLib – 70038 binary dlls.
    • P1 Port monitoring with Netduino Plus – V0.2 Beta Netduino Plus P1 Port Monitoring: This is the stable beta release of the Netduino Plus P1 port monitoring. Please read the requirements on the Documentation page.
    • JayData - The cross-platform HTML5 data-management library for JavaScript – JayData 1.2.2: JayData is a unified data access library for JavaScript to CRUD + Query data from different sources like OData, MongoDB, WebSQL, SqLite, HTML5 localStorage, Facebook or YQL. The library can be integrated with Knockout.js or Sencha Touch 2 and can be used on Node.js as well. See it in action in this 6-minute video and the Sencha Touch 2 example app using JayData: Netflix browser. For details of what's new in 1.2.2, check the release notes; this release revitalizes the IndexedDB provider.
    • VFPX – FoxcodePlus: Visual Studio-like extensions to Visual FoxPro IntelliSense.
    • Droid Explorer – Droid Explorer 0.8.8.8 Beta: fixed the icon for packages on the desktop; fixed the install dialog closing right when it starts; removed the link to "set up the sdk for me" as this is no longer supported; fixed a bug where the device selection dialog would show even if there was only one device connected; fixed the toolbar "gap" between other toolbars; removed main menu items that do not have any menus.
    • Iveely Search Engine – Iveely Search Engine (0.3.0). [The Chinese release notes are garbled in the source; they reference http://127.0.0.1:8088/query=yourkeyword and http://www.cnblogs.com/liufanping...]
    • Fiskalizacija za developere – FiskalizacijaDev 1.0 (translated from Croatian): The first version of this project, still marked BETA - meaning our own tests have passed :) But since we do not build point-of-sale software ourselves, none of this has been tried under "real" conditions, so every suggestion, remark, or bug report is welcome; please use Discussions or the Issue Tracker. At this point the runtime binary is available as Any CPU for .NET 2.0. Let us know if you also need 32-bit/64-bit builds, or builds for other .NET versions.
    • Squiggle - A free open source LAN Messenger – Squiggle 3.2 (Development): NOTE: This is a development release and not recommended for production use. This release is mainly for enabling extensibility and interoperability with other platforms. Support for plugins; support for extensions; platform-independent communication layer and protocol (ZeroMQ, Protocol Buffers); bug fixes; new /invite command; edit the sent message; disable update check.
    • AcDown????? - AcDown Downloader Framework – v4.2: [The Chinese release notes are garbled in the source.] Written in C# against .NET Framework 2.0; runs on 32- and 64-bit Windows XP/Vista/7/8 (XP requires .NET Framework 2.0 x86) and on Linux via Mono.
    • PHPExcel – PHPExcel 1.7.8: See Change Log for details of the new features and bugfixes included in this release, and methods that are now deprecated. Note changes to the PDF Writer: tcPDF is no longer bundled with PHPExcel, but should be installed separately if you wish to use that 3rd-party library with PHPExcel. Alternatively, you can choose to use mPDF or DomPDF as PDF rendering libraries instead: PHPExcel now provides a configurable wrapper allowing you a choice of PDF renderer. See the documentation, or the PDF s...
    • DirectX Tool Kit – October 12, 2012: Added PrimitiveBatch for drawing user primitives; debug object names for all D3D resources (for PIX and debug layer leak reporting).
    • Microsoft Ajax Minifier – Microsoft Ajax Minifier 4.70: Fixed the issue described in discussion #399087: variable references within case values weren't getting resolved.
    • GoogleMap Control – GoogleMap Control 6.1: Some important bug fixes and a couple of new features were added. There are no major changes to the sample website. Source code can be downloaded from the Source Code section by selecting branch release-6.1; thus only builds of GoogleMap Control are issued in this release. Update 14 Oct 2012: client-side access fixed. NuGet package: GoogleMap Control 6.1. Features: Bounds property to provide the ability to create a map by center and bounds as well; setting in markup <artem:Goog...
    • mojoPortal – 2.3.9.3: see release notes on mojoportal.com http://www.mojoportal.com/mojoportal-2393-released. Note that we have separate deployment packages for .NET 3.5 and .NET 4.0, but we recommend you use .NET 4; we will probably drop support for .NET 3.5 once .NET 4.5 is available. The deployment package downloads on this page are pre-compiled and ready for production deployment; they contain no C# source code and are not intended for use in Visual Studio. To download the source code see getting the lates...
    • D3 Loot Tracker – 1.5.4: Fixed a bug where the server IP was not logged properly in the stats file.
    • Captcha MVC – Captcha Mvc 2.1.2: v2.1.2: fixed a problem with serialization; made all classes from the JetBrains.Annotations namespace internal; added autocomplete and autocorrect attributes for the captcha input element; minor changes; added an example for this question. v2.1.1: fixed a problem with serialization; minor changes. v2.1: added support for storing the captcha in the session or a cookie (see the updated example); updated example; minor changes. v2.0.1: Added support for a partial ...
    • DotNetNuke® Community Edition CMS – 06.02.04: Major highlights: fixed an issue where the module printing function was only visible to administrators; fixed an issue where pane-level skinning was being assigned to a default container for any content pane; fixed an issue when using password aging and FB/Google authentication; fixed an issue that was causing the DateEditControl to not load the assigned value; fixed an issue that stopped additional profile properties from being displayed in the member directory after modifying the template; fixed er...
    • Database View-plug-ins Programming Helper – Database View-plug-ins 1.3: V1.3 added feature: Metadata Deployment. The download package consists of deployment SQL scripts. Run every script of every subdirectory in order (sorted by name). "VPI" is the default schema name in the manifest; it can be changed to another name according to your enterprise database policy. The current release is for Oracle (a SQL Server version will be released later).
    • Advanced DataGridView with Excel-like auto filter – 1.0.0.0. [The Russian release notes are garbled in the source.]

    New Projects

    • AerTHe: Simple test project about EntityFramework, NUnit, etc.
    • BalanceManagerApp: BalanceManagerApp
    • C++ Debugger Visualizers for VS2012: C++ Debugger Visualizers for Boost, wxWidgets, TinyXML, TinyXML2
    • ClipReader: A text-to-speech reader that reads any text you copy to the clipboard. Similar idea to ReadPlease.
    • Coursework 2.0 UGC Site: University of Hertfordshire coursework project to develop a Web 2.0-style UGC website.
    • DavesinitialcourseworkcalculatorinVS: Basic calculator for assignment 1
    • DL_Assignment 1: This project forms Task 1 of Assignment 1 for 7COM0207. The requirement is that a user can enter 2 numbers and the sum of the numbers is displayed.
    • Document Storage: This project is intended to act as a learning exercise for the participants.
    • gillsassignment1: This is task 1 for assignment 1, which adds 2 numbers and displays the result.
    • gillstestproject: This is my first little test.
    • Iconator for Microsoft Dynamics CRM 2011: This application eases the customization of custom entity icons in Microsoft Dynamics CRM 2011.
    • Infopath 2010 Web Signature Capture: A simple method of adding signature capture to InfoPath browser-enabled forms.
    • KennyWorld: Kenny's blog based on BlogEngine.NET
    • Mirus - Advanced Open Source Operating System: Mirus is a new, advanced, open source operating system written in C# using the Cosmos toolkit, aiming for POSIX compatibility, ease of use, and innovation.
    • Morgado Finance: Test of finance management
    • MPF for Projects - Visual Studio 2012: A community project containing the source code and tests of a library for creating project system plug-ins for Visual Studio 2012 using C#.
    • OpenWeb: OpenWeb project by Deigo Stefanon
    • P1 Port monitoring with Netduino Plus: This program reads the Dutch electricity meter P1 port messages and publishes the information to Cosm and ThingSpeak for monitoring.
    • Powerless View: Proof-of-concept application for hosting the Power View Silverlight application outside of a SharePoint and Reporting Services environment.
    • RAM Drive: A Windows service that copies an existing virtual hard disk into memory, then mounts it as a disk, giving the ability to use your RAM as a super-fast drive.
    • Roderick Vella's Interactive Learning: Web Scripting and Content Creation
    • SaveDouban: [Chinese description garbled in the source; targets .NET Framework 3.5.]
    • ScriptEase for Microsoft Dynamics CRM 2011: Utility to synchronize your Microsoft Dynamics CRM 2011 JavaScripts with the files on your local hard drive.
    • T.REST: T.REST - a framework for testing REST resources
    • Visual Leak Detector Performance Test: Tests for Visual Leak Detector for Visual C++ 2008/2010/2012 [http://vld.codeplex.com/]
    • [A project with a Chinese name; both the name and description are garbled in the source.]

    Read the article

  • How do I catch this WPF Bitmap loading exception?

    - by mmr
    I'm developing an application that loads bitmaps off the web using .NET 3.5 SP1 and C#. The loading code looks like:

try {
    CurrentImage = pics[unChosenPics[index]];
    bi = new BitmapImage(CurrentImage.URI); // BitmapImage.UriSource must be in a BeginInit/EndInit block.
    bi.DownloadCompleted += new EventHandler(bi_DownloadCompleted);
    AssessmentImage.Source = bi;
} catch {
    System.Console.WriteLine("Something broke during the read!");
}

and the code that runs on bi_DownloadCompleted is:

void bi_DownloadCompleted(object sender, EventArgs e) {
    try {
        double dpi = 96;
        int width = bi.PixelWidth;
        int height = bi.PixelHeight;
        int stride = width * 4; // 4 bytes per pixel
        byte[] pixelData = new byte[stride * height];
        bi.CopyPixels(pixelData, stride, 0);
        BitmapSource bmpSource = BitmapSource.Create(width, height, dpi, dpi, PixelFormats.Bgra32, null, pixelData, stride);
        AssessmentImage.Source = bmpSource;
        Loading.Visibility = Visibility.Hidden;
        AssessmentImage.Visibility = Visibility.Visible;
    } catch {
        System.Console.WriteLine("Exception when viewing bitmap.");
    }
}

Every so often, an image comes along that breaks the reader. I guess that's to be expected. However, rather than being caught by either of those try/catch blocks, the exception is apparently thrown outside of where I can handle it. I could handle it using global WPF exception handling, as in this SO question, but that would seriously mess up the control flow of my program, and I'd like to avoid it if at all possible. I have to do the double source assignment because many images appear to lack width/height metadata in the places where the Microsoft bitmap loader expects it: the first assignment appears to force the download, and the second assignment makes the dpi and image dimensions come out properly. What can I do to catch and handle this exception?
Stack trace:
   at MS.Internal.HRESULT.Check(Int32 hr)
   at System.Windows.Media.Imaging.BitmapFrameDecode.get_ColorContexts()
   at System.Windows.Media.Imaging.BitmapImage.FinalizeCreation()
   at System.Windows.Media.Imaging.BitmapImage.OnDownloadCompleted(Object sender, EventArgs e)
   at System.Windows.Media.UniqueEventHelper.InvokeEvents(Object sender, EventArgs args)
   at System.Windows.Media.Imaging.LateBoundBitmapDecoder.DownloadCallback(Object arg)
   at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Boolean isSingleParameter)
   at System.Windows.Threading.ExceptionWrapper.TryCatchWhen(Object source, Delegate callback, Object args, Boolean isSingleParameter, Delegate catchHandler)
   at System.Windows.Threading.DispatcherOperation.InvokeImpl()
   at System.Threading.ExecutionContext.runTryCode(Object userData)
   at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData)
   at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
   at System.Windows.Threading.DispatcherOperation.Invoke()
   at System.Windows.Threading.Dispatcher.ProcessQueue()
   at System.Windows.Threading.Dispatcher.WndProcHook(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled)
   at MS.Win32.HwndWrapper.WndProc(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam, Boolean& handled)
   at MS.Win32.HwndSubclass.DispatcherCallbackOperation(Object o)
   at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Boolean isSingleParameter)
   at System.Windows.Threading.ExceptionWrapper.TryCatchWhen(Object source, Delegate callback, Object args, Boolean isSingleParameter, Delegate catchHandler)
   at System.Windows.Threading.Dispatcher.InvokeImpl(DispatcherPriority priority, TimeSpan timeout, Delegate method, Object args, Boolean isSingleParameter)
   at MS.Win32.HwndSubclass.SubclassWndProc(IntPtr hwnd, Int32 msg, IntPtr wParam, IntPtr lParam)
   at MS.Win32.UnsafeNativeMethods.DispatchMessage(MSG& msg)
   at System.Windows.Threading.Dispatcher.TranslateAndDispatchMessage(MSG& msg)
   at System.Windows.Threading.Dispatcher.PushFrameImpl(DispatcherFrame frame)
   at System.Windows.Application.RunInternal(Window window)
   at LensComparison.App.Main() in C:\Users\Mark64\Documents\Visual Studio 2008\Projects\LensComparison\LensComparison\obj\Release\App.g.cs:line 48
   at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args)
   at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
   at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
   at System.Threading.ThreadHelper.ThreadStart()
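One idea I'm considering (a sketch, not yet verified): the stack trace shows the failure happening inside BitmapImage.OnDownloadCompleted on the dispatcher thread, outside both of my try/catch blocks, so subscribing to the bitmap's DownloadFailed and DecodeFailed events before assigning the source might let me intercept it without a global handler:

bi = new BitmapImage(CurrentImage.URI);
bi.DownloadCompleted += new EventHandler(bi_DownloadCompleted);
// Raised if the underlying download fails; args.ErrorException carries the cause.
bi.DownloadFailed += (s, args) =>
    System.Console.WriteLine("Download failed: " + args.ErrorException.Message);
// Raised if the decoder rejects the image (bad metadata, color contexts, etc.).
bi.DecodeFailed += (s, args) =>
    System.Console.WriteLine("Decode failed: " + args.ErrorException.Message);
AssessmentImage.Source = bi;

Whether DecodeFailed actually fires for this particular ColorContexts failure is something I'd still need to confirm; if it doesn't, Dispatcher.UnhandledException seems to be the only remaining net.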

    Read the article

  • C++ HW - defining classes - objects that have objects of other class problem in header file (out of

    - by kitfuntastik
    This is my first time with much of this code. With this instancepool.h file below I get errors saying I can't use vector (line 14) or have instance& as a return type (line 20). It seems it can't use the instance objects despite the fact that I have included them. #ifndef _INSTANCEPOOL_H #define _INSTANCEPOOL_H #include "instance.h" #include <iostream> #include <string> #include <vector> #include <stdlib.h> using namespace std; class InstancePool { private: unsigned instances;//total number of instance objects vector<instance> ipp;//the collection of instance objects, held in a vector public: InstancePool();//Default constructor. Creates an InstancePool object that contains no Instance objects InstancePool(const InstancePool& original);//Copy constructor. After copying, changes to original should not affect the copy that was created. ~InstancePool();//Destructor unsigned getNumberOfInstances() const;//Returns the number of Instance objects the the InstancePool contains. const instance& operator[](unsigned index) const; InstancePool& operator=(const InstancePool& right);//Overloading the assignment operator for InstancePool. friend istream& operator>>(istream& in, InstancePool& ip);//Overloading of the >> operator. friend ostream& operator<<(ostream& out, const InstancePool& ip);//Overloading of the << operator. }; #endif Here is the instance.h : #ifndef _INSTANCE_H #define _INSTANCE_H ///////////////////////////////#include "instancepool.h" #include <iostream> #include <string> #include <stdlib.h> using namespace std; class Instance { private: string filenamee; bool categoryy; unsigned featuress; unsigned* featureIDD; unsigned* frequencyy; string* featuree; public: Instance (unsigned features = 0);//default constructor unsigned getNumberOfFeatures() const; //Returns the number of the keywords that the calling Instance object can store. Instance(const Instance& original);//Copy constructor. After copying, changes to the original should not affect the copy that was created. ~Instance() { delete []featureIDD; delete []frequencyy; delete []featuree;}//Destructor. void setCategory(bool category){categoryy = category;}//Sets the category of the message. Spam messages are represented with true and and legit messages with false.//easy bool getCategory() const;//Returns the category of the message. void setFileName(const string& filename){filenamee = filename;}//Stores the name of the file (i.e. “spam/spamsga1.txt”, like in 1st assignment) in which the message was initially stored.//const string& trick? string getFileName() const;//Returns the name of the file in which the message was initially stored. void setFeature(unsigned i, const string& feature, unsigned featureID,unsigned frequency) {//i for array positions featuree[i] = feature; featureIDD[i] = featureID; frequencyy[i] = frequency; } string getFeature(unsigned i) const;//Returns the keyword which is located in the ith position.//const string unsigned getFeatureID(unsigned i) const;//Returns the code of the keyword which is located in the ith position. unsigned getFrequency(unsigned i) const;//Returns the frequency Instance& operator=(const Instance& right);//Overloading of the assignment operator for Instance. friend ostream& operator<<(ostream& out, const Instance& inst);//Overloading of the << operator for Instance. friend istream& operator>>(istream& in, Instance& inst);//Overloading of the >> operator for Instance. 
}; #endif Also, if it is helpful here is instance.cpp: // Here we implement the functions of the class apart from the inline ones #include "instance.h" #include <iostream> #include <string> #include <stdlib.h> using namespace std; Instance::Instance(unsigned features) { //Constructor that can be used as the default constructor. featuress = features; if (features == 0) return; featuree = new string[featuress]; // Dynamic memory allocation. featureIDD = new unsigned[featuress]; frequencyy = new unsigned[featuress]; return; } unsigned Instance::getNumberOfFeatures() const {//Returns the number of the keywords that the calling Instance object can store. return featuress;} Instance::Instance(const Instance& original) {//Copy constructor. filenamee = original.filenamee; categoryy = original.categoryy; featuress = original.featuress; featuree = new string[featuress]; for(unsigned i = 0; i < featuress; i++) { featuree[i] = original.featuree[i]; } featureIDD = new unsigned[featuress]; for(unsigned i = 0; i < featuress; i++) { featureIDD[i] = original.featureIDD[i]; } frequencyy = new unsigned[featuress]; for(unsigned i = 0; i < featuress; i++) { frequencyy[i] = original.frequencyy[i];} } bool Instance::getCategory() const { //Returns the category of the message. return categoryy;} string Instance::getFileName() const { //Returns the name of the file in which the message was initially stored. return filenamee;} string Instance::getFeature(unsigned i) const { //Returns the keyword which is located in the ith position.//const string return featuree[i];} unsigned Instance::getFeatureID(unsigned i) const { //Returns the code of the keyword which is located in the ith position. return featureIDD[i];} unsigned Instance::getFrequency(unsigned i) const { //Returns the frequency return frequencyy[i];} Instance& Instance::operator=(const Instance& right) { //Overloading of the assignment operator for Instance. if(this == &right) return *this; delete []featureIDD; delete []frequencyy; delete []featuree; filenamee = right.filenamee; categoryy = right.categoryy; featuress = right.featuress; featureIDD = new unsigned[featuress]; frequencyy = new unsigned[featuress]; featuree = new string[featuress]; for(unsigned i = 0; i < featuress; i++) { featureIDD[i] = right.featureIDD[i]; } for(unsigned i = 0; i < featuress; i++) { frequencyy[i] = right.frequencyy[i]; } for(unsigned i = 0; i < featuress; i++) { featuree[i] = right.featuree[i]; } return *this; } ostream& operator<<(ostream& out, const Instance& inst) {//Overloading of the << operator for Instance. out << endl << "<message file=" << '"' << inst.filenamee << '"' << " category="; if (inst.categoryy == 0) out << '"' << "legit" << '"'; else out << '"' << "spam" << '"'; out << " features=" << '"' << inst.featuress << '"' << ">" <<endl; for (int i = 0; i < inst.featuress; i++) { out << "<feature id=" << '"' << inst.featureIDD[i] << '"' << " freq=" << '"' << inst.frequencyy[i] << '"' << "> " << inst.featuree[i] << " </feature>"<< endl; } out << "</message>" << endl; return out; } istream& operator>>(istream& in, Instance& inst) { //Overloading of the >> operator for Instance. 
string word; string numbers = ""; string filenamee2 = ""; bool categoryy2 = 0; unsigned featuress2; string featuree2; unsigned featureIDD2; unsigned frequencyy2; unsigned i; unsigned y; while(in >> word) { if (word == "<message") {//if at beginning of message in >> word;//grab filename word for (y=6; word[y]!='"'; y++) {//pull out filename from between quotes filenamee2 += word[y];} in >> word;//grab category word if (word[10] == 's') categoryy2 = 1; in >> word;//grab features word for (y=10; word[y]!='"'; y++) { numbers += word[y];} featuress2 = atoi(numbers.c_str());//convert string of numbers to integer Instance tempp2(featuress2);//make a temporary Instance object to hold values read in tempp2.setFileName(filenamee2);//set temp object to filename read in tempp2.setCategory(categoryy2); for (i=0; i<featuress2; i++) {//loop reading in feature reports for message in >> word >> word >> word;//skip two words numbers = "";//reset numbers string for (int y=4; word[y]!='"'; y++) {//grab feature ID numbers += word[y];} featureIDD2 = atoi(numbers.c_str()); in >> word;// numbers = ""; for (int y=6; word[y]!='"'; y++) {//grab frequency numbers += word[y];} frequencyy2 = atoi(numbers.c_str()); in >> word;//grab actual feature string featuree2 = word; tempp2.setFeature(i, featuree2, featureIDD2, frequencyy2); }//all done reading in and setting features in >> word;//read in last part of message : </message> inst = tempp2;//set inst (reference) to tempp2 (tempp2 will be destroyed at end of function call) return in; } } } and instancepool.cpp: // Here we implement the functions of the class apart from the inline ones #include "instancepool.h" #include "instance.h" #include <iostream> #include <string> #include <vector> #include <stdlib.h> using namespace std; InstancePool::InstancePool()//Default constructor. Creates an InstancePool object that contains no Instance objects { instances = 0; ipp.clear(); } InstancePool::~InstancePool() { ipp.clear();} InstancePool::InstancePool(const InstancePool& original) {//Copy constructor. instances = original.instances; for (int i = 0; i<instances; i++) { ipp.push_back(original.ipp[i]); } } unsigned InstancePool::getNumberOfInstances() const {//Returns the number of Instance objects the the InstancePool contains. return instances;} const Instance& InstancePool::operator[](unsigned index) const {//Overloading of the [] operator for InstancePool. return ipp[index];} InstancePool& InstancePool::operator=(const InstancePool& right) {//Overloading the assignment operator for InstancePool. if(this == &right) return *this; ipp.clear(); instances = right.instances; for(unsigned i = 0; i < instances; i++) { ipp.push_back(right.ipp[i]); } return *this; } istream& operator>>(istream& in, InstancePool& ip) {//Overloading of the >> operator. ip.ipp.clear(); string word; string numbers; int total;//int to hold total number of messages in collection while(in >> word) { if (word == "<messagecollection"){ in >> word;//reads in total number of all messages for (int y=10; word[y]!='"'; y++){ numbers = ""; numbers += word[y]; } total = atoi(numbers.c_str()); for (int x = 0; x<total; x++) {//do loop for each message in collection in >> ip.ipp[x];//use instance friend function and [] operator to fill in values and create Instance objects and read them intot he vector } } } } ostream& operator<<(ostream& out, const InstancePool& ip) {//Overloading of the << operator. 
out << "<messagecollection messages=" << '"' << '>' << ip.instances << '"'<< endl << endl; for (int z=0; z<ip.instances; z++) { out << ip[z];} out << endl<<"</messagecollection>\n"; } This code is currently not writing to files correctly either at least, I'm sure it has many problems. I hope my posting of so much is not too much, and any help would be very much appreciated. Thanks!

    Read the article

  • Troubleshoot Perl module installation on Mac OS X

    - by Daniel Standage
    I'm trying to install the Perl module Set::IntervalTree on Mac OS X. I installed it on an Ubuntu box today with no problem: I simply started cpan, entered install Set::IntervalTree, and it all worked out. However, the installation failed on Mac OS X -- it spat out a huge list of compiler errors (below). How would I troubleshoot this? I don't even know where to begin. cpan[1]> install Set::IntervalTree CPAN: Storable loaded ok (v2.18) Going to read /Users/standage/.cpan/Metadata Database was generated on Fri, 14 Jan 2011 02:58:42 GMT CPAN: YAML loaded ok (v0.72) Going to read /Users/standage/.cpan/build/ ............................................................................DONE Found 1 old build, restored the state of 1 Running install for module 'Set::IntervalTree' Running make for B/BE/BENBOOTH/Set-IntervalTree-0.01.tar.gz CPAN: Digest::SHA loaded ok (v5.45) CPAN: Compress::Zlib loaded ok (v2.008) Checksum for /Users/standage/.cpan/sources/authors/id/B/BE/BENBOOTH/Set-IntervalTree-0.01.tar.gz ok Scanning cache /Users/standage/.cpan/build for sizes ............................................................................DONE x Set-IntervalTree-0.01/ x Set-IntervalTree-0.01/src/ x Set-IntervalTree-0.01/src/Makefile x Set-IntervalTree-0.01/src/interval_tree.h x Set-IntervalTree-0.01/src/test_main.cc x Set-IntervalTree-0.01/lib/ x Set-IntervalTree-0.01/lib/Set/ x Set-IntervalTree-0.01/lib/Set/IntervalTree.pm x Set-IntervalTree-0.01/Changes x Set-IntervalTree-0.01/MANIFEST x Set-IntervalTree-0.01/t/ x Set-IntervalTree-0.01/t/Set-IntervalTree.t x Set-IntervalTree-0.01/typemap x Set-IntervalTree-0.01/perlobject.map x Set-IntervalTree-0.01/IntervalTree.xs x Set-IntervalTree-0.01/Makefile.PL x Set-IntervalTree-0.01/README x Set-IntervalTree-0.01/META.yml CPAN: File::Temp loaded ok (v0.18) CPAN.pm: Going to build B/BE/BENBOOTH/Set-IntervalTree-0.01.tar.gz Checking if your kit is complete...
Looks good Writing Makefile for Set::IntervalTree cp lib/Set/IntervalTree.pm blib/lib/Set/IntervalTree.pm AutoSplitting blib/lib/Set/IntervalTree.pm (blib/lib/auto/Set/IntervalTree) /usr/bin/perl /System/Library/Perl/5.10.0/ExtUtils/xsubpp -C++ -typemap /System/Library/Perl/5.10.0/ExtUtils/typemap -typemap perlobject.map -typemap typemap IntervalTree.xs > IntervalTree.xsc && mv IntervalTree.xsc IntervalTree.c g++ -c -Isrc -arch x86_64 -arch i386 -arch ppc -g -pipe -fno-common -DPERL_DARWIN -fno-strict-aliasing -I/usr/local/include -g -O0 -DVERSION=\"0.01\" -DXS_VERSION=\"0.01\" "-I/System/Library/Perl/5.10.0/darwin-thread-multi-2level/CORE" -Isrc IntervalTree.c In file included from /usr/include/c++/4.2.1/bits/basic_ios.h:44, from /usr/include/c++/4.2.1/ios:50, from /usr/include/c++/4.2.1/ostream:45, from /usr/include/c++/4.2.1/iostream:45, from IntervalTree.xs:16: /usr/include/c++/4.2.1/bits/locale_facets.h:4420:40: error: macro "do_open" requires 7 arguments, but only 2 given /usr/include/c++/4.2.1/bits/locale_facets.h:4467:34: error: macro "do_close" requires 2 arguments, but only 1 given /usr/include/c++/4.2.1/bits/locale_facets.h:4486:55: error: macro "do_open" requires 7 arguments, but only 2 given /usr/include/c++/4.2.1/bits/locale_facets.h:4513:23: error: macro "do_close" requires 2 arguments, but only 1 given In file included from /usr/include/c++/4.2.1/bits/locale_facets.h:4599, from /usr/include/c++/4.2.1/bits/basic_ios.h:44, from /usr/include/c++/4.2.1/ios:50, from /usr/include/c++/4.2.1/ostream:45, from /usr/include/c++/4.2.1/iostream:45, from IntervalTree.xs:16: /usr/include/c++/4.2.1/i686-apple-darwin10/x86_64/bits/messages_members.h:58:38: error: macro "do_open" requires 7 arguments, but only 2 given /usr/include/c++/4.2.1/i686-apple-darwin10/x86_64/bits/messages_members.h:67:71: error: macro "do_open" requires 7 arguments, but only 2 given /usr/include/c++/4.2.1/i686-apple-darwin10/x86_64/bits/messages_members.h:78:39: error: macro "do_close" requires 2 arguments, but only 1 given In file included from /usr/include/c++/4.2.1/bits/basic_ios.h:44, from /usr/include/c++/4.2.1/ios:50, from /usr/include/c++/4.2.1/ostream:45, from /usr/include/c++/4.2.1/iostream:45, from IntervalTree.xs:16: /usr/include/c++/4.2.1/bits/locale_facets.h:4486: error: ‘do_open’ declared as a ‘virtual’ field /usr/include/c++/4.2.1/bits/locale_facets.h:4486: error: expected ‘;’ before ‘const’ /usr/include/c++/4.2.1/bits/locale_facets.h:4513: error: variable or field ‘do_close’ declared void /usr/include/c++/4.2.1/bits/locale_facets.h:4513: error: expected ‘;’ before ‘const’ In file included from /usr/include/c++/4.2.1/bits/locale_facets.h:4599, from /usr/include/c++/4.2.1/bits/basic_ios.h:44, from /usr/include/c++/4.2.1/ios:50, from /usr/include/c++/4.2.1/ostream:45, from /usr/include/c++/4.2.1/iostream:45, from IntervalTree.xs:16: /usr/include/c++/4.2.1/i686-apple-darwin10/x86_64/bits/messages_members.h:67: error: expected initializer before ‘const’ /usr/include/c++/4.2.1/i686-apple-darwin10/x86_64/bits/messages_members.h:78: error: expected initializer before ‘const’ In file included from IntervalTree.xs:19: src/interval_tree.h:95: error: type/value mismatch at argument 1 in template parameter list for ‘template<class _Tp, class _Alloc> class std::vector’ src/interval_tree.h:95: error: expected a type, got ‘IntervalTree<T,N>::it_recursion_node’ src/interval_tree.h:95: error: template argument 2 is invalid src/interval_tree.h: In constructor ‘IntervalTree<T, N>::IntervalTree()’: 
src/interval_tree.h:130: error: expected type-specifier src/interval_tree.h:130: error: expected `;' src/interval_tree.h:135: error: expected type-specifier src/interval_tree.h:135: error: expected `;' src/interval_tree.h:141: error: request for member ‘push_back’ in ‘((IntervalTree<T, N>*)this)->IntervalTree<T, N>::recursionNodeStack’, which is of non-class type ‘int’ src/interval_tree.h: In member function ‘void IntervalTree<T, N>::LeftRotate(IntervalTree<T, N>::Node*)’: src/interval_tree.h:178: error: ‘y’ was not declared in this scope src/interval_tree.h: In member function ‘void IntervalTree<T, N>::RightRotate(IntervalTree<T, N>::Node*)’: src/interval_tree.h:240: error: ‘x’ was not declared in this scope src/interval_tree.h: In member function ‘void IntervalTree<T, N>::TreeInsertHelp(IntervalTree<T, N>::Node*)’: src/interval_tree.h:298: error: ‘x’ was not declared in this scope src/interval_tree.h:299: error: ‘y’ was not declared in this scope src/interval_tree.h: In member function ‘typename IntervalTree<T, N>::Node* IntervalTree<T, N>::insert(const T&, N, N)’: src/interval_tree.h:375: error: ‘y’ was not declared in this scope src/interval_tree.h:376: error: ‘x’ was not declared in this scope src/interval_tree.h:377: error: ‘newNode’ was not declared in this scope src/interval_tree.h:379: error: expected type-specifier src/interval_tree.h:379: error: expected `;' src/interval_tree.h: In member function ‘typename IntervalTree<T, N>::Node* IntervalTree<T, N>::GetSuccessorOf(IntervalTree<T, N>::Node*) const’: src/interval_tree.h:450: error: ‘y’ was not declared in this scope src/interval_tree.h: In member function ‘typename IntervalTree<T, N>::Node* IntervalTree<T, N>::GetPredecessorOf(IntervalTree<T, N>::Node*) const’: src/interval_tree.h:483: error: ‘y’ was not declared in this scope src/interval_tree.h: In destructor ‘IntervalTree<T, N>::~IntervalTree()’: src/interval_tree.h:546: error: ‘x’ was not declared in this scope src/interval_tree.h:547: error: type/value mismatch at argument 1 in template parameter list for ‘template<class _Tp, class _Alloc> class std::vector’ src/interval_tree.h:547: error: expected a type, got ‘(IntervalTree<T,N>::Node * <expression error>)’ src/interval_tree.h:547: error: template argument 2 is invalid src/interval_tree.h:547: error: invalid type in declaration before ‘;’ token src/interval_tree.h:551: error: request for member ‘push_back’ in ‘stuffToFree’, which is of non-class type ‘int’ src/interval_tree.h:554: error: request for member ‘push_back’ in ‘stuffToFree’, which is of non-class type ‘int’ src/interval_tree.h:557: error: request for member ‘empty’ in ‘stuffToFree’, which is of non-class type ‘int’ src/interval_tree.h:558: error: request for member ‘back’ in ‘stuffToFree’, which is of non-class type ‘int’ src/interval_tree.h:559: error: request for member ‘pop_back’ in ‘stuffToFree’, which is of non-class type ‘int’ src/interval_tree.h:561: error: request for member ‘push_back’ in ‘stuffToFree’, which is of non-class type ‘int’ src/interval_tree.h:564: error: request for member ‘push_back’ in ‘stuffToFree’, which is of non-class type ‘int’ src/interval_tree.h: In member function ‘void IntervalTree<T, N>::DeleteFixUp(IntervalTree<T, N>::Node*)’: src/interval_tree.h:613: error: ‘w’ was not declared in this scope src/interval_tree.h:614: error: ‘rootLeft’ was not declared in this scope src/interval_tree.h: In member function ‘T IntervalTree<T, N>::remove(IntervalTree<T, N>::Node*)’: src/interval_tree.h:697: error: ‘y’ was not declared in this 
scope src/interval_tree.h:698: error: ‘x’ was not declared in this scope src/interval_tree.h: In member function ‘std::vector<T, std::allocator<_CharT> > IntervalTree<T, N>::fetch(N, N)’: src/interval_tree.h:819: error: ‘x’ was not declared in this scope src/interval_tree.h:833: error: invalid types ‘int[size_t]’ for array subscript src/interval_tree.h:836: error: request for member ‘push_back’ in ‘((IntervalTree<T, N>*)this)->IntervalTree<T, N>::recursionNodeStack’, which is of non-class type ‘int’ src/interval_tree.h:837: error: request for member ‘back’ in ‘((IntervalTree<T, N>*)this)->IntervalTree<T, N>::recursionNodeStack’, which is of non-class type ‘int’ src/interval_tree.h:838: error: request for member ‘back’ in ‘((IntervalTree<T, N>*)this)->IntervalTree<T, N>::recursionNodeStack’, which is of non-class type ‘int’ src/interval_tree.h:839: error: request for member ‘back’ in ‘((IntervalTree<T, N>*)this)->IntervalTree<T, N>::recursionNodeStack’, which is of non-class type ‘int’ src/interval_tree.h:840: error: request for member ‘size’ in ‘((IntervalTree<T, N>*)this)->IntervalTree<T, N>::recursionNodeStack’, which is of non-class type ‘int’ src/interval_tree.h:846: error: request for member ‘size’ in ‘((IntervalTree<T, N>*)this)->IntervalTree<T, N>::recursionNodeStack’, which is of non-class type ‘int’ src/interval_tree.h:847: error: expected `;' before ‘back’ src/interval_tree.h:848: error: request for member ‘pop_back’ in ‘((IntervalTree<T, N>*)this)->IntervalTree<T, N>::recursionNodeStack’, which is of non-class type ‘int’ src/interval_tree.h:850: error: ‘back’ was not declared in this scope src/interval_tree.h:853: error: invalid types ‘int[size_t]’ for array subscript IntervalTree.c: In function ‘void boot_Set__IntervalTree(PerlInterpreter*, CV*)’: IntervalTree.c:365: warning: deprecated conversion from string constant to ‘char*’ src/interval_tree.h: In constructor ‘IntervalTree<T, N>::IntervalTree() [with T = std::tr1::shared_ptr<sv>, N = long int]’: IntervalTree.c:67: instantiated from here src/interval_tree.h:130: error: cannot convert ‘int*’ to ‘IntervalTree<std::tr1::shared_ptr<sv>, long int>::Node*’ in assignment src/interval_tree.h:135: error: cannot convert ‘int*’ to ‘IntervalTree<std::tr1::shared_ptr<sv>, long int>::Node*’ in assignment ...blah blah blah... ...blah blah blah... ...blah blah blah... ...blah blah blah... ...blah blah blah... ...blah blah blah... 
src/interval_tree.h:848: error: request for member ‘pop_back’ in ‘((IntervalTree<T, N>*)this)->IntervalTree<T, N>::recursionNodeStack’, which is of non-class type ‘int’ src/interval_tree.h:850: error: ‘back’ was not declared in this scope src/interval_tree.h:853: error: invalid types ‘int[size_t]’ for array subscript IntervalTree.c: In function ‘void boot_Set__IntervalTree(PerlInterpreter*, CV*)’: IntervalTree.c:365: warning: deprecated conversion from string constant to ‘char*’ src/interval_tree.h: In constructor ‘IntervalTree<T, N>::IntervalTree() [with T = std::tr1::shared_ptr<sv>, N = long int]’: IntervalTree.c:67: instantiated from here src/interval_tree.h:130: error: cannot convert ‘int*’ to ‘IntervalTree<std::tr1::shared_ptr<sv>, long int>::Node*’ in assignment src/interval_tree.h:135: error: cannot convert ‘int*’ to ‘IntervalTree<std::tr1::shared_ptr<sv>, long int>::Node*’ in assignment src/interval_tree.h: In member function ‘typename IntervalTree<T, N>::Node* IntervalTree<T, N>::insert(const T&, N, N) [with T = std::tr1::shared_ptr<sv>, N = long int]’: IntervalTree.xs:57: instantiated from here src/interval_tree.h:375: error: dependent-name ‘IntervalTree<T,N>::Node’ is parsed as a non-type, but instantiation yields a type src/interval_tree.h:375: note: say ‘typename IntervalTree<T,N>::Node’ if a type is meant src/interval_tree.h:376: error: dependent-name ‘IntervalTree<T,N>::Node’ is parsed as a non-type, but instantiation yields a type src/interval_tree.h:376: note: say ‘typename IntervalTree<T,N>::Node’ if a type is meant src/interval_tree.h:377: error: dependent-name ‘IntervalTree<T,N>::Node’ is parsed as a non-type, but instantiation yields a type src/interval_tree.h:377: note: say ‘typename IntervalTree<T,N>::Node’ if a type is meant src/interval_tree.h: In member function ‘std::vector<T, std::allocator<_CharT> > IntervalTree<T, N>::fetch(N, N) [with T = std::tr1::shared_ptr<sv>, N = long int]’: IntervalTree.xs:65: instantiated from here src/interval_tree.h:819: error: dependent-name ‘IntervalTree<T,N>::Node’ is parsed as a non-type, but instantiation yields a type src/interval_tree.h:819: note: say ‘typename IntervalTree<T,N>::Node’ if a type is meant IntervalTree.xs:65: instantiated from here src/interval_tree.h:847: error: dependent-name ‘IntervalTree<T,N>::it_recursion_node’ is parsed as a non-type, but instantiation yields a type src/interval_tree.h:847: note: say ‘typename IntervalTree<T,N>::it_recursion_node’ if a type is meant src/interval_tree.h: In destructor ‘IntervalTree<T, N>::~IntervalTree() [with T = std::tr1::shared_ptr<sv>, N = long int]’: IntervalTree.c:205: instantiated from here src/interval_tree.h:546: error: dependent-name ‘IntervalTree<T,N>::Node’ is parsed as a non-type, but instantiation yields a type src/interval_tree.h:546: note: say ‘typename IntervalTree<T,N>::Node’ if a type is meant src/interval_tree.h: In member function ‘void IntervalTree<T, N>::TreeInsertHelp(IntervalTree<T, N>::Node*) [with T = std::tr1::shared_ptr<sv>, N = long int]’: src/interval_tree.h:380: instantiated from ‘typename IntervalTree<T, N>::Node* IntervalTree<T, N>::insert(const T&, N, N) [with T = std::tr1::shared_ptr<sv>, N = long int]’ IntervalTree.xs:57: instantiated from here src/interval_tree.h:298: error: dependent-name ‘IntervalTree<T,N>::Node’ is parsed as a non-type, but instantiation yields a type src/interval_tree.h:298: note: say ‘typename IntervalTree<T,N>::Node’ if a type is meant src/interval_tree.h:299: error: dependent-name ‘IntervalTree<T,N>::Node’ is 
parsed as a non-type, but instantiation yields a type src/interval_tree.h:299: note: say ‘typename IntervalTree<T,N>::Node’ if a type is meant src/interval_tree.h: In member function ‘void IntervalTree<T, N>::LeftRotate(IntervalTree<T, N>::Node*) [with T = std::tr1::shared_ptr<sv>, N = long int]’: src/interval_tree.h:395: instantiated from ‘typename IntervalTree<T, N>::Node* IntervalTree<T, N>::insert(const T&, N, N) [with T = std::tr1::shared_ptr<sv>, N = long int]’ IntervalTree.xs:57: instantiated from here src/interval_tree.h:178: error: dependent-name ‘IntervalTree<T,N>::Node’ is parsed as a non-type, but instantiation yields a type src/interval_tree.h:178: note: say ‘typename IntervalTree<T,N>::Node’ if a type is meant src/interval_tree.h: In member function ‘void IntervalTree<T, N>::RightRotate(IntervalTree<T, N>::Node*) [with T = std::tr1::shared_ptr<sv>, N = long int]’: src/interval_tree.h:399: instantiated from ‘typename IntervalTree<T, N>::Node* IntervalTree<T, N>::insert(const T&, N, N) [with T = std::tr1::shared_ptr<sv>, N = long int]’ IntervalTree.xs:57: instantiated from here src/interval_tree.h:240: error: dependent-name ‘IntervalTree<T,N>::Node’ is parsed as a non-type, but instantiation yields a type src/interval_tree.h:240: note: say ‘typename IntervalTree<T,N>::Node’ if a type is meant lipo: can't open input file: /var/tmp//ccLthuaw.out (No such file or directory) make: *** [IntervalTree.o] Error 1 BENBOOTH/Set-IntervalTree-0.01.tar.gz make -- NOT OK Running make test Can't test without successful make Running make install Make had returned bad status, install seems impossible Failed during this command: BENBOOTH/Set-IntervalTree-0.01.tar.gz : make NO
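Update (a guess I haven't confirmed for this module): the `macro "do_open" requires 7 arguments` errors look like a known clash between the do_open/do_close macros defined by perl.h and the do_open/do_close members of the std::messages facet in the C++ <locale> headers that <iostream> pulls in. A sketch of the usual workaround, added near the top of IntervalTree.xs before any C++ standard header:

#include "EXTERN.h"
#include "perl.h"
#include "XSUB.h"
/* perl.h defines do_open/do_close as function-like macros, which mangles
   the std::messages<> declarations in <locale>; drop them first. */
#ifdef do_open
#undef do_open
#endif
#ifdef do_close
#undef do_close
#endif
#include <iostream>

The later "say 'typename IntervalTree<T,N>::Node' if a type is meant" errors point at missing typename keywords in src/interval_tree.h itself, which would need patching in the module source.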

    Read the article

  • T-SQL Improvements And Data Types in ms sql 2008

    - by Aamir Hasan
    Microsoft SQL Server 2008 is a new version, released in the first half of 2008, that introduces new properties and capabilities to the SQL Server product family. All of these can be summed up in the classic words: secure, reliable, scalable, and manageable. SQL Server 2008 is secure and reliable, and it is more scalable and more manageable than previous releases. Below is a detailed look at the T-SQL features behind those claims.

Microsoft SQL Server 2008 provides T-SQL enhancements that improve performance and reliability. Itzik discusses composable DML, the ability to declare and initialize variables in the same statement, compound assignment operators, and more reliable object dependency information.

Table-Valued Parameters
- Inserts into structures with 1-N cardinality are problematic: one order -> N order line items, where "N" is variable and can be large.
- You don't want to force a new order for every 20 line items, yet one database round-trip per line item slows things down.
- There is no ARRAY data type in SQL Server; XML composition/decomposition has been used as an alternative.
- Table-valued parameters solve this problem.
- SQL Server has table variables: DECLARE @t TABLE (id int);
- SQL Server 2008 adds strongly typed table variables: CREATE TYPE mytab AS TABLE (id int); DECLARE @t mytab;
- Parameters must use strongly typed table variables.

Table Variables Are Input Only
- Declare and initialize the TABLE variable: DECLARE @t mytab; INSERT @t VALUES (1), (2), (3); EXEC myproc @t;
- The procedure must declare the parameter READONLY: CREATE PROCEDURE usetable (@t mytab READONLY ...) AS INSERT INTO lineitems SELECT * FROM @t; UPDATE @t SET ... -- no!

T-SQL Syntax Enhancements
- Single-statement declare and initialize: DECLARE @i int = 4;
- Compound assignment operators: SET @i += 1;
- Row constructors: DECLARE @t TABLE (id int, name varchar(20)); INSERT INTO @t VALUES (1, 'Fred'), (2, 'Jim'), (3, 'Sue');

Grouping Sets
- Grouping Sets allow multiple GROUP BY clauses in a single SQL statement: multiple, arbitrary sets of subtotals.
- A single read pass for performance; nested subtotals provide even better performance.
- Grouping Sets are an ANSI standard; COMPUTE BY is deprecated.

GROUPING SETS, ROLLUP, CUBE
- SQL Server 2008 supports ANSI-syntax ROLLUP and CUBE; the pre-2008 non-ANSI syntax is deprecated.
- WITH ROLLUP produces n+1 different groupings of data, where n is the number of columns in the GROUP BY.
- WITH CUBE produces 2^n different groupings, where n is the number of columns in the GROUP BY.
- GROUPING SETS provide a "halfway measure": just the number of different groupings you need.
- Grouping Sets are visible in the query plan.

GROUPING_ID and GROUPING
- Grouping Sets can produce non-homogeneous sets: a grouping set includes NULL values for group members, so you need to distinguish grouping NULLs from NULL values in the data.
- GROUPING (column expression) returns 0 or 1: is this a group based on the column expression, or on a NULL value?
- GROUPING_ID (a, b, c) is a bitmask whose bits are set based on the column expressions a, b, and c.

MERGE Statement
- Multiple set operations in a single SQL statement, using multiple sets as input: MERGE target USING source ON ...
- Operations can be INSERT, UPDATE, or DELETE.
- Operations are based on WHEN MATCHED, WHEN NOT MATCHED [BY TARGET], and WHEN NOT MATCHED [BY SOURCE].

More on MERGE
- The MERGE statement can reference a $action column, used when MERGE is combined with the OUTPUT clause.
- Multiple WHEN clauses are possible for MATCHED and NOT MATCHED BY SOURCE; only one WHEN clause is allowed for NOT MATCHED BY TARGET.
- MERGE can be used with any table source.
- A MERGE statement causes triggers to be fired once; rows affected includes the total rows affected by all clauses.

MERGE Performance
- The MERGE statement is transactional; no explicit transaction is required.
- One pass through the tables: at most a full outer join. Matching rows = WHEN MATCHED; left-outer-join rows = WHEN NOT MATCHED BY TARGET; right-outer-join rows = WHEN NOT MATCHED BY SOURCE.

MERGE and Determinism
- An UPDATE using a JOIN is non-deterministic: if more than one row in the source matches the ON clause, either/any row can be used for the UPDATE.
- MERGE is deterministic: if more than one row in the source matches the ON clause, it is an error.

Keeping Track of Dependencies
- New dependency views replace sp_depends, and the views are kept in sync as changes occur.
- sys.dm_sql_referenced_entities lists all named entities that an object references. Example: which objects does this stored procedure use?
- sys.dm_sql_referencing_entities
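To tie the GROUPING SETS and MERGE notes together, here is a small worked sketch; the Sales, Inventory, and Deliveries tables and their columns are invented for illustration:

-- GROUPING SETS: subtotals by Region, subtotals by Product, and a grand total, in one pass
SELECT Region, Product, SUM(Amount) AS Total
FROM Sales
GROUP BY GROUPING SETS ((Region), (Product), ());

-- MERGE: upsert deliveries into inventory and delete rows the source no longer carries
MERGE Inventory AS tgt
USING Deliveries AS src
   ON tgt.ProductID = src.ProductID
WHEN MATCHED THEN
    UPDATE SET tgt.Quantity = tgt.Quantity + src.Quantity
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ProductID, Quantity) VALUES (src.ProductID, src.Quantity)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;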

    Read the article

  • The Next Wave of PeopleSoft Capabilities for the Staffing Industry Is Here

    - by Mark Rosenberg
    With the release of PeopleSoft Financials and Supply Chain Management 9.1 Feature Pack 2 in January this year, we introduced substantial new capabilities for our Staffing Industry customers. Through a co-development project with Infosys Limited, we have enriched Oracle's PeopleSoft Staffing Solution with new tools aimed at accelerating and improving the quality of job order fulfillment, increasing branch recruiter productivity, and driving profitable growth. Staffing industry firms succeed based on their ability to rapidly, cost-effectively, and continually fill their pipelines with new clients and job orders, recruit the best talent, and match orders with talent. Pressure to execute in each of these functional areas is even more acute on staffing firms as contingent labor becomes a more substantial and permanent part of the workforce mix. In an industry that creates value through speedy execution, there is little room for manual, inefficient processes and brittle, custom integrations, which throttle profitability and growth. The latest wave of investment in the PeopleSoft Staffing Solution focuses on generating efficiency and flexibility for our customers. Simplicity To operate profitably and continue growing, a Staffing enterprise needs its client management, recruiting, order fulfillment, and other processes to function in harmony. Most importantly, they need to be simple for recruiters, branch managers, and applicants to access and understand. The latest PeopleSoft Staffing Solution set of enhancements includes numerous automated defaulting mechanisms and information-rich dashboard pagelets that even a new employee can learn quickly. Pending Applicant, Agenda management, Search, and other pagelets are just a few of the newest, easy-to-use tools that not only aggregate and summarize information, but also provide instant access to applicants, tasks, and key reports for branch staff. Productivity The leading firms in the Staffing industry are those that can more efficiently orchestrate large numbers of candidates, clients, and orders than their competitors can. PeopleSoft Financials and Supply Chain Management 9.1 Feature Pack 2 delivers productivity boosters that Staffing firms can leverage to streamline tasks and processes for competitive advantage. For example, we enhanced the Recruiting Funnel, which manages the candidate on-boarding process, with a highly interactive user interface. It integrates disparate Staffing business processes and exploits new PeopleTools technologies to offer a superior on-boarding user experience. Automated creation of agenda items and assignment tasks for each candidate minimizes setup and organizes assignment steps for the on-boarding process. Mass updates of tasks and instant access to the candidate overview page (which we also expanded), candidate event status, event counts, and other key data enable recruiters to better serve clients and candidates. Lower TCO Constructing and maintaining an efficient yet flexible labor supply chain can be complicated, let alone expensive. Traditionally, Staffing firms have been challenged in controlling their technology cost of ownership because connecting candidate and client-facing tools involved building and integrating custom applications and technologies and managing staff turnover, placing heavy demands on IT and support staff. With PeopleSoft Financials and Supply Chain Management 9.1 Feature Pack 2, there are two major enhancements that aggressively tackle these challenges. 
First, we added another integration framework to enable cost-effective linking of the Staffing firm’s PeopleSoft applications and its job board distributors. (The first PeopleSoft 9.1 Feature Pack released in March 2011 delivered an integration framework to connect to resume parsing providers.) Second, we introduced the teaming concept to enable work to be partitioned to groups, as well as individuals. These two capabilities, combined with a host of others, position Staffing firms to configure and grow their businesses without growing their IT and overhead expenditures. For our Staffing Industry customers, PeopleSoft Financials and Supply Chain Management 9.1 Feature Pack 2 is loaded with high-value tools aimed at enabling and sustaining a flexible labor supply chain. For more information, contact [email protected] or [email protected].

    Read the article

  • Reconciling the Boy Scout Rule and Opportunistic Refactoring with code reviews

    - by t0x1n
    I am a great believer in the Boy Scout Rule: "Always check a module in cleaner than when you checked it out. No matter who the original author was, what if we always made some effort, no matter how small, to improve the module? What would be the result? I think if we all followed that simple rule, we'd see the end of the relentless deterioration of our software systems. Instead, our systems would gradually get better and better as they evolved. We'd also see teams caring for the system as a whole, rather than just individuals caring for their own small little part." I am also a great believer in the related idea of Opportunistic Refactoring: "Although there are places for some scheduled refactoring efforts, I prefer to encourage refactoring as an opportunistic activity, done whenever and wherever code needs to be cleaned up - by whoever. What this means is that at any time someone sees some code that isn't as clear as it should be, they should take the opportunity to fix it right there and then - or at least within a few minutes." Particularly note the following excerpt from the refactoring article: "I'm wary of any development practices that cause friction for opportunistic refactoring ... My sense is that most teams don't do enough refactoring, so it's important to pay attention to anything that is discouraging people from doing it. To help flush this out, be aware of any time you feel discouraged from doing a small refactoring, one that you're sure will only take a minute or two. Any such barrier is a smell that should prompt a conversation. So make a note of the discouragement and bring it up with the team. At the very least it should be discussed during your next retrospective." Where I work, there is one development practice that causes heavy friction - Code Review (CR). Whenever I change anything that's not in the scope of my "assignment", I'm rebuked by my reviewers for making the change harder to review. This is especially true when refactoring is involved, since it makes "line by line" diff comparison difficult. This approach is the standard here, which means opportunistic refactoring is seldom done, and only "planned" refactoring (which is usually too little, too late) takes place, if at all. I claim that the benefits are worth it, and that three reviewers will work a little harder (to actually understand the code before and after, rather than look at the narrow scope of which lines changed - the review itself would be better due to that alone) so that the next hundred developers reading and maintaining the code will benefit. When I present this argument to my reviewers, they say they have no problem with my refactoring, as long as it's not in the same CR. However, I claim this is a myth: (1) Most of the time, you only realize what and how you want to refactor when you're in the midst of your assignment. As Martin Fowler puts it: "As you add the functionality, you realize that some code you're adding contains some duplication with some existing code, so you need to refactor the existing code to clean things up... You may get something working, but realize that it would be better if the interaction with existing classes was changed. Take that opportunity to do that before you consider yourself done." (2) Nobody is going to look favorably on you releasing "refactoring" CRs you were not supposed to do. A CR has a certain overhead, and your manager doesn't want you to "waste your time" on refactoring. When it's bundled with the change you're supposed to make, this issue is minimized.
The issue is exacerbated by ReSharper, as each new file I add to the change (and I can't know in advance exactly which files will end up changed) is usually littered with errors and suggestions - most of which are spot on and totally deserve fixing. The end result is that I see horrible code, and I just leave it there. Ironically, I feel that fixing such code will not only fail to improve my standing, but will actually lower it and paint me as the "unfocused" guy who wastes time fixing things nobody cares about instead of doing his job. I feel bad about it because I truly despise bad code and can't stand watching it, let alone call it from my methods! Any thoughts on how I can remedy this situation?

    Read the article

  • How John Got 15x Improvement Without Really Trying

    - by rchrd
    The following article was published on a Sun Microsystems website a number of years ago by John Feo. It is still useful and worth preserving. So I'm republishing it here.  How I Got 15x Improvement Without Really Trying John Feo, Sun Microsystems Taking ten "personal" program codes used in scientific and engineering research, the author was able to get from 2 to 15 times performance improvement easily by applying some simple general optimization techniques. Introduction Scientific research based on computer simulation depends on the simulation for advancement. The research can advance only as fast as the computational codes can execute. The codes' efficiency determines both the rate and quality of results. In the same amount of time, a faster program can generate more results and can carry out a more detailed simulation of physical phenomena than a slower program. Highly optimized programs help science advance quickly and insure that monies supporting scientific research are used as effectively as possible. Scientific computer codes divide into three broad categories: ISV, community, and personal. ISV codes are large, mature production codes developed and sold commercially. The codes improve slowly over time both in methods and capabilities, and they are well tuned for most vendor platforms. Since the codes are mature and complex, there are few opportunities to improve their performance solely through code optimization. Improvements of 10% to 15% are typical. Examples of ISV codes are DYNA3D, Gaussian, and Nastran. Community codes are non-commercial production codes used by a particular research field. Generally, they are developed and distributed by a single academic or research institution with assistance from the community. Most users just run the codes, but some develop new methods and extensions that feed back into the general release. The codes are available on most vendor platforms. Since these codes are younger than ISV codes, there are more opportunities to optimize the source code. Improvements of 50% are not unusual. Examples of community codes are AMBER, CHARM, BLAST, and FASTA. Personal codes are those written by single users or small research groups for their own use. These codes are not distributed, but may be passed from professor-to-student or student-to-student over several years. They form the primordial ocean of applications from which community and ISV codes emerge. Government research grants pay for the development of most personal codes. This paper reports on the nature and performance of this class of codes. Over the last year, I have looked at over two dozen personal codes from more than a dozen research institutions. The codes cover a variety of scientific fields, including astronomy, atmospheric sciences, bioinformatics, biology, chemistry, geology, and physics. The sources range from a few hundred lines to more than ten thousand lines, and are written in Fortran, Fortran 90, C, and C++. For the most part, the codes are modular, documented, and written in a clear, straightforward manner. They do not use complex language features, advanced data structures, programming tricks, or libraries. I had little trouble understanding what the codes did or how data structures were used. Most came with a makefile. Surprisingly, only one of the applications is parallel. All developers have access to parallel machines, so availability is not an issue. Several tried to parallelize their applications, but stopped after encountering difficulties. 
Lack of education and a perception that parallelism is difficult prevented most from trying. I parallelized several of the codes using OpenMP, and did not judge any of the codes as difficult to parallelize.

Even more surprising than the lack of parallelism is the inefficiency of the codes. I was able to get large improvements in performance in a matter of a few days applying simple optimization techniques. Table 1 lists ten representative codes [names and affiliation are omitted to preserve anonymity]. Improvements on one processor range from 2x to 15.5x with a simple average of 4.75x. I did not use sophisticated performance tools or drill deep into the program's execution character as one would do when tuning ISV or community codes. Using only a profiler and source line timers, I identified inefficient sections of code and improved their performance by inspection. The changes were at a high level. I am sure there is another factor of 2 or 3 in each code, and more if the codes are parallelized.

The study’s results show that personal scientific codes are running many times slower than they should and that the problem is pervasive. Computational scientists are not sloppy programmers; however, few are trained in the art of computer programming or code optimization. I found that most have a working knowledge of some programming language and standard software engineering practices; but they do not know, or think about, how to make their programs run faster. They simply do not know the standard techniques used to make codes run faster. In fact, they do not even perceive that such techniques exist. The case studies described in this paper show that applying simple, well known techniques can significantly increase the performance of personal codes. It is important that the scientific community and the Government agencies that support scientific research find ways to better educate academic scientific programmers. The inefficiency of their codes is so bad that it is retarding both the quality and progress of scientific research.

 #   cache performance   redundant operations   loop structures   performance improvement
 1           x                                         x                   15.5
 2           x                                                              2.8
 3                                 x                   x                    2.5
 4           x                                                              2.1
 5           x                     x                                        2.0
 6                                 x                                        5.0
 7           x                                                              5.8
 8                                 x                                        6.3
 9                                                                          2.2
10           x                     x                                        3.3

Table 1 — Area of improvement and performance gains of 10 codes

The remainder of the paper is organized as follows: sections 2, 3, and 4 discuss the three most common sources of inefficiencies in the codes studied. These are cache performance, redundant operations, and loop structures. Each section includes several examples. The last section summarizes the work and suggests a possible solution to the issues raised.

Optimizing cache performance

Commodity microprocessor systems use caches to increase memory bandwidth and reduce memory latencies. Typical latencies from processor to L1, L2, local, and remote memory are 3, 10, 50, and 200 cycles, respectively. Moreover, bandwidth falls off dramatically as memory distances increase. Programs that do not use cache effectively run many times slower than programs that do. When optimizing for cache, the biggest performance gains are achieved by accessing data in cache order and reusing data to amortize the overhead of cache misses. Secondary considerations are prefetching, associativity, and replacement; however, the understanding and analysis required to optimize for the latter are probably beyond the capabilities of the non-expert. Much can be gained simply by accessing data in the correct order and maximizing data reuse.
6 out of the 10 codes studied here benefited from such high level optimizations.

Array Accesses

The most important cache optimization is the most basic: accessing Fortran array elements in column order and C array elements in row order. Four of the ten codes—1, 2, 4, and 10—got it wrong. Compilers will restructure nested loops to optimize cache performance, but may not do so if the loop structure is too complex, or the loop body includes conditionals, complex addressing, or function calls. In code 1, the compiler failed to invert a key loop because of complex addressing:

do I = 0, 1010, delta_x
  IM = I - delta_x
  IP = I + delta_x
  do J = 5, 995, delta_x
    JM = J - delta_x
    JP = J + delta_x
    T1 = CA1(IP, J) + CA1(I, JP)
    T2 = CA1(IM, J) + CA1(I, JM)
    S1 = T1 + T2 - 4 * CA1(I, J)
    CA(I, J) = CA1(I, J) + D * S1
  end do
end do

In code 2, the culprit is conditionals:

do I = 1, N
  do J = 1, N
    If (IFLAG(I,J) .EQ. 0) then
      T1 = Value(I, J-1)
      T2 = Value(I-1, J)
      T3 = Value(I, J)
      T4 = Value(I+1, J)
      T5 = Value(I, J+1)
      Value(I,J) = 0.25 * (T1 + T2 + T5 + T4)
      Delta = ABS(T3 - Value(I,J))
      If (Delta .GT. MaxDelta) MaxDelta = Delta
    endif
  enddo
enddo

I fixed both programs by inverting the loops by hand.

Code 10 has three-dimensional arrays and triply nested loops. The structure of the most computationally intensive loops is too complex to invert automatically or by hand. The only practical solution is to transpose the arrays so that the dimension accessed by the innermost loop is in cache order. The arrays can be transposed at construction or prior to entering a computationally intensive section of code. The former requires all array references to be modified, while the latter is cost effective only if the cost of the transpose is amortized over many accesses. I used the second approach to optimize code 10.

Code 5 has four-dimensional arrays and loops are nested four deep. For all of the reasons cited above the compiler is not able to restructure three key loops. Assume C arrays and let the four dimensions of the arrays be i, j, k, and l. In the original code, the index structure of the three loops is

L1: for i      L2: for i      L3: for i
      for l          for l          for j
      for k          for j          for k
      for j          for k          for l

So only L3 accesses array elements in cache order. L1 is a very complex loop—much too complex to invert. I brought the loop into cache alignment by transposing the second and fourth dimensions of the arrays. Since the code uses a macro to compute all array indexes, I effected the transpose at construction and changed the macro appropriately. The dimensions of the new arrays are now: i, l, k, and j. L3 is a simple loop and easily inverted. L2 has a loop-carried scalar dependence in k. By promoting the scalar name that carries the dependence to an array, I was able to invert the third and fourth subloops aligning the loop with cache. Code 5 is by far the most difficult of the four codes to optimize for array accesses; but the knowledge required to fix the problems is no more than that required for the other codes. I would judge this code at the limits of, but not beyond, the capabilities of appropriately trained computational scientists.

Array Strides

When a cache miss occurs, a line (64 bytes) rather than just one word is loaded into the cache. If data is accessed stride 1, then the cost of the miss is amortized over 8 words. Any stride other than one reduces the cost savings. Two of the ten codes studied suffered from non-unit strides. The codes represent two important classes of "strided" codes.
Two of the ten codes studied suffered from non-unit strides. The codes represent two important classes of "strided" codes.

Code 1 employs a multi-grid algorithm to reduce time to convergence. The grids are every tenth, fifth, second, and unit element. Since time to convergence is inversely proportional to the distance between elements, coarse grids converge quickly, providing good starting values for finer grids. The better starting values further reduce the time to convergence. The downside is that grids of every nth element, n > 1, introduce non-unit strides into the computation. In the original code, much of the savings of the multi-grid algorithm were lost due to this problem. I eliminated the problem by compressing (copying) coarse grids into contiguous memory, and rewriting the computation as a function of the compressed grid. On convergence, I copied the final values of the compressed grid back to the original grid. The savings gained from unit stride access of the compressed grid more than paid for the cost of copying. Using compressed grids, the loop from code 1 included in the previous section becomes

do j = 1, GZ
  do i = 1, GZ
    T1 = CA1(i+0, j-1) + CA1(i-1, j+0)
    T4 = CA1(i+1, j+0) + CA1(i+0, j+1)
    S1 = T1 + T4 - 4 * CA1(i+0, j+0)
    CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1
  enddo
enddo

where CA and CA1 are compressed arrays of size GZ.

Code 7 traverses a list of objects selecting objects for later processing. The labels of the selected objects are stored in an array. The selection step has unit stride, but the processing steps have irregular stride. A fix is to save the parameters of the selected objects in temporary arrays as they are selected, and pass the temporary arrays to the processing functions. The fix is practical if the same parameters are used in selection as in processing, or if processing comprises a series of distinct steps which use overlapping subsets of the parameters. Both conditions are true for code 7, so I achieved significant improvement by copying parameters to temporary arrays during selection.
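A sketch of the gather-into-temporaries idea (all names here are invented for illustration; this is not code 7 itself):

#include <stddef.h>

typedef struct { double mass, charge; int selected; } Object;  /* hypothetical */

/* Selection pass: unit stride over obj[]; the parameters of selected
   objects are packed into dense temporary arrays so that the later
   processing passes also run at unit stride. Returns the number selected. */
size_t select_and_gather(const Object *obj, size_t n,
                         double *mass_tmp, double *charge_tmp) {
    size_t m = 0;
    for (size_t i = 0; i < n; i++) {
        if (obj[i].selected) {
            mass_tmp[m]   = obj[i].mass;
            charge_tmp[m] = obj[i].charge;
            m++;
        }
    }
    return m;   /* processing functions then loop over indices 0..m-1 */
}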
Data reuse

In the previous sections, we optimized for spatial locality. It is also important to optimize for temporal locality. Once read, a datum should be used as much as possible before it is forced from cache. Loop fusion and loop unrolling are two techniques that increase temporal locality. Unfortunately, both techniques increase register pressure—as loop bodies become larger, the number of registers required to hold temporary values grows. Once register spilling occurs, any gains evaporate quickly. For multiprocessors with small register sets or small caches, the sweet spot can be very small. In the ten codes presented here, I found no opportunities for loop fusion and only two opportunities for loop unrolling (codes 1 and 3).

In code 1, unrolling the outer and inner loop one iteration increases the number of result values computed by the loop body from 1 to 4:

do J = 1, GZ-2, 2
  do I = 1, GZ-2, 2
    T1 = CA1(i+0, j-1) + CA1(i-1, j+0)
    T2 = CA1(i+1, j-1) + CA1(i+0, j+0)
    T3 = CA1(i+0, j+0) + CA1(i-1, j+1)
    T4 = CA1(i+1, j+0) + CA1(i+0, j+1)
    T5 = CA1(i+2, j+0) + CA1(i+1, j+1)
    T6 = CA1(i+1, j+1) + CA1(i+0, j+2)
    T7 = CA1(i+2, j+1) + CA1(i+1, j+2)
    S1 = T1 + T4 - 4 * CA1(i+0, j+0)
    S2 = T2 + T5 - 4 * CA1(i+1, j+0)
    S3 = T3 + T6 - 4 * CA1(i+0, j+1)
    S4 = T4 + T7 - 4 * CA1(i+1, j+1)
    CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1
    CA(i+1, j+0) = CA1(i+1, j+0) + DD * S2
    CA(i+0, j+1) = CA1(i+0, j+1) + DD * S3
    CA(i+1, j+1) = CA1(i+1, j+1) + DD * S4
  enddo
enddo

The loop body executes 12 reads, whereas the rolled loop shown in the previous section executes 20 reads to compute the same four values.

In code 3, two loops are unrolled 8 times and one loop is unrolled 4 times. Here is the before:

for (k = 0; k < NK[u]; k++) {
    sum = 0.0;
    for (y = 0; y < NY; y++) {
        sum += W[y][u][k] * delta[y];
    }
    backprop[i++] = sum;
}

and here is the after, for one of the loops unrolled 8 times:

for (k = 0; k < KK - 8; k += 8) {
    sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0;
    sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0;
    for (y = 0; y < NY; y++) {
        sum0 += W[y][0][k+0] * delta[y];
        sum1 += W[y][0][k+1] * delta[y];
        sum2 += W[y][0][k+2] * delta[y];
        sum3 += W[y][0][k+3] * delta[y];
        sum4 += W[y][0][k+4] * delta[y];
        sum5 += W[y][0][k+5] * delta[y];
        sum6 += W[y][0][k+6] * delta[y];
        sum7 += W[y][0][k+7] * delta[y];
    }
    backprop[k+0] = sum0;
    backprop[k+1] = sum1;
    backprop[k+2] = sum2;
    backprop[k+3] = sum3;
    backprop[k+4] = sum4;
    backprop[k+5] = sum5;
    backprop[k+6] = sum6;
    backprop[k+7] = sum7;
}

Optimizing for temporal locality is the most difficult optimization considered in this paper. The concepts are not difficult, but the sweet spot is small. Identifying where the program can benefit from loop unrolling or loop fusion is not trivial. Moreover, it takes some effort to get it right. Still, educating scientific programmers about temporal locality and teaching them how to optimize for it will pay dividends.

Reducing instruction count

Execution time is a function of instruction count. Reduce the count and you usually reduce the time. The best solution is to use a more efficient algorithm; that is, an algorithm whose order of complexity is smaller, that converges quicker, or is more accurate. Optimizing source code without changing the algorithm yields smaller, but still significant, gains. This paper considers only the latter because the intent is to study how much better codes can run if written by programmers schooled in basic code optimization techniques.

The ten codes studied benefited from three types of "instruction reducing" optimizations. The two most prevalent were hoisting invariant memory and data operations out of inner loops. The third was eliminating unnecessary data copying. The nature of these inefficiencies is language dependent.

Memory operations

The semantics of C make it difficult for the compiler to determine all the invariant memory operations in a loop. The problem is particularly acute for loops in functions since the compiler may not know the values of the function's parameters at every call site when compiling the function. Most compilers support pragmas to help resolve ambiguities; however, these pragmas are not comprehensive and there is no standard syntax. To guarantee that invariant memory operations are not executed repetitively, the user has little choice but to hoist the operations by hand. The problem is not as severe in Fortran programs because in the absence of equivalence statements, it is a violation of the language's semantics for two names to share memory.

Codes 3 and 5 are C programs. In both cases, the compiler did not hoist all invariant memory operations from inner loops. Consider the following loop from code 3:

for (y = 0; y < NY; y++) {
    i = 0;
    for (u = 0; u < NU; u++) {
        for (k = 0; k < NK[u]; k++) {
            dW[y][u][k] += delta[y] * I1[i++];
        }
    }
}

Since dW[y][u] can point to the same memory space as delta for one or more values of y and u, assignment to dW[y][u][k] may change the value of delta[y]. In reality, dW and delta do not overlap in memory, so I rewrote the loop as

for (y = 0; y < NY; y++) {
    i = 0;
    Dy = delta[y];
    for (u = 0; u < NU; u++) {
        for (k = 0; k < NK[u]; k++) {
            dW[y][u][k] += Dy * I1[i++];
        }
    }
}
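A side note beyond what the paper itself uses: on C99 and later compilers, the restrict qualifier lets the programmer make the no-aliasing promise explicit, so the compiler can hoist delta[y] on its own. A hypothetical signature for the loop above (my sketch; the parameter types are simplified) might be:

/* 'restrict' promises the compiler that the data reached through each
   pointer is not reached through the others, licensing the hoist.
   Aliasing guarantees through pointer-to-pointer arrays need care in
   practice; a flattened 1-D array makes the promise airtight. */
void update_weights(int NY, int NU, const int *restrict NK,
                    double ***restrict dW,
                    const double *restrict delta,
                    const double *restrict I1)
{
    for (int y = 0; y < NY; y++) {
        int i = 0;
        for (int u = 0; u < NU; u++)
            for (int k = 0; k < NK[u]; k++)
                dW[y][u][k] += delta[y] * I1[i++];
    }
}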
Failure to hoist invariant memory operations may be due to complex address calculations. If the compiler can not determine that the address calculation is invariant, then it can hoist neither the calculation nor the associated memory operations. As noted above, code 5 uses a macro to address four-dimensional arrays:

#define MAT4D(a,q,i,j,k) (double *)((a)->data + (q)*(a)->strides[0] + (i)*(a)->strides[3] + (j)*(a)->strides[2] + (k)*(a)->strides[1])

The macro is too complex for the compiler to understand and so it does not identify any subexpressions as loop invariant. The simplest way to eliminate the address calculation from the innermost loop (over i) is to define

a0 = MAT4D(a, q, 0, j, k)

before the loop and then replace all instances of

*MAT4D(a, q, i, j, k)

in the loop with

a0[i]

A similar problem appears in code 6, a Fortran program. The key loop in this program is

do n1 = 1, nh
  nx1 = (n1 - 1) / nz + 1
  nz1 = n1 - nz * (nx1 - 1)
  do n2 = 1, nh
    nx2 = (n2 - 1) / nz + 1
    nz2 = n2 - nz * (nx2 - 1)
    ndx = nx2 - nx1
    ndy = nz2 - nz1
    gxx = grn(1, ndx, ndy)
    gyy = grn(2, ndx, ndy)
    gxy = grn(3, ndx, ndy)
    balance(n1,1) = balance(n1,1) + (force(n2,1) * gxx + force(n2,2) * gxy) * h1
    balance(n1,2) = balance(n1,2) + (force(n2,1) * gxy + force(n2,2) * gyy) * h1
  end do
end do

The programmer has written this loop well—there are no loop invariant operations with respect to n1 and n2. However, the loop resides within an iterative loop over time, and the index calculations are independent with respect to time. Trading space for time, I precomputed the index values prior to entering the time loop and stored the values in two arrays. I then replaced the index calculations with reads of the arrays.
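In C terms, the fix looks something like the following sketch (invented names; code 6 itself is Fortran):

/* The nx/nz index pairs depend only on the grid index, not on time, so
   compute them once before the time loop instead of on every time step. */
void precompute_indices(int nh, int nz, int *nx, int *nzi) {
    for (int n = 1; n <= nh; n++) {
        nx[n-1]  = (n - 1) / nz + 1;          /* same formulas as the loop */
        nzi[n-1] = n - nz * (nx[n-1] - 1);
    }
}

/* Inside the time loop, the body then reads nx[n1-1], nzi[n1-1], nx[n2-1],
   and nzi[n2-1] in place of the divisions and multiplications. */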
Data operations

Ways to reduce data operations can appear in many forms. Implementing a more efficient algorithm produces the biggest gains. The closest I came to an algorithm change was in code 4. This code computes the inner product of K-vectors A(i) and B(j), 0 <= i < N, 0 <= j < M, for most values of i and j. Since the program computes most of the N*M possible inner products, it is more efficient to compute all the inner products in one triply-nested loop rather than one at a time when needed. The savings accrue from reading A(i) once for all B(j) vectors and from loop unrolling.

for (i = 0; i < N; i += 8) {
    for (j = 0; j < M; j++) {
        sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0;
        sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0;
        for (k = 0; k < K; k++) {
            sum0 += A[i+0][k] * B[j][k];
            sum1 += A[i+1][k] * B[j][k];
            sum2 += A[i+2][k] * B[j][k];
            sum3 += A[i+3][k] * B[j][k];
            sum4 += A[i+4][k] * B[j][k];
            sum5 += A[i+5][k] * B[j][k];
            sum6 += A[i+6][k] * B[j][k];
            sum7 += A[i+7][k] * B[j][k];
        }
        C[i+0][j] = sum0;
        C[i+1][j] = sum1;
        C[i+2][j] = sum2;
        C[i+3][j] = sum3;
        C[i+4][j] = sum4;
        C[i+5][j] = sum5;
        C[i+6][j] = sum6;
        C[i+7][j] = sum7;
    }
}

This change requires knowledge of a typical run; i.e., that most inner products are computed. The reasons for the change, however, derive from basic optimization concepts. It is the type of change easily made at development time by a knowledgeable programmer.

In code 5, we have the data version of the index optimization in code 6. Here a very expensive computation is a function of the loop indices and so cannot be hoisted out of the loop; however, the computation is invariant with respect to an outer iterative loop over time. We can compute its value for each iteration of the computation loop prior to entering the time loop and save the values in an array. The increase in memory required to store the values is small in comparison to the large savings in time.

The main loop in code 8 is doubly nested. The inner loop includes a series of guarded computations; some are a function of the inner loop index but not the outer loop index, while others are a function of the outer loop index but not the inner loop index:

for (j = 0; j < N; j++) {
    for (i = 0; i < M; i++) {
        r = i * hrmax;
        R = A[j];
        temp = (PRM[3] == 0.0) ? 1.0 : pow(r, PRM[3]);
        high = temp * kcoeff * B[j] * PRM[2] * PRM[4];
        low = high * PRM[6] * PRM[6] / (1.0 + pow(PRM[4] * PRM[6], 2.0));
        kap = (R > PRM[6])
                ? high * R * R / (1.0 + pow(PRM[4] * r, 2.0))
                : low * pow(R / PRM[6], PRM[5]);
        < rest of loop omitted >
    }
}

Note that the value of temp is invariant to j. Thus, we can hoist the computation for temp out of the loop and save its values in an array:

for (i = 0; i < M; i++) {
    r = i * hrmax;
    TEMP[i] = pow(r, PRM[3]);
}

[N.B. — the case for PRM[3] == 0.0 is omitted here and reintroduced below.]

We now hoist out of the inner loop the computations invariant to i. Since the conditional guarding the value of kap is invariant to i, it behooves us to hoist the computation out of the inner loop, thereby executing the guard once rather than M times. The final version of the code is

for (j = 0; j < N; j++) {
    R = rig[j] / 1000.;
    tmp1 = kcoeff * par[2] * beta[j] * par[4];
    tmp2 = 1.0 + (par[4] * par[4] * par[6] * par[6]);
    tmp3 = 1.0 + (par[4] * par[4] * R * R);
    tmp4 = par[6] * par[6] / tmp2;
    tmp5 = R * R / tmp3;
    tmp6 = pow(R / par[6], par[5]);
    if ((par[3] == 0.0) && (R > par[6])) {
        for (i = 1; i <= imax1; i++)
            KAP[i] = tmp1 * tmp5;
    }
    else if ((par[3] == 0.0) && (R <= par[6])) {
        for (i = 1; i <= imax1; i++)
            KAP[i] = tmp1 * tmp4 * tmp6;
    }
    else if ((par[3] != 0.0) && (R > par[6])) {
        for (i = 1; i <= imax1; i++)
            KAP[i] = tmp1 * TEMP[i] * tmp5;
    }
    else if ((par[3] != 0.0) && (R <= par[6])) {
        for (i = 1; i <= imax1; i++)
            KAP[i] = tmp1 * TEMP[i] * tmp4 * tmp6;
    }
    for (i = 0; i < M; i++) {
        kap = KAP[i];
        r = i * hrmax;
        < rest of loop omitted >
    }
}

Maybe not the prettiest piece of code, but it is certainly much more efficient than the original loop.

Copy operations

Several programs unnecessarily copy data from one data structure to another. This problem occurs in both Fortran and C programs, although it manifests itself differently in the two languages.

Code 1 declares two arrays—one for old values and one for new values. At the end of each iteration, the array of new values is copied to the array of old values to reset the data structures for the next iteration. This problem occurs in Fortran programs not included in this study and in both Fortran 77 and Fortran 90 code. Introducing pointers to the arrays and swapping pointer values is an obvious way to eliminate the copying; but pointers are not a feature that many Fortran programmers know well or are comfortable using. An easy solution not involving pointers is to extend the dimension of the value array by 1 and use the last dimension to differentiate between arrays at different times. For example, if the data space is N x N, declare the array (N, N, 2). Then store the problem's initial values in (_, _, 2) and define the scalar names new = 2 and old = 1. At the start of each iteration, swap old and new to reset the arrays.
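A small C rendering of the same index-swap idea (my sketch, not code 1 itself):

#define N 512
static double value[2][N][N];   /* extra leading dimension: two time planes */

/* Swapping two integers replaces a full N x N array copy per iteration.
   Initial values are stored in plane 1 ('new') before the loop begins. */
void iterate(int steps) {
    int old = 0, new_ = 1;      /* 'new_' avoids the C++ keyword */
    for (int t = 0; t < steps; t++) {
        int tmp = old; old = new_; new_ = tmp;   /* swap instead of copy */
        /* ... compute value[new_][i][j] from value[old][i][j] ... */
    }
}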
The old–new copy problem did not appear in any C program. In programs that had new and old values, the code swapped pointers to reset data structures. Where unnecessary copying did occur is in structure assignment and parameter passing. Structures in C are handled much like scalars. Assignment causes the data space of the right-hand name to be copied to the data space of the left-hand name. Similarly, when a structure is passed to a function, the data space of the actual parameter is copied to the data space of the formal parameter. If the structure is large and the assignment or function call is in an inner loop, then copying costs can grow quite large. While none of the ten programs considered here manifested this problem, it did occur in programs not included in the study. A simple fix is always to refer to structures via pointers.

Optimizing loop structures

Since scientific programs spend almost all their time in loops, efficient loops are the key to good performance. Conditionals, function calls, little instruction level parallelism, and large numbers of temporary values make it difficult for the compiler to generate tightly packed, highly efficient code. Conditionals and function calls introduce jumps that disrupt code flow. Users should eliminate conditionals, or isolate them in their own loops, as much as possible. Often logical expressions can be substituted for if-then-else statements. For example, code 2 includes the following snippet:

MaxDelta = 0.0
do J = 1, N
  do I = 1, M
    < code omitted >
    Delta = abs(OldValue - NewValue)
    if (Delta > MaxDelta) MaxDelta = Delta
  enddo
enddo
if (MaxDelta .gt. 0.001) goto 200

Since the only use of MaxDelta is to control the jump to 200, and all that matters is whether or not it is greater than 0.001, I made MaxDelta a boolean and rewrote the snippet as

MaxDelta = .false.
do J = 1, N
  do I = 1, M
    < code omitted >
    Delta = abs(OldValue - NewValue)
    MaxDelta = MaxDelta .or. (Delta .gt. 0.001)
  enddo
enddo
if (MaxDelta) goto 200

thereby eliminating the conditional expression from the inner loop.

A microprocessor can execute many instructions per instruction cycle. Typically, it can execute one or more memory, floating point, integer, and jump operations. To be executed simultaneously, the operations must be independent. Thick loops tend to have more instruction level parallelism than thin loops. Moreover, they reduce memory traffic by maximizing data reuse. Loop unrolling and loop fusion are two techniques to increase the size of loop bodies. Several of the codes studied benefited from loop unrolling, but none benefited from loop fusion. This observation is not too surprising since it is the general tendency of programmers to write thick loops.

As loops become thicker, the number of temporary values grows, increasing register pressure. If registers spill, then memory traffic increases and code flow is disrupted. A thick loop with many temporary values may execute slower than an equivalent series of thin loops. The biggest gain will be achieved if the thick loop can be split into a series of independent loops, eliminating the need to write and read temporary arrays.
I found such an occasion in code 10, where I split the loop

do i = 1, n
  do j = 1, m
    A24(j,i) = S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i)
    B24(j,i) = S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i)
    A25(j,i) = S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i)
    B25(j,i) = S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i)
    C24(j,i) = S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i)
    D24(j,i) = S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i)
    C25(j,i) = S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i)
    D25(j,i) = S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i)
  end do
end do

into two disjoint loops

do i = 1, n
  do j = 1, m
    A24(j,i) = S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i)
    B24(j,i) = S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i)
    A25(j,i) = S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i)
    B25(j,i) = S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i)
  end do
end do

do i = 1, n
  do j = 1, m
    C24(j,i) = S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i)
    D24(j,i) = S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i)
    C25(j,i) = S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i)
    D25(j,i) = S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i)
  end do
end do

Conclusions

Over the course of the last year, I have had the opportunity to work with over two dozen academic scientific programmers at leading research universities. Their research interests span a broad range of scientific fields. Except for two programs that relied almost exclusively on library routines (matrix multiply and fast Fourier transform), I was able to improve significantly the single processor performance of all codes. Improvements range from 2x to 15.5x with a simple average of 4.75x. Changes to the source code were at a very high level. I did not use sophisticated techniques or programming tools to discover inefficiencies or effect the changes. Only one code was parallel, despite the availability of parallel systems to all developers.

Clearly, we have a problem—personal scientific research codes are highly inefficient and not running in parallel. The developers are unaware of simple optimization techniques to make programs run faster. They lack education in the art of code optimization and parallel programming. I do not believe we can fix the problem by publishing additional books or training manuals. To date, the developers in question have not studied the books or manuals available, and are unlikely to do so in the future. Short courses are a possible solution, but I believe they are too concentrated to be of much use. The general concepts can be taught in a three or four day course, but that is not enough time for students to practice what they learn and acquire the experience to apply and extend the concepts to their codes. Practice is the key to becoming proficient at optimization.

I recommend that graduate students be required to take a semester-length course in optimization and parallel programming. We would never give someone access to state-of-the-art scientific equipment costing hundreds of thousands of dollars without first requiring them to demonstrate that they know how to use the equipment. Yet the criterion for time on state-of-the-art supercomputers is at most an interesting project. Requestors are never asked to demonstrate that they know how to use the system, or can use the system effectively. A semester course would teach them the required skills. Government agencies that fund academic scientific research pay for most of the computer systems supporting scientific research as well as the development of most personal scientific codes.
These agencies should require graduate schools to offer a course in optimization and parallel programming as a requirement for funding.

About the Author

John Feo received his Ph.D. in Computer Science from The University of Texas at Austin in 1986. After graduate school, Dr. Feo worked at Lawrence Livermore National Laboratory, where he was the Group Leader of the Computer Research Group and principal investigator of the Sisal Language Project. In 1997, Dr. Feo joined Tera Computer Company, where he was project manager for the MTA and oversaw the programming and evaluation of the MTA at the San Diego Supercomputer Center. In 2000, Dr. Feo joined Sun Microsystems as an HPC application specialist. He works with university research groups to optimize and parallelize scientific codes. Dr. Feo has published over two dozen research articles in the areas of parallel programming, parallel programming languages, and application performance.

    Read the article

  • Combining template method with strategy

    - by Mekswoll
    An assignment in my software engineering class is to design an application which can play different forms of a particular game. The game in question is Mancala; some of these games are called Wari or Kalah. These games differ in some aspects, but for my question it's only important to know that the games could differ in the following: (1) the way in which the result of a move is handled, (2) the way in which the end of the game is determined, and (3) the way in which the winner is determined. The first thing that came to my mind to design this was to use the strategy pattern: I have a variation in algorithms (the actual rules of the game). The design could look like this: I then thought to myself that in the game of Mancala and Wari the way the winner is determined is exactly the same and the code would be duplicated. I don't think this is by definition a violation of the 'one rule, one place' or DRY principle, seeing as a change in rules for Mancala wouldn't automatically mean that rule should be changed in Wari as well. Nevertheless, from the feedback I got from my professor I got the impression that I should find a different design. I then came up with this: each game (Mancala, Wari, Kalah, ...) would just have an attribute of the type of each rule's interface, i.e. WinnerDeterminer, and if there's a Mancala 2.0 version which is the same as Mancala 1.0 except for how the winner is determined, it can just use the Mancala versions. I think the implementation of these rules as a strategy pattern is certainly valid. But the real problem comes when I want to design it further. In reading about the template method pattern I immediately thought it could be applied to this problem. The actions that are done when a user makes a move are always the same, and in the same order, namely: (1) deposit stones in holes (this is the same for all games, so it would be implemented in the template method itself), (2) determine the result of the move, (3) determine if the game has finished because of the previous move, and (4) if the game has finished, determine who has won. Those last three steps are all in my strategy pattern described above. I'm having a lot of trouble combining these two. One possible solution I found would be to abandon the strategy pattern and do the following: I don't really see the design difference between the strategy pattern and this. But I am certain I need to use a template method (although I was just as sure about having to use a strategy pattern). I also can't determine who would be responsible for creating the TurnTemplate object, whereas with the strategy pattern I feel I have families of objects (the three rules) which I could easily create using an abstract factory pattern. I would then have a MancalaRuleFactory, WariRuleFactory, etc., and they would create the correct instances of the rules and hand me back a RuleSet object. Let's say that I use the strategy + abstract factory pattern and I have a RuleSet object which has algorithms for the three rules in it. The only way I feel I can still use the template method pattern with this is to pass this RuleSet object to my TurnTemplate. The 'problem' that then surfaces is that I would never need my concrete implementations of the TurnTemplate; these classes would become obsolete. In my protected methods in the TurnTemplate I could just call ruleSet.determineWinner(). As a consequence, the TurnTemplate class would no longer be abstract but would have to become concrete; is it then still a template method pattern? To summarize, am I thinking in the right way or am I missing something easy? 
If I'm on the right track, how do I combine a strategy pattern and a template method pattern? This is part of a homework assignment, but I'm not looking to be gifted the answer; I have deliberately been very verbose in my question to show that I have thought about it before coming here to ask a question.

    Read the article

  • Attachments in Oracle BPM 11g – Create a BPM Process Instance by passing an Attachment

    - by Venugopal Mangipudi
    Problem Statement: On a recent engagement I had a requirement where we needed to create BPM instances using a message start event. The challenge was that the instance needed to be created after polling a file location and attaching the picked-up file (pdf) as an attachment to the instance. Proposed Solution: I was contemplating using the process API to accomplish this, but came up with a solution which involves a BPEL process to pick up the file and send a notification to the BPM process by passing the attachment as a payload. The following are some of the brief steps that were used to build the solution: BPM Process to receive an attachment as part of the payload: The BPM process is a very simple process which has a Message Start event that accepts the attachment as an argument and a Simple User Task that the user can use to view the attachment (as part of the OOTB attachment panel). The input payload is based on AttachmentPayload.xsd. The 3 key elements of the payload are:

    <xsd:element name="filename" type="xsd:string"/>
    <xsd:element name="mimetype" type="xsd:string"/>
    <xsd:element name="content" type="xsd:base64Binary"/>

    A screenshot of the Human task data assignment that needs to be performed to attach the file is provided here. Once the process and the UI project (default generated UI) are deployed to the SOA server, copy the WSDL location of the process service (from EM). This WSDL would be used in the BPEL project to create the instances in the BPM process after a file is polled. BPEL Process to poll for the file and create instances in the BPM process: For the BPEL process, a File adapter was configured as a Read service (File Streaming option and keeping the schema as Opaque). Once a location and the file pattern to poll are provided, the Read service partner link was wired to invoke the BPEL process. Also, using the BPM process WSDL, we can create the web service reference and can invoke the start operation. Before we do the assignment for the Invoke operation, a global variable should be created to hold the value of the fileName of the file. The mapping to the global variable can be done on the Receive activity properties (jca.file.FileName). So for the assign operation before we invoke the BPM process service, we can get the content of the file from the receive input variable and the fileName from the jca.file.FileName property. The mimetype needs to be hard-coded to the mime type of the file: application/pdf (I am still researching ways to derive the mime type, as it is not available as part of the jca.file properties). The screenshot of the BPEL process can be found here and the Assign activity can be found here. The project source can be found at the following location. A sample pdf file to test the project and a screenshot of the BPM Human task screen after the successful creation of the instance can be found here. References: [1] https://blogs.oracle.com/fmwinaction/entry/oracle_bpm_adding_an_attachment

    Read the article

  • SOA Community Newsletter May 2014

    - by JuergenKress
    Registration for the Fusion Middleware Summer Camps 2014 is open – register asap for one of our bootcamps, August 4th – 8th 2014 in Lisbon. Please read the details and prerequisites carefully before you register. We expect that, like in the past, the conference will be booked out soon! If you can't make it to Lisbon, attend our SOA Suite 11c free on-demand Bootcamp or Managing the Complexity of IoT online trainings. With more than 5000 customers, SOA Suite Achieves Significant Customer Adoption and Industry Recognition. Thanks to all our SOA Specialized partners for making our joint SOA customers successful! As a summary of the Industrial SOA series we published the Podcast Show Notes: SOA and Cloud - Where's This Relationship Going? Make sure you use the Oracle Demo Systems for your customer presentations. The demo systems are hosted by Oracle and include complete scenarios based on the latest Middleware version, like the new B2B SOA Suite Demo System! For local presentations without fast internet use the SOA/BPM 11.1.1.7.1 Virtual Machine and Case Management Sample. At our SOA Community Workspace (SOA Community membership required) you can get new IoT presentations for Location Based Offers for Banking & Whitepaper and online Webcast & Utility presentation. In this newsletter you will find many articles about OSB: OSB 11g – A Hands-on Tutorial & Using Split-Joins in OSB Services for parallel processing of messages & OSB, Service Callouts and OQL & Working with Oracle Security Token Service. Thanks for sharing all the additional SOA articles within the community: How to configure Oracle SOA/BPM task auto release & Controlling BPEL process flow at runtime & Upgrading to Oracle SOA Suite 11g PS6 (11.1.1.7)? Do this. & BPEL and BPM's performance monitoring using DMS & SOA 11g - Create RESTful Service In Oracle SOA & Wrong timezone causes TopLink warning in SOA suite. Highlight of the BPM and ACM section is the IDC BPM vendor report. The new bundle patch including the ACM UI is now available. If you want to learn more about ACM, get the ACM training material at our SOA Community Workspace (SOA Community membership required). A great demo for your next BPM presentation is the BPM iPad app. It's simple: Mobile BPM is Not An Option. It's a Necessity. Thanks for sharing all the additional BPM articles within the community: BPM update adds Case Management Web Interface and REST APIs & Implementing deadline functionality with Oracle Adaptive Case Management & BPM 11g Timeout Heuristics & Humantask Assignment: Names and Expressions Assignment via Rules. In our last section, Architecture, it is all about design. Usability is a key factor for customer satisfaction, worth spending some time on: read the Simplified User Experience Design Patterns eBook. A great blueprint for your project! See you in Lisbon! To read the newsletter please visit www.tinyurl.com/soaNewsMay2014 (OPN Account required) To become a member of the SOA Partner Community please register at http://www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Facebook Wiki Mix Forum Technorati Tags: newsletter,SOA Community newsletter,SOA Community,Oracle,OPN,Jürgen Kress

    Read the article

  • HPCM 11.1.2.2.x - HPCM Standard Costing Generating >99 Calc Scripts

    - by Jane Story
    HPCM Standard Profitability calculation scripts are named based on a documented naming convention. From 11.1.2.2.x, the script name = a script suffix (1 letter) + POV identifier (3 digits) + stage order number (1 digit) + "_" + index (2 digits) (please see the documentation for more information: http://docs.oracle.com/cd/E17236_01/epm.1112/hpm_admin/apes01.html). This naming convention results in the name being 8 characters in length, i.e. the maximum number of characters permitted for calculation script names in non-Unicode Essbase BSO databases. The index in the name indicates the number of scripts per stage. In the vast majority of cases, the number of scripts generated per stage will be significantly less than 100 and therefore there will be no issue. However, in some cases, the number of scripts generated can exceed 99. It is unusual for an application to generate more than 99 calculation scripts for one stage. This may indicate that explicit assignments are being extensively used. An assessment should be made of the design to see if assignment rules can be used instead. Assignment rules will reduce the need for so many calculation script lines, which will reduce the requirement for such a large number of calculation scripts. In cases where the number of scripts generated exceeds 99, the length of the name of the 100th calculation script differs from the 99th: the name grows from 8 characters to 9 characters (e.g. A6811_100 rather than A6811_99). A name of 9 characters is not permitted in non-Unicode applications; it is "too long". When this occurs, an error will show in the hpcm.log as "Error processing calculation scripts" and "Unexpected error in business logic". Further down the log, it is possible to see that this is "Caused by: Error copying object" and "Caused by: com.essbase.api.base.EssException: Cannot put olap file object ... object name_[<calc script name> e.g. A6811_100] too long for non-unicode mode application". The error file will give the name of the calculation script which is causing the issue. In my example, this is A6811_100 and you can see it is 9 characters in length. It is not possible to increase the number of characters allowed in a calculation script name. However, it is possible to increase the size of each calculation script. The default for an HPCM application, set in the preferences, is 4 MB. If the size of each calculation script is larger, the number of scripts generated will reduce and, therefore, fewer than 100 scripts will be generated, which means that the name of the calculation script will remain 8 characters long. To increase the size of the generated calculation scripts for an application, in the HPM_APPLICATION_PREFERENCE table for the application, find the row where HPM_PREFERENCE_NAME_ID=20. The default value in this row is 4194304 (4 MB). This can be increased, e.g. 7340032 will increase it to 7 MB. Please restart the profitability service after making the change.

    Read the article

  • ArchBeat Link-o-Rama for October 14-20, 2012

    - by Bob Rhubart
    The Top 10 items shared on the OTN ArchBeat Facebook page for the week of October 14-21, 2012. Panel: On the Impact of Software | InfoQ Les Hatton (Oakwood Computing Associates), Clive King (Oracle), Paul Good (Shell), Mike Andrews (Microsoft) and Michiel van Genuchten (moderator) discuss the impact of software engineering on our lives in this panel discussion recorded at the Computer Society Software Experts Summit 2012. ResCare Solves Content Lifecycle Challenges with Oracle WebCenter Learn how ResCare solves content lifecycle challenges with Oracle WebCenter. Speakers: Joe Lichtefeld, VP of Application Services & PMO, ResCare; Wayne Boerger, Product Manager, TEAM Informatics; Doug Thompson, EVP Global Development, TEAM Informatics. Date: Tuesday, October 30, 2012. Time: 10:00 a.m. PT / 1:00 p.m. ET. WebLogic Server 11gR1 Interactive Quick Reference "The WebLogic Server 11gR1 Administration interactive quick reference," explains Juergen Kress, "is a multimedia tool for various terms and concepts used in WebLogic Server architecture. This tool is available for administrators for online or offline use. This is built as a multimedia web page which provides descriptions of WebLogic Server Architectural components, and references to relevant documentation. This tool offers valuable reference information for any complex concept or product in an intuitive and useful manner." Oracle ACE Directors Nordic Tour 2012 : Venues and BI Presentations | Mark Rittman Oracle ACE Director Mark Rittman shares information on the Oracle ACE Director Tour, as the community leaders make their way through the land of the midnight sun, with events in Copenhagen, Stockholm, Oslo and Helsinki. Mobile Apps for EBS | Capgemini Oracle Blog Capgemini solution architect Satish Iyer briefly describes how Oracle ADF and Oracle SOA Suite can be used to fill the gap in mobile applications for Oracle EBS. Introducing the New Face of Fusion Applications | Misha Vaughan Oracle ACE Directors Debra Lilly and Floyd Teter have already blogged about the new face of Oracle Fusion Applications. Now Applications User Experience Architect Misha Vaughan shares a brief overview of how the Oracle Applications User Experience (UX) team developed the new look. BPM 11g - Dynamic Task Assignment with Multi-level Organization Units | Mark Foster "I've seen several requirements to have a more granular level of task assignment in BPM 11g based on some value in the data passed to the process," says Fusion Middleware A-Team architect Mark Foster. "Parametric Roles is normally the first port of call to try to satisfy this requirement, but in this blog we will show how a lot of use-cases can be satisfied by the easier to implement and flexible Organization Unit." OTN Architect Day Los Angeles - Oct 25 Oracle Technology Network Architect Day in Los Angeles happens in one week. Register now to make sure you don't miss out on a rich schedule of expert technical sessions and peer interaction covering the use of Oracle technologies in cloud computing, SOA, and more. Even better: it's all free. When: October 25, 2012, 8:30am - 5:00pm. Where: Sofitel Los Angeles, 8555 Beverly Boulevard, Los Angeles, CA 90048. Oracle VM VirtualBox 4.2.2 released | Oracle's Virtualization Blog The Fat Bloke weighs in with a short post with information on where you can find information and the download for the latest VirtualBox release. 
Advanced Oracle SOA Suite #OOW 2012 SOA Presentations The Oracle SOA Product Management team has compiled a complete list of all twelve of their Oracle SOA Suite presentations from Oracle OpenWorld 2012, with links to the slide decks. Thought for the Day "Software: do you write it like a book, grow it like a plant, accrete it like a pearl, or construct it like a building?" — Jeff Atwood Source: softwarequotes.com

    Read the article

  • Friday Tips #3

    - by Chris Kawalek
    Even though yesterday was Thanksgiving here in the US, we still have a Friday tip for those of you around your computers today. In fact, we have two! The first one came in last week via our #AskOracleVirtualization Twitter hashtag. The tweet has disappeared into the ether now, but we remember the gist, so here it is:

    Question: Will there be an Oracle Virtual Desktop Client for Android?
    Answer by our desktop virtualization product development team: We are looking at Android as a supported platform for future releases.

    Question: How can I make a Sun Ray Client automatically connect to a virtual machine?
    Answer by Rick Butland, Principal Sales Consultant, Oracle Desktop Virtualization: Someone recently asked how they can assign VMs to specific Sun Ray Desktop Units ("DTUs") without any user interaction being required, without the "Desktop Selector" being displayed, or any User Directory. That is, they wanted each Sun Ray to power on and immediately connect to a pre-assigned Solaris VM. This can be achieved by using "tokens" for user assignment – that is, the tokens found on Smart Cards, DTUs, or OVDC clients can be used in place of user credentials. Note, however, that mixing "token-only" assignments and "User Directories" in the same VDI Center won't work. Much of this procedure is covered in the documentation, particularly here. But it can be useful to have everything in one place, "cookbook-style":

    1. Create the "token-only" directory type: from the VDI administration interface, select "Settings", "Company", "New", select the "None" radio button, and click "Next." Enter a name for the new "Company", and click "Next", then "Finish."

    2. Create Desktop Providers, Pools, and VMs as appropriate.

    3. Access the Sun Ray administration interface at http://servername:1660, log in using "root" credentials, and access the token IDs you wish to use for assignment. If you're using DTU tokens rather than Smart Card tokens, these can be found under the "Tokens" tab, by searching using the "Currently Used Tokens" tab. DTUs can be identified by the prefix "pseudo." For example:

    4. Copy/paste this token into the VDI administrative interface, by selecting "Users", "New", and pasting in the token ID, and click "OK" - for example:

    5. Assign the token (DTU) to a desktop, that is, in the VDI admin GUI, select "Pool", "Desktop", select the VM, and click "Assign" and select the token you want, for example:

    In addition to assigning tokens to desktops, you'll need to bypass the login screen. To do this, you need to do two things:

    1. Disable VDI client authentication with:

    /opt/SUNWvda/sbin/vda settings-setprops -p clientauthentication=Disabled

    2. Disable the VDI login screen – to do this, add a kiosk argument of "-n" to the Sun Ray kiosk arguments screen. You set this on the Sun Ray administration page - "Advanced", "Kiosk Mode", "Edit", and add the "-n" option to the arguments screen, for example:

    3. Restart both the Sun Ray and VDI services:

    # /opt/SUNWut/sbin/utstart -c
    # /opt/SUNWvda/sbin/vda-service restart

    Remember, if you have a question for us, please post on Twitter with our hashtag (again, it's #AskOracleVirtualization), and we'll try to answer it if we can. See you next time!

    Read the article
