Search Results

Search found 34513 results on 1381 pages for 'end task'.


  • FOUR questions to ask if you are implementing DATABASE-AS-A-SERVICE

    - by Sudip Datta
    During my ongoing tenure at Oracle, I have met all types of DBAs: happy DBAs, unhappy DBAs, proud DBAs, risk-loving DBAs, cautious DBAs. These days, as Database-as-a-Service (DBaaS) becomes more mainstream, I find some complacent DBAs who are basking in their achievement of having implemented DBaaS. Some others, however, are not that happy. They grudgingly complain that they did not have much of a say in the implementation; they simply had to follow what their cloud architects (mostly infrastructure admins) offered them. In most cases it would be a database wrapped inside a VM that would be labeled as “Database as a Service”. In other cases, it would be existing brute-force automation simply exposed in a portal. As much as I think that there is more to DBaaS than those approaches, and as often as I am tempted to propose Enterprise Manager 12c, I try to be objective. I neither want to dampen the spirit of the happy ones, nor stoke the pain of the unhappy ones. As I mentioned in my previous post, I don’t deny that vanilla automation can be useful. I like virtualization too for what it has helped us accomplish in terms of resource management, but we need to scrutinize its merit on a case-by-case basis and apply it meaningfully. For DBAs who either claim to have implemented DBaaS or are planning to do so, I simply want to provide four key questions to ponder:

    1. Does it make life easier for your end users? Database-as-a-Service can have several types of end users - junior DBAs, QA engineers, developers - each with their own skill set. The objective of DBaaS is to make their lives simple, so that they can focus on their core responsibilities without having to worry about additional stuff. For example, if you are a developer using Oracle Application Express (APEX), you want to deal with schemas, objects and PL/SQL code, not with datafiles or listener configuration. If you are a QA engineer needing database copies for functional testing, you do not want to deal with underlying operating system patching and compliance issues. The question to ask, therefore, is whether DBaaS makes life easier for those users. It is often convenient to give them VM shells to deal with, a la Amazon EC2 IaaS, but is that what they really want? Is it a productive use of a developer's time if he needs to apply RPM errata to his Linux operating system? Asking him to keep the underlying operating system current is like making a guest responsible for a restaurant's decor.

    2. Does it make life easier for your administrators? Cloud, in general, is supposed to free administrators from attending to mundane tasks like provisioning services for every single end-user request. It is supposed to enable a readily consumable platform and enforce standardization in the process. For example, if a Service Catalog exposes DBaaS of specific database versions and configurations, it, by its very nature, enforces a certain discipline and standardization within the IT environment. What if, instead of specific database configurations, the cloud allowed each end user to create databases of their liking, resulting in hundreds of versions and patch levels and thousands of individual databases? The right question to ask, therefore, is whether the unwanted consequence of DBaaS is OS and database sprawl. And if so, who is responsible for tracking them, backing them up, administering them? Studies have shown that these administrative overheads increase exponentially with new targets, and they can result in a management nightmare.
    That leads us to our next question.

    3. Does it satisfy your Security Officers and Compliance Auditors? Compliance auditors need to know who did what and when. They also want the cloud platform to be secure, so that end users have little freedom to tamper with it. Dealing with VM sprawl is not the easiest of challenges, let alone dealing with VMs as they keep getting reconfigured and moved around. This leads to the proverbial needle-in-the-haystack problem, and it takes only one needle to cause a serious compliance issue in the enterprise. The bottom line is that flexibility and agility should not come at the expense of compliance, and it is very important to get the balance right. Can we have security and isolation without creating compliance challenges? Instead of a ‘one size fits all’ approach, i.e. OS-level isolation, can we think smartly about database isolation or schema-based isolation? This is where appropriate resource modeling needs to be applied. The usual systems management vendors out there, with their heterogeneous common-denominator approach, have compromised on these semantics. If you follow Enterprise Manager’s DBaaS solution, you will see that we have considered different models, not precluding virtualization, for different customer use cases. The judgment to use virtual assemblies versus databases on physical RAC versus Schema-as-a-Service in a single database should be governed by the needs of the applications, not by putting compliance considerations on the back burner.

    4. Does it satisfy your CIO? Finally, does it satisfy your higher-ups? As the sponsor of the cloud initiative, the CIO is expected to lead an IT transformation project, not merely run-of-the-mill IT operations. Simply virtualizing server resources and delivering them through self-service is a good start, but hardly transformational. CIOs may appreciate the instant benefit from server consolidation, but studies have revealed that the ROI from consolidation flattens out at 20-25%. The question would be: what next? As we go higher up in the stack, the need to virtualize, segregate and optimize shifts to those layers that are more palpable to the business users. As Sushil Kumar noted in his blog post, "the most important thing to note here is the enterprise private cloud is not just an IT project, rather it is a business initiative to create an IT setup that is more aligned with the needs of today's dynamic and highly competitive business environment." Business users could not care less about infrastructure consolidation or virtualization - they care about business agility and service level assurance. Last but not least, a lot of CIOs get miffed if we ask them to throw away their existing hardware investments to implement DBaaS. At Oracle, we always emphasize freedom of platform choice; hence Enterprise Manager’s DBaaS solution is platform neutral. It can work on any operating system (that the agent is certified on), on Oracle’s hardware as well as third-party hardware.

    As a parting note, I urge you to remember these four questions. Remember that your satisfaction as an implementer lies in the satisfaction of others.

    Read the article

  • ???????????????

    - by Todd Bao
    ?????????,???????????????????,??????????,???????,??????,?????????????:

    SYS@fmw//Scripts> @showfkparent hr employees
    ---------------
    |             |
    |DEPARTMENT_ID| +>-->HR.DEPARTMENTS.DEPARTMENT_ID
    |             |
    |JOB_ID       | +>-->HR.JOBS.JOB_ID
    |             |
    |MANAGER_ID   | +>-->HR.EMPLOYEES.EMPLOYEE_ID
    |             |
    ---------------

    SYS@fmw//Scripts> @showfkparent sh sales
    ------------
    |          |
    |CHANNEL_ID| +>-->SH.CHANNELS.CHANNEL_ID
    |          |
    |CUST_ID   | +>-->SH.CUSTOMERS.CUST_ID
    |          |
    |PROD_ID   | +>-->SH.PRODUCTS.PROD_ID
    |          |
    |PROMO_ID  | +>-->SH.PROMOTIONS.PROMO_ID
    |          |
    |TIME_ID   | +>-->SH.TIMES.TIME_ID
    |          |
    ------------

    ????????? ??? 30-08-2012

    set echo off
    set verify off
    set serveroutput on
    define table_owner='&1'
    define table_name='&2'
    declare
        type info_typ is record (ct varchar2(30), cc varchar2(30), po varchar2(30), pt varchar2(30), pc varchar2(30));
        type info_tab_typ is table of info_typ index by pls_integer;
        info_tab info_tab_typ;
        max_col_length number := 0;
    begin
        with
            cons_child as (select owner, constraint_name, table_name,
                                  r_owner, r_constraint_name
                             from dba_constraints
                            where constraint_type = 'R' and
                                  owner = upper('&table_owner') and
                                  table_name = upper('&table_name')),
            cons_parent as (select owner, constraint_name, table_name
                              from dba_constraints
                             where (owner, constraint_name) in
                                   (select r_owner, r_constraint_name from cons_child))
        select
            child.table_name   child_table_name,
            child.column_name  child_column_name,
            parent.owner       parent_owner,
            parent.table_name  parent_table_name,
            parent.column_name parent_column_name
        bulk collect into info_tab
        from
            cons_child cc,
            cons_parent cp,
            dba_cons_columns parent,
            dba_cons_columns child
        where
            cc.owner = child.owner and
            cc.constraint_name = child.constraint_name and
            cp.owner = parent.owner and
            cp.constraint_name = parent.constraint_name and
            cc.r_owner = cp.owner and
            cc.r_constraint_name = cp.constraint_name and
            parent.position = child.position
        order by 2;

        if (info_tab is not null and info_tab.count > 0) then
            for i in 1..info_tab.count loop
                if length(info_tab(i).cc) > max_col_length then
                    max_col_length := length(info_tab(i).cc);
                end if;
            end loop;
            dbms_output.put_line(rpad('-', max_col_length + 2, '-'));
            dbms_output.put_line(' ' || '|' || rpad(' ', max_col_length, ' ') || '|');
            for i in 1..info_tab.count loop
                dbms_output.put('|' || rpad(info_tab(i).cc, max_col_length, ' ') || '|');
                dbms_output.put_line(' +>-->' || info_tab(i).po || '.' || info_tab(i).pt || '.' || info_tab(i).pc);
                dbms_output.put_line('|' || rpad(' ', max_col_length, ' ') || '|');
            end loop;
            dbms_output.put_line(rpad('-', max_col_length + 2, '-'));
        else
            dbms_output.put_line('### No foreign key defined on this table! ###');
        end if;
    end;
    /
    undefine table_owner
    undefine table_name
    set serveroutput off

    Todd

    Read the article

  • The Clocks on USACO

    - by philip
    I submitted my code for a question on USACO titled "The Clocks". This is the link to the question: http://ace.delos.com/usacoprob2?a=wj7UqN4l7zk&S=clocks

    This is the output:

    Compiling...
    Compile: OK
    Executing...
    Test 1: TEST OK [0.173 secs, 13928 KB]
    Test 2: TEST OK [0.130 secs, 13928 KB]
    Test 3: TEST OK [0.583 secs, 13928 KB]
    Test 4: TEST OK [0.965 secs, 13928 KB]
    Run 5: Execution error: Your program (`clocks') used more than the allotted runtime of 1 seconds (it ended or was stopped at 1.584 seconds) when presented with test case 5. It used 13928 KB of memory.
    ------ Data for Run 5 ------
    6 12 12
    12 12 12
    12 12 12
    ----------------------------
    Your program printed data to stdout. Here is the data:
    -------------------
    time:_0.40928452
    -------------------
    Test 5: RUNTIME 1.5841 (13928 KB)

    I wrote my program so that it will print out the time taken (in seconds) for the program to complete before it exits. As can be seen, it took 0.40928452 seconds before exiting. So how the heck did the runtime end up being 1.584 seconds? What should I do about it? This is the code if it helps:

    import java.io.*;
    import java.util.*;

    class clocks {
        public static void main(String[] args) throws IOException {
            long start = System.nanoTime();
            // Use BufferedReader rather than RandomAccessFile; it's much faster
            BufferedReader f = new BufferedReader(new FileReader("clocks.in")); // input file name goes above
            PrintWriter out = new PrintWriter(new BufferedWriter(new FileWriter("clocks.out")));
            // Use StringTokenizer vs. readLine/split -- lots faster
            int[] clock = new int[9];
            for (int i = 0; i < 3; i++) {
                StringTokenizer st = new StringTokenizer(f.readLine()); // Get line, break into tokens
                clock[i * 3] = Integer.parseInt(st.nextToken());
                clock[i * 3 + 1] = Integer.parseInt(st.nextToken());
                clock[i * 3 + 2] = Integer.parseInt(st.nextToken());
            }
            ArrayList validCombination = new ArrayList();
            for (int i = 1; true; i++) {
                ArrayList combination = getPossibleCombinations(i);
                for (int j = 0; j < combination.size(); j++) {
                    if (tryCombination(clock, (int[]) combination.get(j))) {
                        validCombination.add(combination.get(j));
                    }
                }
                if (validCombination.size() > 0) {
                    break;
                }
            }
            int[] min = (int[]) validCombination.get(0);
            if (validCombination.size() > 1) {
                String minS = "";
                for (int i = 0; i < min.length; i++) minS += min[i];
                for (int i = 1; i < validCombination.size(); i++) {
                    String tempS = "";
                    int[] temp = (int[]) validCombination.get(i);
                    for (int j = 0; j < temp.length; j++) tempS += temp[j];
                    if (tempS.compareTo(minS) < 0) {
                        minS = tempS;
                        min = temp;
                    }
                }
            }
            for (int i = 0; i < min.length - 1; i++) out.print(min[i] + " ");
            out.println(min[min.length - 1]);
            out.close(); // close the output file
            long end = System.nanoTime();
            System.out.println("time: " + (end - start) / 1000000000.0);
            System.exit(0); // don't omit this!
        }

        static boolean tryCombination(int[] clock, int[] steps) {
            int[] temp = Arrays.copyOf(clock, clock.length);
            for (int i = 0; i < steps.length; i++) transform(temp, steps[i]);
            for (int i = 0; i < temp.length; i++) if (temp[i] != 12) return false;
            return true;
        }

        static void transform(int[] clock, int n) {
            if (n == 1) {
                int[] clocksToChange = {0, 1, 3, 4};
                add3(clock, clocksToChange);
            } else if (n == 2) {
                int[] clocksToChange = {0, 1, 2};
                add3(clock, clocksToChange);
            } else if (n == 3) {
                int[] clocksToChange = {1, 2, 4, 5};
                add3(clock, clocksToChange);
            } else if (n == 4) {
                int[] clocksToChange = {0, 3, 6};
                add3(clock, clocksToChange);
            } else if (n == 5) {
                int[] clocksToChange = {1, 3, 4, 5, 7};
                add3(clock, clocksToChange);
            } else if (n == 6) {
                int[] clocksToChange = {2, 5, 8};
                add3(clock, clocksToChange);
            } else if (n == 7) {
                int[] clocksToChange = {3, 4, 6, 7};
                add3(clock, clocksToChange);
            } else if (n == 8) {
                int[] clocksToChange = {6, 7, 8};
                add3(clock, clocksToChange);
            } else if (n == 9) {
                int[] clocksToChange = {4, 5, 7, 8};
                add3(clock, clocksToChange);
            }
        }

        static void add3(int[] clock, int[] position) {
            for (int i = 0; i < position.length; i++) {
                if (clock[position[i]] != 12) {
                    clock[position[i]] += 3;
                } else {
                    clock[position[i]] = 3;
                }
            }
        }

        static ArrayList getPossibleCombinations(int size) {
            ArrayList l = new ArrayList();
            int[] current = new int[size];
            for (int i = 0; i < current.length; i++) {
                current[i] = 1;
            }
            int[] end = new int[size];
            for (int i = 0; i < end.length; i++) {
                end[i] = 9;
            }
            l.add(Arrays.copyOf(current, size));
            while (!Arrays.equals(current, end)) {
                incrementWithoutRepetition(current, current.length - 1);
                l.add(Arrays.copyOf(current, size));
            }
            int[][] combination = new int[l.size()][size];
            for (int i = 0; i < l.size(); i++) combination[i] = (int[]) l.get(i);
            return l;
        }

        static int incrementWithoutRepetition(int[] n, int index) {
            if (n[index] != 9) {
                n[index]++;
                return n[index];
            } else {
                n[index] = incrementWithoutRepetition(n, index - 1);
                return n[index];
            }
        }

        static void p(int[] n) {
            for (int i = 0; i < n.length; i++) {
                System.out.print(n[i] + " ");
            }
            System.out.println("");
        }
    }

    Read the article

  • How Should I Generate Trade Statistics For CouchDB/Rails3 Application?

    - by James
    My Problem: I am trying to develop a web application for currency traders. The application allows traders to enter or upload information about their trades, and I want to calculate a wide variety of statistics based on what the user entered. Now, normally I would use a relational database for this, but I have two requirements that don't fit well with a relational database, so I am attempting to use CouchDB. Those two requirements are: 1) Primarily, I have a companion desktop application that users will be able to work with and replicate to the site using CouchDB's awesome replication feature, and 2) I would like to allow users to define their own custom things to track about trades and generate results based off of what they enter. The schemaless nature of Couch seems perfect here, but it may end up being harder than it sounds. (I already know Couch requires you to define views in advance and such, so I was just planning on sticking all the custom attributes in an array, emitting the array in the view, and doing further processing from there.)

    What I Am Doing: Right now I am just emitting each trade in Couch keyed by each user's system and querying with the key of the system to get an array of trades per system. Simple. I am not currently using a reduce function to calculate any stats because I couldn't figure out how to get everything I need without getting a reduce overflow error. Here is an example of rows that are getting emitted from Couch:

    {"total_rows":134,"offset":0,"rows":[
      {"id":"5b1dcd47221e160d8721feee4ccc64be",
       "key":["80e40ba2fa43589d57ec3f1d19db41e6","2010/05/14 04:32:37 +0000"],
       "value":null,
       "doc":{
         "_id":"5b1dcd47221e160d8721feee4ccc64be",
         "_rev":"1-bc9fe763e2637694df47d6f5efb58e5b",
         "couchrest-type":"Trade",
         "system":"80e40ba2fa43589d57ec3f1d19db41e6",
         "pair":"EUR/USD",
         "direction":"Buy",
         "entry":12600,
         "exit":12700,
         "stop_loss":12500,
         "profit_target":12700,
         "status":"Closed",
         "slug":"101332132375",
         "custom_tracking":[{"name":"signal","value":"Pin Bar"}],
         "updated_at":"2010/05/14 04:32:37 +0000",
         "created_at":"2010/05/14 04:32:37 +0000",
         "result":100}}
    ]}

    In my Rails 3 controller I am basically just populating an array of trades such as the one above and then extracting the relevant data into smaller arrays that I can compute my statistics on. Here is my show action for the page where I want to display the stats and all the trades:

    def show
      @trades = Trade.by_system(:startkey => [@system.id], :endkey => [@system.id, Time.now])

      @trades.each do |trade|
        if trade.result > 0
          @winning_trades << trade.result
        elsif trade.result < 0
          @losing_trades << trade.result
        else
          @breakeven_trades << trade.result
        end

        if trade.direction == "Buy"
          @long_trades << trade.result
        else
          @short_trades << trade.result
        end

        if trade["custom_tracking"] != nil
          @custom_tracking << {"result" => trade.result, "variables" => trade["custom_tracking"]}
        end
      end
    end

    I am omitting some other stuff that is going on, but that is the gist of what I am doing.
    Then I am calculating stuff in the view layer to produce some results:

    <% winning_long_trades = @long_trades.reject {|trade| trade <= 0 } %>
    <% winning_short_trades = @short_trades.reject {|trade| trade <= 0 } %>
    <ul>
      <li>Total Trades: <%= @trades.count %></li>
      <li>Winners: <%= @winning_trades.size %></li>
      <li>Biggest Winner (Pips): <%= @winning_trades.max %></li>
      <li>Average Win(Pips): <%= @winning_trades.sum/@winning_trades.size %></li>
      <li>Losers: <%= @losing_trades.size %></li>
      <li>Biggest Loser (Pips): <%= @losing_trades.min %></li>
      <li>Average Loss(Pips): <%= @losing_trades.sum/@losing_trades.size %></li>
      <li>Breakeven Trades: <%= @breakeven_trades.size %></li>
      <li>Long Trades: <%= @long_trades.size %></li>
      <li>Winning Long Trades: <%= winning_long_trades.size %></li>
      <li>Short Trades: <%= @short_trades.size %></li>
      <li>Winning Short Trades: <%= winning_short_trades.size %></li>
      <li>Total Pips: <%= @winning_trades.sum + @losing_trades.sum %></li>
      <li>Win Rate (%): <%= @winning_trades.size/@trades.count.to_f * 100 %></li>
    </ul>

    This produces the following results, which aside from a few things is exactly what I want:

    Total Trades: 134
    Winners: 70
    Biggest Winner (Pips): 1488
    Average Win(Pips): 440
    Losers: 58
    Biggest Loser (Pips): -516
    Average Loss(Pips): -225
    Breakeven Trades: 6
    Long Trades: 125
    Winning Long Trades: 67
    Short Trades: 9
    Winning Short Trades: 3
    Total Pips: 17819
    Win Rate (%): 52.23880597014925

    What I Am Wondering - Finally The Actual Questions: I am starting to get really skeptical of how well this method will work when a user has 5,000 trades instead of just 134 like in this example. I anticipate most users will have somewhere under 200 per year, but some users may have a couple thousand trades per year - probably no more than 5,000 per year. It seems to work OK now, but the page load times are already getting a tad high for my tastes. (About 800ms to generate the page according to the Rails logs, with about 250ms of that spent in the view layer.) I will end up caching this page, I am sure, but I still need to regenerate the page each time a trade is updated, and I can't afford to have this be too slow. Sooo.....

    Is doing something similar here possible with a straight CouchDB reduce function? I am assuming that handing this off to Couch would help with larger data sets. I couldn't figure out how, but I suppose that doesn't mean it isn't possible. If possible, any hints will be helpful.

    Could I use a list function if a reduce was not available due to reduce constraints? Are CouchDB list functions suitable for this type of calculation? Does anyone have any idea whether list functions perform well? Any hints what one would look like for the type of calculations I am trying to achieve?

    I thought about other options, such as running the calculations at the time each trade is saved (or nightly if I had to) and saving the results to a statistics doc that I could then query, so that all the processing is done ahead of time. I would like this to be the last resort, because then I can't really filter out trades by time periods dynamically like I would really like to. (I want to have a slider that a user can slide to only show trades from that time period, using the startkey and endkey in CouchDB if I can.)

    If I should continue running the calculations inside the Rails app at the time of the page view, what can I do to improve my current implementation? I am new to Rails, Couch and programming in general. I am sure that I could be doing something better here.
    Do I need to create an array for each stat, or is there a better way to do that? I guess I just would really like some advice on how to tackle this problem. I want to keep the page generation time minimal, since I anticipate these being some of the highest-trafficked pages. My gut is that I will need to either offload the statistics calculation to Couch or run the stats in advance of when they are called, but I am not sure.

    Lastly: Like I mentioned above, one of the primary reasons for using Couch is to allow users to define their own things to track per trade. Getting the data into Couch is no problem, but how would I be able to take the custom_tracking array and find how many winning trades there are for each named tracking attribute? If anyone can give me any hints on the possibility of doing this, that would be great.

    Thanks a bunch. Would really appreciate any help. Willing to fork out some $$$ if someone wants to take on the problem for me. (Don't know if that is allowed on Stack Overflow or not.)
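    For reference, the kind of map/reduce pair being asked about could look roughly like the following sketch. It assumes only the document fields shown in the example row above (system, created_at, result, direction); the accumulator field names are illustrative and not taken from the original post. Keeping the reduce output a small, fixed-size object is what avoids the reduce_overflow error mentioned in the question, and keying by [system, created_at] keeps the startkey/endkey time-slider idea workable.

    // Map: one row per trade, keyed by [system, created_at].
    function (doc) {
      if (doc['couchrest-type'] === 'Trade') {
        emit([doc.system, doc.created_at], {
          result: doc.result,
          direction: doc.direction
        });
      }
    }

    // Reduce: fold trades (or partial summaries on rereduce) into one small summary object.
    function (keys, values, rereduce) {
      var acc = { count: 0, wins: 0, losses: 0, breakeven: 0, pips: 0, longs: 0, shorts: 0 };
      values.forEach(function (v) {
        if (rereduce) {
          // v is already a partial summary; merge it field by field.
          for (var k in acc) { acc[k] += v[k]; }
        } else {
          acc.count += 1;
          acc.pips += v.result;
          if (v.result > 0) { acc.wins += 1; }
          else if (v.result < 0) { acc.losses += 1; }
          else { acc.breakeven += 1; }
          if (v.direction === 'Buy') { acc.longs += 1; } else { acc.shorts += 1; }
        }
      });
      return acc;
    }

    Derived figures such as the win rate or the biggest winner would still be computed in the Rails layer (or tracked with extra fields), so this is a starting point rather than a full replacement for the view code above.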

    Read the article

  • Simple MSBuild Configuration: Updating Assemblies With A Version Number

    - by srkirkland
    When distributing a library you often run up against versioning problems, one facet of which is simply determining which version of that library your client is running.  Of course, each project in your solution has an AssemblyInfo.cs file which provides, among other things, the ability to set the assembly name and version number.  Unfortunately, setting the assembly version here would require not only changing the version manually for each build (depending on your schedule), but keeping it in sync across all projects.  There are many ways to solve this versioning problem, and in this blog post I’m going to try to explain what I think is the easiest and most flexible solution.  I will walk you through using MSBuild to create a simple build script, and I’ll even show how to (optionally) integrate with a TeamCity build server.  All of the code from this post can be found at https://github.com/srkirkland/BuildVersion.

    Create CommonAssemblyInfo.cs

    The first step is to create a common location for the repeated assembly info that is spread across all of your projects.  Create a new solution-level file (I usually create a Build/ folder in the solution root, but anywhere reachable by all your projects will do) called CommonAssemblyInfo.cs.  In here you can put any information common to all your assemblies, including the version number.  An example CommonAssemblyInfo.cs is as follows:

    using System.Reflection;
    using System.Resources;
    using System.Runtime.InteropServices;

    [assembly: AssemblyCompany("University of California, Davis")]
    [assembly: AssemblyProduct("BuildVersionTest")]
    [assembly: AssemblyCopyright("Scott Kirkland & UC Regents")]
    [assembly: AssemblyConfiguration("")]
    [assembly: AssemblyTrademark("")]

    [assembly: ComVisible(false)]

    [assembly: AssemblyVersion("1.2.3.4")] //Will be replaced

    [assembly: NeutralResourcesLanguage("en-US")]

    Cleanup AssemblyInfo.cs & Link CommonAssemblyInfo.cs

    For each of your projects, you’ll want to clean up your assembly info to contain only information that is unique to that assembly – everything else will go in the CommonAssemblyInfo.cs file.  For most of my projects, that just means setting the AssemblyTitle, though you may feel AssemblyDescription is warranted.
    An example AssemblyInfo.cs file is as follows:

    using System.Reflection;

    [assembly: AssemblyTitle("BuildVersionTest")]

    Next, you need to “link” the CommonAssemblyInfo.cs file into your projects right beside your newly lean AssemblyInfo.cs file.  To do this, right click on your project and choose Add | Existing Item from the context menu.  Navigate to your CommonAssemblyInfo.cs file, but instead of clicking Add, click the little down-arrow next to Add and choose “Add as Link.”  You should see a little link graphic similar to this:

    We’ve actually reduced complexity a lot already, because if you build, all of your assemblies will have the same common info, including the product name and our static (fake) assembly version.  Let’s take this one step further and introduce a build script.

    Create an MSBuild file

    What we want from the build script (for now) is basically just to have the common assembly version number changed via a parameter (eventually to be passed in by the build server) and then for the project to build.  Also we’d like the flexibility to define which build configuration to use (debug, release, etc.).

    In order to find/replace the version number, we are going to use a regular expression to find and replace the text within your CommonAssemblyInfo.cs file.  There are many other ways to do this using community build task add-ins, but since we want to keep it simple let’s just define the regular expression task manually in a new file, Build.tasks (this example is taken from the NuGet build.tasks file).
    <?xml version="1.0" encoding="utf-8"?>
    <Project ToolsVersion="4.0" DefaultTargets="Go" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
      <UsingTask TaskName="RegexTransform" TaskFactory="CodeTaskFactory" AssemblyFile="$(MSBuildToolsPath)\Microsoft.Build.Tasks.v4.0.dll">
        <ParameterGroup>
          <Items ParameterType="Microsoft.Build.Framework.ITaskItem[]" />
        </ParameterGroup>
        <Task>
          <Using Namespace="System.IO" />
          <Using Namespace="System.Text.RegularExpressions" />
          <Using Namespace="Microsoft.Build.Framework" />
          <Code Type="Fragment" Language="cs">
            <![CDATA[
              foreach(ITaskItem item in Items) {
                string fileName = item.GetMetadata("FullPath");
                string find = item.GetMetadata("Find");
                string replaceWith = item.GetMetadata("ReplaceWith");
                if(!File.Exists(fileName)) {
                  Log.LogError(null, null, null, null, 0, 0, 0, 0, String.Format("Could not find version file: {0}", fileName), new object[0]);
                }
                string content = File.ReadAllText(fileName);
                File.WriteAllText(
                  fileName,
                  Regex.Replace(
                    content,
                    find,
                    replaceWith
                  )
                );
              }
            ]]>
          </Code>
        </Task>
      </UsingTask>
    </Project>

    If you glance at the code, you’ll see it’s really just doing a Regex.Replace() on a given file, which is exactly what we need.  Now we are ready to write our build file, called (by convention) Build.proj.

    <?xml version="1.0" encoding="utf-8"?>
    <Project ToolsVersion="4.0" DefaultTargets="Go" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
      <Import Project="$(MSBuildProjectDirectory)\Build.tasks" />

      <PropertyGroup>
        <Configuration Condition="'$(Configuration)' == ''">Debug</Configuration>
        <SolutionRoot>$(MSBuildProjectDirectory)</SolutionRoot>
      </PropertyGroup>

      <ItemGroup>
        <RegexTransform Include="$(SolutionRoot)\CommonAssemblyInfo.cs">
          <Find>(?&lt;major&gt;\d+)\.(?&lt;minor&gt;\d+)\.\d+\.(?&lt;revision&gt;\d+)</Find>
          <ReplaceWith>$(BUILD_NUMBER)</ReplaceWith>
        </RegexTransform>
      </ItemGroup>

      <Target Name="Go" DependsOnTargets="UpdateAssemblyVersion; Build">
      </Target>

      <Target Name="UpdateAssemblyVersion" Condition="'$(BUILD_NUMBER)' != ''">
        <RegexTransform Items="@(RegexTransform)" />
      </Target>

      <Target Name="Build">
        <MSBuild Projects="$(SolutionRoot)\BuildVersionTest.sln" Targets="Build" />
      </Target>

    </Project>

    Reviewing this MSBuild file, we see that by default the “Go” target will be called, which in turn depends on “UpdateAssemblyVersion” and then “Build.”  We go ahead and import the Build.tasks file and then set up some handy properties for setting the build configuration and solution root (in this case, my build files are in the solution root, but we might want to create a Build/ directory later).  The rest of the file flows logically: we set up the RegexTransform to match version numbers such as <major>.<minor>.1.<revision> (1.2.3.4 in our example) and replace them with a $(BUILD_NUMBER) parameter which will be supplied externally.
    The first target, “UpdateAssemblyVersion,” just runs the RegexTransform, and the second target, “Build,” just runs the default MSBuild on our solution.

    Testing the MSBuild file locally

    Now we have a build file which can replace assembly version numbers and build, so let’s set up a quick batch file to be able to build locally.  To do this you simply create a file called Build.cmd and have it call MSBuild on your Build.proj file.  I’ve added a bit more flexibility so you can specify build configuration and version number, which makes your Build.cmd look as follows:

    set config=%1
    if "%config%" == "" (
       set config=debug
    )

    set version=%2
    if "%version%" == "" (
       set version=2.3.4.5
    )

    %WINDIR%\Microsoft.NET\Framework\v4.0.30319\msbuild Build.proj /p:Configuration="%config%" /p:build_number="%version%"

    Now if you click on the Build.cmd file, you will get a default debug build using the version 2.3.4.5.  Let’s run it in a command window with the parameters set for a release build, version 2.0.1.453.

    Excellent!  We can now run one simple command and govern the build configuration and version number of our entire solution.  Each DLL produced will have the same version number, making determining which version of a library you are running very simple and accurate.

    Configure the build server (TeamCity)

    Of course you are not really going to want to run a build command manually every time, and typing in incrementing version numbers will also not be ideal.  A good solution is to have a computer (or set of computers) act as a build server and build your code for you, providing you a consistent environment, excellent reporting, and much more.  One of the most popular build servers is JetBrains’ TeamCity, and this last section will show you the few configuration parameters to use when setting up a build using the MSBuild file created earlier.  If you are using a different build server, the same principles should apply.

    First, when setting up the project you want to specify the “Build Number Format,” often given in the form <major>.<minor>.<revision>.<build>.  In this case you will set major/minor manually, and optionally revision (or you can use your VCS revision number with %build.vcs.number%), and then build using the {0} wildcard.  Thus your build number format might look like this: 2.0.1.{0}.  During each build, this value will be created and passed into the $(BUILD_NUMBER) property of our Build.proj file, which then uses it to decorate your assemblies with the proper version.

    After setting up the build number, you must choose MSBuild as the Build Runner, then provide a path to your build file (Build.proj).  After specifying your MSBuild Version (equivalent to your .NET Framework Version), you have the option to specify targets (the default being “Go”) and additional MSBuild parameters.
    The one parameter that is often useful is manually setting the configuration property (/p:Configuration="Release") if you want something other than the default (which is Debug in our example).  Your resulting configuration will look something like this:

    [Under General Settings]

    [Build Runner Settings]

    Now every time your build is run, a newly incremented build version number will be generated and passed to MSBuild, which will then version your assemblies and build your solution.

    A Quick Review

    Our goal was to version our output assemblies in an automated way, and we accomplished it by performing a few quick steps:

    - Move the common assembly information, including version, into a linked CommonAssemblyInfo.cs file
    - Create a simple MSBuild script to replace the common assembly version number and build your solution
    - Direct your build server to use the created MSBuild script

    That’s really all there is to it.  You can find all of the code from this post at https://github.com/srkirkland/BuildVersion. Enjoy!
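    If you want to double-check that the stamped version actually made it into your binaries, the standard reflection APIs will read it back at runtime. This is a minimal sketch and not part of the original walkthrough; the VersionCheck class name is purely illustrative (any small console app referencing your versioned assemblies would do):

    using System;
    using System.Diagnostics;
    using System.Reflection;

    class VersionCheck
    {
        static void Main()
        {
            // AssemblyVersion, i.e. the value the RegexTransform wrote into CommonAssemblyInfo.cs.
            Assembly assembly = typeof(VersionCheck).Assembly;
            Console.WriteLine("AssemblyVersion: " + assembly.GetName().Version);

            // File version as read from the version resource of the compiled file on disk.
            FileVersionInfo info = FileVersionInfo.GetVersionInfo(assembly.Location);
            Console.WriteLine("FileVersion:     " + info.FileVersion);
        }
    }

    Running this after a build such as the 2.0.1.453 example above should print that same number for every assembly that links the common file.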

    Read the article

  • A way of doing real-world test-driven development (and some thoughts about it)

    - by Thomas Weller
    Lately, I exchanged some arguments with Derick Bailey about some details of the red-green-refactor cycle of the Test-driven development process. In short, the issue revolved around the fact that it’s not enough to have a test red or green, but it’s also important to have it red or green for the right reasons. While for me, it’s sufficient to initially have a NotImplementedException in place, Derick argues that this is not totally correct (see these two posts: Red/Green/Refactor, For The Right Reasons and Red For The Right Reason: Fail By Assertion, Not By Anything Else). And he’s right. But on the other hand, I had no idea how his insights could have any practical consequence for my own individual interpretation of the red-green-refactor cycle (which is not really red-green-refactor, at least not in its pure sense, see the rest of this article). This made me think deeply for some days now. In the end I found out that the ‘right reason’ changes in my understanding depending on what development phase I’m in. To make this clear (at least I hope it becomes clear…) I started to describe my way of working in some detail, and then something strange happened: The scope of the article slightly shifted from focusing ‘only’ on the ‘right reason’ issue to something more general, which you might describe as something like  'Doing real-world TDD in .NET , with massive use of third-party add-ins’. This is because I feel that there is a more general statement about Test-driven development to make:  It’s high time to speak about the ‘How’ of TDD, not always only the ‘Why’. Much has been said about this, and me myself also contributed to that (see here: TDD is not about testing, it's about how we develop software). But always justifying what you do is very unsatisfying in the long run, it is inherently defensive, and it costs time and effort that could be used for better and more important things. And frankly: I’m somewhat sick and tired of repeating time and again that the test-driven way of software development is highly preferable for many reasons - I don’t want to spent my time exclusively on stating the obvious… So, again, let’s say it clearly: TDD is programming, and programming is TDD. Other ways of programming (code-first, sometimes called cowboy-coding) are exceptional and need justification. – I know that there are many people out there who will disagree with this radical statement, and I also know that it’s not a description of the real world but more of a mission statement or something. But nevertheless I’m absolutely sure that in some years this statement will be nothing but a platitude. Side note: Some parts of this post read as if I were paid by Jetbrains (the manufacturer of the ReSharper add-in – R#), but I swear I’m not. Rather I think that Visual Studio is just not production-complete without it, and I wouldn’t even consider to do professional work without having this add-in installed... The three parts of a software component Before I go into some details, I first should describe my understanding of what belongs to a software component (assembly, type, or method) during the production process (i.e. the coding phase). Roughly, I come up with the three parts shown below:   First, we need to have some initial sort of requirement. This can be a multi-page formal document, a vague idea in some programmer’s brain of what might be needed, or anything in between. In either way, there has to be some sort of requirement, be it explicit or not. 
    – At the C# micro-level, the best way that I found to formulate that is to define interfaces for just about everything, even for internal classes, and to provide them with exhaustive xml comments. The next step then is to re-formulate these requirements in an executable form. This is specific to the respective programming language. - For C#/.NET, the Gallio framework (which includes MbUnit) in conjunction with the ReSharper add-in for Visual Studio is my toolset of choice. The third part then finally is the production code itself. Its development is entirely driven by the requirements and their executable formulation. This is the delivery; the two other parts are ‘only’ there to make its production possible, to give it a decent quality and reliability, and to significantly reduce related costs down the maintenance timeline. So while the first two parts are not really relevant for the customer, they are very important for the developer. The customer (or in Scrum terms: the Product Owner) is not interested at all in how the product is developed; he is only interested in the fact that it is developed as cost-effectively as possible, and that it meets his functional and non-functional requirements. The rest is solely a matter of the developer’s craftsmanship, and this is what I want to talk about during the remainder of this article…

    An example

    To demonstrate my way of doing real-world TDD, I decided to show the development of a (very) simple Calculator component. The example is deliberately trivial and silly, as examples always are. I am totally aware of the fact that real life is never that simple, but I only want to show some development principles here…

    The requirement

    As already said above, I start with writing down some words on the initial requirement, and I normally use interfaces for that, even for internal classes - the typical question “intf or not” doesn’t even come to mind. I need them for my usual workflow, and using them automatically produces highly componentized and testable code anyway. To think about their usage in every single situation would slow down the production process unnecessarily. So this is what I begin with:

    namespace Calculator
    {
        /// <summary>
        /// Defines a very simple calculator component for demo purposes.
        /// </summary>
        public interface ICalculator
        {
            /// <summary>
            /// Gets the result of the last successful operation.
            /// </summary>
            /// <value>The last result.</value>
            /// <remarks>
            /// Will be <see langword="null" /> before the first successful operation.
            /// </remarks>
            double? LastResult { get; }

        } // interface ICalculator

    } // namespace Calculator

    So, I’m not beginning with a test, but with a sort of code declaration - and still I insist on being 100% test-driven. There are three important things here: Starting this way gives me a method signature, which allows me to use IntelliSense and AutoCompletion and thus eliminates the danger of typos - one of the most regular, annoying, time-consuming, and therefore expensive sources of error in the development process. In my understanding, the interface definition as a whole is more of a readable requirement document and technical documentation than anything else, so this is at least as much about documentation as about coding; the documentation must completely describe the behavior of the documented element. And finally, I normally use an IoC container or some sort of self-written provider-like model in my architecture.
In either case, I need my components defined via service interfaces anyway. - I will use the LinFu IoC framework here, for no other reason as that is is very simple to use. The ‘Red’ (pt. 1)   First I create a folder for the project’s third-party libraries and put the LinFu.Core dll there. Then I set up a test project (via a Gallio project template), and add references to the Calculator project and the LinFu dll. Finally I’m ready to write the first test, which will look like the following: namespace Calculator.Test {     [TestFixture]     public class CalculatorTest     {         private readonly ServiceContainer container = new ServiceContainer();           [Test]         public void CalculatorLastResultIsInitiallyNull()         {             ICalculator calculator = container.GetService<ICalculator>();               Assert.IsNull(calculator.LastResult);         }       } // class CalculatorTest   } // namespace Calculator.Test       This is basically the executable formulation of what the interface definition states (part of). Side note: There’s one principle of TDD that is just plain wrong in my eyes: I’m talking about the Red is 'does not compile' thing. How could a compiler error ever be interpreted as a valid test outcome? I never understood that, it just makes no sense to me. (Or, in Derick’s terms: this reason is as wrong as a reason ever could be…) A compiler error tells me: Your code is incorrect, but nothing more.  Instead, the ‘Red’ part of the red-green-refactor cycle has a clearly defined meaning to me: It means that the test works as intended and fails only if its assumptions are not met for some reason. Back to our Calculator. When I execute the above test with R#, the Gallio plugin will give me this output: So this tells me that the test is red for the wrong reason: There’s no implementation that the IoC-container could load, of course. So let’s fix that. With R#, this is very easy: First, create an ICalculator - derived type:        Next, implement the interface members: And finally, move the new class to its own file: So far my ‘work’ was six mouse clicks long, the only thing that’s left to do manually here, is to add the Ioc-specific wiring-declaration and also to make the respective class non-public, which I regularly do to force my components to communicate exclusively via interfaces: This is what my Calculator class looks like as of now: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult         {             get             {                 throw new NotImplementedException();             }         }     } } Back to the test fixture, we have to put our IoC container to work: [TestFixture] public class CalculatorTest {     #region Fields       private readonly ServiceContainer container = new ServiceContainer();       #endregion // Fields       #region Setup/TearDown       [FixtureSetUp]     public void FixtureSetUp()     {        container.LoadFrom(AppDomain.CurrentDomain.BaseDirectory, "Calculator.dll");     }       ... Because I have a R# live template defined for the setup/teardown method skeleton as well, the only manual coding here again is the IoC-specific stuff: two lines, not more… The ‘Red’ (pt. 2) Now, the execution of the above test gives the following result: This time, the test outcome tells me that the method under test is called. 
And this is the point, where Derick and I seem to have somewhat different views on the subject: Of course, the test still is worthless regarding the red/green outcome (or: it’s still red for the wrong reasons, in that it gives a false negative). But as far as I am concerned, I’m not really interested in the test outcome at this point of the red-green-refactor cycle. Rather, I only want to assert that my test actually calls the right method. If that’s the case, I will happily go on to the ‘Green’ part… The ‘Green’ Making the test green is quite trivial. Just make LastResult an automatic property:     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult { get; private set; }     }         One more round… Now on to something slightly more demanding (cough…). Let’s state that our Calculator exposes an Add() method:         ...   /// <summary>         /// Adds the specified operands.         /// </summary>         /// <param name="operand1">The operand1.</param>         /// <param name="operand2">The operand2.</param>         /// <returns>The result of the additon.</returns>         /// <exception cref="ArgumentException">         /// Argument <paramref name="operand1"/> is &lt; 0.<br/>         /// -- or --<br/>         /// Argument <paramref name="operand2"/> is &lt; 0.         /// </exception>         double Add(double operand1, double operand2);       } // interface ICalculator A remark: I sometimes hear the complaint that xml comment stuff like the above is hard to read. That’s certainly true, but irrelevant to me, because I read xml code comments with the CR_Documentor tool window. And using that, it looks like this:   Apart from that, I’m heavily using xml code comments (see e.g. here for a detailed guide) because there is the possibility of automating help generation with nightly CI builds (using MS Sandcastle and the Sandcastle Help File Builder), and then publishing the results to some intranet location.  This way, a team always has first class, up-to-date technical documentation at hand about the current codebase. (And, also very important for speeding up things and avoiding typos: You have IntelliSense/AutoCompletion and R# support, and the comments are subject to compiler checking…).     Back to our Calculator again: Two more R# – clicks implement the Add() skeleton:         ...           public double Add(double operand1, double operand2)         {             throw new NotImplementedException();         }       } // class Calculator As we have stated in the interface definition (which actually serves as our requirement document!), the operands are not allowed to be negative. So let’s start implementing that. Here’s the test: [Test] [Row(-0.5, 2)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); } As you can see, I’m using a data-driven unit test method here, mainly for these two reasons: Because I know that I will have to do the same test for the second operand in a few seconds, I save myself from implementing another test method for this purpose. Rather, I only will have to add another Row attribute to the existing one. From the test report below, you can see that the argument values are explicitly printed out. 
This can be a valuable documentation feature even when everything is green: One can quickly review what values were tested exactly - the complete Gallio HTML-report (as it will be produced by the Continuous Integration runs) shows these values in a quite clear format (see below for an example). Back to our Calculator development again, this is what the test result tells us at the moment: So we’re red again, because there is not yet an implementation… Next we go on and implement the necessary parameter verification to become green again, and then we do the same thing for the second operand. To make a long story short, here’s the test and the method implementation at the end of the second cycle: // in CalculatorTest:   [Test] [Row(-0.5, 2)] [Row(295, -123)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); }   // in Calculator: public double Add(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }     if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }     throw new NotImplementedException(); } So far, we have sheltered our method from unwanted input, and now we can safely operate on the parameters without further caring about their validity (this is my interpretation of the Fail Fast principle, which is regarded here in more detail). Now we can think about the method’s successful outcomes. First let’s write another test for that: [Test] [Row(1, 1, 2)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } Again, I’m regularly using row based test methods for these kinds of unit tests. The above shown pattern proved to be extremely helpful for my development work, I call it the Defined-Input/Expected-Output test idiom: You define your input arguments together with the expected method result. There are two major benefits from that way of testing: In the course of refining a method, it’s very likely to come up with additional test cases. In our case, we might add tests for some edge cases like ‘one of the operands is zero’ or ‘the sum of the two operands causes an overflow’, or maybe there’s an external test protocol that has to be fulfilled (e.g. an ISO norm for medical software), and this results in the need of testing against additional values. In all these scenarios we only have to add another Row attribute to the test. Remember that the argument values are written to the test report, so as a side-effect this produces valuable documentation. (This can become especially important if the fulfillment of some sort of external requirements has to be proven). 
So your test method might look something like that in the end: [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 2)] [Row(0, 999999999, 999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, double.MaxValue)] [Row(4, double.MaxValue - 2.5, double.MaxValue)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } And this will produce the following HTML report (with Gallio):   Not bad for the amount of work we invested in it, huh? - There might be scenarios where reports like that can be useful for demonstration purposes during a Scrum sprint review… The last requirement to fulfill is that the LastResult property is expected to store the result of the last operation. I don’t show this here, it’s trivial enough and brings nothing new… And finally: Refactor (for the right reasons) To demonstrate my way of going through the refactoring portion of the red-green-refactor cycle, I added another method to our Calculator component, namely Subtract(). Here’s the code (tests and production): // CalculatorTest.cs:   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtract(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, result); }   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtractGivesExpectedLastResult(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, calculator.LastResult); }   ...   // ICalculator.cs: /// <summary> /// Subtracts the specified operands. /// </summary> /// <param name="operand1">The operand1.</param> /// <param name="operand2">The operand2.</param> /// <returns>The result of the subtraction.</returns> /// <exception cref="ArgumentException"> /// Argument <paramref name="operand1"/> is &lt; 0.<br/> /// -- or --<br/> /// Argument <paramref name="operand2"/> is &lt; 0. /// </exception> double Subtract(double operand1, double operand2);   ...   // Calculator.cs:   public double Subtract(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }       if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }       return (this.LastResult = operand1 - operand2).Value; }   Obviously, the argument validation stuff that was produced during the red-green part of our cycle duplicates the code from the previous Add() method. So, to avoid code duplication and minimize the number of code lines of the production code, we do an Extract Method refactoring. 
One more time, this is only a matter of a few mouse clicks (and giving the new method a name) with R#: Having done that, our production code finally looks like that: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         #region ICalculator           public double? LastResult { get; private set; }           public double Add(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 + operand2).Value;         }           public double Subtract(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 - operand2).Value;         }           #endregion // ICalculator           #region Implementation (Helper)           private static void ThrowIfOneOperandIsInvalid(double operand1, double operand2)         {             if (operand1 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand1");             }               if (operand2 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand2");             }         }           #endregion // Implementation (Helper)       } // class Calculator   } // namespace Calculator But is the above worth the effort at all? It’s obviously trivial and not very impressive. All our tests were green (for the right reasons), and refactoring the code did not change anything. It’s not immediately clear how this refactoring work adds value to the project. Derick puts it like this: STOP! Hold on a second… before you go any further and before you even think about refactoring what you just wrote to make your test pass, you need to understand something: if your done with your requirements after making the test green, you are not required to refactor the code. I know… I’m speaking heresy, here. Toss me to the wolves, I’ve gone over to the dark side! Seriously, though… if your test is passing for the right reasons, and you do not need to write any test or any more code for you class at this point, what value does refactoring add? Derick immediately answers his own question: So why should you follow the refactor portion of red/green/refactor? When you have added code that makes the system less readable, less understandable, less expressive of the domain or concern’s intentions, less architecturally sound, less DRY, etc, then you should refactor it. I couldn’t state it more precise. From my personal perspective, I’d add the following: You have to keep in mind that real-world software systems are usually quite large and there are dozens or even hundreds of occasions where micro-refactorings like the above can be applied. It’s the sum of them all that counts. And to have a good overall quality of the system (e.g. in terms of the Code Duplication Percentage metric) you have to be pedantic on the individual, seemingly trivial cases. My job regularly requires the reading and understanding of ‘foreign’ code. 
So code quality/readability really makes a HUGE difference for me – sometimes it can even be the difference between project success and failure… Conclusions The development process described above emerged over the years, and there were mainly two things that guided its evolution (you might call them eternal principles, personal beliefs, or anything in between): Test-driven development is the normal, natural way of writing software; code-first is the exception. So ‘doing TDD or not’ is not a question. And good, stable code can only reliably be produced by doing TDD (yes, I know: many will strongly disagree here again, but I’ve never seen high-quality code – and high-quality code is code that stood the test of time and causes low maintenance costs – that was produced code-first…) It’s the production code that pays our bills in the end. (Though I have seen customers these days who demand an acceptance test battery as part of the final delivery. Things seem to be moving in the right direction…). The test code serves ‘only’ to make the production code work. But at the end of the day it’s solely the number of delivered features that counts - no matter how much test code you wrote or how good it is. With these two things in mind, I tried to optimize my coding process for coding speed – or, in business terms: productivity - without sacrificing the principles of TDD (more than I’d do either way…).  As a result, I consider a ratio of about 3-5/1 for test code vs. production code as normal and desirable. In other words: roughly 60-80% of my code is test code. (This might sound like a lot, but that is mainly because software development standards are only beginning to evolve. Seen historically, the software development profession is very young and still at its very beginning, and there are no viable standards yet. If you think of software development as a kind of casting process, where the test code is the mold and the resulting production code is the final product, then the above ratio no longer sounds extraordinary…) Although all of the above might look like a great deal of unnecessary work at first sight, it’s not. With the aid of the mentioned add-ins, doing all of it is a matter of minutes, sometimes seconds (while writing this post took hours and days…). The most important thing is to have the right tools at hand. Slow developer machines, or the lack of a tool, or something like that - all to ‘save’ a few hundred bucks - is just not acceptable and a very bad decision in business terms (though I have seen and heard exactly that quite a few times…). Producing high-quality products requires high-quality tools. This is a platitude that every craftsman knows… The round-trip described here takes me about five to ten minutes in my real-world development practice. I guess that is about 30% more time compared to developing the ‘traditional’ (code-first) way. But the ‘product’ manufactured this way is of much higher quality and massively reduces maintenance costs, which are by far the single biggest cost factor, as I showed in this previous post: It's the maintenance, stupid! (or: Something is rotten in developerland.). In the end, this is a highly cost-effective way of developing software… But on the other hand, there clearly is a trade-off here: coding speed vs. code quality/later maintenance costs. The development method described here might be a perfect fit for the overwhelming majority of software projects, but there certainly are some scenarios where it’s not - e.g.
if time-to-market is crucial for a software project. So this is a business decision in the end. It’s just that you have to know what you’re doing and what consequences it might have… Some last words First, I’d like to thank Derick Bailey again. His two aforementioned posts (which I strongly recommend reading) inspired me to think deeply about my own personal way of doing TDD and to clarify my thoughts about it. I wouldn’t have done that without this inspiration. I really enjoy that kind of discussion… I agree with him in all respects. But I don’t know (yet?) how to bring his insights into the described production process without slowing things down. The method described above has proved to be very much “good enough” in my practical experience. But of course, I’m open to suggestions here… My rationale for now is: if the test is initially red during the red-green-refactor cycle, the ‘right reason’ is that it actually calls the right method, but this method is not yet operational. Later on, when the cycle is finished and the tests become part of the regular, automated Continuous Integration process, ‘red’ certainly must occur for the ‘right reason’: in this phase, ‘red’ MUST mean nothing but an unfulfilled assertion - Fail By Assertion, Not By Anything Else!
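A minimal sketch of how that rule can be supported in the test fixture (this is an illustration, not part of the original walk-through; it reuses the fixture’s container field, the ICalculator interface and the MbUnit/Gallio attributes shown in the examples above): give the container wiring its own, clearly named test, so that a broken registration fails exactly one test with an explicit message, while the functional tests can only go red by assertion.

// in CalculatorTest (sketch only: assumes the fixture's 'container' field
// and the MbUnit/Gallio attributes used throughout this post):

[Test]
public void ContainerWiringResolvesCalculator()
{
    // If this test goes red, every other red in the fixture is suspect:
    // fix the wiring first, then trust the assertions again.
    ICalculator calculator = container.GetService<ICalculator>();

    Assert.IsNotNull(calculator, "ICalculator could not be resolved from the container.");
}

That way a CI run distinguishes cleanly between ‘the wiring broke’ and ‘an assertion is unfulfilled’, which is exactly the distinction the rule above is after.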

    Read the article

  • Why didn't 12.04 install?

    - by Josephisscrewed
    Ok, so I've installed Ubuntu many times on my computer.. Normally on the same partition, and WIndows would always delete Ubuntu(I don't know how.. it just happens) if i go away from keyboard during boot and it chooses Windows automatically because I took to long. So i tried to reinstall again, but after the fifth time it wouldn't let me, and told me to check "wubi-12.04-rev266.log". It took a while to find, but when i found it, I had no idea what any of it meant, as I'm no programmer.I first tried this the day Precise Pangolin came out. SO skip ahead 2.5 months, when I finally found this file, and i then got the idea of making a new partition to install Ubuntu on, but I used wubi, like I always did. It didn't look like it would f anything up, so I did it. it went through all the downloads, extracting, etc. Which took about 40 minutes total, then ended with an error message saying to check "wubi-12.04-rev266.log". i did. Here's what it says: 07-10 23:33 INFO root: === wubi 12.04 rev266 === 07-10 23:33 DEBUG root: Logfile is c:\users\joseph\appdata\local\temp\wubi-12.04-rev266.log 07-10 23:33 DEBUG root: sys.argv = ['main.pyo', '--exefile="C:\\Users\\Joseph\\Downloads\\wubi.exe"'] 07-10 23:33 DEBUG CommonBackend: data_dir=C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\data 07-10 23:33 DEBUG WindowsBackend: 7z=C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\bin\7z.exe 07-10 23:33 DEBUG WindowsBackend: startup_folder=C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Startup 07-10 23:33 DEBUG CommonBackend: Fetching basic info... 07-10 23:33 DEBUG CommonBackend: original_exe=C:\Users\Joseph\Downloads\wubi.exe 07-10 23:33 DEBUG CommonBackend: platform=win32 07-10 23:33 DEBUG CommonBackend: osname=nt 07-10 23:33 DEBUG CommonBackend: language=en_US 07-10 23:33 DEBUG CommonBackend: encoding=cp1252 07-10 23:33 DEBUG WindowsBackend: arch=amd64 07-10 23:33 DEBUG CommonBackend: Parsing isolist=C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\data\isolist.ini 07-10 23:33 DEBUG CommonBackend: Adding distro Xubuntu-i386 07-10 23:33 DEBUG CommonBackend: Adding distro Edubuntu-i386 07-10 23:33 DEBUG CommonBackend: Adding distro Xubuntu-amd64 07-10 23:33 DEBUG CommonBackend: Adding distro Kubuntu-amd64 07-10 23:33 DEBUG CommonBackend: Adding distro Mythbuntu-i386 07-10 23:33 DEBUG CommonBackend: Adding distro Edubuntu-amd64 07-10 23:33 DEBUG CommonBackend: Adding distro Ubuntu-amd64 07-10 23:33 DEBUG CommonBackend: Adding distro Lubuntu-i386 07-10 23:33 DEBUG CommonBackend: Adding distro Ubuntu-i386 07-10 23:33 DEBUG CommonBackend: Adding distro Mythbuntu-amd64 07-10 23:33 DEBUG CommonBackend: Adding distro Kubuntu-i386 07-10 23:33 DEBUG CommonBackend: Adding distro Lubuntu-amd64 07-10 23:33 DEBUG WindowsBackend: Fetching host info... 
07-10 23:33 DEBUG WindowsBackend: registry_key=Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi 07-10 23:33 DEBUG WindowsBackend: windows version=vista 07-10 23:33 DEBUG WindowsBackend: windows_version2=Windows 7 Home Premium 07-10 23:33 DEBUG WindowsBackend: windows_sp=None 07-10 23:33 DEBUG WindowsBackend: windows_build=7600 07-10 23:33 DEBUG WindowsBackend: gmt=-8 07-10 23:33 DEBUG WindowsBackend: country=US 07-10 23:33 DEBUG WindowsBackend: timezone=America/Los_Angeles 07-10 23:33 DEBUG WindowsBackend: windows_username=Joseph 07-10 23:33 DEBUG WindowsBackend: user_full_name=Joseph 07-10 23:33 DEBUG WindowsBackend: user_directory=C:\Users\Joseph 07-10 23:33 DEBUG WindowsBackend: windows_language_code=1033 07-10 23:33 DEBUG WindowsBackend: windows_language=English 07-10 23:33 DEBUG WindowsBackend: processor_name=Intel(R) Core(TM) i3 CPU M 370 @ 2.40GHz 07-10 23:33 DEBUG WindowsBackend: bootloader=vista 07-10 23:33 DEBUG WindowsBackend: system_drive=Drive(C: hd 78696.8203125 mb free ntfs) 07-10 23:33 DEBUG WindowsBackend: drive=Drive(C: hd 78696.8203125 mb free ntfs) 07-10 23:33 DEBUG WindowsBackend: drive=Drive(D: hd 4303.48046875 mb free ntfs) 07-10 23:33 DEBUG WindowsBackend: drive=Drive(E: cd 0.0 mb free udf) 07-10 23:33 DEBUG WindowsBackend: drive=Drive(U: hd 79907.8320313 mb free ntfs) 07-10 23:33 DEBUG WindowsBackend: uninstaller_path=None 07-10 23:33 DEBUG WindowsBackend: previous_target_dir=None 07-10 23:33 DEBUG WindowsBackend: previous_distro_name=None 07-10 23:33 DEBUG WindowsBackend: keyboard_id=67699721 07-10 23:33 DEBUG WindowsBackend: keyboard_layout=us 07-10 23:33 DEBUG WindowsBackend: keyboard_variant= 07-10 23:33 DEBUG CommonBackend: python locale=('en_US', 'cp1252') 07-10 23:33 DEBUG CommonBackend: locale=en_US.UTF-8 07-10 23:33 DEBUG WindowsBackend: total_memory_mb=3893.859375 07-10 23:33 DEBUG CommonBackend: Searching ISOs on USB devices 07-10 23:33 DEBUG CommonBackend: Searching for local CDs 07-10 23:33 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Ubuntu CD 07-10 23:33 DEBUG Distro: does not contain C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Ubuntu CD 07-10 23:33 DEBUG Distro: does not contain C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Kubuntu CD 07-10 23:33 DEBUG Distro: does not contain C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Kubuntu CD 07-10 23:33 DEBUG Distro: does not contain C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Xubuntu CD 07-10 23:33 DEBUG Distro: does not contain C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Xubuntu CD 07-10 23:33 DEBUG Distro: does not contain C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Mythbuntu CD 07-10 23:33 DEBUG Distro: does not contain 
C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Mythbuntu CD 07-10 23:33 DEBUG Distro: does not contain C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Edubuntu CD 07-10 23:33 DEBUG Distro: does not contain C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Edubuntu CD 07-10 23:33 DEBUG Distro: does not contain C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Lubuntu CD 07-10 23:33 DEBUG Distro: does not contain C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Lubuntu CD 07-10 23:33 DEBUG Distro: does not contain C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether D:\ is a valid Ubuntu CD 07-10 23:33 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether D:\ is a valid Ubuntu CD 07-10 23:33 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether D:\ is a valid Kubuntu CD 07-10 23:33 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether D:\ is a valid Kubuntu CD 07-10 23:33 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether D:\ is a valid Xubuntu CD 07-10 23:33 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether D:\ is a valid Xubuntu CD 07-10 23:33 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether D:\ is a valid Mythbuntu CD 07-10 23:33 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether D:\ is a valid Mythbuntu CD 07-10 23:33 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether D:\ is a valid Edubuntu CD 07-10 23:33 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether D:\ is a valid Edubuntu CD 07-10 23:33 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether D:\ is a valid Lubuntu CD 07-10 23:33 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether D:\ is a valid Lubuntu CD 07-10 23:33 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether E:\ is a valid Ubuntu CD 07-10 23:33 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether E:\ is a valid Ubuntu CD 07-10 23:33 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether E:\ is a valid Kubuntu CD 07-10 23:33 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether E:\ is a valid Kubuntu CD 07-10 23:33 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether E:\ is a valid 
Xubuntu CD 07-10 23:33 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether E:\ is a valid Xubuntu CD 07-10 23:33 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether E:\ is a valid Mythbuntu CD 07-10 23:33 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether E:\ is a valid Mythbuntu CD 07-10 23:33 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether E:\ is a valid Edubuntu CD 07-10 23:33 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether E:\ is a valid Edubuntu CD 07-10 23:33 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether E:\ is a valid Lubuntu CD 07-10 23:33 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether E:\ is a valid Lubuntu CD 07-10 23:33 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether U:\ is a valid Ubuntu CD 07-10 23:33 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether U:\ is a valid Ubuntu CD 07-10 23:33 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether U:\ is a valid Kubuntu CD 07-10 23:33 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether U:\ is a valid Kubuntu CD 07-10 23:33 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether U:\ is a valid Xubuntu CD 07-10 23:33 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether U:\ is a valid Xubuntu CD 07-10 23:33 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether U:\ is a valid Mythbuntu CD 07-10 23:33 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether U:\ is a valid Mythbuntu CD 07-10 23:33 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether U:\ is a valid Edubuntu CD 07-10 23:33 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether U:\ is a valid Edubuntu CD 07-10 23:33 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether U:\ is a valid Lubuntu CD 07-10 23:33 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:33 DEBUG Distro: checking whether U:\ is a valid Lubuntu CD 07-10 23:33 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:33 INFO root: Running the installer... 07-10 23:33 DEBUG WindowsFrontend: __init__... 07-10 23:33 DEBUG WindowsFrontend: on_init... 
07-10 23:33 INFO WinuiPage: appname=wubi, localedir=C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\translations, languages=['en_US', 'en'] 07-10 23:33 INFO WinuiPage: appname=wubi, localedir=C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\translations, languages=['en_US', 'en'] 07-10 23:35 DEBUG WinuiInstallationPage: target_drive=U:, installation_size=30000MB, distro_name=Ubuntu, language=en_US, locale=en_US.UTF-8, username=joseph 07-10 23:35 INFO root: Received settings 07-10 23:35 DEBUG CommonBackend: Searching for local CD 07-10 23:35 DEBUG Distro: checking whether C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp is a valid Ubuntu CD 07-10 23:35 DEBUG Distro: does not contain C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\casper\filesystem.squashfs 07-10 23:35 DEBUG Distro: checking whether D:\ is a valid Ubuntu CD 07-10 23:35 DEBUG Distro: does not contain D:\casper\filesystem.squashfs 07-10 23:35 DEBUG Distro: checking whether E:\ is a valid Ubuntu CD 07-10 23:35 DEBUG Distro: does not contain E:\casper\filesystem.squashfs 07-10 23:35 DEBUG Distro: checking whether U:\ is a valid Ubuntu CD 07-10 23:35 DEBUG Distro: does not contain U:\casper\filesystem.squashfs 07-10 23:35 DEBUG CommonBackend: Searching for local ISO 07-10 23:35 INFO WinuiPage: appname=wubi, localedir=C:\Users\Joseph\AppData\Local\Temp\pylA05E.tmp\translations, languages=['en_US', 'en'] 07-10 23:35 DEBUG TaskList: # Running tasklist... 07-10 23:35 DEBUG TaskList: ## Running select_target_dir... 07-10 23:35 INFO WindowsBackend: Installing into U:\ubuntu 07-10 23:35 DEBUG TaskList: ## Finished select_target_dir 07-10 23:35 DEBUG TaskList: ## Running create_dir_structure... 07-10 23:35 DEBUG CommonBackend: Creating dir U:\ubuntu 07-10 23:35 DEBUG CommonBackend: Creating dir U:\ubuntu\disks 07-10 23:35 DEBUG CommonBackend: Creating dir U:\ubuntu\install 07-10 23:35 DEBUG CommonBackend: Creating dir U:\ubuntu\install\boot 07-10 23:35 DEBUG CommonBackend: Creating dir U:\ubuntu\disks\boot 07-10 23:35 DEBUG CommonBackend: Creating dir U:\ubuntu\disks\boot\grub 07-10 23:35 DEBUG CommonBackend: Creating dir U:\ubuntu\install\boot\grub 07-10 23:35 DEBUG TaskList: ## Finished create_dir_structure 07-10 23:35 DEBUG TaskList: ## Running create_uninstaller... 
07-10 23:35 DEBUG WindowsBackend: Copying uninstaller C:\Users\Joseph\Downloads\wubi.exe -> U:\ubuntu\uninstall-wubi.exe 07-10 23:35 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi UninstallString U:\ubuntu\uninstall-wubi.exe 07-10 23:35 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi InstallationDir U:\ubuntu 07-10 23:35 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi DisplayName Ubuntu 07-10 23:35 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi DisplayIcon U:\ubuntu\Ubuntu.ico 07-10 23:35 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi DisplayVersion 12.04-rev266 07-10 23:35 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi Publisher Ubuntu 07-10 23:35 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi URLInfoAbout http://www.ubuntu.com 07-10 23:35 DEBUG registry: Setting registry key -2147483646 Software\Microsoft\Windows\CurrentVersion\Uninstall\Wubi HelpLink http://www.ubuntu.com/support 07-10 23:35 DEBUG TaskList: ## Finished create_uninstaller 07-10 23:35 DEBUG TaskList: ## Running create_preseed_diskimage... 07-10 23:35 DEBUG TaskList: ## Finished create_preseed_diskimage 07-10 23:35 DEBUG TaskList: ## Running get_diskimage... 07-10 23:35 DEBUG TaskList: New task download 07-10 23:35 DEBUG TaskList: ### Running download... 07-10 23:35 DEBUG downloader: downloading http://releases.ubuntu.com/12.04/ubuntu-12.04-wubi-amd64.tar.xz > U:\ubuntu\disks\ubuntu-12.04-wubi-amd64.tar.xz 07-10 23:35 DEBUG downloader: Download start filename=U:\ubuntu\disks\ubuntu-12.04-wubi-amd64.tar.xz, url=http://releases.ubuntu.com/12.04/ubuntu-12.04-wubi-amd64.tar.xz, basename=ubuntu-12.04-wubi-amd64.tar.xz, length=512730488, text=None 07-11 00:00 DEBUG TaskList: ### Finished download 07-11 00:00 DEBUG downloader: download finished (read 512730488 bytes) 07-11 00:00 DEBUG TaskList: ## Finished get_diskimage 07-11 00:00 DEBUG TaskList: ## Running extract_diskimage... 07-11 00:03 DEBUG TaskList: ## Finished extract_diskimage 07-11 00:03 DEBUG TaskList: ## Running choose_disk_sizes... 07-11 00:03 DEBUG WindowsBackend: total size=30000 root=29744 swap=256 home=0 usr=0 07-11 00:03 DEBUG TaskList: ## Finished choose_disk_sizes 07-11 00:03 DEBUG TaskList: ## Running expand_diskimage... 07-11 00:05 DEBUG TaskList: ## Finished expand_diskimage 07-11 00:05 DEBUG TaskList: ## Running create_swap_diskimage... 07-11 00:05 DEBUG TaskList: ## Finished create_swap_diskimage 07-11 00:05 DEBUG TaskList: ## Running modify_bootloader... 07-11 00:05 DEBUG TaskList: New task modify_bcd 07-11 00:05 DEBUG TaskList: ### Running modify_bcd... 07-11 00:05 DEBUG WindowsBackend: modify_bcd Drive(C: hd 78696.8203125 mb free ntfs) 07-11 00:05 ERROR TaskList: Error executing command >>command=C:\Windows\sysnative\bcdedit.exe /set {970e3d1b-e019-11df-a016-81045c79c1f9} device partition=U: >>retval=1 >>stderr=An error has occurred setting the element data. The request is not supported. 
>>stdout= Traceback (most recent call last): File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__ File "\lib\wubi\backends\win32\backend.py", line 697, in modify_bcd File "\lib\wubi\backends\common\utils.py", line 66, in run_command Exception: Error executing command >>command=C:\Windows\sysnative\bcdedit.exe /set {970e3d1b-e019-11df-a016-81045c79c1f9} device partition=U: >>retval=1 >>stderr=An error has occurred setting the element data. The request is not supported. >>stdout= 07-11 00:05 DEBUG TaskList: # Cancelling tasklist 07-11 00:05 DEBUG TaskList: New task modify_bcd 07-11 00:05 ERROR root: Error executing command >>command=C:\Windows\sysnative\bcdedit.exe /set {970e3d1b-e019-11df-a016-81045c79c1f9} device partition=U: >>retval=1 >>stderr=An error has occurred setting the element data. The request is not supported. >>stdout= Traceback (most recent call last): File "\lib\wubi\application.py", line 58, in run File "\lib\wubi\application.py", line 132, in select_task File "\lib\wubi\application.py", line 158, in run_installer File "\lib\wubi\backends\common\tasklist.py", line 197, in __call__ File "\lib\wubi\backends\win32\backend.py", line 697, in modify_bcd File "\lib\wubi\backends\common\utils.py", line 66, in run_command Exception: Error executing command >>command=C:\Windows\sysnative\bcdedit.exe /set {970e3d1b-e019-11df-a016-81045c79c1f9} device partition=U: >>retval=1 >>stderr=An error has occurred setting the element data. The request is not supported. >>stdout= 07-11 00:05 DEBUG TaskList: New task modify_bcd 07-11 00:05 DEBUG TaskList: ## Finished modify_bootloader 07-11 00:05 DEBUG TaskList: # Finished tasklist What have I done wrong? What can I do? If I turn off my laptop, will I actually be able to turn it back on? If you want me to post the log from the first day it happened, i'd be glad to in the comments, in the main body it made it over 30000 characters.

    Read the article

  • When is a SQL function not a function?

    - by Rob Farley
Should SQL Server even have functions? (Oh yeah – this is a T-SQL Tuesday post, hosted this month by Brad Schulz) Functions play an important part in programming, in almost any language. A function is a piece of code that is designed to return something, as opposed to a piece of code which isn’t designed to return anything (which is known as a procedure). SQL Server is no different. You can call stored procedures, even from within other stored procedures, and you can call functions and use these in other queries. Stored procedures might query something, and therefore ‘return data’, but a function in SQL is considered to have the type of the thing returned, and can be used accordingly in queries. Consider the internal GETDATE() function. SELECT GETDATE(), SomeDatetimeColumn FROM dbo.SomeTable; There’s no logical difference between the field that is being returned by the function and the field that’s being returned by the table column. Both are datetime fields – if you didn’t have inside knowledge, you wouldn’t necessarily be able to tell which was which. And so as developers, we find ourselves wanting to create functions that return all kinds of things – functions which look up values based on codes, functions which do string manipulation, and so on. But it’s rubbish. Ok, it’s not all rubbish, but it mostly is. And this isn’t even considering the SARGability impact. It’s far more significant than that. (When I say the SARGability aspect, I mean “because you’re unlikely to have an index on the result of some function that’s applied to a column, so try to invert the function and query the column in an unchanged manner”) I’m going to consider the three main types of user-defined functions in SQL Server: Scalar, Inline Table-Valued, and Multi-statement Table-Valued. I could also look at user-defined CLR functions, including aggregate functions, but not today. I figure that most people don’t tend to get around to doing CLR functions, and I’m going to focus on the T-SQL-based user-defined functions. Most people split these types of function into two groups. So do I. Except that most people pick them based on ‘scalar or table-valued’. I’d rather go with ‘inline or not’. If it’s not inline, it’s rubbish. It really is. Let’s start by considering the two kinds of table-valued function, and compare them. These functions are going to return the sales for a particular salesperson in a particular year, from the AdventureWorks database.
CREATE FUNCTION dbo.FetchSales_inline(@salespersonid int, @orderyear int) RETURNS TABLE AS  RETURN (     SELECT e.LoginID as EmployeeLogin, o.OrderDate, o.SalesOrderID     FROM Sales.SalesOrderHeader AS o     LEFT JOIN HumanResources.Employee AS e     ON e.EmployeeID = o.SalesPersonID     WHERE o.SalesPersonID = @salespersonid     AND o.OrderDate >= DATEADD(year,@orderyear-2000,'20000101')     AND o.OrderDate < DATEADD(year,@orderyear-2000+1,'20000101') ) ; GO CREATE FUNCTION dbo.FetchSales_multi(@salespersonid int, @orderyear int) RETURNS @results TABLE (     EmployeeLogin nvarchar(512),     OrderDate datetime,     SalesOrderID int     ) AS BEGIN     INSERT @results (EmployeeLogin, OrderDate, SalesOrderID)     SELECT e.LoginID, o.OrderDate, o.SalesOrderID     FROM Sales.SalesOrderHeader AS o     LEFT JOIN HumanResources.Employee AS e     ON e.EmployeeID = o.SalesPersonID     WHERE o.SalesPersonID = @salespersonid     AND o.OrderDate >= DATEADD(year,@orderyear-2000,'20000101')     AND o.OrderDate < DATEADD(year,@orderyear-2000+1,'20000101')     ;     RETURN END ; GO You’ll notice that I’m being nice and responsible with the use of the DATEADD function, so that I have SARGability on the OrderDate filter. Regular readers will be hoping I’ll show what’s going on in the execution plans here. Here I’ve run two SELECT * queries with the “Show Actual Execution Plan” option turned on. Notice that the ‘Query cost’ of the multi-statement version is just 2% of the ‘Batch cost’. But also notice there’s trickery going on. And it’s nothing to do with that extra index that I have on the OrderDate column. Trickery. Look at it – clearly, the first plan is showing us what’s going on inside the function, but the second one isn’t. The second one is blindly running the function, and then scanning the results. There’s a Sequence operator which is calling the TVF operator, and then calling a Table Scan to get the results of that function for the SELECT operator. But surely it still has to do all the work that the first one is doing... To see what’s actually going on, let’s look at the Estimated plan. Now, we see the same plans (almost) that we saw in the Actuals, but we have an extra one – the one that was used for the TVF. Here’s where we see the inner workings of it. You’ll probably recognise the right-hand side of the TVF’s plan as looking very similar to the first plan – but it’s now being called by a stack of other operators, including an INSERT statement to be able to populate the table variable that the multi-statement TVF requires. And the cost of the TVF is 57% of the batch! But it gets worse. Let’s consider what happens if we don’t need all the columns. We’ll leave out the EmployeeLogin column. Here, we see that the inline function call has been simplified down. It doesn’t need the Employee table. The join is redundant and has been eliminated from the plan, making it even cheaper. But the multi-statement plan runs the whole thing as before, only removing the extra column when the Table Scan is performed. A multi-statement function is a lot more powerful than an inline one. An inline function can only be the result of a single sub-query. It’s essentially the same as a parameterised view, because views demonstrate this same behaviour of extracting the definition of the view and using it in the outer query. A multi-statement function is clearly more powerful because it can contain far more complex logic. But a multi-statement function isn’t really a function at all. It’s a stored procedure. 
It’s wrapped up like a function, but behaves like a stored procedure. It would be completely unreasonable to expect that a stored procedure could be simplified down to recognise that not all the columns might be needed, but yet this is part of the pain associated with this procedural function situation. The biggest clue that a multi-statement function is more like a stored procedure than a function is the “BEGIN” and “END” statements that surround the code. If you try to create a multi-statement function without these statements, you’ll get an error – they are very much required. When I used to present on this kind of thing, I even used to call it “The Dangers of BEGIN and END”, and yes, I’ve written about this type of thing before in a similarly-named post over at my old blog. Now how about scalar functions... Suppose we wanted a scalar function to return the count of these. CREATE FUNCTION dbo.FetchSales_scalar(@salespersonid int, @orderyear int) RETURNS int AS BEGIN     RETURN (         SELECT COUNT(*)         FROM Sales.SalesOrderHeader AS o         LEFT JOIN HumanResources.Employee AS e         ON e.EmployeeID = o.SalesPersonID         WHERE o.SalesPersonID = @salespersonid         AND o.OrderDate >= DATEADD(year,@orderyear-2000,'20000101')         AND o.OrderDate < DATEADD(year,@orderyear-2000+1,'20000101')     ); END ; GO Notice the evil words? They’re required. Try to remove them, you just get an error. That’s right – any scalar function is procedural, despite the fact that you wrap up a sub-query inside that RETURN statement. It’s as ugly as anything. Hopefully this will change in future versions. Let’s have a look at how this is reflected in an execution plan. Here’s a query, its Actual plan, and its Estimated plan: SELECT e.LoginID, y.year, dbo.FetchSales_scalar(p.SalesPersonID, y.year) AS NumSales FROM (VALUES (2001),(2002),(2003),(2004)) AS y (year) CROSS JOIN Sales.SalesPerson AS p LEFT JOIN HumanResources.Employee AS e ON e.EmployeeID = p.SalesPersonID; We see here that the cost of the scalar function is about twice that of the outer query. Nicely, the query optimizer has worked out that it doesn’t need the Employee table, but that’s a bit of a red herring here. There’s actually something way more significant going on. If I look at the properties of that UDF operator, it tells me that the Estimated Subtree Cost is 0.337999. If I just run the query SELECT dbo.FetchSales_scalar(281,2003); we see that the UDF cost is still unchanged. You see, this 0.0337999 is the cost of running the scalar function ONCE. But when we ran that query with the CROSS JOIN in it, we returned quite a few rows. 68 in fact. Could’ve been a lot more, if we’d had more salespeople or more years. And so we come to the biggest problem. This procedure (I don’t want to call it a function) is getting called 68 times – each one between twice as expensive as the outer query. And because it’s calling it in a separate context, there is even more overhead that I haven’t considered here. The cheek of it, to say that the Compute Scalar operator here costs 0%! I know a number of IT projects that could’ve used that kind of costing method, but that’s another story that I’m not going to go into here. Let’s look at a better way. Suppose our scalar function had been implemented as an inline one. Then it could have been expanded out like a sub-query. 
It could’ve run something like this: SELECT e.LoginID, y.year, (SELECT COUNT(*)     FROM Sales.SalesOrderHeader AS o     LEFT JOIN HumanResources.Employee AS e     ON e.EmployeeID = o.SalesPersonID     WHERE o.SalesPersonID = p.SalesPersonID     AND o.OrderDate >= DATEADD(year,y.year-2000,'20000101')     AND o.OrderDate < DATEADD(year,y.year-2000+1,'20000101')     ) AS NumSales FROM (VALUES (2001),(2002),(2003),(2004)) AS y (year) CROSS JOIN Sales.SalesPerson AS p LEFT JOIN HumanResources.Employee AS e ON e.EmployeeID = p.SalesPersonID; Don’t worry too much about the Scan of the SalesOrderHeader underneath a Nested Loop. If you remember from plenty of other posts on the matter, execution plans don’t push the data through. That Scan only runs once. The Index Spool sucks the data out of it and populates a structure that is used to feed the Stream Aggregate. The Index Spool operator gets called 68 times, but the Scan only once (the Number of Executions property demonstrates this). Here, the Query Optimizer has a full picture of what’s being asked, and can make the appropriate decision about how it accesses the data. It can simplify it down properly. To get this kind of behaviour from a function, we need it to be inline. But without inline scalar functions, we need to make our function be table-valued. Luckily, that’s ok. CREATE FUNCTION dbo.FetchSales_inline2(@salespersonid int, @orderyear int) RETURNS table AS RETURN (SELECT COUNT(*) as NumSales     FROM Sales.SalesOrderHeader AS o     LEFT JOIN HumanResources.Employee AS e     ON e.EmployeeID = o.SalesPersonID     WHERE o.SalesPersonID = @salespersonid     AND o.OrderDate >= DATEADD(year,@orderyear-2000,'20000101')     AND o.OrderDate < DATEADD(year,@orderyear-2000+1,'20000101') ); GO But we can’t use this as a scalar. Instead, we need to use it with the APPLY operator. SELECT e.LoginID, y.year, n.NumSales FROM (VALUES (2001),(2002),(2003),(2004)) AS y (year) CROSS JOIN Sales.SalesPerson AS p LEFT JOIN HumanResources.Employee AS e ON e.EmployeeID = p.SalesPersonID OUTER APPLY dbo.FetchSales_inline2(p.SalesPersonID, y.year) AS n; And now, we get the plan that we want for this query. All we’ve done is tell the function that it’s returning a table instead of a single value, and removed the BEGIN and END statements. We’ve had to name the column being returned, but what we’ve gained is an actual inline simplifiable function. And if we wanted it to return multiple columns, it could do that too. I really consider this function to be superior to the scalar function in every way. It does need to be handled differently in the outer query, but in many ways it’s a more elegant method there too. The function calls can be put amongst the FROM clause, where they can then be used in the WHERE or GROUP BY clauses without fear of calling the function multiple times (another horrible side effect of functions). So please. If you see BEGIN and END in a function, remember it’s not really a function, it’s a procedure. And then fix it. @rob_farley

    Read the article

  • How to create a new WCF/MVC/jQuery application from scratch

    - by pjohnson
    As a corporate developer by trade, I don't get much opportunity to create from-the-ground-up web sites; usually it's tweaks, fixes, and new functionality to existing sites. And with hobby sites, I often don't find the challenges I run into with enterprise systems; usually it's starting from Visual Studio's boilerplate project and adding whatever functionality I want to play around with, rarely deploying outside my own machine. So my experience creating a new enterprise-level site was a bit dated, and the technologies to do so have come a long way, and are much more ready to go out of the box. My intention with this post isn't so much to provide any groundbreaking insights, but to just tie together a lot of information in one place to make it easy to create a new site from scratch. Architecture One site I created earlier this year had an MVC 3 front end and a WCF 4-driven service layer. Using Visual Studio 2010, these project types are easy enough to add to a new solution. I created a third Class Library project to store common functionality the front end and services layers both needed to access, for example, the DataContract classes that the front end uses to call services in the service layer. By keeping DataContract classes in a separate project, I avoided the need for the front end to have an assembly/project reference directly to the services code, a bit cleaner and more flexible of an SOA implementation. Consuming the service Even by this point, VS has given you a lot. You have a working web site and a working service, neither of which do much but are great starting points. To wire up the front end and the services, I needed to create proxy classes and WCF client configuration information. I decided to use the SvcUtil.exe utility provided as part of the Windows SDK, which you should have installed if you installed VS. VS also provides an Add Service Reference command since the .NET 1.x ASMX days, which I've never really liked; it creates several .cs/.disco/etc. files, some of which contained hardcoded URL's, adding duplicate files (*1.cs, *2.cs, etc.) without doing a good job of cleaning up after itself. I've found SvcUtil much cleaner, as it outputs one C# file (containing several proxy classes) and a config file with settings, and it's easier to use to regenerate the proxy classes when the service changes, and to then maintain all your configuration in one place (your Web.config, instead of the Service Reference files). I provided it a reference to a copy of my common assembly so it doesn't try to recreate the data contract classes, had it use the type List<T> for collections, and modified the output files' names and .NET namespace, ending up with a command like: svcutil.exe /l:cs /o:MyService.cs /config:MyService.config /r:MySite.Common.dll /ct:System.Collections.Generic.List`1 /n:*,MySite.Web.ServiceProxies http://localhost:59999/MyService.svc I took the generated MyService.cs file and drop it in the web project, under a ServiceProxies folder, matching the namespace and keeping it separate from classes I coded manually. Integrating the config file took a little more work, but only needed to be done once as these settings didn't often change. A great thing Microsoft improved with WCF 4 is configuration; namely, you can use all the default settings and not have to specify them explicitly in your config file. Unfortunately, SvcUtil doesn't generate its config file this way. 
If you just copy & paste MyService.config's contents into your front end's Web.config, you'll copy a lot of settings you don't need, plus this will get unwieldy if you add more services in the future, each with its own custom binding. Really, as the only mandatory settings are the endpoint's ABC's (address, binding, and contract) you can get away with just this: <system.serviceModel>  <client>    <endpoint address="http://localhost:59999/MyService.svc" binding="wsHttpBinding" contract="MySite.Web.ServiceProxies.IMyService" />  </client></system.serviceModel> By default, the services project uses basicHttpBinding. As you can see, I switched it to wsHttpBinding, a more modern standard. Using something like netTcpBinding would probably be faster and more efficient since the client & service are both written in .NET, but it requires additional server setup and open ports, whereas switching to wsHttpBinding is much simpler. From an MVC controller action method, I instantiated the client, and invoked the method for my operation. As with any object that implements IDisposable, I wrapped it in C#'s using() statement, a tidy construct that ensures Dispose gets called no matter what, even if an exception occurs. Unfortunately there are problems with that, as WCF's ClientBase<TChannel> class doesn't implement Dispose according to Microsoft's own usage guidelines. I took an approach similar to Technology Toolbox's fix, except using partial classes instead of a wrapper class to extend the SvcUtil-generated proxy, making the fix more seamless from the controller's perspective, and theoretically, less code I have to change if and when Microsoft fixes this behavior. User interface The MVC 3 project template includes jQuery and some other common JavaScript libraries by default. I updated the ones I used to the latest versions using NuGet, available in VS via the Tools > Library Package Manager > Manage NuGet Packages for Solution... > Updates. I also used this dialog to remove packages I wasn't using. Given that it's smart enough to know the difference between the .js and .min.js files, I was hoping it would be smart enough to know which to include during build and publish operations, but this doesn't seem to be the case. I ended up using Cassette to perform the minification and bundling of my JavaScript and CSS files; ASP.NET 4.5 includes this functionality out of the box. The web client to web server link via jQuery was easy enough. In my JavaScript function, unobtrusively wired up to a button's click event, I called $.ajax, corresponding to an action method that returns a JsonResult, accomplished by passing my model class to the Controller.Json() method, which jQuery helpfully translates from JSON to a JavaScript object.$.ajax calls weren't perfectly straightforward. I tried using the simpler $.post method instead, but ran into trouble without specifying the contentType parameter, which $.post doesn't have. The url parameter is simple enough, though for flexibility in how the site is deployed, I used MVC's Url.Action method to get the URL, then sent this to JavaScript in a JavaScript string variable. If the request needed input data, I used the JSON.stringify function to convert a JavaScript object with the parameters into a JSON string, which MVC then parses into strongly-typed C# parameters. I also specified "json" for dataType, and "application/json; charset=utf-8" for contentType. For success and error, I provided my success and error handling functions, though success is a bit hairier. 
"Success" in this context indicates whether the HTTP request succeeds, not whether what you wanted the AJAX call to do on the web server was successful. For example, if you make an AJAX call to retrieve a piece of data, the success handler will be invoked for any 200 OK response, and the error handler will be invoked for failed requests, e.g. a 404 Not Found (if the server rejected the URL you provided in the url parameter) or 500 Internal Server Error (e.g. if your C# code threw an exception that wasn't caught). If an exception was caught and handled, or if the data requested wasn't found, this would likely go through the success handler, which would need to do further examination to verify it did in fact get back the data for which it asked. I discuss this more in the next section. Logging and exception handling At this point, I had a working application. If I ran into any errors or unexpected behavior, debugging was easy enough, but of course that's not an option on public web servers. Microsoft Enterprise Library 5.0 filled this gap nicely, with its Logging and Exception Handling functionality. First I installed Enterprise Library; NuGet as outlined above is probably the best way to do so. I needed a total of three assembly references--Microsoft.Practices.EnterpriseLibrary.ExceptionHandling, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Logging, and Microsoft.Practices.EnterpriseLibrary.Logging. VS links with the handy Enterprise Library 5.0 Configuration Console, accessible by right-clicking your Web.config and choosing Edit Enterprise Library V5 Configuration. In this console, under Logging Settings, I set up a Rolling Flat File Trace Listener to write to log files but not let them get too large, using a Text Formatter with a simpler template than that provided by default. Logging to a different (or additional) destination is easy enough, but a flat file suited my needs. At this point, I verified it wrote as expected by calling the Microsoft.Practices.EnterpriseLibrary.Logging.Logger.Write method from my C# code. With those settings verified, I went on to wire up Exception Handling with Logging. Back in the EntLib Configuration Console, under Exception Handling, I used a LoggingExceptionHandler, setting its Logging Category to the category I already had configured in the Logging Settings. Then, from code (e.g. a controller's OnException method, or any action method's catch block), I called the Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicy.HandleException method, providing the exception and the exception policy name I had configured in the Exception Handling Settings. Before I got this configured correctly, when I tried it out, nothing was logged. In working with .NET, I'm used to seeing an exception if something doesn't work or isn't set up correctly, but instead working with these EntLib modules reminds me more of JavaScript (before the "use strict" v5 days)--it just does nothing and leaves you to figure out why, I presume due in part to the listener pattern Microsoft followed with the Enterprise Library. First, I verified logging worked on its own. Then, verifying/correcting where each piece wires up to the next resolved my problem. 
Your C# code calls into the Exception Handling module, referencing the policy name you pass to the HandleException method; that policy’s configuration contains a LoggingExceptionHandler that references a logCategory; that logCategory should be added in the loggingConfiguration’s categorySources section; that category references a listener; that listener should be added in the loggingConfiguration’s listeners section, which specifies the name of the log file. One final note on error handling, as the proper way to handle WCF and MVC errors is a whole other very lengthy discussion. For AJAX calls to MVC action methods, depending on your configuration, an exception thrown here will result in ASP.NET’s Yellow Screen Of Death being sent back as a response, which is at best unnecessarily verbose and useless to the caller, and at worst a security risk as the internals of your application are exposed to potential hackers. I mitigated this by overriding my controller’s OnException method, passing the exception off to the Exception Handling module as above. I created an ErrorModel class with as few properties as possible (e.g. an Error string), sending as little information to the client as possible, to both conserve bandwidth and mitigate risk. I then return an ErrorModel in JSON format for AJAX requests: if (filterContext.HttpContext.Request.IsAjaxRequest()){    filterContext.Result = Json(new ErrorModel(...));    filterContext.ExceptionHandled = true;} My $.ajax calls from the browser get a valid 200 OK response and go into the success handler. Before assuming everything is OK, I check whether it’s an ErrorModel or a model containing what I requested. If it’s an ErrorModel, or null, I pass it to my error handler. If the client needs to handle different errors differently, ErrorModel can contain a flag, error code, string, etc. to differentiate, but again, sending as little information back as possible is ideal. Summary As any experienced ASP.NET developer knows, this is a far cry from where ASP.NET started when I began working with it 11 years ago. WCF services are far more powerful than ASMX ones, MVC is in many ways cleaner and certainly more unit test-friendly than Web Forms (if you don’t consider the code/markup commingling you’re doing again), the Enterprise Library makes error handling and logging almost entirely configuration-driven, AJAX makes a responsive UI more feasible, and jQuery makes JavaScript coding much less painful. It doesn’t take much work to get a functional, maintainable, flexible application, though having it actually do something useful is a whole other matter.
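One detail from the service-consumption part above that is worth spelling out is the disposal fix for the generated WCF client. A minimal sketch of the partial-class approach might look like the following (this is an illustration, not the article’s actual code; MyServiceClient stands in for whatever client class SvcUtil generated, and the namespace has to match the generated file). Re-implementing IDisposable on the generated proxy lets the controller keep its using(...) block while a faulted channel gets aborted instead of closed:

// MyServiceClient.Disposal.cs (sketch only: the client class name is assumed;
// keeping the fix in a partial class means it survives proxy regeneration)
using System;
using System.ServiceModel;

namespace MySite.Web.ServiceProxies
{
    public partial class MyServiceClient : IDisposable
    {
        // Re-implement IDisposable so that using(...) no longer calls
        // ClientBase<TChannel>.Close() blindly on a faulted channel.
        void IDisposable.Dispose()
        {
            try
            {
                if (this.State == CommunicationState.Faulted)
                {
                    this.Abort();
                }
                else
                {
                    this.Close();
                }
            }
            catch (CommunicationException)
            {
                this.Abort();
            }
            catch (TimeoutException)
            {
                this.Abort();
            }
            catch
            {
                this.Abort();
                throw;
            }
        }
    }
}

The call site in the controller can then stay as tidy as before, using (var client = new MyServiceClient()) { ... }, without the risk of Dispose throwing on a faulted channel and masking the original exception.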

    Read the article

  • CodePlex Daily Summary for Thursday, May 17, 2012

    CodePlex Daily Summary for Thursday, May 17, 2012Popular ReleasesWatchersNET.UrlShorty: WatchersNET.UrlShorty 01.03.03: changes Fixed Url & Error History when urls contain line breaksAspxCommerce: AspxCommerce1.1: AspxCommerce - 'Flexible and easy eCommerce platform' offers a complete e-Commerce solution that allows you to build and run your fully functional online store in minutes. You can create your storefront; manage the products through categories and subcategories, accept payments through credit cards and ship the ordered products to the customers. We have everything set up for you, so that you can only focus on building your own online store. Note: To login as a superuser, the username and pass...SiteMap Editor for Microsoft Dynamics CRM 2011: SiteMap Editor (1.1.1616.403): BUG FIX Hide save button when Titles or Descriptions element is selectedVisual C++ 2010 Directories Editor: Visual C++ 2010 Directories Editor (x32_x64): release v1.3MapWindow 6 Desktop GIS: MapWindow 6.1.2: Looking for a .Net GIS Map Application?MapWindow 6 Desktop GIS is an open source desktop GIS for Microsoft Windows that is built upon the DotSpatial Library. This release requires .Net 4 (Client Profile). Are you a software developer?Instead of downloading MapWindow for development purposes, get started with with the DotSpatial template. The extensions you create from the template can be loaded in MapWindow.DotSpatial: DotSpatial 1.2: This is a Minor Release. See the changes in the issue tracker. Minimal -- includes DotSpatial core and essential extensions Extended -- includes debugging symbols and additional extensions Tutorials are available. Just want to run the software? End user (non-programmer) version available branded as MapWindow Want to add your own feature? Develop a plugin, using the template and contribute to the extension feed (you can also write extensions that you distribute in other ways). Components ...Mugen Injection: Mugen Injection 2.2.1 (WinRT supported): Added ManagedScopeLifecycle. Increase performance. Added support for resolve 'params'.Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.52: Make preprocessor comment-statements nestable; add the ///#IFNDEF statement. (Discussion #355785) Don't throw an error for old-school JScript event handlers, and don't rename them if they aren't global functions.DotNetNuke® Events: 06.00.00: This is a serious release of Events. DNN 6 form pattern - We have take the full route towards DNN6: most notably the incorporation of the DNN6 form pattern with streamlined UX/UI. We have also tried to change all formatting to a div based structure. A daunting task, since the Events module contains a lot of forms. Roger has done a splendid job by going through all the forms in great detail, replacing all table style layouts into the new DNN6 div class="dnnForm XXX" type of layout with chang...LogicCircuit: LogicCircuit 2.12.5.15: Logic Circuit - is educational software for designing and simulating logic circuits. Intuitive graphical user interface, allows you to create unrestricted circuit hierarchy with multi bit buses, debug circuits behavior with oscilloscope, and navigate running circuits hierarchy. Changes of this versionThis release is fixing one but nasty bug. Two functions XOR and XNOR when used with 3 or more inputs were incorrectly evaluating their results. 
If you have a circuit that is using these functions...SharpCompress - a fully native C# library for RAR, 7Zip, Zip, Tar, GZip, BZip2: SharpCompress 0.8.1: Two fixes: Rar Decompression bug fixed. Error only occurred on some files Rar Decompression will throw an exception when another volume isn't found but one is expected.?????????? - ????????: All-In-One Code Framework ??? 2012-05-14: http://download.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1codechs&DownloadId=216140 ???OneCode??????,??????????6????Microsoft OneCode Sample,????2?Data Platform Sample?4?WPF Sample。???????????。 ????,?????。http://i3.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1code&DownloadId=128165 Data Platform Sample CSUseADO CppUseADO WPF Sample CSWPFMasterDetailBinding VBWPFMasterDetailBinding CSWPFThreading VBWPFThreading ....... ???????????blog: http://blog.csd...LINQ to Twitter: LINQ to Twitter Beta v2.0.25: Supports .NET 3.5, .NET 4.0, Silverlight 4.0, Windows Phone 7.1, Client Profile, and Windows 8. 100% Twitter API coverage. Also available via NuGet! Follow @JoeMayo.BlogEngine.NET: BlogEngine.NET 2.6: Get DotNetBlogEngine for 3 Months Free! Click Here for More Info BlogEngine.NET Hosting - 3 months free! Cheap ASP.NET Hosting - $4.95/Month - Click Here!! Click Here for More Info Cheap ASP.NET Hosting - $4.95/Month - Click Here! If you want to set up and start using BlogEngine.NET right away, you should download the Web project. If you want to extend or modify BlogEngine.NET, you should download the source code. If you are upgrading from a previous version of BlogEngine.NET, please take...BlackJumboDog: Ver5.6.2: 2012.05.07 Ver5.6.2 (1) Web???????、????????·????????? (2) Web???????、?????????? COMSPEC PATHEXT WINDIR SERVERADDR SERVERPORT DOCUMENTROOT SERVERADMIN REMOTE_PORT HTTPACCEPTCHRSET HTTPACCEPTLANGUAGE HTTPACCEPTEXCODINGGardens Point Parser Generator: Gardens Point Parser Generator version 1.5.0: ChangesVersion 1.5.0 contains a number of changes. Error messages are now MSBuild and VS-friendly. The default encoding of the *.y file is Unicode, with an automatic fallback to the previous raw-byte interpretation. The /report option has been improved, as has the automaton tracing facility. New facilities are included that allow multiple parsers to share a common token type. A complete change-log is available as a separate documentation file. The source project has been upgraded to Visual...Media Companion: Media Companion 3.502b: It has been a slow week, but this release addresses a couple of recent bugs: Movies Multi-part Movies - Existing .nfo files that differed in name from the first part, were missed and scraped again. Trailers - MC attempted to scrape info for existing trailers. TV Shows Show Scraping - shows available only in the non-default language would not show up in the main browser. The correct language can now be selected using the TV Show Selector for a single show. General Will no longer prompt for ...NewLife XCode ??????: XCode v8.5.2012.0508、XCoder v4.7.2012.0320: X????: 1,????For .Net 4.0?? XCoder????: 1,???????,????X????,?????? XCode????: 1,Insert/Update/Delete???????????????,???SQL???? 2,IEntityOperate?????? 3,????????IEntityTree 4,????????????????? 5,?????????? 6,??????????????Google Book Downloader: Google Books Downloader Lite 1.0: Google Books Downloader Lite 1.0Python Tools for Visual Studio: 1.5 Alpha: We’re pleased to announce the release of Python Tools for Visual Studio 1.5 Alpha. 
Python Tools for Visual Studio (PTVS) is an open-source plug-in for Visual Studio which supports programming with the Python language. PTVS supports a broad range of features including: • Supports Cpython, IronPython, Jython and Pypy • Python editor with advanced member, signature intellisense and refactoring • Code navigation: “Find all refs”, goto definition, and object browser • Local and remote debugging...New ProjectsAmlak: Amlak projectApparat: An Open Source Game/Simulation Engine made with C# and SlimDX.CarShop emulator: carshopDynaMaxx Server Backend: DynaMaxx Server BackendEntry-Level C# Password Generator: The Entry-Level C# Password Generator is a piece of software written for two purposes. To be kept as simple as possible for newcomers to the langauge to understand how to use the language and to help people make a new secure password for themselves.faccipractica: ESTA ES LA CAPA DE DATOSFACCIULEAM: ESTE ES UN PROYECTO DE PRACTICAFast C++ Math Expression Parser: The C++ Mathematical Expression Library (ExprTk) is a simple to use, easy to integrate and extremely efficient and fast mathematical expression parsing and evaluation engine. The parsing engine supports various kinds of functional and logic processing semantics and is very easily extendible.Font Data Catalog: A tool to store font dataGraffiti: Graffiti is a high-performance rendering engine built specifically for the Reach profile on top of XNA/Monogame with a very specific feature set * Support for the Reach profile * CPU/GPU vertex transformation (using SkinnedEffect) * Quake 3 shader style effects (Multi-pass) for anything Graffiti can render * Keyframed/procedural animation framework * Primitive rendering (antialiased, variable-sized points and lines) * Particle system using complex/primitive objects *Text rendering (w...homeland: A simple form Engine for Rails app.HTML5 for SharePoint 2010: HTML5 for SharePoint 2010 is a package of controls and webparts that allows using HTML5 controls in SharePoint. Image Cropper 4 Umbraco 5: Image cropper for Umbraco 5.IPickMovies: This is project for IPick Movies from Mezanmi Technologies.JFrame: jframeJustForTest: For Testkxcxw: This is a website project. Latence: projectLive: this is Live project.MASMYTEST: fdfMP3 player: Project for MUL.MSI Validation: MSI ValidationMyWp: wp applicationOpen Waves Activity Feed: Activity Feed component to be used in ASP.NET projects based on EPiServer CMS, SP, and other.pComboBox: Script ComboBox - Initial versions complete and functional ( current version is 1.54) program and documentation available on justcode.ca - http://www.justcode.ca/justwindowscode/ Source will be available at pcombobox.codeplex.com The basic functionality is shown in the below batch file to call this combobox was coded as follows pcombobox /p:one,"# two",three,4 /t:"title" echo %ERRORLEVEL% Basically you can call this combobox dialog window from a batch file or a vbscript an...Periodic.Net: Periodic.Net is a Periodic Table layout for Windows based on WPF and the .Net Framework 3.5 SP1, More info coming soon.Pob-Pong: A simple Pong game - the first Project of Elsor and Zakk.PodcastCasting: Podcast casting system for Podcasters that use voice talents in their storiesresolvendo.net: Projeto desenvolvido na 3 etapa do S2BSharepoint for TFS: Custom control to integrate files from Sharepoint to TFSShutdownAB: Windows service to shutdown computer after a backup.SSIS Restart Framework: A framework that provides restartability for SSIS 2012 projects. 
It is very much work in progress so at the current time, use at your own peril.Standalone Encrypted Sign In Library: The ECL library is a standalone connection library !stunserver: New version 1.1. This is the source code to STUNTMAN - an open source STUN server and client code by john selbie. Compliant with the latest RFCs including 5389, 5769, and 5780. Also includes backwards compatibility for RFC 3489. The stun server code is part of a larger personal project involving P2P file sharing and NAT traversal. Version 1.1 compiles on Linux, MacOS, BSD, and Solaris. Additional features are in development. www.stunprotocol.orgTask Scheduler Assistant: This is a very simple Windows service that watches folders/files and triggers an associated Scheduled Task accordingly. A simple * wildcard scheme is used for file triggers. Wildcards can only be in the filename and not in the path. This was initially built to be a Task Scheduler trigger from import processes from clients, providing the missing Trigger type from Window's Task Scheduler.Tech4WPF: Tech4WPF make it easier for developers to create technical applications. You will no longer have to create your own user controlls like knobs, gauges and simple charts. It's developed in C#/WPF. This project was inspirated by Qwt - Qt Widgets for Technical Applications http://qwt.sourceforge.net/. It's not port, but similar project, creating controls for technical aplications using .NET framework, WPF and all benefits of this platform like binding etc. Tech4WPF was created as a bachelo...TestProject_Git: git projectVS Templates for generating Duet Workspace sites: This project provides VS templates to easily create Duet Enterprise workspace sites.WinTest: This is a winform application. It need net framework 3.5 or higher version.WPF Progressive FizzBuzz Coding Assignment: Classic "FizzBuzz".YahalomProject: YahalomProject is for testing and using codeplex , tfs...Zorbo: Zorbo Server library was designed to add a unique twist to the Ares Galaxy P2P community. It features a large and detailed plugin architecture that allows developers to create rich chatroom experiences while being light-weight and fast.

    Read the article

  • Building applications with WCF - Intro

    - by skjagini
    I am going to write a series of articles on using Windows Communication Foundation (WCF) to develop client and server applications, and this is the first part of that series.
What is WCF
As Juval puts it in his Programming WCF book, WCF provides an SDK for developing and deploying services on Windows: it provides a runtime environment to expose CLR types as services and to consume services as CLR types. Building services with WCF is incredibly easy, and its implementation provides a set of industry standards and off-the-shelf plumbing including service hosting, instance management, reliability, transaction management, security, etc., such that it greatly increases productivity.
Scenario: Let's consider a typical bank customer trying to create an account, deposit funds and transfer funds between accounts, i.e. checking and savings. To make it interesting, we are going to divide the functionality into multiple services, each of them working with the database directly. We will run test cases with and without transactional support across services. In this post we will build contracts, services, the data access layer and unit tests to verify end-to-end communication; nothing big here, and we will dig into other features of WCF in subsequent posts with incremental changes.
In any distributed architecture we have two pieces, i.e. services and clients. Services, as the name implies, provide functionality to execute various pieces of business logic on the server, and clients provide interaction to the end user. Services can be built with Web Services or with WCF. Services built on WCF have the advantage of being binding-independent, i.e. they can run against TCP and HTTP protocols without any significant changes to the code.
Solution
Services
  Profile: for creating a new bank customer and getting details about an existing customer (ProfileContract, ProfileService)
  Checking Account: to get the checking account balance, deposit or withdraw an amount (CheckingAccountContract, CheckingAccountService)
  Savings Account: to get the savings account balance, deposit or withdraw an amount (SavingsAccountContract, SavingsAccountService)
ServiceHost: to host the services, i.e. run them at a particular address, binding and contract where clients can connect
Client: helps the end user use the services, e.g. creating an account and transferring amounts between accounts
BankDAL: data access layer to work with the database
BankDAL
It's a no-brainer to use an ORM, as many mature products are currently available in the market, including Linq2Sql, Entity Framework (EF), LLBLGen Pro, etc. For this exercise I am going to use Entity Framework 4.0, CTP 5, with the code first approach. There are two approaches when working with data: data driven and code driven. In data driven we start by designing tables and their constraints in the database and generate entities in code, while in the code driven (code first) approach entities are defined in code and the metadata generated from the entities is used by EF to create tables and table constraints. In previous versions the entity classes had to derive from EF-specific base classes. In EF 4 it is not required to derive from any EF classes; the entities are not only persistence ignorant but also enable full test driven development using mock frameworks. The application consists of 3 entities: a Customer entity which contains customer details, and CheckingAccount and SavingsAccount to hold the respective account balances.
We could have introduced an Account base class for CheckingAccount and SavingsAccount, which is certainly possible with EF mappings, but to keep it simple we are just going to follow a 1-to-1 mapping between entities and tables. Let's start out by defining a class called Customer which will be mapped to the Customer table; observe that the class is simply a plain old CLR object (POCO) and has no reference to EF at all.

    using System;

    namespace BankDAL.Model
    {
        public class Customer
        {
            public int Id { get; set; }
            public string FullName { get; set; }
            public string Address { get; set; }
            public DateTime DateOfBirth { get; set; }
        }
    }

In order to inform EF about the Customer entity we have to define a database context with properties of type DbSet<> for every POCO which needs to be mapped to a table in the database. EF uses convention over configuration to generate the metadata, resulting in much less configuration.

    using System.Data.Entity;

    namespace BankDAL.Model
    {
        public class BankDbContext : DbContext
        {
            public DbSet<Customer> Customers { get; set; }
        }
    }

Entity constraints can be defined through attributes on the Customer class or using fluent syntax (no need to wrestle with XML files) in a CustomerConfiguration class. By defining constraints in a separate class we can maintain clean POCOs without corrupting the entity classes with database-specific information.

    using System;
    using System.Data.Entity.ModelConfiguration;

    namespace BankDAL.Model
    {
        public class CustomerConfiguration : EntityTypeConfiguration<Customer>
        {
            public CustomerConfiguration()
            {
                Initialize();
            }

            private void Initialize()
            {
                //Setting the Primary Key
                this.HasKey(e => e.Id);

                //Setting required fields
                this.HasRequired(e => e.FullName);
                this.HasRequired(e => e.Address);
                //Todo: Can't create required constraint as DateOfBirth is not reference type, research it
                //this.HasRequired(e => e.DateOfBirth);
            }
        }
    }

Any queries executed against the Customers property in BankDbContext are executed against the Customers table. By convention EF looks for a connection string with the key BankDbContext when working with the context.
We are going to define a helper class to work with the Customer entity, with methods for querying, adding a new entity, etc.; these are known as repository classes, i.e., CustomerRepository.

    using System;
    using System.Data.Entity;
    using System.Linq;
    using BankDAL.Model;

    namespace BankDAL.Repositories
    {
        public class CustomerRepository
        {
            private readonly IDbSet<Customer> _customers;

            public CustomerRepository(BankDbContext bankDbContext)
            {
                if (bankDbContext == null)
                    throw new ArgumentNullException();
                _customers = bankDbContext.Customers;
            }

            public IQueryable<Customer> Query()
            {
                return _customers;
            }

            public void Add(Customer customer)
            {
                _customers.Add(customer);
            }
        }
    }

From the above code it is observable that the Query method returns customers as IQueryable, i.e. customers are retrieved only when actually used, i.e. iterated. Returning IQueryable also allows filtering and joining statements to be executed from the business logic using lambda expressions without cluttering the data access layer with tens of methods.
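As a quick illustration of that last point, here is a minimal sketch (not part of the original article) of how business logic might compose an extra filter on top of CustomerRepository.Query(). The CustomerQueries class, the FindCustomersBornBefore method and the BankClientSamples namespace are hypothetical names; Customer, BankDbContext and CustomerRepository are the classes defined above.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using BankDAL.Model;
    using BankDAL.Repositories;

    namespace BankClientSamples
    {
        public static class CustomerQueries
        {
            // The composed query is only translated and executed against the database
            // when the result is enumerated (here, by ToList).
            public static List<Customer> FindCustomersBornBefore(BankDbContext context, DateTime cutoff)
            {
                var repository = new CustomerRepository(context);
                return repository.Query()
                                 .Where(c => c.DateOfBirth < cutoff)
                                 .OrderBy(c => c.FullName)
                                 .ToList();
            }
        }
    }

The data access layer keeps only Query() and Add(), while callers add whatever filtering or ordering they need at the point of use.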
Our CheckingAccountRepository and SavingsAccountRepository look very similar to each other using System; using System.Data.Entity; using System.Linq; using BankDAL.Model;   namespace BankDAL.Repositories { public class CheckingAccountRepository { private readonly IDbSet<CheckingAccount> _checkingAccounts;   public CheckingAccountRepository(BankDbContext bankDbContext) { if (bankDbContext == null) throw new ArgumentNullException(); _checkingAccounts = bankDbContext.CheckingAccounts; }   public IQueryable<CheckingAccount> Query() { return _checkingAccounts; }   public void Add(CheckingAccount account) { _checkingAccounts.Add(account); }   public IQueryable<CheckingAccount> GetAccount(int customerId) { return (from act in _checkingAccounts where act.CustomerId == customerId select act); }   } } The repository classes look very similar to each other for Query and Add methods, with the help of C# generics and implementing repository pattern (Martin Fowler) we can reduce the repeated code. Jarod from ElegantCode has posted an article on how to use repository pattern with EF which we will implement in the subsequent articles along with WCF Unity life time managers by Drew Contracts It is very easy to follow contract first approach with WCF, define the interface and append ServiceContract, OperationContract attributes. IProfile contract exposes functionality for creating customer and getting customer details.   using System; using System.ServiceModel; using BankDAL.Model;   namespace ProfileContract { [ServiceContract] public interface IProfile { [OperationContract] Customer CreateCustomer(string customerName, string address, DateTime dateOfBirth);   [OperationContract] Customer GetCustomer(int id);   } }   ICheckingAccount contract exposes functionality for working with checking account, i.e., getting balance, deposit and withdraw of amount. ISavingsAccount contract looks the same as checking account.   using System.ServiceModel;   namespace CheckingAccountContract { [ServiceContract] public interface ICheckingAccount { [OperationContract] decimal? GetCheckingAccountBalance(int customerId);   [OperationContract] void DepositAmount(int customerId,decimal amount);   [OperationContract] void WithdrawAmount(int customerId, decimal amount);   } }   Services   Having covered the data access layer and contracts so far and here comes the core of the business logic, i.e. services.   
ProfileService implements the IProfile contract for creating customer and getting customer detail using CustomerRepository.
using System; using System.Linq; using System.ServiceModel; using BankDAL; using BankDAL.Model; using BankDAL.Repositories; using ProfileContract;   namespace ProfileService { [ServiceBehavior(IncludeExceptionDetailInFaults = true)] public class Profile: IProfile { public Customer CreateAccount( string customerName, string address, DateTime dateOfBirth) { Customer cust = new Customer { FullName = customerName, Address = address, DateOfBirth = dateOfBirth };   using (var bankDbContext = new BankDbContext()) { new CustomerRepository(bankDbContext).Add(cust); bankDbContext.SaveChanges(); } return cust; }   public Customer CreateCustomer(string customerName, string address, DateTime dateOfBirth) { return CreateAccount(customerName, address, dateOfBirth); } public Customer GetCustomer(int id) { return new CustomerRepository(new BankDbContext()).Query() .Where(i => i.Id == id).FirstOrDefault(); }   } } From the above code you shall observe that we are calling bankDBContext’s SaveChanges method and there is no save method specific to customer entity because EF manages all the changes centralized at the context level and all the pending changes so far are submitted in a batch and it is represented as Unit of Work. Similarly Checking service implements ICheckingAccount contract using CheckingAccountRepository, notice that we are throwing overdraft exception if the balance falls by zero. WCF has it’s own way of raising exceptions using fault contracts which will be explained in the subsequent articles. SavingsAccountService is similar to CheckingAccountService. using System; using System.Linq; using System.ServiceModel; using BankDAL.Model; using BankDAL.Repositories; using CheckingAccountContract;   namespace CheckingAccountService { [ServiceBehavior(IncludeExceptionDetailInFaults = true)] public class Checking:ICheckingAccount { public decimal? GetCheckingAccountBalance(int customerId) { using (var bankDbContext = new BankDbContext()) { CheckingAccount account = (new CheckingAccountRepository(bankDbContext) .GetAccount(customerId)).FirstOrDefault();   if (account != null) return account.Balance;   return null; } }   public void DepositAmount(int customerId, decimal amount) { using(var bankDbContext = new BankDbContext()) { var checkingAccountRepository = new CheckingAccountRepository(bankDbContext); CheckingAccount account = (checkingAccountRepository.GetAccount(customerId)) .FirstOrDefault();   if (account == null) { account = new CheckingAccount() { CustomerId = customerId }; checkingAccountRepository.Add(account); }   account.Balance = account.Balance + amount; if (account.Balance < 0) throw new ApplicationException("Overdraft not accepted");   bankDbContext.SaveChanges(); } } public void WithdrawAmount(int customerId, decimal amount) { DepositAmount(customerId, -1*amount); } } }   BankServiceHost The host acts as a glue binding contracts with it’s services, exposing the endpoints. The services can be exposed either through the code or configuration file, configuration file is preferred as it allows run time changes to service behavior even after deployment. We have 3 services and for each of the service you need to define name (the class that implements the service with fully qualified namespace) and endpoint known as ABC, i.e. address, binding and contract. 
We are using netTcpBinding and have defined a base address for each of the contracts:

    <system.serviceModel>
      <services>
        <service name="ProfileService.Profile">
          <endpoint binding="netTcpBinding" contract="ProfileContract.IProfile"/>
          <host>
            <baseAddresses>
              <add baseAddress="net.tcp://localhost:1000/Profile"/>
            </baseAddresses>
          </host>
        </service>
        <service name="CheckingAccountService.Checking">
          <endpoint binding="netTcpBinding" contract="CheckingAccountContract.ICheckingAccount"/>
          <host>
            <baseAddresses>
              <add baseAddress="net.tcp://localhost:1000/Checking"/>
            </baseAddresses>
          </host>
        </service>
        <service name="SavingsAccountService.Savings">
          <endpoint binding="netTcpBinding" contract="SavingsAccountContract.ISavingsAccount"/>
          <host>
            <baseAddresses>
              <add baseAddress="net.tcp://localhost:1000/Savings"/>
            </baseAddresses>
          </host>
        </service>
      </services>
    </system.serviceModel>

We have to open the services by creating service hosts, which will handle the incoming requests from clients.

    using System;

    namespace ServiceHost
    {
        class Program
        {
            static void Main(string[] args)
            {
                CreateHosts();
                Console.ReadLine();
            }

            private static void CreateHosts()
            {
                CreateHost(typeof(ProfileService.Profile), "Profile Service");
                CreateHost(typeof(SavingsAccountService.Savings), "Savings Account Service");
                CreateHost(typeof(CheckingAccountService.Checking), "Checking Account Service");
            }

            private static void CreateHost(Type type, string hostDescription)
            {
                System.ServiceModel.ServiceHost host = new System.ServiceModel.ServiceHost(type);
                host.Open();

                if (host.ChannelDispatchers != null && host.ChannelDispatchers.Count != 0
                    && host.ChannelDispatchers[0].Listener != null)
                    Console.WriteLine("Started: " + host.ChannelDispatchers[0].Listener.Uri);
                else
                    Console.WriteLine("Failed to start:" + hostDescription);
            }
        }
    }

BankClient
The client has no knowledge of the service business logic other than the functionality exposed through the contracts, the endpoints and a proxy to work against. The endpoint data and service proxy can be generated by right-clicking on the project references and choosing 'Add Service Reference', then entering the service endpoint address. Or, if you have access to the source and both the server and client are built on the .NET Framework, you can manually reference the contract DLLs and update the client's configuration file to point to the service endpoints. One of the pros of the manual approach is that you don't have to work against messy generated code files.
<system.serviceModel> <client> <endpoint name="tcpProfile" address="net.tcp://localhost:1000/Profile" binding="netTcpBinding" contract="ProfileContract.IProfile"/> <endpoint name="tcpCheckingAccount" address="net.tcp://localhost:1000/Checking" binding="netTcpBinding" contract="CheckingAccountContract.ICheckingAccount"/> <endpoint name="tcpSavingsAccount" address="net.tcp://localhost:1000/Savings" binding="netTcpBinding" contract="SavingsAccountContract.ISavingsAccount"/>   </client> </system.serviceModel> The client uses a façade to connect to the services   using System.ServiceModel; using CheckingAccountContract; using ProfileContract; using SavingsAccountContract;   namespace Client { public class ProxyFacade { public static IProfile ProfileProxy() { return (new ChannelFactory<IProfile>("tcpProfile")).CreateChannel(); }   public static ICheckingAccount CheckingAccountProxy() { return (new ChannelFactory<ICheckingAccount>("tcpCheckingAccount")) .CreateChannel(); }   public static ISavingsAccount SavingsAccountProxy() { return (new ChannelFactory<ISavingsAccount>("tcpSavingsAccount")) .CreateChannel(); }   } }   With that in place, lets get our unit tests going   using System; using System.Diagnostics; using BankDAL.Model; using NUnit.Framework; using ProfileContract;   namespace Client { [TestFixture] public class Tests { private void TransferFundsFromSavingsToCheckingAccount(int customerId, decimal amount) { ProxyFacade.CheckingAccountProxy().DepositAmount(customerId, amount); ProxyFacade.SavingsAccountProxy().WithdrawAmount(customerId, amount); }   private void TransferFundsFromCheckingToSavingsAccount(int customerId, decimal amount) { ProxyFacade.SavingsAccountProxy().DepositAmount(customerId, amount); ProxyFacade.CheckingAccountProxy().WithdrawAmount(customerId, amount); }     [Test] public void CreateAndGetProfileTest() { IProfile profile = ProxyFacade.ProfileProxy(); const string customerName = "Tom"; int customerId = profile.CreateCustomer(customerName, "NJ", new DateTime(1982, 1, 1)).Id; Customer customer = profile.GetCustomer(customerId); Assert.AreEqual(customerName,customer.FullName); }   [Test] public void DepositWithDrawAndTransferAmountTest() { IProfile profile = ProxyFacade.ProfileProxy(); string customerName = "Smith" + DateTime.Now.ToString("HH:mm:ss"); var customer = profile.CreateCustomer(customerName, "NJ", new DateTime(1982, 1, 1)); // Deposit to Savings ProxyFacade.SavingsAccountProxy().DepositAmount(customer.Id, 100); ProxyFacade.SavingsAccountProxy().DepositAmount(customer.Id, 25); Assert.AreEqual(125, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id)); // Withdraw ProxyFacade.SavingsAccountProxy().WithdrawAmount(customer.Id, 30); Assert.AreEqual(95, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id));   // Deposit to Checking ProxyFacade.CheckingAccountProxy().DepositAmount(customer.Id, 60); ProxyFacade.CheckingAccountProxy().DepositAmount(customer.Id, 40); Assert.AreEqual(100, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id)); // Withdraw ProxyFacade.CheckingAccountProxy().WithdrawAmount(customer.Id, 30); Assert.AreEqual(70, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));   // Transfer from Savings to Checking TransferFundsFromSavingsToCheckingAccount(customer.Id,10); Assert.AreEqual(85, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id)); Assert.AreEqual(80, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));   // Transfer 
from Checking to Savings
        TransferFundsFromCheckingToSavingsAccount(customer.Id, 50);
        Assert.AreEqual(135, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customer.Id));
        Assert.AreEqual(30, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customer.Id));
    }

    [Test]
    public void FundTransfersWithOverDraftTest()
    {
        IProfile profile = ProxyFacade.ProfileProxy();
        string customerName = "Angelina" + DateTime.Now.ToString("HH:mm:ss");

        var customerId = profile.CreateCustomer(customerName, "NJ", new DateTime(1972, 1, 1)).Id;

        ProxyFacade.SavingsAccountProxy().DepositAmount(customerId, 100);
        TransferFundsFromSavingsToCheckingAccount(customerId, 80);
        Assert.AreEqual(20, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customerId));
        Assert.AreEqual(80, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customerId));

        try
        {
            TransferFundsFromSavingsToCheckingAccount(customerId, 30);
        }
        catch (Exception e)
        {
            Debug.WriteLine(e.Message);
        }

        Assert.AreEqual(110, ProxyFacade.CheckingAccountProxy().GetCheckingAccountBalance(customerId));
        Assert.AreEqual(20, ProxyFacade.SavingsAccountProxy().GetSavingsAccountBalance(customerId));
    }
  }
}

We are creating a new instance of the channel for every operation; we will look into instance management and how creating a new channel instance affects it in subsequent articles. The first two test cases deal with the creation of a Customer and the deposit, withdrawal and transfer of money between accounts. The last case, FundTransfersWithOverDraftTest(), is interesting. The customer starts by depositing $100 in the savings account, followed by a transfer of $80 into the checking account, leaving $20 in savings. The customer then initiates a $30 transfer from savings to checking, resulting in an overdraft exception on the savings side while the $30 is still deposited to checking. As we are not running both requests in a transaction, the customer ends up with more money than the $100 he started with. In subsequent posts we will look into transaction handling. Make sure the ServiceHost project is set as the startup project and start the solution. Run the test cases either from the NUnit client or from TestDriven.Net/ReSharper, whichever is your favorite tool. Make sure you have updated the database connection string in the ServiceHost config file to point to your local database.
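For reference, a minimal sketch of what that connection string entry might look like in the ServiceHost App.config, assuming a local SQL Server Express instance and a database named BankDAL (both assumptions); only the name BankDbContext is dictated by the EF convention mentioned earlier.

    <configuration>
      <connectionStrings>
        <!-- EF code first looks up a connection string whose name matches the context class -->
        <add name="BankDbContext"
             connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=BankDAL;Integrated Security=True"
             providerName="System.Data.SqlClient" />
      </connectionStrings>
    </configuration>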

    Read the article

  • CodePlex Daily Summary for Saturday, June 11, 2011

    CodePlex Daily Summary for Saturday, June 11, 2011Popular ReleasesSimplePlanner: v2.0b: For 2011-2012 Sem 1 ???2011-2012 ????Econ NetVert: 2.4.2.14 Production: Many thanks to paulhickman for testing and reporting issues. - this release has lots of bugfixes in codeconversion when using NRefactory 4.0 (local) and VisualStudio Projects conversionVisual Studio 2010 Help Downloader: 1.0.0.3: Domain name support for proxy Cleanup old packages bug Writing to EventLog with UAC enabled bug Small fixes & RefactoringMedia Companion: MC 3.406b weekly: With this version change a movie rebuild is required when first run -else MC will lock up on exit. Extract the entire archive to a folder which has user access rights, eg desktop, documents etc. Refer to the documentation on this site for the Installation & Setup Guide Important! If you find MC not displaying movie data properly, please try a 'movie rebuild' to reload the data from the nfo's into MC's cache. Fixes Movies Readded movie preference to rename invalid or scene nfo's to info ext...SCCM Client Actions Tool: SCCM Client Actions Tool v0.5.1: SCCM Client Actions Tool v0.5.1 is currently the most stable version and includes all of the functionality requested so far. It comes with following changes since last version: Fixed an incorrect path to x64 client setup folder. It comes as a ZIP file that contains three files: ClientActionsTool.hta – The tool itself. Cmdkey.exe – command line tool for managing cached credentials. This is needed for alternate credentials feature when running the HTA on Windows XP. Cmdkey.exe is natively a...Windows Azure VM Assistant: AzureVMAssist V1.0.0.5: AzureVMAssist V1.0.0.5 (Debug) - Test Release VersionSizeOnDisk: 1.8.0.3: Fix (issue 317): Main window icon loading error on windows server 2003 32bit runing on x86 cpu. Bypass of the Microsoft windows comexception.NetOffice - The easiest way to use Office in .NET: NetOffice Release 0.9: Changes: - fix examples (include issue 16026) - add new examples - 32Bit/64Bit Walkthrough is now available in technical Documentation. Includes: - Runtime Binaries and Source Code for .NET Framework:......v2.0, v3.0, v3.5, v4.0 - Tutorials in C# and VB.Net:..............................................................COM Proxy Management, Events, etc. - Examples in C# and VB.Net:............................................................Excel, Word, Outlook, PowerPoint, Access - COMAddi...Reusable Library: V1.1.3: A collection of reusable abstractions for enterprise application developerClosedXML - The easy way to OpenXML: ClosedXML 0.54.0: New on this release: 1) Mayor performance improvements. 2) AdjustToContents now take into account the text rotation. 3) Fixed issues 6782, 6784, 6788HTML-IDEx: HTML-IDEx .15 ALPHA: This release fixes line counting a little bit and adds the masshighlight() sub, which highlights pasted and inserted code.AutoLoL: AutoLoL v2.0.3: - Improved summoner spells are now displayed - Fixed some of the startup errors people got - Double clicking an item selects it - Some usability changes that make using AutoLoL just a little easier - Bug fixes AutoLoL v2 is not an update, but an entirely new version! Please install to a different directory than AutoLoL v1Host Profiles: Host Profiles 1.0: Host Profiles 1.0 Release Quickly modify host file Automatically flush dnsVidCoder: 0.9.2: Updated to HandBrake 4024svn. This fixes problems with mpeg2 sources: corrupted previews, incorrect progress indicators and encodes that incorrectly report as failed. 
Fixed a problem that prevented target sizes above 2048 MB.SharePoint Search XSL Samples: SharePoint 2010 Samples: I have updated some of the samples from the 2007 release. These all work in SharePoint 2010. I removed the Pivot on File Extension because SharePoint 2010 search has refiners that perform the same function.AcDown????? - Anime&Comic Downloader: AcDown????? v3.0 Beta5: ??AcDown?????????????,??????????????,????、????。?????Acfun????? ????32??64? Windows XP/Vista/7 ????????????? ??:????????Windows XP???,?????????.NET Framework 2.0???(x86)?.NET Framework 2.0???(x64),?????"?????????"??? ??v3.0 Beta5 ?????????? ???? ?? ???????? ???"????????"?? ????????????? ????????/???? ?? ???"????"??? ?? ??????????? ?? ?? ??????????? ?? ?????????????????? ??????????????????? ???????????????? ????????????Discussions???????? ????AcDown??????????????VFPX: GoFish 4 Beta 1: Current beta is Build 144 (released 2011-06-07 ) See the GoFish4 info page for details and video link: http://vfpx.codeplex.com/wikipage?title=GoFishShowUI: Write-UI -in PowerShell: ShowUI: ShowUI is a PowerShell module to help you write rich user interfaces in script.SharePoint 2010 FBA Pack: SharePoint 2010 FBA Pack 1.0.3: Fixed User Management screen when "RequiresQuestionAndAnswer" set to true Reply to Email Address can now be customized User Management page now only displays users that reside in the membership database Web parts have been changed to inherit from System.Web.UI.WebControls.WebParts.WebPart, so that they will display on anonymous application pages For installation and configuration steps see here.TerrariViewer: TerrariViewer v2.5: Added new items associated with Terraria v1.0.3 to the character editor. Fixed multiple bugs with Piggy Bank EditoryNew Projects.NET MVC Base by OST: The OST .NET MVC project is currently in infancy but contains html helper extensions, model binders and other extensions to the mvc 3 framework. It also contains a separate project for easily consuming JQGrid requests and return the correct JSON to the jQuery plugin.Asterisk AMI Proxy suite: The AMI Proxy suite project will bridge the gap between your asterisk phone system and your windows based applications. The AMI Proxy is a service that can be used to simply proxy AMI connections to your asterisk server but also adds call stat generation and line monitors.Augusta Developers Guild: The web site for the user group Augusta Developers GuildAzure Blob Pusher: File uploader allows bulk upload of files from client to Azure blob storage. FileSystemWatcher notices new files in configured directory. SQL Express database tracks files as they are pushed up to blob storage and then processed. Local Directory is cleaned up after acknowledgement.Basic Class Library: A collection of functions created to eased up the programmers life in order to solved basic task like validation, AmountTowords, Encrypt & Decryption it also has capability to Connect and retrieve sql data just in one line of code, and some other Basic but very Important task.Basic RTS Game using RolePlayingGame Tool kit based XNA: this is a basic RTS Game. i can make it easy by using RolePlayingGame Took Kit based XNA.Batch Scheduler using .Net 4, MEF and Entity framework 4.1 (Magic Unicorn): Simple Batch Architecture written on C#. Uses the .NET 4, MEF and Entity Framework 4.1 Code First. If you dont need a scheduler, just use the sample code. Agents can be scheduled through a central database table. Plug-ins (or jobs) are launched through MEF. 
DropBox Linker: The goal of DropBox Linker project is to improve DropBox service client usability. It intellectually automates copying URLs to clipboard after publishing a file. Multiple files at once is supported. Additionally, it corrects URL on file rename after its link been copied.Exchange Spigot for NodeXL: Exchange Spigot for NodeXL enables Microsoft Excel plugin NodeXL to collect messaging data from the Microsoft Exchange Server and display that data as a graph. It is developed in C#.G - Low Level Graphics API: A low level graphics api, in similar style to OpenGL, but more OO. Can drastically speed up OpenGL based apps. Makes full use of Net4.0 features such as lambada expressions where useful. (C#, OpenTK powered. This is not for C++) GpgAPI - A C# Api for Gpg: GpgAPI is a C# API for Gpg. Gpg is a command line software to encrypt, decrypt files with a symmetric or assymetric key. You can also managed public keys, import keys from the web, export your public keys, etc. GpgAPI is an interface to Gpg. In C#, with a few lines, you can execute gpg to encrypt, decrypt, etc. The license of this project is "GPL v3"; it is NOT GPL v2. GPL v3 is not present in the licenses list so I had to choose GPL v2. So if use GpgAPI, you must accept the GPL v...iDreamBoard: iDreamBoard is an application to ease the process of creating and editing dreamboard themes for iOS devices. UNDER DEVELOPMENTIETouch: Use IE Touch to access multitouch events from JavaScript in IE. This project uses some of the code from Windows 7 Multitouch .NET Interop Sample Library available in http://archive.msdn.microsoft.com/WindowsTouch/Release/ProjectReleases.aspx?ReleaseId=2127iMaker-Lion: Applications post installation pour Mac OS X sur PC.Informatic System Ma Chung University Alumni Center: Informatic System Ma Chung University Alumni Center is a website for alumni at Information System Faculty at Ma Chung University at Malang, IndonesiaJasc (just another script compressor): Just another script compressor, but that's by far the easiest to use! Use Jasc to merge compress your javascript (JS) and style sheets (CSS). Just drop the executable in your javascript / CSS folder and run it. Voyla! It will handle everything by itself. Compression is based on Microsoft Ajax Minifier. Jasc can handle single or multiple files, will accept regular expressions to remove specific lines from the source code, and can also optimize the results by shrinking variable names and twe...LINQ Extensions Library: A library of useful LINQ extensions allowing for arithmetic manipulation of sequences, statistical analysis, sequence generation, sequence manipulation, pattern detection and more.Mail2FotoFrame: Pulls emails from a POP3-Account, extracts picture-attachments and saves them to a SD-card, so you can view them on a fotoframe.Make web (or sharepoint) pages screenshots with Powershell: From a text file containing url, make screenshot of each page. This powershell script use iecapt.exe project. You can manage width format and delay for each captureModé Internationalé (E-Commerce Site): Mode International is a fictional clothing store, and this website project is meant to be the e-Commerce site for this store. 
So far, the project has been developed using Java Enterprise Edition and Sockets, but I am considering making the use of C# in ASP .NET in the future.Music House: Site dedicado a musica e afiliadosNiphor's ToolBox: ?????????Nordic nRF24L01+ .NET Micro Framework Driver: Library that enables .NET Micro Framework developers to use Nordic 2.4 GHz wireless tranceivers. This library can be used on any net mf device with SPI interface.Petey's Game Engine: Petey's Game Engine is a XNA 4.0 game engine with a featured blog describing each step of the development. This is a learning project so people may follow and learn as my project evolves.ReportGenerator: ReportGenerator converts XML reports generated by PartCover or NCover into a readable HTML report.Rise of the rabbits: Rise of the RabbitsSapaFinance: SapaFinanceSCSM Copy Object: SCSM Copy Object is a CodePlex project that enables users to select objects in the System Center Service Manager console, click a task in the task pane and create a copy of the selected object(s). Objects can also be copied from command line executables.SharePoint 2010 Language Pack Downloader: SharePoint 2010 Language Pack Downloader is a tool to quickly download multiple language pack files for the SharePoint 2010 Server platform.SharePoint XQuery Reporting Solution: My company's project needs this technology because of the current company has a lot of data is stored in SharePoint, through Infopath data collection, data storage and not all InfoPath field values ​​are saved into a database, a considerable number of data in XML File, which caused difficulties in query and print a report, need a solution. Simply explained the thinking: First step:Put SharePoint, InfoPath XML file later (I did a small tool XMLToDB) or through the Event Handle sync sav...The Pacman: A simple single player game similar to Pacman. Developed using XNA game framework and C#. Game demonstrates usage of fuzzy logic for controlling the AI ghosts. This game is developed to test the effect of fuzzy logic AI in Pacman game.TodoStrike: Todo Strike is a simple todo list keeper.WallpaperDeleter: Simple program to delete the current background wallpaper image.Windows Phone 7 Continuous Integration Testing Framework: WP7 CI is a port of the Silverlight Toolkit Test Framework with added CI support, allowing tests to be run in the emulator from the command lineWolfpack - Distributed System Monitoring: Wolfpack is an extensible, .Net windows service framework for running jobs to monitor your software, system & business KPI's. The data collected can be sent directly to Growl, Geckoboard, WCF, SQL, SQLite, NServiceBus. It comes loaded with some tasks but it's simple to implement your own!

    Read the article

  • Handling HumanTask attachments in Oracle BPM 11g PS4FP+ (II)

    - by ccasares
    Retrieving uploaded attachments -UCM- As stated in my previous blog entry, Oracle BPM 11g 11.1.1.5.1 (aka PS4FP) introduced a new cool feature whereby you can use Oracle WebCenter Content (previously known as Oracle UCM) as the repository for the human task attached documents. For more information about how to use or enable this feature, have a look here. The attachment scope (either TASK or PROCESS) also applies to UCM-attachments. But even with this other feature, one question might arise when using UCM attachments. How can I get them from within the process? The first answer would be to use the same getTaskAttachmentContents() XPath function already explained in my previous blog entry. In fact, that's the way it should be. But in Oracle BPM 11g 11.1.1.5.1 (PS4FP) and 11.1.1.6.0 (PS5) there's a bug that prevents you to do that. If you invoke such function against a UCM-attachment, you'll get a null content response (bug#13907552). Even if the attachment was correctly uploaded. While this bug gets fixed, next I will show a workaround that lets me to retrieve the UCM-attached documents from within a BPM process. Besides, the sample will show how to interact with WCC API from within a BPM process.Aside note: I suggest you to read my previous blog entry about Human Task attachments where I briefly describe some concepts that are used next, such as the execData/attachment[] structure. Sample Process I will be using the following sample process: A dummy UserTask using "HumanTask2" Human Task, followed by an Embedded Subprocess that will retrieve the attachments payload. In this case, and here's the key point of the sample, we will retrieve such payload using WebCenter Content WebService API (IDC): and once retrieved, we will write each of them back to a file in the server using a File Adapter service: In detail:  We will use the same attachmentCollection XSD structure and same BusinessObject definition as in the previous blog entry. However we create a separate variable, named attachmentUCM, based on such BusinessObject. We will still need to keep a copy of the HumanTask output's execData structure. Therefore we need to create a new variable of type TaskExecutionData (different one than the other used for non-UCM attachments): As in the non-UCM attachments flow, in the output tab of the UserTask mapping, we'll keep a copy of the execData structure: Now we get into the embedded subprocess that will retrieve the attachments' payload. First, and using an XSLT transformation, we feed the attachmentUCM variable with the following information: The name of each attachment (from execData/attachment/name element) The WebCenter Content ID of the uploaded attachment. This info is stored in execData/attachment/URI element with the format ecm://<id>. As we just want the numeric <id>, we need to get rid of the protocol prefix ("ecm://"). We do so with some XPath functions as detailed below: with these two functions being invoked, respectively: We, again, set the target payload element with an empty string, to get the <payload></payload> tag created. The complete XSLT transformation is shown below. Remember that we're using the XSLT for-each node to create as many target structures as necessary.  Once we have fed the attachmentsUCM structure and so it now contains the name of each of the attachments along with each WCC unique id (dID), it is time to iterate through it and get the payload. 
Therefore we will use a new embedded subprocess of type MultiInstance, that will iterate over the attachmentsUCM/attachment[] element: In each iteration we will use a Service activity that invokes WCC API through a WebService. Follow these steps to create and configure the Partner Link needed: Login to WCC console with an administrator user (i.e. weblogic). Go to Administration menu and click on "Soap Wsdls" link. We will use the GetFile service to retrieve a file based on its dID. Thus we'll need such service WSDL definition that can be downloaded by clicking the GetFile link. Save the WSDL file in your JDev project folder. In the BPM project's composite view, drag & drop a WebService adapter to create a new External Reference, based on the just added GetFile.wsdl. Name it UCM_GetFile. WCC services are secured through basic HTTP authentication. Therefore we need to enable the just created reference for that: Right-click the reference and click on Configure WS Policies. Under the Security section, click "+" to add the "oracle/wss_username_token_client_policy" policy The last step is to set the credentials for the security policy. For the sample we will use the admin user for WCC (weblogic/welcome1). Open the composite.xml file and select the Source view. Search for the UCM_GetFile entry and add the following highlighted elements into it:   <reference name="UCM_GetFile" ui:wsdlLocation="GetFile.wsdl">     <interface.wsdl interface="http://www.stellent.com/GetFile/#wsdl.interface(GetFileSoap)"/>     <binding.ws port="http://www.stellent.com/GetFile/#wsdl.endpoint(GetFile/GetFileSoap)"                 location="GetFile.wsdl" soapVersion="1.1">       <wsp:PolicyReference URI="oracle/wss_username_token_client_policy"                            orawsp:category="security" orawsp:status="enabled"/>       <property name="weblogic.wsee.wsat.transaction.flowOption"                 type="xs:string" many="false">WSDLDriven</property>       <property name="oracle.webservices.auth.username"                 type="xs:string">weblogic</property>       <property name="oracle.webservices.auth.password"                 type="xs:string">welcome1</property>     </binding.ws>   </reference> Now the new external reference is ready: Once the reference has just been created, we should be able now to use it from our BPM process. However we find here a problem. The WCC GetFile service operation that we will use, GetFileByID, accepts as input a structure similar to this one, where all element tags are optional: <get:GetFileByID xmlns:get="http://www.stellent.com/GetFile/">    <get:dID>?</get:dID>   <get:rendition>?</get:rendition>   <get:extraProps>      <get:property>         <get:name>?</get:name>         <get:value>?</get:value>      </get:property>   </get:extraProps></get:GetFileByID> and we need to fill up just the <get:dID> tag element. Due to some kind of restriction or bug on WCC, the rest of the tag elements must NOT be sent, not even empty (i.e.: <get:rendition></get:rendition> or <get:rendition/>). 
A sample request that performs the query just by the dID must be in the following format:

    <get:GetFileByID xmlns:get="http://www.stellent.com/GetFile/">
      <get:dID>12345</get:dID>
    </get:GetFileByID>

The issue here is that the simple mapping in BPM does create the empty tags, a sample result being as follows:

    <get:GetFileByID xmlns:get="http://www.stellent.com/GetFile/">
      <get:dID>12345</get:dID>
      <get:rendition/>
      <get:extraProps/>
    </get:GetFileByID>

Although the above structure is perfectly valid, it is not accepted by WCC. Therefore, we need to bypass the problem. The workaround we use (many others are available) is to add a Mediator component between the BPM process and the Service that simply copies the input structure from BPM but gets rid of the empty tags. Follow these steps to configure the Mediator:
Drag & drop a new Mediator component into the composite. Uncheck the creation of the SOAP bindings, use the Interface Definition from WSDL template and select the existing GetFile.wsdl.
Double-click the mediator to edit it. Add a static routing rule to the GetFileByID operation, of type Service, and select the References/UCM_GetFile/GetFileByID target service.
Create the request and reply XSLT mappers. Make sure you map only the dID element in the request, and do an auto-map for the whole response.
Finally, we can now add and configure the Service activity in the BPM process. Drag & drop it into the embedded subprocess, select the NormalizedGetFile service and the getFileByID operation, and map both the input and the output.
Once this embedded subprocess ends, we will have all attachments (name + payload) in the attachmentsUCM variable, which is the main goal of this sample. But in order to test that everything runs fine, we finish the sample by writing each attachment to a file. To that end we include a final embedded subprocess to concurrently iterate through each attachmentsUCM/attachment[] element. On each iteration we will use a Service activity that invokes a File Adapter write service. Here we have two important parameters to set. First, the payload itself: the File Adapter expects binary data in base64 format (string), so we have to map it using XPath (simple mapping doesn't recognize a String as a valid base64-binary target). Second, we must set the target filename using the Service Properties dialog box. Again, note how we're making use of the loopCounter index variable to get the right element within the embedded subprocess iteration.
The final blog entry about attachments will cover how to inject documents into Human Tasks from the BPM process and how to share attachments between different User Tasks. It will come soon. Again, once I finish with all the posts on this matter, I will upload the whole sample project to java.net.

    Read the article

  • CodePlex Daily Summary for Tuesday, October 09, 2012

    CodePlex Daily Summary for Tuesday, October 09, 2012Popular ReleasesScript SQL Server Configuration: Release 3.0.9: Release 3.0.9 Rewrote trigger scripting. If encrypted triggers are encountered they are listed in a commented line. Scripts event notifications Added an option to script a single database (in addition to the instance) using the /scriptdb parameter. Script user-defined end points Script Service Broker objects Skip database mail on Express EditionMicrosoft Ajax Minifier: Microsoft Ajax Minifier 4.69: Fix for issue #18766: build task should not build the output if it's newer than all the input files. Fix for Issue #18764: build taks -res switch not working. update build task to concatenate input source and then minify, rather than minify and then concatenate. include resource string-replacement root name in the assumed globals list. Stop replacing new Date().getTime() with +new Date -- the latter is smaller, but turns out it executes up to 45% slower. add CSS support for single-...D3 Loot Tracker: 1.5.2: now recording server ip for each drop.WinRT XAML Toolkit: WinRT XAML Toolkit - 1.3.3: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For compiled version use NuGet. You can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit Features Attachable Behaviors AwaitableUI extensions Controls Converters Debugging helpers Extension methods Imaging helpers IO helpers VisualTree helpers Samples Recent changes NOTE:...DevLib: 69721 binary dll: 69721 binary dllVidCoder: 1.4.4 Beta: Fixed inability to create new presets with "Save As".MCEBuddy 2.x: MCEBuddy 2.3.2: Changelog for 2.3.2 (32bit and 64bit) 1. Added support for generating XBMC XML NFO files for files in the conversion queue (store it along with the source video with source video name.nfo). Right click on the file in queue and select generate XML 2. UI bugifx, start and end trim box locations interchanged 3. Added support for removing commercials from non DVRMS/WTV files (MP4, AVI etc) 4. Now checking for Firewall port status before enabling (might help with some firewall problems) 5. User In...Sandcastle Help File Builder: SHFB v1.9.5.0 with Visual Studio Package: General InformationIMPORTANT: On some systems, the content of the ZIP file is blocked and the installer may fail to run. Before extracting it, right click on the ZIP file, select Properties, and click on the Unblock button if it is present in the lower right corner of the General tab in the properties dialog. This release supports the Sandcastle October 2012 Release (v2.7.1.0). It includes full support for generating, installing, and removing MS Help Viewer files. This new release suppor...Sofire Suite v1.6: XSqlModelGenerator.AddIn: 1、?? VS2010/2012 2、?? .NET FRAMEWORK 2.0 3、?? SOFIRE V1.6ClosedXML - The easy way to OpenXML: ClosedXML 0.68.0: ClosedXML now resolves formulas! Yes it finally happened. If you call cell.Value and it has a formula the library will try to evaluate the formula and give you the result. 
    For example:
        var wb = new XLWorkbook();
        var ws = wb.AddWorksheet("Sheet1");
        ws.Cell("A1").SetValue(1).CellBelow().SetValue(1);
        ws.Cell("B1").SetValue(1).CellBelow().SetValue(1);
        ws.Cell("C1").FormulaA1 = "\"The total value is: \" & SUM(A1:B2)";
        var...
    Json.NET: Json.NET 4.5 Release 10: New feature - Added Portable build to NuGet package. New feature - Added GetValue and TryGetValue with StringComparison to JObject. Change - Improved duplicate object reference id error message. Fix - Fixed error when comparing empty JObjects. Fix - Fixed SecAnnotate warnings. Fix - Fixed error when comparing DateTime JValue with a DateTimeOffset JValue. Fix - Fixed serializer sometimes not using DateParseHandling setting. Fix - Fixed error in JsonWriter.WriteToken when writing a DateT...
    Readable Passphrase Generator: KeePass Plugin 0.7.2: Changes: Tested against KeePass 2.20.1. Tested under Ubuntu 12.10 (and KeePass 2.20). Added GenerateAsUtf8 method returning the encrypted passphrase as a UTF8 byte array.
    patterns & practices: Prism: Prism for .NET 4.5: This release does not include any functionality changes over Prism 4.1 Desktop. These assemblies target .NET 4.5. These assemblies also were compiled against updated dependencies: Unity 3.0 and Common Service Locator (Portable Class Library).
    Snoop, the WPF Spy Utility: Snoop 2.8.0: Announcing Snoop 2.8.0! It's been exactly six months since the last release, and this one has a bunch of goodies in it. In particular, there is now a PowerShell scripting tab, compliments of Bailey Ling. With this tab, the possibilities are limitless. It basically lets you automate/script the application that you are Snooping. Bailey has a couple blog posts (one and two) on his tab already, and I am sure more is to come. Please note that if you do not have PowerShell installed, y...
    Z3: Z3 4.1.2: Minor fixes. Now, z3 compiles with gcc 4.7.x.
    .NET Micro Framework: .NET MF 4.3 (Beta): This is the 4.3 Beta version of the .NET Micro Framework. Feature list for v4.3: support for Visual Studio 2012 (including the Windows Desktop Express version); all v4.2 QFE features and bug fixes (PWM enhancements, lwIP and network driver reliability improvements, Analog Output, WinUSB and latest GCC support); improved diagnostic information for deployment; decreased boot time; bug fixes. Work Item 1736 - Create link for MFDeploy under start menu. Work Item 1504 - Customizing lwIP o...
    NTCPMSG: V1.2.0.0: Allocate an identifying cableid for each single connection cable. Server can send to a specified cableid directly.
    Team Foundation Server Word Add-in: Version 1.0.12.0622: Welcome to the Visual Studio Team Foundation Server Word Add-in. Supported environments: Microsoft Office Word 2007 and 2010 x86 (32-bit); Team Foundation Server 2010 Object Model; TFS 2010, 2012 and TFS Service supported, using TFS OM / Explorer 2010. Quality-bar details: tool has been reviewed by Visual Studio ALM Rangers; tool has been through an independent technical and quality review; all critical bugs have been resolved. Known issues / bugs: WI#43553 - The Acceptance criteria is not pu...
    UMD Editor (PC): UMDEditor V2.7: changelog at http://jianyun.org/archives/948.html (the original release notes for 2.7.0 (2012-10-3) are in Chinese and were lost to an encoding error in this summary; the recoverable points are that the executable was renamed to "UMDEditor.exe" and a 64-bit related bug was fixed)
    Untangler: Untangler 1.0.0: Add a missing file from first release.
    New Projects:
    3dBuzz: Denise's website for 3d projection mapping artists to develop a web community to show and discuss their work.
    Advanced DataGridView with Excel-like auto filter: Windows Forms DataGridView Control with Excel-like auto-filter context menu.
    azure media services admin panel: Simple azure media services dashboard. Upload media assets, queue encoder tasks and stream your audio\video assets.
    BackupCleaner.Net: A C#.Net-based tool for automatically removing old backups selectively. It can be used when you already make daily backups on disk, and want to clean them up while still keeping some of the ones made weeks, months or years ago.
    Bible Lib: BibleLib is a .net compatible library containing information for books, chapters and verses in the Old Testament.
    CRM 4.0 to CRM 2011 Queues Upgrades: for more information and documentation, please refer to http://mayankp.wordpress.com/2012/05/25/crm-4-0-to-crm-2011-queues-upgrades/
    CSV File Reader and Writer for .NET: C# classes for reading and writing CSV files. Support for multi-line fields, custom delimiter and quote characters, options for how empty lines are handled.
    DotNetNuke Boards: DNN Boards is a task management solution that allows each user to have their own task board, or social group members can collaborate within a single board.
    FarmaciasCruzAzul: Cruz Azul system.
    FindValueInDatabase: This solution finds the value you input in the whole given database and tells you which columns of which tables have the value you are looking for.
    Fish Tank: Social networking site for Aquarium enthusiasts.
    GSBA Apps: GSBA App for Windows 8.
    Heuristics for the Vehicle Routing Problem: This is the code for our class, IEMS 482.
    Icaro - UPN: Icaro UPN is a compilation of all the projects done in the classes of the different courses I teach; I hope it is useful.
    JSLogger - free logging library for JavaScript: JSLogger is a free JavaScript library for logging information during the lifetime of your client script. The goal is very simple: find every javascript error.
    Just little strategy: Just little strategy, written in C# using XNA Game Studio. Now under development.
    Lightweight Medata Reader: The Lightweight Metadata Reader (LMR) is a tooling friendly version of the CLR Reflection APIs that takes no dependency on the CLR Loader.
    My Solution: No description for it now.
    oden: (the original description is in Chinese and was lost to an encoding error)
    ooaavee.net: TODO
    Path Splitter: Path Splitter uses Roslyn to convert a method into a set of methods each equivalent to a distinct execution path. Assume annotations are added for use with Pex.
    Planning Poker for Azure: Planning Poker application allows distributed teams to play planning poker just using a web browser. It can be deployed to IIS or Azure cloud service.
    plasmatrim.net: A library for controlling the USB PlasmaTrim from a .net application.
    powersaver: A simple utility, which will turn off your monitor when you lock your work station.
    Project13251008: gter
    Project13261008: asdsa
    Project13271008: sdf
    Pyfus Reload: Server-side framework for the Dofus 1.29.1 game.
    Rei do Biscuit: E-commerce site for Rei do Biscuit.
    Sofire Suite v1.6: Sofire Suite v1.6
    Sofire XSql: Sofire XSql
    Sosa Analysis Module (SAM): Numerical analysis and data visualization program for SOSA psychological experiment software.
    SyncProject: The summary is coming soon... Just be patient!
    Tistory Syntax Highlighter: SyntaxHighlighter for Tistory (the original Korean description was lost to an encoding error).
    TJBHJJ: this is a website for a company!
    tricogol App: none
    tuts: My first SVN
    Watchr Change Control Management: Change control management for development and administrative teams focused on lean or agile processes.
    WCF Simple multi chat: This is a simple Windows Forms application that works like a 'chat room'. Multiple users can log in and chat with other users. The chat's core is powered by WCF.
    Windows Event Log Email Notification: This will read the Windows event logs based on the supplied XPath filters and email the specified addresses if there are matches.

    Read the article

  • difficulties in developing gantt chart using jquery plugins

    - by Lina
    Hi, I'm trying to develop a Gantt-like chart using jQuery plugins and ASP.NET MVC 2. On the y-axis I'd like to have the names of the people to whom tasks will be assigned, while on the x-axis I'd like to have the tasks. I haven't found good plugins for this, so any suggestions? It is also important to allow more than one task on a row, i.e. Sam can have more than one task on the same row. I'm thankful for your help, Lina

    Read the article

  • WCF, Timer Jobs, Web Service which is better ???

    - by kannan.ambadi
    I am working on a web application based on the ASP.NET 3.5 and WSS 3.0 platform. Recently I was given the following task: import bank statements using FTX (a desktop application) and parse those statements into the database every 24 hours. That is, I need to download the bank statements with the help of a desktop application (which I can call from a batch file), then go through each statement (a text file) and load that data into our database for future reference. As far as I know, .NET provides the following options to implement such functionality: SharePoint Timer Jobs, Web Services, WCF, and Windows Services. I would like to go for SharePoint Timer Jobs, but there are plans to move the whole application to the plain ASP.NET platform. I am interested in WCF since I don't have much experience with WCF applications, but I'm not in a position to take the final decision :) Which is the most suitable way to handle this kind of task? Please suggest.
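    For reference, the Windows Service option mentioned above could look roughly like the sketch below: a service whose timer fires every 24 hours, runs the statement download via the batch file, and then parses the resulting text files. This is only an illustrative outline, not part of the original question; the class name, file paths and the ImportStatements helper are invented for the example.

        using System;
        using System.Diagnostics;
        using System.IO;
        using System.ServiceProcess;
        using System.Timers;

        public class StatementImportService : ServiceBase
        {
            // Fires once every 24 hours.
            private readonly Timer _timer = new Timer(TimeSpan.FromHours(24).TotalMilliseconds);

            protected override void OnStart(string[] args)
            {
                _timer.Elapsed += (s, e) => RunImport();
                _timer.Start();
            }

            protected override void OnStop()
            {
                _timer.Stop();
            }

            private void RunImport()
            {
                // Call the existing batch file that drives the FTX desktop application.
                using (var proc = Process.Start(@"C:\Import\download-statements.bat"))
                {
                    proc.WaitForExit();
                }

                // Parse each downloaded statement file and push it into the database.
                foreach (string file in Directory.GetFiles(@"C:\Import\Statements", "*.txt"))
                {
                    ImportStatements(file); // hypothetical parser / DB loader
                }
            }

            private void ImportStatements(string path)
            {
                // Parsing and database access would go here.
            }
        }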

    Read the article

  • Html.DropDownListFor not behaving as expected ASP.net MVC

    - by rybl
    Hello, I am new to ASP.NET MVC and I am having trouble getting dropdown lists to work correctly. I have a strongly typed view that is attempting to use Html.DropDownListFor as follows:

        <%=Html.DropDownListFor(Function(model) model.Arrdep, Model.ArrdepOptions)%>

    I am populating the list with a property in my model as follows:

        Public ReadOnly Property ArrdepOptions() As List(Of SelectListItem)
            Get
                Dim list As New List(Of SelectListItem)
                Dim arriveListItem As New SelectListItem()
                Dim departListItem As New SelectListItem()
                arriveListItem.Text = "Arrive At"
                arriveListItem.Value = ArriveDepart.Arrive
                departListItem.Text = "Depart At"
                departListItem.Value = ArriveDepart.Depart
                Select Case Me.Arrdep
                    Case ArriveDepart.Arrive : arriveListItem.Selected = True
                    Case Else : departListItem.Selected = True
                End Select
                list.Add(departListItem)
                list.Add(arriveListItem)
                Return list
            End Get
        End Property

    The Select Case works fine and it sets the right SelectListItem as Selected, but when my view renders the dropdown list, no matter what is marked as selected, the generated HTML does not have anything selected. Am I doing something wrong or missing something? I can't for the life of me figure out what.
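    A likely explanation, offered here only as a suggestion and not as part of the original question: DropDownListFor normally determines the selected option from the value of the bound model property (model.Arrdep), not from the Selected flag on the SelectListItem objects, so the flag set in the Select Case tends to be ignored. A minimal sketch of that pattern, written in C# for brevity and with invented type names, builds the list without any Selected flags and lets the helper match the model value:

        using System.Collections.Generic;
        using System.Web.Mvc;

        public enum ArriveDepart { Arrive, Depart }

        public class TripModel
        {
            // The bound property drives which option is rendered as selected.
            public ArriveDepart Arrdep { get; set; }

            // Options expose only Text/Value; no Selected flags are needed.
            public IEnumerable<SelectListItem> ArrdepOptions
            {
                get
                {
                    yield return new SelectListItem { Text = "Depart At", Value = ArriveDepart.Depart.ToString() };
                    yield return new SelectListItem { Text = "Arrive At", Value = ArriveDepart.Arrive.ToString() };
                }
            }
        }

        // In the view: <%= Html.DropDownListFor(m => m.Arrdep, Model.ArrdepOptions) %>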

    Read the article

  • ErrorListProvider in VS2010 throws InvalidOperationException about IVsTaskList

    - by Ben Hall
    I'm trying to hook into the ErrorListProvider in VS2010 to provide some more feedback from my VS2010 extension add-in. The code is as follows:

        try
        {
            ErrorListProvider errorProvider = new ErrorListProvider(ServiceProvider);
            ErrorTask error = new ErrorTask();
            error.Category = TaskCategory.BuildCompile;
            error.Text = "ERROR!";
            errorProvider.Tasks.Add(error);
        }
        catch (InvalidOperationException) { }

    However, the following exception is thrown:

        System.InvalidOperationException was caught
        Message=The service 'Microsoft.VisualStudio.Shell.Interop.IVsTaskList' must be installed for this feature to work. Ensure that this service is available.
        Source=Microsoft.VisualStudio.Shell.10.0
        StackTrace:
            at Microsoft.VisualStudio.Shell.TaskProvider.get_VsTaskList()
            at Microsoft.VisualStudio.Shell.TaskProvider.Refresh()
            at Microsoft.VisualStudio.Shell.TaskProvider.TaskCollection.Add(Task task)

    Does anyone have any ideas why?
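    One possible cause, offered as a guess rather than a confirmed answer: the exception indicates that the IServiceProvider handed to ErrorListProvider cannot resolve shell services such as the task list. A common remedy is to construct the provider with a service provider that is rooted in the running VS instance, for example the Package instance itself, since Package implements IServiceProvider. A minimal sketch under that assumption (the package class name is invented):

        using Microsoft.VisualStudio.Shell;

        // Inside a class derived from Package (which itself implements IServiceProvider),
        // passing 'this' lets ErrorListProvider resolve shell services like the task list.
        public sealed class MyExtensionPackage : Package
        {
            private ErrorListProvider _errorProvider;

            protected override void Initialize()
            {
                base.Initialize();

                _errorProvider = new ErrorListProvider(this);
                _errorProvider.Tasks.Add(new ErrorTask
                {
                    Category = TaskCategory.BuildCompile,
                    Text = "ERROR!"
                });
            }
        }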

    Read the article

  • Handling nmake errorlevel/return codes

    - by tlianza
    Hi all, I have an nmake-based project which in turn calls the ASP.NET compiler, which can throw an error that nmake seems to recognize:

        NMAKE : fatal error U1077: 'C:\Windows\Microsoft.NET\Framework\v2.0.50727\aspnet_compiler.exe' : return code '0x1'

    However, when I call nmake from within a batch file, the environment variable %ERRORLEVEL% remains set at zero:

        nmake /NOLOGO
        echo BUILD RETURNING: %ERRORLEVEL%

    If I Ctrl-C the nmake task, I do end up getting a non-zero ERRORLEVEL (it's set to 2), so my assumption is that I'm able to catch errors okay, but nmake isn't bubbling up the non-zero exit code from its task. Or, at least, I'm mis-trapping it. Any help would be appreciated.

    Read the article

  • Sending and receiving a TMemoryStream using IdTCPClient and IdTCPServer

    - by Martin Melka
    I found Remy Lebeau's chat demo of the IdTCP components in XE2 and I wanted to play with it a little bit. (It can be found here.) I would like to send a picture using these components, and the best approach seems to be using TMemoryStream. If I send strings, the connection works fine and the strings are transmitted successfully; however, when I change it to a Stream instead, it doesn't work. Here is the code:

    Server:

        procedure TMainForm.IdTCPServerExecute(AContext: TIdContext);
        var
          rcvdMsg: string;
          ms: TMemoryStream;
        begin
          // This commented code is working, it receives and sends strings.
          // rcvdMsg := AContext.Connection.IOHandler.ReadLn;
          // LogMessage('<ServerExec> ' + rcvdMsg);
          //
          // TResponseSync.SendResponse(AContext, rcvdMsg);
          try
            ms := TMemoryStream.Create;
            AContext.Connection.IOHandler.ReadStream(ms);
            ms.SaveToFile('c:\networked.bmp');
          except
            LogMessage('Failed to receive', clred);
          end;
        end;

    Client:

        procedure TfrmMain.Button1Click(Sender: TObject);
        var
          ms: TMemoryStream;
          bmp: TBitmap;
          pic: TPicture;
          s: string;
        begin
          // Again, this code is working for sending strings.
          // s := edMsg.Text;
          // Client.IOHandler.WriteLn(s);
          ms := TMemoryStream.Create;
          pic := TPicture.Create;
          pic.LoadFromFile('c:\Back.png');
          bmp := TBitmap.Create;
          bmp.Width := pic.Width;
          bmp.Height := pic.Height;
          bmp.Canvas.Draw(0, 0, pic.Graphic);
          bmp.SaveToStream(ms);
          ms.Position := 0;
          Client.IOHandler.Write(ms);
          ms.Free;
        end;

    When I try to send the stream from the client, nothing observable happens (the breakpoint in OnExecute doesn't fire). However, when closing the programs (after sending the MemoryStream), two things happen:

    1. If the Client is closed first, only then does the except part get processed (the log displays the 'Failed to receive' error; however, even if I place a breakpoint on the first line of the try-except block, it somehow gets skipped and only the error is displayed).
    2. If the Server is closed first, the IDE doesn't change back from debug mode, the Client doesn't change its state to disconnected (as it normally does when the server disconnects), and after the Client is closed as well, an Access Violation error from the Server app appears.

    I guess this means that there is a thread of the Server still running and maintaining the connection. But no matter how much time I give it, it never completes the task of receiving the MemoryStream. Note: the server uses IdSchedulerOfThreadDefault and IdAntiFreeze, if that matters. As I can't find any reliable source of help for the revamped Indy 10 (it all appears to apply to the older Indy 10, or even Indy 9), I hope you can tell me what is wrong. Thanks

    - ANSWER -

    Server:

        procedure TMainForm.IdTCPServerExecute(AContext: TIdContext);
        var
          size: integer;
          ms: TMemoryStream;
        begin
          try
            ms := TMemoryStream.Create;
            size := AContext.Connection.IOHandler.ReadLongInt;
            AContext.Connection.IOHandler.ReadStream(ms, size);
            ms.SaveToFile('c:\networked.bmp');
          except
            LogMessage('Failed to receive', clred);
          end;
        end;

    Client:

        procedure TfrmMain.Button1Click(Sender: TObject);
        var
          ms: TMemoryStream;
          bmp: TBitmap;
          pic: TPicture;
        begin
          ms := TMemoryStream.Create;
          pic := TPicture.Create;
          pic.LoadFromFile('c:\Back.png');
          bmp := TBitmap.Create;
          bmp.Width := pic.Width;
          bmp.Height := pic.Height;
          bmp.Canvas.Draw(0, 0, pic.Graphic);
          bmp.SaveToStream(ms);
          ms.Position := 0;
          Client.IOHandler.Write(ms, 0, True);
          ms.Free;
        end;

    Read the article

  • MOSS Sharepoint - Holiday Approval /tracking

    - by nav
    Hi, Has anyone implemented a holiday approval / tracking workflow list in MOSS SharePoint 2007? Can anyone suggest other solutions? The solution below works fine, but I am specifically looking for a way to look up the manager of the user who created the holiday request list item in the workflow. I have followed this link, http://www.u2u.info/Blogs/Kevin/Lists/Posts/Post.aspx?ID=39, which shows you how to create a custom workflow approval. Below are the steps outlined by the link:

    1. User adds a new holiday item to the list.
    2. Workflow kicks off.
    3. The workflow has the manager hardcoded (I need a way to look this up, maybe from AD?? See the lookup sketch after this post) and creates a Task for them to review the request. If desired, this can include an email notification of the task.
    4. Manager reviews, adds comments and approves/denies the request.
    5. User is notified of the completed request.

    Many Thanks, Naveen
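    Since the question asks about pulling the manager from AD: one way, offered purely as an illustrative sketch and not as part of the original post, is to query Active Directory with System.DirectoryServices for the user's manager attribute. The LDAP path and account name below are invented placeholders.

        using System.DirectoryServices;

        static class ManagerLookup
        {
            // Returns the distinguished name of the user's manager, or null if not set.
            // The LDAP path and sAMAccountName are placeholders for this sketch.
            public static string GetManagerDn(string samAccountName)
            {
                using (var root = new DirectoryEntry("LDAP://DC=example,DC=com"))
                using (var searcher = new DirectorySearcher(root))
                {
                    searcher.Filter = "(&(objectClass=user)(sAMAccountName=" + samAccountName + "))";
                    searcher.PropertiesToLoad.Add("manager");

                    SearchResult result = searcher.FindOne();
                    if (result != null && result.Properties["manager"].Count > 0)
                    {
                        // e.g. "CN=Jane Doe,OU=Staff,DC=example,DC=com"
                        return (string)result.Properties["manager"][0];
                    }
                    return null;
                }
            }
        }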

    Read the article

  • How to Delete a Virtual Directory from an FTP Site in IIS 7 and IIS 7.5 using C#/VB.Net and WMI?

    - by Steve Johnson
    Hi all. I hope everybody is doing fine. I am trying to delete a virtual directory using WMI (the ServerManager class) and recreate it with different values. The problem I am facing is that the virtual directory is not getting deleted. Please help. Here is my code:

        Try
            Using mgr As New ServerManager()
                Dim site As Site = mgr.Sites(DomainName)
                Dim app As Application = site.Applications("/") '.CreateElement() '("/" & VirDirName)
                Dim VirDir As VirtualDirectory = app.VirtualDirectories.CreateElement()
                For Each VirDir In app.VirtualDirectories
                    If VirDir("path") = "/" & VirDirName Then
                        app.VirtualDirectories.Remove(VirDir)
                        Exit For
                    End If
                Next
                mgr.CommitChanges()
            End Using
        Catch Err As Exception
            Ex = Err
            Throw New Exception(Err.Message, Ex)
        End Try
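    For comparison, here is a minimal C# sketch of removing a virtual directory with Microsoft.Web.Administration (the library behind ServerManager). It is only an illustration: the site and virtual directory names are placeholders, and looking the directory up through the collection indexer is used instead of the VirDir("path") attribute comparison above.

        using Microsoft.Web.Administration;

        class RemoveVdirSample
        {
            static void Main()
            {
                // Site and virtual directory names are placeholders for this sketch.
                using (ServerManager mgr = new ServerManager())
                {
                    Site site = mgr.Sites["MyFtpSite"];
                    Application app = site.Applications["/"];

                    // Look the virtual directory up by its path ("/" + name).
                    VirtualDirectory vdir = app.VirtualDirectories["/Uploads"];
                    if (vdir != null)
                    {
                        app.VirtualDirectories.Remove(vdir);
                        mgr.CommitChanges(); // nothing is persisted until CommitChanges runs
                    }
                }
            }
        }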

    Read the article

  • What is the difference between NavigationService.Navigate() method and PhoneApplicationFrame.Source

    - by afriza
    Taken from Exercise 1: Creating Windows Phone Applications with Microsoft Visual Studio 2010 Express for Windows Phone.

    Task 3, Step 9:

        // navigate
        this.NavigationService.Navigate(new Uri("/PuzzlePage.xaml", UriKind.Relative));

    Note: The PhoneApplicationPage class provides methods and properties to navigate to pages through its NavigationService property. You can call the Navigate method of the NavigationService and pass the URI for the page as a parameter. You can also use the GoBack and GoForward methods to navigate backward or forward in the navigation history. The hardware back button also provides backward navigation within an application. The event handler shown above uses the NavigationService to go to the PuzzlePage.xaml page.

    Task 4, Step 3:

        (RootVisual as Microsoft.Phone.Controls.PhoneApplicationFrame).Source = new Uri("/ErrorPage.xaml", UriKind.Relative);

    Note: ... Whenever you set the Source property to a value that is different from the displayed content, the frame navigates to the new content. ...

    What are the differences and similarities of both techniques?

    Read the article

  • Good place to look for example Database Designs - Best practices

    - by Younes
    I have been given the task to design a database to store a lot of information for our company. Because the task is rather big and contains multiple modules where users should be able to do stuff, I'm worried about designing a good data model for this. I just don't want to end up with a badly designed database. I want to have some decent examples of database structures for contracts / billing / orders etc to combine those in one nice relational database. Are there any resources out there that can help me with some examples regarding this?

    Read the article

  • Can't clone gitosis-admin.git local from my snow leopard server running gitosis

    - by joggink
    I've installed gitosis as described here: http://gist.github.com/264304

    One of the things that I had to adjust was to give my git user permission to use SSH (which I did through Server Admin - Access - SSH, adding the 'git' user). When I ssh from my local machine (Mac OS X) as the git user, I get this response:

        PTY allocation request failed on channel 0
        bash: gitosis-serve: command not found
        Connection to 10.0.0.108 closed.

    Which I think is normal, because in Pro Git the author says you should get something like this if you try ssh'ing to the server using your git user:

        PTY allocation request failed on channel 0
        fatal: unrecognized command 'gitosis-serve schacon@quaternion'
        Connection to gitserver closed.

    So far so good, I think? Now when I try to clone my gitosis-admin.git repository using this command:

        $ git clone [email protected]:gitosis-admin.git

    I get this:

        Initialized empty Git repository in /Users/joggink/gitosis-admin/.git/
        bash: gitosis-serve: command not found
        fatal: The remote end hung up unexpectedly

    So after doing some searching, I found an answer here on Server Fault claiming I should use ssh:// as the protocol for my git clone (which I thought was the default git protocol?). However, when I try:

        $ git clone ssh://[email protected]:gitosis-admin.git

    This is the response:

        The authenticity of host ' (::1)' can't be established.
        RSA key fingerprint is 80:4d:77:c7:78:cb:c9:42:e3:82:06:7c:fe:c0:08:ce.
        Are you sure you want to continue connecting (yes/no)? yes
        Warning: Permanently added '' (RSA) to the list of known hosts.
        Password:
        Password:
        Password:
        Permission denied (publickey,keyboard-interactive).
        fatal: The remote end hung up unexpectedly

    When I type in my own password (because my git user has no password), I get the following error:

        The authenticity of host ' (::1)' can't be established.
        RSA key fingerprint is 80:4d:77:c7:78:cb:c9:42:e3:82:06:7c:fe:c0:08:ce.
        Are you sure you want to continue connecting (yes/no)? yes
        Warning: Permanently added '' (RSA) to the list of known hosts.
        Password:
        bash: git-upload-pack: command not found
        fatal: The remote end hung up unexpectedly

    I added my upload-pack location as follows:

        $ git clone -u /usr/local/git/bin/git-upload-pack ssh://[email protected]:gitosis-admin.git

    and I get the error that gitosis-admin.git isn't a git repo:

        Initialized empty Git repository in /Users/joggink/gitosis-admin/.git/
        The authenticity of host ' (::1)' can't be established.
        RSA key fingerprint is 80:4d:77:c7:78:cb:c9:42:e3:82:06:7c:fe:c0:08:ce.
        Are you sure you want to continue connecting (yes/no)? yes
        Warning: Permanently added '' (RSA) to the list of known hosts.
        Password:
        fatal: '[email protected]:gitosis-admin.git' does not appear to be a git repository
        fatal: The remote end hung up unexpectedly

    I've been searching for a solution for almost a week now, and every topic I've found on the internet gives no result...

    Read the article

< Previous Page | 221 222 223 224 225 226 227 228 229 230 231 232  | Next Page >