Search Results

Search found 2950 results on 118 pages for 'andrew martin'.


  • Need Advice on designing an ATL inproc server (DLL) that serves as both a source and a sink of events.

    - by Andrew
    Hi, I need to design an ATL inproc server that, besides exposing methods and properties, can also fire events (source) and serve as a sink for a third-party COM control that fires events. I would assume that this is a fairly common requirement. I can also foresee several "gotchas" that I would like to read up on before commencing the design. My questions/concerns are:

    - Can someone point me to an example?
    - Which threading model should I use?
    - Should I have a separate COM object for the sink?
    - Should I, and how do I, protect certain memory? For example, my server will receive data from the third-party control. It will save this and, in some cases, fire an event to interested clients. The interested clients will request the data through a standard method or property.

    I did try to research this myself. I can find many examples of COM servers that are sources, and some that are sinks, but never both. The only post I did find was this: http://www.generation-nt.com/us/atl-control-an-event-source-sink-help-9098542.html which strongly advocates putting the sink on a separate COM object. Any leads, tutorials or ideas would be much appreciated. Thanks, Andrew

  • Calling an Oracle PL/SQL procedure with Custom Object return types from ojdbc6 JDBC thin drivers

    - by Andrew Harmel-Law
    I'm writing some JDBC code which calls an Oracle 11g PL/SQL procedure which has a Custom Object return type. Whenever I try to register my return types, I get either ORA-03115 or PLS-00306 as an error when the statement is executed, depending on the type I set. An example is below.

    PL/SQL code:

        Procedure GetDataSummary (p_my_key IN KEYS.MY_KEY%TYPE,
                                  p_recordset OUT data_summary_tab,
                                  p_status OUT VARCHAR2);

    Java code:

        String query = "begin manageroleviewdata.getdatasummary(?, ?, ?); end;";
        CallableStatement stmt = conn.prepareCall(query);
        stmt.setInt(1, 83);
        stmt.registerOutParameter(2, OracleTypes.CURSOR); // Causes error: PLS-00306
        stmt.registerOutParameter(3, OracleTypes.VARCHAR);
        stmt.execute(); // Error mentioned above thrown here.

    Can anyone provide me with an example showing how I can do this? I guess it's possible. However, I can't see a rowset OracleType. CURSOR, REF, DATALINK, and more fail. Apologies if this is a dumb question. I'm not a PL/SQL expert and may have used the wrong terminology in some areas of my question. (If so, please edit me.) Thanks in advance. Regs, Andrew

  • What's the fastest way to bulk insert a lot of data in SQL Server (C# client)

    - by Andrew
    I am hitting some performance bottlenecks with my C# client inserting bulk data into a SQL Server 2005 database, and I'm looking for ways to speed up the process.

    I am already using SqlClient.SqlBulkCopy (which is based on TDS) to speed up the data transfer across the wire, which helped a lot, but I'm still looking for more. I have a simple table that looks like this:

        CREATE TABLE [BulkData](
            [ContainerId] [int] NOT NULL,
            [BinId] [smallint] NOT NULL,
            [Sequence] [smallint] NOT NULL,
            [ItemId] [int] NOT NULL,
            [Left] [smallint] NOT NULL,
            [Top] [smallint] NOT NULL,
            [Right] [smallint] NOT NULL,
            [Bottom] [smallint] NOT NULL,
            CONSTRAINT [PKBulkData] PRIMARY KEY CLUSTERED
            (
                [ContainerId] ASC,
                [BinId] ASC,
                [Sequence] ASC
            ))

    I'm inserting data in chunks that average about 300 rows, where ContainerId and BinId are constant in each chunk, the Sequence value is 0-n, and the values are pre-sorted based on the primary key.

    The %Disk time performance counter spends a lot of time at 100%, so it is clear that disk IO is the main issue, but the speeds I'm getting are several orders of magnitude below a raw file copy. Does it help any if I:

    - Drop the primary key while I am doing the inserting and recreate it later?
    - Do inserts into a temporary table with the same schema and periodically transfer them into the main table to keep the size of the table where insertions are happening small?
    - Anything else?

    Based on the responses I have gotten, let me clarify a little bit:

    Portman: I'm using a clustered index because when the data is all imported I will need to access data sequentially in that order. I don't particularly need the index to be there while importing the data. Is there any advantage to having a nonclustered PK index while doing the inserts, as opposed to dropping the constraint entirely for import?

    Chopeen: The data is being generated remotely on many other machines (my SQL server can only handle about 10 currently, but I would love to be able to add more). It's not practical to run the entire process on the local machine, because it would then have to process 50 times as much input data to generate the output.

    Jason: I am not doing any concurrent queries against the table during the import process; I will try dropping the primary key and see if that helps. ~ Andrew
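    For reference, here is a minimal sketch of the SqlBulkCopy approach the poster describes; the connection string is a placeholder and the batch size is an assumption matching the ~300-row chunks mentioned in the question:

        // Minimal sketch: stream a pre-built DataTable into [BulkData] with SqlBulkCopy.
        using System.Data;
        using System.Data.SqlClient;

        static void BulkInsert(DataTable chunk)
        {
            // Placeholder connection string - substitute your own server/database.
            using (var conn = new SqlConnection("Server=.;Database=MyDb;Integrated Security=true"))
            {
                conn.Open();
                using (var bulk = new SqlBulkCopy(conn))
                {
                    bulk.DestinationTableName = "BulkData";
                    bulk.BatchSize = 300; // assumption: one batch per ~300-row chunk
                    bulk.WriteToServer(chunk); // rows go over TDS, as the poster notes
                }
            }
        }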

  • Assigning specific menu administration rights for roles in Drupal

    - by Martin Andersson
    Hello folks. I'm trying to give one of my roles the administrative rights to add/remove content in a specific menu (but not all menus). I think I found a module that should enable something like this: http://drupalmodules.com/module/delegate-menu-administration

    I've followed the instructions: added the role to my user, checked the "administer some menus" value for that role, and checked the "Make admin" field for that role and the specific menu in Menus. I also gave the role permissions to change page and story content. However, it still won't let the user add the new content it creates to any menu, and I get an error message saying:

        warning: Invalid argument supplied for foreach() in /home/martin/www/drupal/modules/delegate_menu_admin/delegate_menu_admin.module on line 346.

    Line 346 looks like this:

        foreach ($form['menu']['parent']['#options'] as $key => $value) {

    I did a print_r($form); in the file just before it, and there's no such array that I can see:

        [menu] => Array
            (
                [#access] => 1
                [delete] => Array
                    (
                        [#access] =>
                    )
            )

    When I gave the role "administer menu" permissions, nothing extra was printed at all, leading me to the assumption that the delegate_menu_admin.module file is not used at all while both the "administer menu" and the "administer some menus" (from the delegate-menu-administration module) permissions are set! Is this some incompatibility in the module because of a Drupal update? Or am I just too tired and too stupid? :)

  • How to visualize a list of lists of lists of ... in R?

    - by Martin
    Hi there, I have a very deep list of lists in R. Now I want to print this list to the standard output to get a better overview of the elements. It should look like the way the StatET plugin for Eclipse shows a list. Example list:

        l6 = list()
        l6[["h"]] = "one entry"
        l6[["g"]] = "nice"
        l5 = list()
        l5[["e"]] = l6
        l4 = list()
        l4[["f"]] = "test"
        l4[["d"]] = l5
        l3 = list()
        l3[["c"]] = l4
        l2 = list()
        l2[["b"]] = l3
        l1 = list()
        l1[["a"]] = l2

    This should print like:

        List of 1
         $ a:List of 1
          ..$ b:List of 1
          .. ..$ c:List of 2
          .. .. ..$ f: chr "test"
          .. .. ..$ d:List of 1
          .. .. .. ..$ e:List of 2
          .. .. .. .. ..$ h: chr "one entry"
          .. .. .. .. ..$ g: chr "nice"

    I know this is possible with recursion and the depth. But is there a way to do this with the help of rapply or something like that? Thanx in advance, Martin

  • Loading the target of a link in a <DIV> via jQuery's .live event into the same <DIV>??

    - by Martin Pescador
    Hello together. I call a certain div from another page with jQuery to be loaded into a div on my main page, like this:

        <script type="text/javascript">
            $("#scotland").load("http://www.example.com/scotland .gallery");
        </script>
        <div id="scotland"></div>

    The div I call is a piece of code which is automatically generated by a CMS Made Simple module, by the way. Now it comes to my problem: the .gallery div I call looks, a little simplified, like this:

        <div class="gallery">
            <span><img src="http://www.example.com/scotlandimage1.jpg"></span>
            <span class="imgnavi"><a href="link_to_next_page_with_one_image">Next image</a></span>
        </div>

    I want the "Next image" link to load the next page into the .gallery div (it is always a page with one image on it). But what it does is open the new page http://www.example.com/scotland only. I tried to use jQuery's .live event to load the linked page (that would be "scotlandimage2" and the navigation, as you can see in the upper part - not only the image!), but I must have done something wrong. I tried different ways, but never got it to work. This was my last try:

        $(".imgnavi a").click(function() {
            var myUrl = $(this).attr("href");
            $(".gallery").load(myUrl);
            return false;
        });

    I have to admit that I am very new to jQuery... But does someone know what I did wrong (do I even follow the right handlers?)? Thanks very much in advance! Martin

  • jQuery accordion - different image for active sections

    - by Andrew Cassidy
    Hi, I'm using Ryan Stemkoski's "Stupid Simple Jquery Accordion Menu", which is available here: stemkoski.com/stupid-simple-jquery-accordion-menu/

    Here is the JavaScript:

        $(document).ready(function() {
            // ACCORDION BUTTON ACTION (ON CLICK DO THE FOLLOWING)
            $('.accordionButton').click(function() {
                // REMOVE THE ON CLASS FROM ALL BUTTONS
                $('.accordionButton').removeClass('on');
                // NO MATTER WHAT WE CLOSE ALL OPEN SLIDES
                $('.accordionContent').slideUp('normal');
                // IF THE NEXT SLIDE WASN'T OPEN THEN OPEN IT
                if($(this).next().is(':hidden') == true) {
                    // ADD THE ON CLASS TO THE BUTTON
                    $(this).addClass('on');
                    // OPEN THE SLIDE
                    $(this).next().slideDown('normal');
                }
            });

            /*** REMOVE IF MOUSEOVER IS NOT REQUIRED ***/
            // ADDS THE .OVER CLASS FROM THE STYLESHEET ON MOUSEOVER
            $('.accordionButton').mouseover(function() {
                $(this).addClass('over');
            // ON MOUSEOUT REMOVE THE OVER CLASS
            }).mouseout(function() {
                $(this).removeClass('over');
            });
            /*** END REMOVE IF MOUSEOVER IS NOT REQUIRED ***/

            // CLOSES ALL DIVS ON PAGE LOAD
            $('.accordionContent').hide();
        });

    and the CSS:

        #wrapper {
            width: 800px;
            margin-left: auto;
            margin-right: auto;
        }
        .accordionButton {
            width: 800px;
            float: left;
            _float: none; /* Float works in all browsers but IE6 */
            background: #003366;
            border-bottom: 1px solid #FFFFFF;
            cursor: pointer;
        }
        .accordionContent {
            width: 800px;
            float: left;
            _float: none; /* Float works in all browsers but IE6 */
            background: #95B1CE;
        }
        /* EXTRA STYLES ADDED FOR MOUSEOVER / ACTIVE EVENTS */
        .on {
            background: #990000;
        }
        .over {
            background: #CCCCCC;
        }

    There is an "on" class which provides the style for the accordionButton class when it is active, but I would like each active accordionButton to have a different image.

    http://www.thepool.ie

    For example, in the above site the word "WORK" should be a darker grey image when the work section is selected, and so should COLLAB when it is selected, etc. I can't figure out how to do this; any help would be greatly appreciated. Thanks, Andrew

  • How to scan an array for certain information

    - by Andrew Martin
    I've been doing an MSc Software Development conversion course, the main language of which is Java, since the end of September. We have our first assessed practical coming up and I was hoping for some guidance.

    We have to create an array that will store 100 integers (all of which are between 1 and 10), which are generated by a random number generator, and then print out ten numbers of this array per line. For the second part, we need to scan these integers, count up how often each number appears, and store the results in a second array.

    I've done the first bit okay, but I'm confused about how to do the second. I have been looking through the Scanner class to see if it has any methods which I could use, but I don't see any. Could anyone point me in the right direction - not the answer, but perhaps which library it comes from? Code so far:

        import java.util.Random;

        public class Practical4_Assessed {

            public static void main(String[] args) {
                Random numberGenerator = new Random();
                int[] arrayOfGenerator = new int[100];
                for (int countOfGenerator = 0; countOfGenerator < 100; countOfGenerator++)
                    arrayOfGenerator[countOfGenerator] = numberGenerator.nextInt(10);

                int countOfNumbersOnLine = 0;
                for (int countOfOutput = 0; countOfOutput < 100; countOfOutput++) {
                    if (countOfNumbersOnLine == 10) {
                        System.out.println("");
                        countOfNumbersOnLine = 0;
                        countOfOutput--;
                    } else {
                        System.out.print(arrayOfGenerator[countOfOutput] + " ");
                        countOfNumbersOnLine++;
                    }
                }
            }
        }

    Thanks, Andrew

  • Eclipse won't believe I have Maven 2.2.1

    - by Andrew Clegg
    I have a project (built from an AppFuse template) that requires Maven 2.2.1, so I upgraded to this (from 2.1.0) and set my path and my M2_HOME and MAVEN_HOME env variables. Then I ran mvn eclipse:eclipse and imported the project into Eclipse (Galileo). However, in the problems list for the project (and at the top of the pom.xml GUI editor) it says:

        Unable to build project '/export/people/clegg/data/GanymedeWorkspace/funcserve/pom.xml'; it requires Maven version 2.2.1

    (Please ignore 'GanymedeWorkspace', I really am using Galileo!)

    This persists whether I set Eclipse to use its embedded Maven implementation or the external 2.2.1 installation in the Preferences - Maven - Installations dialog. I've tried closing and reopening the project, reindexing the repository, cleaning the project, restarting the IDE, logging out and back in again - everything I can think of! But Eclipse still won't believe I have Maven 2.2.1.

    I just did a plugin update, so I have the latest version of Maven Integration for Eclipse - 0.9.8.200905041414. Does anyone know how to convince Eclipse I really do have the right version of Maven? It's like it's recorded the previous version somewhere else and won't pay any attention to my changes :-( Many thanks! Andrew.

  • Oracle syntax - should we have to choose between the old and the new?

    - by Martin Milan
    Hi, I work on a code base in the region of 1,000,000 lines of source, in a team of around eight developers. Our code is basically an application using an Oracle database, but the code has evolved over time (we have plenty of source code from the mid nineties in there!).

    A dispute has arisen amongst the team over the syntax that we are using for querying the Oracle database. At the moment, the overwhelming majority of our queries use the "old" Oracle syntax for joins, meaning we have code that looks like this...

    Example of inner join:

        select customers.*, orders.date, orders.value
        from customers, orders
        where customers.custid = orders.custid

    Example of outer join:

        select customers.custid, contacts.ContactName, contacts.ContactTelNo
        from customers, contacts
        where customers.custid = contacts.custid(+)

    As new developers have joined the team, we have noticed that some of them seem to prefer using SQL-92 queries, like this:

    Example of inner join:

        select customers.*, orders.date, orders.value
        from customers
        inner join orders on (customers.custid = orders.custid)

    Example of outer join:

        select customers.custid, contacts.ContactName, contacts.ContactTelNo
        from customers
        left join contacts on (customers.custid = contacts.custid)

    Group A say that everyone should be using the "old" syntax - we have lots of code in this format, and we ought to value consistency. We don't have time to go all the way through the code now rewriting database queries, and it wouldn't pay us if we had. They also point out that "this is the way we've always done it, and we're comfortable with it..."

    Group B, however, agree that we don't have the time to go back and change existing queries, but say we really ought to adopt the "new" syntax in code that we write from here on in. They say that developers only really look at a single query at a time, and that so long as developers know both syntaxes there is nothing to be gained from rigidly sticking to the old syntax, which might be deprecated at some point in the future.

    Without declaring with which group my loyalties lie, I am interested in hearing the opinions of impartial observers - so let the games commence! Martin.

    Ps. I've made this a community wiki so as not to be seen as just blatantly chasing after question points...

  • Why does ANTLR not parse the entire input?

    - by Martin Wiboe
    Hello, I am quite new to ANTLR, so this is likely a simple question. I have defined a simple grammar which is supposed to include arithmetic expressions with numbers and identifiers (strings that start with a letter and continue with one or more letters or numbers). The grammar looks as follows:

        grammar while;

        @lexer::header {
            package ConFreeG;
        }

        @header {
            package ConFreeG;

            import ConFreeG.IR.*;
        }

        @parser::members {
        }

        arith:
            term
            | '(' arith ( '-' | '+' | '*' ) arith ')'
            ;

        term returns [AExpr a]:
            NUM { int n = Integer.parseInt($NUM.text); a = new Num(n); }
            | IDENT { a = new Var($IDENT.text); }
            ;

        fragment LOWER : ('a'..'z');
        fragment UPPER : ('A'..'Z');
        fragment NONNULL : ('1'..'9');
        fragment NUMBER : ('0' | NONNULL);

        IDENT : ( LOWER | UPPER ) ( LOWER | UPPER | NUMBER )*;
        NUM : '0' | NONNULL NUMBER*;

        fragment NEWLINE : '\r'? '\n';
        WHITESPACE : ( ' ' | '\t' | NEWLINE )+ { $channel=HIDDEN; };

    I am using ANTLR v3 with the ANTLR IDE Eclipse plugin. When I parse the expression (8 + a45) using the interpreter, only part of the parse tree is generated: http://imgur.com/iBaEC.png

    Why does the second term (a45) not get parsed? The same happens if both terms are numbers. Thank you, Martin Wiboe

  • git doesn't show where code was removed.

    - by Andrew Myers
    So I was tasked with replacing some dummy code that our project requires for historical compatibility reasons but that has mysteriously dropped out sometime since the last release. Since disappearing code makes me nervous about what else might have gone missing but unnoticed, I've been digging through the logs trying to find in what commit this handful of lines was removed.

    I've tried a number of things, including git log -S'add-visit-resource-pcf', git blame, and even git bisect with a script that simply checks for the existence of the line, but have been unable to pinpoint exactly where these lines were removed. I find this very perplexing, particularly since the last log entry (obtained by the above command) before my re-introduction of this code was someone else adding the code as well.

        commit 0b0556fa87ff80d0ffcc2b451cca1581289bbc3c
        Author: Andrew
        Date:   Thu May 13 10:55:32 2010 -0400

            Re-introduced add-visit-resource-pcf, see PR-65034.

        diff --git a/spike/hst/scheduler/defpackage.lisp b/spike/hst/scheduler/defpackage.lisp
        index f8e692d..a6f8d38 100644
        --- a/spike/hst/scheduler/defpackage.lisp
        +++ b/spike/hst/scheduler/defpackage.lisp
        @@ -115,6 +115,7 @@
             #:add-to-current-resource-pcf
             #:add-user-package-nickname
             #:add-value-criteria
        +    #:add-visit-resource-pcf
             #:add-window-to-gs-params
             #:adjust-derived-resources
             #:adjust-links-candidate-criteria-types

        commit 9fb10e25572c537076284a248be1fbf757c1a6e1
        Author: Bob
        Date:   Sun Jan 17 18:35:16 2010 -0500

            update-defpackage for Spike 33.1 Delivery

        diff --git a/spike/hst/scheduler/defpackage.lisp b/spike/hst/scheduler/defpackage.lisp
        index 983666d..47f1a9a 100644
        --- a/spike/hst/scheduler/defpackage.lisp
        +++ b/spike/hst/scheduler/defpackage.lisp
        @@ -118,6 +118,7 @@
             #:add-user-package-nickname
             #:add-value-criteria
             #:add-vars-from-proposal
        +    #:add-visit-resource-pcf
             #:add-window-to-gs-params
             #:adjust-derived-resources
             #:adjust-links-candidate-criteria-types

    This is for one of our package definition files, but the relevant source file reflects something similar. Does anyone know what could be going on here and how I could find the information I want? It's not really that important, but this kind of thing makes me a bit nervous.

  • Running an executable on a network share with a CustomAction in WiX?

    - by martin
    Hello, I have created an MSI package which compresses some XML files to a zip file during installation. I have created a CustomAction for this purpose:

        <CustomAction Id="CompressMy"
                      BinaryKey="zipEXE"
                      ExeCommand="a -tzip &quot;[TEMPLATE_DIR]my.zip&quot; &quot;[TempSourceFolder]data.xml&quot;"
                      Return="check"
                      HideTarget="no"
                      Impersonate="no"
                      Execute="deferred" />

    The installation works fine if I install to a local drive, but recently a customer wanted to install [TEMPLATE_DIR] to a network drive on Windows Vista. The CustomAction fails, because the elevated install user hasn't mapped the network drive, even though the user who launched the installer has mapped the drive. This also happens if I try to install to a UNC path. I use 7-Zip for compressing and have added it to my MSI package. I have tried to set Impersonate="yes", but then the installation fails if my TEMPLATE_DIR is e.g. the ProgramData dir.

    Do you have any idea what I can do? I thought about checking whether TEMPLATE_DIR is a network path, but I don't know how I can check this. Or do you have any other ideas how I can support both a local and a network installation while using this custom action? Would be great if there is any advice. Greetings, Martin

  • How to tie a Hudson job to a user who has access to run MSIExec

    - by Andrew
    Hi all, I have a batch file that calls "MSIExec /X {MyGUID} /qn". This runs successfully when run with my admin user. When I run it as a Windows Batch command from a Hudson job it fails with:

        The installation source for this product is not available. Verify that the source exists and that you can access it.

    I am inclined to think that the issue is that the job is started by the "anonymous" user rather than my admin user. How in Hudson do I "tie" the job to be run under the admin user? Thanks in advance. Regards, Andrew

  • [MySQL/PHP] Avoid using RAND()

    - by Andrew Ellis
    So... I have never had a need to do a random SELECT on a MySQL DB until this project I'm working on. After researching, it seems the general populace says that using RAND() is a bad idea. I found an article that explains how to do another type of random select. Basically, if I want to select 5 random elements, should I do the following (I'm using the Kohana framework here)? If not, what is a better solution? Thanks, Andrew

        <?php

        final class Offers extends Model
        {
            /**
             * Loads a random set of offers.
             *
             * @param   integer  $limit
             * @return  array
             */
            public function random_offers($limit = 5)
            {
                // Find the highest offer_id
                $sql = '
                    SELECT MAX(offer_id) AS max_offer_id
                    FROM offers
                ';
                $max_offer_id = DB::query(Database::SELECT, $sql)
                    ->execute($this->_db)
                    ->get('max_offer_id');

                // Check to make sure we're not trying to load more offers
                // than there really are...
                if ($max_offer_id < $limit)
                {
                    $limit = $max_offer_id;
                }

                $used = array();
                $ids = '';
                for ($i = 0; $i < $limit; )
                {
                    $rand = mt_rand(1, $max_offer_id);
                    if (!isset($used[$rand]))
                    {
                        // Flag the ID as used
                        $used[$rand] = TRUE;

                        // Set the ID
                        if ($i > 0) $ids .= ',';
                        $ids .= $rand;

                        ++$i;
                    }
                }

                $sql = '
                    SELECT offer_id, offer_name
                    FROM offers
                    WHERE offer_id IN(:ids)
                ';
                $offers = DB::query(Database::SELECT, $sql)
                    ->param(':ids', $ids)
                    ->as_object()
                    ->execute($this->_db);

                return $offers;
            }
        }

  • RegisterClientScriptInclude doesn't work for some reason...

    - by Andrew
    Hey, I've spent at least 2 days trying anything and googling this... but for some reason I can't get RegisterClientScriptInclude to work the way everyone else has it working.

    First off, I am using .NET 3.5 AJAX, and I am including JavaScript in my partial page refreshes using this code:

        ScriptManager.RegisterClientScriptBlock(this, typeof(Page), "MyClientCode", script, true);

    It works perfectly; my JavaScript code contained in the script variable is included on every partial refresh. The JavaScript in script is actually quite extensive though, and I would like to store it in a .js file, so logically I make a .js file and try to include it using RegisterClientScriptInclude... however I can't for the life of me get this to work. Here's the exact code:

        ScriptManager.RegisterClientScriptInclude(this, typeof(Page), "mytestscript", "/js/testscript.js");

    The testscript.js file is only included on FULL page refreshes, i.e. when I load the page or do a full postback... I can't get the file to be included in partial refreshes, and I have no idea why. When viewing the AJAX POST in Firebug I don't see a difference whether I include the file or not.

    Both of the ScriptManager registrations are being run from the exact same place in Page_Load, so they should execute on every partial refresh (but only the ScriptBlock does). Anyway, any help or ideas, or further ways I can troubleshoot this problem, would be appreciated. Thanks, Andrew
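    One thing worth checking, offered only as a possibility rather than a confirmed diagnosis: the Microsoft AJAX library expects script files it loads dynamically (which includes includes registered during partial refreshes) to announce themselves by calling Sys.Application.notifyScriptLoaded(). A minimal sketch, reusing the key and path from the question:

        // Sketch: register the external script on every (full or partial) postback.
        protected void Page_Load(object sender, EventArgs e)
        {
            ScriptManager.RegisterClientScriptInclude(
                this, typeof(Page), "mytestscript", ResolveUrl("~/js/testscript.js"));

            // For the include to be treated as loaded during an async postback,
            // testscript.js itself should end with:
            //   if (typeof (Sys) !== 'undefined') Sys.Application.notifyScriptLoaded();
        }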

  • How to access elements in a complex list?

    - by Martin
    Hi, it's me and the lists again. I have a nice list, which looks like this:

        tmp = NULL
        t = NULL
        tmp$resultitem$count = "1057230"
        tmp$resultitem$status = "Ok"
        tmp$resultitem$menu = "PubMed"
        tmp$resultitem$dbname = "pubmed"
        t$resultitem$count = "305215"
        t$resultitem$status = "Ok"
        t$resultitem$menu = "PMC"
        t$resultitem$dbname = "pmc"
        tmp = c(tmp, t)
        t = NULL
        t$resultitem$count = "1"
        t$resultitem$status = "Ok"
        t$resultitem$menu = "Journals"
        t$resultitem$dbname = "journals"
        tmp = c(tmp, t)

    Which produces:

        str(tmp)
        List of 3
         $ resultitem:List of 4
          ..$ count : chr "1057230"
          ..$ status: chr "Ok"
          ..$ menu  : chr "PubMed"
          ..$ dbname: chr "pubmed"
         $ resultitem:List of 4
          ..$ count : chr "305215"
          ..$ status: chr "Ok"
          ..$ menu  : chr "PMC"
          ..$ dbname: chr "pmc"
         $ resultitem:List of 4
          ..$ count : chr "1"
          ..$ status: chr "Ok"
          ..$ menu  : chr "Journals"
          ..$ dbname: chr "journals"

    Now I want to search through the elements of each "resultitem". I want to know the "dbname" for every database that has fewer than 10 "count" (example). In this case it is very easy, as this list only has 3 elements, but the real list is a little bit longer. This could simply be done with a for loop, but is there a way to do this with some other function of R (like rapply)? My problem with those apply functions is that they only look at one element. If I do a grep to get all "dbname" elements, I cannot get the count of each element:

        rapply(tmp, function(x) paste("Content: ", x))[grep("dbname", names(rapply(tmp, c)))]

    Does someone have a better idea than a for loop? Thanx, Martin

  • start-stop-daemon quoted arguments misinterpreted

    - by Martin Westin
    Hi, I have been trying to make an init script using start-stop-daemon. I am stuck on the arguments to the daemon. I want to keep these in a variable at the top of the script, but I can't get the quotations to filter down correctly. I'll use ls here so we don't have to look at binaries and arguments that most people won't know or care about. The end result I am looking for is for start-stop-daemon to run:

        ls -la "/folder with space/"

    The script:

        DAEMON=/usr/bin/ls
        DAEMON_OPTS='-la "/folder with space/"'

        start-stop-daemon --start --make-pidfile --pidfile $PID --exec $DAEMON -- $DAEMON_OPTS

    Double escaping the options and trying innumerable variations of quotations do not help: by the time they end up at the daemon they are always messed up. Enclosing $DAEMON_OPTS in quotes changes things: then they are seen as one, since quoted... never the right number of arguments though :)

    Echoing the command line (start-stop-daemon ...) prints exactly the right stuff to screen, but the daemon (the real one, not ls) complains about the wrong number of arguments. How do I specify a variable so that quotes inside it are brought along to the daemon correctly? Thanks, Martin

  • CRONTAB doesn't finish svndump

    - by Andrew
    I just discovered that the automated dumps I've been creating of my SVN repository have been getting cut off early, and basically only half the dump is there. It's not an emergency, but I hate being in this situation. It defeats the purpose of making automated backups in the first place.

    The command I'm using is below. If I execute it manually in the terminal, it completes fine; the output.txt file is 16 megs in size with all 335 revisions. But if I leave it to crontab, it bails at the halfway mark, at around 8.1 megs and only the first 169 revisions.

        # m h dom mon dow command
        18 00 * * * svnadmin dump /var/svn/repos/myproject > /home/andrew/output.txt

    I actually save to a dated gzipped file, and there's no shortage of space on the server, so this is not a disk space issue. It seems to bail after two seconds, so this could be a time issue, but the file size is the same every single time for the past month, so I don't think it's that either. Does crontab execute within a limited memory space?

  • Problem declaring and calling internal methods

    - by Martin
    How do I declare and use small helper functions inside my normal methods? In one of my Objective-C methods I need a function to find an item within a string:

        -(void) Onlookjson:(id) sender {
            NSString * res = getKeyValue(res, @"Birth");
        }

    I came up with a normal C-type declaration for the helper function getKeyvalue, like this:

        NSString * getKeyvalue(NSString * s, NSString key) {
            NSString *trm = [[s substringFromIndex:2] substringToIndex:[s length]-3];
            NSArray *list = [trm componentsSeparatedByString:@";"];
            ....
            NSString res;
            res = [list objectAtIndex:1];
            ...
            return res;
        }

    Input data string:

        { Birth = ".."; Death = "..."; ... }

    Anyway, I get an exception "unrecognized selector sent to instance" for either of the first two lines in the helper function. How do I declare helper functions that are just to be used internally, and how do I call them safely? Regards, Martin

  • Creating a RESTful API - HELP!

    - by Martin Cox
    Hi chaps, over the last few weeks I've been learning about iOS development, which has naturally led me into the world of APIs. Now, searching around on the Internet, I've come to the conclusion that using the REST architecture is very much recommended, due to its supposed simplicity and ease of implementation. However, I'm really struggling with the implementation side of REST.

    I understand the concept: using HTTP methods as verbs to describe the action of a request, responding with suitable response codes, and so on. It's just, I don't understand how to code it. I don't get how I map a URI to an object. I understand that a GET request for domain.com/api/user/address?user_id=999 would return the address of user 999 - but I don't understand where or how that mapping from /user/address to some method that queries a database has taken place. Is this all coded in one PHP script? Would I just have a method that grabs the URI like so:

        $array = explode("/", ltrim(rtrim($_SERVER['REQUEST_URI'], "/"), "/"))

    And then cycle through that array: first I would have a request for a "user", so the PHP script would direct my request to the user object and then invoke the address method. Is that what actually happens? I've probably not explained my thought process very well there. The main thing I'm not getting is how that URI /user/address?id=999 somehow is broken down and executed - does it actually resolve to code?

        class user(id) {
            address() {
                //get user address
            }
        }

    I doubt I'm making sense now, so I'll call it a day trying to explain further. I hope someone out there can understand what I'm trying to say! Thanks chaps, look forward to your responses. Martin

    p.s - I'm not a developer yet, I'm learning :)

  • MVVM for Dummies

    - by Martin Hinshelwood
    I think that I have found one of the best articles on MVVM that I have ever read: http://jmorrill.hjtcentral.com/Home/tabid/428/EntryId/432/MVVM-for-Tarded-Folks-Like-Me-or-MVVM-and-What-it-Means-to-Me.aspx

    This article sums up what is in MVVM and what is outside of MVVM. Note: when I and most other people say MVVM, we really mean MVVM, Commanding, Dependency Injection, plus any other patterns you need to create your application. In WPF a lot of use is made of the Decorator and Behaviour patterns as well. The goal of all of this is to have pure separation of concerns.

    This is what every code-behind file of every Control / Window / Page should look like if you are engineering your WPF and Silverlight correctly:

    C# - Ideal:

        public partial class IdealView : UserControl
        {
            public IdealView()
            {
                InitializeComponent();
            }
        }

    Figure: This is the ideal code-behind for a Control / Window / Page when using MVVM.

    C# - Compromise, but works:

        public partial class IdealView : UserControl
        {
            public IdealView()
            {
                InitializeComponent();
                this.DataContext = new IdealViewModel();
            }
        }

    Figure: This is a compromise, but the best you can do without Dependency Injection. (A sketch of the Dependency Injection version follows at the end of this entry.)

    VB.NET - Ideal:

        Partial Public Class ServerExplorerConnectView

        End Class

    Figure: This is the ideal code-behind for a Control / Window / Page when using MVVM.

    VB.NET - Compromise, but works:

        Partial Public Class ServerExplorerConnectView

            Private Sub ServerExplorerConnectView_Loaded(ByVal sender As Object, ByVal e As System.Windows.RoutedEventArgs) Handles Me.Loaded
                Me.DataContext = New ServerExplorerConnectViewModel
            End Sub

        End Class

    Figure: This is a compromise, but the best you can do without Dependency Injection.

    Technorati Tags: MVVM, .NET, WPF, Silverlight
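    As an illustration of the "ideal" case above, here is a minimal sketch of what the code-behind can look like when a Dependency Injection container supplies the ViewModel. The IIdealViewModel interface and the container that resolves it are assumptions for illustration, not part of the original post:

        // Sketch: with constructor injection the view never creates its own ViewModel.
        // IIdealViewModel and the resolving container are assumed, not shown.
        public partial class IdealView : UserControl
        {
            public IdealView(IIdealViewModel viewModel)
            {
                InitializeComponent();
                this.DataContext = viewModel; // the only wiring the view ever does
            }
        }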

  • Integrate SharePoint 2010 with Team Foundation Server 2010

    - by Martin Hinshelwood
    Our client is using a brand new shiny installation of SharePoint 2010, so we need to integrate our upgraded Team Foundation Server 2010 instance into it. In order to do that you need to run the Team Foundation Server 2010 install on the SharePoint 2010 server and choose to install only the "Extensions for SharePoint Products and Technologies". We want our upgraded Team Project Collection to create any new portal in this SharePoint 2010 server farm.

    There are a number of goodies above and beyond a solution file that require the install, with the main one being the TFS 2010 client API. These goodies allow proper integration with the creation and viewing of Work Items from SharePoint, a new feature with TFS 2010. This works in both SharePoint 2007 and SharePoint 2010, with the level of integration dependent on the version of SharePoint that you are running. There are three levels of integration, with "SharePoint Services 3.0" or "SharePoint Foundation 2010" being the lowest. This level only offers Reporting Services framed integration for reporting, along with Work Item integration and document management. The highest is Microsoft Office SharePoint Server (MOSS) Enterprise, with Excel Services integration providing some lovely dashboards.

    Figure: Dashboards take the guessing out of project planning and estimation. Plus, writing these reports would be boring!

    The Extensions that you need are on the same installation media as the main TFS install, and the only difference is the options you pick during the install.

    Figure: Installing the TFS 2010 Extensions for SharePoint Products and Technologies onto SharePoint 2010.

    Annoyingly you may need to reboot a couple of times, but on this server the process was MUCH smoother than on our internal server. I think this was mostly to do with this being a clean install. Once it is installed you need to run the configuration. This will add all of the solutions and templates that are needed for SharePoint to work properly with TFS.

    Figure: This is where all the TFS 2010 goodies are added to your SharePoint 2010 server and the TFS 2010 object model is installed.

    Figure: All done. You have everything installed, but you still need to configure it.

    Now that we have the TFS 2010 SharePoint Extensions installed on our SharePoint 2010 server, we need to configure them both so that they will talk happily to each other.

    Configuring the SharePoint 2010 managed path for Team Foundation Server 2010

    In order for TFS to automatically create your project portals you need a wildcard managed path set up. This is where TFS will create the portal during the creation of a new Team Project. To find the managed paths page for any application you first need to select the "Manage web applications" link from the SharePoint 2010 Central Administration screen.

    Figure: Find the "Manage web applications" link under the "Application Management" section.

    Once you are there you will see that the "Managed Paths" button is greyed out; selecting one of the applications will enable it to be clicked.

    Figure: You need to select an application for the SharePoint 2010 ribbon to activate.

    Figure: You need to select an application before you can get to the Managed Paths for that application.

    Now we need to add a managed path for TFS 2010 to create its portals under. I have gone for the obvious option of just calling the managed path "TFS02", as the TFS 2010 server is the second TFS server that the client has installed, TFS 2008 being the first. This links the location to the server name, and as you can't have two projects of the same name in two separate project collections, there is unlikely to be any conflict.

    Figure: Add a "tfs02" wildcard inclusion path to your SharePoint site.

    Configure the Team Foundation Server 2010 connection to SharePoint 2010

    In order to have your new TFS 2010 server talk to and create sites in SharePoint 2010, you need to tell the TFS server where to put them. As this TFS 2010 server was installed in out-of-the-box mode, it has a SharePoint Services 3.0 (the free one) server running on the same box, but we want to change that so we can use the external SharePoint 2010 instance. Just open the "Team Foundation Server Administration Console" and navigate to the "SharePoint Web Applications" section. Here you click "Add" and enter the details for the managed path we just created.

    Figure: If you have special permissions on your SharePoint you may need to add accounts to the "Service Accounts" section.

    Before we can set this new SharePoint 2010 instance to be the default for our upgraded Team Project Collection, we need to configure SharePoint to take instructions from our TFS server.

    Configure SharePoint 2010 to connect to Team Foundation Server 2010

    On your SharePoint 2010 server, open the Team Foundation Server Administration Console and select the "Extensions for SharePoint Products and Technologies" node. Here we need to "grant access" for our TFS 2010 server to create sites. Click the "Grant access" link and fill out the full URL to the TFS server, for example http://servername.domain.com:8080/tfs, and if need be restrict the path that TFS sites can be created on. Remember that when users create a new Team Project they can change the default and point it anywhere they like, as long as it is an authorised SharePoint location.

    Figure: Grant access for your TFS 2010 server to create sites in SharePoint 2010.

    Now that we have an authorised location for our Team Project portals to be created, we need to tell our Team Project Collection that this is where it should stick sites by default for any new Team Projects created.

    Configure the Team Foundation Server 2010 Team Project Collection to create new sites in SharePoint 2010

    Back on our TFS 2010 server we need to set the defaults for our upgraded Team Project Collection to the new SharePoint 2010 integration point we have just set up. On the TFS 2010 server, open up the "Team Foundation Server Administration Console" again and navigate to the "Team Project Collections" node. Once you are there you will see a list of all of your TPCs; in our case we have a DefaultCollection as well as our named, upgraded collection from TFS 2008. If you select the "SharePoint Site" tab you can see that it is not currently configured.

    Figure: Our newly upgraded TFS 2008 Team Project Collection does not have SharePoint configured.

    Select "Edit Default Site Location" and pick the new integration point that we just set up for SharePoint 2010. Once you have selected the "SharePoint Web Application" (the thing we just configured), it will give you an example based on that configuration point and the name of the Team Project Collection that we are configuring.

    Figure: Set the default location for new Team Project portals to be created for this Team Project Collection.

    This is where the reason for configuring the Extensions on the SharePoint 2010 server before doing this last bit becomes apparent. TFS 2010 is going to create a site at our http://sharepointserver/tfs02/ location called http://sharepointserver/tfs02/[TeamProjectCollection], or whatever we had specified, and it would have had difficulty doing this if we had not given it permission first.

    Figure: If there is no Team Project Collection site at this location, the TFS 2010 server is going to create one.

    This will create a nice Team Project Collection parent site to contain the portals for any new Team Projects that are created. It is worth noting that it will not create portals for existing Team Projects, as this process is run during the Team Project creation wizard.

    Figure: Just a basic parent site to host all of your new Team Project portals as sub-sites.

    You will need to add all of the users that will be creating Team Projects as administrators of this site so that they will not get an error during the Project Creation Wizard. You may also want to customise this as a proper portal to your projects if you are going to be having lots of them, but it is really just a default placeholder so you have a top-level site that you can back up and point at.

    You have now integrated SharePoint 2010 and Team Foundation Server 2010! You can now go forth and multiply your Team Projects for this Team Project Collection, or you can continue to add portals to your other collections.

    Technorati Tags: TFS 2010, SharePoint 2010, VS ALM

  • Upgrading from TFS 2010 RC to TFS 2010 RTM done

    - by Martin Hinshelwood
    Today is the big day. With the launch of Visual Studio 2010 already done in Asia, and rolling around the world towards us, we are getting ready for the RTM (release). We have had TFS 2010 in production for nearly 6 months and have had only minimal problems.

    Update 12th April 2010 - Added Scott Hanselman's tweet about the MSDN download release time:

        Developers: MSDN will be updated with #vs2010 downloads and details at 10am PST *today*! - @shanselman (Scott Hanselman)

    SSW was the first company in the world outside of Microsoft to deploy Visual Studio 2010 Team Foundation Server to production, not once, but twice. I am hoping to make it 3 in a row, but with all the hype around the new version, and with it being a production release and not just a go-live, I think there will be a lot of competition.

    Same as before, we need to uninstall 2010 RC and install 2010 RTM. The installer will take care of all the complexity of actually upgrading any schema changes. If you are upgrading from TFS 2008 to TFS 2010, you can follow our Rules To Better TFS 2010 Migration and read my post on our successes. We run TFS 2010 in a Hyper-V virtual environment, so we have the advantage of taking a snapshot as well as a DB backup.

    - Done - Snapshot the Hyper-V server. Microsoft does not support taking a snapshot of a running server, for very good reason, and Brian Harry wrote a post after my last upgrade with the reason why you should never snapshot a running server.
    - Done - Uninstall Visual Studio Team Explorer 2010 RC. You will need to uninstall all of the Visual Studio 2010 RC client bits that you have on the server.
    - Done - Uninstall TFS 2010 RC
    - Done - Install TFS 2010 RTM
    - Done - Configure TFS 2010 RTM. Pick the Upgrade option and point it at your existing "tfs_Configuration" database to load all of the existing settings.
    - Done - Upgrade the SharePoint Extensions
    - Pending - Upgrade Build Servers
    - Test the server

    The back-out plan, and you should always have one, is to restore the snapshot.

    Upgrading to Team Foundation Server 2010 - Done

    The first thing you need to do is turn off the TFS server, then log into the Hyper-V server and create a snapshot.

    Figure: Make sure you turn the server off and delete all old snapshots before you take a new one.

    I noticed that the snapshot that was taken before the Beta 2 to RC upgrade was still there. You should really delete old snapshots before you create a new one, but in this case the SysAdmin (who is currently tucked up in bed) asked me not to. I guess he is worried about a developer messing up his server.

    Turn your server on and wait for it to boot in anticipation of all the nice shiny RTM'ness that is coming next. The upgrade procedure for TFS 2010 is to uninstall the old version and install the new one.

    Figure: Remove Visual Studio 2010 Team Foundation Server RC from the system.

    Figure: Most of the heavy lifting is done by the uninstaller, but make sure you have removed any of the client bits first - specifically Visual Studio 2010 or Team Explorer 2010.

    Once the uninstall is complete, which took around 5 minutes for me, you can begin the install of the RTM. Running the 64-bit OS will allow the application to use more than 2GB RAM, which, while not common, may be of use in heavy load situations.

    Figure: It is always recommended to install the 64-bit version of a server application where possible. With SharePoint 2010, Exchange 2010 and even Windows Server 2008 R2 being 64-bit only, I do not think there will be another release of a server app that is 32-bit.

    You then need to choose what it is you want to install. This depends on how you are running TFS and on how many servers. In our case we run TFS and the Team Foundation Build Service (controller only) on our TFS server, along with Analysis Services and Reporting Services, but our SharePoint server lives elsewhere.

    Figure: This always confuses people, but in reality it makes sense. Don't install what you do not need. Every extra you install has an impact on performance.

    Figure: Selecting only Team Foundation Server (TFS) and Team Foundation Build Services (TFBS).

    It is worth noting that if you have a lot of builds kicking off, and hence a lot of get operations against your TFS server, you can use a proxy server to cache the source control on another server in between your TFS server and your build servers.

    Figure: Installing Microsoft .NET Framework 4 takes the most time.

    Figure: Now run Windows Update and SSW Diagnostic to make sure all your bits and bobs are up to date. Note: SSW Diagnostic will check your Power Tools, Add-ons, Check-in Policies and other bits as well.

    Configure Team Foundation Server 2010 - Done

    Now you can configure the server. If you have no key you will need to pick "Install a Trial Licence", but it is only £500, or free with an MSDN subscription. Anyway, if you pick Trial you get 90 days to get your key.

    Figure: You can pick Trial and add your key later using the TFS Server Admin.

    Here is where the real choices happen. We are doing an upgrade from a previous version, so I will pick Upgrade, the same as all you folks that are using the RC or TFS 2008.

    Figure: The upgrade wizard takes your existing 2010 or 2008 databases and upgrades them to the release.

    Once you have entered your database server name you can click "List available databases" and it will show what it can upgrade.

    Figure: Select your database from the list and, at this point, make sure you have a valid backup.

    At this point you have not made ANY changes to the databases. The configuration wizard will load configuration from your existing database if you have one. If you are upgrading TFS 2008, refer to Rules To Better TFS 2010 Migration. Mostly during the wizard the default values will suffice, but depending on the configuration you want, you can pick different options.

    Figure: Set the application tier account and authentication method to use. We use NTLM to keep things simple, as we host our TFS server externally for our remote developers.

    Figure: Setting your TFS server URLs to be the remote URLs allows the reports to be accessed without using VPN. Very handy for those remote developers.

    Figure: Detected the existing warehouse no problem.

    Figure: Again, we love green ticks. It gives us a warm fuzzy feeling.

    Figure: The username for connecting to Reporting Services should be a domain account (if you are on a domain, that is).

    Figure: Set up the SharePoint integration to connect to your external SharePoint server. You can take the option to connect later.

    You then need to run all of your readiness checks. These checks can save your life! They check all of the settings that you have entered, as well as checking that all the external services are configured and running properly. There are two reasons that TFS 2010 is so easy and painless to install where previous versions were not. First, Microsoft changed the install into two steps: install and configuration. The second reason is that they have pulled out all of the stops in making the install run all the checks necessary to make sure that once you start the install, it will complete. If you find any errors I recommend that you report them on http://connect.microsoft.com so everyone can benefit from your misery.

    Figure: Now we have everything set up, the configuration wizard can do its work.

    Figure: It took a while on the "Web site" stage at one point, but zipped through after that.

    Figure: Last wee bit: TFS needs to do a little tinkering with the data to complete the upgrade.

    Figure: All upgraded. I am not worried about the yellow triangle, as SharePoint was being a little silly:

        Exception Message: TF254021: The account name or password that you specified is not valid. (type TfsAdminException)
        Exception Stack Trace:
            at Microsoft.TeamFoundation.Management.Controls.WizardCommon.AccountSelectionControl.TestLogon(String connectionString)
            at System.ComponentModel.BackgroundWorker.WorkerThreadStart(Object argument)

        [Info   @16:10:16.307] Benign exception caught as part of verify:
        Exception Message: TF255329: The following site could not be accessed: http://projects.ssw.com.au/. The server that you specified did not return the expected response. Either you have not installed the Team Foundation Server Extensions for SharePoint Products on this server, or a firewall is blocking access to the specified site or the SharePoint Central Administration site. For more information, see the Microsoft Web site (http://go.microsoft.com/fwlink/?LinkId=161206). (type TeamFoundationServerException)
        Exception Stack Trace:
            at Microsoft.TeamFoundation.Client.SharePoint.WssUtilities.VerifyTeamFoundationSharePointExtensions(ICredentials credentials, Uri url)
            at Microsoft.TeamFoundation.Admin.VerifySharePointSitesUrl.Verify()

        Inner Exception Details:
        Exception Message: TF249064: The following Web service returned a response that is not valid: http://projects.ssw.com.au/_vti_bin/TeamFoundationIntegrationService.asmx. This Web service is used for the Team Foundation Server Extensions for SharePoint Products. Either the extensions are not installed, the request resulted in HTML being returned, or there is a problem with the URL. Verify that the following URL points to a valid SharePoint Web application and that the application is available: http://projects.ssw.com.au. If the URL is correct and the Web application is operating normally, verify that a firewall is not blocking access to the Web application. (type TeamFoundationServerInvalidResponseException)
        Exception Data Dictionary: ResponseStatusCode = InternalServerError

    I'll look at SharePoint after; probably the SharePoint box just needs a restart or a kick. If there is a problem with SharePoint it will come out in testing, but I will definitely be passing this on to Microsoft.

    Upgrading the SharePoint connector to TFS 2010

    You will need to upgrade the Extensions for SharePoint Products and Technologies on all of your SharePoint farm front-end servers. To do this, uninstall TFS 2010 RC in the same way as on the server, and then install just the RTM Extensions.

    Figure: Only install the SharePoint Extensions on your SharePoint front-end servers. TFS 2010 supports both SharePoint 2007 and SharePoint 2010.

    Figure: When you configure SharePoint it uploads all of the solutions and templates.

    Figure: Everything is uploaded successfully.

    Figure: TFS even remembered the settings from the previous installation - fantastic.

    Upgrading the Team Foundation Build Servers to TFS 2010

    Just like on the SharePoint servers, you will need to upgrade the build server to the RTM. Just uninstall TFS 2010 RC and then install only the Team Foundation Build Services component. Unlike on the SharePoint server, you will probably have some version of Visual Studio installed; you will need to remove this as well.

    Connecting Visual Studio 2010 / 2008 / 2005 and Eclipse to TFS 2010 (coming soon)

    If you have developers still on Visual Studio 2005 or 2008 you will need to download the respective compatibility pack:

    - Visual Studio Team System 2005 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010
    - Visual Studio Team System 2008 Service Pack 1 Forward Compatibility Update for Team Foundation Server 2010

    If you are using Eclipse you can download the new Team Explorer Everywhere install for connecting to TFS. Get your developers to check that they have the latest versions of their applications with SSW Diagnostic, which will check for service packs and hotfixes to Visual Studio as well.

    Technorati Tags: TFS, TFS2010, TFS 2010, Upgrade

  • Solution: Testing Web Services with MSTest on Team Build

    - by Martin Hinshelwood
    Guess what. About 20 minutes after I fixed the build, Allan broke it again! Update: 4th March 2010 – After having huge problems getting this working I read Billy Wang’s post which showed me the light. The problem here is that even though the test passes locally it will not during an Automated Build. When you send your tests to the build server it does not understand that you want to spin up the web site and run tests against that! When you run the test in Visual Studio it spins up the web site anyway, but would you expect your test to pass if you told the website not to spin up? Of course not. So, when you send the code to the build server you need to tell it what to spin up. First, the best way to get the parameters you need is to right click on the method you want to test and select “Create Unit Test”. This will detect wither you are running in IIS or ASP.NET Development Server or None, and create the relevant tags. Figure: Right clicking on “SaveDefaultProjectFile” will produce a context menu with “Create Unit tests…” on it. If you use this option it will AutoDetect most of the Attributes that are required. /// <summary> ///A test for SSW.SQLDeploy.SilverlightUI.Web.Services.IProfileService.SaveDefaultProjectFile ///</summary> // TODO: Ensure that the UrlToTest attribute specifies a URL to an ASP.NET page (for example, // http://.../Default.aspx). This is necessary for the unit test to be executed on the web server, // whether you are testing a page, web service, or a WCF service. [TestMethod()] [HostType("ASP.NET")] [AspNetDevelopmentServerHost("D:\\Workspaces\\SSW\\SSW\\SqlDeploy\\DEV\\Main\\SSW.SQLDeploy.SilverlightUI.Web", "/")] [UrlToTest("http://localhost:3100/")] [DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")] public void SaveDefaultProjectFileTest() { IProfileService target = new ProfileService(); // TODO: Initialize to an appropriate value string strComputerName = string.Empty; // TODO: Initialize to an appropriate value bool expected = false; // TODO: Initialize to an appropriate value bool actual; actual = target.SaveDefaultProjectFile(strComputerName); Assert.AreEqual(expected, actual); Assert.Inconclusive("Verify the correctness of this test method."); } Figure: Auto created code that shows the attributes required to run correctly in IIS or in this case ASP.NET Development Server If you are a purist and don’t like creating unit tests like this then you just need to add the three attributes manually. HostType – This attribute specified what host to use. Its an extensibility point, so you could write your own. Or you could just use “ASP.NET”. UrlToTest – This specifies the start URL. For most tests it does not matter which page you call, as long as it is a valid page otherwise your test may not run on the server, but may pass anyway. AspNetDevelopmentServerHost – This is a nasty one, it is only used if you are using ASP.NET Development Host and is unnecessary if you are using IIS. This sets the host settings and the first value MUST be the physical path to the root of your web application. OK, so all that was rubbish and I could not get anything working using the MSDN documentation. Google provided very little help until I ran into Billy Wang’s post  and I heard that heavenly music that all developers hear when understanding dawns that what they have been doing up until now is just plain stupid. I am sure that the above will work when I am doing Web Unit Tests, but there is a much easier way when doing web services. 
You need to add the AspNetDevelopmentServer attribute to your code. This will tell MSTest to spin up an ASP.NET Development Server to host the service. Specify the path to the web application you want to use.

[AspNetDevelopmentServer("WebApp1", "D:\\Workspaces\\SSW\\SSW\\SqlDeploy\\DEV\\Main\\SSW.SQLDeploy.SilverlightUI.Web")]
[DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")]
[TestMethod]
public void ProfileService_Integration_SaveDefaultProjectFile_Returns_True()
{
    ProfileServiceClient target = new ProfileServiceClient();
    bool isTrue = target.SaveDefaultProjectFile("Mav");
    Assert.AreEqual(true, isTrue);
}

Figure: The AspNetDevelopmentServer attribute will make sure that the specified web application is launched.

Now we can run the test and have it pass, but if the dynamically assigned ASP.NET Development Server port changes, what happens to the details in your app.config that were generated when creating a reference to the web service? Well, they would be wrong and the test would fail. This is where Billy’s helper method comes in. Once you have created an instance of your service client, and it has loaded the config, but before you make any calls to it, you need to go in and dynamically set the endpoint address to the same address as your dynamically hosted web application.

using System;
using System.Reflection;
using System.ServiceModel;
using System.ServiceModel.Description;
using Microsoft.VisualStudio.TestTools.UnitTesting;

namespace SSW.SQLDeploy.Test
{
    class WcfWebServiceHelper
    {
        public static bool TryUrlRedirection(object client, TestContext context, string identifier)
        {
            bool result = true;
            try
            {
                // Find the proxy's Endpoint property via reflection so this works for any generated client.
                PropertyInfo property = client.GetType().GetProperty("Endpoint");

                // MSTest publishes the dynamically assigned server address as a TestContext
                // property named "AspNetDevelopmentServer.<identifier>".
                string webServer = context.Properties[string.Format("AspNetDevelopmentServer.{0}", identifier)].ToString();
                Uri webServerUri = new Uri(webServer);

                // Swap the authority (host and port) from the config for the real one, keeping the rest of the address.
                ServiceEndpoint endpoint = (ServiceEndpoint)property.GetValue(client, null);
                EndpointAddressBuilder builder = new EndpointAddressBuilder(endpoint.Address);
                builder.Uri = new Uri(endpoint.Address.Uri.OriginalString.Replace(endpoint.Address.Uri.Authority, webServerUri.Authority));
                endpoint.Address = builder.ToEndpointAddress();
            }
            catch (Exception e)
            {
                context.WriteLine(e.Message);
                result = false;
            }
            return result;
        }
    }
}

Figure: This fixes the problem of the URL in your app.config not matching the dynamically hosted ASP.NET Development Server port.

We can now add a call to this method after we create the proxy object, changing the endpoint for the service to the correct one. This process is wrapped in an assert, as if it fails there is no point in continuing. (Note that the test class must expose MSTest’s TestContext property for this to work; see the sketch below.)

[AspNetDevelopmentServer("WebApp1", "D:\\Workspaces\\SSW\\SSW\\SqlDeploy\\DEV\\Main\\SSW.SQLDeploy.SilverlightUI.Web")]
[DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")]
[TestMethod]
public void ProfileService_Integration_SaveDefaultProjectFile_Returns_True()
{
    ProfileServiceClient target = new ProfileServiceClient();
    Assert.IsTrue(WcfWebServiceHelper.TryUrlRedirection(target, TestContext, "WebApp1"));
    bool isTrue = target.SaveDefaultProjectFile("Mav");
    Assert.AreEqual(true, isTrue);
}

Figure: Editing the endpoint from the app.config on the fly to match the dynamically hosted ASP.NET Development Server URL and port is now easy.

As you can imagine, AspNetDevelopmentServer poses some problems if you have multiple developers. What are the chances of everyone using the same location to store the source?
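Before we deal with that, one prerequisite that is easy to miss: TryUrlRedirection reads the server address out of the TestContext, and MSTest only populates it when the test class exposes the standard TestContext property. A minimal sketch of the surrounding class, with an illustrative class name:

[TestClass]
public class ProfileServiceIntegrationTests
{
    // MSTest sets this automatically before each test runs; TryUrlRedirection uses it to
    // look up the dynamically assigned address via the "AspNetDevelopmentServer.WebApp1" property.
    public TestContext TestContext { get; set; }

    // ... the test methods above live here ...
}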
What about if you are using a build server? How do you tell MSTest where to look for the files? To the rescue comes a property called “%PathToWebRoot%”, which is always right on the build server. It will always point to your build drop folder for your solution’s web sites, which will be “\\tfs.ssw.com.au\BuildDrop\[BuildName]\Debug\_PrecompiledWeb\” or whatever your build drop location is. So let’s change the code above to use this.

[AspNetDevelopmentServer("WebApp1", "%PathToWebRoot%\\SSW.SQLDeploy.SilverlightUI.Web")]
[DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")]
[TestMethod]
public void ProfileService_Integration_SaveDefaultProjectFile_Returns_True()
{
    ProfileServiceClient target = new ProfileServiceClient();
    Assert.IsTrue(WcfWebServiceHelper.TryUrlRedirection(target, TestContext, "WebApp1"));
    bool isTrue = target.SaveDefaultProjectFile("Mav");
    Assert.AreEqual(true, isTrue);
}

Figure: Adding %PathToWebRoot% to the AspNetDevelopmentServer path makes it work everywhere.

Now we have another problem… this will ONLY run on the build server and will fail locally, as %PathToWebRoot%’s default value is “C:\Users\[profile]\Documents\Visual Studio 2010\Projects”. Well, this sucks… How do we get the test to run on any build server and any developer laptop?

Open “Tools | Options | Test Tools | Test Execution” in Visual Studio and you will see a field called “Web application root directory”. This is where you override that default.

Figure: You can override the default website location for tests.

In my case I would put in “D:\Workspaces\SSW\SSW\SqlDeploy\DEV\Main”, and all the developers working with this branch would put in the folder that they have mapped. Can you see a problem? What if I create a “$/SSW/SqlDeploy/DEV/34567” branch from Main and I want to run tests in there? Well… I would have to change the value above. This is not ideal, but as you can put your projects anywhere on a computer, it has to be done.

Conclusion

Although this looks convoluted and complicated, there are real problems being solved here that give you a test-anywhere solution: any build server, any developer workstation.

Resources:

http://billwg.blogspot.com/2009/06/testing-wcf-web-services.html
http://tough-to-find.blogspot.com/2008/04/testing-asmx-web-services-in-visual.html
http://msdn.microsoft.com/en-us/library/ms243399(VS.100).aspx
http://blogs.msdn.com/dscruggs/archive/2008/09/29/web-tests-unit-tests-the-asp-net-development-server-and-code-coverage.aspx
http://www.5z5.com/News/?543f8bc8b36b174f

Technorati Tags: VS2010, MSTest, Team Build 2010, Team Build, Visual Studio, Visual Studio 2010, Visual Studio ALM, Team Test, Team Test 2010

    Read the article
