Search Results

Search found 70827 results on 2834 pages for 'data quality services'.


  • Generic dataset handling library

    - by Pep.
    Hello, I want to build a generic Perl module for handling and analysing biomedical character-separated datasets, one that can, most likely, be used on any kind of dataset containing a mixture of categorical (A, B, C, ...), continuous (1.2, 3,881, ...) and identifier (XXX1, XXX2, ...) variables. The plan is to have people initialize the module and then use some arguments to point to the data file(s), the place where the analysis reports should go, and the structure of the data. By structure of the data I mean which variable is in which position, and its name/type. And this is where I need some enlightenment: I am baffled about how to do this in a clean way. Having people create a simple schema file, be it XML or some other format, would obviously be the cleanest, but maybe not all people enjoy doing something like that. The solutions I can think of are:

    - Create a configuration file, in XML or similar, with a prespecified format.
    - Pass the information during initialization of the module (see the sketch below).
    - Use the first row of the data as headers and try to guess types (ouch).

    Surely there must be a "canonical" way of doing this that is also usable and efficient. Thanks, p.
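    A minimal sketch of the second option, passing the structure at initialization. The module name, argument names and column types here are hypothetical illustrations, not an existing API:

        package Dataset::Handler;
        use strict;
        use warnings;

        # Hypothetical constructor: the caller describes the file layout up
        # front, so no schema file and no type guessing are needed.
        sub new {
            my ($class, %args) = @_;
            my $self = {
                file       => $args{file},        # path to the delimited data file
                report_dir => $args{report_dir},  # where analysis reports are written
                sep        => $args{sep} // "\t", # field separator
                columns    => $args{columns},     # ordered list of { name, type }
            };
            return bless $self, $class;
        }

        1;

    The caller would then pass the structure inline:

        my $ds = Dataset::Handler->new(
            file       => 'samples.tsv',
            report_dir => 'reports/',
            columns    => [
                { name => 'patient_id', type => 'identifier'  },
                { name => 'group',      type => 'categorical' },
                { name => 'dose',       type => 'continuous'  },
            ],
        );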


  • Ember Data Sync - LocalStorage+REST+RealTime+Online/Offline

    - by Miguel Madero
    We have a combination of requirements in terms of data access:

    - Pre-load some reference data. We need reference data to survive browser restarts instead of just living in memory, to avoid loading it all the time. I'm currently using the LocalStorageAdapter for that. Once we have it, we would like to sync changes (polling, or using Socket.IO in the background and updating the LocalStorage, could do the trick).
    - There are other models that are more transactional, where we would need to go directly to the server to get/save them. It would be nice to use something like the RESTAdapter for that.
    - Lastly, there are some operations that should work offline, with the changes synced later.

    To make it more concrete:

    - We pre-load vendor and "favorite products" data into Local Storage. We work offline with those.
    - We need to sync server changes to vendor and product information.
    - Searching the full catalog requires users to be online.
    - When offline, we need to allow users to add something to their cart or even submit an order. We would like to queue this action and submit it when they have an Internet connection (see the sketch below).

    A few questions derive from this:

    - Is there a way to use the RESTAdapter in combination with LocalStorage?
    - Is there some Socket.IO support? (Happy to do this part manually.)
    - Is there queueing support, ideally at the Ember Data level?

    I know we will have to do a lot of this manually and pull the different Lego pieces together, but I wanted to ask for some perspective from experienced Ember devs.
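    A minimal sketch of the offline-queue idea, not tied to Ember Data; the endpoint and storage key are hypothetical. Writes made while offline are parked in localStorage and flushed when connectivity returns:

        var QUEUE_KEY = 'pendingOrders';

        function submitOrder(order) {
            if (navigator.onLine) {
                return $.post('/api/orders', order);  // hypothetical endpoint
            }
            // Offline: park the order for later.
            var queue = JSON.parse(localStorage.getItem(QUEUE_KEY) || '[]');
            queue.push(order);
            localStorage.setItem(QUEUE_KEY, JSON.stringify(queue));
        }

        // Flush the queue whenever the browser reports connectivity again.
        window.addEventListener('online', function () {
            var queue = JSON.parse(localStorage.getItem(QUEUE_KEY) || '[]');
            localStorage.removeItem(QUEUE_KEY);
            queue.forEach(function (order) { submitOrder(order); });
        });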


  • Sending data in a GTK Callback

    - by snostorm
    How can I send data through a GTK callback? I've Googled, and with the information I found created this:

        #include <gtk/gtk.h>
        #include <stdio.h>

        void button_clicked(GtkWidget *widget, GdkEvent *event, gchar *data);

        int main(int argc, char *argv[])
        {
            GtkWidget *window;
            GtkWidget *button;

            gtk_init(&argc, &argv);

            window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
            button = gtk_button_new_with_label("Go!");
            gtk_container_add(GTK_CONTAINER(window), button);

            g_signal_connect_swapped(G_OBJECT(window), "destroy",
                                     G_CALLBACK(gtk_main_quit), NULL);
            g_signal_connect(G_OBJECT(button), "clicked",
                             G_CALLBACK(button_clicked), "test");

            gtk_widget_show(window);
            gtk_widget_show(button);

            gtk_main();
            return 0;
        }

        void button_clicked(GtkWidget *widget, GdkEvent *event, gchar *data)
        {
            printf("%s \n", (gchar *) data);
            return;
        }

    But it just segfaults when I press the button. What is the right way to do this?
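    The likely culprit: the "clicked" signal invokes its handler with only two arguments, the widget and the user data; there is no GdkEvent argument. Declaring an extra event parameter shifts the user data into event and leaves data pointing at garbage, hence the segfault. A sketch of the corrected handler (the g_signal_connect call can stay exactly as it is):

        /* "clicked" handlers receive only (widget, user_data). With the
         * matching signature, `data` points at the "test" string passed
         * to g_signal_connect(). */
        void button_clicked(GtkWidget *widget, gpointer data)
        {
            printf("%s\n", (gchar *) data);
        }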


  • Counting in R data.table

    - by Simon Z.
    I have the following data.table:

        set.seed(1)
        DT <- data.table(VAL = sample(c(1, 2, 3), 10, replace = TRUE))

            VAL
         1:   1
         2:   2
         3:   2
         4:   3
         5:   1
         6:   3
         7:   3
         8:   2
         9:   2
        10:   1

    Now I want to perform two tasks:

    1. Count the occurrences of each number in VAL.
    2. Count within all rows having the same value of VAL (first, second, third occurrence).

    At the end I want this result, where COUNT comes from task 1 and IDX from task 2:

            VAL COUNT IDX
         1:   1     3   1
         2:   2     4   1
         3:   2     4   2
         4:   3     3   1
         5:   1     3   2
         6:   3     3   2
         7:   3     3   3
         8:   2     4   3
         9:   2     4   4
        10:   1     3   3

    I tried to work with which and length using .I:

        dt[, list(COUNT = length(VAL == VAL[.I]),
                  IDX   = which(which(VAL == VAL[.I]) == .I))]

    but this does not work, as .I refers to a vector with the index, so I guess one must use .I[]. Though inside .I[] I again face the problem that I do not have the row index, and I know (from reading the data.table FAQ and following the posts here) that looping through rows should be avoided if possible. So, what's the data.table way?
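    A data.table-idiomatic sketch of one answer: group by VAL and let the special symbol .N supply the group size while seq_len(.N) supplies the running index within each group, with no row loop at all.

        DT[, c("COUNT", "IDX") := list(.N, seq_len(.N)), by = VAL]

    After this, DT carries the COUNT and IDX columns shown in the desired output above.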


  • Core Data object into an NSDictionary with possible nil objects

    - by Chuck
    I have a Core Data object that has a bunch of optional values. I'm pushing a table view controller and passing it a reference to the object so I can display its contents in a table view. Because I want the table view displayed a specific way, I am storing the values from the Core Data object in an array of dictionaries, then using the array to populate the table view. This works great, and I got editing and saving working properly. (I'm not using a fetched results controller because I don't have anything to sort on.) The issue with my current code is that if one of the items in the object is missing, I end up trying to put nil into the dictionary, which won't work. I'm looking for a clean way to handle this. I could do the following, but I can't help feeling there's a better way. (passedEntry is the Core Data object handed to the view controller when it is pushed; let's say it contains firstName, lastName, and age, all optional.)

        if ([passedEntry firstName] != nil) {
            [dictionary setObject:[passedEntry firstName] forKey:@"firstName"];
        } else {
            [dictionary setObject:@"" forKey:@"firstName"];
        }

    And so on. This works, but it feels kludgy, especially if I end up adding more items to the Core Data object down the road.
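    One way to tighten this up, as a sketch: centralize the nil fallback in a small helper. The ?: form with an empty middle operand is a GCC/Clang extension that returns the left operand when it is non-nil; the helper name is illustrative.

        static id objOrEmpty(id value) {
            return value ?: @"";   // fall back to @"" when value is nil
        }

        [dictionary setObject:objOrEmpty([passedEntry firstName]) forKey:@"firstName"];
        [dictionary setObject:objOrEmpty([passedEntry lastName])  forKey:@"lastName"];
        [dictionary setObject:objOrEmpty([passedEntry age])       forKey:@"age"];

    Adding more optional attributes later then costs one line each instead of five.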


  • using jQuery $.ajax to retrieve data and push it into a JavaScript array through the success function

    - by teamdane
    Hello all, I have a function where I'm trying to retrieve values using $.ajax and push the returned data into an array called output. The problem is that the success function will not let me push the results into the array. See below for the code:

        var getValues = function(el) {
            var value = ($(el).val());
            if (value == '' || $(el).attr('disabled')) {
                return [];
            }
            var string = 'value=' + value;
            var output = [];
            $.ajax({
                type: "GET",
                url: 'shocklookup_callback.php',
                dataType: "string",
                data: string,
                success: function(data) {
                }
            });
            //output.push({ text : value + 'test', value : value + 'test1'} );
            return output;
        };

    Any ideas what I can put in the success function to be able to push the data into the output array? I also need to be able to return the output array to the original function getValues. If this doesn't make a whole lot of sense, please let me know and I'll try to explain better. Thanks, Dane
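    The root issue is that $.ajax is asynchronous: getValues returns before success ever runs, so the array is still empty at the return statement. A sketch of a callback-based rework (note also that "string" is not a valid jQuery dataType; "text" or "json" are):

        var getValues = function(el, done) {
            var value = $(el).val();
            if (value == '' || $(el).attr('disabled')) {
                done([]);
                return;
            }
            $.ajax({
                type: "GET",
                url: 'shocklookup_callback.php',
                dataType: "text",
                data: { value: value },
                success: function(data) {
                    var output = [];
                    // build entries from `data` (or `value`) here
                    output.push({ text: value + 'test', value: value + 'test1' });
                    done(output);
                }
            });
        };

        // Caller side: work with the array inside the callback.
        getValues(el, function(output) {
            // use output here
        });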


  • Core Data Errors vs Exceptions Part 3

    - by John Gallagher
    My question is similar to this one.

    Background: I'm creating a large number of objects in a Core Data store, using NSOperations to speed things up. I've followed all the Core Data multithreading rules: I've got a single persistent store coordinator and a managed object context per thread, which on save merges back to the main managed object context.

    The problem: when the number of threads running at once is more than 1, saving my Core Data store logs this exception:

        NSExceptionHandler has recorded the following exception:
        NSInternalInconsistencyException -- optimistic locking failure

    What I've tried: my code that creates new entities is quite complex; it makes entities that have relationships with other entities that could be being created on a separate thread. If I replace my object creation routine with some very simple code just making unrelated entries, everything works perfectly. Initially, as well as the exceptions, I was getting a save error saying Core Data couldn't save due to the merge failing. I read the docs and realised I needed a merge policy on the managed object context I was saving to. I set this up, and as this question states, the save error goes away, but the exception remains.

    My question: do I need to worry about these exceptions? And if I do need to get rid of them, any ideas on how?


  • How to reduce a data frame keeping the order for other columns

    - by betabandido
    I am trying to reduce a data frame using the max function on a given column. I would like to preserve the other columns, but keep the values from the same rows where each maximum value was selected. An example will make this explanation easier. Let us assume we have the following data frame:

        dframe <- data.frame(list(BENCH=sort(rep(letters[1:4], 4)),
                                  CFG=rep(1:4, 4),
                                  VALUE=runif(4 * 4)))

    This gives me:

           BENCH CFG      VALUE
        1      a   1 0.98828096
        2      a   2 0.19630597
        3      a   3 0.83539540
        4      a   4 0.90988296
        5      b   1 0.01191147
        6      b   2 0.35164194
        7      b   3 0.55094787
        8      b   4 0.20744004
        9      c   1 0.49864470
        10     c   2 0.77845408
        11     c   3 0.25278871
        12     c   4 0.23440847
        13     d   1 0.29795494
        14     d   2 0.91766057
        15     d   3 0.68044728
        16     d   4 0.18448748

    Now, I want to reduce the data in order to select the maximum VALUE for each different BENCH:

        aggregate(VALUE ~ BENCH, dframe, FUN=max)

    This gives me the expected result:

          BENCH     VALUE
        1     a 0.9882810
        2     b 0.5509479
        3     c 0.7784541
        4     d 0.9176606

    Next, I tried to preserve the other columns:

        aggregate(cbind(VALUE, CFG) ~ BENCH, dframe, FUN=max)

    This reduction returns:

          BENCH     VALUE CFG
        1     a 0.9882810   4
        2     b 0.5509479   4
        3     c 0.7784541   4
        4     d 0.9176606   4

    Both VALUE and CFG are reduced using the max function. But this is not what I want. In this example I would like to obtain:

          BENCH     VALUE CFG
        1     a 0.9882810   1
        2     b 0.5509479   3
        3     c 0.7784541   2
        4     d 0.9176606   2

    where CFG is not reduced, but just keeps the value associated with the maximum VALUE for each different BENCH. How could I change my reduction in order to obtain this result?
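    One base-R approach, as a sketch: compute each group's maximum with ave(), which returns a vector aligned with the original rows, and keep only the rows where VALUE equals it. All other columns come along untouched.

        reduced <- dframe[ave(dframe$VALUE, dframe$BENCH, FUN = max) == dframe$VALUE, ]

    (If a BENCH group had ties for the maximum, this would keep all tied rows.)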


  • Quickly generate junk data of a certain size in JavaScript

    - by user1357607
    I am writing an upload speed test in JavaScript. I am using jQuery (and Ajax) to send chunks of data to a server in order to time how long it takes to get a response. This should, in theory, give an estimation of the upload speed. Of course, to cater for users' different bandwidths, I sequentially upload larger and larger amounts of junk data until a threshold duration is reached. Currently I generate the junk data using the following function; however, it is very slow when generating megabytes of data:

        function generate_random_data(size) {
            var chars = "abcdefghijklmnopqrstuvwxyz";
            var random_data = "";
            for (var i = 0; i < size; i++) {
                var random_num = Math.floor(Math.random() * chars.length);
                random_data = random_data + chars.substring(random_num, random_num + 1);
            }
            return random_data;
        }

    Really, all I am doing is generating a chunk of bytes to send to the server; however, this is the only way I could find to do it in JavaScript. Any help would be appreciated.
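    A sketch of a faster variant: fill an array and join once at the end, avoiding the repeated string concatenation (and per-character substring calls) that makes the original crawl at megabyte sizes. (The original as posted also references char.length where chars.length is meant.)

        function generate_random_data(size) {
            var chars = "abcdefghijklmnopqrstuvwxyz";
            var out = new Array(size);
            for (var i = 0; i < size; i++) {
                // one random letter per slot; join() builds the string once
                out[i] = chars.charAt(Math.floor(Math.random() * chars.length));
            }
            return out.join("");
        }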


  • Data transformation question

    - by tkm
    I have data composed of a list of employers and a list of workers. Each has a many-to-many relationship with the other (so an employer can have many workers, and a worker can have many employers). The way the data is retrieved (and given to me) is as follows: each employer has an array of workers. In other words, employer n has worker x, worker y, etc. So I have a bunch of employer objects, each containing an array of workers. I need to transform this data (basically inverting the relationship) so that I have a bunch of worker objects, each containing an array of employers. In other words, worker x has employer n1, employer n2, etc. The context is hypothetical, so please don't comment on why I need this or why I am doing it this way. I would really just like help with the algorithm for this transformation (there isn't that much data, so I prefer readability over complex optimizations). (Oh, and I am using Java, but pseudocode would be fine.) Thanks!
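    A sketch of the straightforward inversion: walk each employer's worker list once and index the employers by worker in a map. The class and accessor names are hypothetical stand-ins for the real model, and Worker needs sensible equals()/hashCode() (or the map falls back to object identity).

        import java.util.*;

        class RelationshipInverter {
            static Map<Worker, List<Employer>> invert(List<Employer> employers) {
                Map<Worker, List<Employer>> byWorker = new HashMap<>();
                for (Employer employer : employers) {
                    for (Worker worker : employer.getWorkers()) {
                        // create the worker's employer list on first sight,
                        // then append this employer to it
                        byWorker.computeIfAbsent(worker, w -> new ArrayList<>())
                                .add(employer);
                    }
                }
                return byWorker;
            }
        }

    This is a single pass over every employer-worker pair, so it is linear in the number of relationships and easy to read.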


  • How can I get the output of a command terminated by an alarm() call in Perl?

    - by rockyurock
    Case 1: if I run the command below, i.e. iperf in UL only, then I am able to capture the output in a text file:

        @output = readpipe("iperf.exe -u -c 127.0.0.1 -p 5001 -b 3600k -t 10 -i 1");
        open FILE, ">Misplay_DL.txt" or die $!;
        print FILE @output;
        close FILE;

    Case 2: when I run iperf in DL mode, the server starts listening in continuous mode and stays there even after getting data from the client (here the server and client are on a LAN):

        @output = system("iperf.exe -u -s -p 5001 -i 1");

    On the server side:

        D:\_IOT_SESSION_RELATED\SEEM_ELEMESNTS_AT_COMM_PORT_CONF\Tput_Related_Tools\AUTOMATION_APP_\AUTOMATION_UTILITY>iperf.exe -u -s -p 5001
        ------------------------------------------------------------
        Server listening on UDP port 5001
        Receiving 1470 byte datagrams
        UDP buffer size: 8.00 KByte (default)
        ------------------------------------------------------------
        [1896] local 192.168.5.101 port 5001 connected with 192.168.5.101 port 4878
        [ ID]  Interval       Transfer     Bandwidth       Jitter    Lost/Total Datagrams
        [1896]  0.0- 2.0 sec   881 KBytes  3.58 Mbits/sec  0.000 ms    0/  614 (0%)

    The command prompt does not reappear; the process continues. On the client side:

        D:\_IOT_SESSION_RELATED\SEEM_ELEMESNTS_AT_COMM_PORT_CONF\Tput_Related_Tools\AUTOMATION_APP_\AUTOMATION_UTILITY>iperf.exe -u -c 192.168.5.101 -p 5001 -b 3600k -t 2 -i 1
        ------------------------------------------------------------
        Client connecting to 192.168.5.101, UDP port 5001
        Sending 1470 byte datagrams
        UDP buffer size: 8.00 KByte (default)
        ------------------------------------------------------------
        [1880] local 192.168.5.101 port 4878 connected with 192.168.5.101 port 5001
        [ ID]  Interval       Transfer     Bandwidth
        [1880]  0.0- 1.0 sec   441 KBytes  3.61 Mbits/sec
        [1880]  1.0- 2.0 sec   439 KBytes  3.60 Mbits/sec
        [1880]  0.0- 2.0 sec   881 KBytes  3.58 Mbits/sec
        [1880] Server Report:
        [1880]  0.0- 2.0 sec   881 KBytes  3.58 Mbits/sec  0.000 ms    0/  614 (0%)
        [1880] Sent 614 datagrams

    So, because the server keeps listening and never terminates, I can't write its output to a text file; execution never reaches the command that creates the file. I therefore adopted the alarm() function to terminate the server-side command (iperf.exe -u -s -p 5001) after it has received all the data from the client. Could anybody suggest a way? Here is my code:

        #! /usr/bin/perl -w
        my $command = "iperf.exe -u -s -p 5001";
        my @output;
        eval {
            local $SIG{ALRM} = sub { die "Timeout\n" };
            alarm 20;
            #@output = `$command`;
            #my @output = readpipe("iperf.exe -u -s -p 5001");
            #my @output = exec("iperf.exe -u -s -p 5001");
            my @output = system("iperf.exe -u -s -p 5001");
            alarm 0;
        };
        if ($@) {
            warn "$command timed out.\n";
        } else {
            print "$command successful. Output was:\n", @output;
        }
        open FILE, ">display.txt" or die $!;
        print FILE @output_1;
        close FILE;

    I know that with the system command I cannot capture the output to a text file, but I tried with readpipe() and exec() calls as well, in vain. Could someone please take a look and let me know why iperf.exe -u -s -p 5001 is not terminating even after the alarm call, and how to get the output into a text file?
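    A sketch of one likely fix, keeping the script's structure: capture output with backticks (system() returns only the exit status), assign to the outer @output instead of re-declaring it with my inside the eval, and write out that same array (the script as posted prints an unrelated @output_1 at the end). One caveat, offered as an assumption rather than a guarantee: on Windows, alarm() frequently cannot interrupt a blocked external command, so the iperf child may still need to be killed explicitly.

        #!/usr/bin/perl
        use strict;
        use warnings;

        my $command = "iperf.exe -u -s -p 5001";
        my @output;

        eval {
            local $SIG{ALRM} = sub { die "Timeout\n" };
            alarm 20;
            @output = `$command`;   # backticks capture stdout; no `my` here,
                                    # so the outer @output is filled
            alarm 0;
        };

        if ($@) {
            warn "$command timed out.\n";
        } else {
            print "$command successful. Output was:\n", @output;
        }

        open my $fh, '>', 'display.txt' or die $!;
        print $fh @output;          # the same array that was filled above
        close $fh;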



  • Unable to Use Simple JSOUP Example To Parse Website Table Data

    - by OhNoItsAnOverflow
    I'm attempting to extract the following data from a table via Android / JSOUP; however, I'm having a bit of trouble nailing down the process. I think I'm getting close with the code I've provided below, but for some reason I still cannot get my TextView to display any of the table data. (P.S. Live URLs can be provided if necessary.)

    Source:

        public class MainActivity extends Activity {
            TextView tv;
            final String URL = "http://exampleurl.com";

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.activity_main);
                tv = (TextView) findViewById(R.id.TextView01);
                new MyTask().execute(URL);
            }

            private class MyTask extends AsyncTask<String, Void, String> {
                ProgressDialog prog;
                String title = "";

                @Override
                protected void onPreExecute() {
                    prog = new ProgressDialog(MainActivity.this);
                    prog.setMessage("Loading....");
                    prog.show();
                }

                @Override
                protected String doInBackground(String... params) {
                    try {
                        Document doc = Jsoup.connect(params[0]).get();
                        Element tableElement = doc.getElementsByClass("datagrid").first();
                        title = doc.title();
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                    return title;
                }

                @Override
                protected void onPostExecute(String result) {
                    super.onPostExecute(result);
                    prog.dismiss();
                    tv.setText(result);
                }
            }
        }

    Table:

        <table class="datagrid">
          <tbody>
            <tr>
              <th>Item No.</th>
              <th>Name</th>
              <th>Sex</th>
              <th>Location</th>
            </tr>
            <tr>
              <td><a href="redirector.cfm?ID=a33660a3-aae0-45e3-9703-d59d77717836&amp;page=1&amp;&amp;lname=&amp;fname=" title="501207593">501207593&nbsp;</a></td>
              <td>USER1</td>
              <td>M&nbsp;</td>
              <td>Unknown</td>
            </tr>
            <tr>
              <td><a href="redirector.cfm?ID=edf524da-8598-450f-9373-da87db8d6c84&amp;page=1&amp;&amp;lname=&amp;fname=" title="501302750">501302750&nbsp;</a></td>
              <td>USER2</td>
              <td>M&nbsp;</td>
              <td>Unknown</td>
            </tr>
            <tr>
              <td><a href="redirector.cfm?ID=a78abeea-7651-4ac1-bba2-0dcb272c8b77&amp;page=1&amp;&amp;lname=&amp;fname=" title="531201804">531201804&nbsp;</a></td>
              <td>USER3</td>
              <td>M&nbsp;</td>
              <td>Unknown</td>
            </tr>
          </tbody>
        </table>
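    A sketch of the missing step: doInBackground() finds the table but only ever returns the page title, so the TextView never sees the rows. Walking the table's cells and returning that text instead would surface the data:

        // inside doInBackground(), after the connect:
        Document doc = Jsoup.connect(params[0]).get();
        Element table = doc.getElementsByClass("datagrid").first();
        StringBuilder sb = new StringBuilder();
        if (table != null) {
            for (Element row : table.select("tr")) {
                for (Element cell : row.select("th, td")) {
                    sb.append(cell.text()).append("  ");
                }
                sb.append('\n');
            }
        }
        return sb.toString();  // delivered to onPostExecute(), then set on the TextView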


  • Is adding users to the group www-data safe on Debian?

    - by John
    Many PHP applications do self-configuration and self-updating. This requires Apache to have write access to the PHP files. While chgrp'ing them all to www-data appears to be good practice to avoid making them world-writable, I also wish to allow users to create new files and edit existing ones. Is adding users to the group www-data safe on Debian? For example:

        775 root www-data /var/www
        644 john www-data /var/www/johns_php_application.php
        660 john www-data /var/www/johns_php_applications_configuration_file
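    For reference, a sketch of the shell commands this setup implies, with the user and path taken from the example above:

        # add the user to the www-data group (takes effect at next login)
        usermod -a -G www-data john

        # optional: setgid on the directory so newly created files
        # inherit the www-data group automatically
        chmod g+s /var/www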


  • How to measure startup time and order of Windows services on boot?

    - by djangofan
    I am not asking how to measure server startup time here. I am wondering if anyone knows of a tool that can measure, and show a graph of, the startup time and order of all the Windows services during system boot. I saw a software program that does this on my local Portland news last week, but I am unable to remember what it was called or anything else about it. All I remember is that it was a "tech" news story meant to help computer users with their computers. So I know the software exists, and I am trying to find it.


  • How to restrict all services to single domain in Ubuntu?

    - by harold
    Someone has pointed an unknown domain at my server's IP address, likely via A records. I would like to reject access to ALL services (httpd, ssh, mail, etc.) for requests arriving through that domain and only allow requests made through my own domain, so that connecting via the foreign domain is completely rejected by my server. I can disallow HTTP access by changing my web server settings, but I want to do this for every single type of connection. How can I do this?
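    One caveat worth noting before any configuration: of the services listed, only HTTP(S) ever sees the requested host name (via the Host header, or TLS SNI); ssh and mail clients resolve the domain themselves and connect to the bare IP, so those services cannot tell which domain was used. For the web tier, a sketch of the usual Apache approach is a catch-all default vhost, listed first, that denies everything, with the real domain served by a named vhost. This uses Apache 2.4 syntax, and the domain and path are placeholders:

        # default vhost: catches any Host header not explicitly served below
        <VirtualHost *:80>
            ServerName catchall.invalid
            <Location />
                Require all denied
            </Location>
        </VirtualHost>

        # the real site
        <VirtualHost *:80>
            ServerName example.com
            DocumentRoot /var/www/example
        </VirtualHost>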


  • "this network location can't be included because it is not indexed" on Windows 2008R2 Remote Desktop Services Hosting

    - by ChrisNZ
    I'm setting up a new terminal server for our users on Windows 2008 R2 (I guess I should call it Remote Desktop Services now!). When I try to change the location of "Documents" (by removing the default Documents library and adding a new one) to use the file server, i.e. \\fileserver\username\Documents, I get the message: "This network location can't be included because it is not indexed." I certainly don't want to make folders available offline, and in fact I have set the GPO to prohibit offline folders on the terminal servers. What is the best practice for document libraries on a terminal server with network file shares?


  • Solution: Testing Web Services with MSTest on Team Build

    - by Martin Hinshelwood
    Guess what: about 20 minutes after I fixed the build, Allan broke it again!

    Update, 4th March 2010: after having huge problems getting this working, I read Billy Wang's post, which showed me the light.

    The problem here is that even though the test passes locally, it will not during an automated build. When you send your tests to the build server, it does not understand that you want to spin up the web site and run tests against that! When you run the test in Visual Studio, it spins up the web site anyway; but would you expect your test to pass if you told the website not to spin up? Of course not. So, when you send the code to the build server, you need to tell it what to spin up.

    First, the best way to get the parameters you need is to right-click on the method you want to test and select "Create Unit Tests...". This will detect whether you are running in IIS, the ASP.NET Development Server, or neither, and create the relevant tags.

    Figure: Right-clicking on "SaveDefaultProjectFile" will produce a context menu with "Create Unit Tests..." on it. If you use this option, it will auto-detect most of the attributes that are required.

        /// <summary>
        /// A test for SSW.SQLDeploy.SilverlightUI.Web.Services.IProfileService.SaveDefaultProjectFile
        /// </summary>
        // TODO: Ensure that the UrlToTest attribute specifies a URL to an ASP.NET page (for example,
        // http://.../Default.aspx). This is necessary for the unit test to be executed on the web server,
        // whether you are testing a page, web service, or a WCF service.
        [TestMethod()]
        [HostType("ASP.NET")]
        [AspNetDevelopmentServerHost("D:\\Workspaces\\SSW\\SSW\\SqlDeploy\\DEV\\Main\\SSW.SQLDeploy.SilverlightUI.Web", "/")]
        [UrlToTest("http://localhost:3100/")]
        [DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")]
        public void SaveDefaultProjectFileTest()
        {
            IProfileService target = new ProfileService(); // TODO: Initialize to an appropriate value
            string strComputerName = string.Empty;         // TODO: Initialize to an appropriate value
            bool expected = false;                         // TODO: Initialize to an appropriate value
            bool actual;
            actual = target.SaveDefaultProjectFile(strComputerName);
            Assert.AreEqual(expected, actual);
            Assert.Inconclusive("Verify the correctness of this test method.");
        }

    Figure: Auto-created code showing the attributes required to run correctly in IIS or, in this case, the ASP.NET Development Server.

    If you are a purist and don't like creating unit tests like this, then you just need to add the three attributes manually:

    - HostType - specifies what host to use. It's an extensibility point, so you could write your own; or you could just use "ASP.NET".
    - UrlToTest - specifies the start URL. For most tests it does not matter which page you call, as long as it is a valid page; otherwise your test may not run on the server, but may pass anyway.
    - AspNetDevelopmentServerHost - this is a nasty one. It is only used if you are using the ASP.NET Development Server, and is unnecessary if you are using IIS. It sets the host settings, and the first value MUST be the physical path to the root of your web application.

    OK, so all that was rubbish, and I could not get anything working using the MSDN documentation. Google provided very little help until I ran into Billy Wang's post and heard that heavenly music that all developers hear when understanding dawns that what they have been doing up until now is just plain stupid. I am sure that the above will work when I am doing web unit tests, but there is a much easier way when doing web services.

    You need to add the AspNetDevelopmentServer attribute to your code. This will tell MSTest to spin up an ASP.NET Development Server to host the service. Specify the path to the web application you want to use.

        [AspNetDevelopmentServer("WebApp1", "D:\\Workspaces\\SSW\\SSW\\SqlDeploy\\DEV\\Main\\SSW.SQLDeploy.SilverlightUI.Web")]
        [DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")]
        [TestMethod]
        public void ProfileService_Integration_SaveDefaultProjectFile_Returns_True()
        {
            ProfileServiceClient target = new ProfileServiceClient();
            bool isTrue = target.SaveDefaultProjectFile("Mav");
            Assert.AreEqual(true, isTrue);
        }

    Figure: The AspNetDevelopmentServer attribute makes sure that the specified web application is launched.

    Now we can run the test and have it pass; but if the dynamically assigned ASP.NET Development Server port changes, what happens to the details in your app.config that were generated when creating a reference to the web service? Well, they would be wrong and the test would fail. This is where Billy's helper method comes in. Once you have created an instance of your service client, and it has loaded the config, but before you make any calls to it, you need to go in and dynamically set the endpoint address to the same address as your dynamically hosted web application.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using Microsoft.VisualStudio.TestTools.UnitTesting;
        using System.Reflection;
        using System.ServiceModel.Description;
        using System.ServiceModel;

        namespace SSW.SQLDeploy.Test
        {
            class WcfWebServiceHelper
            {
                public static bool TryUrlRedirection(object client, TestContext context, string identifier)
                {
                    bool result = true;
                    try
                    {
                        PropertyInfo property = client.GetType().GetProperty("Endpoint");
                        string webServer = context.Properties[string.Format("AspNetDevelopmentServer.{0}", identifier)].ToString();
                        Uri webServerUri = new Uri(webServer);
                        ServiceEndpoint endpoint = (ServiceEndpoint)property.GetValue(client, null);
                        EndpointAddressBuilder builder = new EndpointAddressBuilder(endpoint.Address);
                        builder.Uri = new Uri(endpoint.Address.Uri.OriginalString.Replace(endpoint.Address.Uri.Authority, webServerUri.Authority));
                        endpoint.Address = builder.ToEndpointAddress();
                    }
                    catch (Exception e)
                    {
                        context.WriteLine(e.Message);
                        result = false;
                    }
                    return result;
                }
            }
        }

    Figure: This fixes the problem of the URL in your app.config not matching the dynamically hosted ASP.NET Development Server port.

    We can now add a call to this method after we create the proxy object, changing the endpoint for the service to the correct one. The call is wrapped in an assert, as there is no point in continuing if it fails.

        [AspNetDevelopmentServer("WebApp1", "D:\\Workspaces\\SSW\\SSW\\SqlDeploy\\DEV\\Main\\SSW.SQLDeploy.SilverlightUI.Web")]
        [DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")]
        [TestMethod]
        public void ProfileService_Integration_SaveDefaultProjectFile_Returns_True()
        {
            ProfileServiceClient target = new ProfileServiceClient();
            Assert.IsTrue(WcfWebServiceHelper.TryUrlRedirection(target, TestContext, "WebApp1"));
            bool isTrue = target.SaveDefaultProjectFile("Mav");
            Assert.AreEqual(true, isTrue);
        }

    Figure: Editing the endpoint from the app.config on the fly to match the dynamically hosted ASP.NET Development Server URL and port is now easy.

    As you can imagine, AspNetDevelopmentServer poses some problems if you have multiple developers: what are the chances of everyone using the same location to store the source? And what if you are using a build server; how do you tell MSTest where to look for the files? To the rescue is a property called "%PathToWebRoot%", which is always right on the build server. It will always point to your build drop folder for your solution's web sites, which will be \\tfs.ssw.com.au\BuildDrop\[BuildName]\Debug\_PrecompiledWeb\ or whatever your build drop location is. So let's change the code above to use it:

        [AspNetDevelopmentServer("WebApp1", "%PathToWebRoot%\\SSW.SQLDeploy.SilverlightUI.Web")]
        [DeploymentItem("SSW.SQLDeploy.SilverlightUI.Web.dll")]
        [TestMethod]
        public void ProfileService_Integration_SaveDefaultProjectFile_Returns_True()
        {
            ProfileServiceClient target = new ProfileServiceClient();
            Assert.IsTrue(WcfWebServiceHelper.TryUrlRedirection(target, TestContext, "WebApp1"));
            bool isTrue = target.SaveDefaultProjectFile("Mav");
            Assert.AreEqual(true, isTrue);
        }

    Figure: Adding %PathToWebRoot% to the AspNetDevelopmentServer path makes it work everywhere.

    Now we have another problem: this will ONLY run on the build server, and will fail locally, as %PathToWebRoot%'s default value is C:\Users\[profile]\Documents\Visual Studio 2010\Projects. Well, this sucks... How do we get the test to run on any build server and any developer laptop? Open "Tools | Options | Test Tools | Test Execution" in Visual Studio and you will see a field called "Web application root directory". This is where you override that default.

    Figure: You can override the default website location for tests.

    In my case I would put in D:\Workspaces\SSW\SSW\SqlDeploy\DEV\Main, and all the developers working with this branch would put in the folder they have mapped. Can you see a problem? What if I create a $/SSW/SqlDeploy/DEV/34567 branch from Main and I want to run tests in there? Well, I would have to change the value above. This is not ideal, but as you can put your projects anywhere on a computer, it has to be done.

    Conclusion: although this looks convoluted and complicated, there are real problems being solved here that give you a test-anywhere solution: any build server, any developer workstation.

    Resources:
    http://billwg.blogspot.com/2009/06/testing-wcf-web-services.html
    http://tough-to-find.blogspot.com/2008/04/testing-asmx-web-services-in-visual.html
    http://msdn.microsoft.com/en-us/library/ms243399(VS.100).aspx
    http://blogs.msdn.com/dscruggs/archive/2008/09/29/web-tests-unit-tests-the-asp-net-development-server-and-code-coverage.aspx
    http://www.5z5.com/News/?543f8bc8b36b174f

    Technorati Tags: VS2010, MSTest, Team Build 2010, Team Build, Visual Studio, Visual Studio 2010, Visual Studio ALM, Team Test, Team Test 2010


  • Problems with data driven testing in MSTest

    - by severj3
    Hello, I am trying to get data-driven testing to work in C# with MSTest/Selenium. Here is a sample of my code trying to set it up:

        [TestClass]
        public class NewTest
        {
            private ISelenium selenium;
            private StringBuilder verificationErrors;

            [DeploymentItem("GoogleTestData.xls")]
            [DataSource("System.Data.OleDb",
                "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=GoogleTestData.xls;Persist Security Info=False;Extended Properties='Excel 8.0'",
                "TestSearches$", DataAccessMethod.Sequential)]
            [TestMethod]
            public void GoogleTest()
            {
                selenium = new DefaultSelenium("localhost", 4444, "*iehta", "http://www.google.com");
                selenium.Start();
                verificationErrors = new StringBuilder();
                var searchingTerm = TestContext.DataRow["SearchingString"].ToString();
                var expectedResult = TestContext.DataRow["ExpectedTextResults"].ToString();

    Here's my error:

        Error 3  An object reference is required for the non-static field, method, or property
        'Microsoft.VisualStudio.TestTools.UnitTesting.TestContext.DataRow.get'
        E:\Projects\SeleniumProject\SeleniumProject\MaverickTest.cs  32  33  SeleniumProject

    The error is underlining the "TestContext.DataRow" part of both statements. I've really been struggling with this one. Thanks!
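    The compiler message points at the usual cause: with no TestContext member on the class, TestContext.DataRow is read as a static access on the TestContext type itself. MSTest injects a TestContext instance into a public property on the test class; a sketch of the conventional property to add to NewTest:

        private TestContext testContextInstance;

        public TestContext TestContext
        {
            get { return testContextInstance; }
            set { testContextInstance = value; }
        }

    With that property in place, TestContext.DataRow inside the test method resolves to the injected instance and the per-row values become available.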


  • Agile PLM Partners and OCS offer valuable services

    - by Shane Goodwin
    One of the amazing benefits of using Agile is the ability to tap into a broad number of partners and Oracle Consulting Services to leverage very talented people. Many of these people respond to postings on the Agile PLM Yahoo user groups (WRAU and IAUG) as well as in the Oracle Support Community. As you manage your Agile PLM project, whether a new implementation or an upgrade, these partners and OCS can jumpstart your internal resources with information and experience that takes years to develop. They can also offer other capabilities which add to your Agile experience. As an example, GoEngineer recently announced a new offering to host Agile PLM databases for companies who do not want to run their own Agile PLM system. This offering provides our smaller customers with more options for how to implement Agile PLM. The Oracle Consulting Services group has two new offerings: one for rapid deployments of new customers, and another for process and technical diagnostics of existing customers, in order to better leverage the PLM investment. Contact your Oracle Sales representative to learn more. In addition, there are many other partners who are helping our customers get the most from Agile PLM. In the coming months, the Oracle Partner Network will be working with partners to certify them as specialists in Agile PLM. Some of our other current partners working on Agile PLM are Kalypso, J Squared, Domain Systems, Sierra Atlantic and Minerva.


  • Stack data storage order

    - by Jamie Dixon
    When talking about a stack, in either computing or "real" life, we usually assume "first on, last off" functionality. Because the idea of a stack is based on something in the physical world, does it matter how the data in the stack is stored? I notice in a lot of examples that the stack's data is stored in an array, and the newest item added to the stack is placed at the bottom of the array (like adding a new plate to an existing stack of plates, except putting it underneath the other plates rather than on top). As a paradigm, does it matter in what order the data is stored within the stack, as long as the operations of the stack act as expected?
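    A small illustration of why the internal order is invisible to callers, as a sketch in Java: these two array-backed stacks lay their elements out in opposite orders, yet behave identically through push and pop.

        import java.util.ArrayList;
        import java.util.List;

        class TopAtEndStack<T> {
            private final List<T> items = new ArrayList<>();
            public void push(T item) { items.add(item); }              // newest stored last
            public T pop() { return items.remove(items.size() - 1); }
        }

        class TopAtFrontStack<T> {
            private final List<T> items = new ArrayList<>();
            public void push(T item) { items.add(0, item); }           // newest stored first
            public T pop() { return items.remove(0); }
        }

    Both are LIFO; only their performance characteristics differ, since inserting at index 0 shifts every existing element.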


  • ASP.NET Dynamic Data site and ASP.NET MVC 2 site together

    - by loviji
    Hi, I first created an ASP.NET MVC 2 application and wrote quite a bit of functionality. Afterwards, I created an ASP.NET Dynamic Data site. Now, when I click the Run button in Visual Studio, the MVC app opens in the browser at http://localhost:50062 and the ASP.NET Dynamic Data site at http://localhost:58395/cms/. But I want to merge these apps into one. Can I use an ASP.NET Dynamic Data site and ASP.NET MVC 2 at the same time?
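    They can coexist in a single project, since both sit on top of ASP.NET routing. A sketch of registering a Dynamic Data route alongside the MVC routes in Global.asax; the cms/ prefix mirrors the URL above, and MyDataContext is a hypothetical model type standing in for the real data context:

        // using System.Web.DynamicData; using System.Web.Mvc; using System.Web.Routing;

        private static readonly MetaModel Model = new MetaModel();

        public static void RegisterRoutes(RouteCollection routes)
        {
            // Dynamic Data pages served under /cms/...
            Model.RegisterContext(typeof(MyDataContext),
                new ContextConfiguration { ScaffoldAllTables = true });
            routes.Add(new DynamicDataRoute("cms/{table}/{action}") { Model = Model });

            // Ordinary MVC routes
            routes.MapRoute(
                "Default",
                "{controller}/{action}/{id}",
                new { controller = "Home", action = "Index", id = UrlParameter.Optional });
        }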


  • ETL Operation - Return Primary Key

    - by user302254
    I am using Talend to populate a data warehouse. My job writes customer data to a dimension table and transaction data to the fact table. The surrogate key (p_key) is auto-incrementing. When I insert a new customer, I need my fact table to reflect the id of the related customer. As I mentioned, the key is auto-incrementing, so I can't just insert an arbitrary value. Any thoughts on how I can insert a row into my dimension table and still retrieve the primary key to reference in my fact record? Thanks.
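    Setting Talend-specific components aside, a sketch of the underlying JDBC pattern: ask the driver for the generated key at insert time and carry it into the fact load. The connection, table and column names here are illustrative.

        import java.sql.*;

        // Insert the dimension row and read back its auto-generated key,
        // which the fact row can then reference.
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO customer_dim (customer_name) VALUES (?)",
                Statement.RETURN_GENERATED_KEYS)) {
            ps.setString(1, customerName);
            ps.executeUpdate();
            try (ResultSet keys = ps.getGeneratedKeys()) {
                if (keys.next()) {
                    long customerKey = keys.getLong(1); // use in the fact insert
                }
            }
        }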


  • best way to save data in ipod touch/iphone objective-c

    - by Leonardo
    Hi all, I am writing a very simple application for iPhone. Unfortunately, I am really a newbie. What I am trying to do is save data at the end of a user experience. The data are really simple: only strings or ints, or some arrays. Later, I want to be able to retrieve that data, so I suppose I also need an event id. Could you please point out the best way, API or technology to achieve this: XML, plain text, serialization...? Many thanks, Leonardo
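    For values this simple, one common option is NSUserDefaults, which stores strings, numbers and arrays without any explicit file handling. A sketch; the key names, including the event id scheme, are illustrative:

        NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];

        // Save at the end of the user experience, keyed by an event id.
        int eventId = 7;  // illustrative
        NSString *eventKey = [NSString stringWithFormat:@"event-%d", eventId];
        [defaults setObject:[NSArray arrayWithObjects:@"answer one", @"answer two", nil]
                     forKey:eventKey];
        [defaults setInteger:42 forKey:@"lastScore"];
        [defaults synchronize];

        // Retrieve later.
        NSArray *answers = [defaults arrayForKey:eventKey];
        NSInteger score  = [defaults integerForKey:@"lastScore"];

    For anything richer than strings, numbers and arrays, a plist file or NSKeyedArchiver serialization would be the usual next step up.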


  • Messaging Systems – Handshaking, Reconciliation and Tracking for Data Transparency

    - by Ahsan Alam
    As many corporations build business partnerships with other organizations, the need to share information becomes necessary. Sharing large amounts of data via snail mail, email and/or fax is quickly becoming a thing of the past. More and more organizations rely heavily on FTP and/or web services to exchange data. Corporations apply a wide range of technologies and techniques based on their available resources and data transfer needs. Sometimes it involves simple home-grown applications; other times, large investments are made in products like BizTalk, TIBCO etc. The complexity of information management also varies significantly from one organization to another. Some may deal with a handful of simple steps to process and manage shared data, whereas others may rely on fairly complex processes with heavy interaction with internal and external systems in order to serve the business needs. It is not surprising that many of these systems end up becoming black boxes over a period of time. Consequently, people and the business start to rely more and more on developers and support personnel just to extract simple information, adding to the loss of productivity.

    One of the most important factors in any business is transparency of data, irrespective of technology preferences and the complexity of business processes. Not knowing the state of data can become very costly to the business. Having been involved in messaging systems for some time now, I have heard the same types of questions over and over again. Did we transmit messages successfully? Did we get responses back? What is the expected turn-around time? Did the system experience any errors? When one company transmits data to one or more companies, it may invoke a set of processes that could complete in a matter of seconds, or take days. As data travels from one organization to another, the uncertainty grows, and the longer it takes to track the uncertain state of the data, the costlier it gets for the business. So, in every business scenario, it is extremely important to be aware of the state of the data.

    Architects of messaging systems can take several steps to aid data transparency. Some form of handshaking and reconciliation mechanism, as well as extensive data tracking, can be incorporated into the system to provide clear visibility into the data. What do I mean by handshaking and reconciliation? Some might consider these a single concept; however, I like to treat them as two distinct categories. Handshaking serves as message receipts or acknowledgments: when one party transmits messages to another, the receiver must acknowledge each message by sending an immediate response for each transaction. Whenever we use web services, handshaking is often achieved using the request/reply pattern. Similarly, if FTP is used, a receiver can acknowledge by dropping messages for the sender as soon as the files are picked up. These forms of handshaking or acknowledgment inform the message sender and receiver that a successful transaction has occurred.

    I mentioned earlier that it can take anywhere from a few seconds to a number of days before shared data is completely processed. In addition, whenever a batched transaction is used, the processing time for each data element inside the batch can also vary significantly. So, in order to successfully manage data processing, reconciliation becomes extremely important; otherwise it may result in data loss or, in some cases, a hefty penalty. Reconciliation can be done in many ways. Partner organizations can share and compare ad hoc reports to achieve it. Alternatively, partners can agree on some type of systematic reconciliation message, where systems within the responsible parties trigger messages to their partners as soon as data processing completes.

    The next step in data transparency is extensive data tracking. Some products, such as BizTalk and TIBCO, provide built-in functionality for data tracking; however, built-in functionality may not always be adequate. Sometimes an additional tracking system (or database) needs to be built in order to monitor all types of data flow, including message transactions, handshaking, reconciliation, system errors and many more. If these types of data are captured, they can be presented to business users in any form or fashion. When business users are empowered with such information, their reliance on developers and support teams decreases dramatically.

    In today's collaborative world of information sharing, data transparency is key to the success of every business. The state of business data will constantly change, but when people have easier access to the various states of that data, they can make better and quicker decisions. Therefore, I feel that data handshaking, reconciliation and tracking are very important aspects of messaging systems.

