Search Results

Search found 25093 results on 1004 pages for 'console output'.


  • How do I output installation status using GTK in C?

    - by Runner
        #include <gtk/gtk.h>

        int main(int argc, char *argv[])
        {
            GtkWidget *window;

            gtk_init(&argc, &argv);
            window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
            gtk_widget_show(window);
            gtk_main();
            return 0;
        }

    The above is just an empty window. I want to output dynamic information in it (not editable). How should I do that?
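
    One way to show dynamic, read-only text (a sketch in C# using the GTK#/gtk-sharp bindings rather than the question's C API; the C calls map directly, e.g. gtk_text_view_new and gtk_text_buffer_set_text) is to pack a non-editable GtkTextView into the window and append to its buffer as the installation progresses:

        using System;
        using Gtk; // assumes the gtk-sharp (GTK#) bindings are installed

        class StatusWindow
        {
            static void Main()
            {
                Application.Init();

                var window = new Window("Installation status");
                // Editable = false gives the "output only" behavior asked about.
                var view = new TextView { Editable = false, CursorVisible = false };
                window.Add(view);
                window.DeleteEvent += (o, e) => Application.Quit();
                window.ShowAll();

                // Append status text as work progresses; in a real installer these
                // appends would happen from an idle/timeout handler while the main
                // loop runs, not before Application.Run().
                view.Buffer.Text += "Copying files...\n";
                view.Buffer.Text += "Registering components...\n";

                Application.Run();
            }
        }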


  • Capture Stored Procedure print output in .NET (Different model!)

    - by Workshop Alex
    Basically, this question with a difference... Is it possible to capture PRINT output from a T-SQL stored procedure in .NET, using the Entity Framework? The solution in the other question doesn't work for me: it works with the connection type from System.Data.SqlClient, but I'm using the one from System.Data.EntityClient, which does not have an InfoMessage event. (Of course, I could just create a SQL connection based on the Entity connection settings, but I'd prefer to do it directly.)
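
    One possible workaround (a sketch, not a confirmed fix: it assumes SQL Server is the provider, so the EntityConnection wraps a store-level SqlConnection, and `context` here is a hypothetical ObjectContext variable): reach through the Entity connection to the underlying store connection, which does expose InfoMessage.

        using System;
        using System.Data.EntityClient;
        using System.Data.Objects;
        using System.Data.SqlClient;

        static void HookInfoMessages(ObjectContext context)
        {
            // ObjectContext.Connection is actually an EntityConnection...
            var entityConnection = (EntityConnection)context.Connection;
            // ...and its StoreConnection is the real provider connection.
            var storeConnection = entityConnection.StoreConnection as SqlConnection;
            if (storeConnection != null)
            {
                // PRINT output raised while EF executes commands over this
                // connection arrives here.
                storeConnection.InfoMessage += (sender, e) => Console.WriteLine(e.Message);
            }
        }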


  • Having an issue with the output in C programming

    - by user2985811
    I'm having a problem getting any output after entering the input: the output doesn't show after I type in the values, and I don't know how to structure the code. If you guys could help me with this, I would be grateful.

        #include <stdio.h>
        #include <conio.h>

        int read_temps(float temps[]);
        int hot_days(int numOfTemp, float temps[]);
        void printf_temps(int numOfTemp, float temps[], int numOfHotDays);

        int main(void)
        {
            float temps[31];
            int numOfTemp, numOfHotDays;

            clrscr();
            numOfTemp = read_temps(temps);
            numOfHotDays = hot_days(numOfTemp, temps);
            clrscr();
            printf_temps(numOfTemp, temps, numOfHotDays);
            getch();
            return 0;
        }

        /* Reads temperatures until the sentinel -500 is entered; returns the count. */
        int read_temps(float temps[])
        {
            int index = 0;
            float tempVal;

            do {
                printf("Enter the temperature:");
                scanf("%f", &tempVal);
                if (tempVal != -500.0) {
                    temps[index] = tempVal;
                    index++;
                }
            } while (tempVal != -500.0);

            return index;
        }

        /* Counts the days with a temperature above 32 degrees. */
        int hot_days(int numOfTemp, float temps[])
        {
            int i;
            int count = 0;

            for (i = 0; i < numOfTemp; i++) {
                if (temps[i] > 32.0)
                    count++;
            }
            return count;
        }

        /* Prints the temperatures, the number of hot days and the average. */
        void printf_temps(int numOfTemp, float temps[], int numOfHotDays)
        {
            float sum = 0.0;
            int i;

            printf("\nInput Temperatures:");
            printf("\n-------------------------");
            for (i = 0; i < numOfTemp; i++) {
                printf("\nDay %d : %.2fF", i + 1, temps[i]);
                sum = sum + temps[i];
            }
            printf("\nNumber of Hot Days : %d", numOfHotDays);
            printf("\nAverage Temperature: %.2f", sum / numOfTemp);
        }


  • SMO ManagedComputer.ServerInstances is empty

    - by Mark J Miller
    I am trying to use SMO (VS 2010, SQL Server 2008) to connect to SQL Server and view the server protocol configuration. I can connect and list the Services and ClientProtocols, as well as the account the MSSQLSERVER service is running under. However, the ServerInstances collection is empty. The only instance on the target server is the default (MSSQLSERVER); shouldn't that be in the collection? How can I get an instance of it so I can inspect the ServerProtocols collection? Here's the code I'm using:

        class Program
        {
            static void Main(string[] args)
            {
                // machine hosting installed sql server instance
                ManagedComputer host = new ManagedComputer("dev-it-db01.dev.interbankfx.lcl");
                //ManagedComputer host = new ManagedComputer("MRW-IT-DTP69");

                if (host.ServerInstances.Count != 0)
                {
                    // why is this 0? Is it because only the DEFAULT instance exists?
                    Console.WriteLine("/////////////// INSTANCES ////////////////");
                    foreach (ServerInstance inst in host.ServerInstances)
                    {
                        Console.WriteLine(inst.Name);
                    }
                }

                Console.WriteLine("/////////////// SERVICES ////////////////");
                // enumerate sql services (looking for MSSQLSERVER)
                foreach (Service svc in host.Services)
                {
                    Console.WriteLine(svc.Name);
                }

                Console.WriteLine("/////////////// DETAILS ////////////////");
                // get name of MSSQLSERVER instance from user (pick from list above)
                Service mssqlserver = host.Services["MSSQLSERVER"];
                // print service account: .\{account} == "local account", "LocalSystem",
                // "NetworkService", {domain}\{account} == "domain account"
                Console.WriteLine("Service Account: {0}", mssqlserver.ServiceAccount);

                // get client protocols
                foreach (ClientProtocol cp in host.ClientProtocols)
                {
                    Console.WriteLine("{0} {1} ({2})", cp.Order, cp.DisplayName, cp.IsEnabled ? "Enabled" : "Disabled");
                }
            }
        }

    I've also tried:

        Urn u = new Urn("ManagedComputer[@Name=dev-it-db01.dev.interbankfx.lcl]/ServerInstance[@Name='MSSQLSERVER']/ServerProtocol[@Name='Tcp']");
        ServerProtocol tcp = host.GetSmoObject(u) as ServerProtocol;
        if (tcp != null)
        {
            Console.WriteLine("{0}", tcp.DisplayName);
        }

    But I get an error message stating: "child expressions are not supported." Any ideas what's wrong?
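
    One detail worth ruling out first (an assumption on my part, not a confirmed fix): in the Urn above, the ServerInstance and ServerProtocol names are quoted but the machine name is not, and an unquoted attribute value can confuse the Urn expression parser. A corrected sketch:

        // Same query with the machine name quoted like the other attribute values.
        Urn u = new Urn("ManagedComputer[@Name='dev-it-db01.dev.interbankfx.lcl']"
                      + "/ServerInstance[@Name='MSSQLSERVER']"
                      + "/ServerProtocol[@Name='Tcp']");
        ServerProtocol tcp = host.GetSmoObject(u) as ServerProtocol;
        if (tcp != null)
        {
            Console.WriteLine(tcp.DisplayName);
        }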


  • C# iterator is executed twice when composing two IEnumerable methods

    - by achristoph
    I just started learning about C# iterators, but I'm confused by the flow of the program after reading its output. The foreach with uniqueVals seems to be executed twice. My understanding is that the first few lines, up to the line before "Nums in Square: 3", should not be there. Can anyone help to explain why this happens? The output is:

        Unique: 1
        Adding to uniqueVals: 1
        Unique: 2
        Adding to uniqueVals: 2
        Unique: 2
        Unique: 3
        Adding to uniqueVals: 3
        Nums in Square: 3
        Unique: 1
        Adding to uniqueVals: 1
        Square: 1
        Number returned from Unique: 1
        Unique: 2
        Adding to uniqueVals: 2
        Square: 2
        Number returned from Unique: 4
        Unique: 2
        Unique: 3
        Adding to uniqueVals: 3
        Square: 3
        Number returned from Unique: 9

        static class Program
        {
            public static IEnumerable<T> Unique<T>(IEnumerable<T> sequence)
            {
                Dictionary<T, T> uniqueVals = new Dictionary<T, T>();
                foreach (T item in sequence)
                {
                    Console.WriteLine("Unique: {0}", item);
                    if (!uniqueVals.ContainsKey(item))
                    {
                        Console.WriteLine("Adding to uniqueVals: {0}", item);
                        uniqueVals.Add(item, item);
                        yield return item;
                        Console.WriteLine("After Unique yield: {0}", item);
                    }
                }
            }

            public static IEnumerable<int> Square(IEnumerable<int> nums)
            {
                Console.WriteLine("Nums in Square: {0}", nums.Count());
                foreach (int num in nums)
                {
                    Console.WriteLine("Square: {0}", num);
                    yield return num * num;
                    Console.WriteLine("After Square yield: {0}", num);
                }
            }

            static void Main(string[] args)
            {
                var nums = new int[] { 1, 2, 2, 3 };
                foreach (int num in Square(Unique(nums)))
                    Console.WriteLine("Number returned from Unique: {0}", num);
                Console.Read();
            }
        }
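
    The double pass comes from the nums.Count() call in Square: Count() has no cheap way to know the length of a lazy iterator, so it enumerates the whole Unique sequence once (producing every line up to "Nums in Square: 3"), and the foreach then enumerates it a second time from the start. A minimal sketch of one fix, buffering so the source is enumerated exactly once (ToList() assumes the sequence comfortably fits in memory):

        // Requires System.Linq for ToList().
        public static IEnumerable<int> Square(IEnumerable<int> nums)
        {
            var buffered = nums.ToList(); // enumerates Unique exactly once
            Console.WriteLine("Nums in Square: {0}", buffered.Count);
            foreach (int num in buffered)
            {
                Console.WriteLine("Square: {0}", num);
                yield return num * num;
            }
        }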


  • jQuery AJAX chained calls + Celery in Django

    - by user1029968
    Currently, clicking one of the links in my application triggers an AJAX call (GET) that, if it succeeds, triggers a second one, and this second one, if it succeeds, calls a third. This way the user can be informed which part of the process started by clicking the link is currently ongoing. So in the template file in my Django project, the click callback body for the link mentioned looks like this:

        $("#the-link").click(function(item) {
            // CALL 1
            $.ajax({
                url: "{% url ajax_call_1 %}",
                data: {
                    // something
                }
            })
            .done(function(call1Result) {
                // CALL 2
                $.ajax({
                    url: "{% url ajax_call_2 %}",
                    data: {
                        // call1Result passed here to CALL 2
                    }
                })
                .done(function(call2Result) {
                    // CALL 3
                    $.ajax({
                        url: "{% url ajax_call_3 %}",
                        data: {
                            // call2Result passed here to CALL 3
                        }
                    })
                    .done(function(call3Result) {
                        // expected result if everything went fine
                        console.log("wow, it worked!");
                        console.log(call3Result);
                    })
                    .fail(function(errorObject) {
                        console.log("call3 failed");
                        console.log(errorObject);
                    });
                })
                .fail(function(errorObject) {
                    console.log("call2 failed");
                    console.log(errorObject);
                });
            })
            .fail(function(errorObject) {
                console.log("call1 failed");
                console.log(errorObject);
            });
        });

    This works fine for me. The thing is, I'd like to prevent the remaining calls from being interrupted if the user closes the browser before all three have finished (it will take some time to finish them), as there is additional logic in the Django view functions called in each GET request. For example, if the user clicks the link and closes the browser during CALL 1, is it possible to somehow go on with the following CALL 2 and CALL 3? I know that normally I'd be able to use a Celery task to process the function, but is it still possible here with the chained calls mentioned? Any help is much appreciated!


  • Heroku only initializes some of my models.

    - by JayX
    So I ran:

        heroku db:push

    And it returned:

        Sending schema
        Schema:        100% |==========================================| Time: 00:00:08
        Sending indexes
        schema_migrat: 100% |==========================================| Time: 00:00:00
        projects:      100% |==========================================| Time: 00:00:00
        tasks:         100% |==========================================| Time: 00:00:00
        users:         100% |==========================================| Time: 00:00:00
        Sending data
        8 tables, 70,551 records
        groups:        100% |==========================================| Time: 00:00:00
        schema_migrat: 100% |==========================================| Time: 00:00:00
        projects:      100% |==========================================| Time: 00:00:00
        tasks:         100% |==========================================| Time: 00:00:02
        authenticatio: 100% |==========================================| Time: 00:00:00
        articles:      100% |==========================================| Time: 00:08:27
        users:         100% |==========================================| Time: 00:00:00
        topics:        100% |==========================================| Time: 00:01:22
        Resetting sequences

    And when I went to the Heroku console, this worked:

        >> Task
        => Task(id: integer, topic: string, content: string,

    and this worked:

        >> User
        => User(id: integer, name: string, email: string,

    But the rest only returned something like:

        >> Project
        NameError: uninitialized constant Project
        /home/heroku_rack/lib/console.rb:150
        /home/heroku_rack/lib/console.rb:150:in `call'
        /home/heroku_rack/lib/console.rb:28:in `call'
        >> Authentication
        NameError: uninitialized constant Authentication
        /home/heroku_rack/lib/console.rb:150
        /home/heroku_rack/lib/console.rb:150:in `call'

    Update 1: And when I typed

        >> ActiveRecord::Base.connection.tables

    it returned:

        => ["projects", "groups", "tasks", "topics", "articles", "schema_migrations", "authentications", "users"]

    Using Heroku's SQL console plugin I got:

        SQL> show tables
        +-------------------+
        | table_name        |
        +-------------------+
        | authentications   |
        | topics            |
        | groups            |
        | projects          |
        | schema_migrations |
        | tasks             |
        | articles          |
        | users             |
        +-------------------+

    So I think they already exist in Heroku's database. There is probably something wrong with rake db:migrate.

    Update 2: I ran rake db:migrate locally in both production and development modes and nothing went wrong. But when I ran it on Heroku it only returned:

        $ heroku rake db:migrate
        (in /disk1/home/slugs/389817_1c16250_4bf2-f9c9517b-bdbd-49d9-8e5a-a87111d3558e/mnt)
        $

    Also, I am using sqlite3.

    Update 3: So I opened up the Heroku console and typed in the following command:

        class Authentication < ActiveRecord::Base; end

    Amazingly, I was then able to call the Authentication class, but once I exited, nothing had changed.


  • VB.NET CInt(Long) behaving differently in 32- and 64-bit environments

    - by LocoDelAssembly
    Hello everybody, this is my first message here. Today I had a problem converting a Long (Int64) to an Integer (Int32). The problem is that my code has always worked in 32-bit environments, but when I run THE SAME executable on a 64-bit computer it crashes with a System.OverflowException. I've prepared this test code in VS2008, in a new project with default settings:

        Module Module1
            Sub Main()
                Dim alpha As Long = -1
                Dim delta As Integer
                Try
                    delta = CInt(alpha And UInteger.MaxValue)
                    Console.WriteLine("CINT OK")
                    delta = Convert.ToInt32(alpha And UInteger.MaxValue)
                    Console.WriteLine("Convert.ToInt32 OK")
                Catch ex As Exception
                    Console.WriteLine(ex.GetType().ToString())
                Finally
                    Console.ReadLine()
                End Try
            End Sub
        End Module

    On my 32-bit setups (Windows XP SP3 32-bit and Windows 7 32-bit) it prints "CINT OK", but on the 64-bit computer (Windows 7 64-bit) where I tested THE SAME executable, it prints the exception name only. Is this behavior documented? I tried to find a reference but failed miserably. For reference, I leave the MSIL code too:

        .method public static void Main() cil managed
        {
            .entrypoint
            .custom instance void [mscorlib]System.STAThreadAttribute::.ctor() = ( 01 00 00 00 )
            // Code size 88 (0x58)
            .maxstack 2
            .locals init ([0] int64 alpha,
                          [1] int32 delta,
                          [2] class [mscorlib]System.Exception ex)
            IL_0000: nop
            IL_0001: ldc.i4.m1
            IL_0002: conv.i8
            IL_0003: stloc.0
            IL_0004: nop
            .try
            {
                .try
                {
                    IL_0005: ldloc.0
                    IL_0006: ldc.i4.m1
                    IL_0007: conv.u8
                    IL_0008: and
                    IL_0009: conv.ovf.i4
                    IL_000a: stloc.1
                    IL_000b: ldstr "CINT OK"
                    IL_0010: call void [mscorlib]System.Console::WriteLine(string)
                    IL_0015: nop
                    IL_0016: ldloc.0
                    IL_0017: ldc.i4.m1
                    IL_0018: conv.u8
                    IL_0019: and
                    IL_001a: call int32 [mscorlib]System.Convert::ToInt32(int64)
                    IL_001f: stloc.1
                    IL_0020: ldstr "Convert.ToInt32 OK"
                    IL_0025: call void [mscorlib]System.Console::WriteLine(string)
                    IL_002a: nop
                    IL_002b: leave.s IL_0055
                } // end .try
                catch [mscorlib]System.Exception
                {
                    IL_002d: dup
                    IL_002e: call void [Microsoft.VisualBasic]Microsoft.VisualBasic.CompilerServices.ProjectData::SetProjectError(class [mscorlib]System.Exception)
                    IL_0033: stloc.2
                    IL_0034: nop
                    IL_0035: ldloc.2
                    IL_0036: callvirt instance class [mscorlib]System.Type [mscorlib]System.Exception::GetType()
                    IL_003b: callvirt instance string [mscorlib]System.Type::ToString()
                    IL_0040: call void [mscorlib]System.Console::WriteLine(string)
                    IL_0045: nop
                    IL_0046: call void [Microsoft.VisualBasic]Microsoft.VisualBasic.CompilerServices.ProjectData::ClearProjectError()
                    IL_004b: leave.s IL_0055
                } // end handler
            } // end .try
            finally
            {
                IL_004d: nop
                IL_004e: call string [mscorlib]System.Console::ReadLine()
                IL_0053: pop
                IL_0054: endfinally
            } // end handler
            IL_0055: nop
            IL_0056: nop
            IL_0057: ret
        } // end of method Module1::Main

    I suspect that the instruction behaving differently is either conv.ovf.i4 or the ldc.i4.m1/conv.u8 pair. If you know what is going on here, please let me know. Thanks
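
    For what it's worth, the masked value itself says an exception is the prescribed outcome: -1 And UInteger.MaxValue is &H00000000FFFFFFFF = 4294967295, which is above Int32.MaxValue, so the conv.ovf.i4 visible in the IL must throw. By that reading, the 64-bit crash is the conformant behavior and the silent 32-bit result looks like a JIT difference. A C# sketch of the same arithmetic with the overflow check made explicit (a hand translation of the VB semantics, not the original code):

        using System;

        class CIntRepro
        {
            static void Main()
            {
                long alpha = -1;
                long masked = alpha & uint.MaxValue; // 0x00000000FFFFFFFF = 4294967295
                try
                {
                    // An overflow-checked narrowing, like the conv.ovf.i4 that CInt emits:
                    int delta = checked((int)masked);
                    Console.WriteLine("No overflow: {0}", delta);
                }
                catch (OverflowException)
                {
                    // 4294967295 > int.MaxValue (2147483647), so this is what the IL prescribes.
                    Console.WriteLine("OverflowException");
                }
            }
        }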


  • JavaScript - Wait for function to return

    - by LoadData
    So, I am working on a project which requires me to call a function to get data from an external source. The issue I am having: I call the function, but the code after the function call continues before the function has returned a value. Here is the function:

        function getData() {
            var myVar;
            var xmlLoc = $.get("http://ec.urbentom.co.uk/StudentAppData.xml", function(data) {
                $xml = $(data);
                myVar = $xml;
                console.log(myVar);
                console.log(String($xml));
                localStorage.setItem("Data", $xml);
                console.log(String(localStorage.getItem("Data")));
                return myVar;
            });
            return myVar;
            console.log("Does this continue");
        }

    And here is where it is called:

        $(document).on("pageshow", "#Information", function() {
            $xml = $(getData()); // Here is the function call
            console.log($xml);   // However, it will instantly go to this line before 'getData' has returned a value.
            $xml.find('AllData').each(function() {
                $(this).find('item').each(function() {
                    if ($(this).find('Category').text() == "Facilities") {
                        console.log($(this).find('Title').text());
                        // Do stuff here
                    } else if ($(this).find('Category').text() == "Contacts" || $(this).find('Category').text() == "Information") {
                        console.log($(this).find('Title').text());
                        // Do stuff here too
                    }
                });
                $('#informationList').html(output).listview().listview("refresh");
                console.log("Finished");
            });
        });

    Right now, I'm unsure why it is not working. My guess is that it is because I am calling a function within a function. Does anyone have any ideas on how this issue can be fixed?


  • Unix sort keys cause performance problems

    - by KenFar
    My data:

    - It's a 71 MB file with 1.5 million rows.
    - It has 6 fields.
    - All six fields combine to form a unique key - so that's what I need to sort on.

    Sort statement:

        sort -t ',' -k1,1 -k2,2 -k3,3 -k4,4 -k5,5 -k6,6 -o output.csv input.csv

    The problem: if I sort without keys, it takes 30 seconds; if I sort with keys, it takes 660 seconds. I need to sort with keys to keep this generic and useful for other files that have non-key fields as well. The 30 second timing is fine, but the 660 is a killer.

    More details using unix time:

        sort input.csv -o output.csv                                            = 28 seconds
        sort -t ',' -k1 input.csv -o output.csv                                 = 28 seconds
        sort -t ',' -k1,1 input.csv -o output.csv                               = 64 seconds
        sort -t ',' -k1,1 -k2,2 input.csv -o output.csv                         = 194 seconds
        sort -t ',' -k1,1 -k2,2 -k3,3 input.csv -o output.csv                   = 328 seconds
        sort -t ',' -k1,1 -k2,2 -k3,3 -k4,4 input.csv -o output.csv             = 483 seconds
        sort -t ',' -k1,1 -k2,2 -k3,3 -k4,4 -k5,5 input.csv -o output.csv       = 561 seconds
        sort -t ',' -k1,1 -k2,2 -k3,3 -k4,4 -k5,5 -k6,6 input.csv -o output.csv = 660 seconds

    I could theoretically move the temp directory to SSD, and/or split the file into 4 parts, sort them separately (in parallel) and then merge the results, etc. But I'm hoping for something simpler, since it looks like sort is just picking a bad algorithm. Any suggestions?

    Testing improvements using buffer-size:

    - With 2 keys I got a 5% improvement with 8, 20 and 24 MB, with the best performance an 8% improvement with 16 MB, but 6% worse with 128 MB.
    - With 6 keys I got a 5% improvement with 8, 20 and 24 MB, with the best performance a 9% improvement with 16 MB.

    Testing improvements using dictionary order (just 1 run each):

        sort -d --buffer-size=8M -t ',' -k1,1 -k2,2 input.csv -o output.csv = 235 seconds (21% worse)
        sort -d --buffer-size=8M -t ',' -k1,1 -k2,2 input.csv -o output.csv = 232 seconds (21% worse)

    Conclusion: it makes sense that this would slow the process down; not useful.

    Testing with a different file system on SSD: I can't do this on this server right now.

    Testing with code to consolidate adjacent keys:

        def consolidate_keys(key_fields, key_types):
            """ Inputs:
                   - key_fields - a list of numbers in quotes: ['1','2','3']
                   - key_types  - a list of types of the key_fields: ['integer','string','integer']
                Outputs:
                   - key_fields - a consolidated list: ['1,2','3']
                   - key_types  - a list of types of the consolidated list: ['string','integer']
            """
            assert(len(key_fields) == len(key_types))

            def get_min(val):
                vals = val.split(',')
                assert(len(vals) <= 2)
                return vals[0]

            def get_max(val):
                vals = val.split(',')
                assert(len(vals) <= 2)
                return vals[len(vals) - 1]

            i = 0
            while True:
                try:
                    if ((int(get_max(key_fields[i])) + 1) == int(key_fields[i + 1])
                        and key_types[i] == key_types[i + 1]):
                        key_fields[i] = '%s,%s' % (get_min(key_fields[i]), key_fields[i + 1])
                        key_types[i] = key_types[i]
                        key_fields.pop(i + 1)
                        key_types.pop(i + 1)
                        continue
                    i = i + 1
                except IndexError:
                    break  # last entry

            return key_fields, key_types

    While this code is just a work-around that only applies to cases in which I've got a contiguous set of keys, it speeds up the code by 95% in my worst-case scenario.


  • How to grab data from webpage in Chrome and output into Chrome extension popup?

    - by chimerical
    For a Google Chrome extension, none of the JavaScript I write to manipulate the DOM of the extension's popup.html seems to have any effect on the popup's DOM. I can manipulate the DOM of the current webpage in the browser just fine by using content_script.js, and I'm interested in grabbing data from the webpage and outputting it into the extension popup, like so (below: popup.html):

        <div id="extensionpopupcontent">Links</div>
        <a onclick="click()">Some Link</a>
        <script type="text/javascript">
        function click() {
            chrome.tabs.executeScript(null, {file: "content_script.js"});
            document.getElementById("extensionpopupcontent").innerHTML = variableDefinedInContentScript;
            window.close();
        }
        </script>

    I tried using chrome.extension.sendRequest from the documentation at http://code.google.com/chrome/extensions/messaging.html, but I'm not sure how to properly use it in my case, specifically the greeting and the response.

    contentscript.js
    ================

        chrome.extension.sendRequest({greeting: "hello"}, function(response) {
            console.log(response.farewell);
        });


  • Consume WCF Service InProcess using Agatha and WCF

    - by REA_ANDREW
    I have been looking into this lately for a specific reason. For some integration tests I want to write, I want to control the types of instances which are used inside the service layer, but I want that control from the test class instance. One of the problems with just referencing the service is that, a lot of the time, it will by default be hosted inside a different process. I am using StructureMap as my DI of choice, and one of the tools which I am using alongside RhinoMocks is StructureMap.AutoMocking. With StructureMap the main entry point is the ObjectFactory. This is process specific, so if I decide that I want a certain instance of a type to be used inside the service layer, I cannot configure the ObjectFactory from my test class, as that will only apply to the process it belongs to. This is where I started thinking about two things:

    1. Running a WCF service in process
    2. Being able to share mocked instances across processes

    A colleague at work pointed me to a project for the latter, but I thought it would be a better solution if I could run the WCF service in process. One of the projects which I use when I think about WCF services is AGATHA, and it is the one I have used to try and get my head around doing this.

    Another asset I have is a book called Programming WCF Services by Juval Lowy, and if you have not heard of it or read it, I would definitely recommend it. One of the many topics inside this book is the type of configuration you need to communicate with a service in the same process, and it turns out to be quite simple from a config point of view:

        <system.serviceModel>
          <services>
            <service name="Agatha.ServiceLayer.WCF.WcfRequestProcessor">
              <endpoint address="net.pipe://localhost/MyPipe"
                        binding="netNamedPipeBinding"
                        contract="Agatha.Common.WCF.IWcfRequestProcessor"/>
            </service>
          </services>
          <client>
            <endpoint name="MyEndpoint"
                      address="net.pipe://localhost/MyPipe"
                      binding="netNamedPipeBinding"
                      contract="Agatha.Common.WCF.IWcfRequestProcessor"/>
          </client>
        </system.serviceModel>

    You can see here that I am referencing the Agatha object and contract, but also that the binding and the address use something called named pipes. This is sort of the "magic" which makes it happen in the same process.

    Next I need to open the service prior to calling the methods on a proxy, which I also need. My initial attempt at the proxy did not use any Agatha-specific coding, and one of the pains I found was that you obviously need to give your proxy the known types which the serializer can be aware of. So we need to add to the known types of the proxy programmatically. I came across the following blog post which showed me how easy it was: http://bloggingabout.net/blogs/vagif/archive/2009/05/18/how-to-programmatically-define-known-types-in-wcf.aspx.

    First Pass

    So with this in mind, and inside a console app, this was my first pass at consuming a service in process. First, here is the proxy which I made making use of the Agatha IWcfRequestProcessor contract:

        public class InProcProxy : ClientBase<Agatha.Common.WCF.IWcfRequestProcessor>, Agatha.Common.WCF.IWcfRequestProcessor
        {
            public InProcProxy() { }

            public InProcProxy(string configurationName) : base(configurationName) { }

            public Agatha.Common.Response[] Process(params Agatha.Common.Request[] requests)
            {
                return Channel.Process(requests);
            }

            public void ProcessOneWayRequests(params Agatha.Common.OneWayRequest[] requests)
            {
                Channel.ProcessOneWayRequests(requests);
            }
        }

    So with the proxy in place, here is the code I use inside the console app to make the request after opening the service:

        static void Main(string[] args)
        {
            ComponentRegistration.Register();
            ServiceHost serviceHost = new ServiceHost(typeof(Agatha.ServiceLayer.WCF.WcfRequestProcessor));
            serviceHost.Open();
            Console.WriteLine("Service is running....");

            using (var proxy = new InProcProxy())
            {
                foreach (var operation in proxy.Endpoint.Contract.Operations)
                {
                    foreach (var t in KnownTypeProvider.GetKnownTypes(null))
                    {
                        operation.KnownTypes.Add(t);
                    }
                }

                var request = new GetProductsRequest();
                var responses = proxy.Process(new[] { request });
                var response = (GetProductsResponse)responses[0];
                Console.WriteLine("{0} Products have been retrieved", response.Products.Count);
            }

            serviceHost.Close();
            Console.WriteLine("Finished");
            Console.ReadLine();
        }

    What I used here is the KnownTypeProvider of Agatha to easily get all the types the service/proxy needs and add them to the proxy. My request handler for this was just a test one which always returns 2 products:

        public class GetProductsHandler : RequestHandler<GetProductsRequest, GetProductsResponse>
        {
            public override Agatha.Common.Response Handle(GetProductsRequest request)
            {
                return new GetProductsResponse
                {
                    Products = new List<ProductDto>
                    {
                        new ProductDto { },
                        new ProductDto { }
                    }
                };
            }
        }

    Second Pass

    After I did this, I started reading up some more on resources including more by Davy Brion and others on Agatha. It turns out that the work I did above to create a derived class of ClientBase implementing Agatha.Common.WCF.IWcfRequestProcessor was not necessary, thanks to a nice class which is present inside the Agatha code base, RequestProcessorProxy, which takes care of this for you! :-) So, disregarding the proxy class I made and changing my code to use RequestProcessorProxy, I am now left with the following:

        static void Main(string[] args)
        {
            ComponentRegistration.Register();
            ServiceHost serviceHost = new ServiceHost(typeof(Agatha.ServiceLayer.WCF.WcfRequestProcessor));
            serviceHost.Open();
            Console.WriteLine("Service is running....");

            using (var proxy = new RequestProcessorProxy())
            {
                var request = new GetProductsRequest();
                var responses = proxy.Process(new[] { request });
                var response = (GetProductsResponse)responses[0];
                Console.WriteLine("{0} Products have been retrieved", response.Products.Count);
            }

            serviceHost.Close();
            Console.WriteLine("Finished");
            Console.ReadLine();
        }

    Cheers for now,

    Andy

    References:
    - Agatha
    - WCF InProcess Without WCF
    - StructureMap.AutoMocking
    - Cross Process Mocking
    - Programming WCF Services by Juval Lowy


  • C#: Does an IDisposable in a Halted Iterator Dispose?

    - by James Michael Hare
    If that sounds confusing, let me give you an example. Let's say you expose a method to read a database of products, and instead of returning a List<Product> you return an IEnumerable<Product> in iterator form (yield return). This accomplishes several good things:

    - The IDataReader is not passed out of the Data Access Layer, which prevents abstraction leaks and potential resource leaks.
    - You don't need to construct a full List<Product> in memory (which could be very big) if you just want to forward-iterate once.
    - If you only want to consume up to a certain point in the list, you won't incur the database cost of looking up the other items.

    This could give us an example like:

        // a sample data access object class to do standard CRUD operations.
        public class ProductDao
        {
            private DbProviderFactory _factory = SqlClientFactory.Instance;

            // a method that would retrieve all available products
            public IEnumerable<Product> GetAvailableProducts()
            {
                // must create the connection
                using (var con = _factory.CreateConnection())
                {
                    con.ConnectionString = _productsConnectionString;
                    con.Open();

                    // create the command
                    using (var cmd = _factory.CreateCommand())
                    {
                        cmd.Connection = con;
                        cmd.CommandText = _getAllProductsStoredProc;
                        cmd.CommandType = CommandType.StoredProcedure;

                        // get a reader and pass back all results
                        using (var reader = cmd.ExecuteReader())
                        {
                            while (reader.Read())
                            {
                                yield return new Product
                                {
                                    Name = reader["product_name"].ToString(),
                                    ...
                                };
                            }
                        }
                    }
                }
            }
        }

    The database details themselves are irrelevant. I will say, though, that I'm a big fan of using the System.Data.Common classes instead of your provider-specific counterparts directly (SqlCommand, OracleCommand, etc). This lets you mock your data sources easily in unit testing and also allows you to swap out your provider in one line of code. In fact, one of the shared components I'm most proud of implementing was our group's DatabaseUtility library that simplifies all the database access above into one line of code in a thread-safe and provider-neutral way. I went with my own flavor instead of the EL due to the fact I didn't want to force internal company consumers to use the EL if they didn't want to, and it made it easy to allow them to mock their database for unit testing by providing a MockCommand, MockConnection, etc. that followed the System.Data.Common model. One of these days I'll blog on that if anyone's interested.

    Regardless, you often have situations like the above where you are consuming and iterating through a resource that must be closed once you are finished iterating. For the reasons stated above, I didn't want to return IDataReader (that would force them to remember to Dispose it), and I didn't want to return List<Product> (that would force them to hold all products in memory) -- but the first time I wrote this, I was worried. What if you never consume the last item and exit the loop? Are the reader, command, and connection all disposed correctly? Of course, I was 99.999999% sure the creators of C# had already thought of this and taken care of it, but inspection in Reflector was difficult due to the nature of the state machines yield return generates, so I decided to try a quick example program to verify whether or not Dispose() will be called when an iterator is broken from outside the iterator itself -- i.e. before the iterator reports there are no more items.

    So I wrote a quick Sequencer class with a Dispose() method and an iterator for it. Yes, it is COMPLETELY contrived:

        // A disposable sequence of int -- yes this is completely contrived...
        internal class Sequencer : IDisposable
        {
            private int _i = 0;
            private readonly object _mutex = new object();

            // Constructs an int sequence.
            public Sequencer(int start)
            {
                _i = start;
            }

            // Gets the next integer
            public int GetNext()
            {
                lock (_mutex)
                {
                    return _i++;
                }
            }

            // Dispose the sequence of integers.
            public void Dispose()
            {
                // force output immediately (flush the buffer)
                Console.WriteLine("Disposed with last sequence number of {0}!", _i);
                Console.Out.Flush();
            }
        }

    And then I created a generator (infinite-loop iterator) that did the using block for auto-disposal:

        // simply defines an extension method off of an int to start a sequence
        public static class SequencerExtensions
        {
            // generates an infinite sequence starting at the specified number
            public static IEnumerable<int> GetSequence(this int starter)
            {
                // note the using here, will call Dispose() when block terminated.
                using (var seq = new Sequencer(starter))
                {
                    // infinite loop on this generator, means must be bounded by caller!
                    while (true)
                    {
                        yield return seq.GetNext();
                    }
                }
            }
        }

    This is really the same conundrum as the database problem originally posed. Here we are using iteration (yield return) over a large collection (an infinite sequence of integers). If we cut the sequence short by breaking iteration, will that using block exit and hence Dispose be called? Well, let's see:

        // The test program class
        public class IteratorTest
        {
            // The main test method.
            public static void Main()
            {
                Console.WriteLine("Going to consume 10 of infinite items");
                Console.Out.Flush();

                foreach (var i in 0.GetSequence())
                {
                    // could use TakeWhile, but wanted to output right at break...
                    if (i >= 10)
                    {
                        Console.WriteLine("Breaking now!");
                        Console.Out.Flush();
                        break;
                    }

                    Console.WriteLine(i);
                    Console.Out.Flush();
                }

                Console.WriteLine("Done with loop.");
                Console.Out.Flush();
            }
        }

    So, what do we see? Do we see the "Disposed" message from our Dispose, or did the Dispose get skipped because, from an "eyeball" perspective, we should be locked in that infinite generator loop? Here are the results:

        Going to consume 10 of infinite items
        0
        1
        2
        3
        4
        5
        6
        7
        8
        9
        Breaking now!
        Disposed with last sequence number of 11!
        Done with loop.

    Yes indeed, when we break the loop, the state machine that C# generates for yield iterators exits the iteration through the using blocks and auto-disposes the IDisposable correctly. I must admit, though, the first time I wrote one, I began to wonder, and that led to this test. If you've never seen iterators before (I wrote a previous entry here), the infinite loop may throw you, but you have to keep in mind it is not a linear piece of code: every time you hit a "yield return" it cedes control back to the state machine generated for the iterator. And this state machine, I'm happy to say, is smart enough to clean up the using blocks correctly. I suspected those wily guys and gals at Microsoft engineered it well, and I wasn't disappointed. But I've been bitten by assumptions before, so it's good to test and see.

    Yes, maybe you knew it would or figured it would, but isn't it nice to know? And as those campy 80s G.I. Joe cartoon public service reminders always taught us, "Knowing is half the battle...".

    Technorati Tags: C#, .NET


  • Tips on ensuring Model Quality

    - by [email protected]
    Given enough data that represents the domain well, and models that reflect exactly the decision being optimized, models usually provide good predictions that ensure lift. Nevertheless, sometimes the modeling situation is less than ideal. In this blog entry we explore the problems found in a few such situations and how to avoid them.

    1 - The model does not reflect the problem you are trying to solve

    For example, you may be trying to solve the problem "What product should I recommend to this customer?", but your model learns on the problem "Given that a customer has acquired our products, what is the likelihood of each product?". In this case the model you built may be too distant a proxy for the problem you are really trying to solve. What you could do in this case is try to build a model based on the results of actual recommendations of products to customers. If there is not enough data from actual recommendations, you could use a hybrid approach in which you use the [bad] proxy model until the recommendation model converges.

    2 - Data is not predictive enough

    If the inputs are not correlated with the output, the models may be unable to provide good predictions. For example, if the inputs are the phase of the moon and the weather, and the output is which car the customer bought, there may be no correlations found. In this case you should see a low-quality model. The solution here is to include more relevant inputs.

    3 - Not enough cases seen

    If the training data does not include enough cases (at least 200 positive examples for each output), the quality of recommendations may be low. The obvious solution is to include more data records. If this is not possible, it may be possible to build a model based on the characteristics of the output choices rather than the choices themselves. For example, instead of using products as the output, use the product category, price and brand name, and then combine these models.

    4 - Output leaking into the input, giving the false impression of good model quality

    If the input data used in training includes values that have changed, or that are available only because the output happened, then you will find strong correlations between the input and the output, but these correlations do not reflect the data that will actually be available at decision (prediction) time. For example, if you are building a model to predict whether a web site visitor will succeed in registering, and the input includes the variable DaysSinceRegistration, and you learn after this variable has already been set, you will probably see a big correlation between having a zero (or one) in this variable and the fact that registration was successful. The solution is to remove these variables from the input, or to make sure they reflect the value as of the time of decision and not after the result is known.

