Search Results

  • Is it possible to change a HANDLE that has been opened for synchronous I/O to be opened for asynchronous I/O?

    - by Martin Dobšík
    Dear all, Most of my daily programming work in Windows is nowadays around I/O operations of all kinds (pipes, consoles, files, sockets, ...). I am well aware of the different methods of reading and writing from/to different kinds of handles (synchronous, asynchronous waiting for completion on events, waiting on file HANDLEs, I/O completion ports, and alertable I/O). We use many of those. For some of our applications it would be very useful to have only one way to treat all handles. I mean, the program may not know what kind of handle it has received, and we would like to use, let's say, I/O completion ports for all.

    So first I would ask: let's assume I have a handle

        HANDLE h;

    which my process has received for I/O from somewhere. Is there any easy and reliable way to find out what flags it was created with? The main flag in question is FILE_FLAG_OVERLAPPED. The only way known to me so far is to try to register such a handle with an I/O completion port (using CreateIoCompletionPort()). If that succeeds, the handle was created with FILE_FLAG_OVERLAPPED. But then only the I/O completion port can be used, as the handle cannot be unregistered from it without closing HANDLE h itself.

    Provided there is an easy way to determine the presence of FILE_FLAG_OVERLAPPED, there would come my second question: is there any way to add such a flag to an already existing handle? That would turn a handle originally opened for synchronous operations into one opened for asynchronous operations. Would there be a way to do the opposite (remove FILE_FLAG_OVERLAPPED to create a synchronous handle from an asynchronous one)? I have not found any direct way after reading through MSDN and googling a lot. Would there be at least some trick that could do the same? Like re-creating the handle in the same way using the CreateFile() function or something similar? Something even partly documented, or not documented at all?

    The main place where I would need this is to determine the way (or change the way) my process should read/write from handles sent to it by third-party applications. We cannot control how third-party products create their handles. Dear Windows gurus: help please! With regards, Martin
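
    A minimal sketch of the probe described above (an assumption that no cheaper query exists; on success the handle stays bound to the throwaway port until it is closed, which is exactly the caveat from the question):

        #include <windows.h>
        #include <stdbool.h>

        /* Returns true if h appears to have been opened with FILE_FLAG_OVERLAPPED,
           detected by trying to associate it with a fresh completion port. */
        bool probe_overlapped(HANDLE h, HANDLE *port_out)
        {
            HANDLE port = CreateIoCompletionPort(h, NULL, 0, 0);
            if (port == NULL)
                return false;    /* association failed: likely a synchronous handle */
            *port_out = port;    /* caller must now service completions on this port */
            return true;
        }

    For plain file handles on Vista and later, ReOpenFile() (documented on MSDN) can produce a new handle to the same file with different flags, including FILE_FLAG_OVERLAPPED; it does not help for pipes, consoles, or sockets received from other processes.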

  • ASP.NET FormView: "Object of type 'System.Int32' cannot be converted to type 'System.String'"

    - by Vinzcent
    Hey, I have a problem with my FormView. I would like to show some data from a database table in my FormView, but some of that data is of type Int32, while the data shown in a TextBox should be a string. How do you convert these Int32s?

    FormView and my ObjectDataSource:

        <asp:FormView ID="fvDetailOrder" runat="server">
            <ItemTemplate>
                Aantal:<br />
                <asp:Label CssClass="txtBox" ID="Label15" runat="server" Text='<%# Eval("COUNT") %>' /><br />
                Prijs:<br />
                <asp:Label CssClass="txtBox" ID="Label16" runat="server" Text='<%# Eval("PRICE") %>' /><br />
                Korting:<br />
                <asp:Label CssClass="txtBox" ID="Label17" runat="server" Text='' /><br />
                Totaal:<br />
                <asp:Label CssClass="txtBox" ID="Label18" runat="server" Text='<%# Eval("AMOUNT") %>' /><br />
                Betaald:<br />
                <asp:Label CssClass="txtBox" ID="Label19" runat="server" Text='<%# Eval("PAID") %>' /><br />
                Datum betaling:<br />
                <asp:Label CssClass="txtBox" ID="Label20" runat="server" Text='<%# Eval("PDATE") %>' /><br />
            </ItemTemplate>
        </asp:FormView>

        <asp:ObjectDataSource ID="objdsOrderID" runat="server" OnSelecting="objdsOrderID_Selecting"
            SelectMethod="getOrdersByID" TypeName="DAL.OrdersDAL">
            <SelectParameters>
                <asp:Parameter Name="id" Type="Int32" />
            </SelectParameters>
        </asp:ObjectDataSource>

    My code-behind:

        protected void gvOrdersAdmin_SelectedIndexChanged(object sender, EventArgs e)
        {
            fvDetailOrder.DataSource = objdsOrderID;
            fvDetailOrder.DataBind();   // <-- HERE I GET THE ERROR
        }

        protected void objdsOrderID_Selecting(object sender, ObjectDataSourceSelectingEventArgs e)
        {
            e.InputParameters["id"] = gvOrdersAdmin.DataKeys[gvOrdersAdmin.SelectedRow.RowIndex].Values[0];
        }

    My data access layer:

        public static DataTable getOrdersByID(string id)
        {
            string sql = "SELECT 'AUTHOR' = tblAuthors.FIRSTNAME + ' ' + tblAuthors.LASTNAME, tblBooks.*, tblGenres.*, tblLanguages.*, tblOrders.* FROM tblAuthors INNER JOIN tblBooks ON tblAuthors.AUTHOR_ID = tblBooks.AUTHOR_ID INNER JOIN tblGenres ON tblBooks.GENRE_ID = tblGenres.GENRE_ID INNER JOIN tblLanguages ON tblBooks.LANG_ID = tblLanguages.LANG_ID INNER JOIN tblOrders ON tblBooks.BOOK_ID = tblOrders.BOOK_ID" +
                " WHERE tblOrders.ID = @id;";
            SqlDataAdapter da = new SqlDataAdapter(sql, GetConnectionString());
            da.SelectCommand.Parameters["id"].Value = id;
            DataSet ds = new DataSet();
            da.Fill(ds, "Orders");
            return ds.Tables["Orders"];
        }

    Thanks a lot, Vincent
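
    The ObjectDataSource declares the id parameter as Int32 while getOrdersByID takes a string, which matches the exception text. A minimal sketch of one way to line the types up (an assumption about the intended fix); note the parameter must also be added to the select command before a value can be assigned to it:

        // DAL: accept an int and register the parameter before use.
        public static DataTable getOrdersByID(int id)
        {
            string sql = "... WHERE tblOrders.ID = @id;";   // same SELECT as above
            SqlDataAdapter da = new SqlDataAdapter(sql, GetConnectionString());
            da.SelectCommand.Parameters.AddWithValue("@id", id);  // Add, not index
            DataSet ds = new DataSet();
            da.Fill(ds, "Orders");
            return ds.Tables["Orders"];
        }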

  • ObjectListView: Custom Sorter

    - by Mike
    In the ObjectListView control (http://objectlistview.sourceforge.net/html/cookbook.htm), I'm trying to add a custom sorter that ignores "The" and "A" prefixes. I managed to do it with a regular ListView, but now that I've switched to ObjectListView (a lot more features, and easier to use), I can't seem to do the same. The following is, I think, the main comparer in the ObjectListView code:

        public int Compare(object x, object y)
        {
            return this.Compare((OLVListItem)x, (OLVListItem)y);
        }

    My original sorter for ascending order, as an example (ignoring "A" and "The"):

        public class CustomSortAsc : IComparer
        {
            int IComparer.Compare(Object x, Object y)
            {
                string[] px = Convert.ToString(x).Split(' ');
                string[] py = Convert.ToString(y).Split(' ');
                string newX = "";
                string newY = "";

                for (int i = 0; i < px.Length; i++)
                {
                    px[i] = px[i].Replace("{", "");
                    px[i] = px[i].Replace("}", "");
                }
                for (int i = 0; i < py.Length; i++)
                {
                    py[i] = py[i].Replace("{", "");
                    py[i] = py[i].Replace("}", "");
                }

                if ((px[1].ToLower() == "a") || (px[1].ToLower() == "the"))
                {
                    if (px.Length > 1)
                    {
                        for (int i = 2; i < px.Length; i++) newX += px[i];
                    }
                }
                else
                {
                    for (int i = 1; i < px.Length; i++) newX += px[i];
                }

                if ((py[1].ToLower() == "a") || (py[1].ToLower() == "the"))
                {
                    if (py.Length > 1)
                    {
                        for (int i = 2; i < py.Length; i++) newY += py[i];
                    }
                }
                else
                {
                    for (int i = 1; i < py.Length; i++) newY += py[i];
                }

                return ((new CaseInsensitiveComparer()).Compare(newX, newY));
            }
        }
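
    A sketch of the prefix-stripping factored into one helper that any comparer can call (SortKey is a hypothetical name; the ObjectListView cookbook documents a CustomSorter hook for installing your own comparer, but verify the delegate signature against your version):

        // Drop a leading "a"/"the" so titles sort by their significant first word.
        static string SortKey(object value)
        {
            string s = Convert.ToString(value).Replace("{", "").Replace("}", "").Trim();
            string[] words = s.Split(' ');
            if (words.Length > 1 &&
                (words[0].Equals("a", StringComparison.OrdinalIgnoreCase) ||
                 words[0].Equals("the", StringComparison.OrdinalIgnoreCase)))
                return string.Join(" ", words, 1, words.Length - 1);
            return s;
        }

        // Inside a comparer:
        // return string.Compare(SortKey(x), SortKey(y), StringComparison.OrdinalIgnoreCase);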

  • Calculating multiple column average in SQLite3

    - by Benjamin Oakes
    I need to average some values in a row-wise fashion, rather than a column-wise fashion. (If I were doing a column-wise average, I could just use avg().) My specific application requires me to ignore NULLs when averaging. It's pretty straightforward logic, but seems awfully difficult to do in SQL. Is there an elegant way of doing my calculation? I'm using SQLite3, for what it's worth.

    Details

    If you need more details, here's an illustration: I have a table with a survey:

        | q1 | q2   | q3   | ... | q144 |
        |----|------|------|-----|------|
        | 1  | 3    | 7    | ... | 2    |
        | 4  | 2    | NULL | ... | 1    |
        | 5  | NULL | 2    | ... | 3    |

    (Those are just some example values and simple column names. The valid values are 1 through 7 and NULL.) I need to calculate some averages like so:

        (q7 + q33 + q38 + q40 + ... + q119) / 11 as domain_score_1
        (q10 + q11 + q34 + q35 + ... + q140) / 13 as domain_score_2
        ...
        (q2 + q5 + q13 + q25 + ... + q122) / 12 as domain_score_14

    ...but I need to pull out the NULLs and average based on the non-NULLs. So, for domain_score_1 (which has 11 items), I would need to do:

        Input: 3, 5, NULL, 7, 2, NULL, 3, 1, 5, NULL, 1
        (3 + 5 + 7 + 2 + 3 + 1 + 5 + 1) / (11 - 3)
        27 / 8
        3.375

    A simple algorithm I'm considering is:

        Input: 3, 5, NULL, 7, 2, NULL, 3, 1, 5, NULL, 1
        Coalesce each value to 0 if NULL: 3, 5, 0, 7, 2, 0, 3, 1, 5, 0, 1
        Sum: 27
        Get the number of non-zeros by converting non-zero values to 1 and summing:
            3, 5, 0, 7, 2, 0, 3, 1, 5, 0, 1
            1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1
            8
        Divide those two numbers: 27 / 8 = 3.375

    But that seems like a lot more programming than this should take. Is there an elegant way of doing this that I'm not aware of?

    Update: Unless I'm misunderstanding something, avg() won't work for this. Example of what I would want to do:

        select avg(q7, q33, q38, ..., q119) from survey;

    Output:

        SQL error near line 3: wrong number of arguments to function avg()
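
    A sketch of that algorithm as a single SQLite expression (column list abbreviated to four questions; boolean expressions evaluate to 0/1 in SQLite, the * 1.0 forces real division, and NULLIF guards an all-NULL row by yielding NULL instead of dividing by zero):

        SELECT (COALESCE(q7, 0) + COALESCE(q33, 0) + COALESCE(q38, 0) + COALESCE(q119, 0)) * 1.0
               / NULLIF((q7 IS NOT NULL) + (q33 IS NOT NULL)
                      + (q38 IS NOT NULL) + (q119 IS NOT NULL), 0)
               AS domain_score_1
        FROM survey;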

  • Best practices for developing simple ASP.NET sites (built-in controls or jQuery + scripts)

    - by Nix
    I was recently reviewing some code written by two different contractors; both projects were basic ASP.NET management sites. The sites allowed the user to view and edit data: pretty much simple CRUD gateways. One group did their best to use built-in ASP.NET + AJAX Toolkit controls, using as many built-in controls as possible. I found that code much easier to read and maintain. The other used jQuery, and the code is heavily marked up with script blocks which are then used to build pages from JavaScript files.

    Which approach is more common? The one that embeds HTML markup in script blocks controlled by JavaScript files screams readability and maintenance issues to me. Is this just the way of doing ASP.NET development with jQuery? Assuming the second example happens a lot, are there tools that help facilitate jQuery development with Visual Studio? Do you think they generated the HTML somewhere else and just copied it in?

    Example script block:

        <script id="HRPanel" type="text/html">
          <table cellpadding='0' cellspacing='0' class="atable">
            <thead class="mHeader"><tr><th>Name</th><th>Description</th><th>Other</th></thead>
            <tbody>
              <# for(var i=0; i < HRRows.length; i++) { var r = HRRows[i]; #>
                <tr><td><#=r.Name#></td><td><#=r.Description#></td><td class="taRight"><#=r.Other#></td></tr>
              <#}#>
            </tbody>
            <tfoot><th></th><th></th><th></th></tfoot>
          </table>
        </script>

    Then in a separate location (a js file) you would see something like this:

        $("#HRPanel").html($("#HRPanel").parseTemplate({ HRRows: response.something.bah.bah }));

  • Converting code from non-CPS to CPS (Continuation-Passing Style, aka continuations)

    - by Delirium tremens
    Before:

        function sc_startSiteCompare(){
          var visitinguri;
          var validateduri;
          var downloaduris;
          var compareuris;
          var tryinguri;
          sc_setstatus('started');
          visitinguri = sc_getvisitinguri();
          validateduri = sc_getvalidateduri(visitinguri);
          downloaduris = new Array();
          downloaduris = sc_generatedownloaduris(validateduri);
          compareuris = new Array();
          compareuris = sc_generatecompareuris(validateduri);
          tryinguri = 0;
          sc_finishSiteCompare(downloaduris, compareuris, tryinguri);
        }

        function sc_getvisitinguri() {
          var visitinguri;
          visitinguri = content.location.href;
          return visitinguri;
        }

    After (I'm trying):

        function sc_startSiteCompare(){
          var visitinguri;
          sc_setstatus('started');
          visitinguri = sc_getvisitinguri(sc_startSiteComparec1);
        }

        function sc_startSiteComparec1 (visitinguri) {
          var validateduri;
          validateduri = sc_getvalidateduri(visitinguri, sc_startSiteComparec2);
        }

        function sc_startSiteComparec2 (visitinguri, c) {
          var downloaduris;
          downloaduris = sc_generatedownloaduris(validateduri, sc_startSiteComparec3);
        }

        function sc_startSiteComparec3 (validateduri, c) {
          var compareuris;
          compareuris = sc_generatecompareuris(downloaduris, validateduri, sc_startSiteComparec4);
        }

        function sc_startSiteComparec4 (downloaduris, compareuris, validateduri, c) {
          var tryinguri;
          tryinguri = 0;
          sc_finishSiteCompare(downloaduris, compareuris, tryinguri);
        }

        function sc_getvisitinguri(c) {
          var visitinguri;
          visitinguri = content.location.href;
          c(visitinguri);
        }

    What should the code above become? I need CPS because I have XMLHttpRequests when validating URIs and then downloading pages, and I can't use return statements, because the calls are asynchronous. Is there an alternative to CPS?

    Also, I'm now having to pass lots of arguments to functions. Globals in procedural code look like this/self in modular code; is there any difference? Will I really have to convert from procedural to modular too? It's looking like a lot of work ahead.
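
    A sketch of how the chain could thread values forward (function names from the question; the key change from the attempt above is that each continuation closes over, or receives as an argument, every value it still needs, instead of referencing variables that went out of scope):

        function sc_startSiteCompare() {
          sc_setstatus('started');
          sc_getvisitinguri(function (visitinguri) {
            sc_getvalidateduri(visitinguri, function (validateduri) {
              sc_generatedownloaduris(validateduri, function (downloaduris) {
                sc_generatecompareuris(validateduri, function (compareuris) {
                  sc_finishSiteCompare(downloaduris, compareuris, 0);
                });
              });
            });
          });
        }

        function sc_getvisitinguri(c) {
          c(content.location.href);
        }

    Using nested anonymous continuations instead of named sc_startSiteComparecN functions is what lets downloaduris still be visible when sc_generatecompareuris finishes; with separate top-level functions, every intermediate value has to be passed explicitly through each callback's parameter list.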

  • JavaScript: Doing some stuff right before the user exits the page

    - by Mike
    I have seen some questions here regarding what I want to achieve, and I have based what I have so far on those answers. But there is a slight misbehavior that is still irritating me.

    What I have is sort of a recovery feature. Whenever you are typing text, the client sends a sync request to the server every 45 seconds. It does two things. First, it extends the lease the client has on the record (only one person may edit at one time) for another 60 seconds. Second, it sends the text typed so far to the server in case the server crashes, the internet connection fails, etc. In that case, the next time the user enters our application, the user is notified that something has gone wrong and that some text was recovered. Think of Microsoft Office or OpenOffice recovery whenever they crash! Of course, if the user leaves the page willingly, the user does not need to be notified, and as a result the recovery is deleted. I do that final request via a beforeunload event.

    Everything went fine until I was asked to make a final adjustment... the same behavior you have here at Stack Overflow when you exit the editor: a confirm dialogue. This works so far, BUT the confirm dialogue is shown twice. Here is the code.

    The event:

        if (local.sync.autosave_textelement) {
            window.onbeforeunload = exitConfirm;
        }

    The function:

        function exitConfirm() {
            var local = Core;
            if (confirm('blub?')) {
                local.sync.autosave_destroy = true;
                sync(false);
                return true;
            } else {
                return false;
            }
        };

    Some clarifications that are irrelevant to the problem:

    - Core is a global object that contains a lot of variables that are used everywhere.
    - sync makes an ajax request. The values are based on the values that the Core.sync object contains. The parameter determines if the call should be async (default) or sync.
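
    For reference, a sketch of the pattern that avoids the double dialogue (an assumption about the fix, not code from the question): return a string from onbeforeunload and let the browser render its single leave/stay prompt, instead of calling confirm() inside the handler, which stacks your dialogue on top of the browser's own:

        window.onbeforeunload = function () {
            if (Core.sync.autosave_textelement) {
                return 'blub?';   // the browser shows this text in its own dialogue
            }
            // returning nothing shows no dialogue
        };

        // Runs only if the user actually chose to leave.
        window.onunload = function () {
            Core.sync.autosave_destroy = true;
            sync(false);          // synchronous, so it completes before the page unloads
        };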

  • General Questions on setting up Subversion (w/Netbeans) and web development workflow.

    - by Roeland
    Hey guys, I am a web developer who currently works mostly on his own, but some projects require outside coding help (my brother). Anyway, after running into the problem of "working on the same files" and "saving over each other's edits", I decided to research ways to avoid this. Through the help of Stack Overflow I decided on Subversion. My setup is the following:

    - A Windows 2008 server with a clean install.
    - My desktop, which is on the local network of the server.
    - My brother's desktop, which is at his house, not on the LAN.

    We both prefer to use NetBeans for development. My questions: how do we set this thing up correctly and most optimally? Here is my current setup and workflow.

    Dev site: in the past I just created subdomains with my web host for test sites (company1.bythepixel.com, company2.bythepixel.com), and edited those sites with NetBeans set up with remote sources (FTP). How do I set up my NetBeans now? Should I set it up with remote sources? I guess I may need to set up a web server on my local server; I'm just not sure of the workflow. When I hit save in NetBeans, will it update the repository? Then do I need to update the site from the repository somehow?

    Live site: when going live, I would just copy all the files from the dev site into the live site. From what I gather, I should be able to update the site from the dev repository?

    Currently I am toying with VisualSVN Server (http://www.visualsvn.com/server/) on my local server. It looks like it is set up to use the HTTP protocol. Are there advantages to this, or should I consider something that uses file://? Do you recommend any other Subversion software that would run on my 2008 box?

    How will my brother connect? Should I set up a permanent VPN from his house to mine? Suggestions?

    (Not that important) How do I deal with databases? Is there any way to do Subversion on a database? I know I have a lot of questions and I am trying to read / make sense of the free online Subversion book, but it's all so overwhelming! Wish there was a small Subversion for Dummies guide :)
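
    For orientation, a sketch of the day-to-day cycle once the repository exists (standard svn client commands; the server URL is hypothetical):

        # one time: each developer gets a working copy
        svn checkout http://yourserver/svn/project1 project1

        # daily: pull in the other person's edits, work, then publish
        svn update
        svn add newpage.php
        svn commit -m "describe the change"

    Conflicting edits to the same lines are flagged on update rather than silently overwritten, which addresses exactly the "saving over each other's edits" problem above.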

  • NoSQL DB for .Net document-based database (ECM)

    - by Dane
    I'm halfway through coding a basic multi-tenant SaaS ECM solution. Each client has its own instance of the database / datastore, but the .NET app is single instance. The documents are pretty much read-only (i.e. an image archive of TIFFs or PDFs). I've used MSSQL so far, but then started thinking this might be viable in a NoSQL DB (e.g. MongoDB, CouchDB).

    The basic premise is that it stores documents, each with their own particular indexes. Each tenant can have multiple document types. E.g. one tenant might have an invoice type, which has Customer ID, Invoice Number and Invoice Date. Another tenant might have an application form, which has Member Number, Application Number, Member Name, and Application Date.

    So far I've used the old method which SharePoint used (uses?), and created a document table which has int_field_1, int_field_2, date_field_1, date_field_2, etc. Then I've got a "mapping" table which stores the customer-specific index name and the database field it maps to. I've avoided the key-value pair model in the DB due to the volume of documents. This way, we can support multiple document types in the one table, get reasonably high performance out of it, and allow for custom document type searches (i.e. the user selects a document type, then they're presented with a list of search fields).

    However, a NoSQL DB might make this a lot simpler, as I don't need to worry about denormalizing the document. I've just got concerns about the rest of the data around a document. We store an "action history" against the document. This tracks views, whether someone emails the document from within the system, and other "future" functionality (e.g. faxing). We have control over the document load process, so we can manipulate the data however it needs to be to get it into the document store (e.g. assign unique IDs). Users will not be adding in their own documents, so we shouldn't need to worry about ACID compliance, as the documents are relatively static.

    So, I guess my questions are:

    - Is a NoSQL DB a good fit?
    - Is MongoDB the best for ASP.NET? (I saw Raven and Velocity, but they're still kinda beta.)
    - Can I store a key for each document, and then store the action history in an MSSQL DB with this key? I don't need to do joins; it would be for when a person clicks "View History" against a document.
    - How would performance compare between the two (NoSQL DB vs denormalized "document" table)?

    Volumes would be up to 200,000 new documents per month for a single tenant. My current scaling plan with the SQL DB involves moving the SQL DB into a cluster when certain thresholds are reached, and then reviewing partitioning and indexing structures.
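
    A sketch of what one stored document could look like in a document database (field names invented from the invoice example above): the tenant-specific index fields become ordinary keys on the document itself, so the int_field_1 / mapping-table indirection disappears:

        {
          "_id": "doc-000123",
          "tenant": "tenant-a",
          "docType": "invoice",
          "indexes": {
            "CustomerID": 4711,
            "InvoiceNumber": "INV-2010-0042",
            "InvoiceDate": "2010-04-23"
          },
          "imageRef": "archive/2010/04/doc-000123.tif"
        }

    Searching by document type is then a query on docType plus whichever indexes.* fields that type defines, and the _id can double as the key stored alongside the action history in MSSQL.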

  • Why is Perl's Math::Complex taking up so much time when I try acos(1)?

    - by synapz
    While trying to profile my Perl program, I find that Math::Complex is taking up a lot of time with what looks like some kind of warning. Also, my code shouldn't have any complex numbers being generated or used, so I am not sure what it is doing in Math::Complex anyway. Here's the FastProf output for the most expensive lines:

        /usr/lib/perl5/5.8.8/Math/Complex.pm:182 1.55480 276232: _cannot_make("real part", $re) unless $re =~ /^$gre$/;
        /usr/lib/perl5/5.8.8/Math/Complex.pm:310 1.01132 453641: sub cartesian {$_[0]->{c_dirty} ?
        /usr/lib/perl5/5.8.8/Math/Complex.pm:315 0.97497 562188: sub set_cartesian { $_[0]->{p_dirty}++; $_[0]->{c_dirty} = 0;
        /usr/lib/perl5/5.8.8/Math/Complex.pm:189 0.86302 276232: return $self;
        /usr/lib/perl5/5.8.8/Math/Complex.pm:1332 0.85628 293660: $self->{display_format} = { %display_format };
        /usr/lib/perl5/5.8.8/Math/Complex.pm:185 0.81529 276232: _cannot_make("imaginary part", $im) unless $im =~ /^$gre$/;
        /usr/lib/perl5/5.8.8/Math/Complex.pm:1316 0.78749 293660: my %display_format = %DISPLAY_FORMAT;
        /usr/lib/perl5/5.8.8/Math/Complex.pm:1335 0.69534 293660: %{$self->{display_format}} :
        /usr/lib/perl5/5.8.8/Math/Complex.pm:186 0.66697 276232: $self->set_cartesian([$re, $im ]);
        /usr/lib/perl5/5.8.8/Math/Complex.pm:170 0.62790 276232: my $self = bless {}, shift;
        /usr/lib/perl5/5.8.8/Math/Complex.pm:172 0.56733 276232: if (@_ == 0) {
        /usr/lib/perl5/5.8.8/Math/Complex.pm:316 0.53179 281094: $_[0]->{'cartesian'} = $_[1] }
        /usr/lib/perl5/5.8.8/Math/Complex.pm:1324 0.48768 293660: if (@_ == 1) {
        /usr/lib/perl5/5.8.8/Math/Complex.pm:1319 0.44835 293660: if (exists $self->{display_format}) {
        /usr/lib/perl5/5.8.8/Math/Complex.pm:1318 0.40355 293660: if (ref $self) { # Called as an object method
        /usr/lib/perl5/5.8.8/Math/Complex.pm:187 0.39950 276232: $self->display_format('cartesian');
        /usr/lib/perl5/5.8.8/Math/Complex.pm:1315 0.39312 293660: my $self = shift;
        /usr/lib/perl5/5.8.8/Math/Complex.pm:1331 0.38087 293660: if (ref $self) { # Called as an object method
        /usr/lib/perl5/5.8.8/Math/Complex.pm:184 0.35171 276232: $im ||= 0;
        /usr/lib/perl5/5.8.8/Math/Complex.pm:181 0.34145 276232: if (defined $re) {
        /usr/lib/perl5/5.8.8/Math/Complex.pm:171 0.33492 276232: my ($re, $im);
        /usr/lib/perl5/5.8.8/Math/Complex.pm:390 0.20658 128280: my ($z1, $z2, $regular) = @_;
        /usr/lib/perl5/5.8.8/Math/Complex.pm:391 0.20631 128280: if ($z1->{p_dirty} == 0 and ref $z2 and $z2->{p_dirty} == 0) {

    Thanks for any help!
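
    One thing worth testing (an assumption about the cause: Math::Trig implements its inverse trig functions on top of Math::Complex, so an acos() imported from Math::Trig constructs complex objects even for real arguments): use the C-level acos from POSIX instead, which never touches Math::Complex for arguments in [-1, 1]:

        use strict;
        use warnings;
        use POSIX qw(acos);    # C library acos, no Math::Complex involved

        my $angle = acos(1);   # 0, computed without building complex numbers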

  • Why does one loop take longer to detect a shared memory update than another loop?

    - by Joseph Garvin
    I've written a 'server' program that writes to shared memory, and a client program that reads from the memory. The server has different 'channels' that it can be writing to, which are just different linked lists that it's appending items to. The client is interested in some of the linked lists, and wants to read every node that's added to those lists as it comes in, with the minimum latency possible. I have two approaches for the client:

    1. For each linked list, the client keeps a 'bookmark' pointer to keep its place within the linked list. It round-robins the linked lists, iterating through all of them over and over (it loops forever), moving each bookmark one node forward each time if it can. Whether it can is determined by the value of a 'next' member of the node. If it's non-null, then jumping to the next node is safe (the server switches it from null to non-null atomically). This approach works OK, but if there are a lot of lists to iterate over, and only a few of them are receiving updates, the latency gets bad.

    2. The server gives each list a unique ID. Each time the server appends an item to a list, it also appends the ID number of the list to a master 'update list'. The client only keeps one bookmark, a bookmark into the update list. It endlessly checks if the bookmark's next pointer is non-null ( while(node->next_ == NULL) {} ); if so, it moves ahead, reads the ID given, and then processes the new node on the linked list that has that ID. This, in theory, should handle large numbers of lists much better, because the client doesn't have to iterate over all of them each time.

    When I benchmarked the latency of both approaches (using gettimeofday), to my surprise #2 was terrible. The first approach, for a small number of linked lists, would often be under 20us of latency. The second approach would have small spates of low latency but would often be between 4,000-7,000us!

    Through inserting gettimeofday's here and there, I've determined that all of the added latency in approach #2 is spent in the loop repeatedly checking if the next pointer is non-null. This is puzzling to me; it's as if the change in one process is taking longer to 'publish' to the second process with the second approach. I assume there's some sort of cache interaction going on that I don't understand. What's going on?
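
    For concreteness, a sketch of the approach-#2 wait loop as described (the volatile qualifier and the pause hint are assumptions; the question's own loop body is just while(node->next_ == NULL) {}):

        #include <immintrin.h>              /* _mm_pause */

        struct update_node {
            struct update_node *volatile next;  /* server flips NULL -> non-NULL */
            int list_id;                        /* which channel got an append   */
        };

        static struct update_node *wait_for_next(struct update_node *bookmark)
        {
            while (bookmark->next == NULL)
                _mm_pause();                /* spin politely while waiting */
            return bookmark->next;
        }

    The spin itself is the suspicious part: one plausible factor is that a tight load loop keeps the shared cache line hot in the reader's cache, so the writer must win exclusive ownership of that single contended update-list line before its store becomes visible, which can cost far more than the sparsely polled per-list bookmarks do.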

  • Oracle Coding Standards Feature Implementation

    - by Mike Hofer
    Okay, I have reached a sort of an impasse. In my open source project, a .NET-based Oracle database browser, I've implemented a bunch of refactoring tools. So far, so good. The one feature I was really hoping to implement was a big "Global Reformat" that would make the code (scripts, functions, procedures, packages, views, etc.) standards compliant. (I've always been saddened by the lack of decent SQL refactoring tools, and wanted to do something about it.)

    Unfortunately, I am discovering, much to my chagrin, that there doesn't seem to be any one widely used or even generally accepted standard for PL/SQL. That kind of puts a crimp in my implementation plans. My search has been fairly exhaustive. I've found lots of conflicting documents, threads and articles, and the opinions are fairly diverse. (Comma placement, of all things, seems to generate quite a bit of debate.) So I'm faced with a couple of options:

    - Add a feature that lets the user customize the standard and then reformat the code according to that standard.
    - Add a feature that lets the user customize the standard and simply generate a violations list like StyleCop does, leaving the SQL untouched.

    In my mind, the first option saves the end users a lot of work, but runs the risk of modifying SQL in potentially unwanted ways. The second option runs the risk of generating lots of warnings and doing no work whatsoever. (It'd just be generally annoying.) In either scenario, I still have no standard to go by.

    What I'd need to know from you guys is kind of poll-ish, but kind of not: if you were going to use a tool of this nature, what parts of your SQL code would you want it to warn you about or fix? Again, I'm just at a loss due to the lack of a cohesive standard. And given that there isn't anything out there that's officially published by Oracle, I think this is something the community could weigh in on. Also, given the way that voting works on SO, the votes would help to establish the popularity of a given "refactoring."

    P.S. The engine parses SQL into an expression tree so it can robustly analyze the SQL and reformat it. There should be quite a bit that we can do to correct the format of the SQL, but I am thinking that for the first release of the thing, layout is the primary concern. Though it is worth noting that the thing already has refactorings for converting keywords to upper case and identifiers to lower case.

  • How can I easily maintain a cross-file JavaScript Library Development Environment

    - by John
    I have been developing a new JavaScript application which is rapidly growing in size. My entire JavaScript application has been encapsulated inside a single function, in a single file, in a way like this:

        (function(){
            var uniqueApplication = window.uniqueApplication = function(opts){
                if (opts.featureOne) { this.featureOne = new featureOne(opts.featureOne); }
                if (opts.featureTwo) { this.featureTwo = new featureTwo(opts.featureTwo); }
                if (opts.featureThree) { this.featureThree = new featureThree(opts.featureThree); }
            };

            var featureOne = function(options) { this.options = options; };
            featureOne.prototype.myFeatureBehavior = function() {
                //Lots of Behaviors
            };

            var featureTwo = function(options) { this.options = options; };
            featureTwo.prototype.myFeatureBehavior = function() {
                //Lots of Behaviors
            };

            var featureThree = function(options) { this.options = options; };
            featureThree.prototype.myFeatureBehavior = function() {
                //Lots of Behaviors
            };
        })();

    In the same file, after the anonymous function and execution, I do something like this:

        (function(){
            var instanceOfApplication = new uniqueApplication({
                featureOne:"dataSource",
                featureTwo:"drawingCanvas",
                featureThree:3540
            });
        })();

    Before uploading this software online I pass my JavaScript file, and all its dependencies, into Google Closure Compiler, using just the default compression, and then I have one nice JavaScript file ready to go for production. This technique has worked marvelously for me, as it has created only one global footprint and has given me a very flexible framework to grow each additional feature of the application.

    However, I am reaching the point where I'd really rather not keep this entire application inside one JavaScript file. I'd like to move from having one large uniqueApplication.js file during development to having a separate file for each feature in the application: featureOne.js, featureTwo.js, featureThree.js. Once I have completed offline development testing, I would then like to use something, perhaps Google Closure Compiler, to combine all of these files together; however, I want these files to all be compiled inside of that scope, as they are when I have them inside one file, and I would like for them to remain in the same scope during offline testing too.

    I see that Google Closure Compiler supports an argument for passing in modules, but I haven't really been able to find a whole lot of information on doing something like this. Anybody have any idea how this could be accomplished, or any suggestions on a development practice for writing a single JavaScript library across multiple files that still only leaves one global footprint?
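
    A sketch of a build step that would preserve the single enclosing scope without Closure's module system (file names from the question, main.js standing in for the instantiation block; treat the exact compiler flags as an assumption for your setup):

        # concatenate the feature files inside one wrapper, then compile
        { echo "(function(){";
          cat featureOne.js featureTwo.js featureThree.js main.js;
          echo "})();"; } > build/uniqueApplication.js

        java -jar compiler.jar \
             --js build/uniqueApplication.js \
             --js_output_file build/uniqueApplication.min.js

    During offline testing the same files can instead be loaded with separate script tags; the feature constructors then live in the global scope rather than the closure, which is usually an acceptable difference for local development.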

  • How do gitignore exclusion rules actually work?

    - by meowsqueak
    I'm trying to solve a gitignore problem on a large directory structure, but to simplify my question I have reduced it to the following. I have the following directory structure of two files (foo, bar) in a brand new git repository (no commits so far):

        a/b/c/foo
        a/b/c/bar

    Obviously, a 'git status -u' shows:

        # Untracked files:
        ...
        #       a/b/c/bar
        #       a/b/c/foo

    What I want to do is create a .gitignore file that ignores everything inside a/b/c but does not ignore the file 'foo'. If I create a .gitignore thus:

        c/

    then a 'git status -u' shows both foo and bar as ignored:

        # Untracked files:
        ...
        #       .gitignore

    which is as I expect. Now if I add an exclusion rule for foo, thus:

        c/
        !foo

    according to the gitignore manpage, I'd expect this to work. But it doesn't - it still ignores foo:

        # Untracked files:
        ...
        #       .gitignore

    This doesn't work either:

        c/
        !a/b/c/foo

    Neither does this:

        c/*
        !foo

    That gives:

        # Untracked files:
        ...
        #       .gitignore
        #       a/b/c/bar
        #       a/b/c/foo

    In that case, although foo is no longer ignored, bar is also not ignored. The order of the rules in .gitignore doesn't seem to matter either. This also doesn't do what I'd expect:

        a/b/c/
        !a/b/c/foo

    That one ignores both foo and bar. One situation that does work is if I create the file a/b/c/.gitignore and put in there:

        *
        !foo

    But the problem with this is that eventually there will be other subdirectories under a/b/c, and I don't want to have to put a separate .gitignore into every single one - I was hoping to create 'project-based' .gitignore files that can sit in the top directory of each project and cover all the 'standard' subdirectory structure. This also seems to be equivalent:

        a/b/c/*
        !a/b/c/foo

    This might be the closest thing to "working" that I can achieve, but the full relative paths and explicit exceptions need to be stated, which is going to be a pain if I have a lot of files named 'foo' at different levels of the subdirectory tree.

    Anyway, either I don't quite understand how exclusion rules work, or they don't work at all when directories (rather than wildcards) are ignored by a rule ending in a /. Can anyone please shed some light on this? Is there a way to make gitignore use something sensible like regular expressions instead of this clumsy shell-based syntax? I'm using and observe this with git-1.6.6.1 on Cygwin/bash3 and git-1.7.1 on Ubuntu/bash3.
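
    For reference, the behaviour that explains most of the above, per the gitignore documentation: once a directory itself is excluded (a rule like c/), git does not descend into it at all, so a negation rule for a file inside it can never match. The workable patterns are therefore the ones that exclude the directory's contents rather than the directory:

        a/b/c/*
        !a/b/c/foo

    which is exactly the variant found to behave above; the per-directory a/b/c/.gitignore with * and !foo works for the same reason.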

  • Suggestions for designing a large-scale Java webapp from the ground up

    - by Chris Thompson
    Hi all, I'm about to start developing a large-scale system and I'm struggling with which direction to proceed. I've done plenty of Java web apps before, and I have plenty of experience with servlet containers and GWT, and some experience with Spring. The problem is most of my webapps have been thrown together just as proofs of concept, and what I'm struggling with is what set of frameworks to use.

    I need to have both a browser-based application as well as a web service designed to support access from mobile devices (Android and iPhone for now). Ideally, I'd like to design this system in such a way that I don't end up rewriting all of my servlets for each client (browser and phone), although I don't mind having some small checks in there to properly format the data. In addition, although I'm the only developer now, that won't necessarily be the case down the road, and I'd like to design something that scales well both with regard to traffic and to the number of developers (i.e. isn't just a nightmare to maintain).

    So where I am now is planning on using GWT to design the browser-based interface, but I'm struggling with how to reuse that code to present the interface (most likely XML) for the mobile devices. Using GWT RPC would, I think, make it relatively easy to do all of the AJAX in the browser, but might make generating XML for the mobile phones difficult. In addition, I like the idea of using something like Hibernate for persistence and Spring Security to secure the whole thing. Again, I'm not sure how well those will cooperate with GWT (I think Hibernate should be fine...).

    There's obviously a lot more to this than I've presented here, but I've tried to give you the 5-minute overview. I'm a bit stumped and was wondering if anybody in the community had any experience starting from this place. Does what I'm trying to do make sense? Is it realistic? I have no doubt I can make all of these frameworks speak the same language; I'm just wondering if it's worth my time to fight with them. Also, am I missing a framework that would be really beneficial? Thanks in advance and sorry for the relatively broad question... Chris

  • Potential issues using member's "from" address and the "sender" header

    - by Paul Burney
    Hi all. A major component of our application sends email to members on behalf of other members. Currently we set the "From" address to our system address and use a "Reply-To" header with the member's address. The issue is that replies from some email clients (and auto-replies/bounces) don't respect the "Reply-To" header, so they get sent to our system address, effectively sending them to a black hole. We're considering setting the "From" address to our member's address, and the "Sender" address to our system address. It appears this way would pass SPF and Sender ID checks.

    Are there any reasons not to switch to this method? Are there any other potential issues? Thanks in advance, -Paul

    Here are way more details than you probably need: when the application was first developed, we just changed the "From" address to be that of the sending member, as that was the common practice at the time (this was many years ago). We later changed that to have the "From" address be the member's name and our address (addresses below are placeholders; the originals were redacted), i.e.:

        From: "Mary Smith" <system@example.com>

    with a "Reply-To" header set to the member's address:

        Reply-To: "Mary Smith" <mary.smith@example.net>

    This helped with messages being miscategorized as spam. As SPF became more popular, we added an additional header that would work in conjunction with our SPF records:

        Sender: <system@example.com>

    Things work OK, but it turns out that, in practice, some email clients and most MTAs don't respect the "Reply-To" header. Because of this, many members send messages to the system address instead of the desired member. So I started envisioning various schemes to add data about the sender to the email headers, or to encode it in the "From" email address so that we could process the response and redirect appropriately. For example:

        From: "Mary Smith" <messages0987654321@example.com>

    where the string after "messages" is a hash representing Mary Smith's member in our system. Of course, that path could lead to a lot of pain, as we would need to develop MTA functionality for our system address.

    I was looking again at the SPF documentation and found this page interesting: http://www.openspf.org/Best_Practices/Webgenerated. It shows two examples, that of evite.com and that of egreetings.com. Basically, evite.com is doing it the way we're doing it. The egreetings.com example uses the member's From address with an added "Sender" header.

    So the question is: are there any potential issues with using the egreetings method of the member's From address with a Sender header? That would eliminate the replies that bad clients send to the system address. I don't believe it solves the bounce/vacation/whitelist issue, since those often send to the MAIL FROM even if Return-Path is specified.

  • Heroku Push Problem Part 2 - PostgreSQL - PGError: relation does not exist - Ruby on Rails

    - by bgadoci
    OK, so I got through my last problem with the difference between PostgreSQL and SQLite, and it seems like Heroku is telling me I have another one. I am new to Ruby and Rails, so a lot of this stuff I can't decipher at first. Looking for a little direction here. The error message and PostsController#index are below. I checked my routes.rb file and all seems well there, but I could be missing something. I will post it if you need.

        Processing PostsController#index (for 99.7.50.140 at 2010-04-23 15:19:22) [GET]

        ActiveRecord::StatementInvalid (PGError: ERROR: relation "tags" does not exist
        : SELECT a.attname, format_type(a.atttypid, a.atttypmod), d.adsrc, a.attnotnull
          FROM pg_attribute a LEFT JOIN pg_attrdef d
            ON a.attrelid = d.adrelid AND a.attnum = d.adnum
         WHERE a.attrelid = '"tags"'::regclass
           AND a.attnum > 0 AND NOT a.attisdropped
         ORDER BY a.attnum
        ):

    PostsController#index:

        def index
          @tag_counts = Tag.count(:group => :tag_name, :order => 'count_all DESC', :limit => 20)
          conditions, joins = {}, :votes
          @ugtag_counts = Ugtag.count(:group => :ugctag_name, :order => 'count_all DESC', :limit => 20)
          conditions, joins = {}, :votes
          @vote_counts = Vote.count(:group => :post_title, :order => 'count_all DESC', :limit => 20)
          conditions, joins = {}, :votes
          unless (params[:tag_name] || "").empty?
            conditions = ["tags.tag_name = ? ", params[:tag_name]]
            joins = [:tags, :votes]
          end
          @posts = Post.paginate(
            :select => "posts.*, count(*) as vote_total",
            :joins => joins,
            :conditions => conditions,
            :group => "votes.post_id, posts.id ",
            :order => "created_at DESC",
            :page => params[:page], :per_page => 5)
          @popular_posts = Post.paginate(
            :select => "posts.*, count(*) as vote_total",
            :joins => joins,
            :conditions => conditions,
            :group => "votes.post_id, posts.id",
            :order => "vote_total DESC",
            :page => params[:page], :per_page => 3)
          respond_to do |format|
            format.html # index.html.erb
            format.xml  { render :xml => @posts }
            format.json { render :json => @posts }
            format.atom
          end
        end
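
    PGError's relation "tags" does not exist means the tags table itself is missing from the Heroku database, which usually just means migrations were never run there. A likely first step (heroku gem CLI of that era, run from the app directory):

        heroku rake db:migrate
        heroku restart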

  • Ultimate Get/Post with Android Thread for Dummies

    - by Jayomat
    Hi, I'm writing an app to check bus timetables. Therefore I need to post some data to an HTML page, submit it, and parse the resulting page with htmlparser. Though it may be asked a lot, can someone help me:

        1) identify if this page supports POST/GET (I think it does),
        2) figure out which fields I need to use, and
        3) make the actual request?

    This is my code so far:

        String url = "http://busspur02.aseag.de/bs.exe?Cmd=RV&Karten=true&DatumT=30&DatumM=4&DatumJ=2010&ZeitH=&ZeitM=&Suchen=%28S%29uchen&GT0=&HT0=&GT1=&HT1=";
        String charset = "CP1252";

        System.out.println("startFrom: " + start_from);
        System.out.println("goTo: " + destination);

        List<NameValuePair> params = new ArrayList<NameValuePair>();
        params.add(new BasicNameValuePair("HTO", start_from));
        params.add(new BasicNameValuePair("HT1", destination));
        params.add(new BasicNameValuePair("GTO", "Aachen"));
        params.add(new BasicNameValuePair("GT1", "Aachen"));
        params.add(new BasicNameValuePair("DatumT", day));
        params.add(new BasicNameValuePair("DatumM", month));
        params.add(new BasicNameValuePair("DatumJ", year));
        params.add(new BasicNameValuePair("ZeitH", hour));
        params.add(new BasicNameValuePair("ZeitM", min));

        UrlEncodedFormEntity query = new UrlEncodedFormEntity(params, charset);
        HttpPost post = new HttpPost(url);
        post.setEntity(query);
        InputStream response = new DefaultHttpClient().execute(post).getEntity().getContent();

        // Now do your thing with the response.
        String source = readText(response, "CP1252");
        Log.d(TAG_AVV, response.toString());
        System.out.println("STREAM " + source);

    One person also gave me a hint to use Firebug to read what's going on on the page, but I don't really understand what to look for, or more precisely, how to use the provided information. I also find it confusing, for example, that when I enter the data by hand, the URL says "....HTO=Kaiserplatz&...", but in Firebug the same Kaiserplatz is connected to a different field, in this case:

        <td class="Start3">Kaiserplatz</td>

    The last line in my code prints the HTML page, but as if there was no input at all... My app is almost done; I hope someone can help me finish it! Thanks in advance
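
    One detail worth cross-checking (visible in the URL above, where the field names are GT0/HT0 with a digit zero, while the params list uses the letter O): a sketch of the zero-based names, in case the server is silently ignoring the misspelled fields, which would match a result page that looks like no input was given:

        // assumption: the form's field names must match exactly (digit 0, not letter O)
        params.add(new BasicNameValuePair("HT0", start_from));
        params.add(new BasicNameValuePair("GT0", "Aachen"));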

  • Creating a variable list with Pashua, OS X & Bash

    - by S1syphus
    First of all, for those that don't know, Pashua is a tool for creating native Aqua dialog windows. An example of what a window config looks like is:

        # pashua_run()
        # Define what the dialog should be like.
        # Take a look at Pashua's Readme file for more info on the syntax.
        conf="
        # Set transparency: 0 is transparent, 1 is opaque
        *.transparency=0.95

        # Set window title
        *.title = Introducing Pashua

        # Introductory text
        tb.type = text
        tb.default = "HELLO WORLD"
        tb.height = 276
        tb.width = 310
        tb.x = 340
        tb.y = 44
        "

        if [ -e "$icon" ]
        then
            # Display Pashua's icon
            conf="$conf
            img.type = image
            img.x = 530
            img.y = 255
            img.path = $icon"
        fi

        if [ -e "$bgimg" ]
        then
            # Display background image
            conf="$conf
            bg.type = image
            bg.x = 30
            bg.y = 2
            bg.path = $bgimg"
        fi

        pashua_run "$conf"

        echo " tb = $tb"

    The problem is, Pashua can't really get output from stdout, but it can get arguments. Following on from what Dennis Williamson posted here, what ideally it should do is generate output based on information from a text file, to be executed in pashua_run, or to have pashua_run wrapped around the window definition:

        count=1
        while read -r i
        do
            echo "AB${count}.type = openbrowser"
            echo "AB${count}.label = Choose a master playlist file"
            echo "AB${count}.width=310"
            echo "AB${count}.tooltip = Blabla filesystem browser"
            echo "some text with a line from the file: $i"
            (( count++ ))
        done < TEST.txt >> long.txt

    So the output is:

        AB1.type = openbrowser
        AB1.label = Choose a master playlist file
        AB1.width=310
        AB1.tooltip = Blabla filesystem browser
        some text with a line from the file: foo
        AB2.type = openbrowser
        AB2.label = Choose a master playlist file
        AB2.width=310
        AB2.tooltip = Blabla filesystem browser
        some text with a line from the file: bar
        AB3.type = openbrowser
        AB3.label = Choose a master playlist file
        AB3.width=310
        AB3.tooltip = Blabla filesystem browser
        some text with a line from the file: dev
        AB4.type = openbrowser
        AB4.label = Choose a master playlist file
        AB4.width=310
        AB4.tooltip = Blabla filesystem random

    So if there is a clever way to get that output and place it into pashua_run on the fly, that would be cool, i.e. load the contents of TEST.txt and place the generated text into pashua_run. I've tried using cat and opening the file... but because it's in pashua_run it doesn't work. Is there a smart way to break out and then back in? Or the second way, which I was thinking: create the output, then append it into the middle of a text file containing the pashua runtime, then execute it. Maybe slightly hacky, but I would imagine it will do the job. Any ideas?

    ++ I know I could probably make my life a lot easier by doing this in ActionScript and Cocoa, although at present I don't have time for such a learning curve, though I do plan to get round to it.
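
    A sketch of the first idea: capture the generated lines with a command substitution and splice them into the conf string before the single pashua_run call (TEST.txt and the AB widgets as above; no intermediate long.txt needed):

        extra="$(
          count=1
          while read -r i; do
            echo "AB${count}.type = openbrowser"
            echo "AB${count}.label = Choose a master playlist file"
            echo "AB${count}.width = 310"
            echo "AB${count}.tooltip = $i"
            count=$((count + 1))
          done < TEST.txt
        )"

        conf="$conf
        $extra"

        pashua_run "$conf"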

  • What are five things you hate about your favorite language?

    - by brian d foy
    There's been a cluster of Perl-hate on Stack Overflow lately, so I thought I'd bring my "Five things you hate about your favorite language" question to Stack Overflow. Take your favorite language and tell me five things you hate about it. Those might be things that just annoy you, admitted design flaws, recognized performance problems, or any other category. You just have to hate it, and it has to be your favorite language.

    Don't compare it to another language, and don't talk about languages that you already hate. Don't talk about the things you like in your favorite language. I just want to hear the things that you hate but tolerate so you can use all of the other stuff, and I want to hear it about the language you wished other people would use.

    I ask this whenever someone tries to push their favorite language on me, and sometimes as an interview question. If someone can't find five things to hate about his favorite tool, he doesn't know it well enough to either advocate it or pull in the big dollars using it. He hasn't used it in enough different situations to fully explore it. He's advocating it as a culture or religion, which means that if I don't choose his favorite technology, I'm wrong.

    I don't care that much which language you use. Don't want to use a particular language? Then don't. You go through due diligence to make an informed choice and still don't use it? Fine. Sometimes the right answer is "You have a strong programming team with good practices and a lot of experience in Bar. Changing to Foo would be stupid."

    This is a good question for code reviews too. People who really know a codebase will have all sorts of suggestions for it, and those who don't know it so well have non-specific complaints. I ask things like "If you could start over on this project, what would you do differently?" In this fantasy land, users and programmers get to complain about anything and everything they don't like. "I want a better interface", "I want to separate the model from the view", "I'd use this module instead of this other one", "I'd rename this set of methods", or whatever they really don't like about the current situation. That's how I get a handle on how much a particular developer knows about the codebase. It's also a clue about how much of the programmer's ego is tied up in what he's telling me.

    Hate isn't the only dimension of figuring out how much people know, but I've found it to be a pretty good one. The things that they hate also give me a clue how well they are thinking about the subject.

  • Tunnel over HTTPS

    - by ephemient
    At my workplace, the traffic blocker/firewall has been getting progressively worse. I can't connect to my home machine on port 22, and lack of ssh access makes me sad. I was previously able to use SSH by moving it to port 5050, but I think some recent filters now treat this traffic as IM and redirect it through another proxy, maybe. That's my best guess; in any case, my ssh connections now terminate before I get to log in.

    These days I've been using Ajaxterm over HTTPS, as port 443 is still unmolested, but this is far from ideal. (Sucky terminal emulation, lack of port forwarding, my browser leaks memory at an amazing rate...)

    I tried setting up mod_proxy_connect on top of mod_ssl, with the idea that I could send a CONNECT localhost:22 HTTP/1.1 request through HTTPS, and then I'd be all set. Sadly, this seems not to work; the HTTPS connection works, up until I finish sending my request; then SSL craps out. It appears as though mod_proxy_connect takes over the whole connection instead of continuing to pipe through mod_ssl, confusing the heck out of the HTTPS client. Is there a way to get this to work? I don't want to do this over plain HTTP, for several reasons:

    - Leaving a big fat open proxy like that just stinks.
    - A big fat open proxy is not good over HTTPS either, but with authentication required it feels fine to me.
    - HTTP goes through a proxy -- I'm not too concerned about my traffic being sniffed, as it's ssh that'll be going "plaintext" through the tunnel -- but it's a lot more likely to be mangled than HTTPS, which fundamentally cannot be proxied.

    Requirements:

    - Must work over port 443, without disturbing other HTTPS traffic (i.e. I can't just put the ssh server on port 443, because I would no longer be able to serve pages over HTTPS).
    - I have, or can write, a simple port forwarder client that runs under Windows (or Cygwin).

    Edit: DAG: Tunnelling SSH over HTTP(S) has been pointed out to me, but it doesn't help: at the end of the article, they mention Bug 29744 - CONNECT does not work over existing SSL connection, preventing tunnelling over HTTPS - exactly the problem I was running into. At this point, I am probably looking at some CGI script, but I don't want to list that as a requirement if there are better solutions available.
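
    For concreteness, a sketch of the Apache side being described (real mod_ssl/mod_proxy directives; file paths are placeholders, and note this is exactly the configuration that trips Apache bug 29744, since the CONNECT is not handled inside the existing SSL session):

        <VirtualHost *:443>
            SSLEngine on
            SSLCertificateFile    /etc/apache2/ssl/server.crt
            SSLCertificateKeyFile /etc/apache2/ssl/server.key

            ProxyRequests On
            # only allow tunnelling to the ssh port
            AllowCONNECT 22

            <Proxy *>
                # authentication, so this is not a big fat open proxy
                AuthType Basic
                AuthName "tunnel"
                AuthUserFile /etc/apache2/tunnel.passwd
                Require valid-user
            </Proxy>
        </VirtualHost>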

  • How to set up a webserver in Common Lisp?

    - by Serpico
    Several months ago, I was inspired by the magnificent book ANSI Common Lisp, written by Paul Graham, and the statement, published by the same author on his blog, that Lisp could be used as a secret weapon in your web development. Wow, that is amazing. That is something I had been looking for for a long time. The author really developed a successful web application and sold it to Yahoo. With those encouraging images, I determined to spend some time (1 year or 2 years, who knows) learning Common Lisp. Maybe someday I will develop my own web application and turn into a great Lisp expert.

    In fact, this is the second time I have set out to study Lisp. The first time was a couple of years ago, when I was fascinated by the famous book SICP but later found Scheme too immature for real-life applications. After reading some chapters of ANSI Common Lisp, I was pretty sure it is a great book, full of detailed exploration of Common Lisp. Then I began to set up a web server in Common Lisp. After all, this should be the best way to learn something: demonstrations are always better than definitions.

    As suggested by the book Practical Common Lisp (by the way, this is also a great book), I chose to install AllegroServe on some Common Lisp implementation. Then, from somewhere else, I learned that Hunchentoot seems to be better than AllegroServe. (I don't remember where or whom this word is from, so don't argue with me.) Ironically, you know what, I never could install either of the two packages on any Common Lisp implementation. More annoyingly, I don't even know why. The machine always spits up a lot of jargon and leads me into chaos. I've tried searching the internet and have not found anything.

    Could anybody who has successfully installed these packages on Linux tell me how you did it? Did you run into any trouble? How did you figure out what was wrong and fix it? The more detailed, the more helpful.
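
    For reference, a minimal Hunchentoot hello-world as it would look with today's tooling (this assumes Quicklisp is installed; symbol names are per current Hunchentoot releases, so verify against your version):

        (ql:quickload :hunchentoot)

        (hunchentoot:define-easy-handler (hello :uri "/hello") ()
          (setf (hunchentoot:content-type*) "text/plain")
          "Hello from Common Lisp!")

        (hunchentoot:start (make-instance 'hunchentoot:easy-acceptor :port 8080))

    Quicklisp pulls in Hunchentoot's dependency chain automatically, which is the part that tends to produce the wall of jargon when attempted by hand.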

  • What is the best workaround for the WCF client `using` block issue?

    - by Eric King
    I like instantiating my WCF service clients within a using block, as it's pretty much the standard way to use resources that implement IDisposable:

        using (var client = new SomeWCFServiceClient())
        {
            //Do something with the client
        }

    But, as noted in this MSDN article, wrapping a WCF client in a using block could mask any errors that result in the client being left in a faulted state (like a timeout or communication problem). Long story short, when Dispose() is called, the client's Close() method fires, but throws an error because it's in a faulted state. The original exception is then masked by the second exception. Not good.

    The suggested workaround in the MSDN article is to completely avoid using a using block, and to instead instantiate your clients and use them something like this:

        try
        {
            ...
            client.Close();
        }
        catch (CommunicationException e)
        {
            ...
            client.Abort();
        }
        catch (TimeoutException e)
        {
            ...
            client.Abort();
        }
        catch (Exception e)
        {
            ...
            client.Abort();
            throw;
        }

    Compared to the using block, I think that's ugly, and a lot of code to write each time you need a client. Luckily, I found a few other workarounds, such as this one on IServiceOriented. You start with:

        public delegate void UseServiceDelegate<T>(T proxy);

        public static class Service<T>
        {
            public static ChannelFactory<T> _channelFactory = new ChannelFactory<T>("");

            public static void Use(UseServiceDelegate<T> codeBlock)
            {
                IClientChannel proxy = (IClientChannel)_channelFactory.CreateChannel();
                bool success = false;
                try
                {
                    codeBlock((T)proxy);
                    proxy.Close();
                    success = true;
                }
                finally
                {
                    if (!success)
                    {
                        proxy.Abort();
                    }
                }
            }
        }

    which then allows:

        Service<IOrderService>.Use(orderService =>
        {
            orderService.PlaceOrder(request);
        });

    That's not bad, but I don't think it's as expressive and easily understandable as the using block.

    The workaround I'm currently trying to use I first read about on blog.davidbarret.net. Basically, you override the client's Dispose() method wherever you use it. Something like:

        public partial class SomeWCFServiceClient : IDisposable
        {
            void IDisposable.Dispose()
            {
                if (this.State == CommunicationState.Faulted)
                {
                    this.Abort();
                }
                else
                {
                    this.Close();
                }
            }
        }

    This appears to allow the using block again without the danger of masking a faulted-state exception. So, are there any other gotchas I have to look out for using these workarounds? Has anybody come up with anything better?

  • In GWT I'd like to load the main page on www.domain.com and a dynamic page on www.domain.com/xyz

    - by Mike Brecht
    Hi, I have a question for which I'm sure there must be a simple solution. I'm writing a small GWT application where I want to achieve this:

        www.domain.com     : should serve the welcome page
        www.domain.com/xyz : should serve page xyz, where xyz is just a key for an item in a database

    If there is an item associated with key xyz, I'll load that and show a page; otherwise I'll show a 404 error page. I was trying to modify the web.xml file accordingly, but I just couldn't make it work. I could make it work with a url-pattern if the key in question comes after another /, for example www.domain.com/search/xyz. However, I'd like to have the key xyz directly following the root / (http://www.domain.com/xyz). Something like:

        <servlet-mapping>
            <servlet-name>main</servlet-name>
            <url-pattern>/</url-pattern>
        </servlet-mapping>

    doesn't seem to work, as I don't know how to address the main page index.html (it's not an actual servlet), which will then load my main GWT module. I could make it work with a bad workaround (see below): redirecting a 404 to index.html and then doing the lookup in the main entry point, but I'm sure that's not best practice, also for SEO purposes. Can anyone give me a hint on how to configure the web.xml with GWT for my purpose? Thanks a lot. Mike

    Workaround via 404 - web.xml:

        <web-app>
            <error-page>
                <exception-type>404</exception-type>
                <location>/index.html</location>
            </error-page>
            <welcome-file-list>
                <welcome-file>index.html</welcome-file>
            </welcome-file-list>
        </web-app>

    index.html -- main entry point -- onModuleLoad():

        String path = Window.Location.getPath();
        if (path == null || path.length() == 0
                || path.equalsIgnoreCase("/")
                || path.equalsIgnoreCase("/index.html")) {
            ... // load main page
        } else {
            lookup(path.substring(1)); // that's key xyz
        }
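
    One cleaner alternative to sketch (an assumption, not from the question): a servlet filter that treats unknown root-level paths as keys and forwards them to the GWT host page, so the /xyz URL survives in the browser and the 404 handler is not abused:

        import java.io.IOException;
        import javax.servlet.*;
        import javax.servlet.http.HttpServletRequest;

        public class KeyLookupFilter implements Filter {
            public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                    throws IOException, ServletException {
                String path = ((HttpServletRequest) req).getServletPath();
                // let "/" and real resources (anything with a dot) pass through
                if (path.equals("/") || path.contains(".")) {
                    chain.doFilter(req, res);
                } else {
                    req.setAttribute("itemKey", path.substring(1)); // hypothetical attribute
                    req.getRequestDispatcher("/index.html").forward(req, res);
                }
            }
            public void init(FilterConfig config) {}
            public void destroy() {}
        }

    Mapped with a filter-mapping on /*, this keeps index.html as an ordinary welcome file, while onModuleLoad() can still read the key from Window.Location as above.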

  • Am I Leaking ADO.NET Connections?

    - by HardCode
    Here is an example of my code in a DAL. All calls to the database's stored procedures are structured this way, and there is no inline SQL.

        Friend Shared Function Save(ByVal s As MyClass) As Boolean
            Dim cn As SqlClient.SqlConnection = Dal.Connections.MyAppConnection
            Dim cmd As New SqlClient.SqlCommand
            Try
                cmd.Connection = cn
                cmd.CommandType = CommandType.StoredProcedure
                cmd.CommandText = "proc_save_my_class"
                cmd.Parameters.AddWithValue("@param1", s.Foo)
                cmd.Parameters.AddWithValue("@param2", s.Bar)
                Return True
            Finally
                Dal.Utility.CleanupAdoObjects(cmd, cn)
            End Try
        End Function

    Here is the connection factory (if I am using the correct term):

        Friend Shared Function MyAppConnection() As SqlClient.SqlConnection
            Dim cn As New SqlClient.SqlConnection(ConfigurationManager.ConnectionStrings("MyConnectionString").ToString)
            cn.Open()
            If cn.State <> ConnectionState.Open Then
                ' CriticalException is a custom object inheriting from Exception.
                Throw New CriticalException("Could not connect to the database.")
            Else
                Return cn
            End If
        End Function

    Here is the Dal.Utility.CleanupAdoObjects() function:

        Friend Shared Sub CleanupAdoObjects(ByVal cmd As SqlCommand, ByVal cn As SqlConnection)
            If cmd IsNot Nothing Then cmd.Dispose()
            If cn IsNot Nothing AndAlso cn.State <> ConnectionState.Closed Then cn.Close()
        End Sub

    I am getting a lot of "Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding." error messages reported by the users. The application's DAL opens a connection, reads or saves data, and closes it. No connections are ever left open - intentionally! There is nothing obvious on the Windows 2000 Server hosting SQL Server 2000 that would indicate a problem: nothing in the Event Logs and nothing in the SQL Server logs.

    The timeouts happen randomly - I cannot reproduce them. They happen early in the day with only 1 to 5 users in the system, and also with around 50 users in the system. The most connections to SQL Server via Performance Monitor, for all databases, has been about 74. The timeouts happen in code that both saves to, and reads from, the database, in different parts of the application. The stack trace does not point to one or two offending DAL functions; it's happened in many different places.

    Does my ADO.NET code appear to be able to leak connections? I've googled around a bit, and I've read that if the connection pool fills up, this can happen. However, I'm not explicitly setting any connection pooling. I've even tried to increase the Connection Timeout in the connection string, but timeouts happen long before the 300-second (5-minute) value:

        <add name="MyConnectionString" connectionString="Data Source=MyServer;Initial Catalog=MyDatabase;Integrated Security=SSPI;Connection Timeout=300;"/>

    I'm at a total loss as to what is causing these timeout issues. Any ideas are appreciated.
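
    For comparison, a sketch of the same Save method written with Using blocks, which dispose the command and close the connection even if an exception fires mid-method (note the posted code never executes the command; the ExecuteNonQuery here is an assumption about the intent):

        Friend Shared Function Save(ByVal s As MyClass) As Boolean
            Using cn As SqlClient.SqlConnection = Dal.Connections.MyAppConnection
                Using cmd As New SqlClient.SqlCommand("proc_save_my_class", cn)
                    cmd.CommandType = CommandType.StoredProcedure
                    cmd.Parameters.AddWithValue("@param1", s.Foo)
                    cmd.Parameters.AddWithValue("@param2", s.Bar)
                    cmd.ExecuteNonQuery() ' assumed: the posted code builds but never runs the command
                    Return True
                End Using
            End Using ' Dispose closes the connection (returns it to the pool) on any code path
        End Function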
