Search Results

Search found 6001 results on 241 pages for 'requires'.


  • git doesn't show where code was removed.

    - by Andrew Myers
    So I was tasked with replacing some dummy code that our project requires for historical compatibility reasons but that has mysteriously dropped out sometime since the last release. Since disappearing code makes me nervous about what else might have gone missing but unnoticed, I've been digging through the logs trying to find in what commit this handful of lines was removed. I've tried a number of things, including "git log -S'add-visit-resource-pcf'", git blame, and even git bisect with a script that simply checks for the existence of the line, but have been unable to pinpoint exactly where these lines were removed.

    I find this very perplexing, particularly since the last log entry (obtained by the above command) before my re-introduction of this code was someone else adding the code as well:

    ```
    commit 0b0556fa87ff80d0ffcc2b451cca1581289bbc3c
    Author: Andrew
    Date:   Thu May 13 10:55:32 2010 -0400

        Re-introduced add-visit-resource-pcf, see PR-65034.

    diff --git a/spike/hst/scheduler/defpackage.lisp b/spike/hst/scheduler/defpackage.lisp
    index f8e692d..a6f8d38 100644
    --- a/spike/hst/scheduler/defpackage.lisp
    +++ b/spike/hst/scheduler/defpackage.lisp
    @@ -115,6 +115,7 @@
        #:add-to-current-resource-pcf
        #:add-user-package-nickname
        #:add-value-criteria
    +   #:add-visit-resource-pcf
        #:add-window-to-gs-params
        #:adjust-derived-resources
        #:adjust-links-candidate-criteria-types

    commit 9fb10e25572c537076284a248be1fbf757c1a6e1
    Author: Bob
    Date:   Sun Jan 17 18:35:16 2010 -0500

        update-defpackage for Spike 33.1 Delivery

    diff --git a/spike/hst/scheduler/defpackage.lisp b/spike/hst/scheduler/defpackage.lisp
    index 983666d..47f1a9a 100644
    --- a/spike/hst/scheduler/defpackage.lisp
    +++ b/spike/hst/scheduler/defpackage.lisp
    @@ -118,6 +118,7 @@
        #:add-user-package-nickname
        #:add-value-criteria
        #:add-vars-from-proposal
    +   #:add-visit-resource-pcf
        #:add-window-to-gs-params
        #:adjust-derived-resources
        #:adjust-links-candidate-criteria-types
    ```

    This is for one of our package definition files, but the relevant source file reflects something similar. Does anyone know what could be going on here and how I could find the information I want? It's not really that important, but this kind of thing makes me a bit nervous.
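    A few git invocations that often help pin down where a line vanished (a sketch only, reusing the symbol and file path from the log above; adjust for your repository and branches):

    ```sh
    # Walk every commit that changed the number of occurrences of the string,
    # oldest first, across all refs - removals on side branches show up too.
    git log --all --reverse --oneline -S'add-visit-resource-pcf' -- spike/hst/scheduler/defpackage.lisp

    # Same search, but with the patches shown so the removal is visible in context.
    git log --all -p -S'add-visit-resource-pcf' -- spike/hst/scheduler/defpackage.lisp

    # Follow the file across renames, in case the line "disappeared" with a rename or merge.
    git log --follow -p -- spike/hst/scheduler/defpackage.lisp
    ```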


  • How to generate an ear file from a maven-archetype-webapp artifact?

    - by Mike
    I currently have a project built with the maven-archetype-webapp artifact. The default packaging for this project is war. Is it possible for me to add the maven-ear-plugin to this webapp's pom.xml and generate an ear file that contains this project's war? I tried that, but the war file doesn't get embedded in the generated ear file. It has everything except the war file. I have read many Maven-related articles, and perhaps I could use the maven-archetype-j2ee-simple artifact. However, I'm reluctant to use it for two reasons: first, it handles EJBs and all the extra features that I don't use, which makes my project look bloated. Second, it seems to require me to install the web module into the repository before I can create the ear file. Is this the preferred way to create an ear file? How do I create an ear file that contains the war file using maven-ear-plugin from my webapp's pom.xml? If this is not possible, what's the preferred way? I'm sorry if my questions sound a little novice; I realize I have a whole lot more to learn about Maven. Thanks much.
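    For reference, the usual arrangement is a small multi-module build: the existing webapp stays as it is, and a sibling module with ear packaging declares the war as a dependency and lists it as a webModule. A sketch of that ear module's pom.xml - all groupIds, artifactIds, versions and the context root below are placeholders:

    ```xml
    <!-- ear/pom.xml - sketch only -->
    <project>
      <modelVersion>4.0.0</modelVersion>
      <groupId>com.example</groupId>
      <artifactId>myapp-ear</artifactId>
      <version>1.0-SNAPSHOT</version>
      <packaging>ear</packaging>

      <dependencies>
        <!-- the existing webapp module, pulled in as a war -->
        <dependency>
          <groupId>com.example</groupId>
          <artifactId>myapp-web</artifactId>
          <version>1.0-SNAPSHOT</version>
          <type>war</type>
        </dependency>
      </dependencies>

      <build>
        <plugins>
          <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-ear-plugin</artifactId>
            <configuration>
              <modules>
                <webModule>
                  <groupId>com.example</groupId>
                  <artifactId>myapp-web</artifactId>
                  <contextRoot>/myapp</contextRoot>
                </webModule>
              </modules>
            </configuration>
          </plugin>
        </plugins>
      </build>
    </project>
    ```

    With a reactor (parent) pom listing both modules, `mvn package` builds the war and the ear in one pass, which avoids installing the war into the repository as a separate manual step.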


  • Can I get a reference to a pending transaction from a SqlConnection object?

    - by Rune
    Hey,

    Suppose someone (other than me) writes the following code and compiles it into an assembly:

    ```csharp
    using (SqlConnection conn = new SqlConnection(connString))
    {
        conn.Open();
        using (var transaction = conn.BeginTransaction())
        {
            /* Update something in the database */
            /* Then call any registered OnUpdate handlers */
            InvokeOnUpdate(conn);
            transaction.Commit();
        }
    }
    ```

    The call to InvokeOnUpdate(IDbConnection conn) calls out to an event handler that I can implement and register. Thus, in this handler I will have a reference to the IDbConnection object, but I won't have a reference to the pending transaction. Is there any way in which I can get a hold of the transaction? In my OnUpdate handler I want to execute something similar to the following:

    ```csharp
    private void MyOnUpdateHandler(IDbConnection conn)
    {
        var cmd = conn.CreateCommand();
        cmd.CommandText = someSQLString;
        cmd.CommandType = CommandType.Text;
        cmd.ExecuteNonQuery();
    }
    ```

    However, the call to cmd.ExecuteNonQuery() throws an InvalidOperationException complaining that "ExecuteNonQuery requires the command to have a transaction when the connection assigned to the command is in a pending local transaction. The Transaction property of the command has not been initialized". Can I in any way enlist my SqlCommand cmd with the pending transaction? Can I retrieve a reference to the pending transaction from the IDbConnection object (I'd be happy to use reflection if necessary)?
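    One approach that has been reported to work is to dig the pending SqlTransaction out of the connection's internal state via reflection. This is a sketch only: the member names used below (InnerConnection, CurrentTransaction, Parent) are undocumented internals of System.Data, are assumptions on my part, and can differ between framework versions.

    ```csharp
    using System;
    using System.Data.SqlClient;
    using System.Reflection;

    static class TransactionSpelunker
    {
        // Sketch: recover the pending SqlTransaction from a SqlConnection via reflection.
        // All internal member names here are assumptions - verify against your framework version.
        public static SqlTransaction GetPendingTransaction(SqlConnection conn)
        {
            const BindingFlags flags = BindingFlags.NonPublic | BindingFlags.Instance;

            object inner = conn.GetType()
                .GetProperty("InnerConnection", flags)?.GetValue(conn, null);
            object current = inner?.GetType()
                .GetProperty("CurrentTransaction", flags)?.GetValue(inner, null);
            return current?.GetType()
                .GetProperty("Parent", flags)?.GetValue(current, null) as SqlTransaction;
        }
    }
    ```

    If something like that returns the transaction, assigning it to cmd.Transaction before ExecuteNonQuery should satisfy the exception; but given how brittle reflection on internals is, changing the handler contract to pass the IDbTransaction alongside the IDbConnection is the safer fix if you can influence it.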


  • Encryption puzzle / How to create a PassStub for a Remote Assistance ticket

    - by Jon Clegg
    I am trying to create a ticket for Remote Assistance. Part of that requires creating a PassStub parameter. According to the documentation (http://msdn.microsoft.com/en-us/library/cc240115(PROT.10).aspx):

    PassStub: The encrypted novice computer's password string. When the Remote Assistance Connection String is sent as a file over e-mail, to provide additional security, a password is used.<16>

    In part 16 they detail how to create a PassStub: in Windows XP and Windows Server 2003, when a password is used, it is encrypted using the PROV_RSA_FULL predefined cryptographic provider with MD5 hashing and CALG_RC4, the RC4 stream encryption algorithm. A PassStub looks like this in the file: PassStub="LK#6Lh*gCmNDpj". If you want to generate one yourself, run msra.exe in Vista or run the Remote Assistance tool in WinXP.

    The documentation says this stub is the result of the function CryptEncrypt, with the key derived from the password and encrypted with the session id (those are also in the ticket file). The problem is that CryptEncrypt produces a binary output way larger than the 15-byte PassStub. Also, the PassStub isn't encoded in any way I've seen before.

    Some interesting things about the PassStub encoding: after doing statistical analysis, the 3rd char is always one of !#$&()+-=@^. The only symbols seen everywhere are * and _. Otherwise the valid characters are 0-9, a-z, A-Z. There are a total of 75 valid characters and the PassStub is always 15 bytes. Running msra.exe with the same password always generates a different PassStub, indicating that it is not a direct hash but includes the rasessionid as they say. Another idea I've had is that it is not the direct result of CryptEncrypt, but a result of the rasessionid in the MD5 hash. In MS-RA (http://msdn.microsoft.com/en-us/library/cc240013(PROT.10).aspx), the "PassStub Novice" is simply hex encoded and looks to be the right length. The problem is I have no idea how to go from any hash to the way the PassStub looks.


  • Best practice - When to evaluate conditionals of function execution

    - by Tesserex
    If I have a function called from a few places, and it requires some condition to be met for anything it does to execute, where should that condition be checked? In my case, it's for drawing - if the mouse button is held down, then execute the drawing logic (this is being done in the mouse movement handler for when you drag.)

    Option one says put it in the function so that it's guaranteed to be checked. Abstracted, if you will.

    ```
    public function Foo() {
        DoThing();
    }

    private function DoThing() {
        if (!condition) return;
        // do stuff
    }
    ```

    The problem I have with this is that when reading the code of Foo, which may be far away from DoThing, it looks like a bug. The first thought is that the condition isn't being checked. Option two, then, is to check before calling.

    ```
    public function Foo() {
        if (condition) DoThing();
    }
    ```

    This reads better, but now you have to worry about checking from everywhere you call it. Option three is to rename the function to be more descriptive.

    ```
    public function Foo() {
        DoThingOnlyIfCondition();
    }

    private function DoThingOnlyIfCondition() {
        if (!condition) return;
        // do stuff
    }
    ```

    Is this the "correct" solution? Or is this going a bit too far? I feel like if everything were like this, function names would start to duplicate their code.

    About this being subjective: of course it is, and there may not be a right answer, but I think it's still perfectly at home here. Getting advice from better programmers than I is the second best way to learn. Subjective questions are exactly the kind of thing Google can't answer.


  • Problem with looping a video in Flash

    - by Blaze
    I am trying to loop a video and I am having some issues with this in Flash. You can view the video here: http://www.healthcarepros.net/travel.html

    Here is the specific code for the flash video:

    ```html
    <script language="javascript">
        if (AC_FL_RunContent == 0) {
            alert("This page requires AC_RunActiveContent.js.");
        } else {
            AC_FL_RunContent(
                'codebase', 'http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=9,0,0,0',
                'width', '330',
                'height', '245',
                'src', 'healthcare-video',
                'quality', 'high',
                'pluginspage', 'http://www.macromedia.com/go/getflashplayer',
                'align', 'middle',
                'play', 'true',
                'loop', 'true',
                'scale', 'showall',
                'wmode', 'window',
                'devicefont', 'false',
                'id', 'healthcare-video',
                'bgcolor', '#ffffff',
                'name', 'healthcare-video',
                'menu', 'true',
                'allowFullScreen', 'false',
                'allowScriptAccess', 'sameDomain',
                'movie', 'healthcare-video',
                'salign', ''
            ); //end AC code
        }
    </script>
    <noscript>
        <object classid="clsid:d27cdb6e-ae6d-11cf-96b8-444553540000" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=9,0,0,0" width="330" height="245" id="healthcare-video" align="middle">
            <param name="allowScriptAccess" value="sameDomain" />
            <param name="allowFullScreen" value="false" />
            <param name="loop" value="true" />
            <param name="play" value="true" />
            <param name="movie" value="healthcare-video.swf" />
            <param name="quality" value="high" />
            <param name="bgcolor" value="#ffffff" />
            <embed src="healthcare-video.swf" play="true" flashvars="autoplay=true&play=true" quality="high" bgcolor="#ffffff" width="330" height="245" name="healthcare-video" align="middle" allowScriptAccess="sameDomain" allowFullScreen="false" type="application/x-shockwave-flash" pluginspage="http://www.macromedia.com/go/getflashplayer" />
        </object>
    </noscript>
    ```

    Additionally, I have added in the parameter code that calls the loop function, but for some reason it still doesn't seem to work. Any suggestions?
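    One thing worth checking, going purely by the markup above: the <embed> fallback (the path used by Firefox and other non-IE browsers) never receives a loop setting, so only the <object>/AC_FL_RunContent paths are told to loop. A hedged guess at the fix:

    ```html
    <!-- Add loop="true" to the embed tag so non-IE browsers get the same setting -->
    <embed src="healthcare-video.swf" play="true" loop="true"
           flashvars="autoplay=true&play=true&loop=true"
           quality="high" bgcolor="#ffffff" width="330" height="245"
           name="healthcare-video" align="middle"
           allowScriptAccess="sameDomain" allowFullScreen="false"
           type="application/x-shockwave-flash"
           pluginspage="http://www.macromedia.com/go/getflashplayer" />
    ```

    Note also that the HTML-level loop parameter only affects the SWF's main timeline; if the video is an FLV played through a player component inside the SWF, the looping usually has to be handled in ActionScript (replaying on the player's complete event) rather than in the embed parameters.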


  • How do I write test code to exercise a C# generic Pair<TKey, TValue> ?

    - by Scott Davies
    Hi,

    I am reading through Jon Skeet's "C# in Depth", first edition (which is a great book). I'm in section 3.3.3, page 84, "Implementing Generics". Generics always confuse me, so I wrote some code to exercise the sample. The code provided is:

    ```csharp
    using System;
    using System.Collections.Generic;

    public sealed class Pair<TFirst, TSecond> : IEquatable<Pair<TFirst, TSecond>>
    {
        private readonly TFirst first;
        private readonly TSecond second;

        public Pair(TFirst first, TSecond second)
        {
            this.first = first;
            this.second = second;
        }

        // ...property getters...

        public bool Equals(Pair<TFirst, TSecond> other)
        {
            if (other == null)
            {
                return false;
            }
            return EqualityComparer<TFirst>.Default.Equals(this.First, other.First) &&
                   EqualityComparer<TSecond>.Default.Equals(this.Second, other.Second);
        }
    }
    ```

    My code is:

    ```csharp
    class MyClass
    {
        public static void Main(string[] args)
        {
            // Create new pair.
            Pair thePair = new Pair(new String("1"), new String("1"));

            // Compare a new pair to previous pair by generating a second pair.
            if (thePair.Equals(new Pair(new string("1"), new string("1"))))
                System.Console.WriteLine("Equal");
            else
                System.Console.WriteLine("Not equal");
        }
    }
    ```

    The compiler complains: "Using the generic type 'ManningListing36.Pair' requires 2 type argument(s)" (CS0305). What am I doing wrong?

    Thanks,
    Scott
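    For what it's worth, a minimal sketch of what CS0305 is asking for: Pair is generic, so the two type arguments have to be supplied at the usage site, and C# string literals don't need new String(...). This assumes the Pair class and the usings from the listing above:

    ```csharp
    class MyClass
    {
        public static void Main(string[] args)
        {
            // Supply the two type arguments explicitly; "1" is already a string.
            var thePair = new Pair<string, string>("1", "1");

            // Equals compares both components via EqualityComparer<T>.Default.
            System.Console.WriteLine(thePair.Equals(new Pair<string, string>("1", "1"))
                ? "Equal"
                : "Not equal");
        }
    }
    ```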


  • SQL: script to create country, state tables

    - by pcampbell
    Consider writing an application that requires registration for an entity, where the schema has been defined to require the country and state/prov/county data to be normalized. This is fairly typical stuff here. Naming is also important to reflect: each country has a different name for this entity:

        USA = states
        Australia = states + territories
        Canada = provinces + territories
        Mexico = states
        Brazil = states
        Sweden = provinces
        UK = counties, principalities, and perhaps more!

    Most times when approaching this problem, I have to scratch together a list of good countries, and the states/provs/counties of each. The app may be concerned with a few countries and not others. The process is full of pain. It typically involves one of two approaches:

        1. opening up some previous DB and creating a CREATE script based on those tables, then running that script in the context of the new system, or
        2. creating a DTS package from database1 to database2, with all the DDL and data included in the transfer.

    My goal now is to script the creation and insert of the countries that I'd be concerned with in the app of the day. When I want to roll out countries X/Y/Z, I'll open CountryX.sql and load its states into the ProvState table.

    Question: do you have a set of scripts in your toolset to create schema and data for countries and state/province/county? If so, would you share your scripts here? (U.K. citizens, please feel free to correct me by way of a comment on the use of counties.)
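    In case it helps as a starting point, a sketch of the kind of per-country seed script this usually boils down to. The table and column names simply mirror the ProvState naming mentioned above, the codes are ISO 3166 where possible, and the DivisionLabel column is one way to capture the "states vs provinces vs counties" naming difference:

    ```sql
    CREATE TABLE Country (
        CountryID     INT          NOT NULL PRIMARY KEY,
        Code          CHAR(2)      NOT NULL,             -- ISO 3166-1 alpha-2
        Name          NVARCHAR(64) NOT NULL,
        DivisionLabel NVARCHAR(32) NOT NULL              -- 'State', 'Province', 'County', ...
    );

    CREATE TABLE ProvState (
        ProvStateID INT          NOT NULL PRIMARY KEY,
        CountryID   INT          NOT NULL REFERENCES Country (CountryID),
        Code        VARCHAR(10)  NOT NULL,
        Name        NVARCHAR(64) NOT NULL
    );

    -- One file per country you roll out, e.g. CountryCA.sql:
    INSERT INTO Country (CountryID, Code, Name, DivisionLabel) VALUES (2, 'CA', 'Canada', 'Province');
    INSERT INTO ProvState (ProvStateID, CountryID, Code, Name) VALUES (1, 2, 'ON', 'Ontario');
    INSERT INTO ProvState (ProvStateID, CountryID, Code, Name) VALUES (2, 2, 'QC', 'Quebec');
    ```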


  • Using the Microsoft Ajax Minifier with Web Setup project & Source Control

    - by Rob
    I've just started investigating the Microsoft Ajax Minifier 4.0 for use with a Visual Studio 2008 Web Application I work on. It's proven easy enough to hook it into the .csproj file so it produces .min.js files for all scripts; however, I'm stumped as to how to integrate this with the Web Setup project and source control. Essentially what I want to do is have the resultant .min.js files included in the Web Setup project without having them included in source control, because:

        1. Having to check them out prior to the build being executed is a pain (the minifier cannot modify them if they're not checked out).
        2. As they're created as a "build artifact", it just seems wrong to have them stored under source control.

    The only option I've come across so far is to explicitly include the .min.js files as part of the Setup project by right-clicking on the Web Setup project and choosing "Add File", and then having the relevant folder hierarchy duplicated in "File System on Target Machine" so that I can force each file to the correct location. This is neither elegant nor simple/robust, as it requires me to:

        1. manually add every minified js file to the Web Setup project by hand,
        2. maintain a copy of the relevant directory structure in both the Web Application project and the Web Setup project, and
        3. remember to add any new js files' minified versions to the Web Setup project.

    Is there a better way of doing this?


  • Does git clone work through NTLM proxies?

    - by AndreaG
    I've tried both using export http_proxy=http://[username]:[pwd]@[proxy] and git config --global http.proxy http://[username]:[pwd]@[proxy]. I couldn't make it work. It looks like git uses Basic authentication:

    ```
    Initialized empty Git repository in /home/.../.git/
    * Couldn't find host github.com in the .netrc file, using defaults
    * About to connect() to github.com port 8080 (#0)
    *   Trying 10....
    * Connected to github.com (10....) port 8080 (#0)
    * Proxy auth using Basic with user '...'
    > GET http://github.com/sunlightlabs/fiftystates.git/info/refs HTTP/1.1
    Proxy-Authorization: Basic MD...
    User-Agent: git/1.6.1.2
    Host: github.com
    Pragma: no-cache
    Accept: */*
    Proxy-Connection: Keep-Alive

    < HTTP/1.1 407 Proxy Authentication Required ( The ISA Server requires authorization to fulfill the request. Access to the Web Proxy filter is denied. )
    < Via: 1.1 ...
    < Proxy-Authenticate: Negotiate
    < Proxy-Authenticate: Kerberos
    < Proxy-Authenticate: NTLM
    < Connection: Keep-Alive
    < Proxy-Connection: Keep-Alive
    < Pragma: no-cache
    < Cache-Control: no-cache
    < Content-Type: text/html
    < Content-Length: 4118
    * The requested URL returned error: 407
    * Closing connection #0
    fatal: http://github.com/sunlightlabs/fiftystates.git/info/refs download error - The requested URL returned error: 407
    ```

    A Google search returned mixed and probably outdated results. Somewhere it says that curl is (was?) used under the hood, but its options are (were?) hardwired into the code. For example, curl --proxy-ntlm --proxy ...:8080 google.com works, and I'd like to use the same option with git. I need some more definite answers here: has anybody succeeded in using git through Windows proxies? Which version? Thanks.
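    Since the trace shows git offering only Basic while the ISA server insists on Negotiate/Kerberos/NTLM, one workaround that is commonly used is to run a small local proxy that speaks NTLM upstream (cntlm or ntlmaps) and point git at it. A sketch with placeholder values:

    ```sh
    # /etc/cntlm.conf (all values are placeholders)
    #   Username   your_user
    #   Domain     YOURDOMAIN
    #   Password   your_password      # or store hashes generated with 'cntlm -H'
    #   Proxy      proxy.example.com:8080
    #   Listen     3128

    # Point git at the local NTLM-translating listener, then clone as usual:
    git config --global http.proxy http://localhost:3128
    git clone http://github.com/sunlightlabs/fiftystates.git
    ```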


  • database design suggestion needed

    - by JMSA
    I need to design a table for daily sales of pharmaceutical products. There are hundreds of types of products available {Name, Code}. Thousands of sales-persons are employed to sell those products {Name, Code}. They collect products from different depots {Name, Code}. They work in different Areas - Zones - Markets - Outlets, etc. {all have names and codes}. Each product has various types of prices {Production Price, Trade Price, Business Price, Discount Price, etc.}, and sales-persons are free to choose from those combinations to estimate the sales price.

    The problem is that daily sales requires a huge amount of data entry. Within a couple of years there may be gigabytes of data (if not terabytes). If I need to show daily, weekly, monthly, quarterly and yearly sales reports, there will be various types of SQL queries I shall need. This is my initial design:

        Product {ID, Code, Name, IsActive}
        ProductXYZPriceHistory {ID, ProductID, Date, EffectDate, Price, IsCurrent}
        SalesPerson {ID, Code, Name, JoinDate, and so on..., IsActive}
        SalesPersonSalesAreaHistory {ID, SalesPersonID, SalesAreaID, IsCurrent}
        Depot {ID, Code, Name, IsActive}
        Outlet {ID, Code, Name, AreaID, IsActive}
        AreaHierarchy {ID, Code, Name, ParentID, AreaLevel, IsActive}
        DailySales {ID, ProductID, SalesPersonID, OutletID, Date, PriceID, SalesPrice, Discount, etc...}

    Now, apart from indexing, how can I normalize my DailySales table to have a fine-grained design that I shall not need to change for years to come? Please show me a sample design of only the DailySales data-entry table (from which all types of reports would be queried) on the basis of the above information. I don't need detailed design advice; I just need advice regarding the DailySales table. Is there any way to break this particular table to achieve granularity?
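    As a concrete starting point, a sketch of what the daily-sales fact table often ends up looking like. The names follow the draft above; the Quantity column and the index definition are assumptions on my part, added because date-range reporting usually hinges on them:

    ```sql
    CREATE TABLE DailySales (
        ID            BIGINT        IDENTITY(1,1) NOT NULL PRIMARY KEY,
        SalesDate     DATE          NOT NULL,
        ProductID     INT           NOT NULL REFERENCES Product (ID),
        SalesPersonID INT           NOT NULL REFERENCES SalesPerson (ID),
        OutletID      INT           NOT NULL REFERENCES Outlet (ID),
        PriceID       INT           NOT NULL REFERENCES ProductXYZPriceHistory (ID),
        Quantity      INT           NOT NULL,
        SalesPrice    DECIMAL(12,2) NOT NULL,
        Discount      DECIMAL(12,2) NOT NULL DEFAULT 0
    );

    -- Daily/weekly/monthly reports group by date ranges first, so lead the index with the date.
    CREATE INDEX IX_DailySales_Date ON DailySales (SalesDate, ProductID, SalesPersonID, OutletID);
    ```

    Keeping the fact table this narrow (keys plus measures only) and deriving weekly/monthly/quarterly figures by grouping on SalesDate - possibly into summary tables refreshed overnight - tends to age better than splitting the table itself.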


  • How do I simulate the usage of a sequence of web pages?

    - by Rory Becker
    I have a simple sequence of web pages written in ASP.NET 3.5 SP1:

        Page1 - a logon form: txtUsername, txtPassword and cmdLogon
        Page2 - a menu (created using DevExpress ASP.NET controls)
        Page3 - the page redirected to by the server in the event that the user picks the right menu option in Page2

    I would like to create a threaded program to simulate many users trying to use this sequence of pages. I have managed to create a host WinForms app which launches a new thread for each "user". I have further managed to work out the basics of WebRequest enough to perform a request which retrieves the logon page itself:

    ```vb
    Dim Request As HttpWebRequest = TryCast(WebRequest.Create("http://MyURL/Logon.aspx"), HttpWebRequest)
    Dim Response As HttpWebResponse = TryCast(Request.GetResponse(), HttpWebResponse)
    Dim ResponseStream As StreamReader = New StreamReader(Response.GetResponseStream(), Encoding.GetEncoding(1252))
    Dim HTMLResponse As String = ResponseStream.ReadToEnd()
    Response.Close()
    ResponseStream.Close()
    Next
    ```

    I need to simulate the user having entered information into the two TextBoxes and pressing logon. I have a hunch this requires me to add the right sort of "PostData" to the request before submitting. However, I'm also concerned that "ViewState" may be an issue. Am I correct regarding the PostData? How do I add the PostData to the request? Do I need to be concerned about ViewState?

    Update: While I appreciate that Selenium or similar products are useful for acceptance testing, I find that they are rather clumsy for what amounts to load testing. I would prefer not to load 100 instances of Firefox or IE in order to simulate 100 users hitting my site. This was the reason I was hoping to take the ASP.NET HttpWebRequest route.
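    In case it helps, a sketch of the POST step in the same style as the snippet above. The control names come from the Page1 description, but the rendered UniqueIDs may carry container prefixes (e.g. ctl00$...), so check the page's HTML; __EVENTVALIDATION only exists if event validation is enabled. This assumes Imports for System.Net, System.IO, System.Text, System.Text.RegularExpressions and System.Web:

    ```vb
    ' Scrape the hidden ASP.NET fields out of the GET response, then POST them
    ' back together with the textbox values and the button name.
    Dim viewState As String = HttpUtility.UrlEncode( _
        Regex.Match(HTMLResponse, "id=""__VIEWSTATE"" value=""([^""]*)""").Groups(1).Value)
    Dim eventValidation As String = HttpUtility.UrlEncode( _
        Regex.Match(HTMLResponse, "id=""__EVENTVALIDATION"" value=""([^""]*)""").Groups(1).Value)

    Dim postData As String = _
        "__VIEWSTATE=" & viewState & _
        "&__EVENTVALIDATION=" & eventValidation & _
        "&txtUsername=" & HttpUtility.UrlEncode("user1") & _
        "&txtPassword=" & HttpUtility.UrlEncode("secret") & _
        "&cmdLogon=" & HttpUtility.UrlEncode("Log On")

    Dim postRequest As HttpWebRequest = CType(WebRequest.Create("http://MyURL/Logon.aspx"), HttpWebRequest)
    postRequest.Method = "POST"
    postRequest.ContentType = "application/x-www-form-urlencoded"
    postRequest.CookieContainer = New CookieContainer() ' keeps the session cookie for Page2 and Page3

    Dim bytes As Byte() = Encoding.ASCII.GetBytes(postData)
    postRequest.ContentLength = bytes.Length
    Using requestStream As Stream = postRequest.GetRequestStream()
        requestStream.Write(bytes, 0, bytes.Length)
    End Using

    Dim postResponse As HttpWebResponse = CType(postRequest.GetResponse(), HttpWebResponse)
    ```

    For the flow to hold together per simulated user, create one CookieContainer before the initial GET and reuse it on every subsequent request from that "user" thread, so the ASP.NET session cookie follows them from the logon page to the menu and beyond.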


  • How to not specify ruleset name when reading from config file?

    - by user102533
    When I read rules from a configuration file, I do something like this:

    ```csharp
    IConfigurationSource configurationSource = new FileConfigurationSource("myvalidation.config");
    var validator = ValidationFactory.CreateValidator<Salary>(configurationSource);
    ```

    The config file looks like this:

    ```xml
    <ruleset name="Default">
      <properties>
        <property name="Address">
          <validator lowerBound="10" lowerBoundType="Inclusive" upperBound="15" upperBoundType="Inclusive"
                     negated="false" messageTemplate="" messageTemplateResourceName=""
                     messageTemplateResourceType="" tag=""
                     type="Microsoft.Practices.EnterpriseLibrary.Validation.Validators.StringLengthValidator, Microsoft.Practices.EnterpriseLibrary.Validation, Version=4.1.0.0, Culture=neutral, PublicKeyToken=null"
                     name="String Length Validator" />
        </property>
      </properties>
    </ruleset>
    ```

    My question is: is there a way to not specify the ruleset name? It's not required for me to specify a ruleset name if I am using the attribute approach, and I can validate using:

    ```csharp
    ValidationResults results = Validation.Validate(salary);
    ```

    Now, when reading from config files, I have to specify the ruleset name. There is no overload of the CreateValidator method that accepts the configuration source without specifying the ruleset name. Also, the XML in the config file requires a name attribute for the ruleset.


  • php user authentication libraries / frameworks ... what are the options?

    - by es11
    I am using PHP and the CodeIgniter framework for a project I am working on, and require a user login/authentication system. For now I'd rather not use SSL (it might be overkill, and the fact that I am using shared hosting discourages this). I have considered using OpenID but decided that since my target audience is generally not technical, it might scare users away (not to mention that it requires mirroring of login information, etc.). I know that I could write hash-based authentication (such as SHA-1) since there is no sensitive data being passed (I'd compare the level of sensitivity to that of stackoverflow). That being said, before making a custom solution, it would be nice to know if there are any good libraries or packages out there that you have used to provide semi-secure authentication. I am new to CodeIgniter, but something that integrates well with it would be preferable. Any ideas? (I'm open to criticism on my approach and open to suggestions as to why I might be crazy not to just use SSL.) Thanks in advance.

    Update: I've looked into some of the suggestions. I am curious to try out zend-auth since it seems well supported and well built. Does anyone have experience with using zend-auth in CodeIgniter (is it too bulky?), and do you have a good reference on integrating it with CI? I do not need any complex authentication schemes - just a simple login/logout/password-management authorization system. Also, dx_auth seems interesting, but I am worried that it is too buggy. Has anybody else had success with this? I realized that I would also like to manage guest users (i.e. users that do not log in/register) in a similar way to stackoverflow, so any suggestions that have this functionality would be great.


  • Uninitialized array offset

    - by kimmothy16
    Hey everyone,

    I am using PHP to create a form with an array of fields. Basically you can add an unlimited number of 'people' to the form, and each person has a first name, last name, and phone number. The form requires that you add a phone number for the first person only. If you leave the phone number field blank on any of the others, the handler file is supposed to use the phone number from the first person.

    So, my fields are:

        person[] - a hidden field with a value that is this person's primary key
        fname[]  - an input field
        lname[]  - an input field
        phone[]  - an input field

    My form handler looks like this:

    ```php
    $people = $_POST['person'];
    $counter = 0;
    foreach ($people as $person):
        if ($phone[$counter] == '') {
            // use $phone[0]'s phone number
        } else {
            // use $phone[$counter] number
        }
        $counter = $counter + 1;
    endforeach;
    ```

    PHP doesn't like this, though; it is throwing me a "Notice: Uninitialized string offset" error. I debugged it by running the is_array function on people, fname, lname, and phone, and it returns true. I can also manually echo out $phone[2], etc. and get the correct value. I've also run is_int on the $counter variable and it returned true, so I'm unsure why this isn't working as intended. Any help would be great!
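    From the handler as shown, $phone is never populated from the request inside that scope, so $phone[$counter] ends up indexing into an uninitialized (or string) value, which is exactly what that notice means. A sketch of the usual fix, assuming the field names listed above:

    ```php
    <?php
    // Pull each parallel array out of the POST data explicitly.
    $people = $_POST['person'];
    $fnames = $_POST['fname'];
    $lnames = $_POST['lname'];
    $phones = $_POST['phone'];

    foreach ($people as $i => $personId) {
        // Fall back to the first person's phone number when this one was left blank.
        $phone = ($phones[$i] !== '') ? $phones[$i] : $phones[0];

        // ... use $personId, $fnames[$i], $lnames[$i], $phone ...
    }
    ```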


  • Splitting EJBs and interfaces into separate module -- deployment fails

    - by Hank
    I'm having trouble following this guide to "extract" my interfaces and entities from my EAR to use them from another web application. I use NetBeans 6.8 and GlassFish 3.0.1.

        "Java Class Library" project - contains all the entities and interfaces
        "Java EE Application" project - class library added to the project, is packaged into the EAR; contains EJB implementations, MDBs, Test
        "Java Web Application" project - class library added to the project, is packaged into the WAR; contains the REST interface

    When I build and deploy the Web Application, all goes well. When I build the JEE application, I can see the jar file (interfaces, entities) being included. But when I try to deploy the EAR, Glassfish refuses it with a java.lang.NoClassDefFoundError:

    ```
    [#|2010-03-28T18:25:59.875+0200|WARNING|glassfishv3.0|javax.enterprise.system.tools.deployment.org.glassfish.deployment.common|_ThreadID=28;_ThreadName=Thread-1;|Error in annotation processing: java.lang.NoClassDefFoundError: mvs/core/StoreServiceLocal|#]

    [#|2010-03-28T18:25:59.876+0200|SEVERE|glassfishv3.0|javax.enterprise.system.core.com.sun.enterprise.v3.server|_ThreadID=28;_ThreadName=Thread-1;|Exception while deploying the app
    java.lang.IllegalArgumentException: Invalid ejb jar [CoreServer]: it contains zero ejb.
    Note:
    1. A valid ejb jar requires at least one session, entity (1.x/2.x style), or message-driven bean.
    2. EJB3+ entity beans (@Entity) are POJOs and please package them as library jar.
    3. If the jar file contains valid EJBs which are annotated with EJB component level annotations (@Stateless, @Stateful, @MessageDriven, @Singleton), please check server.log to see whether the annotations were processed properly.
    ```

    'mvs/core/StoreServiceLocal' is an interface which is defined in the library jar file. What am I doing wrong?


  • linux new/delete, malloc/free large memory blocks

    - by brian_mk
    Hi folks,

    We have a Linux system (Kubuntu 7.10) that runs a number of CORBA server processes. The server software uses glibc libraries for memory allocation. The Linux PC has 4G of physical memory. Swap is disabled for speed reasons.

    Upon receiving a request to process data, one of the server processes allocates a large data buffer (using the standard C++ operator 'new'). The buffer size varies depending upon a number of parameters but is typically around 1.2G bytes. It can be up to about 1.9G bytes. When the request has completed, the buffer is released using 'delete'.

    This works fine for several consecutive requests that allocate buffers of the same size, or if the request allocates a smaller size than the previous one. The memory appears to be freed OK - otherwise buffer allocation attempts would eventually fail after just a couple of requests. In any case, we can see the buffer memory being allocated and freed for each request using tools such as KSysGuard, etc.

    The problem arises when a request requires a buffer larger than the previous one. In this case, operator 'new' throws an exception. It's as if the memory that has been freed from the first allocation cannot be re-allocated, even though there is sufficient free physical memory available. If I kill and restart the server process after the first operation, then the second request for a larger buffer size succeeds, i.e. killing the process appears to fully release the freed memory back to the system.

    Can anyone offer an explanation as to what might be going on here? Could it be some kind of fragmentation or mapping table size issue? I am thinking of replacing new/delete with malloc/free and using mallopt to tune the way the memory is being released to the system.

    BTW - I'm not sure if it's relevant to our problem, but the server uses Pthreads that get created and destroyed on each processing request.

    Cheers, Brian.
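    If it does come to the malloc/mallopt route, a sketch of the glibc knobs involved (a sketch only - whether they help depends on whether the failure really is heap/address-space fragmentation rather than simple address-space exhaustion):

    ```c
    #include <malloc.h>
    #include <stdlib.h>

    int main(void)
    {
        /* Serve allocations above 1 MiB via mmap() instead of the sbrk() heap,
           so each giant buffer gets its own mapping that is returned to the
           kernel immediately on free(). */
        mallopt(M_MMAP_THRESHOLD, 1024 * 1024);

        /* Trim the top of the heap back to the kernel once more than 1 MiB
           of contiguous space is free there. */
        mallopt(M_TRIM_THRESHOLD, 1024 * 1024);

        void *buf = malloc((size_t)1200 * 1024 * 1024);  /* ~1.2 GB request */
        /* ... process the request ... */
        free(buf);
        return 0;
    }
    ```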


  • Ruby and duck typing: design by contract impossible?

    - by davetron5000
    Method signature in Java:

    ```java
    public List<String> getFilesIn(List<File> directories)
    ```

    Similar one in Ruby:

    ```ruby
    def get_files_in(directories)
    ```

    In the case of Java, the type system gives me information about what the method expects and delivers. In Ruby's case, I have no clue what I'm supposed to pass in, or what I'll expect to receive. In Java, the object must formally implement the interface. In Ruby, the object being passed in must respond to whatever methods are called in the method defined here. This seems highly problematic:

        1. Even with 100% accurate, up-to-date documentation, the Ruby code has to essentially expose its implementation, breaking encapsulation. "OO purity" aside, this would seem to be a maintenance nightmare.
        2. The Ruby code gives me no clue what's being returned; I would have to essentially experiment, or read the code to find out what methods the returned object would respond to.

    Not looking to debate static typing vs duck typing, but looking to understand how you maintain a production system where you have almost no ability to design by contract.

    Update: No one has really addressed the exposure of a method's internal implementation via documentation that this approach requires. Since there are no interfaces, if I'm not expecting a particular type, don't I have to itemize every method I might call so that the caller knows what can be passed in? Or is this just an edge case that doesn't really come up?


  • SQL Server 2000 intermittent connection exceptions on production server - specific environment probl

    - by StickyMcGinty
    We've been having intermittent problems causing users to be forcibly logged out of our application.

    Our set-up is an ASP.NET/C# web application on Windows Server 2003 Standard Edition with SQL Server 2000 on the back end. We've recently performed a major product upgrade on our client's VMware server (we have a guest instance dedicated to us), and whereas we had none of these issues with the previous release, the added complexity that the new upgrade brings to the product has caused a lot of issues. We are also running SQL Server 2000 (build 8.00.2039, or SP4) and the IIS/ASP.NET (.NET v2.0.50727) application on the same box, connecting to each other via a TCP/IP connection.

    Primarily, the exceptions being thrown are:

        System.IndexOutOfRangeException: Cannot find table 0.
        System.ArgumentException: Column 'password' does not belong to table Table. [This exception occurs in the log-in script, even though there is clearly a password column available]
        System.InvalidOperationException: There is already an open DataReader associated with this Command which must be closed first. [This one is occurring very regularly]
        System.InvalidOperationException: This SqlTransaction has completed; it is no longer usable.
        System.ApplicationException: ExecuteReader requires an open and available Connection. The connection's current state is connecting.
        System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.

    And just today, for the first time: System.Web.UI.ViewStateException: Invalid viewstate.

    We have load tested the app using the same number of concurrent users as the production server and cannot reproduce these errors. They are very intermittent and occur even when there are only 8/9/10 user connections. My gut is telling me it's ASP.NET - SQL Server 2000 connection issues. We've pretty much ruled out code-level Data Access Layer errors at this stage (we have a development team of 15 experienced developers working on this), so we think it's a specific production server environment issue.


  • Really simple JSON serialization in .NET

    - by Evgeny
    I have some simple .NET objects I'd like to serialize to JSON and back again. The set of objects to be serialized is quite small and I control the implementation, so I don't need a generic solution that will work for everything. Since my assembly will be distributed as a library I'd really like to avoid a dependency on some third-party DLL: I just want to give users one assembly that they can reference. I've read the other questions I could find on converting to and from JSON in .NET. The recommended solution of JSON.NET does work, of course, but it requires distributing an extra DLL. I don't need any of the fancy features of JSON.NET. I just need to handle a simple object (or even dictionary) that contains strings, integers, DateTimes and arrays of strings and bytes. On deserializing I'm happy to get back a dictionary - it doesn't need to create the object again. Is there some really simple code out there that I could compile into my assembly to do this simple job? I've also tried System.Web.Script.Serialization.JavaScriptSerializer, but where it falls down is the byte array: I want to base64-encode it and even registering a converter doesn't let me easily accomplish that due to the way that API works (it doesn't pass in the name of the field).
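    One way to stay inside the framework (a sketch, assuming a reference to System.Web.Extensions and its System.Web.Script.Serialization namespace is acceptable): pre-convert the byte[] members to base64 strings in a dictionary, let JavaScriptSerializer handle the rest, and reverse the two steps when reading back. The property names below are illustrative only:

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Web.Script.Serialization;   // System.Web.Extensions.dll

    static class SimpleJson
    {
        // Hand-pack the object into a dictionary so the byte[] goes through as
        // base64 rather than a JSON array of numbers.
        public static string ToJson(string name, DateTime stamp, byte[] payload)
        {
            var map = new Dictionary<string, object>
            {
                { "name", name },
                { "stamp", stamp.ToString("o") },               // ISO 8601 round-trip format
                { "payload", Convert.ToBase64String(payload) }, // base64 instead of [1,2,3,...]
            };
            return new JavaScriptSerializer().Serialize(map);
        }

        // Deserializing back to a dictionary is all that's needed per the question.
        public static Dictionary<string, object> FromJson(string json)
        {
            return (Dictionary<string, object>)new JavaScriptSerializer().DeserializeObject(json);
        }
    }
    ```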


  • Generate an ID via COM interop

    - by Erik van Brakel
    At the moment, we've got an unmaintainable ball of code which offers an interface to a third-party application. The third-party application has a COM assembly which MUST be used to create new entries. This process involves two steps: generate a new object (basically an ID), and update that object with new field values. Because COM interop is so slow, we only use it to generate the ID (and related objects) in the database. The actual update is done using a regular SQL query.

    What I am trying to figure out is whether it's possible to use NHibernate to do some of the heavy lifting for us, without bypassing the COM assembly. Here's the code for saving something to the database as I envision it:

    ```csharp
    using (var s = sessionFactory.OpenSession())
    using (var t = s.BeginTransaction())
    {
        MyEntity entity = new MyEntity();
        s.Save(entity);
        t.Commit();
    }
    ```

    Regular NH code, I'd say. Now, this is where it gets tricky. I think I have to supply my own implementation of NHibernate.Id.IIdentifierGenerator which calls the COM assembly in the Generate method. That's not a problem. What IS a problem is that the COM assembly requires initialisation, which does take a bit of time. It also doesn't like multiple instances in the same process, for some reason. What I would like to know is if there's a way to properly access an external service in the generator code. I'm free to use any technique I want, so if it involves something like an IoC container, that's no problem. The thing I am looking for is where exactly to hook up my code so I can access the things I need in my generator, without having to resort to using singletons or other nasty stuff.
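    For orientation, a sketch of the generator shape being discussed. NHibernate constructs generators itself via a parameterless constructor, so the COM-backed service has to reach the generator through some hook the application sets at bootstrap; the IComIdService interface, its CreateNewId method and the static factory property below are all hypothetical placeholders, not NHibernate API:

    ```csharp
    using System;
    using NHibernate.Engine;
    using NHibernate.Id;

    // Hypothetical wrapper around the third-party COM assembly, initialised once per process.
    public interface IComIdService
    {
        object CreateNewId(string entityName);
    }

    public class ComBackedIdGenerator : IIdentifierGenerator
    {
        // Set once at startup, e.g. ComBackedIdGenerator.ServiceFactory = () => container.Resolve<IComIdService>();
        public static Func<IComIdService> ServiceFactory { get; set; }

        public object Generate(ISessionImplementor session, object obj)
        {
            // Delegate the expensive COM round-trip to the shared, already-initialised service.
            return ServiceFactory().CreateNewId(obj.GetType().Name);
        }
    }
    ```

    The generator would then be wired up in the mapping with something like `<generator class="MyApp.ComBackedIdGenerator, MyApp" />` (names illustrative); the static factory is the narrow seam that keeps the COM object a single, pre-initialised instance without the generator itself becoming a singleton.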


  • MVC: model of type Nullable<T>

    - by Fyodor Soikin
    I have a partial view that inherits from ViewUserControl<Guid?> - i.e. its model is of type Nullable<Guid>. Very simple view, nothing special, but that's not the point. Somewhere else, I do Html.RenderPartial("MyView", someGuid), where someGuid is of type Nullable<Guid>. Everything's perfectly legal and should work OK, right?

    But here's the gotcha: the second argument of Html.RenderPartial is of type object, and therefore, Nullable<Guid> being a value type, it must be boxed. But nullable types are special-cased in the CLR, so that when you box one of them, you actually get either a boxed value of type T (Nullable's argument), or null if the nullable didn't have a value to begin with. And that last case is actually interesting. Turns out, sometimes I do have a situation where someGuid.HasValue == false. And in those cases, I effectively get a call Html.RenderPartial("MyView", null).

    And what does the HtmlHelper do when the model is null? Believe it or not, it just goes ahead and takes the parent view's model, regardless of its type. So, naturally, in those cases I get an exception saying: "The model item passed into the dictionary is of type 'Parent.View.Model.Type', but this dictionary requires a model item of type 'System.Guid?'"

    So the question is: how do I make MVC correctly pass an empty Nullable<Guid> (one with HasValue == false) instead of trying to grab the parent's model?

    Note: I did consider wrapping my Guid? in an object of another type, specifically created for this occasion, but this seems completely ridiculous. I don't want to do that as long as there's another way.

    Note 2: now that I've written all this, I've realized that the question may be reduced to: how do I pass null for the model without ending up with the parent's model?
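    One workaround that is often suggested for exactly this situation (a sketch only, not verified against every MVC version): hand RenderPartial an explicit ViewDataDictionary<Guid?> so the partial gets a typed model container even when the value inside is null, rather than letting the boxed null trigger the fall-back to the parent's model:

    ```csharp
    // In the parent view: the typed dictionary's Model is "a Guid? that happens to
    // be null", so the strongly typed ViewUserControl<Guid?> no longer inherits the
    // parent's (wrongly typed) model.
    Html.RenderPartial("MyView", new ViewDataDictionary<Guid?>(someGuid));
    ```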


  • Windows Workflow Foundation: Recommendations how to design architecture

    - by Petr Felzmann
    We are running several instances of the same ASP.NET application (one per customer) based on our custom framework (libraries). Each application uses its own database (the Initial Catalog in terms of the connection string). Now we would like to add workflow capability (of course 4.0 ;) to the applications. The particular workflows will be the same for all the applications; only some initial settings of each workflow can vary, e.g. in one application the e-mail will be sent to user X, but in another application to user Y.

    I have several general questions about how to design the architecture:

        (1) Can the workflow database be shared by all the applications?
        (2) Where should the workflow engine be hosted - inside our custom Windows NT service or inside IIS? What are the criteria for choosing the right host?
        (3) How should the workflow engine communicate with the applications? Should each application call some WCF endpoint API configured in the workflow host, or vice versa - should each application provide a WCF endpoint API that the workflow engine calls? How would the workflow engine then identify applications? Both cases probably require some application identifier as a parameter in API calls?
        (4) We would also like to store some information in the application databases based on the workflow states. Is that possible?

    Thanks for suggestions!


  • How to lazy load scripts in YUI that accompany ajax html fragments

    - by Chris Beck
    I have a web app with Tabs for Messages and Contacts (think gmail). Each Tab has a Y.on('click') event listener that retrieves an HTML fragment via Y.io() and renders the fragment via node.setContent(). However, the Contact Tab also requires contact.js to enable an Edit button in the fragment. How do I defer the cost of loading contact.js until the user clicks on the Contacts tab? How should contact.js add it's listener to the Edit button? The Complete function of my Tab's on('click') event could serialize Get.script('contact.js') after Y.io('fragment'). However, for better performance, I would prefer to download the script in parallel to downloading the HTML fragment. In that case, I would need to defer adding an event listener to the Edit button until the Edit button is available. This seems like a common RIA design pattern. What is the best way to implement this with YUI? How should I get the script? How should I defer sections of the script until elements in the fragment are available in the DOM?
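    One common shape for this in YUI 3 (a sketch - element IDs, URLs and the startContactEdit handler below are placeholders, not names from the app): fire the Y.io() call for the fragment and the Y.Get.script() call for contact.js in parallel, and only wire up the Edit button once both have finished, so the listener is attached after the fragment is actually in the DOM:

    ```js
    YUI().use('node', 'io-base', 'get', function (Y) {
        var fragmentHtml = null,
            scriptLoaded = false;

        // Attach the Edit handler only once the fragment is rendered AND contact.js has loaded.
        function maybeWireUp() {
            if (fragmentHtml === null || !scriptLoaded) { return; }
            Y.one('#contacts-pane').setContent(fragmentHtml);
            Y.one('#edit-contact').on('click', window.startContactEdit); // assumed to be defined by contact.js
        }

        Y.one('#contacts-tab').on('click', function () {
            // Kick off both downloads in parallel.
            Y.io('/contacts/fragment', {
                on: { success: function (id, res) { fragmentHtml = res.responseText; maybeWireUp(); } }
            });
            Y.Get.script('/js/contact.js', {
                onSuccess: function () { scriptLoaded = true; maybeWireUp(); }
            });
        });
    });
    ```

    A refinement worth considering is caching the scriptLoaded flag across clicks so contact.js is only fetched on the first visit to the tab, which keeps the deferred-loading benefit without re-downloading the script every time.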


  • Using Flash comboboxes with Jaws

    - by Zoe Gagnon
    I'm working on a project for a government agency which requires 508 compliance. Our product is written for Flash 10 in ActionScript 3 using Flash CS4. We are doing this 100% programatically. We have almost all of the elements working properly, but when accessing combobox components, we have a problem. The combobox can be tabbed to directly with no problem, and the drop-down can be navigated directly with the arrow keys. However, when navigating, it reads the last item in the dropdown, not the current. For example, consider a combobox with the list of selections: first, second. Jaws reads the prompt fine, but when we press the down arrow to select the first item, it reads nothing. Pressing the down arrow again (so "second" is selected) causes it to read "first". Pressing down a final time causes it to read "second". I am completely baffled by this, and it is probably as likely that we don't know how to use Jaws, or that Flash simply can't support this function properly. If you have any suggestions for how we can resolve this, I would really appreciate it.

