Search Results

Search found 1682 results on 68 pages for 'tron legacy'.

Page 62/68 | < Previous Page | 58 59 60 61 62 63 64 65 66 67 68  | Next Page >

  • Ruby on Rails has_one Model Not Supplying ID Column

    - by Metric Scantlings
    I have a legacy Rails (version 1.2.3) app which runs without issue on a number of servers (not to mention my local environment). Deployed to its newest server, though, I now get ActiveRecord::StatementInvalid: Mysql::Error: #23000 Column 'video_id' cannot be null errors. Below are the models/relationships, simplified:

        class Video < ActiveRecord::Base
          has_one(:user, :dependent => :destroy)
        end

        class User < ActiveRecord::Base
          belongs_to(:video)
        end

    And below is a Rails console transcript of the relationships failing:

        >> video = Video.create(:title => 'New Video')
        => #<Video:0xb6d5e31c>...
        >> video.id
        => 5
        >> video.user = User.create(:name => 'Tester')
        ActiveRecord::StatementInvalid: Mysql::Error: #23000 Column 'video_id' cannot be null: INSERT INTO users (`name`, `video_id`) VALUES('Tester', NULL)
                from /usr/lib/ruby/gems/1.8/gems/activerecord-1.15.3/lib/active_record/connection_adapters/abstract_adapter.rb:128:in `log'
                from /usr/lib/ruby/gems/1.8/gems/activerecord-1.15.3/lib/active_record/connection_adapters/mysql_adapter.rb:243:in `execute'
                from /usr/lib/ruby/gems/1.8/gems/activerecord-1.15.3/lib/active_record/connection_adapters/mysql_adapter.rb:253:in `insert'
                from /usr/lib/ruby/gems/1.8/gems/activerecord-1.15.3/lib/active_record/base.rb:1811:in `create_without_callbacks'
                from /usr/lib/ruby/gems/1.8/gems/activerecord-1.15.3/lib/active_record/callbacks.rb:254:in `create_without_timestamps'
                from /usr/lib/ruby/gems/1.8/gems/activerecord-1.15.3/lib/active_record/timestamp.rb:39:in `create'
                from /usr/lib/ruby/gems/1.8/gems/activerecord-1.15.3/lib/active_record/base.rb:1789:in `create_or_update_without_callbacks'
                from /usr/lib/ruby/gems/1.8/gems/activerecord-1.15.3/lib/active_record/callbacks.rb:242:in `create_or_update'
                from /usr/lib/ruby/gems/1.8/gems/activerecord-1.15.3/lib/active_record/base.rb:1545:in `save_without_validation'
                from /usr/lib/ruby/gems/1.8/gems/activerecord-1.15.3/lib/active_record/validations.rb:752:in `save_without_transactions'
                from /usr/lib/ruby/gems/1.8/gems/activerecord-1.15.3/lib/active_record/transactions.rb:129:in `save'
                from /usr/lib/ruby/gems/1.8/gems/activerecord-1.15.3/lib/active_record/connection_adapters/abstract/database_statements.rb:59:in `transaction'
                from /usr/lib/ruby/gems/1.8/gems/activerecord-1.15.3/lib/active_record/transactions.rb:95:in `transaction'
                from /usr/lib/ruby/gems/1.8/gems/activerecord-1.15.3/lib/active_record/transactions.rb:121:in `transaction'
                from /usr/lib/ruby/gems/1.8/gems/activerecord-1.15.3/lib/active_record/transactions.rb:129:in `save'
                from /usr/lib/ruby/gems/1.8/gems/activerecord-1.15.3/lib/active_record/base.rb:451:in `create'
                from (irb):3
                from :0

    Has anyone else come across ActiveRecord not sending an ID when it clearly knows it?
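
    One plausible explanation, offered as a sketch rather than a confirmed fix: User.create fires its INSERT immediately, before the has_one assignment can populate video_id, so a NOT NULL constraint on the new server's users.video_id column (which the older servers apparently lack) rejects the row. Building the record through the association - using the accessors has_one generates in Rails 1.2.3 - sets the foreign key before saving:

        video = Video.create(:title => 'New Video')

        # build_user comes with has_one and fills in video_id before the INSERT
        user = video.build_user(:name => 'Tester')
        user.save

        # or in one step:
        video.create_user(:name => 'Tester')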

  • ASP.NET MVC, Webform hybrid

    - by Greg Ogle
    We (me and my team) have an ASP.NET MVC application, and we are integrating a page or two that are Web Forms. We are trying to reuse the Master Page from the MVC part of the app in the Web Forms part. We have found a way of rendering an MVC partial view in Web Forms, which works great until we try to do a postback, which is the reason for using a Web Form in the first place.

    The error:

        Validation of viewstate MAC failed. If this application is hosted by a Web Farm or cluster, ensure that <machineKey> configuration specifies the same validationKey and validation algorithm. AutoGenerate cannot be used in a cluster.

    The code to render the partial view from a WebForm (credited to "How to include a partial view inside a webform"):

        public static class WebFormMVCUtil
        {
            public static void RenderPartial(string partialName, object model)
            {
                // get a wrapper for the legacy WebForm context
                var httpCtx = new HttpContextWrapper(System.Web.HttpContext.Current);

                // create a mock route that points to the empty controller
                var rt = new RouteData();
                rt.Values.Add("controller", "WebFormController");

                // create a controller context for the route and http context
                var ctx = new ControllerContext(new RequestContext(httpCtx, rt), new WebFormController());

                // find the partial view using the view engine
                var view = ViewEngines.Engines.FindPartialView(ctx, partialName).View;

                // create a view context and assign the model
                var vctx = new ViewContext(ctx, view, new ViewDataDictionary { Model = model }, new TempDataDictionary());

                // ERROR OCCURS ON THIS LINE
                view.Render(vctx, System.Web.HttpContext.Current.Response.Output);
            }
        }

    My only experience with this error is in the context of a web farm, which is not the case here. Also, I understand that the machine key is used for decrypting the ViewState. Any information on how to diagnose this issue would be appreciated.

    A work-around: so far, the work-around is to move the header content to a partial view, then use an AJAX call to fetch a page containing just that partial view from the Web Forms pages, while using the partial view directly on the MVC views. We are also still able to share the non-tech-specific parts of the Master Page, i.e. anything that is not MVC specific. Still, this is not ideal; a server-side solution is desired. This work-around also has issues with more sophisticated controls that use JavaScript, particularly dynamically generated script as used by third-party controls.
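
    One avenue worth trying, sketched under the assumption that the auto-generated machine key differs between the Web Forms and MVC rendering contexts: pin an explicit <machineKey> in web.config so ViewState MAC validation uses the same key everywhere. The key values below are placeholders; generate real ones.

        <system.web>
          <machineKey
              validationKey="...64+ hex chars, placeholder..."
              decryptionKey="...48 hex chars, placeholder..."
              validation="SHA1"
              decryption="AES" />
        </system.web>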

  • HTTP 400 Bad Request error attempting to add web reference to WCF Service

    - by c152driver
    I have been trying to port a legacy WSE 3 web service to WCF. Since maintaining backwards compatibility with WSE 3 clients is the goal, I've followed the guidance in this article. After much trial and error, I can call the WCF service from my WSE 3 client. However, I am unable to add or update a web reference to this service from Visual Studio 2005 (with WSE 3 installed). The response is "The request failed with HTTP status 400: Bad Request". I get the same error trying to generate the proxy using the wsewsdl3 utility. I can add a Service Reference using VS 2008. Any solutions or troubleshooting suggestions?

    Here are the relevant sections from the config file for my WCF service:

        <system.serviceModel>
          <services>
            <service behaviorConfiguration="MyBehavior" name="MyService">
              <endpoint address="" binding="customBinding" bindingConfiguration="wseBinding"
                        contract="IMyService" />
              <endpoint address="mex" binding="mexHttpsBinding" contract="IMetadataExchange" />
            </service>
          </services>
          <bindings>
            <customBinding>
              <binding name="wseBinding">
                <security authenticationMode="UserNameOverTransport" />
                <mtomMessageEncoding messageVersion="Soap11WSAddressingAugust2004" />
                <httpsTransport/>
              </binding>
            </customBinding>
          </bindings>
          <behaviors>
            <serviceBehaviors>
              <behavior name="MyBehavior">
                <serviceMetadata httpGetEnabled="true" />
                <serviceDebug includeExceptionDetailInFaults="true" />
                <serviceCredentials>
                  <userNameAuthentication userNamePasswordValidationMode="Custom"
                                          customUserNamePasswordValidatorType="MyCustomValidator" />
                </serviceCredentials>
                <serviceAuthorization principalPermissionMode="UseAspNetRoles"
                                      roleProviderName="MyRoleProvider" />
              </behavior>
            </serviceBehaviors>
          </behaviors>
          <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
        </system.serviceModel>
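
    One detail worth double-checking, offered as a guess rather than a confirmed fix: the endpoint runs over httpsTransport, but metadata is only enabled for plain HTTP, and the older WSE 3-era tooling may be stumbling over that mismatch where the VS 2008 Service Reference dialog copes. The corresponding one-line change would be:

        <serviceMetadata httpGetEnabled="true" httpsGetEnabled="true" />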

  • Which management tools would you recommend for software development?

    - by Robert Schneider
    What would you generally recommend for software development? Which combination do you use, or would you recommend? I assume that tools are needed for version control, issues/bugs/tasks, release management, and requirement management. Tools for test management and project management should be accounted for as well, I guess. Did I forget anything? Maybe tools for continuous integration.

    I'm not interested in a halfway combination of one or two tools like Subversion + Bugzilla (I know they are good, but for a company they might not be sufficient). Also, tools like make/ant shouldn't be taken into account. I'd like to know a combination that covers everything that is important for professional software development. It could, of course, be a single tool if it covers all the management issues. What do you think would be a good combination? I assume a combination should be regarded as good if the tools themselves are good, but also if they have good integrations.

    Update: Something like Ant is just a script, not a management tool. Okay: we do already have Perforce. But this is somewhat generic. We have different projects that use C, VS/.NET, Python, and PHP, and we will be starting new projects in Java. Plenty of languages and frameworks, though some are going to be legacy. So the tools should be generic. Obviously we don't want to use Bugzilla for one project and Jira for another. The management tools have to be generic.

    Further thoughts: Think of SourceForge: for each project they offer a version control system, an issue tracker, a wiki, ... totally independent of what the project is about. Those are some management tools that a project usually needs. I would say that most companies need such a set of tools. But I could imagine that if a company has several projects, they need further tools too: for project management, quality management, ... And I could imagine there are some recommendable combinations of tools.

    A not really appropriate comparison to this could be LAMP (Linux, Apache, MySQL, PHP): a combination of software that has evolved because its pieces work very well together and are pretty useful for a lot of applications (or rather web sites). So maybe there are some recommendable management tool combinations that also fit together like LAMP. The integration is important.

    Thank you for the incredibly fast and helpful responses!!!

  • How to break a Hibernate session?

    - by Péter Török
    In the Hibernate reference, it is stated several times that

        All exceptions thrown by Hibernate are fatal. This means you have to roll back the database transaction and close the current Session. You aren’t allowed to continue working with a Session that threw an exception.

    One of our legacy apps uses a single session to update/insert many records from files into a DB table. Each record update/insert is done in a separate transaction, which is then duly committed (or rolled back in case an error occurred). Then for the next record a new transaction is opened, etc. But the same session is used throughout the whole process, even if a HibernateException was caught in the middle. We are using Oracle 9i, btw, with Hibernate 3.2.4.sp1 on JBoss 4.2.

    Reading the above in the book, I realized that this design may fail. So I refactored the app to use a separate session for each record update. In a unit test with a mock session factory, I could prove that it now requests a new session for each record update. So far, so good.

    However, we found no way to reproduce the session failure while testing the whole app (would this be a stress test, btw, or ...?). We thought of shutting down the listener of the DB, but we realized that the app keeps a bunch of connections open to the DB, and the listener would not affect those connections. (This is a web app, activated once every night by a scheduler, but it can also be activated via the browser.) Then we tried to kill some of those connections in the DB while the app was processing updates - this resulted in some failed updates, but then the app happily continued. Apparently Hibernate is clever enough to reopen broken connections under the hood without breaking the whole session.

    So this might not be a critical issue, as our app seems to be robust enough even in its original form. However, the issue keeps bugging me. I would like to know:

    - Under what circumstances does the Hibernate session really become unusable after a HibernateException was thrown?
    - How can I reproduce this in a test? (What's the proper term for such a test?)
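
    For reference, a minimal session-per-record sketch (sessionFactory and record stand in for whatever the app actually uses); the essential point from the quoted passage is that after a HibernateException the Session object itself must be discarded, not just the transaction:

        Session session = sessionFactory.openSession();
        Transaction tx = null;
        try {
            tx = session.beginTransaction();
            session.saveOrUpdate(record);   // one record per session
            tx.commit();
        } catch (HibernateException e) {
            if (tx != null) tx.rollback();  // the session is unusable from here on
        } finally {
            session.close();                // discard it either way; open a fresh one per record
        }

    As for forcing the failure in a test: deliberately breaking a dependency like this is usually called fault injection; one common route is a wrapped JDBC DataSource or a Hibernate Interceptor that throws on demand.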

  • MemoryStream, XmlTextWriter and Warning 4 CA2202 : Microsoft.Usage

    - by rasx
    The Run Code Analysis command in Visual Studio 2010 Ultimate returns a warning when it sees a certain pattern with MemoryStream and XmlTextWriter. This is the warning:

        Warning 7 CA2202 : Microsoft.Usage : Object 'ms' can be disposed more than once in method 'KinteWritePages.GetXPathDocument(DbConnection)'. To avoid generating a System.ObjectDisposedException you should not call Dispose more than one time on an object.: Lines: 421 C:\Visual Studio 2010\Projects\Songhay.DataAccess.KinteWritePages\KinteWritePages.cs 421 Songhay.DataAccess.KinteWritePages

    This is the form:

        static XPathDocument GetXPathDocument(DbConnection connection)
        {
            XPathDocument xpDoc = null;
            var ms = new MemoryStream();
            try
            {
                using (XmlTextWriter writer = new XmlTextWriter(ms, Encoding.UTF8))
                {
                    using (DbDataReader reader = CommonReader.GetReader(connection, Resources.KinteRssSql))
                    {
                        writer.WriteStartDocument();
                        writer.WriteStartElement("data");
                        do
                        {
                            while (reader.Read())
                            {
                                writer.WriteStartElement("item");
                                for (int i = 0; i < reader.FieldCount; i++)
                                {
                                    writer.WriteRaw(String.Format("<{0}>{1}</{0}>", reader.GetName(i), reader[i].ToString()));
                                }
                                writer.WriteFullEndElement();
                            }
                        } while (reader.NextResult());
                        writer.WriteFullEndElement();
                        writer.WriteEndDocument();
                        writer.Flush();
                        ms.Position = 0;
                        xpDoc = new XPathDocument(ms);
                    }
                }
            }
            finally
            {
                ms.Dispose();
            }
            return xpDoc;
        }

    The same kind of warning is produced for this form:

        XPathDocument xpDoc = null;
        using (var ms = new MemoryStream())
        {
            using (XmlTextWriter writer = new XmlTextWriter(ms, Encoding.UTF8))
            {
                using (DbDataReader reader = CommonReader.GetReader(connection, Resources.KinteRssSql))
                {
                    //...
                }
            }
        }
        return xpDoc;

    By the way, the following form produces another warning:

        XPathDocument xpDoc = null;
        var ms = new MemoryStream();
        using (XmlTextWriter writer = new XmlTextWriter(ms, Encoding.UTF8))
        {
            using (DbDataReader reader = CommonReader.GetReader(connection, Resources.KinteRssSql))
            {
                //...
            }
        }
        return xpDoc;

    The above produces the warning:

        Warning 7 CA2000 : Microsoft.Reliability : In method 'KinteWritePages.GetXPathDocument(DbConnection)', object 'ms' is not disposed along all exception paths. Call System.IDisposable.Dispose on object 'ms' before all references to it are out of scope. C:\Visual Studio 2010\Projects\Songhay.DataAccess.KinteWritePages\KinteWritePages.cs 383 Songhay.DataAccess.KinteWritePages

    What are my options, in addition to the following?

    - Suppress warning CA2202.
    - Suppress warning CA2000 and hope that Microsoft is disposing of the MemoryStream (because Reflector is not showing me the source code).
    - Rewrite my legacy code to recognize the wonderful XDocument and LINQ to XML.
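
    A fourth option, sketched from the usual guidance for CA2202 (not from the original post): transfer ownership of the stream to the writer and null out the local, so that no code path can dispose it twice and no path leaks it.

        static XPathDocument GetXPathDocument(DbConnection connection)
        {
            XPathDocument xpDoc = null;
            MemoryStream ms = null;
            try
            {
                ms = new MemoryStream();
                using (XmlTextWriter writer = new XmlTextWriter(ms, Encoding.UTF8))
                {
                    MemoryStream doc = ms;
                    ms = null;  // the writer now owns the stream; finally won't touch it
                    // ... write the XML exactly as before ...
                    writer.Flush();
                    doc.Position = 0;
                    xpDoc = new XPathDocument(doc);
                }
            }
            finally
            {
                if (ms != null) ms.Dispose();  // only runs if the writer never took ownership
            }
            return xpDoc;
        }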

  • PNG alpha transparency in AS3 - Unknown file-type

    - by WiseDonkey
    Hello there! After whittling down the options, we've encountered a problem with PNGs and ActionScript 3 (AS3). When loading a PNG 8 or PNG 32 with alpha transparency, we're getting the following error reported in Flash:

        "Error #2124: Loaded file is an unknown type"

    Now, we're dealing with some legacy images, and it appears as though this problem isn't universal - some images believed to be 32-bit alpha PNGs are loading. BUT, some conclusions:

    - Converting one image that was 32-bit alpha (NOT WORKING IN AS3) to PNG 8 index transparency DID work.
    - Converting that same image to PNG 8 alpha DID NOT work.
    - These all worked in AS2.
    - There is no difference between the headers.

    Headers of a failing image:

        [0] => HTTP/1.1 200 OK
        [1] => Date: Tue, 06 Apr 2010 14:17:28 GMT
        [2] => Server: Apache/2.2.3 (Red Hat)
        [3] => Last-Modified: Tue, 06 Apr 2010 13:44:05 GMT
        [4] => ETag: "3700054-11d6-a3983340"
        [5] => Accept-Ranges: bytes
        [6] => Content-Length: 4566
        [7] => Connection: close
        [8] => Content-Type: image/png

    Headers of a working image:

        [0] => HTTP/1.1 200 OK
        [1] => Date: Tue, 06 Apr 2010 14:19:02 GMT
        [2] => Server: Apache/2.2.3 (Red Hat)
        [3] => Last-Modified: Fri, 30 Oct 2009 18:38:08 GMT
        [4] => ETag: "ba8057-65f2-5445c400"
        [5] => Accept-Ranges: bytes
        [6] => Content-Length: 26098
        [7] => Connection: close
        [8] => Content-Type: image/png

    Any thoughts on a direction for further investigation, or on a bewildering problem with little to no documentation, are very warmly welcomed.

    EDIT: It now appears as though something in the PHP conversion of the images is going wrong; I use the following PHP to add alpha layers:

        imagealphablending($image_p, false);
        ImageSaveAlpha($image_p, true);
        ImageFill($image_p, 0, 0, IMG_COLOR_TRANSPARENT);
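
    A hedged guess at the conversion bug: IMG_COLOR_TRANSPARENT is not a real allocated colour on a truecolor canvas, and filling with it can leave the alpha channel in a state Flash rejects. The documented route is to allocate an explicit fully transparent colour:

        imagealphablending($image_p, false);
        imagesavealpha($image_p, true);

        // alpha 127 = fully transparent in GD
        $transparent = imagecolorallocatealpha($image_p, 0, 0, 0, 127);
        imagefill($image_p, 0, 0, $transparent);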

  • Debugging site written mainly in JScript with AJAX code injection

    - by blumidoo
    Hello, I have legacy code to maintain, and while trying to understand the logic behind the code, I have run into lots of annoying issues. The application is written mainly in JavaScript, with extensive usage of jQuery plus different plugins, especially Accordion. It creates a wizard-like flow, where client code for the next step is downloaded in the background by injecting the result of a remote AJAX request. It also uses callbacks a lot and a pretty complicated "by convention" programming style (lots of event handlers are created on the fly based on certain object names - e.g. current page name, current step name). Adding to that, the code is very messy and there is no obvious inner structure - the functions are scattered through the code, file names do not reflect the business role of the code, lots of functions and code snippets are most likely not used at all, etc.

    PROBLEM: How should I approach this code base so that the inner flow of the code can be sort-of "reverse engineered" using a suite of smart debugging tools? Ideally, I would like to be able to attach to the running application and step through the code, breaking on each new function call. Also, it would be nice to be able to create a "diagram of calls" in the application (i.e. in order to run a particular page's logic, this particular flow of function calls was executed in a particular order). Not to mention being able to run a coverage analysis, identifying potentially orphaned code fragments.

    I would like to stress once more that it is impossible to understand the inner logic of the application just by looking at the code itself, unless you have LOTS of spare time and beer crates, which I unfortunately do not have :/ (shame...)

    An IDE of some sort that would aid in extending that code would also be great, but I am currently looking into the possibility of using Visual Studio 2010 to do the job, as the site itself is a mix of Classic ASP and ASP.NET (I'd say 70% JavaScript with jQuery, 30% ASP). I have obviously tried Firebug, but I was unable to find a way to define a breakpoint or step into the code that is "injected" into the client JS using AJAX calls (i.e. the application retrieves the code by invoking a URL and injects it into the local client code). The Venkman debugger had similar issues. Any hints would be welcome. Feel free to ask additional questions.
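
    Two small tricks that sometimes help with eval-injected code in Firebug-era tooling, offered as suggestions rather than guaranteed fixes (xhr and the script name below are made up):

        // Name the injected fragment so the debugger can list it and
        // accept breakpoints inside it:
        var code = xhr.responseText + "\n//@ sourceURL=step2-payload.js";
        eval(code);

        // Or force a break at a known point by editing a debugger
        // statement into the injected source:
        debugger;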

  • How can I make SWIG correctly wrap a char* buffer that is modified in C as a Java Something-or-other

    - by Ukko
    I am trying to wrap some legacy code for use in Java, and I was quite happy to see that SWIG was able to handle the header file and generate a great wrapper that almost works. Now I am looking for the deep magic that will make it really work. In C I have a function that looks like this:

        DLL_IMPORT int DustyVoodoo(char *buff, int len, char *curse);

    The integer returned by this function is an error code in case it fails. The arguments are:

    - buff is a character buffer
    - len is the length of the data in the buffer
    - curse is another character buffer that contains the result of calling DustyVoodoo

    So, you can see where this is going: the result is actually coming back via the third argument. Also, len is confusing, since it may be the length of both buffers; they are always allocated as being the same size in the calling code, but given what DustyVoodoo does I don't think they need be the same. To be safe, both buffers should be the same size in practice, say 512 chars.

    The C code generated for the binding is as follows:

        SWIGEXPORT jint JNICALL Java_pemapiJNI_DustyVoodoo(JNIEnv *jenv, jclass jcls, jstring jarg1, jint jarg2, jstring jarg3) {
          jint jresult = 0 ;
          char *arg1 = (char *) 0 ;
          int arg2 ;
          char *arg3 = (char *) 0 ;
          int result;
          (void)jenv;
          (void)jcls;
          arg1 = 0;
          if (jarg1) {
            arg1 = (char *)(*jenv)->GetStringUTFChars(jenv, jarg1, 0);
            if (!arg1) return 0;
          }
          arg2 = (int)jarg2;
          arg3 = 0;
          if (jarg3) {
            arg3 = (char *)(*jenv)->GetStringUTFChars(jenv, jarg3, 0);
            if (!arg3) return 0;
          }
          result = (int)PemnEncrypt(arg1,arg2,arg3);
          jresult = (jint)result;
          if (arg1) (*jenv)->ReleaseStringUTFChars(jenv, jarg1, (const char *)arg1);
          if (arg3) (*jenv)->ReleaseStringUTFChars(jenv, jarg3, (const char *)arg3);
          return jresult;
        }

    It is correct for what it does; however, it misses the fact that curse is not just an input - it is altered by the function and should be returned as an output. It also does not know that the Java Strings are really buffers and should be backed by a suitably sized array. I think SWIG can do the right thing here, I just can't figure out from the documentation how to tell SWIG what it needs to know. Any typemap masters in the house?
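
    One route worth trying, sketched with the usual disclaimers (the header name is hypothetical, and 512 is the assumed buffer size): SWIG's various.i library for Java ships a char *BYTE typemap that passes a Java byte[] straight through, so changes made by the C function are visible to the caller.

        %module dustyvoodoo
        %include "various.i"

        /* curse arrives as a Java byte[] that C may modify in place */
        %apply char *BYTE { char *curse };

        %include "DustyVoodoo.h"

    Calling it from Java would then look something like:

        byte[] curse = new byte[512];
        int rc = dustyvoodoo.DustyVoodoo("input data", 10, curse);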

  • Reporting Services as PDF through WebRequest in C# 3.5 "Not Supported File Type"

    - by Heath Allison
    I've inherited a legacy application that is supposed to grab an on-the-fly PDF from a Reporting Services server. Everything works fine up until the point where you try to open the PDF being returned, and Adobe Acrobat tells you:

        Adobe Reader could not open 'thisStoopidReport.pdf' because it is either not a supported file type or because the file has been damaged (for example, it was sent as an email attachment and wasn't correctly decoded).

    I've done some initial troubleshooting on this. If I replace the URL in the WebRequest.Create() call with a valid PDF file on my local machine (ie: @"C:temp/validpdf.pdf"), then I get a valid PDF. The report itself seems to work fine: if I manually type the URL to the Reporting Services report that should generate the PDF file, I am prompted for user authentication, but after supplying it I get a valid PDF file.

    I've replaced the actual URL, username, password and domain strings in the code below with bogus values for obvious reasons.

        WebRequest request = WebRequest.Create(@"http://x.x.x.x/reportServer?/reports/reportNam&rs:format=pdf&rs:command=render&rc:parameters=blahblahblah");
        int totalSize = 0;
        request.Credentials = new NetworkCredential("validUser", "validPass", "validDomain");
        request.Timeout = 360000; // 6 minutes in milliseconds.
        request.Method = WebRequestMethods.Http.Post;
        request.ContentLength = 0;
        WebResponse response = request.GetResponse();
        Response.Clear();
        BinaryReader reader = new BinaryReader(response.GetResponseStream());
        Byte[] buffer = new byte[2048];
        int count = reader.Read(buffer, 0, 2048);
        while (count > 0)
        {
            totalSize += count;
            Response.OutputStream.Write(buffer, 0, count);
            count = reader.Read(buffer, 0, 2048);
        }
        Response.ContentType = "application/pdf";
        Response.Cache.SetCacheability(HttpCacheability.Private);
        Response.CacheControl = "private";
        Response.Expires = 30;
        Response.AddHeader("Content-Disposition", "attachment; filename=thisStoopidReport.pdf");
        Response.AddHeader("Content-Length", totalSize.ToString());
        reader.Close();
        Response.Flush();
        Response.End();
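
    Not the confirmed cause, but two things in the snippet are worth ruling out: the headers are set after the body has already been streamed, and nothing verifies that the server actually sent a PDF (an SSRS error page can come back as HTML with a 200 status and would produce exactly this Acrobat message). A reordered sketch:

        Response.Clear();
        Response.ContentType = "application/pdf";
        Response.AddHeader("Content-Disposition", "attachment; filename=thisStoopidReport.pdf");

        byte[] buffer = new byte[2048];
        int count = reader.Read(buffer, 0, buffer.Length);

        // a real PDF begins with the bytes "%PDF"; anything else is
        // likely an error page masquerading as the report
        bool looksLikePdf = count >= 4 && buffer[0] == '%' && buffer[1] == 'P'
                            && buffer[2] == 'D' && buffer[3] == 'F';
        if (!looksLikePdf)
        {
            // log the first bytes here to see what the server really returned
        }

        while (count > 0)
        {
            Response.OutputStream.Write(buffer, 0, count);
            count = reader.Read(buffer, 0, buffer.Length);
        }
        Response.Flush();
        Response.End();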

  • Complex relationship between tables in NHibernate

    - by Ilya Kogan
    Hi all, I'm writing a Fluent NHibernate mapping for a legacy Oracle database. The challenge is that the tables have composite primary keys. If I had total freedom, I would redesign the relationships and auto-generate primary keys, but other applications must write to the same database and read from it, so I cannot do it.

    These are the two tables I'll focus on. Example data:

    Trips table:

        1, 10:00, 11:00 ...
        1, 12:00, 15:00 ...
        1, 16:00, 19:00 ...
        2, 12:00, 13:00 ...
        3, 9:00, 18:00 ...

    Faults table:

        1, 13:00 ...
        1, 23:00 ...
        2, 12:30 ...

    In this case, vehicle 1 made three trips and has two faults. The first fault happened during the second trip, and the second fault happened while the vehicle was resting. Vehicle 2 had one trip, during which a fault happened.

    Constraints: trips of the same vehicle never overlap. So the tables have an optional one-to-many relationship, because every fault either happens during a trip or it doesn't. If I wanted to join them in SQL, I would write:

        select ...
        from Faults
        left outer join Trips
          on Faults.VehicleId = Trips.VehicleId
         and Faults.FaultTime between Trips.TripStartTime and Trips.TripEndTime

    and then I'd get a dataset where every fault appears exactly once (one-to-many, as I said). Note that there is no Vehicles table, and I don't need one. But I did create a view that contains all VehicleIds from both tables, so I can use it as a junction table.

    What am I actually looking for? The tables are huge because they cover years of data, and each time I only need to fetch a range of a few hours. So I need a mapping and a criteria that will run something like the following SQL underneath:

        select ...
        from Faults
        left outer join Trips
          on Faults.VehicleId = Trips.VehicleId
         and Faults.FaultTime between Trips.TripStartTime and Trips.TripEndTime
        where Faults.FaultTime between :p0 and :p1

    Do you have any ideas how to achieve it?

    Note 1: Currently the application shouldn't write to the database, so persistence is not a must, although if the mapping supports persistence, it may help at some point in the future.

    Note 2: I know it's a tough one, so if you give me a great answer, you will be properly rewarded :)

    Thank you for reading this long question, and now I only hope for the best :)
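
    One pragmatic sketch, assuming NHibernate's ICriteria API plus LINQ's FirstOrDefault, with Trip/Fault members named after the description above: fetch the two time windows separately and stitch the interval join in memory, which keeps both mappings trivial.

        var faults = session.CreateCriteria<Fault>()
            .Add(Restrictions.Between("FaultTime", rangeStart, rangeEnd))
            .List<Fault>();

        var trips = session.CreateCriteria<Trip>()
            .Add(Restrictions.Le("TripStartTime", rangeEnd))
            .Add(Restrictions.Ge("TripEndTime", rangeStart))
            .List<Trip>();

        // every fault matches at most one trip, since trips of the
        // same vehicle never overlap
        foreach (var fault in faults)
            fault.Trip = trips.FirstOrDefault(t =>
                t.VehicleId == fault.VehicleId &&
                t.TripStartTime <= fault.FaultTime &&
                fault.FaultTime <= t.TripEndTime);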

  • response.redirect to classic asp failing {Unable to evaluate expression because the code is optimized or a native frame is on top of the call stack}

    - by jeff
    I have the following code pasted below. For some reason, the Response.Redirect seems to be failing: it maxes out the CPU on my server and just doesn't do anything. The .NET code uploads the file fine, but does not redirect to the ASP page to do the processing. I know this is absolute rubbish - why would you have .NET code redirecting to classic ASP? - but it is a legacy app. I have tried putting false or true, etc., at the end of the redirect, as I have read other people have had issues with this. Please help, as it's driving me insane! It's so strange: it runs locally on my machine but won't run on my server.

    I am getting the following error when debugging remotely:

        {Unable to evaluate expression because the code is optimized or a native frame is on top of the call stack.}

    (UPDATED) After debugging remotely and taking the redirect out of the try/catch, I have found that the redirect is trying to get to the correct location, but after it leaves the redirect it just seems to get lost - almost as if it can't navigate away from the cobra_import project back up a level to COBRA/pages. Why is this? This has worked previously!

        public void btnUploadTheFile_Click(object Source, EventArgs evArgs)
        {
            // need to check that the uploaded file is an xls file.
            string strFileNameOnServer = "PJI3.txt";
            string strBaseLocation = ConfigurationSettings.AppSettings["str_file_location"];
            if ("" == strFileNameOnServer)
            {
                txtOutput.InnerHtml = "Error - a file name must be specified.";
                return;
            }
            if (null != uplTheFile.PostedFile)
            {
                try
                {
                    uplTheFile.PostedFile.SaveAs(strBaseLocation + strFileNameOnServer);
                    txtOutput.InnerHtml = "File <b>" + strBaseLocation + strFileNameOnServer + "</b> uploaded successfully";
                    Response.Redirect("/COBRA/pages/sap_import_pji3_prc.asp");
                }
                catch (Exception e)
                {
                    txtOutput.InnerHtml = "Error saving <b>" + strBaseLocation + strFileNameOnServer + "</b><br>" + e.ToString();
                }
            }
        }
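
    One pattern worth ruling out, sketched here rather than taken from the thread: Response.Redirect with the default endResponse behaviour raises a ThreadAbortException, and raising it inside a try block whose catch writes to the page can leave the request in exactly this kind of limbo. Setting a flag and redirecting outside the try avoids that:

        bool uploaded = false;
        try
        {
            uplTheFile.PostedFile.SaveAs(strBaseLocation + strFileNameOnServer);
            uploaded = true;
        }
        catch (Exception e)
        {
            txtOutput.InnerHtml = "Error saving <b>" + strBaseLocation + strFileNameOnServer + "</b><br>" + e.ToString();
        }
        if (uploaded)
        {
            // false = don't abort the thread; the redirect header goes
            // out when the handler finishes normally
            Response.Redirect("/COBRA/pages/sap_import_pji3_prc.asp", false);
        }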

  • should I ever put a major version number into a C#/Java namespace?

    - by Andrew Patterson
    I am designing a set of 'service' layer objects (data objects and interface definitions) for a WCF web service (that will be consumed by third-party clients, i.e. not in-house, so outside my direct control). I know that I am not going to get the interface definition exactly right, and I want to prepare for the time when I know that I will have to introduce a breaking set of new data objects. However, the reality of the world I am in is that I will also need to run my first version simultaneously for quite a while.

    The first version of my service will have the URL http://host/app/v1service.svc, and when the time comes my new version will live at http://host/app/v2service.svc. However, when it comes to the data objects and interfaces, I am toying with putting the 'major' version of the interface number into the actual namespace of the classes:

        namespace Company.Product.V1
        {
            [DataContract(Namespace = "company-product-v1")]
            public class Widget
            {
                [DataMember]
                string widgetName;
            }

            public interface IFunction
            {
                Widget GetWidgetData(int code);
            }
        }

    When the time comes for a fundamental change to the service, I will introduce some classes like:

        namespace Company.Product.V2
        {
            [DataContract(Namespace = "company-product-v2")]
            public class Widget
            {
                [DataMember]
                int widgetCode;
                [DataMember]
                int widgetExpiry;
            }

            public interface IFunction
            {
                Widget GetWidgetData(int code);
            }
        }

    The advantages as I see them are that I will be able to have a single set of code serving both interface versions, sharing functionality where possible. This is because I will be able to reference both interface versions as a distinct set of C# objects. Similarly, clients may use both interface versions simultaneously, perhaps using V1.Widget in some legacy code whilst new bits move on to V2.Widget.

    Can anyone tell me why this is a stupid idea? I have a nagging feeling that this is a bit smelly.

    Notes: I am obviously not proposing that every single new version of the service would be in a new namespace. Presumably I will make as many non-breaking interface changes as possible, but I know that I will hit a point where all the data modelling will probably need a significant rewrite. I understand assembly versioning etc., but I think this question is tangential to that type of versioning. But I could be wrong.

  • UserControl hosted in IE renders as a textbox

    - by coxymla
    On my ongoing saga to mirror the hosting of a legacy app on a clean box, I've hit my next snag. One page relies on a big .NET UserControl that on the new machine renders only as a big, greyed-out textarea (greyed-out vertical scrollbar on the right-hand edge; inspecting the source shows the expected object tag). This is particularly tricky because nobody seems to know much about hosted UserControls, and all the discussions date back to 2002-2004.

    The page is quite simple:

        <%@ Page language="c#" Codebehind="DataExport.aspx.cs" AutoEventWireup="false" Inherits="yyyyy.Web.DataExport" %>
        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" >
        <html>
        <head>
          <title>DataExport</title>
          <link rel="Configuration" href="/xxxxx/yyyyy/DataExport.config">
        </head>
        <body style="margin:0px;padding:0px;overflow:hidden">
          <OBJECT id="DataExport"
                  style="WIDTH: 100%; HEIGHT: 100%; position:absolute; left: 0px; top:0px"
                  classid="yyyyy.Common.dll#yyyyy.Controls.DataExport" VIEWASTEXT>
          </OBJECT>
        </body>
        </html>

    The config file referenced:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <configSections>
            <sectionGroup name="yyyyy">
              <section name="dataExport" type="yyyyy.Controls.DataExportSectionHandler,yyyyy.Common" />
            </sectionGroup>
          </configSections>
          <yyyyy>
            <dataExport>
              <layoutFile>http://vm2/xxxxx/yyyyy/layout.xml</layoutFile>
              <webServiceUrl>http://vm2/xxxxx/yyyyy/services/yyyyy.asmx</webServiceUrl>
            </dataExport>
          </yyyyy>
        </configuration>

    What I've checked:

    - Security permissions should be OK; the site is trusted, and adding a URL exception to grant FullTrust doesn't change anything.
    - The config file is accessible over the web, layout.xml is accessible, and the ASMX shows the expected command list.
    - Machine.config grants GET permission for the usercontrol.config file.

    What perhaps looks fishy to me: the DataExport UserControl references Aspose.Excel to generate the spreadsheets it exports. When I navigate to the page and get a blank textbox, then run gacutil /ldl, nothing is in the local download cache. On the working machine, running the same command after viewing the page shows a laundry list of DLLs, including the control DLL and the Aspose DLL.

  • Overwhelmed by design patterns... where to begin?

    - by Pete
    I am writing simple prototype code to demonstrate & profile I/O schemes (HDF4, HDF5, HDF5 using parallel IO, NetCDF, etc.) for a physics code. Since the focus is on IO, the rest of the program is very simple:

        class Grid
        {
        public:
            floatArray x, y, z;
        };

        class MyModel
        {
        public:
            MyModel(const int &nip1, const int &njp1, const int &nkp1, const int &numProcs);
            Grid grid;
            map<string, floatArray> plasmaVariables;
        };

    where floatArray is a simple class that lets me define arbitrarily dimensioned arrays and do mathematical operations on them (i.e. x+y is point-wise addition). Of course, I could use better encapsulation (write accessors/setters, etc.), but that's not the concept I'm struggling with.

    For the I/O routines, I am envisioning applying simple inheritance:

    - an abstract I/O class that defines the read & write functions to fill in the "myModel" object
    - an HDF4 derived class
    - HDF5
    - HDF5 using parallel IO
    - NetCDF
    - etc...

    The code should read data in any of these formats, then write out to any of these formats. In the past, I would add an AbstractIO member to myModel and create/destroy this object depending on which I/O scheme I want. In this way, I could do something like:

        myModelObj.ioObj->read('input.hdf')
        myModelObj.ioObj->write('output.hdf')

    I have a bit of OOP experience but very little on the design patterns front, so I recently acquired the Gang of Four book "Design Patterns: Elements of Reusable Object-Oriented Software".

    OOP designers: which pattern(s) would you recommend I use to integrate I/O with the myModel object? I am interested in answering this for two reasons:

    - To learn more about design patterns in general.
    - To apply what I learn to help refactor a large, old, crufty legacy physics code to be more human-readable & extensible.

    I am leaning towards applying the Decorator pattern to myModel, so I can attach the I/O responsibilities dynamically to myModel (i.e. whether to use HDF4, HDF5, etc.). However, I don't feel very confident that this is the best pattern to apply. Reading the Gang of Four book cover-to-cover before I start coding feels like a good way to develop an unhealthy caffeine addiction. What patterns do you recommend?
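
    For what it's worth, the arrangement described reads more like Strategy than Decorator: the model owns an interchangeable reader/writer chosen at run time. A hedged C++ sketch (names invented, pre-C++11 to match the code above):

        class AbstractIO
        {
        public:
            virtual ~AbstractIO() {}
            virtual void read(const string &path, MyModel &model) = 0;
            virtual void write(const string &path, const MyModel &model) = 0;
        };

        class Hdf5IO : public AbstractIO { /* ... */ };
        class NetCdfIO : public AbstractIO { /* ... */ };

        // a small factory keeps format names out of the physics code;
        // the caller owns (and deletes) the returned object
        AbstractIO *makeIO(const string &format);

        // usage:
        //   AbstractIO *io = makeIO("hdf5");
        //   io->read("input.hdf", model);
        //   io->write("output.nc", model);   // cross-format copy
        //   delete io;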

  • Access violation when running native C++ application that uses a /clr built DLL

    - by doobop
    I'm reorganizing a legacy mixed (managed and unmanaged DLLs) application so that the main application segment is unmanaged MFC, which will call a C++ DLL compiled with the /clr flag that bridges the communication between the managed (C#) DLLs and the unmanaged code. Unfortunately, my changes have resulted in an access violation that occurs before the application's InitInstance() is called. This makes it very difficult to debug. The only information I get is the following stack trace:

        > 64006108()
        ntdll.dll!_ZwCreateMutant@16() + 0xc bytes
        kernel32.dll!_CreateMutexW@12() + 0x7a bytes

    So, here are some scenarios I've tried:

    - Turned on Exceptions - Win32 Exceptions - c0000005 Access Violation to break when thrown. Still, the most detail I get is the above stack trace.
    - Running the application with F10: it fails before any breakpoints are hit, with the above stack trace.
    - Stubbing out the bridge DLL so that it only has one method that returns a bool, coded to just return false (no C# code called):

          bool DllPassthrough::IsFailed()
          {
              return false;
          }

      If the stubbed-out DLL is compiled with the /clr flag, the application fails. If it is compiled without the /clr flag, the application runs.
    - Creating a stub MFC application using the Visual Studio wizard for multi-document applications and calling DllPassthrough::IsFailed(). This succeeds even with the /clr flag used to compile the DLL.
    - Doing a manual LoadLibrary on winmm.lib as outlined in the note "Access violation when using c++/cli". The application still fails.

    So, my questions are: how do I solve the problem - any hints, strategies, or previous incidents? And failing that, how can I get more information on what code segment or library is causing the access violation? If I try more involved workarounds like doing LoadLibrary calls, I'd like to narrow it down to the failing libraries.

    BTW, we are using Visual Studio 2008, and the project is being built against the .NET 2.0 framework for the managed sections.

  • Migrating from hand-written persistence layer to ORM

    - by Sergey Mikhanov
    Hi community, we are currently evaluating options for migrating from a hand-written persistence layer to an ORM. We have a bunch of legacy persistent objects (~200) that implement a simple interface like this:

        interface JDBC {
            public long getId();
            public void setId(long id);
            public void retrieve();
            public void setDataSource(DataSource ds);
        }

    When retrieve() is called, the object populates itself by issuing hand-written SQL queries to the connection provided, using the ID it received in the setter (this is usually the only parameter to the query). It manages its statements, result sets, etc. itself. Some of the objects have special flavors of the retrieve() method, like retrieveByName(); in this case a different SQL query is issued. Queries can be quite complex: we often join several tables to populate the sets representing relations to other objects, and sometimes join queries are issued on demand in a specific getter (lazy loading). So basically, we have implemented most of an ORM's functionality manually.

    The reason for that was performance. We have very strong requirements for speed, and back in 2005 (when this code was written) performance tests showed that none of the mainstream ORMs were as fast as hand-written SQL.

    The problems we are facing now that make us think about an ORM are:

    - Most of the paths in this code are well-tested and stable. However, some rarely-used code is prone to result set and connection leaks that are very hard to detect.
    - We are currently squeezing out some additional performance by adding caching to our persistence layer, and it's a huge pain to maintain the cached objects manually in this setup.
    - Supporting this code when the DB schema changes is a big problem.

    I am looking for advice on what could be the best alternative for us. As far as I know, ORMs have advanced in the last 5 years, so it might be that now there's one that offers acceptable performance. As I see this issue, we need to address these points:

    - Find some way to reuse at least some of the written SQL to express mappings.
    - Have the possibility to issue native SQL queries without the necessity to manually decompose their results (i.e. avoid manual rs.getInt(42), as these are very sensitive to schema changes).
    - Add a non-intrusive caching layer.
    - Keep the performance figures.

    Is there any ORM framework you could recommend with regard to that?
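
    Given the first requirement - reusing the existing SQL - it may be worth looking at SQL-mapping tools alongside full ORMs; iBATIS (now MyBatis) was built around exactly this migration path. A hedged illustration (names are hypothetical):

        <select id="getUserById" parameterClass="long" resultClass="User">
            SELECT id, name, created_at
            FROM users
            WHERE id = #value#
        </select>

    The hand-written SQL moves into the mapping file nearly verbatim, and the framework handles the result-set decomposition and (via its pluggable cache) much of the caching concern.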

  • Mocking methods that call other methods still hits the database. Can I avoid it?

    - by devnet247
    Hi, it has been decided to write some unit tests using Moq etc. for lots of legacy C# code (this is beyond my control, so I cannot answer the whys of it). Now, how do you cope with a scenario where you don't want to hit the database, but you indirectly still hit the database?

    This is something I put together. It's not the real code, but it gives you the idea. How would you deal with this sort of scenario? Basically, calling a method on a mocked interface still makes a DAL call, because inside that method there are other methods not part of that interface. Hope it's clear:

        [TestFixture]
        public class Can_Test_this_legacy_code
        {
            [Test]
            public void Should_be_able_to_mock_login()
            {
                var mock = new Mock<ILoginDal>();
                User user;
                var userName = "Jo";
                var password = "password";
                mock.Setup(x => x.login(It.IsAny<string>(), It.IsAny<string>(), out user));
                var bizLogin = new BizLogin(mock.Object);
                bizLogin.Login(userName, password, out user);
            }
        }

        public class BizLogin
        {
            private readonly ILoginDal _login;
            public BizLogin(ILoginDal login)
            {
                _login = login;
            }
            public void Login(string userName, string password, out User user)
            {
                // Even if I don't want it to, this will call the DAL!!!!!
                var bizPermission = new BizPermission();
                var permissionList = bizPermission.GetPermissions(userName);
                // Method I am actually testing
                _login.login(userName, password, out user);
            }
        }

        public class BizPermission
        {
            public List<Permission> GetPermissions(string userName)
            {
                var dal = new PermissionDal();
                var permissionlist = dal.GetPermissions(userName);
                return permissionlist;
            }
        }

        public class PermissionDal
        {
            public List<Permission> GetPermissions(string userName)
            {
                // I SHOULD NOT BE GETTING HERE!!!!!!
                return new List<Permission>();
            }
        }

        public interface ILoginDal
        {
            void login(string userName, string password, out User user);
        }

        public interface IOtherStuffDal
        {
            List<Permission> GetPermissions();
        }

        public class Permission
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

    Any suggestions? Am I missing the obvious? Is this untestable code? Very, very grateful for any suggestions.
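
    The hidden new BizPermission() is what drags the real DAL into the test. One conventional refactoring, sketched under the assumption that changing BizLogin's constructor is acceptable: lift that dependency behind an interface and inject it, so the test can stub it out alongside ILoginDal.

        public interface IPermissionProvider
        {
            List<Permission> GetPermissions(string userName);
        }

        public class BizLogin
        {
            private readonly ILoginDal _login;
            private readonly IPermissionProvider _permissions;

            public BizLogin(ILoginDal login, IPermissionProvider permissions)
            {
                _login = login;
                _permissions = permissions;
            }

            public void Login(string userName, string password, out User user)
            {
                var permissionList = _permissions.GetPermissions(userName);
                _login.login(userName, password, out user);
            }
        }

    In the test, both collaborators become mocks:

        var perms = new Mock<IPermissionProvider>();
        perms.Setup(p => p.GetPermissions("Jo")).Returns(new List<Permission>());
        var bizLogin = new BizLogin(mock.Object, perms.Object);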

  • Supporting multiple instances of a plugin DLL with global data

    - by Bruno De Fraine
    Context: I converted a legacy standalone engine into a plugin component for a composition tool. Technically, this means that I compiled the engine code base to a C DLL which I invoke from a .NET wrapper using P/Invoke; the wrapper implements an interface defined by the composition tool. This works quite well, but now I have received a request to load multiple instances of the engine, for different projects. Since the engine keeps the project data in a set of global variables, and since the DLL with the engine code base is loaded only once, loading multiple projects means that the project data is overwritten.

    I can see a number of solutions, but they all have some disadvantages:

    1. Create multiple DLLs with the same code, which are seen as different DLLs by Windows, so their code is not shared. Probably this already works if you have multiple copies of the engine DLL with different names. However, the engine is invoked from the wrapper using DllImport attributes, and I think the name of the engine DLL needs to be known when compiling the wrapper. Obviously, if I have to compile different versions of the wrapper for each project, this is quite cumbersome.

    2. Run the engine as a separate process. This means that the wrapper would launch a separate process for the engine when it loads a project, and it would use some form of IPC to communicate with this process. While this is a relatively clean solution, it requires some effort to get working; I don't know which IPC technology would be best to set up this kind of construction. There may also be a significant overhead in the communication: the engine needs to frequently exchange arrays of floating-point numbers.

    3. Adapt the engine to support multiple projects. This means that the global variables should be put into a project structure, and every reference to the globals should be converted to a corresponding reference that is relative to a particular project. There are about 20-30 global variables, but as you can imagine, these global variables are referenced from all over the code base, so this conversion would need to be done in some automatic manner. A related problem is that you should be able to reference the "current" project structure in all places, but passing this along as an extra argument in each and every function signature is also cumbersome. Does there exist a technique (in C) to consider the current call stack and find the nearest enclosing instance of a relevant data value there?

    Can the stackoverflow community give some advice on these (or other) solutions? (A sketch of option 3 follows below.)
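
    A sketch of option 3's plumbing, using thread-local storage so the "current project" does not have to ride along in every signature. This is MSVC syntax, assumes each project issues its engine calls from its own thread, and note that __declspec(thread) has caveats in DLLs loaded via LoadLibrary on pre-Vista Windows:

        /* the ~20-30 former globals, gathered into one structure */
        typedef struct EngineContext {
            double *samples;
            int     sampleCount;
            /* ... */
        } EngineContext;

        static __declspec(thread) EngineContext *g_current = NULL;

        __declspec(dllexport) void SetCurrentProject(EngineContext *ctx)
        {
            g_current = ctx;  /* the wrapper selects the project before each call */
        }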

  • Across-process marshalling problem with an array of points

    - by ElMagn
    Hi all, we have what we think is a marshalling problem with a renderer object when it is called across process boundaries. The renderer is an ATL COM server with a COM object that implements the IPoints interface defined below:

        typedef [uuid(B0E01719-005A-427c-B9DD-B42A18E969AE)]
        struct Point {
            double X;
            double Y;
        } Point;

        [
            object,
            uuid(3BFECFE3-B4FB-4f14-8257-6E065D02E3B3),
            helpstring("IPoints Interface"),
            dual,
        ]
        interface IPoints : IDispatch
        {
            HRESULT DrawPolyLine([in] long hDC, [in] short count,
                                 [in, size_is(count)] Point * points);
            // many more like DrawLine
        };

    The count parameter represents the number of points, and the points parameter represents an array of the actual points.

    We have two processes running: a graphical display process (GDP) and a tabular (grid) display process (TDP). A factory in the GDP, written in C#, creates the renderer and the clients of the renderer in the GDP. When the clients call into the renderer, everything displays correctly. The renderer is created at start-up, BTW. There is another factory in the TDP, written in VB6, that calls into the factory in the GDP to create the clients. When these clients call into the renderer, only the first point in the array is marshalled correctly; all the other points are garbage. It seems that the rendering works only when the client creation is started from the same process as the renderer.

    Now, I am not sure what the solution to this problem is. It seems that if we can guarantee that the clients are always created from a thread in the same GDP process as the renderer, then the points are marshalled correctly. We tried using a background thread from the thread pool in C#, and it indeed worked. The problem is that Windows Forms created from the clients stopped working, because accessing a form's controls from a thread other than the thread that created the control is not allowed. We might change the calls to access the forms, but we have quite a few of them, so we are looking into a different solution that might involve making changes to the renderer. The other problem is that the renderer is legacy code, so we can't just change the interface.

    I am wondering what we can do to the renderer's interface that would help with marshalling calls from across processes. Any ideas would be greatly appreciated.

    Regards, ElMagn
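
    One hedged diagnosis that matches the "first point survives, the rest are garbage" symptom: a dual interface goes through the type-library marshaler, which ignores size_is and ships only a single element across the process boundary; within one process no marshalling happens, so everything works. If that is the cause, the options are a midl-generated proxy/stub DLL registered for IPoints, or an automation-friendly signature such as:

        HRESULT DrawPolyLine([in] long hDC, [in] SAFEARRAY(Point) points);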

  • DB Design Pattern - Many to many classification / categorised tagging.

    - by Robin Day
    I have an existing database design that stores Job Vacancies. The "Vacancy" table has a number of fixed fields across all clients, such as "Title", "Description", and "Salary range". There is an EAV design for "Custom" fields that the clients can set up themselves, such as "Manager Name" or "Working Hours". The field names are stored in a "ClientText" table, and the data is stored in a "VacancyClientText" table with VacancyId, ClientTextId and Value.

    Lastly, there is a many-to-many EAV design for custom tagging / categorising the vacancies with things such as the locations/offices the vacancy is in, or a list of skills required. This is stored as a "ClientCategory" table listing the types of tag ("Locations", "Skills"), a "ClientCategoryItem" table listing the valid values for each category (e.g. "London, Paris, New York, Rome" and "C#, VB, PHP, Python"), and finally a "VacancyClientCategoryItem" table with VacancyId and ClientCategoryItemId for each of the selected items for the vacancy.

    There are no limits to the number of custom fields or custom categories that the client can add. I am now designing a new system that is very similar to the existing system; however, I have the ability to restrict the number of custom fields a client can have, and it's being built from scratch, so I have no legacy issues to deal with.

    For the custom fields my solution is simple: I have 5 additional columns on the Vacancy table called CustomField1-5. This removes one of the EAV designs.

    It is the tagging / categorising design that I am struggling with. If I limit a client to having 5 categories / types of tag, should I create 5 tables listing the possible values ("CustomCategoryItems1-5") and then an additional 5 many-to-many tables ("VacancyCustomCategoryItem1-5")? This would result in 10 tables performing the same storage as the three tables in the existing system. Also, should (heaven forbid) the requirements change so that I need 6 custom categories rather than 5, this would result in a lot of code change.

    Therefore, can anyone suggest any DB design patterns that would be more suitable for storing such data? I'm happy to stick with the EAV approach; however, the existing system has run into all the usual performance issues and complex queries associated with such a design. Any advice / suggestions are much appreciated. The DBMS used is SQL Server 2005; however, 2008 is an option if required for any particular pattern.
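
    One generic alternative, sketched for SQL Server (table and column names invented): keep the three-table shape but add a slot number constrained to the allowed range, so "5 categories" becomes a CHECK constraint rather than 10 tables, and moving to 6 is a one-line change.

        CREATE TABLE CustomCategory (
            CustomCategoryId INT IDENTITY PRIMARY KEY,
            ClientId         INT NOT NULL,
            SlotNumber       TINYINT NOT NULL CHECK (SlotNumber BETWEEN 1 AND 5),
            Name             NVARCHAR(100) NOT NULL,
            UNIQUE (ClientId, SlotNumber)
        );

        CREATE TABLE CustomCategoryItem (
            CustomCategoryItemId INT IDENTITY PRIMARY KEY,
            CustomCategoryId     INT NOT NULL
                REFERENCES CustomCategory (CustomCategoryId),
            Value                NVARCHAR(100) NOT NULL
        );

        CREATE TABLE VacancyCustomCategoryItem (
            VacancyId            INT NOT NULL,
            CustomCategoryItemId INT NOT NULL
                REFERENCES CustomCategoryItem (CustomCategoryItemId),
            PRIMARY KEY (VacancyId, CustomCategoryItemId)
        );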

  • Maven Ant BuildException with maven-antrun-plugin ... unable to find javac compiler

    - by robsbobs
    I'm trying to make Maven call an Ant build for some legacy code. The Ant build builds correctly through Ant; however, when I call it using the maven-antrun-plugin, it fails with the following error:

        [ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.6:run (default) on project CoreServices: An Ant BuildException has occured: The following error occurred while executing this line:
        [ERROR] C:\dev\projects\build\build.xml:158: The following error occurred while executing this line:
        [ERROR] C:\dev\projects\build\build.xml:62: The following error occurred while executing this line:
        [ERROR] C:\dev\projects\build\build.xml:33: The following error occurred while executing this line:
        [ERROR] C:\dev\projects\ods\build.xml:41: Unable to find a javac compiler;
        [ERROR] com.sun.tools.javac.Main is not on the classpath.
        [ERROR] Perhaps JAVA_HOME does not point to the JDK.
        [ERROR] It is currently set to "C:\bea\jdk150_11\jre"

    My javac exists at C:\bea\jdk150_11\bin, and this works for all other things. I'm not sure where Maven is getting this version of JAVA_HOME. JAVA_HOME in the Windows environment variables is set to C:\bea\jdk150_11\, as it should be.

    The Maven code I'm using to call the build.xml is:

        <build>
          <plugins>
            <plugin>
              <groupId>org.apache.maven.plugins</groupId>
              <artifactId>maven-antrun-plugin</artifactId>
              <version>1.6</version>
              <executions>
                <execution>
                  <phase>install</phase>
                  <configuration>
                    <target>
                      <ant antfile="../build/build.xml" target="deliver">
                      </ant>
                    </target>
                  </configuration>
                  <goals>
                    <goal>run</goal>
                  </goals>
                </execution>
              </executions>
            </plugin>
          </plugins>
        </build>
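
    The classic workaround for this error, offered as a sketch (version and path may need adjusting for the JDK in use): Maven runs Ant against the JRE inside the JDK, so hand the plugin tools.jar explicitly as a plugin dependency.

        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-antrun-plugin</artifactId>
          <version>1.6</version>
          <dependencies>
            <dependency>
              <groupId>com.sun</groupId>
              <artifactId>tools</artifactId>
              <version>1.5.0</version>
              <scope>system</scope>
              <systemPath>${java.home}/../lib/tools.jar</systemPath>
            </dependency>
          </dependencies>
          <!-- executions as above -->
        </plugin>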

  • Methodology for a Rails app

    - by Aaron Vegh
    I'm undertaking a rather large conversion from a legacy database-driven Windows app to a Rails app. Because of the large number of forms and database tables involved, I want to make sure I've got the right methodology before getting too far. My chief concern is minimizing the amount of code I have to write. There are many models that interact together, and I want to make sure I'm using them correctly. Here's a simplified set of models:

        class Patient < ActiveRecord::Base
          has_many :PatientAddresses
          has_many :PatientFileStatuses
        end

        class PatientAddress < ActiveRecord::Base
          belongs_to :Patient
        end

        class PatientFileStatus < ActiveRecord::Base
          belongs_to :Patient
        end

    The controller determines if there's a Patient selected; everything else is based on that. In the view, I will need data from each of these models. But it seems like I have to write an instance variable in my controller for every attribute that I want to use. So I start writing code like this:

        @patient = Patient.find(session[:patient])
        @patient_addresses = @patient.PatientAddresses
        @patient_file_statuses = @patient.PatientFileStatuses
        @enrollment_received_when = @patient_file_statuses[0].EnrollmentReceivedWhen
        @consent_received = @patient_file_statuses[0].ConsentReceived
        @consent_received_when = @patient_file_statuses[0].ConsentReceivedWhen

    The first three lines grab the Patient model and its relations. The next three lines are examples of my providing values to the view from one of those relations.

    The view has a combination of text fields and select fields to show the data above. For example:

        <%= select("patientfilestatus", "ConsentReceived",
                   {"val1" => "val1", "val2" => "val2", "Written" => "Written"},
                   :include_blank => true) %>
        <%= calendar_date_select_tag "patient_file_statuses[EnrollmentReceivedWhen]",
                                     @enrollment_complete_when, :popup => :force %>

    (BTW, the select tag isn't really working; I think I have to use collection_select?)

    My questions are:

    - Do I have to manually declare the value of every instance variable in the controller, or can/should I do it within the view?
    - What is the proper technique for displaying a select tag for data that's not on the primary model?
    - When I go to save changes to this form, will I have to manually pick out the attributes for each model and save them individually? Or is there a way to name the fields such that ActiveRecord does the right thing?

    Thanks in advance, Aaron.
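
    On the first question, a hedged sketch: one instance variable is generally enough, since the view can walk the associations itself; and if the app is on Rails 2.3 or later, accepts_nested_attributes_for plus fields_for lets one form read and save the whole graph. The association and attribute names below assume conventional snake_case naming:

        class Patient < ActiveRecord::Base
          has_many :patient_file_statuses
          accepts_nested_attributes_for :patient_file_statuses
        end

        # controller
        @patient = Patient.find(session[:patient])

        # view (ERB)
        <% form_for @patient do |f| %>
          <% f.fields_for :patient_file_statuses do |s| %>
            <%= s.select :consent_received,
                         ["val1", "val2", "Written"],
                         :include_blank => true %>
          <% end %>
        <% end %>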

  • Which is the "best" data access framework/approach for C# and .NET?

    - by Frans
    (EDIT: I made it a community wiki as it is more suited to a collaborative format.)

    There are a plethora of ways to access SQL Server and other databases from .NET. All have their pros and cons, and it will never be a simple question of which is "best" - the answer will always be "it depends". However, I am looking for a high-level comparison of the different approaches and frameworks in the context of different levels of systems. For example, I would imagine that for a quick-and-dirty Web 2.0 application the answer would be very different from that for an in-house enterprise-level CRUD application.

    I am aware that there are numerous questions on Stack Overflow dealing with subsets of this question, but I think it would be useful to try to build a summary comparison. I will endeavour to update the question with corrections and clarifications as we go. So far, this is my understanding at a high level - but I am sure it is wrong... I am primarily focusing on the Microsoft approaches to keep this focused.

    ADO.NET Entity Framework
    - Database agnostic: good because it allows swapping backends in and out
    - Bad because it can hit performance, and database vendors are not too happy about it
    - Seems to be MS's preferred route for the future
    - Complicated to learn (though, see 267357)
    - It is accessed through LINQ to Entities, so it provides ORM, thus allowing abstraction in your code

    LINQ to SQL
    - Uncertain future (see "Is LINQ to SQL truly dead?")
    - Easy to learn (?)
    - Only works with MS SQL Server
    - See also "Pros and cons of LINQ"

    "Standard" ADO.NET
    - No ORM
    - No abstraction, so you are back to "roll your own" and playing with dynamically generated SQL
    - Direct access, allows potentially better performance

    This ties in to the age-old debate of whether to focus on objects or relational data, to which the answer of course is "it depends on where the bulk of the work is", and since that is an unanswerable question, hopefully we don't have to go into it too much. IMHO, if your application is primarily manipulating large amounts of data, it does not make sense to abstract it too much into objects in the front-end code; you are better off using stored procedures and dynamic SQL to do as much of the work as possible on the back end. Whereas if you primarily have user interaction which causes database interaction at the level of tens or hundreds of rows, then ORM makes complete sense. So I guess my argument for good old-fashioned ADO.NET would be in the case where you manipulate and modify large datasets, in which case you will benefit from the direct access to the backend. Another case, of course, is where you have to access a legacy database that is already guarded by stored procedures.

    ASP.NET Data Source Controls
    - Are these something altogether different, or just a layer over standard ADO.NET?
    - Would you really use these if you had a DAL, or if you implemented LINQ or Entities?

    NHibernate
    - Seems to be a very powerful ORM
    - Open source

    Some other relevant links: "NHibernate or LINQ to SQL", "Entity Framework vs LINQ to SQL".

  • Weirdness debugging Visual Studio C++ 2008

    - by Jeff Dege
    I have a legacy C++ app that, in its most recent incarnation, we've been building with makefiles and VS2003's command-line tool. I'm trying to get it to build using VS2008 and MSBuild. The build is working OK, but I'm getting errors where I'd never seen errors before, and stepping through in VS2008's debugger only confuses me.

    The app links a number of static libraries, which fall into two categories: those that are part of the same application suite, and those that are shared between a number of application suites. Originally, I had a .vcproj file for each static library and two .sln files: one for the application suite (including the suite-specific libraries) and one for the non-suite-specific shared libraries. The shared libraries were included in the link; their projects were not included in the application suite .sln.

    The application instantiates an object from a class that is defined in one of the shared libraries. The class has a member object of a class that wraps a linked list. The constructor of the linked-list class sets its "head" pointer to null.

    When I run the app and try to add an element to the linked list, I get an error - the head pointer contains the value 0xCCCCCCCC. So I step through with the debugger. And see weirdness: when the current line in the debugger is in a source file belonging to the static library, the head pointer contains 0x00000000. When I step into the constructor, I can see the pointer being set to that value, and when I step into any other method of the class, I can see that the head pointer still contains 0x00000000. But when I step out into methods that are defined in the application suite .sln, it contains 0xCCCCCCCC. It's not like it's being overwritten - it changes back and forth depending upon which source file I am currently debugging.

    So I included the shared library's project in the application suite .sln, and now I see the head pointer containing 0xCCCCCCCC all the time. It looks like the constructor of the linked-list class is not being called. So now I'm entirely confused. Anyone have any ideas?
