Search Results

Search found 16902 results on 677 pages for 'strange errors'.


  • Redeploying an ASP.NET site in IIS7 without files in use interfering

    - by fyjham
    Hey, we've got a process that redeploys ASP.NET websites; the deployment code is itself an ASP.NET application. The current method, which has worked for quite a while, is simply to loop over all the files in one folder and copy them over the top of the files in the webroot. The problem is that occasionally files end up being in use and hence can't be copied over. In the past this was intermittent enough not to matter, but on some of our higher-traffic sites it now happens the majority of the time. I'm wondering if anyone has a workaround or alternative approach that I haven't thought of. Currently my ideas are:

    1. Simply retry each file until it works. That's going to cause errors for a short time, though, which isn't really that good.
    2. Deploy to a new folder and update IIS's webroot to point at the new folder. I'm not sure how to do this short of running the application as an administrator and running batch files, which is very untidy. (A sketch of this approach follows.)

    Does anyone know the best way to do this, or whether it's possible to do #2 without running the publishing application as a user who has admin access? I'm willing to grant it special privileges, but I'd prefer to stop short of administrator.
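
    A minimal sketch of idea #2, assuming IIS 7's managed configuration API (Microsoft.Web.Administration) and that the publishing app's identity has been granted write access to the IIS configuration; the site name and path are placeholders:

        using Microsoft.Web.Administration;

        // Repoint the site's root at the freshly deployed folder; the old
        // folder can be deleted once in-flight requests have drained.
        using (ServerManager manager = new ServerManager())
        {
            Application app = manager.Sites["MySite"].Applications["/"];
            app.VirtualDirectories["/"].PhysicalPath = @"C:\deploys\build-1234";
            manager.CommitChanges();
        }

    Writing to the IIS configuration normally needs elevation, so this only sidesteps the batch files, not the permissions question.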

    Read the article

  • Android: How to periodically send location to a server

    - by Mark
    Hi, I am running a Web service that allows users to record their trips (kind of like Google's MyTracks) as part of a larger app. The thing is that it is easy to pass data, including coords and other items, to the server when a user starts or ends a trip. Being a newbie, I am not sure how to set up a background service that sends the location updates once every pre-determined period (min 3 minutes, max 1 hr) until the user flags the end of the trip, or until a preset amount of time elapses.

    Once the trip is started from the phone, the server responds with a polling period for the phone to use as the interval between updates. This part works, in that I can display the response on the phone, and my server registers the user's action. Similarly, the trip is closed server-side upon the close-trip request. However, when I tried starting a periodic tracking method from inside the StartTrack Activity, using requestLocationUpdates(String provider, long minTime, float minDistance, LocationListener listener) where minTime is the poll period from the server, it just did not work, and I'm not getting any errors. So I'm clueless at this point, never having used Android before.

    I have seen many posts here on using background services with handlers, pending intents, and other things to do similar stuff, but I really don't understand how to do it. I would like the user to be able to do other stuff on the phone while the updates are going on, so if you guys could point me to a tutorial that shows how to actually write background services (maybe these run as separate classes?) or other ways of doing this, that would be great. Thanks!
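
    Not a full tutorial, but a minimal sketch of the common Service-plus-Handler pattern (the class name and intent extra are made up; the actual upload and the manifest <service> declaration are omitted):

        import android.app.Service;
        import android.content.Intent;
        import android.location.Location;
        import android.location.LocationListener;
        import android.location.LocationManager;
        import android.os.Bundle;
        import android.os.Handler;
        import android.os.IBinder;

        public class TripTrackerService extends Service implements LocationListener {
            private LocationManager lm;
            private final Handler handler = new Handler();
            private long pollMillis;   // interval handed back by the server

            private final Runnable poll = new Runnable() {
                public void run() {
                    // ask for a fix; onLocationChanged() does the upload
                    lm.requestLocationUpdates(LocationManager.GPS_PROVIDER,
                            0, 0, TripTrackerService.this);
                    handler.postDelayed(this, pollMillis);
                }
            };

            @Override
            public void onStart(Intent intent, int startId) {
                lm = (LocationManager) getSystemService(LOCATION_SERVICE);
                pollMillis = intent.getLongExtra("pollMillis", 180000L);
                handler.post(poll);
            }

            public void onLocationChanged(Location loc) {
                lm.removeUpdates(this);   // one fix per poll is enough
                // TODO: POST loc.getLatitude()/getLongitude() to the server
            }

            @Override
            public void onDestroy() {
                handler.removeCallbacks(poll);
                lm.removeUpdates(this);
            }

            @Override
            public IBinder onBind(Intent intent) { return null; }
            public void onProviderDisabled(String provider) {}
            public void onProviderEnabled(String provider) {}
            public void onStatusChanged(String provider, int status, Bundle extras) {}
        }

    The Activity starts this with startService() and stops it with stopService() when the user ends the trip, so the UI stays free while updates run in the background.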

    Read the article

  • How to generate a monotone MART ROC in R?

    - by user1521587
    I am using R and applying MART (the algorithm for multiple additive regression trees) on a training set to build prediction models. When I look at the ROC curve, it is not monotone. I would be grateful if someone could help me with how to fix this. My guess is that MART initially generates n trees, and if these trees are not the same for all the models I am building, the results will not be comparable. Here are the steps I take:

    1. Fix the false-negative cost c_fn and let cost = c(0, 1, c_fn, 0).
    2. Build the MART model with:

           mart(x, y, lx, martmode='class', niter=2000, cost.mtx=cost)

       where x is the matrix of training-set variables, y is the observation matrix, and lx is the matrix that specifies which of the variables in x are numerical and which are categorical.

    3. Predict the test-set observations using the MART model found in step 2:

           y_pred = martpred(x_test, probs=T)

    4. Compute the false-positive and true-positive rates as follows:

           t = 1/(1+c_fn)   # threshold from the Bayes-optimal rule with c_fp=1
           p_0  = length(which(y_test==1))/dim(y_test)[1]
           p_01 = sum(1*(y_pred[,2]>t & y_test==0))/dim(y_test)[1]
           p_11 = sum(1*(y_pred[,2]>t & y_test==1))/dim(y_test)[1]
           p_fp = p_01/(1-p_0)
           p_tp = p_11/p_0

    5. Repeat steps 1-4 for a new false-negative cost.
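
    For comparison, a sketch of a curve that is monotone by construction, assuming y_pred and y_test as above: fit one model and sweep the threshold over its scores, instead of refitting per cost (each refit yields a different classifier, so the resulting points need not line up):

        # sweep every threshold of a single fitted model; p_fp and p_tp are
        # then both monotone in t, so the resulting ROC is monotone
        ts  <- sort(unique(y_pred[,2]), decreasing=TRUE)
        roc <- t(sapply(ts, function(t) c(
            p_fp = mean(y_pred[,2] > t & y_test == 0) / mean(y_test == 0),
            p_tp = mean(y_pred[,2] > t & y_test == 1) / mean(y_test == 1))))
        plot(roc[, "p_fp"], roc[, "p_tp"], type="l")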

    Read the article

  • Source folders for a maven project in eclipse

    - by 4NDR01D3
    Hello all, I have a project that uses maven, and I want to put it in my working environment with Eclipse (Galileo). The project is in an SVN server, and I can check out the project and everything looks OK. I can even run the unit tests, and everything works there. However, now that everything is there, I wanted to work on the code, and, oh surprise, there are no packages in my project. I mean, all the source code is in the src folder, and browsing through it I can see all my files, but if I open the files from there, they are opened as text files with no coloring and, worse, no help at all about compilation errors.

    I don't know what I am doing wrong now, because I had the same project working well on another machine. So here is what I did; please let me know if you notice that I did something wrong, missed any steps, or anything that can help me:

    1. In the SVN Repository view (using Subclipse 1.6.10), I added my SVN repository.
    2. Browsed to the folder where I have the pom file.
    3. Right click - Check out as a Maven project... (using m2eclipse 0.10.0.20100209).
    4. Used the default options and Finish. The projects were created with no problem. I say projects because this Maven project has modules, and each module became a project in Eclipse.
    5. Back in the Java perspective, right click on the project - Run as Maven test (using JWebUnitTest, because I am testing a servlet). BUILD SUCCESS!!

    But as I said, there are no packages, so I can't really develop in this environment. Any help?? Thanks!

    Read the article

  • "Invalid use of Null" when using Str() with a Null Recordset field, but Str(Null) works fine

    - by Mike Spross
    I'm banging my head against the wall on this one. I was looking at some old database reporting code written in VB6 and came across this line (the code is moving data from a "source" database into a reporting database):

        rsTarget!VehYear = Trim(Str(rsSource!VehYear))

    When rsSource!VehYear is Null, the above line generates an "Invalid use of Null" run-time error. If I break on the above line and type the following in the Immediate pane:

        ?rsSource!VehYear

    it outputs Null. Fine, that makes sense. Next, I try to reproduce the error:

        ?Str(rsSource!VehYear)

    I get an "Invalid use of Null" error. However, if I type the following into the Immediate window:

        ?Str(Null)

    I don't get an error. It simply outputs Null. If I repeat the same experiment with Trim() instead of Str(), everything works fine: ?Trim(rsSource!VehYear) returns Null, as does ?Trim(Null). No run-time errors. So my question is: how can Str(rsSource!VehYear) possibly throw an "Invalid use of Null" error when Str(Null) does not, when I know that rsSource!VehYear is equal to Null?

    Read the article

  • Why is passing a string literal into a char* argument only sometimes a compiler error?

    - by Brian Postow
    I'm working in a C and C++ program. We used to be compiling without the make-strings-writable option, but that was getting a bunch of warnings, so I turned it off. Then I got a whole bunch of errors of the form "Cannot convert const char* to char* in argument 3 of function foo", so I went through and made a whole lot of changes to fix those. However, today the program CRASHED because the literal "" was getting passed into a function that was expecting a char* and was setting the 0th character to 0. It wasn't doing anything bad, just trying to edit a constant, and crashing. My question is: why wasn't that a compiler error? In case it matters, this was on a Mac compiled with gcc-4.0.

    EDIT: added code:

        char * host = FindArgDefault("EMailLinkHost", "");
        stripCRLF(linkHost, '\n');

    where:

        char *FindArgDefault(char *argName, char *defVal)
        { // simplified
            char * val = defVal;
            return (val);
        }

    and:

        void stripCRLF(char *str, char delim)
        {
            char *p, *q;
            for (p = q = str; *p; ++p) {
                if (*p == 0xd || *p == 0xa) {
                    if (p[1] == (*p ^ 7))
                        ++p;
                    if (delim == -1)
                        *p = delim;
                }
                *q++ = *p;
            }
            *q = 0; // DIES HERE
        }

    This compiled and ran until it tried to set *q to 0...
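
    For what it's worth, a minimal illustration of why the compiler accepts this, assuming C++98/03 semantics (which gcc 4.0 implements): the conversion from a string literal to char* was only deprecated there, not removed, so it produces at most a warning, while writing through the pointer is undefined behaviour at run time.

        void zap(char *s) { s[0] = 0; }   // looks harmless to the compiler

        int main() {
            zap("");   // legal (deprecated) in C++98/03, so no error is
                       // required; the literal may live in read-only
                       // memory, hence the crash when zap() writes to it
        }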

    Read the article

  • UIAlertView Will not show

    - by John
    I have a program that requests a JSON string. I have created a class that contains the connect method below. When the root view is coming up, it makes a request through this class and method to load up some data for the root view. When I test the error code (by changing the URL host to 127.0.0.1), I expect the alert to show. The behavior is that the root view just goes black and the app aborts with no alert. No errors in debug mode on the console, either. Any thoughts as to this behavior? I've been looking around for hints to this for hours to no avail. Thanks in advance for your help. Note: the conditional for (error) is called, as well as the UIAlertView code.

        - (NSString *)connect:(NSString *)urlString {
            NSString *jsonString;
            UIApplication *app = [UIApplication sharedApplication];
            app.networkActivityIndicatorVisible = YES;
            NSError *error = nil;
            NSURLResponse *response = nil;
            NSURL *url = [[NSURL alloc] initWithString:urlString];
            NSURLRequest *req = [NSURLRequest requestWithURL:url
                                                 cachePolicy:NSURLRequestReloadIgnoringCacheData
                                             timeoutInterval:10];
            NSData *_response = [NSURLConnection sendSynchronousRequest:req
                                                      returningResponse:&response
                                                                  error:&error];
            if (error) {
                /* inform the user that the connection failed */
                //AlertWithMessage(@"Connection Failed!", message);
                UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Oopsie!"
                                                                message:@"Unable to connect! Try later, thanks."
                                                               delegate:nil
                                                      cancelButtonTitle:@"OK"
                                                      otherButtonTitles:nil];
                [alert show];
                [alert release];
            } else {
                jsonString = [[[NSString alloc] initWithData:_response
                                                    encoding:NSUTF8StringEncoding] autorelease];
            }
            app.networkActivityIndicatorVisible = NO;
            [url release];
            return jsonString;
        }
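
    One guess from the code alone: UIAlertView, like the rest of UIKit, must be used from the main thread; if this synchronous connect: runs on a background thread (the usual reason for using sendSynchronousRequest:), showing the alert there is undefined behaviour. Also note jsonString is returned uninitialized on the error path. A sketch of the main-thread hop:

        if (error) {
            UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Oopsie!"
                                                            message:@"Unable to connect! Try later, thanks."
                                                           delegate:nil
                                                  cancelButtonTitle:@"OK"
                                                  otherButtonTitles:nil];
            // -show must run on the main thread; this call retains the
            // alert until the selector has been performed
            [alert performSelectorOnMainThread:@selector(show)
                                    withObject:nil
                                 waitUntilDone:NO];
            [alert release];
            jsonString = nil;   // don't return an uninitialized pointer
        }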

    Read the article

  • Why Is Apache Giving 403?

    - by ThinkCL
    I am getting 403 errors from Apache when I send too many (12) synchronous HTTP POSTs via a desktop app I am building in Xcode / Objective-C. The 12 POST requests are just a few KB each and go out instantly, one after the other, and the Apache error log shows:

        client denied by server configuration: /the-path/the-file.php

    This is Apache 2.0 with PHP 5, and I have this same setup working fine on my local machine. The error is coming from a VPS with my host, which runs very fast and smooth and has plenty of resources. To debug, I threw a sleep(1); call (which stalls script execution by 1 second) into the PHP file, and that fixed it. This makes me think that I am breaking some limit on too many requests from a single IP in a certain amount of time. I have googled and combed the PHP ini and Apache configs, but I cannot find what that directive/setting might be. I should mention that, although it varies, the first 4 or 5 POSTs usually work, then it starts returning the 403 error intermittently after that. It's really acting like it's bogging down. Any ideas?
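
    A pure guess to check, since "client denied by server configuration" is Apache's generic access-denial message: a rate-limiting module such as mod_evasive installed on the VPS. Its thresholds look like the following, and defaults in this range are easily tripped by a dozen back-to-back requests from one IP:

        <IfModule mod_evasive20.c>
            DOSPageCount       5    # requests for the same URI...
            DOSPageInterval    1    # ...within this many seconds
            DOSSiteCount       50   # total requests per child per interval
            DOSSiteInterval    1
            DOSBlockingPeriod  10   # seconds the client then gets 403s
        </IfModule>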

    Read the article

  • Large scale Merge Replication strategy - what can go wrong?

    - by niidto
    Hi, I'm developing a piece of software that uses Merge Replication and SQL Compact on Windows Mobile 6. At the moment it is running on 5 devices reasonably well. The issues I've come up against are as follows:

    - The schema has had to change a lot, and will continue to have to change as the application evolves. There have been various errors replicating these schema changes down to the device, and uploads failing due to schema inconsistencies.
    - Subscriptions expiring (after 14 days) and being unable to reinitialize with upload - i.e., potential loss of any unsynced data up to that point.

    Basically, the worst case scenario is data loss, and when merge replication fails, there seems to be no way back to get the data off. My method until now has been to drop and recreate the subscription on the device. I don't hear of many people doing this, though it seems to solve everything. The long-term plan is to roll this out to 500+ devices. Any advice from people who have undertaken similar projects on how to minimise data loss, and how to write appropriate error-handling code to recover from sync failures, would be much appreciated. James

    Read the article

  • Error handling approach in PHP

    - by Industrial
    Hi everybody, we have a web server that we're about to launch a number of applications onto. They will all share database and memcached servers, but each application has its own MySQL database, and all memcached keys are prefixed per application.

    Possible scenario: if a memcached server in our cluster goes boom, we want someone (a system admin) to be automatically contacted by email, iPhone push notification, or any other appropriate way. But if we were to install 150 identical applications for our customers on our servers and a memcached server dies, all 150 applications will individually find this out and contact our system admin, who is most certainly going to start thinking about a new job where he or she isn't woken up by 150 messages sent at 4:15 in the morning.

    Possible solution: one idea is to set up an external server for error handling that gets a $_POST or cURL request and handles storage of the error message depending on the seriousness of the actual error. Upon receiving the error call, it would of course check whether the same memcached server has already been reported as offline, so there would be no need to spam the system admin with additional reminders...

    The questions: What's a good approach to handling errors like this? How do the big guys in the industry handle it? Thanks!
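
    A minimal sketch of the reporting call, assuming a collector endpoint (the URL and field names here are invented) that deduplicates on a key so one outage doesn't page the admin 150 times:

        <?php
        function report_error($component, $message) {
            $ch = curl_init('https://errors.example.com/report');
            curl_setopt($ch, CURLOPT_POST, true);
            curl_setopt($ch, CURLOPT_POSTFIELDS, array(
                'key'       => md5($component . $message), // collector drops repeats
                'component' => $component,
                'message'   => $message,
            ));
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            curl_setopt($ch, CURLOPT_TIMEOUT, 2); // never block the app on reporting
            curl_exec($ch);
            curl_close($ch);
        }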

    Read the article

  • How do I enforce the order of qmake library dependencies?

    - by James Oltmans
    I'm getting a lot of errors because qmake is improperly ordering the Boost libraries I'm using. Here's what the .pro file looks like:

        QT += core gui
        TARGET = MyTarget
        TEMPLATE = app
        CONFIG += no_keywords \
                  link_pkgconfig
        SOURCES += file1.cpp \
                   file2.cpp \
                   file3.cpp
        PKGCONFIG += my_package \
                     sqlite3
        LIBS += -lsqlite3 \
                -lboost_signals \
                -lboost_date_time
        HEADERS += file1.h \
                   file2.h \
                   file3.h
        FORMS += mainwindow.ui
        RESOURCES += Resources/resources.qrc

    This produces the following command:

        g++ -Wl,-O1 -o MyTarget file1.o file2.o file3.o moc_mainwindow.o \
            -L/usr/lib/x86_64-linux-gnu -lboost_signals -lboost_date_time \
            -L/usr/local/lib -lmylib1 -lmylib2 -lsqlite3 -lQtGui -lQtCore

    Note: mylib1 and mylib2 are statically compiled by another project and placed in /usr/local/lib with an appropriate pkg-config .pc file pointing there. The .pro file references them via my_package in PKGCONFIG. The problem is not with pkg-config's output but with Qt's ordering. Here's the .pc file:

        prefix=/usr/local
        exec_prefix=${prefix}
        libdir=${exec_prefix}/lib
        includedir=${prefix}/include

        Name: my_package
        Description: My component package
        Version: 0.1
        URL: http://example.com
        Libs: -L${libdir} -lmylib1 -lmylib2
        Cflags: -I${includedir}/my_package/

    The linking stage fails spectacularly, as mylib1 and mylib2 come up with a lot of undefined references to Boost libraries that both the app and mylib1 and mylib2 are using. We have another build method using scons, and it orders things properly for the linker. Its build command is below:

        g++ -o MyTarget file1.o file2.o file3.o moc_mainwindow.o \
            -L/usr/local/lib -lmylib1 -lmylib2 -lsqlite3 \
            -lboost_signals -lboost_date_time -lQtGui -lQtCore

    Note that the principal difference is the order of the Boost libs: scons puts them at the end, just before QtGui and QtCore, while qmake puts them first. The other differences in the compile commands are unimportant, as I have hand-modified the qmake-produced makefile and the simple reordering fixed the problem. So my question is: how do I enforce the right order in my .pro file, despite what qmake thinks it should be?
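
    One workaround sketch, an assumption rather than a tested fix: since mylib1/mylib2 need Boost symbols, the Boost libraries must come after them on the link line. Taking my_package out of PKGCONFIG and spelling the order out by hand in LIBS removes qmake's chance to reorder (its include flags then need restating via INCLUDEPATH):

        PKGCONFIG = sqlite3
        INCLUDEPATH += /usr/local/include/my_package
        LIBS += -L/usr/local/lib -lmylib1 -lmylib2 \
                -lsqlite3 -lboost_signals -lboost_date_time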

    Read the article

  • VC9 C1083 Cannot open include file: 'boost...' after trying to abstract an include dependency

    - by ronivek
    Hey, so I've been working on a project for the past number of weeks, and it uses a number of Boost libraries. In particular I'm using the boost::dynamic_bitset library quite extensively. I've had zero issues up until now, but tonight I discovered a dependency between some includes which I had to resolve, and I tried to do so by providing an abstract callback class. Effectively I now have the following.

    First include...

        class OtherClassCallback {
        public:
            virtual int someOtherMethod() const = 0;
        };

        class SomeClass {
        public:
            void someMethod(OtherClassCallback *oc) {
                ...
                oc->someOtherMethod();
                ...
            }
        };

    Second include...

        #include "SomeClass.h"

        class SomeOtherClass : public OtherClassCallback {
        public:
            int someOtherMethod() const { return this->someInt; }
        };

    Here is the issue: ever since I implemented this class I'm getting the following error:

        fatal error C1083: Cannot open include file: 'boost/dynamic_bitset/dynamic_bitset.hpp': No such file or directory

    Now I'm getting no other compiler errors, and it's a pretty substantial project. My include paths and so on are perfect, my files are fully accessible, and removing the changes fixes the issue. Does anyone have any idea what might be going on? I'm compiling to native Windows executables in VS9. I should confess that I'm very inexperienced with C++ in general, so go easy on me if it's something horribly straightforward; I can't figure it out.

    Read the article

  • C++ 64bit issue

    - by Bobby
    I have the following code:

        tmp_data = simulated_data[index_data];
        unsigned char *dem_content_buff;
        dem_content_buff = new unsigned char [dem_content_buff_size];
        int tmp_data;
        unsigned long long tmp_64_data;

        if (!(strcmp(dems[i].GetValType(), "s32")))
        {
            dem_content_buff[BytFldPos]     = tmp_data;
            dem_content_buff[BytFldPos + 1] = tmp_data >> 8;
            dem_content_buff[BytFldPos + 2] = tmp_data >> 16;
            dem_content_buff[BytFldPos + 3] = tmp_data >> 24;
        }

        if (!(strcmp(dems[i].GetValType(), "f64")))
        {
            dem_content_buff[BytFldPos]     = tmp_data;
            dem_content_buff[BytFldPos + 1] = tmp_data >> 8;
            dem_content_buff[BytFldPos + 2] = tmp_data >> 16;
            dem_content_buff[BytFldPos + 3] = tmp_data >> 24;
            dem_content_buff[BytFldPos + 4] = tmp_data >> 32;
            dem_content_buff[BytFldPos + 5] = tmp_data >> 40;
            dem_content_buff[BytFldPos + 6] = tmp_data >> 48;
            dem_content_buff[BytFldPos + 7] = tmp_data >> 56;
        }

    I am getting some weird memory errors in other places of the application when the second if statement is true and executed. When I comment out the second if statement, the program works fine. So I suspect the way I am performing bitwise operations on 64-bit data is incorrect. Can anyone see anything in this code that needs to be corrected?
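
    Not a full diagnosis, but one concrete bug: tmp_data is a 32-bit int, and shifting a 32-bit value by 32 or more bits is undefined behaviour in C++, so the f64 branch cannot be right as written. Presumably it should serialize the 64-bit variable instead; a sketch under that assumption:

        // assumption: the 8-byte field is supposed to come from tmp_64_data
        unsigned long long v = tmp_64_data;
        for (int b = 0; b < 8; ++b)
            dem_content_buff[BytFldPos + b] =
                static_cast<unsigned char>(v >> (8 * b));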

    Read the article

  • (Android) Seems like my JSON query is getting double encoded

    - by A Gardner
    Hi, I am getting some weird errors with my Android app. It appears that this code is double-encoding the JSON string. What should be sent is

        ?{"email":"[email protected]","password":"asdf"}

    or, percent-encoded,

        ?%7B%22email%22:%22.....

    What the server is seeing is

        %257B%2522email%2522:%2522....

    which means that after one decode the server sees

        %7B%22email%22:%22.....

    This confuses the server. Any ideas why this is happening? Thanks for your help. Code:

        DefaultHttpClient c = new DefaultHttpClient();
        if (cookies != null)
            c.setCookieStore(cookies);
        if (loginNotLogout) {
            jso.put("email", userData.email);
            jso.put("password", userData.password);
        }
        URI u = null;
        if (loginNotLogout)
            u = new URI("HTTP", "www.website.com", "/UserService", jso.toString(), "");
        else
            u = new URI("HTTP", "www.website.com", "/UserService", jso.toString(), "");
        HttpGet httpget = new HttpGet(u);
        HttpResponse response = c.execute(httpget);
        ret.jsonString = EntityUtils.toString(response.getEntity());
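
    A guess at a workaround: the multi-argument java.net.URI constructor percent-encodes its components itself, so if the JSON gets escaped anywhere else along the way (or the resulting URI is re-encoded later), the % signs are escaped again as %25. Encoding exactly once by hand and giving HttpGet a finished string sidesteps the question of who encodes what (needs java.net.URLEncoder; encode() throws UnsupportedEncodingException):

        // encode the JSON payload once, then never touch it again
        String query = URLEncoder.encode(jso.toString(), "UTF-8");
        HttpGet httpget = new HttpGet("http://www.website.com/UserService?" + query);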

    Read the article

  • RequestBuilder timeouts and browser connection limits per domain.

    - by WesleyJohnson
    This is specifically about GWT's RequestBuilder, but it should apply to general XHR as well. My company is having me build a near-realtime chat application over HTTP. Yes, I do realize there are better ways to do chat applications, but this is what they want. Eventually we want it working on the iPad/iPhone as well, so Flash is out, which rules out WebSockets and Comet as well, I think?

    Anyway, I'm running into issues where I've set GWT's RequestBuilder timeout to 10 seconds and we get very random and sporadic timeouts. We've got error handling and emailing on the server side and never get any errors, which suggests the underlying XHR request that RequestBuilder is built on never gets to the server and times out after 10 seconds. We're using these requests to poll the server for new messages rather often, for sending new messages to the server, and for polling (less frequently) other parts of the application. What I'm afraid of is that we're running into the browser's limit on concurrent connections to the same domain (2 for IE by default?).

    Now my question is: if I construct a RequestBuilder and call its send() method, and the browser blocks the request from being sent until one of the 2 connections per domain is free, does the timeout still run while the request is being blocked, or does it not start until the browser actually releases the underlying XHR? I hope that's clear; if not, please let me know and I'll try to explain more.

    Read the article

  • How can I make VS2010 insert using statements in the order dictated by StyleCop rules?

    - by Hamish Grubijan
    The related default StyleCop rules are:

    1. Place using statements inside the namespace.
    2. Sort using statements alphabetically...
    3. But ... System usings come first (still trying to figure out whether that means just using System; or using System[.*];).

    So, my use case: I find a bug and decide that I need to at least add an intelligible Assert to make debugging less painful for the next guy. So I start typing Debug.Assert( and IntelliSense marks it in red. I hover the mouse over Debug and, between using System.Diagnostics; and System.Diagnostics.Debug, I choose the former. This inserts using System.Diagnostics; after all other using statements. It would be nice if VS2010 did not assist me in writing code that won't build due to warnings-as-errors. How can I make VS2010 smarter? Is there some sort of setting, or does this require a full-fledged add-in of some sort?

    Read the article

  • asp.net dynamic HTML form

    - by user204588
    Hi, I want to create an HTML page inside an ASP.NET page using C# and then request that HTML page. The flow is: I'll be creating a request that will give me a response with some values. Those values will be stored in hidden fields in the HTML page I'm creating on the fly and then requesting. I figure it would be something like the code below, but I'm not sure it would work, and I've also received some "Thread Aborting" errors. So, does anyone know the proper way to do this, or can you at least direct me to a nice article or something?

        StringBuilder builder = new StringBuilder();
        builder.Append("<html><head></head>");
        builder.Append("<body onload=\"document.aButton.submit();\">");
        builder.Append("<input type=\"hidden\" name=\"something\" value=\"" + aValue + "\">");
        HttpContext.Current...Response.Write(builder.ToString());
        ... end response
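
    On the "Thread Aborting" errors, a hedged guess: if the response is being terminated with Response.End(), that method ends the request by throwing ThreadAbortException by design. A sketch of the usual alternative, which ends the response without aborting the worker thread:

        // assumes this runs inside a page or handler; builder as above
        HttpContext ctx = HttpContext.Current;
        ctx.Response.Write(builder.ToString());
        ctx.ApplicationInstance.CompleteRequest();   // no ThreadAbortException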

    Read the article

  • What can cause a spontaneous EPIPE error without either end calling close() or crashing?

    - by Hongli
    I have an application that consists of two processes (let's call them A and B), connected to each other through Unix domain sockets. Most of the time it works fine, but some users report the following behavior:

    1. A sends a request to B. This works.
    2. A now starts reading the reply from B.
    3. B sends a reply to A. The corresponding write() call returns an EPIPE error, and as a result B close()s the socket. However, A did not close() the socket, nor did it crash.
    4. A's read() call returns 0, indicating end-of-file. A thinks that B prematurely closed the connection.

    Users have also reported variations of this behavior, e.g.:

    1. A sends a request to B. This works partially, but before the entire request is sent A's write() call returns EPIPE, and as a result A close()s the socket. However, B did not close() the socket, nor did it crash.
    2. B reads a partial request and then suddenly gets an EOF.

    The problem is I cannot reproduce this behavior locally at all. I've tried OS X and Linux. The users are on a variety of systems, mostly OS X and Linux. Things that I've already tried and considered:

    - Double close() bugs (close() called twice on the same file descriptor): probably not, as that would result in EBADF errors, but I haven't seen them.
    - Increasing the maximum file descriptor limit. One user reported that this worked for him; the rest reported that it did not.

    What else can possibly cause behavior like this? I know for certain that neither A nor B close() the socket prematurely, and I know for certain that neither of them crashed, because both A and B were able to report the error. It is as if the kernel suddenly decided to pull the plug from the socket for some reason.

    Read the article

  • How do I cover unintuitive code blocks?

    - by naivedeveloper
    For some reason, I'm having a hard time trying to cover the block of code below. This code is an excerpt from the UNIX uniq command. I'm trying to write test cases to cover all blocks, but can't seem to reach this one:

        if (nfiles == 2)
          {
            // Generic error routine
          }

    In context:

        int main (int argc, char **argv)
        {
          int optc = 0;
          bool posixly_correct = (getenv ("POSIXLY_CORRECT") != NULL);
          int nfiles = 0;
          char const *file[2];

          file[0] = file[1] = "-";
          program_name = argv[0];
          skip_chars = 0;
          skip_fields = 0;
          check_chars = SIZE_MAX;

          for (;;)
            {
              /* Parse an operand with leading "+" as a file after "--" was seen;
                 or if pedantic and a file was seen; or if not obsolete.  */
              if (optc == -1
                  || (posixly_correct && nfiles != 0)
                  || ((optc = getopt_long (argc, argv, "-0123456789Dcdf:is:uw:",
                                           longopts, NULL)) == -1))
                {
                  if (optind == argc)
                    break;
                  if (nfiles == 2)
                    {
                      // Handle errors
                    }
                  file[nfiles++] = argv[optind++];
                }
              else switch (optc)
                {
                case 1:
                  {
                    unsigned long int size;
                    if (optarg[0] == '+'
                        && posix2_version () < 200112
                        && xstrtoul (optarg, NULL, 10, &size, "") == LONGINT_OK
                        && size <= SIZE_MAX)
                      skip_chars = size;
                    else if (nfiles == 2)
                      {
                        // Handle error
                      }
                    else
                      file[nfiles++] = optarg;
                  }
                  break;
                }
            }
        }

    Any help would be greatly appreciated. Thanks.
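
    A guess at a covering test: nfiles is incremented once per file operand, so the guarded branch runs when a third operand shows up after two have already been stored. Something like:

        # the third operand arrives with nfiles == 2, so the error branch fires
        ./uniq input.txt output.txt extra.txt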

    Read the article

  • Qt/C++ Error handling

    - by ShiGon
    I've been doing a lot of research about handling errors with Qt/C++, and I'm still as lost as when I started. Maybe I'm looking for an easy way out, like other languages provide. One language in particular provides an unhandled-exception event, which I use religiously: when the program encounters a problem, it raises the unhandled exception so that I can create my own error report. That report gets sent from my customer's machine to a server online, which I then read later.

    The problem I'm having with C++ is that any error handling has to be thought of beforehand (think try/catch or massive conditionals). In my experience, problems in code are not thought of beforehand, else there wouldn't be a problem to begin with. Writing a cross-platform application without a cross-platform error handling/reporting/trace mechanism is a little scary to me.

    My question is: is there any kind of Qt-specific or C++-specific "catch-all" error-trapping mechanism that I can use in my application, so that if something does go wrong I can at least write a report before it crashes?
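
    One mechanism worth knowing, sketched below for Qt 4: every signal/slot and event dispatch passes through QApplication::notify(), so overriding it gives a near catch-all for exceptions thrown anywhere in the event loop - enough to write a report before dying (std::set_terminate can back it up for everything else):

        #include <QApplication>
        #include <exception>

        class ReportingApp : public QApplication {
        public:
            ReportingApp(int &argc, char **argv) : QApplication(argc, argv) {}

            bool notify(QObject *receiver, QEvent *event) {
                try {
                    return QApplication::notify(receiver, event);
                } catch (const std::exception &e) {
                    // last chance to write the crash report
                    qCritical("Unhandled exception: %s", e.what());
                    throw;
                }
            }
        };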

    Read the article

  • Porting Symbian C++ to Android NDK

    - by Donal Rafferty
    I've been given some Symbian C++ code to port over for use with the Android NDK. The code has lots of Symbian-specific code in it, and I have very little experience with C++, so it's not going very well. The main thing slowing me down is trying to figure out the alternatives to use in normal C++ for the Symbian-specific code. At the minute the compiler is throwing out all sorts of errors for unrecognised types. From my recent research, these are the types that I believe are Symbian-specific: TInt, TBool, TDesC8, RSocket, TInetAddr, TBuf, HBufC, RPointerArray. Changing TInt and TBool to int and bool respectively works in the compiler, but I am unsure what to use for the other types. Can anyone help me out with them? Especially TDesC, TBuf and HBufC. (A rough mapping sketch follows.)

    Also, Symbian has a two-phase constructor using NewL and NewLC. Would changing this to a normal C++ constructor be OK?

    Finally, Symbian uses the cleanup stack to help eliminate memory leaks, I believe. Would removing the cleanup-stack code be acceptable? I presume it should be replaced with try/catch statements?
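
    For the types themselves, a rough mapping to start from - these are assumptions to verify case by case, since Symbian descriptor semantics don't map one-to-one onto standard strings:

        // TInt              -> int
        // TBool             -> bool
        // TDesC8 / TDes8    -> const std::string& / std::string (8-bit descriptors)
        // TBuf<N>           -> char buf[N] or std::string
        // HBufC*            -> std::string (heap-allocated, so a plain value works)
        // RPointerArray<T>  -> std::vector<T*>
        // RSocket           -> a plain BSD socket (int fd) in the NDK
        // TInetAddr         -> struct sockaddr_in

    On the other two questions: replacing NewL/NewLC with ordinary constructors is generally fine once Leaves become C++ exceptions, and the cleanup stack corresponds more naturally to RAII (stack objects and smart pointers) than to try/catch blocks.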

    Read the article

  • Why are we getting a WCF "Framing error" on some machines but not others

    - by Ian Ringrose
    We have just found we are getting "framing errors" (as reported by the WCF logs) when running our system on some customer test machines. It all works OK on our development machines.

    We have an abstract base class with KnownType attributes for all its subclasses. One of its subclasses is missing its DataContract attribute. However, it all worked on our test machine! On the customer's test machine, we got "framing errors" showing up in the WCF logs; this is not the error message I have seen in the past when missing a DataContract attribute or a KnownType attribute.

    I wish to get to the bottom of this, as we can no longer have confidence in our ability to test the system before giving it to the customer until we can make our machines behave the same as the customer's machines.

    Code that tries to show what I am talking about (not the real code):

        [DataContract()]
        [KnownType(typeof(SubClass1))]
        [KnownType(typeof(SubClass2))]
        // other subclasses with data members
        public abstract class Base
        {
            [DataMember]
            public int LotsMoreItemsThenThisInRealLife;
        }

        /// <summary>
        /// This works on some machines (not others) when passed to Contract::DoIt;
        /// note the missing [DataContract()]
        /// </summary>
        public class SubClass1 : Base
        {
            // has no data members
        }

        /// <summary>
        /// This works in all cases when passed to Contract::DoIt
        /// </summary>
        [DataContract()]
        public class SubClass2 : Base
        {
            // has no data members
        }

        public interface IContract
        {
            void DoIt(Base[] items);
        }

        public static class MyProgram
        {
            public static IContract ConntectToServerOverWCF()
            {
                // lots of code ...
                return null;
            }

            public static void Startup()
            {
                IContract server = ConntectToServerOverWCF();

                // this works all of the time
                server.DoIt(new Base[] { new SubClass2() { LotsMoreItemsThenThisInRealLife = 2 } });

                // this works "in development", e.g. on our machines,
                // but not on the customer's test machines!
                server.DoIt(new Base[] { new SubClass1() { LotsMoreItemsThenThisInRealLife = 2 } });
            }
        }

    Read the article

  • Why do I have to specify pure virtual functions in the declaration of a derived class in Visual C++?

    - by neuviemeporte
    Given the base class A and the derived class B:

        class A {
        public:
            virtual void f() = 0;
        };

        class B : public A {
        public:
            void g();
        };

        void B::g() { cout << "Yay!"; }
        void B::f() { cout << "Argh!"; }

    I get errors saying that f() is not declared in B while trying to define void B::f(). Do I have to declare f() explicitly in B? I think that if the interface changes I shouldn't have to correct the declarations in every single class deriving from it. Is there no way for B to get all the virtual functions' declarations from A automatically?

    EDIT: I found an article that says the inheritance of pure virtual functions is dependent on the compiler: http://www.objectmentor.com/resources/articles/abcpvf.pdf I'm using VC++2008; I wonder if there's an option for this.
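
    As far as the declaration goes, this is standard C++ rather than a VC++ quirk: an out-of-line definition like void B::f() requires a matching declaration inside class B, and override declarations are never inherited implicitly. A sketch of the minimal fix:

        class B : public A {
        public:
            void f();   // the override must be declared here...
            void g();
        };

        void B::f() { cout << "Argh!"; }   // ...before it can be defined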

    Read the article

  • LNK2019 Against CArray Add, GetAt, GetSize, all includes are present

    - by David J
    I'm having some issues trying to compile a DLL, and I just can't see where this linking error is coming from. My LNK2019 is:

        Exports.obj : error LNK2019: unresolved external symbol "__declspec(dllimport) public: int __thiscall CArray<struct HWND__ *,struct HWND__ *>::Add(struct HWND__ *)" (__imp_?Add@?$CArray@PAUHWND__@@PAU1@@@QAEHPAUHWND__@@@Z) referenced in function "int __stdcall _Disable(struct HWND__ *,long)" (?_Disable@@YGHPAUHWND__@@J@Z)

    _Disable(...) is:

        static BOOL CALLBACK _Disable(HWND hwnd, LPARAM lParam)
        {
            CArray<HWND, HWND>* pArr = (CHWndArray*)lParam;
            if (::IsWindowEnabled(hwnd) && ::IsWindowVisible(hwnd))
            {
                pArr->Add(hwnd);
                ::Enable(hwnd, FALSE);
            }
        }

    This is the first function in Exports.cpp; right above it is #include <afxtempl.h>. I have the Windows 7.1 SDK installed (and have tried reinstalling both that and VS2010). The exact same project compiles perfectly fine on other machines, so it can't be the code itself. I've spent countless hours researching, which led to desperate attempts at just changing random values in the solution file, including different Windows headers, etc. My last resort is just reinstalling the OS completely (assuming it's actually a problem with the Windows SDK being incorrect or something). Any suggestions at all would be a huge help.

    EDIT: I've added /showIncludes on the .cpp giving issues, and I do see afxtempl.h being included. It's being included multiple times due to other headers including it, but it is there (and it is from the same directory every time):

        1> Note: including file: C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\atlmfc\include\afxtempl.h

    Read the article

  • [WPF] ExceptionValidationRule doesn't react to exceptions...

    - by Darmak
    Hi, I have an ExceptionValidationRule (via ValidatesOnExceptions) on my TextBox:

        <Window.Resources>
            <Style x:Key="textStyleTextBox" TargetType="TextBox">
                <Style.Triggers>
                    <Trigger Property="Validation.HasError" Value="true">
                        <Setter Property="ToolTip"
                                Value="{Binding RelativeSource={RelativeSource Self},
                                                Path=(Validation.Errors)[0].ErrorContent}" />
                    </Trigger>
                </Style.Triggers>
            </Style>
        </Window.Resources>

        <TextBox x:Name="myTextBox"
                 Text="{Binding Path=MyProperty, ValidatesOnExceptions=True}"
                 Style="{StaticResource ResourceKey=textStyleTextBox}" />

    and MyProperty looks like this:

        private int myProperty;
        public int MyProperty
        {
            get { return myProperty; }
            set
            {
                if (value > 10)
                    throw new ArgumentException("LOL that's an error");
                myProperty = value;
            }
        }

    In DEBUG mode, the application crashes with the unhandled exception "LOL that's an error" (the WPF binding engine doesn't catch this, and I think it should...). In RELEASE mode, everything works fine. Can someone tell me why the hell this is happening? And how can I fix it?

    Read the article
