Search Results

Search found 23968 results on 959 pages for 'tail call'.


  • Qt Socket blocking functions required to run in QThread where created. Any way past this?

    - by Alexander Kondratskiy
    The title is very cryptic, so here goes! I am writing a client that behaves in a very synchronous manner. Due to the design of the protocol and the server, everything has to happen sequentially (send request, wait for reply, service reply, etc.), so I am using blocking sockets.

    Here is where Qt comes in. In my application I have a GUI thread, a command-processing thread and a scripting-engine thread. I create the QTcpSocket in the command-processing thread, as part of my Client class. The Client class has various methods that boil down to writing to the socket, reading back a specific number of bytes, and returning a result.

    The problem comes when I try to directly call Client methods from the scripting-engine thread. The Qt sockets randomly time out, and when using a debug build of Qt, I get these warnings:

        QSocketNotifier: socket notifiers cannot be enabled from another thread
        QSocketNotifier: socket notifiers cannot be disabled from another thread

    Any time I call these methods from the command-processing thread (where Client was created), I do not get these problems. To phrase the situation simply: calling blocking functions of QAbstractSocket, like waitForReadyRead(), from a thread other than the one where the socket was created (dynamically allocated) causes random behaviour and debug asserts/warnings. Anyone else experienced this? Ways around it? Thanks in advance.
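    One commonly suggested workaround, sketched below with assumed names (the original post does not show Client's internals), is to leave all socket work in the thread that owns the QTcpSocket and marshal cross-thread calls through a blocking queued connection:

        // Hedged sketch: 'Client' owns the socket and lives in the
        // command-processing thread; other threads never touch the socket.
        #include <QtCore/QObject>
        #include <QtNetwork/QTcpSocket>

        class Client : public QObject
        {
            Q_OBJECT
        public slots:
            // Always runs in the thread the socket was created in, so the
            // blocking waitFor*() calls are safe here.
            QByteArray doRequest(const QByteArray &request)
            {
                m_socket->write(request);
                m_socket->waitForBytesWritten();
                m_socket->waitForReadyRead();
                return m_socket->readAll();
            }
        private:
            QTcpSocket *m_socket;
        };

        // Somewhere in the scripting-engine thread ('client' and 'request'
        // are assumed): block until the slot has run in the socket's own
        // thread, then pick up the return value. Only valid across threads;
        // calling it from the socket's own thread would deadlock.
        QByteArray reply;
        QMetaObject::invokeMethod(client, "doRequest",
                                  Qt::BlockingQueuedConnection,
                                  Q_RETURN_ARG(QByteArray, reply),
                                  Q_ARG(QByteArray, request));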

    Read the article

  • How can I create a WebBrowser control (ActiveX / IWebBrowser2) without a UI?

    - by wangminhere
    I cannot figure out how to use the WebBrowser control without having it create a window in the taskbar. I am using the IWebBrowser2 ActiveX control directly because I need some of the advanced features, like blocking the download of Java/ActiveX/images, etc. That apparently is not available in the WPF or WinForms WebBrowser wrappers (but those wrappers do have the ability to create the control with no UI). Here is my code for creating the control:

        Type webbrowsertype = Type.GetTypeFromCLSID(Iid_Clsids.CLSID_WebBrowser, true);
        m_WBWebBrowser2 = (IWebBrowser2)System.Activator.CreateInstance(webbrowsertype);
        m_WBWebBrowser2.Visible = false;
        m_WBOleObject = (IOleObject)m_WBWebBrowser2;
        int iret = m_WBOleObject.SetClientSite(this);
        iret = m_WBOleObject.SetHostNames("me", string.Empty);
        tagRECT rect = new tagRECT(0, 0, 0, 0);
        tagMSG nullMsg = new tagMSG();
        m_WBOleInPlaceObject = (IOleInPlaceObject)m_WBWebBrowser2;

        // In-place activate the WebBrowser
        iret = m_WBOleObject.DoVerb((int)OLEDOVERB.OLEIVERB_INPLACEACTIVATE, ref nullMsg,
                                    this, 0, IntPtr.Zero, ref rect);

        IConnectionPointContainer cpCont = (IConnectionPointContainer)m_WBWebBrowser2;
        Guid guid = typeof(DWebBrowserEvents2).GUID;
        IConnectionPoint m_WBConnectionPoint = null;
        cpCont.FindConnectionPoint(ref guid, out m_WBConnectionPoint);
        m_WBConnectionPoint.Advise(this, out m_dwCookie);

    This code works perfectly, but it shows a window in the taskbar. If I omit the DoVerb(OLEIVERB_INPLACEACTIVATE) call, then navigating to a web page does not work properly: Navigate() will not download everything on the page, and it never fires the DocumentComplete event. If I add a DoVerb(OLEIVERB_HIDE), I get the same behavior as if I had omitted the DoVerb(OLEIVERB_INPLACEACTIVATE) call. This seems like a pretty basic question, but I couldn't find any examples anywhere.

    Read the article

  • Windows Task Scheduler: IAction.QueryInterface() returns an error I cannot find a definition for

    - by Sascha
    Hello. I am attempting to schedule a task (to open an .exe at a specific time) using C++ Win32, but at one specific point I am getting an error. I have searched and searched to try to find the definition of this error, but I cannot find it. Do you know what this error means?

        Hexadecimal: 80004003
        Decimal:     2147500035

    I won't post the whole function because it's rather long (unless you need it to determine the error context). The code I am using (that causes the error) is the following:

        // QI for the executable task pointer.
        hr = action->QueryInterface(IID_IExecAction, (void**)execAction);
        action->Release();
        if (FAILED(hr))
        {
            printf("QueryInterface call failed for IExecAction: %x %X %u \n", hr, hr, hr);
            rootFolder->Release();
            task->Release();
            CoUninitialize();
            return false;
        }

    The output is:

        QueryInterface call failed for IExecAction: 80004003 80004003 2147500035
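    For reference, 0x80004003 is the standard COM HRESULT E_POINTER ("invalid pointer"), which points at the out parameter of the call. A sketch of the likely fix, assuming execAction is declared as an IExecAction*:

        // QueryInterface needs the ADDRESS of the interface pointer;
        // passing the pointer itself hands COM a bad out parameter.
        IExecAction *execAction = NULL;
        hr = action->QueryInterface(IID_IExecAction,
                                    reinterpret_cast<void**>(&execAction));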

    Read the article

  • Invalid argument in sendfile() with two regular files

    - by Daniel Hershcovich
    I'm trying to test the sendfile() system call under Linux 2.6.32 to zero-copy data between two regular files. As far as I understand, it should work: ever since 2.6.22, sendfile() has been implemented using splice(), and both the input file and the output file can be either regular files or sockets. The following is the content of sendfile_test.c:

        #include <sys/sendfile.h>
        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>   /* for close(); missing from the original */

        int main(int argc, char **argv)
        {
            int result;
            int in_file;
            int out_file;

            in_file = open(argv[1], O_RDONLY);
            out_file = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
            result = sendfile(out_file, in_file, NULL, 1);
            if (result == -1)
                perror("sendfile");
            close(in_file);
            close(out_file);
            return 0;
        }

    And when I'm running the following commands:

        $ gcc sendfile_test.c
        $ ./a.out infile

    the output is:

        sendfile: Bad file descriptor

    which means that the system call resulted in errno = EBADF, I think. What am I doing wrong?
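    Two details stand out. First, "Bad file descriptor" is EBADF, and the test run above passes only one argument, so open(argv[2], ...) receives a null path and out_file is -1. A minimal guard (a sketch) makes that failure visible at the open() rather than inside sendfile():

        /* Sketch: fail fast on a missing argument or a failed open(), so the
           problem surfaces as an open() error, not as EBADF in sendfile(). */
        if (argc < 3) {
            fprintf(stderr, "usage: %s infile outfile\n", argv[0]);
            return 1;
        }
        in_file = open(argv[1], O_RDONLY);
        if (in_file == -1) { perror("open infile"); return 1; }
        out_file = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (out_file == -1) { perror("open outfile"); return 1; }

    Second, even with both files open, the sendfile(2) man page notes that before Linux 2.6.33 the output descriptor must refer to a socket; writing to a regular file on 2.6.32 fails with EINVAL, which would match the "Invalid argument" in the title.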

    Read the article

  • jQuery - Callback failing if there is no options parameter

    - by user249950
    Hi, I'm attempting to build a simple plugin like this:

        (function($) {
            $.fn.gpSlideOut = function(options, callback) {
                // default options - these are used when no others are specified
                $.fn.gpSlideOut.defaults = {
                    fadeToColour: "#ffffb3",
                    fadeToColourSpeed: 500,
                    slideUpSpeed: 400
                };

                // build main options before element iteration
                var o = $.extend({}, $.fn.gpSlideOut.defaults, options);

                this.each(function() {
                    $(this)
                        .animate({backgroundColor: o.fadeToColour}, o.fadeToColourSpeed)
                        .slideUp(o.slideUpSpeed, function() { // note: the original had o.SlideUpSpeed, which is undefined
                            if (typeof callback == 'function') { // make sure the callback is a function
                                callback.call(this); // brings the scope to the callback
                            }
                        });
                });

                return this;
            };
        // invoke the function we just created, passing it the jQuery object
        })(jQuery);

    The confusion I'm having is that normally with jQuery plugins you can call something like this:

        $(this_moveable_item).gpSlideOut(function() {
            // Do stuff
        });

    without the options parameter, but it misses the callback if I do it like that, so I always have to write:

        var options = {};
        $(this_moveable_item).gpSlideOut(options, function() {
            // Do stuff
        });

    even if I only want to use the defaults. Is there any way to make sure the callback function is called whether or not the options parameter is there? Cheers.
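    A common idiom for this (a sketch, not from the original post) is to check the type of the first argument and shift it over when it is actually the callback:

        // If gpSlideOut(fn) was called, treat the lone argument as the
        // callback and fall back to the defaults for the options.
        $.fn.gpSlideOut = function(options, callback) {
            if ($.isFunction(options)) {
                callback = options;
                options = {};
            }
            // ... rest of the plugin as above
        };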

    Read the article

  • Pass Result of ASIHTTPRequest "requestFinished" Back to Originating Method

    - by Intelekshual
    I have a method (getAllTeams:) that initiates an HTTP request using the ASIHTTPRequest library:

        NSURL *httpURL = [[[NSURL alloc] initWithString:@"/api/teams"
                                          relativeToURL:webServiceURL] autorelease];
        ASIHTTPRequest *request = [[[ASIHTTPRequest alloc] initWithURL:httpURL] autorelease];
        [request setDelegate:self];
        [request startAsynchronous];

    What I'd like to be able to do is call [WebService getAllTeams] and have it return the results in an NSArray. At the moment, getAllTeams doesn't return anything because the HTTP response is evaluated in the requestFinished: method. Ideally I'd want to be able to call [WebService getAllTeams], wait for the response, and dump it into an NSArray. I don't want to create properties, because this is a disposable class (meaning it doesn't store any values, just retrieves them), and multiple methods are going to use the same requestFinished: (all of them returning an array).

    I've read up a bit on delegates and NSNotifications, but I'm not sure if either of them is the best approach. I found a snippet about implementing callbacks by passing a selector as a parameter, but it didn't pan out (since requestFinished: fires independently). Any suggestions? I'd appreciate even just being pointed in the right direction.

        NSArray *teams = [[WebService alloc] getAllTeams];

    (This currently doesn't work, because getAllTeams doesn't return anything, but requestFinished: does. I want to get the result of requestFinished: and pass it back to getAllTeams:.)
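    One option worth noting (a sketch with an assumed parsing helper): ASIHTTPRequest also supports synchronous requests, which lets getAllTeams block, inspect the response itself, and return the array directly, at the cost of blocking the calling thread:

        - (NSArray *)getAllTeams {
            NSURL *httpURL = [[[NSURL alloc] initWithString:@"/api/teams"
                                              relativeToURL:webServiceURL] autorelease];
            ASIHTTPRequest *request = [[[ASIHTTPRequest alloc] initWithURL:httpURL] autorelease];
            [request startSynchronous];   // blocks until the reply arrives
            if ([request error]) return nil;
            // teamsFromResponse: is a hypothetical parser for the response body
            return [self teamsFromResponse:[request responseString]];
        }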

    Read the article

  • Apache CXF REST Services w/ Spring AOP

    - by jconlin
    I'm trying to get Apache CXF JAX-RS services working with Spring AOP. I've created a simple logging class:

        public class AOPLogger {
            public void logBefore() {
                System.out.println("Logging Before!");
            }
        }

    My Spring configuration (beans.xml):

        <aop:config>
            <aop:aspect id="aopLogger" ref="test.aop.AOPLogger">
                <aop:before method="logBefore"
                            pointcut="execution(* test.rest.RestService(..))"/>
            </aop:aspect>
        </aop:config>

        <bean id="aopLogger" class="test.aop.AOPLogger"/>

    I always get an NPE in RestService when a call is made to a method getServletRequest(), which has:

        return messageContext.getHttpServletRequest();

    If I remove the AOP configuration or comment it out from my beans.xml, everything works fine. All of my actual REST services extend test.rest.RestService (which is a class) and call getServletRequest(). I'm just trying to get AOP up and running based off the example in the CXF JAX-RS documentation. Does anyone have any idea what I'm doing wrong? Thanks!
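    Two things in that configuration look suspect; a sketch of the likely intent follows. aop:aspect's ref attribute must name a bean id rather than a class, matching every method of a class needs the ".*" method pattern in the pointcut, and since RestService is a class (not an interface), Spring needs CGLIB proxying:

        <!-- Sketch: reference the bean by id, match all RestService methods,
             and force CGLIB proxying since the services are classes. -->
        <aop:config proxy-target-class="true">
            <aop:aspect id="logging" ref="aopLogger">
                <aop:before method="logBefore"
                            pointcut="execution(* test.rest.RestService.*(..))"/>
            </aop:aspect>
        </aop:config>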

    Read the article

  • Multiple calendars in Exchange Web Services

    - by user3559462
    I have multiple calendars in my mailbox, but using the EWS Managed API 2.0 I can retrieve only one of them: the main Calendar folder. Now I want the whole list of calendars, with the appointments and meetings in each. For example, I have three calendars, the first being the main one:

        1. Calendar (color code: default)
        2. Jorgen   (color code: pink)
        3. Soren    (color code: yellow)

    I can retrieve all the values of the main Calendar using the code below:

        // Binds the main Calendar folder; 'view' is an ItemView created
        // earlier (not shown in the post).
        Folder inbox = Folder.Bind(service, WellKnownFolderName.Calendar);
        view.PropertySet = new PropertySet(BasePropertySet.IdOnly);

        // This results in a FindItem call to EWS.
        FindItemsResults<Item> results = inbox.FindItems(view);
        i = 1;
        m = results.TotalCount;
        if (results.Count() > 0)
        {
            foreach (var item in results)
            {
                PropertySet props = new PropertySet(AppointmentSchema.MimeContent,
                                                    AppointmentSchema.ParentFolderId,
                                                    AppointmentSchema.Id,
                                                    AppointmentSchema.Categories,
                                                    AppointmentSchema.Location);

                // This results in a GetItem call to EWS.
                var email = Appointment.Bind(service, item.Id, props);
                string iCalFileName = @"C:\export\appointment" + i + ".ics"; // a '+' was missing before ".ics"

                // Save as .ics.
                using (FileStream fs = new FileStream(iCalFileName, FileMode.Create, FileAccess.Write))
                {
                    fs.Write(email.MimeContent.Content, 0, email.MimeContent.Content.Length);
                }
                i++;
            }
        }

    Now I want to get all the remaining calendars' schedules as well, but I am not able to. Please help; I need this urgently.
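    A sketch of one way to reach the other calendars (names assumed; same service and view as above): enumerate every folder whose folder class is "IPF.Appointment" under the mailbox root, then run the same FindItems/Bind loop against each one:

        // Find all calendar folders in the mailbox, not just the main one.
        FindFoldersResults folders = service.FindFolders(
            WellKnownFolderName.MsgFolderRoot,
            new SearchFilter.IsEqualTo(FolderSchema.FolderClass, "IPF.Appointment"),
            new FolderView(int.MaxValue) { Traversal = FolderTraversal.Deep });

        foreach (Folder calendar in folders.Folders)
        {
            FindItemsResults<Item> items = calendar.FindItems(view);
            // ... bind and export each appointment exactly as above
        }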

    Read the article

  • Rate limiting a ruby file stream

    - by Matthew Savage
    I am working on a project which involves uploading Flash video files to an S3 bucket from a number of geographically distributed nodes. The video files are about 2-3 MB each, and we are only sending one file (per node) every ten minutes; however, the bandwidth we consume needs to be rate-limited to ~20k/s, as these nodes are delivering streaming media to a CDN, and due to the locations we are only able to get 512k max upload.

    I have been looking into the AWS-S3 gem, and while it doesn't offer any kind of rate limiting, I am aware that you can pass in an IO stream. Given this, I am wondering if it might be possible to create a rate-limited stream which overrides the read method, adds in the rate-limiting logic (e.g., in its simplest form, a call to sleep between reads), and then calls out to the super of the overridden method. Another option I considered is hacking the code of Net::HTTP and putting the rate limiting into the send_request_with_body_stream method, which uses a while loop, but I'm not entirely sure which would be the best option.

    I have attempted to extend the IO class, but that didn't work at all; simply inheriting from the class with class ThrottledIO < IO didn't do anything. Any suggestions will be greatly appreciated.
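    A sketch of the wrapper idea (rate and interface are assumptions): IO instances are thin wrappers around OS file descriptors, which is why plain subclassing goes nowhere; wrapping a real IO and delegating is the usual route. Net::HTTP's body streaming only needs a read method, so sleeping between reads caps sustained throughput:

        # Delegate to a real IO and sleep proportionally to each chunk read,
        # so sustained throughput stays near bytes_per_sec.
        class ThrottledIO
          def initialize(io, bytes_per_sec)
            @io = io
            @bytes_per_sec = bytes_per_sec
          end

          def read(length = nil, buffer = nil)
            chunk = @io.read(length, buffer)
            sleep(chunk.size.to_f / @bytes_per_sec) if chunk
            chunk
          end

          # Forward everything else (eof?, size, ...) to the wrapped IO.
          def method_missing(name, *args, &block)
            @io.send(name, *args, &block)
          end
        end

        # Usage sketch: throttled = ThrottledIO.new(File.open(path), 20 * 1024)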

    Read the article

  • Can a destructor be recursive?

    - by Cubbi
    Is this program well-defined, and if not, why exactly?

        #include <iostream>
        #include <new>

        struct X
        {
            int cnt;
            X (int i) : cnt(i) {}
            ~X()
            {
                std::cout << "destructor called, cnt=" << cnt << std::endl;
                if ( cnt-- > 0 )
                    this->X::~X(); // explicit recursive call to dtor
            }
        };

        int main()
        {
            char* buf = new char[sizeof(X)];
            X* p = new(buf) X(7);
            p->X::~X(); // explicit call to dtor
            delete[] buf;
        }

    My reasoning: although invoking a destructor twice is undefined behavior, per 12.4/14, what it says exactly is this:

        the behavior is undefined if the destructor is invoked for an object whose lifetime has ended

    which does not seem to prohibit recursive calls. While the destructor for an object is executing, the object's lifetime has not yet ended, thus it's not UB to invoke the destructor again. On the other hand, 12.4/6 says:

        After executing the body [...] a destructor for class X calls the destructors for X's direct members, the destructors for X's direct base classes [...]

    which means that after the return from a recursive invocation of a destructor, all member and base class destructors will have been called, and calling them again when returning to the previous level of recursion would be UB. Therefore, a class with no bases and only POD members can have a recursive destructor without UB. Am I right?

    Read the article

  • Altering lazy-loaded object's private variables

    - by Kevin Pang
    I'm running into an issue with private setters when using NHibernate and lazy loading. Let's say I have a class that looks like this:

        public class User
        {
            public int Foo { get; private set; }
            public IList<User> Friends { get; set; }

            public void SetFirstFriendsFoo()
            {
                // This line works in a unit test but does nothing during a live
                // run with a lazy-loaded Friends list
                Friends[0].Foo = 1;
            }
        }

    The SetFirstFriendsFoo call works perfectly inside a unit test (as it should, since objects of the same type can access each other's private properties). However, when running live with a lazy-loaded Friends list, the SetFirstFriendsFoo call silently fails. I'm guessing the reason for this is that at run time, Friends[0] is no longer of type User but of a proxy class that inherits from User, since the Friends list was lazy-loaded.

    My question is this: shouldn't this generate a run-time exception? You get compile-time errors if you try to access another class's private properties, but when you run into a situation like this it looks like the app just ignores you and continues along its way.

    Read the article

  • Save a form in an XML file using Ajax and JSP

    - by novellino
    Hello. I want to create a simple form with a name and an email and save these data in an XML file. So far I have found that using Ajax with jQuery is quite easy, so I used the usual code:

        // dataString holds the values taken from the form
        var dataString = 'name=' + name + '&email=' + email;
        $.ajax({
            type: "POST",
            url: "users.xml",
            data: dataString,
            dataType: "xml",
            success: function() {
                ....
            }
        });

    If I understood well, in the url I should add the name of the XML file that will be created. When the user clicks a button I call the function with the Ajax request, and then I should call somewhere a function for generating the XML. I am also using two beans: one sets the elements of the user, and the other saves the data to the XML file. I am using the XStream library for the XML, although I don't know if it is the best solution. The problem now is that I cannot connect all these together in order to save the data in the XML file. Does anyone know what I should do? Thanks a lot!
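    One thing to note, plus a sketch of the missing server side with assumed names: a POST has to go to a JSP or servlet that handles the request; posting to a static users.xml file will not invoke any code. The handler can then feed the form fields through XStream, e.g.:

        // users_save.jsp (hypothetical name) -- the Ajax url would point here,
        // not at users.xml. 'User' is the asker's bean; XStream renders it as XML.
        User user = new User(request.getParameter("name"),
                             request.getParameter("email"));
        XStream xstream = new XStream();
        xstream.alias("user", User.class);
        // Writes a single user; a real version would merge into the existing
        // list so the file stays well-formed XML with one root element.
        FileWriter out = new FileWriter("users.xml");
        out.write(xstream.toXML(user));
        out.close();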

    Read the article

  • TFS Solution build cascading to several other builds even when common components were not modified

    - by Bob Palmer
    Hey all, here is the issue I am currently trying to work through. We are using Team Foundation Server 2008 and utilizing the automated build support out of the box. We have one very large project that encompasses a number of interrelated components and web sites, each of which is set up as a Visual Studio solution file. Many of these solutions are highly interrelated, since they may contain applications, or common libraries and shared components. We have roughly 20 applications, three large web sites, and about 20 components.

    Each solution may include projects from other solutions. For example, a solution for a console app would also include the project files for all of the components it utilizes, since we need to ensure that when someone changes a component and rebuilds it, the change is reflected in all of the projects that consume that component, and we can make sure nothing was broken.

    We have build projects for each solution, whether that's an application, component, or web site. For this example, call them solutions 01, 02, and 03. Each references multiple projects (its own core project and test projects, plus the projects for various components):

        Solution 01 has projects A, B, and C.
        Solution 02 has projects C, D, and E.
        Solution 03 has projects E, F, and G.

    Now, for the problem. If I modify project A, the system ends up rebuilding all three solutions. Worse, all thirty solutions reference a common project used for data access (call it project H). Because they all share one project in common, if I modify any solution in my stack, even if it does not touch project H, I still end up kicking off every single build script.

    Any thoughts on how to address this? Ideally I would only want to kick off builds whose constituent projects were directly modified; i.e., in the example above, if I modified project C, I would only rebuild solutions 01 and 02. Thanks!

    Read the article

  • Call an HTML button's onclick event from an ASP.NET server-side login Authenticate event

    - by CraigJSte
    I need to programmatically click an HTML button from a login event (in the code-behind). The HTML button sends variables to Flash with no postback, using the ExternalInterface API via JavaScript. Going from SWF to ASPX works great, but I need to send User.Identity from ASPX to the SWF via JavaScript after authenticating in the login event, which I am having an impossible time getting to work (calling an HTML event from the Login button). I tried scripting the JavaScript into the login event with no luck, possibly because the postback clears the SWF variables, so perhaps keeping the steps separate (log in, then send from the HTML button) would work. Here is my relevant code:

        <script type="text/javascript">
            function sendToActionScript(value) {
                swfobject.getObjectById("Property").sendToActionScript(value);
            }
        </script>

        <object ...> <!-- SWF file embedded --> </object>

        <form id="form1" runat="server">
            <asp:Login id="login1" OnAuthenticate="login1_Authenticate" />
        </form>

        <form id="form" onsubmit="return false;">
            <input type="text" name="input" id="input" value="" runat="server" />
            <button id="btnInput" runat="server" causesvalidation="false" visible="true"
                    style="width: 51px"
                    onclick="sendToActionScript(this.form.input.value);">Send</button><br />
        </form>

        // CODE BEHIND
        protected void Login1_Authenticate(object sender, AuthenticateEventArgs e)
        {
            // do something to get the user id and role,
            // bind the string (user or role) to input.value,
            // then call the HTML button's onclick event to send it to the SWF file
            // (which I could put in a separate function and call from Login_Authenticate)
        }

    Can anyone help me? I am out of ideas. Craig
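    One route worth sketching (userName stands in for whatever identity string is resolved during authentication): skip the HTML button entirely and register a startup script during the postback, so the browser calls sendToActionScript() after the page, and the SWF embedded in it, has reloaded:

        protected void Login1_Authenticate(object sender, AuthenticateEventArgs e)
        {
            // ... authenticate first, then hand the identity to the SWF on load.
            string script = string.Format("sendToActionScript('{0}');", userName);
            ClientScript.RegisterStartupScript(GetType(), "sendIdentity", script, true);
        }

    Since the script runs on the rendered page after the postback, this sidesteps the problem of the postback clearing state set before it.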

    Read the article

  • Debug NAudio MP3 reading difference?

    - by Conrad Albrecht
    My code using NAudio to read one particular MP3 gets different results than several other commercial apps. Specifically: my NAudio-based code finds ~1.4 sec of silence at the beginning of this MP3 before "audible audio" (a drum pickup) starts, whereas other apps (Windows Media Player, RealPlayer, WavePad) show ~2.5 sec of silence before that same drum pickup. The particular MP3 is "Like A Rolling Stone" downloaded from Amazon.com. I tested several other MP3s and none show any similar difference between my code and other apps. Most MP3s don't start with such a long silence, so I suspect that's the source of the difference.

    Debugging problems:

        - I can't actually find a way to even prove that the other apps are right and NAudio/me is wrong, i.e. to compare block-by-block my code's results to a "known good reference implementation"; therefore I can't even precisely define the "error" I need to debug.
        - Since my code reads thousands of samples during those 1.4 sec with no obvious errors, I can't think how to narrow down where/when in the input stream to look for a bug.
        - The heart of the NAudio code is a P/Invoke call to acmStreamConvert(), which is a Windows "black box" call which I can't think how to error-check.

    Can anyone think of any tricks/techniques to debug this?

    Read the article

  • log4js ConsoleAppender initialization

    - by perrierism
    I'm wondering if anyone happens to have some experience using Log4js? It seems its normal ConsoleAppender isn't always ready to use immediately after it's added to a logger object. If I have two sequential script tags in a document like:

        <!-- Initialize logger -->
        <script type="text/javascript">
            var logger = new Log4js.getLogger("JSLOG");
            logger.addAppender(new Log4js.ConsoleAppender(logger, false));
            logger.setLevel(Log4js.Level.INFO);
        </script>

        <!-- Use logger -->
        <script type="text/javascript">
            logger.info('Test test');
        </script>

    it causes the console pop-up window to appear with an error message on page load:

        12:58:23 PM WARN Log4js - Could not run the listener function () {
            return fn.apply(object, arguments);
        }. TypeError: this.outputElement is null

    The console is still initialized; it's there afterward, but for just that first logger call it doesn't seem to be there fully. If I make the first logger call setTimeout("logger.info('test test')", 1000), there is no error. So it's as if the appender is not ready immediately. Has anyone seen this before, or know what a workaround might be? Cheers
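    A minimal workaround sketch (assuming nothing else needs to log during page load): defer the first call until the document has loaded, so the appender's output element exists by the time it is written to, without hard-coding a timeout:

        // Defer the first log statement until the page -- and the appender's
        // output element -- has finished loading.
        window.onload = function () {
            logger.info('Test test');
        };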

    Read the article

  • JSESSIONID collision between two servers on same ip but different ports

    - by Steve Armstrong
    I've got a situation where I have two different webapps running on a single server, using different ports. They're both running Java's Jetty servlet container, so they both use a cookie parameter named JSESSIONID to track the session id. These two webapps are fighting over the session id:

        1. Open a Firefox tab and go to WebApp1.
        2. WebApp1's HTTP response has a Set-Cookie header with JSESSIONID=1.
        3. Firefox now has a Cookie header with JSESSIONID=1 in all its HTTP requests to WebApp1.
        4. Open a second Firefox tab and go to WebApp2.
        5. The HTTP request to WebApp2 also has a Cookie header with JSESSIONID=1, but in the doGet, when I call req.getSession(false) I get null. And if I call req.getSession(true) I get a new Session object, but then the HTTP response from WebApp2 has a Set-Cookie header with JSESSIONID=20.
        6. Now WebApp2 has a working session, but WebApp1's session is gone. Going to WebApp1 will give me a new session, blowing away WebApp2's session.
        7. Continue forever.

    So the sessions are thrashing between the two web apps. I'd really like req.getSession(false) to return a valid session if there's already a JSESSIONID cookie defined. One option is to basically reimplement the session framework with a HashMap and cookies called WEBAPP1SESSIONID and WEBAPP2SESSIONID, but that sucks, and it means I'd have to hack the new session stuff into ActionServlet and a few other places. This must be a problem others have encountered. Is Jetty's HttpServletRequest.getSession(boolean) just crappy?
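    For what it's worth, a sketch of the usual Jetty-side fix (the property name below is the Jetty 6 one; verify it against the Jetty version in use): give each webapp its own session cookie name via a context parameter, so the two servers stop overwriting each other's JSESSIONID:

        <!-- web.xml of WebApp1; WebApp2 would use a different param-value -->
        <context-param>
            <param-name>org.mortbay.jetty.servlet.SessionCookie</param-name>
            <param-value>WEBAPP1SESSIONID</param-value>
        </context-param>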

    Read the article

  • Sharing session variables between http and https versions of a site

    - by tangurena
    I am trying to fix an ASP.NET site that a friend botched while converting it from older technologies. To the user, the site appears to have public and secured sections. Behind the scenes, the public and private sites are separate web applications with separate app pools.

    The difficulty arises because the applications appear to share the same session IDs (when going from the public to the secured pages, the session ID remains the same), yet none of the (InProc) session variables are getting passed from the public site to the private one.

    Basically, the workflow consists of the user checking a checkbox ("I agree" type of stuff) on the public site (let's call that page http://www.boring.gov/iAgree.aspx), then logging in on the secured site (let's call that page https://www.boring.gov/login.aspx). The commandments from the parent agency in DC are that the user may not bookmark the login page, the user has to click "I agree" every time they log in, and the "I agree" stuff has to be on a separate page. What am I missing? How would you do it?

    Notes:

        1. This is getting hosted on a single Windows 2003 server.
        2. Yes, it is a government agency.
        3. I would have done things very differently if I had done the conversion, but I wasn't brought in until the poop hit the fan, and it is too late to redo things.
        4. Two previous SO threads appear to be related, yet don't apply.

    Read the article

  • POSTing JSON data to WCF REST

    - by Randall Sexton
    I'm trying to send data from a client application using jQuery to a REST WCF service based on the WCF REST Starter Kit. Here's what I have so far.

    Service definition:

        [WebHelp(Comment = "Save PropertyValues to the database")]
        [WebInvoke(Method = "POST",
                   UriTemplate = "PropertyValues_Save",
                   BodyStyle = WebMessageBodyStyle.WrappedRequest,
                   RequestFormat = WebMessageFormat.Json,
                   ResponseFormat = WebMessageFormat.Json)]
        [OperationContract]
        public bool PropertyValues_Save(Guid assetId, Dictionary<Guid, string> newValues)
        {
            ...
        }

    Call from the client:

        $.ajax({
            url: SVC_PROPERTYVALUES_SAVE,
            type: "POST",
            contentType: "application/json; charset=utf-8",
            data: jsonData,
            dataType: "json",
            error: function(XMLHttpRequest, textStatus, errorThrown) {
                alert(textStatus + ' ' + errorThrown);
            },
            success: function(data) {
                if (data) {
                    alert('Values saved');
                } else {
                    alert('Values failed to save');
                }
                $("#confirmSubmit").dialog('close');
            }
        });

    Example of the JSON being passed:

        {
            "assetId": "d70714c3-e403-4cc5-b8a9-9713d05b2ee0",
            "newValues": [
                { "key": "bd01aa88-b48d-47c7-8d3f-eadf47a46680", "value": "0e9fdf34-2d12-4639-8d70-19b88e753ab1" },
                { "key": "06e8eda2-a004-450e-90ab-64df357013cf", "value": "1d490aec-f40e-47d5-865c-07fe9624f955" }
            ]
        }

    I'm using Windows Authentication on the virtual directory. When I call operations that are GETs, everything is fine. This code prompts the browser to log in, and when I enter my credentials I simply get an alert in my browser which says "error undefined". Even if you can't help with my specific error, do you see anything that looks wrong at a glance? I've been beating my head on this nearly all day. Thanks in advance.
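    One detail worth checking (a guess based on how DataContractJsonSerializer handles dictionaries, not a confirmed diagnosis): Dictionary<Guid, string> is serialized as an array of Key/Value pairs with capitalized member names, so the wrapped request would need to look like this:

        {
            "assetId": "d70714c3-e403-4cc5-b8a9-9713d05b2ee0",
            "newValues": [
                { "Key": "bd01aa88-b48d-47c7-8d3f-eadf47a46680", "Value": "0e9fdf34-2d12-4639-8d70-19b88e753ab1" },
                { "Key": "06e8eda2-a004-450e-90ab-64df357013cf", "Value": "1d490aec-f40e-47d5-865c-07fe9624f955" }
            ]
        }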

    Read the article

  • Spring bean's DESTROY-METHOD attribute and web-application "prototype"d bean

    - by EugeneP
    I can't get the destroy-method attribute to work. First, even if I type a non-existent method name into the destroy-method attribute, Spring initialization completes fine (already strange!). Next, when a bean has "prototype" scope, I supposed it would be destroyed before the application is closed. That never happens; the method is simply never called in my case, although after extracting the bean I can call this method explicitly and it does its job. Could you explain why this method is never called in my Spring 2.5 case? P.S. The method exists, it is public, and it has no arguments.

    It seems to be a more difficult task than I thought. The problem is that this destroy method is called whenever the context is closed, and that is a rare case. My question is this: I have a web app and a "prototype"-scoped bean. What I need is for this destroy method to be called automatically by Spring when the current session is closed. I can do it by hand, but is there any solution where Spring does this job? If the bean is destroyed after the session is destroyed, might it be possible for Spring to call a method on that bean before destroying it?
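    For context, and a sketch with assumed class and method names: Spring does not manage the full lifecycle of prototype-scoped beans; it hands the instance to the caller and never invokes destroy-method on it, which would explain the silence (and why a bogus method name also goes unnoticed). For per-session cleanup, a session-scoped bean does get its destruction callback when the session ends (in Spring 2.5 this requires a web-aware context, e.g. RequestContextListener registered in web.xml):

        <!-- Sketch: scope the bean to the session so Spring can run its
             destroy-method when the session is torn down. 'example.Worker'
             and 'cleanup' are placeholder names. -->
        <bean id="worker" class="example.Worker" scope="session" destroy-method="cleanup">
            <aop:scoped-proxy/>
        </bean>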

    Read the article

  • .Net Architecture challenge: The Change-prone Frankenstein Model

    - by SDReyes
    Good morning SO! We've been scratching our heads over this interesting scenario at the office, and we're anxious to hear your ideas and approaches.

    We have a database whose schema is prone to changes; let's call it Prony. (It is used to store configuration parameters for embedded devices, so if the embedded-devices guy needs a new table, property, or relationship in the model, he should be able to adapt the schema in an easy way, which happens quite often.) Prony needs a web interface for creating and editing its data.

    We have another database containing data that also needs to be loaded onto the devices, after some transformations; let's call this one Oddy (its data is generated by an already existing administrative web application).

    Finally we have Tracy, a server that connects our DBs and our embedded devices. She should auto-adapt herself to our DBs' schema changes and serialize the data to the devices. Nice puzzle, don't you think? : )

    Our current candidates. Rady, the fast one: create some views in Prony that perform the data transformation from Oddy, then use Dynamic Data (or some RAD tool) to create/update a simple web interface for Prony (so it can even consult the transformed data coming from Oddy : ). As for Tracy, she will need to be recompiled to update her DB schema (Entity Framework should work) and use reflection to explore the schema recursively and serialize the data. Cons: we would have to recompile Tracy and Prony's web interface.

    What do you think of the candidate(s)? What would you do?

    Read the article

  • Why is a CoreData forceFetch required after a delete on the iPad but not the iPhone?

    - by alyoshak
    When the following code is run on the iPhone, the count of fetched objects after the delete is one less than before the delete, but on the iPad the count remains the same. This inconsistency was causing a crash on the iPad because elsewhere in the code, soon after the delete, fetchedObjects is called, and the calling code, trusting the count, attempts access to the just-deleted object's properties, resulting in an NSObjectInaccessibleException error (see below). A fix has been to use the commented-out call to performFetch: below, which when executed makes the second call to fetchedObjects yield the same result as on the iPhone without it. My question is: why is the iPad producing different results than the iPhone? This is the second such difference that I've discovered and posted recently.

        -(NSError*)deleteObject:(NSManagedObject*)mo
        {
            NSLog(@"\n\nNum objects in store before delete: %i\n\n",
                  [[self.fetchedResultsController fetchedObjects] count]);

            [self.managedObjectContext deleteObject:mo];

            // Save the context.
            NSError *error = nil;
            if (![self.managedObjectContext save:&error]) {
            }

            // [self.fetchedResultsController performFetch:&error]; // force a fetch

            NSLog(@"\n\nNum objects in store after delete (and save): %i\n\n",
                  [[self.fetchedResultsController fetchedObjects] count]);

            return error;
        }

    (The full NSObjectInaccessibleException is: "Terminating app due to uncaught exception 'NSObjectInaccessibleException', reason: 'CoreData could not fulfill a fault for '0x1dcf90 <x-coredata://DC02B10D-555A-4AB8-8BC4-F419C4982794/Blueprint/p")

    Read the article

  • SQL UNION ALL problem after using UNION ALL more than 10 times

    - by VBGKM
    I'm getting a formatting problem if I use more than 10 UNION ALL statements in my VBA code; if I use 10 or fewer, everything works great. What I'm trying to do is combine 12 worksheets (Excel 2007). I have a numerical column called SC that turns into string and date values if I have more than 10 UNION ALLs. If I try to use ROUND with more than 10 UNION ALLs, my last selection changes all the records by one unit. I'm using Microsoft.ACE.OLEDB.12.0 as my provider, and my connection string has worked for several other things in my code so far. Is there any limit on UNION ALL statements when using OLEDB? Here is my code:

        Dim StrOr As String
        Dim i As Variant
        Dim Cnt As ADODB.Connection
        Dim Rs As ADODB.Recordset

        For i = 1 To 12
            StrOr = StrOr & " " & "SELECT SC FROM [" & MonthName(i, True) & "$" & "] UNION ALL"
        Next

        ' Trim the trailing "UNION ALL" and terminate the statement
        StrOr = Left(StrOr, Len(StrOr) - 9) & ";"
        Call GetADOCnt
        Call ADORs
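    A sketch of one thing to try (a guess at the cause, not a confirmed fix): the ACE provider infers each column's type by scanning the first rows of each sheet, so mixed-content SC columns can be typed differently per subquery. Forcing mixed columns to be imported as text via IMEX=1 (together with the TypeGuessRows registry setting) often stabilizes the guessing:

        ' Hypothetical connection string -- the workbook path is assumed.
        Cnt.ConnectionString = "Provider=Microsoft.ACE.OLEDB.12.0;" & _
            "Data Source=" & ThisWorkbook.FullName & ";" & _
            "Extended Properties=""Excel 12.0;HDR=Yes;IMEX=1"";"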

    Read the article

  • Handling Errors in PHP

    - by Mike
    I have a custom class that, when called, will redirect to a page and send a 'message_type' and 'message' variable via GET. When the page opens, it checks for these variables and displays a 'success', 'warning', or 'error' message depending on the 'message_type' variable. I made it so the user thinks they stay on the same page, and it also allows other variables to be passed along with the message. Is this good practice, or should I just start using exceptions? Example:

        // Call a static function that will redirect to a page, with an error message
        RedirectWithMessage::go('somepage.php', MessageType::ERROR, 'Error message here.');

    The following checkMessage() function is in an include file:

        function checkMessage()
        {
            // note: the original tested strlen($_GET['message_type']) twice;
            // the second test should be on $_GET['message']
            if ((isset($_GET['message_type']) && strlen($_GET['message_type'])) &&
                (isset($_GET['message']) && strlen($_GET['message']))) {
                DisplayMessage::display($_GET['message_type'], $_GET['message']);
                return true;
            }
            return false;
        }

    On the page that is redirected to, call checkMessage():

        // If a message is received, display it. If not, do nothing.
        checkMessage();

    I know this might be vague, and I can supply more code if necessary. I guess the issue is that I don't have much experience using exceptions, but they seem cumbersome (writing try-catch blocks everywhere). Please let me know if I am making this more difficult for myself or if there is a better solution. Thanks! Mike
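    For comparison, a sketch of the exception-based version (doSomething() is a hypothetical operation): exceptions need not mean a try/catch around every call; a single handler near the top of the page can funnel all failures into the same display path:

        // One try/catch per page, reusing the existing display helper.
        try {
            doSomething(); // hypothetical operation that may throw
        } catch (Exception $e) {
            DisplayMessage::display(MessageType::ERROR, $e->getMessage());
        }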

    Read the article

  • Visual Studio: How to attach a debugger dynamically to a specific process

    - by Jeff Cyr
    I am building an internal dev tool to manage different processes commonly used in our development environment. The tool shows a list of the monitored processes, indicates their running state, and allows starting or stopping each process. I'd like to add the ability to attach a debugger to a monitored process from my tool, instead of going to 'Debug - Attach to Process' in Visual Studio and finding the process. My goal is to have something like Debugger.Launch() that would show a list of the available Visual Studio instances. I can't use Debugger.Launch() because it launches the debugger on the process that makes the call; I would need something like Debugger.Launch(processId). Does anyone know how to achieve this functionality?

    A solution could be to implement a command in each monitored process that calls Debugger.Launch() when the command is received from the monitoring tool, but I would prefer something that does not require modifying the code of the monitored processes.

    Side question: when using Debugger.Launch(), instances of Visual Studio that already have a debugger attached are not listed. Visual Studio is not limited to one attached debugger; you can attach to multiple processes when using 'Debug - Attach to Process'. Anyone know how to bypass this limitation when using Debugger.Launch(), or an alternative?
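    A sketch using the Visual Studio automation model (EnvDTE): find a running VS instance and ask its debugger to attach to a process by id. GetActiveVisualStudio() is a hypothetical helper, typically implemented with a Running Object Table (ROT) lookup for "!VisualStudio.DTE" monikers:

        // 'targetPid' is the id of the monitored process to debug.
        EnvDTE.DTE dte = GetActiveVisualStudio(); // hypothetical ROT lookup
        foreach (EnvDTE.Process p in dte.Debugger.LocalProcesses)
        {
            if (p.ProcessID == targetPid)
            {
                p.Attach(); // same effect as 'Debug - Attach to Process'
                break;
            }
        }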

    Read the article
