Search Results

Search found 24011 results on 961 pages for 'call me dummy'.

Page 287/961

  • Easy bidirectional communication via P2P NetStream

    - by andsve
    I've been looking into the P2P support in Flash 10, using the Adobe Stratus service. I have successfully been able to send data from one user to another, but my problem is that I haven't figured out how to send data back in some easy way (or as some kind of response to the first call). What I'm currently doing: first, set up a connection with the Stratus service:

        nc = new NetConnection();
        nc.addEventListener(NetStatusEvent.NET_STATUS, ncStatusHandler);
        nc.connect(APPLICATION_URL + DEVELOPER_KEY);

    On the "server" side I do:

        sendStream = new NetStream(nc, NetStream.DIRECT_CONNECTIONS);
        sendStream.addEventListener(NetStatusEvent.NET_STATUS, sendStreamHandler);
        sendStream.publish("file");

    And on the "client" side:

        // remoteFileID.text is manually copied by the user from the server (which is nc.nearID).
        recvStream = new NetStream(nc, remoteFileID.text);
        recvStream.client = this;
        recvStream.addEventListener(NetStatusEvent.NET_STATUS, recvStreamHandler);
        recvStream.play("file");

    Then I call a remote function on the client:

        ...
        sendStream.send("aRemoteFunction", parameterData);
        ...

    Now my problem: I want to do the same from the client to the server, to notify that everything went well, or that something failed. From what I understand, I will have to set up a new NetStream from the client to the server (i.e. publish on the client and play on the server). But to accomplish this, the server needs to know the nc.nearID of the client. Is it possible to get that ID without forcing the user to manually copy it from the client to the server? Or is there an easier way for the client to talk back to the server that I am missing?

    Read the article

  • comparing strings from two different sources in javascript

    - by andy-score
    This is a bit of a specific request, unfortunately. I have a CMS that allows a user to input text into a TinyMCE editor for various posts they have made. The editor is loaded via AJAX to allow multiple posts to be edited from one page. I want to be able to check if there were edits made to the main text if cancel is clicked. Currently I get the value of the text from the database during the AJAX call, json_encode it, then store it in a javascript variable during the callback, to be checked against later. When cancel is clicked, the current value of the hidden textarea (used by TinyMCE to store the data for submission) is grabbed using jQuery's .val() and checked against the stored value from the previous AJAX call like this:

        if (stored_value != textarea.val()) {
            return true;
        }

    It currently always returns true, even if no changes have been made. The issue seems to be that textarea.val() uses HTML entities, whereas the AJAX JSON version doesn't. The response from AJAX in Firebug looks like this:

        <p>some text<\/p>\r\n<p>some more text<\/p>

    The textarea source code looks like this:

        &lt;p&gt;some text&lt;/p&gt;
        &lt;p&gt;some more text&lt;/p&gt;

    These are obviously different, but how can I get them to be treated as the same when evaluated? Is there a function that compares the final output of a string, or a way to convert one string to the other using javascript? I tried using HTML entities in the AJAX page, but this returned the string with HTML entities intact when alerted, I assume because json_encoding it turned them into characters. Any help would be greatly appreciated.
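
    A minimal sketch of one way to normalize the two strings before comparing, using the browser itself to decode HTML entities (the decodeEntities helper is hypothetical, and line endings may still need normalizing separately):

        function decodeEntities(str) {
            // a textarea decodes entities (&lt; &gt; &amp;) but does not parse tags
            var el = document.createElement('textarea');
            el.innerHTML = str;
            return el.value;
        }

        if (decodeEntities(stored_value) != decodeEntities(textarea.val())) {
            return true; // the content actually changed
        }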

    Read the article

  • Passing a list of ints to WebMethod using jQuery and ajax.

    - by birdus
    I'm working on a web page (ASP.NET 4.0) and am just starting simple to try and get this AJAX call working (I'm an ajax/jQuery neophyte), and I'm getting an error on the call. Here's the js:

        var TestParams = new Object;
        TestParams.Items = new Object;
        TestParams.Items[0] = 1;
        TestParams.Items[1] = 5;
        TestParams.Items[2] = 10;
        var finalObj = JSON.stringify(TestParams);
        var _url = 'AdvancedSearch.aspx/TestMethod';
        $(document).ready(function () {
            $.ajax({
                type: "POST",
                url: _url,
                data: finalObj,
                contentType: "application/json; charset=utf-8",
                dataType: "json",
                success: function (msg) {
                    $(".main").html(msg.d);
                },
                error: function (xhr, ajaxOptions, thrownError) {
                    alert(thrownError.toString());
                }
            });
        });

    Here's the method in my code-behind file:

        [Serializable]
        public class TestParams
        {
            public List<int> Items { get; set; }
        }

        public partial class Search : Page
        {
            [WebMethod]
            public static string TestMethod(TestParams testParams)
            {
                // I never hit a breakpoint in here
                // do some stuff
                // return some stuff
                return "";
            }
        }

    Here's the stringified JSON I'm sending:

        {"Items":{"0":1,"1":5,"2":10}}

    When I run it, I get this error:

        Microsoft JScript runtime error: 'undefined' is null or not an object

    It breaks on the error function. I've also tried this variation on building the JSON (based on a sample on a website):

        var TestParams = new Object;
        TestParams.Positions = new Object;
        TestParams.Positions[0] = 1;
        TestParams.Positions[1] = 5;
        TestParams.Positions[2] = 10;
        var DTO = new Object;
        DTO.positions = TestParams;
        var finalObj = JSON.stringify(DTO);

    with this final JSON:

        {"positions":{"Positions":{"0":1,"1":5,"2":10}}}

    Same error message. It doesn't seem like it should be hard to send a list of ints from a web page to my WebMethod. Any ideas? Thanks, Jay
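
    A likely fix, sketched as an assumption rather than a verified answer: a List<int> binds to a JSON array, not to an object keyed by index, and the outer property has to match the WebMethod's parameter name (testParams) for page-method binding to find it:

        // a real array serializes to {"testParams":{"Items":[1,5,10]}}
        var payload = JSON.stringify({ testParams: { Items: [1, 5, 10] } });

        $.ajax({
            type: "POST",
            url: "AdvancedSearch.aspx/TestMethod",
            data: payload,
            contentType: "application/json; charset=utf-8",
            dataType: "json",
            success: function (msg) { $(".main").html(msg.d); }
        });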

    Read the article

  • How can I create objects based on dump file memory in a WinDbg extension?

    - by pj4533
    I work on a large application, and frequently use WinDbg to diagnose issues based on a DMP file from a customer. I have written a few small extensions for WinDbg that have proved very useful for pulling bits of information out of DMP files. In my extension code I find myself dereferencing C++ class objects in the same way, over and over, by hand. For example:

        Address = GetExpression("somemodule!somesymbol");
        ReadMemory(Address, &addressOfObj, sizeof(addressOfObj), &cb);
        // get the actual address
        ReadMemory(addressOfObj, &addressOfObj, sizeof(addressOfObj), &cb);
        ULONG offset;
        ULONG addressOfField;
        GetFieldOffset("somemodule!somesymbolclass", "somefield", &offset);
        ReadMemory(addressOfObj + offset, &addressOfField, sizeof(addressOfField), &cb);

    That works well, but as I have written more extensions, with greater functionality (and accessing more complicated objects in our application's DMP files), I have longed for a better solution. I have access to the source of our own application, of course, so I figure there should be a way to copy an object out of a DMP file and use that memory to create an actual object in the debugger extension that I can call functions on (by linking in DLLs from our application). This would save me the trouble of pulling things out of the DMP by hand. Is this even possible? I tried obvious things like creating a new object in the extension, then overwriting it with a big ReadMemory directly from the DMP file. This seemed to put the data in the right fields, but it freaked out when I tried to call a function. I figure I am missing something... maybe C++ pulls some vtable funkiness that I don't know about? My code looks similar to this:

        SomeClass* thisClass = SomeClass::New();
        ReadMemory(addressOfObj, &(*thisClass), sizeof(*thisClass), &cb);

    Read the article

  • When is onBind or onCreate called in an android service browser plugin?

    - by anselm
    I have adapted the example plugin of the Android source, and the browser recognises the plugin without any problem. Here is an extract of AndroidManifest.xml:

        <application android:icon="@drawable/icon" android:label="@string/app_name" android:debuggable="true">
            <service android:name="com.domain.plugin.PluginService">
                <intent-filter>
                    <action android:name="android.webkit.PLUGIN" />
                </intent-filter>
            </service>
        </application>
        <uses-sdk android:minSdkVersion="7" />
        <uses-permission android:name="android.webkit.permission.PLUGIN"></uses-permission>

    The actual Service class looks like so:

        public class PluginService extends Service {
            @Override
            public IBinder onBind(Intent arg0) {
                Log.d("PluginService", "onBind");
                return null;
            }

            @Override
            public void onCreate() {
                Log.d("PluginService", "onCreate");
                // TODO Auto-generated method stub
                super.onCreate();
                AssetInstaller.getInstance(this).installAssets("/data/data/com.domain.plugin");
            }
        }

    The AssetInstaller code is supposed to extract some files required by the actual plugin into the /data/data/com.domain.plugin directory; however, neither onBind nor onCreate is ever called. But I get lots of debug trace from the actual libnpplugin.so file I'm using. So the puzzle is: when, and under what circumstances, is the Service bound or created in the case of a browser plugin? As things look, the service seems to be a dummy service. Having said that, is there another intent that can be executed at installation time, perhaps? The only solution I see right now is installing the needed files from the native plugin code instead. Any ideas? I know this is quite a tricky question ;)

    Read the article

  • Automatically grow document view of NSScrollView using auto layout?

    - by Monolo
    Is there a simple way to get an NSScrollView to adapt to its document view changing size when using autolayout (the Lion feature)?

    I have tried to call both setNeedsUpdateConstraints: and setNeedsLayout: on the document view, the clip view and the scroll view, without any results. fittingSize of the document view reports the correct size. An NSPopover in conjunction with an NSViewController handles this nicely, with the popover growing and shrinking as needed, and I was hoping to get similarly simple and robust behaviour with the scroll view. I have checked the documentation for scroll views, but it doesn't seem to be updated for autolayout.

    Edited to clarify: The problem I experience is that the document view, which holds subviews, is not resized when the subviews change their size, even if they call invalidateIntrinsicContentSize. The contents of the document view are hence clipped to the original size of the document view as they grow. The document view is created in a nib and set as the scroll view's document view in an awakeFromNib method. What I hoped to obtain was that the document view frame would automatically be adjusted when its fittingSize changes, and the scrollbars updated accordingly. NSPopover does something similar, provided that the subviews of the content controller's view have their constraints set right and the various content hugging values are high enough (higher than the hidden popover window's height constraint priority, for one).

    Read the article

  • Passing Object to Service in WCF

    - by hgulyan
    Hi, I have my custom class Customer with its properties. I added the DataContract mark above the class and DataMember to the properties, and it was working fine. But when I call a service class's function, passing a Customer instance as a parameter, some of my properties get 0 values. While debugging I can see the properties' values, and after it gets to the function, some properties' values are 0. Why might that be? There's no code between these two actions. The DataContract mark works fine, everything's OK. Any suggestions on this issue? I tried to change ByRef to ByVal, but it doesn't change anything. Why would it pass other values correctly and some of the integer types just as 0? Maybe the answer is simple, but I can't figure it out. Thank you.

        <DataContract()>
        Public Class Customer
            Private Type_of_clientField As Integer = -1
            <DataMember(Order:=1)>
            Public Property type_of_client() As Integer
                Get
                    Return Type_of_clientField
                End Get
                Set(ByVal value As Integer)
                    Type_of_clientField = value
                End Set
            End Property
        End Class

        <ServiceContract(SessionMode:=SessionMode.Allowed)>
        <DataContractFormat()>
        Public Interface CustomerService
            <OperationContract()>
            Function addCustomer(ByRef customer As Customer) As Long
        End Interface

    The type_of_client property's value is 6 before I call the addCustomer function. After it enters that function, the value is 0.

    UPDATE: The issue is in instance creation. When I create an instance of a class on the client side that is stored on the service side, some of my properties pass 0 or nothing, but when I call a function of a service class that returns a new instance of that class, it works fine. What is the difference? Could it be a serialization issue?

    Read the article

  • Strange Problem with Webservice and IIS

    - by Rene
    Hello there, I have a problem which confuses me a little bit, resp. where I don't have any idea about what it could be. The system I'm using is Windows Vista, IIS 7.0, VS2008, Windows Software Factory, Entity Framework, WCF. The binding for all web services is wsHttpBinding.

    I'm using a web service hosted in IIS. This web service uses/calls another web service (also installed in IIS). If I use a client calling the first web service (which calls the second web service), it works fine for about 4-10 times. And then (it is repeatable to get this problem, but sometimes it happens after 4, sometimes after 10 times; it always will happen) the service and IIS get stuck. Stuck means that this web service isn't callable anymore and generates a timeout after 1 minute. Even increasing the timeout doesn't change anything. If I try to restart IIS, I get a timeout error, so IIS is also "stuck" (it is not really stuck, but I can't restart it). Only if I kill w3wp.exe is IIS restartable, and the web service will work again (until I again call this service several times).

    The log files (I'm no expert in things like logging or where to find/enable such logs, so to say: I'm a newbie), like HTTP logging, Event Viewer or WCF message logging, don't show any hints about the source of the problem. I don't have this problem when I'm using a web service which doesn't call another service. Calling a web service is done by service reference (I'm using no proxy classes), but I think this should be no problem. I have no idea of what is happening, nor how to solve this problem. Regards, Rene

    Read the article

  • Inheritance of closure objects and overriding of methods

    - by bobikk
    I need to extend a class which is encapsulated in a closure. The base class is the following:

        var PageController = (function(){
            // private static variable
            var _current_view;
            return function(request, new_view) {
                ...
                // privileged public function, which has access to _current_view
                this.execute = function() {
                    alert("PageController::execute");
                }
            }
        })();

    Inheritance is realised using the following function:

        function extend(subClass, superClass) {
            var F = function(){ };
            F.prototype = superClass.prototype;
            subClass.prototype = new F();
            subClass.prototype.constructor = subClass;
            subClass.superclass = superClass.prototype;
            StartController.cache = '';
            if (superClass.prototype.constructor == Object.prototype.constructor) {
                superClass.prototype.constructor = superClass;
            }
        }

    I subclass the PageController:

        var StartController = function(request) {
            // calling the constructor of the super class
            StartController.superclass.constructor.call(this, request, 'start-view');
        }

        // extending the objects
        extend(StartController, PageController);

        // overriding PageController::execute
        StartController.prototype.execute = function() {
            alert('StartController::execute');
        }

    Inheritance is working: I can call every PageController method from a StartController instance. However, method overriding doesn't work:

        var startCont = new StartController();
        startCont.execute();

    alerts "PageController::execute". How should I override this method?
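
    The likely cause, sketched as an assumption: PageController's constructor assigns this.execute directly on each instance, and an own property always shadows a method of the same name on the prototype chain, so the prototype override is never reached. One fix is to reassign the method per instance after calling the superclass constructor:

        var StartController = function(request) {
            StartController.superclass.constructor.call(this, request, 'start-view');
            // replace the instance method installed by PageController's constructor
            this.execute = function() {
                alert('StartController::execute');
            };
        };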

    Read the article

  • About Data Objects and DAO Design when using Hibernate

    - by X. Ma
    I'm hesitating between two designs for a database project using Hibernate.

    Design #1:

    (1) Create a general data provider interface, including a set of DAO interfaces and general data container classes. It hides the underlying implementation. A data provider implementation could access data in a database, or an XML file, or a service, or something else. The user of a data provider does not need to know about it.

    (2) Create a database library with Hibernate. This library implements the data provider interface in (1).

    The bad thing about Design #1 is that in order to hide the implementation details, I need to create two sets of data container classes: one in the general data provider interface (let's call them DPI-Objects), the other used in the database library, exclusively for entity/attribute mapping in Hibernate (let's call them H-Objects). In the DAO implementation, I need to read data from the database to create H-Objects (via Hibernate) and then convert H-Objects into DPI-Objects.

    Design #2: Do not create a general data provider interface. Expose H-Objects directly to components that use the database lib. So the user of the database library needs to be aware of Hibernate.

    I like Design #1 more, but I don't want to create two sets of data container classes. Is that the right way to hide H-Objects and other Hibernate implementation details from the user who uses the database-based data provider? Are there any drawbacks to Design #2? I will not implement another data provider in the near future, so should I just forget about the data provider interface and use Design #2? What do you think about this? Thanks for your time!

    Read the article

  • How to: NSAttributedString to CGImageRef

    - by kroko
    Hello! I'm writing a QuickLook plugin. Well, everything works; I just want to try to make it better ;). Thus the question. Here is the function that returns the thumbnail image and that I'm using now:

        QLThumbnailRequestSetImageWithData(
            QLThumbnailRequestRef thumbnail,
            CFDataRef data,
            CFDictionaryRef properties);

    http://developer.apple.com/mac/library/documentation/UserExperience/Reference/QLThumbnailRequest_Ref/Reference/reference.html#//apple_ref/c/func/QLThumbnailRequestSetImageWithData

    Right now I'm creating a TIFF, encapsulated in NSData. An example:

        // Setting CFDataRef
        CGSize thumbnailMaxSize = QLThumbnailRequestGetMaximumSize(thumbnail);
        NSMutableAttributedString *attributedString = [[[NSMutableAttributedString alloc]
            initWithString:@"dummy"
            attributes:[NSDictionary dictionaryWithObjectsAndKeys:
                [NSFont fontWithName:@"Monaco" size:10], NSFontAttributeName,
                [NSColor colorWithCalibratedRed:0.0 green:0.0 blue:0.0 alpha:1.0], NSForegroundColorAttributeName,
                nil]] autorelease];
        NSImage *thumbnailImage = [[[NSImage alloc] initWithSize:NSMakeSize(thumbnailMaxSize.width, thumbnailMaxSize.height)] autorelease];
        [thumbnailImage lockFocus];
        [[NSColor whiteColor] set];
        NSRectFill(NSMakeRect(0, 0, thumbnailMaxSize.width, thumbnailMaxSize.height));
        [attributedString drawInRect:NSMakeRect(0, 0, thumbnailMaxSize.width, thumbnailMaxSize.height)];
        [thumbnailImage unlockFocus];
        (CFDataRef)[thumbnailImage TIFFRepresentation]; // This is data

        // Setting CFDictionaryRef
        (CFDictionaryRef)[NSDictionary dictionaryWithObjectsAndKeys:@"kUTTypeTIFF",
            (NSString *)kCGImageSourceTypeIdentifierHint, nil]; // this is properties

    However, QuickLook provides another function to return the thumbnail image, namely:

        QLThumbnailRequestSetImage(
            QLThumbnailRequestRef thumbnail,
            CGImageRef image,
            CFDictionaryRef properties);

    http://developer.apple.com/mac/library/documentation/UserExperience/Reference/QLThumbnailRequest_Ref/Reference/reference.html#//apple_ref/c/func/QLThumbnailRequestSetImage

    I have a feeling that passing a CGImage to QL instead of TIFF data would help speed things up. However, I have never worked with a CG context before. I know, the documentation is there :), but anyway, could anyone give an example of how to turn that NSAttributedString into a CGImageRef? An example is worth 10 times reading the documentation ;). Any help appreciated. Thanks in advance!

    Read the article

  • How to associate static entity instances in a Session without database retrieval

    - by Michael Hedgpeth
    I have a simple Result class that used to be an enum but has evolved into being its own class with its own table.

        public class Result
        {
            public static readonly Result Passed = new Result(StatusType.Passed) { Id = [Predefined] };
            public static readonly Result NotRun = new Result(StatusType.NotRun) { Id = [Predefined] };
            public static readonly Result Running = new Result(StatusType.Running) { Id = [Predefined] };
        }

    Each of these predefined values has a row in the database at its predefined Guid Id. There is then a failed result that has an instance per failure:

        public class FailedResult : Result
        {
            public FailedResult(string description) : base(StatusType.Failed) { . . . }
        }

    I then have an entity that has a Result:

        public class Task
        {
            public Result Result { get; set; }
        }

    When I save a Task, if the Result is a predefined one, I want NHibernate to know that it doesn't need to save that to the database, nor does it need to fetch it from the database; I just want it to save by Id. The way I get around this is, when I am setting up the session, I call a method to load the static entities:

        protected override void OnSessionOpened(ISession session)
        {
            LockStaticResults(session, Result.Passed, Result.NotRun, Result.Running);
        }

        private static void LockStaticResults(ISession session, params Result[] results)
        {
            foreach (var result in results)
            {
                session.Load(result, result.Id);
            }
        }

    The problem with the session.Load method call is that it appears to be fetching from the database (something I don't want to do). How could I make this so it does not fetch the database, but trusts that my static (immutable) Result instances are both up to date and part of the session?

    Read the article

  • How do I use the 7-zip LZMA SDK 9.x to self-extract?

    - by Christopher
    I am writing an SFX for an installer. I have a number of good reasons for doing this, primarily:

    - The installer is actually a large Python program which uses plugins. Using py2exe or pyinstaller makes doing plugins annoyingly complicated.
    - I want to be able to pass command-line options directly to the Python installer script, as if it were getting run directly.
    - Using the existing 7-zip SFX modules is clunky because I cannot pass command-line options directly into the processes I want to start.
    - I need more flexibility than any of the existing SFX modules I have seen provide.

    I have already tried using the SDK to open the file, seek to the 7z archive signature, and run the decompression from there. That fails because the SzArEx_Open() call appears to assume that you are starting at a 0 offset in the file. I am using the File_Seek() call to perform the seeking. It seems like there must be a way to do this, since the 7z archive format itself supports multiple embedded streams. Any pointers to examples would be awesome, but narrative explanation is also quite welcome!

    Read the article

  • What's next for all of these Microsoft "overlapping" and "enhanced" products?

    - by pointlesspolitics
    Recently I attended a road show organised by an MS Gold Partner company in the UK. The products discussed were: SharePoint Server (2010 and 2007), Exchange Server, Office Communications Server 2007, Exchange Hosted Services, Office Live Meeting, Office Communicator, System Center Configuration Manager and Operations Manager, VMware, Windows 7, etc.

    As Microsoft claims enhancements in each product over the previous version, I felt that clients are not much interested in all these details. For example, Office Communicator: surely they have improved the product a lot, and at first sight everyone said "WOW, great product", but nobody wishes to pay money for all these extra features. Some argued they are bogged down by all the extra menus. They don't need a soft-call feature integrated with mobile calls. The same applies to all the other products, such as MS Office (what next, 2 ribbons?), the Windows OS and many more. Indeed there must be good features in all these products, but is it worth spending money and time to update the older systems? Also, sometimes these features will decrease productivity instead of increasing it.

    So do you think that whatever enhancement MS is doing in the products is only for selling purposes, not for real use? And, I think, also to keep developers busy learning the new tools and features. I am sure some people here will argue that some people need this sort of feature, but I am not talking about NASA or MI5 guys. I am talking about usual businesses and Joe Public. Any ideas welcome.

    Read the article

  • Runtime Exception when using Custom Healthmonitoring event in medium trust

    - by Elementenfresser
    Hi, I'm using custom health monitoring events in ASP.NET. We recently moved to a new server with default High trust permissions. The literature says that health monitoring and custom events should work under Medium or higher trust (http://msdn.microsoft.com/en-us/library/bb398933.aspx). The problem is, it doesn't. In anything less than Full trust I get a SecurityException saying:

        The application attempted to perform an operation not allowed by the security policy

    It works in Full trust, or when I remove the inheritance from System.Web.Management.WebErrorEvent. Any suggestions, anyone? Here is the super-simple code-behind with a custom event defined:

        public partial class Default : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                try
                {
                    CallCustomEvent();
                }
                catch (Exception ex)
                {
                    Response.Write(ex.Message);
                    throw ex;
                }
            }

            /// <summary>
            /// this method is never called due to lacking permissions...
            /// </summary>
            private void CallCustomEvent()
            {
                try
                {
                    //do something useful here
                }
                catch (Exception)
                {
                    //code to instantiate the forbidden inheritance...
                    WebBaseEvent.Raise(new CustomEvent());
                }
            }
        }

        /// <summary>
        /// custom event inheriting WebErrorEvent, which is not allowed in high trust? can't believe that...
        /// </summary>
        public class CustomEvent : WebErrorEvent
        {
            public CustomEvent()
                : base("test", HttpContext.Current.Request, 100001, new ApplicationException("dummy"))
            {
            }
        }

    And the web.config excerpt for High trust:

        <system.web>
            <trust level="High" originUrl="" />
        </system.web>

    Read the article

  • VBScript: Disable caching of response from server to HTTP GET URL request

    - by Rob
    I want to turn off the cache used when a URL call to a server is made from VBScript running within an application on a Windows machine. What function/method/object do I use to do this?

    When the call is made for the first time, my Linux-based Apache server returns a response back from the CGI Perl script that it is running. However, subsequent runs of the script seem to be using the same response as the first time, so the data is being cached somewhere. My server logs confirm that the server is not being called on those subsequent runs, only the first time.

    This is what I am doing. I am using the following code from within a commercial application (I don't wish to mention the application; it's probably not relevant to my problem):

        With CreateObject("MSXML2.XMLHTTP")
            .open "GET", "http://myserver/cgi-bin/nsr/nsr.cgi?aparam=1", False
            .send
            nsrresponse = .responseText
        End With

    Is there a function/method on the above object to turn off caching, or should I be calling a method/function to turn off the caching on a response object before making the URL call?

    I looked here for a solution: http://msdn.microsoft.com/en-us/library/ms535874(VS.85).aspx - not quite helpful enough. And here: http://www.w3.org/TR/XMLHttpRequest/ - very unfriendly and hard to read.

    I am also trying to force not using the cache via HTTP header settings and HTML document header meta data. A snippet of the server-side Perl CGI script that returns the response back to the calling client, setting expiry to 0:

        print $httpGetCGIRequest->header(
            -type    => 'text/html',
            -expires => '+0s',
        );

    HTTP header settings in the response sent back to the client:

        <html><head><meta http-equiv="CACHE-CONTROL" content="NO-CACHE"></head>
        <body>
        response message generated from server
        </body>
        </html>

    The above HTTP header and HTML document head settings haven't worked, hence my question.
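
    Two usual fixes, sketched here in JScript syntax for illustration (the same calls are available on the object from VBScript): ask the component not to serve a cached copy via request headers, and make every URL unique with a throwaway timestamp parameter so nothing can match the cache:

        var xhr = new ActiveXObject("MSXML2.XMLHTTP");
        // the extra _ts parameter is a hypothetical cache-buster, ignored by the CGI
        var url = "http://myserver/cgi-bin/nsr/nsr.cgi?aparam=1&_ts=" + new Date().getTime();
        xhr.open("GET", url, false);
        xhr.setRequestHeader("Cache-Control", "no-cache");
        xhr.setRequestHeader("If-Modified-Since", "Sat, 1 Jan 2000 00:00:00 GMT");
        xhr.send();
        var nsrresponse = xhr.responseText;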

    Read the article

  • Dumping an ADODB recordset to XML, then back to a recordset, then saving to the db

    - by Mark Biek
    I've created an XML file using the .Save() method of an ADODB recordset, in the following manner:

        dim res
        dim objXML: Set objXML = Server.CreateObject("MSXML2.DOMDocument")

        'This returns an ADODB recordset
        set res = ExecuteReader("SELECT * from some_table")

        With res
            Call .Save(objXML, 1)
            Call .Close()
        End With
        Set res = nothing

    Let's assume that the XML generated above then gets saved to a file. I'm able to read the XML back into a recordset like this:

        dim res : set res = Server.CreateObject("ADODB.recordset")
        res.open server.mappath("/admin/tbl_some_table.xml")

    And I can loop over the records without any problem. However, what I really want to do is save all of the data in res to a table in a completely different database. We can assume that some_table already exists in this other database and has the exact same structure as the table I originally queried to make the XML. I started by creating a new recordset and using AddNew to add all of the rows from res to the new recordset:

        dim outRes : set outRes = Server.CreateObject("ADODB.recordset")
        dim outConn : set outConn = Server.CreateObject("ADODB.Connection")
        dim testConnStr : testConnStr = "DRIVER={SQL Server};SERVER=dev-windows\sql2000;UID=myuser;PWD=mypass;DATABASE=Testing"

        outConn.open testConnStr
        outRes.activeconnection = outConn
        outRes.cursortype = adOpenDynamic
        outRes.locktype = adLockOptimistic
        outRes.source = "product_accessories"
        outRes.open

        while not res.eof
            outRes.addnew
            for i=0 to res.fields.count-1
                outRes(res.fields(i).name) = res(res.fields(i).name)
            next
            outRes.movefirst
            res.movenext
        wend

        outRes.updatebatch

    But this bombs the first time I try to assign a value from res to outRes:

        Microsoft OLE DB Provider for ODBC Drivers error '80040e21'
        Multiple-step OLE DB operation generated errors. Check each OLE DB status value, if available. No work was done.

    Can someone tell me what I'm doing wrong, or suggest a better way for me to copy the data loaded from XML to a different database?

    Read the article

  • Windows Workflow Foundation: Recommendations how to design architecture

    - by Petr Felzmann
    We are running several instances of the same ASP.NET application (one per customer) based on our custom framework (libraries). Each application uses its own database (the Initial Catalog in terms of the connection string). Now we would like to add workflow capability (of course 4.0 ;) to the applications. The particular workflows will be the same for all the applications; only some initial settings of each workflow can vary, e.g. in one application the e-mail will be sent to user X, but in another application to user Y.

    I have several general questions on how to design the architecture:

    (1) Can the workflow database be shared by all the applications?

    (2) Where should the workflow engine be hosted: inside our custom Windows NT service, or inside IIS? What are the criteria for choosing the right host?

    (3) How should the workflow engine communicate with the applications? Should the application call some WCF endpoint API configured in the workflow host, or vice versa: should each application provide a WCF endpoint API that the workflow engine will call? How then will the workflow engine identify applications? Both cases probably require some application identifier as a parameter in API calls.

    (4) We would also like to store some information in the application databases based on the workflow states. Is that possible?

    Thanks for suggestions!

    Read the article

  • Creating a form dynamically

    - by Nathan
    Hi, I use a search button that creates a form dynamically on the server side and returns it with jQuery syntax. After I fill up the form and click on the submit button, there is another jQuery .submit() function that is supposed to be called to validate input before the data is sent to the server. But, for some reason, this function is never called, and the request is sent anyway.

    In more detail, this is the form that the search button creates dynamically on the server side and "prints" to the HTML page with jQuery:

        <form action=... name="stockbuyform" class="stockbuyform" method="post">
            <input type=text value="Insert purchasing amount">
            <input type="submit" value="Click to purchase">
        </form>

    And here is the .submit() function:

        $(".stockbuyform").submit(function() {
            alert("Need to validate purchasing details");
        });

    But when I click on the purchase button, the .submit() function is never called. Does it mean that I can't use another jQuery call with the answer I got in the first call?
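
    A minimal sketch of the usual fix: a handler bound with .submit() only attaches to forms that exist at bind time, so a form injected later never gets it. Delegating the handler (jQuery 1.7+ syntax assumed; older versions have .live()) makes it fire for forms added at any time; isValid below is a hypothetical validator:

        $(document).on('submit', '.stockbuyform', function (e) {
            alert("Need to validate purchasing details");
            if (!isValid(this)) {   // isValid is a placeholder for real validation
                e.preventDefault(); // block the post when validation fails
            }
        });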

    Read the article

  • Why do pure virtual base classes get direct access to static data members while derived instances do not?

    - by Shamster
    I've created a simple pair of classes. One is pure virtual with a static data member, and the other is derived from the base, as follows:

        #include <iostream>

        template <class T>
        class Base
        {
        public:
            Base (const T _member) { member = _member; }
            static T member;
            virtual void Print () const = 0;
        };

        template <class T> T Base<T>::member;

        template <class T>
        void Base<T>::Print () const
        {
            std::cout << "Base: " << member << std::endl;
        }

        template <class T>
        class Derived : public Base<T>
        {
        public:
            Derived (const T _member) : Base<T>(_member) { }
            virtual void Print () const
            {
                std::cout << "Derived: " << this->member << std::endl;
            }
        };

    I've found from this relationship that when I need access to the static data member in the base class, I can call it with direct access as if it were a regular, non-static class member; i.e. the Base::Print() method does not require a this-> modifier. However, the derived class does require the this->member indirect access syntax. I don't understand why this is. Both class methods are accessing the same static data, so why does the derived class need further specification? A simple call to test it is:

        int main ()
        {
            Derived<double> dd (7.0);
            dd.Print();
            return 0;
        }

    which prints the expected "Derived: 7".

    Read the article

  • Focus behavior in Applet-Javascript interaction

    - by Dan
    I have a web page with an applet that opens a popup window and also makes Javascript calls. When that Javascript call results in a focus() call on an HTML input, that causes the browser window to push itself in front of the applet window. But only on certain browsers, namely MSIE; on Firefox the applet window remains on top. How can I keep that behavior consistent in MSIE? Note that using the old Microsoft VM for Java also achieves the desired (applet window in front) result.

    HTML code:

        <html>
        <head>
            <script type="text/javascript">
                function focusMe() {
                    document.getElementById('mytext').focus();
                }
            </script>
        </head>
        <body>
            <applet id="myapplet" mayscript code="Popup.class"></applet>
            <form>
                <input type="text" id="mytext">
                <input type="button" onclick="document.getElementById('myapplet').showPopup()" value="click">
            </form>
        </body>
        </html>

    Java code:

        public class Popup extends Applet {
            Frame frame;

            public void start() {
                frame = new Frame("Test Frame");
                frame.setLayout(new BorderLayout());
                Button button = new Button("Push Me");
                frame.add("Center", button);
                button.addActionListener(new ActionListener() {
                    public void actionPerformed(ActionEvent e) {
                        frame.setVisible(false);
                    }
                });
                frame.pack();
            }

            public void showPopup() {
                frame.setVisible(true);
                JSObject.getWindow(this).eval("focusMe()");
            }
        }

    Read the article

  • What is actually happening to this cancelled HTTP request?

    - by Brian Schroth
    When a user takes a particular action on a page, an AJAX call is made to save their data. Unfortunately, this call is synchronous, as they need to wait to see if the data is valid before being allowed to continue. Obviously this eliminates a lot of the benefit of using Asynchronous JavaScript And XML, but that's a subject for another post; that's the design I'm working with. The request is made using the dojo.xhrPost function, with a 60s timeout parameter, and the error handler redirects to an error page.

    What I am finding in testing is that in Firefox, if I initiate the AJAX request and then press ESC, the page hangs waiting for a response, and then after exactly 90s (not the function's 60s timeout) the error handler kicks in and redirects to the error page. I expected this to happen, but either immediately, as soon as the request was cancelled, or after 60s, due to the timeout value being 60s. What I don't understand is: why 90s? What is actually happening under the hood when the user cancels their request in Firefox, and how does it differ from IE, where everything works fine, exactly the same as if the request had not been cancelled? Is the 90s related to any user-configurable browser settings?
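
    For comparison, a sketch (assuming dojo 1.x and an asynchronous request; a synchronous call blocks the script, so the page cannot intervene) of how the page could own cancellation itself instead of leaving ESC handling to the browser; the URLs are hypothetical:

        var deferred = dojo.xhrPost({
            url: "/save",          // hypothetical endpoint
            timeout: 60000,
            handleAs: "json",
            error: function (err) {
                // dojo marks explicit cancels and timeouts distinctly
                if (err.dojoType === "cancel" || err.dojoType === "timeout") {
                    window.location = "/error";  // hypothetical error page
                }
            }
        });
        dojo.connect(document, "onkeyup", function (evt) {
            if (evt.keyCode === dojo.keys.ESCAPE) {
                deferred.cancel();  // the error handler then runs immediately
            }
        });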

    Read the article

  • ASP.NET MVC: How to create a usable UrlHelper instance?

    - by Marek
    I am using Quartz.NET to schedule regular events within an ASP.NET MVC application. The scheduled job should call a service-layer script that requires a UrlHelper instance (for creating URLs, based on the correct routes via urlHelper.Action(..), contained in emails that will be sent by the service). I do not want to hardcode the links into the emails; they should be resolved using the UrlHelper. The job:

        public class EvaluateRequestsJob : Quartz.IJob
        {
            public void Execute(JobExecutionContext context)
            {
                // where to get a usable urlHelper instance?
                ServiceFactory.GetRequestService(urlHelper).RunEvaluation();
            }
        }

    Please note that this is not run within the MVC pipeline. There is no current request being served; the code is run by the Quartz scheduler at defined times. How do I get a UrlHelper instance usable in the indicated place? If it is not possible to construct a UrlHelper, the other option I see is to make the job "self-call" a controller action by doing an HTTP request; while executing the action I will of course have a UrlHelper instance available, but this seems a little bit hacky to me.

    Read the article

  • Connection

    - by pepersview
    Hello, I would like to ask you about NSURLConnection in Objective-C for iPhone. I have one app that needs to connect to a web service to receive data (about YouTube videos), and I have all the things that I need to connect (similar to the Hello_Soap sample code on the web). But now my problem is that I created a class (inheriting from NSObject) named Connection, and I have implemented the methods didReceiveResponse, didReceiveData, didFailWithError and connectionDidFinishLoading, as well as the method:

        -(void)Connect:(NSString *) soapMessage {
            NSLog(soapMessage);
            NSURL *url = [NSURL URLWithString:@"http://....."];
            NSMutableURLRequest *theRequest = [NSMutableURLRequest requestWithURL:url];
            NSString *msgLength = [NSString stringWithFormat:@"%d", [soapMessage length]];
            [theRequest addValue: @"text/xml; charset=utf-8" forHTTPHeaderField:@"Content-Type"];
            [theRequest addValue: msgLength forHTTPHeaderField:@"Content-Length"];
            [theRequest setHTTPMethod:@"POST"];
            [theRequest setHTTPBody: [soapMessage dataUsingEncoding:NSUTF8StringEncoding]];
            NSURLConnection *theConnection = [[NSURLConnection alloc] initWithRequest:theRequest delegate:self];
            if( theConnection ) {
                webData = [[NSMutableData data] retain];
            }
            else {
                NSLog(@"theConnection is NULL");
            }
        }

    But when I create a Connection object from my AppDelegate:

        Connection * connect = [[Connection alloc] Init:num]; // It takes only one param, to test.
        [connect Connect:method.soapMessage];

    and I call this method, when it finishes it doesn't continue on to call didReceiveResponse, didReceiveData, didFailWithError or connectionDidFinishLoading. I'm trying to do this but I can't for the moment. The thing I would like to do is to be able to call this Connection class each time that I want to receive data (which is afterwards to be parsed and displayed in UITableViews). Thank you.

    Read the article

  • How can I prevent default_environment variables from getting set by Capistrano's sudo action?

    - by Logan Koester
    My deploy.rb sets some environment variables to use the regular user's local Ruby rather than the system-wide one:

        set :default_environment, {
          :PATH => '/home/myapp/.rvm/bin:/home/myapp/.rvm/bin:/home/myapp/.rvm/rubies/ruby-1.9.1-p378/bin:/home/myapp/.rvm/gems/ruby-1.9.1-p378/bin:/home/myapp/.rvm/gems/ruby-1.9.1-p378%global/bin:/home/myapp/bin:/usr/bin:/opt/local/bin:/opt/local/sbin:/usr/local/bin:/usr/local/sbin:/usr/sbin:/sbin:/bin:/usr/games',
          :RUBY_VERSION => 'ruby-1.9.1-p378',
          :GEM_HOME => '/home/myapp/.rvm/gems/ruby-1.9.1-p378',
          :GEM_PATH => '/home/myapp/.rvm/gems/ruby-1.9.1-p378:/home/myapp/.rvm/gems/ruby-1.9.1-p378%global'
        }

    Naturally, when a task is using sudo, I would expect the system-wide Ruby to be used instead. But it seems the environment variables are being set anyway, which is obviously invalid for the root user and returns an error:

        executing "sudo -p 'sudo password: ' /etc/init.d/god stop"
        servers: ["myapp.com"]
        [myapp.com] executing command
        command finished
        failed: "env PATH=/home/myapp/.rvm/bin:/home/myapp/.rvm/bin:/home/myapp/.rvm/rubies/ruby-1.9.1-p378/bin:/home/myapp/.rvm/gems/ruby-1.9.1-p378/bin:/home/myapp/.rvm/gems/ruby-1.9.1-p378%global/bin:/home/myapp/bin:/usr/bin:/opt/local/bin:/opt/local/sbin:/usr/local/bin:/usr/local/sbin:/usr/sbin:/sbin:/bin:/usr/games RUBY_VERSION=ruby-1.9.1-p378 GEM_HOME=/home/myapp/.rvm/gems/ruby-1.9.1-p378 GEM_PATH=/home/myapp/.rvm/gems/ruby-1.9.1-p378:/home/myapp/.rvm/gems/ruby-1.9.1-p378%global sh -c 'sudo -p '\\''sudo password: '\\'' /etc/init.d/god stop'" on myapp.com

    It makes no difference whether I use Capistrano's sudo "system call" or the regular run "sudo system call". How can I avoid this?

    Read the article
