Search Results

Search found 39047 results on 1562 pages for 'process control'.

Page 450/1562 | < Previous Page | 446 447 448 449 450 451 452 453 454 455 456 457  | Next Page >

  • Producing Mini Dumps for _caught_ SEH exceptions in mixed code DLL

    - by Assaf Lavie
    I'm trying to use code similar to clrdump to create mini dumps in my managed process. This managed process invokes C++/CLI code, which in turn invokes native C++ static lib code, where SEH exceptions may be thrown (e.g. the occasional access violation):

        C# WinForms -> C++/CLI DLL -> static C++ lib -> ACCESS VIOLATION

    Our policy is to produce mini dumps for all SEH exceptions (caught and uncaught) and then translate them into C++ exceptions to be handled by application code. This works just fine for purely native processes, but when the application is a C# application - not so much. The only way I can see to produce dumps for SEH exceptions in a C# process is to not catch them and then, as unhandled exceptions, use the Application.ThreadException handler to create a mini dump. The alternative is to let the CLR translate the SEH exception into a .NET exception and catch it (e.g. System.AccessViolationException) - but then no dump is created, and information is lost (the stack trace in the Exception isn't as rich as the mini dump). So how can I handle SEH exceptions by both creating a minidump and translating the exception into a .NET exception, so that my application may try to recover?

    Read the article

  • VS 2008 designer and usercontrol.

    - by Ram
    Hello, I have created a custom data grid control. I dragged it onto a Windows form, set its properties such as columns, and ran the project. It built successfully and I am able to view the grid control on the form. Now if I try to view that form in the designer, I get the following error:

        Object reference not set to an instance of an object.
        Instances of this error (1)
        1. Hide Call Stack
        at Microsoft.VisualStudio.Design.Serialization.CodeDom.XML.CodeDomXmlProcessor.GetMemberTargetObject(XmlElementData xmlElementData, String& member)
        at Microsoft.VisualStudio.Design.Serialization.CodeDom.XML.CodeDomXmlProcessor.CreateAssignStatement(XmlElementData xmlElement)
        at Microsoft.VisualStudio.Design.Serialization.CodeDom.XML.CodeDomXmlProcessor.XmlElementData.get_CodeDomElement()
        at Microsoft.VisualStudio.Design.Serialization.CodeDom.XML.CodeDomXmlProcessor.EndElement(String prefix, String name, String urn)
        at Microsoft.VisualStudio.Design.Serialization.CodeDom.XML.CodeDomXmlProcessor.Parse(XmlReader reader)
        at Microsoft.VisualStudio.Design.Serialization.CodeDom.XML.CodeDomXmlProcessor.ParseXml(String xmlStream, CodeStatementCollection statementCollection, String fileName, String methodName)
        at Microsoft.VisualStudio.Design.Serialization.CodeDom.VSCodeDomParser.OnMethodPopulateStatements(Object sender, EventArgs e)
        at System.CodeDom.CodeMemberMethod.get_Statements()
        at System.ComponentModel.Design.Serialization.TypeCodeDomSerializer.Deserialize(IDesignerSerializationManager manager, CodeTypeDeclaration declaration)
        at System.ComponentModel.Design.Serialization.CodeDomDesignerLoader.PerformLoad(IDesignerSerializationManager manager)
        at Microsoft.VisualStudio.Design.Serialization.CodeDom.VSCodeDomDesignerLoader.PerformLoad(IDesignerSerializationManager serializationManager)
        at Microsoft.VisualStudio.Design.Serialization.CodeDom.VSCodeDomDesignerLoader.DeferredLoadHandler.Microsoft.VisualStudio.TextManager.Interop.IVsTextBufferDataEvents.OnLoadCompleted(Int32 fReload)

    If I ignore the exception, the form appears blank with no sign of the grid control on it. However, I can see the code for the grid in the designer file. Any pointer on this would be a great help. I have customized the grid for my own requirements, for example by adding a custom text box. I have defined three constructors:

        public GridControl()
        public GridControl(IContainer container)
        protected GridControl(SerializationInfo info, StreamingContext context)

    Read the article

  • Automatically release resources RAII-style in Perl

    - by Philip Potter
    Say I have a resource (e.g. a filehandle or network socket) which has to be freed:

        open my $fh, "<", "filename" or die "Couldn't open filename: $!";
        process($fh);
        close $fh or die "Couldn't close filename: $!";

    Suppose that process might die. Then the code block exits early, and $fh doesn't get closed. I could explicitly check for errors:

        open my $fh, "<", "filename" or die "Couldn't open filename: $!";
        eval {process($fh)};
        my $saved_error = $@;
        close $fh or die "Couldn't close filename: $!";
        die $saved_error if $saved_error;

    but this kind of code is notoriously difficult to get right, and only gets more complicated when you add more resources. In C++ I would use RAII to create an object which owns the resource, and whose destructor would free it. That way, I don't have to remember to free the resource, and resource cleanup happens correctly as soon as the RAII object goes out of scope - even if an exception is thrown. Unfortunately in Perl a DESTROY method is unsuitable for this purpose as there are no guarantees for when it will be called. Is there a Perlish way to ensure resources are automatically freed like this even in the presence of exceptions? Or is explicit error checking the only option?

    Read the article

  • Run a .java file using ProcessBuilder

    - by David K
    I'm a novice programmer working in Eclipse, and I need to get multiple processes running (this is going to be a simulation of a multi-computer system). My initial hackup used multiple threads and multiple classes, but now I'm trying to replace the threads with processes. From my reading, I've gleaned that ProcessBuilder is the way to go. I have tried many, many versions of the input you see below, but cannot for the life of me figure out how to properly use it. I am trying to run the .java files I previously created as classes (which I have modified). I eventually just made a dummy test.java to make sure my process is working properly - its only function is to print that it ran. My code for the two files is below. Am I using ProcessBuilder correctly? Is this the correct way to read the output of my subprocess? Any help would be much appreciated. David

    The primary process:

        package Control;
        import java.io.*;
        import java.lang.*;

        public class runSPARmatch {
            /**
             * @param args
             */
            public static void main(String args[]) {
                try {
                    ProcessBuilder broker = new ProcessBuilder("javac.exe","test.java","src\\Broker\\");
                    Process runBroker = broker.start();
                    Reader reader = new InputStreamReader(runBroker.getInputStream());
                    int ch;
                    while((ch = reader.read()) != -1)
                        System.out.println((char)ch);
                    reader.close();
                    runBroker.waitFor();
                    System.out.println("Program complete");
                } catch (IOException e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                } catch (InterruptedException e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
            }
        }

    The subprocess:

        package Broker;

        public class test {
            /**
             * @param args
             */
            public static void main(String[] args) {
                // TODO Auto-generated method stub
                System.out.println("This works");
            }
        }
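
    For reference, a common pattern here is to run javac on the path to the source file and then launch the compiled class by its fully qualified name in a second JVM. The sketch below is only illustrative and untested against the question's project layout; the paths, output directory and class/package names are assumptions.

        import java.io.BufferedReader;
        import java.io.InputStreamReader;

        public class LaunchBroker {
            public static void main(String[] args) throws Exception {
                // Step 1: compile the source file (javac takes a *path* to the .java file).
                Process compile = new ProcessBuilder("javac", "-d", ".", "src\\Broker\\test.java")
                        .redirectErrorStream(true)   // fold stderr into stdout so errors are visible
                        .start();
                print(compile);
                compile.waitFor();

                // Step 2: run the compiled class in its own JVM (java takes the *class name*).
                Process run = new ProcessBuilder("java", "-cp", ".", "Broker.test")
                        .redirectErrorStream(true)
                        .start();
                print(run);
                run.waitFor();
            }

            private static void print(Process p) throws Exception {
                BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()));
                for (String line; (line = r.readLine()) != null; ) {
                    System.out.println(line);
                }
            }
        }

    The two details that most often trip people up are that javac wants a file path while java wants a class name, and that reading the child's output (including stderr) is needed to see why it failed.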

    Read the article

  • Makefile : Build in a separate directory tree

    - by Simone Margaritelli
    My project (an interpreted language) has a standard library composed of multiple files, each of which will be built into an .so dynamic library that the interpreter will load upon user request (with an import directive). Each source file is located in a subdirectory representing its "namespace", for instance stdlib/std/io/network/tcp.cc. The build process has to create a "build" directory, and then when each file is compiled it has to create that file's namespace directory inside the "build" one; for instance, when compiling std/io/network/tcp.cc it runs an mkdir command with mkdir -p build/std/io/network. The Makefile snippet is:

        STDSRC=stdlib/std/hashing/md5.cc \
               stdlib/std/hashing/crc32.cc \
               stdlib/std/hashing/sha1.cc \
               stdlib/std/hashing/sha2.cc \
               stdlib/std/io/network/http.cc \
               stdlib/std/io/network/tcp.cc \
               stdlib/std/io/network/smtp.cc \
               stdlib/std/io/file.cc \
               stdlib/std/io/console.cc \
               stdlib/std/io/xml.cc \
               stdlib/std/type/reflection.cc \
               stdlib/std/type/string.cc \
               stdlib/std/type/matrix.cc \
               stdlib/std/type/array.cc \
               stdlib/std/type/map.cc \
               stdlib/std/type/type.cc \
               stdlib/std/type/binary.cc \
               stdlib/std/encoding.cc \
               stdlib/std/os/dll.cc \
               stdlib/std/os/time.cc \
               stdlib/std/os/threads.cc \
               stdlib/std/os/process.cc \
               stdlib/std/pcre.cc \
               stdlib/std/math.cc

        STDOBJ=$(STDSRC:.cc=.so)

        all: stdlib

        stdlib: $(STDOBJ)

        .cc.so:
            mkdir -p `dirname $< | sed -e 's/stdlib/stdlib\/build/'`
            $(CXX) $< -o `dirname $< | sed -e 's/stdlib/stdlib\/build/'`/`basename $< .cc`.so $(CFLAGS) $(LDFLAGS)

    I have two questions:

    1 - The make command, and I really don't know why, doesn't check whether a file was modified and launches the build process on ALL the files no matter what, so if I need to build only one file I have to build them all or use the command make path/to/single/file.so. Is there any way to solve this?

    2 - Is there any way to do this in a "cleaner" way, without having to distribute all the build directories with the sources?

    Thanks

    Read the article

  • Daemon program that uses select() inside infinite loop uses significantly more CPU when ported from Solaris to Linux

    - by Jake
    I have a daemon app written in C that is currently running with no known issues on a Solaris 10 machine. I am in the process of porting it over to Linux. I have had to make minimal changes. During testing it passes all test cases. There are no issues with its functionality. However, when I view its CPU usage when 'idle' on my Solaris machine, it is using around 0.03% CPU. On the virtual machine running Red Hat Enterprise Linux 4.8, that same process uses all available CPU (usually somewhere in the 90%+ range). My first thought was that something must be wrong with the event loop. The event loop is an infinite loop ( while(1) ) with a call to select(). The timeval is set up so that timeval.tv_sec = 0 and timeval.tv_usec = 1000. This seems reasonable enough for what the process is doing. As a test I bumped timeval.tv_sec to 1. Even after doing that I saw the same issue. Is there something I am missing about how select works on Linux vs. Unix? Or does it work differently with an OS running on a virtual machine? Or maybe there is something else I am missing entirely? One more thing: I am not sure which version of VMware Server is being used. It was just updated about a month ago though. Thanks.

    Read the article

  • Finding latency issues (stalls) in embedded Linux systems

    - by camh
    I have an embedded Linux system running on an Atmel AT91SAM9260EK board on which I have two processes running at real-time priority. A manager process periodically "pings" a worker process using POSIX message queues to check the health of the worker process. Usually the round-trip ping takes about 1ms, but very occasionally it takes much longer - about 800ms. There are no other processes that run at a higher priority. It appears the stall may be related to logging (syslog). If I stop logging, the problem seems to go away. However, it makes no difference whether the log file is on JFFS2 or NFS. No other processes are writing to the "disk" - just syslog. What tools are available to help me track down why these stalls are occurring? I am aware of latencytop and will be using that. Are there some other tools that may be more useful?

    Some details:

        Kernel version: 2.6.32.8
        libc (syslog functions): uClibc 0.9.30.1
        syslog: busybox 1.15.2

    Read the article

  • Registering custom webcontrol inside mvc view?

    - by kastermester
    I am in the middle of a project where I am migrating some code for a website from WebForms to MVC - unfortunately there's not enough time to do it all at once, so I will have to settle for some... not so pretty solutions. I am, though, facing a problem with a custom control I have written that inherits from the standard GridView control:

        namespace Controls {
            public class MyGridView : GridView {
                ...
            }
        }

    I have added it to the web.config file as usual:

        <configuration>
          ...
          <system.web>
            ...
            <pages>
              ...
              <controls>
                ...
                <add tagPrefix="Xui" namespace="Controls"/>
              </controls>
            </pages>
          </system.web>
        </configuration>

    Then on the MVC view:

        <Xui:MyGridView ID="GridView1" runat="server" ...>...</Xui:MyGgridView>

    However, I am getting a parser error stating that the control cannot be found. I suspect this has to do with the mix-up of MVC and WebForms; however, I am/was under the impression that such a mixup should be possible. Is there any kind of tweak for this? I realise this solution is far from ideal, however there's no time to "do the right thing". Thanks

    Read the article

  • Need Advice on designing ATL inproc Server (dll) that serves as both a source and a sink of events.

    - by Andrew
    Hi, I need to design an ATL inproc server that, besides exposing methods and properties, can also fire events (source) and serve as a sink for a third party COM control that fires events. I would assume that this is a fairly common requirement. I can also foresee several "gotchas" that I would like to read up on before commencing the design. My questions/concerns are:

        1. Can someone point me to an example?
        2. Which threading model should I use?
        3. Should I have a separate COM object for the sink?
        4. Should I, and how do I, protect certain memory? For example, my server will receive data from the third party control. It will save this and, in some cases, fire an event to interested clients. The interested clients will request the data through a standard method or property.

    I did try to research this myself. I can find many examples of COM servers that are sources, and some that are sinks, but never both. The only post I did find was this: http://www.generation-nt.com/us/atl-control-an-event-source-sink-help-9098542.html which strongly advocates putting the sink on a separate COM object. Any leads, tutorials or ideas would be much appreciated. Thanks, Andrew

    Read the article

  • Modify Executing Jar file

    - by pinkynobrain
    Hello Stack Overflow friends. I have a simple problem which I fear doesn't have a simple solution, and I need advice as to how to proceed. I am developing a Java application packaged as an executable JAR, but it needs to modify some of its JAR file contents during execution. At this stage I hit a problem, because some OSes lock the file, preventing writes to it. It is essential that the user sees an updated version of the jar file by the time the application exits, although I can be pretty flexible as to how to achieve this. A clean and efficient solution is obviously preferable, but portability is the only hard requirement. The following are three approaches I can see to solving the problem; feel free to comment on them or suggest others.

        1. Tell Java to unlock the JAR file for writing (this doesn't seem possible, but it would be the easiest solution).
        2. Copy the executable class files to a temporary file on application startup, use a class loader to load these files and unload the ones from the initial JAR file. (I've not had much experience with classloaders, but hopefully the JVM would then be smart enough to realize that the original JAR is no longer in use and so unlock it.)
        3. Put a second executable JAR file inside the first; on startup, extract the inner jar to a temporary file, invoke a new Java process and pass it the location of the outer JAR; the first process exits, and the second process modifies the outer jar unencumbered. (This will work, but I'm not sure there is a platform-independent way for one Java app to invoke another.)

    I know this is a weird question but any help would be appreciated.
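
    A rough sketch of how the third approach is sometimes wired up is shown below; it assumes Java 7+ (try-with-resources, ProcessBuilder.inheritIO()), and the resource name updater.jar and the class names are made up purely for illustration.

        import java.io.File;
        import java.io.FileOutputStream;
        import java.io.InputStream;
        import java.io.OutputStream;

        public class Relauncher {
            public static void main(String[] args) throws Exception {
                // Copy the embedded jar (bundled as a classpath resource) out to a temp file.
                File inner = File.createTempFile("updater", ".jar");
                try (InputStream in = Relauncher.class.getResourceAsStream("/updater.jar");
                     OutputStream out = new FileOutputStream(inner)) {
                    byte[] buf = new byte[8192];
                    for (int n; (n = in.read(buf)) != -1; ) {
                        out.write(buf, 0, n);
                    }
                }

                // Locate this JVM's own launcher rather than hard-coding "java.exe" or "/usr/bin/java".
                String java = new File(new File(System.getProperty("java.home"), "bin"), "java").getPath();

                // Find the outer jar that is currently locked, so the child knows what to rewrite.
                String outerJar = new File(Relauncher.class.getProtectionDomain()
                        .getCodeSource().getLocation().toURI()).getPath();

                new ProcessBuilder(java, "-jar", inner.getPath(), outerJar)
                        .inheritIO()
                        .start();
                // This process now exits; the child can rewrite the outer jar unencumbered.
                // Cleanup of the temp file is left to the child.
            }
        }

    Using System.getProperty("java.home") to locate the java launcher keeps the relaunch platform-independent, which was the main worry with approach 3.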

    Read the article

  • Programmatically open an email from a POP3 and extract an attachment

    - by Josh
    We have a vendor that sends CSV files as email attachments. These CSV files contain statuses that are imported into our application. I'm trying to automate the process end-to-end, but it currently depends on someone opening an email and saving the attachment to a server share so the application can use the file. Since I cannot convince the vendor to change their process, such as offering an FTP location or a web service, I'm stuck with trying to automate the existing process. Does anyone know of a way to programmatically open an email from a POP3 account and extract an attachment? The preferred solution would reside on a Windows 2003 server, be written in VB.NET, and be secure. The application can reside on the same server as the POP3 server; for example, we could set up the free POP3 server that comes with Windows Server and pull against the mail file stored on the file system. BTW, we are willing to pay for an off-the-shelf solution, if one exists. Note: I did look at this question, but the answer points to a CodeProject solution that doesn't deal with attachments.

    Read the article

  • Workflow UI Integration - is WF a good approach?

    - by AJ
    Somewhat similar to this question, except we haven't decided that we're going with WF yet. I'm working on designing a system that requires a series of decisions and activities on a "work object," so I naturally began to consider workflow, specifically WF. What I'm wondering is whether WF is a good solution for a case like the following (oversimplified for this question; please forgive the bad ASCII art):

        Gather some info (web page)
                   |
             [condition 1]
              /          \
        do some         Get more info (web page)
        process                |
                         [condition 2]
                          /          \
                   some process   another process

    The part I'm struggling with is the "get more info (web page)" step and what happens subsequently, which would mean a halt in the execution of the workflow runtime. I'm aware that this is possible, but I'm not sure that WF is the best approach for this type of code, as user interaction may be required at many different points through the entire workflow, and the workflow will drive what data entry screens are needed. We are using a WinForms/ASP.NET web forms package for UI, which is homegrown and difficult to push deployments on, so something like SharePoint integration is out of the question. Our back-end is DB2, and the workflow code (whether it's in WF or otherwise) will need to interact with that as well. I guess the bottom line is: should we look into using WF for this, or would we be better served just coding it ourselves? Can WF easily integrate data entry screens to capture information that can be used further on in the workflow?

    Read the article

  • Industry-style practices for increasing productivity in a small scientific environment

    - by drachenfels
    Hi, I work in a small, independent scientific lab in a university in the United States, and it has come to my notice that, compared with a lot of practices that are ostensibly followed in industry - like daily check-ins to a version control system, use of a single IDE/editor for all languages (like emacs), etc. - we follow rather shoddy programming practices. So, I was thinking of getting together all my programs, scripts, etc., and building a streamlined environment to increase productivity. I'd like suggestions from people on Stack Overflow for the same. Here is my primary plan: I use MATLAB, C and Python scripts, and I'd like to edit and compile them from a single editor, and ensure correct version control. (Questions/things for which I'd like suggestions are in italics.)

    1] Install Cygwin, and get it to work well with Windows so I can use git or a similar version control system (is there a DVCS which can work directly from the Windows CLI, so I can skip the Cygwin step?).

    2] Set up emacs to work with C, Python, and MATLAB files, so I can edit and compile all three at once from a single editor (say, emacs) (I'm not very familiar with the emacs menu, but is there a way to set the path to the compiler for certain languages? I know I can Google this, but emacs documentation has proved very hard for me to read so far, so I'd appreciate it if someone told me in simple language).

    3] Start checking in code at the end of each day or half-day so as to maintain a proper record of the progress of my code (two questions: can you check out files directly from emacs? Is there a way to check LabVIEW files into a DVCS like git?).

    Lastly, I'd like to apologize for the rather vague nature of the question, and hope I shall learn to ask better questions over time. I'd appreciate it if people gave their suggestions, though, and point to any resources which may help me learn.

    Read the article

  • target-action uicontrolevents

    - by Fabrizio Farinelli
    I must be missing something obvious here but ... UIControl has a method

        - (void)addTarget:(id)target action:(SEL)action forControlEvents:(UIControlEvents)controlEvents

    which lets you add an action to be called when any of the given controlEvents occur. ControlEvents are a bitmask of events which tell you whether a touch went down, or up inside, or was dragged, etc.; there are about 16 of them, you OR them together, and you get called when any of them occur. The selector can have one of the following signatures:

        - (void)action
        - (void)action:(id)sender
        - (void)action:(id)sender forEvent:(UIEvent *)

    None of those tell you what the control event bitmask was. The UIEvent is something slightly different; it's related to the actual touch event and doesn't (I think) contain the UIControlEvent. The sender (UIControl) doesn't have a way to find the control events either. I'd like to have one method which deals with a number of control events, as I have some common code regardless of which event or events happened, but I still need to know what the UIControlEvents were for some specific processing. Am I missing a way to find out what UIControlEvents were used when the action was called, or do I really have to separate my code into

        -(void)actionWithUIControlEventX;
        -(void)actionWithUIControlEventY;

    Read the article

  • Creating a DataTable by filtering another DataTable

    - by Jeff Dege
    I'm working on a system that currently has a fairly complicated function that returns a DataTable, which it then binds to a GUI control on an ASP.NET WebForm. My problem is that I need to filter the data returned - some of the data that is being returned should not be displayed to the user. I'm aware of DataTable.Select(), but that's not really what I need. First, it returns an array of DataRows, and I need a DataTable so I can databind it to the GUI control. But more importantly, the filtering I need to do isn't something that can easily be put into a simple expression. I have an array of the elements which I do not want displayed, and I need to compare each element from the DataTable against that array. What I could do, of course, is create a new DataTable, read everything out of the original, add to the new one what is appropriate, then databind the new one to the GUI control. But that just seems wrong, somehow. In this case, the number of elements in the original DataTable isn't likely to be enough that copying them all in memory is going to cause too much trouble, but I'm wondering if there is another way. Does the .NET DataTable have functionality that would allow me to filter via a callback function?

    Read the article

  • Binding data to the custom DataGridView

    - by Jatin Chaturvedi
    All, I have a customized DataGridView where I have implemented extra functionality and bound a data source to the same custom DataGridView (using C# and .NET). I am able to use it properly by placing it on a panel control. I have added a label that acts as a button on the panel control to display data from the data source in the data grid and create a binding source. Another label, which also acts as a button, is used to update data from the grid to the database.

    Issue: I pressed the show label to display data in the DataGridView, modified a grid cell value and immediately pressed the update label, which is on the same panel control. I observed that the cursor is still in the grid cell when I press the save button. While saving, the cell value is null even though I have entered something in the presentation layer. My expected behaviour is to get the modified value while saving.

    Special case: after typing something in the grid cell, if I click somewhere else - like the row below the one where I entered something - before I click the save button, it works fine. (Here, mainly, I tried to remove the focus from the currently modified cell.)

    Is there any way to bind sources before I click the save button? Please advise. Please feel free to ask me if you need any information. I have also seen the same kind of problem on this forum, but unfortunately the author got the answer and didn't post it back. Here is that URL: http://social.msdn.microsoft.com/Forums/en/winformsdesigner/thread/54dcc87a-adc2-4965-b306-9aa9e79c2946 Please help me.

    Read the article

  • Testing fault tolerant code

    - by Robert
    I'm currently working on a server application where we have agreed to try to maintain a certain level of service. The level of service we want to guarantee is: if a request is accepted by the server and the server sends an acknowledgement to the client, we want to guarantee that the request will happen, even if the server crashes. As requests can be long running and the acknowledgement time needs to be short, we implement this by persisting the request, then sending an acknowledgement to the client, then carrying out the various actions to fulfill the request. As actions are carried out they too are persisted, so the server knows the state of a request on start up, and there are also various reconciliation mechanisms with external systems to check the accuracy of our logs. This all seems to work fairly well, but we have difficulty saying so with any conviction, as we find it very difficult to test our fault tolerant code. So far we've come up with two strategies, but neither is entirely satisfactory:

        1. Have an external process watch the server code and then try to kill it off at what the external process thinks is an appropriate point in the test.
        2. Add code to the application that will cause it to crash at certain known critical points.

    My problem with the first strategy is that the external process cannot know the exact state of the application, so we cannot be sure we're hitting the most problematic points in the code. My problem with the second strategy, although it gives more control over where the fault takes place, is that I do not like having code to inject faults within my application, even with optional compilation etc. I fear it would be too easy to overlook a fault injection point and have it slip into a production environment.
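
    The question doesn't name a language, so the following is a purely illustrative Java sketch of how the second strategy is sometimes kept out of production code paths: the crash points are calls on an injected interface whose production implementation does nothing, and only a test harness ever supplies an implementation that actually kills the process. All names below are invented.

        // A named crash point the server calls at critical moments.
        public interface FaultInjector {
            void crashPoint(String name);
        }

        // Production default: never injects anything, so no fault logic ships by accident.
        class NoFaults implements FaultInjector {
            public void crashPoint(String name) { /* deliberately empty */ }
        }

        // Test-only implementation: halts the JVM at one chosen point to simulate a crash.
        class KillAt implements FaultInjector {
            private final String target;
            KillAt(String target) { this.target = target; }
            public void crashPoint(String name) {
                if (name.equals(target)) {
                    Runtime.getRuntime().halt(137); // skip shutdown hooks, like a real crash
                }
            }
        }

        // The server code only sees the interface, injected at construction time.
        class RequestPipeline {
            private final FaultInjector faults;
            RequestPipeline(FaultInjector faults) { this.faults = faults; }

            void accept(String request) {
                persist(request);
                faults.crashPoint("after-persist-before-ack");
                acknowledge(request);
                faults.crashPoint("after-ack-before-work");
                fulfil(request);
            }

            private void persist(String request) { /* write to the durable request log */ }
            private void acknowledge(String request) { /* send the ack to the client */ }
            private void fulfil(String request) { /* carry out and journal the actions */ }
        }

    Because the injector is supplied from the outside, a release build only ever contains the no-op implementation, while the external-watcher strategy can still be layered on top for faults this cannot model (power loss, the OS killing the process, and so on).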

    Read the article

  • Controls added in the designer are null during Page_Load

    - by mwright
    All of the names below are generic and not the actual names used. I have a custom UserControl with a Panel that contains a couple of Labels, both ASP.NET controls.

    .aspx:

        <asp:Panel runat="server">
            <asp:Label ID="label1" runat="server"></asp:Label>
        </asp:Panel>
        <asp:Panel runat="server">
            <asp:Label ID="label2" runat="server"></asp:Label>
        </asp:Panel>

    Code-behind:

        private readonly Object object;

        protected void Page_Load(object sender, EventArgs e)
        {
            // These are the lines that are failing
            // label1 and label2 are null
            label1.Text = object.Value1;
            label2.Text = object.Value2;
        }

        public ObjectRow(Object objectToDisplay)
        {
            object = objectToDisplay;
        }

    On another page, in the code-behind, I create a new instance of the custom user control:

        protected void Page_Load(object sender, EventArgs e)
        {
            CustomControl control = new CustomControl(object);
        }

    The user control takes the parameter and attempts to set the labels based off of the object passed in. The labels that it tries to assign the values to are, however, null. Is this an ASP.NET lifecycle issue that I'm not understanding? My understanding, based on the Microsoft ASP.NET lifecycle page, was that page controls were available after Page_Initialization. What is the proper way to do this? Is there a better way?

    Read the article

  • silverlight 3: long running wcf call triggers 401.1 (access denied)

    - by sympatric greg
    I have a WCF service consumed by a Silverlight 3 control. The Silverlight client uses a basicHttpBinding that is constructed at runtime from the control's initialization parameters like this:

        public static T GetServiceClient<T>(string serviceURL)
        {
            BasicHttpBinding binding = new BasicHttpBinding(
                Application.Current.Host.Source.Scheme.Equals("https", StringComparison.InvariantCultureIgnoreCase)
                    ? BasicHttpSecurityMode.Transport
                    : BasicHttpSecurityMode.None);
            binding.MaxReceivedMessageSize = int.MaxValue;
            binding.MaxBufferSize = int.MaxValue;
            binding.Security.Mode = BasicHttpSecurityMode.TransportCredentialOnly;
            return (T)Activator.CreateInstance(typeof(T), new object[] { binding, new EndpointAddress(serviceURL) });
        }

    The service implements Windows security. Calls were returning as expected until the result set increased to several thousand rows, at which time HTTP 401.1 errors were received. The service's HttpBinding defines closeTimeout, openTimeout, receiveTimeout and sendTimeout of 10 minutes. If I limit the size of the result set, the call succeeds.

    Additional observations from Fiddler: when Method2 is modified to return a smaller result set (and avoid the problem), control initialization consists of 4 calls:

        Service1/Method1 -- result: 401
        Service1/Method1 -- result: 401 (this time the header includes the element "Authorization: Negotiate TlRMTV...")
        Service1/Method1 -- result: 200
        Service1/Method2 -- result: 200 (1.25 seconds)

    When Method2 is configured to return the larger result set we get:

        Service1/Method1 -- result: 401
        Service1/Method1 -- result: 401 (this time the header includes the element "Authorization: Negotiate TlRMTV...")
        Service1/Method1 -- result: 200
        Service1/Method2 -- result: 401.1 (7.5 seconds)
        Service1/Method2 -- result: 401.1 (15ms)
        Service1/Method2 -- result: 401.1 (7.5 seconds)

    Read the article

  • using the window object for accessing global user defined objects and using text within html for cre

    - by timpone
    I don't do very much jQuery/JavaScript, but wanted to ask for some advice on the following piece. I have tried to cut out as much as possible. Most of this was semi-inherited code, with a bunch of event handlers just hardcoded in. I'd like to generalize them more by putting the object name in the HTML and reading it with jQuery when processing (by_date, by_popularity). I retrieve it as a string and access the object via window[current_obj]. Is this a good way to do this, or am I missing something? Are there preferable ways to introduce specificity? Thanks for any advice.

        <script>
        var by_date = {};
        by_date.current_page = 1;
        by_date.per_page = 4;

        var by_popularity = {};
        by_popularity.current_page = 1;
        by_popularity.per_page = 4;

        $(function(){
            $('.previous.active').live('click', function(){
                window[current_obj].current_page--;
                process(window[current_obj]);
            });
        });

        function process(game_obj){
            //will process and output new items here
        }
        </script>

        <div class="otherContainer">
            <a class='previous active'>Prev</a><div style="display:none;">by_date</div> |
            <a class='next'>Next</a><div style="display:none;">by_date</div>
        </div>
        <div class="topPrevNextContainer">
            <a class='previous active'>Prev</a><div style="display:none;">by_popularity</div> |
            <a class='next'>Next</a><div style="display:none;">by_popularity</div>
        </div>

    Read the article

  • Problem consuming Exchange Web Service 2010 with jax-ws metro

    - by Johan Karlberg
    I am trying to consume the Exchange 2010 Web Service interface using JAX-WS. I'm using the JAX-WS 2.2 RI (Metro 2.0); 2.1 exhibited the same problem. I am running into trouble with Exchange, which returns "HTTP/1.1 415 Cannot process the message because the content type 'text/xml;charset=utf-8' was not the expected type 'text/xml; charset=utf-8'." as a response (2.1 quoted the charset value, otherwise same response). Apparently I need to dictate the exact Content-Type header for Exchange to be happy. Is there a way for me to do this without being forced to manually rebuild the dependency? I currently rely on published Maven artifacts, and would like to continue doing that if at all possible. The consuming process is a regular J2SE app, with no containers in sight. I have control of the application and can add pretty much anything required to the application's scope, but cannot add out-of-process items like proxy servers. The client classes were generated from local WSDL, but the charset specification is derived from constants declared in the JAX-WS RI implementation, not the generated code. The resulting HTTP transport is thus handled by the standard http/https client from Sun JRE5 or JRE6.
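
    One avenue sometimes suggested, without rebuilding Metro, is to set the outgoing Content-Type explicitly through the standard JAX-WS request context. BindingProvider and MessageContext.HTTP_REQUEST_HEADERS are standard API, but whether the runtime actually honours an overridden Content-Type is implementation-dependent, so the sketch below is only something to experiment with, not a confirmed fix; the class and method names are invented.

        import java.util.Collections;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;
        import javax.xml.ws.BindingProvider;
        import javax.xml.ws.handler.MessageContext;

        public final class ContentTypeTweak {
            // portObject is the generated JAX-WS port/proxy for the Exchange service.
            @SuppressWarnings("unchecked")
            public static void forceContentType(Object portObject) {
                BindingProvider bp = (BindingProvider) portObject;
                Map<String, Object> ctx = bp.getRequestContext();

                Map<String, List<String>> headers =
                        (Map<String, List<String>>) ctx.get(MessageContext.HTTP_REQUEST_HEADERS);
                if (headers == null) {
                    headers = new HashMap<String, List<String>>();
                }
                // Spell the header exactly the way Exchange says it expects it.
                headers.put("Content-Type", Collections.singletonList("text/xml; charset=utf-8"));
                ctx.put(MessageContext.HTTP_REQUEST_HEADERS, headers);
            }
        }

    If the runtime ignores the header set this way, the remaining options tend to involve a SOAP/logical handler or overriding the transport, which is harder to do while staying on unmodified published artifacts.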

    Read the article

  • Architecture Guidance Needed?

    - by vijay
    We are about to automate a number of processes for our reporting team. (The reports are daily reports, weekly reports, monthly reports, etc.) Mostly the process is pulling some data from Oracle and then filling it into particular Excel template files. Each report, and so each template, is different from the others. Apart from the Excel file manipulation, there is hardly any business logic behind these. The client wants an integrated tool with all the automated processes placed as menus/submenus. Right now there are roughly 30 processes waiting to be automated, and we are expecting more new reports in the next quarter. I am nowhere near having any practical experience when it comes to architecture. I have already been maintaining two or three systems (they are more than 4 years old) for this prestigious client. The above-mentioned tool will very likely be maintained for another 3 years. From past experience I've been through the pain of implementing change requests against a rigid and undocumented code base, resulting in the breakdown of the system and then, eventually, myself. So my main and topmost concern is maintainability. When I was searching I came across this link: Smart Clients Using CAB and SCSF. Is the above link appropriate for my requirement? Also, should I place each automated process in a separate form under a single project, or place them in separate projects under a single solution? Please correct me if I have missed any other important information. Thanks.

    Read the article

  • Eclipse CDT: cannot debug or terminate application

    - by Paul Lammertsma
    I have Eclipse set up fairly nicely to run the G++ compiler through Cygwin. Even the character encoding is set up correctly! There still seems to be something wrong with my configuration, though: I can't debug. The pause button in the debug view is simply disabled, and no threads appear in my application tree. It seems that gdb is simply not communicating with Eclipse. Presently, I have the debug settings as follows:

        Debugger: "Cygwin gdb Debugger"
        GDB debugger: gdb
        GDB command file: .gdbinit
        Protocol: Default

    I should mention here that I have no idea what .gdbinit does; in my project it is merely an empty file. What is wrong with my configuration?

    Debugging: when attempting to terminate the application in debug mode, Eclipse displays the following error: "Target request failed: failed to interrupt." I can't kill the process, either; I have to kill its parent gdb.exe, which in turn kills my application.

    Running: when running it normally, a bunch of kill.exes are called, doing nothing, while Eclipse displays the following error: "Terminate failed." I can kill FaceDetector.exe from the task manager.

    Process Explorer: this is what it looks like in Process Explorer (debugging left, running right):

    Read the article

  • Usage of static analysis tools - with Clear Case/Quest

    - by boyd4715
    We are in the process of defining our software development process and wanted to get some feedback from the group about this topic. Our team is spread out - US, Canada and India - and I would like to put into place some simple standard rules that all teams will apply to their code. We make use of ClearCase/ClearQuest and RAD. I have been looking at PMD, CPP, Checkstyle and FindBugs as a start. My thought is to just put these into Ant and have the developers run them manually. I realize that doing this requires some trust that each developer will actually run them. The other thought is to add some builders into the IDE which would run a subset of the rules (to keep the build process light) and then add another, heavier set when they check in the code. Another idea is to make use of something like CruiseControl and have it set up to run these static analysis tools along with the unit tests whenever ClearCase/ClearQuest is idle. I'm wondering if others have done this, whether it was successful, or whether they can provide lessons learned.

    Read the article

  • Automatic web form testing/filling

    - by Polatrite
    I recently became lead on getting an inordinate amount of testing done in a very short period of time. We have many different web forms, using custom (Telerik) controls, that need to be tested for proper data validation and sensible handling of the data. Some of the forms are several pages long with 30-80 different controls for data entry. I am looking for a software solution (that is free) that would allow me to automate the process of filling in these forms, by designing a script or using a UI. The other requirement is that I can't use any browser but IE6 (terrible, I know). I have previously used AutoHotkey to great success for automatic Windows form testing, since AutoHotkey's API allows you to reference controls on the Windows form directly. However, AutoHotkey does not have similar support for web forms (everything is just one big "InternetExplorer" control). While I would prefer to script some variance in the data to help serialize each test, it's not necessary, as I could go back through and manually edit a field or two (plus "break" whatever control I'm currently testing) to serialize each test. If you've ever seen Spawner (http://forge.mysql.com/projects/project.php?id=214), it's almost exactly the sort of thing I'm looking for (Spawner generates dummy SQL data, as opposed to dummy web form data) - but I won't be picky; I've got a really short deadline to meet and had this thrust in my lap just today. ;)

    Edit1: One of the challenges of just using AutoHotkey to simulate keyboard input (tabbing through controls) is that some controls don't currently have a tab index (bug), and some controls cause a page reload after modification, resulting in inconsistent control focus (tabbing gets screwed up). Our application makes heavy use of page reloads to populate fields (select a location and it auto-populates a city, for example).

    Read the article
