Search Results

Search found 3874 results on 155 pages for 'differed execution'.

Page 119/155 | < Previous Page | 115 116 117 118 119 120 121 122 123 124 125 126  | Next Page >

  • application packaging

    - by user285825
    Application packaging (software deliverables) is a vague concept to me. I have worked on web applications before, so all I know is how to prepare the deliverable, i.e. a WAR, drop it into the servlet container and let the container take care of the rest. But for other Java or C++ based applications (.exe or .class files), how do I prepare the package? What should be considered when preparing the package and deploying the application? I understand there may be entries to make in the Windows registry, or libraries to put in the /usr/bin directory, and so on. During development, execution and testing are done informally, for example with shell scripts, but deployment is a more formal matter. As I said, I only know how to deploy web applications and am not very knowledgeable in the other areas. Are there conventions for packaging, the way javadoc or doxygen/man pages are conventions for documentation? Application packaging does not seem to get much discussion in books or online materials, which is disappointing.
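
    For the Java case, the usual convention is an executable jar whose manifest names the entry point. A minimal sketch (the com.example.Main class and the build/classes directory are made-up names for illustration):

      # assumes compiled classes live under build/classes and the entry point is com.example.Main
      echo "Main-Class: com.example.Main" > MANIFEST.MF
      jar cfm myapp.jar MANIFEST.MF -C build/classes .
      java -jar myapp.jar

    More elaborate deliverables (installers, registry entries, /usr/bin symlinks) are typically produced by platform packaging tools on top of such a jar rather than by hand.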

    Read the article

  • Transitive SQL query on same table

    - by MiKu
    Hey, consider the following table and data:

      in_timestamp | out_timestamp | name  | in_id | out_id | in_server      | out_server     | status
      timestamp1   | timestamp2    | data1 | id1   | id2    | others-server1 | my-server1     | success
      timestamp2   | timestamp3    | data1 | id2   | id3    | my-server1     | my-server2     | success
      timestamp3   | timestamp4    | data1 | id3   | id4    | my-server2     | my-server3     | success
      timestamp4   | timestamp5    | data1 | id4   | id5    | my-server3     | others-server2 | success

    The data above is a log of the execution flow of some data across servers: the data flowed from 'others-server1' through a chain of 'my-servers' and finally to the destination 'others-server2'. Questions: 1) I need to present this log to a client who does not need to know anything about the chain of 'my-servers'. All I am supposed to report is when the data entered my infrastructure and when it left, i.e.: in_timestamp (from 'others-server1' to 'my-server1'), out_timestamp (from 'my-server3' to 'others-server2'), name, status. I want to write SQL for this; can someone help? NOTE: there are not always three 'my-servers'; it differs from situation to situation, e.g. there might be four of them for, say, data2. 2) Are there any alternatives to plain SQL, such as stored procedures? 3) Any optimizations? (The number of records is huge, around 5 million a day at the moment, and we are supposed to show records up to a week old.) Thanks in advance for the help! :)
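
    One possible direction for question 1, offered as a sketch: it assumes the rows of one flow share the same name, that internal hosts can be recognized by a 'my-' prefix, and that the table is called log (all three are assumptions, not facts from the question).

      -- first_hop: the row where the data entered the infrastructure
      -- last_hop:  the row where it left again
      SELECT first_hop.in_timestamp,
             last_hop.out_timestamp,
             first_hop.name,
             last_hop.status
      FROM   log AS first_hop
      JOIN   log AS last_hop ON last_hop.name = first_hop.name
      WHERE  first_hop.in_server NOT LIKE 'my-%'
      AND    last_hop.out_server NOT LIKE 'my-%';

    With an index covering (name, in_server, out_server) this avoids walking the whole chain per flow, which matters at millions of rows per day.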

    Read the article

  • Self-hosted WCF server and SSL

    - by jitm
    Hello, I have a self-hosted WCF server (not IIS), and I generated certificates (on Windows XP) from the command line like this:

      makecert.exe -sr CurrentUser -ss My -a sha1 -n CN=SecureClient -sky exchange -pe
      makecert.exe -sr CurrentUser -ss My -a sha1 -n CN=SecureServer -sky exchange -pe

    These certificates were added to the server code like this:

      serviceCred.ServiceCertificate.SetCertificate(StoreLocation.LocalMachine, StoreName.My, X509FindType.FindBySubjectName, "SecureServer");
      serviceCred.ClientCertificate.SetCertificate(StoreLocation.LocalMachine, StoreName.My, X509FindType.FindBySubjectName, "SecureClient");

    After that I created a simple client to check the SSL connection to the server. Client configuration:

      <?xml version="1.0" encoding="utf-8" ?>
      <configuration>
        <system.serviceModel>
          <bindings>
            <basicHttpBinding>
              <binding name="BasicHttpBinding_IAdminContract" closeTimeout="00:01:00" openTimeout="00:01:00"
                       receiveTimeout="00:10:00" sendTimeout="00:01:00" allowCookies="false" bypassProxyOnLocal="false"
                       hostNameComparisonMode="StrongWildcard" maxBufferSize="65536" maxBufferPoolSize="524288"
                       maxReceivedMessageSize="65536" messageEncoding="Text" textEncoding="utf-8"
                       transferMode="Buffered" useDefaultWebProxy="true">
                <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384" />
                <security mode="TransportCredentialOnly">
                  <transport clientCredentialType="Basic"/>
                </security>
              </binding>
            </basicHttpBinding>
          </bindings>
          <client>
            <endpoint address="https://myhost:8002/Admin" binding="basicHttpBinding"
                      bindingConfiguration="BasicHttpBinding_IAdminContract"
                      contract="Admin.IAdminContract" name="BasicHttpBinding_IAdminContract" />
          </client>
        </system.serviceModel>
      </configuration>

    Code:

      Admin.AdminContractClient client = new AdminContractClient("BasicHttpBinding_IAdminContract");
      client.ClientCredentials.UserName.UserName = "user";
      client.ClientCredentials.UserName.Password = "pass";
      var result = client.ExecuteMethod();

    During execution I receive the following error: "The provided URI scheme 'https' is invalid; expected 'http'.\r\nParameter name: via". Question: how do I enable SSL for a self-hosted server, and where should I set up the certificates for the client and the server? Thanks.
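
    The "'https' is invalid; expected 'http'" message is what basicHttpBinding reports when the binding's security mode is still TransportCredentialOnly (plain HTTP) while the endpoint address says https. A sketch of one possible direction, assuming SSL transport security is actually the goal:

      <!-- on both the client and the server binding -->
      <security mode="Transport">
        <transport clientCredentialType="Basic"/>
      </security>

    In addition, for a self-hosted service the server certificate has to be bound to the listening port at the HTTP.SYS level, since there is no IIS to do it (netsh http add sslcert on Vista and later, httpcfg set ssl on Windows XP).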

    Read the article

  • IUsable: controlling resources in a better way than IDisposable

    - by Ilya Ryzhenkov
    I wish C# had a "Usable" pattern, where the code block of a using construct would be passed to a function as a delegate:

      class Usable : IUsable
      {
          public void Use(Action action) // implements IUsable
          {
              // acquire resources
              action();
              // release resources
          }
      }

    and in user code:

      using (new Usable())
      {
          // this code block is converted to a delegate and passed to the Use method above
      }

    Pros: controlled execution and exception handling; the fact that a "Usable" is in use is visible in the call stack. Cons: the cost of the delegate. Do you think this is feasible and useful, and does it have any problems from the language point of view? Are there any pitfalls you can see?

    EDIT: David Schmitt proposed the following:

      using (new Usable(delegate()
      {
          // actions here
      })) {}

    That can work in the sample scenario, but usually the resource is already allocated and you want it to look like this:

      using (Repository.GlobalResource)
      {
          // actions here
      }

    where GlobalResource (yes, I know global resources are bad) implements IUsable. You can rewrite it as briefly as

      Repository.GlobalResource.Use(() =>
      {
          // actions here
      });

    but that looks a little weird (and weirder still if you implement the interface explicitly), and this case comes up so often in various flavours that I thought it deserved to be new syntactic sugar in the language.
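
    For the "controlled execution, exceptions" pro to hold, Use would presumably need a try/finally around the delegate so the release step always runs; a small sketch of that assumption, keeping the comment placeholders from the original:

      public void Use(Action action) // implements IUsable
      {
          // acquire resources
          try
          {
              action(); // the caller's using-block body
          }
          finally
          {
              // release resources - runs even if the block throws
          }
      }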

    Read the article

  • Optimizing PHP require_once's for low disk i/o?

    - by buggedcom
    Q1) I'm designing a CMS (who isn't!) and priority is being given to caching. Literally everything is cached: DB rows, DB id queries, configuration data, processed data, compiled templates. Currently it has two layers of caching. The first is an opcode or memory cache such as APC, eAccelerator, XCache or memcached. If an entry is not found there, it is looked up in the secondary, slower cache, i.e. PHP includes. Are the opcode caches actually faster than doing a require_once on a PHP file containing a var_export'd array of data? My tests are inconclusive because my development box (XAMPP with PHP 5.3) keeps throwing errors when installing any of the aforementioned programs.

    Q2) The CMS has numerous helper classes that are autoloaded on demand instead of loading all files. Mostly each has a require before it so no autoloading needs to take place, but that is not the question. Because a page script can include up to 50-60 helper files, I have a feeling that under load the site would buckle because of all the I/O this incurs. Ignore for the moment that there is an output cache in place that would remove the need for what I am about to suggest, and also that opcode caches would render this moot. What I have tried is joining all the helper files required for a script's execution into one single file. This is achievable and works well, but it has the side effect of greatly increasing memory usage even though technically the same code is being used. What are your thoughts and opinions on this?
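
    For reference, the include-based cache layer described in Q1 usually looks something like the sketch below (the cache directory and file naming are made up); whether APC and friends beat it mostly depends on whether the generated file is already sitting in the OS file cache and the opcode cache.

      <?php
      // write a value to the slow cache as an includable PHP file
      function cache_write($key, $value) {
          $file = '/tmp/cache/' . md5($key) . '.php';   // assumed cache directory
          file_put_contents($file, '<?php return ' . var_export($value, true) . ';');
      }

      // read it back; returns null on a miss
      function cache_read($key) {
          $file = '/tmp/cache/' . md5($key) . '.php';
          return is_file($file) ? include $file : null;
      }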

    Read the article

  • How do I Unit Test Actions without Mocking that use UpdateModel?

    - by Hellfire
    I have been working my way through Scott Guthrie's excellent post on ASP.NET MVC Beta 1. In it he shows the improvements made to the UpdateModel method and how they improve unit testing. I have recreated a similar project, however any time I run a unit test that contains a call to UpdateModel I receive an ArgumentNullException naming the controllerContext parameter. Here are the relevant bits, starting with my model:

      public class Country
      {
          public Int32 ID { get; set; }
          public String Name { get; set; }
          public String Iso3166 { get; set; }
      }

    The controller action:

      [AcceptVerbs(HttpVerbs.Post)]
      public ActionResult Edit(Int32 id, FormCollection form)
      {
          using ( ModelBindingDataContext db = new ModelBindingDataContext() )
          {
              Country country = db.Countries.Where(c => c.CountryID == id).SingleOrDefault();
              try
              {
                  UpdateModel(country, form);
                  db.SubmitChanges();
                  return RedirectToAction("Index");
              }
              catch
              {
                  return View(country);
              }
          }
      }

    And finally my unit test that's failing:

      [TestMethod]
      public void Edit()
      {
          CountryController controller = new CountryController();
          FormCollection form = new FormCollection();
          form.Add("Name", "Canada");
          form.Add("Iso3166", "CA");
          var result = controller.Edit(2 /*Canada*/, form) as RedirectToRouteResult;
          Assert.IsNotNull(result, "Expected to be redirected on successful POST.");
          Assert.AreEqual("Show", result.RouteName, "Expected to redirect to the View action.");
      }

    The ArgumentNullException is thrown by the call to UpdateModel with the message "Value cannot be null. Parameter name: controllerContext". I'm assuming that somewhere UpdateModel requires the System.Web.Mvc.ControllerContext, which isn't present during execution of the test. I'm also assuming that I'm doing something wrong somewhere and just need to be pointed in the right direction. Help please!
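
    One direction worth trying, stated as an assumption rather than a confirmed fix: give the controller a ControllerContext before invoking the action. If the MVC build in use exposes the ControllerContext(HttpContextBase, RouteData, ControllerBase) constructor, this can even be done without a mocking library, because HttpContextBase's members are virtual and an empty subclass compiles:

      // hypothetical stand-in for the real HTTP context
      class HttpContextStub : HttpContextBase { }

      // in the test, before calling controller.Edit(...):
      controller.ControllerContext =
          new ControllerContext(new HttpContextStub(), new RouteData(), controller);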

    Read the article

  • mercurial: how to synchronize mq patches from a master repo as mq patches to a set of clone repos

    - by dim
    I have to run a dozen different build tests on a code base maintained in a Mercurial repository. I don't want to run these tests serially on the same repository, because they modify a set of common files, and I want to run them in parallel on different machines. Also, after all tests have run I want access to the latest test results from those test work areas. Currently I clone the master repository a dozen times and run a different test in each clone. Before each test execution I do a pull/update/purge preparation sequence in order to start the test from the latest clean state. That works well for me. I also prepare new changes using the mq extension, which I want to test on all clones as above before committing them. To test candidate mq patches I want to somehow deploy/synchronize them so that they are available in the test clones, apply the ones that are ready for testing using a guard, and then run the test. Has anybody done this kind of synchronization before? What's the simplest way to do it? Do I need versioned mq patches for that?
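
    One way this is commonly handled is to put the patch queue itself under version control and pull it into each clone; a rough sketch, assuming the master repository lives at ../master and the guard for candidate patches is called "candidate" (both names are illustrative):

      # in the master repository: version the patch queue
      hg init --mq
      hg -R .hg/patches commit -A -m "track mq patches"

      # create each test clone together with its patch queue
      hg qclone ../master clone-N

      # before each run, refresh the patches and apply the guarded candidates
      hg -R .hg/patches pull -u ../master/.hg/patches
      hg qselect candidate
      hg qpush -a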

    Read the article

  • AuthorizationExecuteWithPrivileges and osascript failing

    - by cygnl7
    I'm attempting to execute an uninstaller (written in AppleScript) through AuthorizationExecuteWithPrivileges. I'm setting up my rights after creating an empty auth ref like so:

      char *tool = "/usr/bin/osascript";
      AuthorizationItem items = {kAuthorizationRightExecute, strlen(tool), tool, 0};
      AuthorizationRights rights = {sizeof(items)/sizeof(AuthorizationItem), &items};
      AuthorizationFlags flags = kAuthorizationFlagDefaults | kAuthorizationFlagExtendRights |
                                 kAuthorizationFlagPreAuthorize | kAuthorizationFlagInteractionAllowed;
      status = AuthorizationCopyRights(authorizationRef, &rights, NULL, flags, NULL);

    Later I call:

      status = AuthorizationExecuteWithPrivileges(authorizationRef, tool, kAuthorizationFlagDefaults, (char *const *)args, NULL);

    On Snow Leopard this works fine, but on Leopard I get the following in syslog.log:

      Apr 19 15:30:09 hostname /usr/bin/osascript[39226]: OpenScripting.framework - 'gdut' event blocked in process with mixed credentials (issetugid=0 uid=501 euid=0 gid=20 egid=20)
      Apr 19 15:30:12: --- last message repeated 1 time ---
      ...
      Apr 19 15:30:12 hostname [0x0-0x2e92e9].com.example.uninstaller[39219]: /var/folders/vm/vmkIi0nYG8mHMrllaXaTgk+++TI/-Tmp-/TestApp_tmpfiles/Uninstall.scpt:
      Apr 19 15:30:12 hostname [0x0-0x2e92e9].com.example.uninstaller[39219]: execution error: «constant afdmasup» doesn't understand the «event earsffdr» message. (-1708)

    Am I going about this all wrong? I just want to run the equivalent of "sudo /usr/bin/osascript ...".

    Read the article

  • applicationWillTerminate Appears to be Inconsistent

    - by Lauren Quantrell
    This one has me batty. In applicationWillTerminate I am doing two things: saving some settings to the app's settings plist file and saving any changed data to the SQLite database referenced by the managedObjectContext. The problem is that it works sometimes and not others, with the same behaviour in the simulator and on the device. If I hit the home button while the app is running, I can only sometimes get the data to store in the plist and in the Core Data store. It seems that either both work or neither works, and it makes no difference if I switch the execution order (saveState, managedObjectContext or managedObjectContext, saveState). I can't figure out how this can happen. Any help is greatly appreciated. lq

    AppDelegate.m:

      @synthesize rootViewController;

      - (void)applicationWillTerminate:(UIApplication *)application {
          [rootViewController saveState];
          NSError *error;
          if (managedObjectContext != nil) {
              if ([managedObjectContext hasChanges] && ![managedObjectContext save:&error]) {
                  // Handle error
                  exit(-1); // Fail
              }
          }
      }

    RootViewController.m:

      - (void)saveState {
          NSUserDefaults *userDefaults = [NSUserDefaults standardUserDefaults];
          [userDefaults setInteger:self.someInteger forKey:kSomeNumber];
          [userDefaults setObject:self.someArray forKey:kSomeArray];
      }
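
    One thing worth checking, offered as a guess rather than a confirmed diagnosis: NSUserDefaults batches its disk writes, and a terminating app can be killed before the automatic flush happens, so an explicit synchronize at the end of saveState makes the plist write deterministic:

      - (void)saveState {
          NSUserDefaults *userDefaults = [NSUserDefaults standardUserDefaults];
          [userDefaults setInteger:self.someInteger forKey:kSomeNumber];
          [userDefaults setObject:self.someArray forKey:kSomeArray];
          [userDefaults synchronize]; // force the defaults to be written to disk now
      }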

    Read the article

  • Modify Executing Jar file

    - by pinkynobrain
    Hello Stack Overflow friends. I have a simple problem which I fear doesn't have a simple solution, and I need advice on how to proceed. I am developing a Java application packaged as an executable JAR, but it needs to modify some of its JAR file contents during execution. At this point I hit a problem, because some operating systems lock the file and prevent writes to it. It is essential that the user sees an updated version of the JAR file by the time the application exits, although I can be pretty flexible about how to achieve this. A clean and efficient solution is obviously preferable, but portability is the only hard requirement. The following are three approaches I can see to solving the problem; feel free to comment on them or suggest others.

    1. Tell Java to unlock the JAR file for writing (this doesn't seem possible, but it would be the easiest solution).
    2. Copy the executable class files to a temporary file on application startup, use a class loader to load those files and unload the ones from the initial JAR file. (I have not had much experience with class loaders, but hopefully the JVM would then be smart enough to realize that the original JAR is no longer in use and unlock it.)
    3. Put a second executable JAR file inside the first; on startup, extract the inner JAR to a temporary file, invoke a new Java process and pass it the location of the outer JAR, then let the first process exit so the second process can modify the outer JAR unencumbered. (This will work, but I'm not sure there is a platform-independent way for one Java app to invoke another; see the sketch after this list.)

    I know this is a weird question, but any help would be appreciated.
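
    On the portability worry in approach 3: the running JVM's own java launcher can be located through the java.home system property, which avoids relying on a platform-specific command name. A small sketch of that idea (the helper jar and argument passing are illustrative):

      import java.io.File;
      import java.io.IOException;

      public final class Relauncher {
          // starts a second JVM running the extracted helper jar, passing it the outer jar's path
          public static void relaunch(File helperJar, File outerJar) throws IOException {
              String javaBin = System.getProperty("java.home")
                      + File.separator + "bin" + File.separator + "java";
              new ProcessBuilder(javaBin, "-jar",
                      helperJar.getAbsolutePath(), outerJar.getAbsolutePath()).start();
          }
      }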

    Read the article

  • Is it possible that a single-threaded program is executed simultaneously on more than one CPU core?

    - by Wolfgang Plaschg
    When I run a single-threaded program that I have written on my quad-core Intel, I can see in the Windows Task Manager that all four cores of my CPU are actually more or less active. One core is more active than the other three, but there is activity on the others as well. There is no other program running (besides the OS kernel, of course) that would plausibly explain that activity, and when I close my program all activity on all cores drops down to nearly zero. All that is left is a little "noise" on the cores, so I'm pretty sure all the visible activity comes directly or indirectly (such as invoking system routines) from my program. Is it possible that the OS or the cores themselves try to balance some code or execution across all four cores, even though it's not a multithreaded program? Do you have any links documenting this technique? Some info about the program: it's a console app written in Qt, and the Task Manager states that only one thread is running. Maybe Qt uses threads, but I don't use signals or slots, nor any GUI. Link to Task Manager screenshot: http://img97.imageshack.us/img97/6403/taskmanager.png This question is language agnostic and not tied to Qt/C++; I just want to know whether Windows or Intel balance single-threaded code over all cores. If they do, how does this technique work? All I can think of is that kernel routines like reading from disk etc. are scheduled on all cores, but this won't improve performance significantly since the code still has to run synchronously with the kernel API calls. EDIT: Do you know any tools for better analysis of single and/or multi-threaded programs than the poor Windows Task Manager?
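
    One quick experiment that can help separate "my thread is being migrated between cores by the scheduler" from "other activity is being attributed to my process": pin the process to a single core and see whether the Task Manager picture changes. A sketch for a Windows C++ console app (the mask value 1 means "core 0 only"):

      #include <windows.h>

      int main() {
          // restrict this process to CPU 0 for the duration of the run
          SetProcessAffinityMask(GetCurrentProcess(), 1);

          // ... run the normal workload here ...
          return 0;
      }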

    Read the article

  • How to form submit and show a different page in ASP.Net MVC?

    - by melaos
    Hi guys, I'm new to ASP.NET MVC. I have built a two-page app which takes the user's registration information and posts it to the database. I use a lot of jQuery and Ajax calls to retrieve data from the database using a LINQ to SQL stored procedure object, and currently I'm stuck on one page: after the user submits the form it should redirect him to /Home/AddProduct. What I found was this error:

      Validation of viewstate MAC failed. If this application is hosted by a Web Farm or cluster, ensure that <machineKey> configuration specifies the same validationKey and validation algorithm. AutoGenerate cannot be used in a cluster.
      Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
      Exception Details: System.Web.HttpException: Validation of viewstate MAC failed. If this application is hosted by a Web Farm or cluster, ensure that <machineKey> configuration specifies the same validationKey and validation algorithm. AutoGenerate cannot be used in a cluster.

    What I used on the form is a combination of HTML controls, ASP.NET controls and some ASP.NET MVC type controls. I submit the form using action="/Home/ProductAdded", and after doing some googling I found I was supposed to add the machine key, but after doing so the index page becomes unviewable, because it couldn't find the index file. Removing the action helps, but then it just doesn't go anywhere. So what am I missing here? I feel I'm missing a lot of fundamental understanding about ASP.NET MVC, and I don't even know how to submit a form and go to a different page!
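
    The viewstate MAC error is a hint that classic ASP.NET server controls (which emit viewstate) are being posted to an MVC action. A sketch of the MVC-idiomatic form, assuming the target action is ProductAdded on HomeController and that plain HTML inputs replace the server controls (the productName field is made up for illustration):

      <% using (Html.BeginForm("ProductAdded", "Home")) { %>
          <input type="text" name="productName" />
          <input type="submit" value="Add product" />
      <% } %>

    and the matching action, which handles the post and then redirects to a different page:

      [AcceptVerbs(HttpVerbs.Post)]
      public ActionResult ProductAdded(string productName)
      {
          // ... save the product ...
          return RedirectToAction("AddProduct");
      }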

    Read the article

  • How can I test this SQL Server performance Utility?

    - by Martin Smith
    As part of my MSc I need to do a three month project later this year. I have decided to do something which will likely be useful for me in the workplace and spend the time getting to understand SQL Server internals. The deliverable for this project will be a performance advisor looking at a variety of different rules. Some static such as finding redundant indexes, some more dynamic such as using XEvents to find outlying invocations of stored procedure execution times when certain parameters are passed. I am struggling to come up with a good way of testing this though. I can obviously design a "bad" database and a synthetic workload that my tool will pick up issues on but I also need to demonstrate that it has real world utility. Looking at the self tuning database literature it is common to use TPC benchmarks but I've had a look at the TPCC site and it looks very time consuming to implement and not that good a fit to my project's testing needs in any event (I would still be able to "rig" it by the decisions I made on indexing or physical architecture). Plan A would be to find willing beta tester(s) but in the event that isn't possible I will need a fallback plan. The best idea I have come up with so far is to use the various MS sample applications as examples of real world applications. e.g. http://msftdpprodsamples.codeplex.com/ http://www.asp.net/community/projects/ Does anyone have any better suggestions?

    Read the article

  • UIAlertView clickedButtonAtIndex with presentModalViewController

    - by Vivas
    Hi, I am trying to present a view via presentModalViewController from a UIAlertView button. The code is below; the NSLog output is displayed in the console, so I know that execution is indeed reaching that point. Also, there are no errors or anything else displayed:

      - (void)alertView:(UIAlertView *)alertView clickedButtonAtIndex:(NSInteger)buttonIndex {
          if (buttonIndex != [alertView cancelButtonIndex]) {
              NSLog(@" The username is: %@ AND the password is: %@ ", textField.text, passTextField.text);
              // do some logic in here later
              CompanyViewController *companyViewController = [[[CompanyViewController alloc] initWithNibName:@"CompanyViewController" bundle:nil] autorelease];
              [self presentModalViewController:companyViewController animated:YES];
          }
          [textField resignFirstResponder];
          [passTextField resignFirstResponder];
      }

    Edit: The method above belongs to a UIViewController. Below is the interface file:

      @interface testingViewController : UIViewController <UIAlertViewDelegate, UITextFieldDelegate> {
          UITextField *textField;
          UITextField *passTextField;
      }

      @property (nonatomic, retain) UITextField *textField;
      @property (nonatomic, retain) UITextField *passTextField;
      @property (readonly) NSString *enteredText;
      @property (readonly) NSString *enteredPassword;

      @end

    Any help is appreciated.
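
    One possibility worth testing, offered as a guess: clickedButtonAtIndex: fires while the alert is still on screen, and presenting a modal view controller at that moment can be swallowed by the alert's own dismissal animation. Moving the presentation to the delegate method that runs after the alert has gone away avoids that race:

      - (void)alertView:(UIAlertView *)alertView didDismissWithButtonIndex:(NSInteger)buttonIndex {
          if (buttonIndex != [alertView cancelButtonIndex]) {
              // the alert is fully dismissed here, so presenting a modal view is safe
              CompanyViewController *companyViewController = [[[CompanyViewController alloc]
                  initWithNibName:@"CompanyViewController" bundle:nil] autorelease];
              [self presentModalViewController:companyViewController animated:YES];
          }
      }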

    Read the article

  • C# NullReferenceException when passing DataTable

    - by Timothy
    I've been struggling with a NullReferenceException and hope someone here can point me in the right direction. I'm trying to create and populate a DataTable and then show the results in a DataGridView control. The basic code follows, and execution stops with a NullReferenceException at the point where I invoke the new UpdateResults_Delegate. Oddly enough, I can trace entries.Rows.Count successfully before I return it from QueryEventEntries, so I can at least show that 1) entries is not a null reference, and 2) the DataTable contains rows of data. I know I have to be doing something wrong, but I just don't know what.

      private delegate void UpdateResults_Delegate(DataTable entries);

      private void UpdateResults(DataTable entries)
      {
          dataGridView.DataSource = entries;
      }

      private void button_Click(object sender, EventArgs e)
      {
          Thread t = new Thread(new ThreadStart(PerformQuery));
          t.Start();
      }

      private void PerformQuery()
      {
          DateTime start = new DateTime(dateTimePicker1.Value.Year, dateTimePicker1.Value.Month, dateTimePicker1.Value.Day, 0, 0, 0);
          DateTime stop = new DateTime(dateTimePicker2.Value.Year, dateTimePicker2.Value.Month, dateTimePicker2.Value.Day, 0, 0, 0);
          DataTable entries = QueryEventEntries(start, stop);
          Invoke(new UpdateResults_Delegate(UpdateResults), entries);
      }

      private DataTable QueryEventEntries(DateTime start, DateTime stop)
      {
          DataTable entries = new DataTable();
          entries.Columns.Add("colEventType", typeof(Int32));
          entries.Columns.Add("colTimestamp", typeof(Int32));
          entries.Columns.Add("colDetails", typeof(String));
          ...
          conn.Open();
          using (SqlDataReader r = cmd.ExecuteReader())
          {
              while (r.Read())
              {
                  entries.Rows.Add(result.GetInt32(0), result.GetInt32(1), result.GetString(2));
              }
          }
          return entries;
      }

    Read the article

  • Execute code on assembly load

    - by Dmitriy Matveev
    I'm working on a wrapper for a huge unmanaged library. Almost every one of its functions can call some error handler deep inside. The default error handler writes the error to the console and calls abort(). This behavior is undesirable for a managed library, so I want to replace the default error handler with my own, which will just throw an exception and let the program continue normal execution after that exception is handled. The error handler must be changed before any of the wrapped functions is called. The wrapper library is written in managed C++ with static linkage to the wrapped library, so there is nothing like "a type with hundreds of dll imports" present. I also can't find a single type that is used by everything inside the wrapper library, so I can't solve the problem by defining a static constructor in one single type which would execute the code I need. I currently see two ways of solving the problem:

    1. Define a static method like Library.Initialize() which must be called once by the client before his code uses any part of the wrapper library.
    2. Find the most minimal subset of types that is used by every top-level function (I think the size of this subset will be something like 25-50 types) and add static constructors calling Library.Initialize (which would be internal in that scenario) to every one of those types.

    I've read this and this question, but they didn't help me. Are there any proper ways of solving this problem? Maybe some nice hacks available?

    Read the article

  • CPU friendly infinite loop

    - by Adi
    Writing an infinite loop is simple:

      while(true){
          //add whatever break condition here
      }

    But this will trash CPU performance: the executing thread will take as much of the CPU's power as it can get. What is the best way to lower the impact on the CPU? Adding some Thread.Sleep(n) should do the trick, but setting a high timeout value for the Sleep() method may make the application look unresponsive to the operating system. Let's say I need to perform a task each minute or so in a console app. I need to keep Main() running in an "infinite loop" while a timer fires the event that does the job, and I would like to keep Main() with the lowest possible impact on the CPU. What methods do you suggest? Sleep() can be OK, but as I already mentioned, it might indicate an unresponsive thread to the operating system.

    LATER EDIT: I want to explain better what I am looking for. I need a console app, not a Windows service: console apps can simulate Windows services on Windows Mobile 6.x systems with the Compact Framework. I need a way to keep the app alive as long as the Windows Mobile device is running. We all know that a console app runs as long as its static Main() function runs, so I need a way to prevent the Main() function from exiting. In special situations (like updating the app) I need to request the app to stop, so I need to loop indefinitely and test for some exit condition. For example, this is why Console.ReadLine() is no use for me: there is no exit condition check. Given the above, I still want the Main() function to be as resource friendly as possible; leave aside the footprint of the function that checks for the exit condition.
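
    A common pattern for this, sketched under the assumption that System.Threading.Timer and ManualResetEvent are available on the Compact Framework version in use: let Main() block on an event handle (which costs no CPU), let a timer do the periodic work, and have the update/stop request signal the event.

      using System;
      using System.Threading;

      class Program
      {
          static readonly ManualResetEvent exitSignal = new ManualResetEvent(false);

          static void Main()
          {
              // fire DoWork once a minute; the callback runs on a pool thread
              Timer timer = new Timer(DoWork, null, 0, 60 * 1000);

              // Main() waits here using no CPU until something calls RequestStop()
              exitSignal.WaitOne();

              timer.Dispose();
          }

          static void DoWork(object state)
          {
              // the periodic task goes here
          }

          // called from wherever the "please stop for update" request arrives
          static void RequestStop()
          {
              exitSignal.Set();
          }
      }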

    Read the article

  • eventmachine and external scripts via backticks

    - by Maciek
    I have a small HTTP server script I've written using eventmachine which needs to call external scripts/commands and does so via backticks (``). When serving up requests which don't run backticked code, everything is fine, however, as soon as my EM code executes any backticked external script, it stops serving requests and stops executing in general. I noticed eventmachine seems to be sensitive to sub-processes and/or threads, and appears to have the popen method for this purpose, but EM's source warns that this method doesn't work under Windows. Many of the machines running this script are running Windows, so I can't use popen. Am I out of luck here? Is there a safe way to run an external command from an eventmachine script under Windows? Is there any way I could fire off some commands to be run externally without blocking EM's execution? edit: the culprit that seems to be screwing up EM the most is my usage of the Windows start command, as in: start java myclass. The reason I'm using start is because I want those external scripts to start running and keep running after the EM request is served
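
    One route that may help, assuming the blocking call itself is acceptable as long as it doesn't stall the reactor: EventMachine's defer runs a block on a background Ruby thread pool and is not subject to the Windows restriction on EM.popen, so the backtick (or the start command) can be moved there. A sketch:

      require 'eventmachine'

      # inside the HTTP request handler:
      operation = proc do
        # runs on EM's thread pool, so the reactor keeps serving requests
        `java myclass`          # illustrative external command
      end
      callback = proc do |output|
        # back on the reactor thread; use the command's output here
      end
      EM.defer(operation, callback)

    One caveat, stated as an assumption: on older MRI builds a child process spawned this way may still block the whole interpreter, so it is worth testing on the target Ruby version.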

    Read the article

  • Using PHP SoapClient with Java JAX-WS RI (Webservice)

    - by Kau-Boy
    For a new project, we want to build a web service in Java using JAX-WS RI, and for the web service client we want to use PHP. In a small tutorial about JAX-WS RI I found this example web service:

      package webservice;

      import javax.jws.WebService;
      import javax.jws.soap.SOAPBinding;
      import javax.jws.soap.SOAPBinding.Style;

      @WebService
      @SOAPBinding(style = Style.RPC)
      public class Calculator {
          public long addValues(int val1, int val2) {
              return val1 + val2;
          }
      }

    and for the web server:

      package webservice;

      import javax.xml.ws.Endpoint;
      import webservice.Calculator;

      public class CalculatorServer {
          public static void main(String args[]) {
              Calculator server = new Calculator();
              Endpoint endpoint = Endpoint.publish("http://localhost:8080/calculator", server);
          }
      }

    Starting the server and browsing the WSDL at "http://localhost:8080/calculator?wsdl" works perfectly, but calling the web service from PHP fails. My very simple PHP call looks like this:

      $client = new SoapClient('http://localhost:8080/calculator?wsdl', array('trace' => 1));
      echo 'Sum: '.$client->addValues(4, 5);

    With that I get either a "Fatal error: Maximum execution time of 60 seconds exceeded..." or an "Uncaught SoapFault exception: [WSDL] SOAP-ERROR: Parsing WSDL: Couldn't load from 'http://localhost:8080/calculator?wsdl' ..." exception. I have tested the PHP SoapClient() with other web services and they work without any issues. Is there a known issue with JAX-WS RI in combination with PHP, or is there an error in my web service that I didn't see? I have found this bug report, but even updating to PHP 5.3.2 didn't solve the problem. Can anyone tell me what to do? By the way, my Java version running on Windows 7 x64 is the following:

      java version "1.6.0_17"
      Java(TM) SE Runtime Environment (build 1.6.0_17-b04)
      Java HotSpot(TM) 64-Bit Server VM (build 14.3-b01, mixed mode)

    Read the article

  • use of assertions for type checking in php?

    - by user151841
    I do some checking of arguments in my PHP classes using exception-throwing functions. I have functions that do a basic check (===, in_array etc.) and throw an exception on false, so I can write

      assertNumeric($argument, "\$argument is not numeric.");

    instead of

      if ( ! is_numeric($argument) ) {
          throw new Exception("\$argument is not numeric.");
      }

    which saves some typing. I was reading in the comments of the PHP manual page on assert() that: As noted on Wikipedia - "assertions are primarily a development tool, they are often disabled when a program is released to the public." and "Assertions should be used to document logically impossible situations and discover programming errors— if the 'impossible' occurs, then something fundamental is clearly wrong. This is distinct from error handling: most error conditions are possible, although some may be extremely unlikely to occur in practice. Using assertions as a general-purpose error handling mechanism is usually unwise: assertions do not allow for graceful recovery from errors, and an assertion failure will often halt the program's execution abruptly. Assertions also do not display a user-friendly error message." This means that the advice given by "gk at proliberty dot com" to force assertions to be enabled, even when they have been disabled manually, goes against the best practice of only using them as a development tool. So, am I 'doing it wrong'? What other/better ways of doing this are there?
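
    For context, the guard functions described above amount to something like this sketch (the function name mirrors the one used in the question; throwing InvalidArgumentException rather than a bare Exception is just a suggestion):

      <?php
      function assertNumeric($value, $message)
      {
          if (!is_numeric($value)) {
              // unlike assert(), this cannot be disabled and always enforces the check
              throw new InvalidArgumentException($message);
          }
      }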

    Read the article

  • Rename a file with perl

    - by perlnoob
    I have a file in a different folder that I want to rename in Perl. I was looking at a solution earlier that showed something like this:

      for (<backup.rar>) {
          my $file = $_;
          my $new = $_ 'backup'. @test .'.rar';
          rename $file, $new or die "Error, can not rename $file as $new: $!";
      }

    However, backup.rar is in a different folder. I did try putting "C:\backup\backup.rar" inside the <> above, but I got the same error:

      C:\Program Files\WinRAR>perl backup.pl
      String found where operator expected at backup.pl line 35, near "$_ 'backup'"
              (Missing operator before 'backup'?)
      syntax error at backup.pl line 35, near "$_ 'backup'"
      Execution of backup.pl aborted due to compilation errors.

    I was using

      # Get time
      my @test = POSIX::strftime("%m-%d-%Y--%H-%M-%S\n", localtime);
      print @test;

    to get the current time, but I couldn't seem to get the rename to work correctly. What can I do to fix this? Please note I am doing this on a Windows box.
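
    The syntax error comes from the missing concatenation operator after $_, and interpolating the @test array drags its trailing newline along. A sketch of one corrected version (the C:\backup path is taken from the question; adjust as needed):

      use strict;
      use warnings;
      use POSIX qw(strftime);

      # timestamp without a trailing newline
      my $stamp = strftime("%m-%d-%Y--%H-%M-%S", localtime);

      my $old = 'C:\backup\backup.rar';
      my $new = "C:\\backup\\backup$stamp.rar";

      rename $old, $new
          or die "Error, can not rename $old as $new: $!";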

    Read the article

  • Workflow UI Integration - is WF a good approach?

    - by AJ
    Somewhat similar to this question, except we haven't decided that we're going with WF yet. I'm working on designing a system that requires a series of decisions and activities on a "work object," so I naturally began to consider workflow, specifically WF. What I'm wondering is whether WF is a good solution for a situation like the following (oversimplified) case: the flow starts with a "gather some info" web page and then branches on a first condition; one branch runs a process directly, while the other needs a "get more info" web page and then branches again on a second condition into one of two further processes. The part I'm struggling with is the "get more info (web page)" step and what happens afterwards, which would mean a halt in the execution of the workflow runtime. I'm aware that this is possible, but I'm not sure that WF is the best approach for this type of code, as user interaction may be required at many different points throughout the workflow, and the workflow will drive which data entry screens are needed. We are using a WinForms/ASP.NET web forms package for UI, which is homegrown and difficult to push deployments on, so something like SharePoint integration is out of the question. Our back end is DB2, and the workflow code (whether it's in WF or otherwise) will need to interact with that as well. I guess the bottom line is, should we look into using WF for this, or would we be better served just coding it ourselves? Can WF easily integrate data entry screens to capture information that can be used further on in the workflow?

    Read the article

  • Deploying Application with mvc in shared hosting server

    - by ankita-13-3
    We have created an ASP.NET 3.5 MVC web application. It runs absolutely fine locally, but when we deploy it to a GoDaddy shared hosting server it shows an error related to the trust level. We contacted GoDaddy support and they say they only support medium trust applications. So how do I make my application run under medium trust? Do I need to make changes to the web.config file? It shows the following error:

      Security Exception
      Description: The application attempted to perform an operation not allowed by the security policy. To grant this application the required permission please contact your system administrator or change the application's trust level in the configuration file.
      Exception Details: System.Security.SecurityException: Request failed.

      Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

      Stack Trace:
      [SecurityException: Request failed.]
        System.Security.CodeAccessSecurityEngine.ThrowSecurityException(Assembly asm, PermissionSet granted, PermissionSet refused, RuntimeMethodHandle rmh, SecurityAction action, Object demand, IPermission permThatFailed) +150
        System.Security.CodeAccessSecurityEngine.ThrowSecurityException(Object assemblyOrString, PermissionSet granted, PermissionSet refused, RuntimeMethodHandle rmh, SecurityAction action, Object demand, IPermission permThatFailed) +100
        System.Security.CodeAccessSecurityEngine.CheckSetHelper(PermissionSet grants, PermissionSet refused, PermissionSet demands, RuntimeMethodHandle rmh, Object assemblyOrString, SecurityAction action, Boolean throwException) +284
        System.Security.PermissionSetTriple.CheckSetDemand(PermissionSet demandSet, PermissionSet& alteredDemandset, RuntimeMethodHandle rmh) +69
        System.Security.PermissionListSet.CheckSetDemand(PermissionSet pset, RuntimeMethodHandle rmh) +150
        System.Security.PermissionListSet.DemandFlagsOrGrantSet(Int32 flags, PermissionSet grantSet) +30
        System.Threading.CompressedStack.DemandFlagsOrGrantSet(Int32 flags, PermissionSet grantSet) +40
        System.Security.CodeAccessSecurityEngine.ReflectionTargetDemandHelper(Int32 permission, PermissionSet targetGrant, CompressedStack securityContext) +123
        System.Security.CodeAccessSecurityEngine.ReflectionTargetDemandHelper(Int32 permission, PermissionSet targetGrant, Resolver accessContext) +41

    Look forward to your help. Regards, Ankita, Software Developer, Shakti Informatics Pvt. Ltd.
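
    A useful first step, assuming the hosting account really enforces medium trust: force the same trust level in the local web.config so the failing call shows up during development rather than after deployment.

      <system.web>
        <!-- run locally under the same partial-trust level the shared host uses -->
        <trust level="Medium" originUrl="" />
      </system.web>

    Whatever then throws locally (often reflection-heavy code, which is what the stack trace above suggests) is what has to be replaced or reconfigured for medium trust.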

    Read the article

  • Calling function using 'new' is less expensive than without it?

    - by Matthew Taylor
    Given this very familiar model of prototypal construction:

      function Rectangle(w,h) {
          this.width = w;
          this.height = h;
      }
      Rectangle.prototype.area = function() {
          return this.width * this.height;
      };

    Can anyone explain why calling "new Rectangle(2,3)" is consistently 10x FASTER than calling "Rectangle(2,3)" without the 'new' keyword? I would have assumed that because new adds more complexity to the execution of a function by getting prototypes involved, it would be slower. Example:

      var myTime;

      function startTrack() {
          myTime = new Date();
      }

      function stopTrack(str) {
          var diff = new Date().getTime() - myTime.getTime();
          println(str + ' time in ms: ' + diff);
      }

      function trackFunction(desc, func, times) {
          var i;
          if (!times) times = 1;
          startTrack();
          for (i=0; i<times; i++) {
              func();
          }
          stopTrack('(' + times + ' times) ' + desc);
      }

      var TIMES = 1000000;

      trackFunction('new rect classic', function() { new Rectangle(2,3); }, TIMES);
      trackFunction('rect classic (without new)', function() { Rectangle(2,3); }, TIMES);

    Yields (in Chrome):

      (1000000 times) new rect classic time in ms: 33
      (1000000 times) rect classic (without new) time in ms: 368
      (1000000 times) new rect classic time in ms: 35
      (1000000 times) rect classic (without new) time in ms: 374
      (1000000 times) new rect classic time in ms: 31
      (1000000 times) rect classic (without new) time in ms: 368
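
    Part of the explanation, offered as a pointer rather than a complete answer: called without new (in non-strict code), this refers to the global object, so every call writes width and height onto the large shared global object instead of onto a small fresh instance, which engines optimize far less well. A quick check in a non-strict browser console:

      Rectangle(2, 3);                 // called without new
      console.log(window.width);       // 2  -- the "fields" ended up on the global object
      console.log(window.height);      // 3

      var r = new Rectangle(4, 5);
      console.log(r.width, r.height);  // 4 5 -- with new, they land on the new instance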

    Read the article

  • TypeError: Error #1009 - (Null reference error) With Flash.

    - by Wind Chimez
    I am not an expert in Flash, but I do work with ActionScript and tweak Flash projects, though I don't have deep expertise in it. Currently I need to revamp a Flash website done by another developer, and the code base given to me throws the following error upon execution:

      TypeError: Error #1009: Cannot access a property or method of a null object reference.
          at NewSite_fla::MainTimeline/__setProp_ContactOutP1_ContactOut_Contents_0()
          at NewSite_fla::MainTimeline/frame1()

    The structure of the project is such that the different sections are split into different movie clips. There is no single main timeline; instead, click actions on different areas of separate movie clips move between them. All the ActionScript event handling logic is written inline in the FLA; no separate Document class exists. The Preloader movie clip is the first one that gets loaded. As I understand it, the error is thrown at the very start, and it is not caused by any inline ActionScript logic, because it is thrown even before the first inline AS code is reached. I am not able to figure out what exactly is causing the problem or where to fix it. I have put the site online for reference if anybody wants to take a look; you need the Flash debugger turned on in your browser to see the exception: http://tinyurl.com/2alvlfx I am really stuck at this point. Any help would be great. I have not yet found the particular solution I am looking for anywhere, though Error #1009 is common.

    Read the article
