Search Results

Search found 7651 results on 307 pages for 'execution plan'.


  • setcookie, Cannot modify header information - headers already sent

    - by Nano HE
    Hi, I am new to PHP. I just practised with PHP's setcookie() and it failed.

    http://localhost/test/index.php

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
        <html>
        <head>
        <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
        <title></title>
        </head>
        <body>
        <?php
        $value = 'something from somewhere';
        setcookie("TestCookie", $value);
        ?>
        </body>
        </html>

    http://localhost/test/view.php

        <?php
        // I plan to view the cookie value via view.php
        echo $_COOKIE["TestCookie"];
        ?>

    But when I run index.php, the browser (IE) shows a warning like this:

        Warning: Cannot modify header information - headers already sent by (output started at C:\xampp\htdocs\test\index.php:9) in C:\xampp\htdocs\test\index.php on line 12

    Cookies are definitely enabled in my IE 6. Is there anything wrong in my procedure above? Thank you. WinXP OS and XAMPP 1.7.3 used.

    Read the article

  • How to reliably send a request cross domain and cross browser on page unload

    - by Agmin
    I have JavaScript code that's loaded by third parties. The JavaScript keeps track of a number of metrics, and when a user exits the page I'd like to send the metrics back to my server. Due to XSS checks in some browsers, like IE, I cannot do a simple jquery.ajax() call. Instead, I'm appending an image src to the page with jQuery. Here's the code, cased by browser:

        function record_metrics() {
            // Arbitrary code execution here to set test_url
            $esajquery('#MainDiv').append("<img src='" + test_url + "' />");
        }

        if ($esajquery.browser.msie) {
            window.onbeforeunload = function() {
                record_metrics();
            }
        } else {
            $esajquery(window).unload(
                function(){ record_metrics(); }
            );
        }

    FF aborts the request to "test_url" if I use window.onbeforeunload, and IE8 doesn't work with jQuery's unload(). IE8 also fails to work if the arbitrary test_url-setting code is too long, although IE8 seems to work fine if the <img> is immediately appended to the DOM. Is there a better way to solve this issue? Unfortunately this really needs to execute when a user leaves the page.

    Read the article

  • Locking a table for getting MAX in LINQ

    - by Hossein Margani
    Hi everyone! I have a query in LINQ. I want to get the MAX of the Code column of my table, increase it, and insert a new record with the new Code - just like SQL Server's IDENTITY feature, except that my Code column is char(5) and can contain alphabetic as well as numeric characters. My problem is that when inserting a new row, two concurrent processes get the same max and insert an identical Code into their records. My command is:

        var maxCode = db.Customers.Select(c => c.Code).Max();
        var anotherCustomer = db.Customers.Where(...).SingleOrDefault();
        anotherCustomer.Code = GenerateNextCode(maxCode);
        db.SubmitChanges();

    I ran this command across 1000 threads, each updating 200 customers, and used a transaction with IsolationLevel.Serializable. After two or three executions an error occurred:

        using (var db = new DBModelDataContext())
        {
            DbTransaction tran = null;
            try
            {
                db.Connection.Open();
                tran = db.Connection.BeginTransaction(IsolationLevel.Serializable);
                db.Transaction = tran;
                . . . .
                tran.Commit();
            }
            catch
            {
                tran.Rollback();
            }
            finally
            {
                db.Connection.Close();
            }
        }

    Error:

        Transaction (Process ID 60) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.

    Other isolation levels generate this error: "Row not found or changed." Please help me, thank you.
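    The deadlock message itself says "Rerun the transaction", and one pattern that is often suggested for this shape of problem (not taken from the question) is to keep the serializable transaction but treat error 1205 as retryable and rerun the whole read-generate-update unit. A minimal sketch follows; it reuses DBModelDataContext, Customers, Code and GenerateNextCode from the question, while targetId and the lookup predicate are made up for illustration.

        using System;
        using System.Data;
        using System.Data.SqlClient;
        using System.Linq;
        using System.Threading;

        static class CodeAssigner
        {
            // Sketch only: rerun the serializable unit when SQL Server picks it as the deadlock victim.
            public static void AssignNextCode(int targetId)
            {
                const int maxAttempts = 5;
                for (int attempt = 1; ; attempt++)
                {
                    try
                    {
                        using (var db = new DBModelDataContext())
                        {
                            db.Connection.Open();
                            using (var tran = db.Connection.BeginTransaction(IsolationLevel.Serializable))
                            {
                                db.Transaction = tran;

                                var maxCode = db.Customers.Select(c => c.Code).Max();
                                var customer = db.Customers.Single(c => c.Id == targetId); // illustrative lookup
                                customer.Code = GenerateNextCode(maxCode);

                                db.SubmitChanges();
                                tran.Commit();
                            }
                        }
                        return; // success
                    }
                    catch (SqlException ex)
                    {
                        // 1205 = chosen as the deadlock victim; anything else, or too many tries, is rethrown.
                        if (ex.Number != 1205 || attempt >= maxAttempts) throw;
                        Thread.Sleep(50 * attempt); // brief backoff before rerunning the whole unit
                    }
                }
            }
        }

    The deadlock comes from two serializable readers both taking shared range locks for the MAX and then trying to convert them for the update; an alternative that avoids it (rather than retrying it) is to read the MAX with an UPDLOCK, HOLDLOCK hint via raw SQL so only one writer can hold that lock at a time.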

    Read the article

  • SQL Server CLR stored procedures in data processing tasks - good or evil?

    - by Gart
    In short - is it a good design solution to implement most of the business logic in CLR stored procedures? I have read much about them recently, but I can't figure out when they should be used, what the best practices are, and whether they are good enough or not.

    For example, my business application needs to parse a large fixed-length text file, extract some numbers from each line in the file, apply some complex business rules to those numbers (involving regex matching, pattern matching against data from many tables in the database, and such), and, as a result of this calculation, update records in the database. There is also a GUI for the user to select the file, view the results, etc.

    This application seems to be a good candidate for the classic 3-tier architecture: the Data Layer, the Logic Layer, and the GUI Layer.

    - The Data Layer would access the database.
    - The Logic Layer would run as a WCF service and implement the business rules, interacting with the Data Layer.
    - The GUI Layer would be a means of communication between the Logic Layer and the user.

    Now, thinking about this design, I can see that most of the business rules could be implemented in SQL CLR and stored in SQL Server. I might store all my raw data in the database, run the processing there, and get the results.

    I see some advantages and disadvantages of this solution:

    Pros:
    - The business logic runs close to the data, meaning less network traffic.
    - All data is processed at once, possibly utilizing parallelism and an optimal execution plan.

    Cons:
    - Scattering of the business logic: some part is here, some part is there.
    - Questionable design solution; may encounter unknown problems.
    - Difficult to implement a progress indicator for the processing task.

    I would like to hear all your opinions about SQL CLR. Does anybody use it in production? Are there any problems with such a design? Is it a good thing?
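    For readers who have not written one, a SQL CLR stored procedure is just a static method marked with the SqlProcedure attribute, compiled into an assembly that is catalogued in the database. A minimal sketch is below; the table name and the rule being applied are made up for illustration and are not from the question.

        using System.Data.SqlClient;
        using Microsoft.SqlServer.Server;

        public partial class StoredProcedures
        {
            // Minimal SQL CLR stored procedure sketch; dbo.RawLines is an illustrative table name.
            [SqlProcedure]
            public static void ProcessRawLines()
            {
                // "context connection=true" runs the query on the caller's connection and transaction.
                using (var conn = new SqlConnection("context connection=true"))
                {
                    conn.Open();
                    var cmd = new SqlCommand("SELECT Id, LineText FROM dbo.RawLines", conn);
                    using (SqlDataReader reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // Regex/pattern-based business rules would be applied to reader.GetString(1) here.
                        }
                    }
                    SqlContext.Pipe.Send("Processing finished.");
                }
            }
        }

    Deploying it means CREATE ASSEMBLY plus CREATE PROCEDURE ... EXTERNAL NAME on the server; whether the regex-heavy rules belong in such a procedure or in the WCF logic layer is exactly the trade-off the question is weighing.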

    Read the article

  • singleton pattern in Windows Activation Service

    - by Joshua
    Hello. I have a few WCF services that are currently being self-hosted in a very basic NT service. I want to expand my application to add provisioning of WCF services and updates, as well as isolation (I want each WCF service to be in its own AppDomain). These WCF services contain logic that needs to run on a regular basis, pinging the database and getting information from external devices, so that when a request comes in the data is readily available.

    I'm thinking about trying out Windows Activation Service, because I really like the provisioning and isolation that come with a managed services infrastructure. If I didn't use WAS, I would essentially have to write the same code myself. From what I understand, though, WAS does not really support the model of having a service that is running before someone actually calls a method on it. The article I read here (MSDN Article Link) states: "That means in essence that out-of-the-box WAS hosting is not something that is really suited for sessionful or singleton services. It is more suitable for stateless per-call services." It does say "out of the box", so I'm wondering if anyone has used WAS to host a WCF service that really behaves more like an NT service (starting and stopping independently of having a method called on it). Any other ideas would be great too.

    I was planning on writing this infrastructure myself: host WCF services in a custom ServiceHost, put their execution in a separate AppDomain, and allow for provisioning of these services after initial installation, along with updates. However, I would MUCH MUCH MUCH rather not own that code if I don't have to.

    Thanks, Joshua
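    For comparison, the "write it yourself" baseline the poster wants to avoid is fairly small in its simplest form: a Windows service whose OnStart opens a ServiceHost (and starts any polling work) and whose OnStop tears it down, so the process runs whether or not a client ever calls in. A minimal sketch, with the contract and service names invented for illustration:

        using System.ServiceModel;
        using System.ServiceProcess;

        [ServiceContract]
        public interface IMetricsService
        {
            [OperationContract]
            string Ping();
        }

        public class MetricsService : IMetricsService
        {
            public string Ping() { return "pong"; }
        }

        // Sketch of the self-hosting baseline: the Windows service owns the ServiceHost,
        // so background work (database pinging, device polling) can run independently of calls.
        public class WcfHostService : ServiceBase
        {
            private ServiceHost _host;

            protected override void OnStart(string[] args)
            {
                _host = new ServiceHost(typeof(MetricsService));
                _host.Open();
                // Polling timers would be started here as well.
            }

            protected override void OnStop()
            {
                if (_host != null)
                {
                    _host.Close();
                    _host = null;
                }
            }

            public static void Main()
            {
                ServiceBase.Run(new WcfHostService());
            }
        }

    It does not, of course, give the per-AppDomain isolation or provisioning the question asks about; that is the part WAS provides out of the box and a hand-rolled host would have to add.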

    Read the article

  • Will a change made in the Converter notify the bound property?

    - by Kishore Kumar
    I have two properties, FirstName and LastName, bound to a TextBlock using a MultiBinding and a converter to display the full name as FirstName + LastName:

        FirstName="Kishore" LastName="Kumar"

    In the converter I changed the LastName to "Changed Text":

        values[1] = "Changed Text";

    After executing the converter my TextBlock shows "Kishore Changed Text", but the dependency property LastName still has the old value "Kumar". Why am I not getting the "Changed Text" value in the LastName property after the execution? Will a change made in the converter notify the bound property?

        <Window.Resources>
            <local:NameConverter x:Key="NameConverter"></local:NameConverter>
        </Window.Resources>
        <Grid>
            <TextBlock>
                <TextBlock.Text>
                    <MultiBinding Converter="{StaticResource NameConverter}">
                        <Binding Path="FirstName"></Binding>
                        <Binding Path="LastName"></Binding>
                    </MultiBinding>
                </TextBlock.Text>
            </TextBlock>
        </Grid>

    Converter:

        public class NameConverter : IMultiValueConverter
        {
            #region IMultiValueConverter Members

            public object Convert(object[] values, Type targetType, object parameter, System.Globalization.CultureInfo culture)
            {
                values[1] = "Changed Text";
                return values[0].ToString() + " " + values[1].ToString();
            }

            public object[] ConvertBack(object value, Type[] targetTypes, object parameter, System.Globalization.CultureInfo culture)
            {
                throw new NotImplementedException();
            }

            #endregion
        }
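    A note on the mechanics: the values[] array passed to Convert is only a snapshot handed to the converter for producing the target value, so mutating it never reaches the source properties. Data flows back to the sources only through ConvertBack, and only when the bindings are TwoWay and the target property actually changes (a TextBlock's Text never changes from user input, so in this exact setup ConvertBack would never fire; an editable target such as a TextBox would be needed). A sketch of what ConvertBack could look like in the question's NameConverter, assuming the full name is "First Last" separated by a single space:

        // Sketch only: returns one value per <Binding>, in the same order (FirstName, LastName).
        public object[] ConvertBack(object value, Type[] targetTypes, object parameter,
                                    System.Globalization.CultureInfo culture)
        {
            var text = (value as string) ?? string.Empty;
            // Assumes a single space separates first and last name; purely illustrative.
            string[] parts = text.Split(new[] { ' ' }, 2);
            return new object[]
            {
                parts.Length > 0 ? parts[0] : string.Empty,  // pushed to FirstName
                parts.Length > 1 ? parts[1] : string.Empty   // pushed to LastName
            };
        }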

    Read the article

  • IRC Server configuration possibilities

    - by Katai
    I need to know a couple of things concerning IRC servers that I couldn't directly find out via Google (or that weren't clear enough for me to be sure they would actually work).

    I'm working on a larger community site and want to deliver an in-page chat. Since it would be a nice feature to let people access it from outside too, over their own clients, I thought implementing an IRC server would be the best solution (probably dedicated; I'll have to teach myself a couple of things for that). I plan to include a web-based IRC client over an APE client / server. The problem is, I want to strip down the user rights to disallow many of the functionalities that IRC would normally offer:

    - Changing nicknames: the user logs in via the page login, and I'll automatically create an IRC auth for this user with that password. So basically, he would connect to the IRC client via a button, and after connecting he shouldn't be able to change his nickname at all.
    - Creating channels: I want the possibility to create channels, but not for 'normal' users. Basically, I would prefer to set up basic channels that are public, and if a user really creates his own channel, that one should be private and invitation-only (is that possible?).
    - Private conversations: private conversations should be filtered out of the all-around IRC client into separate 'in-browser windows' that I create with JS. I guess I just have to filter the data coming from IRC - or is there a better solution for that?
    - Only 'registered' users have access: like I said, if someone registers on the page, I would like to create an IRC 'account' for him. Users that aren't registered on the page can't access the IRC server at all (or get thrown out). This is mainly to keep out spammers or bots from outside.

    Is this stuff solvable over IRC? I've read some FAQs and instructions for IRC ops and servers, but I couldn't find a clear answer - it seems that everyone can do pretty much everything. I would like to configure it in a way that cuts user possibilities down: basically, giving users the ability to chat, but not more. So the question basically is how possible / solvable these issues are, or whether I have to find other solutions for them.

    Read the article

  • create app that has plugin which contains PyQt widget

    - by brian
    I'm writing an application that will use plugins. In each plugin I want to include a widget that allows the options for that plugin to be set up. The plugin will also include methods to operate on the data. What is the best way to include a widget in a plugin? Below is pseudo-code for what I've tried to do. My original plan was to make the options widget:

        class myOptionsWidget(QWidget):
            """ create widget for plugin options """
            ...

    Next I planned on including the widget in my plugin:

        class myPlugin:
            def __init__(self):
                self.optionWidget = myOptionsWidget()
                self.pluginNum = 1
                ...

            def getOptionWidget(self):
                return self.optionWidget

    Then at the top level I'd do something like:

        a = myPlugin()
        form = createForm(option=a.getOptionWidget())

    ... where createForm would create the form and include my plugin options widget. But when I try "a = myPlugin()" I get the error "QWidget: Must construct a QApplication before a QPaintDevice", so this method won't work. I know I could store the widget as a string and call eval on it, but I'd rather not do that in case I later want to convert the program to C++. What is the best way to write a plugin that includes a widget holding the options? Brian

    Read the article

  • Alternative native api for Process.Start

    - by Akash Kava
    OK, this is not a duplicate of http://stackoverflow.com/questions/2065592/alternative-to-process-start because my question here is something different. I need to run a process, wait for it to finish, and get the output of its console. There is a way to set RedirectStandardOutput and RedirectStandardError to true; however, this does not function well on some machines (where the .NET SDK is not installed and only the .NET runtime is present). It works on some machines and doesn't work on others, so we don't know where the problem is. I have the following code:

        ProcessStartInfo info = new ProcessStartInfo("myapp.exe", cmd);
        info.CreateNoWindow = true;
        info.UseShellExecute = false;
        info.RedirectStandardError = true;
        info.RedirectStandardOutput = true;
        Process p = Process.Start(info);
        p.WaitForExit();
        Trace.WriteLine(p.StandardOutput.ReadToEnd());
        Trace.WriteLine(p.StandardError.ReadToEnd());

    On some machines this will hang forever on p.WaitForExit(), and on some machines it works correctly; the behaviour is random and there is no clue. Now, if I can get a really good workaround for this using P/Invoke, I will be very happy.
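    Before dropping to P/Invoke it is worth noting (a general observation, not from the question) that this exact pattern has a documented deadlock: WaitForExit() blocks the parent while the child blocks writing to a redirected stdout/stderr pipe that nobody is draining yet, which is why machines producing more output appear to "randomly" hang. A sketch that drains both streams asynchronously before waiting; "myapp.exe" and the cmd argument are taken from the question, the rest is illustrative:

        using System.Diagnostics;
        using System.Text;

        static class ProcessRunner
        {
            // Sketch: collect stdout/stderr while the child runs, then wait for exit.
            public static void RunAndLog(string cmd)
            {
                var info = new ProcessStartInfo("myapp.exe", cmd)
                {
                    CreateNoWindow = true,
                    UseShellExecute = false,
                    RedirectStandardOutput = true,
                    RedirectStandardError = true
                };

                var stdout = new StringBuilder();
                var stderr = new StringBuilder();

                using (Process p = Process.Start(info))
                {
                    // Handlers receive lines as they arrive, so the pipes never fill up.
                    p.OutputDataReceived += (s, e) => { if (e.Data != null) stdout.AppendLine(e.Data); };
                    p.ErrorDataReceived  += (s, e) => { if (e.Data != null) stderr.AppendLine(e.Data); };
                    p.BeginOutputReadLine();
                    p.BeginErrorReadLine();
                    p.WaitForExit();
                }

                Trace.WriteLine(stdout.ToString());
                Trace.WriteLine(stderr.ToString());
            }
        }

    Reading one stream synchronously with ReadToEnd() before WaitForExit() and the other asynchronously also avoids the mutual block; the key point is that at most one pipe may be read synchronously, and only before waiting.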

    Read the article

  • Problem in creating win installer in iReport

    - by user356108
    Hi everyone, I am trying to create an executable file (.exe) of iReport with my module included in it. When I run the target create-iReport-distro-win-installer, I get the following error. Note: I am using NetBeans 6.5.1.

        java.io.IOException: Cannot run program "makensis" (in directory "C:\Program Files\NetBeans 6.5.1\iReport-3.7.2-src"): CreateProcess error=2, The system cannot find the file specified
            at java.lang.ProcessBuilder.start(ProcessBuilder.java:459)
            at java.lang.Runtime.exec(Runtime.java:593)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at org.apache.tools.ant.taskdefs.Execute$Java13CommandLauncher.exec(Execute.java:832)
            at org.apache.tools.ant.taskdefs.Execute.launch(Execute.java:447)
            at org.apache.tools.ant.taskdefs.Execute.execute(Execute.java:461)
            at net.sf.nsisant.Task.execute(Task.java:205)
            at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:288)
            at sun.reflect.GeneratedMethodAccessor97.invoke(Unknown Source)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:106)
            at org.apache.tools.ant.Task.perform(Task.java:348)
            at org.apache.tools.ant.Target.execute(Target.java:357)
            at org.apache.tools.ant.Target.performTasks(Target.java:385)
            at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1337)
            at org.apache.tools.ant.Project.executeTarget(Project.java:1306)
            at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:41)
            at org.apache.tools.ant.Project.executeTargets(Project.java:1189)
            at org.apache.tools.ant.module.bridge.impl.BridgeImpl.run(BridgeImpl.java:273)
            at org.apache.tools.ant.module.run.TargetExecutor.run(TargetExecutor.java:499)
            at org.netbeans.core.execution.RunClassThread.run(RunClassThread.java:151)
        Caused by: java.io.IOException: CreateProcess error=2, The system cannot find the file specified
            at java.lang.ProcessImpl.create(Native Method)
            at java.lang.ProcessImpl.<init>(ProcessImpl.java:81)
            at java.lang.ProcessImpl.start(ProcessImpl.java:30)
            at java.lang.ProcessBuilder.start(ProcessBuilder.java:452)
            ... 24 more
        C:\Program Files\NetBeans 6.5.1\iReport-3.7.2-src\build.xml:327: Command failed: 'makensis /DPRODUCT_VERSION=3.7.2 /DPRODUCT_NAME=iReport /DPRODUCT_WEB_SITE=http://ireport.sourceforge.net "C:\Program Files\NetBeans 6.5.1\iReport-3.7.2-src\etc\iReportInstaller.nsi"'
        BUILD FAILED (total time: 1 minute 22 seconds)

    Read the article

  • What makes these two R data frames not identical?

    - by Matt Parker
    UPDATE: I remembered dput() about the time Sharpie mentioned it. It's probably the row names. Back in a moment with an answer.

    I have two small data frames, this_tx and last_tx. They are, in every way that I can tell, completely identical. this_tx == last_tx results in a frame of identical dimensions, all TRUE. this_tx %in% last_tx, two TRUEs. Inspected visually, clearly identical. But when I call identical(this_tx, last_tx) I get a FALSE. Hilariously, even identical(str(this_tx), str(last_tx)) will return a TRUE. If I set this_tx <- last_tx, I'll get a TRUE. What is going on? I don't have the deepest understanding of R's internal mechanics, but I can't find a single difference between the two data frames. If it's relevant, the two variables in the frames are both factors - same levels, same numeric coding for the levels, both just subsets of the same original data frame. Converting them to character vectors doesn't help.

    Background (because I wouldn't mind help on this, either): I have records of drug treatments given to patients. Each treatment record essentially specifies a person and a date. A second table has a record for each drug and dose given during a particular treatment (usually, a few drugs are given each treatment). I'm trying to identify contiguous periods during which the person was taking the same combinations of drugs at the same doses.

    The best plan I've come up with is to check the treatments chronologically. If the combination of drugs and doses for treatment[i] is identical to the combination at treatment[i-1], then treatment[i] is a part of the same phase as treatment[i-1]. Of course, if I can't compare drug/dose combinations, that's right out.

    Read the article

  • Maven GAE Plugin - Unable to run gae:debug

    - by Taylor L
    I'm having trouble running the gae:debug goal of the Maven GAE Plugin. The error I'm receiving is below. Any ideas? I'm running it with "mvn gae:debug".

        [INFO] Packaging webapp
        [INFO] Assembling webapp[test-gae] in [C:\development\test-gae\target\test-gae-0.0.1-SNAPSHOT]
        [INFO] Processing war project
        [INFO] Webapp assembled in[56 msecs]
        [INFO] Building war: C:\development\test-gae\target\test-gae-0.0.1-SNAPSHOT.war
        [INFO] [statemgmt:end-fork]
        [INFO] Ending forked execution [fork id: -2101914270]
        [INFO] [gae:debug]
        Usage: <dev-appserver> [options] <war directory>
        Options:
          --help, -h                Show this help message and exit.
          --server=SERVER           The server to use to determine the latest
          -s SERVER                   SDK version.
          --address=ADDRESS         The address of the interface on the local machine
          -a ADDRESS                  to bind to (or 0.0.0.0 for all interfaces).
          --port=PORT               The port number to bind to on the local machine.
          -p PORT
          --sdk_root=root           Overrides where the SDK is located.
          --disable_update_check    Disable the check for newer SDK versions.

    EDIT: gae:run with the jvmFlags option is also giving me the same result with the configuration below.

        <plugin>
          <groupId>net.kindleit</groupId>
          <artifactId>maven-gae-plugin</artifactId>
          <version>0.5.0</version>
          <configuration>
            <jvmFlags>
              <jvmFlag>-Xdebug</jvmFlag>
              <jvmFlag>-Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8000</jvmFlag>
            </jvmFlags>
          </configuration>
        </plugin>

    Read the article

  • Versioning SharePoint binary Workflow ASPX task forms

    - by Janis Veinbergs
    Hello. As noted by some developers, workflow versioning is something of a headache in SharePoint. I'm wondering: is there a way I can version my ASPX forms? I can certainly version the code-behind assemblies, but what if the markup changes for any of my files in the LAYOUTS folder? Is there versioning available for files, or do I have to choose a new filename for my form?

    Sorry, I should have been more specific. Yes, I have the files under version control (I can restore previous versions etc.), but I'm not talking about that kind of version control. When deploying a new workflow version, I must not delete the old one, because it is still running on many items in SharePoint; rather, as noted in the previous links, I deploy a new one so I don't break the execution of running workflows. But workflows will still break if I don't preserve the old ASPX forms that users use to interact with them. So I must ensure that:

    - Assemblies with the old version numbers used by the old workflow still exist (this one is OK; I just changed the assembly version number and deployed to the GAC).
    - The old workflow still uses the old ASPX form that users interact with, while the new workflow version uses a new ASPX form with more options (how do I do this?).

    Read the article

  • Custom CheckBoxList in ASP.NET

    - by Rick
    Since ASP.NET's CheckBoxList control does not allow itself to be validated with one of the standard validation controls (i.e., RequiredFieldValidator), I would like to create a UserControl that I can use in my project whenever I need a checkbox list that requires one or more boxes to be checked. The standard CheckBoxList can be dragged onto a page, and then you can manually add <asp:ListItem> controls if you want. Is there any way I can create a UserControl that lets me manually (in the markup, not programmatically) insert ListItems from my page in a similar manner? In other words, can I insert a UserControl onto a page, and then from the Designer view of the page (i.e., not the designer view of the UserControl), manually add my ListItems like so:

        <uc1:RequiredCheckBoxList>
            <asp:ListItem Text="A" Value="B"></asp:ListItem>
            <asp:ListItem Text="X" Value="Y"></asp:ListItem>
        </uc1:RequiredCheckBoxList>

    If a UserControl is not the appropriate choice for the end result I'm looking for, I'm open to other suggestions. Please note that I am aware of the CustomValidator control (which is how I plan to validate within my UserControl). It's just a pain to write the same basic code each time I need one of these required checkbox lists, which is why I want to create a re-usable control.
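    One alternative that is often suggested for this kind of reuse (not from the question) is a custom server control rather than a UserControl: a class derived from CheckBoxList keeps the declarative <asp:ListItem> support for free, and the "at least one checked" test can live on the control itself so the page's CustomValidator only has to call it. A rough sketch, with the class and property names invented for illustration:

        using System.Web.UI.WebControls;

        // Sketch of a reusable checkbox list that knows how to answer "is anything checked?".
        // Deriving from CheckBoxList preserves normal <asp:ListItem> markup support.
        public class RequiredCheckBoxList : CheckBoxList
        {
            public string ErrorMessage { get; set; }

            // True if at least one item is selected; a CustomValidator's ServerValidate
            // handler (or client script) can delegate to this.
            public bool HasSelection
            {
                get
                {
                    foreach (ListItem item in Items)
                    {
                        if (item.Selected)
                        {
                            return true;
                        }
                    }
                    return false;
                }
            }
        }

    After registering the assembly (via @Register or the pages/controls section of web.config), the markup usage looks essentially like the snippet in the question, just with runat="server" on the control tag.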

    Read the article

  • How to completely wipe a previous ClickOnce installation?

    - by Dabblernl
    I have a curious problem. My app is distributed through ClickOnce. I recently installed three new clients at a new location. They worked. After an update, however, all the old clients worked fine, but the three new clients did not. As my code is swallowing an exception somewhere, I have so far been unable to pinpoint where the error lies. When I XCopy the latest version of the app to the desktop of the three new client computers, the program works fine. So I thought uninstalling and reinstalling the program from the download location would fix the problem, but it does not! I can think of two explanations:

    1. The new location has some firewall/virus scanner in place that doesn't like the latest version of my app when it is run from a standard ClickOnce directory, but allows execution from the desktop.
    2. Some old settings (the app uses user-scoped and app-scoped settings) remain in effect after the uninstall. When I find and check the user.config file for the app, however, I find no incorrect settings there.

    So far I have been unable to reproduce the error on any other machine. How can I solve this!?

    Read the article

  • Vlad the deployer on Dreamhost - initial script

    - by xmariachi
    Hi, I'm trying to deploy an app with SVN and Vlad the deployer. Vlad and its dependencies are installed and seem OK. I'm trying the following:

        rake prod vlad:update

    with my config/deploy.rb file being:

        task :prod do
          set :application, "xxx"
          set :deploy_timestamped, "false"
          set :user, "username"
          set :scm_user, "scmusername"
          set :repository, "http://domain.com/svn/app"
          set :domain, "domain.com"
          set :deploy_to, "/home/username/deployments/app"
          puts "Production deployment to #{deploy_to}"
        end

    I have already run "rake prod vlad:setup", and that's fine. But when calling "rake prod vlad:update", I get the following:

        A    ...file
        Exported revision 14.
        ln: creating symbolic link `/home/username/deployments/drupalgestalt/releases/20100503164225/public/system' to `/home/username/deployments/drupalgestalt/shared/system': No such file or directory
        rake aborted!
        execution failed with status 1: ssh domain.com ln -s /home/username/deployments/app/shared/log /home/username/deployments/app/releases/20100503164225/log && ln -s /home/username/deployments/app/shared/system /home/username/deployments/app/releases/20100503164225/public/system && ln -s /home/username/deployments/app/shared/pids /home/username/deployments/app/releases/20100503164225/tmp/pids

    Apparently it complains when creating the symlink, but permissions are all set up fine. Am I doing anything wrong? I'm just starting with Vlad on the assumption that it was super-easy to set up. I had played a bit with cap in the past, and I do like Vlad's idea.

    Read the article

  • SVN commit using cruise control

    - by pratap
    Hi all, I am using CruiseControl to automate the svn commit process, but the execution of the svn commit command restores the files which I deleted from my working copy. The way I am doing it is:

    1. Delete some files in my working copy (so the number of files in my WC is less than the number of files in the repository).
    2. Execute the svn command using CruiseControl:

        <exec executable="svn.exe">
            <buildArgs>ci -m "test msg" --no-auth-cache --non-interactive</buildArgs>
            <buildTimeoutSeconds>1000</buildTimeoutSeconds>
        </exec>

    Result: the deleted files are restored in my WC. Can someone help me figure out where I have gone wrong, or whether I have to make some changes / configurations? Thank you all. Regards, uday

    Read the article

  • SVN commit using cruise control

    - by pratap
    Hi all, can anyone tell me how to tell svn, through the command line, that certain files are to be deleted from the repository? I am using CruiseControl to automate the svn commit process, but the execution of the svn commit command restores the files which I deleted from my working copy. The way I am doing it is:

    1. Delete some files in my working copy (so the number of files in my WC is less than the number of files in the repository).
    2. Execute the svn command using CruiseControl:

        <exec executable="svn.exe">
            <buildArgs>ci -m "test msg" --no-auth-cache --non-interactive</buildArgs>
            <buildTimeoutSeconds>1000</buildTimeoutSeconds>
        </exec>

    Result: the deleted files are restored in my WC. Can someone help me figure out where I have gone wrong, or whether I have to make some changes / configurations? Thank you all. Regards, uday

    Read the article

  • Is there a way in .NET to access the bytecode/IL/CLR that is currently running?

    - by Alix
    Hi. I'd like to have access to the bytecode that is currently running, or about to run, in order to detect certain instructions and take specific actions (depending on the instructions). In short, I'd like to monitor the bytecode in order to add safety controls. Is this possible?

    I know there are some AOP frameworks that notify you of specific events, like an access to a field or the invocation of a method, but I'd like to skip that extra layer and just look at all the bytecode myself, throughout the entire execution of the application. I've already looked at the following questions (...among many many others ;) ):

    - Preprocessing C# - Detecting Methods
    - What CLR/.NET bytecode tools exist?

    as well as several AOP frameworks (although not in great detail, since they don't seem to do quite what I need), and I'm familiar with Mono.Cecil. I appreciate alternative suggestions, but I don't want to introduce the overhead of an AOP framework when what I actually need is access to the bytecode, without all the stuff they add on top to make it more user-friendly (...admittedly very useful stuff when you don't want to go low-level). Thanks :)
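    For static inspection (as opposed to hooking code as the CLR executes it), reflection already exposes a method's raw IL without any extra framework. A small sketch of that route; the hex dump is just for illustration, and decoding the opcodes is left to the caller:

        using System;
        using System.Reflection;

        static class IlPeek
        {
            // Sketch: dump the raw IL bytes of a method via reflection. This inspects the
            // assembly's IL; it is not a hook into the code while it runs.
            public static void DumpIl(MethodInfo method)
            {
                MethodBody body = method.GetMethodBody();
                if (body == null)
                {
                    Console.WriteLine("No managed body (extern or abstract method).");
                    return;
                }

                byte[] il = body.GetILAsByteArray();
                Console.WriteLine("{0}: {1} bytes of IL", method.Name, il.Length);

                foreach (byte b in il)
                {
                    Console.Write("{0:X2} ", b); // raw opcode/operand bytes
                }
                Console.WriteLine();
            }
        }

    Watching code as it actually runs is a different problem; that territory belongs to the unmanaged CLR profiling API (ICorProfilerCallback and its JITCompilationStarted callback) rather than anything exposed to managed code.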

    Read the article

  • Far jump in ntdll.dll's internal ZwCreateUserProcess

    - by user49164
    I'm trying to understand how the Windows API creates processes so I can create a program to determine where invalid EXEs fail. I have a program that calls kernel32.CreateProcessA. Following along in OllyDbg, this calls kernel32.CreateProcessInternalA, which calls kernel32.CreateProcessInternalW, which calls ntdll.ZwCreateUserProcess. This function goes:

        mov eax, 0xAA
        xor ecx, ecx
        lea edx, dword ptr [esp+4]
        call dword ptr fs:[0xC0]
        add esp, 4
        retn 0x2C

    So I follow the call to fs:[0xC0], which contains a single instruction:

        jmp far 0x33:0x74BE271E

    But when I step this instruction, Olly just comes back to ntdll.ZwCreateUserProcess at the add esp, 4 right after the call (which is not at 0x74BE271E). I put a breakpoint at retn 0x2C, and I find that the new process was somehow created during the execution of add esp, 4. So I'm assuming there's some magic involved in the far jump. I tried to change the CS register to 0x33 and EIP to 0x74BE271E instead of actually executing the far jump, but that just gave me an access violation after a few instructions. What's going on here? I need to be able to delve deeper, beyond the abstraction of ZwCreateUserProcess, to figure out how exactly Windows creates processes.

    Read the article

  • Forking in PHP on Windows

    - by Doug Kavendek
    We are running PHP on a Windows server (a source of many problems indeed, but migrating is not an option currently). There are a few points where a user-initiated action will need to kick off a few things that take a while and about which the user doesn't need to know whether they succeed or fail, such as sending off an email or making sure some third-party accounts are updated. If I could just fork with pcntl_fork(), this would be very simple, but the PCNTL functions are not available on Windows.

    It seems the closest I can get is to do something of this nature:

        exec( 'php-cgi.exe somescript.php' );

    However, this would be far more complicated. The actions I need to kick off rely on a lot of context that already exists in the running process; to use the above example, I'd need to figure out the essential data and supply it to the new script in some way. If I could fork, it'd just be a matter of letting the parent process return early, leaving the child to work on a few more things.

    I've found a few people talking about their own work in getting various PCNTL functions compiled on Windows, but none seemed to have anything available (broken links, etc.). Despite this question having practically the same name as mine, it seems the problem there was more an execution timeout than a need to fork. So, is my best option just to refactor a bit to deal with calling php-cgi, or are there other options?

    Edit: It seems exec() won't work for this, at least not without me figuring out some other aspect of it, as it waits until the call returns. I figured I could use START, sort of like:

        exec( 'start php-cgi.exe somescript.php' );

    but it still waits until the other script finishes.

    Read the article

  • PHP set timeout for script with system call, set_time_limit not working

    - by tehalive
    I have a command-line PHP script that runs a wget request for each member of an array with foreach. This wget request can sometimes take a long time, so I want to be able to set a timeout that kills the script if it runs past, for example, 15 seconds. I have PHP safe mode disabled and tried set_time_limit(15) early in the script; however, it continues indefinitely.

    Update: Thanks to Dor for pointing out that this is because set_time_limit() does not count time spent in system() calls. So I'm trying to find other ways to kill the script after 15 seconds of execution. However, I'm not sure if it's possible to check how long the script has been running while it's in the middle of a wget request (a do-while loop did not work). Maybe fork a process with a timer and set it to kill the parent after a set amount of time? Thanks for any tips!

    Update: Below is my relevant code. $url is passed from the command line and is an array of multiple URLs (sorry for not posting this initially):

        foreach( $url as $key => $value){
            $wget = "wget -r -H -nd -l 999 $value";
            system($wget);
        }

    Read the article

  • What AOP tools exist for doing aspect-oriented programming at the assembly language level against x86?

    - by JohnnySoftware
    I'm looking for a tool I can use to do aspect-oriented programming at the assembly language level. For experimentation purposes, I would like the code weaver to operate on native application-level executables and dynamic link libraries. I have already done object-oriented AOP. I know assembly language for x86 and so forth.

    I would like to be able to do logging and other sorts of things using the familiar before/after/around constructs. I would like to be able to specify certain instructions, or sequences/patterns of consecutive instructions, as the thing to put a pointcut on, since assembly/machine language is not exactly the most semantically rich computer language on the planet. If debugger and linker symbols are available, naturally, I would like to be able to use them to identify subroutines' entry points, branch/call/jump target addresses, symbolic data addresses, etc.

    I would like the ability to send notifications out to other diagnostic tools. Thus, support for sending data through connection-oriented sockets and datagrams is highly desirable. So is normal logging to files, UI, etc. This can be done using the action part of an aspect to make a function call, but then there are portability issues, so the tool needs to support a well-abstracted logging/notifying mechanism that is clean and simple yet flexible. The goal is rapid QA.

    The idea is to be able to share aspect source code broadly within communities as well as publicly, so there needs to be a declarative security policy file that users can share. This ensures that nothing untoward hidden directly or indirectly in an aspect source file slips by the execution manager. The policy file format needs to be simple to read, write, modify, understand, type in, edit, and generate. Sort of like Java .policy files. Think the exact opposite of anything resembling XML Schema files and you get the idea. Is there such a tool in existence already?

    Read the article

  • Why does VBA Find loop fail when called from Evaluate?

    - by Abiel
    I am having some problems running a Find loop inside a subroutine when the routine is called using the Application.Evaluate or ActiveSheet.Evaluate method. For example, in the code below, I define a subroutine FindSub() which searches the sheet for the string "xxx". The routine CallSub() calls the FindSub() routine using both a standard Call statement and Evaluate. When I run Call FindSub, everything works as expected: each matching address gets printed to the Immediate window and we get a final message "Finished up" when the code is done. However, when I do Application.Evaluate "FindSub()", only the address of the first match gets printed, and we never reach the "Finished up" message. In other words, an error is encountered after the Cells.FindNext line as the loop tries to evaluate whether it should continue, and program execution stops without any runtime error being printed.

    I would expect both Call FindSub and Application.Evaluate "FindSub()" to yield the same results in this case. Can someone explain why they do not, and if possible, a way to fix this? Thanks.

    Note: In this example I obviously do not need to use Evaluate. This version is simplified to focus on the particular problem I am having in a more complex situation.

        Sub CallSub()
            Call FindSub
            Application.Evaluate "FindSub()"
        End Sub

        Sub FindSub()
            Dim rngFoundCell As Range
            Dim rngFirstCell As Range

            Set rngFoundCell = Cells.Find(What:="xxx", after:=ActiveCell, LookIn:=xlValues, _
                LookAt:=xlPart, SearchOrder:=xlByRows, SearchDirection:=xlNext, _
                MatchCase:=False, SearchFormat:=False)

            If Not rngFoundCell Is Nothing Then
                Set rngFirstCell = rngFoundCell
                Do
                    Debug.Print rngFoundCell.Address
                    Set rngFoundCell = Cells.FindNext(after:=rngFoundCell)
                Loop Until (rngFoundCell Is Nothing) Or (rngFoundCell.Address = rngFirstCell.Address)
            End If

            Debug.Print "Finished up"
        End Sub

    Read the article

  • Flush separate Castle ActiveRecord Transaction, and refresh object in another Transaction

    - by eanticev
    I've got all of my ASP.NET requests wrapped in a Session and a Transaction that gets committed only at the very end of the request. At some point during execution of the request, I would like to insert an object and make it visible to other potential threads - i.e., split the insertion into a new transaction, commit that transaction, and move on.

    The reason is that the request in question hits an API that then chain-hits another one of my pages (near-synchronously) to let me know that it processed, and that second request double-submits a transaction record, because the original request had not yet finished and thus had not committed its transaction record.

    So I've tried wrapping the insertion code with a new SessionScope, a TransactionScope(TransactionMode.New), a combination of both, flushing everything manually, etc. However, when I call Refresh on the object I'm still getting the old object state. Here's some sample code for what I'm seeing:

        Post outsidePost = Post.Find(id); // status of this post is Status.Old

        using (TransactionScope transaction = new TransactionScope(TransactionMode.New))
        {
            Post p = Post.Find(id);
            p.Status = Status.New; // new status set here
            p.Update();
            SessionScope.Current.Flush();
            transaction.Flush();
            transaction.VoteCommit();
        }

        outsidePost.Refresh(); // refresh doesn't get the new status, status is still Status.Old

    Any suggestions, ideas, and comments are appreciated!

    Read the article
