Search Results

Search found 24301 results on 973 pages for 'execution process mfg'.


  • Do servlet containers prevent web applications from interfering with each other, and how do they do it?

    - by chrisbunney
    I know that a servlet container, such as Apache Tomcat, runs in a single instance of the JVM, which means all of its servlets run in the same process. I also know that the architecture of the servlet container gives each web application its own context, which suggests it is isolated from other web applications.

    Accepting that each web application is isolated, I would expect that you could create 2 copies of an identical web application, change the names and context paths of each (as well as any other relevant configuration), and run them in parallel without one affecting the other. The answers to this question appear to support this view.

    However, a colleague disagrees based on their experience of attempting just that. They took a web application and tried to run 2 separate instances (with different names etc.) in the same servlet container, and the 2 instances conflicted (I'm unable to elaborate more as I wasn't involved in that work). Based on this, they argue that since the web applications run in the same process space, they can't be isolated, and things such as class attributes would end up being inadvertently shared. This answer appears to suggest the same thing.

    The two views don't seem to be compatible, so I ask you: Do servlet containers prevent web applications deployed to the same container from conflicting with each other? If yes, how do they do this? If no, why does interference occur? And finally, under what circumstances could separate web applications conflict and interfere with each other, perhaps scenarios involving resources on the file system, native code, or database connections?
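
    One way interference can happen even though each web application gets its own context and classloader: a class with static state that is loaded from the container's shared library directory is loaded once by a parent classloader and is therefore shared by every deployed application, whereas the same class packaged in each application's WEB-INF/lib is loaded separately per application. A minimal illustrative sketch (the class name and counter are hypothetical):

        // Shared if this class ships in the container's shared/lib (parent classloader);
        // private per webapp if it ships in each WAR's WEB-INF/lib.
        public final class HitCounter {
            private static int hits;   // static state lives with the defining classloader

            private HitCounter() {}

            public static synchronized int increment() {
                return ++hits;
            }
        }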

    Read the article

  • Doing large updates against indexed view

    - by user217136
    We have an indexed view that runs across three large tables. Two of these tables (A & B) are constantly getting updated with user transactions, and the other table (C) contains product data that needs to be updated once a week. This product table contains over 6 million records. We need this view across these three tables for our core business process and unfortunately we cannot change this aspect. We even had a SQL Server MVP come in to help test under load to make sure we have the most efficient configuration. There is one column in the product table that gets used in the view and has to be updated each week.

    The problem we are now encountering is that as volume increases on our transactions against tables A & B, the weekly update to table C is causing deadlocks. I have tried several different methods to no avail:

    1) I was hoping that we could change the view so that table C could be a dirty read "WITH (NOLOCK)", but apparently that functionality is not available with indexed views.
    2) I thought about updating a new column in table C and then just renaming it when the process is done, but you cannot do that due to the dependency in the view.
    3) I also entertained the idea of writing this value to a temporary product table and then running an ALTER statement against the view to have it point to my new table. However, when I did that, the indexes on my view were dropped and it took quite a bit of time to recreate them.
    4) We tried to do the weekly update in small chunks (as small as 100 records at a time), but we still run into deadlocks.

    Questions:
    a) We are using SQL Server 2005. Does SQL Server 2008 have new functionality in its indexed views that would help us? Is there now a way to do dirty reads with an indexed view?
    b) Is there a better approach to altering an existing view to point to a new table?

    Thanks!
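
    For reference, a minimal sketch of the chunked weekly update described in option 4, with hypothetical table and column names (Products, ProductID, WeeklyValue) and a staging table holding the new values; keeping each batch in its own short transaction is what this approach relies on to limit lock contention:

        DECLARE @rows INT;
        SET @rows = 1;

        WHILE @rows > 0
        BEGIN
            BEGIN TRAN;

            UPDATE TOP (100) p
            SET    p.WeeklyValue = s.WeeklyValue
            FROM   dbo.Products AS p
            JOIN   dbo.ProductStaging AS s ON s.ProductID = p.ProductID
            WHERE  p.WeeklyValue <> s.WeeklyValue;   -- only touch rows that actually change

            SET @rows = @@ROWCOUNT;

            COMMIT TRAN;
        END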

    Read the article

  • Migration of .NET COM object to 64 bit.

    - by Victor Ronin
    Hi,

    We have a C++ application which uses several COM objects. The COM objects are .NET-based (using COM interop). I need to migrate the application to 64 bit. I specifically need the C++ application to be 64 bit. I don't want to recompile all of the .NET COM objects to 64 bit and deliver two sets of DLLs (32 bit and 64 bit).

    I was investigating and found that I can load 32-bit COM DLLs in a 32-bit surrogate process (using DllSurrogate in the registry). I know how to do that, but it means that all COM objects will become out of process. In the C++ I had the code:

        CoCreateInstance(CLSID_SomeClass, NULL, CLSCTX_INPROC_SERVER, IID_SomeInterface, (void**)&pobj);

    It worked fine, but as soon as I switch to CLSCTX_LOCAL_SERVER (and add the registry keys for DllSurrogate), it can't find the interfaces (error 0x80004002). I checked the registry and found out that when the .NET COM DLL is registered, it adds CLSID registry keys, but doesn't add Interface and TypeLib registry keys.

    The question is, how do I create these registry keys for .NET COM?

    Regards, Victor
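
    For the Interface/TypeLib keys, registering the assembly with a type library (regasm with /tlb) is what normally creates them, and the type library is what lets COM marshal a dual interface across the surrogate's process boundary. A minimal sketch of what the managed side typically looks like (names and GUIDs here are made up):

        using System;
        using System.Runtime.InteropServices;

        [ComVisible(true)]
        [Guid("11111111-2222-3333-4444-555555555555")]
        [InterfaceType(ComInterfaceType.InterfaceIsDual)]
        public interface ISomeInterface
        {
            int DoWork(string input);
        }

        [ComVisible(true)]
        [Guid("66666666-7777-8888-9999-AAAAAAAAAAAA")]
        [ClassInterface(ClassInterfaceType.None)]
        public class SomeClass : ISomeInterface
        {
            public int DoWork(string input) { return input.Length; }
        }

        // Registered with:  regasm SomeLib.dll /tlb:SomeLib.tlb /codebase
        // /tlb generates and registers the type library, which is what populates
        // the HKCR\Interface and HKCR\TypeLib keys missing in the question.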

    Read the article

  • Which persistent & lightweight queue messaging for cross-domain (> 2) data exchange with Rails integration?

    - by Erwan
    Hi all,

    I'm looking for the right messaging system for my needs. Can you help me?

    For now, there won't be a huge amount of data to process, but I don't want to be limited later. The machines are not just web servers, so the messaging tool should be lightweight, even if processing is not very fast. When some data change on a server, all servers should get the information and process it locally. (Should I create one channel per server on each of them?) The frontend is written in Rails, so to simplify development it is important that there is a gem / plugin to manage communications and the data sent.

    At this time, RabbitMQ + Workling seems to fit my needs. Could this be the right choice? ActiveMQ makes me wary, because of Java (I really don't know Java very well, but it seems to me to be a big CPU consumer). Others don't seem to be as mature. There might be a lot of development using this kind of technology, so I can't go down the wrong path!

    Thank you for your help.
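
    On the "one channel per server" question, the usual RabbitMQ pattern is a single fanout exchange to which every server binds its own queue, so each server receives a copy of every change notification. A rough sketch with the current bunny gem (exchange and payload names are made up, and the original setup used Workling, whose API differs):

        require "bunny"

        conn = Bunny.new("amqp://guest:guest@localhost")
        conn.start

        ch       = conn.create_channel
        exchange = ch.fanout("data_changes")          # every bound queue gets a copy

        # Each server binds its own exclusive queue instead of a per-server channel.
        queue = ch.queue("", exclusive: true).bind(exchange)
        queue.subscribe do |_delivery, _props, payload|
          puts "received: #{payload}"                 # process the change locally
        end

        exchange.publish("user_table_updated")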

    Read the article

  • What is the exact difference between MEM_RESERVE and MEM_COMMIT states?

    - by pj4533
    As I understand it, MEM_RESERVE is actually 'free' memory, i.e. available to be used by my process, but it just hasn't been allocated yet? Or it was previously allocated, but has since been freed? Specifically, see in my !address output below how I am nearly out of virtual address space (99900 KB free, 2307872 KB as MEM_PRIVATE), but the state summary shows that 44.75% of that is actually MEM_RESERVE. Does that mean it is actually free, in my process... but maybe fragmented?

        0:000> !address -summary
        --------- PEB a8bd8000 not found ----
        -------------------- Usage SUMMARY --------------------------
            TotSize (      KB)   Pct(Tots) Pct(Busy)   Usage
           259af000 (  616124) : 22.29%    23.12%    : RegionUsageIsVAD
            618f000 (   99900) : 03.61%    00.00%    : RegionUsageFree
           13e22000 (  325768) : 11.78%    12.22%    : RegionUsageImage
           42c04000 ( 1093648) : 39.56%    41.04%    : RegionUsageStack
             42d000 (    4276) : 00.15%    00.16%    : RegionUsageTeb
           2625d000 (  625012) : 22.61%    23.45%    : RegionUsageHeap
                  0 (       0) : 00.00%    00.00%    : RegionUsagePageHeap
                  0 (       0) : 00.00%    00.00%    : RegionUsagePeb
               1000 (       4) : 00.00%    00.00%    : RegionUsageProcessParametrs
               1000 (       4) : 00.00%    00.00%    : RegionUsageEnvironmentBlock
               Tot: a8bf0000 (2764736 KB) Busy: a2a61000 (2664836 KB)

        -------------------- Type SUMMARY --------------------------
            TotSize (      KB)   Pct(Tots)  Usage
            618f000 (   99900) : 03.61%   : <free>
           13e22000 (  325768) : 11.78%   : MEM_IMAGE
            1e77000 (   31196) : 01.13%   : MEM_MAPPED
           8cdc8000 ( 2307872) : 83.48%   : MEM_PRIVATE

        -------------------- State SUMMARY --------------------------
            TotSize (      KB)   Pct(Tots)  Usage
           57235000 ( 1427668) : 51.64%   : MEM_COMMIT
            618f000 (   99900) : 03.61%   : MEM_FREE
           4b82c000 ( 1237168) : 44.75%   : MEM_RESERVE

        Largest free region: Base 7e4a1000 - Size 000ff000 (1020 KB)
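
    For what it's worth, the reserve/commit distinction shows up directly in the VirtualAlloc API: reserving claims a range of the process's virtual address space (so it is no longer "free" for other allocations in that process) without backing it with pages, and committing is the separate step that makes pages usable. A small illustrative sketch:

        #include <windows.h>
        #include <string.h>

        int main(void)
        {
            /* Reserve 1 MB of address space: counted as MEM_RESERVE, not usable yet,
               but no longer available to other allocations in this process. */
            void *base = VirtualAlloc(NULL, 1 << 20, MEM_RESERVE, PAGE_NOACCESS);
            if (base == NULL) return 1;

            /* Commit the first 64 KB: only this part becomes MEM_COMMIT and can be touched. */
            void *page = VirtualAlloc(base, 64 * 1024, MEM_COMMIT, PAGE_READWRITE);
            if (page == NULL) return 1;

            memset(page, 0, 64 * 1024);          /* fine: committed                      */
            /* touching bytes beyond the first 64 KB would fault: reserved, not committed */

            VirtualFree(base, 0, MEM_RELEASE);   /* release the whole reservation        */
            return 0;
        }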

    Read the article

  • Why would some $_POST variables go missing for a form with PHP?

    - by Chad Johnson
    Sometimes, multiple times a day in fact, users of my web application are submitting a certain form which has about a dozen form fields, half of which are hidden fields, and half of the $_POST data is simply not present in the processing script. Note that the fields that are not present are at the very bottom of the form. I know this because this raises a fatal error, and an email is dispatched to me which includes the post data. And of course, neither I nor any of the developers on my team can reproduce the problem.

    Flash is involved in the process, as I'm using a library called Uploadify to display a progress meter. Here is the flow... does anyone have ANY ideas at all why some of the post data would be getting wiped out?

    1) User visits edit screen for a page in the CMS I am using. Record id for the page is put into a form as a hidden value.
    2) User clicks the Uploadify browse button and selects a file (only single file selection is allowed).
    3) User clicks Submit button for my form.
    4) jQuery intercepts the form submit action, triggers Uploadify to start uploading, and returns false for the submit action (manually cancelling the form submit event so that Uploadify can take over).
    5) Uploadify uploads to a custom process script.
    6) Uploadify finishes uploading and triggers the Javascript completion callback.
    7) The Javascript callback calls $('#myForm').submit() to submit the form.

    This happens on multiple browsers (Firefox 3.5, 3.6, Safari, Internet Explorer 7, 8) and multiple platforms (Mac OS 10.5, 10.6 and Windows XP, 7).
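
    Until the cause is found, a small server-side guard can at least record exactly which keys went missing on each failure instead of dying on the first absent field. A hedged PHP sketch (the field names are hypothetical):

        <?php
        // Expected POST fields, including the hidden ones at the bottom of the form.
        $expected = array('record_id', 'title', 'body', 'token');

        $missing = array();
        foreach ($expected as $field) {
            if (!array_key_exists($field, $_POST)) {
                $missing[] = $field;
            }
        }

        if ($missing) {
            // Log which fields vanished along with what did arrive, then bail out.
            error_log('Missing POST fields: ' . implode(', ', $missing)
                . ' | received: ' . implode(', ', array_keys($_POST)));
            exit('Form data incomplete, please try again.');
        }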

    Read the article

  • Samba - Is my server vulnerable to CVE-2008-1105?

    - by Joao Heleno
    Hi! I have a CentOS server that is running Samba, and I want to verify whether it is exposed to the vulnerability addressed by CVE-2008-1105. What scenarios can I build in order to run the exploit that is mentioned in http://secunia.com/advisories/cve_reference/CVE-2008-1105/?

    http://secunia.com/secunia_research/2008-20/advisory/ says that "Successful exploitation allows execution of arbitrary code by tricking a user into connecting to a malicious server (e.g. by clicking an "smb://" link) or by sending specially crafted packets to an "nmbd" server configured as a local or domain master browser."

    More info:
    http://www.samba.org/samba/security/CVE-2008-1105.html
    http://secunia.com/secunia_research/2008-20/advisory/
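
    As a first pass before building an exploit scenario, it is worth confirming the installed Samba version and whether nmbd is actually configured as a local or domain master browser (the second attack vector quoted above). A hedged sketch; the package name assumes the stock CentOS Samba packages:

        #!/bin/sh
        # Installed Samba package and daemon versions.
        rpm -q samba
        smbd -V
        nmbd -V

        # Is nmbd configured as a local or domain master browser?
        # testparm -s dumps the effective smb.conf settings.
        testparm -s 2>/dev/null | grep -Ei 'local master|domain master|preferred master'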

    Read the article

  • How to create a console application that does not terminate?

    - by John
    Hello,

    In C++, a console application can have a message handler in its WinMain procedure. Like this:

        int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow)
        {
            HWND hwnd;
            MSG msg;

        #ifdef _DEBUG
            CreateConsole("Title");
        #endif

            hwnd = CreateDialog(hInstance, MAKEINTRESOURCE(IDD_DIALOG1), NULL, DlgProc);
            PeekMessage(&msg, NULL, 0, 0, PM_NOREMOVE);

            while(msg.message != WM_QUIT)
            {
                if(PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
                {
                    if(IsDialogMessage(hwnd, &msg))
                        continue;
                    TranslateMessage(&msg);
                    DispatchMessage(&msg);
                }
            }
            return 0;
        }

    This makes the process not close until the console window has received a WM_QUIT message. I don't know how to do something similar in Delphi. My need is not exactly a message handler, but a lightweight "trick" to make the console application work like a GUI application using threads, so that, for example, two Indy TCP servers could be handled without the console application terminating the process.

    My question: how could this be accomplished?
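
    A minimal Delphi sketch of one common approach: let the Indy server components run on their own threads and simply keep the main thread of the console program alive by blocking on ReadLn. The port number is made up and the OnExecute handler wiring is omitted; treat this as a sketch, not a drop-in server:

        program ConsoleServer;
        {$APPTYPE CONSOLE}

        uses
          SysUtils, IdTCPServer;

        var
          Server: TIdTCPServer;
        begin
          Server := TIdTCPServer.Create(nil);
          try
            Server.DefaultPort := 6000;   // hypothetical port
            Server.Active := True;        // Indy serves each client on its own thread
            WriteLn('Listening; press Enter to stop.');
            ReadLn;                       // block the main thread so the process stays alive
            // Alternative to ReadLn: loop on CheckSynchronize(10) if the handlers
            // use TThread.Synchronize to talk to the main thread.
          finally
            Server.Active := False;
            Server.Free;
          end;
        end.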

    Read the article

  • Why do I get "Sequence contains no elements"?

    - by Gary McGill
    NOTE: see edits at bottom. I am an idiot.

    I had the following code to process a set of tag names and identify/process new ones:

        IEnumerable<string> tagNames = GetTagNames();
        List<Tag> allTags = GetAllTags();
        var newTagNames = tagNames.Where(n => !allTags.Any(t => t.Name == n));
        foreach (var tagName in newTagNames)
        {
            // ...
        }

    ...and this worked fine, except that it failed to deal with cases where there's a tag called "Foo" and the list contains "foo". In other words, it wasn't doing a case-insensitive comparison. I changed the test to use a case-insensitive comparison, as follows:

        var newTagNames = tagNames.Where(n =>
            !allTags.Any(t => t.Name.Equals(n, StringComparison.InvariantCultureIgnoreCase)));

    ...and suddenly I get an exception thrown when the foreach runs (and calls MoveNext on) newTagNames. The exception says:

        Sequence has no elements

    I'm confused by this. Why would foreach insist on the sequence being non-empty? I'd expect to see that error if I was calling First(), but not when using foreach.

    EDIT: more info. This is getting weirder by the minute. Because my code is in an async method, and I'm superstitious, I decided that there was too much "distance" between the point at which the exception is raised and the point at which it's caught and reported. So, I put a try/catch around the offending code, in the hope of verifying that the exception being thrown really was what I thought it was. So now I can step through in the debugger to the foreach line, I can verify that the sequence is empty, and I can step right up to the bit where the debugger highlights the word "in". One more step, and I'm in my exception handler. But, not the exception handler I just added, no! It lands in my outermost exception handler, without visiting my recently-added one! It doesn't match catch (Exception ex) and nor does it match a plain catch. (I did also put in a finally, and verified that it does visit that on the way out.) I've always taken it on faith that an exception handler such as those would catch any exception. I'm scared now. I need an adult.

    EDIT 2: OK, so um, false alarm... The exception was not being caught by my local try/catch simply because it was not being raised by the code I thought. As I said above, I watched the execution in the debugger jump from the "in" of the foreach straight to the outer exception handler, hence my (wrong) assumption that that was where the error lay. However, with the empty enumeration, that was simply the last statement executed within the function, and for some reason the debugger did not show me the step out of the function or the execution of the next statement at the point of call - which was in fact the one causing the error. Apologies to all those who responded, and if you would like to create an answer saying that I am an idiot, I will gladly accept it. That is, if I ever show my face on SO again...
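
    As the second edit concludes, foreach itself is perfectly happy with an empty sequence; the "Sequence contains no elements" exception comes from operators like First() or Single() elsewhere in the call chain. A trivial sketch to confirm the foreach behaviour:

        using System;
        using System.Linq;

        class Demo
        {
            static void Main()
            {
                var empty = Enumerable.Empty<string>()
                                      .Where(s => s.Length > 0);   // deferred, yields nothing

                foreach (var s in empty)
                {
                    Console.WriteLine(s);   // body never runs, no exception
                }

                // This, by contrast, throws InvalidOperationException ("Sequence contains no elements"):
                // var first = empty.First();
            }
        }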

    Read the article

  • Signals and Variables in VHDL - Problem

    - by Morano88
    I have a signal, and this signal is a bit vector. The length of the bit vector depends on an input n; it is not fixed. In order to find the length, I have to do some computations. Can I define a signal after defining the variables? It is giving me errors when I do that. It works fine if I keep the signal before the variables, but I don't want that: the length of Z depends on the computations done with the variables. What is the solution?

        library IEEE;
        use IEEE.STD_LOGIC_1164.ALL;
        use IEEE.STD_LOGIC_ARITH.ALL;
        use IEEE.STD_LOGIC_UNSIGNED.ALL;

        entity BSD_Full_Comp is
            Generic (n : integer := 8);
            Port(X, Y : inout std_logic_vector(n-1 downto 0);
                 FZ   : out std_logic_vector(1 downto 0));
        end BSD_Full_Comp;

        architecture struct of BSD_Full_Comp is

            Component BSD_BitComparator
                Port ( Ai_1 : inout STD_LOGIC;
                       Ai_0 : inout STD_LOGIC;
                       Bi_1 : inout STD_LOGIC;
                       Bi_0 : inout STD_LOGIC;
                       S1   : out STD_LOGIC;
                       S0   : out STD_LOGIC );
            END Component;

            Signal Z : std_logic_vector(2*n-3 downto 0);

        begin

            ass : process
                Variable length : integer := n;
                Variable pow    : integer := 0;
                Variable ZS     : integer := 0;
            begin
                while length /= 0 loop
                    length := length/2;
                    pow    := pow+1;
                end loop;
                length := 2 ** pow;
                ZS     := length - n;
                wait;
            end process;

        end struct;
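
    Signals can only be declared in the architecture declarative region, not inside a process after variables, but the width computation shown in the process can be moved into a function that is evaluated at elaboration time and then used in the declarations. A hedged sketch of that idea (the function and constant names are made up, and the actual width expression for Z should be whatever the design needs):

        architecture struct of BSD_Full_Comp is

            -- Same computation as in the process, but usable in declarations.
            function next_pow2_bits(val : integer) return integer is
                variable l : integer := val;
                variable p : integer := 0;
            begin
                while l /= 0 loop
                    l := l / 2;
                    p := p + 1;
                end loop;
                return p;
            end function;

            constant POW : integer := next_pow2_bits(n);
            constant ZS  : integer := 2 ** POW - n;

            -- Width now follows from the elaboration-time constants.
            signal Z : std_logic_vector(ZS - 1 downto 0);

        begin
            -- ... component instantiations as before ...
        end struct;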

    Read the article

  • How can I substitute the nth occurrence of a match in a Perl regex?

    - by Zaid
    Following up from an earlier question on extracting the n'th regex match, I now need to substitute the match, if found. I thought that I could define the extraction subroutine and call it in the substitution with the /e modifier. I was obviously wrong (admittedly, I had an XY problem).

        use strict;
        use warnings;

        sub extract_quoted {    # à la codaddict

            my ($string, $index) = @_;

            while($string =~ /'(.*?)'/g) {
                $index--;
                return $1 if(! $index);
            }
            return;
        }

        my $string = "'How can I','use' 'PERL','to process this' 'line'";

        extract_quoted ( $string, 3 );

        $string =~ s/&extract_quoted($string,2)/'Perl'/e;

        print $string; # Prints 'How can I','use' 'PERL','to process this' 'line'

    There are, of course, many other issues with this technique: What if there are identical matches at different positions? What if the match isn't found? In light of this situation, I'm wondering in what ways this could be implemented.
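
    One way to substitute only the nth quoted match is to let s///ge visit every occurrence and use a counter to decide which one actually gets replaced. A sketch ($n and the replacement text are placeholders):

        use strict;
        use warnings;

        my $string = "'How can I','use' 'PERL','to process this' 'line'";
        my $n      = 3;    # which quoted chunk to replace
        my $count  = 0;

        # /e evaluates the replacement; every non-target match is put back unchanged via $&.
        $string =~ s/'(.*?)'/ ++$count == $n ? "'Perl'" : $& /ge;

        print "$string\n";   # 'How can I','use' 'Perl','to process this' 'line'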

    Read the article

  • Abort SAX parsing mid-document?

    - by CSharperWithJava
    I'm parsing a very simple XML schema with a SAX parser in Android. An example file would be:

        <Lists>
            <List name="foo">
                <Note title="note 1" .../>
                <Note title="note 2" .../>
            </List>
            <List name="bar">
                <Note title="note 3" .../>
            </List>
        </Lists>

    The ... represents more note data as attributes that aren't important to the question. I use a SAX parser to parse the document and only implement the startElement and endElement methods of the HandlerBase to handle Note and List nodes.

    However, in some cases the files can be very large and take some time to process. I'd like to be able to abort the parsing process at any time (i.e. the user presses a cancel button). The best way I've come up with is to throw an exception from my startElement method when certain conditions are met (i.e. a boolean stopParsing is true). Is there a better way to do this? I've always used DOM-style parsers, so I don't fully understand the SAX parser.

    One final note: I'm running this on Android, so I will have the parser running on a worker thread to keep the UI responsive. If you know how I can kill the thread safely while the parser is running, that would answer my question as well.
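
    Throwing from the callback is in fact the usual way to stop a SAX parse early. A hedged sketch using a dedicated exception type so the caller can tell a user cancel apart from a real parse error (the class and flag names are made up, and a SAX2 DefaultHandler is shown rather than the older HandlerBase):

        import org.xml.sax.Attributes;
        import org.xml.sax.SAXException;
        import org.xml.sax.helpers.DefaultHandler;

        public class NotesHandler extends DefaultHandler {

            /** Marker exception so cancellation isn't confused with a parse failure. */
            public static class ParseCancelledException extends SAXException {
                public ParseCancelledException() { super("parsing cancelled"); }
            }

            private volatile boolean cancelled;   // set from the UI thread

            public void cancel() { cancelled = true; }

            @Override
            public void startElement(String uri, String localName, String qName,
                                     Attributes attributes) throws SAXException {
                if (cancelled) {
                    throw new ParseCancelledException();   // unwinds out of parse()
                }
                // ... handle List / Note elements as before ...
            }
        }

        // Caller side, on the worker thread:
        // try { parser.parse(stream, handler); }
        // catch (NotesHandler.ParseCancelledException e) { /* user pressed cancel */ }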

    Read the article

  • SSAS Cube reprocessing fails - then succeeds if I try again

    - by EdgarVerona
    So I'm basically brand new to the concept of BI, and I've inherited an existing ETL process that is a two step process:

    1) Loads the data into a database that is only used by the cube processing
    2) Starts off the SSAS cube processing against said database

    It seems pretty well isolated, but occasionally (once a week, sometimes twice) it will fail with the following exception: "Errors in the OLAP storage engine: The attribute key cannot be found"

    Now the interesting thing is that:

    1) The dimension having the issue is not usually the same one (i.e. there's no single dimension that consistently has this failure)
    2) The source table, when I inspect it, does actually contain the attribute key that it says could not be found

    And, most interestingly...

    3) If I then immediately reprocess the dimensions and cubes manually through SSMS, they reprocess successfully and without incident.

    In both the aforementioned job and when I reprocess them through SSMS, I am using "ProcessFull", so it should be reprocessing them completely. Has anyone run into such an issue? I'm scratching my head about it... because if it was a genuine data integrity issue, reprocessing the cube again wouldn't fix it. What on earth could be happening? I've been tasked with finding out why this happens, but I can neither reproduce it consistently nor can I point to a data integrity problem as the root cause.

    Thanks for any input you can provide!

    Read the article

  • GetEffectiveRightsFromAcl throws invalid acl error

    - by apoorv020
    I am trying to get the effective rights a user has on a file using interop in C#. Following is the code I am using:

        public static FileSystemRights GetFileEffectiveRights(string FileName, string UserName)
        {
            IntPtr pDacl, pZero = IntPtr.Zero;
            int Mask = 0;

            uint errorReturn = GetNamedSecurityInfo(FileName,
                SE_OBJECT_TYPE.SE_FILE_OBJECT, SECURITY_INFORMATION.Dacl,
                out pZero, out pZero, out pDacl, out pZero, out pZero);
            if (errorReturn != 0)
            {
                throw new Exception("Win error : " + errorReturn);
            }

            Program.TRUSTEE pTrustee = new TRUSTEE();
            pTrustee.pMultipleTrustee = IntPtr.Zero;
            pTrustee.MultipleTrusteeOperation = (int)Program.MULTIPLE_TRUSTEE_OPERATION.NO_MULTIPLE_TRUSTEE;
            pTrustee.ptstrName = UserName;
            pTrustee.TrusteeForm = (int)Program.TRUSTEE_FORM.TRUSTEE_IS_NAME;
            pTrustee.TrusteeType = (int)Program.TRUSTEE_TYPE.TRUSTEE_IS_USER;

            errorReturn = GetEffectiveRightsFromAcl(pDacl, ref pTrustee, ref Mask);
            if (errorReturn != 0)
            {
                throw new Exception("Win error : " + errorReturn);
            }

            return (FileSystemRights)Mask;
        }

    This code works fine until I start modifying the ACL structure using the classes FileAccessRule and FileInfo, and then I start getting Windows error 1336: ERROR_INVALID_ACL. The same happens if I debug the process, call GetFileEffectiveRights once, pause the process, change the ACL through the Windows API, and then resume and call GetFileEffectiveRights again (the first call succeeds but the second gives 1336). What is going wrong?

    Read the article

  • Spring upload file

    - by benaissa
    Hi everyone,

    I'm a novice in Spring. I started to develop an application to upload files, and I used the official Spring documentation, but I get this error:

        Handler processing failed; nested exception is java.lang.NoClassDefFoundError: org/apache/commons/io/output/DeferredFileOutputStream
            at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:823)
            at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:719)
            at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:644)
            at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:560)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:637)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
            at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:143)
            at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:237)
            at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:167)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
            at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
            at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
            at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
            at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
            at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852)
            at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
            at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
            at java.lang.Thread.run(Thread.java:619)
        Caused by: java.lang.NoClassDefFoundError: org/apache/commons/io/output/DeferredFileOutputStream
            at org.apache.commons.fileupload.disk.DiskFileItemFactory.createItem(DiskFileItemFactory.java:191)
            at org.apache.commons.fileupload.FileUploadBase.parseRequest(FileUploadBase.java:350)
            at org.apache.commons.fileupload.servlet.ServletFileUpload.parseRequest(ServletFileUpload.java:126)
            at org.springframework.web.multipart.commons.CommonsMultipartResolver.parseRequest(CommonsMultipartResolver.java:155)
            at org.springframework.web.multipart.commons.CommonsMultipartResolver.resolveMultipart(CommonsMultipartResolver.java:138)
            at org.springframework.web.servlet.DispatcherServlet.checkMultipart(DispatcherServlet.java:907)
            at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:750)
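
    The NoClassDefFoundError points at a missing Commons IO class: CommonsMultipartResolver delegates to Commons FileUpload, which in turn needs commons-io on the classpath. If the project uses Maven, the two dependencies look roughly like this (the versions shown are era-appropriate guesses; match them to your Spring version). Otherwise, dropping commons-io-x.y.jar into WEB-INF/lib alongside commons-fileupload should clear the error.

        <dependency>
            <groupId>commons-fileupload</groupId>
            <artifactId>commons-fileupload</artifactId>
            <version>1.2.1</version>
        </dependency>
        <dependency>
            <groupId>commons-io</groupId>
            <artifactId>commons-io</artifactId>
            <version>1.4</version>
        </dependency>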

    Read the article

  • NonStop ODBC: how the connections (ODBC servers) are assigned to CPUs?

    - by Vladimir Dyuzhev
    We have an ODBC pool running on a NonStop server. The pool is connected to SQL/MX. This pool is used by a few external Java applications, each of which has a JDBC pool connected to the ODBC pool (e.g. 14 connections per application).

    With time (after a few application recycles) we see an imbalance between CPUs: some have 8 ODBC processes running, some only 5. That leads to CPU time imbalance too. Up to this point we assumed that a CPU is assigned to an ODBC process in round-robin fashion, which would keep the number of ODBC processes more or less equally distributed. That's not the case, though.

    Is there any information on how the ODBC pool decides which CPU to choose for every newly allocated process? Does it look at CPU load? Available memory? Something else?

    Sadly, even HP's own people (available to us, that is) couldn't answer those questions with certainty. :-(

    Read the article

  • Using Qt signals/slots instead of a worker thread

    - by Rob
    I am using Qt and wish to write a class that will perform some network-type operations, similar to FTP/HTTP. The class needs to connect to lots of machines, one after the other, but I need the application's UI to stay (relatively) responsive during this process, so the user can cancel the operation, exit the application, etc. My first thought was to use a separate thread for the network stuff, but the built-in Qt FTP/HTTP (and other) classes apparently avoid using threads and instead rely on signals and slots. So, I'd like to do something similar and was hoping I could do something like this:

        class Foo : public QObject
        {
            Q_OBJECT

        public:
            void start();

        signals:
            void next();

        private slots:
            void nextJob();
        };

        void Foo::start()
        {
            ...
            connect(this, SIGNAL(next()), this, SLOT(nextJob()));
            emit next();
        }

        void Foo::nextJob()
        {
            // Process next 'chunk'
            if (workLeftToDo)
            {
                emit next();
            }
        }

        void Bar::StartOperation()
        {
            Foo* foo = new Foo;
            foo->start();
        }

    However, this doesn't work and the UI freezes until all operations have completed. I was hoping that emitting signals wouldn't actually call the slots immediately but would somehow be queued up by Qt, allowing the main UI to still operate. So what do I need to do in order to make this work? How does Qt achieve this with the multitude of built-in classes that appear to perform lengthy tasks on a single thread?
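
    With the default connection type between objects living in the same thread, emitting a signal calls the connected slot immediately, like a direct function call, so the loop above never returns to the event loop. Asking for a queued connection makes each emit post an event instead, letting the UI process events between chunks. A sketch of the change, in Qt 4 syntax to match the question:

        // In Foo::start(): queue each nextJob() invocation through the event loop
        // instead of calling it synchronously.
        connect(this, SIGNAL(next()), this, SLOT(nextJob()), Qt::QueuedConnection);
        emit next();   // returns immediately; nextJob() runs when the event loop gets to it

        // An equivalent chunking idiom used inside many Qt classes:
        // QTimer::singleShot(0, this, SLOT(nextJob()));   // schedule the next chunk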

    Read the article

  • How to get both PIPESTATUS and output in bash script

    - by Mustafa Serdar Sanli
    I'm trying to get the last modification date of a file with this command:

        TM_LOCAL=`ls -l --time-style=long-iso ~/.vimrc | awk '{ print $6" "$7 }'`

    TM_LOCAL has a value like "2012-05-16 23:18" after execution of this line. I'd also like to check PIPESTATUS to see if there was an error. For example, if the file does not exist, ls returns 2. But $? has the value 0, as it holds the return value of awk.

    If I run the command alone, I can check the return value of ls by looking at ${PIPESTATUS[0]}:

        ls -l --time-style=long-iso ~/.vimrc | awk '{ print $6" "$7 }'

    But $PIPESTATUS does not work as I expected if I assign the output to a variable as in the first example. In this case, the $PIPESTATUS array has only 1 element, which is the same as $?.

    So, the question is: how can I get both $PIPESTATUS and assign the output to a variable at the same time?
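
    A minimal sketch of one workaround: with pipefail set, the exit status of the whole pipeline (and therefore of the command substitution) becomes non-zero if any stage fails, so $? right after the assignment is enough to detect the ls failure, though it won't tell you which stage failed:

        #!/bin/bash
        set -o pipefail   # pipeline status = last non-zero exit status of any stage

        TM_LOCAL=$(ls -l --time-style=long-iso ~/.vimrc | awk '{ print $6" "$7 }')
        rc=$?

        if [ "$rc" -ne 0 ]; then
            echo "could not stat file (exit $rc)" >&2
        fi

        # Alternative that keeps the per-stage statuses: run the pipeline without
        # command substitution, redirect awk's output to a temp file, and read
        # ${PIPESTATUS[@]} immediately, before any other command overwrites it.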

    Read the article

  • Unexpected end of file while searching for ']' to end attribute selector.

    - by zurna
    I don't understand what the problem with the following code would be. It needs to copy the image's id value to another textbox, but instead I get an error: "Unexpected end of file while searching for ']' to end attribute selector."

        <script>
        $(function() {
            $(".floatLeft").click(function() {
                var id = $(this).attr("id").replace(/\D/g, "");
                $("input[name='photo[" + id + "]'").val(Math.abs($("input[name='photo[" + id + "]'").val() - 1));
            });
        });
        </script>

        <ul class="thumbs">
            <li>
                <img src="/FLPM/media/news/images/2M9Y1I2K_sm.jpg" alt="Garden" id="28" class="floatLeft" />
                <input type="text" name="photo28" value="0" />
                <br />
                <a href="?Process=&IMAGEID=28" class="thumb"><span class="floatLeft">DELETE</span></a>
            </li>
            <li>
                <img src="/FLPM/media/news/images/2A9L1V2X_sm.jpg" alt="Frangipani Flowers" id="27" class="floatLeft" />
                <input type="text" name="photo27" value="0" />
                <br />
                <a href="?Process=&IMAGEID=27" class="thumb"><span class="floatLeft">DELETE</span></a>
            </li>
        </ul>
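
    The error is jQuery complaining that the attribute selector string is never closed: the literal ends after the quote, but the selector's own closing ] is missing. Note also that the rendered inputs are named photo28/photo27, not photo[28]. A hedged sketch of the click handler, assuming the markup above is what should be matched:

        $(function() {
            $(".floatLeft").click(function() {
                var id = $(this).attr("id").replace(/\D/g, "");
                // Selector is now closed properly and matches name="photo28", name="photo27", etc.
                var $input = $("input[name='photo" + id + "']");
                $input.val(Math.abs($input.val() - 1));   // toggle between 0 and 1
            });
        });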

    Read the article

  • Bash script to log in over SSH

    - by user1042928
    Sorry if this seems like a dumb question, but I am just learning bash scripting. For a school project we need to code an RTX that runs in Unix. It runs as a process in the terminal, takes in user input, and then prints it to the screen. I want to write a bash script to test that it can respond to lots of quick user input without overflowing or failing.

    My main problem is that once the RTX starts, the bash script will stop on that line until the RTX terminates and only then print the loop to the terminal (instead of printing it to the RTX prompt as I intend it to). I have tried running the RTX in the background, but that didn't work. I need to find a way to redirect input to the RTX while it is still running with a bash script. Google searches didn't turn up examples that I understood / could adapt. Any help is appreciated.

        #!/bin/bash

        # declare STRING variable
        STRING="RTX Worked =D"

        # Start the rtx in a new process; stuck on this line until rtx terminates.
        ./main

        # Somehow redirect io to the rtx.
        for i in `seq 1 100`; do
            echo $i
            echo " \n"
        done

        echo $STRING
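
    A minimal sketch of the usual fix: generate the test input first and feed it to the RTX's standard input through a pipe, so ./main reads the lines as if a user were typing them (a named pipe works too if the script must keep sending input gradually while the RTX runs):

        #!/bin/bash
        STRING="RTX Worked =D"

        # Feed 100 lines of quick "user input" straight into the RTX's stdin.
        for i in $(seq 1 100); do
            echo "$i"
        done | ./main

        echo "$STRING"

        # Variant with a named pipe, useful if input must be sent piece by piece:
        #   mkfifo rtx_in
        #   ./main < rtx_in &
        #   for i in $(seq 1 100); do echo "$i"; done > rtx_in
        #   rm rtx_in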

    Read the article

  • Technically, why are processes in Erlang more efficient than OS threads?

    - by Jonas
    Spawning processes seems to be much more efficient in Erlang than starting new threads using the OS (i.e. from Java or C). Erlang spawns thousands or millions of processes in a short period of time. I know that they are different, since Erlang does per-process GC, and processes in Erlang are implemented the same way on all platforms, which isn't the case with OS threads. But I don't fully understand, technically, why Erlang processes are so much more efficient and have a much smaller memory footprint per process. Both the OS and the Erlang VM have to do scheduling, context switches, keep track of the values in the registers, and so on...

    Simply put: why aren't OS threads implemented in the same way as processes in Erlang? Do they have to support something more? And why do they need a bigger memory footprint? Technically, why are processes in Erlang more efficient than OS threads? And why can't threads in the OS be implemented and managed in the same efficient way?
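
    To make the scale concrete, a tiny sketch that spawns 100,000 idle processes from the Erlang shell and times it; on a typical machine this completes in well under a second, and each process starts with a heap of only a few hundred words (you may need to raise the VM's +P process limit for counts beyond the default):

        %% Spawn 100000 idle processes and report how long it took.
        {Micros, Pids} = timer:tc(fun() ->
            [spawn(fun() -> receive stop -> ok end end) || _ <- lists:seq(1, 100000)]
        end),
        io:format("spawned ~p processes in ~p ms~n", [length(Pids), Micros div 1000]),
        [Pid ! stop || Pid <- Pids].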

    Read the article

  • Using Git to work with subversion: Ignoring modifications to tracked files

    - by Chris Nicola
    I am currently working with a Subversion repository, but I am using git to work locally on my machine. It makes work much easier, but it also makes some of the bad behavior going on in the Subversion repo quite glaring, and that creates problems for me.

    There is a somewhat complex local build process after pulling down the code, and it creates (and unfortunately modifies) a number of files. Obviously these changes are not meant to be committed back to the repository. Unfortunately the build process is actually modifying some tracked files (yes, most likely because someone mistakenly committed these build artifacts to the Subversion repository at some point). Since these are modifications, adding them to my ignore file does nothing for me. I can avoid checking these changes in, I simply don't stage or commit them, but having unstaged local changes means I can't rebase without first cleaning them up.

    What I would like to know is if there is any way to ignore future changes to a set of tracked files? Alternatively, is there another way to handle the problem I am having, or will I just have to tell whoever checked in these files to clean them up?
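
    For reference, git can be told to stop watching specific tracked paths; a short sketch with a hypothetical path (both flags are per-clone hints, so getting the artifacts removed from the repository is still the real fix):

        # Stop reporting local modifications to a tracked, build-modified file.
        git update-index --assume-unchanged build/generated.properties

        # Or, intended specifically for "I want to keep my own version" cases:
        git update-index --skip-worktree build/generated.properties

        # Reverse either one when a real change needs to be committed.
        git update-index --no-assume-unchanged build/generated.properties
        git update-index --no-skip-worktree build/generated.properties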

    Read the article

  • Is ODBC on Windows 2003 slower than on Windows 7?

    - by nbolton
    I am seeing some MSSQL 2005 performance issues, and I am trying to diagnose the cause. I am using SQL Profiler to gather query execution times. Both the client (using ODBC) and the SQL server are running on Windows 2003. I am also using a Windows 7 client with a different Windows 2003 server to compare results.

    Windows 7 client / Windows 2003 server:
        SQL Management Studio: 393ms
        Through ODBC: 215ms

    Windows 2003 client / Windows 2003 server:
        SQL Management Studio: approx 155ms
        Through ODBC: 3145ms

    ...in both cases, I'm running SQL Management Studio on the client. To me, these figures suggest there's something wrong with the ODBC client on the Windows 2003 server. On Windows 7, I see that the ODBC "SQL Server" driver is version 6.01.7600.16385, but on Windows 2003 it is 2000.86.3959.00 (by default). Could this be the problem? Is it possible to update an ODBC driver?

    Read the article

  • What webserver / mod / technique should I use to serve everything from memory?

    - by reinier
    I have lots of lookup tables from which I'll generate my web response. I think IIS with ASP.NET enables me to keep static lookup tables in memory, which I can use to serve up my responses very fast. Are there, however, also non-.NET solutions which can do the same?

    I've looked at FastCGI, but I think this starts X processes, each of which can handle Y requests. But the processes are by definition shielded from each other. I could configure FastCGI to use just 1 process, but does this have scalability implications? Anything using PHP or any other interpreted language won't fly, because it is also CGI or FastCGI bound, right? I understand memcache could be an option, though this would require another (local) socket connection, which I'd rather avoid since everything in memory would be much faster.

    The solution can work under Windows or Unix... it doesn't matter too much. The only thing which matters is that there will be a lot of requests (100/sec now and growing to 500/sec in a year), and I want to reduce the number of webservers needed to process it. The current solution is done using PHP and memcache (and the occasional hit to the SQL server backend). Although it is fast (for PHP anyway), Apache has real problems when 50/sec is passed.

    I've put a bounty on this question since I've not seen enough responses to make a wise choice. At the moment I'm considering either ASP.NET or FastCGI with C(++).
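
    For the ASP.NET option mentioned above, the in-memory lookup really is just static state in the worker process: every request served by that process sees the same dictionary, with no extra socket hop. A minimal sketch (the handler and table names are made up):

        using System.Collections.Generic;
        using System.Web;

        public class LookupHandler : IHttpHandler
        {
            // Loaded once per worker process; shared by all requests it serves.
            private static readonly Dictionary<string, string> Table = LoadTable();

            private static Dictionary<string, string> LoadTable()
            {
                // In reality: read from the database or a file at startup.
                return new Dictionary<string, string> { { "foo", "bar" } };
            }

            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                string key = context.Request.QueryString["key"] ?? "";
                string value;
                context.Response.Write(Table.TryGetValue(key, out value) ? value : "not found");
            }
        }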

    Read the article

  • Can FAXCOMEXLib and Windows Fax Service send a color fax?

    - by Craig
    We are in the process of testing different options for sending faxes from within our C# code (receiving faxes is not necessary). One of those options is to use FAXCOMEXLib. Without surprise, I've had pretty good success sending out black & white faxes with FAXCOMEXLib. But we also have a requirement to support sending color faxes. So I execute the following code (just a snippet):

        IFaxDocument oFaxDoc = new FaxDocumentClass();
        oFaxDoc.Body = @"C:\Test\color_image.jpg";
        oFaxDoc.ConnectedSubmit(m_oFaxServer);

    The image is 24-bit color, 1728x2304, 204x196 dpi. For the most part, this process works (with a couple of small quirks) and the fax shows up in my "Windows Fax and Scan" outbox (I'm on Vista). The problem is the image has been dithered to a 1-bit black & white image. I assume that what I see in "Windows Fax and Scan" is what is actually transmitted.

    So is there a way to send a color fax using this technology? Are we missing a configuration option somewhere to make it work?

    Read the article
