Search Results

Search found 22569 results on 903 pages for 'win32 process'.


  • Passing arguments and values from HTML to jQuery (events)

    - by Jaroslav Moravec
    What is the practice for passing arguments from HTML to a jQuery event function? For example, getting the id of a row from the db:

        <tr class="jq_killMe" id="thisItemId-id"> ... </tr>

    and jQuery:

        $(".jq_killMe").click(function () {
            var tmp = $(this).attr('id').split("-");
            var id = tmp[0];
            // ...
        });

    What's the best practice if I want to pass more than one argument? Is it better not to use jQuery? For example:

        <tr onclick="killMe('id')"> ... </tr>

    I didn't find the answer to my question; I will be glad even for links. Thanks.

    Edit (pre-solution). So you suggested several methods to do that:

    1. Add custom attributes to the element (XHTML)
    2. Use the id attribute and parse it by regex
    3. data-* attributes in HTML5
    4. Use hidden child elements

    I like the first solution, but... I would like to (I have to (employer)) produce valid code. Here is a nice question and answers: http://stackoverflow.com/questions/994856/so-what-if-custom-html-attributes-arent-valid-xhtml

    And the second is not so pretty as the first, but valid. So the compromise is... The third is the solution for the future, but there are a lot of CMSs where we have to use XHTML or HTML4. (And HTML5 is a long process.)

    Read the article

  • Passing an Ajax variable to a Codeigniter function

    - by Matt
    Hello, I think this is a simple one. I have a Codeigniter function which takes the inputs from a form and inserts them into a database. I want to Ajaxify the process. At the moment the first line of the function gets the id field from the form - I need to change this to get the id field from the Ajax post (which references a hidden field in the form containing the necessary value) instead. How do I do this please?

    My Codeigniter controller function:

        function add() {
            $product = $this->products_model->get($this->input->post('id'));
            $insert = array(
                'id'    => $this->input->post('id'),
                'qty'   => 1,
                'price' => $product->price,
                'size'  => $product->size,
                'name'  => $product->name
            );
            $this->cart->insert($insert);
            redirect('home');
        }

    And the jQuery Ajax function:

        $("#form").submit(function(){
            var dataString = $("input#id")
            //alert (dataString);return false;
            $.ajax({
                type: "POST",
                url: "/home/add",
                data: dataString,
                success: function() {
                }
            });
            return false;
        });

    As always, many thanks in advance.

    Read the article

  • Text piped to PowerShell.exe isn't received when using [Console]::ReadLine()

    - by crtracy
    I'm getting intermittent data loss when calling .NET [Console]::ReadLine() to read piped input to PowerShell.exe:

        > ping localhost | powershell -NonInteractive -NoProfile -C "do { $line = [Console]::ReadLine(); ('' + (Get-Date -f 'HH:mm:ss') + $line) | Write-Host; } while ($line -ne $null)"
        23:56:45time<1ms
        23:56:45
        23:56:46time<1ms
        23:56:46
        23:56:47time<1ms
        23:56:47
        23:56:47

    Normally 'ping localhost' from Vista64 looks like this, so there is a lot of data missing from the output above:

        Pinging WORLNTEC02.bnysecurities.corp.local [::1] from ::1 with 32 bytes of data:
        Reply from ::1: time<1ms
        Reply from ::1: time<1ms
        Reply from ::1: time<1ms
        Reply from ::1: time<1ms
        Ping statistics for ::1:
            Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        Approximate round trip times in milli-seconds:
            Minimum = 0ms, Maximum = 0ms, Average = 0ms

    But using the same API from C# receives all the data sent to the process (excluding some newline differences). Code:

        namespace ConOutTime {
            class Program {
                static void Main (string[] args) {
                    string s;
                    while ((s = Console.ReadLine ()) != null) {
                        if (s.Length > 0) // don't write time for empty lines
                            Console.WriteLine("{0:HH:mm:ss} {1}", DateTime.Now, s);
                    }
                }
            }
        }

    Output:

        00:44:30 Pinging WORLNTEC02.bnysecurities.corp.local [::1] from ::1 with 32 bytes of data:
        00:44:30 Reply from ::1: time<1ms
        00:44:31 Reply from ::1: time<1ms
        00:44:32 Reply from ::1: time<1ms
        00:44:33 Reply from ::1: time<1ms
        00:44:33 Ping statistics for ::1:
        00:44:33 Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
        00:44:33 Approximate round trip times in milli-seconds:
        00:44:33 Minimum = 0ms, Maximum = 0ms, Average = 0ms

    So, if calling the same API from PowerShell instead of C#, many parts of StdIn get 'eaten'. Is the PowerShell host reading strings from StdIn even though I didn't use 'PowerShell.exe -Command -'?

    Read the article

  • Programming style question on how to code functions

    - by shawnjan
    Hey all! So, I was just coding a bit today, and I realized that I don't have much consistency when it comes to a coding style when programming functions. One of my main concerns is whether it's proper to check that the user's input is valid OUTSIDE of the function, or to just throw the values passed by the user into the function and check whether the values are valid in there.

    Let me sketch an example: I have a function that lists hosts based on an environment, and I want to be able to split the environment into chunks of hosts. So an example of the usage is this:

        listhosts -e testenv -s 2 1

    This will get all the hosts from "testenv", split them up into two parts, and display part one. In my code, I have a function that you pass a list, and it returns a list of lists based on your parameters for splitting. BUT, before I pass it a list, I first verify the parameters in my MAIN during the getops process, so in the main I check to make sure there are no negatives passed by the user, I make sure the user didn't request to split into, say, 4 parts but ask to display part 5 (which would not be valid), etc.

    tl;dr: Would you check the validity of a user's input in the flow of your MAIN class, or would you do the check in the function itself, and either return a valid response in the case of valid input or return NULL in the case of invalid input? Obviously both methods work; I'm just interested to hear from experts as to which approach is better :) Thanks for any comments and suggestions you guys have!
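
    A minimal sketch of the two approaches being weighed, written in Java purely as a language-neutral illustration (the names HostSplitter, splitHosts and the sample hosts are made up, not from the question):

        import java.util.ArrayList;
        import java.util.List;

        public class HostSplitter {

            // Option A: the function defends itself and fails loudly on bad input.
            public static List<List<String>> splitHosts(List<String> hosts, int parts) {
                if (hosts == null || parts <= 0 || parts > hosts.size()) {
                    throw new IllegalArgumentException("parts must be between 1 and hosts.size()");
                }
                List<List<String>> chunks = new ArrayList<>();
                int chunkSize = (int) Math.ceil(hosts.size() / (double) parts);
                for (int i = 0; i < hosts.size(); i += chunkSize) {
                    chunks.add(hosts.subList(i, Math.min(i + chunkSize, hosts.size())));
                }
                return chunks;
            }

            // Option B: the caller (the "main" flow) validates first, so the function stays simple.
            public static void main(String[] args) {
                List<String> hosts = List.of("host1", "host2", "host3", "host4");
                int parts = 2, show = 1;   // these would come from option parsing
                if (parts <= 0 || show < 1 || show > parts) {
                    System.err.println("invalid -s arguments");
                    return;
                }
                System.out.println(splitHosts(hosts, parts).get(show - 1));
            }
        }

    Either way the check lives somewhere; the trade-off is whether the function can be trusted with arbitrary input (Option A) or only ever called from a pre-validated path (Option B).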

    Read the article

  • JTA or LOCAL transactions in JPA2+Hibernate 3.6.0?

    - by Pangea
    We are in the process of re-thinking our tech stack and below are our choices (we can't live without Spring and Hibernate due to the complexity etc. of the app). We are also moving from J2EE 1.4 to JEE 5.

    Tech stack:

    - JEE 5
    - JPA 2.0 (I know JEE 5 only supports JPA 1.0, but we want to use Hibernate as the JPA provider)
    - Hibernate 3.6.0 (we already have lots of hbm files with custom types etc., so we don't want to migrate them to JPA at this time. This means we want both JPA and hbm mappings to work together, and hence Hibernate as the JPA provider instead of using the default that comes with the app server)

    Now the problem is that I want to stick with local transactions but other team members want to use JTA. I have been working with J2EE for the last 9 years and I've heard time and again people suggesting to stick with local transactions if I don't need two-phase commits. This is not only for performance reasons; debugging/troubleshooting a local transaction is a lot easier than a distributed transaction.

    My suggestion is to use Spring declarative transaction management + local transactions (HibernateTransactionManager). I want to make sure whether I am being paranoid or I have a valid point. I'd like to hear what the rest of the JEE world thinks. Thank you.
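
    For reference, a minimal sketch of what the suggested setup could look like with Spring Java config (an assumption on my part: it presumes Spring 3.1+ for @EnableTransactionManagement, omits the SessionFactory bean wiring, and the TxConfig/OrderService names are made up; with XML config the equivalent would be tx:annotation-driven plus a HibernateTransactionManager bean):

        import org.hibernate.SessionFactory;
        import org.springframework.context.annotation.Bean;
        import org.springframework.context.annotation.Configuration;
        import org.springframework.orm.hibernate3.HibernateTransactionManager;
        import org.springframework.transaction.annotation.EnableTransactionManagement;
        import org.springframework.transaction.annotation.Transactional;

        @Configuration
        @EnableTransactionManagement
        public class TxConfig {

            // Resource-local transactions bound to the Hibernate SessionFactory;
            // no JTA transaction manager or XA data source involved.
            @Bean
            public HibernateTransactionManager transactionManager(SessionFactory sessionFactory) {
                return new HibernateTransactionManager(sessionFactory);
            }
        }

        // Declarative demarcation on the service layer.
        class OrderService {
            @Transactional
            public void placeOrder() {
                // all Hibernate work here joins a single local transaction
            }
        }

    The point of the sketch is that the service code looks identical whether the transaction manager underneath is HibernateTransactionManager or JtaTransactionManager, so switching later is mostly a configuration change.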

    Read the article

  • Real time embeddable http server library required

    - by Howard May
    Having looked at several available http server libraries I have not yet found what I am looking for, and I am sure I can't be the first to have this set of requirements.

    I need a library which presents an API which is 'pipelined'. Pipelining is used to describe an HTTP feature where multiple HTTP requests can be sent across a TCP link at a time without waiting for a response. I want a similar feature on the library API where my application can receive all of those requests without having to send a response (I will respond, but want the ability to process multiple requests at a time to reduce the impact of internal latency).

    So the web server library will need to support the following flow:

    1. HTTP Client transmits http request 1
    2. HTTP Client transmits http request 2 ...
    3. Web Server Library receives request 1 and passes it to My Web Server App
    4. My Web Server App receives request 1 and dispatches it to My System
    5. Web Server receives request 2 and passes it to My Web Server App
    6. My Web Server App receives request 2 and dispatches it to My System
    7. My Web Server App receives response to request 1 from My System and passes it to Web Server
    8. Web Server transmits HTTP response 1 to HTTP Client
    9. My Web Server App receives response to request 2 from My System and passes it to Web Server
    10. Web Server transmits HTTP response 2 to HTTP Client

    Hopefully this illustrates my requirement. There are two key points to recognise: responses to the Web Server Library are asynchronous, and there may be several HTTP requests passed to My Web Server App with responses outstanding.

    Additional requirements are:

    - Embeddable into an existing 'C' application
    - Small footprint; I don't need all the functionality available in Apache etc.
    - Efficient; will need to support thousands of requests a second
    - Allows asynchronous responses to requests; there is a small latency to responses and given the required request throughput a synchronous architecture is not going to work for me.
    - Support persistent TCP connections
    - Support use with Server-Push Comet connections
    - Open Source / GPL
    - Support for HTTPS
    - Portable across linux, windows; preferably more.

    I will be very grateful for any recommendation. Best Regards

    Read the article

  • How to create an instance of object with RTTI in Delphi 2010?

    - by Paul
    As we all know, when we call a constructor of a class like this:

        instance := TSomeClass.Create;

    the Delphi compiler actually does the following things:

    1. Call the static NewInstance method to allocate memory and initialize the memory layout.
    2. Call the constructor method to perform the initialization of the class.
    3. Call the AfterConstruction method.

    It's simple and easy to understand, but I'm not very sure how the compiler handles exceptions in the second and third steps. It seems there is no explicit way to create an instance using an RTTI constructor method in D2010, so I wrote a simple function in the Spring Framework for Delphi to reproduce the process of the creation.

        class function TActivator.CreateInstance(instanceType: TRttiInstanceType;
          constructorMethod: TRttiMethod; const arguments: array of TValue): TObject;
        var
          classType: TClass;
        begin
          TArgument.CheckNotNull(instanceType, 'instanceType');
          TArgument.CheckNotNull(constructorMethod, 'constructorMethod');
          classType := instanceType.MetaclassType;
          Result := classType.NewInstance;
          try
            constructorMethod.Invoke(Result, arguments);
          except
            on Exception do
            begin
              if Result is TInterfacedObject then
              begin
                Dec(TInterfacedObjectHack(Result).FRefCount);
              end;
              Result.Free;
              raise;
            end;
          end;
          try
            Result.AfterConstruction;
          except
            on Exception do
            begin
              Result.Free;
              raise;
            end;
          end;
        end;

    I feel it may not be 100% right, so please show me the way. Thanks!

    Read the article

  • App hosting Report Viewer crashes on exit after export

    - by Paul Sasik
    We have a .NET Winforms application that hosts the Crystal Reports Viewer control (version XI). It works well for the most part, but when an export of data from the viewer is performed the application will crash on exit, in unmanaged code. The error message is not very useful and just says that an incorrect memory location was accessed. No other info, such as a specific DLL etc., is provided. This only happens after the viewer is used to export a report to CSV, XML etc.

    My guess is that at some point in the export process Crystal creates a resource that attempts an action on shutdown against a parent window (perhaps) that no longer exists. I've seen a number of memory leak and shutdown issues with Crystal but this one's new. Has anyone seen it and come up with a workaround, or has ideas for workarounds? So far we've tried explicitly disposing of all Crystal-related objects, setting them to null, and even setting a Thread.Sleep cycle on shutdown to "give Crystal time to clean up."

    Update:

    - The crash happens only on shutdown (so not immediate)
    - All export formats work
    - All export files are created properly
    - CR is installed on the same machine as the hosting .NET app
    - Not sure about exporting from the IDE... is that even possible?

    Read the article

  • Why two subprocesses created by Java behave differently?

    - by Lily
    I use Java Runtime.getRuntime().exec(command) to create a subprocess and print its pid as follows:

        public static void main(String[] args) {
            Process p2;
            try {
                p2 = Runtime.getRuntime().exec(cmd);
                Field f2 = p2.getClass().getDeclaredField("pid");
                f2.setAccessible(true);
                System.out.println(f2.get(p2));
            } catch (Exception ie) {
                System.out.println("Yikes, you are not supposed to be here");
            }
        }

    I tried both a C++ executable and a Java executable (.jar file). Both executables continuously print "Hello World" to stdout. When cmd is the C++ executable, the pid is printed to the console but the subprocess gets killed as soon as main() returns. However, when I call the .jar executable in cmd, the subprocess does not get killed, which is the desired behavior.

    I don't understand why the same Java code, with different executables, can behave so differently. How should I modify my code so that I can have persistent subprocesses in Java? Newbie in this field. Any suggestion is welcomed. Lily
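
    One thing worth noting: a child started with exec() writes into pipes owned by the parent JVM, and a chatty child can block on a full pipe or die on a broken pipe once those pipes go away - which may be why the C++ child dies when the parent exits while the Java child (whose System.out silently swallows write errors) keeps running. A minimal sketch, assuming Java 8+ and a made-up "./helloworld" command, that launches the child with ProcessBuilder and keeps its output drained:

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;

        public class Launcher {
            public static void main(String[] args) throws IOException, InterruptedException {
                // "./helloworld" is a placeholder for the real executable.
                ProcessBuilder pb = new ProcessBuilder("./helloworld");
                pb.redirectErrorStream(true);            // merge stderr into stdout
                Process child = pb.start();

                // Drain the child's output on a background thread so it never
                // blocks on a full pipe buffer, no matter how much it prints.
                Thread drainer = new Thread(() -> {
                    try (BufferedReader r = new BufferedReader(
                            new InputStreamReader(child.getInputStream()))) {
                        String line;
                        while ((line = r.readLine()) != null) {
                            System.out.println("[child] " + line);
                        }
                    } catch (IOException ignored) {
                    }
                });
                drainer.setDaemon(true);
                drainer.start();

                child.waitFor();                         // or detach, depending on the goal
            }
        }

    If the child is meant to outlive the parent entirely, redirecting its output to a file with pb.redirectOutput(new File(...)) avoids tying it to the parent's pipes at all.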

    Read the article

  • Files under Program Files have a split personality

    - by regularfry
    I have a Ruby application I'm installing (along with a packaged ruby interpreter) under Program Files on Windows 7 with an NSIS-built installer. In order to debug it, I edited one of the files to add some debugging statements. After that, I uninstalled the package and ran a new version of the installer which includes a new copy of the edited file, without debugging statements. Now, I can't get the new copy to load into ruby.

    - If I run type <filename> in cmd.exe, or open the file in Notepad.exe or Firefox, I see the new version.
    - If I run ruby -e "puts File.read('<filename>')", or open the file in emacs, I see the old version.
    - If, in Windows Explorer, I copy the file to a new filename, everything can see the new contents at that filename.
    - If I delete the original file and rename the copy to replace the original, the split personality returns.

    This situation survives a reboot, so it's not a simple matter of a file being accidentally held open. What on earth is going on here? Is there some aspect of the install process that might be checkpointing the file in a way I can revert, or at least switch off while I'm debugging the installer?

    Read the article

  • How to deal with clients and iterations in Agile team?

    - by Ondrej Slinták
    This thread is a follow-up to my previous one. It's in fact 2 questions, so I hope no one minds, as they are dependent on each other.

    We are starting a new project at work and we consider it a great opportunity to try Agile techniques in action. We had a brainstorming session about ideas we read in several books and articles, and came up with a concept that would suit us the best: 2-week iterations, followed by a call with clients who would choose what stuff they want to have in the next iteration. I just have a few more questions, which we couldn't figure out ourselves.

    What to do in the first iteration? What, generally, to do in the first few iterations if we start from scratch? Just give it a month of development to code the core of the application, or start with simple wire-frames with limited pre-coded functionality? What do clients usually want to see? Shiny stuff that doesn't work or ugly stuff that does work?

    How to communicate with clients? Our initial thought is to set the process to something like this: Is it a good idea to have a Focal Point on the client side, or is it better to communicate straight with all the clients to prevent miscommunication?

    Any thoughts are welcome! Thanks in advance.

    Read the article

  • Linux network stack : adding protocols with an LKM and dev_add_pack

    - by agent0range
    Hello, I have recently been trying to familiarize myself with the Linux networking stack and device drivers (I have both similarly named O'Reilly books) with the eventual goal of offloading UDP. I have already implemented UDP on the NIC, but now the hard part...

    Rather than ask for assistance on this larger goal, I was hoping someone could clarify for me a particular snippet I found that is part of an LKM which registers a new protocol (OTP) that acts as a filter between the device driver and the network stack:

    http://www.phrack.org/archives/55/p55_0x0c_Building%20Into%20The%20Linux%20Network%20Layer_by_lifeline%20&%20kossak.txt

    (Note: this Phrack article contains three different modules; the code for the OTP is at the bottom of the page.)

    In the init function of his example he has:

        otp_proto.type = htons(ETH_P_ALL);
        otp_proto.func = otp_func;
        dev_add_pack(&otp_proto);

    which (if I understand correctly) should register otp_proto as a packet sniffer and put it into the ptype_all data structure. My question is about dev_add_pack. Is it the case that the protocol being registered as a filter will always be placed at this layer between L2 and the device driver? Or, for instance, could I make such filtering occur between the application and transport layers (analyze socket parameters) using the same process?

    I apologize if this is confusing - I am having some trouble wrapping my head around the bigger picture when it comes to modules altering kernel stack functionality. Thanks

    Read the article

  • How can I create Wikipedia's footnote highlighting solely with jQuery

    - by andrew.bachman
    I would like to duplicate Wikipedia's footnote highlighting on in-text citation click, solely with jQuery and CSS classes. I found a webpage that describes how to do so with CSS3, and then a JavaScript solution for IE. I would like, however, to do it solely with jQuery, as the site I'm doing it on already has a bunch of jQuery elements.

    I've come up with a list of the process:

    1. In-text citation clicked
    2. Replace the highlight class with the standard footnote class on the <p> tag of footnotes that are already highlighted.
    3. Add the highlight to the appropriate footnote
    4. Use jQuery to scroll down the page to the appropriate footnote.

    I've come up with some jQuery so far, but I'm extremely new to it, relying heavily on tutorials and plugins to this point. Here is what I have, with some plain English for the parts I haven't figured out yet:

        $('.inlineCite').click(function() {
            $('.footnoteHighlight').removeClass('footnoteHighlight').addClass('footnote');
            //add highlight to id of highlightScroll
            //scroll to footnote with matching id
        });

    If I had a method to pass a part of the selector into the function and turn it into a variable, I believe I could pull it off. Most likely I'm sure one of you gurus will whip something up that puts anything I think I have done to shame. Any help will be greatly appreciated, so thank you in advance. Cheers.

    Read the article

  • Jasmine testing coffeescript expect(setTimeout).toHaveBeenCalledWith

    - by Lee Quarella
    In the process of learning Jasmine, I've come to this issue. I want a basic function to run, then set a timeout to call itself again... simple stuff.

        class @LoopObj
          constructor: ->

          loop: (interval) ->
            #do some stuff
            setTimeout((=>@loop(interval)), interval)

    But I want to test to make sure the setTimeout was called with the proper args:

        describe "loop", ->
          xit "does nifty things", ->

          it "loops at a given interval", ->
            my_nifty_loop = new LoopObj
            interval = 10
            spyOn(window, "setTimeout")
            my_nifty_loop.loop(interval)
            expect(setTimeout).toHaveBeenCalledWith((-> my_nifty_loop.loop(interval)), interval)

    I get this error:

        Expected spy setTimeout to have been called with [ Function, 10 ] but was called with [ [ Function, 10 ] ]

    Is this because the (-> my_nifty_loop.loop(interval)) function does not equal the (=>@loop(interval)) function? Or does it have something to do with the extra square brackets around the second [ [ Function, 10 ] ]? Something else altogether? Where have I gone wrong?

    Read the article

  • Best practice - logging events (general) and changes (database)

    - by b0x0rz
    Need help with logging all activities on a site as well as database changes.

    Requirements:

    - should be in the database
    - should be easily searchable by initiator (user name / session id), event (activity type) and event parameters

    I can think of a database design, but either it involves a lot of tables (one per event) so I can log each of the parameters of an event in a separate field, OR it involves one table with generic fields (7 int/numeric and 7 text types) and logs everything in one table, with an event type field determining what parameter got written where (and hoping that I don't need more than 7 fields of a certain type, or 8 or 9 or whatever number I choose)...

    Example of entries (the usual things):

    - [username] login failed @datetime
    - [username] login successful @datetime
    - [username] changed password @datetime, estimated security of password [low/ok/high/perfect] @datetime
    - [username] clicked result [result number] [result id] after searching for [search string] and got [number of results] @datetime
    - [username] clicked result [result number] [result id] after searching for [search string] and got [number of results] @datetime
    - [username] changed profile name from [old name] to [new name] @datetime
    - [username] verified name with [credit card type] credit card @datetime
    - database table [table name] purged of old entries @datetime via automated process
    - etc...

    So, has anyone dealt with this before? Any best practices / links you can share? I've seen it done with the generic solution mentioned above, but somehow that goes against what I learned from database design. As you can see, the sheer number of events that need to be trackable (each user will be able to see this info) is giving me headaches, BUT I do LOVE the one-event-per-table solution more than the generic one. Any thoughts?

    Edit: Also, is there maybe an authoritative list of such (likely) events somewhere?

    Stack Overflow says: the question you're asking appears subjective and is likely to be closed. My answer: it probably is subjective, but it is directly related to an issue I have with designing a database / writing my code, so I'd welcome any help. Also, I tried narrowing down the ideas to 2, so hopefully one of these will prevail, unless there already is an established solution for these kinds of things.

    Read the article

  • Delphi: Error when starting MCI

    - by marco92w
    I use the TMediaPlayer component for playing music. It works fine with most of my tracks, but it doesn't work with some of them. When I want to play them, the following error message is shown, which is German but roughly means:

        In the project pMusicPlayer.exe an exception of the class EMCIDeviceError occurred. Message: "Error when starting MCI.". Process was stopped. Continue with "Single Command/Statement" or "Start".

    The program quits directly after calling the procedure "Play" of TMediaPlayer. This error occurred with the following file, for example:

    - file size: 7.40 MB
    - duration: 4:02 minutes
    - bitrate: 256 kBit/s

    I've encoded this file with a bitrate of 128 kBit/s, and thus a file size of 3.70 MB: it works fine! What's wrong with the first file? Windows Media Player and other programs can play it without any problems. Is it possible that Delphi's TMediaPlayer cannot handle big files (e.g. 5 MB) or files with a high bitrate (e.g. 128 kBit/s)? What can I do to solve the problem?

    Read the article

  • Skipping the BufferedReader readLine() method in java

    - by DDP
    Is there an easy way to skip the readLine() method in Java if it takes longer than, say, 2 seconds? Here's the context in which I'm asking this question:

        public void run() {
            boolean looping = true;
            while(looping) {
                for(int x = 0; x < clientList.size(); x++) {
                    try {
                        Comm s = clientList.get(x);
                        String str = s.recieve();
                        // code that does something based on the string in the line above
                    }
                    // other stuff like catch methods
                }
            }
        }

    Comm is a class I wrote, and the receive method, which contains a BufferedReader called "in", is this:

        public String recieve() {
            try {
                if(active)
                    return in.readLine();
            } catch(Exception e) {
                System.out.println("Comm Error 2: " + e);
            }
            return "";
        }

    I've noticed that the program stops and waits for the input stream to have something to read before continuing. Which is bad, because I need the program to keep looping (as it loops, it goes to all the other clients and asks for input). Is there a way to skip the readLine() process if there's nothing to read? I'm also pretty sure that I'm not explaining this well, so please ask me questions if I'm being confusing.
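
    A minimal sketch of one way to avoid blocking, assuming a Comm class holding the BufferedReader shown above (the receiveNonBlocking name is made up): check ready() before calling readLine(), and return null when nothing is buffered yet so the loop can move on to the next client.

        import java.io.BufferedReader;
        import java.io.IOException;

        public class Comm {
            private final BufferedReader in;
            private boolean active = true;

            public Comm(BufferedReader in) {
                this.in = in;
            }

            // Returns a line if one is already available, or null if nothing is
            // waiting, so the caller's loop never blocks on a silent client.
            public String receiveNonBlocking() {
                try {
                    if (active && in.ready()) {   // ready() == data already buffered
                        return in.readLine();
                    }
                } catch (IOException e) {
                    System.out.println("Comm Error 2: " + e);
                }
                return null;
            }
        }

    Note that ready() only guarantees buffered data is available; a client that has sent half a line can still stall readLine(), so one reader thread per client, or java.nio selectors, are the more robust options for many clients.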

    Read the article

  • Internet Explorer randomly drops sessions between pages in cakePHP

    - by Emerson Taymor
    Hello everyone, I've come across an extremely unusual bug that my team has literally no idea how to solve. Doing some research, I found some similar solutions that I thought would work, but alas did not. Here is my situation; let me know if I can provide additional insight to help solve the problem.

    The first step is that someone chooses a country via a flash map. Flash passes this region name (as well as a date) through the URL, which we then convert to a session. The next page contains no Flash and doesn't display the selected region, but it does hold on to it for further down the process. Everything works perfectly in Safari and Firefox; however, in IE sometimes unexpected results occur. Frequently (but not always), the session is dropped completely and no sessions are stored between the first and 2nd pages.

    Here are the steps that I have taken thus far, unsuccessfully:

    1. Changed Security from Medium - Low
    2. Changed CheckUserAgent from True - False
    3. Changed storing of sessions from PHP - Database

    Some additional information that may be useful: I have tried printing out the session data in debug (debug($_SESSION) in my view file and debug set to 2 in config). In Internet Explorer everything prints out as expected EXCEPT when the region and date don't get set. For example: if the region and date don't get set, NOTHING is printed out for debug. I don't get the session details at the top, and I don't get the normal dump of calls at the bottom of the page either. I am not using redirection on these pages.

    Please let me know if you have ANY idea of what is causing this or any solutions. I am beyond frustrated and have tried as much as I can to solve this. Thanks!

    Read the article

  • MSBuild / PowerShell: Copy SQL Server 2012 database to SQL Azure via BACPAC (for Continuous Integration)

    - by giveme5minutes
    I'm creating a continuous integration MSBuild script which copies a database in on-premise SQL Server 2012 to SQL Azure. Easy, right?

    Methods

    After a fair bit of research I've come across the following methods:

    1. Use PowerShell to access the DAC library directly, then use the MSBuild PowerShell extension to wrap the script. This would require installing PowerShell 3 and working out how to make the MSBuild PowerShell extension work with it, as apparently MS moved the DAC API to a different namespace in the latest version of the library. PowerShell would give direct access to the API, but may require quite a bit of boilerplate.
    2. Use the sample DAC Framework Client Side Tools, which requires compiling them myself, as the downloads available from Codeplex only include the Hosted version. It would also require fixing them to use DAC 3.0 classes, as they appear to currently use an earlier version of DAC. I could then call these tools from an <Exec Command="" /> in the MSBuild script. Less boilerplate, and if I hit any bumps in the road I can just make changes to the source.

    Processes

    Using whichever method, the process could be either:

    1. Export from on-premise SQL Server 2012 to local BACPAC
    2. Upload BACPAC to blob storage
    3. Import BACPAC to SQL Azure via Hosted DAC

    Or:

    1. Export from on-premise SQL Server 2012 to local BACPAC
    2. Import BACPAC to SQL Azure via Client DAC

    Question

    All of the above seems to be quite a lot of effort for something that seems to be a standard feature... so before I start reinventing the wheel and documenting the results for all to see, is there something really obvious that I've missed here? Is there a pre-written script that MS has released that I have not yet uncovered? There's a command in the GUI of SQL Server Management Studio 2012 that does EXACTLY what I'm trying to do (right click on local database, click "Tasks", click "Deploy Database to SQL Azure"). Surely if it's a few clicks in the GUI it must be a single command on the command line somewhere??

    Read the article

  • Are there more secure alternatives to the .Net SQLConnection class?

    - by KeyboardMonkey
    Hi SO people, I'm very surprised this issue hasn't been discussed in depth: this article tells us how to use windbg to dump a running .NET process's strings in memory.

    I spent much time researching the SecureString class, which uses unmanaged pinned memory blocks and keeps the data encrypted too. Great stuff. The problem comes in when you use its value and assign it to the SQLConnection.ConnectionString property, which is of the System.String type. What does this mean? Well...

    - It's stored in plain text
    - Garbage Collection moves it around, leaving copies in memory
    - It can be read with windbg memory dumps

    That totally negates the SecureString functionality! On top of that, the SQLConnection class is non-inheritable, so I can't even roll my own with a SecureString property instead; yay for closed source. Yay.

    A new DAL layer is in progress, but for a new major version, and with so many users it will be at least 2 years before every user is upgraded; others might stay on the old version indefinitely, for whatever reason. Because of the frequency the connection is used, marshalling from a SecureString won't help, since the immutable old copies stick in memory until GC comes around. Integrated Windows security isn't an option, since some clients don't work on domains and others roam and connect over the net.

    How can I secure the connection string, in memory, so it can't be viewed with windbg?

    Read the article

  • select failing with C program but not shell

    - by Gary
    So I have a parent and child process, and the parent can read output from the child and send to the input of the child. So far, everything has been working fine with shell scripts, testing commands which input and output data. I just tested with a simple C program and couldn't get it to work. Here's the C program:

        #include <stdio.h>

        int main( void )
        {
            char stuff[80];
            printf("Enter some stuff:\n");
            scanf("%s", stuff);
            return 0;
        }

    The problem with the C program is that my select fails to read from the child fd and hence the program cannot finish. Here's the bit that does the select:

        //wait till child is ready
        fd_set set;
        struct timeval timeout;
        FD_ZERO( &set );               // initialize fd set
        FD_SET( PARENT_READ, &set );   // add child in to set
        timeout.tv_sec = 3;
        timeout.tv_usec = 0;
        int r = select(FD_SETSIZE, &set, NULL, NULL, &timeout);
        if( r < 1 ) {
            // we didn't get any input
            exit(1);
        }

    Does anyone have any idea why this would happen with the C program and not a shell one? Thanks!

    Read the article

  • SQL Server becomes slow after restart

    - by Tobi DM
    We use SQL Server 2005 on a Windows Server 2008. The server has 48 GB RAM. SQL Server is configured to use 40 GB RAM. There is only one database hosted (about 70 GB). The only app beside SQL Server is our app server, which connects the clients to the database.

    Now we encounter the following problem: after a restart of the server the performance is great. The server grabs the 40 GB RAM which it is allowed to and then runs fast as hell. But after about 4 weeks the system becomes slower and slower. The execution time of statements (seen in the profiler) is rising slowly. But I cannot see that there is something going wrong on the server:

    - CPU usage is at about 20%
    - I/O also seems to be no problem
    - The process monitor does also not show that there are strange apps or something like that
    - The event log also has no interesting messages
    - No open transactions or blockings to see

    We tried already the following things without effect:

    - Dropped the cache by using the statements
        DBCC FreeProcCache
        DBCC FREESYSTEMCACHE('ALL')
        DBCC DropCleanbuffers
    - Restarted the app server we are using
    - Restarted the SQL Server service

    But nothing helped except restarting the whole server. Any ideas?

    Read the article

  • Why do I get an empty request from the Jakarta Commons HttpClient?

    - by polyurethan
    I have a problem with the Jakarta Commons HttpClient. Before my self-written HttpServer gets the real request there is one request which is completely empty. That's the first problem. The second problem is, sometimes the request data ends after the third or fourth line of the HTTP request:

        POST / HTTP/1.1
        User-Agent: Jakarta Commons-HttpClient/3.1
        Host: 127.0.0.1:4232

    For debugging I am using the Axis TCPMonitor. There everything is fine but the empty request.

    How I process the stream:

        StringBuffer requestBuffer = new StringBuffer();
        InputStreamReader is = new InputStreamReader(socket.getInputStream(), "UTF-8");
        int byteIn = -1;
        do {
            byteIn = is.read();
            if (byteIn > 0) {
                requestBuffer.append((char) byteIn);
            }
        } while (byteIn != -1 && is.ready());
        String requestData = requestBuffer.toString();

    How I send the request:

        client.getParams().setSoTimeout(30000);
        method = new PostMethod(url.getPath());
        method.getParams().setContentCharset("utf-8");
        method.setRequestHeader("Content-Type", "application/xml; charset=utf-8");
        method.addRequestHeader("Connection", "close");
        method.setFollowRedirects(false);
        byte[] requestXml = getRequestXml();
        method.setRequestEntity(new InputStreamRequestEntity(new ByteArrayInputStream(requestXml)));
        client.executeMethod(method);
        int statusCode = method.getStatusCode();

    Does anyone of you have an idea how to solve these problems? Alex
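
    One thing to note about the reading loop above: ready() only reports data that has already arrived, so the loop exits as soon as the client pauses between packets, which could explain requests that appear empty or truncated. A minimal sketch (the RequestReader/readRequest names are made up) of a reader that instead parses headers up to the blank line and then reads Content-Length worth of body, assuming a simple non-chunked request; strictly, Content-Length counts bytes, so a production reader should work on the raw InputStream rather than a Reader:

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.io.InputStreamReader;
        import java.net.Socket;

        public class RequestReader {

            public static String readRequest(Socket socket) throws IOException {
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(socket.getInputStream(), "UTF-8"));
                StringBuilder request = new StringBuilder();
                int contentLength = 0;

                // Read the request line and headers until the empty line.
                String line;
                while ((line = in.readLine()) != null && !line.isEmpty()) {
                    request.append(line).append("\r\n");
                    if (line.toLowerCase().startsWith("content-length:")) {
                        contentLength = Integer.parseInt(line.substring(15).trim());
                    }
                }
                request.append("\r\n");

                // Read the body, blocking until Content-Length characters arrive.
                char[] body = new char[contentLength];
                int read = 0;
                while (read < contentLength) {
                    int n = in.read(body, read, contentLength - read);
                    if (n == -1) break;          // client closed early
                    read += n;
                }
                request.append(body, 0, read);
                return request.toString();
            }
        }

    Reading this way blocks until the full request has arrived (bounded by the client's socket timeout) instead of stopping at the first lull in the stream.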

    Read the article

  • C# : Forcing a clean run in a long running SQL reader loop?

    - by Wardy
    I have a SQL data reader that reads 2 columns from a SQL db table. Once it has done its bit it then starts again, selecting another 2 columns. I would pull the whole lot in one go, but that presents a whole other set of challenges.

    My problem is that the table contains a large amount of data (some 3 million rows or so) which makes working with the entire set a bit of a problem. I'm trying to validate the field values, so I'm pulling the ID column then one of the other cols and running each value in the column through a validation pipeline where the results are stored in another database.

    My problem is that when the reader hits the end of handling one column I need to force it to immediately clean up every little block of RAM used, as this process uses about 700MB and it has about 200 columns to go through. Without a full garbage collect I will definitely run out of RAM.

    Anyone got any ideas how I can do this? I'm using lots of small reusable objects; my thought was that I could just call GC.Collect() at the end of each read cycle and that would flush everything out. Unfortunately that isn't happening for some reason.

    Read the article

  • C++ VB6 interfacing problem

    - by Roshan
    Hi, I'm tearing my hair out trying to solve this one; any insights will be much appreciated.

    I have a C++ exe which acquires data from some hardware in the main thread and processes it in another thread (thread 2). I use a C++ dll to supply some data processing functions which are called from thread 2. I have a requirement to make another set of data processing functions in VB6. I have thus created a VB6 dll, using the add-in vbAdvance to create a standard dll.

    When I call functions from within this VB6 dll from the main thread, everything works exactly as expected. When I call functions from this VB6 dll in thread 2, I get an access violation. I've traced the error to the CopyMemory command: it would seem that if this is used within the call from the main thread, it's fine, but in a call from the process thread it causes an exception. Why should this be so? As far as I understand, threads share the same address space.

    Here is the code from my VB dll:

        Public Sub UserFunInterface(ByVal in1ptr As Long, ByVal out1ptr As Long, ByRef nsamples As Long)
            Dim myarray1() As Single
            Dim myarray2() As Single
            Dim i As Integer
            ReDim myarray1(0 To nsamples - 1) As Single
            ReDim myarray2(0 To nsamples - 1) As Single
            With tsa1din(0)   ' defined as safearray1d in a global definitions module
                .cDims = 1
                .cbElements = 4
                .cElements = nsamples
                .pvData = in1ptr
            End With
            With tsa1dout
                .cDims = 1
                .cbElements = 4
                .cElements = nsamples
                .pvData = out1ptr
            End With
            CopyMemory ByVal VarPtrArray(myarray1), VarPtr(tsa1din(0)), 4
            CopyMemory ByVal VarPtrArray(myarray2), VarPtr(tsa1dout), 4
            For i = 0 To nsamples - 1
                myarray2(i) = myarray1(i) * 2
            Next i
            ZeroMemory ByVal VarPtrArray(myarray1), 4
            ZeroMemory ByVal VarPtrArray(myarray2), 4
        End Sub

    Read the article
