Search Results

Search found 22041 results on 882 pages for 'kill process'.

Page 636 of 882

  • How to handle DB concurrency in a client-server application in C#?

    - by RAJ K
    I am developing an application in C# WPF that will have a client-server architecture (the client will handle product sales billing). I am a novice in this area and asked an earlier question to get my development process started; ultimately I have selected MySQL, WCF and WPF. Now I have one simple question: do I need to handle DB concurrency explicitly in my application (for example, three clients inserting data at the same time), or will MySQL handle this without any conflict? To accomplish this, my plan is to create a WCF service that performs the DB queries on behalf of the client application. Do you have any suggestions for improving my application's performance?
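
    Plain concurrent INSERTs are already serialized safely by MySQL's InnoDB engine, so three clients inserting at the same time is not a conflict by itself; it is read-then-write sequences (for example, checking stock before billing a sale) that need explicit handling inside the WCF service. A minimal sketch of that, assuming Connector/NET and hypothetical stock/sales tables:

        using System;
        using MySql.Data.MySqlClient; // Connector/NET

        public static class SalesRepository
        {
            public static void RecordSale(string connectionString, int productId, int quantity)
            {
                using (var conn = new MySqlConnection(connectionString))
                {
                    conn.Open();
                    // The transaction makes the decrement-then-insert pair atomic
                    // even when several clients bill sales at the same moment.
                    using (var tx = conn.BeginTransaction())
                    {
                        var update = new MySqlCommand(
                            "UPDATE stock SET quantity = quantity - @qty " +
                            "WHERE product_id = @id AND quantity >= @qty", conn, tx);
                        update.Parameters.AddWithValue("@qty", quantity);
                        update.Parameters.AddWithValue("@id", productId);
                        if (update.ExecuteNonQuery() == 0)
                            throw new InvalidOperationException("Not enough stock.");

                        var insert = new MySqlCommand(
                            "INSERT INTO sales (product_id, quantity) VALUES (@id, @qty)", conn, tx);
                        insert.Parameters.AddWithValue("@id", productId);
                        insert.Parameters.AddWithValue("@qty", quantity);
                        insert.ExecuteNonQuery();

                        tx.Commit();
                    }
                }
            }
        }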

    Read the article

  • Add widgets to custom WordPress sidebars on theme activation?

    - by McGirl
    I'm working on a WordPress 3.0 multi-site installation. Each new blog will use the same theme with slight modifications (a custom Thesis install, if it matters). I'm trying to automate the set-up process for each new blog as much as possible. To that end, I'd like to automatically add widgets to my custom sidebars and widget-enabled footers. It would be even better if the widgets could have pre-set parameters/content that I or the blog owner could then edit in the Widgets panel. I've searched high and low and haven't been able to figure out a way to make this work. Any and all suggestions are welcome. Thanks so much!

    Read the article

  • MiniDumpWriteDump segfault?

    - by Steven Penny
    I am trying to dump a process, say calc.exe. When I run my program I get:

        Program received signal SIGSEGV, Segmentation fault.
        0x0000000000401640 in MiniDumpWriteDump ()

    Here is the code:

        #include <windows.h>
        #include <dbghelp.h>

        int main() {
            /* Note: neither handle is checked for failure before use. */
            HANDLE hFile = CreateFileA(
                "calc.dmp",
                GENERIC_READ | GENERIC_WRITE,
                FILE_SHARE_DELETE | FILE_SHARE_READ | FILE_SHARE_WRITE,
                NULL,
                CREATE_ALWAYS,
                FILE_ATTRIBUTE_NORMAL,
                NULL
            );
            DWORD procID = 196;
            HANDLE hProc = OpenProcess(PROCESS_ALL_ACCESS, FALSE, procID);

            MiniDumpWriteDump(hProc, procID, hFile, MiniDumpNormal, NULL, NULL, NULL);

            CloseHandle(hFile);
            return 0;
        }

    Read the article

  • Creating an n-tiered application

    - by aaron
    I am researching the architecture for a project that will be started next year. It is mainly a C# web app, but there will be a service layer so that it can talk to our Facebook/iPhone apps. There are a few long-running processes, which means I will be creating a Windows service to handle those. I'm thinking of putting the entire app in the Windows service instead of just the long-running processes: ASP - WCF - BLL versus ASP - BLL. I know this will be more scalable, but it is probably overkill, as everything will be running on the same box, even the database. That could change down the road if the server can't handle the traffic like marketing says it will. I don't have access to production hardware, just my crappy testing box and my local machine. Has anyone decided to go down this route? But mostly, what is the best way to test both methods and get some metrics?
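
    For the metrics question, one low-tech option is to time the same business operation through both call paths with a Stopwatch and compare the averages. A sketch, in which CallPathBenchmark, bll, wcfProxy and GetOrders are all hypothetical stand-ins:

        using System;
        using System.Diagnostics;

        public static class CallPathBenchmark
        {
            // Times an arbitrary call path and reports the average latency.
            public static void Measure(string label, Action call, int iterations = 1000)
            {
                call(); // warm-up: JIT, connection setup, caches

                var sw = Stopwatch.StartNew();
                for (int i = 0; i < iterations; i++)
                    call();
                sw.Stop();

                Console.WriteLine("{0}: {1:F3} ms/call",
                    label, sw.Elapsed.TotalMilliseconds / iterations);
            }
        }

        // Usage (names are hypothetical):
        //   CallPathBenchmark.Measure("ASP - BLL",       () => bll.GetOrders(customerId));
        //   CallPathBenchmark.Measure("ASP - WCF - BLL", () => wcfProxy.GetOrders(customerId));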

    Read the article

  • How to design a client-server architecture

    - by Saurabh01
    I would like to know how to design a TCP-based server architecture that supports a large number of clients (at least 10K) in order to implement a FIX server. My points are: How do we design it? How do we listen on the open port - using select, poll, or some other function? How do we process client responses? At that scale we cannot create one thread per client. Should response processing live in a different executable that shares requests and responses with the server executable through IPC? There is much more to it, but I would appreciate it if anyone could explain this or provide a link. Thanks
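
    The question is language-agnostic, but the core "no thread per client" idea looks the same everywhere: use non-blocking or asynchronous I/O (select/poll/epoll, IOCP, or a framework layered on top of them) so a small pool of threads services many sockets. A C# sketch of that shape, purely as an illustration:

        using System;
        using System.Net;
        using System.Net.Sockets;
        using System.Threading.Tasks;

        public static class AsyncTcpServer
        {
            public static async Task RunAsync(int port)
            {
                var listener = new TcpListener(IPAddress.Any, port);
                listener.Start();
                while (true)
                {
                    // Awaiting the accept does not block a thread per pending client.
                    TcpClient client = await listener.AcceptTcpClientAsync();
                    _ = HandleClientAsync(client); // one async state machine per connection
                }
            }

            private static async Task HandleClientAsync(TcpClient client)
            {
                using (client)
                {
                    var stream = client.GetStream();
                    var buffer = new byte[4096];
                    int read;
                    while ((read = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
                    {
                        // Echo back; a real FIX engine would parse and dispatch here.
                        await stream.WriteAsync(buffer, 0, read);
                    }
                }
            }
        }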

    Read the article

  • Import small number of records from a very large CSV file in Biztalk 2006

    - by rwmnau
    I have a BizTalk project that imports an incoming CSV file and dumps it to a database table. The import works fine, but I only need to keep about 200-300 records from a file with upwards of a million rows. My orchestration discards these rows, but the problem is that the flat file I'm importing is still 250MB, and when converted to XML using a regular flat file pipeline, it takes hours to process and sometimes causes the server to run out of memory. Is there something I can do to have the custom pipeline itself discard rows I don't care about? The very first item in each CSV row is one of a few strings, and I only want to keep rows that start with a certain string. Thanks for any help you're able to provide.
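
    The row filter itself is cheap; the trick is applying it to the raw stream before the flat file disassembler ever sees the discarded rows. A sketch of the core filtering logic one might wrap in a custom decode-stage pipeline component (the BizTalk plumbing around IBaseMessage is omitted, and the prefix is a placeholder):

        using System.IO;

        public static class CsvPrefilter
        {
            // Streams the input line by line and forwards only the rows of interest,
            // so the full 250MB file is never materialized in memory.
            public static Stream KeepRowsStartingWith(Stream input, string prefix)
            {
                var output = new MemoryStream();
                var writer = new StreamWriter(output);
                using (var reader = new StreamReader(input))
                {
                    string line;
                    while ((line = reader.ReadLine()) != null)
                    {
                        if (line.StartsWith(prefix))
                            writer.WriteLine(line);
                    }
                }
                writer.Flush();
                output.Position = 0;
                return output;
            }
        }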

    Read the article

  • Should I use "id" or "unique username"?

    - by roa3
    I am using PHP, AS3 and MySQL. I have a Flash (AS3) website that stores members' information in a MySQL database through PHP. In the "members" table I have "id" as the primary key and "username" as a unique field. Now my situation: when Flash wants to display a member's profile, should it pass the member "id" or the "username" to PHP to run the MySQL query? Is there any difference between passing the "id" and the "username"? Which one is more secure? Which one do you recommend? I would like to optimize my website in terms of security and performance.

    Read the article

  • Google Contacts Data API and PHP

    - by pako
    I'm developing a PHP application to retrieve the list of contacts from a GMail account. I'm looking for a solution which would enable the user of my application to provide the login and password to their Gmail account in my application (as opposed to getting redirected to Google) and then automatically do the retrieval. The retrieval process can be run in PHP or JavaScript (which would then feed the list of contacts back to PHP using Ajax). Is it possible to do that? Which JavaScript API should I use for that? Can someone point me at the right chapter in Google Contacts Data API documentation?

    Read the article

  • Insert MySQL records with table formatting into a string, then echo it in HTML

    - by user1292042
    OK, I have this code:

        $con = mysql_connect("localhost", "root", "");
        if (!$con) {
            die('Could not connect: ' . mysql_error());
        }
        mysql_select_db("juliver", $con);

        $result = mysql_query("SELECT * FROM items");
        while ($row = mysql_fetch_array($result)) {
            $hi1 = '<img src="' . $row['name'] . '" />';
            $hi2 = $row['title'];
            $hi3 = $row['description'];
            $hi4 = $row['link'];
        }

    Now I'm in the process of making the records above display in a table view, so that those four values go into the cells of one table row, and so on for the rest.

    Read the article

  • Is referencing a selector faster in jquery than actually calling the selector? if so, how much does it make a difference?

    - by anthonypliu
    Hi, I have this code:

        $('#preview-button').click(...);
        $('#preview-button').slide(...);
        $('#preview-button').whatever(...);

    Is it a better practice to do this:

        var previewButton = $('#preview-button');
        previewButton.click(...);
        previewButton.slide(...);
        previewButton.whatever(...);

    It probably would be better practice to do this for the sake of keeping the code clean and modular, BUT does it make a difference performance-wise? Does one take longer to process than the other? Thanks guys.

    Read the article

  • Test plans and how best to write them

    - by Karim
    We're trying to figure out the best way to write tests in our test plan. Specifically, when writing a test that is meant to be used by anyone, including QA staff, should the steps in the test be very specific, or broader, giving the tester more leeway in how the task can be accomplished? As a very simple example, if you're testing opening a document in a word processor, should the test read:

        1. Using the mouse, open the file menu
        2. Choose "Open File..." in the file menu
        3. In the open file dialog that appears, navigate to x and double-click the document called y

    or:

        1. Bring up the file open dialog
        2. Open the file y

    Now I realize one answer is probably going to be "it depends on what you're trying to test", but I'm trying to answer a broader question here: if the test steps are too specific, do we risk (a) making the testing process laborious and tedious and, more importantly, (b) missing something because we wrote down too specific a path to achieve a goal? Alternatively, if we make them broad, do we depend too much on the whims of the tester at the time and lose crucial testing of paths that are more common to customers/clients?

    Read the article

  • What is the correct method to load an XML file and re-write it as a CSV? (C# Only)

    - by codesmack
    I have an XML file that I want to load into an unknown object type. (I say "unknown object type" because I am not sure what direction to go.) Once I have the data loaded, I need to do some processing on certain elements that are now loaded into the new object. For the sake of example, say the XML file is full of elements named <car>, and within each car element I need to process the <mileage> element. Then, once this is all done, I need to write the file out as a CSV file. I would like to do this in the most direct way possible (the less code the better). I am using VS 2008 and C#.
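
    A minimal sketch of one direct route is LINQ to XML, which is available in VS 2008 (.NET 3.5); the element names follow the <car>/<mileage> example above, the extra columns and file names are made up, and ProcessMileage stands in for whatever processing is needed:

        using System.IO;
        using System.Linq;
        using System.Xml.Linq;

        class CarsToCsv
        {
            static void Main()
            {
                XDocument doc = XDocument.Load("cars.xml");

                var lines = doc.Descendants("car")
                               .Select(car => string.Join(",", new[]
                               {
                                   (string)car.Element("make"),   // hypothetical extra columns
                                   (string)car.Element("model"),
                                   ProcessMileage((string)car.Element("mileage"))
                               }));

                File.WriteAllLines("cars.csv", lines.ToArray());
            }

            // Placeholder for whatever processing the mileage element needs.
            static string ProcessMileage(string rawMileage)
            {
                return rawMileage;
            }
        }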

    Read the article

  • How to start an Open Source Software development

    - by harigm
    I have an idea for starting an open source project using Java and PHP, and I am confident the software will help many individuals in their daily routine. Can anyone suggest how to start the process? Do we need to register somewhere and submit our idea for approval before we start development? Is there any license we need to get? How do we invite people from the open source development community to participate, if they are interested? Do contributors need to sign any agreement? Once the open source product is stabilized, who will have ownership?

    Read the article

  • Workflow Foundation: Asynchronous operations (lengthy network I/O)

    - by StormianRootSolver
    I have to create an application that will be started a few times per day (it's non-interactive). To operate, it needs LARGE amounts of data from the Internet (megabytes) via a rather slow connection, so the WCF service calls take quite some time. At the same time, it needs to perform local calculations and has a sophisticated initialization process. So what I want to do is create a workflow that asynchronously fetches the data (which takes a few minutes) while already initializing and calculating locally. Is there a way to accomplish this?
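
    Workflow Foundation 4 has AsyncCodeActivity and the Parallel activity for exactly this kind of overlap; outside of WF, the same shape is easy to express with the Task Parallel Library. A sketch of the overlap, with the fetch, initialization and calculation methods as placeholders:

        using System.Threading.Tasks;

        public class StartupCoordinator
        {
            public void Run()
            {
                // Kick off the slow network fetch in the background.
                Task<BulkData> fetchTask = Task.Factory.StartNew(() => FetchBulkDataFromWcf());

                // Meanwhile, run the sophisticated local initialization.
                InitializeLocalState();

                // Local work is done; now block until the download has finished too.
                BulkData data = fetchTask.Result;
                Calculate(data);
            }

            private BulkData FetchBulkDataFromWcf() { /* call the WCF service here */ return new BulkData(); }
            private void InitializeLocalState() { /* expensive local setup */ }
            private void Calculate(BulkData data) { /* local calculations */ }
        }

        public class BulkData { /* placeholder for the downloaded payload */ }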

    Read the article

  • What is the fastest way to compare 2 rows in SQL?

    - by Swoosh
    I have two different databases. When something changes in the big one (which I don't have access to), some rows get imported into a similar, huge table in my database. I have a job that checks for records in this table and, if there are any, executes a stored procedure, processes the data and deletes it from the table. Performance matters (huge amount of data). I would like to know the fastest way to tell whether something has changed, using, let's say, two imported rows with 100 columns each. I don't have foreign keys and don't need them. Chances are that even though I have records in my table, nothing has actually changed. Also, let's say something actually has changed: is it possible, for example, to check only for changes inside datetime columns? Thanks
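
    The check itself would normally stay on the SQL side (for example comparing the datetime columns, or a checksum over them, inside the stored procedure), but purely to illustrate the "compare only the datetime columns" idea, here is that comparison expressed over two ADO.NET rows; it is a sketch, not a claim about where the check runs fastest:

        using System;
        using System.Data;
        using System.Linq;

        public static class RowComparer
        {
            // Returns true if the two rows differ in any column of the given type
            // (e.g. typeof(DateTime) to check only the datetime columns).
            public static bool DiffersIn(DataRow oldRow, DataRow newRow, Type columnType)
            {
                return oldRow.Table.Columns
                    .Cast<DataColumn>()
                    .Where(c => c.DataType == columnType)
                    .Any(c => !Equals(oldRow[c.ColumnName], newRow[c.ColumnName]));
            }
        }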

    Read the article

  • Good overview tool / board for visualizing Subversion branch activity?

    - by Sam
    Our team sometimes finds it a bit confusing and time-consuming to figure out which Subversion operations have been performed on our different branches. For example, when was the Development branch last merged into the trunk? When was this particular tag created, and based on what branch? All of this information can of course be extracted from the Subversion log, but that is always a manual, time-consuming and error-prone process. The simplest solution seems to be a whiteboard with a visualization of all the different branches/tags/trunk in Subversion, with people drawing on it whenever something significant happens. But we're not averse to finding some kind of digital solution as well, stored centrally. Obviously both systems depend on people actually maintaining the model, but you'll always more or less have that. What do you use as best practice for keeping a clear view of all Subversion operations in the current sprint (or beyond)?

    Read the article

  • The cost of finalize in .Net

    - by Jules
    (1) I've read a lot of questions about IDisposable where the answers recommend not using Finalize unless you really need to, because of the processing time involved. What I haven't seen is how much this cost is and how often it's paid. Every millisecond? Second? Hour, day, etc.? (2) Also, it seems to me that Finalize is handy when it's not always known whether an object can be disposed. For instance, the framework Font class: a control can't dispose of it because it doesn't know whether the font is shared. The font is usually created at design time, so the user won't know to dispose of it; therefore the finalizer kicks in to finally get rid of it when there are no references left. Is that a correct impression?
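
    The cost is paid per finalizable object rather than on a schedule: an object with a finalizer survives at least one extra garbage collection and is processed on the finalizer thread, which is why the usual dispose pattern suppresses finalization once Dispose has run. A sketch of that pattern for reference:

        using System;

        public class ResourceHolder : IDisposable
        {
            private IntPtr _nativeHandle;   // stand-in for some unmanaged resource
            private bool _disposed;

            public void Dispose()
            {
                Dispose(true);
                // Tell the GC the finalizer is no longer needed, so the object
                // does not have to survive an extra collection and queue on the
                // finalizer thread.
                GC.SuppressFinalize(this);
            }

            ~ResourceHolder()
            {
                // Safety net: runs only if Dispose was never called.
                Dispose(false);
            }

            protected virtual void Dispose(bool disposing)
            {
                if (_disposed) return;
                if (disposing)
                {
                    // release managed resources here
                }
                // release unmanaged resources here
                _nativeHandle = IntPtr.Zero;
                _disposed = true;
            }
        }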

    Read the article

  • How to make IIS wait for WCF service gets ready?

    - by Kamarey
    I have a WCF service hosted in IIS 7. It takes some minutes until this service finishes loading its data and is ready for external calls; the data loads on an internal thread. The problem is that, from IIS's point of view, the service is ready as soon as it has been activated (by some call), and it processes requests without waiting for the data to be loaded. Is it possible to tell IIS that the service is still loading and make it unavailable for requests? It's no problem if such a request throws an exception.
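
    Since the question allows early callers to get an exception, one simple approach that needs no IIS configuration is a readiness guard inside the service itself: reject calls until the background load has finished. A sketch (the contract, loader and lookup are made-up placeholders):

        using System.ServiceModel;
        using System.Threading;

        [ServiceContract]
        public interface IProductService
        {
            [OperationContract]
            string GetData(int id);
        }

        public class ProductService : IProductService
        {
            private static volatile bool _ready;

            static ProductService()
            {
                // Load the data on a background thread, as in the question.
                var loader = new Thread(() =>
                {
                    LoadAllData();   // the slow part
                    _ready = true;
                });
                loader.IsBackground = true;
                loader.Start();
            }

            public string GetData(int id)
            {
                if (!_ready)
                    throw new FaultException("Service is still loading its data; try again later.");
                return Lookup(id);
            }

            private static void LoadAllData() { /* expensive load */ }
            private static string Lookup(int id) { return "..."; }
        }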

    Read the article

  • Puzzle: find the minimum number of weights

    - by avd
    I came across this question: given two weights, 1 and 3, you can weigh 1, 2 (as 3 - 1), 3 and 4 (as 3 + 1). Now find the minimum number of weights needed so that you can weigh anything from 1 to 1000. The answer is 1, 3, 9, 27, ... I want to know how you arrive at such a solution, i.e. powers of 3. What is the thought process? Source: http://classic-puzzles.blogspot.com/search/label/Google%20Interview%20Puzzles Solution: http://classic-puzzles.blogspot.com/2006/12/solution-to-shopkeeper-problem.html
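
    The thought process is balanced ternary: each weight can go on the pan opposite the object (count +1), be left off the scale (0), or go on the same pan as the object (count -1), so k weights of 1, 3, 9, ..., 3^(k-1) cover every integer up to (3^k - 1)/2. Four weights reach 40, and seven (1 through 729) reach 1093, which covers 1 to 1000. A small sketch that turns a target weight into that signed decomposition:

        using System.Collections.Generic;

        public static class BalancedTernary
        {
            // Expresses n as a sum of +/- powers of 3, i.e. which weights go on which pan.
            public static Dictionary<int, int> Decompose(int n)
            {
                var signByWeight = new Dictionary<int, int>(); // weight -> +1 (opposite pan) or -1 (same pan as object)
                int power = 1;
                while (n != 0)
                {
                    int digit = n % 3;
                    if (digit == 1)      { signByWeight[power] = +1; n -= 1; }
                    else if (digit == 2) { signByWeight[power] = -1; n += 1; } // digit -1 in balanced ternary
                    n /= 3;
                    power *= 3;
                }
                return signByWeight;
            }
        }

        // Example: Decompose(1000) yields { 1:+1, 27:+1, 243:+1, 729:+1 },
        // i.e. 1 + 27 + 243 + 729 = 1000.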

    Read the article

  • How do I show the output as it prints with AJAX/jQuery?

    - by Doug
    So I'm trying to understand this whole AJAX/jQuery thing. Right now, when I run this PHP script on its own, I have to sit and watch the wheel spin until it is done with the loop, and only then does the page load.

        while ($row = mysql_fetch_array($res)) {
            postcode_to_storm($row['Test']);

            $dom = new DOMDocument();
            @$dom->loadHTML($result);
            $xPath = new DOMXPath($dom);

            $failInvite = 'Rejected';
            $findFalse = strpos($result, $failInvite);
            if ($findFalse !== false) {   // strpos() can return 0, so compare against false explicitly
                $array[$i] = $row['Test'];
                echo $array[$i];
                $i++;
            }
        }

    Now, how do I use AJAX/jQuery to show each echoed $array[$i] as it is produced, instead of waiting for the whole process to complete?

    Read the article

  • Is there such a thing as a desktop application event aggregator, similar to that used in Prism?

    - by brownj
    The event aggregator in Prism is great, and allows loosely coupled communication between modules within a composite application. Does such a thing exist that allows the same thing to happen between standalone applications running on a user's desktop? I could imagine developing a solution that uses WCF with TCP binding and running inside Windows Process Activation Service. Client applications could subscribe or publish events to this service as required and it would ensure all other listeners get notified of events as appropriate. Using TCP would enable event messages to be pushed out to clients without the need for polling, ensuring messages are delivered very quickly. I can't help but think though that such a thing would already exist... Is anyone aware of something like this, or have any advice on how it may be best implemented?
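
    Nothing ships in the box that spans processes the way Prism's EventAggregator spans modules, but the WCF idea described above is a reasonable fit. A sketch of what the contracts might look like (all names are illustrative), using a duplex contract so a central broker can push events back to subscribers over net.tcp:

        using System.ServiceModel;

        // Callback contract: the broker calls this back on every subscriber.
        public interface IEventCallback
        {
            [OperationContract(IsOneWay = true)]
            void OnEvent(string topic, string payload);
        }

        // Broker contract hosted in one central service (e.g. under WAS with net.tcp).
        [ServiceContract(CallbackContract = typeof(IEventCallback))]
        public interface IEventBroker
        {
            [OperationContract(IsOneWay = true)]
            void Subscribe(string topic);

            [OperationContract(IsOneWay = true)]
            void Publish(string topic, string payload);
        }

        // Inside the broker implementation, each subscriber's channel is captured via
        // OperationContext.Current.GetCallbackChannel<IEventCallback>() and Publish
        // fans the event out to every stored callback channel.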

    Read the article

  • Reverse engineering a custom data file

    - by kerchingo
    At my place of work we have a legacy document management system that for various reasons is now unsupported by the developers. I have been asked to look into extracting the documents contained in this system to eventually be imported into a new 3rd party system. From tracing and process monitoring I have determined that the document images (mainly tiff files) are stored in a number of 1.5GB files. These files seem to be read from a specific offset and then written to a tmp file that is then served via a web app to the client, and then deleted. I guess I am looking for suggestions as to how I can inspect these large files that contain the tiff images, and eventually extract and write them to individual files.
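
    One way to start inspecting the blobs is to scan for the TIFF magic numbers ("II*\0" for little-endian, "MM\0*" for big-endian) and record every candidate offset; the gap to the next hit then bounds the previous image's length. A rough sketch that favours clarity over speed and assumes the TIFF data sits directly in the blob rather than inside a compressed or encrypted container:

        using System.Collections.Generic;
        using System.IO;

        public static class TiffScanner
        {
            // Returns the byte offsets at which a TIFF header appears to start.
            public static List<long> FindTiffOffsets(string path)
            {
                var offsets = new List<long>();
                byte[] window = new byte[4];

                using (var stream = new BufferedStream(File.OpenRead(path), 1 << 20))
                {
                    long position = 0;
                    int b;
                    while ((b = stream.ReadByte()) != -1)
                    {
                        // Slide a 4-byte window across the file.
                        window[0] = window[1];
                        window[1] = window[2];
                        window[2] = window[3];
                        window[3] = (byte)b;
                        position++;

                        if (position >= 4 && IsTiffHeader(window))
                            offsets.Add(position - 4);
                    }
                }
                return offsets;
            }

            private static bool IsTiffHeader(byte[] w)
            {
                // "II*\0" (Intel byte order) or "MM\0*" (Motorola byte order)
                return (w[0] == 0x49 && w[1] == 0x49 && w[2] == 0x2A && w[3] == 0x00)
                    || (w[0] == 0x4D && w[1] == 0x4D && w[2] == 0x00 && w[3] == 0x2A);
            }
        }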

    Read the article

  • How to have partial incremental synchronizations based on a GUID?

    - by Gonçalo Veiga
    I need to synchronize a SQL Server database to Oracle through an Oracle Transparent Gateway. The synchronization is performed in batches, so I need to get the next set of data from the point where I left off. The problem I'm having is that the only field I have in the source to help me is a GUID. If it were a number, I could just order by it, keep the last one processed, and restart the process by fetching the records beyond my recorded number. This won't work with a GUID. Any ideas?

    Read the article

  • AJAX form sections - how to pass url of next stage of form

    - by dan727
    Hi, I've got a multi-part form (in a PHP MVC setup) which I have working correctly without JavaScript enhancement. I'm starting to add the AJAX form-handling code which will handle each stage of a form submission, validating/saving data etc., before using AJAX to load the next stage of the form. I'm wondering how best to pass the URL of the next stage to the current form being processed, so that my jQuery form-handling code can process the current form and then load the next part via AJAX. The form "action" is different from the URL of the next stage of the form; what do you think would be good practice here? I was thinking about either appending the URL of the next stage to the form action URL via a query string, and then using JavaScript to extract this URL when the form is successfully processed, or passing it via a hidden form element. I'm not sure what other client-side options I have here. Any thoughts?

    Read the article

  • Ajax: Appending list-item at the top

    - by ptamzz
    I'm calling a PHP processing file using AJAX and appending the returned HTML using the code below:

        $.ajax({
            type: "POST",
            url: "proc/process.php",
            data: dataString,
            cache: false,
            success: function(html) {
                $("ul#lists").append(html);
                $("ul#lists li:last").fadeIn("slow");
                $("#flash").hide();
            }
        });

    It appends an <li></li> item at the end of ul#lists. I want the returned list item <li></li> at the top of the list instead of being appended at the end. How do I do that?

    Read the article
