Search Results

Search found 24382 results on 976 pages for 'tutor process procedure f'.


  • Invoking different methods on threads

    - by Kraken
    I have a main process. It creates 10 threads (say), and then what I want to do is the following:

        while (required) {
            Thread t = new Thread(new ClassImplementingRunnable());
            t.start();
            counter++;
        }

    Now I have the list of these threads, and for each thread I want to run the same set of steps, so I put that implementation in the run method of ClassImplementingRunnable. After the threads have finished executing, I want to wait for all of them to stop and then invoke them again, but this time serially rather than in parallel. For this I join each thread, to wait for it to finish, but after that I am not sure how to invoke them again and run that piece of code serially. Can I do something like:

        for (each thread) {
            t.reinvoke(); // how can I do that?
            t.doThis();   // also, where does doThis() go, given that my ClassImplementingRunnable is an inner class?
        }

    Also, I want to reuse the same threads, i.e. I want them to continue from where they left off, but in a serial manner. I am not sure how to go about that last piece of pseudocode. Kindly help. Working with Java.

    Read the article

  • Framework or tool for "distributed unit testing"?

    - by user262646
    Is there any tool or framework that makes it easier to test distributed software written in Java? My system under test is peer-to-peer software, and I'd like to perform testing using something like PNUnit, but with Java instead of .NET. The system under test is a framework I'm developing to build P2P applications. It uses JXTA as a lower subsystem, trying to hide some of its complexities. It's currently an academic project, so I'm pursuing simplicity at this point. In my test, I want to demonstrate that a peer (running in its own process, possibly with multiple threads) can discover another one (running in another process or even on another machine) and that they can exchange a few messages. I'm not using mocks or stubs because I need to see both sides working simultaneously. I realize that some kind of coordination mechanism is needed, and PNUnit seems to be able to do that. I've bumped into some initiatives like Pisces, which "aims to provide a distributed testing environment that extends JUnit, giving the developer/tester an ability to run remote JUnits and create complex test suites that are composed of several remote JUnit tests running in parallel or serially", but this project and a few others I have found seem to be long dead.

    Read the article

  • In .NET, how do you send multiple arguments into a DOS command prompt?

    - by donde
    I am trying to execute DOS commands from ASP.NET 2.0. What I have now calls a BAT file which, in turn, calls a CMD file. It works (with the end result being that a file gets FTP'ed). However, I'd like to drop the BAT and CMD files and run everything in .NET. What is the format for sending multiple arguments into the command window? Here is what I have now. The .NET code:

        System.Diagnostics.Process proc = new System.Diagnostics.Process();
        proc.EnableRaisingEvents = false;
        proc.StartInfo.FileName = "C:\\MyBat.BAT";
        proc.Start();
        proc.WaitForExit();

    The BAT file looks like this (all it does is run the CMD file):

        ftp.exe -s:C:\MyCMD.cmd

    And here is the content of the CMD file:

        open <my host>
        <my user name>
        <my pw>
        quote site cyl pri=1 sec=1 lrecl=1786 blksize=0 recfm=fb retpd=30
        put C:\MyDTLFile.dtl 'MyDTLFile.dtl'
        quit
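
    For the question as asked: multiple arguments are simply concatenated, space-separated, into ProcessStartInfo.Arguments (quote any argument that itself contains spaces). A minimal sketch of dropping the BAT file and launching ftp.exe directly from .NET - the paths are the ones from the question, and the output redirect is optional:

        using System.Diagnostics;

        class FtpRunner
        {
            static void Main()
            {
                Process proc = new Process();
                proc.StartInfo.FileName = "ftp.exe";
                // Multiple arguments are space-separated in one Arguments string.
                proc.StartInfo.Arguments = "-s:C:\\MyCMD.cmd";
                proc.StartInfo.UseShellExecute = false;
                proc.StartInfo.RedirectStandardOutput = true;   // capture ftp's output for logging

                proc.Start();
                string output = proc.StandardOutput.ReadToEnd();
                proc.WaitForExit();
            }
        }

    The CMD file stays as it is; only the intermediate BAT file goes away.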

    Read the article

  • Sign out of messenger on Windows 8

    - by jmlumpkin
    I just installed the Windows 8 Consumer Preview. Going through the default procedure, I let it use my Xbox Live account to create a user. When I then went and turned on my Xbox, it notified me that I was logged into Messenger in two locations. I went back to Windows 8 and turned my Live account into a local account on that machine. But when I turned the Xbox on again, I got the same message. Is there a way to just 'sign out of Messenger' on Windows 8? Or is there somewhere I can even see where I am signed in?

    Read the article

  • Compile C++ in Visual Studio

    - by Kasun
    Hi all. I use this method to compile a C++ file with the Visual Studio compiler (cl.exe), but even when I provide the correct file it returns false. Can anyone help me? This is the class, called CL:

        class CL
        {
            private const string clexe = @"cl.exe";
            private const string exe = "Test.exe", file = "test.cpp";
            private string args;

            public CL(String[] args)
            {
                this.args = String.Join(" ", args);
                this.args += (args.Length > 0 ? " " : "") + "/Fe" + exe + " " + file;
            }

            public Boolean Compile(String content, ref string errors)
            {
                if (File.Exists(exe)) File.Delete(exe);
                if (File.Exists(file)) File.Delete(file);
                File.WriteAllText(file, content);

                Process proc = new Process();
                proc.StartInfo.UseShellExecute = false;
                proc.StartInfo.RedirectStandardOutput = true;
                proc.StartInfo.RedirectStandardError = true;
                proc.StartInfo.FileName = clexe;
                proc.StartInfo.Arguments = this.args;
                proc.StartInfo.CreateNoWindow = true;
                proc.Start();

                //errors += proc.StandardError.ReadToEnd();
                errors += proc.StandardOutput.ReadToEnd();
                proc.WaitForExit();

                bool success = File.Exists(exe);
                return success;
            }
        }

    This is my button click event:

        private void button1_Click(object sender, EventArgs e)
        {
            string content = "#include <stdio.h>\nmain(){\nprintf(\"Hello world\");\n}\n";
            string errors = "";
            CL k = new CL(new string[] { });
            if (k.Compile(content, ref errors))
                Console.WriteLine("Success!");
            else
                MessageBox.Show("Errors are : ", errors);
        }
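
    One common reason File.Exists(exe) stays false is that cl.exe only works when the Visual C++ environment variables (PATH, INCLUDE, LIB) are set, which happens inside a Visual Studio command prompt but not in a plain process started from a WinForms app. A minimal sketch of one workaround, assuming a Visual Studio 2008 install path (an assumption; adjust to your version and location), is to run the compile through cmd.exe after calling vcvarsall.bat:

        using System.Diagnostics;

        class ClRunner
        {
            // This path is an assumption for illustration; point it at your own VS install.
            const string VcVarsAll = @"C:\Program Files\Microsoft Visual Studio 9.0\VC\vcvarsall.bat";

            public static string CompileWithEnvironment(string clArguments)
            {
                // cmd /C runs one command line and exits; "&&" chains vcvarsall and cl in the
                // same environment. The extra outer quotes are cmd's usual workaround for a
                // quoted path; adjust the quoting if your arguments contain spaces.
                var psi = new ProcessStartInfo("cmd.exe")
                {
                    Arguments = "/C \"\"" + VcVarsAll + "\" x86 && cl.exe " + clArguments + "\"",
                    UseShellExecute = false,
                    RedirectStandardOutput = true,
                    CreateNoWindow = true
                };

                using (var proc = Process.Start(psi))
                {
                    // Read before WaitForExit so the output pipe cannot fill up and block cl.exe.
                    string output = proc.StandardOutput.ReadToEnd();
                    proc.WaitForExit();
                    return output;
                }
            }
        }

    Checking proc.ExitCode is another cheap signal to return alongside the File.Exists test, so the compiler's diagnostics are not silently lost.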

    Read the article

  • How to set buffer size in client-server app using sockets?

    - by nelly
    First of all, I am new to networking, so I may say dumb things here. Consider a client-server application using sockets (.NET with C#, if that matters). The client sends some data, the server processes it and sends back a string. The client sends some other data; the server processes it, queries the DB and sends back several hundred items from the database. The client sends some other type of data and the server notifies some other clients. My question is how to set the buffer size correctly for read/write operations. Should I do something like this: byte[] buff = new byte[client.ReceiveBufferSize]? I am thinking of something like the following (the client sends data to the server, and the server follows the same pattern):

        byte[] bytesToSend = new byte[2048]   // 2048 as the standard size for any command sent by the client
        // bytes 0..1    -> command type
        // bytes 2..2047 -> command parameters
        byte[] bytesToReceive = new byte[8] / new byte[64] / new byte[8192]   // switch (command type)

    But what happens when a client is notified by the server without having sent any data? What is the correct way to accomplish what I am trying to do? Thanks for reading.
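
    One common alternative to a fixed 2048-byte command is length-prefixed framing: send a small fixed header (say a 2-byte command type plus a 4-byte payload length) and then exactly that many payload bytes. A minimal sketch, assuming a connected NetworkStream (all names here are illustrative):

        using System;
        using System.IO;
        using System.Net.Sockets;

        static class Framing
        {
            // Writes one message: [2-byte command][4-byte payload length][payload bytes].
            public static void WriteMessage(NetworkStream stream, short command, byte[] payload)
            {
                byte[] header = new byte[6];
                BitConverter.GetBytes(command).CopyTo(header, 0);
                BitConverter.GetBytes(payload.Length).CopyTo(header, 2);
                stream.Write(header, 0, header.Length);
                stream.Write(payload, 0, payload.Length);
            }

            // Reads one message; the receive buffer no longer has to guess the message size.
            public static byte[] ReadMessage(NetworkStream stream, out short command)
            {
                byte[] header = ReadExactly(stream, 6);
                command = BitConverter.ToInt16(header, 0);
                int length = BitConverter.ToInt32(header, 2);
                return ReadExactly(stream, length);
            }

            // TCP is a byte stream, so a single Read may return fewer bytes than requested; loop until done.
            static byte[] ReadExactly(NetworkStream stream, int count)
            {
                byte[] buffer = new byte[count];
                int offset = 0;
                while (offset < count)
                {
                    int read = stream.Read(buffer, offset, count - offset);
                    if (read == 0) throw new IOException("Connection closed mid-message.");
                    offset += read;
                }
                return buffer;
            }
        }

    This also covers the server-initiated notification case: the client simply keeps reading frames, and each header says exactly how many bytes follow.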

    Read the article

  • Django | passing form values

    - by MMRUser
    I want to create a user sign-up process that requires two different forms with the same data: one (the 1st form) is for filling out the data and the other (the 2nd form) is for displaying the filled-in data as a summary (before actually saving it), so the user can review what he/she has entered. My problem is how to pass the 1st form's data into the 2nd one. I have used the basic Django form manipulation mechanism and passed the form field values to the next form using Django template tags:

        if request.method == 'POST':
            form = Users(request.POST)
            if form.is_valid():
                cd = form.cleaned_data
                try:
                    name = cd['fullName']
                    email = cd['emailAdd']
                    password1 = cd['password']
                    password2 = cd['password2']
                    phoneNumber = cd['phoneNumber']
                    return render_to_response('signup2.html', {'name': name, 'email': email, 'password1': password1, 'password2': password2, 'phone': phone, 'pt': phoneType})
                except Exception, ex:
                    return HttpResponse("Error %s" % str(ex))

    And in the second form I just display those field values using tags, and also use hidden fields in order to submit the form with the values, like this:

        <label for="">Email:</label> {{ email }}
        <input type="hidden" id="" name="email" class="width250" value="{{ email }}" readonly />

    It works nicely on the surface, but the real problem is that if someone views the HTML source, he can simply read the password; even hackers can get through this easily. So how do I avoid this issue? I don't want to use the Django session, since this is just a simple sign-up process and no other interactions are involved. Thanks.

    Read the article

  • A friendly elaboration on how iceweasel misses, as recently requested in a post

    - by v3x3n
    I would like to spell out what you asked about, the ways iceweasel falls short, in the hope that you'll consider the answer valid, given that a default package should run efficiently for a novice user, especially when it forms their first impression of a new piece of software, browser or package. First let me say that I am very grateful for all Debian has done and continues to do for its user base, and I am not complaining, only hoping that mentioning this will eventually help improve the disappointing experience iceweasel gave me in multiple ways right from the start. I am sure a tremendous amount of hard work went into making it work as well as it does, and tremendous effort continues to go into iceweasel like the rest of the package and distros such as this. Having said that, I aim to keep my examples as simple as possible, while explaining them in enough detail not to leave out important information. (Thanks for understanding if I sound like a novice with Debian; I am, but I continue to learn and advance by trial and error each day, and of course with the wonderful help of all the great minds available here and there.)

    My bone to pick with iceweasel (unfortunately, as I dislike being a complainer when so much has been done for us end users) is mainly three problems I noticed immediately, during completely normal browsing by an average user. First, as soon as I started a task with iceweasel (going directly to Google Images, searching for a few web-sized images, then saving a few, no more than 800 to 900 pixels in dimensions, a typical procedure I would imagine), I experienced horrendous, recurring freezes and lag that more than once nearly forced me to kill the process. It compounded with each attempt, so that a few attempts later even a single image save locked up the entire browser, and I ultimately gave up on this simple task with some aggravation, wondering whether I had done something wrong on my end before or during installation. Secondly, visiting video sites, in this case YouTube, will not play any video feed whatsoever, and again I experienced a slight freeze that eventually dissipated as long as I didn't press play or load another video. Very annoying, and possibly related to my first issue (which could come down to my own lack of knowledge of the setup and configuration, but that troubles me if so). Lastly, as another user has already stated (which is what led you to ask what exactly the problem was in the first place), the final straw that broke the weasel's back, and led me here to look for an updated version of Firefox, is that iceweasel (apparently a build of Firefox 17.x.x) leaves the user out of luck when installing their favourite or most useful add-ons or plugins, the ones that aid their work or normal usage online. As the other user clearly stated, there are almost zero options and barely any luck finding plugins compatible with such an outdated version. I can only imagine that if, as a novice or standard user of Linux and a beginner with Debian (which I am all for, by the way!), I am discovering major usability issues that hinder normal use right off the bat, there is probably a bundle or more of security vulnerabilities that would unravel if I gave Mr Hommey's package any further attention or continued to use it.
    This is a deal breaker on any account, unless all three of the issues stated are due to a very newbie-ish setting or lack thereof; and if that is the case, it was not properly explained for beginners anywhere I could find while following installation tutorials such as the LVM installers, which I felt lacked a clear explanation of why and how, rather than a fix for a potentially broken out-of-the-box installation. I really think that is beside the point in any case, as there is plenty of step-by-step support for that kind of thing; I'm just saying it is very challenging alongside a job with deadlines, and the need to learn a better and more customizable system than the one I became a developer on via Windows (OK, OK, I know that's a huge mistake, and now I am way off the original topic, but anyway). Thanks very much for taking the time to read this, and I hope it honestly gives you a few good and valid reasons why users want and seek a way around iceweasel altogether. Finally, if I am missing something vital that renders everything I've stated invalid or useless, please feel free to shove that elephant in the room my direction! Thanks for all you do, as I am sure it is very useful and dedicated work, you being a Debian maintainer and super user! Much love for intelligence, freedom and sharing the secrets of the web! May it remain unregulated and a good place to grow and learn for generations to come! Stay true! Take care now! -v3x (v3x @ gmx .com)

    Read the article

  • How can I write to a single xml file from two programs at the same time?

    - by Tom Bushell
    We recently started working with XML files, after many years of experience with the old INI files. My coworker found some CodeProject sample code that uses System.Xml.XmlDocument.Save. We are getting exceptions when two programs try to write to the same file at the same time: System.IO.IOException: The process cannot access the file 'C:\Test.xml' because it is being used by another process. This seems obvious in hindsight, but we had not anticipated it because accessing INI files via the Win32 API does not have this limitation. I assume there's some arbitration done by the Win32 calls that works at a higher level than the XmlDocument.Save method. I'm hoping there are higher-level XML routines somewhere in the .NET library that work similarly to the Win32 functions, but I don't know where to start looking. Or maybe we can set up our file access permissions to allow multiple programs to write to the same file? Time is short (as in almost all SW projects), and if we can't find a solution quickly, we'll have to hold our noses and go back to INI files.
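
    XmlDocument.Save has no built-in arbitration, but the two programs can arbitrate themselves. A minimal sketch, assuming both processes agree on a mutex name (the name below is only an example), serializes the saves with a system-wide named mutex:

        using System.Threading;
        using System.Xml;

        class SharedXmlWriter
        {
            // Both processes must use the same name; the "Global\" prefix makes it visible across sessions.
            const string MutexName = @"Global\TestXmlFileLock";

            public static void SaveExclusively(XmlDocument doc, string path)
            {
                using (var mutex = new Mutex(false, MutexName))
                {
                    mutex.WaitOne();            // block until the other process has finished its save
                    try
                    {
                        doc.Save(path);
                    }
                    finally
                    {
                        mutex.ReleaseMutex();   // always release, even if Save throws
                    }
                }
            }
        }

    Note this only serializes the writes; if both programs need to preserve each other's changes, each one still has to re-read the file inside the lock before saving.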

    Read the article

  • How can I lock a directory on a Debian server with nginx installed?

    - by Tin Aung Linn
    I have tried so many methods and have been stuck for hours with this. I edited /etc/nginx/nginx.conf and wrote these lines:

        location /home/user/domains/example.com/public_html/lockfolder/ {
            auth_basic "Restricted";
            auth_basic_user_file /home/user/domains/example.com/.htpasswd;
        }

    I used crypt(3) encryption to make the password with the mkpasswd command. Then I followed the usual procedure, putting user:encryptedpasswd in .htpasswd. But things do not work as described. Let me know if anyone knows how I can configure this correctly for my purpose! Thank you.

    Read the article

  • "Ghost" values in PHP/Smarty.

    - by Kyle Sevenoaks
    I've been working on a site for a while, changing the layout and skin of a webshop checkout process. I've noticed that if you go all the way through the process to the last page, then click the link to go back to the view-products page, the delivery method price displays underneath the navigation buttons until you refresh, at which point it goes away again. I've downloaded both sources from the browser (Chrome, but this bug applies to all browsers) and used a file-difference tool to display the differences, the result being only:

        < error.html vs > normal.html
        34c34
        < <link href="gzip.php?file=167842c1496093fbcd391b41cf7b03da.css&time=1272272181" rel="Stylesheet" type="text/css"/>
        ---
        > <link href="gzip.php?file=167842c1496093fbcd391b41cf7b03da.css&time=1272272348" rel="Stylesheet" type="text/css"/>

    which is just the way it zips up the CSS stylesheets (AFAIK). Has anyone ever encountered such a problem, or anything similar? Screenshots of the normal and error states were attached. I can't even hazard a guess as to what is causing this, at all. I've searched all over Google and come up with nothing. What could be causing this? The site in question is Euroworker.no; the HTML is on Pastebin. The Smarty snippet:

        {if !$CANONICAL}
            {canonical}{self}{/canonical}
        {/if}
        <link rel="canonical" href="{$CANONICAL}" />
        <!-- Css includes -->
        {includeCss file="frontend/Frontend.css"}
        {includeCss file="backend/stat.css"}
        {if {isRTL}}
            {includeCss file="frontend/FrontendRTL.css"}
        {/if}
        {compiledCss glue=true nameMethod=hash}
        <!--[if lt IE 8]>
            <link href="stylesheet/frontend/FrontendIE.css" rel="Stylesheet" type="text/css"/>
            {if $ieCss}
                <link href="{$ieCss}" rel="Stylesheet" type="text/css"/>
            {/if}
        <![endif]-->

    Thanks.

    Read the article

  • Cisco: changing VTP mode server/client to transparent

    - by J. Smith
    I have around 40 L2 switches (2960, 3560, 3500, 3750) running in VTP client mode and one L3 switch (6500) running in VTP server mode, all connected together. I would like to switch everything to VTP transparent mode without any interruption of service. I've already tested this on a proof-of-concept setup with 3 switches (Catalyst 2950, 3560 and 3750). Using serial and SSH connections it seems to work, but I'm not sure it is representative enough compared to the real network. Knowing that VTP pruning is enabled, I am wondering what the best procedure would be for making the change. I've read that we could lose connectivity by keeping it enabled. Should I change the server or the clients first? Is it important to change the VTP domain and VTP pruning parameters?

    Read the article

  • How do I redirect standard output to a file in Perl? [closed]

    - by rockyurock
    I want to send standard output to the file "my_output.txt" but it fails. Here's the output:

        inside value loop
        ------------------------------------------------------------
        Server listening on UDP port 5001
        Receiving 1470 byte datagrams
        UDP buffer size: 108 KByte (default)
        ------------------------------------------------------------
        [  3] local 192.168.16.2 port 5001 connected with 192.168.16.1 port 3189
        [ ID] Interval       Transfer     Bandwidth       Jitter   Lost/Total Datagrams
        [  3]  0.0- 5.0 sec  2.14 MBytes  3.61 Mbits/sec  0.369 ms    0/ 1528 (0%)
        inside value loop3
        clue1
        clue2
        inside value loop4
        one iperf completed
        ***************************************

    When I enable the "local *STDOUT;" line in the code below, I can see the above output on the command prompt (provided, of course, that the server is sending some data):

        my $file = 'my_output.txt';
        use Win32::Process;
        print "inside value loop\n";

        # redirect stdout to a file
        #local *STDOUT;
        open STDOUT, '>', $file or die "can't redirect STDOUT to <$file> $!";

        Win32::Process::Create(my $ProcessObj,
            "D:\\IOT_AUTOMATION_UTILITY\\_SATURDAY_09-04-10\\adb_cmd.bat",
            "adb shell /data/app/iperf -u -s -p 5001",
            0, NORMAL_PRIORITY_CLASS, ".") || die ErrorReport();

        #$alarm_time = $IPERF_RUN_TIME+10; #20sec
        #$ProcessObj->Wait(40);
        #print "inside value loop2\n";
        #sleep $alarm_time;
        sleep 40;
        $ProcessObj->Kill(0);

        sub ErrorReport {
            print Win32::FormatMessage( Win32::GetLastError() );
        }

    Read the article

  • Animate screen while loading textures

    - by Omega
    My RPG-like game has random battles. When the player enters a random battle, my game needs to load the textures used within that battle (animated monsters, animations, etc.). There are quite a lot of textures, and they are rather big (the battles are very graphics-intensive). Loading them takes significant time, and while it is happening the whole screen freezes. The game's map freezes, and the wait is long enough that I personally find it annoying. I can't afford to preload the textures because, after doing some math, I realized: if I preload all the textures at the beginning of the game, the application will definitely crash; if I preload the textures used in a specific map when the player enters that map, the application is very likely to crash as well. I can only afford to load the textures when I need them and dispose of them as soon as the battle ends. I'd prefer not to use a "loading screen" image because it affects my game's design and concept, so I want to avoid that approach. If I could run some kind of animation while loading the textures, that would be great, which leads to my question: is that possible? What kind of animation, you ask? Well, remember how Final Fantasy used to distort the screen while apparently loading the textures? Something like that. But distorting is quite a time-consuming process as well, so maybe just a cool frame-by-frame animation or something. While writing this, I realized that I could make small pauses between textures (there are multiple textures) and, during such pauses, update the screen to reflect the animation's state. However, this is unlikely to work well, because each texture is 2048x2048, so the animation would refresh at a rather laggy (and annoying) rate. I'd prefer to avoid this as well.
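
    The question does not name a platform, so purely as an illustration of the usual shape of the fix, here is a C# sketch: decode the texture files on a background thread, and have the render loop upload one decoded texture to the GPU per frame in between drawing the loading animation. The helper names are placeholders for engine-specific calls:

        using System.Collections.Generic;
        using System.Threading;

        // Decode texture files off the render thread, then upload them one per frame
        // on the render thread, so the loading animation keeps running smoothly.
        class BattleLoader
        {
            readonly Queue<byte[]> decoded = new Queue<byte[]>();
            readonly object gate = new object();
            volatile bool finished;

            public void StartLoading(IEnumerable<string> texturePaths)
            {
                new Thread(() =>
                {
                    foreach (var path in texturePaths)
                    {
                        byte[] pixels = DecodeImageFile(path);   // CPU-only work, safe off the render thread
                        lock (gate) decoded.Enqueue(pixels);
                    }
                    finished = true;
                }) { IsBackground = true }.Start();
            }

            // Call once per frame from the render loop, after drawing the loading animation.
            public void PumpOneUpload()
            {
                byte[] pixels = null;
                lock (gate) { if (decoded.Count > 0) pixels = decoded.Dequeue(); }
                if (pixels != null)
                    UploadToGpu(pixels);   // must run on the thread that owns the graphics context
            }

            public bool Done { get { lock (gate) return finished && decoded.Count == 0; } }

            // Placeholders for the engine-specific pieces.
            static byte[] DecodeImageFile(string path) { /* ... */ return new byte[0]; }
            static void UploadToGpu(byte[] pixels) { /* ... */ }
        }

    The expensive part (file I/O and image decoding) happens in the background; only the per-texture GPU upload is paid on the render thread, one texture per frame.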

    Read the article

  • Subscript/Superscript Hotkey for Excel 2010 Macro?

    - by advs89
    Background In Excel 2010, for some ridiculous reason, there is no built-in hotkey (or even a button on the toolbar) for subscripting/superscripting text within a text cell. You can, however, highlight the text, right-click the selection, click format, and then check the [x] subscript or [x] superscript checkbox. Question Are there any kinds of excel macros or workarounds to map two keyboard hotkeys to the subscript and superscript keys, respectively? (It should only be, like, two lines of code - one for the event handler and one for the actual procedure call... I would write one myself but my VBA is rusty, at best, and I am pretty confident there is probably already some kind of solution, despite my inability to find one via search engine) Thanks for any help you can provide!

    Read the article

  • Do I lose the benefits of macro recording if I develop Excel apps in Visual Studio?

    - by DanM
    I've written lots of Excel macros in the past using the following development process: Record a macro. Open the VBA editor. Edit the macro. I'm now experimenting with a Visual Studio 2008 "Excel 2007 Add-In" project (C#), and I'm wondering if I will have to give up this development process. Questions: I know I can still record macros using Excel, but is there any way to access the resulting code in Visual Studio? Or do I just have to copy and paste then C#-ize it? What happens with my "Personal Macro Workbook"? Can I use the macros I have stored in there within C#? Or is there some way to convert them to C#? If there is some support for opening and editing VBA macros in Visual Studio, can you provide a very brief summary of how it works or point me to a good reference? Do you have any other tips for transitioning from writing macros in VBA using Excel's built-in editor to writing them in C# with Visual Studio?

    Read the article

  • Detect Application Shutdown in C#/.NET?

    - by Michael Pfiffer
    I am writing a small console application (it will be run as a service) that basically starts a Java app when it is running, shuts itself down if the Java app closes, and shuts down the Java app if it closes. I think I have the first two working properly, but I don't know how to detect when the .NET application is shutting down so that I can shut down the Java app before that happens. A Google search just returns a bunch of results about detecting Windows shutting down. Can anyone tell me how I can handle that part, and whether the rest looks fine?

        namespace MinecraftDaemon
        {
            class Program
            {
                public static void LaunchMinecraft(String file, String memoryValue)
                {
                    String memParams = "-Xmx" + memoryValue + "M" + " -Xms" + memoryValue + "M ";
                    String args = memParams + "-jar " + file + " nogui";
                    ProcessStartInfo processInfo = new ProcessStartInfo("java.exe", args);
                    processInfo.CreateNoWindow = true;
                    processInfo.UseShellExecute = false;

                    try
                    {
                        using (Process minecraftProcess = Process.Start(processInfo))
                        {
                            minecraftProcess.WaitForExit();
                        }
                    }
                    catch
                    {
                        // Log Error
                    }
                }

                static void Main(string[] args)
                {
                    Arguments CommandLine = new Arguments(args);

                    if (CommandLine["file"] != null && CommandLine["memory"] != null)
                    {
                        // Launch the Application
                        LaunchMinecraft(CommandLine["file"], CommandLine["memory"]);
                    }
                    else
                    {
                        LaunchMinecraft("minecraft_server.jar", "1024");
                    }
                }
            }
        }
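
    A minimal sketch of one way to hook the console app's own shutdown, assuming the Process handle for the Java app is kept somewhere the handlers can reach it (the field name below is an assumption). Note that a hard kill of the daemon, and some service-stop paths, will bypass these handlers:

        using System;
        using System.Diagnostics;

        class Program
        {
            static Process minecraftProcess;   // assumed to be the handle created in LaunchMinecraft

            static void Main(string[] args)
            {
                // Runs when the process exits normally (Main returns, Environment.Exit, AppDomain unload).
                AppDomain.CurrentDomain.ProcessExit += (s, e) => StopJavaApp();

                // Runs on Ctrl+C / Ctrl+Break in a console window.
                Console.CancelKeyPress += (s, e) => StopJavaApp();

                // ... existing launch logic goes here ...
            }

            static void StopJavaApp()
            {
                try
                {
                    if (minecraftProcess != null && !minecraftProcess.HasExited)
                        minecraftProcess.Kill();
                }
                catch (Exception)
                {
                    // the Java process may already be gone; nothing more to do on the way out
                }
            }
        }

    If this does end up installed as a real Windows service, the ServiceBase.OnStop override is the more natural place for the same cleanup.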

    Read the article

  • Many users, many CPUs, no delays. Good for cloud?

    - by Eric
    I wish to set up a CPU-intensive, time-critical query service for users on the internet. A usage scenario is described below. Is cloud computing the right way to go for such an implementation? If so, which cloud vendor(s) cater to this type of application? I ask specifically in terms of:

      1) Pricing.
      2) Latency resulting from: slow CPUs, instance creation, JIT compiles, etc.; internal management and communication of processes inside the cloud (e.g. a queuing process and a calculation process); communication between the cloud and the end user.
      3) Ease of deployment.

    The usage scenario I am expecting:

      - A typical user sends a query (XML of around 1 KB) once every 30 seconds on average.
      - Each query requires a numerical computation with an average time of 0.2 sec and a max time of 1 sec on a 1 GHz Pentium. The computation requires no data other than the query itself and is performed by the same piece of code each time.
      - The delay a user experiences between sending a query and receiving a response should on average be no more than 2 seconds, and in general no more than 5 seconds.
      - A background save of the response to a DB should occur (not time-critical).
      - There can be up to 30000 simultaneous users, i.e. on average 1000 queries a second, each requiring an average 0.2 sec calculation, which would necessitate around 200 CPUs.

    Currently I'm looking at GAE Java (for quicker deployment and less IT hassle) and EC2 (for speed and price optimization) as options. Where can I learn more about the right way to set up such a system (past threads, blogs, books, etc.)? BTW, if my terminology is wrong or confusing, please let me know. I'd greatly appreciate any help.

    Read the article

  • MPI Odd/Even Compare-Split Deadlock

    - by erebel55
    I'm trying to write an MPI version of a program that runs an odd/even compare-split operation on n randomly generated elements. Process 0 should generate the elements and send nlocal of them to each of the other processes (keeping the first nlocal for itself). From there, process 0 should print out its results after running the CompareSplit algorithm, then receive the results of the other processes' runs of the algorithm, and finally print out the results it has just received. I have a large chunk of this done already, but I'm getting a deadlock that I can't seem to fix. I would greatly appreciate any hints. Here is my code: http://pastie.org/3742474. Right now I'm pretty sure the deadlock is coming from the Send/Recv at lines 134 and 151. I've tried changing the Send to use "tag" instead of myrank for the tag parameter, but when I did that I just kept getting "MPI_ERR_TAG: invalid tag" for some reason. Obviously I would also run the algorithm within process 0, but I took that part out for now until I figure out what is going wrong. Any help is appreciated.

    Read the article

  • Need a Java based interruptible timer thread

    - by LambeauLeap
    I have a main program which runs a script on the target device (a smartphone) and sits in a while loop waiting for stdout messages. In this particular case, some of the heartbeat messages on stdout can be spaced almost 45 seconds to a minute apart. Something like:

        stream = device.runProgram(RESTORE_LOGS, new String[] {});
        stream.flush();
        String line = stream.readLine();
        while (line.compareTo("") != 0) {
            reporter.commentOnJob(jobId, line);
            line = stream.readLine();
        }

    So I want to be able to start a new interruptible timer thread, with the required sleep window, after reading a line from stdout. When I manage to read a new line, I want to be able to interrupt/stop that timer (I'm having trouble killing the process), handle the new line of stdout text, and restart the timer. And in the event that I am not able to read a line within the timer window (say 45 seconds), I want a way to get out of the while loop as well. I already tried the thread.run / thread.interrupt approach, but I'm having trouble killing and starting a new thread. Is this the best way, or am I missing something obvious?

    Read the article

  • Execute JavaScript from within a C# assembly

    - by ScottKoon
    I'd like to execute JavaScript code from within a C# assembly and have the results of the JavaScript code returned to the calling C# code. It's easier to define the things I'm not trying to do: I'm not trying to call a JavaScript function on a web page from my code-behind; I'm not trying to load a WebBrowser control; and I don't want the JavaScript to perform an AJAX call to a server. What I want to do is write unit tests in JavaScript and have the unit tests output JSON; even plain text would be fine. Then I want a generic C# class/executable that can load the file containing the JS, run the JS unit tests, scrape/load the results, and return a pass/fail with details during a post-build task. I think it's possible using the old ActiveX ScriptControl, but it seems like there ought to be a .NET way to do this without using Silverlight, the DLR, or anything else that hasn't shipped yet. Anyone have any ideas? Update, from Brad Abrams' blog:

        namespace Microsoft.JScript.Vsa
        {
            [Obsolete("There is no replacement for this feature. Please see the ICodeCompiler documentation for additional help. http://go.microsoft.com/fwlink/?linkid=14202")]

    Clarification: we have unit tests for our JavaScript functions that are written in JavaScript using the JSUnit framework. Right now, during our build process, we have to manually load a web page and click a button to ensure that all of the JavaScript unit tests pass. I'd like to be able to execute the tests during the post-build process when our automated C# unit tests are run, report the success/failure alongside our C# unit tests, and use them as an indicator of whether or not the build is broken.
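
    One .NET-only route from that era is the JScript .NET CodeDom compiler: compile the JS source into an in-memory assembly and invoke a known entry point via reflection. JScript .NET is not a full browser-JavaScript engine, so treat this as a rough sketch (the class and method names are assumptions about how the test file is written), not a drop-in JSUnit runner:

        using System;
        using System.CodeDom.Compiler;
        using System.Reflection;
        using Microsoft.JScript;   // reference Microsoft.JScript.dll

        class JsRunner
        {
            public static object RunTests(string jsSource, string className, string methodName)
            {
                var provider = new JScriptCodeProvider();
                var options = new CompilerParameters { GenerateInMemory = true };

                CompilerResults results = provider.CompileAssemblyFromSource(options, jsSource);
                if (results.Errors.HasErrors)
                    throw new InvalidOperationException(results.Errors[0].ToString());

                // className/methodName are whatever the JS test file defines, e.g. a class with a
                // static runAll() that returns a JSON string of results (names here are assumptions).
                Type type = results.CompiledAssembly.GetType(className);
                MethodInfo method = type.GetMethod(methodName, BindingFlags.Public | BindingFlags.Static);
                return method.Invoke(null, null);
            }
        }

    The post-build task can then parse whatever string the entry point returns (JSON or plain text) and fail the build on any reported failure.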

    Read the article

  • Scalably processing large amounts of complicated database data in PHP, many times a day

    - by Eph
    I'm soon to be working on a project that poses a problem for me. It's going to require, at regular intervals throughout the day, processing tens of thousands of records, potentially over a million. Processing is going to involve several (potentially complicated) formulas and the generation of several random factors, writing some new data to a separate table, and updating the original records with some results. This needs to occur for all records, ideally, every three hours. Each new user to the site will be adding between 50 and 500 records that need to be processed in such a fashion, so the number will not be steady. The code hasn't been written, yet, as I'm still in the design process, mostly because of this issue. I know I'm going to need to use cron jobs, but I'm concerned that processing records of this size may cause the site to freeze up, perform slowly, or just piss off my hosting company every three hours. I'd like to know if anyone has any experience or tips on similar subjects? I've never worked at this magnitude before, and for all I know, this will be trivial to the server and not pose much of an issue. As long as ALL records are processed before the next three hour period occurs, I don't care if they aren't processed simultaneously (though, ideally, all records belonging to a specific user should be processed in the same batch), so I've been wondering if I should process in batches every 5 minutes, 15 minutes, hour, whatever works, and how best to approach this (and make it scalable in a way that is fair to all users)?

    Read the article

  • Is rsync corrupting my RAR?

    - by Mark Henderson
    We have two qnap devices - one in our datacentre and one off-site. We have hundreds of password protected RAR files stored on the qnap that contain virtual machine image snapshots, with approx 20 of them being created each day. We synchronise the two devices using rsync, and it looks like all the files are being rsynced OK - they come over and have the same file size and all the files are present and accounted for. However, when I try to open the RAR files on the remote site, I get Cannot open \\qnap01\FromDatacentre\Snapshots\DB001SQL1-20110626.rar I can open the RAR files on the local site just fine, so I assume that something is getting mangled during the rsync procedure. However, the older files (pre 2011-06-20) work just fine, it's something that's only started happening in the last week. There haven't been (as far as I know) any changes to any of the devices, setup or configuration in that time. Obviously something has changed though. Where should I start investigating?

    Read the article

  • ssh rsa keys expire after 5 years

    - by Kreker
    I'm using an old login with an ssh-rsa public/private key pair, and all was good. A couple of days ago I noticed that authorization was being refused with the message "server refused our key". After digging, I figured out that the pair of keys stopped working exactly 5 years after their creation. So I made a new pair of keys, took the public one, pasted it into a file in ~/.ssh of the username I'm using, converted it with ssh-keygen -if, and pasted the new file into authorized_keys, but I still get "server refused our key". Is it OK to copy and paste the actual key rather than transferring the file? What am I missing? This isn't my first time using a key pair, and I followed the same procedure as described. I'm in doubt about whether I'm changing the correct authorized_keys file, but I've taken a look in /etc/passwd to see where the home directory of the login I'm using is.

    Read the article

  • Memory mapping of files and system cache behavior in WinXP

    - by Canopus
    Our application is memory-intensive and reads a large number of disk files. The total load can be more than 3 GB. A custom memory manager uses memory-mapped files to read this huge amount of data. The files are mapped into the process memory space only when needed, and with this the process memory stays well under control. But what we observe is that, with memory mapping, the system cache keeps growing until it occupies all the available physical memory, which slows down the entire system. My question is how to prevent the system cache from hogging the physical memory. I attempted to remove the file buffering (by using FILE_FLAG_NO_BUFFERING), but with that the read operations take a considerable amount of time and slow down application performance. How do I achieve scalability without sacrificing too much performance? What are the common techniques used in such cases? I don't have a good understanding of the Windows XP caching behaviour, so any good links explaining it would also be helpful.

    Read the article
