Search Results

Search found 4272 results on 171 pages for 'processes'.


  • 4 Key Ingredients for the Cloud

    - by Kellsey Ruppel
    It's a short week here with the US Thanksgiving Holiday. So, before we put on our stretch pants and get ready to belly up to the dinner table for turkey, stuffing and mashed potatoes, let's spend a little time this week talking about the Cloud (kind of like the feathery whipped goodness that tops the infamous Thanksgiving pumpkin pie!). But before we dive into the Cloud, let's do a side-by-side comparison of the key ingredients for each.

        Cloud                     Whipped Cream
        Application Integration   1 cup heavy cream
        Security                  1/4 cup sugar
        Virtual I/O               1 teaspoon vanilla
        Storage                   Chilled bowl

    It’s no secret that millions of people are connected to the Internet. And it also probably doesn’t come as a surprise that a lot of those people are connected on social networking sites. Social networks have become an excellent platform for sharing and communication that reflects real-world relationships, and they play a major part in the everyday lives of many people. Facebook, Twitter, Pinterest, LinkedIn, Google+ and hundreds of others have transformed the way we interact and communicate with one another. Social networks are becoming more than just an online gathering of friends. They are becoming a destination for ideation, e-commerce, and marketing. But it doesn’t just stop there. Some organizations are utilizing social networks internally, integrated with their business applications and processes, and the possibility of social media and cloud integration is compelling. Forrester alone estimates enterprise cloud computing to grow to over $240 billion by 2020. It’s hard to find any current IT project today that is NOT considering cloud-based deployments. Security and quality-of-service concerns are no longer at the forefront; rather, it’s about focusing on the right mix of capabilities for the business. Cloud vs. on-premise? Policies and governance models? Social in the cloud? Cloud’s increasing sophistication, security in applications, mobility, transaction processing and social capabilities make it an attractive way to manage information. And Oracle offers all of this through the Oracle Cloud and Oracle Social Network. Oracle Social Network is a secure private network that provides a broad range of social tools designed to capture and preserve information flowing between people, enterprise applications, and business processes. By connecting you with your most critical applications, Oracle Social Network provides contextual, real-time communication within and across enterprises. With Oracle Social Network, you and your teams have the tools you need to collaborate quickly and efficiently, while leveraging the organization’s collective expertise to make informed decisions and drive business forward. Oracle Social Network is available as part of a portfolio of application and platform services within the Oracle Cloud. Oracle Cloud offers self-service business applications delivered on an integrated development and deployment platform with tools to rapidly extend and create new services. Oracle Social Network is pre-integrated with the Fusion CRM Cloud Service and the Fusion HCM Cloud Service within the Oracle Cloud. If you are looking for something to watch as you veg on the couch in a post-turkey-dinner hangover, you might consider watching these how-to videos! And yes, it is perfectly OK to have that 2nd piece of pie.

    Read the article

  • PHP-FPM stops responding and dies [migrated]

    - by user12361
    I'm running Drupal 6 with Nginx 1.5.1 and PHP-FPM (PHP 5.3.26) on a 1GB single core VPS with 3GB of swap space on SSD storage. I just switched from shared hosting to this unmanaged VPS because my site was getting too heavy, so I'm still learning the ropes. I have moderately high traffic, I don't really monitor it closely but Google Adsense usually record close to 30K page views/day. I usually have 50 to 80 authenticated users logged in and a few hundred more anonymous users hitting the Boost static HTML cache at any given moment. The problem I'm having is that PHP-FPM frequently stops responding, resulting in Nginx 502 or 504 errors. I swear I have read every page on the internet about this issue, which seems fairly common, and I've tried endless combinations of configurations, and I can't find a good solution. After restarting Nginx and PHP-FPM, the site runs really fast for a while, and then without warning it simply stops responding. I get a white screen while the browser waits on the server, and after about 30 seconds to a minute it throws an Nginx 502 or 504 error. Sometimes it runs well for 2 minutes, sometimes 5 minutes, sometimes 5 hours, but it always ends up hanging. When I find the server in this state, there is still plenty of free memory (500MB or more) and no major CPU usage, the control and worker PHP-FPM processes are still present, and the server is still pingable and usable via SSH. A reload of PHP-FPM via the init script revives it again. The hangups don't seem to correspond to the amount of traffic, because I observed this behavior consistently when I was testing this configuration on a development VPS with no traffic at all. I've been constantly tweaking the settings, but I can't definitively eliminate the problem. I set Nginx workers to just 1. In the PHP-FPM config I have tried all three of the process managers. "Dynamic" is definitely the least reliable, consistently hanging up after only a few minutes. "Static" also has been unreliable and unpredictable. The least buggy has been "ondemand", but even that is failing me, sometimes after as much as 12 to 24 hours. But I can't leave the server unattended because PHP-FPM dies and never comes back on its own. I tried adjusting the pm.max_children value from as low as 3 to as high as 50, doesn't make a lot of difference, but I currently have it at 10. Same thing for the spare servers values. I also have set pm.max_requests anywhere from 30 to unlimited, and it doesn't seem to make a difference. According to the logs, the PHP-FPM processes are not exiting with SIGSEGV or SIGBUS, but rather with SIGTERM. I get a lot of lines like: WARNING: [pool www] child 3739, script '/var/www/drupal6/index.php' (request: "GET /index.php") execution timed out (38.739494 sec), terminating and: WARNING: [pool www] child 3738 exited on signal 15 (SIGTERM) after 50.004380 seconds from start I actually found several articles that recommend doing a graceful reload of PHP-FPM via cron every few minutes or hours to circumvent this issue. So that's what I did, "/etc/init.d/php-fpm reload" every 5 minutes. So far, it's keeping the lights on. But it feels like a dreadful hack. Is PHP-FPM really that unreliable? Is there anything else I can do? Thanks a lot!

    Read the article

  • Editing files without race conditions?

    - by user2569445
    I have a CSV file that needs to be edited by multiple processes at the same time. My question is, how can I do this without introducing race conditions? It's easy to write to the end of the file without race conditions by open(2)ing it in "a" (O_APPEND) mode and simply writing to it. Things get more difficult when removing lines from the file. The easiest solution is to read the file into memory, make changes to it, and overwrite it back to the file. If another process writes to it after it is in memory, however, that new data will be lost upon overwriting. To further complicate matters, my platform does not support POSIX record locks, checking for file existence is a race condition waiting to happen, rename(2) replaces the destination file if it exists instead of failing, and editing files in-place leaves empty bytes in it unless the remaining bytes are shifted towards the beginning of the file. My idea for removing a line is this (in pseudocode):

        filename = "/home/user/somefile";
        file = open(filename, "r");
        tmp = open(filename+".tmp", "ax") || die("could not create tmp file"); //"a" is O_APPEND, "x" is O_EXCL|O_CREAT
        while(write(tmp, read(file)));  //copy $file to $file+".tmp"
        close(file);
        //edit tmp file
        unlink(filename) || die("could not unlink file");
        file = open(filename, "wx") || die("another process must have written to the file after we copied it."); //"w" is overwrite, "x" is force file creation
        while(write(file, read(tmp)));  //copy ".tmp" back to the original file
        unlink(filename+".tmp") || die("could not unlink tmp file");

    Or would I be better off with a simple lock file? Appender process:

        lock = open(filename+".lock", "wx") || die("could not lock file");
        file = open(filename, "a");
        write(file, "stuff");
        close(file);
        close(lock);
        unlink(filename+".lock");

    Editor process:

        lock = open(filename+".lock", "wx") || die("could not lock file");
        file = open(filename, "rw");
        while(contents += read(file));
        //edit "contents"
        write(file, contents);
        close(file);
        close(lock);
        unlink(filename+".lock");

    Both of these rely on an additional file that will be left over if a process terminates before unlinking it, causing other processes to refuse to write to the original file. In my opinion, these problems are brought on by the fact that the OS allows multiple writable file descriptors to be opened on the same file at the same time, instead of failing if a writable file descriptor is already open. It seems that O_CREAT|O_EXCL is the closest thing to a real solution for preventing filesystem race conditions, aside from POSIX record locks. Another possible solution is to separate the file into multiple files and directories, so that more granular control can be gained over components (lines, fields) of the file using O_CREAT|O_EXCL. For example, "file/$id/$field" would contain the value of column $field of the line $id. It wouldn't be a CSV file anymore, but it might just work. Yes, I know I should be using a database for this as databases are built to handle these types of problems, but the program is relatively simple and I was hoping to avoid the overhead. So, would any of these patterns work? Is there a better way? Any insight into these kinds of problems would be appreciated.
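    A minimal sketch of the lock-file approach above, written in Python (the question's pseudocode is language-neutral). The paths, timeout and retry interval are illustrative assumptions; the key point is that os.O_CREAT | os.O_EXCL makes lock creation atomic, and every writer, appender or editor, must take the same lock, which also makes the rename-based rewrite safe.

    ```python
    # Hypothetical sketch: a single lock file guarding both appends and rewrites.
    import os, time, errno

    DATA = "/home/user/somefile"        # assumed paths, as in the question
    LOCK = DATA + ".lock"

    def acquire_lock(timeout=10.0):
        """Atomically create the lock file; O_CREAT|O_EXCL fails if it already exists."""
        deadline = time.time() + timeout
        while True:
            try:
                os.close(os.open(LOCK, os.O_CREAT | os.O_EXCL | os.O_WRONLY))
                return
            except OSError as e:
                if e.errno != errno.EEXIST or time.time() > deadline:
                    raise               # lock held by someone else and we gave up waiting
                time.sleep(0.05)

    def release_lock():
        os.unlink(LOCK)

    def remove_lines(predicate):
        acquire_lock()
        try:
            with open(DATA) as src:
                kept = [line for line in src if not predicate(line)]
            with open(DATA + ".tmp", "w") as dst:   # write the edited copy first...
                dst.writelines(kept)
            os.rename(DATA + ".tmp", DATA)          # ...then swap it in
        finally:
            release_lock()
    ```

    This still shares the weakness noted in the question: a process that dies while holding the lock leaves the .lock file behind, so in practice the lock usually needs an owner PID and a staleness check.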

    Read the article

  • Development process for an embedded project with significant hardware changes

    - by pierr
    I have a good idea about the Agile development process, but it seems it does not fit well with an embedded project with significant hardware changes. I will describe below what we are currently doing (in an ad-hoc way, with no defined process yet). The changes are divided into three categories, and different processes are used for each of them:

    Complete hardware change (example: use a different video codec IP)
    a) Study the new IP
    b) RTL/FPGA simulation
    c) Implement the legacy interface - go to b)
    d) Wait until hardware (tape out) is ready
    e) Test on the real hardware

    Hardware improvement (example: enhance the image display quality by improving the underlying algorithm)
    a) RTL/FPGA simulation
    b) Wait until hardware and test on the hardware

    Minor change (example: only change hardware register mapping)
    a) Wait until hardware and test on the hardware

    The worry is that we don't seem to have much control of, or confidence in, software maturity for the hardware changes, as the bring-up schedule is always very tight and the customer desires a seamless change when updating to a new version of hardware. How did you manage this kind of hardware change? Did you solve it with a Hardware Abstraction Layer (HAL)? Did you have an automatic test for the HAL layer? How did you test when the hardware platform was not even ready? Do you have well-documented processes for this kind of change?

    Read the article

  • Rules engine for spatial and temporal reasoning?

    - by John
    I have an application that receives a number of datums that characterize spatial / temporal processes. It then filters these datums and creates actions which are then sent to processes that perform the actions. Rinse and repeat. At present, I have a collection of custom filters that perform a lot of complicated spatial/temporal calculations. Many times as I discuss my system to individuals in my company, they ask if I'm using a rules engine. I have yet to find a rules engine that is able to reason well temporally and spatially. (Things like When are two entities ever close? Is entity A ever in region B? If entity C is near entity D but oriented backwards relative to C then perform action D.) I have looked at Drools, Cyc, Jess in the past (say 3-4 years ago). It's time to re-examine the state of the art. Any suggestions? Any standards that you know of that support this kind of reasoning? Any defacto standards? Any applications? Thanks!
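    For context, the kind of hand-rolled spatial/temporal predicate being described might look like the sketch below (not a rules engine, just an illustration). It assumes the shapely geometry library and a made-up track format of (time, x, y) samples per entity.

    ```python
    # Illustrative only: hand-coded "ever in region" / "ever close" checks.
    from shapely.geometry import Point, Polygon

    region_b = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])   # example region

    def ever_in_region(track, region):
        """Was the entity inside the region at any sampled time?"""
        return any(region.contains(Point(x, y)) for t, x, y in track)

    def ever_close(track_a, track_b, radius, max_dt=1.0):
        """Were two entities within `radius` of each other at (roughly) the same time?"""
        return any(
            abs(ta - tb) <= max_dt and Point(xa, ya).distance(Point(xb, yb)) < radius
            for (ta, xa, ya) in track_a
            for (tb, xb, yb) in track_b
        )
    ```

    Expressing predicates like these declaratively, and evaluating them efficiently over streams of datums, is exactly the gap the question is pointing at.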

    Read the article

  • Python - Help with multiprocessing / threading basics.

    - by orokusaki
    I haven't ever used multi-threading, and I decided to learn it today. I was reluctant to ever use it before, but when I tried it out it seemed way too easy, which makes me wary. Are there any gotchas in my code, or is it really that simple?

        import uuid
        import time
        import multiprocessing

        def sleep_then_write(content):
            time.sleep(5)
            f = open(unicode(uuid.uuid4()), 'w')
            f.write(content)
            f.close()

        if __name__ == '__main__':
            for i in range(3):
                p = multiprocessing.Process(target=sleep_then_write,
                                            args=('Hello World',))
                p.start()

    My primary purpose of using threading would be to offload multiple images to S3 after re-sizing them, all at the same time. Is that a reasonable task for Python's multiprocessing? I've read a lot about certain types of tasks not really getting any gain from using threading in Python due to the GIL, but it seems that multiprocessing completely removes that worry, yes? I can imagine a case where 50 users hit the system and it spawns 150 Python interpreters. I can also imagine that wouldn't be good on a production server. How can something like that be avoided? Finally (but most important): how can I return control back to the caller of the new processes? I need to be able to continue with returning an HTTP response and content back to the user and then have the processes continue doing their work after the user of my website is done with the transaction.
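    One hedged sketch of an answer to the last two questions: a fixed-size multiprocessing.Pool caps how many worker processes exist no matter how many users hit the system, and apply_async returns immediately, so the caller can go on to build the HTTP response while the work finishes in the background. The resize/upload function is a placeholder, not real S3 code.

    ```python
    # Sketch: bounded worker pool plus fire-and-forget submission (assumed names).
    import multiprocessing

    def resize_and_upload(image_path):
        # placeholder for the real "resize, then push to S3" work
        pass

    if __name__ == '__main__':
        pool = multiprocessing.Pool(processes=4)      # at most 4 extra interpreters

        def handle_request(image_paths):
            for path in image_paths:
                pool.apply_async(resize_and_upload, (path,))
            return "HTTP response can be returned now; uploads continue in the pool"

        handle_request(['a.jpg', 'b.jpg', 'c.jpg'])
        pool.close()
        pool.join()                                   # only needed at shutdown
    ```

    In a real web application the pool (or a task queue in front of it) would live for the life of the server process rather than being created per request.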

    Read the article

  • No coverage for runtime with Devel::Cover and ModPerl::Registry

    - by codeholic
    When I'm running Devel::Cover with ModPerl::Registry, I get no coverage info except for BEGIN blocks. When I'm running the same script with Devel::Cover from command line or as a CGI, everything works alright (obviously). How can I make Devel::Cover "see" my code being executed in the runtime? Here's Devel::Cover related stuff in my httpd.conf: MaxClients 1 PerlSetEnv DEVEL_COVER_OPTIONS -db,/tmp/cover_db,silent,1 PerlRequire /var/www/project/startup.pl Here's startup.pl: #!/usr/bin/perl use strict; use warnings; use Apache2::Directive (); use File::Basename (); use File::Find (); BEGIN { # Devel::Cover database must be writable by worker processes my $conftree = Apache2::Directive::conftree->as_hash; my $name = $conftree->{User} or die "couldn't find user in Apache config"; print "user=$name\n"; my $uid = getpwnam($name); defined $uid or die "couldn't determine uid by name"; no warnings 'redefine'; local $> = $uid; require Devel::Cover; my $old_report = \&Devel::Cover::report; *Devel::Cover::report = sub { local $> = $uid; $old_report->(@_) }; Devel::Cover->import; } 1; (As you may see, I made a monkey patch for Devel::Cover since startup.pl is being run by root, but worker processes run under a different user, and otherwise they couldn't read directories created by startup.pl. If you know a better solution, make a note, please.)

    Read the article

  • Visual Studio Debugging is not attaching to WebDev.WebServer.EXE

    - by Aaron Daniels
    I have a solution with many projects. On Debug, I have three web projects that I want to start up on their own Cassini ASP.NET Web Development servers. In the Solution Properties - Common Properties - Startup Project, I have Multiple startup projects chosen with the three web applications' Action set to Start. All three web development servers start, and all three web pages load. However, Visual Studio is only attaching to two of the WebDev.WebServer.EXE processes. I have to manually go attach to the third process in order to debug it with the debugger. This behavior just started happening, and I'm at a loss as to how to troubleshoot this. Any help is appreciated. EDIT: Also to note, I have stopped and restarted the development servers several times with no change in behavior. Also, when attaching to the process manually, I see that the Type property of the two automatically attached WebDev.WebServer.EXE processes is Managed, while the Type property of the unattached WebDev.WebServer.EXE process is TSQL, Managed, x86. When looking at the project's properties, however, I am targeting AnyCPU, and do NOT have SQL Server debugging enabled. EDIT: Also to note, the two projects that attach correctly are C# web applications. <ProjectTypeGuids>{349c5851-65df-11da-9384-00065b846f21};{fae04ec0-301f-11d3-bf4b-00c04f79efbc}</ProjectTypeGuids> The project that is not attaching correctly is a VB.NET web application. <ProjectTypeGuids>{349c5851-65df-11da-9384-00065b846f21};{F184B08F-C81C-45F6-A57F-5ABD9991F28F}</ProjectTypeGuids> EDIT: Also to note, the behavior is the same on another workstation. So odds are that it's not a machine specific problem.

    Read the article

  • Working around MySQL error "Deadlock found when trying to get lock; try restarting transaction"

    - by Anon Guy
    Hi all: I have a MySQL table with about 5,000,000 rows that are being constantly updated in small ways by parallel Perl processes connecting via DBI. The table has about 10 columns and several indexes. One fairly common operation gives rise to the following error sometimes: DBD::mysql::st execute failed: Deadlock found when trying to get lock; try restarting transaction at Db.pm line 276. The SQL statement that triggers the error is something like this: UPDATE file_table SET a_lock = 'process-1234' WHERE param1 = 'X' AND param2 = 'Y' AND param3 = 'Z' LIMIT 47 The error is triggered only sometimes. I'd estimate in 1% of calls or less. However, it never happened with a small table and has become more common as the database has grown. Note that I am using the a_lock field in file_table to ensure that the four near-identical processes I am running do not try and work on the same row. The limit is designed to break their work into small chunks. I haven't done much tuning on MySQL or DBD::mysql. MySQL is a standard Solaris deployment, and the database connection is set up as follows: my $dsn = "DBI:mysql:database=" . $DbConfig::database . ";host=${DbConfig::hostname};port=${DbConfig::port}"; my $dbh = DBI->connect($dsn, $DbConfig::username, $DbConfig::password, { RaiseError => 1, AutoCommit => 1 }) or die $DBI::errstr; I have seen online that several other people have reported similar errors and that this may be a genuine deadlock situation. I have two questions: What exactly about my situation is causing the error above? Is there a simple way to work around it or lessen its frequency? For example, how exactly do I go about "restarting transaction at Db.pm line 276"? Thanks in advance.
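    The "try restarting transaction" part of the message is literal advice: the usual workaround is to catch the deadlock error and re-run the statement, since InnoDB has already rolled back the losing transaction. The question's code is Perl/DBI; the sketch below only shows the shape of that retry loop (in Python, since the pattern is language-independent), with run_update standing in for whatever executes the UPDATE at Db.pm line 276.

    ```python
    # Illustrative retry-on-deadlock wrapper; the callable and the error-text match
    # are assumptions, not DBI specifics.
    import time

    def with_deadlock_retry(run_update, attempts=5):
        for attempt in range(attempts):
            try:
                return run_update()
            except Exception as e:
                if "Deadlock found" in str(e) and attempt < attempts - 1:
                    time.sleep(0.05 * (attempt + 1))   # brief backoff, then retry
                    continue
                raise
    ```

    Reducing how long each statement holds locks (smaller LIMIT chunks, an index that matches the WHERE clause exactly) tends to lower how often the retry is needed.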

    Read the article

  • When using SendKeys()-InvalidOperationException: Undo Operation encountered...

    - by M0DC0M
    Here is my code:

        public void KeyPress()
        {
            // Finds the target window and sends a key command to the application
            Process[] processes = Process.GetProcessesByName("calc");
            IntPtr calculatorHandle;
            foreach (Process proc in processes)
            {
                calculatorHandle = proc.MainWindowHandle;
                if (calculatorHandle == IntPtr.Zero)
                {
                    MessageBox.Show("Calculator is not running.");
                    return;
                }
                SetForegroundWindow(calculatorHandle);
                break;
            }
            SendKeys.SendWait("1");
        }

    After executing this code I receive an error; I know the source is the SendKeys call. Here is the full error I am receiving:

        System.InvalidOperationException was unhandled
        Message="The Undo operation encountered a context that is different from what was applied in the corresponding Set operation. The possible cause is that a context was Set on the thread and not reverted(undone)."
        Source="mscorlib"
        StackTrace:
            at System.Threading.SynchronizationContextSwitcher.Undo()
            at System.Threading.ExecutionContextSwitcher.Undo()
            at System.Threading.ExecutionContext.runFinallyCode(Object userData, Boolean exceptionThrown)
            at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteBackoutCodeHelper(Object backoutCode, Object userData, Boolean exceptionThrown)
            at System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData)
            at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
            at System.Net.ContextAwareResult.Complete(IntPtr userToken)
            at System.Net.LazyAsyncResult.ProtectedInvokeCallback(Object result, IntPtr userToken)
            at System.Net.Sockets.BaseOverlappedAsyncResult.CompletionPortCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* nativeOverlapped)
            at System.Threading._IOCompletionCallback.PerformIOCompletionCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* pOVERLAP)
        InnerException:

    I'm not sure what the problem is. The number appears in my calculator, but that error pops up.

    Read the article

  • Java FileLock for Reading and Writing

    - by bobtheowl2
    I have a process that will be called rather frequently from cron to read a file that has certain move related commands in it. My process needs to read and write to this data file - and keep it locked to prevent other processes from touching it during this time. A completely separate process can be executed by a user to (potential) write/append to this same data file. I want these two processes to play nice and only access the file one at a time. The nio FileLock seemed to be what I needed (short of writing my own semaphore type files), but I'm having trouble locking it for reading. I can lock and write just fine, but when attempting to create lock when reading I get a NonWritableChannelException. Is it even possible to lock a file for reading? Seems like a RandomAccessFile is closer to what I need, but I don't see how to implement that. Here is the code that fails: FileInputStream fin = new FileInputStream(f); FileLock fl = fin.getChannel().tryLock(); if(fl != null) { System.out.println("Locked File"); BufferedReader in = new BufferedReader(new InputStreamReader(fin)); System.out.println(in.readLine()); ... The exception is thrown on the FileLock line. java.nio.channels.NonWritableChannelException at sun.nio.ch.FileChannelImpl.tryLock(Unknown Source) at java.nio.channels.FileChannel.tryLock(Unknown Source) at Mover.run(Mover.java:74) at java.lang.Thread.run(Unknown Source) Looking at the JavaDocs, it says Unchecked exception thrown when an attempt is made to write to a channel that was not originally opened for writing. But I don't necessarily need to write to it. When I try creating a FileOutpuStream, etc. for writing purposes it is happy until I try to open a FileInputStream on the same file.

    Read the article

  • perl multiple tasks problem

    - by Alice Wozownik
    I have finished my earlier multithreaded program that uses perl threads and it works on my system. The problem is that on some systems that it needs to run on, thread support is not compiled into perl and I cannot install additional packages. I therefore need to use something other than threads, and I am moving my code to using fork(). This works on my windows system in starting the subtasks. A few problems:

    1. How do I determine when a child process exits? I created new threads when the thread count was below a certain value, so I need to keep track of how many are running. For processes, how do I know when one exits so I can keep track of how many exist at the time, incrementing a counter when one is created and decrementing when one exits?

    2. Is file I/O using handles obtained with open() by the parent process safe in the child process? I need to append to a file from each of the child processes; is this safe on unix as well?

    3. Is there any alternative to fork and threads? I tried using Parallel::ForkManager, but it isn't installed on my system ("use Parallel::ForkManager;" gave an error) and I absolutely require that my perl script work on all unix/windows systems without installing any additional modules.

    Read the article

  • Far jump in ntdll.dll's internal ZwCreateUserProcess

    - by user49164
    I'm trying to understand how the Windows API creates processes so I can create a program to determine where invalid exes fail. I have a program that calls kernel32.CreateProcessA. Following along in OllyDbg, this calls kernel32.CreateProcessInternalA, which calls kernel32.CreateProcessInternalW, which calls ntdll.ZwCreateUserProcess. This function goes: mov eax, 0xAA xor ecx, ecx lea edx, dword ptr [esp+4] call dword ptr fs:[0xC0] add esp, 4 retn 0x2C So I follow the call to fs:[0xC0], which contains a single instruction: jmp far 0x33:0x74BE271E But when I step this instruction, Olly just comes back to ntdll.ZwCreateUserProcess at the add esp, 4 right after the call (which is not at 0x74BE271E). I put a breakpoint at retn 0x2C, and I find that the new process was somehow created during the execution of add esp, 4. So I'm assuming there's some magic involved in the far jump. I tried to change the CS register to 0x33 and EIP to 0x74BE271E instead of actually executing the far jump, but that just gave me an access violation after a few instructions. What's going on here? I need to be able to delve deeper beyond the abstraction of this ZwCreateUserProcess to figure out how exactly Windows creates processes.

    Read the article

  • Memory mapped files and "soft" page faults. Unavoidable?

    - by Robert Oschler
    I have two applications (processes) running under Windows XP that share data via a memory mapped file. Despite all my efforts to eliminate per iteration memory allocations, I still get about 10 soft page faults per data transfer. I've tried every flag there is in CreateFileMapping() and CreateFileView() and it still happens. I'm beginning to wonder if it's just the way memory mapped files work. If anyone there knows the O/S implementation details behind memory mapped files I would appreciate comments on the following theory: If two processes share a memory mapped file and one process writes to it while another reads it, then the O/S marks the pages written to as invalid. When the other process goes to read the memory areas that now belong to invalidated pages, this causes a soft page fault (by design) and the O/S knows to reload the invalidated page. Also, the number of soft page faults is therefore directly proportional to the size of the data write. My experiments seem to bear out the above theory. When I share data I write one contiguous block of data. In other words, the entire shared memory area is overwritten each time. If I make the block bigger the number of soft page faults goes up correspondingly. So, if my theory is true, there is nothing I can do to eliminate the soft page faults short of not using memory mapped files because that is how they work (using soft page faults to maintain page consistency). What is ironic is that I chose to use a memory mapped file instead of a TCP socket connection because I thought it would be more efficient. Note, if the soft page faults are harmless please note that. I've heard that at some point if the number is excessive, the system's performance can be marred. If soft page faults intrinsically are not significantly harmful then if anyone has any guidelines as to what number per second is "excessive" I'd like to hear that. Thanks.

    Read the article

  • Shared value in parallel python

    - by Jonathan
    Hey all- I'm using ParallelPython to develop a performance-critical script. I'd like to share one value between the 8 processes running on the system. Please excuse the trivial example but this illustrates my question.

        def findMin(listOfElements):
            for el in listOfElements:
                if el < min:
                    min = el

        import pp

        min = 0
        myList = range(100000)
        job_server = pp.Server()
        f1 = job_server.submit(findMin, myList[0:25000])
        f2 = job_server.submit(findMin, myList[25000:50000])
        f3 = job_server.submit(findMin, myList[50000:75000])
        f4 = job_server.submit(findMin, myList[75000:100000])

    The pp docs don't seem to describe a way to share data across processes. Is it possible? If so, is there a standard locking mechanism (like in the threading module) to confirm that only one update is done at a time?

        l = Lock()
        if(el < min):
            l.acquire
            if(el < min):
                min = el
            l.release

    I understand I could keep a local min and compare the 4 in the main thread once returned, but by sharing the value I can do some better pruning of my BFS binary tree and potentially save a lot of loop iterations. Thanks- Jonathan
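    For comparison, the standard library's multiprocessing module does offer exactly this: a shared value with an associated lock. The sketch below is not pp code (whether pp can share state this way is the open question); it just shows the double-checked update from the pseudocode above using multiprocessing.Value.

    ```python
    # Sketch using multiprocessing (not ParallelPython): one shared integer plus its lock.
    import multiprocessing

    def find_min(shared_min, elements):
        for el in elements:
            if el < shared_min.value:            # cheap unlocked check first
                with shared_min.get_lock():      # then re-check under the lock
                    if el < shared_min.value:
                        shared_min.value = el

    if __name__ == '__main__':
        shared_min = multiprocessing.Value('i', 0)     # starts at 0, as in the example
        my_list = range(100000)
        jobs = [multiprocessing.Process(target=find_min,
                                        args=(shared_min, my_list[i:i + 25000]))
                for i in range(0, 100000, 25000)]
        for j in jobs:
            j.start()
        for j in jobs:
            j.join()
        print(shared_min.value)
    ```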

    Read the article

  • Bibliography behaves strangely in LyX.

    - by Orjanp
    Hi! I have created a Bibliography section in my document written in lyx. It uses a book layout. For some reason it did start over again when I added some more entries. The new entries was made some time later than the first ones. I just went down to key-27 and hit enter. Then it started on key-1 again. Does anyone know why it behaves like this? The lyx code is below. \begin{thebibliography}{34} \bibitem{key-6}Lego mindstorms, http://mindstorms.lego.com/en-us/default.aspx \bibitem{key-7}C.A.R. Hoare. Communicating sequential processes. Communications of the ACM, 21(8):666-677, pages 666\textendash{}677, August 1978. \bibitem{key-8}C.A.R. Hoare. Communicating sequential processes. Prentice-Hall, 1985. \bibitem{key-9}CSPBuilder, http://code.google.com/p/cspbuilder/ \bibitem{key-10}Rune Møllegård Friborg and Brian Vinter. CSPBuilder - CSP baset Scientific Workflow Modelling, 2008. \bibitem{key-11}Labview, http://www.ni.com/labview \bibitem{key-12}Robolab, http://www.lego.com/eng/education/mindstorms/home.asp?pagename=robolab \bibitem{key-13}http://code.google.com/p/pycsp/ \bibitem{key-14}Paparazzi, http://paparazzi.enac.fr \bibitem{key-15}Debian, http://www.debian.org \bibitem{key-16}Ubuntu, http://www.ubuntu.com \bibitem{key-17}GNU, http://www.gnu.org \bibitem{key-18}IVY, http://www2.tls.cena.fr/products/ivy/ \bibitem{key-19}Tkinter, http://wiki.python.org/moin/TkInter \bibitem{key-20}pyGKT, http://www.pygtk.org/ \bibitem{key-21}pyQT4, http://wiki.python.org/moin/PyQt4 \bibitem{key-22}wxWidgets, http://www.wxwidgets.org/ \bibitem{key-23}wxPython GUI toolkit, http://www.wxPython.org \bibitem{key-24}Python programming language, http://www.python.org \bibitem{key-25}wxGlade, http://wxglade.sourceforge.net/ \bibitem{key-26}http://numpy.scipy.org/ \bibitem{key-27}http://www.w3.org/XML/ \bibitem{key-1}IVY software bus, http://www2.tls.cena.fr/products/ivy/ \bibitem{key-2}sdas \bibitem{key-3}sad \bibitem{key-4}sad \bibitem{key-5}fsa \bibitem{key-6}sad \bibitem{key-7} \end{thebibliography}

    Read the article

  • Graceful termination of NSApplication with Core Data and Grand Central Dispatch (GCD)

    - by Vincent Mac
    I have an Cocoa Application (Mac OS X SDK 10.7) that is performing some processes via Grand Central Dispatch (GCD). These processes are manipulating some Core Data NSManagedObjects (non-document-based) in a manner that I believe is thread safe (creating a new managedObjectContext for use in this thread). The problem I have is when the user tries to quit the application while the dispatch queue is still running. The NSApplication delegate is being called before actually quitting. - (NSApplicationTerminateReply)applicationShouldTerminate:(NSApplication *)sender I get an error "Could not merge changes." Which is somewhat expected since there are still operations being performed through the different managedObjectContext. I am then presented with the NSAlert from the template that is generated with a core data application. In the Threading Programming Guide there is a section called "Be Aware of Thread Behaviors at Quit Time" which alludes to using replyToApplicationShouldTerminate: method. I'm having a little trouble implementing this. What I would like is for my application to complete processing the queued items and then terminate without presenting an error message to the user. It would also be helpful to update the view or use a sheet to let the user know that the app is performing some action and will terminate when the action is complete. Where and how would I implement this behavior?

    Read the article

  • Run a .java file using ProcessBuilder

    - by David K
    I'm a novice programmer working in Eclipse, and I need to get multiple processes running (this is going to be a simulation of a multi-computer system). My initial hackup used multiple threads to multiple classes, but now I'm trying to replace the threads with processes. From my reading, I've gleaned that ProcessBuilder is the way to go. I have tried many many versions of the input you see below, but cannot for the life of me figure out how to properly use it. I am trying to run the .java files I previously created as classes (which I have modified). I eventually just made a dummy test.java to make sure my process is working properly - its only function is to print that it ran. My code for the two files are below. Am I using ProcessBuilder correctly? Is this the correct way to read the output of my subprocess? Any help would be much appreciated. David primary process package Control; import java.io.*; import java.lang.*; public class runSPARmatch { /** * @param args */ public static void main(String args[]) { try { ProcessBuilder broker = new ProcessBuilder("javac.exe","test.java","src\\Broker\\"); Process runBroker = broker.start(); Reader reader = new InputStreamReader(runBroker.getInputStream()); int ch; while((ch = reader.read())!= -1) System.out.println((char)ch); reader.close(); runBroker.waitFor(); System.out.println("Program complete"); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } catch (InterruptedException e) { // TODO Auto-generated catch block e.printStackTrace(); } } } subprocess package Broker; public class test { /** * @param args */ public static void main(String[] args) { // TODO Auto-generated method stub System.out.println("This works"); } }

    Read the article

  • NonStop ODBC: how the connections (ODBC servers) are assigned to CPUs?

    - by Vladimir Dyuzhev
    We have an ODBC pool running on a NonStop server. The pool is connected to SQL/MX. This pool is used by a few external Java applications, each of which has an JDBC pool connected to ODBC pool (e.g. 14 connections per application). With time (after a few application recycles) we see an imbalance between CPUs -- some have 8 ODBC processes running, some only 5. That leads to CPU time imbalance too. Up to this point we assumed that a CPU is assigned to ODBC process in round-robin fashion. That would maintain the number of ODBC processes more or less equally distributed. It's not the case though. Is there any information on how ODBC pool decided which CPU to choose for every new allocated process? Does it look at CPU load? Available memory? Something else? Sadly, even HP's own people (available to us, that is) couldn't answer those questions with certainty. :-(

    Read the article

  • Architecture Guidance Needed?

    - by vijay
    We are about to automate a number of processes for our reporting team. (The reports are things like daily reports, weekly reports, monthly reports, etc.) Mostly the process is pulling some data from Oracle and then filling it into particular Excel template files. Each report, and so each template, is different from the others. Apart from the Excel file manipulation, there is hardly any business logic behind these. The client wanted an integrated tool with all the automated processes placed as menus/submenus. Right now there are roughly 30 processes waiting to be automated, and we are expecting more new reports in the next quarter. I am nowhere near having any practical experience when it comes to architecting. I have already been maintaining two or three systems (they are more than 4 yrs old) for this prestigious client. The possibility that the above-mentioned tool will be maintained for another 3 yrs is very likely. From my past experience I've been through the pain of implementing change requests against a rigid and undocumented code base, resulting in the breakdown of the system and then eventually myself. So my main and topmost concern is maintainability. When I was searching around I came across this link, Smart Clients Using CAB and SCSF. Is the above link appropriate for my requirement? Also, should I place each automated process in a separate form under a single project, or place them in separate projects under a single solution? Please correct me if I have missed any other important information. Thx.

    Read the article

  • what webserver / mod / technique should I use to serve everything from memory?

    - by reinier
    I've got lots of lookup tables from which I'll generate my web response. I think IIS with ASP.NET enables me to keep static lookup tables in memory, which I can use to serve up my responses very fast. Are there, however, also non-.NET solutions which can do the same? I've looked at FastCGI, but I think this starts X processes, each of which can handle Y requests, and the processes are by definition shielded from each other. I could configure FastCGI to use just 1 process, but does this have scalability implications? Anything using PHP or any other interpreted language won't fly because it is also CGI or FastCGI bound, right? I understand memcache could be an option, though this would require another (local) socket connection, which I'd rather avoid since everything in memory would be much faster. The solution can work under Windows or Unix... it doesn't matter too much. The only thing which matters is that there will be a lot of requests (100/sec now and growing to 500/sec in a year), and I want to reduce the number of webservers needed to process them. The current solution is done using PHP and memcache (and the occasional hit to the SQL Server backend). Although it is fast (for PHP anyway), Apache has real problems once 50 requests/sec is passed. I've put a bounty on this question since I've not seen enough responses to make a wise choice. At the moment I'm considering either ASP.NET or FastCGI with C(++).

    Read the article

  • S3 Backup Memory Usage in Python

    - by danpalmer
    I currently use WebFaction for my hosting with the basic package that gives us 80MB of RAM. This is more than adequate for our needs at the moment, apart from our backups. We do our own backups to S3 once a day. The backup process is this: dump the database, tar.gz all the files into one backup named with the correct date of the backup, upload to S3 using the python library provided by Amazon. Unfortunately, it appears (although I don't know this for certain) that either my code for reading the file or the S3 code is loading the entire file into memory. As the file is approximately 320MB (for today's backup) it is using about 320MB just for the backup. This causes WebFaction to quit all our processes, meaning the backup doesn't happen and our site goes down. So this is the question: is there any way to not load the whole file into memory, or are there any other python S3 libraries that are much better with RAM usage? Ideally it needs to be about 60MB at the most! If this can't be done, how can I split the file and upload separate parts? Thanks for your help. This is the section of code (in my backup script) that caused the processes to be quit:

        filedata = open(filename, 'rb').read()
        content_type = mimetypes.guess_type(filename)[0]
        if not content_type:
            content_type = 'text/plain'
        print 'Uploading to S3...'
        response = connection.put(BUCKET_NAME,
                                  'daily/%s' % filename,
                                  S3.S3Object(filedata),
                                  {'x-amz-acl': 'public-read', 'Content-Type': content_type})
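    One hedged way around the read()-everything approach is to let the client library stream the file from disk. The sketch below uses boto3 (a different library from the Amazon sample code in the question): upload_file reads the archive in chunks and switches to multipart upload for large objects, so the whole ~320MB file never needs to be in RAM at once. The bucket/key layout follows the question; credentials are assumed to come from the environment.

    ```python
    # Sketch with boto3 rather than the older Amazon S3 sample library.
    import mimetypes
    import boto3

    def backup_to_s3(filename, bucket_name):
        content_type = mimetypes.guess_type(filename)[0] or 'text/plain'
        s3 = boto3.client('s3')
        s3.upload_file(
            filename,                      # streamed from disk in chunks
            bucket_name,
            'daily/%s' % filename,
            ExtraArgs={'ACL': 'public-read', 'ContentType': content_type},
        )
    ```

    If staying with the existing library, the same idea applies in principle: avoid calling read() on the whole archive and upload it in parts instead.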

    Read the article

  • Storing an object to use in multiple classes

    - by Aaron Sanders
    I am wondering the best way to store an object in memory that is used in a lot of classes throughout an application. Let me set up my problem for you: We have multiple databases, 1 per customer. We also have a master table and each row is detailed information about the databases such as database name, server IP it's located and a few config settings. I have an application that loops through those multiple databases and runs some updates on them. The settings I mentioned above are updated each loop iteration into memory. The application then runs through series of processes that include multiple classes using this data. The data never changes during the processes, only during the loop iteration. The variables are related to a customer, so I have them stored in a customer class. I suppose I could make all of the members shared or should I use a singleton for the customer class? I've never actually used a singleton, only read they are good in this type of situation. Are there better solutions to this type of scenario? Also, I could have plans for this application to be multithreaded later. Sorry if this is confusing. If you have questions, let me know and I will answer them. Thanks for your help.

    Read the article

  • SQL Concurrent test update question

    - by ptoinson
    Howdy Folks, I have a SQL Server 2008 database in which I have a table for tags. A tag is just an id and a name. The definition of the tags table looks like:

        CREATE TABLE [dbo].[Tag](
            [ID] [int] IDENTITY(1,1) NOT NULL,
            [Name] [varchar](255) NOT NULL
            CONSTRAINT [PK_Tag] PRIMARY KEY CLUSTERED
            (
                [ID] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
        )

    Name is also a unique index. Further, I have several processes adding data to this table at a pretty rapid rate. These processes use a stored proc that looks like:

        ALTER PROC [dbo].[lg_Tag_Insert]
            @Name varchar(255)
        AS
            DECLARE @ID int
            SET @ID = (select ID from Tag where Name=@Name)
            if @ID is null
            begin
                INSERT Tag(Name) VALUES (@Name)
                RETURN SCOPE_IDENTITY()
            end
            else
            begin
                return @ID
            end

    My issue is that, other than being a novice at concurrent database design, there seems to be a race condition that is causing me to occasionally get an error that I'm trying to enter duplicate keys (Name) into the DB. The error is: Cannot insert duplicate key row in object 'dbo.Tag' with unique index 'IX_Tag_Name'. This makes sense, I'm just not sure how to fix it. If it were code I would know how to lock the right areas. SQL Server is quite a different beast. The first question is: what is the proper way to code this 'check, then update' pattern? It seems I need to get an exclusive lock on the row during the check, rather than a shared lock, but it's not clear to me the best way to do that. Any help in the right direction will be greatly appreciated. Thanks in advance.

    Read the article

  • linux new/delete, malloc/free large memory blocks

    - by brian_mk
    Hi folks, We have a linux system (kubuntu 7.10) that runs a number of CORBA Server processes. The server software uses glibc libraries for memory allocation. The linux PC has 4G physical memory. Swap is disabled for speed reasons. Upon receiving a request to process data, one of the server processes allocates a large data buffer (using the standard C++ operator 'new'). The buffer size varies depending upon a number of parameters but is typically around 1.2G Bytes. It can be up to about 1.9G Bytes. When the request has completed, the buffer is released using 'delete'. This works fine for several consecutive requests that allocate buffers of the same size or if the request allocates a smaller size than the previous one. The memory appears to be freed ok - otherwise buffer allocation attempts would eventually fail after just a couple of requests. In any case, we can see the buffer memory being allocated and freed for each request using tools such as KSysGuard etc. The problem arises when a request requires a buffer larger than the previous one. In this case, operator 'new' throws an exception. It's as if the memory that has been freed from the first allocation cannot be re-allocated even though there is sufficient free physical memory available. If I kill and restart the server process after the first operation, then the second request for a larger buffer size succeeds. i.e. killing the process appears to fully release the freed memory back to the system. Can anyone offer an explanation as to what might be going on here? Could it be some kind of fragmentation or mapping table size issue? I am thinking of replacing new/delete with malloc/free and using mallopt to tune the way the memory is being released to the system. BTW - I'm not sure if it's relevant to our problem, but the server uses Pthreads that get created and destroyed on each processing request. Cheers, Brian.

    Read the article
