Search Results

Search found 10023 results on 401 pages for 'manage processes'.

Page 82 of 401

  • iPhone StoreKit development: Can't create a test user?

    - by Cruinh
    I'm trying to get up to speed on how to develop an in-app store using the new 3.0 SDK, but I'm a bit stuck in that I don't seem to be able to create a test account. The iTunes Connect FAQ states: "To create test users for your sandbox environment, go to the Manage Users section of iTunes Connect and select In App Purchase Test User. Your test users will not have iTunes Connect access. They will just have access to the sandbox environment." But when I go to that Manage Users section, there's no mention of any "In App Purchase Test User" or how to make one. Did I miss doing something to enable that? Another section of the same FAQ seems to suggest I need to have a paid app in the store first. Is that why I can't even use the test environment?

    Read the article

  • Get Janrain to work with Sproutcore?

    - by weng
    Since Sproutcore is not dependent on a backend server to render pages, I'm struggling with how to get it working with Janrain. In one of Janrain's login flows, an HTTP request is sent to a backend server that processes the information and renders a logged-in page to the user. Since I'm using Sproutcore and don't have a backend server (I'm using CouchDB to serve Sproutcore), I wonder how I am going to implement Janrain successfully. Even if I use a backend server like Node.js, I'm still not sure how to implement Janrain. Has anyone implemented Janrain in a Sproutcore app?

    Read the article

  • Atoms and references

    - by StackedCrooked
    According to the book Programming Clojure, refs manage coordinated, synchronous changes to shared state and atoms manage uncoordinated, synchronous changes to shared state. If I understood correctly, "coordinated" implies multiple changes are encapsulated as one atomic operation. If that is the case then it seems to me that coordination only requires using a dosync call. For example, what is the difference between:

        (def i (atom 0))
        (def j (atom 0))
        (dosync (swap! i inc) (swap! j dec))

    and:

        (def i (ref 0))
        (def j (ref 0))
        (dosync (alter i inc) (alter j dec))

    Read the article

  • C++ Asymptotic Profiling

    - by Travis
    I have a performance issue where I suspect one standard C library function is taking too long and causing my entire system (a suite of processes) to basically "hiccup". Sure enough, if I comment out the library function call, the hiccup goes away. This prompted me to investigate what standard methods there are to prove this type of thing. What would be the best practice for testing a function to see if it causes an entire system to hang for a second (causing other processes to be momentarily starved)? I would at least like to definitively correlate the function being called with the visible freeze. Thanks
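
    Established tools such as perf, gprof or ltrace can show where time goes, but for definitively tying one call to the visible freeze, a lightweight timestamped wrapper around the suspect call is often enough: log a monotonic timestamp whenever the call exceeds a threshold and line those timestamps up with the hiccups the other processes report. A minimal sketch in C; suspect_call() and the 100 ms threshold are placeholders, not anything from the original question:

        #include <stdio.h>
        #include <time.h>

        /* Placeholder for the library function under suspicion. */
        void suspect_call(void);

        static double elapsed_ms(const struct timespec *a, const struct timespec *b)
        {
            return (b->tv_sec - a->tv_sec) * 1000.0 + (b->tv_nsec - a->tv_nsec) / 1e6;
        }

        void timed_suspect_call(void)
        {
            struct timespec t0, t1;

            clock_gettime(CLOCK_MONOTONIC, &t0);
            suspect_call();                         /* the call being measured */
            clock_gettime(CLOCK_MONOTONIC, &t1);

            double ms = elapsed_ms(&t0, &t1);
            if (ms > 100.0)                         /* arbitrary "hiccup" threshold */
                fprintf(stderr, "suspect_call took %.1f ms (t=%ld.%09ld)\n",
                        ms, (long)t1.tv_sec, t1.tv_nsec);
        }

    If the slow calls and the stalls in the other processes line up in time, that is the correlation you are after; if they don't, the library call is probably a red herring.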

    Read the article

  • Internet Explorer, Google Chrome injection

    - by Volim Te
    I wrote code that injects a function into Internet Explorer/Chrome, but it doesn't work with these processes. Basically, it fills one big structure with all the APIs my function needs, strings, and other data, then it opens the process to get a handle, calls VirtualAllocEx to allocate enough memory to store the function and the structure, and writes the function and the structure into the allocated memory. It then calls CreateRemoteThread with the function as the starting address and the structure as the parameter. It all works great with calc/notepad/winamp processes, but I have problems with browser injection and I'm wondering what it could be. I'm using these APIs: x.xCreateFile x.xWriteFile x.xCloseHandle x.xSleep x.xVirtualAlloc x.xVirtualFree x.xMessageBox x.xLoadLibrary x.xShellExecute. Is it because browsers are protected now and they're running with the lowest privileges?
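
    One variable worth checking first: IE Protected Mode tabs and Chrome renderer processes run sandboxed at low (or lower) integrity with restricted tokens, and multi-process browsers mean the PID you grabbed may be a broker rather than the renderer you expect, so an injected thread can start and then fail at its first API call. A quick way to see what you are dealing with is to query the target's integrity level before injecting. A rough sketch using documented Win32 calls, with error handling trimmed:

        #include <windows.h>
        #include <stdlib.h>

        /* Returns the integrity RID of the process identified by pid
         * (e.g. SECURITY_MANDATORY_LOW_RID == 0x1000), or 0 on failure. */
        DWORD GetProcessIntegrityRid(DWORD pid)
        {
            DWORD rid = 0;
            HANDLE hProc = OpenProcess(PROCESS_QUERY_LIMITED_INFORMATION, FALSE, pid);
            if (!hProc)
                return 0;               /* often access denied for sandboxed targets */

            HANDLE hTok = NULL;
            if (OpenProcessToken(hProc, TOKEN_QUERY, &hTok)) {
                DWORD len = 0;
                GetTokenInformation(hTok, TokenIntegrityLevel, NULL, 0, &len);
                TOKEN_MANDATORY_LABEL *tml = (TOKEN_MANDATORY_LABEL *)malloc(len);
                if (tml && GetTokenInformation(hTok, TokenIntegrityLevel, tml, len, &len)) {
                    /* the last sub-authority of the label SID is the integrity level */
                    DWORD count = *GetSidSubAuthorityCount(tml->Label.Sid);
                    rid = *GetSidSubAuthority(tml->Label.Sid, count - 1);
                }
                free(tml);
                CloseHandle(hTok);
            }
            CloseHandle(hProc);
            return rid;
        }

    A very low result, or OpenProcess failing outright, is a strong hint that the browser's sandboxing rather than your injection code is what is getting in the way.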

    Read the article

  • How many concurrent HTTP requests can Erlang handle?

    - by user209123
    I am developing an application for benchmarking purposes, for which I need to create a large number of HTTP connections in a short time. I wrote a program in Java to test how many threads Java is able to create; it turns out that on my 2 GB single-core machine, with 1 GB of memory given to the JVM, the limit varies between 5000 and 6000, after which it hits an OutOfMemoryError because the heap limit is reached. It is suggested in various places that Erlang can handle many more concurrent processes, and I am willing to learn Erlang if it can solve the problem. What I want to know is: can Erlang create somewhere around 100,000 processes, each essentially an HTTP request waiting for a response, in a matter of a few seconds, without hitting any limit such as a memory error?

    Read the article

  • Task vs. process, is there really any difference?

    - by DASKAjA
    Hi there, I'm studying for my final exams in my CS major on the subjects of distributed systems and operating systems. I need a good definition for the terms task, process and thread. So far I'm confident that a process is the representation of a running (or suspended, but initiated) program with its own memory, program counter, registers, stack, etc. (the process control block). Processes can run threads, which share memory, so that communication via shared memory is possible, in contrast to processes, which have to communicate via IPC. But what's the difference between a task and a process? I often read that they're interchangeable and that the term task isn't used anymore. Is that really true?

    Read the article

  • Improving HTML scraper efficiency with pcntl_fork()

    - by Michael Pasqualone
    With the help from two previous questions, I now have a working HTML scraper that feeds product information into a database. What I am now trying to do is improve efficiency by wrapping my brain around getting my scraper working with pcntl_fork. If I split my php5-cli script into 10 separate chunks, I improve total runtime by a large factor, so I know I am not I/O or CPU bound but just limited by the linear nature of my scraping functions. Using code I've cobbled together from multiple sources, I have this working test:

        <?php
        libxml_use_internal_errors(true);
        ini_set('max_execution_time', 0);
        ini_set('max_input_time', 0);
        set_time_limit(0);

        $hrefArray = array("http://slashdot.org", "http://slashdot.org", "http://slashdot.org", "http://slashdot.org");

        function doDomStuff($singleHref, $childPid) {
            $html = new DOMDocument();
            $html->loadHtmlFile($singleHref);
            $xPath = new DOMXPath($html);
            $domQuery = '//div[@id="slogan"]/h2';
            $domReturn = $xPath->query($domQuery);
            foreach ($domReturn as $return) {
                $slogan = $return->nodeValue;
                echo "Child PID #" . $childPid . " says: " . $slogan . "\n";
            }
        }

        $pids = array();
        foreach ($hrefArray as $singleHref) {
            $pid = pcntl_fork();
            if ($pid == -1) {
                die("Couldn't fork, error!");
            } elseif ($pid > 0) {
                // We are the parent
                $pids[] = $pid;
            } else {
                // We are the child
                $childPid = posix_getpid();
                doDomStuff($singleHref, $childPid);
                exit(0);
            }
        }

        foreach ($pids as $pid) {
            pcntl_waitpid($pid, $status);
        }

        // Clear the libxml buffer so it doesn't fill up
        libxml_clear_errors();

    Which raises the following questions:

    1) Given my hrefArray contains 4 URLs: if the array were to contain, say, 1,000 product URLs, this code would spawn 1,000 child processes? If so, what is the best way to limit the number of processes to, say, 10, and, again with 1,000 URLs as an example, split the child workload to 100 products per child (10 x 100)? (See the worker-pool sketch below.)

    2) I've learned that pcntl_fork creates a copy of the process and all variables, classes, etc. What I would like to do is replace my hrefArray variable with a DOMDocument query that builds the list of products to scrape, and then feed them off to child processes to do the processing, so spreading the load across 10 child workers. My brain is telling me I need to do something like the following (obviously this doesn't work, so don't run it):

        <?php
        libxml_use_internal_errors(true);
        ini_set('max_execution_time', 0);
        ini_set('max_input_time', 0);
        set_time_limit(0);

        $maxChildWorkers = 10;

        $html = new DOMDocument();
        $html->loadHtmlFile('http://xxxx');
        $xPath = new DOMXPath($html);
        $domQuery = '//div[@id=productDetail]/a';
        $domReturn = $xPath->query($domQuery);
        $hrefsArray[] = $domReturn->getAttribute('href');

        function doDomStuff($singleHref) {
            // Do stuff here with each product
        }

        // To figure out: Split href array into $maxChildWorkers # of workArray1, workArray2 ... workArray10.
        $pids = array();
        foreach ($workArray(1,2,3 ... 10) as $singleHref) {
            $pid = pcntl_fork();
            if ($pid == -1) {
                die("Couldn't fork, error!");
            } elseif ($pid > 0) {
                // We are the parent
                $pids[] = $pid;
            } else {
                // We are the child
                $childPid = posix_getpid();
                doDomStuff($singleHref);
                exit(0);
            }
        }

        foreach ($pids as $pid) {
            pcntl_waitpid($pid, $status);
        }

        // Clear the libxml buffer so it doesn't fill up
        libxml_clear_errors();

    But what I can't figure out is how to build my hrefsArray[] in the master/parent process only and feed it off to the child processes. Currently everything I've tried causes loops in the child processes, i.e. my hrefsArray gets built in the master and in each subsequent child process. I am sure I am going about this totally wrong, so I would greatly appreciate just a general nudge in the right direction.
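
    On question 1: yes, as written the loop forks once per URL. The usual answer, independent of language, is a bounded worker pool: fork until the pool is full, then wait for one child to exit before starting the next, handing each child a contiguous chunk of the URL list. A rough sketch of that pattern in C for illustration (process_chunk is a placeholder for the scraping work); in PHP the same shape maps onto pcntl_fork, pcntl_wait and pcntl_waitpid:

        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/wait.h>
        #include <unistd.h>

        #define MAX_WORKERS 10
        #define NUM_CHUNKS  100            /* e.g. 1,000 URLs split into 100-URL chunks */

        /* Placeholder for the real per-chunk scraping work. */
        static void process_chunk(int chunk)
        {
            printf("worker %d processing chunk %d\n", (int)getpid(), chunk);
        }

        int main(void)
        {
            int running = 0;

            for (int chunk = 0; chunk < NUM_CHUNKS; ++chunk) {
                if (running == MAX_WORKERS) {   /* pool full: reap one child first */
                    wait(NULL);
                    --running;
                }
                pid_t pid = fork();
                if (pid < 0) {
                    perror("fork");
                    exit(1);
                } else if (pid == 0) {          /* child: do one chunk and exit */
                    process_chunk(chunk);
                    _exit(0);
                }
                ++running;                      /* parent: track the new worker */
            }
            while (wait(NULL) > 0)              /* reap the stragglers */
                ;
            return 0;
        }

    The same structure answers question 2: build hrefsArray once in the parent, before the fork loop, slice it into $maxChildWorkers chunks, and pass each child only its slice; anything built after pcntl_fork in a child exists only in that child.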

    Read the article

  • Removing a line from a text file?

    - by Blackbinary
    Hi all. I am working with a text file which contains a list of processes under my program's control, along with relevant data. At some point, one of the processes will finish and will need to be removed from the file (as it's no longer under control). Here is a sample of the file contents (entries are added "randomly"):

        PID=25729 IDLE=0.200000 BUSY=0.300000 USER=-10.000000
        PID=26416 IDLE=0.100000 BUSY=0.800000 USER=-20.000000
        PID=26522 IDLE=0.400000 BUSY=0.700000 USER=-30.000000

    So, for example, if I wanted to remove the line that says PID=26416..., how could I do that without rewriting the whole file? I can use external Unix commands, however I am not very familiar with them, so if that is your suggestion, please give an example. Thanks!
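
    With a plain line-oriented text file there is no way to delete a middle line in place; the standard approach (and what GNU sed -i does behind the scenes) is to copy every line except the unwanted one to a temporary file and rename it over the original. From the shell that would be something like sed -i '/^PID=26416 /d' file (file name illustrative). A small C sketch of the same copy-and-rename pattern, with the file name and PID prefix as illustrative parameters:

        #include <stdio.h>
        #include <string.h>

        /* Copy `path` to `path.tmp`, dropping any line that starts with
         * pid_prefix, then rename the temp file over the original. */
        int remove_pid_line(const char *path, const char *pid_prefix)
        {
            char tmp[512], line[512];
            snprintf(tmp, sizeof tmp, "%s.tmp", path);

            FILE *in = fopen(path, "r");
            FILE *out = fopen(tmp, "w");
            if (!in || !out) {
                if (in) fclose(in);
                if (out) fclose(out);
                return -1;
            }

            while (fgets(line, sizeof line, in)) {
                if (strncmp(line, pid_prefix, strlen(pid_prefix)) == 0)
                    continue;                   /* skip the finished process */
                fputs(line, out);
            }
            fclose(in);
            fclose(out);
            return rename(tmp, path);           /* replace the original */
        }

        /* usage: remove_pid_line("processes.txt", "PID=26416 "); */

    If truly avoiding the rewrite matters, the alternative is fixed-length records that can be overwritten in place with a "dead" marker, but for a file this small the copy-and-rename is the simpler, safer choice.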

    Read the article

  • Reading and writing to SysV shared memory without synchronization (use of semaphores, C/C++, Linux)

    - by user363778
    Hi, I use SysV shared memory to let two processes communicate with each other. I do not want the code to become too complex, so I wondered if I really have to use semaphores to synchronize access to the shared memory. In my C/C++ program the parent process reads from the shared memory and the child process writes to it. I wrote two test applications to see if I could produce some kind of error like a segmentation fault, but I couldn't (Ubuntu 10.04 64-bit). Even two processes writing non-stop in a while loop to the same shared memory did not produce any error. I hope someone has experience concerning this matter and can tell me if I really must use semaphores to synchronize the access, or if I am OK without synchronization. Thanks
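
    Not seeing a segmentation fault is expected: both processes have valid mappings of the segment, so unsynchronised access cannot fault. The failure mode is silent instead: if the payload is larger than what the CPU writes atomically, a reader can observe a half-written record. So unless the messages are a single aligned word, some locking is still needed. A minimal sketch guarding the segment with one SysV semaphore (the key and the missing error checks are for brevity only):

        #include <sys/types.h>
        #include <sys/ipc.h>
        #include <sys/sem.h>

        /* Linux requires the caller to define this union for semctl(). */
        union semun { int val; struct semid_ds *buf; unsigned short *array; };

        static int sem_id;

        /* Create (or attach to) one semaphore and set it to 1 (unlocked). */
        void shm_lock_init(key_t key)
        {
            union semun arg;
            sem_id = semget(key, 1, IPC_CREAT | 0600);
            arg.val = 1;
            semctl(sem_id, 0, SETVAL, arg);
        }

        void shm_lock(void)                     /* P(): wait until free, then take it */
        {
            struct sembuf op = { 0, -1, SEM_UNDO };
            semop(sem_id, &op, 1);
        }

        void shm_unlock(void)                   /* V(): release, wake one waiter */
        {
            struct sembuf op = { 0, +1, SEM_UNDO };
            semop(sem_id, &op, 1);
        }

        /* writer: shm_lock(); memcpy(shm_ptr, &rec, sizeof rec); shm_unlock();
         * reader: shm_lock(); memcpy(&rec, shm_ptr, sizeof rec); shm_unlock(); */

    Since the shared memory is already SysV, a SysV semaphore keeps the dependency footprint the same; a POSIX semaphore or a lock-free ring buffer would be the usual alternatives.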

    Read the article

  • Sharing some info with all DLLs pulled into a process

    - by JBRWilkinson
    Hi all, We've got an enterprise system which has many processes (EXEs, services, DCOM servers, COM+ apps, ISAPI, MMC snap-ins), all of which make use of many COM components. We've recently seen failures in some of the customer deployments, but are finding it hard to troubleshoot the cause. In order to track down the problem, we've augmented the entire source with logging statements where errors occur. To identify which logs came from which processes, the C++ logging code (compiled into all components) uses the EXE name to name the log. This is good for some cases, but not all: COM+ apps, ISAPI and MMC snap-ins all have system EXE names, and their logs end up interleaved. I saw this post about shared data sections, which might help, but what I don't understand is who decides what goes in the shared section. Is there any way I can guarantee that a particular piece of code writes into the shared section before anyone else reads it? Or is there a better solution to this problem?
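
    Two observations that may help. First, a variable placed in a shared #pragma data_seg section gets its compile-time initial value in every process, so there is no runtime "first writer" to coordinate; any runtime ordering between processes would still need a named mutex or event. Second, for the specific problem of interleaved logs from hosts with system EXE names, it is often enough to name each log after the host EXE plus its process ID instead of sharing anything. A small sketch (function and file names are illustrative, not from the original code):

        #include <windows.h>
        #include <stdio.h>
        #include <string.h>

        /* Build a log name such as "dllhost_4242.log" from the host EXE's
         * base name plus the process id, so logs from COM+/ISAPI/MMC hosts
         * no longer interleave. */
        void build_log_name(char *buf, size_t len)
        {
            char exe[MAX_PATH] = "unknown";
            GetModuleFileNameA(NULL, exe, MAX_PATH);

            const char *base = strrchr(exe, '\\');      /* keep only the base name */
            base = base ? base + 1 : exe;

            snprintf(buf, len, "%s_%lu.log", base, (unsigned long)GetCurrentProcessId());
        }

    That keeps each process's log separate without any cross-process shared state to reason about.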

    Read the article

  • How do I view how many concurrent long polling requests there are on my server?

    - by Pascal
    My host is Joyent. My host says I have a 15-process limit, and prstat -J shows those processes, but that doesn't tell me how many long-polling requests are currently being served. I could record it myself, but that would add a lot of performance overhead. I need to know when the server is at its long-polling limits. I know this limit is reached far before the memory or CPU is used up. From experimentation, I've already verified that the number of long polls open is NOT equivalent to the number of processes running, probably because each process has multiple threads, each serving a request. Thanks.

    Read the article

  • Threaded Django task doesn't automatically handle transactions or db connections?

    - by Gabriel Hurley
    I've got Django set up to run some recurring tasks in their own threads, and I noticed that they were always leaving behind unfinished database connection processes (pgsql "Idle In Transaction"). I looked through the Postgres logs and found that the transactions weren't being completed (no ROLLBACK). I tried using the various transaction decorators on my functions, no luck. I switched to manual transaction management and did the rollback manually, that worked, but still left the processes as "Idle". So then I called connection.close(), and all is well. But I'm left wondering, why doesn't Django's typical transaction and connection management work for these threaded tasks that are being spawned from the main Django thread?

    Read the article

  • SQL Server performance

    - by Jose
    I know that I can't get a specific answer to my question, but I would like to know if I can find the tools to get to my answer. OK, we have a SQL Server 2008 database that for the last 4 days has had moments where, for 5-20 minutes, it becomes unresponsive for specific queries. E.g. the following queries, run simultaneously in different query windows, have the following results:

        SELECT * FROM Assignment   -- hangs indefinitely
        SELECT * FROM Invoice      -- works fine

    Many of the tables have non-clustered indexes to help speed up SELECTs. Here's what I know:

    1) The same query will either hang indefinitely or run normally.
    2) In Activity Monitor, in the Processes tab, there are normally around 80-100 processes running.

    I think that what's happening is:

    1) A user updates a table.
    2) This causes one or more indexes to get updated.
    3) Another user issues a SELECT while the index is updating.

    Is there a way I can figure out why, at a specific moment in time, SQL Server is unresponsive for a specific query?

    Read the article

  • Resources for memory management in embedded application

    - by Elazar Leibovich
    How should I manage memory in my mission-critical embedded application? I found some articles with Google, but couldn't pinpoint a really useful practical guide. DO-178B forbids dynamic memory allocation, but how do you manage memory then? Preallocate everything in advance and pass a pointer to each function that needs an allocation? Allocate on the stack? Use a global static allocator (but then it's very similar to dynamic allocation)? Answers can take the form of a regular answer, a reference to a resource, or a reference to a good open-source embedded system as an example. Clarification: the issue here is not whether memory management is available for the embedded system, but what a good design for an embedded system is, in order to maximize reliability.
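
    One pattern that fits the DO-178B constraint is to carve all storage out of statically allocated, fixed-size pools sized from worst-case analysis: every "allocation" is then deterministic and bounded, and pool exhaustion is a design-time error rather than a runtime surprise. A minimal sketch of such a pool (block size and count are illustrative, and it is not interrupt- or thread-safe as written):

        #include <stddef.h>

        #define BLOCK_SIZE 128
        #define NBLOCKS     32

        /* All storage is static: no heap, no fragmentation, bounded worst case. */
        static unsigned char pool[NBLOCKS][BLOCK_SIZE];
        static unsigned char in_use[NBLOCKS];

        void *pool_alloc(void)
        {
            for (size_t i = 0; i < NBLOCKS; ++i) {
                if (!in_use[i]) {
                    in_use[i] = 1;
                    return pool[i];
                }
            }
            return NULL;                /* exhausted: treat as a design error */
        }

        void pool_free(void *p)
        {
            size_t i = ((unsigned char *)p - &pool[0][0]) / BLOCK_SIZE;
            if (i < NBLOCKS)
                in_use[i] = 0;
        }

    Passing preallocated buffers (or such pools) into the functions that need them, and keeping short-lived scratch data on the stack with a proven stack bound, covers most of what dynamic allocation would otherwise be used for.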

    Read the article

  • malloc hangs in Linux

    - by Rahul
    I am using Linux on a 16 GB machine with two quad-core CPUs. There are 8 processes doing some work (CPU-intensive/network I/O), of which 4 have a memory leak (these are test conditions, so the leaks themselves are not the problem here). The total space occupied by all processes is around 15.4 GB; only 200 MB is free in the system. Things are fine for some hours, but after that malloc hangs (in a process which doesn't have a memory leak). It is stuck for more than 4 minutes (note: CPU is not at 100%, but I/O has gone up significantly). There is no problem in the hung process itself (it has not corrupted memory). What is malloc doing? (Is it trying to defragment, or building up swap space?) I am using SUSE 10. Any pointers?
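
    With 15.4 GB of 16 GB committed and I/O climbing, the stall is more likely the whole process (or the kernel's memory reclaim) thrashing in swap than anything malloc does internally; growing the heap via brk/mmap can block while the kernel reclaims or swaps out pages. One way to shield the well-behaved process is to lock its pages into RAM, which needs root or an adequate RLIMIT_MEMLOCK; a sketch:

        #include <sys/mman.h>
        #include <stdio.h>

        int main(void)
        {
            /* Pin this process's current and future pages into physical memory
             * so the leaking neighbours cannot push it out to swap. */
            if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
                perror("mlockall");
                return 1;
            }
            /* ... rest of the application ... */
            return 0;
        }

    Watching vmstat (swap-in/swap-out columns) during one of the 4-minute pauses should confirm whether the hang coincides with heavy swap activity.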

    Read the article

  • PHP upload to GoDaddy hosted site

    - by 105894384987190582154
    Hi, I'm relatively new to both hosting and PHP, so apologies for (probably) missing the obvious, but... I built a page which would allow file uploads to my site, following the example laid out in the W3Schools PHP upload exercise. Through the File Manager on my GoDaddy hosting, I created a folder named 'upload' so that the file would land there after being uploaded through the page I had built. Part of the returned page that appears after submitting the file reads: Temp file: d:temptmpphpE4C9.tmp Stored in: upload/testfile.txt, which would indicate that the file has been successfully uploaded, given the code in the example. However, I cannot see the file in the 'upload' folder via my File Manager, or anywhere else on my site's hosting (as far as I can see). I also cannot see the 'temp' folder anywhere either... Any help or clarification would be greatly appreciated. Thanks, Tim

    Read the article

  • Creating an n-tiered application

    - by aaron
    I am researching the architecture for a project that will be started next year. It is mainly a C# web app, but there will be a service layer so that it can talk to our Facebook/iPhone app. There are a few long-running processes, which means that I will be creating a Windows service that can handle those. I'm thinking of putting the entire app in the Windows service instead of just the long-running processes: ASP - WCF - BLL vs. ASP - BLL. I know this will be more scalable, but it is probably overkill, as everything will be running on the same box, even the database. This could change down the road if the server can't handle the traffic like marketing says it will. I don't have access to production hardware, just my crappy testing box and my local machine. Has anyone decided to go down this route? But mostly, what is the best way to test both methods to get some metrics?

    Read the article

  • Joomla: forms with upload and custom fields from inside the administration panel

    - by Stathis
    I want a plugin for Joomla, like jforms or chronoforms, in order to make a form to upload videos, along with other custom fields, to the DB and manage them. The only problem is that I want this functionality to be available from inside the administrator console and not to appear on a page on my site's frontend. My site does not have a login service, so I need to make the admin able to log in to the administration panel and, from there, upload and manage videos. Do you know of a plugin which supports this functionality? Thank you in advance.

    Read the article

  • Please help me regarding time zones of different countries

    - by Rajesh Rolen- DotNet Developer
    I am creating a web site on which service providers can create their schedules (the dates and times at which they can provide service) and clients can book them. The problem is that the service providers are from different countries, so there is a time zone problem. Please give me the best solution for this: should I convert the service providers' times (while inserting the schedule) to GMT in JavaScript (client side) or in my CS file (server side)? How do I manage all this? Please give me a link to any demo in which time zones are handled properly. If I convert at the client side, the problem is that the client may have set the wrong time zone on his PC; and if I convert at the server side, then how do I manage it, and how do I show service providers their schedules, and show them to clients, accordingly? Please help me out. I am using ASP.NET (VS 2005) and SQL Server.

    Read the article

  • using JDBC with persistence.xml

    - by moshe shamy
    I am building a framework that manages access to the database. The framework gets tasks from the user and handles a connection pool that manages access to the database; the user just sends me SQL commands. One of the features I would like to support is working with JPA, in which case I will provide an entity manager. In some cases I would like to provide JDBC access as well as JPA access. The database connection arguments are written in an XML file, so for JPA I need to write the properties in persistence.xml, and it would not be so smart to write the same arguments again for JDBC. Do you know if I can get the database arguments from persistence.xml? Do you know if there is source code that does this, or should I parse persistence.xml myself?

    Read the article
