Search Results

Search found 8782 results on 352 pages for 'restart processes'.

Page 40/352

  • NGINX: How do I calculate an optimal no. of worker processes and worker connections?

    - by bodacious
    Our web app is running on a Linode 2048 server at the moment (~2048 MB of RAM). The MySQL database is on another Linode of its own, so this server is really only handling NGINX and the Rails application. The application itself uses about 185976 KB of memory per instance (RSS). Our traffic is < 1000 per day and the pages are mostly cached, so there are fewer hits to the Rails app itself. My question is - how can I calculate optimal NGINX config settings for my app? Below is the current config: worker_processes 1; # pid of nginx master process pid /var/run/nginx.pid; events { worker_connections 1024; } http { access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; passenger_root /home/user/.rvm/gems/ree-1.8.7-2011.01@URTV/gems/passenger-3.0.3; passenger_ruby /home/user/.rvm/rubies/ree-1.8.7-2011.01/bin/ruby; include mime.types; default_type application/octet-stream; sendfile on; tcp_nopush on; tcp_nodelay on; # gzip settings gzip on; gzip_http_version 1.0; gzip_comp_level 2; gzip_vary on; gzip_proxied any; gzip_types text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript; # load extra modules from the vhosts directory include /opt/nginx/vhosts/*.conf; } Any advice would be appreciated! :)
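    A rough rule of thumb, offered here as a hedged sketch rather than a definitive formula: set worker_processes to the number of CPU cores and cap worker_connections at the per-process open-file limit of the nginx user, giving a theoretical maximum of roughly worker_processes * worker_connections simultaneous clients. On the server itself the two inputs can be read like this (assumes a Linux box with coreutils):

      # Suggest nginx worker settings from the machine's own limits (sketch only)
      CORES=$(nproc)          # number of CPU cores available to nginx
      FD_LIMIT=$(ulimit -n)   # open-file limit for the current user/shell
      echo "worker_processes  $CORES;"
      echo "events { worker_connections $FD_LIMIT; }"
      echo "# theoretical max clients ~= $((CORES * FD_LIMIT))"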

    Read the article

  • What processes would make the selling of a hard drive that previously held sensitive data justifiable? [closed]

    - by user12583188
    Possible Duplicate: Securely erasing all data from a hard drive In my personal collection are an increasing number of relatively new drives, only put on the shelf due to upgrades; in the past I have never sold hard drives with used machines for fear of having the encrypted password databases stored on them compromised, but as their numbers increase I find myself more tempted to do so (due to the $$$ I know they're worth on the used market). What tools exist, then, to make the recovery of data from said drives difficult enough that selling them could be justified? Another way of saying this would be: what tools/methods exist for making any attempt at recovering data previously stored on a given drive impractical? I assume that it is always possible to recover data from a drive that is in working order. I also assume there are some methods for preventing recovery of data, such as a program called DBAN, and one particular feature in Mac OS X that deals with permanently deleting data from a disk.
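    As a hedged illustration of the kind of tooling involved (the device name below is a placeholder, and both commands are destructive and assume the drive is unmounted), Linux offers shred for multi-pass overwrites and hdparm for the drive's built-in ATA Secure Erase:

      # Overwrite the whole drive: 3 random passes, then a final pass of zeros
      sudo shred -v -n 3 -z /dev/sdX

      # Or ask the drive firmware to erase itself (set a temporary password, then erase)
      sudo hdparm --user-master u --security-set-pass p /dev/sdX
      sudo hdparm --user-master u --security-erase p /dev/sdX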

    Read the article

  • New 64-bit Linux system has regular processes (ps, grep, etc.) taking up way too much VIRT mem

    - by user42980
    We just moved from a 32-bit machine to a 64-bit machine. We quickly ran out of memory, even though the new boxes have twice as much RAM as the old boxes. Running a simple ps command will illustrate the problem. New machine: 132 prod-Charlotte1-node1 ~/public_html/rearch/cgi-bin ps aux | grep ps root 293 0.0 0.0 0 0 ? S< May09 0:00 [kpsmoused] xamine 2267 1.0 0.0 63728 928 pts/3 R+ 16:50 0:00 ps aux xamine 2268 0.0 0.0 61172 752 pts/3 S+ 16:50 0:00 grep ps Old machine: 132 prod-116431-node1:/home/xamine ps aux | grep ps xamine 23191 0.0 0.0 2332 768 pts/6 R+ 15:41 0:00 ps aux xamine 23192 0.0 0.0 3668 692 pts/6 S+ 15:41 0:00 grep ps Notice that the ps process is using 63M of VIRT mem vs about 2M on the old machine. New Machine: Enterprise Linux Enterprise Linux Server release 5.4 (Carthage) Red Hat Enterprise Linux Server release 5.4 (Tikanga) Old Machine: Red Hat Enterprise Linux ES release 4 (Nahant Update 4) Thanks for any thoughts you have!
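    One commonly cited explanation is that 64-bit builds map larger shared libraries and address-space reservations, so VIRT grows while the resident set (the RSS column, still under 1 MB in the output above) is what actually consumes RAM. A hedged way to look at this on the new box:

      # Compare resident (RSS) vs. virtual (VSZ) size for the largest processes
      ps -eo pid,comm,rss,vsz --sort=-rss | head
      # Break down what one process's virtual size is made of (<pid> is a placeholder)
      pmap -x <pid>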

    Read the article

  • What causes "system call failed" and there are hanging ie processes?

    - by TecBrat
    System: Windows 7 Home Premium IE: 11.0.9600.17107 When I have had many, many apps and windows open, sometimes I'll try to access a folder and get a dialog that says "System Call Failed". I have found that the fix is to open Task Manager and use End Process Tree on iexplore.exe and iexplore.exe *32. Often there will be several of these even after I have closed all my browser windows. Does anyone have any experience with this error?

    Read the article

  • What Counts For a DBA: Simplicity

    - by Louis Davidson
    Too many computer processes do an apparently simple task in a bizarrely complex way. They remind me of this strip by one of my favorite artists: Rube Goldberg. In order to keep the boss from knowing one was late, a process is devised whereby the cuckoo clock kisses a live cuckoo bird, who then pulls a string, which triggers a hat flinging, which in turn lands on a rod that removes a typewriter cover…and so on. We rely on creating automated processes to keep on top of tasks. DBAs have a lot of tasks to perform: backups, performance tuning, data movement, system monitoring, and of course, avoiding being noticed.  Every day, there are many steps to perform to maintain the database infrastructure, including: checking physical structures, re-indexing tables where needed, backing up the databases, checking those backups, running the ETL, and preparing the daily reports and yes, all of these processes have to complete before you can call it a day, and probably before many others have started that same day. Some of these tasks are just naturally complicated on their own. Other tasks become complicated because the database architecture is excessively rigid, and we often discover during “production testing” that certain processes need to be changed because the written requirements barely resembled the actual customer requirements.   Then, with no time to change that rigid structure, we are forced to heap layer upon layer of code onto the problematic processes. Instead of a slight table change and a new index, we end up with 4 new ETL processes, 20 temp tables, 30 extra queries, and 1000 lines of SQL code.  Report writers then need to build reports and make magical numbers appear from those toxic data structures that are overly complex and probably filled with inconsistent data. What starts out as a collection of fairly simple tasks turns into a Goldbergian nightmare of daily processes that are likely to cause your dinner to be interrupted by the smartphone doing the vibration dance that signifies trouble at the mill. So what to do? Well, if it is at all possible, simplify the problem by either going into the code and refactoring the complex code to simple, or taking all of the processes and simplifying them into small, independent, easily-tested steps.  The former approach usually requires an agreement on changing underlying structures that requires countless mind-numbing meetings; while the latter can generally be done to any complex process without the same frustration or anger, though it will still leave you with lots of steps to complete, the ability to test each step independently will definitely increase the quality of the overall process (and with each step reporting status back, finding an actual problem within the process will be definitely less unpleasant.) We all know the principle behind simplifying a sequence of processes because we learned it in math classes in our early years of attending school, starting with elementary school. In my 4 years (ok, 9 years) of undergraduate work, I remember pretty much one thing from my many math classes that I apply daily to my career as a data architect, data programmer, and as an occasional indentured DBA: “show your work”. This process of showing your work was my first lesson in simplification. Each step in the process was in fact, far simpler than the entire process.  
    When you were working an equation that took both sides of 4 sheets of paper, showing your work was important because the teacher could see every step, judge it, and mark it accordingly. So often I would make an error in the first few lines of a problem, which meant that the rest of the work was actually moving me closer to a very wrong answer, no matter how correct the math was in the subsequent steps. Yet, when I got my grade back, I would sometimes be pleasantly surprised. I passed, yet missed every problem on the test. But why? While I got the fact that 1+1=2 wrong in every problem, the teacher could see that I was using the right process. In a computer process, the principle is very similar. We take complex processes, show our work by storing intermediate values, and test each step independently. When a process has 100 steps, each step becomes a simple step that is tested and verified, such that there will be 100 places where data is stored, validated, and can be checked off as complete. If you get step 1 of 100 wrong, you can fix it and be confident (assuming you did your job of testing the other steps better than the one you had to repair) that the rest of the process works. If you have 100 steps and store the state of the process exactly once, the resulting testable chunk of code will be far more complex, finding the error will require checking all 100 steps as one, and it would usually be easier to find a specific needle in a stack of similarly shaped needles. The goal is to strive for simplicity either in the solution, or at least by simplifying every process down to as many independent, testable, simple tasks as possible. For the tasks that really can't be done completely independently, minimally take those tasks and break them down into simpler steps that can be tested independently. Like working out division problems longhand, have each step of the larger problem verified and tested.
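    Purely as an illustration of that "show your work" idea (not from the article, and every script name below is hypothetical), a nightly job can be broken into small, independently testable steps, each leaving a marker so a failure points at exactly one step:

      #!/bin/bash
      set -e
      STATE_DIR=/var/tmp/nightly_state     # where each step records its "work shown"
      mkdir -p "$STATE_DIR"

      run_step() {
          local name="$1"; shift
          if [ -f "$STATE_DIR/$name.done" ]; then
              echo "skipping $name (already verified)"
              return
          fi
          echo "running $name"
          "$@"                              # the actual work for this one simple step
          touch "$STATE_DIR/$name.done"     # check this step off as complete
      }

      run_step check_structures ./check_structures.sh
      run_step reindex_tables   ./reindex_tables.sh
      run_step backup_databases ./backup_databases.sh
      run_step verify_backups   ./verify_backups.sh
      run_step run_etl          ./run_etl.sh
      run_step daily_reports    ./daily_reports.sh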

    Read the article

  • ISO, Six Sigma, SEI-CMM, etc., in Fortune 500 companies

    - by CMR
    Do large corporations and product companies follow any standard quality models/processes at all? For example, I have seen that many large organizations have proprietary processes in IT and software development. Back in the day (even before Motorola's Iridium project), I remember many IT companies scrambling for SEI-CMM certification. Do any of the Fortune 500 companies try to adopt these quality processes? In my limited experience I have not seen them undergoing audits for adherence to processes. Most of the audits are either financial or pertain to legal issues. Am I just being ignorant, or is this true? If true, how stringently do the companies adhere to the processes?

    Read the article

  • Trouble with printer Canon LBP2900 - Ubuntu 14.04

    - by user288922
    I use Ubuntu 14.04. I installed the Canon LBP2900 printer following the guide, did all the steps, and now the printer works. I added CCPD to Startup Applications (with the command: sudo /etc/init.d/ccpd start). But after I log out or restart my PC, the printer can't print until I open a terminal and run sudo /etc/init.d/ccpd restart && sudo /etc/init.d/cups restart. One more thing: if after restarting my PC I have submitted just 1 job (which of course doesn't print) and then run sudo /etc/init.d/ccpd restart && sudo /etc/init.d/cups restart, the printer will work but it prints that same job three times.
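    One assumption worth checking (not stated in the question): Startup Applications runs commands as the desktop user and cannot answer a sudo password prompt, so the ccpd daemon may simply never start at login. A hedged alternative is to let the existing init script run at boot as root:

      # Register the vendor-supplied init script to start at boot (runs as root)
      sudo update-rc.d ccpd defaults
      # If printing still stalls after a login, the restart pair from the question:
      sudo /etc/init.d/ccpd restart && sudo /etc/init.d/cups restart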

    Read the article

  • The new distro Ubuntu 12.10 has 2 system bugs; they also exist after reinstalling from the ISO [closed]

    - by sunpoison
    Possible Duplicate: How do I report a bug? I discovered 2 bugs: 1) The computer cannot restart: when I click the power icon at the top right and choose "Restart", it stays on the "Ubuntu..." splash image. I waited about 7 minutes or more and it still would not restart, so I had to remove the laptop's battery. 2) When I first updated to 12.10, after it had been running for only a few hours, the input method icon on the top status bar suddenly disappeared. Using the Ctrl+Space shortcut to switch the input method didn't work at all. After restarting the laptop I could change the input method again, but the small keyboard icon was still missing from the status bar. I went into the IBus settings, but couldn't figure out how to adjust it so the icon would display on the top bar again. (I have now reinstalled using the ISO file; this bug is fixed, but the machine still cannot restart, like the former bug.) It must be some problem with the IBus input method.

    Read the article

  • Excessive httpd processes stacking up on my Rails + Apache2 + Passenger production setup?

    - by LeoAlmighty
    I have a Rails + Apache2 + Postgres + Passenger application running in production mode in OSX Snow Leopard. The application serves as a data warehouse for another application in the cloud so I'm constantly getting API calls to my OSX production build. After a recent reboot, I'm finding a ton of httpd processes stacking up and eventually requiring an apache reboot. I haven't changed any settings, everything was running fine before. Any ideas on the best way to troubleshoot this? $ ps -ef|grep httpd 0 6203 1 0 0:00.20 ?? 0:00.47 /usr/sbin/httpd -D FOREGROUND 70 6222 6203 0 0:00.05 ?? 0:00.11 /usr/sbin/httpd -D FOREGROUND 70 6224 6203 0 0:00.31 ?? 0:00.50 /usr/sbin/httpd -D FOREGROUND 70 6233 6203 0 0:00.05 ?? 0:00.10 /usr/sbin/httpd -D FOREGROUND 70 6234 6203 0 0:00.43 ?? 0:00.64 /usr/sbin/httpd -D FOREGROUND 70 6243 6203 0 0:00.02 ?? 0:00.03 /usr/sbin/httpd -D FOREGROUND 70 6319 6203 0 0:00.08 ?? 0:00.16 /usr/sbin/httpd -D FOREGROUND 70 6334 6203 0 0:00.02 ?? 0:00.05 /usr/sbin/httpd -D FOREGROUND 70 6469 6203 0 0:00.04 ?? 0:00.08 /usr/sbin/httpd -D FOREGROUND 70 6487 6203 0 0:00.36 ?? 0:00.48 /usr/sbin/httpd -D FOREGROUND 70 6593 6203 0 0:00.36 ?? 0:00.48 /usr/sbin/httpd -D FOREGROUND 70 6709 6203 0 0:00.04 ?? 0:00.08 /usr/sbin/httpd -D FOREGROUND 70 6718 6203 0 0:00.04 ?? 0:00.10 /usr/sbin/httpd -D FOREGROUND 70 6834 6203 0 0:00.01 ?? 0:00.03 /usr/sbin/httpd -D FOREGROUND 70 6852 6203 0 0:00.00 ?? 0:00.00 /usr/sbin/httpd -D FOREGROUND 70 6853 6203 0 0:00.01 ?? 0:00.02 /usr/sbin/httpd -D FOREGROUND
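    Passenger ships a couple of diagnostic tools that can help narrow this down before restarting Apache; shown here as a hedged sketch, since the exact output depends on the Passenger version installed:

      # How many Apache children are currently running
      ps -ef | grep -c '[h]ttpd'
      # What Passenger thinks its application process pool looks like
      sudo passenger-status
      # Memory used by each Apache and Passenger process
      sudo passenger-memory-stats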

    Read the article

  • How to synchronize two (or n) replication processes for SQL Server databases?

    - by Yauheni Sivukha
    There are two master databases and two read-only copies updated by standard transactional replication. We need to map some entity from both read-only databases; let's say that database A contains orders and database B contains order lines. The problem is that replication to one database can lag behind replication to the second database, so at the moment of mapping the read-only databases will have inconsistent data. For example, we stored 2 orders with lines at 19:00 and 19:03. The mapping process started at 19:05, but by the moment of mapping, replication to database A had processed all changes up to 19:03, while replication to database B had processed only changes up to 19:00. After mapping we will have an order entity with the order as of 19:03 and lines as of 19:00. Trouble is guaranteed :) In my particular case both databases have a temporal model, so it is possible to fetch data for any time slice, but the problem is identifying the time of the latest replication. Question: How do you synchronize replication processes for several databases to avoid the situation described above? Or, in other words, how do you compare the time of the last replication in each database? UPD: The only way I see to synchronize is to continuously write timestamps into service tables in each database and to check these timestamps on the replicated servers. Is that an acceptable solution?

    Read the article

  • can upstart expect/respawn be used on processes that fork more than twice?

    - by johnjamesmiller
    I am using upstart to start/stop/automatically restart daemons. One of the daemons forks 4 times. The upstart cookbook states that it only supports forking twice. Is there a workaround? How it fails: If I try to use expect daemon or expect fork, upstart uses the pid of the second fork. When I try to stop the job, nobody responds to upstart's SIGKILL signal and it hangs until you exhaust the pid space and loop back around. It gets worse if you add respawn: upstart thinks the job died and immediately starts another one. Bug acknowledged by upstream: A bug has been entered for upstart. The solutions presented are to stick with the old sysvinit, rewrite your daemon, or wait for a rewrite. RHEL is close to 2 years behind the latest upstart package, so by the time the rewrite is released and we get updated, the wait will probably be 4 years. The daemon is written by a subcontractor of a subcontractor of a contractor, so it will not be fixed any time soon either.

    Read the article

  • Is it possible to restart the phone with Android SDK or NDK?

    - by J Andy
    Is it possible to programmatically restart the phone from an application (service) running on top of the Dalvik VM? If the SDK does not provide this functionality, then how about using the NDK and calling some functions provided by the kernel? I know this option is not preferred (the libs are not stable enough), but if it's the only option, I'll have to consider that as well.

    Read the article

  • Oracle's Global Single Schema

    - by david.butler(at)oracle.com
    Maximizing business process efficiencies in a heterogeneous environment is very difficult. The difficulty stems from the fact that the various applications across the Information Technology (IT) landscape employ different integration standards, different message passing strategies, and different workflow engines. Vendors such as Oracle and others are delivering tools to help IT organizations manage the complexities introduced by these differences. But the one remaining intractable problem impacting efficient operations is the fact that these applications have different definitions for the same business data. Business data is your business information codified for computer programs to use. A good data model will represent the way your organization does business. The computer applications your organization deploys to improve operational efficiency are built to operate on the business data organized into this schema.  If the schema does not represent how you do business, the applications on that schema cannot provide the features you need to achieve the desired efficiencies. Business processes span these applications. Data problems break these processes rendering them far less efficient than they need to be to achieve organization goals. Thus, the expected return on the investment in these applications is never realized. The success of all business processes depends on the availability of accurate master data.  Clearly, the solution to this problem is to consolidate all the master data an organization uses to run its business. Then clean it up, augment it, govern it, and connect it back to the applications that need it. Until now, this obvious solution has been difficult to achieve because no one had defined a data model sufficiently broad, deep and flexible enough to support transaction processing on all key business entities and serve as a master superset to all other operational data models deployed in heterogeneous IT environments. Today, the situation has changed. Oracle has created an operational data model (aka schema) that can support accurate and consistent master data across heterogeneous IT systems. This is foundational for providing a way to consolidate and integrate master data without having to replace investments in existing applications. This Global Single Schema (GSS) represents a revolutionary breakthrough that allows for true master data consolidation. Oracle has deep knowledge of applications dating back to the early 1990s.  It developed applications in the areas of Supply Chain Management (SCM), Product Lifecycle Management (PLM), Enterprise Resource Planning (ERP), Customer Relationship Management (CRM), Human Capital Management (HCM), Financials and Manufacturing. In addition, Oracle applications were delivered for key industries such as Communications, Financial Services, Retail, Public Sector, High Tech Manufacturing (HTM) and more. Expertise in all these areas drove requirements for GSS. The following figure illustrates Oracle's unique position that enabled the creation of the Global Single Schema. GSS Requirements Gathering GSS defines all the key business entities and attributes including Customers, Contacts, Suppliers, Accounts, Products, Services, Materials, Employees, Installed Base, Sites, Assets, and Inventory to name just a few. In addition, Oracle delivers GSS pre-integrated with a wide variety of operational applications.  Business Process Automation EBusiness is about maximizing operational efficiency. 
At the highest level, these 'operations' span all that you do as an organization.  The following figure illustrates some of these high-level business processes. Enterprise Business Processes Supplies are procured. Assets are maintained. Materials are stored. Inventory is accumulated. Products and Services are engineered, produced and sold. Customers are serviced. And across this entire spectrum, Employees do the procuring, supporting, engineering, producing, selling and servicing. Not shown, but not to be overlooked, are the accounting and the financial processes associated with all this procuring, manufacturing, and selling activity. Supporting all these applications is the master data. When this data is fragmented and inconsistent, the business processes fail and inefficiencies multiply. But imagine having all the data under these operational business processes in one place. ·            The same accurate and timely customer data will be provided to all your operational applications from the call center to the point of sale. ·            The same accurate and timely supplier data will be provided to all your operational applications from supply chain planning to procurement. ·            The same accurate and timely product information will be available to all your operational applications from demand chain planning to marketing. You would have a single version of the truth about your assets, financial information, customers, suppliers, employees, products and services to support your business automation processes as they flow across your business applications. All company and partner personnel will access the same exact data entity across all your channels and across all your lines of business. Oracle's Global Single Schema enables this vision of a single version of the truth across the heterogeneous operational applications supporting the entire enterprise. Global Single Schema Oracle's Global Single Schema organizes hundreds of thousands of attributes into 165 major schema objects supporting over 180 business application modules. It is designed for international operations, and extensibility.  The schema is delivered with a full set of public Application Programming Interfaces (APIs) and an Integration Repository with modern Service Oriented Architecture interfaces to make data available as a services (DaaS) to business processes and enable operations in heterogeneous IT environments. ·         Key tables can be extended with unlimited numbers of additional attributes and attribute groups for maximum flexibility.  o    This enables model extensions that reflect business entities unique to your organization's operations. ·         The schema is multi-organization enabled so data manipulation can be controlled along organizational boundaries. ·         It uses variable byte Unicode to support over 31 languages. ·         The schema encodes flexible date and flexible address formats for easy localizations. No matter how complex your business is, Oracle's Global Single Schema can hold your business objects and support your global operations. Oracle's Global Single Schema identifies and defines the business objects an enterprise needs within the context of its business operations. The interrelationships between the business objects are also contained within the GSS data model. Their presence expresses fundamental business rules for the interaction between business entities. The following figure illustrates some of these connections.   
    Interconnected Business Entities Interconnected business processes require interconnected business data. No other MDM vendor has this capability. Everyone else has either one entity they can master or separate, disconnected models for various business entities. Higher-level integrations are made available, but that is a weak architectural alternative to data-level integration in this critically important aspect of Master Data Management.

    Read the article

  • Starting/Stopping Custom PHP Chat Server Linux Service (CentOS)

    - by chad
    I have been trying all night to get this service working properly. I created this script from a template and am very new to bash coding. I wrote a fully functioning chat server in php which runs endlessly, but now want to make it a dedicated service. I want to do this so that it starts on server boot and boots back up if possible when there are any down-times with the server. The issue is that I need this thing to run in a detached screen so that I can monitor packet data or send server commands via SSH when need-be. The main problem that i'm having is that it needs to have its own PID when it starts so that I can stop/restart it when needed. I am the type who grinds on coding until I figure it out, but this is so new to me that it seems the learning curve here is very steep and frustrating. Below is my code if anybody can please help me with this one, i've gotten so tired I can't even concentrate any more :( #!/bin/sh # # chatserver # # chkconfig: 345 20 90 # description: chatServer Linux Service Daemon \ # for general server handling ### BEGIN INIT INFO # Provides: chatserver # Required-Start: $local_fs $network $named $syslog # Required-Stop: $local_fs $syslog # Default-Start: 3 4 5 # Default-Stop: 0 1 2 6 # Short-Description: This service maintains the chatServer # Description: chatServer Linux Service Daemon # for general server handling ### END INIT INFO # Source function library. . /etc/rc.d/init.d/functions exec="screen php -q /var/www/html/chatServer.php" prog="chatserver" config="/etc/sysconfig/$prog" pidfile="/var/run/chatserver.pid" [ -e /etc/sysconfig/$prog ] && . /etc/sysconfig/$prog lockfile=/var/lock/subsys/$prog start() { #$exec || exit 5 echo -n $"Starting $prog: " daemon $exec --name=$exec --pidfile=$pidfile retval=$? echo [ $retval -eq 0 ] && touch $lockfile return $retval } stop() { echo -n $"Stopping $prog: " killproc -p $pidfile rm -f $pidfile retval=$? echo [ $retval -eq 0 ] && rm -f $lockfile return $retval } restart() { stop start } reload() { restart } force_reload() { restart } rh_status() { # run checks to determine if the service is running or use generic status status $prog } rh_status_q() { rh_status >/dev/null 2>&1 } case "$1" in start) rh_status_q && exit 0 $1 ;; stop) rh_status_q || exit 0 $1 ;; restart) $1 ;; reload) rh_status_q || exit 7 $1 ;; force-reload) force_reload ;; status) rh_status ;; condrestart|try-restart) rh_status_q || exit 0 restart ;; *) echo $"Usage: $0 {start|stop|status|restart|condrestart|try-restart|reload|force-reload}" exit 2 esac exit $?
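    As a hedged sketch only (not a drop-in replacement for the template above), start() can launch the PHP server inside a named, detached screen session and record the PID of the SCREEN process that owns it, while stop() tells that session to quit; the paths are the ones from the question:

      start() {
          # -dmS: start a detached screen session named "chatserver"
          screen -dmS chatserver php -q /var/www/html/chatServer.php
          # record the PID of the SCREEN process that owns the session
          screen -ls | awk '/\.chatserver/ {split($1, a, "."); print a[1]}' > /var/run/chatserver.pid
      }

      stop() {
          screen -S chatserver -X quit    # ends the session and its PHP child
          rm -f /var/run/chatserver.pid
      }

      # Reattach later for monitoring with: screen -r chatserver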

    Read the article

  • vsftpd chroot_local_user does nothing

    - by Reinderien
    I'm setting up a vsftpd server on: Linux 2.6.32-26-server #48-Ubuntu SMP Wed Nov 24 10:28:32 UTC 2010 x86_64 GNU/Linux When I set chroot_local_user=YES, there is no effect (I can still see / when I log in). There is nothing in syslog or /var/log/vsftpd.log to indicate what's wrong. I know that I'm editing the right conf file and that other settings do come into effect when I restart the daemon, because these work: ssl_enable=YES force_local_data_ssl=YES force_local_logins_ssl=YES Any idea what's wrong? Thanks. Edit: I've touched /etc/vsftpd.chroot_list for it to be empty (no chroot-denied users), and have added: chroot_list_enable=YES chroot_list_file=/etc/vsftpd.chroot_list Then to restart: sudo /etc/init.d/vsftpd restart Rather than invoking init scripts through /etc/init.d, use the service(8) utility, e.g. service vsftpd restart Since the script you are attempting to invoke has been converted to an Upstart job, you may also use the restart(8) utility, e.g. restart vsftpd vsftpd start/running, process 5606 Still no effect.
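    Two hedged checks, offered as assumptions rather than a confirmed fix: vsftpd takes its configuration file as a command-line argument, so it is worth confirming the running daemon was actually started against the file being edited:

      # Show the full command line of the running daemon, including any config path it was given
      ps -o args= -C vsftpd
      # Run it in the foreground against the edited file to rule out a stale copy being read
      sudo vsftpd /etc/vsftpd.conf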

    Read the article

  • Find out when all processes in a (void) method are done?

    - by Emil
    Hey. I need to know how you can find out when all processes (loaded) from a - (void) are done, if it's possible. Why? I'm loading in data for a UITableView, and I need to know when a Loading... view can be replaced with the UITableView, and when I can start creating the cells. This is my code: - (void) reloadData { NSAutoreleasePool *releasePool = [[NSAutoreleasePool alloc] init]; NSLog(@"Reloading data."); NSURL *urlPosts = [NSURL URLWithString:[NSString stringWithFormat:@"%@", URL]]; NSError *lookupError = nil; NSString *data = [[NSString alloc] initWithContentsOfURL:urlPosts encoding:NSUTF8StringEncoding error:&lookupError]; postsData = [data componentsSeparatedByString:@"~"]; [data release], data = nil; urlPosts = nil; self.numberOfPosts = [[postsData objectAtIndex:0] intValue]; self.postsArrayID = [[postsData objectAtIndex:1] componentsSeparatedByString:@"#"]; self.postsArrayDate = [[postsData objectAtIndex:2] componentsSeparatedByString:@"#"]; self.postsArrayTitle = [[postsData objectAtIndex:3] componentsSeparatedByString:@"#"]; self.postsArrayComments = [[postsData objectAtIndex:4] componentsSeparatedByString:@"#"]; self.postsArrayImgSrc = [[postsData objectAtIndex:5] componentsSeparatedByString:@"#"]; NSMutableArray *writeToPlist = [NSMutableArray array]; NSMutableArray *writeToNoImagePlist = [NSMutableArray array]; NSMutableArray *imagesStored = [NSMutableArray arrayWithContentsOfFile:[rootPath stringByAppendingPathComponent:@"imagesStored.plist"]]; int loop = 0; for (NSString *postID in postsArrayID) { if ([imagesStored containsObject:[NSString stringWithFormat:@"%@.png", postID]]){ NSLog(@"Allready stored, jump to next. ID: %@", postID); continue; } NSLog(@"%@.png", postID); NSData *imageData = [NSData dataWithContentsOfURL:[NSURL URLWithString:[postsArrayImgSrc objectAtIndex:loop]]]; // If image contains anything, set cellImage to image. If image is empty, try one more time or use noImage.png, set in IB if (imageData == nil){ NSLog(@"imageData is empty before trying .jpeg"); // If image == nil, try to replace .jpg with .jpeg, and if that worked, set cellImage to that image. If that is also nil, use noImage.png, set in IB. 
imageData = [NSData dataWithContentsOfURL:[NSURL URLWithString:[[postsArrayImgSrc objectAtIndex:loop] stringByReplacingOccurrencesOfString:@".jpg" withString:@".jpeg"]]]; } if (imageData != nil){ NSLog(@"imageData is NOT empty when creating file"); [fileManager createFileAtPath:[rootPath stringByAppendingPathComponent:[NSString stringWithFormat:@"images/%@.png", postID]] contents:imageData attributes:nil]; [writeToPlist addObject:[NSString stringWithFormat:@"%@.png", postID]]; } else { [writeToNoImagePlist addObject:[NSString stringWithFormat:@"%@", postID]]; } imageData = nil; loop++; NSLog(@"imagePlist: %@\nnoImagePlist: %@", writeToPlist, writeToNoImagePlist); } NSMutableArray *writeToAllPlist = [NSMutableArray arrayWithArray:writeToPlist]; [writeToPlist addObjectsFromArray:[NSArray arrayWithContentsOfFile:nowPlist]]; [writeToAllPlist addObjectsFromArray:[NSArray arrayWithContentsOfFile:[rootPath stringByAppendingPathComponent:@"imagesStored.plist"]]]; [writeToNoImagePlist addObjectsFromArray:[NSArray arrayWithContentsOfFile:[rootPath stringByAppendingPathComponent:@"noImage.plist"]]]; [writeToPlist writeToFile:nowPlist atomically:YES]; [writeToAllPlist writeToFile:[rootPath stringByAppendingPathComponent:@"imagesStored.plist"] atomically:YES]; [writeToNoImagePlist writeToFile:[rootPath stringByAppendingPathComponent:@"noImage.plist"] atomically:YES]; [releasePool release]; }

    Read the article

  • Node.js MMO - process and/or map division

    - by Gipsy King
    I am in the phase of designing a mmo browser based game (certainly not massive, but all connected players are in the same universe), and I am struggling with finding a good solution to the problem of distributing players across processes. I'm using node.js with socket.io. I have read this helpful article, but I would like some advice since I am also concerned with different processes. Solution 1: Tie a process to a map location (like a map-cell), connect players to the process corresponding to their location. When a player performs an action, transmit it to all other players in this process. When a player moves away, he will eventually have to connect to another process (automatically). Pros: Easier to implement Cons: Must divide map into zones Player reconnection when moving into a different zone is probably annoying If one zone/process is always busy (has players in it), it doesn't really load-balance, unless I split the zone which may not be always viable There shouldn't be any visible borders Solution 1b: Same as 1, but connect processes of bordering cells, so that players on the other side of the border are visible and such. Maybe even let them interact. Solution 2: Spawn processes on demand, unrelated to a location. Have one special process to keep track of all connected player handles, their location, and the process they're connected to. Then when a player performs an action, the process finds all other nearby players (from the special player-process-location tracking node), and instructs their matching processes to relay the action. Pros: Easy load balancing: spawn more processes Avoids player reconnecting / borders between zones Cons: Harder to implement and test Additional steps of finding players, and relaying event/action to another process If the player-location-process tracking process fails, all other fail too I would like to hear if I'm missing something, or completely off track.

    Read the article

  • How to auto-restart a python script on fail?

    - by norm
    This post describes how to keep a child process alive in a BASH script: http://stackoverflow.com/questions/696839/how-do-i-write-a-bash-script-to-restart-a-process-if-it-dies This worked great for calling another BASH script. However, I tried executing something similar where the child process is a Python script: #!/bin/bash PYTHON=/usr/bin/python2.6 function myprocess { $PYTHON daemon.py start } NOW=$(date +"%b-%d-%y") until myprocess; do echo "$NOW Prog crashed. Restarting..." >> error.txt sleep 1 done Now the behaviour is completely different. It seems the Python script is no longer a child of the bash script but appears to have 'taken over' the BASH script's PID - so there is no longer a BASH wrapper around the called script... why?
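    One hedged guess, given the 'start' argument: daemon.py may be daemonizing itself (double-forking), so myprocess returns immediately and the loop never supervises the real process. If the script supports a foreground mode (the 'run' argument below is hypothetical), the supervision loop would look like this:

      #!/bin/bash
      PYTHON=/usr/bin/python2.6

      # Keep the Python process in the foreground so the loop sees it exit
      until "$PYTHON" daemon.py run; do
          status=$?
          echo "$(date +"%b-%d-%y %T") daemon.py exited with status $status; restarting..." >> error.txt
          sleep 1
      done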

    Read the article

  • Why does my Xcode default font start to look ugly after some time, until I restart?

    - by mystify
    I plugged in an external monitor. All resolutions match perfectly. The MacBook Pro LCD is closed. After about 10 minutes my fonts in Xcode start to look very bad. Only in Xcode. When I restart the Mac and don't use an external monitor, fonts look all right again. When I attach the monitor again, fonts look nice. Then I close Xcode and reopen it: fonts suck. All other fonts look great. It seems like Xcode isn't antialiasing them properly after something happens. From my observation, it happens when I quit and reopen Xcode while an external monitor is in use. The only way to fix it then is to completely reboot. Is there a fix for this problem?

    Read the article

  • How do I restore an Apache configuration after a hard restart?

    - by iPhoneARguy
    I installed Apache on my Ubuntu server, and when it installed, it created a directory ~/Apache/ with directories such as htdocs, logs, cgi-bin, etc. I was then able to create several pages and scripts which I placed into the corresponding directories (~/Apache/htdocs, ~/Apache/cgi-bin), and I was able to serve these files on the web. Unfortunately, my server had an unexpected hard restart, and since then, it seems like the files aren't being served. The server is still up, but when I navigate to the URL, I get a page that says "It works!" instead of what was expected. Does anyone know how I can get Apache to serve from ~/Apache/htdocs again? I tried to RTFM and Google for an answer already but it wasn't very helpful.
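    The stock "It works!" page usually means Apache came back up serving a default document root rather than the one pointing at ~/Apache/htdocs. A hedged way to check what the running server is actually using (the paths assume an httpd build installed under ~/Apache):

      # Which Apache binary came back after the reboot, and with what arguments?
      ps aux | grep -E '[h]ttpd|[a]pache2'
      # Assuming the build under ~/Apache: list vhosts and the config file in use
      ~/Apache/bin/apachectl -S
      # Where does DocumentRoot point in that configuration?
      grep -R "DocumentRoot" ~/Apache/conf/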

    Read the article

  • Partition Wise Joins II

    - by jean-pierre.dijcks
    One of the things that I did not talk about in the initial partition wise join post was the effect it has on resource allocation on the database server. When Oracle applies a different join method - e.g. not PWJ - what you will see in SQL Monitor (in Enterprise Manager) or in an Explain Plan is a set of producers and a set of consumers. The producers scan the tables in the join. If there are two tables the producers first scan one table, then the other. The producers thus provide data to the consumers, and when the consumers have the data from both scans they do the join and give the data to the query coordinator. Now that behavior means that if you choose a degree of parallelism of 4 to run such a query with, Oracle will allocate 8 parallel processes. Of these 8 processes 4 are producers and 4 are consumers. The consumers only actually do work once the producers are fully done with scanning both sides of the join. In the plan above you can see that the producers access table SALES [line 11] and then do a PX SEND [line 9]. That is the producer set of processes working. The consumers receive that data [line 8] and twiddle their thumbs while the producers go on and scan CUSTOMERS. The producers send that data to the consumer indicated by PX SEND [line 5]. After receiving that data [line 4] the consumers do the actual join [line 3] and give the data to the QC [line 2]. BTW, the myth that you see twice the number of processes due to the setting PARALLEL_THREADS_PER_CPU=2 is obviously not true. The above is why you will see 2 times the processes of the DOP. In a PWJ plan the consumers are not present. Instead of producing rows and giving those to different processes, a PWJ only uses a single set of processes. Each process reads its piece of the join across the two tables and performs the join. The plan here is notably different from the initial plan. First of all the hash join is done right on top of both table scans [line 8]. This query is a little more complex than the previous one so there is a bit of noise above that bit of info, but for this post, let's ignore that (sort stuff). The important piece here is that the PWJ plan will typically be faster and, in terms of PX process numbers / resources, typically cheaper. You may want to look out for those plans and try to get those to appear a lot... CREDITS: credits for the plans and some of the info on the plans go to Maria, as she actually produced these plans and is the expert on plans in general... You can see her talk about explaining the explain plan and other optimizer stuff over here: ODTUG in Washington DC, June 27 - July 1 On the Optimizer blog At OpenWorld in San Francisco, September 19 - 23 Happy joining and hope to see you all at ODTUG and OOW...

    Read the article
