Search Results

Search found 5195 results on 208 pages for 'shift reduce conflict'.

  • How do you fix an SVN 409 Conflict Error

    - by NerdStarGamer
    I used to use SVN 1.4 on OS X Leopard and everything was fine. A couple of weeks ago I installed a fresh copy of OS X 10.6. The version of SVN that comes with Snow Leopard is 1.6.5; I went ahead and built my own copy of 1.6.6. I'm using the built-in Apache server and just hosting repositories locally. Everything appeared to work fine until I actually tried to commit something. Every time I try to commit a change, I get the following message:

        Transmitting file data .svn: Commit failed (details follow):
        svn: MERGE of '/svn/svn2': 409 Conflict (http://localhost)

    This happens with my old repositories, so I created a couple of new ones. Same deal. I also tried using the 1.6.5 version that comes with the system: same result. Finally, I tried upgrading to the latest stable SVN (1.6.9) and still got the same problem. Apache logs the following for each failed commit:

        [Mon Mar 29 19:53:10 2010] [error] [client ::1] Could not MERGE resource "/svn/svn2/!svn/act/d399326f-c20f-424f-bb68-3bb40503b5b1" into "/svn/svn2". [409, #0]
        [Mon Mar 29 19:53:10 2010] [error] [client ::1] An error occurred while committing the transaction. [409, #2]
        [Mon Mar 29 19:53:10 2010] [error] [client ::1] Can't open directory '/usr/local/svn/svn2/db/transactions/5-6.txn/\xeb\xa9\x0f\x1f': No such file or directory [409, #2]
        [Mon Mar 29 19:53:11 2010] [error] [client ::1] Could not DELETE /svn/svn2/!svn/act/d399326f-c20f-424f-bb68-3bb40503b5b1. [500, #0]
        [Mon Mar 29 19:53:11 2010] [error] [client ::1] could not open transaction. [500, #2]
        [Mon Mar 29 19:53:11 2010] [error] [client ::1] Can't open file '/usr/local/svn/svn2/db/transactions/5-6.txn/props': No such file or directory [500, #2]

    And from the access log:

        ::1 - - [30/Mar/2010:13:02:20 -0400] "OPTIONS /svn/svn2 HTTP/1.1" 401 401
        ::1 - user [30/Mar/2010:13:02:20 -0400] "OPTIONS /svn/svn2 HTTP/1.1" 200 188
        ::1 - user [30/Mar/2010:13:02:20 -0400] "PROPFIND /svn/svn2 HTTP/1.1" 207 647
        ::1 - user [30/Mar/2010:13:02:20 -0400] "PROPFIND /svn/svn2 HTTP/1.1" 207 647
        ::1 - user [30/Mar/2010:13:02:20 -0400] "PROPFIND /svn/svn2/!svn/vcc/default HTTP/1.1" 207 398
        ::1 - user [30/Mar/2010:13:02:20 -0400] "PROPFIND /svn/svn2/!svn/bln/6 HTTP/1.1" 207 449
        ::1 - user [30/Mar/2010:13:02:20 -0400] "REPORT /svn/svn2/!svn/vcc/default HTTP/1.1" 200 1172

    Curiously, the commit does actually commit the changes, but the working copy doesn't see that and everything gets screwy. I've tried to Google every variation of this problem I can think of, but the search results are pretty much useless. I'm not using TortoiseSVN or anything special, and commits fail on a new repository, so I know it's not a problem with my old repos. Any help would be greatly appreciated.
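
    A reasonable first step, not from the original post, is to rule out repository corruption and permissions; a minimal sketch, assuming the repositories live under /usr/local/svn and Apache runs as OS X's _www user:

        # check the existing repository's integrity
        svnadmin verify /usr/local/svn/svn2
        # create a fresh FSFS repository to test against a known-good store
        svnadmin create --fs-type fsfs /usr/local/svn/svn3
        # the Apache user must be able to write the db/transactions directory
        sudo chown -R _www:_www /usr/local/svn/svn3

    The garbled transaction path in the error log (db/transactions/5-6.txn/\xeb\xa9\x0f\x1f) suggests the transaction directory is being created or read with the wrong permissions, so ownership is worth checking before blaming the SVN version.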

  • SQL timetable for employee

    - by latinunit-net
    Hi guys, I need to create an employee shift database. So far I have three tables: employee, employee_shift, and shift. I'm supposed to calculate how many shifts an employee has worked by the end of the month. My question: since months have 28, 30, or 31 days, does that mean I need to create 31 different variations in the shift table, one for each day of the month, in order to work out which employee has worked the most? My business rules say an employee works either 1 or 2 shifts per day; does that mean I need 60 different rows of variations? Am I right, or is there an easier way to work it out?
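
    One way to avoid enumerating days, sketched here with assumed table and column names (none of them come from the original post): keep the shift table small (one row per shift type) and record the calendar date on each employee_shift row, then count rows per employee for the month.

        -- a minimal sketch; names and types are assumptions
        CREATE TABLE shift (
            shift_id INT PRIMARY KEY,
            name     VARCHAR(20)        -- e.g. 'morning', 'evening'
        );

        CREATE TABLE employee_shift (
            employee_id INT  NOT NULL,
            shift_id    INT  NOT NULL REFERENCES shift(shift_id),
            work_date   DATE NOT NULL,  -- the date carries the 28/30/31-day variation
            PRIMARY KEY (employee_id, shift_id, work_date)
        );

        -- shifts worked per employee in a given month
        SELECT employee_id, COUNT(*) AS shifts_worked
        FROM employee_shift
        WHERE work_date >= '2010-03-01' AND work_date < '2010-04-01'
        GROUP BY employee_id
        ORDER BY shifts_worked DESC;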

  • .BAT Switches Parsing

    - by giiYanJ
    Erm... I'm working on switch parsing in BAT files, where a script is invoked like this:

        <command> -a -b -x -y -z -u abc ...

    The user may pass many switches, or none, so I used a SHIFT loop to make parsing an arbitrary number of switches possible:

        :loop
        IF "%1"=="-a" ...
        SHIFT
        GOTO loop

    But when the script ends, cmd always tries to execute the switches themselves and shows errors like "'-n' is not recognized as an internal or external command". Does anyone have an idea? Thanks a lot. P/S: please keep the solution in a BAT script if possible; using another language would cause dependency problems, as this script is aimed at ANY computer running Windows. Finally, thanks again =) EDIT: I tried shf301's suggestion and found the cause: I use DEL %0 so the script deletes itself, but %0 seems to get shifted into the arguments by the SHIFT command.
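
    A loop shape that terminates cleanly and leaves %0 alone, offered as a sketch (the switch names are placeholders): test for an empty %1 first, and use SHIFT /1, which starts shifting at the first argument so %0 keeps pointing at the script itself.

        @ECHO OFF
        :loop
        IF "%~1"=="" GOTO done
        IF "%~1"=="-a" ECHO option a is set
        IF "%~1"=="-b" ECHO option b is set
        REM SHIFT /1 leaves %0 untouched, so a later DEL %0 still
        REM refers to this script rather than a shifted argument
        SHIFT /1
        GOTO loop
        :done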

  • Resolve naming conflict in included XSDs for JAXB compilation

    - by Jason Faust
    I am currently trying to compile a pair of schema files into the same package with JAXB (IBM build 2.1.3). Each will compile on its own, but when I try to compile them together I get an element naming conflict due to includes. My question is: is there a way to resolve the naming collision with an external binding? Example files follow; the offending element is called "Common" and is defined in both incA and incB.

    incA.xsd:

        <?xml version="1.0" encoding="UTF-8"?>
        <schema xmlns="http://www.w3.org/2001/XMLSchema" targetNamespace="http://www.example.org/"
                xmlns:tns="http://www.example.org/" elementFormDefault="qualified">
          <complexType name="TypeA">
            <sequence>
              <element name="ElementA" type="string"></element>
            </sequence>
          </complexType>
          <!-- Conflicting element -->
          <element name="Common" type="tns:TypeA"></element>
        </schema>

    incB.xsd:

        <?xml version="1.0" encoding="UTF-8"?>
        <schema xmlns="http://www.w3.org/2001/XMLSchema" targetNamespace="http://www.example.org/"
                xmlns:tns="http://www.example.org/" elementFormDefault="qualified">
          <complexType name="TypeB">
            <sequence>
              <element name="ElementB" type="int"></element>
            </sequence>
          </complexType>
          <!-- Conflicting element -->
          <element name="Common" type="tns:TypeB"></element>
        </schema>

    A.xsd:

        <?xml version="1.0" encoding="UTF-8"?>
        <schema targetNamespace="http://www.example.org/" elementFormDefault="qualified"
                xmlns="http://www.w3.org/2001/XMLSchema" xmlns:tns="http://www.example.org/">
          <include schemaLocation="incA.xsd"></include>
          <complexType name="A">
            <sequence>
              <element ref="tns:Common"></element>
            </sequence>
          </complexType>
        </schema>

    B.xsd:

        <?xml version="1.0" encoding="UTF-8"?>
        <schema targetNamespace="http://www.example.org/" elementFormDefault="qualified"
                xmlns="http://www.w3.org/2001/XMLSchema" xmlns:tns="http://www.example.org/">
          <include schemaLocation="incB.xsd"></include>
          <complexType name="B">
            <sequence>
              <element ref="tns:Common"></element>
            </sequence>
          </complexType>
        </schema>

    Compiler error when both are compiled in one invocation of xjc:

        [ERROR] 'Common' is already defined
          line 9 of file:/C:/temp/incB.xsd
        [ERROR] (related to above error) the first definition appears here
          line 9 of file:/C:/temp/incA.xsd

    (For reference, this is a generalization of an issue with compiling the OAGIS8 SP3 package.)
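
    For what it's worth, an external binding customization file normally has the shape sketched below (the file name and method names are invented, and this is untested against IBM's build). A caveat: two global elements with the same name in one target namespace collide at the schema level, so xjc may reject the pair before any bindings apply; if so, the fix has to happen in the schemas themselves or via separate compilation episodes.

        <!-- bindings.xjb; a sketch, not a verified fix -->
        <jaxb:bindings version="2.1"
                       xmlns:jaxb="http://java.sun.com/xml/ns/jaxb"
                       xmlns:xs="http://www.w3.org/2001/XMLSchema">
          <jaxb:bindings schemaLocation="incA.xsd" node="//xs:element[@name='Common']">
            <jaxb:factoryMethod name="CommonA"/>
          </jaxb:bindings>
          <jaxb:bindings schemaLocation="incB.xsd" node="//xs:element[@name='Common']">
            <jaxb:factoryMethod name="CommonB"/>
          </jaxb:bindings>
        </jaxb:bindings>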

  • Need help implementing this algorithm with map reduce(hadoop)

    - by Julia
    Hi all! I have an algorithm that goes through a large data set, reads some text files, and searches for specific terms in those lines. I have it implemented in Java, but I didn't want to post code so that it doesn't look like I'm searching for someone to implement it for me; but it's true, I really need a lot of help! This was not planned for my project, but the data set turned out to be huge, so my teacher told me I have to do it like this. I was reading about MapReduce and thought that I would first do the standard implementation and then it would be more or less easy to do it with MapReduce. But that didn't happen, since the algorithm is quite stupid and nothing special, and MapReduce... I can't wrap my mind around it. So here, shortly, is the pseudocode of my algorithm:

        LIST termList   (a method creates this list from a Lucene index)
        FOLDER topFolder

        INPUT topFolder
        IF it is a folder and not empty
            list files (there are 30 sub folders inside)
            FOR EACH sub folder
                GET file "CheckedFile.txt"
                analyze(CheckedFile)
            ENDFOR
        END IF

        Method ANALYZE(CheckedFile)
            read CheckedFile
            WHILE CheckedFile has next line
                GET line
                FOR (loops through termList)
                    GET third word from line
                    IF third word = term from list
                        append whole line to string buffer
                    ENDIF
                ENDFOR
            END WHILE
            OUTPUT string buffer to file

    Also, as you can see, each time "analyze" is called a new file has to be created, and I understood that MapReduce has trouble writing to many outputs? I understand the MapReduce intuition, and my example seems perfectly suited for MapReduce, but when it comes to doing it, obviously I do not know enough and I am STUCK! Please, please help.
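
    As a sketch of how the line filtering could map onto Hadoop (the class name and the "filter.terms" job property are invented for illustration, not taken from the original post): the mapper loads the term list once in setup() and emits a (term, line) pair for every match.

        // a minimal sketch against the org.apache.hadoop.mapreduce API
        import java.io.IOException;
        import java.util.HashSet;
        import java.util.Set;
        import org.apache.hadoop.io.LongWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapreduce.Mapper;

        public class TermFilterMapper extends Mapper<LongWritable, Text, Text, Text> {
            private final Set<String> terms = new HashSet<String>();

            @Override
            protected void setup(Context context) {
                // "filter.terms" is an assumed job property holding comma-separated terms
                for (String t : context.getConfiguration().get("filter.terms").split(",")) {
                    terms.add(t);
                }
            }

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                String line = value.toString();
                String[] words = line.split("\\s+");
                // the original algorithm compares the third word of each line
                if (words.length >= 3 && terms.contains(words[2])) {
                    context.write(new Text(words[2]), new Text(line));
                }
            }
        }

    A reducer keyed on the term then concatenates the matched lines, and org.apache.hadoop.mapreduce.lib.output.MultipleOutputs can write each term's lines to its own file, which addresses the many-outputs worry.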

  • How to reduce a data frame keeping the order for other columns

    - by betabandido
    I am trying to reduce a data frame using the max function on a given column. I would like to preserve the other columns, but keep the values from the same rows where each maximum value was selected. An example will make this explanation easier. Assume we have the following data frame:

        dframe <- data.frame(list(BENCH=sort(rep(letters[1:4], 4)),
                                  CFG=rep(1:4, 4),
                                  VALUE=runif(4 * 4)))

    This gives me:

           BENCH CFG      VALUE
        1      a   1 0.98828096
        2      a   2 0.19630597
        3      a   3 0.83539540
        4      a   4 0.90988296
        5      b   1 0.01191147
        6      b   2 0.35164194
        7      b   3 0.55094787
        8      b   4 0.20744004
        9      c   1 0.49864470
        10     c   2 0.77845408
        11     c   3 0.25278871
        12     c   4 0.23440847
        13     d   1 0.29795494
        14     d   2 0.91766057
        15     d   3 0.68044728
        16     d   4 0.18448748

    Now, I want to reduce the data by selecting the maximum VALUE for each different BENCH:

        aggregate(VALUE ~ BENCH, dframe, FUN=max)

    This gives me the expected result:

          BENCH     VALUE
        1     a 0.9882810
        2     b 0.5509479
        3     c 0.7784541
        4     d 0.9176606

    Next, I tried to preserve the other columns:

        aggregate(cbind(VALUE, CFG) ~ BENCH, dframe, FUN=max)

    This reduction returns:

          BENCH     VALUE CFG
        1     a 0.9882810   4
        2     b 0.5509479   4
        3     c 0.7784541   4
        4     d 0.9176606   4

    Both VALUE and CFG are reduced using the max function, but this is not what I want. In this example I would like to obtain:

          BENCH     VALUE CFG
        1     a 0.9882810   1
        2     b 0.5509479   3
        3     c 0.7784541   2
        4     d 0.9176606   2

    where CFG is not reduced, but simply keeps the value associated with the maximum VALUE for each different BENCH. How could I change my reduction to obtain this last result?
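
    One base-R answer, sketched under the assumption that ties in VALUE are not a concern: aggregate to the per-BENCH maxima first, then merge back against the original frame so each max row brings its CFG along.

        # reduce, then join the maxima back to recover the other columns
        maxima <- aggregate(VALUE ~ BENCH, dframe, FUN = max)
        merge(maxima, dframe)    # merges on BENCH and VALUE, keeping matching CFG

        # an equivalent one-liner: flag each group's max row with ave()
        dframe[dframe$VALUE == ave(dframe$VALUE, dframe$BENCH, FUN = max), ]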

  • Firefox/Google Chrome extension to darken pages & reduce eye strain

    - by megafish
    Is there an extension or add-on like Stylish which lets you easily toggle back and forth between the affected (Stylish) and the standard (untainted) view? I've tried changing colors in Firefox (Settings > Content > Colors) but there is no quick toggle between the two states. Firefox or Google Chrome, whichever one has the extension; it doesn't matter which, since I'll switch to using that one as my primary development browser.

  • ShareWebDb_log.ldf is 97GB - How to reduce?

    - by pierre
    We run an SBS 2008 server with WSS. On the drive I have set aside for WSS, I'm fast running out of space due to the ShareWebDb_log.ldf file. I've tried to do what I've read online (change the recovery mode, back up and truncate the log) but I can't actually see how to do this via the SQL Server Management Studio tool. Can anyone shed any light? The database for this file does not show up in the list, and I cannot expand Management > SQL Server Logs (because the log is too large?).
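
    For reference, the T-SQL form of the usual shrink procedure looks like the sketch below; the logical log file name is an assumption, so confirm it with sp_helpfile first. Note also that WSS databases on SBS typically live in the Windows Internal Database instance (\\.\pipe\MSSQL$MICROSOFT##SSEE\sql\query), which SSMS has to be pointed at explicitly; that may be why the database doesn't appear in the list.

        -- run against the instance that actually hosts ShareWebDb
        USE ShareWebDb;
        EXEC sp_helpfile;  -- confirm the log file's logical name first
        ALTER DATABASE ShareWebDb SET RECOVERY SIMPLE;
        DBCC SHRINKFILE ('ShareWebDb_log', 1024);  -- target size in MB; name assumed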

  • Sharing disk volumes across OpenVZ guests to reduce Package Management Overhead

    - by andyortlieb
    Is it feasible to create a single "master" OpenVZ guest that would only be used for package management, and use something like mount --bind on several other OpenVZ guests to sort of trick them into using the environment installed by the master guest? The point of this would be that users can maintain their own containers and yet stay in sync with the master development environment, so they'll always have the latest and greatest requirements without worrying too much about system administration. If they need to install their own packages, they could put them in /opt or /usr/local (or set a path to their home directory). To rephrase: I would like several (developers', for example) OpenVZ guests whose /bin, /usr (and so on) actually refer to the same disk location as that of a master OpenVZ guest, which can be started up to install and update common packages for the environment shared by this whole group of guests. For what it's worth, we're running Debian 6. Edit: I have tried mounting (bind, and read-only) /bin, /lib, /sbin and /usr in this fashion, and it refuses to start the containers, stating that files are already mounted or otherwise in use:

        Starting container ...
        vzquota : (error) Quota on syscall for id 1102: Device or resource busy
        vzquota : (error) Possible reasons:
        vzquota : (error)  - Container's root is already mounted
        vzquota : (error)  - there are opened files inside Container's private area
        vzquota : (error)  - your current working directory is inside Container's
        vzquota : (error)    private area
        vzquota : (error) Currently used file(s):
        /var/lib/vz/private/1102/sbin
        /var/lib/vz/private/1102/usr
        /var/lib/vz/private/1102/lib
        /var/lib/vz/private/1102/bin
        vzquota on failed [3]

    If I unmount these four volumes, start the guest, and then mount them after the guest has started, the guest never sees them mounted.
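
    OpenVZ supports a per-container mount hook that runs at "vzctl start", which is the usual place for bind mounts into a container; mounting onto the container's root ($VE_ROOT) rather than its private area should also sidestep the vzquota complaint. A sketch, assuming container ID 1102 and a master tree mounted at /var/lib/vz/root/1101:

        #!/bin/bash
        # /etc/vz/conf/1102.mount; paths and IDs here are assumptions
        . /etc/vz/vz.conf
        . ${VE_CONFFILE}
        MASTER=/var/lib/vz/root/1101    # the master guest's mounted root
        for d in bin lib sbin usr; do
            mount -n --bind ${MASTER}/${d} ${VE_ROOT}/${d}
            # a plain bind ignores "ro"; read-only needs a remount
            mount -n -o remount,ro,bind ${MASTER}/${d} ${VE_ROOT}/${d}
        done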

  • Reduce Mac OS X's Flash CPU usage

    - by elhombre
    I have been experiencing high CPU usage (138%) on my MacBook while watching Flash videos in Firefox. The most noticeable symptoms are loud fan noise and a hot MacBook, which is very annoying. Does anyone know how to solve this problem, or a workaround of any kind?

  • How to reduce RAM consumption when my server is idle

    - by Julien Genestoux
    We use Slicehost, with 512 MB instances, running Ubuntu 9.10. I installed a few packages, and I'm now trying to optimize RAM consumption before running anything on there. A simple ps gives me the list of running processes:

        # ps faux
        USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
        root 2 0.0 0.0 0 0 ? S< Jan04 0:00 [kthreadd]
        root 3 0.0 0.0 0 0 ? S< Jan04 0:15 \_ [migration/0]
        root 4 0.0 0.0 0 0 ? S< Jan04 0:01 \_ [ksoftirqd/0]
        root 5 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [watchdog/0]
        root 6 0.0 0.0 0 0 ? S< Jan04 0:04 \_ [events/0]
        root 7 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [cpuset]
        root 8 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [khelper]
        root 9 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [async/mgr]
        root 10 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [xenwatch]
        root 11 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [xenbus]
        root 13 0.0 0.0 0 0 ? S< Jan04 0:02 \_ [migration/1]
        root 14 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [ksoftirqd/1]
        root 15 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [watchdog/1]
        root 16 0.0 0.0 0 0 ? S< Jan04 0:07 \_ [events/1]
        root 17 0.0 0.0 0 0 ? S< Jan04 0:02 \_ [migration/2]
        root 18 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [ksoftirqd/2]
        root 19 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [watchdog/2]
        root 20 0.0 0.0 0 0 ? R< Jan04 0:07 \_ [events/2]
        root 21 0.0 0.0 0 0 ? S< Jan04 0:04 \_ [migration/3]
        root 22 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [ksoftirqd/3]
        root 23 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [watchdog/3]
        root 24 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [events/3]
        root 25 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [kintegrityd/0]
        root 26 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [kintegrityd/1]
        root 27 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [kintegrityd/2]
        root 28 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [kintegrityd/3]
        root 29 0.0 0.0 0 0 ? S< Jan04 0:01 \_ [kblockd/0]
        root 30 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [kblockd/1]
        root 31 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [kblockd/2]
        root 32 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [kblockd/3]
        root 33 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [kseriod]
        root 34 0.0 0.0 0 0 ? S Jan04 0:00 \_ [khungtaskd]
        root 35 0.0 0.0 0 0 ? S Jan04 0:05 \_ [pdflush]
        root 36 0.0 0.0 0 0 ? S Jan04 0:06 \_ [pdflush]
        root 37 0.0 0.0 0 0 ? S< Jan04 1:02 \_ [kswapd0]
        root 38 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [aio/0]
        root 39 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [aio/1]
        root 40 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [aio/2]
        root 41 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [aio/3]
        root 42 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [jfsIO]
        root 43 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [jfsCommit]
        root 44 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [jfsCommit]
        root 45 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [jfsCommit]
        root 46 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [jfsCommit]
        root 47 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [jfsSync]
        root 48 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [xfs_mru_cache]
        root 49 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [xfslogd/0]
        root 50 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [xfslogd/1]
        root 51 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [xfslogd/2]
        root 52 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [xfslogd/3]
        root 53 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [xfsdatad/0]
        root 54 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [xfsdatad/1]
        root 55 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [xfsdatad/2]
        root 56 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [xfsdatad/3]
        root 57 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [xfsconvertd/0]
        root 58 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [xfsconvertd/1]
        root 59 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [xfsconvertd/2]
        root 60 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [xfsconvertd/3]
        root 61 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [glock_workqueue]
        root 62 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [glock_workqueue]
        root 63 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [glock_workqueue]
        root 64 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [glock_workqueue]
        root 65 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [delete_workqueu]
        root 66 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [delete_workqueu]
        root 67 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [delete_workqueu]
        root 68 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [delete_workqueu]
        root 69 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [kslowd]
        root 70 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [kslowd]
        root 71 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [crypto/0]
        root 72 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [crypto/1]
        root 73 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [crypto/2]
        root 74 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [crypto/3]
        root 77 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [net_accel/0]
        root 78 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [net_accel/1]
        root 79 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [net_accel/2]
        root 80 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [net_accel/3]
        root 81 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [sfc_netfront/0]
        root 82 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [sfc_netfront/1]
        root 83 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [sfc_netfront/2]
        root 84 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [sfc_netfront/3]
        root 310 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [kstriped]
        root 315 0.0 0.0 0 0 ? S< Jan04 0:00 \_ [ksnapd]
        root 1452 0.0 0.0 0 0 ? S< Jan04 4:31 \_ [kjournald]
        root 1 0.0 0.1 19292 948 ? Ss Jan04 0:15 /sbin/init
        root 1545 0.0 0.1 13164 1064 ? S Jan04 0:00 upstart-udev-bridge --daemon
        root 1547 0.0 0.1 17196 996 ? S<s Jan04 0:00 udevd --daemon
        root 1728 0.0 0.2 20284 1468 ? S< Jan04 0:00 \_ udevd --daemon
        root 1729 0.0 0.1 17192 792 ? S< Jan04 0:00 \_ udevd --daemon
        root 1881 0.0 0.0 8192 152 ? Ss Jan04 0:00 dd bs=1 if=/proc/kmsg of=/var/run/rsyslog/kmsg
        syslog 1884 0.0 0.2 185252 1200 ? Sl Jan04 1:00 rsyslogd -c4
        103 1894 0.0 0.1 23328 700 ? Ss Jan04 1:08 dbus-daemon --system --fork
        root 2046 0.0 0.0 136 32 ? Ss Jan04 4:05 runsvdir -P /etc/service log: gems/custom_require.rb:31:in `require'??from /mnt/app/superfeedr-firehoser/current/script/component:52?/opt/ruby-enterprise/lib/ruby/si
        root 2055 0.0 0.0 112 32 ? Ss Jan04 0:00 \_ runsv chef-client
        root 2060 0.0 0.0 132 40 ? S Jan04 0:02 |  \_ svlogd -tt ./main
        root 2056 0.0 0.0 112 28 ? Ss Jan04 0:20 \_ runsv superfeedr-firehoser_2
        root 2059 0.0 0.0 132 40 ? S Jan04 0:29 |  \_ svlogd /var/log/superfeedr-firehoser_2
        root 2057 0.0 0.0 112 28 ? Ss Jan04 0:20 \_ runsv superfeedr-firehoser_1
        root 2062 0.0 0.0 132 44 ? S Jan04 0:26    \_ svlogd /var/log/superfeedr-firehoser_1
        root 2058 0.0 0.0 18708 316 ? Ss Jan04 0:01 cron
        root 2095 0.0 0.1 49072 764 ? Ss Jan04 0:06 /usr/sbin/sshd
        root 9832 0.0 0.5 78916 3500 ? Ss 00:37 0:00 \_ sshd: root@pts/0
        root 9846 0.0 0.3 17900 2036 pts/0 Ss 00:37 0:00    \_ -bash
        root 10132 0.0 0.1 15020 1064 pts/0 R+ 09:51 0:00       \_ ps faux
        root 2180 0.0 0.0 5988 140 tty1 Ss+ Jan04 0:00 /sbin/getty -8 38400 tty1
        root 27610 0.0 1.4 47060 8436 ? S Apr04 2:21 python /usr/sbin/denyhosts --daemon --purge --config=/etc/denyhosts.conf --config=/etc/denyhosts.conf
        root 22640 0.0 0.7 119244 4164 ? Ssl Apr05 0:05 /usr/sbin/console-kit-daemon
        root 10113 0.0 0.0 3904 316 ? Ss 09:46 0:00 /usr/sbin/collectdmon -P /var/run/collectdmon.pid -- -C /etc/collectd/collectd.conf
        root 10114 0.0 0.2 201084 1464 ? Sl 09:46 0:00 \_ collectd -C /etc/collectd/collectd.conf -f

    As you can see, there is nothing serious here. If I sum up the RSS column of all this, I get the following:

        # ps -aeo rss | awk '{sum+=$1} END {print sum}'
        30096

    Which makes sense. However, I get a pretty big surprise when I do a free:

        # free
                     total       used       free     shared    buffers     cached
        Mem:        591180     343684     247496          0      25432     161256
        -/+ buffers/cache:     156996     434184
        Swap:      1048568          0    1048568

    As you can see, 60% of the available memory is already consumed, which leaves me with only 40% to run my own applications if I want to avoid swapping. Quite disappointing! Two questions arise: where is all this memory? And how do I take some of it back for my own apps?
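
    For what it's worth, the two free rows above already reconcile once buffers and cache are counted as reclaimable; a quick check against the numbers shown:

        343684 (used) - 25432 (buffers) - 161256 (cached) = 156996 KB held by processes

    which is exactly the "-/+ buffers/cache" used figure, about 26% of the 591180 KB total rather than 60%. The kernel hands cache back to applications on demand, so that memory is not really lost.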

  • How to reduce disk thrashing (paging)?

    - by skevar7
    I have 4 GB of RAM, but Windows still thrashes the disk sometimes, especially when an application has been minimized for a while and I then activate it again. Completely stupid, because Task Manager shows 2 GB of RAM free. Is there any way to prevent Windows from swapping out program memory? I tried setting Superfetch to cache startup files only (it helped a bit) and turning off the paging file (it helped a lot, and worked well for me in Windows XP; but Windows Vista/Windows 7 don't really allow that: they show a "low on memory" message frequently, even when I have 1 GB of RAM free). What can you advise me to do?
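
    One commonly cited registry tweak, shown only as a sketch: it keeps kernel-mode code and drivers resident, but it does not stop user applications from being paged, so at best it softens the problem.

        Windows Registry Editor Version 5.00

        ; keeps kernel-mode code out of the pagefile; does NOT pin application memory
        [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management]
        "DisablePagingExecutive"=dword:00000001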

  • How to reduce celeryd memory consumption?

    - by Gringo Suave
    I'm using celery 2.5.1 with Django on a micro EC2 instance with 613 MB of memory, and as such have to keep memory consumption down. Currently I'm using it only for the scheduler "celery beat", as a web interface to cron, though I hope to use it for more in the future. I've noticed it is the biggest consumer of memory on my micro machine even though I have configured the number of workers to one. I don't have many other options set in settings.py:

        import djcelery
        djcelery.setup_loader()

        BROKER_BACKEND = 'djkombu.transport.DatabaseTransport'
        CELERYBEAT_SCHEDULER = 'djcelery.schedulers.DatabaseScheduler'
        CELERY_RESULT_BACKEND = 'database'
        BROKER_POOL_LIMIT = 2
        CELERYD_CONCURRENCY = 1
        CELERY_DISABLE_RATE_LIMITS = True
        CELERYD_MAX_TASKS_PER_CHILD = 20
        CELERYD_SOFT_TASK_TIME_LIMIT = 5 * 60
        CELERYD_TASK_TIME_LIMIT = 6 * 60

    Here are the details via top:

        PID  USER  NI CPU% VIRT SHR  RES MEM% Command
        1065 wuser 10 0.0  283M 4548 85m 14.3 python manage_prod.py celeryd --beat
        1025 wuser 10 1.0  577M 6368 67m 11.2 python manage_prod.py celeryd --beat
        1071 wuser 10 0.0  578M 2384 62m 10.6 python manage_prod.py celeryd --beat

    That's about 214 MB of memory (and not much shared) to run a cron job occasionally. Have I done anything wrong, or can this be reduced about ten-fold somehow? ;) Update: here's my upstart config:

        description "Celery Daemon"
        start on (net-device-up and local-filesystems)
        stop on runlevel [016]
        nice 10
        respawn
        respawn limit 5 10
        chdir /home/wuser/wuser/
        env CELERYD_OPTS=--concurrency=1
        exec sudo -u wuser -H /usr/bin/python manage_prod.py celeryd --beat --concurrency=1 --loglevel info --logfile /var/tmp/celeryd.log

    Update 2: I notice there is one root process, one user child process, and two grandchildren from that, so I don't think it's a matter of duplicate startup:

        root  34580 1556  sudo -u wuser -H /usr/bin/python manage_prod.py celeryd
        wuser 577M  67548 +- python manage_prod.py celeryd --beat --concurrency=1
        wuser 578M  63784 +- python manage_prod.py celeryd --beat --concurrency=1
        wuser 271M  76260 +- python manage_prod.py celeryd --beat --concurrency=1

  • Subsequent runs of rsync locally don't reduce data transferred

    - by sharakan
    I have an EC2 instance with data I want to sync to a mounted, but remote, volume as a backup. rsync seems like the way to go with this, so as a test I took my test file (a Postgres pg_dump file) and used rsync -v to copy it to the mounted volume:

        [ec2-user work]$ rsync -v dump.sql.1 ../backup/dump.sql
        dump.sql.1
        sent 821704315 bytes  received 31 bytes  3416650.09 bytes/sec
        total size is 821603948  speedup is 1.00

    Then I ran it again, expecting to see minimal sent/received numbers, because it would just be checksums. Instead...

        [ec2-user work]$ rsync -v dump.sql.1 ../backup/dump.sql
        dump.sql.1
        sent 821704315 bytes  received 31 bytes  3402502.47 bytes/sec
        total size is 821603948  speedup is 1.00

    I'm new to rsync, so perhaps I'm missing something, but isn't the idea that the source and destination files are checked for differences, and then a patch is generated and applied to the destination? Why is this not reducing the amount of data 'sent' to just the size of the checksums? Some background, if it's relevant: the mounted volume is using s3fs, mounted with s3fs <bucketname> backup.
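
    One relevant rsync behavior, offered as the likely explanation rather than a verified diagnosis: when both paths look local (and an s3fs mount does), rsync assumes disk I/O is cheaper than the delta algorithm and defaults to --whole-file, so the delta-transfer step is skipped entirely. Forcing it back on looks like this:

        # force the delta-transfer algorithm even for "local" copies
        rsync -v --no-whole-file dump.sql.1 ../backup/dump.sql

    Note that with s3fs the receiving side still rewrites the whole object in S3, so a smaller 'sent' figure may not translate into less actual S3 traffic.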

  • How to reduce the pain of the command prompt

    - by Adam
    I want to learn to use the command prompt on Windows better, to have more control over what I do, and just for the learning experience. My main annoyance right now is all of the typing. If I want to perform an operation on a file with a long path, I sit there typing it out for a minute at least, and if I make a mistake I have to press the up arrow key, scroll through the entire line, and find what I did wrong. Are there any tools that make this easier?
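
    A few built-in cmd.exe features remove most of that typing; these are real keystrokes and commands (the alias below is just an example):

        Tab                   completes file and directory names as you type a path
        F7                    pops up a scrollable history of previous commands
        doskey /history       prints the current session's command history
        doskey np=notepad $*  defines an alias, so "np C:\some\file.txt" opens the file

    Dragging a file from Explorer onto the cmd window also pastes its full quoted path at the cursor.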

  • How to reduce Windows XP computer boot time?

    - by Suma
    Are there any specific, known steps I could take to make a computer with Windows XP Professional boot faster? I am interested in speeding up the following stages in particular: loading the OS (from the Windows logo up to the moment the login screen appears); logging in the user (from the moment you type your user name and password up to the moment all memory-resident programs and services are loaded and the computer is really ready to use).
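
    Two standard, low-risk levers on XP, sketched below; the service name is only an example, so check what depends on a service before changing its start type:

        msconfig                               (Startup tab: prune autostart programs)
        sc query type= service state= all      (enumerate services to find candidates)
        sc config TlntSvr start= demand        (example: stop a service from autostarting)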

  • Backup AWS Dynamodb to S3

    - by Ali
    It has been suggested in the Amazon docs (http://aws.amazon.com/dynamodb/), among other places, that you can back up your DynamoDB tables using Elastic Map Reduce. I have a general understanding of how this could work, but I couldn't find any guides or tutorials on it. So my question is: how can I automate DynamoDB backups (using EMR)? So far, I think I need to create a "streaming" job with a map function that reads the data from DynamoDB and a reduce that writes it to S3, and I believe these could be written in Python (or Java or a few other languages). Any comments, clarifications, code samples, or corrections are appreciated.
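
    The EMR pattern Amazon documents for this is Hive rather than a hand-written streaming job; a sketch, with the table name, S3 bucket, and column mapping all invented for illustration:

        -- in a Hive session on an EMR cluster
        CREATE EXTERNAL TABLE ddb_orders (id string, total bigint)
        STORED BY 'org.apache.hadoop.hive.dynamodb.DynamoDBStorageHandler'
        TBLPROPERTIES ("dynamodb.table.name" = "Orders",
                       "dynamodb.column.mapping" = "id:id,total:total");

        CREATE EXTERNAL TABLE s3_orders (id string, total bigint)
        ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
        LOCATION 's3://my-backup-bucket/orders/';

        INSERT OVERWRITE TABLE s3_orders SELECT * FROM ddb_orders;

    Automating it then reduces to scheduling a job flow (cron plus the EMR CLI, for instance) that runs this script.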

  • integer constant does 'not reduce to an integer'

    - by Dan Morgan
    I use this code to set my constants:

        // Constants.h
        extern NSInteger const KNameIndex;

        // Constants.m
        NSInteger const KNameIndex = 0;

    And in a switch statement, within a file that imports the Constants.h file, I have this:

        switch (self.sectionFromParentTable) {
            case KNameIndex:
                self.types = self.facilityTypes;
                break;
        ...

    At compile time I get: "error: case label does not reduce to an integer constant". Any ideas what might be messed up?
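
    The likely cause, for what it's worth: a case label requires an integer constant expression known at compile time, and in C an extern const variable defined in another translation unit does not qualify. The usual Objective-C fix is an enum, sketched here:

        // Constants.h: an enum member is a true compile-time constant,
        // so it is valid in a case label
        enum {
            KNameIndex = 0
            // further section indices can follow
        };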

  • Reduce EBS volume

    - by Martin
    I know about increasing, but is there a way to reduce the size of an EBS volume? I've put effort into my AMI but soon realized it's way too big for my needs.
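
    There is no shrink operation on an EBS volume itself; the usual route is to create a smaller volume, copy the filesystem across, and snapshot that. A sketch, with device names and mount points assumed (they vary per instance):

        # smaller volume attached as /dev/xvdg; existing data mounted at /mnt/old
        mkfs -t ext4 /dev/xvdg
        mkdir -p /mnt/new
        mount /dev/xvdg /mnt/new
        rsync -aHx /mnt/old/ /mnt/new/    # archive mode, hard links, one filesystem
        umount /mnt/new
        # then snapshot the new volume and register the snapshot as the smaller AMI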

  • How to reduce cpu and ram usage?

    - by Hellboy
    i am going to "read" (video/big) files from server (shared environment) to clients (webbrowsers) via php and would like to know first if there is a way to reduce cpu and ram usage somehow (as i have those limited). thanks.

  • prolog: reduce then write the value of a predicate

    - by jreid9001
    This is some of the code I am writing:

        assert(bar(foo)),
        assert(foo(bar-5)),

    I'm not sure it works, though. I'm trying to get it to reduce foo by 5. I also need a way to write the value of foo, but haven't found one. write('foo is' + foo) seems like the logical way to me, but doesn't work.
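
    For what it's worth, Prolog terms are not mutable, and + and - build terms rather than evaluate them; arithmetic only happens through is/2. A sketch of the usual retract/assert idiom (predicate names are invented, following SWI-Prolog conventions):

        :- dynamic foo/1.
        foo(20).                       % some starting value

        reduce_foo(N) :-               % replace foo's value with value - N
            retract(foo(Old)),
            New is Old - N,
            assert(foo(New)).

        show_foo :-
            foo(V),
            format('foo is ~w~n', [V]).

    Usage: the query ?- reduce_foo(5), show_foo. would print "foo is 15".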
