Search Results

Search found 5942 results on 238 pages for 'total'.

Page 43/238 | < Previous Page | 39 40 41 42 43 44 45 46 47 48 49 50  | Next Page >

  • Xen domU crashes and unavailability

    - by Rush
    I have a Xen server with 8 domUs. The server is a Xeon E3-1270 with 16 GB RAM, which I think is enough for 8 machines. Sometimes a domU crashes and I can't figure out the reason. After a crash I can connect to the console and find something like this:

        Oct 8 22:20:49 server kernel: [30892.320780] lowmem_reserve[]: 0 0 0 0
        Oct 8 22:20:49 server kernel: [30892.320790] Node 0 DMA: 10*4kB 3*8kB 13*16kB 10*32kB 7*64kB 3*128kB 2*256kB 2*512kB 1*1024kB 2*2048kB 0*4096kB = 8080kB
        Oct 8 22:20:49 server kernel: [30892.320817] Node 0 DMA32: 648*4kB 2*8kB 1*16kB 0*32kB 1*64kB 0*128kB 0*256kB 0*512kB 1*1024kB 1*2048kB 0*4096kB = 5760kB
        Oct 8 22:20:49 server kernel: [30892.320842] 1491 total pagecache pages
        Oct 8 22:20:49 server kernel: [30892.320847] 0 pages in swap cache
        Oct 8 22:20:49 server kernel: [30892.320852] Swap cache stats: add 0, delete 0, find 0/0
        Oct 8 22:20:49 server kernel: [30892.320858] Free swap = 0kB
        Oct 8 22:20:49 server kernel: [30892.320862] Total swap = 0kB
        Oct 8 22:20:49 server kernel: [30892.324024] 524288 pages RAM
        Oct 8 22:20:49 server kernel: [30892.324024] 11010 pages reserved
        Oct 8 22:20:49 server kernel: [30892.324024] 424467 pages shared
        Oct 8 22:20:49 server kernel: [30892.324024] 503538 pages non-shared
        Oct 8 22:20:49 server kernel: [30892.330308] apache2 invoked oom-killer: gfp_mask=0x200da, order=0, oom_adj=0
        Oct 8 22:20:49 server kernel: [30892.330322] apache2 cpuset=/ mems_allowed=0
        Oct 8 22:20:49 server kernel: [30892.330330] Pid: 23938, comm: apache2 Not tainted 2.6.32-5-xen-amd64 #1
        Oct 8 22:20:49 server kernel: [30892.330337] Call Trace:
        Oct 8 22:20:49 server kernel: [30892.330349] [<ffffffff810b7180>] ? oom_kill_process+0x7f/0x23f
        Oct 8 22:20:49 server kernel: [30892.330358] [<ffffffff810b76a4>] ? __out_of_memory+0x12a/0x141
        Oct 8 22:20:49 server kernel: [30892.330367] [<ffffffff810b77fb>] ? out_of_memory+0x140/0x172
        Oct 8 22:20:49 server kernel: [30892.330376] [<ffffffff810bb59c>] ? __alloc_pages_nodemask+0x4e5/0x5f5
        Oct 8 22:20:49 server kernel: [30892.330385] [<ffffffff810cc224>] ? do_wp_page+0x386/0x707
        Oct 8 22:20:49 server kernel: [30892.330395] [<ffffffff8100c3a5>] ? __raw_callee_save_xen_pud_val+0x11/0x1e
        Oct 8 22:20:49 server kernel: [30892.330404] [<ffffffff8100c369>] ? __raw_callee_save_xen_pmd_val+0x11/0x1e
        Oct 8 22:20:49 server kernel: [30892.330412] [<ffffffff810cdfc7>] ? handle_mm_fault+0x7aa/0x80f
        Oct 8 22:20:49 server kernel: [30892.330422] [<ffffffff8130f906>] ? do_page_fault+0x2e0/0x2fc
        Oct 8 22:20:49 server kernel: [30892.330433] [<ffffffff8130d7a5>] ? page_fault+0x25/0x30
        Oct 8 22:20:49 server kernel: [30892.330439] Mem-Info:
        Oct 8 22:20:49 server kernel: [30892.330443] Node 0 DMA per-cpu:
        Oct 8 22:20:49 server kernel: [30892.330450] CPU 0: hi: 0, btch: 1 usd: 0
        Oct 8 22:20:49 server kernel: [30892.330463] CPU 1: hi: 0, btch: 1 usd: 0
        Oct 8 22:20:49 server kernel: [30892.330466] Node 0 DMA32 per-cpu:
        Oct 8 22:20:49 server kernel: [30892.330469] CPU 0: hi: 186, btch: 31 usd: 0
        Oct 8 22:20:49 server kernel: [30892.330472] CPU 1: hi: 186, btch: 31 usd: 60
        Oct 8 22:20:49 server kernel: [30892.330476] active_anon:342076 inactive_anon:115398 isolated_anon:0
        Oct 8 22:20:49 server kernel: [30892.330477] active_file:268 inactive_file:481 isolated_file:0
        Oct 8 22:20:49 server kernel: [30892.330477] unevictable:1125 dirty:2 writeback:13 unstable:0
        Oct 8 22:20:49 server kernel: [30892.330478] free:3410 slab_reclaimable:1718 slab_unreclaimable:6946
        Oct 8 22:20:49 server kernel: [30892.330478] mapped:899 shmem:113 pagetables:35697 bounce:0
        Oct 8 22:20:49 server kernel: [30892.330502] Node 0 DMA free:8036kB min:32kB low:40kB high:48kB active_anon:1144kB inactive_anon:1268kB active_file:8kB inactive_file:8kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:11792kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:224kB kernel_stack:16kB pagetables:1228kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
        Oct 8 22:20:49 server kernel: [30892.330518] lowmem_reserve[]: 0 2004 2004 2004
        Oct 8 22:20:49 server kernel: [30892.330523] Node 0 DMA32 free:5604kB min:5708kB low:7132kB high:8560kB active_anon:1367160kB inactive_anon:460324kB active_file:1064kB inactive_file:1916kB unevictable:4500kB isolated(anon):0kB isolated(file):0kB present:2052320kB mlocked:4500kB dirty:8kB writeback:52kB mapped:3600kB shmem:452kB slab_reclaimable:6872kB slab_unreclaimable:27560kB kernel_stack:3528kB pagetables:141560kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:992 all_unreclaimable? no
        Oct 8 22:20:49 server kernel: [30892.330539] lowmem_reserve[]: 0 0 0 0
        Oct 8 22:20:49 server kernel: [30892.330544] Node 0 DMA: 1*4kB 2*8kB 13*16kB 10*32kB 7*64kB 3*128kB 2*256kB 2*512kB 1*1024kB 2*2048kB 0*4096kB = 8036kB
        Oct 8 22:20:49 server kernel: [30892.330579] Node 0 DMA32: 609*4kB 2*8kB 1*16kB 0*32kB 1*64kB 0*128kB 0*256kB 0*512kB 1*1024kB 1*2048kB 0*4096kB = 5604kB
        Oct 8 22:20:49 server kernel: [30892.330605] 1522 total pagecache pages
        Oct 8 22:20:49 server kernel: [30892.330610] 0 pages in swap cache
        Oct 8 22:20:49 server kernel: [30892.330615] Swap cache stats: add 0, delete 0, find 0/0
        Oct 8 22:20:49 server kernel: [30892.330621] Free swap = 0kB
        Oct 8 22:20:49 server kernel: [30892.330625] Total swap = 0kB
        Oct 8 22:20:49 server kernel: [30892.333018] 524288 pages RAM
        Oct 8 22:20:49 server kernel: [30892.333018] 11010 pages reserved
        Oct 8 22:20:49 server kernel: [30892.333018] 424367 pages shared
        Oct 8 22:20:49 server kernel: [30892.333018] 503658 pages non-shared

    It seems there isn't enough memory for this domU, but no memory problem shows up in the Munin monitoring: as you can see, the system uses around 0.2 GB and 1 GB is available. So my question is: is it a Xen-specific problem that real memory usage differs from the memory usage Munin shows (I've never seen such problems on real hardware machines)? Or is it just a monitoring problem, in that Munin can't catch the moment when there is an unusually high load and the domU goes down? And how can I fix this problem? It is really annoying to find out from e-mail messages that a domU went down. Btw, the same situation occurred when the domU had 2 GB of memory.
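
    Not from the original post, but one hedged starting point suggested by the log itself: both dumps show Total swap = 0kB, so under pressure the kernel has no option left except the OOM killer. A minimal sketch, assuming a standard Linux domU (sizes and paths are examples):

        # inside the domU: add a 1 GB swap file so spikes page out instead of OOM-killing
        dd if=/dev/zero of=/swapfile bs=1M count=1024
        chmod 600 /swapfile
        mkswap /swapfile
        swapon /swapfile

        # from dom0: confirm how much memory each guest really has
        xm list    # or `xl list` on newer toolstacks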

    Read the article

  • Neo4j increasing latency as SKIP increases on Cypher query + REST API

    - by voldomazta
    My setup: Java(TM) SE Runtime Environment (build 1.7.0_45-b18), Java HotSpot(TM) 64-Bit Server VM (build 24.45-b08, mixed mode), Neo4j 2.0.0-M06 Enterprise. First I made sure I warmed up the cache by executing the following:

        START n=node(*) RETURN COUNT(n);
        START r=relationship(*) RETURN count(r);

    The size of the graph is 63,677 nodes and 7,169,995 relationships. Now I have the following query:

        START u1=node:node_auto_index('uid:39')
        MATCH (u1:user)-[w:WANTS]->(c:card)<-[h:HAS]-(u2:user)
        WHERE u2.uid <> 39
        WITH u2.uid AS uid, (CASE WHEN w.qty < h.qty THEN w.qty ELSE h.qty END) AS have
        RETURN uid, SUM(have) AS total
        ORDER BY total DESC
        SKIP 0 LIMIT 25

    This UID has about 40k+ results that I want to paginate. The initial skip took around 773ms. I tried page 2 (skip 25) and the latency was about the same; even up to page 500 it only rose to 900ms, so I didn't really bother. Then I tried some fast-forward paging and jumped by thousands: 1000, then 2000, then 3000. I was hoping the ORDER BY arrangement would already have been cached by Neo4j, and that SKIP would just move to that index in the result without iterating through each row again. But with each thousand-row skip the latency increased by a lot. It's not just cache warming, because for one I had already warmed up the cache, and two, I tried each skip a couple of times and it yielded the same results:

        SKIP 0:    773ms
        SKIP 1000: 1369ms
        SKIP 2000: 2491ms
        SKIP 3000: 3899ms
        SKIP 4000: 5686ms
        SKIP 5000: 7424ms

    Now who the hell would want to view 5000 pages of results? 40k even?! :) Good point! I will probably put a cap on the maximum results a user can view, but I was just curious about this phenomenon. Will somebody please explain why Neo4j seems to be re-iterating through data it apparently already knows?
    Here is my profiling for the 0 skip:

        ==> ColumnFilter(symKeys=["uid", " INTERNAL_AGGREGATE65c4d6a2-1930-4f32-8fd9-5e4399ce6f14"], returnItemNames=["uid", "total"], _rows=25, _db_hits=0)
        ==> Slice(skip="Literal(0)", _rows=25, _db_hits=0)
        ==> Top(orderBy=["SortItem(Cached( INTERNAL_AGGREGATE65c4d6a2-1930-4f32-8fd9-5e4399ce6f14 of type Any),false)"], limit="Add(Literal(0),Literal(25))", _rows=25, _db_hits=0)
        ==> EagerAggregation(keys=["uid"], aggregates=["( INTERNAL_AGGREGATE65c4d6a2-1930-4f32-8fd9-5e4399ce6f14,Sum(have))"], _rows=41659, _db_hits=0)
        ==> ColumnFilter(symKeys=["have", "u1", "uid", "c", "h", "w", "u2"], returnItemNames=["uid", "have"], _rows=146826, _db_hits=0)
        ==> Extract(symKeys=["u1", "c", "h", "w", "u2"], exprKeys=["uid", "have"], _rows=146826, _db_hits=587304)
        ==> Filter(pred="((NOT(Product(u2,uid(0),true) == Literal(39)) AND hasLabel(u1:user(0))) AND hasLabel(u2:user(0)))", _rows=146826, _db_hits=146826)
        ==> TraversalMatcher(trail="(u1)-[w:WANTS WHERE (hasLabel(NodeIdentifier():card(1)) AND hasLabel(NodeIdentifier():card(1))) AND true]->(c)<-[h:HAS WHERE (NOT(Product(NodeIdentifier(),uid(0),true) == Literal(39)) AND hasLabel(NodeIdentifier():user(0))) AND true]-(u2)", _rows=146826, _db_hits=293696)

    And for the 5000 skip:

        ==> ColumnFilter(symKeys=["uid", " INTERNAL_AGGREGATE99329ea5-03cd-4d53-a6bc-3ad554b47872"], returnItemNames=["uid", "total"], _rows=25, _db_hits=0)
        ==> Slice(skip="Literal(5000)", _rows=25, _db_hits=0)
        ==> Top(orderBy=["SortItem(Cached( INTERNAL_AGGREGATE99329ea5-03cd-4d53-a6bc-3ad554b47872 of type Any),false)"], limit="Add(Literal(5000),Literal(25))", _rows=5025, _db_hits=0)
        ==> EagerAggregation(keys=["uid"], aggregates=["( INTERNAL_AGGREGATE99329ea5-03cd-4d53-a6bc-3ad554b47872,Sum(have))"], _rows=41659, _db_hits=0)
        ==> ColumnFilter(symKeys=["have", "u1", "uid", "c", "h", "w", "u2"], returnItemNames=["uid", "have"], _rows=146826, _db_hits=0)
        ==> Extract(symKeys=["u1", "c", "h", "w", "u2"], exprKeys=["uid", "have"], _rows=146826, _db_hits=587304)
        ==> Filter(pred="((NOT(Product(u2,uid(0),true) == Literal(39)) AND hasLabel(u1:user(0))) AND hasLabel(u2:user(0)))", _rows=146826, _db_hits=146826)
        ==> TraversalMatcher(trail="(u1)-[w:WANTS WHERE (hasLabel(NodeIdentifier():card(1)) AND hasLabel(NodeIdentifier():card(1))) AND true]->(c)<-[h:HAS WHERE (NOT(Product(NodeIdentifier(),uid(0),true) == Literal(39)) AND hasLabel(NodeIdentifier():user(0))) AND true]-(u2)", _rows=146826, _db_hits=293696)

    The only difference is the LIMIT clause on the Top function. I hope we can make this work as intended; I really don't want to delve into an embedded Neo4j plus my own Jetty REST API for the web app.
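
    A note on the profiles (not from the original post): Top's limit is Add(Literal(5000),Literal(25)), i.e. the operator keeps skip+limit rows, so every deeper page re-aggregates and re-sorts a larger working set. A hedged sketch of keyset-style pagination, assuming the client passes the last row of the previous page back as hypothetical {lastTotal}/{lastUid} parameters:

        // sketch only: aggregation still touches every row, but Top now only
        // ever keeps 25 rows, so latency should stay flat across pages
        START u1=node:node_auto_index('uid:39')
        MATCH (u1:user)-[w:WANTS]->(c:card)<-[h:HAS]-(u2:user)
        WHERE u2.uid <> 39
        WITH u2.uid AS uid, SUM(CASE WHEN w.qty < h.qty THEN w.qty ELSE h.qty END) AS total
        WHERE total < {lastTotal} OR (total = {lastTotal} AND uid > {lastUid})
        RETURN uid, total
        ORDER BY total DESC, uid
        LIMIT 25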

    Read the article

  • Optimum number of threads while multitasking

    - by Gun Deniz
    I know similar questions have been asked, but I think my case is a little bit different. Let's say I have a computer with 8 cores and infinite memory, running a Linux OS. I have a calculation program called Gaussian that can take advantage of multithreading, so I set its thread count to 8 for a single calculation to get maximum speed. However, I really can't decide what to do when I need to run, for instance, 8 calculations simultaneously. In that case, should I set the thread count to 1 (a total of 8 threads spawned across 8 processes) or keep it at 8 (a total of 64 threads spawned across 8 processes) for each job? Does it really matter much? A related question: does the OS automatically park each thread on a different core?
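
    Not part of the original question, but a hedged sketch of the one-thread-per-job variant with explicit pinning, so the scheduler cannot stack two jobs on one core (the g09 binary name and input file names are assumptions):

        #!/bin/sh
        # run 8 single-threaded Gaussian jobs, each pinned to its own core
        for i in 0 1 2 3 4 5 6 7; do
            taskset -c "$i" g09 "job$i.com" > "job$i.log" &
        done
        wait    # block until all 8 jobs finish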

    Read the article

  • Unusually high dentry cache usage

    - by Wolfgang Stengel
    Problem

    A CentOS machine with kernel 2.6.32 and 128 GB physical RAM ran into trouble a few days ago. The responsible system administrator tells me that the PHP-FPM application was no longer responding to requests in a timely manner due to swapping, and having seen in free that almost no memory was left, he chose to reboot the machine. I know that free memory can be a confusing concept on Linux and a reboot perhaps was the wrong thing to do. However, the mentioned administrator blames the PHP application (which I am responsible for) and refuses to investigate further. What I could find out on my own is this: Before the restart, the free memory (incl. buffers and cache) was only a couple of hundred MB. Before the restart, /proc/meminfo reported a Slab memory usage of around 90 GB (yes, GB). After the restart, the free memory was 119 GB, going down to around 100 GB within an hour as the PHP-FPM workers (about 600 of them) were coming back to life, each of them showing between 30 and 40 MB in the RES column in top (which has been this way for months and is perfectly reasonable given the nature of the PHP application). There is nothing else in the process list that consumes an unusual or noteworthy amount of RAM. After the restart, Slab memory was around 300 MB. I have been monitoring the system ever since, and most notably the Slab memory is increasing in a straight line at a rate of about 5 GB per day. Free memory as reported by free and /proc/meminfo decreases at the same rate. Slab is currently at 46 GB. According to slabtop, most of it is used for dentry entries.

    Free memory:

        free -m
                     total       used       free     shared    buffers     cached
        Mem:        129048      76435      52612          0        144       7675
        -/+ buffers/cache:      68615      60432
        Swap:         8191          0       8191

    Meminfo:

        cat /proc/meminfo
        MemTotal:       132145324 kB
        MemFree:         53620068 kB
        Buffers:           147760 kB
        Cached:           8239072 kB
        SwapCached:             0 kB
        Active:          20300940 kB
        Inactive:         6512716 kB
        Active(anon):    18408460 kB
        Inactive(anon):     24736 kB
        Active(file):     1892480 kB
        Inactive(file):   6487980 kB
        Unevictable:         8608 kB
        Mlocked:             8608 kB
        SwapTotal:        8388600 kB
        SwapFree:         8388600 kB
        Dirty:              11416 kB
        Writeback:              0 kB
        AnonPages:       18436224 kB
        Mapped:             94536 kB
        Shmem:               6364 kB
        Slab:            46240380 kB
        SReclaimable:    44561644 kB
        SUnreclaim:       1678736 kB
        KernelStack:         9336 kB
        PageTables:        457516 kB
        NFS_Unstable:           0 kB
        Bounce:                 0 kB
        WritebackTmp:           0 kB
        CommitLimit:     72364108 kB
        Committed_AS:    22305444 kB
        VmallocTotal:   34359738367 kB
        VmallocUsed:       480164 kB
        VmallocChunk:   34290830848 kB
        HardwareCorrupted:      0 kB
        AnonHugePages:   12216320 kB
        HugePages_Total:     2048
        HugePages_Free:      2048
        HugePages_Rsvd:         0
        HugePages_Surp:         0
        Hugepagesize:        2048 kB
        DirectMap4k:         5604 kB
        DirectMap2M:      2078720 kB
        DirectMap1G:    132120576 kB

    Slabtop:

        slabtop --once
        Active / Total Objects (% used)    : 225920064 / 226193412 (99.9%)
        Active / Total Slabs (% used)      : 11556364 / 11556415 (100.0%)
        Active / Total Caches (% used)     : 110 / 194 (56.7%)
        Active / Total Size (% used)       : 43278793.73K / 43315465.42K (99.9%)
        Minimum / Average / Maximum Object : 0.02K / 0.19K / 4096.00K
          OBJS    ACTIVE   USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
        221416340 221416039  3% 0.19K 11070817 20 44283268K dentry
        1123443   1122739   99% 0.41K   124827  9   499308K fuse_request
        1122320   1122180   99% 0.75K   224464  5   897856K fuse_inode
        761539    754272    99% 0.20K    40081 19   160324K vm_area_struct
        437858    223259    50% 0.10K    11834 37    47336K buffer_head
        353353    347519    98% 0.05K     4589 77    18356K anon_vma_chain
        325090    324190    99% 0.06K     5510 59    22040K size-64
        146272    145422    99% 0.03K     1306 112    5224K size-32
        137625    137614    99% 1.02K    45875  3   183500K nfs_inode_cache
        128800    118407    91% 0.04K     1400 92     5600K anon_vma
        59101     46853     79% 0.55K     8443  7    33772K radix_tree_node
        52620     52009     98% 0.12K     1754 30     7016K size-128
        19359     19253     99% 0.14K      717 27     2868K sysfs_dir_cache
        10240     7746      75% 0.19K      512 20     2048K filp

    VFS cache pressure:

        cat /proc/sys/vm/vfs_cache_pressure
        125

    Swappiness:

        cat /proc/sys/vm/swappiness
        0

    I know that unused memory is wasted memory, so this should not necessarily be a bad thing (especially given that 44 GB are shown as SReclaimable). However, the machine apparently experienced problems nonetheless, and I'm afraid the same will happen again in a few days when Slab surpasses 90 GB.

    Questions

    I have these questions:
    - Am I correct in thinking that the Slab memory is always physical RAM, and that the number is already subtracted from the MemFree value?
    - Is such a high number of dentry entries normal? The PHP application has access to around 1.5 M files, but most of them are archives and are not accessed at all by regular web traffic.
    - What could explain the fact that the number of cached inodes is much lower than the number of cached dentries? Should they not be related somehow?
    - If the system runs into memory trouble, should the kernel not free some of the dentries automatically? What could be a reason that this does not happen?
    - Is there any way to "look into" the dentry cache to see what all this memory is (i.e. which paths are being cached)? Perhaps this points to some kind of memory leak, a symlink loop, or indeed to something the PHP application is doing wrong.
    - The PHP application code as well as all asset files are mounted via the GlusterFS network file system; could that have something to do with it?

    Please keep in mind that I cannot investigate as root, only as a regular user, and that the administrator refuses to help. He won't even run the typical echo 2 > /proc/sys/vm/drop_caches test to see whether the Slab memory is indeed reclaimable. Any insights into what could be going on and how I can investigate further would be greatly appreciated.
    Updates

    Some further diagnostic information. Mounts:

        cat /proc/self/mounts
        rootfs / rootfs rw 0 0
        proc /proc proc rw,relatime 0 0
        sysfs /sys sysfs rw,relatime 0 0
        devtmpfs /dev devtmpfs rw,relatime,size=66063000k,nr_inodes=16515750,mode=755 0 0
        devpts /dev/pts devpts rw,relatime,gid=5,mode=620,ptmxmode=000 0 0
        tmpfs /dev/shm tmpfs rw,relatime 0 0
        /dev/mapper/sysvg-lv_root / ext4 rw,relatime,barrier=1,data=ordered 0 0
        /proc/bus/usb /proc/bus/usb usbfs rw,relatime 0 0
        /dev/sda1 /boot ext4 rw,relatime,barrier=1,data=ordered 0 0
        tmpfs /phptmp tmpfs rw,noatime,size=1048576k,nr_inodes=15728640,mode=777 0 0
        tmpfs /wsdltmp tmpfs rw,noatime,size=1048576k,nr_inodes=15728640,mode=777 0 0
        none /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0
        cgroup /cgroup/cpuset cgroup rw,relatime,cpuset 0 0
        cgroup /cgroup/cpu cgroup rw,relatime,cpu 0 0
        cgroup /cgroup/cpuacct cgroup rw,relatime,cpuacct 0 0
        cgroup /cgroup/memory cgroup rw,relatime,memory 0 0
        cgroup /cgroup/devices cgroup rw,relatime,devices 0 0
        cgroup /cgroup/freezer cgroup rw,relatime,freezer 0 0
        cgroup /cgroup/net_cls cgroup rw,relatime,net_cls 0 0
        cgroup /cgroup/blkio cgroup rw,relatime,blkio 0 0
        /etc/glusterfs/glusterfs-www.vol /var/www fuse.glusterfs rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072 0 0
        /etc/glusterfs/glusterfs-upload.vol /var/upload fuse.glusterfs rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072 0 0
        sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw,relatime 0 0
        172.17.39.78:/www /data/www nfs rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,port=38467,timeo=600,retrans=2,sec=sys,mountaddr=172.17.39.78,mountvers=3,mountport=38465,mountproto=tcp,local_lock=none,addr=172.17.39.78 0 0

    Mount info:

        cat /proc/self/mountinfo
        16 21 0:3 / /proc rw,relatime - proc proc rw
        17 21 0:0 / /sys rw,relatime - sysfs sysfs rw
        18 21 0:5 / /dev rw,relatime - devtmpfs devtmpfs rw,size=66063000k,nr_inodes=16515750,mode=755
        19 18 0:11 / /dev/pts rw,relatime - devpts devpts rw,gid=5,mode=620,ptmxmode=000
        20 18 0:16 / /dev/shm rw,relatime - tmpfs tmpfs rw
        21 1 253:1 / / rw,relatime - ext4 /dev/mapper/sysvg-lv_root rw,barrier=1,data=ordered
        22 16 0:15 / /proc/bus/usb rw,relatime - usbfs /proc/bus/usb rw
        23 21 8:1 / /boot rw,relatime - ext4 /dev/sda1 rw,barrier=1,data=ordered
        24 21 0:17 / /phptmp rw,noatime - tmpfs tmpfs rw,size=1048576k,nr_inodes=15728640,mode=777
        25 21 0:18 / /wsdltmp rw,noatime - tmpfs tmpfs rw,size=1048576k,nr_inodes=15728640,mode=777
        26 16 0:19 / /proc/sys/fs/binfmt_misc rw,relatime - binfmt_misc none rw
        27 21 0:20 / /cgroup/cpuset rw,relatime - cgroup cgroup rw,cpuset
        28 21 0:21 / /cgroup/cpu rw,relatime - cgroup cgroup rw,cpu
        29 21 0:22 / /cgroup/cpuacct rw,relatime - cgroup cgroup rw,cpuacct
        30 21 0:23 / /cgroup/memory rw,relatime - cgroup cgroup rw,memory
        31 21 0:24 / /cgroup/devices rw,relatime - cgroup cgroup rw,devices
        32 21 0:25 / /cgroup/freezer rw,relatime - cgroup cgroup rw,freezer
        33 21 0:26 / /cgroup/net_cls rw,relatime - cgroup cgroup rw,net_cls
        34 21 0:27 / /cgroup/blkio rw,relatime - cgroup cgroup rw,blkio
        35 21 0:28 / /var/www rw,relatime - fuse.glusterfs /etc/glusterfs/glusterfs-www.vol rw,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072
        36 21 0:29 / /var/upload rw,relatime - fuse.glusterfs /etc/glusterfs/glusterfs-upload.vol rw,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072
        37 21 0:30 / /var/lib/nfs/rpc_pipefs rw,relatime - rpc_pipefs sunrpc rw
        39 21 0:31 / /data/www rw,relatime - nfs 172.17.39.78:/www rw,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,port=38467,timeo=600,retrans=2,sec=sys,mountaddr=172.17.39.78,mountvers=3,mountport=38465,mountproto=tcp,local_lock=none,addr=172.17.39.78

    GlusterFS config:

        cat /etc/glusterfs/glusterfs-www.vol
        volume remote1
          type protocol/client
          option transport-type tcp
          option remote-host 172.17.39.71
          option ping-timeout 10
          option transport.socket.nodelay on  # undocumented option for speed
          # http://gluster.org/pipermail/gluster-users/2009-September/003158.html
          option remote-subvolume /data/www
        end-volume
        volume remote2
          type protocol/client
          option transport-type tcp
          option remote-host 172.17.39.72
          option ping-timeout 10
          option transport.socket.nodelay on  # undocumented option for speed
          # http://gluster.org/pipermail/gluster-users/2009-September/003158.html
          option remote-subvolume /data/www
        end-volume
        volume remote3
          type protocol/client
          option transport-type tcp
          option remote-host 172.17.39.73
          option ping-timeout 10
          option transport.socket.nodelay on  # undocumented option for speed
          # http://gluster.org/pipermail/gluster-users/2009-September/003158.html
          option remote-subvolume /data/www
        end-volume
        volume remote4
          type protocol/client
          option transport-type tcp
          option remote-host 172.17.39.74
          option ping-timeout 10
          option transport.socket.nodelay on  # undocumented option for speed
          # http://gluster.org/pipermail/gluster-users/2009-September/003158.html
          option remote-subvolume /data/www
        end-volume
        volume replicate1
          type cluster/replicate
          option lookup-unhashed off  # off will reduce cpu usage, and network
          option local-volume-name 'hostname'
          subvolumes remote1 remote2
        end-volume
        volume replicate2
          type cluster/replicate
          option lookup-unhashed off  # off will reduce cpu usage, and network
          option local-volume-name 'hostname'
          subvolumes remote3 remote4
        end-volume
        volume distribute
          type cluster/distribute
          subvolumes replicate1 replicate2
        end-volume
        volume iocache
          type performance/io-cache
          option cache-size 8192MB  # default is 32MB
          subvolumes distribute
        end-volume
        volume writeback
          type performance/write-behind
          option cache-size 1024MB
          option window-size 1MB
          subvolumes iocache
        end-volume
        ### Add io-threads for parallel requisitions
        volume iothreads
          type performance/io-threads
          option thread-count 64  # default is 16
          subvolumes writeback
        end-volume
        volume ra
          type performance/read-ahead
          option page-size 2MB
          option page-count 16
          option force-atime-update no
          subvolumes iothreads
        end-volume
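
    Not from the original post, but since root access is unavailable, a minimal unprivileged sketch to confirm the growth rate and correlate it with application activity (/proc/meminfo is world-readable on this kernel):

        # log slab growth once a minute; SReclaimable rising in lockstep with
        # Slab supports the "huge but reclaimable dentry cache" reading
        while sleep 60; do
            echo "=== $(date '+%F %T')"
            grep -E '^(Slab|SReclaimable|SUnreclaim):' /proc/meminfo
        done | tee -a slab-growth.log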

    Read the article

  • What Windows licenses are required to run additional terminal service sessions

    - by John P
    We need to build out a server running Windows 2008 R2 Standard that can allow up to 10 simultaneous RDP/Terminal Services connections, and I'm a little confused about how the CAL licenses work. From one source I was told I needed 10 "server CALs" and an additional 10 "RDP CALs" (a total of 20 CALs). From another, I was told I just needed the 10 "RDP CALs", which implicitly come with the server CAL. The Microsoft licensing website (http://www.microsoft.com/windowsserver2008/en/us/licensing-rds.aspx) seems to support scenario #1, but it is still not really clear what those server CALs are needed for. Also, can we use the 2 "built-in" RDP connections, meaning we only need to purchase 8 CALs to reach a total of 10?

    Read the article

  • About global.asax and its events

    - by eski
    So what I'm trying to understand is the whole set of global.asax events. I'm building a simple counter that records website visits, using MSSQL. Basically I have two ints:

    totalNumberOfUsers - the total visits from the beginning.
    currentNumberOfUsers - the total number of users viewing the site at the moment.

    The way I understand the global.asax events is that every time someone comes to the site, "Session_Start" is fired once, so once per user. "Application_Start" is fired only once, the first time someone comes to the site. Going with this, I have my global.asax file here:

        <script runat="server">
            string connectionstring = ConfigurationManager.ConnectionStrings["ConnectionString1"].ConnectionString;

            void Application_Start(object sender, EventArgs e)
            {
                // Code that runs on application startup
                Application.Lock();
                Application["currentNumberOfUsers"] = 0;
                Application.UnLock();

                string sql = "Select c_hit from v_counter where (id=1)";
                SqlConnection connect = new SqlConnection(connectionstring);
                SqlCommand cmd = new SqlCommand(sql, connect);
                cmd.Connection.Open();
                SqlDataReader reader = cmd.ExecuteReader();
                while (reader.Read())
                {
                    Application.Lock();
                    Application["totalNumberOfUsers"] = reader.GetInt32(0);
                    Application.UnLock();
                }
                reader.Close();
                cmd.Connection.Close();
            }

            void Application_End(object sender, EventArgs e)
            {
                // Code that runs on application shutdown
            }

            void Application_Error(object sender, EventArgs e)
            {
                // Code that runs when an unhandled error occurs
            }

            void Session_Start(object sender, EventArgs e)
            {
                // Code that runs when a new session is started
                Application.Lock();
                Application["totalNumberOfUsers"] = (int)Application["totalNumberOfUsers"] + 1;
                Application["currentNumberOfUsers"] = (int)Application["currentNumberOfUsers"] + 1;
                Application.UnLock();

                string sql = "UPDATE v_counter SET c_hit = @hit WHERE c_type = 'totalNumberOfUsers'";
                SqlConnection connect = new SqlConnection(connectionstring);
                SqlCommand cmd = new SqlCommand(sql, connect);
                SqlParameter hit = new SqlParameter("@hit", SqlDbType.Int);
                hit.Value = Application["totalNumberOfUsers"];
                cmd.Parameters.Add(hit);
                cmd.Connection.Open();
                cmd.ExecuteNonQuery();
                cmd.Connection.Close();
            }

            void Session_End(object sender, EventArgs e)
            {
                // Code that runs when a session ends.
                // Note: The Session_End event is raised only when the sessionstate mode
                // is set to InProc in the Web.config file. If session mode is set to StateServer
                // or SQLServer, the event is not raised.
                Application.Lock();
                Application["currentNumberOfUsers"] = (int)Application["currentNumberOfUsers"] - 1;
                Application.UnLock();
            }
        </script>

    In Page_Load I have this:

        protected void Page_Load(object sender, EventArgs e)
        {
            l_current.Text = Application["currentNumberOfUsers"].ToString();
            l_total.Text = Application["totalNumberOfUsers"].ToString();
        }

    So if I understand this correctly, every time someone comes to the site, both currentNumberOfUsers and totalNumberOfUsers are incremented by 1, and when the session is over, currentNumberOfUsers is decremented by 1. If I visit the site with 3 types of browsers on the same computer, I should get 3 hits on both counters. Doing this again hours later, I should have 3 in current and 6 in total, right? The way it's working right now, the current count only goes up to 2, and the total is incremented on every postback in IE and Chrome, but not in Firefox. And one last thing, is this the same thing?

        Application["value"] = 0;
        value = Application["value"];
        // OR
        Application.Set("Value", 0);
        Value = Application.Get("Value");

    Read the article

  • Why is java -Xmx not working?

    - by Zenofo
    In my Ubuntu 11.10 VPS, before I run the jar file:

        # free -m
                     total       used       free     shared    buffers     cached
        Mem:           256          5        250          0          0          0
        -/+ buffers/cache:          5        250
        Swap:            0          0          0

    Then I run a jar file that is limited to a maximum of 32 MB of memory:

        java -Xms8m -Xmx32m -jar ./my.jar

    Now the memory state is as follows:

        # free -m
                     total       used       free     shared    buffers     cached
        Mem:           256        155        100          0          0          0
        -/+ buffers/cache:        155        100
        Swap:            0          0          0

    This jar occupies 150 MB of memory, and I can't run any other java command:

        # java -version
        Error occurred during initialization of VM
        Could not reserve enough space for object heap
        Could not create the Java virtual machine.
        # java -Xmx8m -version
        Error occurred during initialization of VM
        Could not reserve enough space for object heap
        Could not create the Java virtual machine.

    I want to know why the -Xmx parameter does not take effect, and how I can limit the memory the jar file uses.
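
    Not from the original question, but worth noting when reading those numbers: -Xmx caps only the Java heap, while thread stacks, the permanent generation, and the JVM itself come on top of it. A hedged sketch for a Java 6/7-era VM (the flag values are examples, not recommendations):

        # trim the non-heap pieces too, then compare the cap with real usage
        java -Xms8m -Xmx32m -Xss256k -XX:MaxPermSize=32m -jar ./my.jar &
        ps -o pid,rss,vsz,args -p $!    # RSS is what the process actually holds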

    Read the article

  • Excel sum from column based on another column

    - by jsmars
    I have two columns. The values in the first one are either blank or 1; the values in the second one are numbers. I also have a variable field. At the bottom of each column, I'd like a "total" field which checks whether there is a value (of 1) in the first column, and if there is, adds up the corresponding values from the second column (on the same rows) and multiplies the sum by the variable. For example:

        variable: 10

        name1   name2   counter
                  1        2
          1                3
          1       1        3
          1                4
        totals:  100      50

    Since name1 has 3 ones in its column, its total takes each corresponding value from the counter column, sums them, and multiplies by the variable. I'm sorry if this has been asked before; I've tried searching, but I have a hard time understanding Excel syntax. Thanks!
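
    Not from the original question, but a hedged sketch of the formula, assuming name1 is in B2:B5, the counter in D2:D5, and the variable in B1 (adjust the ranges to the real layout):

        =SUMIF(B2:B5, 1, $D$2:$D$5) * $B$1

    An equivalent that copies cleanly across the name columns is =SUMPRODUCT((B2:B5=1)*$D$2:$D$5)*$B$1.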

    Read the article

  • Munin Aggregate Graphs from several servers

    - by Sparsh Gupta
    I am using DNS round-robin load balancing and have divided my total traffic among multiple servers. Each server does around 300-400 req/second, but I am interested in an aggregate graph showing the TOTAL requests per second served by our architecture. Is there any way I can do this? Right now each graph in Munin comes as a separate graph, as each depicts things on one server. I am using the following configuration, which doesn't work for me; does it contain errors?

        [TRAFFIC.AGGREGATED]
            update no
            requests.graph_title nGinx requests
            requests.graph_vlabel nGinx requests per second
            requests.draw LINE2
            requests.graph_args --base 1000
            requests.graph_category nginx
            requests.label req/sec
            requests.type DERIVE
            requests.min 0
            requests.graph_order output
            requests.output.sum \
                lb1.visualwebsiteoptimizer.com:nginx_request_lb1.visualwebsiteoptimizer.com_request.request \
                lb3.visualwebsiteoptimizer.com:nginx_request_lb2.visualwebsiteoptimizer.com_request.request \
                lb3.visualwebsiteoptimizer.com:nginx_request_lb3.visualwebsiteoptimizer.com_request.request
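
    Not from the original post, but one thing that stands out once the config is laid out line by line: the second sum entry pairs host lb3 with lb2's graph, and lb3 appears twice. If that is a typo rather than intentional, the hedged correction would be:

        requests.output.sum \
            lb1.visualwebsiteoptimizer.com:nginx_request_lb1.visualwebsiteoptimizer.com_request.request \
            lb2.visualwebsiteoptimizer.com:nginx_request_lb2.visualwebsiteoptimizer.com_request.request \
            lb3.visualwebsiteoptimizer.com:nginx_request_lb3.visualwebsiteoptimizer.com_request.request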

    Read the article

  • MS Excel 2010: importing data into one row and summing particular cells

    - by Omeshanker
    I am importing data from a .txt file into MS Excel, and all of the data is imported into ONE ROW. I want to get the SUM of those values which correspond to a particular month. For example:

        Name   Month   Total Value
        Mark   Jan     2000
        Mark   Jan     1500
        Mark   Feb     2900
        Mark   Feb     3000

    I want to get the TOTAL value for the month of Jan in a particular cell. Kindly tell me how to proceed. NOTE: the whole data set is imported into one ROW only, so the formula should automatically add those values which it finds there. Thanks, Omesh
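
    Not from the original question, but a hedged sketch, assuming the import is first split into columns (Data > Text to Columns can break a one-row import apart) so that the months end up in B2:B5 and the values in C2:C5:

        =SUMIF(B2:B5, "Jan", C2:C5)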

    Read the article

  • High memory utilization by sqlservr.exe process

    - by abdul samad
    Subject: high memory utilization by the sqlservr.exe process. When I look into Task Manager > Processes, or use the perfmon memory counters (SQL Server: Memory Manager: Target Server Memory and Total Server Memory), I see high memory utilization by the sqlservr.exe process: nearly 8 GB (Target Server Memory) and 7.95 GB (Total Server Memory). When I restart the MSSQLSERVER service, it shoots up to the same size again. I am seeing this quite frequently. Please help me identify why SQL Server is using so much memory, and how to find out what query, stored procedure, etc. is making it use that much memory. I am not using any triggers or cursors in my code. Thanks
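
    Not from the original question, but context that often resolves this alarm: SQL Server is designed to grow its buffer pool up to 'max server memory' and hold on to it. A hedged sketch for capping it (the 4096 MB figure is an example, not a recommendation):

        -- cap the buffer pool; size the value to leave room for the OS and other apps
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)', 4096;
        RECONFIGURE;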

    Read the article

  • How do I negotiate for colo space?

    - by randy melder
    I guess this isn't a technical question, but it definitely is something IT teams deal with, so here goes: I'm looking at getting a rack at a local colocation facility, and I'm weighing the options against building out on a cloud platform. We use REALLY little bandwidth and power: there are a total of six hosts for the whole operation. You can assume we use <= 10 amps of power and <= 2 Mbps at the 95th percentile. Do you have any advice for getting the best deal?

    Read the article

  • How do I analyze an Apache Bench result?

    - by Alan Hoffmeister
    I need some help analyzing a log from Apache Bench:

        Benchmarking texteli.com (be patient)
        Completed 100 requests
        Completed 200 requests
        Completed 300 requests
        Completed 400 requests
        Completed 500 requests
        Completed 600 requests
        Completed 700 requests
        Completed 800 requests
        Completed 900 requests
        Completed 1000 requests
        Finished 1000 requests

        Server Software:
        Server Hostname:        texteli.com
        Server Port:            80

        Document Path:          /4f84b59c557eb79321000dfa
        Document Length:        13400 bytes

        Concurrency Level:      200
        Time taken for tests:   37.030 seconds
        Complete requests:      1000
        Failed requests:        0
        Write errors:           0
        Total transferred:      13524000 bytes
        HTML transferred:       13400000 bytes
        Requests per second:    27.01 [#/sec] (mean)
        Time per request:       7406.024 [ms] (mean)
        Time per request:       37.030 [ms] (mean, across all concurrent requests)
        Transfer rate:          356.66 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:       27   37  19.5     34     319
        Processing:    80 6273 1673.7   6907    8987
        Waiting:       47 3436 2085.2   3345    8856
        Total:        115 6310 1675.8   6940    9022

        Percentage of the requests served within a certain time (ms)
          50%   6940
          66%   6968
          75%   6988
          80%   7007
          90%   7025
          95%   7078
          98%   8410
          99%   8876
         100%   9022 (longest request)

    What can these results tell me? Isn't 27 requests per second too slow?
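
    Not from the original question, but a hedged way to read the numbers: at a concurrency of 200, each request waited about 7.4 s in total while the server averaged 27 req/s, which points at queueing rather than per-request slowness. Re-running at a lower concurrency separates the two (the flags are standard ab options):

        # same 1000 requests, but only 10 in flight at once; if per-request
        # latency drops sharply, the server saturates well below 200 clients
        ab -n 1000 -c 10 http://texteli.com/4f84b59c557eb79321000dfa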

    Read the article

  • `rsync` NEVER uses its 'famous' delta-transfer!

    - by o_O Tync
    I have a big ISO image which is currently being downloaded by a torrent client with space reservation turned on. That means the file size does not change, while some chunks (in 4 MiB pieces) are constantly changing because of the download. At 90% downloaded, I do the initial rsync to save time later:

        $ rsync -Ph DVD.iso /some/target/
        sending incremental file list
        DVD.iso
            2.60G 100%   40.23MB/s    0:01:01 (xfer#1, to-check=0/1)
        sent 2.60G bytes  received 73 bytes  34.59M bytes/sec
        total size is 2.60G  speedup is 1.00

    Then, when the file is fully downloaded, I rsync again:

        total size is 2.60G  speedup is 1.00

    Speedup=1 says the delta-transfer was not used, although 90% of the file has not changed. Why?!
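
    Not from the original post, but the likely knob: for local-to-local copies rsync defaults to --whole-file, since the delta algorithm only saves network traffic, not disk I/O. A hedged sketch forcing the delta path anyway:

        # force the delta algorithm even on a local copy, and update the
        # destination file in place instead of rewriting it from scratch
        rsync -Ph --no-whole-file --inplace DVD.iso /some/target/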

    Read the article

  • Best Configuration For Youtube HD 720p

    - by Eray Alakese
    Hello, I'm recording a screencast with CamStudio (http://camstudio.org/). My computer's screen resolution is 1280*800, so the video's resolution is 1280*800 too. I'm using the Microsoft Video 1 codec when recording. I recorded a 9-minute video, and its size is 214 MB. I will upload this video to YouTube. I'm coding a web site in the video; because of this, the video must be high quality (720p). I want to reduce the file size before uploading. I'm using Total Video Converter (http://www.effectmatrix.com/total-video-converter/), but when I convert to FLV, the video's size increases to 250 MB :) I don't know how to configure the settings or which file type I should choose. (Screenshot here: http://i.imgur.com/Se0EP.jpg)
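
    Not from the original question, but a hedged alternative to the FLV route: YouTube accepts H.264, which compresses screencasts far better than Microsoft Video 1 or FLV. A sketch with ffmpeg (file names are examples; -crf trades size for quality):

        # constant-quality H.264; lower -crf means higher quality / bigger file
        ffmpeg -i screencast.avi -c:v libx264 -crf 23 -preset slow screencast.mp4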

    Read the article

  • Many unknown processes named "sudo"

    - by joaner
    My server's free memory keeps shrinking, and many processes show "sudo" in the COMMAND column when I run top and press M (sort by memory). I don't understand why the root user would need to use "sudo". I want to know how these processes are generated. Can I kill them?

        Tasks: 185 total, 1 running, 184 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 3967848k total, 3484196k used, 483652k free, 218532k buffers
        Swap: 4112376k total, 0k used, 4112376k free, 2932864k cached

          PID USER     PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+ COMMAND
        22219 mysql    20  0  582m  67m 5492 S  0.0  1.7 0:01.75 mysqld
        22337 daemon   20  0  327m  31m 3440 S  0.0  0.8 0:01.58 httpd
        22252 daemon   20  0  321m  26m 3416 S  0.0  0.7 0:01.25 httpd
        22263 daemon   20  0  319m  23m 3396 S  0.0  0.6 0:00.71 httpd
        22253 daemon   20  0  310m  18m 3444 S  0.0  0.5 0:00.69 httpd
        22251 root     20  0 28392  12m 3640 S  0.0  0.3 0:00.09 httpd
         2422 root     20  0  9192 3608 2184 S  0.0  0.1 0:00.32 ssh
        13613 root     20  0 38220 3572 1044 S  0.0  0.1 0:22.31 rsyslogd
         2423 root     20  0 11556 3420 2692 S  0.0  0.1 0:00.11 sshd
        22570 root     20  0 11716 3408 2676 S  0.0  0.1 0:00.08 sshd
         3351 root     20  0 10384 2540 2000 S  0.0  0.1 0:00.06 sudo
        30870 root     20  0 10384 2528 2000 S  0.0  0.1 0:00.06 sudo
        14356 dkim-mil 20  0 49664 2444 1468 S  0.0  0.1 0:03.91 dkim-filter
         2085 root     20  0 10376 2344 1824 S  0.0  0.1 0:00.00 sudo
         7741 root     20  0 10376 2344 1824 S  0.0  0.1 0:00.00 sudo
        29838 root     20  0 10376 2344 1824 S  0.0  0.1 0:00.00 sudo
         2006 root     20  0 10376 2340 1824 S  0.0  0.1 0:00.00 sudo
        29747 root     20  0 10376 2340 1824 S  0.0  0.1 0:00.00 sudo
        30602 root     20  0 10376 2340 1824 S  0.0  0.1 0:00.00 sudo
        30935 root     20  0 10376 2340 1824 S  0.0  0.1 0:00.00 sudo
         2259 root     20  0 10376 2336 1824 S  0.0  0.1 0:00.00 sudo
         2503 root     20  0 10376 2336 1824 S  0.0  0.1 0:00.00 sudo
         2515 root     20  0 10376 2336 1824 S  0.0  0.1 0:00.00 sudo
         7718 root     20  0 10376 2336 1824 S  0.0  0.1 0:00.00 sudo
         7745 root     20  0 10376 2336 1824 S  0.0  0.1 0:00.00 sudo
        29845 root     20  0 10376 2336 1824 S  0.0  0.1 0:00.00 sudo
        30172 root     20  0 10376 2336 1824 S  0.0  0.1 0:00.00 sudo
        30352 root     20  0 10376 2336 1824 S  0.0  0.1 0:00.00 sudo
        30548 root     20  0 10376 2336 1824 S  0.0  0.1 0:00.00 sudo
        30598 root     20  0 10376 2336 1824 S  0.0  0.1 0:00.00 sudo
        30897 root     20  0 10376 2336 1824 S  0.0  0.1 0:00.00 sudo
        30899 root     20  0 10376 2336 1824 S  0.0  0.1 0:00.00 sudo
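
    Not from the original post, but a hedged first step: each sudo in top hides the command it is actually running, so walk the process tree to see who spawned these and what they exec'd (both tools are standard):

        # show every sudo process with its parent, owner, start time and full command line
        ps -eo pid,ppid,user,lstart,args | grep [s]udo

        # or view them in context of the whole process tree
        pstree -p | grep -C2 sudo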

    Read the article

  • Exchange - get age range of items using Powershell

    - by marcwenger
    We are going to implement personal archives for Exchange in our organization. For us to get a good grasp on how much space is needed, we need an idea of the age of the items we currently have. Is it possible to write a PowerShell script that tells me the total size and number of items in given date ranges across all mailboxes in all databases? What I'd like to have is 1) the number of items and 2) the total size of items (GB), grouped by date ranges (less than 15 days, 15-30 days, 30-60 days, 60-90 days, more than 90). Another possibility would be to have it also grouped by mailbox database.

    Read the article

  • Server not responding to SSH and HTTP but ping works

    - by yes123
    Hello guys, I requested a hard reboot because neither SSH nor HTTP worked; ping worked normally. Which logs should I check to understand what the problem was? Thanks! (Debian 6 running a LAMP stack.) Edit: my memory and swap:

        Mem:  4040068k total, 1114920k used, 2925148k free,  109212k buffers
        Swap: 1051384k total,       0k used, 1051384k free,  283820k cached

    4 GB of RAM (and more than 1 TB of HDD). The cause dates from 2 days ago: look how the swap usage jumps +60% in less than 10 hours. My control panel reports these as the top 5 processes by memory usage. If every apache2 process is 190 MB large, that's bad, because when I run top I see 262 sleeping processes, most of them apache2! My Apache mpm_prefork settings are:

        <IfModule mpm_prefork_module>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            ServerLimit        1500
            MaxClients         1500
            MaxRequestsPerChild 2000
        </IfModule>
        KeepAlive On
        MaxKeepAliveRequests 100
        KeepAliveTimeout 4
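
    Not from the original post, but a hedged sizing check based on the numbers given: 1500 prefork children at ~190 MB each could demand nearly 300 GB, so a traffic spike can exhaust 4 GB of RAM and swap long before the limit is reached. A sketch capping MaxClients to what the box can actually hold (the per-child figure is the asker's own estimate):

        # ~4096 MB RAM / ~190 MB per apache2 child  =>  ~20 children, minus headroom
        <IfModule mpm_prefork_module>
            StartServers          5
            MinSpareServers       5
            MaxSpareServers      10
            ServerLimit          20
            MaxClients           20
            MaxRequestsPerChild 2000
        </IfModule>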

    Read the article

  • rsync server side limit bandwidth/connection

    - by c2h2
    In a VoIP application, I have up to 3000 clients rsyncing audio files from their Linux server daily. The server is placed at a data center (10 Mbps inbound/outbound) and also works as a VoIP SIP server running FreeSWITCH (low ping latency must be ensured). Therefore I would like server-side control of rsync which can: limit total outbound bandwidth; limit the total number of connections (rejecting clients at the maximum number of connections and letting them retry after a specific time frame); and, OPTIONALLY, list/kill individual connections. Normally I would use ssh + rsync + PEM keys with some extra options, but the above requirements are not feasible with simple command lines. Can anyone point me in a direction, or show some scripts/tools? I would also probably integrate them and release the result on GitHub. Thanks!
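
    Not from the original post, but a hedged sketch of one common setup: run rsync as a daemon, which natively enforces a connection cap (clients beyond it are refused and can simply retry), and shape egress with tc for the bandwidth cap. The module name, path, and rates below are made up:

        # /etc/rsyncd.conf
        [audio]
            path = /var/lib/audio
            read only = yes
            max connections = 50        # extra clients get an error and can retry
            lock file = /var/run/rsyncd.lock

        # cap total egress at ~8 Mbit, leaving headroom on the 10 Mbps link for SIP/RTP
        tc qdisc add dev eth0 root tbf rate 8mbit burst 32kbit latency 400ms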

    Read the article

  • How to change resolution in Ubuntu 10.04, where xvinfo is showing no adapters present?

    - by YumYum
    I am trying to maximize my resolution; right now I have Resolution: 800x600 (4:3) and Refresh rate: 61 Hz. I tried the following, but it did not work:

        $ xvinfo
        X-Video Extension version 2.2
        screen #0
         no adaptors present

        $ cvt 1920 1080
        # 1920x1080 59.96 Hz (CVT 2.07M9) hsync: 67.16 kHz; pclk: 173.00 MHz
        Modeline "1920x1080_60.00"  173.00  1920 2048 2248 2576  1080 1083 1088 1120 -hsync +vsync

        $ xrandr --newmode clever_name 173.00 1920 2048 2248 2576 1080 1083 1088 1120

        $ xrandr -q
        Screen 0: minimum 640 x 480, current 800 x 600, maximum 800 x 600
        default connected 800x600+0+0 0mm x 0mm
           800x600        61.0*
           640x480        60.0
          clever_name (0x11d)  173.0MHz
                h: width  1920 start 2048 end 2248 total 2576 skew 0 clock 67.2KHz
                v: height 1080 start 1083 end 1088 total 1120        clock 60.0Hz

        $ xrandr --addmode default clever_name
        $ xrandr --output default --mode clever_name
        xrandr: Configure crtc 0 failed
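
    Not from the original question, but a hedged reading of that output: an output literally named "default" with 0mm x 0mm dimensions and an 800x600 hard maximum usually means X fell back to an unaccelerated driver rather than the real GPU driver. Worth checking which one is in use:

        # identify the GPU, then see which driver X actually loaded
        lspci | grep -i vga
        grep -iE 'vesa|fbdev|driver' /var/log/Xorg.0.log | head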

    Read the article

  • MS Access Query Criteria Issue

    - by xxl3ww
    Currently I have an MS Access query with a field named FedEXDetTotal that totals 9 FedEx charge fields. I have another field, "Total Charge", that comes from our in-house system; this is just a normal number field. I have created another field in this query: Diff: [FedEXDetTotal]-[Total Charge]. This tells me the difference between the FedEx charge and what we actually charged. Everything works OK, but when I put the criteria 5 on the Diff field and run the query, I get a prompt saying "Enter Parameter Value: FedEXDetTotal". Why is Access doing this? How do I get around it? I'm trying to start out with something simple (5), but what I really want is [Forms]![Dis].[txtbox_Diff].
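
    Not from the original question, but a hedged workaround that often sidesteps this prompt: when Access cannot resolve a name referenced inside a criterion, it treats it as a parameter, so repeating the underlying expression instead of the alias avoids the lookup (the source query name below is hypothetical):

        SELECT [FedEXDetTotal]-[Total Charge] AS Diff
        FROM qryFedExCharges
        WHERE ([FedEXDetTotal]-[Total Charge]) = 5;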

    Read the article

  • How can I get the size of an Amazon S3 bucket?

    - by Garret Heaton
    I'd like to graph the size (in bytes and number of items) of an Amazon S3 bucket and am looking for an efficient way to get the data. The s3cmd tools provide a way to get the total file size using s3cmd du s3://bucket_name, but I'm worried about its ability to scale, since it looks like it fetches data about every file and calculates its own sum. Since Amazon charges users in GB-months, it seems odd that they don't expose this value directly. Although Amazon's REST API returns the number of items in a bucket, s3cmd doesn't seem to expose it. I could do s3cmd ls -r s3://bucket_name | wc -l, but that seems like a hack. The Ruby AWS::S3 library looked promising, but it only provides the number of bucket items, not the total bucket size. Is anyone aware of any other command-line tools or libraries (Perl, PHP, Python, or Ruby preferred) which provide ways of getting this data?
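
    Not from the original question, but a hedged sketch using the boto library, one of the preferred Python options at the time; note it iterates every key, so it carries the same scaling caveat as s3cmd du:

        # sums size and count for one bucket; the bucket name is the question's example
        import boto

        conn = boto.connect_s3()                 # reads AWS keys from the environment
        bucket = conn.get_bucket('bucket_name')
        total_bytes = 0
        count = 0
        for key in bucket.list():                # paginated listing under the hood
            total_bytes += key.size
            count += 1
        print '%d items, %.2f GB' % (count, total_bytes / float(1024 ** 3))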

    Read the article

  • How to remove bad disk from LVM2 with the less data loss on other PVs?

    - by Walkman
    I had an LVM2 volume with two disks. The larger disk became corrupt, so I can't pvmove. What is the best way to remove it from the group while saving the most data on the other disk? Here is my pvdisplay output:

        Couldn't find device with uuid WWeM0m-MLX2-o0da-tf7q-fJJu-eiGl-e7UmM3.
        --- Physical volume ---
        PV Name               unknown device
        VG Name               media
        PV Size               1,82 TiB / not usable 1,05 MiB
        Allocatable           yes (but full)
        PE Size               4,00 MiB
        Total PE              476932
        Free PE               0
        Allocated PE          476932
        PV UUID               WWeM0m-MLX2-o0da-tf7q-fJJu-eiGl-e7UmM3

        --- Physical volume ---
        PV Name               /dev/sdb1
        VG Name               media
        PV Size               931,51 GiB / not usable 3,19 MiB
        Allocatable           yes (but full)
        PE Size               4,00 MiB
        Total PE              238466
        Free PE               0
        Allocated PE          238466
        PV UUID               oUhOcR-uYjc-rNTv-LNBm-Z9VY-TJJ5-SYezce

    So I want to remove the unknown device (not present in the system). Is it possible to do this without a new disk? The filesystem is ext4.
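
    Not from the original post, and a destructive path, so treat this as a hedged sketch only: the standard commands below will permanently discard every extent that lived on the dead PV, and the LV name is a placeholder.

        vgreduce --removemissing --force media   # forget the missing PV (drops affected extents)
        lvchange -ay /dev/media/<lv>             # reactivate what is left
        fsck.ext4 -y /dev/media/<lv>             # let ext4 salvage what it can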

    Read the article

  • Upgrading from 32 to 64 Bit Windows 7, without losing program installations/games

    - by Fogest
    I recently built a new computer and put the wrong installation of Windows on it (32-bit), meaning I cannot use all of my RAM. I would like to upgrade to the 64-bit version, but I have already downloaded many programs and games totalling around 30 GB, give or take. I don't have the kind of data allowance with my ISP to re-download this much data until next month (and the total will be higher as time goes on). I know there is Windows Easy Transfer, but it is not so much my data I'm worried about; it is having to re-download and re-install a bunch of games and applications. Is it possible to perform an upgrade from 32-bit to 64-bit without this loss?

    Read the article

  • Resumable upload from Java client to Grails web application?

    - by dersteps
    After almost 2 workdays of Googling and trying several different possibilities I found throughout the web, I'm asking this question here, hoping that I might finally get an answer.

    First of all, here's what I want to do: I'm developing a client and a server application for exchanging a lot of large files between multiple clients and a single server. The client is developed in pure Java (JDK 1.6), while the web application is done in Grails (2.0.0). As the purpose of the client is to allow users to exchange a lot of large files (usually about 2 GB each), I have to implement uploads so that they are resumable, i.e. users are able to stop and resume uploads at any time.

    Here's what I did so far: I actually managed to stream large files to the server, while still being able to pause and resume uploads, using raw sockets. I would send a regular request to the server (using Apache's HttpClient library) to get the server to send me a free port, then open a ServerSocket on the server and connect to that particular socket from the client.

    Here's the problem with that: actually, there are at least two problems. First, I open those ports myself, so I have to manage open and used ports myself, which is quite error-prone. Second, I circumvent Grails' ability to manage a huge number of (concurrent) connections.

    Finally, here's what I'm supposed to do now, and the problem: as the problems above are unacceptable, I am now supposed to use Java's URLConnection/HttpURLConnection classes while still sticking to Grails. Connecting to the server and sending simple requests is no problem at all; everything worked fine. The problems started when I tried to use the streams (the connection's OutputStream in the client and the request's InputStream on the server). Opening the client's OutputStream and writing data to it is as easy as it gets, but reading from the request's InputStream seems impossible to me, as that stream always appears to be empty.

    Example code. Here's an example of the server side (Groovy controller):

        def test() {
            InputStream inStream = request.inputStream
            if (inStream != null) {
                int read = 0;
                byte[] buffer = new byte[4096];
                long total = 0;
                println "Start reading"
                while ((read = inStream.read(buffer)) != -1) {
                    println "Read " + read + " bytes from input stream buffer" // <-- this is NEVER called
                }
                println "Reading finished"
                println "Read a total of " + total + " bytes" // <-- 'total' will always be 0 (zero)
            } else {
                println "Input Stream is null" // <-- This is NEVER called
            }
        }

    This is what I did on the client side (Java class):

        public void connect() throws IOException {
            final URL url = new URL("myserveraddress");
            final byte[] message = "someMessage".getBytes(); // Any byte[] - will be a file one day
            HttpURLConnection connection = (HttpURLConnection) url.openConnection();
            connection.setRequestMethod("GET"); // other methods - same result

            // Write message
            DataOutputStream out = new DataOutputStream(connection.getOutputStream());
            out.write(message);
            out.flush();
            out.close();

            // Actually connect
            connection.connect(); // is this placed correctly?

            // Get response
            BufferedReader in = new BufferedReader(new InputStreamReader(connection.getInputStream()));
            String line = null;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // Prints the whole server response as expected
            }
            in.close();
        }

    As I mentioned, the problem is that request.inputStream always yields an empty InputStream, so I am never able to read anything from it. But since that is exactly what I'm trying to do (stream the file to the server, read from the InputStream, and save it to a file), this is rather disappointing. I tried different HTTP methods, different data payloads, and rearranged the code over and over again, but did not manage to solve the problem.

    What I hope to find: I hope to find a solution to my problem, of course. Anything is highly appreciated: hints, code snippets, library suggestions, and so on. Maybe I'm even getting it all wrong and need to go in a totally different direction. So, how can I implement resumable file uploads for rather large (binary) files, from a Java client to a Grails web application, without manually opening ports on the server side?
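
    Not an answer from the thread, but one hedged observation about the snippet above: HttpURLConnection only sends a request body after setDoOutput(true) has been called, and a POST with a non-form content type also keeps Grails from consuming the body as request parameters before the controller runs. A sketch of the client under those assumptions (JDK 1.6 compatible):

        final URL url = new URL("myserveraddress");
        final byte[] message = "someMessage".getBytes();

        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestMethod("POST");
        connection.setDoOutput(true); // without this there is no request body at all
        connection.setRequestProperty("Content-Type", "application/octet-stream");
        connection.setChunkedStreamingMode(8192); // stream instead of buffering 2 GB in RAM

        OutputStream out = connection.getOutputStream();
        try {
            out.write(message);
        } finally {
            out.close();
        }
        System.out.println(connection.getResponseCode()); // forces the exchange to complete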

    Read the article

< Previous Page | 39 40 41 42 43 44 45 46 47 48 49 50  | Next Page >