Search Results

Search found 4893 results on 196 pages for 'expect'.


  • iptables - quick safety eval & limit max conns over time

    - by Peter Hanneman
    Working on locking down a *nix server box with some fancy iptables (v1.4.4) rules. I'm approaching the matter with a "paranoid, everyone's out to get me" style, not necessarily because I expect the box to be a hacker magnet but rather just for the sake of learning iptables and *nix security more thoroughly. Everything is well commented - so if anyone sees something I missed please let me know! The *nat table's "--to-ports" point to the only ports with actively listening services (aside from pings). Layer 2 apps listen exclusively on chmod'ed sockets bridged by one of the layer 1 daemons. Layers 3+ inherit from layer 2 in a similar fashion. The two lines giving me grief are commented out at the very bottom of the *filter rules. The first line runs fine but it's all or nothing. :) Many thanks, Peter H.

        *nat
        #Flush previous rules, chains and counters for the 'nat' table
        -F
        -X
        -Z
        #Redirect traffic to alternate internal ports
        -I PREROUTING --src 0/0 -p tcp --dport 80 -j REDIRECT --to-ports 8080
        -I PREROUTING --src 0/0 -p tcp --dport 443 -j REDIRECT --to-ports 8443
        -I PREROUTING --src 0/0 -p udp --dport 53 -j REDIRECT --to-ports 8053
        -I PREROUTING --src 0/0 -p tcp --dport 9022 -j REDIRECT --to-ports 8022
        COMMIT

        *filter
        #Flush previous settings, chains and counters for the 'filter' table
        -F
        -X
        -Z
        #Set default behavior for all connections and protocols
        -P INPUT DROP
        -P OUTPUT DROP
        -A FORWARD -j DROP
        #Only accept loopback traffic originating from the local NIC
        -A INPUT -i lo -j ACCEPT
        -A INPUT ! -i lo -d 127.0.0.0/8 -j DROP
        #Accept all outgoing non-fragmented traffic having a valid state
        -A OUTPUT ! -f -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT
        #Drop fragmented incoming packets (Not always malicious - acceptable for use now)
        -A INPUT -f -j DROP
        #Allow ping requests rate limited to one per second (burst ensures reliable results for high latency connections)
        -A INPUT -p icmp --icmp-type 8 -m limit --limit 1/sec --limit-burst 2 -j ACCEPT
        #Declaration of custom chains
        -N INSPECT_TCP_FLAGS
        -N INSPECT_STATE
        -N INSPECT
        #Drop incoming tcp connections with invalid tcp-flags
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ALL ALL -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ALL NONE -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ACK,FIN FIN -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ACK,PSH PSH -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ACK,URG URG -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags SYN,FIN SYN,FIN -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ALL FIN,PSH,URG -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags FIN,RST FIN,RST -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags SYN,RST SYN,RST -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ALL SYN,FIN,PSH,URG -j DROP
        -A INSPECT_TCP_FLAGS -p tcp --tcp-flags ALL SYN,RST,ACK,FIN,URG -j DROP
        #Accept incoming traffic having either an established or related state
        -A INSPECT_STATE -m state --state ESTABLISHED,RELATED -j ACCEPT
        #Drop new incoming tcp connections if they aren't SYN packets
        -A INSPECT_STATE -m state --state NEW -p tcp ! --syn -j DROP
        #Drop incoming traffic with invalid states
        -A INSPECT_STATE -m state --state INVALID -j DROP
        #INSPECT chain definition
        -A INSPECT -p tcp -j INSPECT_TCP_FLAGS
        -A INSPECT -j INSPECT_STATE
        #Route incoming traffic through the INSPECT chain
        -A INPUT -j INSPECT
        #Accept redirected HTTP traffic via HA reverse proxy
        -A INPUT -p tcp --dport 8080 -j ACCEPT
        #Accept redirected HTTPS traffic via STUNNEL SSH gateway (As well as tunneled HTTPS traffic destined for other services)
        -A INPUT -p tcp --dport 8443 -j ACCEPT
        #Accept redirected DNS traffic for NSD authoritative nameserver
        -A INPUT -p udp --dport 8053 -j ACCEPT
        #Accept redirected SSH traffic for OpenSSH server
        #Temp solution:
        -A INPUT -p tcp --dport 8022 -j ACCEPT
        #Ideal solution:
        #Limit new ssh connections to max 10 per 10 minutes while allowing an "unlimited" (or better reasonably limited?) number of established connections.
        #-A INPUT -p tcp --dport 8022 --state NEW,ESTABLISHED -m recent --set -j ACCEPT
        #-A INPUT -p tcp --dport 8022 --state NEW -m recent --update --seconds 600 --hitcount 11 -j DROP
        COMMIT

        *mangle
        #Flush previous rules, chains and counters in the 'mangle' table
        -F
        -X
        -Z
        COMMIT
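
    A possible shape for those two commented-out lines, offered only as a sketch and not from the original post: as written they would be rejected because --state needs the state match loaded with -m state, and since the INSPECT_STATE chain above already accepts ESTABLISHED/RELATED traffic, only NEW packets ever reach the port rules, so the limit only needs to track new connections.

        #Sketch: drop a source address that opens more than 10 new connections to 8022 within 600 seconds
        -A INPUT -p tcp --dport 8022 -m state --state NEW -m recent --name SSH --rcheck --seconds 600 --hitcount 10 -j DROP
        -A INPUT -p tcp --dport 8022 -m state --state NEW -m recent --name SSH --set -j ACCEPT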

    Read the article

  • Linux software RAID6: rebuild slow

    - by Ole Tange
    I am trying to find the bottleneck in the rebuilding of a software raid6.

        ## Pause rebuilding when measuring raw I/O performance
        # echo 1 > /proc/sys/dev/raid/speed_limit_min
        # echo 1 > /proc/sys/dev/raid/speed_limit_max
        ## Drop caches so that does not interfere with measuring
        # sync ; echo 3 | tee /proc/sys/vm/drop_caches >/dev/null
        # time parallel -j0 "dd if=/dev/{} bs=256k count=4000 | cat >/dev/null" ::: sdbd sdbc sdbf sdbm sdbl sdbk sdbe sdbj sdbh sdbg
        4000+0 records in
        4000+0 records out
        1048576000 bytes (1.0 GB) copied, 7.30336 s, 144 MB/s
        [... similar for each disk ...]
        # time parallel -j0 "dd if=/dev/{} skip=15000000 bs=256k count=4000 | cat >/dev/null" ::: sdbd sdbc sdbf sdbm sdbl sdbk sdbe sdbj sdbh sdbg
        4000+0 records in
        4000+0 records out
        1048576000 bytes (1.0 GB) copied, 12.7991 s, 81.9 MB/s
        [... similar for each disk ...]

    So we can read sequentially at 140 MB/s in the outer tracks and 82 MB/s in the inner tracks on all the drives simultaneously. Sequential write performance is similar. This would lead me to expect a rebuild speed of 82 MB/s or more.

        # echo 800000 > /proc/sys/dev/raid/speed_limit_min
        # echo 800000 > /proc/sys/dev/raid/speed_limit_max
        # cat /proc/mdstat
        md2 : active raid6 sdbd[10](S) sdbc[9] sdbf[0] sdbm[8] sdbl[7] sdbk[6] sdbe[11] sdbj[4] sdbi[3](F) sdbh[2] sdbg[1]
              27349121408 blocks super 1.2 level 6, 128k chunk, algorithm 2 [9/8] [UUU_UUUUU]
              [=========>...........]  recovery = 47.3% (1849905884/3907017344) finish=855.9min speed=40054K/sec

    But we only get 40 MB/s. And often this drops to 30 MB/s.

        # iostat -dkx 1
        sdbc 0.00 8023.00 0.00 329.00 0.00 33408.00 203.09 0.70 2.12 1.06 34.80
        sdbd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
        sdbe 13.00 0.00 8334.00 0.00 33388.00 0.00 8.01 0.65 0.08 0.06 47.20
        sdbf 0.00 0.00 8348.00 0.00 33388.00 0.00 8.00 0.58 0.07 0.06 48.00
        sdbg 16.00 0.00 8331.00 0.00 33388.00 0.00 8.02 0.71 0.09 0.06 48.80
        sdbh 961.00 0.00 8314.00 0.00 37100.00 0.00 8.92 0.93 0.11 0.07 54.80
        sdbj 70.00 0.00 8276.00 0.00 33384.00 0.00 8.07 0.78 0.10 0.06 48.40
        sdbk 124.00 0.00 8221.00 0.00 33380.00 0.00 8.12 0.88 0.11 0.06 47.20
        sdbl 83.00 0.00 8262.00 0.00 33380.00 0.00 8.08 0.96 0.12 0.06 47.60
        sdbm 0.00 0.00 8344.00 0.00 33376.00 0.00 8.00 0.56 0.07 0.06 47.60

    iostat says the disks are not 100% busy (but only 40-50%). This fits with the hypothesis that the max is around 80 MB/s. Since this is software raid the limiting factor could be CPU. top says:

        PID   USER PR NI VIRT RES SHR S %CPU %MEM TIME+     COMMAND
        38520 root 20  0    0   0   0 R   64  0.0 2947:50   md2_raid6
        6117  root 20  0    0   0   0 D   53  0.0 473:25.96 md2_resync

    So md2_raid6 and md2_resync are clearly busy taking up 64% and 53% of a CPU respectively, but not near 100%. The chunk size (128k) of the RAID was chosen after measuring which chunksize gave the least CPU penalty. If this speed is normal: What is the limiting factor? Can I measure that? If this speed is not normal: How can I find the limiting factor? Can I change that?
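
    One knob that is often checked in this situation, offered as a sketch and an assumption rather than a known fix: the raid5/raid6 personality keeps its own stripe cache, and a small cache can hold a resync well below what the raw disks can stream.

        # Sketch: inspect and enlarge md2's stripe cache (value is in pages per member device),
        # then watch whether the recovery speed in /proc/mdstat moves.
        cat /sys/block/md2/md/stripe_cache_size
        echo 8192 > /sys/block/md2/md/stripe_cache_size
        watch -n 5 'grep -A 2 "^md2" /proc/mdstat'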

    Read the article

  • Asus P8P67 Rev. 3.1 Motherboard issues powering on and saving settings

    - by Scott
    Edit: New Information Have some updated information from the old question below: So basically my issue right now is somewhat similar, but I've been able to rule out a couple of things. I don't think this has anything to do with light on the motherboard. No matter what lights are on/off on the motherboard when the computer is off, they don't affect this issue. The main power LED on the Mobo is always lit when the power supply is turned on, and that's what matters anyway. Even when the main power LED is on, the PC will NOT boot up the first time I hit the power switch. I have to go reset the power supply (make all lights turn off on the Mobo and back on), and THEN hit the power switch. Then everything boots up. Also, the BIOS settings are reset every time this happens. Asus Tech Support told me to try jumping the power with something metal to try and rule out that it's a problem with the connectors getting power, or if it's a problem with the case power switch pins - haven't done that yet though. Any ideas? This is a lot simpler than it was before when I thought it had to do with certain LED indicators for RAM, EPU, etc. Original Question So I built my new desktop just about 3 weeks ago. I've been having a few issues which I think are all related to my motherboard, an Asus P8P67 Revision 3.1, but I'm not 100% sure as this is really the first from-scratch build I've ever done. I've posted these questions on the Asus forums, Asus Tech Support, and the Corsair forums as well as I thought it might have something to do with my power supply at one point. None of these avenues have solved my issue until now completely, so I thought I'd come here to see what you guys think. Here's what's happening: My computer is off, and I go to power it on. I press the power switch on the case (Antec Nine Hundred), and nothing seems to happen. Upon further inspection, I see that what this actually does is simply turn on the EPU LED on my motherboard, but doesn't actually boot anything up. I then have to go and flip the main power switch on the power supply off and back on. What this does is turn off all lights on the Motherboard after a few seconds, and turn them all back on (including the EPU LED that was off before I hit the power switch the first time). Now, hitting the power switch works. The machine boots up fine, and starts going through the boot up process. As a side note: My Motherboard is set to "Force BIOS", and every single time I change this to do the opposite, the next time my computer boots up that change reverts itself. I think this may be due to the fact that I am doing the hard reset on the power supply each time, but I'm not sure. I had thought that the Motherboard would keep its BIOS settings unless you did something to the Mobo itself - so this may be a related issue, or something else completely. That's basically it. Once it's on, it's on. It works fine, recognizes all of my hardware, and runs great. All fans/lights in the case work great, and I'm getting standard readings. The next time I go to shut the computer down however, I can expect the same exact process getting it up and running, including being forced to go into BIOS and exit again before I can load Windows. Another side note: If I power on my computer using the power switch DIRECTLY after shutting it down, it powers right back on (I think this is because the EPU LED light doesn't have time to turn off). 
    It looks as if, as long as the EPU LED is lit up on the motherboard before I hit the power switch on the case, the thing will boot up fine (although this doesn't explain the "Force BIOS" issue, at least it's something). Any ideas? Thanks guys.
    P.S. - System Specs:
    Asus P8P67 Rev. 3.1 Motherboard
    Intel Core i7 2600K Processor
    16GB (4x4GB) G-Skill 1600 RAM
    NVIDIA EVGA GTX 570 Video Card
    Crucial 128GB SSD HD
    Corsair 850W Power Supply
    Seagate 2TB HDD

    Read the article

  • Determining the required depth and specifications for a server cabinet

    - by Bingu Bingme
    I'm trying to understand the considerations ("why") that go into determining the specifications ("what") for a rackmount server cabinet, in order to determine what sort of rack I should purchase for my home use. Since this is for home use, I won't be following certain best practices (eg. hot/cold aisle, not even air conditioning) and may be willing to sacrifice in various areas in order to reduce cost and footprint - but please advise if there are safety concerns or other considerations to note. The most basic specs for a server cabinet are the dimensions (external width x external depth x usable height). Width: commonly 600mm or 800mm (if the use case requires extra clearance around the sides, such as if there is lots of cabling). In my case and most common cases, I'm going to stick with 600mm. Height: Select a sufficiently tall rack to fit my equipment. But how much may I stuff into it? Eg, if there is a 15U rack, can I really populate it with 15U of servers, or should I leave 1U at top and bottom for air circulation? Depth: Racks commonly have external depth of 600mm (network equipment), 800mm, 1000mm, or even longer. I'm trying to see how to fit into the 800mm depth. With reference to http://www.server-racks.com/rack-mount-depth.html, I'm hoping to have the front and rear posts mounted ~ 28.5" (72cm) apart, which would leave only 8cm for front space and rear space. How much rear space (from rear posts to back of rack) do I really need? I won't use cable management arms, so can I mount a 72cm depth server since the power, KVM, network cables won't take up much depth? My most important equipment are all < 60cm depth (4U chassis) and should comfortably fit within the 800mm cabinet. The rest of the equipment are very old 1U servers that range from 65-72cm depth. I might still want to make further use of them, or I might discard them since they are so old. Even if the 72cm servers cannot be powered on in an 800mm rack, I should be able to use them as 1U shelves. But, what server depth can I expect to be able to operate? Or am I forced to upgrade to 1000mm depth racks in order to use any servers deeper than 60cm? With reference to best practices for HP racks, some other specs and installation considerations: There aren't any minimum recommendations for clearance on the sides of the rack. It is recommended to leave 48" front clearance. The 48" front clearance is based on 32" chassis depth, 13" to extend the rack rails and mate the inner/outer rails, and 3" for movement. If I don't use such rails (eg, use shelves instead), it should be sufficient to leave front clearance of chassis depth + 3". It is recommended to leave 30" rear clearance "to provide space for servicing the rack". I'm planning to back the rack into a corner of the room, and wheel it slightly out when I need to access the rear. If the wheeling plan is ok, I still need to know how much rear clearance is required for air circulation and ventilation purposes. Castor wheels and stabilising feet. Since I'm backing the rack into a corner of the room, I'll only be able to set the stabilising feet on the front corners. Thoughts on safety? The rack that I'm considering has front glass doors with side ventilation slits and fully perforated rear doors. I'm hoping this will be a good balance between temperature and noise (only ventilation slits facing out the front, while the rear is facing the walls). Or is the sound of high-rpm fans going to escape through the front slits anyway and destroy my sanity?

    Read the article

  • Lustre - issues with simple setup

    - by ethrbunny
    Issue: I'm trying to assess the (possible) use of Lustre for our group. To this end I've been trying to create a simple system to explore the nuances. I can't seem to get past the 'llmount.sh' test with any degree of success. What I've done: Each system (throwaway PCs with 70Gb HD, 2Gb RAM) is formatted with CentOS 6.2. I then update everything and install the Lustre kernel from downloads.whamcloud.com and add on the various (appropriate) lustre and e2fs RPM files. Systems are rebooted and tested with 'llmount.sh' (and then cleared with 'llmountcleanup.sh'). All is well to this point. First I create an MDS/MDT system via: /usr/sbin/mkfs.lustre --mgs --mdt --fsname=lustre --device-size=200000 --param sys.timeout=20 --mountfsoptions=errors=remount-ro,user_xattr,acl --param lov.stripesize=1048576 --param lov.stripecount=0 --param mdt.identity_upcall=/usr/sbin/l_getidentity --backfstype ldiskfs --reformat /tmp/lustre-mdt1 and then mkdir -p /mnt/mds1 mount -t lustre -o loop,user_xattr,acl /tmp/lustre-mdt1 /mnt/mds1 Next I take 3 systems and create a 2Gb loop mount via: /usr/sbin/mkfs.lustre --ost --fsname=lustre --device-size=200000 --param sys.timeout=20 --mgsnode=lustre_MDS0@tcp --backfstype ldiskfs --reformat /tmp/lustre-ost1 mkdir -p /mnt/ost1 mount -t lustre -o loop /tmp/lustre-ost1 /mnt/ost1 The logs on the MDT box show the OSS boxes connecting up. All appears ok. Last I create a client and attach to the MDT box: mkdir -p /mnt/lustre mount -t lustre -o user_xattr,acl,flock luster_MDS0@tcp:/lustre /mnt/lustre Again, the log on the MDT box shows the client connection. Appears to be successful. Here's where the issues (appear to) start. If I do a 'df -h' on the client it hangs after showing the system drives. If I attempt to create files (via 'dd') on the lustre mount the session hangs and the job can't be killed. Rebooting the client is the only solution. If I do a 'lctl dl' from the client it shows that only 2/3 OST boxes are found and 'UP'. [root@lfsclient0 etc]# lctl dl 0 UP mgc MGC10.127.24.42@tcp 282d249f-fcb2-b90f-8c4e-2f1415485410 5 1 UP lov lustre-clilov-ffff880037e4d400 00fc176e-3156-0490-44e1-da911be9f9df 4 2 UP lmv lustre-clilmv-ffff880037e4d400 00fc176e-3156-0490-44e1-da911be9f9df 4 3 UP mdc lustre-MDT0000-mdc-ffff880037e4d400 00fc176e-3156-0490-44e1-da911be9f9df 5 4 UP osc lustre-OST0000-osc-ffff880037e4d400 00fc176e-3156-0490-44e1-da911be9f9df 5 5 UP osc lustre-OST0003-osc-ffff880037e4d400 00fc176e-3156-0490-44e1-da911be9f9df 5 Doing a 'lfs df' from the client shows: [root@lfsclient0 etc]# lfs df UUID 1K-blocks Used Available Use% Mounted on lustre-MDT0000_UUID 149944 16900 123044 12% /mnt/lustre[MDT:0] OST0000 : inactive device OST0001 : Resource temporarily unavailable OST0002 : Resource temporarily unavailable lustre-OST0003_UUID 187464 24764 152636 14% /mnt/lustre[OST:3] filesystem summary: 187464 24764 152636 14% /mnt/lustre Given that each OSS box has a 2Gb (loop) mount I would expect to see this reflected in available size. There are no errors on the MDS/MDT box to indicate that multiple OSS/OST boxes have been lost. EDIT: each system has all other systems defined in /etc/hosts and entries in iptables to provide access. SO: I'm clearly making several mistakes. Any pointers as to where to start correcting them?
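
    One detail worth double-checking in the post itself: the OST format lines use --mgsnode=lustre_MDS0@tcp while the client mount line uses luster_MDS0@tcp; if that is not just a typo in the write-up it could matter. Beyond that, a sketch of the usual first checks when OSTs show up inactive, assuming the node names used above:

        # From the client: can LNET actually reach the MDS and each OSS?
        lctl ping lustre_MDS0@tcp
        lctl ping lustre_OSS0@tcp        # repeat for every OSS (hypothetical names)
        lfs check servers                # per-target RPC check from the client
        # On each OSS: is the OST still mounted?
        mount -t lustre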

    Read the article

  • Computer not POSTing "randomly"?

    - by smoth190
    I have a custom built PC that is exhibiting some...odd... behavior, something I've never seen before. It was working fine one day, and the next day, it wouldn't start. Seeing as I wanted an upgrade anyway, I purchased a new motherboard that was compatible with all my parts. While replacing the motherboard, I accidentally damaged the CPU. Well, I wanted a new one anyway... so I got a new one. Seeing as I was replacing a ton of parts already, I bought a new PSU because the old one was super loud. When I slapped it all together, it starts up, lights, fans, drives, they all start. But I get no display from the monitor. No beeps, which I believe means it doesn't POST. I figured it was the RAM, because after removing the sound card and graphics card, there was nothing else that I hadn't replaced. When I remove both sticks of RAM, I get a continuous beeping, and according to the mobo handbook, means no RAM. So I think the mobo is functional, or atleast partly. I bought new RAM, but it still didn't work. I tried 3 monitors, with both VGA and DIV. So it's probably not the monitor, either. Now, let me get to the random part. Every 20 or so boots (I should also mention, for about 3 out of 5 boots I have to unplug the PC because it won't powerdown via the button), it will POST and I'll get display. Then, after about 2 or 3 resets, it won't work again. This confuses me so much, because even when I change nothing, it will/will not work. My thought is that maybe it has something to do with the RAM not clearing or something. I also reset the CMOS battery, incase that had anything to do with it, but no eval. I found some weird suggestion online about holding the power button for 30 seconds while it was unplugged. That did nothing, but I didn't expect it would... I've replaced just about the entire computer, and all the parts are compatible. Done about everything I can think of, but nothing has worked. Hopefully someone can help me here. And as I side note: When I do get my computer to boot, it says my hardware has changed and I have to re-activate windows. But it says I have to call Microsoft to do it. So I get this fancy automated voice that asks me to enter in a code into windows, then it asks me "How many computers have you activated with this copy of Windows?". Well, I had it on my computer before I replaced everything, so I said 1. Then he yelled at me for violating my 1-use license. I dunno what's going on there, do I have to re-purchase Windows 7? And they wonder why people pirate software... That's just a bonus question, though. Specs: 8GB of DDR2 RAM (Corsair) AMD CPU (I don't know what GHz or model because I can't find the box... (I think its 4 cores of 2.8Ghz) ASRock A785GM-LE Motherboard

    Read the article

  • Is this valid JFS partition?

    - by Coolmax
    This is my first question on StackExchange. My teacher gave my his laptop (with Fedora 16 on it) and compact flash card with data. He want to have access to files on card, but he couldn't get access to it. The problem is Linux don't know what type of partion is. I suppose there is JFS: root@debian:~# dmesg |grep sdc [ 9066.908223] sd 3:0:0:1: [sdc] 3940272 512-byte logical blocks: (2.01 GB/1.87 GiB) [ 9066.962307] sd 3:0:0:1: [sdc] Write Protect is off [ 9066.962310] sd 3:0:0:1: [sdc] Mode Sense: 03 00 00 00 [ 9066.962312] sd 3:0:0:1: [sdc] Assuming drive cache: write through [ 9067.028420] sd 3:0:0:1: [sdc] Assuming drive cache: write through [ 9067.028637] sdc: unknown partition table [ 9067.097065] sd 3:0:0:1: [sdc] Assuming drive cache: write through [ 9067.097281] sd 3:0:0:1: [sdc] Attached SCSI removable disk and some of data: root@debian:~# hexdump -Cn 65536 /dev/sdc 00000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| * 00008000 4a 46 53 31 01 00 00 00 48 63 0e 00 00 00 00 00 |JFS1....Hc......| 00008010 00 10 00 00 0c 00 03 00 00 02 00 00 09 00 00 00 |................| 00008020 00 20 00 00 00 09 20 10 02 00 00 00 00 00 00 00 |. .... .........| 00008030 04 00 00 00 26 00 00 00 02 00 00 00 24 00 00 00 |....&.......$...| 00008040 41 03 00 00 16 00 00 00 00 02 00 00 a0 cc 01 00 |A...............| 00008050 37 00 00 00 69 cc 01 00 b6 d8 ac 4b 00 00 00 00 |7...i......K....| 00008060 32 00 00 00 02 00 00 00 00 00 00 00 00 00 00 00 |2...............| 00008070 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 00008080 00 00 00 00 00 00 00 00 90 15 e5 5f e3 c4 45 fa |..........._..E.| 00008090 9d 6a 5c b5 4f da 62 1a 00 00 00 00 00 00 00 00 |.j\.O.b.........| 000080a0 00 00 00 00 00 00 00 00 c3 c9 01 00 ed 81 00 00 |................| 000080b0 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 000080c0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| * [cut] * 0000f000 4a 46 53 31 01 00 00 00 48 63 0e 00 00 00 00 00 |JFS1....Hc......| 0000f010 00 10 00 00 0c 00 03 00 00 02 00 00 09 00 00 00 |................| 0000f020 00 20 00 00 00 09 20 10 00 00 00 00 00 00 00 00 |. .... .........| 0000f030 04 00 00 00 26 00 00 00 02 00 00 00 24 00 00 00 |....&.......$...| 0000f040 41 03 00 00 16 00 00 00 00 02 00 00 a0 cc 01 00 |A...............| 0000f050 37 00 00 00 69 cc 01 00 b6 d8 ac 4b 00 00 00 00 |7...i......K....| 0000f060 32 00 00 00 02 00 00 00 00 00 00 00 00 00 00 00 |2...............| 0000f070 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 0000f080 00 00 00 00 00 00 00 00 90 15 e5 5f e3 c4 45 fa |..........._..E.| 0000f090 9d 6a 5c b5 4f da 62 1a 00 00 00 00 00 00 00 00 |.j\.O.b.........| 0000f0a0 00 00 00 00 00 00 00 00 c3 c9 01 00 ed 81 00 00 |................| 0000f0b0 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| 0000f0c0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| * 00010000 I'm total newbie to filesystems. I googled and found that JFS superblock may starts on 0x8000 offset. But what next? How to mount this card? If there would be normal partition table I would expect 55 AA on 510th and 511th byte, but first 8000 bytes are clear. Any help would be greatly appreciated. And sorry for my bad english :) Kind regards.
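
    0x8000 (32 KiB) is where JFS keeps its primary superblock, so the dump is consistent with a JFS filesystem written straight onto the card with no partition table at all - which is also why there is no 55 AA signature in the first sector; that signature belongs to an MBR, not to a filesystem. A sketch of what one would try on that assumption:

        # Sketch: treat the whole device as the filesystem and mount it read-only
        modprobe jfs
        mkdir -p /mnt/cf
        mount -t jfs -o ro /dev/sdc /mnt/cf
        # If the mount is refused, a read-only fsck may say what it thinks of the superblock
        fsck.jfs -n /dev/sdc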

    Read the article

  • linux raid 1: right after replacing and syncing one drive, the other disk fails - understanding what is going on with mdstat/mdadm

    - by devicerandom
    We have an old RAID 1 Linux server (Ubuntu Lucid 10.04), with four partitions. A few days ago /dev/sdb failed, and today we noticed /dev/sda had pre-failure ominous SMART signs (~4000 reallocated sector count). We replaced /dev/sdb this morning and rebuilt the RAID on the new drive, following this guide: http://www.howtoforge.com/replacing_hard_disks_in_a_raid1_array Everything went smooth until the very end. When it looked like it was finishing to synchronize the last partition, the other old one failed. At this point I am very unsure of the state of the system. Everything seems working and the files seem to be all accessible, just as if it synchronized everything, but I'm new to RAID and I'm worried about what is going on. The /proc/mdstat output is: Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] md3 : active raid1 sdb4[2](S) sda4[0] 478713792 blocks [2/1] [U_] md2 : active raid1 sdb3[1] sda3[2](F) 244140992 blocks [2/1] [_U] md1 : active raid1 sdb2[1] sda2[2](F) 244140992 blocks [2/1] [_U] md0 : active raid1 sdb1[1] sda1[2](F) 9764800 blocks [2/1] [_U] unused devices: <none> The order of [_U] vs [U_]. Why aren't they consistent along all the array? Is the first U /dev/sda or /dev/sdb? (I tried looking on the web for this trivial information but I found no explicit indication) If I read correctly for md0, [_U] should be /dev/sda1 (down) and /dev/sdb1 (up). But if /dev/sda has failed, how can it be the opposite for md3 ? I understand /dev/sdb4 is now spare because probably it failed to synchronize it 100%, but why does it show /dev/sda4 as up? Shouldn't it be [__]? Or [_U] anyway? The /dev/sda drive now cannot even be accessed by SMART anymore apparently, so I wouldn't expect it to be up. What is wrong with my interpretation of the output? 
I attach also the outputs of mdadm --detail for the four partitions: /dev/md0: Version : 00.90 Creation Time : Fri Jan 21 18:43:07 2011 Raid Level : raid1 Array Size : 9764800 (9.31 GiB 10.00 GB) Used Dev Size : 9764800 (9.31 GiB 10.00 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 0 Persistence : Superblock is persistent Update Time : Tue Nov 5 17:27:33 2013 State : clean, degraded Active Devices : 1 Working Devices : 1 Failed Devices : 1 Spare Devices : 0 UUID : a3b4dbbd:859bf7f2:bde36644:fcef85e2 Events : 0.7704 Number Major Minor RaidDevice State 0 0 0 0 removed 1 8 17 1 active sync /dev/sdb1 2 8 1 - faulty spare /dev/sda1 /dev/md1: Version : 00.90 Creation Time : Fri Jan 21 18:43:15 2011 Raid Level : raid1 Array Size : 244140992 (232.83 GiB 250.00 GB) Used Dev Size : 244140992 (232.83 GiB 250.00 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 1 Persistence : Superblock is persistent Update Time : Tue Nov 5 17:39:06 2013 State : clean, degraded Active Devices : 1 Working Devices : 1 Failed Devices : 1 Spare Devices : 0 UUID : 8bcd5765:90dc93d5:cc70849c:224ced45 Events : 0.1508280 Number Major Minor RaidDevice State 0 0 0 0 removed 1 8 18 1 active sync /dev/sdb2 2 8 2 - faulty spare /dev/sda2 /dev/md2: Version : 00.90 Creation Time : Fri Jan 21 18:43:19 2011 Raid Level : raid1 Array Size : 244140992 (232.83 GiB 250.00 GB) Used Dev Size : 244140992 (232.83 GiB 250.00 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 2 Persistence : Superblock is persistent Update Time : Tue Nov 5 17:46:44 2013 State : clean, degraded Active Devices : 1 Working Devices : 1 Failed Devices : 1 Spare Devices : 0 UUID : 2885668b:881cafed:b8275ae8:16bc7171 Events : 0.2289636 Number Major Minor RaidDevice State 0 0 0 0 removed 1 8 19 1 active sync /dev/sdb3 2 8 3 - faulty spare /dev/sda3 /dev/md3: Version : 00.90 Creation Time : Fri Jan 21 18:43:22 2011 Raid Level : raid1 Array Size : 478713792 (456.54 GiB 490.20 GB) Used Dev Size : 478713792 (456.54 GiB 490.20 GB) Raid Devices : 2 Total Devices : 2 Preferred Minor : 3 Persistence : Superblock is persistent Update Time : Tue Nov 5 17:19:20 2013 State : clean, degraded Active Devices : 1 Working Devices : 2 Failed Devices : 0 Spare Devices : 1 Number Major Minor RaidDevice State 0 8 4 0 active sync /dev/sda4 1 0 0 1 removed 2 8 20 - spare /dev/sdb4 The active sync on /dev/sda4 baffles me. I am worried because if tomorrow morning I have to replace /dev/sda, I want to be sure what should I sync with what and what is going on. I am also quite baffled by the fact /dev/sda decided to fail exactly when the raid finished resyncing. I'd like to understand what is really happening. Thanks a lot for your patience and help. Massimo
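
    A sketch of how the slot/role questions are usually pinned down rather than read off the [U_]/[_U] flags by eye: the U/_ positions follow the RaidDevice slot numbers, and for active members the bracketed number after each device name in /proc/mdstat is that slot ((S) spare, (F) faulty).

        # Per-array view: which physical device occupies which slot, and its state
        mdadm --detail /dev/md0 /dev/md1 /dev/md2 /dev/md3
        # Per-member view: what each component's superblock says about itself
        mdadm --examine /dev/sda4 /dev/sdb4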

    Read the article

  • Gaming blew fuse and causes funny smell: how to overcome?

    - by George Tomlinson
    I've been gaming for a while now. When playing certain games this PC goes into overdrive. The fan/fans start/s to sound like a jet engine it/they get/s so busy. Also I have smelt burning when this has happened. The fuse blew on the 4 socket adapter I was using recently. On the following thread someone said this could be due to the PSU not being strong enough to handle the load, in what it seems could be a related issue someone had, although the person who posted this question did say that blowing a fan on their PC stopped it crashing in that case: http://www.tomshardware.co.uk/answers/id-2047543/gtx-650-overheating-issue.html. This is exactly what they said: Your GPU isn't overheating. 70+ before it would shutdown and cause a restart. Make sure your PSU is strong enough to handle your new system at load and possibly run Memtest to check your RAM (although not BSOD'ing and just shutting down points to the PSU). This (the PSU part) makes more sense to me than it being to do with dust etc, since it seems a more plausible explanation of why the fuse blew. The PC has no problems except when playing certain games: i.e. TERA Rising and WoW with add-ons (I think WoW is ok as long as I don't have more than 1 add-on (Healers Have To Die)). I'm just wondering if anyone knows or can suggest what I might be able to do to be able to play these games without this problem occurring. The PC's spec is this: Display: NVIDIA GeForce GTX 650 8GB RAM (6 available) Processor: AMD FX (tm) - 8120 Eight-Core Processor - 3.1 GHz, 4 Cores, 8 Logical Processors I have read on another post that forcing vsync in the Nvidia Control Panel helped with what seems could be a similar problem, so I plan to see if that solves it, God permitting. EDIT: I tried the Vsync thing, and it seems the situation may have improved, although this may be due to something else: i.e. maybe the PC was working harder yesterday, due to just having downloaded a few things or lots of things running. I'm still noticing the funny smell when playing TERA. It's not so much burning: it's more like glue. The smell might have had a burning element to it in the past, but I think it's always had a glue element. EDIT 2: the PSU is an 'ATX Switching Power Supply', Model E-500ATX. Other info it gives on the PSU is 230V, Current 10A and Frequency 50-60Hz. It also has some other info which I can supply if necessary. Putting the PC plug in the wall socket instead of the power strip seems like it might have reduced the load on the PC quite a bit: I think it sounds less stressed. it has been off for a while whilst I took the side panel off though, so I'll wait to see what happens before getting too excited. EDIT 3: hmm. So here's the latest: just playing TERA. The fan's running quite fast again. Hard to tell whether switching to the wall socket has made a difference in terms of strain on the PC: I don't know if one would expect it to. Still seems like it might have helped though. Oh and there didn't seem to be much dust in the PC, although I didn't disconnect any components. I'm still getting the glue type smell. ASIDE: reminds me of someone on a PC near me at the library once who was actually sniffing glue right there in front of everyone while on the PC and he started talking about how he was sniffing glue. lol. That's no joke. EDIT 4: So the questions now are: Question 1: Is the smell something I should sort out? (If so, how might I do this?) Question 2: is it necessary to take any steps to prevent blowing another fuse (and if so which step/s?).

    Read the article

  • Log transport and aggregation at scale

    - by markdrayton
    How're you analysing log files from UNIX/Linux machines? We run several hundred servers which all generate their own log files, either directly or through syslog. I'm looking for a decent solution to aggregate these and pick out important events. This problem breaks down into 3 components: 1) Message transport The classic way is to use syslog to log messages to a remote host. This works fine for applications that log into syslog but less useful for apps that write to a local file. Solutions for this might include having the application log into a FIFO connected to a program to send the message using syslog, or by writing something that will grep the local files and send the output to the central syslog host. However, if we go to the trouble of writing tools to get messages into syslog would we be better replacing the whole lot with something like Facebook's Scribe which offers more flexibility and reliability than syslog? 2) Message aggregation Log entries seem to fall into one of two types: per-host and per-service. Per-host messages are those which occur on one machine; think disk failures or suspicious logins. Per-service messages occur on most or all of the hosts running a service. For instance, we want to know when Apache finds an SSI error but we don't want the same error from 100 machines. In all cases we only want to see one of each type of message: we don't want 10 messages saying the same disk has failed, and we don't want a message each time a broken SSI is hit. One approach to solving this is to aggregate multiple messages of the same type into one on each host, send the messages to a central server and then aggregate messages of the same kind into one overall event. SER can do this but it's awkward to use. Even after a couple of days of fiddling I had only rudimentary aggregations working and had to constantly look up the logic SER uses to correlate events. It's powerful but tricky stuff: I need something which my colleagues can pick up and use in the shortest possible time. SER rules don't meet that requirement. 3) Generating alerts How do we tell our admins when something interesting happens? Mail the group inbox? Inject into Nagios? So, how're you solving this problem? I don't expect an answer on a plate; I can work out the details myself but some high-level discussion on what is surely a common problem would be great. At the moment we're using a mishmash of cron jobs, syslog and who knows what else to find events. This isn't extensible, maintainable or flexible and as such we miss a lot of stuff we shouldn't. Updated: we're already using Nagios for monitoring which is great for detected down hosts/testing services/etc but less useful for scraping log files. I know there are log plugins for Nagios but I'm interested in something more scalable and hierarchical than per-host alerts.
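
    For the transport piece, a minimal sketch of the classic approach, assuming rsyslog (the post doesn't name a syslog implementation) and a made-up central hostname; local-file-only applications can be bridged with rsyslog's imfile module or, more crudely, with tail piped into logger:

        # /etc/rsyslog.conf fragment on each server - forward everything to the central host
        *.*  @@loghost.example.com:514      # @@ = TCP; a single @ would be UDP

        # Low-tech bridge for an app that only writes to a local file (hypothetical path/tag)
        tail -F /var/log/myapp.log | logger -t myapp -p local3.info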

    Read the article

  • What networking hardware do I need in this situation (Fairpoint [ISP] "E-DIA" connection)?

    - by Tegeril
    Right away you'd probably want to say, "Well just ask Fairpoint." I've done that, a number of times in as many different ways I can phrase it and just keep hitting a brick wall where they will not commit to giving any useful information and instead recommend contracting an outside firm and spending a pile of money. Anyway... I'm trying to help a family member out with an office connection that is being setup. I've managed to scrape tiny details here and there from our discussions with the ISP (Fairpoint in Maine) about what is going to be done and what is going to be needed. This is the connection that is being setup: http://www.fairpoint.com/enterprise/vantagepoint/e-dia/index.jsp Information I have been given: Via this connection I can get IPs across different C blocks if that were necessary (it is not) Fairpoint is bringing hardware with them that they claim simply does the conversion from whatever line is coming in the building to ethernet, they have referred to this as the "Fairpoint Netvanta" which I know suggests a line of products that I have looked up, but some (most? all?) of those seems to handle all the routing that I saw. Fairpoint says that I need to bring my own router to sit behind their device. They have literally declined to even suggest products that have worked for other clients in the past and fall back on "any business router works, not a home router." That alone makes my head spin. Detail and clarity hit a brick wall from there. At one moment I got them to cough up that the router I provide needs to be able to do VPN tunneling but they typically fall back to "not a home router" and I was even given "just a business router, Cisco or something, it'll be $500-$1000". Now I know that VPN tunneling routers exist well below that price point and since this connection is going to one machine, possibly two only via ethernet, my desire to purchase networking hardware that over-delivers what I need is not very high. They are literally setting all this up, have provided no configuration details for after they finish, and expect me to just plunk a $500+ router behind it and cross my fingers or contract out to a third party company. If there were other options available for the location, I would have dropped them in a second, but there aren't. The device that is connected requires a static IP and I'm honestly a bit hazy on the necessity of an additional router behind their device and generally a bit over my head. I presume that the router needs to be able to serve external static IPs to its clients, but I really don't know what is going to show up when they come to do the install. This was originally going to be run via an ADSL bridge modem with a range of static IPs (which is easy and is currently setup properly) but the location is too far from the telco to get speeds that we really want for upload and this is also a connection that needs high availability. Any suggestions would be greatly appreciated (I see a number of options in the Cisco Small Business line and other competitors that aren't going to break the bank…), especially if you've worked with Fairpoint before! Thanks for reading my wall of text.

    Read the article

  • Validating an XML document fragment against XML schema

    - by shylent
    Terribly sorry if I've failed to find a duplicate of this question. I have a certain document with a well-defined document structure. I am expressing that structure through an XML schema. That data structure is operated upon by a RESTful service, so various nodes and combinations of nodes (not the whole document, but fragments of it) are exposed as "resources". Naturally, I am doing my own validation of the actual data, but it makes sense to validate the incoming/outgoing data against the schema as well (before the fine-grained validation of the data). What I don't quite grasp is how to validate document fragments given the schema definition. Let me illustrate: Imagine, the example document structure is: <doc-root> <el name="foo"/> <el name="bar"/> </doc-root> Rather a trivial data structure. The schema goes something like this: <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"> <xsd:element name="doc-root"> <xsd:complexType> <xsd:sequence> <xsd:element name="el" type="myCustomType" /> </xsd:sequence> </xsd:complexType> </xsd:element> <xsd:complexType name="myCustomType"> <xsd:attribute name="name" use="required" /> </xsd:complexType> </xsd:schema> Now, imagine, I've just received a PUT request to update an 'el' object. Naturally, I would receive not the full document or not any xml, starting with 'doc-root' at its root, but the 'el' element itself. I would very much like to validate it against the existing schema then, but just running it through a validating parser wouldn't work, since it will expect a 'doc-root' at the root. So, again, the question is, - how can one validate a document fragment against an existing schema, or, perhaps, how can a schema be written to allow such an approach. Hope it made sense.
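
    One common way to make fragment validation possible, sketched under the assumption that changing the schema is acceptable: promote el to a global element declaration and reference it from doc-root (maxOccurs added so the two el children in the example validate). Most schema validators will accept any globally declared element as the root of the document being validated, so a PUT body whose root is el can then be checked against the same schema.

        <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
          <xsd:element name="doc-root">
            <xsd:complexType>
              <xsd:sequence>
                <xsd:element ref="el" maxOccurs="unbounded" />
              </xsd:sequence>
            </xsd:complexType>
          </xsd:element>
          <!-- global declaration: a document rooted at <el> now validates on its own -->
          <xsd:element name="el" type="myCustomType" />
          <xsd:complexType name="myCustomType">
            <xsd:attribute name="name" use="required" />
          </xsd:complexType>
        </xsd:schema>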

    Read the article

  • Silverlight and Unexpected Font Sizes

    - by Eric J.
    Someone please teach me to fish here... I'm just learning Silverlight and have ran into a few situations where the font size actually used is drastically different than I would expect. There's probably something conceptual that I'm missing. Case A In one instance, I have defined a user control that presents a Label to show text. If one clicks on the label, the label (that is in a stack panel, in the user control) is replaced with a TextBox. When used at the top of a page (as in the example below with lblName) the label text is very small (around 8 points). When clicked on, the text box that replaces the label uses the specified fonts size. That same user control, used in different parts of the app, uses the same font for Label and TextBox. <Grid x:Name="LayoutRoot" Background="White"> <Grid.RowDefinitions> <RowDefinition Height="33" /> <RowDefinition Height="267*" /> </Grid.RowDefinitions> <StackPanel Height="Auto" HorizontalAlignment="Left" Name="stackPanel" VerticalAlignment="Top" Width="Auto" Grid.Row="1" /> <my:EditLabel Height="33" HorizontalAlignment="Left" x:Name="lblName" VerticalAlignment="Top" Width="Auto" FlexText="{Binding Name, Mode=TwoWay}" FontSize="20" MinHeight="24" /> </Grid> Case B I'm using the LiquidMenu.Menu control to pop up a menu when a button is pressed. The font looks huge compared to the rest of my page (maybe 36 points?). I tried forcing it to a very small by explicitly setting it to 8pt, but that had no effect. <Grid x:Name="LayoutRoot" Background="{x:Null}"> <StackPanel x:Name="labelStackPanel" Orientation="Horizontal"> <TextBlock Height="24" HorizontalAlignment="Left" Name="labelText" VerticalAlignment="Top" Width="200" Text="(Value Goes Here)" /> </StackPanel> <liquidMenu:Menu x:Name="popupMenu" Canvas.Left="40" Canvas.Top="40" ItemSelected="MenuList_ItemSelected" Visibility="Collapsed" Height="Auto" FontSize="8"> <liquidMenu:MenuItem ID="delete" Icon="Images/Delete10.png" Text="Delete" Shortcut="Del" /> <liquidMenu:MenuItem ID="exclusive" Icon="" Text="Exclusive" Shortcut="Ctrl+E" /> <liquidMenu:MenuItem ID="properties" Icon="" Text="Properties" Shortcut="Ctrl+P" /> </liquidMenu:Menu> </Grid> Answers to these specific issues are great, a new way to think about this type of issue so that I understand how to control font size is better.

    Read the article

  • Using depends with the jQuery Validation plugin

    - by Glenn Slaven
    I've got a form with a bunch of textboxes that are disabled by default, then enabled by use of a checkbox next to each one. When enabled, the values in these textboxes are required to be a valid number, but when disabled they don't need a value (obviously). I'm using the jQuery Validation plugin to do this validation, but it doesn't seem to be doing what I expect. When I click the checkbox and disable the textbox, I still get the invalid field error despite the depends clause I've added to the rules (see code below). Oddly, what actually happens is that the error message shows for a split second then goes away. Here is a sample of the list of checkboxes & textboxes: <ul id="ItemList"> <li> <label for="OneSelected">One</label><input id="OneSelected" name="OneSelected" type="checkbox" value="true" /> <input name="OneSelected" type="hidden" value="false" /> <input disabled="disabled" id="OneValue" name="OneValue" type="text" /> </li> <li> <label for="TwoSelected">Two</label><input id="TwoSelected" name="TwoSelected" type="checkbox" value="true" /> <input name="TwoSelected" type="hidden" value="false" /> <input disabled="disabled" id="TwoValue" name="TwoValue" type="text" /> </li> </ul> And here is the jQuery code I'm using //Wire up the click event on the checkbox jQuery('#ItemList :checkbox').click(function(event) { var textBox = jQuery(this).siblings(':text'); textBox.valid(); if (!jQuery(this).attr("checked")) { textBox.attr('disabled', 'disabled'); textBox.val(''); } else { textBox.removeAttr('disabled'); textBox[0].focus(); } }); //Add the rules to each textbox jQuery('#ItemList :text').each(function(e) { jQuery(this).rules('add', { required: { depends: function(element) { return jQuery(element).siblings(':checkbox').attr('checked'); } }, number: { depends: function(element) { return jQuery(element).siblings(':checkbox').attr('checked'); } } }); }); Ignore the hidden field in each li it's there because I'm using asp.net MVC's Html.Checkbox method.
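
    One thing that may explain the flash of the error message, offered only as a guess: the handler above revalidates the textbox before toggling its disabled state. A sketch of the same handler with the order flipped, using .is(':checked') instead of reading the checked attribute:

        // Sketch: enable/disable the textbox first, then ask the plugin to revalidate it
        jQuery('#ItemList :checkbox').click(function () {
            var textBox = jQuery(this).siblings(':text');
            if (jQuery(this).is(':checked')) {
                textBox.removeAttr('disabled');
                textBox.focus();
            } else {
                textBox.attr('disabled', 'disabled').val('');
            }
            textBox.valid();
        });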

    Read the article

  • Winforms TabControl causing spurious Paint events for UserControl

    - by Tom Bushell
    For our project, we've written a WinForms UserControl for graphing. We're seeing some strange behavior when our control is sited in a TabControl - our control continuously fires Paint events, even when there is absolutely no activity by the user. We only see this in the TabControl. When we site our control in other containers such as Forms or Splitters, Paint is only fired when you'd expect e.g. when the control is first displayed, etc. Can anyone suggest why this might be happening? Here's a stack trace from a breakpoint in our control's Paint handler, if that's any help. OverlordFrontEnd.exe!OverlordFrontEnd.MainForm.graphControl_Paint(object sender = BI_BaseGraphXY.BaseGraphXY}, System.Windows.Forms.PaintEventArgs e = {ClipRectangle = {X=0,Y=0,Width=1031,Height=408}}) Line 422 C# System.Windows.Forms.dll!System.Windows.Forms.Control.OnPaint(System.Windows.Forms.PaintEventArgs e) + 0x73 bytes BI_AppCore.dll!BI_BaseGraphXY.BaseGraphXY.OnPaint(System.Windows.Forms.PaintEventArgs e = {ClipRectangle = {X=0,Y=0,Width=1031,Height=408}}) Line 377 + 0xb bytes C# System.Windows.Forms.dll!System.Windows.Forms.Control.PaintTransparentBackground(System.Windows.Forms.PaintEventArgs e, System.Drawing.Rectangle rectangle, System.Drawing.Region transparentRegion = null) + 0x16c bytes System.Windows.Forms.dll!System.Windows.Forms.Control.PaintBackground(System.Windows.Forms.PaintEventArgs e = {ClipRectangle = {X=0,Y=0,Width=1029,Height=406}}, System.Drawing.Rectangle rectangle, System.Drawing.Color backColor, System.Drawing.Point scrollOffset) + 0xbc bytes System.Windows.Forms.dll!System.Windows.Forms.Control.PaintBackground(System.Windows.Forms.PaintEventArgs e, System.Drawing.Rectangle rectangle) + 0x63 bytes System.Windows.Forms.dll!System.Windows.Forms.Control.OnPaintBackground(System.Windows.Forms.PaintEventArgs pevent) + 0x59 bytes System.Windows.Forms.dll!System.Windows.Forms.Control.PaintWithErrorHandling(System.Windows.Forms.PaintEventArgs e = {ClipRectangle = {X=0,Y=0,Width=1029,Height=406}}, short layer, bool disposeEventArgs = false) + 0x74 bytes System.Windows.Forms.dll!System.Windows.Forms.Control.WmPaint(ref System.Windows.Forms.Message m) + 0x1ba bytes System.Windows.Forms.dll!System.Windows.Forms.Control.WndProc(ref System.Windows.Forms.Message m) + 0x33e bytes System.Windows.Forms.dll!System.Windows.Forms.Control.ControlNativeWindow.OnMessage(ref System.Windows.Forms.Message m) + 0x10 bytes System.Windows.Forms.dll!System.Windows.Forms.Control.ControlNativeWindow.WndProc(ref System.Windows.Forms.Message m) + 0x31 bytes System.Windows.Forms.dll!System.Windows.Forms.NativeWindow.Callback(System.IntPtr hWnd, int msg = 15, System.IntPtr wparam, System.IntPtr lparam) + 0x5a bytes [Native to Managed Transition] [Managed to Native Transition] System.Windows.Forms.dll!System.Windows.Forms.Application.ComponentManager.System.Windows.Forms.UnsafeNativeMethods.IMsoComponentManager.FPushMessageLoop(int dwComponentID, int reason = -1, int pvLoopData = 0) + 0x24e bytes System.Windows.Forms.dll!System.Windows.Forms.Application.ThreadContext.RunMessageLoopInner(int reason = -1, System.Windows.Forms.ApplicationContext context = {Microsoft.VisualBasic.ApplicationServices.WindowsFormsApplicationBase.WinFormsAppContext}) + 0x177 bytes System.Windows.Forms.dll!System.Windows.Forms.Application.ThreadContext.RunMessageLoop(int reason, System.Windows.Forms.ApplicationContext context) + 0x61 bytes System.Windows.Forms.dll!System.Windows.Forms.Application.Run(System.Windows.Forms.ApplicationContext context) 
+ 0x18 bytes Microsoft.VisualBasic.dll!Microsoft.VisualBasic.ApplicationServices.WindowsFormsApplicationBase.OnRun() + 0x81 bytes Microsoft.VisualBasic.dll!Microsoft.VisualBasic.ApplicationServices.WindowsFormsApplicationBase.DoApplicationModel() + 0xef bytes Microsoft.VisualBasic.dll!Microsoft.VisualBasic.ApplicationServices.WindowsFormsApplicationBase.Run(string[] commandLine) + 0x2c0 bytes OverlordFrontEnd.exe!OverlordFrontEnd.Program.Main() Line 36 + 0x10 bytes C# [Native to Managed Transition] [Managed to Native Transition] mscorlib.dll!System.AppDomain.ExecuteAssembly(string assemblyFile, System.Security.Policy.Evidence assemblySecurity, string[] args) + 0x3a bytes Microsoft.VisualStudio.HostingProcess.Utilities.dll!Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly() + 0x2b bytes mscorlib.dll!System.Threading.ThreadHelper.ThreadStart_Context(object state) + 0x66 bytes mscorlib.dll!System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext executionContext, System.Threading.ContextCallback callback, object state) + 0x6f bytes mscorlib.dll!System.Threading.ThreadHelper.ThreadStart() + 0x44 bytes

    Read the article

  • Long-running ASP.NET tasks

    - by John Leidegren
    I know there's a bunch of APIs out there that do this, but I also know that the hosting environment (being ASP.NET) puts restrictions on what you can reliably do in a separate thread. I could be completely wrong, so please correct me if I am, this is however what I think I know. A request typically timeouts after 120 seconds (this is configurable) but eventually the ASP.NET runtime will kill a request that's taking too long to complete. The hosting environment, typically IIS, employs process recycling and can at any point decide to recycle your app. When this happens all threads are aborted and the app restarts. I'm however not sure how aggressive it is, it would be kind of stupid to assume that it would abort a normal ongoing HTTP request but I would expect it to abort a thread because it doesn't know anything about the unit of work of a thread. If you had to create a programming model that easily and reliably and theoretically put a long running task, that would have to run for days, how would you accomplish this from within an ASP.NET application? The following are my thoughts on the issue: I've been thinking a long the line of hosting a WCF service in a win32 service. And talk to the service through WCF. This is however not very practical, because the only reason I would choose to do so, is to send tasks (units of work) from several different web apps. I'd then eventually ask the service for status updates and act accordingly. My biggest concern with this is that it would NOT be a particular great experience if I had to deploy every task to the service for it to be able to execute some instructions. There's also this issue of input, how would I feed this service with data if I had a large data set and needed to chew through it? What I typically do right now is this SELECT TOP 10 * FROM WorkItem WITH (ROWLOCK, UPDLOCK, READPAST) WHERE WorkCompleted IS NULL It allows me to use a SQL Server database as a work queue and periodically poll the database with this query for work. If the work item completed with success, I mark it as done and proceed until there's nothing more to do. What I don't like is that I could theoretically be interrupted at any point and if I'm in-between success and marking it as done, I could end up processing the same work item twice. I might be a bit paranoid and this might be all fine but as I understand it there's no guarantee that that won't happen... I know there's been similar questions on SO before but non really answers with a definitive answer. This is a really common thing, yet the ASP.NET hosting environment is ill equipped to handle long-running work. Please share your thoughts.
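
    On the double-processing worry: the window between "selected" and "marked done" can be narrowed by claiming rows in a single atomic statement instead of selecting first and updating later. A sketch in T-SQL, with a hypothetical ClaimedAt column added to the table:

        -- Sketch: claim up to 10 unfinished, unclaimed items and return them in one statement
        UPDATE TOP (10) dbo.WorkItem WITH (ROWLOCK, UPDLOCK, READPAST)
        SET    ClaimedAt = GETUTCDATE()
        OUTPUT inserted.*
        WHERE  WorkCompleted IS NULL
          AND  ClaimedAt IS NULL;

    A worker that dies after claiming still leaves an orphaned row, so a sweep that re-opens items whose ClaimedAt is older than some timeout is the usual companion to this pattern.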

    Read the article

  • Cannot get UISearchBar Scope Bar to appear in Toolbar on iPad

    - by Jann
    This is really causing me fits. I put a toolbar on the IUView on the iPad. I added the following: Search Bar (not Search Bar and Search Display) to the toolbar. I set the options to be as follows: Show Cancel Button, Show Scope Bar, Scope Button Titles are: "Title1" and "Title2" (with Title2's radio button selected). Opaque, Clear Context and Auto Resize are checked. I hooked up the delegate of Search Bar to the "File's Owner" and linked it to IBOutlet theSearchBar. In my viewWillAppear I have the following: [theSearchBar setScopeButtonTitles:[NSArray arrayWithObjects:@"Near Me",@"Everywhere",nil]]; //Just in case: [theSearchBar setShowsScopeBar:YES]; //doesn't seem to do anything: //[theSearchBar sizeToFit]; searchDisplayController = [[UISearchDisplayController alloc] initWithSearchBar:theSearchBar contentsController:self]; [self setSearchDisplayController:searchDisplayController]; [searchDisplayController setDelegate:self]; [searchDisplayController setSearchResultsDataSource:self]; //again--does not seem to do anything..but people have suggested it: [theSearchBar sizeToFit]; Okay, so far, I thought, so good. So, I made the File's Owner .m file to be a delegate for: UISearchBarDelegate, UISearchDisplayDelegate. My issue: I have yet to implement the delegates necessary to do the search but still... shouldn't I be seeing the scopeBar next to the search field when I click into the search field? Just so you know I DO see the log of the characters I type, so the delegate is working. I have the following dummy functions in the .m file (just in case) // called when keyboard search button pressed - (void)searchBarSearchButtonClicked:(UISearchBar *)searchBar { NSLog(@"Search Button Clicked\n"); [theSearchBar resignFirstResponder]; } // called when cancel button pressed - (void)searchBarCancelButtonClicked:(UISearchBar *)searchBar { NSLog(@"Cancel Button Clicked\n"); [theSearchBar resignFirstResponder]; } - (void)searchBar:(UISearchBar *)searchBar textDidChange:(NSString *)searchText { NSLog(@"Search Text So Far: '%@'\n",searchText); } - (BOOL)searchBarShouldBeginEditing:(UISearchBar *)searchBar { return YES; } - (BOOL)searchBarShouldEndEditing:(UISearchBar *)searchBar { return YES; } Why doesn't the Scope Bar appear? A results UIPopoverController appears with the title "Results" and "No results found" (of course) when i type the first character in my search...but no scope bar. (not that i expect anything other than "No Results Found". I am wondering where the scope bar is supposed to appear...in the titleView of the UIPopover? In the toolbar to the right of the search area? Where?

    Read the article

  • GZip compression with WCF hosted on IIS7

    - by joniba
    So I'm going to add my query to the small ocean of questions on the subject. I'm trying to enable GZip compression on large soap responses from a WCF service. So far, I've followed instructions here and in a variety of other places to enable dynamic compression on IIS. Here's my dynamicTypes section from the applicationHost.config: <dynamicTypes> <add mimeType="text/*" enabled="true" /> <add mimeType="message/*" enabled="true" /> <add mimeType="application/x-javascript" enabled="true" /> <add mimeType="application/atom+xml" enabled="true" /> <add mimeType="application/xaml+xml" enabled="true" /> <add mimeType="application/xop+xml" enabled="true" /> <add mimeType="application/soap+xml" enabled="true" /> <add mimeType="*/*" enabled="false" /> </dynamicTypes> And also: <urlCompression doDynamicCompression="true" dynamicCompressionBeforeCache="true" /> Though I'm not so clear on why that's needed. Threw some extra mime-types in there just in case. I've implemented IClientMessageInspector to add Accept-Encoding: gzip, deflate to my client's HttpRequests. Here's an example of a request-header taken from fiddler: POST http://[omitted]/TestMtomService/TextService.svc HTTP/1.1 Content-Type: application/soap+xml; charset=utf-8 Accept-Encoding: gzip, deflate Host: [omitted] Content-Length: 542 Expect: 100-continue Now, this doesn't work. There's simply no compression happening, no matter what the size of the message (tried up to 1.5Mb). I've looked at this post, but have not run into an exception as he describes, so I haven't tried the CodeProject implementation that he proposes. Also I've seen a lot of other implementations that are supposed to get this to work, but cannot make sense of them (e.g., msdn's GZip encoder). Why would I need to implement the encoder, or the code-project solution? Shouldn't IIS take care of the compression? So what else do I need to do to get this to work? Joni
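
    Two things worth ruling out before touching custom encoders, phrased as a sketch rather than a known fix: the Dynamic Content Compression role service has to be installed (it is a separate IIS7 feature, not just a config switch), and urlCompression/doDynamicCompression has to be effective for the site, which appcmd can confirm from an elevated prompt:

        REM Show the effective compression settings
        %windir%\system32\inetsrv\appcmd.exe list config -section:system.webServer/urlCompression
        %windir%\system32\inetsrv\appcmd.exe list config -section:system.webServer/httpCompression
        REM Enable dynamic compression at the server level
        %windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/urlCompression /doDynamicCompression:"True" /commit:apphost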

    Read the article

  • ASP.NET MVC 2: Updating a Linq-To-Sql Entity with an EntitySet

    - by Simon
    I have a Linq to Sql Entity which has an EntitySet. In my View I display the Entity with its properties plus an editable list for the child entities. The user can dynamically add and delete those child entities. The DefaultModelBinder works fine so far; it correctly binds the child entities. Now my problem is that I just can't get Linq To Sql to delete the deleted child entities; it will happily add new ones but not delete the deleted ones. I have enabled cascade deleting in the foreign key relationship, and the Linq To Sql designer added the "DeleteOnNull=true" attribute to the foreign key relationships. If I manually delete a child entity like this: myObject.Childs.Remove(child); context.SubmitChanges(); This will delete the child record from the DB. But I can't get it to work for a model-bound object. I tried the following: // this does nothing public ActionResult Update(int id, MyObject obj) // obj now has 4 child entities { var obj2 = _repository.GetObj(id); // obj2 has 6 child entities if(TryUpdateModel(obj2)) //it successfully updates obj2 and its childs { _repository.SubmitChanges(); // nothing happens, records stay in DB } else ..... return RedirectToAction("List"); } and the following throws an InvalidOperationException; I have a German OS so I'm not exactly sure what the error message is in English, but it says something along the lines of the entity needing a Version (timestamp row?) or no update-check policies. I have set UpdateCheck="Never" on every column except the primary key column. public ActionResult Update(MyObject obj) { _repository.MyObjectTable.Attach(obj, true); _repository.SubmitChanges(); // never gets here, exception at attach } I've read a lot about similar "problems" with Linq To Sql, but it seems most of those "problems" are actually by design. So am I right in my assumption that this doesn't work the way I expect it to? Do I really have to iterate through the child entities and delete, update and insert them manually? For such a simple object this may work, but I plan to create more complex objects with nested EntitySets and so on. This is just a test to see what works and what doesn't. So far I'm disappointed with Linq To Sql (maybe I just don't get it). Would Entity Framework or NHibernate be a better choice for this scenario? Or would I run into the same problem?
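    If the answer does turn out to be manual reconciliation, a rough sketch of that idea might look like the following; MyObject, Child, ChildId, MyRepository and _repository are placeholders standing in for the question's types, and this is only one way it could be written, not a confirmed fix:

    ```csharp
    using System.Collections.Generic;
    using System.Linq;
    using System.Web.Mvc;

    public class MyObjectController : Controller
    {
        private readonly MyRepository _repository = new MyRepository(); // placeholder repository type

        [HttpPost]
        public ActionResult Update(int id, MyObject posted)
        {
            var current = _repository.GetObj(id);

            // Delete children that exist in the database but are missing from the posted form.
            var postedIds = new HashSet<int>(posted.Childs.Select(c => c.ChildId));
            foreach (var child in current.Childs.Where(c => !postedIds.Contains(c.ChildId)).ToList())
            {
                current.Childs.Remove(child); // with DeleteOnNull="true" the orphaned row becomes a DELETE
            }

            // Let the model binder update the surviving children and add the new ones, then save.
            TryUpdateModel(current);
            _repository.SubmitChanges();
            return RedirectToAction("List");
        }
    }
    ```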

    Read the article

  • Cross-table linq query with EF4/POCO

    - by Basiclife
    Hi all, I'm new to EF (any version) and POCO. I'm trying to use POCO entities with a generic repository in a "code-first" mode(?) I've got some POCO entities (no proxies, no lazy loading, nothing). I have a Repository(Of T As Entity) which provides me with basic get/getsingle/getfirst functionality and takes a lambda as a parameter (specifically a System.Func(Of T, Boolean)). Now, as I'm returning the simplest possible POCO object, none of the relationship properties work once they've been retrieved from the database (as I would expect). However, I had assumed (wrongly) that my lambda query passed to the repository would be able to use the links between entities, as it would be executed against the DB before the simple POCO entities are generated. The flow is: GUI calls: Public Function GetAllTypesForCategory(ByVal CategoryID As Guid) As IEnumerable(Of ItemType) Return ItemTypeRepository.Get(Function(x) x.Category.ID = CategoryID) End Function Get is defined in Repository(Of T As Entity): Public Function [Get](ByVal Query As System.Func(Of T, Boolean)) As IEnumerable(Of T) Implements Interfaces.IRepository(Of T).Get Return ObjectSet.Where(Query).ToList() End Function The code doesn't error when this method is called, but does when I try to use the result set. (This seems to be lazy-loading behaviour, so I tried adding the .ToList() to force eager loading - no difference.) I'm using Unity/IoC to wire it all up, but I believe that's irrelevant to the issue I'm having. NB: Relationships between entities are being configured properly, and if I turn on proxies/lazy loading/etc. this all just works. I'm intentionally leaving all that turned off as some calls to the BL will be from a website but some will be via WCF - so I want the simplest possible objects. Also, I don't want a change in an object passed to the UI to be committed to the DB if another BL method calls Commit(). Can someone please either point out how to make this work or explain why it's not possible? All I want to do is make sure the lambda I pass in is performed against the DB before the results are returned. Many thanks. In case it matters, the container is being populated with everything as shown below: Container.AddNewExtension(Of EFRepositoryExtension)() Container.Configure(Of IEFRepositoryExtension)(). WithConnection(ConnectionString). WithContextLifetime(New HttpContextLifetimeManager(Of IObjectContext)()). ConfigureEntity(New CategoryConfig(), "Categories"). ConfigureEntity(New ItemConfig()). ... )
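    One detail that might be relevant (offered as a guess, and sketched in C# rather than VB for brevity): a System.Func(Of T, Boolean) can only bind to Enumerable.Where, so the filter runs in memory after the entities have been materialized, whereas an Expression(Of Func(Of T, Boolean)) binds to Queryable.Where and gets translated into the store query:

    ```csharp
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Linq.Expressions;

    // Sketch only; Repository<T> and the IQueryable field stand in for the generic repository and its ObjectSet.
    public class Repository<T> where T : class
    {
        private readonly IQueryable<T> _objectSet;

        public Repository(IQueryable<T> objectSet)
        {
            _objectSet = objectSet;
        }

        // Func<T, bool> binds to Enumerable.Where: rows are materialized first and filtered in memory,
        // so with lazy loading off, navigation properties such as x.Category are typically null here.
        public IEnumerable<T> Get(Func<T, bool> predicate)
        {
            return _objectSet.Where(predicate).ToList();
        }

        // Expression<Func<T, bool>> binds to Queryable.Where: the provider can translate the predicate
        // (including x.Category.ID) into the SQL it sends to the database.
        public IEnumerable<T> GetTranslated(Expression<Func<T, bool>> predicate)
        {
            return _objectSet.Where(predicate).ToList();
        }
    }
    ```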

    Read the article

  • Resizing QT's QTextEdit to Match Text Height: maximumViewportSize()

    - by Aaron
    I am trying to use a QTextEdit widget inside of a form containing several QT widgets. The form itself sits inside a QScrollArea that is the central widget for a window. My intent is that any necessary scrolling will take place in the main QScrollArea (rather than inside any widgets), and any widgets inside will automatically resize their height to hold their contents. I have tried to implement the automatic resizing of height with a QTextEdit, but have run into an odd issue. I created a sub-class of QTextEdit and reimplemented sizeHint() like this: QSize OperationEditor::sizeHint() const { QSize sizehint = QTextBrowser::sizeHint(); sizehint.setHeight(this->fitted_height); return sizehint; } this->fitted_height is kept up-to-date via this slot that is wired to the QTextEdit's "contentsChanged()" signal: void OperationEditor::fitHeightToDocument() { this->document()->setTextWidth(this->viewport()->width()); QSize document_size(this->document()->size().toSize()); this->fitted_height = document_size.height(); this->updateGeometry(); } The size policy of the QTextEdit sub-class is: this->setSizePolicy(QSizePolicy::MinimumExpanding, QSizePolicy::Preferred); I took this approach after reading this post. Here is my problem: As the QTextEdit gradually resizes to fill the window, it stops getting larger and starts scrolling within the QTextEdit, no matter what height is returned from sizeHint(). If I initially have sizeHint() return some large constant number, then the QTextEdit is very big and is contained nicely within the outer QScrollArea, as one would expect. However, if sizeHint gradually adjusts the size of the QTextEdit rather than just making it really big to start, then it tops out when it fills the current window and starts scrolling instead of growing. I have traced this problem to be that, no matter what my sizeHint() returns, it will never resize the QTextEdit larger than the value returned from maximumViewportSize(), which is inherited from QAbstractScrollArea. Note that this is not the same number as viewport()->maximumSize(). I am unable to figure out how to set that value. Looking at QT's source code, maximumViewportSize() is returning "the size of the viewport as if the scroll bars had no valid scrolling range." This value is basically computed as the current size of the widget minus (2 * frameWidth + margins) plus any scrollbar widths/heights. This does not make a lot of sense to me, and it's not clear to me why that number would be used anywhere in a way that supersedes the sub-class's sizeHint() implementation. Also, it does seem odd that the single "frameWidth" integer is used in computing both the width and the height. Can anyone please shed some light on this? I suspect that my poor understanding of QT's layout engine is to blame here.

    Read the article

  • Defining an Entity Framework 1:1 association

    - by Craig Fisher
    I'm trying to define a 1:1 association between two entities (one maps to a table and the other to a view - using DefinedQuery) in an Entity Framework model. When trying to define the mapping for this in the designer, it makes me choose the (1) table or view to map the association to. What am I supposed to choose? I can choose either of the two tables but then I am forced to choose a column from that table (or view) for each end of the relationship. I would expect to be able to choose a column from one table for one end of the association and a column from the other table for the other end of the association, but there's no way to do this. Here I've chosen to map to the "DW_WF_ClaimInfo" view and it is forcing me to choose two columns from that view - one for each end of the relationship. I've also tried defining the mapping manually in the XML as follows: <AssociationSetMapping Name="Entity1Entity2" TypeName="ClaimsModel.Entity1Entity2" StoreEntitySet="Entity1"> <EndProperty Name="Entity2"> <ScalarProperty Name="DOCUMENT" ColumnName="DOCUMENT" /> </EndProperty> <EndProperty Name="Entity1"> <ScalarProperty Name="PK_DocumentId" ColumnName="PK_DocumentId" /> </EndProperty> </AssociationSetMapping> But this gives: Error 2010: The Column 'DOCUMENT' specified as part of this MSL does not exist in MetadataWorkspace. Seems like it still expects both columns to come from the same table, which doesn't make sense to me. Furthermore, if I select the same key for each end, e.g.: <AssociationSetMapping Name="Entity1Entity2" TypeName="ClaimsModel.Entity1Entity2" StoreEntitySet="Entity1"> <EndProperty Name="Entity2"> <ScalarProperty Name="DOCUMENT" ColumnName="PK_DocumentId" /> </EndProperty> <EndProperty Name="Entity1"> <ScalarProperty Name="PK_DocumentId" ColumnName="PK_DocumentId" /> </EndProperty> </AssociationSetMapping> I then get: Error 3021: Problem in Mapping Fragment starting at line 675: Each of the following columns in table AssignedClaims is mapped to multiple conceptual side properties: AssignedClaims.PK_DocumentId is mapped to <AssignedClaimDW_WF_ClaimInfo.DW_WF_ClaimInfo.DOCUMENT, AssignedClaimDW_WF_ClaimInfo.AssignedClaim.PK_DocumentId> What am I not getting?

    Read the article

  • Cucumber error under bundler

    - by Senthil
    I've confirmed cucumber tests work fine without bundler, but with bundler it breaks. This is the error I get: You have a nil object when you didn't expect it! You might have expected an instance of Array. The error occurred while evaluating nil.each (NoMethodError) /home/senthil/rails/othersprograms/nmicho-teambox/vendor/plugins/cucumber/lib/cucumber/formatter/console.rb:148:in `record_tag_occurrences' /home/senthil/rails/othersprograms/nmicho-teambox/vendor/plugins/cucumber/lib/cucumber/formatter/pretty.rb:71:in `before_feature_element' /var/lib/gems/1.8/gems/cucumber-0.6.1/bin/../lib/cucumber/ast/tree_walker.rb:190:in `__send__' /var/lib/gems/1.8/gems/cucumber-0.6.1/bin/../lib/cucumber/ast/tree_walker.rb:190:in `send_to_all' /var/lib/gems/1.8/gems/cucumber-0.6.1/bin/../lib/cucumber/ast/tree_walker.rb:188:in `each' /var/lib/gems/1.8/gems/cucumber-0.6.1/bin/../lib/cucumber/ast/tree_walker.rb:188:in `send_to_all' /var/lib/gems/1.8/gems/cucumber-0.6.1/bin/../lib/cucumber/ast/tree_walker.rb:179:in `broadcast' /var/lib/gems/1.8/gems/cucumber-0.6.1/bin/../lib/cucumber/ast/tree_walker.rb:50:in `visit_feature_element' /var/lib/gems/1.8/gems/cucumber-0.6.1/bin/../lib/cucumber/ast/feature.rb:26:in `accept' /var/lib/gems/1.8/gems/cucumber-0.6.1/bin/../lib/cucumber/ast/feature.rb:25:in `each' /var/lib/gems/1.8/gems/cucumber-0.6.1/bin/../lib/cucumber/ast/feature.rb:25:in `accept' /var/lib/gems/1.8/gems/cucumber-0.6.1/bin/../lib/cucumber/ast/tree_walker.rb:20:in `visit_feature' /var/lib/gems/1.8/gems/cucumber-0.6.1/bin/../lib/cucumber/ast/tree_walker.rb:180:in `broadcast' /var/lib/gems/1.8/gems/cucumber-0.6.1/bin/../lib/cucumber/ast/tree_walker.rb:19:in `visit_feature' /var/lib/gems/1.8/gems/cucumber-0.6.1/bin/../lib/cucumber/ast/features.rb:29:in `accept' /var/lib/gems/1.8/gems/cucumber-0.6.1/bin/../lib/cucumber/ast/features.rb:17:in `each' /var/lib/gems/1.8/gems/cucumber-0.6.1/bin/../lib/cucumber/ast/features.rb:17:in `each' /var/lib/gems/1.8/gems/cucumber-0.6.1/bin/../lib/cucumber/ast/features.rb:28:in `accept' /var/lib/gems/1.8/gems/cucumber-0.6.1/bin/../lib/cucumber/ast/tree_walker.rb:14:in `visit_features' /var/lib/gems/1.8/gems/cucumber-0.6.1/bin/../lib/cucumber/ast/tree_walker.rb:180:in `broadcast' /var/lib/gems/1.8/gems/cucumber-0.6.1/bin/../lib/cucumber/ast/tree_walker.rb:13:in `visit_features' /var/lib/gems/1.8/gems/cucumber-0.6.1/bin/../lib/cucumber/cli/main.rb:61:in `execute!' /var/lib/gems/1.8/gems/cucumber-0.6.1/bin/../lib/cucumber/cli/main.rb:20:in `execute' /var/lib/gems/1.8/gems/cucumber-0.6.1/bin/cucumber:8 /var/lib/gems/1.8/bin/cucumber:19:in `load' /var/lib/gems/1.8/bin/cucumber:19 I've searched all over Google and found nothing. Oh, by the way, the above error is from when I was trying to run cucumber tests in the Teambox app. I've downloaded it fresh and it still breaks. But if I use their old versions, where they didn't have bundler installed, it works fine. Can anyone help? Thanks

    Read the article

  • How can I define a clojure type that implements the servlet interface?

    - by Rob Lachlan
    I'm attempting to use deftype (from the bleeding-edge clojure 1.2 branch) to create a java class that implements the java Servlet interface. I would expect the code below to compile (even though it's not very useful). (ns foo [:import [javax.servlet Servlet ServletRequest ServletResponse]]) (deftype servlet [] javax.servlet.Servlet (service [this #^javax.servlet.ServletRequest request #^javax.servlet.ServletResponse response] nil)) But it doesn't compile. The compiler produces the message: Mismatched return type: service, expected: void, had: java.lang.Object [Thrown class java.lang.IllegalArgumentException] Which doesn't make sense to me, because I'm returning nil. So the fact that the return type of the method is void shouldn't be a problem. For instance, for the java.util.Set interface: (deftype bar [#^Number n] java.util.Set (clear [this] nil)) compiles without issue. So what am I doing wrong with the Servlet interface? To be clear: I know that the typical case is to subclass one of the servlet abstract classes rather than implement this interface directly, but it should still be possible to do this. Stack Trace: The stack trace for the (deftype servlet... is: Mismatched return type: service, expected: void, had: java.lang.Object [Thrown class java.lang.IllegalArgumentException] Restarts: 0: [ABORT] Return to SLIME's top level. Backtrace: 0: clojure.lang.Compiler$NewInstanceMethod.parse(Compiler.java:6461) 1: clojure.lang.Compiler$NewInstanceExpr.build(Compiler.java:6119) 2: clojure.lang.Compiler$NewInstanceExpr$DeftypeParser.parse(Compiler.java:6003) 3: clojure.lang.Compiler.analyzeSeq(Compiler.java:5289) 4: clojure.lang.Compiler.analyze(Compiler.java:5110) 5: clojure.lang.Compiler.analyze(Compiler.java:5071) 6: clojure.lang.Compiler.eval(Compiler.java:5347) 7: clojure.lang.Compiler.eval(Compiler.java:5334) 8: clojure.lang.Compiler.eval(Compiler.java:5311) 9: clojure.core$eval__4350.invoke(core.clj:2364) 10: swank.commands.basic$eval_region__673.invoke(basic.clj:40) 11: swank.commands.basic$eval_region__673.invoke(basic.clj:31) 12: swank.commands.basic$eval__686$listener_eval__687.invoke(basic.clj:54) 13: clojure.lang.Var.invoke(Var.java:365) 14: foo$eval__2285.invoke(NO_SOURCE_FILE) 15: clojure.lang.Compiler.eval(Compiler.java:5343) 16: clojure.lang.Compiler.eval(Compiler.java:5311) 17: clojure.core$eval__4350.invoke(core.clj:2364) 18: swank.core$eval_in_emacs_package__320.invoke(core.clj:59) 19: swank.core$eval_for_emacs__383.invoke(core.clj:128) 20: clojure.lang.Var.invoke(Var.java:373) 21: clojure.lang.AFn.applyToHelper(AFn.java:169) 22: clojure.lang.Var.applyTo(Var.java:482) 23: clojure.core$apply__3776.invoke(core.clj:535) 24: swank.core$eval_from_control__322.invoke(core.clj:66) 25: swank.core$eval_loop__324.invoke(core.clj:71) 26: swank.core$spawn_repl_thread__434$fn__464$fn__465.invoke(core.clj:183) 27: clojure.lang.AFn.applyToHelper(AFn.java:159) 28: clojure.lang.AFn.applyTo(AFn.java:151) 29: clojure.core$apply__3776.invoke(core.clj:535) 30: swank.core$spawn_repl_thread__434$fn__464.doInvoke(core.clj:180) 31: clojure.lang.RestFn.invoke(RestFn.java:398) 32: clojure.lang.AFn.run(AFn.java:24) 33: java.lang.Thread.run(Thread.java:637)

    Read the article

  • What are the reasons why the CPU usage doesn’t go 100% with C# and APM?

    - by Martin
    I have an application which is CPU intensive. When the data is processed on a single thread, the CPU usage goes to 100% for many minutes. So the performance of the application appears to be bound by the CPU. I have multithreaded the logic of the application, which results in an increase in overall performance. However, the CPU usage hardly goes above 30%-50%. I would expect the CPU (and the many cores) to go to 100%, since I process many sets of data at the same time. Below is a simplified example of the logic I use to start the threads. When I run this example, the CPU goes to 100% (on an 8/16 cores machine). However, my application, which uses the same pattern, doesn't. public class DataExecutionContext { public int Counter { get; set; } // Arrays of data } static void Main(string[] args) { // Load data from the database into the context var contexts = new List<DataExecutionContext>(100); for (int i = 0; i < 100; i++) { contexts.Add(new DataExecutionContext()); } // Data loaded. Start to process. var latch = new CountdownEvent(contexts.Count); var processData = new Action<DataExecutionContext>(c => { // The thread doesn't access data from a DB, file, // network, etc. It reads and writes data in RAM only // (in its context). for (int i = 0; i < 100000000; i++) c.Counter++; }); foreach (var context in contexts) { processData.BeginInvoke(context, new AsyncCallback(ar => { latch.Signal(); }), null); } latch.Wait(); } I have reduced the number of locks to the strict minimum (only the latch is locking). The best way I found was to create a context in which a thread can read/write in memory. Contexts are not shared among threads. The threads can't access the database, files or network. In other words, I profiled my application and I didn't find any bottleneck. Why doesn't the CPU usage of my application go above 50%? Is it the pattern I use? Should I create my own threads instead of using the .Net thread pool? Are there any gotchas? Is there any tool you could recommend to help me find the issue? Thanks!
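    It is hard to tell from the description alone where the time goes, but one inexpensive check is whether thread-pool ramp-up plays a role, since Delegate.BeginInvoke queues its work to the ThreadPool. The sketch below raises the pool's minimum worker count and also runs the same loop on dedicated threads for comparison; this is a diagnostic guess, not a known cause of the 30%-50% ceiling:

    ```csharp
    using System;
    using System.Linq;
    using System.Threading;

    class ThreadPoolRampCheck
    {
        static void Main()
        {
            int workerMin, ioMin;
            ThreadPool.GetMinThreads(out workerMin, out ioMin);
            Console.WriteLine("Default minimum worker threads: {0}", workerMin);

            // Ask the pool to keep at least one worker per logical core ready before the work starts.
            ThreadPool.SetMinThreads(Environment.ProcessorCount, ioMin);

            // Comparison run: dedicated threads bypass the pool's thread-injection heuristics entirely.
            var threads = new Thread[Environment.ProcessorCount];
            var results = new long[Environment.ProcessorCount];
            var latch = new CountdownEvent(threads.Length);
            for (int i = 0; i < threads.Length; i++)
            {
                int idx = i; // copy the loop variable so each closure gets its own index
                threads[i] = new Thread(() =>
                {
                    long counter = 0;
                    for (int j = 0; j < 100000000; j++) counter++; // pure CPU-bound loop
                    results[idx] = counter;
                    latch.Signal();
                });
                threads[i].Start();
            }
            latch.Wait(); // with dedicated threads, all cores should sit near 100% until this returns
            Console.WriteLine("Total work done: {0}", results.Sum());
        }
    }
    ```

    If the dedicated-thread run pegs every core while the BeginInvoke version does not, the pool's injection rate (or its default minimum) is worth a closer look; if both stay around 30%-50%, the bottleneck is elsewhere - memory bandwidth and contention are common suspects for this kind of workload.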

    Read the article
