Search Results

Search found 9988 results on 400 pages for 'tv less in jersey'.

Page 89/400 | < Previous Page | 85 86 87 88 89 90 91 92 93 94 95 96  | Next Page >

  • Real benefits of TCP TIME-WAIT and implications in production environment

    - by user64204
    SOME THEORY I've been doing some reading on TCP TIME-WAIT (here and there) and what I read is that it's a value set to 2 x MSL (maximum segment lifetime) which keeps a connection in the "connection table" for a while to guarantee that, "before you're allowed to create a connection with the same tuple, all the packets belonging to previous incarnations of that tuple will be dead". Since segments received (apart from SYN under specific circumstances) while a connection is either in TIME-WAIT or no longer existing would be discarded, why not close the connection right away? Q1: Is it because there is less processing involved in dealing with segments from old connections and less processing to create a new connection on the same tuple when in TIME-WAIT (i.e. are there performance benefits)? If the above explanation doesn't stand, the only reason I see TIME-WAIT being useful would be if a client sends a SYN for a new connection before it sends the remaining segments for an old connection on the same tuple, in which case the receiver would re-open the connection but then get bad segments and would have to terminate it. Q2: Is this analysis correct? Q3: Are there other benefits to using TIME-WAIT? SOME PRACTICE I've been looking at the munin graphs on a production server that I administer. Here is one: As you can see there are more connections in TIME-WAIT than ESTABLISHED, around twice as many most of the time, on some occasions four times as many. Q4: Does this have an impact on performance? Q5: If so, is it wise/recommended to reduce the TIME-WAIT value (and to what)? Q6: Is this ratio of TIME-WAIT / ESTABLISHED connections normal? Could this be related to malicious connection attempts?
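
    For the SOME PRACTICE side, the munin numbers can be sanity-checked from a shell; a minimal sketch (assuming netstat and a reasonably recent iproute2 ss; counts will differ slightly because of header lines):

        # Count sockets per TCP state (skip the two netstat header lines)
        netstat -ant | awk 'NR>2 {print $6}' | sort | uniq -c | sort -rn

        # Or count the two states of interest directly with ss
        ss -tan state time-wait | wc -l
        ss -tan state established | wc -l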

    Read the article

  • Solid State Drive Occasionally Freezes For A Minute While OS Is "Beach Balling"

    - by Boris_yo
    Almost a month ago I bought an Intel 330 128GB solid state drive, migrated my data with the Intel-branded limited-feature Acronis from the old HDD to the new SSD, optimized it with the Intel Toolbox and started using it. Occasionally I get close to 1 minute freezes while seeing operating system "beach balls"; animations still work, and I can interact and click on something, but nothing responds and nothing loads. Recently a couple of such freezes occurred in a row within a shorter amount of time. I have noticed that if I stop interacting with the laptop, the freeze lasts less time than if I keep interacting with it. But the bigger problem is when the freeze just does not end and the computer stays stuck for more than half an hour, until I run out of patience to keep waiting and feel I need to restart the system because I am not getting anywhere. Such a freeze happened while the laptop was cold booting into Windows 7. This is when the freeze hang occurred and I had to restart, only to be greeted later with the Windows recovery screen stating something about a failure of the boot sector and asking me to insert a Windows repair CD. But after I restarted, Windows booted successfully and all was well. I have filmed a video of the freeze hang occurring on a cold boot, which you can see here (on the video page look below for the description): http://www.youtube.com/watch?v=8b7MQlcDTUs As I mentioned in the beginning, the SSD is less than a month old, but here are the S.M.A.R.T. statistics just in case (TRIM is enabled btw according to CrystalDiskInfo): I want to emphasize that this SSD is the only drive I have, yet it is working in RAID mode (it was enabled initially in the BIOS by the previous laptop's owner) on Intel Rapid Storage drivers. I am contemplating switching to AHCI mode but want to be sure this won't cause data loss. Additionally, the stock firmware is the only firmware available currently, yet Intel does not respond to my posts in their community board. If anyone here has this SSD model or generally has experience with SSD drives, I would love to know your thoughts.

    Read the article

  • pxe boot dos 7.x / 8.x on modern mainboard without floppy controller

    - by GitaarLAB
    How do I PXE boot MS-DOS 7.x / 8.x on a modern PC (mainboard without a floppy controller) without using an external USB floppy drive? MS-DOS 6.22 and earlier, and other flavors, PXE boot just fine on floppy-less hardware. But DOS 7.x and 8.x throw an error on boot: "Type the name of the Command Interpreter (e.g., C:\WINDOWS\COMMAND.COM)". I read somewhere during research that this was a rather obscure error that started to become more common with the advent of floppy-controller-less hardware. On some hardware (BIOS dependent) one could plug a USB floppy drive into the computer before booting (but that MIGHT also require it to be a "golden floppy drive", as they were called back then). According to a Russian site (which I read about a year ago and cannot find the hyperlink for), MS-DOS versions newer than 6.22 do some kind of floppy-drive reset during initialization, and since there is no floppy controller to respond, you get this error. How can I resolve this (without a physical external USB floppy)? Might there be some kind of virtual floppy driver that could resolve this (for example one loaded before the DOS image loads)? Or could someone point me in the right direction (maybe even a hex address and some further explanation or something)? I'm using syslinux by the way.
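
    For reference, the pxelinux.cfg entry in this kind of setup is presumably something like the following memdisk sketch (the label and image file name are hypothetical; the memdisk module and its floppy option ship with the stock syslinux package):

        LABEL dos71
            # memdisk loads the floppy image into RAM and emulates the drive via INT 13h
            KERNEL memdisk
            INITRD dos71boot.img
            APPEND floppy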

    Read the article

  • Domain Trust 2008 to 2003

    - by nick3216
    I'm having trouble setting up the trust relationship between a Windows Server 2003 and a Windows Server 2008 AD. Domain "a" is at the Windows Server 2003 forest functional level. Domain "b" is at the Windows Server 2008 forest functional level. I can set up the incoming side of the trust relationship on domain "a" so that it trusts domain "b". Try as I might, on domain "b" I can't set up the outgoing side of the trust relationship to domain "a". The GUI gives an unhelpful 'The request is not supported'. I'm not sure netdom is being more or less helpful, as it refers me to FilterSIDs: netdom trust /add b /uo:b\admin /po:* /d:a /ud:a\admin /pd:* /oneside:trusting To improve the security of this external trust, security identifier (SID) filtering is enabled, however, if users have been migrated to the trusted domain and their SID histories have been preserved, you may choose to turn off this feature. For more information about SID filtering and how to turn it off, see the help for netdom trust /FilterSids or see Help and Support. The request is not supported. The command failed to complete successfully. I say 'less helpful' because Windows Server 2008 doesn't support the /FilterSIDs option. How can we force creation of this trust? Edit: Just to clarify, I've checked that the [Computer Configuration\Windows Settings\Security Settings\Local Policies\Security Options] "Network access: Allow anonymous SID/Name translation” setting is enabled on both sides of the trust, as per http://social.technet.microsoft.com/Forums/en/winserverDS/thread/cc61fc25-3569-4413-bbfd-92390eb31118

    Read the article

  • What to look for in a switch with LAN/WAN versus an iSCSI SAN?

    - by Luke
    I'm setting up a VMWare ESXi 5 environment with 3 server nodes. Dell recommended 2x Force10 S60 switches shared (iSCSI SAN, LAN/WAN). The S60 switches are extremely powerful. They have 1.25 GB of buffer cache, < 9us latency. But they are very expensive (online price ~$15k per switch, actual quote a little less). I've been told that "by the book" you should at least have 2 internal switches for SAN, and 2 switches for LAN/WAN (each with a redundant). I know some of the pros and cons of each approach. What I'm wondering is, would it be more cost effective to disjoin the SAN from LAN with less expensive switches? The answer to this question highlights what I should be looking for in a switch for the SAN. What should I be looking for in a LAN/WAN switch, in comparison to the SAN? With the above linked question for the SAN: How is buffer latency measured? When you see 36 MB of buffer cache, is that shared or per port? So 36 MB would be 768kb or 36MB per port? With 3 to 6 servers how much buffer cache do you really need? What else should I be looking at? Our application will be heavily using HTML5 websockets (high number of persistent connections). The amount of data being sent is small; Data sent between client <- server isn't broadcasted (not a chat/IM service). We will be doing some database reporting too (csv export, sums, some joins). We are a small business and on a budget. We'd probably only be able to spend no more than $20k on switches total (2 or 4).

    Read the article

  • Difficult to point-and-click target with Apple Magic Trackpad?

    - by Andrew Swift
    My Magic Trackpad is great for most things involving dragging and gestures, but for simple point-and-click use it is starting to get really annoying. For example, my bank has a numeric code that I need to enter from a visual grid on the screen, by clicking on the six numbers in order. To do this is quite difficult with the Magic Trackpad. Idea 1: it is because when you start dragging on the touchpad, there is a brief delay before the mouse starts to move. The delay might be enough to screw up my anticipation of where the cursor is going to go. Idea 2: it may have to do with the cursor acceleration. If I move it a little, the cursor moves a little. If I move it a lot, it goes much faster. I have tried various settings in the preference pane, and although the cursor does move more or less quickly, it doesn't seem any more or less easy to hit a given target. As I am writing this post, I tried a brief experiment -- I chose a spot on the page (the bracket in the superuser logo) and tried to move the cursor there in one quick movement. It's extremely difficult, even though I've been using Macs and touchpads since 1999. There is something about the behavior of the trackpad that is making it impossible for me to learn how to accurately predict where the cursor will end up. Does anyone else have this problem?

    Read the article

  • How to go about rotating logs which are arbitrary named and placed in deeply nested directories?

    - by Roman Grazhdan
    I have a couple of hosts which are basically a playground for developers. On these hosts, each developer has a directory under /tmp where he is free to do whatever he wants - store files, write logs, etc. Of course, the logs are to be rotated, or else the disk will be 100% full in a week. The files can be plenty, but I've dealt with that with paths like /tmp/[a-e]*/* and so on and lived happily for a while, but as they try new cool stuff on the machine the logrotate rules grow ugly and unmanageable, and it's getting more difficult to understand which files hit the glob. Also, logrotate would segfault if asked to rotate a socket. I don't feel like trying to enforce naming policies in that environment; I think it's going to take quite a lot of time, get people annoyed, and still fail at some point. And I still need to manage the logs, not just rm the dirs at night. So is it a good idea in circumstances like these to write a script which would handle these temporary files? I prefer sticking with standard utilities whenever possible, but here I think logrotate is getting less and less manageable. And probably someone has heard of some logrotate alternatives which would work well in such an environment? I don't need emailing of logs or other advanced features, so theoretically some well commented find | xargs would do. P.S. I do have a log aggregator, but this stuff is not going to touch my little cute logstash machine.
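
    In the spirit of the "well commented find | xargs" idea, a minimal sketch of what such a script could look like (assuming GNU find and xargs; paths, name patterns and retention periods are placeholders, and -type f conveniently skips the sockets that made logrotate segfault):

        #!/bin/sh
        # Compress developer logs under /tmp that have not been written to for a day
        find /tmp -type f -name '*.log' -mtime +0 -print0 | xargs -0 -r gzip

        # Throw away compressed logs older than a week
        find /tmp -type f -name '*.log.gz' -mtime +7 -print0 | xargs -0 -r rm -f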

    Read the article

  • RAM configuration: 4x4 or 2x8?

    - by Carl B
    I am looking to upgrade my RAM to 16 GB and I am wondering if there is any distinct advantage to the way I do it, that being a 4x4 or a 2x8 setup. In all my searching there have been a number of pros for each profile, but I can find no benchmark results comparing the two setups. So, if there are 2 profiles of the same speed, same voltage, same timings and same CAS latency, which would perform better or have a better overall benefit? A few examples from my search - a 4x4 set has the benefit that if one stick fails, you only lose 25% of your RAM vs 50% in a 2x8 setup. A 2x8 setup puts less strain on the memory controller and motherboard. A 2x8 setup generates less heat. A 2x8 setup is easier to overclock (not part of my needs, but a lot of the comparisons circled around how easily the two-stick setup overclocks). There is one outstanding benefit that I have found, at least at the vendor I have looked at, and that is price: the 2x8 is nearly half the cost. My motherboard supports a max of 16 GB and I have a 64-bit OS. Has anyone seen any performance comparisons, or is 16 GB just 16 GB no matter how you slice it? And is there any merit to the above pros? Edit: as per the mobo specs - Main Memory • Supports four unbuffered DIMM of 1.5 Volt DDR3 800/1066/1333/1600*/1800*/2133* (OC) DRAM, 16GB Max

    Read the article

  • Matched or unmatched drives for RAID arrays?

    - by Will
    Looking around there is conflicting information on this, with some strongly suggesting one or the other. From my understanding the issue with matched drives is that the wear on both drives is more or less the same, so the potential for the second drive failing with, or very soon after, the first is pretty high. People also claim matched drives give substantially higher performance. However, assuming the unmatched drives are more or less the same (e.g. two 1 TB SATA II 7200rpm drives with 32MB cache), would the minor differences between, say, a Seagate and a Western Digital one (say one has a 128MB/s read rate and the other a 150MB/s read rate, as well as I guess various other minor differences) actually cause any notable performance loss, i.e. potentially worse than two matched 128MB/s drives, or does RAID not really care and give you essentially an optimal solution (e.g. up to 278MB/s total read speed for RAID 0 and 1), and similarly for other RAID levels with more "unmatched" drives (5 and 1+0 come to mind as possibilities)? Also I couldn't find much info on how this differs across RAID setups, e.g. RAID 0 or RAID 1, software or hardware RAID, etc. I'm assuming such things have an effect, and that it's not all the same for RAID in general?

    Read the article

  • Is it possible to get ESC to behave as an actual escape key?

    - by leftaroundabout
    So I have finally switched, not so much because I'm yet convinced Emacs in itself is the better editor but because it certainly does have more powerful extensions. I am still using vim-mode though, perhaps that's part of my problem... but I really don't intend to abandon the modes approach, so I'll probably stay with it. I'm getting along quite well, but one thing I find really unnerving is the behaviour of the esc key (which I have in the shift-lock position). I'm used to relying on this a lot as more or less a "panic key", which may not be nice, but I find it lets me work while caring quite a bit less about the keystrokes themselves, and thus faster. What I'd like this key to do is just get me out of any minibuffer or special editing mode into a well-defined normal state. Perhaps most importantly, I would like it to not do anything unrelated: simulate meta (what do I have an alt key for?), close windows I'm not even in at the time, or get interpreted as the final key in some key sequence. ... Is it possible to turn all that off and make esc an actual escape key? Vim-mode does make it behave kind of as I like in some situations, but when other plugins are involved this often breaks. Alternatively, are there different options that might suit my kind of workflow?

    Read the article

  • XFS: No space left on device

    - by beketa
    I am using XFS on small HDD (/dev/sdb1, less than 1TB) and storing many small files (-32KB). df -h and -i show that it has available space. # df -hv Filesystem Size Used Avail Use% Mounted on /dev/sda3 127G 19G 102G 16% / tmpfs 16G 0 16G 0% /lib/init/rw udev 16G 168K 16G 1% /dev tmpfs 16G 0 16G 0% /dev/shm /dev/sda1 99M 20M 75M 21% /boot /dev/sdb1 136G 123G 14G 91% /mnt/sdb1 # df -iv Filesystem Inodes IUsed IFree IUse% Mounted on /dev/sda3 8421376 36199 8385177 1% / tmpfs 4126158 5 4126153 1% /lib/init/rw udev 4124934 671 4124263 1% /dev tmpfs 4126158 1 4126157 1% /dev/shm /dev/sda1 26112 222 25890 1% /boot /dev/sdb1 24905120 11076608 13828512 45% /mnt/sdb1 However I got No space left on device error. # touch /mnt/sdb1/test touch: cannot touch `/mnt/sdb1/test': No space left on device I think inode64 issue is not related to this case because drive is less than 1TB and df -i shows that there are free inodes. I unmounted and mounted with -o inode64 but got the same error. xfs_repair does not report any problem. xfs_info shows drive information as follows. # xfs_info /dev/sdb1 meta-data=/dev/sdb1 isize=1024 agcount=16, agsize=2227764 blks = sectsz=512 attr=2 data = bsize=4096 blocks=35644210, imaxpct=25 = sunit=0 swidth=0 blks naming =version 2 bsize=4096 ascii-ci=0 log =internal bsize=4096 blocks=17404, version=2 = sectsz=512 sunit=0 blks, lazy-count=1 realtime =none extsz=4096 blocks=0, rtextents=0 Any ideas? Thanks!
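
    One more diagnostic worth capturing alongside the output above (a sketch; xfs_db's freesp command reports how the remaining free space is distributed across extent sizes, which can matter even when df still shows blocks free - if the -s summary flag isn't available in your xfsprogs, plain freesp prints a histogram):

        # Read-only summary of free space extents on the filesystem
        xfs_db -r -c "freesp -s" /dev/sdb1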

    Read the article

  • AMD processors with graphics card bundled [closed]

    - by shybovycha
    Sorry for posting this question here - I just don't know which StackExchange website I should be writing to. I've heard AMD has created processors with a video card bundled. These processors should work as fast as usual processors with a discrete video card, but AMD's should use less power and give off less heat. Some googling around gave me results like "AMD processors of the A-series", which were mentioned to be built using the technology I described above. But on the other hand, AMD drivers are released infrequently and their quality is not very good. I am a game developer and web developer, so I need a powerful processor, a capable graphics card and a lot of RAM on board (to make it possible to create a sample Grails application, for example, or some 3D models in Maya/Cinema4D, for instance). Still, I want my battery to be a long-living one, so power usage is a bit critical for me. So, my questions are: is there any processor-building technology like I've described, and which series are they (if they exist)? Which processor would fit a laptop best for the purposes mentioned above: an AMD one, or an i5/i7 with an nVidia graphics card?

    Read the article

  • Adding a transaction ID to ruby-on-rails logs

    - by Blue Warrior NFB
    We have a RoR app (rails version 3.2.15 right now). As it has been getting busier, the log-files it's producing are becoming less and less useful for troubleshooting. When they come in like this, it's not a problem: Started GET "/accounts/28088166/kittens/22894/rendered_png?file_id=5d3eaec77954a489b5ddd75143091767&kitten_store_id=9970569bbacf7b6dbeb4eb9295960d69&size=large" for 172.16.202.30 at 2013-11-12 13:45:00 +0000 Processing by KittenController#rendered_png as HTML Parameters: {"file_id"=>"5d3eaec77954a489b5ddd75143091767", "kitten_store_id"=>"9970569bbacf7b6dbeb4eb9295960d69", "size"=>"large", "kitten_cam_id"=>"280941", "id"=>"kjlak357aw479607t"} Rendered text template (0.0ms) Sent data (1.8ms) Completed 200 OK in 1037.4ms (Views: 1.4ms | ActiveRecord: 98.4ms) Short request, quickly assembled, all the relevant log-lines are in one block. However, not all of our code renders in 1037ms. There are a few calls that can exceed several seconds, and during that time several of these quicker ones can come in. When that happens, it's very, very hard to identify which log-lines belong to which GET. Sent data (4.1ms) Completed 200 OK in 767.4ms (Views: 3.2ms | ActiveRecord: 72.2ms) Completed 200 OK in 2338.0ms (Views: 0.2ms | ActiveRecord: 0.0ms) Ooookaaaay... which goes to what? Is it possible to add something like a transaction-ID to these log-lines? The log-spam would be interspersed, but at least grep-magic would give me the unified entries that I need.
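
    Rails 3.2 itself ships tagged logging, which is one way to get what is described here; a minimal sketch (assuming the built-in config.log_tags hook in config/application.rb - the :uuid tag is the per-request id supplied by the ActionDispatch::RequestId middleware):

        # config/application.rb (inside the Application class)
        # Prefix every log line with the per-request UUID so interleaved requests can be grepped apart
        config.log_tags = [ :uuid ]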

    Read the article

  • vSphere education - What are the downsides of configuring virtual machines with *too* much RAM?

    - by ewwhite
    VMware memory management seems to be a tricky balancing act. With cluster RAM, Resource Pools, VMware's management techniques (TPS, ballooning, host swapping), in-guest RAM utilization, swapping, reservations, shares and limits, there are a lot of variables. I'm in a situation where clients are using dedicated vSphere cluster resources. However, they are configuring the virtual machines as though they were on physical hardware. In turn, this means a standard VM build may have 4 vCPUs and 16GB or more of RAM. I come from the school of starting small (1 vCPU, minimal RAM), checking real-world use and adjusting up as necessary. Some examples from a "problem" cluster. Resource pool summary - Looks almost 4:1 overcommitted. Note the high amount of ballooned RAM. Resource allocation - The Worst Case Allocation column shows that these VMs would have access to less than 50% of their configured RAM under constrained conditions. The real-time memory utilization graph of the top VM in the listing above. 4 vCPU and 64GB RAM allocated. It averages under 9GB use. Summary of the same VM What are the downsides of overcommitting and overconfiguring resources (specifically RAM) in vSphere environments? Assuming that the VMs can run in less RAM, is it fair to say that there's overhead to configuring virtual machines with more RAM than they need? What is the counter-argument to: "if a VM has 16GB of RAM allocated, but only uses 4GB, what's the problem??"? E.g. do customers need to be educated? What specific metric should be used to meter RAM usage. Tracking the peaks of "Active" versus time?

    Read the article

  • How do I know if I managed to completely remove an undetected trojan?

    - by ubuntuisbetter
    I caught a trojan that uses explorer.exe to reproduce itself if its autostart entry or its main exe file in Programs/x is deleted. It had already tried to contact a suspicious server through explorer.exe; I blocked that via my firewall. I: Removed the autostart entries from the registry. Looked through my services for anything suspicious. Deleted the trojan from Programs/. Went through System Volume Information to find a 2 month old explorer.exe and replaced the possibly infected one. There are no suspicious processes running anymore (no duplicate explorer.exe) and nothing tries to connect to this trojan owner's server either. I checked my system with several anti-malware programs too. What the trojan did: Started a second explorer.exe. Whenever I deleted the main trojan exe file it was reproduced (by the second explorer.exe). Whenever I deleted the autostart entry it was reproduced by the explorer.exe too. When I terminated the suspicious explorer.exe, which used only half as much memory as the less suspicious one from Windows, a strange thing that I know from the computers in my Informatics class happened: a window popped up in the top left of my explorer-less desktop, titled "Personal settings for ... are ..." that obviously copied some files. Then both explorer.exes started again and the trojan was everywhere again. What did the trojan actually do to get explorer to rescue it? Is my PC clean of this newish trojan now? What are the other locations I should check for the trojan? The trojan doesn't seem very high-level; could it have changed other system files, or is the autostart entry vital for it? Thanks in advance, your trojan-paranoid friend (getting Linux in a week)

    Read the article

  • Reduce size of MP4

    - by testing
    I have an MP4 file with a length of 22:44. Here are the details: Video: width: 720 px height: 404 px data bitrate: 1022 kBit/s overall bitrate: 1182 kBit/s fps: 24 codec: H264 - MPEG4 AVC (part 10) (avc1) Audio: bitrate: 159 kBit/s stereo sample rate: 48 kHz codec: MPEG AAC Audio (mp4a) I thought I could reduce the current file size (about 200 MB) by reducing the width and the height (420 x 236). I tried different programs: Handbrake, Format Factory, Next Video Converter and Super. The first three didn't work as expected: Handbrake has a bug when setting the width and height, and the next two don't allow fine-grained control of the video size (only presets for width and height). Super seems to be the best, but I haven't found a setting which reduces the file size. I reduced the width and the height but only got 20 MB less. Now I have tried the umpteenth setting and the file size is still too high. I want to reduce the file size to 100 MB or less. The output format should be FLV or MP4, because I need this for flowplayer. Which settings in SUPER, or which program, should I use to reduce the file size? Of course the video should still be viewable.
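
    If a command-line tool is an option, the same job can be done with ffmpeg (a different tool from the ones tried above; a minimal sketch where the scale and bitrates are just examples - roughly 500 kbit/s video plus 96 kbit/s audio over 22:44 works out to about 100 MB; older ffmpeg builds may need -strict experimental for the aac encoder):

        # Downscale to 420x236 and re-encode video and audio at lower bitrates
        ffmpeg -i input.mp4 -vf scale=420:236 -c:v libx264 -b:v 500k -c:a aac -b:a 96k output.mp4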

    Read the article

  • Powershell Script to help archive company email

    - by sec_goat
    I am attempting to use powershell to gather emails that pertain to a certain subject, so that these correspondences might be turned over to a legal department. I am having a couple of issues here that I would like some assistance getting past. I run the following command: get-mailbox -Database "Mailbox Database" | Export-Mailbox -ContentKeywords "Keywords To Search" -TargetMailbox "sec_goat" -TargetFolder EmailSearch -StartDate "01/13/2011 12:01:00 This has pretty much done what I want, and returned a boat load of emails, however it has also flooded my inbox with hundreds of blank calendars and contact lists. I realize now I should have used the exclusion on these folders, as well as a test environment (which we don't have). 1.How can I clean up this script to not include all the blank folders, contacts and calendars that DO NOT match the keywords search? 2.How do I clean up hundreds of blank contact lists and calendars in my mailbox without right clicking and deleting each one? EDIT: I edited the post to change the scope of the question. I think my focus is less on the legal perspective and more on the "How can I clean up my mess and make future archives less messy and painful?"
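
    If the Exchange version in play supports it, Export-Mailbox's -ExcludeFolders parameter is the usual way to keep the empty calendars and contact lists out of future exports; a sketch of the same command with exclusions added (confirm the parameter with get-help Export-Mailbox in your environment):

        get-mailbox -Database "Mailbox Database" | Export-Mailbox -ContentKeywords "Keywords To Search" `
            -ExcludeFolders "\Calendar","\Contacts" `
            -TargetMailbox "sec_goat" -TargetFolder EmailSearch -StartDate "01/13/2011 12:01:00"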

    Read the article

  • HAProxy appsession vs cookie precedence

    - by user1139473
    I am trying to find the best solution for balancing and keeping persistence on our application behind HAProxy. Here is our basic configuration: https://gist.github.com/endzyme/1804046b23c37beba520 After playing around with taking members down and up and also reloading the haproxy (with -sf) I have noticed that appsession isn't 100% effective, it would appear that sometimes it doesn't always 'request-learn'. I also tried to add a cookie JSESSION prefix to balance in case request-learn didn't take. Unfortunately it would present scenarios where the prefix would list svr2 but it was balanced to a different server. I am assuming it's because the appsession table takes first then sticks on that before using the cookie parameter. I have not tested with using cookie as an inserted option (not prefix on existing cookie) but I am thinking it would yield similar results. My question is: Which one is checked first, appsession or cookie, and is it an immediate catch after it reads the first one, or a fall through? Also as a follow up - is it not recommended to use both in the same backend? Cookie as I understand takes less memory resources, is agnostic to reloads and has way better reliability of persistence. Appsession I assume takes less cpu resource, since it's reading not writing. (Bonus Question: is there a way to inspect appsession/cookie table map? socket show table doesn't show anything except stick-tables) Many thanks in advance, -Nick
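
    For context, the relevant part of a backend that combines both mechanisms as described above looks roughly like this (a sketch, not the poster's actual gist; the server names and addresses are made up):

        backend app
            balance roundrobin
            # appsession learns the JSESSIONID value and maps it to a server in HAProxy's own table
            appsession JSESSIONID len 52 timeout 3h request-learn
            # cookie ... prefix rewrites the same cookie to "svr1~<original value>" on the way out
            cookie JSESSIONID prefix
            server svr1 10.0.0.1:8080 cookie svr1 check
            server svr2 10.0.0.2:8080 cookie svr2 check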

    Read the article

  • Using Plesk for webhosting on Ubuntu - Security risk or reasonably safe?

    - by user66952
    Sorry for this newb question - I'm pretty clueless about Plesk and only have limited Debian (without Plesk) experience. If the question is too dumb, just telling me how to ask a smarter one, or what kind of info I should read first to improve the question, would be appreciated as well. I want to offer a program for download on my website, hosted on an Ubuntu 8.04.4 VPS using Plesk 9.3.0 for web hosting. I have limited SSH access to the server to key-only. When setting up the web hosting with Plesk it created an FTP login and user - is that a potential security risk that could bypass the key-only access? I think Plesk itself (even without the FTP user account) could be a risk through its web interface - is that correct, or are my concerns exaggerated? Would you say this setup makes a difference if I'm just using it for the next two weeks and then change servers to a system where I know more about security? In other words, is one less likely to get hacked within the first two weeks of having a new site up and running than in weeks 14 and 15 (due to appearing in fewer search results in the beginning perhaps, or for whatever reason...)?

    Read the article

  • How to create Haar Cascade (xml) for using with OpenCV?

    - by inTagger
    If you're familiar with the OpenCV library, you know what Haar cascade image object detection is - I mean object detection like a human face or something else. I have a Haar cascade XML for face detection, but I don't know how to create my own. I want to create a Haar cascade XML to detect simple bright circular light sources (e.g. the flashing infrared light from a TV remote control). So, how do I create a Haar cascade (XML) for use with OpenCV?
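
    For reference, the standard OpenCV workflow is to pack the positive samples into a .vec file and then train a cascade on it; a minimal sketch with made-up file names, sample counts and window sizes (opencv_traincascade writes the resulting cascade.xml into the -data directory; -featureType HAAR selects Haar features rather than LBP):

        # positives.txt lists positive images with object coordinates; negatives.txt lists background images
        opencv_createsamples -info positives.txt -vec samples.vec -num 1000 -w 24 -h 24

        opencv_traincascade -data cascade_dir -vec samples.vec -bg negatives.txt \
            -numPos 900 -numNeg 2000 -numStages 15 -featureType HAAR -w 24 -h 24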

    Read the article

  • Core Data NSPredicate for relationships.

    - by Mugunth Kumar
    My object graph is simple. I've a feedentry object that stores info about RSS feeds and a relationship called Tag that links to "TagValues" object. Both the relation (to and inverse) are to-many. i.e, a feed can have multiple tags and a tag can be associated to multiple feeds. I referred to http://stackoverflow.com/questions/844162/how-to-do-core-data-queries-through-a-relationship and created a NSFetchRequest. But when fetch data, I get an exception stating, NSInvalidArgumentException unimplemented SQL generation for predicate What should I do? I'm a newbie to core data :( I know I've done something terribly wrong... Please help... Thanks -- NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init]; // Edit the entity name as appropriate. NSEntityDescription *entity = [NSEntityDescription entityForName:@"FeedEntry" inManagedObjectContext:managedObjectContext]; [fetchRequest setEntity:entity]; // Edit the sort key as appropriate. NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"authorname" ascending:NO]; NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor, nil]; [fetchRequest setSortDescriptors:sortDescriptors]; NSEntityDescription *tagEntity = [NSEntityDescription entityForName:@"TagValues" inManagedObjectContext:self.managedObjectContext]; NSPredicate *tagPredicate = [NSPredicate predicateWithFormat:@"tagName LIKE[c] 'nyt'"]; NSFetchRequest *tagRequest = [[NSFetchRequest alloc] init]; [tagRequest setEntity:tagEntity]; [tagRequest setPredicate:tagPredicate]; NSError *error = nil; NSArray* predicates = [self.managedObjectContext executeFetchRequest:tagRequest error:&error]; TagValues *tv = (TagValues*) [predicates objectAtIndex:0]; NSLog(tv.tagName); // it is nyt here... NSPredicate *predicate = [NSPredicate predicateWithFormat:@"tag IN %@", predicates]; [fetchRequest setPredicate:predicate]; // Edit the section name key path and cache name if appropriate. // nil for section name key path means "no sections". NSFetchedResultsController *aFetchedResultsController = [[NSFetchedResultsController alloc] initWithFetchRequest:fetchRequest managedObjectContext:managedObjectContext sectionNameKeyPath:nil cacheName:@"Root"]; aFetchedResultsController.delegate = self; self.fetchedResultsController = aFetchedResultsController; --

    Read the article

  • What is wrong with my PHP/MySQL code?

    - by James
    I am building a navigation menu which lists all my categories and subcategories. The problem is it returns only one of these and not all. I have the categories echoed inside the while loop so I'm not sure why it's only showing one result and not all: <?php $query = mysql_query("SELECT * FROM categories_parent"); while ($row = mysql_fetch_assoc($query)) { $id = $row['id']; $name = $row['name']; $slug = $row['slug']; $childStatus = $row['child_status']; // if has child categories if ($childStatus == "1") { echo "<li class='dir'><a href='category.php?slug=$slug'>$name</a>"; echo "<ul id='dropdown'>"; $query = mysql_query("SELECT * FROM categories_child WHERE parent=$id"); while ($row = mysql_fetch_assoc($query)) { $id = $row['id']; $name = $row['name']; $slug = $row['slug']; echo "<li><a href='category.php?slug=$slug'>$name</a></li>"; } echo "</ul>"; echo "</li>"; } // if singular parent else { echo "<li><a href='category.php?slug=$slug'>$name</a></li>"; } } ?> My database tables: -- -- Table structure for table `categories_child` -- CREATE TABLE IF NOT EXISTS `categories_child` ( `id` int(11) NOT NULL AUTO_INCREMENT, `name` varchar(1000) NOT NULL, `slug` varchar(1000) NOT NULL, `parent` int(11) NOT NULL, PRIMARY KEY (`id`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=139 ; -- -- Dumping data for table `categories_child` -- INSERT INTO `categories_child` (`id`, `name`, `slug`, `parent`) VALUES (138, 'Britney Spears', 'category/celiberties/britney-spears/', 137), (136, 'Tigers', 'category/animals/tigers/', 136), (137, 'Horses', 'category/animals/horses/', 136), (135, 'Lions', 'category/animals/lions/', 136); -- -- Table structure for table `categories_parent` -- CREATE TABLE IF NOT EXISTS `categories_parent` ( `id` int(11) NOT NULL AUTO_INCREMENT, `name` varchar(1000) NOT NULL, `slug` varchar(1000) NOT NULL, `child_status` int(1) NOT NULL DEFAULT '0', PRIMARY KEY (`id`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=139 ; -- -- Dumping data for table `categories_parent` -- INSERT INTO `categories_parent` (`id`, `name`, `slug`, `child_status`) VALUES (137, 'Celiberties', 'category/celiberties/', 1), (138, 'TV Shows', 'category/tv-shows/', 0), (136, 'Animals', 'category/animals/', 1);

    Read the article

  • Android popup style activity which sits on top of any other apps

    - by RenegadeAndy
    What I want to create is a popup style application. I have a service in the background - something arrives on the queue and i want an activity to start to inform the user - very very similar to the functionality of SMSPopup app. So I have the code where something arrives on the queue and it calls my activity. However for some reason the activity always shows on top of the originally started activity instead of just appearing on the main desktop of the android device. As an example: I have the main activity which is shown when the application is run I have the service which checks queue I have a popup activity. When i start the main activity it starts the service - I can now close this. I then have something on the queue and it creates the popup activity which launches the main activity with the popup on top of it :S How do I stop this and have it behave as i want... The popup class is : package com.andy.tabletsms.work; import com.andy.tabletsms.tablet.R; import android.app.Activity; import android.content.Context; import android.content.Intent; import android.os.Bundle; import android.view.Gravity; import android.view.LayoutInflater; import android.view.View; import android.view.View.OnClickListener; import android.view.ViewGroup; import android.widget.Button; import android.widget.PopupWindow; import android.widget.TextView; import android.widget.Toast; public class SMSPopup extends Activity implements OnClickListener{ public static String msg; @Override public void onCreate(Bundle bundle){ super.onCreate(bundle); // Toast.makeText(getApplicationContext(), msg, Toast.LENGTH_LONG).show(); this.setContentView(R.layout.popup); TextView tv = (TextView)findViewById(R.id.txtLbl); Intent intent = getIntent(); if (intent != null){ Bundle bb = intent.getExtras(); if (bb != null){ msg = bb.getString("com.andy.tabletsms.message"); } } if(msg == null){ msg = "LOLOLOL"; } tv.setText(msg); Button b = (Button)findViewById(R.id.closeBtn); b.setOnClickListener(this); } @Override public void onClick(View v) { this.finish(); } } and I call the activity from a broadcast receiver which checks the queue every 30 seconds or so : if(main.msgs.size()0){ Intent testActivityIntent = new Intent(context.getApplicationContext(), com.andy.tabletsms.work.SMSPopup.class); testActivityIntent.putExtra("com.andy.tabletsms.message", main.msgs.get(0)); testActivityIntent.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK); context.startActivity(testActivityIntent); } The layout is here : http://pastebin.com/F25u6wdM

    Read the article
