Search Results

Search found 5221 results on 209 pages for 'low latency'.

  • NFS high CPU usage

    - by user269836
    Hello, I have a very strange issue with the following server:

        cat /proc/cpuinfo | grep proce | wc -l
        8

        free -m
                     total       used       free     shared    buffers     cached
        Mem:         28203      27606        596          0      10789       9714
        -/+ buffers/cache:       7103      21100
        Swap:        24695          0      24695

    RAID card:

        *-storage
             description: RAID bus controller
             product: MegaRAID
             vendor: LSI Logic / Symbios Logic
             physical id: 7
             bus info: pci@0000:13:07.0
             logical name: scsi2
             version: 01
             width: 32 bits
             clock: 66MHz
             capabilities: storage pm bus_master cap_list rom
             configuration: driver=megaraid latency=32
             resources: irq:134 memory:d8ff0000-d8ffffff(prefetchable) memory:df600000-df60ffff(prefetchable)

    HDD: 10 x 148 GB SCSI U320 15k in RAID5:

        /dev/sdb1  807G  674G  93G  88%  /storage
        /dev/sdb1 /storage ext4 defaults,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,noatime,nodiratime,noacl,errors=remount-ro 0 1

    Network cards:

        ethtool -i eth0
        driver: tg3
        version: 3.116
        firmware-version: 5704-v3.36, ASFIPMIc v2.36
        bus-info: 0000:10:02.0

        ethtool -i eth1
        driver: tg3
        version: 3.116
        firmware-version: 5704-v3.36, ASFIPMIc v2.36
        bus-info: 0000:10:02.0

        ifconfig bond0
        bond0   Link encap:Ethernet  HWaddr 00:0f:1f:ff:d6:4d
                inet addr:192.168.15.71  Bcast:192.168.15.255  Mask:255.255.255.0
                inet6 addr: fe80::20f:1fff:feff:d64d/64 Scope:Link
                UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
                RX packets:1062818202 errors:0 dropped:3918 overruns:0 frame:0
                TX packets:1041317321 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:10000
                RX bytes:258867684559 (241.0 GiB)  TX bytes:396569192650 (369.3 GiB)

    This server runs only nfs-kernel-server, on Debian 6:

        uname -a
        Linux nas2-backup 2.6.32-5-amd64 #1 SMP Sun Sep 23 10:07:46 UTC 2012 x86_64 GNU/Linux

    Here is what happens: once every day or two, the load average climbs, sometimes reaching around 40. If I restart nfs-kernel-server, everything is OK again, but the next day (or a little later) the load goes up once more. The servers are connected to a D-Link DGS-1016D switch with gigabit ports. I have tried everything I can think of to find out why this is happening, but I still cannot resolve the issue. Any ideas what is going on here?
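    A minimal first-pass capture for the next spike, using standard Linux tools (a sketch; nothing here is specific to this box):

        # which NFS operations are hot: snapshot, wait a minute, snapshot again and diff
        nfsstat -s
        # per-disk utilisation and wait times while the load average climbs
        iostat -xk 5
        # nfsd threads stuck in uninterruptible I/O sleep ("D" state) point at the disk/array
        ps -eo state,pid,comm | grep '^D'

    If the nfsd threads pile up in D state while iostat shows the RAID5 array saturated, the load average is disk wait rather than CPU, which would fit the symptom of a restart clearing it temporarily.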

  • Inconsistent black levels in windows 7 media center

    - by James G
    I've got an HTPC running Windows 7 64-bit, hooked up to a Samsung LCD TV. My problem is that different types of video display different black levels on the TV. When I play a Blu-ray through ArcSoft Total Media Theatre, I have to set the "HDMI Black Level" to "Normal" in the TV picture options menu. When I play recorded TV through WMC, I have to set it to "Low", otherwise the black colors in the video are washed out and grey. Is there any way to configure the system so all videos are displayed with the same black level? The HDMI black level setting is buried deep in Samsung's menus, so it's becoming a chore to switch it every time I watch a different type of video. I'm using an ATI 4670 graphics card with HDMI output going straight to the TV. In the ATI Catalyst Control Center I've set the pixel format to RGB 4:4:4 (Full RGB), since the TV won't allow me to change the HDMI black level if I choose one of the other settings.

  • Using 2-port LSI 2308-8e card to control 24 SAS HDDs

    - by GregC
    I would like to rely on a RAID-on-chip solution to control 24 SAS hard drives in a direct-attached environment. How would you approach this to get the best bandwidth, given that I'd like to spend less than $10,000 on the interconnect? I've read that the LSI 2308 chip can easily handle an 8-drive SSD RAID6 in hardware. I'd like to harness its power to control 24 SAS hard drives over an expander in an external enclosure. Currently I use an Infortrend S24S-G2240 external enclosure; it provides its own controller and expander. I'd like to somehow use the LSI 2308 controller for RAID6 instead of the mystery controller in the enclosure. P.S. I tried to create SAS-expander as a tag, but my rep on this site is low.

  • Super-silent (mid tower) case and fan combo

    - by Dennis G.
    I want to build an HTPC for music/video/Blu-ray playback (no gaming). I don't need an expensive HTPC case; I just want to go with a standard mid-tower case. However, I want it to be super silent, so that it doesn't make any annoying fan/disk noises when I watch movies. Ideally, it shouldn't make any noticeable noise at all. I understand that choosing a board, CPU and graphics card that run cool and don't consume a lot of power is important for designing a quiet machine, and I think I have that covered. However, there are so many choices of cases, fans and power supplies that it's hard to get started. What are your recommendations for a case/fan (CPU + case)/power supply combination that runs absolutely silently and can cool a standard Intel system with a low-power (possibly passively cooled) graphics card? I'm usually a fan of Antec cases; would an Antec Mini P180 be a good starting point? If so, which case fans, CPU fan and power supply would you recommend?

  • How to set expiration date for external files? [closed]

    - by garconcn
    I have a site that includes lots of external files, most of them in GIF format. I have no control over the external files, but I have to use them (with permission). When I check the site with Google PageSpeed, I get a very low score (31) even though the page loads fast. One of the high-priority suggestions is to leverage browser caching by setting an expiration date. However, all of those files are on external links. I have already set an expiration date for the local files.
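    For comparison, the local-file half of this is typically an Apache mod_expires block like the following (a sketch, assuming Apache; it cannot affect files served from someone else's domain, which is the crux of the question):

        # .htaccess - requires mod_expires; applies only to files this host serves
        <IfModule mod_expires.c>
            ExpiresActive On
            ExpiresByType image/gif "access plus 1 month"
            ExpiresByType image/png "access plus 1 month"
        </IfModule>

    The only way a directive like this can help with the external GIFs is to proxy or mirror them locally first, so they are served (and cached) from your own domain.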

  • Geographically distributed file system with preferred locality

    - by dpb
    Hi All -- I'm building an application that needs to distribute a standard file server across a few sites over a WAN. Basically, each site needs to write a lot of miscellaneous files of varying size (some in the 100s of MB range, but most small), and the application is written such that collisions aren't a problem. I'd like to have a system set up that meets the following qualifications:

        - Each site can store files in a shared "namespace". That is, all the files would show up in the same filesystem.
        - Each site would not send data over the WAN unless necessary. I.e., there would be local storage on each side of the WAN that would be "merged" into the same logical filesystem.
        - Linux & free ($$$) is a must.

    Basically, something like a central NFS share would meet most of the requirements; however, it would not allow the locally written data to stay local. All data from remote sides of the WAN would be copied locally all the time. I have looked into Lustre and have run some successful tests with it; however, it appears to distribute files fairly uniformly across the distributed storage. I have dug through the documentation and have not found anything that automatically "prefers" local storage over remote storage. Even something that went with the lowest-latency storage would be fine; it would work most of the time, which would meet this application's requirements. Any ideas?
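    For what it's worth, one stack in this space that the poster doesn't mention is GlusterFS: bricks at each site merge into a single namespace, and its NUFA ("non-uniform file access") translator exists specifically to prefer the local brick for new files, which is worth evaluating against the "keep local data local" requirement. A sketch of the basic two-site volume, with hostnames and paths made up:

        # one brick per site, merged into one distributed volume (a sketch)
        gluster volume create shared transport tcp siteA:/export/brick siteB:/export/brick
        gluster volume start shared
        # each site mounts the same namespace
        mount -t glusterfs localhost:/shared /mnt/shared

    Whether NUFA's locality behaviour holds up under this workload would need testing, the same way Lustre was tested above.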

  • Amazon CloudFront and EC2: Global Load Balancing

    - by Matt Rogish
    We have an app that is going to store and serve up a decent amount of data in S3 to a global audience where latency should be minimized. So we've been doing tests with Amazon CloudFront and have seen favorable results. However, we need a thin middleware layer (to do security etc.) and we'd like to put that in EC2. Due to security restrictions, this middleware layer will do the file streaming from S3/CloudFront: S3/CloudFront -> EC2 -> Clients. We can geographically distribute the EC2 nodes (US East/West, and Ireland), but the problem is that a client in the EU would hit our US server and be fed data from there, rendering much of the performance benefit of CloudFront moot. I've been digging through the EC2 docs, but I can't find a built-in way to get a geographically distributed version of EC2 a la CloudFront. Elastic Load Balancing sounds like the way to go, but I can't seem to find a way to make it direct clients based on their location... Preferably, we'd like to keep the amount of stuff outside of EC2/S3/etc. to a minimum (for obvious reasons). Any ideas how to do that within the EC2/S3 framework? DNS/routing tricks? Thanks!
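    On the DNS-tricks question, for later readers: Route 53's latency-based routing (an AWS feature added after this was asked) lets one hostname resolve to the nearest region's middleware node. A sketch, with the zone ID, hostname and address made up; one such record is created per region:

        # latency record for the us-east middleware node (a sketch)
        aws route53 change-resource-record-sets --hosted-zone-id Z1EXAMPLE \
          --change-batch '{"Changes":[{"Action":"CREATE","ResourceRecordSet":{
            "Name":"mw.example.com.","Type":"A","SetIdentifier":"us-east",
            "Region":"us-east-1","TTL":60,
            "ResourceRecords":[{"Value":"203.0.113.10"}]}}]}'

    Repeat with SetIdentifier/Region/address for us-west-1 and eu-west-1, and clients are answered with the lowest-latency node.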

  • Proper 16:9 video size for non-HD 4:3 video (for youtube/vimeo)

    - by Xeoncross
    Since High Definition video came to all the online sites, the default aspect ratio of the player has changed from 4:3 to 16:9. This means that people posting SD video have to resize some of their videos to get them to fit right. For example, NTSC DVD quality (aka 480i/p) is 720x480 pixels (width x height), while low-end High Definition (720i/p) is 1280x720. Anyway, now that the video players are built for HD, you will find that uploading standard-quality videos results in videos that are "letterboxed", i.e. they have extra black bars on the top and bottom (or sides). Correct me if I'm wrong, but in order to get a 720x480 video to fit a box that is designed for HD, the best practice would be to crop some of it off so that it fits as 720x405, since:

        16/9 = 1.78 (1.7777777777778)
        720/405 = 1.78
        405 x 1.78 = 720.9

    The same would stand for 640x480 (old TV quality) video, which would need to be 640x360, correct? I'm asking because I'm not sure about all this and whether this is the proper way to fix these letterboxing/display problems.
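    The arithmetic checks out, and is easier to see as integer math (a quick shell check):

        # height that makes a given width exactly 16:9 (width * 9 / 16)
        echo $((720*9/16))     # 405
        echo $((640*9/16))     # 360
        echo $((1280*9/16))    # 720

    So a 720-wide frame wants a height of 405 and a 640-wide frame wants 360; 1280x720 is already exactly 16:9, which is why HD uploads need no cropping.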

  • Worth it to move /var to physical disk vs logical?

    - by Tammer Ibrahim
    Brief question about partition layout. I use an SSD for the /, /boot, /usr, and /home partitions. I'd like to move /var to a mechanical disk to minimize writes to the SSD. I'm mainly concerned about maximizing drive life rather than maximizing performance (although I obviously wouldn't want to cripple my server). My mechanical disks consist of two drives sharing LVM, and a third used for nightly rsync backups. I also have a bunch of old 2.5in hard disks lying around. My question is: should I simply create a new LVM volume for /var on my primary data store, or would it be worth the increased energy consumption (in terms of maximizing the lifetime of the LVMed drives) to install a low-volume 2.5in disk to use just for /var? On a more general level, my question is about the trade-offs of placing OS mounts on the same physical volumes as my data. Thanks for any help!
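    For the first option, the LVM side is only a few commands (a sketch; the volume group name and the 10G size are assumptions):

        # carve a logical volume for /var out of the existing VG
        lvcreate -L 10G -n var vg_data
        mkfs.ext4 /dev/vg_data/var
        mount /dev/vg_data/var /mnt/newvar
        # copy the current /var across (from single-user mode or a rescue boot)
        cp -ax /var/. /mnt/newvar/
        # then add an entry to /etc/fstab, e.g.:
        # /dev/vg_data/var  /var  ext4  defaults  0  2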

  • Performance required to improve Windows Experience Index?

    - by Ian Boyd
    Is there a guide on the metrics required to obtain a certain Windows Experience Index? A Microsoft guy said in January 2009:

        On the matter of transparency, it is indeed our plan to disclose in great detail how the scores are calculated, what the tests attempt to measure, why, and how they map to realistic scenarios and usage patterns.

    Has that amount of transparency happened? Is there a TechNet article somewhere? Say my score is limited by my Memory subscore of 5.9. A naive person would suggest "buy faster RAM", which is wrong of course. From the Windows help:

        If your computer has a 64-bit central processing unit (CPU) and 4 gigabytes (GB) or less random access memory (RAM), then the Memory (RAM) subscore for your computer will have a maximum of 5.9.

    You can buy the fastest, overclocked, liquid-cooled, DDR5 RAM on the planet; you'll still have a maximum Memory subscore of 5.9. So in general the knee-jerk advice "buy better stuff" is not helpful. What I am looking for is the attributes required to achieve a certain score, or to move beyond a current limitation. The information I've been able to compile so far, chiefly from 3 Windows blog entries and an article:

        Memory subscore

        Score    Conditions
        =======  ================================
        1.0      < 256 MB
        2.0      < 500 MB
        2.9      <= 512 MB
        3.5      < 704 MB
        3.9      < 944 MB
        4.5      <= 1.5 GB
        5.9      < 4.0GB-64MB on a 64-bit OS; Windows Vista highest score
        7.9      Windows 7 highest score

        Graphics subscore

        Score    Conditions
        =======  ======================
        1.0      doesn't support DX9
        1.9      doesn't support WDDM
        4.9      does not support Pixel Shader 3.0
        5.9      doesn't support DX10 or WDDM 1.1; Windows Vista highest score
        7.9      Windows 7 highest score

        Gaming graphics subscore

        Score    Result
        =======  =============================
        1.0      doesn't support D3D
        2.0      supports D3D9, DX9 and WDDM
        5.9      doesn't support DX10 or WDDM 1.1; Windows Vista highest score
        6.0-6.9  good framerates (e.g. 40-50 fps) at normal resolutions (e.g. 1280x1024)
        7.0-7.9  even higher framerates at even higher resolutions
        7.9      Windows 7 highest score

        Processor subscore

        Score    Conditions
        =======  ==========================================================================
        5.9      Windows Vista highest score
        6.0-6.9  many quad-core processors will be able to score in the high 6 - low 7 ranges
        7.0+     many quad-core processors will be able to score in the high 6 - low 7 ranges
        7.9      8-core systems will be able to approach this; Windows 7 highest score

        Primary hard disk subscore (note)

        Score    Conditions
        =======  ========================================
        1.9      Limit for pathological drives that stop responding when pending writes
        2.0      Limit for pathological drives that stop responding when pending writes
        2.9      Limit for pathological drives that stop responding when pending writes
        3.0      Limit for pathological drives that stop responding when pending writes
        5.9      highest you're likely to see without an SSD; Windows Vista highest score
        7.9      Windows 7 highest score

    Bonus chatter: you can find your WEI detailed test results in C:\Windows\Performance\WinSAT\DataStore, e.g. 2011-11-06 01.00.19.482 Disk.Assessment (Recent).WinSAT.xml:

        <WinSAT>
          <WinSPR>
            <DiskScore>5.9</DiskScore>
          </WinSPR>
          <Metrics>
            <DiskMetrics>
              <AvgThroughput units="MB/s" score="6.4" ioSize="65536" kind="Sequential Read">89.95188</AvgThroughput>
              <AvgThroughput units="MB/s" score="4.0" ioSize="16384" kind="Random Read">1.58000</AvgThroughput>
              <Responsiveness Reason="UnableToAssess" Kind="Cap">TRUE</Responsiveness>
            </DiskMetrics>
          </Metrics>
        </WinSAT>

    Pre-emptive snarky comment: "WEI is useless, it has no relation to reality." Fine; how do I increase my hard drive's random I/O throughput?

    Update - Amount of memory limits rating: Some people don't believe Microsoft's statement that having less than 4 GB of RAM on a 64-bit edition of Windows limits the rating to 5.9. From xxx.Formal.Assessment (Recent).WinSAT.xml:

        <WinSPR>
          <LimitsApplied>
            <MemoryScore>
              <LimitApplied Friendly="Physical memory available to the OS is less than 4.0GB-64MB on a 64-bit OS : limit mem score to 5.9" Relation="LT">4227858432</LimitApplied>
            </MemoryScore>
          </LimitsApplied>
        </WinSPR>

    References:

        - Windows Vista Team Blog: Windows Experience Index: An In-Depth Look
        - Understand and improve your computer's performance in Windows Vista
        - Engineering Windows 7 Blog: Engineering the Windows 7 "Windows Experience Index"
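    As a practical follow-up to the disk question above: the same WinSAT numbers can be regenerated on demand, so any change (defrag, AHCI driver, swapping in an SSD) can be measured directly rather than guessed at. A sketch, from an elevated command prompt; the drive letter is an assumption:

        rem re-run the full assessment and refresh the scores
        winsat formal
        rem re-measure random reads on drive C: only
        winsat disk -ran -read -drive c

    Comparing the Random Read MB/s before and after a change shows whether the disk subscore should move.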

  • How to diagnose very slow pagefile

    - by svick
    Quite often, one of the applications I use freezes ("does not respond") for a while, in extreme cases for a few minutes. This happens especially when switching apps. During this time, the HDD light flashes constantly, and perfmon shows that the HDD is used 100% of the time (OTOH, the CPU isn't) and that the pagefile is being read (which is to be expected when switching apps), but at a very slow rate. When I sort the disk table in perfmon by reads or writes, the file read and written the most is the pagefile, but still at quite a low rate (I don't remember the numbers). How can I diagnose what's causing this? I use Windows Vista, and the computer is a quite ordinary two-year-old laptop.
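    One hedged starting point for the diagnosis (these are the standard counter names; whether they pinpoint the cause depends on what is actually thrashing):

        rem hard faults served from disk, plus how long each disk read takes
        typeperf "\Memory\Pages Input/sec" "\PhysicalDisk(_Total)\Avg. Disk sec/Read" -si 5

    High Pages Input/sec with a large Avg. Disk sec/Read points at memory pressure on a slow disk rather than a sick pagefile as such.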

  • Pinning Google Chrome's "Application Shortcuts" to the Windows 7 Taskbar

    - by Humphrey
    I love the idea behind Google Chrome's Application Shortcuts, but they don't integrate well into the Windows 7 taskbar for me. Ideally, I'd like to have my most-used webpages (Gmail, Calendar, etc.) as separate windows pinned to the Windows 7 taskbar. I've created some application shortcuts on my desktop, but I've come across the following problems:

        1. If I open my Gmail application shortcut and then later open a normal Chrome window, the new window will also use the Gmail icon in the taskbar, even if that window has nothing to do with Gmail. (What's weirder is that this new window then uses a high-res Gmail icon, but my actual application shortcut window uses a low-res icon.)
        2. If I pin the application shortcut to the taskbar, the icon turns into the regular Chrome icon.

    Any ideas how to fix these issues? Or are they bugs in Google Chrome? Chrome version: 4.0.249.89

  • Server 2008 Hard Faults

    - by claw
    Hey all, please bear with me as I haven't looked at a server in a very long time. The problem I am having is with a Windows 2008 Standard FE Service Pack 2 box:

        Intel Xeon X3430 @ 2.40 GHz
        4 GB memory
        64-bit

    There seem to be no problems other than physical memory peaking at 91%, always with over 100 hard faults per second. To my understanding, hard faults should be fairly rare on a machine with this much memory. Are there any logs I can show you, or investigate myself? The general performance of the machine is OK; I can access SBS 2008 and change settings fairly smoothly without hangs etc. However, we connect to the server and do quite a bit of SQL via an application, and retrieving, say, 20 rows can take 20+ seconds. Thanks in advance, Jamie. EDIT: What the server is used for:

        - IIS ASP web service
        - SQL 2008
        - Exchange

    (Unable to upload screenshots due to low reputation; why doesn't my SO rep work here? :)

  • Recommendations for colocation in the US

    - by Emil
    Hello Server Fault. I work for a European media company and we are currently looking for colocation in the US. I know the European market quite well; unfortunately, that is not the case for the US. I'm hoping you guys can help me out a bit with a few questions; it would be much appreciated! I am looking for a data center that can deliver a high level of availability (Tier 3 or better). The installation will be fairly large, so capacity is important, as is good internet connectivity/carrier presence. Most important of all, however, is good customer support: skilled, dedicated and responsive technical staff, since we won't have tech staff close by. I'm looking for a small and fast-moving company that targets internet businesses rather than big old enterprise hosting.

        - What locations should we go for, given that we want to reach all of the US from a single site and still maintain decent latency? (Do we need east and west coast?)
        - Where are the main internet hubs, and should you try to get as close to them as possible?
        - Are there any good online resources I should look at?
        - Where do the large-scale internet/media services colocate?

    Lastly, I would be very happy to get some actual recommendations for companies to talk to. P.S. I'm happy to return the favor if anyone has questions regarding data centers and colocation in Europe.

  • ASA Slow IPSec Performance with Inconsistent Window Size

    - by Brent
    I have an IPSec link between two sites over ASA 5520s running 8.4(3), and I am getting extremely poor performance when traffic passes over the IPSec VPN. CPU on the devices is ~13%, memory at 408 MB, and there are 2 active VPN sessions; the load on both devices is particularly low. Latency between the two sites is ~40 ms. A Wireshark capture of a file transfer between the two hosts over the IPSec VPN, performing at 10 Mbps (note the changing window size): http://imgur.com/wGTB8Cr. A capture of a transfer between the two hosts not going over IPSec, performing at 55 Mbps (constant window size): http://imgur.com/EU23W1e. I'm seeing an inconsistent window size when transferring over the IPSec VPN, ranging from 46,796 to 65,535; when performing at 55+ Mbps, the window size is consistently 65,535. Does this point to a problem in my IPSec configuration on the ASA, or to a layer 1/2 issue? Using ping xxxxxx -f -l, I finally get a non-fragmented reply at 1418 bytes, so 1418 + 28 bytes of IP/ICMP headers = 1446. I know that I have 1500 set on the ASA and Ethernet. I do have "Force Maximum segment size for TCP proxy connection to be" set to "1380" bytes under Configuration > Advanced > TCP Options on the ASA. Using iperf, I am getting a "TCP Window Full" every few seconds and ~3 Mbps performance: http://imgur.com/elRlMpY. show run on the ASA: http://pastebin.com/uKM4Jh76. show crypto accelerator statistics: http://pastebin.com/xQahnqK3
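    For reference, the ASDM option quoted above corresponds to this ASA CLI command (a sketch; 1380 is the default the poster already has, and the lower value is only an experiment to leave extra headroom for the IPSec overhead and rule out fragmentation):

        ! clamp the TCP MSS on connections proxied through the ASA
        sysopt connection tcpmss 1380
        ! experiment: drop it further and retest the transfer
        sysopt connection tcpmss 1360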

  • Should my servers boot from VHD?

    - by tony roth
    I've been testing native VHD boot on several servers. It seems pretty transparent in terms of deployment, and in my seat-of-the-pants testing I have not noticed any difference in performance. The main reason I want to boot from VHD is the transportability between different hardware and to Hyper-V servers. The following roles will be installed:

        - DFSR
        - DHCP
        - IIS
        - application server
        - DC (haven't tested this yet, but I see no reason why it won't work)

    With the above low-impact (in terms of performance) roles, do you think booting from VHD is appropriate? thx
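    For context, a native-VHD boot entry looks roughly like this (a sketch; the VHD path is made up, and {guid} stands for the GUID printed by the /copy step):

        rem clone the current boot entry, then point it at the VHD
        bcdedit /copy {current} /d "Windows from VHD"
        bcdedit /set {guid} device vhd=[C:]\vhds\server.vhd
        bcdedit /set {guid} osdevice vhd=[C:]\vhds\server.vhd
        rem let Windows redetect the HAL, which is what makes the image portable
        bcdedit /set {guid} detecthal on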

  • Computer becomes very slow (permanently) after running a bunch of apps

    - by djzmo
    Hello there. My computer, with Windows XP installed, becomes very slow after I run some heavy tasks at the same time (e.g. playing a full-3D online game while extracting a 4 GB RAR archive). It freezes for about 200-500 ms every few seconds, and this always happens after I run heavy tasks at once on this computer (for example, installing a program while playing a game). The lag remains permanently (even a reboot won't make it better) unless I repair-install Windows. I have a low-end computer:

        Intel(R) Pentium(R) 4 CPU 2.00 GHz, 512 MB of RAM
        ATI RADEON 9550 AGP 256 MB

    So far, the only fix I've found is to repair-install Windows XP, which at least means I don't lose any data or installed programs. But I believe there's a better and faster way to fix this without a repair install. Any ideas?

  • What to look for in a good cheap color laser printer?

    - by torbengb
    My old color inkjet is giving up, and I'm considering a laser to replace it. There are several good questions about color laser printers, but none of them summarize the pros and cons. So here goes: I am looking to buy a color printer for home use, mostly for photos (at least medium quality) and also for low-volume b/w text. Duplex would be neat but is not a must. One aspect per answer, please: what aspects should I consider, what should I look for, and what should I avoid in a home color laser printer? I'll make this a community wiki because there won't be one single definite answer. I'll post a few ideas of my own, but I'm hoping to get many useful insights.

  • Is there a way to allow administrators to change or reset user passwords?

    - by Jon Seigel
    We have a custom MembershipProvider implementation using forms-based authentication (FBA) under SharePoint 2007. I've searched high and low on Google, but only found:

        - Active Directory and FBA implementations that allow users to change their own passwords
        - Active Directory instructions (including video!) for administrators to change other users' passwords

    Have we missed an option to enable the latter under FBA? Should this work by default, and is our MembershipProvider misbehaving? The procedure as it works under Active Directory would be ideal, but the "Change Password" link does not appear on the Edit User screen. We verified that the logged-in user is a site collection administrator.

  • Internet connection slower than network connection speed

    - by Mike Pateras
    I've got a computer connected to a wireless router on a different floor. When I look at the network connection, I'm told the signal strength is low and that I've got a connection of about 26 Mbps (often higher). However, the internet connection on that machine is very slow: speed tests show about 1-2 Mbps, and it really shows when loading pages and video. I have fiber-optic internet access, and the machine connected to the router/modem via cable gets the full 20 Mbps on speed tests and is extremely fast in everyday use. My question is: is the advertised 26 Mbps+ connection speed perhaps inaccurate, making the wireless link the likely bottleneck here? Or is the signal strength what's key? And what might I do about this? Power-cycling the router helped a bit; a speed test went as high as 6 Mbps after doing that.
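    One way to separate the two suspects is to measure raw LAN throughput over the wireless hop with iperf, taking the internet connection out of the picture (a sketch; the server address is an assumption):

        # on the wired machine (server side)
        iperf -s
        # on the wireless machine, pointed at the wired machine's LAN address
        iperf -c 192.168.1.10 -t 30

    If iperf over the wireless hop also reports only a few Mbps, the radio link is the bottleneck rather than the fiber connection; the advertised 26 Mbps is the link rate, not the usable throughput.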

  • iPhone dev - viewDidUnload subviews

    - by Mk12
    I'm having a hard time understanding a couple of the methods in UIViewController, but first I'll say what I think they are for (ignoring Interface Builder, because I'm not using it):

        -init: initialize non-view-related stuff that won't need to be released in low-memory situations (i.e. not objects, or objects that can't easily be recreated).
        -loadView: create the view and set the [self view] property.
        -viewDidLoad: create all the other view elements.
        -viewDidUnload: release objects created in -viewDidLoad.
        -didReceiveMemoryWarning: low-memory situation; release unnecessary things such as cached data. If this view doesn't have a superview, then [super didReceiveMemoryWarning] will go on to release (unload) the view and call -viewDidUnload.
        -dealloc: release everything.
        -viewWillAppear:, -viewDidAppear:, -viewWillDisappear:, -viewDidDisappear:: self-explanatory; not necessary unless you want to respond (do something) to those events.

    I'm not sure about a couple of things. First, the Apple docs say that when -viewDidUnload is called, the view has already been released and set to nil. Will -loadView get called again to recreate the view later on? There are a few things I created in -viewDidLoad that I didn't make an ivar/property for, because there is no need and they will be retained by the view (they are subviews of it). So when the view is released, it will release those too, right? When the view is released, will it release all its subviews? Because all the objects I created in -viewDidLoad are subviews of [self view]. So if they already get released, why release them again in -viewDidUnload? I can understand data that is only needed while the view is visible being loaded and unloaded in these methods, but like I asked, why release the subviews if they already get released?

    EDIT: After reading other questions, I think I might have got it (my 2nd question). In the situation where I just use a local variable, alloc it, make it a subview and release it, it will have a retain count of 1 (from being added as a subview), so when the view is released it is too. Now for the view elements with ivars pointing to them: I wasn't using properties because no outside class would need to access them. But now I think that's wrong, because in this situation:

        // MyViewController.h
        @interface MyViewController : UIViewController {
            UILabel *myLabel;
        }

        // MyViewController.m
        - (void)viewDidLoad {
            myLabel = [[UILabel alloc] initWithFrame:CGRectMake(0, 0, 40, 10)];
            [myLabel setText:@"Foobar"];
            [[self view] addSubview:myLabel];
        }

        - (void)viewDidUnload {
            // equivalent of [self setMyLabel:nil]; without properties
            [myLabel release];
            myLabel = nil;
        }

    In that situation, the label will be sent the -release message after it was deallocated, because the ivar didn't retain it (it wasn't a property). But with a property the retain count would be two: the view retaining it and the property. So then in -viewDidUnload it will get deallocated. So it's best to just always use properties for these things, am I right? Or not?

    EDIT: I read somewhere that -viewDidLoad and -viewDidUnload are only for use with Interface Builder, and that if you are doing everything programmatically you shouldn't use them. Is that right? Why?
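    For the record, a minimal sketch of the property-based version discussed in the first edit (retain-era Objective-C, pre-ARC):

        // in MyViewController.h
        @property (nonatomic, retain) UILabel *myLabel;

        // in MyViewController.m
        @synthesize myLabel;

        - (void)viewDidUnload {
            [super viewDidUnload];
            self.myLabel = nil;   // the setter releases the old value and zeroes the ivar
        }

    Assigning nil through the setter both releases the property's own retain and clears the dangling pointer, which is the double duty the manual release/nil pair above is imitating.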

  • Have you ever used kon-boot?

    - by Ctrl Alt D-1337
    Has anyone here ever used kon-boot? I guess it may work, judging by the few blog posts about it, but I feel kind of concerned and am interested in hearing from anyone who has used it multiple times with no side effects. I am slightly worried about the direct memory altering it tries to do. I am also worried about whether it does its job cleanly, or whether it quietly installs a low-level trojan, or whether the author planned to do anything like that in a future release, as it looks like closed source from the site. Also, I don't intend to gain illegal access; I just find this sort of thing very useful for the box of live discs I take everywhere, just in case. OT: another question that may be of interest to readers here.

  • Drawbacks of installing linux on usb stick?

    - by Znarkus
    I am setting up a router/NAS/HTTP/whatever server based on an ION mini-ITX board. I've installed Ubuntu Server on an old 160 GB drive, but it generates a lot more heat and vibrates more than my other, newer drive (storage). It just doesn't fit the concept, and worse: it takes up a SATA port. As SSDs are crazy expensive, I'm thinking of buying an extra 4 GB USB stick and putting the pair in RAID 0. From my point of view, these are the pros/cons:

        Pros:
        - Low power consumption
        - No vibrations
        - No heat
        - Smaller
        - Get to buy a new, larger USB stick (:D)

        Cons:
        - Shorter lifetime
        - Slower
        - RAID 0
        - More work maintaining/installing?

    I think the pros outweigh the cons. The shorter lifetime and RAID 0 are countered by regular backups of the configs/settings, and "slower" is partially countered by RAID 0; I don't know about the last one. What do you think? Experiences? Another solution?
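    A sketch of the stripe itself, in case it helps (device names are assumptions; the sticks will enumerate differently on any given board):

        # stripe the two USB sticks into one block device
        mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
        mkfs.ext4 /dev/md0
        # noatime avoids a whole class of small writes that wear flash
        mount -o noatime /dev/md0 /mnt/target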

  • CentOS 5.5 remote kickstart installation stalls at "Starting install process." How to debug?

    - by ewwhite
    Hello, I'm having a difficult time with a remote CentOS 5.5 kickstart installation on an HP ProLiant DL360 G6, in an environment where I maintain an internal CentOS yum repository. The kickstart installation and post scripts have been tested and normally work, and this hardware is common in this environment, so I do not believe it is a factor. Unfortunately, I'm having problems with this specific server install. The system is remote to the yum repository, at a distance of 500 miles; they are connected over a private low-latency 100-megabit layer 2 connection (26 ms round trip). I'm mounting the 10 MB CentOS 5 netinstall ISO image via an HP iLO remote console. The initial boot parameters are:

        linux ks=http://yum.abctrading.com/prop.cfg ksdevice=eth0 ip=x.x.x.x dns=x.x.x.x netmask=255.255.255.0 gateway=x.x.x.x

    I'm using the url --url http://ks.abctrading.com/5.5/os/x86_64/ method of installation. This quickly boots into the Anaconda installer, pulls the kickstart config and formats the drives, then eventually halts at a screen reading "Starting install process." and cannot proceed with the rest of the installation; the other virtual consoles show nothing that explains the stall. Running the same kickstart config locally works just fine. I've tried mounting the boot ISO from the console as well as from the iLO 2 command line pointing to a locally-hosted boot ISO via HTTP. How can I debug this? Are there any options I've overlooked?
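    A couple of places to look the next time it stalls (a sketch; the paths are the standard CentOS 5 Anaconda log locations):

        # on the target, switch to the shell console (Alt-F2 in the iLO session)
        tail -f /tmp/anaconda.log    # main installer log
        tail -f /tmp/syslog          # kernel/driver messages
        # from any host near the target, confirm the repo actually answers there
        curl -I http://ks.abctrading.com/5.5/os/x86_64/repodata/repomd.xml

    If anaconda.log ends with repository fetches, the WAN path to the repo is the suspect; if it ends earlier, the stall predates the package stage.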

  • RAID10 without write-back cache = horrible write performance?

    - by Harry Mexican
    I have just provisioned a dedicated server at SingleHop, and I'm running it through some tests to know what to expect performance-wise. On the I/O side, with 4 x 1 TB disks in RAID 10, I get:

        write cache disabled:
        200 MB/s read throughput
        30 MB/s write throughput

    I thought that was really low compared to my desktop HD, which gets 150/150 or so. So I had a chat with them and they suggested enabling the write cache. New results:

        write cache enabled:
        280 MB/s read
        260 MB/s write

    That is great and all, but it means I'd have to add a BBU for an additional monthly cost. Is it normal for write throughput to be a quarter that of a regular drive on RAID 10 when you don't have the write cache? It almost feels like it's intentionally bad to force you to pony up for the BBU. I'd be happy with normal non-RAID performance of 150/150.
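    A quick way to reproduce numbers like these and confirm that the cache setting is what moves them (a sketch; the device and file paths are assumptions, and hdparm may not report through every hardware RAID controller):

        # is the drive/array write cache currently on?
        hdparm -W /dev/sda
        # sequential write that bypasses the page cache, so the cache setting shows up
        dd if=/dev/zero of=/root/ddtest bs=1M count=2048 oflag=direct

    With oflag=direct each write must be acknowledged by the controller, which is exactly where a disabled (or BBU-less) write cache hurts.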
