Search Results

Search found 1325 results on 53 pages for 'factor'.

  • Sorting IPv4 Addresses

    - by Kumba
    So I've run into a quandary on sorting IPv4 addresses, and I don't know if there is a set rule in some obscure networking document. Do I do a straight sort on the raw address only (such as converting the IP address to a 32-bit number and then sorting), do I factor in the CIDR via some mathematical formula, or do I sort by the CIDR only (as if I'm comparing network sizes rather than the addresses directly)? In normal math, we'd do something like -1 < 0 < 1 to denote the order of precedence. Given, say, 10.1.0.0/16, 172.16.0.0/12, 192.168.1.0/24, and 192.168.1.42, what would be the order of precedence?
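
    For what it's worth, here is a minimal sketch of the "convert to a 32-bit number" approach in Python, using the standard-library ipaddress module and the prefix length as a tie-breaker (the example addresses are the ones from the question):

    ```python
    # Minimal sketch: sort networks/hosts by numeric base address, using the
    # prefix length as a tie-breaker. Assumes Python 3.3+ (ipaddress is stdlib).
    import ipaddress

    entries = ["10.1.0.0/16", "172.16.0.0/12", "192.168.1.0/24", "192.168.1.42"]

    def sort_key(entry):
        # A bare address is treated as a /32 network so everything compares alike.
        net = ipaddress.ip_network(entry, strict=False)
        return (int(net.network_address), net.prefixlen)

    for entry in sorted(entries, key=sort_key):
        print(entry)
    # -> 10.1.0.0/16, 172.16.0.0/12, 192.168.1.0/24, 192.168.1.42
    ```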

  • Sizes of RAM, virtual memory, and swap for a 32-bit OS

    - by Tim
    If I understand correctly, a 32-bit OS (Ubuntu) can only address 4 GiB of memory, so if the RAM is larger than 4 GiB, only 4 GiB of it will be used and the rest is wasted. I am now confused about whether the same applies to virtual memory and to swap:

    - With virtual memory being swap + RAM, if the size of virtual memory exceeds 4 GiB, will the excess be wasted on a 32-bit OS?
    - If I now have to choose the size of my swap partition, is the 4 GiB addressing limit a factor to consider? Does the swap size have to be chosen with respect to that limitation? Will swap beyond 4 GiB always be wasted?
    - Is virtual memory equal to RAM plus swap, or can virtual memory use space on the hard drive outside the swap partition?

    Thanks and regards!
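
    As a rough illustration of the quantities involved, here is a Linux-only Python sketch that reads MemTotal and SwapTotal from /proc/meminfo and compares them to the 4 GiB span of a 32-bit address space. It only does the arithmetic; whether the kernel can use more physical RAM is a separate question (PAE and so on):

    ```python
    # Quick sketch (Linux-only): compare RAM and swap sizes from /proc/meminfo
    # against the 4 GiB that a 32-bit address space can span.

    def meminfo_kib(field):
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith(field + ":"):
                    return int(line.split()[1])  # value is reported in KiB
        raise KeyError(field)

    ram_gib = meminfo_kib("MemTotal") / (1024 ** 2)
    swap_gib = meminfo_kib("SwapTotal") / (1024 ** 2)
    limit_gib = 2 ** 32 / 1024 ** 3  # 4 GiB: the span of a 32-bit address space

    print(f"RAM: {ram_gib:.2f} GiB, swap: {swap_gib:.2f} GiB, "
          f"RAM + swap: {ram_gib + swap_gib:.2f} GiB, 32-bit span: {limit_gib:.0f} GiB")
    ```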

  • Apache on Windows random long wait times

    - by Jaxbot
    I have a development machine with Apache installed as a service on Windows. The installation is fresh out of the box, with no changes to the configuration aside from adding the PHP module. From day one I've had a problem: Apache freezes for about 11 seconds before replying on random requests. This appears to happen more frequently when the host hasn't been connected to in a while, but not always. I've eliminated MySQL, PHP, and the specific application; the long wait occurs even when loading a static file such as favicon.ico. Thus the only factor remaining is Apache, which consistently freezes for around 10-11 seconds before replying. The problem is not the DNS issue that many people point to: the DNS lookup is instant, and the problem occurs both on localhost and on 127.0.0.1. Thanks for the time.
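
    One way to narrow this down is to time the TCP connect and the first response byte separately, which rules a name-resolution stall in or out of the Apache-side delay. A hedged Python sketch follows; the hostnames, port, and favicon path are placeholders for the actual dev setup:

    ```python
    # Diagnostic sketch: time TCP connect and time-to-first-byte separately,
    # so a DNS/connect stall can be distinguished from a slow Apache reply.
    import socket, time

    def probe(host, port=80, path="/favicon.ico"):
        t0 = time.time()
        sock = socket.create_connection((host, port), timeout=30)
        t_connect = time.time() - t0
        sock.sendall(f"GET {path} HTTP/1.1\r\nHost: {host}\r\n"
                     f"Connection: close\r\n\r\n".encode())
        sock.recv(1)  # block until the first byte of the reply arrives
        t_first_byte = time.time() - t0
        sock.close()
        print(f"{host}: connect {t_connect:.3f}s, first byte {t_first_byte:.3f}s")

    for host in ("localhost", "127.0.0.1"):
        probe(host)
    ```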

  • Client-side certificates in client browsers with a Unix server for management

    - by user146253
    We are currently running Unix dedicated servers for everything (web cluster, database, FTP, batch, ...) except for Microsoft Active Directory Certificate Services. The sole purpose of this Windows box is to provide client-side certificates for our clients' browsers. All our clients are required to install a client-side certificate in order to access our website. Is there an alternative in the Unix space? The purpose is to make sure that only the approved hardware of an approved client can access our website. I'm open to any solution that provides this level of security. We are, however, talking about thousands of certified computers, so you can factor that into a proposed solution. Optionally, we would also like to be able to revoke access. With regards.
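
    On the Unix side, the usual alternative is to run your own CA, either with the openssl CLI or programmatically. Below is a sketch using the Python cryptography package; the CA key/certificate are assumed to be loaded elsewhere, and the name and lifetime are made up:

    ```python
    # Sketch of issuing a client certificate from your own CA with the Python
    # "cryptography" package (an OpenSSL-based CA works the same way).
    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID, ExtendedKeyUsageOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa

    def issue_client_cert(ca_cert, ca_key, client_name):
        key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        subject = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, client_name)])
        now = datetime.datetime.utcnow()
        cert = (
            x509.CertificateBuilder()
            .subject_name(subject)
            .issuer_name(ca_cert.subject)
            .public_key(key.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(now)
            .not_valid_after(now + datetime.timedelta(days=365))
            .add_extension(  # mark it as a TLS client certificate
                x509.ExtendedKeyUsage([ExtendedKeyUsageOID.CLIENT_AUTH]),
                critical=False)
            .sign(ca_key, hashes.SHA256())
        )
        return key, cert
    ```

    The key and certificate would then typically be bundled into a PKCS#12 file for the browser to import, and revocation handled by publishing a CRL signed by the same CA.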

  • My laptop was stolen. What do I need to do?

    - by chris
    My laptop was recently stolen. It was a corporate system running XP, which means it was part of a domain - I'm assuming that makes it impossible for someone to log into it, although I know there are ways to reset the local admin account. Is there any way to tell if someone boots it up? I was logged into Gmail, using two-factor authentication. I will change my password, but is there any chance of tracking any attempted accesses? Other than changing the passwords on all my web accounts, is there anything else I need to do?

  • Data transfer speed to USB storage connected to wifi router very slow

    - by RonakG
    Here is my setup: a Linksys Cisco E3200 wifi router, a MacBook Pro running OS X Lion 10.7.4, and a Seagate GoFlex 1 TB hard drive connected to the router via its USB port. When I try to transfer data from my MBP to the HDD, the transfer rate is very low - I'm getting around 3 MB/s write speed. This is very slow compared to the speed I get when the HDD is connected directly to the MBP. The HDD is NTFS-formatted, and the router shares it over Samba, so I connect to it using smb://. What is the limiting factor here affecting the data transfer rate?
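
    To put a number on it, a crude write-throughput test like the Python sketch below (the mount point is hypothetical) can be run against the SMB share and against a directly attached disk for comparison:

    ```python
    # Rough write-throughput check: writes 100 MiB in 1 MiB chunks and reports
    # MB/s, so the router/SMB figure can be compared with a local disk.
    import os, time

    def write_speed(path, total_mib=100, chunk_mib=1):
        chunk = os.urandom(chunk_mib * 1024 * 1024)
        t0 = time.time()
        with open(path, "wb") as f:
            for _ in range(total_mib // chunk_mib):
                f.write(chunk)
            f.flush()
            os.fsync(f.fileno())  # make sure data actually left the page cache
        elapsed = time.time() - t0
        print(f"{total_mib / elapsed:.1f} MB/s to {path}")

    write_speed("/Volumes/GoFlex/speedtest.bin")  # assumed SMB mount point
    ```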

  • Are all SATA cables compatible with SATA 3?

    - by Jim Fell
    I have an HP Compaq dc5700 Small Form Factor desktop computer, and I am looking to upgrade its hard drive. When I open up the box, it clearly has available SATA connectors on the motherboard, but no indication as to which SATA version (1, 2, or 3). The hard drive I am considering is SATA 3. My concern is: if the motherboard also supports SATA 3 and I use an old SATA cable (v1 or v2), might there be problems? This is a bare drive, so I don't expect a cable to come with it, and I have not been able to find the manual for this machine. Thanks.

  • Mac OS X Time Machine Restore - Failure?

    - by Rabe
    I've got a late-2009 MacBook Pro 13" and yesterday I replaced the original hard drive (the new drive is a Hitachi with the same form factor). I chose "Restore from Time Machine" in the options menu of the Snow Leopard install disc and waited several hours for it to complete. After a reboot, the Mac shows a white screen with the Apple logo and the tiny loading animation. Nothing happens. After another restart: the same. Now I can't boot from CD using the C key, and I have no idea how to fix the problem.

  • Ideal memory configuration for 4-bank DDR3 on AM3+ FX - 1 vs 2 vs 4 DIMMs?

    - by TardisGuy
    OK, so I've been looking around, trying to learn and understand the way that RAM works. One answer I got said "the addressing is best for 2 sticks, and when you use 4 it slows down." Another said something like: there's bank/channel interleaving that makes the memory read like one stick. I also read something about memory density being a factor. I dug further and found out that my board has a higher speed limit for 2 sticks than for 4, so now I'm trying to put an image in my head of how and why, and... pfft. Can anyone explain, or recommend a resource that would answer these questions?

  • Server OS: put it on a separate drive? Yes, no, or depends on the situation?

    - by captainentropy
    Hi, I would like opinions, or facts - both, preferably - on whether it's OK to install a server's OS on the RAID array or not. I would predict that installing on separate drives is best, but I'm interested in the performance. The server in question will have 8 cores (2.4 GHz each), 24 GB RAM, and ~16 TB of usable space of server-class drives in RAID 10. There is also a subsystem of roughly equivalent size for backup. I will be running CPU/memory-intensive applications on this server in addition to using it as file storage for my work (a research lab). If I install the OS (I haven't decided which one; probably Ubuntu, Fedora, or some other good Linux distro) on separate drives, will there be any performance problems if they aren't configured in RAID 10? If it is better to have the OS on separate drives, should I go for 150 GB VelociRaptors in RAID 1 or smallish SSDs in RAID 1? Money is unfortunately a factor, as I think I'm close to maxing my budget as it is. Thanks!

  • What is the fastest RAID in practice?

    - by Luke
    I'm going to be rebuilding my server, and I want much faster access to my data. I've used RAID 1 and 0 in the past, and decided upon RAID 10 (with a dedicated RAID card). Then someone told me to use RAID 5+0, and someone else told me to use RAID 6+0. Assuming the hardware RAID card supports each level, what is currently the fastest RAID available, given x number of hard drives? Reliability is now another factor, and I am willing to spend money on new drives if one (or several) fail. I simply want to know what the fastest RAID level is, along with some reliability for recovering from a failure.

  • Excel 2010: How to color the area between charts?

    - by Quasdunk
    Hello, I asked this question already on Stack Overflow, but it hasn't been answered yet; instead I was advised to try it here, so here I go :) There's a simple XY line chart in Excel (2010). It is surrounded by two other graphs, parallel to it but offset by the same factor in the positive and negative directions - something like this:

    ---------------- (positively offset parallel graph)
    ---------------- (main graph)
    ---------------- (negatively offset parallel graph)

    Now I want to color the space between the main graph and the offset ones, but I just can't seem to find a way! Is it maybe possible with VBA? Or is there maybe a solution for Excel 2007?

  • HTTP transfer speeds start fast then slows to a crawl

    - by AnITAdmin
    We just got a new dedicated 1-gigabit server running IIS. CPU usage is 15% or less, and 3 GB of the 4 GB of RAM is unused. We are pushing 110 Mbit/s, yet speeds are really slow. In fact, here's how it happens: we connect, the speed starts out really fast, and then it quickly declines to 40 kB/s or less. What's going on? It seems the server just won't go above 120 Mbit/s. The files are all very large - 50 MB to 500 MB - could this be a factor? Again, CPU, RAM, and UI responsiveness when accessing remotely all seem fine.

  • Suggestions for hosted file sharing services

    - by Jon
    Before I pose my question, I will give some insight into my scenario: I work for a small business (cost is an important factor), our bandwidth is limited and would not support an in-house FTP server, and we need to share files (mostly PDF, InDesign, and Illustrator documents) with our clients. As we expand, we are finding that our current locally hosted FTP solution is too slow and is becoming a detriment to our sales team. What we need is a remotely hosted solution for sharing files with our clients, specifically with the following features:

    - More than 100 GB of secure storage
    - The ability to give clients unique login credentials, granting access to a personalized directory or folder while limiting access to other files on the server
    - A relatively simple web-based UI for clients with limited computer knowledge

    We have considered a dedicated remote server and web-based services (box.net, yousendit.com, onehub.com, filesanywhere.com), but I am unsure about the direction we should be taking - have I left another solution out? What would you suggest? Thanks in advance.

  • Is 50% download speed on a wireless G network normal?

    - by Bartlomiej Skwira
    I have a wired connection of about 36 Mbit/s, but my wireless speed maxes out at about 18-19 Mbit/s. I have a WRT54G-TM (T-Mobile, 802.11g) router with DD-WRT firmware, upgraded to the latest build. I've made some settings changes:

    - channel: 13
    - wireless network mode: G-only
    - ACK timing: 0
    - fragmentation threshold and RTS threshold: 2304
    - basic rate: all

    The signal/noise ratio is -46/-94, with signal quality ~50-60%. Is this normal with G networks?

    Edit: The AP is located about 2 meters from the laptop, with no walls or metal objects in between, but it's next to a TV. I've done a channel scan (had problems locating it; go to "Status - Wireless - Site Survey" - lame naming) and everybody else is on channels 1 and 6. I switched to channel 11, but it didn't help. As for transmit power, I got the best results with the default 71 mW. The antenna might be a factor; I'm using the default two antennas.

  • Dynamically changing one-node Cassandra cluster to two nodes

    - by Jason Axelson
    So I have an application that will be dormant most of the time but will need to handle high bursts a few days out of the month. Since we are deploying on EC2, I would like to keep only one Cassandra server up most of the time, and then on burst days bring one more server up (with more RAM and CPU than the first) to help serve the load. What is the best way to do this? Should I take a different approach? Some notes about what I plan to do:

    - Bring the node up and repair it immediately
    - After the burst period is over, decommission the powerful node
    - Use the always-on server as the seed node

    My main question is how to get the nodes to share all the data, since I want a replication factor of 2 (so both nodes have all the data), but that won't work while there is only one server. Should I bring up 2 extra servers instead of just one?
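
    For the replication-factor part, a sketch using the DataStax Python driver and modern CQL syntax is below; the keyspace name, seed address, and SimpleStrategy choice are assumptions, and the new node still needs a repair afterwards so it streams the existing data:

    ```python
    # Sketch: raise the replication factor to 2 once the second node has
    # joined, then run "nodetool repair" on the new node so it pulls the data.
    from cassandra.cluster import Cluster

    cluster = Cluster(["10.0.0.1"])  # seed node address is a placeholder
    session = cluster.connect()
    session.execute("""
        ALTER KEYSPACE app
        WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 2}
    """)
    cluster.shutdown()
    # Afterwards, on the new node:  nodetool repair app
    ```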

  • Putting two physical hard drives in a single 2.5" bay?

    - by dgw
    My crazy brain came up with this ridiculous idea. "Why can't I have a single 2.5" drive device that actually contains two independent hard drives? I want RAID 1 data mirroring and the security of having redundant drives. This is a thing that must exist!" To be clear, I am not asking if I can somehow shoehorn two 2.5" drives into a single bay, replace an optical drive with an additional HDD, or anything like that. What I have in mind is a single 2.5"-form-factor device that houses two independent hard drives, with separate housings (likely) and distinct controllers. Probably all they'd need to share is the power & SATA connections. Probably no such thing exists, but because I know there exist hard drives the physical size of a stack of postage stamps (roughly), I have to ask.

  • Open table cache in MySQL

    - by vvanscherpenseel
    I have my open-table cache set to 1800 and I have a total of 1112 tables. MySQL Tuning Primer reports that 100% of my table cache is used, yet my table cache hit rate is 5%. I understand that this happens when concurrent connections are all opening tables, and I think I should raise the cache limit. I understand that the cache size is limited by the file-descriptor limit of my operating system, but are there any other practical limitations I should be aware of? Searching Google or this very website yields mostly posts explaining the connection factor, or indecisive answers. My question: can I safely increase the open-table cache limit? Is there a maximum?
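
    For reference, the counters behind that hit rate can be pulled directly; a sketch with mysql-connector-python follows (credentials are placeholders, and the variable being tuned is table_open_cache on MySQL 5.1+ versus table_cache on older servers):

    ```python
    # Sketch: read the status counters that tuning scripts derive the
    # "table cache hit rate" from.
    import mysql.connector

    cnx = mysql.connector.connect(user="root", password="...", host="localhost")
    cur = cnx.cursor()

    def status(name):
        cur.execute("SHOW GLOBAL STATUS LIKE %s", (name,))
        return int(cur.fetchone()[1])

    open_tables, opened_tables = status("Open_tables"), status("Opened_tables")
    # Opened_tables counts cache misses since startup; a value growing fast
    # relative to Open_tables suggests the cache is too small.
    print(f"Open_tables={open_tables}, Opened_tables={opened_tables}, "
          f"hit rate ~ {100 * open_tables / opened_tables:.1f}%")
    cnx.close()
    ```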

  • OpenVPN client-to-client connections re-encrypted at the server?

    - by user1684411
    Currently I'm using a site-to-site OpenVPN setup. The routers en/decrypt all traffic that goes from one net to the other, and one of them is the OpenVPN server. This works, but performance is not as good as it could be; I think the limiting factor is the CPU power of the router. Would it be better to use client-to-client connections and access the file server in one net from a PC in the other, so that the OpenVPN server does not have to decrypt the (whole) packets?

  • How can I disable write protection in my USB flash drive?

    - by 97847658
    My USB flash drive is currently unusable because it somehow (quite suddenly!) became write-protected. I have googled around and tried many solutions to this problem, but none of them have worked so far. Here are some of the things I've tried:

    - The drive has no tangible switch or button.
    - Formatting the drive won't work, even from the command line, even "low-level formatting", because the drive is (after all) write-protected.
    - Changing certain registry keys to 0 doesn't seem to work.
    - Repair_Neo2.9.exe says "USB Flash Disk not found!"

    One factor that may make it more difficult to find a solution: I have no idea what the make or model is, because I received the USB flash drive from my university as a gift. So if anyone knows how to find the make and model, that alone might be helpful. Any ideas? Thanks.
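
    On the make/model question: the USB vendor/product ID pair usually identifies the flash controller, which is what the reflash utilities care about. A sketch using pyusb (which needs a libusb backend installed) is below; on Windows, Device Manager's hardware IDs show the same VID/PID:

    ```python
    # Sketch: list the vendor/product IDs of attached USB devices, then look
    # the pair up in the public USB ID database to identify the controller.
    import usb.core

    for dev in usb.core.find(find_all=True):
        print(f"VID={dev.idVendor:04x} PID={dev.idProduct:04x}")
    ```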

  • Why does the heat production increase as the clock rate of a CPU increases?

    - by Nils
    This is probably a bit off-topic, but the whole multi-core debate got me thinking. It's much easier to produce two cores (in one package) than to speed up one core by a factor of two. Why exactly is this? I googled a bit, but found mostly very imprecise answers on overclocking boards which do not explain the underlying physics. The voltage seems to have the most impact (quadratic), but do I need to run a CPU at a higher voltage if I want a faster clock rate? Also, I would like to know why exactly (and how much) heat a semiconductor circuit produces when it runs at a certain clock speed.
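
    The usual first-order answer comes from the CMOS dynamic-power model, where alpha is the activity factor, C the switched capacitance, V the supply voltage, and f the clock frequency; because a higher f generally requires a higher V to keep the transistors switching fast enough, power grows much faster than linearly in f:

    ```latex
    P_{\text{dyn}} = \alpha \, C \, V^{2} f
    \qquad\text{and, with } V \propto f \text{ (roughly)},\qquad
    P_{\text{dyn}} \sim f^{3}
    ```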

  • Recommendation for hardware upgrade: thin clients? Or...?

    - by Alex C.
    I work for an animal shelter in Upstate New York. We have about 50 machines running XP Pro, connected to a Windows network with a domain. About half of these computers are used for nothing more than two web-based apps: one to keep track of our animals, the other to process credit cards. A full-blown desktop PC seems like overkill for this purpose. The PCs are three to five years old, and I'd like to come up with a plan to upgrade the hardware. Our donations are down (not surprising, given the economy), so cost is a big factor. Can people recommend some options? Some sort of thin client, maybe?

  • Do multi-platter hard drives use all of their heads to read simultaneously?

    - by WiSaGaN
    Suppose we have a hard disk with 2 platters and the characteristics below:

    - Rotational rate: 10,000 RPM
    - Avg. sectors/track: 1000
    - Surfaces: 4
    - Sector size: 512 bytes

    I was reading "Computer Systems: A Programmer's Perspective", 2nd ed., when I found that it calculates transfer time as if only ONE head is used to read a sector. If that's the case, why not use 4 heads to write (or read) on 4 surfaces? When writing a 2 KB file, each head would then only need to wait for the platters to rotate one sector length instead of 4, reducing the transfer time by a factor of 4. Or even redesign the sector layout so that each sector lies on one cylinder but across the 4 tracks at the same position on the 4 surfaces, with (512/4) bytes on each. Then, when the HD needs to read a 512-byte sector, the disk only needs to rotate roughly 1/4 as far as before. The idea looks like RAID 0.
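
    Running the question's numbers makes the claimed factor concrete; a small Python calculation follows (the four-head figure is the idealized best case, ignoring that real drives typically track-follow with only one active head at a time):

    ```python
    # Worked numbers for the drive in the question: rotation time at
    # 10,000 RPM, time to pass one 512-byte sector with 1000 sectors/track,
    # and the transfer time for a 2 KB file with 1 head vs. 4 heads at once.
    RPM, SECTORS_PER_TRACK, HEADS = 10_000, 1000, 4

    rotation_ms = 60_000 / RPM                          # 6 ms per revolution
    sector_us = rotation_ms * 1000 / SECTORS_PER_TRACK  # 6 us per sector
    file_sectors = 2048 // 512                          # a 2 KB file = 4 sectors

    print(f"one head  : {file_sectors * sector_us:.0f} us transfer time")        # 24 us
    print(f"four heads: {file_sectors * sector_us / HEADS:.0f} us (idealized)")  # 6 us
    ```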

  • Can time skew on Windows be reduced to +/- 5ms?

    - by mbac32768
    A number of our Windows workstations, running ntpd, simply cannot keep time. Our Linux workstations and servers running the same ntpd config don't have this problem, they can stay within +/- 5ms of skew. The Windows hosts easily drift to seconds and sometimes minutes apart. This is a problem for us. The only common factor we have been able to isolate is that the hosts that can't keep time are running Windows. Is there something fundamentally impossible with what we're trying to do?

  • What version-control system is most trivial to set up and use for toy projects?

    - by Norman Ramsey
    I teach the third required intro course in a CS department. One of my homework assignments asks students to speed up code they have written for a previous assignment. Factor-of-ten speedups are routine; factors of 100 or 1000 are not unheard of. (For a factor-of-1000 speedup you have to have made rookie mistakes with malloc().) Programs are improved by a sequence of small changes, and I ask students to record and describe each change and the resulting improvement.

    While you're improving a program, it is also possible to break it. Wouldn't it be nice to back out? You can see where I'm going with this: my students would benefit enormously from version control. But there are some caveats:

    - Our computing environment is locked down. Anything that depends on a central repository is suspect.
    - Our students are incredibly overloaded. Not just classes but jobs, sports, music, you name it. For them to use a new tool, it has to be incredibly easy and have obvious benefits.
    - Our students do most work in pairs. Getting bits back and forth between accounts is problematic. Could this problem also be solved by distributed version control?
    - Complexity is the enemy. I know setting up a CVS repository is too baffling; I myself still have trouble because I only do it once a year. I'm told SVN is even harder.

    Here are my comments on existing systems:

    - I think central version control (CVS or SVN) is ruled out, because our students don't have the administrative privileges needed to make a repository that they can share with one other student. (We are stuck with Unix file permissions.) Also, setup on CVS or SVN is too hard.
    - darcs is very easy to set up, but it's not obvious how you share things. darcs send (to send patches by email) seems promising, but it's not clear how to set it up.
    - The introductory documentation for git is not for beginners. Like CVS setup, it's something I myself have trouble with.

    I'm soliciting suggestions for what source control to use with beginning students. I suspect we can find resources to put a thin veneer over an existing system and to simplify existing documentation; we probably don't have resources to write new documentation. So: what's really easy to set up, commit, revert, and share changes with a partner, but does not have to be easy to merge or to work at scale? A key constraint is that programming pairs have to be able to share work with each other and only each other, and pairs change every week. Our infrastructure is Linux, Solaris, and Windows with a NetApp filer. I doubt my IT staff wants to create a Unix group for each pair of students. Is there an easier solution I've overlooked?

    (Thanks for the accepted answer, which beats the others on account of its excellent reference to Git Magic as well as the helpful comments.)
