Search Results

Search found 11531 results on 462 pages for 'cpu cache'.

Page 76 of 462

  • How to make one CPU be used simultaneously by three different users

    - by beginning_steps
    As a bootstrapping start-up we are thinking of saving on IT hardware costs by making more use of the hardware we already have. As a solopreneur I have a laptop with this config: Intel Core 2 Duo processor, 3 GB RAM and a 250 GB hard drive. Now we are planning to grow the team to 3 members. I would like your suggestions on the next cost-effective step I can take, so that the existing laptop acts as a kind of server and I buy two more monitors where the new recruits can do their daily work. They need to have different login IDs and access rights, and they don't need access to all the files/applications available on my laptop. We use the internet intensively for our day-to-day activity. Please share your experience: do you think this is a good ploy, or is there a more effective way of achieving the same result?

    Read the article

  • Webserver - Memory-bound or CPU-bound? [closed]

    - by JJP
    Possible Duplicate: How do you do Load Testing and Capacity Planning for Web Sites. I'm installing a social network using Zend Framework & MySQL, with lots of plugins & queries. I want the web server & SQL server on one box. I'm trying to choose between two machines (on hetzner.de): A) Intel i7-2600, 3.4 GHz, 16 GB DDR3 RAM; B) Intel i7-920, 2.6 GHz, 24 GB DDR3 RAM. B has 50% more RAM but a roughly 30% slower clock speed. The question is: is it obvious where the bottleneck will be? Would I ever need 24 GB of RAM, even with lots of concurrent users?

    Read the article

  • Removing Little Snitch completely (Mac OS X Snow Leopard)

    - by Mathias Bynens
    I uninstalled Little Snitch months ago. Or so I thought. When opening Console.app, I see entries like this:

      21/11/09 22:05:31 com.apple.launchd[1] (at.obdev.littlesnitchd[10045]) Exited with exit code: 1
      21/11/09 22:05:31 com.apple.launchd[1] (at.obdev.littlesnitchd) Throttling respawn: Will start in 10 seconds
      21/11/09 22:05:33 Little Snitch UIAgent[10046] 2.0.4.385: m65968c1c
      21/11/09 22:05:33 Little Snitch UIAgent[10046] 2.0.4.385: m579328b9
      21/11/09 22:05:33 Little Snitch UIAgent[10046] 2.0.4.385: m41531ded
      21/11/09 22:05:33 com.apple.launchd.peruser.501[170] (at.obdev.LittleSnitchUIAgent) Throttling respawn: Will start in 10 seconds
      21/11/09 22:05:41 com.apple.launchd[1] (at.obdev.littlesnitchd[10049]) Exited with exit code: 1
      21/11/09 22:05:41 com.apple.launchd[1] (at.obdev.littlesnitchd) Throttling respawn: Will start in 10 seconds
      21/11/09 22:05:43 Little Snitch UIAgent[10050] 2.0.4.385: m65968c1c
      21/11/09 22:05:43 Little Snitch UIAgent[10050] 2.0.4.385: m579328b9
      21/11/09 22:05:43 Little Snitch UIAgent[10050] 2.0.4.385: m41531ded
      21/11/09 22:05:43 com.apple.launchd.peruser.501[170] (at.obdev.LittleSnitchUIAgent) Throttling respawn: Will start in 10 seconds

    Spotlight searches for ‘little snitch’ or ‘littlesnitch’ yield no results. Yet it seems I didn’t get rid of Little Snitch entirely, since it’s still using up my CPU. Any ideas?
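
    A minimal sketch of how one might hunt for the leftover launchd jobs, assuming the job definitions still live in the standard launchd folders (the exact plist filenames below are an assumption based on the at.obdev labels in the log):

      # list any jobs still registered under Little Snitch's "at.obdev" labels
      sudo launchctl list | grep obdev        # system daemons (littlesnitchd)
      launchctl list | grep obdev             # per-user agents (the UIAgent)

      # look for the leftover job definitions
      ls /Library/LaunchDaemons/ /Library/LaunchAgents/ | grep -i obdev

      # unload and delete whatever turns up, e.g. (filename assumed):
      sudo launchctl unload -w /Library/LaunchDaemons/at.obdev.littlesnitchd.plist
      sudo rm /Library/LaunchDaemons/at.obdev.littlesnitchd.plist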

    Read the article

  • Why does my computer crash randomly?

    - by Donavon Decker
    The other day I went out to my van to get my tower, and when I opened the trunk it fell out. I brought it into the house and opened it up, and everything looked okay. When I started it up, it would crash about 1-3 minutes in. It did this over and over until I reseated the cooler. Everything seemed normal again, until after about 10 minutes of gameplay (any game) it would crash. I reseated my GPU and reinstalled the drivers, but I still get the same error. A while back I'd check my 'Windows Rating' periodically, and all of the scores were in the '6.0-6.9' range except for my hard disk (it's always been like that, so probably not relevant). Today I went in and looked, and my Processor and Memory were rated 5.4. I reseated my CPU and my memory, refreshed the Windows rating, and then my Processor and Memory went from 5.4 to 5.1. A few minutes ago I reseated them once again, and now it's back to 5.4. Note: not sure if this is relevant to the issue, but I updated my BIOS earlier today. I honestly have no idea what the issue is, but I'm getting aggravated by the problem. Here are some images of my specifications: i1271.photobucket.com/albums/jj623/donxdeck/1_zps09f0607c.jpg i1271.photobucket.com/albums/jj623/donxdeck/4_zps381cd00a.jpg i1271.photobucket.com/albums/jj623/donxdeck/3_zps54bba720.jpg i1271.photobucket.com/albums/jj623/donxdeck/2_zps945d3d72.jpg Thanks for the help

    Read the article

  • fail2ban log parsing too slow on Raspberry Pi - options? [migrated]

    - by Gordon Morehouse
    I'm running fail2ban on a Raspberry Pi at 950MHz which I cannot overclock further. The Pi is occasionally subject to SYN floods on particular ports. I've set up iptables to throttle the rate of SYNs on the port of interest; when the throttle limits are exceeded, hosts which send SYNs are dropped into the REJECT chain and the particular SYN packet which exceeded the limit is logged. fail2ban then watches for these logged SYNs and, after seeing a few, temporarily bans the host for a short time (this is a transient issue in the app I'm working with). The problem is that the SYN floods can occasionally reach rates which are too fast for fail2ban to keep up with; I'll see 20-40 log messages per second, and eventually fail2ban falls behind and becomes ineffective. To add insult to injury, it continues consuming a LOT of CPU as it tries to catch up. I have verified that DROP chained packets from hosts already banned by fail2ban are not logged, and thus do not add to its load. What are my options here? I have a few ideas, but no clear path forward. Could I make the log-parse regex "easier" so it takes fewer cycles? Would using iptables --log-prefix to put a token near the start of the log message, and/or otherwise simplifying/altering the fail2ban regex help? Here is the current fail2ban config line containing a regex: failregex = kernel:.*?SRC=(?:::f{4,6}:)?(?P<host>[\w\-.^_]+) DST.*?SYN Is there a faster way for fail2ban to watch for the packets exceeding the limits than parsing kern.log? Could fail2ban be run under PyPy instead of CPython with minimal nonstandard wizardry (the OS is Raspbian 7, so, mostly Debian 7)? Is there something better than fail2ban that I could use to watch for the packets which exceed the SYN limits, and after N exceeds in X seconds, temporarily put the offending IP into the iptables DROP bucket, and take it out when the ban timer expires? Again, I'd vastly prefer a solution that uses as much software available in Debian as possible, though I can build Debian packages in a pinch.
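
    A rough sketch of the --log-prefix idea, assuming the rate-limit rules already exist (the port, limits, and filter regex here are placeholders, not the questioner's actual values): the fixed token lets fail2ban's regex fail fast on unrelated lines instead of scanning for kernel:.*?SRC=, and a limit on the LOG rule itself caps how many lines fail2ban has to parse per second.

      # packets under the limit pass; the rest get logged (at a capped rate) and rejected
      iptables -A INPUT -p tcp --dport 5555 --syn -m limit --limit 20/second --limit-burst 40 -j ACCEPT
      iptables -A INPUT -p tcp --dport 5555 --syn -m limit --limit 5/second -j LOG --log-prefix "SYNLIMIT: "
      iptables -A INPUT -p tcp --dport 5555 --syn -j REJECT

      # matching fail2ban filter line (assumed):
      # failregex = SYNLIMIT: .*SRC=<HOST> .*SYN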

    Read the article

  • Does my machine configuration make sense?

    - by user1227914
    I couldn't think of a better place to ask this question, so here it goes. We're putting together a dedicated server for a website that will initially host both the web server and the MySQL database. As the website grows, we'll move the database to a different server and this machine will eventually only serve the actual website. So the question is: does my configuration look okay? It's the first time I'm building a server from scratch, so I want to make sure I don't combine components that don't fit; things like: do the drives I picked work in the hot-swap bays, etc. What do you guys think? Am I good to go with this configuration? :)

      Chassis: Supermicro SuperServer 6016T-MTHF (6x DDR3 SDRAM ECC DIMM 240-pin, 2x LGA1366 socket, 600 W power supply, 4 free hot-swap 3.5" bays)
      CPU: Intel BX80614E5620 Xeon E5620 - 4 cores, 2.40 GHz, LGA 1366, 5.86 GT/s QPI, 12 MB cache, 64-bit, 80 W, HyperThreading
      Memory: Crucial CT51272BB1339 4 GB PC3-10600 DDR3 - 1333 MHz, ECC, registered, 1x 4096 MB (possibly 3 or 4 of them)
      Hard drives: Western Digital WD2002FAEX Caviar Black - 2 TB, 3.5", SATA 6 Gbps, 7200 RPM, 64 MB cache (possibly 2 or 3)

    Thank you very much for any professional advice :)

    Read the article

  • How to utilize Varnish for A/B Testing and Feature Rollout?

    - by Ken
    Hi all, I wasn't really sure if this should go here or on Stack Overflow - admins, please move it if I'm mistaken (and sorry). Today we have our web layer exposed to the world. We would like to add Varnish in front of it to accelerate the site and reduce calls to the backend. However, we have some concerns and I was wondering how most people approach them:

      A/B testing - How do you test two "versions" of each page and compare them? I mean, how does Varnish know which page to serve up? Do you save separate versions of each page, and if so, how?
      Feature rollout - How would you set up a simple feature rollout mechanism? Let's say I want to open a new feature/page to just 10% of the traffic, and later increase that to 20%.
      Code deployments - How do you handle them? Do you purge your entire Varnish cache on every deployment (we deploy daily), or do you just let it slowly expire (using TTL)?

    Any ideas and examples regarding these issues are greatly appreciated! Thanks in advance. Ken.
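
    For the bucketing questions, one hedged sketch (Varnish 3-style VCL, assuming the std vmod is available; the header name and the 10% split are made up for illustration) is to assign each request to a bucket in vcl_recv and make the bucket part of the cache key in vcl_hash, so each variant is cached separately:

      import std;

      sub vcl_recv {
          # roughly 10% of requests see the new variant; raise the threshold to widen the rollout
          if (std.random(0, 100) < 10) {
              set req.http.X-AB-Bucket = "new";
          } else {
              set req.http.X-AB-Bucket = "old";
          }
          # in practice you would make this sticky per visitor, e.g. via a cookie set by the backend
      }

      sub vcl_hash {
          # cache the two variants under different keys (the built-in vcl_hash still adds URL and Host)
          hash_data(req.http.X-AB-Bucket);
      }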

    Read the article

  • Computer restarts without warning; code bcc116

    - by Robert C.
      Processor: Intel i5 4430, 4 cores, 4x 3 GHz
      Motherboard: MSI H87-G41
      Graphics card: Nvidia GTX 760
      Power supply: EPS-750 CM
      RAM: 8 GB

    I bought a newly assembled gaming PC which worked fine for a few days. Then it started rebooting without warning. After it restarts, Windows 7 gives me a BCC 116 error code. Apparently it's something to do with my video card, either overheating or wrong drivers. I've installed the latest driver from Nvidia for my graphics card. Since it's brand new it can't be dust; I'm running it with the lid open to see if the problem persists. I'm also running Prime95 now to see if it tells me anything else. Using Core Temp, it tells me that my CPU reaches up to 95 °C with the blend stress test from Prime95. Aaaand it just peaked at 100 °C. Of course it doesn't reach these temperatures at all while idle or gaming. I'm gonna let Prime95 run overnight and see what happens. Until then, does anyone know what I should do next?

    Read the article

  • Linux Scheduler (not using all cores on multi-core machine) RHEL6

    - by User512
    I'm seeing strange behavior on one of my servers (running RHEL 6). There seems to be something wrong with the scheduler. Here's the test program I'm using:

      #include <stdio.h>
      #include <unistd.h>
      #include <stdlib.h>

      void RunClient(int i) {
        printf("Starting client %d\n", i);
        while (true) { }
      }

      int main(int argc, char** argv) {
        for (int i = 0; i < 4; ++i) {
          pid_t p_id = fork();
          if (p_id == -1) {
            perror("fork");
          } else if (p_id == 0) {
            RunClient(i);
            exit(0);
          }
        }
        return 0;
      }

    This machine has a lot more than 4 cores, so we'd expect all processes to be running at 100%. When I check top, the CPU usage varies. Sometimes it's split (100%, 33%, 33%, 33%), other times it's split (100%, 100%, 50%, 50%). When I try this test on another server of ours (running RHEL 5), there are no issues (it's 100%, 100%, 100%, 100%) as expected. What's causing this and how can I fix it? Thanks
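
    One quick thing to rule out while the test runs is a narrow CPU-affinity mask or cpuset inherited by the children (a hedged sketch; the binary name is a placeholder):

      # show which CPUs each forked child is allowed to run on
      for pid in $(pgrep -f scheduler_test); do
          taskset -p $pid                           # affinity mask as seen by the kernel
          grep Cpus_allowed_list /proc/$pid/status
      done

      # if the mask is narrower than expected, widen it, e.g. to CPUs 0-15:
      #   taskset -pc 0-15 <pid>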

    Read the article

  • ZFS with L2ARC (SSD) slower for random seeks than without L2ARC

    - by Florian Kruse
    I am currently testing ZFS (OpenSolaris 2009.06) on an older fileserver to evaluate its use for our needs. Our current setup is as follows:

      Dual core (2.4 GHz) with 4 GB RAM
      3x SATA controller with 11 HDDs (250 GB) and one SSD (OCZ Vertex 2 100 GB)

    We want to evaluate the use of an L2ARC, so the current zpool is:

      $ zpool status
        pool: tank
       state: ONLINE
       scrub: none requested
      config:

              NAME        STATE     READ WRITE CKSUM
              afstank     ONLINE       0     0     0
                raidz1    ONLINE       0     0     0
                  c11t0d0 ONLINE       0     0     0
                  c11t1d0 ONLINE       0     0     0
                  c11t2d0 ONLINE       0     0     0
                  c11t3d0 ONLINE       0     0     0
                raidz1    ONLINE       0     0     0
                  c13t0d0 ONLINE       0     0     0
                  c13t1d0 ONLINE       0     0     0
                  c13t2d0 ONLINE       0     0     0
                  c13t3d0 ONLINE       0     0     0
              cache
                c14t3d0   ONLINE       0     0     0

    where c14t3d0 is the SSD (of course). We run IO tests with bonnie++ 1.03d; size is set to 200 GB (-s 200g) so that the test sample will never fit completely in ARC/L2ARC. The results without the SSD (average values over several runs, which show no differences):

      write_chr     write_blk     rewrite      read_chr     read_blk      random seeks
      101.998 kB/s  214.258 kB/s  96.673 kB/s  77.702 kB/s  254.695 kB/s  900 /s

    With the SSD it becomes interesting. My assumption was that the results should in the worst case be at least the same. While the write/read/rewrite rates are not different, the random seek rate differs significantly between individual bonnie++ runs (between 188 /s and 1333 /s so far); the average is 548 +- 200 /s, so below the value without the SSD. So, my questions are mainly: Why do the random seek rates differ so much? If the seeks are really random, they should not differ much (my assumption), so even if the SSD were impairing performance it should be the same in each bonnie++ run. Why is the random seek performance worse in most of the bonnie++ runs? I would assume that some part of the bonnie++ data is in the L2ARC and random seeks on this data perform better, while random seeks on other data just perform similarly to before.
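
    One hedged way to check whether the L2ARC is actually warm and being hit during the runs (OpenSolaris-era commands; treat the exact kstat statistic names as an assumption):

      # L2ARC fill level and hit/miss counters from the ARC statistics
      kstat -p zfs:0:arcstats | egrep 'l2_(size|hits|misses)'

      # per-vdev throughput every 5 seconds, to see whether random reads reach the cache device
      zpool iostat -v tank 5

    If l2_size is still far below the 100 GB device size after a 200 GB bonnie++ pass, the run-to-run variance may simply reflect how much of the sample happened to be cached at that point.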

    Read the article

  • iptables secure squid proxy

    - by Lytithwyn
    I have a setup where my incoming internet connection feeds into a squid proxy/caching server, and from there into my local wireless router. On the WAN side of the proxy server, I have eth0 with address 208.78.***.***. On the LAN side of the proxy server, I have eth1 with address 192.168.2.1. Traffic from my LAN gets forwarded through the proxy transparently to the internet via the following rules. Note that traffic from the squid server itself is also routed through the proxy/cache, and this is on purpose:

      # iptables forwarding
      iptables -A FORWARD -i eth1 -o eth0 -s 192.168.2.0/24 -m state --state NEW -j ACCEPT
      iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
      iptables -A POSTROUTING -t nat -j MASQUERADE

      # iptables for squid transparent proxy
      iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j DNAT --to 192.168.2.1:3128
      iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128

    How can I set up iptables to block any connections made to my server from the outside, while not blocking anything initiated from the inside? I have tried doing:

      iptables -A INPUT -i eth0 -s 192.168.2.0/24 -j ACCEPT
      iptables -A INPUT -i eth0 -j REJECT

    But this blocks everything. I have also tried reversing the order of those commands in case I got that part wrong, but that didn't help. I guess I don't fully understand everything about iptables. Any ideas?
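
    For reference, a minimal stateful sketch of the "allow everything initiated from inside, reject unsolicited outside traffic" idea, using the interface names from the question (best tested from the console rather than over SSH in case something is off):

      # always allow loopback
      iptables -A INPUT -i lo -j ACCEPT
      # allow replies to connections the server itself (or the LAN, via NAT) initiated
      iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
      # allow anything arriving on the LAN interface
      iptables -A INPUT -i eth1 -s 192.168.2.0/24 -j ACCEPT
      # reject everything else that arrives on the WAN interface
      iptables -A INPUT -i eth0 -j REJECT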

    Read the article

  • Caching/preloading files on Linux into RAM

    - by Andrioid
    I have a rather old server that has 4 GB of RAM and is pretty much serving the same files all day, but it is doing so from the hard drive while 3 GB of RAM is "free". Anyone who has ever tried running a RAM drive can attest that it's awesome in terms of speed. The memory usage of this system is usually never higher than 1 GB out of 4, so I want to know if there is a way to use that extra memory for something good. Is it possible to tell the filesystem to always serve certain files out of RAM? Are there any other methods I can use to improve file-reading performance by using RAM? More specifically, I am not looking for a 'hack' here. I want filesystem calls to serve the files from RAM without needing to create a RAM drive and copy the files there manually, or at least a script that does this for me. Possible applications here are:

      Web servers with static files that get read a lot
      Application servers with large libraries
      Desktop computers with too much RAM

    Any ideas? Edit: Found this very informative: The Linux Page Cache and pdflush. As Zan pointed out, the memory isn't actually free; what I mean is that it's not being used by applications, and I want to control what should be cached in memory.
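
    A minimal sketch of the "script that does this for me" approach, assuming the goal is simply to keep the hot files resident in the kernel page cache (the path is a placeholder; a tool such as vmtouch can additionally lock pages in memory, but plain reads are often enough):

      #!/bin/sh
      # warm the page cache by reading the frequently served files once;
      # later reads are then served from RAM until memory pressure evicts them
      find /var/www/static -type f -print0 | xargs -0 cat > /dev/null

    Re-running this periodically (for example from cron) keeps the files cached on a box that otherwise has little memory pressure.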

    Read the article

  • Would a PHP application benefit from being served from a RAM drive?

    - by Tom Marthenal
    I am in charge of hosting a PHP application that is large and slow, but easy to scale. The application is entirely static, with writable disk storage needed. We've profiled the application, and the main bottleneck appears to come from loading the application and not the work the application does. The application is not CPU-intensive, although it does use a fair amount of memory (think Magento). Currently we distribute it by having a series of servers with the same PHP files on their hard drive and a load balancer in front of them. Easy but expensive. I've been reading about RAM disks and the IO benefits they offer, and was wondering if they would be well-suited to PHP applications. Since PHP applications are loaded from disk for every request and often involve lots of different files (as opposed to being kept in memory like with a Java application), I would figure that disk performance can be a severe bottleneck. Would placing the PHP files on a RAM disk and using the mount point as Apache's document root offer performance benefits? A startup script could create the RAM drive and then copy the files (which are plain-text and small) from a permanent location to the temporary RAM drive. Does this make sense, or should I just trust the linux kernel to cache the appropriate files in memory by itself?
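
    A minimal sketch of the startup script described above, assuming a tmpfs mount that Apache's document root points at (paths and size are placeholders):

      #!/bin/sh
      # create a RAM-backed filesystem and copy the application into it at boot
      mkdir -p /var/www/ramdisk
      mount -t tmpfs -o size=512m tmpfs /var/www/ramdisk
      rsync -a --delete /var/www/app/ /var/www/ramdisk/
      # point Apache's DocumentRoot (or a symlink) at /var/www/ramdisk

    Whether this beats simply letting the kernel cache the files is exactly the trade-off the question raises: PHP files that are read on every request tend to stay in the page cache anyway, so the tmpfs mainly makes that guarantee explicit.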

    Read the article

  • 5 year old server upgrade

    - by rizzo0917
    I am looking to upgrade a server for a web app. Currently the application is running very sluggishly. We've made some adjustments to MySQL (that's another issue in itself) and arranged for the heaviest queries to run on a copy of the database on another server we keep as a backup, but this will not last much longer and we are looking to upgrade. Currently the server's CPUs are (4) Intel(R) XEON(TM) CPU 2.00GHz, with 1 GB of RAM. The database is 442.5 MiB, with about 1,743,808 records. There are two parts to the program: side A inserts and updates most of the data; side B reads the data and does some minor updates. Currently our biggest day for side A is 800 users (of 40,000 users all year) inputting into the system. Our side B load is currently unknown, but we have a total of 1,000 clients. The system will most likely cap out at 5,000 side B clients, with about 300,000 side A users a year. The current database is 5 years old, so we can expect it to grow pretty rapidly, possibly doubling each year (we can most likely archive older records if it comes to that). So with that being said, should we get a server for each side of the app, side A being the master and side B the slave, with any updates made on side B routed to side A? So the question is: should I get two of these, or one?

      2x Intel Nehalem Xeon E5520 2.26 GHz (8 cores)
      12 GB DDR3 memory
      500 GB SATA II HDD
      100 Mbps port speed

    And naturally I would need a redundant backup, so it could potentially be four of them.
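
    If the master/slave split goes ahead, the minimal MySQL pieces look roughly like this (a hedged sketch; server IDs and file names are placeholders, and the slave still needs a CHANGE MASTER TO ... / START SLAVE step with real credentials):

      # my.cnf on side A (the master; all writes go here)
      [mysqld]
      server-id = 1
      log_bin   = mysql-bin

      # my.cnf on side B (the slave; serving reads, so stray writes fail loudly)
      [mysqld]
      server-id = 2
      read_only = 1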

    Read the article

  • New i7 is slower than old Core 2 Duo? Why? (BIOS programming)

    - by DrChase
    I've always wondered why the companies who make BIOSes either have terrible engineering psychologists or none at all. But without wasting your time further with random speculative questions, my real question is as follows: why does my new computer run slower than my old computer?

      Old computer: Intel Core 2 Duo CPU @ 3.0 GHz (stock), 4 GB OCZ DDR2 800 RAM, Wolfdale E8400 mb, nVidia GeForce 8600 GT
      New computer: Intel Core i7 920 @ ~3.2 GHz, 6 GB OCZ DDR3 1066 RAM, EVGA X58 SLI LE motherboard, nVidia GeForce GTX 275

    Vista x64 Home Premium on both. "Run slower" is defined as:

      - poorer FPS performance in the same games and applications
      - takes longer to start up
      - general desktop usage (checking email, opening files, running exes) is noticeably slower

    At first I thought I must not have set something up in the BIOS. But I have no idea how to set anything in the BIOS except for "Dummy O.C.", which brought me to ~3.2 GHz; beyond that I have no idea. I've been reading about RAM timings and voltages and the like, but I really have no idea about that stuff. I'm a psychologist with a basic understanding of building his own computers, not a computer scientist. Can someone give me some wisdom that might guide me to the reason my new computer is worse than my older one? I'm sorry if this is a bad question or not appropriate to SO. I'm just pretty frustrated now, and you all have helped me in the past, so I figured I'd give it a shot. Thanks for your time.

    Read the article

  • Apache FilesMatch regexp: Can it match the 10-digit (Rails-generated) cache buster following the filename?

    - by ynkr
    According to the Apache FilesMatch docs: "The FilesMatch directive provides for access control by filename." Basically, I only want to set an Expires header for resources that have a 10-digit "cache buster" id appended to the name. So, here is my attempt at such a thing in my httpd.conf:

      <FilesMatch "(jpg|jpeg|png|gif|js|css)\?\d{10}$">
          ExpiresActive On
          ExpiresDefault "now plus 5 minutes"
      </FilesMatch>

    And here is an example of a resource I want to match:

      http://localhost:3000/images/of/elvis/eating-a-bacon-sandwich.png?1306277384

    Now obviously my FilesMatch regexp is not matching, so I am guessing one of two things is happening: either my regexp is wonky, or the '?1231231231' cache-busting part of the URL is not something Apache considers part of the filename. Can anybody confirm and/or give me a way to cache only those resources that will not persist beyond the next deploy?
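
    FilesMatch only ever sees the filename; the query string is not part of it, which is why the regexp above never matches. One commonly suggested workaround (a hedged sketch assuming mod_rewrite and mod_headers are enabled; directive placement may need adjusting) is to flag requests whose query string is a 10-digit cache buster and key the caching header on that flag:

      RewriteEngine On
      # mark requests whose query string is exactly ten digits
      RewriteCond %{QUERY_STRING} ^[0-9]{10}$
      RewriteRule \.(jpe?g|png|gif|js|css)$ - [E=CACHE_BUSTED:1]

      # only flagged requests get the short-lived caching header
      Header set Cache-Control "max-age=300" env=CACHE_BUSTED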

    Read the article

  • Myths: Does Deleting the Cache Actually Speed Up Your PC?

    - by The Geek
    Every time you ask somebody with a reasonable level of tech skills what you should do to speed up your PC, they start jabbering on about running CCleaner and clearing the cache. But does the act of clearing a cache really speed things up? Nope. Most people assume that all temporary files are just clutter created by lousy applications, but that isn’t actually the truth. Cache files are created by apps to store commonly used information so it doesn’t have to be generated or downloaded again.

    Read the article

  • How to add a URL cache to ASIHTTPRequest

    - by smokey_the_bear
    I'm using ASIHTTPRequest and an ASINetworkQueue in an iPhone app to retrieve some 100k XML files and a lot of thumbnails from a web service. I'd like to cache the requests in the style of NSURLCache. ASIHTTPRequest doesn't seem to support caching as is, and when I looked at the code I saw it drops down to C to create the requests, so inserting the NSURLCache layer seemed tricky. What's the best way to implement this?

    Read the article

  • Distributed Computing Framework (.NET) - Specifically for CPU Intensive operations

    - by StevenH
    I am currently researching the options that are available (both open source and commercial) for developing a distributed application. "A distributed system consists of multiple autonomous computers that communicate through a computer network." (Wikipedia) The application is focused on distributing highly CPU-intensive operations (as opposed to data-intensive ones), so I'm sure MapReduce solutions don't fit the bill. Any framework that you can recommend (plus a brief summary of any experience with it or a comparison to other frameworks) would be greatly appreciated. Thanks.

    Read the article

  • Cache invalidation between two web applications

    - by Muxa
    I need to invalidate the cache in a web application when related data is updated in another application (running on the same machine). Both applications use the same database. I know there's SqlCacheDependency. How is it in terms of performance? Is interprocess communication (e.g. using named pipes) an option in web applications? Does it outperform SqlCacheDependency?

    Read the article

  • Why is the HttpContext.Cache count always zero?

    - by jjr2527
    I set up a few pages with OutputCache profiles and confirmed that they are being cached by using multiple browsers and requests to retrieve the page with a timestamp, which matched across all requests. When I try to enumerate HttpContext.Cache it is always empty. Any ideas what is going on here, or where I should go for this information instead?

    Read the article

  • Access Google Chrome's cache

    - by jldupont
    Is it possible to access Google Chrome's cache from within an extension? I'd like to write an extension that loads a cached version of a page when the online one can't be accessed (e.g. Internet connectivity issue). Updated: I know I could write an NPAPI plugin accessible through an extension to accomplish this but I'd rather not suffer writing one... I am after a solution without resorting to NPAPI, please.

    Read the article
