
  • overload environment

    - by Richo
    I've recently switched to keeping my home directory in an svn repo across all my machines, meaning that my utility scripts, configuration (irssi, vim, zsh, screen etc.) as well as my .profile and so forth are easier to keep up to date across all the places I log in. I use a set of sourced .local files to override them on a per-site basis as required. As it stands, many of my scripts inherit some form of configuration, and for the most part I've been setting an environment variable in .profile and then, where needed on a per-site basis, overriding it in .profile.local. This works great, but are there pitfalls in having a stack of environment variables? Compared to my default environment within an X session before any of my personal configuration is loaded, I haven't even increased it by 50%, but some of the machines I work on are low-resource. Am I bloating my system unnecessarily, or being needlessly paranoid? Should I start moving this config into separate flat files that are loaded as needed? That means extra infrastructure; the alternative is writing a single module for storing config that all of my utilities can inherit.
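
    A minimal sketch of that override pattern (file names as in the question; the variable itself is a hypothetical example):

        # ~/.profile -- versioned defaults, shared across machines
        export UTILS_LOG_LEVEL=info     # hypothetical variable read by the utility scripts

        # source per-site overrides last so they win
        [ -f "$HOME/.profile.local" ] && . "$HOME/.profile.local"

        # ~/.profile.local -- unversioned, this machine only
        export UTILS_LOG_LEVEL=debug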

    Read the article

  • Which hardware to VM ratio for Build-Server virtualization?

    - by Martin
    Let's start by saying that I'm a total noob with respect to server virtualization. That is, I use VMs often during development, but they're simple desktop-machine things to me.

    Now to my problem: we have two (physical) build servers, one master, one slave, running Jenkins to do daily tasks and build (Visual C++ builds) the release packages for our software. As such these machines are critical to our company, because we do lots of releases and without a controlled environment to create them, we can't ship fixes. (And currently there's no proper backup of these machines in place, because they do not hold any data as such - it just would be a major pain to set them up again should they go bust. But setting up a backup that I'd know would work in case of HW failure would be even more pain, so we have skipped that until now.)

    Therefore (and for scaling purposes) we would like to go virtual with these machines. Outsourcing to the cloud is not an option, not at all, so we'll have to use on-premises hardware and VM hosts. Each build server (master or slave) is a fully configured (installs, licenses, shares in the case of the master, ...) Windows Server box. I would now ideally like to just convert the two existing physical nodes to VM images and run them, then later add more VM slave instances as clones of the existing ones. And here begin my questions:

    Should I go for one VM per hardware box, or for a single piece of hardware running multiple VMs? The latter would mean a single point of failure hardware-wise and doesn't seem like a good idea ... or does it? Since we're doing C++ compilation with Visual Studio, I assume that during a build the hardware (processor cores + disk) will be fully utilized, so going with more than one build node per piece of hardware doesn't seem to make much sense - or does it? Regarding hardware options, does it make any difference which VM software we use (VMware, MS, VirtualBox, ...)? (We're using Windows exclusively for our builds.)

    Regarding budget: we have a normal small-company (20 developers) budget for this. ;-) That is, if it's going to cost a few k$, it's going to cost. If it's free - all the better. I strongly prefer solutions without multi-k$ maintenance costs per year.

    Read the article

  • Printer recommendation

    - by Coding District
    Hi guys, I'm looking to buy a printer for home use and I'm not sure which one to get. I'm not very good when it comes to printers. Here's what I'm looking for:

    - cheap (least $ per page)
    - long-lasting (any specific brands to avoid?)
    - light use (let's say ~5 pages per week)
    - OK quality (I don't need "the best". I'm not going to print any photos, but I will need color)
    - can scan, fax, and print

    I'm currently looking at these two since it's Boxing Day tomorrow and they're on sale:

    http://www.bestbuy.ca/EN-CA/product/id/10155178.aspx
    http://www.bestbuy.ca/en-CA/product/hewlett-packard-hp-officejet-wireless-all-in-one-inkjet-printer-4500-wl-4500-wl/10146663.aspx?path=14c256643988a02e34424eec10028145en02

    Can I get some opinions about the above?

    Read the article

  • How do you pick what server setup you need?

    - by ed209
    I recently started receiving a pubsub data feed from Etsy. It averages around 250 notifications per minute, but obviously, when the USA wakes up, that spikes quite heavily. I want to be able to deal with those spikes (about 3 per day); the rest of the day is fine. What's the best method of getting the right server configuration? My current approach is to keep upgrading until the server stops dying... The next leap is:

    Processor: AMD Phenom II X6-1055T hexa-core
    RAM: 4 GB DDR2 SDRAM
    HD1: SATA drive, 7,200 rpm (+500 GB 7,200 rpm SATA hard drive)
    HD2: SATA backup drive (+500 GB SATA, 7,200 rpm)
    OS: Linux (+CentOS 5 64-bit)
    Bandwidth: 6000 GB monthly transfer (3000 in + 3000 out) (+100M uplink port)

    What's the best approach to working out what sort of server setup you need?
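
    Before buying, it may help to quantify the spikes from an access or application log (a sketch, assuming common-log-format timestamps in the bracketed fourth field):

        # requests per minute, busiest minutes first
        awk -F'[][]' '{ print substr($2, 1, 17) }' access.log | sort | uniq -c | sort -rn | head

    Sizing against the measured peak minute, rather than the daily average, is what keeps the three daily spikes from killing the box.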

    Read the article

  • HTTP transfer speeds start fast, then slow to a crawl

    - by AnITAdmin
    We just got a new dedicated 1-gigabit server running IIS. The CPU is at 15% or less, and 3 GB of the 4 GB of RAM is unused... yet transfers are really slow. In fact, here's how it happens: we connect, the speed is really fast at first, then it quickly declines to 40 KB/s or less. What's going on? In aggregate the server just won't go above 110-120 Mbit/s. The files are all very large - 50 MB to 500 MB - could this be a factor? Again, CPU, RAM, and UI responsiveness when accessing remotely all seem fine.
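
    One quick check that takes browsers out of the equation (a sketch; the URL is a placeholder):

        # average download speed in bytes/sec for one large file
        curl -o /dev/null -w 'avg: %{speed_download} B/s\n' http://yourserver.example.com/big.iso

    Running it in parallel from two clients would also show whether the ceiling is per-connection or server-wide.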

    Read the article

  • Does somebody know why the sectors of an IBM floppy disk are numbered 1 to 8 (and not 0 to 7)?

    - by Olivier Briand
    I am programming (as a hobby) on an 8-bit Z80 computer running CP/M 2.2, and the floppy disk format is IBM: 40 tracks, 8 sectors per track, 512 bytes per sector; free space is 154 KB on each side of the disk. Why are the sectors indexed 1 to 8 (and not zero to seven, as is usual with computers)? The catalog of the floppy disk is on track 1 (sectors 1 to 4, 64 entries). I'm wondering why the catalog isn't on track zero - is track zero reserved for a system (just as tracks 0 and 1 are reserved for the system on a CP/M floppy disk, with the catalog on track 2)?
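
    For reference, the raw capacity implied by that geometry (shell arithmetic; the ~6 KB gap to the quoted 154 KB presumably goes to the catalog and any reserved sectors):

        echo $(( 40 * 8 * 512 / 1024 ))   # => 160 (KB per side, raw)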

    Read the article

  • Email server - Disk quota sizes - suggestions?

    - by Ian H
    Working out a new server for an agency of 200 employees, with approx. 240 email accounts. Internally I'm arguing with myself over the amount of drive space to allocate to each user's disk quota; I'm just looking for suggestions. Once I have a quota size decided, it will define the solution for storage. I've considered everything from 4 GB per account (which I feel is being generous) down to 500 MB (which is rather restrictive in this day and age). Thing is, 4 GB per account is just under 1 TB of allocated storage for email alone. Does anyone follow a "rule of thumb" or have thoughts on this? Thanks in advance.
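
    The arithmetic behind the two candidate quotas, for quick what-if comparisons (shell sketch):

        echo $(( 240 * 4 ))            # 4 GB/account   => 960 GB allocated
        echo $(( 240 * 500 / 1024 ))   # 500 MB/account => ~117 GB allocated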

    Read the article

  • What is the best way to configure the number of workers in Apache?

    - by rbm
    My site receives a lot of traffic for two hours during the day (2000 hits per minute); the rest of the day it receives less traffic (500 hits per minute). I have been experimenting with the MaxClients and MaxSpareServers values, but I still get some downtime during peak hours. How can I calculate the best values for my configuration based on the amount of RAM that I have? Each process uses around 36-40 MB of memory.

    Memory (free output):

                     total       used       free     shared    buffers     cached
        Mem:          3096        793       2302          0          0          0
        -/+ buffers/cache:        793       2302
        Swap:            0          0          0

    Values that I am using now:

        <IfModule prefork.c>
        StartServers          10
        MinSpareServers       22
        MaxSpareServers       60
        ServerLimit           90
        MaxClients            90
        MaxRequestsPerChild  400
        </IfModule>
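
    One rough sizing rule (a sketch, not a definitive formula): take the memory you can spare for Apache, divide by the per-worker footprint, and keep MaxClients below that ceiling so a spike cannot push the box into swap.

        # ~2302 MB free, ~40 MB per worker, keep ~300 MB headroom for the OS:
        echo $(( (2302 - 300) / 40 ))   # => 50, a conservative MaxClients upper bound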

    Read the article

  • Suggestions for hosted file sharing services

    - by Jon
    Before I pose my question, I will give some insight into my scenario:

    - I work for a small business (cost is an important factor)
    - Our bandwidth is limited and would not support an in-house FTP server
    - We need to share files (mostly PDF, InDesign, Illustrator documents) with our clients, and as we expand, we are finding that our current locally-hosted FTP solution is too slow and is becoming a detriment to our sales team

    What we need is a remotely hosted solution for sharing files with our clients, specifically with the following features:

    - More than 100 GB of secure storage
    - The ability to give clients unique login credentials, granting access to a personalized directory or folder while limiting access to other files on the server
    - A relatively simple web-based UI for clients with limited computer knowledge

    We have considered a dedicated remote server and web-based services (box.net, yousendit.com, onehub.com, filesanywhere.com), but I am unsure of the direction we should be taking - have I left another solution out? What would you suggest? Thanks in advance.

    Read the article

  • How do I change the output line length of the "top" Linux command running in batch mode?

    - by Tom
    The following command is useful for capturing the processes currently using the most CPU to a file:

        top -c -b -n 1 > top.log

    The -c flag is particularly useful because it gives you the command-line arguments of each process rather than just the process name. The problem is that each line of output is truncated to fit the current terminal window. That's fine if you have a wide terminal, but if your terminal is only 165 characters wide, you only get 165 characters of information per process, and that is often not enough to show the full process command. It's a particular problem when the command is executed without a terminal, for example via a cron job. Does anyone know how to stop top truncating data, or how to force top to display a certain number of characters per line? This is not urgent, because there is an alternative way to get the top 10 CPU-using processes:

        ps -eo pcpu,pmem,user,args | sort -r -k1 | head -n 10
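
    Two approaches that may do this, depending on the version of top installed (hedged sketches):

        # newer (procps-ng) top accepts an explicit output width in batch mode
        top -c -b -n 1 -w 512 > top.log

        # some older versions honor the COLUMNS environment variable instead
        COLUMNS=512 top -c -b -n 1 > top.log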

    Read the article

  • Incrementing ticket numbers each time I print

    - by Danny
    I have an Excel sheet with a set of 4 identical tickets to print per page, which we use for stock takes. Rather than creating a huge document with 1000 pages for 4000 tickets, each with its own unique ticket number (starting from 1), I would like to find a macro or function which will print a page with 4 tickets on it (1, 2, 3, 4), then continue and print another with (5, 6, 7, 8), and so on. I have found some code that people have already written, but it only applied to one number changing per page rather than 4 simultaneously, and being a complete Visual Basic novice, I was unable to change the code to suit my needs. If someone could explain simply how I could achieve this I would be very, very grateful :)

    Read the article

  • Is it possible to group chrome extension processes?

    - by Shajirr
    I have a problem with Chrome: most extensions, even those which consume merely 5-10 MB of memory each, get their own process, and because of that Chrome uses a single process for all the tabs - which consume a lot more memory than the extensions - even with the --process-per-tab switch. This behavior seems illogical - why do you need extensions in separate processes if you can't use your browser properly because it takes 5-10 seconds just to load a tab and it freezes constantly? Is it possible somehow to limit the number of processes that can be used for extensions, maybe by grouping them, say, 10 extensions per process?

    Read the article

  • VPC on Windows 7 very slow network

    - by Shigg
    I have a Windows 2003 virtual machine which I use for website testing. I've just installed Windows 7 and am using the new version of VPC (not XP Mode). When I try to copy a file - I need to copy some big databases across - I get a copy speed of about 20 KB/s. Copying from one PC to another on the real network transfers files at 13 MB/s. Any ideas what may be causing this? I've turned off differential network compression on Win 7. The virtual HD is on a separate physical drive from the OS. I'm running Windows 7 64-bit on a dual-Xeon box with 16 GB RAM and 10,000 rpm drives. I tried installing VPC 2007, but Windows blocks it from running, saying it's not compatible. Many thanks for any ideas.

    Read the article

  • Apache with mod_perl eating memory when idle

    - by syneticon-dj
    An Apache web server running a mod_perl application is exhibiting abnormal memory usage - after the daytime load ceases, the system's memory is exhausted by the Apache processes and oom_killer is invoked. As the load returns the following morning, the memory usage normalizes - probably because Apache workers get recycled periodically if a sufficient number of hits is generated [memory-usage graph omitted; a graph of Apache hits per second correlates with it]. The remaining 2 hits per second throughout the night are induced by HAProxy checks - it runs

        HEAD http://mydomain.example.com/running HTTP/1.0

    requests against the server every half second, with "running" being a static file (i.e. not invoking any Perl code). It also seems that disabling these checks remedies the memory usage problem, but that obviously cannot be a solution. All 3 similarly configured servers (behind HAProxy) exhibit this behavior. The OS is Ubuntu 10.10, the Apache version 2.2.16. This looks like a memory leak, but I have no idea how to start debugging it - any hints?
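
    One way to start narrowing it down (a sketch; assumes the workers show up as apache2 in ps): record each worker's resident size overnight and see whether a handful of long-lived processes grow without bound once the traffic is almost exclusively the static HEAD checks. Workers that leak per-request are normally kept in check by MaxRequestsPerChild recycling, which matches the daytime behavior described above.

        # append a per-worker RSS snapshot every minute
        while true; do
            date
            ps -C apache2 -o pid=,rss=,etime= | sort -k2 -rn | head
            sleep 60
        done >> apache-rss.log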

    Read the article

  • Why do SSD drives get so much more expensive as they get larger?

    - by futuraprime
    Normal HDD costs go up very little as drives get larger. For example, an average 1TB drive costs a little under $90, 2TB costs a little over $100, and a 3TB drive costs close to $150. For HDDs, the cost per GB goes down as the number of GB goes up. SSD costs don't work like this: a 128GB SSD goes for $120ish, 256GB goes for $250ish, and 512GB drives get up to $600. The cost per GB goes up as the number of GB rises. What is it about SSDs that makes them so much costlier as they get larger?

    Read the article

  • Is there any script to do accounting for proftpd's xferlog?

    - by Aseques
    I would like to convert the xferlog format that proftpd uses into per-user in/out byte counts, to get a summary of how much traffic each user uses per month. The exact format is this:

        Thu Oct 17 12:47:05 2013 1 123.123.123.123 74852 /home/vftp/doc1.txt b _ i r user ftp 0 * c
        Thu Oct 17 12:47:06 2013 2 123.123.123.123 86321 /home/vftp/doc2.txt b _ i r user ftp 0 * c

    So far I have only found a script that makes a nice report, but not exactly what I needed; that one can be found here. I might create a fork of it and put it somewhere, but it has probably been done a lot of times already. I also just found a well-hidden page on the proftpd site with some more examples here.
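
    A sketch of the per-user, per-direction accounting with awk, assuming the standard xferlog layout shown above (field 8 = bytes, field 12 = direction, field 14 = user; filenames containing spaces would shift the fields):

        awk '{ bytes[$14 "/" $12] += $8 }
             END { for (k in bytes) printf "%-24s %14d bytes\n", k, bytes[k] }' /var/log/xferlog
        # output keys look like "user/i" (uploads) and "user/o" (downloads)

    Extending the key with the month field ($2) would give the per-month breakdown the question asks for.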

    Read the article

  • Can anyone recommend free network monitoring tools?

    - by Josamoto
    Hi all. I have a machine at home on a 3G internet connection, and my PC is consuming approximately 200 MB per day in uploads and downloads. I'm using an application called Networx to monitor the upload/download usage, but I am on the lookout for the culprit application munching away on my cap. Networx tells me how much my connection uses in total, but I need to know how much each application uploaded and downloaded. Does anyone know of a network connection monitoring utility for Windows 7 that can give me a daily outline of how much data was uploaded and downloaded, broken down on a PER APPLICATION basis? Thanks in advance!

    Read the article

  • Why is MySQL 5.5 slower than 5.1? (Linux, using mysqlslap)

    - by Zenofo
    my.cnf (identical for 5.5 and 5.1):

        back_log=200
        max_connections=512
        max_connect_errors=999999
        key_buffer=512M
        max_allowed_packet=8M
        table_cache=512
        sort_buffer=8M
        read_buffer_size=8M
        thread_cache=8
        thread_concurrency=4
        myisam_sort_buffer_size=128M
        interactive_timeout=28800
        wait_timeout=7200

    MySQL 5.5:

        ..mysql5.5/bin/mysqlslap -a --concurrency=10 --number-of-queries 5000 --iterations=5 -S /tmp/mysql_5.5.sock --engine=innodb
        Benchmark
        Running for engine innodb
        Average number of seconds to run all queries: 15.156 seconds
        Minimum number of seconds to run all queries: 15.031 seconds
        Maximum number of seconds to run all queries: 15.296 seconds
        Number of clients running queries: 10
        Average number of queries per client: 500

    MySQL 5.1:

        ..mysql5.5/bin/mysqlslap -a --concurrency=10 --number-of-queries 5000 --iterations=5 -S /tmp/mysql_5.1.sock --engine=innodb
        Benchmark
        Running for engine innodb
        Average number of seconds to run all queries: 13.252 seconds
        Minimum number of seconds to run all queries: 13.019 seconds
        Maximum number of seconds to run all queries: 13.480 seconds
        Number of clients running queries: 10
        Average number of queries per client: 500

    Why is MySQL 5.5 slower than 5.1? BTW: I've tried both mysql5.5/bin/mysqlslap and mysql5.1/bin/mysqlslap; the results are the same.

    Read the article

  • Is IMAP (un)subscribe meant to work across mail clients?

    - by equaeghe
    I read my IMAP mail on different computers and mail clients. I wanted to unsubscribe from the majority of my folders in Outlook. Later, I noticed that on another computer, where I use Thunderbird, I had been unsubscribed from those folders as well. This is not what I wanted (at all), so I subscribed again under Thunderbird. The effect was that I also got subscribed again under Outlook. So I guess this means (un)subscribing to IMAP folders is an account-coupled thing, not a per-client thing. Is there a way to achieve (un)subscribing per client?

    Read the article

  • Load balanced proxies to avoid an API request limit

    - by ClickClickClick
    There is a certain API out there which limits the number of requests per day per IP. My plan is to create a bunch of EC2 instances with elastic IPs to sidestep the limitation. I'm familiar with EC2 and am just interested in the configuration of the proxies and a software load balancer. I think I want to run a simple TCP proxy on each instance and a software load balancer on the machine I will be requesting from - something that allows the following to return a response from a different IP each time (round robin, availability, doesn't really matter):

        curl http://www.bbc.co.uk -x http://myproxyloadbalancer:port

    Could anyone recommend a combination of software, or even a link to an article that details a pleasing way to pull it off? (My client won't be curl, but it is proxy-aware - I'll be making the requests from a Ruby script.)
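
    For illustration, the client-side round-robin idea at its most naive (a bash sketch with hypothetical proxy hosts; a dedicated balancer such as HAProxy in TCP mode would do the same job more robustly):

        proxies=(http://proxy1.example.com:3128 http://proxy2.example.com:3128 http://proxy3.example.com:3128)
        i=0
        for url in "$@"; do
            # each request goes out through the next proxy in the list
            curl -s "$url" -x "${proxies[i % ${#proxies[@]}]}"
            i=$((i + 1))
        done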

    Read the article

  • non-mapped virtual memory & total number of connections

    - by tszming
    We have two MongoDB data nodes (a replica set) - primary and secondary. I noticed that the non-mapped virtual memory is relatively high and am wondering if it is hurting our MongoDB performance (the server usually peaks at around 6-7K queries per second). In MMS it is stated: "The most common case of usage of a high amount of memory for non-mapped is that there are very many connections to the database." So we checked the memory usage with db.serverStatus().mem on our secondary:

        {
            "bits" : 64,
            "resident" : 6846,
            "virtual" : 416797,
            "supported" : true,
            "mapped" : 205549,
            "mappedWithJournal" : 411098,
            "note" : "virtual minus mapped is large. could indicate a memory leak"
        }

    Note: we are using 2.0.4, where the default stack size should be 1 MB per connection. The current number of connections is around 1.1K, but the non-mapped virtual memory (virtual minus mappedWithJournal) is around 5699 MB. The trend is quite stable, so I can't say there is a leak here, but where has the memory gone? Any idea?

    Read the article

  • Change Google Chrome's Process model?

    - by mobius42
    See here: http://imgur.com/lKffI.png

    Does anyone here know how to stop Chrome doing this? Chrome seems to group all tabs I open from the same page into one process. If I copy and paste the links individually into separate tabs, it creates new processes, but when I just middle-click links, it groups them into one. I want to force Chrome to create a new process for every tab, because when one page locks up, it freezes pretty much all the tabs I have open, and if one of the tabs crashes, it takes the rest with it. You can apparently set Chrome's process model to one called --process-per-tab, which seems to be what I'm looking for, but when I try to open Chrome with this argument via the terminal, it doesn't work. It's likely I'm not using the correct command; what I tried was:

        /Applications/"Google Chrome.app"/Contents/MacOS/"Google Chrome" --process-per-tab

    I'm on OS X and using the latest dev build, 5.0.396.0.
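
    On OS X, flags are normally handed to an app bundle through open --args rather than by invoking the binary inside the bundle directly (a sketch; Chrome has to be quit completely first, or the already-running instance ignores the flags):

        # quit Chrome entirely, then relaunch it with the flag:
        open -a "Google Chrome" --args --process-per-tab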

    Read the article

  • In terms of load handling, which is better: one server or two of equivalent power?

    - by seldary
    My goal is to figure out whether I'm better off with one strong server or multiple weaker servers behind a load balancer. Does splitting the load between servers affect the total load my website can take? It's hard to single that out, because of course a lot of parameters affect the results, so some assumptions:

    - Putting failover considerations aside - I know they matter, but for the sake of the question's simplicity, let's assume nothing fails.
    - The servers in the multiple-server option have an accumulated "power" equivalent to the one-server option (about the same number of cores and amount of RAM).

    If that is too theoretical, here is a concrete question that could help: suppose I have several instances of exactly the same server - let's call it S - and that server S can serve a load of up to X calls per time unit. Will two S servers behind a load balancer serve 2X calls per time unit? Significantly more? Significantly less?

    Read the article

  • Trying to find a good filehost [closed]

    - by user67481
    I'm looking for a good filehost that I can use to link downloads on my blog (personally created files, no copyright infringement). I've been looking at MediaFire, but I'm not sure what else is out there that would meet my needs. Ideally I want something with no files-per-day-per-user limits, that can host individual files of at least 500 MB each, and that involves very little hassle for the users who download from it. I'll pay for a "premium" or whatever level of account if necessary. Any good suggestions? Or will MediaFire be my best bet for this?

    Read the article
