Search Results

Search found 11409 results on 457 pages for 'large teams'.

Page 234/457

  • What is a good router Linux distro and WHY?

    - by madmaze
    I have a rather large home network with many clients. I've decided I want to build a Linux-based router; I have a 1.6 GHz dual-core (Atom) system kicking around which will be repurposed. I've looked at a bunch of dedicated router distros but can't decide. I have also looked into taking an Ubuntu Server or FreeBSD install and adding the needed packages. So the question is: what is the best router distro or base Linux distro, and why? Resources appreciated.

    Read the article

  • How to debug high memory usage by registry?

    - by bkr
    I have a Windows 2008 R2 server running ADFS2 and some web apps that is having issues with low memory. Digging around, I found that 5.5 GB was being used under Kernel Memory (paged). Further digging with Poolmon, I discovered that the majority of that (5 GB+) was being used by CM (configuration manager), also known as the registry. I'm really not sure how to tell why the registry is using so much memory, however, or how to release it. Looking at the physical registry files, they don't appear that large. EDIT #1: Using the PowerShell script at http://jdhitsolutions.com/blog/2011/05/get-registry-size-and-age/ confirms what I saw when looking at the physical registry files, that they're relatively small:

      Computername : (removed)
      Status       : OK
      CurrentSize  : 67
      MaximumSize  : 2048
      FreeSize     : 1981
      PercentFree  : 96.728515625
      Created      : 4/1/2011 11:38:02 AM
      Age          : 454.23:41:28.2540682

    Read the article

  • How can I run a game server on a computer behind a NAT, if I have another computer not behind a NAT?

    - by Macha
    My home connection is part of a large NAT, outside my control. Inside my home, my router has a NAT under my control. I control a public-facing Linux VPS with one IP address, outside my home network. Ideally, I'd like to set something up so that my home computer connects to my VPS, and from then on port X on the VPS forwards to port Y on my home computer, so I can run a server for a game that does not run under Linux. Is this possible?
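
    One common way to do this (a hedged sketch, not a full recipe): an SSH reverse tunnel from the home computer to the VPS, so connections to a port on the VPS are forwarded back through the tunnel. The port number and hostnames below are placeholders, and note that ssh -R forwards TCP only; a UDP-based game would need something like a VPN instead.

      # run on the home computer; 27015 stands in for the game's port
      ssh -N -R 0.0.0.0:27015:localhost:27015 user@vps.example.com
      # on the VPS, sshd_config needs "GatewayPorts yes" (or clientspecified)
      # so the forwarded port listens on the public interface, not just loopback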

    Read the article

  • Excel 2007: Filtering out rows in a table based on a list

    - by Sam Johnson
    I have a large table that looks like this:

      ID  String
      1   abcde
      2   defgh
      3   defgh
      4   defgh
      5   ijkl
      6   ijkl
      7   mnop
      8   qrst

    I want to selectively hide rows by populating a list of filtered values. For example, I'd like to filter out (hide) all rows that contain 'ef', 'kl', and 'qr'. Is there an easy way to do this? I know how to use Advanced Filters to include only the rows that contain those substrings, but not the inverse. Has anyone done this before?

    Read the article

  • The Coolest Server Names

    - by deadprogrammer
    These days server naming is a bit of a lost art. Most large organizations don't allow for fanciful names and name their servers with jumbles of digits and letters. In the olden days just about every system administrator came up with a unique naming scheme, well, sometimes unique - many just settled for Star Trek characters. To this day my favorite server name is Qantas - a Unix server that Joel Spolsky has or used to have. Why Qantas? You'd have to ask Rainman. So my question is this - what is the coolest server name or naming convention that you encountered? Let the geekfest begin. This question is marked "community wiki", so I am not getting any "rep" from it.

    Read the article

  • How to get Bash shell history range

    - by Aniti
    How can I get/filter history entries in a specific range? I have a large history file and frequently use history | grep somecommand. Now, my memory is pretty bad and I also want to see what else I did around the time I entered the command. For now I do this: get a match, say 4992 somecommand, then I do history | grep 49[0-9][0-9]. This is usually good enough, but I would much rather do it more precisely, that is, see commands from 4972 to 5012: 20 commands before and 20 after. I am wondering if there is an easier way. I suspect a custom script is in order, but perhaps someone else has done something similar before.
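
    A minimal sketch using the fc builtin, which can list history by entry number; the helper function name below is my own invention:

      # list history entries 4972 through 5012 directly
      fc -l 4972 5012

      # show N entries either side of a given history number (default 20)
      histaround() {
          local n=${2:-20}
          fc -l $(( $1 - n )) $(( $1 + n ))
      }
      histaround 4992      # prints entries 4972..5012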

    Read the article

  • Font size of Emacs in Ubuntu

    - by Ispinfx
    I use Emacs in Ubuntu with Monaco 10 as its default font. However, the font rendering seems a bit odd compared to my GNOME terminal with the same font size: it's a bit smaller and not as clear as in the terminal. I've tried to simply avoid this by using size 11, but that is too large for me. How can I make it look the same as it does in the terminal? Any help is appreciated :) UPDATE: I should point out that the one above is GUI Emacs running a shell, and the one below is the GNOME terminal. On the right are their corresponding font settings. Both are 100% captures with font size 10 (left: Emacs, middle: terminal, right: gedit). One more (gvim's):

    Read the article

  • Access to self created torrent on public tracker

    - by Nick
    Not sure if this is the right site to ask this, but here goes. Let's say I'd like to share a couple of private files with a few friends. They are quite large, so I've figured the best way to distribute them is via torrent. So, on my home PC I create a torrent, start seeding, and announce to a public tracker like openbittorrent or publicbt. Now, both of those are public trackers, but they don't seem to have any way of searching through what is actually being tracked. If I'm only passing the torrent file around to a few friends, what are the chances that someone else will randomly come across the torrent via the public tracker and start leeching?

    Read the article

  • Social Media Aggregator, Global Update via Powershell

    - by deanjmiller
    Does anyone know of a way to interface with a social media aggregator using PowerShell? For instance, I would like to update my global status on Digsby using PowerShell; Digsby would then fan the message out to Facebook, Myspace, Twitter, etc. I am open to using any social media aggregator that can do this: Digsby, Seesmic, Ping.fm, TweetDeck, etc. If any of these programs has a COM interface or something like it, I'm sure whoever implements this first will see a large gain in users.

    Read the article

  • Cannot find SQL instance at all (while installing an asp.net app on IIS)

    - by giddy
    So I'm really not a DBA, I'm an app dev. I had to install my ASP.NET MVC 3 app on my client's (a large company) IIS 6 + Win2k3 machine, with absolutely no help from their sysadmins. The final problem now is SQL Server 2008 R2: after figuring out how to create a login from Windows, my app and sqlcmd.exe always complain that they cannot find a SQL Server instance! I have all the SQL services (in services.msc) set to log on as the local system. I can log in fine with SQL Server Management Studio using Windows auth. I created my database, and my ASP.NET app needs/uses Windows auth. But for the love of God, whatever I do my app always complains it cannot find the instance. (I also tried running sqlcmd and it complains of the same thing!) My database connection string looks like this: Data Source=machinename\username;Initial Catalog=myDataStore;Integrated Security=True;MultipleActiveResultSets=True. Machinename\username is the same thing that shows up on the SQL Server Management Studio login if I choose Windows Authentication, right?
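
    A hedged note, since the excerpt leaves the real instance name open: the Data Source part should be machine\instance (the SQL Server instance name, e.g. SQLEXPRESS for a typical named install), not machine\username; the machine\user string SSMS shows under Windows Authentication is the login, not the server. With placeholder names, the connection string would look like:

      Data Source=MYSERVER\SQLEXPRESS;Initial Catalog=myDataStore;Integrated Security=True;MultipleActiveResultSets=True

    If the instance is the unnamed default, Data Source=MYSERVER alone is enough, and sqlcmd -S MYSERVER\SQLEXPRESS -E is a quick way to confirm the name resolves before touching the app.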

    Read the article

  • Limit number of simultaneous connections squid makes to a single server

    - by Ben Voigt
    Note: I am asking about outbound concurrent connection limits, not inbound, which is sufficiently covered by existing questions. Modern browsers typically open a large number of simultaneous connections to take advantage of the fact that TCP fairly shares bandwidth between connections. Of course, this doesn't result in fair sharing between users, so some servers have started penalizing hosts which open too many connections. This limit can be configured client-side (e.g. IE MaxConnectionsPerServer, Firefox network.http.max-connections-per-server), but the method differs for each browser and version, and many users aren't competent to adjust it themselves. So we turn to a Squid transparent HTTP proxy for central management of HTTP downloads. How can the number of simultaneous connections from Squid to a remote webserver be limited, so the webserver doesn't perceive it as abuse of concurrent connections? Ideally the limit would be per source address. Squid should accept virtually unlimited concurrent requests from the client browser, and issue them to the remote server only N at a time, delaying (but not dropping) the others.

    Read the article

  • rsync doesn't use delta transfer on first run

    - by ockzon
    I'm trying to synchronize a large local directory (with a batch file using rsync 3.0.7 on Cygwin, Windows 7 x64; 30k files, 200 GB in size) to a remote server (Debian x64 with kernel 2.6, rsyncd 3.0.7) over a slow internet connection (90 KB/s upload). I know almost all of the files are identical, and I verified that using md5sum locally and remotely. However, when executing rsync from my local machine, every file gets transferred completely the first time. When I terminate the batch file after a few transfers and run it again, the already-transferred files are skipped. But as soon as it gets to a file not yet transferred, it uploads the whole file again instead of noticing that the checksum is the same locally and remotely. The batch file calling rsync looks like this (backslashes and line breaks added here for readability):

      c:\cygwin\bin\rsync.exe --verbose --human-readable --progress --stats \
          --recursive --ignore-times --password-file pwd.txt \
          /cygdrive/d/ftp/data/ \
          rsync://[email protected]:33400/data/ | \
          c:\cygwin\bin\tee.exe --append rsync.log

    I experimented with the following parameters in varying combinations, but that didn't help either: --checksum, --partial, --partial-dir=/tmp/.rsync-partial, --compress
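
    One thing worth noting (hedged, since it may not be the whole story): rsync's delta-transfer algorithm only kicks in when a basis file already exists on the receiving side under the same name (or, with --fuzzy, a similar name); files that are missing on the destination are always sent whole on the first pass, and --ignore-times forces every file to be re-examined on later passes. A sketch with placeholder destination details:

      # drop --ignore-times; let rsync look for a similarly named basis file
      rsync --verbose --human-readable --progress --stats \
            --recursive --fuzzy --partial --password-file pwd.txt \
            /cygdrive/d/ftp/data/ rsync://USER@HOST:33400/data/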

    Read the article

  • Avoid cache overflow in Atempo LiveBackup

    - by Vebjorn Ljosa
    When attempting the initial backup of a new client, Atempo LiveBackup seems to require a very large cache. For instance, a 20 GB cache is not enough to back up a computer that has 100 GB of data. It appears that LiveBackup is adding new files to the cache at a faster rate than it can send them to the server, and when the cache fills up, the backup fails. Aside from removing most of the data from the computer and then adding it back gradually after the initial backup, is there a good workaround? Is it possible to make LiveBackup slow down its scan so as not to fill the cache? Or is it possible to place the cache on an external drive?

    Read the article

  • deploying skype for 50 nationwide users with preset usernames

    - by kevyn
    Is there a way to sign up a large group of Skype names at once? Is there a way to give each user a Skype username based on their own e-mail address? What I would like to do is roll out Skype to an office in every county in the UK with a predefined username such as 'mycompanyname-warwickshire', 'mycompanyname-bedfordshire', and so on. Our users are only basic computer users, so I would ideally like this done with as little fuss as possible for them! Thanks in advance. PS: if anyone has a good way of doing this using alternative software, I'm open to suggestions.

    Read the article

  • Moving folders takes a long time in Windows 7

    - by acidzombie24
    What can I do to fix this? Maybe drop permission properties? Maybe not. I have a large folder with 100k files. I moved it into my archive folder and it's taking forever to move. Why is that? I know on XP it takes less than a second, but not on Windows 7. I am sure it's a permission thing; is there a way I can disable that and make it faster?

    Read the article

  • Accidentally ejected my Verbatim drive and can't get the icon back

    - by Erin
    Hi, I have Time Machine running on my iMac (OS X v10.5.8) and also have a 1 TB Verbatim drive attached that I use as a workspace/scratch disk so I can manipulate large music files before I transfer them. However, when cleaning behind my computer the other day I think I dislodged the connection (or maybe one of the kids hit the eject button, I don't know). I've rebooted many times and it has not reconnected. It doesn't appear in my Disk Utility window and I don't know how to get the icon back! I've looked in Time Machine but it doesn't appear there at all (because it's not supposed to, I think, as it's not connected; my mate hooked it up for me and he won't return my calls!). Help, I don't know how to get it back! Sorry for being a plank.
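
    A hedged first check from Terminal, with a placeholder disk identifier; it only tells you whether the OS can see the drive at all:

      # list every disk the system detects, mounted or not
      diskutil list
      # if the Verbatim drive shows up (say as /dev/disk2), try mounting it
      diskutil mountDisk /dev/disk2

    If it does not appear in diskutil list either, the problem is more likely the cable, power, or enclosure than anything software-side.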

    Read the article

  • Encrypt EC2 API call

    - by Frank
    I have to host an AMI in the Amazon Marketplace. I need to get the instance type whenever a user launches the AMI, i.e. whether it's small, medium, or large, and based on that I need to make some changes in the AMI when the instance is created. I can do this with an Amazon API call to get the instance type, but the problem is that the instances created from the AMI will be started by other users, and I cannot put my AWS credentials into the Amazon API call. Is there any way I can create an anonymous read-only user that can make only specific types of EC2 API calls? Or can I encrypt my EC2 API credentials so that no one can use them?
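
    For the narrower problem of an instance discovering its own type, a sketch that needs no AWS credentials at all: the EC2 instance metadata service, queried from inside the running instance over the link-local address.

      # run from inside the launched instance; prints e.g. "m1.small"
      curl -s http://169.254.169.254/latest/meta-data/instance-type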

    Read the article

  • Using MongoDB + Redis + Apache on the same server in production?

    - by Dayson
    I intend to launch my web app on an 8 GB VPS. It uses MongoDB + Redis for storage/caching and Apache + PHP-FPM for serving requests. Could there be any issues with running Mongo + Redis + Apache on the same server? Would it make more sense to set up 2 x 4 GB VPS servers and keep Mongo on one and Redis + Apache on the other? Or should I just start with one server and worry about scaling horizontally later, by delegating the existing server to Mongo (due to its large RAM) and moving the web servers onto multiple smaller VPSes?

    Read the article

  • PostgreSQL disaster recovery options

    - by Alex
    My customer has quite a large PostgreSQL database (the total "data" folder size is 200 GB) and we are working on a disaster recovery plan. We have identified three different types of disaster so far: hardware outage, too much load, and unintentional data loss due to an erroneously executed bad migration (like DELETE or ALTER TABLE DROP COLUMN). The first two types seem easy to mitigate, but we can't come up with a good mitigation plan for the third. I proposed using ZFS and frequent (hourly) snapshots, but "ZFS" means "OpenIndiana" these days and our ops engineers do not have much expertise in it, so using OpenIndiana imposes another risk. Colleagues are trying to convince me that restoring from a PostgreSQL PITR backup can be as fast as restoring from a ZFS snapshot, but I highly doubt that replaying, say, 50 GB of archived WAL can be considered "fast". What other options are we missing? Is ZFS the only viable alternative? Can we get a fast Pg DB restore time in a Linux environment?
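
    For reference, a minimal sketch of the PITR restore the colleagues have in mind (PostgreSQL of that era; the archive path and timestamp are placeholders): restore the latest base backup, then point recovery.conf at the WAL archive with a recovery target just before the bad migration. The replay time is governed by how much WAL has accumulated since that base backup, so frequent base backups are what keep it fast.

      # recovery.conf, placed in the restored data directory
      restore_command = 'cp /mnt/wal_archive/%f "%p"'
      recovery_target_time = '2012-06-01 13:45:00'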

    Read the article

  • Does ZFS cache Compressed or Uncompressed data in a ZFS file-system with compression turned on?

    - by George Bailey
    ZFS supports file-system compression, and it also caches frequently or recently accessed data. If a system has lots of CPU but the underlying data storage system is slow, it is possible that ZFS would perform better with compression turned on. This can easily be tested when writing files by measuring CPU and disk usage and throughput (of course there may be some latency, but this would not be an issue for large files). But what about the cache? If data has to be decompressed every time it is read, then this is probably less of a good idea. Is the cached data compressed? Does anybody have some information on this?

    Read the article

  • Auto-crop black margins dynamically of scanned images?

    - by naxa
    I have a notebook that was photocopied and the photocopy scanned, about 200 pages. For various reasons I need to print this material. There are large black areas at the sides of the page (after the page itself ends), "black margins". I would like to remove the black areas while keeping all of the text. The image looks like this:

    * The even and odd pages have the black part in different places.
    * Notably, there is a white edge outside the black one, too!
    * Most notably, the black areas have no fixed width (I've tried overlaying all the images for even and odd pages separately); their width varies, so the batch algorithm should be able to detect it.

    Is there a way to remove these black-and-white margins automatically, keeping the text?
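
    A hedged sketch with ImageMagick, assuming the pages are individual image files and that trimming twice, once for the outer white edge and once for the black band, is acceptable; the -fuzz tolerance will need tuning against a few sample pages so it never eats into the text:

      # trim the white border, then the black band, on every scanned page
      for f in page_*.png; do
          convert "$f" -fuzz 15% -trim +repage \
                       -fuzz 15% -trim +repage "cropped_$f"
      done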

    Read the article

  • Distributed Server Monitoring Solution

    - by MaterialEdge
    I belong to an independent IT firm that manages and maintains about 50 business clients' networks, ranging from small 5-system networks to 200+ systems. Because we are unable to directly monitor each server at these locations (distributed over a very large area) on a regular basis, I am looking for a method of monitoring that alerts us to any problems that arise, so that we can respond quickly and, hopefully, preventatively. I'm not sure what solutions are available for this type of situation, but something that utilizes a central server at our business, with all client servers sending alerts or logs to it for daily monitoring, might work best. All these servers are running a Windows Server OS. In your opinion, what would be the best course of action to accomplish this?

    Read the article

  • Filtering serial port io

    - by mr odus
    I am currently working on a project where I communicate with hardware via a COM port on its respective PC (Win XP or 7). It is a fairly large project, and sifting through the log file can be a bit of a pain. This is my current setup: I use PuTTY to do the actual serial communication and write it to a log file. Then, using MinGW MSYS, I filter it with tail -f "puttyLog" | grep -i "search term". Is there a better way to do this? I mean specifically filtering the input in real time. Not that mine is terrible, but it still involves reading from a log, and sometimes there have been hangups where it is delayed for a minute or two. I have used software in the past with a main I/O window and internal filter panels, but I can no longer remember or find it.
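
    One possible cause of the delays is output buffering in the pipeline (the other being how often PuTTY flushes the log file itself); a hedged sketch that rules out the former using GNU grep's --line-buffered, plus stdbuf (if your environment provides it) for any other filter in the chain:

      # flush each matching line immediately instead of waiting for a full block
      tail -f puttyLog | grep --line-buffered -i "search term"

      # if another tool sits in the middle, disable its output buffering too
      tail -f puttyLog | stdbuf -oL tr -d '\r' | grep --line-buffered -i "search term"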

    Read the article

  • Remote file copy util (like rsync) but that will take account of data already copied (in this session)

    - by Rory McCann
    Let's say I have a directory with 2 files, both identical and quite large (e.g. 2 GB each). I want to rsync that directory to a remote host. As I understand it (and I could be wrong), rsync calculates checksums of files. Surely if it sees 2 files with the same checksum, it can just copy the first file and then do a local copy on the remote host for the 2nd file? That would make it faster, no? On a similar note, doesn't rsync hash all the remote files before copying? If it saw a different file with the same hash as a file that was to be transferred, it could do a local copy on the remote host. Does rsync support this sort of thing? Is there some way to turn it on? Is there a tool similar to rsync that will do this sort of 'hash-based' local copy?

    Read the article

  • Are all SFP+ transceivers usable for FEX between Nexus 5000 and Nexus 2000?

    - by Alain O'Dea
    I am looking at building a network with Nexus 5000 parent switches and Nexus 2000 fabric extenders. The mystery at the moment is what kind of SFP+ transceivers are required for cross-connecting racks. Right now I am considering FET-10G, but I am not sure that 100 m is long enough, given that the separation between racks is potentially very large since it is a rented rack environment. Are all SFP+ transceivers usable for FEX between Nexus 5000 and Nexus 2000? Specifically, can SFP-10G-SR transceivers be used for longer-distance FEX?

    Read the article
