Search Results

Search found 11409 results on 457 pages for 'large teams'.

Page 234 of 457

  • Using PowerShell's Invoke-Command to install an .exe on a remote computer

    - by Bernie
    I have an .exe I would like to install on a large farm of Windows Server 2008 computers. I am attempting to use PowerShell remoting. I have this command, which works locally: invoke-command {& "N:\Temp\fortify_installer\HP-Fortify-3.20-Analyzers_and_Apps-Windows-x86.exe /s /f1N:\Temp\fortify_installer\response.iss"} But when I add the -ComputerName flag it seems to go off to nowhere, and the installer is never run on the remote machine. I can launch notepad.exe via the same command and it runs. Does it have something to do with it being an installer, or something else? I realize many versions of this question have been asked and I have read them, but I am still confused as to why this doesn't work.

    Read the article

  • The Coolest Server Names

    - by deadprogrammer
    These days server naming is a bit of a lost art. Most large organizations don't allow for fanciful names and name their servers with jumbles of digits and letters. In the olden days just about every system administrator came up with a unique naming scheme, well, sometimes unique - many just settled for Star Trek characters. To this day my favorite server name is Qantas - a Unix server that Joel Spolsky has or used to have. Why Qantas? You'd have to ask Rainman. So my question is this - what is the coolest server name or naming convention that you encountered? Let the geekfest begin. This question is marked "community wiki", so I am not getting any "rep" from it.

    Read the article

  • Is there any two-panel bookmarks manager for Google Chrome?

    - by L. Shaydariv
    Hi to all. I'm just wondering: is there any bookmark manager or extension with a two-panel interface (like Total Commander, File Manager, etc.) for Google Chrome? Using the default bookmark manager is not really suitable for two reasons: 1) I've gathered a very large collection of bookmarks (please don't ask why), and 2) the bookmarks hierarchy tree always expands its branches when the bookmark manager is open, which makes moving bookmarks through the tree much harder. I tried Link Commander, but it's very slow. Any suggestions? Thank you for any advice.

    Read the article

  • Programmatically Determine Exchange Attachment Limit

    - by Jeff Ballard
    Is there any way to query the Exchange server to determine the maximum attachment file size? I'd be doing this in ASP.NET/C#. I'd like to validate that the file the user wants to attach is not over the limit before they attempt to send it to the server, as opposed to having the server send back an exception when it tries to attach the file and discovers it is too large. I've also posted this question on stackoverflow.com - I figured a sysadmin for Exchange may have an answer as well as a developer. Hopefully I do not incur the wrath of the stackexchange gods.

    Read the article

  • Increasing load capacity for growing website

    - by markxi
    My website currently runs on a dedicated web server (with LiteSpeed) and a dedicated MySQL database server. It's a download-based site with a lot of user-generated content that can be streamed and downloaded; there are also thousands of thumbnails and static content. I'm at the stage where the web server can no longer handle the amount of traffic, so I'm looking at how best to increase capacity considering the large amount of downloadable content. My host suggests mirroring everything on a second web server and distributing the load between them using either DNS Made Easy or my own load balancer (using ldirector) in front of the two web servers. Could anyone advise whether the above would be the best option? Does anyone have any experience with DNS Made Easy and/or ldirector? I'd appreciate any help.

    Read the article

  • Excel 2007: Filtering out rows in a table based on a list

    - by Sam Johnson
    I have a large table that looks like this:

        ID  String
        1   abcde
        2   defgh
        3   defgh
        4   defgh
        5   ijkl
        6   ijkl
        7   mnop
        8   qrst

    I want to selectively hide rows by populating a list of filtered values. For example, I'd like to filter out (hide) all rows whose String contains 'ef', 'kl', or 'qr'. Is there an easy way to do this? I know how to use Advanced Filter to include only the rows that contain those substrings, but not the inverse. Has anyone done this before?
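
    For what it's worth, the inverse filter itself is easy to express outside Excel; below is a minimal pandas sketch of the same "hide rows whose String contains any listed substring" logic (the column names and substrings come from the question; the tool choice is mine, purely for illustration):

        import pandas as pd

        # Rebuild the sample table from the question.
        df = pd.DataFrame({
            "ID": range(1, 9),
            "String": ["abcde", "defgh", "defgh", "defgh",
                       "ijkl", "ijkl", "mnop", "qrst"],
        })

        # True for rows whose String contains any unwanted substring.
        unwanted = ["ef", "kl", "qr"]
        mask = df["String"].str.contains("|".join(unwanted))

        # Keep the inverse: rows that contain none of them (IDs 1 and 7).
        print(df[~mask])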

    Read the article

  • Moving folders takes a long time in Windows 7

    - by acidzombie24
    What can I do to fix this? Maybe drop permission properties? Maybe not. I have a large folder with 100k files. I moved it into my archive folder and it's taking forever to move. Why is that? I know on XP it takes <1 sec, but not on Windows 7. I am sure it's a permission thing; is there a way I can disable it and make it faster?

    Read the article

  • Which file system to choose when formatting a 1.5TB hard drive (HDD)

    - by MaxiWheat
    I plan to buy a 1.5TB hard drive soon, and I would like to know which file system to choose when I format it. With FAT32 there is a limitation on the maximum file size (4GB) that bugs me, since I might save large files such as DVD images which are over 4GB. On the other hand, NTFS lets me save larger files, but seems less compatible with operating systems other than Windows and is also proprietary to Microsoft. Are there other alternatives? Can you give me your advice?

    Read the article

  • Accidentally ejected my Verbatim drive and can't get the icon back

    - by Erin
    Hi, I have Time Machine running on my iMac (OS X 10.5.8) and also have a 1TB Verbatim drive attached that I use as a workspace/scratch disk so I can manipulate large music files before I transfer them. However, when cleaning behind my computer the other day I think I dislodged the connection (or maybe one of the kids hit the eject button, I don't know). I've rebooted many times and it's not reconnected. It doesn't appear in my Disk Utility window and I don't know how to get the icon back! I've looked in Time Machine but it doesn't appear there at all (because it's not supposed to, I think - it's not connected - my mate hooked it up for me and he won't return my calls!). Help. I don't know how to get it back! Sorry for being a plank.

    Read the article

  • Avoid cache overflow in Atempo LiveBackup

    - by Vebjorn Ljosa
    When attempting the initial backup of a new client, Atempo LiveBackup seems to require a very large cache. For instance, a 20 GB cache is not enough to back up a computer that has 100 GB of data. It appears that LiveBackup is adding new files to the cache at a faster rate than it can send them to the server, and when the cache fills up, the backup fails. Aside from removing most of the data from the computer and then adding it back gradually after the initial backup, is there a good workaround? Is it possible to make LiveBackup slow down its scan so as not to fill the cache? Or is it possible to place the cache on an external drive?

    Read the article

  • Limit number of simultaneous connections squid makes to a single server

    - by Ben Voigt
    Note: I am asking about outbound concurrent connection limits, not inbound, which is sufficiently covered on existing questions Modern browsers typically open a large number of simultaneous connections, to take advantage of the fact that TCP fairly shares bandwidth between connections. Of course, this doesn't result in fair sharing between users, so some servers have started penalizing hosts which open too many connections. This limit can be configured client-side (e.g. IE MaxConnectionsPerServer, Firefox network.http.max-connections-per-server), but the method differs for each browser and version, and many users aren't competent to adjust it themselves. So we turn to a squid transparent HTTP proxy for central management of HTTP download. How can the number of simultaneous connections from squid to a remote webserver be limited, so the webserver doesn't perceive it as abuse of concurrent connections? Ideally the limit would be per source address. Squid should accept virtually unlimited concurrent requests from the client browser, and issue them sequentially to the remote server, only N at a time, delaying (but not dropping) the others.

    Read the article

  • How can I run a game server on a computer behind a NAT, if I have another computer not behind a NAT?

    - by Macha
    My home connection is part of a large NAT, outside my control. Inside my home, my router has a NAT, under my control. I control a public facing Linux VPS with one IP address, outside my home network. Ideally, what I'd like to do is set something up so that I connect my home computer to my VPS, and after that port X on my VPS leads to port Y on my home computer, for the purposes of running a game server of a game that does not run under Linux. Is this possible?

    Read the article

  • Deploying Skype for 50 nationwide users with preset usernames

    - by kevyn
    Is there a way to sign up a large group of Skype names at once? Is there a way to give each user a Skype username based on their own e-mail address? What I would like to do is roll out Skype to an office in every county in the UK with a predefined username such as 'mycompanyname-warwickshire', 'mycompanyname-bedfordshire' and so on. Our users are only basic computer users, so I would ideally like this done with as little fuss as possible for them! Thanks in advance. PS: if anyone has a good way of doing this using alternative software, I'm open to suggestions.

    Read the article

  • PostgreSQL disaster recovery options

    - by Alex
    My customer has quite a large PostgreSQL database (the total "data" folder size is 200G) and we are working on a disaster recovery plan. We have identified three different types of disasters so far: hardware outage, too much load, and unintentional data loss due to an erroneously executed bad migration (like DELETE or ALTER TABLE DROP COLUMN). The first two types seem easy to mitigate, but we can't come up with a good mitigation plan for the third. I proposed using ZFS and frequent (hourly) snapshots, but "ZFS" means "OpenIndiana" these days and our ops engineers do not have much expertise in it, so using OpenIndiana introduces another risk. Colleagues try to convince me that restoring from a PostgreSQL PITR backup can be as fast as restoring from a ZFS snapshot, but I highly doubt that replaying, say, 50G of archived WALs can be considered "fast". What other options are we missing? Is ZFS the only viable alternative? Can we get a fast Pg DB restore time in a Linux environment?

    Read the article

  • Filtering serial port I/O

    - by mr odus
    I am currently working on a project where I communicate with hardware via a COM port on its respective PC (Windows XP or 7). It is a fairly large project, and sifting through the log file can be a bit of a pain. This is my current setup: I use PuTTY to do the actual serial communication and write it to a log file, then, using MinGW MSYS, I filter it with tail -f "puttyLog" | grep -i "search term". Is there a better way to do this? I mean specifically filtering the input in real time. Not that mine is terrible, but it still involves having to read from a log, and sometimes there have been hangups where it is delayed for a minute or two. I have used software in the past with a main I/O window and internal filter panels, but I can no longer remember or find it.
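
    One alternative worth considering is reading the COM port directly and filtering in the same process, rather than tailing PuTTY's log. A minimal Python sketch, assuming the third-party pyserial package; the port name, baud rate and search term are placeholders:

        import serial  # third-party: pip install pyserial

        PORT, BAUD = "COM3", 115200        # placeholders - match your hardware
        SEARCH = "search term"

        with serial.Serial(PORT, BAUD, timeout=1) as ser, \
             open("full.log", "a", encoding="utf-8") as log:
            while True:
                line = ser.readline().decode("ascii", errors="replace")
                if not line:
                    continue               # read timed out, nothing received
                log.write(line)            # keep the complete log on disk
                if SEARCH.lower() in line.lower():
                    print(line, end="")    # real-time filtered view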

    Read the article

  • Encrypt EC2 API call

    - by Frank
    I have to host an AMI in the Amazon Marketplace. I need to get the type of instance whenever some user launches the AMI, i.e. whether it is small, medium or large, and based on that I need to make some changes in the AMI when it is created. I can get the instance type with an Amazon API call, but the problem is that the instances created from the AMI will be started by other users, and I cannot use my AWS credentials in the Amazon API. Is there any way I can create an anonymous read-only user that can make only specific types of EC2 API calls? Or can I encrypt my EC2 API credentials so that no one can use them?
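
    One detail that may help here: code running on the instance itself can read its own instance type from the EC2 instance metadata service, which requires no AWS credentials at all. A minimal sketch (Python standard library only; whether this satisfies the Marketplace requirements is a separate question):

        from urllib.request import urlopen

        # The metadata service is only reachable from inside the instance itself.
        URL = "http://169.254.169.254/latest/meta-data/instance-type"

        with urlopen(URL, timeout=2) as resp:
            instance_type = resp.read().decode("ascii")

        print(instance_type)   # e.g. "m1.small" or "m1.large"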

    Read the article

  • Cannot find SQL instance at all (while installing an ASP.NET app on IIS)

    - by giddy
    So I'm really not a DBA, I'm an app dev. I had to install my ASP.NET MVC 3 app on my client's (a large company) IIS 6 + Win2k3 machine, with absolutely no help from their sysadmins. The final problem now is SQL Server 2008 R2: after figuring out how to create a login from Windows, my app and sqlcmd.exe always complain that they cannot find a SQL Server instance! I have all the SQL services (in services.msc) running and set to log on as the local system. I can log in fine with SQL Server Management Studio using Windows authentication. I created my database; my ASP.NET app needs/uses Windows authentication. But for the love of God, whatever I do my app always complains it cannot find the instance. (I also tried running sqlcmd and it complains about the same thing!) My database connection string looks like this: Data Source=machinename\username;Initial Catalog=myDataStore;Integrated Security=True;MultipleActiveResultSets=True Machinename\username is the same thing that shows up on the SQL Server Management Studio login if I choose Windows authentication, right?

    Read the article

  • Distributed Server Monitoring Solution

    - by MaterialEdge
    I belong to an independent IT firm that manages and maintains about 50 business clients' networks, ranging from small 5-system networks to 200+ systems. Because we are unable to directly monitor each server at these locations (distributed over a very large area) on a regular basis, I am looking for a method to monitor and alert us to any problems that may arise, so that we can respond quickly with, hopefully, preventative measures. I'm not sure what solutions are available for this type of situation, but something that utilizes a central server at our business, with all client servers sending alerts or logs to it for daily monitoring, might work best. All these servers are running a Windows Server OS. In your opinion, what would be the best course of action to accomplish this?

    Read the article

  • How to split a file on Windows 2003 using an MS-supported tool

    - by Rune
    Hi, Is it possible to split a large file into smaller files on Windows 2003 using a tool provided/supported/sanctioned by Microsoft? I see that there are a lot of freeware tools (various zip tools) for this task, but I need to move files off a production server and would therefore like to avoid tools I don't know whether I can trust. I would much prefer a tool included in the Windows Server 2003 Resource Kit Tools or something along those lines. Does such a tool exist? Thank you.

    Read the article

  • Using MongoDB + Redis + Apache on the same server in production?

    - by Dayson
    I intend to launch my web app using an 8 GB VPS. It uses MongoDB + Redis for storage/caching and Apache + PHP-FPM for serving requests. Could there be any issues with running Mongo + Redis + Apache on the same server? Would it make more sense to set up 2 x 4 GB VPS servers and keep Mongo on one and Redis + Apache on the other? Should I just start with one server and worry about scaling horizontally later, by delegating the existing server to Mongo in the future (due to its large RAM) and moving the web servers onto multiple smaller VPSes?

    Read the article

  • Social Media Aggregator, Global Update via PowerShell

    - by deanjmiller
    Does anyone know of a way to interface with a social media aggregator using PowerShell? For instance, I would like to update my global status on Digsby using PowerShell; Digsby would then fan the message out to Facebook, Myspace, Twitter, etc. I am open to using any social media aggregator that can do this: Digsby, Seesmic, Ping.fm, TweetDeck, etc. If any of these programs have a COM interface or something like it, I'm sure whoever implements this first will see a large gain in users.

    Read the article

  • Does ZFS cache Compressed or Uncompressed data in a ZFS file-system with compression turned on?

    - by George Bailey
    ZFS supports file-system compression, and it also caches frequently or recently accessed data. If a system has lots of CPU but the underlying data storage system is slow, it is possible that ZFS would perform better with compression turned on. This can easily be tested when writing files by measuring CPU and disk usage and throughput (of course latency may be affected, but this would not be an issue for large files). But what about the cache? If data has to be decompressed every time it is read, then this is probably less of a good idea. Is the cached data compressed? Does anybody have some information on this?

    Read the article

  • Remote file copy util (like rsync) that will take account of data already copied (in this session)

    - by Rory McCann
    Let's say I have a directory with 2 files, both identical and quite large (e.g. 2GB each). I want to rsync that directory to a remote host. As I understand it (and I could be wrong), rsync calculates checksums of files. Surely if it sees 2 files with the same checksum it can just copy the first file, then do a local copy on the remote host for the 2nd file? That would make it faster, no? On a similar note, doesn't rsync hash all the remote files before copying? If it saw a different file with the same hash as a file that was to be transferred, it could do a local copy on the remote host. Does rsync support this sort of thing? Is there some way to turn it on? Is there a tool similar to rsync that will do this sort of 'hash-based' local copy?
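
    To make the "hash based" idea concrete, here is a rough sketch of the bookkeeping being asked about (my own illustration, not an existing rsync feature): hash every local file, transfer one representative per hash group, and recreate the duplicates with local copies on the remote side. The source directory is a placeholder:

        import collections
        import hashlib
        import os

        def sha1_of(path, bufsize=1 << 20):
            h = hashlib.sha1()
            with open(path, "rb") as f:
                while chunk := f.read(bufsize):
                    h.update(chunk)
            return h.hexdigest()

        # Group files under SRC by content hash.
        SRC = "data"
        groups = collections.defaultdict(list)
        for root, _dirs, files in os.walk(SRC):
            for name in files:
                path = os.path.join(root, name)
                groups[sha1_of(path)].append(path)

        # Only the first file of each group has to cross the network; the rest
        # could be plain local copies on the remote host afterwards.
        for digest, paths in groups.items():
            upload, duplicates = paths[0], paths[1:]
            print("transfer", upload)
            for dup in duplicates:
                print("  remote local copy", upload, "->", dup)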

    Read the article

  • rsync doesn't use delta transfer on first run

    - by ockzon
    I'm trying to synchronize a large local directory (with a batch file using rsync 3.0.7 on Cygwin, Windows 7 x64; 30k files, 200gb size) to a remote server (Debian x64 with kernel 2.6, rsyncd 3.0.7) over a slow internet connection (90kbyte/s upload). I know almost all files are identical, and I verified that using md5sum locally and remotely. However, when executing rsync from my local machine, every file gets transferred completely the first time. When I terminate the batch file after a few transfers and run it again, the already transferred files are skipped. But as soon as it gets to a file not yet transferred, it uploads the file as a whole again instead of noticing that the checksum is the same locally and remotely. The batch file calling rsync looks like this (backslashes and line breaks added here for readability):

        c:\cygwin\bin\rsync.exe --verbose --human-readable --progress --stats \
            --recursive --ignore-times --password-file pwd.txt \
            /cygdrive/d/ftp/data/ \
            rsync://[email protected]:33400/data/ | \
            c:\cygwin\bin\tee.exe --append rsync.log

    I experimented using the following parameters in varying combinations, but that didn't help either: --checksum --partial --partial-dir=/tmp/.rsync-partial --compress

    Read the article

  • Auto-crop black margins of scanned images dynamically?

    - by naxa
    I have a notebook that was photocopied and the photocopy scanned, about 200 pages. For various reasons I need to print this material. There are large black areas at the sides of each page (after the page itself ends) - "black margins". I would like to remove the black areas while keeping all the text. Some notes:

        * The even and odd pages have the black part in different places.
        * Notably, there is a white edge outside the black one, too!
        * Most notably, the black areas have no fixed width (I've tried to overlay all the images for even and odd pages separately); their width varies, so the batch algorithm should be able to detect it.

    Is there a way to remove these black-and-white margins automatically, keeping the text?
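
    One batch approach that may work here: for each page, find the columns and rows that are almost entirely black and crop to the region between those bands (this also discards the white edge outside them). A rough Python sketch, assuming Pillow and NumPy; the thresholds and paths are guesses and would need tuning against the actual scans:

        import glob
        import os
        import numpy as np
        from PIL import Image

        BLACK_LEVEL = 60     # grey values below this count as black
        BAND_RATIO = 0.85    # a column/row that is >85% black is a margin band

        def content_span(black_fraction):
            """Content region along one axis: everything between the innermost
            mostly-black band on each side of the centre."""
            n = len(black_fraction)
            bands = np.where(black_fraction > BAND_RATIO)[0]
            left, right = bands[bands < n // 2], bands[bands >= n // 2]
            start = left.max() + 1 if left.size else 0
            stop = right.min() if right.size else n
            return start, stop

        def crop_page(src, dst):
            grey = Image.open(src).convert("L")
            black = np.asarray(grey) < BLACK_LEVEL        # True where black
            x0, x1 = content_span(black.mean(axis=0))     # per-column fraction
            y0, y1 = content_span(black.mean(axis=1))     # per-row fraction
            Image.open(src).crop((x0, y0, x1, y1)).save(dst)

        os.makedirs("cropped", exist_ok=True)
        for path in glob.glob("pages/*.png"):             # placeholder paths
            crop_page(path, os.path.join("cropped", os.path.basename(path)))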

    Read the article
