Search Results

Search found 11409 results on 457 pages for 'large teams'.


  • Random password generator: many, in columns, on command line, in Linux

    - by Adam Backstrom
    A while back, I came across a random password generator for the command line that displayed a grid of "memorable" passwords. Output was something like this:

        adam@host:~$ CantRememberThisCommand
        lkajsdf aksjdfl kqwrupo
        qwerpoi qwerklw zxlkelq

    The idea was that you could run this utility while someone was looking over your shoulder, and still pick a password with some level of secrecy due to the large number of choices. I cannot remember what this utility was called. Oh interwebs, can you help?
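
    Not an answer from the page, but the behaviour described closely matches pwgen, which prints a screenful of pronounceable passwords in columns by default (the lengths and counts below are arbitrary):

        # a grid of 8-character "memorable" passwords, many per screen
        pwgen 8 60
        # fully random (non-pronounceable) 12-character passwords instead
        pwgen -s 12 40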

    Read the article

  • Determine the percentage of a file that has been ftp'd from client to server

    - by klwillie
    I want to ftp a large file from a Windows client to a Windows server, using their IP addresses. This is on an isolated network with no internet access. While the file is transferring, I would like to determine how many bytes have been received by the server. I then would like to use this information to determine, in real time, the percentage of the file that has been transferred to the server. Any recommendations as to the ftp command syntax and C# code to achieve this?

    Read the article

  • Linux file copy with ETA?

    - by bobby
    I'm copying a large number of files between disks. There's approximately 16 GB of data. I'd like to see progress information, and even an estimated time of completion, from the command line. Any advice?
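
    Not from the original question, but one commonly used option is rsync, which reports progress as it copies (source and destination paths are placeholders):

        # per-file progress, transfer rate and ETA while copying between disks
        rsync -ah --progress /mnt/source/ /mnt/destination/
        # rsync 3.1 and newer can also report overall progress for the whole run
        rsync -ah --info=progress2 /mnt/source/ /mnt/destination/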

    Read the article

  • Is it possible to copy a set of files, but automatically skip if file already exists?

    - by awe
    I know that the copy command has an option to automatically replace a file if it already exists (/Y), but I want to know if there is a way to copy files only if they do not already exist. I do not know the actual file names in the batch code, as I copy from the source using wildcards in the copy command: copy *.zip c:\destination The reason I want this instead of automatic overwrite is that the files are large, and skipping existing files would save a lot of execution time.
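
    Not from the original post, but a sketch of one way to get skip-if-exists behaviour with robocopy, which ships with recent Windows versions (the source path is a placeholder):

        rem /XC /XN /XO exclude files that already exist at the destination in any
        rem form (changed, newer or older), so only the missing .zip files are copied
        robocopy "D:\source" "C:\destination" *.zip /XC /XN /XO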

    Read the article

  • How do I prevent TCP connection freezes over an OpenVPN network?

    - by Jason R
    New details added at the end of this question; it's possible that I'm zeroing in on the cause.

    I have a UDP OpenVPN-based VPN set up in tap mode (I need tap because I need the VPN to pass multicast packets, which doesn't seem to be possible with tun networks) with a handful of clients across the Internet. I've been experiencing frequent TCP connection freezes over the VPN. That is, I will establish a TCP connection (e.g. an SSH connection, but other protocols have similar issues), and at some point during the session, it seems that traffic will cease being transmitted over that TCP session. This seems to be related to points at which large data transfers occur, such as if I execute an ls command in an SSH session, or if I cat a long log file.

    Some Google searches turn up a number of answers like this previous one on Server Fault, indicating that the likely culprit is an MTU issue: that during periods of high traffic, the VPN is trying to send packets that get dropped somewhere in the pipes between the VPN endpoints. The above-linked answer suggests using the following OpenVPN configuration settings to mitigate the problem:

        fragment 1400
        mssfix

    This should limit the MTU used on the VPN to 1400 bytes and fix the TCP maximum segment size to prevent the generation of any packets larger than that. This seems to mitigate the problem a bit, but I still frequently see the freezes. I've tried a number of sizes as arguments to the fragment directive: 1200, 1000, 576, all with similar results. I can't think of any strange network topology between the two ends that could trigger such a problem: the VPN server is running on a pfSense machine connected directly to the Internet, and my client is also connected directly to the Internet at another location.

    One other strange piece of the puzzle: if I run the tracepath utility, then that seems to band-aid the problem. A sample run looks like:

        [~]$ tracepath -n 192.168.100.91
         1:  192.168.100.90    0.039ms pmtu 1500
         1:  192.168.100.91   40.823ms reached
         1:  192.168.100.91   19.846ms reached
             Resume: pmtu 1500 hops 1 back 64

    The above run is between two clients on the VPN: I initiated the trace from 192.168.100.90 to the destination of 192.168.100.91. Both clients were configured with fragment 1200; mssfix; in an attempt to limit the MTU used on the link. The above results would seem to suggest that tracepath was able to detect a path MTU of 1500 bytes between the two clients. I would assume that it would be somewhat smaller due to the fragmentation settings specified in the OpenVPN configuration. I found that result somewhat strange. Even stranger, however: if I have a TCP connection in the stalled state (e.g. an SSH session with a directory listing that froze in the middle), then executing the tracepath command shown above causes the connection to start up again! I can't figure out any reasonable explanation for why this would be the case, but I feel like this might be pointing toward a solution to ultimately eradicate the problem. Does anyone have any recommendations for other things to try?

    Edit: I've come back and looked at this a bit further, and have found only more confounding information. I set the OpenVPN connection to fragment at 1400 bytes, as shown above. Then, I connected to the VPN from across the Internet and used Wireshark to look at the UDP packets that were sent to the VPN server while the stall occurred. None were greater than the specified 1400-byte count, so the fragmentation seems to be functioning properly.

    To verify that even a 1400-byte MTU would be sufficient, I pinged the VPN server using the following (Linux) command:

        ping <host> -s 1450 -M do

    This (I believe) sends a 1450-byte packet with fragmentation disabled (I at least verified that it didn't work if I set it to an obviously-too-large value like 1600 bytes). These seem to work just fine; I get replies back from the host with no issue. So, maybe this isn't an MTU issue at all. I'm just confused as to what else it might be!

    Edit 2: The rabbit hole just keeps getting deeper: I've now isolated the problem a bit more. It seems to be related to the exact OS that the VPN client uses. I have successfully duplicated the problem on at least three Ubuntu machines (versions 12.04 through 13.04). I can reliably duplicate an SSH connection freeze within a minute or so by just cat-ing a large log file. However, if I do the same test using a CentOS 6 machine as a client, then I don't see the problem! I've tested using the exact same OpenVPN client version as I was using on the Ubuntu machines. I can cat log files for hours without seeing the connection freeze.

    This seems to provide some insight as to the ultimate cause, but I'm just not sure what that insight is. I have examined the traffic over the VPN using Wireshark. I'm not a TCP expert, so I'm not sure what to make of the gory details, but the gist is that at some point, a UDP packet gets dropped due to the limited bandwidth of the Internet link, causing TCP retransmissions inside the VPN tunnel. On the CentOS client, these retransmissions occur properly and things move on happily. At some point with the Ubuntu clients, though, the remote end starts retransmitting the same TCP segment over and over (with the transmit delay increasing between each retransmission). The client sends what looks like a valid TCP ACK to each retransmission, but the remote end still continues to transmit the same TCP segment periodically. This extends ad infinitum and the connection stalls.

    My question here would be: Does anyone have any recommendations for how to troubleshoot and/or determine the root cause of the TCP issue? It's as if the remote end isn't accepting the ACK messages sent by the VPN client. One common difference between the CentOS node and the various Ubuntu releases is that Ubuntu has a much more recent Linux kernel version (from 3.2 in Ubuntu 12.04 to 3.8 in 13.04). A pointer to some new kernel bug maybe? I'm assuming that if that were so, then I wouldn't be the only one experiencing the problem; I don't think this seems like a particularly exotic setup.
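
    Not from the original question, but for the troubleshooting step being asked about, capturing the stalled session on both ends and comparing the captures packet by packet is usually the most direct route (the interface name, peer address and port below are assumptions):

        # on the Ubuntu client, capture the frozen SSH session inside the tunnel
        tcpdump -ni tap0 -s 0 -w stall-client.pcap host 192.168.100.91 and port 22
        # while the session is stalled, dump the kernel's view of that TCP
        # connection (RTO, retransmit counters, window sizes)
        ss -tin dst 192.168.100.91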

    Read the article

  • Prefork or Worker MPM for amazon xlarge server?

    - by Netismine
    I'm trying to determine whether the prefork or worker Apache MPM would be better for the server I'm working on, which is an Amazon Extra Large instance (15 GB memory, 8 EC2 Compute Units: 4 virtual cores with 2 EC2 Compute Units each) that will run a Magento website with about 50 active users at once. The site serves a lot of images, with about 45 requests per page. Images sometimes hang, so it seems worker would be a better option? Thanks
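
    Not from the original post, but a quick way to see which MPM the instance is currently running (assuming a standard Apache install):

        # shows which MPM the running Apache build is using
        apachectl -V | grep -i mpm

    Worth keeping in mind: with Magento the PHP setup usually decides this. mod_php is normally paired with prefork because PHP extensions are not guaranteed to be thread-safe, while worker (or event) is normally paired with PHP served over FastCGI/PHP-FPM.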

    Read the article

  • Using ffmpeg to cut up video

    - by Neil
    I am using ffmpeg like this, e.g.:

        ffmpeg -i input.wmv -ss 60 -t 60 -acodec copy -vcodec copy output.wmv

    to cut out a section of a large file. The -ss part works fine but the -t is ignored. That is, it correctly removes the first -ss seconds but then just keeps going to the end of the input with the copy. Is there a way to use ffmpeg to cut off the end of a video without recoding it?
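
    Not from the original post, but one variation worth trying (filenames mirror the command above): older ffmpeg builds behave differently depending on whether -ss appears before or after -i, and with stream copy the cut points can only land on keyframes, so the result may run slightly long.

        # seek on the input side, then limit the output duration; -acodec/-vcodec
        # copy means no recoding, so the cut snaps to the nearest keyframes
        ffmpeg -ss 60 -i input.wmv -t 60 -acodec copy -vcodec copy output.wmv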

    Read the article

  • Security camera for HQ and remote sites?

    - by Atlas
    We want to install security cams at the HQ site and 3 remote sites. Basically: (1) each site would have N cams, and (2) each site should have a DVR locally to record everything. What we want is for HQ to be able to see the live/recorded video of each remote site, including itself. Preferably HQ would have 1 large screen displaying all cams from itself and the remote sites, say showing them in 32x32 cells. Does such a system exist?

    Read the article

  • How can I detect hard drive failures?

    - by Francis
    I am in charge of a large number of Windows servers. Recently, many have been reporting hard drive errors with event codes 11 and 55. CHKDSK indicates that the drives are fine most of the time. What other diagnostic tools could I use to more accurately detect hard drive failures? Could these Windows events be false positives? I have already evaluated S.M.A.R.T., and it seems to have significant sensitivity and specificity issues.
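
    Not from the original question, but one way to pull the offending events out for review across the fleet (the event IDs and log name come from the question; the count and ordering flags are arbitrary):

        rem dump the 50 most recent disk/NTFS errors (IDs 11 and 55) as text
        wevtutil qe System /q:"*[System[(EventID=11 or EventID=55)]]" /rd:true /f:text /c:50

    Cross-referencing which physical drives those events name against the drive vendor's own diagnostics is usually more conclusive than CHKDSK alone.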

    Read the article

  • What's the easiest way to duplicate a portion of a directory structure onto an external drive?

    - by Jon Cage
    I'm trying to move a large chunk of data from one of our servers onto an external drive for delivery to Amazon Glacier storage. To do that, I'd like to copy a chunk of the server, preserving the directory structure. I.e. move this:

        \\MyServer\Some\Longwinded\Path\TheDataIWantToCopy
        \\MyServer\Some\Longwinded\Path\TheDataIWantToCopy\First bit of data\DataFile1.dat

    to this:

        D:\
        D:\First bit of data\DataFile1.dat
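
    Not from the original post, but a sketch of one way to do this with robocopy (the paths are the ones above; the retry switches are arbitrary):

        rem copy the whole tree, including empty subdirectories, preserving data,
        rem attributes and timestamps; limited retries so one bad file doesn't
        rem stall the run for hours
        robocopy "\\MyServer\Some\Longwinded\Path\TheDataIWantToCopy" D:\ /E /COPY:DAT /R:2 /W:5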

    Read the article

  • How to set a custom error message for Apache 2.2

    - by ffffff
    Apache 2.2's default 414 message is:

        Request-URI Too Large
        The requested URL's length exceeds the capacity limit for this server.

    I want to set a custom message, so in httpd.conf I set:

        ErrorDocument 414 /error/HTTP_REQUEST_URI_TOO_LARGE.html.var

    but it does not seem to take effect. How can I set a custom 414 error message for Apache 2.2?

    Read the article

  • What is the most efficient way to scan in thousands of pictures? [closed]

    - by leora
    My parents have a number of really large albums, and in a lot of cases the pictures are starting to fade, so we thought it would be a good idea to scan in all the pictures and move them to online albums. The issue is that the task is daunting, given that there are thousands of pictures. Are there any services, or ideas on how to scan in albums of pictures, that won't take up hundreds of man-hours for me?

    Read the article

  • CRC error when extracting to SSD from 2nd HDD

    - by gbn
    Hello. I have a large RAR file (split up) containing an ISO on my 2nd HDD. When I extract it:

        to the same HDD, it's OK
        to the system/OS SSD, I get CRC errors

    I've checked memory, run memtests, checked wires, etc. I have no other issues; only with this one RAR file. Any ideas please?
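
    Not from the original post, but one quick check that separates the archive from the extraction target (the filename is a placeholder):

        # test the multipart archive itself, independent of where it is extracted
        # (WinRAR's GUI "Test archive" function does the same thing)
        unrar t archive.part1.rar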

    Read the article

  • How can I prepare a TortoiseSVN installer to use the serf HTTP library instead of neon?

    - by Sam Johnson
    I'm going to be distributing instructions on how to access our new Subversion repository with TortoiseSVN. Because it's hosted on Windows, and we have some large files in the repository, we have to use the Serf HTTP library instead of neon. This is normally specified by manually editing the Subversion "servers" file on the client machine and adding the line:

        http-library=serf

    Is there a way I can customize the TortoiseSVN installer to do this automatically? I'm just trying to get it up and running as easily as possible for our new SVN users.
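
    Not from the original question, and only a possibility to verify: Subversion on Windows can also read its runtime configuration from the registry, so rather than customizing the installer, a login script or Group Policy could push the setting per user. The registry path below is an assumption based on Subversion's documented registry layout, so check it against your Subversion version before relying on it.

        rem push the http-library setting into the per-user Subversion config
        rem (registry path is an assumption; verify against your SVN docs)
        reg add "HKCU\Software\Tigris.org\Subversion\Servers\global" /v http-library /t REG_SZ /d serf /f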

    Read the article

  • excel / open office - append an incrementing value to all non-unique fields

    - by mheavers
    I have a large table of about 7500 store names. I need to search through those names and, if they are not unique, append an incrementing value, for example: store_1, store_2, etc. Anyone know how to do this? For another project, I was using this:

        =J1&IF(COUNTIF($J$1:J1,J1)>1,COUNTIF($J$1:J1,J1),"")

    but in OpenOffice this gives an error, and in Google Spreadsheets it times out because my database is so big. Any suggestions?

    Read the article

  • Improve file transfer speed between Windows PCs and servers

    - by Geotarget
    I've set up a server which I've connected to multiple PCs in my workplace. Sadly, data transfer speeds max out at about 3 MB/sec per connection, which is slow for file transfers, especially when transferring large files. I'm using Windows file sharing; the server is Windows Server 2008 (2 GHz CPU, 1 GB RAM) and the client PCs mostly run Windows 7. How can I detect bottlenecks in my network and improve file sharing speed within the network?
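
    Not from the original post, but a common first step is to measure raw network throughput separately from SMB and disk, for example with iperf3 (the IP address and test length are placeholders):

        # on the server
        iperf3 -s
        # on one of the Windows 7 clients
        iperf3 -c 192.168.1.10 -t 30

    If iperf3 shows near line-rate throughput while SMB copies still crawl at ~3 MB/s, the bottleneck is more likely the server's disks, SMB settings, or its 1 GB of RAM than the cabling or switch.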

    Read the article

  • Linux disk usage report inconsistency after removing a file; cPanel inaccurate disk usage report

    - by brando
    relevant software: Red Hat Enterprise Linux Server release 6.3 (Santiago), cPanel 11.34.0 (build 7)

    background and problem: I was getting a disk usage warning (via cPanel) because /var seemed to be filling up on my server. The assumption would be that there was a log file growing too large and filling up the partition. I recently removed a large log file and changed my syslog config to rotate the log files more regularly. I removed something like /var/log/somefile and edited /etc/rsyslog.conf. This is the reason I was suspicious of the disk usage warning issued by cPanel: it didn't seem right. This is what df was reporting for the partitions:

        $ [/var]# df -h
        Filesystem   Size  Used  Avail Use% Mounted on
        /dev/sda2    9.9G  511M  8.9G    6% /
        tmpfs        5.9G     0  5.9G    0% /dev/shm
        /dev/sda1     99M   53M   42M   56% /boot
        /dev/sda8    883G  384G  455G   46% /home
        /dev/sdb1    9.9G  151M  9.3G    2% /tmp
        /dev/sda3    9.9G  7.8G  1.6G   84% /usr
        /dev/sda5    9.9G  9.3G  108M   99% /var

    This is what du was reporting for the /var mount point:

        $ [/var]# du -sh
        528M    .

    Clearly something funky was going on. I had a similar kind of reporting inconsistency in the past, and after I restarted the server the df reporting seemed to be correct. I decided to reboot the server to see if the same thing would happen. This is what df reports now:

        $ [~]# df -h
        Filesystem   Size  Used  Avail Use% Mounted on
        /dev/sda2    9.9G  511M  8.9G    6% /
        tmpfs        5.9G     0  5.9G    0% /dev/shm
        /dev/sda1     99M   53M   42M   56% /boot
        /dev/sda8    883G  384G  455G   46% /home
        /dev/sdb1    9.9G  151M  9.3G    2% /tmp
        /dev/sda3    9.9G  7.8G  1.6G   84% /usr
        /dev/sda5    9.9G  697M  8.7G    8% /var

    This looks more like what I'd expect to get. For consistency, this is what du reports for /var:

        $ [/var]# du -sh
        638M    .

    question: This is a nuisance. I'm not sure where the disk usage reports issued by cPanel get their info, but it clearly isn't correct. How can I avoid this inaccurate reporting in the future? It seems like df reporting the wrong disk usage is a strong indicator of the source of the problem, but I'm not sure. Is there a way to 'refresh' the filesystem somehow so that the df report is accurate without restarting the server? Any other ideas for resolving this issue?
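
    Not from the original question, but the usual cause of df and du disagreeing after a log file is deleted is a process (often the syslog daemon itself) still holding the deleted file open; the space is only released when that process closes it. A quick way to confirm and fix it without a reboot:

        # list files on /var that are open but already unlinked (link count 0)
        lsof +L1 /var
        # restarting whichever daemon still holds the old log frees the space
        # (rsyslog here, matching the /etc/rsyslog.conf edit in the question)
        service rsyslog restart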

    Read the article

  • Moving directories full of files over the top

    - by JavaRocky
    I took a backup of a directory which has a number of directories and files inside it. Recently some files have gone missing. I would like to move over just the missing files. I prefer moving files instead of copying, as space is at a premium on this particular box and the files are quite large. How can I achieve this?
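
    Not from the original post, but assuming a Linux/Unix box with rsync available (the paths are placeholders), one way to restore only the missing files while freeing the backup copy as you go:

        # copy only files that don't already exist in the live tree, and delete
        # each one from the backup as soon as it has been transferred
        rsync -av --ignore-existing --remove-source-files /backup/dir/ /live/dir/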

    Read the article

  • Varnish: send to the client while caching

    - by osmano807
    I want Varnish to send the file to the client while it is still downloading it into the cache. From what I am seeing, it first downloads the whole file and then sends it, which with very large files is slow. (Sorry for my English; I'm using an online translator.)

    Read the article

  • Is there extensible structured file analyzer, like network analysis tools?

    - by ???
    There are many network analysis tools like Wireshark, Sniffer Pro, and OmniPeek which can dump packet data in a structured manner. I'm writing my own file analyzer for general purposes, which can dump JPEG, PNG, EXE, ELF, ASN.1 DER-encoded files, etc. in a tree style. There are so many file formats in the world that I can't handle them all. So I'm wondering if there's some software already out there with a pluggable architecture and a large, established file-format repository?

    Read the article

  • Burn 30GB zip file to DVD

    - by Joel Coehoorn
    I have a 30 GB zip file containing an archive of digital materials available in the school library that I want to burn to DVD. Of course, 30 GB is far too large for a single DVD, and the content is already zipped. I'm open to ideas, but I'm leaning towards suggestions that will help me automatically spread the file over multiple DVDs, including a simple program to stitch it back together again later.
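
    Not from the original question, but one common approach is to wrap the existing zip in DVD-sized volumes without recompressing it, for example with the 7-Zip command line (paths are placeholders; 4480 MB volumes fit just under single-layer DVD capacity):

        rem split into 4480 MB volumes, stored (-mx=0) since the data is already zipped
        "C:\Program Files\7-Zip\7z.exe" a -v4480m -mx=0 D:\staging\library.7z C:\archive\library.zip
        rem burn one .001/.002/... volume per disc; extracting library.7z.001 later
        rem pulls in the remaining volumes automatically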

    Read the article

  • Prevent abuse of public HTTP directory meant for images

    - by sutre
    The situation: each user has their own public HTTP directory, meant for images only. This could easily be abused by users using it to serve large files, wasting bandwidth. The question: is there any fairly simple way to prevent this abuse? Either by configuring the webserver to serve only images, restricting file size, or some other method.

    Read the article

  • Slow down individual connections passing through a Linux router?

    - by davr
    We have a Linux server acting as a router/firewall for our office. Occasionally someone will upload a large file that takes up all our bandwidth. I don't want to implement any complex rules or traffic shaping, but I'm wondering if there is a way to slow down a single connection on the spot? I found tcpnice, but it doesn't slow down the transfers in my testing.
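
    Not from the original post, but a sketch of one way to cap a single host on the spot with tc on the router (the interface name, host IP and rates are all assumptions; the whole thing can be removed as soon as the upload finishes):

        # put the offending host's traffic into a 1 Mbit/s HTB class on the
        # WAN-facing interface (eth0 assumed); everything else stays full speed
        tc qdisc add dev eth0 root handle 1: htb default 20
        tc class add dev eth0 parent 1: classid 1:20 htb rate 100mbit
        tc class add dev eth0 parent 1: classid 1:10 htb rate 1mbit ceil 1mbit
        tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip src 192.168.1.55/32 flowid 1:10
        # tear it all down again afterwards
        tc qdisc del dev eth0 root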

    Read the article
