In SourceSafe I could simply view the history of changes for any given file, then pick any two versions from that list and compare them.
How can I do the same in Subversion via TortoiseSVN?
I have a file like this:
==================================[RUN]===================================
result : Ok
CPU time : 0.016001 s
==================================[RUN]===================================
result : Ok
CPU time : 1.012010 s
I want to number the RUNs like this:
==================================[RUN 1]===================================
result : Ok
CPU time : 0.016001 s
==================================[RUN 2]===================================
result : Ok
CPU time : 1.012010 s
How can I do that using grep or any other command?
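One way to sketch this is with awk rather than grep, since awk can keep a counter across lines. This assumes the log is in a file called results.txt (the name is just a placeholder):

```shell
# Replace each "[RUN]" header with a numbered "[RUN n]" header.
# "results.txt" is a placeholder for your actual log file.
awk '{
    if ($0 ~ /\[RUN\]/) {
        n++
        sub(/\[RUN\]/, "[RUN " n "]")
    }
    print
}' results.txt
```

Redirect the output (for example `> numbered.txt`) to save it; awk only reads the input file, so the original is left untouched.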
I would like to control a bash script like this:
#!/bin/sh
USER1=_parsefromfile_
HOST1=_parsefromfile_
PW1=_parsefromfile_
USER2=_parsefromfile_
HOST2=_parsefromfile_
PW2=_parsefromfile_
imapsync \
--buffersize 8192000 --nosyncacls --subscribe --syncinternaldates --IgnoreSizeErrors \
--host1 $HOST1 --user1 $USER1 --password1 $PW1 --ssl1 --port1 993 --noauthmd5 \
--host2 $HOST2 --user2 $USER2 --password2 $PW2 --ssl2 --port2 993 --noauthmd5 --allowsizemismatch
with parameters from a control file like this:
host1 user1 password1 host2 user2 password2
anotherhost1 anotheruser1 anotherpassword1 anotherhost2 anotheruser2 anotherpassword2
where each line represents one run of the script with the parameters extracted and made into variables.
What would be the most elegant way of doing this?
PAT
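One sketch of this, assuming the control file is whitespace-separated with one account pair per line and is named accounts.txt (both assumptions), is a `while read` loop that splits each line into the six variables and runs imapsync once per line:

```shell
#!/bin/sh
# Run one imapsync per line of accounts.txt (placeholder name).
# Expected columns: host1 user1 password1 host2 user2 password2
while read -r HOST1 USER1 PW1 HOST2 USER2 PW2; do
    # Skip blank lines and comment lines.
    [ -z "$HOST1" ] && continue
    case "$HOST1" in '#'*) continue ;; esac

    imapsync \
        --buffersize 8192000 --nosyncacls --subscribe --syncinternaldates --IgnoreSizeErrors \
        --host1 "$HOST1" --user1 "$USER1" --password1 "$PW1" --ssl1 --port1 993 --noauthmd5 \
        --host2 "$HOST2" --user2 "$USER2" --password2 "$PW2" --ssl2 --port2 993 --noauthmd5 --allowsizemismatch
done < accounts.txt
```

Quoting the variables matters in case a password contains shell metacharacters; note that a password with embedded whitespace would break the column split, in which case a different delimiter (and `IFS`) would be needed.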
If you run hundreds of web sites on your servers, what is the most efficient, automated way to detect whether bots are using your HTML forms to send spam email, even if your forms have some kind of protection?
I downloaded a TV season on iTunes (.m4v files); however, the files are clustered into 3 episodes each. I'd like to chop these up so that each episode is in its own file. Googling around for a while didn't provide any promising leads.
What's the easiest way to split these files up?
Thanks in advance!
I am working on legacy projects with thousands of files spanning more than 10 projects.
I am wondering what other people use for searching for strings in text files.
I am using Windows, and I typically search across 10,000 files to make sure some code is not called from other places. I've tried numerous text-search tools mentioned here, such as Actual Search & Replace, UltraEdit, and Notepad++, but they all take a very long time due to the large number of files they have to look through.
I have a Windows Server 2008 R2 Standard Edition system. It suddenly stopped accepting Remote Desktop connections, and when I tried to connect directly to the console, I was unable to start any applications, getting the error "The page file is too small to complete the action". Under Task Manager, the Performance tab shows "Commit (GB) 127/127". What does this imply?
The system has 32 GB of RAM and 5 RAID disks of 150 GB each.
When I open a PowerPoint file which I received via e-mail in PowerPoint 2010, I get the following error message:
PowerPoint found an error that it can't correct. You should save presentations, exit, and then restart PowerPoint.
Even if you click OK, the error message reappears again and again, making it difficult to quit PowerPoint. Furthermore, it gives no indication of what caused the problem or how to solve it.
I'm running LAMP on Ubuntu 8.04. Apache's username and group are www-data. I put my connection details and AES key in a file in a directory that's not web served. I chown-ed the files to www-data:www-data and set the permissions to 700. Still, the script that require()s these files will only run if I chmod the files to 755. What am I missing?
When setting up an Apache virtual host, I'll occasionally get the following error when attempting to access the site.
Forbidden
You don't have permission to access / on this server.
Is there any method (or tool) that will tell me why Apache is denying access (local rule in httpd.conf, file permissions, etc.)?
I'm not looking for help with a specific configuration; instead, I'm looking for a way to have the computer tell me what's wrong with my system and/or configuration.
I have some m4v files I made with Handbrake where the AC3 audio channel was the first one, and the stereo was the second. This causes problems with some things (like Quicktime) so I want to repackage the file such that the stereo track is the first audio track. I don't want to re-encode things.
Can I do this? Can I do it for free?
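Yes, and for free: ffmpeg, for example, can remux an MP4 container without re-encoding by remapping the streams. A sketch (file names are placeholders, and `-map 0:a:1` / `-map 0:a:0` assume there are exactly two audio tracks, with AC3 first):

```shell
# Copy every stream without re-encoding, but emit the second audio
# track (stereo) before the first (AC3). File names are placeholders.
ffmpeg -i input.m4v -map 0:v -map 0:a:1 -map 0:a:0 -c copy output.m4v
```

The `-c copy` is what avoids re-encoding; only the container is rewritten, so this is fast and lossless.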
I've got a PDF file which has a single 'ZapfDingbat' font character (a big tick mark) in it. I opened it in Adobe Acrobat Professional and tried to use 'touch up' tool to change this character.
But I can't for some reason. How do I go about this?
I'm trying to get nginx to play nice with php-cgi, but it's not quite working how I'd like. I'm using some set variables to allow for dynamic host names -- basically anything .local. I know that part is working because I can access static files properly; however, PHP files don't work. I get the standard "No input file specified." error, which normally occurs when the file doesn't exist, but it definitely does exist, and the path is correct because I can access static files in the same path. It could be a permissions issue, but I'm not sure how: I'm running this on Windows under my own user account, so it should have permission unless php-cgi is running under a different user without me telling it to.
Here's my config:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    gzip on;

    server {
        # Listen for HTTP
        listen 80;

        # Match to local host names.
        server_name *.local;

        # We need to store a "cleaned" host.
        set $no_www $host;
        set $no_local $host;

        # Strip out www.
        if ($host ~* www\.(.*)) {
            set $no_www $1;
            rewrite ^(.*)$ $scheme://$no_www$1 permanent;
        }

        # Strip local for directory names.
        if ($no_www ~* (.*)\.local) {
            set $no_local $1;
        }

        # Define default path handler.
        location / {
            root ../Users/Stephen/Documents/Work/$no_local.com/hosts/main/docs;
            index index.php index.html index.htm;

            # Route non-existent paths through Kohana system router.
            try_files $uri $uri/ /index.php?kohana_uri=$request_uri;
        }

        # Pass PHP scripts to FastCGI server listening on 127.0.0.1:9000
        location ~ \.php$ {
            root ../Users/Stephen/Documents/Work/$no_local.com/hosts/main/docs;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include fastcgi.conf;
        }

        # Prevent access to system files.
        location ~ /\. {
            return 404;
        }

        location ~* ^/(modules|application|system) {
            return 404;
        }
    }
}
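For what it's worth, "No input file specified." usually means the SCRIPT_FILENAME handed to php-cgi resolves to a path it cannot open. One sketch of the PHP location with the root made absolute (the C:/ prefix is an assumption -- substitute wherever the docs directory actually lives) and SCRIPT_FILENAME stated explicitly so the exact path can be checked:

```nginx
location ~ \.php$ {
    # A relative root is resolved against the nginx prefix, which can
    # produce a path php-cgi cannot open; an absolute path avoids that.
    # The C:/ prefix here is an assumption -- use your real docs path.
    root C:/Users/Stephen/Documents/Work/$no_local.com/hosts/main/docs;

    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    include fastcgi.conf;

    # fastcgi.conf normally sets this already; repeating it explicitly
    # makes it easy to confirm what path is handed to php-cgi.
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
```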
I have a problem with IIS: if I request a file with an unknown extension, such as .flv, it doesn't serve it and displays "page not found".
I can configure it in the Mime Types, but I want to be able to specify a wildcard, like in IIS 7, where I can allow all types under "Request Filtering".
Is it possible?
Thanks!
In a load balanced environment, is it necessary to have all of the web servers in the DMZ? Or will just having the Load Balancer in the DMZ achieve the desired security? If it matters, the web server and application server are the same -- GF, Tomcat fronted by httpd on the same server, OAS, etc...
LB - WEB/APPLICATION - DB
Also, would the setup be different if it was
LB - Web Server - Application Server - DB
Thanks,
Bradford
What are some good, free Usenet servers out there, for accessing non-binary groups, available to the general public?
I'm particularly interested in the comp.* domain.
P.S. No, as much as I love Google, Google Groups just doesn't cut it in this segment.
How can I get the NTFS5 file system in XP? When I install XP on my PC, it offers only "format using FAT32" and "format using NTFS". Which version of XP contains NTFS5?
I have an Excel workbook with a lot of rows in it. I filter it on the values of one column and then want to save the filtered results to a CSV file. When I do "Save As..." I get all the rows. I can copy and paste the filtered results to another sheet and save from there, but I'd rather not. It's Excel 2003, primarily.
Is there a way to perform some action (vbs, batch, exe, etc) upon the successful upload of a file using Windows Server 2008 R2 built-in IIS FTP server/service?
In a Linux shell, I want to make sure that a certain set of files all begin with <?, with that exact string and no other characters at the beginning. How can I use grep or some other tool to express "file begins with"?
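grep is line-oriented, so checking the very start of the file is arguably cleaner with `head -c`, which reads raw bytes. A sketch, assuming the files are matched by a *.php glob (adjust to your actual set):

```shell
# Print the names of files whose first two bytes are NOT exactly "<?".
# The *.php glob is an assumption -- use whatever matches your file set.
for f in *.php; do
    case "$(head -c 2 "$f")" in
        '<?') ;;              # file starts correctly, say nothing
        *)    echo "$f" ;;    # flag the offender
    esac
done
```

A grep-only variant like `head -1 "$f" | grep -q '^<?'` checks only the first line, which is usually equivalent but would miss a file that starts with a blank line; the byte check above does not.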
Is there a command that can be used from the command line to output a list of the mapped network drives on the local system and their location on the network to a txt file? This will only be used on Windows-based systems running XP.
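On XP, `net use` with no arguments already prints each mapped drive letter alongside its UNC path, so redirecting its output is likely the simplest approach (drives.txt is a placeholder name):

```bat
rem List mapped network drives and their UNC paths into a text file.
net use > drives.txt
```

The output includes status and provider columns as well; if you need only the two columns, it would have to be post-processed, but as a record of what is mapped where it is usually sufficient.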
Is there a way to manipulate icons on a Mac OS X file from either Automator or the terminal?
In my case, I want to remove custom icons (that is, the same as doing 'Get Info' and Edit-Cut on the icon) from a large number of files.
I have all my databases in full recovery, and log backups run every 15 minutes, so my log files are usually pretty small. The question is: if a nightly operation causes lots of transactions and makes my log files grow, should I shrink them back down afterward?
Does having a gigantic log file negatively affect database performance? Disk space isn't an issue at this time.
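For context on the trade-off: the expensive part is typically the growth event itself (the new log space must be initialized while transactions wait), so shrinking a log that the nightly job will just regrow every day tends to be counterproductive. If you do want to shrink after a genuine one-off operation, a T-SQL sketch (the database name, logical log file name, backup path, and 1024 MB target are all placeholders):

```sql
USE MyDB;

-- Back up the log first so the inactive portion can be cleared.
BACKUP LOG MyDB TO DISK = N'D:\Backups\MyDB_log.trn';

-- Shrink the log file back down; the target size is given in MB.
DBCC SHRINKFILE (N'MyDB_log', 1024);
```

The logical file name can be found with `sp_helpfile` if you are unsure of it.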