Search Results

Search found 39200 results on 1568 pages for 'zip files'.

Page 79/1568 | < Previous Page | 75 76 77 78 79 80 81 82 83 84 85 86  | Next Page >

  • Change Check Out Folder for checked out files in SourceSafe

    - by Town
    I had to rebuild my machine and went from XP to Windows 7. I've now got a bit of an issue: I had files checked out in SourceSafe previously, which I still have copies of in the local folder on my new install. However, SourceSafe still has them checked out to the old XP folder (c:\documents and settings etc) whereas the files now reside in c:\Users. Pending Checkins in Visual Studio now thinks I have nothing checked out, and SourceSafe declares that the files are checked out to me under the c:\documents and settings\ path. Is there any way to tell SourceSafe to simply "look over there" for the files instead? It seems to work with individually undoing and redoing checkout on the files, but that's a lengthy process and one I'd like to avoid if possible. If I simply checkout the files individually then it lists them as checked out to me twice, one for each of the locations. Any pointers would be very much appreciated!

    Read the article

  • How to enable error log in lighttpd properly?

    - by Tomaszs
    I have a CentOS 5 system with lighttpd and FastCGI enabled. It logs access but does not log errors. I get Internal Server Error 500 with nothing in the log, and when I try to open a non-existing file there is also nothing in the error log. How do I enable it properly? Below is the list of modules I've enabled: server.modules = ( "mod_rewrite", "mod_redirect", "mod_alias", # "mod_access", # "mod_cml", # "mod_trigger_b4_dl", # "mod_auth", "mod_status", "mod_setenv", "mod_fastcgi", # "mod_webdav", # "mod_proxy_core", # "mod_proxy_backend_fastcgi", # "mod_proxy_backend_scgi", # "mod_proxy_backend_ajp13", # "mod_simple_vhost", # "mod_evhost", # "mod_userdir", # "mod_cgi", # "mod_compress", # "mod_ssi", # "mod_usertrack", # "mod_expire", # "mod_secdownload", # "mod_rrdtool", "mod_accesslog" ) Here are the debugging settings: ## enable debugging #debug.log-request-header = "enable" #debug.log-response-header = "enable" #debug.log-request-handling = "enable" debug.log-file-not-found = "enable" #debug.log-condition-handling = "enable" Here are the error and access log paths: ## where to send error-messages to server.errorlog = "/home/lxadmin/httpd/lighttpd/error.log" #### accesslog module accesslog.filename = "/home/lxadmin/httpd/lighttpd/ligh.log" FastCGI settings: fastcgi.debug = 1 fastcgi.server = ( ".php" => (( "bin-path" => "/usr/bin/php-cgi", "socket" => "/tmp/php.socket", "max-procs" => 12, "bin-environment" => ( "PHP_FCGI_CHILDREN" => "2", "PHP_FCGI_MAX_REQUESTS" => "500" ) ))) And in an included config file I have: server.errorlog = "/home/httpd/mywebsite.com/stats/mywebsite.com-error_log" As for the log files themselves: /home/httpd/mywebsite.com/stats/ -rw-r--r-- 1 apache apache 5173239 May 16 11:34 mywebsite.com-custom_log -rwxrwxrwx 1 root root 0 Mar 27 2009 mywebsite.com-error_log /home/lxadmin/httpd/lighttpd/ -rwxrwxrwx 1 apache apache 2184 Apr 22 22:59 error.log -rwxrwxrwx 1 apache apache 6088621 May 16 11:26 ligh.log I gave the error logs chmod 777 to check whether permissions were the issue, but apparently they are not. So my question is: what do I need to do to get the error log working?

    Read the article

  • C# Binary File Compare

    - by Simon Farrow
    I'm in a situation where I want to compare two binary files. One of them is already stored on the server with a pre-calculated Crc32 in the database from when I stored it originally. I know that if the Crc is different then the files are definitely different. However, if the Crc is the same I don't know that the files are the same. So what I'm looking for is a nice efficient way of comparing the two streams: one from the posted file and one from the file system. I'm not an expert on streams, but I'm well aware that I could easily shoot myself in the foot here as far as memory usage is concerned. Any help is greatly appreciated.
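
    A minimal sketch of a chunked comparison (the helper names and the buffer size are illustrative, not from the question); reading both streams in fixed-size blocks keeps memory usage bounded regardless of file size:

      // requires using System.IO;
      static bool StreamsAreEqual(Stream a, Stream b)
      {
          const int BufferSize = 64 * 1024;
          var bufA = new byte[BufferSize];
          var bufB = new byte[BufferSize];
          while (true)
          {
              // Fill() loops until the buffer is full or the stream ends, so a short
              // read from one stream does not cause a false mismatch.
              int readA = Fill(a, bufA);
              int readB = Fill(b, bufB);
              if (readA != readB) return false;   // different lengths
              if (readA == 0) return true;        // both streams ended together
              for (int i = 0; i < readA; i++)
                  if (bufA[i] != bufB[i]) return false;
          }
      }

      static int Fill(Stream s, byte[] buf)
      {
          int total = 0;
          while (total < buf.Length)
          {
              int read = s.Read(buf, total, buf.Length - total);
              if (read == 0) break;               // end of stream
              total += read;
          }
          return total;
      }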

    Read the article

  • Using the Visual Studio .ncb file for reflection

    - by Rushi
    I am developing a visual game level editor in C++. For this I want a reflection (RTTI) mechanism to know class attributes at runtime. I am currently using PDB files for this. But with PDB files I can't retrieve the actual code line, so I miss the extra information given for an attribute as a comment. Visual Studio uses NCB files for IntelliSense. So would it be a better idea to use NCB instead of PDB? If yes, how do I retrieve information from NCB files? Is there an SDK like the DIA SDK for them?

    Read the article

  • Check to see if file transfer is complete

    - by Cymon
    We have a daily job that processes files delivered from an external source. It usually runs fine, but every once in a while it ends up processing a file that has not finished transferring. The external source SCPs these files from a UNIX server to our Windows server, and from there we try to process them. Is there a way to check whether a file is still being transferred? Does UNIX put a lock on a file while SCPing it that we could check on the Windows side?
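
    One heuristic worth trying (a sketch only; whether it works depends on how the SCP service on the Windows side holds the file open) is to treat a file as still transferring while it cannot be opened with an exclusive lock. The more reliable fix is to have the sender upload to a temporary name and rename once the transfer is done.

      // requires using System.IO;
      static bool IsTransferComplete(string path)
      {
          try
          {
              // FileShare.None fails while another process still has the file open for writing.
              using (File.Open(path, FileMode.Open, FileAccess.Read, FileShare.None))
                  return true;
          }
          catch (IOException)
          {
              return false;   // still locked - assume the transfer is in progress
          }
      }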

    Read the article

  • Looking for a suitable backup solution: Mac OS X to offsite CentOS 6 server, 1TB of working data

    - by Brady
    I'll start with what we have in place currently: an on-site file server (Mac OS X Server) used by GFX designers with a working set of 1TB of data; an offsite server with 2TB of available storage (CentOS 6); the Mac OS X server rsyncs data to the offsite server every 6 hours (rsync -avz --delete --progress -e ssh ...); the Mac OS X server does a full data backup to LTO-4 tape on a 10-day recycle (Mon-Fri for 2 weeks); rsync pushes about 60GB of file changes a day. The problem: the onsite tape backup is failing, as 1TB of graphics files doesn't compress well enough to fit onto an 800GB LTO-4 tape; a full backup is incredibly slow; and it's a pain getting people to remember to change the tape, which often gets forgotten. The quick solution: buy an LTO-5 drive and tapes. However, this has been turned down because of the cost... What I would like: something that works the same way rsync does, where only changed data is sent over the wire and it can be scheduled to run multiple times during the day; data that is sent is compressed and goes over SSH; something that keeps a 14-day retention but doesn't keep duplicate data. So as an example, if I have 1TB of working data and 60GB of changes are made each day, then I expect around 1.84TB of data to be stored on the offsite server. It has to work with the Mac OS X server and the CentOS 6 server, not cost an arm and a leg (it must be cheaper than buying an LTO-5 drive with tapes, around £1500), be able to run autonomously once set up, and have some sort of control panel that will allow an admin to easily restore a file/folder. Any recommendations?
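
    One direction that covers most of these requirements is a hard-link rotation with rsync --link-dest (a sketch only; the paths, user and schedule below are made up): each run gets its own dated directory on the offsite server, unchanged files are stored once as hard links, transfers are incremental, compressed and go over SSH, and snapshots older than 14 days can be pruned. Tools such as rsnapshot and rdiff-backup package the same idea with a ready-made retention policy.

      #!/bin/sh
      # Hypothetical snapshot push from the Mac OS X server. GNU date syntax shown;
      # on OS X use `date -v-1d +%Y-%m-%d` for yesterday's date.
      TODAY=$(date +%Y-%m-%d)
      YESTERDAY=$(date -d yesterday +%Y-%m-%d)

      rsync -az --delete -e ssh \
            --link-dest=/backup/$YESTERDAY \
            /Volumes/WorkData/ user@offsite:/backup/$TODAY/

      # Drop snapshots older than 14 days on the offsite server.
      ssh user@offsite 'find /backup -maxdepth 1 -mindepth 1 -type d -mtime +14 -exec rm -rf {} +'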

    Read the article

  • combine two GCC compiled .o object files into a third .o file

    - by ~lucian.grijincu
    How does one combine two GCC compiled .o object files into a third .o file? $ gcc -c a.c -o a.o $ gcc -c b.c -o b.o $ ??? a.o b.o -o c.o $ gcc c.o other.o -o executable If you have access to the source files the -combine GCC flag will merge the source files before compilation: $ gcc -c -combine a.c b.c -o c.o However this only works for source files, and GCC does not accept .o files as input for this command. Normally, linking .o files does not work properly, as you cannot use the output of the linker as input for it. The result is a shared library and is not linked statically into the resulting executable. $ gcc -shared a.o b.o -o c.o $ gcc c.o other.o -o executable $ ./executable ./executable: error while loading shared libraries: c.o: cannot open shared object file: No such file or directory $ file c.o c.o: ELF 32-bit LSB shared object, Intel 80386, version 1 (SYSV), dynamically linked, not stripped $ file a.o a.o: ELF 32-bit LSB relocatable, Intel 80386, version 1 (SYSV), not stripped
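
    The usual way to get a genuinely relocatable merged object (rather than a shared library) is a partial link with ld -r; a minimal sketch, assuming the default GNU toolchain:

      gcc -c a.c -o a.o
      gcc -c b.c -o b.o
      ld -r a.o b.o -o c.o          # -r / --relocatable keeps the output linkable
      gcc c.o other.o -o executable
      file c.o                      # should report "relocatable", not "shared object"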

    Read the article

  • C1083: Permission denied on .sbr files

    - by speps
    Hello, I am using Visual Studio 2005 (with SP1) and I am getting weird errors concerning .sbr files. These files, as I read on MSDN, are intermediate files for BSCMAKE to generate a .bsc file. The errors I get are, for example (on different builds) : 11string.cpp : fatal error C1083: Impossible d'ouvrir le fichier généré(e) par le compilateur : '.\debug\String.sbr' : Permission denied 58type.cpp : fatal error C1083: Impossible d'ouvrir le fichier généré(e) par le compilateur : '.\Debug/Type.sbr' : Permission denied Translation : cannot open compiler intermediate file It seems to be consistent (I have at least 5 or 6 examples like this) with a .cpp file being compiled twice in the same project, respectively : 11String.cpp *some warnings, 2 lines* 11String.cpp 58Type.cpp *some warnings and other files compiled, a lot of lines* 58Type.cpp I already checked the .vcproj files for duplicate entries and it does not seem to be the problem. I would appreciate any help regarding this issue. Deactivating the build of .bsc files seems to be a workaround but maybe someone has better information than this. Thanks.

    Read the article

  • Split big Apache log to folder structure

    - by Dough
    I just changed my Apache log behavior because it was producing very BIG files... So I now use cronolog to split my logs into log/httpd/2012/11/access_2012.11.30.log for example, pattern: %Y/%m/access_%Y.%m.%d.log. I now want to split my old 42GB file into the same structure but really don't know how to do that efficiently. I tried some simple commands with cat, egrep, awk... but really don't know how to handle all of that in a more powerful script. Here is what the log looks like: x.x.237.134 - - [08/Apr/2011:14:43:09 +0200] "GET... x.x.50.15 - - [08/Apr/2011:14:43:09 +0200] "GET... [...] x.x.254.19 - - [28/Feb/2012:15:24:48 +0100] "GET... So for each line I need to get the year %Y (ex. 2012), the month %m (ex. 11) and the day %d, and push the entire line out to %Y/%m/access_%Y.%m.%d.log. Can someone give me clues to get that working? Thanks a lot for your interest.
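
    A sketch of a one-shot split with awk (gawk assumed; the input filename is made up, and the target directory should be empty so the appends start from scratch). The timestamp is the fourth whitespace-separated field, so the day, month name and year can be pulled out of it and turned into the cronolog-style path:

      awk '{
          split($4, t, "[[/:]")       # t[2]=day, t[3]=month name, t[4]=year
          m = sprintf("%02d", (index("JanFebMarAprMayJunJulAugSepOctNovDec", t[3]) + 2) / 3)
          dir = t[4] "/" m
          if (!(dir in seen)) { system("mkdir -p " dir); seen[dir] = 1 }
          print >> (dir "/access_" t[4] "." m "." t[2] ".log")
      }' old_access.log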

    Read the article

  • Is my webserver being abused for banking fraud?

    - by koffie
    For a few weeks I've been getting a lot of 403 errors from Apache in my log files that seem to be related to a bank fraud scheme. The relevant log entries look like this (the IP 1.2.3.4 is one I made up; I did not modify the rest of each line): www.bradesco.com.br:80 / 1.2.3.4 - - [01/Dec/2012:07:20:32 +0100] "GET / HTTP/1.1" 403 427 "-" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11" www.bb.com.br:80 / 1.2.3.4 - - [01/Dec/2012:07:20:32 +0100] "GET / HTTP/1.1" 403 370 "-" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11" www.santander.com.br:80 / 1.2.3.4 - - [01/Dec/2012:07:20:33 +0100] "GET / HTTP/1.1" 403 370 "-" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11" www.banese.com.br:80 / 1.2.3.4 - - [01/Dec/2012:07:20:33 +0100] "GET / HTTP/1.1" 403 370 "-" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11" The LogFormat I use is: LogFormat "%V:%p %U %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" The strange thing is that all these domains belong to banks, and 3 out of the 4 domains are also in the list of the bank fraud scheme described at http://www.abuse.ch/?p=2925. I would really like to know whether my server is being abused for bank fraud or not. I suspect not, because it's giving 403 to all requests. But any extra checks I can do to ensure that my server is not being abused are welcome. I'm also curious how the "bad guys" expected my server to behave. I.e. are they just expecting my server to act as a proxy to hide the IP of the fake site, or are they expecting my server to actually serve the fake banking website? Is the IP 1.2.3.4 more likely to be the IP of a victim or the IP of a bad guy? I suspect a bad guy, because it's quite unlikely that a real person would visit 4 bank sites in one second. If it's from a bad guy, I'm very curious what he is trying to do.

    Read the article

  • What libraries are available for manipulating super large images in .NET

    - by tpower
    I have some really large files, for example a 320 MB TIFF at 14000 x 9000 pixels. The operations I need to perform are basically scaling the images to get smaller versions and breaking the image into tiles. My code works fine with small files using the .NET Bitmap objects, but I will occasionally get Out of Memory exceptions for larger files. I've tried using the FreeImage library's FreeImageBitmap but have the same problems. I'm using something like the following to scale the image: using (Graphics g = Graphics.FromImage((Image)result)) { g.DrawImage( source, xOffset, yOffset, source.Width * scale, source.Height * scale ); } Ideally I'd like a third-party library to do all the hard work, but if you have any tips or resources with more information I would appreciate it.

    Read the article

  • Most scalable way of serving a small set of static HTTP content

    - by Ekevoo
    The story: Hi guys. I'm among the people responsible for serving the results of the most anticipated (by number of people participating) annual entrance exam in my state. As such, when our results are published, the interest is overwhelming. In the past we delegated the responsibility of serving the results to the media, but that detracts a little from the officialness of these results. This year we went with a little (long overdue) experiment of using lighttpd instead of Apache, as well as other physical network optimizations I wasn't directly involved with. The results were very satisfactory. The server didn't choke even once, nor did we see any of the usual Twitter complaints about unavailability and/or slowness that were previously common. However, because we still delegated the first publication of the results to the media, I'm still not 100% sure we can handle the load of actually publishing the results first. The question: Since these files are only about 14MB in total, and a truly lightweight Linux distribution isn't that big either, I'm thinking: what if next year we serve everything from a RAM drive? Is there such a setup? Is it useful? Is it worth it for a team that uses Debian almost exclusively? Are there other optimizations that I should be focusing on instead?
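
    A full RAM-drive distribution is probably unnecessary for 14MB of static files: the Linux page cache keeps them in memory after the first read anyway. If you want to guarantee it, a small tmpfs mount that the release files are copied into at publication time does the job; a sketch (the paths and size are made up):

      mkdir -p /var/www/results
      mount -t tmpfs -o size=64m,mode=0755 tmpfs /var/www/results
      cp -a /srv/results-release/. /var/www/results/
      # point the lighttpd document root (or an alias) at /var/www/results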

    Read the article

  • Blackberry Widget Packager saves my files in strange places

    - by chibineku
    I have just installed the Blackberry Widget Packager and Blackberry Web Plug-In for Eclipse, and everything works fine, but my files are output to strange places. For example, I tried putting my zipped source files in the folder Blackberry Widget Packager/web and I got an error during packaging. Packaging works when the .zip is in the same directory as wwbp, though. When a widget is successfully created, the executable .jar file and the .rapc files are put in some stupid folder like users/user/temp/widgetname094098456, and the other files are split between two folders in Blackberry Widget Packager/bin. This is slightly annoying as I don't want to be spending time herding my files. Anyone have any thoughts on why my files are being scattered like this?

    Read the article

  • PHP FTP Upload thousands of files

    - by user275074
    Hi, I've written a small FTP class which I use to move files from a local server to a remote server. It does this by checking an array of local files against an array of files on the remote server. If the file exists on the remote server, it won't bother uploading it. The script works fine for small numbers of files, but I've noticed that the local server can have as many as 3000+ image files to transfer; this seems to cause the script to flop and only transfer 100 or so. How can I modify the script to handle potentially thousands of image files?
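
    Without the class itself this is only a guess, but a run that dies after a fixed number of files usually points at PHP's execution time limit rather than at FTP. A sketch of the usual mitigations (the host, credentials and paths are hypothetical), using one remote listing instead of one check per file and lifting the time limit:

      <?php
      set_time_limit(0);                         // lift max_execution_time for this run
      $conn = ftp_connect('ftp.example.com');    // hypothetical host
      ftp_login($conn, 'user', 'pass');
      ftp_pasv($conn, true);

      // One listing of the remote directory, flipped for O(1) lookups.
      $remote = array_flip(ftp_nlist($conn, '/images'));

      foreach (glob('/local/images/*') as $local) {
          $name = basename($local);
          // Some servers return bare names, others full paths - check both.
          if (isset($remote[$name]) || isset($remote['/images/' . $name])) {
              continue;
          }
          ftp_put($conn, '/images/' . $name, $local, FTP_BINARY);
      }
      ftp_close($conn);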

    Read the article

  • Optimize included files and uses in Delphi

    - by Roland Bengtsson
    I'm trying to improve the performance of Delphi 2007 and Code Insight. The application has 483 files added to the DPR file. I don't know if it's my imagination, but I feel that I got better performance from Code Insight by simply re-adding all the files to the DPR. I also think (correct me if I'm wrong) that all files included in a uses section should also be included in the DPR file for best performance. My question is: does a tool exist that scans the whole project and lists which files are missing from the DPR file and which files can be removed? It would also be nice to have a list of uses entries that can be removed from the PAS files. Regards

    Read the article

  • Nginx: Disallow index.html in URL

    - by Martin Vilcans
    We're generating a site consisting of only static files (using Assemble). Having the .html extension on URLs looks so nineties, so we generate every static HTML file in its own directory and call it index.html. For example, the url http://www.example.com/foo/bar/ is in the file /var/www/foo/bar/index.html. This works well, but there is one small thing nagging me: Now there are two possible URLs to the same resource: http://www.example.com/foo/bar/ (slash URL) http://www.example.com/foo/bar/index.html (index.html URL) By accident someone may link to the index.html form of the URL, which is bad for SEO and looks ugly (remember the nineties?). Is it possible in Nginx to give a 404 error on the index.html URL, but serve the slash URL? I tried this: location ~ /index\.html$ { return 404; } But it seems that Nginx does some internal rewrite of the slash URL to the index.html URL, and then matches this location so we get a 404 even on the slash URL. Note that to catch mistakes, we want index.html URLs to be an error, not just redirect to the slash URL.
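
    One approach (a sketch, not tested against this exact setup): the index module rewrites / to /index.html internally, so a location match cannot tell the two apart, but $request_uri still holds the URI the client actually sent, so that can be tested instead:

      # inside the relevant server block
      if ($request_uri ~ "/index\.html$") {
          return 404;
      }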

    Read the article

  • Zabbix doesn't update the value from a file with either the log[] or the vfs.file.regexp[] item

    - by tymik
    I am using Zabbix 2.2. I have a very specific environment where I have to generate the desired data to a file via a script, then upload that file to FTP from the host and download it to the Zabbix server from FTP. After the file is downloaded, I check it with log[] and vfs.file.regexp[] items. I use these items as below: log[/path/to/file.txt,"C.*\s([0-9]+\.[0-9])$",Windows-1250,,"all",\1] vfs.file.regexp[/path/to/file.txt,"C.*\s([0-9]+\.[0-9])$",Windows-1250,,,\1] The line I am parsing looks like this: C: 8195Mb 5879Mb 2316Mb 28.2 The value I want to extract is the 28.2 at the end of the file. The problem I am currently trying to solve is that when I update the file (upload from the host to FTP, then download from FTP to the Zabbix server), the value does not update. At first I tried only log[], but I suspect that log[] treats the file as a real log file and doesn't re-check lines it has already seen (although, according to the documentation, it should with the "all" value), so I added a vfs.file.regexp[] item too. The log[] item has received a value in the past, but it doesn't update. The vfs.file.regexp[] item hasn't received any value so far. file.txt has been re-uploaded and re-downloaded several times and the situation doesn't change. It seems that log[] reads only new lines in the file; it doesn't re-check lines it has already caught for changes. The zabbix_agentd.log file doesn't report any problem with access to the file, nor with the regexp construction (it did report "unsupported" for the log[] key when I had something set up wrong). I use the debug logging level for the agent and haven't found any interesting info about the problem. I have no idea what I might be doing wrong, or what I don't know about how Zabbix performs these checks. I see 2 workarounds: appending new lines to the file instead of replacing it, or creating new files and checking them with logrt[], but neither satisfies my requirements. Any help is greatly appreciated. Of course I will provide additional information if requested - for now I don't know what else might be useful.

    Read the article

  • On a Hudson master node, what are the .tmp files created in the workspace-files folder?

    - by Patrick Johnmeyer
    Question: In the path HUDSON_HOME/jobs/<jobname>/builds/<timestamp>/workspace-files, there is a series of .tmp files. What are these files, and what feature of Hudson do they support? Background: Using Hudson version 1.341, we have a continuous build task that runs on a slave instance. After the build is otherwise complete, including archiving the artifacts, the task scanner, etc., the job appears to hang for a long period of time. While monitoring the master node, I noted that many .tmp files were being created and modified under builds/<timestamp>/workspace-files, and that some of them were very large. This appears to be causing the delay, as the job completed at the same time that files in this path stopped changing. Some key configuration points of the job: it is tied to a specific slave node; it builds in a 'custom workspace'; it triggers a downstream job that builds in the same custom workspace on the same slave node.

    Read the article

  • How to determine on which file system a file was created in Java

    - by rafrafUk
    Hi everyone! I get files in different formats coming from different systems that I need to import into our database. Part of the import process is to check the line length to make sure the format is correct. We seem to be having issues with files coming from UNIX systems where one character is added. I suspect this is due to the line ending (carriage return) being encoded differently on the UNIX and Windows platforms. Is there a way to detect on which file system a file was created, other than checking the last character of the line? Or maybe a way of reading the files as text and not binary, which I suspect is the issue? Thanks guys!
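
    The originating file system is not recorded anywhere in the file itself, so in practice the question becomes "which line-ending convention does this file use?". A sketch of one way to sniff that (the class and method names are illustrative):

      import java.io.FileInputStream;
      import java.io.IOException;
      import java.io.InputStream;

      public class LineEndingSniffer {
          /** Returns "\r\n" (Windows-style) or "\n" (Unix-style) based on the first line break found. */
          public static String detect(String path) throws IOException {
              try (InputStream in = new FileInputStream(path)) {
                  int prev = -1, b;
                  while ((b = in.read()) != -1) {
                      if (b == '\n') {
                          return prev == '\r' ? "\r\n" : "\n";
                      }
                      prev = b;
                  }
              }
              return System.getProperty("line.separator");   // no line break found; platform default
          }
      }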

    Read the article

  • Bash - replacing targeted files with a specific file, whitespace in directory names

    - by Dispelwolf
    I have a large directory tree of files, and am using the following script to list files with a searched-for name and replace them with a specific file. The problem is, I don't know how to write the createList() for-loop correctly to account for whitespace in a directory name. As long as no directory has spaces, it works fine. The output is a list of files and then a list of "cp" commands, but paths with spaces in them get split and reported as separate entries. aindex=1 files=( null ) [ $# -eq 0 ] && { echo "Usage: $0 filename" ; exit 500; } createList(){ f=$(find . -iname "search.file" -print) for i in $f do files[$aindex]=$(echo "${i}") aindex=$( expr $aindex + 1 ) done } writeList() { for (( i=1; i<$aindex; i++ )) do echo "#$i : ${files[$i]}" done for (( i=1; i<$aindex; i++ )) do echo "/usr/bin/cp /cygdrive/c/testscript/TheCorrectFile.file ${files[$filenumber]}" done } createList writeList
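
    The splitting happens because $f is expanded and word-split on whitespace before the for-loop ever sees it. A sketch of a whitespace-safe createList(), reading a NUL-delimited find stream instead (bash-specific, untested against the original script):

      createList() {
          while IFS= read -r -d '' i; do
              files[aindex]="$i"
              aindex=$(( aindex + 1 ))
          done < <(find . -iname "search.file" -print0)
      }

    The generated cp lines also need the path quoted, e.g. "${files[$i]}", so the commands themselves survive spaces when they are run.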

    Read the article

  • Python configuration file generator

    - by Stan
    I want to use Python to make a configuration file generator. My rough idea is to feed in template files plus some XML files holding the real settings, then have the program generate the real configuration files. I have several questions: Is there any open source configuration generator program (and what would the keyword to search for be)? I wonder if there's anything that could be added or modified in the design. Does Python have a good XML parser module? Is it a good idea to use an XML file to store the original settings? I've been thinking of using Excel since it's more intuitive to maintain, but it's harder for a program to parse. Not sure how people deal with this. Hope the community can give me some suggestions. Thanks a lot!
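
    On the XML question: the standard library's xml.etree.ElementTree is generally good enough for flat settings files. A minimal sketch of the whole pipeline (the file names, XML layout and template placeholders are made up):

      # settings.xml:  <settings><db_host>10.0.0.5</db_host><db_port>5432</db_port></settings>
      # app.conf.tmpl: uses string.Template placeholders such as $db_host and $db_port
      import xml.etree.ElementTree as ET
      from string import Template

      settings = {el.tag: (el.text or "") for el in ET.parse("settings.xml").getroot()}

      with open("app.conf.tmpl") as f:
          template = Template(f.read())

      with open("app.conf", "w") as f:
          f.write(template.substitute(settings))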

    Read the article

  • How to identify/handle text file newlines in Java?

    - by rafrafUk
    Hi everyone! I get files in different formats coming from different systems that I need to import into our database. Part of the import process is to check the line length to make sure the format is correct. We seem to be having issues with files coming from UNIX systems where one character is added. I suspect this is due to the line ending (carriage return) being encoded differently on the UNIX and Windows platforms. Is there a way to detect on which file system a file was created, other than checking the last character of the line? Or maybe a way of reading the files as text and not binary, which I suspect is the issue? Thanks guys!

    Read the article

  • Xcode can't find imported header files

    - by solerous
    I've used other IDEs in the past but am new to Xcode. I'm trying to bring in a bunch of C code from an open source project. I've imported the files into a new group: the .c files all show up under Implementation Files, and the full list of files shows up in the Groups & Files pane as well as in my project directory in the Finder. When I try to include or import one of these header files, code completion even works, so I know Xcode can see them. But then when I go to build, it says "no such file or directory". How can I get these files to import into my code?

    Read the article

  • Directory monitoring

    - by foz1284
    Hi, what is the best way for me to check for new files added to a directory? I don't think FileSystemWatcher would be suitable, as this is not an always-on service but a method that runs when my program starts up. There are over 20,000 files in the folder structure I am monitoring. At present I am checking each file individually to see if its file path is in my database table; however, this is taking around ten minutes and I would like to speed it up if possible. I can store the date the folder was last checked - is there an easy way to get all files with a created date after the last checked date? Anyone got any ideas? Thanks, Mark
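
    A sketch of the created-date approach (C# assumed, since FileSystemWatcher is mentioned; the root path and the helper that reads the last-checked time are hypothetical). Enumerating the tree once and filtering on CreationTimeUtc avoids one database lookup per file:

      // requires using System; using System.IO; using System.Linq;
      DateTime lastChecked = GetLastCheckedUtcFromDatabase();   // hypothetical helper

      var newFiles = new DirectoryInfo(@"D:\monitored")
          .EnumerateFiles("*", SearchOption.AllDirectories)
          .Where(f => f.CreationTimeUtc > lastChecked)
          .ToList();

      // process newFiles, then persist DateTime.UtcNow as the new last-checked time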

    Read the article

  • List files starting with a specific name using Java

    - by user3610075
    I want to list files starting with a name like "Report" from a folder. I found this on Google to list all files, but I don't know how to list only the files starting with a given name. Thank you File directory = new File("C:\\Users\\kiki\\Downloads"); File[] files = directory.listFiles(); for (int index = 0; index < files.length; index++) { //Print out the name of files in the directory System.out.println(files[index].toString()); }
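
    A sketch using the listFiles(FilenameFilter) overload, which filters by name while the directory is being listed (the "Report" prefix is taken from the question):

      import java.io.File;
      import java.io.FilenameFilter;

      File directory = new File("C:\\Users\\kiki\\Downloads");
      File[] reports = directory.listFiles(new FilenameFilter() {
          @Override
          public boolean accept(File dir, String name) {
              return name.startsWith("Report");   // only names beginning with "Report"
          }
      });

      if (reports != null) {                       // listFiles returns null if the path is not a directory
          for (File f : reports) {
              System.out.println(f);
          }
      }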

    Read the article
