Search Results


  • SQLite assembly not copied to output folder for unit testing

    - by Groo
    Problem: the SQLite assembly referenced in my DAL assembly does not get copied to the output folder when running unit tests (Copy Local is set to true). I am working on a .NET 3.5 app in VS2008, with NHibernate & SQLite in my DAL. Data access is exposed through the IRepository interface (repository factory) to other layers, so there is no need to reference NHibernate or the System.Data.SQLite assemblies in other layers. For unit testing, there is a public factory method (also in my DAL) which creates an in-memory SQLite session and a new IRepository implementation. This is also done to avoid having a shared SQLite in-memory config for all assemblies which need it, and to avoid referencing those DAL-internal assemblies. The problem is when I run unit tests which reside in a separate project: if I don't add System.Data.SQLite as a reference to the unit test project, it doesn't get copied to the TestResults...\Out folder (although this project references my DAL project, which references System.Data.SQLite and has its Copy Local property set to true), so the tests fail while NHibernate is being configured. If I add the reference to my testing project, then it does get copied and the unit tests work. What am I doing wrong?
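
    A note on the mechanics, plus a sketch of one common workaround: MSBuild only copies references it can see being used at compile time, so a dependency that is only loaded at runtime (NHibernate's SQLite driver) two projects away never reaches the test output. Since the TestResults\...\Out folder suggests MSTest, a hedged sketch using DeploymentItem (class and method names here are illustrative, and the DLL path may need adjusting relative to the build output):

        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        [DeploymentItem("System.Data.SQLite.dll")] // deploys the DLL into TestResults\...\Out
        public class RepositoryTests
        {
            [TestMethod]
            public void CreatesInMemorySqliteRepository()
            {
                // the DAL factory method described above runs here; by the
                // time NHibernate configures itself, the assembly is present
            }
        }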

  • How to enable error log in lighttpd properly?

    - by Tomaszs
    I have a CentOS 5 system with lighttpd and FastCGI enabled. It logs access but does not log errors: I get Internal Server Error 500 with no info in the log, and when I try to open a non-existing file there is also no info in the error log. How do I enable it properly? Below is the list of modules that I've enabled:

        server.modules = (
            "mod_rewrite",
            "mod_redirect",
            "mod_alias",
        #   "mod_access",
        #   "mod_cml",
        #   "mod_trigger_b4_dl",
        #   "mod_auth",
            "mod_status",
            "mod_setenv",
            "mod_fastcgi",
        #   "mod_webdav",
        #   "mod_proxy_core",
        #   "mod_proxy_backend_fastcgi",
        #   "mod_proxy_backend_scgi",
        #   "mod_proxy_backend_ajp13",
        #   "mod_simple_vhost",
        #   "mod_evhost",
        #   "mod_userdir",
        #   "mod_cgi",
        #   "mod_compress",
        #   "mod_ssi",
        #   "mod_usertrack",
        #   "mod_expire",
        #   "mod_secdownload",
        #   "mod_rrdtool",
            "mod_accesslog"
        )

    Here are the debugging settings:

        ## enable debugging
        #debug.log-request-header = "enable"
        #debug.log-response-header = "enable"
        #debug.log-request-handling = "enable"
        debug.log-file-not-found = "enable"
        #debug.log-condition-handling = "enable"

    The error log and access log paths:

        ## where to send error-messages to
        server.errorlog = "/home/lxadmin/httpd/lighttpd/error.log"

        #### accesslog module
        accesslog.filename = "/home/lxadmin/httpd/lighttpd/ligh.log"

    The FastCGI settings:

        fastcgi.debug = 1
        fastcgi.server = ( ".php" =>
            (( "bin-path" => "/usr/bin/php-cgi",
               "socket" => "/tmp/php.socket",
               "max-procs" => 12,
               "bin-environment" => (
                   "PHP_FCGI_CHILDREN" => "2",
                   "PHP_FCGI_MAX_REQUESTS" => "500"
               )
            ))
        )

    And in an included config file I have:

        server.errorlog = "/home/httpd/mywebsite.com/stats/mywebsite.com-error_log"

    As for the log files themselves:

        /home/httpd/mywebsite.com/stats/
        -rw-r--r-- 1 apache apache 5173239 May 16 11:34 mywebsite.com-custom_log
        -rwxrwxrwx 1 root   root         0 Mar 27  2009 mywebsite.com-error_log

        /home/lxadmin/httpd/lighttpd/
        -rwxrwxrwx 1 apache apache    2184 Apr 22 22:59 error.log
        -rwxrwxrwx 1 apache apache 6088621 May 16 11:26 ligh.log

    I gave the error logs chmod 777 as a try, to check whether permissions were the issue, but apparently they are not. So my question is: what do I have to do to get the error log enabled?
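
    One detail worth checking, offered as a hedged guess rather than a confirmed diagnosis: in the lighttpd 1.4 branch, server.errorlog is a server-wide setting, and because the included vhost file is read after lighttpd.conf, its assignment likely wins, sending all errors to the 0-byte root-owned file. A minimal sketch of the consolidation (the include file name is a placeholder):

        # keep exactly one global error log; lighttpd 1.4 does not support
        # per-vhost error logs, and the last assignment read wins
        server.errorlog = "/home/lxadmin/httpd/lighttpd/error.log"

        # in the included vhost file, comment the override out:
        # server.errorlog = "/home/httpd/mywebsite.com/stats/mywebsite.com-error_log"

    Then restart lighttpd and tail the log while requesting a missing file. Note also that a FastCGI 500 is often logged by PHP itself (error_log in php.ini), not by lighttpd.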

  • C# Binary File Compare

    - by Simon Farrow
    I'm in a situation where I want to compare two binary files. One of them is already stored on the server with a pre-calculated CRC32 in the database from when I stored it originally. I know that if the CRCs are different then the files are definitely different. However, if the CRCs are the same, I don't know that the files are. So what I'm looking for is a nice efficient way of comparing two streams: one from the posted file and one from the file system. I'm not an expert on streams, but I'm well aware that I could easily shoot myself in the foot here as far as memory usage is concerned. Any help is greatly appreciated.
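
    For reference, one memory-safe approach is a chunked comparison, sketched below and not tied to any particular framework version: compare lengths first when both are known, then read both streams in fixed-size buffers so memory use stays flat regardless of file size.

        using System.IO;

        public static class StreamComparer
        {
            // compares two streams chunk by chunk; memory stays around
            // 128 KB no matter how large the files are
            public static bool StreamsEqual(Stream a, Stream b)
            {
                const int BufferSize = 64 * 1024;
                byte[] bufA = new byte[BufferSize];
                byte[] bufB = new byte[BufferSize];
                while (true)
                {
                    int readA = FillBuffer(a, bufA);
                    int readB = FillBuffer(b, bufB);
                    if (readA != readB) return false;  // lengths differ
                    if (readA == 0) return true;       // both ended: equal
                    for (int i = 0; i < readA; i++)
                        if (bufA[i] != bufB[i]) return false;
                }
            }

            // Stream.Read may return fewer bytes than requested, so loop
            // until the buffer is full or the stream is exhausted
            private static int FillBuffer(Stream s, byte[] buffer)
            {
                int total = 0, read;
                while (total < buffer.Length &&
                       (read = s.Read(buffer, total, buffer.Length - total)) > 0)
                    total += read;
                return total;
            }
        }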

  • Accessing a network shared folder with a username and password string in VB.NET

    - by Irene
    I am using the following code to read files from a network folder that is restricted to a single user:

        Shell("net use q: \\serveryname\foldername /user:admin pwrd", AppWinStyle.Hide, True, 10000)
        Process.Start(path)
        Shell("net use q: /delete")

    When I run this to open a PDF, JPG, or any other file except Word/Excel/PowerPoint, everything works fine. The problem comes only when I access a Word file. In step one, I grant permission to access the file; in step two, the Word file opens; in step three, I delete the Q: drive. The problem is that the Word file is still open, so I get a DOS window saying "some connections are still connected or searching some folders, do you want to force disconnect". Please help: how can I open a Word (editable) file by providing a username and password from code, while at the same time the user should not have direct access to any other folders?
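
    A sketch of one way around the premature disconnect, with the caveat that Process.Start can return Nothing when Windows hands the document to an already-running Word instance: wait for the viewer to exit before dropping the mapping, and answer the disconnect prompt automatically.

        Shell("net use q: \\serveryname\foldername /user:admin pwrd", AppWinStyle.Hide, True, 10000)

        Dim viewer As Process = Process.Start(path)
        If viewer IsNot Nothing Then
            viewer.WaitForExit()   ' block until the document window closes
        End If

        ' /y answers the "force disconnect" prompt without a DOS window
        Shell("net use q: /delete /y", AppWinStyle.Hide, True, 10000)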

  • Hardlink files not the same

    - by SabreWolfy
    I created a hard link to a file as follows:

        ln /path/to/source/file1 /path/to/target/file2

    Using md5sum, the two files are identical. After a while, the source file was modified by another program, but the target file did not get "updated": the md5sums are now different. The files are on the same partition, of course, otherwise I could not create the link. What I'm trying to do is keep a copy of the source file in the target folder (which is versioned), so that I have access to the source file elsewhere. I tried moving the source file to the target folder under a different name and creating a symlink to it at the source, but the program expecting the file then (somehow) created a file with the name it wanted in the target folder. Ideas?
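
    A likely explanation, stated as a hypothesis rather than a certainty: many programs save by writing a temporary file and renaming it over the original, which creates a new inode and silently detaches any hard link. A quick way to confirm, plus the simple alternative:

        # if the inode numbers now differ, the source was replaced rather
        # than modified in place, and the hard link points at the old data
        ls -i /path/to/source/file1 /path/to/target/file2

        # a plain copy (run from cron or a watch script) survives replacement
        cp /path/to/source/file1 /path/to/target/file2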

  • Thin permissions in etc folder (Ubuntu)

    - by Apollo
    I am working on a RoR server setup that uses Thin and nginx. It works fine, but only if I manually create the folder /etc/thin and set its permissions to 777 in order to use the command below:

        thin config -C /etc/thin/testapp.yml -c /var/www/testapp --servers 1 -e production

    If I don't set it to 777, I get this error:

        me@UbuntuRails:/etc$ thin config -C /etc/thin/testapp.yml -c /var/www/testapp --servers 1 -e production
        /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/gems/thin-1.5.0/lib/thin/controllers/controller.rb:115:in `initialize': Permission denied - /etc/thin/testapp.yml (Errno::EACCES)
            from /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/gems/thin-1.5.0/lib/thin/controllers/controller.rb:115:in `open'
            from /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/gems/thin-1.5.0/lib/thin/controllers/controller.rb:115:in `config'
            from /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/gems/thin-1.5.0/lib/thin/runner.rb:187:in `run_command'
            from /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/gems/thin-1.5.0/lib/thin/runner.rb:152:in `run!'
            from /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/gems/thin-1.5.0/bin/thin:6:in `<top (required)>'
            from /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/bin/thin:19:in `load'
            from /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/bin/thin:19:in `<main>'
            from /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/bin/ruby_noexec_wrapper:14:in `eval'
            from /usr/local/rvm/gems/ruby-1.9.3-p286@rails328/bin/ruby_noexec_wrapper:14:in `<main>'

    I don't like setting this folder to 777; it sounds like a rubbish workaround. I run everything from an admin user account, not root. RVM runs from my admin user, and gem only works from my admin account as well. If I sudo the command, nothing happens, because my root doesn't "know" thin. What is the correct way to handle this? Thanks!
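
    For what it's worth, a narrower alternative to 777, sketched here with the local username as a placeholder: create the directory as root once and hand ownership of just that directory to the deploy user.

        sudo mkdir -p /etc/thin
        sudo chown me:me /etc/thin     # "me" = the admin account running thin
        chmod 755 /etc/thin            # others can read; only "me" can write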

  • iPhone dev: load a file from resource folder

    - by thomax
    I'm writing an iPhone app with a UIWebView which should display various HTML files I have in the app resource folder. In my Xcode project overview, these HTML files are arranged like this:

        dirA
        |---> index.html
        |---> a1.html
        |---> a2.html
        |---> my.css
        |---> dirB
        |     |---> b1.html
        |     |---> b2.html
        |---> dirC
              |---> c1.html
              |---> c2.html

    These resources were added to the project as follows:

    - Checked "Copy items into destination group's folder (if needed)".
    - Reference type: Default.
    - Text encoding: Unicode (UTF-8).
    - Recursively create groups for any added folders.

    The links in my HTML are relative, meaning they look like this:

        <a href="a1.html">a2</a>
        <a href="a2.html">a2</a>
        <a href="dirB/b2.html">b2</a>
        <a href="dirC/c1.html">b2</a>

    In order to display index.html when the app starts up, I use the following code:

        NSString *path = [[NSBundle mainBundle] pathForResource:@"index" ofType:@"html"];
        NSURL *url = [NSURL fileURLWithPath:path];
        NSURLRequest *request = [NSURLRequest requestWithURL:url];
        [webView loadRequest:request];

    This works fine. Following links from the index file also works fine, as long as the HTML files requested are directly under dirA. If the link followed points to a file in a sub-directory, then didFailLoadWithError catches the situation and reports that the requested file does not exist. Note that [webView loadHTMLString:myHtml baseURL:nil] cannot be part of the solution, as I need back and forward buttons to work in my web view. So the question is: how can I follow a relative link to an HTML file in a sub-directory within my resources? I've been all over Stack Overflow and the rest of the tubes for the past few days trying to figure this one out, but nowhere have I come across the solution to this exact problem. Any insight at all would be very, very much appreciated!
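
    A likely culprit, offered as a hypothesis: "recursively create groups" flattens everything into the bundle root at build time, so dirB/b2.html does not exist as a path inside the app. Re-adding dirA as a folder reference (the blue-folder option in the add dialog) preserves the directory layout on disk, after which a sketch like this loads the index and leaves relative links intact:

        // a sketch assuming dirA was re-added as a folder reference, so the
        // bundle really contains dirA/dirB/... on disk; relative links then
        // resolve against the real sub-directories
        NSString *path = [[NSBundle mainBundle] pathForResource:@"index"
                                                         ofType:@"html"
                                                    inDirectory:@"dirA"];
        NSURL *url = [NSURL fileURLWithPath:path];
        [webView loadRequest:[NSURLRequest requestWithURL:url]];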

  • Using Visual Studio .ncb files for reflection

    - by Rushi
    I am developing a visual game level editor in C++. For this I want a reflection (RTTI) mechanism, to know class attributes at runtime. I am currently using PDB files for this, but from a PDB I cannot retrieve the actual code line for the extra information, in comment form, that is given for an attribute. Visual Studio uses NCB files for IntelliSense. So would it be a better idea to use NCB instead of PDB? If yes, how do I retrieve information from NCB files? Is there any SDK for them, like the DIA SDK?

  • How to use ShowHelp with Vista's virtualized "Program Files" folder

    - by fmunkert
    Hi, we have the problem that ShowHelp seems to fail under Vista and Windows Server 2008 if the path name of the help file contains a virtualized folder name. For example, under the German version of Vista, "Program Files" is called "Programme". The call

        System.Windows.Forms.Help.ShowHelp(null,
            @"C:\Programme\Microsoft Visual Studio 9.0\Common7\Tools\spyxx.chm");

    fails, whereas

        System.Windows.Forms.Help.ShowHelp(null,
            @"C:\Program Files\Microsoft Visual Studio 9.0\Common7\Tools\spyxx.chm");

    succeeds. Is there any way in C# to convert a file path into its non-virtualized counterpart? Or is there any other solution to this problem? Regards
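
    One angle that may resolve this, hedged because I can't verify it against that exact Vista build: "Programme" is essentially a localized compatibility name, while the directory on disk remains "Program Files", so building the path from the special-folder API should sidestep the virtualized name entirely. A sketch:

        using System;
        using System.IO;
        using System.Windows.Forms;

        // resolve the real, non-localized Program Files path at runtime
        string programFiles = Environment.GetFolderPath(Environment.SpecialFolder.ProgramFiles);
        string chmPath = Path.Combine(programFiles,
            @"Microsoft Visual Studio 9.0\Common7\Tools\spyxx.chm");
        Help.ShowHelp(null, chmPath);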

  • Check to see if file transfer is complete

    - by Cymon
    We have a daily job that processes files delivered from an external source. The process usually runs fine without any issues but every once in a while we have an issue of attempting to process a file that is not completely transferred. The external source SCPs these files from a UNIX server to our Windows server. From there we try to process the files. Is there a way to check to see if a file is still being transferred? Does UNIX put a lock on a file while SCPing it that we could check on the Windows side?
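
    Two approaches tend to come up for this. The cleanest is to have the sender scp to a temporary name and rename on completion (the rename is atomic, and scp itself takes no Windows lock). When the sender can't be changed, a heuristic like the sketch below is common; names and the wait interval are illustrative:

        using System;
        using System.IO;
        using System.Threading;

        static bool LooksComplete(string path)
        {
            long before = new FileInfo(path).Length;
            Thread.Sleep(2000);                       // wait, then re-measure
            long after = new FileInfo(path).Length;
            if (before != after) return false;        // still growing

            try
            {
                // an exclusive open fails while a writer holds the file
                using (File.Open(path, FileMode.Open, FileAccess.Read, FileShare.None))
                    return true;
            }
            catch (IOException)
            {
                return false;                         // writer still active
            }
        }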

  • combine two GCC compiled .o object files into a third .o file

    - by ~lucian.grijincu
    How does one combine two GCC-compiled .o object files into a third .o file?

        $ gcc -c a.c -o a.o
        $ gcc -c b.c -o b.o
        $ ??? a.o b.o -o c.o
        $ gcc c.o other.o -o executable

    If you have access to the source files, the -combine GCC flag will merge the source files before compilation:

        $ gcc -c -combine a.c b.c -o c.o

    However, this only works for source files, and GCC does not accept .o files as input for this command. Normally, linking .o files does not work properly, as you cannot use the output of the linker as input for it. The result is a shared library, and it is not linked statically into the resulting executable.

        $ gcc -shared a.o b.o -o c.o
        $ gcc c.o other.o -o executable
        $ ./executable
        ./executable: error while loading shared libraries: c.o: cannot open shared object file: No such file or directory
        $ file c.o
        c.o: ELF 32-bit LSB shared object, Intel 80386, version 1 (SYSV), dynamically linked, not stripped
        $ file a.o
        a.o: ELF 32-bit LSB relocatable, Intel 80386, version 1 (SYSV), not stripped
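
    The missing command is a relocatable (partial) link, which is exactly what ld's -r option produces; the output is itself an ordinary .o:

        $ ld -r a.o b.o -o c.o
        $ gcc c.o other.o -o executable

    After this, `file c.o` should report "relocatable" rather than "shared object".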

  • C1083 : Permission denied on .sbr files

    - by speps
    Hello, I am using Visual Studio 2005 (with SP1) and I am getting weird errors concerning .sbr files. These files, as I read on MSDN, are intermediate files used by BSCMAKE to generate a .bsc file. The errors I get are, for example (on different builds):

        11>string.cpp : fatal error C1083: Impossible d'ouvrir le fichier généré(e) par le compilateur : '.\debug\String.sbr' : Permission denied
        58>type.cpp : fatal error C1083: Impossible d'ouvrir le fichier généré(e) par le compilateur : '.\Debug/Type.sbr' : Permission denied

    (Translation: cannot open compiler-generated file.) It seems to be consistent (I have at least 5 or 6 examples like this) with a .cpp file being compiled twice in the same project, respectively:

        11>String.cpp
        *some warnings, 2 lines*
        11>String.cpp

        58>Type.cpp
        *some warnings and other files compiled, a lot of lines*
        58>Type.cpp

    I already checked the .vcproj files for duplicate entries, and that does not seem to be the problem. I would appreciate any help regarding this issue. Deactivating the build of .bsc files seems to be a workaround, but maybe someone has better information. Thanks.

  • Is my webserver being abused for banking fraud?

    - by koffie
    Since a few weeks ago, I've been getting a lot of 403 errors from Apache in my log files that seem to be related to a bank-fraud scheme. The relevant log entries look like this (the IP 1.2.3.4 is one I made up; I did not modify the rest of each line):

        www.bradesco.com.br:80 / 1.2.3.4 - - [01/Dec/2012:07:20:32 +0100] "GET / HTTP/1.1" 403 427 "-" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11"
        www.bb.com.br:80 / 1.2.3.4 - - [01/Dec/2012:07:20:32 +0100] "GET / HTTP/1.1" 403 370 "-" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11"
        www.santander.com.br:80 / 1.2.3.4 - - [01/Dec/2012:07:20:33 +0100] "GET / HTTP/1.1" 403 370 "-" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11"
        www.banese.com.br:80 / 1.2.3.4 - - [01/Dec/2012:07:20:33 +0100] "GET / HTTP/1.1" 403 370 "-" "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11"

    The log format I use is:

        LogFormat "%V:%p %U %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\""

    The strange thing is that all these domains belong to banks, and 3 out of the 4 are also on the list of the bank-fraud scheme described at http://www.abuse.ch/?p=2925. I would really like to know whether my server is being abused for bank fraud or not. I suspect not, because it's returning 403 to all requests, but any extra checks I can do to make sure my server is not being abused are welcome. I'm also curious how the "bad guys" expected my server to behave: are they expecting it to act as a proxy to hide the IP of the fake site, or to actually serve the fake banking website? Is the IP 1.2.3.4 more likely to be a victim's or a bad guy's? I suspect a bad guy, because it's quite unlikely that a real person would visit 4 bank sites in one second. If it is a bad guy, I'm very curious what he is trying to do.

  • What libraries are available for manipulating super large images in .Net

    - by tpower
    I have some really large files, for example a 320 MB TIFF with 14000 x 9000 pixels. The operations I need to perform are basically scaling the image to get smaller versions of it and breaking it into tiles. My code works fine with small files and I use the .NET Bitmap objects, but I occasionally get OutOfMemory exceptions for larger files. I've tried the FreeImage library's FreeImageBitmap but have the same problems. I'm using something like the following to scale the image:

        using (Graphics g = Graphics.FromImage((Image)result))
        {
            g.DrawImage(
                source,
                xOffset,
                yOffset,
                source.Width * scale,
                source.Height * scale
            );
        }

    Ideally I'd like a third-party library to do all the hard work, but if you have any tips or resources with more information, I would appreciate it.
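
    As a partial workaround while looking for a library, the DrawImage overload that takes a source rectangle lets you produce tiles without allocating a full-size scaled copy. Caveat: GDI+ still decodes the entire source bitmap, so this only reduces output-side memory. A sketch with illustrative names (tileSize, outDir, x, y, scale):

        using System.Drawing;
        using System.Drawing.Imaging;
        using System.IO;

        // copy one tile out of the large source, scaling as we go
        Rectangle srcRect = new Rectangle(x, y, tileSize, tileSize);
        Rectangle destRect = new Rectangle(0, 0,
            (int)(tileSize * scale), (int)(tileSize * scale));

        using (Bitmap tile = new Bitmap(destRect.Width, destRect.Height))
        using (Graphics g = Graphics.FromImage(tile))
        {
            g.DrawImage(source, destRect, srcRect, GraphicsUnit.Pixel);
            tile.Save(Path.Combine(outDir,
                string.Format("tile_{0}_{1}.png", x, y)), ImageFormat.Png);
        }

    For truly streaming decode of huge TIFFs (scanline by scanline, never holding the whole image), a library such as LibTiff.Net is worth a look.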

  • Rolling back file moves, folder deletes and mysql queries

    - by Workoholic
    This has been bugging me all day, and there is no end in sight. When the user of my PHP application adds a new update and something goes wrong, I need to be able to undo a complex batch of mixed commands. They can be MySQL UPDATE and INSERT queries, file deletions, and folder renames and creations. I can track the status of all INSERT commands and undo them if an error is thrown, but how do I do this with the UPDATE statements? Is there a smart way (some design pattern?) to keep track of such changes, both in the file structure and in the database? My database tables are MyISAM. It would be easy to just convert everything to InnoDB so that I can use transactions; that way I would only have to deal with the file and folder operations. Unfortunately, I cannot assume that all clients have InnoDB support, and it would also require me to convert many tables in my database to InnoDB, which I am hesitant to do.
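
    Since InnoDB can't be assumed, the usual fallback is the command pattern: record an inverse operation before each step and replay the inverses on failure. A minimal sketch (assumes PHP 5.3 closures; paths and queries are illustrative); for an UPDATE, the inverse is built by SELECTing the old values first:

        <?php
        // record the inverse of every step before running it, and replay
        // the inverses in reverse order if anything fails
        $undo = array();

        function run_step($do, $undoStep, array &$undo) {
            $undo[] = $undoStep;              // register rollback first
            if (call_user_func($do) === false) {
                throw new Exception('step failed');
            }
        }

        try {
            run_step(
                function () { return rename('tmp/f.txt', 'live/f.txt'); },
                function () { return rename('live/f.txt', 'tmp/f.txt'); },
                $undo
            );
            // for an UPDATE on MyISAM: SELECT the old row first and push
            // an UPDATE that restores those values as the inverse step
        } catch (Exception $e) {
            foreach (array_reverse($undo) as $step) {
                call_user_func($step);        // best-effort rollback
            }
        }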

  • Most scalable way of serving a small set of static HTTP content

    - by Ekevoo
    The story: Hi guys. I'm among the people responsible for serving the results of the most anticipated (by number of people participating) annual entrance exam in my state. As such, when our results are published, the interest is overwhelming. In the past we delegated the responsibility of serving the results to the media, but that spoils the officialness of these results a little. This year we went with a little (long overdue) experiment of using lighttpd instead of Apache, as well as other physical network optimizations I wasn't directly involved with. The results were very satisfactory: the server didn't choke even once, nor did we see any of the usual Twitter complaints about unavailability and/or slowness that were previously common. However, because we still delegated the first publication of the results to the media, I'm still not 100% sure we can handle the load of actually publishing the results first. The question: these files are about 14 MB in total, and a truly lightweight Linux distribution isn't that big either, so I'm thinking: what if next year we run entirely from a RAM drive? Does such a setup exist? Is it useful? Is it worth it for a team that uses Debian almost exclusively? Or are there other optimizations I should be focusing on instead?
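
    On the RAM-drive idea: no special distribution is needed, since Linux can mount a tmpfs anywhere, as in this sketch (paths illustrative). That said, for a 14 MB working set the page cache will keep everything in RAM after the first read anyway, so tmpfs mostly buys predictability rather than raw speed:

        # an in-RAM filesystem, sized well above the ~14 MB of results
        mount -t tmpfs -o size=64m tmpfs /var/www/results

        # tmpfs is empty after every boot, so repopulate it
        cp -a /srv/results/. /var/www/results/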

  • Parsing log files in a folder in ColdFusion

    - by Simon Guo
    The problem: there is a folder ./log/ containing files like:

        jan2010.xml, feb2010.xml, mar2010.xml, jan2009.xml, feb2009.xml, mar2009.xml, ...

    Each XML file looks like:

        <root>
            <record name="bob" spend="20"></record>
            ...(more records)
        </root>

    I want to write a piece of ColdFusion code (log.cfm) that parses those XML files. On the front end I would let the user choose a year and click a submit button. All the content for that year will then be shown in a separate table per month. Each table shows the total money spent by each person, like:

        person   cost
        bob      200
        mike     300
        Total    500

    Thanks.
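
    Not an authoritative answer, but a sketch of the usual shape of such a page (names like form.year are illustrative): cfdirectory lists the chosen year's files, xmlParse reads each one, and a struct accumulates per-person totals for one table per month.

        <cfdirectory action="list" directory="#expandPath('./log')#"
                     filter="*#form.year#.xml" name="qLogs">

        <cfloop query="qLogs">
            <cfset doc = xmlParse(fileRead(qLogs.directory & "/" & qLogs.name))>
            <cfset totals = structNew()>
            <cfloop array="#doc.root.xmlChildren#" index="rec">
                <cfset person = rec.xmlAttributes.name>
                <cfif NOT structKeyExists(totals, person)>
                    <cfset totals[person] = 0>
                </cfif>
                <cfset totals[person] = totals[person] + rec.xmlAttributes.spend>
            </cfloop>
            <!--- render one month's table (person/cost rows plus a grand
                  total) from "totals" here --->
        </cfloop>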

  • Blackberry Widget Packager saves my files in strange places

    - by chibineku
    I have just installed the Blackberry Widget Packager and Blackberry Web Plug-In for Eclipse, and everything works fine, but my files are output to strange places. For example, I tried putting my zipped source files in the folder Blackberry Widget Packager/web and I got an error during packaging. Packaging works when the .zip is in the same directory as wwbp, though. When a widget is successfully created, the executable .jar file and the .rapc files are put in some stupid folder like users/user/temp/widgetname094098456, and the other files are split between two folders in Blackberry Widget Packager/bin. This is slightly annoying as I don't want to be spending time herding my files. Anyone have any thoughts on why my files are being scattered like this?

  • How to know which files or folders have changed before committing

    - by Pedro
    My problem is how to know which files or folders have changed before doing a commit. I can add all the new files in my working copy before committing, and the repository changes, but if, for example, I delete one file from the working copy, I don't know how to add this change before the commit. When you use TortoiseSVN, for example, before committing the program shows all the changes in the working copy, and you can choose which changes to commit and which not to. Is there some way to do this using SharpSvn? Thanks for your answers!
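
    SharpSvn does expose this: SvnClient.GetStatus returns one entry per changed item, much like TortoiseSVN's commit dialog. A sketch (workingCopyPath is yours); a file deleted on disk shows up as Missing, and scheduling it with client.Delete makes the removal part of the next commit:

        using System;
        using System.Collections.ObjectModel;
        using SharpSvn;

        using (SvnClient client = new SvnClient())
        {
            Collection<SvnStatusEventArgs> changes;
            client.GetStatus(workingCopyPath, out changes);

            foreach (SvnStatusEventArgs change in changes)
            {
                // LocalContentStatus: Added, Modified, Missing, NotVersioned, ...
                Console.WriteLine("{0}: {1}",
                    change.LocalContentStatus, change.Path);
            }
        }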

  • PHP FTP Upload thousands of files

    - by user275074
    Hi, I've written a small FTP class which I use to move files from a local server to a remote server. It does this by checking an array of local files against an array of files on the remote server. If a file already exists on the remote server, it won't bother uploading it. The script works fine for small numbers of files, but I've noticed that the local server can have as many as 3000+ image files to transfer, which seems to cause the script to give up after transferring only a hundred or so. How can I modify the script to handle potentially thousands of file transfers?
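
    A hardening sketch, not a drop-in ($files_to_send, $host, $user and $pass are illustrative): lift PHP's script time limit, and use a fresh FTP session per batch so one stalled or timed-out control connection can't kill the whole run.

        <?php
        set_time_limit(0);                            // no 30-second script limit

        foreach (array_chunk($files_to_send, 250) as $batch) {
            $conn = ftp_connect($host);
            ftp_login($conn, $user, $pass);
            ftp_pasv($conn, true);                    // passive mode behind firewalls

            foreach ($batch as $file) {
                if (!ftp_put($conn, $file, $file, FTP_BINARY)) {
                    error_log("upload failed: $file"); // log and continue
                }
            }

            ftp_close($conn);                         // fresh session next batch
        }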

  • Optimize included files and uses in Delphi

    - by Roland Bengtsson
    I am trying to increase the performance of Delphi 2007 and Code Insight. In the application there are 483 files added to the DPR file. I don't know if it is my imagination, but I feel I got better performance from Code Insight simply by re-adding all the files to the DPR. I also think (correct me if I'm wrong) that all files included in a uses section should also be included in the DPR file for best performance. My question is: does a tool exist that scans the whole project and lists which files are missing from the DPR file and which files can be removed? It would also be nice to have a list of uses entries that can be removed from the PAS files. Regards

  • Converting Multiple files to zip and saving them in ownCloud

    - by user1055380
    I wanted to convert an array with some CSS, JS and HTML files into a zip file and save it in ownCloud (it has its own framework, but knowledge of it is not required). What I am getting instead is an infinite nesting of zip files, as in a zip inside a zip, so I can't even check whether the code is working correctly. Please help. Here is the code:

        <?php
        /* creates a compressed zip file */
        $filename = $_GET["filename"];

        function create_zip($files = array(), $destination = '', $overwrite = false) {
            // if the zip file already exists and overwrite is false, return false
            if (file_exists($destination) && !$overwrite) { return false; }
            // vars
            $valid_files = array();
            // if files were passed in...
            if (is_array($files)) {
                // cycle through each file
                foreach ($files as $file => $local) {
                    // make sure the file exists
                    if (file_exists($file)) {
                        $valid_files[$file] = $local;
                    }
                }
            }
            // if we have good files...
            if (count($valid_files)) {
                // create the archive
                $zip = new ZipArchive();
                if ($zip->open($destination, $overwrite ? ZIPARCHIVE::OVERWRITE : ZIPARCHIVE::CREATE) !== true) {
                    return false;
                }
                // add the files
                foreach ($valid_files as $file => $local) {
                    $zip->addFile($file, $local);
                }
                // debug
                // echo 'The zip archive contains ', $zip->numFiles, ' files with a status of ', $zip->status;
                // close the zip -- done!
                $zip->close();
                // check to make sure the file exists
                return file_exists($destination);
            } else {
                return false;
            }
        }

        $files_to_zip = array(
            'apps/impressionist/css/mappingstyle.css' => '/css/mappingstyle.css',
            'apps/impressionist/css/style.css' => '/css/style.css',
            'apps/impressionist/js/jquery.js' => '/scripts/jquery.js',
            'apps/impressionist/js/impress.js' => '/scripts/impress.js',
            realpath('apps/impressionist/output/'.$filename.'.html') => $filename.'.html'
        );

        // if true, good; if false, zip creation failed
        $result = create_zip($files_to_zip, $filename.'.zip');

        $save_file = OC_App::getStorage('impressionist');
        $save_file->file_put_contents($filename.'.zip', $files_to_zip);
        ?>
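
    One concrete bug stands out, though this can't be verified against ownCloud's storage API here: the final file_put_contents is handed the $files_to_zip array instead of the zip archive's bytes, and it runs even when zipping failed. A sketch of the likely fix:

        // write the actual zip bytes, and only if create_zip() succeeded
        if ($result) {
            $save_file = OC_App::getStorage('impressionist');
            $save_file->file_put_contents($filename.'.zip',
                file_get_contents($filename.'.zip'));
        }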

  • Nginx: Disallow index.html in URL

    - by Martin Vilcans
    We're generating a site consisting of only static files (using Assemble). Having the .html extension on URLs looks so nineties, so we generate every static HTML file in its own directory and call it index.html. For example, the URL http://www.example.com/foo/bar/ is in the file /var/www/foo/bar/index.html. This works well, but there is one small thing nagging me: there are now two possible URLs to the same resource:

        http://www.example.com/foo/bar/            (slash URL)
        http://www.example.com/foo/bar/index.html  (index.html URL)

    By accident someone may link to the index.html form of the URL, which is bad for SEO and looks ugly (remember the nineties?). Is it possible in nginx to give a 404 error on the index.html URL, but serve the slash URL? I tried this:

        location ~ /index\.html$ {
            return 404;
        }

    But it seems that nginx does an internal rewrite of the slash URL to the index.html URL and then matches this location, so we get a 404 even on the slash URL. Note that to catch mistakes, we want index.html URLs to be an error, not just a redirect to the slash URL.
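
    The observed behaviour is real: the index module performs an internal rewrite to .../index.html, so a location match on $uri catches both forms. Matching the original request line instead sidesteps that; a sketch:

        location / {
            # $request_uri is the URL exactly as the client sent it; internal
            # rewrites do not change it, so only explicit .../index.html
            # requests are refused
            if ($request_uri ~ "/index\.html$") {
                return 404;
            }
            index index.html;
        }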
