Search Results

Search found 39784 results on 1592 pages for 'ignore files'.


  • Alternative Windows Offline Files + Windows Backup + Previous Version Setup

    - by Herson
    Currently our documents are all hosted on a Windows 7 box. Users can access the files over a Windows share, and the documents are available offline (a Windows 7 feature). The documents are backed up daily by the Windows 7 Backup and Restore utility, and users can access previous versions of a file (from the backups) using Windows Explorer's "Previous Versions" feature. This setup is currently working well, except for the following:

    - We would prefer to have access to hourly versions of a file, not daily.
    - The previous-version mechanism is tied to the backup mechanism. Windows 7 performs a full backup every week and an incremental backup every day, and the previous versions of a file are simply whatever is available in the backups. If you have 20GB of documents and want to maintain at least three (3) years of history, you will use at minimum 3 years * 52 weeks * 20GB, or about 3TB, even if there are few changes in the documents. It's a pretty inefficient use of space.
    - Looking up previous versions of a file is very slow (tens of minutes). This is probably related to the previous issue - Windows has to traverse all of its backups.

    I am considering using SVN with TortoiseSVN's autocommit/autoupdate. It would have the following advantages:

    - Backups are easy and also capture the whole history of each document (just back up the repository).
    - Previous versions can be created frequently. I think an svn commit/update cycle can be run every two minutes or so.
    - Users can sync over the net.

    However, I can see the following issues:

    - More conflicts than in the original setup, because multiple users can now edit the same file even when both are online, i.e. can connect to the SVN repo. The users can of course lock the file first before editing, but that would mean they have to adjust.
    - Delay in propagation of file changes. With Windows 7 file sharing, changes made by one online user are instantaneously visible to other online users. With the SVN setup, changes are only propagated when the users execute the svn add/commit/update sequence, so the delay will probably be a few minutes. This workflow will no longer work: "Hi, I just edited document X, can you have a quick look?"

    I would like to ask the opinion of the community on alternative setups, or improvements to the above setup to work out the kinks.
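    A minimal sketch of the autocommit cycle described above (assumptions: the svn command-line client is on PATH, the share is already a checked-out working copy, and the path and commit message are placeholders):

        import subprocess, time

        WC = r"C:\DocsWorkingCopy"  # placeholder: the shared working copy

        while True:
            subprocess.call(["svn", "update", WC])                       # pull other users' changes
            subprocess.call(["svn", "add", "--force", "--quiet", WC])    # schedule any new files
            subprocess.call(["svn", "commit", WC, "-m", "autocommit"])   # publish local edits
            time.sleep(120)  # the two-minute cadence mentioned above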


  • nginx+django serving static files

    - by avalore
    I have followed the instructions for setting up Django with nginx from the Django wiki (https://code.djangoproject.com/wiki/DjangoAndNginx) and have nginx set up as follows (a few name changes to fit my setup):

        user nginx nginx;
        worker_processes 2;

        error_log /var/log/nginx/error_log info;

        events {
            worker_connections 1024;
            use epoll;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            log_format main '$remote_addr - $remote_user [$time_local] '
                            '"$request" $status $bytes_sent '
                            '"$http_referer" "$http_user_agent" '
                            '"$gzip_ratio"';

            client_header_timeout 10m;
            client_body_timeout 10m;
            send_timeout 10m;

            connection_pool_size 256;
            client_header_buffer_size 1k;
            large_client_header_buffers 4 2k;
            request_pool_size 4k;

            gzip on;
            gzip_min_length 1100;
            gzip_buffers 4 8k;
            gzip_types text/plain;

            output_buffers 1 32k;
            postpone_output 1460;

            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;

            keepalive_timeout 75 20;

            ignore_invalid_headers on;
            index index.html;

            server {
                listen 80;
                server_name localhost;

                location /static/ {
                    root /srv/static/;
                }

                location ~* ^.+\.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js|mov) {
                    access_log off;
                    expires 30d;
                }

                location / {
                    # host and port to fastcgi server
                    fastcgi_pass 127.0.0.1:8080;
                    fastcgi_param PATH_INFO $fastcgi_script_name;
                    fastcgi_param REQUEST_METHOD $request_method;
                    fastcgi_param QUERY_STRING $query_string;
                    fastcgi_param CONTENT_TYPE $content_type;
                    fastcgi_param CONTENT_LENGTH $content_length;
                    fastcgi_pass_header Authorization;
                    fastcgi_intercept_errors off;
                    fastcgi_param REMOTE_ADDR $remote_addr;
                }

                access_log /var/log/nginx/localhost.access_log main;
                error_log /var/log/nginx/localhost.error_log;
            }
        }

    Static files aren't being served (nginx 404s). If I look in the access log, it seems nginx is looking in /etc/nginx/html/static... rather than /srv/static/ as specified in the config. I've no clue why it's doing this; any help would be hugely appreciated.
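    One piece of standard nginx behavior worth noting here (this is the documented root/alias distinction, not anything specific to this config): root appends the full request URI to the configured path, while alias substitutes the matched location prefix. A small Python illustration of the difference:

        # Illustration only: how nginx turns a URI into a filesystem path.
        def map_root(root, uri):
            # 'root' appends the FULL request URI to the configured path
            return root.rstrip('/') + uri

        def map_alias(location, alias, uri):
            # 'alias' replaces the matched location prefix instead
            return alias + uri[len(location):]

        print(map_root('/srv/static/', '/static/site.css'))               # /srv/static/static/site.css
        print(map_alias('/static/', '/srv/static/', '/static/site.css'))  # /srv/static/site.css

    So with the location /static/ { root /srv/static/; } block above, nginx would look for the files under /srv/static/static/.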


  • Serving protected files using Nginx's X-Accel-Redirect header

    - by andybak
    I'm trying to serve protected files using this directive in my nginx.conf:

        location /secure/ {
            internal;
            alias /home/ldr/webapps/nginx/app/secure/;
        }

    I'm passing in paths in the form "/myfile.doc", and the file's path would be /home/ldr/webapps/nginx/app/secure/myfile.doc. I just get 404s when I access "http: //myserver/secure/myfile.doc" (space inserted after http to stop ServerFault converting it to a link). I've tried taking the trailing / off the location directive and that makes no difference.

    Two questions:

    1. How do I fix it?
    2. How can I debug problems like this myself? How can I get nginx to report which path it's looking for? error.log shows nothing and access.log just tells me which URL is being requested - that's the bit I already know! It's no fun trying things randomly without any feedback.

    Here's my entire nginx.conf:

        daemon off;
        worker_processes 2;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;

            server {
                listen 21534;
                server_name my.server.com;
                client_max_body_size 5m;

                location /media/ {
                    alias /home/ldr/webapps/nginx/app/media/;
                }

                location / {
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    fastcgi_pass unix:/home/ldr/webapps/nginx/app/myproject/django.sock;
                    fastcgi_pass_header Authorization;
                    fastcgi_hide_header X-Accel-Redirect;
                    fastcgi_hide_header X-Sendfile;
                    fastcgi_intercept_errors off;
                    include fastcgi_params;
                }

                location /secure {
                    internal;
                    alias /home/ldr/webapps/nginx/app/secure/;
                }
            }
        }

    EDIT: I'm trying some of the suggestions here. So I've tried:

        location /secure/ {
            internal;
            alias /home/ldr/webapps/nginx/app/;
        }

    both with and without the trailing slash on location. I've also tried moving this block before the "location /" directive. The page I linked to has ^~ after 'location', giving:

        location ^~ /secure/ {
            ...etc...

    Not sure what that signifies, but it didn't work either!
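    For context, a minimal sketch of the application side that pairs with an internal location like this, assuming the upstream is the Django app behind the fastcgi_pass above (the view name and file path are hypothetical):

        from django.http import HttpResponse

        def send_protected_file(request):
            # Ask nginx to serve the file from the internal /secure/ location;
            # the client never sees this path, nginx resolves it via the alias.
            response = HttpResponse()
            response['X-Accel-Redirect'] = '/secure/myfile.doc'  # placeholder file
            return response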


  • How to check there are no html files in current directory?

    - by kev
    I have a script which downloads HTML files into the current directory, then generates a report based on those HTML files, and finally deletes them all. So, when I run this script, I want to make sure there are no HTML files in the current dir beforehand. This is what I've got:

        if ls *.html >/dev/null 2>&1; then
            echo 'clear HTML files first'
            exit
        fi

    Is there any easier way to check?
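    For comparison, the same pre-flight check in Python (a sketch, assuming the script is run from the directory being checked):

        import glob, sys

        # Any *.html left over from a previous run?
        if glob.glob('*.html'):
            sys.exit('clear HTML files first')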


  • SCons does not clean all files

    - by meowsqueak
    I have a file system containing directories of "builds", each of which contains a file called build-info.xml. However, some of the builds happened before the build script generated build-info.xml, so for that case I have a somewhat non-trivial SCons SConstruct that is used to generate a skeleton build-info.xml so that it can be used as a dependency for further rules. That is, for each directory:

    - If build-info.xml already exists, do nothing. More importantly, do not remove it on a 'scons --clean'.
    - If build-info.xml does not exist, generate a skeleton one instead - build-info.xml has no dependencies on any other files - the skeleton is essentially minimal defaults.
    - During a --clean, remove build-info.xml if it was generated, otherwise leave it be.

    My SConstruct looks something like this:

        def generate_actions_BuildInfoXML(source, target, env, for_signature):
            cmd = "python '%s/bin/create-build-info-xml.py' --version $VERSION --path . --output ${TARGET.file}" % (Dir('#').abspath,)
            return cmd

        bld = Builder(generator = generate_actions_BuildInfoXML, chdir = 1)
        env.Append(BUILDERS = { "BuildInfoXML" : bld })

        ...

        # VERSION = some arbitrary string, not important here
        # path = filesystem path, set elsewhere
        build_info_xml = "%s/build-info.xml" % (path,)

        if not os.path.exists(build_info_xml):
            env.BuildInfoXML(build_info_xml, None, VERSION = build)

    My problem is that 'scons --clean' does not remove the generated build-info.xml files. I played around with env.Clean(t, build_info_xml) within the 'if', but I was unable to get this to work - mainly because I could not work out what to assign to 't'. I want a generated build-info.xml to be cleaned unconditionally, rather than based on the cleaning of another target, and I wasn't able to get that to work. If I tried a simple env.Clean(None, "build_info_xml") after but outside the 'if', I found that SCons would clean every single build-info.xml file, including those that weren't generated. Not good either.

    What I'd like to know is how SCons goes about determining which files should be cleaned and which should not. Is there something funny about the way I've used a generator function that prevents SCons from recording this target as a Clean candidate?
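    On the question of what to assign to 't': a builder call returns its target node(s), and that return value is a valid first argument to Clean(). A sketch (untested, and note it only takes effect on runs where the file was absent at SConscript-read time):

        # Sketch: tie the clean-up to the node the builder itself returns.
        if not os.path.exists(build_info_xml):
            nodes = env.BuildInfoXML(build_info_xml, None, VERSION = build)
            env.Clean(nodes, build_info_xml)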


  • WiX 3: Using heat.exe to add bulk files to a new WiX project: HEAT5150

    - by Karen Kwong
    If this is a repeat question, please direct me to the existing solution; I wasn't able to find a matching query. We currently use InstallShield. I'm attempting to convert a project with 407 files to a WiX 3 installation package. I tried using heat.exe to do some of the automation:

        c:\> heat dir "c:\projectDir\projectA" -gg -ke -template:Product -out "c:\install\projectA\heatOutput"

    but I get the following warning for almost every file:

        heat.exe: warning HEAT5150 : Could not harvest data from a file that was expected to be a SelfReg DLL: c:\projectDir\projectA\plugin1.dll. If this file does not support SelfReg you can ignore this warning. Otherwise, this error detail may be helpful to diagnose the failure: Unable to load file: c:\projectDir\projectA\plugin1.dll, error: 126.

    Q: Is it normal for this warning to be reported for every file?

    If there's a current "How To create/convert to your first WiX install project with many files" tutorial, please point me to it. The key requirement is "with many files". Thank-you.

    -Karen Kwong-

    PS. I know that WiX is designed for incremental install project creation, but it would be nice to know if there's an automated way to convert existing install projects.


  • Using .NET's HttpWebRequest to download a multitude of files in a row

    - by Cornelius
    I have an application that needs to download several files in succession (sometimes a few thousand). However, when several files need to be downloaded, I end up getting an exception with an inner exception of type SocketException and the error code 10048 (WSAEADDRINUSE). I did some digging and basically it's because the server has run out of sockets (and they all wait 240s or so before they become available again) - not coincidentally, it starts happening around the 1024-file range. I would expect that the HttpWebRequest/ServicePointManager would be reusing my connection, but apparently it is not (and the files are https, so that may be part of it). I never saw this problem in the C++ code that this was ported from (but that doesn't mean it never happened - though I'd be surprised if it had).

    I am properly closing the WebRequest object, and the HttpWebRequest object has KeepAlive set to true by default. My next intent is to fiddle around with ServicePointManager.SetTcpKeepAlive(). However, I can't see how more people haven't run into this problem. Has anyone else run into it, and if so, what did you do to get around it? Currently I have a retry scheme that detects this error and waits it out, but that doesn't seem like the right thing to do.

    Here's some basic code to verify what I'm doing (just in case I'm missing closing something):

        WebRequest webRequest = WebRequest.Create(uri);
        webRequest.Method = "GET";
        webRequest.Credentials = new NetworkCredential(username, password);
        WebResponse webResponse = webRequest.GetResponse();
        try
        {
            using (Stream stream = webResponse.GetResponseStream())
            {
                // read the stream
            }
        }
        finally
        {
            webResponse.Close();
        }


  • Writing files in App_Data causes tempdata to be null

    - by RAMX
    I have a small ASP.NET MVC 1 web app that can store files and create directories in the App_Data directory. When the write operation succeeds, I add a message to the TempData and do a RedirectToRoute. The problem is that the TempData is null when the action is executed. If I write the files in a directory outside of the web application's root directory, the TempData is not null and everything works correctly. Any ideas why writing in App_Data seems to clear the TempData?

    Edit: if DRS.Logic.Repository.Manager.CreateFile(path, hpf, comment) writes in App_Data, TempData will be null in the action being redirected to. If it is a directory out of the web app root, it is fine. No exceptions are being thrown.

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Create(int id, string path, FormCollection form)
        {
            ViewData["path"] = path;
            ViewData["id"] = id;
            HttpPostedFileBase hpf;
            string comment = form["FileComment"];
            hpf = Request.Files["File"] as HttpPostedFileBase;
            if (hpf.ContentLength != 0)
            {
                DRS.Logic.Repository.Manager.CreateFile(path, hpf, comment);
                TempData["notification"] = "file was created";
                return RedirectToRoute(new
                {
                    controller = "File",
                    action = "ViewDetails",
                    id = id,
                    path = path + Path.GetFileName(hpf.FileName)
                });
            }
            else
            {
                TempData["notification"] = "No file were selected.";
                return View();
            }
        }


  • Fluent config not generating mapping files

    - by rboarman
    Hello,

    I am trying to get Fluent NHibernate to generate mappings so I can take a look at the files and the SQL. My code is based on this post and on what I can glean from the documentation: http://stackoverflow.com/questions/1375146/fluent-mapping-entities-and-classmaps-in-different-assemblies. I am using the latest code from git.

    Here's my config code:

        Configuration cfg = new Configuration();
        var ft = Fluently.Configure(cfg);

        // DbConnection by fluent
        ft.Database(
            MsSqlConfiguration
                .MsSql2008
                .ConnectionString("……")
                .ShowSql()
                .UseReflectionOptimizer()
        );

        // get mapping files.
        ft.Mappings(m =>
        {
            // set up the mapping locations
            m.FluentMappings.AddFromAssemblyOf<Entity>()
                .ExportTo(@"C:\temp");
            m.Apply(cfg);
        });

    I also tried:

        var sessionFactory = Fluently.Configure()
            .Database(MsSqlConfiguration
                .MsSql2008
                .ShowSql()
                .ConnectionString("……"))
            .Mappings(p => p.FluentMappings
                .AddFromAssemblyOf<Entity>()
                .ExportTo(@"c:\temp\"))
            .BuildSessionFactory();

    I have verified that the connection string is correct. The issue is that no mapping files show up in the ExportTo folder and no SQL code shows up in the output window or in the log file. No errors or exceptions are generated either. I have no idea where to go from here.

    Thank you in advance.
    Rick


  • Check if files in a directory are still being written in Windows Batch File

    - by FMFF
    Hello. Here's my batch file to parse a directory and zip files of a certain type:

        REM Begin ------------------------
        tasklist /FI "IMAGENAME eq 7za.exe" /FO CSV > search.log
        FOR /F %%A IN (search.log) DO IF %%~zA EQU 0 GOTO end
        for /f "delims=" %%A in ('dir C:\Temp\*.ps /b') do (
            "C:\Program Files\7-Zip\cmdline\7za.exe" a -tzip -mx9 "C:\temp\Zip\%%A.zip" "C:\temp\%%A"
            Move "C:\temp\%%A" "C:\Temp\Archive"
        )
        :end
        del search.log
        REM pause
        exit
        REM End ---------------------------

    This code works just fine for 90% of my needs, and it will be deployed as a scheduled task. However, the *.ps files are rather large (minimum of 1GB) in real cases, so the code is supposed to check that an incoming file is completely written and is not locked by the application that is writing it. I saw another example elsewhere that suggested the following approach:

        :TestFile
        ren c:\file.txt c:\file.txt
        if errorlevel 0 goto docopy
        sleep 5
        goto TestFile
        :docopy

    However, this example is good for a fixed file. How can I use that many labels and GOTOs inside a for loop without causing an infinite loop? Or is this code safe to be used in the for loop? Thank you for any help.
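    For comparison, the same rename-onto-itself test folded into a polling helper, sketched in Python (an assumption here: it relies on the Windows behavior that a file still held open by its writer cannot be renamed in place):

        import os, time

        def wait_until_complete(path, poll_seconds=5):
            # Same idea as the ren trick above: rename the file onto itself;
            # failure means the writer still has it locked.
            while True:
                try:
                    os.rename(path, path)
                    return
                except OSError:
                    time.sleep(poll_seconds)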


  • SSIS package not picking up configuration files correctly

    - by blntechie
    We have recently migrated some 30 DTS packages in SQL Server 2000 to SSIS packages in SQL Server 2008. We created the packages in such a way that all environment-related variables and other required information are picked up from configuration files, to ease maintenance of the packages across different environments like Dev, QA and Prod. After setting up all the packages with the config files, when we tested them from Business Intelligence Development Studio, they worked fine and picked up the values from the configuration file. When we changed the values in the config files to Dev or another environment, they correctly picked up the new values and executed. We tried this for two different environments and the packages worked fine, so we deployed to Prod, where they were also working fine.

    Yesterday, I had to make a functional change to one package, so I made the change (it is just changing a parameter in a SQL procedure execution task, and not related to any variables) and tested it in BIDS with two environments, where it worked fine. As the change was not related to any environment change, we deployed only the updated package (not the associated config file) manually in Prod (i.e. without the use of a manifest). The config file which was used by the package previously, and which was working fine in Prod, remained unaltered. But when the package was executed, it was pointing to QA, and I believe the package didn't read from the config file. One possible reason: the last executed values usually remain in the .dtsx file (this can be checked by opening the file in a text editor), and normally they are overwritten from the config file when a package is executed. I am guessing that is not happening here.

    What are the possible reasons for this? We have tested extensively switching between test environments and it does not show this behavior. We have encountered this in the Prod environment twice now. Has anyone else experienced this, and how did you resolve it?


  • How to exclude R*.class files from a proguard build

    - by Jeremy Bell
    I am one step away from making the method described here: http://stackoverflow.com/questions/2761443/targeting-android-with-scala-2-8-trunk-builds work with a single project (vs. one project for Scala and one for Android). I've come across a problem. I'm using this input file (arguments to ProGuard):

        -injars bin;lib/scala-library.jar(!META-INF/MANIFEST.MF,!library.properties)
        -outjar lib/scandroid.jar
        -libraryjars lib/android.jar
        -dontwarn
        -dontoptimize
        -dontobfuscate
        -dontskipnonpubliclibraryclasses
        -dontskipnonpubliclibraryclassmembers
        -keepattributes Exceptions,InnerClasses,Signature,Deprecated,
                        SourceFile,LineNumberTable,*Annotation*,EnclosingMethod
        -keep public class org.scala.jeb.** { public protected *; }
        -keep public class org.xml.sax.EntityResolver { public protected *; }

    ProGuard successfully builds scandroid.jar, however it appears to have included the generated R classes that the Android resource builder generates and compiles. In this case, they are located in bin/org/jeb/R*.class. This is not what I want. The Android dalvik converter cannot build because it thinks there is a duplicate of the R class (it's in scandroid and also in the R*.class files). How can I modify the above ProGuard arguments to exclude the R*.class files from scandroid.jar so the dalvik converter is happy?

    Edit: I should note that I tried adding ;bin/org/jeb/R.class;etc... to the -libraryjars argument, and that only seemed to make it complain about duplicate classes, and in addition ProGuard decided to exclude my Scala class files too.


  • Cron Job on Ubuntu Hardy Executing But Not Deleting Files As Expected

    - by Patrick McKenzie
    I have a bit of a pickle here and wonder if anyone can give me some pointers. I have a cron job which executes for a particular user daily and is supposed to sweep files in a particular directory. Technically, it is two jobs. I've turned on cron.log to verify they're actually executing, and they are:

        May 24 11:03:01 AppNameGoesHere /USR/SBIN/CRON[11257]: (mongrel_AppNameGoesHere) CMD (rm -rf /var/www/apps/AppNameGoesHere/current/public/{popular,index,purchasing,purchasing-alternate,support,about-us,guarantee,screenshots}.htm{,l})
        May 24 11:04:01 AppNameGoesHere /USR/SBIN/CRON[11260]: (mongrel_AppNameGoesHere) CMD (rm -rf /var/www/apps/AppNameGoesHere/current/public/{stats,popular,bcf,articles,expenses})

    I have removed the actual usernames and formatted it so that it is less ugly on StackOverflow.

    Now, my question: despite the fact that I can see these deletions executing and apparently succeeding in the log, if I go to the specified directory, the files are still there. I initially suspected permission hijinks were going on, but I've verified that I can delete the files manually by su-ing into the mongrel_AppNameGoesHere user and issuing individual rm commands or by copy/pasting the cron job to the command line. Anything that I don't manually zap stays unzapped despite days of that cron job executing successfully.

    Any suggestions as to what might be happening? I was previously using Dapper Drake with these cron jobs in the /etc/crontab file directly, and when I upgraded to Hardy I moved them to user-specific crontabs (via sudo crontab -e -u mongrel_AppNameGoesHere), which was the point where they appear to have stopped working.


  • error in implementing static files in django

    - by POOJA GUPTA
    My settings.py file:

        STATIC_ROOT = '/home/pooja/Desktop/static/'

        # URL prefix for static files.
        STATIC_URL = '/static/'

        # Additional locations of static files
        STATICFILES_DIRS = (
            '/home/pooja/Desktop/mysite/search/static',
        )

    My urls.py file:

        from django.conf.urls import patterns, include, url
        from django.contrib.staticfiles.urls import staticfiles_urlpatterns

        from django.contrib import admin
        admin.autodiscover()

        urlpatterns = patterns('',
            url(r'^search/$', 'search.views.front_page'),
            url(r'^admin/', include(admin.site.urls)),
        )

        urlpatterns += staticfiles_urlpatterns()

    I have created an app using Django which searches for keywords in 10 XML documents and then returns their frequency counts, displayed as a graphical representation plus a list of filenames and their respective counts. Now the list has the filenames hyperlinked; I want to display the files when the user clicks them, and for that I have used the static files provision in Django. The hyperlinking has been done in this manner:

        <ul>
        {% for l in list1 %}
            <li><a href="{{STATIC_URL}}static/{{l.file_name}}">{{l.file_name}}</a> {{l.frequency_count}}</li>
        {% endfor %}
        </ul>

    Now when I run my app on the server, everything runs fine, but as soon as I click on a filename, it gives me this error:

        Using the URLconf defined in mysite.urls, Django tried these URL patterns, in this order:

            ^search/$
            ^admin/
            ^static\/(?P<path>.*)$

        The current URL, search/static/books.xml, didn't match any of these.

    I don't know why this error is coming, because I have followed the steps required to achieve this. I'm new to Django, so please help.
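    For reference, a sketch of the explicit development-serving pattern that is a close cousin of staticfiles_urlpatterns(), serving whatever has been collected under STATIC_ROOT at STATIC_URL (assumptions: Django 1.3+ and DEBUG = True):

        # Appended to urls.py; django.conf.urls.static.static exists in Django 1.3+.
        from django.conf import settings
        from django.conf.urls.static import static

        urlpatterns += static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)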


  • Source Control Checkin Comments at Top Of Source Files

    - by James Wiseman
    I've noticed a discrepancy with some source files in our system, whereby some contain source-control check-in comments and some do not. These comments are added automatically to the top of the file when it is checked in:

        * $Log: //vm1/Projects/Morpheus/Sleep.bdy-arc $
        --
        --   Rev 1.14   Apr 14 2009 15:32:52   John Smith
        -- Fixed bugs 2292 and 2230.

    This seems to have been quite prevalent in all the companies with which I have worked, but I must confess that I struggle to see the point. Generally the comments aren't that good, are often left by people who have long since departed, and even when they are of a high standard it is difficult to tie them to physical code changes. It also strikes me that you are physically changing the file that you are checking in. Now, this may not be such a problem with files that will be compiled, but it could be a disaster with others, e.g. JavaScript files.

    So really, my query is: what was the motivation and concept behind providing this functionality in the first instance? Does anyone actually find these comments useful? Also, I would be curious to know if this is a feature that is commonly supported within source control systems. I am aware of it in PVCS, VSS and Subversion (Subversion Keyword Substitution); however, I wonder if it is also available in some of the more popular DVCSs. Your help, as always, is much appreciated.


  • Importing owl files

    - by Mikae Combarado
    Hello,

    I have a problem with importing OWL files using the OWL API in Java. I can successfully import 2 OWL files. However, a problem occurs when I try to import 3 or more OWL files that are integrated with each other. E.g.:

    - Base.owl -- base ontology
    - Electronics.owl -- electronics ontology, which imports Base.owl
    - Telephone.owl -- telephone ontology, which imports Base.owl and Electronics.owl

    When I just import Base.owl and load Electronics.owl, it works smoothly. The code is given below:

        File fileBase = new File("filepath/Base.owl");
        File fileElectronic = new File("filePath/Electronic.owl");

        SimpleIRIMapper iriMapper = new SimpleIRIMapper(IRI.create("url/Base.owl"), IRI.create(fileBase));

        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        manager.addIRIMapper(iriMapper);

        OWLOntology ont = manager.loadOntologyFromOntologyDocument(fileElectronic);

    However, when I want to load Telephone.owl, I just create an additional iriMapper and add it to the manager. The additional code is marked with ** below:

        File fileBase = new File("filepath/Base.owl");
        File fileElectronic = new File("filePath/Electronic.owl");
        **File fileTelephone = new File("filePath/Telephone.owl");**

        SimpleIRIMapper iriMapper = new SimpleIRIMapper(IRI.create("url/Base.owl"), IRI.create(fileBase));
        **SimpleIRIMapper iriMapper2 = new SimpleIRIMapper(IRI.create("url/Electronic.owl"), IRI.create(fileElectronic));**

        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        manager.addIRIMapper(iriMapper);
        **manager.addIRIMapper(iriMapper2);**

        OWLOntology ont = manager.loadOntologyFromOntologyDocument(**fileTelephone**);

    The code shown above gives this error:

        Could not load import: Import(<url/Electronic.owl>)
        Reason: Could not loaded imported ontology: <url/Base.owl> Cause: null

    It would be really appreciated if someone could give me a hand. Thanks in advance...


  • Not all TFS Build type files are getting copied

    - by k4k4sh1
    Because I have several builds sharing some assemblies that contain common build tasks, I have one TFSBuild.proj for all builds and import different targets depending on the build, like the following:

        <Project DefaultTargets="DesktopBuild" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="3.5">
          <Import Project="Build_1.targets" Condition="'$(BuildDefinition)'=='Build_1'" />
          <Import Project="Build_2.targets" Condition="'$(BuildDefinition)'=='Build_2'" />
          <Import Project="Build_3.targets" Condition="'$(BuildDefinition)'=='Build_3'" />
        </Project>

    Each target for a particular build has your usual content for a build type file, but in my case I also reference some tasks inside assemblies checked into the same folder as TFSBuild.proj in source control. I wanted to add folders to contain some test build targets, since my folder was getting a bit full and cluttered. The following illustrates what I mean:

        $(TFS project)\build\
            TFSBuild.proj
            Build_1.targets
            ...
            Assembly1.dll
            Assembly2.dll
            ...
            Folder\
                Test_target_1.targets
                ...

    When I started my build, however, I found that Test_target_1.targets and the other files in Folder were not being copied to the build directory, while TFSBuild.proj and the other files at the root level, as it were, of the build type folder were being copied. This caused my test build to fail immediately, since it could not reference the files inside Folder. I realize the simplest workaround would be to get rid of Folder and move all of its contents up to the build folder, but I would really like to keep Folder if at all possible. Thanks for your help in advance.


  • Python: Seeing all files in Hex.

    - by Recursion
    I am writing a Python script which looks at common computer files and examines them for similar bytes, words and double-words. I need/want to see the files in hex, and cannot really seem to get Python to open a simple file that way. I have tried codecs.open with "hex" as the encoding, but when I operate on the file descriptor it always spits back:

        File "main.py", line 41, in <module>
          main()
        File "main.py", line 38, in main
          process_file(sys.argv[1])
        File "main.py", line 27, in process_file
          seeker(line.rstrip("\n"))
        File "main.py", line 15, in seeker
          for unit in f.read(2):
        File "/usr/lib/python2.6/codecs.py", line 666, in read
          return self.reader.read(size)
        File "/usr/lib/python2.6/codecs.py", line 472, in read
          newchars, decodedbytes = self.decode(data, self.errors)
        File "/usr/lib/python2.6/encodings/hex_codec.py", line 50, in decode
          return hex_decode(input,errors)
        File "/usr/lib/python2.6/encodings/hex_codec.py", line 42, in hex_decode
          output = binascii.a2b_hex(input)
        TypeError: Non-hexadecimal digit found

    Here is the function:

        def seeker(_file):
            f = codecs.open(_file, "rb", "hex")
            for LINE in f.read():
                print LINE
            f.close()

    I really just want to see files, and operate on them, as if in a hex editor like xxd. Also, is it possible to read a file in increments of, say, a word at a time? No, this is not homework.
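    For comparison, a minimal xxd-style dump that sidesteps the hex codec by reading the file in binary mode and formatting the bytes directly (Python 2, to match the 2.6 traceback above; binascii.hexlify is another option):

        def dump(path, width=16):
            # Read raw bytes; no codec involved, so arbitrary content is fine.
            with open(path, 'rb') as f:
                offset = 0
                while True:
                    chunk = f.read(width)  # read a word at a time with f.read(2)
                    if not chunk:
                        break
                    print('%08x  %s' % (offset, ' '.join('%02x' % ord(b) for b in chunk)))
                    offset += len(chunk)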


  • No test coverage files generated for Unit Test bundle in Xcode

    - by John Gallagher
    The Problem

    I've got a Cocoa project on the desktop and I'm using Xcode 3.2.1 on Snow Leopard 10.6.2. I want to generate code coverage files for my unit test target in Xcode.

    What I've Tried

    As articles like this one suggest, I've adjusted the build settings to:

    - "Generate Test Coverage Files" checked
    - "Instrument Program Flow" checked
    - "-lgcov" added to "Other Linker Flags"

    I've also set the Run Script section of the test target to the following:

        # Run the unit tests in this test bundle.
        "${SYSTEM_DEVELOPER_DIR}/Tools/RunUnitTests"

        # Run gcov on the framework getting tested
        if [ "${CONFIGURATION}" = 'Coverage' ]; then
            FRAMEWORK_NAME=LapsusInterpretationEngine
            FRAMEWORK_OBJ_DIR=${OBJROOT}/${FRAMEWORK_NAME}.build/${CONFIGURATION}/EngineTests.build/Objects-normal/${NATIVE_ARCH}
            mkdir -p coverage
            pushd coverage
            find ${OBJROOT} -name *.gcda -exec gcov -o ${FRAMEWORK_OBJ_DIR} {} \;
            popd
        fi

    Since my framework is named LapsusInterpretationEngine but my target is named EngineTests, I put this directly into FRAMEWORK_OBJ_DIR, but this didn't seem to help. I've tried cleaning before building, and I've made sure all the above build settings apply to both the unit test target and the application target.

    What I Get

    No .gcda or .gcno files anywhere in the build directory I'm using. I point CoverStory to the Objects-normal directory in my builds folder and it complains that there's nothing there for it to read. I must be doing something really obvious wrong. Anyone have any ideas? I have also tried the "EngineTests.build" directory as ${FRAMEWORK_NAME}, and this gives the same results.


  • Execute SQL on CSV files via JDBC

    - by Markos Fragkakis
    Dear all,

    I need to apply an SQL query to CSV files (comma-separated text files). My SQL is predefined by another tool and is not eligible for change. It may contain embedded selects and table aliases in the FROM part. For my task I have found two open-source (this is a project requirement) libraries that provide JDBC drivers, plus two other approaches:

    1. CsvJdbc
    2. XlSQL
    3. JBoss Teiid
    4. Create an Apache Derby DB, load all CSVs as tables and execute the query.

    These are the problems I encountered, in the same order:

    1. It does not accept the syntax of the SQL (it uses internal selects and table aliases). Furthermore, it has not been maintained since 2004.
    2. I could not get it to work, as it has as a dependency a SAX parser that causes an exception when parsing other documents. Similarly, no change since 2004.
    3. I have not checked whether it supports the syntax, but it seems like an overhead. It needs several entities defined (Virtual Databases, Bindings). From the mailing list they told me that the last release supports runtime creation of the required objects. Has anyone used it for such a simple task (normally it can connect to several types of data, like CSV, XML or other DBs, and create a virtual, unified one)?
    4. Can this even be done easily?

    From the 4 things I considered/tried, only 3 and 4 seem viable to me. Any advice on these, or any other way in which I can query my CSV files?

    Cheers
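    Outside the JDBC requirement, for comparison, a sketch of the load-then-query approach in Python's standard library (sqlite3 stands in for Derby here; the file name, table name, query and simple header names are placeholder assumptions):

        import csv, sqlite3

        # Load each CSV as a table, then run the predefined SQL against it.
        conn = sqlite3.connect(':memory:')
        with open('table1.csv') as f:  # placeholder file/table name
            rows = csv.reader(f)
            headers = next(rows)       # assumes plain, SQL-safe column names
            conn.execute('CREATE TABLE table1 (%s)' % ', '.join(headers))
            conn.executemany('INSERT INTO table1 VALUES (%s)' % ', '.join('?' * len(headers)),
                             rows)

        for row in conn.execute('SELECT COUNT(*) FROM table1'):  # placeholder query
            print(row)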


  • Replacing unversioned files in WiX major upgrade.

    - by Joshua
    I am still having this problem. This is the closest I have come to a solution that works, and yet it doesn't quite work. Here is (most of) the code:

        <Product Id='$(var.ProductCode)' UpgradeCode='$(var.UpgradeCode)' Name="Pathways"
                 Version='$(var.ProductVersion)' Manufacturer='$(var.Manufacturer)' Language='1033'>

            <UpgradeVersion Maximum="$(var.ProductVersion)" IncludeMaximum="no"
                            Language="1033" Property="OLDAPPFOUND" />
            ...

    with the condition message "There is a later version of this program installed."

    The problem I am having is that I need the two files in the Database component to replace the previous copies. Since these files are unversioned, I have attempted to use the CompanionFile tag set to the PathwaysExe, since that is the main executable of the application, and it IS being updated, even if the log says it isn't! The strangest thing about this is that the PathwaysLdf file IS BEING UPDATED CORRECTLY, and the PathwaysMdf file IS NOT. The log seems to indicate that the "Existing file is of an equal version (Checked using version of companion)". This is very strange because that file is being replaced just fine.

    The only idea I have left is that this problem has to do with the install sequence, and I'm not sure how to proceed! I have the InstallExecuteSequence set as I do because of the SettingsXml file and my need to NOT overwrite that file, which is actually working now, so whatever solution works for the database files can't break the working settings file! ;)

    The full log is located at: http://pastebin.com/HFiGKuKN

    PLEASE AND THANK YOU!


  • Why are mercurial subrepos behaving as unversioned files in Eclipse AND TortoiseHG

    - by noam
    I am trying to use the subrepo feature of Mercurial, using the Mercurial Eclipse plugin\TortoiseHG. These are the steps I took:

    1. Created an empty dir /root
    2. Cloned all repos that I want to be subrepos inside this folder (/root/sub1, /root/sub2)
    3. Created and added the .hgsub file in the root repo (/root/.hgsub) and put all the mappings of the subrepos in it
    4. Using TortoiseHG, right-clicked on /root and selected "create repository here"
    5. Again with TortoiseHG, selected all the files inside /root and added them to the root repo
    6. Committed the root repo
    7. Pushed the local root repo into an empty repo I had set up on Kiln

    Then, I pulled the root repo in Eclipse, using import-mercurial. Now I see that all the subrepos appear as though they are unversioned (no "orange cylinder" icon next to their corresponding folders in the Eclipse file explorer). Furthermore, when I right-click on one of the subrepos, I don't get all the hg commands in the "Team" menu as I usually get with root projects - no "pull", "push", etc. Also, when I made a change to a file in a subrepo and then "committed" the root project, it told me there were no changes found.

    I see the same behavior also in TortoiseHG: when I am browsing files under /root, the files belonging directly to the root repo have a small icon (a V sign) on them marking that they are version-controlled, while the subrepos' folders aren't marked as such.

    Am I doing something wrong, or is it a bug?


  • Getting Error When Opening Files

    - by Nathan Campos
    I'm developing a simple text editor to understand the PocketC language better. I've written this:

        #include "\\Storage Card\\My Documents\\PocketC\\Parrot\\defines.pc"

        int filehandle;
        int file_len;
        string file_mode;

        initComponents() {
            createctrl("EDIT", "test", 2, 1, 0, 24, 70, 25, TEXTBOX);
            wndshow(TEXTBOX, SW_SHOW);
            guigetfocus();
        }

        main() {
            filehandle = fileopen(OpenFileDlg("Plain Text Files (*.txt)|*.txt; All Files (*.*)|*.*"), 0, FILE_READWRITE);
            file_len = filegetlen(filehandle);
            if(filehandle = -1) {
                MessageBox("File Could Not Be Found!", "Error", 3, 1);
            }
            initComponents();
            editset(TEXTBOX, fileread(filehandle, file_len));
        }

    Then I tried to run the application: it opens the Open File dialog, I select a file (at \test.txt) that I created with Notepad, and then I get the MessageBox saying that the file wasn't found. I want to know why I'm getting this if the file is all correct.

    PS: When I click to exit the MessageBox, I see that the TextBox displays where the file is (I've tested with many other files, and with all of them I get the error and this).

