Search Results

Search found 40479 results on 1620 pages for 'binary files'.


  • Serving protected files using Nginx's X-Accel-Redirect header

    - by andybak
    I'm trying to serve protected files using this directive in my nginx.conf:

    ```nginx
    location /secure/ {
        internal;
        alias /home/ldr/webapps/nginx/app/secure/;
    }
    ```

    I'm passing in paths in the form "/myfile.doc", and the file's path would be /home/ldr/webapps/nginx/app/secure/myfile.doc. I just get 404s when I access "http: //myserver/secure/myfile.doc" (space inserted after http to stop ServerFault converting it to a link). I've tried taking the trailing / off the location directive and that makes no difference. Two questions:

    1. How do I fix it?
    2. How can I debug problems like this myself? How can I get nginx to report which path it's looking for? error.log shows nothing and access.log just tells me which URL is being requested - that's the bit I already know! It's no fun trying things randomly without any feedback.

    Here's my entire nginx.conf:

    ```nginx
    daemon off;
    worker_processes 2;

    events {
        worker_connections 1024;
    }

    http {
        include mime.types;
        default_type application/octet-stream;

        server {
            listen 21534;
            server_name my.server.com;
            client_max_body_size 5m;

            location /media/ {
                alias /home/ldr/webapps/nginx/app/media/;
            }

            location / {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                fastcgi_pass unix:/home/ldr/webapps/nginx/app/myproject/django.sock;
                fastcgi_pass_header Authorization;
                fastcgi_hide_header X-Accel-Redirect;
                fastcgi_hide_header X-Sendfile;
                fastcgi_intercept_errors off;
                include fastcgi_params;
            }

            location /secure {
                internal;
                alias /home/ldr/webapps/nginx/app/secure/;
            }
        }
    }
    ```

    EDIT: I'm trying some of the suggestions here. So I've tried:

    ```nginx
    location /secure/ {
        internal;
        alias /home/ldr/webapps/nginx/app/;
    }
    ```

    both with and without the trailing slash on location. I've also tried moving this block before the "location /" directive. The page I linked to has ^~ after 'location', giving:

    ```nginx
    location ^~ /secure/ { ...etc...
    ```

    Not sure what that signifies, but it didn't work either!
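    One way to get the feedback being asked for here (a sketch, not taken from the linked article): nginx will log which file path it tried to open if the error log is switched to debug level, and a temporary, non-internal twin of /secure/ makes it possible to test the alias mapping directly in a browser. The paths below reuse the question's directories; the `debug` level only produces output if the nginx binary was built with `--with-debug`.

    ```nginx
    # Hedged debugging sketch: verbose logging plus a throwaway test location.
    error_log /home/ldr/webapps/nginx/app/error.log debug;   # needs nginx built with --with-debug

    location /secure-test/ {                                 # temporary, remove after debugging
        alias /home/ldr/webapps/nginx/app/secure/;
        # No "internal;" here, so /secure-test/myfile.doc can be requested directly.
        # If this also 404s, the problem is the alias path, not the X-Accel-Redirect handshake.
    }
    ```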

    Read the article

  • How to check there are no html files in current directory?

    - by kev
    I have a script which downloads HTML files into the current directory, then generates a report based on those HTML files, and finally deletes them all. So, when I run this script, I want to make sure there are no HTML files in the current directory to begin with. This is what I've got:

    ```bash
    if ls *.html >/dev/null 2>&1; then
        echo 'clear HTML files first'
        exit
    fi
    ```

    Is there an easier way to check?
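    A couple of alternatives, sketched here rather than quoted from the article: `compgen -G` (a bash builtin) tests whether a glob matches anything without spawning `ls`, and `find ... -quit` avoids problems with an unmatched glob expanding to the literal string `*.html`.

    ```bash
    #!/usr/bin/env bash
    # Sketch: two ways to test for *.html in the current directory.

    # 1) bash builtin: compgen -G succeeds only if the glob matches at least one name
    if compgen -G '*.html' > /dev/null; then
        echo 'clear HTML files first'
        exit 1
    fi

    # 2) GNU find: stop at the first match thanks to -print -quit
    if [ -n "$(find . -maxdepth 1 -name '*.html' -print -quit)" ]; then
        echo 'clear HTML files first'
        exit 1
    fi
    ```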

    Read the article

  • SCons does not clean all files

    - by meowsqueak
    I have a file system containing directories of "builds", each of which contains a file called "build-info.xml". However, some of the builds happened before the build script generated "build-info.xml", so in that case I have a somewhat non-trivial SCons SConstruct that is used to generate a skeleton build-info.xml so that it can be used as a dependency for further rules. I.e., for each directory:

    - if build-info.xml already exists, do nothing. More importantly, do not remove it on a 'scons --clean'.
    - if build-info.xml does not exist, generate a skeleton one instead - build-info.xml has no dependencies on any other files - the skeleton is essentially minimal defaults.
    - during a --clean, remove build-info.xml if it was generated, otherwise leave it be.

    My SConstruct looks something like this:

    ```python
    def generate_actions_BuildInfoXML(source, target, env, for_signature):
        cmd = "python '%s/bin/create-build-info-xml.py' --version $VERSION --path . --output ${TARGET.file}" % (Dir('#').abspath,)
        return cmd

    bld = Builder(generator = generate_actions_BuildInfoXML, chdir = 1)
    env.Append(BUILDERS = { "BuildInfoXML" : bld })

    ...

    # VERSION = some arbitrary string, not important here
    # path = filesystem path, set elsewhere
    build_info_xml = "%s/build-info.xml" % (path,)
    if not os.path.exists(build_info_xml):
        env.BuildInfoXML(build_info_xml, None, VERSION = build)
    ```

    My problem is that 'scons --clean' does not remove the generated build-info.xml files. I played around with env.Clean(t, build_info_xml) within the 'if', but I was unable to get this to work - mainly because I could not work out what to assign to 't'. I want a generated build-info.xml to be cleaned unconditionally, rather than based on the cleaning of another target, and I wasn't able to get this to work. If I tried a simple env.Clean(None, "build_info_xml") after but outside the 'if', I found that SCons would clean every single build-info.xml file, including those that weren't generated. Not good either. What I'd like to know is how SCons goes about determining which files should be cleaned and which should not. Is there something funny about the way I've used a generator function that prevents SCons from recording this target as a Clean candidate?
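    Not from the article, but one way to get per-file clean behaviour is to record which build-info.xml files SCons itself generated (for example with a sidecar marker file, a name invented here purely for illustration) and register only those with env.Clean(). A rough sketch, assuming the same `env`, `path` and `build` variables as above:

    ```python
    import os

    build_info_xml = os.path.join(path, "build-info.xml")
    marker = build_info_xml + ".generated"   # hypothetical marker meaning "SCons made this"

    if not os.path.exists(build_info_xml):
        t = env.BuildInfoXML(build_info_xml, None, VERSION=build)
        # Drop a marker next to the generated file so later runs (including --clean)
        # can tell generated copies apart from pre-existing ones.
        env.AddPostAction(t, Touch(marker))

    if os.path.exists(marker):
        # Clean(targets, files): remove these files whenever 'targets' are cleaned with -c.
        # Tying them to '.' (the default target) makes a plain 'scons --clean' pick them up.
        env.Clean('.', [build_info_xml, marker])
    ```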

    Read the article

  • WiX 3: Using heat.exe to add bulk files to a new WiX project: HEAT5150

    - by Karen Kwong
    If this is a repeat question, please direct me to the existing solution; I wasn't able to find a matching query. We currently use InstallShield. I'm attempting to convert a project with 407 files to a WiX 3 installation package. I tried using heat.exe to do some of the automation, but I get the following warning for almost every file:

    ```
    c:\> heat dir "c:\projectDir\projectA" -gg -ke -template:Product -out "c:\install\projectA\heatOutput"

    heat.exe: warning HEAT5150 : Could not harvest data from a file that was expected to be a SelfReg DLL: c:\projectDir\projectA\plugin1.dll. If this file does not support SelfReg you can ignore this warning. Otherwise, this error detail may be helpful to diagnose the failure: Unable to load file: c:\projectDir\projectA\plugin1.dll, error: 126.
    ```

    Q: Is it normal for this warning to be reported for every file? If there's a current "How To create/convert to your first WiX install project with many files" tutorial, please point me to it. The key requirement is "with many files". Thank-you -Karen Kwong- PS. I know that WiX is designed for incremental install project creation, but it would be nice to know if there's an automated way to convert existing install projects.
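    Not part of the original post, but for context: warning 5150 comes from heat trying to load each DLL to harvest self-registration data (error 126 generally means a dependent module could not be found), and heat has switches for suppressing that work or the warning itself. A hedged sketch with the same paths as above; check `heat -?` for the switches available in your WiX 3 build.

    ```bat
    REM Sketch: -sreg skips registry/self-reg harvesting, -sw5150 suppresses that specific warning.
    heat dir "c:\projectDir\projectA" -gg -ke -sreg -sw5150 -template:Product -out "c:\install\projectA\heatOutput"
    ```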

    Read the article

  • Using .NET's HttpWebRequest to download a multitude of files in a row

    - by Cornelius
    I have an application that needs to download several files in a row in succession (sometimes a few thousand). However, what ends up happening when several files need to be downloaded is that I get an exception with an inner exception of type SocketException and the error code 10048 (WSAEADDRINUSE). I did some digging and basically it's because the server has run out of sockets (and they are all waiting for 240s or so before they become available again) - not coincidentally, it starts happening around the 1024-file range. I would expect that the HttpWebRequest/ServicePointManager would be reusing my connection, but apparently it is not (and the files are https, so that may be part of it). I never saw this problem in the C++ code that this was ported from (but that doesn't mean it didn't ever happen - I'd be surprised if it was, though). I am properly closing the WebRequest object, and the HttpWebRequest object has KeepAlive set to true by default. Next my intent is to fiddle around with ServicePointManager.SetTcpKeepAlive(). However, I can't see how more people haven't run into this problem. Has anyone else run into the problem, and if so, what did you do to get around it? Currently I have a retry scheme that detects this error and waits it out, but that doesn't seem like the right thing to do. Here's some basic code to verify what I'm doing (just in case I'm missing closing something):

    ```csharp
    WebRequest webRequest = WebRequest.Create(uri);
    webRequest.Method = "GET";
    webRequest.Credentials = new NetworkCredential(username, password);
    WebResponse webResponse = webRequest.GetResponse();
    try
    {
        using (Stream stream = webResponse.GetResponseStream())
        {
            // read the stream
        }
    }
    finally
    {
        webResponse.Close();
    }
    ```
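    A sketch of the usual mitigations (my assumption, not the accepted answer): raise the per-host connection limit so requests share a small pool of kept-alive connections, and make sure every response body is fully drained before it is closed, since a half-read HTTPS response can prevent the underlying connection from being reused. Names and the limit value below are placeholders.

    ```csharp
    using System;
    using System.IO;
    using System.Net;

    static class Downloader
    {
        static Downloader()
        {
            // Allow more concurrent connections per host (the desktop default is 2).
            ServicePointManager.DefaultConnectionLimit = 16;
        }

        public static void Download(Uri uri, string username, string password, string targetPath)
        {
            var request = (HttpWebRequest)WebRequest.Create(uri);
            request.Method = "GET";
            request.KeepAlive = true;                          // reuse the TCP/TLS connection
            request.Credentials = new NetworkCredential(username, password);

            using (var response = request.GetResponse())
            using (var stream = response.GetResponseStream())
            using (var file = File.Create(targetPath))
            {
                stream.CopyTo(file);                           // drain the body completely (.NET 4+; older frameworks need a manual buffer loop)
            }
        }
    }
    ```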

    Read the article

  • LightDM will not start after stopping it

    - by Sweeters
    I am running Ubuntu 11.10 "Oneiric Ocelot", and in trying to install the NVIDIA CUDA developer drivers I switched to a virtual terminal (Ctrl-Alt-F5) and stopped lightdm (installation required that no X server instance be running) with `sudo service lightdm stop`. Re-starting lightdm with `sudo service lightdm start` did not work: a couple of `* Starting [...]` lines were displayed, but the process hung (I do not remember at which point, but I think it was `* Starting System V runlevel compatibility`). I manually rebooted my laptop, and ever since, booting seems to hang, usually around the `* Starting anac(h)ronistic cron [OK]` log line (not consistently at that point, though). From that point on, I seem to be able to interact with my system only through a tty session (Ctrl-Alt-F1).

    I've tried purging and reinstalling both lightdm and gdm, as well as selecting each as the default display manager (through `sudo dpkg-reconfigure [lightdm / gdm]` or by manually editing /etc/X11/default-display-manager), through both apt-get and aptitude (that shouldn't make a difference anyway), after updating the packages, but the problem persists. Some of the responses I'm getting are the following:

    - After running `sudo dpkg-reconfigure lightdm` (but not `... gdm`) I get the following message:

      ```
      dpkg-maintscript-helper: warning: environment variable DPKG_MAINTSCRIPT_NAME missing
      dpkg-maintscript-helper: warning: environment variable DPKG_MAINTSCRIPT_PACKAGE missing
      ```

    - After trying `sudo service lightdm start` or `sudo start lightdm` I get to see the boot loading screen again, but nothing changes. If I go back to the tty shell I see `lightdm start/running, process <num>`, but `ps -e | grep lightdm` gives no output.
    - After trying `sudo service gdm start` or `sudo start gdm` I get the `gdm start/running, process <num>` message, and gdm-binary is supposedly an active process, but all that happens is that the screen blinks a couple of times and nothing else.

    Other candidate solutions that I found on the web included running `startx`, but when I try that I get an error output `[...] Fatal server error: no screens found [...]`. Moreover, I made sure that lightdm-gtk-greeter is installed, but that did not help either. Please excuse my not including complete outputs/logs; I am writing this post from another computer and it's hard to manually copy the complete logs. Also, I've seen several posts that had to do with similar problems, but either there was no fix, or the one suggested did not work for me. In closing: Please help! I very much hope to avoid re-installing Ubuntu from scratch! :) Alex

    @mosi I did not manage to fix the NVIDIA kernel driver as per your instructions. I should perhaps mention that I'm on a Dell XPS15 laptop with an NVIDIA Optimus graphics card, and that I have bumblebee installed (which installs nvidia drivers during its installation, I believe). Issuing the mentioned commands I get the following:

    ```
    ~$ uname -r
    3.0.0-12-generic
    ~$ lsmod | grep -i nvidia
    nvidia   11713772   0
    ~$ dmesg | grep -i nvidia
    [    8.980041] nvidia: module license 'NVIDIA' taints kernel.
    [    9.354860] nvidia 0000:01:00.0: power state changed by ACPI to D0
    [    9.354864] nvidia 0000:01:00.0: power state changed by ACPI to D0
    [    9.354868] nvidia 0000:01:00.0: enabling device (0006 -> 0007)
    [    9.354873] nvidia 0000:01:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
    [    9.354879] nvidia 0000:01:00.0: setting latency timer to 64
    [    9.355052] NVRM: loading NVIDIA UNIX x86_64 Kernel Module 280.13 Wed Jul 27 16:53:56 PDT 2011
    ```

    Also, running `aptitude search nvidia` gives me the following:

    ```
    p   nvidia-173                  - NVIDIA binary Xorg driver, kernel module a
    p   nvidia-173-dev              - NVIDIA binary Xorg driver development file
    p   nvidia-173-updates          - NVIDIA binary Xorg driver, kernel module a
    p   nvidia-173-updates-dev      - NVIDIA binary Xorg driver development file
    p   nvidia-96                   - NVIDIA binary Xorg driver, kernel module a
    p   nvidia-96-dev               - NVIDIA binary Xorg driver development file
    p   nvidia-96-updates           - NVIDIA binary Xorg driver, kernel module a
    p   nvidia-96-updates-dev       - NVIDIA binary Xorg driver development file
    p   nvidia-cg-toolkit           - Cg Toolkit - GPU Shader Authoring Language
    p   nvidia-common               - Find obsolete NVIDIA drivers
    i   nvidia-current              - NVIDIA binary Xorg driver, kernel module a
    p   nvidia-current-dev          - NVIDIA binary Xorg driver development file
    c   nvidia-current-updates      - NVIDIA binary Xorg driver, kernel module a
    p   nvidia-current-updates-dev  - NVIDIA binary Xorg driver development file
    i   nvidia-settings             - Tool of configuring the NVIDIA graphics dr
    p   nvidia-settings-updates     - Tool of configuring the NVIDIA graphics dr
    v   nvidia-va-driver            -
    v   nvidia-va-driver            -
    ```

    I've tried manually installing (`sudo aptitude install <package>`) the packages nvidia-common and nvidia-settings-updates, but to no avail. For example, `sudo aptitude install nvidia-settings-updates` returns the following log:

    ```
    Reading package lists...
    Building dependency tree...
    Reading state information...
    Reading extended state information...
    Initializing package states...
    Writing extended state information...
    No packages will be installed, upgraded, or removed.
    0 packages upgraded, 0 newly installed, 0 to remove and 83 not upgraded.
    Need to get 0 B of archives. After unpacking 0 B will be used.
    Writing extended state information...
    Reading package lists...
    Building dependency tree...
    Reading state information...
    Reading extended state information...
    Initializing package states...
    Writing extended state information...
    ```

    The same happens with the Linux headers (i.e. I cannot seem to be able to install linux-headers-3.0.0-12-generic). The output of `aptitude search linux-headers` is as follows:

    ```
    v   linux-headers                    -
    v   linux-headers                    -
    v   linux-headers-2.6                -
    i   linux-headers-2.6.38-11          - Header files related to Linux kernel versi
    i   linux-headers-2.6.38-11-generic  - Linux kernel headers for version 2.6.38 on
    i A linux-headers-2.6.38-8           - Header files related to Linux kernel versi
    i A linux-headers-2.6.38-8-generic   - Linux kernel headers for version 2.6.38 on
    v   linux-headers-3                  -
    v   linux-headers-3.0                -
    v   linux-headers-3.0                -
    i A linux-headers-3.0.0-12           - Header files related to Linux kernel versi
    p   linux-headers-3.0.0-12-generic   - Linux kernel headers for version 3.0.0 on
    p   linux-headers-3.0.0-12-generic-  - Linux kernel headers for version 3.0.0 on
    p   linux-headers-3.0.0-12-server    - Linux kernel headers for version 3.0.0 on
    p   linux-headers-3.0.0-12-virtual   - Linux kernel headers for version 3.0.0 on
    p   linux-headers-generic            - Generic Linux kernel headers
    p   linux-headers-generic-pae        - Generic Linux kernel headers
    v   linux-headers-lbm                -
    v   linux-headers-lbm                -
    v   linux-headers-lbm-2.6            -
    v   linux-headers-lbm-2.6            -
    p   linux-headers-lbm-3.0.0-12-gene  - Header files related to linux-backports-mo
    p   linux-headers-lbm-3.0.0-12-gene  - Header files related to linux-backports-mo
    p   linux-headers-lbm-3.0.0-12-serv  - Header files related to linux-backports-mo
    p   linux-headers-server             - Linux kernel headers on Server Equipment.
    p   linux-headers-virtual            - Linux kernel headers for virtual machines
    ```

    @heartsmagic I did try purging and reinstalling any nvidia driver packages, but it did not seem to make a difference. My xorg.conf file contains the following:

    ```
    # nvidia-xconfig: X configuration file generated by nvidia-xconfig
    # nvidia-xconfig: version 280.13 ([email protected]) Wed Jul 27 17:15:58 PDT 2011

    Section "ServerLayout"
        Identifier     "Layout0"
        Screen      0  "Screen0" 0 0
        InputDevice    "Keyboard0" "CoreKeyboard"
        InputDevice    "Mouse0" "CorePointer"
    EndSection

    Section "Files"
    EndSection

    Section "InputDevice"
        # generated from default
        Identifier     "Mouse0"
        Driver         "mouse"
        Option         "Protocol" "auto"
        Option         "Device" "/dev/psaux"
        Option         "Emulate3Buttons" "no"
        Option         "ZAxisMapping" "4 5"
    EndSection

    Section "InputDevice"
        # generated from default
        Identifier     "Keyboard0"
        Driver         "kbd"
    EndSection

    Section "Monitor"
        Identifier     "Monitor0"
        VendorName     "Unknown"
        ModelName      "Unknown"
        HorizSync      28.0 - 33.0
        VertRefresh    43.0 - 72.0
        Option         "DPMS"
    EndSection

    Section "Device"
        Identifier     "Device0"
        Driver         "nvidia"
        VendorName     "NVIDIA Corporation"
    EndSection

    Section "Screen"
        Identifier     "Screen0"
        Device         "Device0"
        Monitor        "Monitor0"
        DefaultDepth   24
        SubSection     "Display"
            Depth      24
        EndSubSection
    EndSection
    ```

    Read the article

  • Writing files in App_Data causes tempdata to be null

    - by RAMX
    I have a small ASP.NET MVC 1 web app that can store files and create directories in the App_Data directory. When the write operation succeeds, I add a message to the TempData and do a RedirectToRoute. The problem is that the TempData is null when the action is executed. If I write the files to a directory outside of the web application's root directory, the TempData is not null and everything works correctly. Any ideas why writing in App_Data seems to clear the TempData?

    edit: if DRS.Logic.Repository.Manager.CreateFile(path, hpf, comment) writes in App_Data, TempData will be null in the action being redirected to. If it is a directory outside of the web app root, it is fine. No exceptions are being thrown.

    ```csharp
    [AcceptVerbs(HttpVerbs.Post)]
    public ActionResult Create(int id, string path, FormCollection form)
    {
        ViewData["path"] = path;
        ViewData["id"] = id;
        HttpPostedFileBase hpf;
        string comment = form["FileComment"];
        hpf = Request.Files["File"] as HttpPostedFileBase;
        if (hpf.ContentLength != 0)
        {
            DRS.Logic.Repository.Manager.CreateFile(path, hpf, comment);
            TempData["notification"] = "file was created";
            return RedirectToRoute(new {
                controller = "File",
                action = "ViewDetails",
                id = id,
                path = path + Path.GetFileName(hpf.FileName)
            });
        }
        else
        {
            TempData["notification"] = "No file were selected.";
            return View();
        }
    }
    ```
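    One plausible explanation, offered here as a guess rather than the accepted answer: creating files and folders under the application root (App_Data included) can trigger an ASP.NET app-domain recycle, and in MVC 1 TempData lives in session state, which by default is in-process and therefore lost on recycle. A sketch of a web.config change that would let TempData survive a recycle, assuming the "ASP.NET State Service" is running on the machine:

    ```xml
    <!-- Sketch: keep session (and therefore TempData) out of the worker process,
         so an app-domain recycle caused by writes under the site root does not wipe it. -->
    <configuration>
      <system.web>
        <sessionState mode="StateServer"
                      stateConnectionString="tcpip=127.0.0.1:42424" />
      </system.web>
    </configuration>
    ```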

    Read the article

  • Fluent config not generating mapping files

    - by rboarman
    Hello, I am trying to get Fluent NHibernate to generate mappings so I can take a look at the files and the SQL. My code is based on this post and on what I can glean from the documentation: http://stackoverflow.com/questions/1375146/fluent-mapping-entities-and-classmaps-in-different-assemblies. I am using the latest code from git. Here's my config code:

    ```csharp
    Configuration cfg = new Configuration();
    var ft = Fluently.Configure(cfg);

    // DbConnection by fluent
    ft.Database(
        MsSqlConfiguration
            .MsSql2008
            .ConnectionString("……")
            .ShowSql()
            .UseReflectionOptimizer()
    );

    // get mapping files.
    ft.Mappings(m =>
    {
        // set up the mapping locations
        m.FluentMappings.AddFromAssemblyOf<Entity>()
            .ExportTo(@"C:\temp");
        m.Apply(cfg);
    });
    ```

    I also tried:

    ```csharp
    var sessionFactory = Fluently.Configure()
        .Database(MsSqlConfiguration
            .MsSql2008
            .ShowSql()
            .ConnectionString("……"))
        .Mappings(p => p.FluentMappings
            .AddFromAssemblyOf<Entity>()
            .ExportTo(@"c:\temp\"))
        .BuildSessionFactory();
    ```

    I have verified that the connection string is correct. The issue is that no mapping files show up in the ExportTo folder and no SQL code shows up in the output window or in the log file. No errors or exceptions are generated either. I have no idea where to go from here. Thank you in advance. Rick

    Read the article

  • Check if files in a directory are still being written in Windows Batch File

    - by FMFF
    Hello. Here's my batch file to parse a directory and zip files of a certain type:

    ```bat
    REM Begin ------------------------
    tasklist /FI "IMAGENAME eq 7za.exe" /FO CSV > search.log
    FOR /F %%A IN (search.log) DO IF %%~zA EQU 0 GOTO end

    for /f "delims=" %%A in ('dir C:\Temp\*.ps /b') do (
        "C:\Program Files\7-Zip\cmdline\7za.exe" a -tzip -mx9 "C:\temp\Zip\%%A.zip" "C:\temp\%%A"
        Move "C:\temp\%%A" "C:\Temp\Archive"
    )

    :end
    del search.log
    REM pause
    exit
    REM End ---------------------------
    ```

    This code works just fine for 90% of my needs. It will be deployed as a scheduled task. However, the *.ps files are rather large (minimum of 1GB) in real-time cases. So the code is supposed to check whether the incoming file is completely written and is not locked by the application that is writing it. I saw another example elsewhere that suggested the following approach:

    ```bat
    :TestFile
    ren c:\file.txt c:\file.txt
    if errorlevel 0 goto docopy
    sleep 5
    goto TestFile
    :docopy
    ```

    However, this example is good for a fixed file. How can I use that many labels and GOTOs inside a for loop without causing an infinite loop? Or is this code safe to be used in the for loop? Thank you for any help.
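    One way to keep the labels out of the loop body, sketched as an untested outline rather than a known-good answer: have the for loop `call` a subroutine per file, and let the subroutine retry the rename test a bounded number of times so it cannot spin forever. (Note also that `if errorlevel 0` in the quoted example is always true; errorlevel tests mean "greater than or equal to".)

    ```bat
    REM Sketch: per-file "is it still being written?" test via a subroutine.
    for /f "delims=" %%A in ('dir C:\Temp\*.ps /b') do call :ProcessOne "%%A"
    goto :eof

    :ProcessOne
    set RETRIES=0
    :TestFile
    REM Renaming a file onto itself fails while another process still holds it open.
    ren "C:\Temp\%~1" "%~1" 2>nul
    if not errorlevel 1 goto DoZip
    set /a RETRIES+=1
    if %RETRIES% GEQ 12 goto :eof
    REM ping is a stand-in for sleep on stock Windows (about 5 seconds)
    ping -n 6 127.0.0.1 >nul
    goto TestFile
    :DoZip
    "C:\Program Files\7-Zip\cmdline\7za.exe" a -tzip -mx9 "C:\temp\Zip\%~1.zip" "C:\temp\%~1"
    Move "C:\temp\%~1" "C:\Temp\Archive"
    goto :eof
    ```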

    Read the article

  • SSIS package not picking up configuration files correctly

    - by blntechie
    We have recently migrated some 30 DTS packages in SQL Server 2000 to SSIS packages in SQL Server 2008. We have created the packages in such a way that all environment-related variables and other required information are picked up from configuration files, for ease of maintenance across different environments like Dev, QA and Prod.

    After setting up all the packages with the config files, when we tested the packages from Business Intelligence Development Studio, they worked fine and picked up the values from the configuration file. When changing the values in the config files to Dev or another environment, they correctly picked up the new values and executed. We tried this for 2 different environments and the packages worked fine, so we deployed to Prod and it was working fine there too.

    Yesterday, I had to make a functional change to one package, so I made the change in the package (it is just changing a parameter in a SQL procedure execution task and not related to any variables) and tested it in BIDS with 2 environments and it worked fine. As the change was not related to any environment change, we deployed only the updated package (not the associated config file) manually in Prod (i.e. without the use of a manifest). The config file which was used by the package previously, and which had been working fine in Prod, remained unaltered. But when the package was executed, it was pointing to QA, and I believe the package didn't read from the config file.

    One reason may be that it is still using the last executed values, which usually remain in the .dtsx file (this can be checked by opening the file in a text editor). But normally, when a package is executed, those values are overwritten from the config file; I suspect that is not happening here. What are the possible reasons for this? We have tested extensively switching between test environments and it does not show this behavior. We have encountered this in the Prod environment twice now. Has anyone else experienced this, and how did you resolve it?

    Read the article

  • Cron Job on Ubuntu Hardy Executing But Not Deleting Files As Expected

    - by Patrick McKenzie
    I have a bit of a pickle here and wonder if anyone can give me some pointers. I have a cron job which executes for a particular user daily and is supposed to sweep files in a particular directory. Technically, it is two jobs. I've turned on cron.log to verify they're actually executing, and they are:

    ```
    May 24 11:03:01 AppNameGoesHere /USR/SBIN/CRON[11257]: (mongrel_AppNameGoesHere) CMD (rm -rf /var/www/apps/AppNameGoesHere/current/public/ {popular,index,purchasing,purchasing-alternate,support,about-us,guarantee,screenshots}.htm{,l})
    May 24 11:04:01 AppNameGoesHere /USR/SBIN/CRON[11260]: (mongrel_AppNameGoesHere) CMD (rm -rf /var/www/apps/AppNameGoesHere/current/public/ {stats,popular,bcf,articles,expenses})
    ```

    I have removed the actual usernames and formatted it so that it is less ugly on StackOverflow. Now, my question: despite the fact that I can see these deletions executing and apparently succeeding in the log, if I go to the specified directory the files are still there. I initially suspected permission hijinx were going on, but I've verified that I can delete the files manually by su-ing into the mongrel_AppNameGoesHere user and issuing individual rm commands, or by copy/pasting the cron job to the command line. Anything that I don't manually zap stays unzapped, despite days of that cron job "executing successfully". Any suggestions as to what might be happening? I was previously using Dapper Drake with these cron jobs in the /etc/crontab file directly, and when I upgraded to Hardy I moved them to user-specific crontabs (via sudo crontab -e -u mongrel_AppNameGoesHere), which was the point where they appear to have stopped working.
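    A guess worth checking, not something stated in the post: cron runs its commands with /bin/sh, and on Ubuntu that shell (dash) does not perform the {a,b,c} brace expansion these commands rely on, so the rm silently matches nothing. There also appears to be a space between .../public/ and the brace group (possibly just the poster's reformatting), which would break the paths even in bash. A sketch of the user crontab with both points addressed, reusing the times and paths from the log:

    ```
    # Hedged sketch of the user crontab.
    SHELL=/bin/bash
    # no space between .../public/ and the brace group, and bash so the braces expand
    3 11 * * * rm -rf /var/www/apps/AppNameGoesHere/current/public/{popular,index,purchasing,purchasing-alternate,support,about-us,guarantee,screenshots}.htm{,l}
    4 11 * * * rm -rf /var/www/apps/AppNameGoesHere/current/public/{stats,popular,bcf,articles,expenses}
    ```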

    Read the article

  • How to exclude R*.class files from a proguard build

    - by Jeremy Bell
    I am one step away from making the method described here: http://stackoverflow.com/questions/2761443/targeting-android-with-scala-2-8-trunk-builds work with a single project (vs one project for Scala and one for Android). I've come across a problem. Using this input file (arguments to proguard):

    ```
    -injars bin;lib/scala-library.jar(!META-INF/MANIFEST.MF,!library.properties)
    -outjar lib/scandroid.jar
    -libraryjars lib/android.jar
    -dontwarn
    -dontoptimize
    -dontobfuscate
    -dontskipnonpubliclibraryclasses
    -dontskipnonpubliclibraryclassmembers
    -keepattributes Exceptions,InnerClasses,Signature,Deprecated,
                    SourceFile,LineNumberTable,*Annotation*,EnclosingMethod
    -keep public class org.scala.jeb.** { public protected *; }
    -keep public class org.xml.sax.EntityResolver { public protected *; }
    ```

    Proguard successfully builds scandroid.jar; however, it appears to have included the generated R classes that the Android resource builder generates and compiles. In this case they are located in bin/org/jeb/R*.class. This is not what I want. The Android dalvik converter cannot build because it thinks there is a duplicate of the R class (it's in scandroid and also in the R*.class files). How can I modify the above proguard arguments to exclude the R*.class files from scandroid.jar so the dalvik converter is happy? Edit: I should note that I tried adding ;bin/org/jeb/R.class;etc... to the -libraryjars argument, and that only seemed to cause it to complain about duplicate classes, and in addition proguard decided to exclude my scala class files too.
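    Not from the article: ProGuard's -injars entries accept file filters in parentheses, so one hedged option is to filter the generated R classes (including inner classes such as R$layout and R$id) out of the bin directory at the input stage. The package path below is the one given in the question.

    ```
    # Sketch: exclude the generated R classes from the program input instead of from the output.
    -injars bin(!org/jeb/R.class,!org/jeb/R$*.class);lib/scala-library.jar(!META-INF/MANIFEST.MF,!library.properties)
    ```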

    Read the article

  • error in implementing static files in django

    - by POOJA GUPTA
    my settings.py file:

    ```python
    STATIC_ROOT = '/home/pooja/Desktop/static/'

    # URL prefix for static files.
    STATIC_URL = '/static/'

    # Additional locations of static files
    STATICFILES_DIRS = (
        '/home/pooja/Desktop/mysite/search/static',
    )
    ```

    my urls.py file:

    ```python
    from django.conf.urls import patterns, include, url
    from django.contrib.staticfiles.urls import staticfiles_urlpatterns
    from django.contrib import admin

    admin.autodiscover()

    urlpatterns = patterns('',
        url(r'^search/$', 'search.views.front_page'),
        url(r'^admin/', include(admin.site.urls)),
    )

    urlpatterns += staticfiles_urlpatterns()
    ```

    I have created an app using Django which searches for keywords in 10 XML documents and then returns their frequency count, displayed as a graphical representation plus a list of filenames and their respective counts. Now the list has the filenames hyperlinked; I want to display them on the Django server when the user clicks them, and for that I have used the static files provision in Django. The hyperlinking has been done in this manner:

    ```html
    <ul>
    {% for l in list1 %}
        <li><a href="{{STATIC_URL}}static/{{l.file_name}}">{{l.file_name}}</a>{{l.frequency_count}}</li>
    {% endfor %}
    </ul>
    ```

    Now when I run my app on the server, everything runs fine, but as soon as I click on a filename it gives me this error:

    ```
    Using the URLconf defined in mysite.urls, Django tried these URL patterns, in this order:
        ^search/$
        ^admin/
        ^static\/(?P<path>.*)$
    The current URL, search/static/books.xml, didn't match any of these.
    ```

    I don't know why this error is coming, because I have followed the steps required to achieve this. I have posted my urls.py file and it is showing the error in that only. I'm new to Django, so please help.
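    A hedged reading of the error rather than a confirmed fix: the failing URL is search/static/books.xml, i.e. the href ended up relative to the current page. That is consistent with {{ STATIC_URL }} rendering as empty (on these Django versions it only reaches the template via a RequestContext with the static context processor enabled, e.g. by rendering with django.shortcuts.render) and with the extra static/ prefix in the href. A sketch of a template that would produce /static/books.xml instead:

    ```html
    <!-- Sketch: STATIC_URL is already "/static/", so don't repeat the prefix. -->
    <ul>
    {% for l in list1 %}
        <li><a href="{{ STATIC_URL }}{{ l.file_name }}">{{ l.file_name }}</a> {{ l.frequency_count }}</li>
    {% endfor %}
    </ul>
    ```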

    Read the article

  • Source Control Checkin Comments at Top Of Source Files

    - by James Wiseman
    I've noticed a discrepancy with some source files in our system whereby some contain source-control checkin comments and some do not. These comments are added automatically to the top of the file when it is checked in:

    ```
    * $Log: //vm1/Projects/Morpheus/Sleep.bdy-arc $
    --
    -- Rev 1.14   Apr 14 2009 15:32:52   John Smith
    -- Fixed bugs 2292 and 2230.
    ```

    This seems to have been quite prevalent in all the companies with which I have worked, but I must confess that I struggle to see the point. Generally the comments aren't that good, are often left by people who have long since departed, and even when they are of a high standard it is difficult to tie them to physical code changes. It also strikes me that you are physically changing the file that you are checking in. Now, this may not be such a problem with files that will be compiled, but it could be a disaster with others, e.g. JavaScript files. So really, my query is: what was the motivation in concept behind providing this functionality in the first instance? Does anyone actually find these comments useful? Also, I would be curious to know if this is a feature that is commonly supported within source control systems. I am aware of it in PVCS, VSS and Subversion (Subversion Keyword Substitution); however, I wonder if it is also available in some of the more popular DVCSs. Your help, as always, is much appreciated.

    Read the article

  • Importing owl files

    - by Mikae Combarado
    Hello, I have a problem with importing OWL files using the OWL API in Java. I can successfully import 2 OWL files. However, a problem occurs when I try to import 3 or more OWL files that import each other. E.g.:

    - Base.owl -- base ontology
    - Electronics.owl -- electronics ontology, which imports Base.owl
    - Telephone.owl -- telephone ontology, which imports Base.owl and Electronics.owl

    When I just map Base.owl and load Electronic.owl, it works smoothly. The code is given below:

    ```java
    File fileBase = new File("filepath/Base.owl");
    File fileElectronic = new File("filePath/Electronic.owl");

    SimpleIRIMapper iriMapper = new SimpleIRIMapper(IRI.create("url/Base.owl"), IRI.create(fileBase));

    OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
    manager.addIRIMapper(iriMapper);
    OWLOntology ont = manager.loadOntologyFromOntologyDocument(fileElectronic);
    ```

    However, when I want to load Telephone.owl, I just create an additional iriMapper and add it to the manager. The additional code is marked with ** below:

    ```java
    File fileBase = new File("filepath/Base.owl");
    File fileElectronic = new File("filePath/Electronic.owl");
    File fileTelephone = new File("filePath/Telephone.owl");                                                            // **

    SimpleIRIMapper iriMapper = new SimpleIRIMapper(IRI.create("url/Base.owl"), IRI.create(fileBase));
    SimpleIRIMapper iriMapper2 = new SimpleIRIMapper(IRI.create("url/Electronic.owl"), IRI.create(fileElectronic));     // **

    OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
    manager.addIRIMapper(iriMapper);
    manager.addIRIMapper(iriMapper2);                                                                                   // **
    OWLOntology ont = manager.loadOntologyFromOntologyDocument(fileTelephone);                                          // **
    ```

    The code shown above gives this error:

    ```
    Could not load import: Import(url/Electronic.owl>)
    Reason: Could not loaded imported ontology: <url/Base.owl> Cause: null
    ```

    It would be really appreciated if someone could give me a hand... Thanks in advance...
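    Not from the article, but the OWL API ships an AutoIRIMapper that scans a directory and maps every ontology it finds to its local file, which avoids keeping one hand-written SimpleIRIMapper per import and makes it harder for a transitive import (here Base.owl, pulled in via Electronics.owl) to fall through the cracks. A sketch under the assumption that all three .owl files live in one folder:

    ```java
    import java.io.File;
    import org.semanticweb.owlapi.apibinding.OWLManager;
    import org.semanticweb.owlapi.model.OWLOntology;
    import org.semanticweb.owlapi.model.OWLOntologyCreationException;
    import org.semanticweb.owlapi.model.OWLOntologyManager;
    import org.semanticweb.owlapi.util.AutoIRIMapper;

    public class LoadTelephone {
        public static void main(String[] args) throws OWLOntologyCreationException {
            File ontologyDir = new File("filePath");   // folder holding Base.owl, Electronics.owl, Telephone.owl
            OWLOntologyManager manager = OWLManager.createOWLOntologyManager();

            // Scan the folder (non-recursively here) and register a mapping for every ontology IRI found,
            // so the imports of Base.owl and Electronics.owl resolve to the local copies.
            manager.addIRIMapper(new AutoIRIMapper(ontologyDir, false));

            OWLOntology telephone = manager.loadOntologyFromOntologyDocument(new File(ontologyDir, "Telephone.owl"));
            System.out.println("Loaded: " + telephone.getOntologyID());
        }
    }
    ```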

    Read the article

  • Not all TFS Build type files are getting copied

    - by k4k4sh1
    Because I have several builds sharing some assemblies containing common build tasks, I have one TFSBuild.proj for all builds and import different targets depending on the build, like the following:

    ```xml
    <Project DefaultTargets="DesktopBuild" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="3.5">
        <Import Project="Build_1.targets" Condition="'$(BuildDefinition)'=='Build_1'" />
        <Import Project="Build_2.targets" Condition="'$(BuildDefinition)'=='Build_2'" />
        <Import Project="Build_3.targets" Condition="'$(BuildDefinition)'=='Build_3'" />
    </Project>
    ```

    Each target for a particular build has your usual content for a build type file, but in my case I also reference some tasks inside assemblies checked into the same folder as TFSBuild.proj in source control. I wanted to add folders to contain some test build targets, since my folder was getting a bit full and cluttered. The following illustrates what I mean:

    ```
    $(TFS project)\build\
        TFSBuild.proj
        Build_1.targets
        ...
        Assembly1.dll
        Assembly2.dll
        ...
        Folder\
            Test_target_1.targets
            ...
    ```

    When I started my build, however, I found that Test_target_1.targets and the other files in Folder were not being copied to the build directory, while TFSBuild.proj and the other files at the root level, as it were, of the build type folder were being copied. This caused my test build to not be able to reference the files inside Folder, causing the build to immediately fail. I realize the simplest workaround would be to get rid of Folder and move all of its contents up to the build folder, but I would really like to keep Folder if at all possible. Thanks for your help in advance.

    Read the article

  • Python: Seeing all files in Hex.

    - by Recursion
    I am writing a Python script which looks at common computer files and examines them for similar bytes, words, and double words. I need/want to see the files in hex, and cannot really seem to get Python to open a simple file that way. I have tried codecs.open with "hex" as the encoding, but when I operate on the file descriptor it always spits back:

    ```
    File "main.py", line 41, in <module>
        main()
    File "main.py", line 38, in main
        process_file(sys.argv[1])
    File "main.py", line 27, in process_file
        seeker(line.rstrip("\n"))
    File "main.py", line 15, in seeker
        for unit in f.read(2):
    File "/usr/lib/python2.6/codecs.py", line 666, in read
        return self.reader.read(size)
    File "/usr/lib/python2.6/codecs.py", line 472, in read
        newchars, decodedbytes = self.decode(data, self.errors)
    File "/usr/lib/python2.6/encodings/hex_codec.py", line 50, in decode
        return hex_decode(input,errors)
    File "/usr/lib/python2.6/encodings/hex_codec.py", line 42, in hex_decode
        output = binascii.a2b_hex(input)
    TypeError: Non-hexadecimal digit found
    ```

    ```python
    def seeker(_file):
        f = codecs.open(_file, "rb", "hex")
        for LINE in f.read():
            print LINE
        f.close()
    ```

    I really just want to see files, and operate on them as if I were in a hex editor like xxd. Also, is it possible to read a file in increments of, say, a word at a time? No, this is not homework.
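    A sketch of the usual approach (mine, not the accepted answer): the "hex" codec tries to *decode* the file's bytes as hexadecimal text, which is why it chokes on arbitrary binaries. Opening the file in plain binary mode and hex-encoding what is read, a fixed number of bytes at a time, gets the xxd-like view. Written for Python 2.6 to match the traceback above; the file name is a placeholder.

    ```python
    import binascii

    def hex_words(path, word_size=2):
        """Yield the hex representation of the file, word_size bytes at a time."""
        with open(path, "rb") as f:            # plain binary mode - no codec involved
            while True:
                chunk = f.read(word_size)
                if not chunk:
                    break
                yield binascii.hexlify(chunk)  # e.g. '\x4d\x5a' -> '4d5a'

    if __name__ == "__main__":
        for word in hex_words("some_file.bin"):   # hypothetical file name
            print word
    ```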

    Read the article

  • No test coverage files generated for Unit Test bundle in Xcode

    - by John Gallagher
    The Problem: I've got a Cocoa project on the desktop and I'm using Xcode 3.2.1 on Snow Leopard 10.6.2. I want to generate code coverage files for my unit test target in Xcode.

    What I've Tried: As articles like this one suggest, I've adjusted the build settings to:

    - "Generate Test Coverage Files" checked
    - "Instrument Program Flow" checked
    - "-lgcov" added to "Other Linker Flags"

    I've also set the Run Script section of the test target to the following:

    ```sh
    # Run the unit tests in this test bundle.
    "${SYSTEM_DEVELOPER_DIR}/Tools/RunUnitTests"

    # Run gcov on the framework getting tested
    if [ "${CONFIGURATION}" = 'Coverage' ]; then
        FRAMEWORK_NAME=LapsusInterpretationEngine
        FRAMEWORK_OBJ_DIR=${OBJROOT}/${FRAMEWORK_NAME}.build/${CONFIGURATION}/EngineTests.build/Objects-normal/${NATIVE_ARCH}
        mkdir -p coverage
        pushd coverage
        find ${OBJROOT} -name *.gcda -exec gcov -o ${FRAMEWORK_OBJ_DIR} {} \;
        popd
    fi
    ```

    Since my framework name is LapsusInterpretationEngine but my target is named EngineTests, I put that directly into FRAMEWORK_OBJ_DIR, but this didn't seem to help. I've tried cleaning before building. I've made sure all the above build settings apply to both the unit test target and the application target.

    What I Get: No .gcda or .gcno files anywhere in the build directory I'm using. I point CoverStory to the Objects-normal directory in my builds folder and it complains that there's nothing there for it to read. I must be doing something really obvious wrong. Anyone have any ideas? I have tried the "EngineTests.build" directory being ${FRAMEWORK_NAME}, and this gives the same results.

    Read the article

  • Execute SQL on CSV files via JDBC

    - by Markos Fragkakis
    Dear all, I need to apply an SQL query to CSV files (comma-separated text files). My SQL is predefined by another tool and is not eligible for change. It may contain embedded selects and table aliases in the FROM part. For my task I have found two open-source (this is a project requirement) libraries that provide JDBC drivers, plus two other options:

    1. CsvJdbc
    2. XlSQL
    3. JBoss Teiid
    4. Create an Apache Derby DB, load all CSVs as tables and execute the query.

    These are the problems I encountered:

    1. It does not accept the syntax of the SQL (it uses internal selects and table aliases). Furthermore, it has not been maintained since 2004.
    2. I could not get it to work, as it has as a dependency a SAX parser that causes an exception when parsing other documents. Similarly, no change since 2004.
    3. Have not checked if it supports the syntax, but it seems like an overhead. It needs several entities defined (Virtual Databases, Bindings). From the mailing list they told me that the latest release supports runtime creation of the required objects. Has anyone used it for such a simple task (normally it can connect to several types of data, like CSV, XML or other DBs, and create a virtual, unified one)?
    4. Can this even be done easily?

    From the 4 things I considered/tried, only 3 and 4 seem to me viable. Any advice on these, or any other way in which I can query my CSV files? Cheers
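    For option 4, a rough Java sketch (not from the article) of what "load the CSVs into Derby, then run the predefined SQL unchanged" could look like. The table definition, file path and query are invented placeholders; the loading itself uses Derby's built-in SYSCS_UTIL.SYSCS_IMPORT_TABLE procedure and assumes the CSV has no header row.

    ```java
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class CsvViaDerby {
        public static void main(String[] args) throws Exception {
            // In-memory embedded Derby database (derby.jar on the classpath)
            Connection conn = DriverManager.getConnection("jdbc:derby:memory:csvdb;create=true");
            try {
                Statement st = conn.createStatement();

                // Hypothetical table matching one CSV's columns
                st.executeUpdate("CREATE TABLE ORDERS (ID INT, CUSTOMER VARCHAR(100), TOTAL DOUBLE)");

                // Built-in import procedure: (schema, table, file, column delim, char delim, codeset, replace)
                st.execute("CALL SYSCS_UTIL.SYSCS_IMPORT_TABLE(null, 'ORDERS', 'data/orders.csv', ',', '\"', 'UTF-8', 0)");

                // The predefined SQL (with subselects and aliases) can now run against a real SQL engine
                ResultSet rs = st.executeQuery("SELECT o.CUSTOMER, SUM(o.TOTAL) FROM ORDERS o GROUP BY o.CUSTOMER");
                while (rs.next()) {
                    System.out.println(rs.getString(1) + " -> " + rs.getDouble(2));
                }
            } finally {
                conn.close();
            }
        }
    }
    ```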

    Read the article

  • Replacing unversioned files in WiX major upgrade.

    - by Joshua
    I am still having this problem. This is the closest I have come to a solution that works, and yet it doesn't quite work. Here is (most of) the code:

    ```xml
    <Product Id='$(var.ProductCode)' UpgradeCode='$(var.UpgradeCode)' Name="Pathways"
             Version='$(var.ProductVersion)' Manufacturer='$(var.Manufacturer)' Language='1033'>
      ...
      Maximum="$(var.ProductVersion)" IncludeMaximum="no" Language="1033" Property="OLDAPPFOUND" /
      ...
      There is a later version of this program installed.
    ```

    The problem I am having is that I need the two files in the Database component to replace the previous copies. Since these files are unversioned, I have attempted to use the CompanionFile attribute set to PathwaysExe, since that is the main executable of the application, and it IS being updated, even if the log says it isn't! The strangest thing about this is that the PathwaysLdf file IS BEING UPDATED CORRECTLY, and the PathwaysMdf file IS NOT. The log seems to indicate that the "Existing file is of an equal version (Checked using version of companion)". This is very strange because that file is being replaced just fine. The only idea I have left is that this problem has to do with the install sequence, and I'm not sure how to proceed! I have the InstallExecuteSequence set up the way I do because of the SettingsXml file, and my need to NOT overwrite that file, which is actually working now, so whatever solution works for the database files can't break the working settings file! ;) The full log is located at: http://pastebin.com/HFiGKuKN PLEASE AND THANK YOU!
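    A hedged observation, not taken from the linked log: whether unversioned files get replaced in a major upgrade often comes down to where RemoveExistingProducts sits in InstallExecuteSequence. If the old product is removed only after the new files are costed and copied, MSI's default file-versioning rules compare against the files the old install left behind and can decide they should stay. One common arrangement is sketched below; it would need to be checked against the SettingsXml "do not overwrite" requirement mentioned above.

    ```xml
    <InstallExecuteSequence>
      <!-- Remove the old product early, so the new .mdf/.ldf are installed onto a clean slate
           instead of being compared against the copies the old install left behind. -->
      <RemoveExistingProducts After="InstallValidate" />
    </InstallExecuteSequence>
    ```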

    Read the article

  • Why are mercurial subrepos behaving as unversioned files in eclipse AND torotoiseHG

    - by noam
    I am trying to use the subrepo feature of Mercurial with the MercurialEclipse plugin / TortoiseHg. These are the steps I took:

    1. Created an empty dir /root.
    2. Cloned all repos that I want to be subrepos inside this folder (/root/sub1, /root/sub2).
    3. Created and added the .hgsub file in the root repo (/root/.hgsub) and put all the mappings of the subrepos in it.
    4. Using TortoiseHg, right-clicked on /root and selected "create repository here".
    5. Again with TortoiseHg, selected all the files inside /root and added them to the root repo.
    6. Committed the root repo.
    7. Pushed the local root repo into an empty repo I have set up on Kiln.

    Then I pulled the root repo in Eclipse, using import-mercurial. Now I see that all the subrepos appear as though they are unversioned (no "orange cylinder" icon next to their corresponding folders in the Eclipse file explorer). Furthermore, when I right-click on one of the subrepos, I don't get all the hg commands in the "Team" menu that I usually get with root projects - no "pull", "push" etc. Also, when I made a change to a file in a subrepo and then "committed" the root project, it told me there were no changes found. I see the same behavior in TortoiseHg: when I am browsing files under /root, the files belonging directly to the root repo have a small icon (a V sign) on them marking that they are version-controlled, while the subrepos' folders aren't marked as such. Am I doing something wrong, or is it a bug?

    Read the article

  • Getting Error When Opening Files

    - by Nathan Campos
    I'm developing a simple text editor to understand PocketC better, and I've done this:

    ```c
    #include "\\Storage Card\\My Documents\\PocketC\\Parrot\\defines.pc"

    int filehandle;
    int file_len;
    string file_mode;

    initComponents() {
        createctrl("EDIT", "test", 2, 1, 0, 24, 70, 25, TEXTBOX);
        wndshow(TEXTBOX, SW_SHOW);
        guigetfocus();
    }

    main() {
        filehandle = fileopen(OpenFileDlg("Plain Text Files (*.txt)|*.txt; All Files (*.*)|*.*"), 0, FILE_READWRITE);
        file_len = filegetlen(filehandle);
        if(filehandle = -1) {
            MessageBox("File Could Not Be Found!", "Error", 3, 1);
        }
        initComponents();
        editset(TEXTBOX, fileread(filehandle, file_len));
    }
    ```

    Then I tried to run the application. It opens the Open File dialog, I select a file (which is at \test.txt) that I created with Notepad, and then I get my MessageBox saying that the file wasn't found. So I want to know why I'm getting this if the file is all correct? *PS: When I click to dismiss the MessageBox, I see that the TextBox is displaying where the file is (I've tested with many other files, and with all of them I get the error and this).*
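    One thing that stands out in the snippet above (my observation, not the article's accepted answer): `if (filehandle = -1)` uses a single `=`, which, assuming PocketC follows the usual C semantics, assigns -1 to filehandle instead of comparing against it, so the error branch always runs and the handle is clobbered. The length is also read before the handle is checked. A hedged reordering in the same C-like style:

    ```c
    main() {
        filehandle = fileopen(OpenFileDlg("Plain Text Files (*.txt)|*.txt; All Files (*.*)|*.*"),
                              0, FILE_READWRITE);
        if (filehandle == -1) {               // == compares; = would overwrite the handle
            MessageBox("File Could Not Be Found!", "Error", 3, 1);
            return;                           // don't try to read from an invalid handle
        }
        file_len = filegetlen(filehandle);    // only query the length once the open succeeded
        initComponents();
        editset(TEXTBOX, fileread(filehandle, file_len));
    }
    ```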

    Read the article

  • Maven jaxb generate plugin to read xsd files from multiple directories

    - by ziggy
    If I have XSD files in the following directories:

    - src/main/resources/xsd
    - src/main/resources/schema/common
    - src/main/resources/schema/soap

    how can I instruct the Maven JAXB plugin to generate JAXB classes using all the schema files in the above directories? I can get it to generate the class files if I specify one of the folders, but I don't know how to include all three folders. Here is how I generate the files for one folder:

    ```xml
    <plugin>
        <groupId>org.jvnet.jaxb2.maven2</groupId>
        <artifactId>maven-jaxb2-plugin</artifactId>
        <executions>
            <execution>
                <goals>
                    <goal>generate</goal>
                </goals>
            </execution>
        </executions>
        <configuration>
            <schemaDirectory>src/main/resources/xsd</schemaDirectory>
        </configuration>
    </plugin>
    ```

    I tried adding multiple entries in the element, but it just ignores all of them if I do that. Thanks
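    One possibility, hedged and worth checking against the plugin version in use: the org.jvnet.jaxb2.maven2 plugin lets schemaDirectory point at a common parent directory and narrow the set with schemaIncludes patterns, so all three folders can be covered by a single configuration:

    ```xml
    <configuration>
        <!-- Parent of all three schema folders -->
        <schemaDirectory>src/main/resources</schemaDirectory>
        <schemaIncludes>
            <include>xsd/*.xsd</include>
            <include>schema/common/*.xsd</include>
            <include>schema/soap/*.xsd</include>
        </schemaIncludes>
    </configuration>
    ```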

    Read the article

  • Redirect uploaded files to another server, using nginx

    - by Serg ikS
    I am creating a web service of scheduled posts to a social network, and need help dealing with file uploads under high traffic.

    Process overview: the user uploads files to SomeServer (not mine); SomeServer then responds with a JSON string; my web app should store that JSON response.

    Opt. 1 — Save, cURL POST, delete tmp. The stupid way I made it work: the user uploads files to MyWebApp; MyWebApp cURLs the file onward to SomeServer, getting the response.

    Opt. 2 — JS magic. The smart way it could be perfect: the user uploads the file directly to SomeServer from within an iframe; MyWebApp gets the response through JavaScript. But this is(?) impossible due to the 'Same Origin Policy', isn't it?

    Opt. 3 — nginx proxying? The better way for a production server: the user uploads files to MyWebApp; nginx intercepts the file uploads and sends them directly to SomeServer; the JSON response is also intercepted by nginx and processed by MyWebApp. Does this make any sense, and what would be the nginx config for, say, a /fileupload location to proxy it to SomeServer?
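    A hedged sketch of what Opt. 3 might look like: the upstream host, paths and body-size limit are placeholders, and whether SomeServer accepts proxied uploads (and how the JSON reply gets back to MyWebApp rather than only to the browser) would still need testing.

    ```nginx
    # Sketch: nginx forwards the browser's upload straight to SomeServer.
    location /fileupload {
        client_max_body_size  50m;                                           # allow large uploads (placeholder limit)
        proxy_pass            https://upload.someserver.example/upload;      # hypothetical upstream URL
        proxy_set_header      Host upload.someserver.example;
        proxy_set_header      X-Real-IP $remote_addr;
        # The JSON response goes back to the client; capturing it server-side would
        # still need the app (or an additional subrequest/post-processing step) involved.
    }
    ```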

    Read the article
