Search Results

Search found 3168 results on 127 pages for 'directories'.

Page 83/127 | < Previous Page | 79 80 81 82 83 84 85 86 87 88 89 90  | Next Page >

  • List of header file locations for the Havok Physics Engine

    - by QAH
    Hello everyone! I am trying to integrate the Havok physics engine into my small game. It is a really nice SDK, but the header files are all over the place, with many headers deeply nested in multiple directories. That gets confusing when you are trying to include headers for different important objects. I would like to know if there is a nice guide that tells you where certain objects are and what headers they are in. I have already looked at Havok's documentation and at the reference manual, but they don't give much detail about where certain classes are located (header location). Also, are there any programs out there that can scan header files and create a list of where objects can be found? Thanks again
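
    On the last part of the question (a tool that scans headers and lists where classes live), something rough is easy to sketch. Below is a minimal Java sketch under stated assumptions: the include-root path is a placeholder, and the regex only catches simple class/struct declarations, so the output is an approximate index rather than a true parse of the SDK.

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.util.Map;
        import java.util.TreeMap;
        import java.util.regex.Matcher;
        import java.util.regex.Pattern;
        import java.util.stream.Stream;

        public class HeaderIndexer {
            // Very rough: matches simple "class X" / "struct X" lines; not a C++ parser,
            // so forward declarations are picked up as well.
            private static final Pattern DECL =
                    Pattern.compile("^\\s*(?:class|struct)\\s+([A-Za-z_]\\w*)");

            public static void main(String[] args) throws IOException {
                Path includeRoot = Paths.get("C:/havok/Source"); // placeholder for the SDK include root
                Map<String, Path> index = new TreeMap<>();

                try (Stream<Path> headers = Files.walk(includeRoot)) {
                    headers.filter(p -> p.toString().endsWith(".h")).forEach(header -> {
                        try {
                            for (String line : Files.readAllLines(header)) {
                                Matcher m = DECL.matcher(line);
                                if (m.find()) {
                                    index.putIfAbsent(m.group(1), includeRoot.relativize(header));
                                }
                            }
                        } catch (IOException e) {
                            // Skip headers that cannot be read or decoded.
                        }
                    });
                }

                index.forEach((name, header) -> System.out.println(name + " -> " + header));
            }
        }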

    Read the article

  • hp -ux remote cpio copy

    - by soField
    REMOTE SERVER remsh remoteserverhostname -l remoteusername find /tmp/a1/ | cpio -o > /tmp/paketr.cpio LOCAL SERVER rcp remoteserverhostname:/tmp/paketr.cpio /tmp/aaa cpio -idmv < /tmp/paketr.cpio I'm trying to copy a directory structure from a remote server to a local server. I can do this with the command list above, but I wonder if I can do it with just one command by running cpio in pass-through mode: remsh remoteserverhostname find /tmp/a1 | cpio -pd /tmp current </tmp/tmp/a1/b1/y1> newer current </tmp/tmp/a1/b1/z1> newer current </tmp/tmp/a1/b2/l2smc> newer "/tmp/a1/b3": No such file or directory Cannot stat </tmp/a1/b3>. 0 blocks So when I use the cpio -pd option, I expect it to create the directories for me, but it does not. I was using rcp, but it does not preserve symbolic links :( What can I do? hp-ux

    Read the article

  • FileNotFound exception when trying to write to a file

    - by Chris Knight
    OK, I'm feeling like this should be easy but am obviously missing something fundamental to file writing in Java. I have this: File someFile = new File("someDirA/someDirB/someDirC/filename.txt"); and I just want to write to the file. However, while someDirA exists, someDirB (and therefore someDirC and filename.txt) do not exist. Doing this: BufferedWriter writer = new BufferedWriter(new FileWriter(someFile)); throws a FileNotFoundException. Well, er, no kidding. I'm trying to create it after all. Do I need to break up the file path into components, create the directories and then create the file before instantiating the FileWriter object?
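
    A minimal sketch of the usual fix, assuming only that the missing parent directories need to exist before the writer is opened (try-with-resources assumes Java 7+, and the written content is a placeholder):

        import java.io.BufferedWriter;
        import java.io.File;
        import java.io.FileWriter;
        import java.io.IOException;

        public class EnsureParentDirs {
            public static void main(String[] args) throws IOException {
                File someFile = new File("someDirA/someDirB/someDirC/filename.txt");

                // FileWriter will create the file, but not missing directories, so create those first.
                File parent = someFile.getParentFile();
                if (parent != null && !parent.exists() && !parent.mkdirs()) {
                    throw new IOException("Could not create directory " + parent);
                }

                try (BufferedWriter writer = new BufferedWriter(new FileWriter(someFile))) {
                    writer.write("hello"); // placeholder content
                }
            }
        }

    Checking exists() before mkdirs() keeps the guard from misfiring on a second run, since mkdirs() also returns false when the directory is already there.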

    Read the article

  • Create javadoc with multiple src dirs

    - by Ed Marty
    I have a Util package with source files in three separate directories, defined like so: src/com/domain/util src/Standard/com/domain/util src/Extended/com/domain/util The package is built with the first set of files and either the second or the third set, to create a total of two different implementations of the same interface. Now I want to generate javadoc based on those files. How can I specify that? What I really want to do is javadoc com.domain.util -sourcepath ./src;./src/Standard to build the javadoc for the standard util package, and javadoc com.domain.util -sourcepath ./src;./src/Extended to build the javadoc for the extended util package. This doesn't work. The only way I've found so far to actually make it work is to merge the directory structure of the common classes and the Standard classes into another location and run with that for the standard javadoc, then do the same for the Extended package. Is there another way?

    Read the article

  • Prevent bots from crawling certain areas of a site

    - by Skoder
    Hey, I don't know much about SEO or how web spiders work, so forgive my ignorance here. I'm creating a site (using ASP.NET MVC) which has areas that display information retrieved from the database. The data is unique to the user, so there's no real server-side output caching going on. However, since the data can contain things the user may not wish to have displayed in search engine results, I'd like to prevent any spiders from accessing the search results page. Are there any special actions I should take to ensure that the search result directory isn't crawled? Also, would a spider even crawl a page that's dynamically generated, and would any actions preventing certain directories from being searched mess up my search engine rankings? Edit: I should add, I'm reading up on the robots.txt protocol, but it relies on co-operation from the web crawler. However, I'd also like to stop any data-mining users who will ignore the robots.txt file. I appreciate any help!
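
    For the co-operative crawlers, the standard first step is a robots.txt file at the site root; the path below is only a placeholder for whatever route actually serves the results pages:

        User-agent: *
        Disallow: /search/

    This only helps with crawlers that honour the protocol. Scrapers that ignore robots.txt have to be dealt with server-side (authentication, rate limiting, or blocking by user agent or IP), independently of anything declared in the file.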

    Read the article

  • How to split a path platform independent?

    - by Janusz
    I'm using the following code to get an array with all sub directories from a given path. String[] subDirs = path.split(File.separator); I need the array to check if certain folders are at the right place in this path. This looked like a good solution until FindBugs complained that File.separator is used as a regular expression. It seems that passing the Windows file separator to a function that builds a regex from it is a bad idea, because the backslash is an escape character. How can I split the path in a cross-platform way without using File.separator? Or is code like this okay? String[] subDirs = path.split("/");
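
    Two hedged options, sketched in plain Java: quote the separator so split() treats it literally, or skip string splitting and iterate the Path elements (java.nio.file assumes Java 7+); the sample path is a placeholder:

        import java.io.File;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.util.regex.Pattern;

        public class SplitPath {
            public static void main(String[] args) {
                String path = "some" + File.separator + "nested" + File.separator + "dir";

                // Option 1: escape the separator so split() sees a literal string, not a regex.
                String[] subDirs = path.split(Pattern.quote(File.separator));
                System.out.println(subDirs.length);

                // Option 2: let the platform's Path implementation do the splitting.
                for (Path element : Paths.get(path)) {
                    System.out.println(element);
                }
            }
        }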

    Read the article

  • Options for Linux OS executable archive files - self installers

    - by Matt1776
    I am looking to make a web project that can install itself from a single program. The user should be able to download an archive or tar file, run it (an executable), and the setup script would ask for paths and configurable values and then unpack its 'payload', sorting out the contents for deployment. This would be a Linux version of an MSI installer. Is there such a thing for Linux operating systems? This does not involve kernel-level manipulation. All it needs to do is copy directories and files on the filesystem, which should cover about 80% if not more of all the *nix distributions.

    Read the article

  • Elmah on MVC website with WordPress/php sub directory

    - by creativeincode
    I have created a website using ASP.NET MVC and use ELMAH for error handling; this works perfectly. After setting up a virtual directory on my website under /blog and adding the necessary WordPress php files and MySQL db, I get the error below: Could not load file or assembly 'Elmah' or one of its dependencies. The system cannot find the file specified. I think this has something to do with the fact that ELMAH is applying itself to all sub-directories. Is there a way I can tell ELMAH to ignore everything under /blog? Or is there a way to get around this? Thanks in advance.

    Read the article

  • Can't Get Virtual Users Set Up in VSFTPD - Tried Everything

    - by N.T.
    Have Ubuntu 11.10 with vsftpd installed and working. Can not get virtual users setup at all? Vsftpd will allow main Ubuntu owner account to login, but nothing else? I've followed several tutorials on adding virtual users, but nothing works? I just need to add 2 virtual users and have them be able to upload files to vsftpd Ubuntu computer from other computers on my Lan network. Everywhere I've looked, people just point toward tutorials on adding virtual users, but that just is NOT working. I've been struggling with this for over a week now! PLEASE Help. Thanks. I'll even give a donation if someone can figure this out. here is the vsftpd.conf file I am using. I copied the original, and make a new one, every time I try a tutorial. So far, none have worked. Here is the vsftpd.conf file I'm using. (I hope this helps?) # Example config file /etc/vsftpd.conf # # The default compiled in settings are fairly paranoid. This sample file # loosens things up a bit, to make the ftp daemon more usable. # Please see vsftpd.conf.5 for all compiled in defaults. # # READ THIS: This example file is NOT an exhaustive list of vsftpd options. # Please read the vsftpd.conf.5 manual page to get a full idea of vsftpd's # capabilities. # # # Run standalone? vsftpd can run either from an inetd or as a standalone # daemon started from an initscript. listen=YES # # Run standalone with IPv6? # Like the listen parameter, except vsftpd will listen on an IPv6 socket # instead of an IPv4 one. This parameter and the listen parameter are mutually # exclusive. #listen_ipv6=YES # # Allow anonymous FTP? (Disabled by default) anonymous_enable=YES # # Uncomment this to allow local users to log in. local_enable=YES # # Uncomment this to enable any form of FTP write command. write_enable=YES # # Default umask for local users is 077. You may wish to change this to 022, # if your users expect that (022 is used by most other ftpd's) local_umask=022 # # Uncomment this to allow the anonymous FTP user to upload files. This only # has an effect if the above global write enable is activated. Also, you will # obviously need to create a directory writable by the FTP user. #anon_upload_enable=YES # # Uncomment this if you want the anonymous FTP user to be able to create # new directories. anon_mkdir_write_enable=YES # # Activate directory messages - messages given to remote users when they # go into a certain directory. dirmessage_enable=YES # # If enabled, vsftpd will display directory listings with the time # in your local time zone. The default is to display GMT. The # times returned by the MDTM FTP command are also affected by this # option. use_localtime=YES # # Activate logging of uploads/downloads. xferlog_enable=YES # # Make sure PORT transfer connections originate from port 20 (ftp-data). connect_from_port_20=YES # # If you want, you can arrange for uploaded anonymous files to be owned by # a different user. Note! Using "root" for uploaded files is not # recommended! #chown_uploads=YES #chown_username=whoever # # You may override where the log file goes if you like. The default is shown # below. #xferlog_file=/var/log/vsftpd.log # # If you want, you can have your log file in standard ftpd xferlog format. # Note that the default log file location is /var/log/xferlog in this case. xferlog_std_format=YES # # You may change the default value for timing out an idle session. #idle_session_timeout=600 # # You may change the default value for timing out a data connection. 
#data_connection_timeout=120 # # It is recommended that you define on your system a unique user which the # ftp server can use as a totally isolated and unprivileged user. #nopriv_user=ftpsecure # # Enable this and the server will recognise asynchronous ABOR requests. Not # recommended for security (the code is non-trivial). Not enabling it, # however, may confuse older FTP clients. #async_abor_enable=YES # # By default the server will pretend to allow ASCII mode but in fact ignore # the request. Turn on the below options to have the server actually do ASCII # mangling on files when in ASCII mode. # Beware that on some FTP servers, ASCII support allows a denial of service # attack (DoS) via the command "SIZE /big/file" in ASCII mode. vsftpd # predicted this attack and has always been safe, reporting the size of the # raw file. # ASCII mangling is a horrible feature of the protocol. #ascii_upload_enable=YES #ascii_download_enable=YES # # You may fully customise the login banner string: ftpd_banner=Welcome to Sage FTP service. # # You may specify a file of disallowed anonymous e-mail addresses. Apparently # useful for combatting certain DoS attacks. #deny_email_enable=YES # (default follows) #banned_email_file=/etc/vsftpd.banned_emails # # You may restrict local users to their home directories. See the FAQ for # the possible risks in this before using chroot_local_user or # chroot_list_enable below. chroot_local_user=YES # # You may specify an explicit list of local users to chroot() to their home # directory. If chroot_local_user is YES, then this list becomes a list of # users to NOT chroot(). #chroot_local_user=YES #chroot_list_enable=YES # (default follows) #chroot_list_file=/etc/vsftpd.chroot_list # # You may activate the "-R" option to the builtin ls. This is disabled by # default to avoid remote users being able to cause excessive I/O on large # sites. However, some broken FTP clients such as "ncftp" and "mirror" assume # the presence of the "-R" option, so there is a strong case for enabling it. #ls_recurse_enable=YES # # Debian customization # # Some of vsftpd's settings don't fit the Debian filesystem layout by # default. These settings are more Debian-friendly. # # This option should be the name of a directory which is empty. Also, the # directory should not be writable by the ftp user. This directory is used # as a secure chroot() jail at times vsftpd does not require filesystem # access. secure_chroot_dir=/var/run/vsftpd/empty # # This string is the name of the PAM service vsftpd will use. pam_service_name=vsftpd local_root=/media/FilesDrive # # This option specifies the location of the RSA certificate to use for SSL # encrypted connections. rsa_cert_file=/etc/ssl/private/vsftpd.pem

    Read the article

  • remote cpio copy

    - by soField
    remsh remoteserverhostname -l remoteusername find /tmp/a1/ | cpio -o > /tmp/paketr.cpio rcp remoteserverhostname:/tmp/paketr.cpio /tmp/aaa cpio -idmv < /tmp/paketr.cpio I'm trying to copy a directory structure from a remote server to a local server. I can do this with the command list above, but I wonder if I can do it with just one command by running cpio in pass-through mode: remsh remoteserverhostname find /tmp/a1 | cpio -pd /tmp current </tmp/tmp/a1/b1/y1> newer current </tmp/tmp/a1/b1/z1> newer current </tmp/tmp/a1/b2/l2smc> newer "/tmp/a1/b3": No such file or directory Cannot stat </tmp/a1/b3>. 0 blocks So when I use the cpio -pd option, I expect it to create the directories for me, but it does not. What can I do?

    Read the article

  • to Imagemagick PHP exec

    - by Erik Smith
    I found a very helpful post on here about cropping images in a circle. However, when I try to execute the imagemagick script using exec in PHP, I'm getting no results. I've checked to make sure the directories have the correct permissions and such. Is there a step I'm missing? Any insight would be much appreciated. Here's what my script looks like: $run = exec('convert -size 200x200 xc:none -fill daisy.jpg -draw "circle 100,100 100,1" uploads/new.png'); Edit: Imagemagick is installed.

    Read the article

  • IntelliJ: Adding resources to the class path, but still gives me "java.util.MissingResourceException: Can't find bundle.... locale en_US"

    - by Martin
    I am quite new to IntelliJ and I have loaded a project that I wish to compile. Everything seems to be going well, but when I compile it I get a "bundle cannot be found" error: java.util.MissingResourceException: Can't find bundle for base name openfire_i18n, locale en_US at java.util.ResourceBundle.throwMissingResourceException(ResourceBundle.java:1499) After doing some investigation, it appears I have to include the resources in the classpath; is this correct? I went to Project Settings, Modules, Dependencies, and added a "Jars or directories" entry. There is a checkbox that says Export; I have tried checking it and unchecking it :-) My resources, as far as I can see, are in i8n\ResourceBundle. I tried adding the i8n directory and it asked me for a category for the selected files; I chose Classes and ran - still the same error. So I tried adding the i8n/ResourceBundle directory and ran - still the same error. I notice that under my ResourceBundle directory there is C:\Dev\Java\openfire_src_3_7_1\openfire_src\src\i18n\openfire_i18n_en.properties, but there is no specific en_US file - I thought it was supposed to fall back to the _en bundle? So I think everything is OK. Can anyone help? I am really stuck. Thanks
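
    One hedged way to narrow this down is to check whether the properties files are visible on the runtime classpath at all. This is only a sketch and assumes the bundle ends up at the classpath root rather than inside a package:

        import java.util.Locale;
        import java.util.ResourceBundle;

        public class BundleCheck {
            public static void main(String[] args) {
                // If this prints null, the .properties files never made it onto the runtime
                // classpath, and getBundle will keep throwing MissingResourceException.
                System.out.println(BundleCheck.class.getClassLoader()
                        .getResource("openfire_i18n_en.properties"));

                // Once the file is visible, an en_US request is expected to fall back to the _en bundle.
                ResourceBundle bundle = ResourceBundle.getBundle("openfire_i18n", Locale.US);
                System.out.println(bundle.getLocale());
            }
        }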

    Read the article

  • Core Data Editor problems

    - by Peyman
    I was recommended by someone on Stack Overflow to use Core Data Editor http://christian-kienle.de/CoreDataEditor/ to manage the sqlite persistent store. However, the latest version (3.0) crashes on launch every time. Older versions load, but I see nothing when I point the config to the persistent store and the object model directories. There is no documentation either. Can someone point me to the right place to sort this problem out? I am trying to find a more manageable way to coordinate Core Data development than sqlite consoles. Thank you

    Read the article

  • Git: How do I push a project that was downloaded from source?

    - by JZ
    I worked with a graphic designer who did not clone from my GitHub account. He downloaded the project from source rather than using the command "git clone". Since he pulled his files, a month has gone by and I want to do the following tasks: Create a new branch Push the graphic designer's project into that branch Merge his branch with master I've tried following the GitHub forking guide without much luck; when I attempt to push the files into a new branch I get an error: fatal: Not a git repository (or any of the parent directories): .git How do I do this?

    Read the article

  • Developer tool for configuring IIS6

    - by Marc Gravell
    edit: IIS6; I'm not sure IIS7 is an option in the immediate future... From a developer angle, I am constantly changing my IIS settings, or need to merge settings from other teams into different VMs. The "Save Configuration to Disk" option has never really worked well for me. Because we are making lots of small changes, web installation projects have never really worked either... Tools aimed at the web admin aren't necessarily a good fit for the developer; we have different aims and needs. Does anyone have a script / tool / utility that would allow us to quickly configure IIS? In particular: remove everything (start clean); add a load of virtual directories, each mapped to application base paths and set as an application; set the app-pool (we'll assume the app pool already exists); set the ASP.NET version to 2.x if needed; all driven from some kind of flat input list (any format would do).

    Read the article

  • Daemon with Clojure/JVM

    - by Isaac Copper
    I'd like to have a small (not doing too damn much) daemon running on a little server, watching a directory for new files being added to it (and any directories in the main one), and calling another Clojure program to deal with each new file. Ideally, each file would be added to a queue (a list represented by a ref in Clojure?) and the main process would take care of the files in the queue on a FIFO basis. My question is: is having a JVM up running this little program all the time too much of a resource hog? And do you have any suggestions as to how to go about doing this? Thank you very much!
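
    Since this runs on the JVM either way, here is a minimal plain-Java sketch of the watch-and-queue part, under stated assumptions: a recent JDK (java.nio.file WatchService and lambdas), only the top-level directory registered (recursing into subdirectories is left out), and the watched path and the "processing" line standing in for the real Clojure hand-off:

        import java.nio.file.*;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        public class DirWatcher {
            public static void main(String[] args) throws Exception {
                Path dir = Paths.get("/tmp/incoming");                   // placeholder directory
                BlockingQueue<Path> queue = new LinkedBlockingQueue<>(); // FIFO work queue

                // Consumer thread: takes files off the queue in FIFO order and hands them on.
                Thread consumer = new Thread(() -> {
                    while (true) {
                        try {
                            Path file = queue.take();
                            System.out.println("processing " + file);    // call the other program here
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                            return;
                        }
                    }
                });
                consumer.setDaemon(true);
                consumer.start();

                // Producer: block until the OS reports new entries in the watched directory.
                WatchService watcher = FileSystems.getDefault().newWatchService();
                dir.register(watcher, StandardWatchEventKinds.ENTRY_CREATE);
                while (true) {
                    WatchKey key = watcher.take();
                    for (WatchEvent<?> event : key.pollEvents()) {
                        if (event.kind() == StandardWatchEventKinds.ENTRY_CREATE) {
                            queue.add(dir.resolve((Path) event.context()));
                        }
                    }
                    key.reset();
                }
            }
        }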

    Read the article

  • Impersonation and Delegation

    - by Samuel Kim
    I am using impersonation to access a file on a UNC share, as below. var ctx = ((WindowsIdentity)HttpContext.Current.User.Identity).Impersonate(); string level = WindowsIdentity.GetCurrent().ImpersonationLevel); On two Windows 2003 servers using IIS6, I am getting different impersonation levels: Delegation on one server and Impersonation on the other. This causes issues where I am unable to access the UNC share on the server with the 'Impersonation' level. What could be causing this difference? I searched through machine.config and the IIS settings for the app pool, site and virtual directories, but was unable to find the cause of this problem.

    Read the article

  • Convert PDF to Image Batch

    - by tro
    I am working on a solution where I can convert pdf files to images. I am using the following example from codeproject: http://www.codeproject.com/Articles/317700/Convert-a-PDF-into-a-series-of-images-using-Csharp?msg=4134859#xx4134859xx now I tried with the following code to generate from more then 1000 pdf files new images: using Cyotek.GhostScript; using Cyotek.GhostScript.PdfConversion; using System; using System.Collections.Generic; using System.Drawing; using System.IO; using System.Linq; using System.Text; using System.Threading.Tasks; namespace RefClass_PDF2Image { class Program { static void Main(string[] args) { string outputPath = Properties.Settings.Default.outputPath; string pdfPath = Properties.Settings.Default.pdfPath; if (!Directory.Exists(outputPath)) { Console.WriteLine("Der angegebene Pfad " + outputPath + " für den Export wurde nicht gefunden. Bitte ändern Sie den Pfad (outputPath) in der App.Config Datei."); return; } else { Console.WriteLine("Output Pfad: " + outputPath + " gefunden."); } if (!Directory.Exists(pdfPath)) { Console.WriteLine("Der angegebene Pfad " + pdfPath + " zu den PDF Zeichnungen wurde nicht gefunden. Bitte ändern Sie den Pfad (pdfPath) in der App.Config Datei."); return; } else { Console.WriteLine("PDF Pfad: " + pdfPath + " gefunden."); } Pdf2ImageSettings settings = GetPDFSettings(); DateTime start = DateTime.Now; TimeSpan span; Console.WriteLine(""); Console.WriteLine("Extraktion der PDF Zeichnungen wird gestartet: " + start.ToShortTimeString()); Console.WriteLine(""); DirectoryInfo diretoryInfo = new DirectoryInfo(pdfPath); DirectoryInfo[] directories = diretoryInfo.GetDirectories(); Console.WriteLine(""); Console.WriteLine("Es wurden " + directories.Length + " verschiedende Verzeichnisse gefunden."); Console.WriteLine(""); List<string> filenamesPDF = Directory.GetFiles(pdfPath, "*.pdf*", SearchOption.AllDirectories).Select(x => Path.GetFullPath(x)).ToList(); List<string> filenamesOutput = Directory.GetFiles(outputPath, "*.*", SearchOption.AllDirectories).Select(x => Path.GetFullPath(x)).ToList(); Console.WriteLine(""); Console.WriteLine("Es wurden " + filenamesPDF.Count + " verschiedende PDF Zeichnungen gefunden."); Console.WriteLine(""); List<string> newFileNames = new List<string>(); int cutLength = pdfPath.Length; for (int i = 0; i < filenamesPDF.Count; i++) { string temp = filenamesPDF[i].Remove(0, cutLength); temp = outputPath + temp; temp = temp.Replace("pdf", "jpg"); newFileNames.Add(temp); } for (int i = 0; i < filenamesPDF.Count; i++) { FileInfo fi = new FileInfo(newFileNames[i]); if (!fi.Exists) { if (!Directory.Exists(fi.DirectoryName)) { Directory.CreateDirectory(fi.DirectoryName); } Bitmap firstPage = new Pdf2Image(filenamesPDF[i], settings).GetImage(); firstPage.Save(newFileNames[i], System.Drawing.Imaging.ImageFormat.Jpeg); firstPage.Dispose(); } //if (i % 20 == 0) //{ // GC.Collect(); // GC.WaitForPendingFinalizers(); //} } Console.ReadLine(); } private static Pdf2ImageSettings GetPDFSettings() { Pdf2ImageSettings settings; settings = new Pdf2ImageSettings(); settings.AntiAliasMode = AntiAliasMode.Medium; settings.Dpi = 150; settings.GridFitMode = GridFitMode.Topological; settings.ImageFormat = ImageFormat.Png24; settings.TrimMode = PdfTrimMode.CropBox; return settings; } } } unfortunately, I always get in the Pdf2Image.cs an out of memory exception. 
    Here is the code:

        public Bitmap GetImage(int pageNumber)
        {
            Bitmap result;
            string workFile;

            //if (pageNumber < 1 || pageNumber > this.PageCount)
            //    throw new ArgumentException("Page number is out of bounds", "pageNumber");
            if (pageNumber < 1)
                throw new ArgumentException("Page number is out of bounds", "pageNumber");

            workFile = Path.GetTempFileName();
            try
            {
                this.ConvertPdfPageToImage(workFile, pageNumber);
                using (FileStream stream = new FileStream(workFile, FileMode.Open, FileAccess.Read))
                {
                    result = new Bitmap(stream); // --->>> here is the out of memory exception
                    stream.Close();
                    stream.Dispose();
                }
            }
            finally
            {
                File.Delete(workFile);
            }

            return result;
        }

    How can I fix this to avoid the exception? Thanks for any help, tro

    Read the article

  • Change the install directory for an OSX Package

    - by Scott
    It drives me nuts that every time I download a binary to run on OSX, it wants to install in system directories. ~/Applications is a perfectly fine place to install and doesn't require blindly authenticating somebody else's binary. Is there a way to change the install directory for packages? On a few I've been able to open the package and edit the plist to install it elsewhere, but that doesn't work universally. I install from source when I can, but it isn't always an option. Is there a good way to force the installer to use ~/Applications?

    Read the article

  • CruiseControl / NANT <copy> Task

    - by Striker
    We have a website with all the media (css/images) stored in a media folder. The media folder and its 95 subdirectories contain about 400 files in total. We have a CruiseControl project that monitors just the media directory for changes and, when triggered, copies those files to our integration server. Unfortunately, our integration server is at a remote location, so even when copying 2-3 files the NANT task takes 4+ minutes. I believe the combination of the sheer number of directories/files and our network latency is causing the NANT task to run slowly, since it is comparing the modified dates of both the local and remote copy of every file. I really want to speed this up, and my initial thought was: instead of trying to copy the whole media folder, can I get the list of file modifications from CruiseControl and copy just those files, saving the NANT task the work of comparing them all for changes? Is there a way to do what I am asking, or is there a better way to accomplish the same performance gains?

    Read the article

  • pylint ignore by directory

    - by Ciantic
    The following is from the pylint docs: --ignore=<file> Add <file or directory> to the black list. It should be a base name, not a path. You may set this option multiple times. [current: %default] Yet I'm not having any luck getting the directory part to work. I have a directory called migrations, which holds django-south migration files. When I pass --ignore=migrations it still keeps giving me errors/warnings for files inside the migrations directory. Could it be that --ignore does not work for directories? If I could even use a regexp to match the ignored files it would work, since django-south files are all named 0001_something, 0002_something...

    Read the article

  • CherryPy configuration tools.staticdir.root problem

    - by Alan Harris-Reid
    Hi there, How can I make my static-file root directories relative to my application root folder (instead of a hard-coded path)? In accordance with the CP instructions (http://www.cherrypy.org/wiki/StaticContent) I have tried the following in my configuration file: tree.cpapp = cherrypy.Application(cpapp.Root()) tools.staticdir.root = cpapp.current_dir but when I run cherrypy.quickstart(rootclass, script_name='/', config=config_file) I get the following error: builtins.ValueError: ("Config error in section: 'global', option: 'tree.cpapp', value: 'cherrypy.Application(cpapp.Root())'. Config values must be valid Python.", 'TypeError', ("unrepr could not resolve the name 'cpapp'",)) I know I can do the configuration from within the main.py file just before quickstart is called (e.g. using os.path.abspath(os.path.dirname(__file__))), but I prefer the idea of a separate configuration file if possible. Any help would be appreciated (in case it is relevant, I am using CP 3.2 with Python 3.1) TIA Alan

    Read the article

  • Linux ext3 readdir and concurrent updates

    - by Wangnick
    Dear all, we are receiving about 10000 messages per hour. We store them as individual files in hourly directories on an ext3 filesystem. The file name includes a sequence number. We use rsync to mirror these files every 20 seconds to another location (via a SAN, but that doesn't matter). Sometimes an rsync run picks up files n-3, n-2, n-1, n+1, and then the next rsync run continues with n, n+2, n+3, n+4 and so on. Is it possible that when one process creates files in a certain sequence within a directory, another process using readdir() sees the files appearing in a different sequence? Kind regards, Sebastian

    Read the article

  • Problem With HTML5 Application Cache Whitelist - Won't Ignore Items

    - by Ryan Donnelly
    I'm trying to use the HTML5 Application Cache to speed some things up on an iPhone webapp. It works great for storing images, css and JS, but the problem is that it also tries to store the HTML. I haven't been able to get it to ignore the html and stop storing it in the cache. From what I've read, I have to "whitelist" the files and directories that I want loaded no matter what. I've tried listing the files I want cached explicitly, and I've tried adding a series of entries under the "NETWORK:" heading: *, /, /*, http://mysite.com, http://mysite.com/ and http://mysite.com/*. None of them seem to work. Is there any way to ignore HTML files by MIME type or anything? Any advice would be appreciated. Ryan P.S. Of course, my site is not mysite.com... I just used that for simplicity.

    Read the article

  • Blank space after file extension -> weird FileInfo behaviour

    - by Axarydax
    Somehow a file has appeared in one of my directories with a space at the end of its extension - its name is "test.txt ". The weird thing is that Directory.GetFiles() returns the path of this file, but I'm unable to retrieve file information with the FileInfo class. The error manifests here: DirectoryInfo di = new DirectoryInfo("c:\\somedir"); FileInfo fi = di.GetFileSystemInfos("test*")[0] as FileInfo; //correctly fi.FullName is "c:\somedir\test.txt " //but fi.Exists==false (!) Is the FileInfo class broken? Can I somehow retrieve information about this file? I really don't know how that file appeared on my file system, and I am unable to recreate more files like it. All of my attempts to create a new file with this type of extension have failed, but now my program is crashing when encountering it. I can easily handle the exception when finding the file, but boy am I curious about this!

    Read the article
