Search Results

Search found 17260 results on 691 pages for 'folder tree'.


  • How do I catalog files on several external hard drives that I want to store off-line? OSX

    - by raudi
    My partner, an artist, has more than 10 external hard disks, both USB and FireWire, and every 2-3 months a new one has to be added (she works with videos and pictures). Currently it's 10 TB and growing, so too much for an affordable NAS. Right now the files are not indexed and, I think, cannot be searched with Spotlight, because not all drives can be connected at the same time. So if she wants to search for a file, she has to guess which disk or disks hold it (based mostly on the date) and then search several drives. Now I'm looking for a solution to index/catalog the drives, something like GentibusCD, Cathy, or Disclib (all of these are unfortunately Windows-only). Is there any software for OSX that will catalog all the hard drives, so she can search the catalog, find the files, and get the ID or name of the disk that has the content? Preferably something with a GUI so my partner can also use it easily, and preferably with thumbnails for pictures/videos (but even an equivalent of "tree /F /A" would be better than nothing).
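
    As a stopgap while a proper GUI tool is found, a minimal shell sketch along these lines could build a greppable text catalog per disk (the mount point and catalog folder below are assumptions):

        #!/bin/sh
        # Dump a recursive file listing of a mounted volume into a catalog file
        # named after the disk, so the catalogs can be searched later offline.
        VOLUME="/Volumes/Disk01"              # assumed mount point of the external drive
        CATALOG_DIR="$HOME/disk-catalogs"     # assumed folder holding one catalog per disk
        mkdir -p "$CATALOG_DIR"
        find "$VOLUME" -print > "$CATALOG_DIR/$(basename "$VOLUME").txt"
        # Later: grep -i 'clipname' "$CATALOG_DIR"/*.txt  shows which disk holds the file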

    Read the article

  • Why does subshell not inherit exported variable (PS1)?

    - by amn
    After some debugging I finally narrowed down why my X session xterm prompt does not follow my PS1 setting: if I run sh -c env, PS1 doesn't even show up in the list. Why?

        export PS1='test'
        sh -c env   # No PS1 in the list; default prompt appearance (shell name + version)

    Substituting bash for sh yields the same result, so the behavior appears to be the same for both shells/modes. As far as I understood from man bash, the environment of a command run by the shell with -c should include the exported variables. And it does - exporting FOOBAR results in FOOBAR being listed in env run by the subshell. The story appears to be different when the variable is PS1, however. What is going on? I want my prompt propagated throughout the process tree and system. For what it's worth, it is set in /etc/profile.d/user.sh (a file I created myself) with the following:

        PS1='\u@\H \w \$ '
        export PS1

    I am running Arch Linux (updated yesterday).
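
    For comparison, a quick demo of the two cases described above, run from an interactive bash session (FOOBAR is an arbitrary example name):

        export FOOBAR='test'
        sh -c env | grep FOOBAR   # FOOBAR=test is printed: ordinary exports propagate
        export PS1='test'
        sh -c env | grep PS1      # empty here when sh is bash, matching the behavior described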

    Read the article

  • How do I solve the .NET CF exception "Can't find PInvoke DLL"?

    - by Ignas Limanauskas
    This is to all the C# gurus. I have been banging my head on this for some time already, and have tried all kinds of advice on the net to no avail. The action is happening on Windows Mobile 5.0. I have a DLL named MyDll.dll. In MyDll.h I have:

        extern "C" __declspec(dllexport) int MyDllFunction(int one, int two);

    The definition of MyDllFunction in MyDll.cpp is:

        int MyDllFunction(int one, int two)
        {
            return one + two;
        }

    The C# class contains the following declaration:

        [DllImport("MyDll.dll")]
        extern public static int MyDllFunction(int one, int two);

    In the same class I am calling MyDllFunction the following way:

        int res = MyDllFunction(10, 10);

    And this is where the bloody thing keeps giving me "Can't find PInvoke DLL 'MyDll.dll'". I have verified that I can actually do P/Invoke on system calls, such as GetAsyncKeyState(1), declared as:

        [DllImport("coredll.dll")]
        protected static extern short GetAsyncKeyState(int vKey);

    MyDll.dll is in the same folder as the executable, and I have also tried putting it into the /Windows folder, with no change and no success. Any advice or solutions are greatly appreciated.

    Read the article

  • How do you organise multiple git repositories?

    - by dbr
    With SVN, I had a single big repository that I kept on a server and checked out on a few machines. This was a pretty good backup system, and allowed me to easily work on any of the machines. I could check out a specific project, commit, and it updated the 'master' project, or I could check out the entire thing. Now, I have a bunch of git repositories for various projects, several of which are on github. I also have the SVN repository I mentioned, imported via the git-svn command. Basically, I like having all my code (not just projects, but random snippets and scripts, some things like my CV, articles I've written, websites I've made, and so on) in one big repository I can easily clone onto remote machines, or memory sticks/hard drives, as backup. The problem is that since it's a private repository, git doesn't allow checking out just a specific folder (one that I could push to github as a separate project, while having the changes appear in both the master repo and the sub-repos). I could use the git submodule system, but it doesn't act how I want it to (submodules are pointers to other repositories and don't really contain the actual code, so they are useless for backup). Currently I have a folder of git repos (for example, ~/code_projects/proj1/.git/, ~/code_projects/proj2/.git/); after making changes to proj1 I do git push github, then I copy the files into ~/Documents/code/python/projects/proj1/ and do a single commit (instead of the numerous ones in the individual repos), then git push backupdrive1, git push mymemorystick, etc. So, the question: how do you organise your personal code and projects with git repositories, and keep them synced and backed up?
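
    For the backup half of the workflow described above, a minimal sketch of pushing one repository to several remotes (the remote names and paths are assumptions, and the bare target repos must already exist, e.g. via git init --bare):

        cd ~/code_projects/proj1
        # One-time setup: register each backup location as an ordinary remote.
        git remote add backupdrive1 /Volumes/backupdrive1/git/proj1.git
        git remote add mymemorystick /Volumes/memorystick/git/proj1.git
        # After each batch of commits, push everywhere:
        git push github master
        git push backupdrive1 master
        git push mymemorystick master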

    Read the article

  • Shell script: filter du and find by a string inside a file in a subfolder

    - by Jason
    I have the following command that I run on cygwin:

        find /cygdrive/d/tmp/* -maxdepth 0 -mtime -150 -type d | xargs du --max-depth=0 > foldersizesreport.csv

    I intended this command to do the following: for each folder under /d/tmp/ that was modified in the last 150 days, check its total size including the files within it, and report it to the file foldersizesreport.csv. However, that is no longer good enough for me. As it turns out, inside each subfolder there is a file named somefile.properties:

        /d/tmp/subfolder1/somefile.properties
        /d/tmp/subfolder2/somefile.properties
        /d/tmp/subfolder3/somefile.properties
        /d/tmp/subfolder4/somefile.properties

    Inside it there is a property SOMEPROPKEY=3808612800100 (among other properties); this is a time in milliseconds. I need to change the command so that, instead of -mtime -150, the calculation includes only those subfolders whose somefile.properties has a SOMEPROPKEY value that is a time in the future; if the value (say SOMEPROPKEY=23948948) is in the past, the folder should not be included in foldersizesreport.csv at all, because it's not relevant to me. So the resulting report should look like:

        /d/tmp/,subfolder1,<itssizein KB>
        /d/tmp/,subfolder2,<itssizein KB>

    and if subfolder3 had SOMEPROPKEY=34243234 (a time in ms in the past), it would not be in that csv file. So basically I'm looking for:

        find /cygdrive/d/tmp/* -maxdepth 0 -mtime -150 -type d | <only subfolders whose somefile.properties has SOMEPROPKEY=28374874827 - a time in ms in the future, not the past> | xargs du --max-depth=0 > foldersizesreport.csv
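
    A minimal sketch of that middle filter stage, assuming the property file and key names above, and GNU date for the current time (seconds are multiplied by 1000 to compare against milliseconds):

        now_ms=$(( $(date +%s) * 1000 ))
        for dir in /cygdrive/d/tmp/*/; do
            props="${dir}somefile.properties"
            [ -f "$props" ] || continue
            # Pull the value after SOMEPROPKEY= out of the properties file.
            val=$(sed -n 's/^SOMEPROPKEY=//p' "$props")
            if [ -n "$val" ] && [ "$val" -gt "$now_ms" ]; then
                du -sk "$dir"   # size in KB, future-timestamped folders only
            fi
        done > foldersizesreport.csv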

    Read the article

  • Need help with PHP URL encoding/decoding

    - by Kenan
    On one page I'm "masking"/encoding a URL which is passed to another page; there I decode the URL and start delivering the file to the user. I found some functions for encoding/decoding URLs, but sometimes the encoded URL contains "+" or "/" and the decoded link is broken. I must use a "folder structure" for the link; I cannot use a query string! Here is the encoding function:

        $urll = 'SomeUrl.zip';
        $key = '123';
        $result = '';
        for ($i = 0; $i < strlen($urll); $i++) {
            $char = substr($urll, $i, 1);
            $keychar = substr($key, ($i % strlen($key)) - 1, 1);
            $char = chr(ord($char) + ord($keychar));
            $result .= $char;
        }
        $result = urlencode(base64_encode($result));
        echo '<a href="/user/download/'.$result.'/">PC</a>';

    Here is the decoding:

        $urll = 'segment_3'; // Don't worry about this one; it's the CMS retrieving the 3rd "folder"
        $key = '123';
        $resultt = '';
        $string = '';
        $string = base64_decode(urldecode($urll));
        for ($i = 0; $i < strlen($string); $i++) {
            $char = substr($string, $i, 1);
            $keychar = substr($key, ($i % strlen($key)) - 1, 1);
            $char = chr(ord($char) - ord($keychar));
            $resultt .= $char;
        }
        echo '<br />DEC: '. $resultt;

    So how do I encode and decode the URL? Thanks
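
    One commonly used workaround for the "+" and "/" problem is a URL-safe base64 variant; a minimal sketch (the function names are my own):

        <?php
        // Swap the two URL-hostile base64 characters for "-" and "_",
        // and drop the "=" padding (re-added on decode).
        function base64url_encode($data) {
            return rtrim(strtr(base64_encode($data), '+/', '-_'), '=');
        }
        function base64url_decode($data) {
            $pad = strlen($data) % 4;
            if ($pad) { $data .= str_repeat('=', 4 - $pad); }
            return base64_decode(strtr($data, '-_', '+/'));
        }

    The result then survives a round trip through a path segment without any further urlencode() calls.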

    Read the article

  • How can I send rich emails using the user's mail client?

    - by Brann
    I need my .net program to send rich emails (usually containing table data, around 20 columns x 10 rows) using the user's mail infrastructure, allowing him to review/edit the mail before sending it, and storing the mail in his 'sent items' folder. mailto: seems the obvious choice, but unfortunately it supports neither attachments nor HTML bodies. It seems some clients support extra features (e.g. Outlook 97 used to support an &Attach tag, but this is not the case for more recent versions). I could use mailto and try to format the text body to look nice (using tabs, etc.), but this isn't really elegant and wouldn't support large amounts of data. Using automation seems a very big task, as I would need to automate dozens of clients (4 or 5 versions of Outlook, Lotus Notes, Thunderbird, etc.); this would be a huge effort and it's not really my core business. I could send emails through code and write my own mail form to let the user edit the mail, but this would have a lot of drawbacks:

        - the user would need to manually configure the mail server settings
        - he wouldn't have access to his contact directory
        - the mail wouldn't be stored in his sent items folder

    This seems a quite common issue, but I haven't found any satisfying solution yet; does someone know of a library supporting this (i.e. containing automation logic for most mainstream email clients)? Or an alternative to mailto:?
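
    For the subset of users running Outlook, a hedged sketch of the automation route using the Office interop assemblies (this covers Outlook only, assumes a reference to Microsoft.Office.Interop.Outlook, and is not the general solution asked for):

        using Outlook = Microsoft.Office.Interop.Outlook;

        // Opens a new message in the user's own Outlook, pre-filled with an HTML
        // table, so review/edit, contacts, and Sent Items all come for free.
        static void ShowDraft(string htmlTable)
        {
            var app = new Outlook.Application();
            var mail = (Outlook.MailItem)app.CreateItem(Outlook.OlItemType.olMailItem);
            mail.Subject = "Report";
            mail.HTMLBody = htmlTable;
            mail.Display(false); // modeless: the user reviews, edits, and sends
        }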

    Read the article

  • What tool can I use to definitely kill a process on Windows?

    - by Moak
    Can anyone recommend an application (preferably USB-portable, not requiring setup) that really kills a process immediately in Windows XP? I ask because often when I need to use the XP task manager it seems to want to go about it "the polite way", and sometimes crashed programs don't quit or take a minute to shut down. I need a real stone-cold killer, not a pushover could-you-please-quit-now-no?-ok-sorry program. Edit: I'm sorry if it wasn't clear previously, but I did mean the situation where even the "End Process" command in the Processes tab of the task manager doesn't kill a program. However, one of the answers did point me to the "End Process Tree" command (when right-clicking on the process), which I'd never noticed.
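
    For reference, the built-in taskkill command can also force-kill a process and its children from a command prompt; no extra install needed, though it ships with XP Professional and may be absent on other editions:

        :: Force-kill by image name, including any child processes
        taskkill /F /T /IM crashed_app.exe
        :: ...or by PID, as shown in Task Manager
        taskkill /F /T /PID 1234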

    Read the article

  • Permission to see the expandable list of ISA Server 2006

    - by Hossein Mobasher
    I am working with ISA Server 2006 on Windows Server. I want to add some policy rules to my server, so I followed this link, which says: "In the Microsoft Internet Security and Acceleration Server 2006 management console, expand the array name, and then click the Firewall Policy node." But when I open the ISA Server 2006 management console, I cannot see the expandable list. How can I force ISA to show the expandable tree so I can reach Firewall Policy? Could anyone please help me do this? Note: I have administrator permission for my account. Thanks in advance :)

    Read the article

  • Resizing a FileReference image then re-uploading only re-uploads the original image

    - by pfunc
    I can't figure out how to do this. Someone selects an image after calling FileReference.browse(). I take that image and make a thumbnail of it in Flash. Then I upload that image like so:

        var newFileReq:URLRequest = new URLRequest(FILE_UPLOAD_TEMP);
        newFileReq.contentType = "application/octet-stream";
        var fileReqVars:URLVariables = new URLVariables();
        fileReqVars.image = myThumbImage;
        fileReqVars.folder = "Thumbs";
        newFileReq.data = fileReqVars;
        newFileReq.method = URLRequestMethod.POST;

        // upload the first image
        fileRef.addEventListener(Event.COMPLETE, onFirstFileUp);
        fileRef.upload(newFileReq, "Filedata");

    All this does is upload the original image. How do I change the fileRef to upload the new thumb? I have traced out the size of myThumbImage and it is correct. I have placed it visually on the stage after creating the thumb, and it seems to work. But when I upload it to an aspx page (that basically just throws it into a folder), it uploads the original, larger image.

    Read the article

  • Life Cycle Navigator?

    - by C.W.Holeman II
    In many environments the file system directory structure and naming conventions attempt to let one use a file manager to navigate the life cycle of a document. This overloading of functions makes it difficult for users to handle the complexity. A file browser is a tool that lets the user navigate among files located in a directory structure to find a specific file, whereas, given a specific file, a life cycle navigator is a tool that lets the user navigate its life cycle from source to published copy and across versions. Does a Life Cycle Navigator exist? I see a user pointing at an object:

        Left mouse button displays the document
        Right mouse button has a Life Cycle Navigator (LCN)

    The LCN displays a tree for a specific document within a file manager, for example:

        Published
            3.2 Current
            3.1
            3.0
            +2.x
            +1.x
            +Archived
            +All
        Source
            Draft
            3.2 Current
            3.1
            3.0
            +2.x
            +1.x
            +Archived
            +All
        +Work Flow
        +Properties

    Or from a command line:

        $ lcn x.pdf --open_source_document | my_favorite_editor
        $ lcn x.pdf --show_published_version_info
        $ lcn x.pdf --show_previous_publish_versions_info

    See also, Life Cycle Navigator.

    Read the article

  • Short names versus long names in Windows

    - by normski
    I have some code which gets the short name from a file path using GetShortPathNameW(), and then later retrieves the long name via GetLongPathNameA(). The original file is of the form "C:/ProgramData/My Folder/File.ext"; however, following conversion to short and back to long, the filename becomes "C:/Program Files/My Folder/Filename.ext". The short name is of the form "C:/PROGRA~2/MY_FOL~1/FIL~1.EXT", so the short name is being incorrectly resolved. The code compiles using VS 2005 on Windows 7 (I cannot upgrade the project to VS2008). Does anybody have any idea why this might be happening?

        DWORD pathLengthNeeded = ::GetShortPathNameW(aRef->GetFilePath().c_str(), NULL, 0);
        if (pathLengthNeeded != 0)
        {
            WCHAR* shortPath = new WCHAR[pathLengthNeeded];
            DWORD newPathNameLength = ::GetShortPathNameW(aRef->GetFilePath().c_str(), shortPath, pathLengthNeeded);
            if (newPathNameLength != 0)
            {
                UI_STRING unicodePath(shortPath);
                std::string asciiPath = StringFromUserString(unicodePath);
                pathLengthNeeded = ::GetLongPathNameA(asciiPath.c_str(), NULL, 0);
                if (pathLengthNeeded != 0)
                {
                    // convert it back to a long path if possible.
                    // For goodness sake, can't we use Unicode throughout?
                    char* longPath = new char[pathLengthNeeded];
                    DWORD newPathNameLength = ::GetLongPathNameA(asciiPath.c_str(), longPath, pathLengthNeeded);
                    if (newPathNameLength != 0)
                    {
                        std::string longPathString(longPath, newPathNameLength);
                        asciiPath = longPathString;
                    }
                    delete [] longPath;
                }
                SetFullPathName(asciiPath);
            }
            delete [] shortPath;
        }

    Read the article

  • Sort command not working as expected

    - by user964689
    If anybody can help me write a loop to iterate over files in a folder, it would save me a huge amount of time. I think it must be quite a simple solution, but I currently don't know how to nest a loop within a loop. So far I have this script:

        cd /folderlocation/
        for i in `</textfile_containing_lines_to_iterate_through`
        do
            #size=`echo $i | perl -nE '/:([\d-]+)/ && say abs(eval $1)'`
            #echo "$size"
            zcat dataset | head -n 18 > temp"$i".vcf
            tabix dataset $i >> temp"$i".vcf
            vcftools --window-pi 1000000 --vcf temp10individuals"$i".vcf >> run_summary.txt
            cat out.windowed.pi >> outputfile_2
            #rm temp*
        done
        grep -v "PI" outputfile_2 > outputfile
        rm outputfile_2

    I need to expand this so that the script runs multiple times, over all of the 'textfiles_containing_lines_to_iterate_through'. Currently I change the name of the textfile manually each time and re-run the script. So I need a loop that does this for each file in a folder, and that also uses the name of the file as part of the output file name, so that I can match each output file to its input file. Any help would be really useful and greatly appreciated! Many thanks in advance.
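
    A minimal sketch of the outer loop, keeping the inner per-line work as in the original; the listfiles/ folder and the .txt suffix are assumptions:

        cd /folderlocation/
        for listfile in listfiles/*.txt; do
            name=$(basename "$listfile" .txt)   # labels this run's output files
            while read -r i; do
                zcat dataset | head -n 18 > temp"$i".vcf
                tabix dataset "$i" >> temp"$i".vcf
                vcftools --window-pi 1000000 --vcf temp"$i".vcf >> run_summary_"$name".txt
                cat out.windowed.pi >> outputfile_2_"$name"
            done < "$listfile"
            grep -v "PI" outputfile_2_"$name" > outputfile_"$name"
            rm outputfile_2_"$name"
        done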

    Read the article

  • iPhone Simulator - Restores Plist - Weird Issue

    - by David Schiefer
    Hi all, today I've come across a rather weird issue. After adding code to validate a key (added below), the Simulator refuses to let go of the old plist file. I deleted the simulator folder in the Application Support folder, then deleted the build directory, restarted Xcode, and built & ran my app... still the same issue: the old plist is still there and 100% identical. I then changed the identifier and the snippet's validation keys; the plist, however, stayed the same. Basically, no matter what I do, it won't go. The same thing happens on the iPhone itself. I have checked through the code; I don't create the key anywhere, but it still returns YES for it at every restart. Here's the code I added:

        + (void)initialize {
            //////////////////////////// SPECIFYING THE PREDATA ///////////////////
            NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
            NSDictionary *appDefaults = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:NO]
                                                                    forKey:@"protect"];
            [defaults registerDefaults:appDefaults];
        }

        if ([[NSUserDefaults standardUserDefaults] boolForKey:@"protect"] == NO) {
            [navigationController pushViewController:help animated:YES];
            [help.navigationItem hidesBackButton];
        } else {
            [window addSubview:passcode.view];
            [self performSelector:@selector(responder) withObject:nil afterDelay:1];
        }

    As a result, it always goes for the else branch, which, for some reason, doesn't get executed either. I assumed an error, but the log is empty and there's no crash.

    Read the article

  • Javascript: How to test JSONP on localhost

    - by hqt
    Here is the way I test JSONP: I run XAMPP, and in the htdocs folder I create a javascript folder. I create a json1.json file containing some data to process. After that, I run this HTML file locally, and it should call a function on the "other machine" (in this case, localhost). Here is my code:

        <head>
          <script>
            function updateSales(sales) {
              alert('test jsonp');
              // real code here
            }
          </script>
        </head>
        <body>
          <h1>JSONP Example</h1>
          <script src="http://localhost:85/javascript/json1.json?callback=updateSales"></script>
        </body>

    But when I run it, nothing happens. If I point it at a real JSONP source on the internet, it works. That is, changing the line:

        <script src="http://localhost:85/javascript/json1.json?callback=updateSales"></script>

    to the line:

        <script src="http://gumball.wickedlysmart.com/?callback=updateSales"></script>

    makes it work smoothly. So, I don't know how to test JSONP using localhost; please help me. Thanks :)
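
    For what it's worth, a static .json file never wraps its contents in the callback; JSONP relies on the server doing that wrapping. A minimal sketch of a wrapping endpoint under XAMPP (the file name json1.php is an assumption):

        <?php
        // json1.php - wrap the static JSON in whatever callback the <script> tag asked for
        $callback = isset($_GET['callback']) ? $_GET['callback'] : 'updateSales';
        $json = file_get_contents('json1.json');   // the existing data file
        header('Content-Type: application/javascript');
        echo $callback . '(' . $json . ');';

    The script tag would then point at http://localhost:85/javascript/json1.php?callback=updateSales instead of the .json file.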

    Read the article

  • Two Applications using the same index file with Hibernate Search

    - by Dominik Obermaier
    Hi, I want to know if it is possible to use the same index file for an entity in two applications. Let me be more specific: we have an online application with a frontend for the users and an application for the backend tasks (an administrator interface). Both are running on the same JBoss AS, and both applications use the same database, so they use the same entities. Of course, the package names of the entities are not the same in both applications. So this is our use case: a user should be able to search via the frontend, but is only allowed to see results tagged "visible". This tagging happens in our admin interface, so the index for the frontend should be updated every time an entity is tagged as "visible" in the backend. Of course, both applications have the same index root folder. In my index folder there are 2 index files:

        de.x.x.admin.model.Product
        de.x.x.frondend.model.Product

    How do I "merge" these via the Hibernate Search configuration? I just did not get it from the documentation... Thanks for any help!
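
    One knob that may help here, hedged (and subject to Lucene locking concerns when two JVMs write the same directory): Hibernate Search lets you override the index name per entity, so both applications can point at one index instead of the two package-qualified defaults. A sketch, with assumed paths:

        // hibernate/persistence configuration, identical in BOTH applications:
        //   hibernate.search.default.directory_provider = filesystem
        //   hibernate.search.default.indexBase = /var/lucene/indexes
        @Entity
        @Indexed(index = "Product")   // overrides the package-qualified default name
        public class Product { /* ... */ }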

    Read the article

  • ASP.NET - Accessing copied content

    - by James Kolpack
    I have a class library project which contains some content files configured with the "Copy if newer" copy build action. This results in the files being copied to a folder under ...\bin\ for every project in the solution. In this same solution, I've got an ASP.NET web project (which is MVC, by the way). In the library, a static constructor loads the files into data structures accessible by the web project. Previously I included the content as an embedded resource; I now need to be able to replace the files without recompiling. I want to access the data in three different contexts:

        - Unit testing the library assembly
        - Debugging the web application
        - Hosting the site in IIS

    For unit testing, Environment.CurrentDirectory points to a path containing the copied content. When debugging, however, it points to C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE. I've also looked at Assembly.GetExecutingAssembly().Location, which points to C:\Windows\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\c44f9da4\9238ccc\assembly\dl3\eb4c23b4\9bd39460_f7d4ca01\. What I need is the physical location of the web root's \bin folder, but since I'm in a static constructor in the library project, I don't have access to a Request.PhysicalApplicationPath. Is there some other environment variable or structure where I can always find my "Copy if newer" files?
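
    A hedged sketch of one approach that works in both hosted and non-hosted contexts: AppDomain.CurrentDomain.BaseDirectory is the application root, and RelativeSearchPath is non-null (typically "bin") only under ASP.NET, so combining them resolves to the assembly probing folder either way:

        using System;
        using System.IO;

        static class ContentLocator
        {
            // Resolve the folder the runtime probes for assemblies:
            //  - under ASP.NET: <webroot>\bin  (BaseDirectory + RelativeSearchPath)
            //  - in unit tests / console runs: the build output folder itself
            public static string GetContentFolder()
            {
                string baseDir = AppDomain.CurrentDomain.BaseDirectory;
                string searchPath = AppDomain.CurrentDomain.RelativeSearchPath; // null outside ASP.NET
                return string.IsNullOrEmpty(searchPath) ? baseDir : Path.Combine(baseDir, searchPath);
            }
        }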

    Read the article

  • How can I see the structure of a webpage inline?

    - by Coldblackice
    How can I see a webpage's structure "inline" with the visual representation of the actual page, all at once? I'm trying to understand HTML layout better, but it's hard to get a feel for it, even with the source open on a separate monitor, because there's just so much expansive and miscellaneous code. I suppose I could sift through it, clean it up, and set up some kind of custom collapsible tree system, but that would take too long for the number of pages I'd like to get a quick view of. For reference, I'm using Firefox for my internet browsing.

    Read the article

  • Is there a best practice for concatenating MP3 Files, adjusting sample rates to match, while preserving original files?

    - by Scott
    Hello Overflow community! Does anyone know if there is a "best practice" for concatenating MP3 files to create new files, while preserving the original files? I am working on a CentOS Linux machine, on the command line, and I will eventually call the command line from a PHP script. I have been doing research and have come up with a process that I think could work. It combines general advice from different forums, blogs, and sources like this one. So here I go:

        1. Create a temporary folder.
        2. Loop through the files to create a new, converted copy of each file in a "raw" format (which one, I don't know; I didn't know "raw" files existed until recently, so I could use some suggestions on this).
        3. Store the paths to the temporary files in the temporary folder, then loop through the files to concatenate them and put the new merged file in the final "processed" directory.
        4. Delete the contents of the temporary folder, with the temporary raw files inside.
        5. Convert the final file from "raw" to mp3 and enjoy the finished result.

    I'm thinking this course of action might be best because I can't necessarily control the quality of the original "source" mp3s. The only other option I could think of would be a script that performs a similar process on files as they are added to the system, leaving only files in the "proper" format and removing the original "erroneous" one. Hopefully you can see that I have put some thought into this and that I'm trying to leverage the collective knowledge of this community to choose the best direction. Perhaps there is a better path I could take? By concatenate, I mean to join files together in sequence, creating a new audio file from the concatenated files.
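
    A minimal sketch of the temp-folder pipeline above, using sox for the decode/concatenate steps and lame for the final encode; both tools are assumptions about what is installed, and sox needs MP3 support built in (otherwise the decode step can be swapped for mpg123 or ffmpeg):

        #!/bin/sh
        tmp=$(mktemp -d)                      # 1. temporary folder
        n=0
        for f in input1.mp3 input2.mp3; do    # assumed source files, left untouched
            n=$((n + 1))
            # 2. decode each MP3 to WAV at one common rate (44.1 kHz stereo)
            sox "$f" -r 44100 -c 2 "$tmp/part$n.wav"
        done
        # 3. concatenate the intermediate files in sequence
        sox "$tmp"/part*.wav "$tmp/merged.wav"
        # 5. encode the merged file back to MP3 in the processed directory
        lame "$tmp/merged.wav" processed/merged.mp3
        rm -rf "$tmp"                         # 4. clean up the temporaries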

    Read the article

  • Protected flash video (requires HAL) on Gentoo

    - by Mala
    I am unable to play "protected" flash video, such as Amazon Prime Instant Video. From what I've read and uncovered, this seems to be due to HAL not being installed on my computer. Confirmation that it is required for protected video can be seen towards the beginning of http://helpx.adobe.com/x-productkb/multi/flash-player-11-problems-playing.html. However, hal is no longer in the Gentoo portage tree, and in any case has been deprecated and replaced by udev. How can I go about getting Amazon Prime Instant Video to work again? I was considering grabbing the source from http://www.freedesktop.org/wiki/Software/hal, but the links there won't load, and trying to install it from old ebuilds or from overlays which claim to still support it (e.g. kde-sunset) results in a compilation error:

        In file included from addon-generic-backlight.c:38:0:
        /usr/include/glib-2.0/glib/gmain.h:21:2: error: #error "Only <glib.h> can be included directly."

    Has anyone else solved this issue?

    Read the article

  • installing packages without sudo access

    - by whitman
    Is there an easy way to install packages with a large dependency tree when you don't have superuser access? For instance, say I wanted to install Firefox. Firefox has a ton of dependencies, each of which has its own dependencies, etc. Installing these the "./configure; make; make install" way would take forever. Is there an option I can give apt-get to make it install into a personal directory? Or is there a way to hack it to do all the heavy lifting for me?
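
    A hedged sketch of one common workaround: have apt fetch the .deb files (downloading needs no root) and unpack them into a personal prefix with dpkg -x; each dependency still has to be fetched and unpacked the same way (apt-cache depends lists them), and the environment then points at the private tree:

        # Fetch without installing (downloads into the current directory)
        apt-get download firefox
        # Unpack the package tree under $HOME instead of /
        dpkg -x firefox_*.deb "$HOME/local"
        # Point the environment at the private prefix
        export PATH="$HOME/local/usr/bin:$PATH"
        export LD_LIBRARY_PATH="$HOME/local/usr/lib:$LD_LIBRARY_PATH"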

    Read the article

  • Applications not installing due to unmet dependencies

    - by Vineet Sharma
    I was running Apache on Linode. I recently switched to lighttpd and removed Apache; now, whenever I try to install any application, I get the following error:

        ivineet:~# apt-get install subversion
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        You might want to run `apt-get -f install' to correct these:
        The following packages have unmet dependencies:
          libapache2-mod-php5: Depends: apache2-mpm-prefork (> 2.0.52) but it is not going to be installed or
                               apache2-mpm-itk but it is not going to be installed
          subversion: Depends: libsvn1 (= 1.6.12dfsg-6) but it is not going to be installed
        E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).
        ivineet:~#

    Read the article

  • File.Move, why do I get a FileNotFoundException? The file exists...

    - by acidzombie24
    It's extremely weird, since the program is iterating over the folder! outfolder and infolder are both on H:/, my external HD, using Windows 7. The idea is to move all folders that only contain files with the extension db or svn-base. When I try to move a folder I get an exception: VS2010 tells me it can't find the folder specified in dir. This code is iterating through dir, so how can it not find it? This is weird.

        string[] theExt = new string[] { "db", "svn-base" };
        foreach (var dir in Directory.GetDirectories(infolder))
        {
            bool hit = false;
            if (Directory.GetDirectories(dir).Count() > 0)
                continue;
            foreach (var f in Directory.GetFiles(dir))
            {
                var ext = Path.GetExtension(f).Substring(1);
                if (theExt.Contains(ext) == false)
                {
                    hit = true;
                    break;
                }
            }
            if (!hit)
            {
                var dst = outfolder + "\\" + Path.GetFileName(dir);
                File.Move(dir, outfolder); // FileNotFoundException: Could not find file dir.
            }
        }
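
    For what it's worth, a hedged sketch of the move written with Directory.Move, the BCL call for moving folders (File.Move expects a file path, and the snippet above also passes outfolder where the computed dst was presumably intended):

        // Move the folder itself; the destination must include the new folder name.
        var dst = Path.Combine(outfolder, Path.GetFileName(dir));
        Directory.Move(dir, dst);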

    Read the article

  • How to create copy items from property values?

    - by Nam Gi VU
    Let's say I have a list of sub-paths such as:

        <PropertyGroup>
          <subPaths>$(path1)\**\*;$(path2)\**\*;$(path3)\file3.txt;</subPaths>
        </PropertyGroup>

    I want to copy these files from folder A to folder B (we already have all the subfolders/files in A). What I tried was:

        <Target Name="Replace" DependsOnTargets="Replace_Init; Replace_Copy1Path">
        </Target>

        <Target Name="Replace_Init">
          <PropertyGroup>
            <subPaths>$(path1)\**\*;$(path2)\**\*;$(path3)\file3.txt;</subPaths>
          </PropertyGroup>
          <ItemGroup>
            <subPathItems Include="$(subPathFiles.Split(';'))" />
          </ItemGroup>
        </Target>

        <Target Name="Replace_Copy1Path" Outputs="%(subPathItems.Identity)">
          <PropertyGroup>
            <src>$(folderA)\%(subPathItems.Identity)</src>
            <dest>$(folderB)\%(subPathItems.Identity)</dest>
          </PropertyGroup>
          <Copy SourceFiles="$(src)" DestinationFiles="$(dest)" />
        </Target>

    But the Copy task didn't work; it doesn't expand the wildcards into files. What did I do wrong? Please help!
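
    A hedged sketch of one simpler arrangement: aside from the $(subPathFiles) vs $(subPaths) name mismatch in the snippet above, MSBuild only expands wildcards when they appear in an item Include, so feeding the property straight into an ItemGroup (semicolons split into items automatically) and letting Copy compute relative destinations may do the whole job. This assumes the entries in $(subPaths) are already rooted at folder A (e.g. $(folderA)\sub1\**\*):

        <Target Name="ReplaceCopy">
          <ItemGroup>
            <!-- Wildcards expand here; the semicolon-separated property splits into items. -->
            <subPathItems Include="$(subPaths)" />
          </ItemGroup>
          <Copy SourceFiles="@(subPathItems)"
                DestinationFiles="@(subPathItems->'$(folderB)\%(RecursiveDir)%(Filename)%(Extension)')" />
        </Target>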

    Read the article

  • PHP script unable to email on OpenBSD Apache

    - by MattC
    I have a webserver running OpenBSD 4.7 and PHP 5.2.12 from the ports tree. There is a small contact page that is supposed to send an email to a specific address. When I fill in the form using a web browser, it sends an AJAX request to the PHP page, which claims it worked, but there is no email, and the maillog is empty as well. I created a small PHP script that replicates this functionality, and when I run it by hand using "php -f", it sends an email without a problem. I think this has to do with Apache being chrooted, but I can't seem to get it to work. Furthermore, I can't get PHP to log: I told it to log to /var/www/logs/php_errors.log and restarted, but can't get it to send anything to the file. Does anyone have any tips for debugging this sort of thing on OpenBSD?
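
    Two chroot details worth checking, hedged as guesses: PHP's mail() shells out to a sendmail binary that usually isn't present inside OpenBSD's chrooted Apache (the femail-chroot package exists for exactly this case), and php.ini paths are resolved inside the chroot, so the log path has to be written relative to /var/www:

        # Install a tiny sendmail replacement built for the chroot
        pkg_add femail-chroot

        # php.ini (paths are seen from inside the /var/www chroot):
        #   sendmail_path = /bin/femail -t      ; check where the package actually installs it
        #   log_errors = On
        #   error_log = /logs/php_errors.log    ; i.e. /var/www/logs/php_errors.log outside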

    Read the article
