Search Results

Search found 20029 results on 802 pages for 'directory permissions'.

Page 200/802

  • How to install PHP-FPM and PHP on Ubuntu?

    - by Sanoj
    I have problems with installing PHP and in Ubuntu. I followed the instructions on the PHP-FPM site, PHP FastCGI Process Manager but when doing ../configure && make to compile PHP I got a lot of not found messages (listed below), and I don't know how to fix them. I tried both the Integrated compilation and Separate compilation but both compilations ends up with the same messages. Is there a solution or workaround? An alternativ way to install PHP with PHP-FPM? ../configure: 11986: ac_fn_c_check_func: not found ../configure: 11997: ac_fn_c_check_func: not found ../configure: 12147: 5: Bad file descriptor ../configure: 12147: :: checking for socket in -lsocket: not found ../configure: 12147: 6: Bad file descriptor ../configure: 12147: checking for socket in -lsocket... : not found cat: confdefs.h: No such file or directory ../configure: 12147: ac_fn_c_try_link: not found ../configure: 12147: 5: Bad file descriptor ../configure: 12147: :: result: no: not found ../configure: 12147: 6: Bad file descriptor ../configure: 12147: no: not found ../configure: 12147: 5: Bad file descriptor ../configure: 12147: :: checking for __socket in -lsocket: not found ../configure: 12147: 6: Bad file descriptor ../configure: 12147: checking for __socket in -lsocket... : not found cat: confdefs.h: No such file or directory ../configure: 12147: ac_fn_c_try_link: not found ../configure: 12147: 5: Bad file descriptor ../configure: 12147: :: result: no: not found ../configure: 12147: 6: Bad file descriptor ../configure: 12147: no: not found ../configure: 12154: ac_fn_c_check_func: not found ../configure: 12165: ac_fn_c_check_func: not found ../configure: 12315: 5: Bad file descriptor ../configure: 12315: :: checking for socketpair in -lsocket: not found ../configure: 12315: 6: Bad file descriptor ../configure: 12315: checking for socketpair in -lsocket... : not found cat: confdefs.h: No such file or directory ../configure: 12315: ac_fn_c_try_link: not found ../configure: 12315: 5: Bad file descriptor ../configure: 12315: :: result: no: not found ../configure: 12315: 6: Bad file descriptor ../configure: 12315: no: not found ../configure: 12315: 5: Bad file descriptor ../configure: 12315: :: checking for __socketpair in -lsocket: not found ../configure: 12315: 6: Bad file descriptor ../configure: 12315: checking for __socketpair in -lsocket... : not found cat: confdefs.h: No such file or directory ../configure: 12315: ac_fn_c_try_link: not found ../configure: 12315: 5: Bad file descriptor ../configure: 12315: :: result: no: not found ../configure: 12315: 6: Bad file descriptor ../configure: 12315: no: not found ../configure: 12322: ac_fn_c_check_func: not found ../configure: 12333: ac_fn_c_check_func: not found ../configure: 12483: 5: Bad file descriptor ../configure: 12483: :: checking for htonl in -lsocket: not found ../configure: 12483: 6: Bad file descriptor ../configure: 12483: checking for htonl in -lsocket... : not found cat: confdefs.h: No such file or directory ../configure: 12483: ac_fn_c_try_link: not found ../configure: 12483: 5: Bad file descriptor ../configure: 12483: :: result: no: not found ../configure: 12483: 6: Bad file descriptor ../configure: 12483: no: not found ../configure: 12483: 5: Bad file descriptor ../configure: 12483: :: checking for __htonl in -lsocket: not found ../configure: 12483: 6: Bad file descriptor ../configure: 12483: checking for __htonl in -lsocket... 
: not found cat: confdefs.h: No such file or directory ../configure: 12483: ac_fn_c_try_link: not found ../configure: 12483: 5: Bad file descriptor ../configure: 12483: :: result: no: not found ../configure: 12483: 6: Bad file descriptor ../configure: 12483: no: not found ../configure: 12490: ac_fn_c_check_func: not found ../configure: 12501: ac_fn_c_check_func: not found ../configure: 12651: 5: Bad file descriptor ../configure: 12651: :: checking for gethostname in -lnsl: not found ../configure: 12651: 6: Bad file descriptor ../configure: 12651: checking for gethostname in -lnsl... : not found cat: confdefs.h: No such file or directory ../configure: 12651: ac_fn_c_try_link: not found ../configure: 12651: 5: Bad file descriptor ../configure: 12651: :: result: no: not found ../configure: 12651: 6: Bad file descriptor ../configure: 12651: no: not found ../configure: 12651: 5: Bad file descriptor ../configure: 12651: :: checking for __gethostname in -lnsl: not found ../configure: 12651: 6: Bad file descriptor ../configure: 12651: checking for __gethostname in -lnsl... : not found cat: confdefs.h: No such file or directory ../configure: 12651: ac_fn_c_try_link: not found ../configure: 12651: 5: Bad file descriptor ../configure: 12651: :: result: no: not found ../configure: 12651: 6: Bad file descriptor ../configure: 12651: no: not found ../configure: 12658: ac_fn_c_check_func: not found ../configure: 12669: ac_fn_c_check_func: not found ../configure: 12819: 5: Bad file descriptor ../configure: 12819: :: checking for gethostbyaddr in -lnsl: not found ../configure: 12819: 6: Bad file descriptor ../configure: 12819: checking for gethostbyaddr in -lnsl... : not found cat: confdefs.h: No such file or directory ../configure: 12819: ac_fn_c_try_link: not found ../configure: 12819: 5: Bad file descriptor ../configure: 12819: :: result: no: not found ../configure: 12819: 6: Bad file descriptor ../configure: 12819: no: not found ../configure: 12819: 5: Bad file descriptor ../configure: 12819: :: checking for __gethostbyaddr in -lnsl: not found ../configure: 12819: 6: Bad file descriptor ../configure: 12819: checking for __gethostbyaddr in -lnsl... : not found cat: confdefs.h: No such file or directory ../configure: 12819: ac_fn_c_try_link: not found ../configure: 12819: 5: Bad file descriptor ../configure: 12819: :: result: no: not found ../configure: 12819: 6: Bad file descriptor ../configure: 12819: no: not found ../configure: 12826: ac_fn_c_check_func: not found ../configure: 12837: ac_fn_c_check_func: not found ../configure: 12987: 5: Bad file descriptor ../configure: 12987: :: checking for yp_get_default_domain in -lnsl: not found ../configure: 12987: 6: Bad file descriptor ../configure: 12987: checking for yp_get_default_domain in -lnsl... : not found cat: confdefs.h: No such file or directory ../configure: 12987: ac_fn_c_try_link: not found ../configure: 12987: 5: Bad file descriptor ../configure: 12987: :: result: no: not found ../configure: 12987: 6: Bad file descriptor ../configure: 12987: no: not found ../configure: 12987: 5: Bad file descriptor ../configure: 12987: :: checking for __yp_get_default_domain in -lnsl: not found ../configure: 12987: 6: Bad file descriptor ../configure: 12987: checking for __yp_get_default_domain in -lnsl... 
: not found cat: confdefs.h: No such file or directory ../configure: 12987: ac_fn_c_try_link: not found ../configure: 12987: 5: Bad file descriptor ../configure: 12987: :: result: no: not found ../configure: 12987: 6: Bad file descriptor ../configure: 12987: no: not found ../configure: 12995: ac_fn_c_check_func: not found ../configure: 13006: ac_fn_c_check_func: not found ../configure: 13156: 5: Bad file descriptor ../configure: 13156: :: checking for dlopen in -ldl: not found ../configure: 13156: 6: Bad file descriptor ../configure: 13156: checking for dlopen in -ldl... : not found cat: confdefs.h: No such file or directory ../configure: 13156: ac_fn_c_try_link: not found ../configure: 13156: 5: Bad file descriptor ../configure: 13156: :: result: no: not found ../configure: 13156: 6: Bad file descriptor ../configure: 13156: no: not found ../configure: 13156: 5: Bad file descriptor ../configure: 13156: :: checking for __dlopen in -ldl: not found ../configure: 13156: 6: Bad file descriptor ../configure: 13156: checking for __dlopen in -ldl... : not found cat: confdefs.h: No such file or directory ../configure: 13156: ac_fn_c_try_link: not found ../configure: 13156: 5: Bad file descriptor ../configure: 13156: :: result: no: not found ../configure: 13156: 6: Bad file descriptor ../configure: 13156: no: not found ../configure: 13164: 5: Bad file descriptor ../configure: 13164: :: checking for sin in -lm: not found ../configure: 13164: 6: Bad file descriptor ../configure: 13164: checking for sin in -lm... : not found cat: confdefs.h: No such file or directory ../configure: 13196: ac_fn_c_try_link: not found ../configure: 13198: 5: Bad file descriptor ../configure: 13198: :: result: no: not found ../configure: 13198: 6: Bad file descriptor ../configure: 13198: no: not found ../configure: 13214: ac_fn_c_check_func: not found ../configure: 13225: ac_fn_c_check_func: not found ../configure: 13510: 5: Bad file descriptor ../configure: 13510: :: checking for inet_aton in -lresolv: not found ../configure: 13510: 6: Bad file descriptor ../configure: 13510: checking for inet_aton in -lresolv... : not found cat: confdefs.h: No such file or directory ../configure: 13510: ac_fn_c_try_link: not found ../configure: 13510: 5: Bad file descriptor ../configure: 13510: :: result: no: not found ../configure: 13510: 6: Bad file descriptor ../configure: 13510: no: not found ../configure: 13510: 5: Bad file descriptor ../configure: 13510: :: checking for __inet_aton in -lresolv: not found ../configure: 13510: 6: Bad file descriptor ../configure: 13510: checking for __inet_aton in -lresolv... : not found cat: confdefs.h: No such file or directory ../configure: 13510: ac_fn_c_try_link: not found ../configure: 13510: 5: Bad file descriptor ../configure: 13510: :: result: no: not found ../configure: 13510: 6: Bad file descriptor ../configure: 13510: no: not found ../configure: 13510: 5: Bad file descriptor ../configure: 13510: :: checking for inet_aton in -lbind: not found ../configure: 13510: 6: Bad file descriptor ../configure: 13510: checking for inet_aton in -lbind... 
: not found cat: confdefs.h: No such file or directory ../configure: 13510: ac_fn_c_try_link: not found ../configure: 13510: 5: Bad file descriptor ../configure: 13510: :: result: no: not found ../configure: 13510: 6: Bad file descriptor ../configure: 13510: no: not found ../configure: 13510: 5: Bad file descriptor ../configure: 13510: :: checking for __inet_aton in -lbind: not found ../configure: 13510: 6: Bad file descriptor ../configure: 13510: checking for __inet_aton in -lbind... : not found cat: confdefs.h: No such file or directory ../configure: 13510: ac_fn_c_try_link: not found ../configure: 13510: 5: Bad file descriptor ../configure: 13510: :: result: no: not found ../configure: 13510: 6: Bad file descriptor ../configure: 13510: no: not found ../configure: 13516: 5: Bad file descriptor ../configure: 13516: :: checking for ANSI C header files: not found ../configure: 13516: 6: Bad file descriptor ../configure: 13516: checking for ANSI C header files... : not found cat: confdefs.h: No such file or directory ../configure: 13615: ac_fn_c_try_compile: not found ../configure: 13617: 5: Bad file descriptor ../configure: 13617: :: result: no: not found ../configure: 13617: 6: Bad file descriptor ../configure: 13617: no: not found ../configure: 13665: ac_cv_header_dirent_dirent.h: not found ../configure: 13665: 5: Bad file descriptor ../configure: 13665: :: checking for dirent.h that defines DIR: not found ../configure: 13665: 6: Bad file descriptor ../configure: 13665: checking for dirent.h that defines DIR... : not found eval: 1: Bad substitution
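
    A hedged note (not part of the original question): on newer Ubuntu releases PHP-FPM is packaged, which sidesteps compiling PHP entirely; if compiling is still required, the usual prerequisites are the toolchain and development headers. The package names and configure flags below are assumptions based on stock Ubuntu repositories and the PHP source tree.

        # Option 1: use the packaged PHP-FPM (Ubuntu 12.04 and later)
        sudo apt-get update
        sudo apt-get install php5-fpm php5-cli

        # Option 2: if building from source, install the toolchain first,
        # regenerate configure from the source root, then build with FPM enabled
        sudo apt-get install build-essential autoconf libxml2-dev libssl-dev
        ./buildconf --force
        ./configure --enable-fpm --with-openssl
        make && sudo make install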

    Read the article

  • Apache: Setting DocumentRoot to cgi directory results in downloading file instead of executing it.

    - by fastmonkeywheels
    I have a C-compiled CGI application that I need to execute from the DocumentRoot of my Apache server. The CGI file is called index.cgi and is located at /usr/lib/cgi-bin/index.cgi. I have the following Directory definition:

        <Directory "/usr/lib/cgi-bin/">
            Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
            AllowOverride None
            Order allow,deny
            Allow from all
            DirectoryIndex index.cgi
        </Directory>

    I have the following VirtualHost setting:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /usr/lib/cgi-bin
            # ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            ErrorLog /var/log/apache2/error.log
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
        </VirtualHost>

    If I go to 127.0.0.1 or 127.0.0.1/index.cgi I get prompted to download the index.cgi file; however, if I enable the ScriptAlias in the vhost configuration block and go to 127.0.0.1/cgi-bin/index.cgi I see the output of my CGI application. I had originally solved this problem with mod_rewrite, but while that worked on my test system, the target (embedded) system doesn't have that module available, so I'm looking for another route (again).
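
    A hedged sketch of one way this is commonly handled (my suggestion, not from the original post): associate the .cgi extension with the CGI handler inside the DocumentRoot directory, so Apache executes the file instead of serving it. This assumes mod_mime and mod_cgi/mod_cgid are enabled, and uses the Apache 2.2 directives already shown in the question.

        # Hypothetical fix: add this line inside the existing
        # <Directory "/usr/lib/cgi-bin/"> block of the vhost:
        #
        #     AddHandler cgi-script .cgi
        #
        # then verify the config and reload:
        sudo apache2ctl configtest
        sudo service apache2 reload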

    Read the article

  • Coda 2 and SCP uploading files with the wrong permission

    - by Tom Black
    Currently I have a basic Ubuntu server running a website. The website is for a few students learning HTML/PHP, and each student has their own account with a symbolic link to the shared website folder. Since the students are working on the website together, each user needs to be able to modify all the files (index.html, for example). So I created a Webdev group containing all of the students, with a default umask of 0002 set in their .bashrc (this allows newly created files to be 774). The shared folder is owned by the group Webdev, with chmod g+s set so that new files/folders also belong to the group Webdev.

    The problem is that the students are using an IDE (Coda 2), and when they create a new file or folder using the IDE the file has permissions of 644 on the server (not group writable). However, when I make a new file through connecting with Cyberduck (an SFTP client) the file permissions are 664 (as they should be). So I don't understand why Coda would be any different. However, after some trial and error I believe that Coda is first creating the file on local disk and then uploading that file to the server. On a Mac, a newly created file is 644 by default. When the client uploads a file that's already 644 it stays 644 on the server side (umask is kind of useless in this situation). I've also tried creating ACL permissions for that folder, but an uploaded file from my Mac via SCP doesn't get the default ACL permissions.

    In Coda there is an option to change file permissions on a transfer. However, this option seems to apply a chmod to all files being uploaded or saved. When one of the students is modifying a file created by someone else and they try to upload or save it, Coda tries to also do a chmod but fails because that user isn't the owner of the file.

    My current solution is using bindfs... I mount the shared web folder and bindfs sets permissions and group ownership of newly created files. However, bindfs seems to be a bit slow and I'm sure there is a better solution. Even if the students ditched Coda 2 and used Mac vim with scp, the newly created files on the server would behave the same (644), which is the default on the Mac.

    Other options:
    1) I teach the students to use ssh/chmod with their IDE to change their own file permissions when uploading.
    2) I make all the students' Macs have a default umask of 0002, which would upload files with the right permissions.
    3) Write a cron script to fix the file permissions every 5 to 15 minutes (this option I think is the worst if students are working together at the same time).

    Is there any way that I could make all files that are uploaded via SCP have default file permissions of 664, even though the uploaded file has a lower permission? (After hours of searching I don't think this is possible.) I guess a cron script is my best option for novice users. How do web developers work together on larger sites?

    Similar to this: http://serverfault.com/questions/283492/how-to-specify-file-permission-when-putting-a-file-using-openssh-sftp-command
    Also similar: http://serverfault.com/questions/395418/managing-linux-directory-permissions-sftp
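
    A hedged sketch (an assumption on my part, not from the original post): OpenSSH 5.4 and later can force a umask on files created through its built-in SFTP server, which would cover SFTP-based editors and Cyberduck, though not plain scp. The config path assumes stock Ubuntu.

        # switch the sftp subsystem to internal-sftp with a group-writable umask
        sudo sed -i 's|^Subsystem[[:space:]]\+sftp.*|Subsystem sftp internal-sftp -u 0002|' /etc/ssh/sshd_config
        sudo service ssh restart
        # files uploaded over SFTP should now land as 664/775 regardless of client-side modes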

    Read the article

  • All virtualhosts serving Apache default files

    - by tj111
    I'm trying to configure Apache as an in-network webserver, and am using the sites-available/sites-enabled feature as opposed to just static vhost files. I set up a couple of VirtualHosts, all with a unique DocumentRoot; however, requests for all the VirtualHosts just serve up the "It's Working!" default file. I can't for the life of me figure out why it won't serve the content out of the correct directory. Here are the contents of the virtualhost directive files; let me know if I need to post more.

    default (note that Apache renames this to 000-default in sites-enabled, so it's not an ordering issue):

        NameVirtualHost *:80
        ServerName emp
        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName emp
            DocumentRoot /var/www
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    billmed:

        <VirtualHost *:80>
            ServerName billmed.emp
            ServerRoot /home/empression/Projects/billmed/web/httpdocs
            <Directory "/home/empression/Projects/billmed/web/httpdocs">
                Order Allow,Deny
                Allow from All
            </Directory>
        </VirtualHost>

    Note that I have DNS zones for both emp and billmed.emp, as well as entries in /etc/hosts. My ultimate goal is to set up this machine as an in-house webserver with a custom tld (emp), but progress has been pretty slow.
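
    One observation worth flagging (my reading, not confirmed in the post): the billmed vhost has no DocumentRoot at all; it uses ServerRoot, which is a different directive and is not valid inside a VirtualHost, so the vhost falls back to the global /var/www and serves the default page. A hedged corrected sketch, with the site filename assumed:

        # /etc/apache2/sites-available/billmed -- corrected sketch
        #     <VirtualHost *:80>
        #         ServerName billmed.emp
        #         DocumentRoot /home/empression/Projects/billmed/web/httpdocs
        #         <Directory "/home/empression/Projects/billmed/web/httpdocs">
        #             Order Allow,Deny
        #             Allow from All
        #         </Directory>
        #     </VirtualHost>
        sudo apache2ctl configtest && sudo service apache2 reload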

    Read the article

  • how to setup .ssh directory inside an encrypted volume on Mac OSX and still have public key logins?

    - by Vitaly Kushner
    I have my .ssh directory inside an encrypted sparse image, i.e. ~/.ssh is a symlink to /Volumes/VolumeName/.ssh. The problem is that when I try to ssh into that machine using a public key I see the following error message in /var/log/secure.log:

        Authentication refused: bad ownership or modes for directory /Volumes

    Any way to solve this in a clean way?

    Update: The permissions on ~/.ssh and authorized_keys are right:

        > ls -ld ~
        drwxr-xr-x+ 77 vitaly staff 2618 Mar 16 08:22 /Users/vitaly/
        > ls -l ~/.ssh
        lrwxr-xr-x 1 vitaly staff 22 Mar 15 23:48 /Users/vitaly/.ssh@ -> /Volumes/Astrails/.ssh
        > ls -ld /Volumes/Astrails/.ssh
        drwx------ 3 vitaly staff 646 Mar 15 23:46 /Volumes/Astrails/.ssh/
        > ls -ld /Volumes/Astrails/
        drwx--x--x@ 18 vitaly staff 1360 Jan 12 22:05 /Volumes/Astrails//
        > ls -ld /Volumes/
        drwxrwxrwt@ 5 root admin 170 Mar 15 20:38 /Volumes//

    The error message says the problem is with /Volumes, but I don't see the problem. Yes, it is o+w, but it is also +t, which should be OK but apparently isn't. The problem is that I can't change the /Volumes permissions (or rather shouldn't), but I do want public key login to work. First I thought of mounting the image somewhere other than /Volumes, but it is automounted on login by the standard OS X mounting. I asked about it here: How to change disk image's default mount directory on osx. The only answer I got is "you can't" ;) I could hack my way around it by writing some shell script that manually mounts the volume at a non-standard location, but that would be a gross hack. I'm still looking for a cleaner way to do what I need.
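
    Two hedged options on the sshd side (assumptions on my part, not from the original post; the config file path differs between OS X releases, and keeping a copy of the keys outside the image is a judgment call): point sshd at an authorized_keys location that is not under /Volumes, or relax the ownership checks.

        # Option 1: keep authorized_keys outside the encrypted volume so sshd never
        # walks /Volumes (config file may be /etc/sshd_config or /etc/ssh/sshd_config)
        sudo mkdir -p /etc/ssh_authorized_keys
        sudo cp /Volumes/Astrails/.ssh/authorized_keys /etc/ssh_authorized_keys/vitaly
        echo 'AuthorizedKeysFile /etc/ssh_authorized_keys/%u' | sudo tee -a /etc/sshd_config

        # Option 2 (weaker security): disable the ownership/mode checks entirely
        echo 'StrictModes no' | sudo tee -a /etc/sshd_config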

    Read the article

  • How do I change the Dropbox directory on a headless GNU/Linux server?

    - by DrTwox
    I have installed Dropbox 2.0.0 via the command line on my home server (Ubuntu Server 12.04) to use for off-site automated backups, but I can't change the directory that the Dropbox daemon keeps synced. I've tried the following:

    The official docs say to use the desktop application, which is not applicable in my situation. However, I installed the desktop app on my desktop machine and changed the default folder location, but I can't find where this change is stored in the ~/.dropbox/ directory so that I can make the same change on the server.

    This page (and several others) recommends a Python script to do the job. Looking at the script, it opens a SQLite database called ~/.dropbox/dropbox.db, which does not exist on my Dropbox install, leading me to believe the script is out-of-date.

    This forum thread suggests manually inserting the required row in the config.db database, which I did, but it made no difference. I checked the same database file on my desktop machine, and it does not have the dropbox_path key, so I'm presuming the information in that thread is also out of date for version 2.0.

    I have tried to launch the Dropbox GUI configuration wizard over SSH with X11 forwarding, as suggested in one of the answers, but the binary must detect the absence of a local X11 install and it starts a command line daemon instead, which provides no means to change the option I need.

    I am currently using a symlink, as suggested in an answer, but this is a kludge. I would like to know the correct way to make the change. How do I change the Dropbox directory on a headless GNU/Linux server?

    Update: I've ditched Dropbox and started using Copy. Their Linux tools and support are far superior to Dropbox's. I leave this question here in case someone, someday, can answer it.
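
    A heavily hedged workaround (an assumption on my part, not a documented Dropbox option): the daemon derives both its sync folder and its state directory from $HOME, so launching it with HOME pointed elsewhere relocates them; the daemon will ask to be linked again on first start. The paths and username below are invented for illustration.

        # run the daemon with an alternate HOME so ~/Dropbox resolves under /srv/backups
        sudo mkdir -p /srv/backups && sudo chown "$USER" /srv/backups
        HOME=/srv/backups /home/drtwox/.dropbox-dist/dropboxd &    # /home/drtwox is hypothetical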

    Read the article

  • How to delete a residual Ubuntu directory from Windows?

    - by memo1288
    I'm using Windows 7. After installing (and uninstalling) Ubuntu on my laptop, I found that it left a folder called ".Trash-1000" on my H drive. I cannot remove it: if I try to delete it from Explorer, it says:

        The file name you specified is not valid or too long. Specify a different file name.

    If I try to remove it from the command line, this is what happens:

        H:\>rmdir .Trash-1000 /S /Q
        .Trash-1000\files\Screenshot from 2013-09-24 11:57:32.png - The filename, directory name, or volume label syntax is incorrect.
        .Trash-1000\files\Screenshot from 2013-09-24 12:03:45.2.png - The filename, directory name, or volume label syntax is incorrect.
        .Trash-1000\info\Screenshot from 2013-09-24 11:57:32.png.trashinfo - The filename, directory name, or volume label syntax is incorrect.
        .Trash-1000\info\Screenshot from 2013-09-24 12:03:45.2.png.trashinfo - The filename, directory name, or volume label syntax is incorrect.

    The files mentioned there are the contents of that folder. Using quotes around the folder name yields the same result. Trying to delete any of the sub-folders results in the same error, and trying to remove any of the files inside results in "No such file or directory". As I said before, I no longer have Ubuntu installed. How can I remove this folder?
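
    A hedged note (my reading, not from the original post): the screenshot names contain ":" characters, which Linux happily writes but Windows cannot address, which is why both Explorer and rmdir choke. One common way out is to delete the folder from an Ubuntu live USB/CD session; the device name and mount point below are assumptions.

        # from an Ubuntu live session; identify the partition Windows calls H: with lsblk
        sudo mount /dev/sda3 /mnt        # replace sda3 with the correct partition
        sudo rm -rf /mnt/.Trash-1000
        sudo umount /mnt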

    Read the article

  • How to set permissions so two users can work on the same hg repository?

    - by John Mee
    Ubuntu: Jaunty
    Mercurial: 1.3.1
    Access: ssh (users john and bob)
    File permission: -rw-rw---- 1 john john 129276 May 17 13:28 dirstate
    User: bob
    Command: 'hg st'
    Response: abort: Permission denied: /our/respository/.hg/dirstate

    Obviously Mercurial can't let bob see the state because the file it needs to read belongs to me. So I change the permissions to allow bob to read the file and everything is fine, up until I next try to do something, whence the situation is reversed: now he owns the file and I can't read it. So I set up a "committers" group and both john and bob belong to the group, but Mercurial still fiddles with the ownership and permissions whenever one or the other commits. How do we configure it so two different logins in the same group can commit to the same repository over ssh?
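
    A hedged sketch of the usual group-shared setup (not from the original post; the group name comes from the question, the repository path is assumed): make .hg group-owned and setgid, give both users a group-writable umask for ssh-invoked hg, and tell Mercurial to trust files owned by the group.

        cd /our/repository
        sudo chgrp -R committers .hg
        sudo chmod -R g+rwX .hg
        sudo find .hg -type d -exec chmod g+s {} +      # new files inherit the committers group
        printf '[trusted]\ngroups = committers\n' >> .hg/hgrc
        # each committer also needs a group-writable umask for hg invoked over ssh:
        echo 'umask 0002' >> ~/.bashrc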

    Read the article

  • FAT Volume and CE

    - by Kate Moss' Open Space
    Whenever we format a disk volume, it is a good idea to give it a label so it is easier to categorize. To label a volume we can use the LABEL command or the UI, depending on your preference. Windows CE does provide a FAT driver that supports various formats (FAT12, FAT16, FAT32, ExFAT and TFAT - transaction-safe FAT) and many features to scan and even defrag the volume, but not labeling. Whenever you format a volume in CE and then mount it on a PC, the label is always empty! Of course, you can always label the volume on the PC, even if it was formatted in CE. So it looks like CE does not care about the volume label at all: it neither reports the label to the OS nor changes the label on the FAT volume. So how can we set the volume label in CE?

    To answer this question, we need to know how FAT stores the volume label. Here are some on-line resources that are handy for parsing FAT:

        http://en.wikipedia.org/wiki/File_Allocation_Table
        http://www.pjrc.com/tech/8051/ide/fat32.html
        http://www.microsoft.com/whdc/system/platform/firmware/fatgen.mspx

    You can refer to PUBLIC\COMMON\OAK\DRIVERS\FSD\FATUTIL\MAIN\bootsec.h and dosbpb.h, or the above links, for the fields we discuss here. The first sector of a FAT volume (which is not necessarily the first sector of a disk) is a FAT boot sector and BPB (BIOS Parameter Block). At offset 43, bgbsVolumeLabel (or bsVolumeLabel on FAT16) stores the volume label, but note the spec also indicates "FAT file system drivers should make sure that they update this field when the volume label file in the root directory has its name changed or created." So we can't simply update bgbsVolumeLabel; we also need to create a volume label file in the root directory. The volume label file is not a real file but just a file entry in the root directory with zero file length and a very special file attribute, ATTR_VOLUME_ID (defined in public\common\oak\drivers\fsd\fatutil\MAIN\fatutilp.h).

    Locating and accessing the boot sector is quite straightforward: as long as we know the starting sector of the FAT volume, that's it. But where is the root directory? The layout of a typical FAT volume is: boot sector (Volume ID in the figure), followed by reserved sectors (1 on FAT12/16 and 32 on FAT32), then the FAT chain table(s) (there can be 1 or 2), after that the root directory (FAT12/16 only, not shown in the figure), and then the beginning of the files and directories. In FAT12/16 the root directory is placed right after the FAT, so it is not hard to calculate its offset in the volume. But in FAT32 this rule no longer holds: the first cluster of the root directory is determined by BGBPB_RootDirStrtClus (at offset 44 in the boot sector). This field is usually 0x00000002 (that is how CE initializes the root directory after formatting a volume; note we should never assume it is always true), which means the first cluster contains data, but unlike the contiguous root directory of FAT12/16 it behaves just like a regular file and can be fragmented. So to access the root directory of FAT32 we need to hop from one cluster to another by traversing the FAT table.

    Let's trace the code now. Although the source of the FAT driver is not available in the CE Shared Source program, the formatter, Fatutil.dll, is available in public\common\oak\drivers\fsd\fatutil\MAIN\formatdisk.cpp. Be aware that the public code only provides the formatter for FAT12/16/32; for ExFAT it is still not available. FormatVolumeInternal is the main worker function. With the knowledge here, you should be able to trace the code easily.
    But I would like to discuss the following code pieces:

        dwReservedSectors = (fo.dwFatVersion == 32) ? 32 : 1;
        dwRootEntries = (fo.dwFatVersion == 32) ? 0 : fo.dwRootEntries;

    Note that dwReservedSectors is 32 in FAT32 and 1 in FAT12/16. Root entries is another difference mentioned in the previous paragraph: 0 for FAT32 (dynamically allocated) and a fixed size for FAT12/16 (usually 512, defined as DEFAULT_ROOT_ENTRIES in public\common\sdk\inc\fatutil.h). And then here:

        memset(pBootSec->bsVolumeLabel, 0x20, sizeof(pBootSec->bsVolumeLabel));

    It sets the volume label to an empty string. Now let's carry on to the next section - writing the root directory.

        if (fo.dwFatVersion == 32) {
            if (!(fo.dwFlags & FATUTIL_FORMAT_TFAT)) {
                dwRootSectors = dwSectorsPerCluster;
            }
            else {
                DIRENTRY    dirEntry;
                DWORD       offset;
                int         iVolumeNo;
                memset(pbBlock, 0, pdi->di_bytes_per_sect);
                memset(&dirEntry, 0, sizeof(DIRENTRY));
                dirEntry.de_attr = ATTR_VOLUME_ID;
                // the first one is volume label
                memcpy(dirEntry.de_name, "TFAT       ", sizeof (dirEntry.de_name));
                memcpy(pbBlock, &dirEntry, sizeof(dirEntry));
                ...
                // Skip the next step of zeroing out clusters
                dwCurrentSec += dwSectorsPerCluster;
                dwRootSectors = 0;
            }
        }
        // Each new root directory sector needs to be zeroed.
        memset(pbBlock, 0, cbSizeBlk);
        iRootSec=0;
        while ( iRootSec < dwRootSectors) {

    Basically, the code zeroes out each entry in the root directory depending on dwRootSectors. In FAT12/16, dwRootSectors is calculated as the number of sectors needed for the root entries (512 entries in most cases), and in FAT32 it just zeroes out one cluster. Please note that if it is a TFAT volume, it initializes the root directory with special volume label entries for a special purpose. Despite being the unusual initialization path for TFAT, it does provide an example of how to create a volume label entry. With some minor modification, we can assign the volume label in the FAT formatter; also remember to sync the volume label with bsVolumeLabel or bgbsVolumeLabel in the boot sector.

    Read the article

  • Getting an error when using 'make' command (installing aircrack-ng on Ubuntu 12.04)

    - by Mohd Arafat Hossain
    I followed instructions from here http://securit.se/en/2012/03/kompilera-reaver-ubuntu-12-04/. I edited the 'common.mak' file successfully and when I type in make I get this error:

        mohd-arafat-hossain@TUD:~/aircrack-ng-1.1$ make
        make -C src all
        make[1]: Entering directory `/home/mohd-arafat-hossain/aircrack-ng-1.1/src'
        make -C osdep
        make[2]: Entering directory `/home/mohd-arafat-hossain/aircrack-ng-1.1/src/osdep'
        Building for Linux
        make[3]: Entering directory `/home/mohd-arafat-hossain/aircrack-ng-1.1/src/osdep'
        make[3]: `.os.Linux' is up to date.
        make[3]: Leaving directory `/home/mohd-arafat-hossain/aircrack-ng-1.1/src/osdep'
        make[2]: Leaving directory `/home/mohd-arafat-hossain/aircrack-ng-1.1/src/osdep'
        gcc -g -W -Wall -O3 -D_FILE_OFFSET_BITS=64 -D_REVISION=0 -Iinclude -c -o aircrack-ng.o aircrack-ng.c
        In file included from aircrack-ng.c:65:0:
        crypto.h:12:26: fatal error: openssl/hmac.h: No such file or directory
        compilation terminated.
        make[1]: *** [aircrack-ng.o] Error 1
        make[1]: Leaving directory `/home/mohd-arafat-hossain/aircrack-ng-1.1/src'
        make: *** [all] Error 2

    What am I supposed to do now?
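
    A hedged fix (my diagnosis, not from the thread): the missing header openssl/hmac.h ships in Ubuntu's OpenSSL development package, so installing it (plus the usual build prerequisites) normally lets the compile proceed.

        sudo apt-get update
        sudo apt-get install build-essential libssl-dev libpcap-dev
        make clean && make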

    Read the article

  • HTTP Error 403.1 - Permissions are fixed, what else is wrong?

    - by baron
    I have developed an HTTP handler web service and have had it successfully deployed, but through testing on other environments I've run into another problem. This is the error message I receive:

        You have attempted to execute a CGI, ISAPI, or other executable program from a directory that does not allow programs to be executed.
        HTTP Error 403.1 - Forbidden: Execute access is denied.
        Internet Information Services (IIS)

    So it is obvious what needs to be fixed:

    1) Start Internet Information Services (IIS) Manager.
    2) Right-click the Web site that contains the SharePoint Web site that you created, and then click Properties.
    3) Click the Home Directory tab.
    4) Under Application settings, click either Scripts only or Scripts and Executables in the Execute permissions list (as appropriate to your situation). Click OK.
    5) Quit IIS Manager.

    But I'm still getting the same error. So what else could be wrong?

    Read the article

  • Facebook proxy email not arriving -- do I need permissions?

    - by Felix
    I'm building a website that allows users to connect using Facebook Connect. So far I'm able to log the user in and fetch data about them (name, email, pic, etc.). If I fetch the email (using Users.getInfo) I get a proxied email ([email protected]), which is absolutely great. The problem is that the email doesn't work: I've tried sending an email to it and I never received it. There are two reasons I can see that could cause this:

    1) I don't have enough permissions. OK, I can understand that, but if I don't have enough permissions then why are they returning an email at all?
    2) The email has to be sent from the application itself somehow (I've tried sending it from my Gmail account) -- but how would Facebook know that the email is coming from the application?

    So which is it? Or is it something else?

    Read the article

  • Causes of sudden massive filesystem damage? ("root inode is not a directory")

    - by poolie
    I have a laptop running Maverick (very happily until yesterday), with a Patriot Torx SSD; LUKS encryption of the whole partition; one lvm physical volume on top of that; then home and root in ext4 logical volumes on top of that. When I tried to boot it yesterday, it complained that it couldn't mount the root filesystem. Running fsck, basically every inode seems to be wrong. Both home and root filesystems show similar problems. Checking a backup superblock doesn't help.

        e2fsck 1.41.12 (17-May-2010)
        lithe_root was not cleanly unmounted, check forced.
        Resize inode not valid. Recreate? no
        Pass 1: Checking inodes, blocks, and sizes
        Root inode is not a directory. Clear? no
        Root inode has dtime set (probably due to old mke2fs). Fix? no
        Inode 2 is in use, but has dtime set. Fix? no
        Inode 2 has a extra size (4730) which is invalid Fix? no
        Inode 2 has compression flag set on filesystem without compression support. Clear? no
        Inode 2 has INDEX_FL flag set but is not a directory. Clear HTree index? no
        HTREE directory inode 2 has an invalid root node. Clear HTree index? no
        Inode 2, i_size is 9581392125871137995, should be 0. Fix? no
        Inode 2, i_blocks is 40456527802719, should be 0. Fix? no
        Reserved inode 3 (<The ACL index inode>) has invalid mode. Clear? no
        Inode 3 has compression flag set on filesystem without compression support. Clear? no
        Inode 3 has INDEX_FL flag set but is not a directory. Clear HTree index? no
        ....

    Running strings across the filesystems, I can see there are what look like filenames and user data there. I do have sufficiently good backups (touch wood) that it's not worth grovelling around to pull back individual files, though I might save an image of the unencrypted disk before I rebuild, just in case. smartctl doesn't show any errors, neither does the kernel log. Running a write-mode badblocks across the swap lv doesn't find problems either. So the disk may be failing, but not in an obvious way.

    At this point I'm basically, as they say, fscked? Back to reinstalling, perhaps running badblocks over the disk, then restoring from backup? There doesn't even seem to be enough data to file a meaningful bug... I don't recall that this machine crashed last time I used it. At this point I suspect a bug or memory corruption caused it to write garbage across the disks when it was last running, or some kind of subtle failure mode for the SSD. What do you think would have caused this? Is there anything else you'd try?
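
    Before rebuilding, one hedged precaution (not from the original post; the device, volume group and mount point names below are assumptions): take a raw image of the unlocked logical volumes with ddrescue so the damaged filesystems can be examined later, and run a read-only surface scan of the SSD.

        sudo apt-get install gddrescue
        sudo cryptsetup luksOpen /dev/sda2 crypt           # unlock the LUKS container (device assumed)
        sudo vgchange -ay                                  # activate the LVM logical volumes
        sudo ddrescue -r3 /dev/mapper/lithe-root /media/usb/lithe-root.img /media/usb/lithe-root.log
        sudo badblocks -sv /dev/sda                        # read-only scan of the whole SSD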

    Read the article

  • Windows 7 Change internet time settings tells me I have no permissions.

    - by Matthias Vance
    LS,

    While trying to solve my computer clock always running ahead (even while the machine is on, not just on every boot), I apparently broke some security settings. All I did (as far as I can remember) was stop and start the w32time service. Now, whenever I go to the "Internet time" tab and click "Change settings...", Windows tells me I don't have permission to do so.

    Facts:
    - I am a member of the Administrators group.
    - In gpedit.msc, I checked that the Administrators group is allowed to change the system time.

    Kind regards, Matthias Vance

    Read the article

  • What's the best way of handling permissions for apache2's user www-data in /var/www ?

    - by gyaresu
    Has anyone got a nice solution for handling files in /var/www/? We're running name-based virtual hosts and the apache2 user is 'www-data'. We've got two regular users and root. So when messing with files in /var/www, rather than having to run chown -R www-data:www-data all the time, what's a good way of handling this?

    Supplementary question: how hardcore do you then go on permissions? This one has always been a problem in collaborative development environments. Cheers.
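
    A hedged sketch of one common arrangement (the group and user names below are invented for illustration): let a developer group own the tree with the setgid bit so new files stay group-owned, and grant www-data read access via ACLs instead of chowning everything to it.

        sudo groupadd webdev
        sudo usermod -a -G webdev alice && sudo usermod -a -G webdev bob   # hypothetical user names
        sudo chgrp -R webdev /var/www
        sudo chmod -R g+rwX /var/www
        sudo find /var/www -type d -exec chmod g+s {} +     # new files inherit the webdev group
        # give the apache2 user read access without making it the owner:
        sudo setfacl -R -m g:www-data:rX /var/www
        sudo find /var/www -type d -exec setfacl -m d:g:www-data:rX {} +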

    Read the article

  • How do I extract all the files in a VHD to a hard disk including permissions?

    - by Middletone
    I'd like to know what the best way is to make an exact copy of a VHD image and put it onto my hard disk. I've tried xcopy, but there seem to be a number of issues related to permissions when doing this. Ideally I'd like to copy the bits so that they match exactly on the new drive. I encountered this when trying to restore a Vista backup, only to discover the work of the idiots who decided not to let me restore a 400 GB image to a 1 TB drive. I've successfully mounted the drive in Windows 7, which is the environment in which I'm trying to copy these files.

    Read the article

  • What permissions / ownership to set on PHP Sessions Folder when running FastCGI / PHP-FPM (as user "nobody")?

    - by Professor Frink
    I'm having trouble getting a number of scripts running because PHP-FPM can't write to my session folder:

        2009/10/01 23:54:07 [error] 17830#0: *24 FastCGI sent in stderr: "PHP Warning: Unknown: open(/var/lib/php/session/sess_cskfq4godj4ka2a637i5lq41o5, O_RDWR) failed: Permission denied (13) in Unknown on line 0
        PHP Warning: Unknown: Failed to write session data (files). Please verify that the current setting of session.save_path is correct (/var/lib/php/session) in Unknown on line 0" while reading upstream

    Obviously this is a permission issue; my session folder's owner/group is the webserver's user, NGINX. PHP-FPM runs as nobody, though, and hence adding it to the nginx group is not so trivial. A temporary solution is to set the permissions of /var/lib/php/session to 777 - I have a feeling that's not the "best practice" though. What is the best practice when you need to give a daemon write access to a folder, but it is running as nobody?
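
    Two hedged approaches (assumptions about the layout, not from the thread): either run the pool as a named user and chown the session directory to it, or give the nobody pool its own session directory. The pool directives shown (user/group, php_value[...]) are standard PHP-FPM syntax, but the exact file paths vary by distribution.

        # Option 1: give the pool a proper identity (in the pool config, e.g. php-fpm.conf):
        #     user = nginx
        #     group = nginx
        # then:
        #     sudo chown -R nginx:nginx /var/lib/php/session

        # Option 2: keep the pool as nobody and give it a dedicated session directory:
        sudo mkdir -p /var/lib/php/session-fpm
        sudo chown nobody:nobody /var/lib/php/session-fpm
        sudo chmod 770 /var/lib/php/session-fpm
        # and point the pool at it:
        #     php_value[session.save_path] = /var/lib/php/session-fpm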

    Read the article

  • How do I grant permissions to remotely start/stop a service using Powershell?

    - by splattered bits
    We have a PowerShell script that restarts a service on another computer. When we use PowerShell's built-in service control cmdlets, like so:

        $svc = Get-Service -Name MyService -ComputerName myservicehostname
        Stop-Service -InputObject $svc
        Start-Service -InputObject $svc

    we get this error back:

        Stop-Service : Cannot open MyService service on computer 'myservicehostname'.

    However, when we use sc.exe, like so:

        C:\Windows\System32\sc \\myservicehostname stop MyService
        C:\Windows\System32\sc \\myservicehostname start MyService

    the start and stop succeed. The user doing the restarting is not an administrator. We use subinacl to grant the user permissions to start/stop and query the service:

        subinacl.exe /service MyService /GRANT=MyServiceControlUser=STO

    How come PowerShell can't stop my service but sc.exe can?

    Read the article

  • Mercurial changeset hook problem when auto updating. Server permissions maybe??

    - by Gary Willoughby
    I am using Mercurial SCM over a LAN, using a normal shared folder instead of http, and I'm having a problem getting the auto-update hook to run. I have entered the hook as detailed here: http://mercurial.selenic.com/wiki/FAQ#FAQ.2BAC8-CommonProblems.Any_way_to_.27hg_push.27_and_have_an_automatic_.27hg_update.27_on_the_remote_server.3F

    This installs the hook, but when I push something to the remote repo I get an error:

        added 1 changesets with 1 changes to 1 files
        running hook changegroup: hg update >&2
        warning: changegroup hook exited with status -1

    There is a Stack Overflow question similar to this here: http://stackoverflow.com/questions/2885246/mercurial-auto-update-problem but it offers no solutions other than that it may be a permissions error somewhere. Has anyone else had this problem, and can anyone shed any more light on this or give me a heads up on where to start fixing it? Thanks.

    Read the article

  • Debian - Can't stop MySQL; permissions?

    - by anon
    I just tried to upgrade from Debian squeeze to unstable by replacing 'squeeze' with 'unstable' in /etc/apt/sources.list. The upgrade went smoothly except for MySQL, which failed because the package scripts couldn't stop mysql. /etc/init.d/mysql stop simply returns that it failed, but if I try to get the status with /etc/init.d/mysql status it gives me this error:

        me@debian:~$ sudo /etc/init.d/mysql status
        /usr/bin/mysqladmin: connect to server at 'localhost' failed
        error: 'Access denied for user 'debian-sys-maint'@'localhost' (using password: YES)'

    MySQL is running fine, and I checked the permissions for debian-sys-maint in phpMyAdmin: it's allowed to do everything, but only to connect from 'localhost'.
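
    A hedged fix (a common cause, not confirmed for this machine): the init scripts log in as debian-sys-maint with the password stored in /etc/mysql/debian.cnf, and that password can fall out of sync with the grant inside MySQL. Re-granting the account with the password from that file usually restores the stop/status commands; passwords containing shell-special characters would need extra quoting.

        DSM_PW=$(sudo grep -m1 '^password' /etc/mysql/debian.cnf | awk '{print $3}')
        mysql -u root -p -e "GRANT ALL PRIVILEGES ON *.* TO 'debian-sys-maint'@'localhost' IDENTIFIED BY '$DSM_PW' WITH GRANT OPTION; FLUSH PRIVILEGES;"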

    Read the article

  • solutions for a webserver dedicated to manage permissions/ACL and (reverse) proxying API servers?

    - by giohappy
    I'm considering various layouts to expose several HTTP API services (running on their own separate servers) through a frontend server dedicated to managing permissions on behalf of the API services. I've considered various options, from the classical ones like Nginx, Apache, etc. to HAProxy, by way of the various Python webserver solutions like Tornado and Twisted (which give me the opportunity to implement my own ACL system easily). The fundamental requirements are high performance and scalability, and the ability to manage fine-grained ACL rules (similar to the HAProxy ACL system). I would like to know what a suggested approach to set this up would be, and whether (open source) ready-to-use solutions dedicated to this are already available.

    Read the article

  • ASP.NET- using System.IO.File.Delete() to delete file(s) from directory inside wwwroot?

    - by Jim S
    Hello, I have a ASP.NET SOAP web service whose web method creates a PDF file, writes it to the "Download" directory of the applicaton, and returns the URL to the user. Code: //Create the map images (MapPrinter) and insert them on the PDF (PagePrinter). MemoryStream mstream = null; FileStream fs = null; try { //Create the memorystream storing the pdf created. mstream = pgPrinter.GenerateMapImage(); //Convert the memorystream to an array of bytes. byte[] byteArray = mstream.ToArray(); //return byteArray; //Save PDF file to site's Download folder with a unique name. System.Text.StringBuilder sb = new System.Text.StringBuilder(Global.PhysicalDownloadPath); sb.Append("\\"); string fileName = Guid.NewGuid().ToString() + ".pdf"; sb.Append(fileName); string filePath = sb.ToString(); fs = new FileStream(filePath, FileMode.CreateNew); fs.Write(byteArray, 0, byteArray.Length); string requestURI = this.Context.Request.Url.AbsoluteUri; string virtPath = requestURI.Remove(requestURI.IndexOf("Service.asmx")) + "Download/" + fileName; return virtPath; } catch (Exception ex) { throw new Exception("An error has occurred creating the map pdf.", ex); } finally { if (mstream != null) mstream.Close(); if (fs != null) fs.Close(); //Clean up resources if (pgPrinter != null) pgPrinter.Dispose(); } Then in the Global.asax file of the web service, I set up a Timer in the Application_Start event listener. In the Timer's ElapsedEvent listener I look for any files in the Download directory that are older than the Timer interval (for testing = 1 min., for deployment ~20 min.) and delete them. Code: //Interval to check for old files (milliseconds), also set to delete files older than now minus this interval. private static double deleteTimeInterval; private static System.Timers.Timer timer; //Physical path to Download folder. Everything in this folder will be checked for deletion. public static string PhysicalDownloadPath; void Application_Start(object sender, EventArgs e) { // Code that runs on application startup deleteTimeInterval = Convert.ToDouble(System.Configuration.ConfigurationManager.AppSettings["FileDeleteInterval"]); //Create timer with interval (milliseconds) whose elapse event will trigger the delete of old files //in the Download directory. timer = new System.Timers.Timer(deleteTimeInterval); timer.Enabled = true; timer.AutoReset = true; timer.Elapsed += new System.Timers.ElapsedEventHandler(OnTimedEvent); PhysicalDownloadPath = System.Web.Hosting.HostingEnvironment.ApplicationPhysicalPath + "Download"; } private static void OnTimedEvent(object source, System.Timers.ElapsedEventArgs e) { //Delete the files older than the time interval in the Download folder. var folder = new System.IO.DirectoryInfo(PhysicalDownloadPath); System.IO.FileInfo[] files = folder.GetFiles(); foreach (var file in files) { if (file.CreationTime < DateTime.Now.AddMilliseconds(-deleteTimeInterval)) { string path = PhysicalDownloadPath + "\\" + file.Name; System.IO.File.Delete(path); } } } This works perfectly, with one exception. When I publish the web service application to inetpub\wwwroot (Windows 7, IIS7) it does not delete the old files in the Download directory. The app works perfect when I publish to IIS from a physical directory not in wwwroot. Obviously, it seems IIS places some sort of lock on files in the web root. I have tested impersonating an admin user to run the app and it still does not work. Any tips on how to circumvent the lock programmatically when in wwwroot? 
The client will probably want the app published to the root directory. Thank you very much.

    Read the article

  • Error while installing emacs23 from Software Center

    - by vrcmr
    Trying to install emacs in Software Center Ubuntu 12.04 got this error. installArchives() failed: Selecting previously unselected package emacs23. (Reading database ... (Reading database ... 5% (Reading database ... 10% (Reading database ... 15% (Reading database ... 20% (Reading database ... 25% (Reading database ... 30% (Reading database ... 35% (Reading database ... 40% (Reading database ... 45% (Reading database ... 50% (Reading database ... 55% (Reading database ... 60% (Reading database ... 65% (Reading database ... 70% (Reading database ... 75% (Reading database ... 80% (Reading database ... 85% (Reading database ... 90% (Reading database ... 95% (Reading database ... 100% (Reading database ... 182385 files and directories currently installed.) Unpacking emacs23 (from .../emacs23_23.3+1-1ubuntu9_i386.deb) ... Processing triggers for desktop-file-utils ... Processing triggers for bamfdaemon ... Rebuilding /usr/share/applications/bamf.index... Processing triggers for gnome-menus ... Processing triggers for man-db ... Setting up emacs23 (23.3+1-1ubuntu9) ... update-alternatives: using /usr/bin/emacs23-x to provide /usr/bin/emacs (emacs) in auto mode. emacs-install emacs23 install/dictionaries-common: Byte-compiling for emacsen flavour emacs23 Warning: Lisp directory `/usr/share/emacs/23.3/site-lisp' does not exist. Warning: Lisp directory `/usr/share/emacs/site-lisp' does not exist. Warning: Lisp directory `/usr/share/emacs/23.3/leim' does not exist. Warning: Lisp directory `/usr/share/emacs/23.3/lisp' does not exist. Warning: Lisp directory `/usr/share/emacs/23.3/leim' does not exist. Error: charsets directory (/usr/share/emacs/23.3/etc/charsets) does not exist. Emacs will not function correctly without the character map files. Please check your installation! Warning: Could not find simple.el nor simple.elc Cannot open load file: bytecomp emacs-install: /usr/lib/emacsen-common/packages/install/dictionaries-common emacs23 failed at /usr/lib/emacsen-common/emacs-install line 28, <TSORT> line 3. dpkg: error processing emacs23 (--configure): subprocess installed post-installation script returned error exit status 255 No apport report written because MaxReports is reached already Errors were encountered while processing: emacs23 Error in function: Setting up emacs23 (23.3+1-1ubuntu9) ... emacs-install emacs23 install/dictionaries-common: Byte-compiling for emacsen flavour emacs23 Warning: Lisp directory `/usr/share/emacs/23.3/site-lisp' does not exist. Warning: Lisp directory `/usr/share/emacs/site-lisp' does not exist. Warning: Lisp directory `/usr/share/emacs/23.3/leim' does not exist. Warning: Lisp directory `/usr/share/emacs/23.3/lisp' does not exist. Warning: Lisp directory `/usr/share/emacs/23.3/leim' does not exist. Error: charsets directory (/usr/share/emacs/23.3/etc/charsets) does not exist. Emacs will not function correctly without the character map files. Please check your installation! Warning: Could not find simple.el nor simple.elc Cannot open load file: bytecomp emacs-install: /usr/lib/emacsen-common/packages/install/dictionaries-common emacs23 failed at /usr/lib/emacsen-common/emacs-install line 28, <TSORT> line 3. dpkg: error processing emacs23 (--configure): subprocess installed post-installation script returned error exit status 255
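
    A hedged guess (not a confirmed fix): the byte-compilation step fails because the shared Lisp directories normally provided by emacs23-common are missing, so reinstalling the common packages and then letting dpkg finish the configuration is a reasonable first step. Package names are assumptions based on the Ubuntu 12.04 archive.

        sudo apt-get install --reinstall emacs23-common emacs23-bin-common
        sudo apt-get install -f
        sudo dpkg --configure -a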

    Read the article
