Search Results

Search found 16797 results on 672 pages for 'directory traversal'.


  • File permission mask/mode settings for Samba on FreeNAS?

    - by tkahn
    I'm currently working on the Samba settings on a FreeNAS server. When any user creates a file or a folder on the server, I want it to get the following permissions:

        Folders: drwxrws---
        Files:   -rwxrws---

    To set the permissions like this manually I use chmod 2770, which works great. But I want this to happen automatically, so I've added the following lines to smb.conf:

        create mask = 2770
        directory mask = 2770
        force create mode = 2770
        force directory mode = 2770

    But when I test by creating a file in one of the folders, it gets these permissions:

        Folder: drwxrwx
        File:   -rwxrw----

    What am I overlooking or doing wrong? Is the order of the lines relevant? Does the setgid digit (the 2 in 2770) mess things up?
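    A hedged sketch rather than a verified fix: Samba documents the result as (mode requested by the client AND the mask) OR the force mode, so the four lines above look plausible on their own. FreeNAS, however, regenerates smb.conf from its GUI and can silently drop manual edits, so checking what the running server actually parsed is a cheap first test:

        # print the effective, parsed configuration; all four lines
        # should appear under the share's section
        testparm -s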

    Read the article

  • Webserver insists on opening "blog1.php" instead of "index.php"

    - by pepoluan
    I'm at my wits' end. I have just ripped out a website and am in the process of rebuilding everything. Previously, the 'home page' of the website was a blog, at the address "www.mydomain.com/blog1.php". After exporting everything, I deleted the whole directory, and -- based on request -- immediately created a blog/ directory. The idea is to get the blog back up as soon as possible, and temporarily redirect people accessing www.mydomain.com to the blog. Accessing the blog via http://www.mydomain.com/blog/ works. So I put in an index.php file containing a (temporary) redirect to the blog's address. The problem: the server insists on opening blog1.php instead of index.php, even after we deleted all the files (including .htaccess). Even putting in a new .htaccess file with the single line DirectoryIndex index.php doesn't work. The server stubbornly wants blog1.php. Now, the server is actually shared webhosting, so I have no direct access to it; I have to do my work via cPanel. Currently, I work around this issue by creating blog1.php, but I really want to know why the server does not revert to opening index.php. Did I perhaps miss some important setting in the byzantine cPanel menu pages?
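    A hedged sketch of an .htaccess matching what is described above; the redirect target is assumed from the URLs in the question, and none of this helps if the server is ignoring .htaccess entirely:

        # .htaccess in the document root
        DirectoryIndex index.php
        # temporary redirect for anyone still requesting the old page
        Redirect 302 /blog1.php http://www.mydomain.com/blog/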

    Read the article

  • Apache virtualhost - Mac OSX 10.7.3

    - by Rakan
    After upgrading to Lion, all my virtual hosts stopped working. For some weird reason they all redirect to the default "It works!" Apache page on my machine. Example:

        # /etc/hosts
        127.0.0.1 myhost.com

        <VirtualHost *:80>
            DocumentRoot "/Library/WebServer/Documents/testproj/"
            ServerName myhost.com
            <Directory "/Library/WebServer/Documents/testproj/">
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog "/private/var/log/apache2/testproj-error_log"
            CustomLog "/private/var/log/apache2/testproj-access_log" common
        </VirtualHost>

    Did anyone else face the same issue? How can I fix this?
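    A hedged guess at a common cause after OS X upgrades: the installer can replace httpd.conf, commenting out name-based virtual hosting and the vhosts include again. A minimal check, assuming the stock OS X Apache layout:

        # in /etc/apache2/httpd.conf, make sure these lines are not commented out:
        NameVirtualHost *:80
        Include /private/etc/apache2/extra/httpd-vhosts.conf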

    Read the article

  • How to recover C:\Users folder

    - by Matías Fidemraizer
    Today I was moving C:\Users to another partition using the symlink method. I had the great idea of making the symlink point from C:\Users to U:\ instead of to U:\Users. Sadly, I deleted the original Users folder, and now when I try to log in it says that The User Profile Service failed the logon. Maybe I'm wrong, but I think this is because the root directory of the user profiles is no longer where the system expects it; even when I create C:\Users, I can't log into Windows and I get the above error message. How can I create a new C:\Users directory and work around the problem? Thank you in advance!
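    A hedged sketch of the junction-based recovery that is usually attempted in this situation, run from a recovery console or another administrator account; it assumes the copied profiles really are under U:\Users:

        :: remove any broken link first, then point C:\Users at the copy
        rmdir C:\Users
        mklink /J C:\Users U:\Users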

    Read the article

  • How can I play a .bin file with VLC?

    - by freebird
    For two days I have tried to get VLC to play this .bin file. It played fine in Windows XP using VLC, so I decided to install the same VLC that's on my XP partition, and through Wine it plays the file no problem. Why won't it play natively?

        fr33bird@fr33bird-desktop:~/Downloads/********/*********$ vlc ********.bin
        VLC media player 1.1.10 The Luggage (revision exported)
        Blocked: call to unsetenv("DBUS_ACTIVATION_ADDRESS")
        Blocked: call to unsetenv("DBUS_ACTIVATION_BUS_TYPE")
        [0x8387914] main libvlc: Running vlc with the default interface. Use 'cvlc' to use vlc without interface.
        Blocked: call to setlocale(6, "")
        Warning: call to srand(1309627174)
        Warning: call to rand()
        Blocked: call to setlocale(6, "")
        (process:14474): Gtk-WARNING **: Locale not supported by C library. Using the fallback 'C' locale.
        Blocked: call to setlocale(6, "")
        libdvdnav: Using dvdnav version 4.1.3
        libdvdread: Using libdvdcss version 1.2.10 for DVD access
        libdvdread: Can't stat /home/fr33bird/Downloads/*******/*******/*******.bin No such file or directory
        libdvdnav: vm: failed to open/read the DVD
        [0x87034e4] filesystem access error: cannot open file /home/fr33bird/Downloads/*****/*****/*******.bin (No such file or directory)
        [0x843535c] main input error: open of `file:///home/fr33bird/Downloads/******/****/********.bin' failed: (null)

    Funny that it says no such file even though it's attempting to read it. Xine says that the .bin is encrypted, so I installed libdvdcss2, w32codecs and ubuntu-restricted-codecs from Medibuntu; still the same issue. How can I fix this? I want to run this natively, not through Wine. I even tried changing .bin to .mpg and .avi.
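    A hedged diagnostic sketch, not a fix: "No such file or directory" on a file that is plainly there often means the real name contains characters the shell is not passing through (the path here is masked, so this is only a guess). Letting ls expose odd characters and letting the shell expand the name itself rules that out:

        # print escapes for any non-printing characters in the names
        ls -b
        # let the shell expand the name; -- stops option parsing
        vlc -- ./*.bin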

    Read the article

  • Linux filesystem with inodes close on the disk

    - by pts
    I'd like to make ls -laR /media/myfs on Linux as fast as possible. I'll have 1 million files on the filesystem, 2TB of total file size, and some directories containing as many as 10000 files. Which filesystem should I use and how should I configure it?

    As far as I understand, the reason why ls -laR is slow is that it has to stat(2) each inode (i.e. 1 million stat(2)s), and since inodes are distributed randomly on the disk, each stat(2) needs one disk seek.

    Here are some solutions I had in mind, none of which I am satisfied with:

    - Create the filesystem on an SSD, because the seek operations on SSDs are fast. This wouldn't work, because a 2TB SSD doesn't exist, or it's prohibitively expensive.

    - Create a filesystem which spans two block devices: an SSD and a disk; the disk contains file data, and the SSD contains all the metadata (including directory entries, inodes and POSIX extended attributes). Is there a filesystem which supports this? Would it survive a system crash (power outage)?

    - Use find /media/myfs on ext2, ext3 or ext4 instead of ls -laR /media/myfs, because the former can take advantage of the d_type field (see the getdents(2) man page), so it doesn't have to stat. Unfortunately, this doesn't meet my requirements, because I need all file sizes as well, which find /media/myfs doesn't print.

    - Use a filesystem, such as VFAT, which stores inodes in the directory entries. I'd love this one, but VFAT is not reliable and flexible enough for me, and I don't know of any other filesystem which does that. Do you? Of course, storing inodes in the directory entries wouldn't work for files with a link count greater than 1, but that's not a problem, since I have only a few dozen such files in my use case.

    - Adjust some settings in /proc or sysctl so that inodes are locked into system memory forever. This would not speed up the first ls -laR /media/myfs, but it would make all subsequent invocations amazingly fast. How can I do this? (A hedged sketch of this idea follows after the list.) I don't like it, because it doesn't speed up the first invocation, which currently takes 30 minutes. Also, I'd like to lock the POSIX extended attributes in memory as well. What do I have to do for that?

    - Use a filesystem which has an online defragmentation tool that can be instructed to relocate inodes to the beginning of the block device. Once the relocation is done, I can run dd if=/dev/sdb of=/dev/null bs=1M count=256 to get the beginning of the block device into the kernel's in-memory cache without seeking, and then the stat(2) operations will be fast, because they read from the cache. Is there a way to lock those inodes and/or blocks into memory once they have been read? Which filesystem has such a defragmentation tool?
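    For the cache-pinning idea above, a hedged sketch: Linux has no knob that truly locks inodes in RAM forever, but vm.vfs_cache_pressure biases page reclaim toward keeping the dentry and inode caches, which approximates it (the value below is an assumption, and 0 is documented as risky):

        # bias reclaim away from dentry/inode caches (default is 100)
        sysctl vm.vfs_cache_pressure=1
        # warm the caches once; later runs then mostly hit memory
        ls -laR /media/myfs > /dev/null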

    Read the article

  • Apache 2 UserDir for only one VirtualHost

    - by dentarg
    Is it possible to enable the UserDir directive for just one VirtualHost, rather than having it on for all of them and then disabling it (with "UserDir disable") in each VirtualHost you don't want it on? I have tried by putting this inside a <VirtualHost> and commenting out everything in the global config (/etc/apache2/conf.d/userdir.conf). No luck though.

        <IfModule mod_userdir.c>
            UserDir public.www
            UserDir disabled root

            <Directory /home/*/public.www>
                AllowOverride FileInfo AuthConfig Limit Indexes
                Options MultiViews Indexes SymLinksIfOwnerMatch IncludesNoExec
                <Limit GET POST OPTIONS>
                    Order allow,deny
                    Allow from all
                </Limit>
                <LimitExcept GET POST OPTIONS>
                    Order deny,allow
                    Deny from all
                </LimitExcept>
            </Directory>
        </IfModule>
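    A hedged sketch of the inverse arrangement, which mod_userdir's per-user enable/disable syntax allows; whether your Apache merges the vhost-level override the way you need is an assumption worth testing:

        # global config (e.g. /etc/apache2/conf.d/userdir.conf):
        UserDir disabled

        # inside the one <VirtualHost> that should serve user dirs:
        UserDir public.www
        UserDir enabled user1 user2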

    Read the article

  • Where does the temporary Flash file get stored when I am viewing it in Firefox?

    - by Nishant
    I am watching a lecture and it seems to be Adobe Flash. I want to save the video that I am viewing. The website I am checking is http://cs75.tv/2009/fall/ and I am using Firefox. I don't know if this info helps, but my about:cache result is this:

        Memory cache device
        Number of entries:    212
        Maximum storage size: 13312 KiB
        Storage in use:       8087 KiB
        Inactive storage:     6819 KiB

        Disk cache device
        Number of entries:    3224
        Maximum storage size: 500000 KiB
        Storage in use:       26066 KiB
        Cache Directory:      C:\Documents and Settings\nvarm\Local Settings\Application Data\Mozilla\Firefox\Profiles\d74svniy.default\Cache

        Offline cache device
        Number of entries:    0
        Maximum storage size: 512000 KiB
        Storage in use:       0 KiB
        Cache Directory:      C:\Documents and Settings\nvarm\Local Settings\Application Data\Mozilla\Firefox\Profiles\d74svniy.default\OfflineCache

    Read the article

  • SQL SERVER – FIX ERROR – Cannot connect to . Login failed. The login is from an untrusted domain and cannot be used with Windows authentication. (Microsoft SQL Server, Error: 18452)

    - by pinaldave
    Just a day ago, I was attempting to connect to my local SQL Server using the IP 127.0.0.1. The IP is my local machine's, and SQL Server is installed on the local box as well. However, whenever I tried to connect to the server, it gave me the following strange error:

        Cannot connect to 127.0.0.1. Login failed. The login is from an untrusted domain and cannot be used with Windows authentication. (Microsoft SQL Server, Error: 18452)

    The reason was indeed strange: I was trying to connect from the local box to the local box, and it said my login was from an untrusted domain. As my system is not part of any domain, this was really confusing to me. Another oddity was that I had always been able to connect to SQL Server using 127.0.0.1 before. I started to think about what I had changed since the last time I connected to SQL Server. Suddenly I remembered that I had modified my computer's hosts file for some other purpose.

    Solution: I opened my hosts file and added an entry like 127.0.0.1 localhost. Once I added it, I was able to reconnect to SQL Server as usual. The hosts file lives in C:\Windows\System32\drivers\etc; you will find the file named hosts there, and you should open it with Notepad. If you are part of a domain and your organization uses Active Directory, make sure that your account is added properly to Active Directory and has the security permissions needed to execute the task.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
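    For reference, the exact entry described above (path and value are taken from the post itself):

        # C:\Windows\System32\drivers\etc\hosts
        127.0.0.1    localhost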

    Read the article

  • Can't get powershell to return where results from GCI using ACL

    - by Rossaluss
    I'm trying to get PowerShell to list files in a directory that are older than a certain date and owned by a certain user. I've got the script below so far, which gives me all the files older than a certain date and lists the directory and who owns them:

        $date = Get-Date
        $age = $date.AddDays(-30)
        ls '\\server\share\folder' -File -Recurse |
            where { $_.lastwritetime -lt "$age" } |
            Select-Object $_.fullname, { (Get-ACL $_.FullName).Owner } |
            ft -AutoSize

    However, when I try to use an additional where condition to select only files owned by a certain user, I get no results at all, even though I know I should, based on the match I'm trying to obtain:

        $date = Get-Date
        $age = $date.AddDays(-30)
        ls '\\server\share\folder' -File -Recurse |
            where ({ $_.lastwritetime -lt "$age" } -and { { (Get-ACL $_.FullName).Owner } -eq "domain\user" }) |
            Select-Object $_.fullname, { (Get-ACL $_.FullName).Owner } |
            ft -AutoSize

    Am I missing something? Can I not use the Get-Acl command in a where condition as I've tried to? Any help would be appreciated. Thanks
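    A hedged sketch of the likely repair: a bare script block is never -eq to a string (so the -and expression always ends up false, which would explain the empty output). Putting both tests inside one script block and calling Get-Acl as a plain expression avoids that:

        $age = (Get-Date).AddDays(-30)
        Get-ChildItem '\\server\share\folder' -File -Recurse |
            Where-Object {
                $_.LastWriteTime -lt $age -and
                (Get-Acl $_.FullName).Owner -eq 'domain\user'
            } |
            Select-Object FullName, @{ n = 'Owner'; e = { (Get-Acl $_.FullName).Owner } } |
            Format-Table -AutoSize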

    Read the article

  • How to auto-mount a copied encrypted home

    - by LedZ
    How can I auto-mount and use my encrypted home that I copied to another partition on the same hard disk? I'm running Ubuntu 11.10. My encrypted home is on sda1, where I have 2 users: userA and userB. Another partition is sda3, on which I have some other data. By the way, sda1 is formatted as EXT4 and sda3 as EXT3. I did the following:

    1. Logged out of the GUI (GNOME) and switched to the console (using Ctrl+Alt+F1).
    2. Logged in there and changed to root (using sudo -s).
    3. Created a new mountpoint under /mnt (mkdir /mnt/tmp).
    4. Mounted /dev/sda3 on that mountpoint (mount /dev/sda3 /mnt/tmp).
    5. Copied my encrypted /home to /mnt/tmp using rsync (rsync --acvxASXH --progress --stats /home/ /mnt/tmp/).

    After the copy I looked at my "new home" in /mnt/tmp and found the following 3 folders: userA, userB and .ecryptfs. The structure of /dev/sda3 mounted on /mnt/tmp looks like this (userB inside .ecryptfs is not listed):

        userA/
        userB/
        .ecryptfs/
            userA/
                auto-mount
                auto-umount
                Private.mnt
                Private.sig
                wrapped-passphrase
                .wrapped-passphrase.recorded
                .Private/
                    (encrypted file_1)
                    (encrypted file_2)
                    (encrypted file_n)

    Now I would like this copy of the original home directory to behave exactly like the original: it should be auto-mounted at boot, give me access to my unencrypted files, and after logout all my files should be encrypted again. Any suggestions?
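    A hedged sketch of the usual final step (an assumption, not something verified on this setup): mount sda3 over /home at boot, so the per-user .ecryptfs metadata sits exactly where pam_ecryptfs expects it; the login-time mount driven by auto-mount and Private.sig should then behave as before:

        # /etc/fstab; replace the UUID with what blkid reports for /dev/sda3
        UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /home  ext3  defaults  0  2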

    Read the article

  • AD Stopping A script and Adding a Value to A User's Account Attribute

    - by Steven Maxon
    This will launch the PPT in a GPO:

        Dim ppt
        Set ppt = CreateObject("PowerPoint.Application")
        ppt.Visible = True
        ppt.Presentations.Open "C:\Scripts\Test.pptx"

    This is the batch file at the end of the PPT that records the date, time, computer name and username:

        echo "Logon Date:%date%,Logon Time:%time%,Computer Name:%computername%,User Name:%username%" >> \\servertest\g$\Tracking\LOGON.TXT

    This is what I need but can't find: I need the script to check a value in the Active Directory user's account, in the Web page: attribute, that would shut off the script if the user has already completed reading the presentation. It could be as simple as writing XXXX. I need the value XXXX written to the Web page: attribute of the Active Directory user's account when they finish reading the presentation and click on the bat file, so the script will not run again when they log in. Thanks for any help.
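    A hedged sketch of the check/write pair: the Web page: field of an AD account is stored in the wWWHomePage attribute, and the XXXX marker is taken from the question; everything else here (object names, error handling) is illustrative:

        ' top of the logon script: skip the presentation if already completed
        Set objSysInfo = CreateObject("ADSystemInfo")
        Set objUser = GetObject("LDAP://" & objSysInfo.UserName)
        On Error Resume Next            ' wWWHomePage may not be set yet
        marker = objUser.Get("wWWHomePage")
        On Error GoTo 0
        If marker = "XXXX" Then WScript.Quit

        ' completion step, run when the user finishes the presentation
        ' (the user needs write access to their own attribute):
        objUser.Put "wWWHomePage", "XXXX"
        objUser.SetInfo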

    Read the article

  • securing unpatched websites

    - by neuron
    I have a client with a lot (read: several thousand) of websites on several old CMS solutions that are no longer maintained. Moving all of them to a maintained solution isn't really an option at this point, so I'm thinking about ways to secure the solutions without patching them. The solutions are mostly Joomla 1.0/1.5 and WordPress. What I'm thinking is something like this (a sketch of the third idea follows the list):

    - mod_suexec to lock everyone into their own home directory.
    - AppArmor to deny any and all file writes by default (deny by default, with exceptions for things like "images" directories).
    - .htaccess to prevent anything in writable directories from being executed (i.e. disable the PHP engine for the images/ directory).
    - MySQL triggers that check the "users" tables, to prevent adding new admins/superadmins.

    Does this make sense? Is it viable? Am I missing something obvious?
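    A hedged sketch of the .htaccess idea, assuming mod_php and an AllowOverride that permits Options and FileInfo; illustrative, not tested against these CMSes:

        # images/.htaccess: make sure nothing in here runs as PHP
        php_flag engine off
        RemoveHandler .php .phtml .php3
        RemoveType .php .phtml .php3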

    Read the article

  • How do I theme the Nautilus background image?

    - by Kesymaru
    I want to change the background image in the Nautilus file browser; the idea is to put my own style in the background. I'm using Ubuntu 11.10 and Nautilus is version 3. I know that I have to change the nautilus.css file of the theme, but the problem is that there is no parameter for the background. I just want to apply an image, but I can't find the file or parameter to change. The CSS file is in the directory /home/UserName/.theme/MyTheme/gtk-3.0/apps. I've changed the nautilus.css file and wrote two new lines using CSS style, but I don't know where the correct place is to put them. The lines are:

        background-image: url("carbon.jpg");
        background-repeat: repeat;

    Obviously I put the image called carbon.jpg in the same directory as nautilus.css, but this change doesn't work, because I need to know which class draws the Nautilus file-browsing frame. If I find this class, I guess this code will work. If someone knows how to do it, please tell me, because I really want to make this change.
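    A hedged guess at a selector: GTK3 CSS of that era can match widgets by type name, but whether NautilusWindow is the node that paints the view background is exactly the open question here, so treat this as a starting point only:

        /* in nautilus.css; the selector is an educated guess */
        NautilusWindow {
            background-image: url("carbon.jpg");
            background-repeat: repeat;
        }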

    Read the article

  • Break a hard link of a file in use

    - by Stebi
    I used hard links to merge duplicated files on my SSD (space is still precious) and now have a weird problem. Common files like msvcr110.dll got hard-linked. Now I want to delete a program which has this file in its installation directory, but I cannot, because the same file (under another path) is in use by a currently running application (I don't know which one), and Windows doesn't allow me to delete a file that's in use. I can rename the file, but it still points to the same data, so it's not possible to delete it. Is there any way to break the hard link of a file which is currently in use? Currently I use a trash folder: I move such files there so I can delete the directory structure of the program being removed. But I'd like to get rid of this leftover (although it doesn't take much space, as it's a hard link).
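    A hedged diagnostic sketch rather than a fix: fsutil can list every path that shares the same file data, which at least identifies the in-use sibling (the path below is a placeholder):

        :: list all hard links pointing at the same file data
        fsutil hardlink list C:\path\to\msvcr110.dll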

    Read the article

  • Trying to compile from source newest apache with newest openssl

    - by AlexMA
    I need to install Apache 2.4.10 built against OpenSSL 1.0.1i. I compiled OpenSSL from source with:

        $ ./config \
            --prefix=/opt/openssl-1.0.1i \
            --openssldir=/opt/openssl-1.0.1i
        $ make
        $ sudo make install

    and Apache with:

        ./configure --prefix=/etc/apache2 \
            --enable-access_compat=shared \
            --enable-actions=shared \
            --enable-alias=shared \
            --enable-allowmethods=shared \
            --enable-auth_basic=shared \
            --enable-authn_core=shared \
            --enable-authn_file=shared \
            --enable-authz_core=shared \
            --enable-authz_groupfile=shared \
            --enable-authz_host=shared \
            --enable-authz_user=shared \
            --enable-autoindex=shared \
            --enable-dir=shared \
            --enable-env=shared \
            --enable-headers=shared \
            --enable-include=shared \
            --enable-log_config=shared \
            --enable-mime=shared \
            --enable-negotiation=shared \
            --enable-proxy=shared \
            --enable-proxy_http=shared \
            --enable-rewrite=shared \
            --enable-setenvif=shared \
            --enable-ssl=shared \
            --enable-unixd=shared \
            --enable-ssl \
            --with-ssl=/opt/openssl-1.0.1i \
            --enable-ssl-staticlib-deps \
            --enable-mods-static=ssl
        make

    (I would run sudo make install next, but I get an error.) I'm essentially following the guide here, except with slightly newer versions. My problem is that I get a linker error when I run make for Apache:

        Making all in support
        make[1]: Entering directory `/home/developer/downloads/httpd-2.4.10/support'
        make[2]: Entering directory `/home/developer/downloads/httpd-2.4.10/support'
        /usr/share/apr-1.0/build/libtool --silent --mode=link x86_64-linux-gnu-gcc -std=gnu99 -pthread -L/opt/openssl-1.0.1i/lib -lssl -lcrypto \
            -o ab ab.lo /usr/lib/x86_64-linux-gnu/libaprutil-1.la /usr/lib/x86_64-linux-gnu/libapr-1.la -lm
        /usr/bin/ld: /opt/openssl-1.0.1i/lib/libcrypto.a(dso_dlfcn.o): undefined reference to symbol 'dlclose@@GLIBC_2.2.5'

    I tried the answer here, but no luck. I would prefer to just use aptitude, but unfortunately the versions I need aren't available yet. If anyone knows how to fix the linker problem (or what I think is a linker problem), or knows of a better way to tell Apache to use a newer OpenSSL, it would be greatly appreciated; I've got apache 1.0.1i working otherwise.
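    A hedged sketch of the usual fix for that undefined dlclose reference: the static libcrypto.a pulls in dlopen/dlclose, so libdl has to be added to the link line, which autoconf-style configure scripts accept via a LIBS argument. Untested here, but it targets exactly this symbol:

        # re-run Apache's configure with libdl on the link line
        # (the ... stands for the same --enable-* switches as above)
        ./configure --prefix=/etc/apache2 ... \
            --with-ssl=/opt/openssl-1.0.1i \
            --enable-ssl-staticlib-deps \
            --enable-mods-static=ssl \
            LIBS='-ldl'
        make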

    Read the article

  • Tumblr custom domain not redirecting properly

    - by Manic
    I decided to host my blog at Tumblr, using their custom domain setup (http://blog.smokingfishgames.com/ instead of http://smokingfishgames.tumblr.com). However, it's been 72 hours and I'm still getting spotty redirection. It works some of the time: I go and see the page and blog, and it's all fine. However, it occasionally just stops working and redirects back to my web host, which is a directory with nothing but a single file called BUGGER.html (which I stuck in to make sure it was my web host and not some empty Tumblr directory). Clearing the Chrome DNS cache makes the problem go away, for a while. After a few minutes, or an hour, or however long, I'll start seeing BUGGER.html again; I clear the cache, and poof, the blog shows up. The thing that's curious to me is that when I clear the cache and get BUGGER.html again (which happens occasionally), I can look at my Chrome DNS cache and see:

        assets.tumblr.com            UNSPECIFIED
        blog.smokingfishgames.com    UNSPECIFIED
        www.tumblr.com               UNSPECIFIED

    (IP addresses and expiration times omitted for brevity's sake; if they're important I'm sure I can replicate the issue.) This implies, to me anyway, that my browser is reaching Tumblr but getting bounced back to my web host. Any reason why this would be happening, or is this a normal symptom of DNS propagation? If it is a problem, should I be bothering Tumblr or my host with it, or is this something I can fix myself?
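    A hedged way to separate DNS from everything else: query your configured resolver and a public one directly and compare the answers. If they disagree, it's propagation or a stale record; if they agree, the bouncing is happening somewhere else. Nothing here assumes more than the hostname above:

        # what your configured resolver says
        dig +short blog.smokingfishgames.com
        # what a public resolver says
        dig +short blog.smokingfishgames.com @8.8.8.8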

    Read the article

  • Secure copy in Linux

    - by Michael
    Hi all, I want to simply copy 3 directories from my home directory to a colleague's home directory (I don't have write access to his), probably using secure copy if possible. I am not good with the Linux command line, so I am not sure how to do that, and I would very much appreciate it if somebody could help me out with this. I guess it should look something like this:

        scp -r /home/user1/directoy1 /home/user2/directoy1
        scp -r /home/user1/directoy2 /home/user2/directoy2
        scp -r /home/user1/directoy3 /home/user2/directoy3

    Do I need to specify the login name of my colleague so that the files can be copied when he enters his password? Thanks for your help, Michael
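    A hedged sketch of the usual form: yes, the destination takes user@host: in front of the path, and your colleague's password (or key) authorizes the write. The host name and user name below are placeholders:

        scp -r /home/user1/directoy1 user2@remotehost:/home/user2/directoy1
        scp -r /home/user1/directoy2 user2@remotehost:/home/user2/directoy2
        scp -r /home/user1/directoy3 user2@remotehost:/home/user2/directoy3

    If both accounts are on the same machine, the same form works with localhost as the host.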

    Read the article

  • remote symbolic link / junction

    - by Blueberry
    Might be a pretty obvious one, but I have had some trouble finding solid answers. I have a directory on a Windows network share containing different versions of an application. I would like to have a link to one of these called 'current', which will be a symbolic link sitting beside all the other version directories and pointing to one of them. Creating this link seems to be more of an issue than I would have thought. It looks like a symlink only shows the link on the same machine where it was created (which is not going to work, for obvious reasons), and junction needs to be run on the server, which is practically impossible due to various restrictions. What would be the best way to go about this? Would I just need to copy the files twice, or can I have a symbolic link which can be created and accessed remotely?
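    A hedged sketch, assuming reasonably recent Windows clients and write access to the share: mklink can create a symlink directly on a UNC path, but clients must have remote-to-remote symlink evaluation enabled to follow it. The version path is a placeholder:

        :: run on any client with write access to the share
        mklink /D \\server\share\apps\current \\server\share\apps\1.2.3
        :: each client that needs to follow the link may require:
        fsutil behavior set SymlinkEvaluation R2R:1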

    Read the article

  • Setting up WAMP to run on a LAN

    - by Steve
    I've installed WAMP on a Windows 7 PC, and it is running fine locally as localhost. I want PCs on the LAN to be able to view the local server, but when they load my PC's IP address in their browser, they receive a "You don't have permission to access / on this server" error. I followed this guide, but the issue remains. To recap:

    - I've added an inbound exception to Windows Firewall for port 80 for Private and Domain connections.
    - I've edited Apache's httpd.conf to include:

        Listen 80
        Listen 192.168.0.5:80

        <Directory "c:/wamp/www/wordpress/">
            Allow from all
        </Directory>

    - I've edited httpd-vhosts.conf to include:

        <VirtualHost 192.168.0.5:80>
            DocumentRoot "C:/wamp/www/wordpress"
        </VirtualHost>

    Any ideas?
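    A hedged sketch of the access-control block that usually matters here: stock WAMP configs tend to allow only 127.0.0.1 for the www tree, and under Apache 2.2 the Order line decides how the Allow is applied. Adjusting the existing block rather than adding a duplicate is the safer move:

        <Directory "c:/wamp/www/">
            Order Allow,Deny
            Allow from all
        </Directory>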

    Read the article

  • Apache virtual hosts - Resources on website not loaded when accessed from other hostname than localhost

    - by Christian Stadegaart
    Running virtual hosts on Mac OS X 10.6.8 with Apache 2.2.22. /etc/hosts is as follows:

        127.0.0.1       localhost 3dweergave studio-12.fritz.box
        255.255.255.255 broadcasthost
        ::1             localhost
        fe80::1%lo0     localhost

    Virtual hosts configuration:

        NameVirtualHost *:80

        <VirtualHost *:80>
            DocumentRoot "/opt/local/www/3dweergave"
            ServerName 3dweergave
            ErrorLog "logs/3dweergave-error_log"
            CustomLog "logs/3dweergave-access_log" common
            <Directory "/opt/local/www/3dweergave">
                Options Indexes FollowSymLinks
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

        <VirtualHost *:80>
            ServerName main
        </VirtualHost>

    This will output the following settings:

        *:80 is a NameVirtualHost
        default server 3dweergave (/opt/local/apache2/conf/extra/httpd-vhosts.conf:21)
        port 80 namevhost 3dweergave (/opt/local/apache2/conf/extra/httpd-vhosts.conf:21)
        port 80 namevhost main (/opt/local/apache2/conf/extra/httpd-vhosts.conf:34)

    I made 3dweergave the default server by putting it first in the list. This causes all undefined virtual hosts' names to load 3dweergave, and thus http://localhost points to 3dweergave. (Normally, the virtual host main would be first in the list and localhost would point to main, but for testing purposes I switched them.) When I navigate to http://localhost, my CakePHP default homepage shows as expected: Screenshot 1. But when I navigate to http://3dweergave, my CakePHP default homepage doesn't show as expected; it looks like the relative links to resources are not accepted by the server: Screenshot 2. For example, the CSS isn't loaded. When I open the source and click on the link, the CSS file opens in the browser without errors, but when I run FireBug while loading the webpage, it seems that the CSS file isn't retrieved. (<link rel="stylesheet" type="text/css" href="/css/cake.generic.css" />) How can I fix this unwanted behaviour?

    Read the article

  • /data/tmp on database server?

    - by Mellon
    I am on a Linux Ubuntu machine with MySQL installed. My teacher gave out an assignment which says to "copy cars.dat to /data/tmp on the MySQL database server", without any explanation. I do not know what "/data/tmp on the database server" means exactly. After the copy, I need to execute an SQL statement like:

        LOAD DATA INFILE '/data/tmp/cars.dat' INTO TABLE cars;

    So what does "copy cars.dat to /data/tmp on the database server" mean, given that there is no /data/tmp directory at all? Personally, I checked the /etc/mysql/my.cnf file, inside which there are these definitions:

        ...
        basedir = /usr
        datadir = /var/lib/mysql
        tmpdir  = /tmp
        ...

    Does it mean to copy cars.dat to the tmpdir, which is just /tmp under the root directory?
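    A hedged reading: on a single-machine setup, "the database server" is simply the same host, so the instruction reduces to creating the directory and copying the file there before running the LOAD DATA statement from the assignment. A sketch, assuming sudo access:

        sudo mkdir -p /data/tmp
        sudo cp cars.dat /data/tmp/
        # the file must be readable by the mysqld process:
        sudo chmod 644 /data/tmp/cars.dat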

    Read the article

  • Check if folders exist in Git repository... testing if a sub-string exists in bash with NULL as a separator

    - by Craig Francis
    I have a common git post-receive script for several projects, and it needs to perform different actions depending on whether an /app/ or /public/ folder exists in the root. Using:

        FOLDERS=`git ls-tree -d --name-only -z master`;

    I can see the directory listing, and I would like to use the regex support in bash to run something like:

        if [[ "$FOLDERS" =~ app ]]; then
            ...
        fi

    But that won't work if there is something like an "app lication" folder. I specified the -z option in the git ls-tree command so I could use the \0 (null) character as a separator, but I'm not sure how to test for that in the bash regex. Likewise, I know there is support for specifying a particular path in the ls-tree command, and I could then pipe that to wc -l, but I'd have thought it quicker to get a full directory listing of the root (not recursive) and then test for the 2 (or more) folders in the returned output. Possibly related to: http://stackoverflow.com/questions/7938094/git-how-to-check-which-files-exist-and-their-content-in-a-shared-bare-repos
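    A hedged sketch that sidesteps the NUL handling entirely; bash variables cannot store NUL bytes, so the -z separators are already lost in $FOLDERS. Matching whole lines instead cannot accidentally hit "app lication":

        # exact, whole-line match of top-level tree entries
        if git ls-tree -d --name-only master | grep -qx 'app'; then
            echo "app/ exists in master"
        fi
        if git ls-tree -d --name-only master | grep -qx 'public'; then
            echo "public/ exists in master"
        fi

    The remaining edge case is a directory name containing a newline, which ordinary project layouts won't have.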

    Read the article
