Search Results

Search found 39784 results on 1592 pages for 'ignore files'.

  • File upload permissions issue on Windows Server 2008 R2 IIS 7.5 PHP 5.3 with Drupal v.7.26

    - by Taras
    I have a website on Drupal 7.26. The server is Windows Server 2008 R2 with IIS 7.5
    ($_SERVER["SERVER_SOFTWARE"]: Microsoft-IIS/7.5), Server API CGI/FastCGI, PHP 5.3.28, and these
    PHP settings:

      file_uploads: On
      post_max_size: 75M
      upload_max_filesize: 50M
      upload_tmp_dir: C:\inetpub\wwwroot\tmp
      memory_limit: 128M
      open_basedir: C:\inetpub\wwwroot;C:\inetpub\wwwroot\tmp

    When I go to /admin/config/media/file-system I see these error messages:

      The directory sites\default\files exists but is not writable and could not be made writable.
      The directory tmp exists but is not writable and could not be made writable.

    Public file system path: sites\default\files
    Temporary directory: tmp

    I have set permissions on the folders:

      C:\inetpub\wwwroot\tmp : IIS_IUSRS : Full control
      C:\inetpub\wwwroot\sites\default\files : IIS_IUSRS : Full control

    I am working as the Administrator user (running echo %username% in C:\Users\Administrator\Downloads
    returns Administrator). I can't clear the Read-only attribute on these folders: every time I change
    it, press Apply with "Apply changes to this folder, subfolders and files" checked and press OK, the
    "Applying attributes..." dialog runs and finishes, but when I reopen the Properties dialog,
    Read-only is checked again. How can I fix it?
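
    A minimal sketch of how the write access could be granted from an elevated command prompt; the
    paths are the ones from the question, but the exact ACE flags are an assumption, not a confirmed
    fix:

      rem grant the IIS worker group modify rights, recursively (assumed ACE flags)
      icacls "C:\inetpub\wwwroot\sites\default\files" /grant "IIS_IUSRS:(OI)(CI)M" /T
      icacls "C:\inetpub\wwwroot\tmp" /grant "IIS_IUSRS:(OI)(CI)M" /T

      rem clear the read-only attribute on the folders and everything below them
      attrib -R "C:\inetpub\wwwroot\sites\default\files" /S /D
      attrib -R "C:\inetpub\wwwroot\tmp" /S /D

    Note that Explorer re-checking the Read-only box for a folder is normal behaviour on Windows (for
    directories the checkbox does not reflect real write access), so the ACL grants are the part that
    matters for PHP.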

    Read the article

  • Boot ISO image from GRUB4DOS on EFI machines

    - by Vladimir Tikhomirov
    I failed to load an ISO image (non-distro) directly from GRUB2 on a USB stick, but found a way to
    boot GRUB4DOS first and then load the image from there. However, it doesn't work every time, and
    the question is why.

    Environment and loading process: an EFI machine, a USB stick, the bootable ISO, GRUB2 and GRUB4DOS
    (the last three on the USB stick). Boot chain: USB - EFI loader - GRUB2 - GRUB4DOS - ISO image.

    Configuration files. To boot GRUB4DOS I use this entry in grub.cfg:

      menuentry "image.iso" {
        linux /syslinux/grub.exe --config-file="/menu.lst"
      }

    My menu.lst is:

      timeout 20
      default 0
      title image.iso
        find --set-root --ignore-floppies --ignore-cd //image.iso
        map --heads=0 --sectors-per-track=0 //image.iso (hd32)
        map --hook
        chainloader (hd32)

    This works perfectly on legacy BIOS machines. However, when I get to GRUB4DOS I don't see the menu
    with image.iso, only the GRUB command line, which means my menu.lst didn't load. Why is that?

    Background and ideas: I suspect GRUB4DOS doesn't recognize my USB stick as a device. I tried the
    find command and got (hd0,0), (hd0,1), (hd0,2), (rd). When I set root to any of these devices I
    don't see a FAT file system, as I did on legacy machines; the root device is (hd0,0), which has an
    NTFS file system and should be the Windows partition. EFI machines support only GRUB2, so I can't
    boot GRUB4DOS straight away. Please don't suggest an entry like the one below, because my image
    doesn't have a kernel (imagine loading HDAT2 or Hiren's Boot CD, for example):

      menuentry "Blancco Blancco5.iso" {
        set isofile="/image.iso"
        loopback loop $isofile
        set root=(loop)
        linux /isolinux/vmlinuz isofile=$isofile splash quiet
        initrd /isolinux/initrd
      }

    Read the article

  • Drupal install and permissions

    - by Richard
    So I'm really stuck on this issue. An install process is complaining about write permission on
    settings.php and sites/default/files/. However, I've temporarily set these files to read/write
    (chmod 777) and changed the owner/group to "apache" as shown below:

      -bash-4.1$ ls -hal
      total 28K
      drwxrwxrwx. 3 richard richard 4.0K Aug 23 15:03 .
      drwxr-xr-x. 4 richard richard 4.0K Aug 18 14:20 ..
      -rwxrwxrwx. 1 apache  apache  9.3K Mar 23 16:34 default.settings.php
      drwxrwxrwx. 2 apache  apache  4.0K Aug 23 15:03 files
      -rwxrwxrwx. 1 apache  apache     0 Aug 23 15:03 settings.php

    However, the install is still complaining about write permissions. I followed steps one and two of
    INSTALL.txt but no luck.

    Update: to explore the situation further, I created sites/default/richard.php with the following
    code:

      <?php
      error_reporting(E_ALL);
      ini_set('display_errors', '1');
      mkdir('files');
      print("<hr> User is ");
      passthru("whoami");
      passthru("pwd");
      ?>

    Run from the command line (as user "richard"), no problem: the folder is created and everything is
    a go. Run from the web, I get the following:

      Warning: mkdir(): Permission denied in /var/www/html/sites/default/richard.php on line 9
      User is apache
      /var/www/html/sites/default

    Update 2: safe mode appears to be off...

      -bash-4.1$ cat /etc/php.ini | grep safe | grep mode | grep -v \;
      safe_mode = Off
      safe_mode_gid = Off
      safe_mode_include_dir =
      safe_mode_exec_dir =
      safe_mode_allowed_env_vars = PHP_
      safe_mode_protected_env_vars = LD_LIBRARY_PATH
      sql.safe_mode = Off
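
    A hedged diagnostic sketch: on a RHEL/CentOS-style system (the -bash-4.1$ prompt suggests one), a
    "Permission denied" that survives chmod 777 and the right owner is often SELinux rather than
    classic file permissions. The paths below come from the question; the context type is an
    assumption:

      # check whether SELinux is enforcing and what context the directory carries
      getenforce
      ls -Zd /var/www/html/sites/default /var/www/html/sites/default/files

      # if it is enforcing, label the writable directories for httpd (assumed type)
      chcon -R -t httpd_sys_rw_content_t /var/www/html/sites/default/files

    If SELinux is the blocker, /var/log/audit/audit.log would show an AVC denial at the same timestamp
    as the failed mkdir().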

    Read the article

  • Converting Lighttpd config to NginX with php-fpm

    - by Le Dude
    Having so much issue with NginX configuration since I'm new with NginX. Been using Lighttpd for quite sometime. Here are the base info. New Machine - CentOS 6.3 64 Bit - NginX 1.2.4-1.e16.ngx - Php-FPM 5.3.18-1.e16.remi Old Machine - CentOS 6.2 64Bit - Lighttpd 1.4.25-3.e16 Original Lighttpd config file: ####################################################################### ## ## /etc/lighttpd/lighttpd.conf ## ## check /etc/lighttpd/conf.d/*.conf for the configuration of modules. ## ####################################################################### ####################################################################### ## ## Some Variable definition which will make chrooting easier. ## ## if you add a variable here. Add the corresponding variable in the ## chroot example aswell. ## var.log_root = "/var/log/lighttpd" var.server_root = "/var/www" var.state_dir = "/var/run" var.home_dir = "/var/lib/lighttpd" var.conf_dir = "/etc/lighttpd" ## ## run the server chrooted. ## ## This requires root permissions during startup. ## ## If you run Chrooted set the the variables to directories relative to ## the chroot dir. ## ## example chroot configuration: ## #var.log_root = "/logs" #var.server_root = "/" #var.state_dir = "/run" #var.home_dir = "/lib/lighttpd" #var.vhosts_dir = "/vhosts" #var.conf_dir = "/etc" # #server.chroot = "/srv/www" ## ## Some additional variables to make the configuration easier ## ## ## Base directory for all virtual hosts ## ## used in: ## conf.d/evhost.conf ## conf.d/simple_vhost.conf ## vhosts.d/vhosts.template ## var.vhosts_dir = server_root + "/vhosts" ## ## Cache for mod_compress ## ## used in: ## conf.d/compress.conf ## var.cache_dir = "/var/cache/lighttpd" ## ## Base directory for sockets. ## ## used in: ## conf.d/fastcgi.conf ## conf.d/scgi.conf ## var.socket_dir = home_dir + "/sockets" ## ####################################################################### ####################################################################### ## ## Load the modules. include "modules.conf" ## ####################################################################### ####################################################################### ## ## Basic Configuration ## --------------------- ## server.port = 80 ## ## Use IPv6? ## #server.use-ipv6 = "enable" ## ## bind to a specific IP ## #server.bind = "localhost" ## ## Run as a different username/groupname. ## This requires root permissions during startup. ## server.username = "lighttpd" server.groupname = "lighttpd" ## ## enable core files. ## #server.core-files = "disable" ## ## Document root ## server.document-root = server_root + "/lighttpd" ## ## The value for the "Server:" response field. ## ## It would be nice to keep it at "lighttpd". ## #server.tag = "lighttpd" ## ## store a pid file ## server.pid-file = state_dir + "/lighttpd.pid" ## ####################################################################### ####################################################################### ## ## Logging Options ## ------------------ ## ## all logging options can be overwritten per vhost. ## ## Path to the error log file ## server.errorlog = log_root + "/error.log" ## ## If you want to log to syslog you have to unset the ## server.errorlog setting and uncomment the next line. ## #server.errorlog-use-syslog = "enable" ## ## Access log config ## include "conf.d/access_log.conf" ## ## The debug options are moved into their own file. ## see conf.d/debug.conf for various options for request debugging. 
## include "conf.d/debug.conf" ## ####################################################################### ####################################################################### ## ## Tuning/Performance ## -------------------- ## ## corresponding documentation: ## http://www.lighttpd.net/documentation/performance.html ## ## set the event-handler (read the performance section in the manual) ## ## possible options on linux are: ## ## select ## poll ## linux-sysepoll ## ## linux-sysepoll is recommended on kernel 2.6. ## server.event-handler = "linux-sysepoll" ## ## The basic network interface for all platforms at the syscalls read() ## and write(). Every modern OS provides its own syscall to help network ## servers transfer files as fast as possible ## ## linux-sendfile - is recommended for small files. ## writev - is recommended for sending many large files ## server.network-backend = "linux-sendfile" ## ## As lighttpd is a single-threaded server, its main resource limit is ## the number of file descriptors, which is set to 1024 by default (on ## most systems). ## ## If you are running a high-traffic site you might want to increase this ## limit by setting server.max-fds. ## ## Changing this setting requires root permissions on startup. see ## server.username/server.groupname. ## ## By default lighttpd would not change the operation system default. ## But setting it to 2048 is a better default for busy servers. ## ## With SELinux enabled, this is denied by default and needs to be allowed ## by running the following once : setsebool -P httpd_setrlimit on server.max-fds = 2048 ## ## Stat() call caching. ## ## lighttpd can utilize FAM/Gamin to cache stat call. ## ## possible values are: ## disable, simple or fam. ## server.stat-cache-engine = "simple" ## ## Fine tuning for the request handling ## ## max-connections == max-fds/2 (maybe /3) ## means the other file handles are used for fastcgi/files ## server.max-connections = 1024 ## ## How many seconds to keep a keep-alive connection open, ## until we consider it idle. ## ## Default: 5 ## #server.max-keep-alive-idle = 5 ## ## How many keep-alive requests until closing the connection. ## ## Default: 16 ## #server.max-keep-alive-requests = 18 ## ## Maximum size of a request in kilobytes. ## By default it is unlimited (0). ## ## Uploads to your server cant be larger than this value. ## #server.max-request-size = 0 ## ## Time to read from a socket before we consider it idle. ## ## Default: 60 ## #server.max-read-idle = 60 ## ## Time to write to a socket before we consider it idle. ## ## Default: 360 ## #server.max-write-idle = 360 ## ## Traffic Shaping ## ----------------- ## ## see /usr/share/doc/lighttpd/traffic-shaping.txt ## ## Values are in kilobyte per second. ## ## Keep in mind that a limit below 32kB/s might actually limit the ## traffic to 32kB/s. This is caused by the size of the TCP send ## buffer. ## ## per server: ## #server.kbytes-per-second = 128 ## ## per connection: ## #connection.kbytes-per-second = 32 ## ####################################################################### ####################################################################### ## ## Filename/File handling ## ------------------------ ## ## files to check for if .../ is requested ## index-file.names = ( "index.php", "index.rb", "index.html", ## "index.htm", "default.htm" ) ## index-file.names += ( "index.xhtml", "index.html", "index.htm", "default.htm", "index.php" ) ## ## deny access the file-extensions ## ## ~ is for backupfiles from vi, emacs, joe, ... 
## .inc is often used for code includes which should in general not be part ## of the document-root url.access-deny = ( "~", ".inc" ) ## ## disable range requests for pdf files ## workaround for a bug in the Acrobat Reader plugin. ## $HTTP["url"] =~ "\.pdf$" { server.range-requests = "disable" } ## ## url handling modules (rewrite, redirect) ## #url.rewrite = ( "^/$" => "/server-status" ) #url.redirect = ( "^/wishlist/(.+)" => "http://www.example.com/$1" ) ## ## both rewrite/redirect support back reference to regex conditional using %n ## #$HTTP["host"] =~ "^www\.(.*)" { # url.redirect = ( "^/(.*)" => "http://%1/$1" ) #} ## ## which extensions should not be handle via static-file transfer ## ## .php, .pl, .fcgi are most often handled by mod_fastcgi or mod_cgi ## static-file.exclude-extensions = ( ".php", ".pl", ".fcgi", ".scgi" ) ## ## error-handler for status 404 ## #server.error-handler-404 = "/error-handler.html" #server.error-handler-404 = "/error-handler.php" ## ## Format: <errorfile-prefix><status-code>.html ## -> ..../status-404.html for 'File not found' ## #server.errorfile-prefix = "/srv/www/htdocs/errors/status-" ## ## mimetype mapping ## include "conf.d/mime.conf" ## ## directory listing configuration ## include "conf.d/dirlisting.conf" ## ## Should lighttpd follow symlinks? ## server.follow-symlink = "enable" ## ## force all filenames to be lowercase? ## #server.force-lowercase-filenames = "disable" ## ## defaults to /var/tmp as we assume it is a local harddisk ## server.upload-dirs = ( "/var/tmp" ) ## ####################################################################### ####################################################################### ## ## SSL Support ## ------------- ## ## To enable SSL for the whole server you have to provide a valid ## certificate and have to enable the SSL engine.:: ## ## ssl.engine = "enable" ## ssl.pemfile = "/path/to/server.pem" ## ## The HTTPS protocol does not allow you to use name-based virtual ## hosting with SSL. If you want to run multiple SSL servers with ## one lighttpd instance you must use IP-based virtual hosting: :: ## ## $SERVER["socket"] == "10.0.0.1:443" { ## ssl.engine = "enable" ## ssl.pemfile = "/etc/ssl/private/www.example.com.pem" ## server.name = "www.example.com" ## ## server.document-root = "/srv/www/vhosts/example.com/www/" ## } ## ## If you have a .crt and a .key file, cat them together into a ## single PEM file: ## $ cat /etc/ssl/private/lighttpd.key /etc/ssl/certs/lighttpd.crt \ ## > /etc/ssl/private/lighttpd.pem ## #ssl.pemfile = "/etc/ssl/private/lighttpd.pem" ## ## optionally pass the CA certificate here. ## ## #ssl.ca-file = "" ## ####################################################################### ####################################################################### ## ## custom includes like vhosts. 
## #include "conf.d/config.conf" #include_shell "cat /etc/lighttpd/vhosts.d/*.conf" ## ####################################################################### ####################################################################### ### Custom Added by me #url.rewrite-once = (".*\.(js|ico|gif|jpg|png|css|jar|class)$" => "$0", "" => "/index.php") url.rewrite-once = ( ".*\?(.*)$" => "/index.php?$1", "^/js/.*$" => "$0", "^.*\.(js|ico|gif|jpg|png|css|swf |jar|class)$" => "$0", "" => "/index.php" ) # expire.url = ( "" => "access 1 days" ) include "myvhost-vhosts.conf" ####################################################################### Here is my Vhost file for lighttpd $HTTP["host"] =~ "192.168.8.35$" { server.document-root = "/var/www/lighttpd/qc41022012/public" server.errorlog = "/var/log/lighttpd/error.log" accesslog.filename = "/var/log/lighttpd/access.log" server.error-handler-404 = "/e404.php" } and here is my nginx.conf file user nginx; worker_processes 5; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/testsite/logs/access.log main; sendfile on; #tcp_nopush on; keepalive_timeout 65; #gzip on; # include /etc/nginx/conf.d/*.conf; ## I added this ## include /etc/nginx/sites-available/*; } Here is my NginX Vhost file server { server_name 192.168.8.91; access_log /var/log/nginx/myapps/logs/access.log; error_log /var/log/nginx/myapps/logs/error.log; root /var/www/html/myapps/public; location / { index index.html index.htm index.php; } location = /favicon.ico { return 204; access_log off; log_not_found off; } # location ~ \.php$ { # try_files $uri /index.php; # include /etc/nginx/fastcgi_params; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; # fastcgi_param SCRIPT_NAME $fastcgi_script_name; location ~ \.php.*$ { rewrite ^(.*.php)/ $1 last; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; # fastcgi_intercept_errors on; # fastcgi_param SCRIPT_FILENAME $document_root/index.php; # fastcgi_param PATH_INFO $uri; # fastcgi_pass 127.0.0.1:9000; # include fastcgi_params; } } We have a custom apps that we created that works great with lighttpd. I went through some headache also when we were trying to figure out how to make it work with lighttpd. this is the line that helps make it work in lighttpd. url.rewrite-once = ( ".*\?(.*)$" => "/index.php?$1", "^/js/.*$" => "$0", "^.*\.(js|ico|gif|jpg|png|css|swf |jar|class)$" => "$0", "" => "/index.php" ) but I couldn't figure out how to make it works in NginX. The webserver run just fine when we use the phpinfo.php test file. However as soon as I point it to my apps, nothing comes up. Check the error.log file and there's no error. Very mind boggling. I spent over 1 week trying to figure it out with no luck.. Please help?

    Read the article

  • Is it possible to extract contents of a ThinApp container?

    - by rsk82
    Is it possible to extract packages made by somebody else? If there is no general way of extracting
    such archives (which can be one huge exe, or a tiny exe starter plus a huge packed *.bin file
    holding the main files of the app that is to be run portably), is there an option in the
    compilation *.ini file, or some other way, to make a package extractable?

    I remember reading somewhere (it was mentioned on the VMware forums as far as I can recall; it was
    a crude tool coded for private use and I never managed to download it) that somebody created a
    tiny program that sits alongside the main portable application. If the application has an
    open/save file dialog, you can navigate from inside it to this helper, which virtually sits in
    Program Files next to the main app; the helper scans all files it can see, somehow distinguishes
    real files from virtual ones, and saves all the virtual files in a structure similar to the
    original compilation folder the portable app was built from. I know that is a very round-about way
    of doing things, but maybe the only feasible one. Nevertheless, any news on this front? Can
    antiviruses somehow unpack these things? Maybe they have to buy code or a license for that from
    VMware?

    Edit: I found http://communities.vmware.com/thread/257433?tstart=600 and am still trying to make
    sense of it. Wrong: that thread was about moving an old version of ThinApp to Windows 7.

    Read the article

  • How do I Setup Multiple Sites in HostGator Shared Hosting?

    - by cillosis
    I recently decided to consolidate all of my random projects into a single hosting account, as it
    was getting very expensive to run each on an individual hosting plan. I purchased the HostGator
    Baby plan, which allows hosting of multiple domains. You have to set it up with a root domain
    name, which is fine (I used my portfolio domain name). As for file structure, I wanted a folder
    for each site in /public_html, so it looks like this:

      public_html/
        myportfolio.com/
          ... my files ...
        anothersite.com/
          ... my files ...
        thirdsite.com/
          ... my files ...

    I set up add-on domains and pointed them to their respective folders, which works fine. My problem
    is that the root domain (e.g. myportfolio.com) expects its files to be at the root of /public_html
    rather than inside the folder I created. I set up a redirect pointing requests for myportfolio.com
    to myportfolio.com/myportfolio.com/, which works initially, except that (at least in my WordPress
    installation) it still references its root folder as public_html.

    TL;DR: what is the best way to set up multiple-site hosting in a shared hosting environment (i.e.
    I can't set up vhosts)? Does anybody know of any tutorials or videos that walk through this more
    clearly? Thanks.
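
    A hedged sketch of the .htaccess approach commonly used on cPanel-style shared hosting to make the
    primary domain serve from its own subfolder without a visible redirect; the domain and folder
    names are the ones from the question, and the exact rule set is an assumption rather than
    HostGator-confirmed configuration:

      # /public_html/.htaccess
      RewriteEngine On
      # only rewrite requests for the primary domain
      RewriteCond %{HTTP_HOST} ^(www\.)?myportfolio\.com$ [NC]
      # avoid rewriting requests that already point into the subfolder
      RewriteCond %{REQUEST_URI} !^/myportfolio\.com/
      # serve everything from the subfolder internally, so the URL in the browser stays clean
      RewriteRule ^(.*)$ /myportfolio.com/$1 [L]

    WordPress inside the subfolder would then typically want its Site URL set to the bare domain while
    its files stay in the folder, but that part is a WordPress setting rather than the rewrite.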

    Read the article

  • Always failed in connecting to the Outlook Anywhere through TMG 2010 with certificate ?

    - by Albert Widjaja
    Hi, I have successfully published Exchange ActiveSync using TMG 2010, and OWA internally only, but
    somehow when I tried to publish Outlook Anywhere it failed (as can be seen from
    https://www.testexchangeconnectivity.com).

    Settings: in IIS 7 I have unchecked "Require SSL" and set client certificates to "Ignore".

    Exchange CAS settings:

      ServerName                 : ExCAS02-VM
      SSLOffloading              : True
      ExternalHostname           : activesync.domain.com
      ClientAuthenticationMethod : Basic
      IISAuthenticationMethods   : {Basic}
      MetabasePath               : IIS://ExCAS02-VM.domainad.com/W3SVC/1/ROOT/Rpc
      Path                       : C:\Windows\System32\RpcProxy
      Server                     : ExCAS02-VM
      AdminDisplayName           :
      ExchangeVersion            : 0.1 (8.0.535.0)
      Name                       : Rpc (Default Web Site)
      DistinguishedName          : CN=Rpc (Default Web Site),CN=HTTP,CN=Protocols,CN=ExCAS02-VM,CN=Servers,CN=Exchange Administrative.......
      Identity                   : ExCAS02-VM\Rpc (Default Web Site)
      Guid                       : 59873fe5-3e09-456e-9540-f67abc893f5e
      ObjectCategory             : domainad.com/Configuration/Schema/ms-Exch-Rpc-Http-Virtual-Directory
      ObjectClass                : {top, msExchVirtualDirectory, msExchRpcHttpVirtualDirectory}
      WhenChanged                : 18/02/2011 4:31:54 PM
      WhenCreated                : 18/02/2011 4:30:27 PM
      OriginatingServer          : ADDC01.domainad.com
      IsValid                    : True

    Test-OutlookWebServices results:

      1013 Error: When contacting https://activesync.domain.com/Rpc received the error
           The remote server returned an error: (500) Internal Server Error.
      1017 Error: [EXPR]-Error when contacting the RPC/HTTP service at
           https://activesync.domain.com/Rpc. The elapsed time was 0 milliseconds.

    https://www.testexchangeconnectivity.com testing result:

      Checking the IIS configuration for client certificate authentication.
      Client certificate authentication was detected.
      Additional Details: Accept/Require client certificates were found. Set the IIS configuration to
      Ignore Client Certificates if you aren't using this type of authentication.

    Environment: Windows Server 2008 (HT-CAS), Exchange Server 2007 SP1, TMG 2010 Standard, Outlook
    2007 client SP2. Any kind of help would be greatly appreciated. Thanks.

    Read the article

  • Running mod_php and suPHP at the same time

    - by BHare
    On Debian Lenny with PHP 5.2.x I was able to use mod_php for any PHP files not located in /home/
    and suPHP for all the PHP files in /home/. I did this because I needed a default php.ini (giving
    me all PHP features) for my websites in /var/www/, and I didn't want to have to change the owner
    of all the .php files from root; the /home/ PHP files got a separate php.ini without the dangerous
    features. This is what I had set up:

      <IfModule mod_suphp.c>
        <Directory /home/>
          AddType application/x-httpd-php .php .php3 .php4 .php5
          suPHP_AddHandler application/x-httpd-php
          suPHP_Engine on
          suPHP_ConfigPath /home/shared/
        </Directory>
      </IfModule>

    This worked perfectly, but recently I upgraded PHP to 5.3.5 from dotdeb (Lenny has no official PHP
    5.3). That had weird issues on Lenny, such as errors not displaying correctly and other little tid
    bits, so I decided to upgrade from Lenny to Squeeze. I uninstalled PHP (and with it suPHP) and
    reinstalled from the new source. I now have 5.3.3-7 on Debian Squeeze, but I cannot get mod_php
    and suPHP to run at the same time anymore: mod_php always wins, and there are no errors in the
    apache2 or suphp logs. If I disable mod_php then suPHP works. Is there something I am doing wrong?
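
    A hedged sketch of one way this split is often done, keeping the question's own directives and
    adding only a directive that tells mod_php to stand down inside /home/ so suPHP can claim those
    requests; treat it as an assumption to test, not a known-good Squeeze configuration:

      <IfModule mod_suphp.c>
        <Directory /home/>
          # stop mod_php from handling anything under /home/
          php_admin_flag engine off

          AddType application/x-httpd-php .php .php3 .php4 .php5
          suPHP_AddHandler application/x-httpd-php
          suPHP_Engine on
          suPHP_ConfigPath /home/shared/
        </Directory>
      </IfModule>

    The idea is that with both modules loaded, whichever one ends up registered for the handler wins;
    disabling the mod_php engine per-directory leaves suPHP as the only interpreter there.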

    Read the article

  • Outlook slow to open attachments

    - by Alistair McMillan
    When a colleague tries to open attachments in her email (Outlook 2003 talking to an Exchange 2007
    server) they take ages to open. The files are relatively small, all less than 1MB. We've tried
    creating a new Windows profile and new Outlook profiles for the user, but that hasn't made any
    difference, and when we access her account from someone else's PC the attachments open immediately
    there.

    The only thing that might provide a clue is that Process Monitor shows Outlook on her PC trying to
    write the file to a folder within the user's "Temporary Internet Files" folder with FAST I/O
    DISALLOWED errors. I can't find much useful information on that message online. What causes the
    FAST I/O DISALLOWED errors? And would that make opening attachments so incredibly slow that a
    < 1MB file takes a matter of minutes to open?

    UPDATE: I discovered that this isn't just an issue with Outlook. Other files accessed over the
    network show the same FAST I/O DISALLOWED errors in Process Monitor. The problem is just more
    noticeable with Outlook, because although other applications take a while to open files, it isn't
    a matter of minutes.

    Read the article

  • Proftpd on Debian ignoring umask setting

    - by sodan
    I have found a solution for my problem. This is what I did: I added the following to my
    /etc/proftpd/proftpd.conf:

      <Limit SITE_CHMOD>
        DenyAll
      </Limit>

    The original problem: when I upload files to my FTP server, the umask I set is totally ignored and
    all files end up with permissions 644. The OS is Debian 5.0.3 and the FTP server is proftpd 1.3.1.
    The user logging in is called mug; he is a local user (not a virtual user) and is chrooted to his
    home directory /home/mug/. I tried the following things:

    1. Set the umask in /etc/proftpd/proftpd.conf:

         Umask 000 000

       This should result in 777 for directories and 666 for files, since the directory umask is
       applied to 777 and the file umask to 666. After that I of course restarted proftpd to be sure
       the config was reloaded.

    2. Set the umask for the user in /home/mug/.bashrc by adding:

         umask 0000

       After that I reloaded the .bashrc (source /home/mug/.bashrc) and checked the umask for the user
       by switching to him (su mug) and running umask, which printed 0000. So this worked.

    But still all my uploaded files end up with 644 permissions :( What am I doing wrong?
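
    A hedged sketch of how the two pieces usually fit together in proftpd.conf; the Umask values
    repeat the ones from the question, and placing them in a <Global> block is an assumption about the
    desired scope (the shell umask in .bashrc does not apply here, since proftpd does not run a login
    shell for FTP sessions):

      <Global>
        # file umask, then directory umask
        Umask 000 000
      </Global>

      # keep FTP clients from re-chmodding files to their own defaults after upload
      <Limit SITE_CHMOD>
        DenyAll
      </Limit>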

    Read the article

  • Variable size encrypted container

    - by Cray
    Is there an application similar to TrueCrypt, but one that can make variable-size containers, as
    opposed to the fixed-size or grow-only-to-a-certain-amount containers TrueCrypt can create? I want
    a container that can be mounted as a drive/folder, where the size of the outer container stays
    close to the total size of the files I put into the mounted folder, while still providing strong
    encryption. To put it another way, I want a program like TrueCrypt that not only grows the
    container automatically when I add files, but also shrinks it when files are deleted.

    I know there are issues, of course, and it could not work exactly like TrueCrypt, because
    TrueCrypt works at the sector level of the disk and leaves all filesystem control to the OS; so
    when I remove a file it might simply be left in place, or fragmentation might prevent simply
    truncating the volume. But perhaps a program could be built another way: instead of providing a
    sector-level interface, it would provide a filesystem-level interface, a filesystem inside a file
    that supports shrinking when files are deleted?
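
    A hedged example of the filesystem-level approach the question sketches: EncFS encrypts file by
    file on top of an ordinary directory, so the encrypted tree only occupies roughly what the plain
    files occupy and shrinks as they are deleted. The paths are placeholders:

      # create/mount: ~/.vault holds the encrypted files, ~/vault is the cleartext view
      encfs ~/.vault ~/vault

      # work with files through ~/vault as with any folder, then unmount
      fusermount -u ~/vault

    The trade-off against a TrueCrypt volume is that file sizes, counts and timestamps stay visible,
    which is the usual price of a filesystem-level rather than sector-level design.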

    Read the article

  • mysql mass insert data

    - by user12145
    Edit: I realized that if I construct a large multi-row query in memory, the speed increases almost
    tenfold:

      insert ignore into xxx(col1, col2) values('a',1), values('b',1), values('c',1)...

    Edit: since I have an index on the first column, the insert time creeps up as I insert more. Can I
    delay building the index until the end?

    Original: I'm using the following to batch insert 10 million rows into a MySQL db (not all at
    once, since they don't all fit into memory), and it's too slow (taking many hours). Should I use
    LOAD DATA INFILE to improve performance? I would have to create a second file to store all 10
    million rows, then load that into the db. Are there better ways?

      PreparedStatement st = con.prepareStatement(
          "insert ignore into xxx (col1, col2) " + " values (?, 1)");
      Iterator d = data.iterator();
      while (d.hasNext()) {
          st.clearParameters();
          st.setString(1, (d.next()).toLowerCase());
          st.addBatch();
      }
      int[] updateCounts = st.executeBatch();
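
    A hedged sketch of the bulk-load path the question asks about; table and column names are the
    placeholders from the question, the file path is an assumption, and the DISABLE KEYS step only has
    its full effect on MyISAM tables (InnoDB keeps maintaining its indexes):

      -- write the 10 million values to a flat file first, one value per line, then:
      ALTER TABLE xxx DISABLE KEYS;

      LOAD DATA LOCAL INFILE '/tmp/col1_values.txt'
      IGNORE INTO TABLE xxx
      (col1)
      SET col2 = 1;

      ALTER TABLE xxx ENABLE KEYS;

    From JDBC, adding rewriteBatchedStatements=true to the connection URL makes executeBatch() send
    multi-row INSERTs, which is roughly the same win as building the large query by hand.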

    Read the article

  • PHP extension causes symbol lookup error

    - by Christian
    I installed, or rather tried to install, the NMCryptGate extension for PHP on my Debian 5.0.8
    server. I did this by compiling the sources, which finished with no error message, and phpinfo()
    shows the extension as enabled. BUT, whenever I call a method from this extension I get an error
    logged to the Apache error log:

      /usr/sbin/apache2: symbol lookup error: /usr/lib/php5/20060613+lfs/nmcryptgate.so: undefined symbol: nmlistalloc

    What is missing? I got two packages from the software company: the PHP module sources, and some
    files which, according to their paths inside the tar, should go to /usr/local/bin|doc|include|lib.
    I moved them there without any effect. Each of the two packages has its own config file, both
    looking almost the same:

      # libnmcryptgate.la - a libtool library file
      # Generated by ltmain.sh - GNU libtool 1.3.4 (1.385.2.196 1999/12/07 21:47:57)
      #
      # Please DO NOT delete this file
      # It is necessary for linking the library

      # The name that we can dlopen(3)
      dlname=''

      # Names of this library
      library_names='libnmcryptgate.so.1 libnmcryptgate.so libnmcryptgate.so'

      # The name of the static archive
      old_library=''

      # Libraries that this one depends upon
      dependency_libs=' -L. -L/usr/ssl/lib -L/usr/local/ssl/lib -L/usr/local/lib -lssl -lcrypto'

      # Version information for libnmcryptgate
      current=1
      age=0
      revision=29

      # Is this an already installed library
      installed=yes

      # Directory that this library needs to be installed in
      libdir='/usr/local/lib'

    I tried several ways to get it right: moving files, symlinking, changing configurations, always
    followed by restarting Apache, with no success. I guess I just have to move the files to the
    correct location or change the libdir inside the config files, but meanwhile I'm totally confused
    by the two packages: do I need both, which config rules what, do I have to use the libdir
    variable, and for what? Can anybody point me to the source of my failure? Thank you in advance,
    regards, Christian
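
    A hedged diagnostic sketch; the module path is the one from the error message, and the guess being
    tested is that nmlistalloc lives in libnmcryptgate.so under /usr/local/lib, which the runtime
    linker is not searching:

      # see which shared libraries the PHP module wants and which ones are not being found
      ldd /usr/lib/php5/20060613+lfs/nmcryptgate.so

      # check whether the vendor library actually exports the missing symbol
      nm -D /usr/local/lib/libnmcryptgate.so.1 | grep nmlistalloc

      # if it does, make /usr/local/lib known to the runtime linker, refresh the cache, restart Apache
      echo /usr/local/lib > /etc/ld.so.conf.d/local.conf
      ldconfig
      /etc/init.d/apache2 restart

    If nm shows the symbol but ldd lists the library as "not found", the ldconfig step is the likely
    fix; if the symbol is absent everywhere, the module was probably built against a different version
    of the vendor library.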

    Read the article

  • SQL Server: restoring a backup results in an error.

    - by Mario
    I have a database in dev (SQL Server 2005 on Windows Server 2008) that I need to move to prod (SQL
    Server 2000 on Windows Server 2003). My process is as follows:

    1. Log in to dev, open SQL Server Management Studio.
    2. Right click on the database | Tasks | Backup. Keep all default options (full backup etc.).
    3. Move the .bak file locally to prod (no network drive), log in to prod, open SQL Server
       Enterprise Manager.
    4. Right click the Databases node | All Tasks | Restore database.
    5. Change "Restore as database" to reflect the same database name.
    6. Click the radio button 'From device'. Click 'Select Devices'.
    7. Click Restore from: Add..., browse to the .bak file (small, only 6MB).

    Now I am ready to restore the database, so I click OK and get the following error:

      "The media family on device 'E:...bak' is incorrectly formed. SQL Server cannot process this
      media family. RESTORE DATABASE is terminating abnormally."

    This error is immediate. I have tried a few different variations of this: restoring the db to the
    dev machine with a different db name and log file names (where it originated), creating an empty
    database with the same physical path to the files before trying to restore over it, and making a
    few different .bak files and making sure they are verified before uploading them to prod. I know
    for a fact the directory for the .mdf and .ldf files exists on prod, though the files themselves
    don't exist.

    If, before I click OK to restore, I go to the Options tab instead, I get the following error:

      Error 3241: The media family on device 'E:...bak' is incorrectly formed. SQL Server cannot
      process this media family. RESTORE FILELIST is terminating abnormally.

    Anyone have any bright ideas?
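
    A hedged check that can be run on each server before attempting the restore; the path is a
    placeholder for the actual .bak location:

      -- shows which server version wrote the backup and whether the media set is readable
      RESTORE HEADERONLY FROM DISK = 'E:\backups\mydb.bak';
      RESTORE VERIFYONLY FROM DISK = 'E:\backups\mydb.bak';

    If HEADERONLY succeeds on the 2005 box but fails with the same 3241 error on the 2000 box, the
    media itself is fine and the older server simply cannot read a backup written by a newer release;
    SQL Server backups do not restore to earlier versions, so the usual route down is scripting the
    schema and transferring the data (DTS/bcp) instead of RESTORE.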

    Read the article

  • Symantec Endpoint Protection Virus Definitions

    - by Gus Denton
    I have done some Googling but I cannot get a definitive answer, certainly not from the Symantec
    KB. I have a virtualised Windows 2003 R2 server, 32-bit. It has been provisioned to me with the
    Symantec Endpoint Protection 11.0.62xxx CLIENT (not a definitions server). The directory
    C:\Program Files\Common Files\Symantec Shared\VirusDefs is 750MB. It doesn't contain .tmp
    directories, so it is NOT a corrupt definitions server; it does contain directories named with a
    date pattern (YYYYMMDD.xxx). Some of these folders are 12 months old and I would like to recover
    the space.

    The Symantec forums are full of this topic, but a lot of the postings link back to documents that
    are not specific to the Endpoint Protection client. It appears that I should be able to delete the
    older folders and all will be OK after a service restart, however there is a warning about having
    LiveUpdate Administrator installed. Firstly, I have no idea if I have this installed; how do I
    check? And secondly, can I just ditch these old files and restart?

    Regards, Gus Denton, Learning and Teaching, Uni of New South Wales, Sydney, Australia.

    For those trying to assist me, thank you. I have followed some instructions found on the Symantec
    site and assumed that the response from Nixphoe would resolve my issue. It appears that, as I am
    on a provisioned VM from a central IT unit, I cannot run the Symantec commands (smc -stop) from
    the Run prompt with the admin credentials I have. Basically I need to claw back some disk space on
    the C: drive, which is being filled up with WSUS patches and Symantec files. I have managed to
    delete one Symantec cache through the LiveUpdate control panel and recovered 470MB. My last
    question for those more experienced than myself: can I simply remove, say, the two oldest virus
    definition folders without completely foobaring Endpoint Protection and the server?

    Regards, Gus

    Read the article

  • Copying symbolic links and filenames with special characters to NAS

    - by Mr E
    I have a new Western Digital My Book Live NAS and am trying to copy files from an old drive to it.
    I'm using Ubuntu 12.04 and I've mounted the drive by browsing the network in Nautilus and choosing
    a shared folder configured on the NAS; the share is then automatically mounted at ".gvfs/files on
    mybooklive". There are two problems so far:

    1. File names and directory names containing certain characters (e.g. : or |). Attempting to copy
       these results in the error message:

         cp: cannot stat `/path/to/destination.filename': Invalid argument

    2. Symbolic links. In Nautilus I get the error message: "Symlinks not supported by backend".

    My questions are: can I connect to the NAS, or configure it, so that I can copy my files without
    these problems? (In case it matters, I don't need Windows compatibility.) If not, what can I do to
    identify all the problem files, and can I do anything to automatically fix my filenames? Please
    let me know if any of this needs clarification; I'm not too familiar with all of this, so I may
    have left out some useful information.
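
    A hedged sketch for the "identify and fix the filenames" part; the source path is a placeholder,
    and the rename call assumes the Perl rename utility that Ubuntu 12.04 ships as /usr/bin/rename:

      # list every file or directory whose name contains a character the share rejects
      find /media/olddrive -name '*[:|]*'

      # same traversal, but rewrite the offending characters to '-'
      # (-depth renames the deepest entries first, so directories change after their contents)
      find /media/olddrive -depth -name '*[:|]*' -execdir rename 's/[:|]/-/g' {} +

    Symlinks are a separate issue: the gvfs/SMB path has no way to represent them, so they either need
    to be copied as the files they point to (e.g. rsync with --copy-links) or skipped.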

    Read the article

  • Symbolic directory link shared in domain

    - by Sabre
    We have a file server running 2008 R2 STD; it is a member server in a 2008 AD domain. I need to
    relocate some of the files and directories and would like to do it behind the scenes, more or less
    without impacting the users. (The reason is that some of the files, due to recent software
    changes, HAVE to be located locally on one of the workstations, but they can be accessed by other
    applications remotely.)

    So symbolic links seem the panacea here. I moved a directory to another network share in the same
    domain (on a Windows 7 Professional machine), created a symlink to it in the location it used to
    be in, named it the same thing, and to the local user it seems almost transparent: when logged
    into the desktop of the file server, I can go to the directory, open the link, and it leaps to the
    other share as if it were local, exactly what would be expected. Then I tried it from another
    client computer (Windows 7 Professional as well), went through the normal provisioning of R2R and
    L2R with fsutil... no joy. What I am getting is an access denied error, "Logon failure: Unknown
    username or bad password.", using the same account that I log on to the file server with locally
    (which happens to be the domain admin). So I cannot believe it is telling the truth; I assume the
    credentials I used to connect to the first share are not being passed all the way through the
    symlink.

    The end result I want is for users on the domain to browse to share A, which contains a mixture of
    directories/files that reside there and symlinks to directories/files on the second machine over
    the network in the same domain. Possible? Or am I misunderstanding how the symlink should work?

    Read the article

  • Data recovery on a corrupted 3TB disk

    - by Mark K Cowan
    Short version: I probably need software to run a deep-scan recovery (ideally on Linux) to find
    files on an NTFS filesystem. The file data is intact, but the references are no longer present;
    this is analogous to recovering data from a "quick-formatted" partition. Hopefully there is a
    smarter way available than a deep scan, one which would recover filenames and possibly paths.

    Long version: I have a 3TB disk containing a load of backups. Windows 7 SP1 refused to detect the
    disk when plugged in directly via SATA, so I put it on a USB/SATA adaptor, which seemed to work at
    first; the adaptor probably does not support disks over 2.2TB though. Windows first asked me if I
    wanted to 'format' the disk, then later showed me most of the contents, but some folders were
    inaccessible. I stupidly decided to run CHKDSK on my backup disk, which made the folders
    accessible but also left them empty.

    I connected the disk via SATA to my main PC (Arch Linux) and tried:

      testdisk
      ntfsundelete
      ntfsfix --no-action   (to look for diagnostically relevant faults; the disk was "OK" though)

    to no avail, as the file references in the tables had presumably been zeroed out by CHKDSK rather
    than removed with a typical journal'd deletion. If it is useful at all, a majority of the files
    that I want to recover are JPEG, Photoshop PSD, and MPEG-3/MPEG-4/AVI/MKV files.

    If worst comes to worst, I'll just design my own sector scanner and use some simple
    heuristic-driven analysis to recover raw binary blocks of data from the disk which appear to match
    the structures of the above file types. I am unfamiliar with the exact workings of NTFS, but I
    used to be proficient at recovering FAT32 systems with just a hex editor, so I can provide any
    useful diagnostic information if you let me know how to find it!

    My priorities, in ascending order of importance for choosing the accepted answer:

      1. Restores directory structure
      2. Recovers many filenames in addition to the file data
      3. Is free / very cheap
      4. Runs on Linux
      5. Recovers a majority of file data

    The last point is the most important, but the more of the higher points you match the more rep
    you'll probably get :)
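
    A hedged pointer at the carving approach the question describes building by hand: photorec, which
    ships alongside testdisk (already tried above), does exactly this signature-based scan for
    JPEG/PSD/MPEG and friends, though it recovers neither filenames nor paths. The device name and
    destination are placeholders:

      # scan the whole disk, writing whatever it can carve into recup_dir.* folders
      photorec /log /d /mnt/recovery/recup_dir /dev/sdb

    Because it ignores the filesystem entirely, it addresses the "recovers a majority of file data"
    priority but none of the naming ones; recovering names would need a tool that re-reads the zeroed
    MFT instead.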

    Read the article

  • Switch Windows 8 from a hybrid MBR/GPT => GPT only on Macbook Pro Retina

    - by Sid
    I used Disk Utility + the Boot Camp wizard to set up my hard drive for Windows 8 (final MSDN).
    Somewhere in that process, the Apple tools turned my GPT disk into a hybrid MBR/GPT. All my 4
    primary MBR partitions are used up, so when I try turning on BitLocker in Windows 8, it complains
    about not finding a System drive. I know that on Windows 8 the BitLocker setup tries to create the
    200(?)MB system partition if it's missing. However, with all 4 partitions filled I suspect it
    can't create the system drive = it can't find it = it throws back an error like "BitLocker Setup
    could not find a target system drive. You may need to manually prepare your drive for BitLocker".
    I've already tried disabling hibernation, the swap file, etc.

    Now I'm thinking that if I were to get rid of the MBR scheme altogether, perhaps I can be alright
    within the GPT world without MBR's 4 primary partitions limit. So, how can I get rid of the MBR
    tables on the hybrid scheme in a manner that still leaves Mac OS and Windows 8 in working
    condition?

    Details: hardware is the MacBook Pro Retina. Primary MBR partitions are consumed as follows:

      1. EFI partition
      2. HFS+ partition (encrypted, therefore "Apple_CoreStorage")
      3. HFS+ partition (Recovery partition, contains the unencrypted Mac bootloader)
      4. NTFS partition (Windows 8 all-in-one partition)

    diskutil list output:

      sid-mbpr:~ sid$ diskutil list
      /dev/disk0
         #:                      TYPE NAME            SIZE        IDENTIFIER
         0:     GUID_partition_scheme                 *251.0 GB   disk0
         1:                       EFI                  209.7 MB   disk0s1
         2:         Apple_CoreStorage                  160.0 GB   disk0s2
         3:                Apple_Boot Recovery HD      650.0 MB   disk0s3
         4:      Microsoft Basic Data Win8             90.1 GB    disk0s4

    GPT vs MBR addresses:

      sid-mbpr:~ sid$ sudo gptsync /dev/rdisk0
      Password:
      Current GPT partition table:
      #      Start LBA      End LBA  Type
      1             40       409639  EFI System (FAT)
      2         409640    312909639  Unknown
      3      312909640    314179175  Mac OS X Boot
      4      314179584    490233855  Basic Data

      Current MBR partition table:
      # A    Start LBA      End LBA  Type
      1              1       409639  ee  EFI Protective
      2         409640    312909639  ac  Apple RAID
      3      312909640    314179175  ab  Mac OS X Boot
      4 *    314179584    490233855  07  NTFS/HPFS

      Status: GPT partition of type 'Unknown' found, will not touch this disk.**

    **: Ignore this message; the gptsync tool is old and doesn't understand the UUID for
    "Apple_CoreStorage" / FileVault2 partitions. Since the LBA addresses are alright, it is safe to
    ignore.
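
    A hedged sketch of the gdisk route usually used to collapse a hybrid MBR back to a protective-only
    MBR; gdisk would need to be installed separately (or run from a Linux/Windows boot), the disk name
    matches the output above, and this is the kind of operation to attempt only with current backups
    of both OS installs:

      sudo gdisk /dev/disk0
      # x   -> enter the experts menu
      # n   -> create a new protective MBR (replaces the hybrid entries)
      # w   -> write the table and exit

    The GPT side is untouched, so OS X should keep booting; Windows 8 is the open question, because a
    BIOS/CSM-installed Windows relies on the hybrid MBR entries, while a pure EFI-installed Windows
    does not.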

    Read the article

  • How to generate thumbnails for less common video containers (mkv, ogm, mp4, flv, rmvb and mov) in Windows 7

    - by fluxtendu
    So how do I generate thumbnails for these containers? I know that installing "DivX Plus Tech
    Preview: MKV on Windows 7" does it for MKV, but I think only some registry changes are really
    necessary, and I want it for other containers too. If it's possible to avoid installing (always
    bloated) codec packs, that would be nice; maybe only installing ffdshow, or essential and separate
    codecs. (Some time ago I tried reg files meant for Vista, without success.)

    Update: I have installed Win7codecs, tweaked its settings a little, and got almost everything I
    want (I have also re-installed the relevant part of the DivX Plus Tech Preview to get something
    other than an all-black preview for MKV). Issues that I still want to resolve:

      - Find a cleaner and lighter method.
      - Almost all my rmvb and mov files get an all-black preview (installing Real Media / QuickTime
        Alternative doesn't help; is it the same with the official players?).
      - With almost all containers (avi, mkv, ogm, mpg) I have a few random files that don't get a
        preview. I can play them in WMP or another player and haven't found a pattern in the codecs
        used.
      - All wmv, flv and mp4 files have previews, but I have fewer files in these containers. (I clear
        my thumbnail cache when testing.)

    More generally I would like to understand how Windows handles containers and codecs to generate
    the previews. And software that would let me choose the previewed picture arbitrarily would be
    convenient too.

    Read the article

  • Is there any limit to AIX 5.3 pipe size ?

    - by snowflake
    Hello, I'm having trouble performing cat/tail/head operations on large files on AIX 5.3. When
    asking for a cat of several 1GB files redirected into another one:

      cat file1 file2 file3 > outputfile

    the output file is limited to 2GB ("cat: output error" and the result file is 2147483647 bytes).
    The filesystem is jfs2. I have successfully uploaded 10GB files to the filesystem through FTP
    without problems. I found nothing relevant in /etc/security/limits:

      default:
        fsize = -1
        core = 2097151
        cpu = -1
        data = 262144
        rss = 65536
        stack = 65536
        nofiles = 20000

    ulimit -a:

      core file size (blocks)      unlimited
      data seg size (kbytes)       245759
      file size (blocks)           unlimited
      max memory size (kbytes)     unlimited
      open files                   2000
      pipe size (512 bytes)        64
      stack size (kbytes)          32768
      cpu time (seconds)           unlimited
      max user processes           2048
      virtual memory (kbytes)      278527

    The problem does not occur on another AIX 5.3 server; I'm just looking for a configuration
    difference that might be the source of the problem. /etc/security/limits on the server without the
    problem:

      default:
        fsize = -1
        core = 2097151
        cpu = -1
        data = 262144
        rss = 65536
        stack = 65536
        nofiles = 20000

    ulimit -a on the server without the problem:

      core file size (blocks, -c)  1048575
      data seg size (kbytes, -d)   131072
      file size (blocks, -f)       unlimited
      max memory size (kbytes, -m) 32768
      open files (-n)              20000
      pipe size (512 bytes, -p)    64
      stack size (kbytes, -s)      32768
      cpu time (seconds, -t)       unlimited
      max user processes (-u)      262144
      virtual memory (kbytes, -v)  unlimited

    Read the article

  • "Windows cannot find" file when opening Excel spreadsheet

    - by DanH
    For all of my Excel spreadsheets, when I attempt to open them (by double-clicking in Explorer) I
    get the message "Windows cannot find C:...". The files are there, and are valid zip files as seen
    by 7-Zip. There are no apparent lock files in the directories. I did just install Norton 360 over
    the weekend (replacing Kaspersky), but the Norton log shows no events related to Excel. However,
    while installing Norton I did reboot with some Excel files open. Presumably something is hosed in
    my Excel configuration, but I don't know what.

    Update (before actually posting): I found an article that suggested turning off the Advanced
    option "Ignore other applications that use DDE", then doing excel.exe /unregister followed by
    excel.exe /register. I tried this, but I suspect that the two Excel calls were ignored (Excel
    opened, but no obvious change). With that option off the spreadsheets load OK, but not with it on.
    And, curiously, spreadsheets load OK with the option on or off if I open Excel first and then open
    the spreadsheet in it. Does anyone have any idea what effect leaving that option off will have?

    Update 2: I tried running the "repair" option. It said it corrected a couple of config things
    (without saying what they were), but I still get a failure if I double-click an Excel file with
    the "Ignore other applications..." option checked.

    Update 3: I managed to fix this problem, but failed at the time to come back and say what I did,
    and now I can't remember for sure. I think it had something to do with "Options"/"Save" and some
    of the values there; something to do with AutoRecover, perhaps. (Possibly there was a file in
    recovery and I had to specify "Disable AutoRecover for this workbook" to let startup get past it.
    Or perhaps the AutoRecover file location was hosed.) Anyway, if it happens to someone else and you
    find the fix, post it below and I'll mark it answered.

    Read the article

  • Running git-svn with cron results in garbage in .git

    - by Paul
    I've set up a git-svn repo with cron to fetch from the svn repo daily. I have a script to do the
    fetching, and this is what is invoked by cron. Everything is fine with the repo, and the script
    works fine when executed manually. However, when it runs under cron, empty files get dropped into
    the .git directory. The files have names that look like base64 output, e.g. juTrvjP6m8 and
    kcKf3hu3b4, and two of them show up for every cron run. I thought these might be commit hashes,
    but they're not; git show says "unknown revision".

    I set up the repo as follows:

      git svn init http://svn.ip.addr/repo
      git svn fetch svn-remote

    My script looks like this:

      cd /gitsvn/dir
      git svn fetch svn-remote
      git svn push pub

    The last line pushes the repo to a separate (bare) public repo from which others can clone. I'm
    piping the output from the cron job to a file, which looks like this:

      fatal: unable to run 'git-svn'
      Counting objects: 21, done.
      Delta compression using up to 2 threads.
      Compressing objects: 100% (10/10), done.
      Writing objects: 100% (11/11), 59.08 KiB, done.
      Total 11 (delta 8), reused 0 (delta 0)
      To /gitpub/repo.git
         360faf5..a153b0d  trunk -> trunk

    The line "fatal: unable to run 'git-svn'" is alarming, but the fetch seems to go ahead anyway. Any
    suggestions? Where are these empty garbage files coming from, and how do I stop them? Am I in for
    bigger problems in the future? BTW, I'm using git 1.6.3.3.
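
    A hedged sketch of the usual cron-environment fix; the paths are assumptions about where git and
    its helpers live on this machine, and the script otherwise keeps the question's own commands:

      #!/bin/sh
      # cron runs with a minimal PATH, so "git svn" may fail to find the git-svn helper;
      # point PATH at git's binaries and its libexec directory explicitly
      PATH="/usr/bin:/usr/local/bin:$(git --exec-path)"
      export PATH

      cd /gitsvn/dir || exit 1
      git svn fetch svn-remote
      git svn push pub

    If the stray base64-looking names are temp files left behind by the failing helper invocation,
    they should stop appearing once the "fatal: unable to run 'git-svn'" line goes away; they are most
    likely safe to delete either way, since git only treats properly named objects and refs inside
    .git as meaningful.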

    Read the article
