Search Results

Search found 28288 results on 1132 pages for 'home directory'.


  • Advice on Computer Specs for overall development/general use machine

    - by Ender
    At the moment I am restricted to a laptop with 512MB of RAM, a 120GB HDD and a 1.5GHz Intel processor for all my development and general browsing needs, and as you can probably tell, using it for anything modern is a painful experience. As a result I've decided to buy myself a new desktop computer, one that will stand the test of time and can be upgraded easily. Rather than build the machine myself I've decided to go through Dell, as I've had good experiences with them when purchasing computers for my family. I've had my eye on this one, as it has a good amount of RAM, a decently rated processor and isn't priced too badly: http://www1.euro.dell.com/uk/en/home/Desktops/inspiron-580/pd.aspx?refid=inspiron-580&s=dhs&cs=ukepp1&~oid=uk~en~20211~inspiron-580_d005827~~
        - Intel® Core™ i5 Processor 750 (2.66GHz, 8MB)
        - Genuine Windows® 7 Home Premium 64bit - English
        - Display Not Included
        - ATI Radeon™ HD 5450 1GB DDR3 graphics
        - 6144MB Dual Channel DDR3 [3x2048] Memory
        - 1TB (7200rpm) SATA Hard Drive
        - DVD +/- RW Drive (read/write CD & DVD) with DVD Burn software
        - 1 year of coverage included with your PC
        - McAfee® Security Centre - 15 Month Protection - English
    After the pain of using a slow laptop for all this time, the main thing I want is speed. I may look to play a couple of basic games on it, nothing too powerful. Obviously I'll be doing some development on it too, so it'll have to handle the latest IDEs and database tools like SQL Server fairly quickly. Finally, should I ever need to improve it, I'd like to be able to add more RAM and change some of the parts. I wouldn't have thought this would be a problem, but a few people I've spoken to have said that the amount of RAM the motherboard can handle isn't that great. Is this true? How long can I expect to be using this computer before it's too slow? Thanks in advance for the help.

    Read the article

  • Setting up virtualbox for outside access

    - by Morgan Green
    I have a computer running a server that my subdomain on my shared hosting account points to, i.e. subdomain.mydomain.org goes to my home server. What I want to do is access my VirtualBox servers through that subdomain on different ports. For example:
        Ubuntu VirtualBox Server 1: Username: Ubuntuhost1, Password: MyUbuntuHost1, Port: 4000, Internal IP: 192.168.1.60, External IP: 24.29.138.45
        Ubuntu VirtualBox Server 2: Username: UbuntuHost2, Password: MyUbuntuHost2, Port: 4001, Internal IP: 192.168.1.61, External IP: 24.29.138.45
    I want port 4000 to connect me to server 1 over RDP, and port 4001 to connect me to server 2, both through the same subdomain. The next issue is that even though I know from ifconfig what IP addresses the VirtualBox guests have, they don't show up on the router. If anyone knows how to configure this to work, please help me out, because I've been racking my brain over it.
    Edit, to clarify: my router forwards port 4000 to internal IP 192.168.1.63 (my Ubuntu host's internal IP address). When I go to my router's home page, the VirtualBox guest's internal IP address doesn't show in the attached-device listing, but I set up port forwarding to the VirtualBox internal IP anyway. My end goal is that when I connect to mydomain.org on port 3389 it takes me to my host computer's server, but when I connect to mydomain.org on port 4000 it redirects to my VirtualBox server. Is this even possible? Sorry, I'm trying to clarify as much as I can; I just don't know how else to explain my issue.
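
    A minimal sketch of one way to wire this up, assuming the VMs use NAT networking, the guests listen for RDP on 3389, and a reasonably recent VirtualBox; the VM names below are placeholders, not taken from the post:
        # forward host ports 4000/4001 to RDP (3389) inside each guest
        VBoxManage modifyvm "UbuntuServer1" --natpf1 "rdp1,tcp,,4000,,3389"
        VBoxManage modifyvm "UbuntuServer2" --natpf1 "rdp2,tcp,,4001,,3389"
        # the router then only needs to forward 4000 and 4001 to the VirtualBox
        # host (192.168.1.63); NAT guests never appear as separate devices on
        # the router, which would explain why they are missing from its list
    If the VMs were bridged instead of NATed, they would show up on the router as their own devices, and ports 4000/4001 could be forwarded straight to 192.168.1.60/61 (translated to 3389 on the router, if it supports port translation).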

    Read the article

  • Is git-annex appropriate for my scenario?

    - by Karel Bílek
    I have a git repository with source code that I want to put in the open on GitHub. However, I also have gigabytes of data that I don't want in the open or in the repo - it is big, proprietary, "burdened" with copyrights and so on. Those files are still logically part of the same project, though, and I would like some control over their history (basically, what git already does). Right now they live in a "data" directory in the repository, I have that directory ignored, and I've given up on getting them into git. However, I have read about git-annex and it seems it can do what I want. So I have two questions:
        1. Is git-annex appropriate for my scenario?
        2. How exactly should I use git-annex here - which commands should I use, and how?
    I have tried to read the official documentation, but it talks about use cases I don't care about. I have the data on one computer only and I don't think I will be moving it soon (it's nice to have the possibility, but that's not why I want to use git-annex). Also, the documentation is pretty hard to read.
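
    For the basic single-machine case, a sketch of the usual git-annex workflow (the "data" directory name is taken from the post; the description string is just an example):
        # inside the existing repository
        git annex init "laptop"          # turn the repo into an annex
        git annex add data/              # large files move into .git/annex and
                                         # are replaced by symlinks tracked by git
        git commit -m "Add data via git-annex"
        # the symlinks and their history live in git; the big content stays
        # local unless another remote is added and 'git annex copy/sync' is run
    Note that the symlinks (not the content) would still be pushed to GitHub, so the file names become public even though the data does not; if even the names are sensitive, keeping the data in a separate private repository may be the safer route.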

    Read the article

  • IIS 7, FastCGI, PHP and custom php.ini files

    - by Marlon
    I'm running PHP 5.3, FastCGI, and IIS 7 on Windows Server 2008. I have a site for which I would like to configure its own php.ini settings, but things aren't working as expected. I am following the tutorial located here. This is what I have done so far:
        1. Configured a new website with its own AppPool.
        2. Selected PHP 5.3.6 from the PHP Manager available on the website home in IIS (not the web server home, which sets the global version of PHP).
        3. Added the following lines to the <fastCgi> section of the applicationHost.config file located at system32/inetsrv/config:
            <application fullPath="C:\Program Files (x86)\PHP\v5.3\php-cgi.exe"
                         arguments="-d open_basedir=C:\inetpub\wwwroot\kickasswebsite.com"
                         maxInstances="4" idleTimeout="300" activityTimeout="30"
                         requestTimeout="90" instanceMaxRequests="200" protocol="NamedPipe"
                         queueLength="1000" flushNamedPipe="false" rapidFailsPerMinute="10">
              <environmentVariables>
                <environmentVariable name="PHPRC" value="c:\inetpub\wwwroot\kickasswebsite.com" />
              </environmentVariables>
            </application>
        4. Created a php.ini file in C:\inetpub\wwwroot\kickasswebsite.com (the root of the website) containing:
            register_globals = on
        5. Ran test.php, which simply outputs everything phpinfo() returns.
    At this point I observe that the global setting for register_globals is off (as it should be), but the local setting for register_globals is also off, even though I specified it differently in the php.ini file I created at the root of the site. Furthermore, I see these settings in the phpinfo() output:
        Configuration File (php.ini) Path           C:\Windows
        Loaded Configuration File                   C:\Program Files (x86)\PHP\v5.3\php.ini
        Scan this dir for additional .ini files     (none)
        Additional .ini files parsed                (none)
    What am I messing up, or is there a different way to go about this?
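
    One quick way to check, outside of IIS, whether that php-cgi binary honours PHPRC and picks up the per-site file (paths copied from the post; this is only a diagnostic sketch, not a fix):
        REM run from a command prompt on the server
        set PHPRC=C:\inetpub\wwwroot\kickasswebsite.com
        "C:\Program Files (x86)\PHP\v5.3\php-cgi.exe" -i | findstr /i /c:"Loaded Configuration"
        REM if this still reports the global php.ini, the per-site file is being
        REM ignored regardless of IIS, so the file name/location is the suspect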

    Read the article

  • How to change .htaccess file to work right in localhost?

    - by Manolo Salsas
    I have this snippet in my .htaccess file to prevent users from hotlinking the server's images:
        RewriteEngine On
        RewriteCond %{HTTP_REFERER} ^$ [OR]
        RewriteCond %{HTTP_REFERER} !^http://(www.)?itransformer.es/.*$ [NC]
        RewriteRule \.(gif|jpe?g|png|wbmp)$ http://itransformer.es [R,L]
    Of course it doesn't work on my localhost, and I don't know how to make it work there too. My guess is that I should replace the domain name with some wildcard. Any idea?
    Update: I finally found an answer thanks to @Chris's solution:
        RewriteCond %{HTTP_REFERER} ^$ [OR]
        RewriteCond %{HTTP_REFERER} ^https?://%{HTTP_HOST}/.*/usuarios/.*$ [NC]
        RewriteRule \.(gif|jpe?g|png|wbmp)$ http://%{HTTP_HOST} [R=301,L]
    The /usuarios/ directory is there because I only want to deny direct access to files inside that directory.
    Update 2: For some reason it stopped working again. In the end I think I found a better solution:
        RewriteCond %{REQUEST_FILENAME} .*/usuarios/.*$ [NC]
        RewriteRule \.(gif|jpe?g|png|wbmp)$ http://%{HTTP_HOST} [R=301,L]
    I say better because what I want to deny is direct access to a file (image).
    Update 3: Well, after a while I discovered the above wasn't exactly what I wanted, so the following is definitive:
        RewriteCond %{HTTP_REFERER} ^$ [OR]
        RewriteCond %{HTTP_REFERER} !^https?://itransformer.*$ [NC]
        RewriteRule /usuarios/.*\.(gif|jpe?g|png|wbmp)$ - [R=404,L]
    Just two doubts. If I change the above to:
        RewriteCond %{HTTP_REFERER} ^$ [OR]
        RewriteCond %{HTTP_REFERER} !^https?://%{HTTP_HOST}.*$ [NC]
        RewriteRule /usuarios/.*\.(gif|jpe?g|png|wbmp)$ - [R=404,L]
    it doesn't work. I don't understand why, because %{HTTP_HOST} is itransformer on my localhost, so it should work. The second doubt is why the default 404 page is shown instead of my custom page (which is shown for all other 404 responses).
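
    On the first doubt, the likely explanation (standard mod_rewrite behaviour, not verified against this exact setup) is that server variables such as %{HTTP_HOST} are only expanded in the TestString on the left-hand side of RewriteCond; the right-hand CondPattern is a plain regular expression, so "%{HTTP_HOST}" there is matched literally and never equals the referer. A sketch that sidesteps the issue by listing the live host and localhost explicitly (the localhost alternative and optional port are additions for local testing):
        RewriteCond %{HTTP_REFERER} ^$ [OR]
        RewriteCond %{HTTP_REFERER} !^https?://(www\.)?(itransformer\.es|localhost)(:[0-9]+)?/ [NC]
        RewriteRule /usuarios/.*\.(gif|jpe?g|png|wbmp)$ - [R=404,L]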

    Read the article

  • Can't mv files between directories on vsftpd

    - by frankyue
    I enabled this in vsftpd.conf:
        chroot_local_user=YES
        chroot_list_enable=YES
        chroot_list_file=/etc/vsftpd.chroot_list
        user_config_dir=/etc/vsftpd_user_conf
    and here is the per-user setting for ftpupload in the vsftpd_user_conf directory:
        local_root=/mnt/upload
    But /mnt/upload is bind-mounted from another directory:
        /mnt/upload on /opt/upload type none (rw,bind)
    Here is the listing of /mnt/upload:
        rough_images/  shoes-pentland/  vendor-upload/  shooting/
    Additionally, the shooting/ directory is bind-mounted from another place:
        /mnt/upload/shooting on /mnt/shooting none (rw,bind)
    Now here is the problem. When I use an FTP client to move files between the directories, it fails: files can be moved between any directories except the shooting one. The permissions are right - I can move any files between these directories successfully using su ftpupload. Does that mean vsftpd doesn't support bind mounts? Here is the vsftpd.conf:
        listen=YES
        anonymous_enable=NO
        local_enable=YES
        write_enable=YES
        local_umask=000
        dirmessage_enable=YES
        use_localtime=YES
        xferlog_enable=YES
        connect_from_port_20=YES
        chown_uploads=YES
        chown_username=app
        xferlog_std_format=NO
        log_ftp_protocol=YES
        chroot_local_user=YES
        chroot_list_enable=YES
        chroot_list_file=/etc/vsftpd.chroot_list
        user_config_dir=/etc/vsftpd_user_conf
        ls_recurse_enable=YES
        secure_chroot_dir=/var/run/vsftpd/empty
        pam_service_name=vsftpd
        pasv_enable=YES
        pasv_max_port=***
        pasv_min_port=***
        port_enable=YES
        pasv_address=***
        virtual_use_local_privs=YES
        tcp_wrappers=YES
    and here is the mtab:
        /mnt/upload /opt/upload none rw,bind 0 0
        /mnt/upload/shooting /mnt/shooting none rw,bind 0 0
    All of the permissions under /mnt/upload are the same:
        drwxrwxrwx * ftpupload app
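
    A guess, not confirmed from the post: an FTP "move" is a server-side rename(2), and rename fails with EXDEV whenever source and destination sit on different mounts - including two bind mounts of the same underlying filesystem - whereas mv from a shell silently falls back to copy+delete, which would explain why it works under su ftpupload. A quick way to check (file names are placeholders):
        # run as the FTP user and watch the rename() result
        su - ftpupload -s /bin/bash
        strace -f -e trace=rename \
            mv /mnt/upload/rough_images/some-test-file /mnt/upload/shooting/ 2>&1 | grep rename
        # if rename() returns EXDEV here, the fix is to lay out the mounts so
        # the whole FTP tree is a single mount (or accept copy+delete moves)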

    Read the article

  • Hadoop streaming job on EC2 stays in "pending" state

    - by liamf
    I'm experimenting with Hadoop and Streaming using the Cloudera CDH3 distribution on Ubuntu. I have valid data in hdfs:// ready for processing and wrote a little streaming mapper in Python. When I launch a mapper-only job using:
        hadoop jar /usr/lib/hadoop/contrib/streaming/hadoop-streaming*.jar \
            -file /usr/src/mystuff/mapper.py -mapper /usr/src/mystuff/mapper.py \
            -input /incoming/STBFlow/* -output testOP
    hadoop duly decides it will use 66 mappers on the cluster to process the data. The testOP directory is created on HDFS and a job_conf.xml file is created, but the job tracker UI on port 50030 never shows the job moving out of the "pending" state and nothing else happens. CPU usage stays at zero (the job is created, though). If I give it a single file instead of the entire directory as input, same result (except Hadoop decides it needs 2 mappers instead of 66). I also tried launching jobs with the "dumbo" Python utility: same result, permanently pending. So I am missing something basic: could someone help me out with what I should look for? The cluster is on Amazon EC2. Firewall issues maybe: ports are enabled explicitly, case by case, in the cluster security group.
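
    A few CDH3-era commands that may help narrow it down, run on the master node (a permanently pending job with zero CPU usually means no TaskTrackers have registered with the JobTracker, which on EC2 is very often a security-group or hostname issue; the port number below is the CDH default and an assumption here):
        hadoop job -list                    # confirm the job is queued
        hadoop job -list-active-trackers    # empty output = no TaskTrackers have
                                            # checked in with the JobTracker
        hadoop dfsadmin -report             # sanity-check that DataNodes are up
        # if the tracker list is empty, check a worker's TaskTracker log and the
        # security-group rule for the JobTracker port (8021 by default on CDH)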

    Read the article

  • Lighttpd mod_accesslog not logging fastcgi requests

    - by zepatou
    I have recently installed lighttpd to serve a Python script via mod_fastcgi. Everything works fine except that the requests handled by mod_fastcgi are not logged in the access.log file (requests on port 80 are logged, though). My lighttpd version is 1.4.28 on Debian 6.0. I used the same configuration on an Ubuntu 10.04 server with lighttpd 1.4.26 and it worked. Here is my config.
    lighttpd.conf:
        server.modules = (
            "mod_access",
            "mod_alias",
            "mod_accesslog",
            "mod_compress",
        )
        server.document-root = "/var/www/"
        server.upload-dirs = ( "/var/cache/lighttpd/uploads" )
        server.errorlog = "/home/log/lighttpd/error.log"
        index-file.names = ( "index.php", "index.html", "index.htm", "default.htm", "index.lighttpd.html" )
        accesslog.filename = "/home/log/lighttpd/access.log"
        url.access-deny = ( "~", ".inc" )
        static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" )
        server.pid-file = "/var/run/lighttpd.pid"
        include_shell "/usr/share/lighttpd/create-mime.assign.pl"
        include_shell "/usr/share/lighttpd/include-conf-enabled.pl"
    conf-enabled/10-fastcgi.conf:
        server.modules += ( "mod_fastcgi" )
        fastcgi.server = (
            "/" => ( (
                "min-procs" => 1,
                "check-local" => "disable",
                "host" => "127.0.0.1",  # local
                "port" => 3000
            ), )
        )
    Any ideas?
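
    One low-effort way to see what lighttpd does with those requests is its built-in request-handling trace (a standard lighttpd option; this only shows where each request goes, it is not a fix):
        # lighttpd.conf -- sketch, remove again once done debugging
        debug.log-request-handling = "enable"
        # the trace is written to server.errorlog and shows, per request, which
        # modules (including mod_accesslog and mod_fastcgi) handled it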

    Read the article

  • Are SATA II and SATA 3.0 Gbps compatible?

    - by Johnny Maelstrom
    I am trying to check that if I buy a new internal HDD it will work in the NAS I am buying. Currently I'm confused about the naming schemes and, once that is resolved, about whether the parts are compatible. I will gladly rework this question to be more general if there isn't already an article clearing up the confusion around SATA naming and standards. I see similar, but not identical, questions and will accept this as a duplicate if it is judged to be one. The specifications on the eCommerce site for the NAS say "Controller Interface Type: Serial ATA-150", while the manufacturer's product home page says "Compatible with SATA and SATA II HDD". The specifications on the eCommerce site for the hard drives say "Interface Type: Serial ATA-300", while the manufacturer's product home page says "Interface: SATA 3.0 Gbps". Wikipedia says many things about the different naming conventions, the closest being: "SATA II 3.0 Gbit/s, which was colloquially referred to as 'SATA 3G' [bps] or 'SATA 300' [MB/s], since 1.5 Gbit/s SATA I and 1.5 Gbit/s SATA II were referred to as both 'SATA 1.5G' [b/s] or 'SATA 150' [MB/s]. Therefore, they will operate with negligible differences between them." Are SATA II and SATA 3.0 Gbps the same? I feel I'm tantalisingly close to getting a definitive answer here before I purchase, but really want to clear up these naming schemes.

    Read the article

  • certutil -ping fails with 30 seconds timeout - what to do?

    - by mark
    Dear ladies and sirs. The certificate store on my Win7 box is constantly hanging. Observe:
        C:\> 1.cmd
        C:\> certutil -? | findstr /i ping
          -ping             -- Ping Active Directory Certificate Services Request interface
          -pingadmin        -- Ping Active Directory Certificate Services Admin interface
        C:\> set PROMPT=$P($t)$G
        C:\(13:04:28.57)> certutil -ping
        CertUtil: -ping command FAILED: 0x80070002 (WIN32: 2)
        CertUtil: The system cannot find the file specified.
        C:\(13:04:58.68)> certutil -pingadmin
        CertUtil: -pingadmin command FAILED: 0x80070002 (WIN32: 2)
        CertUtil: The system cannot find the file specified.
        C:\(13:05:28.79)> set PROMPT=$P$G
    Explanations:
        1. The first command shows that certutil has -ping and -pingadmin parameters.
        2. Trying either ping parameter fails with a 30-second timeout (the current time is shown in the prompt).
    This is a serious problem. It screws up all the secure communication in my app. If anyone knows how this can be fixed, please share. Thanks. P.S. 1.cmd is simply a batch file of these commands:
        certutil -? | findstr /i ping
        set PROMPT=$P($t)$G
        certutil -ping
        certutil -pingadmin
        set PROMPT=$P$G

    Read the article

  • After a few days of the server running fine with nginx, it starts throwing 499 and 502

    - by Abhay Kumar
    Nginx starts throwing 499 and 502 after running fine for a few days. The website is a Rails app using thin as the web server. Restarting nginx does not seem to help. Below is the nginx config under sites-enabled:
        upstream domain1 {
            least_conn;
            server 127.0.0.1:3009;
            server 127.0.0.1:3010;
            server 127.0.0.1:3011;
        }
        server {
            listen 80;  # default_server;
            server_name xyz.com *.xyz.com;
            client_max_body_size 5M;
            access_log /home/ubuntu/www/xyz/current/log/access.log;
            root /home/ubuntu/www/xyz/current/public/;
            index index.html;
            location / {
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_read_timeout 150;
                if (!-f $request_filename) {
                    proxy_pass http://domain1;
                    break;
                }
            }
        }
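
    For what it's worth, 499 is nginx's own code for "client closed the connection before the response was ready" and 502 means the upstream stopped answering, so the thin workers themselves are the first thing to check. A hedged sketch of proxy settings that at least let nginx fail over between the three workers instead of hanging on a dead one (standard nginx directives; the values are guesses, not tuned for this site):
        # inside the existing "location /" block
        proxy_connect_timeout 5;                      # fail fast on a dead worker
        proxy_read_timeout    150;
        proxy_next_upstream   error timeout http_502; # then try the next thin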

    Read the article

  • configuring apache with mod_mono for .net app

    - by Mystere Man
    I'm having a huge problem getting mod_mono and Apache configured to work correctly. I've had this working at one time, but I can't seem to figure out where I'm going wrong. I'm using mono-server4 and I'm trying to use a separate port from the main website. So in /etc/apache2/sites-available (with a link from sites-enabled) I have a vhost configuration that looks like this:
        <VirtualHost *:9999>
            ServerName XXX
            ServerAdmin web-admin@XXX
            DocumentRoot /var/xxx
            MonoServerPath XXX "/usr/bin/mod-mono-server4"
            MonoDebug XXX true
            MonoSetEnv XXX MONO_IOMAP=all
            MonoApplications XXX "/:/var/xxx"
            <Location "/">
                Allow from all
                Order allow,deny
                MonoSetServerAlias XXX
                SetHandler mono
                SetOutputFilter DEFLATE
                SetEnvIfNoCase Request_URI "\.(?:gif|jpe?g|png)$" no-gzip dont-vary
            </Location>
            <IfModule mod_deflate.c>
                AddOutputFilterByType DEFLATE text/html text/plain text/xml text/javascript
            </IfModule>
        </VirtualHost>
    I used mono-server4-admin to create the application:
        mono-server4-admin --path=/var/xxx --app=/XXX --port=9999
    When I start Apache, it gives the error:
        Syntax error on line 13 of /etc/apache2/sites-enabled/xxx: Server alias 'XXX, not found.
    This corresponds to the MonoSetServerAlias statement, so I commented it out, and with that Apache starts. However, when I try to access the site I get a 500 error, and the access log indicates that it's trying to access the app on port 80 rather than 9999. I'm not sure what the problem is here. Can anyone help me figure out where I went wrong? My mono-server4-hosts.conf contains this:
        # start /etc/mono-server4/conf.d/RMRSite/10_XXX
        Alias /XXX "/var/xxx"
        AddMonoApplications default "/XXX:/var/xxx"
        <Directory /var/xxx>
            SetHandler mono
            <IfModule mod_dir.c>
                DirectoryIndex index.aspx
            </IfModule>
        </Directory>
        # end /etc/mono-server4/conf.d/XXX/10_XXX
    Also, my /etc/mono-server4/conf.d/XXX/10_XXX contains this:
        # This is the configuration file for the XXX virtualhost
        path = /var/xxx
        alias = /XXX
        vhost = localhost
        port = 9999

    Read the article

  • How do I share a complete XP disk so it can be seen from a Windows 7 system? (To move all files to a

    - by Ian Ringrose
    This should be easier! (Both computers can see the internet, so I know the network itself is working.) I have a normal home network with a Windows XP machine on it and a new Windows 7 (64-bit) machine. So that I can transfer files to the new Windows 7 machine, I wish to share the complete disk (and all files) from the Windows XP machine and access them from the Windows 7 machine. Is there a step-by-step set of instructions for doing this anywhere? So far I have:
        - put both computers into the same workgroup
        - put the Windows 7 machine into "work network" mode so it can see the XP machine in the workgroup
        - shared the XP disk as read-only
    But when I try to access a lot of the folders on the XP disk, I am told I am not allowed to access them. (I was not asked for any passwords by the Windows 7 machine when I accessed the XP machine; the XP machine just has its default account with no password set on it.) The XP machine runs XP Home and hence has "simple file sharing" turned on, so it seems that even if I create an admin account (with a password) and connect with that account, it still comes in as "guest" on the XP machine. Choosing to share the folder I want access to, rather than the top of the disk drive, seems to work, but is a pain as I need to share each user's folder with a different share name. If the new computer were not a laptop, I would just plug the hard disk from the old machine into it, but being a laptop I don't have that option.

    Read the article

  • Custom Domain for Google App Engine and Google Apps

    - by Kevin
    I have set up and configured Google App Engine and Google Apps to use my custom domain with a CNAME 'www'. I have configured my DNS (via fasthosts.co.uk) with the CNAME and pointed it to ghs.google.com. I can access the website using the App Engine domain at capel-y-crwys.appspot.com, but I can't access it via my custom domain www.capelycrwys.org.uk. I have allowed several days for DNS propagation. The really strange thing is that I can access the app via my custom domain when I use the web browser on my Android mobile phone, but I can't access it from my home internet connection, my work internet connection or a friend's internet connection. I tried a few online web proxies and I could access the app via the custom domain. I posted this question on the Google forums (code.google.com/appengine/forum/?place=topic%2Fgoogle-appengine%2FfUP-G_0FKE4%2Fdiscussion) and a commenter said he could access the app via the custom domain. So why can't I access it directly via my home internet connection etc.? I've done loads of Google searching and even found a similar-sounding post here on Server Fault (serverfault.com/questions/208461/custom-domain-name-server-not-found-google-app-engine-and-google-apps), but it doesn't have an answer that helps me.
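
    Since the symptom follows the network/resolver rather than the app, comparing what different resolvers return for the CNAME should point at the culprit (a plain diagnostic sketch; the domain is copied from the post):
        dig +short www.capelycrwys.org.uk CNAME            # via the local resolver
        dig +short www.capelycrwys.org.uk CNAME @8.8.8.8   # via a public resolver
        # if the first query returns nothing (or something other than
        # ghs.google.com.) while the second is fine, the ISP or home-router
        # resolver is serving stale or broken data for the zone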

    Read the article

  • lighttpd: why does using a port >= 9000 not work properly?

    - by yejinxin
    I have a lighttpd server which works normally: I can access the website from outside (non-localhost) via http://vm.aaa.com:8080. Let's just assume that it's a simple static website, without PHP or MySQL. Now I want to copy this website as a test one, on another port, on the same machine, and I do not want to use virtual hosts. So I just copied the whole file tree of the original server, including lighttpd's bin/, conf/, htdocs/ and lib/ folders, and made the required changes, including editing lighttpd.conf. What confuses me is this: if I change the port to a number below 9000, it works perfectly. But if the port is changed to a number equal to or greater than 9000, lighttpd starts, but I cannot access the new website from outside, while I can access it from INSIDE (in the same LAN or on localhost). The access log from INSIDE looks like this:
        vm.aaa.com:9876 10.46.175.117 - - [08/Oct/2012:13:18:47 +0800] "GET / HTTP/1.1" 200 15 "-" "curl/7.12.1 (x86_64-redhat-linux-gnu) libcurl/7.12.1 OpenSSL/0.9.7a zlib/1.2.1.2 libidn/0.5.6"
    The command I use to start lighttpd is:
        bin/lighttpd -f conf/lighttpd.conf -m lib/ -D
    My lighttpd.conf is:
        server.modules = (
            "mod_access",
            "mod_accesslog",
        )
        var.rundir = "/home/work/lighttpd_9876"
        server.port = 9876
        server.bind = "0.0.0.0"
        server.pid-file = var.rundir + "/log/lighttpd.pid"
        server.document-root = var.rundir + "/htdocs/"
        var.cronolog_path = "/home/work/lighttpd_9876/cronolog/sbin/cronolog"
        server.errorlog = ...
        accesslog.filename = ...
        ...
    So why is this happening? I've tried several different ports, always with the same result. Aren't all ports between 8000 and 65535 treated the same?
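
    Since lighttpd itself serves fine from inside, the usual suspects are a host firewall or an upstream filter that only opens selected port ranges. A couple of generic checks, nothing specific to this setup:
        netstat -tlnp | grep 9876        # confirm lighttpd listens on 0.0.0.0:9876
        iptables -L -n -v                # look for INPUT rules that only open
                                         # certain ports or port ranges
        # and from an outside machine:
        curl -v http://vm.aaa.com:9876/  # a timeout (rather than "connection
                                         # refused") usually means something is
                                         # dropping the traffic en route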

    Read the article

  • User-unique .vimrc file for servers as root user

    - by Scott
    I'm getting thrown into an IDE war at the office, where multiple users have root access on our servers and like to have everything their own way with vim. Unfortunately, our servers are locked down enough that if you want to do anything, you need root access. We get tired of typing sudo before each command, which would mean constantly typing in the wonderfully complex passwords mandated on us, so naturally we all just execute sudo su - upon login to avoid all of this (although that is obviously frowned upon). Of course, when it comes to vim and custom .vimrc files, we are often stepping on someone else's custom .vimrc, and these files contain whacked-out functionality that other users may know nothing about, much less have the patience to learn. When working as root on a Linux box, is there any way for each of us to keep our own .vimrc without overwriting the file over and over again every time someone wants to use vim? Ideally a universal solution across all servers would be best, since we have many virtual machines with vim installed, and our Microsoft Windows user-specific home directories are mounted on the servers under /home/username. Any recommendations for accommodating this?
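
    One low-tech approach - a sketch, not a vetted solution: since the per-user home directories are already mounted under /home/username, a small wrapper can point vim at the invoking user's vimrc even after sudo su -, falling back to root's own file when the login name can't be determined. The wrapper path below is hypothetical:
        #!/bin/sh
        # /usr/local/bin/vim, placed ahead of /usr/bin in PATH
        # 'who am i' reports the name attached to the controlling terminal, which
        # usually survives 'sudo su -'; SUDO_USER is tried first when present
        u="${SUDO_USER:-$(who am i | awk '{print $1}')}"
        rc="/home/$u/.vimrc"
        [ -r "$rc" ] || rc="$HOME/.vimrc"
        exec /usr/bin/vim -u "$rc" "$@"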

    Read the article

  • Windows VPN always disconnects after < 3 minutes, only from my network

    - by hemp
    First, this problem has existed for almost two years. Until serverfault was born, I pretty much gave up on solving it - but now, hope is reborn! I've set up a Windows 2003 server as a domain controller and VPN server at a remote office. I am able to connect to and work over the VPN from every windows client I've tried, including XP, Vista, and Windows 7 without issue, from at least five different networks (corporate and home, domain and non.) It works fine from all of them. However, whenever I connect from clients on my home network, the connection drops (silently) after 3 minutes or less. After a short while, it will eventually tell me the connection has dropped and attempt to redial/reconnect (if I've configured the client that way.) If I reconnect, the connection will re-establish and appear to work correctly, but again will silently drop, this time after a seemingly shorter time period. These are not intermittent drops. It happens every single time, in exactly the same way. The only variable is how long the connection survives. It doesn't matter what type of traffic I send. I can sit idle, send continuous pings, RDP, transfer files, all of that at once - it makes no difference. The result is always the same. Connected for a few minutes, then silent death. Since I doubt anyone has experienced this exact situation, what steps can I take to troubleshoot my evanescing VPN?

    Read the article

  • Empty rewrite.log on Windows, RewriteLogLevel is in httpd.conf

    - by ripper234
    I am using mod_rewrite on Apache 2.2, Windows 7, and it is working ... except I don't see any logging information. I added these lines to the end of my httpd.conf:
        RewriteLog "c:\wamp\logs\rewrite.log"
        RewriteLogLevel 9
    The log file is created when Apache starts (so it's not a permission problem), but it remains empty. I thought there might be a conflicting RewriteLogLevel statement somewhere, but I checked and there isn't. What else could cause this? Could this be caused by Apache not flushing the log file? (I closed it by hitting CTRL-C on the httpd.exe command ... this caused the access logs to be flushed to disk, but still nothing in rewrite.log.) My (partial) httpd-vhosts.conf:
        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            ServerName my.domain.com
            DocumentRoot c:\wamp\www\folder
            <Directory c:\wamp\www\folder>
                Options -Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
                <IfModule mod_rewrite.c>
                    RewriteEngine On
                    RewriteBase /
                    RewriteRule . everything-redirects-to-this.php [L]
                </IfModule>
            </Directory>
        </VirtualHost>
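
    Two hedged guesses worth trying - both are standard mod_rewrite placement/quoting advice, not a confirmed diagnosis: use forward slashes in the Windows path, and declare the rewrite log inside the same virtual host that contains the rewrite rules rather than only in the global httpd.conf:
        <VirtualHost *:80>
            # ... existing directives ...
            RewriteLog "c:/wamp/logs/rewrite.log"
            RewriteLogLevel 9
        </VirtualHost>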

    Read the article

  • which vista services can be disabled with impunity?

    - by GwenKillerby
    I use Vista on an HP Pavilion DV2 laptop. When I look through all the services my laptop starts, it really seems there are way too many of them. I multi-boot with XP and 7; both start up in 40 seconds, while Vista takes four minutes. Is there some software that can determine which services I don't need? On 7 there's no proprietary HP stuff at all, yet it seems to run fine. There are a LOT of these services, and some just sit there doing nothing, or monitor for updates I don't really need or want or need to know about the second they're available. Take, for instance, Parental Controls, the accessibility stuff for people with poor eyesight, or Tablet PC - I really never use any of that. My laptop is the only computer I use at home; there's no home network aside from the modem-router, which is cabled, not Wi-Fi. Hope this question is specific enough. I've looked at the other questions but they didn't answer me. Thanks, Gwen.

    Read the article

  • How can I move an ext3 partition to the beginning of the drive without losing data?

    - by Felipe Alvarez
    I have a 500GB external drive. It had two partitions, each around 250GB. I removed the first partition. I'd like to move the 2nd to the left, so it consumes 100% of the drive. How can this be accomplished without any GUI tools (CLI only)?
    fdisk:
        Disk /dev/sdd: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Disk identifier: 0xc80b1f3d
           Device Boot      Start         End      Blocks   Id  System
        /dev/sdd2           29374       60801   252445410   83  Linux
    parted:
        Model: ST350032 0AS (scsi)
        Disk /dev/sdd: 500GB
        Sector size (logical/physical): 512B/512B
        Partition Table: msdos
        Number  Start  End    Size   Type     File system  Flags
        2       242GB  500GB  259GB  primary  ext3         type=83
    dumpe2fs:
        Filesystem volume name:   extstar
        Last mounted on:          <not available>
        Filesystem UUID:          f0b1d2bc-08b8-4f6e-b1c6-c529024a777d
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal dir_index filetype needs_recovery sparse_super large_file
        Filesystem flags:         signed_directory_hash
        Default mount options:    (none)
        Filesystem state:         clean
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              15808608
        Block count:              63111168
        Reserved block count:     0
        Free blocks:              2449985
        Free inodes:              15799302
        First block:              0
        Block size:               4096
        Fragment size:            4096
        Blocks per group:         32768
        Fragments per group:      32768
        Inodes per group:         8208
        Inode blocks per group:   513
        Filesystem created:       Mon Feb 15 08:07:01 2010
        Last mount time:          Fri May 21 19:31:30 2010
        Last write time:          Fri May 21 19:31:30 2010
        Mount count:              5
        Maximum mount count:      29
        Last checked:             Mon May 17 14:52:47 2010
        Check interval:           15552000 (6 months)
        Next check after:         Sat Nov 13 14:52:47 2010
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               256
        Required extra isize:     28
        Desired extra isize:      28
        Journal inode:            8
        Default directory hash:   half_md4
        Directory Hash Seed:      d0363517-c095-4f53-baa7-7428c02fbfc6
        Journal backup:           inode blocks
        Journal size:             128M

    Read the article

  • VCL - configuration for Magento and Varnish 3.0.2

    - by Tomas
    I would like to kindly ask if there's someone who can help me configure Varnish for Magento to get a far better hit rate. My current ratio from varnishstat is cache_hit=271 to cache_miss=926. I ask because I've googled almost every site related to this topic, but 99.9% of the configurations don't work because of outdated code. Details of my setup: I use Varnish on port 80, Apache on port 81, PageCache as the Magento Varnish module, APC for PHP speed and memcached for dynamic caching. Load speed is about 1.5s on the home page (Pingdom.com average results) with a US ping, and 2.5s from Europe; the servers are located in Toronto, Canada.
    Edit: This is my full VCL configuration: http://pastebin.com/885BzHCs (I just use xxx.xxx.xxx.xxx for my IPs). This is the output of the command varnishtop -i TxHeader -I Cookie:
        TxHeader Cookie: frontend=965b5...(*lots of numbers); adminhtml=3ae65...(*lots of numbers); EXTERNAL_NO_CACHE=1
    ("(*lots of numbers)" is just my addition to the info.) Any idea how to stop these cookies from preventing Varnish from caching the home page (if I've got the idea right about what the cookies are doing)? Thank you for any help!
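
    A common pattern for Varnish 3.x in front of Magento is to strip the session cookie on otherwise cacheable requests so they can hit the cache, while leaving admin and explicitly uncacheable requests alone. A sketch only - the cookie names are taken from the varnishtop output above, and the regexes are illustrative, not tested against this site's VCL:
        sub vcl_recv {
            if (req.http.Cookie &&
                req.http.Cookie !~ "(adminhtml=|EXTERNAL_NO_CACHE=)") {
                # drop the Magento frontend session cookie so the request
                # becomes cacheable
                set req.http.Cookie =
                    regsuball(req.http.Cookie, "frontend=[^;]+(; )?", "");
                if (req.http.Cookie ~ "^\s*$") {
                    unset req.http.Cookie;
                }
            }
        }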

    Read the article

  • All network devices freezing when Airport Extreme Base Station is connected. Any ideas?

    - by Jon
    I've been troubleshooting this issue for a while and, through a series of events, have it narrowed down to my AirPort Extreme base station. I like this router, since I'm able to connect to IPv6 sites without any insane configuration (my alternate router is too old and doesn't support v6). My question is: has anyone else had this issue, and if so, how was it resolved? If not, can you recommend a good IPv6 router? Here is how I came to the conclusion that it is the router. Devices: Xbox 360, HTC Incredible, home-built machine running FreeBSD, home-built machine running Ubuntu 10.04.
        1. Noticed freezing on the Ubuntu box.
        2. Noticed freezing on the Xbox 360.
        3. Noticed freezing on the HTC Incredible (only when connected to my network wirelessly).
    The above all happened at random times throughout the past few weeks. Over the last few days, I was playing Xbox and noticed that the Xbox and Ubuntu machines both froze. I picked up my phone, and it was also frozen. I reset all devices, power-cycled my router, and all was fine again. About two hours later it happened again (I was playing Forza III; the Xbox froze; I went to the Ubuntu box and it was frozen; unfortunately, the HTC phone was not connected wirelessly, and the FreeBSD box was turned off). I can't even begin to imagine what a router could be doing to freeze devices with such differing hardware/software/OS, and I feel absurd for coming to this conclusion, but I have nothing else. I hooked up my archaic Netgear router and have had no problems since. :(

    Read the article

  • daily rsync backups with hard links, checksums, and a new computer

    - by user75058
    I back up my laptop to a Fedora desktop daily using rsync with hard links. This has worked great for almost a year. I recently purchased a new computer, transferred over my data, and would like to continue backing this computer up daily. However, due to the data transfer from the old laptop to the new one, the timestamps have obviously changed, which will cause my daily rsync backup to re-transfer all of the data. I thought that by adding the -c (checksum) switch to my rsync backup it would match files based on checksum instead of timestamp and size, and only transfer those files that are different or not present. This appeared to work, but upon examining the new backup, hard links were not created; it appears the files that should have been hard linked were simply copied into the new backup directory from the previous backup directory on the backup server. This is very peculiar behavior to me, and I am having trouble figuring out why it is occurring. Checksums match for files that I think should be hard linked. I have looked through the rsync man page and Googled around a bit, but have been unable to find anything that helps me better understand this behavior.
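
    A possible explanation, consistent with the rsync man page but not verified against this setup: with --link-dest, rsync only hard-links a destination file to the reference copy when the attributes it has been asked to preserve also match, and linking a file whose mtime differs would retroactively change the timestamp inside the old backup, so rsync copies instead. If that is what's happening, one checksum-based run that records the new mtimes in a fresh backup should let subsequent runs hard-link again. A sketch of such a run (all paths and dates are placeholders):
        # one-off run after the laptop migration; -a preserves times/perms,
        # -c forces checksum comparison just this once
        rsync -a -c --delete \
              --link-dest=/backups/laptop/2012-06-01 \
              user@newlaptop:/home/user/ /backups/laptop/2012-06-02/
        # later daily runs can drop -c: the new chain of backups now has
        # matching mtimes, so the quick-check plus hard links work as before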

    Read the article

  • Apache Server Status page in port 8443

    - by batman
    I'm very new to Apache. I tried to enable the server-status page of Apache: I added status.conf and status.load to the mods-enabled directory and changed the config in apache2.conf to include the whole mods-enabled directory. This is the content of status.conf (the default settings):
        <IfModule mod_status.c>
            # Allow server status reports generated by mod_status,
            # with the URL of http://servername/server-status
            # Uncomment and change the "192.0.2.0/24" to allow access from other hosts.
            <Location /server-status>
                SetHandler server-status
                Order deny,allow
                Deny from all
                Allow from 127.0.0.1 ::1
                # Allow from 192.0.2.0/24
            </Location>
            # Keep track of extended status information for each request
            ExtendedStatus On
            # Determine if mod_status displays the first 63 characters of a request or
            # the last 63, assuming the request itself is greater than 63 chars.
            # Default: Off
            #SeeRequestTail On
            <IfModule mod_proxy.c>
                # Show Proxy LoadBalancer status in mod_status
                ProxyStatus On
            </IfModule>
        </IfModule>
    I restarted the server. I'm redirecting all ports to 8443, which in turn turns my requests into localhost:8443/server-status - and that throws a 404 error. Is there any way to get around this? Thanks in advance.
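
    A couple of generic Apache checks that usually narrow this down; the 404 on :8443 suggests the request is reaching a site that doesn't hand /server-status to mod_status, rather than mod_status being broken (whether 8443 speaks HTTPS is an assumption here):
        apache2ctl -M | grep status                     # confirm mod_status is loaded
        apache2ctl -S                                   # see which vhost answers on 8443
        curl -k https://127.0.0.1:8443/server-status    # test from the box itself, since
                                                        # access is limited to 127.0.0.1
                                                        # (use http:// if 8443 is plain HTTP)
        # if the site on 8443 proxies or rewrites everything to an application,
        # an explicit exception for /server-status may be needed in that vhost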

    Read the article

  • Linux disk usage analyser that acts like symlinks are real files

    - by Rory
    I am using git-annex, an extension to the DVCS git, which is designed for handling large files. It makes heavy use of symlinks: the actual large files are moved to the .git/annex directory and the original files are symlinked to there. I am running out of disk space, need to clear up, and want to see what's using all my space. Usually I'd use a disk usage tool like ncdu, Baobab or Filelight. However, they treat a symlink as essentially empty and only count the file it points to as using any space. This means that with git-annex they show no space used in the main directories and lots of space used in the .git/annex directory, which is not helpful. Is there any (graphical or ncurses-based) disk usage programme for Linux (apt-get installable would be easier) that is capable, through options or otherwise, of counting a symlink as using the space that the original file uses? Many tools have options for different behaviour for hard links, so it makes sense that some should handle symlinks similarly. (I know counting symlinks as using space has flaws, like counting the same space twice, broken symlinks, etc., but that's OK for my purposes.)
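
    Not a full ncurses browser, but plain GNU du can already produce this view: its -L option follows (dereferences) symlinks and charges the space to the directory holding the link. A sketch (the repository path is a placeholder):
        # per-directory totals with annexed files counted where their symlinks live
        du -hL --max-depth=1 /path/to/annexed-repo | sort -h
        # compare with the default view (symlinks counted as ~0 bytes):
        du -h --max-depth=1 /path/to/annexed-repo | sort -h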

    Read the article
