Search Results

Search found 14175 results on 567 pages for 'home entertainment'.

Page 76/567 | < Previous Page | 72 73 74 75 76 77 78 79 80 81 82 83  | Next Page >

  • Complete Guide to Networking Windows 7 with XP and Vista

    - by Mysticgeek
    Since there are three versions of Windows out in the field these days, chances are you need to share data between them. Today we show you how to get each version to share files and printers with the others.

    In a perfect world, getting your computers with different Microsoft operating systems to network would be as easy as clicking a button. With the Windows 7 Homegroup feature, it's almost that easy. However, getting all three of them to communicate with each other can be a bit of a challenge. Today we've put together a guide that will help you share files and printers in whatever combination of the three versions you might encounter on your home network.

    Sharing Between Windows 7 and XP
    The most common scenario you're probably going to run into is sharing between Windows 7 and XP. Essentially you'll want to make sure both machines are part of the same workgroup, set up the correct sharing settings, and make sure network discovery is enabled on Windows 7. The biggest problem you may run into is finding the correct printer drivers for both versions of Windows. (Share Files and Printers Between Windows 7 & XP)

    Map a Network Drive
    Another method of sharing data between XP and Windows 7 is mapping a network drive. If you don't need to share a printer and only want to share a drive, you can simply map an XP drive to Windows 7. Although it might sound complicated, the process is not bad. The trickiest part is making sure you add the appropriate local user; this is what allows you to share the contents of an XP drive with your Windows 7 computer. (Map a Network Drive from XP to Windows 7)

    Sharing Between Vista and Windows 7
    Another scenario you might run into is having to share files and printers between a Vista and a Windows 7 machine. The process is a bit easier than sharing between XP and Windows 7, but still takes a bit of work. The Homegroup feature isn't compatible with Vista, so we need to go through a few different steps. Depending on your printer, sharing it should be easier, as Vista and Windows 7 do a much better job of automatically locating drivers. (How to Share Files and Printers Between Windows 7 and Vista)

    Sharing Between Vista and XP
    When Windows Vista came out, hardware requirements were intensive, drivers weren't ready, and sharing between Vista and XP was complicated by the new Vista structure. The sharing process is pretty straightforward if you're not using password protection, as you just need to drop what you want to share into the Vista Public folder. Sharing with password protection, on the other hand, becomes a bit more difficult. Basically you need to add a user and set up sharing on the XP machine, and once again we have a complete tutorial for that situation. (Share Files and Folders Between Vista and XP Machines)

    Sharing Between Windows 7 Machines with Homegroup
    If you have two or more Windows 7 machines, sharing files and devices becomes extremely easy with the Homegroup feature. It's as simple as creating a Homegroup on one machine and then joining the other to it. It allows you to stream media, control what data is shared, and can also be password protected. If you don't want to make your Windows 7 machines part of the same Homegroup, you can still share files through the Public folder and set up a printer to be shared as well.
    (Use the Homegroup Feature in Windows 7 to Share Printers and Files; Create a Homegroup & Join a New Computer To It; Change which Files are Shared in a Homegroup)

    Windows Home Server
    If you want an ultimate setup that creates a centralized location to share files between all systems on your home network, regardless of the operating system, then set up a Windows Home Server. It allows you to centralize your important documents and digital media files on one box and provides easy access to data and the ability to stream media to other machines on your network. Not only that, it provides easy backup of all your machines to the server, in case disaster strikes. (How to Install and Setup Windows Home Server; How to Manage Shared Folders on Windows Home Server)

    Conclusion
    The biggest annoyance is dealing with printers that have a different set of drivers for each OS. There is no easy way to solve this problem. Our best advice is to try connecting the printer to one machine, and if the drivers won't work, hook it up to the other computer and see if that works. Each printer manufacturer is different, and Windows doesn't always automatically install the correct drivers for the device. We hope this guide helps you share your data between whichever Microsoft OS scenario you might run into! Here are some other articles that will help you accomplish your home networking needs: Share a Printer on a Home Network from Vista or XP to Windows 7; How to Share a Folder the XP Way in Windows Vista.

    Read the article

  • Using Windows Previous Versions to access ZFS Snapshots (July 14, 2009)

    - by user12612012
    The Previous Versions tab on the Windows desktop provides a straightforward, intuitive way for users to view or recover files from ZFS snapshots. ZFS snapshots are read-only, point-in-time instances of a ZFS dataset, based on the same copy-on-write transactional model used throughout ZFS. ZFS snapshots can be used to recover deleted files or previous versions of files, and they are space efficient because unchanged data is shared between the file system and its snapshots. Snapshots are available locally via the .zfs/snapshot directory and remotely via Previous Versions on the Windows desktop.

    Shadow Copies for Shared Folders was introduced with Windows Server 2003 but subsequently renamed to Previous Versions with the release of Windows Vista and Windows Server 2008. Windows shadow copies, or snapshots, are based on the Volume Snapshot Service (VSS) and, as the [Shared Folders part of the] name implies, are accessible to clients via SMB shares, which is good news when using the Solaris CIFS Service. And the nice thing is that no additional configuration is required - it "just works".

    On Windows clients, snapshots are accessible via the Previous Versions tab in Windows Explorer using the Shadow Copy client, which is available by default on Windows XP SP2 and later. For Windows 2000 and pre-SP2 Windows XP, the client software is available for download from Microsoft: Shadow Copies for Shared Folders Client.

    Assuming that we already have a shared ZFS dataset, we can create ZFS snapshots and view them from a Windows client:

        zfs snapshot tank/home/administrator@snap101
        zfs snapshot tank/home/administrator@snap102

    To view the snapshots on Windows, map the dataset on the client, then right click on a folder or file and select Previous Versions. Note that Windows will only display previous versions of objects that differ from the originals, so you may have to modify files after creating a snapshot in order to see previous versions of those files.

    The screenshot above shows various snapshots in the Previous Versions window, created at different times. On the left panel, the .zfs folder is visible, illustrating that this is a ZFS share. The .zfs setting can be toggled as desired; it makes no difference when using Previous Versions. To make the .zfs folder visible:

        zfs set snapdir=visible tank/home/administrator

    To hide the .zfs folder:

        zfs set snapdir=hidden tank/home/administrator

    The following screenshot shows the Previous Versions panel when a file has been selected. In this case the user is prompted to view, copy or restore the file from one of the available snapshots.

    As can be seen from the screenshots above, the Previous Versions window doesn't display snapshot names: snapshots are listed by snapshot creation time, sorted from most recent to oldest. There's nothing we can do about this; it's the way that the interface works. Perhaps one point of note, to avoid confusion, is that the ZFS snapshot creation time is not the same as the root directory creation timestamp. In ZFS, all object attributes in the original dataset are preserved when a snapshot is taken, including the creation time of the root directory. Thus the root directory creation timestamp is the time that the directory was created in the original dataset.
        # ls -d% all /home/administrator
                 timestamp: atime         Mar 19 15:40:23 2009
                 timestamp: ctime         Mar 19 15:40:58 2009
                 timestamp: mtime         Mar 19 15:40:58 2009
                 timestamp: crtime        Mar 19 15:18:34 2009
        # ls -d% all /home/administrator/.zfs/snapshot/snap101
                 timestamp: atime         Mar 19 15:40:23 2009
                 timestamp: ctime         Mar 19 15:40:58 2009
                 timestamp: mtime         Mar 19 15:40:58 2009
                 timestamp: crtime        Mar 19 15:18:34 2009

    The snapshot creation time can be obtained using the zfs command as shown below.

        # zfs get all tank/home/administrator@snap101
        NAME                             PROPERTY  VALUE
        tank/home/administrator@snap101  type      snapshot
        tank/home/administrator@snap101  creation  Mon Mar 23 18:21 2009

    In this example, the dataset was created on March 19th and the snapshot was created on March 23rd.

    In conclusion, Shadow Copies for Shared Folders provides a straightforward way for users to view or recover files from ZFS snapshots. The Windows desktop provides an easy to use, intuitive GUI, and no configuration is required to use or access previous versions of files or folders.

    REFERENCES FOR MORE INFORMATION: ZFS; ZFS Learning Center; Introduction to Shadow Copies of Shared Folders; Shadow Copies for Shared Folders Client
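
    Since the Previous Versions panel only shows creation times, it can help to correlate those times with snapshot names on the Solaris side. A quick sketch, assuming the dataset used above:

        # list all snapshots of the dataset with their creation times, oldest first
        zfs list -r -t snapshot -o name,creation -s creation tank/home/administrator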

    Read the article

  • Trying to run chrooted suPHP with UserDir, getting 500 server error

    - by Greg Antowski
    I've managed to get suPHP working fine with UserDir (i.e. PHP files run from the /home/$username/public_html directory), but I can't get it to work when I chroot it to the user's home directory. I've been following this guide and adapting it to my needs: http://compilefailure.blogspot.co.nz/2011/09/suphp-chroot-gotchas.html

    I'm not creating vhosts; I just want PHP scripts to be jailed to the user's home directory. I've gotten to the part where you use makejail and set up a symlink. However, even with the symlink set up correctly, PHP scripts won't run. This is what's shown in the Apache error log:

        SoftException in Application.cpp:537: Could not execute script "/home/jimmy/public_html/test.php"
        [error] [client 127.0.0.1] Caused by SystemException in API_Linux.cpp:444: execve() for program "/usr/bin/php-cgi" failed: No such file or directory

    The thing is, if I try running either of the following commands in the terminal, it works without any issues:

        /home/jimmy/usr/bin/php-cgi /home/jimmy/public_html/test.php
        /usr/bin/php-cgi /home/jimmy/public_html/test.php

    I've been trying for hours to get this going and documentation for this kind of stuff is almost non-existent. If anyone could help me out with this, I'd be extremely grateful.
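
    A hedged checklist for this class of execve() failure: inside the chroot, both the interpreter path and every shared library it links against must resolve. Paths below assume the jail root is /home/jimmy:

        # does php-cgi resolve inside the jail, at the path suPHP will use?
        chroot /home/jimmy /usr/bin/php-cgi -v
        # list the libraries the binary needs; each one must exist inside the jail
        ldd /usr/bin/php-cgi
        # example only: copy a missing loader into the jail, preserving its path
        mkdir -p /home/jimmy/lib64
        cp /lib64/ld-linux-x86-64.so.2 /home/jimmy/lib64/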

    Read the article

  • Helicon ISAPI Rewrite Proxy 500 Internal Server Error

    - by Rob Stevenson-Leggett
    Hi, I have a website running at www.domain.com. The client now wants the website to appear to be running under www.otherdomain.com/whatson/brand/. Since the website is Umbraco, it won't run under a subfolder. I wanted to use ISAPI_Rewrite to proxy requests to www.domain.com using the following rule in a .htaccess at www.otherdomain.com/whatson/brand/:

        RewriteRule ^(.*)$ http://www.domain.com/$1 [P,L]

    However, when I apply this I get an ugly 500 Internal Server Error. There's nothing in the event log. So I turned on ISAPI logging and can see the following:

        111.111.111.111 111.111.111.111 Tue, 12-Jan-2010 13:05:24 GMT [www.otherdomain.com/sid#2045305275][rid#26337200/initial] (2) init rewrite engine with requested uri /whatson/brand/home.aspx

    Then it tests all the other rewrite rules on the server. Then this:

        111.111.111.111 111.111.111.111 Tue, 12-Jan-2010 13:05:24 GMT [www.otherdomain.com/sid#2045305275][rid#26337200/initial] (1) Htaccess process request w:\websites\otherdomain.com\docs2\whatson\brand\.htaccess
        111.111.111.111 111.111.111.111 Tue, 12-Jan-2010 13:05:24 GMT [www.otherdomain.com/sid#2045305275][rid#26337200/initial] (3) applying pattern '^(.*)$' to uri 'home.aspx'
        111.111.111.111 111.111.111.111 Tue, 12-Jan-2010 13:05:24 GMT [www.otherdomain.com/sid#2045305275][rid#26337200/initial] (2) forcing proxy-throughput with http://www.domain.com/home.aspx
        111.111.111.111 111.111.111.111 Tue, 12-Jan-2010 13:05:24 GMT [www.otherdomain.com/sid#2045305275][rid#26337200/initial] (1) go-ahead with proxy request http://www.domain.com/home.aspx [OK]
        111.111.111.111 111.111.111.111 Tue, 12-Jan-2010 13:05:24 GMT [www.otherdomain.com/sid#2045305275][rid#26337200/initial] (2) rewrite 'home.aspx' -> '/whatson/brand/home.aspxx.rwhlp?p=0'
        111.111.111.111 111.111.111.111 Tue, 12-Jan-2010 13:05:24 GMT [www.otherdomain.com/sid#2045305275][rid#26337200/initial] (2) internal redirect with /whatson/brand/home.aspxx.rwhlp?p=0 [INTERNAL REDIRECT]

    So it appears to work according to the logs, but I'm not seeing the page come through. It's worth noting that www.domain.com and www.otherdomain.com are on the same box. LogLevel is 3 and RewriteLogLevel is 3 (I've tried with 9 and debug, but there is too much traffic going through the other sites on the box). Any ideas?

    Read the article

  • Windows 7 Folder Redirection (GPO)

    - by Kev
    Hi - I have been fighting this issue for a day or two now, so I am looking for some insight. I am taking over admin duties in a domain of 800 users, and the previous admins really did not employ many GPO settings for the clients of the domain. In each site, there is a location on the file server where "Home" folders were manually created, e.g. \server\home\enduser. Whenever a user got a machine, the admin would manually right-click on the "My Documents" folder and manually enter the path to the home folder.

    We are planning to start putting Windows 7 machines on the network, and I want to automate as much as I can of everything that was not done in the past. Since everyone has existing "Home" folders, I have been fighting to get Folder Redirection to work with a new Windows 7 machine (in a test OU). I am getting all kinds of errors and I can't get the Windows 7 "Documents" folder to redirect to the users' EXISTING home folders. As I stated earlier, all of the home folders were (and still are) manually created on the file server and are set with the following security permissions:

        Domain Admins - Full Control
        euser (end user) - Modify (everything but Full)

    Can someone point me in the right direction on the proper settings to put in the Folder Redirection GPO to get this to work with the existing home folders? I can't seem to make it work, but I will keep trying. Thanks in advance!

    Read the article

  • Reverse ssh tunneling with Tomato

    - by Deivuh
    Since my ISP restricts some incoming connections, I can't access my home PC remotely. What I'm trying to do is make a reverse ssh connection from my home router (running Tomato firmware) to the office computer, so I can access it remotely from the office over that open connection. What I'm doing is running the following from the router:

        ssh -R 12345:localhost:22 oUser@office

    Then I leave top running to keep the connection alive. From my office what I do is run the following:

        ssh hUser@localhost -p 12345

    but I get the following message with verbose on:

        OpenSSH_5.5p1 Debian-6, OpenSSL 0.9.8o 01 Jun 2010
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug1: Connecting to localhost [::1] port 19999.
        debug1: Connection established.
        debug1: identity file /home/oUser/.ssh/id_rsa type -1
        debug1: identity file /home/oUser/.ssh/id_rsa-cert type -1
        debug1: identity file /home/oUser/.ssh/id_dsa type 2
        debug1: Checking blacklist file /usr/share/ssh/blacklist.DSA-1024
        debug1: Checking blacklist file /etc/ssh/blacklist.DSA-1024
        debug1: identity file /home/oUser/.ssh/id_dsa-cert type -1
        ssh_exchange_identification: Connection closed by remote host

    I have password remote access enabled in Tomato's configuration, so I should be able to access it without having the public key in authorized_keys, but I've even tried adding it and still get the same result. I've done the same with my home computer, and it works perfectly there, but it doesn't with the router. Am I doing something wrong? Thanks in advance.
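
    For comparison, a sketch of a more defensive tunnel setup. The OpenSSH options are standard, but whether Tomato's bundled client (dropbear's dbclient) accepts them is not guaranteed, and the router login user is an assumption (Tomato normally only has root):

        # with OpenSSH: keep the tunnel alive and fail loudly if the remote bind fails
        ssh -N -R 12345:localhost:22 -o ServerAliveInterval=30 -o ExitOnForwardFailure=yes oUser@office
        # dropbear's client offers keepalives via -K instead
        ssh -K 30 -R 12345:localhost:22 oUser@office
        # then, from the office machine
        ssh -p 12345 root@localhost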

    Read the article

  • Access Control Lists in Debian Lenny

    - by arbales
    So, for my clients who have sites hosted on my server, I create user accounts with standard home folders inside /home. I set up an SSH jail for all the collective users, because I really am against using a separate FTP server. Then I installed ACL support and added acl to my /etc/fstab - all good. I cd into /home and chmod 700 ./*. At this point users cannot see into other users' home directories (yay), but Apache can't see them either (boo). I ran setfacl -m u:www-data:rx ./* (I also tried individual directories). Now Apache can see the sites again, but so can all the users. ACL changed the permissions of the home folders to 750.

    How do I set up ACLs so that 1. Apache can see the sites hosted in users' home folders, AND 2. users can't see outside their home and into others' files?

    Edit - more details. Output after chmod -R 700 ./*:

        sh-3.2# chmod 700 ./*
        sh-3.2# ls -l
        total 72
        drwx------+ 24 austin austin     4096 Jul 31 06:13 austin
        drwx------+  8 jeremy collective 4096 Aug  3 03:22 jeremy
        drwx------+ 12 josh   collective 4096 Jul 26 02:40 josh
        drwx------+  8 joyce  collective 4096 Jun 30 06:32 joyce

    (Not accessible to other users OR Apache.)

        setfacl -m u:www-data:rx jeremy

    (Now accessible to members apache and collective - why collective, too?)

        sh-3.2# getfacl jeremy
        # file: jeremy
        # owner: jeremy
        # group: collective
        user::rwx
        user:www-data:r-x
        group::r-x
        mask::r-x
        other::---

    Solution. Ultimately what I did was:

        chmod 755 *
        setfacl -R -m g::--- *
        setfacl -R -m u:www-data:rx *
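
    For reference, a hedged variant of that solution using a traverse-only entry for Apache plus a default ACL, so files created later inherit the right mask; the public_html layout is an assumption:

        chmod 700 /home/*                                # owners only; other users get nothing
        setfacl -m u:www-data:x /home/*                  # Apache may traverse, other users may not
        setfacl -R -m u:www-data:rX /home/*/public_html  # Apache can read the site trees
        setfacl -d -m u:www-data:rX /home/*/public_html  # new files in the web roots inherit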

    Read the article

  • PHP creates a huge error log on my server: feof() / fread()

    - by Nik
    I have a script that allows users to 'save as' a PDF. This is the script:

        <?php
        header("Content-Type: application/octet-stream");

        $file = $_GET["file"] . ".pdf";
        header("Content-Disposition: attachment; filename=" . urlencode($file));
        header("Content-Type: application/force-download");
        header("Content-Type: application/octet-stream");
        header("Content-Type: application/download");
        header("Content-Description: File Transfer");
        header("Content-Length: " . filesize($file));
        flush(); // this doesn't really matter.
        $fp = fopen($file, "r");
        while (!feof($fp)) {
            echo fread($fp, 65536);
            flush(); // this is essential for large downloads
        }
        fclose($fp);
        ?>

    An error log is being created that grows to more than a gig in a few days. The errors I receive are:

        [10-May-2010 12:38:50] PHP Warning: filesize() [function.filesize]: stat failed for BYJ-Timetable.pdf in /home/byj/public_html/pdf_server.php on line 10
        [10-May-2010 12:38:50] PHP Warning: Cannot modify header information - headers already sent by (output started at /home/byj/public_html/pdf_server.php:10) in /home/byj/public_html/pdf_server.php on line 10
        [10-May-2010 12:38:50] PHP Warning: fopen(BYJ-Timetable.pdf) [function.fopen]: failed to open stream: No such file or directory in /home/byj/public_html/pdf_server.php on line 12
        [10-May-2010 12:38:50] PHP Warning: feof(): supplied argument is not a valid stream resource in /home/byj/public_html/pdf_server.php on line 13
        [10-May-2010 12:38:50] PHP Warning: fread(): supplied argument is not a valid stream resource in /home/byj/public_html/pdf_server.php on line 15
        [10-May-2010 12:38:50] PHP Warning: feof(): supplied argument is not a valid stream resource in /home/byj/public_html/pdf_server.php on line 13
        [10-May-2010 12:38:50] PHP Warning: fread(): supplied argument is not a valid stream resource in /home/byj/public_html/pdf_server.php on line 15

    The warnings on lines 13 and 15 just continue on and on. I'm a bit of a newbie with PHP, so any help is great. Thanks guys. Nik

    Read the article

  • DD-WRT PPTP VPN problem

    - by Tobias Tromm
    I'm trying to configure a DD-WRT router as a PPTP client. The VPN server is Windows Server 2003. This is my scenario: the Windows 2003 server is set to give the VPN client the fixed IP 10.0.0.81 and to add a network route to the remote home. At the remote home I have changed the PPTP options on the DD-WRT to make the connection. The VPN connection is successfully established, and Windows successfully adds the route to the remote home 192.168.2.x.

    From the remote home I can successfully access any computer on the VPN server's side. The problem is when I try to access the remote home from the server side: from there I can only access/ping the DD-WRT itself (by its VPN client IP, 10.0.0.81). What's wrong? What do I need to do to make this a site-to-site VPN? This is what happens when I try to tracert the remote home from the local home.
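
    No definitive answer here, but for a site-to-site setup the router has to actually forward packets arriving over the tunnel into its LAN, so these are the usual things to check on the Tomato/DD-WRT side (interface names are assumptions: ppp0 for the PPTP link, br0 for the LAN bridge):

        cat /proc/sys/net/ipv4/ip_forward              # must print 1
        iptables -I FORWARD -i ppp0 -o br0 -j ACCEPT   # allow VPN -> LAN traffic
        iptables -I FORWARD -i br0 -o ppp0 -j ACCEPT   # allow LAN -> VPN replies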

    Read the article

  • Compiling cpp code in NetBeans produces errors, how do I solve them?

    - by Rupertt Wind
    I use NetBeans with MinGW and the MSYS make/debugger, but when I compile a basic cpp file and run it, the build produces errors. This is the code I ran:

        #include <iostream>

        void main() {
            cout << "Hello World!" << endl;
            cout << "Welcome to C++ Programming" << endl;
        }

    The output is:

        /usr/bin/make -f nbproject/Makefile-Debug.mk SUBPROJECTS= .build-conf
        make[1]: Entering directory `/d/Users/Home/Documents/NetBeansProjects/newApp'
        /usr/bin/make -f nbproject/Makefile-Debug.mk dist/Debug/MinGW-Windows/newapp.exe
        make[2]: Entering directory `/d/Users/Home/Documents/NetBeansProjects/newApp'
        mkdir -p dist/Debug/MinGW-Windows
        g++.exe -o dist/Debug/MinGW-Windows/newapp build/Debug/MinGW-Windows/newmain.o build/Debug/MinGW-Windows/newfile.o build/Debug/MinGW-Windows/main.o
        build/Debug/MinGW-Windows/newfile.o: In function `main':
        D:/Users/Home/Documents/NetBeansProjects/newApp/newfile.cpp:5: multiple definition of `main'
        build/Debug/MinGW-Windows/newmain.o:D:/Users/Home/Documents/NetBeansProjects/newApp/newmain.c:15: first defined here
        build/Debug/MinGW-Windows/main.o: In function `main':
        D:/Users/Home/Documents/NetBeansProjects/newApp/main.cpp:13: multiple definition of `main'
        build/Debug/MinGW-Windows/newmain.o:D:/Users/Home/Documents/NetBeansProjects/newApp/newmain.c:15: first defined here
        collect2: ld returned 1 exit status
        make[2]: *** [dist/Debug/MinGW-Windows/newapp.exe] Error 1
        make[2]: Leaving directory `/d/Users/Home/Documents/NetBeansProjects/newApp'
        make[1]: *** [.build-conf] Error 2
        make[1]: Leaving directory `/d/Users/Home/Documents/NetBeansProjects/newApp'
        make: *** [.build-impl] Error 2
        BUILD FAILED (exit value 2, total time: 1s)

    How can I solve this?
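
    The linker output already names the cause: three translation units each define main(). A sketch reproducing it outside the IDE, using the file names from the log; excluding (or deleting) all but one file with a main() resolves the link error:

        g++ -c main.cpp newmain.c newfile.cpp     # three objects, each defining main()
        g++ -o newapp main.o newmain.o newfile.o  # fails: multiple definition of `main'
        g++ -o newapp main.o                      # linking a single main() succeeds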

    Read the article

  • Configuring vsftpd anonymous upload: creates files but freezes at 0 bytes

    - by Wayne
    vsftpd on Ubuntu, installed with sudo apt-get install vsftpd, then configured as in the /etc/vsftpd.conf below. Anonymous FTP allows cd into the upload directory and allows put myfile.txt, which gets created on the server - but then the client hangs and never proceeds, and the file on the server remains at 0 bytes.

    Here are the folders and permissions:

        root@support:/home/ftp# ls -ld .
        drwxr-xr-x 3 root root 4096 Jun 22 00:00 .
        root@support:/home/ftp# ls -ld pub
        drwxr-xr-x 3 root root 4096 Jun 21 23:59 pub
        root@support:/home/ftp# ls -ld pub/upload
        drwxr-xr-x 2 ftp ftp 4096 Jun 22 00:06 pub/upload

    Here's the vsftpd.conf file:

        root@support:/home/ftp# grep -v '#' /etc/vsftpd.conf
        listen=YES
        anonymous_enable=YES
        write_enable=YES
        anon_upload_enable=YES
        dirmessage_enable=YES
        xferlog_enable=YES
        anon_root=/home/ftp/pub/
        connect_from_port_20=YES
        chown_uploads=YES
        chown_username=ftp
        nopriv_user=ftp
        secure_chroot_dir=/var/run/vsftpd
        pam_service_name=vsftpd
        rsa_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
        rsa_private_key_file=/etc/ssl/private/ssl-cert-snakeoil.key

    Here's an example file that I attempted to upload:

        root@support:/home/ftp/pub/upload# ls -l
        total 0
        -rw------- 1 ftp nogroup 0 Jun 22 00:06 build.out

    This is the client attempting to upload... it is frozen at this point:

        $ ftp 173.203.89.78
        Connected to 173.203.89.78.
        220 (vsFTPd 2.0.6)
        User (173.203.89.78:(none)): ftp
        331 Please specify the password.
        Password:
        230 Login successful.
        ftp> put build.out
        200 PORT command successful. Consider using PASV.
        553 Could not create file.
        ftp> cd upload
        250 Directory successfully changed.
        ftp> put build.out
        200 PORT command successful. Consider using PASV.
        150 Ok to send data.
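
    A transfer that creates the file but stalls at 0 bytes usually points at the FTP data connection rather than at permissions. Some hedged server-side checks (the module name and port range are assumptions that depend on the kernel and firewall in use):

        iptables -L -n | head                 # anything dropping port 20 or high ports?
        modprobe nf_conntrack_ftp             # conntrack helper for active-mode FTP
                                              # (ip_conntrack_ftp on older kernels)
        # or pin passive mode to a known range in /etc/vsftpd.conf and open it:
        #   pasv_min_port=40000
        #   pasv_max_port=40100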

    Read the article

  • Django Admin Page missing CSS

    - by super9
    I saw this question and recommendation from Django Projects here but still can't get this to work. My Django admin pages are not displaying the CSS at all. This is my current configuration.

    settings.py:

        ADMIN_MEDIA_PREFIX = '/media/admin/'

    httpd.conf:

        <VirtualHost *:80>
            DocumentRoot /home/django/sgel
            ServerName ec2-***-**-***-***.ap-**********-1.compute.amazonaws.com
            ErrorLog /home/django/sgel/logs/apache_error.log
            CustomLog /home/django/sgel/logs/apache_access.log combined
            WSGIScriptAlias / /home/django/sgel/apache/django.wsgi

            <Directory /home/django/sgel/media>
                Order deny,allow
                Allow from all
            </Directory>
            <Directory /home/django/sgel/apache>
                Order deny,allow
                Allow from all
            </Directory>

            LogLevel warn
            Alias /media/ /home/django/sgel/media/
        </VirtualHost>

        <VirtualHost *:80>
            ServerName sgel.com
            Redirect permanent / http://www.sgel.com/
        </VirtualHost>

    In addition, I also ran the following to create (I think) the symbolic link:

        ln -s /home/djangotest/sgel/media/admin/ /usr/lib/python2.6/site-packages/django/contrib/admin/media/

    UPDATE: In my httpd.conf file:

        User django
        Group django

    When I run ls -l in my /media directory:

        drwxr-xr-x 2 root root 4096 Apr  4 11:03 admin
        -rw-r--r-- 1 root root    9 Apr  8 09:02 test.txt

    Should that root user be django instead?

    UPDATE 2: When I enter ls -la in my /media/admin folder:

        total 12
        drwxr-xr-x 2 root root 4096 Apr 13 03:33 .
        drwxr-xr-x 3 root root 4096 Apr  8 09:02 ..
        lrwxrwxrwx 1 root root   60 Apr 13 03:33 media -> /usr/lib/python2.6/site-packages/django/contrib/admin/media/

    The thing is, when I navigated to /usr/lib/python2.6/site-packages/django/contrib/admin/media/, the folder was empty. So I copied the CSS, IMG and JS folders from my Django installation into /usr/lib/python2.6/site-packages/django/contrib/admin/media/ and it still didn't work.
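
    One hedged reading of UPDATE 2: the symlink points the wrong way, creating a link named media inside media/admin instead of making media/admin itself expose Django's bundled files. A sketch of the reversed link, using the paths from the question:

        rm /home/django/sgel/media/admin/media   # drop the inner link
        rmdir /home/django/sgel/media/admin      # only works if the folder is now empty
        ln -s /usr/lib/python2.6/site-packages/django/contrib/admin/media /home/django/sgel/media/admin
        ls /home/django/sgel/media/admin/        # should list css/ img/ js/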

    Read the article

  • Making Apache 2.2 on SuSE Linux Case-Insensitive: Which is the better approach?

    - by pingu
    Problem: all of the following

        http://<server>/home/APPLE.html
        http://<server>/hoME/APPLE.html
        http://<server>/HOME/aPPLE.html
        http://<server>/hoME/aPPLE.html

    should resolve to this:

        http://<server>/home/apple.html

    I implemented two solutions and both are working fine; I'm not sure which one is better (performance-wise). Please suggest. Also, the directive CheckCaseOnly on never worked.

    Option 1:

    a) Enable mod_speling. In /etc/sysconfig/apache2: APACHE_MODULES="rewrite speling apparmor......"

    b) Add the directive CheckSpelling on (either in .htaccess or in httpd.conf).

    In httpd.conf:

        <Directory /srv/www/htdcos/home>
            Order allow,deny
            CheckSpelling on
            Allow from all
        </Directory>

    or in .htaccess inside /srv/www/htdcos/home (your content folder):

        CheckSpelling on

    Option 2:

    a) Enable mod_rewrite.

    b) Write the rule in the vhost (you cannot use RewriteMap in a directory context; check the Apache docs):

        <VirtualHost _default_:80>
            <IfModule mod_rewrite.c>
                Options +FollowSymLinks
                RewriteEngine on
                RewriteMap lc int:tolower
                RewriteCond %{REQUEST_URI} [A-Z]
                RewriteRule (.*) ${lc:$1} [R=301,L]
            </IfModule>
        </VirtualHost>

    This changes the entire request URI into lowercase. I want this to happen for a specific folder, but RewriteMap doesn't work in .htaccess. I am a novice in regex and rewriting; I need a RewriteCond which checks only /css/. Can anybody help?
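
    For the closing question: RewriteMap really is limited to server and virtual-host context, but the condition itself can be scoped to a URI prefix inside the vhost. A hedged sketch written from the shell (the file name and the /home prefix are assumptions; adjust the prefix to /css as needed):

        cat >> /etc/apache2/conf.d/lowercase-home.conf <<'EOF'
        RewriteEngine on
        RewriteMap lc int:tolower
        RewriteCond %{REQUEST_URI} ^/home/.*[A-Z]
        RewriteRule (.*) ${lc:$1} [R=301,L]
        EOF
        rcapache2 reload    # SuSE's reload command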

    Read the article

  • Heroku and Refinerycms: Application failed to start ~ attachment_fu problem

    - by John Deely
    Ok, so I'm trying to get Refinerycms working with Heroku, and I'm new at all of this. I've set up an Amazon S3 account and added the keys and IDs to the amazon_s3.yml files. When launched on Heroku at gart.heroku.com I get the following error:

        App failed to start
        /disk1/home/slugs/141557_e8490b3_d5eb/mnt/vendor/plugins/attachment_fu/lib/technoweenie/attachment_fu/backends/s3_backend.rb:187:in `read': No such file or directory - /disk1/home/slugs/141557_e8490b3_d5eb/mnt/config/amazon_s3.yml (Errno::ENOENT)
            from /disk1/home/slugs/141557_e8490b3_d5eb/mnt/vendor/plugins/attachment_fu/lib/technoweenie/attachment_fu/backends/s3_backend.rb:187:in `included'
            from /disk1/home/slugs/141557_e8490b3_d5eb/mnt/vendor/plugins/attachment_fu/lib/technoweenie/attachment_fu.rb:123:in `include'
            from /disk1/home/slugs/141557_e8490b3_d5eb/mnt/vendor/plugins/attachment_fu/lib/technoweenie/attachment_fu.rb:123:in `has_attachment'
            from /disk1/home/slugs/141557_e8490b3_d5eb/mnt/app/models/image.rb:13
            from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
            from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
            from /usr/local/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:158:in `require'
            from /usr/local/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:265:in `require_or_load'
            ... 42 levels...
            from /usr/local/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/builder.rb:29:in `instance_eval'
            from /usr/local/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/builder.rb:29:in `initialize'
            from /home/heroku_rack/heroku.ru:1:in `new'
            from /home/heroku_rack/heroku.ru:1

    The s3_backend.rb line 187 contains:

        @@s3_config = @@s3_config = YAML.load(ERB.new(File.read(@@s3_config_path)).result)[RAILS_ENV].symbolize_keys

    Any help would be great!
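
    An Errno::ENOENT for config/amazon_s3.yml on the slug but not locally usually means the file never made it into the push, often because it is git-ignored. Two hedged checks from the project root:

        git ls-files config/amazon_s3.yml   # empty output = the file was never committed
        grep -n amazon_s3 .gitignore        # a match here would explain why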

    Read the article

  • How do I write an init script for django-supervisor?

    - by amateur
    Pardon me, as this is my first time attempting to write an init script for CentOS 5. I am using Django + supervisor to manage my Celery workers and scheduler. This is my naive, simple attempt at /etc/init.d/supervisor:

        #!/bin/sh
        #
        # /etc/rc.d/init.d/supervisord
        #
        # Supervisor is a client/server system that
        # allows its users to monitor and control a
        # number of processes on UNIX-like operating
        # systems.
        #
        # chkconfig: - 64 36
        # description: Supervisor Server
        # processname: supervisord

        # Source init functions
        /home/foo/virtualenv/property_env/bin/python /home/foo/bar/manage.py supervisor --daemonize

    Inside my supervisor.conf:

        [program:celerybeat]
        command=/home/property/virtualenv/property_env/bin/python manage.py celerybeat --loglevel=INFO --logfile=/home/property/property_buyer/logfiles/celerybeat.log

        [program:celeryd]
        command=/home/foo/virtualenv/property_env/bin/python manage.py celeryd --loglevel=DEBUG --logfile=/home/foo/bar/logfiles/celeryd.log --concurrency=1 -E

        [program:celerycam]
        command=/home/foo/virtualenv/property_env/bin/python manage.py celerycam

    I couldn't get it to work:

        2013-08-06 00:21:03,108 INFO exited: celerybeat (exit status 2; not expected)
        2013-08-06 00:21:06,114 INFO spawned: 'celeryd' with pid 11772
        2013-08-06 00:21:06,116 INFO spawned: 'celerycam' with pid 11773
        2013-08-06 00:21:06,119 INFO spawned: 'celerybeat' with pid 11774
        2013-08-06 00:21:06,146 INFO exited: celerycam (exit status 2; not expected)
        2013-08-06 00:21:06,147 INFO gave up: celerycam entered FATAL state, too many start retries too quickly
        2013-08-06 00:21:06,147 INFO exited: celeryd (exit status 2; not expected)
        2013-08-06 00:21:06,152 INFO gave up: celeryd entered FATAL state, too many start retries too quickly
        2013-08-06 00:21:06,152 INFO exited: celerybeat (exit status 2; not expected)
        2013-08-06 00:21:07,153 INFO gave up: celerybeat entered FATAL state, too many start retries too quickly

    I believe it is the init script, but please help me understand what is wrong.
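
    A hedged guess at the repeated exit status 2: supervisord's working directory is not the Django project, so the bare manage.py in each command cannot be opened. Reproducing and fixing from the shell, with paths taken from the question:

        # reproduce: run one command from / the way supervisord would
        cd / && /home/foo/virtualenv/property_env/bin/python manage.py celeryd
        # python: can't open file 'manage.py' ... and the exit status is 2
        # fix: use absolute script paths, or add a line such as
        #   directory=/home/foo/bar
        # to each [program] section of supervisor.conf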

    Read the article

  • Different versions of JBoss on the same host

    - by Vladimir Bezugliy
    I have JBoss 4 installed on my PC in the directory C:\JBoss4, and the environment variable JBOSS_HOME set to this directory: JBOSS_HOME=C:\JBoss4.

    I need to install JBoss 5.1 on the same PC, so I installed it into C:\JBoss51. In order to start JBoss 5.1 on the same host where JBoss 4 is already started, I need to redefine the properties jboss.home.dir, jboss.home.url, and jboss.service.binding.set:

        C:\JBoss51\bin\run.sh -Djboss.home.dir=C:/JBoss51 \
            -Djboss.home.url=file:/C:/JBoss51 \
            -Djboss.service.binding.set=ports-01

    But in C:\JBoss51\bin\run.sh I can see the following code:

        ...
        if [ "x$JBOSS_HOME" = "x" ]; then
            # get the full path (without any relative bits)
            JBOSS_HOME=`cd $DIRNAME/..; pwd`
        fi
        export JBOSS_HOME
        ...
        runjar="$JBOSS_HOME/bin/run.jar"
        JBOSS_BOOT_CLASSPATH="$runjar"

    And this code does not depend on either jboss.home.dir or jboss.home.url. So when I start JBoss 5.1, the script will use jar files from JBoss 4.3? Is that correct? Should I redefine the environment variable JBOSS_HOME when I start JBoss 5.1? In that case the script would use the correct jar files. Or, if I redefine the properties jboss.home.dir and jboss.home.url, will JBoss ignore the variables set in run.sh? How does it work?
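
    Since run.sh only computes JBOSS_HOME when the variable is unset, the globally exported C:\JBoss4 value would indeed make the 5.1 scripts pick up the wrong run.jar. A hedged workaround is to override the variable per session before launching (a cygwin-style shell is assumed here, since run.sh is being used on Windows):

        export JBOSS_HOME=C:/JBoss51
        C:/JBoss51/bin/run.sh -Djboss.service.binding.set=ports-01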

    Read the article

  • I am getting this error "ssh_exchange_identification:"

    - by adnan kamili
    Everything was working fine till yesterday, and now suddenly I am getting this error. If I type ssh -D 9999 [email protected], I get:

        ssh_exchange_identification: Connection closed by remote host

    Here is the verbose output:

        OpenSSH_5.8p1 Debian-7ubuntu1, OpenSSL 1.0.0e 6 Sep 2011
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug2: ssh_connect: needpriv 0
        debug1: Connecting to 172.16.30.30 [172.16.30.30] port 22.
        debug1: Connection established.
        debug3: Incorrect RSA1 identifier
        debug3: Could not load "/home/adnan/.ssh/id_rsa" as a RSA1 public key
        debug2: key_type_from_name: unknown key type '-----BEGIN'
        debug3: key_read: missing keytype
        debug2: key_type_from_name: unknown key type 'Proc-Type:'
        debug3: key_read: missing keytype
        debug2: key_type_from_name: unknown key type 'DEK-Info:'
        debug3: key_read: missing keytype
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug3: key_read: missing whitespace
        debug2: key_type_from_name: unknown key type '-----END'
        debug3: key_read: missing keytype
        debug1: identity file /home/adnan/.ssh/id_rsa type 1
        debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-2048
        debug1: Checking blacklist file /etc/ssh/blacklist.RSA-2048
        debug1: identity file /home/adnan/.ssh/id_rsa-cert type -1
        debug1: identity file /home/adnan/.ssh/id_dsa type -1
        debug1: identity file /home/adnan/.ssh/id_dsa-cert type -1
        debug1: identity file /home/adnan/.ssh/id_ecdsa type -1
        debug1: identity file /home/adnan/.ssh/id_ecdsa-cert type -1
        ssh_exchange_identification: Connection closed by remote host
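
    ssh_exchange_identification happens before authentication, so the key-parsing noise above is unrelated; the server closed the TCP connection as soon as it was opened. Hedged places to look on 172.16.30.30 (the log path depends on the distribution):

        grep sshd /etc/hosts.deny /etc/hosts.allow   # TCP wrappers rejecting this client?
        tail /var/log/auth.log                       # or /var/log/secure on RH-style systems
        grep MaxStartups /etc/ssh/sshd_config        # too many unauthenticated connections?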

    Read the article

  • cron doesn't execute any tasks, but logs them as executed

    - by FractalizeR
    I have a strange problem on one of my servers. Cron does not execute any task, but it writes to its log that each task has been executed successfully. Like some simulation mode is activated...

        Apr 30 03:03:08 nd-10049 crond[13387]: (root) CMD (php /usr/local/frb/backup.php)
        Apr 30 03:05:01 nd-10049 crond[13397]: (root) CMD (php /home/support/public_html/cron/cron_hourly.php>/home/support/public_html/cron/hourly.log)
        Apr 30 03:09:01 nd-10049 crond[19108]: (root) CMD (/etc/webmin/cron/tempdelete.pl )
        Apr 30 03:10:01 nd-10049 crond[19467]: (root) CMD (php /home/support/public_html/cron/cron_hourly.php>/home/support/public_html/cron/hourly.log)
        Apr 30 03:14:44 nd-10049 crontab[21154]: (root) BEGIN EDIT (root)
        Apr 30 03:15:01 nd-10049 crond[21309]: (root) CMD (php /home/support/public_html/cron/cron_hourly.php>/home/support/public_html/cron/hourly.log)
        Apr 30 03:15:38 nd-10049 crontab[21154]: (root) REPLACE (root)
        Apr 30 03:15:38 nd-10049 crontab[21154]: (root) END EDIT (root)
        Apr 30 03:16:01 nd-10049 crond[14961]: (root) RELOAD (cron/root)
        Apr 30 03:20:02 nd-10049 crond[22620]: (root) CMD (php /home/support/public_html/cron/cron_hourly.php)

    There are no errors about cron in the common log (messages). The OS is CentOS. What can I do to diagnose the problem? What could it be?
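
    crond's CMD line only records that a job was forked, not that it ran to completion, so the next step is capturing what the job actually does. A hedged test (the log path is arbitrary), plus the usual suspect, cron's minimal PATH:

        # temporary crontab entry: capture stdout/stderr of the real command
        * * * * * php /usr/local/frb/backup.php >>/tmp/cron-test.log 2>&1
        # if the log stays empty, check whether "php" even resolves in cron's PATH
        which php    # cron's default PATH is typically just /usr/bin:/bin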

    Read the article

  • How can I tell SELinux to give vsftpd write access to a specific directory?

    - by Arcturus
    Hello. I've set up vsftpd on my Fedora 12 server, and I'd like to have the following configuration. Each user should have access to: his home directory (/home/USER); the web directory I created for him (/web/USER). To achieve this, I first configured vsftpd to chroot each user to his home directory. Then, I created /web/USER with the correct permissions, and used mount --bind /web/USER /home/USER/Web so that the user may have access to /web/USER through /home/USER/Web. I also turned on the SELinux boolean ftp_home_dir so that vsftpd is allowed to write in users' home directories. This works very well, except that when a user tries to upload or rename a file in /home/USER/Web, SELinux forbids it because the change must also be done to /web/USER, and SELinux doesn't give vsftpd permission to write anything to that directory. I know that I could solve the problem by turning on the SELinux boolean allow_ftpd_full_access, or ftpd_disable_trans. I also tried to use audit2allow to generate a policy, but what it does is generate a policy that gives ftpd write access to directories of type public_content_t; this is equivalent to turning on allow_ftpd_full_access, if I understood it correctly. I'd like to know if it's possible to configure SELinux to allow FTP write access to the specific directory /web/USER and its contents, instead of disabling SELinux's FTP controls entirely.
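
    For the record, the usual targeted route (narrower than allow_ftpd_full_access) is to relabel just the FTP-writable tree; whether the public_content_rw_t / anon-write pair covers this exact local-user setup is an assumption worth testing:

        semanage fcontext -a -t public_content_rw_t "/web(/.*)?"   # persistent label rule
        restorecon -R -v /web                                      # apply it to the tree
        setsebool -P allow_ftpd_anon_write 1                       # let ftpd write to that type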

    Read the article

  • A really smart rails helper needed

    - by Stefan Liebenberg
    In my rails application I have a helper function:

        def render_page( permalink )
          page = Page.find_by_permalink( permalink )
          content_tag( :h3, page.title ) + inline_render( page.body )
        end

    If I called the page "home" with:

        <%= render_page :home %>

    and the "home" page's body was:

        <h1>Home</h1>
        bla bla
        <%= render_page :about %>
        <%= render_page :contact %>

    I would get "home", with "about" and "contact" - nice and simple... right up to where someone goes and changes the "home" page's content to:

        <h1>Home</h1>
        bla bla
        <%= render_page :home %>
        <%= render_page :about %>
        <%= render_page :contact %>

    which results in an infinite loop (a segmentation fault on WEBrick)... How would I change the helper function so it won't fall into this trap? My first attempt was along the lines of:

        @@list = []

        def render_page( permalink )
          unless @@list.include?(permalink)
            @@list += [ permalink ]
            page = Page.find_by_permalink( permalink )
            result = content_tag( :h3, page.title ) + inline_render( page.body )
            @@list -= [ permalink ]
            return result
          else
            content_tag :b, "this page is already being rendered"
          end
        end

    which worked in my development environment, but bombed out in production... any suggestions? Thank You, Stefan

    Read the article

  • How to sync two computers using new MobileMe calendar

    - by CesarGon
    I have been using MobileMe for over a year with success. I use it to sync my Outlook calendars between my work and home computers, using Windows 7 and Outlook 2007. The main Outlook calendar folder on my work computer is replicated to MobileMe as "Work" and synced to my home computer, and the main calendar folder on my home computer is replicated to MobileMe as "Home" and synced to my work computer. This means that I can see both "Work" and "Home" calendars from both computers (as well as from the web interface through me.com), which is very convenient.

    Yesterday I migrated to the new MobileMe calendar, accepting the suggestion that popped up on the me.com website. After the migration, the MobileMe control panel on each of my Windows computers asked me to re-configure my calendar setup, and everything fell apart. The "Home" and "Work" calendar folders in Outlook are now ignored by MobileMe, and new ones named "Home in MobileMe" and "Work in MobileMe" have been created and placed in a separate Outlook data file rather than the default. This means that now:

    - I have four folders, two of which are not replicated to MobileMe.
    - The two folders that are not replicated reside in a separate data file, so alarms and reminders don't work; they're basically useless to me as calendar folders.
    - In addition, the button in the MobileMe Control Panel that used to let me specify which MobileMe folder should be synced against the default Outlook folder is gone. MobileMe is now too smart.

    Do you have any idea how to undo this mess and go back to the situation described in the first paragraph, with two folders that stay synced? I don't want an extra data file. Thanks.

    Read the article

  • Why is OpenSSH not using the user specified in ssh_config?

    - by Jordan Evens
    I'm using OpenSSH from a Windows machine to connect to a Linux Mint 9 box. My Windows user name doesn't match the ssh target's user name, so I'm trying to specify the user to use for login in ssh_config. I know OpenSSH can see the ssh_config file, since I'm specifying the identity file in it. The section specific to the host in ssh_config is:

        Host hostname
            HostName hostname
            IdentityFile ~/.ssh/id_dsa
            User username
            Compression yes

    If I do ssh username@hostname it works. Trying to use ssh_config only gives:

        F:\>ssh -v hostname
        OpenSSH_5.6p1, OpenSSL 0.9.8o 01 Jun 2010
        debug1: Connecting to hostname [XX.XX.XX.XX] port 22.
        debug1: Connection established.
        debug1: permanently_set_uid: 0/0
        debug1: identity file /cygdrive/f/progs/OpenSSH/home/.ssh/id_rsa type -1
        debug1: identity file /cygdrive/f/progs/OpenSSH/home/.ssh/id_rsa-cert type -1
        debug1: identity file /cygdrive/f/progs/OpenSSH/home/.ssh/id_dsa type 2
        debug1: identity file /cygdrive/f/progs/OpenSSH/home/.ssh/id_dsa-cert type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3p1 Debian-3ubuntu5
        debug1: match: OpenSSH_5.3p1 Debian-3ubuntu5 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.6
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host 'hostname' is known and matches the RSA host key.
        debug1: Found key in /cygdrive/f/progs/OpenSSH/home/.ssh/known_hosts:1
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Trying private key: /cygdrive/f/progs/OpenSSH/home/.ssh/id_rsa
        debug1: Offering DSA public key: /cygdrive/f/progs/OpenSSH/home/.ssh/id_dsa
        debug1: Authentications that can continue: publickey
        debug1: No more authentication methods to try.
        Permission denied (publickey).

    I was under the impression that (as outlined in this question: How to make ssh log in as the right user?) specifying User username in ssh_config should work. Why isn't OpenSSH using the username specified in ssh_config?
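
    Two hedged ways to narrow this down, since the -v output never prints which user the client sends:

        ssh -v -o User=username hostname   # force the user on the command line; if this
                                           # works, the config file being read is not the
                                           # one that was edited
        ssh -v -F /cygdrive/f/progs/OpenSSH/home/.ssh/config hostname
                                           # point at the file explicitly (this path is an
                                           # assumption based on the identity files above)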

    Read the article

  • ssh without password does not work for some users

    - by joshxdr
    I have a new RHEL4 Linux box that I am using to copy data to old Solaris 2.6 and RHEL3 Linux boxes with scp. I have found that, with the same setup, it works for some users but not for others. For user jane, this works fine:

        jane@host1$ ssh -v remhost
        debug1: Next authentication method: publickey
        debug1: Trying private key: /mnt/home/osborjo/.ssh/identity
        debug1: Offering public key: /mnt/home/osborjo/.ssh/id_rsa
        debug1: Server accepts key: pkalg ssh-rsa blen 277
        debug1: read PEM private key done: type RSA
        debug1: Authentication succeeded (publickey).

    For user jack it does not:

        jack@host1$ ssh -v remhost
        debug1: Next authentication method: publickey
        debug1: Trying private key: /mnt/home/oper1/.ssh/identity
        debug1: Offering public key: /mnt/home/oper1/.ssh/id_rsa
        debug1: Authentications that can continue: publickey,password,keyboard-interactive

    I have looked at the permissions for all the keys and files; they look the same. Since I am using home directories mounted over NFS, the keys for both the remote host and the local host are in the same directory. This is how things look for jane:

        jane@host1$ ls -l $HOME/.ssh
        -rw-rw-r-- 1 jane operator  394 Jan 27 16:28 authorized_keys
        -rw------- 1 jane operator 1675 Jan 27 16:27 id_rsa
        -rw-r--r-- 1 jane operator  394 Jan 27 16:27 id_rsa.pub
        -rw-rw-r-- 1 jane operator 1205 Jan 27 16:46 known_hosts

    For user jack:

        jack@host1$ ls -l $HOME/.ssh
        -rw-rw-r-- 1 jack engineer  394 Jan 27 16:28 authorized_keys
        -rw------- 1 jack engineer 1675 Jan 27 16:27 id_rsa
        -rw-r--r-- 1 jack engineer  394 Jan 27 16:27 id_rsa.pub
        -rw-rw-r-- 1 jack engineer 1205 Jan 27 16:46 known_hosts

    As a last-ditch effort, I copied authorized_keys, id_rsa, and id_rsa.pub from jane to jack and changed the username in authorized_keys and id_rsa.pub with vi. It still did not work. There seems to be something different between the two users, but I cannot figure out what it is.
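
    One classic cause that fits "same files, different users": sshd's StrictModes silently ignores authorized_keys when the home directory, ~/.ssh, or the file itself is writable by group or others, so the difference may lie in the home directory permissions rather than in the key files themselves. A hedged checklist for the failing account:

        ls -ld /mnt/home/oper1 /mnt/home/oper1/.ssh      # no group/other write bits allowed
        chmod go-w /mnt/home/oper1
        chmod 700 /mnt/home/oper1/.ssh
        chmod 600 /mnt/home/oper1/.ssh/authorized_keys   # same rules apply on the remote side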

    Read the article
