Search Results

Search found 16602 results on 665 pages for 'directory'.


  • Login to OS X Server User Account from Local Computer

    - by Brod Wilkinson
    I have OS X Server installed on a Mac mini. I've created several user accounts, one of which is Account Name: Bob, Password: abc123. From the Mac mini's login screen I can choose "Server" (the main account), "Bob" (Bob's account), and "Other..." for OS X Server accounts. From "Other...", if I input Bob's credentials it will log me in.

    I also have a MacBook Air. I would like to be able to select "Other..." from its login screen, input Bob's credentials, and have it log in to Bob's account, or any other user account for that matter. My server is set up as private, with the server address server.network.private.

    Following some googled instructions as well as Apple's own instructions, I have set up an Open Directory with Username: diradmin, Password: abc123. Then, on the MacBook Air, I went into System Preferences > Users & Groups > Login Options, clicked Join next to Network Account Server, input my server (server.network.private) with the diradmin credentials, and it's connected. Great. I've also ticked Allow Network Users to Log In at Login Window and selected All Users.

    I was assuming this would allow my MacBook Air to log in to the "Bob" account by selecting "Other..." from the login window, but there is no "Other..." option. I then set up a VPN with basic credentials and logged into it on the MacBook Air, and still not much has changed.

    I am able to share screens with the "Bob" account from my MacBook Air by clicking Share Screen... from the Finder under Shared > Network Server and then logging in, but this obviously requires the MacBook Air to already be logged into an account before it can share screens, which is not suitable.

    Is there any way to simply log in to the OS X Server user account from the MacBook Air's login screen via "Other...", like on the Mac mini's login screen? Thanks in advance.

    Operating System: OS X 10.9 Mavericks
    OS X Server: Version 3
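
    One avenue worth checking: the login window only shows "Other..." when it is told to. A minimal sketch, assuming the MacBook Air is already bound to the Open Directory server (the preference key below is a documented loginwindow setting, but treat its applicability to your setup as an assumption):

        # run on the MacBook Air, then log out or restart:
        sudo defaults write /Library/Preferences/com.apple.loginwindow SHOWOTHERUSERS_MANAGED -bool TRUE

    If network accounts still don't appear, check the directory bind itself with "id bob" in Terminal; if that resolves the account, the bind is fine and only the login window display is at issue.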


  • How do you verify a restore?

    - by Nic
    What tool(s) would you use to verify that a restored file structure is whole and complete? My environment is a Windows Server 2008 file server. (We use tape for backup, but that is inconsequential.)

    I am specifically looking for a tool that will:

    - Record the names of all files and folders below a specified directory
    - Optionally calculate checksums of each file encountered
    - Save this index in a human-readable format
    - Compare the index against restored data and show differences

    Some background: I recently had to replace the disks in our file server. The upgrade was scheduled to start 36 hours after the most recent full backup, so I created a differential backup. However, it turns out that one of our applications was clearing the archive bit on files saved to the server, so these were not included in the differential backup. I was unaware of this until my users reported some files as missing.

    Aside from this, are there any other common methods for validating the integrity of a restore? I am frequently told that testing backups by restoring them is the only way to know that backups are working, but how do you deal with the case where it works 99% correctly and the other 1% silently fails?
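
    For the record/compare workflow, a plain checksum manifest covers all four requirements; a minimal sketch using GNU tools (available on Windows via Cygwin or Git Bash - the paths are placeholders):

        # before the restore: record every file with its SHA-256
        cd /path/to/original/tree
        find . -type f -exec sha256sum {} + | sort -k2 > /tmp/manifest-before.txt

        # after the restore: repeat on the restored tree
        cd /path/to/restored/tree
        find . -type f -exec sha256sum {} + | sort -k2 > /tmp/manifest-after.txt

        # missing, extra, or silently corrupted files all show up here:
        diff /tmp/manifest-before.txt /tmp/manifest-after.txt

    The manifest is plain text, so it doubles as the human-readable index.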


  • Access Denied / Server 2008 / Home Directories

    - by Shaun Murphy
    Domain Controller: BDC01 (192.168.9.2)
    Storage Server: BrightonSAN1 (192.168.9.3)
    Domain: brighton.local

    Last night I moved our users' home directories off of our domain controller onto a storage server using the MS File Server Migration Toolkit (FSMT). I'm getting a mixed bag of errors.

    The first is that some users cannot log on properly: they can't access logon.vbs in the sysvol folder on the DC and consequently cannot map their drives. I've narrowed that down to a DNS issue, as there was a remnant of our previous DNS server in the DHCP server options and scope options. I'm able to get their drives remapped by browsing to the sysvol folder by IP address as opposed to computer name and manually running the logon.vbs script.

    The other error is Access Denied on a few of the users' home directories. The top-level folder (Home) is shared as normal, and I've removed and re-added the NTFS security a number of times now, including making the user the owner with full control. I've checked each and every individual file and folder in the affected users' home directories and they are indeed the owner, but I'm unable to write, though I can read the contents. I'm stumped. This isn't happening to all clients.

    I'm considering removing their AD accounts, backing up their folders, and re-adding them as a last resort, but obviously I'd like to know why the above errors are happening.
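
    When ownership looks right but writes still fail after an FSMT move, resetting the ACL tree so everything re-inherits from the (known good) top-level folder is worth a try; a sketch using the stock Server 2008 tools (paths are placeholders):

        rem run from an elevated prompt on the storage server:
        takeown /F D:\Home\username /R /D Y
        icacls D:\Home\username /reset /T

    /reset discards the migrated explicit ACEs and reapplies inherited permissions only, so confirm the inheritable ACEs on the Home folder first. Also check the share-level permissions on the new server; an over-restrictive share ACL produces exactly this read-but-not-write symptom.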


  • mrepo and grouplist/groupinstall: mrepo not working as expected with groups

    - by user52874
    All, I'm trying to set up mrepo so we can have internal repositories. After quite the slog, things seem to be working as expected EXCEPT for groups. From man createrepo:

        EXAMPLES
        Here is an example of a repository with a groups file. Note that
        the groups file should be in the same directory as the rpm packages
        (i.e. /path/to/rpms/comps.xml).

            createrepo -g comps.xml /path/to/rpms

    So here's what I'm doing:

        wget -c http://ftp.scientificlinux.org/linux/scientific/6/x86_64/os/repodata/comps-sl6-x86_64.xml
        cp comps-sl6-x86_64.xml /var/mrepo/SL6-x86_64/os/Packages/comps-sl6-x86_64.xml
        createrepo -g comps-sl6-x86_64.xml /var/mrepo/SL6-x86_64/os/Packages/

    There is lots of output, with no apparent errors or warnings. BUT, from a client:

        yum grouplist
        Loaded plugins: refresh-packagekit
        Setting up Group Process
        Error: No group data available for configured repositories

    Here's /etc/mrepo.conf:

        ### Configuration file for mrepo

        ### The [main] section allows to override mrepo's default settings
        ### The mrepo-example.conf gives an overview of all the possible settings
        [main]
        srcdir = /var/mrepo
        wwwdir = /var/www/mrepo
        confdir = /etc/mrepo.conf.d
        arch = x86_64
        mailto = root@localhost
        smtp-server = localhost
        pxelinux = /usr/lib/syslinux/pxelinux.0
        tftpdir = /tftpboot

        #rhnlogin = username:password

        ### Any other section is considered a definition for a distribution
        ### You can put distribution sections in /etc/mrepo.conf.d
        ### Examples can be found in the documentation.

    Here's /etc/mrepo.conf.d/sl6.mrepo:

        ### Scientific Linux 6
        [SL6]
        name = Scientific Linux 6
        release = 6
        arch = x86_64
        metadata = repomd repoview
        os = rsync://rsync.scientificlinux.org/scientific/$release/$arch/os/
        updates = rsync://rsync.scientificlinux.org/scientific/$release/$arch/updates/
        security = rsync://rsync.scientificlinux.org/scientific/$release/$arch/updates/security/
        fastbugs = rsync://rsync.scientificlinux.org/scientific/$release/$arch/updates/fastbugs/
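
    One thing worth ruling out: createrepo was run against the srcdir tree (/var/mrepo/...), but clients fetch from the tree mrepo publishes under wwwdir, and mrepo regenerates that metadata itself without -g. A sketch of checking both ends (the wwwdir layout below is an assumption; match it to whatever your clients' .repo files point at):

        # does the repodata the clients actually fetch advertise group data?
        grep 'type="group"' /var/www/mrepo/SL6-x86_64/os/repodata/repomd.xml

        # if not, regenerate with the groups file in the published tree:
        cp comps-sl6-x86_64.xml /var/www/mrepo/SL6-x86_64/os/
        createrepo -g comps-sl6-x86_64.xml /var/www/mrepo/SL6-x86_64/os/

        # then, on a client:
        yum clean all && yum grouplist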


  • Nginx Server Block Port 8081 Path to Root Folder

    - by Pamela
    I'm trying to password protect all of port 8081 on my Nginx server. The only thing this port is used for is phpMyAdmin.

    When I navigate to https://www.example.com:8081, I successfully get the default Nginx welcome page. However, when I try navigating to the phpMyAdmin directory, https://www.example.com:8081/phpmyadmin, I get a "404 Not Found" page. Permission for my htpasswd file is set to 644. Here is the code for my server block:

        server {
            listen 8081;
            server_name example.com www.example.com;
            root /usr/share/phpmyadmin;
            auth_basic "Restricted Area";
            auth_basic_user_file htpasswd;
        }

    I have also tried entirely commenting out the root directive:

        #root /usr/share/phpmyadmin;

    However, it doesn't make any difference. Is my problem confined to using the incorrect root path? If so, how can I find the root path for phpMyAdmin?

    If it makes any difference, I'm using Ubuntu 14.04.1 LTS with Nginx 1.4.6 and ISPConfig 3.0.5.4p3.
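
    Two separate things are likely going on. First, with root /usr/share/phpmyadmin;, nginx maps the URL path onto that directory, so phpMyAdmin lives at https://www.example.com:8081/ while /phpmyadmin resolves to /usr/share/phpmyadmin/phpmyadmin, which does not exist - hence the 404. Second, nothing in the block executes PHP. A sketch of a working block, assuming PHP-FPM on the socket shown (socket path and htpasswd location are assumptions for Ubuntu 14.04):

        server {
            listen 8081;
            server_name example.com www.example.com;
            root /usr/share/phpmyadmin;
            index index.php;

            auth_basic "Restricted Area";
            auth_basic_user_file /etc/nginx/htpasswd;   # absolute path is safer

            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass unix:/var/run/php5-fpm.sock;   # assumed FPM socket
            }
        }

    With this layout you browse to https://www.example.com:8081/ rather than /phpmyadmin.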


  • Removing SCIM input method as default from gnome terminal

    - by Mark
    Hello - I am recently back in the Linux world after about a 10-year absence. While I can find my way around most things, terminals and desktop managers are different than I remember.

    One of the biggest problems I am encountering today is that when running a GNOME terminal (this is SUSE 10.0 Enterprise), I'm getting behavior in the window that I don't want. Specifically, when I type, my typing is underlined as if something is trying to spell check my window. Further, it seems as if when running vi or less, my keystrokes are only processed by these apps when I hit Return. I.e., if I'm running less and want to go back a page, I'll hit b, but nothing happens until I hit Return.

    I seem to have tracked this down to the input method. Right-clicking in the GNOME terminal allows me to set my input method to one of a dozen values. It seems that currently it's set to "SCIM Input Method". If I then select "default" or "X Input Method", apps (i.e. things like less, vi, and even the bash shell) behave as I would expect.

    Can someone tell me (a) what is this SCIM input method, and (b) how can I make it so that it is not the default? I've poked around various configuration files in my home directory as well as in /etc, but I can't seem to find how this is set. I guess as a final question: can I just get rid of SCIM? Or is that tied into the window manager somehow?

    I do appreciate any clarifications that I can get. Thanks.
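
    SCIM is the Smart Common Input Method platform, an input-method framework for typing CJK and other complex scripts; the underline is its preedit buffer, which is also why keys only reach less/vi on Return. It is not part of the window manager. A sketch of turning it off for your session (the variable approach is standard, but file locations and package names vary by distro, so treat the specifics as assumptions):

        # in ~/.profile (read at login on most setups):
        export GTK_IM_MODULE=gtk-im-context-simple
        export QT_IM_MODULE=simple
        export XMODIFIERS=@im=none

        # or remove it outright if nobody needs CJK input
        # (package names are assumptions for SUSE):
        rpm -e scim scim-bridge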


  • How to set up Git on remote instance using keys from local machine?

    - by Lucas
    I have a setup where I can ssh into my remote server (i.e. a Google Compute instance) from my local machine. I used to be able to clone, push, and pull from a repository on my remote instance without adding any keys to my remote instance, nor adding any new keys to my repository online (just the public key from my local machine). I believe the remote instance was using the keys from my local machine to authenticate my Git pushes and pulls.

    However, the system broke when I reinstalled the OS on my local machine. Now when I try to connect to the GitHub server from my remote instance, I get the following.

    Cannot clone:

        [lucas@ecoinstance]~/node$ git clone [email protected]:lucasExample/test.git test
        Cloning into 'test'...
        Permission denied (publickey).
        fatal: The remote end hung up unexpectedly

    Cannot push:

        [lucas@ecoinstance]~/node/nodetest1$ git status
        # On branch master
        # Your branch is ahead of 'origin/master' by 1 commit.
        # nothing to commit (working directory clean)
        [lucas@ecoinstance]~/node/nodetest1$ git push
        Permission denied (publickey).
        fatal: The remote end hung up unexpectedly

    Additional info:

        [lucas@ecoinstance]~/node/nodetest1$ ssh-add -l
        Could not open a connection to your authentication agent.
        [lucas@ecoinstance]~/.ssh$ ls
        authorized_keys  known_hosts

    As you can see, I have no keys on my remote instance. I have never had keys on the remote, and it would push and pull just fine until I reinstalled my local OS. I can still clone, push, and pull on my local machine; it is just my remote machine that cannot get authentication. My local OS is Ubuntu 14.04 and my remote OS is Debian Wheezy.

    Any suggestions would be great. I am not sure how to search for this concept where I can authenticate from a remote instance via my local machine, so any references are appreciated as well.
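
    The concept you are describing is SSH agent forwarding: the remote git process hands GitHub's authentication challenge back to the agent on your local machine, so no key ever lives on the instance. Reinstalling the OS wiped the local key pair and/or the forwarding setting. A sketch of restoring it (the key filename is an assumption):

        # on the local machine: create a key pair if you no longer have one,
        # and add the new public key to your GitHub account settings
        ssh-keygen -t rsa
        ssh-add ~/.ssh/id_rsa
        ssh-add -l                   # the key should be listed

        # connect with forwarding enabled:
        ssh -A lucas@ecoinstance

        # or permanently, in ~/.ssh/config on the local machine:
        # Host ecoinstance
        #     ForwardAgent yes

        # on the instance, the local key should now be visible,
        # and GitHub auth should succeed:
        ssh-add -l
        ssh -T git@github.com

    Remember that the old public key registered with GitHub died with the reinstall, so uploading the newly generated one is part of the fix.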


  • WAMP starts Apache or MySQL, but not both?

    - by ladenedge
    When I install WAMP, the Apache and MySQL services are set to run as the LocalService user and all works well. However, because I need to access remote UNC paths in my PHP code, I need to run at least Apache as a user that exists on both the local host and the remote host - I'll call him WampUser.

    When both Apache and MySQL are set to start as WampUser, I cannot start both at the same time. If both are stopped, I can start either successfully. When I attempt to start the other, I get:

        Error 1053: The service did not respond to the start or control request in a timely fashion.

    This error appears immediately - there is no timeout. When at least one of the services is set to start as LocalService, both start fine. I can, therefore, solve my problem by setting Apache to WampUser and MySQL to LocalService, but I'm more interested in why this is happening in the first place. I'm especially curious because this situation does not occur on other servers - something I've done to this server has made these two services exclusive when running as the same user.

    Here are some miscellaneous data points:

    - I am using Windows Server 2003.
    - I've provided recursive Full Control to the C:\wamp directory for WampUser.
    - Nothing appears in the event log after the service fails.
    - No log entries appear in either the MySQL log or the Apache error log.
    - Neither application appears in the process list when the appropriate service is stopped.

    Any ideas?
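
    A classic culprit when two services under the same non-system account refuse to start together, while either starts alone, is exhaustion of the noninteractive desktop heap, which services sharing one account also share. Treat this as a hypothesis to verify rather than a diagnosis; the value to inspect is the third SharedSection number (KB per noninteractive desktop):

        rem inspect the current desktop heap settings:
        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\SubSystems" /v Windows

    Raising that third value (e.g. 512 to 1024) and rebooting is the usual test. It would also explain why the mixed WampUser/LocalService pairing works: services under different accounts land on different window stations, each with its own heap.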


  • Two instances of Windows Vista on boot up after failed clean install

    - by Dwayne
    I tried to install a clean version of Vista but failed. I ended up with Windows and Windows.old folders on my C: drive and a dual-boot option on boot up. I gave up, booted the old version, and tried to rename Windows.old to Windows; I was asked if I wanted to merge the two folders. I answered yes and all seemed OK, until I booted up this morning and was given the choice of two versions of Vista. The first one is the one that failed to install correctly and the second one is the old version.

    How can I get rid of the failed installation? I got rid of the bad boot entry via MSCONFIG. Here is my current situation:

    - Several hard drives installed
    - Using C: as my boot drive
    - A much larger drive (H:) for storing most of my files

    I found a subfolder in my C:\Windows folder named Windows. Upon inspection I determined it to be older than the C:\Windows folder, and therefore it must be the older, working version of the boot. I renamed the C:\Windows folder to C:\Windows.bad and moved the Windows subfolder to the C: root directory. I also copied it to the H: drive. Now MSCONFIG reports that the copy that is booting is the H: copy.

    How can I change it back to the C:\ copy, and can I delete the C:\Windows.bad file set?
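
    Since Vista boots from the BCD store rather than boot.ini, the stale entry can be inspected and removed with bcdedit; a sketch (the GUID is a placeholder - copy the identifier of the failed entry from the /enum output):

        rem from an elevated Command Prompt:
        bcdedit /enum
        rem note the {identifier} of the failed installation, then:
        bcdedit /delete {xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}

    The device and systemroot fields in the /enum output also show which partition and Windows folder each entry actually boots, which should settle the C: versus H: question.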


  • Etag configuration with multiple apache servers or CDN / How does Google do ETags?

    - by perrierism
    I have an application which is served from two Apache 2 servers, and I want to configure the ETags on static content. In the future I would also like to use a CDN.

    I see that this is supposed to be a problem, because the ETag information will be different from server to server:

        The ETag format for Apache 1.3 and 2.x is inode-size-timestamp.
        Although a given file may reside in the same directory across multiple
        servers, and have the same file size, permissions, timestamp, etc.,
        its inode is different from one server to the next.

    So if you're using more than one webserver to host your app (like 90% of the webapps you use every day do), it's supposed to be an issue. However, I see Google uses ETags, and certainly they use multiple servers, CDNs, edge caching, etc. I get a 304 response for any cached Google content.

    How do they do it? How do you get around the multiple-server issue? Is there a way to configure this with Apache?
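
    On Apache the usual fix is to stop publishing the inode, which is the only server-specific component; with identical content, size, and mtime across servers, the ETags then match everywhere:

        # httpd.conf or the relevant vhost (Apache 1.3.23+ and 2.x):
        FileETag MTime Size

    Large multi-server sites also often skip ETags entirely and rely on Last-Modified plus long Cache-Control/Expires lifetimes; a 304 can be produced from If-Modified-Since alone, so a 304 from Google does not by itself prove an inode-based ETag is at work.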


  • Mail server: can't connect via POP/IMAP

    - by MelkerOVan
    I've followed this guide on setting up a mail server on my dedicated server. I've been able to send mail from the PHP application I'm using and from the Linux command line (using telnet, php, etc). The problem is that I cannot connect to the server via IMAP/POP, which I've set up using Courier.

    I've tried using Thunderbird, but it complains that the username or password is wrong. I doubt it is the username/password, but I don't know how to troubleshoot this.

    Edit: Here are the messages in mail.log:

        Jan 9 22:43:38 mail authdaemond: received auth request, service=imap, authtype=login
        Jan 9 22:43:38 mail authdaemond: authmysql: trying this module
        Jan 9 22:43:38 mail authdaemond: SQL query: SELECT id, crypt, "", uid, gid, home, "", "", name, "" FROM users WHERE id = '[email protected]' AND (enabled=1)
        Jan 9 22:43:38 mail authdaemond: password matches successfully
        Jan 9 22:43:38 mail authdaemond: authmysql: sysusername=<null>, sysuserid=5000, sysgroupid=5000, homedir=/var/spool/mail/virtual, [email protected], fullname=peter, maildir=<null>, quota=<null>, options=<null>
        Jan 9 22:43:38 mail authdaemond: authmysql: clearpasswd=<null>, passwd=password
        Jan 9 22:43:38 mail authdaemond: Authenticated: sysusername=<null>, sysuserid=5000, sysgroupid=5000, homedir=/var/spool/mail/virtual, [email protected], fullname=peter, maildir=<null>, quota=<null>, options=<null>
        Jan 9 22:43:38 mail authdaemond: Authenticated: clearpasswd=peter, passwd=password
        Jan 9 22:43:38 mail imapd: chdir Maildir: No such file or directory
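
    The last log line is the actual failure: authentication succeeded, but with maildir=<null> Courier falls back to $HOME/Maildir, i.e. /var/spool/mail/virtual/Maildir, and that directory does not exist. A sketch of creating it with Courier's own tool (the 5000:5000 owner comes from the log; adjust to your setup):

        maildirmake /var/spool/mail/virtual/Maildir
        chown -R 5000:5000 /var/spool/mail/virtual/Maildir

    If each virtual user is meant to have a separate mailbox, populate the maildir column in the users table instead, so authmysql returns a per-user path rather than <null>.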


  • Data recovery on a corrupted 3TB disk

    - by Mark K Cowan
    Short version: I probably need software to run a deep-scan recovery (ideally on Linux) to find files on an NTFS filesystem. The file data is intact, but the references are no longer present - analogous to recovering data from a "quick-formatted" partition. Hopefully there is a smarter way available than a deep scan, one which would recover filenames and possibly paths.

    Long version: I have a 3TB disk containing a load of backups. Windows 7 SP1 refused to detect the disk when plugged in directly via SATA, so I put it on a USB/SATA adaptor, which seemed to work at first. The SATA/USB adaptor probably does not support disks over 2.2TB, though. Windows first asked me if I wanted to 'format' the disk, then later showed me most of the contents, but some folders were inaccessible. I stupidly decided to run a CHKDSK on my backup disk, which made the folders accessible but also left them empty.

    I connected this disk via SATA to my main PC (Arch Linux). I tried:

    - testdisk
    - ntfsundelete
    - ntfsfix --no-action (to look for diagnostically relevant faults; the disk was "OK", though)

    to no avail, as the file references in the tables had presumably been zeroed out by CHKDSK rather than via a typical journal'd deletion. If it is useful at all, a majority of the files that I want to recover are JPEG, Photoshop PSD, and MPEG-3/MPEG-4/AVI/MKV files.

    If worst comes to worst, I'll just design my own sector scanner and use some simple heuristic-driven analysis to recover raw binary blocks of data from the disk which appear to match the structures of the above file types. I am unfamiliar with the exact workings of NTFS, but I used to be proficient at recovering FAT32 systems with just a hex editor, so I can provide any useful diagnostic information if you let me know how to find it!

    My priorities, in ascending order of importance for choosing the accepted answer:

    - Restores directory structure
    - Recovers many filenames in addition to the file data
    - Is free / very cheap
    - Runs on Linux
    - Recovers a majority of file data

    The last point is the most important, but the more of the higher points you match the more rep you'll probably get :)
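
    Given that CHKDSK has zeroed the MFT references, the standard deep-scan route on Linux is signature carving: it recovers file contents but not names or paths, i.e. it satisfies your most important criterion at the cost of the top two. A sketch (device names are placeholders):

        # PhotoRec, from the same suite as testdisk; carves JPEG, PSD,
        # MPEG/AVI/MKV and many other formats from the raw device:
        sudo photorec /dev/sdX

        # worth re-running first in scan mode, in case any MFT entries
        # survive (these would preserve filenames):
        sudo ntfsundelete -s /dev/sdX1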


  • Linux group permissions getting overwritten by owner

    - by Andy
    I am not a user of Linux, but I am encountering some permissions problems with it that I hope someone can shed some light on.

    Bit of background: a colleague of mine has a Linux box (running Debian, I believe) with an SVN repository on it. The owner of the repository directory and files is my colleague. We are both members of a group called 'users'. He manages several projects, both Linux and Windows apps, while I have one Windows app. For the Windows apps, we both use TortoiseSVN via an SSH link to commit/update.

    Running 'ls -l' shows the repository files and folders on the Linux box to have the following permissions:

        -rwxrwx--- john users

    However, when my colleague commits to the repository, the permissions change to:

        -rwxrwx--- john john

    This then means I get 'Permission denied' when trying to access the repository myself, as the group permissions have effectively been replaced by owner-only ones. To fix this, a 'chown -R' command is applied to the files/folders to set the permissions back to owner/group, but each time he writes to the repository, the issue repeats.

    I'm not sure if this is solely an SVN problem or a more fundamental owner/group issue. Does anyone have any clue how to stop this happening, or where to go and look?

    I'm trying to help out my colleague, who is having some trouble resolving this issue. Apologies for the vague info; I hope I have conveyed the problem clearly enough. Like I say, I am not a Linux user - I have only put down what I have managed to pick up from looking over his shoulder. Thanks for any pointers I can pass on!
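
    This is the classic umask/setgid combination rather than anything SVN-specific: every commit creates new files, and a new file gets its creator's primary group (john) with his umask applied (a umask of 022 strips group write). A sketch of the usual server-side fix (the repository path is a placeholder):

        # setgid on directories makes new files inherit the 'users'
        # group instead of the creator's primary group:
        sudo chgrp -R users /path/to/repo
        sudo find /path/to/repo -type d -exec chmod g+s {} +

        # and relax the umask for svn-over-ssh sessions, e.g. in the
        # committing users' shell startup files on the server:
        umask 002

    chown -R only repairs the damage after the fact; the setgid bit plus umask stop it recurring.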


  • Best Practical RT: sorting email into queues automatically using procmail

    - by user52095
    I'm trying to get incoming e-mail to automatically go directly into whichever queue/ticket it is related to, or create a new ticket if none exists and the right queue e-mail (as set up in the web interface) is used. I will have too many queues to have two line items within mailgate per queue.

    A similar issue was discussed here (http://serverfault.com/questions/104779/procmail-pipe-to-program-otherwise-return-error-to-sender), but I thought it best to open a new case instead of tagging on to what appeared to be an answer to that person's query.

    I'm able to send and receive e-mail (via Postfix) to the default rt user, and this user successfully accepts all e-mail for the relevant domain. I have no idea where the e-mail goes - it's successfully delivered, but it does not update existing tickets (with a Subject line match) and it does not create any new ones.

    Here's an example from my ./procmail.log:

        procmail: [23048] Mon Aug 23 14:26:01 2010
        procmail: Assigning "MAILDOMAIN=rt.mydomain.com "
        procmail: Assigning "RT_MAILGATE=/opt/rt3/bin/rt-mailgate "
        procmail: Assigning "RT_URL=http://rt.mydomain.com/ "
        procmail: Assigning "LOGABSTRACT=all "
        procmail: Skipped " "
        procmail: Skipped " "
        procmail: Assigning "LASTFOLDER={ "
        procmail: Opening "{ "
        procmail: Acquiring kernel-lock
        procmail: Notified comsat: "rt@18337:./{ "
        From [email protected] Mon Aug 23 14:26:01 2010
        Subject: RE: [RT.mydomain.com #1] Test Ticket
        Folder: { 1616

    Does the "Notified comsat" portion mean that it notified RT?

    The contents of my ./procmailrc:

        #Preliminaries
        SHELL=/bin/sh       #Use the Bourne shell (check your path!)
        #MAILDIR=${HOME}    #First check what your mail directory is!
        MAILDIR="/var/mail/rt/"
        LOGFILE="home/rt//procmail.log"
        LOG="--- Logging ${LOGFILE} for ${LOGNAME}, "
        VERBOSE=yes
        MAILDOMAIN="rt.mydomain.com"
        RT_MAILGATE="/opt/rt3/bin/rt-mailgate"
        #RT_MAILGATE="/usr/local/bin/rt-mailgate"
        RT_URL="http://rt.mydomain.com/"
        LOGABSTRACT=all

        :0
        {
            # the following line extracts the recipient from Received-headers.
            # Simply using the To: does not work, as tickets are often created
            # by sending a CC/BCC to RT
            TO=`formail -c -xReceived: |grep $MAILDOMAIN |sed -e 's/.*for *<*\(.*\)>* *;.*$/\1/'`
            QUEUE=`echo $TO| $HOME/get_queue.pl`
            ACTION=`echo $TO| $HOME/get_action.pl`

            :0 h b w
            |/usr/bin/perl $RT_MAILGATE --queue $QUEUE --action $ACTION --url $RT_URL
        }

    I know that my get_queue.pl and get_action.pl scripts work, as those have been previously tested.

    Any help and/or guidance you can give would be greatly appreciated.

    Nicôle
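
    One detail in the log stands out: procmail assigned LASTFOLDER={ and delivered the message to a mailbox literally named "{" (Folder: { 1616), meaning the brace block was parsed as a folder name and rt-mailgate never ran. It can help to confirm delivery first with the minimal recipe from the RT documentation, then reintroduce the queue/action extraction (the queue name is a placeholder):

        :0 w
        |/usr/bin/perl /opt/rt3/bin/rt-mailgate --queue general --action correspond --url http://rt.mydomain.com/

    You can also replay a saved message through the rcfile without involving Postfix - procmail ./procmailrc < saved-message.eml - and watch which recipe fires in the log. Note too that LOGFILE="home/rt//procmail.log" is a relative path and probably wants a leading slash.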


  • nginx proxy_pass POST 404 errors

    - by Scott
    I have nginx proxying to an app server, with the following configuration:

        location /app/ {
            # send to app server without the /app qualifier
            rewrite /app/(.*)$ /$1 break;
            proxy_set_header Host $http_host;
            proxy_pass http://localhost:9001;
            proxy_redirect http://localhost:9001 http://localhost:9000;
        }

    Any request for /app goes to :9001, whereas the default site is hosted on :9000. GET requests work fine, but whenever I submit a POST request to /app/any/post/url it results in a 404 error. Hitting the URL directly in the browser via GET /app/any/post/url hits the app server as expected.

    I found other people online with similar problems and added proxy_set_header Host $http_host; but this hasn't resolved my issue. Any insights are appreciated. Thanks.

    Full config below:

        server {
            listen 9000; ## listen for ipv4; this line is default and implied
            #listen [::]:80 default_server ipv6only=on; ## listen for ipv6

            root /home/scott/src/ph-dox/html;
            # root ../html; TODO: how to do relative paths?
            index index.html index.htm;

            # Make site accessible from http://localhost/
            server_name localhost;

            location / {
                # First attempt to serve request as file, then
                # as directory, then fall back to displaying a 404.
                try_files $uri $uri/ /index.html;
                # Uncomment to enable naxsi on this location
                # include /etc/nginx/naxsi.rules
            }

            location /app/ {
                # rewrite here sends to app server without the /app qualifier
                rewrite /app/(.*)$ /$1 break;
                proxy_set_header Host $http_host;
                proxy_pass http://localhost:9001;
                proxy_redirect http://localhost:9001 http://localhost:9000;
            }

            location /doc/ {
                alias /usr/share/doc/;
                autoindex on;
                allow 127.0.0.1;
                allow ::1;
                deny all;
            }
        }
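
    A simpler equivalent that avoids the rewrite is to let proxy_pass strip the prefix itself: when proxy_pass carries a URI part (even a bare /), nginx replaces the matched location prefix with it. A sketch worth trying, since rewrite ... break combined with proxy_pass is a common source of subtle mismatches:

        location /app/ {
            proxy_set_header Host $http_host;
            # the trailing slash makes nginx swap '/app/' for '/'
            # before passing the request upstream:
            proxy_pass http://localhost:9001/;
            proxy_redirect http://localhost:9001/ http://localhost:9000/app/;
        }

    If the 404 persists, establish who generates it: the app server's access log (or the Server response header) will show whether the POST ever reached the upstream with the path you expect.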


  • Test whether svn REPO changes are reflected in Working Copy

    - by user492160
    Requirement: changes will be made to the REPO directory, and these should get updated to the WC (working copy), as opposed to the normal way of WC -> REPO.

    Scenario:

        My svn repo:                      /var/www/svn/drupal
        My checkout dir / working copy:   /var/www/html/drupalsite

    So I've done the following:

    1. Edited the post-commit hook to contain: "/usr/bin/svn update /var/www/html/drupalsite"
    2. I won't make any change to the svn WC. I'll make changes to the svn REPO, /var/www/svn/drupal.
    3. After changes are made to the svn repo, run "svn commit /var/www/html/drupalsite". This will trigger the post-commit hook, which in turn will run "svn update /var/www/html/drupalsite", and thus my WC will get updated with the changes in the REPO.

    Queries:

    a. Would the above steps 1-3 help achieve my requirement?
    b. I'd need advice on how to test whether the above setup works successfully or not. I'm at a loss about the success of steps 1-3, which is why query (a) is present. This is a bit more of a concern for me.

    NB: I'm new to Subversion. Whatever I've configured till now has been done by reading articles online. The reason for query (b) is that I'm not into development. It seems to be a PHP Drupal website and I happen to be setting it up, so I'm not aware how to make a "PROPER" change in the REPO so that it gets reflected in the WC. If it is reflected, my configs are right and the team can start on development. I manually put a random file/folder into the REPO dir to see a change in the WC and ran steps 1-3, but to no avail; I later learned that this was NOT the way to make a change to a REPO. Please advise. Thanks.
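
    On (a): almost - changes never enter the repository by editing the REPO directory on disk; they enter through a commit from some working copy, and the hook then refreshes /var/www/html/drupalsite. A sketch of the hook plus a test for (b) (the log path is an assumption):

        #!/bin/sh
        # /var/www/svn/drupal/hooks/post-commit  (must be executable)
        REPOS="$1"
        REV="$2"
        /usr/bin/svn update /var/www/html/drupalsite \
            --non-interactive >> /var/log/svn-postcommit.log 2>&1

    To test, commit a change from any working copy and watch it appear in the live tree:

        svn checkout file:///var/www/svn/drupal /tmp/wc
        echo test > /tmp/wc/hook-test.txt
        svn add /tmp/wc/hook-test.txt
        svn commit -m "hook test" /tmp/wc
        ls /var/www/html/drupalsite/hook-test.txt   # should now exist

    Note the hook runs as whichever user performs the commit, so that user needs write access to the /var/www/html/drupalsite working copy.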


  • PHP - Centos OpenSSL error

    - by mabbs
    I'm currently having a problem with OpenSSL on my CentOS 6.5 server. It ran perfectly fine until Sunday. I checked the error_log and saw this error:

        PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib64/php/modules/openssl.so' - /usr/lib64/php/modules/openssl.so: cannot open shared object file: No such file or directory in Unknown on line 0

    I tried phpinfo(); and found that openssl is enabled. I tried php -m and it returned:

        [PHP Modules]
        bz2 calendar Core ctype curl date dom ereg exif fileinfo filter ftp
        gd gettext gmp hash iconv interbase json libxml mbstring mcrypt
        memcache mysql mysqli openssl pcntl pcre PDO PDO_Firebird pdo_mysql
        pdo_sqlite Phar pspell readline Reflection session shmop SimpleXML
        snmp sockets SPL sqlite3 standard tokenizer wddx xml xmlreader
        xmlrpc xmlwriter xsl zip zlib

    UPDATE: This is what I got from rpm -qa | grep php, just as Mike suggested:

        php-php-gettext-1.0.11-3.el6.noarch
        php-mcrypt-5.3.3-3.el6.x86_64
        php-interbase-5.3.3-3.el6.x86_64
        php-pdo-5.3.3-27.el6_5.1.x86_64
        php-5.3.3-27.el6_5.1.x86_64
        php-mysql-5.3.3-27.el6_5.1.x86_64
        php-snmp-5.3.3-27.el6_5.1.x86_64
        php-gd-5.3.3-27.el6_5.1.x86_64
        php-xml-5.3.3-27.el6_5.1.x86_64
        php-pear-1.9.4-4.el6.noarch
        php-pecl-memcache-3.0.5-4.el6.x86_64
        phpMyAdmin-3.5.8.2-1.el6.noarch
        php-common-5.3.3-27.el6_5.1.x86_64
        php-cli-5.3.3-27.el6_5.1.x86_64
        php-devel-5.3.3-27.el6_5.1.x86_64
        php-mbstring-5.3.3-27.el6_5.1.x86_64
        php-xmlrpc-5.3.3-27.el6_5.1.x86_64
        php-pspell-5.3.3-27.el6_5.1.x86_64
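
    The telling detail is that php -m and phpinfo() both show openssl even though the warning says /usr/lib64/php/modules/openssl.so is missing: on this build the extension appears to be compiled in, and a stray extension=openssl.so line left behind in some ini file is what produces the warning. A sketch of hunting it down:

        # find which ini file still references the missing .so:
        grep -rn "openssl.so" /etc/php.ini /etc/php.d/

        # comment that line out, then restart the web server:
        service httpd restart
        php -v    # the startup warning should be gone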


  • What to do before connecting Ubuntu Server to the internet for the first time?

    - by CodeMonkey
    I just finished installing Ubuntu Server 12.10 on an Asus Eee PC 1000H (to be used as a home server/sandbox) from USB. I installed this software during installation:

    1. OpenSSH server
    2. LAMP server
    3. Samba file server
    4. Virtual Machine host

    I won't use 2, 3, or 4 for a while, though. Can/should I turn these off somehow?

    I have turned home directory encryption on. Security updates are installed automatically. I have chosen a strong password for the single user. I have never plugged in the internet cable so far. Before doing so, I'd like to ask: what can/should I do or install to increase security before connecting to the internet? Firewall? Fail2ban? Users/passwords? Encryption? Enable/disable functionality? Etc.

    I'm sorry if you get this question a lot. I've searched around quite a while, but it still feels like I might be overlooking something important.
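
    A reasonable minimal pass before plugging in the cable: a default-deny firewall plus disabling the daemons you aren't using yet. Service names below are the stock Ubuntu 12.10 ones, but verify them against service --status-all on your box:

        # firewall: deny everything inbound except SSH
        sudo ufw default deny incoming
        sudo ufw allow 22/tcp
        sudo ufw enable

        # stop what you won't use yet; Apache is sysvinit on 12.10,
        # while MySQL and Samba are upstart jobs:
        sudo service apache2 stop && sudo update-rc.d apache2 disable
        sudo service mysql stop  && echo manual | sudo tee /etc/init/mysql.override
        sudo service smbd stop   && echo manual | sudo tee /etc/init/smbd.override

        # once online, add brute-force protection for SSH:
        sudo apt-get install fail2ban

    Key-only SSH (PasswordAuthentication no in /etc/ssh/sshd_config) is also worth doing once your public key is installed.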


  • Compile php 5.3 ldap extension

    - by toups
    So I'm trying to follow the very un-descriptive guide at my webhost for compiling a new PHP extension:

        Compiling PHP 5.3 extensions

        You can also compile and load your own extensions. Here's how:

        1. Download and unpack the extension (from PECL, for instance).
        2. If the extension is already compiled (most binary PHP loaders will be, for instance), skip to step 6.
        3. /usr/local/php53/bin/phpize
        4. ./configure --with-php-config=/usr/local/php53/bin/php-config
        5. make
        6. Copy the module to your .php/5.3/ directory.
        7. Assuming your user is called "username" and your module is named "mymodule.so", add the following to your .php/5.3/phprc:
           extension = /home/username/.php/5.3/mymodule.so

    I downloaded the OpenLDAP stable release online, uploaded the unpacked gzip via FTP to my server, and did steps 3, 4, and 5. Now step 6 says "copy the module...". My question is: where is the module for me to copy?

    Sorry if it's obvious and I'm not seeing it; this is my first time compiling a PHP extension :O
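
    Two things worth noting. First, after a successful make, the compiled module is written to the modules/ subdirectory of the build tree - that is the file step 6 wants. Second, phpize operates on a PHP extension's source (for LDAP, that is ext/ldap inside the PHP source tarball, or a PECL package), not on the OpenLDAP library distribution itself, so double-check which tree you compiled. A sketch, assuming the host layout from the guide:

        # from inside the extension's build directory, after 'make':
        ls modules/                    # e.g. ldap.so
        cp modules/ldap.so ~/.php/5.3/
        echo "extension = $HOME/.php/5.3/ldap.so" >> ~/.php/5.3/phprc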


  • Trying to build OpenSimulator... nant fails

    - by Gary
    The output of nant is:

        Buildfile: file:///root/opensim-0.6.8-release/OpenSim.build
        Target framework: Mono 2.0 Profile
        Target(s) specified: build

        [echo] Using 'mono-2.0' Framework

        init:

        Debug:
        [echo] Platform unix

        build:
        [nant] /root/opensim-0.6.8-release/OpenSim/Framework/Servers/HttpServer/OpenSim.Framework.Servers.HttpServer.dll.build build
        Buildfile: file:///root/opensim-0.6.8-release/OpenSim/Framework/Servers/HttpServer/OpenSim.Framework.Servers.HttpServer.dll.build
        Target framework: Mono 2.0 Profile
        Target(s) specified: build

        build:
        [echo] Build Directory is /root/opensim-0.6.8-release/OpenSim/Framework/Servers/HttpServer/bin/Debug
        [csc] Compiling 29 files to '/root/opensim-0.6.8-release/OpenSim/Framework/Servers/HttpServer/bin/Debug/OpenSim.Framework.Servers.HttpServer.dll'.
        [csc] /root/opensim-0.6.8-release/OpenSim/Framework/Servers/HttpServer/AsynchronousRestObjectRequester.cs(103,41): error CS0246: The type or namespace name `TResponse' could not be found. Are you missing a using directive or an assembly reference?
        [csc] Compilation failed: 1 error(s), 0 warnings

        BUILD FAILED - 0 non-fatal error(s), 1 warning(s)

        /root/opensim-0.6.8-release/OpenSim/Framework/Servers/HttpServer/OpenSim.Framework.Servers.HttpServer.dll.build(14,6): External Program Failed: /usr/lib/pkgconfig/../../lib/mono/2.0/gmcs.exe (return code was 1)

        Total time: 1.2 seconds.

        BUILD FAILED
        Nested build failed. Refer to build log for exact reason.

        Total time: 1.3 seconds.

    The OS is Fedora 7. Any ideas appreciated. :)


  • Can not copy files from NTFS partition

    - by Ali
    I am experiencing a weird problem. I was running Xubuntu on my laptop until yesterday, when I had to delete Xubuntu and install Windows. I had an NTFS partition on my Xubuntu install where I kept some files.

    Today, after installing Windows, I wanted to move all the files from that partition to an external HDD. I selected all files and folders and clicked Copy, then I went to the HDD and clicked Paste, but nothing happened. I do not know why. I copy the files, and wherever I click Paste, nothing happens. If I try to copy the files and folders one by one, I can copy some of them, but some of them do not move.

    The other problem I have is that I cannot open some files, in particular PDF files. When I click on PDF files I get this error:

        There was an error opening this document. This file cannot be found.

    Also, I cannot play some MP4 files, and I cannot open some JPG and TXT files. For those I get this error:

        The directory name is invalid.

    So, in summary, after removing Xubuntu and installing Windows 7, I have the following problems with one of the NTFS partitions on my internal drive:

    - Cannot copy or cut all folders and files from that partition to any other partition - and I also do not get any errors
    - Can copy some folders and files
    - Cannot access some PDF, JPEG, TXT and MP4 files, getting the above errors

    I should also mention that I did not change anything on this partition during the installation or while formatting the other partitions.
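
    The mix of symptoms ("file cannot be found", "directory name is invalid" for files that are plainly listed) points at a dirty or damaged NTFS volume rather than at the individual files. Letting Windows check and repair the filesystem is the usual first step - a sketch, where X: is a placeholder for the partition's drive letter:

        rem from an elevated Command Prompt:
        chkdsk X: /f

    If the files matter more than the volume, copy out everything that still reads before running the repair, since chkdsk can discard entries it cannot reconcile.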


  • "Unable to initialize module" fileinfo php-pecl-Fileinfo.x86_64

    - by Myers Network
    I have a brand new server that I am trying to get set up. This is a 64-bit machine on which I cannot install "fileinfo" or "memcache". I have uninstalled these and reinstalled them using yum and pecl with no luck. Yum installs fine (no error), but then I get an error when running php. pecl, from what I can tell, is only installing 32-bit: it does not put anything in the lib64 directory.

    Here is the output from php -v:

        PHP Warning: PHP Startup: fileinfo: Unable to initialize module
        Module compiled with module API=20050922, debug=0, thread-safety=0
        PHP compiled with module API=20060613, debug=0, thread-safety=0
        These options need to match in Unknown on line 0
        PHP Warning: PHP Startup: memcache: Unable to initialize module
        Module compiled with module API=20050922, debug=0, thread-safety=0
        PHP compiled with module API=20060613, debug=0, thread-safety=0
        These options need to match in Unknown on line 0
        PHP 5.2.14 (cli) (built: Aug 12 2010 16:03:48)
        Copyright (c) 1997-2010 The PHP Group
        Zend Engine v2.2.0, Copyright (c) 1998-2010 Zend Technologies

    Here is some other system info in case you need it.

    uname:

        Linux server.actham.us 2.6.18-194.26.1.el5 #1 SMP Tue Nov 9 12:54:20 EST 2010 x86_64 x86_64 x86_64 GNU/Linux

    php -m (after the same two warnings as above):

        [PHP Modules]
        bz2 calendar ctype curl date dbase dom exif filter ftp gd gettext
        gmp hash iconv imap json ldap libxml mbstring mcrypt mysql mysqli
        openssl pcntl pcre PDO pdo_mysql pdo_sqlite readline Reflection
        session shmop SimpleXML sockets SPL standard tokenizer wddx xml
        xmlreader xmlrpc xmlwriter xsl zip zlib
        [Zend Modules]

    Any help would be greatly appreciated, thanks....
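
    The version numbers in the warning carry the diagnosis: API=20050922 is the PHP 5.1 module ABI, while the running PHP 5.2.14 expects 20060613, so the .so files pecl built were compiled against the wrong phpize/php-config (likely an older PHP also present on the box). A sketch of rebuilding against the right one (verify the paths; which phpize tells you what is first in PATH):

        # confirm which PHP the build tools belong to:
        phpize --version               # Zend Module Api No should be 20060613
        php-config --extension-dir     # should be /usr/lib64/php/modules

        # rebuild the extensions against that PHP:
        pecl uninstall fileinfo
        pecl uninstall memcache
        pecl install fileinfo
        pecl install memcache
        php -v                         # the warnings should be gone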


  • How does the LeftHand SAN perform in a Production environment?

    - by Keith Sirmons
    Howdy,

    I previously asked this Server Fault question: Does anyone have experience with LeftHand's VSA SAN? The general consensus looks like it does not perform well enough for a production SQL server, even at a light load. So the new question is: how does LeftHand's SAN perform on the HP or Dell dedicated hardware boxes?

    We are looking at the Starter SAN with 2 HP nodes in 2-way replication, and 2 ESX servers hosting a total of 2 Active Directory servers, 1 MS SQL server, 1 file server, and 1 general-purpose server for things like virus scanning (all Microsoft Server 2005 or 2008).

    The reason I am looking at LeftHand is the complete software package. I plan to have a DR site, and I like how the SAN can perform an async replication to the offsite location without having to go back to the vendor for more licenses. I also like the redundancy built into the Network RAID architecture.

    I have looked at other SANs and found different faults with them. For example, Dell's EqualLogic: although the individual box is very redundant in hardware, the data, once spanned across multiple boxes, is not redundant - if a node goes down, you have lost the only copy of the data sitting on that hardware. (One thing is certain: all hardware fails. When? is the only question.) I have also used a XioTech SAN - well worth the money, by the way, but I think it is overkill for the size of office I am targeting, and the cost of getting hardware redundancy in the XioTech makes it a little out of reach for the budget I am working with.

    Thank you,
    Keith


  • Setting up vncserver on OpenSolaris zone

    - by k.park
    I am running OpenSolaris 5.10 and set up a sparse zone (it inherits most of the bin directories from the global zone). I ended up copying many etc and var files from the global zone, and eventually most of the stuff (Firefox, gvim, etc.) was working through ssh via X11. However, I am having problems setting up vncserver on the zone. This is what I get when I try to start the vncserver:

        vncext: VNC extension running!
        vncext: Listening for VNC connections on port 5911
        vncext: created VNC server for screen 0

        Fatal server error:
        could not open default font 'fixed'

        _X11TransNAMEDOpenClient: Cannot open /tmp/.X11-pipe/X11 for NAMED connection
        _X11TransOpen: transport open failed for local/%zone%:11
        xsetroot: unable to open display '%zone%:11'
        _X11TransNAMEDOpenClient: Cannot open /tmp/.X11-pipe/X11 for NAMED connection
        _X11TransOpen: transport open failed for local/%zone%:11
        _X11TransNAMEDOpenClient: Cannot open /tmp/.X11-pipe/X11 for NAMED connection
        _X11TransOpen: transport open failed for local/%zone%:11
        _X11TransNAMEDOpenClient: Cannot open /tmp/.X11-pipe/X11 for NAMED connection
        _X11TransOpen: transport open failed for local/%zone%:11
        vncconfig: unable to open display "%zone%:11"
        twm: unable to open display "%zone%:11"
        xterm Xt error: Can't open display: %zone%:11

    I already chmodded /tmp/.X11-pipe to 777, and there is no pipe in the /tmp/.X11-pipe or /tmp/.X11-unix directory.

    Here is my cat /etc/release:

        OpenSolaris 2009.06 snv_111b X86
        Copyright 2009 Sun Microsystems, Inc. All Rights Reserved.
        Use is subject to license terms.
        Assembled 07 May 2009

    BRAND: ipkg
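
    The fatal error is the font path: the zone's X server cannot find the 'fixed' font, so it exits, and the later X11-pipe errors are just fallout from the display never coming up. A sketch - the font directories are assumptions for OpenSolaris, so point -fp at whatever actually exists inside the zone:

        # see which font directories the zone can reach:
        ls /usr/X11/lib/X11/fonts/

        # start the server with an explicit font path:
        vncserver :11 -fp /usr/X11/lib/X11/fonts/misc/,/usr/X11/lib/X11/fonts/75dpi/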


  • How do I configure PHP5 and Apache2 on Ubuntu Server?

    - by rofls
    I'm trying to follow these instructions (under the "Troubleshooting PHP 5" heading). I have PHP installed, and when I run a2enmod php5 it says "Module php5 already enabled".

    The problem is I created a file, test.php, containing just this:

        <?php phpinfo(); ?>

    and put it in /var/www, like the instructions tell me to, but running curl http://localhost/test.php produces an Apache-generated 404 that says it can't find that file.

    I have:

        ServerName localhost
        DocumentRoot /var/www

    in one of the sites-available files in the /etc/apache2 directory.

    I should probably figure this out on my own, but the instructions say for troubleshooting:

        "If the problem persists, check your PHP file authorisations (it
        should be readable at least by Ubuntu user "apache"), and check if
        the PHP code is correct. For instance, copy your PHP file, replace
        your whole PHP file content by "<?php phpinfo(); ?>" (without the
        quotation marks): if you get the PHP test page in your web browser,
        then the problem is in your PHP code, not in Apache or PHP
        configuration nor in file permissions. If this doesn't work, then it
        is a problem of file authorisation, Apache or PHP configuration,
        cache not emptied, or Apache not running or not restarted."

    And I don't know where the PHP file authorisations are or how to do that.
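
    A 404 for a file that exists usually means Apache is serving a different DocumentRoot than the one you edited, or the edited vhost isn't enabled. A quick way to see what is live (the site name is an assumption; ls /etc/apache2/sites-enabled/ shows yours):

        # which vhosts and document roots is Apache actually using?
        apache2ctl -S

        # is the file where that docroot points, and world-readable?
        ls -l /var/www/test.php

        # enable the edited site and reload if it wasn't active:
        sudo a2ensite default
        sudo service apache2 reload
        curl http://localhost/test.php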

