Search Results

Search found 10842 results on 434 pages for 'sshd config'.


  • XML configuration of Zend_Form: child nodes and attributes not always equal?

    - by Cez
    A set of forms (using Zend_Form) that I have been working on was causing me some headaches: I kept getting unexpected HTML output for a particular INPUT element while trying to figure out what was wrong with my XML configuration. The element was supposed to get a default value, but nothing appeared. It seems that the following two pieces of XML are not equal when used to instantiate Zend_Form.
    Snippet #1:
      <form>
        <elements>
          <test type="hidden">
            <options ignore="true" value="foo"/>
          </test>
        </elements>
      </form>
    Snippet #2:
      <form>
        <elements>
          <test type="hidden">
            <options ignore="true">
              <value>foo</value>
            </options>
          </test>
        </elements>
      </form>
    The type of the element doesn't seem to make a difference, so it doesn't appear to be related to hidden fields. Is this expected or not?
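    For reference, a minimal sketch of how such a file would typically be fed to Zend_Form in ZF1 (the file name form.xml and the element access below are my assumptions, not from the post):
      <?php
      // load the XML above; its root <form> node becomes the form options
      $config = new Zend_Config_Xml('form.xml');
      $form   = new Zend_Form($config);
      // per the behaviour described above, only snippet #2 yields "foo" here
      echo $form->getElement('test')->getValue();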

    Read the article

  • Explain Entity Framework 4's connection strings

    - by metanaito
    I created an Entity Framework model. My database is called MyDB, my Entity Framework file is MyDB.edmx, and I used an existing connection string (MyDBConnectionString) to generate the edmx model. It created two more connection strings, MyDBEntities and MyDBContainer. What are these for? They look exactly the same and both contain the information from my old connection string. Do I still need my old connection string?
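    For context, an EF 4 connection string wraps the plain SQL connection string in metadata references to the model. A sketch of the shape (the server and catalog values here are placeholders, not taken from the post):
      <add name="MyDBEntities"
           connectionString="metadata=res://*/MyDB.csdl|res://*/MyDB.ssdl|res://*/MyDB.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=.;Initial Catalog=MyDB;Integrated Security=True&quot;"
           providerName="System.Data.EntityClient" />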

    Read the article

  • How do I effectively store a connection string in machine.config only?

    - by Scott Bedwell
    We are moving to an environment with multiple engines of MS SQL running on the same server (a test engine and a production engine). We also have separate test and production web servers, and would like our ASP.NET applications to "magically" use the test database engine on the test web server and the production database engine on the production web servers. We would like to store the connection strings in machine.config rather than in web.config, but when we put them in machine.config, Visual Studio's IDE (particularly with datasets) does not recognize that machine.config contains the connection. Does anyone know of a solution for displaying these machine.config connection strings in Visual Studio, or of a different solution that would accommodate this? Thanks.
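    A sketch of the idea being described, with the same connection string name registered in each server's machine.config but pointing at that server's engine (the name and instance below are hypothetical):
      <connectionStrings>
        <add name="MyAppConnectionString"
             connectionString="Data Source=.\TESTENGINE;Initial Catalog=MyApp;Integrated Security=True"
             providerName="System.Data.SqlClient" />
      </connectionStrings>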

    Read the article

  • How can I move app.config to a different folder inside the Solution Explorer?

    - by Coder7862396
    I'm using Visual Studio 2010. In my Solution Explorer I like to sort my project items into folders (a folder for Forms, a folder for Classes, a Misc folder, etc.). It seems, though, that if I move the "app.config" file to a folder named "Config Files", everything works until I change a setting in the Settings.settings file. Once I do that, a new app.config is created and the one in the "Config Files" folder does not get updated. I have searched the entire solution for the text "app.config" and did not find any results. How can I move this file so that my Solution Explorer looks nice and clean?

    Read the article

  • CustomError not working properly

    - by IrfanRaza
    Hello friends, I am using the following setting for customErrors:
      <customErrors mode="On" defaultRedirect="GenericErrorPage.aspx">
        <error statusCode="403" redirect="NoAccess.aspx" />
        <error statusCode="404" redirect="FileNotFound.aspx" />
      </customErrors>
    I have a folder "Admin" whose access is restricted to the administrators role. When someone other than an administrator tries to access the pages inside the Admin folder, they are redirected to the login page. My expectation is that "NoAccess.aspx" is displayed instead. What's wrong with this code? Or is there another meaning to statusCode=403? Could someone provide help on this? Thanks for sharing your valuable time.
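    One commonly suggested workaround (an assumption on my part, not from the post): forms authentication intercepts the 403 and bounces the user to the login page before customErrors ever sees it, so the login page itself can detect an already-authenticated visitor and forward them on. A sketch, with page names assumed:
      // Login.aspx code-behind (hypothetical)
      protected void Page_Load(object sender, EventArgs e)
      {
          // An authenticated user sent back here with a ReturnUrl was denied
          // by URL authorization, so show the no-access page instead.
          if (Request.IsAuthenticated && Request.QueryString["ReturnUrl"] != null)
          {
              Response.Redirect("~/NoAccess.aspx");
          }
      }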

    Read the article

  • Apache and Rewrite Module

    - by Yvon Blais
    I created a .htaccess file in the /var/www directory. The rights are root root --wxrwxrwxr. The content of the file is:
      Options +FollowSymlinks
      RewriteEngine on
      RewriteLogLevel 3
      RewriteLog "/var/log/apache2/rewrite.log"
      RewriteRule ^(.*?)$ testphp.php
    When I call the page phpinfo.php, I see mod_rewrite listed under "Loaded Modules", so the module is loaded. After each modification I restarted the server manually with sudo /etc/init.d/apache2 restart. The error.log shows "Apache/2.2.14 (Ubuntu) PHP/5.3.2-1ubuntu4.2 with Suhosin-Patch configured -- resuming normal operations". When I request a page such as anyone.htm or anyone.php, the rewrite.log contains nothing and the real page is served. If I understand correctly, the page anyone.php should be replaced by testphp.php. Did I do something wrong? Thanks
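    A guess at one common cause (not stated in the post): if the rules never even show up in the rewrite log, Apache may be ignoring .htaccess because the vhost has AllowOverride None. On a default Ubuntu Apache 2.2 setup the relevant block lives in /etc/apache2/sites-available/default:
      <Directory /var/www/>
          Options Indexes FollowSymLinks
          AllowOverride All     # the default is often "None", which disables .htaccess entirely
          Order allow,deny
          Allow from all
      </Directory>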

    Read the article

  • CC.NET File merge task and dynamic values

    - by ccnet
    How can I use CCNetLabel in the file merge task? From what I have found, I have to use dynamicValues. I have something like this and it is not working; any help?
      <publishers>
        <merge>
          <dynamicValues>
            <replacementValue property="files">
              <format>D:\Testoutput\{0}\*.xml</format>
              <parameters>
                <namedValue name="$CCNetLabel" value="Default" />
              </parameters>
            </replacementValue>
          </dynamicValues>
        </merge>
        <xmllogger />
        <modificationHistory onlyLogWhenChangesFound="true" />
        <statistics />
      </publishers>

    Read the article

  • JSF: Using same jsp page for different outcomes

    - by ComputerPilot
    Would it be possible to use a navigation-case as shown below with the same view-id but different from-outcomes? In the managed bean, I wanted to compare the from-outcome values and decide on the group panel that I would display on the page. How can I get the from-outcome value in my managed bean?
      <navigation-case>
        <from-outcome>modifyphone</from-outcome>
        <to-view-id>/modifytelephone.jsp</to-view-id>
      </navigation-case>
      <navigation-case>
        <from-outcome>confirmmodifyphone</from-outcome>
        <to-view-id>/modifytelephone.jsp</to-view-id>
      </navigation-case>
      <navigation-case>
        <from-outcome>submitmodifyphone</from-outcome>
        <to-view-id>/modifytelephone.jsp</to-view-id>
      </navigation-case>

    Read the article

  • How do you define paths in application?

    - by Hemaulo
    I'm using global constants, like this:
      /project
        /application
          bootstrap.php
        /public
          index.php
    index.php defines PUBLIC_PATH and APPLICATION_PATH, then calls APPLICATION_PATH . bootstrap.php. bootstrap.php defines LIBRARY_PATH, MODULES_PATH, TEMP_PATH, CONFIG_PATH, ... and does the real work. I also want to ask if there is a better way to do this.
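    A minimal sketch of that layout in code (the file contents are my reading of what is described above, not quoted from the post):
      <?php
      // public/index.php
      define('PUBLIC_PATH',      dirname(__FILE__));
      define('APPLICATION_PATH', dirname(dirname(__FILE__)) . '/application');
      require APPLICATION_PATH . '/bootstrap.php';

      // application/bootstrap.php
      define('LIBRARY_PATH', APPLICATION_PATH . '/library');
      define('CONFIG_PATH',  APPLICATION_PATH . '/config');
      define('TEMP_PATH',    APPLICATION_PATH . '/temp');
      // ... does the real work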

    Read the article

  • How should secret files be pushed to an EC2 (on AWS) Ruby on Rails application?

    - by nikc
    How should secret files be pushed to an EC2 Ruby on Rails application using Amazon Web Services with their Elastic Beanstalk? I add the files to a git repository and I push to GitHub, but I want to keep my secret files out of the git repository. I'm deploying to AWS using:
      git aws.push
    The following files are in the .gitignore:
      /config/database.yml
      /config/initializers/omniauth.rb
      /config/initializers/secret_token.rb
    Following this link I attempted to add an S3 file to my deployment: http://docs.amazonwebservices.com/elasticbeanstalk/latest/dg/customize-containers.html
    Quoting from that link: "Example Snippet. The following example downloads a zip file from an Amazon S3 bucket and unpacks it into /etc/myapp:"
      sources:
        /etc/myapp: http://s3.amazonaws.com/mybucket/myobject
    Following those directions I uploaded a file to an S3 bucket and added the following to a private.config file in the .elasticbeanstalk .ebextensions directory:
      sources:
        /var/app/current/: https://s3.amazonaws.com/mybucket/config.tar.gz
    That config.tar.gz file will extract to:
      /config/database.yml
      /config/initializers/omniauth.rb
      /config/initializers/secret_token.rb
    However, when the application is deployed the config.tar.gz file on the S3 host is never copied or extracted. I still receive errors that the database.yml couldn't be located, and the EC2 log has no record of the config file. Here is the error message:
      Error message: No such file or directory - /var/app/current/config/database.yml
      Exception class: Errno::ENOENT
      Application root: /var/app/current

    Read the article

  • local install of wp site brought down from host - home page is ok but other pages redirect to wamp config page

    - by jeff
    I have a local install of a WordPress site brought down from the host; the home page is OK but other pages redirect to the WAMP config page. I copied all the files from the host into the www dir under local WAMP, loaded the database from the host into a new local db, and used this tool to adjust site_on_web.com to "localhost/site_on_local". Now the home page works great and I can log in to the admin page, but when I click on the reservations page (and other pages of the site) it just goes to the WAMP server config page, even though the URL shows correctly as localhost/site_on_local/reservations. My htaccess file is this:
      # BEGIN WordPress
      <IfModule mod_rewrite.c>
      RewriteEngine On
      RewriteBase /
      RewriteRule ^index\.php$ - [L]
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule . /index.php [L]
      </IfModule>
      # END WordPress
    The rewrite module is checked in the WAMP Apache modules setting. When I uncheck the rewrite module, or clear out the whole htaccess file, the pages just go to "Not Found: The requested URL /ritas041214/about-ritas/ was not found on this server." Please help, as I am unsure now about my process to move the site between local and live and make it work, and without this I am lost...
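    A hunch worth checking (my assumption, not from the post): when WordPress is served from a subfolder, the generated rules usually need that subfolder in RewriteBase and in the fallback rule, roughly like this:
      # BEGIN WordPress
      <IfModule mod_rewrite.c>
      RewriteEngine On
      RewriteBase /site_on_local/
      RewriteRule ^index\.php$ - [L]
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule . /site_on_local/index.php [L]
      </IfModule>
      # END WordPress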

    Read the article

  • How to create newline in a rebol block ?

    - by Rebol Tutorial
    Let's say I have a config.txt which contains:
      "param11" "param12"
      "param21" "param22"
    I'll load it into memory with:
      config: load %config.txt
    I can save it back with:
      save %config.txt config
    So far so good. The problem occurs when I want to add "param31" "param32". I have tried:
      append config reduce [newline "param31" "param32"]
      save %config.txt config
    But that doesn't give the expected result:
      "param11" "param12"
      "param21" "param22"
      "param31" "param32"
    but this instead:
      "param11" "param12" "param21" "param22" #"^/" "param31" "param32"
    So how do I do it?

    Read the article

  • How to use crontab, .netrc, and git push?

    - by Jon
    Hi all, I am in the process of automating the backups from various servers to a central point, then pushing those config changes into a git repo so I can track any changes over time. The rest of the scripts are working well; I can copy / rsync the files across the network to a central point. The last script is to get the config files put into / updated in the repository. The script is as follows:
      #!/bin/bash
      clear
      SERVERNAME="betty"
      SCRIPTDIR="/home/jon"
      GITROOT="/tmp/git"
      TEMPROOT="/tmp/backups"
      BACKUPROOTDIR="/mnt/backups"
      echo " - running as user: $UID"
      echo "backingup git config on $SERVERNAME"
      echo ""
      # check to see if root backup folder exists, otherwise create it.
      if [ -d $GITROOT ]; then
        rm -rf $GITROOT
      fi
      mkdir $GITROOT
      cd $GITROOT
      echo " - testing if home is where I think it should be!"
      echo $HOME
      echo " - testing if it can see netrc"
      tail $HOME/.netrc
      git clone http://192.168.10.97:8000/repositories/HOH-config-backups.git
      cd HOH-config-backups
      echo " - copy Configuration Folders across"
      cp -r $BACKUPROOTDIR/Configuration/* $GITROOT/HOH-config-backups/
      cp -r $BACKUPROOTDIR/scripts $GITROOT/HOH-config-backups/
      git add .
      git commit -a -m "committing any new configuration changes!"
      git push origin master
      echo ""
      echo "Git repo updated"
      echo ""
      echo " - backing up this script"
      FIREWIGSCRIPTLOC="$BACKUPROOTDIR/scripts/$SERVERNAME"
      if [ ! -d $FIREWIGSCRIPTLOC ]; then
        mkdir $FIREWIGSCRIPTLOC
      fi
      cp /home/jon/gitConfig.sh $FIREWIGSCRIPTLOC
    The git repo is on a different machine in the network using Apache and HTTP-backend.exe (smart HTTP protocol). If I run this script as me ("jon") it works. If I run it in crontab it fails. git uses the /home/jon/.netrc file for authentication:
      machine 192.168.10.97 login gitconfig password 1234579
    The log from crontab is:
    TERM environment variable not set. - running as user: 1000 backingup git config on betty - testing if home is where I think it should be! 
/home/jon - testing if it can see netrc machine 192.168.10.97 login gitconfig password 1234579 got 08de5bc2b27b4940d9412256e76d5e3c3d9dbcdd walk 08de5bc2b27b4940d9412256e76d5e3c3d9dbcdd got be880f2d306778a538d592e7a02eb19f416612f7 got bd387e8def9f77aafa798bf53e80d949aba443e8 got 1bc1a59e12775841d4c59d77c63b8a73823138c2 walk bd387e8def9f77aafa798bf53e80d949aba443e8 Getting alternates list for http://192.168.10.97:8000/repositories/HOH-config-backups.git got 030512237bca72faf211e0e8ec2906164eac34f6 got 9bc2f575240bc1f61ff7d69777ce1a165d06b184 got b8400f7f01429104a9d4786a6bb1a16d293e37c1 got 2403b5bf611010e0b401f776f0e23b09ce744838 got 1a27944c48269ef3608a8f2466e43402d06faac0 got b686f45b7d57af4fa8ca0d528bb85216d6247e19 Getting pack list for http://192.168.10.97:8000/repositories/HOH-config-backups.git Getting index for pack ae881957c0f0e8c22eb6cc889a22ef78eb4ce6ff Getting pack ae881957c0f0e8c22eb6cc889a22ef78eb4ce6ff which contains ff84d6d48e9326066438d167a10251218d612b3d walk b686f45b7d57af4fa8ca0d528bb85216d6247e19 got 364e30daec17814073e668f490bb84af891fe1f7 got 23f6497e7f9b80e0d90adad73bd0407a0e5ac6ce got 9e77c47574b5e23ea669afe0c23ab235e4917ee1 got 6654e0d328a216b3783e98c47206cb2d01b3353d got 28821ffd437d2689ffb82c6e4b9c3f5372c95c4b got 8c384a24f645389e4d4b08013c79e9e73a658342 got d203be0123736ee025ce20c081f1489098648dfc got 1852603bf7709e71417d8ccec02390279d533642 got fb753a26b20b04694419fce8ecdaa8dbec105cf1 got 736028997cd84dd1c135f57e9d246674b9cd0b9d got 7af836249e20096d0476a548d5be702a071cdd4b got 240dc39d9db50df63073fc7927b2d002dfa0f54c got 93abd36e3935a01011eb753b635a1a0e984bf31e got c6269e28fecf4d8d0d98b9358aecb3acff02df44 got b0aa29432f73e64032682a351d436c24b14078ab walk 240dc39d9db50df63073fc7927b2d002dfa0f54c got 58fb66d9f35f8a5e32ff4683309c5f0c2a3a03c5 got 0da2def4de0565483cdbe6b87418ee2beb122e58 got 0f6a86c6f87ed52ad2ed01e5c6edd661d364930c got 437a93d27b5bb89c739a0564a34a616e832c3ebe got fe0385abe5c0acd8462268dac330bae00e934f1b got 24259f8f5c5c9ee974a75fe3d1e07c02e3e20fe9 got d29f624bf1a5eceedaa86c10fee35f62747c7d04 got 0154e4c987132585ea7a92b77d02dba285512d6b got eda8bf526567c25ee70addb2ad3c3c6aa57eac77 got 9f3d9d7262d66f9fa4f6a13b7c86199953f4bc4e got 8e20881e19667aa22245d0598646991067455a4d got abb1123145689b35eb19519952c71253ee45fa98 got dfeff593c79b4156ce2ce1adf043d0e80356488c got e20c5b48b1d360e0bcf34189e3f3d2bbf23e92cc got b13eb81cc274780322ecf786372320343926bec9 walk 8de83868b3fac748b0a55eba16c8f668ec852abb got b5961421bbc42afe7a07cc1c8b615aba26ba74d7 got 2650ba819019df4193b482733e29ca79b29f3f2c got b3111e1be8103e91803a97a817ed81f28025aca1 got b060be934d709684f5eb5dad3c03932a3589e864 got cf70d2043f081d7a4438e9d5a290a9f986c84060 got 80bf0f1cc836feab86d6935bb7968d8555a8d531 got da318d167920e34bc6573e4fc236249ccbbee316 got d82ac853d387b760149599e6e1ab96403f6ec672 got 0005f691d1f46550fdb4e56025f52e30a5b18cc2 Initialized empty Git repository in /tmp/git/HOH-config-backups/.git/ - copy Configuration Folders across Created commit 424df2f: committing any new configuration changes! 
    3 files changed, 55 insertions(+), 1 deletions(-)
      create mode 100755 scripts/betty/gitConfig.sh
      error: Cannot access URL http://192.168.10.97:8000/repositories/HOH-config-backups.git/, return code 22
      error: failed to push some refs to 'http://192.168.10.97:8000/repositories/HOH-config-backups.git'
      Git repo updated
      - backing up this script
      cp: cannot create regular file `/mnt/backups/scripts/betty/gitConfig.sh': Permission denied
    My crontab is:
      # m h dom mon dow command
      04 * * * * /home/jon/gitConfig.sh > /tmp/gitconfig.log 2>&1
    I open it by doing $ crontab -e, i.e. not as root. I am a bit confused as to why it is not running as my user (or what user id 1000 is). Not sure what I need to do to get the push with git to work within crontab.
    Edit: found out about the userid:
      jon@betty:~$ id
      uid=1000(jon) gid=1000(jon) groups=4(adm),20(dialout),24(cdrom),46(plugdev),109(sambashare),114(lpadmin),115(admin),1000(jon)
    Here is my $HOME/.gitconfig file:
      [user]
      name = Jon Hawkins
      email = [email protected]
    Thanks
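    One debugging step that often helps here (my suggestion, not from the thread): capture cron's environment next to the job and diff it against an interactive shell before blaming git or .netrc, since cron runs with a much smaller environment:
      # temporary crontab entry for debugging (illustrative)
      04 * * * * env > /tmp/cron-env.txt; /home/jon/gitConfig.sh > /tmp/gitconfig.log 2>&1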

    Read the article

  • debsum and actual md5

    - by Radium
    I have discovered that debsums may not work as I thought. I ran:
      debsums -as
    and I did not see the sshd binary in that list. However, the md5 of the /usr/sbin/sshd file and the value given in /var/lib/dpkg/info/openssh-server.md5sums are different.
      cat /var/lib/dpkg/info/openssh-server.md5sums
      968ce0ccc85f3dc64375c689fa165359  usr/lib/openssh/sftp-server
      ba856dce069acadff587ca95e8e63551  usr/sbin/sshd
      a8f85459802674a416b903c8be7774d6  usr/share/doc/openssh-client/examples/sshd_config
      8c5592e0d522fa0f8f55f3c104479ef5  usr/share/lintian/overrides/openssh-server
      24e6a2d6f56d5fd52651db030a4124bb  usr/share/man/man5/sshd_config.5.gz
      65dbe6d2862940ad7cd945fadaabc2f8  usr/share/man/man8/sftp-server.8.gz
      63398534a80e75262e56ac821e2bb3f3  usr/share/man/man8/sshd.8.gz

      md5sum /usr/sbin/sshd
      72a54d63b9f9edbdc0cb0de4715683d0
    What is wrong?
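    A way to cross-check (my suggestion, not from the post) is to compare the installed binary against a fresh copy of the package, and to rule out prelink rewriting the file on disk:
      # illustrative commands; adjust the package filename to whatever apt fetches
      apt-get download openssh-server
      dpkg-deb --fsys-tarfile openssh-server_*.deb | tar -xO ./usr/sbin/sshd | md5sum
      md5sum /usr/sbin/sshd
      # if prelink is installed, hash the un-prelinked image instead:
      prelink -y /usr/sbin/sshd | md5sum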

    Read the article

  • Just getting started in Spring and my preference is XML config over annotations. Correct or not?

    - by John Munsch
    After having read through some of the Spring docs, my inclination is towards using an XML config file rather than annotations on the classes themselves. My reasoning is that by doing so you avoid tying your POJOs to a particular framework. Based on your experience with Spring, are there any advantages that XML configuration has over an annotation-based configuration, and if not, what are the disadvantages?
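    For illustration only (the class and bean names here are invented), the same wiring expressed both ways: XML keeps the POJO free of Spring imports, while annotations put the wiring on the class itself.
      <!-- XML style -->
      <bean id="orderService" class="example.OrderService">
          <constructor-arg ref="orderRepository"/>
      </bean>

      // Annotation style
      @Service
      public class OrderService {
          private final OrderRepository repository;

          @Autowired
          public OrderService(OrderRepository repository) {
              this.repository = repository;
          }
      }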

    Read the article

  • How to solve: "Connect to host some_hostname port 22: Connection timed out"

    - by Aufwind
    I have two Ubuntu machines. Both have openssh-client and openssh-server installed on them. ssh-ing from machine G (fresh Ubuntu 11.10 installation) to machine K works great. But ssh-ing from machine K to machine G always results in the error:
      connect to host some_hostname port 22: Connection timed out
    I went through the troubleshooting section of help.ubuntu.com and got the following results:
      ps -A | grep sshd        # results in: 848 ? 00:00:00 sshd
      sudo ss -lnp | grep sshd # results in: 0 128 :::22 :::* users:(("sshd",848,4))  and  0 128 *:22 *:* users:(("sshd",848,3))
      ssh -v localhost         # works!
      sudo ufw status verbose  # yields: "Status: inactive"
    I haven't changed anything in the config file. What can I do to locate the problem and solve it? Glad about every hint!
    Edit: ping was successful in both directions! I did a telnet <machineK> 22 from machine G, which resulted in "Trying ..." and then "telnet: Unable to connect to remote host: Connection timed out". But telnet the other way around worked just fine!
    Edit 2:
      ssh start/running, process 966  # yields: ssh start/running, process 966
      /etc/hostname  # contains my hostname, let's call it blubb
      /etc/hosts     # contains the following:
        127.0.0.1 localhost
        # 127.0.1.1 blubb
        129.26.68.74 blubb  # I added this!
      sudo service ufw status  # yields: ufw start/running
    I installed Gufw and set it to ON. Then I selected the option ALLOW for Incoming. Then I ssh-ed to another machine, from where I ssh-ed back to my machine. Still the same error as above: connect to host blubb port 22: Connection timed out. Any more hints on what I can check?
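    Some generic checks that usually narrow a port-22 timeout down (my suggestions, not from the thread): a timeout means the packets never reach sshd, so it is worth looking at addressing and filtering rather than sshd_config.
      ip addr show                      # does machine G actually own the address added to /etc/hosts?
      sudo iptables -L -n -v            # any DROP/REJECT rules with packet counters increasing?
      sudo tcpdump -ni any tcp port 22  # run on G while connecting from K; no packets means routing/firewall upstream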

    Read the article

  • Where can I get the Natty kernel .config file?

    - by Oli
    I'm using Maverick with the latest available kernels on kernel.org and building them myself. Until now I've been basing my configuration off the stock Maverick kernel and accepting the make oldconfig defaults. I've been doing this for 3 major releases now so I figure I'm starting to slip behind the current "standard". I would like to re-base my kernels off the new Natty .config file. Is this available somewhere online or do I have to download the whole kernel package and extract it?
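    One route (an assumption, not from the question; the exact package version below is illustrative): installed Ubuntu kernels ship their config as /boot/config-<version>, so the Natty config can be extracted from the Natty linux-image .deb without installing it, then rebased with make oldconfig:
      dpkg-deb -x linux-image-2.6.38-8-generic_2.6.38-8.42_i386.deb natty-kernel/
      cp natty-kernel/boot/config-2.6.38-8-generic ~/linux-source/.config
      make oldconfig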

    Read the article

  • Cannot SSH anymore, what went wrong?

    - by lbwtz2
    I used to ssh to a remote server (no RSA key, just password). Now the server does not accept the connection any more and throws this error:
      ssh_exchange_identification: Connection closed by remote host
    While I can google a little to find a fix, I can't figure out what went wrong, since I haven't touched anything on the machine since the last login. Can you help me find the cause?
    EDIT: Inspecting /var/log/auth.log I've found these:
      Dec 26 16:40:32 vps sshd[15567]: error: fork: Cannot allocate memory
      Dec 26 16:41:05 vps sshd[15567]: error: fork: Cannot allocate memory
      Dec 26 16:43:47 vps sshd[15567]: error: fork: Cannot allocate memory
      Dec 27 03:20:06 vps sshd[15567]: error: fork: Cannot allocate memory
      Dec 27 16:15:02 vps sshd[15567]: error: fork: Cannot allocate memory
    And in the same time span I've also found a lot of these:
      Dec 26 13:00:01 vps CRON[1716]: PAM unable to dlopen(/lib/security/pam_unix.so): libcrypt.so.1: cannot map zero-fill pages: Cannot allocate memory
      Dec 26 13:00:01 vps CRON[1716]: PAM adding faulty module: /lib/security/pam_unix.so
    What are these?
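    Those messages typically mean the VPS has run out of memory, so sshd cannot fork a child for the incoming connection. A quick diagnostic sketch from the provider's console (my suggestion, not from the thread):
      free -m                          # how much memory and swap is left?
      ps aux --sort=-rss | head        # what is eating the memory
      sudo grep -E 'privvmpages|failcnt' /proc/user_beancounters   # OpenVZ containers only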

    Read the article

  • Dual head setup for Ubuntu 10.04.1 and Windows XP Pro with same hardware configuration

    - by mejpark
    Hello. I have a Dell OptiPlex 360 workstation at work, with 2 x ATI RV280 [Radeon 9200 PRO] graphics cards installed, which are attached to two identical 19" HII flat panel monitors. I'm using the open source Radeon driver with Ubuntu, and the proprietary drivers with Windows. The good news is that dual head configuration works for both OSes. The bad news is that I have to use a different hardware configuration for each OS to achieve this.
    Hardware config #1 (dual monitors work for Windows XP Pro like this):
      First display  -> external VGA port
      Second display -> DVI input on gfx card
    Hardware config #2 (dual monitors work for Ubuntu 10.04.1 like this):
      First display  -> VGA port on gfx card
      Second display -> DVI input on gfx card
    I connected up the displays according to Config #2 and booted up Windows, which resulted in a mirror image on both screens. I was unable to log in, as the login box was not visible. I unplugged the VGA lead from the gfx card and plugged it into the external VGA port (Config #1): Windows dual head works again, but the VGA-connected screen is not recognised by Ubuntu and remains in standby mode. Is it possible to configure a dual head setup for Ubuntu using Config #1, or am I missing something? I tried setting up dual monitors using Config #1 this morning, which didn't work. By default, there is no xorg.conf file in Ubuntu 10.04.1, so I generated one using:
      $ sudo X :2 -configure
      X.Org X Server 1.7.6
      Release Date: 2010-03-17
      X Protocol Version 11, Revision 0
      Build Operating System: Linux 2.6.24-27-server i686 Ubuntu
      Current Operating System: Linux harrier 2.6.32-24-generic #42-Ubuntu SMP Fri Aug 20 14:24:04 UTC 2010 i686
      Kernel command line: BOOT_IMAGE=/boot/vmlinuz-2.6.32-24-generic root=UUID=a34c1931-98d4-4a34-880c-c227a2936c4a ro quiet splash
      Build Date: 21 July 2010 12:47:34PM
      xorg-server 2:1.7.6-2ubuntu7.3 (For technical support please see http://www.ubuntu.com/support)
      Current version of pixman: 0.16.4
      Before reporting problems, check http://wiki.x.org to make sure that you have the latest version.
      Markers: (--) probed, (**) from config file, (==) default setting, (++) from command line, (!!) notice, (II) informational, (WW) warning, (EE) error, (NI) not implemented, (??) unknown.
      (==) Log file: "/var/log/Xorg.2.log", Time: Mon Sep 13 10:02:02 2010
      List of video drivers: apm ark intel mach64 s3virge trident mga tseng ati nouveau neomagic i740 openchrome voodoo s3 i128 radeon siliconmotion nv ztv vmware v4l chips rendition savage sisusb tdfx geode sis r128 cirrus fbdev vesa
      (++) Using config file: "/home/michael/xorg.conf.new"
      (==) Using config directory: "/usr/lib/X11/xorg.conf.d"
      (II) [KMS] No DRICreatePCIBusID symbol, no kernel modesetting.
      Xorg detected your mouse at device /dev/input/mice. Please check your config if the mouse is still not operational, as by default Xorg tries to autodetect the protocol.
      Xorg has configured a multihead system, please check your config.
      Your xorg.conf file is /home/michael/xorg.conf.new
      To test the server, run 'X -config /home/michael/xorg.conf.new'
      ddxSigGiveUp: Closing log

      $ sudo X -config /home/michael/xorg.conf.new
      Fatal server error:
      Server is already active for display 0
      If this server is no longer running, remove /tmp/.X0-lock and start again.
      Please consult the The X.Org Foundation support at http://wiki.x.org for help.
      ddxSigGiveUp: Closing log
    I then booted Ubuntu in failsafe mode, dropped into a root shell, and executed $ X -config /home/michael/xorg.conf.new again. The screen went blank and turned off, so I reset the machine. There must be a way round this. Any help to set up a dual head config for Ubuntu using Config #1 would be hugely appreciated. TIA, Mike

    Read the article

  • Ubuntu 10.04 (Lucid) OpenLDAP invalid credentials issue

    - by gmuller
    This isn't a question, but a solution to an infuriating problem on Ubuntu 10.04. If you try to deploy an LDAP server on this distro following the tutorials below, you'll be in serious trouble. Tutorials:
      https://help.ubuntu.com/9.10/serverguide/C/openldap-server.html
      https://help.ubuntu.com/9.10/serverguide/C/samba-ldap.html
    The error first appears on the line:
      ldapsearch -xLLL -b cn=config -D cn=admin,cn=config -W olcDatabase=hdb olcAccess
    It simply won't allow admin to access "cn=config", so you won't be able to deploy the LDAP server correctly. After almost a week searching for a solution, I found this page: https://bugs.launchpad.net/ubuntu-docs/+bug/333733
    In comment #5 the solution is presented. Quoting the author: when you get to the ACL setup part you suddenly need a cn=admin,cn=config that doesn't exist. The fix is creating a config.ldif with:
      dn: olcDatabase={0}config,cn=config
      changetype: modify
      add: olcRootDN
      olcRootDN: cn=admin,cn=config

      dn: olcDatabase={0}config,cn=config
      changetype: modify
      add: olcRootPW
      olcRootPW: secret

      dn: olcDatabase={0}config,cn=config
      changetype: modify
      delete: olcAccess
    and adding it with:
      ldapadd -Y EXTERNAL -H ldapi:/// -f config.ldif
    It's unacceptable that a Linux distribution as popular as Ubuntu has such a ridiculous bug. Hope it helps everyone!

    Read the article

  • Installing paperclip plugin

    - by mnml
    I'm trying to install the paperclip plugin with the following command:
      ruby script/plugin install git://github.com/thoughtbot/paperclip.git
    But I'm getting some errors:
      ruby script/plugin install git://github.com/thoughtbot/paperclip.git --force
      svn: '/home/app/vendor/plugins' is not a working copy
      /usr/lib/ruby/1.8/open-uri.rb:32:in `initialize': No such file or directory - git://github.com/thoughtbot/paperclip.git (Errno::ENOENT)
      from /usr/lib/ruby/1.8/open-uri.rb:32:in `open_uri_original_open'
      from /usr/lib/ruby/1.8/open-uri.rb:32:in `open'
      from ./script/../config/../vendor/rails/railties/lib/commands/plugin.rb:863:in `fetch_dir'
      from ./script/../config/../vendor/rails/railties/lib/commands/plugin.rb:857:in `fetch'
      from ./script/../config/../vendor/rails/railties/lib/commands/plugin.rb:856:in `each'
      from ./script/../config/../vendor/rails/railties/lib/commands/plugin.rb:856:in `fetch'
      from ./script/../config/../vendor/rails/railties/lib/commands/plugin.rb:219:in `install_using_http'
      from ./script/../config/../vendor/rails/railties/lib/commands/plugin.rb:169:in `send'
      from ./script/../config/../vendor/rails/railties/lib/commands/plugin.rb:169:in `install'
      from ./script/../config/../vendor/rails/railties/lib/commands/plugin.rb:734:in `parse!'
      from ./script/../config/../vendor/rails/railties/lib/commands/plugin.rb:732:in `each'
      from ./script/../config/../vendor/rails/railties/lib/commands/plugin.rb:732:in `parse!'
      from ./script/../config/../vendor/rails/railties/lib/commands/plugin.rb:447:in `parse!'
      from ./script/../config/../vendor/rails/railties/lib/commands/plugin.rb:463:in `parse!'
      from ./script/../config/../vendor/rails/railties/lib/commands/plugin.rb:871
      from script/plugin:3:in `require'
      from script/plugin:3
    Is it because I'm using an old Rails version?
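    A workaround that often helps with older Rails plugin scripts (my suggestion, not from the post): the trace shows the script falling back to svn/http fetching, so fetching the plugin with git yourself sidesteps it:
      cd vendor/plugins
      git clone git://github.com/thoughtbot/paperclip.git paperclip
      rm -rf paperclip/.git    # keep the plugin files, drop its repository metadata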

    Read the article

  • Why can't I include these data files in a Python distribution using distutils?

    - by froadie
    I'm writing a setup.py file for a Python project so that I can distribute it. The aim is to eventually create a .egg file, but I'm trying to get it to work first with distutils and a regular .zip. This is an Eclipse PyDev project and my file structure is something like this:
      ProjectName
        src
          somePackage
            module1.py
            module2.py
            ...
        config
          propsFile1.ini
          propsFile2.ini
          propsFile3.ini
        setup.py
    Here's my setup.py code so far:
      from distutils.core import setup
      setup(name='ProjectName',
            version='1.0',
            packages=['somePackage'],
            data_files = [('config', ['..\config\propsFile1.ini',
                                      '..\config\propsFile2.ini',
                                      '..\config\propsFile3.ini'])]
      )
    When I run this (with sdist as a command line parameter), a .zip file gets generated with all the Python files, but the config files are not included. I thought that the data_files entry indicates that those 3 specified config files should be copied to a "config" directory in the zip distribution. Why is this code not accomplishing anything? What am I doing wrong? (I have also tried playing around with the paths of the config files, but nothing seems to help. Would Python throw an error or warning if the path was incorrect / the file was not found?)
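    A possible fix (my assumption; it depends on setup.py sitting at the project root and the relative paths being the real culprit): distutils resolves data_files paths relative to setup.py, and paths that climb out of the project with ..\ are a common reason the files never make it into the sdist. The package_dir mapping below is also an assumption to match the src/ layout:
      from distutils.core import setup

      setup(name='ProjectName',
            version='1.0',
            package_dir={'': 'src'},        # packages live under src/ (assumed)
            packages=['somePackage'],
            data_files=[('config', ['config/propsFile1.ini',
                                    'config/propsFile2.ini',
                                    'config/propsFile3.ini'])])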

    Read the article
