Search Results

Search found 14712 results on 589 pages for 'home hub'.


  • Amazon EC2 pem file stopped working suddenly

    - by Jashwant
    I was connecting to Amazon EC2 through SSH and it was working well. But all of a sudden, it stopped working. I am not able to connect anymore with the same key file. What can go wrong? Here's the debug info.

      ssh -vvv -i ~/Downloads/mykey.pem [email protected]
      OpenSSH_6.1p1 Debian-4, OpenSSL 1.0.1c 10 May 2012
      debug1: Reading configuration data /etc/ssh/ssh_config
      debug1: /etc/ssh/ssh_config line 19: Applying options for *
      debug2: ssh_connect: needpriv 0
      debug1: Connecting to ec2-54-222-60-78.eu.compute.amazonaws.com [54.229.60.78] port 22.
      debug1: Connection established.
      debug3: Incorrect RSA1 identifier
      debug3: Could not load "/home/jashwant/Downloads/mykey.pem" as a RSA1 public key
      debug1: identity file /home/jashwant/Downloads/mykey.pem type -1
      debug1: identity file /home/jashwant/Downloads/mykey.pem-cert type -1
      debug1: Remote protocol version 2.0, remote software version OpenSSH_5.9p1 Debian-5ubuntu1.1
      debug1: match: OpenSSH_5.9p1 Debian-5ubuntu1.1 pat OpenSSH_5*
      debug1: Enabling compatibility mode for protocol 2.0
      debug1: Local version string SSH-2.0-OpenSSH_6.1p1 Debian-4
      debug2: fd 3 setting O_NONBLOCK
      debug3: load_hostkeys: loading entries for host "ec2-54-222-60-78.eu.compute.amazonaws.com" from file "/home/jashwant/.ssh/known_hosts"
      debug3: load_hostkeys: found key type ECDSA in file /home/jashwant/.ssh/known_hosts:4
      debug3: load_hostkeys: loaded 1 keys
      debug3: order_hostkeyalgs: prefer hostkeyalgs: [email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521
      debug1: SSH2_MSG_KEXINIT sent
      debug1: SSH2_MSG_KEXINIT received
      debug2: kex_parse_kexinit: ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
      debug2: kex_parse_kexinit: [email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,[email protected],[email protected],[email protected],[email protected],ssh-rsa,ssh-dss
      debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]
      debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]
      debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96
      debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-512,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96
      debug2: kex_parse_kexinit: none,[email protected],zlib
      debug2: kex_parse_kexinit: none,[email protected],zlib
      debug2: kex_parse_kexinit:
      debug2: kex_parse_kexinit:
      debug2: kex_parse_kexinit: first_kex_follows 0
      debug2: kex_parse_kexinit: reserved 0
      debug2: kex_parse_kexinit: ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
      debug2: kex_parse_kexinit: ssh-rsa,ssh-dss,ecdsa-sha2-nistp256
      debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]
      debug2: kex_parse_kexinit: aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,aes192-cbc,aes256-cbc,arcfour,[email protected]
      debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96
      debug2: kex_parse_kexinit: hmac-md5,hmac-sha1,[email protected],hmac-sha2-256,hmac-sha2-256-96,hmac-sha2-512,hmac-sha2-512-96,hmac-ripemd160,[email protected],hmac-sha1-96,hmac-md5-96
      debug2: kex_parse_kexinit: none,[email protected]
      debug2: kex_parse_kexinit: none,[email protected]
      debug2: kex_parse_kexinit:
      debug2: kex_parse_kexinit:
      debug2: kex_parse_kexinit: first_kex_follows 0
      debug2: kex_parse_kexinit: reserved 0
      debug2: mac_setup: found hmac-md5
      debug1: kex: server->client aes128-ctr hmac-md5 none
      debug2: mac_setup: found hmac-md5
      debug1: kex: client->server aes128-ctr hmac-md5 none
      debug1: sending SSH2_MSG_KEX_ECDH_INIT
      debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
      debug1: Server host key: ECDSA d8:05:8e:fe:37:2d:1e:2c:f1:27:c2:e7:90:7f:45:48
      debug3: load_hostkeys: loading entries for host "ec2-54-222-60-78.eu.compute.amazonaws.com" from file "/home/jashwant/.ssh/known_hosts"
      debug3: load_hostkeys: found key type ECDSA in file /home/jashwant/.ssh/known_hosts:4
      debug3: load_hostkeys: loaded 1 keys
      debug3: load_hostkeys: loading entries for host "54.229.60.78" from file "/home/jashwant/.ssh/known_hosts"
      debug3: load_hostkeys: found key type ECDSA in file /home/jashwant/.ssh/known_hosts:5
      debug3: load_hostkeys: loaded 1 keys
      debug1: Host 'ec2-54-222-60-78.eu.compute.amazonaws.com' is known and matches the ECDSA host key.
      debug1: Found key in /home/jashwant/.ssh/known_hosts:4
      debug1: ssh_ecdsa_verify: signature correct
      debug2: kex_derive_keys
      debug2: set_newkeys: mode 1
      debug1: SSH2_MSG_NEWKEYS sent
      debug1: expecting SSH2_MSG_NEWKEYS
      debug2: set_newkeys: mode 0
      debug1: SSH2_MSG_NEWKEYS received
      debug1: Roaming not allowed by server
      debug1: SSH2_MSG_SERVICE_REQUEST sent
      debug2: service_accept: ssh-userauth
      debug1: SSH2_MSG_SERVICE_ACCEPT received
      debug2: key: jashwant@jashwant-linux (0x7f827cbe4f00)
      debug2: key: /home/jashwant/Downloads/mykey.pem ((nil))
      debug1: Authentications that can continue: publickey
      debug3: start over, passed a different list publickey
      debug3: preferred gssapi-keyex,gssapi-with-mic,publickey,keyboard-interactive,password
      debug3: authmethod_lookup publickey
      debug3: remaining preferred: keyboard-interactive,password
      debug3: authmethod_is_enabled publickey
      debug1: Next authentication method: publickey
      debug1: Offering RSA public key: jashwant@jashwant-linux
      debug3: send_pubkey_test
      debug2: we sent a publickey packet, wait for reply
      debug1: Authentications that can continue: publickey
      debug1: Trying private key: /home/jashwant/Downloads/mykey.pem
      debug1: read PEM private key done: type RSA
      debug3: sign_and_send_pubkey: RSA 9b:7d:9f:2e:7a:ef:51:a2:4e:fb:0c:c0:e8:d4:66:12
      debug2: we sent a publickey packet, wait for reply
      debug1: Authentications that can continue: publickey
      debug2: we did not send a packet, disable method
      debug1: No more authentication methods to try.
      Permission denied (publickey).

    I've already googled everything and checked:
    - The public DNS is the same (it hasn't changed)
    - The username is ubuntu, as it's an Ubuntu AMI (used the same earlier)
    - Permission is 400 on the mykey.pem file
    - The ssh port is enabled via security groups (used the same earlier)
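
    The debug output shows the client successfully reading and offering the key ("read PEM private key done: type RSA") and the server rejecting it, which usually points at the server side rather than the key file. As a hedged first check (assuming the key pair itself is intact), you can re-derive the public key locally and compare it with what is installed on the instance, for example via a second authorized key or by attaching the volume to another instance:

      ssh-keygen -y -f ~/Downloads/mykey.pem
      # compare the printed line with /home/ubuntu/.ssh/authorized_keys on the instance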

    Read the article

  • Mongodb: why is my mongo server using two PIDs?

    - by Lucas
    I started my mongo with the following command:

      [lucas@ecoinstance]~/node/nodetest2$ sudo mongod --dbpath /home/lucas/node/nodetest2/data
      2014-06-07T08:46:30.507+0000 [initandlisten] MongoDB starting : pid=6409 port=27017 dbpath=/home/lucas/node/nodetest2/data 64-bit host=ecoinstance
      2014-06-07T08:46:30.508+0000 [initandlisten] db version v2.6.1
      2014-06-07T08:46:30.508+0000 [initandlisten] git version: 4b95b086d2374bdcfcdf2249272fb552c9c726e8
      2014-06-07T08:46:30.508+0000 [initandlisten] build info: Linux build14.nj1.10gen.cc 2.6.32-431.3.1.el6.x86_64 #1 SMP Fri Jan 3 21:39:27 UTC 2014 x86_64 BOOST_LIB_VERSION=1_49
      2014-06-07T08:46:30.509+0000 [initandlisten] allocator: tcmalloc
      2014-06-07T08:46:30.509+0000 [initandlisten] options: { storage: { dbPath: "/home/lucas/node/nodetest2/data" } }
      2014-06-07T08:46:30.520+0000 [initandlisten] journal dir=/home/lucas/node/nodetest2/data/journal
      2014-06-07T08:46:30.520+0000 [initandlisten] recover : no journal files present, no recovery needed
      2014-06-07T08:46:30.527+0000 [initandlisten] waiting for connections on port 27017

    It appears to be working, as I can execute mongo and access the server. However, here are the processes running mongo:

      [lucas@ecoinstance]~/node/testSite$ ps aux | grep mongo
      root  6540 0.0 0.2  33424  1664 pts/3 S+  08:52 0:00 sudo mongod --dbpath /home/lucas/node/nodetest2/data
      root  6541 0.6 8.6 522140 52512 pts/3 Sl+ 08:52 0:00 mongod --dbpath /home/lucas/node/nodetest2/data
      lucas 6554 0.0 0.1   7836   876 pts/4 S+  08:52 0:00 grep mongo

    As you can see, there are two PIDs for mongo. Before I ran sudo mongod --dbpath /home/lucas/node/nodetest2/data, there were none (besides the grep, of course). How did my command spawn two PIDs, and should I be concerned? Any suggestions or tips would be great.

    Additional info: I may have other issues that might suggest a cause. I tried running mongo with --fork --logpath /home/lucas..., but it did not work. More information below:

      [lucas@ecoinstance]~/node/nodetest2$ sudo mongod --dbpath /home/lucas/node/nodetest2/data --fork --logpath /home/lucas/node/nodetest2/data/
      about to fork child process, waiting until server is ready for connections.
      forked process: 6578
      ERROR: child process failed, exited with error number 1
      [lucas@ecoinstance]~/node/nodetest2$ ls -l data/
      total 163852
      drwxr-xr-x 2 mongodb nogroup     4096 Jun 7 08:54 journal
      -rw------- 1 mongodb nogroup 67108864 Jun 7 08:52 local.0
      -rw------- 1 mongodb nogroup 16777216 Jun 7 08:52 local.ns
      -rwxr-xr-x 1 mongodb nogroup        0 Jun 7 08:54 mongod.lock
      -rw------- 1 mongodb nogroup 67108864 Jun 7 02:08 nodetest1.0
      -rw------- 1 mongodb nogroup 16777216 Jun 7 02:08 nodetest1.ns

    Also, my db path folder is not in its original location. It was originally created under the default /var/lib/mongodb/ and moved to my local data folder. This was done after shutting down the server via /etc/init.d/mongod stop. I have a Debian Wheezy server, if it matters.
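
    For what it's worth, a pair of processes like this is normal under sudo: the first PID is the sudo wrapper and the second is the mongod it spawned. A quick way to confirm the parent/child relationship, plus a fork invocation whose --logpath points at a file rather than a directory (the trailing slash in the attempt above is the likely reason the child exited), assuming the same paths:

      ps -o pid,ppid,user,cmd -C mongod   # mongod's PPID should equal sudo's PID
      sudo mongod --dbpath /home/lucas/node/nodetest2/data \
          --fork --logpath /home/lucas/node/nodetest2/data/mongod.log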

    Read the article

  • Creating a java package on ubuntu?

    - by Gaurav_Java
    I am new to Java. Here I am trying to create a Java package and then compile it from another directory, but I get an error like:

      bash: /home/gaurav/Desktop/package2/B.java: Permission denied

    Here is my first file; its path is /home/gaurav/Desktop/package/A.java:

      package package1;
      public class A {
          interface A1 {
              void show();
              void display();
          }
      }
      class B extends A {
          public void show() { System.out.println("This is show method()"); }
          public void display() { System.out.println("this is Display metthod()"); }
      }

    For compilation I ran this command (pwd is /home/gaurav) and it works fine:

      javac /home/gaurav/Desktop/package/A.java

    Then I try to compile B.java, which is on my other drive at /media/gaurav/iPlay/package/B.java:

      package package2;
      class B {
          public static void main(String args[]) {
              System.out.println("Reached in Main method of B");
              package1.A Object = new A();
          }
      }

    I tried this command (from the previous working directory):

      javac -cp /home/gaurav/Desktop/;/media/gaurav/iPlay/package/B.java

    and these errors come back:

      javac: no source files
      Usage: javac <options> <source files>
      use -help for a list of possible options
      bash: /media/gaurav/iPlay/package/B.java: Permission denied

    What am I doing wrong? Please help; it is my assignment and I am not able to move further without this. I changed permissions.
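
    A plausible reading of the failure: the unquoted `;` ends the javac command (hence "no source files"), and bash then tries to execute B.java itself (hence "Permission denied"). On Linux the classpath separator is `:`, not `;`, and the classpath entry must be the directory that contains the package1 folder holding A.class. A hedged sketch, assuming A.java is compiled with -d so A.class lands under /home/gaurav/Desktop/package1/:

      # compile A.java with -d so A.class is placed in package1/ under Desktop
      javac -d /home/gaurav/Desktop /home/gaurav/Desktop/package/A.java
      # ':' (not ';') separates classpath entries on Linux; quote to be safe
      javac -cp "/home/gaurav/Desktop" /media/gaurav/iPlay/package/B.java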

    Read the article

  • can anyone help me prepare the Eclipse IDE for Android development on Ubuntu 12.04?

    - by csbl
    I'm new to Linux, and in this particular case to Ubuntu. I have a small Android project I have to finish by this Friday, and I'm still stuck installing and preparing the development environment. The only thing I have done is install the Eclipse IDE. I'm still missing the SDK, Java and anything else that might be needed. Can someone help me through this? It's only because I'm running out of time to develop, or else I would embark on a deeper investigation of this OS. I tried the step to install Android platforms through Eclipse > Help > Install New Software, and I got the following error messages at the end of the process:

      [2012-06-06 17:35:56 - adb] /home/catia/android-sdks/platform-tools/adb: error while loading shared libraries: libncurses.so.5: cannot open shared object file: No such file or directory
      [2012-06-06 17:35:56 - adb] 'adb version' failed! /home/catia/android-sdks/platform-tools/adb: error while loading shared libraries: libncurses.so.5: cannot open shared object file: No such file or directory
      [2012-06-06 17:35:56 - adb] Failed to parse the output of 'adb version': Standard Output was: Error Output was: /home/catia/android-sdks/platform-tools/adb: error while loading shared libraries: libncurses.so.5: cannot open shared object file: No such file or directory
      (the same three messages then repeat a second time)

    Can anyone help, please?
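
    The error itself is about a missing 32-bit library: adb ships as a 32-bit binary, so on a 64-bit Ubuntu 12.04 install it needs the i386 ncurses library. A hedged sketch (package names assume multiarch, which 12.04 supports):

      sudo apt-get install libncurses5:i386 libstdc++6:i386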

    Read the article

  • Oracle Enterprise Data Quality Adds Global Address Verification Capabilities for Greater Accuracy and Broader Location Coverage

    - by Mala Narasimharajan
    Data quality has many flavors. Product, customer: name the data domain and there is data quality associated with it. Address verification is a little different, in that there is a tremendous amount of variation as well as nuance attached to it. Specifically, what makes address verification challenging is that, more often than not, addresses are incomplete, riddled with misspellings, assigned incorrect postal codes, or cluttered with non-address items. Almost all data has locations, and accurate locations power a wealth of business processes: customer relationship management, data quality, delivery of materials, goods or services, fraud detection, insurance risk assessment, data analytics, store and territory planning, and much more. Oracle Address Verification Server provides location-based services as well as deeper parsing and analysis capabilities for Oracle Enterprise Data Quality. Pre-integrated with the EDQ platform, Oracle Address Verification Server provides robust parsing and validation, as well as specialized location information, for over 240 countries (all populated countries on Earth). Oracle Enterprise Data Quality (EDQ) is a data quality platform dedicated to addressing the distinct challenges of customer and product data quality. It performs advanced data profiling to identify and measure poor-quality data and identify rule requirements, as well as semantic and pattern-based recognition to accurately parse and standardize data that is poorly structured. EDQ is integrated with Oracle Master Data Management, including Oracle Customer Hub and Oracle Product Hub, as well as Oracle Data Integrator Enterprise Edition and Oracle CRM. Address Verification Server provides key address verification services for Oracle CRM and Oracle Customer Hub. In addition, Address Verification Server provides greater accuracy when handling address data thanks to its expanded sources and extensible knowledge repository, solid parsing across locales and countries, and adept handling of extraneous data in address fields. For more information on Oracle Address Verification Server, visit http://bit.ly/GMUE4H and http://bit.ly/GWf7U6

    Read the article

  • NIC light is turned off after boot on Red Hat 4.6 server

    - by hoffmandirt
    I have a 2950 blade server set up with Red Hat 4.6 installed. I cannot get the NIC to work properly after reinstalling Linux. I activated the NIC, but the NIC light will not turn on when I plug the network cable into the hub, and the status light on the hub will not turn on either. If I run ifconfig, the NIC status is UP. Also, I can ping the IP address that I assigned to the Linux machine, but I can't ping anything else that is plugged into the hub. When I reboot the system, the NIC light stays on until the system fully boots, and then it turns off again. Is there something else that I need to do to get the NIC working? It appears to be disabled even though ifconfig says that it is UP. Maybe I need to configure something within the blade server (iDRAC)?
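
    One hedged first step is to ask the driver what it thinks of the physical link, since ifconfig's UP flag only reflects the administrative state, not the carrier. Assuming the interface is eth0:

      ethtool eth0                      # check "Link detected:" and the negotiated speed/duplex
      sudo ethtool -s eth0 autoneg on   # re-enable autonegotiation if it was forced off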

    Read the article

  • sniffing on a switched LAN

    - by shodanex
    Hi, I often find myself needing to sniff the connection between, for example, an ARM board I am developing on and another computer, on or outside the network. The easy situation is when I can install a sniffer on the computer talking to the embedded device. When that is not possible, I currently install an old 10 Mb/s hub. However, I am afraid my hub might stop working, and I would like to know some alternatives. Here are the ones I can think of:
    - Buy another hub. Is that still possible?
    - Use some sort of Ethernet sniffing bridge, like what they do for USB. I am afraid this kind of device is expensive.
    - Use ARP poisoning.
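
    For the ARP poisoning route, a minimal sketch using arpspoof from the dsniff package (the IP addresses are placeholders; enable forwarding first so the two victims keep talking through you):

      echo 1 > /proc/sys/net/ipv4/ip_forward
      arpspoof -i eth0 -t 192.168.1.10 192.168.1.1 &
      tcpdump -i eth0 -w capture.pcap host 192.168.1.10

    A managed switch with a mirror/SPAN port is the non-intrusive equivalent of the old hub, if one is available.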

    Read the article

  • setting up rhel 5.x RPM build server for mortal users

    - by Chen Levy
    My task is to set up a RHEL 5.x build host that can build RPMs for mortal users. On RHEL 6.x, with rpm version 4.8, /usr/lib/rpm/macros contains:

      # Path to top of build area.
      %_topdir %{getenv:HOME}/rpmbuild

    On RHEL 5.x, with rpm version 4.4, %{getenv:HOME} is not available. I know that I can use /home/someuser/.rpmmacros:

      %_topdir /home/someuser/rpmbuild

    and this will work for that user, but I don't want to do this for every user separately. Moreover, since .rpmmacros will not expand ${HOME} or ~, I suspect it is unsafe to use those; this in turn makes /etc/skel unsuitable for this task (or so I suspect). So in short, my question is: how do I set up a RHEL 5.x host that allows all users to build RPM packages in their home directory?
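
    One commonly used workaround on rpm 4.4 is the %(...) shell-expansion macro, which is evaluated at expansion time and therefore picks up each user's own $HOME. A hedged sketch, placed in the site-wide macro file so no per-user .rpmmacros is needed:

      # /etc/rpm/macros
      %_topdir %(echo $HOME)/rpmbuild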

    Read the article

  • SSH tunnel RDP through gateway server outside the network?

    - by Mike
    I need to access a PC via RDP that is behind a firewall. There's no way to connect to it directly that I know of. What I'd like to do is SSH from that remote PC to my home Ubuntu server, then connect to the remote PC using my home PC with the Ubuntu server as a gateway. I've tried SSH from remote PC to Ubuntu server, tunneling remote port 3389 to 127.0.0.1:3389, then SSH from home PC to Ubuntu server, tunneling local port 13389 to remote port 3389. At that point I try to RDP into: 127.0.0.1:13389, 127.0.0.2:13389, :3389 - no dice. I suppose I could simply set up an SSH server on my home PC and SSH from remote PC into home PC and then establish the tunnel that way, but I'd rather not go through the hassle of installing and configuring an ssh server on my home PC. I know LogMeIn would work here, but I don't want to go that route for various reasons. Any ideas? Thanks!
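
    A hedged sketch of the two-hop tunnel, using distinct ports so it is unambiguous which hop is which (the -R listener binds to the Ubuntu server's loopback, which is why the second hop must target localhost on the server):

      # on the remote PC, behind the firewall:
      ssh -N -R 13389:localhost:3389 user@ubuntu-server
      # on the home PC:
      ssh -N -L 13389:localhost:13389 user@ubuntu-server
      # then RDP to 127.0.0.1:13389 from the home PC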

    Read the article

  • httpd, vsftpd and the annoying selinux

    - by Christian
    I have CentOS 6.3 installed with httpd and vsftpd running, but I cannot find permissions that let a user upload over FTP while keeping their website working. What I do:
    - I create a user with their home directory as `/home/username`
    - I create a subfolder called `html` for their website
    - I chown their directory: `chown -R username:apache /home/username`
    - I chmod their directory: `chmod -R 750 /home/username`
    - I chcon their directory: `chcon -R -t httpd_sys_rw_content_t /home/username`

    With that, their website loads fine but they are unable to FTP. If I instead do the following, they can FTP but their website doesn't load:

      chcon -R -t user_home_dir_t /home/username

    If I disable SELinux, the user can FTP and the website loads. So what is the answer, if I want to keep SELinux enabled?
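
    One hedged avenue is the FTP-related SELinux booleans rather than file contexts, since vsftpd is denied by policy when entering home directories; exact boolean names vary by release, so list them first:

      getsebool -a | grep ftp
      setsebool -P ftp_home_dir on            # allow FTP into home directories
      setsebool -P allow_ftpd_full_access on  # broader fallback, if still blocked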

    Read the article

  • Linux - Create ftp account with read/write access to only 1 folder

    - by Gublooo
    Hey guys.... I have never worked on Linux and don't plan on working on it either; the only command I probably know is "ls" :) I am hosting my website on Eapps and use their cPanel to set everything up, so I have never worked with Linux directly. Now I have this one-time case where I need to give a contractor access to fix the CSS issues on my website. He basically needs FTP (read/write) access to certain folders. At a high level, this is my code structure:

      /home/webadmin/example.com/html/images
                                     /css
                                     /js
                                     /login.php
                                     /facebook.php
      /home/webadmin/example.com/application/library
                                            /views
                                            /models
                                            /controllers
                                            /config
                                            /bootstrap.php
      /home/webadmin/example.com/cgi-bin

    I want the new user to be able to access only these folders:

      /home/webadmin/example.com/html/js
      /home/webadmin/example.com/html/css
      /home/webadmin/example.com/application/views

    He should not be able to view even the contents of other folders, including files like bootstrap.php or login.php. If any sysadmins can help me set this account up, I will really appreciate it. Thanks
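
    One hedged way to do this without touching the main tree is a dedicated login whose home contains only bind-mounted views of the three folders, with the FTP daemon chrooting users into their home (the user name and vsftpd config path are assumptions):

      useradd -m -d /home/contractor contractor
      mkdir -p /home/contractor/{css,js,views}
      mount --bind /home/webadmin/example.com/html/css /home/contractor/css
      mount --bind /home/webadmin/example.com/html/js /home/contractor/js
      mount --bind /home/webadmin/example.com/application/views /home/contractor/views
      # in /etc/vsftpd/vsftpd.conf: chroot_local_user=YES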

    Read the article

  • Getting "open_basedir restriction in effect" in spite of adding the correct entry.

    - by akshatc
    I am trying to create a shared hosting scenario, using the open_basedir option of php. I am doing this by adding the following to apache2.conf:

      <VirtualHost *:80>
          ServerName lt1.example.net
          DocumentRoot /home/akshat/example/tmpblogs/tb1/
          php_admin_value open_basedir /home/akshat/example/tmpblogs/tb1/
      </VirtualHost>

      <VirtualHost *:80>
          ServerName lt2.example.net
          DocumentRoot /home/akshat/example/tmpblogs/tb2/
          php_admin_value open_basedir /home/akshat/example/tmpblogs/tb2/
      </VirtualHost>

    Now when I access lt2.example.net, I get the error:

      Warning: Unknown: open_basedir restriction in effect. File(/home/akshat/example/tmpblogs/tb2/index.php) is not within the allowed path(s): (0) in Unknown on line 0
      Warning: Unknown: failed to open stream: Operation not permitted in Unknown on line 0
      Fatal error: Unknown: Failed opening required '/home/akshat/example/tmpblogs/tb2/index.php' (include_path='.:/usr/share/php:/usr/share/pear') in Unknown on line 0

    I was getting the same error while accessing "lt1.example.net" too, but then it suddenly became alright. What am I doing wrong here?
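
    For comparison, a hedged variant of the second vhost: open_basedir takes a colon-separated list, and PHP typically also needs its include_path directories and a temp directory to fall inside it. Quoting the value and restarting Apache after each change is worth trying:

      <VirtualHost *:80>
          ServerName lt2.example.net
          DocumentRoot /home/akshat/example/tmpblogs/tb2/
          php_admin_value open_basedir "/home/akshat/example/tmpblogs/tb2/:/usr/share/php/:/usr/share/pear/:/tmp/"
      </VirtualHost>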

    Read the article

  • How can I share the TV card on my home network, with a solution like `ln -s`?

    - by Boris
    My environment:
    - 2 PCs, a desktop and a laptop, both on Oneiric
    - they are connected together by an Ethernet wire
    - nfs-common is installed and configured: the desktop is the server
    - a TV tuner card is installed in the desktop; I can watch French TNT TV with the software Me-TV

    It works fine: TV on the desktop, and my network too; I share folders thanks to NFS. But I would like more: how can I share my TV tuner card from the desktop and be able to watch TV on the laptop too? If possible I would like a solution that allows me to keep using the software Me-TV on both PCs. I bet there is a solution to create a fake TV card on the 2nd PC, something like `ln -s` on /dev/dvb/adapter0/.
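
    Short of a fake /dev/dvb device, the usual approach is to stream the tuner's output over the network instead. A rough sketch using tzap from dvb-apps plus VLC (the channel name, port and exact sout syntax are assumptions that vary by version), though Me-TV itself would then not be usable on the laptop:

      # on the desktop: tune (-r routes the stream to dvr0), then serve over HTTP
      tzap -r "France 2" &
      cat /dev/dvb/adapter0/dvr0 | cvlc - --sout '#standard{access=http,mux=ts,dst=:8554}'
      # on the laptop:
      vlc http://desktop:8554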

    Read the article

  • Figure out what the non-symlink path would be?

    - by David Mackintosh
    On Linux, if I've cd'd around and am now in a directory, is there a way to figure out what the real path to that directory is if I had not used a symbolic link to get there? Consider:

      $ pwd
      /home/dave/tmp
      $ mkdir -p 1/2/3/4/5
      $ ln -s 1/2/3/4/5 5
      $ cd 5
      $ pwd
      /home/dave/tmp/5

    Or:

      $ pwd
      /home/dave/tmp
      $ mkdir -p 1/2/3/4/5
      $ ln -s 1/2/3/4 4
      $ cd 4/5
      $ pwd
      /home/dave/tmp/4/5

    Is there any way to figure out that /home/dave/tmp/5 is really /home/dave/tmp/1/2/3/4/5?
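
    The shell built-in pwd (with -P) and GNU readlink can both resolve this directly:

      $ cd ~/tmp/5
      $ pwd -P          # physical path, symlinks resolved
      /home/dave/tmp/1/2/3/4/5
      $ readlink -f .   # same answer via coreutils
      /home/dave/tmp/1/2/3/4/5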

    Read the article

  • Dualboot harddisk encryption

    - by amfcosta
    I have a system with both Ubuntu 11.10 and Windows 7, and I want to encrypt the whole hard disk, or at least some of my partitions. My partition table is something like this (the ones marked with * are the ones that need to be encrypted):

      Windows boot reserved partition
      *Windows system partition (ntfs)
      *Windows data partition (ntfs)
      Ubuntu root partition (ext4)
      *Ubuntu home partition (ext4)
      Ubuntu swap

    As I said, I don't need to encrypt the whole disk. What is the best way to accomplish this? Maybe something (TrueCrypt?) where I enter the password before the system boots, so that it decrypts the whole HDD? Or maybe individual encryption, using Windows-only encryption for the Windows partitions and Ubuntu home encryption for the Ubuntu home partition? By the way, I almost always use Ubuntu, so it would be nice if I could continue to boot Ubuntu by default but have an option to boot Windows too (like in GRUB).

    EDIT: I was thinking of doing this: encrypting the Ubuntu home with eCryptfs (I think this is what is used to encrypt home when selected during installation) and encrypting the Windows partitions with TrueCrypt, while still having GRUB as the bootloader. When I choose Ubuntu, everything goes as normal (home is decrypted at login); when I choose Windows, the TrueCrypt password prompt shows and Windows boots.
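
    For the eCryptfs part, an existing unencrypted home can be migrated in place with the ecryptfs-utils tooling; a hedged sketch (run from another account while the user is logged out, and keep a backup first):

      sudo apt-get install ecryptfs-utils
      sudo ecryptfs-migrate-home -u <username>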

    Read the article

  • How do I unmount a tmpfs that is missing from /etc/mtab?

    - by vrinek
    I have the following line in /etc/fstab:

      none /home/hydra/tmp tmpfs user,noauto,size=1000M,uid=1001,gid=1001 0 0

    I can do `mount ~/tmp` as user hydra and it gets mounted OK. The only problem is that even though it gets added to /proc/mounts, it does not get added to /etc/mtab. When I try `umount ~/tmp` (again as hydra) it complains:

      umount: /home/hydra/tmp is not mounted (according to mtab)

    And when I try -f or -n, it complains that I am not root. Some more info on the system that manifests this problem:
    - On `sudo umount /home/hydra/tmp`, the fs gets unmounted (I think I needed to use -f too)
    - The Debian version is testing
    - `mount --version` gives: mount from util-linux 2.19.1 (with libblkid and selinux support)
    - `ls -l /etc/mtab` gives: -rw-r--r-- 1 root root 921 Nov 14 09:08 /etc/mtab
    - `cat /proc/mounts | grep rootfs` gives: rootfs / rootfs rw 0 0
    - Neither /home, /home/hydra nor /home/hydra/tmp is a symbolic link
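
    Root can always unmount via the kernel's own table, and a stale mtab can be resynced from /proc/mounts; a hedged sketch (the resync assumes /etc/mtab really is the regular writable file shown above):

      sudo umount /home/hydra/tmp
      grep -v '^rootfs ' /proc/mounts | sudo tee /etc/mtab >/dev/null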

    Read the article

  • USB device is recognized but has no address

    - by SeanMG
    Good day folks, I'm trying to use a USRP1 with GNU Radio, if anyone knows what any of that is. I am running Ubuntu on a Windows 7 machine via VMware Player. When I connect the USRP1 via USB 2.0 to Windows 7, it is recognized as "Ettus Research LLC USRP1". When I connect the device to Ubuntu through VMware, it shows up as "usb device fffe:0002" in my removable devices. When I run lsusb I get the following:

      Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
      Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
      Bus 002 Device 002: ID 0e0f:0003 VMware, Inc. Virtual Mouse
      Bus 002 Device 003: ID 0e0f:0002 VMware, Inc. Virtual USB Hub
      Bus 001 Device 004: ID fffe:0002

    When I run uhd_find_devices, a program that comes with the USRP driver, I get:

      --------------------------------------------------
      -- UHD Device 0
      --------------------------------------------------
      Device Address:
          type: usrp1
          name:
          serial: 00000000

    So the program does recognize that the device is connected. However, the device has no address, no name, and a null serial. I need the device address so I can run more programs in GNU Radio. Does anyone know what the problem is here? Thanks!
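
    fffe:0002 with a zero serial is how an unprogrammed USRP1 tends to enumerate; UHD needs write access to the device to load its firmware before a name and serial appear. A hedged sketch of a udev rule granting that access (the file name and mode are assumptions):

      # /etc/udev/rules.d/10-usrp.rules
      ACTION=="add", SUBSYSTEMS=="usb", ATTRS{idVendor}=="fffe", ATTRS{idProduct}=="0002", MODE:="0666"
      # then: sudo udevadm control --reload-rules   and replug the device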

    Read the article

  • Can't run a command with sudo; even with the full path, I get an error

    - by Keating Wang
    The command starling is /home/keating/.rvm/gems/ruby-1.9.2-p290/bin/starling.
    - When I run `starling`, I get the error: Permission denied
    - When I run `rvmsudo starling`, it works well
    - When I run `sudo starling`, I get the error: sudo: starling: command not found
    - When I run `sudo /home/keating/.rvm/gems/ruby-1.9.2-p290/bin/starling`, I get the error:

      /home/keating/.rvm/rubies/ruby-1.9.2-p290/lib/ruby/site_ruby/1.9.1/rubygems/dependency.rb:247:in `to_specs': Could not find starling (>= 0) amongst [minitest-1.6.0, rake-0.8.7, rdoc-2.5.8] (Gem::LoadError)
          from /home/keating/.rvm/rubies/ruby-1.9.2-p290/lib/ruby/site_ruby/1.9.1/rubygems/dependency.rb:256:in `to_spec'
          from /home/keating/.rvm/rubies/ruby-1.9.2-p290/lib/ruby/site_ruby/1.9.1/rubygems.rb:1229:in `gem'
          from /home/keating/.rvm/gems/ruby-1.9.2-p290/bin/starling:18

    I really want to run the command with sudo, because the error above is the same one I get when running `rvmsudo service starling start` (I had set starling up as a service of the OS).
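
    Both failures come from sudo resetting the environment: secure_path drops the rvm bin directory (hence "command not found"), and GEM_HOME/GEM_PATH are cleared, so rubygems looks in the system gem set (hence the Gem::LoadError). Passing those through is roughly what rvmsudo does; a hedged sketch:

      sudo env PATH="$PATH" GEM_HOME="$GEM_HOME" GEM_PATH="$GEM_PATH" \
          /home/keating/.rvm/gems/ruby-1.9.2-p290/bin/starling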

    Read the article

  • PHP include_path doesn't work

    - by 50ndr33
    I have the documents at http://www.example.com/ in /home/www/example.com/www, running on Debian Squeeze:

      /home/www/example.com/
          www/
              index.php
          php/
              include_me.php

    In php.ini I've uncommented and changed the include path to:

      include_path = ".:/home/www/example.com"

    In the script index.php in www, I have require_once("/php/include_me.php"). The output I am getting from PHP is:

      Warning: require_once(/php/include_me.php) [function.require-once]: failed to open stream: No such file or directory in /home/www/example.com/www/index.php on line 2
      Fatal error: require_once() [function.require]: Failed opening required '/php/include_me.php' (include_path='.:/home/www/example.com') in /home/www/example.com/www/index.php on line 2

    As you can see, the include_path is set correctly according to the error. But if I do require_once("../php/include_me.php"); it works. Therefore, something has to be wrong with the include path. Does anyone know what I can do to fix it?
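
    A leading "/" makes the argument an absolute filesystem path, which bypasses include_path entirely; a relative path is what gets resolved against each include_path entry. A minimal sketch:

      <?php
      // resolved against '.:/home/www/example.com', so this finds
      // /home/www/example.com/php/include_me.php:
      require_once 'php/include_me.php';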

    Read the article

  • RewriteRule applying pattern even though 1 of the RewriteCond's failed

    - by BHare
      #www. domain . tld
      RewriteCond %{HTTP_HOST} (?:.*\.)?([^.]+)\.(?:[^.]+)$
      RewriteCond /home/%1/ -d
      RewriteRule ^(.+) %{HTTP_HOST}$1
      RewriteRule (?:.*\.)?([^.]+)\.(?:[^.]+)/media/(.*)$ /home/$1/client/media/$2 [L]
      RewriteRule (?:.*\.)?([^.]+)\.(?:[^.]+)/(.*)$ /home/$1/www/$2 [L]

    Here is the rewritelog output:

      #(4) RewriteCond: input='tfnoo.mydomain.org' pattern='(?:.*\.)?([^.]+)\.(?:[^.]+)$' [NC] => matched
      #(4) RewriteCond: input='/home/mydomain/' pattern='-d' => not-matched
      #(3) applying pattern '(?:.*\.)?([^.]+)\.(?:[^.]+)/media/(.*)$' to uri 'http://www.mydomain.org/files/images/logo.png'
      #(3) applying pattern '(?:.*\.)?([^.]+)\.(?:[^.]+)/(.*)$' to uri 'http://www.mydomain.org/files/images/logo.png'
      #(2) rewrite 'http://www.mydomain.org/files/images/logo.png' -> '/home/mydomain/www/logo.png'

    Note the second (4) line: the -d (if directory exists) test failed, which is correct, as mydomain does not have a /home/ entry. Therefore it should never rewrite, at least according to my understanding that all RewriteRules are subject to the RewriteConds as logical ANDs.
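
    RewriteCond lines bind only to the single RewriteRule that immediately follows them, so the /media/ rule and the catch-all rule above run unconditionally. A hedged sketch that re-tests the conditions for each rule instead of relying on the host-prepending trick:

      RewriteCond %{HTTP_HOST} (?:.*\.)?([^.]+)\.(?:[^.]+)$
      RewriteCond /home/%1/ -d
      RewriteRule ^/?media/(.*)$ /home/%1/client/media/$1 [L]

      RewriteCond %{HTTP_HOST} (?:.*\.)?([^.]+)\.(?:[^.]+)$
      RewriteCond /home/%1/ -d
      RewriteRule ^/?(.*)$ /home/%1/www/$1 [L]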

    Read the article

  • BizTalk 2009 - How do I do t"HAT"?

    - by StuartBrierley
    In my previous life working with BizTalk Server 2004, I came to view HAT (the Health and Activity Tracking tool) as one of my first ports of call in the case of problems with any of our BizTalk solutions. When you move to BizTalk Server 2009, it is quickly apparent that HAT is no longer with us. HAT was useful in BizTalk 2004 mainly because it provided developers and administrators with a number of useful queries and views of what was going on inside BizTalk at runtime: when and what type of messages were received and sent, what messages had been suspended, what orchestrations were running or suspended; you could even follow the process flow of a message or orchestration to see what was going on. With BizTalk Server 2009, much of the functionality of HAT can now be found in the BizTalk Administration console. Select a BizTalk Group and you will be shown the Group Hub Overview page. This provides a number of default queries that replicate some of those found in the old HAT. You can also use the Group Hub page to create new queries. These can then be saved and loaded in other Group Hub instances - useful for creating queries in development for later use in Test, Pseudo-Live and Live environments. In the next few posts I am going to look at some of the common queries that we might miss from HAT and recreate them (or something close) using the new query option:
    - Messages - last 100 received
    - Messages - last 100 sent
    - Messages - last 50 suspended
    - Service instances - last 100

    I have yet to try the updated Admin-HAT-Console in anger, and after using old-HAT for so long it may take some getting used to, but so far I would say that moving the HAT functionality into the BizTalk Administration console was probably the correct way to go. Having one tool as the place to look for the combined functionality on offer certainly seems to be the sensible option.

    Read the article

  • Rename Devices in Device Manager

    - by Luke
    This is mostly just for the USB ports, as I recognize everything else in computers.... Anyway, is there a way to rename, or otherwise identify, which USB port (or other hardware, for that matter) is which device in Device Manager? I know I can plug in a flash drive, see which port it is connected to, and find out that way. What I would like, though, is for a certain plug to always be a certain device in Device Manager. If I can then have a system in mind that always has the same order, I can look and see if a USB port is not being detected or not working properly, and as I uninstall/reinstall USB devices, I know I won't lose my keyboard or mouse, for example. The OS in question is currently Windows 7, but I would accept a solution for ANY version of Windows.

      USB Devices
      |
      +--+USB Root Hub Port A
      |  |
      |  ---Keyboard
      |
      +--+USB Root Hub Port B
      |  |
      |  ---Mouse
      |
      +--+USB Root Hub Port C
         |
         ---Empty

    Read the article

  • How do I know if I'm getting the most out of my video card?

    - by b.long
    My computer at home is a bit lacking, so I want to make sure I'm getting the most out of it while I can. Generally speaking, here are the specs:
    - 4 GB memory
    - AMD Athlon(tm) 64 X2 Dual Core Processor 5200+ × 2
    - 64-bit Ubuntu

    The terminal shows me the following:

      me@home:~$ uname -a
      Linux home 3.0.0-17-generic #30-Ubuntu SMP Thu Mar 8 20:45:39 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
      me@home:~$ lspci | grep VGA
      01:00.0 VGA compatible controller: ATI Technologies Inc RV380 [Radeon X600 (PCIE)]
      me@home:~$ sudo lshw -C video
        *-display:0
             description: VGA compatible controller
             product: RV380 [Radeon X600 (PCIE)]
             vendor: ATI Technologies Inc
             physical id: 0
             bus info: pci@0000:01:00.0
             version: 00
             width: 32 bits
             clock: 33MHz
             capabilities: pm pciexpress msi vga_controller bus_master cap_list rom
             configuration: driver=radeon latency=0
             resources: irq:44 memory:e0000000-efffffff ioport:ac00(size=256) memory:fdef0000-fdefffff memory:fdec0000-fdedffff
        *-display:1 UNCLAIMED
             description: Display controller
             product: RV380 [Radeon X600]
             vendor: ATI Technologies Inc
             physical id: 0.1
             bus info: pci@0000:01:00.1
             version: 00
             width: 32 bits
             clock: 33MHz
             capabilities: pm pciexpress bus_master cap_list
             configuration: latency=0
             resources: memory:fdee0000-fdeeffff
      me@home:~$ lspci -nn | grep VGA
      01:00.0 VGA compatible controller [0300]: ATI Technologies Inc RV380 [Radeon X600 (PCIE)] [1002:5b62]

    The Additional Drivers menu in System Settings shows me nothing useful, and my attempt at installing ATI's Catalyst Control Center (the drivers that came with the video card) failed. I believe the latest version of Ubuntu at the time was 9.x. What should I do? Install an old version of Ubuntu 9? Use some alternative driver?

    UPDATE: I might try my hand at a bit from this answer next: "Installing Catalyst Manually (from AMD/ATI's site)". From a terminal, fgl_glxgears returns "fgl_glxgears: command not found". Any thoughts?
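
    To see which driver the X stack is actually using (the lshw output above already reports driver=radeon, the open-source driver, which is typically the only maintained option for an X600-class card on recent releases), glxinfo from mesa-utils is a quick check:

      sudo apt-get install mesa-utils
      glxinfo | grep -iE 'direct rendering|opengl renderer'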

    Read the article
