Search Results


  • Location Services are always disabled in Mac OS X Lion

    - by rplusg
    A simple location services program was working fine on my machine and suddenly stopped working. Upon exploring the problem further, I realized that some process had disabled Location Services in System Preferences » Security & Privacy » Privacy. I checked Enable Location Services, but it got disabled again automatically. After some research I found that it's not just my program; built-in system functions are failing because of this problem too. For example, System Preferences » Date & Time » Time Zone fails to get the current location. Every time I check Enable Location Services, I see the following error in the console logs:

        16/10/12 11:23:15.636 AM [0x0-0x42042].com.apple.systempreferences: ERROR,Time,372059595.636,Function,"CLInternalSetLocationServicesEnabled",CLInternalSetLocationServicesEnabled failed
        16/10/12 11:23:15.638 AM [0x0-0x42042].com.apple.systempreferences: STACK,Time,372059595.636,1 CoreLocation 0x00007fff8f9957be CLInternalSetLocationServicesEnabled + 110

    Notes:

    - WiFi is on
    - I didn't install the iOS Simulator
    - I use Xcode Version 4.5 (4G182)
    - I use Boot Camp and made my MacBook Pro dual boot (Mac OS X Lion and Windows 7)
    - I do only Mac development, not iOS
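
    One hedged first step: locationd is the daemon behind this setting, and launchd should respawn it automatically, so forcing it to restart from Terminal and then retrying the checkbox costs nothing:

        # Restart the location daemon, then try Enable Location Services again.
        sudo killall locationd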


  • Ubuntu 11.10 logs off when clicking shutdown

    - by Rourke
    As the title says: when I click Shutdown from the menu it logs off. When I click Shutdown from the log-in menu it does nothing. I'm using a fresh install of Ubuntu 11.10. I can force a shutdown with the command below, but I don't want to keep typing it whenever I want to shut down my laptop.

        sudo shutdown -h now

    So it's probably processes which aren't closing. I'm a novice Linux user, so I have no idea how to rule out the software causing this. I think it's either Gwibber/Empathy, or perhaps Mozilla Thunderbird, because this has been happening since I started using them. So a few questions:

    1. How do I rule out what software is causing this?
    2. How do I stop it from failing to close on shutdown?
    3. If 1 and 2 don't work, is it possible to add the above command to the shutdown process?

    Edit: Rourke here. Somehow I cannot accept the below comment from mech-e as the solution. Thank you, this was indeed the answer I was looking for!
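
    For question 1, a hedged way to narrow it down (the process names below are guesses based on the suspects mentioned) is to quit the suspects one at a time and retry a normal shutdown; if shutdown works after a given process is killed, that process is the culprit:

        pkill empathy
        pkill gwibber
        pkill thunderbird
        # After a failed attempt, look for clues from the previous shutdown:
        grep -i 'shutdown' /var/log/syslog | tail -n 50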


  • Setting up Cluster Configuration using an existing web server as a Primary Node?

    - by RapidWebs
    Thanks in advance for any help! I need help with the decision-making process for setting up my cluster configuration, consisting of a line of Ubuntu Servers (12.04). We currently have a primary node, which resides in a US datacenter; we are going to use this for all serious bandwidth- and resource-intensive websites, and through a configuration of Virtualmin + Webmin it will be set up as a sort of pseudo-cluster, using Virtualmin's Cluster Modules.

    Anyway, on to the issue: we also have a business line set up locally, with three servers. Here are their specs:

    - Intel P4 2.4 GHz, 1 GB RAM, 110 GB SATA, Ubuntu 12.04
    - AMD 1.3 GHz, 512 MB RAM, 20 GB IDE
    - P3 Xeon 800 MHz (dual physical processors), 1 GB RAM, 3 * 25 GB RAID configuration (one disk in use for the host operating system)

    The first machine is currently IN USE and is serving virtual hosts off a sub-domain. My question is this: how can I integrate the secondary node (which will be the primary node, per se, in this smaller configuration), which is currently in use, into the cluster configuration with the other two servers for:

    - sharing resources
    - redundancy (HA?)
    - NFS with the two RAID disks

    without having to FORMAT the secondary node, start fresh moving all my services into a DRBD network drive or something similar, and then restore all active Virtualmin virtual hosts?

    The idea is that I want minimal downtime for people currently being served from server2.mywebsite.com. From what I understand, all services need to be on an NFS share so that they can be mounted on demand and accessed from the other machine taking over (i.e. a Heartbeat + DRBD configuration), but my issue is that I already have all these services installed in their default directory structure. How can I most easily set up this NFS and HA system, move all my desired services to this new drive, and do it with minimal downtime, without breaking Virtualmin and everything else on my server?

    Even just some pointers, a thread I could read, or a step-by-step checklist or rundown of commands I could issue to get started would be great! Thanks!
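
    As a starting point for the NFS piece only, a minimal sketch (the export path and network range are placeholders, not the actual layout):

        # On the node holding the shared storage: export a directory.
        echo '/srv/cluster 192.168.1.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
        sudo exportfs -ra
        # On the node taking over: mount it on demand.
        sudo mount -t nfs server2.mywebsite.com:/srv/cluster /srv/cluster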


  • Coldfusion server VERY slow page loads

    - by Kevin
    I inherited a Windows Server 2003 ColdFusion 7 server a few weeks ago. Today a network cable was accidentally unplugged from the server. On plugging it back in, pages were not loading at all; rather, we were receiving a generic ColdFusion error page. After restarting IIS several times, and ColdFusion even more than that, we finally got pages to start loading. However, the loading is extremely slow (30+ seconds) on pages that used to load instantly. Loading through the local network (i.e. localhost/cfide/administrator) does nothing to help the load speed. I am not familiar with IIS or ColdFusion (we're in the process of migrating this to Linux/PHP), so this is all new territory to me. I'm hoping someone may have experienced this issue in the past and can help me solve it. I'm happy to provide any additional information that might be necessary; I'm just not sure what information you might need in order to help. Thanks for your time.


  • Initiate a PHP class where class name is a variable

    - by ed209
    I need some help with an error I have not encountered before and can't seem to find anywhere. In a PHP MVC framework (just from a tutorial) I have the following:

        // Initiate the class
        $className = 'Controller_' . ucfirst($controller);
        if (class_exists($className)) {
            $controller = new $className($this->registry);
        }

    $className is showing the correct class name (the case is also correct). But when I run it I get this in the Apache error log (no PHP error):

        [Wed Mar 31 10:34:12 2010] [notice] child pid 987 exit signal Segmentation fault (11)

    The process id is different on every call. I am running PHP 5.3.0 on OS X 10.6. This site seems to work on 5.2.11 on another Mac. Not really sure where to go next to debug it. I guess it could be an Apache setting as much as a PHP bug or a problem with the code... any suggestions on where to look next?
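
    Since this is a segfault rather than a PHP error, one hedged way to get more information is a backtrace from the crashed Apache child (assuming gdb is available and a core dump can be produced; the core path below is the OS X convention and the pid is just the one from the log):

        # Allow core dumps, reproduce the crash, then inspect the core.
        ulimit -c unlimited
        sudo gdb /usr/sbin/httpd /cores/core.987
        # Inside gdb, 'bt' prints the backtrace of the crashed child.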


  • Openfire on Mac OS X: can't log in after setup

    - by Tom
    Hey all, I'm trying to set up Openfire (http://www.igniterealtime.org/projects/openfire/) on Mac OS X. The install goes well, and I can start the server and enter the admin console via its System Preferences pane. I run the setup, including specifying the password for the admin user. However, when I try to log into the admin console, I get the message "Login failed: make sure your username and password are correct and that you're an admin or moderator." What gives? I've tried to RTFM, but the documentation seems to be really sketchy. Nowhere is the setup process mentioned in the install docs.
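
    One hedged thing to try (the config path is an assumption for the Mac installer; back the file up first): Openfire records a finished setup with a <setup>true</setup> element in openfire.xml, and removing that line lets you re-run the setup wizard, including re-entering the admin password.

        # Hypothetical location for the Mac OS X installer; adjust as needed.
        sudo nano /usr/local/openfire/conf/openfire.xml
        # Delete the <setup>true</setup> line, save, then restart Openfire
        # from its System Preferences pane and go through setup again.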


  • compile error in Ubuntu 10

    - by yozloy
    Hey guys, I got a VPS which runs SolusVM. I'm now trying to install Ruby 1.9.2 on it. I follow this guide: after I run these commands

        apt-get update
        apt-get -y install build-essential zlib1g zlib1g-dev libxml2 libxml2-dev libxslt-dev

    I got the error below:

        root@makserver:/usr/local/src/ruby-1.9.2-p0# apt-get -f install
        Reading package lists... Done
        Building dependency tree... Done
        Correcting dependencies... Done
        The following extra packages will be installed:
          libc6
        Suggested packages:
          glibc-doc
        The following packages will be upgraded:
          libc6
        1 upgraded, 0 newly installed, 0 to remove and 80 not upgraded.
        Need to get 0B/4252kB of archives.
        After this operation, 4096B disk space will be freed.
        Do you want to continue [Y/n]? y
        debconf: apt-extracttemplates failed: Bad file descriptor
        (Reading database ... 21594 files and directories currently installed.)
        Preparing to replace libc6 2.11.1-0ubuntu7.2 (using .../libc6_2.11.1-0ubuntu7.8_amd64.deb) ...
        open2: fork failed: Cannot allocate memory at /usr/share/perl5/Debconf/ConfModule.pm line 59
        dpkg: error processing /var/cache/apt/archives/libc6_2.11.1-0ubuntu7.8_amd64.deb (--unpack):
         subprocess new pre-installation script returned error exit status 12
        Errors were encountered while processing:
         /var/cache/apt/archives/libc6_2.11.1-0ubuntu7.8_amd64.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Can anybody tell me how I can correct this? Thanks
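
    The key line is "fork failed: Cannot allocate memory", which points at the container running out of memory rather than at Ruby itself. A hedged first check (the beancounters file exists only on OpenVZ-based containers, which many SolusVM VPSes are; if yours uses another virtualization type it won't be there):

        free -m
        # On OpenVZ, a failcnt greater than 0 in this file means a limit was hit:
        cat /proc/user_beancounters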


  • How to install Ubuntu 13.10 on Hybrid Disk alongside Windows 8.1

    - by user205691
    I am having trouble installing Ubuntu 13.10 on an HP Envy 4-1046tx ultrabook. When I bought it, it came with Windows 7 pre-installed, but I upgraded it to 8 and recently to 8.1. Somehow I feel 8.1 is slower, or something went wrong with the upgrade and made my system slow, so I want to try dual-booting Ubuntu 13.10 with Windows 8.1.

    The system recovery drive has the Windows 7 recovery files. The SSD has 4 GB allocated to Windows 8 (I think for hibernation/Rapid Start). 25 GB of the SSD is free, and I want to install Ubuntu on this SSD, pointing it to "/". I will also shrink the Windows partition (the only other partition available apart from the recovery partition and the SSD) to free up 100 GB and allocate this space to "/home" during the Ubuntu installation.

    I tried the above steps while on Windows 8, but was not successful. The Ubuntu installation went fine, but GRUB was not loaded. I tried to deploy Linux via EasyBCD, but after that, selecting Linux in the boot menu would load GRUB on a command prompt and do nothing. During the Ubuntu installation I also deleted the RAID drivers with sudo dmraid -rE, but Ubuntu still didn't recognize my Windows. I think I am missing some steps, so this time I want to do it right, with proper info, before starting the process.

    My requirements:

    - dual boot Ubuntu with Windows 8.1
    - C:\ shrunk, Windows with 300 GB on sda1, 100 GB for /home on sda1, and Ubuntu installed on the 25 GB SSD volume sda2 (this is mSATA, I think)
    - GRUB or EFI set up so that both OSes load properly without breaking anything
    - a SWAP partition can be added if needed on sda1 (4 GB?)

    I have backed up my drive and have a 16 GB USB 3.0 stick with Ubuntu loaded. I hope I have mentioned everything I need and know. All I need now is some guidance on what to do right so that this installation goes as planned :)
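
    If the installation completes but GRUB never appears, one hedged repair path from the live USB is to chroot into the installed system and reinstall GRUB (device names below are placeholders; confirm yours with lsblk first, and note that on an EFI machine the EFI system partition must be mounted too):

        # Replace sdXY/sdXZ with your Ubuntu root and EFI partitions.
        sudo mount /dev/sdXY /mnt
        sudo mount /dev/sdXZ /mnt/boot/efi
        for d in /dev /dev/pts /proc /sys; do sudo mount -B $d /mnt$d; done
        sudo chroot /mnt grub-install /dev/sdX
        sudo chroot /mnt update-grub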


  • Does immutability entirely eliminate the need for locks in multi-processor programming?

    - by GlenPeterson
    Part 1

    Clearly immutability minimizes the need for locks in multi-processor programming, but does it eliminate that need, or are there instances where immutability alone is not enough? It seems to me that you can only defer processing and encapsulate state for so long before most programs have to actually DO something. If a program performs actions on multiple processors, something needs to collect and aggregate the results. All this involves multi-process communication before, after, and possibly during some transformations. The start and end states of the machines are different. Can this always be done with no locks, just by throwing out each object and creating a new one instead of changing the original (a crude view of immutability)? What cases still require locking?

    I'm interested in both the theoretical/academic answer and the practical/real-world answer. I know a lot of functional programmers like to talk about "no side effects", but in the "real world" everything has a side effect. Every processor cycle takes time, electricity, and machine resources away from other processes. So I understand that there may be more than one perspective from which to answer this question.

    If immutability is safe, given certain bounds or assumptions, I want to know exactly where the borders of the "safety zone" are. Some examples of possible boundaries:

    - I/O
    - Exceptions/errors
    - Interfaces with programs written in other languages
    - Interfaces with other machines (physical, virtual, or theoretical)

    Special thanks to @JimmaHoffa for his comment which started this question!

    Part 2

    Multi-processor programming is often used as an optimization technique, to make some code run faster. When is it faster to use locks vs. immutable objects? Given the limits set out in Amdahl's Law, when can you achieve better overall performance (with or without the garbage collector taken into account) with immutable objects vs. locking mutable ones?

    Summary

    I'm combining these two questions into one to try to get at where the bounding box is for immutability as a solution to threading problems.


  • Booting Error while using 12.04 booting from GRUB

    - by Paul Z.
    My name is Paul. I have encountered an issue relating to GRUB booting and the booting process in general. I have been running Ubuntu 12.04 LTS on my machine for quite a while. Before that, I had 10.04, 11.04, 11.10, etc. I have been running Ubuntu in general, but more specifically 12.04, for a long time with little to no problems.

    The problem: earlier today, I was using my machine and then decided to take a little break. I shut down my machine (a laptop, in case anyone was wondering) and left. Later, I came back ready to start it up and continue. I started it up and it took me to the Toshiba screen (like normal), then to the GRUB screen. I guessed that nothing was truly wrong, and chose the first option (something along the lines of: Ubuntu, with Linux 3.2.0-35-generic). I waited for a bit and it still displayed the same purple screen. I restarted it and chose the option like the first but with recovery at the end. Same result. Later, I waited longer and found that my computer came up with a bunch of lines of script. I waited longer still, but nothing new happened.

    What are your suggestions for fixing this problem? I will let my computer run overnight with the recovery setting and will let you know what the result is. Until then, please help. Thank you, your time and effort is greatly appreciated!


  • Agile project management, agile development: early integration

    - by Matías Fidemraizer
    I believe that agile works if everything is agile. In the software development area, in my opinion, if team members' code is integrated early, the code will be more in sync, and this has a lot of pros:

    - Early integration helps team members avoid painful merges.
    - It encourages better coding habits, because everyone makes sure that they don't break co-workers' code each day.
    - Both developers and architects (code reviewers) may detect bad design decisions or just wrong development directions in real time, preventing useless work.

    Actually, I'm talking about getting the latest version of the code base and checking in your own code to source control on a daily basis. When you start your coding day (i.e. you arrive at work), your first action is updating your code base with the latest version from source control. On the other hand, when you're about an hour from leaving work and going home, your last action is checking in your code to source control and making sure that your day's work doesn't break the project's build process. Rather than updating and checking in your code once you have finished an entire task, I believe the best approach is setting small and flexible personal milestones and checking in the code once you finish one of these (a purely illustrative sketch of this rhythm follows below).

    I really believe that this coding approach fits better with the agile project management concept. Do you know of any document, blog post, wiki, or article you could suggest that is in sync with my opinion? And do you find any problems working with this approach? Thank you in advance.
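
    The post doesn't name a version control system; as a purely illustrative sketch of the daily rhythm described above, using git with a hypothetical build script:

        # Start of day: sync with the shared code base.
        git pull --rebase origin main
        # End of day, or end of a small personal milestone: verify the build
        # still passes, then share your work.
        ./build.sh && git push origin main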


  • Debian wheezy keyboard shortcut for both opening and closing a terminal

    - by Peter
    I recently installed tilda and I would like to open and close it with the same keyboard shortcut. I wrote a little something in bash that closes tilda if it is open and opens tilda when there is no such process in ps -ef. It looks like this:

        a=$(ps -ef | fgrep -i tilda | cut -d' ' -f4 | head -1); if [ $a ]; then kill $a; else tilda; fi

    It seems to be working (at least partially) when I run it in a terminal, but when I assign this command to a specific keyboard shortcut (for example Alt+1) it does nothing. Any suggestions? By the way, is it possible to assign this shortcut to the '`' key, like in Quake?
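
    A likely reason the shortcut does nothing is that keyboard shortcuts are not run through a shell, so the pipes and the if statement are never interpreted. A hedged workaround (the script path and name are hypothetical) is to put the logic in an executable file and bind the shortcut to that file instead:

        #!/bin/bash
        # ~/bin/toggle-tilda.sh -- chmod +x this file and bind the shortcut
        # to it. pgrep -x matches the process name exactly, so the script
        # never accidentally matches its own grep.
        pid=$(pgrep -x tilda | head -n 1)
        if [ -n "$pid" ]; then
            kill "$pid"
        else
            tilda &
        fi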


  • Eris Update to 2.1 &ndash; No Problems

    - by Bunch
    I updated my Eris to Android's 2.1 OS last night and everything went pretty well. I had wanted to update the phone mainly for two reasons: to get the Navigation feature of the Google Maps application, and to make YouTube work again. YouTube used to work on the phone and then stopped a few weeks ago. Before I started, I looked around various forums and blogs to see what doom and gloom folks were spreading about the update. Based on what I read and some common sense, I used the following steps:

    - Made sure there was a good charge on the phone.
    - Connected to our WiFi (the download is about 77 megs, not very 3G friendly).
    - Checked that the contacts were synced with the GMail account.
    - As a precaution, copied any music and pictures from the SD card to my laptop.
    - Started the update process and waited for the download and installation to complete.

    It took maybe 30 minutes to do everything. After the update I re-synced the contacts and tested out the phone (it does make calls, after all). I did need to download the text-to-speech application to get Maps Navigation to work, but that was easy enough since the application prompted me to download it. And now the YouTube application works again.

    Technorati Tags: Android


  • Cannot install Ubuntu on an Acer Aspire One 756

    - by Byron807
    I have used Ubuntu before, in virtual machines, but today I decided to make the leap and I bought a netbook to install Ubuntu as a "real" OS alongside Windows. The netbook I bought is an Acer Aspire One 756, with a 64-bit Intel processor, 4 GB RAM, and Windows 8 as the default OS. I have now encountered several obstacles that actually prevent me from installing Ubuntu 12.10. Here are all the things I have tried so far:

    - Used a live CD, in combination with a USB DVD drive. (I should point out that the Aspire One does not have an optical drive.) The computer does not boot into Ubuntu; the drive keeps spinning, but nothing happens, even though I changed the boot order in the BIOS.
    - Used a USB drive created via the tool available on pendrivelinux.com. Again, I've made changes to the BIOS to make sure the computer tries to boot from USB before using the built-in HDD. The results vary in this case: sometimes the computer keeps rebooting like crazy until I remove the USB drive, at which point it boots into Windows 8, as expected. If I use a different USB drive, I get an error message saying that the USB drive has been blocked due to "the current security policy".
    - Tried to install Ubuntu via Wubi. The program appears to install something, but at some point during the installation process I get an unspecified error message and nothing else happens.

    I am not sure if these are known issues; in any case, searching the forum has not yielded any results, so I thought I should simply describe my problem here in the hope that this question has not been answered before. I would greatly appreciate any help with this annoying problem. Of course, if anything is unclear, do not hesitate to ask for further details.


  • The Simplicity of the Oracle Stack

    - by user801960
    For many retailers, technology is something they know they need in order to optimise business operations, but do they really understand it, and how can they select the solutions they need from the many vendors on the market? Retail is a data-heavy industry, with the average retailer managing thousands of SKUs and hundreds of categories through multiple channels. Add to this the exponential growth in data driven by social media and mobile activities, and the process can seem overwhelming. Handling data of this magnitude and analyzing it effectively to gain actionable insight is a huge task, and it needs several IT components to work together harmoniously to make the best use of the data available and enable smarter decisions. With this in mind, Oracle has produced a video to make it easier for businesses to understand its global data IT solutions and how they integrate seamlessly with Oracle's other solutions, enabling organisations to operate as effectively as possible. The video uses an orchestra as an analogy for IT solutions, with clever illustrations to demonstrate the value of the Oracle brand. This video can be viewed at http://medianetwork.oracle.com/video/player/1622148401001. To find out more about how Oracle's products and services can help retailers to deliver better results, visit the Oracle Retail website.


  • Lighttpd with FastCGI won't create /tmp/fcgi.sock on startup?

    - by Marlon
    I'm running lighttpd-1.4.19 on a Debian 5 box and am trying to run web2py with FastCGI. The problem is that lighttpd does not create the socket file /tmp/fcgi.sock. If I create the file myself with

        touch /tmp/fcgi.sock

    lighttpd will start, but will throw this error after running for some time:

        unexpected end-of-file (perhaps the fastcgi process died): pid: 0 socket: unix:/tmp/fcgi.sock

    My config looks like this:

        fastcgi.server = ( "/handler_web2py.fcgi" =>
            ( "handler_web2py" => ( # name for logs
                "check-local" => "disable",
                "socket" => "/tmp/fcgi.sock",
                "idle-timeout" => 20,
                "max-procs" => 1
            ) ),
        )

    Is there any known problem with running lighttpd on Debian 5? Thanks for any help. I have pasted the whole lighttpd config here: http://pastie.org/1660646
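
    Worth noting: lighttpd only creates the socket itself when it spawns the FastCGI backend. With only "socket" set, it expects an already-running web2py FastCGI process listening on /tmp/fcgi.sock. A hedged variant (the fcgihandler.py path is an assumption based on a typical web2py checkout) lets lighttpd spawn the process via "bin-path":

        fastcgi.server = ( "/handler_web2py.fcgi" =>
            ( "handler_web2py" => (
                "check-local" => "disable",
                "socket" => "/tmp/fcgi.sock",
                "bin-path" => "/var/www/web2py/fcgihandler.py",  # hypothetical path
                "max-procs" => 1
            ) ),
        )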


  • What are the mandatory Linux kernel modules to run inside of ESXi

    - by Marcin
    I'm used to rolling my own kernels for servers, as it nicely minimizes the number of exploits (and the resulting patches) to take care of. In a traditional (bare-metal) world, the whole process is about knowing what you have (hardware) and what you need (Ethernet, IPv4, iptables, etc.). In a virtualized environment, some things stay the same (you still need Ethernet and IPv4), some things go away (power management), and then there are some new needs (vmxnet3, or vmware-tools, even though that's compiled outside of the kernel). So my question mostly concerns itself with the last two categories: what can I remove completely, and what new stuff do I want? For example, what I/O scheduler do I want, if all my disk operations are going through another filesystem/scheduler/cache to get to the virtual disk? Do I need hyper-threading enabled, or is the VM going to present the threads to me as CPUs anyway? Do I need Large Receive Offload turned on, or is that something that the hypervisor's network drivers are going to do for me?
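
    As a hedged starting point, these are the mainline kernel options usually wanted in an ESXi guest (verify the names against your kernel version's menuconfig before relying on them):

        # Paravirtual devices for ESXi guests (mainline option names):
        # CONFIG_VMXNET3=y        # paravirtual NIC
        # CONFIG_VMWARE_PVSCSI=y  # paravirtual SCSI controller
        # CONFIG_VMWARE_BALLOON=y # memory ballooning
        # Commonly dropped inside a VM: CPU frequency scaling, thermal
        # management, wireless, sound, and most hardware watchdogs.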


  • Inspiring the method of teaching. Example- C++ :)

    - by Ashwin
    A year ago I graduated with a degree in Computer Science and Engineering. With C++ as my first choice of programming language, I have been learning C++ in many ways. At first, five years back, I had many conceptions, most of which seemed very abstract to me. It started when I knew almost everything about structs in C and nothing about classes in C++. I had a great time experimenting with them all and learning a lot. I had a hard time evaluating procedural programming vs. object-oriented programming; deciding when to choose one or the other took a great deal of patience, and I knew I could not underestimate either programming style.

    Though procedural programming is often a better choice than simple sequential unstructured programming, when solving problems procedurally we usually divide one problem into several ordered steps regarded as functions, then call these functions one by one to get the result. When solving problems with object-oriented principles, we divide one problem into several classes and form the interactions between them. Evaluating these two at the beginning (as a learner) required a lot of inspiration and thought: instruction to think step by step, related concepts to understand deeply, and an intense interest in contrasting solutions in both POP and OOP.

    If you were ever a mentor: what ideas/methods would you teach to students that would inspire them to learn a programming language (and computer science in general)?


  • How to handle editing a large file for a non-technical user

    - by Luke
    I have a client who is given a tab-delimited .txt file containing hundreds of thousands of rows. I have a user story as follows:

    As a user I want to take the text file and add a new value at the end of each line which contains the concatenated value of two of the columns.

    For example, if the file reads

        text_one	text_two

    I need to output the following (preferably to a .txt file):

        text_one	text_two	text_onetext_two

    My first approach was to ask the vendor supplying the file to do the concatenation before providing the file; the easiest way to solve a problem is to eliminate it, right? However, they are very uncooperative and have point-blank refused. I've looked at building a simple JavaScript application that does this client-side, so a non-technical user could select the file using a file selector. This approach has a few problems:

    - The file could be over a GB in size and so can't be loaded straight into memory; I've tried, and the browser crashes.
    - There is no means to write a file in JavaScript, so I'd need to output the content to the screen and have the user save it (somehow).

    I was thinking that if I could get around the file-size limitations I could just output the edited content to the page and have the user save the page as a .txt file, but I think there is a better way than using JavaScript that will still accommodate the user's lack of technical know-how. Please consider this question to be stack agnostic, but bear in mind that a nice little shell script or Python script would be deemed unsuitable for a non-technical user unless there is a way of "packaging" it nicely.

    Updates: The file is too large to open in Excel. The process needs to be run weekly, but it doesn't require scheduling or automation... (yet)
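
    For reference, the transformation itself is a one-liner that streams the file rather than loading it into memory, so file size is not an issue; which two columns to concatenate (1 and 2 below) is an assumption:

        awk -F'\t' -v OFS='\t' '{ print $0, $1 $2 }' input.txt > output.txt

    The packaging question remains open, but a wrapper the user can double-click (pointing at a bundled awk build, for instance) is one hedged direction.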


  • Changed file and now I cannot access my SSH anymore

    - by Arnold
    I was trying to get my dedicated server to have a couple of VPSes installed, using this tutorial: http://linux-vserver.org/Installation_on_CentOS In the process I had to change the file /etc/ssh/sshd_config. The documentation advises changing it to:

        ListenAddress <host IP address>

    Guess what? I literally added <host IP address> instead of the dedicated server's IP. I restarted the server and now I'm not able to access my SSH anymore. Can anyone help me regain access to my SSH? I'm using CentOS 6.
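
    Since SSH itself is now unreachable, the fix has to go through the provider's console/KVM or a rescue mode. A hedged sketch of the repair from such a console (0.0.0.0 listens on all addresses and is just an example; substitute the real IP if preferred):

        sed -i 's/^ListenAddress .*/ListenAddress 0.0.0.0/' /etc/ssh/sshd_config
        service sshd restart   # CentOS 6 service name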


  • Oracle Linux Tips and Tricks: Using SSH

    - by Robert Chase
    Out of all of the utilities available to systems administrators, ssh is probably the most useful of them all. Not only does it allow you to log into systems securely, but it can also be used to copy files, tunnel IP traffic and run remote commands on distant servers. It's truly the Swiss army knife of systems administration.

    Secure Shell, also known as ssh, was developed in 1995 by Tatu Ylönen after the Helsinki University of Technology in Finland suffered a password sniffing attack. Back then it was common to use tools like rcp, rsh, ftp and telnet to connect to systems and move files across the network. The main problem with these tools is that they provide no security and transmit data in plain text, including sensitive login credentials. SSH provides this security by encrypting all traffic transmitted over the wire to protect from password sniffing attacks.

    One of the more common use cases involving SSH is found when using scp. Secure Copy (scp) transmits data between hosts using SSH and allows you to easily copy all types of files. The syntax for the scp command is:

        scp /pathlocal/filenamelocal remoteuser@remotehost:/pathremote/filenameremote

    In the following simple example, I move a file named myfile from the system test1 to the system test2. I am prompted to provide valid user credentials for the remote host before the transfer will proceed. If I were only using ftp, this information would be unencrypted as it went across the wire. However, because scp uses SSH, my user credentials and the file and its contents are confidential and remain secure throughout the transfer.

        [user1@test1 ~]# scp /home/user1/myfile user1@test2:/home/user1
        user1@test2's password:
        myfile                                    100%    0     0.0KB/s   00:00

    You can also use ssh to send network traffic and utilize the encryption built into ssh to protect traffic over the wire. This is known as an ssh tunnel. In order to utilize this feature, the server that you intend to connect to (the remote system) must have TCP forwarding enabled within the sshd configuration. To enable TCP forwarding on the remote system, make sure AllowTcpForwarding is set to yes and enabled in the /etc/ssh/sshd_conf file:

        AllowTcpForwarding yes

    Once you have this configured, you can connect to the server and set up a local port which you can direct traffic to that will go over the secure tunnel. The following command will set up a tunnel on port 8989 on your local system. You can then redirect a web browser to use this local port, allowing the traffic to go through the encrypted tunnel to the remote system. It is important to select a local port that is not being used by a service and is not restricted by firewall rules. In the following example the -D specifies a local dynamic application-level port forward and the -N specifies not to execute a remote command.

        ssh -D 8989 [email protected] -N

    You can also forward specific ports on both the local and remote host. The following example will set up a port forward on port 8080 and forward it to port 80 on the remote machine.

        ssh -L 8080:farwebserver.com:80 [email protected]

    You can even run remote commands via ssh, which is quite useful for scripting or remote system administration tasks. The following example shows how to log in remotely and execute the command ls -la in the home directory of the machine. Because ssh encrypts the traffic, the login credentials and output of the command are completely protected while they travel over the wire.

        [rchase@test1 ~]$ ssh rchase@test2 'ls -la'
        rchase@test2's password:
        total 24
        drwx------  2 rchase rchase 4096 Sep  6 15:17 .
        drwxr-xr-x. 3 root   root   4096 Sep  6 15:16 ..
        -rw-------  1 rchase rchase   12 Sep  6 15:17 .bash_history
        -rw-r--r--  1 rchase rchase   18 Dec 20  2012 .bash_logout
        -rw-r--r--  1 rchase rchase  176 Dec 20  2012 .bash_profile
        -rw-r--r--  1 rchase rchase  124 Dec 20  2012 .bashrc

    You can execute any command contained in the quotation marks as long as you have permission with the user account that you are using to log in. This can be very powerful and useful for collecting information for reports, remote-controlling systems and performing systems administration tasks using shell scripts.

    To make your shell scripts even more useful and to automate logins, you can use ssh keys for running commands remotely and securely without the need to enter a password. You can accomplish this with key-based authentication. The first step in setting up key-based authentication is to generate a public key for the system that you wish to log in from. In the following example you are generating a ssh key on a test system. In case you are wondering, this key was generated on a test VM that was destroyed after this article.

        [rchase@test1 .ssh]$ ssh-keygen -t rsa
        Generating public/private rsa key pair.
        Enter file in which to save the key (/home/rchase/.ssh/id_rsa):
        Enter passphrase (empty for no passphrase):
        Enter same passphrase again:
        Your identification has been saved in /home/rchase/.ssh/id_rsa.
        Your public key has been saved in /home/rchase/.ssh/id_rsa.pub.
        The key fingerprint is:
        7a:8e:86:ef:59:70:ef:43:b7:ee:33:03:6e:6f:69:e8 rchase@test1
        The key's randomart image is:
        +--[ RSA 2048]----+
        |                 |
        |  . .            |
        |   o .           |
        |    . o o        |
        |   o o oS+       |
        |  +   o.= =      |
        |   o ..o.+ =     |
        |    . .+. =      |
        |     ...Eo       |
        +-----------------+

    Now that you have the key generated on the local system, you should copy it to the target server in a temporary location. The user's home directory is fine for this.

        [rchase@test1 .ssh]$ scp id_rsa.pub rchase@test2:/home/rchase
        rchase@test2's password:
        id_rsa.pub

    Now that the file has been copied to the server, you need to append it to the authorized_keys file. This should be appended to the end of the file in the event that there are other authorized keys on the system.

        [rchase@test2 ~]$ cat id_rsa.pub >> .ssh/authorized_keys

    Once the process is complete you are ready to log in. Since you are using key-based authentication, you are not prompted for a password when logging into the system.

        [rchase@test1 ~]$ ssh test2
        Last login: Fri Sep  6 17:42:02 2013 from test1

    This makes it much easier to run remote commands. Here's an example of the remote command from earlier. With no password it's almost as if the command ran locally.

        [rchase@test1 ~]$ ssh test2 'ls -la'
        total 32
        drwx------  3 rchase rchase 4096 Sep  6 17:40 .
        drwxr-xr-x. 3 root   root   4096 Sep  6 15:16 ..
        -rw-------  1 rchase rchase   12 Sep  6 15:17 .bash_history
        -rw-r--r--  1 rchase rchase   18 Dec 20  2012 .bash_logout
        -rw-r--r--  1 rchase rchase  176 Dec 20  2012 .bash_profile
        -rw-r--r--  1 rchase rchase  124 Dec 20  2012 .bashrc

    As a security consideration, it's important to note the permissions of .ssh and the authorized_keys file. .ssh should be 700 and authorized_keys should be set to 600. This prevents unauthorized access to ssh keys from other users on the system.

    An even easier way to move keys back and forth is to use ssh-copy-id. Instead of copying the file and appending it manually to the authorized_keys file, ssh-copy-id does both steps at once for you. Here's an example of moving the same key using ssh-copy-id. The -i in the example is so that we can specify the path to the id file, which in this case is /home/rchase/.ssh/id_rsa.pub.

        [rchase@test1]$ ssh-copy-id -i /home/rchase/.ssh/id_rsa.pub rchase@test2

    One of the last tips that I will cover is the ssh config file. By using the ssh config file you can set up host aliases to make logins to hosts with odd ports or long hostnames much easier and simpler to remember. Here's an example entry in our .ssh/config file.

        Host dev1
            Hostname somereallylonghostname.somereallylongdomain.com
            Port 28372
            User somereallylongusername12345678

    Let's compare the login process between the two. Which would you want to type and remember?

        ssh somereallylongusername12345678@somereallylonghostname.somereallylongdomain.com -p 28372

        ssh dev1

    I hope you find these tips useful. There are a number of tools used by system administrators to streamline processes and simplify workflows, and whether you are new to Linux or a longtime user, I'm sure you will agree that SSH offers useful features that can be used every day. Send me your comments and let us know the ways you use SSH with Linux. If you have other tools you would like to see covered in a similar post, send in your suggestions.


  • HP-UX -> Linux incremental remote backup

    - by stack_zen
    Hi. I need to set up a differential backup process from a range of remote HP-UXes to a central RHEL5 server. I'd happily go with rsync; the problem is, stock HP-UX 11.11 has no built-in rsync and I don't have permission to install any software on the remote stock HP-UXes. How should I approach this? HP-UX provides:

    - fbackup (HP-UX exclusive)
    - cpio (available in RHEL5; allows backing up only the files which changed, but always grabs the totality of each file)

        ssh remote_user@remote_host 'find /u01/engine/logs/ -type f -name "*.log" | cpio -o | gzip -' | gunzip - | cpio -idmv

    Those solutions don't really answer my incremental (bandwidth efficiency) problem, do they?
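
    For the incremental part, one hedged approach that stays within stock HP-UX tools is to keep a timestamp marker file on each remote host and let find select only files modified since the previous run (the marker path is illustrative):

        # Pull only files changed since the last run, then update the marker.
        ssh remote_user@remote_host \
          'find /u01/engine/logs -type f -name "*.log" -newer /var/tmp/.last_backup | cpio -o | gzip -' \
          | gunzip - | cpio -idmv
        ssh remote_user@remote_host 'touch /var/tmp/.last_backup'

    This still transfers each changed file in full, as noted about cpio above, but it skips everything untouched since the marker.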


  • Copying photos from camera stalls - how to track down issue?

    - by Hamish Downer
    When I copy files from my camera (connected via USB) to the SSD in my laptop, a few files get copied and then the copy stalls. I'm not sure why; any ideas or things to investigate are appreciated, as are bug reports to go and look at.

    I have read this answer - the camera (a Canon 40D, in case that matters) mounts fine using gvfs. I can see the photos in Nautilus, or in the terminal (in /run/user/username/gvfs/...), and I can copy a few photos, but not many. Using the terminal or Nautilus, the process hangs until the camera goes to sleep. Digikam fails to copy any at all, as does Rapid Photo Downloader. Shotwell did manage it in the end, but that is very much a workaround for me. I have disabled thumbnail generation by Nautilus.

    The load average stays about 1 while this is happening, while CPU usage is half idle, half wait (and a little user/sys for other programs). None of the programs at the top of the CPU list in top are related to copying photos. There is not much in the logs; from /var/log/syslog:

        Dec  2 16:20:52 mishtop dbus[945]: [system] Activating service name='org.freedesktop.UDisks' (using servicehelper)
        Dec  2 16:20:52 mishtop dbus[945]: [system] Successfully activated service 'org.freedesktop.UDisks'
        Dec  2 16:21:24 mishtop kernel: [ 2297.180130] usb 2-2: new high-speed USB device number 4 using ehci_hcd
        Dec  2 16:21:24 mishtop kernel: [ 2297.314272] usb 2-2: New USB device found, idVendor=04a9, idProduct=3146
        Dec  2 16:21:24 mishtop kernel: [ 2297.314278] usb 2-2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
        Dec  2 16:21:24 mishtop kernel: [ 2297.314283] usb 2-2: Product: Canon Digital Camera
        Dec  2 16:21:24 mishtop kernel: [ 2297.314287] usb 2-2: Manufacturer: Canon Inc.
        Dec  2 16:21:24 mishtop mtp-probe: checking bus 2, device 4: "/sys/devices/pci0000:00/0000:00:1d.7/usb2/2-2"
        Dec  2 16:21:24 mishtop mtp-probe: bus: 2, device: 4 was not an MTP device

    This problem has only started recently and I've had all the hardware for ages. I have also recently upgraded to 12.10, though I'm not sure if the problem started when I upgraded or after the upgrade. I also note this similar question, but it is currently unanswered and I'm providing more detail.
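
    Two hedged ways to gather more evidence when it stalls: watch the log during a copy, and try pulling the photos with gphoto2 directly, which bypasses the gvfs mount entirely (installing the gphoto2 package is an assumption; it is not present by default):

        # In one terminal, watch for USB errors while the copy runs:
        tail -f /var/log/syslog
        # In another, fetch straight from the camera without gvfs:
        sudo apt-get install gphoto2
        gphoto2 --get-all-files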


  • Nodejs for processing js and Nginx for handling everything else

    - by Kevin Parker
    I have Node.js running on port 8000 and nginx on port 80 on the same server. I want nginx to handle all the requests (images, CSS, etc.) and forward only the js requests to the Node.js server on port 8000. Is it possible to achieve this? I have configured nginx as a reverse proxy, but it forwards every request to Node.js; I want nginx to process everything except js. From nginx/sites-enabled/default:

        upstream nodejs {
            server localhost:8000; # nodejs
        }

        location / {
            proxy_pass http://192.168.2.21:8000;
            proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
            proxy_redirect off;
            proxy_buffering off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
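
    A hedged sketch of the split (this assumes "js requests" means URLs ending in .js, and the static root is a placeholder): give .js requests their own location block that proxies to the upstream, and let the catch-all serve files directly:

        location ~* \.js$ {
            proxy_pass http://nodejs;   # the upstream defined above
            proxy_set_header Host $host;
        }

        location / {
            root /var/www/example;      # hypothetical document root
        }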


  • I have deleted python files in usr/bin and cant reinstall it

    - by Plonkaa
    I am a novice at Ubuntu and unfortunately I have deleted three files in the /usr/bin folder:

    - python2.7
    - python
    - python2.6

    Now my Update Manager won't work, and when I type python into GNOME it says that it is no longer there. Please help me; I've tried loads of different things but it just won't work. The closest I got was typing in

        sudo apt-get -f install

    and I thought I had fixed it, but then I got this error message:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following packages were automatically installed and are no longer required:
          gir1.2-folks-0.6 gir1.2-polkit-1.0 libcogl5 mutter-common gir1.2-json-1.0
          libcaribou0 gir1.2-accountsservice-1.0 gir1.2-clutter-1.0 gir1.2-gkbd-3.0
          gir1.2-networkmanager-1.0 caribou libcogl-common libmutter0 gir1.2-mutter-3.0
          gjs gir1.2-caribou-1.0 libclutter-1.0-0 gir1.2-telepathylogger-0.2
          libclutter-1.0-common cups-pk-helper gir1.2-upowerglib-1.0 gir1.2-cogl-1.0
          libmozjs185-1.0 gir1.2-telepathyglib-0.12 gir1.2-gee-1.0 libgjs0c
          gnome-shell-common
        Use 'apt-get autoremove' to remove them.
        The following extra packages will be installed:
          ubuntu-sso-client
        The following packages will be upgraded:
          ubuntu-sso-client
        1 upgraded, 0 newly installed, 0 to remove and 35 not upgraded.
        2 not fully installed or removed.
        Need to get 0 B/57.7 kB of archives.
        After this operation, 16.4 kB of additional disk space will be used.
        Do you want to continue [Y/n]? y
        Setting up python-minimal (2.7.2-7ubuntu2) ...
        /var/lib/dpkg/info/python-minimal.postinst: 4: python2.7: not found
        dpkg: error processing python-minimal (--configure):
         subprocess installed post-installation script returned error exit status 127
        Errors were encountered while processing:
         python-minimal
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Any advice is appreciated!
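
    The error shows why apt is stuck: configuring python-minimal needs a working python2.7 binary, which is one of the deleted files. A hedged way out (package names match Ubuntu releases of this era, and dpkg itself does not need Python) is to put the interpreter back by unpacking its package directly, then let apt finish:

        # Fetch and unpack the package that ships /usr/bin/python2.7
        # (or download the .deb from packages.ubuntu.com on another machine).
        apt-get download python2.7-minimal
        sudo dpkg -i python2.7-minimal_*.deb
        # With the interpreter restored, the stuck configure step can finish.
        sudo apt-get -f install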

