Search Results

Search found 18092 results on 724 pages for 'matt long'.


  • Is there ever a reason to do all an object's work in a constructor?

    - by Kane
    Let me preface this by saying this is not my code nor my coworkers' code. Years ago when our company was smaller, we had some projects we needed done that we did not have the capacity for, so they were outsourced. Now, I have nothing against outsourcing or contractors in general, but the codebase they produced is a mass of WTFs. That being said, it does (mostly) work, so I suppose it's in the top 10% of outsourced projects I've seen. As our company has grown, we've tried to take more of our development in house. This particular project landed in my lap, so I've been going over it, cleaning it up, adding tests, etc., etc. There's one pattern I see repeated a lot, and it seems so mind-blowingly awful that I wondered if maybe there is a reason and I just don't see it. The pattern is an object with no public methods or members, just a public constructor that does all the work of the object. For example (the code is in Java, if that matters, but I hope this to be a more general question):

        public class Foo {
            private int bar;
            private String baz;

            public Foo(File f) {
                execute(f);
            }

            private void execute(File f) {
                // FTP the file to some hardcoded location,
                // or parse the file and commit to the database, or whatever
            }
        }

    If you're wondering, this type of code is often called in the following manner:

        for (File f : someListOfFiles) {
            new Foo(f);
        }

    Now, I was taught long ago that instantiating objects in a loop is generally a bad idea, and that constructors should do a minimum of work. Looking at this code, it looks like it would be better to drop the constructor and make execute a public static method. I did ask the contractor why it was done this way, and the response I got was "We can change it if you want", which was not really helpful. Anyway, is there ever a reason to do something like this, in any programming language, or is this just another submission to the Daily WTF?

    Read the article

  • making cookies persistent in IE8

    - by Jamie Stevens
    There's a website I sign into frequently, and I'm getting sick of entering my username and password every time. The website can remember who I am as long as I don't close my browser (Internet Explorer 8), but when I do, it forgets me and asks me to log in again. I'm guessing this is because it's using a cookie (and perhaps a session) that expires when I close my browser. Is there any way to make this information persist across browser restarts? (I tried exporting the cookies to a file and then importing them as soon as the browser was reloaded, but that didn't work either... I'm thinking the cookie text file needs to be modified somehow.) (FYI, the website is http://blackboard.unh.edu, but you won't have access unless you happen to be a student there :-) NOTE: I'm not interested in using any password-remembering features in the browser. The only solution I'm open to is making the cookie / session persistent somehow!

    Read the article

  • links for 2011-03-16

    - by Bob Rhubart
    InfoQ: Randy Shoup on Evolvable Systems - Randy Shoup discusses evolvable systems: how to run different versions of a system in parallel during migrations, decoupling a system with events, schemas at eBay and much more. (tags: ping.fm)
    InfoQ: Heresy & Heretical Open Source: A Heretic's Perspective - Douglas Crockford presents the debate around XML and JSON, and the negative effect of Intellectual Property laws on open source software. (tags: ping.fm)
    Oracle Technology Network Architect Day: Toronto - Registration is now open for this day-long event, to be held at the Sheraton Centre Toronto on April 21. Registration is free, but seating is limited. (tags: oracle otn enterprisearchitecture cloudcomputing)
    Harry Foxwell: The Cloud is STILL too slow! - "Considering the exponentially growing expectations of what the Web, that is, 'the Cloud', is supposed to provide, today's Web/Cloud services are still way too slow." - Harry Foxwell (tags: oracle otn cloud)
    Architecture Standards - BPMN vs. BPEL for Business Process Management (Enterprise Architecture at Oracle) - Path Shepherd gives props to Mark Nelson. (tags: entarch oracle otn)
    ORCLville: Oracle Fusion Applications: If I Were An AppsTech - Oracle ACE Director Floyd Teter says: "If I were an Oracle AppsTech with an eye on Fusion Applications, there are three tools/technologies I'd want..." (tags: oracle otn oracleace fusionapplications)
    Events Overview: Your brain on #entarch - OTN Architect Day - Denver - March 23 - This free event includes sessions on Cloud Computing, Application Portfolio Rationalization, System Optimization, Event-Driven Architecture, plus food, beverages, and lots of peer networking. Seating is limited. (tags: oracle entarch otn)

    Read the article

  • How to recover from failed Mysql schema update, with replication?

    - by OmerGertel
    I have two MySQL servers configured with master-slave replication. Before we deploy a new application version we: 1) STOP SLAVE, and 2) take a MySQL dump of the slave. However, if a mistake is made during the deployment of the new schema version (a table is dropped by mistake, for example), having the slave intact doesn't help. Our service is write-intensive, so we can't turn it back on until we have a working master. If we now load the MySQL dump back into the master, it will take a long time, during which our service remains down. What is the best practice for recovering from such a mistake? How can I set up the system so I can easily promote the slave, turn our service back on, and only then tend to the broken database? Mainly, I'm worried about re-syncing the slave and the master after changes are made on the slave.
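
    A minimal sketch of that promote-the-slave path, assuming a reasonably recent MySQL and a hypothetical host name (slave-host); the exact statements depend on your version and topology:

        # On the slave, once you decide to promote it (host name is hypothetical)
        mysql -h slave-host -e "STOP SLAVE;"
        mysql -h slave-host -e "RESET SLAVE ALL;"  # plain RESET SLAVE on older versions
        # Repoint the application at slave-host and bring the service back up,
        # then rebuild the broken master offline from a dump of the new master:
        mysqldump -h slave-host --all-databases --routines > rebuild-master.sql

    This keeps the slow restore off the critical path: the service runs against the promoted slave while the old master is rebuilt and re-attached as a new slave.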

    Read the article

  • SharePoint 2010 in the cloud a.k.a. SharePoint Online

    - by Sahil Malik
    There are 3 ways to run SharePoint:
    On premises - you buy the servers, and you run the servers.
    Hosted servers - you don't run the servers, but you let a hosting company run dedicated servers.
    Multi-tenant - like SharePoint Online, also known as SaaS (Software as a Service); this is what I am talking about in this blog post.
    The advantages of a cloud solution are undeniable:
    Availability (SharePoint Online offers a 99.9% uptime SLA).
    Reliability.
    Cost, due to economies of scale and no need to hire specialized dedicated staff.
    Scalability.
    Security.
    Flexibility - grow or shrink as you need to.
    If you are seriously considering SharePoint 2010 in the cloud, there are some things you need to know about SharePoint Online.
    What will work: OOTB customization, collaboration features, etc. will work. SharePoint Designer 2010 is supported, so no-code workflows will work. Visual Studio sandbox solutions and the client object model will work.
    What won't work: the SharePoint 2010 Online cloud environment supports only sandbox solutions. BCS (Business Connectivity Services) is not supported in SharePoint Online; what you can do, however, is host your services in Azure and call them using Silverlight. Custom timer jobs will not work.
    Long story short, get used to sandbox solutions - and the new way of programming. Sandbox solutions are pretty damn good. Most of the complaints I have heard about sandbox solutions being too restrictive are uninformed ways of doing things mired in the habits of 2002... or you could just live in 2002 too.

    Read the article

  • Is it OK to reoccupy my old GitHub username to protect repository redirections?

    - by Idan Arye
    I'm considering changing my GitHub username from the old alias I was using as a kid to my real name. I'm concerned about my repository URLs. GitHub will redirect the old URLs, but if someone creates a new account using my old username and creates a repository with the same name as one of my repositories, the URL redirection will break and the URL will lead to their repository, not mine. Now, this is understandable, and GitHub recommends not counting on the redirects in the long term and updating all the remotes, but I'm concerned about some Vim plugins I'm hosting on GitHub. It's a common practice to manage Vim plugins with Git (either as separate repositories or as submodules), and if one of the plugins' remotes breaks, you'll get error messages when you try to batch-update all your plugins (it happened to me once...). It's not that hard to solve, and the chances that it will happen are slim, but I would still like to avoid causing trouble for the users of my plugins... To prevent this, I'm thinking of creating a new account with my old username. That way I can avoid the risk of someone else taking my old username and breaking the redirects of my old repositories. While researching this approach I found GitHub's Name Squatting Policy. According to that policy, GitHub can delete or rename inactive accounts. To my understanding, they do this to prevent cybersquatting, but surely this isn't the case with my fake account - I'm not holding someone else's name in an attempt to sell it to them, I'm merely occupying a name I was using before, to protect my old URLs... So, is it acceptable to go with this plan and create a fake account with my old username?

    Read the article

  • Finding optimal ddrescue command line options where Accuracy > Speed

    - by gav
    I've read up a bit about this tool and obviously looked at the man pages. The trouble is that ddrescue takes so long that I need to get the command right the first time. I wasn't sure how to improve on the vanilla:

        $ sudo ./ddrescue -v /dev/disk0s5 MyVolImage.dmg MyVolRescue.log
        $ sudo ./ddrescue -v MyVolImage.dmg /dev/disk1s3 MyVolRestore.log

    Copying from HFS+ to HFS+ drives. The source (broken) HDD is connected via USB 2.0; the destination HDD is inside the MacBook. I would choose accuracy over speed. There seem to be a lot of options, but I'm not sure how they impact the quality and speed of recovery. Thanks, Gav
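
    For accuracy-first imaging, a common pattern with GNU ddrescue is a fast first pass followed by retries over the bad areas, both resuming from the same logfile; a sketch (option names vary slightly between ddrescue versions):

        # Pass 1: grab the easy areas, skipping the slow scraping of bad sectors
        $ sudo ./ddrescue -v -n /dev/disk0s5 MyVolImage.dmg MyVolRescue.log
        # Pass 2: return to the damaged areas with direct disc access and 3 retries
        $ sudo ./ddrescue -v -d -r3 /dev/disk0s5 MyVolImage.dmg MyVolRescue.log

    Because both passes share MyVolRescue.log, the second run only touches regions the first could not read, so nothing is lost if you refine the options between runs.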

    Read the article

  • Editing the Microsoft Security Essentials context-menu

    - by GPX
    As all MSE users will know, the context-menu item that it adds to Explorer is really long - one whole sentence: "Scan with Microsoft Security Essentials...". Is there a way to edit this and shorten it? I figured out that the file shellext.dll is responsible for registering the context menu. I used ResEdit to edit the DLL and changed the string table entry from "Scan with ($BrandName)" to "Scan with MSE". But it still won't change. I've also tried de-registering the DLL and then registering it again. No luck! Any ideas? Or am I doing something wrong?

    Read the article

  • Must developers understand the business domain or should the specification be sufficient?

    - by Jerome C.
    I work for a company whose domain is really difficult to understand because it is high-technology electronics, but this is applicable to any software development in a complex domain. The application that I work on displays a lot of information, charts, and metrics which are difficult to understand without experience in the domain. The developer uses a specification to describe what the software must do, such as specifying that a particular chart must display this kind of metric and that the metric is given by the following arithmetic formula. This way, the developer doesn't really understand the business or what/why he is doing the task. This can be OK if the specification is really detailed, but when it isn't, or when the author has forgotten a use case, it is quite hard for the developer to find a solution. On the other hand, training every developer in all the business aspects can be very long and difficult. Should we give more importance to detailed specifications (though, as we know, the perfect specification does not exist) or should we train all the developers to understand the business domain? EDIT: keep in mind in your answer that the company might use external developers.

    Read the article

  • Does immutability entirely eliminate the need for locks in multi-processor programming?

    - by GlenPeterson
    Part 1
    Clearly immutability minimizes the need for locks in multi-processor programming, but does it eliminate that need, or are there instances where immutability alone is not enough? It seems to me that you can only defer processing and encapsulate state for so long before most programs have to actually DO something. If a program performs actions on multiple processors, something needs to collect and aggregate the results. All this involves multi-process communication before, after, and possibly during some transformations. The start and end states of the machines are different. Can this always be done with no locks, just by throwing out each object and creating a new one instead of changing the original (a crude view of immutability)? What cases still require locking? I'm interested in both the theoretical/academic answer and the practical/real-world answer. I know a lot of functional programmers like to talk about "no side effects", but in the "real world" everything has a side effect. Every processor cycle takes time, electricity, and machine resources away from other processes. So I understand that there may be more than one perspective from which to answer this question. If immutability is safe, given certain bounds or assumptions, I want to know exactly where the borders of the "safety zone" are. Some examples of possible boundaries:
    - I/O
    - Exceptions/errors
    - Interfaces with programs written in other languages
    - Interfaces with other machines (physical, virtual, or theoretical)
    Special thanks to @JimmaHoffa for his comment, which started this question!
    Part 2
    Multi-processor programming is often used as an optimization technique - to make some code run faster. When is it faster to use locks vs. immutable objects? Given the limits set out in Amdahl's Law, when can you achieve better overall performance (with or without the garbage collector taken into account) with immutable objects vs. locking mutable ones?
    Summary
    I'm combining these two questions into one to try to get at where the bounding box is for immutability as a solution to threading problems.

    Read the article

  • Can I make the proxy settings invisible when I share my internet connection via wifi?

    - by Neil
    This is probably a long shot... I have an HTC Desire and frustratingly found out after I got it that it doesn't support network proxy settings. We have a wireless network at my office that uses a proxy. My desktop at work runs Ubuntu. I was wondering if the following setup would work:
    1. Plug a USB wireless adapter into the desktop, which has a working internet connection using the proxy.
    2. Set up the wireless adapter as an ad-hoc network.
    3. Share the internet connection over the ad-hoc network.
    4. Make it so that the use of the proxy is invisible to users of the shared network connection (see the sketch below).
    5. Connect the Android phone to the ad-hoc wireless network and utilise the internet connection.
    My question is this: Is this possible, or should I give up now and not even try? I think I can handle steps 1, 2, 3 and 5. I just have no idea if step 4 even makes sense, let alone is possible. Thanks
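
    Step 4 does make sense; one hedged sketch of how it's sometimes done on Linux is to run a proxy in transparent mode on the desktop and silently redirect the ad-hoc clients' web traffic into it with iptables, so the phone never needs a proxy setting (this assumes a transparent-capable proxy such as squid listening on port 3128, and wlan0 as the ad-hoc interface - both placeholders):

        # Redirect HTTP arriving from the ad-hoc network into the local transparent proxy
        sudo iptables -t nat -A PREROUTING -i wlan0 -p tcp --dport 80 \
             -j REDIRECT --to-port 3128

    Only port 80 is caught by this rule; HTTPS and other protocols would need additional rules or a helper like redsocks, which wraps arbitrary TCP connections in an upstream proxy.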

    Read the article

  • Chrome is "glitching", on my Mac, is this a known issue?

    - by Fraggle
    Ok, so on my MacBook Pro I'm seeing some weird stuff when using Chrome. Occasionally, the screen will momentarily get messed up (see pic). Usually this happens for less than a second, but one time it froze like that long enough for me to get the screen shot shown.
    - Only seems to happen when using Chrome (Version 35.0.1916.114)
    - OSX 10.9.3
    - Happens on both the primary monitor and when attached to a secondary display
    - Does not seem to happen to other windows shown on the same screen
    Is this a known issue? Possible causes?

    Read the article

  • Starting with text based MUD/MUCK game

    - by Scott Ivie
    I've had this idea for a video game in my head for a long time, but I've never had the knowledge or time to get it done. I still don't really, but I am willing to dedicate a chunk of my time to this before it's too late. Recently I started studying Lua script for a program called "MUSH Client", which works with MU* telnet-style text games. I want to use the GUI capabilities of MUSH Client with a MU* server to create a basic game, but here is my dilemma. I figured this could be a suitable starting place for me. BUT... because I'm not very programmer-savvy yet, I don't know how to download/install/use the MU* server software. I was originally considering Protomuck because a few of the MU*s I was most impressed with began there: http://www.protomuck.org/ I downloaded it, but I guess I'm too used to GUI-style programs, so I'm having great difficulty figuring out what to do next. Does anyone have any suggestions? Does anyone even know what I'm talking about? heh..

    Read the article

  • Dialing into multiple PPP connections on Ubuntu

    - by sharjeel
    I have multiple 3G USB-based modems. I would like them to stay connected simultaneously, NOT necessarily aggregating their bandwidth; a separate intelligent application would manage their utilization effectively. However, I am running into the problem of setting up proper routes for the ppp0/ppp1 interfaces: when one of them connects, the other's entries in the routing table get updated, so it is no longer usable. If I reconnect the second one, it overrides the first one's routing entries. If I do this over and over, sometimes both sets of entries disappear, while in rare cases the two work well. I have tried this both using NetworkManager and WVDial, but the issue pops up in both. Perhaps both of them use the same PPP dialer at the backend, and that's why this issue appears. What is the proper solution to make them work together? In the long run, I'd also like them to dial in automatically once the USB modem is connected.
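
    The usual fix is policy routing: give each ppp interface its own routing table so one dialer cannot clobber the other's default route. A sketch with iproute2, using made-up local addresses:

        # One routing table per interface
        sudo ip route add default dev ppp0 table 10
        sudo ip route add default dev ppp1 table 20
        # Send replies out of the interface that owns the source address
        sudo ip rule add from 10.64.0.1 table 10   # ppp0's local address (placeholder)
        sudo ip rule add from 10.64.0.2 table 20   # ppp1's local address (placeholder)

    Dropping these commands into a script under /etc/ppp/ip-up.d/ would re-apply them each time pppd brings a link up, which also covers the auto-redial case.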

    Read the article

  • Ranking drop after using reverse proxy for blog subdirectory and robots.txt for old blog subdomain

    - by user40387
    We have a 3Dcart store and a WordPress blog hosted on a separate server. Originally, we had a CNAME set up to point the blog to http://blog.example.com/. However, in our attempt to boost link-based and traffic-based authority on the main site, we've opted to do a reverse proxy to http://www.example.com/blog/. It's been about two months since we finished the reverse proxy migration. It appears that everything is technically working as intended, including some robots and sitemap changes; the new URLs are even generating some traffic, as indicated in Google Analytics. While Google has been indexing the new URL locations, they're ranking very poorly, even for non-competitive, long-tail keywords. Meanwhile, the old subdomain URLs are still ranking mostly as well as they used to (even though they aren't showing meta titles and descriptions due to being blocked by robots.txt). Our working theory is that Google has an old index of the subdomain URLs and is considering the new URLs to be duplicate content, since it's being told not to crawl the subdomain and therefore can't see the rel canonicals we have in place. To resolve this, we've updated the subdomain's robots.txt to no longer block crawling and indexing. Theoretically, seeing the canonical tag on the subdomain pages will resolve any perceived duplicate content issues. In the meantime, we were wondering if anyone has any other ideas. We are very concerned that we'll be losing valuable traffic, as we're entering our on-season at the moment.

    Read the article

  • http to https upgrade -- SEO troubles

    - by SLIM
    I upgraded my site so that all pages have gone from using http to https. I didn't consider that Google treats https pages differently than http. I re-created my sitemap so that all links now reflect the new https URLs and let it be for a few days. (Whoops!) Google is now re-indexing all the https pages. I have about 19k pages on the site, and Google has already indexed about 8k of the new https ones. The problem is that Google sees all of these as brand-new pages, when many of them have a long http history. Of course most of you will recognize the problem: I didn't set up a 301 from the old http to the new https. Is it too late to do this? Should I switch my sitemap back to http and then 301 to the new https? Or should I leave the sitemap as is and set up 301 redirects anyway? I'm not even sure if Google is trying to reach the http site anymore. Currently the site is doing 303 redirects (from http to https), although I haven't figured out why yet. Thanks for any suggestions you can offer.
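
    Before changing anything, it can help to see exactly what the old URLs return; a quick check with curl (example.com standing in for the real site):

        # Inspect the status code and Location header for an old http URL
        curl -I http://www.example.com/some-old-page
        # Expect "HTTP/1.1 301 Moved Permanently" and a "Location: https://..." line

    A 303 says "see other" rather than "moved permanently", so it does not tell crawlers to transfer the http URL's history to the https one; replacing it with a server-level 301 is the usual fix, and the https sitemap can stay as it is.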

    Read the article

  • reducing Windows 7 size

    - by Sejanus
    My Windows 7 install uses around 16 GB of hard disk space, while Windows XP used only around 4 GB. Seems weird. I use Windows only for gaming, so I don't need a lot of the stuff it has to offer. What is the best way to reduce the size of Windows 7? What can I delete / uninstall, and how? I'd also like to reduce RAM and processor usage as much as possible (as long as it doesn't hurt game performance)... turn off all that fancy stuff and so on. What can I turn off, and how? Thanks in advance!
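
    Two of the biggest single space consumers are usually the hibernation file and accumulated cleanup data; a hedged starting point, run from an elevated command prompt:

        REM Delete hiberfil.sys (roughly the size of your RAM) if you never hibernate
        powercfg -h off
        REM Launch Disk Cleanup, which can also purge service pack backup files
        cleanmgr

    Much of the remaining footprint is the WinSxS component store, which Windows manages itself; deleting from it by hand is unsafe, so the realistic target is a few GB rather than XP-sized numbers.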

    Read the article

  • How to write constructors which might fail to properly instantiate an object

    - by whitman
    Sometimes you need to write a constructor that can fail. For instance, say I want to instantiate an object with a file path, something like:

        obj = new Object("/home/user/foo_file")

    As long as the path points to an appropriate file, everything's fine. But if the string is not a valid path, things should break. But how? You could:
    1. throw an exception
    2. return a null object (if your programming language allows constructors to return values)
    3. return a valid object but with a flag indicating that its path wasn't set properly (ugh)
    4. others?
    I assume that the "best practices" of various programming languages would implement this differently. For instance, I think ObjC prefers (2). But (2) would be impossible to implement in C++, where constructors must have void as a return type. In that case I take it that (1) is used. In your programming language of choice, can you show how you'd handle this problem and explain why?

    Read the article

  • Is there a successor to NTFS?

    - by hak8or
    What I am asking is whether there is any file system that is known to be a possible successor of NTFS. I am asking because I just bought a new external drive and realized that the path to a file, including the file name itself, cannot add up to more than 255 characters. This is known as the "Long File Name" limit by Microsoft. I am assuming this is due to a file system limitation, so I am searching for any possible alternatives. I have a Windows 7 based machine, but I am under the assumption that there would be third-party software that would work with Windows to make the new file system accessible through Windows Explorer.

    Read the article

  • SQL SERVER – Fix Visual Studio Error : Connections to SQL Server files (.mdf) require SQL Server Express 2005 to function properly. Please verify the installation of the component or download from the URL

    - by pinaldave
    In one of my virtual environments, while I was trying to add a SQL Server database (.mdf) file to an ASP.NET project, I encountered the following error: "Connections to SQL Server files (.mdf) require SQL Server Express 2005 to function properly. Please verify the installation of the component or download from the URL:" I have been using SQL Server 2012 for a long time, so this error was a bit interesting to me. I realized that there should not be any need for a SQL Server 2005 installation. I quickly figured out that I can remove this error if I do as mentioned below:
    Open Microsoft Visual Studio
    Select Tools >> Options >> Database Tools >> Data Connections
    Enter the name of an installed instance in the "SQL Server Instance Name" field
    Click OK
    If you do not know the instance name, you can follow either of these options: 1) use the command line sqlcmd utility, or 2) use SQL Server Management Studio. Is there any other way to resolve this error? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: sqlcmd, Visual Studio
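
    For option 1, a small sketch: sqlcmd can enumerate the instances visible from the machine and confirm a name before you paste it into the Visual Studio dialog (the instance name below is only an example):

        REM List SQL Server instances visible from this machine
        sqlcmd -L
        REM Verify you can connect to a specific instance
        sqlcmd -S .\SQLEXPRESS -Q "SELECT @@SERVERNAME"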

    Read the article

  • ArchBeat Link-o-Rama for 2012-09-06

    - by Bob Rhubart
    Oracle Technology Network Architect Day - Boston, MA - 9/12/2012 - Sure, you could ask a voodoo priestess for help in improving your solution architecture skills. But there's the whole snake thing, and the zombie thing, and other complications. So why not keep it simple and register for Oracle Technology Network Architect Day in Boston, MA. There's no magic, just a full day of technical sessions covering Cloud, SOA, Engineered Systems, and more. Registration is free, but seating is limited. You'll curse yourself if you miss this one. Register now.
    Adding a runtime LOV for a taskflow parameter in WebCenter | Yannick Ongena - Oracle ACE Yannick Ongena illustrates how to customize the parameters tab for a taskflow in WebCenter.
    Tips on Migrating from AquaLogic .NET Accelerator to WebCenter WSRP Producer for .NET | Scott Nelson - "It has been a very winding path and this blog entry is intended to share both the lessons learned and relevant approaches that led to those learnings," says Scott Nelson. "Like most journeys of discovery, it was not a direct path, and there are notes to let you know when it is practical to skip a section if you are in a hurry to get from here to there."
    Using FMAP and AnalyticsRes in a Oracle BI High Availability Implementation | Christian Screen - "The fmap syntax has been used for a long time in Oracle BI / Siebel Analytics when referencing images inherent in the application as well as custom images," says Oracle ACE Christian Screen. "This syntax is used on Analysis requests and dashboards."
    More on Embedded Business Intelligence | David Haimes - David Haimes gives an example of Timeliness as "one of the three key attributes required for BI to be considered embedded BI."
    Thought for the Day - "Architect: Someone who knows the difference between that which could be done and that which should be done." - Larry McVoy (Source: Quotes for Software Engineers)

    Read the article

  • Populating cells with data from another spreadsheet after just keying in a few letters

    - by Wendy Griffin
    I have 1 workbook with 2 spreadsheets. On Spreadsheet 2, column A contains a long list of company names, and columns B - H contain critical information about each company. Spreadsheet 1 contains all of the columns from Spreadsheet 2, plus some other columns. What I'm trying to achieve: when you start to type the first 3 characters of a company name on Spreadsheet 1, it would show a drop-down of the companies (as listed on Spreadsheet 2) that share the first 3-5 letters, and you would select one. Upon selecting a company name, all of the corresponding company information would populate the other columns on Spreadsheet 1 automatically. This is to avoid copying a row from Spreadsheet 2 and pasting it into Spreadsheet 1. Any help with this would be greatly appreciated. Cheers!
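
    One hedged way to get close to this in Excel: make column A on Spreadsheet 1 a Data Validation list whose source is Spreadsheet 2's company column (for example =Sheet2!$A$2:$A$500 - the range is a placeholder), which gives the pick list, then fill columns B - H with lookups such as =VLOOKUP($A2, Sheet2!$A:$H, 2, FALSE), increasing the column index (2, 3, 4, ...) for each field, so the details populate automatically once a company is chosen.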

    Read the article

  • What are your "must have", free (gratis), programs?

    - by flybywire
    Poll: What software must you always keep handy? I don't care if it is open source, freeware, or a demo, as long as its price is $0. Neither do I care if it is for desktops, handhelds, netbooks, the web, or cellphones. If it is free to use - and essential to your happiness and well-being - put it in this list. Rules: Please list only ONE application per answer, so that people can vote up the items that they prefer. Please do not post applications that have already been posted - instead, up-vote the existing answer.

    Read the article

  • Oracle Linux Tips and Tricks: Using SSH

    - by Robert Chase
    Out of all of the utilities available to systems administrators, ssh is probably the most useful of them all. Not only does it allow you to log into systems securely, but it can also be used to copy files, tunnel IP traffic and run remote commands on distant servers. It's truly the Swiss army knife of systems administration.
    Secure Shell, also known as ssh, was developed in 1995 by Tatu Ylönen after Helsinki University of Technology in Finland suffered a password-sniffing attack. Back then it was common to use tools like rcp, rsh, ftp and telnet to connect to systems and move files across the network. The main problem with these tools is that they provide no security and transmit data in plain text, including sensitive login credentials. SSH provides security by encrypting all traffic transmitted over the wire to protect from password-sniffing attacks.
    One of the more common use cases involving SSH is found when using scp. Secure Copy (scp) transmits data between hosts using SSH and allows you to easily copy all types of files. The syntax for the scp command is:

        scp /pathlocal/filenamelocal remoteuser@remotehost:/pathremote/filenameremote

    In the following simple example, I move a file named myfile from the system test1 to the system test2. I am prompted to provide valid user credentials for the remote host before the transfer will proceed. If I were only using ftp, this information would be unencrypted as it went across the wire. However, because scp uses SSH, my user credentials and the file and its contents are confidential and remain secure throughout the transfer.

        [user1@test1 ~]# scp /home/user1/myfile user1@test2:/home/user1
        user1@test2's password:
        myfile                                    100%    0     0.0KB/s   00:00

    You can also use ssh to send network traffic and utilize the encryption built into ssh to protect traffic over the wire. This is known as an ssh tunnel. In order to utilize this feature, the server that you intend to connect to (the remote system) must have TCP forwarding enabled within the sshd configuration. To enable TCP forwarding on the remote system, make sure AllowTcpForwarding is set to yes and enabled in the /etc/ssh/sshd_config file:

        AllowTcpForwarding yes

    Once you have this configured, you can connect to the server and set up a local port which you can direct traffic to, allowing it to go over the secure tunnel. The following command will set up a tunnel on port 8989 on your local system. You can then redirect a web browser to use this local port, allowing the traffic to go through the encrypted tunnel to the remote system. It is important to select a local port that is not being used by a service and is not restricted by firewall rules. In the following example the -D specifies local dynamic application-level port forwarding and the -N specifies not to execute a remote command.

        ssh -D 8989 user@remotehost -N

    You can also forward specific ports on both the local and remote host. The following example will set up a port forward on port 8080 and forward it to port 80 on the remote machine.

        ssh -L 8080:farwebserver.com:80 user@remotehost

    You can even run remote commands via ssh, which is quite useful for scripting or remote system administration tasks. The following example shows how to log in remotely and execute the command ls -la in the home directory of the machine. Because ssh encrypts the traffic, the login credentials and output of the command are completely protected while they travel over the wire.

        [rchase@test1 ~]$ ssh rchase@test2 'ls -la'
        rchase@test2's password:
        total 24
        drwx------  2 rchase rchase 4096 Sep  6 15:17 .
        drwxr-xr-x. 3 root   root   4096 Sep  6 15:16 ..
        -rw-------  1 rchase rchase   12 Sep  6 15:17 .bash_history
        -rw-r--r--  1 rchase rchase   18 Dec 20  2012 .bash_logout
        -rw-r--r--  1 rchase rchase  176 Dec 20  2012 .bash_profile
        -rw-r--r--  1 rchase rchase  124 Dec 20  2012 .bashrc

    You can execute any command contained in the quotation marks as long as you have permission with the user account that you are using to log in. This can be very powerful and useful for collecting information for reports, remote controlling systems and performing systems administration tasks using shell scripts.
    To make your shell scripts even more useful and to automate logins, you can use ssh keys for running commands remotely and securely without the need to enter a password. You can accomplish this with key-based authentication. The first step in setting up key-based authentication is to generate a public key for the system that you wish to log in from. In the following example you are generating an ssh key on a test system. In case you are wondering, this key was generated on a test VM that was destroyed after this article.

        [rchase@test1 .ssh]$ ssh-keygen -t rsa
        Generating public/private rsa key pair.
        Enter file in which to save the key (/home/rchase/.ssh/id_rsa):
        Enter passphrase (empty for no passphrase):
        Enter same passphrase again:
        Your identification has been saved in /home/rchase/.ssh/id_rsa.
        Your public key has been saved in /home/rchase/.ssh/id_rsa.pub.
        The key fingerprint is:
        7a:8e:86:ef:59:70:ef:43:b7:ee:33:03:6e:6f:69:e8 rchase@test1
        The key's randomart image is:
        +--[ RSA 2048]----+
        |                 |
        |  . .            |
        |   o .           |
        |    . o o        |
        |   o o oS+       |
        |  +   o.= =      |
        |   o ..o.+ =     |
        |    . .+. =      |
        |     ...Eo       |
        +-----------------+

    Now that you have the key generated on the local system, you should copy it to the target server into a temporary location. The user's home directory is fine for this.

        [rchase@test1 .ssh]$ scp id_rsa.pub rchase@test2:/home/rchase
        rchase@test2's password:
        id_rsa.pub

    Now that the file has been copied to the server, you need to append it to the authorized_keys file. This should be appended to the end of the file in the event that there are other authorized keys on the system.

        [rchase@test2 ~]$ cat id_rsa.pub >> .ssh/authorized_keys

    Once the process is complete you are ready to log in. Since you are using key-based authentication, you are not prompted for a password when logging into the system.

        [rchase@test1 ~]$ ssh test2
        Last login: Fri Sep  6 17:42:02 2013 from test1

    This makes it much easier to run remote commands. Here's an example of the remote command from earlier. With no password it's almost as if the command ran locally.

        [rchase@test1 ~]$ ssh test2 'ls -la'
        total 32
        drwx------  3 rchase rchase 4096 Sep  6 17:40 .
        drwxr-xr-x. 3 root   root   4096 Sep  6 15:16 ..
        -rw-------  1 rchase rchase   12 Sep  6 15:17 .bash_history
        -rw-r--r--  1 rchase rchase   18 Dec 20  2012 .bash_logout
        -rw-r--r--  1 rchase rchase  176 Dec 20  2012 .bash_profile
        -rw-r--r--  1 rchase rchase  124 Dec 20  2012 .bashrc

    As a security consideration, it's important to note the permissions of .ssh and the authorized_keys file. .ssh should be 700 and authorized_keys should be set to 600. This prevents unauthorized access to ssh keys from other users on the system.
    An even easier way to move keys back and forth is to use ssh-copy-id. Instead of copying the file and appending it manually to the authorized_keys file, ssh-copy-id does both steps at once for you. Here's an example of moving the same key using ssh-copy-id. The -i in the example is so that we can specify the path to the id file, which in this case is /home/rchase/.ssh/id_rsa.pub.

        [rchase@test1]$ ssh-copy-id -i /home/rchase/.ssh/id_rsa.pub rchase@test2

    One of the last tips that I will cover is the ssh config file. By using the ssh config file you can set up host aliases to make logins to hosts with odd ports or long hostnames much easier and simpler to remember. Here's an example entry in our .ssh/config file.

        Host dev1
            Hostname somereallylonghostname.somereallylongdomain.com
            Port 28372
            User somereallylongusername12345678

    Let's compare the login process between the two. Which would you want to type and remember?

        ssh somereallylongusername12345678@somereallylonghostname.somereallylongdomain.com -p 28372
        ssh dev1

    I hope you find these tips useful. There are a number of tools used by system administrators to streamline processes and simplify workflows, and whether you are new to Linux or a longtime user, I'm sure you will agree that SSH offers useful features that can be used every day. Send me your comments and let us know the ways you use SSH with Linux. If you have other tools you would like to see covered in a similar post, send in your suggestions.

    Read the article

  • Disabling discrete/native graphics on MacBook Retina when using apps like VMware, VLC, etc.

    - by badkitteh
    I got a MacBook Pro Retina a few months ago and I'm really happy with it. However, since I do a lot of work in different environments/OSes, I make heavy use of VMware to have them with me when I'm on the road. The MBPr has great battery life - as long as the integrated graphics are being used. Unfortunately, as soon as I launch VMware, the MacBook switches to discrete graphics and battery life is effectively halved. I noticed that after installing gfxCardStatus, and now I'm wondering if there is a way to force the integrated graphics to be used all of the time, so I can enjoy maximum battery life. Thanks.

    Read the article
