Search Results

Search found 30785 results on 1232 pages for 'solution explorer'.

  • How to edit known_hosts when several hosts share the same IP and DNS name?

    - by Frédéric Grosshans
    I regularly ssh into a computer which is a dual-boot OS X / Linux machine. The two OS instances do not share the same host key, so they can be seen as two hosts sharing the same IP and DNS name. Let's say the IP is 192.168.0.9, and the names are hostname and hostname.domainname. As far as I understand, the solution for connecting to both hosts is to add them both to the ~/.ssh/known_hosts file. However, that is easier said than done, because the file is hashed and probably has several entries per host (192.168.0.9, hostname, hostname.domainname). As a consequence, I get the following warning: "Warning: the ECDSA host key for 'hostname' differs from the key for the IP address '192.168.0.9'". Is there an easy way to edit the known_hosts file while keeping the hashes? For example, how can I find the lines corresponding to a given hostname? How can I generate the hashes for some known hosts? The ideal solution would allow me to connect seamlessly to this computer with ssh, no matter whether I call it 192.168.0.9, hostname or hostname.domainname, nor whether it uses its Linux host key or its OS X host key. However, I still want to receive a warning if there is a real man-in-the-middle attack, i.e. if a key other than these two is used.
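
    A sketch of how this can be done with stock OpenSSH tools, with no manual hash editing (assuming a reasonably recent OpenSSH; run the ssh-keyscan step once while each OS is booted, so both host keys end up in the file):

        # find the (hashed) known_hosts lines matching a name or IP
        ssh-keygen -F hostname

        # drop stale entries for each name and for the IP
        ssh-keygen -R hostname
        ssh-keygen -R 192.168.0.9

        # append hashed entries for the currently booted OS's key, under all three names
        ssh-keyscan -H 192.168.0.9 hostname hostname.domainname >> ~/.ssh/known_hosts

        # optional, in ~/.ssh/config: stop cross-checking the name's key against
        # the IP's key, which is what triggers the "differs from the key for the
        # IP address" warning
        #   Host hostname hostname.domainname
        #       CheckHostIP no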

  • Paid antivirus solutions for Windows

    - by AP Erebus
    NOTE: If you're looking for recommendations on free antivirus, check this question: http://superuser.com/questions/2/free-antivirus-solutions-for-windows Much like the above, I'm curious about opinions on the best PAID antivirus solution, personal or commercial. Enterprise solutions are welcome, and as much detail as possible regarding costs is welcome too. Personally I'm looking for a licence that covers more than one computer install and comes with quality technical support, for personal use. As in the free antivirus question: See if your antivirus of choice is already listed. Chances are it is. If you spot an answer that mentions one you already use, vote that up if you think it's a good solution. If you know of a feature or drawback not listed, or can include experiences in dealing with it, please edit the answer accordingly. If you know of any that can also be used at work, please point this out. This covers all Windows platforms from XP through Vista and Windows 7. If you see an existing entry that needs an update or want to add your testimonial, please do.

  • Splitting build across the network?

    - by Dandikas
    Is there a known solution for splitting the build process across machines on the network? Use case: We are an average software development company. We own around 50 development workstations (Quad Core 2.66GHz, 4 GB RAM, 200 GB RAID). Needless to say, at any given moment not every machine is loaded to the max. There are 5 to 15 projects running simultaneously at any given moment. Obviously all of them are continuously built on a server, then deployed to the proper environment. A single project build takes from 3 to 15 minutes. The problem: Whenever we build 5 projects in a row, the last project is ready after around 25-50 minutes. Building in parallel does not solve the problem (the build is only part of the game; then you need to deploy, run tests, etc.). YES, the correct solution is to add another build server, but "that involves buying new expensive hardware, and we already spent a lot!". Yeah, right (damn them)! Anyway. What about splitting the build among developers' workstations? Let's say whenever we need to build project "A" we check 5 workstations and start the build on all that are not overloaded. The build can be cancelled by a developer if he really needs all the power of his machine, as long as there is at least 1 machine still building. After the build is finished, deployment can be performed to the proper environment (hosted on some server, not on a workstation :) ). The bigger the company, the more this makes sense to me. Has anyone tried something like this? Are there any good practices? Any helpful software?
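
    For C-family code this exact pattern already exists as distcc, which farms compile jobs out to idle peers; a minimal sketch (the host names are hypothetical, and for .NET or Java builds you would need an equivalent distributed-build tool instead):

        # on each volunteer workstation: run the distcc daemon,
        # accepting jobs from the build server's subnet
        distccd --daemon --allow 192.168.0.0/24

        # on the build server: list the helpers, then build through distcc
        export DISTCC_HOSTS='localhost workstation1 workstation2 workstation3'
        make -j12 CC=distcc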

  • Share Exchange Calendar Outside Organization

    - by CalCurious
    I'm trying to figure out the best way to meet a user's (Corp-A-User) request to share their calendar with someone at another company (Corp-B-User). We're running Microsoft SBS 2008 with Exchange 2007 and SharePoint. The remote user is running Exchange, version unknown. Corp-A-User wants to give Corp-B-User the ability to create appointments on Corp-A-User's calendar. This will naturally require sharing of Free/Busy information. Corp-A-User naturally lacks the vision to see ANY problem with giving Corp-B-User full access to their calendar. But I see the problems with that and would prefer that Corp-B-User have only the ability to see Free/Busy and create appointments. Most of the external publishing options I have thought of, such as WebDAV, allow displaying a user's calendar, but there are problems with security and with the ability to create appointments. Right now, I'm thinking the cleanest solution would be to use a Google calendar along with Google Calendar Sync for the two users' Outlook clients. But I'm not sure there isn't a better way, and I hate the idea of pushing a corporate calendar up to Google. Not to mention the issues likely to pop up from the multiple sync paths. Does anyone have a good solution for this scenario that they would be willing to share?

  • CDN Rerouting on 404 (file not yet in synch with original storage)

    - by Alan Ristic
    Here is the problem. I've set up my app (on EC2) to store uploaded images directly on Amazon S3. I'd like to be able to serve static files (CDN) from my 'home' server, so I wrote a script that syncs from S3. But there is a window of (at least) one minute before things are in sync. I see two solutions to the problem of pics not yet being available on the 'home' server: 1. I write a script on EC2 (where the app resides) to fetch from the DB the pics that have a status of "not-yet-synched", which is the default state when a user uploads a picture. The script then pings the picture and, if it gets an OK response, updates the DB from "not-yet-synched" to "synched". 2. The preferred solution would be to let Apache (in this case) redirect a request for an image to S3 if it sees a 404 (i.e. it doesn't find the requested image). This way I wouldn't need the script from solution 1. So which approach do you suggest I take to solve this redundancy problem? Or what is the practice in production environments? To clarify further: I'd like to serve images from the 'home' server first, and if that fails, serve them from S3. Tnx, Alan
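
    For option 2, Apache's mod_rewrite can do the fallback without touching the app; a hedged sketch (the bucket name and path prefix are made up):

        # in the vhost that serves the synced static files
        RewriteEngine On
        # if the requested file is not (yet) on local disk...
        RewriteCond %{REQUEST_FILENAME} !-f
        # ...send the client to the S3 copy; 302 so browsers don't cache the detour
        RewriteRule ^/images/(.*)$ http://mybucket.s3.amazonaws.com/images/$1 [R=302,L]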

  • SSL wildcard certificates and leading 'www'

    - by user173326
    I've got a wildcard SSL certificate for *.mydomain.com. I'm using nginx, redirecting all traffic from http to https, and also rewriting URLs to drop a leading www (if there is one). So I have: 1) http://subdomain.mydomain.com ---> https://subdomain.mydomain.com 2) http://www.subdomain.mydomain.com ---> https://subdomain.mydomain.com 3) https://www.subdomain.mydomain.com ---> https://subdomain.mydomain.com 4) https://subdomain.mydomain.com ---> https://subdomain.mydomain.com However, since my cert is for *.mydomain.com, case 3 gets an SSL error in Chrome ('This is probably not the site that you are looking for!'), though if you click through it gets redirected and all is well. I understand why: the initial connection is for https with a www (two levels of subdomains), which doesn't match what is on the wildcard certificate. I thought a solution would be to get an additional cert for *.*.mydomain.com to cover www.*.mydomain.com. But it seems that won't work. I spoke to agents from Namecheap and Comodo, and both said *.*.mydomain.com is not possible. I also came across this: https://support.quovadisglobal.com/KB/a60/will-ssl-work-with-multilevel-wildcards.aspx Is there a solution to this? To be able to cover www.*.mydomain.com?
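
    Since CAs won't issue *.*.mydomain.com, the usual fix is a SAN (multi-domain) certificate that lists the specific www.subdomain.mydomain.com names in use alongside the wildcard. To verify what a served certificate actually covers (plain OpenSSL; only the hostname is assumed):

        echo | openssl s_client -connect www.subdomain.mydomain.com:443 \
            -servername www.subdomain.mydomain.com 2>/dev/null \
          | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'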

  • Is it possible to change "working directory" of XeTeX?

    - by Herbert Sitz
    Using XeTeX, there are many working files that get created in the process of producing the PDF, and they litter the directory where my main .tex file is. Is it possible to change the working directory of XeTeX so that it stores all these scratch files in some other directory, out of the way? There is a previous question on Superuser.com that discusses a utility that cleans up the working files by deleting them after they're produced: http://superuser.com/questions/95712/how-to-avoid-littering-ones-tex-directories-with-intermediate-files That solution doesn't work for me since I'm using XeTeX, but it also seems like it would be preferable to simply designate a "scratch" directory where all working files are saved. I haven't been able to find any info on how to do it, though. Is there a way? (My question is prompted partly by the fact that I often work with files in a directory that is shared using Dropbox, so it creates a lot of unnecessary traffic if files are getting created and destroyed willy-nilly. I don't know if it affects speed in any way, but the idea of having a separate working directory that is not shared/replicated by Dropbox would be a cleaner solution, even if I could use the method suggested in the earlier thread.)
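
    One approach, assuming a TeX Live or MiKTeX xelatex: the engine accepts an -output-directory option that diverts the .aux/.log/.toc clutter (and the PDF) somewhere else, e.g. outside the Dropbox-synced tree:

        mkdir -p /tmp/texbuild
        xelatex -output-directory=/tmp/texbuild main.tex
        # the PDF lands there too; copy it back beside the source if wanted
        cp /tmp/texbuild/main.pdf .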

  • Membership in two domains

    - by imagodei
    Hello! I would like your suggestions for an effective solution for a person who needs to access resources in two Windows domains and wants to use one computer. It's about our CEO, who has accepted a second position in another company. Accessing files and folders isn't a big problem. The greatest challenge I see is that he wants to conveniently access Exchange accounts in both companies; he would like to send and receive mail in a single Outlook if possible (two profiles?). There is also a challenge with calendars: he would like to have one calendar for all activities from both Exchange accounts. Creating a POP3 account for accessing the second Exchange server is a last resort, because obviously there is a problem with scheduling meetings and other calendar-related tasks. Forwarding and receiving all mail/tasks on the primary Exchange server is inconvenient because simply replying to the original sender is disabled; and also, when manually changing the recipient, he will receive mail from the wrong address. We were considering virtualisation, that is, setting up a virtual machine inside the existing installation, joining this virtual computer to the second domain, and then installing another MS Outlook. This would of course mean two different Outlook accounts and two different calendars, but it would at least enable our CEO to access all information from a single laptop. Does anyone have any other idea? I know setting up two domains on a single computer is a no-go (without much hacking at least), but effective workarounds are appreciated. The thing I am looking for here is high usage/efficiency/productivity, but also a solution that is elegant from the administration point of view. Thank you very much (if you managed to read this through, this is a good sign ^_^ )

  • Walkthrough/guide for building an application server for a multi-tenant web app [on hold]

    - by Khalid Adisendjaja
    The web app will detect a subdomain such as tenant1.app.com, tenant2.app.com, etc. to identify the tenant environment; each tenant environment will have different database credentials (port, db name, etc.) but still connect to the same database server. Each tenant should use app.com as their main domain; using their own domain is prohibited. Each tenant will have their own REST API endpoint, such as tenant1.app.com/api/v1/xxxx, tenant2.app.com/api/v1/xxxx, tenant3.app.com/api/v1/xxxx. I've come to a simple solution by setting a wildcard subdomain (*.app.com) in the webserver's (Apache/Nginx) vhost configuration file. I have googled many concepts for building a multi-tenant app server but still don't understand how it is really done, what the right way to do it is, and what is actually required for this task. So I've come to these questions: Do I need a proxy server, DNS masking, etc.? How do I monitor each tenant's activity? What about server performance, load balancing, and scalability? How do I set up an SSL certificate for each tenant? What about application cache for each tenant? Is this setup reliable for production? Etc. I have very little experience with server infrastructure, so I'm looking for a DIY walkthrough, a step-by-step guide, or a sophisticated solution ready to be implemented in production.
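
    For the wildcard vhost itself, nginx can capture the tenant name straight from the Host header with a named regex; a sketch under assumptions (the backend address and the X-Tenant header are made up, and the app still has to map the tenant to its DB credentials):

        server {
            listen 80;
            # any subdomain of app.com; the first label becomes $tenant
            server_name ~^(?<tenant>[a-z0-9-]+)\.app\.com$;

            location / {
                proxy_pass http://127.0.0.1:8080;
                proxy_set_header Host $host;
                # hand the tenant to the application layer
                proxy_set_header X-Tenant $tenant;
            }
        }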

  • How do I load balance between two Linux machines?

    - by William Hilsum
    Inspired by the Stack Overflow network, I am now obsessed with HAProxy and trying to use it myself. At the moment, each HAProxy box has two network cards (well, two configured; I can have a maximum of 4 and wasn't sure if they needed their own one for management between the boxes). On both machines, the backend one (eth1) has a private IP that goes to a switch connected to the webservers, and the front-facing one (eth0) has a public internet IP that is routed straight through. In addition, I have created an additional virtual IP for eth0 called eth0:0, which has a third public IP address. I just about get how to use it for load balancing between the multiple web servers behind it, but I am failing to load balance between the two HAProxy boxes - they appear to fight for the virtual IP, and this does not appear to be a smart solution. Now, by using the virtual shared IP address, this setup appears to work and does seem to give me maximum uptime, but is this the correct way to do it, or is there a smarter way? I have been looking at other Linux packages such as keepalived, but I have only been using Linux (server) for a week now and am at the limits of my understanding. Has anyone done this before, and can you advise anything for maximum uptime?
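
    keepalived is indeed the usual companion here: it runs VRRP between the two HAProxy boxes so exactly one of them holds the shared IP at any moment, with automatic failover. A minimal sketch for /etc/keepalived/keepalived.conf (the interface name, router id, and VIP are placeholders):

        vrrp_instance VI_1 {
            state MASTER          # BACKUP on the second box
            interface eth0
            virtual_router_id 51
            priority 101          # lower (e.g. 100) on the backup
            advert_int 1
            virtual_ipaddress {
                203.0.113.10      # the shared public VIP clients hit
            }
        }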

  • Preferred mail system/server for a company?

    - by Trevoke
    Say you are responsible for setting up an email solution at a company. Which would be your choice? I know of the following options, but many of them not well: Gordano Mail System, Exchange, Exim, Postfix, Qmail, Zimbra. Having used it for a little over two years, I really, really like Gordano Mail System. They offer a whole bunch of things, like calendaring, anti-spam, anti-virus, extremely complete and filterable logging options, aliases, a customizable webmail interface... And their software can be installed on either a Windows or a Linux OS. In addition, their support is top-notch and their knowledgebase comprehensive (and, I will admit with a touch of pride, I have contributed, with my questions, to the addition of a few articles in there). Of course, they're not free, which can be a problem, but they're not Exchange, and they do offer pretty much everything that Exchange offers -- which is great if you want to stay away from that but need all the features. Although, if you need a BlackBerry Exchange Server or something similar, I'm not sure what you should go for. So... what would your choice be? Why? I've never played with a more DIY email solution, but I'm sure many people here have and wouldn't trade their setup for the world :)

  • How do I collect SNMP readings from intermittently-connected sites?

    - by Luke404
    I am collecting SNMP data on-site for a number of systems, currently using Cacti. These systems are spread across a number of sites that aren't always connected to the internet, but I also need to centralize the data on a single system (a datacenter-housed server) and get graphs out of it. If I directly polled the remote systems with a centralized Cacti, I'd lose data whenever a site is not connected to the internet. So I should record data on-site (I have a server at each site and I can run whatever I want on it) and then 'sync' everything to the central system. One hack could be running Cacti, or rrdtool directly, on-site and then periodically rsyncing the RRD data to the central Cacti system, but that doesn't sound like a 'clean' solution: every RRD would have to be defined in both places, and rsync scripts set up with the specific file names. Can you suggest a better solution? Cacti is not a requirement, but I'd like to use something like it on the central system. The on-site systems only need to collect data; I don't need to graph it there or manage user rights to view data and stuff like that. Users will only access the centralized system.

  • Send keystrokes simultaneously to both host and slave over internet?

    - by donodarazao
    I would like to watch movies with a friend who lives far away from me. For this, playback should be synchronized on both our PCs. However, we have some constraints: Due to our low-bandwidth internet, any form of streaming solution wouldn't work. We do, however, both have the same copy of the movie on our hard disks. We use movies to learn languages, and because of this we very frequently pause and rewind. The typical "3...2...1...go!" solution over Skype wouldn't work because it would soon get out of sync. I imagine an approach that sends keystrokes simultaneously to both our PCs would work (for example, if I press space to pause the movie on my PC, space should also be sent to his PC). Any ideas how this could be realized? I looked into Synergy and Input Director, but neither seems to be an option, because I don't want to see the desktop of my friend, I want to see my desktop; and keystrokes should be sent simultaneously to both PCs, not just to one PC. We both have Windows 7 x64, and we might use any media player (VLC, XBMC, ...).

  • Install and enforce a scheduled task across a Windows domain

    - by Ricket
    We have a small domain of about 70 Windows computers (XP and 7). We want to schedule a command (an update mechanism) to run on all computers periodically, and we want the task to run regardless of the computer's connection to our network (i.e. the task should run even on a laptop that isn't connected to our VPN). We have a Microsoft System Center Essentials 2010 server, so that might come in handy. The options I see are these: 1. Do it completely manually: install the scheduled task by hand or remotely using psexec (and the at command?) for each computer on our network; enforce that newly imaged computers have this task installed on them before being deployed to the employee, or build the task into the image. High initial cost (having to do this for each of 70 computers), though building it into the image might work... But there is some maintenance in making sure the task is added to everything, and I fear that a year or two down the road we will have forgotten about it, or gotten sloppy, or had new IT employees who miss this step, and some computers won't have the task. 2. Have one of our servers run a script that loops through all computers and psexec's the command on each one -- but it would only run on running, connected computers, so this solution wouldn't work. I suspect SCE could do something like this too, but again, this is not a good solution. Neither of these is ideal, and I'm certain there is a better way to do it -- right? What is the best way to accomplish this task?
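
    If Group Policy Preferences are available (Server 2008-era tooling), a Scheduled Tasks preference item is the low-maintenance route: every domain member picks the task up automatically, and once installed it keeps running while offline. Failing that, a one-time push with schtasks covers the existing fleet; a hedged batch sketch (the task name, script path, and computers.txt are made up):

        @echo off
        rem install/overwrite the task on every machine listed in computers.txt
        for /f %%C in (computers.txt) do (
            schtasks /create /s %%C /ru SYSTEM /tn "NightlyUpdate" ^
                /tr "C:\scripts\update.cmd" /sc daily /st 03:00 /f
        )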

  • What program sent which packet to the network [closed]

    - by Erik Johansson
    I would like to have a tcpdump-like program that shows which program sent a specific packet, instead of just getting the port number. This is a generic problem I've had on and off: sometimes when you have an old tcpdump file lying around, you have no way to find out what program was sending that data. The solution in "How can I identify which process is making UDP traffic on Linux?" is an indication that I could solve this with auditd, dTrace, OProfile or SystemTap, but it doesn't show how to do it. I.e. it doesn't show the source port of the program calling bind(). The problem I had was strange UDP packets, and since those ports are so short-lived, it took me a while to solve the issue. I solved it by running an ugly hack similar to: while true; do date +%s.%N; netstat -panut; done So I'm after either a method better than this hack, a replacement for tcpdump, or some way to get this info from the kernel so I can patch tcpdump. EDIT: This was asked on superuser ("tracking what programs sends to net"), with no good solution though.
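
    A sketch of the auditd route mentioned above (assumes auditd is installed and running; the rule key is arbitrary). It logs the syscalls with pid and executable, which can then be lined up with capture timestamps:

        # watch outbound connections on x86_64 (add -F arch=b32 for 32-bit binaries;
        # add -S sendto to also catch unconnected UDP sends)
        auditctl -a always,exit -F arch=b64 -S connect -k netconn

        # afterwards: list matching events with pid, exe and timestamps decoded
        ausearch -k netconn --interpret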

  • What is the optimum way to secure a company wide wiki?

    - by Mark Robinson
    We have a wiki which is used by over half our company. Generally it has been very positively received. However, there is a concern over security - not letting confidential information fall into the wrong hands (i.e. competitors'). The default answer is to create a complicated security matrix defining who can read which document (wiki page) based on who created it. Personally I think this mainly solves the wrong problem, because it creates barriers within the company instead of a barrier to the external world. But some are concerned that people at a customer site might share information with the customer, which then goes to a competitor. The administration of such a matrix is a nightmare, because (1) the matrix is based on departments and not projects (this is a matrix organisation), and (2) in a wiki all pages are by definition dynamic, so what is confidential today might not be confidential tomorrow (but the history is always readable!). Apart from the security matrix, we've considered restricting content on the wiki to non-super-secret stuff, but of course that needs to be monitored. Another solution (the current one) is to monitor views and report anything suspicious (e.g. one person at a customer site having 2000 views in two days was reported). Again, this is not ideal, because a high view count does not directly imply a wrong motive. Does anyone have a better solution? How can a company-wide wiki be made secure and yet keep its low-threshold USP? BTW we use MediaWiki with Lockdown to exclude some administrative staff.

  • Monospace font which supports at least both of Korean hangul and the Georgian alphabet?

    - by hippietrail
    Being both a language enthusiast and a programmer, I often find myself doing programming or text processing involving foreign alphabets and scripts. One annoyance, however, is that CJK fonts (those which support Chinese, Japanese, and/or Korean) usually only contain glyphs for Latin, Greek, and Cyrillic at best. Often the Asian glyphs will be beautiful but the other glyphs can be quite ugly. Just as often, text editors only let you choose a single font - not one for CJKV and another for everything else, each used to render the appropriate characters. Korean is one of the languages I'm most interested in currently. I only need hangul/hangeul for monospaced editing; hanja isn't common enough to be a problem. Another of the languages I'm currently involved in is Georgian, which has its own alphabet that is a little exotic but has pretty good support in common fonts on Windows and *nix. But I have as yet been unable to find a font with good Korean glyphs and also Georgian glyphs. My editor of choice is gVim, so an answer telling me how to set it to use two fonts together would be just as good. Currently I'm using it mostly under Windows 7, so a Vim-specific solution would be needed rather than a *nix-specific solution.
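
    gVim can in fact use two fonts together: 'guifont' covers single-width glyphs and 'guifontwide' covers double-width (CJK, including hangul) glyphs. A sketch for _vimrc on Windows - the font choices are only examples (DejaVu Sans Mono has Georgian coverage; Malgun Gothic ships with Windows 7 for Korean):

        " single-width glyphs: Latin, Cyrillic, Georgian, ...
        set guifont=DejaVu\ Sans\ Mono:h11
        " double-width glyphs: hangul and other CJK
        set guifontwide=Malgun\ Gothic:h11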

  • Does this exist: a standardized way of documenting a file-system structure

    - by eegg
    At work, I'm in charge of maintaining the organization of a whole lot of varied data on a standard file-system. Part of this is coming up with sensible classification (by similarity, need, read/write access, etc.), but the bigger part is actually documenting it: what documents/files/media should go where, what should not be in this directory, "for something slightly different, see ../../other-dir", etc. At the moment, I've documented this using a plaintext file filing.txt in every directory I want to document. If someone is unsure what's meant to be in any directory, they read that file. This works alright, but it seems odd that I have this primitive custom solution to a problem that any maintainer of a non-trivial directory structure must experience. Every company I've known of, for example, has some kind of shared file-system where agreed terminology for categorization is important. In my experience, people just have to learn what's what by trial-and-error and experimentation. So allow me to propose a better solution, and hopefully you can tell me if it exists. Any directory on any filesystem can have a hidden plaintext file named .filing. Its contents are descriptive human language. It uses some markup like Markdown, with little more than bold, italic, and (relative) hyperlinks to other directories. Now a suitably-enabled file browser will check for a file named .filing whenever it displays a directory. If it exists, its contents are parsed and displayed in an unobtrusive pane near the directory-path widget. Any links therein can be clicked, and the user will be taken to the target directory of that link. I think that the effort of implementing such a standard would pay back many times over in usability gains. We would have, say, plugins for Nautilus, Konqueror, etc. It could be used to display directory information in the standard file lists served by webservers. And so on. So, question: does such a thing exist? If not, why not? Do people think it's a worthwhile idea?
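
    I'm not aware of an established standard, but the idea is easy to approximate today; for instance, a shell-level sketch (bash, and the .filing convention is of course hypothetical) that prints the notes whenever you enter a documented directory:

        # show the directory's .filing notes on every directory change
        cd() {
            builtin cd "$@" || return
            [ -f .filing ] && command cat .filing
            return 0
        }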

  • How do you use VIM to edit tabular data (tables)? Specifically, BIND (named) DNS db files.

    - by Richard Bronosky
    I'm usually a purist when it comes to vimming. I don't like remapping keys or learning to rely on a bunch of plugins. I like to feel just as powerful on foreign boxen as I do on my own dev box. I do, however, believe in syntax files. Even though the solution may not be a syntax file (bindzone.vim is what I use), I want it badly enough to do whatever. I regularly view or edit tab-delimited (or comma-delimited, but that would be a bonus) data. I hate having to set my tabstop to some ridiculous number in order to have everything line up. Example: the BIND zone files are ~40+,6,2,5,15+ characters per column. So even though the data would fit on a single screen, with ts=40 it doesn't. I have been searching for a "dynamic tab size" solution for years, but no luck. I hate that my only good way of editing or even visualizing tabular data is to scp it to my workstation and open it in OpenOffice. There has to be a better way.
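
    Two partial answers, depending on vintage: for read-only viewing, the column utility computes per-column widths on the fly; and newer Vim (8.0+ built with +vartabs) added per-column tab stops - effectively the "dynamic tab size" asked for. A sketch (file name and widths are examples):

        # outside Vim: render with self-sizing columns (util-linux column)
        column -s "$(printf '\t')" -t db.example.zone | less -S

        # inside a recent Vim: one tab stop per column instead of ts=40
        #   :set vartabstop=45,6,2,5,15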

  • How can I batch convert SVG files containing text to PDF files (specifically on CentOS 5.3 x86_64)?

    - by molecules
    I would like to programmatically convert SVG files to PDF files. However, the SVG files contain text that must be searchable in the generated PDF files. Also, it has to work on Red Hat Enterprise Linux 5.3 or CentOS 5.3 for the x86_64 architecture. It would be nice if it were open source, or at least not very expensive. Here is what I've tried. All of these, except Batik, work fine on Debian Lenny. Inkscape: I can get it installed using autopackages from http://inkscape.modevia.com/ap, but when I use it from the command line, the text is not searchable. Batik rasterizer [sic]: when it converts SVG files to PDF files, the text is no longer searchable. svg2pdf: the source for this and several of its dependencies is available to download; I have been trying to get it to compile on CentOS, but haven't had success yet. I found a precompiled version for Debian x86_64, but it doesn't work on CentOS. rsvg-convert: the generated PDF isn't searchable on CentOS 5.3; perhaps installing a newer version of cairo would help. Thanks to DaveParillo for mentioning rsvg-convert (on SuperUser). SOLUTION (but perhaps some of the above will still be useful to the reader): PrinceXML works fine on CentOS when installed from source. For some reason it doesn't work when installed from the .rpm. Thanks Erik Dahlström! (He provided the solution that worked for my case on Stack Overflow.) Cross-posted on Stack Overflow.
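
    Whichever converter ends up producing searchable text, the batch part is just a shell loop; a sketch using rsvg-convert (swap in the converter that works for your files - Prince in the asker's case):

        # convert every SVG in the current directory to a same-named PDF
        for f in *.svg; do
            rsvg-convert -f pdf -o "${f%.svg}.pdf" "$f"
        done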

  • Autologin 2 Windows users OR Login another user from the desktop

    - by fpdragon
    I'm using two Windows users on my HTPC at the same time. One is just for watching videos and one is for administration via remote desktop. This setup is quite ideal for me, since Windows can handle multiple concurrent logins with the "RDP concurrent hack" (Google it). The problem is, I want both users to be logged in automatically when the PC starts. It should be possible to watch TV, and the admin user should also be logged in automatically to start my scripts and other tasks, even if I haven't logged in via remote desktop manually. Later, when I want to administer my HTPC, I can just connect to the admin user via RDP without interrupting the video playback on the HTPC's actual screen, and check on my cleanup tasks, downloads, ... which have already been executing for this admin user. But so far I have found no solution to automatically log in user A from a user B desktop, and I have also found no solution to auto-login both users immediately at startup. As a workaround, I have to fire up my other notebook and log in once with the remote user via RDP. From then on, the remote admin user runs concurrently with the main user in the background of the machine. The other workaround would be... after startup, switch user from the main user to the admin user and then back again. But that also requires manual steps. I'm on a Windows 8 system right now, but any info for Win7 or XP would also be interesting. Thanks a lot for all ideas. PS: just to prevent useless posts... don't tell me that only one user can be logged in to Windows. ;)

  • Update a bootable OS X drive clone with rsync?

    - by Joe
    The question: is it possible to keep a bootable backup drive clone of OS X updated with rsync? If rsync is not a viable option, are there alternatives? The Setup: my situation is as shown above. One internal Samsung 840 SSD [120g] in use as my OS X 10.8 boot disk on a recent-model Mac Mini. I have successfully cloned that drive with Disk Utility to a 125g partition of another HDD in an external USB 3 enclosure, and at that point I am able to boot from it. The Goal: as my last system went out in a fiery blaze, taking much valuable data with it, I have a new respect for a proper backup solution and really want to do this right. My goal is to achieve an automated differential backup/update from disk A to disk B while, most importantly, maintaining bootability on the external drive. And I would prefer to do this differentially to minimize stress on the drives. Hence rsync was the first thing to come to mind. What I have tried: following along with Jamie Zawinski's differential Mac bootable backup solution, running this manually initially worked - I tested it with only a very minuscule file change and everything was fine; the external booted and all. Now, after subsequent passes, rsync fails, throwing errors particularly relating to updating 'boot.efi' (I'm not at the machine currently; I will update with the precise log message once I return home). Is this a drive/partition size issue? Does rsync require more space? If it can't be done, are there any alternatives? I've heard whispers of dd.
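
    The shape of that approach, for reference - a hedged sketch, assuming an rsync 3.x from MacPorts or Homebrew (Apple's bundled 2.6.9 handles OS X metadata poorly) and a made-up volume name:

        # differential one-way update of the clone; -x stays on the boot volume,
        # -A/-X carry ACLs and extended attributes
        sudo rsync -vaxAX --delete --ignore-errors / /Volumes/Backup/

        # re-bless the clone afterwards so the firmware still considers it bootable
        sudo bless --folder /Volumes/Backup/System/Library/CoreServices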

  • using a second computer as a mere screen/monitor in X (VNC?)

    - by lara michaels
    Hello! My goal is to use three monitors with my Linux system. It is a laptop, so adding another video card is not the easiest solution. (I have investigated a number of such options: getting a docking station with a PCI slot, USB/CardBus VGA adapters, etc., and for the time being don't want to go that way.) I am wondering if using an older desktop+screen I have lying around as the third "monitor" might be the easiest solution, if only there is a way to get it to work as a seamless, integrated desktop. I was wondering if I can use VNC or perhaps X itself (?) to achieve the following: computer A is my main computer; it has all my files, etc. Computer B is used just to display on an additional screen. Keyboard+mouse are connected to computer A. Use VNC or X to connect the two so that computer B shows an X screen that behaves just as if it were a third physical screen connected to computer A. I don't know if the last point is clear, but what I mean is that I would like to be able to: have my window manager assign/move virtual desktops across all three screens; move windows back and forth between the screens attached to computer A and the screen of computer B; copy something in an app shown on a screen of computer A and paste it into an app shown on the screen attached to computer B; and access the filesystem on my main computer (A) when using applications that are shown on the screen attached to computer B. Basically, I would like X to treat computer B as nothing but a third physical screen... Is this doable? : ) ~lara
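
    One X-native candidate for this is Xdmx (Distributed Multihead X), which aggregates the displays of several machines into one logical X server, with window movement and clipboard across screens. It is old and famously fragile, so treat this as a hedged sketch (the hostname is made up, and computer B must already be running an X server that accepts connections from A):

        # on computer A: span A's local display and B's display as one desktop
        Xdmx :1 +xinerama -display :0 -display computerB:0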

  • Determine if the "yes" is necessary when doing an SCP

    - by glowcoder
    I'm writing a Groovy script to do an SCP. Note that I haven't run it yet, because the rest of it isn't finished. Now, if you're doing an scp to a host for the first time, you have to accept the host's fingerprint; on future occasions, you don't. Because I get 3 tries for the password and really only need 1 (it's not like the script will mistype the password... if it's wrong, it's wrong!), my current solution is to pipe in "yes" as the first password attempt. This way, it will accept the fingerprint if necessary and use the correct password as the first real attempt; if the fingerprint prompt wasn't needed, it sends "yes" as the first attempt and the correct password as the second. However, I feel this is not a very robust solution, and I know that if I were a customer I would not like seeing "incorrect password" in my output. Especially if it fails for another reason, it would be an incredibly annoying misnomer. What follows is the appropriate section of the script in question. I am open to any tactics that involve using scp (or accomplishing the file transfer) in a different way. I just want to get the job done. I'm even open to shell scripting, although I'm not the best at it.

        def command = []
        command.add('scp')
        command.add(srcusername + '@' + srcrepo + ':' + srcpath)
        command.add(tarusername + '@' + tarrepo + ':' + tarpath)
        def process = command.execute()
        // Groovy's Process helper is consumeProcessOutput(), not consumeOutput()
        process.consumeProcessOutput(System.out, System.err)
        process << "yes" << LS << tarpassword << LS
        process << "yes" << LS << srcpassword << LS
        // note the capital F: Process#waitFor()
        process.waitFor()

    Thanks so much, glowcoder
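
    Two caveats worth noting, plus a sketch. First, scp normally reads passwords from the controlling terminal rather than stdin, so the piped password may never be seen at all - key-based authentication is the usual way out. Second, the fingerprint prompt specifically can be eliminated ahead of time, so "yes" never enters the picture (the host names below are the script's own variables):

        # pre-accept both hosts' keys once, hashed, before the script ever runs
        ssh-keyscan -H srcrepo tarrepo >> ~/.ssh/known_hosts

        # or per invocation: skip the interactive prompt entirely
        scp -o StrictHostKeyChecking=no source destination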

  • ADUC Exchange tabs - Windows 7 & Exchange 2003

    - by John Gardeniers
    I have the admin tools installed on a Win 7 64-bit machine but would like to see the Exchange tabs in ADUC. Googling shows this is a popular request, and the most common solution (and the only one which appears to work for everyone) is to install Exchange Server Management for Vista using esmvista.msi /q. That may well have worked on beta versions of Win 7, but it is definitely not working with my OEM copy of Win 7. Can this perhaps be made to work by installing from an Exchange 2007 CD (which I don't have at this time), bearing in mind that we have Exchange 2003 only? Can someone please offer a solution that works? I figure some of you must have solved this by now. Edit: I don't know if this is relevant or not, but the Win 7 machine is also running Office 2010 Pro. About the bounty: I had intended to award the bounty to gWaldo for having taken the extra steps to try to help me with this issue. However, as I was about to do so, my screen started scrolling and I actually clicked on the answer posted by natxo asenjo, whose answer offended me, without realising it. Perhaps if I hadn't been rushing I might have noticed, but that's now history.
