Search Results

Search found 2484 results on 100 pages for 'maintain'.

Page 66/100 | < Previous Page | 62 63 64 65 66 67 68 69 70 71 72 73  | Next Page >

  • Developing web sites that imitate desktop apps. How to fight that paradigm? [closed]

    - by user1598390
    Suppose there's a company where web sites/apps are designed to resemble desktop apps. They struggle to add: splash screens; drop-down menus; tab pages; pages that don't grow downward with content (the content sits inside a scrollable area, so the page has a fixed size, as if resembling the one-screen limitation of desktop apps); modal windows, pop-ups, etc.; tree views; absolutely no access to content unless you log in first, even for non-sensitive content (after the splash screen disappears, you are presented with a login screen); no links - just simulated buttons; a fixed page size; no way to open a link in another tab; a Print button that prints directly (not showing a printable page, so the user can't print via the browser's print command); progress bars for loading content, even when the browser indicates it with its own animation; fonts and colors that emulate a desktop app made with Visual Basic, PowerBuilder, etc. Every app looks almost as if it were made in Visual Basic. They reject these elements: breadcrumbs; good old underlined links; generated/dynamic navigation and usage-based suggestions; the ability to open links in multiple tabs; pagination; printable pages; the ability to produce a URL you can save or share that links to an item, like when you send someone the link to a specific StackExchange question (the only URL is the main one); the Back button. To achieve this, tons of JavaScript is needed - lots and lots of JavaScript and Ajax code for things not related to the business, but to the need to hide/show that button, refresh this listbox, grey out that label, etc. The complexity generated by forcing one paradigm into another means most lines of code are dedicated to maintaining the illusion of a desktop app. What is the best way to change this mindset and make them embrace the web, and start producing modern web apps instead of desktop imitations? EDIT: These sites are intranet sites. Users hate these apps. They constantly whine about them, but they have to use them to do their daily work. These sites are in-house solutions; the end users have no choice but to use them. They are a "captive audience". Also, substitution will not happen because of the high costs. But at least if that mindset is changed, new developments would be more web-like.

    Read the article

  • Best language on Linux to replace manual tasks that use SSH/Telnet? [on hold]

    - by Calab
    I've been tasked to create and maintain a web-browser-based interface to replace several of the manual tasks that we perform now. I currently have a "shaky" but working program written in Perl (2779 lines) that uses basic Expect coding, but it has some limitations that require a great deal of coding to get around. Because of this I am going to do a complete rewrite and want to do it "right" this time. My question is this... What would be the best language to use to create a web-based interface to perform SSH/Telnet tasks that we would normally do manually? Keep in mind the following requirements: Runs on a CentOS Linux system v5.10. HTTP will be served by Apache 2. This is an INTRANET site and only accessible within our organization. User load will be light - no more than 5 users accessing it at one time. Perl 5.8.8, PHP 5.3.3, and Python 2.7.2 are available... not sure what other languages to check for, or what modules might be installed for each language. The web interface will need to provide progress indicators and the text output produced by the remote connection, in real time as it is generated. If we are running our process on multiple hosts, they should be in individual threads so that they can run side by side, not sequentially. I want the ability to "trap" on specific text generated by the remote host and display an alert to the user - such as when the remote host generates an error message. I would like to avoid as much client-side scripting (JavaScript/VBScript) as I can. Most users will be on Windows PCs using Chrome or IE as a browser. Users will be downloading the resulting output so they can process it as they see fit. I currently have no experience with "Ajax" or the like. Most of my coding experience is old 6809 assembly, Visual Basic 6, and whatever I can cut/paste from online examples in various languages (hence my "shaky" Perl program). My coding environment is Eclipse for remote code editing, but I prefer stuff like UltraEdit if I can get a decent syntax file for the language I'm using. I do have su access on the server, but I'm not the only one using this server, so I can't just upgrade/install blindly as I might impact other software currently running on the machine. One reason that I'm asking here, instead of searching (which I did), is that most replies were "use language 'xyz', but you need to use an external SSH connection" - like I'm using Expect in my Perl script. Most also did not agree on what language that 'xyz' should be. ...so, after this long posting, can someone offer some advice?
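
    Whatever language and framework end up on top, the core building block is "run a remote command and stream its output as it appears". As a rough sketch of that piece - assuming Python (already on the box) plus the third-party paramiko module, which would have to be installed; the host, credentials, and command below are placeholders - it could look like this:

    ```python
    import paramiko

    def stream_remote_command(host, user, password, command):
        """Run `command` on `host` over SSH, yielding output lines as they arrive."""
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(host, username=user, password=password)
        try:
            stdin, stdout, stderr = client.exec_command(command)
            for line in stdout:                      # lines appear as the remote side produces them
                yield line.rstrip("\n")
            status = stdout.channel.recv_exit_status()
            if status != 0:                          # hook for "trap on errors and alert the user"
                yield "ALERT: remote command exited with status %d" % status
        finally:
            client.close()

    # Placeholder usage - in the real app each host would get its own thread and the
    # yielded lines would be relayed to the browser as progress/output:
    # for line in stream_remote_command("host1.example.com", "admin", "secret", "uptime"):
    #     print(line)
    ```

    The web layer then only has to relay those lines (and scan them for the strings worth alerting on), which any of the three languages listed above can do.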

    Read the article

  • directory with 980MB meta data, millions of files, how to delete it? (ext3)

    - by Alexandre
    Hello, So I'm stuck with this directory: drwxrwxrwx 2 dan users 980M 2010-12-22 18:38 sessions2 The directory's contents are small - just millions of tiny little files. I want to wipe it from the filesystem but have been unable to. My first try was: find sessions2 -type f -delete and find sessions2 -type f -print0 | xargs -0 rm -f but I had to stop because both caused escalating memory usage. At one point it was using 65% of the system's memory. So I thought (no doubt incorrectly) that it had to do with the fact that dir_index was enabled on the system. Perhaps find was trying to read the entire index into memory? So I did this (foolishly): tune2fs -O^dir_index /dev/xxx Alright, so that should do it. Ran the find command above again and... same thing. Crazy memory usage. I hurriedly ran tune2fs -Odir_index /dev/xxx to re-enable dir_index, and ran to Server Fault! Two questions: 1) How do I get rid of this directory on my live system? I don't care how long it takes, as long as it uses little memory and little CPU. By the way, using nice find ... I was able to reduce CPU usage, so my problem right now is only memory usage. 2) I disabled dir_index for about 20 minutes. No doubt new files were written to the filesystem in the meanwhile. I re-enabled dir_index. Does that mean the system will not find the files that were written before dir_index was re-enabled, since their filenames will be missing from the old indexes? If so, and I know these new files aren't important, can I maintain the old indexes? If not, how do I rebuild the indexes? Can it be done on a live system? Thanks!
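
    For question 1, the general low-memory trick is to stream directory entries and unlink as you go, instead of building the whole file list first. A minimal sketch - assuming a reasonably recent Python 3 is available (os.scandir needs 3.5+, its context-manager form 3.6+) and that sessions2 contains only regular files:

    ```python
    import os

    def purge_directory(path):
        """Unlink files one at a time via a lazy directory iterator, so the full
        listing is never held in memory. Deleting while reading a directory can
        cause entries to be skipped, so loop until a pass deletes nothing."""
        while True:
            deleted = 0
            with os.scandir(path) as entries:
                for entry in entries:
                    if entry.is_file(follow_symlinks=False):
                        os.unlink(entry.path)
                        deleted += 1
            if deleted == 0:
                break
        os.rmdir(path)   # remove the (now empty) directory itself

    # purge_directory("sessions2")
    ```

    Pausing every few thousand unlinks (time.sleep) keeps the I/O pressure gentle on a live box; the same "iterate, don't list" idea applies whatever tool ends up doing the deleting.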

    Read the article

  • kickstart: reference floppy drive via %ksappend or %include

    - by virtualeyes
    Having trouble getting %ksappend or %include to work when referencing a local floppy drive. Booting off the remote server's CD-ROM drive, I am able to load the CentOS 6 minimal install image and then add ks=hd:fd0/ks-jvm.cfg to the boot params to load the kickstart init file from the floppy disk. That works fine. The problem is that I want to load a streamlined, generic init file off the floppy and then, within the init, %ksappend or %include specific config files relative to the type of server I'm building (JVM, MySQL, Apache, etc.). I do not have DHCP; networking needs to be specified statically, so %ksappend and %include both fail when attempting to reference http://some-LAN-IP/foo.cfg, since networking has not yet been set up. The kickstart setup only works when I glob the entire config into a single file, which is great, but ugly and difficult to maintain when I return later, having forgotten the original setup. At this point I'd be happy if I could get %ksappend or %include working with a floppy drive reference in the %post section; that would consolidate a lot of common boilerplate that all kickstarts will rely on (sshd_config, rsync config, resolv.conf, and so on). Thanks for providing the magic floppy drive reference that is eluding me!

    Read the article

  • Multiple authoritative DNS servers on the same IPv4 address

    - by Adrien Clerc
    I'd like to maintain a DNS tunnel on my self-hosted server at example.com. I also have a DNS server on it, which serves everything for example.com. I'm currently using dns2tcp for DNS tunneling, on the domain tunnel.example.com. NSD3 is used for serving authoritative zones, because it is both simple and secure. However, I have only one public IPv4 address, which means that NSD and dns2tcp can't listen on the same IP/port. So I'm currently using PowerDNS Recursor with the forward-zones parameter, like this: forward-zones-recurse=tunnel.example.com=1.2.3.4:5354 forward-zones=example.com=1.2.3.4:5353 This sends requests for the authoritative zone to the correct server, and likewise for tunnel requests. NSD is listening on port 5353 and dns2tcp on port 5354. However, this is bad, because the recursor needs to be open, and it actually answers any recursive query. Do you have any solution for that? I'd really prefer a solution that doesn't involve setting up BIND, but if you are in the mood to convince me, don't hesitate to do so ;) EDIT: I changed the title to be clearer.

    Read the article

  • Any Recommendations for a Web Based Large File Transfer System?

    - by Glen Richards
    I'm looking for a server software product that: Allows my users to share large files with: the general public; securely, with 1 or more people (notification via email, optionally with a token that gives them x period of time to download). Allows anyone in the general public to share files with my users, perhaps by invitation. Has to be user-friendly enough to allow my users to use this without having to bug me as the admin. It needs to be a system that we can install on our own server (we don't want shared data sitting on anyone else's server). A web-based solution. Using some kind of secure comms channel would be good too, e.g., SSH. Files to share could be over 1 GB. I found the question below; WebDAV does not sound user-friendly enough: http://serverfault.com/questions/86878/recommendations-for-a-secure-and-simple-dropbox-system I've done a lot of searching, but I can't get the search terms right. There are too many services that provide this, but I want something we can install on our own server. A last resort would be to roll my own. Any ideas appreciated. Glen EDIT: Sorry Tom and Jeff, but Glen specifically says that he's looking for a 'product', so given that I specialise in this field I thought that my expertise in this area may have been of use to him. I don't see how him writing services is going to be easy for him to maintain going forward (large IT admin overhead) or simple for his users and the general public to work with.

    Read the article

  • Multiple contacts with shared information

    - by Keith Thompson
    Background: I currently have several hundred contacts, synchronized between a Microsoft Exchange server and several mobile devices. I also save exported copies of the contacts in .vcf format. Is there a good way (application, file format, whatever) to maintain contacts with shared information? A very common scenario is that I have contacts for two or more people who live in the same house, for example: John Doe, 123 Main Street, Anytown USA, Home: 555-555-1111, Work: 555-555-2222, Mobile: 555-555-3333, E-mail: [email protected]; Jane Doe, 123 Main Street, Anytown USA, Home: 555-555-1111, Work: 555-555-4444, Mobile: 555-555-5555, E-mail: [email protected]. As you can see, both contacts have the same home address and phone number, but distinct names and work and mobile phone numbers. (Other information might also be either shared or distinct.) The applications and file formats I'm familiar with don't seem to have a good way to deal with this. If I use a single "John & Jane Doe" contact for both, it's difficult to distinguish the distinct information (if I want to call Jane's mobile phone rather than John's). If I use a separate contact for each, I have to remember to update both of them (or all of them, for N > 2) when they move or change their home phone number. An ideal solution would let me create a record containing information for their household, and have each of their contact records contain a reference to the household record, so that when I view John's contact record I see both shared and distinct information. Is there anything out there that has good support for this kind of thing? (I would think there would be, since it's a very common scenario.) (I suppose I could roll my own system that generates merged .vcf files from some extended format, but that wouldn't play well with synchronizing across multiple devices.)
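
    To make the household-record idea concrete, here is a tiny illustrative Python sketch (the field names and merge rule are assumptions for illustration, not any existing vCard extension): each person stores only their distinct fields and pulls the shared ones from a single household record.

    ```python
    household = {
        "address": "123 Main Street, Anytown USA",
        "home_phone": "555-555-1111",
    }

    people = [
        {"name": "John Doe", "work": "555-555-2222", "mobile": "555-555-3333", "household": household},
        {"name": "Jane Doe", "work": "555-555-4444", "mobile": "555-555-5555", "household": household},
    ]

    def full_contact(person):
        """Merge shared household fields with the person's own; personal values win on conflict."""
        merged = dict(person["household"])
        merged.update({k: v for k, v in person.items() if k != "household"})
        return merged

    # Changing household["home_phone"] once updates what full_contact() returns for both people.
    ```

    Generating one flat .vcf per person from full_contact() would keep existing sync targets happy while the shared data lives in exactly one place - which is essentially the "roll my own" option mentioned at the end, with all its sync caveats.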

    Read the article

  • Preventing the Windows version of Vim from destroying other file systems' permissions

    - by dborba
    I am currently using the Windows version of gVim to edit source files on a networked drive mapped to a Linux system, as well as local files created in Cygwin. The problem is that the Windows version of gVim destroys the original file permissions on the respective systems. I.e.: files on Cygwin default to 077; when edited by the Windows version of Vim they are saved as 777. This problem doesn't even occur when using ms-notepad (as well as all other editors I've tried), so I am not quite sure why gVim does it. A possible solution would be to use Cygwin's gVim for everything, but that's rather cumbersome, as it requires running an X11 environment to support it, and it causes some problems when running some commands from within gVim (or Vim, for that matter) when working on the networked drive. Any ideas how I might be able to maintain the existing file permissions? Edit: This morning, while on a different machine, the problem with Cygwin did not occur. Cygwin & gVim were the same version; however, the other machine is running WinXP while the machine the problem is occurring on runs Win7.

    Read the article

  • Why is datacenter water cooling not widespread?

    - by MainMa
    From what I read and hear about datacenters, there are not too many server rooms which use water cooling, and none of the largest datacenters use water cooling (correct me if I'm wrong). Also, it's relatively easy to buy ordinary PC components that use water cooling, while water-cooled rack servers are nearly nonexistent. On the other hand, using water can possibly (IMO): Reduce the power consumption of large datacenters, especially if it is possible to create directly cooled facilities (i.e. the facility is located near a river or the sea). Reduce noise, making it less painful for humans to work in datacenters. Reduce the space needed for the servers: at the server level, I imagine that in both rack and blade servers it's easier to pass water-cooling tubes than to waste space to allow the air to pass inside; at the datacenter level, if it's still required to keep the alleys between servers for maintenance access, the empty space under the floor and at the ceiling level used for the air can be removed. So why is water cooling not widespread, either at the datacenter level or at the rack/blade server level? Is it because: Water cooling is hardly redundant at the server level? The direct cost of a water-cooled facility is too high compared to an ordinary datacenter? It is difficult to maintain such a system (regularly cleaning a water-cooling system which uses water from a river is of course much more complicated and expensive than just vacuuming the fans)?

    Read the article

  • VMware Player 3.0 - cannot ping 32-bit guest from 64-bit (guest or host)

    - by npmj
    I'm stuck with what seems to be a bug in VMware Player (build 203739). I'm using W7 Ultimate 64-bit as the host and have a CentOS 5.4 (64-bit) guest and a Windows XP Professional SP3 (32-bit) guest. From the 64-bit machines (the host and the Linux guest) I cannot ping the Windows XP guest. Of course, I already turned off the Windows firewall in the guest and also in the host. The network is pretty basic: I'm using VMnet8 (NAT), with DHCP and port forwarding (to the Windows XP's IP). Everything is working OK; I have internet access from the host and from both guests. Port forwarding to the XP guest is working OK too. The only problem is that I cannot access the XP guest through VMnet8. I monitored the traffic using Wireshark (on the host and on the Windows guest). If I try to ping the XP guest from the host, what I see is the ARP request leaving the host, being answered by the guest, and, after that, there is no echo request leaving the host. The same occurs if I try to ping the XP guest from the CentOS guest. From the Windows XP guest I can ping both the host and the CentOS guest. From the XP guest I can access the host shares. Obviously, from the host I cannot see the XP shares (as I cannot even ping the guest). I want to maintain this setup (using NAT to share the host's internet connection). Any suggestions?

    Read the article

  • How to configure Hyper-V failover cluster to live migrate when dynamic memory runs out?

    - by Matt Johnson
    Apologies in advance that this is not a direct programming question, but I have a feeling that the solution involves custom PowerShell scripts (maybe), so this is as good a place to ask as any. I maintain a website that has a large Hyper-V cluster for SQL Servers. We are using Windows 2008 R2 SP1 and the new "dynamic memory" feature. I've already reviewed the Best Practices Guide and implemented its suggested configuration. Everything works well, except that when SQL demand increases memory pressure to expand to more memory than is available on the physical machine, the memory status goes into the "Warning" state and stays there. I assume the hypervisor is using a swap file on the host to fulfill the memory requirement, thus slowing the virtual machine down. When this happens, there are plenty of other nodes in the cluster that have available resources. I can live-migrate the virtual server over there and everything works, and the warnings go away. Now how can I automate this? I see no menu options in either Hyper-V or the Failover Cluster Manager for performing a migration or shutdown when dynamic memory goes into the warning state. Any ideas about how to script this, or monitor it and invoke the action directly, would be helpful. If the solution involves coding, PowerShell would be ideal, but I could envision this as a .NET service that monitors for this state and kicks off the migration request. I just don't know what objects are involved in doing the monitoring or kicking off the live migration. Thanks in advance.

    Read the article

  • How to generate customized sudoers files in puppet depending on the environment they're deployed to?

    - by gozu
    The sysadmins are present in the sudoers files of all environments, but other sudoers are not. Different environments all have slightly different sudoers. Most of the time, 90% of users are the same and 10% vary, so we cannot have only one sudoers file for everything. Right now, we are using Puppet with 10 different files with names like sudoers.production1, sudoers.production2, sudoers.production3, sudoers.testing1, sudoers.staging1 and so forth. Puppet then picks the file to deploy based on the server's $domain (ex: dbserver.staging1.acme.com) or $hardwaremodel. It works fine, but it's a nightmare to maintain so many files. I'd like to autogenerate sudoers files based on the server's domain and have only one big file with all the sudoers permissions for all users and all environments. Something that looks like: User_Alias ADMINS = abe, bob, carol, dave case $domain { "staging1.acme.com" { #add dev1,dev2,tester1,tester2 to sudoers file } "testing2.acme.com" { #add tester1, tester3, tester4 to sudoers file } } What's the best way to go about this? Suggestions for alternatives are welcome. I'd appreciate any tips. Update 1: For security reasons, we'd rather not concatenate a bunch of files from a folder located on a Puppet client, in case someone puts a file in there (maliciously or not) and either breaks the combined file or inserts something into it. Most importantly, for usability, we'd like to keep the number of sudoers-related files (fragment or complete) on the Puppet server to either 3 (prod/stage/test) or preferably 1 file. This file would (somehow) generate sudoers files on the Puppet server and send one customized file to each Puppet client. The purpose of this would be only searching for a username in a single file and removing it more quickly than doing it in 11 files. When adding a user to a bunch of environments, it won't be as quick, but only one file would need to be opened and looked at, greatly reducing the chances of an omission. Our sudo version is 1.6.9p8, so we can't use a /sudoers.d folder, only a sudoers file.
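
    To picture the "single master definition, generated per-environment files" approach outside Puppet (a sketch only: placeholder environment names and users, and a deliberately stripped-down sudoers template), a short Python generator could expand one data structure into one file per environment:

    ```python
    ADMINS = ["abe", "bob", "carol", "dave"]        # sysadmins, present in every environment

    EXTRA_USERS = {                                  # per-environment additions (placeholders)
        "staging1.acme.com": ["dev1", "dev2", "tester1", "tester2"],
        "testing2.acme.com": ["tester1", "tester3", "tester4"],
    }

    def render(domain):
        """Return the sudoers content for one environment's domain."""
        extras = EXTRA_USERS.get(domain, []) or ["nobody"]
        lines = [
            "User_Alias ADMINS = " + ", ".join(ADMINS),
            "User_Alias EXTRAS = " + ", ".join(extras),
            "ADMINS  ALL=(ALL) ALL",    # placeholder rules - the real ones come from the master file
            "EXTRAS  ALL=(ALL) ALL",
        ]
        return "\n".join(lines) + "\n"

    for domain in EXTRA_USERS:
        with open("sudoers." + domain, "w") as out:
            out.write(render(domain))
    ```

    The same idea maps onto Puppet itself as one template selected by $domain; either way, running each generated file through visudo -c before shipping it is cheap insurance against breaking sudo everywhere at once.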

    Read the article

  • How does a vsftpd server work and how to configure it?

    - by ysap
    I was asked to configure an FTP server, based on the vsftpd package. The server is running on a remote machine to which I have superuser-privileged access. Being unfamiliar with the mechanics of FTP servers, I tried to figure out how user FTP accounts are configured. The previous maintainer used a shell script, which works on a list that we maintain to track user accounts and passwords, to configure the FTP accounts. From reading the script, I see that he generates a list of usernames and passwords, and actually creates a user account on the Linux machine. This means that for each user that we configure in the list, a new user account is added by the adduser command: adduser --home /home/ftp --no-create-home $user (but without a private /home/username directory - using /home/ftp instead). Each of these users can log into his account using the ssh command. This fact seems a little strange to me, as I'd think that the FTP accounts should be decoupled from the Ubuntu user accounts. As another side effect, when a user connects using a web browser, he is connected to the /home/ftp directory. However, he can then use the "Up to a higher level directory" link to go up and effectively have access to all of our system. So, the questions are: Is this really how the FTP server is supposed to work in terms of configuring FTP accounts? If not, how do I configure the vsftpd server in a way that I have only the superuser Ubuntu account on that machine and all FTP accounts are... just FTP user accounts? Additionally, these FTP accounts should be configured in terms of how and what they are allowed to access.

    Read the article

  • Automatically check for Security Updates on CentOS or Scientific Linux?

    - by Stefan Lasiewski
    We have machines running RedHat-based distros such as CentOS or Scientific Linux. We want the systems to automatically notify us if there are any known vulnerabilities in the installed packages. FreeBSD does this with the ports-mgmt/portaudit port. RedHat provides yum-plugin-security, which can check for vulnerabilities by their Bugzilla ID, CVE ID or advisory ID. In addition, Fedora recently started to support yum-plugin-security. I believe this was added in Fedora 16. Scientific Linux 6 did not support yum-plugin-security as of late 2011. It does ship with /etc/cron.daily/yum-autoupdate, which updates RPMs daily. I don't think this handles security updates only, however. CentOS does not support yum-plugin-security. I monitor the CentOS and Scientific Linux mailing lists for updates, but this is tedious and I want something which can be automated. For those of us who maintain CentOS and SL systems, are there any tools which can: Automatically (programmatically, via cron) inform us if there are known vulnerabilities in my current RPMs. Optionally, automatically install the minimum upgrade required to address a security vulnerability, which would probably be yum update-minimal --security on the command line? I have considered using yum-plugin-changelog to print out the changelog for each package, and then parse the output for certain strings. Are there any tools which do this already?
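
    As a sketch of the cron half of this - assuming a box where yum-plugin-security actually works, which as noted above rules out stock CentOS, and with a placeholder recipient address - a short Python wrapper around yum --security check-update can mail a report whenever security updates are pending:

    ```python
    import smtplib
    import subprocess
    from email.mime.text import MIMEText

    def pending_security_updates():
        """Run `yum --security check-update` (from yum-plugin-security).
        check-update exits 100 when updates are pending, 0 when none, 1 on error."""
        proc = subprocess.Popen(["yum", "--security", "check-update"],
                                stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
        output = proc.communicate()[0].decode("utf-8", "replace")
        return proc.returncode, output

    def mail_report(body, to_addr="root@localhost"):     # placeholder recipient
        msg = MIMEText(body)
        msg["Subject"] = "Pending security updates"
        msg["From"] = "yum-security-check@localhost"
        msg["To"] = to_addr
        server = smtplib.SMTP("localhost")
        server.sendmail(msg["From"], [to_addr], msg.as_string())
        server.quit()

    if __name__ == "__main__":
        code, output = pending_security_updates()
        if code == 100:
            mail_report(output)
    ```

    Dropped into /etc/cron.daily this covers the notification half; the remediation half would be the yum update-minimal --security call mentioned above, run only where unattended updates are acceptable.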

    Read the article

  • Backing up SQL NetApp Snapshots using TSM

    - by WerkkreW
    In our environment we have a 3-node SQL 2005 cluster which is on NetApp storage. We are currently using SMSQL (NetApp SnapManager for SQL) to take snapshot backups of the data. This works great, but due to some audit requirements we are also forced to maintain some copies on tape. We have used NDMP in other places across the enterprise, but we do not want to use it in this specific instance. Basically what I need to do is get the most recent snapshot copy of the databases onto tape, via Tivoli Storage Manager (TSM). What I have done is obtain a basic Windows Server 2003 VM with SnapDrive installed, which is SAN-attached and zoned to the NetApp, and I have written a batch file to do the following: Mount the latest __RECENT snapshot LUN to the host, using a specific drive letter. Perform a TSM-based incremental backup. Dismount the LUN. This seems to work fine, except sometimes the LUNs do not mount due to some sort of timeout. Also, due to my limited knowledge of Windows batch scripting, I have no way to monitor the success or failure of these backups, since I do not know how to send a valid return code back to the TSM scheduling service. Is there a more efficient/elegant way to accomplish this without NDMP?
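
    On the return-code point specifically, one option - sketched here in Python on the assumption that installing it on that VM is acceptable; the mount/dismount commands are placeholders for whatever SnapDrive calls the batch file already makes, and only `dsmc incremental` is the standard TSM client invocation - is a thin wrapper that runs each step and exits with the first failing step's return code, so the TSM scheduler records the event as failed:

    ```python
    import subprocess
    import sys

    MOUNT    = ["mount_recent_snapshot.cmd"]     # placeholder: SnapDrive call mapping the __RECENT LUN to S:
    BACKUP   = ["dsmc", "incremental", "S:\\"]   # TSM client incremental backup of the mounted LUN
    DISMOUNT = ["dismount_snapshot.cmd"]         # placeholder: SnapDrive call unmapping the LUN

    def run(step):
        rc = subprocess.call(step)
        print("%s -> return code %d" % (" ".join(step), rc))
        return rc

    def main():
        rc = run(MOUNT)
        if rc != 0:
            sys.exit(rc)                 # LUN never mounted (e.g. the timeout) - fail loudly
        try:
            rc = run(BACKUP)
        finally:
            dismount_rc = run(DISMOUNT)  # always try to clean up the mapping
        sys.exit(rc or dismount_rc)      # any non-zero exit lets the TSM scheduler flag the schedule

    if __name__ == "__main__":
        main()
    ```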

    Read the article

  • What open source ecommerce webshops offer #1: usability, #2: PayPal integration, and #3: ease of administration and use

    - by Jonathan Hayward
    I've spent several days trying to deploy Satchmo, in the process asking several questions about deployment (http://stackoverflow.com/questions/11277407/can-anyone-explain-this-error-message-deploying-a-satchmo-project-under-gunicorn, http://stackoverflow.com/questions/11277685/is-there-a-howto-to-fcgi-for-deploying-satchmo, and http://stackoverflow.com/questions/11278295/what-is-the-most-stable-release-of-satchmo). Django's tagline is "The web framework for perfectionists with deadlines," and Satchmo's tagline is equally forceful: "The webshop for perfectionists with deadlines." I'm looking more to set up, configure, design, etc., rather than code for this one, and I'm taking a bit of a hint that for me at least the "with deadlines" bit is something that I cannot manage. Deployment has been a time sink. So, taking a step back, I don't specifically need to edit and extend the source code; what I want are first, good usability and a clean experience for the end-user, then being easy to deploy/install/manage/maintain, and enough so that even if you're having a slow day it should at most be one day's work to install, one day's work to get running, and one day's work to rebrand as white label (for simple branding). What ecommerce webshops should I be looking at?

    Read the article

  • Project and Business Document Organization

    - by dassouki
    How do you organize and maintain the edits, revisions, and relationships between proposals, contracts, change orders, deliverables, and projects? How do you organize your projects for re-usability? For example, is there a way to add tags to projects, to make them more accessible? What's a good data structure to dump all my files on an internet server for easy access? Presently, my work folder is set up as follows:

    (1) /work/
        (2) /projects
            (3) /project_a
                (4) /final (which includes all final documents)
                    (5) /contracts
                    (5) /rfp_rfq
                    (5) /change_orders
                    (5) /communications (logs all emails, faxes, and meeting notes and minutes)
                    (5) /financial
                        (6) /paid
                        (6) /unpaid
                    (5) /reports
                (4) /old (includes all documents that didn't make it into project_a/final/)
            (3) /project_b
                (4) ... same as above ...
        (2) /references
            (3) /technical_references
            (3) /gov_regulations
            (3) /data_sources
            (3) /books
            (3) /topic_based (each area of my expertise has a folder with references in them)
        (2) /business_contacts
            (3) /contacts.xls (file contains all my contacts)
        (2) /banking
            (3) /banking.xls (contains a list of all paid and unpaid invoices as well as some cool stats)
            (3) /quicken (to do my taxes and yada yada)
                (4) /year
        (2) /education (courses I've taken)
            (3) /webinars
            (3) /seminars
            (3) /online_courses
        (2) /publications (includes the publications I've made)
            (3) /publication_id

    We're mostly 5 people working together part-time on this thing. Since this is a very structured approach, I find it really difficult to remember what I've done on previous projects and go back and forth easily. What are your suggestions on improving my processes? I'm open to closed and open source software (as long as the price isn't too high). I also want to implement a system where I can save most of the projects online to increase collaboration and efficiency and reduce bandwidth, especially on document editing. Imagine emailing a document back and forth 5-10 times a day.

    Read the article

  • Finding a backup and synchronization solution

    - by Andrea Zilio
    I'm having difficulty finding a backup and synchronization solution with the following characteristics: cross-platform (Windows, Linux, Mac); offsite backup (so internet backup); data deduplication; transfers only the new/modified bits of modified files; secure (data encrypted before leaving the computer); maintains multiple versions of files (even deleted files); folder synchronization integrated with backup and across multiple computers connected to the internet (not necessarily on the same LAN). I think that the folder sync feature needs a better explanation. The use case is this: you have a desktop PC and a laptop. The desktop PC contains a folder with some files, and this folder is part of the backup (so it was selected to be backed up). The laptop does not contain that folder or those files at all. Then you're abroad with your laptop and you need that folder. So you want to be able to open the backup program, select that folder from the backup and download it onto your laptop, keeping it synchronized with the backed-up version. When you then come back home and switch on your desktop PC, you want the folder we're talking about to be updated on the desktop PC too. Does anyone know of any service with all these features? I've only found SpiderOak to support all the features I've mentioned, but I'm not completely satisfied by the time taken to complete a backup. Sometimes it seems to hang for minutes with no reason at all, and folder synchronization occurs only after all files are backed up (instead, folder sync should have a separate queue, independent of other backup operations, and synchronization should occur frequently - for example every 5 minutes or less - independently of the frequency of normal backup operations).

    Read the article

  • Using multiple computers effectively

    - by Benjamin Oakes
    I have some extra (old) Macs and PCs around the house and a MacBook that's sometimes overworked. I'm looking for tips on using multiple computers effectively. Basically, I'd like to add to the following list. Here's what I'm using so far: Teleport: lets you use a single mouse and keyboard to control several Macs, like Synergy Built-in file sharing: lets me run programs on another Mac, but only maintain one copy of the data Bazaar: distributed version control Mail.app, Thunderbird, etc.: IMAP for my mail accounts TuneConnect: control iTunes on another Mac with a nice interface, using the library on my MacBook (if I choose it by pressing option at startup) over file sharing OmniFocus: syncs across computers pretty seamlessly Web browsing across computers VNC/Remote Desktop Running X-windows programs using ssh -Y hostname for headless operation (but they die when I sleep the connecting computer -- something like GNU screen would be ideal) Plain-old ssh with GNU screen Really, a better idea of what I do might be necessary. Generally though, I'd like to distribute tasks across more than one computer when possible, but not have much overhead in doing so. The perfect solution? An Xgrid-like program that pushes processing across multiple computers automatically and seamlessly (although that seems unlikely). Here's what I have, in case it makes a difference: MacBook (Dual 2.16 GHz, OS X 10.6.3) eMac (1.25 GHz, OS X 10.4.11, soon to be 10.5) Dell Dimension (800 MHz, some version of Ubuntu) -- no dedicated monitor PowerMac G3 (400 MHz, OS X 10.4.11) -- no dedicated monitor iMac G3 DV (400 MHz, OS X 10.4.11) -- currently in the kitchen for recipes, email, web browsing, music, movies (DVDs), etc. (Total, they cost me around $650, mostly for the MacBook. Freecycle is wonderful, just in case you haven't heard of it.) I'm really only using the MacBook and eMac at this point, but I'd like to push more onto it and possibly the PowerMac and Dell.

    Read the article

  • Splitting Servers into Two Groups

    - by Matt Hanson
    At our organization, we're looking at implementing some sort of informal internal policy for server maintenance. What we're looking at doing is completing maintenance on our entire server pool every two months; each month we'll do half of the servers. What I'm trying to figure out is some way to split the servers into the two groups. Our naming convention leaves much to be desired (but it's getting better), so splitting by name or number doesn't really work. I can easily take a list of all the servers and split it in two, but with new servers being added constantly, and old ones retired, that list would be a headache to maintain. I'd like to look at any given server and know if it should have its maintenance done this month or next. For example, it would be nice to look at the serial number: if it started with an even number, then it gets maintenance done on even months, and vice versa. This example won't work, though, as a little over half of the servers are virtual. Any ideas?
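
    One deterministic variation on the serial-number idea (a sketch only; it keys off the hostname because every server, physical or virtual, has one) is to hash a stable identifier and take it modulo 2, so any given server always lands in the same group and nobody has to maintain a list:

    ```python
    import hashlib

    def maintenance_group(hostname):
        """Return 0 or 1 for a server, stable across runs and machines
        (hashlib is used because Python's built-in hash() is not stable)."""
        digest = hashlib.md5(hostname.strip().lower().encode("utf-8")).hexdigest()
        return int(digest, 16) % 2

    # Look at any server and know its month immediately (hostnames are examples):
    for host in ["web01.example.com", "db02.example.com", "app17.example.com"]:
        group = maintenance_group(host)
        print(host, "->", "even months" if group == 0 else "odd months")
    ```

    The split is only statistically even - roughly 50/50 over enough servers - which is usually good enough for a maintenance rotation.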

    Read the article

  • Centralized Windows/Mac Patch Management that is easy to use

    - by BiggsTRC
    I'm looking for advice on which patch management solutions you would recommend based upon your experience, and also which ones you would not recommend based upon your experience. We have a mixed network of Windows and Mac clients. Our central servers are all Windows servers, although I have considered putting in a Mac server to better handle our Mac clients. The issue we are facing currently is that we need to maintain the patches on all of our third-party applications. Right now we use WSUS, which handles patching of Windows and some Microsoft products, but that is about it. I need something to cover the other applications, specifically things like Adobe products (Reader, Flash, Dreamweaver, etc.). Our network isn't that big (maybe 200 clients) and I don't have a person to dedicate just to patching and maintaining a patch management solution. Thus very large and complicated solutions like System Center are most likely out. I have recently been looking at Dell's KACE K1000 solution (http://www.kace.com/products/systems-management-appliance/). It seems simple and it provides a lot of tools in one package that I would like/need as well. I like the fact that it is self-contained in an appliance and that it is designed for situations like mine. However, I'm not sure if this is the best solution. I've also looked some at Shavlik's NetChk solution (http://www.shavlik.com/netchk-protect.aspx), but I don't need an anti-virus product. However, it looks like they might have a very good patch database. My question is this: What are your thoughts on these two products? Are there better products out there? Are there issues that I'm not considering? I want something that is very good at patching a broad range of products, that is simple to use, that takes a minimal amount of management (like WSUS), and that (hopefully) works with Mac and Windows.

    Read the article

  • Way to speed up load-balanced ssl using nginx?

    - by paulnsorensen
    So the setup for our website is 4 nodes running rails 3 and nginx 1 that all use the same GoDaddy certificate. Because we are a paid site, we have to maintain PCI-DSS compliance and thus have to use the more expensive SSL ciphers -- also we force SSL using Rack. I've recently switched over to Linode's NodeBalancer (which I've read is an HACluster), and we're not getting the performance we'd ideally like. From what I've read, it looks like terminating the SSL on the nodes using the high cipher is what is causing the poor performance, but I'd like to be thorough. Is there anything I can do? I've read about other ways to terminate the SSL before the NodeBalancer (like using stud), but I don't know enough about these solutions. We certainly don't want to do anything experimental or anything that has a single point of failure. If there really isn't anything I can do to speed up the SSL handshake, my alternative would be to support certain pages on Rails using a secure and insecure subdomain. I've found a few guides that walk through that, but my resulting question is in this situation, would it be better to have nginx handle forcing ssl on the secure subdomain instead of rails? Thanks!

    Read the article

  • Software distribution from web server to client using PHP/FTP

    - by Jenolan
    I develop and maintain a number of add-ons and utilities for various widgets (mainly aMember), which generally means I need to install PHP-based code onto other people's systems. Whilst I have a VPS with access to rsync and all sorts of yummy tools, most of the people I deal with have basic FTP access and that's all, folks. Uploading from my local system is also a problem, as I am satellite-based (two-way), so it is fairly slow and expensive - and in any case the files are already on my server. So there is no rsync, fxp or ssh, and I can't really install anything, as it is obviously not my system; they would be justifiably miffed if I started installing file managers or other things onto their sites. What I have been trying to find is a utility that I can run on my server from the web, preferably PHP-based, that will be like a file manager but a bit different. Two panels: on the left-hand side, the local server, pretty much like a standard file-manager application; on the right-hand side, the ability to log in via FTP to the client's system. Then I can fiddle as required. The closest thing I have found is net2ftp, but it doesn't have the GUI interface. At the moment I simply ssh into my server, fire up ncftp and work that way, but something easier to use would be mucho niceness. Thanks in advance! Larry
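
    For the transfer itself (separate from whatever two-panel front end sits on top), a server-side push needs nothing more than the language's FTP client. A minimal Python sketch with placeholder credentials - PHP's ftp_connect()/ftp_put() would be the equivalent if the tool stays PHP-based - looks like this:

    ```python
    import os
    from ftplib import FTP

    def push_file(host, user, password, local_path, remote_dir):
        """Upload one file that already lives on this server into remote_dir on the
        client's FTP account - nothing has to pass through the local (satellite) link."""
        ftp = FTP(host)
        ftp.login(user, password)
        ftp.cwd(remote_dir)
        with open(local_path, "rb") as f:
            ftp.storbinary("STOR " + os.path.basename(local_path), f)
        ftp.quit()

    # Placeholder values only - in practice they would come from whatever web form drives this:
    # push_file("ftp.client-site.example", "client-user", "secret",
    #           "/srv/addons/widget-addon.php", "/public_html/plugins")
    ```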

    Read the article

  • Two NICs, 2 Internet Connections, 1 Windows Server 2008 RC2, Routing help required

    - by PJZ
    Hello, I have a Windows 2008 server and 4 other client machines on my home network. I have two internet connections. The main connection is set up with a home router and DHCP on that for all the clients on the network. The secondary connection is just a cable modem which is plugged directly into the server. Local Area Connection: this NIC has an external IP and is connected to the cable modem. Local Area Connection 2: this NIC has an internal IP (192.168.0.102) and allows access to all the internal computers. It also has internet access via the local router. So here lies the problem: I want to use the cable connection on the server for the server's internet traffic (so that server and client traffic are separated), but I also need to maintain local access. I am wondering how to make it so that all the internet traffic goes via that NIC, because at the moment it goes through the local NIC. As a secondary problem, I would also like to forward the connection of one application used by the clients via the server and the cable/server internet, because of poor routing for it on the main connection. This perhaps is something for another question, though. Thanks for any help you can offer me. Regards PJ

    Read the article
