Search Results

Search found 19212 results on 769 pages for 'side projects'.


  • Getting rid of your server in a small business environment

    - by andygeers
    In a small business environment, is it still necessary to have a central server? Speaking for my own company (a small charity with about 12 employees), we use our server (Windows Server 2003) for the following:

    - Email via Microsoft Exchange
    - Central storage
    - Acting as a print server
    - User authentication / Active Directory

    There are significant costs associated with running a server like this:

    - Electricity, first for the server itself and then for the air conditioning it requires (this thing pumps out a lot of heat)
    - Noise (of which there is a lot)
    - IT support bills (both Windows Server and Exchange are pretty complicated, and there are many ways they can go wrong)

    I've found ways to replace many of these functions with cheaper (better?) alternatives:

    - Google Apps / Gmail is a clear win for us: we have so many spam-related problems it's not even funny, and Outlook is dog slow on our aging computers.
    - You can buy networked storage devices with built-in print servers, such as the Netgear ReadyNAS RND4210, that would allow us to store/share all of our documents and to access printers over the network.

    The only thing I can't figure out how to do away with is the authentication side of things: it seems to me that if we got rid of our server, we'd essentially have a bunch of independent PCs with no shared pool of user accounts and no central administrator. Is that right? Does that matter? Am I missing any other good reasons to keep a central server? Does anybody know of any good, cost-effective ways of achieving the same end but without the expensive central server?


  • IPSec VPN using ZyWALL IPSec VPN Client: unable to connect from some providers

    - by Reshi
    I'm trying to configure an IPSec VPN to one company from my home. The company's ISP is SANET. I was able to create a VPN connection from another company that uses the same ISP. The problem begins when I try to connect from another ISP, such as Orange or Telekom. Here is the log from the ZyWALL client:

        20120816 10:06:18:359 Default (SA Gateway-P1) SEND phase 1 Main Mode [SA] [VID] [VID] [VID] [VID] [VID]
        20120816 10:06:18:375 Default (SA Gateway-P1) RECV phase 1 Main Mode [SA] [VID] [VID] [VID] [VID] [VID] [VID] [VID] [VID]
        20120816 10:06:18:390 Default (SA Gateway-P1) SEND phase 1 Main Mode [KEY_EXCH] [NONCE] [NAT_D] [NAT_D]
        20120816 10:06:18:718 Default (SA Gateway-P1) RECV phase 1 Main Mode [KEY_EXCH] [NONCE] [NAT_D] [NAT_D]
        20120816 10:06:18:734 Default (SA Gateway-P1) SEND phase 1 Main Mode [HASH] [ID]
        20120816 10:06:18:750 Default (SA Gateway-P1) RECV phase 1 Main Mode [HASH] [ID]
        20120816 10:06:18:750 Default phase 1 done: initiator id [email protected], responder id 111.112.113.114
        20120816 10:06:18:765 Default (SA Gateway-Tunnel-P2) SEND phase 2 Quick Mode [HASH] [SA] [KEY_EXCH] [NONCE] [ID] [ID]
        20120816 10:06:18:953 Default (SA Gateway-Tunnel-P2) RECV phase 2 Quick Mode [HASH] [SA] [KEY_EXCH] [NONCE] [ID] [ID]
        20120816 10:06:18:953 Default (SA Gateway-Tunnel-P2) SEND phase 2 Quick Mode [HASH]
        20120816 10:06:48:968 Default (SA Gateway-P1) SEND Informational [HASH] [NOTIFY] type DPD_R_U_THERE
        20120816 10:06:48:984 Default (SA Gateway-P1) RECV Informational [HASH] [NOTIFY] type DPD_R_U_THERE_ACK

    ZyWALL reports that the tunnel was opened, but I can't ping or access any computer on the network. My configuration at home:

    - ISP: Orange, optical connection
    - Terminal: GPON optical network terminal G-25E
    - Router: TP-Link TL-WR941N (SPI firewall enabled, VPN IPSec passthrough enabled)

    I wonder whether the problem could be on the ISP's side (perhaps they block this traffic somehow, since it worked fine from the SANET ISP), or even in my terminal or router. What could I check? Where could the problem be?
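    Since both IKE phases complete but no traffic flows, the usual suspect is ESP (IP protocol 50) or NAT-T (UDP 4500) being dropped somewhere along the path. A minimal check, assuming you can run tcpdump on a Linux machine near either endpoint (the interface name is an assumption):

        # watch IKE, NAT-T and raw ESP while pinging through the tunnel
        sudo tcpdump -ni eth0 'udp port 500 or udp port 4500 or ip proto 50'

    If phase 2 packets flow but no ESP ever shows up on the far side, the carrier or one of the NAT boxes is eating the tunnel payload rather than the negotiation.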


  • Data Store/Volume disconnecting. How to resume copy of VMDK?

    - by Serge
    I'm having an issue with my ESXi 4.1 hosts losing their datastore on an FC SAN after a power outage. All three hosts disconnect, so it's definitely a SAN issue. I've tried to resolve the issue on the SAN side with the SAN software support and Adaptec hardware support; no luck there. So I'm stuck with a SAN that will randomly disconnect the volume. I need to get the virtual machines (VMDK files) off the datastore. The problem is I can only get 5-20% before the datastore disconnects. I have slightly older backups that I can replicate the VMDK differences to. What has not worked so far:

    - Powering up the VMs: they boot for 5-15 minutes, then freeze.
    - vCenter migrate or clone of a VM: fails after a similar period of time.
    - vCenter copy/paste of a VMDK: I was able to get one 30 GB VMDK and had no luck after that.
    - VMware Data Recovery: fails at a low percentage and can't resume, so the next backup starts from the beginning.
    - Veeam Backup & Replication: same as above, no resume function.

    If I can just find a backup solution that will resume from the failed spot, that would solve my issue. Anyone have any ideas that I could try?

    EDIT 1: The SAN is Open-E DSS 6 running on a Supermicro 24-drive enclosure with a 4-port QLogic FC card and an Adaptec 52445 RAID card.
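    If the hosts themselves stay up while the datastore flaps, one trick that may be worth a try (a sketch, not a tested recipe): ESXi publishes datastore contents over its HTTPS datastore browser, and curl can resume an interrupted download with -C -. The host, datacenter, datastore and VM names below are placeholders:

        # keep resuming the flat VMDK from wherever the last attempt died
        URL='https://esxi01/folder/myvm/myvm-flat.vmdk?dcPath=ha-datacenter&dsName=datastore1'
        until curl -k -u root -C - -o myvm-flat.vmdk "$URL"; do
            echo "transfer dropped at $(stat -c%s myvm-flat.vmdk 2>/dev/null || echo 0) bytes; retrying..."
            sleep 30
        done

    Run it from a Linux box with enough free space; each retry picks up at the current byte offset instead of starting over.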


  • Looking for Remote Control that works with everything (even Windows 7 Media Center)

    - by T Reddy
    Using my Google-Fu, it seems that the most basic thing one gets with any DVR is the remote control. Had I known it would be this difficult just to get a consumer IR receiver for Windows 7, I might not have bothered to build an HTPC. But it's too late: I already have the HTPC ready to go (minus the CETON card...). So I'm moving away from TiVo; I hate paying the monthly fees and my box is ancient. I'm looking for these solutions to my HTPC setup. I want to:

    - Switch audio from HDMI to S/PDIF via the remote control (i.e., switch from TV to receiver); as a side note, the motherboard's built-in audio has software to do this.
    - Have the volume button on the remote always change the TV's volume (or the receiver's, if possible) and NOT the PC's volume.
    - Have the remote/receiver work well at around 25 feet.
    - Bonus if the IR receiver can work with my existing TiVo remote (or other remotes lying around the house).

    I read a review of the Bluetooth TiVo remote and it sounds promising, but I'm not sure whether it works well with a Windows 7 HTPC.


  • h264 inside FLV container vs. MP4 container?

    - by Gotys
    I am developing a tube site and am currently having issues with the h264 format. Looking at YouTube, I noticed they put their hi-def videos into an MP4 container, so logically I did the same. Next, I installed mod_h264_streaming for lighttpd to make streaming and timeline scrubbing work. The problem is that large files (500 MB+ at somewhat high resolution) take forever to even start buffering (I read that Flowplayer and other Flash players need to download the metadata first). I moved the moov atom to the front of the file with MP4Box (I tried qt-faststart too), and the problem didn't go away. Next I read online that I need to interleave the audio tracks, so I did that too. No change in slowness.

    So I tried putting the same exact h264 movie into an FLV container, and playback buffering starts almost instantly; no slowness at all. What am I missing here? Why would I choose an MP4 container with the mod_h264_streaming module, which seems super slow, over a regular FLV container with lighttpd's built-in mod_flv_streaming? Obviously many websites pick the MP4 container, but I fail to understand why.

    And as a side question: I tried using HTML5's video tag with the same h264 MP4 movie, and scrubbing is lightning fast! I looked into lighttpd's log file and noticed that Flash players append video.mp4?start=234 each time the timeline is scrubbed, whereas HTML5's video tag does no such thing. Is this some sort of limitation of Flash? Why can't Flash streaming be as fast as HTML5 streaming? Thanks to all who can help; I very much appreciate this community.
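    For reference, a sketch of doing the moov-atom move and the interleaving mentioned above in a single pass; flag spellings are from recent GPAC and FFmpeg builds, so adjust to your versions:

        # rewrite with 500 ms interleaving; GPAC places the moov atom up front
        MP4Box -inter 500 video.mp4
        # or remux without re-encoding, forcing moov to the front (FFmpeg)
        ffmpeg -i input.mp4 -c copy -movflags +faststart output.mp4

    Doing both in one rewrite avoids the case where a second tool moves the moov atom but undoes the interleaving, which can reproduce exactly the slow-start symptom described.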


  • Exchange 2010 CAS Removal == Broken???

    - by Doug
    Hi there, I recently upgraded to Exchange 2010 and have a setup with two of my servers running CAS roles: EXCH01 and EXCH02. EXCH02 also happens to have a mailbox role where a lot of the users sit. EXCH01 is my front-facing CAS server; it faces the net with SSL etc., and incoming mail moves through it as a hub transport server as well.

    As I was trying to lean things out in my VM environment, I removed the CAS role from EXCH02 and all hell broke loose. All the mail users with a mailbox on EXCH02 had their homeMTA set to a deleted-items folder in AD, and so did their msExchHomeServer properties. After a complete battle I manually reset these to the old values, and in the meantime reinstalled CAS on EXCH02 (management was going nuts without Outlook working, so I just put things back the way they were in a hurry). As a strange side note, before I reset these to point at EXCH02 I tried EXCH01, and that failed.

    I still want to remove the CAS role from EXCH02, as it really should not have it (an error in my install planning), and I would have thought that removing it would not cause the issues it did; I assumed that since there was another CAS server in the admin group, all would be good. Was I wrong in my assumption? What can I do to complete this successfully the second time around? Do I need to rehome all the mailboxes to the CAS server? Is this a bug in the role uninstall?


  • Ping and crawling not working, site still resolving

    - by Andrew Alexander
    OK, so we're trying to figure out why the site of one of our clients isn't being crawled by Google (we've ruled out robots.txt and meta tags). When we go to the site, by either IP address or domain name, the site resolves and everything works. However, Google is getting a 302 redirect (which it apparently isn't following for crawling), and when we ping the address, it times out (note: the site is still resolving in the browser throughout all of this). The site is built in ASP.NET (I assume C#), so my thought was that it was an errant redirect rule or some other sort of server-side issue. We also thought it might be due to incorrect domain pointing (but pinging the IP doesn't work either, so that sort of rules that out). We're really not sure what is causing all of these errors, or even whether they have a single source. Anyone have any ideas what could be going on? Do you need any more information?

    To boil it down in a TL;DR:

    - Site resolving in the browser, by both IP and domain name. No problems here.
    - Site not being crawled by Google (it gets a 302 it doesn't seem to follow); this is not due to robots.txt or meta tags.
    - Ping is not working for the IP address. This is very odd, because again, the IP address works fine in the browser.
    - Our thoughts: a redirect-rule issue, a domain-pointing issue, possibly some errant code, or some combination of the three.
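    One quick check worth running from a shell, to see exactly which redirect a crawler is served; the domain is a placeholder, and the Googlebot string is just an example user agent:

        # full header chain, following redirects, as a generic client...
        curl -sIL http://example.com/
        # ...and again pretending to be Googlebot, to spot UA-dependent redirects
        curl -sIL -A 'Googlebot/2.1 (+http://www.google.com/bot.html)' http://example.com/

    If the two header chains differ, the 302 is being issued conditionally on the user agent, which points back at the ASP.NET redirect logic rather than DNS.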


  • Does the OSS Backup Solution amanda.org support sparse files?

    - by user97961
    I want to (or rather, have to) do backups of my KVM virtual machine images. I have searched for days for a good backup solution. I know Amanda is a very good one. It would be kind if someone could tell me whether the following is supported:

    1. Trigger the creation of an LVM snapshot (by invoking a shell script that I will write for that purpose).
    2. Do a differential/delta backup of my KVM qcow2 sparse file: I only want to copy the actually changed bits/bytes (a delta backup), and the backup must handle the fact that the file being backed up is a sparse file. (rsync seems to have some problems in this regard: if the file does not yet exist on the other side, it creates a full file, not a sparse file.)
    3. Release the LVM snapshot (by invoking a script that I will write for that purpose).

    It's strange that I have found no documentation about this anywhere when searching the internet. Zmanda (the commercial edition) has support for Xen VM backup, but not for KVM as far as I can tell.
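    For comparison, here is how the rsync behaviour mentioned in point 2 is usually worked around: a sparse-aware first copy, then in-place delta runs (a sketch with GNU rsync; paths and host are placeholders, and note that rsync has historically refused to combine --sparse with --inplace, hence two separate invocations):

        # first full copy: punch holes at the destination so it stays sparse
        rsync -av --sparse /var/lib/libvirt/images/vm.qcow2 backup:/backups/
        # subsequent runs: rewrite only changed blocks in place (delta transfer)
        rsync -av --inplace /var/lib/libvirt/images/vm.qcow2 backup:/backups/

    Whatever backup tool you settle on, running it against the LVM snapshot path instead of the live image gives the same consistency you describe in points 1 and 3.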


  • OpenOffice Calc: How can I count the number of different items with data pilot?

    - by manu
    Hi all, I have a rather long spreadsheet with historical information about issues solved by users in a collaborative environment. The spreadsheet has the following (relevant) columns: date, week no., project, author id, etc. The week no. is calculated from the date and is basically the year concatenated with the week number within that year; for instance, both 2009-02-18 and 2009-02-20 yield the week number 200908 (the 8th week of 2009), and 2009-02-23 yields 200909 (the 9th week of 2009).

    I need to count how many different users (given by author id) contributed to each project, on a weekly basis. I have set up a data pilot with the week as the Row Field, the project as the Column Field, and count-author as the Data Field. However, this counts each author id instance, which is not what I need. I need to count how many different users contributed to each project on a weekly basis. I expect to get something like:

        week      Project1   Project2   Project3
        200901    10         2
        200902    2          7

    with each inner cell containing how many different users contributed. With the count-author configuration, what I get instead is how many contributions (in total) the project received that week. Is there a way to tell OpenOffice Calc to do what I want?
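    The data pilot has no distinct-count function, but a common workaround is a helper column that flags only the first occurrence of each (week, project, author) combination, whose Sum then gives the distinct-user count. A sketch, assuming week is in column B, project in column C and author id in column D (column letters are assumptions), entered in E2 and filled down:

        =IF(SUMPRODUCT(($B$2:B2=B2)*($C$2:C2=C2)*($D$2:D2=D2))=1;1;0)

    Using Sum of this helper column as the Data Field, instead of count-author, should then produce a table like the one above.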


  • How do I increase the buffer size for domain sockets in OS X 10.6

    - by Chas. Owens
    In Linux I have no problem dumping tons of data into a domain socket, but the same code on OS X 10.6.2 blows up after about 65 records. The socket reader code looks like:

        #!/usr/bin/perl

        use strict;
        use warnings;
        use IO::Socket;

        unlink "foo";
        my $sock = IO::Socket::UNIX->new(
            Local   => 'foo',
            Type    => SOCK_DGRAM,
            Timeout => 600,
        ) or die "Could not create socket: $!\n";

        while (<$sock>) {
            chomp;
            print "[$_]\n";
        }

    And the client code looks like:

        #!/usr/bin/perl

        use strict;
        use warnings;
        use IO::Socket;

        my $sock = IO::Socket::UNIX->new(
            Peer    => 'foo',
            Type    => SOCK_DGRAM,
            Timeout => 600,
        ) or die "Could not create socket: $!\n";

        for my $i (1 .. 1_000_000) {
            print $sock "$i\n" or die $!;
        }
        close $sock;

    The error message I get is "No buffer space available at write.pl line 15." It seems fairly obvious that there is a difference in buffer size between Linux and OS X, but I don't know how to set it on OS X (or what the possible negative side effects might be).
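    On OS X the relevant knobs appear to be the net.local.dgram sysctls; these names exist on the BSD family, but treat the values below as illustrative assumptions and inspect the defaults first:

        # inspect the current unix-domain datagram limits
        sysctl net.local.dgram
        # raise them for the running system (root required; not persistent)
        sudo sysctl -w net.local.dgram.maxdgram=16384
        sudo sysctl -w net.local.dgram.recvspace=65536

    The defaults on 10.6 are small, which would explain why the client overruns the receiver after only a few dozen datagrams.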


  • DVD Share on Vista Home Premium Failing

    - by hpunyon
    UPDATE: I can't find any local policy editor for Vista Home Premium, as was suggested. I did learn about the registry keys allocatecdroms, allocatefloppies, and allocatedasd, and tried adding these keys (individually and collectively) and setting them to both 0 and 1. There was no positive effect on read access to the DVD root folder; it is always Access Denied.

    ORIGINAL POST: Read access to the root folder of a DVD drive shared from a Vista Home Premium laptop fails with Access Denied when using the Guest account. The client is an XP Home PC that can see, but not access, the data in the share. I'm only trying to read a data DVD, not trying to write/burn anything. On the Vista laptop, I have:

    - All firewalls and antivirus disabled
    - UAC disabled
    - Password checking disabled
    - "Advanced Shared" the DVD drive, with "Everyone" having full-access permissions to the share
    - Tried adding Guest and Anonymous users with full-access permissions to the share
    - RestrictAnonymous=0 set in the registry

    Both PCs are in the same workgroup (MSHOME). The XP Home client sees the shared DVD in \\Vista_Hostname\, but when I double-click the drive icon on the client, I get a popup that access is denied, check with the administrator, etc. I can share other folders on the Vista PC and see and READ those from the XP Home client. If I enable password checking on the Vista side, I get a user/password popup, I can authenticate (using my known Vista account, which happens to have admin rights), and then I can see and read the DVD data. I need to open this up so that the (default) Guest user can see and access the DVD data files.


  • Apache reports a 200 status for non-existent WordPress URLs

    - by Jonah Bishop
    The WordPress .htaccess generally has the following rewrite rules:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>

    When I access a non-existent URL on my website, this rewrite rule gets hit, rewrites to index.php, and serves up my custom 404.php template file. The status code sent back to the client is the correct 404, as shown in this HTTP Live Headers output example:

        http://www.borngeek.com/nothere/
        GET /nothere/ HTTP/1.1
        Host: www.borngeek.com
        {...}
        HTTP/1.1 404 Not Found

    However, Apache reports the entire exchange with a 200 status code in my server log, as shown here in a log snippet (trimmed for simplicity):

        {...} "GET /nothere/ HTTP/1.1" 200 2155 "-" {...}

    This makes some sense to me, seeing as the original request was rewritten to a page that exists (index.php). Is there a way to force Apache to report the exchange as a 404? My problem is that bogus requests coming from Bad Guys show up as "successful requests" in the various server statistics software I use (AWStats, Analog, etc.). I'd love to have them show up on the Apache side as 404s so that they get filtered out of the stat reports that get generated. I tried adding the following line to my .htaccess, but it had no effect (I'm guessing for the same reason as the previous rewrite rules):

        ErrorDocument 404 /index.php?error=404

    Does anyone have a clever way to fix this annoyance? Additional info: the OS is Debian 6.0.4, and the Apache version looks to be 2.2.22-3 (hosted on DreamHost). The 404 being sent back to the client is being set by WordPress (i.e., I'm not manually calling header() anywhere).
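    One hedged idea: mod_rewrite's R flag accepts non-redirect status codes, so paths that are always probes can be answered with a genuine 404 before WordPress ever runs, and Apache then logs a 404. The pattern list below is purely illustrative:

        # return a real 404 (and stop rewriting) for known-bogus probe paths
        RewriteRule ^(phpmyadmin|wp-admin/install\.php) - [R=404,L]

    This only catches paths you enumerate; requests that fall through still go to index.php and will keep logging as 200 at the Apache level.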


  • "Network Error - 53" while trying to mount NFS share in Windows Server 2008 client

    - by Mike B
    CentOS | Windows 2008. I've got a CentOS 5.5 server running nfsd. On the Windows side, I'm running Windows Server 2008 R2 Enterprise. I have the "File Services" server role enabled, and both Client for NFS and Server for NFS are on. I'm able to successfully connect/mount the CentOS NFS share from other Linux systems but am experiencing errors connecting to it from Windows. When I try to connect, I get the following:

        C:\Users\fooadmin>mount -o anon 10.10.10.10:/share/ z:
        Network Error - 53
        Type 'NET HELPMSG 53' for more information.

    (IP and share name have been changed to protect the innocent.) Additional information:

    - I've verified low-level network connectivity between the Windows client and the NFS server with telnet (to NFS on TCP/2049), so I know the port is open.
    - I've further confirmed that inbound and outbound firewall ports are present and enabled.
    - I came across a Microsoft tech note that suggested changing the "Provider Order" so "NFS Network" is above other items like Microsoft Windows Network. I changed this and restarted the NFS client; no luck.
    - I've confirmed that the share folder on the NFS server is readable/writable by all (777).
    - I've tried other variations of the mount command, such as mount 10.10.10.10:/share/ z: and mount 10.10.10.10:/share z: and mount -o anon mtype=hard \\10.10.10.10:/share *. No luck.
    - As per the command output, I tried typing NET HELPMSG 53, but that doesn't tell me much; just "The network path was not found".

    I'm lost on how to proceed with troubleshooting. Any ideas?
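    Error 53 is a path-resolution failure, so it may help to confirm the Windows client can see the export list and RPC services at all; Services for NFS ships showmount and rpcinfo (the IP below is the placeholder from the question):

        showmount -e 10.10.10.10
        rpcinfo -p 10.10.10.10

    If showmount fails from Windows while the Linux clients succeed, the blocker is usually the portmapper (TCP/UDP 111) or the mountd port rather than 2049 itself, since the telnet test above only proved 2049.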


  • iptables configuration under ubuntu

    - by aioobe
    I'm following a tutorial on setting up a DNS tunnel. I've run into the following instruction: "Now you need to enable forwarding on this server. I use iptables to implement masquerading. There are many HOWTOs about this (a simple one, for example). On Debian, the configuration file for iptables is in /var/lib/iptables/active. The relevant bit is:

        *nat
        :PREROUTING ACCEPT [6:1596]
        :POSTROUTING ACCEPT [1:76]
        :OUTPUT ACCEPT [1:76]
        -A POSTROUTING -s 10.0.0.0/8 -j MASQUERADE
        COMMIT

    Restart iptables: /etc/init.d/iptables restart"

    The problem is that I don't have any /var/lib/iptables/active (I'm on Ubuntu). How can I accomplish this? I suspect I should just interact with the iptables command somehow, but I have no clue what to write. Best would probably be if I could put the commands in a script somehow, I suppose. (A side note: if I execute a few iptables commands, they won't be there forever, right? The rules will be discarded on reboot?)
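    For what it's worth, a sketch of the equivalent done straight from the shell on Ubuntu; and yes, rules added this way are discarded on reboot unless you save and restore them (the interfaces hook below is one common approach among several):

        # masquerade the tunnel subnet and enable forwarding for this boot
        sudo iptables -t nat -A POSTROUTING -s 10.0.0.0/8 -j MASQUERADE
        sudo sysctl -w net.ipv4.ip_forward=1

        # persist: dump the rules now, reload them when the interface comes up
        sudo sh -c 'iptables-save > /etc/iptables.rules'
        # then add under the iface stanza in /etc/network/interfaces:
        #   pre-up iptables-restore < /etc/iptables.rules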


  • Does this mean the router is faulty?

    - by Ashfame
    I have a router to which my desktop (running Ubuntu) is connected via LAN and which I use on my phone via wifi. Sometimes the LAN connection stops working for no reason while the wifi works fine, and the problem resolves itself on its own. Since last night the router has been restarting again and again on its own, so I lodged a complaint, and my ISP said the router is faulty and will be replaced; but I suspect they don't really know how these things work and are just shooting an arrow in the dark. These restarts have happened for the first time; the LAN/wifi issue described earlier is a common one (though not a frequent one). So: is the router faulty, or is there some issue on my ISP's side that will continue to persist even after they change the router? My best guess is that they will replace it with an older refurbished router, which will tend to give me more trouble in the coming time, so it's better to change it only if it is actually faulty (this one is new, six months old, and I am its first user). I'm happy to provide any details.


  • Problems with 5.1 digital out on Ubuntu 12.04

    - by user895319
    I've recently bought a new PC, installed Ubuntu, and am now unable to get 5.1 digital sound working. Simple analogue stereo works fine on both the front and rear connectors. On my old box I connected the coax connection from my soundcard to my surround-sound amplifier, set Settings > Sound to "Digital Stereo Duplex", and it worked. My old soundcard doesn't fit in my new machine, so I'm using the built-in sound hardware. I'm connecting the combination output socket on the back of the PC via the same cable to my surround amp as before. The motherboard is an MSI Global H61M-P31 with a Realtek ALC887 sound chip. When I go to Settings > Sound I only see "Headphone Built-in Audio" and "Analogue Output Built-in Audio"; no digital options. The output from aplay -L is:

        default
            Playback/recording through the PulseAudio sound server
        sysdefault:CARD=PCH
            HDA Intel PCH, ALC887-VD Analog
            Default Audio Device
        front:CARD=PCH,DEV=0
            HDA Intel PCH, ALC887-VD Analog
            Front speakers
        surround40:CARD=PCH,DEV=0
            HDA Intel PCH, ALC887-VD Analog
            4.0 Surround output to Front and Rear speakers
        surround41:CARD=PCH,DEV=0
            HDA Intel PCH, ALC887-VD Analog
            4.1 Surround output to Front, Rear and Subwoofer speakers
        surround50:CARD=PCH,DEV=0
            HDA Intel PCH, ALC887-VD Analog
            5.0 Surround output to Front, Center and Rear speakers
        surround51:CARD=PCH,DEV=0
            HDA Intel PCH, ALC887-VD Analog
            5.1 Surround output to Front, Center, Rear and Subwoofer speakers
        surround71:CARD=PCH,DEV=0
            HDA Intel PCH, ALC887-VD Analog
            7.1 Surround output to Front, Center, Side, Rear and Woofer speakers
        dmix:CARD=PCH,DEV=0
            HDA Intel PCH, ALC887-VD Analog
            Direct sample mixing device
        dsnoop:CARD=PCH,DEV=0
            HDA Intel PCH, ALC887-VD Analog
            Direct sample snooping device
        hw:CARD=PCH,DEV=0
            HDA Intel PCH, ALC887-VD Analog
            Direct hardware device without any conversions
        plughw:CARD=PCH,DEV=0
            HDA Intel PCH, ALC887-VD Analog
            Hardware device with all software conversions

    While googling for ALC887 I've seen some references to "ALC887-VD Analog" and some to "ALC887-VD Digital". Does anyone know if I need to force it to change mode somehow? It's worth mentioning that when I set the output to 5.1 digital surround in Windows 7 on the same machine, I still don't get any sound, so it's not a Linux-specific problem. Thanks for any help.
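    A sketch of how you might probe for a digital output from the shell; note that none of the device names above contain iec958, which is exactly what this checks (the device string in the second command is an assumption):

        # does ALSA expose an S/PDIF (IEC958) device at all?
        aplay -L | grep -i iec958
        # if one shows up, send a test tone straight to it
        speaker-test -D iec958:CARD=PCH,DEV=0 -c 2 -t wav

    If grep finds nothing, the driver is not exposing a digital PCM for this codec, which matches the Windows result and points at the board wiring or codec variant rather than the OS.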


  • Grub and Renaming... Why does Kubuntu 9.10 make it so hard?

    - by NH
    I'm trying to rearrange the GRUB menu in Kubuntu 9.10 (similar to this post), but unfortunately Kubuntu includes the latest (and NOT greatest) version of GRUB, which no longer uses the elegant menu.lst. ARG! So anyway, I'm digging around in /etc/grub.d and I can't figure out how to rename the files in order to get the entries to boot in another order. (On a side note, I can't get xPUD to show up in the boot list, but that is a little less important.)

    So why doesn't it work to do sudo grub in the terminal? (That seems to be the easiest option, but it doesn't work either.) Further, why can't I rename the files? Do I need to do it in the terminal? If so, how do I rename a file with the terminal? Can I run Dolphin (or Konqueror or whatever) as root (or su)? And don't tell me I need to try chmod first; I already tried that, and I still couldn't rename the file.
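    For the record, a sketch of the GRUB 2 way of doing this: the scripts in /etc/grub.d run in lexical order, so renaming them (as root, from a terminal) reorders the generated menu; the specific filenames below are this example's assumptions:

        # move the OS-prober entries ahead of the stock Linux entries
        sudo mv /etc/grub.d/30_os-prober /etc/grub.d/09_os-prober
        # regenerate /boot/grub/grub.cfg (there is no menu.lst to edit by hand)
        sudo update-grub

    The renames fail from Dolphin because /etc/grub.d is root-owned; sudo in a terminal (or an editor/file manager launched via kdesudo) is what grants the needed permission.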


  • Which CPU for XEN - LAMP testbed - Budget

    - by deploymonkey
    Dear serverfault knowledgeables, I'm in a decision dilemma right now which I can't resolve, due to lack of hands-on experience. I need to build a testbed for virtualizing a LAMP application (OSes not yet decided), including server-side calculations. I'll opt for Xen, since it seems better supported by cloud hosters at the moment. The hardware is for a proof of concept for a startup doing SaaS and might be used for a closed live alpha/beta later on. After testing, the testbed might be a) deployed as a colocated white-box server or b) used as a workstation. Single socket is enough. Requirements:

    - We want ECC memory for reliability; this excludes most of Intel's consumer line.
    - If an Intel CPU, then a threaded CPU (Hyper-Threading) is preferred.
    - At least 16 GB of RAM.
    - If justified by price, and reliability is not too bad, a high-quality desktop motherboard instead of a server board would be worth a try.

    It came down to the Opteron 6128 vs. the Xeon 5620 for me after a lot of research, but I don't necessarily have to be right. Which CPU is preferable concerning TCO (motherboard price, 24/7 power requirements, ...), the Opteron 6128 or the Xeon 5620? Which one offers better performance in real-world applications? (Do you have any other suggestions I probably overlooked?) Thank you for your consideration.


  • Running multiple copies of openssh-server (sshd) on Ubuntu

    - by cecilkorik
    I may be attacking this problem the wrong way; if so, let me know. I have a server which is reachable through SSH from both the public internet and the local LAN. I would like to have two very different security policies for each, by running two copies of sshd with two different sshd_config files, each on a different port. Some of the things I'd like to change: allow password or public-key authentication on the LAN, but public-key only from the internet; all (real) users could log in from the LAN side, but only certain authorized users would be individually whitelisted to log in through the internet. As far as I can tell this requires two different SSH daemons running on different ports with different sshd_configs. I am fine with the different-ports part; I can easily forward port 22 to any port I want through my firewall.

    So my question is: what is the best way to actually START the second sshd under Ubuntu 10.04 LTS? Is there a recommended way to do something like this? Surely I am not the first person with this sort of need. I have a bit of experience with upstart, and I suppose I could manually hack the second sshd into /etc/init/ssh.conf, but I'm not sure whether that would get overwritten by the package. However I do it, it's important that both sshd processes always get restarted after any automatic or manual upgrade of the openssh-server package. Thanks in advance.
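    A sketch of one way this is commonly done, with a separate config file and a separate upstart job; the package only owns /etc/init/ssh.conf, so a new job file should survive upgrades (file names and the specific options are assumptions):

        # make an internet-facing config: change Port, set PasswordAuthentication no,
        # add an AllowUsers whitelist, then syntax-check the result
        sudo cp /etc/ssh/sshd_config /etc/ssh/sshd_config.internet
        sudo /usr/sbin/sshd -t -f /etc/ssh/sshd_config.internet

        # give it its own upstart job so both daemons respawn independently
        sudo tee /etc/init/ssh-internet.conf >/dev/null <<'EOF'
        description "OpenSSH server (internet-facing instance)"
        start on filesystem or runlevel [2345]
        stop on runlevel [!2345]
        respawn
        exec /usr/sbin/sshd -D -f /etc/ssh/sshd_config.internet
        EOF
        sudo start ssh-internet

    Since the second instance shares the /usr/sbin/sshd binary with the packaged one, upgrades replace the binary for both; only the job file and config copy are yours to maintain.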


  • How to get Synergy working on Ubuntu 11.10 and Windows 7?

    - by Linda
    I'm using Ubuntu 11.10 32-bit and Windows 7 64-bit; however, Synergy only works when a window (application or folder) is open and touching the edge of the screen where the mouse should "jump". In other words, if a window is open and maximized, Synergy works normally. Without any windows, the mouse does not jump to the other screen. My steps:

    1. (Ubuntu) apt-get install -y quicksynergy
    2. (Windows) Install Synergy (I've tried both 1.3.8 and 1.4.8, and both 32-bit and 64-bit)

    On Ubuntu 11.10 32-bit, the Synergy server config (~/.quicksynergy/synergy.conf):

        section: screens
            myubuntu:
            mywin7:
        end
        section: links
            myubuntu:
                right = mywin7
            mywin7:
                left = myubuntu
        end

    On Ubuntu 11.10 32-bit:

        $ /usr/bin/synergys -f --config .quicksynergy/synergy.conf
        ...
        2012-04-25T14:04:12 NOTE: client "mywin7" has connected /build/buildd/synergy-1.3.6/lib/server/CServer.cpp,287

    (output hangs here). On Windows 7 64-bit:

        Synergy 1.3.8 Client on Microsoft Windows 7 x86 (WOW64)
        started client
        connecting to 'myubuntu': ###.###.###.###:24800
        connected to server

    (output hangs here). At this point things should work, but my mouse still can't change screens unless a window is maximized on my Ubuntu machine. Everything is running on port 24800. There is no firewall on Ubuntu, and port 24800 is open in the Windows 7 firewall. This previously worked on Ubuntu 10.10 with Windows 7 (so only Ubuntu has been upgraded). I'm open to using either 32-bit or 64-bit on either the server or the client side; I just want to get it working on Ubuntu 11.10 and Windows 7! I'm also using Ubuntu Classic (no effects), not Unity.


  • Get Safari to use different autocompletion on different URLs on same hostname

    - by Luke404
    I have a webserver publishing different services over the same SSL VirtualHost, the two most commonly used being phpMyAdmin and Cacti. These (and others) use 'cookie'-style authentication, asking for user and password in an HTML form (thus not using HTTP authentication). Being on the same hostname, the Safari browser didn't manage stored passwords too well: if I logged in to one app with user foo and then went to app two, it would propose user foo and its password in the login form. Changing just the username to bar used to be sufficient to make Safari autocomplete the correct password in its form field. Annoying, but I could live with it; usernames are short and easy to remember compared to the passwords we use.

    After the update to Safari 5 this seems to be no longer true: if I store credentials in Safari (actually the user keychain on OS X) for https://www.foobarbaz.com/app1 AND credentials for https://www.foobarbaz.com/app2, there seems to be no way for it to autocomplete each based on the URL. Even editing the keychain entry to add the path (it stores only the hostname by default) does not help. Is there anything I can do to make it work the way I want while still keeping everything on one hostname? Modifying things server-side is of course possible, but I can't switch the apps to HTTP auth (and not every one would support it anyway) to use different 'realms'.


  • Why does my Intel Tolapai network chip not transmit packets?

    - by Hanno Fietz
    I'm trying to deploy an embedded system (NISE 110 by Nexcom) based on the Intel EP80579 (Tolapai) chip. Tolapai integrates controllers for Ethernet etc. on a single chip (see the Intel homepage). The machine can't get a network connection. Diagnosis as far as I could manage:

    Drivers:
    - The drivers from Intel compiled and installed without problems (version 1.0.3-144).
    - The kernel version and Linux distribution (CentOS 5.2, 2.6.18) match the driver's installation instructions.
    - The drivers are loaded and show up in lsmod (module names are gcu and iegbe).
    - The interfaces eth0 and eth1 show up in ifconfig.

    ifconfig:
    - I can bring up the interfaces with a fixed IP.
    - Pinging the interface locally works.
    - ifconfig shows the UP flag but not RUNNING.

    Link:
    - ethtool shows "Link detected: no", "Speed: unknown (65536)" and "Duplex: unknown (255)".
    - The link LED is lit on the other side of the cable; there, ethtool shows "Link detected: yes" and reports a speed of 1000 Mbps, which has allegedly been auto-negotiated with the problematic device.

    Network traffic analysis:
    - The device does not reply to ARP, ICMP echo or anything else (iptables is down).
    - When trying to send ICMP or DHCP requests, they never reach the other end.
    - The activity LED is off on the device, on at the other end.

    I tried the following without any effect:
    - Different cables (two straight, one crossed); I get the link LED lit up with each.
    - Three different devices on the other end (one PC, one netbook, one router).
    - Fixed ARP table entries on both sides.
    - Connecting both network ports of the machine to each other; it won't ping through the cable, but will ping locally. I tried straight and crossed cables for that.
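    Given the "Speed: unknown" output, one low-cost test (a sketch; gigabit over copper requires autonegotiation, so the forced test drops to 100 Mb/s) is to take autoneg out of the picture on both ends and watch whether RUNNING ever appears:

        sudo ethtool -s eth0 autoneg off speed 100 duplex full   # force a simple mode as a test
        ethtool eth0                                             # watch for "Link detected: yes"
        sudo ethtool -s eth0 autoneg on                          # back to autonegotiation when done

    If even a forced 100/full never links while the peer is forced to match, the PHY itself (or its strapping on this board) is suspect rather than the negotiation.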


  • Server Clustering (Django, Apache, Nginx, Postgres)

    - by system-matrix
    I have a project deployed with Django, Apache, Nginx and Postgres. The project has a requirement of live data viewable to customers. The project's main points are:

    1. Devices in the field send data to the server after login (the devices are also website users, in effect).
    2. A background import process imports the uploaded data into Postgres.
    3. The web users of the system use this data and can send commands to the devices, which the devices read when they log in.
    4. There are also background analysis routines running on the data.

    All of the above is deployed on one Amazon EC2 machine. The project currently supports over 600 devices and 400 users, but as the number of devices increases over time, the performance of the server is going down. We want to extend this project so that it can support more and more devices. My initial thinking is that we will create one more server like the current one and divide the devices between the two; but again, we need a central user and device management point through the Django admin. Any ideas? What are the best possible ways to create a scalable architecture? How can I create a Postgres cluster and use it with Django, if possible?


  • GUI interfaces to ATI card behave weirdly out of the box and after updates.

    - by jdk
    My Lenovo W500 came with an ATI Mobility FireGL V5700, and both the Catalyst Control Center software and the Vista display manager show four monitors. What's really annoying is the behaviour: my two active displays (the laptop display plus my external monitor) are always numbered 3 and 4 respectively, which doesn't make sense. This is out of the box. Additionally, drag and drop is jumpy, and displays 1 and 2 (always inactive, because they don't exist to the software) often prevent me from dragging 3 and 4 to the rightmost side. They also auto-snap to weird positions, and certain sensible positions, like one directly on top of the other, are not possible. The exact same annoyances are present when using the Windows display manager too. In other words, the interface is crap, and I'm looking for a fix that isn't wishing I had gone with NVIDIA instead. I've updated the drivers and Catalyst Control Center and have the latest Windows and AMD/ATI updates. Any thoughts?

    Graphics software:
    - Driver Packaging Version: 8.563.2.1-090401a-079160C-Lenovo
    - Provider: ATI Technologies Inc.
    - 2D Driver Version: 7.01.01.849
    - 2D Driver File Path: /REGISTRY/MACHINE/SYSTEM/ControlSet001/Control/Class/{4D36E968-E325-11CE-BFC1-08002BE10318}/0001
    - Direct3D Version: 7.14.10.0630
    - OpenGL Version: 6.14.10.8306
    - Catalyst Control Center Version: 2009.0401.1328.22301

    Graphics hardware (primary adapter):
    - Graphics Card Manufacturer: Powered by ATI
    - Graphics Chipset: ATI Mobility FireGL V5700
    - Device ID: 9591
    - Vendor: 1002
    - Subsystem ID: 2126
    - Subsystem Vendor ID: 17AA
    - Graphics Bus Capability: PCI Express 2.0
    - Maximum Bus Setting: PCI Express 2.0 x16
    - BIOS Version: 010.088.000.021
    - BIOS Part Number: BK-ATI VER010.088.000.021.034663
    - BIOS Date: 2009/09/30
    - Memory Size: 512 MB
    - Memory Type: DDR3
    - Core Clock: 600 MHz
    - Memory Clock: 700 MHz


  • How to have a shell script available everywhere I SSH to

    - by aib
    I have a shell script which I simply cannot do without: bar from Theiling Online. I use SSH a lot, on a variety of *nix servers; however, I am not a system administrator and usually don't have the time or privileges to install it on every server I connect to. It is apparently a very portable sh script, and it has command-line options to export itself as a shell function, which got me thinking: could I use one of OpenSSH's subjectively obscure features to export it everywhere I go?

    My first thought was to assign the source to an environment variable, like BAR="cat -v", and then execute it on the other side as `$BAR`, but 1) I can't even get the cat example to work locally, 2) I don't know how to put the script's actual multiline source into an environment variable, and 3) I have yet to see a machine with PermitUserEnvironment enabled. I guess I could even live with an ssh option that writes a file called ~/bar at logon, but a more volatile solution would be better. Calling wget http://.../bar at logon would be unacceptable. Any ideas?

    P.S. PuTTY-specific solutions, though I doubt any exist, are also fine.
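    A volatile sketch of one approach: push the script over as the rc file of the interactive shell you start, so nothing needs pre-installing and nothing outlives the session beyond one temp file you remove on exit. The local path, and the assumption that bar behaves sensibly when sourced, are mine:

        # copy bar up, start a shell that sources it, delete it when the shell exits
        scp ~/bin/bar user@host:/tmp/bar
        ssh -t user@host 'bash --rcfile /tmp/bar -i; rm -f /tmp/bar'
        # note: --rcfile replaces ~/.bashrc, so add ". ~/.bashrc" inside /tmp/bar if needed

    Wrapping those two lines in a local function or alias gives a one-command "ssh with bar" that works anywhere scp does.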

