Search Results

Search found 11188 results on 448 pages for 'django manage py'.


  • Outlook 2010 IMAP account - send on behalf

    - by Master of Celebration
    So I was looking for a way to manage the mail distribution of online shops, newsfeeds, etc., and have a nice solution via distribution groups, a.k.a. alias addresses. For example, I register an account on eBay using "[email protected]" (where org.com is my company, obviously). That address is an alias and can be managed on my on-premises mail server, pointing its destination to somebody's mailbox without ever logging on to eBay - in case somebody else shall do the eBay stuff, I can quickly change the destination of that alias :-) So far, so good - and now to the problem: Using Microsoft Outlook 2010 and an IMAP account on our mail server, I cannot figure out how to remove the "on behalf of" string visible in the From field when sending a message under that [email protected] address. That's quite a pity, because eBay in particular doesn't accept/forward mail that doesn't come from the registered address. With other mail clients (e.g. Mozilla Thunderbird) the problem does not occur, so I guess it's Outlook-specific. I cannot grant permission to "send as", because that address is not a mailbox, but rather an alias only. Furthermore, the mail accounts are not Exchange, but IMAP! Does anybody have any other ideas to remove that annoying string? Consideration: We have to use Microsoft Outlook for some reason! :-)


  • SBS 2008 SharePoint database: what is a good memory limit?

    - by ldelgado
    I manage a small network running on Small Business Server 2008. Lately, the SharePoint embedded database is getting out of control with its memory usage. I've got a total of 16 GB of RAM on this server, and the SharePoint database sometimes uses almost 8 GB of RAM. This never happened before; it started after I installed Backup Exec 2010, and it happens after a backup is performed, so I suspect a memory leak is involved. I am working on that issue, but this question isn't about that. I would like to limit the amount of memory the embedded database uses. I know how to do it. My question is: what would be the ideal amount of memory to allocate to SharePoint? There are only 4 users on my network. One of the users uses two computers, but not at the same time. They use SharePoint for a company calendar, and sometimes they share files that way as well. Let me know if you need to know anything else. Thanks,
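
    For reference, a sketch of capping the embedded instance; the named pipe is the documented address of the MICROSOFT##SSEE (Windows Internal Database) instance on SBS 2008, and 2048 MB is only an assumed starting point for a four-user network, not a vendor figure:

        rem Run from an elevated prompt on the SBS box; adjust the MB value to taste.
        sqlcmd -S \\.\pipe\MSSQL$MICROSOFT##SSEE\sql\query -E -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sp_configure 'max server memory (MB)', 2048; RECONFIGURE;"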


  • NAS for Mac OS X Server

    - by SamAdmin
    I'm using Mac OS X Server and want to allow the users that connect to their network accounts to store their data on a NAS drive. I want the users to connect to the Lion server, as this allows for better policies and management for me, and their AFP share to be located on a NAS drive. I've looked into home directories and network logins; however, I don't want the users to connect into a different login environment - just an authentication against their account on the Lion server, with their Finder taking them to their own storage area, located on the NAS drive. Currently I am using FreeNAS for both authentication and storage; however, there are getting to be far too many people to manage each AFP share and account for, plus using FreeNAS alone is extremely limiting for expansion, and if something goes wrong with one entity the entire system goes down. Using the Lion server for user accounts and policies will be much better for this expanding business. I have looked into LDAP, using the Lion server as an LDAP server for FreeNAS to authenticate against; however, I have had issues with this and thought a different approach could be better, from the other side of the situation: providing the account with somewhere to store data, rather than the AFP share authenticating against an LDAP server. Am I wrong to try it this way? Is it possible to logically add storage to a Mac OS X Server so that it is recognised as a local drive and can be used for network accounts?
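
    One approach to that last question, sketched under stated assumptions: mount an export from the NAS on the Lion server itself, so the server, not the clients, treats it as local storage. All paths here are placeholders:

        # Assumes the FreeNAS box exports /mnt/storage over NFS; resvport is
        # required by most NFS servers when mounting from OS X.
        sudo mkdir -p /Volumes/NASUserData
        sudo mount -t nfs -o resvport,rw freenas.local:/mnt/storage /Volumes/NASUserData
        # /Volumes/NASUserData can then be selected as the location of an AFP
        # share point (and of home folders) in Server.app, while accounts and
        # policies stay on the Lion server.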


  • How can I see what processes make my server slow?

    - by Steven
    All my websites on my server are extremely slow or not loading at all. Even the server admin interface (Plesk) will not load sometimes. There have been no changes to the sites for the last couple of months. How can I see which processes are making my server slow? My environment looks like this: Server: VPS running Linux 2.8.x OS: CentOS 5 Management interface: Plesk 9.x Memory: 1024 MB CPU: 2.2 GHz My websites run on PHP and MySQL. I finally managed to connect (PuTTY + SSH) to my server. Running top did not show any processes using more than 2% CPU, and none were using excessive memory. I also got a friend to install a program that checks the core files, and all seemed fine. So I'm leaning towards network issues or some other server malfunction, but I'm not able to find out what could be wrong. Here are some answers to Sean Kimball: I don't run mail services on my server yet. There are no specific bandwidth peaks. Prefork looks like this: <IfModule prefork.c> StartServers 8 MinSpareServers 5 MaxSpareServers 20 ServerLimit 256 MaxClients 256 MaxRequestsPerChild 4000 </IfModule> Not sure what you mean by the DNS question, but I think it's up and running. There are no processes running wild. Where can I find the average load? Telnet is disabled and I have to log in using SSH :)
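
    To the load-average question and the network-vs-server doubt, a few standard checks over SSH (iostat needs the sysstat package on CentOS 5):

        uptime                        # load averages for the last 1, 5 and 15 minutes
        vmstat 2 5                    # si/so columns reveal swapping
        iostat -x 2 5                 # %util near 100 points at a disk bottleneck
        netstat -tn | grep -c ESTAB   # rough count of established connections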


  • Security Token for Mac/Linux/Windows, self-managed, pref. open source?

    - by DevelopersDevelopersDevelopers
    I'm looking to buy an evaluation security token (combined smart card/USB reader) for my business that works on: Windows 7 x64; OS X 10.6.x x64; Ubuntu Linux (64 or 32 bit, 10.04 or 10.10 - I can bend based on possible tokens). Functionality I need: login authentication; authentication for whole-disk encryption (in Linux/Windows; Mac is flexible here); signing/encryption using PGP and X.509 certificates; RSA-2048-capable keys (1024 is not good enough); I can manage the certificates myself; open-source middleware/drivers (not necessarily FOSS, just source available - I can flex on this, I just want to be able to audit the code; OpenSC-compatible on Linux would be great). Is there any token that can do all of this? Or would I need multiple tokens to accomplish it? Or do I need to look at separate smart cards and readers to get this? I have been researching this for a while and have had a heck of a time even getting accurate information about products. Also, I am in the USA, and it appears that EU export laws prevent me from buying from there, so those vendors are out. I was looking at Feitian tokens from Gooze, but since they are in France I can't buy.
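
    Whichever token you shortlist, OpenSC itself can audit the hard requirements before you commit; a sketch, assuming the opensc package is installed and the token's PKCS#11 module is loadable (some tokens need an explicit --module path):

        pkcs11-tool --list-slots          # is the token/reader detected at all?
        pkcs11-tool --list-mechanisms     # does it advertise RSA at 2048 bits?
        # The decisive test: can it actually generate a 2048-bit keypair on-card?
        pkcs11-tool --login --keypairgen --key-type rsa:2048 --label testkey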


  • SMF restarting service whenever there's output?

    - by Phillip Oldham
    I'm trying to add a custom service to SMF's configuration, which seems successful in that the service starts and there is a log file, but therein lies the problem: the service, on start-up, prints some logging messages to stderr. It seems that SMF is seeing those messages and, believing them to be errors, restarts the service, giving up after a number of tries and leaving the service off. Here's part of the log output: [ Mar 30 14:59:54 Enabled. ] [ Mar 30 14:59:54 Executing start method ("java server.CustomServer"). ] Starting server... [ Mar 30 15:00:04 Method or service exit timed out. Killing contract 107. ] Running the server directly on the command line is fine, and as far as I can see there are no errors being encountered during startup, other than the output. What would be the best way to manage this service with SMF? The logging is needed for diagnosing problems and would be problematic to disable. Is it possible to configure this service to only restart if the service exits?
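
    The "Method or service exit timed out. Killing contract 107." line suggests the start method never returns because the server runs in the foreground. A sketch of switching the service to SMF's wait model, assuming an FMRI of svc:/application/customserver (substitute your own):

        # If the startd property group does not exist yet, create it first:
        svccfg -s svc:/application/customserver addpg startd framework
        # "child" (wait model): the start method IS the daemon; SMF restarts it
        # only when the process exits, not when it writes to stderr.
        svccfg -s svc:/application/customserver setprop startd/duration = astring: child
        # Optionally keep core dumps/external signals from counting as failures:
        svccfg -s svc:/application/customserver setprop startd/ignore_error = astring: core,signal
        svcadm refresh svc:/application/customserver
        svcadm restart svc:/application/customserver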


  • Setting up a global MySQL Cluster in the cloud

    - by GregB
    I'm giving the question an overhaul to more specifically identify where I need help. I use two tools to manage a bunch of cloud servers: Puppet and Rundeck. Both of these can be configured to use a MySQL backend. I'd like to set up an instance of each application in both the U.S. and the U.K., treating the U.K. servers as hot standbys in case of failure in the U.S. I want to use a MySQL cluster so that the data is automatically replicated from the U.S. to the U.K. Because these are hot standbys, high performance is not a goal; redundancy and data integrity are most important. My question revolves around the setup of the MySQL cluster. I want to run three servers, each one running a data node, a SQL node, and a management node. Is this a valid MySQL Cluster configuration? If so, could someone point me in the right direction for creating such a setup? I've downloaded the official tarball and the official Debian package, and the documentation for them contradicts many of the online tutorials. I'm installing on Ubuntu 10.04.
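
    A minimal sketch of the management-node config.ini under those constraints; hostnames are placeholders. Note that the number of data nodes must be a multiple of NoOfReplicas, so with redundancy (NoOfReplicas=2) data nodes come in pairs - one reason the three-of-everything layout is awkward:

        [ndbd default]
        NoOfReplicas=2                     # every fragment held on two data nodes
        DataDir=/var/lib/mysql-cluster

        [ndb_mgmd]
        HostName=node1.example.com
        [ndb_mgmd]
        HostName=node2.example.com

        [ndbd]
        HostName=node1.example.com
        [ndbd]
        HostName=node2.example.com

        [mysqld]
        HostName=node1.example.com
        [mysqld]
        HostName=node2.example.com
        [mysqld]
        HostName=node3.example.com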


  • How do I serve Ruby on Rails applications on Windows Server 2008?

    - by Adam Lassek
    I have spent the last several hours attempting to get Ruby on Rails running on a Windows server, with no luck. At first I tried configuring a test application through IIS7's FastCGI support, but the documentation for this is not very good. I've been following this blog entry, and this one, and this one, and this one, but everything seems to be missing major steps or is out of date. And every article keeps linking back to this Howto from rubyonrails.org that doesn't exist. The sense that I'm getting is that even if I manage to make this work, IIS's FastCGI isn't good enough to use in a production environment anyway. So it looks like my best bet is to set up a reverse proxy in IIS that points to Apache and Mongrel/Passenger, using ARR and URL Rewrite. Is there anybody else out there stuck deploying a Rails application on a Windows stack? Am I on the right track? Can you give me a better idea of how to configure this? I believe Plesk already installed an instance of Apache/Tomcat running on this server on a different port, so adding another virtual host shouldn't be difficult; the hardest part seems to be setting up the reverse proxy through IIS.
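
    For the reverse-proxy route, a sketch of the URL Rewrite rule (web.config at the site root), assuming ARR is installed with proxying enabled at the server level and a Mongrel/Passenger instance listening on localhost:3000 - the port is a placeholder:

        <configuration>
          <system.webServer>
            <rewrite>
              <rules>
                <!-- Forward every request to the Rails backend on port 3000 -->
                <rule name="RailsProxy" stopProcessing="true">
                  <match url="(.*)" />
                  <action type="Rewrite" url="http://localhost:3000/{R:1}" />
                </rule>
              </rules>
            </rewrite>
          </system.webServer>
        </configuration>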


  • How to access a Subversion repository via svn:// and https://?

    - by Hikari
    I know these are noob questions, but I never got my own Subversion server running before and I'm kind of lost. I installed VisualSVN on Windows, but it doesn't support the svn:// protocol by default, only HTTP or HTTPS. It is working fine over HTTP, and I'm able to manage it from its management tool, see its repositories and get their HTTP-based URLs, and from those I'm able to use Tortoise to check out and check in. I'm able to check out from a repository URL using Tortoise: http://Main:90/svn/HikariKrumo/ But I need the svn:// protocol for Redmine to access it. Redmine claims to support http://, but it reports this error message: "The entry or revision was not found in the repository." And I need HTTPS to access it from the Internet. If I can get Redmine to access it via svn://, I can just configure it to use HTTPS in place of HTTP, and I hope it all works. I like VisualSVN because of its management tool, but I can use another Subversion distribution if needed, as long as it supports svn:// and https://. I'm going crazy over this because it should be simple, but I can't get it to work.
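
    For the svn:// side specifically, the svnserve daemon that ships with every Subversion distribution can be pointed at the same repository root; the paths below are placeholders based on VisualSVN's default layout:

        rem Serve the existing repositories over svn:// (port 3690):
        svnserve -d -r C:\Repositories
        rem The repository from the question is then reachable as:
        rem   svn://Main/HikariKrumo
        rem Or register it as a Windows service so it survives reboots:
        sc create svnserve binpath= "\"C:\Program Files\VisualSVN Server\bin\svnserve.exe\" --service -r C:\Repositories" displayname= "Subversion (svnserve)" depend= Tcpip start= auto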


  • What are possible results/side effects if replication between DCs in a Windows domain is unable to occur?

    - by hydroparadise
    There's plenty of administration literature out there on how to properly manage Windows servers. But in dealing with real life, things don't always occur like you want them to. In Microsoft's Windows Server 2003 Administrator's Companion, out of 1400+ pages, there's only one page that I could find on setting up additional domain controllers. They make it sound seamless and don't reveal a whole lot about what happens if "peer" DCs are unable to replicate. Down to the specific issue at hand: we had a DC go down about a month ago due to a bad RAID controller. There was nothing critical that warranted immediate attention, so bringing it back up got put on the back burner. A month later, we got the DC back up and running, and everything seemed OK. The next day, nobody is able to log on, complaining that the "user does not exist" or "unable to establish a trust relationship". Knowing that I had just put the downed DC back on the network, I immediately took it back off the network and had everybody restart their workstations. After that, Exchange was fine, shares became available, and everybody was able to log in. After doing some event log swimming, it would appear that everything started due to replication issues on the SYSVOL. I've read where you can force replication, but that would mean putting it back on the network. I am afraid to put the DC back on the network for fear that something else could go wrong. So, what other issues could one expect to run into where two DCs are unreplicated for over a month?
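
    Before putting the repaired DC back on the wire, the standard checks are sketched below (the DC name and domain DN are placeholders). The key thing to verify is that the outage did not exceed the forest's tombstone lifetime, since exceeding it is what turns a long-unreplicated DC into a source of lingering objects:

        rem Run from a healthy DC:
        repadmin /replsummary
        repadmin /showrepl DC2
        dcdiag /test:replications /s:DC2
        rem Compare the outage length against the tombstone lifetime in days
        rem (a blank value means the 60-day default on older forests):
        dsquery * "cn=Directory Service,cn=Windows NT,cn=Services,cn=Configuration,dc=example,dc=com" -attr tombstoneLifetime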


  • Autossh startup on Ubuntu 10.04 - fails after powering off

    - by grant
    I'm using Upstart to keep a reverse SSH tunnel alive using autossh, similar to Using Upstart to Manage AutoSSH Reverse Tunnel. This works fine, except that after a manual power-down I can no longer connect to the machine through the "central server" using the tunnel; I receive "ssh_exchange_identification: Connection closed by remote host". The autossh process is running on the client. I can connect again after restarting networking. I'm trying to figure out why this fails consistently after a manual shutdown. Is it possible that I need to do some cleanup on startup that would allow the tunnel to work in this situation, or are there some other debugging/troubleshooting steps I can take to determine the problem? Machine A is the client machine, using autossh. This machine sits behind a firewall and uses the following command in Upstart to create an SSH tunnel: /usr/bin/autossh -fN -i /keyfile -o StrictHostKeyChecking=no -R 20098:localhost:22 user@centralserver Machine B, which we'll call the "central server", sits in the cloud and is the host; this machine is "centralserver" in the command above. When Machine A is hard powered off and back on, I cannot connect to it by SSH'ing from my machine (C) to Machine B in the cloud and then using the following command to get to Machine A: ssh -p 2098 user@localhost Again, after a reboot of the client (A), this works fine; it is only after a hard power down that the problem occurs. There are autossh processes running on the client machine (A) after powering down and back up, but they just don't seem to be doing their job.
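
    One likely culprit after a hard power-off is a stale sshd on the central server still holding the forwarded port, so the new autossh session comes up without a working forward. A sketch of the same command with dead-peer detection added, using only standard OpenSSH options:

        # ExitOnForwardFailure makes ssh die (and autossh retry) when the remote
        # port is still occupied, instead of running without the tunnel; the
        # ServerAlive options detect a dead peer within ~90 seconds.
        /usr/bin/autossh -fN -i /keyfile \
            -o StrictHostKeyChecking=no \
            -o ExitOnForwardFailure=yes \
            -o ServerAliveInterval=30 -o ServerAliveCountMax=3 \
            -R 20098:localhost:22 user@centralserver
        # On the central server, setting ClientAliveInterval in sshd_config
        # helps reap the half-dead session that holds the port.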


  • Setting up DNS using BIND

    - by dupdupdup
    I have trouble setting up my db files. Please kindly point me in the right direction! I need to define a nameserver that manages the domain example.org.au, and I need it to have two records: one called "server", which is the IP address of the current machine, and another called "www", where www.example.org.au is pointed at another IP address. I can't seem to get my system to work. This is my db.example.org.au file:

        $TTL 1h
        example.org.au.  IN  SOA  server.example.org.au. admin.example.org.au. (
                             1    ; serial
                             3h   ; refresh
                             1h   ; retry
                             1w   ; expire
                             1h ) ; negative-cache TTL
                         IN  NS   server.example.org.au.
        localhost        IN  A    127.0.0.1
        www              IN  A    192.168.1.200  ; another virtual machine
        server           IN  A    192.168.1.199  ; current virtual machine

    If possible, please correct my errors! Any good guides out there? Thanks in advance! :)


  • Windows Server 2008 R2 grinds to a screeching halt during file copy operations

    - by skolima
    When my Windows Server 2008 R2 machine is performing any large disk operation (copying 10 GB files from one drive to another, copying similar files over the network, merging Hyper-V snapshots, compressing large files), the performance of the whole machine slows down terribly and everything becomes unresponsive. This is noticeable in any situation where the disk access is large enough not to fit in the cache. Are there any settings available for tuning this behaviour? I can accept slower file transfers if that would give me more responsiveness. System details: Dell OptiPlex 960, Core 2 Quad Q9650, 8 GB RAM, 2 SATA drives - 320 GB (ST3320418AS) and 1 TB (ST31000528AS), NCQ active on both, Intel 82564LM-3 Gigabit Ethernet, ATI HD 3450 graphics, Intel ICH10 bridge. We have multiple machines like this, and every one exhibits the same behaviour. I thought this was overkill for a workstation; apparently I was mistaken. Update: I guess I shouldn't have mentioned Hyper-V at all. The above configuration is a standard workstation setup at the company I work for; this is not a server of any kind. I have at most 3 virtual machines running, and usually I'm the only person accessing them. Nevertheless, the slowdown occurs even when no VMs are running. On a Linux machine I'd simply ionice the copy process and could forget about it; is there any way to manage IO priorities on Windows?
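
    There is no direct ionice equivalent on Windows, but two commonly used workarounds are sketched below; the paths and values are placeholders:

        rem Throttle the copy itself: /IPG inserts a delay (in ms) between
        rem 64 KB blocks, trading throughput for responsiveness.
        robocopy D:\source E:\dest big.vhd /IPG:10
        rem Or demote an already-running copy process (16384 = Below Normal):
        wmic process where name="robocopy.exe" CALL setpriority 16384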


  • Booting Windows from a different partition than the system partition

    - by szamil
    I have bought an SSD, but my laptop (Dell Precision M6300) refuses to use it as a target disk for Windows (AHCI on/off, BIOS up to date). Unfortunately, I can't exchange the disk... But fortunately, I've managed to install Windows using a USB disk enclosure. The problem is that when I put the disk in as my internal drive, it can't boot ("Disk read error", three-finger salute...). So I tried with Linux (openSUSE); I managed to install it as well, but when I tried to boot GRUB from the internal drive I got errors again. (Should I try GRUB2?) I figured out that I can boot into the internal hard drive's openSUSE system using a small USB drive with GRUB, a kernel, and an image on it. So I just run GRUB from the USB drive, it loads the necessary stuff from the USB drive, and then it continues from the internal drive. I want to do the same with Windows, but GRUB (rootnoverify and chainloader +1) does not boot the Windows on the internal drive. The question: is there any chance to copy the critical Windows boot files onto the USB drive, to make it possible to boot from the USB drive but continue booting from the internal (different, in general) drive? The USB drive would become a system hardware key! ;-) Disk: Plextor M5S 128 GB SATA III; the laptop has SATA II, but it's compatible anyway, right?
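
    For the GRUB side, the classic trick for chainloading Windows from a non-first disk is remapping the BIOS drive numbers; a sketch for menu.lst on the USB stick, assuming the BIOS enumerates the stick as hd0 and the internal SSD as hd1 (adjust if yours orders them differently):

        title Windows (internal disk)
        # Swap the BIOS drive numbers so the Windows loader believes it is
        # booting from the first disk.
        map (hd0) (hd1)
        map (hd1) (hd0)
        rootnoverify (hd1,0)
        chainloader +1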


  • SQL Server 2012 memory usage steadily growing

    - by pgmo
    I am very worried about the SQL Server 2012 Express instance on which my database is running: the SQL Server process's memory usage is growing steadily (1.5 GB after only 2 days of operation). The database is made up of seven tables, each having a bigint primary key (identity) and at least one non-unique index with some included columns to serve the majority of incoming queries. An external application calls, via Microsoft OLE DB, some stored procedures, each of which does some calculations using intermediate temporary tables and/or table variables and finally does an upsert (UPDATE.... IF @@ROWCOUNT=0 INSERT.....) - I never DROP those temporary tables explicitly. The frequency of those calls is about 100 calls every 5 seconds (I saw that the DLL used by the external application opens a connection to SQL Server, does the call, and then closes the connection, for each and every call). The database files are organized in only one filegroup, and the recovery model is set to simple. Some questions to diagnose the problem: is that steadily growing memory normal? Did I make any mistake in database design which probably leads to this behaviour (no explicit temp-table drops, filegroup organization, etc.)? Can SQL Server manage such a stored-procedure call rate (100 calls every 5 seconds, i.e. 100 upserts every 5 seconds, beyond the intermediate calculations)? Does the continuous "open connection / do sp call / close connection" pattern disturb SQL Server? Is it possible to diagnose what is causing such memory usage - perhaps queues of waiting requests? (I ran sp_who2, but I didn't see a big number of orphan connections from the external application.) If I restrict the amount of memory which SQL Server is allowed to use, may I sooner or later get into trouble?
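
    To the first diagnosis question: a steadily growing working set is normal for SQL Server (the buffer pool grows until it reaches 'max server memory', and capping that setting is a supported way to bound it). The memory-clerk DMV shows who is actually holding the memory; a sketch, with the instance name a placeholder:

        rem Top memory consumers inside the instance (SQL Server 2012 DMV):
        sqlcmd -S .\SQLEXPRESS -E -Q "SELECT TOP 10 type, SUM(pages_kb)/1024 AS mb FROM sys.dm_os_memory_clerks GROUP BY type ORDER BY mb DESC;"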


  • Puppet execution of a Python script where the os.system(...) command is not working

    - by philippe
    I am trying to manage Unix users with Puppet. Puppet provides enough tools to create accounts and provision authorized_keys files, for instance, but not to set up a user's password and tell it to the user. What I have done is a Python script which generates a random password and sends it to the user by email. The problem is that launching the passwd Unix command from Python does not work, so I have written a bash script with the commands: echo -ne "$password\n$password\n" | passwd $user passwd -e $user Launched manually, the script works fine and the created user gets its password sent by email. But when Puppet launches it, only the Python script gets executed, as if the os.system('/bin/bash my_bash_script') call were ignored. No error is displayed; the user still gets the password by email, but the passwd commands are not run. Is there any limitation in Puppet preventing it from doing what I described? Or how can I otherwise change the user account and its expiration, and send the password by email? I can provide more information, but right now I don't know which details are relevant. Many thanks!
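
    A sketch of a more non-interactive-friendly version of the bash script: chpasswd is designed for exactly this and, unlike passwd, never expects a tty, which is a common reason the piped passwd silently does nothing when run from Puppet:

        #!/bin/bash
        # Assumes the script runs as root and $user/$password are already set.
        echo "$user:$password" | chpasswd
        chage -d 0 "$user"   # same effect as passwd -e: force a change at first login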


  • How can I determine the IP addresses allocated by DHCP on a router that I'm connected to?

    - by user234831
    This "router" is not a typical situation. I'm using my phone as a hotspot and can only configure a select number of DHCP options. I can manage the limit on how many devices/clients can use my phone as a hotspot. I have to select from a radio-button list with the options: 2,3,4,5, or 8 I can specify the DHCP starting IP address. In this case, it begins at 192.168.6.106 When I'm connected via WIFI to my phone, an ipconfig /all command shows me that the default gateway is 192.168.6.1 and my IPv4 address is 192.168.1.148. I have the luxury of connecting another device to the phone and that device was assigned 192.168.1.121. I've tried connecting to 192.168.6.1, hoping for some sort of router setup page that I'm used to seeing, but there is no such thing or maybe it's just a matter of incompatable operating systems. In summary, the "router" (phone) has an IP address of 192.168.6.1 and a DHCP server that begins at 192.168.6.106 and allows up to 8 connections. Normally, I would assume a range of 192.168.6.106 - 192.168.6.113, but connected clients are showing otherwise. How can I figure out which IP addresses are set aside by DHCP for clients?


  • Google Sites (via Apps) setup questions

    - by Dave
    I thought that it would be a piece of cake to set up a Google site via Google Apps, but perhaps my previous (limited) experience with web development has given me unrealistic expectations. I have actually had a really tough time finding help with the exact question that I have, which is: how do I change the home page contents??? You see, I'm used to having hosting with someone like GoDaddy, where I can just FTP in and drop my HTML files in the www folder. From research I have found that this is simply not possible with any flavor of Google Sites. That's fine, I can live with it. So let's say I have www.mydomain.com. When I hit that URL, it redirects me to a very long URL (unfortunately) like https://sites.google.com/a/mydomain.com/sites/system/app/pages/meta/domainWelcome, which just says: Google Apps - Welcome to mydomain.com - If you are the domain administrator get started creating your home page with Google Sites. Great! I want to do that. So I click on the "If you are the..." link and end up at a screen where I can choose a template, a name, and some visibility options. If I click on My Sites, there isn't a "default" site, i.e. the one that www.mydomain.com displays. I figured that maybe I just had to create a site first, so I went ahead and did that. My first test was to create a site that was publicly accessible; I thought that maybe Google would decide that this must be my home page since it's the only one, but it doesn't, and I still get the "domain welcome" page. Under "More Actions" I didn't see anything interesting except for "Manage site". I went in there and had a peek around and didn't see anything about using a site as the default home page. Am I looking for something that just doesn't exist? I can't believe there isn't a way to modify the "Welcome to" page...


  • Permissions for Multiple User VPS

    - by adnymarc
    I have a Linode VPS that I have recently set up and am migrating to from Media Temple, where I have a VPS managed by Plesk. I dislike the Plesk interface and the mess it makes of a lot of things, but appreciated its ability to allow multiple people access to different domains on a server. I have almost everything set up the way I would like it, but am having issues with permissions for my domain directories. I am running Ubuntu 8.04 LTS and Apache 2 as my web server. Domains are successfully located in /var/www/vhosts/domainname.com, but I have to modify files as root in order to add/change files for the domains. I would like to set up access with the following criteria: each domain can have a user assigned to it (allowing the same user to manage multiple domains - I could even create symlinks in their home folder to the domains they control); certain users will have shell access and may be chrooted to the domain directory they control; FTP needs to be set up and able to access the domains correctly, so that content editors for each domain can upload/download without permissions issues. I am relatively new to Linux sysadmin work and have searched for a good guide to help solve these issues, but haven't been able to find one yet. Thanks in advance for your help.
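
    A sketch of the usual group-per-domain pattern for one domain, matching the vhost layout from the question; the group and user names are placeholders:

        groupadd dom_example
        useradd -m -s /bin/bash -G dom_example editor1
        chown -R root:dom_example /var/www/vhosts/domainname.com
        # setgid bit (the leading 2): files created inside inherit the domain
        # group, so FTP/shell uploads from any editor stay group-writable.
        chmod -R 2775 /var/www/vhosts/domainname.com
        ln -s /var/www/vhosts/domainname.com /home/editor1/domainname.com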


  • Ubuntu 12.04 KVM virtual server network setup, can't get the machine to be connectable

    - by xyious
    I have worked on my Ubuntu Server host for weeks now and I just cannot manage to get the virtual machines onto the network... Here's what I need to do: I need to be able to create virtual machines that have IP addresses reachable from the outside (the 192.168 network). I need to be able to connect to the virtual machines through SSH, FTP, and HTTP, and preferably HTTPS; anything else doesn't matter that much. So far everything seems simple enough, and I have a lot of leeway in terms of IP address range and server/client configuration: I have the option of taking part of a /24 network, as most IPs aren't used, and if it's absolutely necessary I have the option of creating a new /24 subnet. I also have the option of reformatting and reinstalling the OS on the host and recreating the virtual machines, as nothing has been done other than trying to get the virtual machines to work. I would prefer the virtual machines to just be part of the normal network, which would be 192.168.5.0/24. The host machine has 2 network cards, so I don't even necessarily need the host to be connectable in the same /24 network. I have tried (I think) just about everything from about 5 different tutorials on bridging: giving br0 the same IP that eth0 used to have (host is able to connect to VM and vice versa, but the VM doesn't have outside network access), keeping eth0 set up as it always was and giving br0 a different IP (same as above), NAT with port forwarding (which I would have preferred not to use, but will if it works), turning off one of the host's network cards and just using the other, different subnets... etc. I do know my way around iptables fairly well... The host is 64-bit Ubuntu Server 12.04, using libvirt/KVM. Edits: the local network is 192.168.5.0/24; the host has static IP 192.168.5.254, gateway .5.1, which is also the nameserver. We have a second local network at 192.168.10.0/24 with gateway .10.1, but both the host and the VMs were supposed to go into the .5 subnet. The .10 subnet isn't required, but it wouldn't be horrible if the host were only accessible in the .10 subnet.
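
    For reference, a sketch of the host's /etc/network/interfaces for a plain bridge on the .5 subnet (requires the bridge-utils package); guests then attach with <interface type='bridge'><source bridge='br0'/></interface> in their libvirt XML and take their own .5.x addresses:

        auto lo
        iface lo inet loopback

        # eth0 is assumed to be the NIC wired to the 192.168.5.0/24 LAN; it
        # carries no address of its own - br0 takes over the host's IP.
        auto br0
        iface br0 inet static
            address 192.168.5.254
            netmask 255.255.255.0
            gateway 192.168.5.1
            dns-nameservers 192.168.5.1
            bridge_ports eth0
            bridge_stp off
            bridge_fd 0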


  • What's the proper way to prepare chroot to recover a broken Linux installation?

    - by ~quack
    This question relates to questions that are asked often. The procedure is frequently mentioned or linked to offsite, but is not often clearly and correctly stated. With the objective of concentrating useful information in one place, this question seeks to provide a clear, correct reference for the procedure. What are the proper steps to prepare a chroot environment for a recovery procedure? In many situations, repairing a broken Linux installation is best done from within the installation. But if the system won't boot, how do you fix it from within? Let's assume you manage to boot into an alternate system. Once there, you need to access your broken installation in order to fix it. Many recovery how-tos recommend using chroot in order to run programs as if you were actually booted into the broken installation. What is the basic procedure? Are there accepted best practices to follow? What variables need to be considered in order to adapt the basic preparation steps to a particular recovery task? As this is Community Wiki, feel free to edit this question to improve it as well.
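
    The widely used preparation sequence, sketched for a broken root filesystem assumed to be on /dev/sda1 (adjust devices, and also bind any separate /boot, /home, etc. the installation uses):

        mount /dev/sda1 /mnt
        mount --bind /dev /mnt/dev
        mount --bind /dev/pts /mnt/dev/pts
        mount -t proc proc /mnt/proc
        mount -t sysfs sysfs /mnt/sys
        cp /etc/resolv.conf /mnt/etc/resolv.conf   # so package tools inside can resolve DNS
        chroot /mnt /bin/bash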


  • How to fix "The connection was restarted"?

    - by Altar
    I have a PHP script with AJAX. A few days ago, without any changes being made, the script stopped working properly. The script works sometimes, but it is very slow; other times it doesn't load completely; yet other times it loads an empty page or shows a "The connection was restarted" message. We have tried loading the page from 4 different computers in two different countries (two different ISPs). There is nothing relevant in the server's log. We contacted iPage.com hosting support and sent screenshots, and all we managed to get from them was an incomplete screenshot and the message 'We tested. The page loads'. So they won't admit the fault. It looks like they loaded the page once, declared it working, and went back to playing solitaire. I know their support; we have had all kinds of problems with iPage hosting. My questions are: 1. Is there any way to make this error easier to reproduce? I mean, instead of fixing the error, getting it to appear more often. 2. Is there any way to test the server's responses, to see if it drops connections?
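
    To question 1, a sketch that often makes an intermittent failure visible: request the page repeatedly from outside and log the HTTP status, timing, and curl's own exit code. The URL is a placeholder for the affected page:

        for i in $(seq 1 100); do
          curl -s -o /dev/null --max-time 30 \
               -w "%{http_code} %{time_total}s\n" "http://example.com/page.php" \
            || echo "curl exit code $?"   # 28 = timeout, 52/56 = dropped connection
        done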


  • OSX server setup suggestions

    - by Tom
    I am looking into the possibility of setting up an OS X server for my employees, and would like some input on the best approach to meet my needs, and perhaps some suggestions if I am moving in the wrong direction. I am thinking of a Mac mini OS X server, and am not sure whether my needs will be met and what possibilities are out there. I want these capabilities: - Groups/users managed on the server - Shared folders and private folders for users/groups - Access to activated services - Server-hosted software for the users (development tools ++) - Similar to Windows Terminal Server - Virtual desktop environment (both local and over internet/VPN) - Possible to access through Mac and Windows. The reason I am looking at OS X Server is that my employees work almost exclusively in an OS X environment, and I want to offer the ability to log on to the server through some kind of terminal software and have full access to their work OS X environment and software from their Mac or PC, from anywhere they might be - instead of having multiple setups and needing to spend a lot of time installing and setting up software on every client. This is a small business, where some work on the local network and others from the internet, preferably through VPN. But a terminal server solution that is fast and easy to manage would be perfect for our needs. So if anyone has experience with a similar setup, please let me know what you did and your experiences with it.


  • Using Openfire for distributed XMPP-based video-chat

    - by Yitzhak
    I have been tasked with setting up a distributed video-chat system built on XMPP. Currently my setup looks like this: Openfire (XMPP server) + JingleNodes plugin for video chat OpenLDAP (LDAP server) for storing user information and allowing directory queries Kerberos server for authentication and passwords In testing with one set of machines (i.e. only three), everything works as expected: I can log in to Openfire and it looks up the user information in the OpenLDAP database, which in turn authenticates my user with Kerberos. Now, I want to have several clusters, so that there is a cluster on each continent. A typical cluster will probably contain 2-5 servers. Users logging in will be directed to the closest cluster based on geographical location. Something that concerns me particularly is the dynamic maintenance of contact lists. If a user is using a machine in Asia, for example, how would contact lists be updated around the world to reflect the current server he is using? How would that work with LDAP? Specific questions: How do I direct users based on geographical location? What is the best architecture for a cluster? -- would all traffic need to come into a load-balancer on each one, for example? How do I manage the update of contact lists across all these servers? In general, how do I go about setting this up? What are the pitfalls in doing this? I am inexperienced in this area, so any advice and suggestions would be appreciated.


  • Incorrect Internal DNS Resolution

    - by user167016
    I'm having a DNS issue on Server 2008 R2. The first clue was that after being off the network for a month, I could no longer Remote Desktop into my workstation by name - it wouldn't be found, both via VPN and internally - but connecting by IP works. Now I notice in the server's Share and Storage Management, under Manage Sessions, that the incorrect computer name is displayed for some users. So I try, for one example: Ping -a 192.168.16.81 Pinging BOBS_COMPUTER.ourdomain.local [192.168.16.81] with 32 bytes of data: - all replies successful. Then I try Ping RICHARDS_COMPUTER Pinging RICHARDS_COMPUTER.ourdomain.local [192.168.16.81] with 32 bytes of data: - all replies successful. In DHCP, .81 belongs to RICHARDS_COMPUTER. I did try flushdns. Not sure if this is related - apologies if it's not - but when I try to connect, I also get prompted: "The identity of the remote computer cannot be verified. Do you want to connect anyway? The remote computer could not be authenticated due to problems with its security certificate. It may be unsafe to proceed." It then lists the correct name as the name in the certificate from the remote computer, but claims that the certificate is not from a trusted authority. Any thoughts are most appreciated!
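
    This pattern usually points at stale A/PTR records in DNS rather than a DHCP problem; the standard cleanup is sketched below, assuming AD-integrated DNS on the 2008 R2 server (the zone name is a placeholder):

        rem On the affected workstations, refresh their own registrations:
        ipconfig /flushdns
        ipconfig /registerdns
        rem On the DNS server: verify both lookups, then enable aging and
        rem scavenging so stale entries age out (168 h = 7 days).
        nslookup 192.168.16.81
        nslookup RICHARDS_COMPUTER
        dnscmd /config ourdomain.local /aging 1
        dnscmd /config /scavenginginterval 168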

