Search Results

Search found 9715 results on 389 pages for 'servers'.

Page 305 of 389

  • Chroot jail of Nginx and php

    - by sqren
    I'm hosting multiple websites on one VPS and want to chroot each website, e.g. /chroot/website1 and /chroot/website2. I'm using makejail, which is a high-level tool, for creating the jails and copying in the libraries and dependencies. Easy peasy. Each website will need nginx, PHP and MySQL. For PHP I'm using php5-fpm, which actually supports chroot via its configuration, though I'm not using that yet (maybe I should?). My question is which of the following three approaches is better:
        1) Every website gets its own separate instance of nginx, PHP and MySQL. The downside is that each web server + PHP has to listen on a different port, and I also need a "master" nginx in front of them, reverse proxying to the chrooted servers behind it. Probably the most secure, but also the most involved.
        2) I don't make any chroot jails manually. I set up one nginx web server that proxies PHP requests to php-fpm on different ports, with multiple php-fpm configurations, each with its own chroot directory. This is quite manageable - however only PHP is chrooted, not the web server itself. Is that secure enough? Also, having tried this option out, it seems I will need to use TCP instead of sockets for connecting to MySQL (rough sketch below).
        3) You tell me ;)
    I'm quite new to chroot jailing, so please correct me if I'm wrong in my assumptions. I've been reading all the tutorials I could find, but chroot guides seem to be pretty scarce. Any help or input is much appreciated!
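
    Roughly what I have in mind for option 2 - one chrooted php-fpm pool per site, with a single nginx in front talking to it over loopback TCP. Just a sketch: the pool name, port, paths and domain below are placeholders, not anything from a real config.

        # /etc/php5/fpm/pool.d/website1.conf  (hypothetical pool, chrooted to the jail)
        [website1]
        user = www-data
        group = www-data
        listen = 127.0.0.1:9001
        pm = static
        pm.max_children = 4
        chroot = /chroot/website1
        chdir = /

        # nginx server block for the same site; SCRIPT_FILENAME is relative to the chroot
        server {
            listen 80;
            server_name website1.example.com;
            root /chroot/website1/public;
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9001;
                fastcgi_param SCRIPT_FILENAME /public$fastcgi_script_name;
                include fastcgi_params;
            }
        }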

  • Backup server (OSX) like time machine to backup remote ubuntu 12.04 server [on hold]

    - by Mad
    I've searched my ass off for a good solution to back up my Ubuntu server that's in a datacenter. Locally we have an OS X server with some external drives attached to it; it handles Time Machine for the local workstations. What I'd like to do is fetch the files (or mount the root of my Ubuntu server) and make a Time Machine backup of them. The one problem is that if my OS X server crashes, I can't just restore the system, because the backup would contain not only the OS X server but also the Ubuntu server from the datacenter. I've used Back In Time on Ubuntu to do exactly this, but that was Ubuntu (local) backing up Ubuntu (datacenter). So does anybody have a solution? Here are my requirements:
        - Set time intervals for backups; it needs to be backed up nightly.
        - Set retention intervals for keeping backups: hourly, weekly, monthly, etc.
        - Able to back up all computers and servers from the offsite location to the local OS X server (10.9).
        - Manageable from that one location, e.g. logging in with ssh to run rsync or rsnapshot.
        - Has a GUI (OS X).
        - Acts like Time Machine: backs up only the files that have changed, and can restore to a point back in time.
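
    For comparison, this is the sort of thing I'd otherwise script by hand from the OS X box - rsync with --link-dest gives Time Machine-style snapshots. The host, destination volume and naming are placeholders; it's a sketch, not a finished tool.

        #!/bin/bash
        # Hypothetical host and destination volume -- adjust before use.
        SRC="root@server.example.com:/"
        DEST="/Volumes/Backups/ubuntu-server"
        STAMP=$(date +%Y-%m-%d_%H%M)
        mkdir -p "$DEST"
        # Unchanged files become hard links to the previous snapshot, so each
        # run looks like a full backup but only costs the changed files.
        rsync -a --delete \
          --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/run --exclude=/tmp \
          --link-dest="$DEST/latest" \
          "$SRC" "$DEST/$STAMP" \
        && rm -f "$DEST/latest" \
        && ln -s "$STAMP" "$DEST/latest"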

  • SSH garbling characters in vim/nano on remote server

    - by geerlingguy
    ... and it's driving me insane. Basically (this has been happening over the past couple months), I log into a few different CentOS servers (one Linode, another VPS, and a shared host to which I have shell access), running 5.5, 5.7, and 6, from my Mac running OS X Lion, using Terminal. Basically:
        $ ssh [email protected]
        [remote-host] $ nano somefile.txt
    Once I start editing the file, if I use the arrow keys to move around the cursor, or start deleting, then typing again, the cursor jumps around a bit, and if I save the file and reopen it, it's obvious that the cursor was, in fact, jumping all over the place on a line for no apparent reason. I end up getting things like "This is a neof text." when I had typed in (to the cursor-crazy editor) "This is a line of text." It's a big problem when it comes to editing configuration files, because I often have to edit one line, save and close, then reopen just to make sure that line is right... then edit another line... and it's getting quite annoying. I found Linode Lish Shell Vim and Nano rendering troubles: lines not appearing / cursor positions wrong, but I don't know if that relates much, since that's specifically referring to lish.
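
    In case it helps, this is the sanity check I've been running on each box, on the assumption that a TERM/locale mismatch between Terminal.app and the CentOS hosts is the culprit:

        # Compare what Terminal.app sends with what the CentOS box actually has;
        # a TERM/locale mismatch is a classic cause of cursor-position garbage.
        echo "$TERM"
        locale                    # look for "Cannot set LC_CTYPE" style warnings
        locale -a | grep -i utf
        # Quick per-session test, assuming the Mac side is sending UTF-8:
        export LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8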

  • Cisco "copy startup-config tftp" results in a 0 byte file on the server?

    - by Geoffrey
    I'm pulling my hair out figuring this out. My startup-config is good; I can view it with a show command. I'm trying to copy it to a TFTP server:
        asa5505# copy startup-config tftp
        Address or name of remote host []? ipaddress
        Destination filename [startup-config]? t
        !!
        %Error writing tftp://ipaddress/t (Timed out attempting to connect)
    On my TFTP server (SolarWinds), I get the following:
        binary, PUT. Started file name: C:\TFTP-Root\t
        binary, PUT. File Exists, C:\TFTP-Root\t
        binary, PUT. Deleting Existing File.
        binary, PUT. Interrupted by client, cause: The process cannot access the file 'C:\TFTP-Root\t' because it is being used by another process
    I've used tftpd32 with the same results. I've tried different servers, even one on the same network as the ASA... same results. It'll create a 0-byte file and never do the dump. What's going on? Everything is working normally except for this.

  • Remote Sending of Emails via SMTP/EXIM Issue

    - by Christian Noel
    I have been encountering a problem when sending messages via Exim. Here is the scenario: I have two servers, let's say host1.com = where all my apps and programs are hosted, and host2.com = another server which handles some apps but is also my SMTP mail server. WHM and cPanel are installed on both hosts, as is Exim. Right now, messages are sent out to clients as [email protected]; host1.com also uses [email protected] so that it can send messages outbound. Here's the problem: a few hours after a fresh reboot of host1.com, sending messages from host1.com is no longer possible because I hit this error:
        system/vendor/swift/Swift/Connection/SMTP.php [309]: The SMTP connection failed to start [tls://mail.host2.com]: fsockopen returned Error Number 110 and Error String 'Connection timed out'
    Note that this was working fine earlier (like 10 hours ago) but then it suddenly fails. Every time I restart host1.com, sending messages works again. I have checked logs and traces but to no avail; the only way I've found to fix the problem is restarting host1.com.
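
    What I plan to run the next time it wedges, to separate a network/firewall problem from a Swift/PHP one (the ports are guesses, since the error message doesn't show which one Swift uses):

        # From host1.com, while sending is failing:
        ping -c 3 mail.host2.com
        telnet mail.host2.com 25                        # plain SMTP
        openssl s_client -connect mail.host2.com:465    # if Swift is using the SSL-wrapped port
        # And on host2.com, watch Exim's side of the conversation (cPanel log path):
        tail -f /var/log/exim_mainlog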

  • external drive enclosure -> software RAID 5?

    - by memilanuk
    Hello all, I have two older PCs on my LAN posing as 'servers'... one running FreeNAS off a USB stick, using three 500GB hdds in a ZFS RAID-Z pool serving as storage for the LAN, and one running Debian Lenny with an 80GB drive, used as a general purpose 'tinker' box that I can ssh into, etc. Problem is that the SMART report for one of those 500GB drives in the FreeNAS box is showing some pre-failure attributes, and the whole array is a little small anyway. Rather than simply replace one 500GB drive with another 500GB drive, and have no backup of the file server, I'd like to upgrade all the drives to 2TB ones - but I have nowhere to store that much data in the meantime. As such, I started looking at getting a 4-bay external drive enclosure with an eSATA card for the Debian box, with the hope of creating a RAID5 + LVM setup using those drives and backing the data up to that external drive enclosure. After the backup is done, replace the drives in the FreeNAS box, rebuild the array there and mirror the data back. Then I'd have both the primary storage (on the FreeNAS box) and a backup (which I don't have currently) using the external drive enclosure on the Debian box. My big question is... most of these external drive boxes seem to claim support for JBOD, RAID 0, 1, 10, 5, etc. - should I presume that is simply fake RAID like many commodity mobos have, and not really usable in Linux? In that case, with all the drives hanging off the one eSATA connection, will Linux (specifically Debian Squeeze, as I plan on upgrading that box shortly) see all four drives, or just the first one? Will I be able to configure them in a RAID5 array as desired? Thanks, Monte
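
    For the record, the layering I have in mind on the Debian box, assuming the enclosure really does pass the four disks through individually (the /dev/sd[b-e] names are a guess):

        apt-get install mdadm lvm2
        # Software RAID 5 across the four enclosure disks...
        mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
        # ...with LVM on top so the space can be resized or carved up later.
        pvcreate /dev/md0
        vgcreate backupvg /dev/md0
        lvcreate -l 100%FREE -n backup backupvg
        mkfs.ext4 /dev/backupvg/backup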

  • Flash plugin locks up Firefox, Chrome and Safari behind a corporate proxy, IE6 works fine

    - by Shevek
    At work I am forced by corporate policy to use IE6. Obviously this is not so good, so I use FF for most of my browsing. However, there is a problem once I have installed the Flash plug-in - FF locks up when trying to load Flash media. Looking at the status bar at the time of the lock-up, it appears this happens when the browser tries to get cross-domain data. The Flash ActiveX plug-in in IE does not suffer this issue. I have tried it in a brand new profile in FF with Flash as the only plug-in, and it locks up. We have 2 different proxy servers and both exhibit the same problem. I have also tried Chrome and Safari and both lock up with the plug-in installed. So, has anyone else had this problem and solved it? Or is there any way to disable cross-domain data access in the Flash plug-in? Or is there any way to disable the "This site needs an additional plug-in" ribbon which appears when the plug-in is not installed? Many thanks!

  • ProFTPD / PAM issues with new centos/virtualmin install

    - by iamthewit
    I just installed CentOS 5.4 on a Rackspace cloud server and installed Virtualmin, which all seemed to go fine. The only problem I have is that I cannot access the virtual servers' directories via FTP. I get the following from FileZilla:
        Status: Connecting to 1.1.1.1:21...
        Status: Connection established, waiting for welcome message...
        Response: 220 FTP Server ready.
        Command: USER username
        Response: 331 Password required for username.
        Command: PASS ***************
        Response: 230 User username logged in.
        Status: Connected
        Status: Retrieving directory listing...
        Command: PWD
        Response: 257 "/" is current directory.
        Command: TYPE I
        Response: 200 Type set to I
        Command: PASV
        Response: 227 Entering Passive Mode (1,1,1,1,216,214)
        Command: LIST
        Error: Connection timed out
        Error: Failed to retrieve directory listing
    and I get this from my /var/log/secure file:
        Sep 22 19:40:42 stickeeserver proftpd: pam_unix(proftpd:session): session opened for user username by (uid=0)
        Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - USER nastypasty: Login successful.
        Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - Preparing to chroot to directory '/home/username'
        Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - mod_delay/0.5: delaying for 728 usecs
        Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - error setting IPV6_V6ONLY: Protocol not available
    Any help would be greatly appreciated. I'm not totally new to Linux, but it's not my strongest subject. I do like to know exactly why problems occur and how to fix them, so the more detail the better! Cheers
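
    In case it turns out to be the usual passive-mode-behind-a-cloud-firewall issue, this is the change I'm about to test - the port range and public IP are placeholders:

        # In /etc/proftpd.conf (port range and public IP are placeholders):
        #     PassivePorts 49152 50000
        #     MasqueradeAddress 1.1.1.1
        # Then open that range and restart:
        iptables -I INPUT -p tcp --dport 49152:50000 -j ACCEPT
        service proftpd restart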

  • Qmail Patching Makes me Nervous

    - by JM4
    We have a system running CentOS 5 with Plesk 8.6 and Qmail. Our primary domain is hosted through Media Temple. When Plesk and Qmail are hosted on a single dedicated virtual server, Qmail picks up the primary server IP and domain and reports those when sending emails from the system. Our pages are written in PHP, so we are using the mail() function. While our email goes out to everybody, several enterprise email domains reject it because it shows a different originating IP (our primary server IP and domain) than the domain we list in the 'from' address. This is not modifiable. Every domain we own does, of course, have its own IP underneath our primary server IP. I have seen a patch mentioned in several places online, specifically one which allows domain binding: "DomainBindings -- For servers that host multiple domains or have multiple IP addresses assigned to them, it is sometimes useful (or important) to have qmail use a specific IP address for its outgoing mail. By default, qmail uses whatever address the OS chooses for all outbound connections. With this patch, you can specify which address to use. It uses a control file similar to smtproutes to specify the outbound IP address to use, based on the sender's domain (local copy) (pyropus.ca)" Qmail Link First off, I do not have netqmail installed so I'll need to find another source, but I am also completely unfamiliar with applying patches to qmail. Will I lose email services if I patch? Is it a simple apply-and-use process? Will my existing email accounts and data still be there after the patch? I am very, very new to unix/linux so this does make me a bit nervous, but I am the only person who can make the change and it is one our company "HAS" to have. Any ideas?
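
    Either way, before anything gets patched I'll snapshot the pieces a rebuild could clobber, roughly like this (stock qmail paths; the init script name may differ under Plesk):

        /etc/init.d/qmail stop
        # Control files, aliases and the queue are what a patch/rebuild can clobber.
        tar czf /root/qmail-backup-$(date +%F).tar.gz \
            /var/qmail/control /var/qmail/alias /var/qmail/queue
        /etc/init.d/qmail start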

  • DFSR NTFS Permissions Not Working!??!

    - by megadood
    I have two Windows 2008 Standard servers running DFSR okay. I can create a file on one server and it is replicated to the other, etc. I have the namespace shared folder on each server shared with Full Control for Administrators and Change/Read permissions for Everyone. I then browse to the folder on server 1, e.g. \\server1\namespace\share\folder1. I right-click the folder and configure the NTFS permissions as I would like, for example: Administrators Full Control, one user Read/Write access, no other users in the user list. I save this and then double-check the second server, e.g. \\server2\namespace\share\folder1. I right-click the same folder name as before and can see the NTFS permissions have replicated accordingly. I right-click the folder, go to Properties - Security - Advanced - Effective Permissions and select a user that shouldn't be able to get into that folder, e.g. testuser. It agrees with the NTFS permissions and shows that testuser has no ticks next to any permissions, so should be denied access. I log on to any network PC or the server as testuser and browse to \\server1\namespace\share\folder1. It lets me straight in, no access denied messages. The same applies to server2. It seems as though all my NTFS permissions are being ignored. I have 1 DFS share, and all the subfolders are a mixture of private folders and public folders, so I really need the NTFS permissions to work. Any idea what's going on? Is this normal? From my tests all users can access any DFSR folder under the namespace\share, which is quite worrying. Thanks

  • Data Protection Manager System Protection Backups Failing

    - by TrueDuality
    I'm just starting to set up DPM 2010 in a test environment with a Domain Controller and a File Server. Everything seems to be working fairly well and I can get all of my backup jobs to succeed except for the "Computer\System Protection" backups. Both servers are running fully up-to-date 64-bit Windows Server 2008 R2 Enterprise with Service Pack 1. The error being reported is:
        DPM cannot create a backup because Windows Server Backup (WSB) on the protected computer encountered an error (WSB Event ID: 517, WSB Error Code: 0x8078001D). (ID 30229 Details: Internal error code: 0x809909FB)
    This Microsoft Knowledge Base article describes the issue perfectly and provides a hotfix. I downloaded the hotfix, moved it onto the affected server, attempted to run it and received the following error:
        The update is not applicable to your computer.
    I've verified that I have indeed downloaded the 64-bit version. According to this thread the hotfix got rolled into Service Pack 1, yet I'm still experiencing the issue. Both machines do have the Windows Server Backup feature installed. Can anybody point me in the right direction? What am I missing?

  • Sendmail Sends but never Delivers

    - by Jeremy
    I have tried 10 different emails hosted at Google, Yahoo!, GoDaddy, and some that are privately hosted, and each time I get the following errors. I have blocked sensitive information, but you will be able to see the errors.
        Feb 16 17:06:50 xxxxx sendmail[31824]: o1GM6ovJ031824: [email protected], ctladdr=www-data (33/33), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=30054, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (o1GM6oJo031825 Message accepted for delivery)
        Feb 16 16:54:19 xxxxx sendmail[31625]: o1GLsJPP031625: [email protected], ctladdr=www-data (33/33), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=30097, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (o1GLsJah031626 Message accepted for delivery)
        Feb 17 09:05:52 xxxxx sm-mta[10620]: o1H6Z3jM005734: to=<[email protected]>, ctladdr=<[email protected]> (33/33), delay=07:30:49, xdelay=01:15:36, mailer=esmtp, pri=571331, relay=aspmx3.googlemail.com. [209.85.222.4], dsn=4.0.0, stat=Deferred: Connection timed out with aspmx3.googlemail.com.
        Feb 17 10:35:23 xxxxx sm-mta[12828]: o1HEZwn8011833: to=<[email protected]>, ctladdr=<[email protected]> (33/33), delay=00:59:25, xdelay=00:12:36, mailer=esmtp, pri=300353, relay=aln-mailrelay.att.net. [12.102.252.75], dsn=4.0.0, stat=Deferred: Connection timed out with aln-mailrelay.att.net.
    If you take a look, they all send, but then (HOURS later) I get an error "stat=Deferred: Connection timed out with {server}". I'm at my wits end, because I use this same setup on each of my servers, and they all work.
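
    For anyone wanting to reproduce it, this is the check I'm running from the affected box, using a relay name straight out of the log - if it times out for every remote MX, outbound port 25 is being filtered somewhere upstream rather than one remote host greylisting us:

        # Try the MX that deferred us, straight from the box:
        telnet aspmx3.googlemail.com 25
        # And see what sendmail still has queued and retrying locally:
        mailq | head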

  • Trouble serving vhosts when trying to set up wildcard subdomains with dnsmasq in local development environment

    - by Jeremy Kendall
    I'm trying to get wildcard DNS enabled on my laptop using dnsmasq. I realize that this has been asked and answered more than once on this forum, but I can't get the solution to work for me. Steps taken so far:
        - Installed dnsmasq
        - Set address=/example.dev/127.0.0.1 in dnsmasq.conf
        - Set listen-address=127.0.0.1 in dnsmasq.conf
        - Ensured nameserver 127.0.0.1 is in /etc/resolv.conf
        - Set prepend domain-name-servers 127.0.0.1; in /etc/dhcp3/dhclient.conf
        - Created a vhost for example.dev
        - Restarted apache and dnsmasq
    Note: example.dev is not set in /etc/hosts. My vhost for example.dev:
        <VirtualHost *:80>
            ServerName example.dev
            DocumentRoot /home/jkendall/public_html/example/public
            ServerAlias *.example.dev
            # This should be omitted in the production environment
            SetEnv APPLICATION_ENV development
            <Directory /home/jkendall/public_html/example/public>
                DirectoryIndex index.php
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>
    The setup above will serve example.dev locally without any problem. It will also serve test.example.dev, but test.example.dev returns the default Apache "It works!" index.html from /var/www rather than my index.php in /home/jkendall/public_html/example/public. The solution in this Server Fault thread suggests that address=/.example.dev/127.0.0.1 would resolve my problem, but when I try that, restarting dnsmasq fails with the error message
        dnsmasq: error at line 62 of /etc/dnsmasq.conf
    For grins, I moved my project over to /var/www/example and modified the vhost appropriately. I got the same result as described above. At this point I'm not sure what other steps I can take to resolve the issue. Thoughts?
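
    These are the two checks I've been using to tell the DNS half of the problem from the Apache half (the curl Host-header trick assumes the vhost above is enabled):

        dig +short test.example.dev @127.0.0.1   # should print 127.0.0.1 if the dnsmasq wildcard works
        apache2ctl -S                            # shows which vhost a *:80 request will actually match
        curl -s -H 'Host: test.example.dev' http://127.0.0.1/ | head   # what Apache serves for that name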

  • Windows 7 disk backup and clone for deployment to multiple systems

    - by gregmac
    I'm in the process of deploying some new PCs (there are only 8), all identical hardware. What I'd like to do is install Windows 7 (64-bit), join them to the domain, etc., install a bunch of other software, and then clone that drive to multiple other machines. I'd also like to be able to use it as a backup image, so the machine can be restored back to that image at some future date. I understand this involves at least sysprep, but I am confused after reading some tutorials that talk about using the Windows Automated Installation Kit, or hacks with the registry and custom-built batch files. This process seems overly complex to me: I did something similar 10+ years ago, and don't remember it being this bad. Surely things have improved in a decade? There are also some products that involve having network servers running deployment software, network boot, etc. etc... this is way more than I want to set up. My systems are all identical hardware. Is there a simplified way to clone PCs? Preferably (since I'm a lazy developer, and not an IT admin) I'd like to find some off-the-shelf product that I can run after I get the machine set up, that will spit out a bootable DVD I can run on all the other systems, which will boot up, ask for a computer name, join it to the domain, and that's it. Does such a product exist?

  • Server cost for smartphone app with web service

    - by FrankieA
    Hello, I am working on a smartphone application that will require a backend web service - but I am absolutely clueless as to how much it will cost. The web service will handle:
        - login of users
        - cataloging of our user base
        - holding minimal profile information for users (the only binary data is a display picture, which will be < 20k each)
        - performing some very minor calculations/algorithms before returning results
        - all of the above will be communicated to the server from a smartphone (iPhone/BlackBerry/Android)
    Bandwidth requirements:
        - We want to handle up to 10k users throughout the day.
        - I predict 10k users * 50 HTTP requests a day = 500,000 requests a day * 30 = 15 million requests a month.
    Space requirements:
        - Data will be in a SQL database.
        - I predict 1MB/user * 10k users = 10GB + overhead. In other words, space is not a big issue.
    Software requirements (unless someone knows an alternative):
        - Windows Server 2008 + IIS
        - MSFT SQL Server
    Note: This is 100% new to me, so please hit me with all you've got. Do I need Windows Server or are there alternatives? Is it better to get multiple cheap servers to distribute load? Will Amazon S3 work for me? How about Windows Azure? Thank you!!

  • Trying to run QEMU with a file as hda

    - by Felix
    I'm trying to run QEMU and use a simple file on the host system as the guest's hard drive. Here's what I've attempted so far:
        $ dd if=/dev/zero of=/home/felix/vm/archlinux.img bs=1MB count=8192
        8192+0 records in
        8192+0 records out
        8192000000 bytes (8.2 GB) copied, 86.6054 s, 94.6 MB/s
        $ qemu -hda /home/felix/vm/archlinux.img -cdrom archlinux-2009.08-netinstall-i686.iso -boot d
    Then I try to install Arch Linux to that file. It goes pretty well (it's able to format the disk, from what I can tell) until I start installing packages, at which point I start getting errors, and everything goes downhill from there (unable to mount the partition, corrupted files, ...). What am I doing wrong? Note: I'm just doing this for entertainment purposes. I don't intend to actually use this on servers or anything. The only use I can think of for this kind of installation would be to get an 8GB USB stick, dd that file to it and wham! You have a bootable stick with a fully fledged and customized OS, without torturing the stick through the installation.
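
    In case the problem is simply the guest starving for memory (QEMU's default allocation is tiny), this is the leaner setup I'm going to retry with - qemu-img instead of dd, plus more RAM for the guest:

        # Sparse 8 GB image - no need to write 8 GB of zeros first.
        qemu-img create -f raw /home/felix/vm/archlinux.img 8G
        # Same invocation as before, but with 512 MB of guest RAM instead of the default.
        qemu -hda /home/felix/vm/archlinux.img \
             -cdrom archlinux-2009.08-netinstall-i686.iso \
             -boot d -m 512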

  • How to fix a Postfix/MySQL/Dovecot Unknown Host Issue?

    - by thiesdiggity
    I am having an issue with one of my Postfix/Dovecot mail servers and I'm unsure how to fix the problem. I will try to explain it in detail; here goes: I have an Ubuntu server set up for virtual hosting with Postfix, Dovecot and MySQL. We have one domain set up as a virtual domain - for this example I am going to use mail.example.com - and under that domain we have one email address. I have another server (MS Exchange) set up on another one of my sub-domains, ex.example.com. The problem is that when I SMTP into the account on mail.example.com and try to send an email to an account on ex.example.com, the email is returned to us with an "unknown host" error. Now, I know that the mail.example.com server can resolve the ex.example.com domain because I can ping/dig it while SSH'd into the server. I can also log into Postfix via telnet and send an email to an ex.example.com mailbox. I'm guessing that it has something to do with Postfix/Dovecot looking locally for the domain in the virtual domain list because of the top-level domain (example.com)? If that's the case, how do I get Postfix/Dovecot to only look locally for the full hostname (mail.example.com) and, if it doesn't find it, send the mail to the correct server by looking up the MX/A records (which I know exist and are set up correctly)? I have been working on this all day and any guidance would be GREATLY appreciated! Thanks for your time!
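
    This is how I've been poking at it so far - checking how Postfix classifies ex.example.com before blaming DNS. The MySQL map path is a guess based on my virtual-domains setup:

        # If example.com (or a catch-all) shows up in any of these, mail for
        # ex.example.com is handled locally instead of going out via its MX.
        postconf mydestination virtual_mailbox_domains virtual_alias_domains
        postmap -q ex.example.com mysql:/etc/postfix/mysql-virtual-domains.cf
        host -t mx ex.example.com     # what the rest of the world resolves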

  • AWS lighttpd: Sending a copy of requests to test.

    - by Martin
    I have a load-balanced service on AWS, so the ELB evenly distributes the load across my servers. Each server runs lighttpd, which does logging and forwards the requests to my service (on the same machine). I have written a new version of the service. It is installed and running on an EC2 machine, test1 (basically a mirror of our current server, but with the new service running instead of the original), and I have done some preliminary tests that look good. What I would like to do is mirror a fraction of the incoming traffic to the new version of the service, so I can compare the original version and the new version on real traffic. So I was thinking I could modify one box behind the ELB to duplicate its traffic to test1. My idea was to modify the configuration of lighttpd so that each request is mirrored/duplicated: the original service keeps responding as before, but a copy of the request is sent to test1 and its reply is just dropped. Unfortunately I have not been able to work this out. Any ideas on how I could mirror the requests from one box to both itself and test1? Or any other ideas for testing?
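
    One idea I'm considering, since lighttpd itself has no request-mirroring feature: run a sniff-and-replay tool such as gor/GoReplay alongside lighttpd on the chosen box. The hostname is a placeholder, and I haven't verified this beyond the tool's basic documented usage:

        # Copy everything arriving on port 80 to the test instance; the original
        # lighttpd/service path is untouched and replies from test1 are discarded.
        gor --input-raw :80 --output-http "http://test1.example.com:80"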

  • Biztalk 2009 logshipping with SQL 2008

    - by Manjot
    Hi, I am setting up BizTalk log shipping for the BizTalk 2009 database. Following http://msdn.microsoft.com/en-us/library/aa560961.aspx, I am doing the following to set up log shipping on the destination server:
        Enable ad-hoc queries with:
            sp_configure 'show advanced options', 1
            go
            reconfigure
            go
            sp_configure 'Ad Hoc Distributed Queries', 1
            go
            reconfigure
            go
            sp_configure 'show advanced options', 0
            go
            reconfigure
            go
        Execute LogShipping_Destination_Schema & LogShipping_Destination_Logic in master on the destination server.
        Run:
            exec bts_ConfigureBizTalkLogShipping @nvcDescription = '',
                @nvcMgmtDatabaseName = '',
                @nvcMgmtServerName = '',
                @SourceServerName = null, -- null indicates that this destination server restores all databases
                @fLinkServers = 1 -- 1 automatically links the server to the management database
    When I run this I receive the following error:
        Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'.
    After some research I found some info: usually this error means that the SQL Server Service Principal Name (SPN) was not configured, and NTLM was not being used as an authentication mechanism. The SQL services are running under different domain accounts. So I asked the domain admin to create SPNs for the SQL service accounts on both the source and destination servers, using both the server name and the FQDN, and to enable the computer and service accounts for delegation. When I run the following:
        select * from sys.dm_exec_connections
    I get the same error: Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'. Any help please?

  • Windows Server 2008 Remote Desktop printing blank pages

    - by Colin Pickard
    I have a Windows Server 2008 (not R2) machine which has problems with redirected printing. Clients connecting via Remote Desktop have their printers redirected and appearing for them to print to, but printing from applications on the server to those local printers gives blank pages, missing pages, or pages with headers/footers but no middle section. The issues are consistent for similar prints, but sometimes other prints and/or applications will work correctly. I have installed PDFCreator locally on the server, and the same print jobs sent by the same application appear correctly in the PDFs. Printing that PDF via the redirected printer prints correctly. I have tried the following:
        - Installing drivers. I've installed several different drivers, for both the client and server operating system and architecture, on the client and the server.
        - Reinstalling the printers. I've tried reinstalling on remote print servers, the clients, and the host server, and tried different client machines.
        - Granting everyone full permissions on the print spool folder on the server.
        - Editing the registry to forward non-USB ports (http://support.microsoft.com/kb/302361)
    None of these have made any difference. The clients are using Windows 7 or Windows XP and none of them have any issues with printing locally. Any ideas? Thanks!

  • Can I use Outlook 2010 (beta) with OWA account?

    - by Dan
    One of the new features of Outlook 2010 (beta) is support for multiple Exchange accounts. I'm wondering if there is any way to use this together with a (different) Outlook Web Access account to also get that email in Outlook. Specifically, in addition to my regular corporate (Exchange) account, I also use another corporate account through OWA. With this second account, the only supported access is through OWA; while POP3 access is available, it is not actually supported. I'm not very familiar with configuring Exchange servers, but in talking to those who are, it sounds like enabling Outlook Web Access is (slightly) different than allowing access from Outlook via HTTP(S). Is that correct? If so, it doesn't really seem quite right, as in the absolute worst case one could (theoretically) resort to screen-scraping OWA. Edit: this looks to be about the same as ActiveSync/OWA Desktop Client? (This doesn't have anything to do with the question, but I'm actually using this second corporate account in Outlook by POP3'ing to Gmail, and then IMAP4 from Gmail to Outlook. Obviously, it would be much nicer to add it as a second Exchange account.)

  • Dual Xeon Server voltages are low

    - by Mindflux
    I've got a whitebox server running CentOS 5.7. It's a Dual Xeon 5620, 24GB of RAM. The mainboard is a SuperMicro X8DT6-F and the chassis is a SC825TQ-R720LPB. Dual 720W Power supplies. We had a big power outage a couple weeks back that took down everything, I don't have any pre-power outage figures for this server, and the only reason I noticed these is because when I was bringing up the servers I was checking them out with more scrutiny than usual. http://i.imgur.com/rSjiw.png (Image of voltage readings) As you can see, CPU1 DIMM is low, +3.3V is high, 3.3VSB is high, +5v is high, +12v is REAL LOW (out of normal 5% (plus/minus))... and VBAT is off the charts. With my whitebox VAR we've tried the following: Swap out PSU with another server I have with the same PSUs. Try different power cord Update BMC/IPMI firmware in case readings were wrong (They aren't) Update BIOS Try different PDU Try a different outlet and/or circuit Replaced Voltage Regulator Unit At this point, the only thing we haven't done, seemingly is replace the mainboard.. which is what the next step will be unless something else shines some light on the situation. I should mention the system is rock solid otherwise which is a surprise given the 12v voltage is that far off.
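
    For completeness, the readings above come from the BMC; cross-checking them from inside the OS looks roughly like this (package names vary a bit between CentOS releases):

        yum install -y OpenIPMI ipmitool    # on older CentOS 5 the tool may live in OpenIPMI-tools
        service ipmi start                  # loads the IPMI drivers so /dev/ipmi0 exists
        ipmitool sensor | egrep -i '12v|5v|3\.3|vbat|dimm'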

  • Missing Home Folder XP Clients 2008R2 Domain

    - by minamhere
    We just completed a migration from Server 2003 to Server 2008R2. Everything seems to have gone well except that many of our desktops have stopped mapping the Home Folder as set in Active Directory. Other mappings that are defined on individual clients are mapping just fine, these mappings are all on the same file server as the failing Home Folders. Half of the users are on 1 file server and half are on another. Users from both servers are having this problem. I have enabled the Group Policy setting to "Wait for network before logging in". I enabled the policy to "Run Logon Scripts synchronously". There are no errors on the Domain Controller or either File Server. When I enabled Group Policy Preferences as an attempted workaround, I get this error: The user 'V:' preference item in the '<Policy Name>' Group Policy object did not apply because it failed with error code '0x800708ca This network connection does not exist.' This error was suppressed. This seems to indicate that the network connection is not ready by the time Group Policy is processed. But isn't this the point of the "Wait before logging in" and "Run Logon scripts synchronously" settings? Some other background facts: The new Server 2008R2 installation is a Virtual Machine. It is on a new Subnet in a different building from the old server. DNS and DHCP were also migrated from the old DC to this new DC. These Home Folders were all working properly before the migration. Are there new security restrictions/policies in Server 2008R2 that might be causing this? Is there a way to check whether I have an underlying network connectivity issue? Maybe moving the server to the new building is causing a delay/timeout? Any thoughts or ideas on what could be causing this or how I can resolve this? Thanks.

  • Favorite tricks with linux kernel boot parameters?

    - by ~drpaulbrewer
    Most Linux bootloaders let you edit the kernel boot command line before booting. There are often lots of parameters available -- Knoppix, for instance, has a list on their Knoppix Cheat Codes page -- but most are applicable only to compatibility and special situations. A few are hidden gems. Common uses of these codes are to boot to single-user mode, alter screen modes or drivers, or to specify an alternative root directory. Other, more exotic uses are possible. Some Linux distributions let you copy the boot CD into RAM. Others (e.g., Ubuntu) let you use preseed files to clone installs when setting up multiple systems -- useful when installing a lab full of computers without having to babysit each install. What other tricks have you found useful in system installs, repairs, backups, restores, establishing temporary servers, or other tasks? To add your favorite trick to the list: since much of the code for these options lives either in the initrd or in a service handler that detects the kernel parameters, please list (1) the kernel boot line parameter, (2) what it does, and (3) the Linux distribution and any packages required to activate the feature. Thanks.

  • Can IIS (Ideally Azure) do SSL Proxying?

    - by Acoustic
    My team has been asked to add a new feature to a project we're working on, and none of us can find authoritative details on whether it's possible with Windows/IIS. The short of it is that we're hoping to have customers update their DNS with a CNAME record to point their website to our server instead of theirs (the whys are trivial - it's what the app does on behalf of your site). We're using a reverse proxy with several custom modules to serve particular content from the original servers. So far everything works perfectly until we encounter SSL. Is there a way to have IIS serve up an SSL certificate from another server? In other words, is there a way to be a trusted man in the middle? I'm hoping that's possible so that we don't have to require all our clients to re-issue their SSL certs. Frankly, we don't want to have to manage hundreds of certs. I'd also like to avoid a UCC situation if there's a way to, because it seems to require re-creating the cert each time a client is added. So, any pointers on proxying/hosting SSL (or even dynamic SSL hosting like http://www.globalsign.com/cloud/) would be appreciated.
