Search Results

Search found 9106 results on 365 pages for 'course'.


  • MySQL Master-Master w/ multiple read slave cost effective setup in AWS

    - by Ross
    I've been evaluating Amazon Web Services RDS for MySQL and costing out potential scenarios involving a simple multi-AZ read/write deployment vs. a multi-AZ MySQL master (hot standby) with additional read-only slaves. The cost I'm trying to optimize is reserved instances vs. on-demand instances.

    Situation 1: purchase a reserved multi-AZ setup with an extra-large high-memory (17 GB RAM) instance for $5200/yr and have my application query the master all the time. The problem is, if I don't need all of those 17 GB of RAM all the time, and therefore especially not a hot standby, what savings could a better topology create, like situation 2 below?

    Situation 2: purchase a reserved multi-AZ setup using smaller master instances for the master-master hot standby to receive the writes only. Then create and load-balance several read-only slaves off the master, and add/remove and/or scale the read slaves up and down based on demand. This might only cost $1000 plus the on-demand usage of the read slaves.

    My thinking is, with a variable read-intensive application load and a low write load, the single-level topology in situation 1 means I'm paying for a lot of resources at the write level of the topology when I don't need them there. My hope is that situation 2 can yield cost savings from smaller reserved instances at the master-master level, allowing me to scale up/down and out on the read level according to demand as needed.

    Does anyone see a downside to doing this, or know of some reason this isn't possible with RDS? Any other thoughts or advice always welcome, of course. Thanks in advance, R
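
    For illustration, a hedged sketch of what the "scale the read tier on demand" part of situation 2 could look like with the AWS CLI; the instance identifiers and class below are placeholders, not taken from the question:

        # Add a read replica when read load grows (identifiers/class are placeholders)
        aws rds create-db-instance-read-replica \
            --db-instance-identifier myapp-read-02 \
            --source-db-instance-identifier myapp-master \
            --db-instance-class db.m1.large

        # Drop it again when demand falls; a replica needs no final snapshot
        aws rds delete-db-instance \
            --db-instance-identifier myapp-read-02 \
            --skip-final-snapshot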

    Read the article

  • SQL 2008 - db mail issue

    - by Chris
    Hello. I have two instances of SQL Server 2008. One was upgraded from SQL Server 2000 and one was a clean, new install. The instances are running on different nodes of the same cluster, although I have tried having them both on the same node with identical results.

    SQL Mail operates perfectly on both instances. DB Mail operates perfectly on the newly installed instance. On the upgraded instance, DB Mail does not send any mail. Of course, I am not positive that the fact this instance is upgraded has anything to do with the issue, but it might. The configuration of my DB Mail profile and account looks identical to my functioning instance. In the configuration of the 'Alerts' tab in the SQL Agent properties I have tried selecting both DB Mail and SQL Mail, to no avail. Both instances use the same SMTP server with the same authentication (domain account for the database engine).

    All messages sent via sp_send_dbmail and those sent via the 'test email' option are visible in the sysmail_allitems queue and remain there as 'unsent'. The send_status eventually changes to 'failed'. The only messages in the sysmail_event_log are 'mail queue stopped by login domain\myuser', 'mail queue started by login domain\myuser' and 'activation successful'. Selecting from the ExternalMailQueue returns the same number of rows as sysmail_allitems.

    I have tried bouncing the agent, bouncing the entire instance, and moving the other, functioning instance to the other node in the cluster. Any thoughts? Thanks.
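
    One frequent culprit on instances upgraded from SQL 2000 is Service Broker being disabled in msdb, which Database Mail depends on. A hedged T-SQL check -- not confirmed as the cause here:

        -- Database Mail queues messages through Service Broker in msdb
        SELECT name, is_broker_enabled FROM sys.databases WHERE name = 'msdb';
        -- If it returns 0, enable it (needs exclusive access to msdb):
        -- ALTER DATABASE msdb SET ENABLE_BROKER;

        -- Current state of the mail queues and the most recent log entries
        EXEC msdb.dbo.sysmail_help_status_sp;
        SELECT TOP 20 * FROM msdb.dbo.sysmail_event_log ORDER BY log_date DESC;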

    Read the article

  • How to get multipath working for Ubuntu Server 12.04

    - by mlampi
    I'm working on a project which aims to make use of Ubuntu servers running on enterprise-class hardware. In our case that means IBM HS23E blade servers, QLogic 4 Gb Fibre Channel expansion cards and a quite old IBM DS4500 disk array with two controllers. At the moment we have Fibre Channel as the only boot option, and Ubuntu Server 12.04 installed just fine and is able to boot without multipath. I'm not a Linux professional myself, but in our team we have people who will understand the technical stuff. Don't let my post confuse you :)

    The current situation is that we have only one Fibre Channel connection to a single disk array controller. The real-life case would of course be quite different: at minimum we should have two Fibre Channel ports connected to two different switches and two different controllers. However, we have no idea how to set up the multipath tool. Is DM-MPIO the right software? At minimum we should be able to boot when multiple connections are available and achieve fault tolerance when any of them goes down.

    Since the disk array is not the latest hardware, I managed to find RDAC driver sources only for the 2.6.x kernel, and we have 3.2.x. Another issue is building a multipath.conf. The said driver sources are from IBM support and the QLogic drivers provided to the Ubuntu installer are from the Ubuntu site. It seems that RHEL and SLES would have near out-of-the-box support, but they are not an option for our project.

    Actual questions:
    - What is the recommended multipath software for Ubuntu Server 12.04?
    - Are there pre-made configurations or templates available? Does it require disk array / controller specific settings, or does a more generic config work?
    - Do you have experience with a similar setup and would you like to share the knowledge?

    I'll provide any additional information you might require. Thanks in advance.
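
    On Ubuntu 12.04 the multipath tool is indeed DM-MPIO, shipped as multipath-tools (plus multipath-tools-boot when booting from a multipathed LUN). A minimal, hedged starting point -- the DS4500-specific values below are assumptions to verify against IBM's DS4000 documentation:

        sudo apt-get install multipath-tools multipath-tools-boot

        # /etc/multipath.conf -- minimal sketch; vendor/product strings and the
        # RDAC-style settings are placeholders to check against IBM's docs
        defaults {
            user_friendly_names yes
        }
        devices {
            device {
                vendor                "IBM"
                product               "1742*"
                path_grouping_policy  group_by_prio
                prio                  rdac
                path_checker          rdac
                hardware_handler      "1 rdac"
                failback              immediate
            }
        }

        sudo service multipath-tools restart
        sudo multipath -ll    # list the multipath maps that were discovered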

    Read the article

  • SQL Server 2008 second instance times out when logging in -- but only the first time?

    - by Kromey
    This is a strange one that has plagued me for a while now. When logging in to the second instance of SQL Server 2008 on one of our database servers, we get a timeout error: "Cannot connect to servername\mssqlserver2. Additional information: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. (Microsoft SQL Server)" (This is the error message when trying to connect with Microsoft SQL Server Management Studio; other tools experience the same error, but of course report it in different ways.) Immediately re-attempting to log in works just fine, so whatever the cause, it's ephemeral! This happens regardless of user or authentication type (both Windows and SQL Server authentication methods are supported on this instance). What's even weirder, though, is that the first instance on this server has never once demonstrated this problem.

    The server is a Windows Server 2008 R2 virtual server, hosted in Microsoft Hyper-V (the host is likewise Server 2008 R2). The server has 2 GB of RAM, and seems to regularly be using 90% of that -- could low memory be the cause of this issue? I could see this second instance -- which is not used very often yet -- being swapped out to disk, and then taking too long to load back into memory to respond in time to the connection request, but I'd rather have more than just my own hunch before I go scheduling downtime for this server (the first instance is used regularly) and then just throwing extra resources at it in the blind hope that the problem goes away.

    Read the article

  • Using our own certificate authority for business email encryption

    - by LumenAlbum
    I've read the available similar questions on Server Fault but I haven't quite found a definite answer to the security aspect of it -- hence my question:

    I'm the administrator of an office working with tax data and we want to start using certificate-based email encryption with our clients. Considering the prices for certificates issued by VeriSign & Co, I was wondering if we couldn't issue the necessary certificates with a certificate authority of our own. I realize that it would not offer the trust hierarchy that commercial certificates do, but I don't see why we would need that. Most of our clients have small businesses and only 20% of them even exchange data with us via email. So if we were to issue certificates for those 20% and our employees, that would enable us to use encrypted email. Of course they would have to trust our certificate authority and thus receive our public root certificate once. But if we were to hand it out to them (or install it) personally, they'd know that it really is our certificate.

    Is there a huge security risk that I am missing here? As long as nobody has access to our certificate authority server, nobody should be able to interfere with security, right? And the client certificates would be generated and handed out by us as well... Please advise me if I am making an error in judgement here, and thank you in advance.
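
    For what it's worth, a hedged OpenSSL sketch of running such an internal CA and issuing an S/MIME certificate per client; all names, subjects and lifetimes below are placeholders:

        # Create the office's own root CA (keep ca.key offline / well protected)
        openssl genrsa -aes256 -out ca.key 4096
        openssl req -x509 -new -key ca.key -sha256 -days 3650 \
            -subj "/O=Example Tax Office/CN=Example Internal CA" -out ca.crt

        # Per-client key and certificate request
        openssl genrsa -out alice.key 2048
        openssl req -new -key alice.key \
            -subj "/O=Example Client/CN=Alice/emailAddress=alice@example.com" -out alice.csr

        # Sign it with the internal CA (add S/MIME extensions via an extfile if needed)
        openssl x509 -req -in alice.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
            -days 730 -sha256 -out alice.crt

        # Bundle key + cert for import into the client's mail program
        openssl pkcs12 -export -inkey alice.key -in alice.crt -certfile ca.crt -out alice.p12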

    Read the article

  • Why won't VirtualBox give me the option to create 64-bit guests?

    - by Eduardo Born
    My host is 64-bit Windows 8.1. I downloaded the latest VirtualBox (4.3) and I'm trying to create a VM with a 64-bit Ubuntu OS (ubuntu-12.04.3-desktop-amd64). When I go to the New VM wizard, it doesn't give me the option to select "Ubuntu (64 bit)" as I have seen in other people's screenshots, only "Ubuntu". As a result, the ISO can't boot. I tried on another PC and VirtualBox offers the x64 variants for most listed OSes... Control Panel shows a 64-bit OS and a 64-bit processor. My host laptop is a Sony Vaio VPCZ22UGX/N with an Intel® Core™ i7-2640M processor. CPU-Z shows VT-x is available on my processor, of course.

    Here is what I tried so far:
    - I enabled IO APIC as required in the docs.
    - I have virtualization enabled in the BIOS. It works fine in VMware.
    - Checked that Hyper-V is not running or even installed on my Windows. Same for VMware.
    - I also tried running the command VBoxManage modifyvm [vmname] --longmode on for that VM, but no change.

    I think the issue is really that I can't select the x64 variant of the Ubuntu OS for that VM. Other people seem to indicate that's a requirement, but I don't get that option for some reason. I spent a lot of time and can't find what's wrong... Does anyone know what could be missing here? Thank you very much!! Eduardo
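
    A few hedged things worth checking from an elevated command prompt; the VM name is a placeholder, and the bcdedit step only applies if Hyper-V components ever grabbed VT-x:

        :: Which 64-bit guest types does this VirtualBox installation offer at all?
        VBoxManage list ostypes | findstr /i "Ubuntu"

        :: Force the OS type and long mode on an existing VM ("Ubuntu64" is a placeholder)
        VBoxManage modifyvm "Ubuntu64" --ostype Ubuntu_64 --longmode on

        :: If the Hyper-V hypervisor is holding VT-x, turn off its launch and reboot
        bcdedit /set hypervisorlaunchtype off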

    Read the article

  • Reasonable automatic HTML to PDF conversion (in UNIX/Linux environment)

    - by Alex Balashov
    Is there a way to generate PDF documents from HTML files automatically in Linux where the PDF offers some kind of reasonable level of resemblance to the input file? A command-line tool - as opposed to an interactive GUI of some kind - is key. I have tried htmldoc and some related cousins, of course. But these tools are hopelessly stone-age; htmldoc doesn't support CSS at all. You won't find a lot of HTML documents these days that don't have at least some CSS styling. I don't really care about stupid effects or minor embellishments, but the issue is that CSS is at the core of most layouts these days; not many folks are using 6 layers of nested tables anymore. So, if the conversion tool has no grasp of CSS whatsoever, it's not just a matter of "the document doesn't look quite right"; it is likely to not meet the minimum standard of usability at all. It has been suggested to me by some folks to try to use the Gecko rendering engine to generate images that can be converted to PDFs, but I have no idea how one would go about doing this, let alone easily. I have no trouble believing that there are good commercial tools that do this, but I'm really looking for an open-source package if possible, as the endeavour itself is an open-source one and doesn't pay. Thanks in advance!
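
    One commonly suggested open-source option in this space is wkhtmltopdf, which renders the page through a headless WebKit engine, so CSS-based layouts generally survive. A quick sketch; file names are placeholders:

        # Debian/Ubuntu package name; static builds are also available from the project
        sudo apt-get install wkhtmltopdf
        wkhtmltopdf --page-size A4 --margin-top 15mm input.html output.pdf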

    Read the article

  • "Server Unavailable" and removed permissions on .NET sites after Windows Update [closed]

    - by andrewcameron
    Our company has five almost identical Windows 2003 servers with the same host, and all but one performed an automatic Windows Update last night without issue. The one that had problems, of course, was the one which hosts the majority of our sites. What the update appeared to do was cause the NETWORK SERVICE account to stop having access to the .NET Framework 2.0 files, as the event log was complaining about not being able to open System.Web. This resulted in every .NET site on the server returning "Server Unavailable" as the app domains failed to initialise.

    I ran aspnet_regiis, which didn't appear to fix the problem, so I ran FileMon, which revealed that nobody but the Administrators group had access to any files in any of the website folders! After resetting the permissions, things appear to be fine. I was wondering if anyone had an idea of what could have caused this to go wrong? As I say, the four other servers updated without a problem. Are there any known issues involved with any of the following updates? My main suspect at the moment is the 3.5 update, as all of the sites on the server are running on 3.5.

    - Windows Server 2003 Update Rollup for ActiveX Killbits for Windows Server 2003 (KB960715)
    - Windows Server 2003 Security Update for Internet Explorer 7 for Windows Server 2003 (KB960714)
    - Windows Server 2003 Microsoft .NET Framework 3.5 Family Update (KB959209) x86
    - Windows Server 2003 Security Update for Windows Server 2003 (KB958687)

    Thanks for any light you can shed on this.

    Read the article

  • Choose identity from ssh-agent by file name

    - by leoluk
    Problem: I have some 20-30 ssh-agent identities. Most servers refuse authentication with "Too many failed authentications", as SSH usually won't let me try 20 different keys to log in. At the moment, I am specifying the identity file for every host manually, using the IdentityFile and IdentitiesOnly directives, so that SSH will only try one key file, which works.

    Unfortunately, this stops working as soon as the original keys aren't available anymore. ssh-add -l shows me the correct paths for every key file, and they match the paths in .ssh/config, but it doesn't work. Apparently, SSH selects the identity by public key signature and not by file name, which means that the original files have to be available so that SSH can extract the public key. There are two problems with this:
    - it stops working as soon as I unplug the flash drive holding the keys
    - it renders agent forwarding useless, as the key files aren't available on the remote host

    Of course, I could extract the public keys from my identity files and store them on my computer, and on every remote computer I usually log into. This doesn't look like a desirable solution, though. What I need is a way to select an identity from ssh-agent by file name, so that I can easily select the right key using .ssh/config or by passing -i /path/to/original/key, even on a remote host I've SSH'd into. It would be even better if I could "nickname" the keys so that I don't even have to specify the full path.
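
    One hedged possibility, assuming a reasonably recent OpenSSH: IdentityFile may point at a public key file, and with IdentitiesOnly yes ssh then offers only the matching key from the agent -- so only the .pub files need to exist locally (or on hosts you forward the agent to). Host names and paths below are placeholders:

        # ~/.ssh/config -- sketch; only the public halves live on disk
        Host work-db
            HostName db.example.com
            User leo
            IdentitiesOnly yes
            # ssh reads the public key from the .pub file and asks the agent
            # for the matching private key; the private key file need not exist
            IdentityFile ~/.ssh/pubkeys/work-db.pub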

    Read the article

  • MacBook Pro (OS X Lion) - shuts down automatically before reaching login screen

    - by mkk
    When I try to boot my MacBook Pro I can see a progress bar on the loading screen. It gets to about 1/15 or so and then the machine shuts down -- I cannot even reach the login screen. This happened to me 2 months ago, and I 'fixed' it by formatting my hard drive and installing OS X (Lion) again. This time I think the situation is a little bit different -- I am able to enter single-user mode by pressing cmd + s. I then type /sbin/fsck -yf and get the error:

    ** Checking Journaled HFS Plus volume.
    The volume name is Macintosh HD
    ** Checking extents overflow file.
    ** Checking catalog file.
    Invalid node structure (4, 24704)
    ** The volume Macintosh HD could not be verified completely.
    /dev/rdisk0s2 (hfs) EXITED WITH SIGNAL 8

    But when I type exit, I can see the login screen and I can log in. I have tried a lot of things, booting from the recovery partition and choosing Disk Utility to repair the disk, but I get an error that it cannot be repaired. I have googled for hours and the only real solution I have found was to buy DiskWarrior, which might fix the issue. Any other suggestions?

    A secondary question is what causes this issue? I thought the reason was bad sectors, but SMART Utility hasn't found any. I found a suggestion that RAM could cause this kind of issue as well, so I downloaded Rember and ran a memory test -- all tests passed. Right now my workaround is entering single-user mode and then typing exit, however I am not sure how long it will keep 'working'. Of course I have backed up what I consider important. Thanks for the help in advance!

    UPDATE: I guess SMART Utility was not very useful; I managed to get an input/output error, which I believe is equivalent to a bad sector.
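
    "Invalid node structure" points at a damaged catalog B-tree. Before paying for DiskWarrior it may be worth trying fsck_hfs's catalog rebuild from single-user mode -- a hedged suggestion, not a guaranteed fix, and only after backing up; the device node is taken from the output above:

        # -r asks fsck_hfs to rebuild the catalog B-tree on the volume
        /sbin/fsck_hfs -r /dev/rdisk0s2
        # then verify the volume again
        /sbin/fsck_hfs -fy /dev/rdisk0s2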

    Read the article

  • Firewall for internal networks

    - by Cylindric
    I have a virtualised infrastructure here, with separated networks (some physically, some just by VLAN) for iSCSI traffic, VMware management traffic, production traffic, etc. The recommendations are of course not to allow access from the LAN to the iSCSI network, for example, for obvious security and performance reasons, and the same between DMZ/LAN, etc.

    The problem I have is that in reality, some services do need access across the networks from time to time:
    - The system monitoring server needs to see the ESX hosts and the SAN for SNMP
    - vSphere guest console access needs direct access to the ESX host the VM is running on
    - VMware Converter wants access to the ESX host the VM will be created on
    - The SAN email notification system wants access to our mail server

    Rather than wildly opening up the entire network, I'd like to place a firewall spanning these networks, so I can allow just the access required. For example:
    - SAN -> SMTP server, for email
    - Management -> SAN, for monitoring via SNMP
    - Management -> ESX, for monitoring via SNMP
    - Target server -> ESX, for VMware Converter

    Can someone recommend a free firewall that will allow this kind of thing without too much low-level tinkering of config files? I've used products such as IPCop before, and it seems to be possible to achieve this with that product if I re-purpose their ideas of "WAN", "WLAN" (the red/green/orange/blue interfaces), but I was wondering if there were any other accepted products for this sort of thing. Thanks.
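
    Whichever firewall distribution ends up in that spot, the rule set itself stays small: default-deny forwarding between the segments plus a handful of specific holes. A hedged iptables sketch with made-up addresses, just to illustrate the shape:

        # Sketch only -- subnets/hosts are placeholders, not this network's addressing
        iptables -P FORWARD DROP

        # Monitoring server -> ESX hosts and SAN, SNMP only
        iptables -A FORWARD -s 10.0.10.5 -d 10.0.20.0/24 -p udp --dport 161 -j ACCEPT

        # SAN controller -> mail server, SMTP only
        iptables -A FORWARD -s 10.0.20.10 -d 10.0.10.25 -p tcp --dport 25 -j ACCEPT

        # Let replies to established flows back through
        iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT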

    Read the article

  • How do I make rsync also check ctime?

    - by Benoît
    rsync detects file modification by comparing size and mtime. However, if for any reason the mtime is unchanged, rsync won't detect the change, although it's possible to spot it by looking at the ctime. Of course, I can tell rsync to compare the whole files' contents, but that's very, very expensive. Is there a way to make rsync smarter, for example by checking that mtime+size are the same AND that ctime isn't newer than mtime (on both source and destination)? Or should I open a feature request?

    Here's an example. Create 2 files, same content and atime/mtime:
    benoit@debian:~$ mkdir d1 && cd d1
    benoit@debian:~/d1$ echo Hello > a
    benoit@debian:~/d1$ cp -a a b

    Rsync them to another (non-existing) directory:
    benoit@debian:~/d1$ cd ..
    benoit@debian:~$ rsync -av d1/ d2
    sending incremental file list
    created directory d2
    ./
    a
    b
    sent 164 bytes  received 53 bytes  434.00 bytes/sec
    total size is 12  speedup is 0.06

    OK, everything is synced:
    benoit@debian:~$ grep . d*/*
    d1/a:Hello
    d1/b:Hello
    d2/a:Hello
    d2/b:Hello

    Update file 'b' with the same size, then reset its atime/mtime:
    benoit@debian:~$ echo World > d1/b
    benoit@debian:~$ touch -r d1/a d1/b

    Attempt to rsync again:
    benoit@debian:~$ rsync -av d1/ d2
    sending incremental file list
    sent 63 bytes  received 12 bytes  150.00 bytes/sec
    total size is 12  speedup is 0.16

    Nope, rsync missed the change:
    benoit@debian:~$ grep . d*/*
    d1/a:Hello
    d1/b:World
    d2/a:Hello
    d2/b:Hello

    Tell rsync to compare the file content:
    benoit@debian:~$ rsync -acv d1/ d2
    sending incremental file list
    b
    sent 144 bytes  received 31 bytes  350.00 bytes/sec
    total size is 12  speedup is 0.07

    This gives the correct result:
    benoit@debian:~$ grep . d*/*
    d1/a:Hello
    d1/b:World
    d2/a:Hello
    d2/b:World
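
    rsync itself has no ctime test, so one hedged workaround is to keep the cheap size+mtime run and use GNU find to feed only ctime-changed files through an extra --checksum pass. Paths and the stamp file are placeholders, and the stamp must be created once (touch /tmp/last-sync.stamp) before the first run:

        # -newercm: candidate's ctime is newer than the stamp file's mtime (GNU find)
        find d1 -type f -newercm /tmp/last-sync.stamp -printf '%P\n' > /tmp/ctime-changed
        rsync -av --checksum --files-from=/tmp/ctime-changed d1/ d2   # expensive, small set
        rsync -av d1/ d2                                              # normal cheap pass
        touch /tmp/last-sync.stamp                                    # remember this sync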

    Read the article

  • Durability of Websockets Server

    - by smitchell360
    I am starting to experiment with WebSockets. Does anyone know of a WebSockets server (open source or paid) that provides a durable store of the WebSocket "channel"? All of the examples that I have found do not address durability -- if a WebSockets server goes down, all "channel" data is lost. Services such as Pusher do not really discuss whether they address the durability issue (and I have not received a response from tech support yet). Happy to roll my own, but would rather not reinvent the wheel.

    EDIT: I'm not looking for WebSockets 101 information. That is readily available and understood. I'm looking for a server (open source or paid) that supports WebSockets and has a durable store for the WebSocket data so that, in the event that a server fails, a new server can take over where the original one left off. Two main purposes:
    1. support failover scenarios contemplated by the WebSocket Network Working Group http://tools.ietf.org/html/draft-ibc-websocket-dns-srv-02#section-5.1 (most importantly so that missed messages are sent when a client connects to a failover server)
    2. support scenarios where new subscribers must receive all past messages that were published.
    Of course this can be handled at the application layer... but that is not what I am looking for.

    Read the article

  • Does Guest WiFi on an Access Point make any sense?

    - by uos??
    I have a Belkin WiFi router which offers a secondary Guest Access WiFi network. Of course, the idea is that the guest network doesn't have access to the computers/devices on the main network. I also have a Comcast-issued cable modem/router device with multiple wired ports, but no WiFi capabilities. I prefer to run only one router/DHCP/NAT instead of both the Comcast router and the Belkin router, so I'd like to disable the routing functions of the Belkin and let the Comcast router handle them. But if I disable the routing functions of the Belkin device, the Guest WiFi network is still available.

    Is this configuration just as secure as when the Belkin acts as a router? I guess the question comes down to this: do Guest WiFi networks provide security by 1) only allowing requests to IPs found in front of the device, or do they work by 2) disallowing requests to IPs on the same subnet?
    1) Would mean that Guest WiFi on an access point provides no benefit.
    2) Would mean that the Guest WiFi functionality can work even if the device is just an access point.
    Or maybe something else entirely?

    Read the article

  • RoboCopy errors on Windows Server 2008

    - by Steve
    I am getting a bizarre error with RoboCopy on Server 2008. It will randomly hang with "The specified network name is no longer available." Once that happens, it will continue to fail on the retries. But of course the remote server IS still available on the network and can be reached with other tools. I think it must be somehow permission-related, but I can't figure out what is wrong. Any ideas would be much appreciated.

    Other info:
    - Options: *.* /S /E /COPY:DAT /NP /R:10 /W:30
    - If I turn on the /B option it will fail 100% of the time at the very beginning (that's why I think it has to be somehow permission-related)
    - The two servers are standalone and I am doing a NET USE command prior to the robocopy
    - It does not matter what user account I use on the remote server. Tried both Administrator and another user which was also a member of the local Administrators group
    - UAC is turned off on both sides
    - It is not always the same file that hangs. Sometimes it will get through half or more and sometimes it will fail on the first file

    Read the article

  • OpenWrt vs DD-WRT

    - by Ioan Paul Pirau
    I have a TP-Link WR1043ND router and I want to install one of these two firmwares:
    - OpenWrt
    - DD-WRT

    I read that I can install custom packages and do much more than I can with the original firmware. I would like to ask someone with experience in using both OpenWrt and DD-WRT which they would recommend and why. To give a few reference points, I'm interested in:
    - reliability – network stability both on cable and wireless, and on the USB drive
    - performance – network speed (very important) and also USB drive speed
    - configurability – the possibility to add extensions such as a torrent client, FTP, SSH, WWW and SVN server directly
    - ease of use – the ease of installation and configuration of the router
    - support/docs – how much info there is if you stumble upon a problem and have to find some documentation, or if there's any free support (but that's a long shot)

    Of course I don't imagine that I will find the perfect firmware or that one is vastly superior to the other. Also, if there's anyone out there who uses one of these firmwares on a TP-Link WR1043ND, it would be great to get some feedback about the impact of the changes from the original firmware.
    P.S. I'm also open to Tomato if it's the better one.

    Read the article

  • Load balancing a Windows File Share using HA-Proxy

    - by NathanE
    After pulling my hair out over DFS I just had this weird and potentially dangerous idea come into my head whereby, just possibly, I might be able to use HAProxy to load balance a file share between servers. I've done some remedial packet traces and it does appear that TCP port 445 is the only thing involved in using Windows file sharing. I've always thought for many years that UDP 139, 135 etc. were also involved in at least establishing the connection -- but apparently not! So I set up a basic test:

    listen SMBTest *:445
        mode tcp
        server Smb1 172.16.61.201:445
        server Smb2 172.16.61.202:445

    And you'll never guess what... it works??? (!)

    Now obviously there is the whole concern about synchronisation between the file servers (of course). That could easily be taken care of with a little bit of Robocopy script. And considering I only need an HA read-only file share, there wouldn't be any issues with regard to file locking etc. Can anyone tell me if what I'm playing with here is fire? I really didn't think it would work at all and now I'm a little shocked. What would be the downsides? Could this be relied upon for a production environment?
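
    If this goes any further, two hedged additions seem worth sketching: health checks so HAProxy stops handing SMB sessions to a dead node, and a scheduled Robocopy mirror to keep the read-only copies identical. Share paths and the log file are placeholders:

        listen SMBTest *:445
            mode tcp
            option tcpka                 # keep long-lived SMB sessions alive
            server Smb1 172.16.61.201:445 check
            server Smb2 172.16.61.202:445 check

        rem Scheduled task on the source node, mirroring to the second share
        robocopy \\Smb1\Share \\Smb2\Share /MIR /R:1 /W:5 /LOG:C:\smb-sync.log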

    Read the article

  • Apache2 VirtualHost on Debian not working

    - by milo5b
    I am having some problems with my Apache2 configuration. I have already tried to look for documentation on the web (Apache's site, Debian's site, here on Server Fault, etc.), but nothing really helps. I have tried different configurations, but my current configuration is the following (/etc/apache2/sites-available/default):

    <VirtualHost *:80>
        ServerAdmin [email protected]
        ServerName mysite.dev
        ServerAlias mysite.dev
        DocumentRoot /var/www/mysite.dev/httpdocs/
        ErrorLog ${APACHE_LOG_DIR}/error.log
        LogLevel warn
        CustomLog ${APACHE_LOG_DIR}/access.log combined
    </VirtualHost>
    <VirtualHost *:80>
        ServerAdmin [email protected]
        ServerName livesite.com
        ServerAlias www.livesite.com
        DocumentRoot /var/www/livesite.com/httpdocs/
        <Directory /var/www/livesite.com/httpdocs/>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride All
            Order allow,deny
            allow from all
        </Directory>
        ErrorLog ${APACHE_LOG_DIR}/error.log
        LogLevel warn
        CustomLog ${APACHE_LOG_DIR}/access.log combined
    </VirtualHost>

    mysite.dev is just an entry in the hosts file on my client machine, while livesite.com is an actual DNS record which resolves to the same IP as the IP set in the hosts file for mysite.dev. The problem is that when I type mysite.dev in my browser, it automatically goes to livesite.com. I have tried having separate /etc/apache2/sites-enabled/ files (/etc/apache2/sites-enabled/mysite.dev, /etc/apache2/sites-enabled/livesite.com) -- and of course with the related sites-available files -- but achieved the same results. I have had a peek at error.log and access.log but there's nothing I can see. My httpd.conf contains:
    AccessFileName .htaccess
    And I have no /etc/apache2/conf.d/virtual.conf file. Any help would be greatly appreciated -- if I did not provide enough info please let me know and I will do my best to provide all necessary info. Thanks
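
    On Apache 2.2 (as shipped with Debian at the time) the classic cause of "every name lands in the first vhost" is a missing NameVirtualHost *:80 directive, which on Debian normally lives in /etc/apache2/ports.conf. A hedged way to check how Apache is actually mapping the names:

        # /etc/apache2/ports.conf (Apache 2.2 syntax)
        NameVirtualHost *:80
        Listen 80

        # enable the site file and ask Apache for its virtual host mapping
        sudo a2ensite default
        sudo apache2ctl -S        # shows which vhost each ServerName/ServerAlias hits
        sudo service apache2 reload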

    Read the article

  • Add and remove letterhead in Word document

    - by Daniel Wolf
    Our company has letterheaded paper (pre-printed paper with our logo on it). Whenever we send something out by mail, we print it on that paper. However, when we send the same document via email, we convert it to a PDF file. Now the problem is: when converting a Word document to PDF, it should contain the letterhead. When printing the same document on paper, it should not (or else the letterhead would be printed twice).

    Currently, we are using two different Word document templates -- one with letterhead, one without. So whenever we want to add or remove the letterhead, we have to create a new document with the other template and copy and paste everything over. Nasty solution. What I'm looking for is some simple way to switch the letterhead on and off. What I've tried so far:
    - Switching the template: there does not seem to be a simple way to switch the template for an existing document.
    - Using a picture watermark: our letterhead goes all the way to the border of the page. (No printer supports this, of course, but it is fine for export to PDF.) Apparently depending on the current default printer, Word will not allow a borderless watermark, instead shifting the image around.
    - Using the page header: when editing the page header, I can insert pictures at arbitrary positions, which is great. However, I could not find a way (short of macros) to enable/disable just the pictures in the header. (The text should remain there.)

    Read the article

  • How do I configure postfix to use SES, and still be able to forward email from unverified external addresses?

    - by Jeff
    We are using Postfix for email group lists (e.g. "[email protected]" will go to all members) on Amazon EC2 systems. For a variety of reasons (scalability and reliability) we would like to use SES for all outgoing emails. I was able to configure Postfix to use SES as the SMTP relay for outgoing emails. This works fine for all verified addresses. But of course, when an outsider emails me at "[email protected]", it chokes. Postfix is configured to forward to my Gmail account (via the virtual table), but SES rejects it because the outside user is not verified. So none of our mailing groups configured through Postfix will work this way.

    I would be happy to rewrite all "From" addresses before sending (and simply leave the Reply-To as the original sender), but I cannot seem to find a working configuration. No matter what I set in canonical or generic regexps, SES seems to reject all forwarded emails. Surely somebody must have configured Postfix with SES to handle virtual addresses? How does this work?
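
    One hedged approach is to rewrite every outgoing sender to a single SES-verified address with sender_canonical_maps (the default classes rewrite both the envelope sender and the From: header), and add the original author as Reply-To before that rewrite, as planned above. Addresses and file paths below are placeholders:

        # /etc/postfix/main.cf -- sketch
        sender_canonical_maps = regexp:/etc/postfix/sender_canonical

        # /etc/postfix/sender_canonical -- map every sender to a verified address
        /^.+$/    lists@example.com

        # regexp tables need no postmap; just reload Postfix
        sudo postfix reload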

    Read the article

  • no mails routed to/from new Exchange 2010

    - by Michael
    I have an Exchange Server 2003 that has been up and running for years. Now I am in the middle of a transition to Exchange Server 2010. I have already installed it and put the latest service pack on it, and everything seems fine, BUT: mails do not get delivered to mailboxes on the new Exchange 2010.

    For example, when I create a new mailbox on the old server, emails in and out of it work like a charm. But as soon as I move it to the new server, emails get stuck: none are delivered from outside or from old mailboxes, and none are sent out from the new server to anywhere. Sending between mailboxes on the new server is of course working. I can see the connectors between the old and new server in the Exchange 2003 admin tool, but I cannot find these anywhere on the new server. I have also set up send connectors on the new server to send out mails directly, but that does not work. In all other areas the servers are working together perfectly -- moving mailboxes between them, seeing each other, etc. -- "just" they don't exchange (!) any emails. Any ideas what I missed?

    I also followed the hints from: Upgrading from Exchange 2003 to Exchange 2010, routing works in one direction only. There, emails were transported at least in one direction; in my case they are not transported at all. Both my connectors are up and valid and have the correct source/target shown in Get-RoutingGroupConnector | FL.

    Kind regards, Michael

    Read the article

  • Java web app deployment and ControlTier adoption

    - by Ran
    I've been searching for a configuration and deployment manager tool for my Java/Linux-based web service and have been looking mainly at ControlTier (http://controltier.org). We operate at a medium scale (hundreds of hosts, multi-DC, dozens of services). There seem to be plenty of lower-level sysadmin tools such as Chef, Puppet, CFEngine, Bcfg2 and more. My understanding, and the reason I'm calling them "low level", is that they are great for system-level administration tasks such as setting up a mount, file permissions, users etc., but aren't designed, for example, for Java deployments, which usually come with a build process and special Java semantics. In many cases any tool can be used to do anything, but if it was not designed for the task it can get uncomfortable.

    OTOH ControlTier seems to have been designed for just that -- Java application deployments, at least that's what all the tutorials on their site demonstrate. But here's the problem: the wiki at http://controltier.org/wiki/ is pretty good and stuffed with examples, and the company behind the open source CT product is very responsive (pushy...), however, I have yet to see any material from third-party users on the net. No success stories, no detailed blog posts, no best practices, no cheat sheets, not even hate letters, nothing. This plays badly for DTO Solutions, CT's sponsor, for two reasons: one is that it makes me suspicious -- what's the reason for the poor adoption? -- and second, what do I do if I get stuck and there's no help page on CT's wiki and the mailing list is too slow to answer? I'm stuck with a "free" product that a consultancy company is pushing.

    So my question here: I'd be interested in hearing if anyone has had real-world experience with CT for Java-based web app deployments, and whether they'd give the product a thumbs up? Any other comments that may enlighten me are welcome, of course...

    Read the article

  • Can I recover a rm -rf-ed Mercurial repository?

    - by WishCow
    I made the mistake of wiping out my entire project directory with a quick "rm -rf project". Of course, the .hg directory went with it. I had about 15-20 changesets that I have not pushed to anyone, and I would really, really like to get those back. The system is an Ubuntu machine, and the partition where the delete happened is ext3; the project consists mostly of PHP files. I know about the guideline to not write to the disk in question.

    The first idea was to use the tool named scalpel to get the PHP files back, diff them with the current version from the repo, and somehow carve the changes out. While it succeeded, it did not recover the file names (or there is a switch I'm missing), so I'm left with a few thousand sequentially named .php files; combing through them is not an option.

    Can a kind soul please save me, and suggest a way to: a) get the repo back, or b) get the files back, with filenames?

    For those wondering how I did such a stupid thing: I was working on a file in Vim which I wanted to remove from the repository:
    :!hg rm %
    This complained that the file is in a subrepository, so I specified the following:
    :!hg rm % -R engine
    which complained that the file has modifications, use -f to force. And this is when, somehow, I made up the following command:
    :!rm -rf % -R engine
    Somehow, seeing "force" makes me do a rm -rf by reflex.
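
    For completeness, a hedged ext3 recovery sketch: extundelete works from journal copies of the inodes and tries to restore names and directory structure, though on ext3 (which zeroes block pointers on delete) results are hit-and-miss. The device and path below are placeholders; run it against an unmounted or read-only filesystem:

        # from a live CD, or after remounting the affected partition read-only
        sudo mount -o remount,ro /home
        sudo apt-get install extundelete
        sudo extundelete /dev/sda5 --restore-directory home/wishcow/project
        # recovered files appear under ./RECOVERED_FILES/
        # fallback: photorec (package "testdisk"), which, like scalpel, loses filenames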

    Read the article

  • How can I add subdomains of default accepted domain of Exchange 2010

    - by Christoph
    I have an Exchange 2010 server that has several accepted domains. Now I want this server to accept -- besides the default SMTP domain -- all subdomains of the default domain. The documentation on TechNet states:

    "When you create an accepted domain, you can use a wildcard character (*) in the address space to indicate that all subdomains of the SMTP address space are also accepted by the Exchange organization. For example, to configure Contoso.com and all its subdomains as accepted domains, enter *.Contoso.com as the SMTP address space."

    It is, however, not possible to add e.g. *.contoso.com if contoso.com is already configured. Exchange complains in this case that the domain is already configured. It is also not possible to edit the "value", i.e. the domain name, of an accepted domain. I know that I cannot modify the default accepted domain, but changing the default to another one does not help either, because the domain name itself can never be edited. The last idea was deleting the accepted domain and re-creating it with "*." prepended. This is, however, also impossible, because it is of course not possible to delete or modify the default address policy, and if a domain name is used in an address template it cannot be removed from the accepted domains.

    The question is: how can I make my Exchange 2010 server accept any subdomain of its default accepted domain with a wildcard?
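
    For reference, the wildcard form TechNet describes is created from the shell as below; whether it really conflicts with the existing entry may depend on the exact names in play, so this is only a hedged sketch to try, with contoso.com standing in for the real default domain:

        New-AcceptedDomain -Name "Contoso subdomains" -DomainName *.contoso.com -DomainType Authoritative
        Get-AcceptedDomain | Format-Table Name,DomainName,DomainType,Default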

    Read the article

  • Improve speed of "start menu" in Linux Mint 10 - Ubuntu 10.10 derivative

    - by Gabriel L. Oliveira
    I have a global menu (including the Applications, Administration and System tabs) that is taking too much time (for me) to load: about 2.5 seconds. Of course, this time is only taken during the first start; after it has loaded, subsequent opens are much faster (less than 0.2 milliseconds). The menu was taking more time before (about 5 seconds), and I found that was because of the 'Other' part of the menu, which included many applications installed with Wine, so I removed all of them (I didn't need them at all).

    I have a "normal" knowledge of programming, and I think that the process of starting the menu for the first time has some kind of "cache function" that tries to find which apps are present and need to be placed in the menu shown to the user. But I haven't found this function, so I can't analyze in detail what it is doing (whether it is searching for files under "~/.local/share/applications" or anything else). Also, I found that hitting Alt-F2 also fires this "cache function", because after waiting for it to load, opening the menu took less than 0.2 milliseconds.

    So, could anyone help me reduce this time? I found on the internet that some users could reduce the time by resizing the application icons, but most of my icons are already at 25x25 size. Any other ideas? Maybe a separate process to load it, or including it in startup... I don't know.

    Ps: Sorry if this is an awkward question, but I just do not like waiting for things to happen, and I think that this process should be smoother than it is now. Also, thanks in advance!

    Read the article
