Search Results

Search found 2115 results on 85 pages for 'chum of chance'.

Page 57 of 85

  • Requiring SSH-key Login From Specific IP Ranges

    - by Sean M
    I need to be able to access my server (Ubuntu 8.04 LTS) from remote sites, but I'd like to worry a bit less about password complexity, so I'd like to require SSH keys for login instead of name/password. However, I still have a lot to learn about security, and having already badly broken a test box while trying to set this up, I'm acutely aware of the chance of screwing myself in the process. So I have a second goal: I'd like to require that certain IP ranges (e.g. 10.0.0.0/8) may log in with name/password, but everyone else must use an SSH key to log in. How can I satisfy both of these goals? There is already a very similar question here, but I can't quite figure out how to get from that information to what I want. Current tactic: reading through the PAM documentation (pam_access looks promising) and looking at /etc/ssh/sshd_config.

    Edit: Alternatively, is there a way to specify that certain users must authenticate with SSH keys, while others may authenticate with name/password?

    Solution that's currently working:

        # Globally deny logon via password, only allow SSH-key login.
        PasswordAuthentication no
        # But allow connections from the LAN to use passwords.
        Match Address 192.168.*.*
            PasswordAuthentication yes

    The Match Address block can also usefully be a Match User block, which answers my secondary question. For now I'm just chalking the failure to parse CIDR addresses up to a quirk of my install, and resolving to try again when I go to Ubuntu 10.04 not too long from now. PAM turns out not to be necessary.
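    For reference, a minimal sketch of the same policy written with CIDR notation, which newer OpenSSH releases (such as the one shipped with Ubuntu 10.04) accept in Match Address; the exact version support, the range, and the usernames below are assumptions:

        # /etc/ssh/sshd_config -- deny passwords globally, allow from one range
        PasswordAuthentication no
        Match Address 10.0.0.0/8
            PasswordAuthentication yes
        # per-user variant of the same idea (hypothetical users)
        Match User alice,bob
            PasswordAuthentication yes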

    Read the article

  • "postgres blocked for more than 120 seconds" - is my db still consistent?

    - by nn4l
    I am using an iSCSI volume on an Open-E storage system for several virtual machines running on a XenServer host. Occasionally, when there is very high disk I/O load on the virtual machines (and therefore also on the storage system), I get these messages on the VM consoles:

        [2594520.161701] INFO: task kjournald:117 blocked for more than 120 seconds.
        [2594520.161787] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        [2594520.162194] INFO: task flush-202:0:229 blocked for more than 120 seconds.
        [2594520.162274] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        [2594520.162801] INFO: task postgres:1567 blocked for more than 120 seconds.
        [2594520.162882] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.

    I understand that the kernel prints this message to report that these processes haven't run for 120 seconds, most likely because a disk access to the storage system has not yet been serviced. But what is the effect on the processes? For example, will the postgres process eventually write its data once the storage system is idle again a few minutes later, so that all data remains consistent? Or will it abort the write, leaving some tables in an inconsistent state? I certainly expect the former to be the case: if disk access is slow, postgres (or any other affected process) should simply wait as long as it takes. I can live with the application hanging for a few minutes, but if there is a chance of data corruption then any of these errors is really bad news. Please advise what to do here.
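    A quick way to confirm the durability settings this question hinges on, as a sketch assuming PostgreSQL's defaults (and a version of 8.3 or later for the second setting); with fsync enabled, a stalled write blocks the backend rather than being abandoned half-done:

        -- in psql on the affected VM
        SHOW fsync;               -- 'on' means WAL is forced to disk before commit
        SHOW synchronous_commit;  -- 'on' means COMMIT waits for that flush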

    Read the article

  • Restoring snapshot for Microsoft Exchange server

    - by Mugen
    Hi, the background: I need to do some testing with Microsoft Exchange Server. Specifically, I'll be installing some software on the Exchange server machine and then uninstalling that same software again. The problem I face: while I repeatedly do this with different versions of my software, there is a chance that at some point the Exchange installation gets corrupted. When that happens I would need to reinstall Exchange, which I feel is somewhat of a chore. So what I am planning to do is install Exchange on a virtual machine in VMware ESX Server and take a snapshot, so that whenever the installation is corrupted I can restore the snapshot. So here's my question: would restoring the snapshot of the Microsoft Exchange virtual machine work correctly? I'm not familiar with the intricacies of Exchange or with any changes (if any) that happen to the domain controller when we use or install an Exchange server (personally I don't think that should happen, but I'm just making sure). I have a shortage of time and hence decided to post this question here. Could someone please tell me whether restoring a snapshot of an Exchange server would work fine? Thanks a load, Mugen

    Read the article

  • How does one change the UUID of a Volume on Mac OSX 10.6?

    - by Emmel
    Does anyone know how to change the UUID of a volume? The background for this question is that I have a duplicate-UUID issue: /Volumes/OldMacHD has a UUID of XYZ, and /Volumes/Mirror1 has a UUID of XYZ as well (the same UUID! I bet that's because OldMacHD used to be part of this mirror). I got these UUIDs via:

        diskutil info /dev/thatdisknumber | grep UUID

    I'd like to change the UUID of 'Mirror1'. I discovered by chance the 'hfs.util' utility, since these are HFS volumes after all. The man page for hfs.util says that the -s flag changes the UUID. However, if you run hfs.util all by itself, it shows you every option besides -s! Grr. I tried it anyway:

        sudo /System/Library/Filesystems/hfs.fs/hfs.util -s /dev/disk4

    (the RAID volume). Nothing happens: no error message, no success message, and the UUID is exactly the same. I also tried it while the volume was unmounted. Any ideas?
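    One hedged guess, sketched below: hfs.util generally acts on a volume's slice rather than the whole-disk node, and some builds want the raw device; the slice number here is a placeholder to look up first:

        diskutil list                      # find the volume's slice, e.g. disk4s2
        sudo umount /Volumes/Mirror1
        sudo /System/Library/Filesystems/hfs.fs/hfs.util -s /dev/rdisk4s2
        diskutil info disk4s2 | grep UUID  # verify whether the UUID changed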

    Read the article

  • Synchronization between folders on Mac OS X Lion

    - by Andre Carvalho
    I have an iMac at home and I use a MacBook Pro for work. I also have a Time Capsule at home containing my main folder with my main files; I use it as a NAS besides the Time Machine backup tool. There are several personal files I need to access both at home and at work. My wife, who works at home, sometimes uses the same .XLS and .DOC files that I might have used during my day at work, away from home. My question is: is there a software or tool that I can use to sync my iMac and my MacBook Pro folders? Bearing in mind that:

    - There is a chance that my wife and I have changed the same files during the day, so the files would have to be merged so that none of the information added by either of us is lost.
    - The software/tool installed on the MacBook Pro would need to mount the Time Capsule volume so it could locate the main folder on it.
    - It has to run automatically when my MacBook is at home (with a schedule option).

    I have tested some programs like SyncTwoFolders and ChronoSync, but none fulfilled all my needs. The first couldn't mount the Time Capsule volume and didn't have many schedule options. I really liked ChronoSync, but it doesn't merge the files: when it detects a conflict (for instance, my wife changed a .DOC file on the iMac and I changed the same file on the MacBook) it asks you to choose which version you want to keep instead of simply allowing you to merge them. I don't have much experience with Automator or scripts, but maybe you can give me a hand with that; a sketch follows below.
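    Since a scripted starting point was asked for: a rough sketch that mounts the Time Capsule share and does a "newest file wins" two-way sync. It is not a true merge (nothing here reconciles edits made inside the same .DOC file), and the share name, account and paths are all assumptions:

        #!/bin/sh
        # mount the Time Capsule share over AFP if it isn't mounted yet
        mkdir -p /Volumes/TimeCapsule
        mount_afp "afp://user:pass@timecapsule.local/Data" /Volumes/TimeCapsule
        # -a preserves metadata, -u skips files that are newer on the receiver
        rsync -au /Volumes/TimeCapsule/Main/ "$HOME/Main/"
        rsync -au "$HOME/Main/" /Volumes/TimeCapsule/Main/

    Hung off launchd (or a repeating iCal alarm) this would cover the scheduling requirement.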

    Read the article

  • Is it ever good to share a userid?

    - by Ladlestein
    On Un*x, is it ever a good idea to have one userid that many different people log into when they do stuff? Often I'm installing software or something on a Linux or BSD system. I've developed software for 24 years now, so I know how to make the machine do what I want, but I've never had responsibility for maintaining a multi-user installation where anyone really cared about security, so my opinions feel untested. Now I'm at a company where there's a server that many people log into with a single userid to do stuff. I'm installing some software on it. It's not really a public-facing server and is only accessible via VPN, but it's used by many people nonetheless, to run tests on custom software, things like that. It's a staging server. I'm thinking that, at the very least, using a single user obscures the audit trail, and that's bad. And it's just inelegant, because people don't have their own spaces on the server. But then again, with more userids, maybe there's a greater chance that one of them gets compromised, allowing attackers to gain access. Is that so?
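    For what it's worth, a sketch of the usual middle ground, with a hypothetical shared role account appuser and group staging: everyone keeps a personal login (so the audit trail survives) and acts as the shared identity through sudo, which logs who invoked what:

        # one-time setup per person, run as an administrator
        adduser alice
        usermod -aG staging alice
        # day to day: alice works as the shared role; /var/log/auth.log
        # records that it was alice who assumed it
        sudo -u appuser -i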

    Read the article

  • Can't Remote Desktop to Windows XP, blaming the server side

    - by Jin
    After rebooting my work PC (Windows XP SP3) this Wednesday (thanks to Microsoft's Patch Tuesday), I found that I can't remote-desktop to my work PC from home (with VPN to the company). I have been using Remote Desktop to work for years, and I am really surprised, since connectivity is not the problem, so I brought up Wireshark to sniff the packets. I can see that after the TCP handshake, the client sent an X.224 Connection Request:

        03 00 00 13 0e e0 00 00 00 00 00 01 00 08 00 03 00 00 00

    and the server sent an X.224 Connection Confirm:

        03 00 00 0b 06 d0 00 00 12 34 00

    According to "MS-RDPBCGR", the official spec for RDP, the server should include a Negotiation Response in the Connection Confirm message, but it didn't; it's empty. I googled a lot but didn't find any clue as to why the server did that. By the way, I used the same Remote Desktop client and can connect to other Windows XP PCs. Here are a couple of pieces of information that may give a clue: since the TCP handshake completes (server port being 3389), I believe the svchost service is actually running; in Control Panel -> System -> "Remote" tab, Remote Desktop is indeed checked, and it states that my username is allowed; and according to the packet capture, the client never even got a chance to tell the server which user was trying to log on. The progress bar showed up for a few seconds and then it went back to the "Remote Desktop Connection" window again. I searched "windowsupdate.log" and didn't find any occurrence of the word "remote".
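    A hedged first diagnostic pass on the server side, using only tools that ship with XP; the PID on the second line is a placeholder to fill in from the first:

        rem confirm the listener on 3389 and note the owning PID
        netstat -ano | findstr :3389
        rem see which service lives in that process (1234 is a placeholder)
        tasklist /svc /fi "PID eq 1234"
        rem state of the Terminal Services service itself
        sc query TermService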

    Read the article

  • How do I diagnose a bottleneck in an Intel Atom based Ubuntu server?

    - by Jon Cage
    I have a small media server at home which has software RAID and a gigabit link to the rest of my network. For some reason, though, I only get ~10MB/s transfers when copying to/from the server. I use software RAID 5 (mdadm) over four 1TB disks. On top of that I use LVM to give me one huge pool of disk space, which is then split up into multiple partitions that can be resized as and when they need it. I'm guessing this is the most likely cause, but I'd like to know for sure where the root cause is. So, how can I benchmark network throughput (Windows 7 desktop <-> Ubuntu server) and hard disk performance, to try to identify where my bottleneck might be?

    [Edit] If anyone's interested, the motherboard is an Intel Desktop Board D945GCLF2, so that's a 300-series Atom processor with the Intel® 945GC Express chipset.

    [Edit2] I feel like such a fool! I just checked my desktop and I had the slower of the two onboard NICs plugged in, so the server is probably not at fault here. Transferring a copy of Ubuntu off the server, I get ~35-40MB/s according to Windows 7. I'll do those HD tests when I get a chance though (just for completeness).
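    A sketch of keeping the two measurements separate, so the network and the disks can be ruled in or out independently; iperf has to be installed on both ends (Windows builds exist), and the mount point in the dd line is a placeholder:

        # network: run the server end on Ubuntu, the client end on the desktop
        iperf -s                      # on the Ubuntu server
        iperf -c <server-ip> -t 30    # on the Windows 7 box

        # disks: raw read speed of the array, then a forced-flush write via LVM
        sudo hdparm -t /dev/md0
        dd if=/dev/zero of=/mnt/pool/testfile bs=1M count=1024 conv=fdatasync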

    Read the article

  • Error loading operating system: format Windows 7 to Windows XP Service Pack 3

    - by Blerta
    I saw that there are other questions like mine here, and I also saw that some problems were solved with fixmbr from a Windows 7 recovery console, but that didn't work for me. I bought my laptop with Vista installed and later reformatted and used Windows 7. While running Windows 7 I had some problems with my hard drive and found out it was dead, so I bought a new one. I wanted to reformat with Windows XP, because Windows 7 consumes more RAM than the machine can handle and I wanted to use it for other programs. So I formatted with Windows XP Service Pack 3, but after the first reboot a message appeared: "Error loading operating system". Reading here, I assumed that maybe I had installed it on the wrong partition and maybe had two OSes now, so I used fixmbr, but it is still not starting up. In any case, I am sure this is not a case of two operating systems. Is there any chance that a computer designed to work with Vista would have problems with Windows XP, like not recognizing the hard drive?
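    One hedged avenue, since "not recognizing the hard drive" is a real possibility on Vista-era laptops: XP has no AHCI driver out of the box, so if the BIOS offers an IDE/Compatibility mode for the SATA controller, that is worth checking first. After that, the commonly suggested XP Recovery Console sequence is sketched below, assuming XP actually landed on C::

        fixmbr
        fixboot C:
        bootcfg /rebuild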

    Read the article

  • With WebMatrix, How do I Connect to a MySQL Database on a Colleague's Machine?

    - by Ash Clarke
    I have scoured Google trying to discover how to do this, but essentially I want to connect to a colleague's MySQL database so we can work together on a WordPress installation. I am having no luck and keep getting an error about the connection not being possible:

        Unable to connect to any of the specified MySQL hosts.
        MySql.Data.MySqlClient.MySqlException (0x80004005): Unable to connect to any of the specified MySQL hosts.
           at MySql.Data.MySqlClient.NativeDriver.Open()
           at MySql.Data.MySqlClient.Driver.Open()
           at MySql.Data.MySqlClient.Driver.Create(MySqlConnectionStringBuilder settings)
           at MySql.Data.MySqlClient.MySqlPool.GetPooledConnection()
           at MySql.Data.MySqlClient.MySqlPool.TryToGetDriver()
           at MySql.Data.MySqlClient.MySqlPool.GetConnection()
           at MySql.Data.MySqlClient.MySqlConnection.Open()
           at Microsoft.WebMatrix.DatabaseManager.MySqlDatabase.MySqlDatabaseProvider.TestConnection(String connectionString)
           at Microsoft.WebMatrix.DatabaseManager.IisDbManagerModuleService.TestConnection(DatabaseConnection databaseConnection, String configPathState)
           at Microsoft.WebMatrix.DatabaseManager.Client.ClientConnection.Test(ManagementConfigurationPath configPath)
           at Microsoft.WebMatrix.DatabaseManager.Client.DatabaseHierarchyInfo.EnsureLoaded()

    The connection details are copied from my colleague's connection string, with the exception of the server, which is changed to the IP address of his machine. I'm not sure if there is a firewall port I have to open or a configuration file I have to modify, but I'm not having much luck so far. (There is a strong chance that, by default, WebMatrix / IIS Express doesn't set the MySQL database it creates to accept remote connections. If anyone knows how to change this, that would be grand!) Anyone have any ideas?
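    A sketch of what typically has to happen on the colleague's machine before remote connections work; the user, password, database and subnet are placeholders, and TCP port 3306 also has to be allowed through his Windows firewall:

        -- in the MySQL console on the colleague's machine
        GRANT ALL PRIVILEGES ON wordpress.* TO 'ash'@'192.168.1.%' IDENTIFIED BY 'secret';
        FLUSH PRIVILEGES;

    And in his my.ini, MySQL must not be bound to loopback only, i.e. any "bind-address = 127.0.0.1" line needs commenting out or pointing at the LAN address.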

    Read the article

  • Windows Vista claims wireless key is the wrong length

    - by humble coffee
    A family member of mine is house-sitting and has been given the details of the wifi there. The access point is an AirPort Express with WEP encryption (I think), and they've been given a passphrase to use. I know it's a passphrase and not the encrypted key, as it's an English word. The passphrase is 10 characters long. The problem is that Vista complains that it's not a valid key, as it must be a 5- or 13-character non-hex key or a 10- or 26-character hex key. (From what I've read, this suggests the encryption is WEP?) I've found a couple of suggested solutions, but I'm not actually at the house at the moment, so I want to give myself a good chance of getting it to work when I'm there and have no internets to ask. Solution 1: Vista needs to be told explicitly what kind of encryption and key are being used; specify in the connection settings that you are using WEP and that it is a "shared key". Solution 2: try converting the passphrase to hexadecimal using an ASCII-to-hex converter and entering that.
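    For solution 2, a one-line sketch of the conversion (the passphrase is a placeholder). One hedge worth knowing in advance: a 10-character ASCII string yields 20 hex digits, a length Vista also rejects, so the key that actually works may be the "Equivalent Network Password" that AirPort Utility can display for the base station:

        echo -n 'passphrase' | xxd -p   # prints 20 hex digits for a 10-char word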

    Read the article

  • How do I configure nVidia drivers on a Portable Ubuntu setup?

    - by Nicholas Flynt
    I've been pulling my hair out over this one for a couple of days now; Google is no help. I've created a wonderful (until this issue) portable copy of Ubuntu Linux that will boot on almost anything, by using a USB enclosure for my laptop's 80GB SATA drive. So far so good: it boots and runs on everything, and on non-nVidia setups it was even detecting the hardware, or letting me install the required drivers, for hardware acceleration and Compiz. Because, you know, the wobbly windows are the most awesome thing ever. Anyway, my desktop machine has an nVidia card, so I thought, sure, I'll just install the nVidia drivers like before and everything will work happily. Not so: now the desktop and any other nVidia machines work great, but it seems to have completely disabled every other graphics card. When the kernel module detects that an nVidia card isn't present, it pops up a nasty little dialog box giving me the option to boot into "low graphics" mode, which doesn't even allow me to use the correct screen resolution, much less see the installed graphics card and try to configure a driver for it. Is there any way to configure Ubuntu (with the dreaded nVidia kernel module) so that it uses nVidia's drivers when an nVidia card is present, and defaults to the normal (not low-graphics) setup otherwise, so that it has a fair chance of using what's actually present? I'm not afraid to muck with config files, I just don't know the underlying system well enough to feel comfortable diving in without a push in the right direction. Thanks guys!
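    As one push in that direction, a rough boot-time sketch of the "use what's present" idea; it assumes two prepared xorg.conf variants whose names are made up here, and it sidesteps rather than solves the deeper problem that nVidia's .run installer replaces libGL system-wide (the packaged drivers are gentler about that):

        #!/bin/sh
        # pick an X config before X starts, based on what's on the PCI bus
        if lspci | grep -qi 'vga.*nvidia'; then
            cp /etc/X11/xorg.conf.nvidia  /etc/X11/xorg.conf
        else
            cp /etc/X11/xorg.conf.generic /etc/X11/xorg.conf
        fi

    Run from an init script ordered before gdm, this swaps the config in before X comes up.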

    Read the article

  • How do I run a stable Windows XP kvm guest on Ubuntu 10.04?

    - by Jean-Paul Calderone
    I have three Windows XP guests running on a recently upgraded 64-bit Ubuntu 10.04 system. Occasionally (on the order of once every few days), one of the guests becomes non-responsive and the kvm process on the host which is running that guest starts consuming 100% CPU. It continues to do so until it is killed. When restarted, it is fine for a while, and then the issue repeats. The kvm command line used to run all three guests is this:

        /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 1024 -smp 1 -name bigdog21vmxp1 \
            -uuid ea47ff84-125b-16f7-9a4d-a6d0d8bab46a \
            -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/bigdog21vmxp1.monitor,server,nowait \
            -monitor chardev:monitor \
            -localtime \
            -boot c \
            -drive file=/var/lib/libvirt/images/windowsxp-1.qcow2,if=ide,index=0,boot=on,format=qcow2 \
            -net nic,macaddr=54:52:00:02:06:0e,vlan=0,name=nic.0 \
            -net tap,fd=58,vlan=0,name=tap.0 \
            -chardev pty,id=serial0 \
            -serial chardev:serial0 \
            -parallel none \
            -usb \
            -usbdevice tablet \
            -vnc 127.0.0.1:1 \
            -k en-us \
            -vga cirrus \
            -soundhw es1370

    Why do the systems misbehave this way sometimes? What configuration can I change in order to fix it? Or, if the problem is due to a bug in kvm, what is the process for isolating a kvm failure so that the developers have a chance of fixing it?
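    On the isolation question, a hedged first step: the command line above already exposes a qemu monitor socket, so the wedged guest can be inspected without restarting it (socat is an extra package; the monitor commands themselves are standard):

        socat - UNIX-CONNECT:/var/lib/libvirt/qemu/bigdog21vmxp1.monitor
        (qemu) info status      # running vs. paused
        (qemu) info registers   # a spinning guest tends to show the same RIP repeatedly

    Pairing that with "perf top -p <kvm-pid>" on the host helps tell a guest-side spin from a qemu-side one.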

    Read the article

  • Is there a way to use something similar to a capture group for the apache2 ServerName?

    - by Zipper
    I have a server that sits behind an AWS load balancer. The LB can't do automatic redirects from HTTP to HTTPS, and the LB is doing my SSL termination. So I need to set up Apache on my servers to redirect any request on port 80 to https://FOOBAR, where FOOBAR is the domain the request came in on. I haven't been able to find a way of doing that so far; I'm an Apache newb, though. What I'm trying to do is something like this, using regex as an example:

        <VirtualHost *:80>
            ServerName (.*)
            Redirect / https://\1
        </VirtualHost>

    If there's a better way to do this, please let me know.

    EDIT: Sorry, I should have explained why this is happening. I actually have a Tomcat server running my app on port 8080, and the LB points to that. From what I can tell so far, my requests come in on HTTP (which is expected), but when my app server sends redirects (for login purposes) it redirects to HTTP instead of HTTPS. I haven't had a chance to fully investigate this, but I want to work around it for now by pointing the LB at the Apache server and having any port 80 requests redirect to 443.

    EDIT2: The other reason I'm interested in doing this is that, since the LB can't do the redirect, I need another redirect mechanism in place to tell the browser to go to https://FOOBAR.
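    The stock way to get the capture-group effect is mod_rewrite, where %{HTTP_HOST} plays the role of \1 above; a sketch assuming the module is enabled. With an ELB terminating SSL, keying off the X-Forwarded-Proto header it sets avoids a redirect loop if HTTPS traffic ever reaches the same vhost:

        <VirtualHost *:80>
            RewriteEngine On
            RewriteCond %{HTTP:X-Forwarded-Proto} !https
            RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
        </VirtualHost>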

    Read the article

  • ADO 2.8, VMware and SQL Server - sudden bulk dropped connections

    - by CodeByMoonlight
    We have an overworked server currently running a single SQL Server 2000 instance on physical hardware, and about 40 different apps interact with it on a daily basis. Last year the RAID controller failed and we had no spare, so IT support hurriedly migrated it overnight to a copy running on a VMware server. While it was on that server everything ran much quicker, it being a big improvement in spec. However, the biggest app using it had occasional serious errors which never occurred on physical hardware. Specifically, several times a week it would disconnect batches of users: anywhere from just ten to hundreds at once, and all at the same time. It didn't affect any particular users or PCs or offices; all were affected equally. The only common factor was the app, which is a VB6 app using ADO 2.8 to connect. The other apps connecting to that virtualised instance of SQL Server seemingly had no problems, although they were (and are) responsible for only a tiny fraction of the work involving this server. The upshot is that after about two weeks of loving the speed and hating the random mass disconnections (for which we were never able to find a cause), we sadly took the decision to return to physical hardware, and the disconnections vanished. Now we've reached the point where the old server just can't handle all that's being asked of it, and we intend to migrate everything to two or more other servers. The snag is that there's a good chance they'll have to be virtual ones again. Given what happened last time, I'm trying to find out what possible reasons there could be for these mass disconnections. We were running VMware ESX, but the network is Novell-based. Also, the server had a linked server set up to connect to an Informix server using a known-to-be-buggy ODBC driver, and this is used throughout the day. Any ideas on the cause(s)?

    Read the article

  • Network Load Balancing, intermittent port problem

    - by Jimmy Chandra
    Trying to troubleshoot an intermittent problem, which I think might be related to an NLB issue. We are using Windows Network Load Balancing to balance load for our multi-server SharePoint front ends. Say Web Front End 1's IP is 192.168.1.100 and Web Front End 2's IP is 192.168.1.101; NLB is set up to balance both WFE servers for any incoming traffic to the IP 192.168.1.200. Sometimes we get an intermittent issue where, when we try to access the SharePoint site using 192.168.1.200:8080 (say the site is set up to run on port 8080) from a remote client, it displays "page not found". Pinging 192.168.1.200 gets responses, but trying to telnet to 192.168.1.200:8080 just won't connect. However, browsing the SharePoint site directly on the individual WFEs (192.168.1.100 and 192.168.1.101) shows no problem whatsoever. My guess also (we didn't get a chance to try it yet, but I think it should hold) is that connecting remotely to an individual server would respond just fine, while any attempt to connect using the virtual IP (192.168.1.200) would fail miserably. The funny thing is, after a while it returns to normal. Has anyone had a similar experience with this type of problem while implementing NLB? We are doing this in a virtual environment.
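    When it next happens, a hedged first look from one of the WFE hosts, using the tooling NLB itself ships with (wlbs.exe on older Windows versions, nlb.exe on newer ones):

        rem convergence state of the cluster as this host sees it
        wlbs query
        rem confirm the site is still listening locally on the balanced port
        netstat -an | findstr :8080

    Hosts stuck mid-convergence or disagreeing about membership show up in the query output; and since this is a virtual environment, the switch/vSwitch handling of the NLB cluster MAC (unicast vs. multicast mode) is the classic suspect.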

    Read the article

  • Is it possible to use rsync over sftp (without an ssh shell)?

    - by Tom Feiner
    Rsync over ssh works great every time. However, trying to rsync to a host which allows only sftp logins, but not ssh logins, produces the following error:

        rsync -av -e ssh /source user@remotehost:/target/
        protocol version mismatch -- is your shell clean?
        (see the rsync man page for an explanation)
        rsync error: protocol incompatibility (code 2) at compat.c(171) [sender=3.0.6]

    Here's the relevant section from the rsync man page:

        This message is usually caused by your startup scripts or remote shell
        facility producing unwanted garbage on the stream that rsync is using
        for its transport. The way to diagnose this problem is to run your
        remote shell like this:

            ssh remotehost /bin/true > out.dat

        then look at out.dat. If everything is working correctly then out.dat
        should be a zero length file. If you are getting the above error from
        rsync then you will probably find that out.dat contains some text or
        data. Look at the contents and try to work out what is producing it.
        The most common cause is incorrectly configured shell startup scripts
        (such as .cshrc or .profile) that contain output statements for
        non-interactive logins.

    Trying this on my system produced the following in out.dat:

        ssh-dummy-shell: Command not allowed.

    As I thought, the host is not allowing ssh logins. The following link shows that it is possible to accomplish this task using FUSE with sshfs; however, it is extremely slow and not fit for production use. Is there any chance of getting rsync over sftp to work?
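    For what it's worth, rsync's protocol needs to start a remote rsync binary over a real shell, which sftp-only access rules out directly; a sketch of the usual fallback, lftp's mirror mode over sftp (paths are placeholders):

        # push local changes to the sftp-only host; mirror -R reverses direction
        lftp -u user sftp://remotehost -e "mirror -R /source /target; quit"

    lftp compares size and timestamp to transfer only what changed, which recovers some of rsync's efficiency, though not its delta-transfer.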

    Read the article

  • APACHE - PHP - Bounce emails

    - by user1179459
    I want to improve our mailing lists by handling all the bounces we get on our websites. I have a website with over 8000 users and another with over 1500 users; they are emailed various notifications constantly, i.e. job alerts and email alerts. I am using POP connections with Exim on an Apache server, and most scripts are PHP and generate email on the fly.

    Problems I have:
    - Some users registered a long time ago, and by now quite a few have bouncing email addresses.
    - Some users register with dummy addresses like [email protected], which never existed but are syntactically valid. Is there any chance of stopping this without asking them to log into the email account and click a link? (That often doesn't work and is too annoying for the end user.)
    - The server is sending unnecessary emails, which could be avoided if I knew the addresses don't exist.

    Solutions I need:
    - Is there a way I can download the bounce list somewhere (WHM/cPanel)? I know Exim has it, but it's not readable; I need a file like CSV or something similar that I can scan, so I can write a PHP script to delete those users from the database.
    - Is there any function in PHP that can check the existence of an email address on the fly, so that the send function in the mailer class can check before it sends? (See the sketch below.)
    - Will bouncing emails eat up a lot of server resources (memory/CPU) while being processed, or is the load minimal enough not to worry about?
    - Maybe an open-source or Linux tool to capture bounces, view them as reports, and clean them up?

    I am not a Linux expert or server admin, but I do a lot of PHP coding, so please be descriptive with the solutions, especially if they are Linux commands. Thank you!
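    On the PHP check: a minimal sketch of what is actually possible before sending, namely a syntax check plus a DNS lookup on the domain. Nothing short of a full SMTP conversation can prove a mailbox exists, so treat this as a filter for the dummy-address class only (the function name is made up):

        <?php
        // true if the address is well-formed and its domain can receive
        // mail (has an MX record, or falls back to an A record)
        function email_looks_deliverable($addr) {
            if (!filter_var($addr, FILTER_VALIDATE_EMAIL)) {
                return false;
            }
            $domain = substr(strrchr($addr, '@'), 1);
            return checkdnsrr($domain, 'MX') || checkdnsrr($domain, 'A');
        }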

    Read the article

  • How to give wife emergency access to logins, passwords, etc.?

    - by Torben Gundtofte-Bruun
    I'm the digital guru in my household. My wife is good with email and forum websites, but she trusts me with all our important digital stuff, such as online banking and other things that require passwords, but also family photos and the plethora of other digital things in a modern home. We discuss relevant actions, but it's always me who executes them. If I should get "hit by a bus", my wife would be thoroughly stranded: she would have no idea what digital stuff is where on our computer, how to access it, what online accounts we have, or what their login credentials are. It would also leave my many public appearances (personal websites, email accounts, social networks, etc.) unresolved. To complicate things, I'm not one of those people who use "password" as their password everywhere; I use a mix of SuperGenPass and LastPass, and also two-factor authentication whenever possible. I don't have much hope that she would find her way through a written explanation of all that in a stressful situation. I could just tell her to ask my tech-savvy twin brother and then entrust him with my LastPass master passphrase. I feel that would have a high chance of success, but it's inelegant and leaves my wife without control of the information. How can I ensure that my wife has access to my digital remains?

    Read the article

  • Outlook, Word, and normal.dot (2003 Edition)

    - by mosiac
    I have one user who for some reason has been having macro issues with her normal.dot file. At first the fix was just to remove the file, because she doesn't actually need to save anything in it, but that was really a temporary fix. We found out that for some reason, every time she opened Word it was trying to modify normal.dot without asking. I set it up to ask, so at least we could control the changes going into normal.dot. There was one file disabled in Word that we enabled, because it was a document she never used anymore, making us think that maybe that was the issue. We have automatic antivirus updates and scans, so there is little chance of a virus. The issue has stopped as far as using Word by itself goes: she can open, close, edit, save, etc. and never gets the dialog. In Outlook, however, if she clicks reply or forward on an e-mail but decides not to send it and just closes it, she gets the pop-up asking to save changes to normal.dot. This leads me to believe that something in Outlook, about how she is set up to use Word as an e-mail editor, is causing the problem. Am I even on the right track here? Short form: Word works fine with normal.dot; as an Outlook mail editor, it wants to change normal.dot. No idea what to do.

    Read the article

  • Architecture for highly available MySQL with automatic failover in physically diverse locations

    - by Warner
    I have been researching high availability (HA) solutions for MySQL between data centers. For servers located in the same physical environment, I have preferred dual master with heartbeat (floating VIP) using an active/passive approach. The heartbeat runs over both a serial connection and an ethernet connection. Ultimately, my goal is to maintain this same level of availability, but between data centers. I want to dynamically fail over between both data centers without manual intervention and still maintain data integrity. There would be BGP on top, and web clusters in both locations, which would have the potential to route to the databases on either side. If the Internet connection went down at site 1, clients would route through site 2 to the web cluster, and then to the database in site 1, if the link between both sites is still up. With this scenario, due to the lack of a physical (serial) link, there is a higher chance of split brain: if the WAN went down between both sites, the VIP would end up active at both sites, where a variety of unpleasant scenarios could introduce desync. Another potential issue I see is difficulty scaling this infrastructure to a third data center in the future. The network layer is not a focus, and the architecture is flexible at this stage. Again, my focus is a solution for maintaining data integrity as well as automatic failover with the MySQL databases; I would likely design the rest around this. Can you recommend a proven solution for MySQL HA between two physically diverse sites? Thank you for taking the time to read this. I look forward to reading your recommendations.
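    Not an answer to split brain, but for concreteness, a minimal sketch of one side of the dual-master replication pair described above; server-ids and offsets are placeholders, and the other master mirrors them:

        [mysqld]
        server-id                = 1
        log-bin                  = mysql-bin
        auto_increment_increment = 2   # leave key space for the peer
        auto_increment_offset    = 1   # the peer uses offset 2

    The auto_increment settings keep the two masters from generating colliding primary keys while both accept writes.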

    Read the article

  • vmdk recovery after migration from ESX 3.5 to 4 and a fallback attempt

    - by olgirard
    Hi, I've tried to migrate some VMs from my ESX 3.5i environment to a brand new vSphere 4.0 U1. The two platforms are running simultaneously, sharing the same SAN. I migrate a VM by stopping it, unregistering it in vCenter (ESX version 3.5, which I'll call esx3), registering it in vSphere (ESX version 4, which I'll call esx4), and upgrading the virtual hardware before powering it up (first mistake). vMotion was enabled on esx4, which seems to have been a second mistake. After a day or so I encountered problems joining the esx4 server and decided to unregister my server from esx4 and fall back to esx3. It refused to boot on esx3; I supposed this was due to the virtual hardware being version 7, so I created a new VM pointing to the vmdk of the old VM. Everything seemed fine until I logged into the server and discovered that it was running on the original disk, with every snapshot ignored, even those created on esx3. I tried to reboot the VM on esx4, but it doesn't power up because "The parent virtual disk has been modified since the child was created". I do have a copy of a later state of the drive as a backup, but it was generated between two snapshots (an OVF made with Converter Standalone). Do I have a chance of recovering at least some files on the virtual drive, or is it all played out? I've made enough mistakes for this time. Thanks for your help.
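    The "parent virtual disk has been modified" message usually means the CID chain in the vmdk descriptor files no longer lines up. A hedged look from the ESX service console, with the datastore and file names as placeholders, and working on copies first as the safe route:

        # compare the parent's CID with the child's parentCID
        grep -H 'CID' /vmfs/volumes/datastore1/vm/vm.vmdk \
                      /vmfs/volumes/datastore1/vm/vm-000001.vmdk

    If the child's parentCID no longer matches the parent's current CID, editing the descriptor so that they match again can make the snapshot chain readable once more.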

    Read the article

  • Is browser based wireless authentication secure?

    - by johnnyb10
    Our wireless network previously used a pre-shared WPA/WPA2 key for guest access, which allows guests onto the Internet (our employee access uses 802.1x authentication). We just had a wireless consultant come in to fix various wireless issues we had; one of the things he wound up doing was changing our guest access to HTML-based instead of the pre-shared key. So now that guest SSID is open (instead of using WPA), and users are presented with a browser-based login screen before they can get on the Internet. My question is: is this an acceptable method from a security standpoint? I would assume that having an open network is necessarily a bad idea, but the consultant said that the traffic is still using PEAP, so it's secure. I didn't get a chance to question him further on this because we ran late and a bunch of other things came up. Please let me know what you think about the advantages/disadvantages of using HTML-based wireless authentication as opposed to a pre-shared WPA key. Thanks...

    Read the article

  • What is the fastest way to move 1 petabyte from one storage system to a new one?

    - by marc.riera
    First of all, thanks for reading, and sorry for asking something related to my job. I understand that this is something I should solve by myself, but as you will see, it's a bit difficult. A short description:

    Now:
    - Storage = 1PB using DDN S2A9900 storage for the OSTs, 4 OSS, 10GigE network (Lustre 1.6)
    - 100 compute nodes with 2x InfiniBand
    - 1 InfiniBand switch with 36 ports

    After:
    - Storage = previous storage + another 1PB using DDN S2A9900 or LSI E5400 (still to be decided) (Lustre 2.0)
    - 8 OSS, 10GigE network
    - 100 compute nodes with 2x InfiniBand

    Previous experience: transferred 120TB in less than 3 days using the following command:

        tar -C /old --record-size 2048 -b 2048 -cf - dir | tar -C /new --record-size 2048 -b 2048 -xvf - 2>&1 | tee /tmp/dir.log

    So, big problem here: using big mathematical equations, I conclude that we are going to need one month to transfer the data from one side to the other, and during this time the researchers would have to step back, which I'm personally not happy with. I mention the InfiniBand connections because I think there may be a chance to use 18 compute nodes (18 * 2 IB = 36 ports) to transfer the data from one storage system to the other. I'm trying to figure out whether the IB switch will handle all the traffic, but even if it just burns up, it will go faster than 10GigE. Also, having Lustre 1.6 and 2.0 agents on the same server works quite well, so there is no need to go via 1.8 to upgrade the metadata servers in two steps. Any ideas? Many thanks.

    Note 1: Zoredache, we can divide it into two blocks: (A) 600TB and (B) 400TB. The idea is to move (A) to the new storage, which is formatted with Lustre 2.0, then reformat where (A) was with Lustre 2.0, move (B) to that block, and extend with the space where (B) was. This way we end up with (A) and (B) on separate filesystems, with 1PB each.
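    To make the fan-out idea concrete, a rough sketch of spreading the copy over the 18 nodes, one top-level directory at a time in round-robin; the node names and the assumption that /old and /new are mounted on every node are placeholders:

        #!/bin/sh
        # one rsync per top-level directory, cycled across 18 compute nodes
        # (a real run would queue work per node rather than launch all at once)
        i=0
        for d in /old/*; do
            node="node$(( i % 18 ))"
            i=$(( i + 1 ))
            ssh "$node" "rsync -a --inplace '$d' /new/" &
        done
        wait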

    Read the article
