Search Results

Search found 16032 results on 642 pages for 'sync framework'.

Page 456/642

  • My BIOS date/time is resetting to 2002, what should I do?

    - by Thierry Lam
    I bought my PC brand new in mid-2006. I'm currently dual-booting Windows XP 32-bit and Ubuntu Karmic Desktop. Over the last few days, when I boot up my computer, it tells me that my BIOS time is not set. I then have two choices:
    Windows first: I can safely get into Windows, but the date shows 2002/01/01 at 00:00. From there, Windows will not sync its time with its time servers; I have to advance the date myself. I can also advance the date from the BIOS at startup.
    Ubuntu first: since the date is still set to 2002, the boot loader cannot find my Linux partition. I can fix that by booting into Windows and setting the date properly, and the Linux issue is then fixed automatically.
    Both issues also go away if I set the BIOS date/time manually when I turn on the computer, but it's annoying to do that every single day since I shut down my PC daily. Am I having an OS issue or a hardware issue? How can I resolve this problem?
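
    A worn-out CMOS battery is the usual suspect when the clock resets across power-offs; until it is replaced, a stopgap like the following sketch (assuming an internet connection at boot and that ntpdate is installed on the Ubuntu side) can resync the clock and write it back to the hardware clock so the next boot starts with a sane date:

        # Fetch the correct time via NTP, then push it into the BIOS/RTC.
        sudo ntpdate pool.ntp.org
        sudo hwclock --systohc          # add --localtime if Windows expects local time in the RTC

    If the clock still resets on every cold boot, the motherboard battery (a CR2032 coin cell on most 2006-era boards) almost certainly needs replacing.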

    Read the article

  • Using rsync to migrate files between Windows servers: problems with Excel spreadsheets

    - by HorusKol
    We've been migrating about 220 GB of data from a Windows 2003 server to a Windows 2008 server, and because of the time it would take to copy that data and the necessity of keeping it available to users, I came up with the idea of using rsync on an Ubuntu server to broker the migration. (I might have gone for a proper Windows solution, but the applications I found were a bit pricey for a one-shot like this, and permissions are not a problem.) All well and good. Today I'm making the last sync and confirming that the new server is up to date using diff, but I've noticed an odd thing with Excel spreadsheets (.xls). Every Excel spreadsheet that was already copied in a previous synchronisation is being marked as "already up-to-date" by rsync. However, when I then run a diff, I'm told that the files differ. I'm copying them manually, as there are only a handful, but I was wondering what might be causing this. No other file type in the entire 220 GB tree has had any problem like this, just the Excel/.xls files. It'd be great if someone could come up with an explanation.
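
    rsync's default "quick check" compares only file size and modification time, so a file whose contents changed while keeping the same size and mtime is silently skipped. A minimal sketch of a checksum-based pass limited to the spreadsheets (the mount points are placeholders) would look like this:

        # Force content comparison instead of the size+mtime quick check,
        # but only for .xls files so the slow checksum pass stays small.
        rsync -av --checksum --include='*/' --include='*.xls' --exclude='*' \
            /mnt/old-server/ /mnt/new-server/

    The --checksum option is far slower than the default comparison, which is why restricting it to the affected file type makes sense for a one-off verification pass.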

    Read the article

  • Creating a backup system with FreeNAS

    - by masfenix
    We are currently in the process of opening a new accounting firm in the new year (actually moving from our previous location). I am looking for a cheap/free solution to back up our files (small text files, a couple of KB each). I was impressed with FreeNAS and Windows Backup, but I found out that Windows Backup only keeps history for a maximum of 2 years. The work machines will be running Windows 8 or Windows 7. There can be many work machines, however we only have one to start with (i.e. think of it as just one employee). I have an old Core 2 Duo with 2 GB of RAM that I can convert to a server if need be. I want the syncing to be done through the LAN since the data is confidential and should never touch the outside world. So ideally, I would like the following scenario: a SkyDrive/Dropbox-like service to sync my client files between the work machines and a central server. The "server" part should store the history of files (I don't know how this will be done since the files will have the same names?). This isn't really necessary, but I can see it becoming useful. I am not familiar with RAID, so does any software RAID solution exist? I will most likely be buying 2 hard drives.
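
    For the "store history of files" part, one common pattern is hard-link snapshots on the backup server: each night gets its own directory, but unchanged files are hard-linked to the previous night's copy, so old versions take almost no extra space. A rough sketch (the paths are placeholders, and tools such as rsnapshot wrap exactly this pattern):

        # Nightly snapshot on the backup server, run from cron.
        TODAY=$(date +%F)
        rsync -a --delete --link-dest=/backups/latest \
            /srv/clientfiles/ "/backups/$TODAY/"
        ln -sfn "/backups/$TODAY" /backups/latest    # move the "latest" pointer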

    Read the article

  • MySQL ADO.NET Connector & MSSQL Integration Services

    - by user1114330
    Here I am, day three... attempting to sync a data view on a Windows Vista box (64-bit) running MSSQL 2012 and Visual Studio 2010. Sanity is slipping and hunger for progress fills my attention. I went through hell trying to get the MySQL ODBC drivers to do the job, but to no avail... everyone seems to be lost, and all the threads I can find offer solutions that do not work for me. The problem: System DSNs not being seen by SSIS ("SSIS DSN Not Showing as ODBC Data Source"). So I made the decision to try out the ADO.NET connector... and to my surprise it actually appears in the data source selection list in SSIS. So I take off running: I create a Data Flow Task, create an ADO.NET Source (a local MSSQL DB)... all is good as usual. Then I move swiftly to creating an ADO.NET Destination, enter my credentials... wow, I am finally selecting a database on my Linux server! Happy, thinking that I have finally figured out a way to get the job done. Then I move to mappings... nope, something is wrong. I am getting an error that hurts my eyes:
        Pipeline component has returned HRESULT error code 0xC0208457 from a method call. Error at Data Flow Task [ADO NET Destination [81]]: Failed to get properties of external columns. The table name you entered may not exist, or you do not have SELECT permission on the table object, and an alternative attempt to get column properties through the connection has failed. Detailed error messages are: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '"database".tablename' at line 1. The descriptor files on path C:\Program Files (x86)\Microsoft SQL Server\110\DTS\ProviderDescriptors\ do not contain schema information for a connection of type MySQL.Data.MySqlClient.MySqlConnection.
    So it looks like it can't get the column information, and therefore I cannot map the tables properly. Any ideas on this would be ultra helpful... thanks in advance to all!
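
    The syntax error suggests the destination component is quoting identifiers with double quotes, which MySQL rejects unless ANSI_QUOTES is enabled. A commonly suggested workaround, offered here only as a hedged sketch (the host and user are placeholders, and this replaces any existing sql_mode flags), is to enable that mode on the MySQL server:

        # Allow double-quoted identifiers such as "database"."table".
        mysql -h mylinuxserver -u admin -p \
              -e "SET GLOBAL sql_mode = 'ANSI_QUOTES';"

    The same setting can be made permanent with sql-mode=ANSI_QUOTES in the [mysqld] section of my.cnf, so it survives a server restart.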

    Read the article

  • CloudFront with Custom Origin and ELB

    - by kmfk
    We are using CloudFront for our static assets but also wanted to allow for gzip. We set up a new distribution with a custom origin pointing back to our application servers, which are behind an Elastic Load Balancer. We manually keep the files in sync across the cluster and update them when we publish. However, with this setup we get nothing but Misses and RefreshHits from CloudFront, which so far has defeated the purpose. Are there any additional settings needed in order to use an ELB as your custom origin? The docs reference this as a viable solution. When we point the distribution to a single server in our production cluster instead, CloudFront caches our assets properly. Is it possible that the sticky-session cookie, and the header that gets added because of it, could be the issue?
        Cache-Control: no-cache="set-cookie"    // added by the load balancer
    Any ideas? FYI: currently we have our custom origin pointing to a single EC2 instance, so caching is working correctly, in case you try to curl the file below. Example headers:
        curl -I http://static.quick-cdn.com/css/9850999.css
        HTTP/1.0 200 OK
        Accept-Ranges: bytes
        Cache-Control: max-age=3700
        Cache-Control: no-cache="set-cookie"
        Content-Length: 23038
        Content-Type: text/css
        Date: Thu, 12 Apr 2012 23:03:52 GMT
        Last-Modified: Thu, 12 Apr 2012 23:00:14 GMT
        Server: Apache/2.2.17 (Ubuntu)
        Vary: Accept-Encoding
        X-Cache: RefreshHit from cloudfront
        X-Amz-Cf-Id: K_q7Zy3_jdzlEJ85ukELVtdx1GmuXqApAbZZ7G0fPt0mxRMqPKX5pQ==,RzJmPku-rEIO9WlvuSoKa8hiAaR3dLk5KC4cQMWWrf_MDhmjWe8n6A==
        Via: 1.0 28c34f9fbf559a21ee16594849e4fc9c.cloudfront.net (CloudFront)
        Connection: close
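
    One cheap way to confirm whether the load balancer itself is injecting the cache-busting header is to compare the response headers returned by the ELB with those returned by a single instance; the hostnames below are placeholders:

        # Headers as seen through the ELB versus straight from one app server.
        curl -sI http://my-elb-1234.us-east-1.elb.amazonaws.com/css/9850999.css | grep -i '^cache-control'
        curl -sI http://10.0.1.12/css/9850999.css | grep -i '^cache-control'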

    Read the article

  • How do you back up your own files? [on hold]

    - by Antonis Christofides
    I'm a system administrator and I use rsnapshot to back up some servers and duplicity for some others. Both work fine, each with advantages and disadvantages. Despite that, I am at a loss on how to back up my own private files. I'd use duplicity to automatically back up my files to a remote server, but the problem is that once in a while I must do a full backup. My emails and important files are 9 GB, and I expect this to increase. Uploading through ADSL at 1 Mbit/s would take 20 hours. Too much. rsnapshot doesn't require periodic full backups (only the first time), but it must run on the remote server and have a means to connect to my computer; if the server is compromised (or simply if the NSA decides to use it), my own machine is also compromised. Not good. The only solution I've come up with is to use encfs, use unison to synchronize the files to a remote server, and use duplicity or rsnapshot on the remote server to back up those files. In that case, the question is whether I can sync the files on many computers: is it possible for encfs to be used with the same key on many computers? I also think that if I append one character to the unencrypted file, its encrypted encfs counterpart might change a lot, so that incrementals with duplicity would be less efficient, but that is not a big deal. Also, when I need to restore a file, finding the correct file to restore could be a pain because of filename encryption. I wonder whether there is any other possibility that I've overlooked. Maybe I'm asking too much for my personal use and I should settle for an external disk?
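
    On the multi-computer question: encfs keeps its key material in the .encfs6.xml file inside the encrypted directory, so any machine that has a copy of the ciphertext directory (including that file) and knows the passphrase can mount the same filesystem. A rough sketch, with the paths and hostname as placeholders:

        # Mount the synced ciphertext directory as cleartext on each machine.
        encfs ~/.crypt ~/private

        # Synchronize only the encrypted side with the remote server.
        unison ~/.crypt ssh://backuphost//home/user/.crypt

    The remote server then only ever sees ciphertext, and duplicity or rsnapshot can back that up as ordinary files.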

    Read the article

  • What is the correct authentication mechanism when there are users inside and outside the domain?

    - by Gary Barrett
    We have a Windows 7 enterprise desktop data-entry app for mobile (laptop) users, with a local SQL Server 2008 R2 Express db that syncs data with a central SQL Server 2008 R2 db. Authentication is required before syncing the data. The existing group of users is part of the organisation's domain, so that is the normal scenario: they connect to the SQL Server directly. But there are plans for a second group of app users who belong to various partner organisations, so they are outside our domain and have their own separate domains/accounts. The aim is to deploy the desktop app to them so that they can periodically sync data to our SQL Server. What I am uncertain of: Is it possible to authenticate users from another domain? Can permissions be managed via Active Directory etc.? Which authentication protocol should be used in this scenario: Windows, Forms, SQL, etc.? The IT people are requesting that users of the system be managed via Active Directory. Is it possible to manage the external domain users' access via Active Directory?

    Read the article

  • Synergy Windows 7 Screen Saver issue

    - by SynergyUser
    I have the Synergy server running on a Windows 7 laptop, and another laptop running Windows XP as the client. When Synergy is first started, the screen saver starts after the 5 minutes of inactivity with no problem, but after any mouse/keyboard input, waiting another 5+ minutes will not bring the screen saver back on. The mouse is on the server, NOT the client. I have tried unchecking "sync screen savers". I have tried right-clicking and running the Synergy server as administrator. I have tried versions 1.3.4, 1.3.6 and 1.4.2, both 32-bit and 64-bit. I have tried running in XP compatibility mode. I have tried disconnecting the client. It feels like I've tried everything. The setup is as follows: the server is below the client, the client is above the server. The server has a secondary monitor attached to it, and yes, I've tried running without the secondary monitor. The server is an Acer 5552G-5828. Any help would be awesome! I love Synergy, but without the screen savers it is annoying. Thanks!

    Read the article

  • SFTP, SCP, or secure WebDAV: which is the most suitable?

    - by Xavier Maillard
    Hi, I am currently hosting a WebDAV share set up to store files I need wherever I am. It is available via HTTPS. The thing is that I do not need all the HTTP machinery, i.e. my nginx HTTP server is only there for this WebDAV folder, and I am not sure I made the best choice. My requirements on the client side are:
    - secured transfers
    - mountable as a network drive at work, with near-realtime sync
    - usable from any OS I might use (including my Android phone)
    At first I chose WebDAV since it passes through my work proxy (which refuses anything that is not HTTP/S on port 80 or 443). Today I am not satisfied with the setup: even if nginx's memory footprint is pretty small, its WebDAV support is not really "clean" or complete. What would you recommend between SFTP, SCP and the current WebDAV solution? I think SFTP is the closest solution, but I still have to find out how to pass through my proxy ;) SCP seems quite limited from what I have read about it (file transfers only, if I read right). Cheers
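
    SFTP can often be pushed through a corporate proxy by tunnelling SSH over an HTTP CONNECT request to port 443. This is only a sketch, and it assumes two things not stated in the question: that the proxy allows CONNECT to port 443 and that sshd on the server also listens on 443. Hostnames are placeholders.

        # ~/.ssh/config entry: tunnel SSH through the work proxy via CONNECT.
        Host files.example.org
            Port 443
            ProxyCommand nc -X connect -x proxy.work.example:3128 %h %p

        # Then mount the remote directory like a network drive:
        sshfs files.example.org:/srv/files ~/files

    sshfs covers the Linux "network drive" requirement; Android and Windows would need their own SFTP client apps, which is worth weighing against WebDAV's near-universal support.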

    Read the article

  • Change Outlook's default calendar to iCloud (meeting requests end up in wrong calendar)

    - by flohei
    Following scenario: I've got a main computer (Windows 7, Office 2010) which is used to manage contacts, meetings, etc. in Outlook. Now I've added an iPad and an iPhone, syncing via iCloud. I moved all appointments and contacts from the old PST file to the iCloud data file, and all the data syncs nicely. The email account I'm using in Outlook is an IMAP account, which opens yet another data file, bringing us to a total of three data files in Outlook's sidebar. The problem: when one of our clients sends us a meeting request via email, it shows up in the IMAP inbox, and when we open it, it automatically gets added to Outlook's default calendar (the one in the original PST). Is there any way to have such requests added to the iCloud calendar instead? Basically we could get rid of the original PST completely, since we don't use it at all anymore, but the settings do not allow me to remove that PST file and set the iCloud one as the default. Thanks!

    Read the article

  • Removing a device in "removed" state from Linux software RAID array

    - by Sahasranaman MS
    My workstation has two disks (/dev/sd[ab]), both with similar partitioning. /dev/sdb failed, and cat /proc/mdstat stopped showing the sdb partitions. I ran mdadm --fail and mdadm --remove for all partitions of the failed disk on the arrays that use them, although all such commands failed with:
        mdadm: set device faulty failed for /dev/sdb2: No such device
        mdadm: hot remove failed for /dev/sdb2: No such device or address
    Then I hot-swapped the failed disk, partitioned the new disk and added the partitions to the respective arrays. All arrays rebuilt properly except one, because in /dev/md2 the failed disk doesn't seem to have been removed from the array properly. Because of this, the new partition keeps getting added as a spare, and the array's status remains degraded. Here's what mdadm --detail /dev/md2 shows:
        [root@ldmohanr ~]# mdadm --detail /dev/md2
        /dev/md2:
                Version : 1.1
          Creation Time : Tue Dec 27 22:55:14 2011
             Raid Level : raid1
             Array Size : 52427708 (50.00 GiB 53.69 GB)
          Used Dev Size : 52427708 (50.00 GiB 53.69 GB)
           Raid Devices : 2
          Total Devices : 2
            Persistence : Superblock is persistent
          Intent Bitmap : Internal
            Update Time : Fri Nov 23 14:59:56 2012
                  State : active, degraded
         Active Devices : 1
        Working Devices : 2
         Failed Devices : 0
          Spare Devices : 1
                   Name : ldmohanr.net:2 (local to host ldmohanr.net)
                   UUID : 4483f95d:e485207a:b43c9af2:c37c6df1
                 Events : 5912611

            Number   Major   Minor   RaidDevice State
               0       8        2        0      active sync   /dev/sda2
               1       0        0        1      removed
               2       8       18        -      spare   /dev/sdb2
    To remove a disk, mdadm needs a device filename, which was /dev/sdb2 originally, but that name no longer refers to device number 1. I need help removing device number 1, which is in the 'removed' state, and making /dev/sdb2 active.
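
    For cases where the device node behind a slot no longer exists, mdadm accepts the keywords "failed" and "detached" in place of a device name. A hedged sketch, worth checking against the man page of the mdadm version actually installed:

        # Clear array members whose device node has disappeared, then watch
        # the existing spare take over the freed slot.
        mdadm /dev/md2 --remove detached    # or: mdadm /dev/md2 --remove failed
        cat /proc/mdstat                    # recovery onto /dev/sdb2 should start
        mdadm --detail /dev/md2             # confirm the spare is now rebuilding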

    Read the article

  • Software RAID 1 Configuration

    - by Corve
    I created a software RAID 1 quite a while ago and it has always seemed to work for me. However, I am not completely sure that I configured everything correctly and do not have the experience to check, so I would be very grateful for some advice, or just verification that all seems right so far. I am using Linux Fedora 20 (32-bit, with plans to upgrade to 64-bit). The RAID 1 should consist of two 1 TB SATA hard drives. This is the output of mdadm --detail /dev/md0:
        /dev/md0:
                Version : 1.2
          Creation Time : Sun Jan 29 11:25:18 2012
             Raid Level : raid1
             Array Size : 976761424 (931.51 GiB 1000.20 GB)
          Used Dev Size : 976761424 (931.51 GiB 1000.20 GB)
           Raid Devices : 2
          Total Devices : 1
            Persistence : Superblock is persistent
            Update Time : Sat Jun 7 10:38:09 2014
                  State : clean, degraded
         Active Devices : 1
        Working Devices : 1
         Failed Devices : 0
          Spare Devices : 0
                   Name : argo:0 (local to host argo)
                   UUID : 1596d0a1:5806e590:c56d0b27:765e3220
                 Events : 996387

            Number   Major   Minor   RaidDevice State
               0       0        0        0      removed
               1       8        0        1      active sync   /dev/sda
    The RAID is mounted successfully:
        friedrich@argo:~ ? sudo mount -l | grep md0
        /dev/md0 on /mnt/raid type ext4 (rw,relatime,data=ordered)
    Basically my questions are: Why do I only have one active device? What does the 'removed' state at the bottom mean? I have also noticed some strange error messages on the console at system start and shutdown, repeating in the background when I switch with Ctrl + Alt + F2:
        ...
        ata2: irq_stat 0x00000040 connection status changed
        ata2: SError: { CommWake DevExch }
        ata2: COMRESET failed (errno=-32)
        ata2: exception Emask 0x10 SAct 0x0 SErr 0x4040000 action 0xe frozen
        ata2: irq_stat 0x00000040 connection status changed
        ata2: SError: { CommWake DevExch }
        ata2: exception Emask 0x10 SAct 0x0 SErr 0x4040000 action 0xe frozen
        ...
    Are these errors related to the RAID? Something seems wrong with the SATA devices. All together the system works (I can read and write to the mounted RAID), but I have always had these strange errors at startup and shutdown, and probably continuously in the background. Thanks for your help.
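
    The detail output says the second member has dropped out of the mirror, and the repeating ata2 resets suggest the link to that disk (or its cabling/power) is flapping. A cautious sketch for checking the disk and re-adding it once it looks healthy; /dev/sdb is an assumption, since only the surviving member /dev/sda is named in the output:

        # Check the suspect disk's health and which port is resetting.
        smartctl -a /dev/sdb
        dmesg | grep -i ata2

        # If the disk and cabling check out, put it back into the mirror.
        mdadm /dev/md0 --add /dev/sdb
        watch cat /proc/mdstat              # follow the resync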

    Read the article

  • How to safely move where iTunes saves music, iPhone apps and metadata to another internal drive?

    - by GingerLee
    In the past, when I have moved my iTunes data from one computer to another, I usually follow these steps:
    1) Install iTunes on the new computer, start it and close it (don't let it search for music).
    2) Copy the contents of two folders from the old PC to the new PC: %USERPROFILE%\Music\iTunes and %USERPROFILE%\AppData\Roaming\Apple Computer.
    3) Start iTunes and authorize the new computer (and deauthorize the old one).
    4) Before syncing, update all iPhone apps to their current versions, both on my iPhone and in iTunes.
    5) Then sync.
    The above steps have always worked for me, and iTunes on the new PC basically works exactly as it did on the old PC. My question: in the hope of bypassing the above steps in the future, I would like iTunes to use another internal drive that I use for file storage (e.g. D:/) as the path for the above two directories. Then, if I move to a new PC again, I could just set up iTunes to use the correct path. Is that possible with minimal implications? If so, how?

    Read the article

  • Google Drive terminates without error on startup

    - by Iszi
    I've used Google Drive for a while now, but it won't start up after installing it on my latest system rebuild. I'm still using the same OS, hardware, and basic software load (antivirus, firewall, etc.) that I have for years, during which I never previously had problems with Drive.
    OS: Windows 7 Ultimate x64
    Google Drive version: 1.12.5329.1887
    Now, whenever I try to run Google Drive, it just spawns two instances of the executable which die shortly after. No error messages are shown on the desktop, and nothing indicating any problem is written to the Event Log. After some research, I've yet to find anyone with the same problem who's found an answer. I did find out how to run Google Drive in diagnostic mode, using the --vv parameter at the command line. After that, I opened up the sync log and got this:
        2013-10-31 17:11:24,039 INFO pid=3664 1892:MainThread logging:1600 OS: Windows/6.1.7601-SP1
        2013-10-31 17:11:24,039 INFO pid=3664 1892:MainThread logging:1600 Google Drive (build 1.12.5329.1887)
        2013-10-31 17:11:24,039 DEBUG pid=3664 1892:MainThread logging:1608 DEBUGGING DUMP is ON.
        2013-10-31 17:11:24,051 ERROR pid=3664 1892:MainThread logging:1575 ERROR, UNEXPECTED EXCEPTION
        2013-10-31 17:11:24,051 ERROR pid=3664 1892:MainThread logging:1575 [Error 5] Access is denied
        Traceback (most recent call last):
          File "<string>", line 232, in Main
          File "<string>", line 118, in RegisterCustomFileTypes
          File "P:\p\agents\hpal4.eem\recipes\353983091\base\b\drb\googleclient\apps\webdrive_sync\windows\build\pyi.win32\main\outPYZ1.pyz/windows.registry", line 62, in GetValue
        WindowsError: [Error 5] Access is denied
        2013-10-31 17:11:24,052 INFO pid=3664 1892:MainThread logging:1600 Crash reporting disabled. Ignoring report.
        2013-10-31 17:11:24,052 INFO pid=3664 1892:MainThread logging:1600 Exiting with error code: 0
    I'm running an account with Administrator-level permissions, and have even tried using "Run As Administrator" on the EXE. I'm not sure why it's looking for a P:\ drive, as no such volume has ever been mounted on this system. What should I do to further troubleshoot, and resolve, this issue?

    Read the article

  • DRBD as a block device for a Xen VM (CentOS 5.3)

    - by SaberTooth
    Hi all, I have set up a DRBD resource between two server nodes, and everything works correctly when doing sync tests between the two. (I want to create an HA cluster using DRBD, Xen and Heartbeat.) However, when I try to create a Xen VM with CentOS as the guest operating system, I get through to the partitioning screen of the installer, but when I select a partitioning type the next screen gives me the following error:
        An error has occurred - no valid devices were found on which to create new file systems. Please check your hardware for the cause of this problem.
    This is my first time attempting a setup like this, and searching Google does not help much. My config files for DRBD and Xen:
    DRBD (just the section that is pertinent):
        on xennode0 {
            device    /dev/drbd0;
            disk      /dev/sda5;
            address   X.X.X.X:7788;
            flexible-meta-disk internal;
        }
        on xennode1 {
            device    /dev/drbd0;
            disk      /dev/sda5;
            address   X.X.X.X:7788;
            meta-disk internal;
        }
    Xen:
        kernel  = "/boot/xeninstall/vmlinuz"
        ramdisk = "/boot/xeninstall/initrd.img"
        extra   = "text"
        name    = "VM"
        maxmem  = 3000
        memory  = 3000
        vcpus   = 4
        on_poweroff = "destroy"
        on_reboot   = "restart"
        on_crash    = "restart"
        vfb  = [ ]
        disk = [ "phy:/dev/drbd0,sda1,w", "tap:aio:/srv/xen/xenswap.img,sda2,w" ]
        vif  = [ "mac=00:16:3e:11:67:ae,bridge=xenbr0" ]
        root = "/dev/sda1 ro"
    Thanks in advance!
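
    One thing worth ruling out first: a DRBD device is only writable on the node where the resource is Primary, and handing a Secondary device to a guest looks to the installer like a disk it cannot use. A quick hedged check; the resource name r0 and the path of the Xen config file are placeholders:

        # On the node that will run the VM:
        cat /proc/drbd             # is this node Primary or Secondary for drbd0?
        drbdadm primary r0         # promote the resource before starting the guest
        xm create /etc/xen/vm.cfg  # then retry the installer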

    Read the article

  • Ubuntu on VPS becomes unresponsive: BUG: soft lockup - CPU#0 stuck for 22s

    - by Bhante Nandiya
    We have a VPS running Ubuntu on Xen. The problem is this: about once a day, at a random time and for about 20-50 minutes, the server becomes completely unresponsive to the outside world. After this period it becomes responsive again as if nothing had happened; it doesn't lose uptime and it doesn't restart. It just starts responding again as if it had been in suspended animation. These outages occur under conditions of unexceptional memory and CPU use, for example 70% memory, 5% CPU. I have stopped all non-essential services, so the usage is very even. The outages don't particularly occur during times of increased memory/CPU use (during daily tasks); they sometimes occur at times of very low CPU use (<2%), but in the past they also occurred during swapping. These blackouts have occurred both under Ubuntu 12.04 LTS and Ubuntu 14.04 LTS with no change at all (I upgraded Ubuntu specifically to see if it helped this problem). It is possible to log into our web host's site and use their administration console to see error messages from during this time. Presumably these messages come from the Xen virtualization; the main message goes like this:
        BUG: soft lockup - CPU#0 stuck for 22s! [ksoftirqd/0:3]    (repeats many times)
        SysRq : Emergency Sync
    (Sometimes this is the only message in the console.) Others seen previously under different load situations include:
        BUG: soft lockup - CPU#0 stuck for 22s! [swapper/0:0]    (repeated many times)
    or:
        INFO: rcu_sched detected stall on CPU 0 (t=15000 jiffies)    (repeated many times with t getting bigger)
    From googling around I've tried various kernel parameters such as nohz=off and acpi=off, to no avail. All the tech support has said is that other Ubuntu installations are not suffering from the same problem. Anyone got any ideas or experience with this problem?
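
    Soft lockups inside a Xen guest often mean the hypervisor simply stopped scheduling the virtual CPU, which shows up inside the guest as "steal" time. A small hedged check that can be left running to see whether the host, rather than the VM, is stalling (the log path and intervals are arbitrary):

        # Record CPU usage once a minute; a "st" (steal) column spiking towards
        # 100% around an outage points at the host/hypervisor, not this VM.
        nohup vmstat 60 >> /var/log/steal-watch.log &

        # With the sysstat package installed, the same figure appears as %steal:
        sar -u 60 60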

    Read the article

  • Fresh install of CentOS 6.4 64-bit with DirectAdmin slowly consumes all memory and crashes

    - by Coen Ponsen
    Dear Server Fault community, this is my first question on Server Fault. I'm new to server (mis)configuration, so please forgive me for asking something stupid :) I'm running DirectAdmin on CentOS 6.4 64-bit, on a virtual machine with 4 GB of memory and over 10000Gh. I migrated my websites because my former VPS couldn't keep up anymore. Only half of the websites from that 1 GB machine have been migrated yet, so the migration is still in progress, and already the server crashes every day. Its performance up until the moment of a crash is perfect, and the DirectAdmin log files show nothing out of the ordinary. Yesterday only the MySQL server crashed, but before that it has also crashed the entire machine. The memory usage shown in DirectAdmin seems normal:
        directadmin (pid 3923 22158 22159 22160 22161 22162)   8.75 MB
        dovecot     (pid 3851)                                 47.8 MB
        exim        (pid 1350)                                 1.29 MB
        httpd       (pid 21525 21528 21529 21530 21531 21532 21546 21571 21742 21743 21744)   490.4 MB
        mysqld      (pid 1299)                                 287.8 MB
        named       (pid 3807)                                 16.3 MB
        proftpd     (pid 1481)                                 1.91 MB
        sshd        (pid 1173 21494)                           5.16 MB
    Restarting services immediately frees up memory, but slowly over time the memory usage increases (about 24 hours until a crash). The commands:
        # sync
        # echo 3 > /proc/sys/vm/drop_caches
    free all memory correctly. I could just create a cronjob for them, but that seems the wrong way around to me. I can't seem to pinpoint the cause. Any advice, references or tips are highly appreciated! Greetings, Coen
    Edit: free -m after drop_caches:
                     total       used       free     shared    buffers     cached
        Mem:          3830        735       3095          0          0         21
        -/+ buffers/cache:         712       3117
        Swap:          991           0        991
    I'll post another one this evening.
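
    Since restarting the services frees the memory, it helps to record which process is actually growing between restarts. A rough sketch of an hourly snapshot (the script name and log path are arbitrary choices) that makes the 24-hour trend visible:

        #!/bin/sh
        # Hypothetical /etc/cron.hourly/memwatch: log the biggest memory users.
        {
            date
            free -m
            ps aux --sort=-rss | head -n 12    # largest resident-set sizes first
            echo
        } >> /var/log/memwatch.log

    Comparing the first and last snapshots before a crash shows whether it is the httpd children, mysqld, or something else whose resident size keeps climbing.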

    Read the article

  • How to extract attachments from Exchange 2003 database

    - by John
    I have an ancient Exchange 2003 server that I'm getting ready to retire. All user accounts have been migrated to Google Apps for Business, so no new mail is being sent or received on the server. There are fewer than 50 accounts on the server, but some are very large, so the whole Exchange database is between 10 and 20 GB. The largest account has over 100,000 messages. I believe that in the migration to Gmail some attachments were not migrated. For peace of mind, I'd like to get the attachments out of the Exchange database. The only way I know of to do this is to set up a second computer with Outlook on it, set up one of the accounts, then sync the whole mail history and get the attachments out that way. Is there something simpler that I can do? Here are two possibilities:
    An Exchange attachment retrieval tool/script that pulls attachments for all accounts directly out of the Exchange database.
    An Exchange PST exporter tool/script that exports PST files for all accounts, so that I can just load the PST files into Outlook at will.

    Read the article

  • DNS manager in Windows Server 2012 Essentials - My one server appears twice

    - by tetranz
    I have a newly installed Windows Server 2012 Essentials. It works pretty well, although I'm working on some DNS improvements. Something that seems a little weird is that in DNS Manager my server appears twice: once as hostname and once as hostname.mydomain.local. They seem to be identical and locked in sync; if I change one, the other follows. Is this normal? Does anyone know why I have this? I'm talking about the top level of the navigation pane: the very top is DNS, then these two entries below it, with zones, forwarders etc. below them. I've found a couple of forum posts from people asking the same thing, but no useful answer. All the tutorials with screenshots that I can find show only one entry, which makes me uncomfortable. The server was installed out of the box, as standard, with the wizards. I know about the recommendation not to use .local, but the wizards didn't give me any other option.

    Read the article

  • RAID degraded on Ubuntu server

    - by reano
    We're having a very weird issue at work. Our Ubuntu server has 6 drives, set up with RAID 1 as follows:
    /dev/md0, consisting of /dev/sda1 and /dev/sdb1
    /dev/md1, consisting of /dev/sda2 and /dev/sdb2
    /dev/md2, consisting of /dev/sda3 and /dev/sdb3
    /dev/md3, consisting of /dev/sdc1 and /dev/sdd1
    /dev/md4, consisting of /dev/sde1 and /dev/sdf1
    As you can see, md0, md1 and md2 all use the same two drives (split into 3 partitions). I also have to note that this is done via Ubuntu software RAID, not hardware RAID. Today, the md0 RAID 1 array shows as degraded: it is missing the /dev/sdb1 drive. But since /dev/sdb1 is only a partition (and /dev/sdb2 and /dev/sdb3 are working fine), it's obviously not the drive that's gone AWOL; it seems the partition itself is missing. How is that even possible? And what could we do to fix it? My output of cat /proc/mdstat:
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md1 : active raid1 sda2[0] sdb2[1]
              24006528 blocks super 1.2 [2/2] [UU]
        md2 : active raid1 sda3[0] sdb3[1]
              1441268544 blocks super 1.2 [2/2] [UU]
        md0 : active raid1 sda1[0]
              1464710976 blocks super 1.2 [2/1] [U_]
        md3 : active raid1 sdd1[1] sdc1[0]
              2930133824 blocks super 1.2 [2/2] [UU]
        md4 : active raid1 sdf2[1] sde2[0]
              2929939264 blocks super 1.2 [2/2] [UU]
        unused devices: <none>
    FYI, I tried the following:
        mdadm /dev/md0 --add /dev/sdb1
    but got this error:
        mdadm: add new device failed for /dev/sdb1 as 2: Invalid argument
    The output of mdadm --detail /dev/md0 is:
        /dev/md0:
                Version : 1.2
          Creation Time : Sat Dec 29 17:09:45 2012
             Raid Level : raid1
             Array Size : 1464710976 (1396.86 GiB 1499.86 GB)
          Used Dev Size : 1464710976 (1396.86 GiB 1499.86 GB)
           Raid Devices : 2
          Total Devices : 1
            Persistence : Superblock is persistent
            Update Time : Thu Nov 7 15:55:07 2013
                  State : clean, degraded
         Active Devices : 1
        Working Devices : 1
         Failed Devices : 0
          Spare Devices : 0
                   Name : lia:0 (local to host lia)
                   UUID : eb302d19:ff70c7bf:401d63af:ed042d59
                 Events : 26216

            Number   Major   Minor   RaidDevice State
               0       8        1        0      active sync   /dev/sda1
               1       0        0        1      removed
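
    "Invalid argument" on --add frequently means the kernel no longer likes the partition itself (gone from the partition table, resized, or carrying stale RAID metadata) rather than the array. A cautious sketch of checks plus the often-suggested re-add path; note that zeroing a superblock is destructive, so this is only a sketch to adapt, not a recipe:

        # 1. Is the partition still there, with the size the kernel expects?
        fdisk -l /dev/sdb
        grep sdb /proc/partitions

        # 2. What does the old RAID metadata on it say?
        mdadm --examine /dev/sdb1

        # 3. If the partition exists but carries stale metadata, a common
        #    approach is to wipe that metadata and add it again
        #    (this destroys the old superblock on /dev/sdb1!).
        mdadm --zero-superblock /dev/sdb1
        mdadm /dev/md0 --add /dev/sdb1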

    Read the article

  • What's the difference between pulling from a branch into master and pushing that branch onto master?

    - by Justin808
    In TortoiseGit, on the repository, I right-click and select Sync. At the top of the dialog there are options for Local Branch and Remote Branch. If the local branch is named DeveloperA and the remote branch is master and I do a push, what happens? If the local branch is master and the remote branch is DeveloperA and I pull, what happens? If I am on the master branch, right-click, select Merge and change the From to be my DeveloperA branch, what happens? If I try to push from master to the remote master and the remote has been updated, git stops and tells me to pull. It seems that if I push from DeveloperA to master it doesn't stop, it just clobbers; is that correct? We're having an issue using git where the remote master branch gets clobbered at times, and we are trying to figure out why. For example, there is a developer working on his DeveloperA branch. He'll pull from master to get any updates, then push to master to push out his changes. But there are times when the push lists more files in the outgoing commit list than he has edited. The odd thing is he can't revert those files, as git says they are up to date and have not been modified, yet when he pushes, git pushes the files out. The problem is that if there are changes between his pull and his push, those changes get clobbered.
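
    It can help to reason about the plain Git commands the TortoiseGit dialogs roughly map to; the branch and remote names below come from the question, everything else is stock Git:

        # "Push local DeveloperA onto remote master" is a refspec push:
        git push origin DeveloperA:master

        # "Pull remote DeveloperA into the current local branch (master)" is
        # a fetch followed by a merge:
        git checkout master
        git pull origin DeveloperA

        # A plain push is rejected when it is not a fast-forward; history is
        # only rewritten if someone forces it:
        git push --force origin DeveloperA:master    # the kind of push that can clobber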

    Read the article

  • Postgresql Data Aggregation over WAN Securely

    - by Zach
    Hey guys, I need some advice on how to proceed with this situation. My current scenario is that I have several PostgreSQL boxes (50+) deployed throughout various locations and data centers, and a beefy PostgreSQL box set up at a home-base location. All of the deployed boxes have identical database layouts. I'm looking for a solution that would allow for a few things; I realize some of these options overlap and some might only have mutually exclusive solutions, but I'm interested to hear your thoughts :)
    Remotely query the deployed boxes and pull the results back to the home-base box for processing.
    Nightly (remote) "sync" or dump of the deployed boxes' databases to a master database on the home-base box.
    Remotely push a table entry to all of the deployed boxes from the home-base box.
    Ensure security of the data in transit, and of the remotely deployed boxes.
    Up to this point I've been floating on a homebrew multithreaded Python/Perl system that SSHes into these boxes remotely (they are ACLed off to the home-base server) and pulls (or pushes) the raw query results over the SSH connection. I haven't even touched #2 (remote syncing), as I know that would get nasty really quickly. I'm interested in any ideas for a more elegant solution that can scale up and stays within my FreeBSD/Linux environment.
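
    For the "pull results back to home base" case (#1), a thin layer over the existing SSH access keeps PostgreSQL off the WAN entirely; the hostnames, users, databases and query below are placeholders:

        # Run a read-only query on a deployed box and stream the rows home as CSV.
        ssh deployed-box-17 \
            "psql -U readonly -d appdb -c \"COPY (SELECT * FROM events WHERE day = CURRENT_DATE) TO STDOUT WITH CSV\"" \
            > /data/incoming/deployed-box-17-events.csv

        # Load the rows into the aggregate database on the home-base box.
        psql -U loader -d warehouse -c "\copy events_all FROM '/data/incoming/deployed-box-17-events.csv' CSV"

    For #2 and #3, built-in mechanisms such as streaming replication or a foreign data wrapper may fit better, but that depends on the PostgreSQL versions involved, which the post doesn't state.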

    Read the article

  • initrd problem and Kernel panics after openSUSE 11.2 upgrade.

    - by unixbhaskar
    After doing the upgrade from openSUSE 11.1 to openSUSE 11.2 with: zypper dup, I tried to boot the system and it failed with a "not syncing" VFS kernel panic, so clearly an initrd problem, if I'm not mistaken. Now a bit of explanation about the problem: while upgrading, it showed me an error about updating the initramfs (I forget the exact error; it might have been a warning), and it showed some GRUB warnings too. I had been doing all of this from a chroot environment, with all the required filesystems mounted in the proper places inside the chroot. After a bit of googling and painfully searching the susegeek.com and opensuse.org forums, I decided to recreate the initrd, but the fellow called "mkinitrd" is real crap, as I have been told by a few forum members. I tried to make an initrd image by myself and failed: it shows an error that the device is not found (if I boot into a SUSE live CD and mount the partition). Then I tried from the chrooted environment, and it says "there is no space left on the device". A bit bemused :( Yeah, as most of you will probably point out, it may be a lack of knowledge on my part. Kindly suggest the steps to do this correctly and get openSUSE 11.2 up and running. TIA
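
    For what it's worth, "device not found" and "no space left on device" are the kinds of errors mkinitrd tends to produce when /dev, /proc and /sys are not available inside the chroot. A hedged sketch of the usual sequence from the live CD; the partition name is a placeholder:

        # Mount the installed system plus the virtual filesystems, then rebuild
        # the initrd from inside the chroot.
        mount /dev/sda2 /mnt                 # root partition of the installed system
        mount --bind /dev  /mnt/dev
        mount --bind /proc /mnt/proc
        mount --bind /sys  /mnt/sys
        chroot /mnt /bin/bash
        mkinitrd                             # regenerates /boot/initrd for the installed kernels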

    Read the article

  • Stronger laptop_mode in Linux

    - by Vi
    Can I have a stronger laptop mode in Linux? I want to spin down the hard drive and prevent it from spinning up even if something wants to read data that is not in the cache. In general I want to have these modes:
    Normal.
    Current laptop mode.
    Stronger laptop mode: spin up only when something needs to read uncached data (and then cache it). No spin-ups to write anything unless there is real memory pressure (exception: an explicit "sync" command in the console). The kernel is allowed to keep processes in D-sleep for 10 seconds for this.
    Forced laptop mode: do not spin up, period. Keep offending processes in D-sleep until I turn this mode off. As if there were a bomb instead of a hard drive.
    I also want access times tracked (mount -o atime), but I don't want the hard drive spun up only to update them. Are there settings or kernel patches that get closer to this? Maybe I should write a special I/O scheduler for "forced laptop mode"? E.g.
        echo suspend > /sys/block/sda/queue/scheduler
    to lock the drive, and
        echo cfq > /sys/block/sda/queue/scheduler
    to unlock it again?
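
    For reference, the knobs that the stock laptop mode already exposes go part of the way; a sketch of the usual sysctl/hdparm combination (the values are illustrative, and none of this reaches the "never spin up" behaviour asked for above):

        # Enable laptop mode and let dirty data sit in RAM much longer before writeback.
        echo 5     > /proc/sys/vm/laptop_mode
        echo 60000 > /proc/sys/vm/dirty_expire_centisecs      # 10 minutes
        echo 60000 > /proc/sys/vm/dirty_writeback_centisecs
        echo 90    > /proc/sys/vm/dirty_ratio

        # Aggressive drive power management and a short spindown timeout.
        hdparm -B 1 -S 12 /dev/sda      # -S 12 = spin down after 60 seconds idle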

    Read the article
