Search Results

Search found 3049 results on 122 pages for 'sync'.

Page 35 of 122

  • Windows 7 - SyncToy alternative - Sync folder with Network drive...

    - by alex
    I have a Windows 7 laptop that I want to back up to a network folder. There is a drive (partition) on my laptop that I want to back up to a network drive - if I delete a file in the folder on my laptop, it should also be deleted from the backup... I used to use SyncToy, but I understand it does not work correctly with Windows 7 - at least not with a large number of files. Are there any alternatives?
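
    A built-in option worth testing is robocopy, which ships with Windows 7 and whose mirror mode propagates deletions to the destination. A minimal sketch, with placeholder paths:

        robocopy D:\Data \\server\backup\Data /MIR /Z /R:2 /W:5

    /MIR mirrors the source tree (copying new and changed files and deleting files that no longer exist at the source), /Z uses restartable mode for large files over the network, and /R and /W bound the retries and waits on locked files. Run from Task Scheduler, this covers the one-way sync described above.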


  • Outlook and IMAP - Outlook doesn't allow the Drafts and Trash folders to sync with the respective IMAP folders

    - by Matt
    I'm using Outlook 2007 and Outlook 2010 against an IMAP server (the problem exists across many, like Gmail, you name it). Outlook lets you set your Outlook "Sent" folder to map to the IMAP server's Sent folder (the other choice is to map your Outlook Sent to your Personal Folders Sent) - this is good. When you send a message from Outlook and then look in the sent folder of the IMAP server (e.g. from a different client or from a browser), the messages are there. This is the behavior I want. Outlook does NOT support the same behavior for Drafts and Trash. In both cases, items deleted (or Drafts saved) in Outlook go into Outlook's local folders and do NOT show in the IMAP server's Trash or Drafts folders. Same problem in reverse. Thunderbird, on the other hand, does support the proper mapping of Drafts, Sent, and Trash. I expected this to be IMAP-specific, but it appears to be client-specific. Why does Outlook implement it this way, and is there a workaround?


  • Please explain my fio results - is O_SYNC|O_DIRECT misbehaving on Linux?

    - by Zoltan
    I'm going mad trying to figure out what the problem could be with one of our storage boxes. With a simple fio script I'm testing random writes using bs=1M and direct=1. The SSD is a Samsung 840 Pro attached to an LSI HBA (3 Gbit/s ports). This is the result I'm getting under FreeBSD 9.1, regardless of sync being set to 0 or 1:

        WRITE: io=13169MB, aggrb=224743KB/s, minb=224743KB/s, maxb=224743KB/s, mint=60002msec, maxt=60002msec

    On Linux, this is the result with sync=0:

        WRITE: io=14828MB, aggrb=253060KB/s, minb=253060KB/s, maxb=253060KB/s, mint=60001msec, maxt=60001msec

    and with sync=1:

        WRITE: io=6360.0MB, aggrb=108542KB/s, minb=108542KB/s, maxb=108542KB/s, mint=60001msec, maxt=60001msec

    My understanding is that since I'm operating on the raw block device, O_SYNC should not make any difference - there is no filesystem, no barriers, nothing between the writes and the drive itself, especially with O_DIRECT|O_SYNC set. Any ideas? For reference, here's the fio script I'm testing with:

        [global]
        bs=1M
        ioengine=sync
        iodepth=4
        size=16g
        direct=1
        runtime=60
        filename=/dev/sdh
        sync=1

        [rand-write]
        rw=randwrite
        stonewall
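
    As a cross-check outside fio, the same comparison can be made with plain dd (destructive - this overwrites /dev/sdh; the device name and sizes are taken from the script above):

        dd if=/dev/zero of=/dev/sdh bs=1M count=4096 oflag=direct
        dd if=/dev/zero of=/dev/sdh bs=1M count=4096 oflag=direct,sync

    If the second command also runs at roughly half the throughput of the first, the O_SYNC penalty on the raw device is reproducible independently of fio.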


  • How do you get SharePoint back in sync when you change a user's sAMAccountName?

    - by Kirk Liemohn
    I have observed on SharePoint 2010 that if you change the sAMAccountName of a user after the user has logged into a SharePoint site collection, the tp_Login field in the UserInfo table does not get updated. It still has the old user Id. While the user can log into SharePoint under the new account, these new logins do not update the table. I have code that looks at the SPUser.LoginName and this value appears to be the tp_Login field value which is now old. The fact that this value is old causes my code to fail. I suspect this behavior is identical in SharePoint 2007. Is there any way to force SharePoint to recognize the new sAMAccountName? I suspect that profile synchronization might help, but I would like for my solution to work with WSS 3.0 and SharePoint 2010 Foundation. I considered manually updating the database table, but I would like to stick with supported approaches.
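
    For reference, a sketch of the supported command-line approach to test first: stsadm's migrateuser operation, which is available in both WSS 3.0 and SharePoint 2010 Foundation and is intended to rewrite the stored login (tp_Login) to the new account name. Domain and account names below are placeholders:

        stsadm -o migrateuser -oldlogin DOMAIN\oldname -newlogin DOMAIN\newname -ignoresidhistory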


  • Need data on disk drive management by OS: getting base I/O unit size, "sync" option, Direct Memory A

    - by Richard T
    Hello all, I want to ensure I have done all I can to configure a system's disks for serious database use. The three areas I know of (any others?) to be concerned about are:

    1. I/O size: the database engine's and the disk's native I/O sizes should either match, or the database's native I/O size should be a multiple of the disk's.
    2. Disks that are capable of Direct Memory Access (e.g. IDE) should be configured for it.
    3. When a disk says it has written data persistently, it must be so! No keeping it in cache and lying about it.

    I have been looking for information on how to verify these on CentOS and Ubuntu, but can't seem to find anything at all! I want to be able to check these things and change them if needed. Any and all input appreciated.
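
    A few Linux commands that probe these areas (device names are placeholders; the DMA flag mainly applies to legacy IDE/PATA devices, and some drives ignore cache commands):

        blockdev --getbsz /dev/sda     # block size used for I/O on the device
        blockdev --getss /dev/sda      # logical sector size
        hdparm -d /dev/sda             # report whether DMA is enabled (-d1 to enable)
        hdparm -W /dev/sda             # report the on-drive write-cache state
        hdparm -W0 /dev/sda            # turn the write cache off so "written" means on-platter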


  • Web deploy (msdeploy), syncing everything but sites and pools (but include siteDefaults)

    - by jishi
    Today I do the following to sync two web servers while skipping all site configuration:

        msdeploy -verb:sync -source:webServer -dest:webServer,computerName=web25:8080 -skip:objectName=section,absolutePath=system.applicationHost/sites -skip:objectName=section,absolutePath=system.applicationHost/applicationPools

    However, this effectively also skips the site defaults (system.applicationHost/sites/siteDefaults), which I would like to sync. There doesn't seem to be a way to "include" a section to override a skip directive. And there doesn't seem to be a way to sync only the siteDefaults section from applicationHost either, since the appHostConfig source seems to sync only a specified site, not siteDefaults. Maybe it is possible to "skip" using an XPath expression or similar, to skip only the site nodes but include siteDefaults, but I find the documentation a bit confusing and my XPath is rusty.
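
    One untested idea along those lines, assuming Web Deploy treats the skip directive's absolutePath as a regular expression (the documentation describes it that way): narrow the first skip so it matches the individual site entries but not siteDefaults. The pattern below is an assumption about how the site objects are pathed, not a verified recipe - the \b is meant to match "site" but not "siteDefaults":

        msdeploy -verb:sync -source:webServer -dest:webServer,computerName=web25:8080 -skip:objectName=section,absolutePath=system.applicationHost/sites/site\b -skip:objectName=section,absolutePath=system.applicationHost/applicationPools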


  • BizTalk: History of one project architecture

    - by Leonid Ganeline
    "In the beginning God made heaven and earth. Then he started to integrate." At the very start was the requirement: integrate two working systems. Small digging up: It was one system. It was good but IT guys want to change it to the new one, much better, chipper, more flexible, and more progressive in technologies, more suitable for the future, for the faster world and hungry competitors. One thing. One small, little thing. We cannot turn off the old system (call it A, because it was the first), turn on the new one (call it B, because it is second but not the last one). The A has a hundreds users all across a country, they must study B. A still has a lot nice custom features, home-made features that cannot disappear. These features have to be moved to the B and it is a long process, months and months of redevelopment. So, the decision was simple. Let’s move not jump, let’s both systems working side-by-side several months. In this time we could teach the users and move all custom A’s special functionality to B. That automatically means both systems should work side-by-side all these months and use the same data. Data in A and B must be in sync. That’s how the integration projects get birth. Moreover, the specific of the user tasks requires the both systems must be in sync in real-time. Nightly synchronization is not working, absolutely.   First draft The first draft seems simple. Both systems keep data in SQL databases. When data changes, the Create, Update, Delete operations performed on the data, and the sync process could be started. The obvious decision is to use triggers on tables. When we are talking about data, we are talking about several entities. For example, Orders and Items [in Orders]. We decided to use the BizTalk Server to synchronize systems. Why it was chosen is another story. Second draft   Let’s take an example how it works in more details. 1.       User creates a new entity in the A system. This fires an insert trigger on the entity table. Trigger has to pass the message “Entity created”. This message includes all attributes of the new entity, but I focused on the Id of this entity in the A system. Notation for this message is id.A. System A sends id.A to the BizTalk Server. 2.       BizTalk transforms id.A to the format of the system B. This is easiest part and I will not focus on this kind of transformations in the following text. The message on the picture is still id.A but it is in slightly different format, that’s why it is changing in color. BizTalk sends id.A to the system B. 3.       The system B creates the entity on its side. But it uses different id-s for entities, these id-s are id.B. System B saves id.A+id.B. System B sends the message id.A+id.B back to the BizTalk. 4.       BizTalk sends the message id.A+id.B to the system A. 5.       System A saves id.A+id.B. Why both id-s should be saved on both systems? It was one of the next requirements. Users of both systems have to know the systems are in sync or not in sync. Users working with the entity on the system A can see the id.B and use it to switch to the system B and work there with the copy of the same entity. The decision was to store the pairs of entity id-s on both sides. If there is only one id, the entities are not in sync yet (for the Create operation). Third draft Next problem was the reliability of the synchronization. The synchronizing process can be interrupted on each step, when message goes through the wires. 
It can be communication problem, timeout, temporary shutdown one of the systems, the second system cannot be synchronized by some internal reason. There were several potential problems that prevented from enclosing the whole synchronization process in one transaction. Decision was to restart the whole sync process if it was not finished (in case of the error). For this purpose was created an additional service. Let’s call it the Resync service. We still keep the id pairs in both systems, but only for the fast access not for the synchronization process. For the synchronizing these id-s now are kept in one main place, in the Resync service database. The Resync service keeps record as: ·       Id.A ·       Id.B ·       Entity.Type ·       Operation (Create, Update, Delete) ·       IsSyncStarted (true/false) ·       IsSyncFinished (true/false0 The example now looks like: 1.       System A creates id.A. id.A is saved on the A. Id.A is sent to the BizTalk. 2.       BizTalk sends id.A to the Resync and to the B. id.A is saved on the Resync. 3.       System B creates id.B. id.A+id.B are saved on the B. id.A+id.B are sent to the BizTalk. 4.       BizTalk sends id.A+id.B to the Resync and to the A. id.A+id.B are saved on the Resync. 5.       id.A+id.B are saved on the B. Resync changes the IsSyncStarted and IsSyncFinished flags accordingly. The Resync service implements three main methods: ·       Save (id.A, Entity.Type, Operation) ·       Save (id.A, id.B, Entity.Type, Operation) ·       Resync () Two Save() are used to save id-s to the service storage. See in the above example, in 2 and 4 steps. What about the Resync()? It is the method that finishes the interrupted synchronization processes. If Save() is started by the trigger event, the Resync() is working as an independent process. It periodically scans the Resync storage to find out “unfinished” records. Then it restarts the synchronization processes. It tries to synchronize them several times then gives up.     One more thing, both systems A and B must tolerate duplicates of one synchronizing process. Say on the step 3 the system B was not able to send id.A+id.B back. The Resync service must restart the synchronization process that will send the id.A to B second time. In this case system B must just send back again also created id.A+id.B pair without errors. That means “tolerate duplicates”. Fourth draft Next draft was created only because of the aesthetics. As it always happens, aesthetics gave significant performance gain to the whole system. First was the stupid question. Why do we need this additional service with special database? Can we just master the BizTalk to do something like this Resync() does? So the Resync orchestration is doing the same thing as the Resync service. It is started by the Id.A and finished by the id.A+id.B message. The first works as a Start message, the second works as a Finish message.     Here is a diagram the whole process without errors. It is pretty straightforward. The Resync orchestration is waiting for the Finish message specific period of time then resubmits the Id.A message. It resubmits the Id.A message specific number of times then gives up and gets suspended. It can be resubmitted then it starts the whole process again: waiting [, resubmitting [, get suspended]], finishing. Tuning up The Resync orchestration resubmits the id.A message with special “Resubmitted” flag. The subscription filter on the Resync orchestration includes predicate as (Resubmit_Flag != “Resubmitted”). 
That means only the first Sync orchestration starts the Resync orchestration. Other Sync orchestration instantiated by the resubmitting can finish this Resync orchestration but cannot start another instance of the Resync   Here is a diagram where system B was inaccessible for some period of time. The Resync orchestration resubmitted the id.A two times. Then system B got the response the id.A+id.B and this finished the Resync service execution. What is interesting about this, there were submitted several identical id.A messages and only one id.A+id.B message. Because of this, the system B and the Resync must tolerate the duplicate messages. We also told about this requirement for the system B. Now the same requirement is for the Resunc. Let’s assume the system B was very slow in the first response and the Resync service had time to resubmit two id.A messages. System B responded not, as it was in previous case, with one id.A+id.B but with two id.A+id.B messages. First of them finished the Resync execution for the id.A. What about the second id.A+id.B? Where it goes? So, we have to add one more internal requirement. The whole solution must tolerate many identical id.A+id.B messages. It is easy task with the BizTalk. I added the “SinkExtraMessages” subscriber (orchestration with one receive shape), that just get these messages and do nothing. Real design Real architecture is much more complex and interesting. In reality each system can submit several id.A almost simultaneously and completely unordered. There are not only the “Create entity” operation but the Update and Delete operations. And these operations relate each other. Say the Update operation after Delete means not the same as Update after Create. In reality there are entities related each other. Say the Order and Order Items. Change on one of it could start the series of the operations on another. Moreover, the system internals are the “black boxes” and we cannot predict the exact content and order of the operation series. It worth to say, I had to spend a time to manage the zombie message problems. The zombies are still here, but this is not a problem now. And this is another story. What is interesting in the last design? One orchestration works to help another to be more reliable. Why two orchestration design is more reliable, isn’t it something strange? The Synch orchestration takes all the message exchange between systems, here is the area where most of the errors could happen. The Resync orchestration sends and receives messages only within the BizTalk server. Is there another design? Sure. All Resync functionality could be implemented inside the Sync orchestration. Hey guys, some other ideas?


  • Outlook error - 'Can't open e-mail folders. You must connect to Exchange w/ current profile before you can sync folders w/ your Outlook data file'

    - by Emilio
    Note that the error message I put in the title of this question is abbreviated. The actual error message is below. I have an Exchange account and am using Outlook 2010 as the client. I run in Cached Exchange Mode and have an .OST file locally. Recently I uninstalled and reinstalled Office. I set up a new mail profile when prompted by Outlook 2010 upon first execution of the program. In my initial attempt, I pointed the data file at my existing OST file. In my second attempt, I had Outlook create a fresh empty file. In both cases I'm getting the error 'Cannot open your default e-mail folders. You must connect to Microsoft Exchange with the current profile before you can synchronize your folders with your Outlook data file (.ost).', also shown in this screenshot -- http://drop.io/4rc9v9o/asset/outlook-error-png. I don't know how to connect with the current profile - that's what I thought I did when I created a new .OST file? I've had this problem for several days, so my OST file is now out of date. Once I get things running I obviously want my active mailbox to update the OST, not the other way around.


  • My system administrator set up 2 databases that sync. Master-Master. However, these two databases a

    - by Alex
    DB1 and DB2. I made changes to DB1, and it does not seem to be on DB2. When I do "SHOW SLAVE STATUS\G" on DB2, there seems to be an error:

        mysql> show slave status\G
        *************************** 1. row ***************************
                     Slave_IO_State: Waiting for master to send event
                        Master_Host:
                        Master_User:
                        Master_Port:
                      Connect_Retry: 60
                    Master_Log_File: mysql-bin.0005496
                Read_Master_Log_Pos: 5445649315
                     Relay_Log_File: mysqld-relay-bin.0041705
                      Relay_Log_Pos: 1624302119
              Relay_Master_Log_File: mysql-bin.0004461
                   Slave_IO_Running: Yes
                  Slave_SQL_Running: No
                    Replicate_Do_DB:
                Replicate_Ignore_DB:
                 Replicate_Do_Table:
             Replicate_Ignore_Table:
            Replicate_Wild_Do_Table:
        Replicate_Wild_Ignore_Table:
                         Last_Errno: 1062
                         Last_Error: Error 'Duplicate entry '4779' for key 1' on query. Default database: 'falc'. Query: 'INSERT INTO `log` (`anon_id`, `created_at`, `query`, `episode_url`, `detail_id`, `ip`) VALUES ('fdzn1d45kMavF4qbyePv', '2009-11-19 04:19:13', 'amazon', '', '', '130.126.40.57')'
                       Skip_Counter: 0
                Exec_Master_Log_Pos: 162301982
                    Relay_Log_Space: 136505187184
                    Until_Condition: None
                     Until_Log_File:
                      Until_Log_Pos: 0
                 Master_SSL_Allowed: No
                 Master_SSL_CA_File:
                 Master_SSL_CA_Path:
                    Master_SSL_Cert:
                  Master_SSL_Cipher:
                     Master_SSL_Key:
              Seconds_Behind_Master: NULL
        1 row in set (0.00 sec)

    Then, I did show tables, and it seems like DB2 is lacking a table that I created on DB1... that means that for some reason, DB2 stopped syncing with DB1. How can I simply allow them to be in full synchronization again? All I want is DB2 to be exactly the same as DB1!
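
    For reference, the usual way past a single duplicate-key error is to skip the offending event on the slave and let replication resume. The caveat: the skipped change is lost on DB2, so if the data has genuinely diverged, re-seeding DB2 from a fresh dump of DB1 is the safer route. Run on DB2:

        STOP SLAVE;
        SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;
        START SLAVE;
        SHOW SLAVE STATUS\G   -- Slave_SQL_Running should now be Yes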


  • Why won't Mail sync To Do/Tasks with Exchange?

    - by cebjyre
    I'm using Apple Mail (Snow Leopard, everything is fully up-to-date), and am happily using an Exchange 2007 server for email needs, but I can't get it to synchronise the To Do notes from Mail with the Tasks from Exchange. I've tried creating a task in each and neither of them went to the other side. Bizarrely I have a single task from before I actually upgraded to Snow Leopard that did get into Mail from Exchange. Right-clicking on the Inbox and hitting 'Get Account Info' in Mail reports the correct number of entries in the 'To Do' folder for 'Messages on Server'.


  • Should use EXT4 or XFS to be able to 'sync'/backup to S3?

    - by Rafa
    It's my first message here, so bear with me... (I have already checked quite a few of the "Related Questions" suggested by the editor.) Here's the setup: a brand new dedicated server (8 GB RAM, some 140+ GB of disk, RAID 1 via HW controller, 15000 RPM). It's a production web server (with MySQL in it too, not just serving web requests), not a personal desktop computer or similar. Ubuntu Server 64-bit 10.04 LTS. We have an Amazon EC2+EBS setup with the EBS volume formatted as XFS for easily taking snapshots to S3, via AWS' console. We are now migrating to the dedicated server and I want to be able to back up our data to Amazon's S3, the main reason being the possibility of using the latest snapshot from an EC2 instance in case of hardware failure on the dedicated server. There are two approaches I am thinking of:

    1. Do a "simple" file-based backup with rsync, dumping the database and other files, and uploading to Amazon via S3 API commands, or to an EC2 instance, or something.
    2. Do a file-system "freeze" (using XFS) with the usual EBS/EC2 snapshot tool to take part of the file system, take a snapshot, and upload it to Amazon.

    Here's my question (or series of questions): Can I safely use XFS for the whole system as the main and only format on the dedicated server? If not, is it safe to use EXT4, or should I use something else? Would it then be possible to make snapshots of the system to upload to Amazon? Is it possible/feasible/practical to do what I want to do, anyway? Any recommendations? When searching around for S3/EBS/XFS, anything relevant to my problem is usually focused on taking snapshots of an XFS system that is already an EBS volume. My intention is to do it on a "real"/bare-metal dedicated server.

    Update: I just saw this on Wikipedia: "XFS does not provide direct support for snapshots, as it expects the snapshot process to be implemented by the volume manager." I had always assumed that I could choose two ways of doing snapshots: via LVM or via XFS (without LVM). After reading this, I realize these two options are more like it:

    With XFS: 1) do xfs_freeze; 2) copy the frozen files via, e.g., rsync; 3) unfreeze XFS.
    With LVM and XFS: 1) do xfs_freeze; 2) make a binary copy of the frozen fs via lvcreate and related commands; 3) unfreeze XFS; 4) somehow back up the LVM snapshot.

    Thanks a lot in advance. Let me know if I need to clarify something.
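
    A minimal sketch of the first variant, assuming the data lives on its own XFS mount at /data and an EC2 host is reachable over SSH (both placeholders). Note that writes block for the whole freeze window, so on a busy MySQL box the copy should be kept short, or taken from an LVM snapshot created inside a brief freeze instead:

        xfs_freeze -f /data
        rsync -a /data/ backup@ec2-host.example.com:/backups/data/
        xfs_freeze -u /data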


  • Time sync fails on Hyper-V VM, but succeeds when I log in as a domain user

    - by Richard Beier
    We have a Windows Server 2003 SP2 VM running on Hyper-V (Server 2008 R2 host). The VM has Hyper-V time synchronization enabled. I noticed that the time on the VM was fast by around 25 minutes. I saw the following in the event log:

        The time provider NtpClient is configured to acquire time from one or more time sources, however none of the sources are currently accessible. No attempt to contact a source will be made for 15 minutes. NtpClient has no source of accurate time.

        The time provider NtpClient cannot reach or is currently receiving invalid time data from ourdc.ourdomain.local (ntp.d|192.168.2.18:123-192.168.2.2:123).

        Time Provider NtpClient: No valid response has been received from domain controller ourdc.ourdomain.local after 8 attempts to contact it. This domain controller will be discarded as a time source and NtpClient will attempt to discover a new domain controller from which to synchronize.

    I had been logged in as a local user. (We have an old app that runs on this VM - it requires a user to be logged in at all times, and we use a non-domain user account for this.) When I logged in as a domain user, the clock almost immediately corrected itself. Running "w32tm /monitor" and "net time" as the domain user showed no errors, and indicated that our domain controller was the time source. Does anyone know what might cause this, and why logging in under a domain account fixes the problem? I'm wondering if the time will start to drift again. Thanks for your help, Richard
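
    For reference, a few diagnostics that can be run in the VM's console while logged on as the local user (all part of Server 2003's standard tooling); they show which source NtpClient is actually using and force it to rediscover the DC, though they may not by themselves explain the logon effect:

        w32tm /monitor
        net time /querysntp
        w32tm /resync /rediscover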


  • Using Puppet, is it possible to define which plugins get synced to a particular client?

    - by luckytaxi
    I have a plugin that gets pushed to all clients. However, I have one that's specific to a particular module, so I don't want it synced to all my clients. My generic plugin is stored in /etc/puppet/modules/custom/lib/facter, but I have a plugin stored inside a module that seems to be pushed to all clients regardless of whether the host inherits the class or not. Location of the module plugin: /etc/puppet/modules/apache/lib/facter/SAMPLE_PLUGIN.rb
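
    As far as I know, pluginsync distributes everything under any module's lib/ directory to every client, regardless of which classes the host gets; what can be controlled is whether the fact resolves on a given machine. A hedged sketch using Facter's confine (the fact name and guard condition are placeholders):

        # SAMPLE_PLUGIN.rb - resolves only where the confine matches
        Facter.add(:apache_sample_fact) do
          confine :kernel => "Linux"   # could instead confine on a fact that only apache nodes set
          setcode do
            "value"
          end
        end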


  • Provider claiming "all web servers in the cloud are automatically kept in sync" - should I be skeptical?

    - by RobMasters
    I'm no expert in cloud computing - I've spent a fair bit of time researching it and various providers but have yet to get any hands-on experience with it. From what I've read about AWS and auto-scaling EC2 instances, though, it seems as though each instance should be completely decoupled from all other instances. i.e. if content is uploaded to the web server's local filesystem from a custom CMS backend, then that content won't be available if subsequently requested from a different web server in the auto-scaling group. Is that right? I met with a representative of our existing hosting provider recently and he was claiming that it isn't a problem that our legacy CMS system is highly dependent on having a local filesystem. He said that all web servers, regardless of how many, would be kept as exact duplicates, so I shouldn't notice any difference compared to our existing setup of a single dedicated server. This smells a little too much like bull fecal matter to me... should I be skeptical about this? I'm a little worried because my (non-technical) boss, who ultimately makes the decisions, is all for signing up to this cloud solution because it won't require any extra work. I'm sure they must at least be able to provide this, otherwise they wouldn't be attempting to sell it to us. But at what cost? It sounds as though each web server will always need to be checking the other web server(s) for new static content, which to me sounds like unwanted overhead that'll slow things down. I'd really appreciate it if somebody could clear this up for me. I'm all for switching to AWS and using S3+CloudFront for all static content, but that isn't looking very likely to happen at the moment.


  • What does 'Can't sync file './database_name/…frm' to disk (Errcode: 28)' mean?

    - by cool_cs
    I am getting this error message whenever I try to create a new index or a new table in my MySQL server. Does anyone know what the reason is? This is the output after I run df -a:

        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda2              13G  7.3G  4.5G  63% /
        /dev/sda1             251M   27M  212M  12% /boot
        tmpfs                 3.9G     0  3.9G   0% /dev/shm
        10.156.248.29:/vol/pharos_pnxd_data_01/env_empty_sbid_27133_qdcprod
                               30G   30G   32K 100% /app
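
    For reference, MySQL ships a small utility, perror, that decodes these OS error numbers; errcode 28 is "no space left on device", which matches the 100%-full /app mount in the df output above:

        $ perror 28
        OS error code  28:  No space left on device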


  • How do I sync the Solution Explorer with the current File in Visual Studio?

    - by thepaulpage
    When I have an open code file in Visual Studio that I am editing, I would like to keep that same file highlighted in the Solution Explorer so that I know where I am. What I'd really like is for the Solution Explorer to move to whichever file I change focus to. Further explanation and example: I have a project with two files, Class1 and Class2. I open both files. The focus is on Class1. I click the Class2 tab, thereby changing the file I am editing to Class2. Desired behavior: the Solution Explorer highlights Class2.



  • Dropbox and IIS

    - by DigitalAce7
    I'm running into a problem using Dropbox with IIS. I am using the Dropbox Folder Sync plugin to sync a folder outside the Dropbox folder. The files are synced into my inetpub\root\downloads\ directory. The problem is not that it doesn't sync; the problem is that the synced files have no permissions attached to them, so none of them can be opened through IIS. Initially the downloads were to be only PDFs, but they don't open, so I tried an .ASPX file and it fails to load as well. Is there any way around this, or another program I can use to sync files but allow them to be opened on an IIS server? Thanks for any help!
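
    One workaround worth testing is re-granting read access to the IIS worker identity over the synced tree after each sync (or on a schedule). A sketch using the built-in icacls tool; the path is taken from the question and IIS_IUSRS is an assumption that applies to IIS 7 and later:

        icacls "C:\inetpub\root\downloads" /grant "IIS_IUSRS:(OI)(CI)R" /T

    (OI)(CI) makes the grant inherit to new files and subfolders, and /T applies it to everything already there.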


  • Why "No android.content.SyncAdapter meta-data" when registering a sync adapter?

    - by mobibob
    I am following the SampleSyncAdapter and upon startup, it appears that my SyncAdapter is not configured correctly. It reports an error trying to load its meta-data. How can I isolate the problem? You can see the other accounts in the system that register correctly. Logcat:

        12-21 17:10:50.667 W/PackageManager( 121): Unable to load service info ResolveInfo{4605dcd0 com.myapp.syncadapter.MySyncAdapter p=0 o=0 m=0x108000}
        12-21 17:10:50.667 W/PackageManager( 121): org.xmlpull.v1.XmlPullParserException: No android.content.SyncAdapter meta-data
        12-21 17:10:50.667 W/PackageManager( 121):     at android.content.pm.RegisteredServicesCache.parseServiceInfo(RegisteredServicesCache.java:391)
        12-21 17:10:50.667 W/PackageManager( 121):     at android.content.pm.RegisteredServicesCache.generateServicesMap(RegisteredServicesCache.java:260)
        12-21 17:10:50.667 W/PackageManager( 121):     at android.content.pm.RegisteredServicesCache$1.onReceive(RegisteredServicesCache.java:110)
        12-21 17:10:50.667 W/PackageManager( 121):     at android.app.ActivityThread$PackageInfo$ReceiverDispatcher$Args.run(ActivityThread.java:892)
        12-21 17:10:50.667 W/PackageManager( 121):     at android.os.Handler.handleCallback(Handler.java:587)
        12-21 17:10:50.667 W/PackageManager( 121):     at android.os.Handler.dispatchMessage(Handler.java:92)
        12-21 17:10:50.667 W/PackageManager( 121):     at android.os.Looper.loop(Looper.java:123)
        12-21 17:10:50.667 W/PackageManager( 121):     at com.android.server.ServerThread.run(SystemServer.java:570)
        12-21 17:10:50.747 D/Sources ( 294): Creating external source for type=com.skype.contacts.sync, packageName=com.skype.raider
        12-21 17:10:50.747 D/Sources ( 294): Creating external source for type=com.twitter.android.auth.login, packageName=com.twitter.android
        12-21 17:10:50.747 D/Sources ( 294): Creating external source for type=com.example.android.samplesync, packageName=com.example.android.samplesync
        12-21 17:10:50.747 W/PackageManager( 121): Unable to load service info ResolveInfo{460504b0 com.myapp.syncadapter.MySyncAdapter p=0 o=0 m=0x108000}
        12-21 17:10:50.747 W/PackageManager( 121): org.xmlpull.v1.XmlPullParserException: No android.content.SyncAdapter meta-data
        12-21 17:10:50.747 W/PackageManager( 121):     at android.content.pm.RegisteredServicesCache.parseServiceInfo(RegisteredServicesCache.java:391)
        12-21 17:10:50.747 W/PackageManager( 121):     at android.content.pm.RegisteredServicesCache.generateServicesMap(RegisteredServicesCache.java:260)
        12-21 17:10:50.747 W/PackageManager( 121):     at android.content.pm.RegisteredServicesCache$1.onReceive(RegisteredServicesCache.java:110)
        12-21 17:10:50.747 W/PackageManager( 121):     at android.app.ActivityThread$PackageInfo$ReceiverDispatcher$Args.run(ActivityThread.java:892)
        12-21 17:10:50.747 W/PackageManager( 121):     at android.os.Handler.handleCallback(Handler.java:587)
        12-21 17:10:50.747 W/PackageManager( 121):     at android.os.Handler.dispatchMessage(Handler.java:92)
        12-21 17:10:50.747 W/PackageManager( 121):     at android.os.Looper.loop(Looper.java:123)
        12-21 17:10:50.747 W/PackageManager( 121):     at com.android.server.ServerThread.run(SystemServer.java:570)
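
    The exception says the sync adapter's <service> entry is missing its android.content.SyncAdapter meta-data child, which must point at a sync-adapter XML resource. A sketch of the expected shape, following the SampleSyncAdapter layout (the authority, account type, and resource name are placeholders):

        <service android:name="com.myapp.syncadapter.MySyncAdapter"
                 android:exported="true">
            <intent-filter>
                <action android:name="android.content.SyncAdapter" />
            </intent-filter>
            <meta-data android:name="android.content.SyncAdapter"
                       android:resource="@xml/syncadapter" />
        </service>

    And res/xml/syncadapter.xml, roughly:

        <sync-adapter xmlns:android="http://schemas.android.com/apk/res/android"
            android:contentAuthority="com.myapp.provider"
            android:accountType="com.myapp.account"
            android:supportsUploading="true" />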


  • Why does Android Account & Sync reboot when trying to find my settings activity?

    - by mobibob
    I have an activity that I can declare in the Launcher category, and it launches just fine from the home screen. However, when I try to hook the same activity up as my SyncAdapter's settings activity and open it from the Accounts & Sync page - MySyncAdapter - (touch the account listing), it aborts with a fatal system error (reboots the phone). Meanwhile, my SyncAdapter works in other respects. Here is the log at the point of impact:

        01-13 12:31:00.976 5024 5038 I ActivityManager: Starting activity: Intent { act=android.provider.Settings.ACTION_SYNC_SETTINGS flg=0x10000000 cmp=com.myapp.android.syncadapter.ui/SyncAdapterSettingsActivity.class (has extras) }
        01-13 12:31:00.985 5024 5038 E AndroidRuntime: *** FATAL EXCEPTION IN SYSTEM PROCESS: android.server.ServerThread
        01-13 12:31:00.985 5024 5038 E AndroidRuntime: android.content.ActivityNotFoundException: Unable to find explicit activity class {com.myapp.android.syncadapter.ui/SyncAdapterSettingsActivity.class}; have you declared this activity in your AndroidManifest.xml?
        01-13 12:31:00.985 5024 5038 E AndroidRuntime:     at android.app.Instrumentation.checkStartActivityResult(Instrumentation.java:1404)
        01-13 12:31:00.985 5024 5038 E AndroidRuntime:     at android.app.Instrumentation.execStartActivity(Instrumentation.java:1378)
        01-13 12:31:00.985 5024 5038 E AndroidRuntime:     at android.app.ContextImpl.startActivity(ContextImpl.java:622)
        01-13 12:31:00.985 5024 5038 E AndroidRuntime:     at android.preference.Preference.performClick(Preference.java:828)
        01-13 12:31:00.985 5024 5038 E AndroidRuntime:     at android.preference.PreferenceScreen.onItemClick(PreferenceScreen.java:190)
        01-13 12:31:00.985 5024 5038 E AndroidRuntime:     at android.widget.AdapterView.performItemClick(AdapterView.java:284)
        01-13 12:31:00.985 5024 5038 E AndroidRuntime:     at android.widget.ListView.performItemClick(ListView.java:3382)
        01-13 12:31:00.985 5024 5038 E AndroidRuntime:     at android.widget.AbsListView$PerformClick.run(AbsListView.java:1696)
        01-13 12:31:00.985 5024 5038 E AndroidRuntime:     at android.os.Handler.handleCallback(Handler.java:587)
        01-13 12:31:00.985 5024 5038 E AndroidRuntime:     at android.os.Handler.dispatchMessage(Handler.java:92)
        01-13 12:31:00.985 5024 5038 E AndroidRuntime:     at android.os.Looper.loop(Looper.java:123)
        01-13 12:31:00.985 5024 5038 E AndroidRuntime:     at com.android.server.ServerThread.run(SystemServer.java:517)
        01-13 12:31:00.985 5024 5038 I Process : Sending signal. PID: 5024 SIG: 9
        01-13 12:31:01.005 5019 5019 I Zygote : Exit zygote because system server (5024) has terminated
        01-13 12:31:01.015 1211 1211 E installd: eof

    Here is a snippet from my manifest file:

        <activity android:name="com.myapp.android.syncadapter.ui.SyncAdapterSettingsActivity"
                  android:label="@string/title_settings"
                  android:windowSoftInputMode="stateAlwaysHidden|adjustPan">
            <intent-filter>
                <action android:name="android.intent.action.VIEW" />
                <action android:name="android.intent.action.MAIN" />
                <action android:name="android.provider.Settings.ACTION_SYNC_SETTINGS"/>
                <category android:name="android.intent.category.DEFAULT" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
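
    Worth noting: the failing intent names the component as com.myapp.android.syncadapter.ui/SyncAdapterSettingsActivity.class - the trailing ".class" and the package/class split suggest the reference in the authenticator's account-preferences XML is malformed rather than the manifest. A hedged sketch of how the SampleSyncAdapter wires this up (the targetPackage value and titles are assumptions; targetClass must be the fully qualified activity name with no ".class" suffix):

        <!-- res/xml/account_preferences.xml, referenced by the account-authenticator's
             android:accountPreferences attribute -->
        <PreferenceScreen xmlns:android="http://schemas.android.com/apk/res/android">
            <PreferenceScreen android:title="Sync settings">
                <intent android:action="android.provider.Settings.ACTION_SYNC_SETTINGS"
                        android:targetPackage="com.myapp.android"
                        android:targetClass="com.myapp.android.syncadapter.ui.SyncAdapterSettingsActivity" />
            </PreferenceScreen>
        </PreferenceScreen>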


  • [Ubuntu 10.04] mdadm - Can't get RAID5 Array To Start

    - by Matthew Hodgkins
    Hello, after a power failure my RAID array refuses to start. When I boot I have to run

        sudo mdadm --assemble --force /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1

    to get mdadm to notice the array. Here are the details (after I force assemble).

        sudo mdadm --misc --detail /dev/md0:
        /dev/md0:
                Version : 00.90
          Creation Time : Sun Apr 25 01:39:25 2010
             Raid Level : raid5
          Used Dev Size : 1465135872 (1397.26 GiB 1500.30 GB)
           Raid Devices : 6
          Total Devices : 6
        Preferred Minor : 0
            Persistence : Superblock is persistent
            Update Time : Thu Jun 17 23:02:38 2010
                  State : active, Not Started
         Active Devices : 6
        Working Devices : 6
         Failed Devices : 0
          Spare Devices : 0
                 Layout : left-symmetric
             Chunk Size : 128K
                   UUID : 44a8f730:b9bea6ea:3a28392c:12b22235 (local to host hodge-fs)
                 Events : 0.1249691

            Number   Major   Minor   RaidDevice State
               0       8       65        0      active sync   /dev/sde1
               1       8       81        1      active sync   /dev/sdf1
               2       8       97        2      active sync   /dev/sdg1
               3       8       49        3      active sync   /dev/sdd1
               4       8       33        4      active sync   /dev/sdc1
               5       8       17        5      active sync   /dev/sdb1

    mdadm.conf:

        # by default, scan all partitions (/proc/partitions) for MD superblocks.
        # alternatively, specify devices to scan, using wildcards if desired.
        DEVICE partitions /dev/sdb1 /dev/sdb1

        # auto-create devices with Debian standard permissions
        CREATE owner=root group=disk mode=0660 auto=yes

        # automatically tag new arrays as belonging to the local system
        HOMEHOST <system>

        # definitions of existing MD arrays
        ARRAY /dev/md0 level=raid5 num-devices=6 UUID=44a8f730:b9bea6ea:3a28392c:12b22235

    Any help would be appreciated.
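
    For what it's worth, "State : active, Not Started" describes an array that has been assembled but not run, which can usually be started by hand; and keeping the initramfs in step with mdadm.conf avoids the manual assemble at boot. A sketch, with Ubuntu-style paths assumed:

        sudo mdadm --run /dev/md0    # start the assembled-but-not-started array
        cat /proc/mdstat             # confirm md0 comes up with all six members
        sudo update-initramfs -u     # rebuild the initramfs so the ARRAY definition is seen at boot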


  • Question About mk-table-checksum Results

    - by stevenmusumeche
    Hello, I have one master and two slaves, all running MySQL 5.1.42. I am attempting to use mk-table-checksum to verify that their data is in sync, but I am getting unexpected results on one of the slaves. First, I generate the checksums on the master like this:

        mk-table-checksum h=localhost --databases MYDB --tables {$table_list} --replicate=MYDB.mk_checksum --chunk-size=10M

    My understanding is that this runs the checksum queries on the master, which then propagate via normal replication to the slaves. So no locking is needed, because the slaves will be at the same logical point in time when they run the checksum queries on themselves. Is this correct? Next, to verify that the checksums match, I run this on the master:

        mk-table-checksum --databases MYDB --replicate=IRC.mk_checksum --replicate-check 1 h=localhost,u=maatkit,p=xxxx

    If there are any differences, I repair the slaves like this:

        mk-table-sync --execute --verbose --replicate IRC.mk_checksum h=localhost,u=maatkit,p=xxxx

    After doing all of this, I repaired both slaves with mk-table-sync. However, every time I run this sequence (after everything has already been repaired), one slave is perfectly in sync but the other always has a few tables out of sync. I am 99.999% sure that the data on the slaves matches, since I repaired everything and the tables were not even updated on the master between runs of the checksum script. What would cause a few tables to always show as out of sync on only one of the slaves? I am stuck. Here is the output:

        Differences on h=x.x.x.x,p=...,u=maatkit
        DB  TBL                CHUNK CNT_DIFF CRC_DIFF BOUNDARIES
        IRC product               10        0        1 product_id = 147377 AND product_id < 162085
        IRC post_order_survey      0        0        1 1=1
        IRC mk_heartbeat           0        0        1 1=1
        IRC mailing_list           0        0        1 1=1
        IRC honey_pot_log          0        0        1 1=1
        IRC product               12        0        1 product_id = 176793 AND product_id < 191501
        IRC product               18        0        1 product_id = 265041
        IRC orders                26        0        1 order_id = 694472
        IRC orders_product         6        0        1 op_id = 935375


  • Rpmbuild - setting name of created .rpm

    - by sync
    Hi, I've been trying to find out the easiest way to set a fixed filename during RPM creation. Can it be set somewhere in the .spec file, or as an rpmbuild parameter? The default name depends on the version and release number, but the name of my RPM has to be always the same. Thanks, sync
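
    For reference, the output filename comes from rpm's _rpmfilename macro, whose default is %%{ARCH}/%%{NAME}-%%{VERSION}-%%{RELEASE}.%%{ARCH}.rpm, so one approach (hedged - behavior can vary between rpm versions) is to override it at build time. The package and spec names below are placeholders:

        rpmbuild -bb --define '_rpmfilename mypackage.rpm' mypackage.spec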

