Search Results

Search found 3179 results on 128 pages for 'merge replication'.

Page 94/128

  • How to convert Silverlight XAML to Resource dictionary in Silverlight 2

    - by Vecdid
    I am looking for the quickest and easiest way to combine two Silverlight projects. One has its controls all in Silverlight XAML; the other is template driven and uses a template based on a Silverlight resource dictionary. I am looking for some advice and resources on the best and quickest way to do this. One project is based on the Silverlight Image Slider (http://www.silverlightshow.net/items/Image-slider-control-in-Silverlight-1.1.aspx); the other is based on the Open Video Player (http://www.openvideoplayer.com/). I need to move the slider into the player, but the player uses a template, and I just don't get how to merge the two because I do not understand resource dictionaries as they are applied to the player. We have completely gutted each project and made it do what we want, but heck if I can figure out how to combine them.

    Read the article

  • Windows 2003 Domain Controller Very Upset about NIC Teaming

    - by Kyle Brandt
    I set up BACS (Broadcom teaming) to team two NICs on a Windows 2003 Active Directory domain controller. Networking still works okay (I can ping the gateway, etc.), but both DNS and Active Directory fail to start with various 40xx errors. The team I created is Smart Load Balancing with Failover, with one backup and only one NIC in smart load balancing (so really it is just failover). I gave the team the same IP address that the single active NIC had before. Has anyone seen this before, or have any ideas what the problem might be?

      Event Type: Error
      Event Source: DNS
      Event Category: None
      Event ID: 4015
      Date: 3/7/2010
      Time: 10:33:03 AM
      User: N/A
      Computer: ADC
      Description: The DNS server has encountered a critical error from the Active Directory. Check that the Active Directory is functioning properly. The extended error debug information (which may be empty) is "". The event data contains the error.

      Event Type: Error
      Event Source: DNS
      Event Category: None
      Event ID: 4004
      Date: 3/7/2010
      Time: 10:33:03 AM
      User: N/A
      Computer: ADC
      Description: The DNS server was unable to complete directory service enumeration of zone .. This DNS server is configured to use information obtained from Active Directory for this zone and is unable to load the zone without it. Check that the Active Directory is functioning properly and repeat enumeration of the zone. The extended error debug information (which may be empty) is "". The event data contains the error.

      Event Type: Error
      Event Source: NTDS Replication
      Event Category: DS RPC Client
      Event ID: 2087
      Date: 3/7/2010
      Time: 10:40:28 AM
      User: NT AUTHORITY\ANONYMOUS LOGON
      Computer: ADC
      Description: Active Directory could not resolve the following DNS host name of the source domain controller to an IP address. This error prevents additions, deletions and changes in Active Directory from replicating between one or more domain controllers in the forest. Security groups, group policy, users and computers and their passwords will be inconsistent between domain controllers until this error is resolved, potentially affecting logon authentication and access to network resources.

    Read the article

  • Could not continue scan with NOLOCK due to data movement during installation

    - by dbdev1
    I am running Windows Server 2008 Standard Edition R2 x64, and I installed SQL Server 2008 Developer Edition. All of the preliminary checks run fine (apart from a warning about Windows Firewall and opening ports, which is unrelated to this and shouldn't be an issue; I can open those ports). Halfway through the actual installation, I get a popup with this error: "Could not continue scan with NOLOCK due to data movement." The installation still runs to completion when I press OK. However, at the end, it states that the following services "failed":

      database engine services
      sql server replication
      full-text search
      reporting services

    How do I know whether this actually means that anything from my installation (which is on a clean Windows Server setup: nothing else on there, no previous SQL Servers, no upgrades, etc.) is missing? I know from my programming experience that locks are for concurrency control, and the Microsoft help on this issue points to changing my query's locks/transactions in a certain way to fix it. But I am not touching any queries. Also, now that I have installed the app, when I log in, I keep getting this message:

      TITLE: Connect to Server
      ------------------------------
      Cannot connect to MSSQLSERVER.
      ------------------------------
      ADDITIONAL INFORMATION:
      A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 67)
      For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=67&LinkId=20476
      ------------------------------
      BUTTONS: OK
      ------------------------------

    I went into the Configuration Manager, enabled named pipes, and restarted the service (this is something I have done before, as this message is common and not serious). I have disabled Windows Firewall temporarily. I have checked the instance name against the error logs. Please advise on both of these errors; I think they are related. Thanks

    Read the article

  • Coco/R vs. ANTLR

    - by Eamon Nerbonne
    I'm evaluating Coco/R vs. ANTLR for use in a C# project, as part of what is essentially scriptable mail-merge functionality. To parse the (simple) scripts, I'll need a parser. I've focused on Coco/R and ANTLR because both seem fairly mature, well maintained, and capable of generating decent C# parsers. Neither seems trivial to use, however, and simplicity is something I'd appreciate, particularly for maintainability by others. Does anyone have any recommendations to make? What are the pros and cons of either for parsing a small language, or am I looking into the wrong things entirely? How well do these integrate into a typical continuous-integration setup? What are the pitfalls? Related: many questions, such as 1, 2, 3, 4, 5.

    Read the article

  • Is git / tortoisegit ready to replace svn / tortoisesvn on windows?

    - by opensas
    I've been using svn through Tortoise and the svn:// protocol on Windows for a couple of years, and I'm pretty comfortable with it. Nevertheless, there are a couple of features I'd like to have, mainly shelves (something like local commits) and easier branch/merge support. In my experience, svn/Tortoise on Windows is rock-solid stuff. I was wondering if git/tortoisegit has achieved the level of maturity and stability to be a replacement for svn on Windows. Any experience with it? saludos sas

    Some links:

      a similar question, only a bit outdated: http://stackoverflow.com/questions/1500400/is-tortoisegit-ready-for-prime-time-yet
      http://stackoverflow.com/questions/931105/tortoisegit-tortoisebzr-tortoisehg-are-any-solid-enough-to-switch-from-tortois (well, it seems like Mercurial could be an option)
      http://code.google.com/p/tortoisegit/
      http://www.syntevo.com/smartgit/index.html (free of charge for non-commercial use or to active members of the Open Source community)

    Read the article

  • No rails commands will run

    - by Jeremy
    I am trying to learn Rails and haven't used it in the last few weeks, but today when I try to run any rails command, such as 'rails -v' or 'script/server', I get the error below. I have reinstalled Ruby, but that didn't help, and I don't have a clue what could be wrong. I am on a brand new MacBook Pro.

      Jeremy-Geross-MacBook-Pro:~ Jeremy$ rails -v
      /Library/Ruby/Site/1.8/rubygems/config_file.rb:172:in `merge': can't convert String into Hash (TypeError)
          from /Library/Ruby/Site/1.8/rubygems/config_file.rb:172:in `initialize'
          from /Library/Ruby/Site/1.8/rubygems.rb:384:in `new'
          from /Library/Ruby/Site/1.8/rubygems.rb:384:in `configuration'
          from /Library/Ruby/Site/1.8/rubygems.rb:634:in `path'
          from /Library/Ruby/Site/1.8/rubygems/source_index.rb:68:in `installed_spec_directories'
          from /Library/Ruby/Site/1.8/rubygems/source_index.rb:58:in `from_installed_gems'
          from /Library/Ruby/Site/1.8/rubygems.rb:881:in `source_index'
          from /Library/Ruby/Site/1.8/rubygems/gem_path_searcher.rb:81:in `init_gemspecs'
          from /Library/Ruby/Site/1.8/rubygems/gem_path_searcher.rb:13:in `initialize'
          from /Library/Ruby/Site/1.8/rubygems.rb:839:in `new'
          from /Library/Ruby/Site/1.8/rubygems.rb:839:in `searcher'
          from /Library/Ruby/Site/1.8/rubygems.rb:838:in `synchronize'
          from /Library/Ruby/Site/1.8/rubygems.rb:838:in `searcher'
          from /Library/Ruby/Site/1.8/rubygems.rb:478:in `find_files'
          from /Library/Ruby/Site/1.8/rubygems.rb:1103
          from /usr/bin/rails:9:in `require'
          from /usr/bin/rails:9

    Read the article

  • asp.NET Dynamic Data Site and asp.NET MVC-2 site together

    - by loviji
    Hi, I first created an ASP.NET MVC 2 application and wrote most of the functionality. Afterwards I created an ASP.NET Dynamic Data site. Now, when I click the Run button in Visual Studio, the MVC app opens in the browser as http://localhost:50062 and the ASP.NET Dynamic Data site as http://localhost:58395/cms/. But I want to merge these apps into one. Can I use an ASP.NET Dynamic Data site and ASP.NET MVC 2 at the same time?

    Read the article

  • ldap_modify: Insufficient access (50)

    - by Lynn Owens
    I am running an OpenLDAP 2.4 server that uses the SSL service for communication. It works for lookups. I am trying to add mirror-mode replication. This is the command that I'm executing:

      ldapmodify -D "cn=myuser,dc=mydomain,dc=com" -H ldaps://myloadbalancer -W -f /etc/ldap/ldif/server_id.ldif

    where this is my server_id.ldif:

      dn: cn=config
      changetype: modify
      replace: olcServerID
      olcServerID: 1 myserver1
      olcServerID: 2 myserver2

    and this is my cn\=config.ldif in the slapd.d tree of text files:

      dn: cn=config
      objectClass: olcGlobal
      cn: config
      olcArgsFile: /var/run/slapd/slapd.args
      olcPidFile: /var/run/slapd/slapd.pid
      olcToolThreads: 1
      structuralObjectClass: olcGlobal
      entryUUID: ff9689de-c61d-1031-880b-c3eb45d66183
      creatorsName: cn=config
      createTimestamp: 20121118224947Z
      olcLogLevel: stats
      olcTLSCertificateFile: /etc/ldap/certs/ldapscert.pem
      olcTLSCertificateKeyFile: /etc/ldap/certs/ldapskey.pem
      olcTLSCACertificateFile: /etc/ldap/certs/ldapscert.pem
      olcTLSVerifyClient: never
      entryCSN: 20121119022009.770692Z#000000#000#000000
      modifiersName: gidNumber=0+uidNumber=0,cn=peercred,cn=external,cn=auth
      modifyTimestamp: 20121119022009Z

    But unfortunately I'm getting this:

      Enter LDAP Password:
      modifying entry "cn=config"
      ldap_modify: Insufficient access (50)

    If I try to specify the config database, I get this:

      ldapmodify -H 'ldaps://myloadbalancer/cn=config' -D "cn=myuser,cn=config" -W -f ./server_id.ldif
      Enter LDAP Password:
      ldap_bind: Invalid credentials (49)

    Does anyone know how I can add the serverID to the config database so that I can complete the setup of mirror mode?

    Read the article

  • Java: ArrayList bottleneck

    - by Jack
    Hello, while profiling a Java application that calculates hierarchical clustering of thousands of elements, I realized that ArrayList.get occupies about half of the CPU needed in the clustering part of the execution. The algorithm searches for the two most similar elements (so it is O(n*(n+1)/2)); here's the pseudocode:

      float currentMax = 0.0f
      for (int i = 0 to n)
          for (int j = i to n)
              get contents of the i-th and j-th elements
              if their similarity > currentMax
                  update currentMax
                  merge the two clusters

    So effectively there are a lot of ArrayList.get calls involved. Is there a faster way? I thought that since an ArrayList should be a linear array of references, it should be the quickest way, and maybe I can't do anything since there are simply too many gets... but maybe I'm wrong. I don't think using a HashMap could work, since I need to get them all on every iteration, and map.values() should be backed by an ArrayList anyway. Otherwise, should I try other collection libraries that are more optimized, like Google's or Apache's? Thanks
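
    As an illustration (not from the original post): for a hot O(n^2) loop like this, one common change is to copy the list into a plain array once and index the array inside the loops, which removes the repeated ArrayList.get calls that showed up in the profile. A minimal Java sketch, where the Cluster type and its similarity method are hypothetical stand-ins:

      import java.util.List;

      public class PairScan {
          // Hypothetical element type; only the similarity measure matters here.
          interface Cluster {
              float similarity(Cluster other);
          }

          // Returns the indexes of the most similar pair.
          static int[] findMostSimilarPair(List<Cluster> clusters) {
              // One bulk copy up front; the O(n^2) loop then uses raw array indexing.
              Cluster[] c = clusters.toArray(new Cluster[0]);
              float currentMax = Float.NEGATIVE_INFINITY;
              int bestI = -1, bestJ = -1;
              for (int i = 0; i < c.length; i++) {
                  for (int j = i + 1; j < c.length; j++) {
                      float s = c[i].similarity(c[j]);
                      if (s > currentMax) {
                          currentMax = s;
                          bestI = i;
                          bestJ = j;
                      }
                  }
              }
              return new int[] { bestI, bestJ };
          }
      }

    Whether this actually beats ArrayList.get (which the JIT often inlines) is something to verify with the same profiler; keeping the similarity computation itself allocation-free tends to matter at least as much.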

    Read the article

  • Reasons for NSManagedObjectMergeError error on [NSManagedObjectContext save:]

    - by ross-kimes
    I have an application that combines threading and Core Data. I am using one global NSPersistentStoreCoordinator and a main NSManagedObjectContext. I have a process where I have to download 9 files simultaneously, so I created an object to handle each download (each individual download has its own object) and save it through the persistent store coordinator. In the [NSURLConnection connectionDidFinishLoading:] method, I create a new NSManagedObject and attempt to save the data (which will also merge it with the main managed object context). I think it is failing because multiple processes are trying to save through the persistentStoreCoordinator at the same time, as the downloads finish around the same time. What is the easiest way to eliminate this error while still downloading the files independently? Thank you!

    Read the article

  • PowerBuilder IDE Customization for SCC

    - by Adam Hawkes
    Our PowerBuilder application is fairly large and has many objects in several PBLs for organizing our code. We often have 10 or more datawindows on one window, and these datawindows may be spread across two or three PBLs. For version control, we use exclusive check-out to avoid merge conflicts. The situation is that when you right-click on a datawindow object from the Window painter you get a context-menu with options like "Script" and "Properties" and "Modify Datawindow...". We'd like to add one for "Check-out..." to avoid having to hunt for the datawindow in several PBLs. Any ideas on how to do this, or something similar, would be greatly appreciated.

    Read the article

  • How to use gnlcomposition to concatenate video files?

    - by Hardy
    Hi all, I am trying to concatenate two video files with the gnonlin components of GStreamer. The pipeline I am using is:

      gst-launch-0.10 gnlcomposition {
          gnlfilesource name="s1" location="/home/s1.mp4" start=0 duration=2000000000 media-start=0 media-duration=2000000000
          gnlfilesource name="s2" location="/home/s2.mp4" start=2000000000 duration=2000000000 media-start=0 media-duration=2000000000
      } ! queue ! videorate ! progressreport name="Merging Progress" ! ffmpegcolorspace ! ffenc_mpeg4 ! ffmux_mp4 ! filesink location="/home/merge.mp4"

    As a result, I am getting only the second file, for the duration specified in the parameters. I have tried several things and also searched on Google, but I could not figure out the problem with the above command. Can anyone point out what I am doing wrong? Any other way of concatenating multiple files into one based on time is welcome too. Thanks

    Read the article

  • mysqld crashes on any statement

    - by ??iu
    I restarted my slave to change configuration settings: to skip reverse hostname lookups on connect and to enable the slow query log. I edited /etc/my.cnf, making only these changes, then restarted mysqld with /etc/init.d/mysql restart. All appeared to be well, but when I connect to mysqld, remotely or locally, it connects okay, yet mysqld crashes whenever you try to issue any kind of statement. The client looks like:

      Reading table information for completion of table and column names
      You can turn off this feature to get a quicker startup with -A
      Welcome to the MySQL monitor. Commands end with ; or \g.
      Your MySQL connection id is 3
      Server version: 5.1.31-1ubuntu2-log
      Type 'help;' or '\h' for help. Type '\c' to clear the buffer.
      mysql> show tables;
      ERROR 2006 (HY000): MySQL server has gone away
      No connection. Trying to reconnect...
      Connection id: 1
      Current database: mydb
      ERROR 2006 (HY000): MySQL server has gone away
      No connection. Trying to reconnect...
      ERROR 2003 (HY000): Can't connect to MySQL server on 'xx.xx.xx.xx' (61)
      ERROR: Can't connect to the server
      ERROR 2006 (HY000): MySQL server has gone away
      No connection. Trying to reconnect...
      ERROR 2003 (HY000): Can't connect to MySQL server on 'xx.xx.xx.xx' (61)
      ERROR: Can't connect to the server
      ERROR 2006 (HY000): MySQL server has gone away
      Bus error

    The mysqld error log looks like:

      101210 16:35:51 InnoDB: Error: (1500) Couldn't read the MAX(job_id) autoinc value from the index (PRIMARY).
      101210 16:35:51 InnoDB: Assertion failure in thread 140245598570832 in file handler/ha_innodb.cc line 2595
      InnoDB: Failing assertion: error == DB_SUCCESS
      InnoDB: We intentionally generate a memory trap.
      InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
      InnoDB: If you get repeated assertion failures or crashes, even
      InnoDB: immediately after the mysqld startup, there may be
      InnoDB: corruption in the InnoDB tablespace. Please refer to
      InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
      InnoDB: about forcing recovery.
      101210 16:35:51 - mysqld got signal 6 ;
      This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail.
      key_buffer_size=16777216
      read_buffer_size=131072
      max_used_connections=3
      max_threads=600
      threads_connected=3
      It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1328077 K bytes of memory
      Hope that's ok; if not, decrease some variables in the equation.
      thd: 0x18209220
      Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong...
      stack_bottom = 0x7f8d791580d0 thread_stack 0x20000
      /usr/sbin/mysqld(my_print_stacktrace+0x29) [0x8b4f89]
      /usr/sbin/mysqld(handle_segfault+0x383) [0x5f8f03]
      /lib/libpthread.so.0 [0x7f902a76a080]
      /lib/libc.so.6(gsignal+0x35) [0x7f90291f8fb5]
      /lib/libc.so.6(abort+0x183) [0x7f90291fabc3]
      /usr/sbin/mysqld(ha_innobase::open(char const*, int, unsigned int)+0x41b) [0x781f4b]
      /usr/sbin/mysqld(handler::ha_open(st_table*, char const*, int, int)+0x3f) [0x6db00f]
      /usr/sbin/mysqld(open_table_from_share(THD*, st_table_share*, char const*, unsigned int, unsigned int, unsigned int, st_table*, bool)+0x57a) [0x64760a]
      /usr/sbin/mysqld [0x63f281]
      /usr/sbin/mysqld(open_table(THD*, TABLE_LIST*, st_mem_root*, bool*, unsigned int)+0x626) [0x641e16]
      /usr/sbin/mysqld(open_tables(THD*, TABLE_LIST**, unsigned int*, unsigned int)+0x5db) [0x6429cb]
      /usr/sbin/mysqld(open_normal_and_derived_tables(THD*, TABLE_LIST*, unsigned int)+0x1e) [0x642b0e]
      /usr/sbin/mysqld(mysqld_list_fields(THD*, TABLE_LIST*, char const*)+0x22) [0x70b292]
      /usr/sbin/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0x146d) [0x60dc1d]
      /usr/sbin/mysqld(do_command(THD*)+0xe8) [0x60dda8]
      /usr/sbin/mysqld(handle_one_connection+0x226) [0x601426]
      /lib/libpthread.so.0 [0x7f902a7623ba]
      /lib/libc.so.6(clone+0x6d) [0x7f90292abfcd]
      Trying to get some variables. Some pointers may be invalid and cause the dump to abort...
      thd->query at 0x18213c70 =
      thd->thread_id=3
      thd->killed=NOT_KILLED
      The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash.
      101210 16:35:51 mysqld_safe Number of processes running now: 0
      101210 16:35:51 mysqld_safe mysqld restarted
      InnoDB: The log sequence number in ibdata files does not match
      InnoDB: the log sequence number in the ib_logfiles!
      101210 16:35:54 InnoDB: Database was not shut down normally!
      InnoDB: Starting crash recovery.
      InnoDB: Reading tablespace information from the .ibd files...
      InnoDB: Restoring possible half-written data pages from the doublewrite
      InnoDB: buffer...
      101210 16:35:56 InnoDB: Started; log sequence number 456 143528628
      101210 16:35:56 [Warning] 'user' entry 'root@PSDB102' ignored in --skip-name-resolve mode.
      101210 16:35:56 [Warning] Neither --relay-log nor --relay-log-index were used; so replication may break when this MySQL server acts as a slave and has his hostname changed!! Please use '--relay-log=mysqld-relay-bin' to avoid this problem.
      101210 16:35:56 [Note] Event Scheduler: Loaded 0 events
      101210 16:35:56 [Note] /usr/sbin/mysqld: ready for connections.
      Version: '5.1.31-1ubuntu2-log' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu)
      101210 16:36:11 InnoDB: Error: (1500) Couldn't read the MAX(job_id) autoinc value from the index (PRIMARY).
      101210 16:36:11 InnoDB: Assertion failure in thread 139955151501648 in file handler/ha_innodb.cc line 2595
      InnoDB: Failing assertion: error == DB_SUCCESS
      InnoDB: We intentionally generate a memory trap.
      InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
      InnoDB: If you get repeated assertion failures or crashes, even
      InnoDB: immediately after the mysqld startup, there may be
      InnoDB: corruption in the InnoDB tablespace. Please refer to
      InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
      InnoDB: about forcing recovery.
      101210 16:36:11 - mysqld got signal 6 ;
      This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail.
      key_buffer_size=16777216
      read_buffer_size=131072
      max_used_connections=1
      max_threads=600
      threads_connected=1
      It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1328077 K bytes of memory
      Hope that's ok; if not, decrease some variables in the equation.
      thd: 0x18588720
      Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong...
      stack_bottom = 0x7f49d916f0d0 thread_stack 0x20000
      /usr/sbin/mysqld(my_print_stacktrace+0x29) [0x8b4f89]
      /usr/sbin/mysqld(handle_segfault+0x383) [0x5f8f03]
      /lib/libpthread.so.0 [0x7f4c8a73f080]
      /lib/libc.so.6(gsignal+0x35) [0x7f4c891cdfb5]
      /lib/libc.so.6(abort+0x183) [0x7f4c891cfbc3]
      /usr/sbin/mysqld(ha_innobase::open(char const*, int, unsigned int)+0x41b) [0x781f4b]
      /usr/sbin/mysqld(handler::ha_open(st_table*, char const*, int, int)+0x3f) [0x6db00f]
      /usr/sbin/mysqld(open_table_from_share(THD*, st_table_share*, char const*, unsigned int, unsigned int, unsigned int, st_table*, bool)+0x57a) [0x64760a]
      /usr/sbin/mysqld [0x63f281]
      /usr/sbin/mysqld(open_table(THD*, TABLE_LIST*, st_mem_root*, bool*, unsigned int)+0x626) [0x641e16]
      /usr/sbin/mysqld(open_tables(THD*, TABLE_LIST**, unsigned int*, unsigned int)+0x5db) [0x6429cb]
      /usr/sbin/mysqld(open_normal_and_derived_tables(THD*, TABLE_LIST*, unsigned int)+0x1e) [0x642b0e]
      /usr/sbin/mysqld(mysqld_list_fields(THD*, TABLE_LIST*, char const*)+0x22) [0x70b292]
      /usr/sbin/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0x146d) [0x60dc1d]
      /usr/sbin/mysqld(do_command(THD*)+0xe8) [0x60dda8]
      /usr/sbin/mysqld(handle_one_connection+0x226) [0x601426]
      /lib/libpthread.so.0 [0x7f4c8a7373ba]
      /lib/libc.so.6(clone+0x6d) [0x7f4c89280fcd]
      Trying to get some variables. Some pointers may be invalid and cause the dump to abort...
      thd->query at 0x18599950 =
      thd->thread_id=1
      thd->killed=NOT_KILLED
      The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash.
      101210 16:36:11 mysqld_safe Number of processes running now: 0
      101210 16:36:11 mysqld_safe mysqld restarted

    The config is:

      [mysqld_safe]
      socket = /var/run/mysqld/mysqld.sock
      nice = 0

      [mysqld]
      innodb_file_per_table
      innodb_buffer_pool_size=10G
      innodb_log_buffer_size=4M
      innodb_flush_log_at_trx_commit=2
      innodb_thread_concurrency=8
      skip-slave-start
      server-id=3
      #
      # * IMPORTANT
      # If you make changes to these settings and your system uses apparmor, you may
      # also need to also adjust /etc/apparmor.d/usr.sbin.mysqld.
      #
      user = mysql
      pid-file = /var/run/mysqld/mysqld.pid
      socket = /var/run/mysqld/mysqld.sock
      port = 3306
      basedir = /usr
      datadir = /DB2/mysql
      tmpdir = /tmp
      skip-external-locking
      #
      # Instead of skip-networking the default is now to listen only on
      # localhost which is more compatible and is not less secure.
      #bind-address = 127.0.0.1
      #
      # * Fine Tuning
      #
      key_buffer = 16M
      max_allowed_packet = 16M
      thread_stack = 128K
      thread_cache_size = 8
      # This replaces the startup script and checks MyISAM tables if needed
      # the first time they are touched
      myisam-recover = BACKUP
      max_connections = 600
      #table_cache = 64
      #thread_concurrency = 10
      #
      # * Query Cache Configuration
      #
      query_cache_limit = 1M
      query_cache_size = 32M
      #
      skip-federated
      slow-query-log
      skip-name-resolve

    Update: I followed the instructions at http://dev.mysql.com/doc/refman/5.1/en/forcing-innodb-recovery.html and set innodb_force_recovery = 4, and the logs are showing a different error, but the behavior is still the same:

      101210 19:14:15 mysqld_safe mysqld restarted
      101210 19:14:19 InnoDB: Started; log sequence number 456 143528628
      InnoDB: !!! innodb_force_recovery is set to 4 !!!
      101210 19:14:19 [Warning] 'user' entry 'root@PSDB102' ignored in --skip-name-resolve mode.
      101210 19:14:19 [Warning] Neither --relay-log nor --relay-log-index were used; so replication may break when this MySQL server acts as a slave and has his hostname changed!! Please use '--relay-log=mysqld-relay-bin' to avoid this problem.
      101210 19:14:19 [Note] Event Scheduler: Loaded 0 events
      101210 19:14:19 [Note] /usr/sbin/mysqld: ready for connections.
      Version: '5.1.31-1ubuntu2-log' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu)
      101210 19:14:32 InnoDB: error: space object of table mydb/__twitter_friend,
      InnoDB: space id 1602 did not exist in memory. Retrying an open.
      101210 19:14:32 InnoDB: error: space object of table mydb/access_request,
      InnoDB: space id 1318 did not exist in memory. Retrying an open.
      101210 19:14:32 InnoDB: error: space object of table mydb/activity,
      InnoDB: space id 1595 did not exist in memory. Retrying an open.
      101210 19:14:32 - mysqld got signal 11 ;
      This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail.
      key_buffer_size=16777216
      read_buffer_size=131072
      max_used_connections=1
      max_threads=600
      threads_connected=1
      It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1328077 K bytes of memory
      Hope that's ok; if not, decrease some variables in the equation.
      thd: 0x1753c070
      Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong...
      stack_bottom = 0x7f7a0b5800d0 thread_stack 0x20000
      /usr/sbin/mysqld(my_print_stacktrace+0x29) [0x8b4f89]
      /usr/sbin/mysqld(handle_segfault+0x383) [0x5f8f03]
      /lib/libpthread.so.0 [0x7f7cbc350080]
      /usr/sbin/mysqld(ha_innobase::innobase_get_index(unsigned int)+0x46) [0x77c516]
      /usr/sbin/mysqld(ha_innobase::innobase_initialize_autoinc()+0x40) [0x77c640]
      /usr/sbin/mysqld(ha_innobase::open(char const*, int, unsigned int)+0x3f3) [0x781f23]
      /usr/sbin/mysqld(handler::ha_open(st_table*, char const*, int, int)+0x3f) [0x6db00f]
      /usr/sbin/mysqld(open_table_from_share(THD*, st_table_share*, char const*, unsigned int, unsigned int, unsigned int, st_table*, bool)+0x57a) [0x64760a]
      /usr/sbin/mysqld [0x63f281]
      /usr/sbin/mysqld(open_table(THD*, TABLE_LIST*, st_mem_root*, bool*, unsigned int)+0x626) [0x641e16]
      /usr/sbin/mysqld(open_tables(THD*, TABLE_LIST**, unsigned int*, unsigned int)+0x5db) [0x6429cb]
      /usr/sbin/mysqld(open_normal_and_derived_tables(THD*, TABLE_LIST*, unsigned int)+0x1e) [0x642b0e]
      /usr/sbin/mysqld(mysqld_list_fields(THD*, TABLE_LIST*, char const*)+0x22) [0x70b292]
      /usr/sbin/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0x146d) [0x60dc1d]
      /usr/sbin/mysqld(do_command(THD*)+0xe8) [0x60dda8]
      /usr/sbin/mysqld(handle_one_connection+0x226) [0x601426]
      /lib/libpthread.so.0 [0x7f7cbc3483ba]
      /lib/libc.so.6(clone+0x6d) [0x7f7cbae91fcd]
      Trying to get some variables. Some pointers may be invalid and cause the dump to abort...
      thd->query at 0x1754d690 =
      thd->thread_id=1
      thd->killed=NOT_KILLED
      The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash.

    Read the article

  • WPF Resource Dictionary in a separate assembly

    - by Gustavo Cavalcanti
    I have resource dictionary files (MenuTemplate.xaml, ButtonTemplate.xaml, etc) that I want to use in multiple separate applications. I could add them to the applications' assemblies, but it's better if I compile these resources in one single assembly and have my applications reference it, right? After the resource assembly is built, how can I reference it in the App.xaml of my applications? Currently I use ResourceDictionary.MergedDictionaries to merge the individual dictionary files. If I have them in an assembly, how can I reference them in xaml? Thanks for your help! Gustavo

    Read the article

  • How do I share a WiX fragment in two WiX projects?

    - by Randy Eppinger
    We have a WiX fragment in a file SomeDialog.wxs that prompts the user for some information. It's referenced by another fragment, in an InstallerUI.wxs file, that controls the dialog order. Product.wxs is, of course, our main file. Works great. Now I have a second Visual Studio 2008 WiX 3.0 project for the .MSI of another application, and it needs to ask the user for the same information. I can't figure out the best way to share the file so that changing the requested information gives both .MSIs the new behavior. I honestly can't tell whether a merge module, a .wxi (include), or a .wixlib is the right solution. I would have hoped to find a simple example of someone doing this, but I have failed thus far. Edit: Based on Rob Mensching's wixlib blog entry, a wixlib may be the answer, but I am still searching for an example of how to do this.

    Read the article

  • Working out global tab order algorithmically?

    - by Mrgreen
    We have a proprietary system where we can configure fields on individual forms. However, these fields have a global tab order (we cannot specify one per form). We have a bunch of forms (35 in total) which share a lot of different fields. Each form has a specific tab/edit order that needs to be configured. Example:

      Form 1 has fields A,B,C,D in that order.
      Form 2 has fields E,F,G,A in that order.
      Form 3 has fields E,B,H,I in that order.

    The global tab order would be E,F,G,A,B,C,D,H,I. Notice how A needs to come before B yet after G. Is there an easy way to work this out using the tab-order lists for each form, as in the sketch below? I need to merge this tab-order information into a single global tab-order list. I have over 200 fields in total, and it is near impossible to do by hand.
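
    Each form's field order is a set of precedence constraints (A before B, and so on), so merging them is a topological sort over the fields; a cycle means two forms contradict each other. A sketch (not from the original post; the field names are the letters from the example) in Java, using Kahn's algorithm:

      import java.util.*;

      public class GlobalTabOrder {
          // Merge per-form tab orders into one global order consistent with all of them.
          static List<String> merge(List<List<String>> forms) {
              Map<String, Set<String>> next = new LinkedHashMap<>();  // field -> fields that must follow it
              Map<String, Integer> indegree = new LinkedHashMap<>();  // field -> number of unmet predecessors
              for (List<String> form : forms)
                  for (String f : form) {
                      next.putIfAbsent(f, new LinkedHashSet<>());
                      indegree.putIfAbsent(f, 0);
                  }
              for (List<String> form : forms)
                  for (int i = 0; i + 1 < form.size(); i++) {
                      String a = form.get(i), b = form.get(i + 1);
                      if (next.get(a).add(b))                         // edge a -> b: a tabs before b
                          indegree.merge(b, 1, Integer::sum);
                  }
              Deque<String> ready = new ArrayDeque<>();
              for (Map.Entry<String, Integer> e : indegree.entrySet())
                  if (e.getValue() == 0) ready.add(e.getKey());
              List<String> order = new ArrayList<>();
              while (!ready.isEmpty()) {
                  String f = ready.remove();
                  order.add(f);
                  for (String b : next.get(f))
                      if (indegree.merge(b, -1, Integer::sum) == 0) ready.add(b);
              }
              if (order.size() != indegree.size())
                  throw new IllegalStateException("Forms contain contradictory tab orders");
              return order;
          }

          public static void main(String[] args) {
              System.out.println(merge(List.of(
                      List.of("A", "B", "C", "D"),
                      List.of("E", "F", "G", "A"),
                      List.of("E", "B", "H", "I"))));
              // Prints [E, F, G, A, B, C, H, D, I]; the order in the question,
              // E F G A B C D H I, satisfies the same constraints.
          }
      }

    Any topological order is valid here, since the forms only constrain relative order; with 200 fields this runs instantly.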

    Read the article

  • Combine multiple dataset columns to one dataset

    - by Matt
    I have multiple datasets that I would like to combine into one. There is a common ID field that can be used to associate each row. Calling Merge on the dataset will add additional rows to the dataset, but I would like to combine the additional columns instead. There are too many fields to fetch in one query, which would make it unmanageable. Each individual query can handle ordering to ensure the data is placed in the correct row. For example, let's say I have two queries resulting in two datasets:

      SELECT ID, colA, colB
      SELECT colC, colD

    The resulting dataset should look like:

      ID  colA  colB  colC  colD
      1   a     b     c     d
      2   e     f     g     h

    Any ideas on ways to accomplish this?
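
    Conceptually this is a keyed join: each query also returns the ID column, and rows are matched on it rather than on position (in ADO.NET terms, if memory serves, defining ID as the DataTable's primary key makes Merge match rows instead of appending them; a SQL JOIN is the other obvious route, though both points are assumptions worth verifying). As a language-neutral sketch of the join-by-ID idea, here it is in Java with the rows from the example:

      import java.util.*;

      public class ColumnMerge {
          // Combine several {ID -> columns} result sets into one row per ID.
          static Map<Integer, Map<String, String>> join(
                  List<Map<Integer, Map<String, String>>> resultSets) {
              Map<Integer, Map<String, String>> rows = new TreeMap<>();
              for (Map<Integer, Map<String, String>> rs : resultSets)
                  for (Map.Entry<Integer, Map<String, String>> row : rs.entrySet())
                      rows.computeIfAbsent(row.getKey(), id -> new TreeMap<>())
                          .putAll(row.getValue());
              return rows;
          }

          public static void main(String[] args) {
              // Query 1 returned ID, colA, colB; query 2 returned ID, colC, colD.
              Map<Integer, Map<String, String>> q1 = Map.of(
                      1, Map.of("colA", "a", "colB", "b"),
                      2, Map.of("colA", "e", "colB", "f"));
              Map<Integer, Map<String, String>> q2 = Map.of(
                      1, Map.of("colC", "c", "colD", "d"),
                      2, Map.of("colC", "g", "colD", "h"));
              System.out.println(join(List.of(q1, q2)));
              // {1={colA=a, colB=b, colC=c, colD=d}, 2={colA=e, colB=f, colC=g, colD=h}}
          }
      }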

    Read the article

  • dcommit to SVN in 1 commit after cherry-picking in git

    - by DJ
    I would like to know if there is a clean way to git-svn dcommit multiple local commits as one commit in Subversion. The situation is that I am cherry-picking some bug-fix changes from our trunk into the maintenance branch. The project's preference is to have the bug fixes committed as one commit in Subversion, but I would like to keep the history of the changes I cherry-picked in my local git for reference. Currently what I do is all the cherry-picking on branch X and then a squash merge into a new branch Y. The dcommit is then done from branch Y. Is there a better way to do it without using an intermediary branch?

    Read the article

  • create a wav file from multiple wav files in delphi

    - by Bayu
    I have a problem in my final project... I'm having trouble saving multiple WAV files into one WAV file. Let's take an example: I have 3 WAV files which are the syllables of the word "hospital": "hos.wav", "pi.wav", and "tal.wav" (sorry if I'm wrong in determining the syllables of the word). Each of those syllable WAV files contains an utterance of the corresponding syllable. My task is to merge those files so that the word "hospital" can be reproduced from them, and then to save the merged file as a new WAV file, let's say "hospital.wav". I've done the first task, but not the second... Can anyone help me? Thanks in advance.
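
    The question targets Delphi, but the underlying recipe is language-neutral: a plain PCM WAV file is a short header followed by raw sample data, so you decode each file, concatenate the sample frames, and write a single file with one new header; simply appending the raw files would leave the extra headers inside the audio. As an illustration of that recipe (not a Delphi answer), a Java sketch using the standard javax.sound.sampled API, assuming all three files share the same PCM format:

      import javax.sound.sampled.*;
      import java.io.File;
      import java.io.SequenceInputStream;

      public class WavConcat {
          public static void main(String[] args) throws Exception {
              AudioInputStream hos = AudioSystem.getAudioInputStream(new File("hos.wav"));
              AudioInputStream pi  = AudioSystem.getAudioInputStream(new File("pi.wav"));
              AudioInputStream tal = AudioSystem.getAudioInputStream(new File("tal.wav"));

              // Chain the sample data of the first two files under one format header.
              AudioInputStream hosPi = new AudioInputStream(
                      new SequenceInputStream(hos, pi), hos.getFormat(),
                      hos.getFrameLength() + pi.getFrameLength());
              // Chain the third file onto the first two.
              AudioInputStream word = new AudioInputStream(
                      new SequenceInputStream(hosPi, tal), hos.getFormat(),
                      hosPi.getFrameLength() + tal.getFrameLength());

              // One header, all sample frames: "hos" + "pi" + "tal" -> "hospital".
              AudioSystem.write(word, AudioFileFormat.Type.WAVE, new File("hospital.wav"));
          }
      }

    In Delphi the equivalent is to copy the 'data' chunks of the three files back to back and emit a single RIFF/WAVE header whose size fields cover the combined data.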

    Read the article

  • Adding tables to a herd in bucardo

    - by Joseph the Dreamer
    Forgive my ignorance; I am a JS programmer given the task of doing DB replication using Bucardo. I understand the concept of how Bucardo works, but setting it up is a bit confusing. The setup is:

      Lubuntu Linux
      Two databases, test_master and test_slave, using PostgreSQL
      Each DB has a table named test, containing 2 columns: id (PK) and test (int)
      I use pgAdmin3

    I have already added them to Bucardo's list of databases and added all tables:

      Table: public.test DB: test_slave PK: id (int4)
      Table: public.test DB: test_master PK: id (int4)

    As you see, because the DBs are identical, even the schema names are identical. So when I do:

      bucardo_ctl add herd sample_herd public.test

    it gets added to the herd. But this command gets confused about which database public.test comes from. So when I add a sync:

      $ bucardo_ctl add sync sample_sync source=sample_herd targetdb=test_slave type=fullcopy
      Failed to add sync: DBD::Pg::st execute failed: ERROR: Source and target databases cannot be the same: test_slave at line 118. at line 30.
      CONTEXT: PL/Perl function "validate_sync" at /usr/bin/bucardo_ctl line 3362.

    What does it mean that source and target cannot be the same? If it got confused as to which public.test to use as the source, how do I differentiate?

    Read the article

  • Exchange 2007 two node cluster setup on Windows 2008 Enterprise, install error

    - by Shadow00Caster
    I am installing Microsoft Exchange 2007 x64 in a two-node environment using Microsoft Windows 2008 Enterprise x64. The failover cluster is set up properly, following best practices for Windows clustering with Exchange 2007. All the validation tests pass on the cluster, and all of that portion works fine. The problem is when I go to install the first Exchange node as the active mailbox in a two-node CCR configuration. It gets all the way through the first 3 steps (Copy Exchange Files, Management Tools, Mailbox Role) and then fails on the 4th step, 'Clustered Mailbox Server', with the following error:

      Error: The clustered mailbox server's group 'XXXX' was not found, and should already exist.

    Firewalls are all disabled, DNS is set up properly, the environment has 3 domain controllers (all 2k8 Enterprise x64), and all replication works. The name I pick for the CCR cluster (XXXX) does not exist in AD or in DNS. I have attempted this install from both of the two Exchange nodes, multiple times, and tried different names. I have been banging my head against the wall for days working on this and would appreciate any feedback on the issue.

    Read the article

  • Programatically find TFS changes since last good build

    - by abigblackman
    I have several branches in TFS (dev, test, stage), and when I merge changes into the test branch I want the automated build-and-deploy script to find all the updated SQL files and deploy them to the test database. I thought I could do this by finding all the changesets associated with the build since the last good build, finding all the SQL files in those changesets, and deploying them. However, the changesets don't seem to be associated with the build for some reason, so my question is twofold:

      1) How do I ensure that a changeset is associated with a particular build?
      2) How can I get a list of files that have changed in the branch since the last good build?

    I have the last successfully built build, but I'm unsure how to get the files without checking the changesets (which, as mentioned above, are not associated with the build!).

    Read the article

  • Sticky sessions not sticky on coldfusion cluster

    - by GreatSeaSpider
    We're trying to deploy a legacy ColdFusion site onto a new CF8 cluster. The cluster consists of three CF instances running under JRun4 on a single Windows 2008 server. I've got the cluster set to not replicate sessions, and sticky sessions turned on. Each instance is set to use J2EE session variables. The application tag for the site has:

      sessionmanagement="Yes" setclientcookies="Yes" setdomaincookies="Yes"

    When each instance starts, no errors are reported in the instance log, and they join the cluster without any issues, though the instances do log:

      16/10 08:31:25 info SessionReplicationService successfully joined a JINI lookup service (assigned JINI-ID .....)

    and

      16/10 08:31:25 info Clusterable service SessionReplicationService discovered a SessionReplicationService peer on a JRun server named "xxxx" on host xxxx

    which is interesting, since session replication is definitely off. Is the SessionReplicationService responsible for sticky sessions as well? That's enough background; the problem is that the sticky sessions appear to simply not work. Each request is bounced to a different instance, and it seems as if the sessions are being lost on each instance anyway. As soon as the cluster is down to a single instance, the web app works exactly as expected and the sessions seem fine. Has anybody got any ideas for me? I've been trawling the web and I can't seem to find any answers.

    Read the article

  • Sharepoint and Cross-Site Lookup

    - by Mina Samy
    Hi all, I have this scenario: I want to build two SharePoint 2007 sites, one for customer info and the other for products and customer orders. The problem is that in the second site I need to reference the customer info from the first site, but unfortunately SharePoint does not provide an out-of-the-box cross-site lookup. I did some searching, found custom cross-site lookup fields, and used one, but when I upgraded the site to SharePoint 2010 this custom field was not compatible, and the upgrade wizard said it could not be upgraded. So what is the solution for this? Is it to merge the two sites so that I can use the standard lookup feature, or is there a workaround? If anybody has faced such a scenario, please share the solution with me. Thanks

    Read the article
