Search Results

Search found 22170 results on 887 pages for 'multiple schema'.

Page 393/887 | < Previous Page | 389 390 391 392 393 394 395 396 397 398 399 400  | Next Page >

  • How to simulate a file read error in the CRT

    - by Mark0978
    Using VS2008, we would like to simulate a file that has a size of X, but that has a read failure at X-Y bytes, so that we get an error indication. Does anyone have an idea of how to do this on Windows? There appears to be a solution for Linux, but I can't really come up with a way to do this on Windows. We have multiple developers, multiple machines, and the CppUnit testing framework, so I want a software-only solution. I'm trying to simulate the actual CRT failing, so I can test the code that deals with the failure.

    Read the article

  • Interface Design Problem: Storing Result of Transactions

    - by jboyd
    Requirements: multiple sources of input (social media content) into a system; multiple destinations of output (social media APIs); sources and destinations WILL be added. Some pseudocode:

        IContentProvider contentProvider = context.getBean("contentProvider");
        List<Content> toPost = contentProvider.getContent();
        for (Content c : toPost) {
            SocialMediaPresence smPresence = socialMediaService.getSMPresenceBySomeId(c.getDestId());
            smPresence.hasTwitter();
            smPresence.hasFacebook(); // just to show what this is
            smPresence.postContent(c); // posting could fail for some SM platforms, but content shouldn't be lost forever
        }

    So now I run out of steam: I need to know what content has been successfully posted, and whether it has gone to all platforms. If another platform is added in the future, that content needs to go out to it as well (therefore my content provider needs to know not only whether content has gone out, but to which platforms). I'm not looking for code, although sample/pseudo is fine... I'm looking for an approach to this problem that I can implement.
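
    One way to read the requirement above is to keep a per-(content, platform) delivery record rather than a single "posted" flag on the content. The sketch below is illustrative only, written in C# rather than the question's Java-style pseudocode; every type and member name (DeliveryRecord, DeliveryLog, Outstanding) is hypothetical, and it simply shows that storing one status row per content/platform pair lets you answer both "has this gone out?" and "to which platforms?", including platforms added later.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        enum Platform { Twitter, Facebook }
        enum DeliveryStatus { Pending, Sent, Failed }

        class DeliveryRecord
        {
            public string ContentId { get; set; }
            public Platform Platform { get; set; }
            public DeliveryStatus Status { get; set; }
            public DateTime? SentAt { get; set; }
        }

        class DeliveryLog
        {
            private readonly List<DeliveryRecord> records = new List<DeliveryRecord>();

            // Record the outcome of one post attempt for one platform.
            public void Record(string contentId, Platform platform, bool success)
            {
                records.Add(new DeliveryRecord
                {
                    ContentId = contentId,
                    Platform = platform,
                    Status = success ? DeliveryStatus.Sent : DeliveryStatus.Failed,
                    SentAt = success ? DateTime.UtcNow : (DateTime?)null
                });
            }

            // Content still owed to a platform: never attempted, or attempted and failed.
            public IEnumerable<string> Outstanding(Platform platform, IEnumerable<string> allContentIds)
            {
                var sent = new HashSet<string>(records
                    .Where(r => r.Platform == platform && r.Status == DeliveryStatus.Sent)
                    .Select(r => r.ContentId));
                return allContentIds.Where(id => !sent.Contains(id));
            }
        }

    With a log like this, adding a new platform later is just a new Platform value: everything already published shows up as Outstanding for it and can be replayed.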

    Read the article

  • Benchmarking MySQL Replication with Multi-Threaded Slaves

    - by Mat Keep
    The objective of this benchmark is to measure the performance improvement achieved when enabling the Multi-Threaded Slave enhancement delivered as part of MySQL 5.6. As the results demonstrate, Multi-Threaded Slaves deliver 5x higher replication performance based on a configuration with 10 databases/schemas. For real-world deployments, higher replication performance directly translates to:

    · Improved consistency of reads from slaves (i.e. reduced risk of reading "stale" data)
    · Reduced risk of data loss should the master fail before replicating all events in its binary log (binlog)

    The multi-threaded slave splits processing between worker threads based on schema, allowing updates to be applied in parallel rather than sequentially. This benefits workloads that isolate application data using databases - e.g. multi-tenant systems deployed in cloud environments. Multi-Threaded Slaves are just one of many enhancements to replication previewed as part of the MySQL 5.6 Development Release, which include:

    · Global Transaction Identifiers coupled with MySQL utilities for automatic failover / switchover and slave promotion
    · Crash Safe Slaves and Binlog
    · Optimized Row Based Replication
    · Replication Event Checksums
    · Time Delayed Replication

    These and many more are discussed in the "MySQL 5.6 Replication: Enabling the Next Generation of Web & Cloud Services" Developer Zone article. Back to the benchmark - details are as follows.

    Environment

    The test environment consisted of two Linux servers: one running the replication master and one running the replication slave. Only the slave was involved in the actual measurements, and was based on the following configuration:

    - Hardware: Oracle Sun Fire X4170 M2 Server
    - CPU: 2 sockets, 6 cores with hyper-threading, 2930 MHz
    - OS: 64-bit Oracle Enterprise Linux 6.1
    - Memory: 48 GB

    Test Procedure

    Initial Setup: Two MySQL servers were started on two different hosts, configured as replication master and slave. 10 sysbench schemas were created, each with a single table:

        CREATE TABLE `sbtest` (
           `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
           `k` int(10) unsigned NOT NULL DEFAULT '0',
           `c` char(120) NOT NULL DEFAULT '',
           `pad` char(60) NOT NULL DEFAULT '',
           PRIMARY KEY (`id`),
           KEY `k` (`k`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1

    10,000 rows were inserted in each of the 10 tables, for a total of 100,000 rows. When the inserts had replicated to the slave, the slave threads were stopped. The slave data directory was copied to a backup location and the slave threads' position in the master binlog noted. 10 sysbench clients, each configured with 10 threads, were spawned at the same time to generate a random schema load against each of the 10 schemas on the master. Each sysbench client executed 10,000 "update key" statements:

        UPDATE sbtest SET k=k+1 WHERE id = <random row>

    In total, this generated 100,000 update statements to later replicate during the test itself.

    Test Methodology: The number of slave workers to test with was configured using:

        SET GLOBAL slave_parallel_workers=<workers>

    Then the slave IO thread was started and the test waited for all the update queries to be copied over to the relay log on the slave. The benchmark clock was started and then the slave SQL thread was started. The test waited for the slave SQL thread to finish executing the 100k update queries, using "select master_pos_wait()". When master_pos_wait() returned, the benchmark clock was stopped and the duration calculated. The calculated duration from the benchmark clock should be close to the time it took the SQL thread to execute the 100,000 update queries. The 100k queries divided by this duration gave the benchmark metric, reported as Queries Per Second (QPS).

    Test Reset: The test-reset cycle was implemented as follows:

    · the slave was stopped
    · the slave data directory was replaced with the previous backup
    · the slave was restarted with the slave threads' replication pointer repositioned to the point before the update queries in the binlog

    The test could then be repeated with an identical set of queries but a different number of slave worker threads, enabling a fair comparison. The test-reset cycle was repeated 3 times for each worker count from 0 to 24, and the QPS metric was calculated and averaged for each worker count.

    MySQL Configuration

    The relevant configuration settings used for MySQL are as follows:

        binlog-format=STATEMENT
        relay-log-info-repository=TABLE
        master-info-repository=TABLE

    As described in the test procedure, the slave_parallel_workers setting was modified as part of the test logic. The consequence of changing this setting is:

    0 worker threads:
       - current (i.e. single-threaded) sequential mode
       - 1 x IO thread and 1 x SQL thread
       - SQL thread both reads and executes the events
    1 worker thread:
       - sequential mode
       - 1 x IO thread, 1 x Coordinator SQL thread and 1 x Worker thread
       - coordinator reads the event and hands it to the worker, who executes it
    2+ worker threads:
       - parallel execution
       - 1 x IO thread, 1 x Coordinator SQL thread and 2+ Worker threads
       - coordinator reads events and hands them to the workers, who execute them

    Results

    Figure 1 below shows that Multi-Threaded Slaves deliver ~5x higher replication performance when configured with 10 worker threads, with the load evenly distributed across the 10 schemas. This result is compared to the current replication implementation, which is based on a single SQL thread only (i.e. zero worker threads).

    Figure 1: 5x Higher Performance with Multi-Threaded Slaves

    The following figure shows more detailed results, with QPS sampled and reported as the worker threads are incremented. The raw numbers behind this graph are reported in the Appendix section of this post.

    Figure 2: Detailed Results

    As the results above show, the configuration does not scale noticeably from 5 to 9 worker threads. When configured with 10 worker threads, however, scalability increases significantly. The conclusion therefore is that it is desirable to configure the same number of worker threads as schemas. Other conclusions from the results:

    · Running with 1 worker compared to zero workers just introduces overhead without the benefit of parallel execution.
    · As expected, having more workers than schemas adds no visible benefit.

    Aside from what is shown in the results above, testing also demonstrated that the following settings had a very positive effect on slave performance:

        relay-log-info-repository=TABLE
        master-info-repository=TABLE

    For 5+ workers, it was up to 2.3 times as fast to run with TABLE compared to FILE.

    Conclusion

    As the results demonstrate, Multi-Threaded Slaves deliver significant performance increases to MySQL replication when handling multiple schemas. This and the other replication enhancements introduced in MySQL 5.6 are fully available for you to download and evaluate now from the MySQL Developer site (select the Development Release tab). You can learn more about MySQL 5.6 from the documentation. Please don't hesitate to comment on this or other replication blogs with feedback and questions.

    Appendix – Detailed Results

    Read the article

  • Fast single thread comet server, possible?

    - by Pepijn
    I recently encountered a few cases where a server would distribute an event stream that contains the exact same data for all listeners, such as a 'recent activity' box. It occurred to me that it is quite strange and inefficient to have a server like Apache run a thread processing and querying the database for every single comet stream containing the same data. What I would do for those global (not per-user) streams is run a single thread that continuously emits data, and a new (green) thread for every new request that outputs the headers and then 'merges' into the main thread. Is it possible for one thread to serve multiple sockets, or for multiple clients to listen to the same socket? An example:

        o = event       # threads    received
        |  a b          # 3
        o / /           # 3
        |/_/  |         # 1
        o     c         # 2          a, b
        | /
        o/              # 2          a, b
        o               # 1          a, b, c
        |               # connection b closed
        o               # 1          a, c

    Does something like this exist? Would it work? Is it possible to do? Disclaimer: I'm not a server expert.
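
    For what it's worth, a single thread can indeed serve many sockets: it just has to hold all the client connections and write the same event bytes to each of them. The sketch below is a deliberately minimal C# illustration of that idea, not a production comet server; one thread accepts pending connections and broadcasts each event to every open client, and all names and the one-second "event" cadence are made up for the example.

        using System;
        using System.Collections.Generic;
        using System.Net;
        using System.Net.Sockets;
        using System.Text;
        using System.Threading;

        class SingleThreadBroadcaster
        {
            static void Main()
            {
                var listener = new TcpListener(IPAddress.Any, 8080);
                listener.Start();
                var clients = new List<TcpClient>();
                int eventNumber = 0;

                while (true)                              // one thread does everything
                {
                    while (listener.Pending())            // accept any new listeners
                        clients.Add(listener.AcceptTcpClient());

                    byte[] payload = Encoding.UTF8.GetBytes("event " + (++eventNumber) + "\n");

                    // broadcast the same event to every connected client
                    for (int i = clients.Count - 1; i >= 0; i--)
                    {
                        try
                        {
                            clients[i].GetStream().Write(payload, 0, payload.Length);
                        }
                        catch (Exception)                 // client went away: drop it
                        {
                            clients[i].Close();
                            clients.RemoveAt(i);
                        }
                    }

                    Thread.Sleep(1000);                   // stand-in for "wait for the next event"
                }
            }
        }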

    Read the article

  • Reading option values in select

    - by user281180
    I have a select as follows:

        <select id="SelectBox" multiple="multiple">
        <% foreach (var item in Model.Name) { %>
            <option value="<%= item.Value %>"><%= item.Text %></option>
        <% } %>
        </select>

    I have a function in jQuery that has to read both the text and the value. I need the values in an array so that I can display them in a table with one column Id and another column Text. My problem is that I'm not able to retrieve each value separately; I get the text in one line, test1test2test3.

        function read() {
            $("#SelectBox").each(function() {
                var value = $(this).val();
                var text = $(this).text();
                alert(value); alert(text);
            });
        }

    Read the article

  • I need an algorithm to find the best path

    - by user242635
    I need an algorithm to find the best solution to a path-finding problem. The problem can be stated as follows: at the starting point I can proceed along multiple different paths, and at each step there are again multiple possible choices of where to proceed. Two checks are possible at each step: a boundary condition that determines whether a path is acceptable or not, and a condition that determines whether the path has reached the final destination and can be selected as the best one. At each step a number of paths can be eliminated, letting only the "good" paths grow. I hope this sufficiently describes my problem, and also a possible brute-force solution. My question is: is brute force the best/only solution to the problem? I would also appreciate some hints about the best coding structure for the algorithm.
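
    As a neutral illustration of the brute-force-with-pruning approach the question describes (not an answer taken from the source), here is a short C# sketch: it grows all partial paths level by level, discards those that fail a caller-supplied boundary condition, and collects those that satisfy the goal condition. The delegate names and the List-of-steps representation are assumptions made for the example.

        using System;
        using System.Collections.Generic;

        static class PrunedSearch
        {
            // Grow paths step by step, pruning with isAcceptable and collecting with isGoal.
            public static List<List<T>> FindPaths<T>(
                T start,
                Func<List<T>, IEnumerable<T>> nextSteps,   // choices available from a partial path
                Func<List<T>, bool> isAcceptable,          // boundary condition
                Func<List<T>, bool> isGoal)                // final-destination condition
            {
                var results = new List<List<T>>();
                var frontier = new List<List<T>> { new List<T> { start } };

                while (frontier.Count > 0)
                {
                    var nextFrontier = new List<List<T>>();
                    foreach (var path in frontier)
                    {
                        foreach (var step in nextSteps(path))
                        {
                            var extended = new List<T>(path) { step };
                            if (!isAcceptable(extended)) continue;   // prune "bad" paths early
                            if (isGoal(extended)) results.Add(extended);
                            else nextFrontier.Add(extended);
                        }
                    }
                    frontier = nextFrontier;
                }
                return results;
            }
        }

    Picking the "best" of the returned paths is then a simple comparison over results; if the boundary condition prunes aggressively, this stays far cheaper than enumerating every path.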

    Read the article

  • Open Source Queuing Solutions for peek, mark as done and then remove

    - by user330114
    I am looking at open source queuing platforms that allow me to do the following: I have multiple producers and multiple consumers putting data into a queue in a multithreaded environment, with this specific use case for consumers:

    1. Peek at a message from the queue (which should mark the message as invisible on the queue so that other consumers cannot consume the same message).
    2. The consumer works on the message and, if it is able to do the work successfully, marks the message as consumed, which should permanently delete it from the queue.
    3. If the consumer dies abruptly after peeking at the message, or fails to acknowledge successful consumption within a certain timeout, the message is made visible on the queue again so that another consumer can pick it up.

    I've been looking at RabbitMQ, HornetQ, and ActiveMQ, but I'm not sure I can get this functionality out of the box. Any recommendations on a system that gives me this functionality?
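
    To make the requested semantics concrete, here is a small, purely illustrative in-memory sketch in C# of the peek / acknowledge / visibility-timeout behaviour described above. Real brokers such as RabbitMQ or ActiveMQ provide the equivalent through unacknowledged-message redelivery; nothing below is their API, and all names are invented for the example.

        using System;
        using System.Collections.Concurrent;

        class VisibilityQueue<T>
        {
            private readonly ConcurrentQueue<(Guid Id, T Item)> visible = new ConcurrentQueue<(Guid, T)>();
            private readonly ConcurrentDictionary<Guid, (T Item, DateTime Deadline)> inFlight =
                new ConcurrentDictionary<Guid, (T, DateTime)>();
            private readonly TimeSpan visibilityTimeout;

            public VisibilityQueue(TimeSpan visibilityTimeout) => this.visibilityTimeout = visibilityTimeout;

            public void Enqueue(T item) => visible.Enqueue((Guid.NewGuid(), item));

            // "Peek": hand the message to one consumer and hide it from the others.
            public bool TryPeek(out Guid receipt, out T item)
            {
                if (visible.TryDequeue(out var entry))
                {
                    inFlight[entry.Id] = (entry.Item, DateTime.UtcNow + visibilityTimeout);
                    receipt = entry.Id;
                    item = entry.Item;
                    return true;
                }
                receipt = Guid.Empty;
                item = default;
                return false;
            }

            // "Mark as done": permanently remove the message.
            public void Acknowledge(Guid receipt) => inFlight.TryRemove(receipt, out _);

            // Run periodically: anything not acknowledged in time becomes visible again.
            public void RequeueExpired()
            {
                foreach (var pair in inFlight)
                {
                    if (DateTime.UtcNow > pair.Value.Deadline && inFlight.TryRemove(pair.Key, out var entry))
                        visible.Enqueue((Guid.NewGuid(), entry.Item));
                }
            }
        }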

    Read the article

  • Power Law distribution for a given exponent in C# using MathNet

    - by Eric Tobias
    Hello! I am currently working on a project where I need to generate multiple values (floats or doubles preferably) that follow a power law distribution with a given exponent. I was advised to use the MathNet.Iridium library to help me. The problem I have is that the documentation is not as explicit as it should be, if it exists at all. I see multiple distributions that fit the general idea of a power law distribution, but I cannot pinpoint a good distribution to use with a certain exponent as a parameter. Does anybody have more experience in this area and could give me some hints or advice?
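
    Independently of which MathNet.Iridium class turns out to be appropriate, a continuous power-law (Pareto-type) variate with density proportional to x^(-alpha) for x >= xmin can be drawn with plain inverse-transform sampling, which makes a handy sanity check. The C# sketch below assumes that parameterisation (alpha > 1, xmin > 0) and does not use any MathNet API.

        using System;

        static class PowerLawSampler
        {
            // Inverse-transform sampling for p(x) proportional to x^(-alpha), x >= xmin, alpha > 1.
            // CDF: F(x) = 1 - (x / xmin)^(1 - alpha), so x = xmin * (1 - u)^(-1 / (alpha - 1)).
            public static double Next(Random rng, double xmin, double alpha)
            {
                double u = rng.NextDouble();                       // uniform in [0, 1)
                return xmin * Math.Pow(1.0 - u, -1.0 / (alpha - 1.0));
            }

            static void Main()
            {
                var rng = new Random(42);
                for (int i = 0; i < 5; i++)
                    Console.WriteLine(Next(rng, xmin: 1.0, alpha: 2.5));
            }
        }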

    Read the article

  • Why doesn't Maven's mvn clean ever work the first time?

    - by hoffmandirt
    Nine times out of ten, when I run mvn clean on my projects I get a build error, and I have to run mvn clean multiple times until the error goes away. Does anyone else experience this? Is there any way to fix it within Maven? If not, how do you get around it? I wrote a bat file that deletes the target folders and that works well, but it's not practical when you are working on multiple projects. I am using Maven 2.2.1.

        [ERROR] BUILD ERROR
        [INFO] ------------------------------------------------------------------------
        [INFO] Failed to delete directory: C:\Documents and Settings\user\My Documents\software-development\a\b\c\application-domain\target.
               Reason: Unable to delete directory C:\Documents and Settings\user\My Documents\software-development\a\b\c\application-domain\target\classes\com\a\b
        [INFO] ------------------------------------------------------------------------
        [INFO] For more information, run Maven with the -e switch
        [INFO] ------------------------------------------------------------------------
        [INFO] Total time: 6 seconds
        [INFO] Finished at: Fri Oct 23 15:22:48 EDT 2009
        [INFO] Final Memory: 11M/254M
        [INFO] ------------------------------------------------------------------------

    Read the article

  • Binding Many-to-Many Core Data relationships in UI

    - by Kevin
    Basically my setup is this. I have a many-to-many relationship in Core Data where a student entity can have multiple courses, and a course entity can have multiple students. My problem is in trying to figure out how to bind this relationship to the UI in Interface Builder. I want to be able to add courses to a course array controller, then have those courses displayed in a popup menu in an NSTableView in the Edit Student window, where you can add courses to a student. This is what I have so far: http://vimeo.com/10671726 - it's probably easier to understand from the video. Thanks

    Read the article

  • How can I model the data in a multi-language data editor in WPF with MVVM?

    - by Patrick Szalapski
    Are there any good practices to follow when designing a model/ViewModel to represent data in an app that will view/edit that data in multiple languages? Our top-level class--let's call it Course--contains several collection properties, say Books and TopicsCovered, each of which might have a collection property among its data. For example, the data needs to represent course1.Books.First().Title in different languages, and course1.TopicsCovered.First().Name in different languages. We want an app that can edit any of the data for one given course in any of the available languages--as well as edit non-language-specific data, perhaps the Author of a Book (i.e. course1.Books.First().Author). We are having trouble figuring out how best to set up the model to enable binding in the XAML view.

    For example, do we replace (in the single-language model) each String with a collection of LanguageSpecificString instances? So to get the author in the current language:

        course1.Books.First().Author.Where(Function(a) a.Language = CurrentLanguage).SingleOrDefault

    If we do that, we cannot easily bind to any one given language value, only to the collection of language values, such as in an ItemsControl.

        <TextBox Text="{Binding Author.???}" /> <!-- no way to bind to the current-language author -->

    Or do we replace the top-level Course class with a collection of language-specific Courses? So to get the author in the current language:

        course1.GetLanguage(CurrentLanguage).Books.First.Author

    If we do that, we can only easily work with one language at a time; we might want a view to show one language and let the user edit the other.

        <TextBox Text="{Binding Author}" />   <!-- good -->
        <TextBlock Text="{Binding ???}" />    <!-- no way to bind to the other-language author -->

    Also, that has the disadvantage of not representing language-neutral data as such; every property (such as Author) would seem to exist in multiple languages, even non-string properties. Is there an option in between those two? Is there another way that we aren't thinking of? I realize this is somewhat vague, but it seems like a fairly common problem to design for. Note: this is not a question about providing a multilingual UI, but rather about actually editing multi-language data in a flexible way.
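
    One illustration of the first option (replacing each translatable String with a small language-aware type) is sketched below in C#. It is only a sketch of the modelling idea, not a recommendation from the source: a LocalizedString keeps one value per language, exposes an indexer for binding to a specific language, and a Current property (driven by a static CurrentLanguage) for binding to whatever language is selected right now. All names here are invented for the example, and the static CurrentLanguage is a simplification.

        using System.Collections.Generic;
        using System.ComponentModel;

        public class LocalizedString : INotifyPropertyChanged
        {
            private readonly Dictionary<string, string> values = new Dictionary<string, string>();

            public static string CurrentLanguage { get; set; } = "en";

            // Bindable per-language access, e.g. a binding path of Title[de].
            public string this[string language]
            {
                get { values.TryGetValue(language, out var v); return v; }
                set
                {
                    values[language] = value;
                    PropertyChanged?.Invoke(this, new PropertyChangedEventArgs("Item[]"));
                    PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(Current)));
                }
            }

            // Bindable "current language" access, e.g. a binding path of Title.Current.
            public string Current
            {
                get => this[CurrentLanguage];
                set => this[CurrentLanguage] = value;
            }

            public event PropertyChangedEventHandler PropertyChanged;
        }

        public class Book
        {
            public LocalizedString Title { get; } = new LocalizedString();  // translatable
            public string Author { get; set; }                              // language-neutral stays a plain string
        }

    The appeal of this shape is that language-neutral properties stay plain, while a view can bind one control to Title.Current and another to Title[someOtherLanguage] side by side.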

    Read the article

  • Combine static files or load in parallel

    - by Niall Collins
    I am at present introducing code to my site to combine CSS and JavaScript files. Is there a way, without having to include an external library, to load JavaScript asynchronously or in parallel? I have read on some blogs that combining files can be counterproductive, as the combined HTTP request can be large and it is better to load multiple files in parallel. Opinions on this? I am caching my JavaScript/CSS, and would have thought it was better to combine rather than have multiple HTTP requests.

    Read the article

  • Track pageviews in Google Analytics on partial url of Grails application

    - by Pieter van Gent
    For my Grails application I want to set up Google Analytics to track only "partial" URLs. I'll explain: a typical Grails URL consists of the following parts: domain + application-name + controller + action + id, e.g. www.mydomain.com/myapp/controller/action/12345. As far as I understand, for Google Analytics the page to be tracked is identified by the entire URL. For my purposes I'm not interested in the id part of the URL: I want to know which actions have been performed, but I don't need to know for which id the action was executed. And of course I would like a generic solution, because I have multiple controllers and multiple actions... Maybe some kind of filter stating "I want to track pages 3 levels deep (/myapp/controller/action)" would do? Or a filter stating "exclude everything from the URL after the last /"? Any help would be much appreciated. Kind regards, Pieter

    Read the article

  • How to create the following pivot table?

    - by Vamshi
    Hi! Can we create a pivot table with multiple columns, where each column header contains multiple rows? For example, given the following database table:

        BatchID   BatchName   Chemical   Value
        --------------------------------------
        BI-1      BN-1        CH-1       1
        BI-2      BN-2        CH-2       2

    I need to display it in an Excel sheet like this:

                  BI-1      BI-2
                  BN-1      BN-2
        ------------------------------
        CH-1      1         null
        ------------------------------
        CH-2      null      2
        ------------------------------

    Here BI-1 and BN-1 are two rows within a single column header, and I need to display the chemical value in the corresponding row. Could you please help me solve this problem? Thank you.

    Read the article

  • Communication between Page and User Controls

    - by Narmatha Balasundaram
    I have a main page that has multiple user controls on it. All the user controls have static text plus some data to be retrieved from the DB. The asynchronous DB calls are combined into a single call at the page level, to avoid multiple calls (in the different user controls) fetching the same data. I want the page and user controls to load initially with the static text and later refresh their contents with the data obtained from the server. To sum it up: page loads -> async call fired on page load -> data received back from the server (XML/text/whatever) -> user controls load this data using AJAX. What methods do I have to let the user controls know that I have the data from the server and they need to update with this data?
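
    One common way to wire this up (shown here only as an illustrative C# sketch, not as the asker's or ASP.NET's prescribed pattern) is to have each user control implement a small notification interface; when the page's single data call completes, the page walks its control tree and pushes the data to every control that implements it. The interface and method names below are invented for the example.

        using System.Web.UI;

        // Implemented by any user control that wants the shared data.
        public interface ISharedDataConsumer
        {
            void OnSharedDataAvailable(string data);   // data could be XML, JSON, etc.
        }

        public partial class MainPage : Page
        {
            // Called by whatever completes the single page-level data call.
            private void DistributeSharedData(string data)
            {
                NotifyConsumers(this, data);
            }

            // Recursively push the data to every user control that asked for it.
            private static void NotifyConsumers(Control parent, string data)
            {
                foreach (Control child in parent.Controls)
                {
                    if (child is ISharedDataConsumer consumer)
                        consumer.OnSharedDataAvailable(data);
                    NotifyConsumers(child, data);
                }
            }
        }

    Each control can then render its refreshed markup (or register a client script) inside OnSharedDataAvailable, keeping the single-fetch logic entirely on the page.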

    Read the article

  • How can you toggle between two sets of values per data series in flot?

    - by Jedidja
    flot has built-in support for multiple data series (sample code) and also dual axes (sample code). Assuming multiple data series (water, electricity, etc.) that each have an amount (usage) and a dollar value (the charge for that usage), what would be the best way to use flot to display either the amounts or the dollar values for all the data series, while still supporting toggling the display of each individual series? The idea is to send down all the data in one GET request and then let the client take care of everything else in JavaScript. Ideally we could use triplets somehow {date, amount, charge}, and then possibly split that into two arrays for flot.

    Read the article

  • Can I use array_push on a SESSION array in php?

    - by zeckdude
    I have an array that I want on multiple pages, so I made it a SESSION array. I want to add a series of names, and then on another page I want to be able to use a foreach loop to echo out all the names in that array. This is the session variable:

        $_SESSION['names']

    I want to add a series of names to that array using array_push, like this:

        array_push($_SESSION['names'], $name);

    I am getting this error:

        array_push() [function.array-push]: First argument should be an array

    Can I use array_push to put multiple values into that array? Or perhaps there is a better, more efficient way of doing what I am trying to achieve?

    Read the article

  • Socket: can an asynchronous Receive return without reading all the bytes I asked for?

    - by NorthWind
    Hi, I was reading an article on Vadym Stetsiak's blog about how to transfer variable-length messages with async sockets (URL: http://vadmyst.blogspot.com/2008/03/part-2-how-to-transfer-fixed-sized-data.html). He says: "What to expect when multiple messages arrive at the server? While dealing with multiple messages one has to remember that the receive operation can return an arbitrary number of bytes read from the net. Typically that size is from 0 to the specified buffer length in the Receive or BeginReceive methods." So, even if I tell BeginReceive to read 100 bytes, it may read less than that and still complete? I am developing network-enabled software (TCP/IP), and I always receive the exact number of bytes I asked for. I don't even understand the logic: why would Receive complete asynchronously if it didn't get every byte I asked for... just keep waiting. Maybe it has something to do with IP vs TCP? Thank you for your help.
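
    For context on what the blog is describing: because TCP is a byte stream, a single Receive may legitimately return fewer bytes than requested, so code that needs exactly N bytes typically loops until it has them. The synchronous C# sketch below illustrates that pattern (the same idea applies to BeginReceive/EndReceive by re-issuing the receive from the callback); it is an illustration, not code from the cited blog.

        using System;
        using System.Net.Sockets;

        static class SocketExtensions
        {
            // Keep calling Receive until exactly 'count' bytes have been read
            // (or the peer closes the connection).
            public static void ReceiveExactly(this Socket socket, byte[] buffer, int count)
            {
                int received = 0;
                while (received < count)
                {
                    int n = socket.Receive(buffer, received, count - received, SocketFlags.None);
                    if (n == 0)
                        throw new SocketException((int)SocketError.ConnectionReset); // peer closed early
                    received += n;
                }
            }
        }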

    Read the article

  • Retrieve the CAKeyframeAnimationKey's key in AnimationDidStop

    - by JacktheSparrow
    I have multiple CAKeyframeAnimation objects, each added with a unique key, like the following:

        .....
        [myAnimation setValues:images];
        [myAnimation setDuration:1];
        ....
        [myLayer addAnimation:myAnimation forKey:@"unique key"];

    My question is: if I have multiple animations like this, each with a unique key, how do I retrieve their keys in animationDidStop:finished:? I want to be able to do something like this:

        - (void)animationDidStop:(CAAnimation *)animation finished:(BOOL)flag {
            if (..... == @"uniquekey1") {
                // code to handle this specific animation here
            } else if (.... == @"uniquekey2") {
                // code to handle this specific animation here
            }
        }

    Read the article

  • Generate all project dependencies in a single file using gcc -MM flag

    - by Jabez
    Hi all, I want to generate a single dependency file containing the dependencies of all source files, using gcc's -MM flag from a Makefile. I googled for a solution, but everything I found generates multiple dependency files for multiple objects.

        DEPS = make.dep

        $(OBJS): $(SOURCES)
            @$(CC) -MM $(SOURCEs) > $(DEPS)
            @mv -f $(DEPS) $(DEPS).tmp
            @sed -e 's|.$@:|$@:|' < $(DEPS).tmp > $(DEPS)
            @sed -e 's/.*://' -e 's/\\$$//' < $(DEPS).tmp | fmt -1 | \
                sed -e 's/^ *//' -e 's/$$/:/' >> $(DEPS)
            @rm -f $(DEPS).tmp

    But it is not working properly. Please tell me where I'm making the mistake.

    Read the article

  • Are frameworks really necessary for this problem?

    - by The Elite Gentleman
    Hi guys, I'm creating an online music store application (in Java) for signed and unsigned artists for my client. I'm currently using Struts 1.3.10 for my web application (I was recommended Spring, but Spring's setup is broadly similar to Struts'). My database is currently MySQL 5 (or higher) and I'm using the DAO pattern to talk to it. There are limitations to using Struts and plain DAOs (e.g. multiple file upload in Struts is not handled the same way as multiple String parameters, and my DAOs lack a publish-subscribe feature). Is what I'm doing the best way forward, or should I go straight to Hibernate (or similar) and move away from Struts? What performance implications or technical issues have you experienced with the same setup I have? The client doesn't care how I do it as long as it is done.

    Read the article

  • Entity Framework 4 CTP 5 POCO - Many-to-many configuration, insertion, and update?

    - by Saxman
    I really need someone to help me fully understand how to do many-to-many relationships with Entity Framework 4 CTP 5, POCO. I need to understand three things: 1. how to configure my model to indicate that some tables are many-to-many; 2. how to properly do an insert; 3. how to properly do an update. Here are my current models:

        public class MusicSheet {
            [Key]
            public int ID { get; set; }
            public string Title { get; set; }
            public string Key { get; set; }
            public virtual ICollection<Author> Authors { get; set; }
            public virtual ICollection<Tag> Tags { get; set; }
        }

        public class Author {
            [Key]
            public int ID { get; set; }
            public string Name { get; set; }
            public string Bio { get; set; }
            public virtual ICollection<MusicSheet> MusicSheets { get; set; }
        }

        public class Tag {
            [Key]
            public int ID { get; set; }
            public string TagName { get; set; }
            public virtual ICollection<MusicSheet> MusicSheets { get; set; }
        }

    As you can see, a MusicSheet can have many Authors or Tags, and an Author or Tag can have many MusicSheets. Again, my questions are: 1. what to do in the EntityTypeConfiguration to set up the relationship between them, as well as the mapping to the table/object that represents the many-to-many relationship; 2. how to insert a new music sheet (which might have multiple authors or multiple tags); 3. how to update a music sheet. For example, I might set TagA and TagB on MusicSheet1, but later need to change the tags to TagA and TagC. It seems like I need to first check whether a tag already exists; if not, insert the new tag and then associate it with the music sheet (so that I don't re-insert TagA). Or is this something already handled by the framework? Thank you very much. I really hope to fully understand it rather than just doing it without understanding what's going on, especially on #3.
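
    For what the configuration can look like: the sketch below uses the DbModelBuilder fluent API in the form it took in EF 4.1 (the CTP5 syntax was close but changed slightly before release), so treat it as a starting point rather than exact CTP5 code. The join-table and column names are assumptions. The second part sketches the usual insert/update pattern of reusing an existing Tag (or creating it if missing) before adding it to the sheet's collection, and letting SaveChanges write the join rows.

        using System.Data.Entity;          // EF 4.1-style DbContext API; CTP5 used a very similar shape
        using System.Linq;

        public class MusicStoreContext : DbContext
        {
            public DbSet<MusicSheet> MusicSheets { get; set; }
            public DbSet<Author> Authors { get; set; }
            public DbSet<Tag> Tags { get; set; }

            protected override void OnModelCreating(DbModelBuilder modelBuilder)
            {
                // Many-to-many MusicSheet <-> Tag via an (assumed) join table.
                modelBuilder.Entity<MusicSheet>()
                    .HasMany(s => s.Tags)
                    .WithMany(t => t.MusicSheets)
                    .Map(m =>
                    {
                        m.ToTable("MusicSheetTags");
                        m.MapLeftKey("MusicSheetID");
                        m.MapRightKey("TagID");
                    });
                // An equivalent mapping would be declared for MusicSheet <-> Author.
            }
        }

        public static class MusicSheetTagging
        {
            // Insert/update: reuse an existing tag if present, otherwise create it,
            // then let EF write the join-table rows on SaveChanges.
            public static void SetTags(MusicStoreContext db, MusicSheet sheet, params string[] tagNames)
            {
                sheet.Tags.Clear();
                foreach (var name in tagNames)
                {
                    var tag = db.Tags.SingleOrDefault(t => t.TagName == name)
                              ?? new Tag { TagName = name };
                    sheet.Tags.Add(tag);
                }
                db.SaveChanges();
            }
        }

    The sketch assumes the sheet is attached to the context and its Tags collection is initialized; the key point for question #3 is that attaching an existing Tag instance (rather than newing one up each time) is what prevents duplicate TagA rows.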

    Read the article
