Search Results

Search found 13889 results on 556 pages for 'results'.


  • How to use jQuery .each() - I get a value, but it's the wrong one

    - by Ankur
    I am returning a JSON object (as a string) through a servlet. The JSON object looks like this:

        {"3":"Martin Luther","30":"Boris Becker","32":"Joseph Goebels","19":"Leonardo Da Vinci","31":"Adolf Hitler"}

    My jQuery looks like this (the submission of data is correct, because I get a proper-looking result from the servlet):

        $.ajax({
            type: "GET",
            url: "MyServlet",
            data: queryString + "count=" + variables,
            success: function(resultObj) {
                $.each(resultObj, function(key, value) {
                    $("#resultCount").html(key + ", " + value);
                });
            }
        });

    However, when I try to print the results (that is, the variables key and value), I get a number for the key, but not a number from the JSON object, and an empty string instead of the value. Essentially, the question is how to "extract" the information from a JSON object.

    Read the article

  • How to group a complex list of objects using LINQ?

    - by Daoming Yang
    I want to select and group the products, and rank them by the number of times they occur. For example, I have an OrderList; each Order object has an OrderProductVariantList (OrderLineList), each OrderProductVariant object has a ProductVariant, and the ProductVariant object has a Product object which contains the product information. A friend helped me with the following code. It compiles, but it does not return any value/result. When I used the watch window on the query, it gave me "The name 'query' does not exist in the current context". Can anyone help me? Many thanks.

        var query = orderList
            .SelectMany(o => o.OrderLineList)   // results in IEnumerable<OrderProductVariant>
            .Select(opv => opv.ProductVariant)
            .Select(pv => pv.Product)
            .GroupBy(p => p)
            .Select(g => new { Product = g.Key, Count = g.Count() });
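    For reference, a minimal self-contained sketch of the same grouping, using hypothetical stand-in types for the domain model described in the question, with an OrderByDescending added to rank products by how often they occur:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Product { public string Name; }
        class ProductVariant { public Product Product; }
        class OrderProductVariant { public ProductVariant ProductVariant; }
        class Order { public List<OrderProductVariant> OrderLineList; }

        class Demo
        {
            static void Main()
            {
                var apple = new Product { Name = "Apple" };
                var pear = new Product { Name = "Pear" };
                var orderList = new List<Order>
                {
                    new Order { OrderLineList = new List<OrderProductVariant>
                    {
                        new OrderProductVariant { ProductVariant = new ProductVariant { Product = apple } },
                        new OrderProductVariant { ProductVariant = new ProductVariant { Product = pear } },
                    }},
                    new Order { OrderLineList = new List<OrderProductVariant>
                    {
                        new OrderProductVariant { ProductVariant = new ProductVariant { Product = apple } },
                    }},
                };

                var query = orderList
                    .SelectMany(o => o.OrderLineList)           // flatten to all order lines
                    .Select(opv => opv.ProductVariant.Product)  // project to the Product
                    .GroupBy(p => p)                            // groups by reference equality here
                    .Select(g => new { Product = g.Key, Count = g.Count() })
                    .OrderByDescending(x => x.Count);           // rank by occurrence

                foreach (var x in query)
                    Console.WriteLine("{0}: {1}", x.Product.Name, x.Count);
            }
        }

    Note that GroupBy(p => p) uses default equality; with entity objects that are distinct instances, grouping by a key such as a product Id may be needed.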

    Read the article

  • Unix: how to have delimiter as "\t&\t" in paste-tool?

    - by HH
    The results are in clean files. I want to get them into LaTeX table format with paste, so how can I have "\t&\t" as the delimiter? Or is there some LaTeX tool for this? Pasting columnwise, trying to get the \t&\t delimiter:

        $ paste -d'\t\&\t' d d_powered_-2
        rad
        5.0    400.0&384.5
        7.5    204.1&184.5
        10.0   100.0&115.5
        15.0   44.4&58.2
        20.0   25.0&45.0
        25.0   16.0&38.8
        30.0   11.1&33.3
        35.0   8.2&34.4
        37.0   7.3&34.1
        40.0   6.2&34.1

        $ paste d d_powered_-2
        rad
        5.0    400.0   384.5
        7.5    204.1   184.5
        10.0   100.0   115.5
        15.0   44.4    58.2
        20.0   25.0    45.0
        25.0   16.0    38.8
        30.0   11.1    33.3
        35.0   8.2     34.4
        37.0   7.3     34.1
        40.0   6.2     34.1

    Read the article

  • group expression in jasper reports

    - by ed1t
    I have a report which has 5 columns on each page, and I have a group defined which shows columns related to:

        A | B | C | D | E  - main columns
        X | Y | Z          - group (A is the key)

    My query is ORDER BY A, but when the report is displayed it does not print the results on the next page when A changes. This is how I have the group defined:

        <group name="A" isResetPageNumber="true">
            <groupExpression><![CDATA[$F{A}]]></groupExpression>
            <groupHeader>
                <band/>
            </groupHeader>
            <groupFooter>
                <band/>
            </groupFooter>
        </group>

    Does A need to be part of the group?

    Read the article

  • Dynamic client-side validation

    - by Noel
    Is anyone doing dynamic client-side validation, and if so, how are you doing it? I have a view where client-side validation is enabled through the jQuery validator (see below):

        <script src="../../Scripts/jquery-1.3.2.js" type="text/javascript"></script>
        <script src="../../Scripts/jquery.validate.js" type="text/javascript"></script>
        <script src="../../Scripts/MicrosoftMvcJQueryValidation.js" type="text/javascript"></script>
        <% Html.EnableClientValidation(); %>

    This results in JavaScript code being generated on my page which calls validate when I click the submit button:

        function __MVC_EnableClientValidation(validationContext) {
            ....
            theForm.validate(options);
        }

    If I want validation to occur on a textbox's onblur event, how can I get this to work?

    Read the article

  • 3 dimensional bin packing algorithms

    - by BuschnicK
    I'm faced with a three-dimensional bin packing problem and am currently conducting some preliminary research into which algorithms/heuristics currently yield the best results. Since the problem is NP-hard, I do not expect to find the optimal solution in every case, but I was wondering:

    1. What are the best exact solvers? Branch and bound? What problem instance sizes can I expect to solve with reasonable computing resources?
    2. What are the best heuristic solvers?
    3. What off-the-shelf solutions exist for conducting some experiments?

    Read the article

  • MIPS assembly: big and little endian confusion

    - by Barney
    I've run the following code snippet on the MIPS MARS simulator. That simulator is little-endian, so the results are as follows:

        lui $t0,0x1DE       # $t0 = 0x01DE0000
        ori $t0,$t0,0xCADE  # $t0 = 0x01DECADE
        lui $t1,0x1001      # $t1 = 0x10010000
        sw  $t0,200($t1)    # $t1 + 200 bytes = 0x01DECADE
        lw  $t2,200($t1)    # $t2 = 0x01DECADE

    So on a little-endian MIPS simulator, the value of $t2 at the end of the program is 0x01DECADE. If the simulator were big-endian, what would the value be? Would it be 0xDECADE01, or would it still be 0x01DECADE?
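    As an aside on the underlying question: a word-sized store followed by a word-sized load at the same address cannot observe byte order; endianness only becomes visible when the word is reinterpreted byte by byte. A small C# illustration of that idea (not MIPS, and not from the question):

        using System;

        class EndianDemo
        {
            static void Main()
            {
                uint word = 0x01DECADE;

                // A round trip through memory at the same width is
                // endian-agnostic: you always get the same value back.
                byte[] stored = BitConverter.GetBytes(word);
                uint loaded = BitConverter.ToUInt32(stored, 0);
                Console.WriteLine("0x{0:X8}", loaded);     // 0x01DECADE on any host

                // Byte order is only visible when inspecting individual bytes.
                Console.WriteLine(BitConverter.IsLittleEndian
                    ? "little-endian host: low byte first"
                    : "big-endian host: high byte first");
                Console.WriteLine(BitConverter.ToString(stored)); // e.g. DE-CA-DE-01
            }
        }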

    Read the article

  • ScrollPane has type of "movieclip" in attached movieclip

    - by Chris Porter
    Given the following ActionScript code:

        var spw:MovieClip = contentsLayer.attachMovie("ScrollPaneWrapper", "ScrollPaneWrapper123", contentsLayer.getNextHighestDepth());
        var sp_:ScrollPane = spw.sp;

    Here typeof(sp_) == "movieclip", and I can't set any content on it. I've tried exporting it for ActionScript, exporting the wrapper movieclip "ScrollPaneWrapper" with "Export in Frame 1", and all combinations of these options. What's more weird is that I have another Flash project in which I can access the ScrollPane as expected, and I can't tell any difference between the two projects. Casting spw.sp to ScrollPane results in null.

    Read the article

  • How to write an asynchronous LINQ query?

    - by Morgan Cheng
    After reading a bunch of LINQ-related material, I suddenly realized that no articles introduce how to write an asynchronous LINQ query. Suppose we use LINQ to SQL; the statement below is clear. However, if the SQL database responds slowly, the thread using this block of code is held up:

        var result = from item in Products
                     where item.Price > 3
                     select item.Name;

        foreach (var name in result)
        {
            Console.WriteLine(name);
        }

    It seems that the current LINQ query spec doesn't provide support for this. Is there any way to do asynchronous programming with LINQ? I'd like something like a callback notification when the results are ready to use, without any blocking delay on I/O.
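    LINQ to SQL itself exposes no asynchronous execution mode, so the usual workaround is to force the query to execute on a worker thread and invoke a callback once the results are buffered. A minimal LINQ-to-objects sketch of that shape; the helper name and the Product type are illustrative, not from any library:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Threading;

        class Product { public decimal Price; public string Name; }

        static class AsyncQueryDemo
        {
            // Hypothetical helper: runs the query on a ThreadPool thread and
            // hands the buffered results to a callback, so the calling thread
            // never blocks while the data source is slow.
            static void GetExpensiveNamesAsync(IEnumerable<Product> products,
                                               Action<List<string>> callback)
            {
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    // ToList() forces execution here, on the worker thread.
                    var names = (from item in products
                                 where item.Price > 3
                                 select item.Name).ToList();
                    callback(names);
                });
            }

            static void Main()
            {
                var products = new List<Product>
                {
                    new Product { Name = "Tea",    Price = 2m },
                    new Product { Name = "Coffee", Price = 4m },
                };

                using (var done = new ManualResetEvent(false))
                {
                    GetExpensiveNamesAsync(products, names =>
                    {
                        foreach (var name in names)
                            Console.WriteLine(name);   // prints "Coffee"
                        done.Set();
                    });
                    done.WaitOne();   // demo only: keep Main alive for the callback
                }
            }
        }

    Note the callback runs on the worker thread, so a real application would marshal back to the UI thread if it updates any UI.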

    Read the article

  • Replace empty cells with logical 0's before cell2mat in MATLAB

    - by Doresoom
    I've got a cell array of empty cells and ones that I want to convert to a logical array, where the empty cells are zeros. When I use cell2mat, the empty cells are ignored, and I end up with a matrix solely of ones, with no reference to the previous index they held. Is there a way to perform this operation without using loops? Example code:

        for n = 1:5              % generate sample cell array
            mycellarray{n} = 1;
        end
        mycellarray{2} = []      % remove one value for testing

    Things I've tried:

        mylogicalarray = logical(cell2mat(mycellarray));

    which results in [1,1,1,1], not [1,0,1,1,1].

        for n = 1:length(mycellarray)
            if isempty(mycellarray{n})
                mycellarray{n} = 0;
            end
        end
        mylogicalarray = logical(cell2mat(mycellarray));

    which works, but uses loops.

    Read the article

  • Reinventing the wheel: Node.JS/event-driven programming vs. functional programming?

    - by ivanTheTerrible
    There's a lot of hype lately about Node.JS, an event-driven framework using JavaScript callbacks. To my limited understanding, its primary advantage seems to be that you don't have to wait for each step sequentially (for example, you can fetch the SQL results while calling other functions too). So my question is: how is this different from, or better than, functional languages like CL, Haskell, Clojure, etc.? If it's not better, then why don't people just use functional languages (instead of reinventing the wheel with JavaScript)? Please note that I have no experience in either Node.JS or functional programming, so some basic explanation would be helpful.

    Read the article

  • Disable text selection UITextView

    - by marcio
    Hello, I want to disable text selection on a UITextView. What I've done so far:

        - (BOOL)canPerformAction:(SEL)action withSender:(id)sender {
            [UIMenuController sharedMenuController].menuVisible = NO;
            if (action == @selector(paste:)) return NO;
            if (action == @selector(select:)) return NO;
            if (action == @selector(selectAll:)) return NO;
            return NO;
        }

    This way I set the UIMenuController to hidden and put a stop to text copying, but the text selection is still visible. Google results (also Stack Overflow) lead me to no solution. Has someone already faced the same problem? Any ideas? Thanks in advance, marcio

    Read the article

  • A missing variable in a switch statement?!

    - by mechhfly
    Hi folks, I have a strange issue: a variable seems to go missing during a pass through a case statement. I have a function like so:

        function checklink($var0, $var1, $var2) {
            switch ($var0) {
                case "case1":
                    print $var2; // code uses $var2 successfully
                case "case2":
                    print $var2; // variable has disappeared!
            }
        }

    Essentially what I am doing is constructing a string from the last variable based on the first. If I run the code in which the first case is true, I get the expected results, but when I run the second case, my variable seems to have vanished. These first two case statements are syntactically the same, and the variable is gathered from the $_GET array (hard-coded into a hyperlink). Any light on this issue? If more explanation is needed, let me know; it's late and my brain is getting mushy. Thanks, my friends.

    Read the article

  • Git diff gone mad?

    - by dr Hannibal Lecter
    I'm trying to figure out what's going on with my local Git repo:

    1. I edit a file. Git reports everything has changed in the file (I only changed one line).
    2. At first I think "must be a newline problem", but it's not.
    3. I do a diff in TortoiseGit; everything looks fine.
    4. I do a diff with NetBeans (Git plugin); everything seems fine.
    5. I do a reset, back up the file, modify it; Git again reports everything has changed.
    6. I do a binary compare in Total Commander; the files have no differences except for the single line I changed.
    7. I do a hard reset again. Git tells me it was done successfully.
    8. git status still says my file has changed.
    9. I diff the thing and there are no differences - but git says there are.

    I've tried using both git bash and the GUI, with the same results (I'm on Windows). Any clues what's going on here?

    Read the article

  • Implementing Security on custom BCS/.net class?

    - by Michael Stum
    I'm implementing a custom BCS model to get data from a backend system. As the backend uses its own user management, I'm accessing it through a service account. All of this works well and allows me to pull data into SharePoint. However, because it's channeled through the service account, everyone can access it, which is bad. Can anyone give me some tips on which method to implement? The backend does not give me NT ACLs, but I wonder if I could just "fake" them somehow? (Essentially saying "this NT group has read access" would be good enough.) I am aware of ISecurityTrimmer2 for search results, but ideally I want to cover security inside the BCS model so that it applies to external lists as well. I want to avoid using secure storage and mapping each individual user to the backend.

    Read the article

  • Should strongly typed partial views on one page in asp.net mvc-2 have one combined view model?

    - by Kai
    Hi guys, I have a question about ASP.NET MVC 2 strongly typed partial views and view models. I was just wondering if I can (or should) have two strongly typed partial views on one page without implementing a whole new view model for that page. For example, I have a page that displays profiles, but also has an inline form to add a quick contact. Each of these entities already has its own view model, i.e. I have a ProfileViewModel and a ContactViewModel. So my view needs two strongly typed partial views: one using an IEnumerable list of ProfileViewModels, and one using a ContactViewModel. Is it possible or desirable to avoid making a third view model, an 'IndexViewModel' for this page, which holds a list of ProfileViewModels and a ContactViewModel? Is not implementing this view model bad practice, or is it tidier because it results in fewer view models? Thanks!
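    For what it's worth, the third view model the question describes costs only a few lines. A sketch under assumed names (the controller, LoadProfiles stub, and view model members are illustrative, not from the question):

        using System.Collections.Generic;
        using System.Web.Mvc;

        // Stand-ins for the view models named in the question.
        public class ProfileViewModel { public string Name { get; set; } }
        public class ContactViewModel { public string Email { get; set; } }

        // The composite 'IndexViewModel': one property per partial on the page.
        public class IndexViewModel
        {
            public IList<ProfileViewModel> Profiles { get; set; }
            public ContactViewModel NewContact { get; set; }
        }

        public class ProfilesController : Controller
        {
            public ActionResult Index()
            {
                var model = new IndexViewModel
                {
                    Profiles = LoadProfiles(),       // hypothetical data access
                    NewContact = new ContactViewModel()
                };
                return View(model);
            }

            private IList<ProfileViewModel> LoadProfiles()
            {
                return new List<ProfileViewModel>(); // stub for the sketch
            }
        }

    Each partial then stays strongly typed against its own slice, e.g. <% Html.RenderPartial("ProfileList", Model.Profiles); %> and <% Html.RenderPartial("QuickContact", Model.NewContact); %>.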

    Read the article

  • Face recognition Library

    - by Janusz
    I'm looking for a free face recognition library for a university project. I'm not looking for face detection; I'm looking for actual recognition. That means finding images that contain specified faces, or libraries that calculate distances between specific faces. I'm using OpenCV for detecting the faces and a rough eigenfaces algorithm for the recognition now. But I thought there should be something out there with better performance than a self-written eigenfaces algorithm. I don't mean speed by performance; I'm looking for a library with better results than a simple eigenfaces approach. I took a look at faint, but it seems that library is not very reusable for my own applications. I'm happy with a library in Python, Java, C++, C, or something like that. The best thing would be if it can run on a Windows machine.

    Read the article

  • SSRS 2005 - Usability analysis - Is SSRS a good option for this scenario?

    - by Sach
    How practical is it to consider SSRS 2005 or SSRS 2008 as a reporting solution for a report that has to show millions of records (anywhere from 3 to 10 million)? Is there any threshold on the size of a report in SSRS? How do I know whether, for a huge report, SSRS will consume all the memory and start paging operations to disk, or give a memory leak error? Even if I keep increasing the memory, how can I be sure that a certain amount will be sufficient for such huge reports on the report server? All the above questions are haunting me because I have a dedicated report server with a decent hardware and OS configuration (8 processors, 8 GB RAM, 64-bit OS and 64-bit SQL Server 2005). Still, my report with around 2 million records takes more than 6 minutes, and going from one page to another takes 3 minutes! My data source is on a separate server, and when I execute only the stored proc there, it returns the results in less than 2 minutes.

    Read the article

  • error when installing mysql ruby gem on OSX 10.6.3

    - by kapil.israni
    So I am getting the same issue as mentioned here: http://stackoverflow.com/questions/1366746/gem-install-mysql-failure-in-snow-leopard. But I haven't been able to get it fixed using the answers at that link. Here's a brief history: I had MAMP on my machine, but now I have downloaded the latest MySQL from mysql.com and installed version 5.1.46. The new version runs fine, the mysql client is able to connect, and I also have Xcode v3.2.1 (since someone mentioned that it can cause issues). Here's the error:

        Building native extensions.  This could take a while...
        ERROR:  Error installing mysql:
                ERROR: Failed to build gem native extension.

        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby extconf.rb --with-mysql-config=/usr/local/mysql/bin/mysql_config
        mkmf.rb can't find header files for ruby at /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/ruby.h

        Gem files will remain installed in /Library/Ruby/Gems/1.8/gems/mysql-2.8.1 for inspection.
        Results logged to /Library/Ruby/Gems/1.8/gems/mysql-2.8.1/ext/mysql_api/gem_make.out

    Read the article

  • Windows Azure Service Bus Splitter and Aggregator

    - by Alan Smith
    This article will cover basic implementations of the Splitter and Aggregator patterns using the Windows Azure Service Bus. The content will be included in the next release of the "Windows Azure Service Bus Developer Guide", along with some other patterns I am working on. I've taken the pattern descriptions from the book "Enterprise Integration Patterns" by Gregor Hohpe. I bought a copy of the book in 2004, and recently dusted it off when I started to look at implementing the patterns on the Windows Azure Service Bus. Gregor also presented a session in 2011, "Enterprise Integration Patterns: Past, Present and Future", which is well worth a look. I'll be covering more patterns in the coming weeks; I'm currently working on Wire-Tap and Scatter-Gather. There will no doubt be a section on implementing these patterns in my "SOA, Connectivity and Integration using the Windows Azure Service Bus" course.

    There are a number of scenarios where a message needs to be divided into a number of sub messages, and also where a number of sub messages need to be combined to form one message. The splitter and aggregator patterns provide a definition of how this can be achieved. This section will focus on the implementation of basic splitter and aggregator patterns using the Windows Azure Service Bus direct programming model. In BizTalk Server, receive pipelines are typically used to implement the splitter pattern, with sequential convoy orchestrations often used to aggregate messages. In the current release of the Service Bus, there is no functionality in the direct programming model that implements these patterns, so it is up to the developer to implement them in the applications that send and receive messages.

    Splitter

    A message splitter takes a message and splits it into a number of sub messages. As there are different scenarios for how a message can be split into sub messages, message splitters are implemented using different algorithms. The Enterprise Integration Patterns book describes the splitter pattern as follows: "How can we process a message if it contains multiple elements, each of which may have to be processed in a different way? Use a Splitter to break out the composite message into a series of individual messages, each containing data related to one item." The Enterprise Integration Patterns website also provides a description of the Splitter pattern. In some scenarios a batch message could be split into the sub messages that are contained in the batch. The splitting of a message could be based on the message type of the sub message, or on the trading partner that the sub message is to be sent to.

    Aggregator

    An aggregator takes a stream of related messages and combines them together to form one message. The Enterprise Integration Patterns book describes the aggregator pattern as follows: "How do we combine the results of individual, but related messages so that they can be processed as a whole? Use a stateful filter, an Aggregator, to collect and store individual messages until a complete set of related messages has been received. Then, the Aggregator publishes a single message distilled from the individual messages." The Enterprise Integration Patterns website also provides a description of the Aggregator pattern. A common example of the need for an aggregator is in scenarios where a stream of messages needs to be combined into a daily batch to be sent to a legacy line-of-business application. The BizTalk Server EDI functionality provides support for batching messages in this way using a sequential convoy orchestration.

    Scenario

    The scenario for this implementation of the splitter and aggregator patterns is the sending and receiving of large messages using a Service Bus queue. In the current release, the Windows Azure Service Bus supports a maximum message size of 256 KB, with a maximum header size of 64 KB. This leaves a safe maximum body size of 192 KB. The BrokeredMessage class will support messages larger than 256 KB; in fact the Size property is of type long, implying that very large messages may be supported at some point in the future. The 256 KB size restriction is set in the service bus components that are deployed in the Windows Azure data centers. One of the ways of working around this size restriction is to split large messages into a sequence of smaller sub messages in the sending application, send them via a queue, and then reassemble them in the receiving application. This scenario will be used to demonstrate the pattern implementations.

    Implementation

    The splitter and aggregator will be used to provide functionality to send and receive large messages over the Windows Azure Service Bus. In order to make the implementations generic and reusable, they will be implemented as a class library. The splitter will be implemented in the LargeMessageSender class and the aggregator in the LargeMessageReceiver class.

    Implementing the Splitter

    The splitter will take a large brokered message and split it into a sequence of smaller sub messages that can be transmitted over the service bus messaging entities. The LargeMessageSender class provides a Send method that takes a large brokered message as a parameter. The implementation of the class is shown below; console output has been added to provide details of the splitting operation.

        public class LargeMessageSender
        {
            private static int SubMessageBodySize = 192 * 1024;
            private QueueClient m_QueueClient;

            public LargeMessageSender(QueueClient queueClient)
            {
                m_QueueClient = queueClient;
            }

            public void Send(BrokeredMessage message)
            {
                // Calculate the number of sub messages required.
                long messageBodySize = message.Size;
                int nrSubMessages = (int)(messageBodySize / SubMessageBodySize);
                if (messageBodySize % SubMessageBodySize != 0)
                {
                    nrSubMessages++;
                }

                // Create a unique session Id.
                string sessionId = Guid.NewGuid().ToString();
                Console.WriteLine("Message session Id: " + sessionId);
                Console.Write("Sending {0} sub-messages", nrSubMessages);

                Stream bodyStream = message.GetBody<Stream>();
                for (int streamOffset = 0; streamOffset < messageBodySize;
                    streamOffset += SubMessageBodySize)
                {
                    // Get the stream chunk from the large message.
                    long arraySize = (messageBodySize - streamOffset) > SubMessageBodySize
                        ? SubMessageBodySize : messageBodySize - streamOffset;
                    byte[] subMessageBytes = new byte[arraySize];
                    int result = bodyStream.Read(subMessageBytes, 0, (int)arraySize);
                    MemoryStream subMessageStream = new MemoryStream(subMessageBytes);

                    // Create a new message.
                    BrokeredMessage subMessage = new BrokeredMessage(subMessageStream, true);
                    subMessage.SessionId = sessionId;

                    // Send the message.
                    m_QueueClient.Send(subMessage);
                    Console.Write(".");
                }
                Console.WriteLine("Done!");
            }
        }

    The LargeMessageSender class is initialized with a QueueClient that is created by the sending application. When the large message is sent, the number of sub messages is calculated based on the size of the body of the large message. A unique session Id is created to allow the sub messages to be sent as a message session; this session Id will be used for correlation in the aggregator. A for loop is then used to create the sequence of sub messages by creating chunks of data from the stream of the large message. The sub messages are then sent to the queue using the QueueClient. As sessions are used to correlate the messages, the queue used for message exchange must be created with the RequiresSession property set to true.

    Implementing the Aggregator

    The aggregator will receive the sub messages in the message session that was created by the splitter, and combine them to form a single, large message. The aggregator is implemented in the LargeMessageReceiver class, with a Receive method that returns a BrokeredMessage. The implementation of the class is shown below; console output has been added to provide details of the aggregation operation.

        public class LargeMessageReceiver
        {
            private QueueClient m_QueueClient;

            public LargeMessageReceiver(QueueClient queueClient)
            {
                m_QueueClient = queueClient;
            }

            public BrokeredMessage Receive()
            {
                // Create a memory stream to store the large message body.
                MemoryStream largeMessageStream = new MemoryStream();

                // Accept a message session from the queue.
                MessageSession session = m_QueueClient.AcceptMessageSession();
                Console.WriteLine("Message session Id: " + session.SessionId);
                Console.Write("Receiving sub messages");

                while (true)
                {
                    // Receive a sub message.
                    BrokeredMessage subMessage = session.Receive(TimeSpan.FromSeconds(5));

                    if (subMessage != null)
                    {
                        // Copy the sub message body to the large message stream.
                        Stream subMessageStream = subMessage.GetBody<Stream>();
                        subMessageStream.CopyTo(largeMessageStream);

                        // Mark the message as complete.
                        subMessage.Complete();
                        Console.Write(".");
                    }
                    else
                    {
                        // The last message in the sequence is our completeness criteria.
                        Console.WriteLine("Done!");
                        break;
                    }
                }

                // Create an aggregated message from the large message stream.
                BrokeredMessage largeMessage = new BrokeredMessage(largeMessageStream, true);
                return largeMessage;
            }
        }

    The LargeMessageReceiver is initialized with a QueueClient that is created by the receiving application. The Receive method creates a memory stream that will be used to aggregate the large message body. The AcceptMessageSession method on the QueueClient is then called, which will wait for the first message in a message session to become available on the queue. As AcceptMessageSession can throw a timeout exception if no message is available on the queue after 60 seconds, a real-world implementation should handle this accordingly. Once the message session is accepted, the sub messages in the session are received, and their message body streams are copied to the memory stream. Once all the messages have been received, the memory stream is used to create a large message that is then returned to the receiving application.

    Testing the Implementation

    The splitter and aggregator are tested by creating a message sender and a message receiver application. The payload for the large message will be one of the webcast video files from http://www.cloudcasts.net/; the file size is 9,697 KB, well over the 256 KB threshold imposed by the Service Bus. As the splitter and aggregator are implemented in a separate class library, the code used in the sender and receiver consoles is fairly basic. The implementation of the main method of the sending application is shown below.

        static void Main(string[] args)
        {
            // Create a token provider with the relevant credentials.
            TokenProvider credentials =
                TokenProvider.CreateSharedSecretTokenProvider
                (AccountDetails.Name, AccountDetails.Key);

            // Create a URI for the service bus.
            Uri serviceBusUri = ServiceBusEnvironment.CreateServiceUri
                ("sb", AccountDetails.Namespace, string.Empty);

            // Create the MessagingFactory.
            MessagingFactory factory = MessagingFactory.Create(serviceBusUri, credentials);

            // Use the MessagingFactory to create a queue client.
            QueueClient queueClient = factory.CreateQueueClient(AccountDetails.QueueName);

            // Open the input file.
            FileStream fileStream = new FileStream(AccountDetails.TestFile, FileMode.Open);

            // Create a BrokeredMessage for the file.
            BrokeredMessage largeMessage = new BrokeredMessage(fileStream, true);

            Console.WriteLine("Sending: " + AccountDetails.TestFile);
            Console.WriteLine("Message body size: " + largeMessage.Size);
            Console.WriteLine();

            // Send the message with a LargeMessageSender.
            LargeMessageSender sender = new LargeMessageSender(queueClient);
            sender.Send(largeMessage);

            // Close the messaging factory.
            factory.Close();
        }

    The implementation of the main method of the receiving application is shown below.

        static void Main(string[] args)
        {
            // Create a token provider with the relevant credentials.
            TokenProvider credentials =
                TokenProvider.CreateSharedSecretTokenProvider
                (AccountDetails.Name, AccountDetails.Key);

            // Create a URI for the service bus.
            Uri serviceBusUri = ServiceBusEnvironment.CreateServiceUri
                ("sb", AccountDetails.Namespace, string.Empty);

            // Create the MessagingFactory.
            MessagingFactory factory = MessagingFactory.Create(serviceBusUri, credentials);

            // Use the MessagingFactory to create a queue client.
            QueueClient queueClient = factory.CreateQueueClient(AccountDetails.QueueName);

            // Create a LargeMessageReceiver and receive the message.
            LargeMessageReceiver receiver = new LargeMessageReceiver(queueClient);
            BrokeredMessage largeMessage = receiver.Receive();

            Console.WriteLine("Received message");
            Console.WriteLine("Message body size: " + largeMessage.Size);

            string testFile = AccountDetails.TestFile.Replace(@"\In\", @"\Out\");
            Console.WriteLine("Saving file: " + testFile);

            // Save the message body as a file.
            Stream largeMessageStream = largeMessage.GetBody<Stream>();
            largeMessageStream.Seek(0, SeekOrigin.Begin);
            FileStream fileOut = new FileStream(testFile, FileMode.Create);
            largeMessageStream.CopyTo(fileOut);
            fileOut.Close();

            Console.WriteLine("Done!");
        }

    In order to test the application, the sending application is executed, which uses the LargeMessageSender class to split the message and place it on the queue. The sender console shows that the body size of the large message was 9,929,365 bytes, and that the message was sent as a sequence of 51 sub messages. When the receiving application is executed, its console shows that the aggregator received the 51 messages from the message sequence that was created in the sending application. The messages have been aggregated to form a message with a body of 9,929,365 bytes, which is the same as the original large message. The message body is then saved as a file.

    Improvements to the Implementation

    The splitter and aggregator patterns in this implementation were created in order to show the usage of the patterns in a demo, which they do quite well. When implementing these patterns in a real-world scenario, there are a number of improvements that could be made to the design.

    Copying Message Header Properties: When sending a large message using these classes, it would be great if the message header properties in the message that was received were copied from the message that was sent. The sending application may well add information to the message context that will be required in the receiving application. When the sub messages are created in the splitter, the header properties in the first message could be set to the values in the original large message. The aggregator could then use the values from this first sub message to set the properties in the message header of the large message during the aggregation process.

    Using Asynchronous Methods: The current implementation uses the synchronous send and receive methods of the QueueClient class. It would be much more performant to use the asynchronous methods; however, doing so may well affect the sequence in which the sub messages are enqueued, which would require the implementation of a resequencer in the aggregator to restore the correct message sequence.

    Handling Exceptions: In order to keep the code readable, no exception handling was added to the implementations. In a real-world scenario, exceptions should be handled accordingly.
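    Returning to the first improvement (copying message header properties), a rough sketch of how it could look: a helper that copies the custom Properties dictionary from one BrokeredMessage to another, called by the splitter for the first sub message only and by the aggregator to stamp those values onto the aggregated message. BrokeredMessage.Properties is the standard property bag on the class; the helper name and the usage points are assumptions, not part of the original implementation:

        using System.Collections.Generic;
        using Microsoft.ServiceBus.Messaging;

        static class MessagePropertyCopier
        {
            // Copy all custom header properties from source to target.
            public static void CopyProperties(BrokeredMessage source, BrokeredMessage target)
            {
                foreach (KeyValuePair<string, object> property in source.Properties)
                {
                    target.Properties[property.Key] = property.Value;
                }
            }
        }

        // Hypothetical usage inside LargeMessageSender.Send, first iteration only:
        //     if (streamOffset == 0)
        //     {
        //         MessagePropertyCopier.CopyProperties(message, subMessage);
        //     }
        //
        // Hypothetical usage inside LargeMessageReceiver.Receive, keeping a reference
        // to the first sub message received and applying its properties at the end:
        //     MessagePropertyCopier.CopyProperties(firstSubMessage, largeMessage);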

    Read the article

  • Can you clear jquery ajax cache?

    - by chobo2
    Hi, I am wondering: is it possible to clear the cache for a particular ajax method? Say I have this:

        $.ajax({
            url: "test.html",
            cache: true,
            success: function(html) {
                $("#results").append(html);
            }
        });

    Now, 99% of the time a cached result can be used, since it should always be the same content. However, if a user updates this content, it of course changes; if it is cached, the old content would still be shown. So it would be cool if I could pick out the cache for this method and clear it, while all other cached stuff stays. Can this be done?

    Read the article

  • How to geocode a big number of addresses?

    - by user308569
    I need to geocode, i.e. translate street addresses to latitude/longitude, for ~8,000 street addresses. I am using both the Yahoo and Google geocoding engines at http://www.gpsvisualizer.com/geocoder/, and found out that for a big number of addresses those engines (one of them or both) either could not perform the geocoding (i.e. they return latitude=0, longitude=0), or return wrong coordinates (including cases where Yahoo and Google give different results). What is the best way to handle this problem? Which engine is (usually) more accurate? I would appreciate any thoughts, suggestions, and ideas from people who have previous experience with this kind of task.

    Read the article

  • MySQL differences between two select queries

    - by bpmccain
    I have two MySQL queries that return a column of phone numbers. I am trying to end up with a list of phone numbers that are in one list but not in the other. The two queries I have are:

        SELECT phone
        FROM civicrm_phone phone
        LEFT JOIN civicrm_participant participant
            ON phone.contact_id = participant.contact_id
        WHERE phone.is_primary = 1
            AND participant.id IS NULL

    and

        SELECT phone
        FROM civicrm_phone phone
        LEFT JOIN civicrm_participant participant
            ON phone.contact_id = participant.contact_id
        WHERE phone.is_primary = 1
            AND participant.id IS NOT NULL

    And before anyone asks, the above two queries do not provide mutually exclusive results (despite using IS NULL and IS NOT NULL in the last WHERE condition), since we have related individuals in the database who use the same phone number but do not necessarily all have a participant.id. Thanks for any help.

    Read the article

  • Best practices for fixed-width processing in .NET

    - by jmgant
    I'm working on a .NET web service that will be processing a text file with a relatively long, multilevel record format. Each record in the file represents a different entity; the record contains multiple sub-types. (The same record format is currently being processed by a COBOL job, if that gives you a better picture of what we're looking at.) I've created a class structure (a DATA DIVISION, if you will) to hold the input data. My question is: what best practices have you found for processing large, complex fixed-width files in .NET? My general approach will be to read the entire line into a string and then parse the data from the string into the classes I've created. But I'm not sure whether I'll get better results working with the characters in the string as an array, or with the string itself. I guess that's the specific question, string vs. char[], but I would appreciate any other pointers anyone has. Thanks.
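    On the string-vs-char[] trade-off: string.Substring allocates a new string per field, while a char[] lets you slice manually at the cost of more bookkeeping; for typical record lengths the Substring version is usually clear and fast enough to start with. A minimal sketch with a hypothetical two-field record layout (the field names and column positions are invented for illustration):

        using System;
        using System.Globalization;

        class FixedWidthRecord
        {
            public string CustomerId;
            public decimal Amount;

            // Hypothetical layout: columns 0-9 customer id, columns 10-19 amount.
            public static FixedWidthRecord Parse(string line)
            {
                return new FixedWidthRecord
                {
                    CustomerId = line.Substring(0, 10).Trim(),
                    Amount = decimal.Parse(line.Substring(10, 10).Trim(),
                                           CultureInfo.InvariantCulture)
                };
            }
        }

        class Demo
        {
            static void Main()
            {
                var record = FixedWidthRecord.Parse("CUST000042    123.45");
                Console.WriteLine("{0}: {1}", record.CustomerId, record.Amount);
            }
        }

    For a multilevel format, the record-type indicator column would be inspected first to choose which sub-type's Parse method to dispatch to.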

    Read the article

  • Why are getters prefixed with the word "get"?

    - by Joey
    Generally speaking, creating a fluid API is something that makes all programmers happy, both for the creators who write the interface and the consumers who program against it. Looking beyond conventions, why is it that we prefix all our getters with the word "get"? Omitting it usually results in a more fluid, easy-to-read set of instructions, which ultimately leads to happiness (however small or passive). Consider this very simple example (pseudocode). Conventional:

        person = new Person("Joey")
        person.getName().toLower().print()

    Alternative:

        person = new Person("Joey")
        person.name().toLower().print()

    Of course this only applies to languages where getters/setters are the norm, but the question is not directed at any specific language. Were these conventions developed around technical limitations (disambiguation), or simply through the pursuit of a more explicit, intentional-feeling type of interface? Or is this just a case of a trickle-down norm? What are your thoughts? And how would simple changes to these conventions impact your happiness / daily attitude towards your craft (however minimally)? Thanks.

    Read the article
