Search Results

Search found 9662 results on 387 pages for 'sales and operations plan'.

  • Stream Reuse in C#

    - by MikeD
    I've been playing around with what I thought was a simple idea. I want to be able to read in a file from somewhere (website, filesystem, FTP), perform some operations on it (compress, encrypt, etc.) and then save it somewhere (somewhere may be a filesystem, FTP, or whatever). It's a basic pipeline design. What I would like to do is to read in the file and put it onto a MemoryStream, then perform the operations on the data in the MemoryStream, and then save that data in the MemoryStream somewhere. I was thinking I could use the same Stream to do this but run into a couple of problems: Every time I use a StreamWriter or StreamReader I need to close it, and that closes the stream so that I cannot use it any more. It seems like there must be some way to get around that. Some of these files may be big, so I may run out of memory if I try to read the whole thing in at once. I was hoping to be able to spin up each of the steps as separate threads and have the compression step begin as soon as there is data on the stream, and then as soon as the compression has some compressed data available on the stream I could start saving it (for example). Is anything like this easily possible with the C# Streams? Anyone have thoughts as to how to accomplish this best? Thanks, Mike
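
    One way to avoid both problems is to chain streams rather than reuse one: each stage wraps the next, so data is processed in chunks as it flows from source to destination and never has to sit entirely in memory. A minimal sketch of that idea, assuming GZip as the compression stage (the leaveOpen argument keeps the destination usable after the wrapper is disposed):

        using System.IO;
        using System.IO.Compression;

        static class StreamPipeline
        {
            // Bytes written to the wrapper are compressed on the fly and
            // forwarded to the destination, one buffer at a time.
            public static void CopyCompressed(Stream source, Stream destination)
            {
                using (var gzip = new GZipStream(destination, CompressionMode.Compress, true /* leaveOpen */))
                {
                    var buffer = new byte[81920];
                    int read;
                    while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        gzip.Write(buffer, 0, read);
                    }
                }
                // destination is still open here because leaveOpen was true
            }
        }

    An encryption stage could be slotted into the same chain (for example with a CryptoStream), without staging anything in a MemoryStream or spinning up extra threads.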

  • How to implement Excel Solver functionality in C#?

    - by Vic
    Hi, I have an application in C#, I need to do some optimization calculations, like Excel Solver Add-in does. One option is certainly to write my own solver implementation, but I'm kind of short of time, so I'm looking into libraries that already exist that can help me with this. I've been trying the Microsoft Solver Foundation, which seems pretty neat and cool; the problem is that it doesn't seem to work with the kind of calculations that I need to do. At the end of this question I'm adding the information about the calculations I need to perform and optimize. So basically my question is if any of you know of any other library that I can use for this purpose, or any tutorial that can help me do my own solver, or any idea that gives me a lead to solve this issue. Thanks. Additional info - this is the data I need to calculate: I have 7 variables, let's call them var1, var2, ..., var7. The constraints for these variables are: all of them need to be 0 <= varn <= 0.5 (where n is the number of the variable), and the sum of all the variables should be equal to 1. The objective is to maximize the target formula, which in Excel looks like this: (MMULT(TRANSPOSE(L26:L32),M14:M20)) / (SQRT(MMULT(MMULT(TRANSPOSE(L26:L32),M4:S10),L26:L32))) The range that you see in this formula, L26:L32, is actually the range with the variables from above, var1, var2, ..., varn. M14:M20 and M4:S10 are ranges with data that I get from different sources; they are most likely decimal values. As I said before, I was using Microsoft Solver Foundation; I modeled pretty much everything with it, I created functions that handle the operations of the target formula, but when I tried to solve the model it always fails, I think because of the complexity of the operations. In any case, I just wanted to show this data so you can have an idea about the kind of calculations that I need to implement.
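
    For reference, a hedged reading of the target formula (taking w to be the variable vector in L26:L32, m the vector in M14:M20, and S the 7x7 matrix in M4:S10) gives, in LaTeX notation:

        \max_{w} \ \frac{w^{\top} m}{\sqrt{w^{\top} S w}}
        \qquad \text{s.t.} \quad \sum_{i=1}^{7} w_i = 1, \qquad 0 \le w_i \le 0.5

    Written this way, the objective is a ratio of a linear form to the square root of a quadratic form, so it is neither linear nor quadratic itself - one plausible reason a general-purpose solver model could fail on it.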

  • Project Architecture for Enhancing a Legacy Project

    - by vijay.shad
    I am working on a legacy project. Now the situation demands that the project be divided into parts. What strategy should I follow to do this task? Description: The legacy project (A) is a fully functional web application with fairly well-defined layers. But now I need to extend the project with a further enhancement. This project uses Maven as its build tool, but only for dependency management (the project is exported to WAR form from inside Eclipse). The new enhancement needs me to add a new data table and new UI (JSP, CSS, JS and images). What should be my strategy to enhance the application? My proposed design: I am planning to create two new projects. Project B: the main enhancement work will be done in this project. It will have all layers (service layer, DAO layer and UI layer) in itself, and will be a web application in itself. Project C: extract some common model and service code from project A and create this project. This project will be added as a dependency to both of the other projects. If this approach is okay, then I presume there will be a problem in deployment. These two projects would have to be deployed separately (currently Tomcat is used), but I must deploy them as one WAR. So I need a plan to change the web.xml entries to hold configuration for both projects, which brings some more complexity to the project. What should be my design for the project? Does my plan sound good?
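
    A minimal sketch of how that split might look as a Maven multi-module build (the artifact names here are hypothetical); the shared project C becomes an ordinary dependency of the two web projects:

        <!-- parent pom.xml -->
        <project xmlns="http://maven.apache.org/POM/4.0.0">
            <modelVersion>4.0.0</modelVersion>
            <groupId>com.example</groupId>
            <artifactId>parent</artifactId>
            <version>1.0-SNAPSHOT</version>
            <packaging>pom</packaging>
            <modules>
                <module>shared-core</module>     <!-- project C: common model and service code -->
                <module>legacy-app</module>      <!-- project A: the existing web application -->
                <module>enhancement-web</module> <!-- project B: the new enhancement -->
            </modules>
        </project>

        <!-- declared in both legacy-app/pom.xml and enhancement-web/pom.xml -->
        <dependency>
            <groupId>com.example</groupId>
            <artifactId>shared-core</artifactId>
            <version>1.0-SNAPSHOT</version>
        </dependency>

    Whether the two web projects then ship as separate WARs or are merged into one (for example with a Maven WAR overlay) is a deployment decision layered on top of this structure.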

  • CLR Stored Procedures

    - by Paul Hatcherian
    In an ASP.NET application, I have a small number of fairly complex, frequently used operations to execute against a database. In these operations, one or more of several tables needs updates or inserts based on a logical evaluation of both input parameters and values of certain tables. I've maintained a separation of logic and data access, so the operation currently looks like this:

    1. Request received from client
    2. Business layer invokes data layer to retrieve data from database
    3. Business layer processes result and determines which operation to execute
    4. Business layer invokes appropriate data operation
    5. Response sent to client

    As you can see, the client is kept waiting while two separate requests are made to the database. In searching for a solution to this, I've found CLR stored procedures, but I'm not sure if I have the right idea about what they are useful for. I have written a replacement for the code above which essentially places steps 2-4 in a CLR SP. My understanding is that the SP will be executed locally by SQL Server and result in only one call being made to the server. My initial benchmark tests show this is actually orders of magnitude slower than my original code, but I attribute that to recompilation of the code, which I have not worked out yet, and/or some flaw in my environment. My question is basically: is this the intended use of CLR SPs, or am I missing something? I realize this is a bit of a compromise structurally, so if there's a better way to do it I'd love to hear it.
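
    For context, a minimal sketch of the shape of a CLR stored procedure (the table and method names here are purely illustrative); the method is compiled into an assembly, registered with CREATE ASSEMBLY, and exposed with CREATE PROCEDURE ... EXTERNAL NAME:

        using System.Data.SqlClient;
        using Microsoft.SqlServer.Server;

        public class IncidentProcedures
        {
            [SqlProcedure]
            public static void ApproveIncident(int incidentId)
            {
                // "context connection=true" runs in-process against the
                // hosting server, so no extra network round trip is made.
                using (var conn = new SqlConnection("context connection=true"))
                {
                    conn.Open();
                    var cmd = new SqlCommand(
                        "SELECT Status FROM Incident WHERE Id = @id", conn);
                    cmd.Parameters.AddWithValue("@id", incidentId);
                    object status = cmd.ExecuteScalar();
                    // ...evaluate status and run the appropriate
                    // UPDATE/INSERT statements on the same connection...
                }
            }
        }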

  • std::vector iterator or index access speed question

    - by Simone Margaritelli
    Just a stupid question. I have a std::vector<SomeClass *> v; in my code and I need to access its elements very often in the program, looping over them forward and backward. Which is the fastest access type between these two?

    Iterator access:

        std::vector<SomeClass *> v;
        std::vector<SomeClass *>::iterator i;
        std::vector<SomeClass *>::reverse_iterator j;
        // i loops forward, j loops backward
        for( i = v.begin(), j = v.rbegin(); i != v.end() && j != v.rend(); i++, j++ ){
            // some operations on v items
        }

    Subscript access (by index):

        std::vector<SomeClass *> v;
        unsigned int i, j, size = v.size();
        // i loops forward, j loops backward
        for( i = 0, j = size - 1; i < size && j >= 0; i++, j-- ){
            // some operations on v items
        }

    And does const_iterator offer a faster way to access vector elements in case I do not have to modify them? Thank you in advance.

  • Managing lots of callback recursion in Nodejs

    - by Maciek
    In Nodejs, there are virtually no blocking I/O operations. This means that almost all Nodejs I/O code involves many callbacks. This applies to reading and writing to/from databases, files, processes, etc. A typical example of this is the following:

        var useFile = function(filename, callback){
            posix.stat(filename).addCallback(function (stats) {
                posix.open(filename, process.O_RDONLY, 0666).addCallback(function (fd) {
                    posix.read(fd, stats.size, 0).addCallback(function(contents){
                        callback(contents);
                    });
                });
            });
        };
        ...
        useFile("test.data", function(data){
            // use data..
        });

    I am anticipating writing code that will make many IO operations, so I expect to be writing many callbacks. I'm quite comfortable with using callbacks, but I'm worried about all the recursion. Am I in danger of running into too much recursion and blowing through a stack somewhere? If I make thousands of individual writes to my key-value store with thousands of callbacks, will my program eventually crash? Am I misunderstanding or underestimating the impact? If not, is there a way to get around this while still using Nodejs' callback coding style?

  • priority queue with limited space: looking for a good algorithm

    - by SigTerm
    This is not homework. I'm using a small "priority queue" (implemented as an array at the moment) for storing the last N items with the smallest values. This is a bit slow - O(N) item insertion time. The current implementation keeps track of the largest item in the array and discards any items that wouldn't fit, but I still would like to reduce the number of operations further. I am looking for a priority queue algorithm that matches the following requirements:

    - the queue can be implemented as an array, which has fixed size and _cannot_ grow. Dynamic memory allocation during any queue operation is strictly forbidden.
    - anything that doesn't fit into the array is discarded, but the queue keeps all the smallest elements ever encountered.
    - O(log(N)) insertion time (i.e. adding an element into the queue should take up to O(log(N))).
    - (optional) O(1) access to the *largest* item in the queue (the queue stores the *smallest* items, so the largest item will be discarded first, and I'll need it to reduce the number of operations).
    - easy to implement/understand. Ideally something similar to binary search - once you understand it, you remember it forever.

    The elements need not be sorted in any way; I just need to keep the N smallest values ever encountered. When I need them, I'll access all of them at once. So technically it doesn't have to be a queue - I just need the N smallest values to be stored. I initially thought about using binary heaps (they can be easily implemented via arrays), but apparently they don't behave well when the array can't grow any more. Linked lists and arrays will require extra time for moving things around. The STL priority queue grows and uses dynamic allocation (I may be wrong about that, though). So, any other ideas?
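
    For what it's worth, a minimal sketch of the fixed-capacity binary max-heap approach the question circles around: allocation happens once up front, insertion is O(log N), and the largest kept value is always at the front in O(1):

        #include <algorithm>
        #include <cstddef>
        #include <vector>

        // Keeps the N smallest values seen so far in a fixed-capacity max-heap.
        class SmallestN {
        public:
            explicit SmallestN(std::size_t capacity) : cap_(capacity) {
                heap_.reserve(capacity);  // the only allocation
            }

            void insert(int value) {
                if (heap_.size() < cap_) {
                    heap_.push_back(value);
                    std::push_heap(heap_.begin(), heap_.end());  // O(log N)
                } else if (value < heap_.front()) {
                    std::pop_heap(heap_.begin(), heap_.end());   // largest moves to back
                    heap_.back() = value;                        // overwrite it in place
                    std::push_heap(heap_.begin(), heap_.end());
                }  // else: value is larger than everything kept, discard it
            }

            int largest() const { return heap_.front(); }          // O(1)
            const std::vector<int>& values() const { return heap_; }  // unsorted

        private:
            std::vector<int> heap_;
            std::size_t cap_;
        };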

  • Are there some cases where Python threads can safely manipulate shared state?

    - by erikg
    Some discussion in another question has encouraged me to better understand cases where locking is required in multithreaded Python programs. Per this article on threading in Python, I have several solid, testable examples of pitfalls that can occur when multiple threads access shared state. The example race condition provided on this page involves races between threads reading and manipulating a shared variable stored in a dictionary. I think the case for a race here is very obvious, and fortunately is eminently testable. However, I have been unable to evoke a race condition with atomic operations such as list appends or variable increments. This test exhaustively attempts to demonstrate such a race:

        from threading import Thread, Lock
        import operator

        def contains_all_ints(l, n):
            l.sort()
            for i in xrange(0, n):
                if l[i] != i:
                    return False
            return True

        def test(ntests):
            results = []
            threads = []
            def lockless_append(i):
                results.append(i)
            for i in xrange(0, ntests):
                threads.append(Thread(target=lockless_append, args=(i,)))
                threads[i].start()
            for i in xrange(0, ntests):
                threads[i].join()
            if len(results) != ntests or not contains_all_ints(results, ntests):
                return False
            else:
                return True

        for i in range(0, 100):
            if test(100000):
                print "OK", i
            else:
                print "appending to a list without locks *is* unsafe"
                exit()

    I have run the test above without failure (100x 100k multithreaded appends). Can anyone get it to fail? Is there another class of object which can be made to misbehave via atomic, incremental modification by threads? Do these implicitly 'atomic' semantics apply to other operations in Python? Is this directly related to the GIL?

  • How can I test this SQL Server performance utility?

    - by Martin Smith
    As part of my MSc I need to do a three-month project later this year. I have decided to do something which will likely be useful for me in the workplace, and to spend the time getting to understand SQL Server internals. The deliverable for this project will be a performance advisor that checks a variety of different rules: some static, such as finding redundant indexes; some more dynamic, such as using XEvents to find outlying invocations of stored procedure execution times when certain parameters are passed. I am struggling to come up with a good way of testing this, though. I can obviously design a "bad" database and a synthetic workload that my tool will pick up issues on, but I also need to demonstrate that it has real-world utility. Looking at the self-tuning database literature, it is common to use TPC benchmarks, but I've had a look at the TPC-C site and it looks very time-consuming to implement and not that good a fit to my project's testing needs in any event (I would still be able to "rig" it by the decisions I made on indexing or physical architecture). Plan A would be to find willing beta tester(s), but in the event that isn't possible I will need a fallback plan. The best idea I have come up with so far is to use the various MS sample applications as examples of real-world applications, e.g. http://msftdpprodsamples.codeplex.com/ and http://www.asp.net/community/projects/ Does anyone have any better suggestions?

  • Impact of ordering of correlated subqueries within a projection

    - by Michael Petito
    I'm noticing something a bit unexpected with how SQL Server (SQL Server 2008 in this case) treats correlated subqueries within a select statement. My assumption was that a query plan should not be affected by the mere order in which subqueries (or columns, for that matter) are written within the projection clause of the select statement. However, this does not appear to be the case. Consider the following two queries, which are identical except for the ordering of the subqueries within the CTE:

        --query 1: subquery for Color is second
        WITH vw AS
        (
            SELECT p.[ID],
                (SELECT TOP(1) [FirstName] FROM [Preference]
                 WHERE p.ID = ID AND [FirstName] IS NOT NULL
                 ORDER BY [LastModified] DESC) [FirstName],
                (SELECT TOP(1) [Color] FROM [Preference]
                 WHERE p.ID = ID AND [Color] IS NOT NULL
                 ORDER BY [LastModified] DESC) [Color]
            FROM Person p
        )
        SELECT ID, Color, FirstName FROM vw WHERE Color = 'Gray';

        --query 2: subquery for Color is first
        WITH vw AS
        (
            SELECT p.[ID],
                (SELECT TOP(1) [Color] FROM [Preference]
                 WHERE p.ID = ID AND [Color] IS NOT NULL
                 ORDER BY [LastModified] DESC) [Color],
                (SELECT TOP(1) [FirstName] FROM [Preference]
                 WHERE p.ID = ID AND [FirstName] IS NOT NULL
                 ORDER BY [LastModified] DESC) [FirstName]
            FROM Person p
        )
        SELECT ID, Color, FirstName FROM vw WHERE Color = 'Gray';

    If you look at the two query plans, you'll see that an outer join is used for each subquery and that the order of the joins is the same as the order the subqueries are written. There is a filter applied to the result of the outer join for color, to filter out rows where the color is not 'Gray'. (It's odd to me that SQL would use an outer join for the color subquery since I have a non-null constraint on the result of the color subquery, but OK.) Most of the rows are removed by the color filter. The result is that query 2 is significantly cheaper than query 1 because fewer rows are involved with the second join. All reasons for constructing such a statement aside, is this an expected behavior? Shouldn't SQL Server opt to move the filter as early as possible in the query plan, regardless of the order the subqueries are written?

  • Clear UITextField when standard number pad is pressed

    - by Manu
    Hi all, I'm sorry if I didn't explain my problem well enough in the title. In my application, I'm using a small subview to create a very basic calculator on the top side of the screen (I just have a UITextField to show the operations and two buttons). Instead of using a custom keyboard for it, I want to use the standard iPhone number pad. My problem is that after doing an operation (e.g. adding two numbers) and showing the result, I cannot figure out how to clear the screen when the user enters a new number. So, for example:

    - User enters "6" - Number 6 is shown in the UITextField
    - User selects "+" - UITextField is cleared to make room for the next number
    - User enters "10" - Number 10 is shown in the UITextField
    - User selects "+" - 16 is shown as the result of the previous operation, and should stay there until another number is pressed (he wants to continue adding more numbers)
    - User enters "5"

    At this point, if I was using a custom keyboard, I could clear the UITextField as soon as the button "5" is pressed by the user, but I cannot figure out how to do so when using the standard number pad. So, the result I get at the moment is "165". Is there a way to detect when a key is pressed in the standard number pad so that I can clear the UITextField before the new number appears? I thought there may be an NSNotification for that, but I couldn't find it. I'm aware that I could solve my problem if I created a custom keyboard or if I used two separate UITextFields (one for the operations and another one for the total), but I would like to use the standard number pad if it's possible. Thanks very much!
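
    A minimal sketch of one way to catch key presses from the standard number pad, using the UITextFieldDelegate method that fires before each character is inserted (the clearOnNextDigit flag is an illustrative ivar, not part of the API):

        // In the view controller acting as the text field's delegate:
        - (BOOL)textField:(UITextField *)textField
            shouldChangeCharactersInRange:(NSRange)range
                        replacementString:(NSString *)string
        {
            if (clearOnNextDigit) {
                textField.text = @"";   // wipe the previous result first
                clearOnNextDigit = NO;  // set this flag when an operation completes
            }
            return YES;                 // allow the new digit to be inserted
        }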

  • Using Qt signals/slots instead of a worker thread

    - by Rob
    I am using Qt and wish to write a class that will perform some network-type operations, similar to FTP/HTTP. The class needs to connect to lots of machines, one after the other, but I need the application's UI to stay (relatively) responsive during this process, so the user can cancel the operation, exit the application, etc. My first thought was to use a separate thread for network stuff, but the built-in Qt FTP/HTTP (and other) classes apparently avoid using threads and instead rely on signals and slots. So, I'd like to do something similar and was hoping I could do something like this:

        class Foo : public QObject
        {
            Q_OBJECT
        public:
            void start();
        signals:
            void next();
        private slots:
            void nextJob();
        };

        void Foo::start()
        {
            ...
            connect(this, SIGNAL(next()), this, SLOT(nextJob()));
            emit next();
        }

        void Foo::nextJob()
        {
            // Process next 'chunk'
            if (workLeftToDo)
            {
                emit next();
            }
        }

        void Bar::StartOperation()
        {
            Foo* foo = new Foo;
            foo->start();
        }

    However, this doesn't work and the UI freezes until all operations have completed. I was hoping that emitting signals wouldn't actually call the slots immediately but would somehow be queued up by Qt, allowing the main UI to still operate. So what do I need to do in order to make this work? How does Qt achieve this with the multitude of built-in classes that appear to perform lengthy tasks on a single thread?
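
    A sketch of one likely culprit and its fix: by default, a same-thread connect() is a direct connection, so emit next() invokes nextJob() immediately, like an ordinary function call, and control never returns to the event loop. Requesting a queued connection posts the signal as an event instead:

        // The slot now runs on the next pass of the event loop, so the UI
        // gets a chance to process its own events between chunks.
        connect(this, SIGNAL(next()), this, SLOT(nextJob()), Qt::QueuedConnection);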

  • Dynamically created operators

    - by Gero
    I created a program using Dev-C++ and wxWidgets which solves a puzzle. The user must fill in the operations blocks and the results blocks, and the program will solve it. I'm solving it using brute force: I generate all non-repeated 9-length number combinations using a recursive algorithm. It does this pretty fast. Up to here all is great! But the problem is when my program operates depending on the character in the blocks. It's extremely slow (it never gets the answer), because of the character comparisons against +, -, *, etc. I'm doing a CASE. Is there some way, or some programming language, which allows dynamic creation of operators? That way I could define the operator at ROW1COL2 to be a +, and the same for all the other operations. I leave a screenshot of the app, so it's easier to understand how the puzzle works: http://www.imageshare.web.id/images/9gg5cev8vyokp8rhlot9.png PD: The algorithm works - I tried it with a trivial puzzle, and it solved it in a second.
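
    A hedged sketch of the usual workaround in C++ (assuming a C++11 compiler; with older compilers, plain function pointers work the same way): build a lookup table from character to function object once, so each block's operator becomes a direct call instead of a repeated CASE comparison:

        #include <functional>
        #include <iostream>
        #include <map>

        int main() {
            // Built once; each block's character then maps straight to
            // its operation instead of being re-compared every time.
            std::map<char, std::function<int(int, int)> > ops;
            ops['+'] = [](int a, int b) { return a + b; };
            ops['-'] = [](int a, int b) { return a - b; };
            ops['*'] = [](int a, int b) { return a * b; };

            std::cout << ops['+'](2, 3) << std::endl;  // prints 5
            return 0;
        }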

  • Merging k sorted linked lists - analysis

    - by Kotti
    Hi! I am thinking about different solutions for one problem. Assume we have K sorted linked lists and we are merging them into one. All these lists together have N elements. The well-known solution is to use a priority queue and pop/push the first elements from every list, and I can understand why it takes O(N log K) time. But let's take a look at another approach. Suppose we have some MERGE_LISTS(LIST1, LIST2) procedure that merges two sorted lists, and it would take O(T1 + T2) time, where T1 and T2 stand for the LIST1 and LIST2 sizes. What we do now generally means pairing these lists and merging them pair-by-pair (if the number is odd, the last list, for example, could be ignored at the first steps). This generally means we have to make the following "tree" of merge operations (N1, N2, N3... stand for the LIST1, LIST2, LIST3 sizes):

        O(N1 + N2) + O(N3 + N4) + O(N5 + N6) + ...
        O(N1 + N2 + N3 + N4) + O(N5 + N6 + N7 + N8) + ...
        O(N1 + N2 + N3 + N4 + ... + NK)

    It looks obvious that there will be log(K) of these rows, each of them involving O(N) operations, so the time for the MERGE(LIST1, LIST2, ..., LISTK) operation would actually equal O(N log K). My friend told me (two days ago) it would take O(K N) time. So, the question is - did I f%ck up somewhere, or is he actually wrong about this? And if I am right, why can't this 'divide & conquer' approach be used instead of the priority queue approach?

  • Java remove HTML from String without regular expressions

    - by behrk2
    Hello, I am trying to remove all HTML elements from a String. Unfortunately, I cannot use regular expressions because I am developing on the Blackberry platform and regular expressions are not yet supported. Is there any other way that I can remove HTML from a string? I read somewhere that you can use a DOM parser, but I couldn't find much on it.

    Text with HTML:

        <![CDATA[As a massive asteroid hurtles toward Earth, NASA head honcho Dan Truman (<a href="http://www.netflix.com/RoleDisplay/Billy_Bob_Thornton/20000303">Billy Bob Thornton</a>) hatches a plan to split the deadly rock in two before it annihilates the entire planet, calling on Harry Stamper (<a href="http://www.netflix.com/RoleDisplay/Bruce_Willis/99786">Bruce Willis</a>) -- the world's finest oil driller -- to head up the mission. With time rapidly running out, Stamper assembles a crack team and blasts off into space to attempt the treacherous task. <a href="http://www.netflix.com/RoleDisplay/Ben_Affleck/20000016">Ben Affleck</a> and <a href="http://www.netflix.com/RoleDisplay/Liv_Tyler/162745">Liv Tyler</a> co-star.]]>

    Text without HTML:

        As a massive asteroid hurtles toward Earth, NASA head honcho Dan Truman (Billy Bob Thornton) hatches a plan to split the deadly rock in two before it annihilates the entire planet, calling on Harry Stamper (Bruce Willis) -- the world's finest oil driller -- to head up the mission. With time rapidly running out, Stamper assembles a crack team and blasts off into space to attempt the treacherous task. Ben Affleck and Liv Tyler co-star.

    Thanks!
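
    A minimal regex-free sketch of the usual approach on constrained Java platforms: scan the string once and keep only the characters outside of tags (entities and the CDATA wrapper would need extra handling):

        static String stripTags(String html) {
            StringBuffer out = new StringBuffer();
            boolean inTag = false;
            for (int i = 0; i < html.length(); i++) {
                char c = html.charAt(i);
                if (c == '<') {
                    inTag = true;       // entering a tag: stop copying
                } else if (c == '>') {
                    inTag = false;      // leaving the tag: resume copying
                } else if (!inTag) {
                    out.append(c);
                }
            }
            return out.toString();
        }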

  • Flex - Increase timeout on a PHP service function call

    - by Travesty3
    I'm using Flash Builder 4 Beta 2. I have it connecting to a PHP service. The way I set this up was using the wizard, so I didn't actually write the code to connect to it. The service looks like this:

        package services.flash
        {
            import mx.rpc.AsyncToken;
            import com.adobe.fiber.core.model_internal;
            import mx.rpc.AbstractOperation;
            import valueObjects.CustomDatatype8;
            import valueObjects.NewUsageData;
            import mx.collections.ItemResponder;
            import mx.rpc.remoting.RemoteObject;
            import mx.rpc.remoting.Operation;
            import com.adobe.fiber.services.wrapper.RemoteObjectServiceWrapper;
            import com.adobe.fiber.valueobjects.AvailablePropertyIterator;
            import com.adobe.serializers.utility.TypeUtility;

            [ExcludeClass]
            internal class _Super_FLASH extends RemoteObjectServiceWrapper
            {
                // Constructor
                public function _Super_FLASH()
                {
                    // initialize service control
                    _serviceControl = new RemoteObject();

                    var operations:Object = new Object();
                    var operation:Operation;

                    operation = new Operation(null, "sendCommand");
                    operation.resultType = Object;
                    operations["sendCommand"] = operation;
                    ...
                }
            }
        }

    One of the functions that I'm calling fetches users from a MySQL database. There are about 30,000 users right now. The service seems to time out when fetching more than around 22,000 rows; I get the "Channel Disconnected before an acknowledgement was received" error. If I call the PHP script from a browser, it fetches them all with no problems at all, however. I have tried increasing the timeout in the PHP script (which didn't work), but obviously this isn't the problem since the browser is able to pull them up with no problems. Is there a way to increase the timeout of the PHP service in Flash Builder? I'm a bit of a noob when it comes to Flash, so please be descriptive. Thanks in advance!

  • .NET threading solution for long queries

    - by Eddie
    Scenario: We have an application that records incidents. An external database needs to be queried when an incident is approved by a supervisor. The queries to this external database sometimes take a while to run, and this lag is experienced through the browser.

    Possible solution: I want to use threading to eliminate the perceived hang in the browser. I have used the Thread class before and heard about ThreadPool, but I just found BackgroundWorker in this post. MSDN states:

        The BackgroundWorker class allows you to run an operation on a separate, dedicated thread. Time-consuming operations like downloads and database transactions can cause your user interface (UI) to seem as though it has stopped responding while they are running. When you want a responsive UI and you are faced with long delays associated with such operations, the BackgroundWorker class provides a convenient solution.

    Is BackgroundWorker the way to go when handling long-running queries? What happens when 2 or more BackgroundWorker processes are run simultaneously? Is it handled like a pool?
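
    For reference, a minimal sketch of the BackgroundWorker pattern in question (QueryExternalDatabase and ProcessResult are hypothetical placeholders); DoWork runs on a worker thread, and RunWorkerCompleted fires when it finishes:

        using System.ComponentModel;

        void RunQueryInBackground(int incidentId)
        {
            var worker = new BackgroundWorker();
            worker.DoWork += (sender, e) =>
            {
                // Runs off the calling thread: query the external database here.
                e.Result = QueryExternalDatabase((int)e.Argument);
            };
            worker.RunWorkerCompleted += (sender, e) =>
            {
                // Fires when DoWork finishes; e.Result holds what it returned.
                ProcessResult(e.Result);
            };
            worker.RunWorkerAsync(incidentId);  // returns immediately
        }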

  • Need events to execute on timer ticks with metronome precision

    - by user295734
    I set up a timer to call an event in my application. The problem is that the event execution is being skewed by other Windows operations, e.g. opening a window or loading a web page. I need the event to execute exactly on time, every time. When I first set up the app, I used a sound file, like a metronome, to listen to the event firing; in a steady state it fires right on, but as soon as I do something in the Windows environment, the sound fires slower, then sort of speeds up a bit to catch up. So I added a logging method to the event to catch the timer ticks. From that data, it appears that the timer is not being affected by the Windows activity, but my application event calls are. I figured this out by checking DateTime.Now in the event; if I set the timer to 250 milliseconds, which is 4 ticks per second, you get data something like below (sec:ms):

        1:000 1:250 1:500 1:750
        2:000 2:250 2:500 2:750
        3:000 3:250 3:500 3:750
        (I execute some Windows event - the times skew)
        4:122 4:388 4:600 4:876
        (I stop doing what I was doing in Windows; shortening the data for simplicity - my list was 30 sec long)
        5:124 5:268 5:500 5:750
        (the times go back to the same milliseconds as at the beginning)
        6:000 6:250 6:500 6:750
        7:000 7:250 7:500 7:750

    So I'm thinking the timer continues to fire on the same millisecond every time, but it's the event handler that is being skewed by other Windows operations. It's not a huge skew, but for what I need to accomplish it's unacceptable. Is there anything I can do in .NET, hopefully in a XAML/WPF application, that will correct the skewing of events? Thanks.

  • Replace System.Net.Mail.MailMessage with manually created message and send it

    - by DEH
    I am trying to send emails that will bounce to a known mailbox. I plan to use VERP. Unfortunately the System.Net.Mail.MailMessage object does not allow me to precisely set the From: and Sender: headers within my email - it forces the values so that the resulting email contains the phrase 'on behalf of', and does not allow me fine control over the relevant MIME headers. I therefore plan to manually write MIME email messages directly to the pickup directory so that I can independently control the From and Sender headers. My dev box is a Vista box and therefore does not have an SMTP server. I would like to configure the dev box so that I have an SMTP server running on it. I can then turn off the SMTP server, write messages to the pickup dir, then turn on the SMTP server and see how the individual emails that I have written will behave (some delivered, some bounced to a bounce handler on a different email domain, as dictated by the Sender). Two questions:

    1. Can anyone recommend an SMTP server that will monitor a pickup directory?
    2. If I set headers as follows - From:[email protected]; Sender:[email protected] - will the recipient see the email as having come from [email protected] (with no reference to [email protected]), while if the mail bounces the NDR is sent to [email protected]?

    It's a real pain to have to do this, but I can't see any way of using System.Net.Mail.MailMessage without it messing up my headers.
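
    For the writing side, a minimal sketch of dropping a hand-built message into a pickup directory (the path and addresses are purely illustrative placeholders); a message file is just RFC 822 headers, a blank line, and a body:

        using System.IO;

        class WriteToPickup
        {
            static void Main()
            {
                // Hypothetical pickup directory and addresses.
                var path = @"C:\inetpub\mailroot\Pickup\bounce-test-001.eml";
                using (var writer = new StreamWriter(path))
                {
                    writer.WriteLine("From: customer@example.com");
                    writer.WriteLine("Sender: bounces+customer=example.com@example.org");  // VERP-style return address
                    writer.WriteLine("To: recipient@example.net");
                    writer.WriteLine("Subject: VERP bounce test");
                    writer.WriteLine();   // blank line ends the header block
                    writer.WriteLine("Test body.");
                }
            }
        }

    Writing the file by hand like this gives full control over both From: and Sender:, which is the point of bypassing MailMessage.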

  • Why does autoboxing in Java allow me to have 3 possible values for a boolean?

    - by John
    Reference: http://java.sun.com/j2se/1.5.0/docs/guide/language/autoboxing.html If your program tries to autounbox null, it will throw a NullPointerException. javac will give you a compile-time error if you try to assign null to a boolean. Makes sense. Assigning null to a Boolean is A-OK, though; that also makes sense, I guess. But let's think about the fact that you'll get an NPE when trying to autounbox null. What this means is that you can't safely perform boolean operations on Booleans without null-checking or exception handling. The same goes for doing math operations on an Integer. For a long time, I was a fan of autoboxing in Java 1.5+ because I thought it got Java closer to being truly object-oriented. But after running into this problem last night, I gotta say that I think this sucks. The compiler giving me an error when I'm trying to do stuff with an uninitialized primitive is a good thing. I think I may be misunderstanding the point of autoboxing, but at the same time I will never accept that a boolean should be able to have 3 values. Can anyone explain this? What am I not getting?
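
    A minimal, runnable demo of the behaviour being complained about:

        public class UnboxNull {
            public static void main(String[] args) {
                Boolean flag = null;   // legal: a Boolean has three possible states
                // boolean b = null;   // illegal: compile-time error from javac
                if (flag) {            // auto-unboxing null throws NullPointerException
                    System.out.println("never reached");
                }
            }
        }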

  • How to design a RESTful collection resource?

    - by Suresh Kumar
    I am trying to design a "collection of items" resource. I need to support the following operations:

    1. Create the collection
    2. Remove the collection
    3. Add a single item to the collection
    4. Add multiple items to the collection
    5. Remove a single item from the collection
    6. Remove multiple items from the collection

    This is as far as I have gone:

    Create collection:

        ==> POST /service
            Host: www.myserver.com
            Content-Type: application/xml

            <collection name="items">
                <item href="item1"/>
                <item href="item2"/>
                <item href="item3"/>
            </collection>

        <== 201 Created
            Location: http://myserver.com/service/items
            Content-Type: application/xml
            ...

    Remove collection:

        ==> DELETE /service/items
        <== 200 OK

    Removing a single item from the collection:

        ==> DELETE /service/items/item1
        <== 200 OK

    However, I am finding supporting the other operations a bit tricky, i.e. what methods can I use to:

    - Add single or multiple items to the collection (PUT doesn't seem to be right here, as per the HTTP 1.1 RFC)
    - Remove multiple items from the collection in one transaction (DELETE doesn't seem to be right here either)

  • Poor execution plans when using a filter and CONTAINSTABLE in a query

    - by Paul McLoughlin
    We have an interesting problem that I was hoping someone could help to shed some light on. At a high level the problem is as below. The following query executes quickly (1 second):

        SELECT SA.*
        FROM cg.SEARCHSERVER_ACTYS AS SA
        JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1
            ON T1.[Key] = SA.UNIQUE_ID

    but if we add a filter to the query, then it takes approximately 2 minutes to return:

        SELECT SA.*
        FROM cg.SEARCHSERVER_ACTYS AS SA
        JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1
            ON T1.[Key] = SA.UNIQUE_ID
        WHERE SA.CHG_DATE > '19 Feb 2010'

    Looking at the execution plan for the two queries, I can see that in the second case there are two places where there are huge differences between the actual and estimated number of rows, these being:

    1. For the FulltextMatch table valued function, where the estimate is approx 22,000 rows and the actual is 29 million rows (which are then filtered down to 1670 rows before the join), and
    2. For the index seek on the full text index, where the estimate is 1 row and the actual is 13,000 rows.

    As a result of the estimates, the optimiser is choosing to use a nested loops join (since it assumes a small number of rows), hence the plan is inefficient. We can work around the problem either (a) by parameterising the query and adding an OPTION (OPTIMIZE FOR UNKNOWN) to the query or (b) by forcing a HASH JOIN to be used. In both of these cases the query returns in sub 1 second and the estimates appear reasonable. My question really is: why are the estimates used in the poorly performing case so wildly inaccurate, and what can be done to improve them? Statistics are up to date on the indexes on the indexed view being used here. Any help greatly appreciated.

  • Spring - Transaction Readonly

    - by AAK
    Hello gurus! I just wanted your expert opinions on declarative transaction management for Spring. Here is my setup:

    A. The DAO layer is plain old JDBC using jdbcTemplate (no Hibernate etc.)
    B. The service layer is POJOs with declarative transactions as follows - save*: readonly=false, rollback for Throwable

    Things work fine with the above setup. However, when I add get*: readonly=true, I see errors in my log file saying "Database connection cannot be marked as readonly". This happens for all get* methods in the service layer. Now my questions:

    A. Do I have to mark get* as readonly? All my get* methods are pure DB read operations. I do not wish to run them in any transaction context. How serious is the above error?
    B. When I remove the get* configuration, I do not see the errors; moreover, all my simple get* operations are performed without transactions. Is this the way to go?
    C. Why would anyone want to have transactional methods where readonly=true? Is there any practical significance to this configuration?

    Thank you! As always your responses are much appreciated!
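
    For reference, a minimal sketch of the declarative setup being described, in Spring's XML transaction namespace (the advice and manager bean names are illustrative):

        <tx:advice id="txAdvice" transaction-manager="transactionManager">
            <tx:attributes>
                <!-- writes: read-write transaction, roll back on any Throwable -->
                <tx:method name="save*" read-only="false" rollback-for="Throwable"/>
                <!-- reads: hint that no writes will happen in the transaction -->
                <tx:method name="get*" read-only="true"/>
            </tx:attributes>
        </tx:advice>

    Note that read-only is only a hint passed down to the underlying JDBC connection; whether it helps, or is even accepted, depends on the driver, which is one practical reason a read-only marking can surface driver-level errors like the one above.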

  • SQL Server 2005, wide indexes, computed columns, and sargable queries

    - by luksan
    In my database, assume we have a table defined as follows:

        CREATE TABLE [Chemical](
            [ChemicalId] int NOT NULL IDENTITY(1,1) PRIMARY KEY,
            [Name] nvarchar(max) NOT NULL,
            [Description] nvarchar(max) NULL
        )

    The value for Name can be very large, so we must use nvarchar(max). Unfortunately, we want to create an index on this column, but nvarchar(max) is not supported inside an index. So we create the following computed column and an associated index based upon it:

        ALTER TABLE [Chemical] ADD [Name_Indexable] AS LEFT([Name], 20)
        CREATE INDEX [IX_Name] ON [Chemical]([Name_Indexable]) INCLUDE([Name])

    The index will not be unique, but we can enforce uniqueness via a trigger. If we perform the following query, the execution plan results in an index scan, which is not what we want:

        SELECT [ChemicalId], [Name], [Description]
        FROM [Chemical]
        WHERE [Name]='[1,1''-Bicyclohexyl]-2-carboxylic acid, 4'',5-dihydroxy-2'',3-dimethyl-5'',6-bis[(1-oxo-2-propen-1-yl)oxy]-, methyl ester'

    However, if we modify the query to make it "sargable," then the execution plan results in an index seek, which is what we want:

        SELECT [ChemicalId], [Name], [Description]
        FROM [Chemical]
        WHERE [Name_Indexable]='[1,1''-Bicyclohexyl]-'
        AND [Name]='[1,1''-Bicyclohexyl]-2-carboxylic acid, 4'',5-dihydroxy-2'',3-dimethyl-5'',6-bis[(1-oxo-2-propen-1-yl)oxy]-, methyl ester'

    Is this a good solution if we control the format of all queries executed against the database via our middle tier? Is there a better way? Is this a major kludge? Should we be using full-text indexing?

  • Skip Checkout in Magento for a downloadable product

    - by Aaron Newton
    Hello Magento boffins. I am using Magento to build an eBooks site. For the release, we plan to have a number of free downloadable books. We were hoping that it would be possible to use the normal Magento 'catalog' functionality to add categories with products underneath. However, since these are free downloadable products, it doesn't really make sense to send users through the checkout when they try to download. Does anyone know of a way to create a free downloadable product which bypasses the checkout altogether? I have noticed that there is a 'free sample' option for downloadable products, but I would prefer not to use this if I can, as I plan to use this field for its intended purpose when I add paid products. [EDIT] I have noticed that some of you have voted this question down for 'lack of question clarity'. For clarity, I want to know if it is possible to create a downloadable product in Magento which doesn't require users to go through the usual checkout process (since it is free) and which is not the 'Free Sample' field of a downloadable product. Unfortunately, I don't think I can ask this any more eloquently. [/EDIT]
