Search Results

Search found 1908 results on 77 pages for 'relational operators'.


  • How Should I Generate Trade Statistics For CouchDB/Rails3 Application?

    - by James
    My Problem: I am trying to developing a web application for currency traders. The application allows traders to enter or upload information about their trades and I want to calculate a wide variety of statistics based on what the user entered. Now, normally I would use a relational database for this, but I have two requirements that don't fit well with a relational database so I am attempting to use couchdb. Those two problems are: 1) Primarily, I have a companion desktop application that users will be able to work with and replicate to the site using couchdb's awesome replication feature and 2) I would like to allow users to be able to define their own custom things to track about trades and generate results based off of what they enter. The schema less nature of couch seems perfect here, but it may end up being harder than it sounds. (I already know couch requires you to define views in advance and such so I was just planning on sticking all the custom attributes in an array and then emitting the array in the view and further processing from there.) What I Am Doing: Right now I am just emitting each trade in couch keyed by each user's system and querying with the key of the system to get an array of trades per system. Simple. I am not using a reduce function currently to calculate any stats because I couldn't figure out how to get everything I need without getting a reduce overflow error. Here is an example of rows that are getting emitted from couch: {"total_rows":134,"offset":0,"rows":[ {"id":"5b1dcd47221e160d8721feee4ccc64be", "key":["80e40ba2fa43589d57ec3f1d19db41e6","2010/05/14 04:32:37 +0000"], null, "doc":{ "_id":"5b1dcd47221e160d8721feee4ccc64be", "_rev":"1-bc9fe763e2637694df47d6f5efb58e5b", "couchrest-type":"Trade", "system":"80e40ba2fa43589d57ec3f1d19db41e6", "pair":"EUR/USD", "direction":"Buy", "entry":12600, "exit":12700, "stop_loss":12500, "profit_target":12700, "status":"Closed", "slug":"101332132375", "custom_tracking": [{"name":"signal", "value":"Pin Bar"}] "updated_at":"2010/05/14 04:32:37 +0000", "created_at":"2010/05/14 04:32:37 +0000", "result":100}} ]} In my rails 3 controller I am basically just populating an array of trades such as the one above and then extracting out the relevant data into smaller arrays that I can compute my statistics on. Here is my show action for the page that I want to display the stats and all the trades: def show @trades = Trade.by_system(:startkey => [@system.id], :endkey => [@system.id, Time.now ]) @trades.each do |trade| if trade.result > 0 @winning_trades << trade.result elsif trade.result < 0 @losing_trades << trade.result else @breakeven_trades << trade.result end if trade.direction == "Buy" @long_trades << trade.result else @short_trades << trade.result end if trade["custom_tracking"] != nil @custom_tracking << {"result" => trade.result, "variables" => trade["custom_tracking"]} end end end I am omitting some other stuff that is going on, but that is the gist of what I am doing. 
Then I am calculating stuff in the view layer to produce some results: <% winning_long_trades = @long_trades.reject {|trade| trade <= 0 } %> <% winning_short_trades = @short_trades.reject {|trade| trade <= 0 } %> <ul> <li>Total Trades: <%= @trades.count %></li> <li>Winners: <%= @winning_trades.size %></li> <li>Biggest Winner (Pips): <%= @winning_trades.max %></li> <li>Average Win(Pips): <%= @winning_trades.sum/@winning_trades.size %></li> <li>Losers: <%= @losing_trades.size %></li> <li>Biggest Loser (Pips): <%= @losing_trades.min %></li> <li>Average Loss(Pips): <%= @losing_trades.sum/@losing_trades.size %></li> <li>Breakeven Trades: <%= @breakeven_trades.size %></li> <li>Long Trades: <%= @long_trades.size %></li> <li>Winning Long Trades: <%= winning_long_trades.size %></li> <li>Short Trades: <%= @short_trades.size %></li> <li>Winning Short Trades: <%= winning_short_trades.size %></li> <li>Total Pips: <%= @winning_trades.sum + @losing_trades.sum %></li> <li>Win Rate (%): <%= @winning_trades.size/@trades.count.to_f * 100 %></li> </ul> This produces the following results, which aside from a few things is exactly what I want: Total Trades: 134 Winners: 70 Biggest Winner (Pips): 1488 Average Win(Pips): 440 Losers: 58 Biggest Loser (Pips): -516 Average Loss(Pips): -225 Breakeven Trades: 6 Long Trades: 125 Winning Long Trades: 67 Short Trades: 9 Winning Short Trades: 3 Total Pips: 17819 Win Rate (%): 52.23880597014925 What I Am Wondering- Finally The Actual Questions: I am starting to get really skeptical of how well this method will work when a user has 5,000 trades instead of just 134 like in this example. I anticipate most users will only have somewhere under 200 per year, but some users may have a couple thousand trades per year. Probably no more than 5,000 per year. It seems to work ok now, but the page load times are already getting a tad high for my tastes. (About 800ms to generate the page according to rails logs with about a 250ms of that spent in the view layer.) I will end up caching this page I am sure, but I still need the regenerate the page each time a trade is updated and I can't afford to have this be too slow. Sooo..... Is doing something similar here possible with a straight couchdb reduce function? I am assuming handing this off to couch would possibly help with larger data sets. I couldn't figure out how, but I suppose that doesn't mean it isn't possible. If possible, any hints will be helpful. Could I use a list function if a reduce was not available due to reduce constraints? Are couchdb list functions suitable for this type of calculations? Anyone have any idea of whether or not list functions perform well? Any hints what one would look like for the type of calculations I am trying to achieve? I thought about other options such as running the calculations at the time each trade was saved or nightly if I had to and saving the results to a statistics doc that I could then query so that all the processing was done ahead of time. I would like this to be the last resort because then I can't really filter out trades by time periods dynamically like I would really like to. (I want to have a slider that a user can slide to only show trades from that time period using the startkey and endkey in couchdb if I can.) If I should continue running the calculations inside the rails app at the time of the page view, what can I do to improve my current implementation. I am new to rails, couch and programming in general. I am sure that I could be doing something better here. 
Do I need to create an array for each stat, or is there a better way to do that? I guess I would really just like some advice on how to tackle this problem. I want to keep the page generation time minimal since I anticipate these being some of the highest-trafficked pages. My gut is that I will need to offload the statistics calculation to couch, or run the stats in advance of when they are called, but I am not sure. Lastly: as I mentioned above, one of the primary reasons for using couch is to allow users to define their own things to track per trade. Getting the data into couch is no problem, but how would I be able to take the custom_tracking array and find how many winning trades there are for each named tracking attribute? If anyone can give me any hints on whether this is possible, that would be great. Thanks a bunch. Would really appreciate any help. Willing to fork out some $$$ if someone wants to take on the problem for me. (Don't know if that is allowed on Stack Overflow or not.)
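    For reference, a minimal sketch (Python, purely for illustration; the app above is Rails/CouchDB) of folding the trade list into one statistics structure in a single pass -- the same shape of result a CouchDB reduce or a precomputed statistics document could hand back to the view. The field names result and direction are taken from the document shown above; everything else is an assumption.

        def trade_stats(trades):
            """Aggregate every statistic in one pass instead of one array per stat."""
            stats = {"total": 0, "winners": 0, "losers": 0, "breakeven": 0,
                     "long": 0, "short": 0, "pips": 0,
                     "biggest_win": None, "biggest_loss": None}
            for t in trades:                      # each trade is a dict like the doc above
                r = t["result"]
                stats["total"] += 1
                stats["pips"] += r
                if r > 0:
                    stats["winners"] += 1
                    stats["biggest_win"] = r if stats["biggest_win"] is None else max(stats["biggest_win"], r)
                elif r < 0:
                    stats["losers"] += 1
                    stats["biggest_loss"] = r if stats["biggest_loss"] is None else min(stats["biggest_loss"], r)
                else:
                    stats["breakeven"] += 1
                stats["long" if t["direction"] == "Buy" else "short"] += 1
            return stats

        print(trade_stats([{"result": 100, "direction": "Buy"},
                           {"result": -50, "direction": "Sell"}]))
        # {'total': 2, 'winners': 1, 'losers': 1, ..., 'pips': 50, ...}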

    Read the article

  • Linked List manipulation, issues retrieving data c++

    - by floatfil
    I'm trying to implement some functions to manipulate a linked list. The implementation is a template typename T and the class is 'List' which includes a 'head' pointer and also a struct: struct Node { // the node in a linked list T* data; // pointer to actual data, operations in T Node* next; // pointer to a Node }; Since it is a template, and 'T' can be any data, how do I go about checking the data of a list to see if it matches the data input into the function? The function is called 'retrieve' and takes two parameters, the data and a pointer: bool retrieve(T target, T*& ptr); // This is the prototype we need to use for the project "bool retrieve : similar to remove, but not removed from list. If there are duplicates in the list, the first one encountered is retrieved. Second parameter is unreliable if return value is false. E.g., " Employee target("duck", "donald"); success = company1.retrieve(target, oneEmployee); if (success) { cout << "Found in list: " << *oneEmployee << endl; } And the function is called like this: company4.retrieve(emp3, oneEmployee) So that when you cout *oneEmployee, you'll get the data of that pointer (in this case the data is of type Employee). (Also, this is assuming all data types have the apropriate overloaded operators) I hope this makes sense so far, but my issue is in comparing the data in the parameter and the data while going through the list. (The data types that we use all include overloads for equality operators, so oneData == twoData is valid) This is what I have so far: template <typename T> bool List<T>::retrieve(T target , T*& ptr) { List<T>::Node* dummyPtr = head; // point dummy pointer to what the list's head points to for(;;) { if (*dummyPtr->data == target) { // EDIT: it now compiles, but it breaks here and I get an Access Violation error. ptr = dummyPtr->data; // set the parameter pointer to the dummy pointer return true; // return true } else { dummyPtr = dummyPtr->next; // else, move to the next data node } } return false; } Here is the implementation for the Employee class: //-------------------------- constructor ----------------------------------- Employee::Employee(string last, string first, int id, int sal) { idNumber = (id >= 0 && id <= MAXID? id : -1); salary = (sal >= 0 ? 
sal : -1); lastName = last; firstName = first; } //-------------------------- destructor ------------------------------------ // Needed so that memory for strings is properly deallocated Employee::~Employee() { } //---------------------- copy constructor ----------------------------------- Employee::Employee(const Employee& E) { lastName = E.lastName; firstName = E.firstName; idNumber = E.idNumber; salary = E.salary; } //-------------------------- operator= --------------------------------------- Employee& Employee::operator=(const Employee& E) { if (&E != this) { idNumber = E.idNumber; salary = E.salary; lastName = E.lastName; firstName = E.firstName; } return *this; } //----------------------------- setData ------------------------------------ // set data from file bool Employee::setData(ifstream& inFile) { inFile >> lastName >> firstName >> idNumber >> salary; return idNumber >= 0 && idNumber <= MAXID && salary >= 0; } //------------------------------- < ---------------------------------------- // < defined by value of name bool Employee::operator<(const Employee& E) const { return lastName < E.lastName || (lastName == E.lastName && firstName < E.firstName); } //------------------------------- <= ---------------------------------------- // < defined by value of inamedNumber bool Employee::operator<=(const Employee& E) const { return *this < E || *this == E; } //------------------------------- > ---------------------------------------- // > defined by value of name bool Employee::operator>(const Employee& E) const { return lastName > E.lastName || (lastName == E.lastName && firstName > E.firstName); } //------------------------------- >= ---------------------------------------- // < defined by value of name bool Employee::operator>=(const Employee& E) const { return *this > E || *this == E; } //----------------- operator == (equality) ---------------- // if name of calling and passed object are equal, // return true, otherwise false // bool Employee::operator==(const Employee& E) const { return lastName == E.lastName && firstName == E.firstName; } //----------------- operator != (inequality) ---------------- // return opposite value of operator== bool Employee::operator!=(const Employee& E) const { return !(*this == E); } //------------------------------- << --------------------------------------- // display Employee object ostream& operator<<(ostream& output, const Employee& E) { output << setw(4) << E.idNumber << setw(7) << E.salary << " " << E.lastName << " " << E.firstName << endl; return output; } I will include a check for NULL pointer but I just want to get this working and will test it on a list that includes the data I am checking. Thanks to whoever can help and as usual, this is for a course so I don't expect or want the answer, but any tips as to what might be going wrong will help immensely!
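    For reference, a sketch of the same retrieval logic in Python (illustration only; the assignment above is C++): the essential point is the guard that stops the walk when the node pointer runs off the end of the list, which is what the for(;;) loop above is missing before it dereferences dummyPtr.

        class Node:
            def __init__(self, data, next=None):
                self.data, self.next = data, next

        def retrieve(head, target):
            node = head
            while node is not None:            # stop at the end instead of looping forever
                if node.data == target:
                    return True, node.data     # found: hand back the stored data
                node = node.next
            return False, None                 # not found; second value is unreliable

        head = Node("Duck, Donald", Node("Mouse, Mickey"))
        print(retrieve(head, "Mouse, Mickey"))   # (True, 'Mouse, Mickey')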

    Read the article

  • Noob Objective-C/C++ - Linker Problem/Method Signature Problem

    - by Josh
    There is a static class Pipe, defined in a C++ header that I'm including. The static method I'm interested in calling (from Objective-C) is this: static ERC SendUserGet(const UserId &_idUser,const GUID &_idStyle,const ZoneId &_idZone,const char *_pszMsg); I have access to an Objective-C data structure that appears to store a copy of userID and zoneID -- it looks like: @interface DataBlock : NSObject { GUID userID; GUID zoneID; } I looked up the GUID definition, and it's a struct with a bunch of overloaded equality operators. UserId and ZoneId from the method signature above are typedef'd to GUID. Now when I try to call the method, no matter how I cast the arguments ((const UserId), (UserId), etc.), I get the following linker error: Ld build/Debug/Seeker.app/Contents/MacOS/Seeker normal i386 cd /Users/josh/Development/project/Mac/Seeker setenv MACOSX_DEPLOYMENT_TARGET 10.5 /Developer/usr/bin/g++-4.2 -arch i386 -isysroot /Developer/SDKs/MacOSX10.5.sdk -L/Users/josh/Development/TS/Mac/Seeker/build/Debug -L/Users/josh/Development/TS/Mac/Seeker/../../../debug -L/Developer/Platforms/iPhoneOS.platform/Developer/usr/lib/gcc/i686-apple-darwin10/4.2.1 -F/Users/josh/Development/TS/Mac/Seeker/build/Debug -filelist /Users/josh/Development/TS/Mac/Seeker/build/Seeker.build/Debug/Seeker.build/Objects-normal/i386/Seeker.LinkFileList -mmacosx-version-min=10.5 -framework Cocoa -framework WebKit -lSAPI -lSPL -o /Users/josh/Development/TS/Mac/Seeker/build/Debug/Seeker.app/Contents/MacOS/Seeker Undefined symbols: "SocPipe::SendUserGet(_GUID const&, _GUID const&, _GUID const&, char const*)", referenced from: -[PeoplePaneController clickGet:] in PeoplePaneController.o ld: symbol(s) not found collect2: ld returned 1 exit status Is this a type/method-signature error, or truly some sort of linker error? The headers where all these types and static classes are defined are #imported -- I tried #include too, just in case, since I'm already stumbling :P Forgive me, I come from a web tech background, so this C-style memory management and immutability stuff is super hazy. Edit: Added full linker error text. Changed "function" to "method". Thanks, Josh

    Read the article

  • Noob Objective-C/C++ - Linker Problem/Function Def Problem

    - by Josh
    There is a static class Pipe, defined in a C++ header that I'm including. The function I'm interested in calling (from Objective-C) is this: static ERC SendUserGet(const UserId &_idUser,const GUID &_idStyle,const ZoneId &_idZone,const char *_pszMsg); I have access to an Objective-C data structure that appears to store a copy of userID and zoneID -- it looks like: @interface DataBlock : NSObject { GUID userID; GUID zoneID; } I looked up the GUID definition, and it's a struct with a bunch of overloaded equality operators. UserId and ZoneId from the function signature above are typedef'd to GUID. Now when I try to call the function, no matter how I cast the arguments ((const UserId), (UserId), etc.), I get the following linker error: "Pipe::SendUserGet(_GUID const&, _GUID const&, _GUID const&, char const*)", referenced from: -[PeoplePaneController clickGet:] in PeoplePaneController.o Is this a type/function-signature error, or truly some sort of linker error? The headers where all these types and static classes are defined are #imported -- I tried #include too, just in case, since I'm already stumbling :P Forgive me, I come from a web tech background, so this C-style memory management and immutability stuff is super hazy. Thanks, Josh

    Read the article

  • CAD like 3D geometry .NET library

    - by Naszta
    I am looking for a good CAD-like 3D library. I need basic geometric shapes (cube, sphere, torus, etc.), and the library should build the surface mesh from those shapes plus some boolean operations. I have found many libraries on Google (C++ with wrappers), but most of them are not really comfortable to use and/or do not support union/intersection:
        http://www.geometros.com/sgcore/index.htm - has a wrapped interface
        http://www.opencsg.org/ - no wrapped interface found
        http://carve-csg.com/ - no wrapped interface found
        http://gts.sourceforge.net/ - no wrapped interface found
        http://www.ogre3d.org/ - no basic geometric shapes or boolean operators found
        http://brlcad.org/ - its interface is not clear to me; no wrapped interface found
        http://www.cgal.org/ - currently trying to get it working; no wrapped interface found
        http://www.k-3d.org/ - no wrapped interface found
        http://www.opencascade.org/ - no wrapped interface found
        http://ilnumerics.net/ - does not support solid boolean operations
        http://www.techsoft3d.com/ - seems to be a really good one; supports both C++ and C#
        http://www.devdept.com/products/eyeshot/ - one more C# library; not tested yet
    Open source would be nice, but not necessary. Many thanks for the help. P.S.: see the previous topic. Update: in C# we will use the Eyeshot project.

    Read the article

  • constructing paramater query SQL - LIKE % in .cs and in grid view

    - by Indranil Mutsuddy
    Hello friends, I am trying to implement an admin login and operator (India, Australia, America) logins. Operator IDs start with 'IND123', 'AUS123' and 'AM123'. When the India operator logs in, he can see the details of only those members whose ID matches 'IND%'; the same applies to the Australian and American users. When the admin logs in, he can see details of members with IDs matching 'IND%', 'AUS%' or 'AM%'. I have a table which defines the role (i.e. admin or operator) and the corresponding prefix (IND, AUS, etc.). In the login page I created sessions for the role and prefix: PREFIX = myReader["Prefix"].ToString(); Session["LoginRole"] = myReader["Role"].ToString(); Session["LoginPrefix"] = String.Concat(PREFIX + "%"); That works fine. In the main page (after login) I have to count the number of members, so I wrote: string prefix = Session["LoginPrefix"].ToString(); string role = Session["LoginRole"].ToString(); if (role.Equals("ADMIN")) StrMemberId = "select count(*) from MemberList"; else StrMemberId = "select count(*) from MemberList where MemberID like '"+prefix+"'"; That works fine too. Problem: 1. I want to construct a parameterized query, something like StrMemberId = "select count(*) from MemberList where MemberID like '@prefix+'"; myCommd.Parameters.AddWithValue("@prefix", prefix); which is not working. 2. When displaying the members in the GridView I need to apply the same condition (something like: if (role.Equals("ADMIN")) show all members, else show members depending on the operator prefix) for both operator mode and admin mode. Where do I put the condition for the GridView, and how do I apply it? Please suggest something. Regards, Indranil
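    A minimal sketch of the parameterized-LIKE pattern being asked about (shown with Python's sqlite3 module purely for illustration; with SqlCommand the idea is the same: bind the prefix as a parameter and put the '%' wildcard in the parameter value rather than in the SQL text).

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE MemberList (MemberID TEXT)")
        conn.executemany("INSERT INTO MemberList VALUES (?)",
                         [("IND001",), ("IND002",), ("AUS001",)])

        prefix = "IND"   # the operator prefix; the session value above already carries the '%'
        count, = conn.execute(
            "SELECT COUNT(*) FROM MemberList WHERE MemberID LIKE ?",
            (prefix + "%",)).fetchone()          # wildcard lives in the value, not the SQL
        print(count)                             # 2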

    Read the article

  • What was Tim Sweeney thinking? (How does this C++ parser work?)

    - by Frank Krueger
    Tim Sweeney of Epic MegaGames is the lead developer for Unreal and a programming language geek. Many years ago he posted a screenshot to VoodooExtreme (not reproduced here). As a C++ programmer and Sweeney fan, I was captivated by it. It shows generic C++ code that implements some kind of scripting language, where that language itself seems to be generic in the sense that it can define its own grammar. Mr. Sweeney never explained himself. :-) It's rare to see this level of template programming, but you do see it from time to time when people want to push the compiler to generate great code or because they want to create generic code (for example, Modern C++ Design). Tim seems to be using it to create a grammar in Parser.cpp - you can see what look like prioritized binary operators. If that is the case, then why does Test.ae look like it's also defining a grammar? Obviously this is a puzzle that needs to be solved. Victory goes to the answer with a working version of this code, or the most plausible explanation, or to Tim Sweeney himself if he posts an answer. :-)

    Read the article

  • Could CouchDB benefit significantly from the use of BERT instead of JSON?

    - by Victor Rodrigues
    I really appreciate CouchDB's attempt to use universal web formats in everything it does: RESTful HTTP methods in every interaction, JSON objects, and JavaScript code to customize the database and documents. CouchDB seems to scale pretty well, but the per-request cost usually scares 'relational' people away. Many small business applications only ever deal with one machine, and that's all. In that case the scalability talk doesn't mean much; we need more performance per request, or people will not use it. BERT (Binary ERlang Term, http://bert-rpc.org/) has proven to be a faster and lighter format than JSON, and it is native to Erlang, the language in which CouchDB is written. Could CouchDB benefit from using BERT documents instead of JSON ones? I don't mean just for retrieving views, but for everything CouchDB does, including syncing -- and, as a consequence, using Erlang functions instead of JavaScript ones. This would modify some of CouchDB's original principles, because today it is very web oriented. Considering that I imagine few people make their database API public, and that data is usually accessed by users through an application, it would be worthwhile to be able to configure CouchDB to work faster. HTTP+JSON calls could still be handled by CouchDB, with an extra parsing cost in those cases.

    Read the article

  • Sentiment analysis for twitter in python

    - by Ran
    I'm looking for an open-source implementation, preferably in Python, of textual sentiment analysis (http://en.wikipedia.org/wiki/Sentiment_analysis). Is anyone familiar with such an open-source implementation I can use? I'm writing an application that searches Twitter for some search term, say "youtube", and counts "happy" tweets vs. "sad" tweets. I'm using Google App Engine, so it's in Python. I'd like to be able to classify the search results returned from Twitter, and I'd like to do that in Python, but I haven't been able to find such a sentiment analyzer so far, specifically not in Python. Preferably it already exists in Python, but if not, hopefully I can translate it. Note that the texts I'm analyzing are VERY short -- they are tweets -- so ideally the classifier is optimized for such short texts. BTW, Twitter does support the ":)" and ":(" operators in search, which aim to do just this, but unfortunately the classification they provide isn't that great, so I figured I might give this a try myself. Thanks! BTW, an early demo is here and the code I have so far is here, and I'd love to open-source it with any interested developer.
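    For illustration, a minimal sketch of the kind of classifier being asked about: a tiny word-list scorer for short texts. The word lists here are made up for the example; a real implementation would use a trained model (for instance a Naive Bayes classifier over labelled tweets, of the sort libraries such as NLTK provide).

        POSITIVE = {"love", "great", "happy", "awesome", ":)"}
        NEGATIVE = {"hate", "awful", "sad", "terrible", ":("}

        def classify(tweet):
            words = tweet.lower().split()
            score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
            return "happy" if score > 0 else "sad" if score < 0 else "neutral"

        print(classify("I love youtube :)"))   # happy
        print(classify("youtube is awful"))    # sad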

    Read the article

  • Nested dereferencing arrows in Perl: to omit or not to omit?

    - by DVK
    In Perl, when you have a nested data structure, it is permissible to omit the dereferencing arrows at the second and deeper levels of nesting. In other words, the following two syntaxes are identical: my $hash_ref = { 1 => [ 11, 12, 13 ], 3 => [31, 32] }; my $elem1 = $hash_ref->{1}->[1]; my $elem2 = $hash_ref->{1}[1]; # exactly the same as above Now, my question is, is there a good reason to choose one style over the other? It seems to be a popular bone of stylistic contention (just on SO, I accidentally bumped into this and this in the space of 5 minutes). So far, none of the usual suspects says anything definitive: perldoc merely says "you are free to omit the pointer dereferencing arrow". Conway's "Perl Best Practices" says "whenever possible, dereference with arrows", but that appears to apply only to dereferencing the main reference, not to the optional arrows at the second level of nested data structures. "Mastering Perl for Bioinformatics" author James Tisdall doesn't give a very solid preference either: "The sharp-witted reader may have noticed that we seem to be omitting arrow operators between array subscripts. (After all, these are anonymous arrays of anonymous arrays of anonymous arrays, etc., so shouldn't they be written $array->[$i]->[$j]->[$k]?) Perl allows this; only the arrow operator between the variable name and the first array subscript is required. It makes things easier on the eyes and helps avoid carpal tunnel syndrome. On the other hand, you may prefer to keep the dereferencing arrows in place, to make it clear you are dealing with references. Your choice." Personally, I'm on the side of "always put the arrows in, since it's more readable and obvious you're dealing with a reference".

    Read the article

  • Bitwise Interval Arithmetic

    - by KennyTM
    I've recently read an interesting thread on the D newsgroup, which basically asks: given two signed integers a ∈ [amin, amax], b ∈ [bmin, bmax], what is the tightest interval of a | b? I'm wondering whether interval arithmetic can be applied to the general bitwise operators (assuming infinite bits). Bitwise-NOT and the shifts are trivial, since they just correspond to -1 − x and 2^n · x. But bitwise-AND/OR are a lot trickier, due to the mix of bitwise and arithmetic properties. Is there a polynomial-time algorithm to compute the intervals of bitwise-AND/OR? Note: assume all bitwise operations run in linear time (in the number of bits), and that testing/setting a bit is constant time. The brute-force algorithm runs in exponential time. Because ~(a | b) = ~a & ~b and a ^ b = (a | b) & ~(a & b), solving the bitwise-AND and -NOT problems means bitwise-OR and -XOR are covered too. Although that thread suggests min{a | b} = max(amin, bmin), it is not the tightest bound: just consider [2, 3] | [8, 9] = [10, 11].
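    A sketch (Python, for illustration) of the kind of bound propagation being asked about, adapted from the well-known minOR/maxOR routines for unsigned intervals in Hacker's Delight ("Propagating Bounds through Logical Operations"). Each runs in time linear in the bit width, and a brute-force check confirms the [2, 3] | [8, 9] = [10, 11] example above; extending them to signed intervals needs extra case analysis not shown here.

        def min_or(a, b, c, d, bits=32):
            """Tight lower bound of x | y for x in [a, b], y in [c, d], unsigned."""
            m = 1 << (bits - 1)
            while m:
                if ~a & c & m:
                    t = (a | m) & -m
                    if t <= b:
                        a = t
                        break
                elif a & ~c & m:
                    t = (c | m) & -m
                    if t <= d:
                        c = t
                        break
                m >>= 1
            return a | c

        def max_or(a, b, c, d, bits=32):
            """Tight upper bound of x | y for x in [a, b], y in [c, d], unsigned."""
            m = 1 << (bits - 1)
            while m:
                if b & d & m:
                    t = (b - m) | (m - 1)
                    if t >= a:
                        b = t
                        break
                    t = (d - m) | (m - 1)
                    if t >= c:
                        d = t
                        break
                m >>= 1
            return b | d

        brute = [x | y for x in range(2, 4) for y in range(8, 10)]
        print(min_or(2, 3, 8, 9), max_or(2, 3, 8, 9))   # 10 11
        print(min(brute), max(brute))                   # 10 11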

    Read the article

  • Code for decoding/encoding a modified base64 URL

    - by Kirk Liemohn
    I want to base64 encode data to put it in a URL and then decode it within my HttpHandler. I have found that Base64 Encoding allows for a '/' character which will mess up my UriTemplate matching. Then I found that there is a concept of a "modified Base64 for URL" from wikipedia: A modified Base64 for URL variant exists, where no padding '=' will be used, and the '+' and '/' characters of standard Base64 are respectively replaced by '-' and '_', so that using URL encoders/decoders is no longer necessary and has no impact on the length of the encoded value, leaving the same encoded form intact for use in relational databases, web forms, and object identifiers in general. Using .NET I want to modify my current code from doing basic base64 encoding and decoding to using the "modified base64 for URL" method. Has anyone done this? To decode, I know it starts out with something like: string base64EncodedText = base64UrlEncodedText.Replace('-', '+').Replace('_', '/'); // Append '=' char(s) if necessary - how best to do this? // My normal base64 decoding now uses encodedText But, I need to potentially add one or two '=' chars to the end which looks a little more complex. My encoding logic should be a little simpler: // Perform normal base64 encoding byte[] encodedBytes = Encoding.UTF8.GetBytes(unencodedText); string base64EncodedText = Convert.ToBase64String(encodedBytes); // Apply URL variant string base64UrlEncodedText = base64EncodedText.Replace("=", String.Empty).Replace('+', '-').Replace('/', '_'); I have seen the Guid to Base64 for URL StackOverflow entry, but that has a known length and therefore they can hardcode the number of equal signs needed at the end.
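    For what it's worth, the same round trip sketched in Python (illustration only; the .NET side would keep using Convert.ToBase64String plus the character replacements shown above): the padding question reduces to appending (4 - length mod 4) mod 4 '=' characters, which '=' * (-len(s) % 4) expresses in one step.

        import base64

        def encode_url_safe(text):
            b64 = base64.urlsafe_b64encode(text.encode("utf-8")).decode("ascii")
            return b64.rstrip("=")                     # drop the padding for the URL

        def decode_url_safe(token):
            padded = token + "=" * (-len(token) % 4)   # restore the stripped padding
            return base64.urlsafe_b64decode(padded).decode("utf-8")

        token = encode_url_safe("any carnal pleasure")
        print(token)                    # contains no '+', '/' or '=' characters
        print(decode_url_safe(token))   # any carnal pleasure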

    Read the article

  • Is there an "embedded DBMS" to support multiple writer applications (processes) on the same db files

    - by Amir Moghimi
    I need to know if there is any embedded DBMS (preferably in Java and not necessarily relational) which supports multiple writer applications (processes) on the same set of db files. BerkeleyDB supports multiple readers but just one writer. I need multiple writers and multiple readers. UPDATE: It is not a multiple connection issue. I mean I do not need multiple connections to a running DBMS application (process) to write data. I need multiple DBMS applications (processes) to commit on the same storage files. HSQLDB, H2, JavaDB (Derby) and MongoDB do not support this feature. I think that there may be some File System limitations that prohibit this. If so, is there a File System that allows multiple writers on a single file? Use Case: The use case is a high-throughput clustered system that intends to store its high-volume business log entries into a SAN storage. Storing business logs in separate files for each server does not fit because query and indexing capabilities are needed on the whole biz logs. Because "a SAN typically is its own network of storage devices that are generally not accessible through the regular network by regular devices", I want to use SAN network bandwidth for logging while cluster LAN bandwidth is being used for other server to server and client to server communications.

    Read the article

  • Replacing XML reserved characters in SQL Server 2005

    - by Barn
    I'm working on a system that takes relational data from a SQL Server DB and uses SSIS to produce an XML extract, using SQL Server 2005's 'FOR XML PATH' command and a schema. The problem lies in replacing the XML reserved characters: 'FOR XML PATH' only replaces <, >, and &, not ' and ", so I need a way of replacing those myself. I've tried pre-processing the fields in the database to replace XML reserved characters with their entitised equivalents (e.g. & becomes &amp;), but once these fields are used to construct XML with FOR XML, the leading & is replaced with &amp; again, so I end up with &amp;amp; where I should have &amp;. What I've tried so far is altering the element's contents after the XML has been constructed, using XQuery inside SQL Server like so: DECLARE @data VARCHAR(MAX) SET @data = CONVERT(VARCHAR(MAX), [my xml column].query('data(/root/node_i_want)')) SELECT @data = [function to replace quotes etc](@data) SET [my xml column].modify('replace value of (/root/node_i_want)[1] with sql:variable("@data")') but I get the same problem. Essentially, am I doing something wrong in the above, or is there a way to tell FOR XML to entitise other characters, or something like that? Basically anything short of having to write a program to change the XML after it has been assembled in large batches and saved to files!
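    The double-escaping at the heart of the problem can be illustrated in a few lines (Python here purely for demonstration): escaping once, at the point the XML is built, and including the quote characters in that single pass gives the intended entities, whereas pre-entitising the column values and then escaping again produces the &amp;amp; seen above.

        from xml.sax.saxutils import escape

        raw = """Tom & Jerry's "best" <episode>"""
        once = escape(raw, {"'": "&apos;", '"': "&quot;"})
        print(once)    # Tom &amp; Jerry&apos;s &quot;best&quot; &lt;episode&gt;

        pre_entitised = raw.replace("&", "&amp;")   # what the pre-processing step did
        print(escape(pre_entitised))                # ...&amp;amp;... -- double-escaped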

    Read the article

  • What is your best-practice advice on implementing SQL stored procedures (in a C# winforms applicatio

    - by JYelton
    I have read these very good questions on SO about SQL stored procedures: When should you use stored procedures? and Are Stored Procedures more efficient, in general, than inline statements on modern RDBMS’s? I am a beginner on integrating .NET/SQL though I have used basic SQL functionality for more than a decade in other environments. It's time to advance with regards to organization and deployment. I am using .NET C# 3.5, Visual Studio 2008 and SQL Server 2008; though this question can be regarded as language- and database- agnostic, meaning that it could easily apply to other environments that use stored procedures and a relational database. Given that I have an application with inline SQL queries, and I am interested in converting to stored procedures for organizational and performance purposes, what are your recommendations for doing so? Here are some additional questions in my mind related to this subject that may help shape the answers: Should I create the stored procedures in SQL using SQL Management Studio and simply re-create the database when it is installed for a client? Am I better off creating all of the stored procedures in my application, inside of a database initialization method? It seems logical to assume that creating stored procedures must follow the creation of tables in a new installation. My database initialization method creates new tables and inserts some default data. My plan is to create stored procedures following that step, but I am beginning to think there might be a better way to set up a database from scratch (such as in the installer of the program). Thoughts on this are appreciated. I have a variety of queries throughout the application. Some queries are incredibly simple (SELECT id FROM table) and others are extremely long and complex, performing several joins and accepting approximately 80 parameters. Should I replace all queries with stored procedures, or only those that might benefit from doing so? Finally, as this topic obviously requires some research and education, can you recommend an article, book, or tutorial that covers the nuances of using stored procedures instead of direct statements?

    Read the article

  • Calculation Expression Parser with Nesting and Variables in ActionScript

    - by yuletide
    Hi There, I'm trying to enable dynamic fields in the configuration file for my mapping app, but I can't figure out how to parse the "equation" passed in by the user, at least not without writing a whole parser from scratch! I'm sure there is some easier way to do this, and so I'm asking for ideas! Basic idea: public var testString:String = "(#TOTPOP_CY#-#HISPOP_CY#)/#TOTPOP_CY#"; public var valueObject:Object = {TOTPOP_CY:1000, HISPOP_CY:100}; public function calcParse(eq:String):String { // do calculations return calculatedValue } So far, I was thinking of splitting the expression by either the operators, or maybe the variable tokens, but that gets rid of the parenthetical nesting. Alternatively, use a series of regex to search and replace each piece of the expression with its value, recursively running until only a number is left. But I don't think regex does math (i.e. replace "\d + \d" with the sum of the two numbers) Ideally, I'd just do a find/replace all variable names with their values, then run an eval(), but there's no eval in AS... eesh I downloaded some course materials for a course on compiler design, so maybe I'll just write a full-fledged calculator language and parser and port it over from the OTHER flex (the parser generator) :-D
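    A sketch of the substitution half of the problem (Python, for illustration only; ActionScript has no eval, as noted above, so a small parser would still be needed there): swap each #FIELD# token for its value with a regex, then hand the resulting plain arithmetic string to an evaluator. Python's eval merely stands in for that evaluator here.

        import re

        expression = "(#TOTPOP_CY#-#HISPOP_CY#)/#TOTPOP_CY#"
        values = {"TOTPOP_CY": 1000, "HISPOP_CY": 100}

        substituted = re.sub(r"#(\w+)#", lambda m: str(values[m.group(1)]), expression)
        print(substituted)                              # (1000-100)/1000
        print(eval(substituted, {"__builtins__": {}}))  # 0.9 -- illustration only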

    Read the article

  • Database design for invoices, invoice lines & revisions

    - by FreshCode
    I'm designing the 2nd major iteration of a relational database for a franchise's CRM (with lots of refactoring) and I need help on the best database design practices for storing job invoices and invoice lines with a strong audit trail of any changes made to each invoice. Current schema Invoices Table InvoiceId (int) // Primary key JobId (int) StatusId (tinyint) // Pending, Paid or Deleted UserId (int) // auditing user Reference (nvarchar(256)) // unique natural string key with invoice number Date (datetime) Comments (nvarchar(MAX)) InvoiceLines Table LineId (int) // Primary key InvoiceId (int) // related to Invoices above Quantity (decimal(9,4)) Title (nvarchar(512)) Comment (nvarchar(512)) UnitPrice (smallmoney) Revision schema InvoiceRevisions Table RevisionId (int) // Primary key InvoiceId (int) JobId (int) StatusId (tinyint) // Pending, Paid or Deleted UserId (int) // auditing user Reference (nvarchar(256)) // unique natural string key with invoice number Date (datetime) Total (smallmoney) Schema design considerations 1. Is it sensible to store an invoice's Paid or Pending status? All payments received for an invoice are stored in a Payments table (eg. Cash, Credit Card, Cheque, Bank Deposit). Is it meaningful to store a "Paid" status in the Invoices table if all the income related to a given job's invoices can be inferred from the Payments table? 2. How to keep track of invoice line item revisions? I can track revisions to an invoice by storing status changes along with the invoice total and the auditing user in an invoice revision table (see InvoiceRevisions above), but keeping track of an invoice line revision table feels hard to maintain. Thoughts? 3. Tax How should I incorporate sales tax (or 14% VAT in SA) when storing invoice data?

    Read the article

  • Data in two databases, eager spool resulting in query

    - by Valkyrie
    I have two databases in SQL2k5: one that holds a large amount of static data (SQL Database 1) (never updated but frequently inserted into) and one that holds relational data (SQL Database 2) related to the static data. They're separated mainly because of corporate guidelines and business requirements: assume for the following problem that combining them is not practical. There are places in SQLDB2 that PKs in SQLDB1 are referenced; triggers control the referential integrity, since cross-database relationships are troublesome in SQL Server. BUT, because of the large amount of data in SQLDB1, I'm getting eager spools on queries that join from the Id in SQLDB2 that references the data in SQLDB1. (With me so far? Maybe an example will help:) SELECT t.Id, t.Name, t2.Company FROM SQLDB1.table t INNER JOIN SQLDB2.table t2 ON t.Id = t2.FKId This query results in a eager spool that's 84% of the load of the query; the table in SQLDB1 has 35M rows, so it's completely choking this query. I can't create a view on the table in SQLDB1 and use that as my FK/index; it doesn't want me to create a constraint based on a view. Anyone have any idea how I can fix this huge bottleneck? (Short of putting the static data in the first db: believe me, I've argued that one until I'm blue in the face to no avail.) Thanks! valkyrie Edit: also can't create an indexed view because you can't put schemabinding on a view that references a table outside the database where the view resides. Dang it.

    Read the article

  • How to temporarily replace one primitive type with another when compiling to different targets in c#

    - by Keith
    How to easily/quickly replace float's for doubles (for example) for compiling to two different targets using these two particular choices of primitive types? Discussion: I have a large amount of c# code under development that I need to compile to alternatively use float, double or decimals depending on the use case of the target assembly. Using something like “class MYNumber : Double” so that it is only necessary to change one line of code does not work as Double is sealed, and obviously there is no #define in C#. Peppering the code with #if #else statements is also not an option, there is just too much supporting Math operators/related code using these particular primitive types. I am at a loss on how to do this apparently simple task, thanks! Edit: Just a quick comment in relation to boxing mentioned in Kyles reply: Unfortunately I need to avoid boxing, mainly since float's are being chosen when maximum speed is required, and decimals when maximum accuracy is the priority (and taking the 20x+ performance hit is acceptable). Boxing would probably rules out decimals as a valid choice and defeat the purpose somewhat. Edit2: For reference, those suggesting generics as a possible answer to this question note that there are many issues which count generics out (at least for our needs). For an overview and further references see Using generics for calculations

    Read the article

  • Correct escaping of delimited identifers in SQL Server without using QUOTENAME

    - by Ross Bradbury
    Is there anything else the code must do to sanitize identifiers (table, view, column) other than wrap them in double quotation marks and "double up" any double quotation marks present in the identifier name? References would be appreciated. I have inherited a code base that has a custom object-relational mapping (ORM) system. SQL cannot be written in the application, but the ORM must still eventually generate the SQL to send to SQL Server. All identifiers are quoted with double quotation marks: string QuoteName(string identifier) { return "\"" + identifier.Replace("\"", "\"\"") + "\""; } If I were building this dynamic SQL in SQL, I would use the built-in SQL Server QUOTENAME function: declare @identifier nvarchar(128); set @identifier = N'Client"; DROP TABLE [dbo].Client; --'; declare @delimitedIdentifier nvarchar(258); set @delimitedIdentifier = QUOTENAME(@identifier, '"'); print @delimitedIdentifier; -- "Client""; DROP TABLE [dbo].Client; --" I have not found any definitive documentation about how to escape quoted identifiers in SQL Server. I have found Delimited Identifiers (Database Engine), and I also saw this Stack Overflow question about sanitizing. If the ORM had to call the QUOTENAME function just to quote the identifiers, that would be a lot of unnecessary traffic to SQL Server. The ORM seems to be pretty well thought out with regard to SQL injection. It is in C# and predates the NHibernate port, Entity Framework, etc. All user input is sent using ADO.NET SqlParameter objects; it is just the identifier names that I am concerned about in this question. This needs to work on SQL Server 2005 and 2008.
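    The doubling rule itself is small enough to sketch (Python, for illustration; it mirrors what the C# QuoteName above and QUOTENAME(@identifier, '"') both produce for the example in the question). One extra behaviour to verify against the documentation: QUOTENAME returns NULL for inputs longer than 128 characters (the sysname limit), which the length check below stands in for.

        def quote_name(identifier):
            if len(identifier) > 128:                 # QUOTENAME's sysname length limit
                raise ValueError("identifier longer than 128 characters")
            return '"' + identifier.replace('"', '""') + '"'

        print(quote_name('Client"; DROP TABLE [dbo].Client; --'))
        # "Client""; DROP TABLE [dbo].Client; --"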

    Read the article

  • Regex with optional part doesn't create backreference

    - by padraigf
    I want to match an optional tag at the end of a line of text. Example input text: The quick brown fox jumps over the lazy dog {tag} I want to match the part in curly-braces and create a back-reference to it. My regex looks like this: ^.*(\{\w+\})? (somewhat simplified, I'm also matching parts before the tag): It matches the lines ok (with and without the tag) but doesn't create a back-reference to the tag. If I remove the '?' character, so regex is: ^.*(\{\w+\}) It creates a back-reference to the tag but then doesn't match lines without the tag. I understood from http://www.regular-expressions.info/refadv.html that the optional operator wouldn't affect the backreference: Round brackets group the regex between them. They capture the text matched by the regex inside them that can be reused in a backreference, and they allow you to apply regex operators to the entire grouped regex. but must've misunderstood something. How do I make the tag part optional and create a back-reference when it exists?
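    The behaviour can be reproduced in a couple of lines (Python's re module here, purely for illustration): a greedy leading .* consumes the whole line, so the optional group is allowed to match nothing; making the prefix lazy (.*?) and anchoring the end gives the group a chance to capture a trailing tag while still matching lines without one. Whether the same tweak applies depends on the regex flavour in use above, which the question doesn't state.

        import re

        pattern = re.compile(r'^.*?(\{\w+\})?$')      # lazy prefix, optional tag group

        with_tag = "The quick brown fox jumps over the lazy dog {tag}"
        without_tag = "The quick brown fox jumps over the lazy dog"

        print(pattern.match(with_tag).group(1))       # {tag}
        print(pattern.match(without_tag).group(1))    # None (line still matches)

        # For comparison, the greedy version lets .* swallow the tag,
        # so the optional group settles for matching nothing:
        print(re.match(r'^.*(\{\w+\})?$', with_tag).group(1))   # None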

    Read the article

  • Scheme Infix to Postfix

    - by Cody
    Let me establish that this is part of a class assignment, so I'm definitely not looking for a complete code answer. Essentially we need to write a converter in Scheme that takes a list representing a mathematical equation in infix format and then outputs a list with the equation in postfix format. We've been provided with the algorithm to do so -- simple enough. The issue is that there is a restriction against using any of the available imperative language features, and I can't figure out how to do this in a purely functional manner. This is our first introduction to functional programming in my program. I know I'm going to be using recursion to iterate over the list of items in the infix expression, like so: (define (itp ifExpr) ( ; do some processing using cond statement (itp (cdr ifExpr)) )) I have all of the processing implemented (at least as best I can without knowing how to do the rest), but the algorithm I'm using requires that operators be pushed onto a stack and used later. My question is: how do I implement a stack in this function that is available to all of the recursive calls as well?
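    Since the question is about keeping a stack across purely functional recursive calls, here is a sketch of the idea in Python rather than Scheme (so it doesn't hand over the assignment's answer directly): the stack and the output are simply threaded through the recursion as extra parameters instead of being mutated in place. The precedence table and token handling are assumptions made for the example; parentheses are not handled.

        PREC = {'+': 1, '-': 1, '*': 2, '/': 2}

        def to_postfix(tokens, stack=(), output=()):
            """Pure-functional shunting yard: the 'stack' is just another argument."""
            if not tokens:
                return list(output) + list(reversed(stack))    # flush remaining operators
            head, rest = tokens[0], tokens[1:]
            if head not in PREC:                               # operand: emit it
                return to_postfix(rest, stack, output + (head,))
            if stack and PREC[stack[-1]] >= PREC[head]:        # pop one higher-precedence op
                return to_postfix(tokens, stack[:-1], output + (stack[-1],))
            return to_postfix(rest, stack + (head,), output)   # push the operator

        print(to_postfix(['3', '+', '4', '*', '2', '-', '1']))
        # ['3', '4', '2', '*', '+', '1', '-']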

    Read the article

  • how to get cartesian products between database and local sequences in linq?

    - by JD
    I saw this similar question here but can't figure out how to use Contains in Cartesian product desired result situation: http://stackoverflow.com/questions/1712105/linq-to-sql-exception-local-sequence-cannot-be-used-in-linq-to-sql-implementatio Let's say I have following: var a = new [] { 1, 4, 7 }; var b = new [] { 2, 5, 8 }; var test = from i in a from j in b select new { A = i, B = j, AB = string.Format("{0:00}a{1:00}b", i, j), }; foreach (var t in test) Console.Write("{0}, ", t.AB); This works great and I get a dump like so (note, I want the cartesian product): 01a02b, 01a05b, 01a08b, 04a02b, 04a05b, 04a08b, 07a02b, 07a05b, 07a08b, Now what I really want is to take this and cartesian product it again against an ID from a database table I have. But, as soon as I add in one more from clause that instead of referencing objects, references SQL table, I get an error. So, altering above to something like so where db is defined as a new DataContext (i.e., class deriving from System.Data.Linq.DataContext): var a = new [] { 1, 4, 7 }; var b = new [] { 2, 5, 8 }; var test = from symbol in db.Symbols from i in a from j in b select new { A = i, B = j, AB = string.Format("{0}{1:00}a{2:00}b", symbol.ID, i, j), }; foreach (var t in test) Console.Write("{0}, ", t.AB); The error I get is following: Local sequence cannot be used in LINQ to SQL implementations of query operators except the Contains operator Its related to not using Contains apparently but I'm unsure how Contains would be used when I don't really want to constrict the results - I want the Cartesian product for my situation. Any ideas of how to use Contains above and still yield the Cartesian product when joining database and local sequences?

    Read the article

  • How to scrape Google SERP based on copyright year?

    - by Michael Mao
    Hi all: I know there must be ways to do this sort of things. I am not pro in RoR or Python, not even an expert in PHP. So my solution tends to be quite dumb: It uses a FireFox add-on called imarcos to scrape the target urls from Google SERP, and use PHP to store info into the database. At the very core of my workaround there lies a problem: How to specifically find target urls based on their copyright year? I mean, something like "copyright 1998-2006" in the footer is to be considered a target, but my search results are not 100% accurate. I used the following url to search : http://www.google.com.au/#hl=en&q=inurl:.com.au+intext:copyright+1995..2007+--2008+--2009&start=0&cad=b&fp=6a8119b094529f00 It reads : search for pages that have .com.au in URL and a copyright range from 1995 to 2007 exclude the year of 2008 or 2009. Starting position is 0, of course the offset can be changed. I've already done a dummy list and honestly I am not pleased with the result. That's mostly because I cannot find a way to restrict search terms in the exact order as they are entered into the search url. copyright can appear in anywhere on page and doesn't necessarily before the years, that's the current story. Is there a more clear way to sort out this? Oh, almost forgot to say the client doesn't wanna spent too much in this - I cannot persuade him simply buy some cool software, unfortunately. I hope there is a way to use clever Google search operators or similar things to go around this issue. Many thanks in advance!

    Read the article

  • What ever happened to APL?

    - by lkessler
    When I was at University 30 years ago, I used a programming language called APL. I believe the acronym stood for "A Programming Language", This language was interpretive and was especially useful for array and matrix operations with powerful operators and library functions to help with that. Did you use APL? Is this language still in use anywhere? Is it still available, either commercially or open source? I remember the combinatorics assignment we had. It was complex. It took a week of work for people to program it in PL/1 and those programs ranged from 500 to 1000 lines long. I wrote it in APL in under an hour. I left it at 10 lines for readability, although I should have been a purist and worked another hour to get it into 1 line. The PL/1 programs took 1 or 2 minutes to run on the IBM mainframe and solve the problem. The computer charge was $20. My APL program took 2 hours to run and the charge was $1,500 which was paid for by our Computer Science Department's budget. That's when I realized that a week of my time is worth way more than saving some $'s in someone else's budget. I got an A+ in the course. p.s. Don't miss this presentation entitled: "APL one of the greatest programming languages ever"

    Read the article
