Search Results

Search found 3875 results on 155 pages for 'hibernate criteria'.

Page 138/155

  • Hadoop Map/Reduce - simple use example to do the following...

    - by alexeypro
    I have a MySQL database where I store a BLOB (which contains a JSON object) and an ID (for this JSON object). Each JSON object contains a lot of different information, say, "city: Los Angeles" and "state: California". There are about 500k such records for now, but they are growing, and each JSON object is quite big.

    My goal is to do searches (real-time) in the MySQL database. Say, I want to search for all JSON objects which have "state" set to "California" and "city" set to "San Francisco". I want to utilize Hadoop for the task. My idea is that there will be a "job" which takes chunks of, say, 100 records (rows) from MySQL, verifies them according to the given search criteria, and returns those IDs which qualify. Pros/cons?

    I understand that one might think I should just use plain SQL for this, but the JSON object structure is pretty "heavy": if I put it into SQL schemas, there will be at least 3-5 table joins, which (I tried, really) creates quite a headache, and building all the right indexes eats RAM faster than one can think. ;-) And even then, every SQL query has to be analyzed to make sure it uses the indexes, otherwise a full scan is literally a pain. With such a structure, the only way "up" is vertical scaling, and I am not sure that is the best option for me, as I see how the JSON objects (the data structure) will grow, and I see that the number of them will grow too. :-)

    Help? Can somebody point me to simple examples of how this can be done? Does it make sense at all? Am I missing something important? Thank you.
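
    A minimal sketch of the map side of such a job (this is not the poster's code): it assumes the MySQL rows have first been exported as tab-separated "id<TAB>json" text lines and that the search criteria are passed through the job configuration; the class name, configuration keys, and the substring-based matching are illustrative assumptions.

        import java.io.IOException;

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.io.LongWritable;
        import org.apache.hadoop.io.NullWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapreduce.Mapper;

        // Map-only job: emit the IDs of records whose JSON matches the criteria.
        public class JsonFilterMapper extends Mapper<LongWritable, Text, Text, NullWritable> {
            private String state;
            private String city;

            @Override
            protected void setup(Context context) {
                Configuration conf = context.getConfiguration();
                state = conf.get("filter.state"); // e.g. "California"
                city = conf.get("filter.city");   // e.g. "San Francisco"
            }

            @Override
            protected void map(LongWritable offset, Text line, Context context)
                    throws IOException, InterruptedException {
                String[] parts = line.toString().split("\t", 2);
                if (parts.length < 2) {
                    return;
                }
                String id = parts[0];
                String json = parts[1];
                // A real job would parse the JSON properly; a substring check stands in here.
                if (json.contains("\"state\":\"" + state + "\"")
                        && json.contains("\"city\":\"" + city + "\"")) {
                    context.write(new Text(id), NullWritable.get());
                }
            }
        }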

    Read the article

  • Rolling Back a Transaction with MySQL Connector in VB.net

    - by Jonathan
    Hey all- I have one multi-row INSERT statement (300 or so sets of values) that I would like to commit to the MySQL database in an all-or-nothing fashion.

        insert into table VALUES (1, 2, 3), (4, 5, 6), (7, 8, 9);

    In some cases, a set of values in the command will not meet the criteria of the table (duplicate key, for example). When that happens I do not want any of the previous sets added to the database. I've implemented this with the following code, however, my rollback command doesn't appear to be making a difference. I've used this documentation: http://dev.mysql.com/doc/refman/5.0/es/connector-net-examples-mysqltransaction.html

        Dim transaction As MySqlTransaction = sqlConnection.BeginTransaction()
        sqlCommand = New MySqlCommand(insertStr, sqlConnection, transaction)
        Try
            sqlCommand.ExecuteNonQuery()
        Catch ex As Exception
            writeToLog("EXCEPTION: " & ex.Message & vbNewLine)
            writeToLog("Could not execute " & sqlCmd & vbNewLine)
            Try
                transaction.Rollback()
                writeToLog("All statements were rolled back." & vbNewLine)
                Return False
            Catch rollbackEx As Exception
                writeToLog("EXCEPTION: " & rollbackEx.Message & vbNewLine)
                writeToLog("All statements were not rolled back." & vbNewLine)
                Return False
            End Try
        End Try
        transaction.Commit()

    I get the DUPLICATE KEY exception thrown, no Rollback exception thrown, and every set of values up to the duplicate key committed to the database. What am I doing wrong? Thanks- Jonathan

    Read the article

  • Sort and limit queryset by comment count and date using queryset.extra() (django)

    - by thornomad
    I am trying to sort/narrow a queryset of objects based on the number of comments each object has as well as by the timeframe during which the comments were posted. I am using a queryset.extra() method (using django_comments, which utilizes generic foreign keys). I got the idea for using queryset.extra() (and the code) from here. This is a follow-up question to my initial question yesterday (which shows I am making some progress).

    Current Code: What I have so far works in that it will sort by the number of comments; however, I want to extend the functionality and also be able to pass a time frame argument (eg, 7 days) and return an ordered list of the most commented posts in that time frame. Here is what my view looks like with the basic functionality intact:

        import datetime
        from django.contrib.comments.models import Comment
        from django.contrib.contenttypes.models import ContentType
        from django.db.models import Count, Sum
        from django.views.generic.list_detail import object_list

        def custom_object_list(request, queryset, *args, **kwargs):
            '''Extending the list_detail.object_list to allow some sorting.

            Example: http://example.com/video?sort_by=comments&days=7
            Would get a list of the videos sorted by most comments in
            the last seven days.
            '''
            try:
                # this is where I started working on the date business ...
                days = int(request.GET.get('days', None))
                period = datetime.datetime.utcnow() - datetime.timedelta(days=int(days))
            except (ValueError, TypeError):
                days = None
                period = None
            sort_by = request.GET.get('sort_by', None)
            ctype = ContentType.objects.get_for_model(queryset.model)
            if sort_by == 'comments':
                queryset = queryset.extra(select={
                    'count': """
                        SELECT COUNT(*) AS comment_count
                        FROM django_comments
                        WHERE content_type_id=%s
                        AND object_pk=%s.%s
                    """ % (
                        ctype.pk,
                        queryset.model._meta.db_table,
                        queryset.model._meta.pk.name
                    ),
                }, order_by=['-count']).order_by('-count', '-created')
            return object_list(request, queryset, *args, **kwargs)

    What I've Tried: I am not well versed in SQL, but I did try just adding another WHERE criterion by hand to see if I could make some progress:

        SELECT COUNT(*) AS comment_count
        FROM django_comments
        WHERE content_type_id=%s
        AND object_pk=%s.%s
        AND submit_date='2010-05-01 12:00:00'

    But that didn't do anything except mess around with my sort order. Any ideas on how I can add this extra layer of functionality? Thanks for any help or insight.
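
    One possible direction, sketched under assumptions (it keeps the question's variable names, but the select_params approach itself is untested here): put the date condition inside the correlated subquery and let Django bind the cutoff as a parameter, so the value is quoted by the database driver rather than pasted into the SQL.

        # Hypothetical variation of the view's sort_by == 'comments' branch.
        if sort_by == 'comments':
            # Fall back to a very old cutoff when no ?days= argument was given.
            cutoff = period or datetime.datetime(1970, 1, 1)
            queryset = queryset.extra(select={
                'count': """
                    SELECT COUNT(*)
                    FROM django_comments
                    WHERE content_type_id = %s
                      AND object_pk = %s.%s
                      AND submit_date >= %%s
                """ % (
                    ctype.pk,
                    queryset.model._meta.db_table,
                    queryset.model._meta.pk.name
                ),
            }, select_params=[cutoff]).order_by('-count', '-created')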

    Read the article

  • Reporting tool for OLAP, *not* OLTP!

    - by Stefan Moser
    I'm looking for a control that I can put on top of an already existing OLAP star schema to allow the user to define their own "queries" and generate reports. Right now I have some predefined reports built on top of the cubes, but I'd like to allow the user to define their own criteria based on the cubes that I've created. I've found lots of products that will allow you to treat a transactional table like an OLAP cube, but nothing specifically for pre-existing cubes.

    EDIT: Let me be clear, I know there are countless reporting tools out there that claim to report on OLAP cubes. The problem is they all assume they are looking at transactional data and try to create their own cubes. I have tables that contain tens, if not hundreds, of millions of records. Most tools crash when handling this much data; the others just run incredibly slowly.

    I don't want a tool that targets the business people. I want a tool that understands what star and snowflake schemas are. I want to be able to tell it what the fact tables are and what the dimension tables are, and have it create a UI on top of them. This is an easier problem for the tool vendor to solve because I am spoon-feeding them the cubes. I want to rely on the fact that cubes are a standardized pattern, and I want a tool that takes advantage of this fact. I want a tool that targets developers and starts with the assumption that I actually know how to manage my data; it just needs to build pretty reports for me and not crumble under the weight of my data.

    Read the article

  • Why is there unreachable code here?

    - by Richard
    I am writing a C# app and want to output error messages to either the console or a message box (depending on the app type: enum AppTypeChoice { Console, Windows }), and also control whether the app keeps running or not (bool StopOnError). I came up with this method that will check all the criteria, but I'm getting an "unreachable code detected" warning. I can't see why! Here is the whole method (brace yourselves for some hobbyist code!):

        public void OutputError(string message)
        {
            string standardMessage = "Something went WRONG!. [ But I'm not telling you what! ]";
            string defaultMsgBoxTitle = "Aaaaarrrggggggggggg!!!!!";
            string dosBoxOutput = "\n\n*** " + defaultMsgBoxTitle + " *** \n\n Message was: '" + message + "'\n\n";

            AppTypeChoice appType = DataDefs.AppType;
            DebugLevelChoice level = DataDefs.DebugLevel;

            // Decide how much info we should give out here...
            if (level != DebugLevelChoice.None)
            {
                // Give some info....
                if (appType == AppTypeChoice.Windows)
                    MessageBox.Show(message, defaultMsgBoxTitle, MessageBoxButtons.OK, MessageBoxIcon.Error);
                else
                    Console.WriteLine(dosBoxOutput);
            }
            else
            {
                // Be very secretive...
                if (appType == AppTypeChoice.Windows)
                    MessageBox.Show(standardMessage, defaultMsgBoxTitle, MessageBoxButtons.OK, MessageBoxIcon.Error);
                else
                    Console.WriteLine(standardMessage);
            }

            // Decide if app falls over or not..
            if (DataDefs.StopOnError == true)
                Environment.Exit(0);

            // UNREACHABLE CODE HERE
        }

    Also, while I have your attention: to get the app type, I'm just using a constant at the top of the file (i.e. AppTypeChoice.Console in a Console app, etc.) - is there a better way of doing this (I mean finding out in code whether it is a DOS or Windows app)? Also, I noticed that I can use a message box with a fully-qualified path in a Console app... How bad is it to do that (I mean, will I get tarred and feathered when other developers see it?!) Thanks for your help

    Read the article

  • How to join a table in symfony (Propel) and retrieve objects from both tables with one query

    - by Jean-Philippe
    Hi, I'm trying to find an easy way to fetch data from two joined MySQL tables using Propel (inside Symfony), but in one query. Let's say I do this simple thing:

        $comment = CommentPeer::RetrieveByPk(1);
        print $comment->getArticle()->getTitle(); // Assuming the Article table is joined to the Comment table

    Symfony will issue two queries to get that done: the first one to get the Comment row, and the second one to get the Article row linked to the Comment one. Now, I am trying to find a way to do all that within one query. I've tried to join them using:

        $c = new Criteria();
        $c->addJoin(CommentPeer::ARTICLE_ID, ArticlePeer::ID);
        $c->add(CommentPeer::ID, 1);
        $comment = CommentPeer::doSelectOne($c);

    But when I try to get the Article object using $comment->getArticle(), it will still issue the query to get the Article row. I could easily clear all the selected columns and select only the columns I need, but that would not give me the Propel object I'd like, just an array of the query's raw result.

    So how can I get a populated Propel object of two (or more) joined tables with only one query? Thanks, JP
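
    A sketch of one possibility, under the assumption that the Comment schema declares a foreign key to Article: Propel's generated peer classes normally include a doSelectJoin<RelatedTable>() method that runs a single joined query and hydrates both objects, so the exact method name below depends on the generated model.

        <?php
        // Hypothetical: CommentPeer::doSelectJoinArticle() is the generated join-hydrating method.
        $c = new Criteria();
        $c->add(CommentPeer::ID, 1);
        $comments = CommentPeer::doSelectJoinArticle($c); // one query over both tables

        if (count($comments) > 0) {
            $comment = $comments[0];
            // getArticle() returns the already-hydrated Article, so no second query is issued.
            print $comment->getArticle()->getTitle();
        }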

    Read the article

  • Binding a date string parameter in an MS Access PDO query

    - by harryg
    I've made a PDO database class which I use to run queries on an MS Access database. When querying using a date condition, as is common in SQL, dates are passed as a string. Access usually expects the date to be surrounded in hashes, however. E.g.

        SELECT transactions.amount FROM transactions WHERE transactions.date = #2013-05-25#;

    If I were to run this query using PDO I might do the following:

        //instantiate pdo connection etc... resulting in a $db object
        $stmt = $db->prepare('SELECT transactions.amount FROM transactions WHERE transactions.date = #:mydate#;'); //prepare the query
        $stmt->bindValue('mydate', '2013-05-25', PDO::PARAM_STR); //bind the date as a string
        $stmt->execute(); //run it
        $result = $stmt->fetch(); //get the results

    As far as my understanding goes, the statement that results from the above would look like this, since binding a string results in it being surrounded by quotes:

        SELECT transactions.amount FROM transactions WHERE transactions.date = #'2013-05-25'#;

    This causes an error and prevents the statement from running. What's the best way to bind a date string in PDO without causing this error? I'm currently resorting to sprintf-ing the string, which I'm sure is bad practise.

    Edit: if I pass the hash-surrounded date then I still get the error as below:

        Fatal error: Uncaught exception 'PDOException' with message 'SQLSTATE[22018]: Invalid character value for cast specification: -3030 [Microsoft][ODBC Microsoft Access Driver] Data type mismatch in criteria expression. (SQLExecute[-3030] at ext\pdo_odbc\odbc_stmt.c:254)' in C:\xampp\htdocs\ips\php\classes.php:49
        Stack trace:
        #0 C:\xampp\htdocs\ips\php\classes.php(49): PDOStatement->execute()
        #1 C:\xampp\htdocs\ips\php\classes.php(52): database->execute()
        #2 C:\xampp\htdocs\ips\try2.php(12): database->resultset()
        #3 {main}
          thrown in C:\xampp\htdocs\ips\php\classes.php on line 49
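
    A sketch of one thing worth trying (an assumption, not a confirmed fix for the Access ODBC driver): drop the hash delimiters from the SQL entirely and bind the date as an ordinary positional parameter, letting the driver handle the conversion.

        <?php
        // Hypothetical variation on the query above; table and column names follow the question.
        $stmt = $db->prepare(
            'SELECT transactions.amount FROM transactions WHERE transactions.date = ?'
        );
        $stmt->bindValue(1, '2013-05-25', PDO::PARAM_STR); // no #...# around the placeholder
        $stmt->execute();
        $result = $stmt->fetch();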

    Read the article

  • Algorithm To Select Most Popular Places from Database

    - by Russell C.
    We have a website that contains a database of places. For each place, our users are able to take one of the following actions, which we record:

        VIEW      - View its profile
        RATING    - Rate it on a scale of 1-5 stars
        REVIEW    - Review it
        COMPLETED - Mark that they've been there
        WISH LIST - Mark that they want to go there
        FAVORITE  - Mark that it's one of their favorites

    In our database table of places, each place contains a count of the number of times each action above was taken, as well as the average rating given by users:

        views, ratings, avg_rating, completed, wishlist, favorite

    What we want to be able to do is generate lists of the top places using the above information. Ideally, we would want to be able to generate this list using a relatively simple SQL query without needing to do any legwork to calculate additional fields or stack-rank places against one another. That being said, since we only have about 50,000 places, we could run a nightly cron job to calculate some fields, such as rankings in different categories, if it would make a meaningful difference in the overall results of our top places.

    I'd appreciate it if you could make some suggestions on how we should think about bubbling the best places to the top, which criteria we should weight more heavily, and - given that information - what the MySQL query would need to look like in order to select the top 10 places. One thing to note is that at this time we are less concerned with the recency of a place being popular, meaning that looking at the aggregate information is fine and that more recent data doesn't need to be weighted more heavily.

    Thanks in advance for your help & advice!
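
    One way to start, sketched with made-up weights (the table name, the log() damping on views, and every coefficient below are assumptions to tune, not recommendations derived from the question): compute a single weighted score over the existing counters and order by it.

        -- Hypothetical "places" table with the counters described above.
        SELECT id,
               ( LOG(1 + views)                        * 1.0
               + ratings * COALESCE(avg_rating, 0)     * 2.0
               + completed                             * 1.5
               + wishlist                              * 1.0
               + favorite                              * 3.0 ) AS popularity_score
        FROM   places
        ORDER  BY popularity_score DESC
        LIMIT  10;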

    Read the article

  • 1. I fill out a form & click submit. 2. I get the results page. Goal: Get the same results without f

    - by Chris
    This is my first time posting - I greatly appreciate any and all guidance on this subject.

    Background: I am building a real estate web site. I would like to use the free IDX data provided by my local MLS board. The MLS board does not allow me the option of displaying a predefined search and only provides me with a link to the search field. After filling out the search field, I am able to view the results.

    Goal: I would like to bypass this step and frame the results page into a GoDaddy website I am building, which supports HTML. Here is a link to the search page: http://fgcmls.rapmls.com/scripts/mgrqispi.dll?APPNAME=Fortmyers&PRGNAME=MLSLogin&ARGUMENT=vBSJvLQtMcbg7F0O0KnXDiggv%2F12B0S6Ss9wv4510QA%3D&KeyRid=1

    I am trying to only show the listings that appear in my neighborhood. Options include:

        1. Property Type - Residential
        2. GEO Area - FM11
        3. Developments: Fiddlesticks Country Club

    Once these criteria are entered, I have the page needed to make this project work. Thank all of you for taking the time to read this and for the time you spend helping me out.

    Best regards, Chris

    Read the article

  • Multiple WCF calls for a single ASP.NET page load

    - by Rodney Burton
    I have an existing ASP.NET web application I am redesigning to use a service architecture. I have the beginnings of a WCF service which I am able to call and perform functions with no problems. As far as updating data, it all makes sense. For example, I have a button that says Submit Order; it sends the data to the service, which does the processing.

    Here's my concern: I have an ASP.NET page that shows me a list of orders (View Orders page), and at the top I have a bunch of drop-down lists for order types and other search criteria, which are populated by querying different tables from the database (lookup tables, etc.). I am hoping to eventually completely decouple the web application from the DB, and use data contracts to pass information between the BLL, the SOA, and the web app.

    With that said, how can I reduce the number of WCF calls needed to load my "View Orders" page? I would need to make 1 call to get the list of orders, and 1 call for each drop-down list, etc., because those are populated by individual functions in my BLL. Is it good architecture to create a web service method that returns a specialized data contract that consists of everything you would need to display a View Orders page, in one shot? Something like this pseudocode:

        public class ViewOrderPageDTO
        {
            public OrderDTO[] Orders { get; set; }
            public OrderTypesDTO[] OrderTypes { get; set; }
            public OrderStatusesDTO[] OrderStatuses { get; set; }
            public CustomerListDTO[] CustomerList { get; set; }
        }

    Or is it better practice in the page_load event to make 5 or 6 or even 15 individual calls to the SOA to get the data needed to load the page, thereby bypassing the need for specialized WCF methods or DTOs that conglomerate other DTOs? Thanks for your input and suggestions.
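
    To make the first option concrete, here is a minimal sketch of what a single "page data" operation could look like; the service name and the BLL calls (OrderLogic, LookupLogic) are hypothetical placeholders, not existing code.

        using System.ServiceModel;

        [ServiceContract]
        public interface IOrderService
        {
            // One round trip returns everything the View Orders page needs.
            [OperationContract]
            ViewOrderPageDTO GetViewOrderPageData();
        }

        public class OrderService : IOrderService
        {
            public ViewOrderPageDTO GetViewOrderPageData()
            {
                // Each lookup still comes from its own BLL function; only the wire calls are consolidated.
                return new ViewOrderPageDTO
                {
                    Orders        = OrderLogic.GetOrders(),
                    OrderTypes    = LookupLogic.GetOrderTypes(),
                    OrderStatuses = LookupLogic.GetOrderStatuses(),
                    CustomerList  = LookupLogic.GetCustomers()
                };
            }
        }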

    Read the article

  • Templates, and C++ operator for logic: B contained by set A

    - by James Morris
    In C++, I'm looking to implement an operator for selecting items in a list (of type B) based upon B being contained entirely within A.

    In the book "The Logical Design of Digital Computers" by Montgomery Phister Jr (published 1958), p54, it says: F11 = A + ~B has two interesting and useful associations, neither of them having much to do with computer design. The first is the logical notation of implication... The second is the notation of inclusion... This may be expressed by a familiar-looking relation, B < A; or by the statement "B is included in A"; or by the boolean equation F11 = A + ~B = 1.

    My initial implementation was in C. Callbacks were given to the list to use for such operations. An example being a list of ints, and a struct containing two ints, min and max, for selection purposes. There, selection would be based upon B >= A->min && B <= A->max.

    Using C++ and templates, how would you approach this after having implemented a generic list in C using void pointers and callbacks? Is using < as an overridden operator for such purposes... <ugh> evil? </ugh> (or by using a class B for the selection criteria, implementing the comparison by overloading < ?)
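
    One way to sidestep overloading < is to express the inclusion test as a callable selection criterion; the sketch below is illustrative only (the Range struct and field names stand in for the question's min/max struct, and it assumes C++11 for the lambda).

        #include <algorithm>
        #include <iostream>
        #include <iterator>
        #include <vector>

        // Selection criterion A: an inclusive [min, max] range.
        struct Range {
            int min;
            int max;
            bool contains(int b) const { return b >= min && b <= max; }
        };

        // Return every element of 'items' that is included in criterion 'a'.
        template <typename T, typename Criteria>
        std::vector<T> select_included(const std::vector<T>& items, const Criteria& a) {
            std::vector<T> out;
            std::copy_if(items.begin(), items.end(), std::back_inserter(out),
                         [&a](const T& b) { return a.contains(b); });
            return out;
        }

        int main() {
            std::vector<int> values{1, 5, 9, 12, 20};
            Range a{5, 12};
            for (int v : select_included(values, a))
                std::cout << v << ' ';   // prints: 5 9 12
        }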

    Read the article

  • Deleting a non-owned dynamic array through a pointer

    - by ayanzo
    Hello all, I'm relatively novice when it comes to C++ as I was weaned on Java for much of my undergraduate curriculum ('tis a shame). Memory management has been a hassle, but I've purchased a number of books on ANSI C and C++. I've poked around the related questions, but couldn't find one that matched this particular criterion. Maybe it's so obvious nobody mentions it? This question has been bugging me, but I feel as if there's a conceptual point I'm not utilizing. Suppose:

        char original[56];
        original[0] = 'a';
        original[1] = 'b';
        original[2] = 'c';
        original[3] = 'd';
        original[4] = 'e';
        original[5] = '\0';

        char *shaved = shavecstr(original);
        delete[] shaved;

    where

        char* shavecstr(char* cstr)
        {
            size_t len = strlen(cstr);
            char* ncstr = new char[len];
            strcpy(ncstr, cstr);
            return ncstr;
        }

    In that the whole point is to have 'original' be a buffer that fills with characters and routinely has its copy shaved and used elsewhere. To prevent leaks, I want to free up the memory held by 'shaved' so it can be used again after it passes through some arguments. There is probably a good reason for why this is restricted, but there should be some way to free the memory, as by this configuration there is no way to access the original owner (pointer) of the data.

    Read the article

  • How can I test for an empty Breeze predicate?

    - by Megan
    I'm using Breeze to filter data requested on the client. My code looks a little like this:

    Client - Creating Filter Predicate

        var predicates = [];
        var criteriaPredicate = null;
        $.each(selectedFilterCriteria(), function (index, item) {
            criteriaPredicate = (index == 0)
                ? breeze.Predicate.create('criteriaId', breeze.FilterQueryOp.Equals, item)
                : criteriaPredicate.or('criteriaId', breeze.FilterQueryOp.Equals, item);
        });
        if (breeze.Predicate.isPredicate(criteriaPredicate)) {
            predicates.push(criteriaPredicate);
        }
        // Repeat for X Filter Criteria

        var filter = breeze.Predicate.and(predicates);
        return context.getAll(filter, data);

    Client - Context Query

        function getAll(predicate, dataObservable) {
            var query = breeze.EntityQuery.from('Data');
            if (breeze.Predicate.isPredicate(predicate)) {
                query = query.where(predicate);
            }
            return manager.executeQuery(query).then(success).fail(failure);
        }

    Issue: I'm having an issue with the request because, if there are no filters set, I apply an "empty" predicate (due to the var filter = breeze.Predicate.and([]) line), resulting in a request like http://mysite/api/app/Data?$filter=. The request is an invalid OData query since the value of the $filter argument cannot be empty.

    Is there a good way for me to check for an empty predicate? I know I can refactor my client code to not use a predicate unless there is at least one filterable item, but I thought I would check first to see if I overlooked some property or method on the Breeze Predicate.
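
    For reference, a small sketch of the refactor mentioned above (an assumption about intent, not a documented Breeze API): pass the predicate array through and only build the filter when it is non-empty, so an empty list never produces "$filter=".

        function getAll(predicates, dataObservable) {
            var query = breeze.EntityQuery.from('Data');
            // Only apply a where clause when at least one predicate was actually built.
            if (predicates && predicates.length > 0) {
                query = query.where(breeze.Predicate.and(predicates));
            }
            return manager.executeQuery(query).then(success).fail(failure);
        }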

    Read the article

  • Password Confirmation Overlay

    - by Alasdair
    Hello, I'm creating a J2EE web application that uses jQuery and Ajax to help with some of the presentation for a user-friendly interface. I've done a lot of work ensuring security around persistent login cookies, and I've decided to request the password from any user that logged in using a persistent login cookie before they are allowed to make any changes that could be malicious. This request would only happen once, to confirm the user is who they say they are, and will last throughout the session.

    At present, any request that meets these criteria has its request information stored in session, and then the user is forwarded to a page to confirm their password. Once confirmed, the user's original request is performed and the request information removed from session.

    What I would like to do is avoid all this redirection and minimize what's held in session (even if it's just for a small time), thus improving usability and convenience for the user. I believe that a jQuery overlay could allow me to prompt the user for their password (if required) and then continue to submit the request if successful. I would originally have used ThickBox, but since that's now deprecated I don't see the benefit in implementing it in an application at this development stage. However, I have tried to create an overlay using jQuery, but I've scrapped every attempt as I can't seem to make it all come together. My main problem is preventing the submission when the user incorrectly types a password or cancels the overlay.

    Desired flow: Persistent Login -> Sensitive Page Submit -> Password Confirmation Overlay -> [Continue Submit | Cancel | Incorrect]

    I have already created JavaScript code to encrypt the password to be sent in a parameter, but all I need now is a method of controlling the overlay and how best to use Ajax for this purpose. Please ignore the fact that this is a J2EE web application when answering, as it is irrelevant really.

    Thanks in advance, Alasdair
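
    As a rough illustration of that flow, a hedged jQuery sketch (the form and element IDs, the /confirm-password URL, and the "ok" response convention are all assumptions, not part of the application described): the submit is blocked until the overlay's Ajax password check succeeds.

        $(document).ready(function () {
            var confirmed = false;

            $('#sensitiveForm').submit(function () {
                if (confirmed) {
                    return true;                    // password already confirmed this session
                }
                $('#passwordOverlay').show();       // simple absolutely-positioned overlay div
                return false;                       // block the submission for now
            });

            $('#overlayContinue').click(function () {
                $.post('/confirm-password', { password: $('#overlayPassword').val() },
                    function (response) {
                        if (response === 'ok') {
                            confirmed = true;
                            $('#passwordOverlay').hide();
                            $('#sensitiveForm').submit();   // re-submit the original request
                        } else {
                            $('#overlayError').text('Incorrect password, please try again.');
                        }
                    });
            });

            $('#overlayCancel').click(function () {
                $('#passwordOverlay').hide();       // cancel: nothing is submitted
            });
        });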

    Read the article

  • Calling jquery function from ascx not working

    - by Metju
    Hi Guys, I'm having a problem with the following situation. I have an ascx which contains a submit button for a search criteria, and I am trying to call a validation function in a js file I've used throughout the site (this is the first time I'm using it in an ascx). Now I've just tried this:

        <script type="text/javascript" src="js/jquery-1.3.2.js"></script>
        <script type="text/javascript" src="js/jsAdmin_Generic_SystemValidation.js"></script>
        <script type="text/javascript">
            $(document).ready(function () {
                $(".submitBtn").click(function (e) {
                    alert("test");
                    alert($.Validate());
                    alert("test 2");
                });
            });
        </script>

    The file is being referenced correctly, as I am already seeing posts in Firebug that are done by it. This is the function:

        jQuery.extend({
            Validate: function () {
                // does validation...
            }
        });

    Now at first I was getting "Validate() is not a function" in Firebug. Since I did that alert testing, I am getting the first alert, then nothing, with no errors. Can anyone shed some light? Thanks

    Read the article

  • SQL Alchemy MVC and cross controller joins

    - by Khorkrak
    When using SQL Alchemy for abstracting your data access layer and using controllers as the way to access objects from that abstraction layer, how should joins be handled?

    So for example, say you have an Orders controller class that manages Order objects such that it provides getOrder, saveOrder, etc. methods, and likewise a similar controller for User objects. First of all, do you even need these controllers? Should you instead just treat SQL Alchemy as "the" thing for handling data access? Why bother with object-oriented controller stuff there when you instead have a clean declarative way to obtain and persist objects without having to write SQL directly either. Well, one reason could be that perhaps you may want to replace SQL Alchemy with direct SQL or Storm or whatever else. So having controller classes there to act as an intermediate layer helps limit what would need to change then.

    Anyway - back to the main question - so assuming you have these two controllers, now let's say you want the list of orders for a certain set of users meeting some criteria. How do you go about doing this? Generally you don't want the controllers crossing domains - the Orders controller knows only about Orders and the User controller just about Users - they don't mess with each other. You also don't want to go fetch all the Users that match and then feed a big list of user ids to the Orders controller to go find the matching Orders.

    What's needed is a join. Here's where I'm stuck - that seems to mean either the controllers must cross domains, or perhaps they should be done away with altogether and you simply do the join via SQL Alchemy directly and get the resulting User and/or Order objects as needed. Thoughts?
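
    For what the join itself looks like, here is a minimal SQLAlchemy sketch (the Order and User models, their columns, and the session setup are assumptions, not the poster's code); whichever layer owns it, the query stays a single statement.

        from sqlalchemy import create_engine
        from sqlalchemy.orm import sessionmaker

        from myapp.models import Order, User   # hypothetical declarative models

        engine = create_engine('sqlite:///app.db')
        Session = sessionmaker(bind=engine)
        session = Session()

        # Orders belonging to users that meet some criteria, fetched with one JOIN:
        orders = (
            session.query(Order)
            .join(User, Order.user_id == User.id)
            .filter(User.country == 'CA', Order.total > 100)
            .all()
        )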

    Read the article

  • Why does the Java Collections Framework offer two different ways to sort?

    - by dvanaria
    If I have a list of elements I would like to sort, Java offers two ways to go about this. For example, let's say I have a list of Movie objects and I'd like to sort them by title.

    One way I could do this is by calling the one-argument version of the static java.util.Collections.sort() method with my movie list as the single argument. So I would call Collections.sort(myMovieList). In order for this to work, the Movie class would have to be declared to implement the java.lang.Comparable interface, and the required method compareTo() would have to be implemented inside this class.

    Another way to sort is by calling the two-argument version of the static java.util.Collections.sort() method with the movie list and a java.util.Comparator object as its arguments. I would call Collections.sort(myMovieList, titleComparator). In this case, the Movie class wouldn't implement the Comparable interface. Instead, inside the main class that builds and maintains the movie list itself, I would create an inner class that implements the java.util.Comparator interface, and implement the one required method compare(). Then I'd create an instance of this class and call the two-argument version of sort(). The benefit of this second method is that you can create an unlimited number of these inner-class Comparators, so you can sort a list of objects in different ways. In the example above, you could have another Comparator to sort by the year a movie was made, for example.

    My question is, why bother to learn both ways to sort in Java, when the two-argument version of Collections.sort() does everything the first one-argument version does, but with the added benefit of being able to sort the list's elements based on several different criteria? It would be one less thing to have to keep in your mind while coding. You'd have one basic mechanism of sorting lists in Java to know.
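
    To keep the two approaches side by side, here is a compact, self-contained illustration (the Movie fields are assumptions chosen to match the question's example):

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.Comparator;
        import java.util.List;

        class Movie implements Comparable<Movie> {
            final String title;
            final int year;

            Movie(String title, int year) { this.title = title; this.year = year; }

            // Natural ordering, used by the one-argument Collections.sort(): by title.
            public int compareTo(Movie other) { return this.title.compareTo(other.title); }

            public String toString() { return title + " (" + year + ")"; }
        }

        public class SortDemo {
            public static void main(String[] args) {
                List<Movie> myMovieList = new ArrayList<Movie>();
                myMovieList.add(new Movie("Solaris", 1972));
                myMovieList.add(new Movie("Alien", 1979));

                // 1) Comparable: sorts by the natural ordering (title).
                Collections.sort(myMovieList);

                // 2) Comparator: an alternative ordering, here by year.
                Comparator<Movie> byYear = new Comparator<Movie>() {
                    public int compare(Movie a, Movie b) { return a.year - b.year; }
                };
                Collections.sort(myMovieList, byYear);

                System.out.println(myMovieList);
            }
        }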

    Read the article

  • Automated Oracle Schema Migration Tool

    - by Dave Jarvis
    What are some tools (commercial or OSS) that provide a GUI-based mechanism for creating schema upgrade scripts? To be clear, here are the tool responsibilities:

        - Obtain connection to recent schema version (called "source").
        - Obtain connection to previous schema version (called "target").
        - Compare all schema objects between source and target.
        - Create a script to make the target schema equivalent to the source schema ("upgrade script").
        - Create a rollback script to revert the source schema, used if the upgrade script fails (at any point).
        - Create individual files for schema objects.

    The software must:

        - Use ALTER TABLE instead of DROP and CREATE for renamed columns.
        - Work with Oracle 10g or greater.
        - Create scripts that can be batch executed (via command-line).
        - Trivial installation process.
        - (Bonus) Create scripts that can be executed with SQL*Plus.

    Here are some examples (from StackOverflow, ServerFault, and Google searches):

        - Change Manager
        - Oracle SQL Developer

    Software that does not meet the criteria, or cannot be evaluated, includes:

        - TOAD PL/SQL Developer - Invalid SQL*Plus statements. Does not produce ALTER statements.
        - SQL Fairy - No installer. Complex installation process. Poorly documented.
        - DBDiff - Crippled data set evaluation, poor customer support.
        - OrbitDB - Crippled data set evaluation.
        - SchemaCrawler - No easily identifiable download version for Oracle databases.
        - SQL Compare - SQL Server, not Oracle.
        - LiquiBase - Requires changing the development process. No installer. Manually edit config files. Does not recognize its own baseUrl parameter.

    The only acceptable crippling of the evaluation version is by time. Crippling by restricting the number of tables and views hides possible bugs that are only visible in the software during the attempt to migrate hundreds of tables and views.

    Read the article

  • Lucene (.NET) Document structure and performance suggestions.

    - by Josh Handel
    Hello, I am indexing about 100M documents that consist of a few string identifiers and a hundred or so numeric terms. I won't be doing range queries, so I haven't dug too deep into NumericField, but I'm not thinking it's the right choice here.

    My problem is that the query performance degrades quickly when I start adding OR criteria to my query. All my queries are on specific numeric terms. So a document looks like StringField:[someString] and N DataField:[someNumber]. I then query it with something like:

        DataField:((+1 +(2 3)) (+75 +(3 5 52)) (+99 +88 +(102 155 199)))

    Currently these queries take about 7 to 16 seconds to run on my laptop. I would like to make sure that's really the best they can do. I am open to suggestions on field structure and query structure :-).

    Thanks, Josh

    PS: I have already read over all the other Lucene performance discussions on here, and on the Lucene wiki, and at Lucid Imagination... I'm a bit further down the rabbit hole than that...

    Read the article

  • Windows Workflow Foundation: Recommendations how to design architecture

    - by Petr Felzmann
    We are running several of the same ASP.NET applications (one per customer) based on our custom framework (libraries). Each application uses its own database (Initial Catalog in the terms of the connection string). Now we would like to add workflow capability (of course 4.0 ;) to the applications. The particular workflows will be the same for all the applications; only some initial settings of each workflow can vary, e.g. in one application the e-mail will be sent to user X, but in another application to user Y.

    I have several general questions about how to design the architecture:

    (1) Can the workflow database be shared by all the applications?

    (2) Where should the workflow engine be hosted - inside our custom Windows NT service or inside IIS? What are the criteria for choosing the right host?

    (3) How should the workflow engine communicate with the applications? Should the application call some WCF endpoint API configured in the workflow host, or vice versa - should each application provide a WCF endpoint API that the workflow engine will call? How then will the workflow engine identify applications? Both cases probably require some application identifier as a parameter in API calls?

    (4) We would also like to store some information to the application databases based on the workflow states. Is it possible?

    Thanks for suggestions!

    Read the article

  • Date Filtered Collections without Functions

    - by madcapnmckay
    Hi, I have an entity similar to the below:

        public class Entity
        {
            public List<DateItem> PastDates { get; set; }
            public List<DateItem> FutureDates { get; set; }
        }

        public class DateItem
        {
            public DateTime Date { get; set; }

            /*
             * Other Properties
             */
        }

    Where PastDates and FutureDates are both mapped to the same type/table. I have been trying to find a way to have the Past and Future properties mapped automagically by NHibernate. The closest I came was a where clause on the mapping as follows:

        HasMany(x => x.PastDates)
            .AsBag()
            .Cascade.AllDeleteOrphan()
            .KeyColumnNames.Add("EventId")
            .Where("Date < currentdate()")
            .Inverse();

    Where currentdate is a UDF. I do not want to have these database-specific functions if I can avoid it, mostly because I can't then test my DAL with SQLite, as it doesn't support functions or stored procedures. At the moment I am building the past and future collections using Criteria and adding to my DTO manually. Anyone know how this could be achieved without using any UDFs?

    Many thanks,

    Read the article

  • How do I get 3 lines of text from a paragraph in C#

    - by Keltex
    I'm trying to create a "snippet" from a paragraph. I have a long paragraph of text with a word highlighted in the middle. I want to get the line containing the word, the line before that line, and the line after that line. I have the following pieces of information:

        - The text (in a string)
        - The lines are delimited by a NEWLINE character \n
        - I have the index into the string of the text I want to highlight

    A couple of other criteria:

        - If my word falls on the first line of the paragraph, it should show the first 3 lines
        - If my word falls on the last line of the paragraph, it should show the last 3 lines
        - It should show the entire paragraph in the degenerate cases (the paragraph only has 1 or 2 lines)

    Here's an example:

        This is the 1st line of CAT text in the paragraph
        This is the 2nd line of BIRD text in the paragraph
        This is the 3rd line of MOUSE text in the paragraph
        This is the 4th line of DOG text in the paragraph
        This is the 5th line of RABBIT text in the paragraph

    For example, if my index points to BIRD, it should show lines 1, 2, & 3 as one complete string like this:

        This is the 1st line of CAT text in the paragraph
        This is the 2nd line of BIRD text in the paragraph
        This is the 3rd line of MOUSE text in the paragraph

    If my index points to DOG, it should show lines 3, 4, & 5 as one complete string like this:

        This is the 3rd line of MOUSE text in the paragraph
        This is the 4th line of DOG text in the paragraph
        This is the 5th line of RABBIT text in the paragraph

    etc. Anybody want to help tackle this?
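
    A rough sketch of one way to do it (a hedged illustration, not a polished implementation): split on '\n', work out which line the character index falls on, then clamp a three-line window to the start or end of the paragraph.

        using System;

        static class Snippet
        {
            public static string GetSnippet(string text, int index)
            {
                string[] lines = text.Split('\n');
                if (lines.Length <= 3)
                    return text;                      // degenerate cases: return the whole paragraph

                // Find which line the character index falls on.
                int lineIndex = 0, consumed = 0;
                for (int i = 0; i < lines.Length; i++)
                {
                    consumed += lines[i].Length + 1;  // +1 for the '\n' removed by Split
                    if (index < consumed) { lineIndex = i; break; }
                }

                // Centre a 3-line window on that line, clamped to the first/last line.
                int start = Math.Min(Math.Max(lineIndex - 1, 0), lines.Length - 3);
                return string.Join("\n", lines, start, 3);
            }

            static void Main()
            {
                string paragraph =
                    "This is the 1st line of CAT text in the paragraph\n" +
                    "This is the 2nd line of BIRD text in the paragraph\n" +
                    "This is the 3rd line of MOUSE text in the paragraph\n" +
                    "This is the 4th line of DOG text in the paragraph\n" +
                    "This is the 5th line of RABBIT text in the paragraph";
                Console.WriteLine(GetSnippet(paragraph, paragraph.IndexOf("DOG"))); // lines 3, 4 and 5
            }
        }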

    Read the article

  • Best strategy for synching data in iPhone app

    - by iamj4de
    I am working on a regular iPhone app which pulls data from a server (XML, JSON, etc...), and I'm wondering what is the best way to implement synching data. Criteria are speed (less network data exchange), robustness (data recovery in case an update fails), offline access and flexibility (adaptable when the structure of the database changes slightly, like a new column). I know it varies from app to app, but can you guys share some of your strategy/experience?

    For me, I'm thinking of something like this:

        1) Store Last Modified Date on the iPhone.
        2) Upon launching, send a message like getNewData.php?lastModifiedDate=...
        3) Server will process and send back only data modified since last time.
        4) This data is formatted as so:

           <+><data id="..."></data></+>                                // add this to SQLite/CoreData
           <-><data id="..."></data></->                                // remove this
           <%><data id="..."><attribute>newValue</attribute></data></%> // new modified value

        5) Once everything is downloaded and updated, I will update the Last Modified Date field.

    I don't want to make <+>, <->, <%>... for each attribute as well, because it would be too complicated, so probably when I receive a <%> field, I would just remove the data with the specified id and then add it again (assuming id here is not some automatically auto-incremented field).

    The main problem with this strategy is: if the network goes down when I am updating something, the Last Modified Date is not yet updated, so next time I relaunch the app I will have to go through the same thing again. Not to mention potential inconsistent data. If I use a temporary table for the update and make the whole thing atomic, it would work, but then again, if the update is too long (lots of data changes), the user has to wait a long time until new data is available. Should I use a Last-Modified-Date for each of the data fields and update data gradually?

    Read the article

  • Which key value store is the most promising/stable?

    - by Mike Trpcic
    I'm looking to start using a key/value store for some side projects (mostly as a learning experience), but so many have popped up in the recent past that I've got no idea where to begin. Just listing from memory, I can think of:

        - CouchDB
        - MongoDB
        - Riak
        - Redis
        - Tokyo Cabinet
        - Berkeley DB
        - Cassandra
        - MemcacheDB

    And I'm sure that there are more out there that have slipped through my search efforts. With all the information out there, it's hard to find solid comparisons between all of the competitors. My criteria and questions are:

        - (Most Important) Which do you recommend, and why?
        - Which one is the fastest?
        - Which one is the most stable?
        - Which one is the easiest to set up and install?
        - Which ones have bindings for Python and/or Ruby?

    Edit: So far it looks like Redis is the best solution, but that's only because I've gotten one solid response (from ardsrk). I'm looking for more answers like his, because they point me in the direction of useful, quantitative information. Which key-value store do you use, and why?

    Edit 2: If anyone has experience with CouchDB, Riak, or MongoDB, I'd love to hear your experiences with them (and even more so if you can offer a comparative analysis of several of them).

    Read the article

  • How to set up single array or dictionary for use in multiple datasources?

    - by Roman
    I have multiple TableViewDatasources that need to display lists of objects from the same pool, depending on a certain property. E.g.:

        - object.flag1 is set - it will show up in TableView1
        - object.flag2 is set - it will show up in TableView2

    The obvious way would be to have separate arrays for each TableView, but the same object may appear in different arrays. Also, I need to update objects very often or access all objects through the same array. How do I set up a single dictionary or array to have all objects in one structure?

    To put it another way: when a table view or selection changes, the application needs to redraw the TableViews with the new data. The application has to access the pool of objects and search through them using an iterator, accessing each object and its properties. I think that this is an expensive operation and want to avoid it. Perhaps I could make the global pool of objects a dictionary and expose object properties as dictionary fields, so that instead of iterating the global pool of objects I could access the global pool in the manner of a database, by selecting objects whose fields match particular criteria.

    Does anyone know any example of doing that?

    Read the article
