Search Results

Search found 4815 results on 193 pages for 'parameterized queries'.

Page 47/193 | < Previous Page | 43 44 45 46 47 48 49 50 51 52 53 54  | Next Page >

  • Can't return a List from a Compiled Query.

    - by Andrew
    I was speeding up my app by using compiled queries for queries which were getting hit over and over. I tried to implement it like this:

        Function Select(ByVal fk_id As Integer) As List(Of SomeEntity)
            Using db As New DataContext()
                db.ObjectTrackingEnabled = False
                Return CompiledSelect(db, fk_id)
            End Using
        End Function

        Shared CompiledSelect As Func(Of DataContext, Integer, List(Of SomeEntity)) = _
            CompiledQuery.Compile(Function(db As DataContext, fk_id As Integer) _
                (From u In db.SomeEntities _
                 Where u.SomeLinkedEntity.ID = fk_id _
                 Select u).ToList())

    This did not work and I got this error message:

        Type    : System.ArgumentNullException, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089
        Message : Value cannot be null. Parameter name: value

    However, when I changed my compiled query to return IQueryable instead of List, like so:

        Function Select(ByVal fk_id As Integer) As List(Of SomeEntity)
            Using db As New DataContext()
                db.ObjectTrackingEnabled = False
                Return CompiledSelect(db, fk_id).ToList()
            End Using
        End Function

        Shared CompiledSelect As Func(Of DataContext, Integer, IQueryable(Of SomeEntity)) = _
            CompiledQuery.Compile(Function(db As DataContext, fk_id As Integer) _
                From u In db.SomeEntities _
                Where u.SomeLinkedEntity.ID = fk_id _
                Select u)

    it worked fine. Can anyone shed any light as to why this is? BTW, compiled queries rock! They sped up my app by a factor of 2.
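    For reference, a minimal C# rendering of the working pattern above (entity names are taken from the question; the class scaffolding and connection-string handling are assumed): the compiled lambda stays a pure, translatable query expression, and the materializing ToList() call happens on the returned IQueryable outside the compiled part.

        using System;
        using System.Collections.Generic;
        using System.Data.Linq;
        using System.Linq;

        static class SomeEntityQueries
        {
            // Compiled once per process; the lambda must remain something
            // LINQ to SQL can translate, so no ToList() inside it.
            private static readonly Func<DataContext, int, IQueryable<SomeEntity>> CompiledSelect =
                CompiledQuery.Compile((DataContext db, int fkId) =>
                    db.GetTable<SomeEntity>().Where(u => u.SomeLinkedEntity.ID == fkId));

            public static List<SomeEntity> Select(string connectionString, int fkId)
            {
                using (var db = new DataContext(connectionString))
                {
                    db.ObjectTrackingEnabled = false;
                    // Materialize here, outside the compiled expression.
                    return CompiledSelect(db, fkId).ToList();
                }
            }
        }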

    Read the article

  • transactions and delete using fluent nhibernate

    - by Will I Am
    I am starting to play with (Fluent) NHibernate and I am wondering if someone can help with the following. I'm sure it's a total noob question. I want to do:

        delete from TABX where name = 'abc'

    where table TABX is defined as:

        ID   int
        name varchar(32)
        ...

    I built the code based on internet samples:

        using (ITransaction transaction = session.BeginTransaction())
        {
            IQuery query = session.CreateQuery("FROM TABX WHERE name = :uid")
                .SetString("uid", "abc");
            session.Delete(query.List<Person>()[0]);
            transaction.Commit();
        }

    but alas, it's generating two queries (one select and one delete). I want to do this in a single statement, as in my original SQL. What is the correct way of doing this? Also, I noticed that in most samples on the internet, people tend to always wrap all queries in transactions. Why is that? If I'm only running a single statement, that seems like overkill. Do people tend to just mindlessly cut and paste, or is there a reason beyond that? For example, in my query above, if I do manage to get it from two queries down to one, I should be able to remove the begin/commit transaction, no? If it matters, I'm using PostgreSQL for experimenting.
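    A hedged sketch of one way to get there (the Person entity name comes from the question's own snippet; the property casing is assumed): HQL supports bulk DELETE through IQuery.ExecuteUpdate(), which NHibernate issues as a single DELETE statement with no select-then-delete round trip.

        using (ITransaction transaction = session.BeginTransaction())
        {
            // One statement; no entities are loaded into the session first.
            session.CreateQuery("delete from Person where Name = :uid")
                   .SetString("uid", "abc")
                   .ExecuteUpdate();
            transaction.Commit();
        }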

    Read the article

  • Lucene (.NET) Document structure and performance suggestions.

    - by Josh Handel
    Hello, I am indexing about 100M documents that consist of a few string identifiers and a hundred or so numeric terms. I won't be doing range queries, so I haven't dug too deep into NumericField, but I'm not thinking it's the right choice here. My problem is that the query performance degrades quickly when I start adding OR criteria to my query. All my queries are on specific numeric terms, so a document looks like StringField:[someString] and N DataField:[someNumber]. I then query it with something like:

        DataField:((+1 +(2 3)) (+75 +(3 5 52)) (+99 +88 +(102 155 199)))

    Currently these queries take about 7 to 16 seconds to run on my laptop. I would like to make sure that's really the best they can do. I am open to suggestions on field structure and query structure :-). Thanks, Josh. PS: I have already read over all the other Lucene performance discussions on here, on the Lucene wiki and at Lucid Imagination... I'm a bit further down the rabbit hole than that...

    Read the article

  • Setting Connection Parameters via ADO for MSSQL

    - by taspeotis
    Is it possible to set a connection parameter on a connection to SQL Server and have that variable persist throughout the life of the connection? The parameter must be usable by subsequent queries. We have some old Access reports that use a handful of VBScript functions in the SQL queries (let's call them GetStartDate and GetEndDate) that return global variables. Our application would set these before invoking the query, and then the queries can return information between the date ranges specified in our application. We are looking at changing to a ReportViewer control running in local mode, but I don't see any convenient way to use these custom functions in straight T-SQL. I have two concept solutions (not tested yet), but I would like to know if there is a better way. Below is some pseudo code.

    Set all variables before running Recordset.OpenForward:

        Connection->Execute("SET @GetStartDate = ...");
        Connection->Execute("SET @GetEndDate = ...");
        // Repeat for all parameters

    Will these variables persist to later calls of Recordset->OpenForward? Can anything reset the variables aside from another SET/SELECT @variable statement?

    Alternatively, create an ADOCommand "factory" that automatically adds parameters to each ADOCommand object I will use to execute SQL:

        // Command has previously been created
        ADOParameter *Parameter1 = Command->CreateParameter("GetStartDate");
        ADOParameter *Parameter2 = Command->CreateParameter("GetEndDate");
        // Set values and attach etc...

    What I would like to know is if there is something like:

        Connection->SetParameter("GetStartDate", "20090101");
        Connection->SetParameter("GetEndDate", "20100101");

    and have these persist for the lifetime of the connection, so the SQL can reference @GetStartDate to access them. This may be exactly solution #1, if the variables persist throughout the lifetime of the connection.
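    On the persistence question, one session-scoped mechanism SQL Server does offer is CONTEXT_INFO: a varbinary(128) slot tied to the connection's session, so a value written in one batch stays visible to every later batch on the same open connection. A hedged sketch in ADO.NET terms (the date packing and names are illustrative; classic ADO's Connection->Execute can send the same batch):

        using System;
        using System.Data.SqlClient;

        static class SessionParams
        {
            // Stores a start date in the session's CONTEXT_INFO slot; it stays
            // readable by all later batches on this open connection.
            public static void SetStartDate(SqlConnection connection, DateTime startDate)
            {
                const string sql =
                    "DECLARE @ctx varbinary(128); " +
                    "SET @ctx = CONVERT(varbinary(128), @startDate); " +
                    "SET CONTEXT_INFO @ctx;";
                using (var cmd = new SqlCommand(sql, connection))
                {
                    cmd.Parameters.AddWithValue("@startDate", startDate);
                    cmd.ExecuteNonQuery();
                }
            }
        }

        // Later queries on the same connection can read it back, e.g.:
        //   SELECT ... WHERE SomeDate >= CAST(SUBSTRING(CONTEXT_INFO(), 1, 8) AS datetime)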

    Read the article

  • How to query JDO persistent objects in unowned relationship model?

    - by Paul B
    Hello, I'm trying to migrate my app from PHP and an RDBMS (MySQL) to Google App Engine, and I'm having a hard time figuring out the data model and relationships in JDO. In my current app I use a lot of JOIN queries like:

        SELECT users.name, comments.comment
        FROM users, comments
        WHERE users.user_id = comments.user_id AND users.email = '[email protected]'

    As I understand it, JOIN queries are not supported in this way, so the only(?) way to store the data is using unowned relationships and "foreign" keys. There is documentation regarding that, but no useful examples. So far I have something like this:

        @PersistenceCapable
        public class Users {
            @PrimaryKey
            @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
            private Key key;

            @Persistent
            private String name;

            @Persistent
            private String email;

            @Persistent
            private Set<Key> commentKeys;

            // Accessors...
        }

        @PersistenceCapable
        public class Comments {
            @PrimaryKey
            @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
            private Key key;

            @Persistent
            private String comment;

            @Persistent
            private Date commentDate;

            @Persistent
            private Key userKey;

            // Accessors...
        }

    So, how do I get a list with the commenter's name, comment and date in one query? I see how I could probably get away with 3 queries, but that seems wrong and would create unnecessary overhead. Please help me out with some code examples. -- Paul.

    Read the article

  • MS Access 2003 - Is there a way to programmatically define the data for a chart?

    - by Justin
    So I have some VBA for taking charts built with the form's Chart Wizard and automatically inserting them into PowerPoint presentation slides. I use those chart-forms as subforms within larger forms that have parameters the user can select to determine what is on the chart. The idea is that the user can set the parameters, build the chart to his/her liking, and click a button to have it in a PPT slide with the company's background template, blah blah blah...

    So it works, though it is very bulky in terms of the number of objects I have to use to accomplish this. I use expressions such as Like forms!frmMain.Month & "*" to get the input values into the saved queries, which was fine when I first started, but it went over so well and they want so many options that it is driving up the number of saved queries/objects. I need several saved forms with charts because of the number of different chart types this needs to handle.

    SO FINALLY TO MY QUESTION: I would much rather do all this on the fly with some VBA. I know how to insert list boxes and text boxes on a form, and I know how to use SQL in VBA to get the values I want from tables/queries. I just don't know if there is some VBA I can use to set the data of the charts from a resulting recordset:

        Dim rs As DAO.Recordset
        Dim db As DAO.Database
        Dim sql As String

        sql = "SELECT TOP 5 Count(tblMain.TransactionID) AS Total, tblMain.Location " & _
              "FROM tblMain WHERE (((tblMain.Month) = """ & Me.txtMonth & """)) " & _
              "GROUP BY tblMain.Location " & _
              "ORDER BY Count(tblMain.TransactionID) DESC;"

        Set db = CurrentDb
        Set rs = db.OpenRecordset(sql)
        rs.MoveFirst

        ' ...some kind of cool code in here to make this recordset the data
        ' of the chart in frmChart ("Chart01")

    Thanks for your help. Apologies for the length of the explanation.

    Read the article

  • MySQL InnoDB performance optimization and indexing

    - by Davide C
    Hello everybody, I have 2 databases and I need to link information between two big tables (more than 3M entries each, continuously growing). The 1st database has a table 'pages' that stores various information about web pages, including the URL of each one. The column 'URL' is a varchar(512) and has no index. The 2nd database has a table 'urlHops' defined as:

        CREATE TABLE urlHops (
            dest varchar(512) NOT NULL,
            src varchar(512) DEFAULT NULL,
            timestamp timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
            KEY dest_key (dest),
            KEY src_key (src)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1

    Now, I basically need to issue (efficiently) queries like this:

        select p.id, p.URL from db1.pages p, db2.urlHops u where u.src = p.URL and u.dest = ?

    At first, I thought of adding an index on pages(URL). But it's a very long column, and I already issue a lot of INSERTs and UPDATEs on the same table (way more than the number of SELECTs I would do using this index). Other possible solutions I thought of are:

    - adding a column to pages that stores the md5 hash of the URL, and indexing it; this way I could do queries using the md5 of the URL, with the advantage of an index on a smaller column.
    - adding another table that contains only page id and page URL, indexing both columns. But this is maybe a waste of space, with the only advantage being that it does not slow down the inserts and updates I execute on 'pages'.

    I don't want to slow down the inserts and updates, but at the same time I would like to be able to do the queries on the URL efficiently. Any advice? My primary concern is performance; if needed, wasting some disk space is not a problem. Thank you, regards, Davide
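    If it helps, a hedged C# sketch of the md5-column idea (Connector/NET; the url_md5/src_md5/dest_md5 column names are invented, and they would have to be kept in step by whatever code writes the URLs): hashing app-side keeps the index at a fixed 32 characters instead of a 512-character varchar.

        using System;
        using System.Security.Cryptography;
        using System.Text;
        using MySql.Data.MySqlClient;

        static class UrlLookup
        {
            // Hex MD5 of a URL, matching the output of MySQL's MD5() function.
            static string Md5Hex(string url)
            {
                using (var md5 = MD5.Create())
                {
                    byte[] hash = md5.ComputeHash(Encoding.UTF8.GetBytes(url));
                    return BitConverter.ToString(hash).Replace("-", "").ToLowerInvariant();
                }
            }

            // The join runs over the short indexed hash columns, not the raw URLs.
            const string Sql = @"
                SELECT p.id, p.URL
                FROM db1.pages p
                JOIN db2.urlHops u ON u.src_md5 = p.url_md5
                WHERE u.dest_md5 = @destMd5";

            public static void FindSources(MySqlConnection conn, string destUrl)
            {
                using (var cmd = new MySqlCommand(Sql, conn))
                {
                    cmd.Parameters.AddWithValue("@destMd5", Md5Hex(destUrl));
                    using (var reader = cmd.ExecuteReader())
                        while (reader.Read())
                            Console.WriteLine("{0}: {1}", reader["id"], reader["URL"]);
                }
            }
        }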

    Read the article

  • ways to avoid global temp tables in oracle

    - by Omnipresent
    We just converted our SQL Server stored procedures to Oracle procedures. The SQL Server SPs were highly dependent on session tables (INSERT INTO #table1...); these got converted to global temporary tables in Oracle. We ended up with around 500 GTTs for our 400 SPs. Now we are finding out that working with GTTs in Oracle is considered a last resort because of performance and other issues. What other alternatives are there? Collections? Cursors? Our typical use of GTTs is like so:

    Insert into the GTT:

        INSERT INTO some_gtt_1 (column_a, column_b, column_c)
        (SELECT someA, someB, someC
           FROM TABLE_A
          WHERE condition_1 = 'YN756'
            AND type_cd = 'P'
            AND TO_NUMBER(TO_CHAR(m_date, 'MM')) = '12'
            AND (lname LIKE (v_LnameUpper || '%') OR lname LIKE (v_searchLnameLower || '%'))
            AND (e_flag = 'Y' OR it_flag = 'Y' OR fit_flag = 'Y'));

    Update the GTT:

        UPDATE some_gtt_1 a
           SET column_a = (SELECT b.data_a
                             FROM some_table_b b
                            WHERE a.column_b = b.data_b AND a.column_c = 'C')
         WHERE column_a IS NULL OR column_a = ' ';

    and later on we get the data out of the GTT. These are just sample queries; in actuality the queries are really complex, with lots of joins and subqueries. I have a three-part question:

    1. Can someone show how to transform the above sample queries to collections and/or cursors?
    2. Since with GTTs you can work natively with SQL... why move away from them? Are they really that bad?
    3. What should the guidelines be on when to use and when to avoid GTTs?

    Read the article

  • Modeling complex hierarchies

    - by jdn
    To gain some experience, I am trying to make an expert system that can answer queries about the animal kingdom. However, I have run into a problem modeling the domain. I originally considered the animal kingdom hierarchy to be drawn like:

        - animal
            - bird
                - carnivore
                    - hawk
                - herbivore
                    - bluejay
            - mammals
                - carnivores
                - herbivores

    This, I figured, would allow me to make queries easily like "give me all birds", but would be much more expensive for "give me all carnivores", so I rewrote the hierarchy to look like:

        - animal
            - carnivore
                - birds
                    - hawk
                - mammals
                    - xyz
            - herbivores
                - birds
                    - bluejay
                - mammals

    But now it will be much slower to query "give me all birds." This is of course a simple example, but it got me thinking that I don't really know how to model complex relationships that are not so strictly hierarchical in nature, in the context of writing an expert system to answer queries as stated above. A directed, cyclic graph seems like it could mathematically solve the problem, but storing this in a relational database and maintaining it (updates) would seem like a nightmare to me. I would like to know how people typically model such things. Explanations or pointers to resources to read further would be acceptable and appreciated.

    Read the article

  • Using JSF, PrimeFaces and JPA: Create Basic WebApp without using Generated CRUD Classes, Forms, etc

    - by user2774489
    I am trying to build a basic CRUD application with NetBeans 7.4, JSF, PrimeFaces and JPA using MySQL. I have successfully done this by using the NetBeans wizards. Now I want to do it from scratch, with no wizards. There seems to be a lack of support for the combination of JSF, PrimeFaces and JPA. When I say "lack", I mean a full example (I might be asking too much) that doesn't use the auto-generated CRUD templates/classes AND shows actual queries coded and passed to the datatables (PrimeFaces). YouTube is full of non-English examples using Hibernate (not JPA) and other examples that show flashy GUIs with no code. So far I understand you need an @Entity class (which provides the physical build of the tables), a controller (serializable) and the .xhtml web page to show the datatable... what else? Also, I'm not seeing any posts or examples of how queries are used with JPA/JSF and how they are tied together (in one place). I need to connect the dots here so that I can leverage JSF/JPA to create simple queries to populate my PrimeFaces DataTables. I've read the blogs and I've googled the intranets until I'm blue in the face. Sending me a list of URLs to read to learn about each product is something I've already done. I get what they do independently, but am looking for the "how do they all connect" answer, with maybe some basic code examples!! :)

    Read the article

  • How to use a getter with a nullable?

    - by Desmond Lost
    I am reading a bunch of queries from a database. I had an issue with the queries not closing, so I added a CommandTimeout. Now the individual queries read from the config file each time they are run. How would I make the code cache the int from the config file only once, using a static nullable and a getter? I was thinking of doing something along the lines of:

        static int? timeout;
        get
        {
            if (!timeout.HasValue)
                ... (I don't know how to complete the rest)
        }

    My actual code:

        private object ExecuteQuery(string dbConnStr, bool fixIt)
        {
            object result = false;
            using (SqlConnection connection = new SqlConnection(dbConnStr))
            {
                connection.Open();
                using (SqlCommand sqlCmd = new SqlCommand())
                {
                    AddSQLParms(sqlCmd);
                    sqlCmd.CommandTimeout = 30;
                    sqlCmd.CommandText = _cmdText;
                    sqlCmd.Connection = connection;
                    sqlCmd.CommandType = System.Data.CommandType.Text;
                    sqlCmd.ExecuteNonQuery();
                }
                connection.Close();
            }
            return result;
        }
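    A minimal sketch of the caching getter (the config key name is assumed; it would sit in the same class as ExecuteQuery): the nullable backing field doubles as the "not read yet" flag, so the config file is touched at most once per process.

        using System.Configuration;

        private static int? _commandTimeout;

        private static int CommandTimeout
        {
            get
            {
                if (!_commandTimeout.HasValue)
                {
                    int seconds;
                    // Read once; fall back to 30 if the setting is missing or malformed.
                    _commandTimeout = int.TryParse(
                        ConfigurationManager.AppSettings["CommandTimeoutSeconds"],
                        out seconds) ? seconds : 30;
                }
                return _commandTimeout.Value;
            }
        }

        // ExecuteQuery then uses the cached value:
        //     sqlCmd.CommandTimeout = CommandTimeout;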

    Read the article

  • Can I clone an IQueryable in linq? For UNION purposes?

    - by user169867
    I have a table of WorkOrders. The table has a PrimaryWorker and a PrimaryPay field. It also has a SecondaryWorker and a SecondaryPay field (which can be null). I wish to run 2 very similar queries and union them, so that the result has a Worker field and a Pay field. So if a single WorkOrder record had both the PrimaryWorker and SecondaryWorker fields populated, I would get 2 records back. The "where clause" part of these 2 queries is very similar and long to construct. Here's a dummy example:

        var q = ctx.WorkOrder.Where(w => w.WorkDate >= StartDt && w.WorkDate <= EndDt);

        if (showApprovedOnly)
        {
            q = q.Where(w => w.IsApproved);
        }
        // ...more filters applied

    Now I also have a search flag called "hideZeroPay". If that's true, I don't want to include the record if the worker was paid $0. But obviously in one query I need to compare the PrimaryPay field and in the other I need to compare the SecondaryPay field. So I'm wondering how to do this. Can I clone my base query "q" and make a primary and a secondary worker query out of it, and then union those 2 queries together? I'd greatly appreciate an example of how to correctly handle this. Thanks very much for any help.
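    A hedged sketch of that approach (field names from the question; the pay columns are assumed to be decimal): IQueryable is immutable, so every Where/Select returns a new query and the filtered base can be reused freely - no explicit clone is needed. Projecting both branches to the same shape lets Union combine them.

        var baseQuery = ctx.WorkOrder.Where(w => w.WorkDate >= StartDt && w.WorkDate <= EndDt);
        if (showApprovedOnly)
        {
            baseQuery = baseQuery.Where(w => w.IsApproved);
        }
        // ...more shared filters applied to baseQuery

        // Two branches off the same base; the cast keeps the anonymous types
        // identical even though SecondaryPay is nullable.
        var primary = baseQuery
            .Select(w => new { Worker = w.PrimaryWorker, Pay = (decimal?)w.PrimaryPay });
        var secondary = baseQuery
            .Where(w => w.SecondaryWorker != null)
            .Select(w => new { Worker = w.SecondaryWorker, Pay = w.SecondaryPay });

        if (hideZeroPay)
        {
            primary = primary.Where(x => x.Pay > 0);
            secondary = secondary.Where(x => x.Pay > 0);
        }

        var workers = primary.Union(secondary); // Union de-duplicates; Concat would not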

    Read the article

  • Good strategy for copying a "sliding window" of data from a table?

    - by chiborg
    I have a MySQL table from a third-party application that has millions of rows and only one index - the timestamp of each entry. Now I want to do some heavy self-joins and queries on the data using fields other than the timestamp. Doing the query on the original table would bring the database to a crawl, and adding indexes to the table is not an option. Additionally, I only need entries that are newer than one week. My current strategy for doing the queries efficiently is to use a separate table (aux_table) that has the necessary indexes. My questions are: Is there another way to do the queries? And if not, how do I update the data in the indexed table efficiently? So far I have found two approaches for updating aux_table:

    1. Truncate aux_table and insert the desired data from the original table. Not very efficient, because all the indexes must be re-created.
    2. Check for the biggest timestamp in aux_table and insert all entries with a greater or equal timestamp from the original table. Occasionally drop older entries. Copying only entries with a strictly greater timestamp leads to dropped entries (because of entries with the same timestamp that were inserted into the original table after the last update).
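    A hedged C# sketch of approach 2 (Connector/NET; table and column names are assumed): re-copying from the high-water mark inclusive and letting INSERT IGNORE drop the rows that are already present covers the same-timestamp stragglers. Note this assumes aux_table carries a unique key over the copied columns so the duplicates can be detected.

        using System;
        using MySql.Data.MySqlClient;

        static void RefreshAuxTable(MySqlConnection conn)
        {
            // 1. High-water mark: the newest timestamp already copied.
            object mark;
            using (var cmd = new MySqlCommand(
                "SELECT COALESCE(MAX(ts), '1970-01-01') FROM aux_table", conn))
            {
                mark = cmd.ExecuteScalar();
            }

            // 2. Re-copy from the mark *inclusive*; INSERT IGNORE skips rows
            //    that already made it into aux_table on the previous run.
            using (var cmd = new MySqlCommand(
                "INSERT IGNORE INTO aux_table " +
                "SELECT * FROM original_table WHERE ts >= @mark", conn))
            {
                cmd.Parameters.AddWithValue("@mark", mark);
                cmd.ExecuteNonQuery();
            }

            // 3. Keep the sliding window at one week.
            using (var cmd = new MySqlCommand(
                "DELETE FROM aux_table WHERE ts < NOW() - INTERVAL 7 DAY", conn))
            {
                cmd.ExecuteNonQuery();
            }
        }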

    Read the article

  • looking to streamline my RSS feed mashup

    - by Mark Cejas
    Hello crafty developers, I have aggregated RSS feeds from various sources with RSSOwl, fetching directly from the Social Mention API. The RSS feeds are categorized into the following major categories: blogs, news, Twitter, Q&A and social networking sites. Each major category is nested with a common group of RSS feeds that represent a particular client/brand ontology. Merging these feeds into the RSSOwl reader application allows me to conduct and save refined search queries (from the aggregated data) into a single file that I can then tag and further segment for analysis. This scheme is used for my own research needs and has helped me considerably. However, I find this RSS mashup scheme kinda clumsy; it requires quite a bit of time to initially organize all of the feeds, and I would like to be able to do further natural language processing on the data, as well as eventually rank the collected list of URLs into some order of media prominence. Right now I don't want to pay the ridiculous Radian6 web analytics fees, when my intuition is telling me that with a bit of 'elbow grease' I can maybe leverage some available resources online to develop a functional small-scale web mining application and get some good intelligence from it. I am now starting to learn a little about computer science - my background is in physical science/statistics - so is my thinking on the right track? So, I guess I am imagining an application that allows me to query in a refined manner. A manner that allows me to search for keyword combinations, applying AND/OR operators, selectively focus my queries on particular sources - like a collection of blogs, or Twitter, or social networking communities - and then save the results of my queries into a structured format that can be manipulated and explored. Am I dreaming? I just had to get all of this out. Any bit of advice and insight would be hugely appreciated. My best, Mark

    Read the article

  • Creating an appropriate index for a frequently used query in SQL Server

    - by Slauma
    In my application I have two queries which will be used quite frequently. The WHERE clauses of these queries are the following:

        WHERE FieldA = @P1 AND (FieldB = @P2 OR FieldC = @P2)

    and

        WHERE FieldA = @P1 AND FieldB = @P2

    P1 and P2 are parameters entered in the UI or coming from external datasources.

    - FieldA is an int and highly non-unique, meaning: only two, three, four different values in a table with, say, 20000 rows
    - FieldB is a varchar(20) and is "almost" unique; there will be only very few rows where FieldB has the same value
    - FieldC is a varchar(15) and also highly distinct, but not as much as FieldB
    - FieldA and FieldB together are unique (but do not form my primary key, which is a simple auto-incrementing identity column with a clustered index)

    I'm wondering now what's the best way to define an index to speed up specifically these two queries. Shall I define one index with...

        FieldB (or better FieldC here?)
        FieldC (or better FieldB here?)
        FieldA

    ...or better two indices:

        FieldB
        FieldA

    and

        FieldC
        FieldA

    Or are there even other and better options? What's the best way and why? Thank you for suggestions in advance!
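    For concreteness, the two-index option spelled out as DDL, executed from C# for consistency with the rest of this page (a hedged sketch - the table and index names are made up; the statements themselves are plain T-SQL and can equally be run in Management Studio). Leading each index with the near-unique column lets a seek narrow to a handful of rows before FieldA is even considered, and FieldA rides along so both predicates are covered by the index keys.

        using System.Data.SqlClient;

        static void CreateIndexes(SqlConnection conn)
        {
            const string ddl =
                "CREATE NONCLUSTERED INDEX IX_MyTable_FieldB_FieldA ON dbo.MyTable (FieldB, FieldA); " +
                "CREATE NONCLUSTERED INDEX IX_MyTable_FieldC_FieldA ON dbo.MyTable (FieldC, FieldA);";
            using (var cmd = new SqlCommand(ddl, conn))
            {
                cmd.ExecuteNonQuery();
            }
        }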

    Read the article

  • Django: Determining if a user has voted or not

    - by TheLizardKing
    I have a long list of links that I spit out using the below code - total votes, submitted by, the usual stuff - but I am not 100% sure how to determine if the currently logged in user has voted on a link or not. I know how to do this from within my view, but do I need to alter my view code below, or can I make use of the way templates work to determine it? I have read http://stackoverflow.com/questions/1528583/django-vote-up-down-method but I don't quite understand what's going on (and don't need any of the javascriptery).

    Models (snippet):

        class Link(models.Model):
            category = models.ForeignKey(Category, blank=False, default=1)
            user = models.ForeignKey(User)
            created = models.DateTimeField(auto_now_add=True)
            modified = models.DateTimeField(auto_now=True)
            url = models.URLField(max_length=1024, unique=True, verify_exists=True)
            name = models.CharField(max_length=512)

            def __unicode__(self):
                return u'%s (%s)' % (self.name, self.url)

        class Vote(models.Model):
            link = models.ForeignKey(Link)
            user = models.ForeignKey(User)
            created = models.DateTimeField(auto_now_add=True)

            def __unicode__(self):
                return u'%s vote for %s' % (self.user, self.link)

    Views (snippet):

        def hot(request):
            links = Link.objects.select_related().annotate(votes=Count('vote')).order_by('-created')
            for link in links:
                delta_in_hours = (int(datetime.now().strftime("%s")) - int(link.created.strftime("%s"))) / 3600
                link.popularity = ((link.votes - 1) / (delta_in_hours + 2)**1.5)
                if request.user.is_authenticated():
                    try:
                        link.voted = Vote.objects.get(link=link, user=request.user)
                    except Vote.DoesNotExist:
                        link.voted = None
            links = sorted(links, key=lambda x: x.popularity, reverse=True)
            links = paginate(request, links, 15)
            return direct_to_template(
                request,
                template = 'links/link_list.html',
                extra_context = {
                    'links': links,
                })

    The above view actually accomplishes what I need, but in what I believe to be a horribly inefficient way. This causes the dreaded n+1 queries; as it stands, that's 33 queries for a page containing just 29 links, while originally I got away with just 4 queries. I would really prefer to do this using Django's ORM, or at least .extra(). Any advice?

    Read the article

  • Why aren't my MySQL GROUP BY hours vs. half-hours queries displaying the same data?

    - by stogdilla
    I need to be able to display data that I have in 15-minute increments in different display types. I have two queries that are giving me trouble. One shows data by half hour, the other shows data by hour. The only issue is that the data totals change between queries. It's not counting the data that happens between the time frames, only AT the time frames. Ex: There are 5 things that happen at 7:15am, 2 that happen at 7:30am and 4 that happen at 7:00am. The 15-minute view displays all of the data. The half-hour view displays the data from 7:00am and from 7:30am but ignores the 7:15am. The hour display only shows the 7:00am data. Here are my queries:

        $query="SELECT * FROM data WHERE startDate='$startDate' and queue='$queue'
                GROUP BY HOUR(start), floor(minute(start)/30)";

    and

        $query="SELECT * FROM data WHERE startDate='$startDate' and queue='$queue'
                GROUP BY HOUR(start)";

    How can I pull out the data in groups like I have, but get all the data included? Is the issue the way the data is stored in the MySQL table? Currently I have a column with dates (2010-03-29) and a column with times (00:00). Do I need to convert these into something else?
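    One likely culprit worth illustrating (a hedged sketch - shown in C# with Connector/NET and a parameterized query, though the SQL is the interesting part): GROUP BY on its own keeps a single arbitrary row per group, so the 7:15 rows silently vanish; aggregate functions such as COUNT are what fold them into the 7:00-7:30 bucket. Bucketing by seconds also gives one query for every view width.

        using System;
        using MySql.Data.MySqlClient;

        static void PrintBuckets(MySqlConnection conn, DateTime startDate, string queue, int bucketSeconds)
        {
            // bucketSeconds: 900 = 15-minute view, 1800 = half-hour, 3600 = hour.
            const string sql = @"
                SELECT SEC_TO_TIME(FLOOR(TIME_TO_SEC(start) / @bucket) * @bucket) AS slot,
                       COUNT(*) AS total
                FROM data
                WHERE startDate = @startDate AND queue = @queue
                GROUP BY slot
                ORDER BY slot";
            using (var cmd = new MySqlCommand(sql, conn))
            {
                cmd.Parameters.AddWithValue("@bucket", bucketSeconds);
                cmd.Parameters.AddWithValue("@startDate", startDate);
                cmd.Parameters.AddWithValue("@queue", queue);
                using (var reader = cmd.ExecuteReader())
                    while (reader.Read())
                        Console.WriteLine("{0}: {1}", reader["slot"], reader["total"]);
            }
        }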

    Read the article

  • How to use named_scope to incrementally construct a complex query?

    - by wbharding
    I want to use Rails named_scope to incrementally build complex queries that are returned from external methods. I believe I can get the behavior I want with anonymous scopes, like so:

        def filter_turkeys
          Farm.scoped(:conditions => {:animal => 'turkey'})
        end

        def filter_tasty
          Farm.scoped(:conditions => {:tasty => true})
        end

        turkeys = filter_turkeys
        is_tasty = filter_tasty
        tasty_turkeys = filter_turkeys.filter_tasty

    But say that I already have a named_scope that does what these anonymous scopes do - can I just use that, rather than having to declare anonymous scopes? Obviously one solution would be to pass my growing query to each of the filter methods, a la:

        def filter_turkey(existing_query)
          existing_query.turkey # turkey is a named_scope that filters for turkey
        end

    But that feels like an un-DRY way to solve the problem. Oh, and if you're wondering why I would want to return named_scope pieces from methods rather than just build all the named scopes I need and concatenate them together: my queries are too complex to be nicely handled by named_scopes themselves, and the queries get used in multiple places, so I don't want to manually build the same big hairy concatenation multiple times.

    Read the article

  • MySQL Partitioning performance

    - by Imran Pathan
    We measured performance on key-partitioned tables and normal tables separately, but couldn't find any performance improvement with partitioning. Queries are pruned. Using MySQL 5.1.47 on RHEL 4. Table details:

    - UserUsage - has entries for user mobile number and data usage for each date; mobile number and date form the primary key.
    - UserProfile - built by querying the previous table; stores a summary per mobile number, with mobile number as the primary key.

        CREATE TABLE `UserUsage` (
          `Msisdn` decimal(20,0) NOT NULL,
          `Date` date NOT NULL,
          ...
          PRIMARY KEY USING BTREE (`Msisdn`,`Date`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1
        PARTITION BY KEY(Msisdn) PARTITIONS 50;

        CREATE TABLE `UserProfile` (
          `Msisdn` decimal(20,0) NOT NULL,
          ...
          PRIMARY KEY (`Msisdn`)
        ) ENGINE=InnoDB DEFAULT CHARSET=latin1
        PARTITION BY KEY(Msisdn) PARTITIONS 50;

    The second table is updated by a Perl program that selects from the first table, ordered by date:

        select * from UserUsage where Msisdn=number order by Date desc limit 7
        [Process data in perl]
        update UserProfile values(....) where Msisdn=number

    EXPLAIN PARTITIONS for the select shows rows being scanned in a particular partition only. Is something wrong with the partition design or the queries, given that partitioning is taking almost the same or more time compared to normal tables?

    Read the article

  • SQL Databases and table design/organization

    - by John McMullen
    (NOOB disclaimer) I'm working on a system (a type of map) that is accessed mostly via 3 fields: ID (auto-incremented), X coordinate, and Y coordinate. As it is right now, I have all the data on the map stored in 1 table. Whenever the map display is loaded, it simply queries the database for contents at x and y, and the DB returns the data (other fields in the same entry). If an item on the map is doing something, it has a flag saying it's doing something, and then has an ID of the action in another table holding that type of 'actions'. Essentially, all map data is stored in 1 table, and all actions of a certain type are stored in their own table. I'm a noob, and I'm wondering what the most effective/efficient structure is for such a design (a map that has items, and each item has stats/actions). I'm using PHP at the moment, with standard SQL queries to get my data. Should I split up the tables so that there are only x number of entries in a table (coordinate range limits)? Or should it just keep growing and growing? There are a lot of queries to the table... so I'm just trying to see what is best :/

    Read the article

  • Are there good reasons not to use an ORM?

    - by hangy
    During my apprenticeship, I used NHibernate for some smaller projects which I mostly coded and designed on my own. Now, before starting a bigger project, the discussion arose of how to design data access and whether or not to use an ORM layer. As I am still in my apprenticeship and still consider myself a beginner in enterprise programming, I did not really try to push my opinion, which is that using an object-relational mapper to the database can ease development quite a lot. The other coders on the development team are much more experienced than me, so I think I will just do what they say. :-)

    However, I do not completely understand two of the main reasons for not using NHibernate or a similar project:

    1. One can just build one's own data access objects with SQL queries and copy those queries out of Microsoft SQL Server Management Studio.
    2. Debugging an ORM can be hard.

    So, of course I could just build my data access layer with a lot of SELECTs etc., but here I miss the advantage of automatic joins, lazy-loading proxy classes and a lower maintenance effort if a table gets a new column or a column gets renamed. (Updating numerous SELECT, INSERT and UPDATE queries vs. updating the mapping config and possibly refactoring the business classes and DTOs.)

    Also, using NHibernate you can run into unforeseen problems if you do not know the framework very well. That could be, for example, trusting the Table.hbm.xml where you set a string's length to be automatically validated. However, I can also imagine similar bugs in a "simple" SqlConnection-query-based data access layer.

    Finally, are those arguments mentioned above really a good reason not to utilise an ORM for a non-trivial database-based enterprise application? Are there other arguments they/I might have missed?

    (I should probably add that I think this is like the first "big" .NET/C# based application which will require teamwork. Good practices, which are seen as pretty normal on Stack Overflow, such as unit testing or continuous integration, are non-existent here up to now.)

    Read the article

  • How to use Enquire.js?

    - by big_smile
    Enquire.js is a JavaScript library that re-creates CSS media queries in JavaScript. This means you can wrap your JavaScript in media queries (just like you would wrap CSS in media queries). I'm not quite sure how to use it. This tutorial says this:

        enquire.register("max-width: 960px", function() {
            // put your code here
        });

    However, when I follow that, my code stops working. Here is an example of some jQuery tabs without Enquire.js. It works fine. Here are the same tabs, but with Enquire.js added. Now they stop working. I've experimented with different variations of the code, but none of them work. What am I doing wrong? I think you might have to place the jQuery tab code in a separate file and then link to that file from within Enquire.js, but I'm not sure how you would do that. (Although it would be handy to know, as I'd imagine it would let your scripts be more reusable.)

    PS. This is not a criticism of Enquire.js. I know that the problem lies squarely with my lack of proficiency in JavaScript! I did spend a couple of hours searching for a solution but couldn't find anything. Thanks for any help!

    Read the article

  • How Best to Replace PL/SQL with C#?

    - by Mike
    Hi, I write a lot of one-off Oracle SQL queries/reports (in Toad), and sometimes they can get complex, involving lots of unions, joins, and subqueries, and/or requiring dynamic SQL, and/or procedural logic. PL/SQL is custom-made for handling these situations, but as a language it does not compare to C#; even if it did, its tooling does not, and even if that did, forcing yet another language on the team is something to be avoided whenever possible. Experience has shown me that using SQL for the set-based processing, coupled with C# for the procedural processing, is a powerful combination indeed, and far more readable, maintainable and enhanceable than PL/SQL. So we end up with a number of small C# programs which typically construct a SQL query string piece by piece and/or run several queries and process them as needed. This kind of code could easily be a disaster, but a good developer can make it work out quite well and end up with very readable code. So I don't think it's a bad way to code for smaller DB-focused projects.

    My main question is: how best to create and package all these little C# programs that run ad hoc queries and reports against the database? Right now I create little report objects in a DLL, developed and tested with NUnit, but then I continue to use NUnit as the GUI to select and run them. NUnit just happens to provide a nice GUI for this kind of thing, even after testing has been completed. I'm also interested in suggestions for reporting apps generally. I feel like there is a missing piece or product. The perfect tool would allow writing and running C# inside of Toad, or SQL inside of Visual Studio, along with simple reporting facilities. All ideas will be appreciated, but let's make the assumption that PL/SQL is not the solution.
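    As one concrete shape for such a program, a hedged sketch using ODP.NET (Oracle.ManagedDataAccess; the table, column, and report names here are invented): the procedural branching stays in C#, while bind variables keep the assembled SQL parameterized even though the query text is built piece by piece.

        using System;
        using Oracle.ManagedDataAccess.Client;

        static void RunReport(string connStr, DateTime startDate, bool onlyOpen)
        {
            // Build the query text conditionally, but never splice values in.
            string sql = "SELECT owner, COUNT(*) AS cnt FROM work_orders " +
                         "WHERE opened_date >= :start_date";
            if (onlyOpen)
                sql += " AND status = 'OPEN'";
            sql += " GROUP BY owner ORDER BY cnt DESC";

            using (var conn = new OracleConnection(connStr))
            using (var cmd = new OracleCommand(sql, conn))
            {
                cmd.BindByName = true; // match bind variables by name, not position
                cmd.Parameters.Add("start_date", OracleDbType.Date).Value = startDate;
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                    while (reader.Read())
                        Console.WriteLine("{0}: {1}", reader["owner"], reader["cnt"]);
            }
        }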

    Read the article

< Previous Page | 43 44 45 46 47 48 49 50 51 52 53 54  | Next Page >