Search Results

Search found 17401 results on 697 pages for 'query optimizer'.

Page 212/697 | < Previous Page | 208 209 210 211 212 213 214 215 216 217 218 219  | Next Page >

  • In what version of MongoDB was the full $text query operator introduced?

    - by Marc Maxson
    Stupid question, right? But the official docs for 'text index' say: http://docs.mongodb.org/manual/core/index-text/ "Text Indexes: New in version 2.4. To perform queries that access the text index, use the $text query operator." Yet if you click through to the help for the $text operator used to search that index, it reads: http://docs.mongodb.org/manual/reference/operator/query/text/#op._S_text "$text: New in version 2.6." So it seems to be 2.4, but I'm still having problems with it.

    Read the article

  • Any way to make this PostgreSQL count query any faster?

    - by Ben Dauphinee
    I'm running a case-insensitive search on a table with 7.2 million rows, and I was wondering if there was any way to make this query any faster? Currently, it takes approx 11.6 seconds to execute, with just one search parameter, and I'm worried that as soon as I add more than one, this query will become massively slow. SELECT count(*) FROM "exif_parse" WHERE (description ~* 'canon')
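
    One hedged option, sketched below: on PostgreSQL the pg_trgm extension lets case-insensitive pattern matches use a trigram index instead of scanning all 7.2 million rows (ILIKE is accelerated from 9.1, the ~* regex operator from 9.3). The table and column names are taken from the query above; the index name is made up for illustration.

        -- Assumes the pg_trgm extension is available on the server.
        CREATE EXTENSION IF NOT EXISTS pg_trgm;

        -- A GIN trigram index over the searched column (index name is illustrative).
        CREATE INDEX exif_parse_description_trgm_idx
            ON exif_parse USING gin (description gin_trgm_ops);

        -- The original query can then stay as it is; the planner can now use the index.
        SELECT count(*) FROM "exif_parse" WHERE (description ~* 'canon');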

    Read the article

  • Where does the query language sit within the MVC pattern?

    - by weesilmania
    I'd assume that since the query language typically sits within the controller, it belongs to that component, but if I play devil's advocate I'd argue that the query language is executed within the domain of the model, and is tightly coupled to that component, so it might also be a part of it. Anyone know the answer? Is there a straight answer, or is it technology-specific?

    Read the article

  • PL/SQL 'select in' from a list of values whose type is different from the outer query

    - by Attilah
    I have the following tables: Table1 (Col1: varchar2, Col2: number, Col3: number) and Table2 (Col1: number, Col2: varchar2, Col3: varchar2). I want to run a query like this: select distinct Col2 from Table1 where Col1 in ( select Col1 from Table2 ) Table1.Col1 is of type varchar2 while Table2.Col1 is of type number, so it seems I need to do some casting, but I can't get it to work. The problem is that any attempt to run the query returns the following error: ORA-01722: invalid number 01722. 00000 - "invalid number" *Cause: *Action: Note that Table1.Col1 contains some null values.
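
    A hedged sketch of one workaround: cast the NUMBER side to a string rather than casting the varchar2 side to a number, so the NULLs and any non-numeric strings in Table1.Col1 are never run through a numeric conversion and ORA-01722 cannot be raised. Table and column names are the ones from the question.

        -- Compare as strings: TO_CHAR on Table2.Col1 (a NUMBER) always succeeds,
        -- whereas converting Table1.Col1 to a number fails on non-numeric values.
        SELECT DISTINCT t1.Col2
        FROM   Table1 t1
        WHERE  t1.Col1 IN (SELECT TO_CHAR(t2.Col1) FROM Table2 t2);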

    Read the article

  • How can I change a column length using an HQL query?

    - by gmugmu
    I tried session.createSQLQuery("ALTER TABLE People MODIFY address VARCHAR(1000);").executeUpdate(); but this throws org.hibernate.exception.SQLGrammarException: could not execute native bulk manipulation query. After a lot of googling, the recommendation is to use HQL instead of a native SQL query for bulk updates, but I'm not sure how to use HQL to accomplish this. There seems to be no decent HQL documentation for changing the length of a column in a table. Thanks so much for the help.
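
    For what it's worth, a hedged note: HQL bulk statements cover UPDATE, DELETE, and INSERT of mapped entities, not DDL, so an ALTER TABLE cannot be expressed in HQL at all. Schema changes are normally run directly against the database or through a migration tool, for example with plain SQL like the statement below, which assumes a MySQL-style backend since the original statement uses MODIFY.

        -- Plain DDL, executed outside Hibernate (or via a schema migration tool):
        ALTER TABLE People MODIFY address VARCHAR(1000);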

    Read the article

  • Option To AutoFormat Query Syntax in SSMS 2005 or 2008?

    - by dragon77
    In TOAD (for SQL or Oracle), there is a simple AUTOFORMAT button that will nicely format your query - I couldn't find that option in SSMS 2005, but was advised by a co-worker that it was available in SSMS 2008. I am unable to locate the option there either. This is VERY helpful when pasting a query from another source. Thanks for any assistance.

    Read the article

  • Help with simple query - why isn't an index being used?

    - by Randy Minder
    I have the following query: SELECT MAX([LastModifiedTime]) FROM Workflow There are approximately 400M rows in the Workflow table. There is an index on the LastModifiedTime column as follows: CREATE NONCLUSTERED INDEX [IX_Workflow_LastModifiedTime] ON [dbo].[Workflow] ( [LastModifiedTime] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 100) The above query takes 1.5 minutes to execute. Why wouldn't SQL Server use the above index and simply retrieve the last row in the index to get the maximum value? Thanks.
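
    A hedged way to narrow this down: MAX over a single indexed column is normally answered by reading one row from the end of the index (a backward scan), so the equivalent TOP (1) form below, together with the actual execution plan and I/O statistics, helps confirm whether IX_Workflow_LastModifiedTime is really being touched. The index hint is only for diagnosis, not a recommended permanent fix.

        SET STATISTICS IO ON;

        -- Equivalent to SELECT MAX(LastModifiedTime): read one row in descending order,
        -- forcing the nonclustered index so its behavior can be compared to the MAX plan.
        SELECT TOP (1) [LastModifiedTime]
        FROM [dbo].[Workflow] WITH (INDEX ([IX_Workflow_LastModifiedTime]))
        ORDER BY [LastModifiedTime] DESC;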

    Read the article

  • How would I add an if statement into a MySQLi query?

    - by Josh
    Okay so I'm just learning mysqli and I'm having a little trouble putting this code together. I've posted the mysqli query below and then below that is the code I'm trying to combine with the mysqli query and I can't seem to get it to work. Maybe what I'm doing isn't possible, but the third section below is how I had the query written for mysql and it's working fine. Answers in code are appreciated! Thanks! MYSQLI QUERY: <?php require("../config.php"); if ($stmt = $mysqli->prepare("SELECT firstname,lastname,spousefirst,phonecell,email,date,contacttype,status FROM contacts WHERE contacttype IN ('Buyer','Seller','Buyer / Seller','Investor') ORDER BY date DESC")) { $stmt->execute(); $stmt->bind_results($firstname,$lastname,$spousefirst,$phonecell,$email,$date,$contacttype,$status); while ($stmt->fetch()) { echo ''.$firstname.' '." ".' '.$lastname.' '.",".' '.$spousefirst.' '.",".' '.$phonecell.' '.",".' '.$email.' '.",".' '.$date.' '.",".' '.$contacttype.' '.",".' '.$status.'</br>'; } $stmt->close(); } $mysqli->close(); ?> WHAT I'M TRYING TO COMBINE THE ABOVE WITH: if (($_GET['date'] == 'today')) { $sql = "SELECT * FROM contacts WHERE contacttype IN ('Buyer','Seller','Buyer / Seller','Investor') AND date = DATE(NOW()) ORDER BY date DESC"; } WHAT I HAD BEFORE WITH MYSQL THAT WORKS: <?php require("../config.php"); $sql = "SELECT * FROM contacts WHERE contacttype IN ('Buyer','Seller','Buyer / Seller','Investor') AND status = 'New' ORDER BY date DESC"; if (($_GET['date'] == 'today')) { $sql = "SELECT * FROM contacts WHERE contacttype IN ('Buyer','Seller','Buyer / Seller','Investor') AND date = DATE(NOW()) ORDER BY date DESC"; } ?>

    Read the article

  • Can IF be used to start a MySQL query?

    - by Littledot
    Hi there, I have a query that looks like this: mysql_query("IF EXISTS(SELECT * FROM predict WHERE uid=$i AND bid=$j) THEN UPDATE predict SET predict_tfidf=$predict_tfidf WHERE uid=$i AND bid=$j ELSE INSERT INTO predict (uid, bid, predict_tfidf) VALUES('$i','$j','$predict_tfidf') END IF")or die(mysql_error()); But it dies and mysql tells me to check the syntax near IF EXISTS(....) Can we not use an IF statement to start a mysql query? Thank you in advance.
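
    One hedged alternative, sketched below: IF ... THEN is only valid inside stored programs (procedures, functions, triggers), not as a standalone statement sent through mysql_query, which is why the parser rejects it. Assuming the predict table has a UNIQUE or PRIMARY key on (uid, bid), the same insert-or-update can be written as a single statement; the PHP variables are kept as in the question.

        INSERT INTO predict (uid, bid, predict_tfidf)
        VALUES ('$i', '$j', '$predict_tfidf')
        ON DUPLICATE KEY UPDATE predict_tfidf = VALUES(predict_tfidf);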

    Read the article

  • How do I create a query which displays dots (....) after a certain number of characters within the field?

    - by Marchese Il Chihuahua
    I would like to create a query on a field which, after a certain number of characters, adds/displays a number of dots to show the user that there is additional text to read. At the moment I get a syntax error with the following code; it doesn't like the "Left" instruction: X:IIF(len(description) > 5, Left(description, 5) & "....", description) Note: "X" is what I am naming the field 'description' in my query screen in Access.
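
    A hedged sketch of the usual form of that calculated field: bracketing the field name and writing IIf/Len with their standard spelling is typically enough in the Access query grid; if Left still raises an error, a broken VBA library reference is a common culprit for built-in functions failing inside queries.

        X: IIf(Len([description]) > 5, Left([description], 5) & "....", [description])

        -- Or, in SQL view (the table name here is a placeholder):
        SELECT IIf(Len([description]) > 5, Left([description], 5) & "....", [description]) AS X
        FROM YourTable;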

    Read the article

  • The least amount of code possible for this MySQL query?

    - by ddan
    I have a MySQL query that: gets data from three tables linked by unique ids; counts the number of games played in each category, for each user; and counts the number of games each user has played that fall under the "fps" category. It seems to me that this code could be a lot smaller. How would I go about making this query smaller? http://sqlfiddle.com/#!2/6d211/1 Any help is appreciated, even if you just give me links to check out.

    Read the article

  • Is it faster to compute values in a query, call a Scalar Function (decimal(28,2) datatype) 4 times,

    - by Pulsehead
    I have a handful of queries I need to write in SQL Server 2005. Each Query will be calculating 4 unit cost values based on a handful of (up to 11) fields. Any time I want 1 of these 4 unit cost values, I'll want all 4. Which is quicker? Computing in the SQL Query ((a+b+c+d+e+f+g+h+i)/(j+k)), calling ComputeScalarUnitCost(datapoint.ID) 4 times, or joining to ComputeUnitCostTable(datapoint.ID) one time?
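
    A hedged sketch of a fourth option worth benchmarking: in SQL Server 2005, scalar UDFs are invoked row by row and tend to be the slowest choice, while CROSS APPLY lets the shared expression be written once and the four unit costs derived from it inline, fully set-based. All table, column, and multiplier names below are placeholders, since the real formulas aren't shown in the question.

        SELECT d.ID,
               uc.BaseCost * 1.10 AS UnitCost1,   -- the four derived values would each
               uc.BaseCost * 1.25 AS UnitCost2,   -- reuse the shared BaseCost expression
               uc.BaseCost * 1.50 AS UnitCost3,
               uc.BaseCost * 2.00 AS UnitCost4
        FROM DataPoint AS d
        CROSS APPLY (SELECT CAST((d.a + d.b + d.c + d.d + d.e + d.f + d.g + d.h + d.i)
                                 / NULLIF(d.j + d.k, 0) AS decimal(28,2)) AS BaseCost) AS uc;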

    Read the article

  • Why is one query consistently ~25ms faster than another in postgres?

    - by Emory
    A friend wrote a query with the following condition: AND ( SELECT count(1) FROM users_alerts_status uas WHERE uas.alert_id = context_alert.alert_id AND uas.user_id = 18309 AND uas.status = 'read' ) = 0 Seeing this, I suggested we change it to: AND NOT EXISTS ( SELECT 1 FROM users_alerts_status uas WHERE uas.alert_id = context_alert.alert_id AND uas.user_id = 18309 AND uas.status = 'read' ) But in testing, the first version of the query is consistently between 20 and 30ms faster (we tested after restarting the server). Conceptually, what am I missing?

    Read the article

  • Why do query result headers sometimes display as buttons and sometimes as links?

    - by I Like PHP
    Hello all, I'm just curious why phpMyAdmin behaves differently when we modify a query slightly (for example, by putting in an extra space): the headers of the query results sometimes come up in button format (on hover it says sort, but sorting doesn't work at all) and sometimes as blue links. Is there a reason for this difference, or is it caused by something else? I'm attaching both images: button headers and link headers.

    Read the article

  • Is my dns server being attacked? And what should I do about it?

    - by Mnebuerquo
    I've been having some intermittent dns problems with a web server, where certain isp's dns servers don't have my hostnames in cache and fail to look them up. At the same time, queries to opendns for those hostnames resolve correctly. It's intermittent, and it always works fine for me, so it's hard to identify the problem when someone reports connectivity problems to my site. In trying to figure this out, I've been looking at my logs to see if there are any errors I should know about. I found thousands of the following messages in my logs, from different ip's, but all requesting similar dns records: May 12 11:42:13 localhost named[26399]: client 94.76.107.2#36141: query (cache) 'burningpianos.com/MX/IN' denied May 12 11:42:13 localhost named[26399]: client 94.76.107.2#29075: query (cache) 'burningpianos.com/MX/IN' denied May 12 11:42:13 localhost named[26399]: client 94.76.107.2#47924: query (cache) 'burningpianos.com/MX/IN' denied May 12 11:42:13 localhost named[26399]: client 94.76.107.2#4727: query (cache) 'burningpianos.com/MX/IN' denied May 12 11:42:14 localhost named[26399]: client 94.76.107.2#16153: query (cache) 'burningpianos.com/MX/IN' denied May 12 11:42:14 localhost named[26399]: client 94.76.107.2#40267: query (cache) 'burningpianos.com/MX/IN' denied May 12 11:43:35 localhost named[26399]: client 82.209.240.241#63507: query (cache) 'burningpianos.com/MX/IN' denied May 12 11:43:35 localhost named[26399]: client 82.209.240.241#63721: query (cache) 'burningpianos.org/MX/IN' denied May 12 11:43:36 localhost named[26399]: client 82.209.240.241#3537: query (cache) 'burningpianos.com/MX/IN' denied I've read of Dan Kaminski's dns cache poisoning vulnerability, and I'm wondering if these log records are an attempt by some evildoer to attack my dns server. There are thousands of records in my logs, all requesting "burningpianos", some for com and some for org, most looking for an mx record. There are requests from multiple ip's, but each ip will request hundreds of times per day. So this smells to me like an attack. What is the defense against this?

    Read the article

  • Can My Personal GMail Query A Remote LDAP Server?

    - by Maarx
    I have a personal GMail account, from which I frequently send e-mail to many users of a specific business. The corporation has been kind enough to provide me with the credentials to access their LDAP server, and I would like my GMail web client to be able to auto-complete partial addresses or names for which that LDAP server has an entry. Is there any way I can get a personal GMail account (or its corresponding Google account) to incorporate an LDAP server into its Contacts? If I cannot get it to query dynamically and on demand, is there an idiot-proof way (assuming the client permits, which it may not) to query the LDAP server for its entire database, save it, and bulk import it to GMail? Perhaps even something I could set to repeat periodically (weekly, perhaps), without human interaction? If I did the latter, I assume it would be trivial to import all of these contacts under a single category that could be easily manipulated from within the GMail web-based client. I have been a staunch user and supporter of the GMail web-based client since its inception, but this one is kind of a deal-breaker for me. If it's impossible, what do you suggest I do?

    Read the article

  • Enhanced Dynamic Filtering

    - by Ricardo Peres
    Remember my last post on dynamic filtering? Well, this time I'm extending the code in order to allow two levels of querying. Match type, represented by the following options: public enum MatchType { StartsWith = 0, Contains = 1 } And word match: public enum WordMatch { AnyWord = 0, AllWords = 1, ExactPhrase = 2 } You can combine the two levels in order to achieve the following combinations:
    - MatchType.StartsWith + WordMatch.AnyWord: matches any record that starts with any of the words specified
    - MatchType.StartsWith + WordMatch.AllWords: not available; does not make sense, throws an exception
    - MatchType.StartsWith + WordMatch.ExactPhrase: matches any record that starts with the exact specified phrase
    - MatchType.Contains + WordMatch.AnyWord: matches any record that contains any of the specified words
    - MatchType.Contains + WordMatch.AllWords: matches any record that contains all of the specified words
    - MatchType.Contains + WordMatch.ExactPhrase: matches any record that contains the exact specified phrase
    Here is the code: public static IList Search(IQueryable query, Type entityType, String dataTextField, String phrase, MatchType matchType, WordMatch wordMatch, Int32 maxCount) { String [] terms = phrase.Split(' ').Distinct().ToArray(); StringBuilder result = new StringBuilder(); PropertyInfo displayProperty = entityType.GetProperty(dataTextField); IList searchList = null; MethodInfo orderByMethod = typeof(Queryable).GetMethods(BindingFlags.Public | BindingFlags.Static).Where(m => m.Name == "OrderBy").ToArray() [ 0 ].MakeGenericMethod(entityType, displayProperty.PropertyType); MethodInfo takeMethod = typeof(Queryable).GetMethod("Take", BindingFlags.Public | BindingFlags.Static).MakeGenericMethod(entityType); MethodInfo whereMethod = typeof(Queryable).GetMethods(BindingFlags.Public | BindingFlags.Static).Where(m => m.Name == "Where").ToArray() [ 0 ].MakeGenericMethod(entityType); MethodInfo distinctMethod = typeof(Queryable).GetMethods(BindingFlags.Public | BindingFlags.Static).Where(m => m.Name == "Distinct" && m.GetParameters().Length == 1).Single().MakeGenericMethod(entityType); MethodInfo toListMethod = typeof(Enumerable).GetMethod("ToList", BindingFlags.Static | BindingFlags.Public).MakeGenericMethod(entityType); MethodInfo matchMethod = typeof(String).GetMethod ( (matchType == MatchType.StartsWith) ? 
"StartsWith" : "Contains", new Type [] { typeof(String) } ); MemberExpression member = Expression.MakeMemberAccess ( Expression.Parameter(entityType, "n"), displayProperty ); MethodCallExpression call = null; LambdaExpression where = null; LambdaExpression orderBy = Expression.Lambda ( member, member.Expression as ParameterExpression ); switch (matchType) { case MatchType.StartsWith: switch (wordMatch) { case WordMatch.AnyWord: call = Expression.Call ( member, matchMethod, Expression.Constant(terms [ 0 ]) ); where = Expression.Lambda ( call, member.Expression as ParameterExpression ); for (Int32 i = 1; i ()); where = Expression.Lambda ( Expression.Or ( where.Body, exp ), where.Parameters.ToArray() ); } break; case WordMatch.ExactPhrase: call = Expression.Call ( member, matchMethod, Expression.Constant(phrase) ); where = Expression.Lambda ( call, member.Expression as ParameterExpression ); break; case WordMatch.AllWords: throw (new Exception("The match type StartsWith is not supported with word match AllWords")); } break; case MatchType.Contains: switch (wordMatch) { case WordMatch.AnyWord: call = Expression.Call ( member, matchMethod, Expression.Constant(terms [ 0 ]) ); where = Expression.Lambda ( call, member.Expression as ParameterExpression ); for (Int32 i = 1; i ()); where = Expression.Lambda ( Expression.Or ( where.Body, exp ), where.Parameters.ToArray() ); } break; case WordMatch.ExactPhrase: call = Expression.Call ( member, matchMethod, Expression.Constant(phrase) ); where = Expression.Lambda ( call, member.Expression as ParameterExpression ); break; case WordMatch.AllWords: call = Expression.Call ( member, matchMethod, Expression.Constant(terms [ 0 ]) ); where = Expression.Lambda ( call, member.Expression as ParameterExpression ); for (Int32 i = 1; i ()); where = Expression.Lambda ( Expression.AndAlso ( where.Body, exp ), where.Parameters.ToArray() ); } break; } break; } query = orderByMethod.Invoke(null, new Object [] { query, orderBy }) as IQueryable; query = whereMethod.Invoke(null, new Object [] { query, where }) as IQueryable; if (maxCount != 0) { query = takeMethod.Invoke(null, new Object [] { query, maxCount }) as IQueryable; } searchList = toListMethod.Invoke(null, new Object [] { query }) as IList; return (searchList); } And this is how you'd use it: IQueryable query = ctx.MyEntities; IList list = Search(query, typeof(MyEntity), "Name", "Ricardo Peres", MatchType.Contains, WordMatch.ExactPhrase, 10 /*0 for all*/); SyntaxHighlighter.config.clipboardSwf = 'http://alexgorbatchev.com/pub/sh/2.0.320/scripts/clipboard.swf'; SyntaxHighlighter.brushes.CSharp.aliases = ['c#', 'c-sharp', 'csharp']; SyntaxHighlighter.all();

    Read the article

  • SQL Constraints &ndash; CHECK and NOCHECK

    - by David Turner
    One performance issue I faced on a recent project was with the way that our constraints were being managed. We were using Subsonic as our ORM, and it has a useful tool for generating your ORM code called SubStage – once configured, you can regenerate your DAL code easily based on your database schema, and it can even be integrated into your build as a pre-build event if you want to do this.  SubStage also offers the useful feature of being able to generate DDL scripts for your entire database, and can script your data for you too. The problem came when we decided to use the generate scripts feature to migrate the database onto a test database instance – it turns out that the DDL scripts that it generates include the WITH NOCHECK option, so when we executed them on the test instance, and performed some testing, we found that performance wasn't as expected. A constraint can be disabled, enabled but not trusted, or enabled and trusted.  When it is disabled, data can be inserted that violates the constraint because it is not being enforced; this is useful for bulk load scenarios where performance is important.  So what does it mean to say that a constraint is trusted or not trusted?  Well, this refers to the SQL Server Query Optimizer, and whether it trusts that the constraint is valid.  If it trusts the constraint then it doesn't check it is valid when executing a query, so the query can be executed much faster. Here is an example based on this article on TechNet. Here we create two tables with a Foreign Key constraint between them, and add a single row to each.  We then query the tables:

        DROP TABLE t2
        DROP TABLE t1
        GO

        CREATE TABLE t1(col1 int NOT NULL PRIMARY KEY)
        CREATE TABLE t2(col1 int NOT NULL)

        ALTER TABLE t2 WITH CHECK ADD CONSTRAINT fk_t2_t1 FOREIGN KEY(col1)
        REFERENCES t1(col1)

        INSERT INTO t1 VALUES(1)
        INSERT INTO t2 VALUES(1)
        GO

        SELECT COUNT(*) FROM t2
        WHERE EXISTS
            (SELECT *
             FROM t1
             WHERE t1.col1 = t2.col1)

    This all works fine, and in this scenario the constraint is enabled and trusted.  We can verify this by executing the following SQL to query the 'is_disabled' and 'is_not_trusted' properties: select name, is_disabled, is_not_trusted from sys.foreign_keys This gives the following result: We can disable the constraint using this SQL: alter table t2 NOCHECK CONSTRAINT fk_t2_t1 And when we query the constraints again, we see that the constraint is disabled and not trusted: So the constraint won't be enforced and we can insert data into the table t2 that doesn't match the data in t1, but we don't want to do this, so we can enable the constraint again using this SQL: alter table t2 CHECK CONSTRAINT fk_t2_t1 But when we query the constraints again, we see that the constraint is enabled, but it is still not trusted: This means that the optimizer will check the constraint each time a query is executed over it, which will impact the performance of the query, and this is definitely not what we want, so we need to make the constraint trusted by the optimizer again.
    First we should check that our constraints haven't been violated, which we can do by running DBCC: DBCC CHECKCONSTRAINTS (t2) Hopefully you see the following message indicating that DBCC completed without finding any violations of your constraint: Having verified that the constraint was not violated while it was disabled, we can simply execute the following SQL: alter table t2 WITH CHECK CHECK CONSTRAINT fk_t2_t1 At first glance this looks like it must be a typo to have the keyword CHECK repeated twice in succession, but it is the correct syntax, and when we query the constraint's properties, we find that it is now trusted again: To fix our specific problem, we created a script that checked all constraints on our tables, using the following syntax: ALTER TABLE t2 WITH CHECK CHECK CONSTRAINT ALL
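
    A hedged companion query: after a NOCHECK load it can be handy to list every enabled-but-untrusted constraint in the database, so each one can be re-validated with the WITH CHECK CHECK CONSTRAINT syntax shown above.

        SELECT name, is_disabled, is_not_trusted FROM sys.foreign_keys      WHERE is_not_trusted = 1
        UNION ALL
        SELECT name, is_disabled, is_not_trusted FROM sys.check_constraints WHERE is_not_trusted = 1;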

    Read the article

  • Getting the number of fragments which passed the depth test

    - by Etan
    In "modern" environments, the "NV Occlusion Query" extension provides a method to get the number of fragments which passed the depth test. However, on the iPad / iPhone using OpenGL ES, the extension is not available. What is the most performant approach to implement a similar behaviour in the fragment shader? Some of my ideas: Render the object completely in white, then count all the colors together using a two-pass shader where first a vertical line is rendered and for each fragment the shader computes the sum over the whole row. Then, a single vertex is rendered whose fragment sums all the partial sums of the first pass. Doesn't seem to be very efficient. Render the object completely in white over a black background. Downsample recursively, abusing the hardware linear interpolation between textures until being at a reasonably small resolution. This leads to fragments which have a greyscale level depending on the number of white pixels where in their corresponding region. Is this even accurate enough? Use mipmaps and simply read the pixel on the 1x1 level. Again the question of accuracy and if it is even possible using non-power-of-two textures. The problem wit these approaches is, that the pipeline gets stalled which results in major performance issues. Therefore, I'm looking for a more performant way to accomplish my goal. Using the EXT_OCCLUSION_QUERY_BOOLEAN extension Apple introduced EXT_OCCLUSION_QUERY_BOOLEAN in iOS 5.0 for iPad 2. "4.1.6 Occlusion Queries Occlusion queries use query objects to track the number of fragments or samples that pass the depth test. An occlusion query can be started and finished by calling BeginQueryEXT and EndQueryEXT, respectively, with a target of ANY_SAMPLES_PASSED_EXT or ANY_SAMPLES_PASSED_CONSERVATIVE_EXT. When an occlusion query is started with the target ANY_SAMPLES_PASSED_EXT, the samples-boolean state maintained by the GL is set to FALSE. While that occlusion query is active, the samples-boolean state is set to TRUE if any fragment or sample passes the depth test. When the occlusion query finishes, the samples-boolean state of FALSE or TRUE is written to the corresponding query object as the query result value, and the query result for that object is marked as available. If the target of the query is ANY_SAMPLES_PASSED_CONSERVATIVE_EXT, an implementation may choose to use a less precise version of the test which can additionally set the samples-boolean state to TRUE in some other implementation dependent cases." The first sentence hints on a behavior which is exactly what I'm looking for: getting the number of pixels which passed the depth test in an asynchronous manner without much performance loss. However, the rest of the document describes only how to get boolean results. Is it possible to exploit this extension to get the pixel count? Does the hardware support it so that there may be hidden API to get access to the pixel count? Other extensions which could be exploitable would be debugging features like the number of times the fragment shader was invoked (PSInvocations in DirectX - not sure if something simila is available in OpenGL ES). However, this would also result in a pipeline stall.

    Read the article

  • Android SQLite Problem: Program Crashes When Trying a Query!

    - by Skatephone
    Hi i have a problem programming with android SDK 1.6. I'm doing the same things of the "notepad exaple" but the programm crash when i try some query. If i try to do a query directly in to the DatabaseHelper create() metod it goes, but out of this function it doesn't. Do you have any idea? this is the source: public class DbAdapter { public static final String KEY_NAME = "name"; public static final String KEY_TOT_DAYS = "totdays"; public static final String KEY_ROWID = "_id"; private static final String TAG = "DbAdapter"; private DatabaseHelper mDbHelper; private SQLiteDatabase mDb; private static final String DATABASE_NAME = "flowratedb"; private static final String DATABASE_TABLE = "girl_data"; private static final String DATABASE_TABLE_2 = "girl_cyle"; private static final int DATABASE_VERSION = 2; /** * Database creation sql statement */ private static final String DATABASE_CREATE = "create table "+DATABASE_TABLE+" (id integer, name text not null, totdays int);"; private static final String DATABASE_CREATE_2 = "create table "+DATABASE_TABLE_2+" (ref_id integer, day long not null);"; private final Context mCtx; private static class DatabaseHelper extends SQLiteOpenHelper { DatabaseHelper(Context context) { super(context, DATABASE_NAME, null, DATABASE_VERSION); } @Override public void onCreate(SQLiteDatabase db) { db.execSQL(DATABASE_CREATE); db.execSQL(DATABASE_CREATE_2); db.delete(DATABASE_TABLE, null, null); db.delete(DATABASE_TABLE_2, null, null); } @Override public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) { Log.w(TAG, "Upgrading database from version " + oldVersion + " to " + newVersion + ", which will destroy all old data"); db.execSQL("DROP TABLE IF EXISTS "+DATABASE_TABLE); db.execSQL("DROP TABLE IF EXISTS "+DATABASE_TABLE_2); onCreate(db); } } public DbAdapter(Context ctx) { this.mCtx = ctx; } public DbAdapter open() throws SQLException { mDbHelper = new DatabaseHelper(mCtx); mDb = mDbHelper.getWritableDatabase(); return this; } public void close() { mDbHelper.close(); } public long createGirl(int id,String name, int totdays) { ContentValues initialValues = new ContentValues(); initialValues.put(KEY_ROWID, id); initialValues.put(KEY_NAME, name); initialValues.put(KEY_TOT_DAYS, totdays); return mDb.insert(DATABASE_TABLE, null, initialValues); } public long createGirl_fd_day(int refid, long fd) { ContentValues initialValues = new ContentValues(); initialValues.put("ref_id", refid); initialValues.put("calendar", fd); return mDb.insert(DATABASE_TABLE, null, initialValues); } public boolean updateGirl(int rowId, String name, int totdays) { ContentValues args = new ContentValues(); args.put(KEY_NAME, name); args.put(KEY_TOT_DAYS, totdays); return mDb.update(DATABASE_TABLE, args, KEY_ROWID + "=" + rowId, null) > 0; } public boolean deleteGirlsData() { if (mDb.delete(DATABASE_TABLE_2, null, null)>0) if(mDb.delete(DATABASE_TABLE, null, null)>0) return true; return false; } public Bundle fetchAllGirls() { Bundle extras = new Bundle(); Cursor cur = mDb.query(DATABASE_TABLE, new String[] {KEY_ROWID, KEY_NAME, KEY_TOT_DAYS}, null, null, null, null, null); cur.moveToFirst(); int tot = cur.getCount(); extras.putInt("tot", tot); int index; for (int i=0;i<tot;i++){ index=cur.getInt(cur.getColumnIndex("_id")); extras.putString("name"+index, cur.getString(cur.getColumnIndex("name"))); extras.putInt("totdays"+index, cur.getInt(cur.getColumnIndex("totdays"))); } cur.close(); return extras; } public Cursor fetchGirl(int rowId) throws SQLException { Cursor mCursor = 
    mDb.query(true, DATABASE_TABLE, new String[] {KEY_ROWID, KEY_NAME, KEY_TOT_DAYS}, KEY_ROWID + "=" + rowId, null, null, null, null, null); if (mCursor != null) { mCursor.moveToFirst(); } return mCursor; } public Cursor fetchGirlCD(int rowId) throws SQLException { Cursor mCursor = mDb.query(true, DATABASE_TABLE_2, new String[] {"ref_id", "day"}, "ref_id=" + rowId, null, null, null, null, null); if (mCursor != null) { mCursor.moveToFirst(); } return mCursor; } } Thanks! Valerio, from Italy :)

    Read the article

  • Parallelism in .NET – Part 9, Configuration in PLINQ and TPL

    - by Reed
    Parallel LINQ and the Task Parallel Library contain many options for configuration.  Although the default configuration options are often ideal, there are times when customizing the behavior is desirable.  Both frameworks provide full configuration support. When working with Data Parallelism, there is one primary configuration option we often need to control – the number of threads we want the system to use when parallelizing our routine.  By default, PLINQ and the TPL both use the ThreadPool to schedule tasks.  Given the major improvements in the ThreadPool in CLR 4, this default behavior is often ideal.  However, there are times that the default behavior is not appropriate.  For example, if you are working on multiple threads simultaneously, and want to schedule parallel operations from within both threads, you might want to consider restricting each parallel operation to using a subset of the processing cores of the system.  Not doing this might over-parallelize your routine, which leads to inefficiencies from having too many context switches. In the Task Parallel Library, configuration is handled via the ParallelOptions class.  All of the methods of the Parallel class have an overload which accepts a ParallelOptions argument. We configure the Parallel class by setting the ParallelOptions.MaxDegreeOfParallelism property.  For example, let's revisit one of the simple data parallel examples from Part 2: Parallel.For(0, pixelData.GetUpperBound(0), row => { for (int col=0; col < pixelData.GetUpperBound(1); ++col) { pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel); } }); Here, we're looping through an image, and calling a method on each pixel in the image.  If this was being done on a separate thread, and we knew another thread within our system was going to be doing a similar operation, we likely would want to restrict this to using half of the cores on the system.  This could be accomplished easily by doing: var options = new ParallelOptions(); options.MaxDegreeOfParallelism = Math.Max(Environment.ProcessorCount / 2, 1); Parallel.For(0, pixelData.GetUpperBound(0), options, row => { for (int col=0; col < pixelData.GetUpperBound(1); ++col) { pixelData[row, col] = AdjustContrast(pixelData[row, col], minPixel, maxPixel); } }); Now, we're restricting this routine to using no more than half the cores in our system.  Note that I included a check to prevent a single core system from supplying zero; without this check, we'd potentially cause an exception.  I also did not hard code a specific value for the MaxDegreeOfParallelism property.  One of our goals when parallelizing a routine is allowing it to scale on better hardware.  Specifying a hard-coded value would contradict that goal. Parallel LINQ also supports configuration, and in fact, has quite a few more options for configuring the system.
    The main configuration option we most often need is the same as our TPL option: we need to supply the maximum number of processing threads.  In PLINQ, this is done via a new extension method on ParallelQuery<T>: ParallelEnumerable.WithDegreeOfParallelism. Let's revisit our declarative data parallelism sample from Part 6: double min = collection.AsParallel().Min(item => item.PerformComputation()); Here, we're performing a computation on each element in the collection, and saving the minimum value of this operation.  If we wanted to restrict this to a limited number of threads, we would add our new extension method: int maxThreads = Math.Max(Environment.ProcessorCount / 2, 1); double min = collection .AsParallel() .WithDegreeOfParallelism(maxThreads) .Min(item => item.PerformComputation()); This automatically restricts the PLINQ query to half of the threads on the system. PLINQ provides some additional configuration options.  By default, PLINQ will occasionally revert to processing a query sequentially.  This occurs because many queries, if parallelized, typically actually cause an overall slowdown compared to a serial processing equivalent.  By analyzing the "shape" of the query, PLINQ often decides to run a query serially instead of in parallel.  This can occur for (taken from MSDN):
    - Queries that contain a Select, indexed Where, indexed SelectMany, or ElementAt clause after an ordering or filtering operator that has removed or rearranged original indices.
    - Queries that contain a Take, TakeWhile, Skip, SkipWhile operator and where indices in the source sequence are not in the original order.
    - Queries that contain Zip or SequenceEquals, unless one of the data sources has an originally ordered index and the other data source is indexable (i.e. an array or IList(T)).
    - Queries that contain Concat, unless it is applied to indexable data sources.
    - Queries that contain Reverse, unless applied to an indexable data source.
    If the specific query follows these rules, PLINQ will run the query on a single thread.  However, none of these rules look at the specific work being done in the delegates, only at the "shape" of the query.  There are cases where running in parallel may still be beneficial, even if the shape is one where it typically parallelizes poorly.  In these cases, you can override the default behavior by using the WithExecutionMode extension method.  This would be done like so: var reversed = collection .AsParallel() .WithExecutionMode(ParallelExecutionMode.ForceParallelism) .Select(i => i.PerformComputation()) .Reverse(); Here, the default behavior would be to not parallelize the query unless collection implemented IList<T>.  We can force this to run in parallel by adding the WithExecutionMode extension method in the method chain. Finally, PLINQ has the ability to configure how results are returned.  When a query is filtering or selecting an input collection, the results will need to be streamed back into a single IEnumerable<T> result.  For example, the method above returns a new, reversed collection.  In this case, the processing of the collection will be done in parallel, but the results need to be streamed back to the caller serially, so they can be enumerated on a single thread. This streaming introduces overhead.  IEnumerable<T> isn't designed with thread safety in mind, so the system needs to handle merging the parallel processes back into a single stream, which introduces synchronization issues.
There are two extremes of how this could be accomplished, but both extremes have disadvantages. The system could watch each thread, and whenever a thread produces a result, take that result and send it back to the caller.  This would mean that the calling thread would have access to the data as soon as data is available, which is the benefit of this approach.  However, it also means that every item is introducing synchronization overhead, since each item needs to be merged individually. On the other extreme, the system could wait until all of the results from all of the threads were ready, then push all of the results back to the calling thread in one shot.  The advantage here is that the least amount of synchronization is added to the system, which means the query will, on a whole, run the fastest.  However, the calling thread will have to wait for all elements to be processed, so this could introduce a long delay between when a parallel query begins and when results are returned. The default behavior in PLINQ is actually between these two extremes.  By default, PLINQ maintains an internal buffer, and chooses an optimal buffer size to maintain.  Query results are accumulated into the buffer, then returned in the IEnumerable<T> result in chunks.  This provides reasonably fast access to the results, as well as good overall throughput, in most scenarios. However, if we know the nature of our algorithm, we may decide we would prefer one of the other extremes.  This can be done by using the WithMergeOptions extension method.  For example, if we know that our PerformComputation() routine is very slow, but also variable in runtime, we may want to retrieve results as they are available, with no bufferring.  This can be done by changing our above routine to: var reversed = collection .AsParallel() .WithExecutionMode(ParallelExecutionMode.ForceParallelism) .WithMergeOptions(ParallelMergeOptions.NotBuffered) .Select(i => i.PerformComputation()) .Reverse(); On the other hand, if are already on a background thread, and we want to allow the system to maximize its speed, we might want to allow the system to fully buffer the results: var reversed = collection .AsParallel() .WithExecutionMode(ParallelExecutionMode.ForceParallelism) .WithMergeOptions(ParallelMergeOptions.FullyBuffered) .Select(i => i.PerformComputation()) .Reverse(); Notice, also, that you can specify multiple configuration options in a parallel query.  By chaining these extension methods together, we generate a query that will always run in parallel, and will always complete before making the results available in our IEnumerable<T>.

    Read the article
