Search Results

Search found 4206 results on 169 pages for 'equals operator'.

  • Purpose of overloading operators in C++?

    - by Geo Drawkcab
    What is the main purpose of overloading operators in C++? In the code below, << is overloaded; what is the advantage of doing so?

        #include <iostream>
        #include <string>
        #include <cstdlib>
        using namespace std;

        class book {
            string name, gvari;
            double cost;
            int year;
        public:
            book() {}
            book(string a, string b, double c, int d) {
                name = a; gvari = b; cost = c; year = d;
            }
            ~book() {}
            double setprice(double a) { cost = a; return cost; }
            friend ostream& operator <<(ostream&, book&);
            void printbook() {
                cout << "wignis saxeli " << name << endl;
                cout << "wignis avtori " << gvari << endl;
                cout << "girebuleba " << cost << endl;
                cout << "weli " << year << endl;
            }
        };

        ostream& operator <<(ostream& out, book& a) {
            out << "wignis saxeli " << a.name << endl;
            out << "wignis avtori " << a.gvari << endl;
            out << "girebuleba " << a.cost << endl;
            out << "weli " << a.year << endl;
            return out;
        }

        class library_card : public book {
            string nomeri;
            int raod;
        public:
            library_card() {}
            library_card(string a, int b) { nomeri = a; raod = b; }
            ~library_card() {}
            void printcard() {
                cout << "katalogis nomeri " << nomeri << endl;
                cout << "gacemis raodenoba " << raod << endl;
            }
            friend ostream& operator <<(ostream&, library_card&);
        };

        ostream& operator <<(ostream& out, library_card& b) {
            out << "katalogis nomeri " << b.nomeri << endl;
            out << "gacemis raodenoba " << b.raod << endl;
            return out;
        }

        int main() {
            book A("robizon kruno", "giorgi", 15, 1992);
            library_card B("910CPP", 123);
            A.printbook();
            B.printbook();
            A.setprice(15);
            B.printbook();
            system("pause");
            return 0;
        }

  • SQL SERVER – Subquery or Join – Various Options – SQL Server Engine knows the Best

    - by pinaldave
    This is a followup post to my earlier article SQL SERVER – Convert IN to EXISTS – Performance Talk; after reading all the comments I have received, I felt that I could write more on the same subject to clear a few things up. First let us run the following four queries; all of them return exactly the same resultset.

        USE AdventureWorks
        GO
        -- use of =
        SELECT *
        FROM HumanResources.Employee E
        WHERE E.EmployeeID = (
            SELECT EA.EmployeeID
            FROM HumanResources.EmployeeAddress EA
            WHERE EA.EmployeeID = E.EmployeeID)
        GO
        -- use of IN
        SELECT *
        FROM HumanResources.Employee E
        WHERE E.EmployeeID IN (
            SELECT EA.EmployeeID
            FROM HumanResources.EmployeeAddress EA
            WHERE EA.EmployeeID = E.EmployeeID)
        GO
        -- use of EXISTS
        SELECT *
        FROM HumanResources.Employee E
        WHERE EXISTS (
            SELECT EA.EmployeeID
            FROM HumanResources.EmployeeAddress EA
            WHERE EA.EmployeeID = E.EmployeeID)
        GO
        -- use of JOIN
        SELECT *
        FROM HumanResources.Employee E
        INNER JOIN HumanResources.EmployeeAddress EA ON E.EmployeeID = EA.EmployeeID
        GO

    Let us compare the execution plans of the queries listed above. Click on the image to see a larger version. It is quite clear from the execution plans that in the case of IN, EXISTS and JOIN, the SQL Server engine is smart enough to figure out that a Merge Join is the optimal plan for the query and executes it. However, in the case of the equals (=) operator, SQL Server is forced to use a Nested Loop and test each result of the inner query against the outer query, which cuts performance. Please note that I am not suggesting here that Nested Loop is bad or that Merge Join is better; this can very well vary with your machine and the amount of resources available on your computer. When I see the equals (=) operator used in a query like the one above, I usually recommend checking whether IN, EXISTS or JOIN can be used instead. As I said, this can vary from system to system. What is your take on the queries above? I believe the SQL Server engine is usually smart enough to figure out the ideal execution plan and use it. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Joins, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

  • So…is it a Seek or a Scan?

    - by Paul White
    You’re probably most familiar with the terms ‘Seek’ and ‘Scan’ from the graphical plans produced by SQL Server Management Studio (SSMS).  The image to the left shows the most common ones, with the three types of scan at the top, followed by four types of seek.  You might look to the SSMS tool-tip descriptions to explain the differences between them: Not hugely helpful are they?  Both mention scans and ranges (nothing about seeks) and the Index Seek description implies that it will not scan the index entirely (which isn’t necessarily true). Recall also yesterday’s post where we saw two Clustered Index Seek operations doing very different things.  The first Seek performed 63 single-row seeking operations; and the second performed a ‘Range Scan’ (more on those later in this post).  I hope you agree that those were two very different operations, and perhaps you are wondering why there aren’t different graphical plan icons for Range Scans and Seeks?  I have often wondered about that, and the first person to mention it after yesterday’s post was Erin Stellato (twitter | blog): Before we go on to make sense of all this, let’s look at another example of how SQL Server confusingly mixes the terms ‘Scan’ and ‘Seek’ in different contexts.  The diagram below shows a very simple heap table with two columns, one of which is the non-clustered Primary Key, and the other has a non-unique non-clustered index defined on it.  The right hand side of the diagram shows a simple query, it’s associated query plan, and a couple of extracts from the SSMS tool-tip and Properties windows. Notice the ‘scan direction’ entry in the Properties window snippet.  Is this a seek or a scan?  The different references to Scans and Seeks are even more pronounced in the XML plan output that the graphical plan is based on.  This fragment is what lies behind the single Index Seek icon shown above: You’ll find the same confusing references to Seeks and Scans throughout the product and its documentation. Making Sense of Seeks Let’s forget all about scans for a moment, and think purely about seeks.  Loosely speaking, a seek is the process of navigating an index B-tree to find a particular index record, most often at the leaf level.  A seek starts at the root and navigates down through the levels of the index to find the point of interest: Singleton Lookups The simplest sort of seek predicate performs this traversal to find (at most) a single record.  This is the case when we search for a single value using a unique index and an equality predicate.  It should be readily apparent that this type of search will either find one record, or none at all.  This operation is known as a singleton lookup.  Given the example table from before, the following query is an example of a singleton lookup seek: Sadly, there’s nothing in the graphical plan or XML output to show that this is a singleton lookup – you have to infer it from the fact that this is a single-value equality seek on a unique index.  The other common examples of a singleton lookup are bookmark lookups – both the RID and Key Lookup forms are singleton lookups (an RID lookup finds a single record in a heap from the unique row locator, and a Key Lookup does much the same thing on a clustered table).  If you happen to run your query with STATISTICS IO ON, you will notice that ‘Scan Count’ is always zero for a singleton lookup. Range Scans The other type of seek predicate is a ‘seek plus range scan’, which I will refer to simply as a range scan.  
The seek operation makes an initial descent into the index structure to find the first leaf row that qualifies, and then performs a range scan (either backwards or forwards in the index) until it reaches the end of the scan range. The ability of a range scan to proceed in either direction comes about because index pages at the same level are connected by a doubly-linked list – each page has a pointer to the previous page (in logical key order) as well as a pointer to the following page.  The doubly-linked list is represented by the green and red dotted arrows in the index diagram presented earlier.  One subtle (but important) point is that the notion of a ‘forward’ or ‘backward’ scan applies to the logical key order defined when the index was built.  In the present case, the non-clustered primary key index was created as follows: CREATE TABLE dbo.Example ( key_col INTEGER NOT NULL, data INTEGER NOT NULL, CONSTRAINT [PK dbo.Example key_col] PRIMARY KEY NONCLUSTERED (key_col ASC) ) ; Notice that the primary key index specifies an ascending sort order for the single key column.  This means that a forward scan of the index will retrieve keys in ascending order, while a backward scan would retrieve keys in descending key order.  If the index had been created instead on key_col DESC, a forward scan would retrieve keys in descending order, and a backward scan would return keys in ascending order. A range scan seek predicate may have a Start condition, an End condition, or both.  Where one is missing, the scan starts (or ends) at one extreme end of the index, depending on the scan direction.  Some examples might help clarify that: the following diagram shows four queries, each of which performs a single seek against a column holding every integer from 1 to 100 inclusive.  The results from each query are shown in the blue columns, and relevant attributes from the Properties window appear on the right: Query 1 specifies that all key_col values less than 5 should be returned in ascending order.  The query plan achieves this by seeking to the start of the index leaf (there is no explicit starting value) and scanning forward until the End condition (key_col < 5) is no longer satisfied (SQL Server knows it can stop looking as soon as it finds a key_col value that isn’t less than 5 because all later index entries are guaranteed to sort higher). Query 2 asks for key_col values greater than 95, in descending order.  SQL Server returns these results by seeking to the end of the index, and scanning backwards (in descending key order) until it comes across a row that isn’t greater than 95.  Sharp-eyed readers may notice that the end-of-scan condition is shown as a Start range value.  This is a bug in the XML show plan which bubbles up to the Properties window – when a backward scan is performed, the roles of the Start and End values are reversed, but the plan does not reflect that.  Oh well. Query 3 looks for key_col values that are greater than or equal to 10, and less than 15, in ascending order.  This time, SQL Server seeks to the first index record that matches the Start condition (key_col >= 10) and then scans forward through the leaf pages until the End condition (key_col < 15) is no longer met. Query 4 performs much the same sort of operation as Query 3, but requests the output in descending order.  
Again, we have to mentally reverse the Start and End conditions because of the bug, but otherwise the process is the same as always: SQL Server finds the highest-sorting record that meets the condition ‘key_col < 25’ and scans backward until ‘key_col >= 20’ is no longer true. One final point to note: seek operations always have the Ordered: True attribute.  This means that the operator always produces rows in a sorted order, either ascending or descending depending on how the index was defined, and whether the scan part of the operation is forward or backward.  You cannot rely on this sort order in your queries of course (you must always specify an ORDER BY clause if order is important) but SQL Server can make use of the sort order internally.  In the four queries above, the query optimizer was able to avoid an explicit Sort operator to honour the ORDER BY clause, for example. Multiple Seek Predicates As we saw yesterday, a single index seek plan operator can contain one or more seek predicates.  These seek predicates can either be all singleton seeks or all range scans – SQL Server does not mix them.  For example, you might expect the following query to contain two seek predicates, a singleton seek to find the single record in the unique index where key_col = 10, and a range scan to find the key_col values between 15 and 20: SELECT key_col FROM dbo.Example WHERE key_col = 10 OR key_col BETWEEN 15 AND 20 ORDER BY key_col ASC ; In fact, SQL Server transforms the singleton seek (key_col = 10) to the equivalent range scan, Start:[key_col >= 10], End:[key_col <= 10].  This allows both range scans to be evaluated by a single seek operator.  To be clear, this query results in two range scans: one from 10 to 10, and one from 15 to 20. Final Thoughts That’s it for today – tomorrow we’ll look at monitoring singleton lookups and range scans, and I’ll show you a seek on a heap table. Yes, a seek.  On a heap.  Not an index! If you would like to run the queries in this post for yourself, there’s a script below.  Thanks for reading! 
IF OBJECT_ID(N'dbo.Example', N'U') IS NOT NULL BEGIN DROP TABLE dbo.Example; END ; -- Test table is a heap -- Non-clustered primary key on 'key_col' CREATE TABLE dbo.Example ( key_col INTEGER NOT NULL, data INTEGER NOT NULL, CONSTRAINT [PK dbo.Example key_col] PRIMARY KEY NONCLUSTERED (key_col) ) ; -- Non-unique non-clustered index on the 'data' column CREATE NONCLUSTERED INDEX [IX dbo.Example data] ON dbo.Example (data) ; -- Add 100 rows INSERT dbo.Example WITH (TABLOCKX) ( key_col, data ) SELECT key_col = V.number, data = V.number FROM master.dbo.spt_values AS V WHERE V.[type] = N'P' AND V.number BETWEEN 1 AND 100 ; -- ================ -- Singleton lookup -- ================ ; -- Single value equality seek in a unique index -- Scan count = 0 when STATISTIS IO is ON -- Check the XML SHOWPLAN SELECT E.key_col FROM dbo.Example AS E WHERE E.key_col = 32 ; -- =========== -- Range Scans -- =========== ; -- Query 1 SELECT E.key_col FROM dbo.Example AS E WHERE E.key_col <= 5 ORDER BY E.key_col ASC ; -- Query 2 SELECT E.key_col FROM dbo.Example AS E WHERE E.key_col > 95 ORDER BY E.key_col DESC ; -- Query 3 SELECT E.key_col FROM dbo.Example AS E WHERE E.key_col >= 10 AND E.key_col < 15 ORDER BY E.key_col ASC ; -- Query 4 SELECT E.key_col FROM dbo.Example AS E WHERE E.key_col >= 20 AND E.key_col < 25 ORDER BY E.key_col DESC ; -- Final query (singleton + range = 2 range scans) SELECT E.key_col FROM dbo.Example AS E WHERE E.key_col = 10 OR E.key_col BETWEEN 15 AND 20 ORDER BY E.key_col ASC ; -- === TIDY UP === DROP TABLE dbo.Example; © 2011 Paul White email: [email protected] twitter: @SQL_Kiwi

  • C#: String Concatenation vs Format vs StringBuilder

    - by James Michael Hare
    I was looking through my groups’ C# coding standards the other day and there were a couple of legacy items in there that caught my eye.  They had been passed down from committee to committee so many times that no one even thought to second guess and try them for a long time.  It’s yet another example of how micro-optimizations can often get the best of us and cause us to write code that is not as maintainable as it could be for the sake of squeezing an extra ounce of performance out of our software. So the two standards in question were these, in paraphrase: Prefer StringBuilder or string.Format() to string concatenation. Prefer string.Equals() with case-insensitive option to string.ToUpper().Equals(). Now some of you may already know what my results are going to show, as these items have been compared before on many blogs, but I think it’s always worth repeating and trying these yourself.  So let’s dig in. The first test was a pretty standard one.  When concattenating strings, what is the best choice: StringBuilder, string concattenation, or string.Format()? So before we being I read in a number of iterations from the console and a length of each string to generate.  Then I generate that many random strings of the given length and an array to hold the results.  Why am I so keen to keep the results?  Because I want to be able to snapshot the memory and don’t want garbage collection to collect the strings, hence the array to keep hold of them.  I also didn’t want the random strings to be part of the allocation, so I pre-allocate them and the array up front before the snapshot.  So in the code snippets below: num – Number of iterations. strings – Array of randomly generated strings. results – Array to hold the results of the concatenation tests. timer – A System.Diagnostics.Stopwatch() instance to time code execution. start – Beginning memory size. stop – Ending memory size. after – Memory size after final GC. So first, let’s look at the concatenation loop: 1: // build num strings using concattenation. 2: for (int i = 0; i < num; i++) 3: { 4: results[i] = "This is test #" + i + " with a result of " + strings[i]; 5: } Pretty standard, right?  Next for string.Format(): 1: // build strings using string.Format() 2: for (int i = 0; i < num; i++) 3: { 4: results[i] = string.Format("This is test #{0} with a result of {1}", i, strings[i]); 5: }   Finally, StringBuilder: 1: // build strings using StringBuilder 2: for (int i = 0; i < num; i++) 3: { 4: var builder = new StringBuilder(); 5: builder.Append("This is test #"); 6: builder.Append(i); 7: builder.Append(" with a result of "); 8: builder.Append(strings[i]); 9: results[i] = builder.ToString(); 10: } So I take each of these loops, and time them by using a block like this: 1: // get the total amount of memory used, true tells it to run GC first. 2: start = System.GC.GetTotalMemory(true); 3:  4: // restart the timer 5: timer.Reset(); 6: timer.Start(); 7:  8: // *** code to time and measure goes here. *** 9:  10: // get the current amount of memory, stop the timer, then get memory after GC. 11: stop = System.GC.GetTotalMemory(false); 12: timer.Stop(); 13: other = System.GC.GetTotalMemory(true); So let’s look at what happens when I run each of these blocks through the timer and memory check at 500,000 iterations: 1: Operator + - Time: 547, Memory: 56104540/55595960 - 500000 2: string.Format() - Time: 749, Memory: 57295812/55595960 - 500000 3: StringBuilder - Time: 608, Memory: 55312888/55595960 – 500000   Egad!  
    string.Format brings up the rear and + triumphs, well, at least in terms of speed. The concat burns more memory than StringBuilder but less than string.Format(). This shows two main things: StringBuilder is not always the panacea many think it is, and the difference between any of the three is minuscule! The second point is extremely important! You will often hear people who will grasp at results and say, "look, operator + is 10% faster than StringBuilder so always use operator +." Statements like this are a disservice and often misleading. For example, if I had a good guess at what the size of the string would be, I could have preallocated my StringBuilder like so:

        for (int i = 0; i < num; i++)
        {
            // pre-declare StringBuilder to have 100 char buffer.
            var builder = new StringBuilder(100);
            builder.Append("This is test #");
            builder.Append(i);
            builder.Append(" with a result of ");
            builder.Append(strings[i]);
            results[i] = builder.ToString();
        }

    Now let's look at the times:

        Operator +      - Time: 551, Memory: 56104412/55595960 - 500000
        string.Format() - Time: 753, Memory: 57296484/55595960 - 500000
        StringBuilder   - Time: 525, Memory: 59779156/55595960 - 500000

    Whoa! All of a sudden StringBuilder is back on top again! But notice, it takes more memory now. This makes perfect sense if you examine the IL behind the scenes. Whenever you do a string concat (+) in your code, it examines the lengths of the arguments and creates a StringBuilder behind the scenes of the appropriate size for you. But even IF we know the approximate size of our StringBuilder, look how much less readable it is! That's why I feel you should always take into account both readability and performance. After all, consider that all these timings are over 500,000 iterations. That's at most a 0.0004 ms difference per call, which is negligible. The key is to pick the best tool for the job. What do I mean? Consider these awesome words of wisdom: concatenate (+) is best at concatenating, StringBuilder is best when you need to build, and Format is best at formatting. Totally Earth-shattering, right! But if you consider it carefully, it actually has a lot of beauty in its simplicity. Remember, there is no magic bullet. If one of these always beat the others we'd only have one and not three choices. The fact is, the concatenation operator (+) has been optimized for speed and looks the cleanest for joining together a known set of strings in the simplest manner possible. StringBuilder, on the other hand, excels when you need to build a string of indeterminate length. Use it in those times when you are looping till you hit a stop condition and building a result, and it won't steer you wrong. string.Format seems to be the loser from the stats, but consider which of these is more readable (yes, ignore the fact that you could do this with ToString() on a DateTime):

        // build a date via concatenation
        var date1 = (month < 10 ? "0" : string.Empty) + month + '/'
                  + (day < 10 ? "0" : string.Empty) + day + '/' + year;

        // build a date via string builder
        var builder = new StringBuilder(10);
        if (month < 10) builder.Append('0');
        builder.Append(month);
        builder.Append('/');
        if (day < 10) builder.Append('0');
        builder.Append(day);
        builder.Append('/');
        builder.Append(year);
        var date2 = builder.ToString();

        // build a date via string.Format
        var date3 = string.Format("{0:00}/{1:00}/{2:0000}", month, day, year);

    So the strength in string.Format is that it makes constructing a formatted string easy to read. Yes, it's slower, but look at how much more elegant it is to do zero-padding and anything else string.Format does. So my lesson is, don't look for the silver bullet! Choose the best tool. Micro-optimization almost always bites you in the end because you're sacrificing readability for performance, which is almost exactly the wrong choice 90% of the time. I love the rules of optimization. They've been stated before in many forms, but here's how I always remember them: For Beginners: Do not optimize. For Experts: Do not optimize yet. It's so true. Most of the time on today's modern hardware, a micro-second optimization at the expense of readability will net you nothing because it won't be your bottleneck. Code for readability, choose the best tool for the job (which will usually be the most readable and maintainable as well). Then, and only then, if you need that extra performance boost after profiling your code and exhausting all other options… then you can start to think about optimizing.
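
    For comparison, a rough Java analogue of the same experiment is sketched below. The class name and iteration count are arbitrary, the absolute numbers will differ by runtime and hardware, and this is not a rigorous benchmark; the relative picture tends to be similar, though, since javac also rewrites a single + expression into a StringBuilder chain (or an invokedynamic concatenation on newer JDKs).

        import java.util.Random;

        // Rough Java analogue of the C# experiment above: compare the '+' operator,
        // String.format(), and a pre-sized StringBuilder over many iterations.
        // Numbers are only indicative; the JIT, the GC and the hardware all matter.
        public class ConcatBenchmarkSketch {
            public static void main(String[] args) {
                final int num = 500_000;
                Random rnd = new Random(42);
                String[] strings = new String[num];
                for (int i = 0; i < num; i++) {
                    strings[i] = Integer.toString(rnd.nextInt(1_000_000));
                }
                String[] results = new String[num];

                long start = System.nanoTime();
                for (int i = 0; i < num; i++) {
                    results[i] = "This is test #" + i + " with a result of " + strings[i];
                }
                System.out.println("+ operator     : " + (System.nanoTime() - start) / 1_000_000 + " ms");

                start = System.nanoTime();
                for (int i = 0; i < num; i++) {
                    results[i] = String.format("This is test #%d with a result of %s", i, strings[i]);
                }
                System.out.println("String.format(): " + (System.nanoTime() - start) / 1_000_000 + " ms");

                start = System.nanoTime();
                for (int i = 0; i < num; i++) {
                    StringBuilder builder = new StringBuilder(100);  // pre-sized buffer
                    builder.append("This is test #").append(i)
                           .append(" with a result of ").append(strings[i]);
                    results[i] = builder.toString();
                }
                System.out.println("StringBuilder  : " + (System.nanoTime() - start) / 1_000_000 + " ms");
            }
        }

    The conclusion carries over: the readable form is usually the right default, and StringBuilder earns its keep mainly when the result is built up across a loop of unknown length.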

  • Overriding GetHashCode in a mutable struct - What NOT to do?

    - by Kyle Baran
    I am using the XNA Framework to make a learning project. It has a Point struct which exposes an X and Y value; for the purpose of optimization, it breaks the rules for proper struct design, since it's a mutable struct. As Marc Gravell, Jon Skeet, and Eric Lippert point out in their respective posts about GetHashCode() (which Point overrides), this is a rather bad thing, since if an object's values change while it's contained in a hash-based collection (e.g., during LINQ queries), it can become "lost". However, I am making my own Point3D struct, following the design of Point as a guideline. Thus, it too is a mutable struct which overrides GetHashCode(). The only difference is that mine exposes an int for the X, Y, and Z values, but it is fundamentally the same. The signatures are below:

        public struct Point3D : IEquatable<Point3D>
        {
            public int X;
            public int Y;
            public int Z;

            public static bool operator !=(Point3D a, Point3D b) { }
            public static bool operator ==(Point3D a, Point3D b) { }
            public Point3D Zero { get; }
            public override int GetHashCode() { }
            public override bool Equals(object obj) { }
            public bool Equals(Point3D other) { }
            public override string ToString() { }
        }

    I have tried to break my struct in the way they describe, namely by storing it in a List<Point3D> and changing the value via a method using ref, but I did not encounter the behavior they warn about (maybe a pointer might allow me to break it?). Am I being too cautious in my approach, or should I be okay to use it as is?
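
    For illustration, the failure mode those posts describe is easy to reproduce in Java with any mutable class used as a hash key; the Point3D below is a hypothetical stand-in for the struct in the question, not the XNA type.

        import java.util.HashSet;
        import java.util.Set;

        // Demonstrates how a mutable object whose hashCode depends on mutable state
        // gets "lost" in a hash-based collection once that state changes.
        public class LostKeyDemo {
            static final class Point3D {
                int x, y, z;
                Point3D(int x, int y, int z) { this.x = x; this.y = y; this.z = z; }
                @Override public boolean equals(Object o) {
                    if (!(o instanceof Point3D)) return false;
                    Point3D p = (Point3D) o;
                    return x == p.x && y == p.y && z == p.z;
                }
                @Override public int hashCode() { return 31 * (31 * x + y) + z; }
            }

            public static void main(String[] args) {
                Set<Point3D> set = new HashSet<>();
                Point3D p = new Point3D(1, 2, 3);
                set.add(p);
                System.out.println(set.contains(p)); // true

                p.x = 99;                            // mutate a field that feeds hashCode
                System.out.println(set.contains(p)); // false, the wrong bucket is searched
                System.out.println(set.size());      // 1, the element is still stored but unreachable
            }
        }

    A List never consults hashCode, which is why the List<Point3D> experiment did not show the problem; it only appears in hash-based containers such as HashSet or HashMap keys (and, in .NET terms, in LINQ operators that build hash tables internally).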

  • Is this kind of design - a class for Operations On Object - correct?

    - by Mithir
    In our system we have many complex operations which involve many validations and DB activities. One of the main pieces of business functionality could have been designed better. In short, there was no separation of layers, and the code would only work for the scenario for which it was first designed, while now there are more scenarios (like requests from an API or from other devices). So I had to redesign. I found myself moving all the DB code into objects which act as business-to-DB objects, and I put all the business logic in an "operator" kind of class, which I implemented like this: first, I created an object which holds all the information needed for the operation; let's call it InformationObject. Then I created an OperatorObject which takes the InformationObject as a parameter and acts on it. The OperatorObject should activate different objects, validate or check for existence or any scenario in which the business logic is compromised, and then perform the operation according to the information on the InformationObject. So my question is: is this kind of implementation correct? PS, this operator only works on a single business-wise operation.
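
    As a sanity check on the shape of the design, here is a minimal Java sketch of the same split. TransferRequest, AccountRepository and TransferOperator are hypothetical names standing in for the InformationObject, the business-to-DB objects and the OperatorObject.

        import java.util.HashMap;
        import java.util.Map;
        import java.util.Objects;

        // Minimal sketch of the described split: a parameter object carrying everything
        // the operation needs, and an "operator" that validates and executes it.
        // All names and the repository interface are hypothetical.
        public class OperatorSketch {

            static final class TransferRequest {                   // the "InformationObject"
                final String sourceAccount;
                final String targetAccount;
                final long amountCents;
                TransferRequest(String source, String target, long amountCents) {
                    this.sourceAccount = Objects.requireNonNull(source);
                    this.targetAccount = Objects.requireNonNull(target);
                    this.amountCents = amountCents;
                }
            }

            interface AccountRepository {                          // a business-to-DB object
                long balanceOf(String account);
                void transfer(String from, String to, long amountCents);
            }

            static final class TransferOperator {                  // the "OperatorObject"
                private final AccountRepository repository;
                TransferOperator(AccountRepository repository) { this.repository = repository; }

                void execute(TransferRequest request) {
                    // validations live here, independent of which caller built the request
                    if (request.amountCents <= 0) {
                        throw new IllegalArgumentException("amount must be positive");
                    }
                    if (repository.balanceOf(request.sourceAccount) < request.amountCents) {
                        throw new IllegalStateException("insufficient funds");
                    }
                    repository.transfer(request.sourceAccount, request.targetAccount, request.amountCents);
                }
            }

            public static void main(String[] args) {
                Map<String, Long> balances = new HashMap<>(Map.of("A-1", 10_000L, "B-2", 0L));
                AccountRepository repo = new AccountRepository() {   // in-memory stand-in for the DB layer
                    public long balanceOf(String account) { return balances.get(account); }
                    public void transfer(String from, String to, long cents) {
                        balances.put(from, balances.get(from) - cents);
                        balances.put(to, balances.get(to) + cents);
                    }
                };
                new TransferOperator(repo).execute(new TransferRequest("A-1", "B-2", 2_500L));
                System.out.println(balances);   // balances after the transfer
            }
        }

    Whether this is "correct" is mostly a naming question; the same shape is usually described as a parameter/command object plus an application-service (use-case) class, with repositories hiding the DB work, and it is exactly what makes the operation reusable from the UI, an API, or another device.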

  • Should I set the cells in a TiledMap to null when my player hits them?

    - by Vishal Kumar
    I am making a tile-based game using Libgdx. I took the idea from the SuperKoalio platformer demo by Mario Zechner. When I wanted to implement collectables in my game, I simply drew the coins using the Tiled map editor. When my player hits one, I used to set that cell to null. Someone on this site suggested that I not do so... never use null. I agreed. But what other way is there? If I use layer.setCell(x, y, ...) to set the cell to any other cell, even a transparent one, my player seems to be stopped by an invisible object/hurdle. This is my code:

        for (Rectangle tile : tiles) {
            if (koalaRect.overlaps(tile)) {
                TiledMapTileLayer layer = (TiledMapTileLayer) map.getLayers().get(1);
                try {
                    type = layer.getCell((int) tile.x, (int) tile.y)
                                .getTile().getProperties().get("tileType").toString();
                } catch (Exception e) {
                    System.out.print("Exception in tile property: " + e);
                    type = "nonbreakable";
                }
                // Let us destroy this cell
                if ("award".equals(type)) {
                    layer.setCell((int) tile.x, (int) tile.y, null);
                    listener.coin();
                    score += 100;
                    test = "" + layer.getCell(0, 0).getTile().getProperties().get("tileType");
                }
                // DOING THIS GIVES A BAD EFFECT
                if ("killer".equals(type)) {
                    // player.health--;
                    // layer.setCell((int) tile.x, (int) tile.y, layer.getCell(20, 0));
                }
                // we actually reset the player y-position here
                // so it is just below/above the tile we collided with
                // this removes bouncing :)
                if (player.velocity.y > 0) {
                    player.position.y = (tile.y - Player.height);
                }
            }
        }

    Is this the right approach? Or should I create a separate Sprite class called Coin?
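
    One option that avoids both null cells and the invisible hurdle is to leave the cell in place, clear its tile, and have the collision pass ignore cells whose tile is missing or non-solid. The sketch below assumes the libgdx TiledMapTileLayer API used in the question and the same "tileType" property convention; everything beyond that is illustrative, not the engine's required approach.

        import com.badlogic.gdx.maps.tiled.TiledMapTile;
        import com.badlogic.gdx.maps.tiled.TiledMapTileLayer;
        import com.badlogic.gdx.maps.tiled.TiledMapTileLayer.Cell;

        // Sketch: collect a coin by clearing the cell's tile instead of nulling the cell,
        // and make the collision pass treat "no tile" (or a non-solid tile) as walkable.
        public final class TileCollision {

            // Called when the player overlaps tile (x, y) and it is an "award" tile.
            public static void collectCoin(TiledMapTileLayer layer, int x, int y) {
                Cell cell = layer.getCell(x, y);
                if (cell != null) {
                    cell.setTile(null);   // nothing is drawn, and isSolid() below returns false
                }
            }

            // Used by the collision loop instead of "is there a cell here?".
            public static boolean isSolid(TiledMapTileLayer layer, int x, int y) {
                Cell cell = layer.getCell(x, y);
                if (cell == null) return false;
                TiledMapTile tile = cell.getTile();
                if (tile == null) return false;                        // collected coin, decoration, etc.
                Object type = tile.getProperties().get("tileType");
                return !"award".equals(type) && !"decor".equals(type); // only real walls block movement
            }
        }

    A separate Coin sprite class is also reasonable, and scales better once coins need animation or sound; the property-based check just keeps everything in the map data.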

  • The best way to have a pointer to several methods - critique requested

    - by user827992
    I'm starting with a short introduction of what I know from the C language: a pointer is a type that stores an address or NULL; the * operator takes the value of the variable on its right, uses that value as an address, and reads the value of the variable at that address; the & operator generates a pointer to the variable on its right. So I was thinking that pointers in C++ could work this way too, but I was wrong. To generate a pointer to a static method I have to do this:

        #include <iostream>

        class Foo {
        public:
            static void dummy(void) {
                std::cout << "I'm dummy" << std::endl;
            };
        };

        int main() {
            void (*p)();
            p = Foo::dummy;    // step 1
            p();
            p = &(Foo::dummy); // step 2
            p();
            p = Foo;           // step 3
            p->dummy();
            return(0);
        }

    Now I have several questions: why does step 1 work? Why does step 2 work too? It looks like a "pointer to pointer" for p to me, very different from step 1. Why is step 3 the only one that doesn't work, when honestly it is the only one that makes some sort of sense to me? How can I write an array of pointers, or a pointer-to-pointers structure, to store methods (static or non-static, from real objects)? And what is the best syntax and coding style for generating a pointer to a method?

  • Using prefix incremented loops in C#

    - by KChaloux
    Back when I started programming in college, a friend encouraged me to use the prefix increment operator ++i instead of the postfix i++, citing that there was a slight chance of better performance with no real chance of a downside. I realize this is true in C++, and it's become a general habit that I continue to follow. I'm led to believe that it makes little to no difference when used in a loop in C#, regardless of data type. Apparently the ++ operator can't be overridden. Nevertheless, I like the appearance more, and don't see a direct downside to it. It did astonish a coworker just a moment ago though; he made the (fairly logical) assumption that my loop would terminate early as a result. He's a self-taught programmer, and apparently never came across the C++ convention. That made me question whether or not the equivalent behavior of pre- and post-fix increment and decrement operators in loops is well known enough. Is it acceptable for me to continue using ++i in looping constructs because of style preference, even though it has no real performance benefit? Or is it likely to cause confusion amongst other programmers? Note: This is assuming the ++i convention is used consistently throughout all code.
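
    Since the worry is readers misjudging the loop, a small demonstration (written in Java here; the operators behave the same way in C# loops) may settle it faster than a style argument: the prefix/postfix choice only matters when the expression's value is used.

        // Both loops visit exactly the same indices; the choice of i++ vs ++i in the
        // third clause of a for statement has no effect on when the loop terminates.
        public class IncrementDemo {
            public static void main(String[] args) {
                StringBuilder a = new StringBuilder();
                for (int i = 0; i < 5; i++)  a.append(i).append(' ');
                StringBuilder b = new StringBuilder();
                for (int i = 0; i < 5; ++i)  b.append(i).append(' ');
                System.out.println(a);              // 0 1 2 3 4
                System.out.println(b);              // 0 1 2 3 4

                // The two forms only differ when the expression's value is used:
                int i = 0;
                System.out.println(i++);            // prints 0, i is now 1
                int j = 0;
                System.out.println(++j);            // prints 1, j is now 1
            }
        }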

  • My application did not show a toast message when the network is not available [closed]

    - by Smart Guy
    My application does not show a toast message when the network is disabled:

        if (position == 2) {
            final ConnectivityManager connMgr =
                    (ConnectivityManager) getSystemService(Context.CONNECTIVITY_SERVICE);
            NetworkInfo activeNetworkInfo = connMgr.getActiveNetworkInfo();
            android.net.NetworkInfo mobile1 = connMgr.getNetworkInfo(ConnectivityManager.TYPE_MOBILE);
            if (activeNetworkInfo == null) {
                Toast.makeText(LoginScreen.this, "No Active Network", Toast.LENGTH_LONG).show();
            } else {
                if (activeNetworkInfo.isConnected()) {
                    btnLogin.setOnClickListener(new OnClickListener() {
                        public void onClick(View view) {
                            String pinemptycheck = pin.getText().toString();
                            String mobileemptycheck = mobile.getText().toString();
                            if (pinemptycheck.trim().equals("") || mobileemptycheck.trim().equals("")) {
                                Toast.makeText(getApplicationContext(),
                                        "Please Enter Correct Information", Toast.LENGTH_LONG).show();
                            } else {
                                showProgress();
                                postLoginData();
                            }
                        }
                    });
                } else if (activeNetworkInfo.isConnectedOrConnecting()) {
                    Toast.makeText(LoginScreen.this, "network is Connecting", Toast.LENGTH_LONG).show();
                } else if (mobile1.isAvailable()) {
                    btnLogin.setOnClickListener(new OnClickListener() {
                        public void onClick(View view) {
                            showProgress();
                            postLoginData();
                        }
                    });
                } else if (!mobile1.isAvailable()) {
                    Toast.makeText(LoginScreen.this, "No other Connection Found ", Toast.LENGTH_LONG).show();
                    btnLogin.setOnClickListener(new OnClickListener() {
                        public void onClick(View v) {
                            Toast.makeText(LoginScreen.this, " No other Connection Found",
                                    Toast.LENGTH_LONG).show();
                        }
                    });
                }
            }
        }
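
    For comparison, here is a minimal sketch under the same assumptions as the question (a LoginScreen activity with btnLogin, pin and mobile fields, plus the showProgress() and postLoginData() helpers). It moves the connectivity check into the click handler, so the "no network" toast fires at the moment the user taps Login rather than depending on which listener happened to be registered when the screen was built.

        // Assumed imports: android.content.Context, android.net.ConnectivityManager,
        // android.net.NetworkInfo, android.view.View, android.widget.Toast.
        // These methods are meant to live inside the LoginScreen activity.

        private boolean isNetworkAvailable() {
            ConnectivityManager cm = (ConnectivityManager) getSystemService(Context.CONNECTIVITY_SERVICE);
            NetworkInfo active = cm.getActiveNetworkInfo();
            return active != null && active.isConnected();
        }

        private void setUpLoginButton() {
            btnLogin.setOnClickListener(new View.OnClickListener() {
                @Override public void onClick(View view) {
                    if (!isNetworkAvailable()) {
                        // checked at click time, so this toast always appears when offline
                        Toast.makeText(LoginScreen.this, "No active network", Toast.LENGTH_LONG).show();
                        return;
                    }
                    if (pin.getText().toString().trim().isEmpty()
                            || mobile.getText().toString().trim().isEmpty()) {
                        Toast.makeText(LoginScreen.this, "Please enter correct information",
                                Toast.LENGTH_LONG).show();
                        return;
                    }
                    showProgress();
                    postLoginData();
                }
            });
        }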

  • How to Implement Project Type "Copy", "Move", "Rename", and "Delete"

    - by Geertjan
    You've followed the NetBeans Project Type Tutorial and now you'd like to let the user copy, move, rename, and delete the projects conforming to your project type. When they right-click a project, they should see the relevant menu items and those menu items should provide dialogs for user interaction, followed by event handling code to deal with the current operation. Right now, at the end of the tutorial, the "Copy" and "Delete" menu items are present but disabled, while the "Move" and "Rename" menu items are absent: The NetBeans Project API provides a built-in mechanism out of the box that you can leverage for project-level "Copy", "Move", "Rename", and "Delete" actions. All the functionality is there for you to use, while all that you need to do is a bit of enablement and configuration, which is described below. To get started, read the following from the NetBeans Project API: http://bits.netbeans.org/dev/javadoc/org-netbeans-modules-projectapi/org/netbeans/spi/project/ActionProvider.html http://bits.netbeans.org/dev/javadoc/org-netbeans-modules-projectapi/org/netbeans/spi/project/CopyOperationImplementation.html http://bits.netbeans.org/dev/javadoc/org-netbeans-modules-projectapi/org/netbeans/spi/project/MoveOrRenameOperationImplementation.html http://bits.netbeans.org/dev/javadoc/org-netbeans-modules-projectapi/org/netbeans/spi/project/DeleteOperationImplementation.html Now, let's do some work. For each of the menu items we're interested in, we need to do the following: Provide enablement and invocation handling in an ActionProvider implementation. Provide appropriate OperationImplementation classes. Add the new classes to the Project Lookup. Make the Actions visible on the Project Node. Run the application and verify the Actions work as you'd like. Here we go: Create an ActionProvider. Here you specify the Actions that should be supported, the conditions under which they should be enabled, and what should happen when they're invoked, using lots of default code that lets you reuse the functionality provided by the NetBeans Project API: class CustomerActionProvider implements ActionProvider { @Override public String[] getSupportedActions() { return new String[]{ ActionProvider.COMMAND_RENAME, ActionProvider.COMMAND_MOVE, ActionProvider.COMMAND_COPY, ActionProvider.COMMAND_DELETE }; } @Override public void invokeAction(String string, Lookup lkp) throws IllegalArgumentException { if (string.equalsIgnoreCase(ActionProvider.COMMAND_RENAME)) { DefaultProjectOperations.performDefaultRenameOperation( CustomerProject.this, ""); } if (string.equalsIgnoreCase(ActionProvider.COMMAND_MOVE)) { DefaultProjectOperations.performDefaultMoveOperation( CustomerProject.this); } if (string.equalsIgnoreCase(ActionProvider.COMMAND_COPY)) { DefaultProjectOperations.performDefaultCopyOperation( CustomerProject.this); } if (string.equalsIgnoreCase(ActionProvider.COMMAND_DELETE)) { DefaultProjectOperations.performDefaultDeleteOperation( CustomerProject.this); } } @Override public boolean isActionEnabled(String command, Lookup lookup) throws IllegalArgumentException { if ((command.equals(ActionProvider.COMMAND_RENAME))) { return true; } else if ((command.equals(ActionProvider.COMMAND_MOVE))) { return true; } else if ((command.equals(ActionProvider.COMMAND_COPY))) { return true; } else if ((command.equals(ActionProvider.COMMAND_DELETE))) { return true; } return false; } } Importantly, to round off this step, add "new CustomerActionProvider()" to the "getLookup" method of the project. 
If you were to run the application right now, all the Actions we're interested in would be enabled (if they are visible, as described in step 4 below) but when you invoke any of them you'd get an error message because each of the DefaultProjectOperations above looks in the Lookup of the Project for the presence of an implementation of a class for handling the operation. That's what we're going to do in the next step. Provide Implementations of Project Operations. For each of our operations, the NetBeans Project API lets you implement classes to handle the operation. The dialogs for interacting with the project are provided by the NetBeans project system, but what happens with the folders and files during the operation can be influenced via the operations. Below are the simplest possible implementations, i.e., here we assume we want nothing special to happen. Each of the below needs to be in the Lookup of the Project in order for the operation invocation to succeed. private final class CustomerProjectMoveOrRenameOperation implements MoveOrRenameOperationImplementation { @Override public List<FileObject> getMetadataFiles() { return new ArrayList<FileObject>(); } @Override public List<FileObject> getDataFiles() { return new ArrayList<FileObject>(); } @Override public void notifyRenaming() throws IOException { } @Override public void notifyRenamed(String nueName) throws IOException { } @Override public void notifyMoving() throws IOException { } @Override public void notifyMoved(Project original, File originalPath, String nueName) throws IOException { } } private final class CustomerProjectCopyOperation implements CopyOperationImplementation { @Override public List<FileObject> getMetadataFiles() { return new ArrayList<FileObject>(); } @Override public List<FileObject> getDataFiles() { return new ArrayList<FileObject>(); } @Override public void notifyCopying() throws IOException { } @Override public void notifyCopied(Project prjct, File file, String string) throws IOException { } } private final class CustomerProjectDeleteOperation implements DeleteOperationImplementation { @Override public List<FileObject> getMetadataFiles() { return new ArrayList<FileObject>(); } @Override public List<FileObject> getDataFiles() { return new ArrayList<FileObject>(); } @Override public void notifyDeleting() throws IOException { } @Override public void notifyDeleted() throws IOException { } } Also make sure to put the above methods into the Project Lookup. Check the Lookup of the Project. The "getLookup()" method of the project should now include the classes you created above, as shown in bold below: @Override public Lookup getLookup() { if (lkp == null) { lkp = Lookups.fixed(new Object[]{ this, new Info(), new CustomerProjectLogicalView(this), new CustomerCustomizerProvider(this), new CustomerActionProvider(), new CustomerProjectMoveOrRenameOperation(), new CustomerProjectCopyOperation(), new CustomerProjectDeleteOperation(), new ReportsSubprojectProvider(this), }); } return lkp; } Make Actions Visible on the Project Node. The NetBeans Project API gives you a number of CommonProjectActions, including for the actions we're dealing with. 
Make sure the items in bold below are in the "getActions" method of the project node: @Override public Action[] getActions(boolean arg0) { return new Action[]{ CommonProjectActions.newFileAction(), CommonProjectActions.copyProjectAction(), CommonProjectActions.moveProjectAction(), CommonProjectActions.renameProjectAction(), CommonProjectActions.deleteProjectAction(), CommonProjectActions.customizeProjectAction(), CommonProjectActions.closeProjectAction() }; } Run the Application. When you run the application, you should see this: Let's now try out the various actions: Copy. When you invoke the Copy action, you'll see the dialog below. Provide a new project name and location and then the copy action is performed when the Copy button is clicked below: The message you see above, in red, might not be relevant to your project type. When you right-click the application and choose Branding, you can find the string in the Resource Bundles tab, as shown below: However, note that the message will be shown in red, no matter what the text is, hence you can really only put something like a warning message there. If you have no text at all, it will also look odd.If the project has subprojects, the copy operation will not automatically copy the subprojects. Take a look here and here for similar more complex scenarios. Move. When you invoke the Move action, the dialog below is shown: Rename. The Rename Project dialog below is shown when you invoke the Rename action: I tried it and both the display name and the folder on disk are changed. Delete. When you invoke the Delete action, you'll see this dialog: The checkbox is not checkable, in the default scenario, and when the dialog above is confirmed, the project is simply closed, i.e., the node hierarchy is removed from the application. However, if you truly want to let the user delete the project on disk, pass the Project to the DeleteOperationImplementation and then add the children of the Project you want to delete to the getDataFiles method: private final class CustomerProjectDeleteOperation implements DeleteOperationImplementation { private final CustomerProject project; private CustomerProjectDeleteOperation(CustomerProject project) { this.project = project; } @Override public List<FileObject> getDataFiles() { List<FileObject> files = new ArrayList<FileObject>(); FileObject[] projectChildren = project.getProjectDirectory().getChildren(); for (FileObject fileObject : projectChildren) { addFile(project.getProjectDirectory(), fileObject.getNameExt(), files); } return files; } private void addFile(FileObject projectDirectory, String fileName, List<FileObject> result) { FileObject file = projectDirectory.getFileObject(fileName); if (file != null) { result.add(file); } } @Override public List<FileObject> getMetadataFiles() { return new ArrayList<FileObject>(); } @Override public void notifyDeleting() throws IOException { } @Override public void notifyDeleted() throws IOException { } } Now the user will be able to check the checkbox, causing the method above to be called in the DeleteOperationImplementation: Hope this answers some questions or at least gets the discussion started. Before asking questions about this topic, please take the steps above and only then attempt to apply them to your own scenario. 
Useful implementations to look at: http://kickjava.com/src/org/netbeans/modules/j2ee/clientproject/AppClientProjectOperations.java.htm https://kenai.com/projects/nbandroid/sources/mercurial/content/project/src/org/netbeans/modules/android/project/AndroidProjectOperations.java

  • Design Pattern for Skipping Steps in a Wizard

    - by Eric J.
    I'm designing a flexible Wizard system that presents a number of screens to complete a task. Some screens may need to be skipped based on answers to prompts on one or more previous screens. The conditions to skip a given screen need to be editable by a non-technical user via a UI. Multiple conditions need only be combined with and. I have an initial design in mind, but it feels inelegant. I wonder if there's a better way to approach this class of problem. Initial Design UI where The first column allows the user to select a question from a previous screen. The second column allows the user to select an operator applicable to the type of question asked. The third column allows the user to enter one or more values depending on the selected operator. Object Model public enum Operations { ... } public class Condition { int QuestionId { get; set; } Operations Operation { get; set; } List<object> Parameters { get; private set; } } List<Condition> pageSkipConditions; Controller Logic bool allConditionsTrue = pageSkipConditions.Count > 0; foreach (Condition c in pageSkipConditions) { allConditionsTrue &= Evaluate(previousAnswers, c); } // ... private bool Evaluate(List<Answers> previousAnswers, Condition c) { switch (c.Operation) { case Operations.StartsWith: // logic for this operation // etc. } }
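
    For what it's worth, here is a minimal Java sketch of the same model, with the Evaluate switch replaced by a map from operator to predicate; the names mirror the question, while the operator set and answer types are illustrative assumptions.

        import java.util.EnumMap;
        import java.util.List;
        import java.util.Map;
        import java.util.function.BiPredicate;

        // Sketch of the skip-condition model: each operator maps to a small predicate,
        // and a page is skipped when every configured condition evaluates to true.
        public final class SkipRules {

            enum Operation { EQUALS, NOT_EQUALS, IS_ONE_OF }

            static final class Condition {
                final int questionId;
                final Operation operation;
                final List<Object> parameters;
                Condition(int questionId, Operation operation, List<Object> parameters) {
                    this.questionId = questionId;
                    this.operation = operation;
                    this.parameters = parameters;
                }
            }

            private static final Map<Operation, BiPredicate<Object, List<Object>>> EVALUATORS =
                    new EnumMap<>(Operation.class);
            static {
                EVALUATORS.put(Operation.EQUALS,     (answer, params) -> params.get(0).equals(answer));
                EVALUATORS.put(Operation.NOT_EQUALS, (answer, params) -> !params.get(0).equals(answer));
                EVALUATORS.put(Operation.IS_ONE_OF,  (answer, params) -> params.contains(answer));
            }

            // previousAnswers maps questionId -> the answer given on an earlier screen;
            // conditions are combined with "and" only, as the requirement above states.
            public static boolean shouldSkip(Map<Integer, Object> previousAnswers, List<Condition> conditions) {
                return !conditions.isEmpty() && conditions.stream().allMatch(c ->
                        EVALUATORS.get(c.operation).test(previousAnswers.get(c.questionId), c.parameters));
            }

            public static void main(String[] args) {
                List<Condition> skipPage3 = List.of(
                        new Condition(1, Operation.EQUALS, List.of("No")),
                        new Condition(2, Operation.IS_ONE_OF, List.of("Basic", "Trial")));
                Map<Integer, Object> answers = Map.of(1, "No", 2, "Trial");
                System.out.println(shouldSkip(answers, skipPage3));   // true, both conditions hold
            }
        }

    Keeping the and-only combination rule in one place (the allMatch call) also keeps the non-technical UI simple: every row the user adds just appends one Condition.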

  • Creating a Predicate Builder extension method

    - by Rippo
    I have a Kendo UI Grid that I am currently allowing filtering on multiple columns. I am wondering if there is a an alternative approach removing the outer switch statement? Basically I want to able to create an extension method so I can filter on a IQueryable<T> and I want to drop the outer case statement so I don't have to switch column names. private static IQueryable<Contact> FilterContactList(FilterDescriptor filter, IQueryable<Contact> contactList) { switch (filter.Member) { case "Name": switch (filter.Operator) { case FilterOperator.StartsWith: contactList = contactList.Where(w => w.Firstname.StartsWith(filter.Value.ToString()) || w.Lastname.StartsWith(filter.Value.ToString()) || (w.Firstname + " " + w.Lastname).StartsWith(filter.Value.ToString())); break; case FilterOperator.Contains: contactList = contactList.Where(w => w.Firstname.Contains(filter.Value.ToString()) || w.Lastname.Contains(filter.Value.ToString()) || (w.Firstname + " " + w.Lastname).Contains( filter.Value.ToString())); break; case FilterOperator.IsEqualTo: contactList = contactList.Where(w => w.Firstname == filter.Value.ToString() || w.Lastname == filter.Value.ToString() || (w.Firstname + " " + w.Lastname) == filter.Value.ToString()); break; } break; case "Company": switch (filter.Operator) { case FilterOperator.StartsWith: contactList = contactList.Where(w => w.Company.StartsWith(filter.Value.ToString())); break; case FilterOperator.Contains: contactList = contactList.Where(w => w.Company.Contains(filter.Value.ToString())); break; case FilterOperator.IsEqualTo: contactList = contactList.Where(w => w.Company == filter.Value.ToString()); break; } break; } return contactList; } Some additional information, I am using NHibernate Linq. Also another problem is that the "Name" column on my grid is actually "Firstname" + " " + "LastName" on my contact entity. We can also assume that all filterable columns will be strings.
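
    The LINQ/NHibernate expression-tree side of this doesn't translate directly, but the shape of the refactor does: replace the nested switch statements with two lookup tables, one from column name to a value extractor and one from operator to a string predicate. Below is a hedged, in-memory Java sketch of that shape; all names are illustrative.

        import java.util.List;
        import java.util.Map;
        import java.util.function.BiPredicate;
        import java.util.function.Function;
        import java.util.stream.Collectors;

        // Sketch: two lookups instead of nested switches - column name to value
        // extractor, and operator to string predicate.
        public final class ContactFilter {

            enum Operator { STARTS_WITH, CONTAINS, IS_EQUAL_TO }

            record Contact(String firstname, String lastname, String company) {}
            record FilterDescriptor(String member, Operator operator, String value) {}

            private static final Map<String, Function<Contact, String>> MEMBERS =
                    Map.<String, Function<Contact, String>>of(
                            "Name",    c -> c.firstname() + " " + c.lastname(),
                            "Company", Contact::company);

            private static final Map<Operator, BiPredicate<String, String>> OPERATORS =
                    Map.<Operator, BiPredicate<String, String>>of(
                            Operator.STARTS_WITH, String::startsWith,
                            Operator.CONTAINS,    String::contains,
                            Operator.IS_EQUAL_TO, String::equals);

            public static List<Contact> filter(List<Contact> contacts, FilterDescriptor filter) {
                Function<Contact, String> extract = MEMBERS.get(filter.member());
                BiPredicate<String, String> test  = OPERATORS.get(filter.operator());
                return contacts.stream()
                        .filter(c -> test.test(extract.apply(c), filter.value()))
                        .collect(Collectors.toList());
            }

            public static void main(String[] args) {
                List<Contact> contacts = List.of(
                        new Contact("Ada", "Lovelace", "Analytical Engines Ltd"),
                        new Contact("Grace", "Hopper", "Navy"));
                System.out.println(filter(contacts, new FilterDescriptor("Name", Operator.CONTAINS, "Love")));
            }
        }

    In the actual scenario the predicate still has to stay an expression tree for NHibernate Linq to translate it into SQL, so the same table-driven idea would be applied to building expressions rather than to filtering in memory.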

  • Java Program help [migrated]

    - by georgetheevilman
    Okay I have a really annoying error. Its coming from my retainAll method. The problem is that I am outputting 1,3,5 in ints at the end, but I need 1,3,5,7,9. Here is the code below for the MySet and driver classes public class MySetTester { public static void main(String[]args) { MySet<String> strings = new MySet<String>(); strings.add("Hey!"); strings.add("Hey!"); strings.add("Hey!"); strings.add("Hey!"); strings.add("Hey!"); strings.add("Listen!"); strings.add("Listen!"); strings.add("Sorry, I couldn't resist."); strings.add("Sorry, I couldn't resist."); strings.add("(you know you would if you could)"); System.out.println("Testing add:\n"); System.out.println("Your size: " + strings.size() + ", contains(Sorry): " + strings.contains("Sorry, I couldn't resist.")); System.out.println("Exp. size: 4, contains(Sorry): true\n"); MySet<String> moreStrings = new MySet<String>(); moreStrings.add("Sorry, I couldn't resist."); moreStrings.add("(you know you would if you could)"); strings.removeAll(moreStrings); System.out.println("Testing remove and removeAll:\n"); System.out.println("Your size: " + strings.size() + ", contains(Sorry): " + strings.contains("Sorry, I couldn't resist.")); System.out.println("Exp. size: 2, contains(Sorry): false\n"); MySet<Integer> ints = new MySet<Integer>(); for (int i = 0; i < 100; i++) { ints.add(i); } System.out.println("Your size: " + ints.size()); System.out.println("Exp. size: 100\n"); for (int i = 0; i < 100; i += 2) { ints.remove(i); } System.out.println("Your size: " + ints.size()); System.out.println("Exp. size: 50\n"); MySet<Integer> zeroThroughNine = new MySet<Integer>(); for (int i = 0; i < 10; i++) { zeroThroughNine.add(i); } ints.retainAll(zeroThroughNine); System.out.println("ints should now only retain odd numbers" + " 0 through 10\n"); System.out.println("Testing your iterator:\n"); for (Integer i : ints) { System.out.println(i); } System.out.println("\nExpected: \n\n1 \n3 \n5 \n7 \n9\n"); System.out.println("Yours:"); for (String s : strings) { System.out.println(s); } System.out.println("\nExpected: \nHey! \nListen!"); strings.clear(); System.out.println("\nClearing your set...\n"); System.out.println("Your set is empty: " + strings.isEmpty()); System.out.println("Exp. set is empty: true"); } } And here is the main code. But still read the top part because that's where my examples are. import java.util.Set; import java.util.Collection; import java.lang.Iterable; import java.util.Iterator; import java.util.Arrays; import java.lang.reflect.Array; public class MySet implements Set, Iterable { // instance variables - replace the example below with your own private E[] backingArray; private int numElements; /** * Constructor for objects of class MySet */ public MySet() { backingArray=(E[]) new Object[5]; numElements=0; } public boolean add(E e){ for(Object elem:backingArray){ if (elem==null ? e==null : elem.equals(e)){ return false; } } if(numElements==backingArray.length){ E[] newArray=Arrays.copyOf(backingArray,backingArray.length*2); newArray[numElements]=e; numElements=numElements+1; backingArray=newArray; return true; } else{ backingArray[numElements]=e; numElements=numElements+1; return true; } } public boolean addAll(Collection<? 
extends E> c){ for(E elem:c){ this.add(elem); } return true; } public void clear(){ E[] newArray=(E[])new Object[backingArray.length]; numElements=0; backingArray=newArray; } public boolean equals(Object o){ if(o instanceof Set &&(((Set)o).size()==numElements)){ for(E elem:(Set<E>)o){ if (this.contains(o)==false){ return false; } return true; } } return false; } public boolean contains(Object o){ for(E backingElem:backingArray){ if (o!=null && o.equals(backingElem)){ return true; } } return false; } public boolean containsAll(Collection<?> c){ for(E elem:(Set<E>)c){ if(!(this.contains(elem))){ return false; } } return true; } public int hashCode(){ int sum=0; for(E elem:backingArray){ if(elem!=null){ sum=sum+elem.hashCode(); } } return sum; } public boolean isEmpty(){ if(numElements==0){ return true; } else{ return false; } } public boolean remove(Object o){ int i=0; for(Object elem:backingArray){ if(o!=null && o.equals(elem)){ backingArray[i]=null; numElements=numElements-1; E[] newArray=Arrays.copyOf(backingArray,backingArray.length-1); return true; } i=i+1; } return false; } public boolean removeAll(Collection<?> c){ for(Object elem:c){ this.remove(elem); } return true; } public boolean retainAll(Collection<?> c){ MySet<E> removalArray=new MySet<E>(); for(E arrayElem:backingArray){ if(arrayElem!= null && !(c.contains(arrayElem))){ this.remove(arrayElem); } } return false; } public int size(){ return numElements; } public <T> T[] toArray(T[] a) throws ArrayStoreException,NullPointerException{ for(int i=0;i<numElements;i++){ a[i]=(T)backingArray[i]; } for(int j=numElements;j<a.length;j++){ a[j]=null; } return a; } public Object[] toArray(){ Object[] newArray=new Object[numElements]; for(int i=0;i<numElements;i++){ newArray[i]=backingArray[i]; } return newArray; } public Iterator<E> iterator(){ setIterator iterator=new setIterator(); return iterator; } private class setIterator implements Iterator<E>{ private int currIndex; private E lastElement; public setIterator(){ currIndex=0; lastElement=null; } public boolean hasNext(){ while(currIndex<=numElements && backingArray[currIndex]==null){ currIndex=currIndex+1; } if (currIndex<=numElements){ return true; } return false; } public E next(){ E element=backingArray[currIndex]; currIndex=currIndex+1; lastElement=element; return element; } public void remove() throws UnsupportedOperationException,IllegalStateException{ if(lastElement!=null){ MySet.this.remove((Object)lastElement); numElements=numElements-1; } else{ throw new IllegalStateException(); } } } } I've been able to reduce the problems, but otherwise this thing is still causing problems.
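
    From reading the code, one likely culprit is that remove() leaves a null gap in backingArray while the iterator only scans indices up to numElements: after retainAll the five survivors sit at indices 1, 3, 5, 7 and 9, but numElements is 5, so 7 and 9 are never reached. Under that reading, a compacting remove() plus a snapshot-based retainAll() for the same class would look roughly like this (both methods are meant to drop into MySet and use its existing backingArray and numElements fields):

        // Sketch of a compacting remove: instead of leaving a null gap, shift the later
        // elements left so the first numElements slots are always the live elements.
        public boolean remove(Object o) {
            for (int i = 0; i < numElements; i++) {
                Object elem = backingArray[i];
                if (o == null ? elem == null : o.equals(elem)) {
                    // shift everything after i one slot to the left
                    System.arraycopy(backingArray, i + 1, backingArray, i, numElements - i - 1);
                    backingArray[numElements - 1] = null;   // clear the vacated slot
                    numElements = numElements - 1;
                    return true;
                }
            }
            return false;
        }

        // Iterate over a snapshot so that compaction inside remove() cannot skip elements.
        public boolean retainAll(Collection<?> c) {
            boolean changed = false;
            for (Object elem : this.toArray()) {
                if (!c.contains(elem)) {
                    changed |= this.remove(elem);
                }
            }
            return changed;
        }

    With the array kept dense, the iterator's hasNext() can reduce to a plain currIndex < numElements check, and toArray() returns exactly the live elements.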

  • How to swap or move 2 strings in an array? [on hold]

    - by Wisnu Khazefa
    I have a need to convert .csv file to .dat file. In my problem, there are value pairs, with a name attribute (called Fund) and corresponding numeric value. If the input file has a pair whose value is 0, then that pair (Fund and value) is dropped. The output file should have only those pairs (Fund and value) where the value is non-zero. Here is the prototype of my code. public static void Check_Fund(){ String header = "Text1,Text2,Text3,FUND_UALFND_1,FUND_UALPRC_1,FUND_UALFND_2," +"FUND_UALPRC_2,FUND_UALFND_3,FUND_UALPRC_3,FUND_UALFND_4,FUND_UALPRC_4,FUND_UALFND_5,FUND_UALPRC_5,Text4,Text5,Text6,Text7"; String text = "ABC;CDE;EFG;PRMF;0;PRFF;50;PREF;0;PRCF;0;PRMP;50;TAHU;;BAKWAN;SINGKONG"; String[] head; String[] value; String showText = ""; head = header.split(","); value = text.split(";"); String regex = "\\d+"; String[] fund = {"PREF","PRMF","PRFF","PRCF","PRMP","PDFF","PSEF","PSCB","PSMF","PRGC","PREP"}; for(int i = 0; i < value.length; i++){ for(int j=0;j < fund.length; j++){ if(value[i].equals(fund[j]) && value[i+1].matches(regex)){ if(value[i+1].equals("0")){ value[i] = ""; value[i+1] = ""; } } } showText = showText + head[i] +":" + value[i] + System.lineSeparator(); } System.out.println(showText ); } Expected Result Input: FUND_UALFND_1:PRMF FUND_UALPRC_1:0 FUND_UALFND_2:PRFF FUND_UALPRC_2:50 FUND_UALFND_3:PREF FUND_UALPRC_3:0 FUND_UALFND_4:PRCF FUND_UALPRC_4:0 FUND_UALFND_5:PRMP FUND_UALPRC_5:50 Output: FUND_UALFND_1:PRFF FUND_UALPRC_1:50 FUND_UALFND_2:PRMP FUND_UALPRC_2:50 FUND_UALFND_0: FUND_UALPRC_0: FUND_UALFND_0: FUND_UALPRC_0: FUND_UALFND_0: FUND_UALPRC_0:
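
    One way to get the expected output is to collect the non-zero (fund, value) pairs first and then write them back against renumbered headers. The sketch below is keyed to the sample line in the question; the pair positions (indices 3 to 12) are an assumption taken from that sample.

        import java.util.ArrayList;
        import java.util.List;

        // Sketch: collect the non-zero (fund, value) pairs, then print them back out
        // against renumbered FUND_UALFND_n / FUND_UALPRC_n headers, labelling the
        // unused slots with index 0 as in the expected output.
        public class FundCompactor {
            public static void main(String[] args) {
                String text = "ABC;CDE;EFG;PRMF;0;PRFF;50;PREF;0;PRCF;0;PRMP;50;TAHU;;BAKWAN;SINGKONG";
                String[] value = text.split(";", -1);   // -1 keeps trailing empty fields

                List<String> funds = new ArrayList<>();
                List<String> prices = new ArrayList<>();
                // fund/price pairs occupy positions 3..12 (five pairs) in this layout
                for (int i = 3; i <= 11; i += 2) {
                    String fund = value[i];
                    String price = value[i + 1];
                    if (price.matches("\\d+") && !price.equals("0")) {
                        funds.add(fund);
                        prices.add(price);
                    }
                }

                StringBuilder out = new StringBuilder();
                for (int slot = 1; slot <= 5; slot++) {
                    String fund  = slot <= funds.size()  ? funds.get(slot - 1)  : "";
                    String price = slot <= prices.size() ? prices.get(slot - 1) : "";
                    int label = slot <= funds.size() ? slot : 0;
                    out.append("FUND_UALFND_").append(label).append(':').append(fund).append(System.lineSeparator());
                    out.append("FUND_UALPRC_").append(label).append(':').append(price).append(System.lineSeparator());
                }
                System.out.print(out);
            }
        }

    On the sample input this prints PRFF/50 and PRMP/50 in slots 1 and 2 and empty "_0" slots for the rest, matching the expected output shown above.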

  • Scala parser combinator runs out of memory

    - by user3217013
    I wrote the following parser in Scala using the parser combinators: import scala.util.parsing.combinator._ import scala.collection.Map import scala.io.StdIn object Keywords { val Define = "define" val True = "true" val False = "false" val If = "if" val Then = "then" val Else = "else" val Return = "return" val Pass = "pass" val Conj = ";" val OpenParen = "(" val CloseParen = ")" val OpenBrack = "{" val CloseBrack = "}" val Comma = "," val Plus = "+" val Minus = "-" val Times = "*" val Divide = "/" val Pow = "**" val And = "&&" val Or = "||" val Xor = "^^" val Not = "!" val Equals = "==" val NotEquals = "!=" val Assignment = "=" } //--------------------------------------------------------------------------------- sealed abstract class Op case object Plus extends Op case object Minus extends Op case object Times extends Op case object Divide extends Op case object Pow extends Op case object And extends Op case object Or extends Op case object Xor extends Op case object Not extends Op case object Equals extends Op case object NotEquals extends Op case object Assignment extends Op //--------------------------------------------------------------------------------- sealed abstract class Term case object TrueTerm extends Term case object FalseTerm extends Term case class FloatTerm(value : Float) extends Term case class StringTerm(value : String) extends Term case class Identifier(name : String) extends Term //--------------------------------------------------------------------------------- sealed abstract class Expression case class TermExp(term : Term) extends Expression case class UnaryOp(op : Op, exp : Expression) extends Expression case class BinaryOp(op : Op, left : Expression, right : Expression) extends Expression case class FuncApp(funcName : Term, args : List[Expression]) extends Expression //--------------------------------------------------------------------------------- sealed abstract class Statement case class ExpressionStatement(exp : Expression) extends Statement case class Pass() extends Statement case class Return(value : Expression) extends Statement case class AssignmentVar(variable : Term, exp : Expression) extends Statement case class IfThenElse(testBody : Expression, thenBody : Statement, elseBody : Statement) extends Statement case class Conjunction(left : Statement, right : Statement) extends Statement case class AssignmentFunc(functionName : Term, args : List[Term], body : Statement) extends Statement //--------------------------------------------------------------------------------- class myParser extends JavaTokenParsers { val keywordMap : Map[String, Op] = Map( Keywords.Plus -> Plus, Keywords.Minus -> Minus, Keywords.Times -> Times, Keywords.Divide -> Divide, Keywords.Pow -> Pow, Keywords.And -> And, Keywords.Or -> Or, Keywords.Xor -> Xor, Keywords.Not -> Not, Keywords.Equals -> Equals, Keywords.NotEquals -> NotEquals, Keywords.Assignment -> Assignment ) def floatTerm : Parser[Term] = decimalNumber ^^ { case x => FloatTerm( x.toFloat ) } def stringTerm : Parser[Term] = stringLiteral ^^ { case str => StringTerm(str) } def identifier : Parser[Term] = ident ^^ { case value => Identifier(value) } def boolTerm : Parser[Term] = (Keywords.True | Keywords.False) ^^ { case Keywords.True => TrueTerm case Keywords.False => FalseTerm } def simpleTerm : Parser[Expression] = (boolTerm | floatTerm | stringTerm) ^^ { case term => TermExp(term) } def argument = expression def arguments_aux : Parser[List[Expression]] = (argument <~ Keywords.Comma) ~ arguments ^^ { case arg ~ argList 
=> arg :: argList } def arguments = arguments_aux | { argument ^^ { case arg => List(arg) } } def funcAppArgs : Parser[List[Expression]] = funcEmptyArgs | ( Keywords.OpenParen ~> arguments <~ Keywords.CloseParen ^^ { case args => args.foldRight(List[Expression]()) ( (a,b) => a :: b ) } ) def funcApp = identifier ~ funcAppArgs ^^ { case funcName ~ argList => FuncApp(funcName, argList) } def variableTerm : Parser[Expression] = identifier ^^ { case name => TermExp(name) } def atomic_expression = simpleTerm | funcApp | variableTerm def paren_expression : Parser[Expression] = Keywords.OpenParen ~> expression <~ Keywords.CloseParen def unary_operation : Parser[String] = Keywords.Not def unary_expression : Parser[Expression] = operation(0) ~ expression(0) ^^ { case op ~ exp => UnaryOp(keywordMap(op), exp) } def operation(precedence : Int) : Parser[String] = precedence match { case 0 => Keywords.Not case 1 => Keywords.Pow case 2 => Keywords.Times | Keywords.Divide | Keywords.And case 3 => Keywords.Plus | Keywords.Minus | Keywords.Or | Keywords.Xor case 4 => Keywords.Equals | Keywords.NotEquals case _ => throw new Exception("No operations with this precedence.") } def binary_expression(precedence : Int) : Parser[Expression] = precedence match { case 0 => throw new Exception("No operation with zero precedence.") case n => (expression (n-1)) ~ operation(n) ~ (expression (n)) ^^ { case left ~ op ~ right => BinaryOp(keywordMap(op), left, right) } } def expression(precedence : Int) : Parser[Expression] = precedence match { case 0 => unary_expression | paren_expression | atomic_expression case n => binary_expression(n) | expression(n-1) } def expression : Parser[Expression] = expression(4) def expressionStmt : Parser[Statement] = expression ^^ { case exp => ExpressionStatement(exp) } def assignment : Parser[Statement] = (identifier <~ Keywords.Assignment) ~ expression ^^ { case varName ~ exp => AssignmentVar(varName, exp) } def ifthen : Parser[Statement] = ((Keywords.If ~ Keywords.OpenParen) ~> expression <~ Keywords.CloseParen) ~ ((Keywords.Then ~ Keywords.OpenBrack) ~> statements <~ Keywords.CloseBrack) ^^ { case ifBody ~ thenBody => IfThenElse(ifBody, thenBody, Pass()) } def ifthenelse : Parser[Statement] = ((Keywords.If ~ Keywords.OpenParen) ~> expression <~ Keywords.CloseParen) ~ ((Keywords.Then ~ Keywords.OpenBrack) ~> statements <~ Keywords.CloseBrack) ~ ((Keywords.Else ~ Keywords.OpenBrack) ~> statements <~ Keywords.CloseBrack) ^^ { case ifBody ~ thenBody ~ elseBody => IfThenElse(ifBody, thenBody, elseBody) } def pass : Parser[Statement] = Keywords.Pass ^^^ { Pass() } def returnStmt : Parser[Statement] = Keywords.Return ~> expression ^^ { case exp => Return(exp) } def statement : Parser[Statement] = ((pass | returnStmt | assignment | expressionStmt) <~ Keywords.Conj) | ifthenelse | ifthen def statements_aux : Parser[Statement] = statement ~ statements ^^ { case st ~ sts => Conjunction(st, sts) } def statements : Parser[Statement] = statements_aux | statement def funcDefBody : Parser[Statement] = Keywords.OpenBrack ~> statements <~ Keywords.CloseBrack def funcEmptyArgs = Keywords.OpenParen ~ Keywords.CloseParen ^^^ { List() } def funcDefArgs : Parser[List[Term]] = funcEmptyArgs | Keywords.OpenParen ~> repsep(identifier, Keywords.Comma) <~ Keywords.CloseParen ^^ { case args => args.foldRight(List[Term]()) ( (a,b) => a :: b ) } def funcDef : Parser[Statement] = (Keywords.Define ~> identifier) ~ funcDefArgs ~ funcDefBody ^^ { case funcName ~ funcArgs ~ body => AssignmentFunc(funcName, funcArgs, body) 
} def funcDefAndStatement : Parser[Statement] = funcDef | statement def funcDefAndStatements_aux : Parser[Statement] = funcDefAndStatement ~ funcDefAndStatements ^^ { case stmt ~ stmts => Conjunction(stmt, stmts) } def funcDefAndStatements : Parser[Statement] = funcDefAndStatements_aux | funcDefAndStatement def parseProgram : Parser[Statement] = funcDefAndStatements def eval(input : String) = { parseAll(parseProgram, input) match { case Success(result, _) => result case Failure(m, _) => println(m) case _ => println("") } } } object Parser { def main(args : Array[String]) { val x : myParser = new myParser() println(args(0)) val lines = scala.io.Source.fromFile(args(0)).mkString println(x.eval(lines)) } } The problem is, when I run the parser on the following example it works fine: define foo(a) { if (!h(IM) && a) then { return 0; } if (a() && !h()) then { return 0; } } But when I add three characters in the first if statement, it runs out of memory. This is absolutely blowing my mind. Can anyone help? (I suspect it has to do with repsep, but I am not sure.) Here is the version that runs out of memory: define foo(a) { if (!h(IM) && a(1)) then { return 0; } if (a() && !h()) then { return 0; } } EDIT: Any constructive comments about my Scala style are also appreciated.

    Read the article

  • Sorting in Hash Maps in Java

    - by Crystal
    I'm trying to get familiar with Collections. I have a String which is my key, email address, and a Person object (firstName, lastName, telephone, email). I read in the Java collections chapter on Sun's webpages that if you had a HashMap and wanted it sorted, you could use a TreeMap. How does this sort work? Is it based on the compareTo() method you have in your Person class? I overrode the compareTo() method in my Person class to sort by lastName. But it isn't working properly and was wondering if I have the right idea or not. getSortedListByLastName at the bottom of this code is where I try to convert to a TreeMap. Also, if this is the correct way to do it, or one of the correct ways to do it, how do I then sort by firstName since my compareTo() is comparing by lastName. import java.util.*; public class OrganizeThis { /** Add a person to the organizer @param p A person object */ public void add(Person p) { staff.put(p.getEmail(), p); //System.out.println("Person " + p + "added"); } /** * Remove a Person from the organizer. * * @param email The email of the person to be removed. */ public void remove(String email) { staff.remove(email); } /** * Remove all contacts from the organizer. * */ public void empty() { staff.clear(); } /** * Find the person stored in the organizer with the email address. * Note, each person will have a unique email address. * * @param email The person email address you are looking for. * */ public Person findByEmail(String email) { Person aPerson = staff.get(email); return aPerson; } /** * Find all persons stored in the organizer with the same last name. * Note, there can be multiple persons with the same last name. * * @param lastName The last name of the persons your are looking for. * */ public Person[] find(String lastName) { ArrayList<Person> names = new ArrayList<Person>(); for (Person s : staff.values()) { if (s.getLastName() == lastName) { names.add(s); } } // Convert ArrayList back to Array Person nameArray[] = new Person[names.size()]; names.toArray(nameArray); return nameArray; } /** * Return all the contact from the orgnizer in * an array sorted by last name. * * @return An array of Person objects. 
* */ public Person[] getSortedListByLastName() { Map<String, Person> sorted = new TreeMap<String, Person>(staff); ArrayList<Person> sortedArrayList = new ArrayList<Person>(); for (Person s: sorted.values()) { sortedArrayList.add(s); } Person sortedArray[] = new Person[sortedArrayList.size()]; sortedArrayList.toArray(sortedArray); return sortedArray; } private Map<String, Person> staff = new HashMap<String, Person>(); public static void main(String[] args) { OrganizeThis testObj = new OrganizeThis(); Person person1 = new Person("J", "W", "111-222-3333", "[email protected]"); Person person2 = new Person("K", "W", "345-678-9999", "[email protected]"); Person person3 = new Person("Phoebe", "Wang", "322-111-3333", "[email protected]"); Person person4 = new Person("Nermal", "Johnson", "322-342-5555", "[email protected]"); Person person5 = new Person("Apple", "Banana", "123-456-1111", "[email protected]"); testObj.add(person1); testObj.add(person2); testObj.add(person3); testObj.add(person4); testObj.add(person5); System.out.println(testObj.findByEmail("[email protected]")); System.out.println("------------" + '\n'); Person a[] = testObj.find("W"); for (Person p : a) System.out.println(p); System.out.println("------------" + '\n'); a = testObj.find("W"); for (Person p : a) System.out.println(p); System.out.println("SORTED" + '\n'); a = testObj.getSortedListByLastName(); for (Person b : a) { System.out.println(b); } } } Person class: public class Person implements Comparable { String firstName; String lastName; String telephone; String email; public Person() { firstName = ""; lastName = ""; telephone = ""; email = ""; } public Person(String firstName) { this.firstName = firstName; } public Person(String firstName, String lastName, String telephone, String email) { this.firstName = firstName; this.lastName = lastName; this.telephone = telephone; this.email = email; } public String getFirstName() { return firstName; } public void setFirstName(String firstName) { this.firstName = firstName; } public String getLastName() { return lastName; } public void setLastName(String lastName) { this.lastName = lastName; } public String getTelephone() { return telephone; } public void setTelephone(String telephone) { this.telephone = telephone; } public String getEmail() { return email; } public void setEmail(String email) { this.email = email; } public int compareTo(Object o) { String s1 = this.lastName + this.firstName; String s2 = ((Person) o).lastName + ((Person) o).firstName; return s1.compareTo(s2); } public boolean equals(Object otherObject) { // a quick test to see if the objects are identical if (this == otherObject) { return true; } // must return false if the explicit parameter is null if (otherObject == null) { return false; } if (!(otherObject instanceof Person)) { return false; } Person other = (Person) otherObject; return firstName.equals(other.firstName) && lastName.equals(other.lastName) && telephone.equals(other.telephone) && email.equals(other.email); } public int hashCode() { return this.email.toLowerCase().hashCode(); } public String toString() { return getClass().getName() + "[firstName = " + firstName + '\n' + "lastName = " + lastName + '\n' + "telephone = " + telephone + '\n' + "email = " + email + "]"; } }

    Read the article

  • Why can't I assign a scalar value to a class using shorthand, but instead declare it first, then set

    - by ~delan-azabani
    I am writing a UTF-8 library for C++ as an exercise as this is my first real-world C++ code. So far, I've implemented concatenation, character indexing, parsing and encoding UTF-8 in a class called "ustring". It looks like it's working, but two (seemingly equivalent) ways of declaring a new ustring behave differently. The first way: ustring a; a = "test"; works, and the overloaded "=" operator parses the string into the class (which stores the Unicode strings as an dynamically allocated int pointer). However, the following does not work: ustring a = "test"; because I get the following error: test.cpp:4: error: conversion from ‘const char [5]’ to non-scalar type ‘ustring’ requested Is there a way to workaround this error? It probably is a problem with my code, though. The following is what I've written so far for the library: #include <cstdlib> #include <cstring> class ustring { int * values; long len; public: long length() { return len; } ustring * operator=(ustring input) { len = input.len; values = (int *) malloc(sizeof(int) * len); for (long i = 0; i < len; i++) values[i] = input.values[i]; return this; } ustring * operator=(char input[]) { len = sizeof(input); values = (int *) malloc(0); long s = 0; // s = number of parsed chars int a, b, c, d, contNeed = 0, cont = 0; for (long i = 0; i < sizeof(input); i++) if (input[i] < 0x80) { // ASCII, direct copy (00-7f) values = (int *) realloc(values, sizeof(int) * ++s); values[s - 1] = input[i]; } else if (input[i] < 0xc0) { // this is a continuation (80-bf) if (cont == contNeed) { // no need for continuation, use U+fffd values = (int *) realloc(values, sizeof(int) * ++s); values[s - 1] = 0xfffd; } cont = cont + 1; values[s - 1] = values[s - 1] | ((input[i] & 0x3f) << ((contNeed - cont) * 6)); if (cont == contNeed) cont = contNeed = 0; } else if (input[i] < 0xc2) { // invalid byte, use U+fffd (c0-c1) values = (int *) realloc(values, sizeof(int) * ++s); values[s - 1] = 0xfffd; } else if (input[i] < 0xe0) { // start of 2-byte sequence (c2-df) contNeed = 1; values = (int *) realloc(values, sizeof(int) * ++s); values[s - 1] = (input[i] & 0x1f) << 6; } else if (input[i] < 0xf0) { // start of 3-byte sequence (e0-ef) contNeed = 2; values = (int *) realloc(values, sizeof(int) * ++s); values[s - 1] = (input[i] & 0x0f) << 12; } else if (input[i] < 0xf5) { // start of 4-byte sequence (f0-f4) contNeed = 3; values = (int *) realloc(values, sizeof(int) * ++s); values[s - 1] = (input[i] & 0x07) << 18; } else { // restricted or invalid (f5-ff) values = (int *) realloc(values, sizeof(int) * ++s); values[s - 1] = 0xfffd; } return this; } ustring operator+(ustring input) { ustring result; result.len = len + input.len; result.values = (int *) malloc(sizeof(int) * result.len); for (long i = 0; i < len; i++) result.values[i] = values[i]; for (long i = 0; i < input.len; i++) result.values[i + len] = input.values[i]; return result; } ustring operator[](long index) { ustring result; result.len = 1; result.values = (int *) malloc(sizeof(int)); result.values[0] = values[index]; return result; } char * encode() { char * r = (char *) malloc(0); long s = 0; for (long i = 0; i < len; i++) { if (values[i] < 0x80) r = (char *) realloc(r, s + 1), r[s + 0] = char(values[i]), s += 1; else if (values[i] < 0x800) r = (char *) realloc(r, s + 2), r[s + 0] = char(values[i] >> 6 | 0x60), r[s + 1] = char(values[i] & 0x3f | 0x80), s += 2; else if (values[i] < 0x10000) r = (char *) realloc(r, s + 3), r[s + 0] = char(values[i] >> 12 | 0xe0), r[s + 1] = char(values[i] >> 6 & 0x3f | 
0x80), r[s + 2] = char(values[i] & 0x3f | 0x80), s += 3; else r = (char *) realloc(r, s + 4), r[s + 0] = char(values[i] >> 18 | 0xf0), r[s + 1] = char(values[i] >> 12 & 0x3f | 0x80), r[s + 2] = char(values[i] >> 6 & 0x3f | 0x80), r[s + 3] = char(values[i] & 0x3f | 0x80), s += 4; } return r; } };
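    A note on the compile error above: ustring a = "test"; is copy-initialization, and copy-initialization needs a constructor that accepts const char* (or something convertible to ustring); the overloaded assignment operator is never considered for it. Below is a minimal sketch of the fix, not the original library: the parse helper is hypothetical (the post does that work inside operator=), it copies raw bytes instead of decoding UTF-8, and it uses new[]/delete[] where the post uses malloc/realloc.
      #include <cstring>

      // Minimal sketch (assumptions noted above): a converting constructor makes
      // copy-initialization such as  ustring a = "test";  compile.
      class ustring {
          int  *values = nullptr;
          long  len    = 0;

          // Hypothetical helper; the original post folds this logic into operator=.
          // Note the use of strlen, not sizeof: an array parameter decays to a pointer,
          // so sizeof(input) would give the pointer size, not the string length.
          void parse(const char *input) {
              len = static_cast<long>(std::strlen(input));
              delete[] values;
              values = new int[len];
              for (long i = 0; i < len; ++i)
                  values[i] = static_cast<unsigned char>(input[i]); // raw bytes only, no UTF-8 decoding
          }

      public:
          ustring() = default;
          ustring(const char *input) { parse(input); }              // converting constructor
          ustring &operator=(const char *input) { parse(input); return *this; }
          ~ustring() { delete[] values; }
          long length() const { return len; }
      };

      int main() {
          ustring a = "test";   // now compiles: const char* -> ustring via the constructor
          ustring b;
          b = "test";           // still goes through operator=
          return a.length() == b.length() ? 0 : 1;
      }
    An alternative is to keep only the assignment operator and always write the two-statement form, but the converting constructor is the idiomatic answer to the exact error message quoted in the question.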

    Read the article

  • Custom language - FOR loop in a Clojure interpreter?

    - by Mark
    I have a basic interpreter in clojure. Now i need to implement for (initialisation; finish-test; loop-update) { statements } Implement a similar for-loop for the interpreted language. The pattern will be: (for variable-declarations end-test loop-update do statement) The variable-declarations will set up initial values for variables.The end-test returns a boolean, and the loop will end if end-test returns false. The statement is interpreted followed by the loop-update for each pass of the loop. Examples of use are: (run ’(for ((i 0)) (< i 10) (set i (+ 1 i)) do (println i))) (run ’(for ((i 0) (j 0)) (< i 10) (seq (set i (+ 1 i)) (set j (+ j (* 2 i)))) do (println j))) inside my interpreter. I will attach my interpreter code I got so far. Any help is appreciated. Interpreter (declare interpret make-env) ;; needed as language terms call out to 'interpret' (def do-trace false) ;; change to 'true' to show calls to 'interpret' ;; simple utilities (def third ; return third item in a list (fn [a-list] (second (rest a-list)))) (def fourth ; return fourth item in a list (fn [a-list] (third (rest a-list)))) (def run ; make it easy to test the interpreter (fn [e] (println "Processing: " e) (println "=> " (interpret e (make-env))))) ;; for the environment (def make-env (fn [] '())) (def add-var (fn [env var val] (cons (list var val) env))) (def lookup-var (fn [env var] (cond (empty? env) 'error (= (first (first env)) var) (second (first env)) :else (lookup-var (rest env) var)))) ;; for terms in language ;; -- define numbers (def is-number? (fn [expn] (number? expn))) (def interpret-number (fn [expn env] expn)) ;; -- define symbols (def is-symbol? (fn [expn] (symbol? expn))) (def interpret-symbol (fn [expn env] (lookup-var env expn))) ;; -- define boolean (def is-boolean? (fn [expn] (or (= expn 'true) (= expn 'false)))) (def interpret-boolean (fn [expn env] expn)) ;; -- define functions (def is-function? (fn [expn] (and (list? expn) (= 3 (count expn)) (= 'lambda (first expn))))) (def interpret-function ; keep function definitions as they are written (fn [expn env] expn)) ;; -- define addition (def is-plus? (fn [expn] (and (list? expn) (= 3 (count expn)) (= '+ (first expn))))) (def interpret-plus (fn [expn env] (+ (interpret (second expn) env) (interpret (third expn) env)))) ;; -- define subtraction (def is-minus? (fn [expn] (and (list? expn) (= 3 (count expn)) (= '- (first expn))))) (def interpret-minus (fn [expn env] (- (interpret (second expn) env) (interpret (third expn) env)))) ;; -- define multiplication (def is-times? (fn [expn] (and (list? expn) (= 3 (count expn)) (= '* (first expn))))) (def interpret-times (fn [expn env] (* (interpret (second expn) env) (interpret (third expn) env)))) ;; -- define division (def is-divides? (fn [expn] (and (list? expn) (= 3 (count expn)) (= '/ (first expn))))) (def interpret-divides (fn [expn env] (/ (interpret (second expn) env) (interpret (third expn) env)))) ;; -- define equals test (def is-equals? (fn [expn] (and (list? expn) (= 3 (count expn)) (= '= (first expn))))) (def interpret-equals (fn [expn env] (= (interpret (second expn) env) (interpret (third expn) env)))) ;; -- define greater-than test (def is-greater-than? (fn [expn] (and (list? expn) (= 3 (count expn)) (= '> (first expn))))) (def interpret-greater-than (fn [expn env] (> (interpret (second expn) env) (interpret (third expn) env)))) ;; -- define not (def is-not? (fn [expn] (and (list? 
expn) (= 2 (count expn)) (= 'not (first expn))))) (def interpret-not (fn [expn env] (not (interpret (second expn) env)))) ;; -- define or (def is-or? (fn [expn] (and (list? expn) (= 3 (count expn)) (= 'or (first expn))))) (def interpret-or (fn [expn env] (or (interpret (second expn) env) (interpret (third expn) env)))) ;; -- define and (def is-and? (fn [expn] (and (list? expn) (= 3 (count expn)) (= 'and (first expn))))) (def interpret-and (fn [expn env] (and (interpret (second expn) env) (interpret (third expn) env)))) ;; -- define print (def is-print? (fn [expn] (and (list? expn) (= 2 (count expn)) (= 'println (first expn))))) (def interpret-print (fn [expn env] (println (interpret (second expn) env)))) ;; -- define with (def is-with? (fn [expn] (and (list? expn) (= 3 (count expn)) (= 'with (first expn))))) (def interpret-with (fn [expn env] (interpret (third expn) (add-var env (first (second expn)) (interpret (second (second expn)) env))))) ;; -- define if (def is-if? (fn [expn] (and (list? expn) (= 4 (count expn)) (= 'if (first expn))))) (def interpret-if (fn [expn env] (cond (interpret (second expn) env) (interpret (third expn) env) :else (interpret (fourth expn) env)))) ;; -- define function-application (def is-function-application? (fn [expn env] (and (list? expn) (= 2 (count expn)) (is-function? (interpret (first expn) env))))) (def interpret-function-application (fn [expn env] (let [function (interpret (first expn) env)] (interpret (third function) (add-var env (first (second function)) (interpret (second expn) env)))))) ;; the interpreter itself (def interpret (fn [expn env] (cond do-trace (println "Interpret is processing: " expn)) (cond ; basic values (is-number? expn) (interpret-number expn env) (is-symbol? expn) (interpret-symbol expn env) (is-boolean? expn) (interpret-boolean expn env) (is-function? expn) (interpret-function expn env) ; built-in functions (is-plus? expn) (interpret-plus expn env) (is-minus? expn) (interpret-minus expn env) (is-times? expn) (interpret-times expn env) (is-divides? expn) (interpret-divides expn env) (is-equals? expn) (interpret-equals expn env) (is-greater-than? expn) (interpret-greater-than expn env) (is-not? expn) (interpret-not expn env) (is-or? expn) (interpret-or expn env) (is-and? expn) (interpret-and expn env) (is-print? expn) (interpret-print expn env) ; special syntax (is-with? expn) (interpret-with expn env) (is-if? expn) (interpret-if expn env) ; functions (is-function-application? 
expn env) (interpret-function-application expn env) :else 'error))) ;; tests of using environment (println "Environment tests:") (println (add-var (make-env) 'x 1)) (println (add-var (add-var (add-var (make-env) 'x 1) 'y 2) 'x 3)) (println (lookup-var '() 'x)) (println (lookup-var '((x 1)) 'x)) (println (lookup-var '((x 1) (y 2)) 'x)) (println (lookup-var '((x 1) (y 2)) 'y)) (println (lookup-var '((x 3) (y 2) (x 1)) 'x)) ;; examples of using interpreter (println "Interpreter examples:") (run '1) (run '2) (run '(+ 1 2)) (run '(/ (* (+ 4 5) (- 2 4)) 2)) (run '(with (x 1) x)) (run '(with (x 1) (with (y 2) (+ x y)))) (run '(with (x (+ 2 4)) x)) (run 'false) (run '(not false)) (run '(with (x true) (with (y false) (or x y)))) (run '(or (= 3 4) (> 4 3))) (run '(with (x 1) (if (= x 1) 2 3))) (run '(with (x 2) (if (= x 1) 2 3))) (run '((lambda (n) (* 2 n)) 4)) (run '(with (double (lambda (n) (* 2 n))) (double 4))) (run '(with (sum-to (lambda (n) (if (= n 0) 0 (+ n (sum-to (- n 1)))))) (sum-to 100))) (run '(with (x 1) (with (f (lambda (n) (+ n x))) (with (x 2) (println (f 3))))))

    Read the article

  • How to save/retrieve words to/from an SQLite database?

    - by user998032
    Sorry if I repeat my question but I have still had no clues of what to do and how to deal with the question. My app is a dictionary. I assume that users will need to add words that they want to memorise to a Favourite list. Thus, I created a Favorite button that works on two phases: short-click to save the currently-view word into the Favourite list; and long-click to view the Favourite list so that users can click on any words to look them up again. I go for using a SQlite database to store the favourite words but I wonder how I can do this task. Specifically, my questions are: Should I use the current dictionary SQLite database or create a new SQLite database to favorite words? In each case, what codes do I have to write to cope with the mentioned task? Could anyone there kindly help? Here is the dictionary code: package mydict.app; import java.util.ArrayList; import android.database.Cursor; import android.database.sqlite.SQLiteDatabase; import android.database.sqlite.SQLiteException; import android.util.Log; public class DictionaryEngine { static final private String SQL_TAG = "[MyAppName - DictionaryEngine]"; private SQLiteDatabase mDB = null; private String mDBName; private String mDBPath; //private String mDBExtension; public ArrayList<String> lstCurrentWord = null; public ArrayList<String> lstCurrentContent = null; //public ArrayAdapter<String> adapter = null; public DictionaryEngine() { lstCurrentContent = new ArrayList<String>(); lstCurrentWord = new ArrayList<String>(); } public DictionaryEngine(String basePath, String dbName, String dbExtension) { //mDBExtension = getResources().getString(R.string.dbExtension); //mDBExtension = dbExtension; lstCurrentContent = new ArrayList<String>(); lstCurrentWord = new ArrayList<String>(); this.setDatabaseFile(basePath, dbName, dbExtension); } public boolean setDatabaseFile(String basePath, String dbName, String dbExtension) { if (mDB != null) { if (mDB.isOpen() == true) // Database is already opened { if (basePath.equals(mDBPath) && dbName.equals(mDBName)) // the opened database has the same name and path -> do nothing { Log.i(SQL_TAG, "Database is already opened!"); return true; } else { mDB.close(); } } } String fullDbPath=""; try { fullDbPath = basePath + dbName + "/" + dbName + dbExtension; mDB = SQLiteDatabase.openDatabase(fullDbPath, null, SQLiteDatabase.OPEN_READWRITE|SQLiteDatabase.NO_LOCALIZED_COLLATORS); } catch (SQLiteException ex) { ex.printStackTrace(); Log.i(SQL_TAG, "There is no valid dictionary database " + dbName +" at path " + basePath); return false; } if (mDB == null) { return false; } this.mDBName = dbName; this.mDBPath = basePath; Log.i(SQL_TAG,"Database " + dbName + " is opened!"); return true; } public void getWordList(String word) { String query; // encode input String wordEncode = Utility.encodeContent(word); if (word.equals("") || word == null) { query = "SELECT id,word FROM " + mDBName + " LIMIT 0,15" ; } else { query = "SELECT id,word FROM " + mDBName + " WHERE word >= '"+wordEncode+"' LIMIT 0,15"; } //Log.i(SQL_TAG, "query = " + query); Cursor result = mDB.rawQuery(query,null); int indexWordColumn = result.getColumnIndex("Word"); int indexContentColumn = result.getColumnIndex("Content"); if (result != null) { int countRow=result.getCount(); Log.i(SQL_TAG, "countRow = " + countRow); lstCurrentWord.clear(); lstCurrentContent.clear(); if (countRow >= 1) { result.moveToFirst(); String strWord = Utility.decodeContent(result.getString(indexWordColumn)); String strContent = 
Utility.decodeContent(result.getString(indexContentColumn)); lstCurrentWord.add(0,strWord); lstCurrentContent.add(0,strContent); int i = 0; while (result.moveToNext()) { strWord = Utility.decodeContent(result.getString(indexWordColumn)); strContent = Utility.decodeContent(result.getString(indexContentColumn)); lstCurrentWord.add(i,strWord); lstCurrentContent.add(i,strContent); i++; } } result.close(); } } public Cursor getCursorWordList(String word) { String query; // encode input String wordEncode = Utility.encodeContent(word); if (word.equals("") || word == null) { query = "SELECT id,word FROM " + mDBName + " LIMIT 0,15" ; } else { query = "SELECT id,content,word FROM " + mDBName + " WHERE word >= '"+wordEncode+"' LIMIT 0,15"; } //Log.i(SQL_TAG, "query = " + query); Cursor result = mDB.rawQuery(query,null); return result; } public Cursor getCursorContentFromId(int wordId) { String query; // encode input if (wordId <= 0) { return null; } else { query = "SELECT id,content,word FROM " + mDBName + " WHERE Id = " + wordId ; } //Log.i(SQL_TAG, "query = " + query); Cursor result = mDB.rawQuery(query,null); return result; } public Cursor getCursorContentFromWord(String word) { String query; // encode input if (word == null || word.equals("")) { return null; } else { query = "SELECT id,content,word FROM " + mDBName + " WHERE word = '" + word + "' LIMIT 0,1"; } //Log.i(SQL_TAG, "query = " + query); Cursor result = mDB.rawQuery(query,null); return result; } public void closeDatabase() { mDB.close(); } public boolean isOpen() { return mDB.isOpen(); } public boolean isReadOnly() { return mDB.isReadOnly(); } } And here is the code below the Favourite button to save to and load the Favourite list: btnAddFavourite = (ImageButton) findViewById(R.id.btnAddFavourite); btnAddFavourite.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { // Add code here to save the favourite, e.g. in the db. Toast toast = Toast.makeText(ContentView.this, R.string.messageWordAddedToFarvourite, Toast.LENGTH_SHORT); toast.show(); } }); btnAddFavourite.setOnLongClickListener(new View.OnLongClickListener() { @Override public boolean onLongClick(View v) { // Open the favourite Activity, which in turn will fetch the saved favourites, to show them. Intent intent = new Intent(getApplicationContext(), FavViewFavourite.class); intent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK); getApplicationContext().startActivity(intent); return false; } }); }

    Read the article

  • concurrency::accelerator

    - by Daniel Moth
    Overview An accelerator represents a "target" on which C++ AMP code can execute and where data can reside. Typically (but not necessarily) an accelerator is a GPU device. Accelerators are represented in C++ AMP as objects of the accelerator class. For many scenarios, you do not need to obtain an accelerator object, since the runtime has a notion of a default accelerator, which is what it thinks is the best one in the system. Examples where you need to deal with accelerator objects are if you need to pick your own accelerator (based on your specific criteria), or if you need to use more than one accelerators from your app. Construction and operator usage You can query and obtain a std::vector of all the accelerators on your system, which the runtime discovers on startup. Beyond enumerating accelerators, you can also create one directly by passing to the constructor a system-wide unique path to a device if you know it (i.e. the “Device Instance Path” property for the device in Device Manager), e.g. accelerator acc(L"PCI\\VEN_1002&DEV_6898&SUBSYS_0B001002etc"); There are some predefined strings (for predefined accelerators) that you can pass to the accelerator constructor (and there are corresponding constants for those on the accelerator class itself, so you don’t have to hardcode them every time). Examples are the following: accelerator::default_accelerator represents the default accelerator that the C++ AMP runtime picks for you if you don’t pick one (the heuristics of how it picks one will be covered in a future post). Example: accelerator acc; accelerator::direct3d_ref represents the reference rasterizer emulator that simulates a direct3d device on the CPU (in a very slow manner). This emulator is available on systems with Visual Studio installed and is useful for debugging. More on debugging in general in future posts. Example: accelerator acc(accelerator::direct3d_ref); accelerator::direct3d_warp represents a target that I will cover in future blog posts. Example: accelerator acc(accelerator::direct3d_warp); accelerator::cpu_accelerator represents the CPU. In this first release the only use of this accelerator is for using the staging arrays technique that I'll cover separately. Example: accelerator acc(accelerator::cpu_accelerator); You can also create an accelerator by shallow copying another accelerator instance (via the corresponding constructor) or simply assigning it to another accelerator instance (via the operator overloading of =). Speaking of operator overloading, you can also compare (for equality and inequality) two accelerator objects between them to determine if they refer to the same underlying device. Querying accelerator characteristics Given an accelerator object, you can access its description, version, device path, size of dedicated memory in KB, whether it is some kind of emulator, whether it has a display attached, whether it supports double precision, and whether it was created with the debugging layer enabled for extensive error reporting. 
Below is example code that accesses some of the properties; in your real code you'd probably be checking one or more of them in order to pick an accelerator (or check that the default one is good enough for your specific workload): void inspect_accelerator(concurrency::accelerator acc) { std::wcout << "New accelerator: " << acc.description << std::endl; std::wcout << "is_debug = " << acc.is_debug << std::endl; std::wcout << "is_emulated = " << acc.is_emulated << std::endl; std::wcout << "dedicated_memory = " << acc.dedicated_memory << std::endl; std::wcout << "device_path = " << acc.device_path << std::endl; std::wcout << "has_display = " << acc.has_display << std::endl; std::wcout << "version = " << (acc.version >> 16) << '.' << (acc.version & 0xFFFF) << std::endl; } accelerator_view In my next blog post I'll cover a related class: accelerator_view. Suffice to say here that each accelerator may have from 1..n related accelerator_view objects. You can get the accelerator_view from an accelerator via the default_view property, or create new ones by invoking the create_view method that creates an accelerator_view object for you (by also accepting a queuing_mode enum value of deferred or immediate that we'll also explore in the next blog post). Comments about this post by Daniel Moth welcome at the original blog.
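    To make the enumeration point concrete, here is a small sketch of picking an accelerator by hand instead of relying on the default. It assumes the C++ AMP headers that ship with Visual Studio; accelerator::get_all() is, as far as I recall, the static call that returns the std::vector of discovered accelerators mentioned in the post, and supports_double_precision is one of the properties listed above.
      #include <amp.h>
      #include <iostream>
      #include <vector>

      using namespace concurrency;

      // Sketch: walk the accelerators the runtime discovered and prefer a real
      // (non-emulated) device that supports double precision; otherwise fall back
      // to whatever the runtime picked as the default.
      accelerator pick_accelerator()
      {
          std::vector<accelerator> all = accelerator::get_all();
          for (const accelerator &acc : all)
          {
              if (!acc.is_emulated && acc.supports_double_precision)
                  return acc;
          }
          return accelerator(accelerator::default_accelerator);
      }

      int main()
      {
          accelerator chosen = pick_accelerator();
          std::wcout << L"Chosen accelerator: " << chosen.description << std::endl;
          std::wcout << L"Dedicated memory (KB): " << chosen.dedicated_memory << std::endl;
      }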

    Read the article

  • Using a Predicate as a key to a Dictionary

    - by Tom Hines
    I really love Linq and Lambda Expressions in C#.  I also love certain community forums and programming websites like DaniWeb. A user on DaniWeb posted a question about comparing the results of a game that is like poker (5-card stud), but is played with dice. The question stemmed around determining what was the winning hand.  I looked at the question and issued some comments and suggestions toward a potential answer, but I thought it was a neat homework exercise. [A little explanation] I eventually realized not only could I compare the results of the hands (by name) with a certain construct – I could also compare the values of the individual dice with the same construct. That piece of code eventually became a Dictionary with the KEY as a Predicate<int> and the Value a Func<T> that returns a string from the another structure that contains the mapping of an ENUM to a string.  In one instance, that string is the name of the hand and in another instance, it is a string (CSV) representation of of the digits in the hand. An added benefit is that the digits re returned in the order they would be for a proper poker hand.  For instance the hand 1,2,5,3,1 would be returned as ONE_PAIR (1,1,5,3,2). [Getting to the point] 1: using System; 2: using System.Collections.Generic; 3:   4: namespace DicePoker 5: { 6: using KVP_E2S = KeyValuePair<CDicePoker.E_DICE_POKER_HAND_VAL, string>; 7: public partial class CDicePoker 8: { 9: /// <summary> 10: /// Magical construction to determine the winner of given hand Key/Value. 11: /// </summary> 12: private static Dictionary<Predicate<int>, Func<List<KVP_E2S>, string>> 13: map_prd2fn = new Dictionary<Predicate<int>, Func<List<KVP_E2S>, string>> 14: { 15: {new Predicate<int>(i => i.Equals(0)), PlayerTie},//first tie 16:   17: {new Predicate<int>(i => i > 0), 18: (m => string.Format("Player One wins\n1={0}({1})\n2={2}({3})", 19: m[0].Key, m[0].Value, m[1].Key, m[1].Value))}, 20:   21: {new Predicate<int>(i => i < 0), 22: (m => string.Format("Player Two wins\n2={2}({3})\n1={0}({1})", 23: m[0].Key, m[0].Value, m[1].Key, m[1].Value))}, 24:   25: {new Predicate<int>(i => i.Equals(0)), 26: (m => string.Format("Tie({0}) \n1={1}\n2={2}", 27: m[0].Key, m[0].Value, m[1].Value))} 28: }; 29: } 30: } When this is called, the code calls the Invoke method of the predicate to return a bool.  The first on matching true will have its value invoked. 1: private static Func<DICE_HAND, E_DICE_POKER_HAND_VAL> GetHandEval = dh => 2: map_dph2fn[map_dph2fn.Keys.Where(enm2fn => enm2fn(dh)).First()]; After coming up with this process, I realized (with a little modification) it could be called to evaluate the individual values in the dice hand in the event of a tie. 1: private static Func<List<KVP_E2S>, string> PlayerTie = lst_kvp => 2: map_prd2fn.Skip(1) 3: .Where(x => x.Key.Invoke(RenderDigits(dhPlayerOne).CompareTo(RenderDigits(dhPlayerTwo)))) 4: .Select(s => s.Value) 5: .First().Invoke(lst_kvp); After that, I realized I could now create a program completely without “if” statements or “for” loops! 1: static void Main(string[] args) 2: { 3: Dictionary<Predicate<int>, Action<Action<string>>> main = new Dictionary<Predicate<int>, Action<Action<string>>> 4: { 5: {(i => i.Equals(0)), PlayGame}, 6: {(i => true), Usage} 7: }; 8:   9: main[main.Keys.Where(m => m.Invoke(args.Length)).First()].Invoke(Display); 10: } …and there you have it. :) ZIPPED Project

    Read the article

  • concurrency::index<N> from amp.h

    - by Daniel Moth
    Overview C++ AMP introduces a new template class index<N>, where N can be any value greater than zero, that represents a unique point in N-dimensional space, e.g. if N=2 then an index<2> object represents a point in 2-dimensional space. This class is essentially a coordinate vector of N integers representing a position in space relative to the origin of that space. It is ordered from most-significant to least-significant (so, if the 2-dimensional space is rows and columns, the first component represents the rows). The underlying type is a signed 32-bit integer, and component values can be negative. The rank field returns N. Creating an index The default parameterless constructor returns an index with each dimension set to zero, e.g. index<3> idx; //represents point (0,0,0) An index can also be created from another index through the copy constructor or assignment, e.g. index<3> idx2(idx); //or index<3> idx2 = idx; To create an index representing something other than 0, you call its constructor as per the following 4-dimensional example: int temp[4] = {2,4,-2,0}; index<4> idx(temp); Note that there are convenience constructors (that don’t require an array argument) for creating index objects of rank 1, 2, and 3, since those are the most common dimensions used, e.g. index<1> idx(3); index<2> idx(3, 6); index<3> idx(3, 6, 12); Accessing the component values You can access each component using the familiar subscript operator, e.g. One-dimensional example: index<1> idx(4); int i = idx[0]; // i=4 Two-dimensional example: index<2> idx(4,5); int i = idx[0]; // i=4 int j = idx[1]; // j=5 Three-dimensional example: index<3> idx(4,5,6); int i = idx[0]; // i=4 int j = idx[1]; // j=5 int k = idx[2]; // k=6 Basic operations Once you have your multi-dimensional point represented in the index, you can now treat it as a single entity, including performing common operations between it and an integer (through operator overloading): -- (pre- and post- decrement), ++ (pre- and post- increment), %=, *=, /=, +=, -=,%, *, /, +, -. There are also operator overloads for operations between index objects, i.e. ==, !=, +=, -=, +, –. Here is an example (where no assertions are broken): index<2> idx_a; index<2> idx_b(0, 0); index<2> idx_c(6, 9); _ASSERT(idx_a.rank == 2); _ASSERT(idx_a == idx_b); _ASSERT(idx_a != idx_c); idx_a += 5; idx_a[1] += 3; idx_a++; _ASSERT(idx_a != idx_b); _ASSERT(idx_a == idx_c); idx_b = idx_b + 10; idx_b -= index<2>(4, 1); _ASSERT(idx_a == idx_b); Usage You'll most commonly use index<N> objects to index into data types that we'll cover in future posts (namely array and array_view). Also when we look at the new parallel_for_each function we'll see that an index<N> object is the single parameter to the lambda, representing the (multi-dimensional) thread index… In the next post we'll go beyond being able to represent an N-dimensional point in space, and we'll see how to define the N-dimensional space itself through the extent<N> class. Comments about this post by Daniel Moth welcome at the original blog.
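    As a teaser for the usage described above, here is a hedged sketch of an index<2> arriving as the parameter of the parallel_for_each lambda and being used to subscript an array_view. Both array_view and parallel_for_each are topics the post defers to later installments, so treat the snippet as an illustration of where index<N> shows up rather than an introduction to those types.
      #include <amp.h>
      #include <iostream>
      #include <vector>

      using namespace concurrency;

      int main()
      {
          // 2 rows x 3 columns of data, wrapped in an array_view (covered in later posts).
          std::vector<int> data(6, 0);
          array_view<int, 2> av(2, 3, data);

          // Each thread receives its own index<2>: idx[0] is the row, idx[1] the column.
          parallel_for_each(av.extent, [=](index<2> idx) restrict(amp)
          {
              av[idx] = idx[0] * 10 + idx[1];
          });

          av.synchronize();                    // copy results back into the vector
          std::cout << data[5] << std::endl;   // row 1, column 2 -> prints 12
      }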

    Read the article

  • How to get distinct values from the List<T> with LINQ

    - by Vincent Maverick Durano
    Recently I was working with data from a generic List<T> and one of my objectives is to get the distinct values that is found in the List. Consider that we have this simple class that holds the following properties: public class Product { public string Make { get; set; } public string Model { get; set; } }   Now in the page code behind we will create a list of product by doing the following: private List<Product> GetProducts() { List<Product> products = new List<Product>(); Product p = new Product(); p.Make = "Samsung"; p.Model = "Galaxy S 1"; products.Add(p); p = new Product(); p.Make = "Samsung"; p.Model = "Galaxy S 2"; products.Add(p); p = new Product(); p.Make = "Samsung"; p.Model = "Galaxy Note"; products.Add(p); p = new Product(); p.Make = "Apple"; p.Model = "iPhone 4"; products.Add(p); p = new Product(); p.Make = "Apple"; p.Model = "iPhone 4s"; products.Add(p); p = new Product(); p.Make = "HTC"; p.Model = "Sensation"; products.Add(p); p = new Product(); p.Make = "HTC"; p.Model = "Desire"; products.Add(p); p = new Product(); p.Make = "Nokia"; p.Model = "Some Model"; products.Add(p); p = new Product(); p.Make = "Nokia"; p.Model = "Some Model"; products.Add(p); p = new Product(); p.Make = "Sony Ericsson"; p.Model = "800i"; products.Add(p); p = new Product(); p.Make = "Sony Ericsson"; p.Model = "800i"; products.Add(p); return products; }   And then let’s bind the products to the GridView. protected void Page_Load(object sender, EventArgs e) { if (!IsPostBack) { Gridview1.DataSource = GetProducts(); Gridview1.DataBind(); } }   Running the code will display something like this in the page: Now what I want is to get the distinct row values from the list. So what I did is to use the LINQ Distinct operator and unfortunately it doesn't work. In order for it work is you must use the overload method of the Distinct operator for you to get the desired results. So I’ve added this IEqualityComparer<T> class to compare values: class ProductComparer : IEqualityComparer<Product> { public bool Equals(Product x, Product y) { if (Object.ReferenceEquals(x, y)) return true; if (Object.ReferenceEquals(x, null) || Object.ReferenceEquals(y, null)) return false; return x.Make == y.Make && x.Model == y.Model; } public int GetHashCode(Product product) { if (Object.ReferenceEquals(product, null)) return 0; int hashProductName = product.Make == null ? 0 : product.Make.GetHashCode(); int hashProductCode = product.Model.GetHashCode(); return hashProductName ^ hashProductCode; } }   After that you can then bind the GridView like this: protected void Page_Load(object sender, EventArgs e) { if (!IsPostBack) { Gridview1.DataSource = GetProducts().Distinct(new ProductComparer()); Gridview1.DataBind(); } }   Running the page will give you the desired output below: As you notice, it now eliminates the duplicate rows in the GridView. Now what if we only want to get the distinct values for a certain field. For example I want to get the distinct “Make” values such as Samsung, Apple, HTC, Nokia and Sony Ericsson and populate them to a DropDownList control for filtering purposes. I was hoping the the Distinct operator has an overload that can compare values based on the property value like (GetProducts().Distinct(o => o.PropertyToCompare). But unfortunately it doesn’t provide that overload so what I did as a workaround is to use the GroupBy,Select and First LINQ query operators to achieve what I want. Here’s the code to get the distinct values of a certain field. 
protected void Page_Load(object sender, EventArgs e) { if (!IsPostBack) { DropDownList1.DataSource = GetProducts().GroupBy(o => o.Make).Select(o => o.First()); DropDownList1.DataTextField = "Make"; DropDownList1.DataValueField = "Model"; DropDownList1.DataBind(); } } Running the code populates the DropDownList with the distinct makes. That's it! I hope someone finds this post useful!

    Read the article

  • Remove duplicates from DataTable and custom IEqualityComparer<DataRow>

    - by abatishchev
    How do I implement IEqualityComparer<DataRow> to remove duplicate rows from a DataTable with the following structure: ID primary key, col_1, col_2, col_3, col_4? The default comparer doesn't work because each row has its own unique primary key. How do I implement an IEqualityComparer<DataRow> that skips the primary key and compares only the remaining data? I have something like this: public class DataRowComparer : IEqualityComparer<DataRow> { public bool Equals(DataRow x, DataRow y) { return x.ItemArray.Except(new object[] { x[x.Table.PrimaryKey[0].ColumnName] }) == y.ItemArray.Except(new object[] { y[y.Table.PrimaryKey[0].ColumnName] }); } public int GetHashCode(DataRow obj) { return obj.ToString().GetHashCode(); } } and public static DataTable RemoveDuplicates(this DataTable table) { return (table.Rows.Count > 0) ? table.AsEnumerable().Distinct(new DataRowComparer()).CopyToDataTable() : table; } but it only calls GetHashCode() and never calls Equals().

    Read the article
