Search Results

Search found 16731 results on 670 pages for 'memory limit'.

Page 184/670

  • .NET Code Evolution

    - by Alois Kraus
    Originally posted on: http://geekswithblogs.net/akraus1/archive/2013/07/24/153504.aspx

    At my day job I look at a lot of code written by other people. Most of the code is quite good and some is even a masterpiece. And there is also code which makes you think WTF… oh, it was written by me. Hm, not so bad after all. There are many excuses, er, reasons for bad code. Most often it is time pressure, followed by not enough ambition (who cares) or insufficient training. Normally I care about code quality quite a lot, which makes me a (perceived) slow worker who writes many tests and refines the code quite a lot because of design deficiencies. Most of the deficiencies I find by putting my design under stress while checking for invariants. It also helps a lot to step into the code with a debugger (sometimes also Windbg). I do this much more often when my tests are red. That way I get a much better understanding of what my code really does, and not what I think it should be doing. This time I want to show you how code can evolve over the years with different .NET Framework versions. Once there was a time when .NET 1.1 was new and many C++ programmers switched over to get rid of uninitialized pointers and memory leaks. There were also nice new data structures available, such as the Hashtable, which is a fast lookup table with O(1) time complexity. All was good and much code was written since then. In 2005 a new version of the .NET Framework arrived which brought many new things like generics and new data structures. The old-fashioned Hashtable was coming to an end and everyone used the new Dictionary<xx,xx> type instead, which was type safe and faster because the conversion to and from object (aka boxing) was no longer necessary. I think 95% of all Hashtables and dictionaries use string as key. Often it is convenient to ignore casing to make it easy to look up values which the user entered. An often followed route is to convert the string to upper case before putting it into the Hashtable.

    Hashtable Table = new Hashtable();

    void Add(string key, string value)
    {
        Table.Add(key.ToUpper(), value);
    }

    This is valid and working code, but it has problems. First, we can pass the Hashtable a custom IEqualityComparer to do the string matching case-insensitively. Second, we can switch over to the now also old Dictionary type to become a little faster, and we can keep the original keys (not upper cased) in the dictionary.

    Dictionary<string, string> DictTable = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);

    void AddDict(string key, string value)
    {
        DictTable.Add(key, value);
    }

    Many people do not use the other ctors of Dictionary because they shy away from the overhead of writing their own comparer. They do not know that .NET already has predefined comparers for strings at hand which you can use directly. Today, in the many-core era, we use threads all over the place. Sometimes things break in subtle ways, but most of the time it is sufficient to place a lock around the offender. Threading has become so mainstream that it may sound weird that in the year 2000 some guy got a huge incentive for the idea to reduce the time to process calibration data from 12 hours to 6 hours by using two threads on a dual core machine. Threading makes it easy to become faster at the expense of correctness. Correct and scalable multithreading can be arbitrarily hard to achieve depending on the problem you are trying to solve.
    Let's suppose we want to process millions of items with two threads and count the items processed by all threads. A typical beginner's code might look like this:

    int Counter;

    void IJustLearnedToUseThreads()
    {
        var t1 = new Thread(ThreadWorkMethod);
        t1.Start();
        var t2 = new Thread(ThreadWorkMethod);
        t2.Start();
        t1.Join();
        t2.Join();

        if (Counter != 2 * Increments)
            throw new Exception("Hmm " + Counter + " != " + 2 * Increments);
    }

    const int Increments = 10 * 1000 * 1000;

    void ThreadWorkMethod()
    {
        for (int i = 0; i < Increments; i++)
        {
            Counter++;
        }
    }

    It throws an exception with a message like "Hmm 10.222.287 != 20.000.000" and never gets the count right. The code fails because the assumption that Counter++ is an atomic operation is wrong. The ++ operator is just a shortcut for Counter = Counter + 1. This involves reading the counter from a memory location into the CPU, incrementing the value on the CPU and writing the new value back to the memory location. When we look at the generated assembly code we will see only

    inc dword ptr [ecx+10h]

    which is only one instruction. Yes, it is one instruction, but it is not atomic. All modern CPUs have several layers of caches (L1, L2, L3) which try to hide how slow actual main memory accesses are. Since a cache is just another word for a redundant copy, it can happen that one CPU reads a value from main memory into its cache, modifies it and writes it back to main memory. The problem is that at least the L1 cache is not shared between CPUs, so one CPU can make changes to a value which has in the meantime already been changed in main memory. From the exception you can see that we did increment the value 20 million times, but half of the changes were lost because we overwrote the value already changed by the other thread. This is a very common case and people learn to protect their data with proper locking.

    void Intermediate()
    {
        var time = Stopwatch.StartNew();
        Action acc = ThreadWorkMethod_Intermediate;
        var ar1 = acc.BeginInvoke(null, null);
        var ar2 = acc.BeginInvoke(null, null);
        ar1.AsyncWaitHandle.WaitOne();
        ar2.AsyncWaitHandle.WaitOne();

        if (Counter != 2 * Increments)
            throw new Exception(String.Format("Hmm {0:N0} != {1:N0}", Counter, 2 * Increments));

        Console.WriteLine("Intermediate did take: {0:F1}s", time.Elapsed.TotalSeconds);
    }

    void ThreadWorkMethod_Intermediate()
    {
        for (int i = 0; i < Increments; i++)
        {
            lock (this)
            {
                Counter++;
            }
        }
    }

    This is better and uses the .NET ThreadPool to get rid of manual thread management. It gives the expected result, but it can result in deadlocks because you lock on this. That is in general a bad idea, since it can lead to deadlocks when other threads use your class instance as a lock object. It is therefore recommended to create a private object as the lock object, to ensure that nobody else can lock your lock object (a minimal sketch of this pattern follows below). When you read more about threading you will read about lock-free algorithms. They are nice and can improve performance quite a lot, but you need to pay close attention to the CLR memory model. It makes quite weak guarantees in general, but it can still work because your CPU architecture gives you more invariants than the CLR memory model. For a simple counter there is an easy lock-free alternative present with the Interlocked class in .NET. As a general rule you should not try to write lock-free algos, since most likely you will fail to get them right on all CPU architectures.
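    Here is a minimal sketch of that private lock object pattern; the class and field names are illustrative and not from the original post:

    class Counting
    {
        // Private lock object: code outside this class cannot take this lock,
        // so a caller locking our instance ("this") can no longer deadlock us.
        private readonly object _sync = new object();
        private int _counter;

        public void Increment()
        {
            lock (_sync)   // only code inside this class can contend for _sync
            {
                _counter++;
            }
        }

        public int Value
        {
            get { lock (_sync) { return _counter; } }
        }
    }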
    void Experienced()
    {
        var time = Stopwatch.StartNew();
        Task t1 = Task.Factory.StartNew(ThreadWorkMethod_Experienced);
        Task t2 = Task.Factory.StartNew(ThreadWorkMethod_Experienced);
        t1.Wait();
        t2.Wait();

        if (Counter != 2 * Increments)
            throw new Exception(String.Format("Hmm {0:N0} != {1:N0}", Counter, 2 * Increments));

        Console.WriteLine("Experienced did take: {0:F1}s", time.Elapsed.TotalSeconds);
    }

    void ThreadWorkMethod_Experienced()
    {
        for (int i = 0; i < Increments; i++)
        {
            Interlocked.Increment(ref Counter);
        }
    }

    Since time moves forward, we no longer use threads explicitly but the much nicer Task abstraction which was introduced with .NET 4 in 2010. It is educational to look at the generated assembly code. The Interlocked.Increment method must be called, which does wondrous things, right? Let's see:

    lock inc dword ptr [eax]

    The first thing to note is that there is no method call at all. Why? Because the JIT compiler knows very well about CPU intrinsic functions. Atomic operations which lock the memory bus to prevent other processors from reading stale values are such things. Second: this is the same increment instruction prefixed with a lock instruction. The only reason for the existence of the Interlocked class is that the JIT compiler can compile it to the matching CPU intrinsics, which can not only increment by one but can also do an add, an exchange and a combined compare-and-exchange operation. But be warned that the correct usage of its methods can be tricky. If you try to be clever and look at the generated IL code and reason about its efficiency, you will fail. Only the generated machine code counts. Is this the best code we can write? Perhaps. It is nice and clean. But can we make it any faster? Let's see how well we are doing currently.

    Level                      Time in s
    IJustLearnedToUseThreads   flawed code
    Intermediate               1.5 (lock)
    Experienced                0.3 (Interlocked.Increment)
    Master                     0.1 (1.0 for a plain int[2])

    That lock-free thing is really a nice thing. But if you read more about CPU caches, cache coherency and false sharing, you can do even better.

    int[] Counters = new int[12]; // Cache line size is 64 bytes on my machine with an 8-way associative cache; try for yourself, e.g. 64 on more modern CPUs

    void Master()
    {
        var time = Stopwatch.StartNew();
        Task t1 = Task.Factory.StartNew(ThreadWorkMethod_Master, 0);
        Task t2 = Task.Factory.StartNew(ThreadWorkMethod_Master, Counters.Length - 1);
        t1.Wait();
        t2.Wait();

        Counter = Counters[0] + Counters[Counters.Length - 1];
        if (Counter != 2 * Increments)
            throw new Exception(String.Format("Hmm {0:N0} != {1:N0}", Counter, 2 * Increments));

        Console.WriteLine("Master did take: {0:F1}s", time.Elapsed.TotalSeconds);
    }

    void ThreadWorkMethod_Master(object number)
    {
        int index = (int)number;
        for (int i = 0; i < Increments; i++)
        {
            Counters[index]++;
        }
    }

    The key insight here is to give each core its own value. But if you simply use an integer array of two items, one for each core, and add the items at the end, you will be much slower than the lock-free version (factor 3). Each CPU has a cache line size which is typically somewhere in the range of 16-256 bytes. When you access a value from one location the CPU does not fetch just one value from main memory but a complete cache line (e.g. 16 bytes). This means that you do not pay for the next 15 bytes when you access them. This can lead to dramatic performance improvements and non-obvious code which is faster although it does many more memory reads than another algorithm. So what have we done here?
    We started with correct code, but it lacked knowledge of how to use the .NET Base Class Library optimally. Then we tried to get fancy, used threads for the first time and failed. Our next try was better, but it still had non-obvious issues (the lock object was exposed to the outside). Knowledge increased further and we found a lock-free version of our counter, which is a nice, clean and perfectly valid solution. The last example is only here to show you how you can get the most out of threading by paying close attention to the data structures you use and to CPU cache coherency. Although we are working in a virtual execution environment, in a high-level language with automatic memory management, it pays off to know the details down to the assembly level. Only if you continue to learn and dig deeper can you come up with solutions no one else was even considering. I have studied particle physics, which helps with the digging-deeper part. Have you ever tried to solve Quantum Chromodynamics equations? Compared to that the rest must be easy ;-). Although I am no longer working in the science field, I take pride in discovering non-obvious things. This can be a very hard-to-find bug or a new way to restructure data to make something 10 times faster. Now I need to get some sleep ….

    Read the article

  • How to solve an out of memory exception in Entity Framework?

    - by programmerist
    Hello; the code below returns all of my Rehber data. But when I try to show it on a web page via a GridView, I get an out of memory exception. GenoTip.BAL:

    public static List<Rehber> GetAllDataOfRehber()
    {
        using (GenoTipSatisEntities genSatisCtx = new GenoTipSatisEntities())
        {
            ObjectQuery<Rehber> rehber = genSatisCtx.Rehber;
            return rehber.ToList();
        }
    }

    If I bind the data directly to a bare GridView like this, no problem occurs and everything is fine:

    <asp:GridView ID="gwRehber" runat="server">
    </asp:GridView>

    The code above sends the data to the Satis.aspx page:

    using GenoTip.BAL;

    namespace GenoTip.Web.ContentPages.Satis
    {
        public partial class Satis : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                if (!IsPostBack)
                {
                    gwRehber.DataSource = SatisServices.GetAllDataOfRehber();
                    gwRehber.DataBind();
                    //gwRehber.Columns[0].Visible = false;
                }
            }
        }
    }

    But when I rearranged my GridView as below, it throws the out of memory exception again, and I need this arrangement to show the data:

    <asp:GridView ID="gwRehber" runat="server">
        <Columns>
            <%-- <asp:TemplateField>
                <ItemTemplate>
                    <asp:Button runat="server" ID="btnID" CommandName="select" CommandArgument='<%# Eval("ID") %>' Text="Seç" />
                </ItemTemplate>
            </asp:TemplateField>--%>
            <asp:BoundField DataField="Ad" HeaderText="Ad" />
            <asp:BoundField DataField="BireyID" HeaderText="BireyID" Visible="false" />
            <asp:BoundField DataField="Degistiren" HeaderText="Degistiren" />
            <asp:BoundField DataField="EklemeTarihi" HeaderText="EklemeTarihi" />
            <asp:BoundField DataField="DegistirmeTarihi" HeaderText="Degistirme Tarihi" Visible="false" />
            <asp:BoundField DataField="Ekleyen" HeaderText="Ekleyen" />
            <asp:BoundField DataField="ID" HeaderText="ID" Visible="false" />
            <asp:BoundField DataField="Imza" HeaderText="Imza" />
            <asp:BoundField DataField="KurumID" HeaderText="KurumID" Visible="false" />
        </Columns>
    </asp:GridView>

    Error detail:

    [OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.]
       System.String.GetStringForStringBuilder(String value, Int32 startIndex, Int32 length, Int32 capacity) +29
       System.Convert.ToBase64String(Byte[] inArray, Int32 offset, Int32 length, Base64FormattingOptions options) +146
       System.Web.UI.ObjectStateFormatter.Serialize(Object stateGraph) +183
       System.Web.UI.ObjectStateFormatter.System.Web.UI.IStateFormatter.Serialize(Object state) +4
       System.Web.UI.Util.SerializeWithAssert(IStateFormatter formatter, Object stateGraph) +37
       System.Web.UI.HiddenFieldPageStatePersister.Save() +79
       System.Web.UI.Page.SavePageStateToPersistenceMedium(Object state) +105
       System.Web.UI.Page.SaveAllState() +236
       System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +1099
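    The stack trace ends in the ViewState serializer (ObjectStateFormatter / HiddenFieldPageStatePersister), so the entire result set is being serialized into page state. One common mitigation, offered as a hedged sketch rather than a definitive fix, is to fetch and bind only one page of rows at a time (and optionally set EnableViewState="false" on the GridView). The method below assumes Rehber has an ID property to order by; the name GetRehberPage is illustrative:

    using System.Collections.Generic;
    using System.Linq;

    public static List<Rehber> GetRehberPage(int pageIndex, int pageSize)
    {
        using (GenoTipSatisEntities genSatisCtx = new GenoTipSatisEntities())
        {
            // Materialize only one page of rows instead of the whole table,
            // so neither the grid nor ViewState has to hold every record.
            return genSatisCtx.Rehber
                              .OrderBy(r => r.ID)          // Skip/Take need a stable ordering; ID is an assumed property
                              .Skip(pageIndex * pageSize)
                              .Take(pageSize)
                              .ToList();
        }
    }

    // usage in Page_Load: gwRehber.DataSource = SatisServices.GetRehberPage(0, 25); gwRehber.DataBind();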

    Read the article

  • How to add a limit option in a Magento API call?

    - by ritesh
    I am creating a web service for my store. I am using the Magento API to collect the product list from the store, but it displays all 500 records, and I want 25 records per page. What do I add to the API call, or what filter should be applied for this?

    // create soap object
    $proxy = new SoapClient('http://localhost/magento/api/soap/?wsdl');

    // create authorized session id using api user name and api key
    // $sessionId = $proxy->login('apiUser', 'apiKey');
    $sessionId = $proxy->login('test_admin', '12345678');

    $filters = array( );

    // Get list of products
    $productlist = $proxy->call($sessionId, 'product.list', array($filters));
    print_r($productlist);

    Read the article

  • Finally, I have my HP 6910p laptop running with 8Gb RAM

    - by Liam Westley
    Today, I received two Corsair Value Select 4Gb DDR SO-DIMMs (from overclock.co.uk) for my aging HP 6910p, to give it the extra lease of life to keep it going until the end of 2010. And here is the proof that Windows 7 64-bit happily sees all 8Gb. No 4Gb modules are officially supported for the HP 6910p (they didn't exist when it was first built). I was taking a bit of a gamble, relying on the UK distance selling regulations, which meant that even if they didn't work I'd be able to send them back, getting a full refund and only paying for the return postage. I'd read Keith Combs' blog back in 2008 (http://blogs.technet.com/b/keithcombs/archive/2008/07/05/loading-a-hp-6910p-with-8gb-of-ram.aspx), where he mentioned 'trying' out 4Gb samples of SO-DIMMs in a HP 6910p laptop, but there still appears to be no mention of running this configuration in any other blog. Seeing how the 8Gb of memory is used is made easier with the new Resource Monitor available in Windows 7. With two copies of Visual Studio 2008, Outlook, Firefox (with 30+ tabs), TweetDeck (an infamous memory hog) and VMware Workstation running a virtual machine allocated 2Gb of memory, you might have no 'free' memory remaining, but the standby memory is an awesome 2.4Gb, and once the VM is up and running the Hard Faults/sec hovers around zero. It's the page fault figure which really counts, because reducing that value means that you are preventing the Windows 7 system drive from being used for virtual memory paging operations. Even after only a few hours of use it's noticeable that disc access has been reduced and applications feel more responsive and 'snappy'. I did consider the option of purchasing an SSD to replace the main drive, rather than going for 8Gb of RAM, but I think I've probably made the correct decision. Given my hobby topic of virtualisation, I take the view that you can never have too much memory. It was also a decision made easier by the price differential between 8Gb of RAM and a decent-sized SSD. In the 18 months since Keith Combs tested the first 4Gb SO-DIMMs they have plummeted in price; at just under £100 per 4Gb, they are around a fifth of the price at launch. So if you ever wondered whether a HP 6910p can handle 8Gb, now you know.

    Read the article

  • How to crop or get a smaller UIImage on the iPhone without memory leaks?

    - by rkbang
    Hello all, I am using a navigation controller in which I push a table view controller as follows:

    TableView *Controller = [[TableView alloc] initWithStyle:UITableViewStylePlain];
    [self.navigationController pushViewController:Controller animated:NO];
    [Controller release];

    In this table view I am using the following two methods to display images:

    - (UIImage*) getSmallImage:(UIImage*) img
    {
        CGSize size = img.size;
        CGFloat ratio = 0;
        if (size.width < size.height) {
            ratio = 36 / size.width;
        } else {
            ratio = 36 / size.height;
        }
        CGRect rect = CGRectMake(0.0, 0.0, ratio * size.width, ratio * size.height);
        UIGraphicsBeginImageContext(rect.size);
        [img drawInRect:rect];
        return UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
    }

    - (UIImage*)imageByCropping:(UIImage *)imageToCrop toRect:(CGRect)rect
    {
        //create a context to do our clipping in
        UIGraphicsBeginImageContext(rect.size);
        CGContextRef currentContext = UIGraphicsGetCurrentContext();

        //create a rect with the size we want to crop the image to
        //the X and Y here are zero so we start at the beginning of our
        //newly created context
        CGFloat X = (imageToCrop.size.width - rect.size.width)/2;
        CGFloat Y = (imageToCrop.size.height - rect.size.height)/2;
        CGRect clippedRect = CGRectMake(X, Y, rect.size.width, rect.size.height);
        //CGContextClipToRect( currentContext, clippedRect);

        //create a rect equivalent to the full size of the image
        //offset the rect by the X and Y we want to start the crop
        //from in order to cut off anything before them
        CGRect drawRect = CGRectMake(0, 0, imageToCrop.size.width, imageToCrop.size.height);
        CGContextTranslateCTM(currentContext, 0.0, drawRect.size.height);
        CGContextScaleCTM(currentContext, 1.0, -1.0);

        //draw the image to our clipped context using our offset rect
        //CGContextDrawImage(currentContext, drawRect, imageToCrop.CGImage);
        CGImageRef tmp = CGImageCreateWithImageInRect(imageToCrop.CGImage, clippedRect);

        //pull the image from our cropped context
        UIImage *cropped = [UIImage imageWithCGImage:tmp];//UIGraphicsGetImageFromCurrentImageContext();
        CGImageRelease(tmp);

        //pop the context to get back to the default
        UIGraphicsEndImageContext();
        //Note: this is autoreleased*/
        return cropped;
    }

    But when I pop the Controller, the memory being used is not released. Are there any leaks in the above code used to create and crop images? Also, is there a more efficient way to deal with images on the iPhone? I am working with a lot of images and facing major challenges in resolving the memory issues. Thanks in advance.

    Read the article

  • How to limit an upload speed in java servlet?

    - by den-javamaniac
    Hi. I'm working on an app (based on Spring as the DI and MVC framework) that has a file upload function, currently implemented using Spring multipart upload (which in its turn utilizes the Commons FileUpload libs). What I'm looking for is a way to limit the upload bandwidth consumption. How can I accomplish that?

    Read the article

  • Performance Enhancement in Full-Text Search Query

    - by Calvin Sun
    Ever since its first release, we have continued consolidating and developing the InnoDB Full-Text Search feature. There is one recent improvement that is worth blogging about. It is an effort with the MySQL Optimizer team that simplifies the query plans of some common queries and dramatically shortens the query time. I will describe the issue, our solution and the end result, with some performance numbers, to demonstrate our continuing efforts to enhance the Full-Text Search capability.

    The Issue: As we discussed in previous blogs, InnoDB implements the Full-Text index as inverted auxiliary tables. The query, once parsed, is reinterpreted into several queries against the related auxiliary tables, and the results are then merged and consolidated to come up with the final result. So at the end of the query we have all matching records on hand, sorted by their ranking or by their Doc IDs. Unfortunately, MySQL's optimizer and query processing had initially been designed for the MyISAM Full-Text index, and sometimes did not fully utilize the complete result package from InnoDB. Here are a couple of examples:

    Case 1: Query result ordered by rank with only the top N results:

    mysql> SELECT FTS_DOC_ID, MATCH (title, body) AGAINST ('database') AS SCORE FROM articles ORDER BY score DESC LIMIT 1;

    In this query, the user tries to retrieve the single record with the highest ranking. It should have a quick answer once we have all the matching documents on hand, especially since they are ranked. However, before this change, MySQL would retrieve rankings for almost every row in the table, sort them and then come up with the top-ranked result. This whole retrieve and sort is quite unnecessary given that InnoDB already has the answer. In a real-life case, the user could have millions of rows, so in the old scheme it would retrieve and sort millions of rows' rankings, even if our FTS already found there were only 3 matching rows. Apparently, the million-row ranking retrieval is done in vain. In the above case, it should just ask for the 3 matching rows' rankings; all other rows' rankings are 0. If it wants the top ranking, it can just take the first record from our already sorted result.

    Case 2: SELECT COUNT(*) on matching records:

    mysql> SELECT COUNT(*) FROM articles WHERE MATCH (title,body) AGAINST ('database' IN NATURAL LANGUAGE MODE);

    In this case, the InnoDB search can find the matching rows quickly and will have all of them on hand. However, before our change, in the old scheme every row in the table was requested by MySQL one by one, just to check whether its ranking is larger than 0, and a count was computed afterwards. In fact, there is no need for MySQL to fetch all rows; InnoDB already has all the matching records. The only thing needed is to call an InnoDB API to retrieve the count. The difference can be huge. The following query output shows how big it can be:

    mysql> select count(*) from searchindex_inno where match(si_title, si_text) against ('people');
    +----------+
    | count(*) |
    +----------+
    |   666877 |
    +----------+
    1 row in set (16 min 17.37 sec)

    So the query took almost 16 minutes. Let's see how long it takes InnoDB to come up with the result.
    In InnoDB, you can obtain extra diagnostic printout by turning on "innodb_ft_enable_diag_print"; this prints extra query info to the error log:

    keynr=2, 'people' NL search
    Total docs: 10954826 Total words: 0
    UNION: Searching: 'people'
    Processing time: 2 secs: row(s) 666877: error: 10
    ft_init()
    ft_init_ext()
    keynr=2, 'people' NL search
    Total docs: 10954826 Total words: 0
    UNION: Searching: 'people'
    Processing time: 3 secs: row(s) 666877: error: 10

    The output shows it took InnoDB only 3 seconds to get the result, while the whole query took 16 minutes to finish. So a large amount of time was wasted on unneeded row fetching.

    The Solution: The solution is obvious. MySQL can skip some of its steps, optimize its plan and obtain useful information directly from InnoDB. Some of the savings from doing this include:

    1) Avoid redundant sorting. Since InnoDB has already sorted the result according to ranking, the MySQL query processing layer does not need to sort to get the top matching results.
    2) Avoid row-by-row fetching to get the matching count. InnoDB provides all the matching records; all those not in the result list have a ranking of 0 and do not need to be retrieved. And InnoDB has a count of the total matching records on hand, so there is no need to recount.
    3) Covered index scan. InnoDB results always contain the matching records' Document IDs and their rankings, so if only the Document ID and ranking are needed, there is no need to go to the user table to fetch the record itself.
    4) Narrow the search result early, reducing user table accesses. If the user wants the top N matching records, we do not need to fetch all matching records from the user table. We can first select the top N matching Doc IDs, and then fetch only the corresponding records for those Doc IDs.

    Performance Results and comparison with MyISAM: The result of this change is very obvious. I include six test results performed by Alexander Rubin, just to demonstrate how fast the InnoDB query now is compared with MyISAM Full-Text Search. These tests are based on the English Wikipedia data of 5.4 million rows and an approximately 16G table. The tests were performed on a machine with one dual-core CPU, an SSD drive, 8G of RAM and the InnoDB buffer pool set to 8 GB.

    Table 1: SELECT with LIMIT clause

    mysql> SELECT si_title, match(si_title, si_text) against('family') as rel FROM si WHERE match(si_title, si_text) against('family') ORDER BY rel desc LIMIT 10;

                         InnoDB     MyISAM            Times faster
    Time for the query   1.63 sec   3 min 26.31 sec   127

    You can see that for this particular query (retrieve the top 10 records), InnoDB Full-Text Search is now approximately 127 times faster than MyISAM.

    Table 2: SELECT COUNT query

    mysql> select count(*) from si where match(si_title, si_text) against('family');
    +----------+
    | count(*) |
    +----------+
    |   293955 |
    +----------+

                         InnoDB     MyISAM             Times faster
    Time for the query   1.35 sec   28 min 59.59 sec   1289

    In this particular case, where there are 293k matching results, InnoDB took only 1.35 seconds to get all of them, while it took MyISAM almost half an hour; that is about 1289 times faster!
    Table 3: SELECT ID with ORDER BY and LIMIT clause for selected terms

    mysql> SELECT <ID>, match(si_title, si_text) against(<TERM>) as rel FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) ORDER BY rel desc LIMIT 10;

    Term                                        InnoDB (time)   MyISAM (time)   Times faster
    family                                      0.5 sec         5.05 sec        10.1
    family film                                 0.95 sec        25.39 sec       26.7
    Pizza restaurant orange county california   0.93 sec        32.03 sec       34.4
    President united states of america          2.5 sec         36.98 sec       14.8

    Table 4: SELECT title and text with ORDER BY and LIMIT clause for selected terms

    mysql> SELECT <ID>, si_title, si_text, ... as rel FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) ORDER BY rel desc LIMIT 10;

    Term                                        InnoDB (time)   MyISAM (time)   Times faster
    family                                      0.61 sec        41.65 sec       68.3
    family film                                 1.15 sec        47.17 sec       41.0
    Pizza restaurant orange county california   1.03 sec        48.2 sec        46.8
    President united states of america          2.49 sec        44.61 sec       17.9

    Table 5: SELECT ID with ORDER BY and LIMIT clause for selected terms

    mysql> SELECT <ID>, match(si_title, si_text) against(<TERM>) as rel FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) ORDER BY rel desc LIMIT 10;

    Term                                        InnoDB (time)   MyISAM (time)   Times faster
    family                                      0.5 sec         5.05 sec        10.1
    family film                                 0.95 sec        25.39 sec       26.7
    Pizza restaurant orange county california   0.93 sec        32.03 sec       34.4
    President united states of america          2.5 sec         36.98 sec       14.8

    Table 6: SELECT COUNT(*)

    mysql> SELECT count(*) FROM si_<TB> WHERE match(si_title, si_text) against (<TERM>) LIMIT 10;

    Term                                        InnoDB (time)   MyISAM (time)   Times faster
    family                                      0.47 sec        82 sec          174.5
    family film                                 0.83 sec        131 sec         157.8
    Pizza restaurant orange county california   0.74 sec        106 sec         143.2
    President united states of america          1.96 sec        220 sec         112.2

    Again, Tables 3 to 6 all show InnoDB consistently outperforming MyISAM in these queries by a large margin. It becomes obvious that InnoDB has a great advantage over MyISAM in handling large data searches.

    Summary: These results demonstrate the great performance we can achieve by making the MySQL optimizer and InnoDB Full-Text Search more tightly coupled. I think there are still many cases where InnoDB's result info has not been fully taken advantage of, which means we still have great room to improve. And we will continue to explore this area and get more dramatic results for InnoDB full-text searches. Jimmy Yang, September 29, 2012

    Read the article

  • (Oracle Performance) Will a query based on a view limit the view using the where clause?

    - by BestPractices
    In Oracle (10g), when I use a View (not Materialized View), does Oracle take into account the where clause when it executes the view? Let's say I have: MY_VIEW = SELECT * FROM PERSON P, ORDERS O WHERE P.P_ID = O.P_ID And I then execute the following: SELECT * FROM MY_VIEW WHERE MY_VIEW.P_ID = '1234' When this executes, does oracle first execute the query for the view and THEN filter it based on my where clause (where MY_VIEW.P_ID = '1234') or does it do this filtering as part of the execution of the view? If it does not do the latter, and P_ID had an index, would I also lose out on the indexing capability since Oracle would be executing my query against the view which doesn't have the index rather than the base table which has the index?

    Read the article

  • Is this way of storing typed objects in memory good?

    - by Pindatjuh
    This is an "is this okay, or can it be done better" question. Topic: Storing typed objects in memory. Background information: I'm building a compiler for the x86-32 platform for my language. My goal includes typed objects. Idea: Every primitive is a semi-class (it can be used as if it was a normal class, but it's stored more compact). Every class is represented by primitives and some meta-data (containing class-properties, inheritance stuff, etc.). The meta-data is complex: it doesn't use fields but instead context-switches. For primitives, the meta-data is very small, compared to a "real" class, which is alot bigger. This enables another idea that "primitives are objects", in my language, which I found nessecairy. Example: If I have an array of 32 booleans, then the pure content of this array is exactly 4 byte (32 bits of booleans). The meta-data will contain flags that the type is an array of booleans, which contains 32 entries. The meta-data is very compacted, on bit-level: using a sort of "packing" mechanism, which is read by a FSM at runtime, when doing inspection of the type (like when passing the object to methods for checking, etc.) For instance (read from left to right, top to bottom, remember vertical possition when going to the right, and check nearest column header for meaning of switch): Primitive? Array? Type-Meta 1 Byte? || Size (1 byte) 1 1 [...] 1 [...] done 0 2 Bytes? || Size (2 bytes) 1 [...] done || Size (4 bytes) 0 [...] done Integer? 1 Byte? 2 Bytes? 0 1 0 1 done 1 done 0 done Boolean? Byte? 0 1 0 done 1 done More-Primitives 0 .... Class-Stuff (Huge) 0 ... (After reaching done the data is inserted. || = byte alignement. [...] is variable sized. ... is not described here, for simplicity. And let's call them cost-based-data-structures.) For an array of 32 booleans containing all true values, the memory for this type would be (read top-down): 1 Primitive 1 Array 1 ArrayType: Primitive 0 Not-Array 0 Not-Integer 1 Boolean 0 Not-Byte (thus bit) 1 Integer Size: 1 Byte 00100000 Array size 11111111 11111111 11111111 11111111 Data Thus, 8 bytes represent 32 booleans in an array: 11100101 00100000 11111111 11111111 11111111 11111111 Is this okay, or can it be done better?

    Read the article

  • How can I limit access to a particular class to one caller at a time in a web service?

    - by MusiGenesis
    I have a web service method in which I create a particular type of object, use it for a few seconds, and then dispose it. Because of problems arising from multiple threads creating and using instances of this class at the same time, I need to restrict the method so that only one caller at a time ever has one of these objects. To do this, I am creating a private static object:

    private static object _lock = new object();

    ... and then inside the web service method I do this around the critical code:

    lock (_lock)
    {
        using (DangerousObject dangerous = new DangerousObject())
        {
            dangerous.MakeABigMess();
            dangerous.CleanItUp();
        }
    }

    I'm not sure this is working, though. Do I have this right? Will this code ensure that only one instance of DangerousObject is instantiated and in use at a time?

    Read the article

  • How to get a volume measurement of iPhone recording in dB, with a limit of at least 120dB

    - by Cyber
    Hello, I am trying to make a simple volume meter for the iPhone. I want the volume displayed in dB. When using this tutorial, I am only getting measurements up to 78 dB. I've read that that is because the dBFS range for 16-bit audio recordings is only 96 dB. I tried modifying this piece of code in the init function:

    dataFormat.mSampleRate = 44100.0f;
    dataFormat.mFormatID = kAudioFormatLinearPCM;
    dataFormat.mFramesPerPacket = 1;
    dataFormat.mChannelsPerFrame = 1;
    dataFormat.mBytesPerFrame = 2;
    dataFormat.mBytesPerPacket = 2;
    dataFormat.mBitsPerChannel = 16;
    dataFormat.mReserved = 0;

    I changed the value of mBitsPerChannel, hoping to increase the bit depth of the recording:

    dataFormat.mBitsPerChannel = 32;

    With that variable set to 32, the "mAveragePower" function returns only 0. So, how can I measure more decibels? All my code is practically the same as in the tutorial I posted above. Thanks in advance, Thomas

    Read the article

  • Can a memory page be moved by modifying the page table?

    - by Adam
    Is it possible (on any reasonable OS, preferably Linux) to swap the contents of two memory pages by only modifying the page table and not actually moving any data? The motivation is a dense matrix transpose. If the data were blocked by page size it would be possible to transpose the data within a page (fits in cache) then swap pages to move the blocks into their final place. A large matrix would have many many pages moved, so hopefully flushing the TLB wouldn't cause trouble.

    Read the article

  • How can I limit access to a particular class to one caller at a time in an ASMX web service?

    - by MusiGenesis
    I have a web service method in which I create a particular type of object, use it for a few seconds, and then dispose it. Because of problems arising from multiple threads creating and using instances of this class at the same time, I need to restrict the method so that only one caller at a time ever has one of these objects. To do this, I am creating a private static object:

    private static object _lock = new object();

    ... and then inside the web service method I do this around the critical code:

    lock (_lock)
    {
        using (DangerousObject dangerous = new DangerousObject())
        {
            dangerous.MakeABigMess();
            dangerous.CleanItUp();
        }
    }

    I'm not sure this is working, though. Do I have this right? Will this code ensure that only one instance of DangerousObject is instantiated and in use at a time? Or does each caller get their own copy of _lock, rendering my code here laughable?
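    For what it's worth, here is a minimal, self-contained sketch (all names illustrative, not from the question) of why this pattern serializes callers: because _lock is static there is exactly one lock object per AppDomain, so every caller contends on the same monitor and the using block runs for one caller at a time:

    using System;
    using System.Linq;
    using System.Threading;
    using System.Threading.Tasks;

    static class Gate
    {
        private static readonly object _lock = new object(); // one shared instance for all callers in the AppDomain
        private static int _inUse;                            // how many "dangerous" objects exist right now

        public static void DoDangerousWork()
        {
            lock (_lock)
            {
                if (Interlocked.Increment(ref _inUse) != 1)
                    throw new InvalidOperationException("two callers got in at once");
                Thread.Sleep(10);                             // stand-in for MakeABigMess/CleanItUp
                Interlocked.Decrement(ref _inUse);
            }
        }
    }

    class Demo
    {
        static void Main()
        {
            // 20 concurrent callers; the static lock admits them strictly one at a time
            Task.WaitAll(Enumerable.Range(0, 20)
                                   .Select(_ => Task.Run(() => Gate.DoDangerousWork()))
                                   .ToArray());
            Console.WriteLine("done, no overlap detected");
        }
    }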

    Read the article

  • Windows Azure Role Instance Limits

    - by kaleidoscope
    Brief overview of the limits imposed on hosted services in Windows Azure, across the four billing situations:

    1) Token (CTP), effective before Dec. 10th 2009
    2) Token (CTP), effective after Dec. 10th 2009
    3) Token (non-billing country), effective after Jan. 4th 2010
    4) Paying subscription, effective after Jan. 4th 2010

                             (1)        (2)   (3)        (4)
    Deployment Slots         2          2     2          2
    Hosted Services          1          1     20         20
    Roles per deployment     5          5     5          5
    Instances per Role       2          2     no limit   no limit
    VM CPU Cores             no limit   8     8          20
    Storage Accounts         2          2     5          5

    More Information: http://blog.toddysm.com/2010/01/windows-azure-role-instance-limits-explained.html

    Amit, S

    Read the article

  • Garbage Collector not doing its job. Memory Consumption = 1.5GB & OutOfMemoryException.

    - by imageWorker
    I'm working with images (each of size = 5MB). The following code extracts some information from each image present in the given directory. I'm getting an out of memory exception, and the size of the process is around 1.5GB. I don't know why the garbage collector is not freeing memory. I even tried adding GC.Collect() as the last line of the foreach loop; I still get 'OutOfMemory'.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.Threading;
    using System.IO;
    using System.Drawing;
    using System.Drawing.Imaging;

    namespace TrainSVM
    {
        class Program
        {
            static void Main(string[] args)
            {
                FileStream fs = new FileStream("dg.train", FileMode.OpenOrCreate, FileAccess.Write);
                StreamWriter sw = new StreamWriter(fs);
                String[] filePathArr = Directory.GetFiles("E:\\images\\");
                foreach (string filePath in filePathArr)
                {
                    if (filePath.Contains("lmn"))
                    {
                        sw.Write("1 ");
                        Console.Write("1 ");
                    }
                    else
                    {
                        sw.Write("1 ");
                        Console.Write("1 ");
                    }
                    Bitmap originalBMP = new Bitmap(filePath);

                    /***********************/
                    Bitmap imageBody;
                    ImageBody.ImageBody im = new ImageBody.ImageBody(originalBMP);
                    imageBody = im.GetImageBody(-1);

                    /* white coat */
                    Bitmap whiteCoatBitmap = Rgb2Hsi.Rgb2Hsi.GetHuePlane(imageBody);
                    float WhiteCoatPixelPercentage = Rgb2Hsi.Rgb2Hsi.GetWhiteCoatPixelPercentage(whiteCoatBitmap);
                    //Console.Write("whiteDone\t");
                    sw.Write("1:" + WhiteCoatPixelPercentage + " ");
                    Console.Write("1:" + WhiteCoatPixelPercentage + " ");

                    /******************/
                    Quaternion.Quaternion qtr = new Quaternion.Quaternion(-15);
                    Bitmap yellowCoatBMP = qtr.processImage(imageBody);
                    //yellowCoatBMP.Save("yellowCoat.bmp");
                    float yellowCoatPixelPercentage = qtr.GetYellowCoatPixelPercentage(yellowCoatBMP);
                    //Console.Write("yellowCoatDone\t");
                    sw.Write("2:" + yellowCoatPixelPercentage + " ");
                    Console.Write("2:" + yellowCoatPixelPercentage + " ");

                    /**********************/
                    Bitmap balckPatchBitmap = BlackPatchDetection.BlackPatchDetector.MarkBlackPatches(imageBody);
                    float BlackPatchPixelPercentage = BlackPatchDetection.BlackPatchDetector.BlackPatchPercentage;
                    //Console.Write("balckPatchDone\n");
                    sw.Write("3:" + BlackPatchPixelPercentage + "\n");
                    Console.Write("3:" + BlackPatchPixelPercentage + "\n");

                    balckPatchBitmap.Dispose();
                    yellowCoatBMP.Dispose();
                    whiteCoatBitmap.Dispose();
                    originalBMP.Dispose();
                    sw.Flush();
                }
                sw.Dispose();
                fs.Dispose();
            }
        }
    }
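    One detail worth noting in the loop above, offered as a possible contributor rather than a definitive diagnosis: the intermediate imageBody bitmap returned by GetImageBody(-1) is never disposed, so its unmanaged GDI+ memory stays alive until a finalizer eventually runs, regardless of GC.Collect(). A minimal sketch of the per-file section with every bitmap in a using block (assuming GetImageBody returns a new Bitmap that the caller owns):

    Quaternion.Quaternion qtr = new Quaternion.Quaternion(-15);

    using (Bitmap originalBMP = new Bitmap(filePath))
    using (Bitmap imageBody = new ImageBody.ImageBody(originalBMP).GetImageBody(-1))
    using (Bitmap whiteCoatBitmap = Rgb2Hsi.Rgb2Hsi.GetHuePlane(imageBody))
    using (Bitmap yellowCoatBMP = qtr.processImage(imageBody))
    using (Bitmap blackPatchBitmap = BlackPatchDetection.BlackPatchDetector.MarkBlackPatches(imageBody))
    {
        // ... the same sw.Write feature-extraction calls as in the original loop body ...
    }   // all five bitmaps are disposed here, even if an exception is thrown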

    Read the article

  • What should I call the operation that limit a string's length?

    - by egarcia
    This is a language-agnostic question - unless you count English as a language. I've got this list of items which can have very long names. For aesthetic purposes, these names must be made shorter in some cases, adding dots (...) to indicate that the name is longer. So for example, if article.name returns this: lorem ipsum dolor sit amet I'd like to get this other output. lorem ipsum dolor ... I can program this quite easily. My question is: how should I call that shortening operation? I mean the name, not the implementation. Is there a standard English name for it?
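    The operation described above is usually just called truncating (or ellipsizing) a string. As a tiny illustrative sketch in C# (the method name and the three-dot suffix are arbitrary choices):

    // Shorten a string to at most maxLength characters, appending "..." when it was cut.
    static string Truncate(string s, int maxLength)
    {
        if (string.IsNullOrEmpty(s) || s.Length <= maxLength)
            return s;
        return s.Substring(0, maxLength) + "...";
    }

    // Truncate("lorem ipsum dolor sit amet", 18) returns "lorem ipsum dolor ..."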

    Read the article

  • How can I solve an out of memory exception in a generic list?

    - by Phsika
    How can I solve an out of memory exception in a generic list when adding new values?

    foreach (DataColumn dc in dTable.Columns)
        foreach (DataRow dr in dTable.Rows)
            myScriptCellsCount.MyCellsCharactersCount.Add(dr[dc].ToString().Length);

    My base class:

    public class MyExcelSheetsCells
    {
        public List<int> MyCellsCharactersCount { get; set; }

        public MyExcelSheetsCells()
        {
            MyCellsCharactersCount = new List<int>();
        }
    }
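    One possible mitigation, assuming the total number of cells is known up front: give the list its final capacity once, so it does not repeatedly allocate ever-larger backing arrays (each growth briefly needs both the old and the new array, which is often what tips a very large list over the memory limit). A sketch using the names from the question:

    // Preallocate rows * columns entries once instead of growing by repeated doubling.
    var cellCounts = new List<int>(dTable.Rows.Count * dTable.Columns.Count);

    foreach (DataColumn dc in dTable.Columns)
        foreach (DataRow dr in dTable.Rows)
            cellCounts.Add(dr[dc].ToString().Length);

    myScriptCellsCount.MyCellsCharactersCount = cellCounts;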

    Read the article

  • How to share memory buffer across sessions in Django?

    - by afriza
    I want to have one party (or more) send a stream of data via HTTP request(s). Other parties should be able to receive the same stream of data in near real-time. The data stream should be accessible across sessions (according to an access control list). How can I do this in Django? If possible I would like to avoid database access and use an in-memory buffer (along with some synchronization mechanism).

    Read the article

  • dilemma about mysql. using condition to limit load on a dbf

    - by ondrobaco
    hi, I have a table of about 800,000 records. It's basically a log which I query often. I added a condition to query only entries that were entered last month, in an attempt to reduce the load on the database. My thinking is: a) if the database goes only through the last month and then returns entries, that's good; b) if the database goes through the whole table and checks the condition against every single record, it's actually worse than no condition. What is your opinion? How would you go about reducing the load on the database?

    Read the article
