Search Results

Search found 15637 results on 626 pages for 'memory efficient'.


  • C# - Which is more efficient and thread-safe: static or instance classes?

    - by Soni Ali
    Consider the following two scenarios:

        //Data Contract
        public class MyValue { }

    Scenario 1: Using a static helper class.

        public class Broker
        {
            private string[] _userRoles;

            public Broker(string[] userRoles)
            {
                this._userRoles = userRoles;
            }

            public MyValue[] GetValues()
            {
                return BrokerHelper.GetValues(this._userRoles);
            }
        }

        static class BrokerHelper
        {
            static Dictionary<string, MyValue> _values = new Dictionary<string, MyValue>();

            public static MyValue[] GetValues(string[] rolesAllowed)
            {
                return FilterForRoles(_values, rolesAllowed);
            }
        }

    Scenario 2: Using an instance class.

        public class Broker
        {
            private BrokerService _service;

            public Broker(params string[] userRoles)
            {
                this._service = new BrokerService(userRoles);
            }

            public MyValue[] GetValues()
            {
                return _service.GetValues();
            }
        }

        class BrokerService
        {
            private Dictionary<string, MyValue> _values;
            private string[] _userRoles;

            public BrokerService(string[] userRoles)
            {
                this._userRoles = userRoles;
                this._values = new Dictionary<string, MyValue>();
            }

            public MyValue[] GetValues()
            {
                return FilterForRoles(_values, _userRoles);
            }
        }

    Which of the [Broker] scenarios will scale best if used in a web environment with about 100 different roles and over a thousand users? NOTE: Feel free to suggest any alternative approach.
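
    Whichever scenario wins on raw efficiency, the static dictionary in Scenario 1 is shared by every request in the process, so concurrent reads and writes to it are not thread-safe without extra work. A minimal sketch of one thread-safe variant using ConcurrentDictionary (the filter semantics are an assumption, since FilterForRoles is not shown in the question):

        using System.Collections.Concurrent;
        using System.Linq;

        static class BrokerHelper
        {
            // ConcurrentDictionary makes individual reads, writes and
            // enumeration safe under concurrent access, without a global lock.
            private static readonly ConcurrentDictionary<string, MyValue> _values =
                new ConcurrentDictionary<string, MyValue>();

            public static void Set(string role, MyValue value)
            {
                _values[role] = value; // thread-safe upsert
            }

            public static MyValue[] GetValues(string[] rolesAllowed)
            {
                // Assumed filter semantics: return the stored value per allowed role.
                return rolesAllowed
                    .Select(role => _values.TryGetValue(role, out var v) ? v : null)
                    .Where(v => v != null)
                    .ToArray();
            }
        }

    With that hazard handled, the static version avoids rebuilding the dictionary per request, which is typically what matters at a hundred roles and a thousand users.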


  • Is there a more efficient way to get the number of search results from a Google query?

    - by highone
    Right now I am using this code:

        string url = "http://www.google.com/search?sourceid=chrome&ie=UTF-8&q=hey&esrch=FT1";
        string source = getPageSource(url);
        string[] stringSeparators = new string[] { "<b>", "</b>" };
        string[] b = source.Split(stringSeparators, StringSplitOptions.None);
        bool isResultNum = false;
        foreach (string s in b)
        {
            if (isResultNum)
            {
                MessageBox.Show(s.Replace(",", ""));
                return;
            }
            if (s.Contains(" of about "))
            {
                isResultNum = true;
            }
        }

    Unfortunately it is very slow; is there a better way to do it? Also, is it legal to query Google like this? From the answer in this question it didn't sound like it was: http://stackoverflow.com/questions/903747/how-to-download-google-search-results
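
    For what it's worth, a single regex pass over the page avoids splitting the whole document into an array. A minimal sketch, with the caveats that the "of about <b>N</b>" markup is an assumption about Google's HTML (it can change at any time) and that scraping result pages may violate Google's terms of service, so an official search API is the safer route:

        using System.Text.RegularExpressions;

        static long? ParseResultCount(string source)
        {
            // Capture the digits (with thousands separators) between the <b> tags.
            Match m = Regex.Match(source, @" of about <b>([\d,]+)</b>");
            if (!m.Success)
                return null; // markup changed, or no result count on the page
            return long.Parse(m.Groups[1].Value.Replace(",", ""));
        }

    Note that most of the observed slowness is likely the HTTP round trip inside getPageSource rather than the string splitting itself.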


  • Can a memory page be moved by modifying the page table?

    - by Adam
    Is it possible (on any reasonable OS, preferably Linux) to swap the contents of two memory pages by only modifying the page table and not actually moving any data? The motivation is a dense matrix transpose. If the data were blocked by page size, it would be possible to transpose the data within a page (it fits in cache) and then swap pages to move the blocks into their final places. A large matrix would have many, many pages moved, so hopefully flushing the TLB wouldn't cause trouble.


  • Simple but efficient way to store a series of small changes to an image?

    - by finnw
    I have a series of images. Each one is typically (but not always) similar to the previous one, with 3 or 4 small rectangular regions updated. I need to record these changes using a minimum of disk space. The source images are not compressed, but I would like the deltas to be compressed. I need to be able to recreate the images exactly as input (so a lossy video codec is not appropriate). I am thinking of something along the lines of:

        1. Composite the new image with a negative of the old image.
        2. Save the composited image in any common format that can compress using RLE (probably PNG).
        3. Recreate the second image by compositing the previous image with the delta.

    Although the images have an alpha channel, I can ignore it for the purposes of this function. Is there an easy-to-implement algorithm or free Java library with this capability?
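
    An exactly invertible variant of the composite step is a per-byte XOR: the delta is zero wherever the images agree (so it RLE-compresses well), and applying the same XOR to the old image reconstructs the new one bit-for-bit. A sketch of the idea, shown in C#/System.Drawing for brevity even though the question asks about Java (the same loop maps onto a BufferedImage raster):

        using System;
        using System.Drawing;
        using System.Drawing.Imaging;
        using System.Runtime.InteropServices;

        static Bitmap Xor(Bitmap a, Bitmap b)
        {
            // Assumes a and b have identical dimensions.
            var rect = new Rectangle(0, 0, a.Width, a.Height);
            var result = new Bitmap(a.Width, a.Height, PixelFormat.Format32bppArgb);
            BitmapData da = a.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
            BitmapData db = b.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
            BitmapData dr = result.LockBits(rect, ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);

            int bytes = Math.Abs(da.Stride) * a.Height;
            byte[] bufA = new byte[bytes], bufB = new byte[bytes];
            Marshal.Copy(da.Scan0, bufA, 0, bytes);
            Marshal.Copy(db.Scan0, bufB, 0, bytes);
            for (int i = 0; i < bytes; i++)
                bufA[i] ^= bufB[i]; // zero wherever the two images agree
            Marshal.Copy(bufA, 0, dr.Scan0, bytes);

            a.UnlockBits(da); b.UnlockBits(db); result.UnlockBits(dr);
            return result;
        }

    Saving Xor(oldImage, newImage) as PNG gives the compressed delta; Xor(oldImage, delta) rebuilds newImage exactly.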


  • Garbage Collector not doing its job. Memory Consumption = 1.5GB & OutOfMemory Exception.

    - by imageWorker
    I'm working with images (each of size = 5MB). The following code extracts some information from each image that is present in the given directory. I'm getting an out of memory exception. The size of the process is around 1.5GB. I don't know why the garbage collector is not freeing memory. I even tried adding GC.Collect() as the last line of the foreach loop. Still I'm getting 'OutOfMemory'.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Threading;
        using System.IO;
        using System.Drawing;
        using System.Drawing.Imaging;

        namespace TrainSVM
        {
            class Program
            {
                static void Main(string[] args)
                {
                    FileStream fs = new FileStream("dg.train", FileMode.OpenOrCreate, FileAccess.Write);
                    StreamWriter sw = new StreamWriter(fs);
                    String[] filePathArr = Directory.GetFiles("E:\\images\\");
                    foreach (string filePath in filePathArr)
                    {
                        if (filePath.Contains("lmn"))
                        {
                            sw.Write("1 ");
                            Console.Write("1 ");
                        }
                        else
                        {
                            sw.Write("1 ");
                            Console.Write("1 ");
                        }
                        Bitmap originalBMP = new Bitmap(filePath);

                        /***********************/
                        Bitmap imageBody;
                        ImageBody.ImageBody im = new ImageBody.ImageBody(originalBMP);
                        imageBody = im.GetImageBody(-1);

                        /* white coat */
                        Bitmap whiteCoatBitmap = Rgb2Hsi.Rgb2Hsi.GetHuePlane(imageBody);
                        float WhiteCoatPixelPercentage = Rgb2Hsi.Rgb2Hsi.GetWhiteCoatPixelPercentage(whiteCoatBitmap);
                        //Console.Write("whiteDone\t");
                        sw.Write("1:" + WhiteCoatPixelPercentage + " ");
                        Console.Write("1:" + WhiteCoatPixelPercentage + " ");

                        /******************/
                        Quaternion.Quaternion qtr = new Quaternion.Quaternion(-15);
                        Bitmap yellowCoatBMP = qtr.processImage(imageBody);
                        //yellowCoatBMP.Save("yellowCoat.bmp");
                        float yellowCoatPixelPercentage = qtr.GetYellowCoatPixelPercentage(yellowCoatBMP);
                        //Console.Write("yellowCoatDone\t");
                        sw.Write("2:" + yellowCoatPixelPercentage + " ");
                        Console.Write("2:" + yellowCoatPixelPercentage + " ");

                        /**********************/
                        Bitmap balckPatchBitmap = BlackPatchDetection.BlackPatchDetector.MarkBlackPatches(imageBody);
                        float BlackPatchPixelPercentage = BlackPatchDetection.BlackPatchDetector.BlackPatchPercentage;
                        //Console.Write("balckPatchDone\n");
                        sw.Write("3:" + BlackPatchPixelPercentage + "\n");
                        Console.Write("3:" + BlackPatchPixelPercentage + "\n");

                        balckPatchBitmap.Dispose();
                        yellowCoatBMP.Dispose();
                        whiteCoatBitmap.Dispose();
                        originalBMP.Dispose();
                        sw.Flush();
                    }
                    sw.Dispose();
                    fs.Dispose();
                }
            }
        }
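
    One likely culprit, hedged since the helper types aren't shown: imageBody is never disposed, and GDI+ bitmaps hold their pixel data in unmanaged memory that the garbage collector cannot see, so collections are not triggered by the real memory pressure. Deterministic disposal with using blocks is the usual fix; a trimmed sketch of the loop body under that assumption:

        using (var originalBMP = new Bitmap(filePath))
        {
            var im = new ImageBody.ImageBody(originalBMP);
            using (Bitmap imageBody = im.GetImageBody(-1)) // was never disposed above
            {
                using (Bitmap whiteCoatBitmap = Rgb2Hsi.Rgb2Hsi.GetHuePlane(imageBody))
                {
                    float whitePct = Rgb2Hsi.Rgb2Hsi.GetWhiteCoatPixelPercentage(whiteCoatBitmap);
                    sw.Write("1:" + whitePct + " ");
                }
                // ... same using pattern for yellowCoatBMP and balckPatchBitmap ...
            }
        }

    If ImageBody, Rgb2Hsi or the quaternion helper allocate intermediate Bitmaps internally, those need the same treatment.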


  • Is there a more efficient AS3 way to compare 2 arrays for adds, removes & updates?

    - by WillyCornbread
    Hi all - I'm wondering if there is a better way to approach this than my current solution... I have a list of items, I then retrieve another list of items. I need to compare the two lists and come up with a list of items that are existing (for update), a list that are not existing in the new list (for removal) and a list of items that are not existing in the old list (for adding). Here is what I'm doing now - basically creating a lookup object for testing if an item exists. Thanks for any tips.

        for each (itm in _oldItems)
        {
            _oldLookup[itm.itemNumber] = itm;
        }

        // Loop through items and check if they already exist in the 'old' list
        for each (itm in _items)
        {
            // If an item exists in the old list - push it for update
            if (_oldLookup[itm.itemNumber])
            {
                _itemsToUpdate.push(itm);
            }
            else // otherwise push it into the items to add
            {
                _itemsToAdd.push(itm);
            }

            // remove it from the lookup list - this will leave only
            // items for removal remaining in the lookup
            delete _oldLookup[itm.itemNumber];
        }

        // The items remaining in the lookup object have neither been added or updated -
        // so they must be for removal - add to list for removal
        for each (itm in _oldLookup)
        {
            _itemsToRemove.push(itm);
        }


  • How can I solve an out of memory exception when adding to a generic List?

    - by Phsika
    How can I solve an out of memory exception that occurs when adding new values here?

        foreach (DataColumn dc in dTable.Columns)
            foreach (DataRow dr in dTable.Rows)
                myScriptCellsCount.MyCellsCharactersCount.Add(dr[dc].ToString().Length);

    My base class:

        public class MyExcelSheetsCells
        {
            public List<int> MyCellsCharactersCount { get; set; }

            public MyExcelSheetsCells()
            {
                MyCellsCharactersCount = new List<int>();
            }
        }
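
    One hedged guess at the cause: List<int> grows by doubling its internal array, so appending tens of millions of values briefly requires the old and new arrays to coexist as contiguous blocks, and that allocation is often what throws. Pre-sizing the list to its final size avoids every intermediate reallocation:

        // Single up-front allocation sized from the DataTable's dimensions.
        int cellCount = dTable.Rows.Count * dTable.Columns.Count;
        myScriptCellsCount.MyCellsCharactersCount = new List<int>(cellCount);

        foreach (DataColumn dc in dTable.Columns)
            foreach (DataRow dr in dTable.Rows)
                myScriptCellsCount.MyCellsCharactersCount.Add(dr[dc].ToString().Length);

    If even the pre-sized allocation fails, the data genuinely exceeds what a single array can hold in the process, and a streaming approach (writing the counts out instead of keeping them all in memory) is the next step.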


  • How to share memory buffer across sessions in Django?

    - by afriza
    I want to have one party (or more) send a stream of data via HTTP request(s). Other parties will be able to receive the same stream of data in almost real-time. The data stream should be accessible across sessions (according to an access control list). How can I do this in Django? If possible I would like to avoid database access and use an in-memory buffer (along with some synchronization mechanism).


  • t-sql most efficient row to column? for xml path, pivot

    - by ajberry
        create table _orders (
            OrderId int identity(1,1) primary key nonclustered
            ,CustomerId int
        )

        create table _details (
            DetailId int identity(1,1) primary key nonclustered
            ,OrderId int
            ,ProductId int
        )

        insert into _orders (CustomerId)
        select 1 union select 2 union select 3

        insert into _details (OrderId, ProductId)
        select 1,100 union select 1,158 union select 1,234
        union select 2,125 union select 3,105 union select 3,101
        union select 3,212 union select 3,250

        select orderid
            ,REPLACE((
                SELECT ' ' + CAST(ProductId as varchar)
                FROM _details d
                WHERE d.OrderId = o.OrderId
                ORDER BY d.OrderId, d.DetailId
                FOR XML PATH('')
            ), '&#x20;', '') as Products
        from _orders o

    I am looking for the most performant way to turn rows into columns. I have a requirement to output the contents of the db (not actual schema above, but the concept is similar) in both fixed width and delimited formats. The above FOR XML PATH query gives me the result I want, but when dealing with anything other than small amounts of data, it can take a while. I've looked at PIVOT but most of the examples I have found are aggregating information. I just want to combine the child rows and tack them onto the parent. For example, for an order it would need to output:

        OrderId,Product1,Product2,Product3,etc

    Thoughts or suggestions? I am using SQL Server 2k5.


  • t-sql most efficient row to column? crosstab for xml path, pivot

    - by ajberry
    I am looking for the most performant way to turn rows into columns. I have a requirement to output the contents of the db (not actual schema below, but the concept is similar) in both fixed width and delimited formats. The below FOR XML PATH query gives me the result I want, but when dealing with anything other than small amounts of data, it can take a while.

        select orderid
            ,REPLACE((
                SELECT ' ' + CAST(ProductId as varchar)
                FROM _details d
                WHERE d.OrderId = o.OrderId
                ORDER BY d.OrderId, d.DetailId
                FOR XML PATH('')
            ), '&#x20;', '') as Products
        from _orders o

    I've looked at PIVOT but most of the examples I have found are aggregating information. I just want to combine the child rows and tack them onto the parent. I should also point out I don't need to deal with the column names either, since the output of the child rows will either be a fixed width string or a delimited string. For example, given the following tables:

        OrderId     CustomerId
        ----------- -----------
        1           1
        2           2
        3           3

        DetailId    OrderId     ProductId
        ----------- ----------- -----------
        1           1           100
        2           1           158
        3           1           234
        4           2           125
        5           3           101
        6           3           105
        7           3           212
        8           3           250

    for an order I need to output:

        orderid     Products
        ----------- -----------------------
        1           100 158 234
        2           125
        3           101 105 212 250

    or

        orderid     Products
        ----------- -----------------------
        1           100|158|234
        2           125
        3           101|105|212|250

    Thoughts or suggestions? I am using SQL Server 2k5. Example Setup:

        create table _orders (
            OrderId int identity(1,1) primary key nonclustered
            ,CustomerId int
        )

        create table _details (
            DetailId int identity(1,1) primary key nonclustered
            ,OrderId int
            ,ProductId int
        )

        insert into _orders (CustomerId)
        select 1 union select 2 union select 3

        insert into _details (OrderId, ProductId)
        select 1,100 union select 1,158 union select 1,234
        union select 2,125 union select 3,105 union select 3,101
        union select 3,212 union select 3,250

    using FOR XML PATH:

        select orderid
            ,REPLACE((
                SELECT ' ' + CAST(ProductId as varchar)
                FROM _details d
                WHERE d.OrderId = o.OrderId
                ORDER BY d.OrderId, d.DetailId
                FOR XML PATH('')
            ), '&#x20;', '') as Products
        from _orders o

    which outputs what I want, but is very slow for large amounts of data. One of the child tables is over 2 million rows, pushing the processing time out to ~4 hours.

        orderid     Products
        ----------- -----------------------
        1           100 158 234
        2           125
        3           101 105 212 250


  • Efficient way to create/unpack large bitfields in C?

    - by drhorrible
    I have one microcontroller sampling from a lot of ADCs, and sending the measurements over a radio at a very low bitrate; bandwidth is becoming an issue. Right now, each ADC only gives us 10 bits of data, and it's being stored in a 16-bit integer. Is there an easy way to pack them in a deterministic way so that the first measurement is at bit 0, the second at bit 10, the third at bit 20, etc.? To make matters worse, the microcontroller is little endian, and I have no control over the endianness of the computer on the other side.
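
    One deterministic layout is bit-granular, LSB-first packing into a byte array: sample i always occupies bits 10*i through 10*i + 9, and because bytes are addressed individually, neither machine's word endianness matters. A sketch of the pack/unpack pair (illustrative C#, e.g. for the receiving side; the same shifts translate directly to C on the microcontroller):

        static byte[] Pack10(ushort[] samples)
        {
            // 10 bits per sample, rounded up to whole bytes.
            var packed = new byte[(samples.Length * 10 + 7) / 8];
            for (int i = 0; i < samples.Length; i++)
            {
                int bit = i * 10; // absolute bit position of this sample
                for (int b = 0; b < 10; b++)
                    if (((samples[i] >> b) & 1) != 0)
                        packed[(bit + b) / 8] |= (byte)(1 << ((bit + b) % 8));
            }
            return packed;
        }

        static ushort Unpack10(byte[] packed, int i)
        {
            int bit = i * 10;
            ushort v = 0;
            for (int b = 0; b < 10; b++)
                if (((packed[(bit + b) / 8] >> ((bit + b) % 8)) & 1) != 0)
                    v |= (ushort)(1 << b);
            return v;
        }

    Bit-by-bit loops are slow but obviously correct; once the layout is agreed on, both sides can be rewritten with wider shifts for speed.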


  • SSIS - Parallel Execution of Tasks - How efficient is it?

    - by Randy Minder
    I am building an SSIS package that will contain dozens of Sequence tasks. Each Sequence task will contain three tasks: one to truncate a destination table and remove its indexes, another to import data from a source table, and a third to add back the indexes on the destination table. My question is this: I currently have nine of these Sequence tasks built, and none are dependent on any of the others. When I execute the package, SSIS seems to do a pretty good job of determining which tasks in which Sequence to execute, which, by the way, appears to be quite random. As I continue adding more Sequences, should I attempt to be smarter about how SSIS should execute them, or is SSIS smart enough to do it itself? Thanks.


  • What is the most efficient/elegant way to parse a flat table into a tree?

    - by Tomalak
    Assume you have a flat table that stores an ordered tree hierarchy:

        Id  Name          ParentId  Order
        --  ------------  --------  -----
        1   'Node 1'      0         10
        2   'Node 1.1'    1         10
        3   'Node 2'      0         20
        4   'Node 1.1.1'  2         10
        5   'Node 2.1'    3         10
        6   'Node 1.2'    1         20

    What minimalistic approach would you use to output that to HTML (or text, for that matter) as a correctly ordered, correctly indented tree? Assume further you only have basic data structures (arrays and hashmaps), no fancy objects with parent/children references, no ORM, no framework, just your two hands. The table is represented as a result set, which can be accessed randomly. Pseudo code or plain English is okay, this is purely a conceptual question. Bonus question: Is there a fundamentally better way to store a tree structure like this in an RDBMS?

    EDITS AND ADDITIONS

    To answer one commenter's (Mark Bessey's) question: a root node is not necessary, because it is never going to be displayed anyway. ParentId = 0 is the convention to express "these are top level". The Order column defines how nodes with the same parent are going to be sorted. The "result set" I spoke of can be pictured as an array of hashmaps (to stay in that terminology); for my example it was meant to be already there. Some answers go the extra mile and construct it first, but that's okay. The tree can be arbitrarily deep. Each node can have N children. I did not exactly have a "millions of entries" tree in mind, though. Don't mistake my choice of node naming ('Node 1.1.1') for something to rely on. The nodes could equally well be called 'Frank' or 'Bob', no naming structure is implied; this was merely to make it readable. I have posted my own solution so you guys can pull it to pieces.
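
    One minimalistic answer, sketched in C#: group the rows by ParentId in a single pass, then walk depth-first from ParentId 0, sorting siblings by Order and indenting by depth. (Row here is an illustrative stand-in for the array-of-hashmaps result set described above.)

        using System;
        using System.Collections.Generic;
        using System.Linq;

        record Row(int Id, string Name, int ParentId, int Order);

        static void PrintTree(IEnumerable<Row> rows)
        {
            // One pass: children grouped under each ParentId, siblings sorted by Order.
            var byParent = rows.GroupBy(r => r.ParentId)
                               .ToDictionary(g => g.Key,
                                             g => g.OrderBy(r => r.Order).ToList());

            void Walk(int parentId, int depth)
            {
                if (!byParent.TryGetValue(parentId, out var children)) return;
                foreach (var child in children)
                {
                    Console.WriteLine(new string(' ', depth * 2) + child.Name);
                    Walk(child.Id, depth + 1); // recurse into this node's subtree
                }
            }

            Walk(0, 0); // ParentId = 0 marks the top level, per the question
        }

    The grouping is O(n) and the sibling sorts total O(n log n), independent of tree depth.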


  • What Amazon S3 .NET Library is most useful and efficient?

    - by Geo
    There are two main open source .NET Amazon S3 libraries:

        - Three Sharp
        - LitS3

    I am currently using LitS3 in our MVC demo project, but there is some criticism about it. Has anyone here used both libraries, so they can give an objective point of view? Below are some sample calls using LitS3. On the demo controller:

        private S3Service s3 = new S3Service()
        {
            AccessKeyID = "Thekey",
            SecretAccessKey = "testing"
        };

        public ActionResult Index()
        {
            ViewData["Message"] = "Welcome to ASP.NET MVC!";
            return View("Index", s3.GetAllBuckets());
        }

    On the demo view:

        <% foreach (var item in Model) { %>
            <p>
                <%= Html.Encode(item.Name) %>
            </p>
        <% } %>

    EDIT 1: Since I have to keep moving and there is no clear indication of which library is more effective and kept more up to date, I have implemented a repository pattern with an interface that will allow me to change libraries if I need to in the future. Below is a section of the S3Repository that I have created:

        using LitS3;

        namespace S3Helper.Models
        {
            public class S3Repository : IS3Repository
            {
                private S3Service _repository;

                #region IS3Repository Members

                public IQueryable<Bucket> FindAllBuckets()
                {
                    return _repository.GetAllBuckets().AsQueryable();
                }

                public IQueryable<ListEntry> FindAllObjects(string BucketName)
                {
                    return _repository.ListAllObjects(BucketName).AsQueryable();
                }

                #endregion
            }
        }

    If you have any information about this question please let me know in a comment, and I will get back and edit the question.

    EDIT 2: Since this question is not getting attention, I integrated both libraries in my web app to see the differences in design. I know this is probably a waste of time, but I really want a good long-run solution. Below you will see two samples of the same action with the two libraries; maybe this will motivate some of you to let me know your thoughts.

    WITH THREE SHARP LIBRARY:

        public IQueryable<T> FindAllBuckets<T>()
        {
            List<string> list = new List<string>();
            using (BucketListRequest request = new BucketListRequest(null))
            using (BucketListResponse response = service.BucketList(request))
            {
                XmlDocument bucketXml = response.StreamResponseToXmlDocument();
                XmlNodeList buckets = bucketXml.SelectNodes("//*[local-name()='Name']");
                foreach (XmlNode bucket in buckets)
                {
                    list.Add(bucket.InnerXml);
                }
            }
            return list.Cast<T>().AsQueryable();
        }

    WITH LITS3 LIBRARY:

        public IQueryable<T> FindAllBuckets<T>()
        {
            return _repository.GetAllBuckets()
                              .Cast<T>()
                              .AsQueryable();
        }


  • Should I write more SQL to be more efficient, or less SQL to be less buggy?

    - by RenderIn
    I've been writing a lot of one-off SQL queries to return exactly what a certain page needs and no more. Alternatively, I could reuse existing queries and issue a number of SQL requests linear in the number of records on the page. As an example, I have a query to return People and a query to return Job Details for a person. To return a list of people with their job details, I could query once for people and then once for each person to retrieve their job details. I've found that in most cases that solution returns things in a reasonable amount of time, but I don't know how well it will scale in my environment. Instead I've been writing queries to join people + job details, or people + salary history, etc. I'm looking at my models and I see how I could shave off maybe 30% of my code if I were to re-use existing queries. This is a big temptation. Is it a bad thing to go for reuse over efficiency in general, or does it all come down to the specific situation? Should I first do it the easy way and then optimize later, or is it best to get the code knocked out while everything is fresh in my mind? Thoughts, experiences?


  • What are the most efficient MySQL column types for this data?

    - by AlabamaKush
    I have several tables with some pretty standard data in each. Can somebody help me optimize them by telling me the best column types for this data? What's beside each entry is what I have currently:

        Number (max length 7)   --> MEDIUMINT(8) Unsigned
        Text (max length 30)    --> VARCHAR(30)
        Text (max length 200)   --> VARCHAR(200)
        Number (max length 4)   --> SMALLINT(5) Unsigned
        Number (either 0 or 1)  --> TINYINT(1) Unsigned
        Text (max length 500)   --> TEXT

    Any suggestions? I'm just guessing with this, so I know some of them are wrong...


  • What is the most efficient method to find the triangle that contains a given point?

    - by Christo
    Given the triangle with vertices (a, b, c):

            c
           / \
          /   \
         /     \
        a - - - b

    which is then subdivided into four triangles by halving each of the edges:

              c
             / \
            /   \
          ca- - -bc
          / \   / \
         /   \ /   \
        a- - -ab- - -b

    This results in four triangles: (a, ab, ca), (b, bc, ab), (c, ca, bc), (ab, bc, ca). Now given a point p, how do I determine in which triangle p lies, given that p is within the outer triangle (a, b, c)? Currently I intend to use ab as the origin: check whether p is to the left or right of the line "ca - ab" by taking the perp of "ca - ab" and comparing the sign of its dot product with "ab - a" against the sign of its dot product with the vector "p - ab". If it is the same, or the dot product is zero, then it must be in (a, ab, ca)... Continue with this procedure for the other outer triangles (b, bc, ab) & (c, ca, bc). In the end, if it didn't match any of these, it must be contained within the inner triangle (ab, bc, ca). Is there a better way to do it?
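
    The side-test idea works and needs at most three sign checks. A compact way to write it is with cross products (equivalent to the perp-dot test): p lies in a corner sub-triangle exactly when it is on the same side of that corner's chord as the corner vertex, and otherwise it is in the middle triangle. A sketch, assuming p is already known to be inside (a, b, c):

        using System;

        readonly record struct Pt(double X, double Y);

        // Cross product of (v - u) with (p - u): its sign tells which side of line u->v p is on.
        static double Cross(Pt u, Pt v, Pt p) =>
            (v.X - u.X) * (p.Y - u.Y) - (v.Y - u.Y) * (p.X - u.X);

        static Pt Mid(Pt u, Pt v) => new((u.X + v.X) / 2, (u.Y + v.Y) / 2);

        // Returns 0 = (a,ab,ca), 1 = (b,bc,ab), 2 = (c,ca,bc), 3 = middle (ab,bc,ca).
        static int Locate(Pt a, Pt b, Pt c, Pt p)
        {
            Pt ab = Mid(a, b), bc = Mid(b, c), ca = Mid(c, a);
            if (Cross(ab, ca, p) * Cross(ab, ca, a) > 0) return 0; // a's side of chord ab-ca
            if (Cross(bc, ab, p) * Cross(bc, ab, b) > 0) return 1; // b's side of chord bc-ab
            if (Cross(ca, bc, p) * Cross(ca, bc, c) > 0) return 2; // c's side of chord ca-bc
            return 3; // on or inside the middle triangle (points on a chord land here)
        }

    Multiplying the two cross products avoids caring which way each chord is oriented: a positive product means "same side as the corner vertex".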


  • Most efficient algorithm for mesh-level, optimal occlusion culling?

    - by Fredriku73
    I am new to culling. At first glance, it seems that most occlusion culling algorithms are object-level, not examining single meshes, which would be practical for game rendering. What I am looking for is an algorithm that culls all meshes within a single object that are occluded for a given viewpoint, with high accuracy. It needs to be O(n log n) or better; a naive mesh-by-mesh comparison (O(n^2)) is too slow. I notice that the Blender GUI identifies the occluded meshes for you in real-time, even if you work with large objects of 10,000+ meshes. What algorithm is used there, pray tell?

