Search Results

Search found 15637 results on 626 pages for 'memory efficient'.


  • Efficient implementation of threads in the given scenario

    - by shadeMe
    I've got a WinForms application that is set up in the following manner: two buttons, a textbox, a collection K, a function X, and another function Y. Function X parses a large database and stores some of its data in the global collection. Button 1 calls function X. Function Y walks through the collection and prints the data in the textbox. Button 2 calls function Y. I'd like to call function X from a worker thread in such a way that:
    - The form remains responsive to user input (this comes intrinsically from the use of a separate thread).
    - There is never more than a single instance of function X running at any point in time.
    - K can be accessed by both functions at all times.
    What would be the most efficient implementation of the above?
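
    One minimal sketch of how these requirements are commonly met, assuming nothing about the real X, Y, or K beyond the description above: run X on a thread-pool thread, guard it with an Interlocked flag so at most one instance runs, and protect K with a lock so both functions can touch it safely. The control names and the LoadData helper are hypothetical stand-ins.

        using System;
        using System.Collections.Generic;
        using System.Threading;
        using System.Windows.Forms;

        public class MainForm : Form
        {
            // Designer wiring of the two buttons to these handlers is assumed.
            private readonly TextBox textBox1 = new TextBox { Multiline = true, Dock = DockStyle.Fill };
            private readonly List<string> k = new List<string>();   // stand-in for collection K
            private readonly object kLock = new object();
            private int xRunning;                                    // 0 = idle, 1 = running

            public MainForm()
            {
                Controls.Add(textBox1);
            }

            // Button 1: run "function X" on a worker thread, never more than one instance.
            private void button1_Click(object sender, EventArgs e)
            {
                if (Interlocked.CompareExchange(ref xRunning, 1, 0) != 0)
                    return;                                          // X is already running
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    try
                    {
                        foreach (string row in LoadData())           // hypothetical DB parse
                            lock (kLock) k.Add(row);
                    }
                    finally
                    {
                        Interlocked.Exchange(ref xRunning, 0);
                    }
                });
            }

            // Button 2: "function Y" reads K and prints it to the textbox.
            private void button2_Click(object sender, EventArgs e)
            {
                string[] snapshot;
                lock (kLock) snapshot = k.ToArray();
                textBox1.Text = string.Join(Environment.NewLine, snapshot);
            }

            private IEnumerable<string> LoadData()
            {
                yield return "example row";                          // placeholder for the real parse
            }
        }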

    Read the article

  • Most efficient way for testing links

    - by Burnzy
    I'm currently developing an app that goes through all the files on a server and checks every single href to see whether it is valid or not. Using a WebClient or an HttpWebRequest/HttpWebResponse feels like overkill because it downloads the whole page each time, which is useless; I only need to check that the link does not return 404. What would be the most efficient way? A raw Socket seems like a good way of doing it, but I'm not quite sure how that works. Thanks for sharing your expertise!
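
    Rather than dropping down to raw sockets, one lightweight option is to issue an HTTP HEAD request, which returns only the status line and headers, never the body. A minimal sketch (the URL below is a placeholder, and servers that refuse HEAD would still need a GET fallback):

        using System;
        using System.Net;

        static class LinkChecker
        {
            // Returns true if the URL answers a HEAD request with anything other than 404.
            public static bool LinkExists(string url)
            {
                var request = (HttpWebRequest)WebRequest.Create(url);
                request.Method = "HEAD";            // headers only, no page download
                request.AllowAutoRedirect = true;
                request.Timeout = 10000;            // milliseconds

                try
                {
                    using (var response = (HttpWebResponse)request.GetResponse())
                        return response.StatusCode != HttpStatusCode.NotFound;
                }
                catch (WebException ex)
                {
                    var response = ex.Response as HttpWebResponse;
                    return response != null && response.StatusCode != HttpStatusCode.NotFound;
                }
            }
        }

        // usage: bool ok = LinkChecker.LinkExists("http://example.com/somepage.html");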

    Read the article

  • MySQL: Efficient Blobbing?

    - by feklee
    I'm dealing with blobs of up to (I estimate) about 100 kilobytes in size. The data is already compressed. Storage engine: InnoDB on MySQL 5.1. Frontend: PHP (Symfony with the Propel ORM). Some questions: I've read somewhere that it's not good to update blobs because it leads to reallocation and fragmentation, and thus bad performance. Is that true? Any reference on this? Initially the blobs get constructed by appending data chunks, each up to 16 kilobytes in size. Would it be more efficient to use a separate chunk table instead, for example with the fields parent_id, position, chunk? Then, to get the entire blob, one would do something like:

        SELECT GROUP_CONCAT(chunk ORDER BY position) FROM chunks WHERE parent_id = 187

    The result would be used in a PHP script. Is there any difference between the BLOB types, aside from the size needed for metadata, which should be negligible?

    Read the article

  • efficient video format/codec for sparse & binary blob tracking

    - by user391339
    I am working on a blob tracking project and have many high-definition videos that I would like to reduce in size for storage and downstream tracking/shape analysis. I want to use a lossless method that takes advantage of the black-and-white nature of the video, as well as the fact that not much moves between individual frames. The videos are quite sparse, with 5 to 10 b&w blobs per frame occupying <30% of the frame in total, with each blob moving <5-10% of the field of view between frames and not changing shape much over 2-3 frames. I will work in Python, MATLAB, or LabVIEW for this project, and could use a batch utility if available. It may be worthwhile to export the files as compressed image stacks if a proper video format can't be found. What are the pros and cons of this? A video codec exploits correlations between neighboring frames, so it should be more efficient, but not if the wrong one is chosen or if it is improperly configured.

    Read the article

  • [perl] Efficient processing of large text

    - by jesper
    I have a text file that contains over one million URLs. I have to process this file in order to assign the URLs to groups based on host address, e.g.:

        {
          'http://www.ex1.com' => ['http://www.ex1.com/...', 'http://www.ex1.com/...', ...],
          'http://www.ex2.com' => ['http://www.ex2.com/...', 'http://www.ex2.com/...', ...]
        }

    My current basic solution takes about 600 MB of RAM to do this (the file itself is about 300 MB). Could you suggest some more efficient approaches? My current solution simply reads line by line, extracts the host address with a regex, and pushes the URL into a hash.

    Read the article

  • What is the most efficient way to read many bytes from SQL Server using SqlDataReader (C#)

    - by eccentric
    Hi everybody! What is the most efficient way to read bytes (8-16 KB) from SQL Server using SqlDataReader? It seems I know two ways:

        byte[] buffer = new byte[4096];
        MemoryStream stream = new MemoryStream();
        long l, dataOffset = 0;
        while ((l = reader.GetBytes(columnIndex, dataOffset, buffer, 0, buffer.Length)) > 0)
        {
            stream.Write(buffer, 0, (int)l);   // write only the bytes actually read
            dataOffset += l;
        }

    and

        reader.GetSqlBinary(columnIndex).Value

    The column's data type is IMAGE.
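
    A common refinement of the first approach is to open the reader with CommandBehavior.SequentialAccess, which lets the provider stream the binary column instead of buffering the whole row, and then pull it out through GetBytes in chunks. A minimal sketch, assuming a hypothetical Docs(Id, Data) table where Data is the IMAGE column:

        using System;
        using System.Data;
        using System.Data.SqlClient;
        using System.IO;

        static class BlobReader
        {
            // Streams one IMAGE/VARBINARY column into memory in 8 KB chunks.
            public static byte[] ReadBlob(string connectionString, int id)
            {
                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand("SELECT Data FROM Docs WHERE Id = @id", connection))
                {
                    command.Parameters.AddWithValue("@id", id);
                    connection.Open();

                    // SequentialAccess makes the provider stream the column
                    // rather than loading the entire row up front.
                    using (var reader = command.ExecuteReader(CommandBehavior.SequentialAccess))
                    {
                        if (!reader.Read())
                            return null;

                        var output = new MemoryStream();
                        var buffer = new byte[8192];
                        long offset = 0, read;
                        while ((read = reader.GetBytes(0, offset, buffer, 0, buffer.Length)) > 0)
                        {
                            output.Write(buffer, 0, (int)read);
                            offset += read;
                        }
                        return output.ToArray();
                    }
                }
            }
        }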

    Read the article

  • Efficient way to access a mapping of identifiers in Python

    - by sixbelo
    I am writing an app to do a file conversion, and part of that is replacing old account numbers with new account numbers. Right now I have a CSV file mapping the old and new account numbers, with around 30K records. I read this in, store it as a dict, and when writing the new file grab the new account number from the dict by key. My question is: what is the best way to do this if the CSV file grows to 100K+ records? Would it be more efficient to convert the account mappings from CSV to a SQLite database rather than storing them as a dict in memory?

    Read the article

  • ASP.net: Efficient ways to convert DataSets to GenericCollection (Of ObjectType)

    - by jlrolin
    I currently have a function that gets some data from the database and puts it into a DataSet. The return type of my function is GenericCollection(Of CustomerDetails). If I do this:

        Dim dataset As DataSet = Read(strSQL.ToString) 'Gets data from the DB

    what's the most efficient way to map the DataSet results to a collection of objects? More importantly, since I'm using GenericCollection, is there a way to do this in which I can call a function on the object type's class (CustomerDetails) that knows how to convert that specific object? Or is there a way to use a single function that would handle all types? Is there a way to do something like:

        Return returnedResults.TransformDataSet(dataset)

    in which returnedResults is an object collection Of CustomerDetails, or would it simply be easier to have TransformDataSet return an object collection Of CustomerDetails by itself? Thanks for any help.
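
    One way to keep the conversion generic is a helper that accepts a per-row conversion delegate, so each element type (CustomerDetails or anything else) supplies its own DataRow-to-object logic. A minimal sketch, written in C# for brevity and using Collection(Of T) in place of the question's GenericCollection; the CustomerDetails properties and column names are made up for illustration:

        using System;
        using System.Collections.ObjectModel;
        using System.Data;

        static class DataSetMapper
        {
            // Converts the first table of a DataSet into a typed collection
            // using the supplied row-to-object converter.
            public static Collection<T> TransformDataSet<T>(DataSet dataSet, Func<DataRow, T> convert)
            {
                var results = new Collection<T>();
                foreach (DataRow row in dataSet.Tables[0].Rows)
                    results.Add(convert(row));
                return results;
            }
        }

        class CustomerDetails
        {
            public int Id { get; set; }
            public string Name { get; set; }

            // Each entity type owns its own conversion logic.
            public static CustomerDetails FromRow(DataRow row)
            {
                return new CustomerDetails
                {
                    Id = (int)row["CustomerId"],
                    Name = (string)row["Name"]
                };
            }
        }

        // usage:
        // Collection<CustomerDetails> customers =
        //     DataSetMapper.TransformDataSet(dataset, CustomerDetails.FromRow);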

    Read the article

  • What is the most efficient way to store and access images

    - by MT
    I am working on a project which has to store tens of thousands of images on a server and let users access them. I need the most efficient method to store and retrieve these images, and I need advice on which technology I should opt for. I haven't started the project yet, so I am deciding between PHP with CodeIgniter and Ruby on Rails. PS: The site is something similar to Flickr, except that the images are uploaded only by the authors of the content, and not by the users.

    Read the article

  • Efficient way to update all rows in a table

    - by m_pGladiator
    Hi, I have a table with a lot of records (could be more than 500,000 or 1,000,000). I added a new column to this table, and I need to fill in a value for every row in that column, derived from the corresponding value of another column in the same table. I tried using separate transactions, selecting the next chunk of 100 records at a time and updating their value, but this still takes hours to update all records on Oracle 10, for example. What is the most efficient way to do this in SQL, without using dialect-specific features, so that it works everywhere (Oracle, MSSQL, MySQL, PostgreSQL, etc.)?
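
    For what it's worth, the dialect-neutral answer is usually a single set-based statement rather than chunked row-by-row updates, since one UPDATE lets the engine optimize the whole pass. A minimal sketch issuing it from C#/ADO.NET (the table and column names are hypothetical, and any ADO.NET provider would do in place of SqlClient):

        using System.Data.SqlClient;

        static class BulkColumnFill
        {
            // Fills new_col from old_col for every row in one set-based statement.
            // "UPDATE my_table SET new_col = old_col" is standard SQL and runs
            // unchanged on Oracle, SQL Server, MySQL and PostgreSQL.
            public static int FillNewColumn(string connectionString)
            {
                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand("UPDATE my_table SET new_col = old_col", connection))
                {
                    connection.Open();
                    command.CommandTimeout = 0;        // allow a long-running statement
                    return command.ExecuteNonQuery();  // number of rows updated
                }
            }
        }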

    Read the article

  • Most efficient method of generating PNG as HTTP response

    - by awj
    I've built an ASP.NET page whose output stream is a dynamically generated PNG image containing only text on a transparent background. The text is based upon database IDs contained in the querystring, and there will be a limited number of variations. Which one of the following would be the most efficient means of returning the image to the client?
    1. Store each variation upon first generation, and thenceforth retrieve it from disk.
    2. Simply generate the image each time.
    3. Cache the output response based upon the querystring.
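
    If the number of variations really is limited, the third option is typically the cheapest to implement: let ASP.NET cache the rendered response per querystring value. A minimal sketch of doing this programmatically in the page's code-behind (the "id" parameter name, the one-hour lifetime, and the RenderPng helper are assumptions); the same effect can be achieved declaratively with the @ OutputCache directive and VaryByParam:

        using System;
        using System.Web;
        using System.Web.UI;

        public partial class TextImagePage : Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // Cache the generated PNG on the server, one cache entry per "id" value.
                Response.Cache.SetCacheability(HttpCacheability.Server);
                Response.Cache.SetExpires(DateTime.Now.AddHours(1));
                Response.Cache.SetValidUntilExpires(true);
                Response.Cache.VaryByParams["id"] = true;

                Response.ContentType = "image/png";
                byte[] png = RenderPng(Request.QueryString["id"]);   // hypothetical renderer
                Response.OutputStream.Write(png, 0, png.Length);
            }

            private byte[] RenderPng(string id)
            {
                // placeholder for the GDI+ text-drawing code
                return new byte[0];
            }
        }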

    Read the article

  • Need help making this code more efficient

    - by Rendicahya
    I always use this method to easily read the content of a file. Is it efficient enough? Is 1024 good for the buffer size?

        public static String read(File file) {
            FileInputStream stream = null;
            StringBuilder str = new StringBuilder();
            try {
                stream = new FileInputStream(file);
            } catch (FileNotFoundException e) {
            }
            FileChannel channel = stream.getChannel();
            ByteBuffer buffer = ByteBuffer.allocate(1024);
            try {
                while (channel.read(buffer) != -1) {
                    buffer.flip();
                    while (buffer.hasRemaining()) {
                        str.append((char) buffer.get());
                    }
                    buffer.rewind();
                }
            } catch (IOException e) {
            } finally {
                try {
                    channel.close();
                    stream.close();
                } catch (IOException e) {
                }
            }
            return str.toString();
        }

    Read the article

  • Most efficient way to Update with Linq2Sql

    - by pranay
    Can I update my employee record as in the function below, or do I have to query the employee collection first and then update the data?

        public int updateEmployee(App3_EMPLOYEE employee)
        {
            DBContextDataContext db = new DBContextDataContext();
            db.App3_EMPLOYEEs.Attach(employee);
            db.SubmitChanges();
            return employee.PKEY;
        }

    Or do I have to do this:

        public int updateEmployee(App3_EMPLOYEE employee)
        {
            DBContextDataContext db = new DBContextDataContext();
            App3_EMPLOYEE emp = db.App3_EMPLOYEEs.Single(e => e.PKEY == employee.PKEY);
            db.App3_EMPLOYEEs.Attach(employee, emp);
            db.SubmitChanges();
            return employee.PKEY;
        }

    But I don't want to use the second option. Is there a more efficient way to update the data?
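
    A minimal sketch of one common LINQ to SQL pattern that avoids the extra query: attach the detached entity as modified. This only works when App3_EMPLOYEE has a rowversion/timestamp column mapped (or UpdateCheck set to Never on its members), because LINQ to SQL otherwise has no original values to build its optimistic-concurrency check from:

        public class EmployeeRepository
        {
            // Updates a detached App3_EMPLOYEE without re-querying it first.
            public int UpdateEmployee(App3_EMPLOYEE employee)
            {
                using (var db = new DBContextDataContext())
                {
                    db.App3_EMPLOYEEs.Attach(employee, true);   // attach as modified
                    db.SubmitChanges();
                    return employee.PKEY;
                }
            }
        }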

    Read the article

  • More efficient way to write this Jquery code

    - by adamwstl
    Is there a better, more efficient way to write this code? It's a makeshift drop-down menu that allows a user to RSVP for multiple people. Sorry, it's kind of a mess, but I think what I'm doing is clear. If not, I'm at my computer and will respond quickly with more info if need be.

        // There's got to be a better way to do this
        $('#guest_num_1').click(function() {
            $('#num_guests a#quant_guests').html("1");
            $('.guest_name_2, .guest_name_3, .guest_name_4, .guest_name_5, .guest_name_6').hide();
        });
        $('#guest_num_2').click(function() {
            $('#num_guests a#quant_guests').html("2");
            $('.guest_name_2').fadeIn();
            $('.guest_name_3, .guest_name_4, .guest_name_5, .guest_name_6').hide();
        });
        $('#guest_num_3').click(function() {
            $('#num_guests a#quant_guests').html("3");
            $('.guest_name_2, .guest_name_3').fadeIn();
            $('.guest_name_4, .guest_name_5, .guest_name_6').hide();
        });
        $('#guest_num_4').click(function() {
            $('#num_guests a#quant_guests').html("4");
            $('.guest_name_2, .guest_name_3, .guest_name_4').fadeIn();
            $('.guest_name_5, .guest_name_6').hide();
        });
        $('#guest_num_5').click(function() {
            $('#num_guests a#quant_guests').html("5");
            $('.guest_name_2, .guest_name_3, .guest_name_4, .guest_name_5').fadeIn();
            $('.guest_name_6').hide();
        });
        $('#guest_num_6').click(function() {
            $('#num_guests a#quant_guests').html("6");
            $('.guest_name_2, .guest_name_3, .guest_name_4, .guest_name_5, .guest_name_6').fadeIn();
        });

    Read the article

  • Is it better to have more small ram chips or fewer large ones?

    - by Alex Andronov
    I am currently building a new server. My options are, say, 32 GB of memory for 2 CPUs, DDR3, 1066 MHz (8x4GB dual-ranked RDIMMs) versus 36 GB of memory for 2 CPUs, DDR3, 1066 MHz (18x2GB dual-ranked RDIMMs), both at the same price. Should I go for the larger total amount of RAM or the smaller number of DIMMs? This will be for a Dell PowerEdge R710 with two Intel® Xeon® E5530 CPUs (2.4 GHz, 8 MB cache, 5.86 GT/s QPI, Turbo, HT). Thanks

    Read the article

  • Why are there hard faults when my RAM is not 100% used?

    - by Vilx-
    I've got 2 GB of RAM, and Resource Monitor shows that only about 75% of it is used. However, there are some apps (NetBeans, Visual Studio) that every once in a while start generating a lot of hard faults (up to and over 2000/min), predictably slowing down to a crawl. How can this be? The memory usage during these "fits" doesn't change. Perhaps it also includes memory-mapped files or something?

    Read the article

  • apache-memory-hacker-linux

    - by bibhudatta
    When we start the Linux system it takes only 435 MB of memory, and this is a 4 GB server. When we start the httpd service it takes 1000 MB, then it gradually takes all the memory and the server crashes. Even when we stop Apache, only about 200 MB of memory is released. What could the problem be? Can anyone tell me what these hackers are doing? I can see they are sending hits to my Apache, but I think they are doing it from this system. Below is the log. Please help me out with this.

        [root@host ~]# tail -20 /var/log/httpd/dostizone.com-combined.log
        180.76.5.143 - - [14/Nov/2011:02:30:16 +0530] "GET /blogs/10248/209403/nfl-panties-since-the-quality-of HTTP/1.1" 403 2298 "-" "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)"
        180.76.5.88 - - [14/Nov/2011:02:30:31 +0530] "GET /blogs/815/158725/new-jersey-attorney-search HTTP/1.1" 403 2290 "-" "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)"
        220.181.108.186 - - [14/Nov/2011:02:30:32 +0530] "GET / HTTP/1.1" 403 5043 "-" "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)"
        crawl-66-249-67-137.googlebot.com - - [14/Nov/2011:02:30:20 +0530] "GET /blogs/805/11279/supra-suprano-high-shoes HTTP/1.1" 200 30642 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
        crawl-66-249-68-51.googlebot.com - - [14/Nov/2011:02:30:37 +0530] "GET /blogs/10514/215084/oakland-raiders-sweatpants-tags HTTP/1.1" 403 2297 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
        220.181.94.237 - - [14/Nov/2011:02:30:12 +0530] "GET /profile/8509 HTTP/1.1" 200 236894 "-" "Sogou web spider/4.0(+http://www.sogou.com/docs/help/webmasters.htm#07)"
        220.181.94.237 - - [14/Nov/2011:02:30:43 +0530] "GET /mode-switch?return_url=%2Fblogs%2F8529%2F160217%2Fclimate-jordan-6 HTTP/1.1" 302 1 "-" "Sogou web spider/4.0(+http://www.sogou.com/docs/help/webmasters.htm#07)"
        crawl-66-249-68-51.googlebot.com - - [14/Nov/2011:02:30:44 +0530] "GET /blogs/390/61573/blackhawk-jerseys-from-the-you HTTP/1.1" 403 2293 "-" "SAMSUNG-SGH-E250/1.0 Profile/MIDP-2.0 Configuration/CLDC-1.1 UP.Browser/6.2.3.3.c.1.101 (GUI) MMP/2.0 (compatible; Googlebot-Mobile/2.1; +http://www.google.com/bot.html)"
        124.115.0.159 - - [14/Nov/2011:02:30:24 +0530] "GET /blogs/693/46081/application/modules/Hecore/externals/scripts/core.js HTTP/1.1" 200 26869 "http://dostizone.com/blogs/693/46081/thomas-sabo-charms-hot-chilli" "Sosospider+(+http://help.soso.com/webspider.htm)"
        124.115.0.159 - - [14/Nov/2011:02:30:24 +0530] "GET /blogs/693/46081/application/modules/Activity/externals/scripts/core.js HTTP/1.1" 200 26873 "http://dostizone.com/blogs/693/46081/thomas-sabo-charms-hot-chilli" "Sosospider+(+http://help.soso.com/webspider.htm)"
        124.115.0.159 - - [14/Nov/2011:02:30:24 +0530] "GET /blogs/693/46081/application/modules/Hecore/externals/scripts/imagezoom/core.js HTTP/1.1" 200 26899 "http://dostizone.com/blogs/693/46081/thomas-sabo-charms-hot-chilli" "Sosospider+(+http://help.soso.com/webspider.htm)"
        180.76.5.153 - - [14/Nov/2011:02:30:50 +0530] "GET /blogs/10252/212268/cleveland-browns-authentic-jerse HTTP/1.1" 403 2298 "-" "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)"
        crawl-66-249-68-51.googlebot.com - - [14/Nov/2011:02:30:51 +0530] "GET /blogs/741/46260/chocolate-ugg-women-boots-1873 HTTP/1.1" 403 2293 "-" "SAMSUNG-SGH-E250/1.0 Profile/MIDP-2.0 Configuration/CLDC-1.1 UP.Browser/6.2.3.3.c.1.101 (GUI) MMP/2.0 (compatible; Googlebot-Mobile/2.1; +http://www.google.com/bot.html)"
        124.115.1.7 - - [14/Nov/2011:02:30:40 +0530] "GET /blogs/682/97454/swarovski-jewellry-sale-articles HTTP/1.1" 200 25770 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"
        crawl-66-249-68-51.googlebot.com - - [14/Nov/2011:02:30:56 +0530] "GET /blogs/779/60941/players-a-to-z-michael-cuddyer HTTP/1.1" 403 2293 "-" "SAMSUNG-SGH-E250/1.0 Profile/MIDP-2.0 Configuration/CLDC-1.1 UP.Browser/6.2.3.3.c.1.101 (GUI) MMP/2.0 (compatible; Googlebot-Mobile/2.1; +http://www.google.com/bot.html)"
        crawl-66-249-68-51.googlebot.com - - [14/Nov/2011:02:31:01 +0530] "GET /blogs/469/58551/chicago-bears-news-there-exist HTTP/1.1" 403 2293 "-" "SAMSUNG-SGH-E250/1.0 Profile/MIDP-2.0 Configuration/CLDC-1.1 UP.Browser/6.2.3.3.c.1.101 (GUI) MMP/2.0 (compatible; Googlebot-Mobile/2.1; +http://www.google.com/bot.html)"
        220.181.94.237 - - [14/Nov/2011:02:30:54 +0530] "GET /blogs/8529/160217/climate-jordan-6 HTTP/1.1" 200 30750 "-" "Sogou web spider/4.0(+http://www.sogou.com/docs/help/webmasters.htm#07)"
        180.76.5.59 - - [14/Nov/2011:02:31:05 +0530] "GET /blogs/815/158197/cheap-calgary-flames-jerseys HTTP/1.1" 403 2292 "-" "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)"
        crawl-66-249-68-51.googlebot.com - - [14/Nov/2011:02:31:06 +0530] "GET /mode-switch?return_url=%2Fblogs%2F387%2F45679%2Fhandbag-louis-vuitton-judy-mm-m4 HTTP/1.1" 403 2258 "-" "SAMSUNG-SGH-E250/1.0 Profile/MIDP-2.0 Configuration/CLDC-1.1 UP.Browser/6.2.3.3.c.1.101 (GUI) MMP/2.0 (compatible; Googlebot-Mobile/2.1; +http://www.google.com/bot.html)"
        crawl-66-249-67-137.googlebot.com - - [14/Nov/2011:02:31:10 +0530] "GET /public/temporary/c83b731ecc556d7fd1a7732d9ac16ed6.png HTTP/1.1" 404 2305 "-" "Googlebot-Image/1

    Read the article

  • Innodb : cannot allocate the memory for the buffer pool

    - by mingyeow
    My InnoDB keeps crashing. This is the error message below. Does anyone know why this keeps happening?

        InnoDB: by InnoDB 49201616 bytes. Operating system errno: 12
        InnoDB: Check if you should increase the swap file or
        InnoDB: ulimits of your operating system.
        InnoDB: On FreeBSD check you have compiled the OS with
        InnoDB: a big enough maximum process size.
        InnoDB: Note that in most 32-bit computers the process
        InnoDB: memory space is limited to 2 GB or 4 GB.
        InnoDB: We keep retrying the allocation for 60 seconds...
        0 processes alive and '/usr/bin/mysqladmin --defaults-file=/etc/mysql/debian.cnf ping' resulted in
        /usr/bin/mysqladmin: connect to server at 'localhost' failed
        error: 'Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)'
        Check that mysqld is running and that the socket: '/var/run/mysqld/mysqld.sock' exists!
        InnoDB: Fatal error: cannot allocate the memory for the buffer pool
        [ERROR] Default storage engine (InnoDB) is not available

    Read the article

  • Deep recursion in WHM EasyApache software update causes out of memory

    - by Ernest
    I was trying to load some modules with EasyApache in a software update (WHM) because I need to install Magento ecommerce. I did the first EasyApache update; however, one module I needed was not loaded. I loaded it later, but whenever I check Tomcat 5.5 in the profile builder I get:

        -- Begin opt 'Tomcat' --
        -- Begin dryrun test 'Checking for v5' --
        -- End dryrun test 'Checking for v5' --
        -- Begin step 'Checking jdk' --
        Deep recursion on subroutine "Cpanel::CPAN::Digest::MD5::File::_dir" at /usr/local/cpanel/Cpanel/CPAN/Digest/MD5/File.pm line 107.
        Out of memory!
        Out of memory!
        *** glibc detected *** realloc(): invalid next size: 0x09741188 ***

    Line 107 in question in File.pm is the third one in this snippet:

        if(-d $full) {
            $hr->{ $short } = '';
            _dir($full, $hr, $base, $type, $cc) or return;   # line 107
        }

    All my client sites are down, and I don't know what to do to fix this.

    Read the article

  • Adobe Illustrator Saving to PSD: "Not enough memory to save the file"

    - by fiskfisk
    This is on CS5.5 under Windows XP Professional. There seems to be a known issue about saving (large) Adobe Illustrator files to PSD (thoroughly discussed), where the exporter complains about "Not enough memory to save the file". This happens regardless of the available memory on the computer and seems to be a limitation of the PSD exporter itself. The only possible workaround so far seems to be to copy and paste each layer separately from the Illustrator file into the open Photoshop file. We need to keep the layers intact (and not merged), so selecting all the layers at the same time doesn't work. Does anyone have a workaround for the actual, original export issue, or a way to get the layer information into Photoshop without handling each layer separately?

    Read the article

  • Disable "System Memory Testing" via OMSA 6.4.0

    - by EGr
    Is it possible to disable system memory testing via OMSA 6.4.0? I can only find ways to do it using newer versions of OMSA, and I can't even see the setting in 6.4.0. I have quite a few machines on which I want to disable this BIOS setting, but I don't want to have to install the newer OMSA and reboot. My intention is to disable the setting so that when the systems are rebooted in the future, they don't need to go through the memory test. If it is possible to disable this in another way, without OMSA or manually changing the BIOS settings, I would be open to that as well.

    Read the article

  • Reboot VPS by reaching memory limit

    - by Ali
    When a server uses more memory than the available RAM, the system will shut down the virtual machine. Then it is only possible to boot it from outside (the VPS control panel, e.g. vePortal or SolusVM). However, it should be possible to schedule a reboot before a possible shutdown. What is the best practical method to check the memory in use and reboot the system upon reaching, e.g., 90% of the allowed RAM? Is there a common program or script to do so? I am using Debian/Ubuntu.

    Read the article

  • Is there a way to communicate DBMS with raw memory block or binaries

    - by darkcminor
    I am trying to make a numerical matrix operations library like LAPACK communicate with a DBMS. Is it possible to send/receive complete matrices as binary data or as direct memory pointers in order to process them? It would be something like: the outside library processes data stored in the DBMS, computes some huge matrix operations, and then the DBMS gets the result back from the library via a memory block or a binary value. The main purpose is speed and avoiding a round trip through a flat file, and, last but not least, using the library to efficiently do some operations a DBMS is not designed for. Do Oracle, SQL Server, or MySQL support this technique?
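
    Most mainstream engines can at least exchange matrices as raw binary blobs (for example a VARBINARY column holding the packed doubles), which avoids the flat-file round trip even though it is not true shared memory. A minimal sketch in C# against SQL Server, with a hypothetical Matrices(Id, NumRows, NumCols, Data) table; the row-major double packing is an assumed convention:

        using System;
        using System.Data.SqlClient;

        static class MatrixStore
        {
            // Packs a matrix into a byte[] (row-major doubles) and stores it as VARBINARY(MAX).
            public static void Save(string connectionString, int id, double[,] m)
            {
                int rows = m.GetLength(0), cols = m.GetLength(1);
                var bytes = new byte[rows * cols * sizeof(double)];
                Buffer.BlockCopy(m, 0, bytes, 0, bytes.Length);

                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(
                    "INSERT INTO Matrices (Id, NumRows, NumCols, Data) VALUES (@id, @r, @c, @d)", connection))
                {
                    command.Parameters.AddWithValue("@id", id);
                    command.Parameters.AddWithValue("@r", rows);
                    command.Parameters.AddWithValue("@c", cols);
                    command.Parameters.AddWithValue("@d", bytes);
                    connection.Open();
                    command.ExecuteNonQuery();
                }
            }

            // Reads the blob back and reinterprets it as a matrix for the numeric library.
            public static double[,] Load(string connectionString, int id)
            {
                using (var connection = new SqlConnection(connectionString))
                using (var command = new SqlCommand(
                    "SELECT NumRows, NumCols, Data FROM Matrices WHERE Id = @id", connection))
                {
                    command.Parameters.AddWithValue("@id", id);
                    connection.Open();
                    using (var reader = command.ExecuteReader())
                    {
                        if (!reader.Read()) return null;
                        int rows = reader.GetInt32(0), cols = reader.GetInt32(1);
                        var bytes = (byte[])reader[2];
                        var matrix = new double[rows, cols];
                        Buffer.BlockCopy(bytes, 0, matrix, 0, bytes.Length);
                        return matrix;
                    }
                }
            }
        }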

    Read the article
