Search Results

Search found 9017 results on 361 pages for 'efficient storage'.


  • Converting mxml Rect & SolidColor to actionscript

    - by touB
    I'm trying to learn how to use ActionScript over MXML for flexibility. I have this simple block of MXML that I'm trying to convert to ActionScript, but I'm stuck halfway through:

        <s:Rect id="theRect" x="0" y="50" width="15%" height="15%">
            <s:fill>
                <s:SolidColor color="black" alpha="0.9" />
            </s:fill>
        </s:Rect>

    I can convert the Rect no problem:

        private var theRect:Rect = new Rect();
        theRect.x = 0;
        theRect.y = 50;
        theRect.width = "15%";
        theRect.height = "15%";

    Then I'm stuck on the fill. What's the most efficient way to add the SolidColor, in as few lines of code as possible?

  • SDCC and malloc() - allocating much less memory than is available

    - by Duncan Bayne
    When I compile this code with SDCC 3.1.0 and run it on an Amstrad CPC 464 (under emulation, with WinCPC 0.9.26 running on Wine):

        #include <stdio.h>
        #include <stdlib.h>

        void _test_malloc() {
            long idx = 0;
            while (1) {
                if (malloc(5)) {
                    printf("%ld\r\n", ++idx);
                } else {
                    printf("done");
                    break;
                }
            }
        }

    ... it consistently taps out at 92 malloc()s. I make that 460 bytes, which leads me to a couple of questions:

      - What is malloc() doing on this system? I was sort of hoping for an order of magnitude more storage, even on a 64kB system.
      - The behaviour is consistent on 64kB systems and 128kB systems; do I have to perform some sort of magic to access the additional memory, like manual bank switching?

  • Reverse massive text file in Java

    - by DanJanson
    What would be the best approach to reverse a large text file that is uploaded asynchronously to a servlet, in a scalable and efficient way?

      - The text file can be massive (gigabytes long).
      - Assume a multiple-server/clustered environment, so this can be done in a distributed manner.
      - Open-source libraries are encouraged.

    I was thinking of using Java NIO to treat the file as an array on disk (so that I don't have to hold the file as a string buffer in memory). I am also thinking of using MapReduce to break up the file and process it on separate machines. Any input is appreciated. Thanks. Daniel
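    Before going distributed, a single-machine baseline is worth measuring, since NIO makes this a straightforward sequential pass. A minimal sketch, assuming "reverse" means reversing the byte order and a single-byte encoding such as ASCII (the file paths and 1 MiB block size are arbitrary choices, and a production version would loop until each block is fully read):

        import java.io.RandomAccessFile;
        import java.nio.ByteBuffer;
        import java.nio.channels.FileChannel;

        public class ReverseFile {
            // Reverses a file byte-by-byte without holding it all in memory:
            // walk the file from the tail in fixed-size blocks, reverse each
            // block in place, and append it to the output.
            public static void reverse(String inPath, String outPath) throws Exception {
                try (FileChannel in = new RandomAccessFile(inPath, "r").getChannel();
                     FileChannel out = new RandomAccessFile(outPath, "rw").getChannel()) {
                    final int BLOCK = 1 << 20; // 1 MiB per read
                    ByteBuffer buf = ByteBuffer.allocate(BLOCK);
                    long pos = in.size();
                    while (pos > 0) {
                        int len = (int) Math.min(BLOCK, pos);
                        pos -= len;
                        buf.clear();
                        buf.limit(len);
                        in.read(buf, pos); // read one block from the tail
                        byte[] bytes = buf.array();
                        for (int i = 0, j = len - 1; i < j; i++, j--) { // reverse in place
                            byte tmp = bytes[i];
                            bytes[i] = bytes[j];
                            bytes[j] = tmp;
                        }
                        buf.flip();
                        out.write(buf);
                    }
                }
            }
        }

    A pass like this is I/O-bound, so a MapReduce split mainly pays off once the file no longer fits on one machine's disks.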

  • C++ Long switch statement or look up with a map?

    - by Rachel
    In my C++ application, I have some values that act as codes to represent other values. To translate the codes, I've been debating between using a switch statement or an STL map. The switch would look something like this:

        int code;
        int value;

        switch (code) {
            case 1:
                value = 10;
                break;
            case 2:
                value = 15;
                break;
        }

    The map would be a std::map<int, int>, and translation would be a simple lookup with the code used as the key. Which one is better/more efficient/cleaner/more accepted? Why?

  • CUDA small kernel 2d convolution - how to do it

    - by paulAl
    I've been experimenting with CUDA kernels for days to perform a fast 2D convolution between a 500x500 image (though I could also vary the dimensions) and a very small 2D kernel (a Laplacian 2D kernel, so a 3x3 kernel... too small to take full advantage of all the CUDA threads). I created a classic CPU implementation (two for loops, as simple as you would expect) and then started creating CUDA kernels. After a few disappointing attempts to perform a faster convolution, I ended up with the code at http://www.evl.uic.edu/sjames/cs525/final.html (see the Shared Memory section); it basically lets a 16x16 thread block load all the convolution data it needs into shared memory and then performs the convolution. Still nothing: the CPU is a lot faster. I didn't try the FFT approach because the CUDA SDK states that it is efficient with large kernel sizes. Whether or not you read everything I wrote, my question is: how can I perform a fast 2D convolution between a relatively large image and a very small kernel (3x3) with CUDA?

  • C# Importing Large Volume of Data from CSV to Database

    - by guazz
    What's the most efficient method to load large volumes of data from CSV (3 million+ rows) to a database? The data needs to be formatted (e.g. the name column needs to be split into first name and last name, etc.), and I need to do this as efficiently as possible, i.e. there are time constraints. I am siding with the option of reading, transforming, and loading the data row by row using a C# application. Is this ideal? If not, what are my options? Should I use multithreading?

  • I am designing a bus timetable using SQL. Each bus route has multiple stops, do I need a different t

    - by Henry
    I am trying to come up with the most efficient database possible. My bus routes all have about 10 stops. The bus starts at stop one and continues until it reaches the 10th stop, then it comes back again. This cycle happens 3 times a day. I am really stuck as to how I can efficiently generate the times for the buses and where I should store the stops. If I put all the stops in one field and the times in another, the database won't be very dynamic. If I store the stops one by one in a column and the times in another column, there will be a lot of repetition further down, as one stop has multiple times. Maybe I am missing something; I've only just started learning SQL and this is a task we have been set. Thanks in advance.

  • cutting a text file into multiple parts in emacs

    - by Gaurish Telang
    Hi, I am using the GNU Emacs 23 editor. I have a huge text file containing about 10,000 lines that I want to chop into multiple files. Using the mouse to select the required text to paste into another file is really painful, and prone to errors too. How do I divide the text file by line numbers into, say, 4 files, where:

      - first file: lines 1-2500
      - second file: lines 2500-5000
      - third file: lines 5000-7500
      - fourth file: lines 7500-10000

    At the very least, is there an efficient way to copy large regions of the file just by specifying line numbers?

  • Developing browser plug-ins?

    - by JavaMan
    I have a project that I'd like to try that involves developing an internet browser plug-in. I have knowledge of Java and DHTML, but nothing in the world of browser plug-in development. So I thought I would just ask here: what is the most efficient way to develop a browser plug-in? If possible, I'd like to streamline the process so that getting the plug-in to work in different browsers involves as little work as possible. Can this be done? I'm not asking for a tutorial like the trolls do, just a few pointers, that's all. I don't want to waste my time or anyone else's.

  • How to roll my own index in c#?

    - by bill seacham
    I need a faster way to create an index file. The application generates pairs of items to be indexed. I currently add each pair as it is generated to a sorted dictionary and then write it out to a disk file. This works well until the number of items added exceeds one million, at which point it slows to an unacceptable degree. There can be as many as three million data items to be indexed. I prefer to avoid a database because I do not want to significantly increase the size of the deployment package, which is now less than one-half of one megabyte. I tried Access, but it is even slower than the sorted dictionary; if it had an efficient bulk-load utility then it might work, but I cannot find such a tool for Access. Is there a better way to roll my own index?

  • How to get columns from Excel files using Apache POI?

    - by posdef
    Hi, in order to do some statistical analysis I need to extract values in a column of an Excel sheet. I have been using the Apache POI package to read from Excel files, and it works fine when one needs to iterate over rows. However, I couldn't find anything about getting columns, either in the API or through Google searching. I need to get the max and min values of different columns and generate random numbers using these values, so without picking up individual columns, the only other option is to iterate over rows and columns to collect the values and compare them one by one, which doesn't sound very time-efficient. Any ideas on how to tackle this problem? Thanks.
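    For what it's worth, POI's row-oriented model can still yield a single column: iterate the rows and take the cell at a fixed column index, tracking min and max as you go. A minimal sketch, assuming a recent POI (4.x, where getCellType() returns the CellType enum) and a hypothetical data.xlsx:

        import java.io.FileInputStream;
        import org.apache.poi.ss.usermodel.Cell;
        import org.apache.poi.ss.usermodel.CellType;
        import org.apache.poi.ss.usermodel.Row;
        import org.apache.poi.ss.usermodel.Sheet;
        import org.apache.poi.ss.usermodel.Workbook;
        import org.apache.poi.ss.usermodel.WorkbookFactory;

        public class ColumnStats {
            public static void main(String[] args) throws Exception {
                try (Workbook wb = WorkbookFactory.create(new FileInputStream("data.xlsx"))) {
                    Sheet sheet = wb.getSheetAt(0);
                    int col = 2; // 0-based index of the column to extract
                    double min = Double.POSITIVE_INFINITY;
                    double max = Double.NEGATIVE_INFINITY;
                    for (Row row : sheet) {            // rows are what POI iterates naturally
                        Cell cell = row.getCell(col);  // null if this row is shorter
                        if (cell != null && cell.getCellType() == CellType.NUMERIC) {
                            double v = cell.getNumericCellValue();
                            min = Math.min(min, v);
                            max = Math.max(max, v);
                        }
                    }
                    System.out.println("min=" + min + " max=" + max);
                }
            }
        }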

  • Return specific HREF attribute using Xpath query

    - by Michael Pasqualone
    Having a major brain freeze. I have the following chunk of code:

        // Get web address
        $domQuery = query_HtmlDocument($html, '//a[@class="productLink"]');
        foreach ($domQuery as $rtn) {
            $web = $rtn->getAttribute('href');
        }

    This obviously gets the entire href attribute; however, I only want one specific parameter within the href. I.e., if the href is:

        /website/product1234.do?code=1234&version=1.3&somethingelse=blaah

    I only want to return the value of "version", so in my example I wish to return only "1.3". What's the most efficient way to do this?

  • JQuery slideToggle replace image src

    - by Rob
    Hi, this function is called when an up/down arrow is clicked to slide a hidden div. If the div is hidden, the arrow points down, and changes to up when the div is shown. If the div is shown, the arrow points up to hide the div, and changes to down when the div is closed. I just wanted to know if there is a more efficient way of doing this, or if this is the correct way. Thanks.

        function showInfo(info_id) {
            var img_id = '#arrow_' + info_id;
            var div = '#info_' + info_id;
            $(div).slideToggle('normal', function () {
                if ($(this).is(':visible')) {
                    $(img_id).attr('src', $(img_id).attr('src').replace('_down', '_up'));
                } else {
                    $(img_id).attr('src', $(img_id).attr('src').replace('_up', '_down'));
                }
            });
        }

  • How do I request a single random row from a force.com database in SOQL?

    - by Ollie C
    The total row count is in the range of 10k-100k rows. Can I use RAND() on force.com? Unfortunately, although all the rows have a unique numeric identifier, there are many gaps, and I'd often want to select a random row from a filtered subset anyway. I suspect there's no particularly efficient way to do this, but is it possible at all? Ultimately, all I want to do is extract one row from a table (or a subset based on specific filter criteria) at random. If force.com doesn't let me select a random row, can I query the rows to select from, assign sequential IDs to all the rows, say 1-1,035, then select a random number in that range locally, say 349, and then get row 349?

  • Best practice -- Content Tracking Remote Data (cURL, file_get_contents, cron, et. al)?

    - by user322787
    I am attempting to build a script that will log data that changes every 1 second. The initial thought was "just run a PHP file that does a cURL every second from cron" -- but I have a very strong feeling that this isn't the right way to go about it. Here are my specifications:

      - There are currently 10 sites I need to gather data from and log to a database. This number will invariably increase over time, so the solution needs to be scalable.
      - Each site spits out data to a URL every second, but only keeps 10 lines on the page, and can sometimes produce up to 10 new lines each time, so I need to pick up that data every second to ensure I get all of it.
      - As I will also be writing this data to my own DB, there is going to be I/O every second of every day for a considerably long time.

    Barring magic, what is the most efficient way to achieve this? It might help to know that the data I am getting every second is very small, under 500 bytes.

  • Breaking the SQL Compact 8K Limit?

    - by David Veeneman
    I am creating a desktop application that stores rich-text documents to a SQL Compact database. Documents are converted to a byte array and stored as a Binary column, and I am running into SQL Compact's 8K limit on Binary field length. Is there a simple way to get around the 8K limit? I can come up with lots of complicated ways to do it, such as parsing into 8K chunks for storage and reassembling on fetch. But before I get into something that complex, I would like to make sure I can't solve the problem more simply, such as by changing the data type. If there is no simple way around the 8K limit, is there a best practice for storing documents greater than 8K? Thanks for your help.

  • ANTLR - Embedding Java code, evaluate before or after?

    - by wvd
    Hello all, I'm writing a simple scripting language on top of Java/JVM, where you can also embed Java code using {} brackets. The problem is, how do I parse this in the grammar? I have two options:

      1] Allow everything in it that matches [a-zA-Z0-9_$], and move on.
      2] Get a separate Java grammar and use that grammar to parse the small embedded code (is that actually possible, and is it efficient?)

    Option 2] is basically a double check, since the Java code will be checked again anyway when it is evaluated. My last question: is there a way to dynamically execute Java code, including with objects that have been created at runtime? Thanks, William van Doorn
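    On the last question: the JDK exposes its compiler programmatically, so source generated at runtime can be compiled and then invoked through reflection, including with arguments constructed at runtime. A minimal sketch, assuming a full JDK (not just a JRE) and a hypothetical /tmp/Snippet.java defining a public static run(Object) method:

        import java.lang.reflect.Method;
        import java.net.URL;
        import java.net.URLClassLoader;
        import javax.tools.JavaCompiler;
        import javax.tools.ToolProvider;

        public class DynamicJava {
            public static void main(String[] args) throws Exception {
                // Compile a source file that was written out at runtime.
                JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
                compiler.run(null, null, null, "/tmp/Snippet.java");

                // Load the compiled class and call it reflectively,
                // passing in an object created at runtime.
                try (URLClassLoader loader =
                         new URLClassLoader(new URL[] { new URL("file:///tmp/") })) {
                    Class<?> cls = loader.loadClass("Snippet");
                    Method run = cls.getMethod("run", Object.class);
                    run.invoke(null, new StringBuilder("runtime state"));
                }
            }
        }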

  • Efficiently compute the row sums of a 3d array in R

    - by Gavin Simpson
    Consider the array a:

        > a <- array(c(1:9, 1:9), c(3,3,2))
        > a
        , , 1

             [,1] [,2] [,3]
        [1,]    1    4    7
        [2,]    2    5    8
        [3,]    3    6    9

        , , 2

             [,1] [,2] [,3]
        [1,]    1    4    7
        [2,]    2    5    8
        [3,]    3    6    9

    How do we efficiently compute the row sums of the matrices indexed by the third dimension, such that the result is:

             [,1] [,2]
        [1,]   12   12
        [2,]   15   15
        [3,]   18   18

    The column sums are easy via the 'dims' argument of colSums():

        > colSums(a, dims = 1)

    but I cannot find a way to use rowSums() on the array to achieve the desired result, as it has a different interpretation of 'dims' to that of colSums(). It is simple to compute the desired row sums using:

        > apply(a, 3, rowSums)
             [,1] [,2]
        [1,]   12   12
        [2,]   15   15
        [3,]   18   18

    but that is just hiding the loop. Are there other efficient, truly vectorised ways of computing the required row sums?

  • Is there a specialized educational institution in enterprise software design?

    - by dfafa
    Is a software engineering degree sufficient for being able to design efficient code in enterprise architecture? I mean, that's what I want to do. Some people go to game schools (e.g. Vancouver Film School) to make games or work in that industry; are there similar programs for enterprise software design/development? Are there special courses in the Java EE and .NET space? Is it suitable to focus on just Java, or on both? My ultimate goal would be consulting and developing enterprise software independently, but right now I am starting school and just keep learning on the side. Any guidance to resources on this industry, or your insights, would be appreciated. Thank you.

  • Base64 Encoded Data - DB or Filesystem

    - by Marty
    I have a new program that will be generating a lot of Base64-encoded audio and image data. This data will be served via HTTP in the form of XML, with the Base64 data inline. These files will most likely exceed 20MB. Would it be more efficient to serve these files directly from the filesystem, or would it be feasible to store the data in a MySQL database? Caching will be set up but is largely unnecessary, because it is likely that this data will be purged shortly after it is created and served. I know that storing binary data in the DB is frowned upon in most circumstances, but since this will all be character data, I want to see what the consensus is. As of now I am leaning toward storing the files in the filesystem for efficiency reasons, but if it is feasible to store them in a database, it would be much easier to manage the data.

  • Does Exchange have ability to run hidden mailboxes?

    - by MadBoy
    Hello, the title of my question may sound a little odd, but I was wondering whether Exchange 2010 or 2007, or any program that works in conjunction with Exchange, could create this structure:

      - Users have their normal mailboxes connected, using them in Outlook 2003/2007/2010 as everyone usually would.
      - Users have additional mailboxes (from the old Exchange 2003) attached, but hidden on demand by the administrator. For example, the administrator could easily disable them, just as if they had never been attached, making them invisible to the users and everyone else.
      - It would be good if such mailboxes could be moved out of the system easily (say, onto an external drive) in one simple step, not as a manual job for 100 mailboxes.
      - Users have no ability to copy/move their mail to outside storage (like a local .pst file).

    Do you guys have any suggestions on this? I was thinking maybe public folders, but that seems like overkill and not really suited to this. And please don't ask me why I need this type of security (it's not something I requested).

  • How to deal with Rounding-off TimeSpan?

    - by infant programmer
    I take the difference between two DateTime fields and store it in a TimeSpan variable. Now I have to round off the TimeSpan by the following rules:

      - If the minutes in the TimeSpan are less than 30, the minutes and seconds must be set to zero.
      - If the minutes are equal to or greater than 30, the hours must be incremented by 1, and the minutes and seconds set to zero.

    The TimeSpan can also be negative, in which case I need to preserve the sign. I was able to meet the requirement when the TimeSpan wasn't negative, and though I have written code that handles the negative case, I am not happy with its inefficiency, as it is bulky. Please suggest a simpler and more efficient method. Thanks, regards.
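    One way to keep the sign handling simple is to round the absolute value and re-apply the sign at the end. The asker's environment is .NET, but as a hedged illustration of just that logic, here is a sketch using Java's java.time.Duration as a stand-in for TimeSpan:

        import java.time.Duration;

        public class RoundDuration {
            // Round to the nearest hour: a minute part < 30 truncates,
            // >= 30 rounds up. Working on the absolute value and negating
            // at the end preserves the sign without special cases.
            static Duration roundToHour(Duration d) {
                boolean negative = d.isNegative();
                Duration abs = d.abs();
                long hours = abs.toHours();
                if (abs.toMinutes() % 60 >= 30) {
                    hours++; // round up on 30+ minutes
                }
                Duration rounded = Duration.ofHours(hours);
                return negative ? rounded.negated() : rounded;
            }

            public static void main(String[] args) {
                System.out.println(roundToHour(Duration.ofMinutes(95)));  // PT2H
                System.out.println(roundToHour(Duration.ofMinutes(-95))); // PT-2H
                System.out.println(roundToHour(Duration.ofMinutes(80)));  // PT1H
            }
        }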

  • Efficiency of the .NET garbage collector

    - by Jonas B
    OK, here's the deal. There are some people who put their lives in the hands of .NET's garbage collector and some who simply won't trust it. I am one of those who partially trusts it, as long as the code is not extremely performance-critical (I know, I know... performance-critical + .NET is not the favored combination), in which case I prefer to manually dispose of my objects and resources. What I am asking is whether there are any facts as to how efficient or inefficient, performance-wise, the garbage collector really is. Please don't share any personal opinions or likely assumptions based on experience; I want unbiased facts. I also don't want any pro/con discussions, because they won't answer the question. Thanks.

  • Distance between numpy arrays, columnwise

    - by Jaapsneep
    I have two 2D arrays, where the column vectors are feature vectors. One array is of size F x A, the other of F x B, where A << B. As an example, for A = 2 and F = 3 (B can be anything):

        arr1 = np.array([[1, 4],
                         [2, 5],
                         [3, 6]])

        arr2 = np.array([[1, 4, 7, 10, ..],
                         [2, 5, 8, 11, ..],
                         [3, 6, 9, 12, ..]])

    I want to calculate the distance between arr1 and a fragment of arr2 of equal size (in this case, 3x2), for each possible fragment of arr2. The column vectors are independent of each other, so I believe I should calculate the distance between each column vector in arr1 and a collection of column vectors ranging from i to i + A in arr2, and take the sum of these distances (not sure, though). Does numpy offer an efficient way of doing this, or will I have to take slices from the second array and, using another loop, calculate the distance between each column vector in arr1 and the corresponding column vector in the slice?

  • Embedded C++, any tips to avoid a local that's only used to return a value on the stack?

    - by lisarc
    I have a local that's only used for checking the result of another function and passing it on if it meets certain criteria. Most of the time, those criteria will never be met. Is there any way I can avoid this "extra" local? I only have about 1MB of storage for my binary, and I have several thousand function calls that follow this pattern. I know it's a minor thing, but if there's a better pattern I'd love to know!

        SomeDataType myclass::myFunction() {
            SomeDataType result; // do I really need this local???

            // I need to check the result and pass it on if it meets a certain condition
            result = doSomething();
            if (!result) {
                return result;
            }

            // do other things here ...

            // normal result of processing
            return SomeDataType(whatever);
        }
