Search Results

Search found 9017 results on 361 pages for 'efficient storage'.

Page 257 of 361

  • Is there a specialized educational institution for enterprise software design?

    - by dfafa
    Is a software engineering degree sufficient preparation for designing efficient code in enterprise architecture? That is what I want to do. Some people go to game schools (Vancouver Film School) to make games or work in that industry; are there similar programs for enterprise software design and development? Are there special courses in the Java EE and .NET space? Is it sensible to focus on Java alone, or on both? My ultimate goal is to consult on and develop enterprise software independently, but right now I am starting school and just keep learning on the side. Any guidance to resources on this industry, or your own insights, would be appreciated. Thank you.

    Read the article

  • C# Importing Large Volume of Data from CSV to Database

    - by guazz
    What is the most efficient method to load a large volume of data (3 million+ rows) from CSV into a database? The data needs to be formatted along the way (e.g. the name column needs to be split into first name and last name). I need to do this as efficiently as possible, i.e. within tight time constraints. I am leaning towards reading, transforming and loading the data row by row with a C# application. Is this ideal and, if not, what are my options? Should I use multithreading?
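
    A sketch of one common approach, assuming SQL Server as the target; the file, table and column names are hypothetical. Rows are transformed in C# as they stream in, buffered in bounded batches, and pushed with SqlBulkCopy rather than issued as row-by-row INSERTs:

        using System.Data;
        using System.Data.SqlClient;
        using System.IO;

        class CsvLoader
        {
            static void Load(string csvPath, string connectionString)
            {
                var batch = new DataTable();
                batch.Columns.Add("FirstName", typeof(string));
                batch.Columns.Add("LastName", typeof(string));

                using (var conn = new SqlConnection(connectionString))
                using (var bulk = new SqlBulkCopy(conn) { DestinationTableName = "People" })
                using (var reader = new StreamReader(csvPath))
                {
                    conn.Open();
                    string line;
                    while ((line = reader.ReadLine()) != null)
                    {
                        string[] fields = line.Split(',');
                        string[] name = fields[0].Split(' ');  // transform: split name
                        batch.Rows.Add(name[0], name.Length > 1 ? name[1] : "");

                        if (batch.Rows.Count == 50000)         // keep memory bounded
                        {
                            bulk.WriteToServer(batch);
                            batch.Clear();
                        }
                    }
                    if (batch.Rows.Count > 0)
                        bulk.WriteToServer(batch);
                }
            }
        }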

    Read the article

  • SQL Query: Using Cursors

    - by user2953138
    I need some directions for SQL Server and cursors. I have a table named Order:

        OrderID  Item  Amount
        1        A     10
        1        B     1
        2        A     5
        2        C     4
        2        D     21
        3        B     11

    I have a second table named Storage:

        Item  Amount
        A     40
        B     44
        C     20
        D     1

    For every OrderID, I want to check whether enough items are available; if not, I want to return an error message. Can this be done with cursors at all? Are nested cursors the solution? My main issue is understanding how to fetch each OrderID as an actual group (ID = 1, 2, 3, and so on) instead of row by row.
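
    For comparison, a set-based sketch that needs no cursor, assuming the two tables above: it lists every order/item pair that requests more than Storage holds, so an order is fillable exactly when it contributes no rows here. (A cursor version would fetch DISTINCT OrderIDs one at a time and run the same check per ID.)

        SELECT o.OrderID, o.Item
        FROM   [Order] AS o
        JOIN   Storage AS s ON s.Item = o.Item
        GROUP  BY o.OrderID, o.Item, s.Amount
        HAVING SUM(o.Amount) > s.Amount;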

    Read the article

  • Developing browser plug-ins?

    - by JavaMan
    I have a project in mind that involves developing an internet browser plug-in. I have knowledge of Java and DHTML, but nothing in the world of browser plug-in development, so I thought I would ask: what is the most efficient way to develop a browser plug-in? If possible, I'd like to streamline the process so that getting the plug-in to work in different browsers involves as little work as possible. Can this be done? I'm not asking for a full tutorial, just a few pointers; I don't want to waste my time or anyone else's.

    Read the article

  • How to roll my own index in C#?

    - by bill seacham
    I need a faster way to create an index file. The application generates pairs of items to be indexed. I currently add each pair as it is generated to a sorted dictionary and then write it out to a disk file. This works well until the number of items exceeds one million, at which point it slows to an unacceptable degree. There can be as many as three million data items to index. I prefer to avoid a database because I do not want to significantly increase the size of the deployment package, which is now less than half a megabyte. I tried Access, but it is even slower than the sorted dictionary; if it had an efficient bulk-load utility that might work, but I cannot find such a tool for Access. Is there a better way to roll my own index?
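
    One direction worth sketching, assuming string keys: buffer the pairs in a plain List and sort once at the end, trading SortedDictionary's per-insert cost (and per-node overhead) for a single O(n log n) sort over the three million items. The names here are illustrative only:

        using System.Collections.Generic;
        using System.IO;

        class IndexWriter
        {
            static void WriteIndex(IEnumerable<KeyValuePair<string, int>> pairs,
                                   string path)
            {
                var buffer = new List<KeyValuePair<string, int>>(pairs);
                // one sort at the end instead of keeping order on every insert
                buffer.Sort((a, b) => string.CompareOrdinal(a.Key, b.Key));
                using (var writer = new StreamWriter(path))
                {
                    foreach (var p in buffer)
                        writer.WriteLine("{0}\t{1}", p.Key, p.Value);
                }
            }
        }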

    Read the article

  • Efficiency of the .NET garbage collector

    - by Jonas B
    OK, here's the deal. Some people put their lives in the hands of .NET's garbage collector and some simply won't trust it. I am one of those who partially trusts it, as long as the code is not extremely performance-critical (I know, I know: performance-critical and .NET are not the favored combination), in which case I prefer to manually dispose of my objects and resources. What I am asking is whether there are any facts about how efficient or inefficient, performance-wise, the garbage collector really is. Please don't share personal opinions or likely assumptions based on experience; I want unbiased facts. I also don't want any pro/con discussion, because it won't answer the question. Thanks.

    Read the article

  • How to return the array with the biggest elements in C#?

    - by theateist
    I have multiple int arrays:

        1) [1, 202, 4, 55]
        2) [40, 7]
        3) [2, 48, 5]
        4) [40, 8, 90]

    I need to get the array that has the biggest numbers in all positions; in this case that would be array #4. Explanation: arrays #2 and #4 have the biggest number in the 1st position, so after the first iteration these two arrays remain ([40, 7] and [40, 8, 90]); comparing the 2nd position of those leaves array #4, because 8 > 7, and so on. Can you suggest an efficient algorithm for this? A LINQ solution would be preferable. UPDATE: there is no limitation on length; as soon as some number in a position is greater, that array is the biggest.
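
    A sketch of the comparison in LINQ-flavoured C#, under the rule stated in the update (the first position where one array is larger decides; on a tied prefix the longer array is taken to win, an assumption the question leaves open):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Program
        {
            static int CompareLex(int[] a, int[] b)
            {
                for (int i = 0; i < Math.Min(a.Length, b.Length); i++)
                    if (a[i] != b[i]) return a[i].CompareTo(b[i]);
                return a.Length.CompareTo(b.Length); // longer wins on a tied prefix
            }

            static void Main()
            {
                var arrays = new List<int[]>
                {
                    new[] { 1, 202, 4, 55 },
                    new[] { 40, 7 },
                    new[] { 2, 48, 5 },
                    new[] { 40, 8, 90 },
                };

                // single pass: keep the running maximum under the comparer
                int[] biggest = arrays.Aggregate(
                    (best, next) => CompareLex(next, best) > 0 ? next : best);

                Console.WriteLine(string.Join(", ", biggest)); // 40, 8, 90
            }
        }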

    Read the article

  • ANTLR - Embedding Java code, evaluate before or after?

    - by wvd
    Hello all, I'm writing a simple scripting language on top of Java/JVM, in which you can also embed Java code using {} brackets. The problem is how to parse this in the grammar. I have two options: 1) allow almost everything inside, such as [a-z|A-Z|0-9|_|$], and move on, or 2) pull in a separate Java grammar and use it to parse the embedded code (is that actually possible and efficient?). Option 2 amounts to a double check, since the Java code is checked again anyway when it is evaluated. My last question: is there a way to dynamically execute Java code, including against objects that were created at runtime? Thanks, William van Doorn
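
    On the last question, a minimal sketch of compiling and running captured Java source at runtime with the standard javax.tools API (Java 6+). The class name, file name, and the way the snippet reaches runtime objects (here, whatever you pass into run()) are hypothetical:

        import java.io.File;
        import java.io.PrintWriter;
        import java.net.URL;
        import java.net.URLClassLoader;
        import javax.tools.JavaCompiler;
        import javax.tools.ToolProvider;

        public class EmbeddedJavaRunner {
            public static void main(String[] args) throws Exception {
                String source = "public class Snippet {"
                              + "  public static void run(Object ctx) {"
                              + "    System.out.println(\"got: \" + ctx);"
                              + "  }"
                              + "}";

                File src = new File("Snippet.java");
                try (PrintWriter out = new PrintWriter(src)) { out.print(source); }

                JavaCompiler javac = ToolProvider.getSystemJavaCompiler();
                javac.run(null, null, null, src.getPath());  // emits Snippet.class

                URLClassLoader loader = new URLClassLoader(
                        new URL[] { new File(".").toURI().toURL() });
                // pass any object created at runtime into the compiled code
                loader.loadClass("Snippet")
                      .getMethod("run", Object.class)
                      .invoke(null, "a runtime object");
            }
        }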

    Read the article

  • Cutting a text file into multiple parts in Emacs

    - by Gaurish Telang
    Hi, I am using the GNU Emacs 23 editor. I have a huge text file of about 10,000 lines which I want to chop into multiple files. Using the mouse to select the required text and paste it into another file is really painful, and prone to errors too. If I want to divide the text file by line numbers into, say, 4 files, where the first file is lines 1-2500, the second lines 2501-5000, the third lines 5001-7500 and the fourth lines 7501-10000, how do I do this? At the very least, is there an efficient way to copy large regions of the file just by specifying line numbers?
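
    For the one-off case, a sketch from a shell rather than inside Emacs, assuming GNU coreutils and a file named big.txt (a hypothetical name); split does the line counting for you:

        # cut into 2500-line pieces: part_aa, part_ab, part_ac, part_ad
        split -l 2500 big.txt part_

    Within Emacs itself, you can mark a region by line number without the mouse: M-x goto-line to the first line, set the mark with C-SPC, M-x goto-line to the last line, then copy the region.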

    Read the article

  • Embarrassingly parallel workflow creates too many output files

    - by Hooked
    On a Linux cluster I run many (N > 10^6) independent computations. Each computation takes only a few minutes and the output is a handful of lines. When N was small I was able to store each result in a separate file to be parsed later. With large N, however, I find that I am wasting storage space (on file-creation overhead), and simple commands like ls require extra care due to internal limits of bash: -bash: /bin/ls: Argument list too long. Each computation is required to run through a qsub scheduling algorithm, so I am unable to create a master program that simply aggregates the output data into a single file. The simple solution of appending to a single file fails when two programs finish at the same time and interleave their output. I have no admin access to the cluster, so installing a system-wide database is not an option. How can I collate the output data from an embarrassingly parallel computation before it becomes unmanageable?
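
    One sketch that stays within ordinary user tools, assuming util-linux flock(1) is available on the compute nodes; the path names are hypothetical. Each job takes an exclusive advisory lock before appending its few lines, so simultaneous finishers can no longer interleave:

        (
            flock -x 200                        # wait for the exclusive lock
            cat job_output.txt >> /shared/results.dat
        ) 200>/shared/results.lock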

    Read the article

  • How to deal with Rounding-off TimeSpan?

    - by infant programmer
    I take the difference between two DateTime fields and store it in a TimeSpan variable. Now I have to round off the TimeSpan by the following rules: if the minutes in the TimeSpan are less than 30, then minutes and seconds must be set to zero; if the minutes are equal to or greater than 30, then hours must be incremented by 1 and minutes and seconds set to zero. The TimeSpan can also be negative, in which case I need to preserve the sign. I was able to meet the requirement for non-negative TimeSpans, and though I have written code that handles the negative case, I am not happy with how bulky and inefficient it is. Please suggest a simpler, more efficient method. Thanks and regards.
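
    A sketch of the rule above, handling the sign by rounding the absolute value and re-applying the sign at the end:

        using System;

        static class TimeSpanRounding
        {
            public static TimeSpan RoundToHour(TimeSpan span)
            {
                int sign = span < TimeSpan.Zero ? -1 : 1;
                TimeSpan abs = span.Duration();          // absolute value
                int hours = (int)abs.TotalHours;         // whole hours only
                if (abs.Minutes >= 30) hours++;          // >= 30 min rounds up
                return TimeSpan.FromHours(sign * hours); // minutes/seconds zeroed
            }
        }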

    Read the article

  • Return specific HREF attribute using Xpath query

    - by Michael Pasqualone
    Having a major brain freeze. I have the following chunk of code:

        // Get web address
        $domQuery = query_HtmlDocument($html, '//a[@class="productLink"]');
        foreach($domQuery as $rtn) {
            $web = $rtn->getAttribute('href');
        }

    This gets the entire href attribute, but I only want one specific query parameter from within the href. That is, if the href is:

        /website/product1234.do?code=1234&version=1.3&somethingelse=blaah

    I only want the value of "version", so in this example I want to return just "1.3". What's the most efficient way to do this?
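
    A sketch using PHP's built-in URL helpers rather than string juggling, assuming the href always carries a query string:

        $href = '/website/product1234.do?code=1234&version=1.3&somethingelse=blaah';
        parse_str(parse_url($href, PHP_URL_QUERY), $params);
        $version = isset($params['version']) ? $params['version'] : null; // "1.3"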

    Read the article

  • How to get columns from Excel files using Apache POI?

    - by posdef
    Hi, in order to do some statistical analysis I need to extract values from a column of an Excel sheet. I have been using the Apache POI package to read from Excel files, and it works fine when one needs to iterate over rows. However, I couldn't find anything about getting columns, neither in the API nor through Google searching. I need to get the max and min values of different columns and generate random numbers using them, so without picking up individual columns the only option is to iterate over rows and columns, collecting and comparing the values one by one, which doesn't sound very time-efficient. Any ideas on how to tackle this problem? Thanks.
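
    A sketch of the usual workaround, assuming POI's HSSF classes (.xls) and numeric cells: POI exposes no direct column accessor, so you walk the rows once and pick a fixed cell index per row to assemble a column, taking min and max as you go. The file name and column index are hypothetical:

        import java.io.FileInputStream;
        import org.apache.poi.hssf.usermodel.HSSFWorkbook;
        import org.apache.poi.ss.usermodel.Cell;
        import org.apache.poi.ss.usermodel.Row;
        import org.apache.poi.ss.usermodel.Sheet;

        public class ColumnStats {
            public static void main(String[] args) throws Exception {
                Sheet sheet = new HSSFWorkbook(new FileInputStream("data.xls"))
                        .getSheetAt(0);
                int column = 2;                       // the column to extract
                double min = Double.MAX_VALUE, max = -Double.MAX_VALUE;
                for (Row row : sheet) {
                    Cell cell = row.getCell(column);  // may be null for short rows
                    if (cell == null || cell.getCellType() != Cell.CELL_TYPE_NUMERIC)
                        continue;
                    double v = cell.getNumericCellValue();
                    min = Math.min(min, v);
                    max = Math.max(max, v);
                }
                System.out.println("min=" + min + " max=" + max);
            }
        }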

    Read the article

  • InnoDB not supported by webhost. What now?

    - by Peter Perhác
    I was developing a small WAMP web application on my laptop, where I have an instance of MySQL running, and I chose InnoDB for my DB engine. After several weeks of development I wanted to make it available to the public, and found out that the database server provided by my web host does not support InnoDB, only MyISAM. The create-and-populate script generated from the InnoDB schema on my laptop, when executed against the live database, manages to create the individual TABLEs but then runs into problems creating the VIEWs. Are views not supported in MyISAM? I know FOREIGN KEYs are not, which is largely why I chose InnoDB in the first place. What are my chances of making my InnoDB schema design work with MyISAM? Is there any straightforward way of converting a whole schema from one storage engine to the other? Or should I look for another web host that does provide MySQL with InnoDB support?
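
    For what it's worth: views live above the storage engine (they are a server feature from MySQL 5.0 onward), so failed CREATE VIEW statements on the host more likely point to missing privileges or DEFINER clauses in the dump than to MyISAM itself; that's worth raising with the host. The engine is a per-table property, so converting is mechanical (table name hypothetical):

        -- either edit ENGINE=InnoDB to ENGINE=MyISAM in the dump script, or:
        ALTER TABLE orders ENGINE = MyISAM;
        -- FOREIGN KEY clauses are parsed but ignored by MyISAM, so the script
        -- runs; the constraints simply stop being enforced.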

    Read the article

  • Migrating a Core Data Store from iCloud to local

    - by schmok
    I'm currently struggling with Core Data iCloud migration. I want to move a store from an iCloud ubiquity container (.nosync) to a local URL. The problem is that whenever I call something like this:

        NSPersistentStore *newStore =
            [self.persistentStoreCoordinator migratePersistentStore:currentiCloudStore
                                                              toURL:localURL
                                                            options:nil
                                                           withType:NSSQLiteStoreType
                                                              error:&error];

    I get this error:

        -[NSPersistentStoreCoordinator addPersistentStoreWithType:configuration:URL:options:error:](1055):
        CoreData: Ubiquity: Error: A persistent store which has been previously added to a coordinator
        using the iCloud integration options must always be added to the coordinator with the options
        present in the options dictionary. If you wish to use the store without iCloud, migrate the data
        from the iCloud store file to a new store file in local storage.
        file://localhost/Users/sch/Library/Containers/bla/Data/Documents/tmp.sqlite.
        This will be a fatal error in a future release

    Has anyone ever seen this error? Maybe I'm just missing the right migration options?
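
    A sketch of one fix, assuming OS X 10.9 / iOS 7 or later: pass an options dictionary instead of nil, including NSPersistentStoreRemoveUbiquitousMetadataOption so the migrated copy is stripped of its ubiquity metadata, which is what the error is complaining about:

        NSDictionary *options = @{
            NSPersistentStoreRemoveUbiquitousMetadataOption : @YES
        };
        NSError *error = nil;
        NSPersistentStore *newStore =
            [self.persistentStoreCoordinator migratePersistentStore:currentiCloudStore
                                                              toURL:localURL
                                                            options:options
                                                           withType:NSSQLiteStoreType
                                                              error:&error];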

    Read the article

  • iPhone App with Web Service Access

    - by blake
    I have been asked to write a complementary website/service for an iPhone app. The app creates images. The author wants these images to be uploaded to the server, into each user's personal storage area. The images need to be able to be pulled down to the iPhone later for editing, and users will also be able to view them on the website. I have yet to decide (or understand) what the best way of implementing this would be, and with no experience in iPhone development I have no idea what it can actually handle.

    Read the article

  • How to store images efficiently (memory-wise) while still being able to process them

    - by Sheeo
    I'm working on a Silverlight project where users get to create their own collages. The problem: when loading images into memory, I'm using BitmapImage so that they can be displayed directly with the Image control, but the data is locked up inside it afterwards. I've tried storing the images separately as well, but that sucks up huge amounts of RAM. So, in short: is there a class that will let me store JPEG images, show them with the Image control, and still export them afterwards? All this needs to be efficient; I'd rather avoid copying to ARGB arrays or going through WriteableBitmap. I need to work with large collections of images, up to 300 at most. Any help appreciated!
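
    One sketch that avoids the duplication, treating the original JPEG bytes as the single source of truth and materializing a BitmapImage from them only when an image is actually displayed; the encoded bytes stay available for export as-is:

        using System.IO;
        using System.Windows.Media.Imaging;

        public class CollageImage
        {
            public byte[] JpegBytes { get; set; }    // compact storage, export-ready

            public BitmapImage ToBitmap()
            {
                var bmp = new BitmapImage();
                using (var ms = new MemoryStream(JpegBytes))
                    bmp.SetSource(ms);               // decode on demand for display
                return bmp;
            }
        }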

    Read the article

  • Breaking the SQL Compact 8K Limit?

    - by David Veeneman
    I am creating a desktop application that stores rich-text documents in a SQL Compact database. Documents are converted to a byte array and stored in a binary column, and I am running into SQL Compact's 8K limit on binary field length. Is there a simple way to get around the 8K limit? I can come up with plenty of complicated ways, such as parsing into 8K chunks for storage and reassembling on fetch, but before I build something that complex I would like to make sure I can't solve the problem more simply, such as by changing the data type. If there is no simple way around the 8K limit, is there a best practice for storing documents greater than 8K? Thanks for your help.
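
    One simple avenue to check first, assuming SQL Server Compact's documented type limits (varbinary tops out at 8000 bytes, while the image type holds up to about 1 GB): the column type change alone may be enough. Table and column names are hypothetical:

        ALTER TABLE Documents ALTER COLUMN Body image;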

    Read the article

  • Is this a secure approach with ActiveRecord in Rails?

    - by Adnan
    Hello, I am using the following for my customers to unsubscribe from my mailing list:

        def index
          @user = User.find_by_salt(params[:subscribe_code])
          if @user.nil?
            flash[:notice] = "the link is not valid...."
            render :action => 'index'
          else
            Notification.delete_all(:user_id => @user.id)
            flash[:notice] = "you have been unsubscribed....."
            redirect_to :controller => 'home'
          end
        end

    My link looks like http://site.com/unsubscribe/32hj5h2j33j3h333, so the code above compares the random string to a field in my user table and accordingly deletes data from the notification table. My question: is this approach secure, and is there a better or more efficient way of doing it? All suggestions are welcome.
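
    A common hardening sketch: issue each user a dedicated, single-purpose token instead of reusing the password salt, so the unsubscribe link reveals nothing tied to authentication. The unsubscribe_token column is hypothetical; ActiveSupport::SecureRandom is the Rails 2.3 spelling (plain SecureRandom on newer stacks):

        class User < ActiveRecord::Base
          before_create :set_unsubscribe_token

          private

          def set_unsubscribe_token
            self.unsubscribe_token = ActiveSupport::SecureRandom.hex(20)
          end
        end

        # the controller then looks users up by that token:
        #   @user = User.find_by_unsubscribe_token(params[:subscribe_code])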

    Read the article

  • Best practice -- content tracking of remote data (cURL, file_get_contents, cron, et al.)?

    - by user322787
    I am attempting to build a script that will log data that changes every second. My initial thought was: just run a PHP file from cron that does a cURL request every second. But I have a very strong feeling that this isn't the right way to go about it. Here are my specifications: there are currently 10 sites I need to gather data from and log to a database, and this number will invariably increase over time, so the solution needs to be scalable. Each site spits out data to a URL every second but only keeps 10 lines on the page, and each fetch can contain up to 10 new lines, so I need to pick up the data every second to be sure I get all of it. As I will also be writing this data to my own DB, there is going to be I/O every second of every day for a considerably long time. Barring magic, what is the most efficient way to achieve this? It might help to know that the data I am getting every second is very small, under 500 bytes.
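
    One sketch of an alternative to per-second cron: a single long-running worker that fetches all the sources in parallel with curl_multi, queues the parsed lines for one batched INSERT, and sleeps out the remainder of each second. The URLs are hypothetical; parsing and the DB write are left as stubs:

        <?php
        $urls = array('http://example.com/feed1', 'http://example.com/feed2');

        while (true) {
            $start = microtime(true);
            $mh = curl_multi_init();
            $handles = array();
            foreach ($urls as $url) {
                $ch = curl_init($url);
                curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
                curl_setopt($ch, CURLOPT_TIMEOUT, 1);   // never overrun the tick
                curl_multi_add_handle($mh, $ch);
                $handles[] = $ch;
            }
            do {
                curl_multi_exec($mh, $running);
                curl_multi_select($mh, 0.05);           // avoid busy-waiting
            } while ($running > 0);
            foreach ($handles as $ch) {
                $body = curl_multi_getcontent($ch);
                // ... dedupe the ~10 lines and queue them for a batched INSERT
                curl_multi_remove_handle($mh, $ch);
                curl_close($ch);
            }
            curl_multi_close($mh);
            $left = 1.0 - (microtime(true) - $start);
            if ($left > 0) usleep((int)($left * 1000000));
        }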

    Read the article

  • Shared library to minimise size of FLA file

    - by Dmitry
    In a project we use a large Flash FLA file with lots of graphic assets, but the data that actually changes lives in just a few symbols. It is not very efficient to keep transferring the whole FLA file, which is now up to 20MB. I was thinking about using Shared Libraries, but it seems that even when you import an external library, all of its assets are copied into the destination file rather than linked from the external file; consequently the size of the FLA stays the same. Is there any way to split an FLA into a few separate files, so as to minimise the size of the most frequently updated file and keep all the unchanged data in another?

    Read the article

  • How do I request a single random row from a force.com database in SOQL?

    - by Ollie C
    The total row count is in the range 10k-100k rows. Can I use RAND() on force.com? Unfortunately, although all the rows have a unique numeric identifier, there are many gaps, and I'd often want to select a random row from a filtered subset anyway. I suspect there's no particularly efficient way to do this, but is it possible at all? Ultimately all I want to do is extract one row at random from a table (or from a subset matching specific filter criteria). If force.com doesn't let me select a random row directly, can I query the candidate rows, assign sequential IDs to them (say 1-1,035), select a random number in that range locally (say 349), and then fetch row 349?
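
    A sketch in Apex, since SOQL itself has no RAND(): query only the Ids that pass the filter, pick an index at random locally, then fetch that one row. The object and filter here are hypothetical, and the usual 50,000-row query governor limit applies to the Id query:

        List<Account> ids = [SELECT Id FROM Account WHERE Industry = 'Energy'];
        Integer pick = Math.mod(Math.abs(Crypto.getRandomInteger()), ids.size());
        Account randomRow = [SELECT Id, Name FROM Account WHERE Id = :ids[pick].Id];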

    Read the article

  • CUDA small kernel 2d convolution - how to do it

    - by paulAl
    I've been experimenting with CUDA kernels for days, trying to perform a fast 2D convolution between a 500x500 image (though I could also vary the dimensions) and a very small 2D kernel (a 2D Laplacian kernel, so 3x3: too small to take huge advantage of all the CUDA threads). I created a classic CPU implementation (two for loops, as straightforward as you would expect) and then started writing CUDA kernels. After a few disappointing attempts at a faster convolution I ended up with the code at http://www.evl.uic.edu/sjames/cs525/final.html (see the Shared Memory section): it basically lets a 16x16 thread block load all the convolution data it needs into shared memory and then performs the convolution. Even so, the CPU is still a lot faster. I didn't try the FFT approach, because the CUDA SDK states that it is efficient for large kernel sizes. Whether or not you read everything I wrote, my question is: how can I perform a fast 2D convolution between a relatively large image and a very small kernel (3x3) with CUDA?
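
    For a mask this small, a sketch of the usual approach: keep the 3x3 coefficients in constant memory (cached and broadcast to all threads) and let each thread produce one output pixel; shared-memory tiling buys little at 3x3. It is also worth checking that the CPU/GPU timing excludes the host-device copies, which can dominate for a single 500x500 frame:

        __constant__ float d_mask[9];   // upload once with cudaMemcpyToSymbol

        __global__ void conv3x3(const float *in, float *out, int w, int h)
        {
            int x = blockIdx.x * blockDim.x + threadIdx.x;
            int y = blockIdx.y * blockDim.y + threadIdx.y;
            if (x < 1 || y < 1 || x >= w - 1 || y >= h - 1)
                return;                 // skip the one-pixel border

            float sum = 0.0f;
            for (int ky = -1; ky <= 1; ++ky)
                for (int kx = -1; kx <= 1; ++kx)
                    sum += in[(y + ky) * w + (x + kx)]
                         * d_mask[(ky + 1) * 3 + (kx + 1)];
            out[y * w + x] = sum;
        }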

    Read the article

  • C arrays: setting size dynamically?

    - by user336994
    Hello, I am new to C programming. I am trying to set the size of an array using a variable, but I am getting an error: storage size of 'array' isn't constant.

        01  int bound = bound * 4;
        02  static GLubyte vertsArray[bound];

    I have noticed that when I replace bound (within the brackets on line 02) with a number, say 20, the program runs with no problems. But I am trying to set the size of the array dynamically. Any ideas why I am getting this error? Thanks much.
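
    A sketch of the usual fix: a static array needs a compile-time constant size (a C99 variable-length array cannot be static either), so a size known only at run time has to come from the heap. GLubyte is typedef'd here just to keep the example self-contained:

        #include <stdlib.h>

        typedef unsigned char GLubyte;   /* normally from the OpenGL headers */

        void build_verts(int bound)
        {
            size_t n = (size_t)bound * 4;
            GLubyte *vertsArray = malloc(n * sizeof *vertsArray);
            if (vertsArray == NULL)
                return;                  /* allocation failed */

            /* ... fill and use vertsArray ... */

            free(vertsArray);
        }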

    Read the article

  • Efficiently compute the row sums of a 3d array in R

    - by Gavin Simpson
    Consider the array a:

        > a <- array(c(1:9, 1:9), c(3,3,2))
        > a
        , , 1

             [,1] [,2] [,3]
        [1,]    1    4    7
        [2,]    2    5    8
        [3,]    3    6    9

        , , 2

             [,1] [,2] [,3]
        [1,]    1    4    7
        [2,]    2    5    8
        [3,]    3    6    9

    How do we efficiently compute the row sums of the matrices indexed by the third dimension, such that the result is:

             [,1] [,2]
        [1,]   12   12
        [2,]   15   15
        [3,]   18   18

    The column sums are easy via the 'dims' argument of colSums():

        > colSums(a, dims = 1)

    but I cannot find a way to use rowSums() on the array to achieve the desired result, as it has a different interpretation of 'dims' to that of colSums(). It is simple to compute the desired row sums using:

        > apply(a, 3, rowSums)
             [,1] [,2]
        [1,]   12   12
        [2,]   15   15
        [3,]   18   18

    but that is just hiding the loop. Are there other efficient, truly vectorised, ways of computing the required row sums?
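
    One truly vectorised route, for comparison: permute the dimension to be summed (the columns, dimension 2) to the front with aperm(), then let colSums() collapse it in a single call:

        > colSums(aperm(a, c(2, 1, 3)), dims = 1)
             [,1] [,2]
        [1,]   12   12
        [2,]   15   15
        [3,]   18   18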

    Read the article
