Search Results

Search found 13928 results on 558 pages for 'large scale nat'.


  • Java: Calculate distance between a large number of locations and performance

    - by Ally
    I'm creating an application that will tell a user how far away a large number of points are from their current position. Each point has a longitude and latitude. I've read over this article http://www.movable-type.co.uk/scripts/latlong.html and seen this post http://stackoverflow.com/questions/837872/calculate-distance-in-meters-when-you-know-longitude-and-latitude-in-java There are a number of calculations (50-200) that need to be carried out. If speed is more important than the accuracy of these calculations, which approach is best?
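
    If speed matters more than accuracy over the distances involved, the equirectangular approximation described on the cited movable-type page avoids most of the trigonometry of the haversine formula. A minimal Java sketch (the method name is my own):

      // Equirectangular approximation: treats the two points as lying on a flat
      // plane; fast, and accurate enough when the points are reasonably close.
      static double fastDistanceMeters(double lat1, double lon1, double lat2, double lon2) {
          final double R = 6371000; // mean Earth radius in metres
          double x = Math.toRadians(lon2 - lon1) * Math.cos(Math.toRadians((lat1 + lat2) / 2));
          double y = Math.toRadians(lat2 - lat1);
          return Math.sqrt(x * x + y * y) * R; // Pythagoras on the projected plane
      }

    For 50-200 points either formula runs in well under a millisecond, so it is worth profiling before giving up the exactness of haversine.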


  • Make dialogs compatible with "large fonts".

    - by Narcís Calvet
    What do you think are best practices for making a Windows dialog compatible with both standard fonts (96 DPI) and the "large fonts" setting (120 DPI), so that controls don't overlap or get cut off? BTW: just in case it's relevant, I'm interested in doing this for Delphi dialogs. Thanks in advance!


  • Unable to upload large files to Google Docs

    - by Preeti
    I am uploading documents to Google Docs like this:

      DocumentsService myService = new DocumentsService("");
      myService.setUserCredentials("[email protected]", password);
      DocumentEntry newEntry = myService.UploadDocument(@"C:\Sample.txt", "Sample.txt");

    But when I try to upload a file of 3 MB I get an exception:

      An unhandled exception of type 'Google.GData.Client.GDataRequestException' occurred in Google.GData.Client.dll
      Additional information: Execution of request failed: http://docs.google.com/feeds/documents/private/full

    How can I upload large files to Google Docs? I am using version 2 of the Google API.


  • VS2008 intellisense performance issue with large number of partial static classes

    - by scebula
    My question is a follow-up to the issue posted here regarding the IntelliSense performance problem when building a large solution in VS2008 that has many partial static classes. Since Microsoft does not seem to be addressing the issue for VS2008, are there other ways around the problem? Waiting for VS2010 is not an option at this time. The solution proposed in the previous post is not practical, as some of the partial classes may be regenerated, which would be a maintenance headache.


  • Isolate large amounts of data w/ jQuery

    - by Bry4n
    I work for a transit agency and I have large amounts of data (mostly times), and I need a way to filter the data using two textboxes (To and From). I found jQuery quicksearch, but it seems to work with only one textbox. If anyone has any ideas via jQuery or some other client-side library, that would be fantastic.


  • Large or small company?

    - by James
    Hi, I would like to hear some opinions on working in small companies versus large corporations. So far my personal experience has been that, especially for junior programmers, small companies provide a more solid grounding, because mentoring comes from experienced colleagues. In larger corporations, on the other hand, the experienced developers have already worked their way out of reach. Is this a general feeling, or just my bad experience?


  • Installing a large size apk application on Android phone

    - by user348686
    I have an application which is 130 MB in size. When I try to install it, an insufficient-memory error is displayed, but I have around 170 MB of available space left in internal memory. How can I install this app? The size of the app is large because it contains many media files. On a Motorola Droid it installs fine, but on a Nexus One it gives this error.


  • Entering large amounts of text data in HTML form

    - by MakkyNZ
    Hi, I have a web page that needs to allow users to paste large amounts of text data from an Excel spreadsheet directly into the input controls of the page. Typically the user would paste anywhere between 150 and 300 spreadsheet cells, all from one column. What are some good ways to capture this data on the web page? Thanks.


  • Refactoring one large list of C# properties/fields

    - by dotnetdev
    If you take a look at http://www.c-sharpcorner.com/UploadFile/dhananjaycoder/activedirectoryoperations11132009113015AM/activedirectoryoperations.aspx, there is a huge list of properties for AD in one class. What is a good way to refactor such a large list of (related) fields? Would making separate classes be adequate, or is there a better way to make this more manageable? Thanks


  • Parallel computation of the median of a large array

    - by recipriversexclusion
    I got asked this question once and still haven't been able to figure it out: You have an array of N integers, where N is large, say, a billion. You want to calculate the median value of this array. Assume you have m+1 machines (m workers, one master) to distribute the job to. How would you go about doing this? Since the median is a nonlinear operator, you can't just find the median in each machine and then take the median of those values.
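
    One standard approach (my sketch, not something from the question) is to binary-search the median's value instead of moving data: the master proposes a candidate, each worker reports how many of its local elements are at most that candidate, and the master narrows the range until it isolates the value of rank N/2. For 32-bit integers that is at most ~32 rounds, each costing one small message per worker. A Java sketch with the worker call stubbed out as a hypothetical interface:

      import java.util.List;

      // Hypothetical RPC surface: each worker holds one shard of the array.
      interface Worker {
          long countAtMost(int candidate); // number of local elements <= candidate
      }

      class DistributedMedian {
          // Returns the lower median: the element of 0-based rank totalCount / 2.
          static int median(List<Worker> workers, long totalCount) {
              long targetRank = totalCount / 2;
              int lo = Integer.MIN_VALUE, hi = Integer.MAX_VALUE;
              while (lo < hi) {
                  int mid = (int) Math.floorDiv((long) lo + hi, 2); // floor midpoint, no overflow
                  long atMost = 0;
                  for (Worker w : workers) {   // one query per worker; could be issued in parallel
                      atMost += w.countAtMost(mid);
                  }
                  if (atMost > targetRank) {
                      hi = mid;      // enough elements <= mid: the median is <= mid
                  } else {
                      lo = mid + 1;  // too few: the median is > mid
                  }
              }
              return lo; // smallest value whose cumulative count exceeds targetRank
          }
      }

    Each round costs O(log n) per worker if the shards are pre-sorted (binary search for the count), or O(N/m) with a linear scan; either way no large data ever crosses the network.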


  • HTML table print large column

    - by Emma
    Windows XP, IE 7. If the data in one of the table columns is more than, say, 800 bytes, the table seems to print as partial pages. Print preview looks the same (it spans multiple partial pages). What is the best way to print a large number of rows with wide, fixed-width columns properly, without blank or partial pages? I am using a table with a thead and a colgroup with column widths of 14%.


  • Dojo treemodel- adding large number of items

    - by Ashley
    I am trying to add a large number of items (100+) to my tree via ForestStoreModel by calling newItem in a loop. This seems to be quite slow and locks up the browser. Is there any way I can do something similar to grid's beginUpdate & endUpdate? I want to basically 'turn off' my tree, add 100 items in a batch, then 'turn on' my tree. Any ideas? Thanks!


  • How do you handle large projects?

    - by cam
    I've just inherited a large project previously coded by about 4-5 people. The documentation consists of comments, and is not very well written. I have to get up to date on this project. How do I start? It consists of many different source files. Do you just dig in? Are there tools that can help visualize the structure/flow?


  • Dealing with a large flat data file with a very big record length

    - by gsp
    I have a large data file that is created by a shell script. The next script processes it by sorting and reading it several times, which takes more than 14 hours; that is not viable. I want to replace this long-running script with a program, probably in Java, C, or COBOL, that can run on Windows or on Sun Solaris. Each time, I have to read a group of records, sort and process them, write to the output sort file, and at the same time insert into DB2/SQL tables.
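
    In Java, the usual replacement for a multi-pass sort like this is a single external merge sort: read chunks that fit in memory, sort and spill each chunk to a temporary run file, then do one k-way merge with a priority queue. A sketch assuming newline-delimited records compared lexicographically (an assumption; fixed-length record parsing and the DB2 inserts would hook in where noted):

      import java.io.*;
      import java.nio.file.*;
      import java.util.*;

      public class ExternalSort {
          static final int CHUNK_LINES = 1_000_000; // tune to the available heap

          public static void sort(Path input, Path output) throws IOException {
              List<Path> runs = new ArrayList<>();
              try (BufferedReader in = Files.newBufferedReader(input)) {
                  List<String> chunk = new ArrayList<>();
                  String line;
                  while ((line = in.readLine()) != null) {
                      chunk.add(line);
                      if (chunk.size() >= CHUNK_LINES) runs.add(spill(chunk));
                  }
                  if (!chunk.isEmpty()) runs.add(spill(chunk));
              }
              merge(runs, output);
          }

          // Sort one in-memory chunk and write it out as a temporary "run" file.
          private static Path spill(List<String> chunk) throws IOException {
              Collections.sort(chunk);
              Path run = Files.createTempFile("run", ".tmp");
              Files.write(run, chunk);
              chunk.clear();
              return run;
          }

          // K-way merge: the queue orders [head line, reader] pairs by head line.
          private static void merge(List<Path> runs, Path output) throws IOException {
              PriorityQueue<Object[]> heads =
                      new PriorityQueue<>(Comparator.comparing((Object[] e) -> (String) e[0]));
              for (Path run : runs) {
                  BufferedReader r = Files.newBufferedReader(run);
                  String head = r.readLine();
                  if (head != null) heads.add(new Object[] { head, r }); else r.close();
              }
              try (BufferedWriter out = Files.newBufferedWriter(output)) {
                  while (!heads.isEmpty()) {
                      Object[] e = heads.poll();
                      out.write((String) e[0]); // per-record processing / DB2 insert goes here
                      out.newLine();
                      String next = ((BufferedReader) e[1]).readLine();
                      if (next == null) ((BufferedReader) e[1]).close();
                      else { e[0] = next; heads.add(e); }
                  }
              }
          }
      }

    This makes two sequential passes over the data in total, which is typically what wins against repeated full-file sorts; the same shape works in C with qsort and a small heap.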


  • How to save a large nhibernate collection without causing OutOfMemoryException

    - by Michael Hedgpeth
    How do I save a large collection with NHibernate whose elements surpass the amount of memory allowed for the process? I am trying to save a Video object with NHibernate which has a large number of Screenshots (see below for code). Each Screenshot contains a byte[], so after NHibernate tries to save 10,000 or so records at once, an OutOfMemoryException is thrown. Normally I would break up the save and flush the session after every 500 or so records, but in this case I need to save the collection, because it automatically saves the SortOrder and VideoId for me (without the Screenshot having to know that it is part of a Video). What is the best approach given my situation? Is there a way to break up this save without forcing the Screenshot to have knowledge of its parent Video? For your reference, here is the code from the simple sample I created:

      public class Video
      {
          public long Id { get; set; }
          public string Name { get; set; }

          public Video()
          {
              Screenshots = new ArrayList();
          }

          public IList Screenshots { get; set; }
      }

      public class Screenshot
      {
          public long Id { get; set; }
          public byte[] Data { get; set; }
      }

    And mappings:

      <?xml version="1.0" encoding="utf-8" ?>
      <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"
                         assembly="SavingScreenshotsTrial"
                         namespace="SavingScreenshotsTrial"
                         default-access="property">
        <class name="Screenshot" lazy="false">
          <id name="Id" type="Int64">
            <generator class="hilo"/>
          </id>
          <property name="Data" column="Data" type="BinaryBlob" length="2147483647" not-null="true" />
        </class>
      </hibernate-mapping>

      <?xml version="1.0" encoding="utf-8" ?>
      <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"
                         assembly="SavingScreenshotsTrial"
                         namespace="SavingScreenshotsTrial">
        <class name="Video" lazy="false" table="Video" discriminator-value="0" abstract="true">
          <id name="Id" type="Int64" access="property">
            <generator class="hilo"/>
          </id>
          <property name="Name" />
          <list name="Screenshots" cascade="all-delete-orphan" lazy="false">
            <key column="VideoId" />
            <index column="SortOrder" />
            <one-to-many class="Screenshot" />
          </list>
        </class>
      </hibernate-mapping>

    When I try to save a Video with 10,000 screenshots, it throws an OutOfMemoryException. Here is the code I'm using:

      using (var session = CreateSession())
      {
          Video video = new Video();
          for (int i = 0; i < 10000; i++)
          {
              video.Screenshots.Add(new Screenshot() { Data = camera.TakeScreenshot(resolution) });
          }
          session.SaveOrUpdate(video);
      }


  • Handling long-running large transactions with Perl DBI

    - by 1stdayonthejob
    I've got a large transaction that consists of getting lots of data from database A, doing some manipulations with this data, then inserting the manipulated data into database B. I only have permission to select in database A, but I can create tables and insert/update etc. in database B. The manipulation and insertion part is written in Perl and already in use for loading data into database B from other data sources, so all that's required is to get the necessary data from database A and use it to initialize the Perl classes.

    How can I go about doing this so I can easily track back and pick up from where the error happened if any error occurs during the manipulation or insertion procedures (database disconnection, problems with class initialization because of invalid values, hard disk failure, etc.)? Doing the transaction in one go doesn't seem like a good option, because the amount of data from database A means it would take at least a day or two for the data manipulation and insertion into database B.

    The data from database A can be grouped into around 1000 groups using unique keys, with each key covering thousands of rows. One way I thought I could do this is to write a script that commits per group, which means I have to track which groups have already been inserted into database B. The only ways I can think of to track that progress are a log file or a table in database B. A second way that might work is to dump all the fields needed for loading the classes into a flat file, then read the file to initialize the classes and insert into database B. This also means some logging, but it should narrow any error down to the exact row of the flat file. The script will look something like this:

      use strict;
      use warnings;
      use DBI;

      # connect to database A
      my $dbh = DBI->connect('dbi:oracle:my_db', $user, $password,
                             { RaiseError => 1, AutoCommit => 0 });

      # statement to get data based on the group unique key
      my $sth = $dbh->prepare($my_sql);

      my @groups;    # I have a list of these already

      open my $fh, '>>', 'my_logfile' or die "can't open logfile $!";

      eval {
          foreach my $g (@groups) {
              # subroutine to check if the group has already been processed,
              # either from the log file or from a database table
              next if is_processed($g);

              $sth->execute($g);
              my $data = $sth->fetchall_arrayref;

              # manipulate $data, then use it to load perl classes
              # for insertion into database B
              # .
              # .
              # .

              print $fh "$g\n";    # log the group as processed
          }
      };
      if ($@) {
          $dbh->rollback;
          die "something wrong...rollback";
      }

    So if any errors do occur, I can just run this script again and it should skip the groups or rows that have already been processed and continue. Both of these methods are just variations on the same theme, and both require going back to where I've been tracking my progress (in a table or file), skipping what has been committed to database B, and processing the remaining data. I'm sure there's a better way of doing this, but I'm struggling to think of other solutions. Is there another way of handling large transactions between databases that require data manipulation between getting data out of one and inserting into the other? The process doesn't need to be all in Perl, as long as I can reuse the Perl classes for manipulating and inserting the data into the database.


  • Database structure for ecommerce site

    - by imanc
    Hey guys, I have been tasked with designing an ecommerce solution. The aspect causing me the most problems is the database. Currently the site consists of 10+ country-based shops, each with its own database (all residing on the same MySQL instance). For the new site I'd rather all these shop databases were merged into one, so that every table (products, orders, customers, etc.) has a shop_id field. From a programming perspective this seems to make the most sense, as we won't have to manage data across multiple databases.

    Currently the entire site generates about 120k orders a year, but it is experiencing fairly heavy growth and we need to design a solution that will scale. In 5 years there may be more than a million orders per year, and a database that contains 5 years of order history (archiving may be a solution here). The question is: do we use a single database, or do we keep the database-per-shop structure? I am currently trying to find supporting evidence for either avenue. The company I am designing the solution for prefers the per-shop structure because they believe it will allow the sites to scale, but my argument is that a single shop's database probably won't get busy enough over the next few years to exceed the capacity of a MySQL database on a "no expenses spared" hardware set-up.

    I am wondering if anyone has any advice either way? Does anyone have experience with websites/ecommerce sites whose tables contain millions of records? I know there is probably no clear answer here, but at what point do we have too many records, or too-large table files, for a fast-loading site? Also, if anyone has advice on sources of information (books, websites, etc.) where I can do further research, it would be highly appreciated! Cheers, imanc


  • A view interface for large object/array dumps

    - by user685107
    I want to embed in a page a detailed structure report of my model objects, like print_r() or var_export() produce (right now I do this by running var_export() on get_object_vars()). But what I actually want to see is, in most cases, only a few properties, so at the moment I have to Ctrl+F and hunt for the variable I want instead of just seeing it the moment the page finishes loading. So I'm embedding buttons to show/hide large arrays etc., but I thought: what if the thing I'm building right now already exists? So does it?

    Update: what would my ideal interface look like? First of all, the dumped models fit on the first screen: all the properties can be seen at a glance (there are not many of them, around 10 per model across three models, so it is possible). Small arrays can be shown unrolled too, with the size that counts as "small" being definable. Ideally, the user can see the values of the properties without clicking, scrolling, or typing anything. There should also be some improvements to how values are presented: say, if an array is empty, show "array 'My_big_array' is empty", and if a boolean variable starting with is_, has_ or had_ is false, render the name itself in red (take is_available, for example: show it as is_NOT_available), and if it is true, show is_available in green, with no value printed at all. The same goes for defined constants. That would be ideal; I want to focus on this kind of switch. Krumo seems useful, but since it always collapses a variable regardless of how large it is, I cannot use it as-is, though something similar may appear on github soon :)

    Second update: any programmer who sees is_available = false will know what it means, hence the colour indication with no value shown. One thing I forgot: these "switches", let's call them that, may be important or not. Right now some of the important ones render in green or red for global things like caching, shown as "Caching is… ON" with "ON" in green (and "OFF" in red when disabled), while the descriptive words "Caching is…" stay black. The less important ones, for example "REVEAL_TIES is… not set", have "not set" in grey while the description stays black; and if it were set, the whole phrase would be black, since nothing important hangs on it: if this small utility for exposing under-the-hood state is working, I will see its messages, and if it isn't, the site works independently of its state. Dividing the switches into important and unimportant, with matching colours, should improve readability, especially for users who are not programmers and just enabled debug mode because some guy from bugzilla said to; for them it would help show what matters and what doesn't.

