Search Results

Search found 65999 results on 2640 pages for 'large data volumes'.

Page 82/2640 | < Previous Page | 78 79 80 81 82 83 84 85 86 87 88 89  | Next Page >

  • List of drugs for sample data

    - by Skoder
    Where can I find a list of common medical drugs? Researching and typing 150+ drug names would be quite inefficient. In general, are there any sites which have a list of items for developers to use in applications? For example, you can download dictionaries in specific formats (e.g. XML) for use in word games.

    Read the article

  • ASP.Net 4.0 Database Created Pages

    - by Tyler
    I want to create ASP.NET 4.0 dynamic pages loaded from my MS SQL Server database. Basically, it's a list of locations with information. For example: Location1 would have the page www.site.com/location/location1.aspx, and Location44 would have the page www.site.com/location/location44.aspx. I don't even know where to start with this; URL rewriting, maybe?

    Read the article

  • Exception when inserting data into database using JPA in NetBeans

    - by sandeep
    SEVERE: Local Exception Stack: Exception [EclipseLink-7092] (Eclipse Persistence Services - 2.0.0.v20091127-r5931): org.eclipse.persistence.exceptions.ValidationException Exception Description: Cannot add a query whose types conflict with an existing query. Query To Be Added: [ReadAllQuery(name="Voter.findAll" referenceClass=Voter jpql="SELECT v FROM Voter v")] is named: [Voter.findAll] with arguments [[]].The existing conflicting query: [ReadAllQuery(name="Voter.findAll" referenceClass=Voter jpql="SELECT v FROM Voter v")] is named: [Voter.findAll] with arguments: [[]].

    Read the article

  • How to process large block data visualization with Flex?

    - by hydra1983
    I know that's a big topic. However, it helps to know some general ideas for handling such problems. I have an application which requires Flex to render statistics calculated on the fly on the client side from a downloaded data set. The problems are: the data set is large and needs more than 10 seconds to be downloaded, and there are filters that control the statistics calculation algorithms. If the user changes the filters, it takes a long time to recalculate the result and the UI freezes.

    Read the article

  • Data Access Layer in an ASP.NET website

    - by user3519124
    I have a DAL class file in my project that my teacher sent me and explained, but I did not really understand it. It has a number of functions, and I understand only a few of them, like connecting to the database or creating a command object, but there are two that I don't understand:

        // Runs the given SELECT statement and returns the result set as a DataTable.
        public static DataTable GetTable(string str)
        {
            OleDbConnection con = DAL.GetConnection();
            OleDbCommand cmd = DAL.GetCommand(con, str);
            DataTable dt = new DataTable();
            OleDbDataAdapter adp = new OleDbDataAdapter();
            adp.SelectCommand = cmd;
            adp.Fill(dt);   // the adapter opens the connection, runs the query and fills the table
            return dt;
        }

        // Runs an INSERT/UPDATE/DELETE statement and returns the number of affected rows.
        public static int ExecuteNonQuery(string str)
        {
            int num = -1;
            OleDbConnection con = DAL.GetConnection();
            con.Open();
            if (con.State == ConnectionState.Open)
            {
                OleDbCommand cmd = DAL.GetCommand(con, str);
                num = cmd.ExecuteNonQuery();
                con.Close();
            }
            return num;
        }

    Thank you :)

    Read the article

  • Which data structure for a list of objects + datagrid view

    - by Martin
    Hi, I have to develop code which will store a list of objects, as in the example below:

        101, value 11, value 12, value 13 ...etc
        102, value 21, value 22, value 23 ...etc
        103, value 31, value 32, value 33 ...etc
        104, value 41, value 42, value 43 ...etc

    The difficulty is that the first column is an identifier, and the whole table should always be sorted by it. Easy access to each element is required. Additionally, the list should be easy to update and to extend by adding elements at the end as well as at the front, while staying sorted by the first column. Finally, I would like to be able to display the values of the above in a DataGridView. What is most important is the performance of the implementation, as rows will be updated many times per second, and the DataGridView should be able to display all changes immediately. I was thinking about creating a class for the values and then a Dictionary, but encountered a problem with displaying the values in the grid view. What would be the most optimal way of implementing the code? Thanks in advance, Martin

    Read the article

  • Implement delegates for Core Data's fetched results controller or not

    - by Spanky
    What advantage is there to implementing the four delegate methods:

        - (void)controllerWillChangeContent:(NSFetchedResultsController *)controller
        - (void)controller:(NSFetchedResultsController *)controller didChangeSection:(id <NSFetchedResultsSectionInfo>)sectionInfo atIndex:(NSUInteger)sectionIndex forChangeType:(NSFetchedResultsChangeType)type
        - (void)controller:(NSFetchedResultsController *)controller didChangeObject:(id)anObject atIndexPath:(NSIndexPath *)indexPath forChangeType:(NSFetchedResultsChangeType)type newIndexPath:(NSIndexPath *)newIndexPath
        - (void)controllerDidChangeContent:(NSFetchedResultsController *)controller

    rather than implementing only:

        - (void)controllerDidChangeContent:(NSFetchedResultsController *)controller

    Any help appreciated :)

    Read the article

  • Multi-property "transactions" in Core Data / NSManagedObject / NSFetchedResultsController?

    - by Martijn Thé
    Hi, Is it possible to set multiple properties of an NSManagedObject and have the NSFetchedResultsController call controllerDidChangeContent: only once? In other words, is it possible to say something like:

        [managedObject beginChanges];
        [managedObject setPropertyA:@"Foo"];
        [managedObject setPropertyB:@"Bar"];
        [managedObject commitChanges];

    and then have the NSFetchedResultsController call controllerDidChangeContent: (and the other methods) only one time? Thanks!

    Read the article

  • How to recover ~1TB data from a bricked NAS?

    - by alastairs
    I bought an IcyBox NAS a little while back, and it recently died on me. I have physical access to the disks inside (1.5TB RAID 1 array), and the box was running a version of Linux. I now have the difficulty of retrieving the data from the disks. All I have available are 2 Windows machines, one of which has sufficient free space to hold the data from the NAS. What would be the quickest and easiest way to retrieve the data from the disks?

    Read the article

  • Reason for monolithic data files

    - by Ali Lown
    Primarily this seems to be a technique used by games, where they have all the sounds in one file, all the textures in another, etc., with these files commonly reaching gigabyte sizes. What is the reason for doing this rather than keeping everything as small files in subdirectories (one per texture, say)? Many small games use the small-file approach, while the monolithic system seems to be favoured by larger companies. Is there some file system overhead with lots of small files? Or are they trying to protect their property, even though most of these files just seem to be a compressed archive with a new extension? (A toy sketch of the packed-file idea follows this entry.)

    Read the article
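
    As a toy illustration of the packed-file idea discussed above (not how any particular engine actually does it), the sketch below concatenates a directory of small assets into one blob plus a JSON index, so reading an asset later means one seek inside a single open file instead of opening thousands of small files. The two-file layout is invented purely for this example.

        # Toy monolithic asset pack: one blob file plus a JSON index
        # mapping asset name -> (offset, size). Format invented for illustration.
        import json
        import os

        def pack(asset_dir, pack_path):
            index = {}
            with open(pack_path, "wb") as out:
                offset = 0
                for name in sorted(os.listdir(asset_dir)):
                    with open(os.path.join(asset_dir, name), "rb") as f:
                        data = f.read()
                    out.write(data)
                    index[name] = (offset, len(data))
                    offset += len(data)
            with open(pack_path + ".idx", "w") as idx:
                json.dump(index, idx)

        def read_asset(pack_path, name):
            with open(pack_path + ".idx") as idx:
                offset, size = json.load(idx)[name]
            with open(pack_path, "rb") as blob:
                blob.seek(offset)
                return blob.read(size)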

  • Big trouble after app update. CoreData migration error

    - by MrBr
    This morning we had big trouble with our iPhone app; we even had to take it off the store. The thing is that we made really small changes to our xcdatamodel. We thought the update process would automatically take care of exchanging it correctly, until we found out that something like Core Data migration exists. We are using UIManagedDocument to connect to the persistent store. How is it possible to exchange this file with the new one? While we were developing, we just uninstalled the whole app from the device and then installed it again and everything worked. How can we simulate this process for App Store updates?

    UPDATE: I tried to set the migration option like this:

        _database = [[UIManagedDocument alloc] init];
        NSMutableDictionary *options = [[NSMutableDictionary alloc] init];
        [options setObject:[NSNumber numberWithBool:YES] forKey:NSMigratePersistentStoresAutomaticallyOption];
        _database.persistentStoreOptions = options;

    but the app still crashes with:

        *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'This NSPersistentStoreCoordinator has no persistent stores. It cannot perform a save operation.'

    Read the article

  • Modifying MetaData in ASP.NET Dynamic Data

    - by loviji
    Hello. I want to modify the metadata. For example, let's work with the Details.aspx page. It has this code:

        protected void Page_Init(object sender, EventArgs e)
        {
            table = DynamicDataRouteHandler.GetRequestMetaTable(Context);
            FormView1.SetMetaTable(table);
            DetailsDataSource.EntityTypeName = table.EntityType.AssemblyQualifiedName;
        }

    Now, how do I modify the table from here?

    Read the article

  • jQuery: Sorting hierarchical data?

    - by Industrial
    Hi everybody, I have tried for some time to work out a way of sorting nested categories with jQuery. I failed to build my own plugin to do this, so I tried to find something that was already available. I have spent a few hours with this one, http://www.jordivila.net/code/js/jquery/ui-widgetTreeList_inheritance/widgetTreeListSample.aspx, and can't get it to work. What are the alternatives for creating a jQuery / jQuery UI script that handles sorting child and parent categories in a way that can be combined with an AJAX PHP backend to handle the actual sorting in the database? Thanks!

    Read the article

  • Losing route data in RedirectToAction

    - by user1512359
    Hi, I have a weird problem. I am redirecting using this command:

        return RedirectToAction("ViewMessage", "Account", new { id = model.MessageId });

    but in the ViewMessage action, when I try to get the id, it's null:

        string strMessageId = RouteData.Values["id"] as string;

    I have done this in lots of places and it works fine, but I don't know what is going on here... :( I know I can use TempData, but I don't want to :)

    Read the article

  • Implementing Tagging using Core Data on the iPhone

    - by Jonathan Penn
    I have an application that uses Core Data and I'm trying to figure out the best way to implement tagging and filtering by tag. For my purposes, if I were doing this in raw SQLite I would only need three tables: tags, item_tags, and of course my items table. Filtering would then be as simple as joining between the three tables where items are related to the given tags. Quite straightforward. But is there a way to do this in Core Data while utilizing NSFetchedResultsController? It doesn't seem that NSPredicate gives you the ability to filter through joins, and predicates aren't full SQL anyway, so I'm probably barking up the wrong tree there. I'm trying to avoid reimplementing my app in SQLite without Core Data, since I'm enjoying the performance Core Data gives me in other areas. Yes, I did consider (and built a test implementation) diving into the raw SQLite that Core Data generates, but that's not future-proof and I want to avoid that too. Has anyone else tried to tackle tagging/filtering with Core Data in a UITableView with NSFetchedResultsController?

    Read the article

  • Puzzle - Dynamically change data template control from another data template

    - by Burt
    I have a DataTemplate that contains an Expander with a border in the header. I want the header border to have round corners when collapsed and straight bottom corners when expanded. What would best practice be for achieving this (bonus points for code samples as I am new to XAML)? This is the template that holds the expander:

        <DataTemplate x:Key="A">
            <StackPanel>
                <Expander Name="ProjectExpander" Header="{Binding .}" HeaderTemplate="{StaticResource B}">
                    <StackPanel>
                        <Border CornerRadius="0,0,2,2">

    This is the expander DataTemplate:

        <DataTemplate x:Key="B">
            <Border x:Name="ProjectExpanderHeader"
                    CornerRadius="{Binding local:ItemUserControl.ProjectHeaderBorderRadius, RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type ContentPresenter}}}"
                    Background="{StaticResource ItemGradient}"
                    HorizontalAlignment="{Binding HorizontalAlignment, RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type ContentPresenter}}, Mode=OneWayToSource}">
                <local:ItemContentsUserControl Height="30"/>
            </Border>
        </DataTemplate>

    Read the article

  • iPhone Core Data - Access deep attributes with to many relationships

    - by ncohen
    Hi everyone, let's say I have an entity user which has a one-to-many relationship with the entity menu, which has a one-to-many relationship with the entity meal, which has a many-to-one relationship with the entity recipe, which has a one-to-many relationship with the entity element. What I would like to do is select the elements which belong to a particular user (username = myUsername) and to particular menus (minDate < menu.date < maxDate). Does anyone have an idea how to get them? Thanks

    Read the article

  • iPhone - Create non-persistent entities in core data

    - by ncohen
    Hi everyone, I would like to use entity objects but not store them... I read that I could create them like this:

        myElement = (Element *)[NSEntityDescription insertNewObjectForEntityForName:@"Element" inManagedObjectContext:managedObjectContext];

    and right after that remove them:

        [managedObjectContext deleteObject:myElement];

    Then I can use my elements:

        myElement.property1 = @"Hello";

    This works pretty well, even though I think it is probably not the most optimal way to do it... Then I try to use it in my UITableView. The problem is that the object gets released after the initialization, and my table becomes empty when I move it! Thanks

    Edit: I've also tried to copy the element ([myElement copy]), but I get an error...

    Read the article

  • How do I compare two complex data structures?

    - by Phil H
    I have some nested data structures, each something like:

        [ ('foo', [ {'a':1, 'b':2}, {'a':3.3, 'b':7} ]),
          ('bar', [ {'a':4, 'd':'efg', 'e':False} ]) ]

    I need to compare these structures to see if there are any differences. Short of writing a function to explicitly walk the structure, is there an existing library or method for doing this kind of recursive comparison? (A sketch of such a walk follows this entry.)

    Read the article
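
    For plain equality, Python's == already compares nested lists, tuples and dicts element by element, so a == b answers "are they identical?". The sketch below is one way to do the explicit recursive walk the question hopes to avoid, reporting where two structures differ; it is a minimal illustration rather than a full diff library (searching PyPI for a "deep diff" style package is another option). The sample data is taken from the example above.

        # Minimal recursive diff for nested lists/tuples/dicts.
        # Yields (path, left_value, right_value) for every difference found.
        def diff(a, b, path=""):
            if type(a) != type(b):
                yield (path, a, b)
            elif isinstance(a, dict):
                for key in set(a) | set(b):
                    if key not in a or key not in b:
                        yield (f"{path}[{key!r}]", a.get(key), b.get(key))
                    else:
                        yield from diff(a[key], b[key], f"{path}[{key!r}]")
            elif isinstance(a, (list, tuple)):
                if len(a) != len(b):
                    yield (path, a, b)
                else:
                    for i, (x, y) in enumerate(zip(a, b)):
                        yield from diff(x, y, f"{path}[{i}]")
            elif a != b:
                yield (path, a, b)

        old = [('foo', [{'a': 1, 'b': 2}, {'a': 3.3, 'b': 7}])]
        new = [('foo', [{'a': 1, 'b': 2}, {'a': 3.3, 'b': 8}])]
        for where, left, right in diff(old, new):
            print(where, left, right)   # prints: [0][1][1]['b'] 7 8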

  • Statistical analysis on large data set to be published on the web

    - by dassouki
    I have a non-computer-related data logger that collects data from the field. This data is stored as text files, and I manually lump the files together and organize them. The current format is one CSV file per year per logger. Each file is around 4,000,000 lines x 7 loggers x 5 years = a lot of data. Some of the data is organized into bins (item_type, item_class, item_dimension_class), and other data is more unique, such as item_weight, item_color, date_collected, and so on. Currently, I do statistical analysis on the data using a Python/NumPy/matplotlib program I wrote. It works fine, but the problem is that I'm the only one who can use it, since it and the data live on my computer. I'd like to publish the data on the web using a Postgres DB; however, I need to find or implement a statistical tool that will take a large Postgres table and return statistical results within an adequate time frame. I'm not familiar with Python for the web; however, I'm proficient with PHP on the web side and Python on the offline side. Users should be allowed to create their own histograms and data analyses. For example, one user could search for all items that are blue and shipped between week x and week y, while another could sort the weight distribution of all items by hour for the whole year. I was thinking of creating and indexing my own statistical tools, or automating the process somehow to emulate most queries, but this seemed inefficient. I'm looking forward to hearing your ideas. Thanks. (A sketch of a server-side query-and-aggregate approach follows this entry.)

    Read the article
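
    One common pattern for this kind of setup is to keep the heavy lifting in the database and only ship aggregated results (counts and bin edges) to the web front end. The sketch below is a minimal illustration of that idea using psycopg2 and NumPy; the table and column names (items, item_weight, item_color, date_collected) are invented for illustration and would need to match the real schema. Pushing the binning itself into SQL (for example with Postgres's width_bucket) is a further step in the same direction.

        # Filter rows in Postgres, bin them with NumPy, and return only the
        # histogram so millions of raw rows never leave the database server.
        import numpy as np
        import psycopg2

        def weight_histogram(dsn, color, start, end, bins=50):
            query = """
                SELECT item_weight
                FROM items
                WHERE item_color = %s
                  AND date_collected BETWEEN %s AND %s
            """
            with psycopg2.connect(dsn) as conn:
                with conn.cursor() as cur:
                    cur.execute(query, (color, start, end))
                    weights = np.fromiter((row[0] for row in cur), dtype=float)
            counts, edges = np.histogram(weights, bins=bins)
            return counts.tolist(), edges.tolist()   # JSON-friendly for a web API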

  • reporting tool/viewer for large datasets

    - by FrustratedWithFormsDesigner
    I have a data processing system that generates very large reports on the data it processes. By "large" I mean that a "small" execution of this system produces about 30 MB of reporting data when dumped into a CSV file, and a large dataset is about 130-150 MB (I'm sure someone out there has a bigger idea of "large", but that's not the point). Excel has the ideal interface for the report consumers in the form of its data lists: users can filter and segment the data on the fly to see the specific details they are interested in, and they can also add notes and markup to the reports, create charts, graphs, etc. They know how to do all this, and it's much easier to let them do it if we just give them the data. Excel was great for the small test datasets, but it cannot handle these large ones. Does anyone know of a tool that can provide a similar interface to Excel data lists, but that can handle much larger files? The next tool I tried was MS Access, and I found that the Access file bloats hugely (a 30 MB input file leads to about a 70 MB Access file, and when I open the file, run a report and close it, the file's at 120-150 MB!), and the import process is slow and very manual (currently, the CSV files are created by the same PL/SQL script that runs the main process, so there's next to no intervention on my part). I also tried an Access database with linked tables to the database tables that store the report data, and that was many times slower (for some reason, sqlplus could query and generate the report file in a minute or so while Access would take anywhere from 2-5 minutes for the same data). (If it helps, the data processing system is written in PL/SQL and runs on Oracle 10g. A pre-filtering sketch follows this entry.)

    Read the article
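
    This is not the interactive tool the question is really after, but as a stop-gap while one is chosen, a small script can pre-filter a 130-150 MB CSV down to the slice a particular consumer cares about, so the result fits comfortably in Excel again. The sketch below uses pandas; the column name "category" and the file names are invented for illustration.

        # Stream the large CSV in chunks, keep only the requested slice,
        # and write a smaller file that Excel can open comfortably.
        import pandas as pd

        def extract_slice(csv_path, out_path, category):
            pieces = []
            for chunk in pd.read_csv(csv_path, chunksize=100_000):
                pieces.append(chunk[chunk["category"] == category])
            pd.concat(pieces).to_csv(out_path, index=False)

        # extract_slice("full_report.csv", "widgets_only.csv", "widget")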

  • Large scale file replication with an option to "unsubscribe" from a replicated file on a given machine

    - by Alexander Gladysh
    I have 100+ GB of files per day incoming on one machine (file size is arbitrary and can be adjusted as needed). I have several other machines that do some work on these files. I need to reliably deliver each incoming file to the worker machines, and a worker machine should be able to free its HDD of a file once it is done working with it. It is preferable that a file is uploaded to a worker only once, processed in place, and then deleted, without copying it somewhere else, to minimize the already high HDD load (the worker itself requires quite a bit of bandwidth). Please advise a solution that is not based on Java. None of the existing replication solutions I've seen can do the "free the HDD of the file once processed" part, but maybe I'm missing something... A preferable solution should work with files (from the POV of our business logic code) and not require the business logic to connect to some queue or other. (Internally the solution may use whatever technology it needs to, except Java. A worker-side sketch follows this entry.)

    Read the article
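
    The delivery half of this (pushing each file to exactly one worker) is usually handled by the transport (an rsync/scp push, a distributor script, etc.), but the worker-side contract the question describes, namely process a delivered file in place and then free the disk immediately, is simple to sketch. The snippet below is only an illustration under those assumptions; the directory name and process_file() are placeholders, not part of any existing tool.

        # Worker-side sketch: process each delivered file once, in place,
        # and delete it as soon as processing succeeds to free the HDD.
        import os
        import time

        INCOMING = "/data/incoming"   # where the transport drops files

        def process_file(path):
            # Placeholder for the real business logic.
            print("processing", path)

        def worker_loop(poll_seconds=5):
            while True:
                for name in sorted(os.listdir(INCOMING)):
                    # Convention: uploads use a temporary ".part" suffix and are
                    # renamed into place when complete, so partial files are skipped.
                    if name.endswith(".part"):
                        continue
                    path = os.path.join(INCOMING, name)
                    process_file(path)
                    os.remove(path)   # free the disk immediately after processing
                time.sleep(poll_seconds)

        if __name__ == "__main__":
            worker_loop()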
