Search Results

Search found 65999 results on 2640 pages for 'large data volumes'.

Page 463/2640 | < Previous Page | 459 460 461 462 463 464 465 466 467 468 469 470  | Next Page >

  • What is the best way in Silverlight to make areas of a large graphic clickable?

    - by Edward Tanguay
    In a Silverlight application I have large images which have flow charts on them. I need to handle clicks on specific hotspots of the image, where the flow chart boxes are. Since the flow charts will always be different, the information about where the hotspots are has to be dynamic, e.g. a list of coordinates. I've found articles like this one, but I don't need that level of detail (e.g. the outlines of countries), just simple rectangle and circle areas. I've also found articles that talk about overlaying an HTML image map on the Silverlight application, but it has to be easier than that. What is the best way to handle clicks on specific areas of an image in Silverlight?
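
    One possible approach (a sketch only, not taken from the article; the Hotspot class, HandleHotspotClick and hotspotCanvas are hypothetical names) is to lay transparent Rectangle elements over the image on a Canvas, positioned from the dynamic coordinate list, and handle mouse events on each one:

        // Sketch: overlay transparent hit areas on a Canvas placed above the image.
        // A transparent Fill still participates in hit testing; a null Fill does not.
        using System.Collections.Generic;
        using System.Windows.Controls;
        using System.Windows.Media;
        using System.Windows.Shapes;

        public class Hotspot
        {
            public string Id { get; set; }
            public double X { get; set; }
            public double Y { get; set; }
            public double Width { get; set; }
            public double Height { get; set; }
        }

        public class FlowChartHotspots
        {
            public void AddHotspots(Canvas hotspotCanvas, IEnumerable<Hotspot> hotspots)
            {
                foreach (Hotspot spot in hotspots)
                {
                    var area = new Rectangle
                    {
                        Width = spot.Width,
                        Height = spot.Height,
                        Fill = new SolidColorBrush(Colors.Transparent)
                    };
                    Canvas.SetLeft(area, spot.X);
                    Canvas.SetTop(area, spot.Y);
                    string id = spot.Id; // capture the per-hotspot id for the handler
                    area.MouseLeftButtonUp += (s, e) => HandleHotspotClick(id);
                    hotspotCanvas.Children.Add(area);
                }
            }

            private void HandleHotspotClick(string hotspotId)
            {
                // react to the click, e.g. show details for the flow chart box
            }
        }

    An Ellipse added the same way would cover the circular areas.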

    Read the article

  • Entity Framework 4 - Handling very large (1000+ tables) data models?

    - by David Kreps
    We've got a database with over 1,000 tables and would like to consider using EF4 for our data access layer, but I'm concerned about the practical realities of using it for such a large data model. I've seen this question and read about the suggested solutions here and here. These may work, but appear to refer to the first version of the Entity Framework (and are more complex than I'd like). Does anyone know if these solutions have been improved upon in EF4? Or have other suggestions altogether? Thanks.

    Read the article

  • How can a large number of developers write software without either a cumbersome process or poor quality?

    - by Mark Robinson
    I work at a company with hundreds of people writing software for essentially the same product. The quality of the software has to be high because so many people depend on it (not least the developers themselves). Because of this, every major issue has resulted in a new check, either automated or manual. As a result, the process of delivering software is becoming ever more burdensome. That requires more developers, which... well, you can see it is a vicious circle. We now have a problem with releasing software quickly: the lead time even to change one line of code for a very serious issue is at least one day. What techniques do you use to speed up the delivery of software in a large organization, while still maintaining software quality?

    Read the article

  • Assigning a variable of a struct that contains an instance of a class to another variable

    - by xport
    In my understanding, assigning a variable of a struct to another variable of the same type makes a copy. But this rule seems broken, as shown by the following program. Could you explain why this happens?

        using System;

        namespace ReferenceInValue
        {
            class Inner
            {
                public int data;
                public Inner(int data) { this.data = data; }
            }

            struct Outer
            {
                public Inner inner;
                public Outer(int data) { this.inner = new Inner(data); }
            }

            class Program
            {
                static void Main(string[] args)
                {
                    Outer p1 = new Outer(1);
                    // Copying the struct copies its fields; the Inner field is a
                    // reference, so p1 and p2 end up sharing the same Inner object.
                    Outer p2 = p1;
                    Console.WriteLine("p1:{0}, p2:{1}", p1.inner.data, p2.inner.data);
                    p1.inner.data = 2;
                    Console.WriteLine("p1:{0}, p2:{1}", p1.inner.data, p2.inner.data);
                    p2.inner.data = 3;
                    Console.WriteLine("p1:{0}, p2:{1}", p1.inner.data, p2.inner.data);
                    Console.ReadKey();
                }
            }
        }

    Read the article

  • Has anyone used an object database with a large amount of data?

    - by Jon Kruger
    Object databases like MongoDB and db4o are getting lots of publicity lately. Everyone that plays with them seems to love them. I'm guessing that they are dealing with about 640K of data in their sample apps. Has anyone tried to use an object database with a large amount of data (say, 50GB or more)? Are you still able to execute complex queries against it (like from a search screen)? How does it compare to your usual relational database of choice? I'm just curious. I want to take the object database plunge, but I need to know if it'll work on something more than a sample app.

    Read the article

  • Build a decision tree for classification of a large amount of data, using Python?

    - by kaushik
    Hi, I am working on speech synthesis. I have a large number of pronunciations for each phone (i.e. letter of the alphabet) and need to classify them, according to a few features such as segment size (int) and the letter itself (string), into a smaller set suitable for that particular context. For this purpose, I have decided to use a decision tree for classification. The data to be parsed is in S-expression format, e.g.: ((question)(LEFTNODE)(RIGHTNODE)). I have an idea for building a decision tree over normal built-in types such as lists, and am looking for suggestions on an implementation for S-expressions. Kindly help. Thanks in advance. Note: this question may look similar to my previous post; sorry if multiple posts aren't allowed. I had already edited it many times, so I thought I'd write a new question instead of editing again.

    Read the article

  • Is there a convention for organizing the includes/exports in a large C++ project?

    - by BlueTrin
    Hello, in a large C++ solution, is there a best/standard way to separate the include files necessary to build an intermediary DLL from the include files which will be used by the DLL's clients? We have grouped all the include files in a folder called Interface (for DLL interface), but then the customers have to either add the Interface folder as a default include directory or type the full path, as in: #include "ProjectName/Interface/myinterface.h". Wouldn't it be better to create a separate folder called exports, containing a folder called ProjectName, and put the include files there? Then the customers would type: #include "ProjectName/myinterface.h". If I do that, should I keep the files within the solution and use a post-build event (I use Visual Studio 2005) to copy the files into the export folder (/ProjectName/)? Or is it better to include the files directly from this folder within my project (this is more direct and has less chance of causing maintenance issues)? I am looking more for advice than for a definitive solution. Thank you for reading this! Anthony

    Read the article

  • What is the best way to organise a large database of addresses?

    - by shurik2533
    What is the best way to organise a large database of addresses? I need to create a MySQL database of addresses and see two variants for organising it:

    1) One table per address element:

        countries
        id | name
        1  | Russia

        cities
        id | name
        1  | Moscow
        2  | Saratov

        villages
        id | name

        streets
        id | name
        1  | Lenin st.

        places
        id | name         | country_id | city_id | village_id | street_id | building_number | office | flat_number | room_number
        1  | somebuilding | 1          | 1       | NULL       | 1         | 31              | 12a    | NULL        | NULL

    For simplification I do not use all the elements that make up an address. If a part does not participate in the address, it is NULL.

    2) Generic address elements and values:

        addressElements
        id | name
        1  | country
        2  | city
        3  | village
        4  | street
        5  | office
        6  | flat_number
        7  | room_number

        addressValues
        id | addressElement_id | value
        1  | 1                 | Russia
        2  | 2                 | Saratov
        3  | 2                 | Moscow
        4  | 3                 | Prostokvashino
        5  | 4                 | Lenin st.

        places
        id | name
        1  | somebuilding

        places_has_addressValues
        place_id | addressValue_id
        1        | 1
        1        | 3
        1        | 5

    UPD. I have decided to make it as follows

    Read the article

  • How to deal with large result sets with LINQ to Entities?

    - by user169867
    I have a fairly complex LINQ to Entities query that I display on a website. It uses paging, so I never pull down more than 50 records at a time for display. But I also want to give the user the option to export the full results to Excel or some other file format. My concern is that a large number of records could potentially be loaded into memory all at once to do this. Is there a way to process a LINQ result set one record at a time, like you can with a DataReader, so that only one record is really kept in memory at a time? I'd appreciate any help. Thanks
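
    One direction worth sketching (an assumption-laden sketch, not from the thread; MyEntities, Orders and the columns are hypothetical, and it assumes an EF4-style ObjectContext): enumerating the query with foreach streams entities one at a time instead of materializing the whole list with ToList(), and turning off change tracking keeps the context from holding on to every row:

        // Sketch: stream an export row by row instead of calling ToList().
        using System.Data.Objects;
        using System.IO;
        using System.Linq;

        public static class Exporter
        {
            public static void ExportToCsv(string path)
            {
                using (var context = new MyEntities())
                using (var writer = new StreamWriter(path))
                {
                    // NoTracking: entities are not cached by the context, so each
                    // row becomes collectable once it has been written out.
                    context.Orders.MergeOption = MergeOption.NoTracking;

                    foreach (var order in context.Orders.Where(o => o.Total > 0))
                    {
                        writer.WriteLine("{0},{1}", order.Id, order.Total);
                    }
                }
            }
        }

    With the SQL Server provider the enumeration reads through a single DataReader, so only the current entity is materialized at a time.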

    Read the article

  • Best practice for handling memory leaks in large Java projects?

    - by knorv
    In almost all larger Java projects I've been involved with, I've noticed that the quality of service of the application degrades with the uptime of the container. This is most probably due to memory leaks in the code. The correct way to solve this problem is obviously to trace back to the root cause and fix the leaks in the code. The quick and dirty way of solving the problem is simply restarting Tomcat (or whichever servlet container you're using). These are my three questions:

    1. Assuming you choose to solve the problem by tracing the root cause (the memory leaks), how would you collect data to zoom in on the problem?

    2. Assuming you choose the quick and dirty way of speeding things up by simply restarting the container, how would you collect data to choose the optimal restart cycle?

    3. Have you been able to deploy and run projects over an extended period of time without ever restarting the servlet container to regain snappiness? Or is an occasional container restart something that one simply has to accept?

    Read the article

  • iPhone SDK: Downloading large files from a server into the app bundle.

    - by Jessica
    Hi, I am building an app that plays multiple video files, but I would like to know: how do you download a video file (100MB - 300MB) from a server into the application's bundle so it can later be referred to locally in code? The reason I want this type of setup in my app is that I don't want the app binary to be made unnecessarily large by including videos some users may not want. Also, does this violate any of Apple's terms? And would it be simple to implement a progress view with this kind of setup, and if so, how? Any help is appreciated.

    Read the article

  • How do I write raw binary data in Python?

    - by Chris B.
    I've got a Python program that stores and writes data to a file. The data is raw binary data, stored internally as str. I'm writing it out through a utf-8 codec. However, I get UnicodeDecodeError: 'charmap' codec can't decode byte 0x8d in position 25: character maps to <undefined> in the cp1252.py file. This looks to me like Python is trying to interpret the data using the default code page. But it doesn't have a default code page; that's why I'm using str, not unicode. I guess my questions are: How do I represent raw binary data in memory in Python? When I'm writing raw binary data out through a codec, how do I encode/decode it?

    Read the article

  • What are the repercussions of not checking existing data when adding a foreign key?

    - by scottm
    I've inherited a database that doesn't exactly strive for data integrity. I am trying to add some foreign keys to change that, but there is data in some tables that doesn't fit the constraints. Most likely the data won't be used again, so I want to know what problems I might face by leaving it there. The other option I see is to move it into some kind of table without referential constraints, just for historical purposes. So, what are the repercussions of not checking existing data? If I create a foreign key constraint on a table and don't check existing data, will new data inserted into the table still be enforced?

    Read the article

  • Want to save a data field from a form into columns of two models.

    - by vette982
    I have a Profile model with a hasOne relationship to a Detail model. I have a registration form that saves data into both models' tables, but I want the username field from the Profile model to be copied over to the username field in the Detail model, so that each has the same username.

        function new_account() {
            if (!empty($this->data)) {
                $this->Profile->modified = date("Y-m-d H:i:s");
                if ($this->Profile->save($this->data)) {
                    $this->data['Detail']['profile_id'] = $this->Profile->id;
                    $this->data['Detail']['username'] = $this->Profile->username;
                    $this->Profile->Detail->save($this->data);
                    $this->Session->setFlash('Your registration was successful.');
                    $this->redirect(array('action' => 'index'));
                }
            }
        }

    This code in my Profile controller gives me the error: Undefined property: Profile::$username. Any ideas?

    Read the article

  • Are indexes good or bad for a large database?

    - by gmemon
    Hello all, I read on the MySQL Performance Blog that when tables are large, it is sometimes better to scan the full table instead of using indexes. I have a table with tens of millions of rows. When conducting queries without indexes, they are 24 times slower than with indexes. I know a lot of things may cause this (e.g., whether rows are stored sequentially), but can you please give me some hints about what might be happening, or how I should start examining this issue? I want to understand when the use of indexes is preferred and when it's not. Thanks

    Read the article

  • What are the best practices for importing large datasets into MongoDB?

    - by snl
    We are just giving MongoDB a test run and have set up a Rails 3 app with Mongoid. What are the best practices for inserting large datasets into MongoDB? To flesh out a scenario: say I have a book model and want to import several million records from a CSV file. I suppose this needs to be done in the console, so this may not be a Ruby-specific question. Edited to add: I assume it makes a huge difference whether the imported data includes associations or is supposed to go into one model only. Any comments on either scenario are welcome.
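
    Since the asker notes this may not be Ruby-specific, here is the usual batching advice as a rough sketch (using the modern MongoDB .NET driver; the database name, collection name and CSV layout are hypothetical): read the file a line at a time and insert in batches rather than one document per round trip:

        // Sketch: batch CSV rows into InsertMany calls of 1,000 documents each.
        using System.Collections.Generic;
        using System.IO;
        using MongoDB.Bson;
        using MongoDB.Driver;

        class CsvImport
        {
            static void Main()
            {
                var client = new MongoClient("mongodb://localhost:27017");
                var books = client.GetDatabase("library").GetCollection<BsonDocument>("books");

                var batch = new List<BsonDocument>(1000);
                foreach (var line in File.ReadLines("books.csv"))
                {
                    var fields = line.Split(',');
                    batch.Add(new BsonDocument { { "title", fields[0] }, { "author", fields[1] } });

                    if (batch.Count == 1000)   // one round trip per 1,000 documents
                    {
                        books.InsertMany(batch);
                        batch.Clear();
                    }
                }
                if (batch.Count > 0) books.InsertMany(batch);
            }
        }

    For the plain one-model CSV case, the stock mongoimport tool handles CSV input without any application code; batching in code matters more once associations or transformations are involved.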

    Read the article

  • What happens to existing data if the entity structure is modified or deleted on GAE?

    - by Eonil
    GAE recommends using JDO/JPA, but I have serious questions about using an OODB like them. JDO is based on the user's class structure, and the data structure will be modified continually as the service advances. So, if a data (entity) class property is removed, what happens to the existing data in that property? If a data (entity) class is renamed for refactoring reasons, how does JDO know about the renaming, or is all the data lost? The major point is: how do JDO/GAE/BigTable apply modifications of a class to the existing entity structure and data?

    Read the article

  • iPhone SDK: Downloading large files from a server into the app's documents.

    - by Jessica
    Hi, I am building an app that plays multiple video files, but I would like to know: how do you download a video file (100MB - 300MB) from a server into the application's documents directory so it can later be referred to locally in code? The reason I want this type of setup in my app is that I don't want the app binary to be made unnecessarily large by including videos some users may not want. Also, does this violate any of Apple's terms? And would it be simple to implement a progress view with this kind of setup, and if so, how? Any help is appreciated.

    Read the article

  • Best way to store PHP data on a page for use with JavaScript/jQuery?

    - by Haroldo
    OK, so I'm trying to work out the fastest way of storing data on my page without slowing the page load. I need to store information in the page to be used later by jQuery. My page is an events page and I want to attach data to each event anchor; there are 100+ events to attach data to. The event anchors are created with a PHP loop, so I could create the data elements within this loop by either using un-semantic attributes, i.e. rel="some_data", or creating a jQuery .data() call for each iteration of the loop. Or I could run the loop again, separately, this time inside script tags with jQuery .data(). Would really appreciate any thoughts on this!

    Read the article

  • Offsite data storage for simple app, or a similar supported persistence mechanism?

    - by jdk
    Question: Is there a usable Facebook entry point to the Data Storage API that Facebook lists on their app admin page for developers, or should I consider an alternate mechanism? What alternative mechanisms exist to simply persist my information offsite (away from my server app) without stuffing it into a cookie that's prone to expire? Background: The Facebook Data Store Admin tool is made available in a Facebook app's settings. However, when I visit the DataStoreAdmin link, nothing works (i.e. clicking the buttons to define the data store types and objects does nothing; I have tried different browsers). The wiki page for the Data Store API hasn't been updated recently, and the second-to-last update says the beta Data Store was taken offline. It seems odd that the link would be readily available and highly visible at the top of the app configuration area if indeed it's defunct. I was hoping for some kind of key/value pair solution to remove the data calls from my own server.

    Read the article

  • How to convert searchTwitter results (from library(twitteR)) into a data.frame?

    - by analyticsPierce
    I am working on saving Twitter search results into a database (SQL Server) and am getting an error when I pull the search results from twitteR. If I execute:

        library(twitteR)
        puppy <- as.data.frame(searchTwitter("puppy", session=getCurlHandle(), num=100))

    I get an error of:

        Error in as.data.frame.default(x[[i]], optional = TRUE) :
          cannot coerce class structure("status", package = "twitteR") into a data.frame

    This is important because in order to use RODBC to add this to a table using sqlSave, it needs to be a data.frame. At least that's the error message I got:

        Error in sqlSave(localSQLServer, puppy, tablename = "puppy_staging", :
          should be a data frame

    So does anyone have any suggestions on how to coerce the list to a data.frame, or how I can load the list through RODBC?

    Read the article

  • What's the simplest way to retrieve all data from a table and save it back in .NET 3.5?

    - by zoman
    I have a number of tables containing some basic (business-related) mapping data. What's the simplest way to load the data from those tables and then save the modified values back? (All data should be replaced in the tables.) An ORM is out of the question, as I would like to avoid creating domain objects for each table. The actual editing of the data is not an issue (it is exported into Excel, where the data is edited, then the file is uploaded with the modified data). The technology is .NET 3.5 (ASP.NET MVC) and SQL Server 2005. Thanks.
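
    One low-ceremony possibility (a sketch under assumptions: the connection string and table names are placeholders, and the SQL is built only from trusted, hard-coded table names) is an untyped DataTable round trip: SqlDataAdapter.Fill to load, then delete-and-bulk-copy to replace the contents with the edited rows, with no domain objects anywhere:

        // Sketch: load a whole table into a DataTable and replace it wholesale.
        using System.Data;
        using System.Data.SqlClient;

        static class MappingTableIo
        {
            const string ConnStr = "Data Source=.;Initial Catalog=MyDb;Integrated Security=True";

            // Load everything from one table into an untyped DataTable.
            public static DataTable Load(string tableName)
            {
                using (var conn = new SqlConnection(ConnStr))
                using (var adapter = new SqlDataAdapter("SELECT * FROM " + tableName, conn))
                {
                    var table = new DataTable(tableName);
                    adapter.Fill(table);
                    return table;
                }
            }

            // Replace the table's contents with the edited rows.
            public static void Replace(string tableName, DataTable edited)
            {
                using (var conn = new SqlConnection(ConnStr))
                {
                    conn.Open();
                    using (var clear = new SqlCommand("DELETE FROM " + tableName, conn))
                        clear.ExecuteNonQuery();

                    using (var bulk = new SqlBulkCopy(conn) { DestinationTableName = tableName })
                        bulk.WriteToServer(edited);
                }
            }
        }

    If partial failure is a concern, the delete and the bulk copy can share a SqlTransaction so the table is never left half-replaced.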

    Read the article

  • Good way to send a large file over a network in C#?

    - by BFreeman
    I am trying to build an application that can request files from a service running on another machine on the network. These files can be fairly large (500MB+ at times). I was looking into sending them via TCP, but I'm worried that it may require the entire file to be stored in memory. There will probably only be one client. Copying to a shared directory isn't acceptable either. The only communication required is for the client to say "gimme xyz" and the server to send it (and whatever it takes to ensure this happens correctly). Any suggestions?
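
    Whatever transport is chosen, the memory worry goes away if the file is streamed in fixed-size chunks rather than read whole. A bare-bones server-side sketch (the port, path and request protocol are hypothetical, and error handling is omitted):

        // Sketch: send a large file over TCP holding only one 64 KB buffer in memory.
        using System.IO;
        using System.Net;
        using System.Net.Sockets;

        class FileServer
        {
            static void Main()
            {
                var listener = new TcpListener(IPAddress.Any, 9000);
                listener.Start();

                using (TcpClient client = listener.AcceptTcpClient())
                using (NetworkStream net = client.GetStream())
                using (FileStream file = File.OpenRead(@"C:\files\xyz.bin"))
                {
                    var buffer = new byte[64 * 1024]; // the only file data ever in memory
                    int read;
                    while ((read = file.Read(buffer, 0, buffer.Length)) > 0)
                        net.Write(buffer, 0, read);
                }
                listener.Stop();
            }
        }

    The client does the mirror image: read from the NetworkStream into a small buffer and append to a FileStream until the expected number of bytes has arrived.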

    Read the article

  • C++ - Efficient container for large amounts of searchable data?

    - by Francisco P.
    Hello, everybody! I am implementing a text-based version of Scrabble for a college project. My dictionary is quite large, weighing in at around 400,000 words (std::string). Searching for a valid word will be very inefficient if I go with a vector<string> (O(n)). Are there any good alternatives? Keep in mind, I'm enrolled in freshman year. Nothing TOO complex! Thanks for your time! Francisco

    Read the article

  • PHP: How to begin testing large, existing codebase, and test for regression on production site?

    - by anonymous coward
    I'm in charge of at least one large body of existing PHP code that desperately needs tests, and I also need some method of checking the production site for errors. I've been working with PHP for many years, but am unfortunately new to testing. (Sorry!) While writing tests for code that has predictable outcomes seems easy enough, I'm having trouble wrapping my head around how I can test the live site to ensure proper output. I know that in a test environment I could set up the database in a known state, but are there proper methods or techniques for testing a live site? Where should I begin? [I am aware of PHPUnit and SimpleTest, but haven't chosen one over the other yet]

    Read the article
