Search Results

Search found 11409 results on 457 pages for 'large teams'.


  • Entity Framework 4 - Handling very large (1000+ tables) data models?

    - by David Kreps
    We've got a database with over 1,000 tables and would like to consider using EF4 for our data access layer, but I'm concerned about the practical realities of using it for such a large data model. I've seen this question and read about the suggested solutions here and here. These may work, but they appear to refer to the first version of the Entity Framework (and are more complex than I'd like). Does anyone know if these solutions have been improved upon in EF4? Or do you have other suggestions altogether? Thanks.

    Read the article

  • How can a large number of developers write software without either a cumbersome process or poor quality?

    - by Mark Robinson
    I work at a company with hundreds of people writing software for essentially the same product. The quality of the software has to be high because so many people depend on it (not least the developers themselves). Because of this, every major issue has resulted in a new check, either automated or manual. As a result, the process of delivering software is becoming ever more burdensome, which requires more developers, which... well, you can see it is a vicious circle. We now have a problem with releasing software quickly: the lead time even to change one line of code for a very serious issue is at least one day. What techniques do you use to speed up the delivery of software in a large organization, while still maintaining software quality?

    Read the article

  • Has anyone used an object database with a large amount of data?

    - by Jon Kruger
    Object databases like MongoDB and db4o are getting a lot of publicity lately. Everyone who plays with them seems to love them. I'm guessing that they are dealing with about 640K of data in their sample apps. Has anyone tried to use an object database with a large amount of data (say, 50GB or more)? Are you able to still execute complex queries against it (like from a search screen)? How does it compare to your usual relational database of choice? I'm just curious. I want to take the object database plunge, but I need to know if it'll work on something more than a sample app.

    Read the article

  • Building a decision tree for classification of a large amount of data using Python?

    - by kaushik
    Hi, I am working on speech synthesis. I have a large number of pronunciations for each phone (i.e. alphabet character) and need to classify them, according to a few features such as segment size (int) and the alphabet character itself (string), into a smaller set suitable for a particular context. For this purpose, I have decided to use a decision tree for classification. The data to be parsed is in S-expression format, e.g. ((question)(LEFTNODE)(RIGHTNODE)). I have an idea of how to build a decision tree over normal built-in types such as lists, and I am looking for suggestions on an implementation for S-expressions. Kindly help. Thanks in advance. Note: this question may look similar to my previous post; sorry if multiple posts aren't allowed. I had already edited it many times, so I thought of writing a new question instead of editing it again.
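
    One possible direction, as a minimal sketch in Python (the question format inside the S-expression and all names are assumptions, not the asker's actual data):

        # Parse an S-expression such as ((question) (LEFTNODE) (RIGHTNODE)) into nested
        # Python lists, then walk the nested lists recursively instead of a built-in tree type.
        def parse_sexp(text):
            tokens = text.replace('(', ' ( ').replace(')', ' ) ').split()
            def read(pos):
                items = []
                while pos < len(tokens):
                    tok = tokens[pos]
                    if tok == '(':
                        sub, pos = read(pos + 1)
                        items.append(sub)
                    elif tok == ')':
                        return items, pos + 1
                    else:
                        items.append(tok)
                        pos += 1
                return items, pos
            tree, _ = read(0)
            return tree[0] if len(tree) == 1 else tree

        def classify(node, sample):
            # Internal nodes look like [question, left, right]; anything else is a leaf.
            if len(node) != 3 or not isinstance(node[0], list):
                return node
            question, left, right = node
            feature, threshold = question[0], question[1]   # assumed question form: (feature threshold)
            if float(sample[feature]) < float(threshold):
                return classify(left, sample)
            return classify(right, sample)

        tree = parse_sexp("((segment_size 5) (class_a) (class_b))")
        print(classify(tree, {"segment_size": 3}))   # -> ['class_a']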

    Read the article

  • How is it best to organise a large database of addresses?

    - by shurik2533
    How is it best to organise a large database of addresses? I need to create a MySQL database of addresses. How should it be organised? I have two variants:

    1)

        countries
        id | name
        1  | Russia

        cities
        id | name
        1  | Moscow
        2  | Saratov

        villages
        id | name

        streets
        id | name
        1  | Lenin st.

        places
        id | name         | country_id | city_id | village_id | street_id | building_number | office | flat_number | room_number
        1  | somebuilding | 1          | 1       | NULL       | 1         | 31              | 12a    | NULL        | NULL

    For simplicity, not every address component is shown. If a part does not participate in the address, it is NULL.

    2)

        addressElements
        id | name
        1  | country
        2  | city
        3  | village
        4  | street
        5  | office
        6  | flat_number
        7  | room_number

        addressValues
        id | addressElement_id | value
        1  | 1                 | Russia
        2  | 2                 | Saratov
        3  | 2                 | Moscow
        4  | 3                 | Prostokvashino
        5  | 4                 | Lenin st.

        places
        id | name
        1  | somebuilding

        places_has_addressValues
        place_id | addressValue_id
        1        | 1
        1        | 3
        1        | 5

    UPD. I have decided to do it as follows
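
    A minimal MySQL sketch of the first variant (column types and sizes are illustrative assumptions):

        CREATE TABLE countries (id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY, name VARCHAR(100) NOT NULL) ENGINE=InnoDB;
        CREATE TABLE cities    (id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY, name VARCHAR(100) NOT NULL) ENGINE=InnoDB;
        CREATE TABLE villages  (id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY, name VARCHAR(100) NOT NULL) ENGINE=InnoDB;
        CREATE TABLE streets   (id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY, name VARCHAR(150) NOT NULL) ENGINE=InnoDB;

        CREATE TABLE places (
            id              INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
            name            VARCHAR(150),
            country_id      INT UNSIGNED NULL,   -- NULL when the component does not apply
            city_id         INT UNSIGNED NULL,
            village_id      INT UNSIGNED NULL,
            street_id       INT UNSIGNED NULL,
            building_number VARCHAR(20)  NULL,
            office          VARCHAR(20)  NULL,
            flat_number     VARCHAR(20)  NULL,
            room_number     VARCHAR(20)  NULL,
            FOREIGN KEY (country_id) REFERENCES countries(id),
            FOREIGN KEY (city_id)    REFERENCES cities(id),
            FOREIGN KEY (village_id) REFERENCES villages(id),
            FOREIGN KEY (street_id)  REFERENCES streets(id)
        ) ENGINE=InnoDB;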

    Read the article

  • Is there a convention for organizing the include/exports in a large C++ project ?

    - by BlueTrin
    Hello. In a large C++ solution, is there a best/standard way to separate the include files necessary to build an intermediary DLL from the include files which will be used by the DLL's clients? We have grouped all the include files in a folder called Interface (for DLL interface), but then the clients have to either add the Interface folder as a default include directory or type the full name, as in: #include "ProjectName/Interface/myinterface.h". Wouldn't it be better to create a separate folder called exports, containing a folder called ProjectName, and put the include files there? Then the clients would write: #include "ProjectName/myinterface.h". If I do that, should I keep the files within the solution and add a post-build event (I use Visual Studio 2005) to copy the files into the "export" folder (/ProjectName/)? Or is it better to include the files from this folder directly within my project (this is more direct and less likely to cause maintenance issues)? I am looking more for advice than for a definite solution. Thank you for reading this! Anthony
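
    If the post-build route is chosen, a minimal sketch of the copy step (paths and the project name are illustrative; $(ProjectDir) and $(SolutionDir) are the standard Visual Studio macros):

        xcopy "$(ProjectDir)Interface\*.h" "$(SolutionDir)exports\ProjectName\" /Y /I /D

    Clients then add the exports folder to their include path and write #include "ProjectName/myinterface.h".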

    Read the article

  • How to deal with large result sets with Linq to Entities?

    - by user169867
    I have a fairly complex LINQ to Entities query that I display on a website. It uses paging, so I never pull down more than 50 records at a time for display. But I also want to give the user the option to export the full results to Excel or some other file format. My concern is that there could potentially be a large number of records all being loaded into memory at one time to do this. Is there a way to process a LINQ result set one record at a time, like you could with a DataReader, so only one record is really being kept in memory at a time? I'd appreciate any help. Thanks
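
    A minimal C# sketch of one possible approach (entity and property names are illustrative): enumerate the query instead of materializing it, so Entity Framework pulls rows off the underlying data reader one at a time.

        using (var context = new MyEntities())                    // hypothetical generated context
        using (var writer = new StreamWriter(@"C:\temp\export.csv"))
        {
            // Same filters as the paged query; avoid ToList()/ToArray(), which load everything.
            var query = context.Orders.Where(o => o.Total > 100);
            foreach (var order in query)                           // streamed row by row
            {
                writer.WriteLine(order.Id + "," + order.Total);
            }
        }

    With the EF4 ObjectContext API, disabling change tracking (MergeOption.NoTracking) also keeps the context from accumulating every exported entity.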

    Read the article

  • iPhone SDK: Downloading large files from a server into the app bundle.

    - by Jessica
    Hi, I am building an app that plays multiple video files, and I would like to know how to download a video file (100 MB - 300 MB) from a server into the application's bundle so it can later be referred to locally in code. The reason I want this type of setup in my app is that I don't want the app binary to be made unnecessarily large by including videos some users may not want. Also, does this violate any of Apple's terms? And would it be simple to implement a progress view with this kind of setup, and if so, how? Any help is appreciated.

    Read the article

  • Are indexes good or bad for a large database?

    - by gmemon
    Hello all, I read on the MySQL Performance Blog that when tables are large, it is better to scan full tables instead of using indexes. I have a table with tens of millions of rows. When conducting queries, if I use no indexes, the queries are 24 times slower than with indexes. I know a lot of things may cause this (e.g., whether rows are stored sequentially), but can you please give me some hints on what might be happening? Or how I should start examining this issue? I want to understand when the use of indexes is preferred and when it's not. Thanks
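
    One way to start examining it, as a minimal sketch (table, column and index names are illustrative): compare the plan MySQL chooses with and without the index for the same query.

        EXPLAIN SELECT * FROM events WHERE user_id = 42;
        EXPLAIN SELECT * FROM events IGNORE INDEX (idx_user_id) WHERE user_id = 42;   -- force a full scan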

    Read the article

  • Database for Large number of 1kB data chunks (MySQL?)

    - by The Unknown
    I have a very large dataset, each item in the dataset being roughly 1kB in size. The data needs to be queried rapidly by many applications distributed over a network. The dataset has more than a million items (and needs to grow to 500 million+ 1kB data chunks). What would be the best method of storing this dataset (it needs to allow adding more items and reading them rapidly, but already-added data is never modified)? Would using a MySQL DB with a binary blob column be appropriate? Or should each of these be stored as files on a file system? Edit: the number is 1 million items now, but it needs to be able to scale to well over 500 million items easily.
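
    If MySQL is used, a minimal sketch of a write-once key/value table (names and types are illustrative assumptions):

        CREATE TABLE chunks (
            id   BIGINT UNSIGNED NOT NULL PRIMARY KEY,   -- application-assigned chunk id
            data VARBINARY(1024) NOT NULL                -- ~1 kB immutable payload
        ) ENGINE=InnoDB;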

    Read the article

  • What are the best practices for importing large datasets into MongoDB?

    - by snl
    We are just giving MongoDB a test run and have set up a Rails 3 app with Mongoid. What are the best practices for inserting large datasets into MongoDB? To flesh out a scenario: Say, I have a book model and want to import several million records from a CSV file. I suppose this needs to be done in the console, so this may possibly not be a Ruby-specific question. Edited to add: I assume it makes a huge difference whether the imported data includes associations or is supposed to go into one model only. Any comments on either scenario welcome.
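
    For a plain CSV with a header row, a minimal sketch using the stock mongoimport tool (database and collection names are illustrative):

        mongoimport --db myapp_development --collection books --type csv --headerline --file books.csv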

    Read the article

  • iPhone SDK: Downloading large files from a server into the app's documents.

    - by Jessica
    Hi, I am building an app that plays multiple video files, and I would like to know how to download a video file (100 MB - 300 MB) from a server into the application's documents directory so it can later be referred to locally in code. The reason I want this type of setup in my app is that I don't want the app binary to be made unnecessarily large by including videos some users may not want. Also, does this violate any of Apple's terms? And would it be simple to implement a progress view with this kind of setup, and if so, how? Any help is appreciated.
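
    A minimal sketch in Swift of one possible approach (the original question predates Swift; URL and file names are illustrative): download to a temporary file, then move it into the Documents directory.

        import Foundation

        func downloadVideo(from url: URL, completion: @escaping (URL?) -> Void) {
            let task = URLSession.shared.downloadTask(with: url) { tempURL, response, error in
                guard let tempURL = tempURL, error == nil else { completion(nil); return }
                let docs = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
                let destination = docs.appendingPathComponent(url.lastPathComponent)
                try? FileManager.default.removeItem(at: destination)    // overwrite any earlier copy
                do {
                    try FileManager.default.moveItem(at: tempURL, to: destination)
                    completion(destination)
                } catch {
                    completion(nil)
                }
            }
            task.resume()
        }

    For a progress view, a URLSessionDownloadDelegate receives bytes-written callbacks that map directly onto a UIProgressView.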

    Read the article

  • Good way to send a large file over a network in C#?

    - by BFreeman
    I am trying to build an application that can request files from a service running on another machine in the network. These files can be fairly large (500 MB+ at times). I was looking into sending them via TCP, but I'm worried that it may require the entire file to be stored in memory. There will probably only be one client. Copying to a shared directory isn't acceptable either. The only communication required is for the client to say "gimme xyz" and the server to send it (and whatever it takes to ensure this happens correctly). Any suggestions?
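
    A minimal C# sketch of the sending side (host, port and path are illustrative): stream the file through a fixed-size buffer so the whole file is never held in memory.

        using System.IO;
        using System.Net.Sockets;

        class FileSenderSketch
        {
            static void SendFile(string path, string host, int port)
            {
                using (var client = new TcpClient(host, port))
                using (NetworkStream net = client.GetStream())
                using (FileStream file = File.OpenRead(path))
                {
                    var buffer = new byte[81920];                 // 80 KB chunks; memory use stays constant
                    int read;
                    while ((read = file.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        net.Write(buffer, 0, read);
                    }
                }
            }
        }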

    Read the article

  • What's the most efficient way to manage large datasets with Javascript/jQuery in IE?

    - by RenderIn
    I have a search that returns JSON, which I then transform into an HTML table in Javascript. It repeatedly calls the jQuery append() method, once for each row. I have a modern machine, and the Firefox response time is acceptable. But in IE 8 it is unbearably slow. I decided to move the transformation from data to HTML into the server-side PHP, changing the return type from JSON to HTML. Now, rather than calling jQuery append() repeatedly, I call the jQuery html() method once with the entire table. I noticed Firefox got faster, but IE got slower. These results are anecdotal and I have not done any benchmarking, but the IE performance is very disappointing. Is there something I can do to speed up the manipulation of large amounts of data in IE, or is it simply a bad idea to process very much data at once with AJAX/Javascript?
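
    One thing worth trying, as a minimal sketch (field names are illustrative): build the markup as one string and touch the DOM once, rather than calling append() per row.

        var rows = [];
        $.each(data, function (i, item) {                 // 'data' is the parsed JSON array
            rows.push('<tr><td>' + item.name + '</td><td>' + item.value + '</td></tr>');
        });
        $('#results tbody').html(rows.join(''));          // a single DOM update for the whole table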

    Read the article

  • C++ - Efficient container for large amounts of searchable data?

    - by Francisco P.
    Hello, everybody! I am implementing a text-based version of Scrabble for a college project. My dictionary is quite large, weighing in at around 400,000 words (std::string). Searching for a valid word will suck, big time, in terms of efficiency if I go for a vector<string> (O(n)). Are there any good alternatives? Keep in mind, I'm enrolled in freshman year. Nothing TOO complex! Thanks for your time! Francisco
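
    A minimal sketch of one alternative (file name is illustrative): a hash set gives average O(1) membership tests.

        #include <fstream>
        #include <iostream>
        #include <string>
        #include <unordered_set>

        int main() {
            std::unordered_set<std::string> dictionary;
            std::ifstream in("dictionary.txt");           // one word per line
            std::string word;
            while (in >> word) dictionary.insert(word);

            std::cout << (dictionary.count("scrabble") ? "valid" : "not a word") << '\n';
        }

    If C++11 isn't available, a sorted std::vector<std::string> queried with std::binary_search gives O(log n) lookups with the same loading code plus one std::sort.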

    Read the article

  • PHP: How to begin testing large, existing codebase, and test for regression on production site?

    - by anonymous coward
    I'm in charge of at least one large body of existing PHP code, that desperately needs tests, and as well I need some method of checking the production site for errors. I've been working with PHP for many years, but am unfortunately new to testing. (Sorry!). While writing tests for code that has predictable outcomes seems easy enough, I'm having trouble wrapping my head around just how I can test the live site, to ensure proper output. I know that in a test environment, I could set up the database in a known state... but are there proper methods or techniques for testing a live site? Where should I begin? [I am aware of PHPUnit and SimpleTest, but haven't chosen one over the other yet]
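
    For the existing code, one starting point is a characterization test that pins down current behaviour before anything is refactored; a minimal PHPUnit sketch (class and method names are hypothetical, not from the codebase in question):

        <?php
        class InvoiceTotalsTest extends PHPUnit_Framework_TestCase
        {
            public function testTotalsMatchCurrentBehaviour()
            {
                $invoice = new Invoice();                  // hypothetical legacy class
                $invoice->addLine('widget', 2, 9.99);

                // Assert whatever the code does today, so regressions become visible.
                $this->assertEquals(19.98, $invoice->total(), '', 0.001);
            }
        }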

    Read the article

  • How to transfer large file (File size > Heap Size) over the network?

    - by neo
    How do you transfer a large file (file size > heap/RAM size) over the network? Let's say I have a file (size 10GB) and I want to transfer it from machine A (RAM 512MB) to machine B (RAM 512MB). I want to achieve this using Java code. First, is it possible? Any recommendation on a framework? If possible, can we speed this up using threading? Important criterion: the file's data sequence needs to be maintained during transfer. Any example would be a great help.
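
    It is possible as long as the file is streamed rather than read whole; a minimal Java sketch of the sending side (host, port and path are illustrative):

        import java.io.*;
        import java.net.Socket;

        public class FileSender {
            public static void main(String[] args) throws IOException {
                try (Socket socket = new Socket("machine-b", 9000);
                     InputStream file = new BufferedInputStream(new FileInputStream("/data/big.bin"));
                     OutputStream out = socket.getOutputStream()) {
                    byte[] buffer = new byte[64 * 1024];           // constant memory regardless of file size
                    int read;
                    while ((read = file.read(buffer)) != -1) {
                        out.write(buffer, 0, read);                // a single TCP stream preserves byte order
                    }
                }
            }
        }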

    Read the article

  • How to manage memory for iOS with large CSV files?

    - by Pell000
    I'm new to ios development, and I'm running into issues relating to memory management and my approaches to dealing with large datasets. Right now, I am loading the csv files and storing the relevant data as objects in memory at app initialization. Some of the csv files are larger than 1MB, and in total, my app uses about 180MB of memory. This is obviously way too high of a number (unless the info I found is wrong and this is an acceptable number, then please let me know). I feel as though there is a fundamental flaw in my approach: is there a way I can avoid storing the csv files in the project itself? Or, is there a kind of "lazy" loading I can do so that I can simply look up info in the csv file, as opposed to loading all of the data from it at once? Any help would do. I think that I need a new perspective in how to manage this more efficiently.

    Read the article

  • Is it better to have more small ram chips or fewer large ones?

    - by Alex Andronov
    I am currently building a new server. I have options between say 32GB Memory for 2 CPUs, DDR3, 1066MHz (8x4GB Dual Ranked RDIMMs) and 36GB Memory for 2 CPUs, DDR3, 1066MHz (18x2GB Dual Ranked RDIMMs) Both at the same price. Should I go for the higher ram amount or the fewer chips? This will be for a Dell PowerEdge R710 with two Intel® Xeon® E5530, 2.4Ghz, 8MB Cache, 5.86 GT/s QPI, Turbo, HT Thanks

    Read the article

  • Why do I see a large performance hit with DRBD?

    - by BHS
    I see a much larger performance hit with DRBD than their user manual says I should get. I'm using DRBD 8.3.7 (Fedora 13 RPMs). I've set up a DRBD test and measured throughput of disk and network without DRBD:

        dd if=/dev/zero of=/data.tmp bs=512M count=1 oflag=direct
        536870912 bytes (537 MB) copied, 4.62985 s, 116 MB/s

    / is a logical volume on the disk I'm testing with, mounted without DRBD.

    iperf:

        [ 4] 0.0-10.0 sec 1.10 GBytes 941 Mbits/sec

    According to Throughput overhead expectations, the bottleneck would be whichever is slower, the network or the disk, and DRBD should have an overhead of 3%. In my case network and I/O seem to be pretty evenly matched. It sounds like I should be able to get around 100 MB/s. So, with the raw drbd device, I get

        dd if=/dev/zero of=/dev/drbd2 bs=512M count=1 oflag=direct
        536870912 bytes (537 MB) copied, 6.61362 s, 81.2 MB/s

    which is slower than I would expect. Then, once I format the device with ext4, I get

        dd if=/dev/zero of=/mnt/data.tmp bs=512M count=1 oflag=direct
        536870912 bytes (537 MB) copied, 9.60918 s, 55.9 MB/s

    This doesn't seem right. There must be some other factor playing into this that I'm not aware of.

    global_common.conf:

        global { usage-count yes; }
        common { protocol C; }
        syncer { al-extents 1801; rate 33M; }

    data_mirror.res:

        resource data_mirror {
            device    /dev/drbd1;
            disk      /dev/sdb1;
            meta-disk internal;
            on cluster1 { address 192.168.33.10:7789; }
            on cluster2 { address 192.168.33.12:7789; }
        }

    For the hardware I have two identical machines:

        6 GB RAM
        Quad core AMD Phenom 3.2 GHz
        Motherboard SATA controller
        7200 RPM 64 MB cache 1 TB WD drive

    The network is 1 Gb connected via a switch. I know that a direct connection is recommended, but could it make this much of a difference?

    Edited: I just tried monitoring the bandwidth used to try to see what's happening. I used ibmonitor and measured average bandwidth while I ran the dd test 10 times. I got:

        avg ~450 Mbits writing to ext4
        avg ~800 Mbits writing to raw device

    It looks like with ext4, drbd is using about half the bandwidth it uses with the raw device, so there's a bottleneck that is not the network.

    Read the article

  • Is anybody using Splunk in a large-scale production environment?

    - by Nano Taboada
    I've been watching the videos at splunk.com, and really it's hard to believe that one can get all those features for free; there's still that "where's the catch?" in the back of my head. So it'd be great if anybody who is actually using Splunk in production would like to share their experiences, perhaps highlighting its benefits over, say, Nagios? Thanks much in advance.

    Read the article

  • Best way to send large files point-to-point?

    - by Adam S
    I'm looking for a way to send a 10GB file to a friend. I really need to send it over the internet, but e-mail or uploading sites are not really an option. I remember using MSN messenger and having a file transfer feature that worked decently well. However, my friend doesn't have this software and doesn't want to get it. I know that the professional versions of TeamViewer have such a feature, but are there any free alternatives?

    Read the article

  • Copying and rotating large table from Excel to Word without turning it into picture/wmf/...

    - by ldigas
    What would be the easiest way of copying and rotating a table made in Excel, to Word without turning it into a picture/enhanced metafile/or something alike. I know I can use the Section Break routine, but the problem is the table needs to go into a company frame (which I cannot turn onto a landscape), so I literally need to turn the table by 90 degrees. Any way of doing something like that ?

    Read the article

  • Website with large number of users keeps going down due to memory leaks: Tomcat 6 and Java 6

    - by user1766478
    We host many websites on one of our two virtual servers. We use Tomcat 6 and Java 6. It is an MVC model with a Hibernate-like layer. The problem is, one of our biggest clients, with the largest number of members, keeps crashing the server every 6-8 hours (precisely in the mornings when most members log in). We have been having this issue for 4 days now. We are trying to figure out the problem, but we suspect memory leaks. Any suggestions?
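
    One way to confirm a leak, as a minimal sketch (the PID and file name are illustrative): capture a heap dump from the running JVM and inspect it offline, e.g. with Eclipse MAT.

        jmap -dump:live,format=b,file=/tmp/tomcat-heap.hprof <tomcat-pid>

    Adding -XX:+HeapDumpOnOutOfMemoryError to CATALINA_OPTS makes the JVM write a dump automatically at the next crash.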

    Read the article

  • Viewing a large field in a query in SQL management studio with ZOOM?

    - by smithym
    Hi there, can anyone help? I am using SQL Management Studio (SQL Server 2008) to run queries, and some of the fields that come back are varchar(max), for example, and contain a lot of information. Is there a zoom feature to open a window and show me the contents with vertical and horizontal scrollbars? I remember there was; I thought it was F2, but I must have been mistaken, as it doesn't work. Now I have to scroll horizontally through the field and it's really difficult to see everything. Also, some of the fields contain new-line codes etc., so it would be great if the zoom feature would display the info using the new-line codes. Does anybody know how to do this?

    Read the article
