Search Results

Search found 10417 results on 417 pages for 'large'.


  • Need help with testdisk output

    - by dan
    I had (note the past tense) an Ubuntu 12.04 system with separate partitions for the base and /home directories. It started acting wonky, so I decided to do a reinstall with 12.10, intending to reinstall only the base partition. After several seconds, I realized that the installer was repartitioning the drive and reinstalling, so I pulled the power cord. I'm now trying to recover as much as I can with testdisk, but it seems that testdisk is finding 100 unique partitions when I run it - they mostly tend to be HFS+ or Solaris /home (which I think is just ext4; I've never had Solaris on the machine). I've pasted an abbreviated version of the testdisk output below (the first ~100 lines, and then ~100 lines from the middle of the output). Is there a way to combine or recreate the partitions and then run data recovery, or some other way to maximize what I can recover (ideally as much of the file system as possible)? I really only care about what was in the /home directory - I'd rather not use photorec since I don't have another 2 TB HD lying around to recover to. Thanks, Dan
    Mon Dec 10 06:03:00 2012 Command line: TestDisk TestDisk 6.13, Data Recovery Utility, November 2011 Christophe GRENIER <[email protected]> http://www.cgsecurity.org OS: Linux, kernel 3.2.34-std312-amd64 (#2 SMP Sat Nov 17 08:06:32 UTC 2012) x86_64 Compiler: GCC 4.4 Compilation date: 2012-11-27T22:44:52 ext2fs lib: 1.42.6, ntfs lib: libntfs-3g, reiserfs lib: 0.3.1-rc8, ewf lib: none /dev/sda: LBA, HPA, LBA48, DCO support /dev/sda: size 3907029168 sectors /dev/sda: user_max 3907029168 sectors /dev/sda: native_max 3907029168 sectors Warning: can't get size for Disk /dev/mapper/control - 0 B - CHS 1 1 1, sector size=512 /dev/sr0 is not an ATA disk Hard disk list Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63, sector size=512 - WDC WD20EARS-00J2GB0, S/N:WD-WCAYY0075071, FW:80.00A80 Disk /dev/sdb - 1013 MB / 967 MiB - CHS 1014 32 61, sector size=512 - Generic Flash Disk, FW:8.07 Disk /dev/sr0 - 367 MB / 350 MiB - CHS 179470 1 1 (RO), sector size=2048 - PLDS DVD+/-RW DH-16AAS, FW:JD12 Partition table type (auto): Intel Disk /dev/sda - 2000 GB / 1863 GiB - WDC WD20EARS-00J2GB0 Partition table type: EFI GPT Analyse Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63 Current partition structure: Bad GPT partition, invalid signature. 
search_part() Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63 recover_EXT2: s_block_group_nr=0/14880, s_mnt_count=5/4294967295, s_blocks_per_group=32768, s_inodes_per_group=8192 recover_EXT2: s_blocksize=4096 recover_EXT2: s_blocks_count 487593984 recover_EXT2: part_size 3900751872 MS Data 2048 3900753919 3900751872 EXT4 Large file Sparse superblock, 1997 GB / 1860 GiB Linux Swap 3900755968 3907028975 6273008 SWAP2 version 1, 3211 MB / 3062 MiB Results P MS Data 2048 3900753919 3900751872 EXT4 Large file Sparse superblock, 1997 GB / 1860 GiB P Linux Swap 3900755968 3907028975 6273008 SWAP2 version 1, 3211 MB / 3062 MiB interface_write() 1 P MS Data 2048 3900753919 3900751872 2 P Linux Swap 3900755968 3907028975 6273008 search_part() Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63 recover_EXT2: s_block_group_nr=0/14880, s_mnt_count=5/4294967295, s_blocks_per_group=32768, s_inodes_per_group=8192 recover_EXT2: s_blocksize=4096 recover_EXT2: s_blocks_count 487593984 recover_EXT2: part_size 3900751872 MS Data 2048 3900753919 3900751872 EXT4 Large file Sparse superblock, 1997 GB / 1860 GiB block_group_nr 1 recover_EXT2: "e2fsck -b 32768 -B 4096 device" may be needed recover_EXT2: s_block_group_nr=1/14880, s_mnt_count=0/4294967295, s_blocks_per_group=32768, s_inodes_per_group=8192 recover_EXT2: s_blocksize=4096 recover_EXT2: s_blocks_count 487593984 recover_EXT2: part_size 3900751872 MS Data 2046 3900753917 3900751872 EXT4 Large file Sparse superblock Backup superblock, 1997 GB / 1860 GiB block_group_nr 1 recover_EXT2: "e2fsck -b 32768 -B 4096 device" may be needed recover_EXT2: s_block_group_nr=1/14880, s_mnt_count=0/4294967295, s_blocks_per_group=32768, s_inodes_per_group=8192 recover_EXT2: s_blocksize=4096 recover_EXT2: s_blocks_count 487593984 recover_EXT2: part_size 3900751872 MS Data 2048 3900753919 3900751872 EXT4 Large file Sparse superblock Backup superblock, 1997 GB / 1860 GiB block_group_nr 1 recover_EXT2: "e2fsck -b 32768 -B 4096 device" may be needed recover_EXT2: s_block_group_nr=1/14584, s_mnt_count=0/27, s_blocks_per_group=32768, s_inodes_per_group=8192 recover_EXT2: s_blocksize=4096 recover_EXT2: s_blocks_count 477915164 recover_EXT2: part_size 3823321312 MS Data 4094 3823325405 3823321312 EXT4 Large file Sparse superblock Backup superblock, 1957 GB / 1823 GiB block_group_nr 1 ....snip...... 
MS Data 2046 3900753917 3900751872 EXT4 Large file Sparse superblock Backup superblock, 1997 GB / 1860 GiB MS Data 2048 3900753919 3900751872 EXT4 Large file Sparse superblock, 1997 GB / 1860 GiB MS Data 4094 3823325405 3823321312 EXT4 Large file Sparse superblock Backup superblock, 1957 GB / 1823 GiB MS Data 4096 3823325407 3823321312 EXT4 Large file Sparse superblock Backup superblock, 1957 GB / 1823 GiB MS Data 7028840 7033383 4544 FAT12, 2326 KB / 2272 KiB Mac HFS 67856948 67862179 5232 HFS+ found using backup sector!, 2678 KB / 2616 KiB Mac HFS 67862176 67867407 5232 HFS+, 2678 KB / 2616 KiB Mac HFS 67862244 67867475 5232 HFS+ found using backup sector!, 2678 KB / 2616 KiB Mac HFS 67867404 67872635 5232 HFS+, 2678 KB / 2616 KiB Mac HFS 67867472 67872703 5232 HFS+, 2678 KB / 2616 KiB Mac HFS 67872700 67877931 5232 HFS+, 2678 KB / 2616 KiB Mac HFS 67937834 67948067 10234 [EasyInstall_OSX] HFS found using backup sector!, 5239 KB / 5117 KiB Mac HFS 67938012 67948155 10144 HFS+ found using backup sector!, 5193 KB / 5072 KiB Mac HFS 67948064 67958297 10234 [EasyInstall_OSX] HFS, 5239 KB / 5117 KiB Mac HFS 67948070 67958303 10234 [EasyInstall_OSX] HFS found using backup sector!, 5239 KB / 5117 KiB Mac HFS 67948152 67958295 10144 HFS+, 5193 KB / 5072 KiB Mac HFS 67958292 67968435 10144 HFS+, 5193 KB / 5072 KiB Mac HFS 67958300 67968533 10234 [EasyInstall_OSX] HFS, 5239 KB / 5117 KiB Mac HFS 67992596 67997827 5232 HFS+ found using backup sector!, 2678 KB / 2616 KiB Mac HFS 67997824 68003055 5232 HFS+, 2678 KB / 2616 KiB Mac HFS 67997892 68003123 5232 HFS+ found using backup sector!, 2678 KB / 2616 KiB Mac HFS 68003052 68008283 5232 HFS+, 2678 KB / 2616 KiB Mac HFS 68003120 68008351 5232 HFS+, 2678 KB / 2616 KiB Mac HFS 68008348 68013579 5232 HFS+, 2678 KB / 2616 KiB Solaris /home 84429840 123499141 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84429952 123499253 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84493136 123562437 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84493248 123562549 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84566088 123635389 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84566200 123635501 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84571232 123640533 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84571344 123640645 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84659952 123729253 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84660064 123729365 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84690504 123759805 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84690616 123759917 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84700424 123769725 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84700536 123769837 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84797720 123867021 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84797832 123867133 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84812544 123881845 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84812656 123881957 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84824552 123893853 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84824664 123893965 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84847528 123916829 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84847640 123916941 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84886840 123956141 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84886952 123956253 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84945488 124014789 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84945600 124014901 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84957992 124027293 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84958104 124027405 39069302 UFS1, 20 GB / 18 
GiB Solaris /home 84962240 124031541 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84962352 124031653 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84977168 124046469 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84977280 124046581 39069302 UFS1, 20 GB / 18 GiB MS Data 174395467 178483851 4088385 ..... snip (it keeps going on for quite a while)

    Read the article

  • Where can work-at-home coders go to find other coders to chat with in real time and get support as if they were on a large team at an established company?

    - by cypherblue
    I used to work in an office surrounded by a large team of programmers where we all used the same languages and had different expertises. Now that I am on my own forming a startup at home, my productivity is suffering because I miss having people I can talk to for specific help, inspiration and reality checks when working on a coding problem. I don't have access to business incubators or shared (co-working) office spaces for startups so I need to chat with people virtually. Where can I go for real-time chat with other programmers and developers (currently I'm looking for people developing for the web, javascript and python) for live debugging and problem-solving of the tasks I am working on? And what other resources can I use to get fellow programmer support?

    Read the article

  • How do I change the cursor and its size?

    - by Thomas Le Feuvre
    I have recently created an Ubuntu 12.04 partition on my Windows 7 laptop. When installing it, I switched to "high contrast" mode, which has rather large cursors (by large I mean about twice as large and thick as they normally are). Now that I have successfully installed the partition, the large cursors have stuck around even after exiting high contrast mode, but only when I am hovering over stuff, e.g. over text inputs, links, and when resizing windows. All of these cursors are too large. The cursor is only normal-sized when the computer should be displaying the standard mouse pointer. Does anyone know how I might go about fixing this?

    Read the article

  • Faster method for Matrix vector product for large matrix in C or C++ for use in GMRES

    - by user35959
    I have a large, dense matrix A, and I aim to find the solution to the linear system Ax=b using an iterative method (the plan was to use MATLAB's built-in GMRES). For more than 10,000 rows this is too much for my computer to store in memory, but I know that the entries in A are constructed from two known vectors x and y of length N, and the entries satisfy: A(i,j) = .5*((x[i]-x[j])^2+(y[i]-y[j])^2) * log((x[i]-x[j])^2+(y[i]-y[j])^2). MATLAB's GMRES command accepts as input a function handle that computes the matrix-vector product A*x, which allows me to handle larger matrices than I can store in memory. To write the matrix-vector product function, I first tried this in MATLAB by going row by row and using some vectorization, while avoiding forming the entire array A (since it would be too large). Unfortunately this was fairly slow in my application for GMRES. My plan was to write a mex file for MATLAB, which is in C, and ideally should be significantly faster than the MATLAB code. I'm rather new to C, so this went rather poorly and my naive attempt at writing the code in C was slower than my partially vectorized attempt in MATLAB.

        #include <math.h>
        #include "mex.h"

        void Aproduct(double *x, double *ctrs_x, double *ctrs_y, double *b, mwSize n)
        {
            mwSize i;
            mwSize j;
            double val;
            for (i=0; i<n; i++) {
                for (j=0; j<i; j++) {
                    val = pow(ctrs_x[i]-ctrs_x[j],2)+pow(ctrs_y[i]-ctrs_y[j],2);
                    b[i] = b[i] + .5* val * log(val) * x[j];
                }
                for (j=i+1; j<n; j++) {
                    val = pow(ctrs_x[i]-ctrs_x[j],2)+pow(ctrs_y[i]-ctrs_y[j],2);
                    b[i] = b[i] + .5* val * log(val) * x[j];
                }
            }
        }

    The above is the computational portion of the code for the MATLAB mex file (which is slightly modified C, if I understand correctly). Please note that I skip the case i=j, since in that case the variable val would give 0*log(0), which should be interpreted as 0 for my purposes, so I just skip it. Is there a more efficient or faster way to write this? When I call this C function via the mex file in MATLAB, it is quite slow, slower even than the MATLAB method I used. This surprises me since I suspected that C code should be much faster than MATLAB. The alternative, partially vectorized MATLAB method that I am comparing it with is

        function Ax = Aprod(x,ctrs)
            n = length(x);
            Ax = zeros(n,1);
            for j=1:(n-3)
                v = .5*((ctrs(j,1)-ctrs(:,1)).^2+(ctrs(j,2)-ctrs(:,2)).^2).*log((ctrs(j,1)-ctrs(:,1)).^2+(ctrs(j,2)-ctrs(:,2)).^2);
                v(j)=0;
                Ax(j) = dot(v,x(1:n-3));
            end

    (the n-3 is because there are actually 3 extra components, but they are dealt with separately, so I excluded that code). This is partly vectorized and only needs one for loop, so it makes some sense that it is faster. However, I was hoping I could go even faster with a C mex file. Any suggestions or help would be greatly appreciated! Thanks! EDIT: I should be clearer. I am open to any faster method that can help me use GMRES to invert this matrix, which requires a faster way of doing the matrix-vector product without explicitly loading the array into memory. Thanks!
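
    A rough NumPy prototype of the same matrix-free product (not the poster's mex file, and not MATLAB) may help illustrate the structure being computed: for each i, row i of A is built on the fly from the centre coordinates and immediately reduced against x. The names a_product, ctrs_x and ctrs_y mirror the mex code; everything else here is an assumption for illustration.

        import numpy as np

        def a_product(x, ctrs_x, ctrs_y):
            """Matrix-free product b = A @ x, never forming the dense n x n matrix A."""
            x = np.asarray(x, dtype=float)
            ctrs_x = np.asarray(ctrs_x, dtype=float)
            ctrs_y = np.asarray(ctrs_y, dtype=float)
            b = np.zeros(len(x))
            for i in range(len(x)):
                # Squared distances from centre i to every centre (row i of the kernel)
                r2 = (ctrs_x[i] - ctrs_x) ** 2 + (ctrs_y[i] - ctrs_y) ** 2
                row = 0.5 * r2 * np.log(np.where(r2 > 0, r2, 1.0))   # 0*log(0) treated as 0
                b[i] = row @ x
            return b

    If prototyping in Python, scipy.sparse.linalg.LinearOperator can wrap a routine like this and be handed to scipy.sparse.linalg.gmres, which plays the same role as passing MATLAB's gmres a function handle.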

    Read the article

  • Silverlight RIA Services. Query fails on large table but works with Where clause

    - by FauxReal
    I have a somewhat large table, maybe 2000 rows by 50 columns. When using the most basic imaginable RIA implementation (create a one-table model, create a DomainService, drop a datagrid onto MainPage.xaml, drop a datasource onto the datagrid, Ctrl-F5), I get this error: System.ServiceModel.DomainServices.Client.DomainOperationException: Load operation failed for query. Value cannot be null. The error is much larger, but that's the beginning of it. The weird thing is that if I narrow the results down with a where clause on the GetQuery, it works fine. In fact, six different queries which together return all of the rows also work fine. So basically, I'm sure it's not some sort of rogue row. Why do I get this "Value cannot be null" error if I query the whole table? Thanks

    Read the article

  • How to transfer large amount of data using WCF?

    - by JSprang
    We are currently trying to move large amounts of data to a Silverlight 3 client using WCF with PollingDuplex. I have read about MultipleMessagesPerPoll in Silverlight 4 and it appears to be quite a bit faster. Are there any examples out there for me to reference (using MultipleMessagesPerPoll)? Or maybe some good references on using Net.TCP? Maybe I should be taking a completely different approach? Any ideas or suggestions would be greatly appreciated. Thanks!

    Read the article

  • Plugin or module for filtering/sorting a large amount of data?

    - by prometheus
    I have a rather large amount of data (100 MB or so) that I would like to present to a user. The format of the data is similar to the following...

        Date         Location    Log File        Link
        03/21/2010   San Diego   some_log.txt    http://somelink.com
        etc

    My problem is that I would like to have some nice/slick way for the user to filter the information. Unfortunately, because there is so much of it, the jQuery Table Filter plugin does not work (crashes the browser). I was wondering if there is a nice solution or if I have to simply do the filtering on the server end and have a bland pull-down menu / select-box interface for the client to use.
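
    A minimal sketch (not from the original post) of the server-side fallback described above: filter the rows on the server and return only one page to the client, so the browser never sees all 100 MB. The field names and page size are illustrative assumptions.

        rows = [
            {"date": "03/21/2010", "location": "San Diego", "log": "some_log.txt"},
            {"date": "03/22/2010", "location": "Austin", "log": "other_log.txt"},
            # ... many thousands more in practice
        ]

        def query(rows, location=None, page=0, page_size=50):
            """Apply the filter server-side, then slice out a single page."""
            matched = [r for r in rows if location is None or r["location"] == location]
            start = page * page_size
            return matched[start:start + page_size], len(matched)

        page_rows, total = query(rows, location="San Diego", page=0)
        print(total, page_rows)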

    Read the article

  • Best way to store large dataset in SQL Server?

    - by gary
    I have a dataset which contains a string key field and up to 50 keywords associated with that information. Once the data has been inserted into the database there will be very few writes (INSERTs) but mostly queries for one or more keywords. I have read "Tagsystems: performance tests", which is MySQL based, and 2NF appears to be a good method for implementing this, however I was wondering if anyone had experience doing this with SQL Server 2008 and very large datasets. I am likely to initially have 1 million key fields which could have up to 50 keywords each. Would a structure of

        keyfield, keyword1, keyword2, ..., keyword50

    be the best solution, or would two tables

        table 1: keyid, keyfield
            | 1
            | M
        table 2: keyid, keyword

    be a better idea if my queries are mostly going to be looking for results that have one or more keywords?
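
    A small sqlite3 sketch of the two-table (2NF) layout and the "one or more keywords" query expressed as a join. This is an illustration only: SQL Server syntax would differ slightly, and all table, column and sample names here are assumptions rather than anything from the post.

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
            CREATE TABLE keyfields (keyid INTEGER PRIMARY KEY, keyfield TEXT NOT NULL);
            CREATE TABLE keywords  (keyid   INTEGER NOT NULL REFERENCES keyfields(keyid),
                                    keyword TEXT NOT NULL);
            CREATE INDEX idx_keywords ON keywords (keyword, keyid);
        """)
        con.execute("INSERT INTO keyfields VALUES (1, 'doc-001'), (2, 'doc-002')")
        con.executemany("INSERT INTO keywords VALUES (?, ?)",
                        [(1, 'sql'), (1, 'index'), (2, 'sql')])

        # All key fields carrying at least one of the requested keywords
        rows = con.execute("""
            SELECT DISTINCT k.keyfield
            FROM keyfields k JOIN keywords w ON w.keyid = k.keyid
            WHERE w.keyword IN ('sql', 'index')
        """).fetchall()
        print(sorted(rows))   # [('doc-001',), ('doc-002',)]

    The wide keyword1..keyword50 layout, by contrast, forces a 50-way OR across columns to answer the same question.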

    Read the article

  • How to scroll and zoom in/out large images on iPhone?

    - by Horace Ho
    I have a large image, around 30000 (w) x 6000 (h) pixels. You may consider it like a big map. I assume I need to crop it up into smaller tiles. Questions:
    - what are the right ViewControllers to use? (link)
    - what is the tile strategy? (I put this in another question, as it's not iPhone specific; see the sketch after this list)
    Requirements:
    - the whole image (though cropped) can be scrolled up/down/left/right by swipes
    - zoom in (up to pixel-to-pixel) and out (down to screen-fit-by-height) with the 2-finger gesture
    - memory efficiency by lazy-loading tiles
    Bonus requirement:
    - automatic scroll, say from left to right, slowly and smoothly
    Thanks!
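
    As a rough illustration of the tile strategy (purely offline pre-processing, not iPhone code), the big map can be cut into fixed-size tiles once, so the app only loads the tiles visible at the current zoom level. The file names and the 512-pixel tile size are arbitrary assumptions; Pillow is used here just for the sketch.

        from PIL import Image

        Image.MAX_IMAGE_PIXELS = None     # the source image is deliberately huge
        TILE = 512

        img = Image.open("map.png")       # the 30000 x 6000 source image
        width, height = img.size
        for ty in range(0, height, TILE):
            for tx in range(0, width, TILE):
                # Clamp the last tile in each row/column to the image edge
                box = (tx, ty, min(tx + TILE, width), min(ty + TILE, height))
                img.crop(box).save(f"tile_{tx // TILE}_{ty // TILE}.png")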

    Read the article

  • Groovy Grails, How do you stream or buffer a large file in a Controller's response?

    - by Julian Noye
    Hi guys, I have a controller that makes a connection to a URL to retrieve a csv file. I am able to send the file in the response using the following code, and this works fine.

        def fileURL = "www.mysite.com/input.csv"
        def thisUrl = new URL(fileURL);
        def connection = thisUrl.openConnection();
        def output = connection.content.text;
        response.setHeader "Content-disposition", "attachment; filename=${'output.csv'}"
        response.contentType = 'text/csv'
        response.outputStream << output
        response.outputStream.flush()

    However, I don't think this method is appropriate for a large file, as the whole file is loaded into the controller's memory. I want to be able to read the file chunk by chunk and write it to the response chunk by chunk. Any ideas?
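
    The chunk-by-chunk pattern being asked for, sketched in Python purely for illustration: read a fixed-size block from the upstream connection and write it straight to the output, so the whole CSV is never held in memory. The local file below stands in for the servlet response stream; in Grails the same loop amounts to copying connection.inputStream into response.outputStream instead of going through connection.content.text.

        from urllib.request import urlopen

        CHUNK = 64 * 1024   # 64 KiB per read; the size is an arbitrary choice

        with urlopen("http://www.mysite.com/input.csv") as upstream, \
                open("output.csv", "wb") as response_stream:   # stand-in for response.outputStream
            while True:
                chunk = upstream.read(CHUNK)
                if not chunk:
                    break
                response_stream.write(chunk)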

    Read the article

  • Database Design: A proper table design for large number of column values.

    - by Jake
    I wish to perform an experiment many different times. After every trial, I am left with a "large" set of output statistics -- let's say, 1000. I would like to store the outputs of my experiments in a table, but what's the best way...?
    Option 1: Have a table with 1000 columns. Seems like a bad idea. What if the number of statistics one day exceeds the maximum number of columns?
    Option 2: Have a table with three columns. Let's say, ID, StatisticType, and StatisticValue. That way, you can have as many statistics as you want. However, reading a single experiment's statistics becomes more complicated. Moreover, what if different statistics are different data types?
    Any suggestions?
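
    A tiny sketch of "Option 2" above (one row per statistic), with the read side pivoting a single trial's rows back into a dict. All names and values are made up for illustration; if the statistics have mixed data types, the value column could be stored as text alongside a type column.

        rows = [
            (1, "mean_latency", 12.5),
            (1, "p99_latency", 48.0),
            (2, "mean_latency", 11.9),
        ]   # (trial_id, statistic_type, statistic_value)

        def stats_for(trial_id, rows):
            """Collect one experiment's statistics back into a name -> value dict."""
            return {name: value for tid, name, value in rows if tid == trial_id}

        print(stats_for(1, rows))   # {'mean_latency': 12.5, 'p99_latency': 48.0}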

    Read the article

  • How can I pipe a large amount of data as a runtime argument?

    - by Zombies
    Running an executable JAR on a Linux platform here. The program itself works on a somewhat large amount of data, basically a list of URLs... could be up to 2k. Currently I get this from a simple DB call. But I was thinking that instead of creating a new mode, writing SQL to get a new result set, and having to redeploy every time, I could just make the program more robust by passing in the result set (the list of URLs) that needs to be worked on... so, within a Linux environment, is there a pain-free/simple way to get the result set and pass it in dynamically? I know file I/O is one, but it doesn't seem to be efficient because each file has to be named, as well as more logic to handle grabbing the correct file, creating a file with a unique name, etc.
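
    One option is to read the URL list from standard input instead of a named file, so whatever command or query tool emits one URL per line can be piped straight into the JAR. A sketch of the consuming side follows, shown in Python only to keep it short; a Java main reading System.in line by line follows the same shape.

        import sys

        # One URL per line on stdin; blank lines are ignored.
        urls = [line.strip() for line in sys.stdin if line.strip()]
        for url in urls:
            print(f"would fetch {url}")   # placeholder for the real per-URL work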

    Read the article

  • How do I get the sum of the digits of a large number in Haskell?

    - by Tim
    I'm a C++ programmer trying to teach myself Haskell and it's proving challenging to grasp the basics of using functions as a type of loop. I have a large number, 50!, and I need to find the sum of its digits. It's a relatively easy loop in C++ but I want to learn how to do it in Haskell. I've read some introductory guides and am able to get 50! with sum50fac.hs:

        fac 0 = 1
        fac n = n * fac (n - 1)
        x = fac 50
        main = print x

    Unfortunately at this point I'm not entirely sure how to approach the problem. Is it possible to write a function that adds (mod) x 10 to a value and then calls the same function again on x / 10 until x / 10 is less than 10? If that's not possible how should I approach this problem? Thanks!
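
    For what it's worth, the mod-10 / div-10 recursion described above can be sanity-checked quickly outside Haskell. Here is a small Python version (an illustration only, not a Haskell answer), computing 50! and summing its digits both recursively and via the string route.

        import math

        def digit_sum(n):
            """Recursive digit sum: last digit plus the digit sum of the rest."""
            return 0 if n == 0 else n % 10 + digit_sum(n // 10)

        n = math.factorial(50)
        print(digit_sum(n))
        print(sum(int(d) for d in str(n)))   # same result via the string route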

    Read the article

  • Low cost way to host a large table yet keep the performance scalable?

    - by Leo Liang
    I have a growing table storing time series data, 500M entries now, and 200K new records every day. The total size is around 15 GB for now. My clients are mostly querying the table via a PHP script, and the size of the result set is around 10K records (not very large).

        select * from T where timestamp > X and timestamp < Y and additionFilters

    And I want this operation to be cheap. Currently my table is hosted in Postgres 7 on a single box with 16 GB of memory, and I would love to see some good suggestions for hosting this at low cost while still allowing me to scale up for performance if needed. The table serves:
    1. Query: 90%
    2. Insert: 9.9%
    3. Update: 0.1% <-- very rare.

    Read the article

  • What is the best way in Silverlight to make areas of a large graphic clickable?

    - by Edward Tanguay
    In a Silverlight application I have large images which have flow charts on them. I need to handle clicks on specific hotspots of the image where the flow chart boxes are. Since the flow charts will always be different, the information about where the hotspots are has to be dynamic, e.g. in a list of coordinates. I've found articles like this one, but I don't need that level of detail (e.g. the outline of countries), just simple rectangle and circle areas. I've also found articles where they talk about overlaying an HTML image map over the Silverlight application, but there has to be an easier way than this. What is the best way to handle clicks on specific areas of an image in Silverlight?
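
    One way to picture the "list of coordinates" idea: each hotspot is a rectangle or circle in image coordinates plus an identifier, and a click is resolved with a simple hit test. The sketch below is Python for brevity and uses made-up hotspot data; in Silverlight the same test would sit in a MouseLeftButtonDown handler, using the click position relative to the Image element.

        hotspots = [
            {"id": "box-start",  "kind": "rect",   "x": 40,  "y": 30, "w": 120, "h": 60},
            {"id": "decision-1", "kind": "circle", "cx": 300, "cy": 90, "r": 35},
        ]

        def hit_test(px, py, hotspots):
            """Return the id of the first hotspot containing the point, else None."""
            for h in hotspots:
                if h["kind"] == "rect":
                    if h["x"] <= px <= h["x"] + h["w"] and h["y"] <= py <= h["y"] + h["h"]:
                        return h["id"]
                elif h["kind"] == "circle":
                    if (px - h["cx"]) ** 2 + (py - h["cy"]) ** 2 <= h["r"] ** 2:
                        return h["id"]
            return None

        print(hit_test(100, 50, hotspots))    # box-start
        print(hit_test(310, 100, hotspots))   # decision-1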

    Read the article

  • How can a large number of developers write software without either a cumbersome process or poor quality?

    - by Mark Robinson
    I work at a company with hundreds of people writing software for essentially the same product. The quality of the software has to be high because so many people depend on it (not least the developers themselves). Because of this every major issue has resulted in a new check - either automated or manual. As a result the process of delivering software is becoming ever more burdensome. So that requires more developers which... well you can see it is a vicious circle. We now have a problem with releasing software quickly - the lead time even to change one line of code for a very serious issue is at least one day. What techniques do you use to speed up the delivery of software in a large organization, while still maintaining software quality?

    Read the article

  • Entity Framework 4 - Handling very large (1000+ tables) data models?

    - by David Kreps
    We've got a database with 1000+ tables and would like to consider using EF4 for our data access layer, but I'm concerned about the practical realities of using it for such a large data model. I've seen this question and read about the suggested solutions here and here. These may work, but appear to refer to the first version of the Entity Framework (and are more complex than I'd like). Does anyone know if these solutions have been improved upon in EF4? Or does anyone have other suggestions altogether? Thanks.

    Read the article

  • Build a decision tree for classification of a large amount of data, using Python?

    - by kaushik
    Hi, I am working on speech synthesis. I have a large number of pronunciations for each phone (i.e. alphabet symbol) and need to classify them, according to a few features such as segment size (int) and the alphabet symbol itself (string), into a smaller set suitable for that particular context. For this purpose, I have decided to use a decision tree for classification. The data to be parsed is in S-expression format, e.g. ((question)(LEFTNODE)(RIGHTNODE)). I have an idea for building a decision tree for normal built-in types such as lists; I am looking for suggestions on an implementation for S-expressions. Kindly help. Thanks in advance. Note: this question may look similar to my previous post, sorry if I shouldn't make multiple posts; I had already edited it many times, so I thought of writing a new question instead of editing it again.
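
    Since the question is about Python, here is a minimal sketch of one way to handle the S-expression form directly: parse ((question)(LEFTNODE)(RIGHTNODE)) into nested lists, then walk the nested lists as a binary decision tree. The question format used here (a "feature operator value" triple) and the sample data are assumptions for illustration, not the poster's actual format.

        def tokenize(text):
            return text.replace("(", " ( ").replace(")", " ) ").split()

        def parse(tokens):
            """Turn a token stream into nested Python lists."""
            token = tokens.pop(0)
            if token == "(":
                node = []
                while tokens[0] != ")":
                    node.append(parse(tokens))
                tokens.pop(0)   # drop the closing ")"
                return node
            return token

        def classify(tree, features):
            """Internal nodes look like [question, left, right]; anything else is a leaf."""
            if not isinstance(tree, list) or len(tree) != 3:
                return tree
            question, left, right = tree
            name, op, value = question           # e.g. ['segment_size', '<', '40']
            if op == "<" and features[name] < float(value):
                return classify(left, features)
            return classify(right, features)

        tree = parse(tokenize("((segment_size < 40) (short_variant) (long_variant))"))
        print(classify(tree, {"segment_size": 25}))   # ['short_variant']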

    Read the article

  • Has anyone used an object database with a large amount of data?

    - by Jon Kruger
    Object databases like MongoDB and db4o are getting lots of pub lately. Everyone that plays with them seems to love it. I'm guessing that they are dealing with about 640K of data in their sample apps. Has anyone tried to use an object database with a large amount of data (say, 50GB or more)? Are you able to still execute complex queries against it (like from a search screen)? How does it compare to your usual relational database of choice? I'm just curious. I want to take the object database plunge, but I need to know if it'll work on something more than a sample app.

    Read the article

  • What is the best way to organise a large database of addresses?

    - by shurik2533
    What is the best way to organise a large database of addresses? I need to create a MySQL database of addresses. How should it be organised? I have two variants:

    1)
        countries: id | name
            1 | Russia
        cities: id | name
            1 | Moscow
            2 | Saratov
        villages: id | name
        streets: id | name
            1 | Lenin st.
        places: id | name | country_id | city_id | village_id | street_id | building_number | office | flat_number | room_number
            1 | somebuilding | 1 | 1 | NULL | 1 | 31 | 12a | NULL | NULL

    For simplification I am not using all of the address parts. If a part does not participate in the address, it is NULL.

    2)
        addressElements: id | name
            1 | country
            2 | city
            3 | village
            4 | street
            5 | office
            6 | flat_number
            7 | room_number
        addressValues: id | addressElement_id | value
            1 | 1 | Russia
            2 | 2 | Saratov
            3 | 2 | Moscow
            4 | 3 | Prostokvashino
            5 | 4 | Lenin st.
        places: id | name
            1 | somebuilding
        places_has_addressValues: place_id | addressValue_id
            1 | 1
            1 | 3
            1 | 5

    UPD. I have decided to do it as follows

    Read the article

  • Is there a convention for organizing the includes/exports in a large C++ project?

    - by BlueTrin
    Hello, in a large C++ solution, is there a best/standard way to separate the include files necessary to build an intermediary DLL from the include files which will be used by the DLL's clients? We have grouped all the include files in a folder called Interface (for DLL interface), but then the customers have to either add the Interface folder as a default include directory or type the full name as: #include "ProjectName/Interface/myinterface.h". Wouldn't it be better to create a separate folder called exports, where I would create a folder called ProjectName and put the include files there? The customers would then be typing: #include "ProjectName/myinterface.h". If I do the above, should I keep the files within the solution and add a post-build event (I use Visual Studio 2k5) to copy the files into the "export" folder (/ProjectName/)? Or is it better to just include the files directly from this folder within my project (this is more direct and has fewer chances of causing maintenance issues)? I am looking for advice more than for a definite solution. Thank you for reading this! Anthony

    Read the article

  • How to deal with large result sets with Linq to Entities?

    - by user169867
    I have a fairly complex LINQ to Entities query that I display on a website. It uses paging, so I never pull down more than 50 records at a time for display. But I also want to give the user the option to export the full results to Excel or some other file format. My concern is that a large number of records could potentially all be loaded into memory at one time to do this. Is there a way to process a LINQ result set one record at a time, like you can with a DataReader, so only one record is really kept in memory at a time? I'd appreciate any help. Thanks

    Read the article

  • iPhone SDK: Downloading large files from a server into the app bundle.

    - by Jessica
    Hi, I am building an app that plays multiple video files, but I would like to know: how do you download a video file (100 MB - 300 MB) from a server into the application's bundle so it can later be referred to locally in code? The reason I want this type of setup in my app is that I don't want the app binary to be made unnecessarily large by including videos some users may not want. Also, does this violate any of Apple's terms? And would it be simple to implement a progress view with this kind of setup, and if so, how? Any help is appreciated.

    Read the article

  • Are indexes good or bad for a large database?

    - by gmemon
    Hello all, I read on the MySQL Performance Blog that when tables are large, it is better to scan full tables instead of using indexes. I have a table with tens of millions of rows. When conducting queries, if I use no indexes, then queries are 24 times slower than with indexes. I know a lot of things may cause this (e.g., whether rows are stored sequentially), but can you please give me some hints about what might be happening? Or how should I start examining this issue? I want to understand when the use of indexes is preferred and when it's not. Thanks

    Read the article
