Search Results

Search found 10417 results on 417 pages for 'large'.

Page 11/417

  • Am I sending large amounts of data sensibly?

    - by Sofus Albertsen
    I am about to design a video conversion service that is scalable on the conversion side. The architecture is as follows:
      1. A web page accepts the video upload.
      2. When the upload is done, a message is sent to one of several resizing servers.
      3. The resizing server locates the video, saves it to disk, and converts it to several formats and resolutions.
      4. The resizing server uploads the output to a content server and messages back that the conversion is done.
    Messaging is something I have covered, but right now I am transferring the files via FTP and I wonder if there is a better way. Is there something faster or more reliable? All the servers will sit on the same gigabit switch or a neighboring switch, so fast transfers are expected.
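
    One common alternative to FTP on a trusted LAN is plain HTTP: stream the file to an upload endpoint on the content server. A minimal sketch, assuming a hypothetical PUT endpoint on the content server (the URL and file path below are illustration only):

        import java.io.IOException;
        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;

        public class HttpUpload {
            // Streams a local file to an HTTP endpoint as a PUT request without buffering it all in memory.
            static void upload(Path file, String target) throws IOException {
                HttpURLConnection conn = (HttpURLConnection) new URL(target).openConnection();
                conn.setRequestMethod("PUT");
                conn.setDoOutput(true);
                conn.setFixedLengthStreamingMode(Files.size(file)); // avoid holding the whole video in RAM
                try (OutputStream out = conn.getOutputStream()) {
                    Files.copy(file, out);
                }
                int status = conn.getResponseCode();
                if (status / 100 != 2) {
                    throw new IOException("Upload failed with HTTP status " + status);
                }
            }

            public static void main(String[] args) throws IOException {
                // Hypothetical names: the content server would need a handler that accepts PUT uploads.
                upload(Paths.get("/tmp/video-1080p.mp4"), "http://content-server.local/uploads/video-1080p.mp4");
            }
        }

    Whether this beats FTP depends mostly on the endpoints rather than the protocol; on a gigabit LAN both are usually limited by disk I/O, not the wire.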

    Read the article

  • Split large file, have arbitrary start index number

    - by nEJC
    I do a lot of file manipulation on my system, and in one particular batch job I end up with a roughly 16 GB file. I need to split this data into smaller chunks for another process. I split it into 10,000 lines per file with a numeric index padded to 5 digits: split -a 5 -d -l 10000 large_input_file /out_path/out. This way I end up with files named out.00000, out.00001, ... The problem is that the indexing always starts at 0. Is there a way to set it to an arbitrary starting index? The man page reveals nothing ...
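
    Recent GNU coreutils builds of split accept --numeric-suffixes=FROM to start numbering at an arbitrary value; if that option is not available, renaming the output afterwards works too. A minimal sketch in Java of such a post-processing rename (the directory, prefix, and offset are assumptions):

        import java.io.IOException;
        import java.nio.file.DirectoryStream;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.util.ArrayList;
        import java.util.Comparator;
        import java.util.List;

        public class RenumberChunks {
            public static void main(String[] args) throws IOException {
                Path dir = Paths.get("/out_path");   // where split wrote its output (assumption)
                String prefix = "out.";
                int offset = 42;                     // desired starting index (assumption)

                // Collect out.00000, out.00001, ... in ascending order.
                List<Path> chunks = new ArrayList<>();
                try (DirectoryStream<Path> ds = Files.newDirectoryStream(dir, prefix + "*")) {
                    for (Path p : ds) chunks.add(p);
                }
                chunks.sort(Comparator.comparing(p -> p.getFileName().toString()));

                // Rename from the highest index downward so no target name collides with a not-yet-renamed source.
                for (int i = chunks.size() - 1; i >= 0; i--) {
                    Files.move(chunks.get(i), dir.resolve(String.format("%s%05d", prefix, i + offset)));
                }
            }
        }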

    Read the article

  • Replicating A Volume Of Large Data via Transactional Replication

    During weekend maintenance, members of the support team executed an UPDATE statement against the database on the OLTP Server. This database was part of Transactional Replication, and once the UPDATE statement was executed the replication procedure came to a halt with an error message. Satnam Singh decided to work on this case and try to find an efficient way to rebuild replication without significant downtime.

    Read the article

  • Canonicalization Within Large Corporations

    As you have probably heard, canonicalization is one of the latest announcements sweeping the SEO industry. What you probably haven't heard, however, is how to pronounce it - or how it will affect your website. I'll do my best to explain canonicalization in layman's terms, but forgive me if it's still tough to understand.

    Read the article

  • Continuing to code on large projects

    - by user3487347
    I am a hobbyist programmer, and I've started many medium-sized projects to work on just by myself. These include games, a raytracer, physics simulations, etc. By the time these projects get to a certain size (around 5,000 lines), I begin to slow down in adding features to the program. This is not because of a lack of ideas of what to implement, but rather a struggle with how to go about it. In particular, I'm afraid of breaking what I already have working in order to implement a new feature. I've tried using version control like Git and Subversion, but these seem unnecessary when you are a one-man team. I simply have a folder of "versions" of my program, one for each major change I make. How do I keep coding past this 5,000-line mark? What about the 50,000-line mark?

    Read the article

  • How to quickly search through a very large list of strings / records on a database

    - by Giorgio
    I have the following problem: I have a database containing more than 2 million records. Each record has a string field X and I want to display a list of records for which field X contains a certain string. Each record is about 500 bytes in size. To make it more concrete: in the GUI of my application I have a text field where I can enter a string. Above the text field I have a table displaying the (first N, e.g. 100) records that match the string in the text field. When I type or delete one character in the text field, the table content must be updated on the fly.
    I wonder if there is an efficient way of doing this using appropriate index structures and/or caching. As explained above, I only want to display the first N items that match the query. Therefore, for N small enough, it should not be a big issue loading the matching items from the database. Besides, caching items in main memory can make retrieval faster. I think the main problem is how to find the matching items quickly, given the pattern string. Can I rely on some DBMS facilities, or do I have to build some in-memory index myself? Any ideas?
    EDIT: I have run a first experiment. I have split the records into different text files (at most 200 records per file) and put the files in different directories (I used the content of one data field to determine the directory tree). I end up with about 50000 files in about 40000 directories. I have then run Lucene to index the files. Searching for a string with the Lucene demo program is pretty fast. Splitting and indexing took a few minutes: this is totally acceptable for me because it is a static data set that I want to query. The next step is to integrate Lucene in the main program and use the hits returned by Lucene to load the relevant records into main memory.
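
    Since the EDIT already reaches for Lucene, here is a minimal sketch of indexing field X directly (no intermediate files) and fetching the first N hits. Field names, paths, and the analyzer choice are assumptions, and exact class locations vary a little between Lucene versions:

        import java.nio.file.Paths;
        import org.apache.lucene.analysis.standard.StandardAnalyzer;
        import org.apache.lucene.document.Document;
        import org.apache.lucene.document.Field;
        import org.apache.lucene.document.StoredField;
        import org.apache.lucene.document.TextField;
        import org.apache.lucene.index.DirectoryReader;
        import org.apache.lucene.index.IndexWriter;
        import org.apache.lucene.index.IndexWriterConfig;
        import org.apache.lucene.queryparser.classic.QueryParser;
        import org.apache.lucene.search.IndexSearcher;
        import org.apache.lucene.search.ScoreDoc;
        import org.apache.lucene.search.TopDocs;
        import org.apache.lucene.store.Directory;
        import org.apache.lucene.store.FSDirectory;

        public class RecordSearch {
            static final int N = 100; // only the first N matches are shown in the GUI

            // Index one record: field X is analyzed for searching, the record id is stored for lookup.
            static void addRecord(IndexWriter writer, long id, String fieldX) throws Exception {
                Document doc = new Document();
                doc.add(new TextField("x", fieldX, Field.Store.NO));
                doc.add(new StoredField("id", id));
                writer.addDocument(doc);
            }

            // Return the database ids of the first N records whose field X matches the query string.
            static long[] search(Directory dir, String queryText) throws Exception {
                try (DirectoryReader reader = DirectoryReader.open(dir)) {
                    IndexSearcher searcher = new IndexSearcher(reader);
                    QueryParser parser = new QueryParser("x", new StandardAnalyzer());
                    TopDocs top = searcher.search(parser.parse(queryText), N);
                    long[] ids = new long[top.scoreDocs.length];
                    for (int i = 0; i < top.scoreDocs.length; i++) {
                        ScoreDoc sd = top.scoreDocs[i];
                        ids[i] = searcher.doc(sd.doc).getField("id").numericValue().longValue();
                    }
                    return ids;
                }
            }

            public static void main(String[] args) throws Exception {
                Directory dir = FSDirectory.open(Paths.get("/tmp/record-index")); // index location is an assumption
                try (IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()))) {
                    addRecord(writer, 1L, "example value of field X");
                    addRecord(writer, 2L, "another record mentioning lucene");
                }
                for (long id : search(dir, "lucene")) {
                    System.out.println("hit: record " + id); // load these rows from the database or a cache
                }
            }
        }

    Note that the standard analyzer matches whole terms rather than arbitrary substrings; for match-as-you-type behaviour, prefix queries or an n-gram field are the usual next step.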

    Read the article

  • Display large amount of data to client through pagination

    - by ebram tharwat
    I have a web application in which I need to show a large number of records to clients. I will use pagination, but I was wondering whether I should:
      1. Load all the data once, so that pagination, sorting and searching are easy - but the initial load takes a long time (against a local DB it takes up to 9 seconds).
      2. Or, each time I show a new page, make a new request to the server and then a new request to the DB to get the next records. But then, what if the client clicks the Prev button? I would be making a new request for data I had already fetched. Should I cache data that was loaded before, and how, and is that a good technique?
    So: load all the data once, or make a new request every time I need data that may have been loaded before? I'm using ASP.NET MVC SPA with durandaljs and knockoutjs.
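
    The stack here is ASP.NET MVC, but server-side paging with a small page cache looks much the same anywhere; a hedged sketch in Java/JDBC (table, column names, and page size are assumptions, and the LIMIT/OFFSET syntax is MySQL/PostgreSQL style - SQL Server uses OFFSET ... FETCH):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        public class RecordPager {
            private static final int PAGE_SIZE = 50;
            // Tiny in-memory cache so revisiting a page (e.g. the Prev button) skips the database round trip.
            private final Map<Integer, List<String>> pageCache = new HashMap<>();
            private final Connection conn;

            public RecordPager(Connection conn) {
                this.conn = conn;
            }

            public List<String> getPage(int pageNumber) throws SQLException {
                List<String> cached = pageCache.get(pageNumber);
                if (cached != null) return cached;

                String sql = "SELECT name FROM records ORDER BY id LIMIT ? OFFSET ?";
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    ps.setInt(1, PAGE_SIZE);
                    ps.setInt(2, pageNumber * PAGE_SIZE);
                    List<String> rows = new ArrayList<>();
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) rows.add(rs.getString("name"));
                    }
                    pageCache.put(pageNumber, rows);
                    return rows;
                }
            }
        }

    The same caching idea also works purely on the client: keep pages the SPA has already received in a JavaScript map keyed by page number, and only hit the server for pages it has not seen.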

    Read the article

  • Sharing Large Database Backup Among Team

    - by MattGWagner
    I work on a team of three to five developers who work on an ASP.net web application remotely. We currently run a full local database from a recent backup on all of our machines during development. The current backup, compressed, is about 18 GB. I'm looking to see if there's an easier way to keep all of our local copies relatively fresh without each of us individually downloading the 18 GB file over HTTP from our web server on a regular basis. I guess FTP is an option, but it won't speed the process up at all. I'm familiar with torrents, and the thought keeps hitting me that something like that would be effective, but I'm unsure of the security or the process.

    Read the article

  • Ubuntu 12.04 hangs when trying to open a large Excel file with LibreOffice or Matlab

    - by user1565754
    I have an xlsx file of size 27.3 MB, and when I try to open it in either LibreOffice or Matlab the whole system slows down. My processor is an AMD Sempron(tm) 140 (should be about 2.7 GHz) and I have about 1.7 GB of memory. Any ideas? I opened this file in Windows with no problem... of course it took a few seconds to load, but Ubuntu freezes with this file completely... smaller files of 3 MB, 5 MB etc. open just fine... Thanks for the support =)

    Read the article

  • Why might the Large Object Heap grow rather than throw an exception?

    - by Unsliced
    In a previous question I asked about possible programmatic ways of maximising the largest block allocatable on the LOH. I'm still seeing the problems, but now I'm trying to get my head around why the LOH seems to grow and shrink in size, yet I'm still seeing OutOfMemoryExceptions that tally with what others have reported as being due to LOH fragmentation. Why might one call to, for example, StringBuilder.EnsureCapacity throw an OutOfMemoryException for me, but another call from somewhere else result in the LOH expanding in size (according to the performance counters, it is growing and shrinking)?

    Read the article

  • How do I get the file size of a large (> 4 GB) file?

    - by endeavormac
    How can I get the file size of a file in C when the file size is greater than 4 GB? ftell returns a 4-byte signed long, limiting it to files of about 2 GB. stat has a size field of type off_t, which is also 4 bytes (I'm not sure of the sign), so at most it can tell me the size of a 4 GB file. What if the file is larger than 4 GB?

    Read the article

  • Problem with large number of markers on the map...

    - by bobetko
    I am working on an Android app that already exists on iPhone. In the app there is a Map activity that has (I counted) around 800 markers in four groups, marked by drawables in four different colors. Each group can be turned on or off. I keep the information about the markers in a List. I create a mapOverlay for each group, then attach that overlay to the map. I strongly believe I did the coding part properly, but I will attach my code anyway...
    The thing is, my Nexus One can't handle the map with all those markers. It takes around 15 seconds just to draw 500 markers. Then, when all are drawn, the map is not quite smooth: it is somewhat hard to zoom and navigate around. It can be done, but the experience is bad and I would like to see if something can be done there. The iPhone doesn't seem to have problems showing all these markers: it takes roughly 1-2 seconds to show all of them, and zooming and panning are not that bad - the slowdown is noticeable but still acceptable.
    I personally think it is no good to draw all those markers, but the app was designed by somebody else and I am not supposed to make any drastic changes. I am not sure what to do here. It seems I will have to come up with different functionality: maybe use the GPS location, if known, and draw only markers within some radius, or, if the location is not known, use the center of the screen (map) and draw markers around that. I will have to have a reasonable explanation for my bosses in case I make these changes. I would appreciate it if anybody has any ideas. And the code:

        ...
        for (int m = 0; m < ArrList.size(); m++) {
            tName = ArrList.get(m).get("name").toString();
            tId = ArrList.get(m).get("id").toString();
            tLat = ArrList.get(m).get("lat").toString();
            tLng = ArrList.get(m).get("lng").toString();
            try {
                lat = Double.parseDouble(tLat);
                lng = Double.parseDouble(tLng);
                p1 = new GeoPoint((int) (lat * 1E6), (int) (lng * 1E6));
                OverlayItem overlayitem = new OverlayItem(p1, tName, tId);
                itemizedoverlay.addOverlay(overlayitem);
            } catch (NumberFormatException e) {
                Log.d(TAG, "NumberFormatException" + e);
            }
        }
        mapOverlays.add(itemizedoverlay);
        mapView.postInvalidate();

        public class HelloItemizedOverlay extends ItemizedOverlay<OverlayItem> {
            private ArrayList<OverlayItem> mOverlays = new ArrayList<OverlayItem>();
            private Context mContext;

            public HelloItemizedOverlay(Drawable defaultMarker, Context context) {
                super(boundCenterBottom(defaultMarker));
                mContext = context;
            }

            public void addOverlay(OverlayItem overlay) {
                mOverlays.add(overlay);
                populate();
            }

            @Override
            protected OverlayItem createItem(int i) {
                return mOverlays.get(i);
            }

            @Override
            public int size() {
                return mOverlays.size();
            }

            @Override
            protected boolean onTap(int index) {
                final OverlayItem item = mOverlays.get(index);
                // ... each marker has a tap handler that shows a clickable balloon with the marker's name.
                return true;
            }
        }
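
    One way to cut the marker count, which the question itself suggests, is to add only the markers within some radius of a known location (or of the current map centre). A minimal sketch of such a filter, meant to live in the same class as the loop above (it reuses ArrList, TAG, and HelloItemizedOverlay, needs android.location.Location imported, and the radius in the example call is an arbitrary assumption):

        // Hypothetical helper: add only markers within maxMeters of (centerLat, centerLng).
        private void addNearbyMarkers(double centerLat, double centerLng, float maxMeters,
                                      HelloItemizedOverlay itemizedoverlay) {
            float[] dist = new float[1];
            for (int m = 0; m < ArrList.size(); m++) {
                try {
                    double lat = Double.parseDouble(ArrList.get(m).get("lat").toString());
                    double lng = Double.parseDouble(ArrList.get(m).get("lng").toString());
                    Location.distanceBetween(centerLat, centerLng, lat, lng, dist);
                    if (dist[0] > maxMeters) continue; // skip markers outside the radius
                    GeoPoint p = new GeoPoint((int) (lat * 1E6), (int) (lng * 1E6));
                    itemizedoverlay.addOverlay(new OverlayItem(p,
                            ArrList.get(m).get("name").toString(),
                            ArrList.get(m).get("id").toString()));
                } catch (NumberFormatException e) {
                    Log.d(TAG, "NumberFormatException" + e);
                }
            }
        }

        // Example call, e.g. from onCreate or when the user's location changes (10 km radius assumed):
        // addNearbyMarkers(lastKnownLat, lastKnownLng, 10000f, itemizedoverlay);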

    Read the article

  • Reading from a very large table using multiple threads (Java) and writing the rows to a single file

    - by user2534926
    I am currently facing a situation where I have a table with almost 80 million rows, and I have to take a dump of that table and store it in a CSV file. Currently I am using a not-so-professional approach (a Perl script with the DBI interface, printing the values to stdout and redirecting them to a CSV file). Now I am planning to use a Java threading approach. Can you suggest a way forward? Thanks in advance.
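
    A common shape for this is several reader threads pulling disjoint id ranges over JDBC and handing rows to a single writer thread, so only one thread ever touches the output file. A hedged sketch (the table name, columns, id ranges, and JDBC URL are all assumptions):

        import java.io.BufferedWriter;
        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;

        public class ParallelCsvDump {
            private static final String POISON = "\u0000END"; // sentinel telling the writer to stop
            private static final BlockingQueue<String> lines = new ArrayBlockingQueue<>(10000);

            // One reader per id range: queue CSV lines instead of writing them directly.
            static Runnable reader(long fromId, long toId) {
                return () -> {
                    String sql = "SELECT id, name, created_at FROM big_table WHERE id >= ? AND id < ?";
                    try (Connection c = DriverManager.getConnection("jdbc:mysql://localhost/db", "user", "pass");
                         PreparedStatement ps = c.prepareStatement(sql)) {
                        ps.setFetchSize(1000); // a hint only; MySQL needs useCursorFetch=true (or Integer.MIN_VALUE) to really stream
                        ps.setLong(1, fromId);
                        ps.setLong(2, toId);
                        try (ResultSet rs = ps.executeQuery()) {
                            while (rs.next()) {
                                lines.put(rs.getLong(1) + "," + rs.getString(2) + "," + rs.getTimestamp(3));
                            }
                        }
                    } catch (SQLException | InterruptedException e) {
                        throw new RuntimeException(e);
                    }
                };
            }

            public static void main(String[] args) throws Exception {
                int readers = 4;
                long chunk = 20000000L; // ~80M rows split four ways (assumption about the id distribution)
                ExecutorService pool = Executors.newFixedThreadPool(readers);
                for (int i = 0; i < readers; i++) {
                    pool.submit(reader(i * chunk, (i + 1) * chunk));
                }

                // The single writer thread is the only place that touches the CSV file.
                Thread writer = new Thread(() -> {
                    try (BufferedWriter out = Files.newBufferedWriter(Paths.get("dump.csv"))) {
                        for (String line; !(line = lines.take()).equals(POISON); ) {
                            out.write(line);
                            out.newLine();
                        }
                    } catch (IOException | InterruptedException e) {
                        throw new RuntimeException(e);
                    }
                });
                writer.start();

                pool.shutdown();
                pool.awaitTermination(1, TimeUnit.DAYS);
                lines.put(POISON); // all readers are done, tell the writer to finish
                writer.join();
            }
        }

    Whether the threads actually help depends on where the bottleneck is: if the database or the disk is the limit, a single well-tuned streaming SELECT (or, for MySQL, mysqldump / SELECT ... INTO OUTFILE) can be just as fast.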

    Read the article

  • Diff 2 large XML files to produce a delta xml file

    - by aniln
    I need to be able to diff 2 large / very large XML files and produce a delta XML file. This process will also be part of a larger automated process on the hardware / OS configuration below:
      Machine hardware: sun4v
      OS version: 5.10
      Processor type: sparc
      Hardware: SUNW,SPARC-Enterprise-T5220
    Please let me know if there's an installable application on Solaris which can be called as part of a ksh script. Example: run driver_script.ksh, and the script will have a line:
      xml_delta file1.xml file2.xml delta_file.xml
    where xml_delta is the installable application which produces the delta file after comparing file1.xml and file2.xml.
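
    If a Java runtime is available on the box, one option that a ksh script can wrap is a small jar built around an XML-diff library such as XMLUnit; a rough sketch (XMLUnit 2.x class names assumed, and note that this DOM-based approach only lists differences - it does not write a delta XML file - and will struggle on truly huge inputs):

        import java.io.File;
        import org.xmlunit.builder.DiffBuilder;
        import org.xmlunit.builder.Input;
        import org.xmlunit.diff.Diff;
        import org.xmlunit.diff.Difference;

        public class XmlDelta {
            // Usage from ksh: java -jar xml_delta.jar file1.xml file2.xml > differences.txt
            public static void main(String[] args) {
                Diff diff = DiffBuilder.compare(Input.fromFile(new File(args[0])))
                                       .withTest(Input.fromFile(new File(args[1])))
                                       .ignoreWhitespace()
                                       .build();
                for (Difference d : diff.getDifferences()) {
                    System.out.println(d);
                }
            }
        }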

    Read the article

  • Good text editors or viewers for large log files

    - by Kristopher Johnson
    Log files and other textual data files are often tens or hundreds of megabytes in size, and some editors choke when you try to open something so large. What are some good applications for viewing large files? Bonus points for apps that can open compressed files, search for things with regular expressions, parse output lines, etc.

    Read the article

  • Windows 2008 R2 large file copy causes Hyper-V Manager to stop responding

    - by maryeileen
    I'm using the EXPORT feature in Hyper-V to move a large virtual machine (VM) over a 1GB network from a Windows 2008 box to a Windows 2008 R2 box (200GB), and it's so intense that I get the following icon on my destination Hyper-V Manager: Is this expected? Is there another way to get a large file across the network and minimize this intense I/O effect? Has anyone else ever seen that Do Not Enter sign? The other VMs are functional but slow, but I'm guessing that is expected.

    Read the article

  • Saving table yields "Record is too large" in Access

    - by C. Ross
    I have an Access database that I gave to a user (shame on my head). They were having trouble with some data being too long, so I suggested changing several text fields to memo fields. I tried this in my copy and it worked perfectly, but when the user tries it they get a "Record is too large" message box on saving the modified table design. Obviously the same record is not too large in my database, so why would it be in theirs?

    Read the article

  • MySQL reclaim index space after large delete?

    - by cdunn
    After performing a large delete in MySQL, I understand you need to run a NULL ALTER to reclaim disk space. Is this also true for reclaiming index space? We have tables using 10 GB of index space and have deleted/archived large chunks of this data, and we're unsure if we need to rebuild the table in order to decrease the size of the index. Can anyone offer any advice? We are trying to avoid rebuilding the table since it would take quite a while and lock the table. Thanks!

    Read the article

  • Explorer.exe hangs during large file moves to an external drive

    - by PiotrK
    When moving large files (700 MB+) to an external drive formatted NTFS, over USB 3.0, I've noticed strange things about explorer.exe (I am using Windows 7, up to date).
    Sometimes after moving a file, Explorer gets stuck (it can happen after a few files during a move of several large files) - the move window freezes and I am unable to kill Explorer (via taskmgr or the command-line TASKKILL). On the command line I get something like this (taskmgr shows that explorer.exe is still running - I get the same PID every time I try to kill it, and no diagnostic message):
      C:\Windows\system32>TASKKILL /F /IM explorer.exe
      SUCCESS: The process "explorer.exe" with PID 6296 has been terminated.
      C:\Windows\system32>TASKKILL /F /IM explorer.exe
      SUCCESS: The process "explorer.exe" with PID 6296 has been terminated.
    If I try to start another explorer.exe process at this point, I get the desktop icons and taskbar back, but I cannot open any Explorer window. After a few minutes explorer.exe finally dies and I am able to rerun it without rebooting. The file that I moved has two copies - one local and one on the external drive (the original file wasn't deleted after the move); both copies seem to contain the same data (same length and CRC). If this happens during a move of multiple files, only some of the files are moved and one of them has two copies (both locally and on the external drive). What can I do to fix these Explorer hangs?
    Added: The same problem exists when copying files; it hangs up between large files. A similar problem occurred when I tried to use TotalCommander (x64): copying paused at 80% of one of the files, and TC didn't hang up, but clicking cancel in the copying dialog box had no effect. During this pause I can't kill TotalCmd.exe, just like explorer.exe.
    Added (2): The problem seems to disappear when I use 32-bit applications (like TotalCommander (x86)), but I need to do more testing to be sure of this.
    Added (3): There are several errors in the event log - source: disk, id: 11, qualifiers: 49156, task: 0, level: 2, keywords: 0x80000000000000. (This may be important, and I forgot to mention it): the main disk is encrypted via TrueCrypt (boot-in password).

    Read the article

  • Linux Has Become Very Slow Dealing With Large Data

    - by Kohjah Breese
    Last year I bought a computer for around $1,800, so it is relatively high-end. When I first got it I was particularly pleased at how quickly it dealt with large MySQL queries, imports and exports. But somewhere along the way something has gone wrong and I am not sure how to diagnose the problem. Any job that involves processing large amounts of data, e.g. gzipping files of c. 1 GB+, UPDATEs on large MySQL tables, etc., has become very slow. I just performed an intensive ALTER statement on a 240,000,000-row table on a remote server, which has a lower spec. This took about 10 minutes. However, performing the same query on a 167,000,000-row table on my computer went fine until it hit 860 MB; now it is only writing about 1 MB every 15 seconds. Does anyone have any advice on debugging what the issue is? I am using Linux Mint (based on Ubuntu 12.04). The home partition is encrypted, which really slows down gzip. I have noticed the swap is barely used, but I am not sure if that is because there is more than enough RAM. The filesystem is ext4. The MySQL server is on a separate hard drive, but it was fine when I first installed it. Other than the above issues, there are no other problems with it. I am going to install a fresh Ubuntu on the 4th hard drive to see if that makes any difference.

    Read the article

  • Database design for very large amount of data

    - by Hossein
    Hi, I am working on a project involving a large amount of data from the delicious website. The available data files contain "Date,UserId,Url,Tags" (one record per bookmark). I normalized my database to 3NF, and because of the nature of the queries that we wanted to use in combination, I came down to 6 tables... The design looks fine; however, now that a large amount of data is in the database, most of the queries need to join at least 2 tables to get the answer, sometimes 3 or 4. At first we didn't have any performance issues, because for testing purposes we hadn't added too much data to the database. Now that we have a lot of data, simply joining extremely large tables takes a lot of time, and for our project, which has to be real-time, that is a disaster. I was wondering how big companies solve these issues. It looks like normalizing tables just adds complexity, but how do big companies handle large amounts of data in their databases - don't they normalize? Thanks

    Read the article

  • Ubuntu Deluge shows errors when downloading large BitTorrent files and keeps erroring out after trying to resume

    - by MikeN
    Ubuntu Deluge shows errors when downloading large BitTorrent files and keeps erroring out after trying to resume. The error in the details pane shows: "Invalid argument". This happens for many large torrents that have been running for several days (trying to download). I try to "resume" and "force recheck", but it never works. Smaller torrents seem to work OK. What is causing these torrents to never complete? Is there a way to force Deluge to keep auto-resuming every few minutes after a failure instead of just giving up?

    Read the article
