Search Results

Search found 13928 results on 558 pages for 'large scale nat'.


  • Windows.Forms RichTextBox Control - Avoid inserting large data.

    - by SchlaWiener
    I have a Windows Form with a RichTextBox on it. The content of the RichTextBox is written to a database field that is limited to 64k of data. For my purpose that is way more than enough text to store. I have set the MaxLength property to avoid inserting more data than allowed: rtcControl.MaxLength = 65536. However, that only restricts the number of characters the user is allowed to type. With the formatting overhead of the Rtf I can still enter more text than I should be allowed to. It gets even worse if I insert a large image, which doesn't increase the TextLength at all but makes the Rtf length grow quite a lot. At the moment I check the length of the RichTextBox's Rtf property in the FormClosing event and display a message to the user if it is too large. However, that is just a workaround, because I want to disallow putting more data than allowed into the control in the first place (like in a TextBox, where nothing is inserted and you hear the default beep once you exceed MaxLength). Any ideas how to achieve this? I already tried using a custom control which extends the RichTextBox and shadows the Rtf property to intercept the insertion, but it doesn't seem to be executed when I add text. Even the TextChanged event does not fire when I type something into the control.
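
    One possible workaround (a hedged sketch, not from the question): cache the last Rtf that fit and roll back any change that pushes the Rtf over the limit in the TextChanged handler, assuming that event does fire (it normally does for a RichTextBox). `rtcControl` and the 65536 limit come from the question; everything else is illustrative.

    ```csharp
    // Sketch: roll back any edit that pushes the serialized Rtf past a byte limit.
    public partial class MyForm : System.Windows.Forms.Form
    {
        private const int MaxRtfLength = 65536;
        private string _lastValidRtf = @"{\rtf1}";
        private bool _reverting;

        private void rtcControl_TextChanged(object sender, System.EventArgs e)
        {
            if (_reverting) return;                       // ignore the change we cause ourselves

            if (rtcControl.Rtf.Length > MaxRtfLength)
            {
                _reverting = true;
                int caret = rtcControl.SelectionStart;
                rtcControl.Rtf = _lastValidRtf;           // roll back to the last state that fit
                rtcControl.SelectionStart = System.Math.Min(caret, rtcControl.TextLength);
                System.Media.SystemSounds.Beep.Play();    // mimic the TextBox MaxLength beep
                _reverting = false;
            }
            else
            {
                _lastValidRtf = rtcControl.Rtf;           // remember the last content that fit
            }
        }
    }
    ```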

    Read the article

  • Memorystream and Large Object Heap

    - by Flo
    I have to transfer large files between computers over unreliable connections using WCF. Because I want to be able to resume the transfer and I don't want my file size limited by WCF, I am chunking the files into 1 MB pieces. These chunks are transported as streams, which works quite nicely so far. My steps are: open a FileStream; read a chunk from the file into a byte[] and create a MemoryStream from it; transfer the chunk; go back to step 2 until the whole file is sent. My problem is in step 2. I assume that when I create a MemoryStream from a byte array, it will end up on the LOH and ultimately cause an OutOfMemoryException. I could not actually reproduce this error, so maybe my assumption is wrong. Now, I don't want to send the byte[] in the message, as WCF will tell me the array size is too big. I can change the maximum allowed array size and/or the size of my chunks, but I hope there is another solution. My actual question(s): Will my current solution create objects on the LOH, and will that cause me problems? Is there a better way to solve this? Btw.: on the receiving side I simply read smaller chunks from the arriving stream and write them directly into the file, so no large byte arrays are involved.
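
    For reference, one way to keep the chunks off the LOH entirely is to reuse a single buffer that stays below the ~85,000-byte LOH threshold. This is a sketch, not the poster's code; `sendChunk` is a hypothetical stand-in for whatever WCF operation actually ships the stream.

    ```csharp
    // Sketch: send a file in chunks while reusing one buffer small enough to stay
    // on the small object heap, so no per-chunk LOH allocation occurs.
    using System;
    using System.IO;

    static class ChunkedSender
    {
        const int BufferSize = 64 * 1024;               // 64 KB < 85,000 bytes => small object heap

        public static void SendFile(string path, Action<Stream> sendChunk)
        {
            var buffer = new byte[BufferSize];          // allocated once, reused for every chunk
            using (var file = File.OpenRead(path))
            {
                int read;
                while ((read = file.Read(buffer, 0, buffer.Length)) > 0)
                {
                    // Wrap only the bytes actually read; writable: false prevents resizing.
                    using (var chunk = new MemoryStream(buffer, 0, read, false))
                    {
                        sendChunk(chunk);
                    }
                }
            }
        }
    }
    ```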

    Read the article

  • Including/Organizing HTML in large javascript project

    - by Bill Zimmerman
    Hi, I've got a fairly large web app, with several mini applets on each page. These applets are almost always identical jQuery apps. I am looking for advice on how I should organize/include the smaller parts of these jQuery apps within my larger project. For example, each app has several independent tabs. If possible, I would like to store each of the tabs as a separate .html file because this makes development easier. My requirements are: 1) All of the HTML 'tabs' are loaded on the client's end when the page loads. I would like to avoid any delays from dynamically requesting the tab HTML. 2) If possible, I would like to minimize the raw data sent. For example, it would be preferable to send each tab once instead of sending each tab ten times if there are ten applets on that page. Questions: 1) What are my options for 'including' the HTML files / JavaScript code? 2) Any tips for keeping my development simple in this situation? Surely there has to be a better way than just editing one massive HTML file when working with large pages.

    Read the article

  • incremental way of counting quantiles for a large set of data

    - by Gacek
    I need to compute quantiles for a large set of data. Let's assume we can get the data only in portions (i.e. one row of a large matrix at a time). To compute the Q3 quantile one needs to get all the portions of the data, store them somewhere, then sort them and compute the quantile: List<double> allData = new List<double>(); foreach(var row in matrix) // this is only an example; in fact the portions of data are not rows of some matrix { allData.AddRange(row); } allData.Sort(); double p = 0.75*allData.Count; int idQ3 = (int)Math.Ceiling(p) - 1; double Q3 = allData[idQ3]; Now, I would like to find a way of computing this without storing the data in a separate variable. The best solution would be to compute some parameters or mid-results for the first row and then adjust them step by step for the next rows. Note: These datasets are really big (ca. 5000 elements in each row). The Q3 can be estimated, it doesn't have to be an exact value. I call the portions of data "rows", but they can have different lengths! Usually it varies not so much (+/- a few hundred samples) but it varies! This question is similar to this one: http://stackoverflow.com/questions/1058813/on-line-iterator-algorithms-for-estimating-statistical-median-mode-skewness but I need to compute quantiles. Also there are a few articles on this topic, e.g.: http://web.cs.wpi.edu/~hofri/medsel.pdf http://portal.acm.org/citation.cfm?id=347195&dl But before I try to implement these, I wanted to ask whether there are maybe any other, quicker ways of computing the 0.25/0.75 quantiles?
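
    One simple alternative to the algorithms in the linked papers (a sketch, not an exact method): keep a fixed-size reservoir sample and read the 0.25/0.75 quantiles off it. The estimate improves with reservoir size while memory stays bounded; all names and the default capacity are illustrative.

    ```csharp
    // Sketch: estimate a quantile incrementally with reservoir sampling.
    using System;
    using System.Collections.Generic;
    using System.Linq;

    class QuantileEstimator
    {
        private readonly int _capacity;
        private readonly List<double> _reservoir;
        private readonly Random _rng = new Random();
        private long _seen;

        public QuantileEstimator(int capacity = 10000)
        {
            _capacity = capacity;
            _reservoir = new List<double>(capacity);
        }

        public void AddRow(IEnumerable<double> row)
        {
            foreach (var x in row)
            {
                _seen++;
                if (_reservoir.Count < _capacity)
                    _reservoir.Add(x);                          // fill the reservoir first
                else
                {
                    long j = (long)(_rng.NextDouble() * _seen); // replace with decreasing probability
                    if (j < _capacity) _reservoir[(int)j] = x;
                }
            }
        }

        public double Estimate(double quantile)                 // e.g. 0.25 or 0.75
        {
            var sorted = _reservoir.OrderBy(v => v).ToList();
            int idx = (int)Math.Ceiling(quantile * sorted.Count) - 1;
            return sorted[Math.Max(idx, 0)];
        }
    }
    ```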

    Read the article

  • Finding ALL positions of a substring in a large string in C#

    - by Tommy
    Alright, so what I have is a large string I need to parse, and what I need is to find all the instances of extract"(me,i-have lots. of]punctuation and store their positions in a list. So if that piece of string appeared at the beginning and in the middle of the larger string, both of them would be found and their indexes would be added to the List; the List would then contain 0 and whatever the other index turns out to be. I've been playing around, and string.IndexOf does almost what I'm looking for, and I've written some code, but I can't seem to get it to work: List<int> inst = new List<int>(); int index = 0; while (index < source.LastIndexOf("extract\"(me,i-have lots. of]punctuation", 0) + 39) { int src = source.IndexOf("extract\"(me,i-have lots. of]punctuation", index); inst.Add(src); index = src + 40; } inst = the list, source = the large string. Any better ideas?
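
    The usual pattern (a sketch, not taken from the question) is to loop IndexOf until it returns -1 rather than testing against LastIndexOf:

    ```csharp
    // Sketch: collect every index at which `needle` occurs in `source`.
    using System;
    using System.Collections.Generic;

    static class SubstringSearch
    {
        public static List<int> AllIndexesOf(string source, string needle)
        {
            var result = new List<int>();
            if (string.IsNullOrEmpty(needle)) return result;

            int index = source.IndexOf(needle, StringComparison.Ordinal);
            while (index >= 0)
            {
                result.Add(index);
                // jump past this match to find the next, non-overlapping one
                index = source.IndexOf(needle, index + needle.Length, StringComparison.Ordinal);
            }
            return result;
        }
    }

    // Usage with the literal from the question:
    // var inst = SubstringSearch.AllIndexesOf(source, "extract\"(me,i-have lots. of]punctuation");
    ```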

    Read the article

  • Loading/Displaying a large amount of data on a webpage.

    - by jb
    I have a webpage which contains a table for displaying a large amount of data (on average from 2,000 to 10,000 rows). This page takes a long time to load/render, which is understandable. The problem is, while the page is loading, the PC's memory usage skyrockets (500 MB on my test system is in use by iexplore) and the whole PC grinds to a halt until it has finished, which can take a minute or two. IE hangs until it is complete, and switching to another running program is the same. I need to fix this, and ideally I want to accomplish two things: 1) Load individual parts of the page separately, so the page can render initially without the large data table. A loading div will be placed there until it is ready. 2) Don't use up so much memory or local resources while rendering, so users can at least use a different tab/application at the same time. How would I go about doing both or either of these? I'm an applications programmer by trade, so I am still a little fuzzy on the things I can do in a web environment. Cheers all.
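
    One way to attack both points is to page the table on the server and fetch pages on demand rather than rendering every row at once. A minimal C# sketch, assuming the rows come from an ordered LINQ query (all names and the page size are illustrative):

    ```csharp
    // Sketch: server-side paging so the browser only ever renders one page of rows.
    using System.Collections.Generic;
    using System.Linq;

    public static class TablePager
    {
        public static List<T> GetPage<T>(IOrderedQueryable<T> rows, int pageIndex, int pageSize = 200)
        {
            return rows.Skip(pageIndex * pageSize)   // translated to SQL by the LINQ provider
                       .Take(pageSize)
                       .ToList();
        }
    }
    // The page can request page 0 on load and fetch further pages via AJAX,
    // so a 10,000-row table never has to be rendered in one go.
    ```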

    Read the article

  • how to get large pictures from photo album via facebook graph api

    - by Nav
    I am currently using the following code to retrieve all the photos from a user profile: FB.api('/me/albums?fields=id,name', function(response) { //console.log(response.data.length); for (var i = 0; i < response.data.length; i++) { var album = response.data[i]; FB.api('/' + album.id + '/photos', function(photos) { if (photos && photos.data && photos.data.length) { for (var j = 0; j < photos.data.length; j++) { var photo = photos.data[j]; // photo.picture contain the link to picture var image = document.createElement('img'); image.src = photo.picture; document.body.appendChild(image); image.className = "border"; image.onclick = function() { //this.parentNode.removeChild(this); document.getElementById(info.id).src = this.src; document.getElementById(info.id).style.width = "220px"; document.getElementById(info.id).style.height = "126px"; }; } } }); } }); but the photos it returns are of poor quality and are basically thumbnails. How do I retrieve larger photos from the albums? Generally for the profile picture I use ?type=large, which returns a decent profile image, but type=large is not working for photos from photo albums. Also, is there a way to get the photos in zip format from Facebook once I specify the photo URL, as I want users to be able to download the photos?

    Read the article

  • PHP: problem rendering large images (error 321)

    - by JP19
    Hi... it's me again with a PHP problem :) The following is part of my PHP script which renders JPEG images. ... $tf=$requested_file; $image_type="jpeg"; header("Content-type: image/${image_type}"); $CMD="\$image=imagecreatefrom${image_type}('$tf'); image${image_type}(\$image);"; eval($CMD); exit; ... There is no syntax error, because the above code works fine for small images, but for large images it gives: Error 321 (net::ERR_INVALID_CHUNKED_ENCODING): Unknown error. in the browser. To be sure, I created two images using ImageMagick from the same source image - one resized to 10% of the original and the other to 90%. http://mostpopularsports.net/images/misc/ttt10.jpg works; http://mostpopularsports.net/images/misc/ttt90.jpg gives the error in the browser. There is a related question with a solution posted by the OP here: Error writing content through Apache, but I cannot understand how to make the fix. Can someone help me with it? I have looked at the headers in Chrome. For the first request, everything is fine. For the second request, the request headers are all garbled. Both images are JPEGs (as they were created by ImageMagick, but to be sure I checked): misc/ttt10.jpg: JPEG image data, JFIF standard 1.01 misc/ttt90.jpg: JPEG image data, JFIF standard 1.01 Finally, the way I fixed it was to remove the Transfer-Encoding: chunked header from the response. [This header was sent by Apache only when the data was large enough.] (I had an internal proxy, so I did it in the proxy script - otherwise one may need to do it in the Apache settings.) There were some good answers and I have selected the one that helped me solve the problem best. Thanks, JP

    Read the article

  • Large free block of english non-pronoun text

    - by Tom
    As part of teaching myself Python I've written a script which allows a user to play hangman. At the moment, the hangman word to be guessed is simply entered manually at the start of the script's code. I want the script to instead choose randomly from a large list of English words. This I know how to do - my problem is finding that list of words to work from in the first place. Does anyone know of a source on the net for, say, 1000 common English words that can be downloaded as a block of text or something similar that I can work with? (My initial thought was grabbing a chunk of a novel from Project Gutenberg [this project is only for my own amusement and won't be available anywhere else, so copyright etc. doesn't matter hugely to me, btw], but anything like that is likely to contain too many names or non-standard words that wouldn't be suitable for hangman. I need text that only has words legal for use in Scrabble, basically.) It's a slightly odd question for here I suppose, but actually I thought the answer might be of use not just to me but to anyone else working on a word game or similar that needs a large seed list of words to work from. Many thanks for any links or suggestions :)

    Read the article

  • Dealing with a large number of text strings

    - by Fadrian
    My project, when it is running, will collect a large number of text blocks (about 20K of them, and the largest I have seen is about 200K of them) in a short span of time and store them in a relational database. Each text block is relatively small and the average is about 15 short lines (about 300 characters). The current implementation is in C# (VS2008), .NET 3.5, and the backend DBMS is MS SQL Server 2005. Performance and storage are both important concerns of the project, but the priority is performance first, then storage. I am looking for answers to these: Should I compress the text before storing it in the DB, or let SQL Server worry about compacting the storage? Do you know what would be the best compression algorithm/library to use in this context that gives me the best performance? Currently I just use the standard GZip in the .NET framework. Do you know any best practices to deal with this? I welcome outside-the-box suggestions as long as they are implementable in the .NET framework (it is a big project and this requirement is only a small part of it). EDITED: I will keep adding to this to clarify points raised. I don't need text indexing or searching on these texts; I just need to be able to retrieve them at a later stage for display as a text block using the primary key. I have a working solution implemented as above and SQL Server has no issue at all handling it. This program will run quite often and needs to work with a large data context, so you can imagine the size will grow very rapidly, hence every optimization I can do will help.
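
    For the first question, a minimal sketch of compressing a block with the framework's GZipStream before writing it to a varbinary(max) column. Whether this actually beats letting SQL Server store the raw text is exactly the open question, and for ~300-character blocks the GZip header overhead may eat most of the savings, so measuring on real data matters.

    ```csharp
    // Sketch: GZip-compress a text block before storing it, and decompress on retrieval.
    // Uses only System.IO.Compression, available in .NET 3.5.
    using System.IO;
    using System.IO.Compression;
    using System.Text;

    static class TextBlockCompressor
    {
        public static byte[] Compress(string text)
        {
            var raw = Encoding.UTF8.GetBytes(text);
            using (var output = new MemoryStream())
            {
                using (var gzip = new GZipStream(output, CompressionMode.Compress))
                    gzip.Write(raw, 0, raw.Length);
                return output.ToArray();          // store this in the varbinary column
            }
        }

        public static string Decompress(byte[] stored)
        {
            using (var input = new MemoryStream(stored))
            using (var gzip = new GZipStream(input, CompressionMode.Decompress))
            using (var reader = new StreamReader(gzip, Encoding.UTF8))
                return reader.ReadToEnd();
        }
    }
    ```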

    Read the article

  • find the difference between two very large lists

    - by user157195
    I have two large lists (each could have a hundred million items); the source of each list can be either a database table or a flat file. Both lists are of comparable sizes and both are unsorted. I need to find the difference between them. So I have 3 scenarios: 1. List1 is a database table (assume each row simply has one item (key) that is a string), List2 is a large file. 2. Both lists are from 2 db tables. 3. Both lists are from two files. In case 2, I plan to use: select a.item from MyTable a where a.item not in (select b.item from MyTable b) This is clearly inefficient; is there a better way? Another approach is: I plan to sort each list and then walk down both of them to find the diff. If a list is from a file, I have to read it into a db table first, then use db sorting to output the list. Is the run-time complexity still O(n log n) for db sorting? Either approach is a pain and seems like it would be very slow when the lists involved have hundreds of millions of items. Any suggestions?
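
    For the sort-and-walk approach, a sketch of the merge pass itself, assuming both lists can be produced as sorted streams of distinct keys (e.g. via ORDER BY or an external sort); names are illustrative:

    ```csharp
    // Sketch: one merge pass over two *sorted* streams of keys, yielding the keys that
    // are in `left` but not in `right`. O(n) time, O(1) extra memory regardless of list size.
    using System.Collections.Generic;

    static class ListDiff
    {
        public static IEnumerable<string> SortedExcept(IEnumerable<string> left, IEnumerable<string> right)
        {
            using (var l = left.GetEnumerator())
            using (var r = right.GetEnumerator())
            {
                bool hasL = l.MoveNext(), hasR = r.MoveNext();
                while (hasL)
                {
                    int cmp = hasR ? string.CompareOrdinal(l.Current, r.Current) : -1;
                    if (cmp < 0) { yield return l.Current; hasL = l.MoveNext(); }     // only in left
                    else if (cmp == 0) { hasL = l.MoveNext(); hasR = r.MoveNext(); }  // in both - skip
                    else { hasR = r.MoveNext(); }                                     // only in right
                }
            }
        }
    }
    ```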

    Read the article

  • iproute2 rules and iptables NAT... what is the difference?

    - by Jakobud
    We have 2 different ISP connections. Our previous "IT guy" set up our firewall like so: When /etc/rc.local was executed on startup, it did a bunch of ip rule add and ip route add commands in order to route certain internal hosts to use certain ISP connections. Then at the end of /etc/rc.local, he executed our iptables firewall rules that were generated by Firewall Builder. These iptables rules have both Policy and NAT rules set up in them. What I don't understand is why he used iproute2 to specify rules and routes but also specified NAT rules in iptables. Why didn't he just do it all in one or the other instead of using them both? Could he have gotten rid of the iproute2 rules and routes and just put all those same rules into the iptables NAT settings?

    Read the article

  • How can I make OpenGL textures scale without becoming blurry?

    - by adorablepuppy
    I'm using OpenGL through LWJGL. I have a 16x16 textured quad rendering at 16x16. When I change its scale amount, the quad grows, then becomes blurrier as it gets larger. How can I make it scale without becoming blurry, like in Minecraft? Here is the code inside my RenderableEntity object: public void render(){ Color.white.bind(); this.spriteSheet.bind(); GL11.glBegin(GL11.GL_QUADS); GL11.glTexCoord2f(0,0); GL11.glVertex2f(this.x, this.y); GL11.glTexCoord2f(1,0); GL11.glVertex2f(getDrawingWidth(), this.y); GL11.glTexCoord2f(1,1); GL11.glVertex2f(getDrawingWidth(), getDrawingHeight()); GL11.glTexCoord2f(0,1); GL11.glVertex2f(this.x, getDrawingHeight()); GL11.glEnd(); } And here is code from my initGL method in my game class: GL11.glEnable(GL11.GL_TEXTURE_2D); GL11.glClearColor(0.46f,0.46f,0.90f,1.0f); GL11.glViewport(0,0,width,height); GL11.glOrtho(0,width,height,0,1,-1); And here is the code that does the actual drawing: public void start(){ initGL(800,600); init(); while(true){ GL11.glClear(GL11.GL_COLOR_BUFFER_BIT); for(int i=0;i<entities.size();i++){ ((RenderableEntity)entities.get(i)).render(); } Display.update(); Display.sync(100); if(Display.isCloseRequested()){ Display.destroy(); System.exit(0); } } }

    Read the article

  • Cannot upload files bigger than 8GB to Amazon S3 by multi-part upload due to broken pipe

    - by spencerho
    I implemented S3 multi-part upload, both the high-level and the low-level version, based on the sample code from http://docs.amazonwebservices.com/AmazonS3/latest/dev/index.html?HLuploadFileJava.html and http://docs.amazonwebservices.com/AmazonS3/latest/dev/index.html?llJavaUploadFile.html When I uploaded files of size less than 4 GB, the upload processes completed without any problem. When I uploaded a file of size 13 GB, the code started to show IO exceptions (broken pipe). After retries, it still failed. Here is the way to repeat the scenario: take the 1.1.7.1 release; create a new bucket in the US standard region; create a large EC2 instance as the client to upload the file; create a file of 13 GB in size on the EC2 instance; run the sample code from either the high-level or the low-level API S3 documentation page from the EC2 instance; test any one of the three part sizes: the default part size (5 MB), or set the part size to 100,000,000 or 200,000,000 bytes. So far the problem shows up consistently. I attached here a tcpdump file for you to compare. In there, the host on the S3 side kept resetting the socket.

    Read the article

  • Practical size limitations for RDBMS

    - by grenade
    I am working on a project that must store very large datasets and associated reference data. I have never come across a project that required tables quite this large. I have proved that at least one development environment cannot cope at the database tier with the processing required by the complex queries against views that the application layer generates (views with multiple inner and outer joins, grouping, summing and averaging against tables with 90 million rows). The RDBMS that I have tested against is DB2 on AIX. The dev environment that failed was loaded with 1/20th of the volume that will be processed in production. I am assured that the production hardware is superior to the dev and staging hardware but I just don't believe that it will cope with the sheer volume of data and complexity of queries. Before the dev environment failed, it was taking in excess of 5 minutes to return a small dataset (several hundred rows) that was produced by a complex query (many joins, lots of grouping, summing and averaging) against the large tables. My gut feeling is that the db architecture must change so that the aggregations currently provided by the views are performed as part of an off-peak batch process. Now for my question. I am assured by people who claim to have experience of this sort of thing (which I do not) that my fears are unfounded. Are they? Can a modern RDBMS (SQL Server 2008, Oracle, DB2) cope with the volume and complexity I have described (given an appropriate amount of hardware) or are we in the realm of technologies like Google's BigTable? I'm hoping for answers from folks who have actually had to work with this sort of volume at a non-theoretical level.

    Read the article

  • 2D colliding n-body simulation (fast Collision Detection for a large number of balls)

    - by osgx
    Hello, I want to write a program for simulating the motion of a high number (N = 1000 - 10^5 and more) of bodies (circles) on a 2D plane. All bodies have equal size and the only force between them is elastic collision. I want to get something like the animation shown, but on a larger scale, with more balls and a denser filling of the plane (not a gas model as here, but something like a boiling-water model). So I want a fast method of detecting whether ball number i has any other ball on its path within a distance of 2*radius + V*delta_t. I don't want to do a full collision search against all N balls for each ball i (that search would be O(N^2)). PS Sorry for the loop-animated GIF. Just press Esc to stop it. (Will not work in Chrome.)
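
    A common broad phase for this (a sketch, not from the question) is a uniform grid: bucket the balls into cells of size 2*radius + V*delta_t and test each ball only against its own and the eight neighbouring cells. All names are illustrative.

    ```csharp
    // Sketch: uniform-grid broad phase, roughly O(N) for evenly spread balls
    // instead of the full O(N^2) pairwise check.
    using System;
    using System.Collections.Generic;

    struct Ball { public double X, Y; }

    static class BroadPhase
    {
        public static List<(int, int)> CandidatePairs(Ball[] balls, double radius, double vmax, double dt)
        {
            double cell = 2 * radius + vmax * dt;          // a ball cannot cross more than one cell
            var grid = new Dictionary<(int, int), List<int>>();

            for (int i = 0; i < balls.Length; i++)
            {
                var key = ((int)Math.Floor(balls[i].X / cell), (int)Math.Floor(balls[i].Y / cell));
                if (!grid.TryGetValue(key, out var bucket)) grid[key] = bucket = new List<int>();
                bucket.Add(i);
            }

            var pairs = new List<(int, int)>();
            foreach (var kv in grid)
                foreach (int i in kv.Value)
                    for (int dx = -1; dx <= 1; dx++)
                        for (int dy = -1; dy <= 1; dy++)
                            if (grid.TryGetValue((kv.Key.Item1 + dx, kv.Key.Item2 + dy), out var other))
                                foreach (int j in other)
                                    if (j > i) pairs.Add((i, j)); // exact circle test happens later
            return pairs;
        }
    }
    ```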

    Read the article

  • gzip: stdout: File too large when running customized backup script

    - by Roland
    I've created a plain and simple backup script that only backs up certain files and folders. tar -zcf $DIRECTORY/var.www.tar.gz /var/www tar -zcf $DIRECTORY/development.tar.gz /development tar -zcf $DIRECTORY/home.tar.gz /home Now this script runs for about 30 minutes and then gives me the following error: gzip: stdout: File too large Are there any other solutions I can use to back up my files using shell scripting, or a way to solve this error? I'm grateful for any help.

    Read the article

  • How can I insert or remove bytes from the middle of a large file in .NET

    - by Eric
    Is it possible to efficiently insert or remove bytes from the middle of a large file, and if so, how? Or am I stuck rewriting the entire file after the point where the data was inserted or removed? [A lot of Bytes][Unwanted Bytes][A lot of Bytes] -> [A lot of Bytes][A lot of Bytes] or [A lot of Bytes][A lot of Bytes] -> [A lot of Bytes][New Inserted Bytes][A lot of Bytes]
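
    For context, a sketch of the in-place removal case using FileStream (illustrative, not from the question): on ordinary file systems there is no cheap mid-file insert/remove, so everything after the edit point does have to be rewritten, but not the whole file.

    ```csharp
    // Sketch: remove `count` bytes at `offset` by sliding the tail of the file left
    // in buffered blocks, then truncating. Only the data after `offset` is rewritten.
    using System.IO;

    static class FileSplicer
    {
        public static void RemoveBytes(string path, long offset, long count)
        {
            var buffer = new byte[64 * 1024];
            using (var fs = new FileStream(path, FileMode.Open, FileAccess.ReadWrite))
            {
                long readPos = offset + count;       // source: first byte after the unwanted range
                long writePos = offset;              // destination: where the unwanted range began
                int read;
                while (true)
                {
                    fs.Position = readPos;
                    read = fs.Read(buffer, 0, buffer.Length);
                    if (read == 0) break;
                    fs.Position = writePos;
                    fs.Write(buffer, 0, read);
                    readPos += read;
                    writePos += read;
                }
                fs.SetLength(fs.Length - count);     // truncate the duplicated tail
            }
        }
    }
    ```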

    Read the article

  • Uploading Large Files

    - by mohamed hadad
    Hello, I'm using a Windows service on my server to receive large files (1 GB) from desktop clients. When I use the TcpClient class it creates a stream to send the file, which exhausts my memory. What is a well-designed solution for this problem?
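
    One common design (a hedged sketch, not from the question) is to stream the upload straight to disk on the receiving side through a small reusable buffer, so the whole file is never held in memory. The framing protocol (how the client announces the file name and length) and all names here are assumptions.

    ```csharp
    // Sketch: copy the network stream to disk through one small reusable buffer.
    using System;
    using System.IO;
    using System.Net.Sockets;

    static class FileReceiver
    {
        public static void ReceiveFile(TcpClient client, string targetPath, long expectedLength)
        {
            var buffer = new byte[64 * 1024];
            using (NetworkStream network = client.GetStream())
            using (FileStream file = File.Create(targetPath))
            {
                long remaining = expectedLength;
                while (remaining > 0)
                {
                    int toRead = (int)Math.Min(buffer.Length, remaining);
                    int read = network.Read(buffer, 0, toRead);
                    if (read == 0) break;            // client closed the connection early
                    file.Write(buffer, 0, read);     // straight to disk, never the whole file in RAM
                    remaining -= read;
                }
            }
        }
    }
    ```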

    Read the article

  • Django - testing using large tables of static data

    - by Michael B
    I am using "manage.py test" along with a JSON fixture I created using using 'dumpdata' My problem is that several of the tables in the fixture are very large (for example one containing the names of all cities in the US) which makes running a test incredibly slow. Seeing as several of these tables are never modified by the program (eg - the city names will never need to be modified), it doesn't make much sense to create and tear down these tables for every test run. Is there a better way to be testing this code using this kind of data?

    Read the article

  • Mercurial hook to disallow committing large binary files

    - by hekevintran
    I want to have a Mercurial hook that will run before committing a transaction that will abort the transaction if a binary file being committed is greater than 1 megabyte. I found the following code which works fine except for one problem. If my changeset involves removing a file, this hook will throw an exception. The hook (I'm using pretxncommit = python:checksize.newbinsize): from mercurial import context, util from mercurial.i18n import _ import mercurial.node as dpynode '''hooks to forbid adding binary file over a given size Ensure the PYTHONPATH is pointing where hg_checksize.py is and setup your repo .hg/hgrc like this: [hooks] pretxncommit = python:checksize.newbinsize pretxnchangegroup = python:checksize.newbinsize preoutgoing = python:checksize.nopull [limits] maxnewbinsize = 10240 ''' def newbinsize(ui, repo, node=None, **kwargs): '''forbid to add binary files over a given size''' forbid = False # default limit is 10 MB limit = int(ui.config('limits', 'maxnewbinsize', 10000000)) tip = context.changectx(repo, 'tip').rev() ctx = context.changectx(repo, node) for rev in range(ctx.rev(), tip+1): ctx = context.changectx(repo, rev) print ctx.files() for f in ctx.files(): fctx = ctx.filectx(f) filecontent = fctx.data() # check only for new files if not fctx.parents(): if len(filecontent) > limit and util.binary(filecontent): msg = 'new binary file %s of %s is too large: %ld > %ld\n' hname = dpynode.short(ctx.node()) ui.write(_(msg) % (f, hname, len(filecontent), limit)) forbid = True return forbid The exception: $ hg commit -m 'commit message' error: pretxncommit hook raised an exception: apps/helpers/templatetags/include_extends.py@bced6272d8f4: not found in manifest transaction abort! rollback completed abort: apps/helpers/templatetags/include_extends.py@bced6272d8f4: not found in manifest! I'm not familiar with writing Mercurial hooks, so I'm pretty confused about what's going on. Why does the hook care that a file was removed if hg already knows about it? Is there a way to fix this hook so that it works all the time? Update (solved): I modified the hook to filter out files that were removed in the changeset. def newbinsize(ui, repo, node=None, **kwargs): '''forbid to add binary files over a given size''' forbid = False # default limit is 10 MB limit = int(ui.config('limits', 'maxnewbinsize', 10000000)) ctx = repo[node] for rev in xrange(ctx.rev(), len(repo)): ctx = context.changectx(repo, rev) # do not check the size of files that have been removed # files that have been removed do not have filecontexts # to test for whether a file was removed, test for the existence of a filecontext filecontexts = list(ctx) def file_was_removed(f): """Returns True if the file was removed""" if f not in filecontexts: return True else: return False for f in itertools.ifilterfalse(file_was_removed, ctx.files()): fctx = ctx.filectx(f) filecontent = fctx.data() # check only for new files if not fctx.parents(): if len(filecontent) > limit and util.binary(filecontent): msg = 'new binary file %s of %s is too large: %ld > %ld\n' hname = dpynode.short(ctx.node()) ui.write(_(msg) % (f, hname, len(filecontent), limit)) forbid = True return forbid

    Read the article

  • remote desktop client with panning of large desktops?

    - by mikewse
    When using the standard Windows remote desktop client to connect to a large remote desktop, the result is usually that the remote desktop is resized to a smaller size matching the local display. Are there any RDP clients that instead leave the remote desktop at its original size and pan this larger area when the mouse reaches the border edges of the local display? Thanks Mike

    Read the article

  • WCF service to send large binaries to server

    - by Ali Shafai
    I need to upload large (100 MB max) binaries to a server using WCF. I followed the instructions from this: http://www.c-sharpcorner.com/UploadFile/dhananjaycoder/fileuploadsilverlightwcf07142009104020AM/fileuploadsilverlightwcf.aspx It works for anything less than 50K; going above that I get 415 errors. Any ideas?
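
    A frequent culprit with 415 errors on larger payloads is a binding or quota mismatch between client and service. A hedged sketch of a code-defined binding with streamed transfer and a larger message quota (values are illustrative and not taken from the linked article; both sides must use the same binding settings):

    ```csharp
    // Sketch: a WCF binding configured in code for large streamed uploads.
    using System;
    using System.ServiceModel;

    static class LargeUploadBinding
    {
        public static BasicHttpBinding Create()
        {
            return new BasicHttpBinding
            {
                TransferMode = TransferMode.Streamed,            // stream instead of buffering
                MaxReceivedMessageSize = 100L * 1024 * 1024,     // allow up to ~100 MB
                SendTimeout = TimeSpan.FromMinutes(10),
                ReceiveTimeout = TimeSpan.FromMinutes(10)
            };
        }
    }
    ```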

    Read the article
