Search Results

Search found 13928 results on 558 pages for 'large scale nat'.


  • Split large text string into variable length strings without breaking words and keeping linebreaks a

    - by Frank
    I am trying to break a large string of text into several smaller strings, where each smaller string has a different maximum length. For example, given "The quick brown fox jumped over the red fence. The blue dog dug under the fence.", I would like code that splits the text into lines without breaking words, where the first line has a max of 5 characters, the second a max of 11, and the rest a max of 20, resulting in:
    Line 1: The
    Line 2: quick brown
    Line 3: fox jumped over the
    Line 4: red fence.
    Line 5: The blue dog
    Line 6: dug under the fence.
    Is this possible in C# or MSSQL?
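
    A minimal C# sketch of one way to do this (the method name and the greedy word-packing approach are my own, not from the question; it splits on spaces only, does not preserve existing line breaks, and may break lines differently from the example above, e.g. it ignores sentence boundaries):

        using System;
        using System.Collections.Generic;

        static class TextSplitter
        {
            // Greedily packs whole words into lines, using maxLengths[i] for line i
            // and the last entry of maxLengths for every line after that.
            public static List<string> SplitIntoLines(string text, int[] maxLengths)
            {
                var lines = new List<string>();
                var words = text.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);
                var current = "";
                foreach (var word in words)
                {
                    int max = maxLengths[Math.Min(lines.Count, maxLengths.Length - 1)];
                    var candidate = current.Length == 0 ? word : current + " " + word;
                    if (candidate.Length <= max)
                    {
                        current = candidate;
                    }
                    else
                    {
                        if (current.Length > 0) lines.Add(current);
                        current = word; // a single word longer than max still gets its own line
                    }
                }
                if (current.Length > 0) lines.Add(current);
                return lines;
            }
        }

    Calling SplitIntoLines(text, new[] { 5, 11, 20 }) would then yield the first line at most 5 characters long, the second at most 11, and every later line at most 20.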


  • Making large toolbars like the iPod app

    - by andybee
    I am trying to create a toolbar programmatically (rather than via IB) very similar to the toolbar featured in the iPod app on the iPhone. Currently I've been experimenting with the UIToolbar class, but I'm not sure how (and if?) you can make the toolbar buttons centrally aligned and large like those in the iPod app. Additionally, regardless of size, the gradient/reflection artwork never correctly respects the size and is stuck as if the object is the default smaller size. If this cannot be done with a standard UIToolbar, I guess I need to create my own view. In this case, can the reflection/gradient be created programmatically, or will it require some clever alpha transparency Photoshopped artwork?


  • Request header is too large

    - by stck777
    I found several IllegalStateException exceptions in the logs:
    [#|2009-01-28T14:10:16.050+0100|SEVERE|sun-appserver2.1|javax.enterprise.system.container.web|_ThreadID=26;_ThreadName=httpSSLWorkerThread-80-53;_RequestID=871b8812-7bc5-4ed7-85f1-ea48f760b51e;|WEB0777: Unblocking keep-alive exception
    java.lang.IllegalStateException: PWC4662: Request header is too large
        at org.apache.coyote.http11.InternalInputBuffer.fill(InternalInputBuffer.java:740)
        at org.apache.coyote.http11.InternalInputBuffer.parseHeader(InternalInputBuffer.java:657)
        at org.apache.coyote.http11.InternalInputBuffer.parseHeaders(InternalInputBuffer.java:543)
        at com.sun.enterprise.web.connector.grizzly.DefaultProcessorTask.parseRequest(DefaultProcessorTask.java:712)
        at com.sun.enterprise.web.connector.grizzly.DefaultProcessorTask.doProcess(DefaultProcessorTask.java:577)
        at com.sun.enterprise.web.connector.grizzly.DefaultProcessorTask.process(DefaultProcessorTask.java:831)
        at com.sun.enterprise.web.connector.grizzly.DefaultReadTask.executeProcessorTask(DefaultReadTask.java:341)
        at com.sun.enterprise.web.connector.grizzly.DefaultReadTask.doTask(DefaultReadTask.java:263)
        at com.sun.enterprise.web.connector.grizzly.DefaultReadTask.doTask(DefaultReadTask.java:214)
        at com.sun.enterprise.web.portunif.PortUnificationPipeline$PUTask.doTask(PortUnificationPipeline.java:380)
        at com.sun.enterprise.web.connector.grizzly.TaskBase.run(TaskBase.java:265)
        at com.sun.enterprise.web.connector.grizzly.ssl.SSLWorkerThread.run(SSLWorkerThread.java:106)
    |#]
    Does anybody know what configuration changes would fix this?


  • How can a large number of developers write software together without either a cumbersome process or

    - by Mark Robinson
    I work at a company with hundreds of people writing software for essentially the same product. The quality of the software has to be high because so many people depend on it (not least the developers themselves). Because of this, every major issue has resulted in a new check, either automated or manual. As a result, the process of delivering software is becoming ever more burdensome, which requires more developers, which... well, you can see it is a vicious circle. We now have a problem releasing software quickly: the lead time even to change one line of code for a very serious issue is at least one day. What techniques do you use to speed up the delivery of software in a large organization, while still maintaining software quality?


  • OutOfMemoryException, stack size is huge, large number of threads

    - by Captain Comic
    Hello, I was profiling my .NET Windows service while investigating an OutOfMemoryException and discovered that the total stack size is huge and keeps growing, because the number of threads keeps growing. Each thread gets 1024 KB on a Windows x64 machine, so when my app has 754 threads the stack size is around 772 MB. The problem is that I don't know where these threads come from. Initially my app has a very limited number of threads, and they keep growing over time. I have two suspicions: either these threads are created by WCF or by database connections. My application uses both WCF and DataSets. Also, when I profile the app in ANTS and do a trace, I can see a large number of System.ServiceModel.Channels.ClientReliableDuplexSessionChannel objects, and this number increases with time; I can see thousands of these objects created. So what I want to know is who is creating the threads (and what tools or profilers can discover this), and whether it is WCF that is creating them.
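
    As a first diagnostic step (my own suggestion, not from the question), you could log the OS-level thread count of the process over time and correlate the growth with WCF calls or database activity; a minimal sketch:

        using System;
        using System.Diagnostics;
        using System.Threading;

        static class ThreadDiagnostics
        {
            // Run this on one dedicated background thread inside the service: it logs
            // the OS-level thread count so growth can be correlated with activity.
            public static void LogThreadCounts()
            {
                var process = Process.GetCurrentProcess();
                while (true)
                {
                    process.Refresh();
                    Console.WriteLine("{0:u}  threads={1}  private bytes={2:N0}",
                        DateTime.UtcNow, process.Threads.Count, process.PrivateMemorySize64);
                    Thread.Sleep(TimeSpan.FromSeconds(30));
                }
            }
        }

    A debugger or a memory profiler would still be needed to see the managed stacks of the extra threads; the log only tells you when they appear.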


  • mmap() for large file I/O?

    - by Boatzart
    I'm creating a utility in C++ to be run on Linux which converts videos to a proprietary format. The video frames are very large (up to 16 megapixels), and we need to be able to seek directly to exact frame numbers, so our file format uses libz to compress each frame individually and appends the compressed data onto a file. Once all frames have been written, a journal that includes metadata for each frame (including file offsets and sizes) is written to the end of the file. I'm currently using ifstream and ofstream to do the file I/O, but I am looking to optimize as much as possible. I've heard that mmap() can increase performance in a lot of cases, and I'm wondering if mine is one of them. Our files will be in the tens to hundreds of gigabytes, and although writing will always be done sequentially, random access reads should be done in constant time. Any thoughts as to whether I should investigate this further, and if so, does anyone have any tips for things to look out for? Thanks!


  • WCF large message exception

    - by Saurabh Lalwani
    Hi, I have a web service that returns data to a desktop application. The problem I am having is that when the web service returns a small volume of data everything works fine, but when the volume of data is large it throws the following exception:
    System.Net.WebException: The underlying connection was closed: An unexpected error occurred on a receive.
    Can it be because it takes a good 4-5 seconds to fetch all the records from the database and the session times out or something? Also, when I am debugging the web service, I see that this particular method is called twice and returns the correct value both times, but for some reason the desktop application still throws the aforementioned exception. I found similar posts on Stack Overflow, but they did not solve my problem. Can anybody please tell me what's going on here? Thanks!
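
    If the failure really is size-related, the usual suspects are the binding's message-size and reader-quota limits on both client and server (an assumption on my part; the question does not show its binding or whether the service is WCF-hosted). A sketch of raising them in client code:

        using System;
        using System.ServiceModel;

        class ClientSetup
        {
            static void Main()
            {
                // Raise the limits that most commonly cause large responses to be dropped.
                var binding = new BasicHttpBinding
                {
                    MaxReceivedMessageSize = 50 * 1024 * 1024, // 50 MB
                    SendTimeout = TimeSpan.FromMinutes(2),
                    ReceiveTimeout = TimeSpan.FromMinutes(2)
                };
                binding.ReaderQuotas.MaxArrayLength = 50 * 1024 * 1024;
                binding.ReaderQuotas.MaxStringContentLength = 50 * 1024 * 1024;

                var address = new EndpointAddress("http://example.com/MyService.svc"); // placeholder URL
                // var client = new MyServiceClient(binding, address); // hypothetical generated proxy
            }
        }

    The corresponding limits usually also have to be raised in the service's configuration, otherwise the server closes the connection and the client only sees the generic WebException above.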


  • How to quickly analyse large MDB file

    - by Craig Johnston
    I need to know how to quickly analyse a large MDB file (about 1 GB) to see which tables are causing it to be so big. Is there something that will easily show me a breakdown of how much data each table is responsible for? I need to know whether it is just this one customer who is using the application differently, or whether there is genuinely a lot of data in the MDB. This MDB is currently causing the VB app to crash, and I need to know why it is so big so that I can think about moving some of the data into another 'archival' MDB. Migrating to SQL Server is not an option, unless the use of linked tables from an MDB is a realistic option.
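
    Lacking a built-in per-table size report, one rough approach (my own sketch, assuming the Jet OLE DB provider is installed and the path is hypothetical) is to open the MDB from C# and count the rows in each user table; row counts are not bytes, but they usually point at the offending tables quickly:

        using System;
        using System.Data;
        using System.Data.OleDb;

        class MdbTableSizes
        {
            static void Main()
            {
                const string connectionString =
                    @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\data\customer.mdb";

                using (var connection = new OleDbConnection(connectionString))
                {
                    connection.Open();
                    DataTable tables = connection.GetSchema("Tables");
                    foreach (DataRow row in tables.Rows)
                    {
                        // Skip system tables and views; only count plain user tables.
                        if ((string)row["TABLE_TYPE"] != "TABLE") continue;
                        var name = (string)row["TABLE_NAME"];
                        using (var cmd = new OleDbCommand("SELECT COUNT(*) FROM [" + name + "]", connection))
                        {
                            Console.WriteLine("{0}: {1} rows", name, cmd.ExecuteScalar());
                        }
                    }
                }
            }
        }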


  • Effective methods for reading and writing large files in C

    - by Bertholt Stutley Johnson
    I'm writing an application that deals with very large user-generated input files. The program will copy about 95 percent of the file, effectively duplicating it and switching a few words and values in the copy, and then appending the copy (in chunks) to the original file, such that each block (consisting of between 10 and 50 lines) in the original is followed by the copied and modified block, and then the next original block, and so on. The user-generated input conforms to a certain format, and it is highly unlikely that any line in the original file is longer than 100 characters. Which would be the better approach?
    a) Use one file pointer, keep variables that hold the current read and write positions, and seek the file pointer back and forth to read and write; or
    b) Use multiple file pointers, one for reading and one for writing.
    I am mostly concerned with the efficiency of the program, as the input files will reach up to 25,000 lines, each about 50 characters long. Thanks!


  • Large ResultSet on postgresql query

    - by tuler
    I'm running a query against a table in a PostgreSQL database. The database is on a remote machine. The table has around 30 sub-tables using PostgreSQL's partitioning capability. The query will return a large result set, around 1.8 million rows. In my code I use Spring JDBC support, method JdbcTemplate.query, but my RowCallbackHandler is not being called. My best guess is that the PostgreSQL JDBC driver (I use version 8.3-603.jdbc4) is accumulating the result in memory before calling my code. I thought the fetchSize configuration could control this, but I tried it and nothing changed. I did this as the PostgreSQL manual recommends. This query worked fine when I used Oracle XE, but I'm trying to migrate to PostgreSQL because of the partitioning feature, which is not available in Oracle XE. My environment: PostgreSQL 8.3, Windows Server 2008 Enterprise 64-bit, JRE 1.6 64-bit, Spring 2.5.6, PostgreSQL JDBC Driver 8.3-603.


  • Selecting an item from a very large list

    - by Ben Fulton
    Suppose I have a list of a couple of thousand organizations and a user needs to be able to select one of them. The list is too large to populate in a dropdown at page load, and the user often knows what they want but it's not the first part of the organization name. That is, they know "Collections" but not that the precise name of the organization is "Department of Collections". So the user will need/want to type in some information. It's easy enough to use an autocompleting textbox of some kind, but I don't want to allow the user to type in random text - they have to choose one of the organizations explicitly. What's the best solution?
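
    One common pattern (my own sketch, not from the question; the class and method names are hypothetical) is to back the autocomplete with a server-side "contains" search and then validate the final submission against the same list, so free-typed text that matches no organization is rejected:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class OrganizationPicker
        {
            private readonly List<string> organizations;

            public OrganizationPicker(IEnumerable<string> organizations)
            {
                this.organizations = organizations.ToList();
            }

            // Suggestions for the autocomplete box: match anywhere in the name,
            // so "Collections" finds "Department of Collections".
            public IEnumerable<string> Suggest(string term, int maxResults = 20)
            {
                return organizations
                    .Where(o => o.IndexOf(term, StringComparison.OrdinalIgnoreCase) >= 0)
                    .Take(maxResults);
            }

            // Server-side validation: the submitted value must be exactly one of the known names.
            public bool IsValidSelection(string value)
            {
                return organizations.Contains(value, StringComparer.OrdinalIgnoreCase);
            }
        }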


  • Large number of Google Map Markers and IE6?

    - by Sivakanesh
    I'm working on an application that generates a large number of Google Maps markers (2000 - 7000) via JSON. I'm also using MarkerCluster. It works quickly in Chrome and Firefox, but IE6 takes a few minutes and just crashes the first time I try to zoom in. I'm not doing any more than adding the markers to a map using jQuery and the GMap API. So I looked at the following URL of the regular Google Maps site: http://maps.google.co.uk/maps?f=q&source=s_q&hl=en&q=hotel&sll=53.182996,-2.581787&sspn=1.494529,4.927368&ie=UTF8&split=1&rq=1&ev=p&hq=hotel&hnear=&ll=53.123702,-2.730103&spn=1.496594,4.927368&t=h&z=8 It shows a lot of tiny markers (~1000) and works fine in IE6. Do you have any idea why this works while the markers added via the API struggle? Thanks


  • IE Problem: Jagged Scrolling and Dragging Inside Large Viewport

    - by br4inwash3r
    My site is a single-page website with a very large "canvas" size, and to navigate around the site I'm using the jQuery scrollTo and Dragscrollable plugins. In IE 7 & 8 the scrolling/dragging movement is very jagged. At first I thought my script or some other plugin was causing this, but after I stripped everything down it's still the same. I've tried a few tips I've found around here, but none is really working for me. I know I should be asking the plugin developers; I did. I just thought maybe you guys have some other solution for this issue. Here's the URL to the demo site: http://satuhati.com/bare/template/ I'd really appreciate any help you can give. Thanks.


  • C#: Efficiently search a large string for occurrences of other strings

    - by Jon
    Hi, I'm using C# to continuously search for multiple string "keywords" within large strings, which are 4 KB or larger. This code is constantly looping, and sleeps aren't cutting down CPU usage enough while maintaining a reasonable speed. The bottleneck is the keyword matching method. I've found a few possibilities, and all of them give similar efficiency.
    1) http://tomasp.net/articles/ahocorasick.aspx - I do not have enough keywords for this to be the most efficient algorithm.
    2) Regex, using an instance-level, compiled regex - provides more functionality than I require, and not quite enough efficiency.
    3) String.IndexOf - I would need to do a "smart" version of this for it to provide enough efficiency. Looping through each keyword and calling IndexOf doesn't cut it.
    Does anyone know of any algorithms or methods that I can use to attain my goal?
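
    A sketch of the "smart IndexOf" idea the question alludes to (my own interpretation, not a known library routine): index the keywords by first character, scan the haystack once, and only compare the keywords whose first character matches the current position.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class KeywordScanner
        {
            // Returns the keywords that occur in text, scanning text once and only
            // testing keywords whose first character matches the current position.
            public static HashSet<string> FindAll(string text, IEnumerable<string> keywords)
            {
                var byFirstChar = keywords
                    .Where(k => k.Length > 0)
                    .GroupBy(k => k[0])
                    .ToDictionary(g => g.Key, g => g.ToArray());

                var found = new HashSet<string>();
                for (int i = 0; i < text.Length; i++)
                {
                    string[] candidates;
                    if (!byFirstChar.TryGetValue(text[i], out candidates)) continue;
                    foreach (var keyword in candidates)
                    {
                        if (found.Contains(keyword)) continue;
                        if (i + keyword.Length <= text.Length &&
                            string.CompareOrdinal(text, i, keyword, 0, keyword.Length) == 0)
                        {
                            found.Add(keyword);
                        }
                    }
                }
                return found;
            }
        }

    With only a handful of keywords per first character, this stays close to a single pass over the 4 KB string; with many keywords sharing prefixes, an Aho-Corasick automaton like the one linked above becomes the better fit.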


  • Faster forking of large processes on Linux ?

    - by timday
    What's the fastest, best way on modern Linux of achieving the same effect as a fork-execve combo from a large process? My problem is that the forking process is ~500 MB, and a simple benchmarking test achieves only about 50 forks/s from that process (cf. ~1600 forks/s from a minimally sized process), which is too slow for the intended application. Some googling turns up vfork as having been invented as the solution to this problem... but also warnings not to use it. Modern Linux seems to have acquired the related clone and posix_spawn calls; are these likely to help? What's the modern replacement for vfork? I'm using 64-bit Debian Lenny on an i7 (the project could move to Squeeze if posix_spawn would help).


  • Speeding up inner joins between a large table and a small table

    - by Zaid
    This may be a silly question, but it may shed some light on how joins work internally. Let's say I have a large table L and a small table S (100K rows vs. 100 rows). Would there be any difference in terms of speed between the following two options?
    Option 1: SELECT * FROM L INNER JOIN S ON L.id = S.id;
    Option 2: SELECT * FROM S INNER JOIN L ON L.id = S.id;
    Notice that the only difference is the order in which the tables are joined. I realize performance may vary between different SQL implementations. If so, how would MySQL compare to Access?


  • Write large PDFs with Java sequentially

    - by Benjamin Muschko
    I am looking for a Java library that lets you write large PDFs sequentially with a minimum amount of memory. Most of the libraries I have looked at have to build up the document in memory before you can actually write it. The problem I have to deal with is OutOfMemoryErrors. It would be great if I could flush the writer programmatically whenever needed, e.g. after each page. Does anyone have any recommendations? I need something with a license along the lines of the LGPL (so not the GPL or the Affero GPL that iText uses).


  • Problem with large Canvas in Silverlight

    - by Fury
    Hi, I am developing (using Silverlight 3) an application that creates a kind of timeline and places objects on it. For this purpose I need a really large Canvas (up to 2,000,000 pixels wide) with long lines on it, but whenever I create a Canvas even 40,000 pixels wide it behaves very strangely, randomly disappearing. I have found a post describing exactly the same problem on the Silverlight forums and another one here on Stack Overflow. It seems this has been a known problem since Silverlight 2, but I can't find a good workaround. Does anybody know of such a workaround, or can anybody check whether it is still an issue in Silverlight 4? Thanks in advance.


  • How do you pronounce large hex numbers?

    - by warrenm
    This question might be subjective, but I'm hoping there's some consensus that I just don't know about. Short hex numbers are relatively easy to spell out (e.g., 0xC4A might be "cee-four-ay"). Hex numbers ending with a multiple of three zeros are likewise pretty easy (e.g., 0xC000 might be "cee-thousand"). But is there a concise way to pronounce 0xFFFF0000 or 0xCA000000? Magic numbers like 0xDEADBEEF are popular for their pronounceability, but I'm mostly asking about large-ish, round numbers that seem like they should have a more concise pronunciation.


  • Technical reasons for not having large background images in websites

    - by kees-kist
    Most websites tend to have either a solid color as the background, or a small image that is repeated. Why aren't more websites using a large image (such as a photo) as a background? I can think of the following reasons:
    1) Problems with different screen resolutions. Too small and gaps start to appear on the left and/or right side at higher resolutions; too big and lower resolutions only show part of the image.
    2) Bandwidth, although this is unlikely to be a problem for most websites.
    Are there any other reasons why such backgrounds are not used more often?


  • Upload large files (1 GB) - ASP.NET

    - by Ramya Raj
    I need to upload large files of at least 1 GB. I am using ASP.NET, C#, and IIS 5.1 as my development platform. I am calling HIF.PostedFile.InputStream.Read(fileBytes, 0, HIF.PostedFile.ContentLength) and then File.WriteAllBytes(filePath, fileByteArray), but execution never reaches that point; it throws a System.OutOfMemoryException. Currently I have set httpRuntime to:
    executionTimeout="999999" maxRequestLength="2097151" (that's 2 GB!) useFullyQualifiedRedirectUrl="true" minFreeThreads="8" minLocalRequestFreeThreads="4" appRequestQueueLimit="5000" enableVersionHeader="true" requestLengthDiskThreshold="8192"
    I have also set maxAllowedContentLength="2097151" (I guess that's only for IIS 7), and I have changed the IIS connection timeout to 999,999 seconds too. I am unable to upload files of even 4578KB(Ajaz-Uploader.zip).
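
    One way to avoid buffering the whole upload in memory (my own sketch; I'm assuming HIF is the question's HtmlInputFile control) is to copy the posted stream to disk in small chunks instead of allocating one byte array for the entire file:

        using System.IO;
        using System.Web.UI.HtmlControls;

        public static class UploadHelper
        {
            // Streams the posted file to disk in 64 KB chunks so memory use stays
            // roughly constant regardless of the file size.
            public static void SaveChunked(HtmlInputFile hif, string filePath)
            {
                var buffer = new byte[64 * 1024];

                using (Stream input = hif.PostedFile.InputStream)
                using (var output = new FileStream(filePath, FileMode.Create, FileAccess.Write))
                {
                    int read;
                    while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        output.Write(buffer, 0, read);
                    }
                }
            }
        }

    This addresses the OutOfMemoryException from the single large allocation; the request-length and timeout limits in web.config and IIS still apply on top of it.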


  • C# Importing Large Volume of Data from CSV to Database

    - by guazz
    What's the most efficient method to load large volumes of data from CSV (3 million+ rows) into a database? The data needs to be formatted (e.g. the name column needs to be split into first name and last name, etc.). I need to do this as efficiently as possible, i.e. there are time constraints. I am leaning towards reading, transforming, and loading the data row by row using a C# application. Is this ideal? If not, what are my options? Should I use multithreading?
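
    If the target is SQL Server (an assumption; the question doesn't name the database), a common approach is to transform rows in C# while batching them into SqlBulkCopy rather than issuing per-row INSERTs. A minimal sketch, assuming a simple comma-separated file with no header and no quoted fields, and hypothetical file, connection string, and table names:

        using System.Data;
        using System.Data.SqlClient;
        using System.IO;

        class CsvLoader
        {
            static void Main()
            {
                const string csvPath = "people.csv";
                const string connectionString = "Server=.;Database=Target;Integrated Security=true";

                var table = new DataTable();
                table.Columns.Add("FirstName", typeof(string));
                table.Columns.Add("LastName", typeof(string));

                using (var reader = new StreamReader(csvPath))
                using (var connection = new SqlConnection(connectionString))
                {
                    connection.Open();
                    string line;
                    while ((line = reader.ReadLine()) != null)
                    {
                        // Transform while reading: split the name field into first/last.
                        var fields = line.Split(',');
                        var nameParts = fields[0].Split(new[] { ' ' }, 2);
                        table.Rows.Add(nameParts[0], nameParts.Length > 1 ? nameParts[1] : "");

                        if (table.Rows.Count == 10000) Flush(table, connection);
                    }
                    if (table.Rows.Count > 0) Flush(table, connection);
                }
            }

            static void Flush(DataTable table, SqlConnection connection)
            {
                using (var bulkCopy = new SqlBulkCopy(connection) { DestinationTableName = "dbo.People" })
                {
                    bulkCopy.ColumnMappings.Add("FirstName", "FirstName");
                    bulkCopy.ColumnMappings.Add("LastName", "LastName");
                    bulkCopy.WriteToServer(table);
                }
                table.Clear();
            }
        }

    Batching in chunks of 10,000 rows keeps memory bounded while still letting the bulk-load path do the heavy lifting; multithreading is rarely needed before this option is exhausted.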


  • Large file download for a Rails project

    - by Horace Ho
    A client project goes online in two months. One of the changed requirements is to support worldwide downloads of large files for their customers (10 to 15 MB per RAW camera file, with 1000 to 5000 file downloads expected per day). The process will be:
    1. an upload screen via Paperclip to the Rails app's local public folder
    2. an hourly task to upload the files to web storage (S3?)
    3. updating the download URL from the Paperclip URL to the web storage URL
    Questions: is there a gem/plug-in for this purpose? If not, is there a gem/plug-in for S3 to recommend? About the storage provider: is S3 recommended, or is there another service to recommend? The baseline is: the client's web server does not and will not have the bandwidth to handle the downloads. Thanks


  • Unable to upload large files to Google Docs

    - by Preeti
    Hi, I am uploading a document to Google Docs like this:
    DocumentsService myService = new DocumentsService("");
    myService.setUserCredentials("[email protected]", password);
    DocumentEntry newEntry = myService.UploadDocument(@"C:\Sample.txt", "Sample.txt");
    But when I try to upload a file of 3 MB it results in an exception:
    An unhandled exception of type 'Google.GData.Client.GDataRequestException' occurred in Google.GData.Client.dll
    Additional information: Execution of request failed: http://docs.google.com/feeds/documents/private/full
    How can I upload a large file to Google Docs? I am using Google API ver 2. Thanks


  • POST data disappearing on large file upload

    - by DfKimera
    I'm having issues with a file uploading utility in my PHP application. When sending large files (9 MB+) over the form, I get very odd behaviour: the POST data I've included in the form disappears, including the file information. I've already increased all the PHP limits I could (time limit, max input time, post max size, memory limit, and upload max filesize) and I still can't get the proper behaviour. I've tried replacing the regular HTTP forms with a Flash-based solution (SWFUpload, www.swfupload.org), and still get the same behaviour. I've tried multiple files of similar sizes and it's definitely not a particular-file issue. I've debugged the POST vars sent using Firebug, and the correct variables are still there in the header, together with the file. What could be going on here?

