Search Results

Search found 10417 results on 417 pages for 'large'.


  • Permissions for Large Variables to Be Sent Via Stored Procedures (SQL Server)

    - by Joe Majewski
    I can't figure out a way to allow more than 4000 bytes to be received at once via a call to a stored procedure. I am storing images in the table that are around 15 - 20 kilobytes each, but upon getting them and displaying them to the page, they are always exactly 3.91 KB in size (or 4000 bytes). Do stored procedures have a limit on how much data can be sent at once? I double-checked my data, and I am indeed only receiving the first 4000 characters from the varbinary(MAX) field. Is there a permission setting to allow more than 4k bytes at once?

    Read the article

  • Hadoop - large database query

    - by Mastergeek
    Situation: I have a Postgres DB that contains a table with several million rows, and I'm trying to query all of those rows for a MapReduce job. From the research I've done on DBInputFormat, Hadoop might try to use the same query again for a new mapper, and since these queries take a considerable amount of time I'd like to prevent this in one of two ways: 1) limit the job to run only one mapper that queries the whole table and call it good, or 2) somehow incorporate an offset in the query so that if Hadoop does try to use a new mapper it won't grab the same rows. Option (1) seems more promising, but I don't know if such a configuration is possible. Option (2) sounds nice in theory, but I have no idea how I would keep track of the mappers being created, or whether it is even possible to detect that and reconfigure. Help is appreciated; I'm mainly looking for a way to pull all of the table data without several copies of the same query running, because that would be a waste of time.
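
    A minimal sketch of option (1), assuming the classic org.apache.hadoop.mapred API: DBInputFormat plans its splits from the configured number of map tasks, so forcing a single map task yields a single split and the expensive query runs exactly once. The connection settings, table, and MyRecord record type below are placeholders, not from the question.

      import java.io.DataInput;
      import java.io.DataOutput;
      import java.io.IOException;
      import java.sql.PreparedStatement;
      import java.sql.ResultSet;
      import java.sql.SQLException;

      import org.apache.hadoop.io.Writable;
      import org.apache.hadoop.mapred.JobClient;
      import org.apache.hadoop.mapred.JobConf;
      import org.apache.hadoop.mapred.lib.db.DBConfiguration;
      import org.apache.hadoop.mapred.lib.db.DBInputFormat;
      import org.apache.hadoop.mapred.lib.db.DBWritable;

      public class SingleMapperJob {

          // Hypothetical record type mapping the table's columns.
          public static class MyRecord implements Writable, DBWritable {
              long id;
              String payload;
              public void readFields(ResultSet rs) throws SQLException {
                  id = rs.getLong(1);
                  payload = rs.getString(2);
              }
              public void write(PreparedStatement ps) throws SQLException { /* read-only job */ }
              public void readFields(DataInput in) throws IOException {
                  id = in.readLong();
                  payload = in.readUTF();
              }
              public void write(DataOutput out) throws IOException {
                  out.writeLong(id);
                  out.writeUTF(payload);
              }
          }

          public static void main(String[] args) throws Exception {
              JobConf conf = new JobConf(SingleMapperJob.class);
              conf.setInputFormat(DBInputFormat.class);

              // Placeholder connection settings -- swap in the real Postgres values.
              DBConfiguration.configureDB(conf, "org.postgresql.Driver",
                      "jdbc:postgresql://dbhost/mydb", "user", "password");

              // The second query is the row count DBInputFormat uses to plan splits.
              DBInputFormat.setInput(conf, MyRecord.class,
                      "SELECT id, payload FROM big_table",
                      "SELECT COUNT(id) FROM big_table");

              // One map task => one split, so the whole table is read by one mapper.
              conf.setNumMapTasks(1);

              // Mapper, reducer, and output settings omitted for brevity.
              JobClient.runJob(conf);
          }
      }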

    Read the article

  • Empty R environment becomes large file when saved

    - by user1052019
    I'm getting behaviour I don't understand when saving environments. The code below demonstrates the problem. I would have expected the two files (far-too-big.RData and right-size.RData) to be the same size, and also very small, because the environments they contain are empty. In fact, far-too-big.RData ends up the same size as bigfile.RData. I get the same results using R 2.14.1 and 2.15.2, both on WinXP 5.1 SP3. Can anyone explain why this is happening? Thanks.

      a <- matrix(runif(1000000, 0, 1), ncol=1000)
      save(a, file="c:/temp/bigfile.RData")
      test <- function() {
        load("c:/temp/bigfile.RData")
        test <- new.env()
        save(test, file="c:/temp/far-too-big.RData")
        test1 <- new.env(parent=globalenv())
        save(test1, file="c:/temp/right-size.RData")
      }
      test()

    Read the article

  • How can I execute large MySQL queries fast

    - by testkhan
    I have 4 MySQL tables and a single query that JOINs across them, which I request via jQuery AJAX, but it takes too long (about 1-3 minutes), while I want it to execute in 2-5 seconds on average. Is there any special way to execute the queries fast?

    Read the article

  • Best way to support large dropdowns

    - by JustAProgrammer
    Say I have a report that can be restricted by specifying some value in a dropdown. This dropdown list references a table with 30,000 records. I don't think this would be feasible to populate a dropdown with! So, what is the best way to provide the user the ability to select a value given this situation? These values do not really have categories and even if I subdivided (by having some nesting dropdown situation) by the first letter of the value, that may still leave a few thousand entries. What's the best way to deal with this?

    Read the article

  • Sending large XML from Silverlight to SVC (WCF)

    - by alexbf
    Hi! I want to send a big XML string to a WCF SVC service from Silverlight. It looks like anything under about 50k is sent correctly, but if I try to send something over that limit, my request reaches the server (BeginRequest is called) but never reaches my SVC, and I get the classic "NotFound" exception. Any idea how to raise that limit? And if I can't raise it, what are my other options? Thanks, Alex

    Read the article

  • sql select from a large number of IDs

    - by Claudiu
    I have a table, Foo. I run a query on Foo to get the ids from a subset of Foo. I then want to run a more complicated set of queries, but only on those IDs. Is there an efficient way to do this? The best I can think of is creating a query such as:

      SELECT ...   -- complicated stuff
      WHERE ...    -- more stuff
        AND id IN (1, 2, 3, 9, 413, 4324, ..., 939393)

    That is, I construct a huge "IN" clause. Is this efficient? Is there a more efficient way of doing this, or is the only way to JOIN with the initial query that gets the IDs? If it helps, I'm using SQLObject to connect to a PostgreSQL database, and I have access to the cursor that executed the query to get all the IDs.

    Read the article

  • Read a large result set in chunks from mysql

    - by ripper234
    I am trying to read a huge result set from MySQL. Reading it in a straightforward manner didn't work, as MySQL tries to return all results together, which times out. I found the following piece of code, which tells MySQL to stream the results back one row at a time:

      stmt = conn.createStatement(java.sql.ResultSet.TYPE_FORWARD_ONLY,
                                  java.sql.ResultSet.CONCUR_READ_ONLY);
      stmt.setFetchSize(Integer.MIN_VALUE);

    Can I read a chunk at a time instead of one row at a time? I've tried setting the fetch size to a different value, but it doesn't work.
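
    If chunking really is what's needed, one alternative (a sketch, not from the question; the table, columns, and connection details are placeholders) is keyset pagination: issue a small bounded query per chunk, keyed on an indexed id column, instead of holding one giant result open.

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.PreparedStatement;
      import java.sql.ResultSet;

      public class ChunkedReader {
          public static void main(String[] args) throws Exception {
              final int CHUNK = 10000;
              try (Connection conn = DriverManager.getConnection(
                      "jdbc:mysql://localhost/mydb", "user", "password")) {
                  long lastId = 0;  // assumes an indexed numeric primary key "id"
                  while (true) {
                      int rows = 0;
                      try (PreparedStatement ps = conn.prepareStatement(
                              "SELECT id, payload FROM big_table WHERE id > ? ORDER BY id LIMIT ?")) {
                          ps.setLong(1, lastId);
                          ps.setInt(2, CHUNK);
                          try (ResultSet rs = ps.executeQuery()) {
                              while (rs.next()) {
                                  lastId = rs.getLong("id");
                                  // process rs.getString("payload") here
                                  rows++;
                              }
                          }
                      }
                      if (rows < CHUNK) break;  // last chunk reached
                  }
              }
          }
      }

    Each chunk is an independent, fast query, so no single statement has to return the whole table; the trade-off is that rows inserted between chunks may or may not be seen.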

    Read the article

  • VS2008 is very slow on a specific large C++ solution

    - by VioletRose
    I have a solution with 21 C++ projects and 1 VB.NET project. The IDE responds very slowly when I simply move the caret in a file or try to open a menu; the process seems to take 50% of the CPU for each movement. It only happens with this solution, and only on my machine. The solution has a total of 2380 source and header files, of which 1280 are header files. I tried to remove all connections to source control (Perforce), but it didn't help. I also have Visual Assist installed, but even after uninstalling it the same behavior continued. Any ideas?

    Read the article

  • Querying Postgresql with a very large result set

    - by sanity
    In an application I need to query a Postgres DB where I expect tens or even hundreds of millions of rows in the result set. I might do this query once a day, or even more frequently. The query itself is relatively simple, although it may involve a few JOINs. My question is: how smart is Postgres about avoiding having to seek around the disk for each row of the result set? Given the time required for a hard disk seek, this could be extremely expensive. If this isn't an issue, how does Postgres avoid it? How does it know how to lay out data on the disk so that it can be streamed out efficiently in response to this query?
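
    Not an answer to the on-disk layout question, but if the practical concern is consuming such a result set without materialising it, the standard PostgreSQL JDBC driver streams rows through a server-side cursor once autocommit is off and a fetch size is set. A minimal Java sketch (connection details, tables, and query are placeholders):

      import java.sql.Connection;
      import java.sql.DriverManager;
      import java.sql.ResultSet;
      import java.sql.Statement;

      public class StreamingQuery {
          public static void main(String[] args) throws Exception {
              try (Connection conn = DriverManager.getConnection(
                      "jdbc:postgresql://localhost/mydb", "user", "password")) {
                  conn.setAutoCommit(false);     // required for cursor-based fetching
                  try (Statement stmt = conn.createStatement()) {
                      stmt.setFetchSize(10000);  // rows fetched per round trip
                      try (ResultSet rs = stmt.executeQuery(
                              "SELECT a.id, b.value FROM a JOIN b ON b.a_id = a.id")) {
                          while (rs.next()) {
                              // process one row at a time; only ~10000 rows are buffered
                          }
                      }
                  }
                  conn.commit();
              }
          }
      }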

    Read the article

  • Best way to get distinct values from large table

    - by derivation
    I have a db table with about 10 or so columns, two of which are month and year. The table has about 250k rows now, and we expect it to grow by about 100-150k records a month. A lot of queries involve the month and year columns (e.g., all records from March 2010), so we frequently need to get the available month and year combinations (i.e., do we have records for April 2010?). A coworker thinks that we should have a separate table from our main one that only contains the months and years we have data for. We only add records to our main table once a month, so it would just be a small update at the end of our scripts to add the new entry to this second table. This second table would be queried whenever we need to find the available month/year entries in the first table. This solution feels kludgy to me and a violation of DRY. What do you think is the correct way of solving this problem? Is there a better way than having two tables?

    Read the article

  • Rendering CALayer in context uses large amounts of memory

    - by Otium
    I am taking a snapshot of a UIWebView layer, but when I render the webview's layer in the current context my app uses 10 MB more memory, and I don't think that should be right. Here is my current code:

      CGSize imageSize = self.bounds.size;
      UIGraphicsBeginImageContextWithOptions(imageSize, YES, 0);
      CGContextRef context = UIGraphicsGetCurrentContext();
      [self.layer renderInContext:context];
      _snapshot = UIGraphicsGetImageFromCurrentImageContext();
      UIGraphicsEndImageContext();

    Read the article

  • Handling large numbers

    - by klw
    Hello, this is actually a problem from the Project Euler site: http://projecteuler.net/index.php?section=problems&id=3 Anyway, I'm not after the solution, but you can probably guess what my approach is. To my question now: how do I handle numbers exceeding unsigned int? Is there a mathematical approach to this, and if so, where can I read about it?
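
    For illustration only (the values below are arbitrary placeholders, not the problem's numbers): in Java, a 64-bit long covers integers up to 9,223,372,036,854,775,807, and java.math.BigInteger handles anything beyond that with ordinary arithmetic methods.

      import java.math.BigInteger;

      public class BigNumbers {
          public static void main(String[] args) {
              // Values too big for a 32-bit unsigned int often still fit in a 64-bit long.
              long n = 600000000000L;
              System.out.println(n % 7);

              // Arbitrary precision when even 64 bits is not enough.
              BigInteger big = new BigInteger("123456789012345678901234567890");
              System.out.println(big.mod(BigInteger.valueOf(97)));
              System.out.println(big.divide(BigInteger.valueOf(1000)));
          }
      }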

    Read the article

  • Handling large numbers of sockets with .NET

    - by Dreaddan
    I'm looking at writing an application that needs to be able to handle in the region of 200 connections/sec, and I was wondering whether C# and .NET will handle this or whether I really need to be looking at C++. It looks like SocketAsyncEventArgs may be the way to go, but I thought I'd check before I plough into it. Each transaction should normally last less than a second, but could take up to 15 seconds, if that makes any difference.

    Read the article

  • Using large transparent pictures as texture atlases

    - by azlisum
    I'm new to Android programming and I'm trying to create a relatively big 2D game. I have to use lots of images and objects in my game, so I decided to use OpenGL ES. I have several texture atlases, all of them saved as PNGs because of the transparency. I also know, though I'm not sure why, that I have to use images whose height and width are powers of two. I test my game on an old HTC Hero running Android 2.3.3. When my texture atlases are 512x512 each, my game has a frame rate of between 50 and 60 fps. When I use a 1024x1024 non-transparent PNG, there is no problem - the FPS is again between 50 and 60. But when I use 1024x1024 transparent PNGs, my frame rate drops to 4.5 fps. Could this be a problem related to the age of the device I'm using for testing? These are the OpenGL functions I use each loop to draw batches:

      gl.glEnable(GL10.GL_TEXTURE_2D);
      gl.glEnable(GL10.GL_BLEND);
      gl.glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);
      // drawing happens here
      gl.glDisable(GL10.GL_BLEND);

    Thanks in advance :)

    Read the article

  • Dealing with large directories in a checkout

    - by Eric
    I am trying to come up with a version control process for a web app that I work on. Currently, my major stumbling blocks are two directories that are huge (both over 4GB). Only a few people need to work on things within the huge directories; most people don't even need to see what's in them. Our directory structure looks something like:

      /
      --file.aspx
      --anotherFile.aspx
      --/coolThings
      ----coolThing.aspx
      --/bigFolder
      ----someHugeMovie.mov
      ----someHugeSound.mp3
      --/anotherBigFolder
      ----...

    I'm sure you get the picture. It's hard to justify a checkout that has to pull down 8GB of data that's likely useless to a developer. I know, it's only once, but even once could be really frustrating for someone (and will make it harder for me to convince everyone to use source control). (Plus, clean checkouts will be painfully slow.) These folders do have to be available in the web application. What can I do? I've thought about separate repositories for the big folders. That way, you only download if you need it; but then how do I manage checking these out onto our development server? I've also thought about not trying to version control those folders: just update them directly on the web server... but I am not enamored of this idea. Is there some magic way to simply exclude directories from a checkout that I haven't found? (Pretty sure there is not.) Of course, there's always the option to just give up, bite the bullet, and accept downloading 8 useless GB. What say you? Have you encountered this problem before? How did you solve it?

    Read the article

  • C# chart control Performance with large amounts of data

    - by user3642115
    I am using a chart control with a range bar graph to basically make a Gantt chart for lots of people and lots of projects, say about 1000 total series. The issue I am running into is that once all my data has been added to the chart (which takes some time, but that is to be expected) and I go to scroll down the graph, the whole application freezes and takes a while before it unfreezes and scrolls down. Is there any way to improve the performance of this? I tried adding the graph to a panel, growing the graph size dynamically, and then scrolling down from the panel, but that caused a whole plethora of other issues. Any tips for speeding this up? I don't think it is my code, as it has already finished running when this issue happens. Thanks.

    Read the article

  • Display Commas in Large Numbers: JavaScript

    - by user3723918
    I'm working on a customized calculator, which is working pretty well except that I can't figure out how to get the generated numbers to display commas within the number. For example, it might spit out "450000" when I need it to say "450,000". This thread gives a number of suggestions on how to create a new function to deal with the problem, but I'm rather new to JavaScript and I don't really know how to make such a function interact with what I have now. I'd really appreciate any help as to how to get generated numbers with commas! :)

    HTML:

      <table id="inputValues">
        <tr>
          <td>Percentage:</td>
          <td><input id="sempPer" type="text"></td>
        </tr>
        <tr>
          <td>Price:</td>
          <td><input id="unitPrice" type="text"></td>
        </tr>
        <tr>
          <td colspan="2"><input id="button" type="submit" value="Calculate"></td>
        </tr>
      </table>
      <table id="revenue" class="TFtable">
        <tr>
          <td class="bold">Market Share</td>
          <td class="bold">Partner A</td>
          <td class="bold">Partner B</td>
        </tr>
        <tr>
          <td class="bold">1%</td>
          <td><span id="moss1"></span></td>
          <td><span id="semp1"></span></td>
        </tr>
      </table>
      </form>

    JavaScript:

      <script>
      function calc() {
        var z = Number(document.getElementById('sempPer').value);
        var x = Number(document.getElementById('unitPrice').value);
        var y = z / 100;
        var dm1 = .01 * 50000 * x * (1 - y);
        var se1 = .01 * 50000 * x * y;
        document.getElementById("moss1").innerHTML = "$" + Number(dm1).toFixed(2);
        document.getElementById("semp1").innerHTML = "$" + Number(se1).toFixed(2);
      }
      </script>

    Read the article

  • Large Data Table with first column fixed

    - by bhavya_w
    I have a structure as shown in the fiddle http://jsfiddle.net/5LN7U/

      <section class="container">
        <section class="field">
          <ul>
            <li> Question 1 </li>
            <li> question 2 </li>
            <li> question 3 </li>
            <li> question 4 </li>
            <li> question 5 </li>
            <li> question 6 </li>
            <li> question 7 </li>
          </ul>
        </section>
        <section class="datawrap">
          <section class="datawrapinner">
            <ul>
              <li><b>Answer 1 :</b></li>
              <li><b>Answer 2 :</b></li>
              <li><b>Answer 3 :</b></li>
              <li><b>Answer 4 :</b></li>
              <li><b>Answer 5 :</b></li>
              <li><b>Answer 6 :</b></li>
              <li><b>Answer 7 :</b></li>
            </ul>
          </section>
        </section>
      </section>

    Basically it's a table structure made using divs. The first column contains a long list of questions, and the second column contains answers/multiple answers, which can be quite big (there has to be horizontal scrolling in the second column). The problem I am facing is that when I scroll down, the second column, which has the horizontal scrollbar, also scrolls downward. I want the horizontal scrollbar to stay fixed, no matter how much I scroll vertically. Much like Google Spreadsheets, where the first column stays fixed and there's horizontal scrolling on the rest of the columns, with vertical scrolling over the whole of the data. I cannot use position: fixed on the second column. P.S.: please, no lectures on using divs to make a table structure; I have my own reasons, and it's kinda urgent. Thanks in advance.

    Read the article

  • How to define large list of strings in Visual Basic

    - by Jenny_Winters
    I'm writing a macro in Visual Basic for PowerPoint 2010. I'd like to initialize a really big list of strings like:

      big_ol_array = Array( _
        "string1", _
        "string2", _
        "string3", _
        "string4", _
        .....
        "string9999" _
      )

    ...but I get the "Too many line continuations" error in the editor. When I try to initialize the big array with no line breaks instead, the VB editor can't handle such a long line (1000+ characters). Does anyone know a good way to initialize a huge list of strings in VB? Thanks in advance!

    Read the article

  • Longest substring in a large set of strings

    - by user1516492
    I have a huge fixed library of text strings, and a frequently changing input string s. I need to find the longest matching substring from any string in the library to s, starting from the beginning of string s, in minimal time. In a perfect world, I would also return the next-longest match from the library, and the next best, and so on. This is not the longest common substring problem - I'm not looking for the longest substring common to all the strings in the library... I just need the pairwise best substring between s and each string in the vast library, as fast as possible.
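
    One common technique (not mentioned in the question, and sketched here in Java) is to index the library in a trie and walk it with s. This version matches prefixes of s against prefixes of library strings; to allow matches starting anywhere inside a library string, one would insert every suffix of every library string instead.

      import java.util.HashMap;
      import java.util.Map;

      public class PrefixTrie {
          private final Map<Character, PrefixTrie> children = new HashMap<>();

          public void insert(String word) {
              PrefixTrie node = this;
              for (char c : word.toCharArray()) {
                  node = node.children.computeIfAbsent(c, k -> new PrefixTrie());
              }
          }

          // Length of the longest prefix of s that follows a path in the trie.
          public int longestMatch(String s) {
              PrefixTrie node = this;
              int length = 0;
              for (char c : s.toCharArray()) {
                  node = node.children.get(c);
                  if (node == null) break;
                  length++;
              }
              return length;
          }

          public static void main(String[] args) {
              PrefixTrie trie = new PrefixTrie();
              trie.insert("abcdef");
              trie.insert("abcxyz");
              System.out.println(trie.longestMatch("abcxq"));  // prints 4 ("abcx")
          }
      }

    Lookup cost is proportional to the length of the matched prefix rather than to the size of the library.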

    Read the article

  • SQL Server 05, which is optimal, LIKE %<term>% or CONTAINS() for searching large column

    - by Spud1
    I've got a function written by another developer which I am trying to modify for a slightly different use. It is used by a stored procedure to check whether a certain phrase exists in a text document stored in the DB, and returns 1 if the value is found or 0 if it's not. This is the query:

      SELECT @mres=1 from documents where id=@DocumentID and contains(text, @search_term)

    The document contains mostly XML, and the search_term is a GUID formatted as an nvarchar(40). This seems to run quite slowly to me (taking 5-6 seconds to execute this part of the process), but in the same script file there is also this version of the above, commented out:

      SELECT @mres=1 from documents where id=@DocumentID and text like '%' + @search_term + '%'

    This version runs MUCH quicker, taking 4ms compared to 15ms for the first example. So, my question is: why use the first over the second? I assume this developer (who is no longer working with me) had a good reason, but at the moment I am struggling to find it. Is it possibly something to do with the full-text indexing? (This is a dev DB I am working with, so the production version may have better indexing.) I am not that clued up on FTI, really, so I'm not quite sure at the moment. Thoughts/ideas?

    Read the article

  • WCF Streaming not working at server

    - by Radhi
    Hi, I have used a WCF service to transfer large files in chunks to the server. For that I referenced this article: http://kjellsj.blogspot.com/2007/02/wcf-streaming-upload-files-over-http.html I have configured my application on IIS on my machine and it works fine there; it allows up to a 64MB file upload. But when we published the site, it allows at most a 30MB file; if I try to upload more than that, I get error 404 - resource not found. Here is the binding config I have used:

      <basicHttpBinding>
        <!-- buffer: 64KB; max size: 64MB -->
        <binding name="FileTransferServicesBinding"
                 closeTimeout="00:01:00" openTimeout="00:01:00"
                 receiveTimeout="00:10:00" sendTimeout="00:01:00"
                 transferMode="Streamed" messageEncoding="Mtom"
                 maxBufferSize="65536" maxReceivedMessageSize="67108864">
          <security mode="None">
            <transport clientCredentialType="None"/>
          </security>
        </binding>
      </basicHttpBinding>

    Please suggest where I am missing anything, and if more code is required please let me know. Thanks in advance.

    Read the article
