Search Results

Search found 38088 results on 1524 pages for 'large scale project'.

Page 128/1524

  • Prefilling a large volume of body text in Gmail compose gives a "Request URI too long" error

    - by Ali
    Hi guys, this is a follow-up to this question: http://stackoverflow.com/questions/2583928/prefilling-gmail-compose-screen-with-html-text where I was building a Google Apps application. I can open a Gmail compose page from my application using the URL: https://mail.google.com/a/domain/?view=cm&fs=1&tf=1&source=mailto&to=WHOEVER%40COMPANY.COM&su=SUBJECTHERE&cc=WHOEVER%40COMPANY.COM&bcc=WHOEVER%40COMPANY.COM&body=PREPOPULATEDBODY However, when I pass a very long piece of text in the body parameter, e.g. a reply message body, Gmail returns an error stating that the request URI is too long. Is there a better way to fill in the body of the Gmail compose screen, or some way to open the page and prefill it with JavaScript?
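
    A minimal C# sketch (not the poster's code) of building that compose URL with escaping and a rough length check; the 2,000-character cutoff is an assumption, since the exact limit depends on the browser and Google's servers:

        using System;

        class GmailComposeUrl
        {
            // Builds the compose URL used above; very long bodies push the
            // query string past typical request-URI limits, hence the error.
            static string Build(string domain, string to, string subject, string body)
            {
                string url = "https://mail.google.com/a/" + domain + "/?view=cm&fs=1&tf=1&source=mailto"
                           + "&to=" + Uri.EscapeDataString(to)
                           + "&su=" + Uri.EscapeDataString(subject)
                           + "&body=" + Uri.EscapeDataString(body);
                if (url.Length > 2000)   // rough, assumed threshold
                    Console.WriteLine("Body is too long to pass via the query string.");
                return url;
            }

            static void Main()
            {
                Console.WriteLine(Build("example.com", "someone@company.com",
                                        "Subject here", new string('x', 100)));
            }
        }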

    Read the article

  • Can an ASP.NET MVC 2 project and a CMS be combined together?

    - by coolguy97
    Hi, I have developed a project using ASP.NET MVC 2. I don't want to build a CMS for the content part of my site, so my question is: can I use an existing CMS developed in MVC 2, so that the content part is taken care of by the CMS and the application part by the project I developed? I have used CMSs like SilverStripe, which is quite easy and also provides an ORM for developing applications on its Sapphire engine, but it is written in PHP. If the two were combined, I imagine writing code like this (just sample, imaginary code; I just want the CMS to be easy):

        <logo><Pick_up_from_CMS ID=logo></logo>
        <menu><Pick_up_from_CMS ID=menu></menu>
        <header><Pick_up_from_CMS ID=header></header>
        <body>
          <Pick_up_from_CMS ID=body>
          <MY_Application_Logic ID=Logic1><!-- This may be my registration or search form -->
        </body>
        <footer><Pick_up_from_CMS ID=footer></footer>
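
    In MVC terms, the imaginary tags above would most likely become an HTML helper backed by whatever CMS is chosen. The sketch below is purely hypothetical: ICmsContentRepository and Html.CmsContent are invented names for illustration, not an existing CMS API.

        using System.Web.Mvc;

        // Hypothetical abstraction over whichever CMS ends up storing the content blocks.
        public interface ICmsContentRepository
        {
            string GetBlock(string blockId);   // returns stored HTML for "logo", "menu", ...
        }

        public static class CmsHtmlHelpers
        {
            // A view would call <%= Html.CmsContent(repo, "logo") %> for CMS-managed content,
            // while ordinary controllers and actions keep handling the application logic.
            public static MvcHtmlString CmsContent(this HtmlHelper html,
                                                   ICmsContentRepository repo, string blockId)
            {
                return MvcHtmlString.Create(repo.GetBlock(blockId));
            }
        }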

    Read the article

  • Inserting Large volume of data in SQL Server 2005

    - by Manjoor
    We have an application (written in C#) that stores live stock market prices in a database (SQL Server 2005). It inserts about 1 million records in a single day. We are now adding more market segments, so the number of records will double (2 million/day). Currently the average insertion rate is about 50 records per second, the maximum is 450 and the minimum is 0. To check certain conditions I have used Service Broker (an asynchronous trigger) on my price table; it is running fine at the moment (about 35% CPU utilization). I am now planning to keep an in-memory dataset of current stock prices on which we would do some simple calculations. I would like to hear members' different views on this. Please describe how you would deal with such a situation.
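
    On the insert side, one approach that often comes up for this kind of load (a sketch under the assumption that ticks can be buffered briefly and written in batches; the table name is made up) is to collect rows in a DataTable and push them with SqlBulkCopy:

        using System.Data;
        using System.Data.SqlClient;

        class PriceWriter
        {
            // Bulk-inserts a buffered batch of price rows in one round trip,
            // which typically sustains far higher rates than row-by-row inserts.
            public static void Flush(DataTable bufferedRows, string connectionString)
            {
                using (var connection = new SqlConnection(connectionString))
                {
                    connection.Open();
                    using (var bulk = new SqlBulkCopy(connection))
                    {
                        bulk.DestinationTableName = "dbo.Prices";   // assumed name
                        bulk.BatchSize = 1000;
                        bulk.WriteToServer(bufferedRows);
                    }
                }
            }
        }

    Note that SqlBulkCopy does not fire insert triggers by default; SqlBulkCopyOptions.FireTriggers would be needed if the Service Broker logic depends on them.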

    Read the article

  • How do I make my text wrap in a <div> with a large border radius?

    - by Greg Guida
    In the following code, why isn't the browser wrapping the text around the rounded corners?

        <html>
          <body>
            <div style="height:400px; width:400px; -moz-border-radius:100px; -webkit-border-radius:100px; border:3px solid #500; background-color:#a00; overflow:hidden;">
              Why is this getting cut at the beginning???
            </div>
          </body>
        </html>

    In WebKit browsers (I tested both Chrome and Safari) the overflow:hidden cuts off the text outside the border; Firefox just renders the text outside the border. I also tried this without overflow:hidden;, but again the text just rendered outside the border.

    Read the article

  • The project is not configured for Facelets yet in RAD 8.0

    - by Jyoti
    I am trying to create JSF 2 pages. When I create pages using a Facelets template I get a message at the top saying "The project is not configured for Facelets yet. You need to add a Facelets runtime to the project's classpath". I created a file called Test1.xhtml:

        <?xml version="1.0" encoding="UTF-8" ?>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
            "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml"
              xmlns:ui="http://java.sun.com/jsf/facelets"
              xmlns:f="http://java.sun.com/jsf/core"
              xmlns:h="http://java.sun.com/jsf/html">
        <h:head>
          <title>Test1</title>
          <meta http-equiv="Content-Type" content="application/xhtml+xml; charset=UTF-8" />
          <meta name="GENERATOR" content="Rational® Application Developer for WebSphere® Software" />
        </h:head>
        <h:body>
          Test
        </h:body>
        </html>

    When I run this I see the raw content of the file in the browser instead of just "Test", and no page code is generated for it.

    Read the article

  • One large file or multiple small files?

    - by Dan
    I have an application (currently written in Python while we iron out the specifics, but eventually it will be written in C) that makes use of individual records stored in plain text files. We can't use a database, and new records will need to be added manually on a regular basis. My question is this: would it be faster to have a single file (500 KB to 1 MB) and have my application open it, loop through it to find the data, and close it, OR would it be faster to keep the records in separate files, named using some appropriate convention, so that the application can simply loop over filenames to find the data it needs? I know my question is quite general, so pointers to good articles on the topic are appreciated as much as suggestions. Thanks very much in advance for your time, Dan

    Read the article

  • SQLite/Fluent NHibernate integration test harness initialization not repeatable after large data set

    - by Mark Rogers
    In one of my main data integration test harnesses I create and use Fluent NHibernate's SingleConnectionSessionSourceForSQLiteInMemoryTesting to get a fresh session for each test. After each test I close the connection, session, and session factory, and throw out the nested StructureMap container they came from. This works for almost any simple data integration test I can think of, including ones that use Fluent NHibernate's PersistenceSpecification object. When I test the application's lengthy database bootstrapping process, which creates and saves thousands of domain objects, I start seeing issues. It's not that the setup and teardown fail; in fact, the test successfully bootstraps the in-memory database just as the application would bootstrap the real database in production. The problem occurs when the database bootstrapping runs a second time on a new in-memory database, with a new session and session factory. The error is:

        NHibernate.StaleStateException : Unexpected row count: 0; expected: 1

    The row count is indeed unexpected: the row that the application under test is looking for should be in the session. It's not that data from the last integration test is sticking around; for some reason the session just stops working mid-database-bootstrap. I've looked everywhere for a place I might be holding on to an old session and I can't find one. I've searched the code for static singleton objects, but there are none anywhere near the code in question. I have a couple of StructureMap InstanceScope singletons, but they are thrown out with each nested container that is discarded after every test teardown. I've tried every possible variation on disposing and closing every object involved in each test teardown, and it still fails on this lengthy database bootstrap, while non-bootstrap-related database tests appear to work fine. I'm starting to run out of options and may have to give up lengthy database integration tests in favor of WatiN-based acceptance tests. Can anyone give me any clue about why some of my SingleConnectionSessionSourceForSQLiteInMemoryTesting sessions aren't repeatable? Any advice at all about how to make an NHibernate SQLite database integration test harness repeatable?

    Read the article

  • NPGSQL seems to have a rather large bug?

    - by Mr Shoubs
    Hello, this is a weird one: when I run the following code, all rows are returned from the database. Imagine what would happen if this were an update or delete.

        Dim cmd As New NpgsqlCommand
        cmd.Connection = conn
        cmd.CommandText = "select * FROM ac_profiles WHERE profileid = @profileId"
        cmd.Parameters.Add("@profile", 58)

        Dim dt As DataTable = DataAccess2.DataAccess.sqlQueryDb(cmd)
        DataGridView1.DataSource = dt

    My question is: why is this happening?

    Read the article

  • Memory Issues When DOM Parsing A Large XML File on Android Devices

    - by tonyc
    Hey awesome SO users, I have an Android application that parses an XML file for users and displays the results in a much more mobile-friendly format. The app works great for most users, but some users have lots and lots of data, and for them the app crashes because it runs out of memory. Is there any way to have a DOM-style XML parser stop after a certain amount of parsing? I only need the first 30 or so elements, so stopping early would make the application much more efficient. I'd like to use a SAX or pull parser instead, but the XML I'm parsing is not valid and I have no control over it. Unless someone has a good SAX solution that can handle messy, invalid XML, I think DOM is the only way to go. Thanks for reading!

    Read the article

  • Processing potentially large STDIN data, more than once

    - by d11wtq
    I'd like to provide an accessor on a class that returns an NSInputStream for STDIN, which may contain several hundred megabytes (or even gigabytes, though that is unlikely) of data. When a caller gets this NSInputStream it should be able to read from it without worrying about exhausting the data it contains; in other words, another block of code may request the NSInputStream and will expect to be able to read from it too. Without first copying all of the data into an NSData object, which (I assume) would cause memory exhaustion, what are my options for handling this? The returned NSInputStream does not have to be the same instance; it simply needs to provide the same data. The best I can come up with right now is to copy STDIN to a temporary file and then return NSInputStream instances backed by that file. Is this pretty much the only way to handle it? Is there anything I should be cautious of if I go the temporary-file route?

    Read the article

  • DataGridView lags for a second with large data updates

    - by alexD
    I have a DataGridView with about 400 rows and 10 columns. When the user first displays this table, it receives all of the data from the server and populates the table. The DGV uses a DataTable as its data source, and when updating the DataTable I use row.BeginEdit/EndEdit and AcceptChanges, but when the view itself is updated it lags for a second while the whole DGV refreshes. I am wondering if there is a way to make this smooth, so that, for example, if the user is scrolling through the data when an update arrives, it won't interrupt the scrolling, and if the user is moving the window around the screen when it updates, it won't interrupt that either. Is there an easy way to do this? If not, is there a way to prevent the DGV from updating the view until all events have ended, so it won't be repainted until the user stops scrolling, dragging, etc.?
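
    One mitigation that is often suggested for repaint lag (a sketch, and not guaranteed to remove this particular stall) is to enable double buffering, which DataGridView inherits from Control but keeps protected:

        using System.Windows.Forms;

        // Exposes the protected DoubleBuffered property so the grid paints
        // off-screen first, which smooths full-grid refreshes.
        public class BufferedDataGridView : DataGridView
        {
            public BufferedDataGridView()
            {
                DoubleBuffered = true;
            }
        }

    Suspending the binding while the DataTable is being rewritten (for example with BindingSource.SuspendBinding/ResumeBinding) is another avenue worth testing.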

    Read the article

  • Typesetting a large matrix in LaTeX

    - by Hooked
    I have a 3x12 matrix I'd like to put into my LaTeX (with amsmath) document, but LaTeX seems to choke when the matrix gets larger than 3x10:

        \begin{equation}
        \textbf{e} =
        \begin{bmatrix}
        1&1&1&1&0&0&0&0&-1&-1&-1&-1\\
        1&-1&0&0&1&1&-1&-1&0&0&1&-1\\
        0&0&1&-1&1&-1&1&-1&1&-1&0&0
        \end{bmatrix}
        \end{equation}

    The error, "Extra alignment tab has been changed to \cr.", tells me that I have more & characters than the bmatrix environment can handle. Is there a proper way to handle this? Also, the alignment of the 1s and the -1s looks different; is that also expected of bmatrix?
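
    The amsmath matrix environments are limited to 10 columns by default via the MaxMatrixCols counter, so raising that counter in the preamble is the usual remedy:

        \setcounter{MaxMatrixCols}{12}  % allow matrix rows with up to 12 columns

    As for the alignment, matrix columns are centered by default, so entries with a minus sign appear slightly shifted relative to the bare digits.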

    Read the article

  • Deliver large volume of automatic notification emails without being throttled

    - by jack
    I think most websites have a need to deliver emails to their users, e.g. account activation emails, private message notifications, comment notifications, etc. Take my site as an example: among 5,000 registered users, about 1,500 signed up with a gmail.com address, 1,000 with yahoo.com and another 1,000 with hotmail.com. Every now and then I receive complaints from users that they never received the account activation email; sometimes it goes to the junk folder, sometimes it just doesn't show up in any folder. Maybe it is being "throttled" for exceeding the maximum number of messages sent from the same IP address to gmail.com/yahoo.com/hotmail.com during a certain period of time? I'm using Postfix and there seems to be no problem with the configuration, since 90% of emails are delivered to gmail.com/yahoo.com/hotmail.com boxes successfully. I notice Twitter delivers millions of such automatic notifications to its users, and I have never missed a message from them. How do they achieve this? Is there a permanent whitelist on gmail.com, yahoo.com or hotmail.com? Thanks in advance.

    Read the article

  • Has anyone used Ant4Eclipse with Project Lombok?

    - by gmcnaughton
    Has anyone successfully used Ant4Eclipse (http://www.ant4eclipse.org/) in combination with Project Lombok (http://projectlombok.org/)? Lombok provides annotations for removing boilerplate code; however, it doesn't appear to play nicely with Ant4Eclipse (headless compilation of Eclipse projects). For instance, the following Lombok sample compiles fine in Eclipse and javac:

        import lombok.Getter;

        public class LombokTest {
            private @Getter String foo;

            public LombokTest()
            {
                String s = this.getFoo();
            }
        }

    But compiling with Ant4Eclipse's <buildJdtProject> yields the following:

        [javac] Compiling 1 source file
        [javac] ----------
        [javac] 1. WARNING in C:\dev\Java\workspace\LombokTest\src\LombokTest.java (at line 4)
        [javac]     private @Getter String foo;
        [javac]             ^^^
        [javac] The field LombokTest.foo is never read locally
        [javac] ----------
        [javac] 2. ERROR in C:\dev\Java\workspace\LombokTest\src\LombokTest.java (at line 8)
        [javac]     String s = this.getFoo();
        [javac]                ^^^^^^
        [javac] The method getFoo() is undefined for the type LombokTest
        [javac] ----------

    Has anyone successfully used these libraries together? Thanks!

    Edit: sample project demonstrating the issue

    Read the article

  • Linux core dumps are too large!

    - by themoondothshine
    Hey guys, recently I've been noticing an increase in the size of the core dumps generated by my application. Initially they were only around 5 MB in size and contained around 5 stack frames; now I have core dumps of 2 GB, and the information contained in them is no different from the smaller dumps. Is there any way I can control the size of the core dumps generated? Shouldn't they be at least smaller than the application binary itself? The binaries are compiled this way: compiled in release mode with debug symbols (i.e. the -g compiler option in GCC); debug symbols copied to a separate file and stripped from the binary; a GNU debug-symbols link added to the binary. At the beginning of the application there is a call to setrlimit which sets the core limit to infinity -- is this the problem?

    Read the article

  • Change Large Number of Record Keys using Map Table

    - by Coyote
    I have a set of records indexed by ID numbers, and I need to convert these records' indexes to new ID numbers. I have a two-column table mapping the old numbers to the new numbers. For example, given these two tables, what would the update statement look like?

    Given:

        OLD_TO_NEW
        oldid | newid
        -----------------
        1234  | 0987
        7698  | 5645
        ...   | ...

    and

        id    | data
        ----------------
        1234  | 'yo'
        7698  | 'hey'
        ...   | ...

    Need:

        id    | data
        ----------------
        0987  | 'yo'
        5645  | 'hey'
        ...   | ...

    This is Oracle, so I have access to PL/SQL; I'm just trying to avoid it.

    Read the article

  • SQL Server: One large persisted computed column for Fulltext Indexing

    - by Alex
    It appears to me to be the easiest, most straightforward solution, but please correct me if I'm wrong. Instead of having a full-text index on all the individual columns of a table, isn't it better to generate one single wide computed column and run the full-text index against that only? It seems to me that this gets rid of all the issues of having multiple columns, including that I can't search for "x AND y", since that will not match a row with "x" present in column 1 and "y" present in column 2. Any counterarguments?

    Read the article

  • Usage of Maven (and open source in general) in high governance and risk-averse large organizations (

    - by bart
    Does anyone have any good stories of these kinds of organizations being open to using open source (such as tools like Maven, etc.)? Many staff I've encountered have little or no exposure to open source or open systems, and open source is treated with great suspicion. Some reasons given for this are lack of support and robustness, which is ironic given the number of end-of-life, unsupported vendor products that are in production. Bonus points for any success stories where you've seen open source go into organizations like this and deliver a real benefit!

    Read the article

  • Performance of large-number calculations in Python (Python 2.7.3 and .NET 4.0)

    - by g36
    There are a lot of general questions about Python performance in comparison to other languages. I've got a more specific example: two simple functions, written in Python and C#, both checking whether an int is prime.

    Python:

        import time

        def is_prime(n):
            num = n / 2
            while num > 1:
                if n % num == 0:
                    return 0
                num -= 1
            return 1

        start = time.clock()
        probably_prime = is_prime(2147483629)
        elapsed = (time.clock() - start)
        print 'time : ' + str(elapsed)

    and C#:

        using System.Diagnostics;

        public static bool IsPrime(int n)
        {
            int num = n / 2;
            while (num > 1)
            {
                if (n % num == 0)
                {
                    return false;
                }
                num -= 1;
            }
            return true;
        }

        Stopwatch sw = new Stopwatch();
        sw.Start();
        bool result = Functions.IsPrime(2147483629);
        sw.Stop();
        Console.WriteLine("time: {0}", sw.Elapsed);

    The times (which are a surprise for me as a beginner in Python): Python: 121 s; C#: 6 s. Could you explain where this big difference comes from?

    Read the article

  • Moving a large number of files in one directory to multiple directories

    - by Axsuul
    I am looking to create a Windows batch script to move about 2,000 files, splitting them up so that there are 10 files per folder. I have attempted to create a batch script, but the syntax really boggles my mind. Here is what I have so far:

        @echo off
        :: Config parameters
        set /a groupsize = 10
        :: initial counter, everytime counter is 1, we create new folder
        set /a n = 1
        :: folder counter
        set /a nf = 1
        for %%f in (*.txt) do (
            :: if counter is 1, create new folder
            if %n% == 1 (
                md folder%nf%
                set /a n += 1
            )
            :: move file into folder
            mv -Y %%f folder%nf%\%%f
            :: reset counter if larger than group size
            if %n% == %groupsize% (
                set /a n = 1
            ) else (
                set /a n += 1
            )
        )
        pause

    Basically, what this script does is loop through each .txt file in the directory. It creates a new directory at the beginning and moves 10 files into it, then creates a new folder and moves another 10 files into that one, and so on. However, I'm having a problem where the n variable is not being incremented in the loop. I'm sure there are other errors too, since the CMD window closes on me even with pause. Any help or guidance is appreciated; thanks for your time!
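
    For comparison only (not a fix for the batch script above), the same grouping logic is sometimes easier to express in a small console program; this C# sketch assumes the .txt files and the new folders live in the current directory:

        using System;
        using System.IO;

        class SplitIntoFolders
        {
            static void Main()
            {
                const int groupSize = 10;
                string[] files = Directory.GetFiles(".", "*.txt");

                for (int i = 0; i < files.Length; i++)
                {
                    // Folder index: files 0-9 go to folder1, 10-19 to folder2, ...
                    string folder = "folder" + (i / groupSize + 1);
                    Directory.CreateDirectory(folder);   // no-op if it already exists
                    File.Move(files[i], Path.Combine(folder, Path.GetFileName(files[i])));
                }
            }
        }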

    Read the article

  • How does an open source project get funded?

    - by squillman
    Other than large entities sponsoring a project, such as the Apache Foundation, in what other ways do full-time open source developers (specifically those working on products offered free of charge) receive funding for their projects? Obviously donations will trickle in, but from what other channels do developers receive a steadier income?

    Read the article

  • Ideal way to deliver large data over Web Services

    - by zengr
    We are trying to design six web services, which will serve another client component. The client component requires data from the web services we are implementing. The problem is that more than one WS is involved: there is one WS that the client component hits, and it initiates a series of five more WSs, which gather data from their respective data stores and finally return it to the original WS, which then delivers the data back to the client component. So if the requested data becomes huge, this will be a serious problem for our internal communication channel. What do you suggest? What can be done to avoid overloading the communication channel between the internal WSs while still delivering the data to the client component?
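
    One option sometimes raised for large payloads (a sketch only, and it assumes the services are WCF-based, which the question does not state) is to return a Stream with streamed transfer instead of buffering the whole response in each hop:

        using System.IO;
        using System.ServiceModel;

        [ServiceContract]
        public interface IAggregatorService
        {
            // With TransferMode.Streamed configured on the binding, the response
            // is not fully buffered in memory on either side of the hop.
            [OperationContract]
            Stream GetData(string query);
        }

    The binding on both ends would need streamed transfer enabled; alternatives include paging the data or letting the client fetch from the downstream services directly.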

    Read the article

  • Importing large datasets on iPhone using CoreData

    - by Matthes
    Hi there, I'm facing a very annoying problem. My iPhone app loads its data from a network server. The data is sent as a plist and, once parsed, needs to be stored in an SQLite database using Core Data. The issue is that in some cases these data sets are too big (5,000+ records) and the import takes way too long. Worse, when the iPhone tries to suspend the screen, the watchdog kills the app because it is still processing the import and does not respond within 5 seconds, so the import never finishes. I have used all the recommended techniques from the article "Efficiently Importing Data" http://developer.apple.com/mac/library/DOCUMENTATION/Cocoa/Conceptual/CoreData/Articles/cdImporting.html and other documents on the subject, but it is still awfully slow. The solution I'm looking for is to let the app suspend but keep the import running in the background (preferably), or to prevent attempts to suspend the app at all. Any better idea is welcome too. Any tips on how to overcome these issues are highly appreciated! Thanks

    Read the article

  • Creating a Large Matrix in ff

    - by Ryan Rosario
    I am trying to create a huge matrix in ff, and I know that ff is good for this sort of thing. But there is a major problem: the dimensions of the matrix exceed .Machine$integer.max! I am running on a 64-bit machine, using 64-bit R and 64-bit ff. Is there any way to get around this problem? It has been suggested that R is using the MAXINT value from stdint.h. Is there any way to fix this without changing that file and possibly breaking the build?

        > ffMatrix <- ff(vmode="boolean", dim=c(1e10,1e10))
        Error in if (length < 0 || length > .Machine$integer.max) stop("length must be between 1 and .Machine$integer.max") :
          missing value where TRUE/FALSE needed
        In addition: Warning message:
        In ff(vmode = "boolean", dim = c(1e+10, 1e+10)) : NAs introduced by coercion

        > 1e+10 > .Machine$integer.max
        [1] TRUE

    Read the article
