Search Results

Search found 11409 results on 457 pages for 'large teams'.

  • Javascript: Properly Setting A Text Area

    - by Jeremy Person
    I have a textarea that defaults to "N/A" so that I can force something to be entered, but people are typing a large amount of text and my handlers clear it out and force the "N/A" back in. How can I make the script below keep anything the user has already typed (and not clear it out), while still clearing the default "N/A"? <textarea name="req_WhatMadeItDifficultToUse" cols="35" onfocus="this.value = '';" onblur="if(this.value == '') this.value = 'N/A';" id="WhatMadeItDifficultToUse">N/A</textarea>
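
    One way to fix it (a small sketch, not taken from the linked article; the field names come from the snippet above): only clear the field when it still holds the placeholder text, so real input is left alone.

      <textarea name="req_WhatMadeItDifficultToUse" id="WhatMadeItDifficultToUse" cols="35"
                onfocus="if (this.value === 'N/A') this.value = '';"
                onblur="if (this.value === '') this.value = 'N/A';">N/A</textarea>

    The onblur handler is unchanged; only the onfocus check is new, so the default still reappears whenever the field is left empty.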

    Read the article

  • How to switch from VARCHAR to TEXT in SQL 2000?

    - by MatthewMartin
    What do I need to consider before I switch a bunch of fields from VARCHAR(bignumber) to TEXT? Aside from performance, the fact that TEXT will be deprecated at some point in the future, and the fact that it looks like I need to drop and recreate the table to alter the column's data type? This is for SQL Server 2000 -- I can't use VARCHAR(max), and VARCHAR(8000) isn't large enough.

    Read the article

  • In an AVL tree, under what condition is the balancing to be done? Proper code in C language

    - by bachchan
    Binary search follows the divide-and-conquer method, whereas linear search does not. The time complexity of binary search is O(log n), but in the case of linear search the time complexity is O(n). That is why binary search is preferred over linear search. However, this is only true when the list of items is large; for a smaller list, linear search is best (i.e. it is only when the best case is the concern).

    Read the article

  • read from file after calling lseek64 - Linux

    - by rursw1
    Hi, I'm trying to read a large file (over 2.0 GB). The seeking is done with lseek64, then I tried to read using read(fileHandle, buffer, bufferLength) and pread64(fileHandle, buffer, bufferLength, offset) - but both return -1. What could it be? Thanks in advance!
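
    A minimal sketch of the usual first debugging step (none of this is from the poster's code; the file name and offset are made up): open the file with large-file support and print errno when a call fails, since the errno value normally pinpoints where the -1 comes from.

      /* _LARGEFILE64_SOURCE exposes lseek64, off64_t and O_LARGEFILE in glibc. */
      #define _LARGEFILE64_SOURCE
      #include <errno.h>
      #include <fcntl.h>
      #include <stdio.h>
      #include <string.h>
      #include <unistd.h>

      int main(void) {
          char buffer[4096];

          /* Without O_LARGEFILE (or -D_FILE_OFFSET_BITS=64) a 32-bit process
             cannot even open a file larger than 2 GB. */
          int fd = open("big.dat", O_RDONLY | O_LARGEFILE);
          if (fd < 0) { perror("open"); return 1; }

          off64_t offset = 3000000000LL;   /* somewhere past the 2 GB mark */
          if (lseek64(fd, offset, SEEK_SET) == (off64_t)-1) {
              perror("lseek64");
              return 1;
          }

          ssize_t n = read(fd, buffer, sizeof buffer);
          if (n < 0) {
              /* EBADF, EFAULT, EINVAL, ... - the message says which one. */
              fprintf(stderr, "read failed: %s\n", strerror(errno));
              return 1;
          }
          printf("read %zd bytes at offset %lld\n", n, (long long)offset);
          close(fd);
          return 0;
      }

    On modern glibc it is often simpler to compile everything with -D_FILE_OFFSET_BITS=64 and use plain open/lseek/off_t instead of the explicit 64-bit variants.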

    Read the article

  • Database design advice needed.

    - by user346271
    Hi all, I'm a lone developer for a telecoms company, and am after some database design advice from anyone with a bit of time to answer. I am inserting ~2 million rows each day into one table; these tables then get archived and compressed on a monthly basis. Each monthly table contains ~15,000,000 rows, and this is increasing month on month. For every insert above, I combine the data from rows which belong together and create another "correlated" table. This table is currently not being archived, as I need to make sure I never miss an update to the correlated table (hope that makes sense), although in general this information should remain fairly static after a couple of days of processing. All of the above is working perfectly.

    However, my company now wishes to perform some stats against this data, and these tables are getting too large to provide the results in what would be deemed a reasonable time, even with the appropriate indexes set. So I guess after all the above my question is quite simple: should I write a script which groups the data from my correlated table into smaller tables, or should I store the query result sets in something like memcache? I'm already using MySQL's cache, but due to having limited control over how long the data is stored for, it's not working ideally.

    The main advantages I can see of using something like memcache:
    - No blocking on my correlated table after the query has been cached.
    - Greater flexibility in sharing the collected data between the backend collector and the front-end processor (i.e. custom reports could be written in the backend and the results of these stored in the cache under a key, which then gets shared with anyone who would want to see the data of this report).
    - Redundancy and scalability if we start sharing this data with a large number of customers.

    The main disadvantage I can see of using something like memcache:
    - Data is not persistent if the machine is rebooted / the cache is flushed.

    The main advantages of using MySQL:
    - Persistent data.
    - Fewer code changes (although adding something like memcache is trivial anyway).

    The main disadvantages of using MySQL:
    - Having to define table templates every time I want to store a new set of grouped data.
    - Having to write a program which loops through the correlated data and fills these new tables.
    - It will potentially still grow slower as the data continues to be filled.

    Apologies for quite a long question. It's helped me to write down these thoughts here anyway, and any advice/help/experience with dealing with this sort of problem would be greatly appreciated. Many thanks. Alan

    Read the article

  • Qt + QTextEdit content into QDomDocument

    - by kaycee
    Hi, I have a QTextEdit widget with a large amount of content in it (the content is XML). I want to take the content and set it into a QDomDocument, so I grab the content using document = textEdit->document(); but I don't know how to get it from there into a QDomDocument... What's the best way to do it?
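
    One possible approach (a small sketch, not taken from the article; the function name loadXmlFromEditor is made up): skip the QTextDocument step and hand the widget's plain text straight to QDomDocument::setContent, which also reports where the XML fails to parse.

      #include <QDomDocument>
      #include <QString>
      #include <QTextEdit>

      // Parse the editor's text as XML; returns false and fills errorMsg on failure.
      bool loadXmlFromEditor(QTextEdit *textEdit, QDomDocument &doc, QString &errorMsg)
      {
          int errorLine = 0, errorColumn = 0;
          // toPlainText() returns the raw XML typed or pasted into the widget.
          if (!doc.setContent(textEdit->toPlainText(), &errorMsg, &errorLine, &errorColumn)) {
              errorMsg = QString("XML error at line %1, column %2: %3")
                             .arg(errorLine).arg(errorColumn).arg(errorMsg);
              return false;
          }
          return true;
      }

    For very large XML, a streaming parser such as QXmlStreamReader may be a better fit than building a full DOM, but that is a separate trade-off.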

    Read the article

  • Can I host an ASP.NET website outside of IIS?

    - by boraer
    Hi everybody, I need to write an ASP.NET application which must handle a very large number of transactions per second - as many as 5000 users may transact at the same time. I think I will use WCF on the back end to communicate with SQL Server. But on the front end, can IIS handle 5000 users at the same time effectively, or is there any simple way to host my application outside of IIS?

    Read the article

  • Is Cassandra database row size limited by available memory?

    - by Adam Hollidge
    I'm working with very long time series -- hundreds of millions of data points in one series -- and am considering Cassandra as a data store. In this question, one of the Cassandra committers (the über helpful jbellis) says that Cassandra rows can be very large, and that column slicing operations are faster than row slices, hence my question: Is the row size still limited by available memory?

    Read the article

  • Read a text file in Java

    - by user326091
    Hi, I have a text file and I would like to retrieve the content from one line to another line. For example, the file may be 200K lines long and I want to read the content from line 78 to line 2735. Since the file may be very large, I do not want to read the whole content into memory. Thanks, Frank
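
    A minimal sketch of one way to do this (the line numbers 78 and 2735 come from the question; the class and file names are made up): stream the file with a BufferedReader and keep only the requested range, so memory use is bounded by that range rather than by the whole file.

      import java.io.BufferedReader;
      import java.io.FileReader;
      import java.io.IOException;
      import java.util.ArrayList;
      import java.util.List;

      public class LineRange {
          // Returns lines firstLine..lastLine (1-based, inclusive) without loading the whole file.
          static List<String> readRange(String path, int firstLine, int lastLine) throws IOException {
              List<String> result = new ArrayList<>();
              try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
                  String line;
                  int current = 0;
                  while ((line = reader.readLine()) != null) {
                      current++;
                      if (current > lastLine) break;      // stop early; no need to scan the rest
                      if (current >= firstLine) result.add(line);
                  }
              }
              return result;
          }

          public static void main(String[] args) throws IOException {
              for (String line : readRange("big.txt", 78, 2735)) {
                  System.out.println(line);
              }
          }
      }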

    Read the article

  • Detect pointer arithmetic because of LARGEADDRESSAWARE

    - by Suma
    I would like to switch my application to LARGEADDRESSAWARE. One of the issues to watch for is pointer arithmetic, as a pointer difference can no longer be represented as a signed 32-bit value. Is there some way to automatically find all instances of pointer subtraction in a large C++ project? If not, is there some "least effort" manual or semi-automatic method to achieve this?

    Read the article

  • How do you check the presence of many keys in a Python dictionary?

    - by Thierry Lam
    I have the following dictionary: sites = { 'stackoverflow': 1, 'superuser': 2, 'meta': 3, 'serverfault': 4, 'mathoverflow': 5 } To check that more than one key is present in the above dictionary, I would do something like: 'stackoverflow' in sites and 'serverfault' in sites The above is maintainable with only 2 key lookups. Is there a better way to handle checking a large number of keys in a very big dictionary?
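
    Two common idioms for this (a sketch only; the key names come from the question, the wanted list is made up): a generator expression with all(), or a set comparison against the dictionary's keys.

      sites = {
          'stackoverflow': 1,
          'superuser': 2,
          'meta': 3,
          'serverfault': 4,
          'mathoverflow': 5,
      }

      wanted = ['stackoverflow', 'serverfault', 'meta']

      # Scales to any number of keys and short-circuits on the first miss.
      if all(key in sites for key in wanted):
          print("all keys present")

      # Equivalent set-based check (Python 3 dict keys behave like a set).
      if set(wanted) <= sites.keys():
          print("all keys present")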

    Read the article

  • SQL Query Theory Question...

    - by Keng
    I have a large historical transaction table (15-20 million rows, MANY columns) and a table with one row and one column. The one-row table contains a date (the last processing date, 'process_date') which will be used to pull the data from the transaction table. Question: should I inner join the 'process_date' table to the transaction table, or the transaction table to the 'process_date' table?

    Read the article

  • I need to summarise months into years, how can I do that?

    - by Tay
    Hi, I'm working on a fairly large spreadsheet which is broken down into months. I want to put the yearly subtotals at the beginning of the spreadsheet. So in cell A1 I want to add AA2-AL2, in B1 I want to add AM2-AX2, and so on and so forth. How can I do this without manually going to each set of values? Is there any way I can put a formula in A1 which I can copy to B1, C1, D1, E1, and which will pick up a set of 12 cells each (with no overlap)?
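
    One formula that fits the ranges described above (a sketch only; it assumes the monthly figures really do start in column AA of row 2): use OFFSET so that each copy steps 12 columns further to the right.

      =SUM(OFFSET($AA$2, 0, (COLUMN(A1)-1)*12, 1, 12))

    Entered in A1 this sums AA2:AL2; copied to B1 it sums AM2:AX2, and so on, because COLUMN(A1) increases by one with each copy to the right and the offset therefore moves in steps of 12 columns.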

    Read the article

  • Suggestions for developing a search engine

    - by MohamedGooner
    I want to develop a simple search engine, using ASP.NET and C#, where I can search for a word that is contained in a very big text (like the Holy Bible or something like that), and then the program shows the user where the word is. I have no idea which database I should put this large text into, or which method I should use to search for a word. Any suggestions would help me, and if anyone has a tutorial for anything similar it would benefit me.

    Read the article

  • Execution plan issue requires reset on SQL Server 2005, how to determine cause?

    - by Tony Brandner
    We have a web application that delivers training to thousands of corporate students running on top of SQL Server 2005. Recently, we started seeing that a single specific query in the application went from 1 second to about 30 seconds in terms of execution time. The application started throwing timeouts in that area.

    Our first thought was that we may have incorrect indexes, so we reviewed the tables and indexes. However, similar queries elsewhere in the application also run quickly. Reviewing the indexes showed us that they were configured as expected. We were able to narrow it down to a single query, not a stored procedure. Running this query in SQL Studio also runs quickly. We tried running the application in a different server environment - a different web server with the same query, parameters and database - and the query still ran slow.

    The query is a fairly large one related to determining a student's current list of training. It includes joins and left joins on a dozen tables and subqueries. A few of the tables are fairly large (hundreds of thousands of rows) and some of the other tables are small lookup tables. The query uses a grouping clause and a few where conditions. A few of the tables are quite active and the contents change often, but the volume of added rows doesn't seem extreme.

    These symptoms led us to consider the execution plan. First off, as soon as we reset the execution plan cache with the SQL command 'DBCC FREEPROCCACHE', the problem went away. Unfortunately, the problem started to reoccur within a few days, and it has continued to plague us for a while now. It's usually the same query, but we did appear to see the problem occur in another single query recently. It happens enough to be a nuisance, and we're having a heck of a time trying to fix it since we can't reproduce it in any environment other than production.

    I have downloaded the High Availability guide from Red Gate and I read up more on execution plans. I hope to run the profiler on the live server, but I'm a bit concerned about the impact. I would like to ask: what is the best way to figure out what is triggering this problem? Has anyone else seen this same issue?

    Read the article

  • How to stop jQuery UI draggable from also triggering a click event

    - by James Tauber
    I have a large div (a map) that is draggable via jQuery UI draggable. The div has child divs which are clickable. My problem is that if you drag the map, then on mouse up the click event is fired on whatever child div you started the drag from. How do I stop the mouse up from triggering the click event when it is part of a drag (as opposed to someone just clicking without a drag, in which case the click event is desired)?
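
    One common pattern (a sketch only, assuming a jQuery version with .on(); the #map and .poi selectors are made up): set a flag when a real drag starts - jQuery UI only fires the start callback once the pointer actually moves - and have the click handler ignore the click that ends such a drag.

      var dragging = false;

      $('#map').draggable({
          // fires only when the pointer actually moves, not on a plain click
          start: function () { dragging = true; }
      });

      // reset before every potential click/drag on a child div
      $('#map').on('mousedown', '.poi', function () { dragging = false; });

      $('#map').on('click', '.poi', function () {
          if (dragging) { return; }   // this "click" was really the end of a drag
          // ...handle the genuine click on the child div here...
      });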

    Read the article

  • Best way to restrict access to a folder in Dropbox

    - by Joe S
    I currently run a business with around 10 staff members and we currently use Dropbox Pro 100GB to share all of our files. It works very well and is inexpensive; however, I am taking on a number of new staff and would like to move the more sensitive documents into their own, protected folder. Currently we all share one Dropbox account. I am aware that Dropbox for Teams supports this, but it is far too expensive for us as a small company. I have researched a number of solutions:

    1) Set up a new standard Dropbox account just for use by management, which will contain all of the sensitive documents, and join the shared folder of the rest of my team to access the rest of the documents. As I understand it, this is not possible with a free account, as any Dropbox shared folder added to your account will use up your quota.

    2) Set up some sort of TrueCrypt container, install TrueCrypt on each trusted staff member's machine, and store the documents inside that. Would this be difficult to use? I'd imagine the sync-ing would not work so well, as the disk would technically be mounted at the time of use and any change would be a change to the actual container rather than to individual files.

    I was just wondering if anyone knows a way to do this without the drawbacks outlined above? Thanks!

    Read the article

  • Using many mutex locks

    - by hanno
    I have a large tree structure on which several threads are working at the same time. Ideally, I would like to have an individual mutex lock for each cell. I looked at the definition of pthread_mutex_t in bits/pthreadtypes.h and it is fairly short, so the memory usage should not be an issue in my case. However, is there any performance penalty when using many (let's say a few thousand) different pthread_mutex_ts for only 8 threads?

    Read the article

  • SQL design question regarding schema, and whether a name-value pair is the best solution

    - by Aur
    I am having a small problem trying to decide on a database schema for a current project. I am by no means a DBA. The application parses through a file based on user input and enters that data into the database. The number of fields that can be parsed is between 1 and 42 at the current moment.

    The current design of the database is entirely flat, with 42 columns; some are repeated columns such as address1, address2, address3, etc. That says I should normalize the data. However, data integrity is not needed at this moment, and the way the data is shaped I'm looking at several joins. Not a bad thing, but the data is still in a 1-to-1 relationship and I still see a lot of empty fields per row. So my concern is that this does not allow the database or the application to be very extendable: if they want to add more fields to be parsed (which they do), I'd need to create another table and add another foreign key to the linking table.

    The third option is a table where the fields are defined and a table for each record. So what I was thinking is to make a table that stores the value and then links to those two tables. The problem is I can picture the size of that table growing large depending on the input size: if someone gives me a file with 300,000 records, then 300,000 x 40 = 12 million rows, so I have some reservations. However, I think if I get to that point I should be happy it is being used. This option also allows for more custom displaying of information; a bit more work, but little rework even if more fields are added.

    So the problem boils down to:
    1. The current design is a flat file, which makes extending it hard, and it is not normalized.
    2. Normalize the tables, although there are no real benefits for the moment, but requirements change.
    3. Normalize it down into the name-value pair and hope the size doesn't hurt.

    There are a large number of inserts, updates, and selects against that table, so performance is a worry, but I believe the saying is "design now, performance-test later"? I'm probably just missing something practical, so any comments would be appreciated, even if it's a quick sanity check. Thank you for your time.
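
    For reference, a rough sketch of what option 3 (the name-value / EAV layout) could look like - every table and column name here is made up for illustration, not taken from the project:

      -- One row per field the parser knows about.
      CREATE TABLE field_def (
          field_id   INT         NOT NULL PRIMARY KEY,
          field_name VARCHAR(64) NOT NULL
      );

      -- One row per parsed record (one line of the input file).
      CREATE TABLE record (
          record_id INT NOT NULL PRIMARY KEY
      );

      -- One row per (record, field) pair that actually has a value,
      -- so empty fields cost nothing.
      CREATE TABLE record_value (
          record_id INT          NOT NULL,
          field_id  INT          NOT NULL,
          value     VARCHAR(255) NOT NULL,
          PRIMARY KEY (record_id, field_id),
          FOREIGN KEY (record_id) REFERENCES record (record_id),
          FOREIGN KEY (field_id)  REFERENCES field_def (field_id)
      );

    Adding a new parseable field is then just a new row in field_def, at the cost of record_value growing to roughly records x populated-fields rows.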

    Read the article

  • Typemock - Worth the money?

    - by AngryHacker
    I know that this is a subjective question... Typemock is $799 per developer, so licences for 5 devs come to a pretty large sum. For those here who have used Typemock, and given that there are open-source mocking frameworks, is it worth the money? Why?

    Read the article

  • Can the .NET MethodInfo cache be cleared or disabled?

    - by Anton
    Per MSDN, calling Type.GetMethods() stores reflected method information in a MemberInfo cache so the expensive operation doesn't have to be performed again. I have an application that scans assemblies/types, looking for methods that match a given specification. The problem is that memory consumption increases significantly (especially with large numbers of referenced assemblies) since .NET hangs onto the method metadata. Is there any way to clear or disable this MemberInfo cache?

    Read the article

  • How to generate a random BigInteger value in Java?

    - by Bill the Lizard
    I need to generate arbitrarily large random integers in the range 0 (inclusive) to n (exclusive). My initial thought was to call nextDouble and multiply by n, but once n gets to be larger than 2^53, the results would no longer be uniformly distributed. BigInteger has the following constructor available: public BigInteger(int numBits, Random rnd) - constructs a randomly generated BigInteger, uniformly distributed over the range 0 to (2^numBits - 1), inclusive. How can this be used to get a random value in the range 0 - n, where n is not a power of 2?
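
    A minimal sketch of the standard rejection-sampling approach built on that constructor (the class and method names are made up): draw uniform values with n.bitLength() bits and discard anything not below n, which keeps the result uniform over [0, n).

      import java.math.BigInteger;
      import java.util.Random;

      public final class RandomBigInt {

          // Uniformly distributed value in [0, n); assumes n > 0.
          public static BigInteger nextBigInteger(BigInteger n, Random rnd) {
              BigInteger candidate;
              do {
                  // Uniform over [0, 2^bitLength - 1]; at most two draws are
                  // needed on average because n >= 2^(bitLength - 1).
                  candidate = new BigInteger(n.bitLength(), rnd);
              } while (candidate.compareTo(n) >= 0);
              return candidate;
          }
      }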

    Read the article

  • Possible to lock attribute write access by Doors User?

    - by Philip Nguyen
    Is it possible to programmatically lock certain attributes based on the user? That is, certain attributes can be written to by User2 and certain attributes cannot be written to by User2, while User1 may have write access to all attributes. What is the most efficient way of accomplishing this? I have to be careful not to take up too many computational resources, as I would like this to work on quite large modules.

    Read the article

  • Splitting a file before upload?

    - by Yevgeniy Brikman
    On a webpage, is it possible to split large files into chunks before the file is uploaded to the server? For example, split a 10MB file into 1MB chunks, and upload one chunk at a time while showing a progress bar? It sounds like JavaScript doesn't have any file manipulation abilities, but what about Flash and Java applets? This would need to work in IE6+, Firefox and Chrome. Update: forgot to mention that (a) we are using Grails and (b) this needs to run over https.

    Read the article
