Search Results

Search found 11409 results on 457 pages for 'large teams'.


  • Read text file in java

    - by user326091
    Hi, I have a text file. I would like to retrieve the content from one line to another line. For example, the file may be 200K lines, and I want to read the content from line 78 to line 2735. Since the file may be very large, I do not want to read the whole content into memory. Thanks, Frank
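
    A minimal sketch of one way to do this (the file name "input.txt" and the 1-based line numbers are illustrative assumptions, not from the original post): stream the file with a BufferedReader and keep only the lines in the requested range, so the whole file never sits in memory at once.

        // Read only lines 78..2735 (1-based) without loading the whole file.
        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;

        public class LineRange {
            public static void main(String[] args) throws IOException {
                int from = 78, to = 2735;
                StringBuilder content = new StringBuilder();
                try (BufferedReader reader = new BufferedReader(new FileReader("input.txt"))) {
                    String line;
                    int current = 0;
                    while ((line = reader.readLine()) != null) {
                        current++;
                        if (current < from) continue;  // not yet in the range
                        if (current > to) break;       // past the range, stop reading
                        content.append(line).append('\n');
                    }
                }
                System.out.print(content);
            }
        }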

    Read the article

  • In an AVL tree, under what condition is the balancing to be done? (Proper code in C language)

    - by bachchan
    Binary search follows the divide-and-conquer method, whereas linear search doesn't. The time complexity of binary search is O(log n), but in the case of linear search the time complexity is O(n). That is why binary search is preferred over linear search. However, this is true only when the list of items is large; in the case of a smaller list, linear search is best (i.e. it is only a best-case concern).
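
    The excerpt above compares binary and linear search rather than AVL balancing, so here is a minimal C sketch of the two search routines being compared (illustrative only; the array and key are made up):

        #include <stdio.h>

        /* O(n): inspect every element until the key is found. */
        int linear_search(const int a[], int n, int key)
        {
            for (int i = 0; i < n; i++)
                if (a[i] == key)
                    return i;
            return -1;
        }

        /* O(log n): halve the interval each step; the array must be sorted. */
        int binary_search(const int a[], int n, int key)
        {
            int lo = 0, hi = n - 1;
            while (lo <= hi) {
                int mid = lo + (hi - lo) / 2;
                if (a[mid] == key)
                    return mid;
                if (a[mid] < key)
                    lo = mid + 1;
                else
                    hi = mid - 1;
            }
            return -1;
        }

        int main(void)
        {
            int a[] = {2, 5, 8, 12, 16, 23, 38, 56, 72, 91};
            int n = (int)(sizeof a / sizeof a[0]);
            printf("linear: %d, binary: %d\n", linear_search(a, n, 23), binary_search(a, n, 23));
            return 0;
        }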

    Read the article

  • Qt + QTextEdit content into QDomDocument

    - by kaycee
    Hi, I have a QTextEdit widget with large content in it (the content is XML). I want to take that content and set it into a QDomDocument, so I grab it using document = textEdit->document(); but I don't know how to get it from there into a QDomDocument... What's the best way to do it?
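
    A minimal sketch, assuming a Qt project with the XML module available (the helper name loadXmlFromEditor is illustrative): take the editor's text with toPlainText() and hand it to QDomDocument::setContent(), which parses the string and reports any XML error.

        #include <QTextEdit>
        #include <QDomDocument>
        #include <QString>
        #include <QDebug>

        void loadXmlFromEditor(QTextEdit *textEdit)
        {
            QDomDocument doc;
            QString errorMsg;
            int errorLine = 0, errorColumn = 0;

            // toPlainText() returns the editor contents as a QString;
            // setContent() parses that string as an XML document.
            if (!doc.setContent(textEdit->toPlainText(), &errorMsg, &errorLine, &errorColumn)) {
                qWarning() << "XML parse error at line" << errorLine
                           << "column" << errorColumn << ":" << errorMsg;
                return;
            }
            qDebug() << "Root element:" << doc.documentElement().tagName();
        }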

    Read the article

  • MySQL search for user and their roles

    - by Jenkz
    I am re-writing the SQL which lets a user search for any other user on our site and also shows their roles. As an example, roles can be "Writer", "Editor", "Publisher". Each role links a User to a Publication, and users can take multiple roles within multiple publications.

    Example table setup:
    "users" : user_id, firstname, lastname
    "publications" : publication_id, name
    "link_writers" : user_id, publication_id
    "link_editors" : user_id, publication_id

    Current pseudo SQL:
    SELECT * FROM (
        (SELECT user_id FROM users WHERE firstname LIKE '%Jenkz%')
        UNION
        (SELECT user_id FROM users WHERE lastname LIKE '%Jenkz%')
    ) AS dt
    JOIN (ROLES STATEMENT) AS roles ON roles.user_id = dt.user_id

    At the moment my roles statement is:
    SELECT dt2.user_id, dt2.publication_id, dt2.role FROM (
        (SELECT 'writer' AS role, link_writers.user_id, link_writers.publication_id FROM link_writers)
        UNION
        (SELECT 'editor' AS role, link_editors.user_id, link_editors.publication_id FROM link_editors)
    ) AS dt2

    The reason for wrapping the roles statement in UNION clauses is that some roles are more complex and require a table join to find the publication_id and user_id. As an example, "publishers" might be linked across two tables:
    "link_publishers" : user_id, publisher_group_id
    "link_publisher_groups" : publisher_group_id, publication_id

    So in that instance, the query forming part of my UNION would be:
    SELECT 'publisher' AS role, lp.user_id, lpg.publication_id
    FROM link_publishers AS lp
    JOIN link_publisher_groups AS lpg ON lpg.publisher_group_id = lp.publisher_group_id

    I'm pretty confident that my table setup is good (I was warned off the one-table-for-all system when researching the layout). My problem is that there are now 100,000 rows in the users table and up to 70,000 rows in each of the link tables. The initial lookup in the users table is fast, but the joining really slows things down. How can I only join on the relevant roles?

    -------------------------- EDIT ----------------------------------

    EXPLAIN output above (open in a new window to see the full resolution). The bottom bit in red is the "WHERE firstname LIKE '%Jenkz%'"; the third row searches WHERE CONCAT(firstname, ' ', lastname) LIKE '%Jenkz%', hence the large row count, but I think this is unavoidable, unless there is a way to put an index across concatenated fields? The green bit at the top just shows the total rows scanned from the ROLES STATEMENT. You can then see each individual UNION clause (#6 - #12), which all show a large number of rows. Some of the indexes are normal, some are unique. It seems that MySQL isn't optimizing to use dt.user_id as a comparison inside the UNION statements. Is there any way to force this behaviour? Please note that my real setup is not publications and writers but "webmasters", "players", "teams", etc.

    Read the article

  • Higher level database layer for Android?

    - by sweetiecakes
    Are there any good database abstraction layers/object relational mappers/ActiveRecord implementations/whatever they are called for Android? I'm aware that db4o is officially supported, but it has quite a large footprint and I'd rather use a more conventional database (SQLite).

    Read the article

  • Detect pointer arithmetic because of LARGEADDRESSAWARE

    - by Suma
    I would like to switch my application to LARGEADDRESSAWARE. One of the issues to watch for is pointer arithmetic, as a pointer difference can no longer be represented as a signed 32-bit value. Is there some way to automatically find all instances of pointer subtraction in a large C++ project? If not, is there some "least effort" manual or semi-automatic method to achieve this?
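
    One possible "least effort" convention, sketched below as an assumption rather than a detection tool: route pointer subtraction through a small helper so every call site is easy to grep for, and so a difference that no longer fits in 32 bits trips an assertion at runtime.

        #include <cassert>
        #include <cstddef>
        #include <cstdint>
        #include <climits>

        template <typename T>
        std::ptrdiff_t checked_ptr_diff(const T *a, const T *b)
        {
            // Compute the byte distance in 64 bits so a gap larger than 2 GB
            // is not silently truncated before it can be checked.
            std::int64_t bytes =
                static_cast<std::int64_t>(reinterpret_cast<std::uintptr_t>(a)) -
                static_cast<std::int64_t>(reinterpret_cast<std::uintptr_t>(b));
            assert(bytes >= INT_MIN && bytes <= INT_MAX &&
                   "pointer difference no longer fits in a signed 32-bit value");
            return static_cast<std::ptrdiff_t>(bytes / static_cast<std::int64_t>(sizeof(T)));
        }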

    Read the article

  • Database design advice needed.

    - by user346271
    Hi all, I'm a lone developer for a telecoms company, and am after some database design advice from anyone with a bit of time to answer.

    I am inserting ~2 million rows each day into one table, and these tables then get archived and compressed on a monthly basis. Each monthly table contains ~15,000,000 rows, and this is increasing month on month. For every insert I do above, I am combining the data from rows which belong together and creating another "correlated" table. This table is currently not being archived, as I need to make sure I never miss an update to the correlated table (hope that makes sense), although in general this information should remain fairly static after a couple of days of processing.

    All of the above is working perfectly. However, my company now wishes to perform some stats against this data, and these tables are getting too large to provide the results in what would be deemed a reasonable time, even with the appropriate indexes set.

    So I guess after all the above my question is quite simple: should I write a script which groups the data from my correlated table into smaller tables, or should I store the query result sets in something like memcache? I'm already using MySQL's query cache, but due to having limited control over how long the data is stored for, it's not working ideally.

    The main advantages I can see of using something like memcache:
    - No blocking on my correlated table after the query has been cached.
    - Greater flexibility in sharing the collected data between the backend collector and the front end processor (i.e. custom reports could be written in the backend and their results stored in the cache under a key which then gets shared with anyone who wants to see the data of this report).
    - Redundancy and scalability if we start sharing this data with a large number of customers.

    The main disadvantages I can see of using something like memcache:
    - Data is not persistent if the machine is rebooted / the cache is flushed.

    The main advantages of using MySQL:
    - Persistent data.
    - Fewer code changes (although adding something like memcache is trivial anyway).

    The main disadvantages of using MySQL:
    - Have to define table templates every time I want to store a new set of grouped data.
    - Have to write a program which loops through the correlated data and fills these new tables.
    - Will potentially still grow slower as the data continues to be filled.

    Apologies for quite a long question. It has helped me to write down these thoughts here anyway, and any advice/help/experience with dealing with this sort of problem would be greatly appreciated. Many thanks, Alan

    Read the article

  • Execution plan issue requires reset on SQL Server 2005, how to determine cause?

    - by Tony Brandner
    We have a web application that delivers training to thousands of corporate students, running on top of SQL Server 2005. Recently, we started seeing that a single specific query in the application went from 1 second to about 30 seconds in execution time, and the application started throwing timeouts in that area.

    Our first thought was that we might have incorrect indexes, so we reviewed the tables and indexes. However, similar queries elsewhere in the application also run quickly, and reviewing the indexes showed us that they were configured as expected. We were able to narrow it down to a single query, not a stored procedure, and running this query in SQL Studio also runs quickly. We tried running the application in a different server environment (a different web server with the same query, parameters and database), and the query still ran slow.

    The query is a fairly large one related to determining a student's current list of training. It includes joins and left joins on a dozen tables and subqueries. A few of the tables are fairly large (hundreds of thousands of rows) and some of the other tables are small lookup tables. The query uses a grouping clause and a few where conditions. A few of the tables are quite active and the contents change often, but the volume of added rows doesn't seem extreme.

    These symptoms led us to consider the execution plan. First off, as soon as we reset the execution plan cache with the SQL command 'DBCC FREEPROCCACHE', the problem went away. Unfortunately, the problem started to reoccur within a few days, and it has continued to plague us for a while now. It's usually the same query, but we did appear to see the problem occur in another single query recently. It happens enough to be a nuisance, and we're having a heck of a time trying to fix it since we can't reproduce it in any environment other than production.

    I have downloaded the High Availability guide from Red Gate and I have read up more on execution plans. I hope to run the profiler on the live server, but I'm a bit concerned about the impact. I would like to ask: what is the best way to figure out what is triggering this problem? Has anyone else seen this same issue?

    Read the article

  • Can I host an ASP.NET website outside of IIS?

    - by boraer
    Hi everybody, I need to write an ASP.NET application which must handle a very large number of transactions per second - as many as 5000 users may transact at the same time. I think I will use WCF on the back end to communicate with SQL Server. But on the front end, can IIS handle 5000 users at the same time effectively, or is there any simple way to host my application outside of IIS?

    Read the article

  • How do you check the presence of many keys in a Python dictionary?

    - by Thierry Lam
    I have the following dictionary:
    sites = { 'stackoverflow': 1, 'superuser': 2, 'meta': 3, 'serverfault': 4, 'mathoverflow': 5 }
    To check if more than one key is available in the above dictionary, I will do something like:
    'stackoverflow' in sites and 'serverfault' in sites
    The above is maintainable with only 2 key lookups. Is there a better way to handle checking a large number of keys in a very big dictionary?
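
    A minimal sketch of two common ways to test several keys at once (the 'wanted' set is an illustrative assumption):

        sites = {
            'stackoverflow': 1,
            'superuser': 2,
            'meta': 3,
            'serverfault': 4,
            'mathoverflow': 5,
        }

        wanted = {'stackoverflow', 'serverfault', 'mathoverflow'}

        # Option 1: set containment, True only if every wanted key is present.
        print(wanted.issubset(sites))

        # Option 2: all() with a generator, short-circuits on the first missing key.
        print(all(key in sites for key in wanted))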

    Read the article

  • Suggestions for developing a search engine

    - by MohamedGooner
    I want to develop a simple search engine, using ASP.NET and C#, where I can search for a word which is contained in a very big text (like the Holy Bible or something like that); then the program shows the user where the word is. I have no idea which database I can put this large text in, or which method I should use to search for a word. Any suggestions will help me, and if anyone has a tutorial for anything similar it will benefit me.

    Read the article

  • Best way to restrict access to a folder in Dropbox

    - by Joe S
    I currently run a business with around 10 staff members and we currently use Dropbox Pro 100GB to share all of our files. It works very well and is inexpensive; however, I am taking on a number of new staff and would like to move the more sensitive documents into their own, protected folder. Currently, we all share one Dropbox account. I am aware that Dropbox for Teams supports this, but it is far too expensive for us as a small company. I have researched a number of solutions:

    1) Set up a new standard Dropbox account just for use by management, which will contain all of the sensitive documents, and join the shared folder of the rest of my team to access the rest of the documents. As I understand it, this is not possible with a free account, as any Dropbox shared folder added to your account will use up your quota.

    2) Set up some sort of TrueCrypt container, install TrueCrypt on each trusted staff member's machine, and store the documents inside that. Would this be difficult to use? I'd imagine the syncing would not work so well, as the disk would technically be mounted at the time of use and any change would be a change to the actual container rather than to individual files.

    I was just wondering if anyone knows a way to do this without the drawbacks outlined above? Thanks!

    Read the article

  • Is Cassandra database row size limited by available memory?

    - by Adam Hollidge
    I'm working with very long time series -- hundreds of millions of data points in one series -- and am considering Cassandra as a data store. In this question, one of the Cassandra committers (the über helpful jbellis) says that Cassandra rows can be very large, and that column slicing operations are faster than row slices, hence my question: Is the row size still limited by available memory?

    Read the article

  • How to prevent jQuery UI draggable from also triggering a click event

    - by James Tauber
    I have a large div (a map) that is draggable via jQuery UI draggable. The div has child divs which are clickable. My problem is that if you drag the map, on mouse up the click event is fired on whatever child div you started the drag from. How do I stop the mouse up from triggering the click event when it's part of a drag (as opposed to someone just clicking without dragging, in which case the click event is desired)?
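
    One commonly used workaround, sketched here with illustrative ids and class names (#map and .poi are assumptions, not from the original post): set a flag in the draggable's start handler and clear it just after the drag ends, so the click handler can tell a real click from the click that ends a drag.

        var dragging = false;

        $('#map').draggable({
            start: function () {
                dragging = true;                 // a real drag has begun
            },
            stop: function () {
                // Defer the reset so the click fired on mouseup still sees the flag.
                setTimeout(function () { dragging = false; }, 0);
            }
        });

        $('#map .poi').on('click', function () {
            if (dragging) {
                return;                          // ignore the click that ends a drag
            }
            // normal click handling for the child div goes here
        });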

    Read the article

  • I need to summarise months into years, how can I do that?

    - by Tay
    Hi, I am working on a fairly large spreadsheet which is broken down into months. I want to put the yearly subtotals at the beginning of the spreadsheet. So in cell A1 I want to add AA2-AL2, in B1 I want to add AM2-AX2, and so on and so forth. How can I do this without manually going to each set of values? Is there any way I can put a formula in A1 which I can copy to B1, C1, D1, E1, which will pick up a set of 12 cells each (with no overlap)?
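
    One possible formula, offered as a sketch rather than the only way to do it (it assumes the monthly data starts in column AA of row 2): use OFFSET with COLUMN() so the same formula can be copied across row 1, each copy picking up the next non-overlapping block of 12 cells. Placed in A1 it sums AA2:AL2; copied to B1 it sums AM2:AX2, and so on.

        =SUM(OFFSET($A$2, 0, 26 + (COLUMN()-1)*12, 1, 12))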

    Read the article

  • SQL Design Question regarding schema and whether a name-value pair is the best solution

    - by Aur
    I am having a small problem trying to decide on a database schema for a current project. I am by no means a DBA. The application parses through a file based on user input and enters that data in the database. The number of fields that can be parsed is between 1 and 42 at the current moment.

    The current design of the database is entirely flat, with 42 columns; some are repeated columns such as address1, address2, address3, etc. This says that I should normalize the data. However, data integrity is not needed at this moment, and the way the data is shaped I'm looking at several joins. Not a bad thing, but the data is still in a 1 to 1 relationship and I still see a lot of empty fields per row. So my concern is that this does not allow the database or the application to be very extendable. If they want to add more fields to be parsed (which they do) then I'd need to create another table and add another foreign key to the linking table.

    The third option is to have a table where the fields are defined and a table for each record. So what I was thinking is to make a table that stores the value and then links to those two tables. The problem is I can picture the size of that table growing large depending on the input size. If someone gives me a file with 300,000 records then 300,000 x 40 = 12 million rows, so I have some reservations. However, I think if I get to that point then I should be happy it is being used. This option also allows for more custom displaying of information, albeit a bit more work, but little rework even if you add more fields.

    So the problem boils down to:
    1. The current design is a flat file, which makes extending it hard, and it is not normalized.
    2. Normalize the tables, although there are no real benefits for the moment, but requirements change.
    3. Normalize it down into the name-value pair and hope size doesn't hurt.

    There are a large number of inserts, updates, and selects against that table, so performance is a worry, but I believe the saying is design now, performance test later? I'm probably just missing something practical, so any comments would be appreciated, even if it's a quick sanity check. Thank you for your time.
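
    A minimal sketch of the name-value (third) option described above; the table and column names are illustrative, not from the original post:

        CREATE TABLE fields (
            field_id   INT PRIMARY KEY,
            field_name VARCHAR(100) NOT NULL
        );

        CREATE TABLE records (
            record_id  INT PRIMARY KEY,
            created_at DATETIME NOT NULL
        );

        CREATE TABLE record_values (
            record_id INT NOT NULL REFERENCES records (record_id),
            field_id  INT NOT NULL REFERENCES fields (field_id),
            value     VARCHAR(255),
            PRIMARY KEY (record_id, field_id)
        );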

    Read the article

  • SQL Query Theory Question...

    - by Keng
    I have a large historical transaction table (15-20 million rows, MANY columns) and a table with one row and one column. The one-row table contains a date (the last processing date) which will be used to pull the data from the transaction table ('process_date'). Question: Should I inner join the 'process_date' table to the transaction table, or the transaction table to the 'process_date' table?

    Read the article

  • Typemock - Worth the money?

    - by AngryHacker
    I know that this is a subjective question... Typemock is $799 per developer, so licences for 5 devs come to a pretty large sum. For someone here who has used Typemock: given that there are open source mocking frameworks, is it worth the money? Why?

    Read the article

  • Using many mutex locks

    - by hanno
    I have a large tree structure on which several threads are working at the same time. Ideally, I would like to have an individual mutex lock for each cell. I looked at the definition of pthread_mutex_t in bits/pthreadtypes.h and it is fairly short, so the memory usage should not be an issue in my case. However, is there any performance penalty when using many (let's say a few thousand) different pthread_mutex_ts for only 8 threads?
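
    A minimal sketch of the per-cell layout described above (the node structure is an illustrative assumption): each tree node carries its own mutex, so two threads only contend when they touch the same cell.

        #include <pthread.h>
        #include <stdlib.h>

        struct node {
            pthread_mutex_t lock;        /* one mutex per cell */
            int             value;
            struct node    *left, *right;
        };

        struct node *node_new(int value)
        {
            struct node *n = malloc(sizeof *n);
            if (n == NULL)
                return NULL;
            pthread_mutex_init(&n->lock, NULL);
            n->value = value;
            n->left = n->right = NULL;
            return n;
        }

        void node_update(struct node *n, int value)
        {
            pthread_mutex_lock(&n->lock);    /* serializes access to this cell only */
            n->value = value;
            pthread_mutex_unlock(&n->lock);
        }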

    Read the article

  • What is the best 3-D technology for the "Online Room Planner" site?

    - by Omega
    The main use cases are:
    - Create the 2D floor plan.
    - See the 3D view of the room in colors and with dynamic lighting (switching the lamps on and off).
    - Select the furniture from a large library of predefined samples.
    - Change the color and texture of the furniture samples.
    - Create photos of the 3D room view from different points.
    Also, the user can move and turn the camera in the room and discover the view.

    Read the article

  • Can the .NET MethodInfo cache be cleared or disabled?

    - by Anton
    Per MSDN, calling Type.GetMethods() stores reflected method information in a MemberInfo cache so the expensive operation doesn't have to be performed again. I have an application that scans assemblies/types, looking for methods that match a given specification. The problem is that memory consumption increases significantly (especially with large numbers of referenced assemblies) since .NET hangs onto the method metadata. Is there any way to clear or disable this MemberInfo cache?

    Read the article

  • R: How to write out a data.frame so that I can paste it into SO for others to read?

    - by John
    I have a large data.frame displaying some weird properties when plotted. I'd like to ask a question about it on Stack Overflow; to do that, I'd like to write the data.frame out in a form that I can paste into SO so that somebody else can easily run it and have it back as a data.frame object again. Is there an easy way to accomplish this? Also, if it is really long, should I use a pastebin instead of pasting it directly here?
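
    One commonly used option, sketched here with a made-up data.frame called df: dput() prints an R expression that rebuilds the object, so the output can be pasted straight into a question and then into someone else's R session.

        # A small placeholder data.frame standing in for the real one.
        df <- data.frame(x = 1:5, y = c(2.3, 4.1, 0.7, 5.5, 3.2))

        # Print a paste-able representation; wrap in head() to keep a large frame short.
        dput(head(df, 20))

        # Readers can assign the printed structure(...) output back to a variable:
        # df <- structure(list(x = 1:5, y = c(2.3, 4.1, 0.7, 5.5, 3.2)),
        #                 class = "data.frame", row.names = c(NA, -5L))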

    Read the article

  • Possible to lock attribute write access by Doors User?

    - by Philip Nguyen
    Is it possible to programmatically lock certain attributes based on the user? So certain attributes can be written to by User2 and certain attributes cannot be written to by User2. However, User1 may have write access to all attributes. What is the most efficient way of accomplishing this? I have to worry about not taking up too many computational resources, as I would like this to be able to work on quite large modules.

    Read the article
