Search Results

Search found 3618 results on 145 pages for 'huge'.

Page 90/145

  • MySqlConnection does not really close

    - by stighy
    Hi guys, I have a huge problem. Take a look at this sample code:

        Private Sub FUNCTION1()
            Dim conn As New MySqlConnection(myconnstring)
            conn.Open()
            ' ---- do something
            FUNCTION2()
            ' ----
            conn.Close()
        End Sub

        Private Sub FUNCTION2()
            Dim conn As New MySqlConnection(myconnstring)
            conn.Open()
            ' -- do something
            conn.Close()
        End Sub

    Although I regularly close all my connections, they remain "open" on the MySQL server. I know this because I am using the MySQL administration tool to check how many connections I open, "line by line" of my source code. In fact I frequently get "user has exceeded the 'max_user_connections' resource (current value: 5)". My host permits only 5 connections, but I think that if I write good source code this should not be a problem. So my question is: why do those "damn" connections remain open? Thank you in advance!

    Read the article

  • Import/commit to svn branch from a different codebase

    - by publicRavi
    I am trying to migrate to svn from a not-so-famous version control system (let's call it nsfvc). The svn trunk was created some time ago from nsfvc's trunk. There is an active branch in nsfvc that I have to import into an svn branch. The diff between nsfvc's trunk and branch is huge (updates, renames, additions, deletions, moves). How do I go about doing this? I am guessing it is not as simple as:

        svn co http://mysvn/repo/branches/branch c:\workspace
        # replace files in c:\workspace
        svn add
        svn ci

    Read the article

  • SQL compare entire rows

    - by zmaster
    In SQL Server 2008 I have some huge tables (200-300+ columns). Every day we run a batch job generating a new table, with a timestamp appended to the table's name. The tables have no PK. I would like a generic way to compare two rows from two such tables. Showing which columns have different values is sufficient, but showing the values would be perfect. Thanks a lot.

    Edit: Thanks for the answers. I ended up writing my own C# tool to do the job, as I am not allowed to install third-party software in my company.
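
    For a home-grown diff tool like the one described above, a minimal Python sketch of the same idea (pandas and pyodbc assumed; the DSN and table names are hypothetical, and rows are matched purely by position since the tables have no PK, so both snapshots must have identical shape and column order):

        import pandas as pd
        import pyodbc

        conn = pyodbc.connect("DSN=mydsn")          # hypothetical DSN

        # Two daily snapshots of the same wide table (names are hypothetical).
        old = pd.read_sql("SELECT * FROM batch_20100501", conn)
        new = pd.read_sql("SELECT * FROM batch_20100502", conn)

        # compare() keeps only the cells that differ and shows both values,
        # labelled "self" (old) and "other" (new); rows are matched by
        # position because the tables have no primary key.
        print(old.compare(new))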

    Read the article

  • What is the smallest Windows header I can #include to define DWORD?

    - by j_random_hacker
    I have a small header file of my own which declares a couple of functions, one of which has a return type of DWORD. I'm reluctant to drag in windows.h just to get the official definition of this type, since that file is huge and my header will be used in a number of source modules that don't otherwise need it. Of course, in practice I know that DWORD is just unsigned int, but I'd prefer the more hygienic approach of including an official header file if possible. On this page it says that DWORD is defined in windef.h, but unfortunately including just that small file directly leads to compilation errors; apparently it expects to be included by other headers. (Also, the fact that my file is a header means I can't just define WIN32_LEAN_AND_MEAN, since the source file that #includes my file might need it to be left undefined.) Any ideas? I know it's not the end of the world; I can just continue to #include <windows.h>, but I thought someone might have a better idea!

    Read the article

  • Deploy Java application on Mac OS X (from a Windows system)

    - by Matías
    Hello, here's the deal. I'm just starting with Java programming. I've made a simple application that uses the SWT graphic library, and I want to deploy it on a Mac (running the latest version of Mac OS X). I did all the programming on my Windows 7 machine, so here are my questions: Q1) Can I make an executable file for Mac OS X from my Windows machine? How? (I saw that it's possible to create .exe files on Windows instead of using .jar; I want to do the same for the Mac, though of course it won't be an .exe.) Q2) If I export my project in Eclipse and choose Runnable JAR File, and then under Library Handling I pick "Extract required libraries into generated JAR" or "Package required libraries into generated JAR", I end up with a huge .JAR (about 15 MB in size; my application consists of just a button on a window and a tiny method that doesn't do much). Is that considered normal? Here's the list of libraries that my project appears to be using: Thanks in advance.

    Read the article

  • Fastest of the two to merge: dicts vs lists

    - by tipu
    I'm doing some indexing; memory is sufficient but CPU isn't. I have one huge dictionary and a smaller dictionary I'm merging into the bigger one:

        big_dict = {"the" : {"1" : 1, "2" : 1, "3" : 1, "4" : 1, "5" : 1}}
        smaller_dict = {"the" : {"6" : 1, "7" : 1}}
        # after merging
        resulting_dict = {"the" : {"1" : 1, "2" : 1, "3" : 1, "4" : 1, "5" : 1, "6" : 1, "7" : 1}}

    My question is about the values in both dicts: should I use a dict (as displayed above) or a list (as displayed below), when my priority is to use as much memory as necessary to get the most out of my CPU? For clarification, using a list would look like:

        big_dict = {"the" : [1, 2, 3, 4, 5]}
        smaller_dict = {"the" : [6, 7]}
        # after merging
        resulting_dict = {"the" : [1, 2, 3, 4, 5, 6, 7]}

    Side note: the reason I'm using a dict nested in a dict rather than a set nested in a dict is that JSON won't let me json.dumps a set, because a set isn't key/value pairs; as far as the JSON library is concerned it's {"a", "series", "of", "keys"}.

    Also, after choosing between a dict and a list, how would I go about implementing the most CPU-efficient method of merging them? I appreciate the help.
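
    Building on the side note above, one option is to keep sets in memory for the CPU-cheap merge and convert to JSON-friendly lists only at serialization time. A minimal sketch:

        import json

        big_dict = {"the": {1, 2, 3, 4, 5}}
        smaller_dict = {"the": {6, 7}}

        # set.update is a single C-level call per key; no Python-level loop
        # over the individual ids is needed.
        for key, ids in smaller_dict.items():
            big_dict.setdefault(key, set()).update(ids)

        # JSON has no set type, so convert only when dumping.
        print(json.dumps({k: sorted(v) for k, v in big_dict.items()}))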

    Read the article

  • SQL select from a large number of IDs

    - by Claudiu
    I have a table, Foo. I run a query on Foo to get the ids of a subset of Foo. I then want to run a more complicated set of queries, but only on those IDs. Is there an efficient way to do this? The best I can think of is constructing a query such as:

        SELECT ... --complicated stuff
        WHERE ... --more stuff
        AND id IN (1, 2, 3, 9, 413, 4324, ..., 939393)

    That is, I construct a huge IN clause. Is this efficient? Is there a more efficient way of doing this, or is the only way to JOIN with the initial query that gets the IDs? If it helps, I'm using SQLObject to connect to a PostgreSQL database, and I have access to the cursor that executed the query to get all the IDs.
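
    If the cursor underneath SQLObject is psycopg2, one alternative to the huge IN literal is binding the whole id list as a single PostgreSQL array parameter. A sketch (the connection string and column names are hypothetical):

        import psycopg2

        conn = psycopg2.connect("dbname=mydb")   # hypothetical connection string
        ids = [1, 2, 3, 9, 413, 4324, 939393]

        cur = conn.cursor()
        # psycopg2 adapts a Python list to a PostgreSQL array, so the ids
        # travel as one bind parameter instead of a giant IN (...) literal.
        cur.execute("SELECT id, bar FROM foo WHERE id = ANY(%s)", (ids,))
        rows = cur.fetchall()

    For very large id sets, inserting the ids into a temporary table and joining against it usually beats both forms.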

    Read the article

  • Efficient job progress update in web application

    - by Endru6
    Hi, creating a web application (Django in my case, but I think the question is more general) that administers a cluster of workers doing queued jobs, I need to track each job's progress. When I do it with database UPDATEs (PostgreSQL in this case), it severely hurts database performance, because each UPDATE creates a new row version in the table, and only vacuuming removes the obsolete versions. With 30 jobs running and reporting progress every minute, the DB may require vacuuming every 10 days, and vacuuming means huge slowdowns on the front end for all the employees working with the system. Because the progress information isn't critical, i.e. it doesn't have to be persistent, how would you report progress from the jobs without the overhead a database implies? There are 30 worker servers, each doing 1 or 2 jobs simultaneously, 1 front-end server which serves the web application to users, and 1 database server.
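
    One way to keep this throwaway state out of PostgreSQL entirely is Django's cache framework, assuming a memcached (or similar) cache backend is configured in settings. A minimal sketch:

        from django.core.cache import cache

        def report_progress(job_id, percent):
            # Overwriting a cache key creates no dead row versions, so there
            # is nothing for vacuum to clean up; the timeout makes stale
            # entries disappear on their own.
            cache.set("job:%s:progress" % job_id, percent, 60 * 60)

        def get_progress(job_id):
            # Progress is non-critical, so a missing key simply reads as 0.
            return cache.get("job:%s:progress" % job_id, 0)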

    Read the article

  • Extract substructure from a text file using bash or python

    - by Werner
    Hi, I have a huge text file which follows this structure:

        SET
        TAG1
        ...
        ...
        SET
        ...
        SET
        TAG2
        ...
        ...
        SET
        ...
        ...

    I would like to extract, for a specific TAG (i.e. TAG54), its individual "substructure", which would be:

        SET
        TAG54
        ...
        ...
        SET

    Each substructure for a given TAG_i always contains: first line: SET; second line: TAG_i (in this case TAG54); an arbitrary number of lines; last line: SET. I wonder what would be the best way to do this, whether in bash or Python, so that for a given TAG one can "extract" this substructure. Thanks
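
    A minimal Python sketch, assuming the file follows exactly the structure described (SET, then the tag, then the body, then a closing SET):

        def extract_block(path, tag):
            """Return the SET/tag/.../SET block for one tag, or [] if absent."""
            block = None
            prev = None
            with open(path) as f:
                for line in f:
                    line = line.rstrip("\n")
                    if block is not None:
                        block.append(line)
                        if line == "SET":            # closing SET of the block
                            return block
                    elif prev == "SET" and line == tag:
                        block = ["SET", tag]         # found the SET + tag pair
                    prev = line
            return []

        for line in extract_block("huge.txt", "TAG54"):
            print(line)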

    Read the article

  • How to render custom columns with a GenericTreeModel

    - by Giorgio Gelardi
    I have to display some data in a treeview. The "real" data model is huge and I cannot copy all of it into a TreeStore, so I guess I should use a GenericTreeModel to act like a virtual treeview. The first column is the classic icon+text style, so I think I should declare a column with a CellRendererPixbuf (as in the FAQ sample), but I'm not sure what the model methods on_get_n_columns() and on_get_value() should return: it's both a Pixbuf and a string value for the same column. Any help will be appreciated.
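
    The usual PyGTK pattern is to expose two model columns (a gtk.gdk.Pixbuf and a str) and pack two renderers into the single view column. A sketch of just the relevant parts (a working GenericTreeModel also needs on_get_iter, on_iter_next, on_get_path, and the other callbacks; the lookup_* helpers are hypothetical):

        import gtk

        class BigModel(gtk.GenericTreeModel):
            def on_get_n_columns(self):
                return 2                  # model column 0 = icon, 1 = label

            def on_get_column_type(self, index):
                return gtk.gdk.Pixbuf if index == 0 else str

            def on_get_value(self, rowref, column):
                if column == 0:
                    return self.lookup_icon(rowref)    # -> gtk.gdk.Pixbuf
                return self.lookup_label(rowref)       # -> str

        # One view column, two renderers: pixbuf first, then text.
        column = gtk.TreeViewColumn("Name")
        pix, txt = gtk.CellRendererPixbuf(), gtk.CellRendererText()
        column.pack_start(pix, expand=False)
        column.pack_start(txt, expand=True)
        column.add_attribute(pix, "pixbuf", 0)   # renderer property <- model col 0
        column.add_attribute(txt, "text", 1)     # renderer property <- model col 1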

    Read the article

  • What do you wish you could've learned sooner?

    - by Industrial
    What things, methods, workflows, etc. can you not live without today that you wish you had learned about long ago? For example, learning some basic Ubuntu and using my debugger properly in the IDE have made a huge difference to me, and together they are probably the two things I most wish I had done a long time ago. Using a debugger just seems like common sense now to many of us, but to those at an early stage of their career it might not. (I'm a good example of that.)

    Read the article

  • Performance effect of using print statements in Python script

    - by Sudar
    I have a Python script that processes a huge text file (around 4 million lines) and writes the data into two separate files. I have added a print statement that outputs a string for every line, for debugging. I want to know how bad this could be from a performance perspective. If it is going to be very bad, I can remove the debugging line. Edit: It turns out that having a print statement for every line of a 4-million-line file increases the running time far too much.
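
    A common middle ground is to route the per-line output through the logging module, so the tracing can be switched off without deleting the statements. A minimal sketch (the input file name is hypothetical):

        import logging

        logging.basicConfig(level=logging.INFO)   # set to DEBUG to re-enable tracing
        log = logging.getLogger(__name__)

        with open("input.txt") as src:
            for n, line in enumerate(src, 1):
                # %-style arguments are only formatted if DEBUG is enabled,
                # and the isEnabledFor guard skips even the call overhead,
                # which matters when this loop runs 4 million times.
                if log.isEnabledFor(logging.DEBUG):
                    log.debug("line %d: %s", n, line.rstrip())
                # ... process the line, write to the two output files ...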

    Read the article

  • Measure data transfer rate over TCP using C#

    - by publicENEMY
    I want to measure the current download speed. I'm sending a huge file over TCP. How can I capture the transfer rate every second? If I use IPv4InterfaceStatistics or a similar method, I capture the device transfer rate instead of the file transfer rate. The problem with capturing the device transfer rate is that it includes all traffic going through the network device, not just the single file I transfer. How can I capture the file transfer rate? I'm using C#.

    Read the article

  • When can we say that we have mastered something?

    - by Thinking
    I do not know if this is a valid question to ask here or not, but I have asked it because I have this doubt: in many interviews, the interviewer asks you to rate yourself on a scale of 10 in C#, Java, etc. Some say 6, some 7... My question is: how do I judge where I stand at present? And when can we say that we have mastered a language or a topic? Everything is huge and every day we learn something new, so there is no end to it... so how can I judge that? Thanks

    Read the article

  • Read a large result set in chunks from MySQL

    - by ripper234
    I am trying to read a huge result set from MySQL. Reading it in a straightforward manner didn't work, as MySQL tries to return all the results together, which times out. I found the following piece of code, which tells MySQL to stream the results back one row at a time:

        stmt = conn.createStatement(java.sql.ResultSet.TYPE_FORWARD_ONLY,
                                    java.sql.ResultSet.CONCUR_READ_ONLY);
        stmt.setFetchSize(Integer.MIN_VALUE);

    Can I read a chunk at a time instead of one row at a time? I've tried setting the fetch size to a different value, but it doesn't work.

    Read the article

  • SQL Server 2000 stored procedure prevents parallelism, or something?

    - by user187305
    I have a huge, disgusting stored procedure that wasn't slow a couple of months ago, but now is. I barely know what this thing does and I am in no way interested in rewriting it. I do know that if I take the body of the stored procedure, declare/set the values of the parameters, and run it in Query Analyzer, it runs more than 20x faster. From the internet, I've read that this is probably due to a bad cached query plan. So I've tried running the sp WITH RECOMPILE after the EXEC, and I've also tried putting WITH RECOMPILE inside the sp, but neither helped even a little bit. When I look at the execution plan of the sp vs. the query, the biggest difference is that the sp has "Parallelism" operations all over the place and the query doesn't have any. Can this be the cause of the difference in speed? Thank you; any ideas would be great... I'm stuck.

    Read the article

  • How to step into code from jars (non-JDK) using IntelliJ?

    - by ascari
    I am new to IntelliJ (and Stack Overflow) and fairly new to Java. In my application I am using code from jars that I added in IntelliJ as "External Libraries". I also have the source code for those jars, but I would rather not compile it (they are huge and complex). Now, while debugging my application, I would like to step into the library code that is compiled into those jars. How can I set up IntelliJ to do that? Is there another way other than attaching the entire jar library source code to my application code?

    Read the article

  • PHP - turning register_globals off: what is the best way to go about fixing the code?

    - by user187809
    I am working on an old code base where the programmers assumed that register_globals would always be on. Hence variables are used without the $_GET or $_POST prefix in pretty much every page (the code base is huge, hundreds of scripts). I tried turning it off, but the very first script (the login script) goes into an infinite loop. I understand that going through one script at a time, one line at a time, and fixing the variables is probably the only option (adding the $_GET or $_POST prefix as the case may be). Has anyone done this before? How did you go about doing it? Any advice?

    Read the article

  • Debate: is adding third-party libraries to a WAR a good idea?

    - by Master Chief
    We have a debate going on:

    a. The "standard" way of assembling a web app: create a WAR with all our app artifacts, with all other components (hibernate, memcached, etc.) deployed in the tomcat/shared/lib area.

    b. Create a humongous WAR with everything included and nothing in tomcat/shared/lib.

    Pros for a: it keeps things modular and the WAR is small. Cons for a: the dependency on shared/lib has to be managed, especially by the deployment process. Pros for b: all dependencies are controlled by the build process, removing any room for error. Cons for b: the WAR is really, really big; if you are deploying over a network to a huge farm, that might have an impact. I want to see what thoughts others might have about this.

    Read the article

  • What is the best way to sync multiple client SQL Servers to one MS SQL Server 2005?

    - by user605055
    I have several client databases that use my Windows application. I want to send this data to an online web site. The client database and server database structures are different, because we need to add a client ID column to some tables in the server database. The way I sync the databases now: another application uses C# bulk copy within a transaction. But my server's SQL Server is too busy, and parallel tasks cannot be run. I am working on this solution: use AFTER UPDATE/DELETE/INSERT triggers to save the changes in one table, and create a SQL query to send to a web service to sync the data. But I must send all the data first! It is a huge data set (bigger than 16 MB). I think I can't use replication, because the structures and primary keys are different.

    Read the article

  • What to use for repetitive (daily, weekly, monthly) tasks? Workflows, Windows Services, something else?

    - by mare
    I've been writing Windows Services for a while and they always seem to work fine for things that need to run every day, a few times a week, once a month, etc., but lately I've been thinking about going with Windows Workflow Foundation. However, I am unsure how workflows would run on a server without some container application (for instance SharePoint). I worked with SharePoint workflows before and I always had huge problems: at first with the bugs in the workflow architecture implementation (the problems with sleep and delay), and later, when they eventually started to work, they were difficult to manage and change. On the other hand, Windows Services were always quite easy to implement, easy to create a setup for and install, and they were always quite resilient (they often ran for months without crashing or anything else going wrong). What do you recommend? Please bear in mind we are working in .NET (the version is no problem; if 4.0 brings something new on this subject, we can use it).

    Read the article

  • Is there a way to read 10000 lines from a file in Python?

    - by windsound
    I am relatively new to Python; I have worked in C a lot. Since I keep seeing so many Python functions I do not know, I was wondering whether there is a function that can request 10000 lines from a file. Something like this is what I would expect, if such a function exists:

        lines = get_10000_lines(file_pointer)

    Does Python have a built-in function for this, or is there a module I can download? If not, what is the easiest way to do it? I need to analyze a huge file, so I want to read 10000 lines at a time and analyze them as I go, to save memory. Thanks for helping!
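
    There is no built-in get_10000_lines, but itertools.islice comes close. A minimal sketch (the file name and the analyze helper are hypothetical):

        from itertools import islice

        def read_in_chunks(path, n=10000):
            """Yield successive lists of up to n lines; one chunk in memory."""
            with open(path) as f:
                while True:
                    chunk = list(islice(f, n))
                    if not chunk:          # islice returns [] at end of file
                        break
                    yield chunk

        for lines in read_in_chunks("huge.txt"):
            analyze(lines)                 # hypothetical per-chunk analysis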

    Read the article

  • Download a file from the web in C++ (with Winsock?)

    - by Lienau
    I need to download files / read strings from a specified URL in C++. I've done some research on this; cURL seems to be the most popular method, and I've used it before in PHP. The problem with cURL is that the library is huge, and my file has to be small. I think you can do it with Winsock, but I can't find any simple examples. If you have a simple Winsock example, a light cURL alternative, or anything else that could get the job done, I would greatly appreciate it. Also, I need this to work in native C++.

    Read the article

  • Which NoSQL db to use with C?

    - by systemsfault
    Hello all, I'm working on an application that I'm going to write in C, and I am considering using a NoSQL DB for storing time-series data with at most 8 or 9 fields. Every 5 minutes there will be huge write operations, such as 2-10 million rows, and then there will be reads (but read performance is not as crucial as write performance). I'm considering a NoSQL DB here to store the data but couldn't decide which one to use. CouchDB seems to have a stable C driver called pillowtalk, but Mongo's driver doesn't look as promising as pillowtalk. I'm also open to other suggestions. What is your recommendation?

    Read the article

  • JavaScript === vs ==: does it matter which "equal" operator I use?

    - by bcasp
    I'm using JSLint to go through some horrific JavaScript at work, and it's returning a huge number of suggestions to replace == with === when doing things like comparing idSele_UNVEHtype.value.length == 0 inside an if statement. I'm basically wondering whether there is a performance benefit to replacing == with ===. Any performance improvement would be welcome, as there are hundreds (if not thousands) of these comparison operators used throughout the file. I tried searching for information relevant to this question, but trying to search for something like '=== vs ==' doesn't work so well with search engines...

    Read the article
