Search Results

Search found 385 results on 16 pages for 'reasoning'.

Page 13/16 | < Previous Page | 9 10 11 12 13 14 15 16  | Next Page >

  • .NET Splash screen issues

    - by CODe
    I have a splash screen for my C# database application that is called via the Shown event. The splash screen contains some information that is preprocessed when the main form's constructor is called, which is why I'm using the Shown event: that information should be available by then. However, when the splash screen is shown, the main form is whited out, and the menu bar, bottom menu bar, and even the gray background are all white and invisible. It looks like the program is hanging, but after the 5-second delay I have built in, the banner goes away and the program is shown normally. Also, on the banner, I have labels that are not shown when the splash screen displays... Here is my code; some reasoning behind why it isn't working would help greatly.

    SPLASH SCREEN CODE:

        public partial class StartupBanner : Form
        {
            public StartupBanner(int missingNum, int expiredNum)
            {
                InitializeComponent();
                missingLabel.Text = missingNum.ToString() + " MISSING POLICIES";
                expiredLabel.Text = expiredNum.ToString() + " EXPIRED POLICIES";
            }
        }

    CALLING CODE:

        private void MainForm_Shown(object sender, EventArgs e)
        {
            StartupBanner startup = new StartupBanner(missingPoliciesNum, expiredPoliciesNum);
            startup.MdiParent = this;
            startup.Show();
            Thread.Sleep(5000);
            startup.Close();
        }

    Using startup.ShowDialog() shows the correct label information on the splash screen, but that locks up the application, and I need the splash to go away after about 5 seconds, which is why it's a splash. ;)
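
    A likely culprit here is that Thread.Sleep(5000) blocks the UI thread, so neither the main form nor the splash gets a chance to repaint during those five seconds. A minimal sketch of one workaround, reusing the StartupBanner form and the counter fields from the question, is to let a System.Windows.Forms.Timer close the splash so the message loop keeps running:

        private void MainForm_Shown(object sender, EventArgs e)
        {
            StartupBanner startup = new StartupBanner(missingPoliciesNum, expiredPoliciesNum);
            startup.MdiParent = this;
            startup.Show();

            // Keep the message loop pumping; close the splash after 5 seconds.
            var splashTimer = new System.Windows.Forms.Timer { Interval = 5000 };
            splashTimer.Tick += (s, args) =>
            {
                splashTimer.Stop();
                startup.Close();
            };
            splashTimer.Start();
        }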

    Read the article

  • Does Perl auto-vivify variables used as references in subroutine calls?

    - by FM
    I've declared 2010 to be the year of higher-order programming, so I'm learning Haskell. The introduction has a slick quick-sort demo, and I thought, "Hey, that's easy to do in Perl". It turned out to be easier than I expected. Note that I don't have to worry about whether my partitions ($less and $more) are defined. Normally you can't use an undefined value as an array reference.

        use strict;
        use warnings;
        use List::MoreUtils qw(part);

        my @data = (5,6,7,4,2,9,10,9,5,1);
        my @sorted = qsort(@data);
        print "@sorted\n";

        sub qsort {
            return unless @_;
            my $pivot = shift @_;
            my ($less, $more) = part { $_ < $pivot ? 0 : 1 } @_;
            # Works, even though $less and $more are sometimes undefined.
            return qsort(@$less), $pivot, qsort(@$more);
        }

    As best I can tell, Perl will auto-vivify a variable that you try to use as a reference -- but only if you are passing it to a subroutine. For example, my call to foo() works, but not the attempted print.

        use Data::Dumper qw(Dumper);

        sub foo { print "Running foo(@_)\n" }

        my ($x);
        print Dumper($x);

        # Fatal: Can't use an undefined value as an ARRAY reference.
        # print @$x, "\n";

        # But this works.
        foo(@$x);

        # Auto-vivification: $x is now [].
        print Dumper($x);

    My questions: Am I understanding this behavior correctly? What is the explanation or reasoning behind why Perl does this? Is this behavior explained anywhere in the docs?

    Read the article

  • Is NUnit's ExpectedExceptionAttribute the only way to test if something raises an exception?

    - by Dariusz Walczak
    Hello, I'm completely new to C# and NUnit. In Boost.Test there is a family of BOOST_*_THROW macros. In Python's test module there is the TestCase.assertRaises method. As far as I understand it, in C# with NUnit (2.4.8) the only way of writing an exception test is to use ExpectedExceptionAttribute. Why should I prefer ExpectedExceptionAttribute over - let's say - Boost.Test's approach? What reasoning can stand behind this design decision? Why is that better in the case of C# and NUnit? Finally, if I decide to use ExpectedExceptionAttribute, how can I do some additional tests after the exception was raised and caught? Let's say that I want to test a requirement saying that an object has to be valid after some setter raised System.IndexOutOfRangeException. How would you fix the following code to compile and work as expected?

        [Test]
        public void TestSetterException()
        {
            Sth.SomeClass obj = new SomeClass();
            // Following statement won't compile.
            Assert.Raises("System.IndexOutOfRangeException", obj.SetValueAt(-1, "foo"));
            Assert.IsTrue(obj.IsValid());
        }

    Edit: Thanks for your answers. Today, I've found an It's the Tests blog entry where all three methods described by you are mentioned (and one more minor variation). It's a shame that I couldn't find it before :-(.
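
    For reference, one pattern that needs no attribute support at all is to catch the expected exception manually and fail otherwise; later NUnit releases (2.5 and up) also ship Assert.Throws, which returns the caught exception for further assertions. A minimal sketch of the manual pattern, assuming the SomeClass API from the question:

        [Test]
        public void TestSetterException()
        {
            Sth.SomeClass obj = new Sth.SomeClass();
            try
            {
                obj.SetValueAt(-1, "foo");
                Assert.Fail("Expected IndexOutOfRangeException was not thrown.");
            }
            catch (IndexOutOfRangeException)
            {
                // Expected: fall through and check the object is still valid.
            }
            Assert.IsTrue(obj.IsValid());
        }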

    Read the article

  • When does an ARM7 processor increase its PC register?

    - by Summer_More_More_Tea
    Hi everyone: I've been thinking about this question for a while: when does an ARM7 (with a 3-stage pipeline) processor increase its PC register? I originally thought that after an instruction has been executed, the processor first checks whether there was any exception in the last execution, then increases PC by 2 or 4 depending on the current state. If an exception occurs, the ARM7 will change its running mode, store PC in the LR of the current mode and begin to process the current exception without modifying the PC register. But that makes no sense when analyzing return instructions. I cannot work out why PC is assigned LR when returning from an undefined-instruction exception but LR-4 when returning from a prefetch-abort exception; don't both of these exceptions happen at the decode stage? What's more, according to my textbook, PC will always be assigned LR-4 when returning from a prefetch-abort exception no matter what state the processor is in (ARM or Thumb) before the exception occurs. However, I think PC should be assigned LR-2 if the original state is Thumb, since a Thumb instruction is 2 bytes long instead of the 4 bytes of an ARM instruction, and we just want to roll back one instruction in the current state. Are there any flaws in my reasoning, or is something wrong with the textbook? It turned into a long question; I really hope someone can help me get the right answer. Thanks in advance.

    Read the article

  • How to avoid multiple, unused has_many associations when using multiple models for the same entity (

    - by mikep
    Hello, I'm looking for a nice, Ruby/Rails-esque solution for something. I'm trying to split up some data using multiple tables, rather than just using one gigantic table. My reasoning is pretty much to try and avoid the performance drop that would come with having a big table. So, rather than have one table called books, I have multiple tables: books1, books2, books3, etc. (I know that I could use a partition, but, for now, I've decided to go the 'multiple tables' route.) Each user has their books placed into a specific table. The actual book table is chosen when the user is created, and all of their books go into the same table. The goal is to try and keep each table pretty much even -- but that's a different issue. One thing I don't particularly want to have is a bunch of unused associations in the User class. Right now, it looks like I'd have to do the following:

        class User < ActiveRecord::Base
          has_many :books1, :books2, :books3, :books4, :books5
        end

        class Books1 < ActiveRecord::Base
          belongs_to :user
        end

        class Books2 < ActiveRecord::Base
          belongs_to :user
        end

    First off, for each specific user, only one of the book tables would be usable/applicable, since all of a user's books are stored in the same table. So, only one of the associations would be in use at any time, and any other has_many :bookX association that was loaded would be a waste. I don't really know what Ruby/Rails does internally with all of those has_many associations though, so maybe it's not so bad. But right now I'm thinking that it's really wasteful, and that there may just be a better, more efficient way of doing this. Is there some sort of special Ruby/Rails methodology that could be applied here to avoid having to have all of those has_many associations? Also, does anyone have any advice on how to abstract the fact that there are multiple book tables behind a single books model/class?

    Read the article

  • Web Shop Schema - Document Db

    - by Maxem
    I'd like to evaluate a document db, probably MongoDB, in an ASP.NET MVC web shop. A little reasoning at the beginning: there are about 2 million products. The product model would be pretty bad for an RDBMS as there'd be many different kinds of products with unique attributes. For example, there'd be books which have isbn, authors, title, pages etc. as well as dvds with play time, directors, artists etc., and quite a few more types. In the end, I'd have about 9 different product types with a combined column count (counting common columns like title only once) of about 70 to 100, whereas each individual product has 15 columns at most. The three commonly used ways in an RDBMS would be:

    1. EAV model, which would have pretty bad performance characteristics and would make it either impractical or perform even worse if I'd like to display the author of a book in a list of different products (think start page, recommended products etc.).
    2. Ignore the column count and put it all in the product table: although I deal with somewhat bigger databases (row-wise), I don't have any experience with tables with more than 20 columns as far as performance is concerned, but I guess 100 columns would have some implications.
    3. Create a table for each product type: I personally don't like this approach as it complicates everything else.

    C# driver / classes: I'd like to use the NoRM driver, and so far I think I'll try to create a product dto that contains all properties (grouped within detail classes like book details, except for those properties that should be displayed on list views etc.). In the app I'll use BookBehavior / DvdBehaviour, which are wrappers around a product dto but only expose the relevant properties. My questions now:

    - Are my performance concerns with the many-columns approach valid?
    - Did I overlook something - is there a much better way to do it in an RDBMS?
    - Is MongoDB on Windows stable enough?
    - Does my approach with different behaviour wrappers make sense?
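
    To make the dto idea concrete, here is a minimal sketch of the kind of product document described above. The class and property names (ProductDto, BookDetails, DvdDetails) are illustrative assumptions, not part of NoRM or of the original design:

        // Hypothetical product document: common fields plus optional per-type detail groups.
        public class ProductDto
        {
            public string Id { get; set; }
            public string Title { get; set; }
            public decimal Price { get; set; }

            // Only one of these is populated, depending on the product type.
            public BookDetails Book { get; set; }
            public DvdDetails Dvd { get; set; }
        }

        public class BookDetails
        {
            public string Isbn { get; set; }
            public string[] Authors { get; set; }
            public int Pages { get; set; }
        }

        public class DvdDetails
        {
            public int PlayTimeMinutes { get; set; }
            public string[] Directors { get; set; }
            public string[] Artists { get; set; }
        }

    A behaviour wrapper like the BookBehavior class mentioned above would then expose only the Book detail group of a given ProductDto.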

    Read the article

  • Get Function Pointer to function in a shared library I didn't directly load

    - by bdk
    My Linux application (A) links against a third-party shared library (B) which I don't have source code to. This library makes use of another third-party shared library that I don't have source code to (C). I believe that (B) uses dlopen to access (C) instead of linking directly. My reasoning for this is that 'ldd' on (B) does not show (C), and objdump -X on (B) shows references to dlopen/dlclose/dlsym. My requirement is that, in my code for (A), I need to get a function pointer to a function foo() located in (C). Normally I'd use dlsym for this, but I need to pass it the handle returned from dlopen, which I don't have since (B) does not expose it.

    For the larger context: I need to modify the function in (C) such that every time it calls its helper function bar() (also located in (C)), it also calls a function with the same signature located in (A) with the same parameters (basically, inject my code into the codepath of (C)'s foo()-bar()). I believe I've found a way to accomplish this using gdb, but in porting my gdb command list I'm stuck on the step of getting the function pointer. I'm also open to alternatives that accomplish the same task rather than the exact problem as stated above.

    Edit: After writing this I realized I can probably just do another dlopen on the file in my code, and the symbols returned via dlsym on that handle should be the same as those received via the original dlopen, if I'm reading the dlopen man page correctly. However, I'm still interested in advice or assistance with my larger context, if there's a better way to go about this.

    Read the article

  • SSL signed certificates for internal use

    - by rogueprocess
    I have a distributed application consisting of many components that communicate over TCP (for example, JMS) and HTTP. All components run on internal hardware, with internal IP addresses, and are not accessible to the public. I want to make the communication secure using SSL. Does it make sense to purchase signed certificates from a well-known certificate authority? Or should I just use self-signed certs? My understanding of the advantage of trusted certs is that the authority is an entity that can be trusted by the general public - but that is only an issue when the general public needs to be sure that the entity at a particular domain is who they say they are. Therefore, in my case, where the same organization is responsible for the components at both ends of the communication, and everything in between, a publicly trusted authority would be pointless. In other words, if I generate and sign a certificate for my own server, I know that it's trustworthy. And no one from outside the organization will ever be asked to trust this certificate. That is my reasoning - am I correct, or is there some potential advantage to using certs from a known authority?

    Read the article

  • C/C++: feedback on analyzing a code example

    - by KaiserJohaan
    Hello, I have a piece of code from an assignment I am uncertain about. I feel confident that I know the answer, but I just want to double-check with the community in case there's something I forgot. The title is basically secure coding, and the question is just to explain the results.

        #include <stdio.h>

        int main()
        {
            unsigned int i = 1;
            unsigned int c = 1;
            while (i > 0)
            {
                i = i * 2;
                c++;
            }
            printf("%d\n", c);
            return 0;
        }

    My reasoning is this: at first glance you could imagine the code would run forever, considering it's initialized to a positive value and ever increasing. This of course is wrong, because eventually the value will grow so large it will cause an integer overflow. This in turn is not entirely true either, because eventually it will force the variable 'i' to be signed by setting the last bit to 1, and it will therefore be regarded as a negative number, therefore terminating the loop. So it is not writing to unallocated memory and thereby causing integer overflow, but rather violating the data type and therefore causing the loop to terminate. I am quite sure this is the reason, but I just want to double check. Any opinions?
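
    One quick way to check the reasoning above is to trace the same loop in another language with fixed-width unsigned integers. A minimal C# analogue (an illustration, not part of the assignment) shows the unsigned value wrapping around modulo 2^32 rather than turning negative, with the loop stopping when i wraps to 0:

        using System;

        class DoublingLoopDemo
        {
            static void Main()
            {
                uint i = 1;
                uint c = 1;
                while (i > 0)
                {
                    // In the default unchecked context, uint arithmetic wraps modulo 2^32,
                    // so after the 32nd doubling i becomes 0 and the loop exits.
                    i = i * 2;
                    c++;
                }
                Console.WriteLine(c); // prints 33
            }
        }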

    Read the article

  • Number of simple mutations to change one string to another?

    - by mstksg
    Hi, I'm sure you've all heard of the "Word game", where you try to change one word to another by changing one letter at a time, and only going through valid English words. I'm trying to implement an A* algorithm to solve it (just to flesh out my understanding of A*), and one of the things that is needed is a minimum-distance heuristic. That is, the minimum number of these three mutations that can turn an arbitrary string a into another string b:

    1) Change one letter for another
    2) Add one letter at a spot before or after any letter
    3) Remove any letter

    Examples:

        aabca => abaca: aabca, abca, abaca = 2
        abcdebf => bgabf: abcdebf, bcdebf, bcdbf, bgdbf, bgabf = 4

    I've tried many algorithms out; I can't seem to find one that gives the actual answer every time. In fact, sometimes I'm not sure if even my human reasoning is finding the best answer. Does anyone know an algorithm for this purpose? Or maybe you can help me find one? Thanks.
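
    What is being described here is the classic Levenshtein (edit) distance, which can be computed exactly with dynamic programming and used as an admissible heuristic for A*. A minimal sketch in C# (the textbook algorithm, not tied to any particular word-game code):

        using System;

        static class EditDistanceSketch
        {
            // Substitutions, insertions and deletions all cost 1.
            public static int EditDistance(string a, string b)
            {
                int[,] d = new int[a.Length + 1, b.Length + 1];

                for (int i = 0; i <= a.Length; i++) d[i, 0] = i; // delete all of a's prefix
                for (int j = 0; j <= b.Length; j++) d[0, j] = j; // insert all of b's prefix

                for (int i = 1; i <= a.Length; i++)
                {
                    for (int j = 1; j <= b.Length; j++)
                    {
                        int substitution = a[i - 1] == b[j - 1] ? 0 : 1;
                        d[i, j] = Math.Min(
                            Math.Min(d[i - 1, j] + 1,   // remove a letter
                                     d[i, j - 1] + 1),  // add a letter
                            d[i - 1, j - 1] + substitution);
                    }
                }
                return d[a.Length, b.Length];
            }
        }

    Running it on the two examples above gives 2 and 4, matching the hand-worked answers.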

    Read the article

  • Encapsulating user input of data for a class (C++)

    - by Dr. Monkey
    For an assignment I've made a simple C++ program that uses a superclass (Student) and two subclasses (CourseStudent and ResearchStudent) to store a list of students and print out their details, with different details shown for the two different types of students (by overriding the display() method from Student). My question is about how the program collects input from the user for things like the student name, ID number, unit and fee information (for a course student) and research information (for research students).

    My implementation has the prompting for user input and the collecting of that input handled within the classes themselves. The reasoning behind this was that each class knows what kind of input it needs, so it makes sense to me to have it know how to ask for it (given an ostream through which to ask and an istream to collect the input from). My lecturer says that the prompting and input should all be handled in the main program, which seems to me somewhat messier, and would make it trickier to extend the program to handle different types of students.

    I am considering, as a compromise, making a helper class that handles the prompting and collection of user input for each type of Student, which could then be called on by the main program. The advantage of this would be that the student classes don't have as much in them (so they're cleaner), but they can still be bundled with the helper classes if the input functionality is required. This also means more classes of Student could be added without having to make major changes to the main program, as long as helper classes are provided for these new classes. Also, the helper class could be swapped for an alternative-language version without having to make any changes to the student class itself.

    What are the major advantages and disadvantages of the three different options for user input (fully encapsulated, helper class, or in the main program)?

    Read the article

  • Type result with Ternary operator in C#

    - by Vaccano
    I am trying to use the ternary operator, but I am getting hung up on the type it thinks the result should be. Below is an example that I have contrived to show the issue I am having:

        class Program
        {
            public static void OutputDateTime(DateTime? datetime)
            {
                Console.WriteLine(datetime);
            }

            public static bool IsDateTimeHappy(DateTime datetime)
            {
                if (DateTime.Compare(datetime, DateTime.Parse("1/1")) == 0)
                    return true;
                return false;
            }

            static void Main(string[] args)
            {
                DateTime myDateTime = DateTime.Now;

                // This line has the compile issue:
                OutputDateTime(IsDateTimeHappy(myDateTime) ? null : myDateTime);

                Console.ReadLine();
            }
        }

    On the line indicated above, I get the following compile error:

        Type of conditional expression cannot be determined because there is no implicit conversion between '<null>' and 'System.DateTime'

    I am confused because the parameter is a nullable type (DateTime?). Why does it need to convert at all? If it is null then use that; if it is a date time then use that. I was under the impression that:

        condition ? first_expression : second_expression;

    was the same as:

        if (condition)
            first_expression;
        else
            second_expression;

    Clearly this is not the case. What is the reasoning behind this? (NOTE: I know that if I make "myDateTime" a nullable DateTime then it will work. But why does it need that? As I stated earlier, this is a contrived example. In my real example "myDateTime" is a data-mapped value that cannot be made nullable.)
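
    For what it's worth, the conditional operator infers its type only from its two branch expressions, not from the parameter it is being passed to, so 'null' and 'DateTime' give it no common type to settle on. A minimal sketch of the usual fix, based on the code above, is to cast one branch to DateTime? so both branches share a type (this does not require changing the myDateTime variable itself):

        // Either cast works; the conditional expression then has the single type DateTime?.
        OutputDateTime(IsDateTimeHappy(myDateTime) ? (DateTime?)null : myDateTime);
        // or
        OutputDateTime(IsDateTimeHappy(myDateTime) ? null : (DateTime?)myDateTime);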

    Read the article

  • What are the arguments against the inclusion of server side scripting in JavaScript code blocks?

    - by James Wiseman
    I've been arguing for some time against embedding server-side tags in JavaScript code, but was put on the spot today by a developer who seemed unconvinced. The code in question was a legacy ASP application, although this is largely unimportant as it could equally apply to ASP.NET or PHP (for example). The example in question revolved around the use of a constant that they had defined in server-side code:

        'VB
        Const MY_CONST: MY_CONST = 1
        If sMyVbVar = MY_CONST Then
            'Do Something
        End If

        //JavaScript
        if (sMyJsVar === "<%= MY_CONST%>"){
            //DoSomething
        }

    My standard arguments against this are:

    1. Script injection: the server-side tag could include code that breaks the JavaScript code.
    2. Unit testing: it is harder to isolate units of code for testing.
    3. Code separation: we should keep web page technologies apart as much as possible.

    The reason for doing this was so that the developer did not have to define the constant in two places. They reasoned that, as it was a value that they controlled, it wasn't subject to script injection. This reduced my justification for (1) to "We're trying to keep the standards simple, and defining exception cases would confuse people". The unit testing and code separation arguments did not hold water either, as the page itself was a horrible amalgam of HTML, JavaScript, ASP.NET, CSS, XML... you name it, it was there. No code that was ever going to be included in this page could possibly be unit tested. So I found myself feeling like a bit of a pedant insisting that the code be changed, given the circumstances. Are there any further arguments that might support my reasoning, or am I, in fact, being a bit pedantic in this insistence?

    Read the article

  • I can learn either C or Java, which one should I choose first? Should I take them concurrently?

    - by GR1000
    I realize this is a subject of hot debate, but I'm interested in opinions that relate to my specific situation. I want to learn the basics and fundamentals of programming, so I'm already taking a college course in general programming concepts. It isn't covering a specific language, but it's giving me a solid foundation that I can build upon when I move on to a class that teaches a specific language. My two options for a specific language are Java and C, because those are the two languages taught at the college I want to take classes from. What I want to do is learn a complex language so that I can apply that knowledge to languages that I use, or will eventually use, in my current job building web pages: XHTML, CSS, JavaScript, PHP, XML, ActionScript. I'm not necessarily interested in becoming a Java developer or a C developer in the immediate future, but I do have aspirations of developing web applications and iPod/iPhone applications. So, basically, I'm looking for answers to these questions and the reasoning behind them:

    1. Do I take the introductory course in Java first and then take the intro course in C, or do I take C first and then take Java?
    2. Is there any reason not to take them concurrently?
    3. Should I skip C altogether, as Java covers everything I need to know?

    EDIT: Thanks everyone for your thoughtful and insightful responses.

    Read the article

  • Initial capacity of collection types, i.e. Dictionary, List

    - by Neil N
    Certain collection types in .NET have an optional "initial capacity" constructor parameter, e.g.:

        Dictionary<string, string> something = new Dictionary<string, string>(20);
        List<string> anything = new List<string>(50);

    I can't seem to find what the default initial capacity is for these objects on MSDN. If I know I will only be storing 12 or so items in a dictionary, doesn't it make sense to set the initial capacity to something like 20? My reasoning is: assuming the capacity grows like it does for a StringBuilder, which doubles each time the capacity is hit, and each re-allocation is costly, why not pre-set the size to something you know will hold your data, with some extra room just in case? If the initial capacity is 100, and I know I will only need a dozen or so, it seems as though the rest of that allocated RAM is allocated for nothing. Please spare me the "premature optimization" spiel for the O(n^n)th time. I know it won't make my apps any faster or save any meaningful amount of memory; this is mostly out of curiosity.
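
    One quick way to see the growth policy on a given runtime is to watch List<T>.Capacity as items are added (Dictionary<TKey, TValue> has no equivalent public property, so this sketch only covers List<T>):

        var list = new List<string>();
        int lastCapacity = -1;

        for (int i = 0; i < 100; i++)
        {
            list.Add("item " + i);
            if (list.Capacity != lastCapacity)
            {
                // Prints each capacity the list passes through as it grows.
                Console.WriteLine("Count = {0,3}  Capacity = {1,3}", list.Count, list.Capacity);
                lastCapacity = list.Capacity;
            }
        }

    On the .NET Framework this typically shows the capacity jumping to 4 on the first Add and doubling from there (8, 16, 32, ...), which is the StringBuilder-like doubling the question assumes.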

    Read the article

  • How I May Have Taken A Wrong Path in Programming

    - by Ygam
    I am in a major slump right now. I am a BSIT graduate, but I only started actual programming less than a year ago. I have observed that I have the following attitude towards programming:

    - I tend to be more of a purist, scorning inelegant approaches to solving problems with code
    - I tend to look at everything on a large scale, planning everything before I start coding, either in simple flowcharts or complex UML charts
    - I have a really strong impulse to refactor my code, even if I miss deadlines or prolong development times
    - I am obsessed with good directory structures and file, class, method, and variable naming conventions
    - I always want to study something new, even, as I said, at the cost of missing deadlines
    - I tend to see software development as something to engineer, to architect; that is, seeing how things relate to each other and how blocks of code can interact (I am a huge fan of loose coupling), i.e. the OOP way of thinking
    - I tend to combine OOP and procedural coding whenever I see fit
    - I want my code to execute fast (thus the elegant approaches and refactoring)

    This bothers me because I see my colleagues doing much better the other way around (aside from the fact that they have been programming since our first year in college). By the other way around I mean: they fire up their editors and get the job done much faster, because they don't really have to look at how clean their code is or how elegant their algorithms are; they don't bother with OOP however big their projects are; they mostly use web APIs, piece them together, and voila! Working code! Clients are happy, they get paid fast, at the expense of really unmaintainable or hard-to-read code that lacks structure and conventions, or slow execution of certain actions (and the common reasoning against that concern is that internet connections are much faster these days and hardware is more powerful). The excuse I often receive is that clients don't care about how you write the code, but they do care about how long it takes you to deliver it. If it works, then all is good. Now, might my "purist" approach to programming have been the wrong way to start programming? Should I just dump these purist concepts and just code the hell up, because, as I have seen, clients don't really care how beautifully coded it is?

    Read the article

  • Connection Pool Strategy: Good, Bad or Ugly?

    - by Drew
    I'm in charge of developing and maintaining a group of web applications that are centered around similar data. The architecture I decided on at the time was that each application would have its own database and web-root application. Each application maintains a connection pool to its own database and to a central database for shared data (logins, etc.). A co-worker has been positing that this strategy will not scale, because having so many different connection pools is not scalable, and that we should refactor the database so that all of the different applications use a single central database, with any modifications that may be unique to a system reflected in that one database, and then use a single pool powered by Tomcat. He has posited that there is a lot of "metadata" that goes back and forth across the network to maintain a connection pool. My understanding is that with proper tuning, using only as many connections as necessary across the different pools (low-volume apps getting fewer connections, high-volume apps getting more, etc.), the number of pools doesn't matter compared to the number of connections; or, more formally, that the difference in overhead required to maintain 3 pools of 10 connections is negligible compared to 1 pool of 30 connections. The reasoning behind initially breaking the systems into a one-app-one-database design was that there are likely going to be differences between the apps, and each system could make modifications to its schema as needed. Similarly, it eliminated the possibility of system data bleeding through to other apps. Unfortunately there is not strong leadership in the company to make a hard decision. Although my co-worker is backing up his worries only with vagueness, I want to make sure I understand the ramifications of multiple small databases/connection pools versus one large database/connection pool.

    Read the article

  • Is it "right" to translate error messages?

    - by Iraklis
    This is somewhat subjective, depending on the target translation language, but bear with me for a sec. I have recently been involved in a translation project. The goal was to translate the strings of an MVC framework into the Greek language. 70% of the language strings of the framework were translated; however, 30% were intentionally left out. The decision was that we would not translate error messages aimed at the developer of the application. The reasoning behind this (in short) was that they:

    - are aimed at designers/programmers. Programmers (and even designers :) ) should have a basic understanding of English, at least enough so they can search for it on Google if they do not know what it means. (racist?)
    - are aimed at the developer, and in a perfect world should not be displayed to the end user of the application, as they concern the inner workings of the web application itself, e.g. "You must set the database name in your database config file."
    - and perhaps most importantly, they make the life of the developer harder when he tries to get more information/help regarding the error. For example, the above error yields 8 results in Google (in quotes), whereas its Greek translation yields exactly 0.

    I know that this depends on the popularity of the target translation language and on the application itself. For example, I'm guessing that there is a vast amount of documentation regarding German SAP error messages (I know, I know, SAP IS German, but you get the point), as opposed to Greek error message documentation for random application X, which has about 500 installations worldwide. So to summarize: when you develop language translation packs for your applications, do you translate error messages? Do you only do so for predominant languages like English/Spanish/German/French? Or do you leave them intact? I'm not looking for the "right" or "correct" answer; I'm looking for a "best-practices" answer, or whether this problem is addressed in any "official" standard/policy that you have had experience with.

    Read the article

  • Is this an acceptable use of "ASCII arithmetic"?

    - by jmgant
    I've got a string value of the form 10123X123456, where 10 is the year, 123 is the day number within the year, and the rest is unique system-generated stuff. Under certain circumstances, I need to add 400 to the day number, so that the number above, for example, would become 10523X123456. My first idea was to substring those three characters, convert them to an integer, add 400 to it, convert them back to a string and then call replace on the original string. That works. But then it occurred to me that the only character I actually need to change is the third one, and that the original value would always be 0-3, so there would never be any "carrying" problems. It further occurred to me that the ASCII code points for the digits are consecutive, so adding the number 4 to the character "0", for example, would result in "4", and so forth. So that's what I ended up doing. My question is, is there any reason that won't always work? I generally avoid "ASCII arithmetic" on the grounds that it's not cross-platform or internationalization friendly. But it seems reasonable to assume that the code points for the digits will always be sequential, i.e., "4" will always be 1 more than "3". Anybody see any problem with this reasoning? Here's the code:

        string input = "10123X123456";
        input[2] += 4; // Output should be 10523X123456
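
    One wrinkle worth noting: as written, the snippet will not compile in C#, because a string's indexer is read-only. A minimal sketch of the same idea using a char array (the digit characters '0'-'9' are contiguous in both ASCII and Unicode, so the arithmetic itself is safe):

        string input = "10123X123456";

        // Strings are immutable, so copy to a char array, bump the digit, and rebuild.
        char[] chars = input.ToCharArray();
        chars[2] = (char)(chars[2] + 4);   // '1' + 4 == '5'
        string output = new string(chars);

        Console.WriteLine(output);         // 10523X123456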

    Read the article

  • Should a C++ constructor do real work?

    - by Wade Williams
    I'm struggling with some advice I have in the back of my mind but for which I can't remember the reasoning. I seem to remember at some point reading some advice (can't remember the source) that C++ constructors should not do real work. Rather, they should initialize variables only. The advice went on to explain that real work should be done in some sort of init() method, to be called separately after the instance was created. The situation is that I have a class that represents a hardware device. It makes logical sense to me for the constructor to call the routines that query the device in order to build up the instance variables that describe the device. In other words, once new instantiates the object, the developer receives an object which is ready to be used, with no separate call to object->init() required. Is there a good reason why constructors shouldn't do real work? Obviously it could slow allocation time, but that wouldn't be any different from calling a separate method immediately after allocation. I'm just trying to figure out what gotchas I'm not currently considering that might have led to such advice.

    Read the article

  • MySQL - What is wrong with this query or my database? Terrible performance.

    - by Moss
        SELECT *
        FROM `employees` a
        LEFT JOIN (SELECT phone1 p1, count(*) c
                   FROM `employees`
                   GROUP BY phone1) b
            ON a.phone1 = b.p1;

    I'm not sure if it is this query in particular that has the problem. I have been getting terrible performance in general with this database. The table in question has 120,000 rows. I have tried this particular query remotely and locally with the MyISAM and InnoDB engines, with different types of joins, and with and without an index on phone1. I can get this to complete successfully in about 4 minutes on a 10,000 row table, but performance drops exponentially with larger tables. Remotely it will lose the connection to the server, and locally it brings my system to its knees and seems to go on forever.

    This query is only a smaller step I was trying to take when a larger query couldn't complete. Maybe I should explain the whole scenario. I have one big flat ugly table that lists a bunch of people, their contact info and the info of the companies they work for. I'm trying to normalize the database and intelligently determine which phone numbers apply to individual people and which apply to an office location. My reasoning is that if a phone number occurs multiple times, and the number of occurrences equals the number of times that the street address it is attached to occurs, then it must be an office number. So the first step is to count each phone number, grouping by phone number. Normally if you just use COUNT()...GROUP BY it will only list the first record it finds in that group, so I figured I have to join the full table to the count table where the phone number matches. This does work, but as I said I can't successfully complete it on any table much larger than 10,000 rows. This seems pathetic, and this doesn't seem like a crazy query to do. Is there a better way to achieve what I want, or do I have to break my large table into 12 pieces, or is there something wrong with the table or db?

    Read the article

  • JQuery click anywhere except certain divs and issues with dynamically added html

    - by jCuga
    I want this webpage to highlight certain elements when you click on one of them, but if you click anywhere else on the page, then all of these elements should no longer be highlighted. I accomplished this with the following, and it works just fine except for one thing (described below):

        $(document).click(function() {
            // Do stuff when clicking anywhere but on elements of class suggestion_box
            $(".suggestion_box").css('background-color', '#FFFFFF');
        });

        $(".suggestion_box").click(function() {
            // means you clicked on an object belonging to class suggestion_box
            return false;
        });

        // the code for handling onclicks for each element
        function clickSuggestion() {
            // remove all highlighting
            $(".suggestion_box").css('background-color', '#FFFFFF');
            // then highlight only a specific item
            $("div[name=" + arguments[0] + "]").css('background-color', '#CCFFCC');
        }

    This way of enforcing the highlighting of elements works fine until I add more HTML to the page without a new page load, which is done with .append() and .prepend(). What I suspected from debugging was that the page is not "aware" of the new elements that were added to the page dynamically. Despite the new, dynamically added elements having the appropriate class names/IDs/names/onclicks etc., they never get highlighted like the rest of the elements (which continue to work fine the entire time). I was wondering if a possible reason my approach does not work for the dynamically added content is that the page is not able to recognize elements that were not present during the page load. And if this is a possibility, is there a way to reconcile it without a page load? If this line of reasoning is wrong, then the code I have above is probably not enough to show what's wrong with my webpage. But I'm really just interested in whether or not this line of thought is a possibility.

    Read the article

  • What programming language do you wish would quietly retire? [closed]

    - by Gregory Higley
    This is the inverse of the "What programming language do you wish would catch on?" question. I was a Delphi programmer for many years, and I still appreciate its power, but I dislike verbose programming languages. So I would love to see Pascal put out to pasture. The same goes for BASIC in any form, despite the fact that it's the language I cut my teeth on. When I look at cathedrals of beauty like Haskell and REBOL, BASIC just makes me cringe. (VB.NET is tolerable, but barely. It has a few nice language features I'd like to see moved to C#.) My dislike of Pascal and VB.NET is subjective. They are powerful languages, but I dislike their syntax esthetically. Try to explain your reasoning, if you can, even if it's just "I don't like its syntax." This question is not meant to be a flame war, argumentative, or hateful. It's meant to be a straightforward, honest discussion of programmers' dislikes.

    Read the article

  • Java File Handling, what did I do wrong?

    - by Urda
    I wrote up a basic file handler for a Java homework assignment, and when I got the assignment back I had some notes about failing to handle a few cases:

    - the buffer from the file could have been null
    - the file was not found
    - the file stream wasn't closed

    Here is the block of code that is used for opening a file:

        /**
         * Create a Filestream, Buffer, and a String to store the Buffer.
         */
        FileInputStream fin = null;
        BufferedReader buffRead = null;
        String loadedString = null;

        /** Try to open the file from user input */
        try {
            fin = new FileInputStream(programPath + fileToParse);
            buffRead = new BufferedReader(new InputStreamReader(fin));
            loadedString = buffRead.readLine();
            fin.close();
        }
        /** Catch the error if we can't open the file */
        catch(IOException e) {
            System.err.println("CRITICAL: Unable to open text file!");
            System.err.println("Exiting!");
            System.exit(-1);
        }

    The one comment I had from him was that fin.close(); needed to be in a finally block, which I did not have at all. But I thought that the way I created the try/catch would have prevented an issue with the file not opening. Let me be clear on a few things: this is not for a current assignment (I'm not trying to get someone to do my work for me); I have already created my project and have been graded on it. I did not fully understand my professor's reasoning myself. Finally, I do not have a lot of Java experience, so I was a little confused about why my catch wasn't good enough.

    Read the article

  • What are the drawbacks of this Classing format?

    - by Keysle
    This is a 3-layer example of my classing format:

        function __(_){ return _.constructor }

        //class
        var _ = ( CLASS = function(){
            this.variable = 0;
            this.sub = new CLASS.SUBCLASS();
        }).prototype;
        _.func = function(){
            alert('lvl'+this.variable);
            this.sub.func();
        }
        _.divePeak = function(){
            alert('lvl'+this.variable);
            this.sub.variable += 5;
        }

        //sub class
        _ = ( __(_).SUBCLASS = function(){
            this.variable = 1;
            this.sub = new CLASS.SUBCLASS.DEEPCLASS();
        }).prototype;
        _.func = function(){
            alert('lvl'+this.variable);
            this.sub.func();
        }

        //deep class
        _ = ( __(_).DEEPCLASS = function(){
            this.variable = 2;
        }).prototype;
        _.func = function(){
            alert('lvl'+this.variable);
        }

    Before you blow a gasket, let me explain myself. The purpose behind the underscores is to cut down the time needed to specify functions for a class and also to specify subclasses of a class. To me it's easier to read. I KNOW this interferes with underscore.js if you intend to use it in your classes; I'm sure _.js can easily be switched over to another $ymbol though ... oh wait. But I digress. Why have classes within a class? Because solar.system() and social.system() mean two totally different things, but it's convenient to use the same name. Why use underscores to manage the definition of the class? Because "Solar.System.prototype" took me about 2 seconds to type out and 2 typos to correct. It also keeps all function names for all classes in the same column of text, which is nice for legibility. All I'm doing is presenting my reasoning behind this method and why I came up with it. I'm 3 days into learning OO JS, and I am very willing to accept that I might have messed up.

    Read the article

< Previous Page | 9 10 11 12 13 14 15 16  | Next Page >