Search Results

Search found 23323 results on 933 pages for 'worst is better'.


  • Please recommend the one SQL book for a developer without a lot of SQL experience.

    - by Hamish Grubijan
     I have too many hobbies outside of my profession, so I am hoping to read just one good book and get a tad better at SQL. My background: I took one boring, theoretical class in databases and was exposed to SQL professionally (in addition to several other languages and technologies) for a year and a half. I've done about 5 years of C#/Java work professionally. By "professionally" I mean doing it full-time while someone paid me more than $25/hr for it - not necessarily that I created masterpieces along the way :)

     I want to become better at SQL (the coding aspect; DBA work is not of particular importance to me right now), and I am looking for one book to give me a solid foundation in it. When I needed to learn some C almost from scratch, I used (and loved) this book: http://www.amazon.com/Programming-Language-2nd-Brian-Kernighan/dp/0131103628 I am hoping to find one just like it for SQL. I am not doing web development now or in the near future, and I am looking for something that is hopefully not specific to any one sub-industry. Thanks in advance.

    Read the article

  • Casting pointer to object to void * in C++

    - by JB
     I've been reading Stack Overflow too much and have started doubting all the code I've ever written; I keep thinking "Is that undefined behaviour?" even in code that has been working for ages.

     So my question: is it safe and well-defined behaviour to cast a pointer to an object (in this case, abstract interface classes) to a void*, and then later cast it back to the original class and call methods through it?

     I'm fully aware that the code that does this is probably awful. I wouldn't even consider writing it like this now (this is old code which I don't really want to change), so I'm not looking for a discussion of better ways to do this; I already know how to write it better if I ever did this again. But if it's actually broken to rely on this in C++ then I'll have to look at changing the code; if it's merely awful code then changing it won't be a priority.

     I would have had no doubts about something this simple a year or two ago, but as my understanding of C++ increases I find I have more and more worries about code being safe under the standard even if it works perfectly well. Perhaps reading too much Stack Overflow is a bad thing for productivity sometimes :P
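
     A minimal sketch of the round trip being asked about (the class names are made up, not the poster's code): converting an object pointer to void* and back to the same pointer type with static_cast yields the original pointer value, so the calls below are well defined. The usual trap is casting the void* back to a different type than the one that was stored.

         #include <iostream>

         struct Interface {
             virtual ~Interface() = default;
             virtual void run() = 0;
         };

         struct Impl : Interface {
             void run() override { std::cout << "running\n"; }
         };

         int main() {
             Impl obj;
             Interface* iface = &obj;

             // Storing the pointer as void* preserves its value.
             void* opaque = static_cast<void*>(iface);

             // Casting back to the SAME type it was converted from is well defined.
             Interface* back = static_cast<Interface*>(opaque);
             back->run();

             // The dangerous variant is casting the void* to a different type than
             // was stored (e.g. storing an Interface* but casting back to Impl* in
             // a multiple-inheritance hierarchy, where the addresses can differ).
             return 0;
         }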

    Read the article

  • Keeping track of dependency revisions

    - by Samaursa
     I have a project with several dependencies that are in various repositories. Each time I commit changes to my project, I write down the revision numbers of all the dependent repositories, so that if I ever have to come back to this revision (let's call it 5), I immediately know which revisions of the dependent repositories revision 5 is guaranteed to work with, and can update the dependencies to those revisions, compile, and run the project. So for example I might have:

         Dep1 @ revision 10
         Dep2 @ revision 20
         Dep3 @ revision 10
         Proj @ revision 35

     And let's say that when Proj was on revision 17, Dep1 was on revision 5, Dep2 on revision 13 and Dep3 on revision 3. So in my SVN logs I recorded something like this:

         !! Works with Dep1 Rev 5, Dep2 Rev 13, Dep3 Rev 3

     To me this seems primitive and makes me believe there is a better way to do it. In one of my other questions, the Ivy dependency manager was recommended; I have not looked at it in detail yet (it seems complicated, and yet another thing I must learn). It seems to me that the log in SVN (and Mercurial etc.) could have been split into Log and Dependencies (if any), where the latter could be switched off if there were no dependencies - unless of course I am unaware of an easier/better solution. This would allow for a cleaner log that maybe even warned at each new commit to check the previously recorded dependencies again and make sure they have not changed.

     So I was wondering how everyone manages this situation, and whether you have any tips, techniques, programs or suggestions to offer. Thank you.

    Read the article

  • Handling Denormalized Schema with Eclipselink

    - by iamrohitbanga
     Hello all. I have a denormalized table containing employee information. The fields are employee id, name and department name; the primary key is a composite one consisting of all three fields, and an employee can belong to multiple departments. I want to read/write the objects in the table using the EclipseLink Dynamic Persistence API (which is in fact a wrapper on top of JPA descriptors etc.).

     Example data:

         1 e1 dep1
         2 e1 dep2
         3 e2 dep1
         4 e2 dep3
         5 e3 dep1
         5 e3 dep2
         5 e3 dep3

     A normal ReadAllQuery (select query) on the table returns a DynamicEntity corresponding to each row in the table. However, I want to group the entities by employee id and return all the departments the employee belongs to as a list. I can merge the entities after retrieving them, but it would be better if I could use some EclipseLink feature out of the box. One way to do the read is the following: I create two dynamic types corresponding to employee, one having id, name as the primary key and one having id, department as the primary key, and I create a OneToManyMapping from the first type to the second one. Then when I query the first type it does return the departments to which the employee belongs as a list of DynamicEntity of the second type. This satisfies the read scenario. Is there a better way of doing this? Is this inherently supported by EclipseLink or JPA?

     I cannot get the same dynamic type configuration working for the write scenario. When I write the changes using the writeObject method of UnitOfWork, it generates insert queries which enter the following entries in the table (id, name, department):

         102 emp_102
         102 st
         102 dep_102
         102 dep_102
         102 dep_102

     instead of:

         102 emp_102 st
         102 emp_102 dep_102
         102 emp_102 dep_102
         102 emp_102 dep_102

     Is there any way I can get writes to work with this schema using EclipseLink? I want to avoid the heavy lifting of merging the rows for such a denormalized schema, or generating each row myself before doing a write. Is there no clean way of doing this using EclipseLink or JPA? Thanks in advance.

    Read the article

  • Terminate function on System.in .. possible?

    - by Ronald
     I am currently working on a project where I have to write an agent that interacts with a server. Every 50 ms, the server reads the last thing I outputted to System.out and sends me a new set of lines as a 'state' on System.in for me to analyze before sending my next message to System.out. Also, if the server receives multiple outputs from me, it only considers the most recent one.

     As for my question: my program originally constructed a tree and then analyzed each leaf node to see which would be optimal, then waited around for the next input. But I can recursively do a deeper tree search that would make my output 'better' (and again and again to keep returning a better result). Using this, and the fact that the server only takes my most recent output, I could run each level, print my result, and start the next level. But here comes my problem: I can't be stuck in some complex algorithm while I am supposed to be receiving the next input, as I will then miss it.

     So I was wondering if there is a way to cancel whatever else I am doing when I receive something via System.in, go back to the beginning of the function, and start the search again with the new set of input (and rinse and repeat). I hope this all makes sense. Thank ye all
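
     The question is about Java, but the general shape of a fix is language-neutral: dedicate a thread to blocking on the input stream and have it raise a flag, then let the search loop check that flag between (or during) deepening iterations and restart when fresh input has arrived. A rough sketch of the pattern in C++ (all names here are invented for illustration):

         #include <atomic>
         #include <iostream>
         #include <mutex>
         #include <string>
         #include <thread>

         std::atomic<bool> new_input{false};
         std::mutex state_mutex;
         std::string latest_state;          // most recent state line from the server

         void readerLoop() {
             std::string line;
             while (std::getline(std::cin, line)) {
                 std::lock_guard<std::mutex> lock(state_mutex);
                 latest_state = line;
                 new_input = true;          // tell the search loop to restart
             }
         }

         int main() {
             std::thread reader(readerLoop);
             std::string state;
             for (int depth = 1; ; ++depth) {
                 if (new_input.exchange(false)) {
                     std::lock_guard<std::mutex> lock(state_mutex);
                     state = latest_state;
                     depth = 1;             // fresh state: start the search over
                 }
                 // Search 'state' to the current depth and print the best move so
                 // far; long searches should poll new_input and bail out early.
             }
             reader.join();                 // unreachable in this sketch
         }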

    Read the article

  • Declaring an enum within a class

    - by bporter
     In the following code snippet, the Color enum is declared within the Car class in order to limit the scope of the enum and to try not to "pollute" the global namespace.

         class Car
         {
         public:
             enum Color { RED, BLUE, WHITE };

             void SetColor( Car::Color color ) { _color = color; }
             Car::Color GetColor() const { return _color; }

         private:
             Car::Color _color;
         };

     (1) Is this a good way to limit the scope of the Color enum? Or should I declare it outside of the Car class, but possibly within its own namespace or struct? I just came across this article today, which advocates the latter and discusses some nice points about enums: http://gamesfromwithin.com/stupid-c-tricks-2-better-enums.

     (2) In this example, when working within the class, is it best to write the enum as Car::Color, or would just Color suffice? (I assume the former is better, just in case there is another Color enum declared in the global namespace. That way, at least, we are explicit about the enum to which we are referring.)

     Thanks in advance for any input on this.
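
     For reference, a small sketch of the alternative mentioned in (1): wrapping the enum in its own struct (or namespace) so that its values must be qualified at the call site, which is roughly what the linked article advocates. The code below is illustrative only, not taken from the article; note also that C++11's scoped enumerations (enum class) later provided the same effect directly.

         // Wrapper struct: values are referred to as Color::RED, never as bare RED.
         struct Color
         {
             enum Type { RED, BLUE, WHITE };
         };

         class Car
         {
         public:
             void SetColor( Color::Type color ) { _color = color; }
             Color::Type GetColor() const { return _color; }

         private:
             Color::Type _color;
         };

         // C++11 and later: a scoped enumeration gives the same effect on its own.
         // enum class Color { Red, Blue, White };   // used as Color::Red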

    Read the article

  • Keeping the number of objects and event-listeners on stage as low as possible

    - by DevEight
     Hello. I am creating a site with lots of big scrollable text-boxes in it. Each text-box object contains some text and two buttons to scroll up/down with; the scroll buttons each have an event listener so the text moves when you click them. These text-boxes are stacked on top of each other, with all except one having an alpha of 0. If I want to change which text-box is active, I move it to the front and call a small TweenLite animation. To the left (outside the text-box objects) I have an object similar to a menu, which also has about 12 or so event listeners (one for every button).

     This turns out to cause a lot of lag, and it's very troublesome for my laptop to run it. What I need help with is reducing the number of event listeners on the stage and also the number of text-boxes. What I was thinking was to add the text-boxes using ActionScript so I only have one on the stage at a time, but I couldn't figure out how to do it. I also thought it might be better to just use one big event listener and decide from mouseX and mouseY which button the user is trying to push. Are there any better alternatives to this? And if so, please elaborate on how to do it.

    Read the article

  • vSphere education - What are the downsides of configuring virtual machines with *too* much RAM?

    - by ewwhite
     VMware memory management seems to be a tricky balancing act. With cluster RAM, Resource Pools, VMware's management techniques (TPS, ballooning, host swapping), in-guest RAM utilization, swapping, reservations, shares and limits, there are a lot of variables.

     I'm in a situation where clients are using dedicated vSphere cluster resources; however, they are configuring the virtual machines as though they were on physical hardware. In turn, this means a standard VM build may have 4 vCPUs and 16GB or more of RAM. I come from the school of starting small (1 vCPU, minimal RAM), checking real-world use and adjusting up as necessary.

     Some examples from a "problem" cluster (originally shown as screenshots):

     - Resource pool summary: looks almost 4:1 overcommitted; note the high amount of ballooned RAM.
     - Resource allocation: the Worst Case Allocation column shows that these VMs would have access to less than 50% of their configured RAM under constrained conditions.
     - Real-time memory utilization graph of the top VM in the listing above: 4 vCPUs and 64GB of RAM allocated, yet it averages under 9GB of use.
     - A summary view of the same VM.

     So: what are the downsides of overcommitting and overconfiguring resources (specifically RAM) in vSphere environments? Assuming the VMs can run in less RAM, is it fair to say that there's overhead to configuring virtual machines with more RAM than they need? What is the counter-argument to "if a VM has 16GB of RAM allocated but only uses 4GB, what's the problem?" - e.g. do customers need to be educated? And what specific metric should be used to meter RAM usage - tracking the peaks of "Active" versus time?

    Read the article

  • Best (functional?) programming language to learn coming from Mathematica

    - by Will Robertson
     As a mechanical engineering PhD student, I haven't had a great pedigree in programming as part of my “day job”. I started out in Matlab (having written some Hypercard and Applescript back in the day, and being introduced to Ada, of all things, in my 1st undergrad year), learned to program—if you can call it that—in (La)TeX, and finally discovered and fell for Mathematica. Now I'm interested in learning a "real" programming language that I can enjoy in the same sort of style as Mathematica, which stresses functional programming since it seems to map more nicely onto how certain kinds of mathematics can be written algorithmically.

     So which functional language should I learn? I guess the obvious answer is “as many as possible”, but let's start out humble and give a single, well-considered option a good crack. I've heard good things about, say, Haskell and Scala, but I wonder if (given my non–computer science background) I'd be better off starting in more “grounded” territory and going with Ruby or Python (the latter having the big advantage of being used for Sage, which I'd also like to investigate…after my PhD).

     Well, I guess this is pretty subjective, so perhaps I could rephrase: would it be better to start looking at Haskell (say) straight after an ad hoc introduction to functional programming in Mathematica, or will I get more out of learning Python (say) first? In reference to the question "what do I want to do with it?", I guess my answer is "fun, and learning more". I've got a list of languages that I'd like to look at, and I don't know how to trim it down. And I'd rather start with something a little higher-level than C, simply so that I can be somewhat productive without having to re-invent many wheels for any code I'd like to write.

    Read the article

  • Speed/expense of SQLite query vs. List.contains() for "in-set" icon on list rows

    - by kpdvx
     An application I'm developing requires that the app maintain a local list of things, let's say books, in a local "library." Users can access their local library of books and search for books using a remote web service. The app will be aware of other users of the app through this web service, and users can browse other users' lists of books in their library. Each book is identified by a unique bookId (represented as an int). When viewing books returned through a search result, or when viewing another user's book library, the individual list row cells need to visually indicate whether the book is in the user's local library or not. A user can have at most 5,000 books in the library, stored in SQLite on the device (and synchronized with the remote web service).

     My question is: to determine whether the book shown in a list row is in the user's library, would it be better to ask SQLite directly (via SELECT COUNT(*)...) or to maintain, in memory, a List or int[] array of some sort containing the unique bookIds? So, on each row display, do I query SQLite or check if the List or int[] array contains the unique bookId? Because the user can have at most 5,000 books, and each bookId occupies 4 bytes, at most this would use ~20kB.

     In thinking about this, and in typing this out, it seems obvious to me that it would be far better for performance if I maintained a list or int[] array of in-library bookIds rather than querying SQLite. (The only caveat to maintaining an int[] array is that if books are added or removed I'll need to grow or shrink the array by hand, so with this option I'll most likely use an ArrayList or Vector, though I'm not sure of the additional memory overhead of using Integer objects as opposed to primitives.) Opinions, thoughts, suggestions?
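
     The in-memory option can be sketched as follows (the question concerns a mobile app, but the idea is language-neutral; this illustration is C++ and the names are invented): keep the in-library bookIds in a hash set and test membership per row, instead of issuing a SELECT COUNT(*) against SQLite for every row drawn.

         #include <unordered_set>

         // Holds the ids of books in the local library; ~5,000 ints stays small.
         class LibraryIndex {
         public:
             void add(int bookId)            { ids_.insert(bookId); }
             void remove(int bookId)         { ids_.erase(bookId); }
             bool contains(int bookId) const { return ids_.count(bookId) != 0; }

         private:
             std::unordered_set<int> ids_;   // average O(1) membership test
         };

         // Per-row check when rendering a search result or another user's library:
         //     bool inLibrary = index.contains(row.bookId);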

    Read the article

  • C++ Printing: Printer jams, what am I doing wrong?

    - by Kleas
     I have a problem with printing in C++. As far as I know, this code used to work on my previous printer, but ever since I got another one (an HP C7280) it has started giving problems. Whenever I try to print anything, even an empty page, the page JAMS the printer and I have to manually remove it. I have no clue why this is happening. Am I doing something wrong, is it a driver problem, or are there better ways to print in C++? I am using Windows 7 64-bit, but this problem also presented itself when I was using Windows Vista 64-bit. I use the following code:

         PRINTDLG pd;
         ZeroMemory(&pd, sizeof(pd));
         pd.lStructSize = sizeof(pd);
         pd.hwndOwner = mainWindow;
         pd.hDevMode = NULL;
         pd.hDevNames = NULL;
         pd.Flags = PD_USEDEVMODECOPIESANDCOLLATE | PD_RETURNDC;
         pd.nCopies = 1;
         pd.nMinPage = 1;
         pd.nMaxPage = 0xFFFF;

         if (PrintDlg(&pd) == TRUE)
         {
             DOCINFO di;
             di.cbSize = sizeof(DOCINFO);
             di.lpszDocName = "Rumitec en Roblaco Print";
             di.lpszOutput = (LPTSTR)NULL;
             di.fwType = 0;

             // Start printing
             StartDoc(pd.hDC, &di);
             StartPage(pd.hDC);
             initPrinter(pd.hDC);

             // ...
             // Do some drawing
             // ...

             // End printing
             EndPage(pd.hDC);
             EndDoc(pd.hDC);
             DeleteDC(pd.hDC);
         }

     Am I doing something wrong? Alternatively, is there a better, easier, more modern way to do it?

    Read the article

  • "2d Search" in Solr or how to get the best item of the multivalued field 'items'?

    - by Karussell
     The title is a bit awkward, but I couldn't find a better one. My problem is as follows: I have several users stored as documents, and I am storing several key-value pairs or items (which have an id) for each document. Now, if I apply highlighting I can get the first n items; if you have several hundreds of such items this highlighting is necessary and works nicely. But there are two problems:

     1. The highlighted text won't contain the id, so retrieving additional information about the highlighted item text is ugly - e.g. you need to store the id in the text itself so that the highlighter returns it. Adding the id to the hl.fl parameter does not help.
     2. You will not get the most relevant n items; you will get the first n items.

     So how can I find the best items of a document with multiple such items? I will now add my own findings as answers, but as I will point out, each of them has its drawbacks. Hopefully one of you can point me to a better solution.

    Read the article

  • My scanner isn't responding. It is connected to a SCSI interface using Windows XP

    - by Bob
     I have a Microtek ScanMaker 9600XL scanner. The best part about it is that it is 12x17 inches. Wowza! The worst part is that I had it working at one point - with the same cable, same card, same computer - but have since re-installed Windows XP on it. Currently it will turn on and blink the Power and Ready lights, which should be solid. I've done my best to find documentation for this, but all I've really gotten is the content on the Microtek site. I've tried turning the scanner on, then the PC, and turning the PC on, then the scanner. When I launch the software it pops up a dialog saying "ScanWizard Pro can't find any scanners! Use SCSI Check to find a scanner." I know the scanner has a pair of little buttons on the back that cycle a counter up/down; I think it goes 0-7. Any thoughts on what that does, or how to proceed with troubleshooting? I think my next step is to try each of those numbers, and for each number try both PC-booted-first and scanner-booted-first...

    Read the article

  • Optimized 2D Tile Scrolling in OpenGL

    - by silicus
     Hello, I'm developing a 2D side-scrolling game and I need to optimize my tiling code to get a better frame rate. As of right now I'm using a texture atlas and 16x16 tiles at a 480x320 screen resolution. The level scrolls in both directions and is significantly larger than one screen (thousands of pixels); I use glTranslate for the actual scrolling. So far I've tried:

     - Drawing only the on-screen tiles using glTriangles, 2 per square tile (too much overhead).
     - Drawing the entire map as a display list (great on a small level, way too slow on a large one).
     - Partitioning the map into display lists half the size of the screen, then culling display lists (still slows down for 2-directional scrolling; overdraw is not efficient).

     Any advice is appreciated, but in particular I'm wondering:

     - I've seen vertex arrays/VBOs suggested for this because they're dynamic. What's the best way to take advantage of this? If I simply keep one screen of vertices plus a bit of overdraw, I'd have to recopy the array every few frames to account for the change in relative coordinates (shift everything over and add the new rows/columns). If I use more overdraw this doesn't seem like a big win; it's like the half-screen display list idea. (See the sketch below.)
     - Does glScissor give any gain if used on a bunch of small tiles like this, be it a display list or a vertex array/VBO?
     - Would it be better just to build the level out of large textures and then use glScissor? Would losing the memory saving of tiling be an issue for mobile development if I do this (just curious, I'm currently on a PC)? This approach was mentioned here.

     Thanks :)
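
     For the first bullet above, here is one way the dynamic vertex-array idea is often structured (a sketch only - the tile size, atlas layout and every name below are assumptions, not code from the question): rebuild a CPU-side buffer each frame containing only the tiles that overlap the view, rather than shifting and patching a persistent array. The buffer would then be streamed to the GPU each frame, e.g. with glBufferData(..., GL_DYNAMIC_DRAW) or drawn with client-side vertex arrays on fixed-function targets.

         #include <algorithm>
         #include <cstdint>
         #include <vector>

         struct Vertex { float x, y, u, v; };

         constexpr int   TILE   = 16;
         constexpr int   VIEW_W = 480, VIEW_H = 320;
         constexpr float ATLAS  = 16.0f;   // assumed 16x16 grid of tiles in the atlas

         // Emit two triangles per visible tile into 'out', in view-space coordinates.
         void buildVisibleTiles(const std::vector<uint16_t>& map, int mapW, int mapH,
                                float camX, float camY, std::vector<Vertex>& out)
         {
             out.clear();
             int x0 = std::max(0, static_cast<int>(camX) / TILE);
             int y0 = std::max(0, static_cast<int>(camY) / TILE);
             int x1 = std::min(mapW - 1, (static_cast<int>(camX) + VIEW_W) / TILE);
             int y1 = std::min(mapH - 1, (static_cast<int>(camY) + VIEW_H) / TILE);

             for (int ty = y0; ty <= y1; ++ty) {
                 for (int tx = x0; tx <= x1; ++tx) {
                     uint16_t id = map[ty * mapW + tx];
                     float x = tx * TILE - camX, y = ty * TILE - camY;
                     float u = (id % 16) / ATLAS, v = (id / 16) / ATLAS;
                     float s = 1.0f / ATLAS;          // UV extent of one tile
                     Vertex a{x, y, u, v},            b{x + TILE, y, u + s, v},
                            c{x + TILE, y + TILE, u + s, v + s}, d{x, y + TILE, u, v + s};
                     // two triangles per tile quad
                     out.insert(out.end(), {a, b, c, a, c, d});
                 }
             }
         }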

    Read the article

  • Sorting by some field and fetching whole tree from DB

    - by Niaxon
     Hello everyone, I am trying to build a file browser in tree form and have a problem sorting it. I use PHP and MySQL. I've created a mixed (nested set + adjacency) table 'element' with the following fields: element_id, left_key, right_key, level, parent_id, element_name, element_type (enum: 'folder','file'), element_size. Let's not discuss right now whether it would be better to move the information about an element (name, type, size) into another table. The function to scan a specified directory and fill the table works correctly. Notably, I add elements to the tree in a specific order: folders first, then files. After that I can easily fetch and display the whole table on the page using a simple query:

         SELECT * FROM element WHERE 1=1 ORDER BY left_key

     With the result of that query and another function I can generate the correct HTML code (<ul><li>... and so on). Now back to the question (finally, huh?). I am struggling to add sorting functionality. For example, I want to order my result by size. Here I need to keep in mind the whole hierarchy of the tree and the rule: folders first, files later. I believe I can do that by generating a recursive query in PHP:

         SELECT * FROM element WHERE parent_id = {$parentId}
         ORDER BY element_type (so folders would be first), size (or name for example)

     After that, for each result which is a folder, I will send another query to get its contents. It's also possible to fetch the whole tree by left_key and then sort it in PHP as an array, but I guess that would be worse :) I wonder if there is a better and more efficient way to do this?

    Read the article

  • Caching queries in Django

    - by dolma33
     In a Django project I only need to cache a few queries, using a cache table instead of memcached because of server limitations. One of those queries looks like this: let's say I have a Parent object which has a lot of Child objects, and I need to store the result of the simple query parent.children.all(). I have no problem with that, and everything works as expected with code like:

         key = "%s_children" %(parent.name)
         value = cache.get(key)
         if value is None:
             cache.set(key, parent.children.all(), CACHE_TIMEOUT)
             value = cache.get(key)

     But sometimes, just sometimes, the cache.set does nothing, and after executing cache.set, cache.get(key) keeps returning None. After some testing, I've noticed that cache.set stops working when parent.children.all().count() has higher values. That means that if I'm storing (for example) 600 child objects under the key it works fine, but it won't work with 1200 children. So my question is: is there a limit to the data that a key can store? How can I override it?

     Second question: which way is "better", the above code or the following one?

         key = "%s_children" %(parent.name)
         value = cache.get(key)
         if value is None:
             value = parent.children.all()
             cache.set(key, value, CACHE_TIMEOUT)

     The second version won't cause errors if cache.set doesn't work, so it could be a workaround to my issue, but obviously not a solution. In general, forgetting about my issue, which version would you consider "better"?

    Read the article

  • What data structure should I use for a BigInt class

    - by user1086004
     I would like to implement a BigInt class which will be able to handle really big numbers. I only want to add and multiply numbers, but the class should also handle negative numbers. I wanted to represent the number as a string, but there is a big overhead in converting between string and int for adding. I want to implement addition the way it's done in high school: add the corresponding digits and, if the result is 10 or more, carry into the next position. Then I thought it would be better to handle it as an array of unsigned long long int and keep the sign separately in a bool. With this I'm afraid of the size of the int, as the C++ standard, as far as I know, guarantees only that int < float < double (correct me if I'm wrong). So when I reach some limit I should move forward in the array and start adding into the next array position. Is there any data structure that is appropriate or better for this? Thanks in advance.
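
     For what it's worth, here is a sketch of one common layout (the base, types and names are my assumptions, not a definitive design): keep the sign separate and store the magnitude as "limbs" in a fixed base, least significant first. Using base 10^9 in 32-bit limbs means every per-position sum (limb + limb + carry) fits comfortably in a 64-bit integer, so overflow during addition is never a concern:

         #include <algorithm>
         #include <cstdint>
         #include <vector>

         class BigInt {
         public:
             static constexpr uint32_t BASE = 1000000000;    // 10^9 per limb

             BigInt() = default;
             explicit BigInt(uint64_t v) {
                 while (v) { limbs_.push_back(static_cast<uint32_t>(v % BASE)); v /= BASE; }
             }

             // Schoolbook addition of magnitudes (the same-sign case); mixed signs
             // reduce to a magnitude subtraction instead.
             friend BigInt addMagnitude(const BigInt& a, const BigInt& b) {
                 BigInt r;
                 uint64_t carry = 0;
                 size_t n = std::max(a.limbs_.size(), b.limbs_.size());
                 for (size_t i = 0; i < n || carry; ++i) {
                     uint64_t sum = carry;
                     if (i < a.limbs_.size()) sum += a.limbs_[i];
                     if (i < b.limbs_.size()) sum += b.limbs_[i];
                     r.limbs_.push_back(static_cast<uint32_t>(sum % BASE));
                     carry = sum / BASE;
                 }
                 return r;
             }

         private:
             bool negative_ = false;           // sign kept apart from the magnitude
             std::vector<uint32_t> limbs_;     // least-significant limb first
         };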

    Read the article

  • multiple models in Rails with a shared interface

    - by dfondente
     I'm not sure of the best structure for a particular situation in Rails. We have several types of workshops. The administration of the workshops is the same regardless of workshop type, so the data for the workshops is in a single model. We collect feedback from participants about the workshops, and the questionnaire is different for each type of workshop. I want to access the feedback about the workshop from the workshop model, but the class of the associated model will depend on the type of workshop. If I were doing this in something other than Rails, I would set up an abstract class for WorkshopFeedback and then have subclasses for each type of workshop: WorkshopFeedbackOne, WorkshopFeedbackTwo, WorkshopFeedbackThree. I'm unsure how best to handle this with Rails. I currently have:

         class Workshop < ActiveRecord::Base
           has_many :workshop_feedbacks
         end

         class Feedback < ActiveRecord::Base
           belongs_to :workshop
           has_many :feedback_ones
           has_many :feedback_twos
           has_many :feedback_threes
         end

         class FeedbackOne < ActiveRecord::Base
           belongs_to :feedback
         end

         class FeedbackTwo < ActiveRecord::Base
           belongs_to :feedback
         end

         class FeedbackThree < ActiveRecord::Base
           belongs_to :feedback
         end

     This doesn't seem like the cleanest way to access the feedback from the workshop model, as accessing the correct feedback will require logic to inspect the Workshop type and then choose, for instance, @workshop.feedback.feedback_one. Is there a better way to handle this situation? Would it be better to use a polymorphic association for feedback? Or maybe a Module or Mixin for the shared Feedback interface?

     Note: I am avoiding Single Table Inheritance here because the FeedbackOne, FeedbackTwo, FeedbackThree models do not share much common data, so with STI I would end up with a large, sparsely populated table.

    Read the article

  • How to securely connect to multiple different LDAPS servers (Debian)

    - by Pickle
     I'm trying to connect to multiple different LDAPS servers. A lot of the documentation I've seen recommends setting TLS_REQCERT never, but not verifying the certificate strikes me as horribly insecure, so I've set it to demand. All the documentation I've seen says I need to update ldap.conf with a TLS_CACERT directive pointing to a .pem file. I've got that .pem file set up with the certificate from LDAP Server #1, and ldaps connections are working fine. I've now got to communicate securely with another LDAP server in another branch of my organization, which uses a different certificate. I've seen no documentation on how to do this, except one page that says I can simply put multiple (not chained) certificates in the same .pem file. I've done this and everything is working hunky-dory. However, when I told a colleague what I did, he sounded like the sky was falling - putting 2 non-chained certificates into one .pem file is apparently the worst thing since ... ever. Is there a more acceptable way to do this? Or is this the only accepted way?

    Read the article

  • What techniques can I employ to create a series of UI Elements from a collection of objects using WP

    - by elggarc
     I'm new to WPF, and before I dive in and solve a problem in completely the wrong way, I was wondering if WPF is clever enough to handle something for me. Imagine I have a collection of objects, each of the same known type with two properties: Name (a string) and Picked (a boolean). The collection will be populated at run time. I would like to build a UI element at run time that represents this collection as a series of checkboxes, and I want the Picked property of any given object in the collection updated when the user changes the checked state of its checkbox.

     To me, the answer is simple: I iterate across the collection and create a new checkbox for each object, dynamically wiring up a ValueChanged event to capture when Picked should be changed. It has occurred to me, however, that I may be able to harness some feature of WPF I don't know about to do this better (or "properly"). For example, could data binding be employed here? I would be very interested in anyone's thoughts. Thanks, E

     Footnote: the structure of the collection can be changed completely to better fit any chosen solution, but ultimately I will always start from, and end with, some list of string and boolean pairs.

    Read the article

  • How can I design a DB where the user can define the fields and types of a detail table in a M-D rela

    - by Simon
     My application has one table called 'events', and each event has approx. 30 standard fields but also user-defined fields that could have any name or type, stored in an 'eventdata' table. Users can define these event data tables by specifying some number of fields (either text/double/datetime/boolean) and the names of those fields. This 'eventdata' table can be different for each 'event'. My current approach is to create a lookup table for the definitions. So if I need to query all 'event' and 'eventdata' per record, I do so in an M-D relationship using two queries (i.e. select * from events, then for each record in 'events', select * from 'some table').

     Is there a better approach to doing this? I have implemented this so far, but most of my queries require two distinct calls to the DB - I cannot simply join my master 'events' table with a different 'eventdata' table for each record in 'events'. I guess my main question is: can I join my master table with different detail tables for each record? E.g.

         SELECT E.*, E.Tablename
         FROM events E
         LEFT JOIN 'E.tablename' T ON E._ID = T.ID

     If not, is there a better way to design my database, considering I have no idea how many user-defined fields there may be or what type they will be?

    Read the article

  • python interactive mode module import issue

    - by Jeff
     I believe I have what would be called a scope issue, or perhaps a namespace one - I'm not too sure, I'm new to Python. I'm trying to make a module that will search through a list using regular expressions. I'm sure there is a better way of doing it, but this error I'm getting is bugging me and I want to understand why. Here's my code:

         class relist(list):
             def __init__(self, l):
                 list.__init__(self, l)

             def __getitem__(self, rexp):
                 r = re.compile(rexp)
                 res = filter(r.match, self)
                 return res

         if __name__ == '__main__':
             import re
             listl = [x+y for x in 'test string' for y in 'another string for testing']
             print(listl)
             test = relist(listl)
             print('----------------------------------')
             print(test['[s.]'])

     When I run this code from the command line it works the way I expect it to; however, when I run it through Python interactive mode I get the error:

         >>> test['[s.]']
         Traceback (most recent call last):
           File "<stdin>", line 1, in <module>
           File "relist.py", line 8, in __getitem__
             r = re.compile(rexp)
         NameError: global name 're' is not defined

     While in interactive mode I do import re and I am able to use the re functions, but for some reason when I'm trying to execute the module it doesn't work. Do I need to import re into the scope of the class? I wouldn't think so, because doesn't Python search through other scopes if a name is not found in the current one? I appreciate your help, and if there is a better way of doing this search I would be interested in knowing it. Thanks

    Read the article

  • Building a structure/object in a place other than the constructor

    - by Vishal Naidu
     I have different types of objects representing the same business entity: UIObject, PowershellObject, DevCodeModelObject and WMIObject are all different representations of the same entity. So if the entity is Animal, I have AnimalUIObject, AnimalPSObject, AnimalModelObject, AnimalWMIObject, etc., and the implementations of AnimalUIObject, AnimalPSObject and AnimalModelObject are all in separate assemblies. My scenario is that I want to verify the contents of the business entity Animal irrespective of the assembly it came from, so I created a GenericAnimal class to represent the Animal entity, and in GenericAnimal I added the following constructors:

         GenericAnimal(AnimalUIObject)
         GenericAnimal(AnimalPSObject)
         GenericAnimal(AnimalModelObject)

     Basically I made GenericAnimal depend on all the underlying assemblies, so that while verifying I deal with this abstraction. The other approach is to give GenericAnimal an empty constructor and allow the underlying assemblies to provide a Transform() method which builds the GenericAnimal. Both approaches have pros and cons:

     - 1st approach. Pros: all construction logic is in one place, the GenericAnimal class. Cons: GenericAnimal must be touched every time there is a new representation form.
     - 2nd approach. Pros: construction responsibility is delegated to the underlying assembly. Cons: as construction logic is spread across assemblies, if tomorrow I need to add a property X to GenericAnimal then I have to touch all the assemblies to change their Transform methods.

     Which approach looks better, or which would you consider the lesser evil? Is there any alternative better than these two?

    Read the article

  • What stackoverflow questions bring out the best in you? [closed]

    - by Mark Robinson
     This question is to help me (us) ask better questions of this community, to encourage working together better. What questions here bring out the best in you? I don't mean which are the best or most appropriate questions, but which bring the best out of you - that is, which bring out your best thinking and most constructive behaviour. Consider, for example, questions that make you think or that are asked in a certain way. Or consider how specific a question should be. Or, more abstractly, questions asked on a certain day or at a certain time... So I'm not necessarily asking about specific subjects, but rather about how a question is asked. What inspires you to drop everything and start answering? This comes from my observing the questions that get the highest points - "I’m graduating with a Computer Science degree but I don’t feel like I know how to program" is very positive, but my first ever question caused a minor storm. Ironically, I'm nervous posting this question, so please don't attack me if this is inappropriate.

    Read the article
