Search Results

Search found 9114 results on 365 pages for 'reversible functions'.

Page 71/365

  • When does the "Do One Thing" paradigm become harmful?

    - by Petr
    For the sake of argument, here's a sample function that prints the contents of a given file line by line. Version 1:

        void printFile(const string & filePath) {
            fstream file(filePath, ios::in);
            string line;
            while (file.good()) {
                getline(file, line);
                cout << line << endl;
            }
        }

    I know it is recommended that functions do one thing at one level of abstraction. To me, though, the code above does pretty much one thing and is fairly atomic. Some books (such as Robert C. Martin's Clean Code) seem to suggest breaking the above code into separate functions. Version 2:

        void printLine(const string & line) {
            cout << line << endl;
        }

        void printLines(fstream & file) {
            string line;
            while (file.good()) {
                getline(file, line);
                printLine(line);
            }
        }

        void printFile(const string & filePath) {
            fstream file(filePath, ios::in);
            printLines(file);
        }

    I understand what they want to achieve (open file / read lines / print line), but isn't it a bit of overkill? The original version is simple and in some sense already does one thing - prints a file. The second version will lead to a large number of really small functions, which may be far less legible than the first version. Wouldn't it be, in this case, better to have the code in one place? At which point does the "Do One Thing" paradigm become harmful?

    Read the article

  • Old programmer disappeared. About to hire another programmer. How do I approach this?

    - by pocto
    After spending over one year working on a social network project for me using WordPress and BuddyPress, my programmer has disappeared, even though he got paid every single week for the whole period. He's not dead - I used an email tracker and can see that he opens my emails - but he doesn't respond. It seems he got another job; I wonder why he just couldn't say so. I even paid him an advance on salary for work he hasn't done. The problem is that I never asked for full documentation for most of the functions he coded, and there were MANY functions over this 1+ year period, some of which have bugs that he still hasn't fixed. Now it all seems confusing. What's the first thing I should do now? How do I proceed? I guess the first thing will be to get another programmer, but I want to start on the right foot by having all the current code documented so that any programmer can work on all the functions without issues. Is that the first thing I should do? If yes, how do I go about it? What's the standard type of documentation required for something like this? Can I get a programmer who will just document all the code and fix the bugs, or is documentation not really important? Also, do you think getting another "individual" programmer is better, or should I get a company that has programmers working for it, so that if the programmer assigned to my project disappears, another can replace him without my involvement? I feel this is the approach I should have taken in the beginning.

    Read the article

  • Benchmarking ORM associations

    - by barerd
    I am trying to benchmark two cases of a self-referential many-to-many association, as described in the DataMapper associations documentation. Both cases consist of an Item class, where an item may require many other items. In both cases, I required the Ruby benchmark library and the source file, created two items, and benchmarked the require/unrequire functions as below:

        Benchmark.bmbm do |x|
          x.report("require:")   { item_1.require_item item_2, 10 }
          x.report("unrequire:") { item_1.unrequire_item item_2 }
        end

    To be clear, both functions are DataMapper add/modify calls like:

        componentMaps.create :component_id => item.id, :quantity => quantity
        componentMaps.all(:component_id => item.id).destroy!

    and

        links_to_components.create :component_id => item.id, :quantity => quantity
        links_to_components.all(:component_id => item.id).destroy!

    The results are variable, in the range of 0.018001 to 0.022001 for the require function in both cases, and 0.006 to 0.01 for the unrequire function in both cases. This made me suspicious about the correctness of my test method. Edit: I went ahead and compared a "get by primary key" case to a "find first matching record" case by:

        (1..10000).each do |i|
          Item.create :name => "item_#{i}"
        end

        Benchmark.bmbm do |x|
          x.report("Get")   { item = Item.get 9712 }
          x.report("First") { item = Item.first :name => "item_9712" }
        end

    where the results were very different - roughly 0 sec compared to 0.0312, as expected. This suggests that the benchmarking works. I wonder whether I benchmarked the two types of associations correctly, and whether a difference between 0.018 and 0.022 sec is significant.
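
    One way to judge whether 0.018 vs 0.022 means anything is to repeat each operation many times and compare the spread of the samples rather than single runs. A minimal sketch of that idea in Python (used purely to illustrate the method; the lambdas are hypothetical stand-ins for require_item/unrequire_item):

        import statistics
        import time

        def bench(fn, repeats=200):
            # Time fn() many times so the two operations can be compared
            # by mean and spread rather than by a single noisy run.
            samples = []
            for _ in range(repeats):
                start = time.perf_counter()
                fn()
                samples.append(time.perf_counter() - start)
            return statistics.mean(samples), statistics.stdev(samples)

        mean_a, sd_a = bench(lambda: sum(range(10_000)))   # stand-in workload A
        mean_b, sd_b = bench(lambda: sum(range(12_000)))   # stand-in workload B
        # If the intervals mean +/- 2*sd overlap heavily, the difference is noise.
        print(f"A: {mean_a:.6f}s +/- {sd_a:.6f}   B: {mean_b:.6f}s +/- {sd_b:.6f}")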

    Read the article

  • If a library doesn't provide all my needs, how should I proceed?

    - by 9a3eedi
    I'm developing an application involving math and physics models, and I'd like to use a math library for things like matrices. I'm using C#, so I was looking for some libraries and found Math.NET. I'm under the impression, from past experience, that for math, using a robust and industry-approved third-party library is much better than writing your own code. It seems good for many purposes, but it does not provide support for quaternions, which I need to use as a type. I also need some functions on Vector and Matrix that aren't provided, such as rotation matrices, vector rotation functions, and calculating cross products. At the same time, it provides a lot of functions/classes that I simply do not need, which might mean a lot of unnecessary bloat and complexity. At this rate, should I even bother using the library? Should I write my own math library? Or is it a better idea to stick with the third-party library and somehow wrap around it? Perhaps I should make a subclass of the Matrix and Vector types of the library? But isn't that considered bad style? I've also tried looking for other libraries, but unfortunately I couldn't find anything suitable.
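
    For the "wrap around it" option, the usual shape is composition: keep the library's types for what they do well and add the missing operations as small free-standing types or helper functions rather than subclassing. A rough sketch of that pattern (in Python for brevity; the C# analogue would be extension methods or a thin wrapper struct):

        class Vec3:
            # Minimal value type supplying an operation the library lacks.
            def __init__(self, x, y, z):
                self.x, self.y, self.z = x, y, z

            def cross(self, o):
                # Cross product, one of the missing functions mentioned above.
                return Vec3(self.y * o.z - self.z * o.y,
                            self.z * o.x - self.x * o.z,
                            self.x * o.y - self.y * o.x)

        v = Vec3(1, 0, 0).cross(Vec3(0, 1, 0))
        print(v.x, v.y, v.z)   # 0 0 1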

    Read the article

  • Information about how much time is spent in a function, based on the input of this function

    - by olchauvin
    Is there a (quantitative) tool to measure the performance of functions based on their input? So far, the tools I have used to measure the performance of my code (like JetBrains dotTrace for .NET) tell me how much time I spent in each function, but I'd like more information about the parameters passed to the function, in order to know which parameters impact performance the most. Let's say that I have a function like this:

        int myFunction(int myParam1, int myParam2)
        {
            // Do and return something based on the value of myParam1 and myParam2.
            // The code is likely to use if, for, while, switch, etc....
        }

    I would like a tool that could tell me how much time is spent in myFunction based on the values of myParam1 and myParam2. For example, the tool would give me a result looking like this:

        For "myFunction":

        value    | value    | Number of | Average
        myParam1 | myParam2 | calls     | time
        ---------|----------|-----------|--------
        1        | 5        | 500       | 301 ms
        2        | 5        | 250       | 1253 ms
        3        | 7        | 1268      | 538 ms
        ...

    That would mean that myFunction was called 500 times with myParam1=1 and myParam2=5, and that with those parameters it took 301 ms on average to return a value. The idea behind this is to do some statistical optimization by organizing my code such that the blocks of code that are most likely to be executed are tested before the ones that are less likely to be executed. To put it bluntly, if I know which values are used the most, I can reorganize the if/while/for structure of the function (and the whole program) to optimize it. I'd like to find such tools for C++, Java or .NET. Note: I am not looking for technical tips to optimize the code (like passing parameters as const, inlining functions, initializing the capacity of vectors, and the like).
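
    Absent an off-the-shelf tool, the same table can be collected by hand with a wrapper that buckets timings by argument values. A small sketch of the idea in Python (illustrative only - in C++/Java/.NET the equivalent would be a timing decorator, aspect, or instrumented entry point):

        import time
        from collections import defaultdict

        def profile_by_args(fn):
            # Record call count and accumulated time, keyed by the argument tuple.
            stats = defaultdict(lambda: [0, 0.0])

            def wrapper(*args):
                start = time.perf_counter()
                result = fn(*args)
                entry = stats[args]
                entry[0] += 1
                entry[1] += time.perf_counter() - start
                return result

            wrapper.stats = stats
            return wrapper

        @profile_by_args
        def my_function(p1, p2):
            time.sleep(0.001 * p1 * p2)   # hypothetical workload
            return p1 + p2

        for _ in range(5):
            my_function(1, 5)
        my_function(2, 5)
        for args, (calls, total) in my_function.stats.items():
            print(args, "->", calls, "calls,", f"{1000 * total / calls:.1f} ms avg")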

    Read the article

  • What is the best way to handle dynamic content?

    - by user1561753
    So we run a site where there are elements of the interface that could potentially be changed at any moment in the backend. Specifically, we run a web service where certain functions are loaded dynamically. However, there are times when we remove certain functions, and we want the experience to be as seamless as possible for the user. We've considered a few methods of solving this:

    1. Ping the server every few seconds, and refresh the user's page if the functions are outdated or no longer available. While this would work the best, I feel like that much IO can't be good.
    2. When the user clicks a function that is outdated or no longer available, alert them in the response and refresh the page. This would also work fairly well.

    I guess I'm more wondering how web apps like Google Docs work, where you have content that has to be synced up across multiple users and is never more than a few seconds out of date. Sorry if this isn't the best place to ask this. I figured this was more of a site architecture question and that this might be the place to ask it over SO.
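
    A cheap middle ground between the two options is to version the set of available functions and piggyback that version on traffic the client already sends. A sketch of the server-side bookkeeping (Python, illustrative only - names like manifest_hash are invented for the example):

        import hashlib
        import json

        def manifest_hash(function_names):
            # Stable fingerprint of the currently available function set.
            blob = json.dumps(sorted(function_names)).encode()
            return hashlib.sha1(blob).hexdigest()

        server_version = manifest_hash(["export_pdf", "rename", "share"])
        client_version = manifest_hash(["export_pdf", "share"])   # stale client

        # Each normal response (or an occasional lightweight poll) carries
        # server_version; the client refreshes its UI only on a mismatch.
        if client_version != server_version:
            print("function set changed -> tell the client to refresh")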

    Read the article

  • Handling Types for Real and Complex Matrices in a BLAS Wrapper

    - by mga
    I come from a C background and I'm now learning OOP with C++. As an exercise (so please don't just say "this already exists"), I want to implement a wrapper for BLAS that will let the user write matrix algebra in an intuitive way (e.g. similar to MATLAB):

        A = B*C*D.Inverse() + E.Transpose();

    My problem is how to go about dealing with real (R) and complex (C) matrices, because of C++'s "curse" of letting you do the same thing in N different ways. I do have a clear idea of what it should look like to the user: s/he should be able to define the two separately, but operations would return a type depending on the types of the operands (R*R = R, C*C = C, R*C = C*R = C). Additionally, R can be cast into C and vice versa (just by setting the imaginary parts to 0). I have considered the following options:

    1. As a real number is a special case of a complex number, inherit CMatrix from RMatrix. I quickly dismissed this, as the two would have to return different types for the same getter function.
    2. Inherit RMatrix and CMatrix from Matrix. However, I can't really think of any common code that would go into Matrix (because of the different return types).
    3. Templates. Declare Matrix<T>, declare the getter function as T Get(int i, int j) and the operator functions as Matrix operator*(Matrix RHS), then specialize Matrix<double> and Matrix<complex> and overload the functions. But then I couldn't really see what I would gain with templates, so why not just define RMatrix and CMatrix separately from each other, and then overload functions as necessary?

    Although this last option makes sense to me, there's an annoying voice inside my head saying this is not elegant, because the two are clearly related. Perhaps I'm missing an appropriate design pattern? So I guess what I'm looking for is either absolution for doing this, or advice on how to do better.
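
    For the templated option, the promotion rule R*C = C can be expressed once rather than per overload. Here is the rule sketched in Python, whose numeric tower promotes float*complex to complex automatically (a hint of what a common_type-style trait could do for the element type in the C++ version; the class below is a toy for illustration, not a proposed design):

        class Matrix:
            # Toy dense matrix; element types follow Python's numeric promotion.
            def __init__(self, rows):
                self.rows = [list(r) for r in rows]

            def __mul__(self, other):
                n = len(self.rows)
                k = len(other.rows)
                m = len(other.rows[0])
                data = [[sum(self.rows[i][t] * other.rows[t][j] for t in range(k))
                         for j in range(m)] for i in range(n)]
                return Matrix(data)   # complex entries iff either operand had any

        R = Matrix([[1.0, 2.0], [3.0, 4.0]])
        C = Matrix([[1 + 2j, 0], [0, 1 - 2j]])
        print((R * C).rows)   # entries are complex: the product was "promoted"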

    Read the article

  • Can't find new.h - getting gcc-4.2 on Quantal?

    - by Suyo
    I've been trying to compile the Valve Source SDK (2007) on my machine, but I keep running into the same error:

        In file included from ../public/tier1/interface.h:50:0,
                         from ../utils/serverplugin_sample/serverplugin_empty.cpp:13:
        ../public/tier0/platform.h:46:17: new.h: No such file or directory

    I'm pretty new to C++ coding and compiling, but using apt-file search I tried every single suggestion for the required files in the Makefile (libstdc++.a and libgcc_eh.a), and none worked. I then found a note in the Makefile saying gcc 4.2.2 is recommended - I assume the older code won't work with the newer version, but gcc-4.2 is unavailable in 12.10. So my questions are: If my assumption is right - how do I get gcc 4.2.2 on Quantal? If my assumption is wrong - what else could be the problem here? Relevant portion of the Makefile:

        # compiler options (gcc 3.4.1 will work - 4.2.2 recommended)
        CC=/usr/bin/gcc
        CPLUS=/usr/bin/g++
        CLINK=/usr/bin/gcc
        CPP_LIB="/usr/lib/gcc/x86_64-w64-mingw32/4.6/libstdc++.a /usr/lib/gcc/x86_64-w64-mingw32/4.6/libgcc_eh.a"

        # GCC 4.2.2 optimization flags, if you're using anything below, don't use these!
        OPTFLAGS=-O1 -fomit-frame-pointer -ffast-math -fforce-addr -funroll-loops -fthread-jumps -fcrossjumping -foptimize-sibling-calls -fcse-follow-jumps -fcse-skip-blocks -fgcse -fgcse-lm -fexpensive-optimizations -frerun-cse-after-loop -fcaller-saves -fpeephole2 -fschedule-insns2 -fsched-interblock -fsched-spec -fregmove -fstrict-overflow -fdelete-null-pointer-checks -freorder-blocks -freorder-functions -falign-functions -falign-jumps -falign-loops -falign-labels -ftree-vrp -ftree-pre -finline-functions -funswitch-loops -fgcse-after-reload
        #OPTFLAGS=

        # put any compiler flags you want passed here
        USER_CFLAGS=-m32

    Read the article

  • Motivation for service layer (instead of just copying dlls)?

    - by BornToCode
    I'm creating an application which has 2 different UIs, so I'm making it with a service layer, which I understood is appropriate for such a case. However, I found myself just creating web methods for every single method I have in the BL layer, so the services are basically built from methods that look like this:

        return customers_bl.Get_Customer_Prices(customer_id);

    I understood that a main point of the service layer is to prevent duplication of code, so I asked myself - well, why not just import the BL.dll (and the DAL.dll) into the other UI, and whenever I make a change, re-copy the dll files? It might not be so 'neat', but is the whole purpose of the service layer to prevent this? {I know something is wrong in my approach; I'm probably missing the importance of the service layer. I'd like more motivation to create another layer, especially because, as it is, I found that many of my BL functions ALREADY look like:

        return customers_dal.Get_Customer_Prices(cust_id)

    which led me to ask: was it really necessary to create the BL when only several functions actually have LOGIC inside the BL?} So I'm looking for more motivation to create ONE MORE layer. I'm sure it's not just to make it more convenient so that I won't have to re-copy the dlls on changes? Am I grasping it wrong? Any simple guidelines on how to design a service layer (corresponding to all the BL layer functions or not? any simple example?) Any enlightenment on the subject?

    Read the article

  • How to match responses from a server with their corresponding requests? [closed]

    - by Deele
    There is a server that responds to requests on a socket. The client has functions to emit requests and functions to handle responses from the server. The problem is that the request-sending function and the response-handling function are two unrelated functions. Given a server response X, how can I know whether it's a response to request X or some other request Y? I would like to make a construct that would ensure that response X is definitely the answer to request X, and also to make a function requestX() that returns response X and not some other response Y. This question is mostly about the general programming approach and not about any specific language construct. Preferably, though, the answer would involve Ruby, TCP sockets, and PHP. My code so far:

        require 'socket'

        class TheConnection
          def initialize(config)
            @config = config
          end

          def send(s)
            toConsole("--> #{s}")
            @conn.send "#{s}\n", 0
          end

          def connect()
            # Connect to the server
            begin
              @conn = TCPSocket.open(@config['server'], @config['port'])
            rescue Interrupt
            rescue Exception => detail
              toConsole('Exception: ' + detail.message())
              print detail.backtrace.join('\n')
              retry
            end
          end

          def getSpecificAnswer(input)
            send "GET #{input}"
          end

          def handle_server_input(s)
            case s.strip
            when /^Hello. (.*)$/i
              toConsole "[ Server says hello ]"
              send "Hello to you too! #{$1}"
            else
              toConsole(s)
            end
          end

          def main_loop()
            while true
              ready = select([@conn, $stdin], nil, nil, nil)
              next if !ready
              for s in ready[0]
                if s == $stdin then
                  return if $stdin.eof
                  s = $stdin.gets
                  send s
                elsif s == @conn then
                  return if @conn.eof
                  s = @conn.gets
                  handle_server_input(s)
                end
              end
            end
          end

          def toConsole(msg)
            t = Time.new
            puts t.strftime("[%H:%M:%S]") + ' ' + msg
          end
        end

        @config = Hash[ 'server'=>'test.server.com', 'port'=>'2020' ]
        $conn = TheConnection.new(@config)
        $conn.connect()
        $conn.getSpecificAnswer('itemsX')
        begin
          $conn.main_loop()
        rescue Interrupt
        rescue Exception => detail
          $conn.toConsole('Exception: ' + detail.message())
          print detail.backtrace.join('\n')
          retry
        end
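
    The standard approach here is a correlation id: the client tags every request with a fresh id, the server echoes it back in its response, and the client keeps a table of pending ids. A language-neutral sketch of the bookkeeping (written in Python for illustration; the Correlator class and the one-line wire format are invented for the example, and it assumes the protocol can be changed so the server echoes the id):

        import itertools

        class Correlator:
            def __init__(self):
                self._ids = itertools.count(1)
                self._pending = {}            # request id -> callback

            def send(self, payload, on_reply, transport):
                rid = next(self._ids)
                self._pending[rid] = on_reply
                transport(f"{rid} {payload}")   # server must echo the id back

            def on_line(self, line):
                rid, _, body = line.partition(" ")
                callback = self._pending.pop(int(rid), None)
                if callback:
                    callback(body)

        replies = []
        c = Correlator()
        c.send("GET itemsX", replies.append, transport=lambda wire: None)
        c.on_line("1 here-are-itemsX")    # simulated server response
        print(replies)                     # ['here-are-itemsX']

    If the protocol cannot carry an id at all, the fallback is to enforce strict request/response ordering and match replies to requests FIFO.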

    Read the article

  • Is there a better way to organize my module tests that avoids an explosion of new source files?

    - by luser droog
    I've got a neat (so I thought) way of having each of my modules produce a unit-test executable if compiled with the -DTESTMODULE flag. This flag guards a main() function that can access all static data and functions in the module, without #including a C file. From the README:

        -- Modules --
        The various modules were written and tested separately before being
        coupled together to achieve the necessary basic functionality. Each
        module retains its unit-test, its main() function, guarded by
        #ifdef TESTMODULE. `make test` will compile and execute all the unit
        tests, producing copious output, but importantly exiting with an
        appropriate success or failure code, so the `make test` command will
        fail if any of the tests fail.

        Module TOC
        __________
        test  obj       src       header    structures            CONSTANTS
        ----  ---       ---       ------    ----------            ---------
        m     m.o       m.c       m.h       mfile mtab            TABSZ
        s     s.o       s.c       s.h       stack                 STACKSEGSZ
        v     v.o       v.c       v.h       saverec_
              f.o       f.c       f.h       file
        ob    ob.o      ob.c      ob.h      object
        ar    ar.o      ar.c      ar.h      array
        st    st.o      st.c      st.h      string
        di    di.o      di.c      di.h      dichead dictionary
        nm    nm.o      nm.c      nm.h      name
        gc    gc.o      gc.c      gc.h      garbage collector
        itp             itp.c     itp.h     context
              osunix.o  osunix.c  osunix.h  unix-dependent functions

    It's compiled by a tricky bit of makefile:

        m: m.c ob.h ob.o err.o $(CORE) itp.o $(OP)
            cc $(CFLAGS) -DTESTMODULE $(LDLIBS) -o $@ $< err.o ob.o s.o ar.o st.o v.o di.o gc.o nm.o itp.o $(OP) f.o

    where the module is compiled with its own C file plus every other object file except itself. But it's creating difficulties for the kindly programmer who offered to write the Autotools files for me. So the obvious way to make it "less weird" would be to bust out all the main functions into separate source files. But, but ... Do I gotta?
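
    For comparison, the same embed-the-tests-in-the-module idea exists in languages with a built-in guard instead of a compile flag. In Python, for instance, a module's self-test main runs only when the file is executed directly, which is roughly what -DTESTMODULE simulates in C (a sketch; the helper is made up):

        def _double(x):
            # Module-private helper, the moral equivalent of a static function.
            return x * 2

        if __name__ == "__main__":   # Python's analogue of #ifdef TESTMODULE
            assert _double(3) == 6
            print("self-test passed")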

    Read the article

  • Clear Complete Instructions to Dual-Boot 12.04 on OSX Lion

    - by BCZ
    Honestly, google-surfing this question leads to so many half-answers and multi-part communications that it is both scary and frustrating to try to navigate them. The question here is simple: what are the clear and complete step-by-step instructions that you used to dual-boot 12.04 on your OSX Lion (entrapped) Apple computer? Did you use rEFIt, rEFInd, a special .iso of 12.04? What? Obviously, there is a preference for safer, easier, and more reversible methods. I can probably assure you that the best answer will get plenty of views.

    Read the article

  • Non-linear regression models in PostgreSQL using R

    - by Dave Jarvis
    Background: I have climate data (temperature, precipitation, snow depth) for all of Canada between 1900 and 2009. I have written a basic website, and the simplest page allows users to choose a category and a city. They then get back a very simple report (without the parameters and calculations section). The primary purpose of the web application is to provide a simple user interface so that the general public can explore the data in meaningful ways. (A list of numbers is not meaningful to the general public, nor is a website that provides too many inputs.) The secondary purpose of the application is to provide climatologists and other scientists with deeper ways to view the data. (Using too many inputs, of course.)

    Tool set: The database is PostgreSQL with R (mostly) installed. The reports are written using iReport and generated using JasperReports.

    Poor model choice: Currently, a linear regression model is applied against annual averages of daily data. The linear regression model is calculated within a PostgreSQL function as follows:

        SELECT
          regr_slope( amount, year_taken ),
          regr_intercept( amount, year_taken ),
          corr( amount, year_taken )
        FROM temp_regression
        INTO STRICT slope, intercept, correlation;

    The results are returned to JasperReports using:

        SELECT
          year_taken,
          amount,
          year_taken * slope + intercept,
          slope,
          intercept,
          correlation,
          total_measurements
        INTO result;

    JasperReports calls into PostgreSQL using the following parameterized analysis function:

        SELECT
          year_taken, amount, measurements, regression_line, slope,
          intercept, correlation, total_measurements, execute_time
        FROM climate.analysis(
          $P{CityId}, $P{Elevation1}, $P{Elevation2}, $P{Radius},
          $P{CategoryId}, $P{Year1}, $P{Year2} )
        ORDER BY year_taken

    This is not an optimal solution, because it gives the false impression that the climate is changing at a slow but steady rate.

    Questions: Using functions that take two parameters (e.g., year [X] and amount [Y]), such as PostgreSQL's regr_slope:

    1. What is a better regression model to apply?
    2. What CRAN R packages provide such models? (Installable, ideally, using apt-get.)
    3. How can the R functions be called within a PostgreSQL function?

    If no such functions exist:

    1. What parameters should I try to obtain for functions that will produce the desired fit?
    2. How would you recommend showing the best-fit curve?

    Keep in mind that this is a web app for use by the general public. If the only way to analyse the data is from an R shell, then the purpose has been defeated. (I know this is not the case for most R functions I have looked at so far.) Thank you!
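
    As a concrete illustration of what a non-linear fit changes, here is a quick sketch using NumPy on fabricated data (Python chosen for brevity; in R the analogous calls would be loess() or lm() with polynomial terms). The data series and the polynomial degree are made up for the example:

        import numpy as np

        rng = np.random.default_rng(0)
        years = np.arange(1900, 2010)
        # Fabricated "annual average" series: slow trend + oscillation + noise.
        temps = 0.005 * (years - 1900) + np.sin((years - 1900) / 8.0) \
                + rng.normal(0.0, 0.3, years.size)

        linear = np.polyval(np.polyfit(years, temps, deg=1), years)  # current model
        cubic  = np.polyval(np.polyfit(years, temps, deg=3), years)  # richer fit

        # Compare residual spread: the cubic should track the oscillation better.
        print("linear residual sd:", (temps - linear).std())
        print("cubic  residual sd:", (temps - cubic).std())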

    Read the article

  • Is Java's easy decompilation a factor worth considering?

    - by Sandra G
    We are considering the programming language for a desktop application with extensive GUI use (tables, windows) and heavy database use. We considered Java; however, the fact that it can be decompiled back into source code very easily is holding us back. There are of course many obfuscators available, but they are just that: obfuscators. The only obfuscation worth doing that we got was stripping function and variable names into meaningless letters and numbers, so that at least stealing code and renaming it back into something meaningful is too much work, and we are 100% sure it is not reversible in any automated way. However, when it comes to protecting internals (like password hashes or the contents of sensitive variables), we found obfuscators really lacking. Is there any way to make Java applications as hard to decode as their .exe counterparts? And is this a factor to consider when deciding whether to develop a desktop application in Java?

    Read the article

  • How can you make an emacs macro wait for cscope query results?

    - by Sudhanshu
    I am trying to write a macro which calls cscope-find-functions-calling-this-function on each and every tag in a file displayed in the *Tags List* buffer (created by the list-tags command). This should create a buffer which contains a list of all functions calling a set of functions defined in a certain file. This is the sequence of keystrokes:

        1.  <f11>      ;; cscope-find-functions-calling-this-function
        2.  RET        ;; newline [shows results of cscope in a split window]
        3.  C-x C-p    ;; mark-page
        4.  C-x C-x    ;; icicle-exchange-point-and-mark
        5.  <up>       ;; previous-line
        6.  <end>      ;; end-of-line [region to copy has been marked]
        7.  <f7>       ;; append-results-to-buffer
        8.  C-x ESC O  ;; [move back to split window on the right]
        9.  C-x b      ;; icicle-buffer [Switch back to *Tags List* buffer]
        10. *Tags      ;; self-insert-command * 5
        11. SPC        ;; self-insert-command
        12. List*      ;; self-insert-command * 5
        13. RET        ;; newline
        14. <down>     ;; next-line [Position point on next tag in the list]

    Problem: I get no results in the buffer, and I found out that's because steps 3-7 execute even before cscope prints the results of the query made in steps 1-2. I can insert a pause in the macro by using C-x q, but I'd rather have the macro wait after step 2 until cscope has returned with the results, and only then continue. I suspect this is not possible through a macro, and may need a Lisp function... I'm not a Lisp expert myself. Can someone please help? Thanks!

    Details: I have Icicles installed, so by default I get the word at point in the current buffer as input in the minibuffer. F11 is bound to cscope-find-functions-calling-this-function. windmove is installed, and C-x ESC o (as shown above) takes you to the right window. F7 is bound to append-results-to-buffer, which is defined as:

        (defun append-results-to-buffer ()
          (interactive)
          (append-to-buffer (get-buffer-create "c1") (point) (mark)))

    This function just appends the currently marked region to a buffer named "c1".

    Read the article

  • Controlling the USB from Windows

    - by b-gen-jack-o-neill
    Hi, I know this is probably not the easiest thing to do, but I am trying to connect a microcontroller and a PC using USB. I don't want to use the internal USART of the microcontroller or a USB-to-RS232 converter; it's a project intended to help me understand various principles. So, getting the communication done from the microcontroller side is a piece of cake - I mean, once I know the protocol, it's relatively easy to implement it on the micro, because I am in direct control of everything, even precise timing. But this is not the case on the PC. I am not very familiar with the concept of Windows handling the connected devices. In one of my previous questions I asked how Windows works with devices through drivers. I understood that, for internal use by Windows, drivers must expose some default set of functions to the OS. I mean, when the OS wants to access the HDD, it calls the HDD driver (which is probably internal to the OS) with specific "questions", which means the HDD driver has to be written to cooperate with Windows, to have its write function in the proper place to be called by the OS. Something similar goes for the GPU, even DirectX; I mean, DirectX must call specific functions from drivers, so drivers must be written to work with DX. I know many functions from WinAPI work on their own, but even a "simple" window must in the end be written into the framebuffer, using MMIO to an address specified by drivers. Am I right? So, I expected that Windows has internal functions, parts of WinAPI, designed to work with certain commonly used things, to call manufacturer-designed drivers. But this seems not to be entirely true, because Windows has no way to communicate through the parallel port. I mean, there is no function in the WinAPI to work with the serial port, but there are functions to work with HDD, GPU and so on. But now comes the part I am getting very lost at. So, I think Windows must have some built-in functions to communicate through USB, because, for example, it handles USB flash memory. So, is there any WinAPI function designed to let the user operate USB through that function, or when I want to use USB myself, do I have to call the desired USB-driver function myself? Because all you need to send to the USB controller is the device address and the information, right? I mean, I don't have to write any new drivers, am I right? Just call a WinAPI function if there is such a thing, or directly call the original USB driver. Does any of this make some sense?

    Read the article

  • Question on the implementation of my Entity System

    - by miguel.martin
    I am currently creating an entity system, in C++; it is almost completed (I have all the code there, I just have to add a few things and test it). The only thing is, I can't figure out how to implement some features. This entity system is based a bit on the Artemis framework; however, it is different. I'm not sure I'll be able to type this out the way my head processes it; I'm basically going to ask whether I should do something over something else. First, a little detail on my entity system itself. Here are the basic classes it uses to work:

    - Entity - an id (and some methods to add/remove/get/etc. components)
    - Component - an empty abstract class
    - ComponentManager - manages ALL components for ALL entities within a scene
    - EntitySystem - processes entities with specific components
    - Aspect - the class used to help determine which components an entity must contain so a specific EntitySystem can process it
    - EntitySystemManager - manages all EntitySystems within a scene
    - EntityManager - manages entities (i.e. holds all entities, is used to determine whether an entity has been changed, enables/disables them, etc.)
    - EntityFactory - creates (and destroys) entities and assigns an id to them
    - Scene - contains an EntityManager, EntityFactory, EntitySystemManager and ComponentManager; has functions to update and initialise the scene

    Now, in order for an EntitySystem to know efficiently when to check whether an entity is valid for processing (so I can add it to a specific EntitySystem), it must receive a message from the EntityManager (after a call of activate(Entity& e)). Similarly, the EntityManager must know when an entity is destroyed by the EntityFactory in the scene, and the ComponentManager must know when an entity is created AND destroyed. I do have a listener/observer pattern implemented at the moment, but with this pattern I may remove a listener (which in this case is dependent on the method being called). I mainly have this implemented for specific things related to a game, i.e. teams, tagging of entities, etc. So... I was thinking maybe I should call a private method (using friend classes) to send out when an entity has been activated, deleted, etc. E.g., taken from my EntityFactory:

        void EntityFactory::killEntity(Entity& e)
        {
            // if the entity doesn't exist in the entity manager within the scene
            if(!getScene()->getEntityManager().doesExsist(e))
            {
                return; // go back to the caller! (should throw an exception or something..)
            }

            // tell the ComponentManager and the EntityManager that we killed an Entity
            getScene()->getComponentManager().doOnEntityWillDie(e);
            getScene()->getEntityManager().doOnEntityWillDie(e);

            // notify the listeners
            for(Mouth::iterator i = getMouth().begin(); i != getMouth().end(); ++i)
            {
                (*i)->onEntityWillDie(*this, e);
            }

            _idPool.addId(e.getId()); // add the ID to the pool
            delete &e; // delete the entity
        }

    As you can see, on the lines where I am telling the ComponentManager and the EntityManager that an entity will die, I am calling a method to make sure they handle it appropriately. Now I realise I could do this without calling it explicitly, with the help of that for loop notifying all listener objects connected to the EntityFactory's Mouth (an object used to tell listeners that there's an event), but is this a good idea (good design, or what)? I've gone over the PROS and CONS, I just can't decide what I want to do.

    Calling explicitly:
    - PROS: faster? Since these functions are explicitly called, they can't be "removed".
    - CONS: not flexible; bad design? (friend functions)

    Calling through listener objects (i.e. ComponentManager/EntityManager inherits from an EntityFactoryListener):
    - PROS: more flexible? better design?
    - CONS: slower? (virtual functions); listeners can be removed, i.e. may be removed and not get called again during the program, which could cause a crash.

    P.S. If you wish to view my current source code, I am hosting it on BitBucket.

    Read the article

  • return the result of a query and the total number of rows in a single function

    - by csotelo
    This is a question about the best way of working - whether there are other alternatives, or whether this is the only way. Using CodeIgniter... I have the typical 2 functions: one that lists records and one that counts records (used for pagination). The problem is that they are rather large and nearly identical. Here are the 2 functions in my model.

    Count rows:

        function get_all_count()
        {
            $this->db->select('u.id_user');
            $this->db->from('user u');
            if($this->session->userdata('detail') != '1')
            {
                $this->db->join('management m', 'm.id_user = u.id_user', 'inner');
                $this->db->where('id_detail', $this->session->userdata('detail'));
                if($this->session->userdata('management') === '1')
                {
                    $this->db->or_where('detail', 1);
                }
                else
                {
                    $this->db->where("id_profile IN (
                        SELECT e2.id_profile
                        FROM profile e, profile e2, profile_path p, profile_path p2
                        WHERE e.id_profile = " . $this->session->userdata('profile') . "
                        AND p2.id_profile = e.id_profile
                        AND p.path LIKE(CONCAT(p2.path,'%'))
                        AND e2.id_profile = p.id_profile )", NULL, FALSE);
                    $this->db->where('MD5(u.id_user) <>', $this->session->userdata('id_user'));
                }
            }
            $this->db->where('u.id_user <>', 1);
            $this->db->where('flag <>', 3);
            $query = $this->db->get();
            return $query->num_rows();
        }

    Results per page:

        function get_all($limit, $offset, $sort = '')
        {
            $this->db->select('u.id_user, user, email, flag');
            $this->db->from('user u');
            if($this->session->userdata('detail') != '1')
            {
                $this->db->join('management m', 'm.id_user = u.id_user', 'inner');
                $this->db->where('id_detail', $this->session->userdata('detail'));
                if($this->session->userdata('management') === '1')
                {
                    $this->db->or_where('detail', 1);
                }
                else
                {
                    $this->db->where("id_profile IN (
                        SELECT e2.id_profile
                        FROM profile e, profile e2, profile_path p, profile_path p2
                        WHERE e.id_profile = " . $this->session->userdata('profile') . "
                        AND p2.id_profile = e.id_profile
                        AND p.path LIKE(CONCAT(p2.path,'%'))
                        AND e2.id_profile = p.id_profile )", NULL, FALSE);
                    $this->db->where('MD5(u.id_user) <>', $this->session->userdata('id_user'));
                }
            }
            $this->db->where('u.id_user <>', 1);
            $this->db->where('flag <>', 3);
            if($sort)
                $this->db->order_by($sort);
            $this->db->limit($limit, $offset);
            $query = $this->db->get();
            return $query->result();
        }

    As you can see, I repeat most of the code; the only differences are the selected fields, the sorting, and the paging. I wonder if there is any alternative that returns both the page of results and the total count from a single function. I have seen many tutorials, and all of them create 2 functions: one to count and another to return results... Is there a more optimal way?
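
    One way to collapse the two functions is to let the database return the total alongside each page, using a window count. A sketch of the idea (shown with Python's sqlite3 so the example is self-contained; the same COUNT(*) OVER () clause works in PostgreSQL and MySQL 8+, but not in older MySQL versions):

        import sqlite3

        db = sqlite3.connect(":memory:")   # needs SQLite >= 3.25 for window functions
        db.execute("CREATE TABLE user (id_user INTEGER PRIMARY KEY, email TEXT)")
        db.executemany("INSERT INTO user (email) VALUES (?)",
                       [(f"user{i}@example.com",) for i in range(1, 101)])

        rows = db.execute("""
            SELECT id_user, email, COUNT(*) OVER () AS total
            FROM user
            ORDER BY id_user
            LIMIT 10 OFFSET 20
        """).fetchall()
        print(len(rows), "rows on this page out of", rows[0][2], "total")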

    Read the article

  • Why to build own CMS?

    - by ile
    On my first job interview, I was asked why I had built my own CMS. Why not use one of the existing CMSs - WordPress, Joomla, Drupal...? At first, I was stunned. I couldn't immediately recall all of my reasons for building my own CMS, but this was definitely one of the main ones: it's my code, and if I want to change something in that CMS (which I often have to do, because each website I build needs a CMS with different functions), it's not a big problem. For some time I had been using WordPress, and one of the main things that put me off it was discovering bugs in code that wasn't written by me, and these bugs were frequent, especially if I made some changes to the CMS or added a plugin... Here I found these 8 reasons why NOT to build your own CMS:

    1. It won't meet users' needs
    2. It's too much work
    3. It won't be a standard solution
    4. It won't be extendible fast enough
    5. It won't be tested well enough
    6. It won't be easily changeable
    7. It won't add any value
    8. Create content, not functionality

    A quote from the same page: "So the main question to ask yourself is: 'Why am I really trying to re-solve a problem that has already been solved before?'" Well, I definitely agree that it's hard to invent a CMS that hasn't already been invented, but on the other hand, I think every CMS is (or should be) individual... it may not have a million functions; it may have 3 functions, but their usage will be clear (to a user) and do everything that one site needs. I also think it is not good to give a client a CMS with a lot of functions that are never used, and it probably looks more professional when the website and CMS together look like one product. I would also like to comment on some parts of the quote: "It's too much work" - I agree, but using an existing CMS and customizing it to a website's needs can sometimes be a very hard job or mission impossible. "It won't be easily changeable" - I disagree with this one. What is your opinion on this? Why did you, or didn't you, develop your own CMS? Ile

    Read the article

  • I assume Row_Number doesn’t act only on rows of the window frame

    - by AspOnMyNet
    a) This quote is taken from http://www.postgresql.org/docs/current/static/tutorial-window.html:

        For each row, there is a set of rows within its partition called its
        window frame. Many (but not all) window functions act only on the rows
        of the window frame, rather than of the whole partition. By default,
        if ORDER BY is supplied then the frame consists of all rows from the
        start of the partition up through the current row, plus any following
        rows that are equal to the current row according to the ORDER BY clause.

    I assume ROW_NUMBER doesn't act only on the rows of the window frame, but instead always acts on all rows of a partition?

    b) I assume the quoted default-frame behaviour applies only to those window functions that act on the rows of the window frame (and thus the quote above isn't true for the ROW_NUMBER() function)?

    c) The page above talks about PostgreSQL 8.4's window functions. Is everything in that article also true for SQL Server 2008's window functions? Thanks
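
    The distinction is easy to see by running one query with both a frame-sensitive aggregate and ROW_NUMBER over the same window. A quick check (using Python's sqlite3, which follows the same standard default-frame rule):

        import sqlite3

        db = sqlite3.connect(":memory:")   # needs SQLite >= 3.25
        db.execute("CREATE TABLE t (grp TEXT, val INTEGER)")
        db.executemany("INSERT INTO t VALUES (?, ?)",
                       [("a", 10), ("a", 10), ("a", 20)])   # two peer rows

        for row in db.execute("""
            SELECT val,
                   row_number() OVER (PARTITION BY grp ORDER BY val) AS rn,
                   sum(val)     OVER (PARTITION BY grp ORDER BY val) AS running
            FROM t
        """):
            print(row)
        # (10, 1, 20), (10, 2, 20), (20, 3, 40): the default frame makes the
        # running sum include both peer rows at once, while row_number still
        # gives each peer row its own distinct number.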

    Read the article

  • SQL Server 2005 database design - many-to-many relationships with hierarchy

    - by Remnant
    Note: I have completely re-written my original post to better explain the issue I am trying to understand. I have tried to generalise the problem as much as possible. Also, my thanks to the original people who responded. Hopefully this post makes things a little clearer.

    Context: In short, I am struggling to understand the best way to design a small-scale database to handle (what I perceive to be) multiple many-to-many relationships. Imagine the following scenario for a company organisational structure:

        Textile Division
            HR Dept:      Payroll (Emps), Hiring (Emps)
            Finance Dept: Audit (Emps),   Tax (Emps)
        Marketing Division
            HR Dept:      Payroll (Emps), Hiring (Emps)
            Finance Dept: Audit (Emps),   Accounts (Emps)

        NB: Emps denotes a list of employees who work in that area

    When I first started with this issue I made four separate tables:

    - Divisions - Textile, Marketing (PK = DivisionID)
    - Departments - HR, Finance (PK = DeptID)
    - Functions - Payroll, Hiring, Audit, Tax, Accounts (PK = FunctionID)
    - Employees - list of all employees (PK = EmployeeID)

    The problem as I see it is that there are multiple many-to-many relationships, i.e. many departments have many divisions and many functions have many departments.

    Question: Given the database structure above, suppose I wanted to do the following: get all employees who work in the Payroll function of the Marketing Division. To do this I need to be able to differentiate between the two Payroll departments, but I am not sure how this can be done. I understand that I could build a 'link/junction' table between Departments and Functions, so that I can retrieve which Functions are in which Departments. However, I would still need to differentiate the Division they belong to.

    Research effort: As you can see, I am an abecedarian when it comes to database design. I have spent the last two days researching this issue, traversing nested set models, adjacency models, reading that this issue is known not to be NP complete, etc. I am sure there is a simple solution?
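
    One common shape for this is a junction table that carries all three keys, so each (division, department, function) combination is its own row and the two Payroll departments stop being ambiguous. A sketch (using Python's sqlite3 purely so the example is runnable; the table and column names are invented for illustration):

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
            CREATE TABLE org_unit (
                org_unit_id INTEGER PRIMARY KEY,
                division    TEXT NOT NULL,   -- e.g. 'Marketing'
                department  TEXT NOT NULL,   -- e.g. 'HR'
                func        TEXT NOT NULL,   -- e.g. 'Payroll'
                UNIQUE (division, department, func));
            CREATE TABLE employee (
                employee_id INTEGER PRIMARY KEY,
                name        TEXT NOT NULL,
                org_unit_id INTEGER NOT NULL REFERENCES org_unit);
        """)
        db.execute("INSERT INTO org_unit VALUES (1, 'Marketing', 'HR', 'Payroll')")
        db.execute("INSERT INTO employee VALUES (1, 'Ann', 1)")

        print(db.execute("""
            SELECT e.name
            FROM employee e
            JOIN org_unit u ON u.org_unit_id = e.org_unit_id
            WHERE u.division = 'Marketing' AND u.func = 'Payroll'
        """).fetchall())   # [('Ann',)]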

    Read the article

  • How would I go about sharing variables in a class with Lua?

    - by Nicholas Flynt
    I'm fairly new to Lua. I've been working on trying to implement Lua scripting for logic in a game engine I'm putting together. I've had no trouble so far getting Lua up and running through the engine, and I'm able to call Lua functions from C and C functions from Lua. The way the engine works now, each Object class contains a set of variables that the engine can quickly iterate over to draw or process for physics. While game objects all need to access and manipulate these variables in order for the game engine itself to see any changes, they are free to create their own variables, and Lua is exceedingly flexible about this, so I don't foresee any issues. Anyway, currently the game engine side of things is sitting in C land, and I really want it to stay there for performance reasons. So in an ideal world, when spawning a new game object, I'd need to be able to give Lua read/write access to this standard set of variables as part of the Lua object's base class, which its game logic could then proceed to run wild with. So far, I'm keeping two separate tables of objects in place - Lua spawns a new game object, which adds itself to a numerically indexed global table of objects and then proceeds to call a C++ function, which creates a new GameObject class and registers the Lua index (an int) with the class. So far so good: C++ functions can now see the Lua object and easily perform operations or call functions in Lua land using dostring. What I need to do now is take the C++ variables, part of the GameObject class, and expose them to Lua, and this is where Google is failing me. I've encountered a very nice method which details the process using tags, but I've read that this method is deprecated in favour of metatables. What is the ideal way to accomplish this? Is it worth the hassle of learning how to pass class definitions around using libBind or some equivalent method, or is there a simple way I can just register each variable (once, at spawn time) with the global Lua object? What's the "current" best way to do this, as of Lua 5.1.4?

    Read the article

  • R: disentangling scopes

    - by rescdsk
    Hi! Right now, in my R project, I have functions1.R with doFoo() and doBar(), functions2.R with other functions, and main.R with the main program in it, which first does source('functions1.R'); source('functions2.R') and then calls the other functions. I've been starting the program from the R GUI in Mac OS X, with source('main.R'). This is fine the first time, but after that, the variables that were defined the first time through the program are still defined the second time functions*.R are sourced, and so the functions see a whole bunch of extra variables. I don't want that! I want an "undefined variable" error when my function uses a variable it shouldn't! Twice this has given me very late nights of debugging! So how do other people deal with this sort of problem? Is there something like source(), but that creates an independent namespace that doesn't fall through to the main one? Making a package seems like one solution, but it seems like a big pain in the butt compared to, e.g., Python, where a source file is automatically a separate namespace. Any tips? Thank you!
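
    (For reference, source() does take a local argument that can be an environment, e.g. source('functions1.R', local = new.env()), though functions defined that way then live in that environment.) The Python behaviour the question alludes to - a file loaded into its own namespace, unable to see the caller's variables - can be sketched like this (the file content is inlined as a string to keep the example self-contained):

        # Simulate loading a helper file into its own namespace.
        helper_source = "def do_foo():\n    return leaked + 1\n"

        namespace = {}
        exec(helper_source, namespace)   # like sourcing into a fresh environment

        leaked = 41                      # defined only in the caller's scope
        try:
            namespace["do_foo"]()
        except NameError as err:
            print("good - the helper cannot see the caller's variable:", err)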

    Read the article

  • Is there a good way of automatically generating JavaScript client code from server-side Python?

    - by tat.wright
    I basically want to be able to:

    1. Write a few functions in Python (with the minimum amount of extra metadata)
    2. Turn these functions into a web service (with the minimum of effort / boilerplate)
    3. Automatically generate some JavaScript functions / objects for RPC (this should prevent me from doing as many stupid things as possible, like mistyping method names, forgetting the names of methods, or passing the wrong number of arguments)

    Example. Python:

        def hello_world():
            return "Hello world"

    JavaScript:

        ...
        <!-- This file is automatically generated (either dynamically or statically) -->
        <script src="http://myurl.com/webservice/client_side_javascript"> </script>
        ...
        <script>
          $('#button').click(function () {
            hello_world(function (data) { $('#label').text(data) })
          })
        </script>

    A bit of research has shown me some approaches that come close to this:

    - Automatic generation of JSON-RPC services from functions, with a little boilerplate code in Python, and then using jQuery and JSON to do the calls (it's still easy to make mistakes with method names, you still need to be aware of URLs when calling, and it's very irritating to write these calls yourself in the Firebug shell).
    - Using a library like soaplib to generate WSDL from Python (by adding copious type information), and then somehow converting that into JavaScript (I'm not sure there is even a library to do this).

    But are there any approaches closer to what I want?
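
    The stub-generation half of step 3 is mostly introspection. A minimal sketch of the idea (names like rpcCall and the URL scheme are invented for the example):

        import inspect

        def hello_world():
            return "Hello world"

        EXPOSED = [hello_world]   # functions the (hypothetical) service registers

        def js_stubs(funcs, url_base="/rpc"):
            # Emit one JavaScript stub per exposed Python function, so names
            # and argument lists cannot drift apart by hand.
            stubs = []
            for f in funcs:
                params = ", ".join(inspect.signature(f).parameters)
                lead = params + ", " if params else ""
                stubs.append(
                    f"function {f.__name__}({lead}cb) {{\n"
                    f"  rpcCall('{url_base}/{f.__name__}', [{params}], cb);\n"
                    f"}}"
                )
            return "\n".join(stubs)

        print(js_stubs(EXPOSED))
        # -> function hello_world(cb) {
        #      rpcCall('/rpc/hello_world', [], cb);
        #    }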

    Read the article
