Search Results

Search found 9366 results on 375 pages for 'common lisp'.


  • multiple models in Rails with a shared interface

    - by dfondente
    I'm not sure of the best structure for a particular situation in Rails. We have several types of workshops. The administration of the workshops is the same regardless of workshop type, so the data for the workshops is in a single model. We collect feedback from participants about the workshops, and the questionnaire is different for each type of workshop. I want to access the feedback about the workshop from the workshop model, but the class of the associated model will depend on the type of workshop. If I were doing this in something other than Rails, I would set up an abstract class for WorkshopFeedback and then have subclasses for each type of workshop: WorkshopFeedbackOne, WorkshopFeedbackTwo, WorkshopFeedbackThree. I'm unsure how best to handle this with Rails. I currently have:

        class Workshop < ActiveRecord::Base
          has_many :workshop_feedbacks
        end

        class Feedback < ActiveRecord::Base
          belongs_to :workshop
          has_many :feedback_ones
          has_many :feedback_twos
          has_many :feedback_threes
        end

        class FeedbackOne < ActiveRecord::Base
          belongs_to :feedback
        end

        class FeedbackTwo < ActiveRecord::Base
          belongs_to :feedback
        end

        class FeedbackThree < ActiveRecord::Base
          belongs_to :feedback
        end

    This doesn't seem like the cleanest way to access the feedback from the workshop model, as getting to the correct feedback requires logic that inspects the workshop type and then chooses, for instance, @workshop.feedback.feedback_one. Is there a better way to handle this situation? Would it be better to use a polymorphic association for feedback? Or maybe a Module or Mixin for the shared Feedback interface? Note: I am avoiding Single Table Inheritance here because the FeedbackOne, FeedbackTwo, FeedbackThree models do not share much common data, so STI would leave me with a large, sparsely populated table.
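
    A minimal sketch of the polymorphic route mentioned above, assuming a detail_type/detail_id column pair on the feedbacks table (the association names are hypothetical):

        # Hypothetical sketch: Feedback keeps the shared columns and points
        # at a type-specific record through a polymorphic belongs_to.
        class Workshop < ActiveRecord::Base
          has_many :feedbacks
        end

        class Feedback < ActiveRecord::Base
          belongs_to :workshop
          belongs_to :detail, :polymorphic => true  # FeedbackOne, FeedbackTwo, ...
        end

        class FeedbackOne < ActiveRecord::Base
          has_one :feedback, :as => :detail
        end

        # @workshop.feedbacks.first.detail then returns the right class
        # without any type-switching in the caller.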

    Read the article

  • Why doesn't g++ pay attention to __attribute__((pure)) for virtual functions?

    - by jchl
    According to the GCC documentation, __attribute__((pure)) tells the compiler that a function has no side effects, and so it can be subject to common subexpression elimination. This attribute appears to work for non-virtual functions, but not for virtual functions. For example, consider the following code:

        extern void f( int );

        class C
        {
        public:
            int a1();
            int a2() __attribute__((pure));
            virtual int b1();
            virtual int b2() __attribute__((pure));
        };

        void test_a1( C *c ) { if( c->a1() ) { f( c->a1() ); } }
        void test_a2( C *c ) { if( c->a2() ) { f( c->a2() ); } }
        void test_b1( C *c ) { if( c->b1() ) { f( c->b1() ); } }
        void test_b2( C *c ) { if( c->b2() ) { f( c->b2() ); } }

    When compiled with optimization enabled (either -O2 or -Os), test_a2() only calls C::a2() once, but test_b2() calls b2() twice. Is there a reason for this? Is it because, even though the implementation in class C is pure, g++ can't assume that the implementation in every subclass will also be pure? If so, is there a way to tell g++ that this virtual function and every subclass's implementation will be pure?
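
    A workaround sketch, independent of whether g++ honors the attribute on virtuals: hoist the virtual call into a local so the result is reused regardless of what the optimizer can prove.

        // Hypothetical rewrite of test_b2(): one explicit call, result reused.
        void test_b2_once( C *c )
        {
            int v = c->b2();   // single virtual dispatch
            if( v ) { f( v ); }
        }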

    Read the article

  • Number of characters recommended for a statement

    - by liaK
    Hi, I have been using Qt 4.5, and hence C++. I have been told that it's standard practice to keep each statement in the application within 80 characters. Even in Qt Creator we can make a right border visible so that we know when we are crossing the 80-character limit. But my question is: is it really a standard that's followed? Because in my application I use indenting and so on, it's quite common for me to cross the boundary. Other cases include an error message that is a bit explanatory and sits in an inner block of code, so it too will cross the boundary. My variable names tend to be on the long side so that they are meaningful; when I call functions on those variables, again I cross the limit. Function names are not short either. I agree a horizontal scroll bar shows up and it's quite annoying to move back and forth. So, for function calls with multiple arguments, when the boundary is reached I put the remaining arguments on a new line. But beyond that, for a single statement (e.g. a very long error message in double quotes " ", or a chain like longfun1()->longfun2()->...), if I use a \ and split it into multiple lines, the readability becomes very poor. So is it good practice to have these statement-length restrictions? Does this restriction have to be followed? I don't think it depends on a specific language anyway; I added the C++ and Qt tags in case it does. Any pointers regarding this are welcome.
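
    For what it's worth, long literals and chained calls in C++ can usually be split without a backslash. A small illustrative sketch (the function names are hypothetical):

        // Adjacent string literals concatenate, so no backslash is needed:
        const char *msg = "This error message is long enough that it would "
                          "not fit on a single 80-column line.";

        // A long chain can break after each ->:
        int result = longfun1()
                         ->longfun2()
                         ->longfun3();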

    Read the article

  • What is an appropriate way to separate lifecycle events in the logging system?

    - by Hanno Fietz
    I have an application with many different parts; it runs on OSGi, so there are the bundle lifecycles, and there are a number of message processors and plugin components that can all die, be started and stopped, have their setup changed, etc. I want a way to get a good picture of the current system status: what components are up, which have problems, how long they have been running, and so on. I think that logging, especially in combination with custom appenders (I'm using log4j), is a good part of the solution and helps ad-hoc analysis as well as live monitoring. Normally I would classify lifecycle events as INFO level, but what I really want is to have them separate from whatever else is going on at INFO. I could create my own level, LIFECYCLE. The lifecycle events happen in various different areas and at various levels of the application hierarchy, and they happen in the same areas as other events that I want to separate them from. I could introduce some common lifecycle management and use that to distinguish the events from others. For instance, all components that have a lifecycle could implement a particular interface, and I could log by its name. Are there good examples of how this is done elsewhere? What are the considerations?
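
    If the custom LIFECYCLE level turns out to be the way to go, log4j 1.x lets you subclass Level. A minimal sketch (class and logger names are hypothetical; the alternative of a dedicated "lifecycle" logger hierarchy routed to its own appender needs no custom level at all):

        import org.apache.log4j.Level;
        import org.apache.log4j.Logger;

        // LIFECYCLE sits just above INFO so it can be filtered and routed
        // to its own appender independently of ordinary INFO traffic.
        public class LifecycleLevel extends Level {
            public static final int LIFECYCLE_INT = Level.INFO_INT + 10;
            public static final Level LIFECYCLE =
                    new LifecycleLevel(LIFECYCLE_INT, "LIFECYCLE", 6);

            protected LifecycleLevel(int level, String name, int syslogEquivalent) {
                super(level, name, syslogEquivalent);
            }
        }

        // Usage:
        // Logger.getLogger(component.getClass()).log(LifecycleLevel.LIFECYCLE, "started");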

    Read the article

  • Array's index and argc signedness

    - by tusbar
    Hello. The C standard (5.1.2.2.1 Program startup) says:

        The function called at program startup is named main. [...] It shall be defined with a return type of int and with no parameters:

            int main(void) { /* ... */ }

        or with two parameters [...]:

            int main(int argc, char *argv[]) { /* ... */ }

    And later says: "The value of argc shall be nonnegative." Why isn't argc defined as an unsigned int, argc supposedly meaning 'argument count'? Should argc be used as an index for argv? So I started wondering whether the C standard says something about the type of an array index. Is it signed? 6.5.2.1 Array subscripting:

        One of the expressions shall have type "pointer to object type", the other expression shall have integer type, and the result has type "type".

    It doesn't say anything about signedness (or I didn't find it). It is pretty common to see code using negative array indexes (array[-1]), but isn't that undefined behavior? Should array indexes be unsigned?
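
    A small illustration of when a negative subscript is and isn't defined; a sketch, not normative text:

        #include <stdio.h>

        int main(void)
        {
            int a[5] = {10, 20, 30, 40, 50};
            int *p = &a[2];

            /* Defined: p[-1] is a[1], still inside the array object. */
            printf("%d\n", p[-1]);   /* prints 20 */

            /* Undefined: a[-1] would address storage before the array. */
            return 0;
        }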

    Read the article

  • Type-safe generic data structures in plain-old C?

    - by Bradford Larsen
    I have done far more C++ programming than "plain old C" programming. One thing I sorely miss when programming in plain C is type-safe generic data structures, which are provided in C++ via templates. For the sake of concreteness, consider a generic singly linked list. In C++, it is a simple matter to define your own template class and then instantiate it for the types you need. In C, I can think of a few ways of implementing a generic singly linked list:

      1. Write the linked list type(s) and supporting procedures once, using void pointers to go around the type system.
      2. Write preprocessor macros taking the necessary type names, etc., to generate a type-specific version of the data structure and supporting procedures.
      3. Use a more sophisticated, stand-alone tool to generate the code for the types you need.

    I don't like option 1, as it subverts the type system and would likely have worse performance than a specialized type-specific implementation. Using a uniform representation of the data structure for all types, and casting to/from void pointers, so far as I can see, necessitates an indirection that would be avoided by an implementation specialized for the element type. Option 2 doesn't require any extra tools, but it feels somewhat clunky and could give bad compiler errors when used improperly. Option 3 could give better compiler error messages than option 2, as the specialized data structure code would reside in expanded form that could be opened in an editor and inspected by the programmer (as opposed to code generated by preprocessor macros). However, this option is the most heavyweight, a sort of "poor man's templates". I have used this approach before, using a simple sed script to specialize a "templated" version of some C code. I would like to program my future "low-level" projects in C rather than C++, but have been frightened by the thought of rewriting common data structures for each specific type. What experience do people have with this issue? Are there good libraries of generic data structures and algorithms in C that do not go with option 1 (i.e. casting to and from void pointers, which sacrifices type safety and adds a level of indirection)?
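
    For reference, a minimal sketch of what option 2 looks like in practice: a single macro instantiates a type-specific node and push function (names are hypothetical):

        #include <stdlib.h>

        #define DEFINE_LIST(T)                                          \
            typedef struct T##_node {                                   \
                T value;                                                \
                struct T##_node *next;                                  \
            } T##_node;                                                 \
                                                                        \
            static T##_node *T##_push(T##_node *head, T value)          \
            {                                                           \
                T##_node *n = malloc(sizeof *n);                        \
                if (n) { n->value = value; n->next = head; }            \
                return n;                                               \
            }

        DEFINE_LIST(int)   /* generates int_node and int_push */

        /* Usage: int_node *list = int_push(NULL, 42);
           Each instantiation is a distinct, type-checked function,
           with no void* indirection in the node layout. */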

    Read the article

  • Replacing certain words with links to definitions using Javascript

    - by adharris
    I am trying to create a glossary system which will get a list of common words and their definitions via ajax, then replace any occurrence of those words in certain elements (those with the useGlossary class) with a link to the full definition, and provide a short definition on mouse hover. The way I am doing it works, but for large pages it takes 30-40 seconds, during which the page hangs. I would like to either decrease the time it takes to do the replacement or make the replacement run in the background without hanging the page. I am using jQuery for most of the javascript, and Qtip for the mouse hover. Here is my existing slow code:

        $(document).ready(function () {
            $.get("fetchGlossary.cfm", null, glossCallback, "json");
        });

        function glossCallback(data) {
            $(".useGlossary").each(function() {
                var $this = $(this);
                for (var i in data) {
                    $this.html($this.html().replace(
                        new RegExp("\\b" + data[i].term + "\\b", "gi"),
                        function(m) { return makeLink(m, data[i].def); }));
                }
                $this.find("a.glossary").qtip({ style: { name: 'blue', tip: true } });
            });
        }

        function makeLink(m, def) {
            return "<a class='glossary glossary" + m.replace(/\s/gi, "").toUpperCase() +
                   "' href='reference/glossary.cfm' title='" + def + "'>" + m + "</a>";
        }

    Thanks for any feedback/suggestions!
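
    One way to attack the cost, sketched below: build a single alternation regex and do one replace per element, so each element's HTML is serialized and reparsed once rather than once per term. This assumes data is an array of {term, def} objects and that terms contain no regex metacharacters:

        // Hypothetical sketch: one combined regex, one replace per element.
        function glossCallback(data) {
            var defs = {};
            var terms = [];
            for (var i = 0; i < data.length; i++) {
                defs[data[i].term.toLowerCase()] = data[i].def;
                terms.push(data[i].term);
            }
            var pattern = new RegExp("\\b(" + terms.join("|") + ")\\b", "gi");

            $(".useGlossary").each(function () {
                var $this = $(this);
                $this.html($this.html().replace(pattern, function (m) {
                    return makeLink(m, defs[m.toLowerCase()]);
                }));
                $this.find("a.glossary").qtip({ style: { name: 'blue', tip: true } });
            });
        }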

    Read the article

  • log activity. intrusion detection. user event notification (interaction). messaging

    - by Julian Davchev
    I have three questions that I somehow find related, so I put them in the same place. I'm currently building a relatively large LAMP system, making use of messaging (ActiveMQ), memcache and other goodies. I wonder if there are best practices or nice tips and tricks on how to implement these. The system is user aware, meaning all actions can be bound to a particular logged-in user.

    1. How to log all actions/activities of users, so that stats/graphics can be extracted later for analysis? At best that will include all url calls, post data, etc., meaning tons of inserts. I am thinking that sending messages to ActiveMQ, then cron-dumping them into the DB and cron-analysing them later, might be a good idea here. Since I'm using Zend Framework, I guess I may use some request plugin so I don't have to make the log() call all over the code.

    2. How to log stuff so it may be used for intrusion detection? I know most things might be done at the http level using apache mods, for example, but there are also specific cases (5 failed login attempts in a row leads to a captcha, etc.). This also would include tons of inserts. Here I guess direct usage of memcache might be the best approach, as the data doesn't seem vital enough to be permanently persisted. Not sure if I can reuse the data from point 1.

    3. The system will notify users of some events: something needs approval, something broke, whatever. Some events will need feedback (an action) from the user; others are just informational. I wonder if there are common solutions for needs like this. Example: based on occurring event(s), the user will be notified (in a user inbox, for example) of what happened. There will be a link or something to lead him to the details of the thing that happened, so he can take action accordingly. These seem trivial at first look, but the problem I see is that coding it directly quickly becomes hard to maintain.
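
    For point 2, a minimal sketch of a memcache-backed failed-login counter (the key naming, window and threshold are hypothetical):

        <?php
        // Hypothetical sketch: count failed logins per user in memcache.
        $mc = new Memcache();
        $mc->connect('127.0.0.1', 11211);

        function failedLoginCount(Memcache $mc, $userId) {
            $key = "login_fail:" . $userId;
            // add() is a no-op if the key already exists; 900 s window
            $mc->add($key, 0, 0, 900);
            return $mc->increment($key, 1);
        }

        if (failedLoginCount($mc, $userId) >= 5) {
            // show captcha instead of the plain login form
        }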

    Read the article

  • Create a function to a function that runs a for loop?

    - by user637364
    Hi, I have made some code that creates a red border around an image when the user clicks on it, to highlight that it's the chosen one. But I want to erase the previous (or all) borders, giving all images a white border, before a new click is made on another image. My question is: how do I activate a call to a function when a click is made, and how would such a function look in jQuery? I just want to use .css to change the border, perhaps in a loop over the ids of the images. Can I mix common javascript with jQuery, or should a script be pure jQuery code only? This is a simplified part of the code; it contains "minibild_1" to "minibild_5":

        $(document).ready(function(){
            $("#minibild_1").click(function(){
                $("#minibild_1").css({"border":"2px solid #D00C33"});
                $("#storbild").attr("src","../bilder/bilder_stora/{$row1-bild_1}.jpg");
            });
            $("#minibild_2").click(function(){
                $("#minibild_2").css({"border":"2px solid #D00C33"});
                $("#storbild").attr("src","../bilder/bilder_stora/{$row1-bild_2}.jpg");
            });
        });
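
    A refactoring sketch: give the thumbnails a shared class (a hypothetical class="minibild" on each img) so one handler resets every border before highlighting the clicked image. The data lookup for the large image is also hypothetical:

        // Hypothetical sketch: one handler for all thumbnails.
        $(document).ready(function () {
            $("img.minibild").click(function () {
                // reset all thumbnails, then highlight the clicked one
                $("img.minibild").css("border", "2px solid #FFFFFF");
                $(this).css("border", "2px solid #D00C33");
                // e.g. <img class="minibild" data-storbild="...jpg">
                $("#storbild").attr("src", $(this).attr("data-storbild"));
            });
        });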

    Read the article

  • Unicode version of base64 encoding/ decoding

    - by Yan Cheng CHEOK
    I am using the base64 encoding/decoding from http://www.adp-gmbh.ch/cpp/common/base64.html. It works pretty well with the following code:

        const std::string s = "I Am A Big Fat Cat";
        std::string encoded = base64_encode(reinterpret_cast<const unsigned char*>(s.c_str()), s.length());
        std::string decoded = base64_decode(encoded);
        std::cout << _T("encoded: ") << encoded << std::endl;
        std::cout << _T("decoded: ") << decoded << std::endl;

    However, when it comes to unicode:

        namespace std {
        #ifdef _UNICODE
            typedef wstring tstring;
        #else
            typedef string tstring;
        #endif
        }

        const std::tstring s = _T("I Am A Big Fat Cat");

    how can I still make use of the above function? Merely changing the signatures to

        std::string base64_encode(unsigned TCHAR const*, unsigned int len);
        std::tstring base64_decode(std::string const& s);

    will not work correctly. (I expect base64_encode to return ASCII; hence, std::string should be used instead of std::tstring.)
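
    One approach, sketched under the assumption that a Win32 build is acceptable (the code above already uses _T): convert the wide string to UTF-8 bytes first, then base64-encode the bytes. base64_encode is the function from the page linked above.

        #include <windows.h>
        #include <string>

        // Hypothetical sketch: wstring -> UTF-8 bytes -> base64.
        std::string base64_encode_wide(const std::wstring &ws)
        {
            int len = WideCharToMultiByte(CP_UTF8, 0, ws.c_str(), (int)ws.size(),
                                          NULL, 0, NULL, NULL);
            std::string utf8(len, '\0');
            WideCharToMultiByte(CP_UTF8, 0, ws.c_str(), (int)ws.size(),
                                &utf8[0], len, NULL, NULL);
            return base64_encode(reinterpret_cast<const unsigned char*>(utf8.data()),
                                 (unsigned int)utf8.size());
        }

        // Decoding reverses the steps: base64_decode() to UTF-8 bytes, then
        // MultiByteToWideChar(CP_UTF8, ...) back to a std::wstring.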

    Read the article

  • Programming logic best practice - redundant checks

    - by eldblz
    I'm creating a large PHP project and I have a trivial doubt about how to proceed. Assume we have a class Books; in this class I have the method ReturnInfo:

        function ReturnInfo($id) {
            if( is_numeric($id) ) {
                $query = "SELECT * FROM books WHERE id='" . $id . "' LIMIT 1;";
                if( $row = $this->DBDrive->ExecuteQuery($query, $FetchResults=TRUE) ) {
                    return $row;
                } else {
                    return FALSE;
                }
            } else {
                throw new Exception('Books - ReturnInfo - id not valid.');
            }
        }

    Then I have another method, PrintInfo:

        function PrintInfo($id) {
            print_r( $this->ReturnInfo($id) );
        }

    Obviously the code samples are just examples, not actual production code. In the second method, should I check (again) whether id is numeric? Or can I skip it because it's already taken care of in the first method, and if it's not valid an exception will be thrown? Until now I have always written code with redundant checks (no matter if something is already checked elsewhere, I check it again). Is there a best practice? Is it just common sense? Thank you in advance for your kind replies.
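
    One common answer, sketched: validate once at the public boundary and let internal callers rely on the exception contract instead of re-checking (the helper name is hypothetical):

        // Hypothetical sketch: the guard lives in one place.
        private function assertValidId($id) {
            if (!is_numeric($id)) {
                throw new InvalidArgumentException("Books: id not valid: $id");
            }
        }

        public function ReturnInfo($id) {
            $this->assertValidId($id);
            // ... query as above ...
        }

        public function PrintInfo($id) {
            // No re-check: ReturnInfo() throws on a bad id.
            print_r($this->ReturnInfo($id));
        }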

    Read the article

  • Problem accessing private variables in jQuery like chainable design pattern

    - by novogeek
    Hi folks, I'm trying to create my own custom toolbox which imitates jQuery's design pattern. Basically, the idea is somewhat derived from this post: http://stackoverflow.com/questions/2061501/jquery-plugin-design-pattern-common-practice-for-dealing-with-private-function (check the answer given by "David"). So here is my toolbox function:

        (function(window){
            var mySpace = function(){
                return new PrivateSpace();
            }

            var PrivateSpace = function(){
                var testCache = {};
            };

            PrivateSpace.prototype = {
                init: function(){
                    console.log('init this:', this);
                    return this;
                },
                ajax: function(){
                    console.log('make ajax calls here');
                    return this;
                },
                cache: function(key, selector){
                    console.log('cache selectors here');
                    testCache[key] = selector;
                    console.log('cached selector: ', testCache);
                    return this;
                }
            }

            window.hmis = window.m$ = mySpace();
        })(window)

    Now, if I execute this function like:

        console.log(m$.cache('firstname', '#FirstNameTextbox'));

    I get an error: 'testCache' is not defined. I'm not able to access the variable testCache inside the cache function of the prototype. How should I access it? Basically, what I want to do is cache all my jQuery selectors into an object and use this object in the future.
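
    The error follows from scoping: testCache is a local variable of the constructor and is invisible to methods defined on the prototype. Two common fixes, sketched:

        // Fix 1: make it a per-instance property and reach it via `this`.
        var PrivateSpace = function(){
            this.testCache = {};
        };

        PrivateSpace.prototype.cache = function(key, selector){
            this.testCache[key] = selector;
            return this;
        };

        // Fix 2: declare it in the outer IIFE, next to mySpace, so every
        // prototype method closes over one shared object:
        // var testCache = {};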

    Read the article

  • A better solution than element.Elements("Whatever").First()?

    - by codeka
    I have an XML file like this:

        <SiteConfig>
          <Sites>
            <Site Identifier="a" />
            <Site Identifier="b" />
            <Site Identifier="c" />
          </Sites>
        </SiteConfig>

    The file is user-editable, so I want to provide a reasonable error message in case I can't properly parse it. I could probably write a .xsd for it, but that seems kind of overkill for a simple file. So anyway, when querying for the list of <Site> nodes, there are a couple of ways I could do it:

        var doc = XDocument.Load(...);
        var siteNodes = from siteNode in doc.Element("SiteConfig").Element("SiteUrls").Elements("Sites")
                        select siteNode;

    But the problem with this is that if the user has not included the <SiteUrls> node (say), it'll just throw a NullReferenceException, which doesn't really tell the user much about what actually went wrong. Another possibility is just to use Elements() everywhere instead of Element(), but that doesn't always work out when coupled with calls to Attribute(), for example in the following situation:

        var siteNodes = from siteNode in doc.Elements("SiteConfig")
                                             .Elements("SiteUrls")
                                             .Elements("Sites")
                        where siteNode.Attribute("Identifier").Value == "a"
                        select siteNode;

    (That is, there's no equivalent to Attributes("xxx").Value.) Is there something built into the framework to handle this situation a little better? What I would prefer is a version of Element() (and of Attribute() while we're at it) that throws a descriptive exception (e.g. "Looking for element <xyz> under <abc> but no such element was found") instead of returning null. I could write my own version of Element() and Attribute(), but it just seems to me like this is such a common scenario that I must be missing something...
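
    If nothing built-in turns up, the extension-method version is short. A sketch (method and exception choices are hypothetical):

        using System;
        using System.Xml.Linq;

        // Hypothetical sketch: Element()/Attribute() variants that fail loudly.
        static class XExtensions
        {
            public static XElement RequiredElement(this XElement parent, XName name)
            {
                var child = parent.Element(name);
                if (child == null)
                    throw new FormatException(string.Format(
                        "Looking for element <{0}> under <{1}> but no such element was found",
                        name, parent.Name));
                return child;
            }

            public static string RequiredAttribute(this XElement element, XName name)
            {
                var attr = element.Attribute(name);
                if (attr == null)
                    throw new FormatException(string.Format(
                        "Element <{0}> is missing required attribute '{1}'",
                        element.Name, name));
                return attr.Value;
            }
        }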

    Read the article

  • merging two tables, while applying aggregates on the duplicates (max,min and sum)

    - by cloudraven
    I have a table (let's call it log) with a few million records. Among the fields I have Id, Count, FirstHit, LastHit:

        Id       - the record id
        Count    - number of times this Id has been reported
        FirstHit - earliest timestamp with which this Id was reported
        LastHit  - latest timestamp with which this Id was reported

    This table has only one record for any given Id. Every day I get, into another table (let's call it feed), around half a million records with these fields among many others:

        Id
        Timestamp - entry date and time

    This table can have many records for the same Id. What I want to do is to update log in the following way:

        Count    - log's Count value, plus the count() of records for that Id found in feed
        FirstHit - the earliest of the current value in log and the minimum value in feed for that Id
        LastHit  - the latest of the current value in log and the maximum value in feed for that Id

    It should be noted that many of the Ids in feed are already in log. The simple thing that worked was to create a temporary table and insert into it the union of both, as in:

        SELECT Id, MIN(Timestamp) AS FirstHit, MAX(Timestamp) AS LastHit, COUNT(*) AS Count
        FROM feed GROUP BY Id
        UNION ALL
        SELECT Id, FirstHit, LastHit, Count FROM log;

    From that temporary table I do a select that aggregates MIN(FirstHit), MAX(LastHit) and SUM(Count):

        SELECT Id, MIN(FirstHit), MAX(LastHit), SUM(Count)
        FROM @temp GROUP BY Id;

    and that gives me the end result. I could then delete everything from log and replace it with everything in temp, or craft an update for the common records and an insert for the new ones. However, I think both are highly inefficient. Is there a more efficient way of doing this, perhaps doing the update in place in the log table?
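
    An in-place sketch, assuming SQL Server 2008 or later (the @temp syntax suggests SQL Server): aggregate feed once, then let MERGE do the update-or-insert in a single pass.

        -- Hypothetical sketch: aggregate feed once, then MERGE into log.
        ;WITH agg AS (
            SELECT Id,
                   MIN([Timestamp]) AS FirstHit,
                   MAX([Timestamp]) AS LastHit,
                   COUNT(*)         AS Cnt
            FROM feed
            GROUP BY Id
        )
        MERGE log AS t
        USING agg AS s ON t.Id = s.Id
        WHEN MATCHED THEN UPDATE SET
            t.[Count]  = t.[Count] + s.Cnt,
            t.FirstHit = CASE WHEN s.FirstHit < t.FirstHit THEN s.FirstHit ELSE t.FirstHit END,
            t.LastHit  = CASE WHEN s.LastHit  > t.LastHit  THEN s.LastHit  ELSE t.LastHit  END
        WHEN NOT MATCHED THEN
            INSERT (Id, [Count], FirstHit, LastHit)
            VALUES (s.Id, s.Cnt, s.FirstHit, s.LastHit);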

    Read the article

  • C++ reference variable again!!!

    - by kumar_m_kiran
    Hi all, I think most would be surprised about this topic coming up again. However, I am referring to the book "C++ Common Knowledge: Essential Intermediate Programming" by Stephen C. Dewhurst. In the book, he makes a particular statement (in the section under Item 5: References Are Aliases, Not Pointers), which is as below:

        A reference is an alias for an object that already exists prior to the initialization of the reference. Once a reference is initialized to refer to a particular object, it cannot later be made to refer to a different object; a reference is bound to its initializer for its whole lifetime.

    Can anyone please explain the context of "cannot later be made to refer to a different object"? The below code works for me:

        #include <iostream>
        using namespace std;

        int main(int argc, char *argv[])
        {
            int i = 100;
            int& ref = i;
            cout << ref << endl;

            int k = 2000;
            ref = k;
            cout << ref << endl;
            return 0;
        }

    Here I am referring the variable ref to both the i and k variables, and the code works perfectly fine. Am I missing something? I used SUSE10 64-bit linux for testing my sample program. Thanks for your input in advance.
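
    A small check that distinguishes the two readings: ref = k; assigns through the alias rather than rebinding it, which the addresses (and i's new value) make visible. A sketch:

        #include <iostream>
        using namespace std;

        int main()
        {
            int i = 100;
            int& ref = i;

            int k = 2000;
            ref = k;                      // copies k's value into i; no rebinding

            cout << i << endl;            // 2000: i itself changed
            cout << (&ref == &i) << endl; // 1: ref still aliases i
            cout << (&ref == &k) << endl; // 0: ref never refers to k
            return 0;
        }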

    Read the article

  • Running unittest with typical test directory structure.

    - by Major Major
    The very common directory structure for even a simple Python module seems to be to separate the unit tests into their own test directory:

        new_project/
            antigravity/
                antigravity.py
            test/
                test_antigravity.py
            setup.py
            etc.

    (for example, see this Python project howto). My question is simply: what's the usual way of actually running the tests? I suspect this is obvious to everyone except me, but you can't just run python test_antigravity.py from the test directory, as its import antigravity will fail since the module is not on the path. I know I could modify PYTHONPATH and other search-path-related tricks, but I can't believe that's the simplest way; it's fine if you're the developer, but not realistic to expect your users to use if they just want to check that the tests pass. The other alternative is to copy the test file into the other directory, but that seems a bit dumb and misses the point of having them in a separate directory to start with. So, if you had just downloaded the source to my new project, how would you run the unit tests? I'd prefer an answer that would let me say to my users: "To run the unit tests, do X."
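
    One answer that fits the "do X" requirement, assuming Python 2.7 or later (which ships test discovery): run unittest from the project root so imports resolve against the top-level package.

        # Run from inside new_project/ (discovery on Python 2 requires
        # test/ to contain an __init__.py):
        python -m unittest discover test

        # or address the test module directly:
        python -m unittest test.test_antigravity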

    Read the article

  • Organizing development teams

    - by Patrick
    A long time ago, when my company was much smaller, dividing the development work over teams was quite easy:

      - the 'application' team developed the application-specific logic, often requiring deep insight into specific industry problems
      - the 'generic' team developed the parts that were common/generic for all applications (user-interface-related stuff, database access, low-level Windows stuff, ...)

    Over the years the boundaries between the teams have become fuzzy:

      - the 'application' teams often write application-specific functionality with a 'generic' part, so instead of asking the 'generic' team to write that part for them, they write it themselves to speed up development, then donate it to the 'generic' team
      - the 'generic' team's focus seems to be more 'maintenance oriented'. All of the 'very generic' code has already been written, so no new development is needed on it; instead they continuously have to support all the functionality donated by the application teams

    All this seems to indicate that it's no longer a good idea to have this split in teams. Maybe the 'generic' team should evolve into a 'software quality' team (defining and guarding the rules for writing good-quality software), or into a 'software deployment' team (defining how software should be deployed, installed, ...). How do you split up the work in different teams if you have different applications?

      - everybody can write generic code and donates it to a central 'generic' team?
      - everybody can write generic code, but nobody 'manages' it (everybody is the owner)?
      - generic code is written by a 'generic' team only, and the applications have to wait until the 'generic' team delivers the generic part (via a library, via a DLL)?
      - there is no overlap in code between the different applications?
      - some other way?

    Notice that the advantage of having the mix (allowing everybody to write everywhere in the code) is that code is written in a more flexible way, and it's easier to debug since you can easily step into the 'generic' code in the debugger. But the big (and maybe only) disadvantage is that this generic code may become nobody's responsibility if no clear team manages it anymore. What is your vision?

    Read the article

  • JavaScript for loop index strangeness

    - by pythonBOI
    I'm relatively new to JS, so this may be a common problem, but I noticed something strange when dealing with for loops and the onclick function. I was able to replicate the problem with this code:

        <html>
        <head>
        <script type="text/javascript">
        window.onload = function () {
            var buttons = document.getElementsByTagName('a');
            for (var i = 0; i < 2; i++) {
                buttons[i].onclick = function () {
                    alert(i);
                    return false;
                }
            }
        }
        </script>
        </head>
        <body>
        <a href="">hi</a>
        <br />
        <a href="">bye</a>
        </body>
        </html>

    When clicking the links I would expect to get '0' and '1', but instead I get '2' for both of them. Why is this? BTW, I managed to solve my particular problem by using the 'this' keyword, but I'm still curious as to what is behind this behavior.
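
    The usual explanation: every handler closes over the same variable i, which is 2 by the time any click fires. A sketch of the standard fix, capturing the current value per iteration with an immediately-invoked function:

        window.onload = function () {
            var buttons = document.getElementsByTagName('a');
            for (var i = 0; i < 2; i++) {
                buttons[i].onclick = (function (captured) {
                    // 'captured' is a fresh binding for each iteration
                    return function () {
                        alert(captured);
                        return false;
                    };
                })(i);
            }
        };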

    Read the article

  • C#: Determine Type for (De-)Serialization

    - by dbemerlin
    Hi, I have a little problem implementing some serialization/deserialization logic. I have several classes that each take a different type of request object, all implementing a common interface and inheriting from a default implementation. This is how I think it should be:

    Requests:

        interface IRequest {
            public String Action { get; set; }
        }

        class DefaultRequest : IRequest {
            public String Action { get; set; }
        }

        class LoginRequest : DefaultRequest {
            public String User { get; set; }
            public String Pass { get; set; }
        }

    Handlers:

        interface IHandler<T> {
            public Type GetRequestType();
            public IResponse HandleRequest(IModel model, T request);
        }

        // Used as fallback if the handler cannot be determined
        class DefaultHandler<T> : IHandler<T> {
            public Type GetRequestType() {
                return /* ....... how to get the Type of T? ((new T()).GetType()) ? .......... */
            }

            public IResponse HandleRequest(IModel model, T request) { /* ... */ }
        }

        class LoginHandler : DefaultHandler<LoginRequest> {
            public IResponse HandleRequest(IModel model, LoginRequest request) { }
        }

    Calling:

        class Controller {
            public ProcessRequest(String action, String serializedRequest) {
                IHandler handler = GetHandlerForAction(action);
                IRequest request = serializer.Deserialize<handler.GetRequestType()>(serializedRequest);
                handler(this.Model, request);
            }
        }

    Is what I'm thinking of even possible? My current solution is that each handler gets the serialized string and is itself responsible for deserialization. This is not a good solution, as it contains duplicate code: the beginning of each HandleRequest method looks the same (FooRequest request = Deserialize(serializedRequest); plus try/catch and other error handling on failed deserialization). Embedding type information in the serialized data is not possible and not intended. Thanks for any hints.
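
    On the embedded question: typeof(T) answers it directly, with no instantiation needed. A sketch of how a non-generic base interface then lets the controller hold handlers without knowing T (names are hypothetical):

        // Hypothetical sketch: typeof(T) gives the request type, and the
        // non-generic IHandler lets the controller dispatch on runtime Type.
        interface IHandler
        {
            Type GetRequestType();
            IResponse HandleRequest(IModel model, object request);
        }

        class DefaultHandler<T> : IHandler where T : IRequest
        {
            public Type GetRequestType() { return typeof(T); }

            public IResponse HandleRequest(IModel model, object request)
            {
                return HandleRequest(model, (T)request);   // one cast, one place
            }

            protected virtual IResponse HandleRequest(IModel model, T request)
            {
                /* default behaviour here */
                return null;
            }
        }

        // The controller would then pair GetRequestType() with a
        // non-generic Deserialize(Type, string) overload of the serializer.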

    Read the article

  • How can I prevent infinite recursion when using events to bind UI elements to fields?

    - by Billy ONeal
    The following seems to be a relatively common pattern (to me, not to the community at large) to bind a string variable to the contents of a TextBox:

        class MyBackEndClass
        {
            public event EventHandler DataChanged;

            string _Data;
            public string Data
            {
                get { return _Data; }
                set
                {
                    _Data = value;
                    // Fire the DataChanged event
                }
            }
        }

        class SomeForm : // Form stuff
        {
            MyBackEndClass mbe;
            TextBox someTextBox;

            SomeForm()
            {
                someTextBox.TextChanged += HandleTextBox;
                mbe.DataChanged += HandleData;
            }

            void HandleTextBox(Object sender, EventArgs e)
            {
                mbe.Data = ((TextBox)sender).Text;
            }

            void HandleData(Object sender, EventArgs e)
            {
                someTextBox.Text = ((MyBackEndClass)sender).Data;
            }
        }

    The problem is that changing the TextBox changes the data value in the backend, which causes the textbox to change, and so on; that runs forever. Is there a better design pattern (other than resorting to a nasty boolean flag) that handles this case correctly? EDIT: To be clear, in the real design the backend class is used to synchronize changes between multiple forms, therefore I can't just use the SomeTextBox.Text property directly. Billy3
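
    One standard break for the cycle, sketched: make each setter a no-op when the value is already current, so the second round-trip stops the recursion.

        // Hypothetical sketch: guard both directions with an equality check.
        public string Data
        {
            get { return _Data; }
            set
            {
                if (_Data == value) return;   // no change, no event, no loop
                _Data = value;
                var handler = DataChanged;
                if (handler != null) handler(this, EventArgs.Empty);
            }
        }

        void HandleData(Object sender, EventArgs e)
        {
            var data = ((MyBackEndClass)sender).Data;
            if (someTextBox.Text != data) someTextBox.Text = data;
        }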

    Read the article

  • How can I compare the performance of log() and fp division in C++?

    - by Ventzi Zhechev
    Hi, I'm using a log-based class in C++ to store very small floating-point values (as the values otherwise go beyond the range of double). As I'm performing a large number of multiplications, this has the added benefit of converting the multiplications to sums. However, at a certain point in my algorithm I need to divide a standard double value by an integer value and then do a *= to a log-based value. I have overloaded the *= operator for my log-based class, and the right-hand-side value is first converted to a log-based value by running log() and then added to the left-hand-side value. Thus the operations actually performed are a floating-point division, log(), and a floating-point addition. My question is whether it would be faster to first convert the denominator to a log-based value, which would replace the floating-point division with a floating-point subtraction, yielding the following chain of operations: twice log(), a floating-point subtraction, and a floating-point addition. In the end, this boils down to whether floating-point division is faster or slower than log(). I suspect that a common answer would be that this is compiler- and architecture-dependent, so I'll say that I use gcc 4.2 from Apple on darwin 10.3.0. Still, I hope to get an answer with a general remark on the speed of these two operators and/or an idea of how to measure the difference myself, as there might be more going on here, e.g. executing the constructors that do the type conversion etc. Cheers!
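
    On measuring it yourself: a minimal timing sketch (the volatile accumulator keeps the optimizer from deleting the work; absolute numbers will be machine-specific, and for the full picture you would time your actual operator*= instead):

        #include <cmath>
        #include <cstdio>
        #include <ctime>

        // Hypothetical microbenchmark: division vs. log() over the same inputs.
        int main()
        {
            const int N = 10000000;
            volatile double sink = 0.0;

            clock_t t0 = clock();
            for (int i = 1; i <= N; ++i)
                sink += 1.0 / i;
            clock_t t1 = clock();
            for (int i = 1; i <= N; ++i)
                sink += std::log((double)i);
            clock_t t2 = clock();

            printf("div: %.2fs  log: %.2fs\n",
                   (double)(t1 - t0) / CLOCKS_PER_SEC,
                   (double)(t2 - t1) / CLOCKS_PER_SEC);
            return 0;
        }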

    Read the article

  • Asynchronous SQL Operations

    - by Paul Hatcherian
    I've got a problem I'm not sure how best to solve. I have an application which updates a database in response to ad hoc requests. One request in particular is quite common. The request is an update that by itself is quite simple, but has some complex preconditions. For this request, the business layer first requests a set of data from the data layer. The business logic layer evaluates the data from the database and the parameters from the request; from this, the action to be performed is determined and the request's response message(s) are created. The business layer then executes the actual update command that is the purpose of the request. This last step is the problem: this command is dependent on the state of the database, which might have changed since the business logic ran. Locking down the data read in this operation across several round-trips to the database doesn't seem like a good idea either. Is there a 'best-practice' way to accomplish something like this? Thanks!
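
    One common pattern for this shape of problem, sketched below: optimistic concurrency. Re-assert the preconditions inside the update statement itself and treat zero affected rows as "state changed, re-evaluate". Table, column and status names are hypothetical; @@ROWCOUNT is SQL Server syntax:

        -- The update only lands if the precondition still holds.
        UPDATE Orders
        SET    Status = 'Shipped'
        WHERE  OrderId = @OrderId
          AND  Status  = 'Ready';    -- precondition re-checked atomically

        IF @@ROWCOUNT = 0
        BEGIN
            -- State moved underneath us: reload, re-evaluate, retry or report.
            RAISERROR('Order state changed; please retry.', 16, 1);
        END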

    Read the article

  • Finding patterns of failure in a Unit Test

    - by Pekka
    I'm new to unit testing, and I'm only getting into the routine of building test suites. I have what is going to be a rather large project, and I want to build tests for it from the start. I'm trying to figure out general strategies and patterns for building test suites. When you look at a class, many tests come to you obviously, due to the nature of the class. Say, for a "user account" class with basic CRUD operations, related to a database table, we will want to test the CRUD itself:

      - creating an object and seeing whether it exists
      - querying its properties
      - changing some properties
      - changing some properties to incorrect values
      - and deleting it again

    As for how to break things, there are "fail" tests common to most CRUD classes, like:

      - invalid input data types
      - a number as the ID key that exceeds the range of the chosen data type
      - input in an incorrect character encoding
      - input that is too long

    And so on and so on. For a unit test concerned with file operations, the list of "breaking things" could be:

      - invalid characters in the file name
      - file name too long
      - file name uses an incorrect protocol or path

    I'm pretty sure similar patterns, applicable beyond the unit test one is currently working on, can be found for most units being tested. Now my questions: Am I correct in seeing such "breaking patterns"? Or am I getting something completely wrong about unit testing, such that if I did it right, this wouldn't be an issue at all? Is unit testing as a process of finding as many ways to break the unit as possible the right way to go? If I am correct: are there existing definitions, lists, or cheat sheets for such patterns? Are there any provisions (mainly in PHPUnit, as that's the framework I'm working in) to automate such patterns? Is there any assistance, in the form of checklists or software, to aid in writing complete tests?
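
    On the PHPUnit question: data providers are the built-in provision for running one "breaking pattern" across many inputs. A minimal sketch (the class under test is hypothetical):

        <?php
        // Hypothetical sketch: one failure pattern, many inputs.
        class UserAccountTest extends PHPUnit_Framework_TestCase
        {
            public function invalidIdProvider()
            {
                return array(
                    array('not-a-number'),
                    array(-1),
                    array(PHP_INT_MAX),     // exceeds the column's range
                    array("\xC3\x28"),      // invalid UTF-8 sequence
                );
            }

            /**
             * @dataProvider invalidIdProvider
             * @expectedException InvalidArgumentException
             */
            public function testLoadRejectsInvalidIds($id)
            {
                $account = new UserAccount();
                $account->load($id);
            }
        }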

    Read the article

  • Help regarding multi-threading in MFC, please help me friends!

    - by kiddo
    Hello all. In my application there is a small function which reads files to get some information; the number of files would be at least 50. So I thought of implementing threading: say, if the user gives 50 files, I want to split them as 5 * 10, creating 5 threads so that each thread handles 10 files, which should speed up the process. And as you can see from the code below, some variables are shared. I have read some articles about threading, and I am aware that only one thread should access a variable/control at a time (CCriticalSection can be used for that). As a beginner, I am finding it hard to implement what I have learned about threading. Could somebody please give me some ideas for the code shown below? Thanks in advance. The file-read function:

        void CMyClass::GetWorkFilesInfo(CStringArray& dataFilesArray,
                                        CString* dataFilesB,
                                        int* check,
                                        DWORD noOfFiles,
                                        LPWSTR path)
        {
            CString cFilePath;
            int cIndex = 0;
            int exceptionInd = 0;
            wchar_t** filesForWork = new wchar_t*[noOfFiles];
            int tempCheck;
            int localIndex = 0;

            for(int index = 0; index < noOfFiles; index++)
            {
                tempCheck = *(check + index);
                if(tempCheck == NOCHECKBOX)
                {
                    *(filesForWork + cIndex) = new TCHAR[MAX_PATH];
                    wcscpy(*(filesForWork + cIndex), *(dataFilesB + index));
                    cIndex++;
                }
                else // CHECKED or UNCHECKED
                {
                    dataFilesArray.Add(*(dataFilesB + index));
                    *(check + localIndex) = *(check + index);
                    localIndex++;
                }
            }

            WorkFiles(&cFilePath, dataFilesArray, filesForWork, path, cIndex);
            dataFilesArray.Add(cFilePath);
            *(check + localIndex) = CHECKED;
        }
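
    A minimal worker-thread sketch for the partitioning described above, assuming MFC's AfxBeginThread. Shared state is avoided by design: each worker gets its own slice of the file list, so no CCriticalSection is needed until results are merged (struct and function names are hypothetical):

        // Hypothetical sketch: each worker owns a disjoint slice of the files.
        struct WorkerParams
        {
            wchar_t** files;   // first file of this worker's slice
            int       count;   // number of files in the slice
            HANDLE    done;    // event signaled when this worker finishes
        };

        UINT WorkerProc(LPVOID pParam)
        {
            WorkerParams* p = (WorkerParams*)pParam;
            for (int i = 0; i < p->count; ++i)
            {
                // read p->files[i] here; no shared variables are touched
            }
            SetEvent(p->done);
            return 0;
        }

        // Launching 5 workers over 50 files (10 each), then waiting:
        //   params[t].done = CreateEvent(NULL, TRUE, FALSE, NULL);
        //   AfxBeginThread(WorkerProc, &params[t]);
        //   ...
        //   WaitForMultipleObjects(5, doneHandles, TRUE, INFINITE);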

    Read the article

  • Applying powershell outside IT Management.

    - by Tormod
    Hi. We have a flexible process control system with which automation engineers configure large applications comprising thousands of small logical units that are parameterized and integrated into the control flow. There are many tasks that are repetitive at the granular level, and a multitude of proprietary productivity tools have been made to meet this demand. We have different business segments, and the automation engineers vary across the board in skill sets and interests. Fancy GUI and usability versus flexibility is a common discussion. At first glance, PowerShell seems to be a sensible platform on which to implement such tooling, and it would also be an advantageous cross-over skill for managing the IT aspects of the system setup and deployment as a whole. This should give the script-savvy their desired flexibility (they are already a scripting crowd), and the GUI-dependent could still get their desired GUI underpinned by PowerShell. But I can't seem to find many people or groups who have tried to use the scriptability and object passing of PowerShell extensively to accommodate a heterogeneous user community outside the realm of IT management. Does anybody have any tips or words of caution? Am I missing something obvious as to why this shouldn't be done? Shouldn't PowerShell be taking over the world? ;-)

    Read the article
