Search Results

Search found 6880 results on 276 pages for 'argument dependent lookup'.

Page 215 of 276

  • Is it possible to return a list of numbers from a Sybase function?

    - by ps_rs4
    I'm trying to overcome a very serious performance issue in which Sybase refuses to use the primary key index on a large table because the required key field is supplied indirectly through another table. In other words,

        SELECT ... FROM BIGTABLE WHERE KFIELD = 123

    runs in milliseconds, but

        SELECT ... FROM BIGTABLE, LTLTBL
        WHERE KFIELD = LTLTBL.LOOKUP AND LTLTBL.UNIQUEID = 'STRINGREPOF123'

    takes 30-40 seconds. I've managed to work around this first problem by using a function that basically lets me do

        SELECT ... FROM BIGTABLE WHERE KFIELD = MYFUNC('STRINGREPOF123')

    which also runs in milliseconds. The problem, however, is that this approach only works when MYFUNC returns a single value, and I have some cases where it may return 2 or 3 values. I know that

        SELECT ... FROM BIGTABLE WHERE KFIELD IN (123, 456, 789)

    also returns in milliseconds, so I'd like to have a function that returns a list of possible values rather than just a single one - is this possible? Sadly the application is running on Sybase ASA 9. Yes, I know it is old and is scheduled to be refreshed, but there's nothing I can do about that now, so I need logic that will work with this version of the DB. Thanks in advance for any assistance on this matter.
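    One hedged possibility, sketched below: SQL Anywhere stored procedures can declare a RESULT set, and in the releases I've used they can be selected from like a table, which would let the multi-valued lookup feed an IN list. Every name except the ones quoted above is illustrative, and both the syntax and the resulting query plan are worth confirming on an actual ASA 9 install before relying on it.

        -- Sketch only: assumes ASA 9 accepts a procedure call inside a derived table.
        CREATE PROCEDURE my_keys(IN str_id VARCHAR(40))
        RESULT (kfield INTEGER)
        BEGIN
            SELECT LOOKUP FROM LTLTBL WHERE UNIQUEID = str_id;
        END;

        SELECT *
          FROM BIGTABLE
         WHERE KFIELD IN (SELECT kfield FROM my_keys('STRINGREPOF123'));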

    Read the article

  • Javascript, IE, Strings, and Performance problems

    - by Infinity
    Hey guys, so we have this product, and it's really slow in IE. We've already applied a lot of the practices advised by the IE team themselves (like this, and this), and we try to sacrifice clean code for performance in the critical parts like DOM manipulation. However, as you can see in this IE profiler screenshot, just "String" is the biggest offender - almost 750ms of exclusive time. Does this mean IE is spending 750ms just instantiating Strings? I also read this on the Opera dev blog: "A build script can remove whitespace, comments, replace strings with Array lookups (to avoid MSIE creating a string object for every single instance of a string — even in conditions)" - but there is no more info regarding this. Can anyone clarify? It seems like IE has to create a full String instance every time you have " " in your code, which could explain this, but I don't know what the array-lookup optimization would look like. BTW, we don't really do much string concatenation anywhere in the code. The library we use is MooTools 1.2.4. Any suggestions will be appreciated! Thx
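    For what the Opera note describes, here is a minimal illustrative sketch of the build-time transform (function names invented): every repeated string literal is hoisted into one array and referenced by index, so the literal is materialised once rather than at each occurrence. Whether it actually helps your IE profile is something only measurement will tell.

        // Before: the literal appears (and may be re-created) at every use site.
        function toggleBefore(el) {
            if (el.className === "active") { el.className = "inactive"; }
        }

        // After a build-script pass: literals hoisted into one array, indexed elsewhere.
        var S = ["active", "inactive"];
        function toggleAfter(el) {
            if (el.className === S[0]) { el.className = S[1]; }
        }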

    Read the article

  • Is it possible to tie nested generics?

    - by Michael Deardeuff
    Is it possible to tie nested generics/captures together? I often have the problem of having a Map lookup of class to genericized item of said class. In concrete terms I want something like this (no, T is not declared anywhere):

        private Map<Class<T>, ServiceLoader<T>> loaders = Maps.newHashMap();

    In short, I want loaders.put/get to have semantics something like these:

        <T> ServiceLoader<T> get(Class<T> klass) {...}
        <T> void put(Class<T> klass, ServiceLoader<T> loader) {...}

    Is the following the best I can do? Do I have to live with the inevitable @SuppressWarnings("unchecked") somewhere down the line?

        private Map<Class<?>, ServiceLoader<?>> loaders = Maps.newHashMap();
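    A sketch of the usual workaround (the typesafe heterogeneous container pattern from Effective Java): a free-floating T cannot be declared on a field, but the wildcard map can be hidden behind typed accessors so the single unchecked cast is confined to one audited spot. Plain HashMap stands in for Guava's Maps.newHashMap() here.

        import java.util.HashMap;
        import java.util.Map;
        import java.util.ServiceLoader;

        final class ServiceLoaders {
            private final Map<Class<?>, ServiceLoader<?>> loaders =
                    new HashMap<Class<?>, ServiceLoader<?>>();

            <T> void put(Class<T> klass, ServiceLoader<T> loader) {
                loaders.put(klass, loader);
            }

            @SuppressWarnings("unchecked") // safe: put() is the only writer and keeps key and value in agreement
            <T> ServiceLoader<T> get(Class<T> klass) {
                return (ServiceLoader<T>) loaders.get(klass);
            }
        }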

    Read the article

  • C++ Virtual Methods for Class-Specific Attributes or External Structure

    - by acanaday
    I have a set of classes which are all derived from a common base class, and I want to use these classes polymorphically. The interface defines a set of getter methods whose return values are constant across a given derived class, but vary from one derived class to another. For example:

        enum AVal { A_VAL_ONE, A_VAL_TWO, A_VAL_THREE };
        enum BVal { B_VAL_ONE, B_VAL_TWO, B_VAL_THREE };

        class Base {
            //...
            virtual AVal getAVal() const = 0;
            virtual BVal getBVal() const = 0;
            //...
        };

        class One : public Base {
            //...
            AVal getAVal() const { return A_VAL_ONE; }
            BVal getBVal() const { return B_VAL_ONE; }
            //...
        };

        class Two : public Base {
            //...
            AVal getAVal() const { return A_VAL_TWO; }
            BVal getBVal() const { return B_VAL_TWO; }
            //...
        };

    and so on. Is this a common way of doing things? If performance is an important consideration, would I be better off pulling the attributes out into an external structure, e.g.:

        struct Vals {
            AVal a_val;
            BVal b_val;
        };

    storing a Vals* in each instance, and rewriting Base as follows?

        class Base {
            //...
        public:
            AVal getAVal() const { return _vals->a_val; }
            BVal getBVal() const { return _vals->b_val; }
            //...
        private:
            Vals* _vals;
        };

    Is the extra dereference essentially the same cost as the vtable lookup? What is the established idiom for this type of situation? Are both of these solutions dumb? Any insights are greatly appreciated.
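    For reference, a hedged sketch of the second option written so that each derived class shares one static Vals entry and the getters become non-virtual (and therefore inlinable). Whether this actually beats two virtual calls is a profiler question; a dereference into a cold Vals block can cost as much as the vtable hop.

        enum AVal { A_VAL_ONE, A_VAL_TWO, A_VAL_THREE };
        enum BVal { B_VAL_ONE, B_VAL_TWO, B_VAL_THREE };

        struct Vals { AVal a_val; BVal b_val; };

        class Base {
        public:
            explicit Base(const Vals& vals) : _vals(vals) {}
            AVal getAVal() const { return _vals.a_val; }   // non-virtual, inlinable
            BVal getBVal() const { return _vals.b_val; }
        private:
            const Vals& _vals;   // refers to a per-class static, so no extra allocation
        };

        class One : public Base {
        public:
            One() : Base(vals_) {}
        private:
            static const Vals vals_;
        };
        const Vals One::vals_ = { A_VAL_ONE, B_VAL_ONE };

        class Two : public Base {
        public:
            Two() : Base(vals_) {}
        private:
            static const Vals vals_;
        };
        const Vals Two::vals_ = { A_VAL_TWO, B_VAL_TWO };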

    Read the article

  • Computation overhead in C# - Using getters/setters vs. modifying arrays directly and casting speeds

    - by Jeffrey Kern
    I was going to write a long-winded post, but I'll boil it down here: I'm trying to emulate the graphical old-school style of the NES via XNA. However, my FPS is slow when trying to modify 65K pixels per frame. If I just loop through all 65K pixels and set them to some arbitrary color, I get 64 FPS. With the code I wrote to look up which colors should be placed where, I get 1 FPS. I think it is because of my object-oriented code. Right now, I have things divided into about six classes, with getters/setters. I'm guessing that I'm making at least 360K getter calls per frame, which I think is a lot of overhead. Each class contains 1D and/or 2D arrays of custom enumerations, ints, Colors, Vector2Ds, or bytes. What if I combined all of the classes into just one and accessed the contents of each array directly? The code would look a mess and ditch the concepts of object-oriented coding, but the speed might be much faster. I'm also not concerned about access violations, as any attempts to get/set the data in the arrays will be done in blocks. E.g., all writing to arrays will take place before any data is read from them. As for casting: as I said, I'm using custom enumerations, ints, Colors, Vector2Ds, and bytes. Which data types are fastest to use and access in the .NET Framework, XNA, Xbox, and C#? I think that constant casting might be a cause of slowdown here. Also, instead of using math to figure out which indexes data should be placed in, I've used precomputed lookup tables so I don't have to do constant multiplication, addition, subtraction, or division per frame. :)

    Read the article

  • Computing, storing, and retrieving values to and from an N-Dimensional matrix

    - by Adam S
    This question is probably quite different from what you are used to reading here - I hope it can provide a fun challenge. Essentially I have an algorithm that uses 5 (or more) variables to compute a single value, called outcome. Now I have to implement this algorithm on an embedded device which has no memory limitations, but has very harsh processing constraints. Because of this, I would like to run a calculation engine which computes outcome for, say, 20 different values of each variable and stores this information in a file. You may think of this as a 5 (or more)-dimensional matrix or array, each dimension being 20 entries long. In any modern language, filling this array is as simple as 5 (or more) nested for loops. The tricky part is that I need to dump these values into a file that can then be placed onto the embedded device so that the device can use it as a lookup table. The questions, then, are:

      1. What format(s) might be acceptable for storing the data?
      2. What programs (MATLAB, C#, etc.) might be best suited to compute the data?
      3. C# must be used to import the data on the device - is this possible given your answer to #1?
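    As one hedged illustration of #3 (the file name, element type, and row-major layout are assumptions, not requirements): a flat binary file of doubles, written in row-major order by whatever generates the table, can be read back in C# with a BinaryReader and addressed with plain index arithmetic, which avoids any nested-array bookkeeping on the device.

        using System.IO;

        class LookupTable
        {
            const int N = 20;   // entries per dimension (assumed)
            readonly double[] data = new double[N * N * N * N * N];

            public LookupTable(string path)
            {
                using (var reader = new BinaryReader(File.OpenRead(path)))
                {
                    for (int i = 0; i < data.Length; i++)
                        data[i] = reader.ReadDouble();   // row-major, little-endian
                }
            }

            // Index arithmetic replaces a 5-level nested array.
            public double Get(int a, int b, int c, int d, int e)
            {
                return data[(((a * N + b) * N + c) * N + d) * N + e];
            }
        }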

    Read the article

  • Problem with making a simple JS XmlHttpRequest call

    - by recipriversexclusion
    I have to create a very simple client that makes GET and POST calls to our server and parses the returned XML. I am writing this in JavaScript; the problem is I don't know how to program in JS (I started to look into it just this morning)! As an initial test, I am trying to ping the Twitter API. Here's the function that gets called when the user enters the URL http://api.twitter.com/1/users/lookup.xml and hits the submit button:

        function doRequest() {
            var req_url, req_type, body;
            req_url = document.getElementById('server_url').value;
            req_type = document.getElementById('request_type').value;
            alert("Connecting to url: " + req_url + " with HTTP method: " + req_type);
            req = new XMLHttpRequest();
            req.open(req_type, req_url, false, "username", "passwd"); // synchronous conn
            req.onreadystatechange = function() {
                if (req.readyState == 4) {
                    alert(req.status);
                }
            }
            req.send(null);
        }

    When I run this in Firefox, I get an "Access to restricted URI denied" error (code 1012) in Firebug. Stuff I googled suggested that this was an FF-specific problem, so I switched to Chrome. Over there, the second alert comes up, but displays 0 as the HTTP status code, which I found weird. Can anyone spot what the problem is? People say this stuff is easier to use with jQuery, but learning that on top of JS syntax is a bit too much right now.

    Read the article

  • Looking for: nosql (redis/mongodb) based event logging for Django

    - by Parand
    I'm looking for a flexible event-logging platform for Django that can store both pre-defined events (username, IP address) and ad hoc events generated as needed by any piece of code. I'm currently doing some of this with log files, but that ends up requiring various analysis scripts and the data lands in a DB anyway, so I'm considering throwing it straight into a NoSQL store such as MongoDB or Redis. The idea is to be able to easily query, for example, which IP address the user most commonly comes from, whether the user has ever performed some action, the outcome of a specific event, etc. Is there something that already does this? If not, I'm thinking of this: the "event" is a dictionary attached to the request object. Middleware fills in various pieces (username, IP, SQL timing), and code fills in the rest as needed. After the request is served, a post-request hook drops the event into MongoDB/Redis, normalizing various fields (e.g. incrementing the username:ip-address counter) and dropping the rest in as is. Words of wisdom / pointers to code that does some or all of this would be appreciated.
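    If nothing off the shelf fits, a minimal sketch of the middleware idea against MongoDB might look like the following (old-style Django middleware; the pymongo 3.x+ API, database name, and collection name are assumptions to adapt):

        from pymongo import MongoClient

        _events = MongoClient()["eventlog"]["events"]   # connection details are illustrative

        class EventLogMiddleware(object):
            """Builds request.event on the way in, persists it on the way out."""

            def process_request(self, request):
                request.event = {
                    "path": request.path,
                    "ip": request.META.get("REMOTE_ADDR"),
                    "user": getattr(getattr(request, "user", None), "username", None),
                }

            def process_response(self, request, response):
                event = getattr(request, "event", None)
                if event is not None:
                    event["status"] = response.status_code
                    _events.insert_one(event)   # normalization / counter updates would go here
                return response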

    Read the article

  • Stored procedure for generic MERGE

    - by GilliVilla
    I have a set of 10 tables in a database (DB1), and there are 10 tables with exactly the same schema in another database (DB2) on the same SQL Server 2008 R2 server. The 10 tables in DB1 are frequently updated with data. I intend to write a stored procedure that runs once every day to synchronize the 10 tables in DB1 with DB2, using the MERGE statement. My aim is to make this as generic and parameterized as possible - that is, to accommodate more tables down the line and to accommodate different source and target database names. Definitely no hard coding is intended. This is my algorithm so far:

      1. Take the database names as parameters.
      2. Have the first query within the stored procedure return the names of the 10 tables from a lookup table (this can be 10, 20 or whatever).
      3. Run a generic MERGE statement that does the sync for each of those tables (based on the primary key?).

    Step 3 is where I need more input. What is the best way to achieve this stored procedure? SQL syntax would be helpful.

    Read the article

  • What kind of data do I pass into a Django Model.save() method?

    - by poswald
    Let's say that we are getting POSTed a form like this in Django:

        rate=10
        items=[23,12,31,52,83,34]

    The items are primary keys of an Item model. I have a bunch of business logic that will run and create more items based on this data, the results of some DB lookups, and some business rules. I want to put that logic into a save signal or an overridden Model.save() method of another model (let's call it Inventory). The business logic will run when I create a new Inventory object using this form data. Inventory will look like this:

        class Inventory(models.Model):
            picked_items = models.ManyToManyField(Item, related_name="items_picked_set")
            calculated_items = models.ManyToManyField(Item, related_name="items_calculated_set")
            rate = models.DecimalField()
            ... other fields here ...

    New calculated_items will be created based on the passed-in items, which will be stored as picked_items. My question is this: is it better for the save() method on this model to accept:

      1. the request object (I don't really like this coupling),
      2. the form data as arguments or kwargs (a list of primary keys and the other form fields),
      3. a list of Items (the calling form or view looks up the Items, builds the list, and passes in the other form fields), or
      4. some other approach?

    I know this is a bit subjective, but I was wondering what the general idea is. I've looked through a lot of code but I'm having a hard time finding a pattern I like.

    Read the article

  • Hibernate 3.5.1 - can I just drop in EHCache 2.0.1?

    - by caerphilly
    I'm using Hibernate 3.5.1, which comes with EHCache 1.5 bundled. If I want to use the latest EHCache release (2.0.1), is it just a matter of removing ehcache-1.5.jar from my project and replacing it with ehcache-core-2.0.1.jar? Any issues to be aware of? Also - is a cache "region" in the Hibernate mapping file the same as a cache "name" in the ehcache configuration XML? What I want to do is define two named cache regions - one for read-only reference entities that won't change (lookup lists etc.), and one for all other entities. So in ehcache.xml I want to define two elements:

        <cache name="readonly"> ... </cache>
        <cache name="mutable"> ... </cache>

    And then in my Hibernate mapping files I will specify the cache to be used for each entity:

        <hibernate-mapping>
            <class name="lookuplist">
                <cache region="readonly" usage="read-only"/>
                <property> ... </property>
            </class>
        </hibernate-mapping>

    Will that work? Some of the documentation seems to imply that a separate region/cache gets created for each mapped class... Thanks.

    Read the article

  • Rails: RESTful Find, Initialize, or Create

    - by Andrew
    I have an app that has Cities in it. I'm looking for some suggestions on how to RESTfully structure a controller so that I can look up, initialize, and create city records via AJAX requests. For instance, given a text field city_name:

      1. A user enters the name of a city, like "Paris, France".
      2. The app checks this location to see if such a city is in the database already.
      3. If there is, it returns the city object.
      4. If there is not, it returns a new record initialized with the name "Paris" and the country "France", and prompts the user to confirm they want to add this city to the database.
      5. If the user says "Yes", the record is saved; if not, the record is discarded and the form is cleared.

    Now, my first approach was to change the create action to use find_or_create, so that an AJAX POST to cities_path would either return the existing city or create it and return it. That works OK. However, it would be better to set up controller actions that take a string input, then find, or else initialize and return, the record, and only create it once the user confirms the generated record is correct. The ideal scenario would put this all in one action so AJAX requests can go to that URL, the server responds with JSON objects, and JavaScript handles things from there. I'd like to keep all the user-interaction logic client side and also minimize the number of requests it takes to achieve this. Any suggestions on the cleanest, most RESTful way to accomplish this?

    Read the article

  • Why do I get this exception? ("An item with the same key has already been added.")

    - by Alan
    @knittel: NewSellerID is the result of a lookup on tblSellers. These tables (tblSellerListings and tblSellers) are not "officially" joined with a foreign key relationship, either in the model or in the database, but I want some referential integrity maintained for the future. So my issue remains: why do I get the exception ("An item with the same key has already been added.") with this code, unless I begin each iteration of the foreach loop with a new ObjectContext and end it with SaveChanges, which I think will hurt performance? Also, could you tell me why an ORCSolutionsDataService.tblSellerListings (ADO.NET Data Services/WCF) object is not IDisposable, unlike LINQ to Entities?

        // Add listings to previous seller
        int NewSellerID = 0;

        // Look up existing Seller key using SellerUniqueEBAYID
        var qryCurrentSeller = from s in service.tblSellers
                               where s.SellerEBAYUserID == SellerUserID
                               select s;
        foreach (var s in qryCurrentSeller)
            NewSellerID = s.SellerID;

        // Save the selected listings for this seller
        foreach (DataGridViewRow dgr in dgvRows)
        {
            ORCSolutionsDataService.tblSellerListings NewSellerListing =
                new ORCSolutionsDataService.tblSellerListings();
            NewSellerListing.ItemID = dgr.Cells["txtSellerItemID"].Value.ToString();
            NewSellerListing.Title = dgr.Cells["txtSellerItemTitle"].Value.ToString();
            NewSellerListing.CurrentPrice = Convert.ToDecimal(dgr.Cells["txtSellerItemPrice"].Value);
            NewSellerListing.QuantitySold = Convert.ToInt32(dgr.Cells["txtSellerItemSold"].Value);
            NewSellerListing.EndTime = Convert.ToDateTime(dgr.Cells["txtSellerItemEnds"].Value);
            NewSellerListing.CategoryName = dgr.Cells["txtSellerItemCategory"].Value.ToString();
            NewSellerListing.ExtendedPrice = Convert.ToDecimal(dgr.Cells["txtExtendedReceipts"].Value);
            NewSellerListing.RetrievedDtime = Convert.ToDateTime(dtSellerDataRetrieved.ToString());
            NewSellerListing.SellerID = NewSellerID;
            service.AddTotblSellerListings(NewSellerListing);
        }
        service.SaveChanges();
    }
    catch (Exception ex)
    {
        MessageBox.Show("Unable to add a new case. Exception: " + ex.Message);
    }

    Read the article

  • Large Switch statements: Bad OOP?

    - by Mystere Man
    I've always been of the opinion that large switch statements are a symptom of bad OOP design. In the past, I've read articles that discuss this topic, and they have provided alternative OOP-based approaches, typically based on polymorphism to instantiate the right object to handle the case. I'm now in a situation that has a monstrous switch statement based on a stream of data from a TCP socket, in which the protocol consists basically of a newline-terminated command, followed by lines of data, followed by an end marker. The command can be one of 100 different commands, so I'd like to find a way to reduce this monster switch statement to something more manageable. I've done some googling to find the solutions I recall, but sadly Google has become a wasteland of irrelevant results for many kinds of queries these days. Are there any patterns for this sort of problem? Any suggestions on possible implementations? One thought I had was to use a dictionary lookup, matching the command text to the object type to instantiate. This has the nice advantage of merely creating a new object and inserting a new command/type in the table for any new commands. However, it also has the problem of type explosion: I now need 100 new classes, plus I have to find a way to interface them cleanly to the data model. Is the "one true switch statement" really the way to go? I'd appreciate your thoughts, opinions, or comments.
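    A hedged sketch of the dictionary idea that sidesteps the 100-class explosion (C# is assumed here purely from the "dictionary" terminology; every name is illustrative): map command text to delegates rather than to types, so related commands can share handler methods on an existing class instead of each needing a class of its own.

        using System;
        using System.Collections.Generic;

        delegate void CommandHandler(IList<string> dataLines);

        class CommandDispatcher
        {
            readonly Dictionary<string, CommandHandler> handlers =
                new Dictionary<string, CommandHandler>(StringComparer.OrdinalIgnoreCase);

            public void Register(string command, CommandHandler handler)
            {
                handlers[command] = handler;
            }

            public void Dispatch(string command, IList<string> dataLines)
            {
                CommandHandler handler;
                if (handlers.TryGetValue(command, out handler))
                    handler(dataLines);
                else
                    throw new InvalidOperationException("Unknown command: " + command);
            }
        }

        // Usage sketch: registration replaces the switch, and several commands can
        // point at the same method on the data model.
        //   dispatcher.Register("LOGIN", session.HandleLogin);
        //   dispatcher.Register("LOGOUT", session.HandleLogout);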

    Read the article

  • Scope of the C++ using directive

    - by ThomasMcLeod
    From section 7.3.4.2 of the C++11 standard: "A using-directive specifies that the names in the nominated namespace can be used in the scope in which the using-directive appears after the using-directive. During unqualified name lookup (3.4.1), the names appear as if they were declared in the nearest enclosing namespace which contains both the using-directive and the nominated namespace. [ Note: In this context, “contains” means “contains directly or indirectly”. —end note ]" What do the second and third sentences mean exactly? Please give an example. Here is the code I am attempting to understand:

        #include <iostream>

        namespace A {
            int i = 7;
        }

        namespace B {
            using namespace A;
            int i = i + 11;
        }

        int main(int argc, char * argv[])
        {
            std::cout << A::i << " " << B::i << std::endl;
            return 0;
        }

    It prints "7 7" and not "7 18" as I would expect. Sorry for the typo: the program actually prints "7 11".

    Read the article

  • Sort queryset by a generic foreign key (django)?

    - by thornomad
    I am using Django's comment framework, which utilizes generic foreign keys. Question: how do I sort a given model's queryset by comment count, when the relation goes through a generic foreign key lookup? Reading the Django docs on the subject, it says one needs to calculate the counts without the aggregation API: "Django's database aggregation API doesn't work with a GenericRelation. [...] For now, if you need aggregates on generic relations, you'll need to calculate them without using the aggregation API." The only way I can think of, though, would be to iterate through my queryset, generate a list with content_type and object_id for each item, then run a second queryset on the Comment model filtering by this list of content_types and object_ids... sort the objects by the count, then re-create a new queryset in this order by pulling the content_object for each comment... This just seems wrong, and I'm not even sure how to pull it off. Ideas? Someone must have done this before. I found this post online but it requires handwriting SQL - is that really necessary?
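    A hedged sketch of the two-query approach without raw SQL, assuming the stock django.contrib.comments models (which store the target primary key as text in object_pk; newer projects use django_comments). It returns a sorted list rather than a queryset, which is usually fine for display:

        from django.contrib.comments.models import Comment
        from django.contrib.contenttypes.models import ContentType
        from django.db.models import Count

        def sort_by_comment_count(queryset):
            ct = ContentType.objects.get_for_model(queryset.model)
            counts = dict(
                Comment.objects.filter(content_type=ct)
                       .values_list('object_pk')
                       .annotate(n=Count('id'))
            )
            # object_pk is stored as text, so compare against str(pk)
            return sorted(queryset, key=lambda obj: counts.get(str(obj.pk), 0), reverse=True)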

    Read the article

  • Python lists/arrays: disable negative indexing wrap-around

    - by wim
    While I find the negative-number wraparound (i.e. A[-2] indexing the second-to-last element) extremely useful in many cases, I often come across use cases where it is more of an annoyance than a help, and I find myself wishing for an alternate syntax to use when I would rather disable that particular behaviour. Here is a canned 2D example (I have had the same peeve a few times with other data structures and other numbers of dimensions):

        import numpy as np

        A = np.random.randint(0, 2, (5, 10))

        def foo(i, j, r=2):
            '''sum of neighbours within r steps of A[i,j]'''
            return A[i-r:i+r+1, j-r:j+r+1].sum()

    In the slice above I would rather that any negative number be treated the same as None is, rather than wrapping to the other end of the array. Because of the wrapping, the otherwise nice implementation above gives incorrect results at boundary conditions and requires some sort of patch like:

        def ugly_foo(i, j, r=2):
            def thing(n):
                return None if n < 0 else n
            return A[thing(i-r):i+r+1, thing(j-r):j+r+1].sum()

    I have also tried zero-padding the array or list, but it is still inelegant (it requires adjusting the lookup indices accordingly) and inefficient (it requires copying the array). Am I missing some standard trick or elegant solution for slicing like this? I noticed that Python and numpy already handle the case where you specify too large a number nicely - that is, if the index is greater than the shape of the array, it behaves the same as if it were None.
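    For what it's worth, a minimal sketch of the usual answer: there is no built-in switch to disable the wrap-around, so the lower bound has to be clamped explicitly, but writing the clamp inline with max() keeps the slice readable without the helper function and without copying anything:

        import numpy as np

        A = np.random.randint(0, 2, (5, 10))

        def foo(i, j, r=2):
            '''Sum of neighbours within r steps of A[i, j], with no wrap-around.'''
            return A[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1].sum()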

    Read the article

  • How can I design a DB where the user can define the fields and types of a detail table in an M-D relationship?

    - by Simon
    My application has one table called 'events', and each event has approximately 30 standard fields, but also user-defined fields - which could have any name or type - in an 'eventdata' table. Users define these eventdata tables by specifying some number of fields (text/double/datetime/boolean) and the names of those fields. This 'eventdata' table can be different for each 'event'. My current approach is to create a lookup table for the definitions, so if I need to query all 'event' and 'eventdata' rows per record, I do so in a master-detail relationship using two queries (i.e. select * from events, then for each record in 'events', select * from 'some table'). Is there a better approach? I have implemented this so far, but most of my queries require two distinct calls to the DB - I cannot simply join my master 'events' table with a different 'eventdata' table for each record in 'events'. I guess my main question is: can I join my master table with different detail tables for each record? E.g.

        SELECT E.*, E.Tablename
        FROM events E
        LEFT JOIN 'E.tablename' T ON E._ID = T.ID

    If not, is there a better way to design my database, considering I have no idea how many user-defined fields there may be or what types they will be?

    Read the article

  • load data from grid row into (pop up) form for editing

    - by user1495457
    I read in Ext JS in Action (by J. Garcia) that if we have an instance of Ext.data.Record, we can use the form's loadRecord method to set the form's values. However, he does not give a working example of this (in his example, data is loaded into a form through a file called data.php). I have searched many forums and found the following entry helpful, as it gave me the idea of solving my problem with the form's loadRecord method: load data from grid to form. The code for my store and grid is as follows:

        var userstore = Ext.create('Ext.data.Store', {
            storeId: 'viewUsersStore',
            model: 'Configs',
            autoLoad: true,
            proxy: {
                type: 'ajax',
                url: '/user/getuserviewdata.castle',
                reader: { type: 'json', root: 'users' },
                listeners: {
                    exception: function (proxy, response, operation, eOpts) {
                        Ext.MessageBox.alert("Error", "Session has timed-out. Please re-login and try again.");
                    }
                }
            }
        });

        var grid = Ext.create('Ext.grid.Panel', {
            id: 'viewUsersGrid',
            title: 'List of all users',
            store: Ext.data.StoreManager.lookup('viewUsersStore'),
            columns: [
                { header: 'Username', dataIndex: 'username' },
                { header: 'Full Name', dataIndex: 'fullName' },
                { header: 'Company', dataIndex: 'companyName' },
                { header: 'Latest Time Login', dataIndex: 'lastLogin' },
                { header: 'Current Status', dataIndex: 'status' },
                {
                    header: 'Edit',
                    menuDisabled: true,
                    sortable: false,
                    xtype: 'actioncolumn',
                    width: 50,
                    items: [{
                        icon: '../../../Content/ext/img/icons/fam/user_edit.png',
                        tooltip: 'Edit user',
                        handler: function (grid, rowIndex, colIndex) {
                            var rec = userstore.getAt(rowIndex);
                            alert("Edit " + rec.get('username') + "?");
                            EditUser(rec.get('id'));
                        }
                    }]
                },
            ]
        });

        function EditUser(id) {
            // I think I need to have this code here - I don't think it's complete/correct though
            var formpanel = Ext.getCmp('CreateUserForm');
            formpanel.getForm().loadRecord(rec);
        }

    'CreateUserForm' is the ID of a form that already exists and which should appear when the user clicks the Edit icon. That pop-up form should then automatically be populated with the correct data from the grid row. However, my code is not working. I get an error at the line 'formpanel.getForm().loadRecord(rec)' - it says 'Microsoft JScript runtime error: 'undefined' is null or not an object'. Any tips on how to solve this?
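    For reference, a hedged sketch of the likely fix: inside EditUser the variable rec does not exist (only the id was passed in), which is exactly the kind of thing that produces the 'undefined' is null or not an object error. Passing the record itself through gives loadRecord something to load:

        // Sketch only: pass the Ext.data.Model record, not just its id.
        handler: function (grid, rowIndex, colIndex) {
            EditUser(userstore.getAt(rowIndex));
        }

        function EditUser(rec) {
            var formpanel = Ext.getCmp('CreateUserForm'); // must be the id of a form panel
            formpanel.show();                             // make the pop-up visible
            formpanel.getForm().loadRecord(rec);          // fills fields whose names match record fields
        }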

    Read the article

  • CodeIgniter -- unable to use an object

    - by Smandoli
    THE SUMMARY: When I call .../index.php/product, I receive:

        Fatal error: Call to a member function get_prod_single() on a non-object
        in /var/www/sparts/main/controllers/product.php on line 16

    The offending line 16 is:

        $data['pn_oem'] = $this->product_model->get_prod_single($product_id);

    Looks like I don't know how to make this a working object. Can you help me?

    THE CODE: In my /Models folder I have product_model.php:

        <?php
        class Product_model extends Model {

            function Product_model()
            {
                parent::Model();
            }

            function get_prod_single($product_id)
            {
                //This will be a DB lookup ...
                return 'foo'; //stub to get going
            }
        }
        ?>

    In my /controllers folder I have product.php:

        <?php
        class Product extends Controller {

            function Product()
            {
                parent::Controller();
            }

            function index()
            {
                $this->load->model('Product_model');
                $product_id = 113; // will get this dynamically
                $data['product_id'] = $product_id;
                $data['pn_oem'] = $this->product_model->get_prod_single($product_id);
                $this->load->view('prod_single', $data);
            }
        }
        ?>
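    A hedged guess at the cause, with a sketch: in the CodeIgniter versions I've used, $this->load->model('Product_model') attaches the instance under exactly that name, so $this->product_model (lower case) is undefined and the call fails with "non-object". Either match the capitalisation or load the model under an explicit alias. The sketch below replaces Product::index():

        // Sketch only: two equivalent fixes inside Product::index().
        function index()
        {
            // Option 1: use the same capitalisation the model was loaded under.
            $this->load->model('Product_model');
            $data['pn_oem'] = $this->Product_model->get_prod_single(113);

            // Option 2: load it under a lowercase alias and keep the original call.
            // $this->load->model('Product_model', 'product_model');
            // $data['pn_oem'] = $this->product_model->get_prod_single(113);

            $data['product_id'] = 113;
            $this->load->view('prod_single', $data);
        }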

    Read the article

  • Oracle Insert via Select from multiple tables where one table may not have a row

    - by Mikezx6r
    I have a number of code value tables that contain a code and a description with a Long id. I now want to create an entry for an Account Type that references a number of codes, so I have something like this:

        insert into account_type_standard
               (account_type_standard_id, tax_status_id, recipient_id)
        ( select account_type_standard_seq.nextval,
                 ts.tax_status_id,
                 r.recipient_id
            from tax_status ts,
                 recipient r
           where ts.tax_status_code = ?
             and r.recipient_code = ? )

    This retrieves the appropriate values from the tax_status and recipient tables if a match is found for their respective codes. Unfortunately, recipient_code is nullable, and therefore the ? substitution value could be null. Of course, the implicit join then doesn't return a row, so no row gets inserted into my table. I've tried using NVL on the ? and on r.recipient_id. I've tried to force an outer join on r.recipient_code = ? by adding (+), but it's not an explicit join, so Oracle still didn't add another row. Anyone know of a way of doing this? I can obviously modify the statement so that I look up recipient_id externally, use a ? instead of r.recipient_id, and don't select from the recipient table at all, but I'd prefer to do all this in one SQL statement.
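    One hedged possibility, sketched below: move the recipient lookup into a scalar subquery in the select list, so an unmatched or NULL code yields a NULL recipient_id instead of eliminating the row. This assumes recipient_code is unique when present, since a scalar subquery must return at most one row.

        -- Sketch only: scalar subquery instead of a join to the nullable lookup.
        insert into account_type_standard
               (account_type_standard_id, tax_status_id, recipient_id)
        select account_type_standard_seq.nextval,
               ts.tax_status_id,
               (select r.recipient_id
                  from recipient r
                 where r.recipient_code = ?)   -- NULL when ? is null or unmatched
          from tax_status ts
         where ts.tax_status_code = ?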

    Read the article

  • And now for a complete change of direction from C++ function pointers

    - by David
    I am building part of a simulator. We are building off of a legacy simulator but going in a different direction, incorporating live bits alongside the simulated bits. The piece I am working on has to route commands from the central controller to the various bits. In the legacy code there is a const array populated with an enumerated type: a command comes in, it is looked up in the table, then shipped off to a switch statement keyed by the enumerated type. The type enumeration has a choice VALID_BUT_NOT_SIMULATED, which is effectively a no-op from the point of view of the sim. I need to turn those no-ops into commands to actual other things [new simulated bits | live bits]. The new stuff and the live stuff have different interfaces than the old stuff [which makes me laugh about the shill job that it took to make it all happen, but that is a topic for a different discussion]. I like the array because it is a very apt description of the live thing this chunk is simulating [latching circuits by row and column]. I thought that I would try to replace the enumerated types in the array with pointers to functions and call them directly, in lieu of the lookup-plus-switch.
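    A hedged sketch of that replacement (every name below is invented): a const table of member-function pointers indexed by row and column, with a null entry playing the old VALID_BUT_NOT_SIMULATED role until a real handler is wired in.

        #include <cstddef>
        #include <iostream>

        struct Command { int payload; };

        class Router {
        public:
            typedef void (Router::*Handler)(const Command&);
            static const std::size_t ROWS = 2, COLS = 2;

            void route(std::size_t row, std::size_t col, const Command& cmd) {
                Handler h = table[row][col];
                if (h) (this->*h)(cmd);   // a null entry is the old no-op
            }

        private:
            void toSim(const Command& c)  { std::cout << "sim "  << c.payload << "\n"; }
            void toLive(const Command& c) { std::cout << "live " << c.payload << "\n"; }

            static const Handler table[ROWS][COLS];
        };

        const Router::Handler Router::table[Router::ROWS][Router::COLS] = {
            { &Router::toSim,  &Router::toLive },
            { &Router::toLive, 0 },   // 0 == formerly VALID_BUT_NOT_SIMULATED
        };

        int main() {
            Router r;
            Command c = { 42 };
            r.route(0, 1, c);   // prints "live 42"
            return 0;
        }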

    Read the article

  • cfdiv working in FF and Safari but does not show up in IE

    - by JS
    Within a form I have a button that launches a cfwindow and presents a search screen for the user to make a selection. Once the selection is made, the cfwindow closes and the selected content shows in the main page by being bound to a cfdiv. This all works fine in FF, but the cfdiv doesn't show at all in IE. In IE the cfwindow works and the select works, but then there is no bound page. I have tried setting bindonload and that made no difference (and I need it to be true if there is content that is pulled in via a query when it loads). All I have been able to find so far regarding this issue is setting bindonload to false and putting the cfdiv outside of the form, but that's not possible in my current design. Here is a snippet which is inside of a larger form (all necessary tags are cfimported):

        <td class="left" style="white-space:nowrap;">
            <cfoutput>#n#</cfoutput>.&nbsp;<cfinput type="button" value="Select" name="x#n#Select#i#"
                onClick="ColdFusion.Window.create('x#n#Select#i#', 'Exercise Lookup',
                    'xSelect/xSelect.cfm?xNameVar=x#n#S#i#&window=x#n#Select#i#&workout=workout#i#',
                    {x:100,y:100,height:500,width:720,modal:true,closable:true,draggable:true,resizable:true,center:true,initshow:true,minheight:200,minwidth:200 })" />
            &nbsp;
            <cfdiv id="x#n#S#i#" tagName="x#n#S#i#" bind="url:xSelect/x.cfm"></cfdiv>
            <cfinput type="hidden" name="x#n#s#i#" value="#n#">
        </td>

    Thanks for any help.

    Read the article

  • Should core application configuration be stored in the database, and if so, what should be done to secure it?

    - by Rl
    I'm writing an application around a lot of hierarchical data. Currently the hierarchy is fixed, but it's likely that new items will be added to the hierarchy in the future (please let them be leaves). My current application and database design is fairly generic, and nothing dealing with specific nodes in the hierarchy is hardcoded, with the exception of validation and lookup functions written to retrieve external data from each node's particular database. This pleases me from a design point of view, but I'm nervous at the realization that the entire application rests on a handful of records in the database. I'm also frustrated that I have to enforce certain aspects of data integrity with database triggers rather than with foreign key constraints (an example is where several different nodes in the hierarchy have their own proprietary IDs and I store them in a single column which, when coupled with the node ID, can be used to locate the foreign data). I'm starting to wonder whether it would have been more appropriate to simply hardcode these known nodes into the system so that it would be more "type safe" and less generic. How does one know when something should be hardcoded and when it should be a configuration item? Is it just a cost-benefit analysis of clarity/safety now vs. less work later, or am I missing some metric I should be using to decide? The steps I'm taking to protect these valuable configurations are to add triggers that prevent updates/deletes, and the database user that this application uses will only be able to manipulate data through stored procedures. What else can I do?

    Read the article

  • Spring MVC: Where to place validation, and how to validate entity references

    - by arrages
    Let's say I have the following command bean for creating a user:

        public class CreateUserCommand {
            private String userName;
            private String email;
            private Integer occupationId;
            private Integer countryId;
        }

    occupationId and countryId are drop-down selected values on the form and map to entities in the database (Occupation, Country). This command object is going to be fed to a service facade like so:

        userServiceFacade.createUser(CreateUserCommand command);

    The facade will construct a User entity to be sent to the actual service, so in the facade layer I will have to make several DAO calls to map all the lookup properties of the User entity. Based on this, what is the best strategy to validate that occupationId and countryId map to real entities? Where is the best place to perform this validation? There is the Spring Validator, but I am not sure that is the best place for this; for one, I am wary of that approach because validation is tied to the web tier, but also because I would need to make the DAO calls in the validator for validation and then make them again in the facade layer when the command-to-entity translation occurs. Is there anything I can do better? Thanks.
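    For reference, a minimal sketch of the Validator option being weighed here (the DAO interfaces, their findById methods, and the command getters are assumptions for illustration). Whether the duplicated lookups matter enough to push the check down into the facade is the real judgement call.

        import org.springframework.validation.Errors;
        import org.springframework.validation.Validator;

        interface OccupationDao { Object findById(Integer id); }   // assumed DAO
        interface CountryDao   { Object findById(Integer id); }    // assumed DAO

        public class CreateUserCommandValidator implements Validator {

            private final OccupationDao occupationDao;
            private final CountryDao countryDao;

            public CreateUserCommandValidator(OccupationDao occupationDao, CountryDao countryDao) {
                this.occupationDao = occupationDao;
                this.countryDao = countryDao;
            }

            public boolean supports(Class<?> clazz) {
                return CreateUserCommand.class.isAssignableFrom(clazz);
            }

            public void validate(Object target, Errors errors) {
                CreateUserCommand cmd = (CreateUserCommand) target;   // getters assumed on the command
                if (cmd.getOccupationId() == null || occupationDao.findById(cmd.getOccupationId()) == null) {
                    errors.rejectValue("occupationId", "occupation.invalid");
                }
                if (cmd.getCountryId() == null || countryDao.findById(cmd.getCountryId()) == null) {
                    errors.rejectValue("countryId", "country.invalid");
                }
            }
        }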

    Read the article
