Search Results

Search found 9706 results on 389 pages for 'aggregate functions'.

Page 80/389 | < Previous Page | 76 77 78 79 80 81 82 83 84 85 86 87  | Next Page >

  • NHibernate mapping with optimistic-lock="version" and dynamic-update="true" is generating invalid update SQL

    - by SteveBering
    I have an entity "Group" with an assigned ID which is added to an aggregate in order to persist it. This causes an issue because NHibernate can't tell whether it is new or existing. To remedy this, I changed the mapping so the Group entity uses optimistic locking on a SQL timestamp version column. That caused a new issue: Group has a bag of sub-objects, so when NHibernate flushes a new Group to the database, it first creates the Group record in the Groups table, then inserts each of the sub-objects, then updates the Group record to refresh the timestamp value. However, the SQL generated for that update is invalid when the mapping has both dynamic-update="true" and optimistic-lock="version". Here is the mapping:

        <class xmlns="urn:nhibernate-mapping-2.2" dynamic-update="true" mutable="true"
               optimistic-lock="version" name="Group" table="Groups">
          <id name="GroupNumber" type="System.String, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
            <column name="GroupNumber" length="5" />
            <generator class="assigned" />
          </id>
          <version generated="always" name="Timestamp" type="BinaryBlob" unsaved-value="null">
            <column name="TS" not-null="false" sql-type="timestamp" />
          </version>
          <property name="UID" update="false" type="System.Guid, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089">
            <column name="GroupUID" unique="true" />
          </property>
          <property name="Description" type="AnsiString">
            <column name="GroupDescription" length="25" not-null="true" />
          </property>
          <bag access="field.camelcase-underscore" cascade="all" inverse="true" lazy="true"
               name="Assignments" mutable="true" order-by="GroupAssignAssignment">
            <key foreign-key="fk_Group_Assignments">
              <column name="GroupNumber" />
            </key>
            <one-to-many class="Assignment" />
          </bag>
          <many-to-one class="Aggregate" name="Aggregate">
            <column name="GroupParentID" not-null="true" />
          </many-to-one>
        </class>

    When the mapping includes both the dynamic update and the optimistic lock, the generated SQL is:

        UPDATE Groups SET WHERE GroupNumber = 11111 AND TS = 0x00000007877

    This is obviously invalid, as there is nothing between SET and WHERE. If I remove the dynamic-update part, every column gets updated in this statement instead; that makes the statement valid, but rather unnecessary. Has anyone seen this issue before? Am I missing something? Thanks, Steve

    Read the article

  • Non-linear regression models in PostgreSQL using R

    - by Dave Jarvis
    Background: I have climate data (temperature, precipitation, snow depth) for all of Canada between 1900 and 2009. I have written a basic website whose simplest page lets users choose a category and a city. They then get back a very simple report (without the parameters and calculations section). The primary purpose of the web application is to provide a simple user interface so that the general public can explore the data in meaningful ways. (A list of numbers is not meaningful to the general public, nor is a website that offers too many inputs.) The secondary purpose is to give climatologists and other scientists deeper ways to view the data. (Using too many inputs, of course.)

    Tool set: The database is PostgreSQL with R (mostly) installed. The reports are written using iReport and generated using JasperReports.

    Poor model choice: Currently, a linear regression model is applied against annual averages of daily data. The linear regression is calculated within a PostgreSQL function as follows:

        SELECT
          regr_slope( amount, year_taken ),
          regr_intercept( amount, year_taken ),
          corr( amount, year_taken )
        FROM temp_regression
        INTO STRICT slope, intercept, correlation;

    The results are returned to JasperReports using:

        SELECT
          year_taken, amount, year_taken * slope + intercept,
          slope, intercept, correlation, total_measurements
        INTO result;

    JasperReports calls into PostgreSQL using the following parameterized analysis function:

        SELECT
          year_taken, amount, measurements, regression_line, slope,
          intercept, correlation, total_measurements, execute_time
        FROM climate.analysis(
          $P{CityId}, $P{Elevation1}, $P{Elevation2}, $P{Radius},
          $P{CategoryId}, $P{Year1}, $P{Year2} )
        ORDER BY year_taken

    This is not an optimal solution because it gives the false impression that the climate is changing at a slow but steady rate.

    Questions: Using functions that take two parameters (e.g., year [X] and amount [Y]), such as PostgreSQL's regr_slope:

    1. What is a better regression model to apply?
    2. What CRAN packages provide such models? (Installable, ideally, using apt-get.)
    3. How can the R functions be called from within a PostgreSQL function?

    If no such functions exist:

    1. What parameters should I try to obtain for functions that will produce the desired fit?
    2. How would you recommend showing the best-fit curve?

    Keep in mind that this is a web app for use by the general public. If the only way to analyse the data is from an R shell, then the purpose has been defeated. (I know this is not the case for most R functions I have looked at so far.) Thank you!
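    On question 3 specifically, a minimal sketch using the PL/R extension is shown below; the function name, signature, and quadratic model are illustrative assumptions, not recommendations:

        -- Sketch assuming the PL/R extension (language "plr") is installed.
        -- PostgreSQL arrays map to R vectors; the quadratic fit is only an example.
        CREATE OR REPLACE FUNCTION climate.fit_poly(years float8[], amounts float8[])
        RETURNS float8[] AS $$
          fit <- lm(amounts ~ poly(years, 2, raw = TRUE))  -- illustrative model
          as.numeric(predict(fit))                         -- one fitted value per year
        $$ LANGUAGE plr;

    The report query could then select these fitted values alongside the raw amounts instead of computing year_taken * slope + intercept.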

    Read the article

  • How can you make an emacs macro wait for cscope query results?

    - by Sudhanshu
    I am trying to write a macro which calls cscope-find-functions-calling-this-function on each and every tag in a file displayed in the *Tags List* buffer (created by the list-tags command). This should create a buffer containing a list of all functions that call a set of functions defined in a certain file. This is the sequence of keystrokes:

        1.  <f11>      ;; cscope-find-functions-calling-this-function
        2.  RET        ;; newline [shows results of cscope in a split window]
        3.  C-x C-p    ;; mark-page
        4.  C-x C-x    ;; icicle-exchange-point-and-mark
        5.  <up>       ;; previous-line
        6.  <end>      ;; end-of-line [region to copy has been marked]
        7.  <f7>       ;; append-results-to-buffer
        8.  C-x ESC O  ;; [move back to split window on the right]
        9.  C-x b      ;; icicle-buffer [switch back to *Tags List* buffer]
        10. *Tags      ;; self-insert-command * 5
        11. SPC        ;; self-insert-command
        12. List*      ;; self-insert-command * 5
        13. RET        ;; newline
        14. <down>     ;; next-line [position point on next tag in the list]

    Problem: I get no results in the buffer, and I found out that's because steps 3-7 execute before cscope has printed the results of the query made in steps 1-2. I can insert a pause in the macro using C-x q, but I'd rather have the macro wait after step 2 until cscope has returned its results, and only then continue. I suspect this is not possible through a macro alone and needs a Lisp function... I'm not a Lisp expert myself. Can someone please help? Thanks!

    Details: I have Icicles installed, so by default I get the word at point in the current buffer as input in the minibuffer. F11 is bound to cscope-find-functions-calling-this-function. windmove is installed and C-x (C-x ESC O, as shown above) takes you to the right window. F7 is bound to append-results-to-buffer, which is defined as:

        (defun append-results-to-buffer ()
          (interactive)
          (append-to-buffer (get-buffer-create "c1") (point) (mark)))

    This function just appends the currently marked region to a buffer named "c1".
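    A rough sketch of the Lisp-function approach is below; the "*cscope*" buffer name and the polling loop are assumptions about xcscope's internals, not tested code:

        ;; Sketch: run the query, then block until the cscope subprocess
        ;; exits, instead of relying on keyboard-macro timing.
        (defun my-cscope-callers-and-wait ()
          (interactive)
          (cscope-find-functions-calling-this-function)
          (let ((proc (get-buffer-process "*cscope*")))  ; assumed buffer name
            (while (and proc (eq (process-status proc) 'run))
              (accept-process-output proc 0.1))))

    From there, the whole loop over the *Tags List* buffer could be written in Lisp as well, which sidesteps the macro's timing problem entirely.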

    Read the article

  • Creating an Ada record with one field

    - by ada hater
    I've defined a type:

        type Foo is record
           bar : Positive;
        end record;

    I want to create a function that returns an instance of the record:

        function get_foo return Foo is
        begin
           return (1);
        end get_foo;

    But Ada won't let me, saying "positional aggregate cannot have one argument". Stupidly trying, I added another dummy field to the record, and then return (1, DOESNT_MATTER); works! How do I tell Ada that this isn't a parenthesized expression, but an attempt to create a record aggregate?
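    For what it's worth, the usual fix (a sketch based on the type above) is a named aggregate, since a one-component positional aggregate is indistinguishable from an ordinary parenthesized expression:

        function get_foo return Foo is
        begin
           return (bar => 1);  -- named association disambiguates the aggregate
        end get_foo;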

    Read the article

  • In CQRS (event-sourced), do you need a global sequence counter in the event store?

    - by Jon M
    In trying to get my head around CQRS (and DDD in general) I have come across situations where two events occur on different aggregates but the order of them has domain meaning. They could happen so close together that a timestamp (as used by the sample implementations I have seen) cannot differentiate them, meaning the event store doesn't contain a 'complete' representation of the domain: there is ambiguity over the order in which events occurred. As an example, the domain could fire a CustomerCreatedEvent which applies to the Customer aggregate, and then a CustomerAssignedToAgent event on the Agent aggregate. The CustomerAssignedToAgent event doesn't make sense if it occurs before the CustomerCreatedEvent, but typically both of these might be fired as a result of one operation, which makes it likely that the timestamps would effectively be the same. So am I just modelling things badly? Should there ever be a situation where the sequence of events across different aggregates is important? Or should you keep a global sequence number in your event store, so that you can identify the exact sequence in which events occurred?
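    To make the global-sequence option concrete, here is a sketch of an event-store table (PostgreSQL-flavored; all names are illustrative) that keeps a per-aggregate version for concurrency checks plus a global sequence for total ordering:

        CREATE TABLE events (
            global_seq   bigserial PRIMARY KEY,  -- total order across all aggregates
            aggregate_id uuid NOT NULL,
            version      int  NOT NULL,          -- per-aggregate order
            event_type   text NOT NULL,
            payload      text NOT NULL,          -- serialized event
            UNIQUE (aggregate_id, version)
        );

    Projections that care about cross-aggregate ordering (CustomerCreated before CustomerAssignedToAgent) can then read in global_seq order rather than comparing timestamps.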

    Read the article

  • Controlling the USB from Windows

    - by b-gen-jack-o-neill
    Hi, I know this probably isn't the easiest thing to do, but I am trying to connect a microcontroller and a PC using USB. I don't want to use the microcontroller's internal USART or a USB-to-RS232 converter; the project is intended to help me understand various principles. Getting the communication done on the microcontroller side is a piece of cake: once I know the protocol, it's relatively easy to implement on the micro, because I am in direct control of everything, even precise timing. But this is not the case on the PC. I am not very familiar with how Windows handles connected devices. In one of my previous questions I asked how Windows works with devices through drivers. I understood that, for internal use, drivers must expose a default set of functions to the OS. I mean, when the OS wants to access the HDD, it calls the HDD driver (which is probably bundled with the OS) with specific "questions", so the HDD driver has to be written to cooperate with Windows, with the write function in the proper place to be called by the OS. Something similar holds for the GPU: even DirectX must call specific functions in the drivers, so the drivers must be written to work with DX. I know many WinAPI functions work on their own, but in the end even a "simple" window must be written into the framebuffer, using MMIO at addresses specified by the drivers. Am I right? So I expected Windows to have internal functions, parts of the WinAPI, designed to work with certain commonly used devices by calling the manufacturer-supplied drivers. But this doesn't seem entirely true, because Windows has no way to communicate through the parallel port: there is no function in the WinAPI to work with the serial port, yet there are functions to work with the HDD, GPU and so on. But now comes the part where I get very lost. Windows must have some built-in functions to communicate over USB, because it handles USB flash memory, for example. So, is there any WinAPI function designed to let the user operate USB through it, or if I want to use USB myself, do I have to call the desired USB driver's functions myself? Because all you need to send to the USB controller is the device address and the information, right? I mean, I don't have to write any new drivers, am I right? Just call a WinAPI function if there is one, or directly call the original USB driver. Does any of this make sense?
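    For reference, a hedged sketch of the usual user-mode route is below: a device bound to the generic WinUSB driver can be driven from the WinUSB API without writing a kernel driver. The endpoint address and the way the file handle is obtained (SetupDi* enumeration of the device interface GUID registered by the device's .inf) are assumptions about a specific device, not a general recipe:

        // C++ sketch, link with winusb.lib; deviceHandle comes from CreateFile()
        // on the device path found via the SetupDi* functions (omitted here).
        #include <windows.h>
        #include <winusb.h>

        bool read_64_bytes(HANDLE deviceHandle) {
            WINUSB_INTERFACE_HANDLE usb;
            if (!WinUsb_Initialize(deviceHandle, &usb))  // wrap the file handle
                return false;
            UCHAR buffer[64];
            ULONG transferred = 0;
            // 0x81 = first IN endpoint; a device-specific assumption
            BOOL ok = WinUsb_ReadPipe(usb, 0x81, buffer, sizeof(buffer),
                                      &transferred, NULL);
            WinUsb_Free(usb);
            return ok == TRUE;
        }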

    Read the article

  • Query Execution Plan - When is the Where clause executed?

    - by Alex
    I have a query like this (created by LINQ):

        SELECT [t0].[Id], [t0].[CreationDate], [t0].[CreatorId]
        FROM [dbo].[DataFTS]('test', 100) AS [t0]
        WHERE [t0].[CreatorId] = 1
        ORDER BY [t0].[RANK]

    DataFTS is a full-text search table-valued function. The query execution plan looks like this:

        SELECT (0%)
          - Sort (23%)
            - Nested Loops (Inner Join) (1%)
              - Sort (Top N Sort) (25%)
                - Stream Aggregate (0%)
                  - Stream Aggregate (0%)
                    - Compute Scalar (0%)
                      - Table Valued Function (FullTextMatch) (13%)
              - Clustered Index Seek (38%)

    Does this mean that the WHERE clause ([CreatorId] = 1) is executed prior to the TVF (the full-text search) or after the full-text search? Thank you.

    Read the article

  • Aggregation over a few models - Django

    - by RadiantHex
    Hi folks, I'm trying to compute the average of a field over various subsets of a queryset.

        Player.objects.order_by('-score').filter(sex='male').aggregate(Avg('level'))

    This works perfectly! But... if I try to compute it for the top 50 players, it does not work:

        Player.objects.order_by('-score').filter(sex='male')[:50].aggregate(Avg('level'))

    This last one returns the exact same result as the query above it, which is wrong. What am I doing wrong? Help would be very much appreciated!
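    A possible workaround sketch (whether aggregate() respects slicing depends on the Django version; model and field names are taken from the question) is to restrict by the primary keys of the top 50 instead of aggregating a sliced queryset:

        from django.db.models import Avg

        # materialize the top-50 ids, then aggregate over exactly those rows
        top_ids = list(
            Player.objects.order_by('-score')
                  .filter(sex='male')
                  .values_list('pk', flat=True)[:50]
        )
        result = Player.objects.filter(pk__in=top_ids).aggregate(Avg('level'))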

    Read the article

  • Which one of the following is NOT a pitfall of inheritance?

    - by Difficult PEOPLE
    Which one of the following is NOT a pitfall of inheritance?

    1. Base and derived classes should be totally separate and do not have an is-a relationship.
    2. Base and derived classes should have been aggregate classes instead.
    3. Inheritance may be inverted, for example: Truck<-Vehicle should be Vehicle<-Truck.
    4. Incompatible class hierarchies may be connected because of multiple inheritance. Aggregation should have been used instead.
    5. Functionality is transferred from a base class to a derived one.

    In my opinion, the option that is NOT a pitfall is one where inheritance can still be used. Option 1 seems to call for doing without inheritance, and option 2 substitutes aggregation for the base/derived relationship; the others I don't know. So I think 5 is the answer.

    Read the article

  • Log transport and aggregation at scale

    - by markdrayton
    How're you analysing log files from UNIX/Linux machines? We run several hundred servers which all generate their own log files, either directly or through syslog. I'm looking for a decent solution to aggregate these and pick out important events. This problem breaks down into 3 components:

    1) Message transport

    The classic way is to use syslog to log messages to a remote host. This works fine for applications that log via syslog, but it is less useful for apps that write to a local file. Solutions for this might include having the application log into a FIFO connected to a program that sends the messages using syslog, or writing something that will grep the local files and send the output to the central syslog host. However, if we go to the trouble of writing tools to get messages into syslog, would we be better off replacing the whole lot with something like Facebook's Scribe, which offers more flexibility and reliability than syslog?

    2) Message aggregation

    Log entries seem to fall into one of two types: per-host and per-service. Per-host messages are those which occur on one machine; think disk failures or suspicious logins. Per-service messages occur on most or all of the hosts running a service. For instance, we want to know when Apache finds an SSI error, but we don't want the same error from 100 machines. In all cases we only want to see one of each type of message: we don't want 10 messages saying the same disk has failed, and we don't want a message each time a broken SSI is hit. One approach is to aggregate multiple messages of the same type into one on each host, send the messages to a central server, and then aggregate messages of the same kind into one overall event. SER can do this, but it's awkward to use. Even after a couple of days of fiddling I had only rudimentary aggregations working and had to constantly look up the logic SER uses to correlate events. It's powerful but tricky stuff: I need something my colleagues can pick up and use in the shortest possible time, and SER rules don't meet that requirement.

    3) Generating alerts

    How do we tell our admins when something interesting happens? Mail the group inbox? Inject into Nagios?

    So, how're you solving this problem? I don't expect an answer on a plate; I can work out the details myself, but some high-level discussion on what is surely a common problem would be great. At the moment we're using a mishmash of cron jobs, syslog and who knows what else to find events. This isn't extensible, maintainable or flexible, and as such we miss a lot of stuff we shouldn't.

    Updated: we're already using Nagios for monitoring, which is great for detecting down hosts, testing services, etc., but less useful for scraping log files. I know there are log plugins for Nagios, but I'm interested in something more scalable and hierarchical than per-host alerts.
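    For the transport component, the classic syslog-forwarding setup is a one-liner; this sketch assumes rsyslog and a placeholder host name:

        # /etc/rsyslog.conf -- forward everything to a central collector
        # (@@ = TCP, a single @ = UDP)
        *.* @@loghost.example.com:514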

    Read the article

  • Question on the implementation of my Entity System

    - by miguel.martin
    I am currently creating an entity system in C++; it is almost complete (I have all the code there, I just have to add a few things and test it). The only thing is, I can't figure out how to implement some features. This entity system is based a bit on the Artemis framework, but it is different. I'm not sure I'll be able to type this out the way my head processes it; I'm basically going to ask whether I should do one thing over another. First, a little detail on the entity system itself. Here are the basic classes it uses to actually work:

    - Entity - an ID (and some methods to add/remove/get/etc. Components)
    - Component - an empty abstract class
    - ComponentManager - manages ALL components for ALL entities within a Scene
    - EntitySystem - processes entities with specific components
    - Aspect - the class used to help determine which Components an Entity must contain so a specific EntitySystem can process it
    - EntitySystemManager - manages all EntitySystems within a Scene
    - EntityManager - manages entities (i.e. holds all Entities, used to determine whether an Entity has been changed, enables/disables them, etc.)
    - EntityFactory - creates (and destroys) entities and assigns an ID to them
    - Scene - contains an EntityManager, EntityFactory, EntitySystemManager and ComponentManager; has functions to update and initialise the scene.

    Now, in order for an EntitySystem to efficiently know when to check whether an Entity is valid for processing (so I can add it to a specific EntitySystem), it must receive a message from the EntityManager (after a call to activate(Entity& e)). Similarly, the EntityManager must know when an Entity is destroyed by the EntityFactory in the Scene, and the ComponentManager must know when an Entity is created AND destroyed. I do have a Listener/Observer pattern implemented at the moment, but with this pattern I may remove a Listener (which in this case is dependent on the method being called). I mainly have this implemented for game-specific things, i.e. teams, tagging of entities, etc. So... I was thinking maybe I should call a private method (using friend classes) to announce when an Entity has been activated, deleted, etc. For example, taken from my EntityFactory:

        void EntityFactory::killEntity(Entity& e)
        {
            // if the entity doesn't exist in the entity manager within the scene
            if(!getScene()->getEntityManager().doesExsist(e))
            {
                return; // go back to the caller! (should throw an exception or something...)
            }

            // tell the ComponentManager and the EntityManager that we killed an Entity
            getScene()->getComponentManager().doOnEntityWillDie(e);
            getScene()->getEntityManager().doOnEntityWillDie(e);

            // notify the listeners
            for(Mouth::iterator i = getMouth().begin(); i != getMouth().end(); ++i)
            {
                (*i)->onEntityWillDie(*this, e);
            }

            _idPool.addId(e.getId()); // return the ID to the pool
            delete &e;                // delete the entity
        }

    As you can see, on the lines where I am telling the ComponentManager and the EntityManager that an Entity will die, I am calling a method explicitly to make sure each handles it appropriately. Now, I realise I could do this without the explicit calls, with the help of the for loop notifying all listener objects connected to the EntityFactory's Mouth (an object used to tell listeners that there's an event), but is this a good idea (good design, or what)? I've gone over the PROS and CONS, I just can't decide what I want to do.

    Calling explicitly:
    - PROS: faster? Since these functions are explicitly called, they can't be "removed".
    - CONS: not flexible; bad design? (friend functions)

    Calling through listener objects (i.e. ComponentManager/EntityManager inherit from an EntityFactoryListener):
    - PROS: more flexible? Better design?
    - CONS: slower? (virtual functions); listeners can be removed, i.e. one might be removed and never called again during the program, which could cause a crash.

    P.S. If you wish to view my current source code, I am hosting it on BitBucket.

    Read the article

  • return the result of a query and the total number of rows in a single function

    - by csotelo
    This is a question about the best way of working: whether there are other alternatives, or whether this is the only way. Using CodeIgniter... I have the typical two functions: one to list records and one to count them (the count is used for pagination). The problem is that they are rather large. Here are the two functions from my model.

    Count rows:

        function get_all_count()
        {
            $this->db->select('u.id_user');
            $this->db->from('user u');
            if($this->session->userdata('detail') != '1')
            {
                $this->db->join('management m', 'm.id_user = u.id_user', 'inner');
                $this->db->where('id_detail', $this->session->userdata('detail'));
                if($this->session->userdata('management') === '1')
                {
                    $this->db->or_where('detail', 1);
                }
                else
                {
                    $this->db->where("id_profile IN (
                        SELECT e2.id_profile
                        FROM profile e, profile e2, profile_path p, profile_path p2
                        WHERE e.id_profile = " . $this->session->userdata('profile') . "
                          AND p2.id_profile = e.id_profile
                          AND p.path LIKE(CONCAT(p2.path,'%'))
                          AND e2.id_profile = p.id_profile )", NULL, FALSE);
                    $this->db->where('MD5(u.id_user) <>', $this->session->userdata('id_user'));
                }
            }
            $this->db->where('u.id_user <>', 1);
            $this->db->where('flag <>', 3);
            $query = $this->db->get();
            return $query->num_rows();
        }

    Results per page:

        function get_all($limit, $offset, $sort = '')
        {
            $this->db->select('u.id_user, user, email, flag');
            $this->db->from('user u');
            if($this->session->userdata('detail') != '1')
            {
                $this->db->join('management m', 'm.id_user = u.id_user', 'inner');
                $this->db->where('id_detail', $this->session->userdata('detail'));
                if($this->session->userdata('management') === '1')
                {
                    $this->db->or_where('detail', 1);
                }
                else
                {
                    $this->db->where("id_profile IN (
                        SELECT e2.id_profile
                        FROM profile e, profile e2, profile_path p, profile_path p2
                        WHERE e.id_profile = " . $this->session->userdata('profile') . "
                          AND p2.id_profile = e.id_profile
                          AND p.path LIKE(CONCAT(p2.path,'%'))
                          AND e2.id_profile = p.id_profile )", NULL, FALSE);
                    $this->db->where('MD5(u.id_user) <>', $this->session->userdata('id_user'));
                }
            }
            $this->db->where('u.id_user <>', 1);
            $this->db->where('flag <>', 3);
            if($sort)
                $this->db->order_by($sort);
            $this->db->limit($limit, $offset);
            $query = $this->db->get();
            return $query->result();
        }

    As you can see, I repeat most of the code; the only differences are the selected fields and the pagination. I wonder if there is any alternative that returns both the results and the total count from a single function. I have seen many tutorials, and all of them create two functions: one to count and another to fetch results... Is there anything more optimal?
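    One common refactoring, sketched below (the private helper name is made up; the rest mirrors the question), is to share the filtering logic between two thin public methods and let count_all_results() do the counting on the same built query:

        // Sketch: factor the shared JOIN/WHERE logic into one private method.
        private function _apply_user_filters()
        {
            $this->db->from('user u');
            if ($this->session->userdata('detail') != '1') {
                $this->db->join('management m', 'm.id_user = u.id_user', 'inner');
                $this->db->where('id_detail', $this->session->userdata('detail'));
                // ... the remaining management/profile/MD5 conditions
                //     from the question go here, unchanged ...
            }
            $this->db->where('u.id_user <>', 1);
            $this->db->where('flag <>', 3);
        }

        public function get_all_count()
        {
            $this->_apply_user_filters();
            return $this->db->count_all_results();
        }

        public function get_all($limit, $offset, $sort = '')
        {
            $this->db->select('u.id_user, user, email, flag');
            $this->_apply_user_filters();
            if ($sort) $this->db->order_by($sort);
            $this->db->limit($limit, $offset);
            return $this->db->get()->result();
        }

    It's still two queries against the database, but the duplicated code collapses to one place.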

    Read the article

  • Why to build own CMS?

    - by ile
    At my first job interview, I was asked why I had built my own CMS. Why not use an existing CMS: Wordpress, Joomla, Drupal...? At first I was stunned. I couldn't immediately recall all of my reasons for building my own CMS, but this was definitely one of the main ones: it's my code, and if I want to change something in that CMS (which I often have to do, because each website I build needs a CMS with different functions) it's not a big problem. For some time I was using Wordpress, and one of the main things that put me off it was discovering bugs in code that wasn't written by me; these bugs were frequent, especially if I made changes to the CMS or added a plugin... Here I can find these 8 reasons why NOT to build your own CMS:

    1. It won't meet users' needs
    2. It's too much work
    3. It won't be a standard solution
    4. It won't be extendible fast enough
    5. It won't be tested well enough
    6. It won't be easily changeable
    7. It won't add any value
    8. Create content, not functionality

    A quote from the same page: "So the main question to ask yourself is: 'Why am I really trying to re-solve a problem that has already been solved before?'" Well, I definitely agree that it's hard to invent a CMS that hasn't already been invented, but on the other hand, I think every CMS is (or should be) individual... it maybe won't have a million functions; it will have three functions whose usage is clear (to a user) and which do everything that one site needs. I also think it is not good to give a client a CMS with a lot of functions that are never used, and it probably looks more professional when the website and the CMS together feel like one product. I would also like to comment on some parts of that list: "It's too much work" - I agree, but taking an existing CMS and customising it to a website's needs can sometimes be a very hard job, or mission impossible. "It won't be easily changeable" - I disagree with this one. What is your opinion on this? Why did you (or didn't you) develop your own CMS? Ile

    Read the article

  • I assume Row_Number doesn’t act only on rows of the window frame

    - by AspOnMyNet
    a) This quote is taken from http://www.postgresql.org/docs/current/static/tutorial-window.html:

        For each row, there is a set of rows within its partition called its window frame. Many (but not all) window functions act only on the rows of the window frame, rather than of the whole partition. By default, if ORDER BY is supplied then the frame consists of all rows from the start of the partition up through the current row, plus any following rows that are equal to the current row according to the ORDER BY clause.

    I assume ROW_NUMBER doesn't act only on the rows of the window frame, but instead always acts on all rows of a partition?

    b) "By default, if ORDER BY is supplied then the frame consists of all rows from the start of the partition up through the current row, plus any following rows that are equal to the current row according to the ORDER BY clause." I assume that is only relevant for those window functions that act only on the rows of the window frame (and thus the above quote isn't true for the ROW_NUMBER() function)?

    c) The page above describes PostgreSQL 8.4's windowing functions. Is everything in that article also true for SQL Server 2008's windowing functions?

    Thanks
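    As a concrete illustration of the distinction in (a), the sketch below reuses the empsalary example table from the linked tutorial: sum() respects the default frame (producing a running total), while row_number() is defined purely by position within the partition:

        SELECT depname, salary,
               row_number() OVER (PARTITION BY depname ORDER BY salary) AS rn,
               sum(salary)  OVER (PARTITION BY depname ORDER BY salary) AS running_sum
        FROM empsalary;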

    Read the article

  • SQL Server 2005 database design - many-to-many relationships with hierarchy

    - by Remnant
    Note: I have completely re-written my original post to better explain the issue I am trying to understand. I have tried to generalise the problem as much as possible. Also, my thanks to the people who responded to the original. Hopefully this post makes things a little clearer.

    Context: In short, I am struggling to understand the best way to design a small-scale database to handle (what I perceive to be) multiple many-to-many relationships. Imagine the following scenario for a company organisational structure:

        Textile Division                     Marketing Division
        |-- HR Dept                          |-- HR Dept
        |     |-- Payroll   -> Emps          |     |-- Payroll   -> Emps
        |     |-- Hiring    -> Emps          |     |-- Hiring    -> Emps
        |-- Finance Dept                     |-- Finance Dept
              |-- Audit     -> Emps                |-- Audit     -> Emps
              |-- Tax       -> Emps                |-- Accounts  -> Emps

        (Emps denotes a list of employees who work in that area)

    When I first started with this issue I made four separate tables:

    - Divisions - Textile, Marketing (PK = DivisionID)
    - Departments - HR, Finance (PK = DeptID)
    - Functions - Payroll, Hiring, Audit, Tax, Accounts (PK = FunctionID)
    - Employees - list of all employees (PK = EmployeeID)

    The problem as I see it is that there are multiple many-to-many relationships, i.e. many departments have many divisions and many functions have many departments.

    Question: Given the database structure above, suppose I wanted to get all employees who work in the Payroll function of the Marketing Division. To do this I need to be able to differentiate between the two Payroll departments, but I am not sure how this can be done. I understand that I could build a 'link/junction' table between Departments and Functions so that I can retrieve which Functions are in which Departments. However, I would still need to differentiate the Division they belong to.

    Research effort: As you can see, I am an abecedarian when it comes to database design. I have spent the last two days researching this issue, traversing nested set models and adjacency models, and reading that this issue is known not to be NP-complete, etc. I am sure there is a simple solution?
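    One hedged way to extend the junction idea to all three levels (table and column names below are illustrative, not prescribed) is a single "org unit" table keyed by division, department and function, plus an assignment table for employees:

        CREATE TABLE OrgUnit (
            OrgUnitID  int IDENTITY PRIMARY KEY,
            DivisionID int NOT NULL REFERENCES Divisions(DivisionID),
            DeptID     int NOT NULL REFERENCES Departments(DeptID),
            FunctionID int NOT NULL REFERENCES Functions(FunctionID),
            UNIQUE (DivisionID, DeptID, FunctionID)
        );

        CREATE TABLE EmployeeAssignment (
            EmployeeID int NOT NULL REFERENCES Employees(EmployeeID),
            OrgUnitID  int NOT NULL REFERENCES OrgUnit(OrgUnitID),
            PRIMARY KEY (EmployeeID, OrgUnitID)
        );

        -- all employees in the Payroll function of the Marketing division
        SELECT e.*
        FROM Employees e
        JOIN EmployeeAssignment ea ON ea.EmployeeID = e.EmployeeID
        JOIN OrgUnit ou            ON ou.OrgUnitID  = ea.OrgUnitID
        JOIN Divisions d           ON d.DivisionID  = ou.DivisionID
        JOIN Functions f           ON f.FunctionID  = ou.FunctionID
        WHERE d.Name = 'Marketing' AND f.Name = 'Payroll';

    With this shape there is only one Payroll row in Functions; what distinguishes "Payroll in Marketing" from "Payroll in Textile" is the OrgUnit row, not the function itself.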

    Read the article

  • R: disentangling scopes

    - by rescdsk
    Hi, Right now, in my R project, I have functions1.R with doFoo() and doBar(), functions2.R with other functions, and main.R with the main program in it, which first does source('functions1.R'); source('functions2.R'), and then calls the other functions. I've been starting the program from the R GUI in Mac OS X, with source('main.R'). This is fine the first time, but after that, the variables defined on the first run through the program are still defined the second time functions*.R are sourced, and so the functions get a whole bunch of extra variables defined. I don't want that! I want an "undefined variable" error when my function uses a variable it shouldn't! Twice this has given me very late nights of debugging! So how do other people deal with this sort of problem? Is there something like source(), but that makes an independent namespace that doesn't fall through to the main one? Making a package seems like one solution, but it seems like a big pain in the butt compared to e.g. Python, where a source file is automatically a separate namespace. Any tips? Thank you!
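    One sketch of a source-into-its-own-namespace approach (file and function names taken from the question) is sys.source() into a fresh environment whose parent skips the global workspace:

        # functions1.R is evaluated in its own environment, so its functions
        # cannot accidentally see leftover globals from a previous run
        fns <- new.env(parent = baseenv())
        sys.source("functions1.R", envir = fns)
        fns$doFoo()  # call through the environment

    With parent = baseenv() this is strict: base functions still resolve, but globals (and functions from attached packages other than base) do not, so an undefined variable raises an error as desired.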

    Read the article

  • How would I go about sharing variables in a class with Lua?

    - by Nicholas Flynt
    I'm fairly new to Lua. I've been working on implementing Lua scripting for the logic in a game engine I'm putting together. I've had no trouble so far getting Lua up and running through the engine, and I'm able to call Lua functions from C and C functions from Lua. The way the engine works now, each Object class contains a set of variables that the engine can quickly iterate over to draw or to process physics. While game objects all need to access and manipulate these variables in order for the game engine itself to see any changes, they are free to create their own variables, and Lua is exceedingly flexible about this, so I don't foresee any issues. Anyway, currently the game-engine side of things is sitting in C land, and I really want it to stay there for performance reasons. So in an ideal world, when spawning a new game object, I'd need to be able to give Lua read/write access to this standard set of variables as part of the Lua object's base class, which its game logic could then proceed to run wild with. So far, I'm keeping two separate tables of objects in place: Lua spawns a new game object which adds itself to a numerically indexed global table of objects, and then proceeds to call a C++ function, which creates a new GameObject class and registers the Lua index (an int) with the class. So far so good: C++ functions can now see the Lua object and easily perform operations or call functions in Lua land using dostring. What I need to do now is take the C++ variables, part of the GameObject class, and expose them to Lua, and this is where Google is failing me. I've encountered a very nice method that details the process using tags, but I've read that this method is deprecated in favour of metatables. What is the ideal way to accomplish this? Is it worth the hassle of learning how to pass class definitions around using Luabind or some equivalent, or is there a simple way I can just register each variable (once, at spawn time) with the global Lua state? What's the "current" best way to do this, as of Lua 5.1.4?
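    A sketch of the metatable route is below (Lua 5.1 C API; the GameObject struct and its field names are stand-ins for the real class): an __index/__newindex pair installed once on a shared metatable, with the C-side object carried in a userdata:

        #include <string.h>
        #include "lua.h"
        #include "lauxlib.h"

        typedef struct { float x, y; } GameObject;  /* stand-in for the real class */

        static int obj_index(lua_State *L) {
            GameObject *o = *(GameObject **)luaL_checkudata(L, 1, "GameObject");
            const char *k = luaL_checkstring(L, 2);
            if (strcmp(k, "x") == 0) { lua_pushnumber(L, o->x); return 1; }
            if (strcmp(k, "y") == 0) { lua_pushnumber(L, o->y); return 1; }
            lua_pushnil(L);
            return 1;
        }

        static int obj_newindex(lua_State *L) {
            GameObject *o = *(GameObject **)luaL_checkudata(L, 1, "GameObject");
            const char *k = luaL_checkstring(L, 2);
            lua_Number v = luaL_checknumber(L, 3);
            if (strcmp(k, "x") == 0)      o->x = (float)v;
            else if (strcmp(k, "y") == 0) o->y = (float)v;
            return 0;
        }

        /* called once at startup: every GameObject userdata shares this metatable */
        void register_gameobject_metatable(lua_State *L) {
            luaL_newmetatable(L, "GameObject");
            lua_pushcfunction(L, obj_index);
            lua_setfield(L, -2, "__index");
            lua_pushcfunction(L, obj_newindex);
            lua_setfield(L, -2, "__newindex");
            lua_pop(L, 1);
        }

    Spawning then boils down to pushing a GameObject** userdata (lua_newuserdata) and attaching the metatable with luaL_getmetatable + lua_setmetatable, so the variables are registered once per class rather than once per instance.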

    Read the article

  • Is there a good way of automatically generating javascript client code from server side python

    - by tat.wright
    I basically want to be able to:

    1. Write a few functions in Python (with the minimum amount of extra metadata)
    2. Turn these functions into a web service (with the minimum of effort / boilerplate)
    3. Automatically generate some JavaScript functions / objects for RPC (this should prevent me from doing as many stupid things as possible, like mistyping method names, forgetting the names of methods, or passing the wrong number of arguments)

    Example. Python:

        def hello_world():
            return "Hello world"

    JavaScript:

        ...
        <!-- This file is automatically generated (either dynamically or statically) -->
        <script src="http://myurl.com/webservice/client_side_javascript"> </script>
        ...
        <script>
        $('#button').click(function () {
            hello_world(function (data) { $('#label').text(data) })
        })
        </script>

    A bit of research has shown me some approaches that come close to this:

    - Automatic generation of JSON-RPC services from functions, with a little boilerplate code in Python, and then using jQuery and JSON to do the calls (still easy to make mistakes with method names, still need to be aware of URLs when calling, very irritating to write these calls yourself in the Firebug shell)
    - Using a library like soaplib to generate WSDL from Python (by adding copious type information), and then somehow converting this into JavaScript (not sure if there is even a library to do this)

    But are there any approaches closer to what I want?
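    As a sketch of the generate-the-stubs-yourself direction (all names are made up; the URL scheme follows the example above), a registry plus introspection is enough to keep the JavaScript method names in lockstep with the Python ones:

        import inspect

        EXPOSED = {}

        def expose(fn):
            """Mark a function as callable from the browser."""
            EXPOSED[fn.__name__] = fn
            return fn

        @expose
        def hello_world():
            return "Hello world"

        def client_side_javascript(base_url="/webservice"):
            """Emit one jQuery stub per exposed function."""
            stubs = []
            for name, fn in sorted(EXPOSED.items()):
                params = list(inspect.signature(fn).parameters)
                arglist = ", ".join(params + ["callback"])
                payload = ", ".join("%s: %s" % (p, p) for p in params)
                stubs.append(
                    "function %s(%s) {\n"
                    "  $.post('%s/%s', {%s}, callback);\n"
                    "}" % (name, arglist, base_url, name, payload)
                )
            return "\n".join(stubs)

        print(client_side_javascript())

    Serving that string at /webservice/client_side_javascript and dispatching the POSTs to EXPOSED[name] covers points 2 and 3 with no per-function boilerplate; a mistyped method name then fails loudly as an undefined JavaScript function instead of a silent bad URL.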

    Read the article

  • Port Win32 DLL hook to Linux

    - by peachykeen
    I have a program (NWShader) which hooks into a second program's OpenGL calls (NWN) to do post-processing effects and whatnot. NWShader was originally built for Windows, generally modern versions (win32), and uses both DLL exports (to get Windows to load it and grab some OpenGL functions) and Detours (to hook into other functions). I'm using the trick where Windows looks in the current directory for any DLLs before checking the system directory, so it loads mine. I have one DLL that redirects with this method:

        #pragma comment(linker, "/export:oldFunc=nwshader.newFunc")

    to send calls to a differently named function in my own DLL. I then do any processing and call the original function from the system DLL. I need to port NWShader to Linux (NWN exists in both flavors). As far as I can tell, what I need to make is a shared library (.so file). If this is preloaded before the NWN executable (I found a shell script to handle this), my functions will be called. The only problem is that I need to call the original function (I would use some dynamic-loading method for this, I think) and I need to be able to do Detours-like hooking of internal functions. At the moment I'm building on Ubuntu 9.10 x64 (with the 32-bit compiler flags). I haven't been able to find much on Google to help with this, but I don't know exactly what the *nix community calls it. I can code C++, but I'm more used to Windows. Being OpenGL, the only parts that need modifying to be compatible with Linux are the hooking code and the calls. Is there a simple and easy way to do this, or will it involve recreating Detours and dynamically loading the original function addresses?
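    For the exported-function half, the usual Linux equivalent is an LD_PRELOAD shim that forwards through dlsym(RTLD_NEXT, ...); a minimal sketch (glClear chosen arbitrarily as the hooked call, binary name a placeholder):

        /* build: gcc -shared -fPIC -m32 -o libnwshader.so hook.c -ldl */
        #define _GNU_SOURCE
        #include <dlfcn.h>
        #include <GL/gl.h>

        void glClear(GLbitfield mask) {
            static void (*real_glClear)(GLbitfield) = 0;
            if (!real_glClear)  /* resolve the next definition in link order */
                real_glClear = (void (*)(GLbitfield))dlsym(RTLD_NEXT, "glClear");
            /* ... post-processing hook logic would go here ... */
            real_glClear(mask);
        }

    Running with LD_PRELOAD=./libnwshader.so ./game makes the shim's glClear shadow the real one, much like the current-directory DLL trick. Hooking arbitrary internal functions (the Detours half) has no stock equivalent, though; that still means runtime code patching.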

    Read the article

  • Is it normal for C++ static initialization to appear twice in the same backtrace?

    - by Joseph Garvin
    I'm trying to debug a C++ program, compiled with GCC, that freezes at startup. GCC protects a function's static local variables with a mutex, and it appears that waiting to acquire such a lock is why it freezes. How this happens is rather confusing. First, module A's static initialization occurs (there are __static_init functions GCC invokes that are visible in the backtrace), which calls a function Foo() that has a static local variable. The static local variable is an object whose constructor calls through several layers of functions; then suddenly the backtrace has a few ??'s, and then it is in the static initialization of a second module B (the __static functions occur all over again), which then calls Foo(). But since Foo() never returned the first time, the mutex on the local static variable is still held, and it locks up. How can one static init trigger another? My first theory was shared libraries: that module A would be calling some function in module B that would cause module B to load, thus triggering B's static init. But that doesn't appear to be the case; module A doesn't use module B at all. So I have a second (and horrifying) guess. Say that:

    - Module A uses some templated function, or a function in a templated class, e.g. foo<int>::bar()
    - Module B also uses foo<int>::bar()
    - Module A doesn't depend on module B at all
    - At link time, the linker has two instances of foo<int>::bar(), but this is OK because template functions are marked as weak symbols...
    - At runtime, module A calls foo<int>::bar(), and the static init of module B is triggered, even though module A doesn't depend on module B! Why? Because the linker decided to go with module B's instance of foo<int>::bar() instead of module A's instance at link time.

    Is this particular scenario valid? Or should one module's static init never trigger static init in another module?

    Read the article

  • Do complex JOINs cause high coupling and maintenance problems ?

    - by ashkan.kh.nazary
    Our project has ~40 tables with complex relations. A colleague believes in using long join queries, which forces me to learn about tables outside of my module, but I think I should not concern myself with tables not directly related to my module, and should instead use data-access functions (written by those responsible for other modules) when I need data from them. Let me clarify: I am responsible for the ContactVendor module, which enables customers to contact the vendor and start a conversation about some specific product. The Products module has its own complex tables and relations, with functions that encapsulate the details (for example i18n, activation, product availability, etc.). Now I need to show the product title of some product related to some conversation between the vendor and customers. I may either write a long query that retrieves the product info along with the conversation stuff in one shot (which forces me to learn about the Product tables), OR I may pass the relevant product_id to the get_product_info(int) function. The first approach is obviously demanding and introduces many bad practices and things I normally consider faults in programming. The problem with the second approach seems to be the countless mini-queries these access functions cause: performance loss is a concern when a loop tries to fetch product titles for 100 products using functions that each perform a separate query. So I'm stuck between "don't code to the implementation, code to the interface" and performance. What is the right way of doing things?

    UPDATE: I'm especially concerned about possible future modifications to those tables outside of my module. What if the Products module decided to change the way they do things, or for some reason modified the schema? It would mean some other modules break or malfunction until the change is integrated into them. The usual ripple-effect problem.
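    One middle ground, sketched below (the function name and the PostgreSQL-flavored syntax are illustrative, since the question doesn't name a DBMS), is for the Products module to expose a set-based accessor, so callers keep the interface boundary without paying one query per product:

        -- set-based counterpart to get_product_info(int):
        -- one round trip for the whole batch of ids
        CREATE FUNCTION get_product_titles(p_ids int[])
        RETURNS TABLE (product_id int, title text) AS $$
            SELECT product_id, title
            FROM products
            WHERE product_id = ANY(p_ids);
        $$ LANGUAGE sql;

    The ContactVendor module still never touches the Products tables directly, but the 100-product loop becomes a single call.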

    Read the article

  • Using Tcl DSL in Python

    - by Sridhar Ratnakumar
    I have a bunch of Python functions. Let's call them foo, bar and baz. They accept a variable number of string arguments and do other sophisticated things (like accessing the network). I want the "user" (let's assume he is only familiar with Tcl) to write scripts in Tcl using those functions. Here's an example (taken from MacPorts) that the user might come up with:

        post-configure {
            if {[variant_isset universal]} {
                set conflags ""
                foreach arch ${configure.universal_archs} {
                    if {${arch} == "i386"} {append conflags "x86 "} else {
                        if {${arch} == "ppc64"} {append conflags "ppc_64 "} else {
                            append conflags ${arch} " "
                        }
                    }
                }
                set profiles [exec find ${worksrcpath} -name "*.pro"]
                foreach profile ${profiles} {
                    reinplace -E "s|^(CONFIG\[ \\t].*)|\\1 ${conflags}|" ${profile}
                    # Cures an isolated case
                    system "cd ${worksrcpath}/designer && \
                        ${qt_dir}/bin/qmake -spec ${qt_dir}/mkspecs/macx-g++ -macx \
                        -o Makefile python.pro"
                }
            }
        }

    Here, variant_isset, reinplace and so on (everything other than the Tcl builtins) are implemented as Python functions. if, foreach, set, etc. are normal Tcl constructs. post-configure is a Python function that accepts, well, a Tcl code block that can later be executed (which in turn would obviously end up calling the above-mentioned Python "functions"). Is this possible to do in Python? If so, how?

        from Tkinter import *; root = Tk(); root.tk.eval('puts [array get tcl_platform]')

    is the only integration I know of, which is obviously very limited (not to mention the fact that it starts up an X11 server on the Mac).
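    Tkinter can actually do more than that one-liner suggests; here is a sketch (Python 2 spelling, matching the question) of the two needed pieces: a Tk-less interpreter and Python-implemented Tcl commands:

        import Tkinter  # "import tkinter" on Python 3

        tcl = Tkinter.Tcl()  # a bare Tcl interpreter: no Tk window, no X11

        def variant_isset(name):
            # stand-in for the real Python implementation
            return "1" if name == "universal" else "0"

        tcl.createcommand("variant_isset", variant_isset)
        print(tcl.eval('expr {[variant_isset universal] ? "yes" : "no"}'))

    createcommand() registers a Python callable under a Tcl name, so the user's post-configure block can call variant_isset, reinplace, etc. directly, and Tkinter.Tcl() avoids the X11 startup complained about above.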

    Read the article

  • atol(), atof(), atoi() function behaviours, is there a stable way to convert from/to string/integer

    - by Berkay
    These days I'm playing with the C functions atol(), atof() and atoi(). From a blog post I found a tutorial and applied it. Here are my results:

        #include <stdio.h>
        #include <stdlib.h>
        #include <conio.h>

        void main()
        {
            char a[10], b[10];
            puts("Enter the value of a");
            gets(a);
            puts("Enter the value of b");
            gets(b);
            printf("%s+%s=%ld and %s-%s=%ld", a, b, (atol(a) + atol(b)),
                   a, b, (atol(a) - atol(b)));
            getch();
        }

    There is atof(), which returns the float value of the string, and atoi(), which returns the integer value. Now, to see the difference between the three, I checked this code:

        main()
        {
            char a[] = {"2545.965"};
            printf("atol=%ld\t atof=%f\t atoi=%d\t\n", atol(a), atof(a), atoi(a));
        }

    The output will be:

        atol=2545 atof=2545.965000 atoi=2545

    Now take char a[] = {"heyyou"}; when you run the program, the following will be the output (why? is there any solution to convert pure strings to an integer?):

        atol=0 atof=0 atoi=0

    So the string should contain a numeric value. Now modify the program as char a[] = {"007hey"}; the output in this case (tested on Red Hat) will be:

        atol=7 atof=7.000000 atoi=7

    so the functions have taken only the 007, not the remaining part (why?). Now consider char a[] = {"hey007"}; the output of the program will be:

        atol=0 atof=0.000000 atoi=0

    So I just want to convert my strings to numbers and then back to the same text. I played with these functions and, as you see, I'm getting really interesting results. Why is that? Are there any other functions to convert from string to integer and vice versa?
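    The short answer to the "why" questions is that atoi/atol/atof simply stop at the first non-numeric character and return 0 when there are no leading digits. For a conversion that tells you what happened, the usual suggestion is strtol(); a sketch reproducing the cases above:

        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            const char *inputs[] = { "2545.965", "heyyou", "007hey", "hey007" };
            for (int i = 0; i < 4; i++) {
                char *end;
                long v = strtol(inputs[i], &end, 10);
                if (end == inputs[i])
                    printf("%-8s : no digits at all\n", inputs[i]);
                else if (*end != '\0')
                    printf("%-8s : parsed %ld, stopped at \"%s\"\n", inputs[i], v, end);
                else
                    printf("%-8s : clean conversion: %ld\n", inputs[i], v);
            }
            return 0;
        }

    For the reverse direction (number back to string), snprintf() with %ld or %f covers both cases.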

    Read the article

  • C++ VB6 interfacing problem

    - by Roshan
    Hi, I'm tearing my hair out trying to solve this one; any insights will be much appreciated. I have a C++ exe which acquires data from some hardware in the main thread and processes it in another thread (thread 2). I use a C++ DLL to supply some data-processing functions, which are called from thread 2. I have a requirement to implement another set of data-processing functions in VB6. I have thus created a VB6 DLL, using the add-in vbAdvance to create a standard DLL. When I call functions from this VB6 DLL in the main thread, everything works exactly as expected. When I call functions from this VB6 DLL in thread 2, I get an access violation. I've traced the error to the CopyMemory command: if it runs during a call from the main thread it's fine, but during a call from the processing thread it causes an exception. Why should this be so? As far as I understand, threads share the same address space. Here is the code from my VB DLL:

        Public Sub UserFunInterface(ByVal in1ptr As Long, ByVal out1ptr As Long, ByRef nsamples As Long)
            Dim myarray1() As Single
            Dim myarray2() As Single
            Dim i As Integer

            ReDim myarray1(0 To nsamples - 1) As Single
            ReDim myarray2(0 To nsamples - 1) As Single

            With tsa1din(0) ' defined as safearray1d in a global definitions module
                .cDims = 1
                .cbElements = 4
                .cElements = nsamples
                .pvData = in1ptr
            End With

            With tsa1dout
                .cDims = 1
                .cbElements = 4
                .cElements = nsamples
                .pvData = out1ptr
            End With

            CopyMemory ByVal VarPtrArray(myarray1), VarPtr(tsa1din(0)), 4
            CopyMemory ByVal VarPtrArray(myarray2), VarPtr(tsa1dout), 4

            For i = 0 To nsamples - 1
                myarray2(i) = myarray1(i) * 2
            Next i

            ZeroMemory ByVal VarPtrArray(myarray1), 4
            ZeroMemory ByVal VarPtrArray(myarray2), 4
        End Sub

    Read the article
