Search Results

Search found 3525 results on 141 pages for 'infinite sequence'.

Page 115/141 | < Previous Page | 111 112 113 114 115 116 117 118 119 120 121 122  | Next Page >

  • Mapping issue with multi-field primary keys using hibernate/JPA annotations

    - by Derek Clarkson
    Hi all, I'm stuck with a database which uses multi-field primary keys. I have a master and a details table, where the details table's primary key contains fields which are also the foreign keys to the master table. Like this:

        Master primary key fields:  master_pk_1
        Details primary key fields: master_pk_1, details_pk_2, details_pk_3

    In the Master class we define the hibernate/JPA annotations like this:

        @Id
        @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "idGenerator")
        @Column(name = "master_pk_1")
        private long masterPk1;

        @OneToMany(cascade = CascadeType.ALL)
        @JoinColumn(name = "master_pk_1", referencedColumnName = "master_pk_1")
        private List<Details> details = new ArrayList<Details>();

    And in the Details class I have defined the id and the back reference like this:

        @EmbeddedId
        @AttributeOverrides({
            @AttributeOverride(name = "masterPk1",  column = @Column(name = "master_pk_1")),
            @AttributeOverride(name = "detailsPk2", column = @Column(name = "details_pk_2")),
            @AttributeOverride(name = "detailsPk3", column = @Column(name = "details_pk_3")) })
        private DetailsPrimaryKey detailsPrimaryKey = new DetailsPrimaryKey();

        @ManyToOne
        @JoinColumn(name = "master_pk_1", referencedColumnName = "master_pk_1", insertable = false)
        private Master master;

    The goal of all of this was that I could create a new master, add some details to it, and on save JPA/Hibernate would generate the new id for the master in the masterPk1 field and automatically pass it down to the details records, storing it in the matching masterPk1 field of the DetailsPrimaryKey class. At least that's what the documentation I've been looking at implies. What actually happens is that hibernate appears to correctly create and update the records in the database, but it does not pass the key down to the details classes in memory; instead I have to set it myself. I also found that without insertable = false on the back reference to master, hibernate would generate sql with the master_pk_1 field listed twice in the insert statement, and the database would throw an exception. My question is simply: is this arrangement of annotations correct, or is there a better way of doing it?

  • Hiring a programmer: looking for the "right attitude"

    - by Totophil
    It's actually two questions in one: What is the right attitude for a programmer? How do you (or would you) look for it when interviewing or during the hiring process?

    Please note this question is not about the personality or traits of a candidate; it is about their attitude towards what they do for a living. This is also not the reverse of programmers' pet peeves. The question has been made community wiki, since I am interested in a good answer rather than reputation. I disagree that the question is purely subjective and just a matter of opinion: clearly some attitudes make a better programmer than others. Consequently, there might well exist an attitude that is common to most of the better programmers.

    Update: After some deliberation I came up with the following attitude measurement scales:

        identifies themselves with the job ↔ fully detached
        perceives code as a collection of concepts ↔ sees code as a sequence of steps
        thinks of creating software as an art ↔ takes a 100% rational approach to design and development

    Answers that include some sort of comment on the appropriateness of these scales are greatly appreciated.

    Definition of "attitude": a complex mental state involving beliefs and feelings and values and dispositions to act in certain ways; "he had the attitude that work was fun".

    The question came as a result of some reflection on the top-voted answer to "How do you ensure code quality?" here on Stack Overflow.

  • Can a GeneralPath be modified?

    - by Dov
    java2d is fairly expressive, but requires constructing lots of objects. In contrast, the older API would let you call methods to draw various shapes, but lacks all the new features like transparency, stroke, etc. Java has fairly high costs associated with object creation. For speed, I would like to create a GeneralPath whose structure does not change, but go in and change the x,y points inside.

        path = new GeneralPath(GeneralPath.WIND_EVEN_ODD, 10);
        path.moveTo(x, y);
        path.lineTo(x2, y2);
        double len = Math.sqrt((x2-x)*(x2-x) + (y2-y)*(y2-y));
        double dx  = (x-x2) * headLen / len;
        double dy  = (y-y2) * headLen / len;
        double dx2 = -dy * (headWidth/headLen);
        double dy2 =  dx * (headWidth/headLen);
        path.lineTo(x2 + dx + dx2, y2 + dy + dy2);
        path.moveTo(x2 + dx - dx2, y2 + dy - dy2);
        path.lineTo(x2, y2);

    This one isn't even that long. Imagine a much longer sequence of commands, where only the ones at the end are changing. I just want to be able to overwrite commands, to have an iterator effectively. Does that exist?

  • Calculating odds distribution with 6-sided dice

    - by Stephen
    I'm trying to calculate the odds distribution for a varying number of 6-sided die rolls. For example, 3d6 ranges from 3 to 18 as follows:

        3:1, 4:3, 5:6, 6:10, 7:15, 8:21, 9:25, 10:27, 11:27, 12:25, 13:21, 14:15, 15:10, 16:6, 17:3, 18:1

    I wrote this php program to calculate it:

        function distributionCalc($numberDice, $sides = 6) {
            for ($i = 0; $i < pow($sides, $numberDice); $i++) {
                $sum = 0;
                for ($j = 0; $j < $numberDice; $j++) {
                    $sum += 1 + (floor($i / pow($sides, $j)) % $sides);
                }
                $distribution[$sum]++;
            }
            return $distribution;
        }

    The inner $j for-loop uses the magic of the floor and modulus functions to create a base-6 counting sequence with the number of digits being the number of dice, so 3d6 would count as 111, 112, 113, 114, 115, 116, 121, 122, 123, 124, 125, 126, 131, etc. The function takes the sum of each, so it would read as 3, 4, 5, 6, 7, 8, 4, 5, 6, 7, 8, 9, 5, etc. It plows through all 6^3 possible results and adds 1 to the corresponding slot in the $distribution array between 3 and 18. Pretty straightforward. However, it only works until about 8d6; after that I get server time-outs because it's now doing billions of calculations. But I don't think that's necessary, because die probability follows a sweet bell-curve distribution. I'm wondering if there's a way to skip the number crunching and go straight to the curve itself. Is there a way to do this, so, for example, with 80d6 (80-480)? Can the distribution be projected without doing 6^80 calculations? I'm not a professional coder and probability is still new to me, so thanks for all the help! Stephen
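
    A standard way to get there without enumerating all 6^n rolls is dynamic programming by convolution: the distribution for n dice is the distribution for n-1 dice convolved with a single die, so each extra die costs only (number of totals so far) × 6 additions instead of multiplying the search space by 6. A minimal Python sketch of the idea (the function name and shape are mine, not from the post):

        def dice_distribution(number_dice, sides=6):
            """Return {total: count} for number_dice rolls of a sides-sided die."""
            dist = {0: 1}  # zero dice: exactly one way to have a total of 0
            for _ in range(number_dice):
                new_dist = {}
                for total, count in dist.items():
                    for face in range(1, sides + 1):
                        new_dist[total + face] = new_dist.get(total + face, 0) + count
                dist = new_dist
            return dist

        print(dice_distribution(3))   # reproduces 3:1, 4:3, 5:6, ..., 18:1
        # dice_distribution(80) finishes almost instantly; Python integers don't
        # overflow, so the enormous counts for 80d6 come out exact.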

  • Newbie database index question

    - by RenderIn
    I have a table with multiple indexes, several of which repeat the same columns:

        Index 1 columns: X, B, C, D
        Index 2 columns: Y, B, C, D
        Index 3 columns: Z, B, C, D

    I'm not very knowledgeable about indexing in practice, so I'm wondering if somebody can explain why X, Y and Z were paired with these same columns. B is an effective date. C is a semi-unique key ID for this table for a specific effective date B. D is a sequence that identifies the priority of this record for the identifier C. Why not just create 6 indexes, one for each of X, Y, Z, B, C, D? I want to add an index on another column T, but in some contexts I'll only be querying on T alone, while in others I will also be specifying the B, C and D columns... so should I create just one index like the ones above, or should I create one for T and one for (T, B, C, D)? I've not had as much luck as expected when googling for comprehensive coverage of indexing. Any resources where I can get a thorough explanation and lots of examples of B-tree indexing?

  • How to use ASP.NET Routing in a Quote of the Day Website

    - by SidC
    Good Afternoon, A client is interested in creating an ASP.NET 2.0 website whose purpose is to serve up a "quote of the day". He wants the quotes on static content pages, all attached to the same master page. The quote pages must be viewed in a certain sequence, and site visitors cannot view any pages other than the starting page when first browsing to the site. That is, everyone must go to page 001.aspx when entering the site. Two questions:

    1. The content pages are going to be created by the client using an Excel data source and a merge process by which each quote page is created, e.g. 001.aspx, 002.aspx etc. This seems clunky to me at best. Would ASP.NET Dynamic Data be a better solution here?

    2. I'm new to ASP.NET Routing and URL rewriting as a whole. How would I set up a route table to ensure that users always enter the site on the same entry page, and create a route table such that default.aspx resolves to 001.aspx?

    Thanks, Sid

  • Why does my Workflow Service (4.0) variable go null in a DoWhile Activity?

    - by jlafay
    I have a WF service on which I'm trying to set up receive activities to "Subscribe" and "Unsubscribe". I'm using this WF durable duplex tutorial as a basis, because my service performs callbacks to clients. Basically, think of it as a chat service. I can make client calls to the two receive activities just fine. What happens is that the callback address of the client is passed in to Subscribe() on the service. The address is stored as a variable in the WF service, and everything looks like it would work as expected. When a client calls Unsubscribe(), the watch I have set on the address variable during debugging shows it as null. So what gives?

    Here's the basic layout of my WF service. Everything is enveloped in a DoWhile activity. Inside that is a Pick activity with two Pick branches. The first branch is for subscribing: it has a Receive/SendReply activity that assigns the string passed by the client to the WF address variable. The second branch handles unsubscribing. Its trigger is the Receive activity, and the client address is again passed in. From there it goes into a Sequence, starting with an If, which checks whether the unsubscribe address equals the address already subscribed. If it does, it sets the address to String.Empty and sends a success message back to the client.

    Why would a variable that's scoped to the enveloping DoWhile activity be implicitly reset to null? I'm trying to get this to work so I can implement multiple client subscribers from there and work on triggers that invoke callbacks to multiple clients.

    Edit: I set a breakpoint at the DoWhile level, and my variable is null once Unsubscribe() is called. When Subscribe() is invoked, the watch shows a value in the variable all the way through, until I Unsubscribe() with a client. Should I be using a While activity instead?

  • In Java, send commands to another command-line program

    - by bradvido
    I am using Java on Windows XP and want to be able to send commands to another program such as telnet. I do not want to simply execute another program; I want to execute it and then send it a sequence of commands once it's running. Here's my code of what I want to do, but it does not work. (If you uncomment the commented lines, changing the command to "cmd", it works as expected. Please help.)

        try {
            Runtime rt = Runtime.getRuntime();
            String command = "telnet";
            //command = "cmd";
            Process pr = rt.exec(command);
            BufferedReader processOutput =
                new BufferedReader(new InputStreamReader(pr.getInputStream()));
            BufferedWriter processInput =
                new BufferedWriter(new OutputStreamWriter(pr.getOutputStream()));
            String commandToSend = "open localhost\n";
            //commandToSend = "dir\n" + "exit\n";
            processInput.write(commandToSend);
            processInput.flush();
            int lineCounter = 0;
            while (true) {
                String line = processOutput.readLine();
                if (line == null) break;
                System.out.println(++lineCounter + ": " + line);
            }
            processInput.close();
            processOutput.close();
            pr.waitFor();
        } catch (Exception x) {
            x.printStackTrace();
        }

  • Exporting/Importing events to Outlook 2007 calendar - problem

    - by iandisme
    I work on a web app that involves scheduling. A user can view his schedule and then download a meeting request file for a particular event. In Outlook 2003, simply opening this event would cause a meeting request to pop up, and the user could accept it, which would either add or update the event in their calendar. However, in Outlook 2007 the meeting request's Accept function is disabled, and the reason given is that the user is the organizer and can't accept his own event request. The ICS file clearly shows that this is not the case. Has anyone experienced this same problem? Does anyone know how to work around it? (Using Outlook's import function is scarcely an option because it causes duplicate events to be created; the import function doesn't seem to care that the events have the same UID.) Here is the ICS file:

        BEGIN:VCALENDAR
        PRODID:#{my app}
        VERSION:2.0
        CALSCALE:GREGORIAN
        METHOD:REQUEST
        BEGIN:VEVENT
        DTSTAMP:20100324T150236Z
        UID:eeb639a1-f8e5-4eab-ab3c-232ad91364c6
        SEQUENCE:2
        ORGANIZER:#{myApp}.#{myDomain}.com
        DESCRIPTION:
        DTSTART;TZID=Europe/London:20110620T120010
        DTEND;TZID=Europe/London:20110620T133010
        SUMMARY:BREAK:Breakfast
        LOCATION:Room 101
        END:VEVENT
        BEGIN:VTIMEZONE
        //Timezone info edited for brevity
        END:VTIMEZONE
        END:VCALENDAR

  • Double Free inside of a destructor upon adding to a vector

    - by Shawn B
    Hey, I am working on a drum machine and am having problems with vectors. Each sequence has a list of samples, and the samples are ordered in a vector. However, when a sample is push_back'd onto the vector, the sample's destructor is called, resulting in a double free error. Here is the sample creation code:

        class XSample {
        public:
            Uint8 Repeat;
            Uint8 PlayCount;
            Uint16 Beats;
            Uint16 *Beat;
            Uint16 BeatsPerMinute;
            XSample(Uint16 NewBeats, Uint16 NewBPM, Uint8 NewRepeat);
            ~XSample();
            void GenerateSample();
            void PlaySample();
        };

        XSample::XSample(Uint16 NewBeats, Uint16 NewBPM, Uint8 NewRepeat) {
            Beats = NewBeats;
            BeatsPerMinute = NewBPM;
            Repeat = NewRepeat - 1;
            PlayCount = 0;
            printf("XSample Construction\n");
            Beat = new Uint16[Beats];
        }

        XSample::~XSample() {
            printf("XSample Destruction\n");
            delete [] Beat;
        }

    And the 'Dynamo' code that creates each sample in the vector:

        class XDynamo {
        public:
            std::vector<XSample> Samples;
            void CreateSample(Uint16 NewBeats, Uint16 NewBPM, Uint8 NewRepeat);
        };

        void XDynamo::CreateSample(Uint16 NewBeats, Uint16 NewBPM, Uint8 NewRepeat) {
            Samples.push_back(XSample(NewBeats, NewBPM, NewRepeat));
        }

    Here is main():

        int main() {
            XDynamo Dynamo;
            Dynamo.CreateSample(4, 120, 2);
            Dynamo.CreateSample(8, 240, 1);
            return 0;
        }

    And this is what happens when the program is run:

        Starting program: /home/shawn/dynamo2/dynamo
        [Thread debugging using libthread_db enabled]
        XSample Construction
        XSample Destruction
        XSample Construction
        XSample Destruction
        *** glibc detected *** /home/shawn/dynamo2/dynamo: double free or corruption (fasttop): 0x0804d008 ***

    However, when the delete [] is removed from the destructor, the program runs perfectly. What is causing this? Any help is greatly appreciated.

  • Is it possible to store controls(Panel) as object, serialize it and store it as a file?

    - by ikky
    The topic says it all. Using Compact Framework C#, I'm tiling (order/sequence is important) some images that I download from a url into a Panel (each image is a PictureBox). This can be a huge process and may take some time, therefore I only want the user to download the images and tile them once. So the next time the user uses the Tile Application, the Panel that was created the first time is already stored in a file and is loaded from that file. So what I want is a method to store a Panel as a file. Is this possible, or do you think I should do it another way? I've tried something like this:

        BinaryWriter panelStorage = new BinaryWriter(
            new FileStream("imagePanel.panel", FileMode.OpenOrCreate, FileAccess.Write, FileShare.None));
        Byte[] bImageObject = new Byte[20000];
        bImageObject = (byte[])(object)this.imagePanel;
        panelStorage.Write(bImageObject);
        panelStorage.Close();

    But the casting was not very legal :P ("InvalidCastException"). Can anyone help me with this problem? Thank you in advance!

  • how to queue function behind ajax request

    - by user1052335
    What I'm asking is conceptual rather than why my code doesn't work. I'm trying to make a web app that displays content fetched from a database in a sequence, one item at a time, and then waits for a response from the user before continuing to the next one. I'm working with javascript on the client side, with the jQuery library, and php on the server side. In theory, while waiting for the user's input there's time to fetch the next piece of information from the server with an AJAX request and have it ready by the time he clicks the button, but there's also a chance that the AJAX request hasn't completed when he clicks the button. So I need something like this pseudocode:

        display current information
        fetch next data point from the server in the background
        onUserInput {
            if (ajax request complete) {
                present the information fetched in this request
            } else if (ajax request not complete) {
                wait for ajax request to complete
                present information to user
            }
        }

    My question is this: how does one implement the "else if (ajax request not complete) { wait for ajax request to complete" part? I'm currently using jQuery for my AJAX needs. I'm somewhat new to working with AJAX, and I did search around, but I didn't find anything that seemed on point. I don't know what tools I should use for this. Some kind of queue, maybe? I don't need to be spoon-fed; I just need to know how this is done and with what tools, or whether my desired outcome would be accomplished some other way entirely. Thanks.

  • First-chance exception at std::set destructor

    - by bartek
    Hi, I have a strange exception in my class destructor:

        First-chance exception reading location 0x00000

        class DispLst {
            // For fast instance existence tests
            std::set< std::string > instances;
            [...]

        DispLst::~DispLst() {
            this->clean();
            DeleteCriticalSection( &instancesGuard );
        }   <---- here the instances destructor raises the exception

    Call stack (template arguments elided):

        X.exe!std::_Tree<std::string,...>::begin() Line 556 + 0xc bytes C++
        X.exe!std::_Tree<std::string,...>::_Tidy() Line 1421 + 0x64 bytes C++
        X.exe!std::_Tree<std::string,...>::~_Tree() Line 541 C++
        X.exe!std::set<std::string,...>::~set() + 0x2b bytes C++
        X.exe!DispLst::~DispLst() Line 82 + 0xf bytes C++

    The exact place of the error in xtree:

        void _Tidy()
        {   // free all storage
            erase(begin(), end());    <------------------- HERE
            this->_Alptr.destroy(&_Left(_Myhead));
            this->_Alptr.destroy(&_Parent(_Myhead));
            this->_Alptr.destroy(&_Right(_Myhead));
            this->_Alnod.deallocate(_Myhead, 1);
            _Myhead = 0, _Mysize = 0;
        }

        iterator begin()
        {   // return iterator for beginning of mutable sequence
            return (_TREE_ITERATOR(_Lmost()));    <---------------- HERE
        }

    What is going on? I'm using Visual Studio 2008.

  • mercurial: how to synchronize mq patches from a master repo as mq patches to a set of clone repos

    - by dim
    I have to run a dozen different build tests on a code base maintained in a mercurial repository. I don't want to run these tests serially on the same repository, because they modify a set of common files, and I want to run them in parallel on different machines. Also, after all the tests have run, I want access to the latest test results in those test work areas. Currently I clone the master repository a dozen times and run a different test in each clone. Before each test execution I do a pull/update/purge preparation sequence in order to start the test from the latest clean state. That works well for me. I also prepare new changes using the mq extension, which I would test on all the clones as above before committing them. For testing candidate mq patches, I want some way to deploy/synchronize them so they are available in the test clones, apply the ones ready for testing using a guard, and then run the test. Has anybody done this kind of synchronization before? What's the simplest way to do it? Do I need versioned mq patches for that?

  • PLPGSQL : How to return a record from function executed by INSERT/UPDATE rule?

    - by seas
    I have the following schema in my database:

        create sequence data_sequence;

        create table data_table (
            id integer primary key,
            field varchar(100)
        );

        create view data_view as
            select id, field from data_table;

        create function data_insert(_new data_view) returns data_view as $$
        declare
            _id integer;
            _result data_view%rowtype;
        begin
            _id := nextval('data_sequence');
            insert into data_table(id, field) values(_id, _new.field);
            select * into _result from data_view where id = _id;
            return _result;
        end;
        $$ language plpgsql;

        create rule insert as on insert to data_view do instead select data_insert(new);

    Then I type in psql:

        insert into data_view(field) values('abc');

    I would like to see something like:

         id | field
        ----+-------
          1 | abc

    Instead I see:

         data_insert
        -------------
         (1,"abc")

    Is it possible to fix this somehow? Thanks for any ideas. The ultimate idea is to use this in other functions, so that I can obtain the id of a just-inserted record without selecting for it from scratch. Something like:

        insert into data_view(field) values('abc') returning id into my_variable;

    would be nice, but doesn't work, with the error:

        ERROR: cannot perform INSERT RETURNING on relation "data_view"
        HINT: You need an unconditional ON INSERT DO INSTEAD rule with a RETURNING clause.

    I don't really understand that HINT. I use PostgreSQL 8.4.

  • SQL Server Clustered Index: (Physical) Data Page Order

    - by scherand
    I am struggling to understand what a clustered index in SQL Server 2005 is. I read the MSDN article Clustered Index Structures (among other things), but I am still unsure whether I understand it correctly.

    The (main) question is: what happens if I insert a row (with a "low" key) into a table with a clustered index? The above-mentioned MSDN article states:

        The pages in the data chain and the rows in them are ordered on the value of the clustered index key.

    And Using Clustered Indexes, for example, states:

        For example, if a record is added to the table that is close to the beginning of the sequentially ordered list, any records in the table after that record will need to shift to allow the record to be inserted.

    Does this mean that if I insert a row with a very "low" key into a table that already contains a gazillion rows, literally all rows are physically shifted on disk? I cannot believe that. That would take ages, no? Or is it rather (as I suspect) that there are two scenarios, depending on how "full" the first data page is. A) If the page has enough free space to accommodate the record, it is placed into the existing data page, and data might be (physically) reordered within that page. B) If the page does not have enough free space for the record, a new data page is created (anywhere on the disk!) and "linked" into the front of the leaf level of the B-tree?

    This would then mean that the "physical order" of the data is restricted to the page level (i.e. within a data page) but not to pages residing on consecutive blocks of the physical hard drive; the data pages are just linked together in the correct order. Or, formulated another way: if SQL Server needs to read the first N rows of a table that has a clustered index, it can read the data pages sequentially (following the links), but these pages are not (necessarily) in sequence block-wise on disk (so the disk head has to move "randomly"). How close am I? :)

  • Replicating SQL's 'Join' in Python

    - by Daniel Mathews
    I'm in the process of trying to switch from R to Python (mainly over issues around general flexibility). With Numpy, matplotlib and ipython, I've been able to cover all my use cases save for merging 'datasets'. I would like to simulate SQL's join by clause (inner, outer, full) purely in python. R handles this with the 'merge' function. I've tried numpy.lib.recfunctions' join_by, but it has critical issues with duplicates along the 'key':

        join_by(key, r1, r2, jointype='inner', r1postfix='1', r2postfix='2',
                defaults=None, usemask=True, asrecarray=False)

        Join arrays r1 and r2 on key key. The key should be either a string or a
        sequence of strings corresponding to the fields used to join the array.
        An exception is raised if the key field cannot be found in the two input
        arrays. Neither r1 nor r2 should have any duplicates along key: the
        presence of duplicates will make the output quite unreliable. Note that
        duplicates are not looked for by the algorithm.

        source: http://presbrey.mit.edu:1234/numpy.lib.recfunctions.html

    Any pointers or help will be most appreciated!
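
    For what it's worth, the usual way around join_by's no-duplicates restriction is a hand-rolled hash join: index one table by key, then stream the other table past it. A minimal sketch over plain lists of dicts (the helper name and sample data are illustrative, not from numpy):

        from collections import defaultdict

        def inner_join(left, right, key):
            """Inner join two lists of dicts on key; duplicate keys are fine."""
            index = defaultdict(list)
            for row in right:
                index[row[key]].append(row)      # bucket right-hand rows by key
            joined = []
            for row in left:
                for match in index.get(row[key], []):
                    merged = dict(row)
                    merged.update(match)         # right-hand fields win on name clashes
                    joined.append(merged)
            return joined

        orders = [{"cust": 1, "item": "pen"}, {"cust": 1, "item": "ink"}]
        names  = [{"cust": 1, "name": "Ann"}, {"cust": 2, "name": "Bob"}]
        print(inner_join(orders, names, "cust"))
        # [{'cust': 1, 'item': 'pen', 'name': 'Ann'},
        #  {'cust': 1, 'item': 'ink', 'name': 'Ann'}]

    A left outer join is the same loop with a default row emitted whenever the bucket is empty; a full outer join additionally emits unmatched right-hand rows at the end.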

  • Small-o(n^2) implementation of Polynomial Multiplication

    - by AlanTuring
    I'm having a little trouble with this problem, which is listed in the back of my book. I'm currently in the middle of test prep, but I can't seem to locate anything regarding it in the book. Anyone got an idea?

        A real polynomial of degree n is a function of the form f(x) = a_n x^n + ... + a_1 x + a_0, where a_n, ..., a_1, a_0 are real numbers. In computational situations, such a polynomial is represented by the sequence of its coefficients (a_0, a_1, ..., a_n). Assuming that any two real numbers can be added/multiplied in O(1) time, design an o(n^2)-time algorithm to compute, given two real polynomials f(x) and g(x) both of degree n, the product h(x) = f(x)g(x). Your algorithm should **not** be based on the Fast Fourier Transform (FFT) technique.

    Please note it needs to be little-o(n^2), which means its complexity must be sub-quadratic. The obvious solution I keep finding is indeed the FFT, but of course I can't use that. There is another method I found, called convolution, where you take polynomial A to be a signal and polynomial B to be a filter: A passed through B yields a shifted signal that has been "smoothed" by A, and the result is A*B. This is supposed to work in O(n log n) time. Of course I am completely unsure of the implementation. If anyone has any ideas of how to achieve a little-o(n^2) implementation, please do share. Thanks.
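
    One standard non-FFT answer is Karatsuba's divide-and-conquer trick: split each polynomial into a low and a high half, recurse on three half-size products (low·low, high·high, and (low+high)·(low+high)), and recover the middle coefficients by subtraction. The recurrence T(n) = 3T(n/2) + O(n) gives O(n^log2(3)) ≈ O(n^1.585), which is little-o(n^2). A sketch in Python, assuming coefficient lists where index i holds the x^i coefficient (my own illustration, not from the book):

        from itertools import zip_longest

        def poly_mul(f, g):
            """Karatsuba product of two coefficient lists."""
            n = max(len(f), len(g))
            f = f + [0] * (n - len(f))        # pad to a common length
            g = g + [0] * (n - len(g))
            if n == 1:
                return [f[0] * g[0]]
            m = (n + 1) // 2                  # split: f = f0 + x^m * f1
            f0, f1 = f[:m], f[m:]
            g0, g1 = g[:m], g[m:]
            p0 = poly_mul(f0, g0)
            p1 = poly_mul(f1, g1)
            fs = [a + b for a, b in zip_longest(f0, f1, fillvalue=0)]
            gs = [a + b for a, b in zip_longest(g0, g1, fillvalue=0)]
            pm = poly_mul(fs, gs)             # (f0+f1)(g0+g1) = p0 + middle + p1
            result = [0] * (2 * n - 1)
            for i, c in enumerate(p0):
                result[i] += c
                result[i + m] -= c            # remove p0 from the middle band
            for i, c in enumerate(p1):
                result[i + 2 * m] += c
                result[i + m] -= c            # remove p1 from the middle band
            for i, c in enumerate(pm):
                result[i + m] += c
            return result

        print(poly_mul([1, 2], [3, 4]))       # (1+2x)(3+4x) -> [3, 10, 8]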

  • SQLAlchemy DetachedInstanceError with regular attribute (not a relation)

    - by haridsv
    I just started using SQLAlchemy and get a DetachedInstanceError, and can't find much information on it anywhere. I am using the instance outside a session, so it is natural that SQLAlchemy is unable to load any relations if they are not already loaded. However, the attribute I am accessing is not a relation; in fact this object has no relations at all. I found solutions such as eager loading, but I can't apply them because this is not a relation. I even tried "touching" this attribute before closing the session, but it still doesn't prevent the exception. What could be causing this exception for a non-relational property, even after it has been successfully accessed once before? Any help in debugging this issue is appreciated. I will meanwhile try to get a reproducible stand-alone scenario and update here.

    Update: This is the actual exception message, with a few stack frames:

        File "/home/hari/bin/lib/python2.6/site-packages/SQLAlchemy-0.6.1-py2.6.egg/sqlalchemy/orm/attributes.py", line 159, in __get__
            return self.impl.get(instance_state(instance), instance_dict(instance))
        File "/home/hari/bin/lib/python2.6/site-packages/SQLAlchemy-0.6.1-py2.6.egg/sqlalchemy/orm/attributes.py", line 377, in get
            value = callable_(passive=passive)
        File "/home/hari/bin/lib/python2.6/site-packages/SQLAlchemy-0.6.1-py2.6.egg/sqlalchemy/orm/state.py", line 280, in __call__
            self.manager.deferred_scalar_loader(self, toload)
        File "/home/hari/bin/lib/python2.6/site-packages/SQLAlchemy-0.6.1-py2.6.egg/sqlalchemy/orm/mapper.py", line 2323, in _load_scalar_attributes
            (state_str(state)))
        DetachedInstanceError: Instance <ReportingJob at 0xa41cd8c> is not bound to a Session; attribute refresh operation cannot proceed

    The partial model looks like this:

        metadata = MetaData()
        ModelBase = declarative_base(metadata=metadata)

        class ReportingJob(ModelBase):
            __tablename__ = 'reporting_job'
            job_id    = Column(BigInteger, Sequence('job_id_sequence'), primary_key=True)
            client_id = Column(BigInteger, nullable=True)

    And the field client_id is what causes the exception, with a usage like the one below:

        jobs = session \
            .query(ReportingJob) \
            .filter(ReportingJob.job_id == job_id) \
            .all()
        if jobs:
            # FIXME(Hari): Workaround for the attribute getting lazy-loaded.
            jobs[0].client_id
            return jobs[0]

    This is what triggers the exception later, out of the session scope:

        msg = msg + ", client_id: %s" % job.client_id
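
    One frequent cause of this pattern (a guess, not confirmed from the post alone) is that committing or closing the session expires all loaded attributes, so the next attribute access on the now-detached instance tries to refresh from the database and raises DetachedInstanceError, even for plain columns. Two hedged workarounds, sketched against the ReportingJob model above (engine and job_id are assumed to exist):

        from sqlalchemy.orm import sessionmaker

        # Option 1: keep loaded attribute values alive across commit.
        Session = sessionmaker(bind=engine, expire_on_commit=False)

        # Option 2: detach the instance explicitly while its state is loaded.
        session = Session()
        job = session.query(ReportingJob).filter(
            ReportingJob.job_id == job_id).first()
        if job is not None:
            session.expunge(job)   # job keeps its already-loaded client_id
        session.close()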

  • How to properly implement cheat codes?

    - by Axarydax
    Hi, what would be the best way to implement cheat codes in general? I have a WinForms application in mind, where a cheat code would unlock an easter egg, but the implementation details are not relevant. The best approach that comes to my mind is to keep an index for each code. Let's consider the famous DOOM codes, IDDQD and IDKFA, in a fictional C# app:

        string[] CheatCodes = { "IDDQD", "IDKFA" };
        int[] CheatIndexes = { 0, 0 };
        const int CHEAT_COUNT = 2;

        void KeyPress(char c)
        {
            for (int i = 0; i < CHEAT_COUNT; i++) // for each cheat code
            {
                if (CheatCodes[i][CheatIndexes[i]] == c)
                {
                    // we have hit the next key in the sequence
                    if (++CheatIndexes[i] == CheatCodes[i].Length) // are we at the end?
                    {
                        // do cheat work
                        MessageBox.Show(CheatCodes[i]);
                        // reset the cheat index so we can enter it next time
                        CheatIndexes[i] = 0;
                    }
                }
                else // mistyped, reset the cheat index
                    CheatIndexes[i] = 0;
            }
        }

    Is this the right way to do it?

  • How do I process a nested list?

    - by ddbeck
    Suppose I have a bulleted list like this:

        * list item 1
        * list item 2 (a parent)
        ** list item 3 (a child of list item 2)
        ** list item 4 (a child of list item 2 as well)
        *** list item 5 (a child of list item 4 and a grand-child of list item 2)
        * list item 6

    I'd like to parse that into a nested list, or some other data structure which makes the parent-child relationship between elements explicit (rather than depending on their contents and relative position). For example, here's a list of tuples where each parent carries a list of its children (and so forth):

        [('list item 1',),
         ('list item 2', [('list item 3',),
                          ('list item 4', [('list item 5',)])]),
         ('list item 6',)]

    I've attempted to do this with plain Python and some experimentation with Pyparsing, but I'm not making progress. I'm left with two major questions:

    1. What strategy do I need to employ to make this work? I know recursion is part of the solution, but I'm having a hard time making the connection between this and, say, a Fibonacci sequence.

    2. I'm certain I'm not the first person to have done this, but I don't know the terminology of the problem to make fruitful searches for more information on this topic. What problems are related to this, so that I can learn more about solving these kinds of problems in general?
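
    To illustrate the usual strategy: the recursion can be made explicit as a stack of currently open items, pushed and popped as the count of leading stars rises and falls. A minimal sketch; the output shape here ([text, children] lists rather than tuples) is my own choice, not the format above:

        import pprint

        def parse_outline(lines):
            """Parse '*'-prefixed lines into a nested [text, children] structure."""
            root = ["", []]              # sentinel node at depth 0
            stack = [root]               # stack[d] is the open node at depth d
            for line in lines:
                stars = len(line) - len(line.lstrip("*"))
                node = [line[stars:].strip(), []]
                del stack[stars:]            # close deeper and sibling nodes
                stack[-1][1].append(node)    # attach under nearest shallower node
                stack.append(node)
            return root[1]

        outline = [
            "* list item 1",
            "* list item 2",
            "** list item 3",
            "** list item 4",
            "*** list item 5",
            "* list item 6",
        ]
        pprint.pprint(parse_outline(outline))

    As for terminology, searching for "parsing indented text into a tree" and "recursive descent parsing" turns up the related literature.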

  • Inconsistency in modified/created/accessed time on mac

    - by Seth Johnson
    I'm having trouble using os.utime to correctly set the modification time on the Mac (Mac OS X 10.6.2, running Python 2.6.1 from /usr/bin/python). It's not consistent with the touch utility, and it's not consistent with the properties displayed in the Finder's "Get Info" window. Consider the following command sequence. The 'created' and 'modified' times in the plain text refer to the "Get Info" window attributes. As a reminder, os.utime takes the arguments (filename, (atime, mtime)).

        >>> import os
        >>> open('tempfile', 'w').close()

    'created' and 'modified' are both the current time.

        >>> os.utime('tempfile', (1000000000, 1500000000))

    'created' is the current time, 'modified' is July 13, 2017.

        >>> os.utime('tempfile', (1000000000, 1000000000))

    'created' and 'modified' are both September 8, 2001.

        >>> os.path.getmtime('tempfile')
        1000000000.0
        >>> os.path.getctime('tempfile')
        1269021939.0
        >>> os.path.getatime('tempfile')
        1269021951.0

    ...but os.path.get?time and os.stat don't reflect it.

        >>> os.utime('tempfile', (1500000000, 1000000000))

    'created' and 'modified' are still both September 8, 2001.

        >>> os.utime('tempfile', (1500000000, 1500000000))

    'created' is September 8, 2001, 'modified' is July 13, 2017.

    I'm not sure if this is a Python problem or a Mac stat problem. When I exit the Python shell and run

        touch -a -t 200011221234 tempfile

    neither the modification nor the creation time is changed, as expected. Then I run

        touch -m -t 200011221234 tempfile

    and both the 'created' and 'modified' times are changed. Does anyone have any idea what's going on? How do I change the modification and creation times consistently on the Mac? (Yes, I am aware that on Unixy systems there is no "creation time.")
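
    One way to see what Finder's 'created' field is actually tracking is to read the HFS+ birth time directly: on Mac OS X the stat result exposes an st_birthtime field, and comparing it against st_mtime after each os.utime call should show which value "Get Info" follows (that Finder reads the birth time is my assumption here, worth verifying):

        import os

        st = os.stat('tempfile')
        # st_birthtime is Mac/BSD-specific and absent on Linux, hence getattr.
        print("birth time:", getattr(st, 'st_birthtime', None))
        print("mtime:     ", st.st_mtime)
        print("atime:     ", st.st_atime)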

  • Top 10 collection completion - a monster in-query formula in MySQL?

    - by Andrew Heath
    I've got the following tables:

        User Basic Data (unique):           [userid] [name] [etc]
        User Collection (one to one):       [userid] [game]
        User Recorded Plays (many to many): [userid] [game] [scenario] [etc]
        Game Basic Data (unique):           [game] [total_scenarios]

    I would like to output a table that shows the collection play completion percentage for the top 10 users, in descending order of percentage:

        [userid] [collection_completion]
        3        95%
        1        81%
        24       68%
        etc      etc

    In my mind, the calculation sequence for one user is:

    1. Grab the user's total owned scenarios from User Collection joined with Game Basic Data and COUNT(gbd.total_scenarios).
    2. Grab all recorded plays with COUNT(DISTINCT scenario) for that user.
    3. Divide all recorded plays by the total owned scenarios.

    So that's 2 queries and a little PHP massage at the end. For a list of users sorted by completion percentage, things get more complicated. I figure I could grab all users' collection totals in one query and all users' recorded plays in another, then do the calcs and sort the final array in PHP, but it seems like overkill to potentially be doing all that for 1000+ users when I only ever want the top 10. Is there a wicked monster query in MySQL that could do all that and LIMIT 10? Or is sticking with PHP handling the bulk of the work the way to go in this case?

  • Dropping all user tables/sequences in Oracle

    - by Ambience
    As part of our build process and evolving database, I'm trying to create a script which will remove all of the tables and sequences for a user. I don't want to recreate the user, as this would require more permissions than allowed. My script creates a procedure to drop the tables/sequences, executes the procedure, and then drops the procedure. I'm executing the file from sqlplus.

    drop.sql:

        create or replace procedure drop_all_cdi_tables is
            cur integer;
        begin
            cur := dbms_sql.OPEN_CURSOR();
            for t in (select table_name from user_tables) loop
                execute immediate 'drop table ' || t.table_name || ' cascade constraints';
            end loop;
            dbms_sql.close_cursor(cur);

            cur := dbms_sql.OPEN_CURSOR();
            for t in (select sequence_name from user_sequences) loop
                execute immediate 'drop sequence ' || t.sequence_name;
            end loop;
            dbms_sql.close_cursor(cur);
        end;
        /
        execute drop_all_cdi_tables;
        /
        drop procedure drop_all_cdi_tables;
        /

    Unfortunately, dropping the procedure causes a problem. There seems to be a race condition, and the procedure is dropped before it executes. E.g.:

        SQL*Plus: Release 11.1.0.7.0 - Production on Tue Mar 30 18:45:42 2010
        Copyright (c) 1982, 2008, Oracle. All rights reserved.

        Connected to:
        Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
        With the Partitioning, OLAP, Data Mining and Real Application Testing options

        Procedure created.

        PL/SQL procedure successfully completed.

        Procedure created.

        Procedure dropped.

        drop procedure drop_all_user_tables
        *
        ERROR at line 1:
        ORA-04043: object DROP_ALL_USER_TABLES does not exist

        SQL> Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
        With the Partitioning, OLAP, Data Mining and Real Application Testing options

    Any ideas on how to get this working?

  • Poor LLVM JIT performance

    - by Paul J. Lucas
    I have a legacy C++ application that constructs a tree of C++ objects. I want to use LLVM to call the class constructors to create said tree. The generated LLVM code is fairly straightforward and looks like repeated sequences of:

        ; ...
        %11 = getelementptr [11 x i8*]* %Value_array1, i64 0, i64 1
        %12 = call i8* @T_string_M_new_A_2Pv(i8* %heap, i8* getelementptr inbounds ([10 x i8]* @0, i64 0, i64 0))
        %13 = call i8* @T_QueryLoc_M_new_A_2Pv4i(i8* %heap, i8* %12, i32 1, i32 1, i32 4, i32 5)
        %14 = call i8* @T_GlobalEnvironment_M_getItemFactory_A_Pv(i8* %heap)
        %15 = call i8* @T_xs_integer_M_new_A_Pvl(i8* %heap, i64 2)
        %16 = call i8* @T_ItemFactory_M_createInteger_A_3Pv(i8* %heap, i8* %14, i8* %15)
        %17 = call i8* @T_SingletonIterator_M_new_A_4Pv(i8* %heap, i8* %2, i8* %13, i8* %16)
        store i8* %17, i8** %11, align 8
        ; ...

    where each T_ function is a C "thunk" that calls some C++ constructor, e.g.:

        void* T_string_M_new_A_2Pv( void *v_value ) {
            string *const value = static_cast<string*>( v_value );
            return new string( value );
        }

    The thunks are necessary, of course, because LLVM knows nothing about C++. The T_ functions are added to the ExecutionEngine in use via ExecutionEngine::addGlobalMapping(). When this code is JIT'd, the performance of the JIT'ing itself is very poor. I've generated a call graph using kcachegrind. I don't understand all the numbers (and this PDF seems not to include commas where it should), but if you look at the left fork, the bottom two ovals, Schedule... is called 16K times and setHeightToAtLeas... is called 37K times. On the right fork, RAGreed... is called 35K times. Those are far too many calls to anything for what's mostly a simple sequence of call LLVM instructions. Something seems horribly wrong. Any ideas on how to improve the performance of the JIT'ing?
