Search Results

Search found 167 results on 7 pages for 'albert iordache'.

Page 4/7 | < Previous Page | 1 2 3 4 5 6 7  | Next Page >

  • Spring 3 pet clinic example uses ${owner.new} in the JSTL EL; where can I read about .new?

    - by Albert
    The Spring 3 pet clinic example uses ${owner.new} in the JSTL EL. Where can I find out more about where .new comes from and which spec it is part of? I've seen the empty and not empty operators/reserved words, but not .new until now. Here is the line I'm questioning ("New Owner:"), located in the ownerForm.jsp file in the Spring 3 pet clinic sample application.
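
    For context, a minimal sketch (my own illustration, not from the question) of why ${owner.new} works at all: the EL resolves the property name "new" through the JavaBeans convention, i.e. it calls a getter named isNew() (or getNew()) on the owner bean, so .new is not an EL keyword but an ordinary bean property.

        // Hypothetical bean: ${owner.new} in a JSP evaluates to the result of isNew().
        public class Owner {
            private Long id;

            public Long getId() { return id; }
            public void setId(Long id) { this.id = id; }

            // A boolean property named "new", per the JavaBeans isXxx() convention.
            public boolean isNew() {
                return this.id == null;
            }
        }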

    Read the article

  • Enable session state in SharePoint 2010

    - by Albert D. Kallal
    I set up a test box with Server 2008 (Standard edition, not R2 and not the Hyper-V edition) and then installed SharePoint 2010. I was amazed at how easily the whole setup went (the prerequisites setup on the SharePoint disc made this process oh so easy – a great install system). Really, this was just so easy. This test box is being used for testing Access web services. I am able to publish Access applications to this test server, and they publish and run just fine on the SharePoint site through a web browser. However, the only thing that does not work is when I launch an Access report. The error message I get back is: "This report failed to load because session state is not turned on." Here is a screen shot: I can't seem to find the setting anywhere to turn session state on. Any hints or links on how to enable session state in SharePoint 2010 would be most appreciated.
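
    If it helps, SharePoint 2010 exposes this through a PowerShell cmdlet; a minimal sketch (run from the SharePoint 2010 Management Shell; -DefaultProvision accepts the farm's default session-state database settings, so adjust if a specific database is required):

        # Provisions and enables the ASP.NET session state service application
        # that Access Services reports depend on.
        Enable-SPSessionStateService -DefaultProvision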

    Read the article

  • MacOSX: remove write-protect flag from file in Terminal

    - by Albert
    Hi, I have a file on a FAT32 volume which is shown as write-protected in Finder (so I cannot move it). Removing that write-protected flag in the information dialog works just fine. However, I have many more such files, so I want to do it via Terminal. I already tried 'chmod +w', but that didn't work. 'ls -la' shows that the permissions are already fine ("-rwxrwxrwx 1 az az", where az is my user account). Then I thought this might be stored in some xattr property, but 'xattr -l' didn't give me any entry. Then I thought it might be some ACL setting (whereby I assumed ACLs would be stored as xattrs, but let's try it anyway), and some Google searching returned something with 'chmod -a' or 'chmod -i'. All these tries only give me "chmod: No ACL currently associated with file" or "chmod: Failed to set ACL on file...: Operation not permitted". But I definitely have no write access to the file, because I cannot move it or make any other change to it (in Terminal). Removing the write-access flag in Finder solves that.
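
    Not from the original post, but a likely candidate worth checking: the "Locked" checkbox in Finder's Get Info dialog corresponds to the BSD "user immutable" (uchg) file flag, which neither chmod nor xattr touches. A minimal Terminal sketch (the path is a placeholder):

        # List BSD file flags next to the permissions; a locked file shows "uchg".
        ls -lO /Volumes/FAT32VOLUME/somefile

        # Clear the user-immutable (locked) flag; add -R to recurse over many files.
        chflags nouchg /Volumes/FAT32VOLUME/somefile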

    Read the article

  • Scala: "using" keyword

    - by Albert Cenkier
    I've defined a 'using' construct as follows:

        def using[A, B <: { def close(): Unit }](closeable: B)(f: B => A): A =
          try { f(closeable) } finally { closeable.close() }

    I can use it like this:

        using(new PrintWriter("sample.txt")) { out => out.println("hello world!") }

    Now I'm curious how to define 'using' so that it takes any number of parameters and lets me access them separately:

        using(new BufferedReader(new FileReader("in.txt")), new PrintWriter("out.txt")) { (in, out) =>
          out.println(in.readLine)
        }
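
    One possible direction (my own sketch, not from the post): instead of true variable arity, overload using for two resources and close both, even when one close throws:

        import java.io.{BufferedReader, FileReader, PrintWriter}

        // Overload for two closeable resources; f receives both, and both are
        // closed afterwards (the second close runs even if the first one throws).
        def using[A, B <: { def close(): Unit }, C <: { def close(): Unit }](b: B, c: C)(f: (B, C) => A): A =
          try f(b, c)
          finally {
            try b.close()
            finally c.close()
          }

        // Usage mirrors the two-argument call from the question:
        using(new BufferedReader(new FileReader("in.txt")), new PrintWriter("out.txt")) { (in, out) =>
          out.println(in.readLine())
        }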

    Read the article

  • Word 2007 - Pasted Text Not Spellchecked??

    - by Albert
    My Word 2007 spell-check seems to work fine, except that when I paste in text from somewhere else, it won't detect any misspellings in that pasted text...no matter what I try. If it makes any difference, when I paste in text, the formatting is preserved (size, color, etc.). Any ideas on what to try?

    Read the article

  • Debugging stored procedures, without using SSMS 2008 Debugger, or the Visual Studio debugger (output

    - by Albert
    I have a SQL Server 2005 database with some Stored Procedures (SP) that I would like to debug...essentially I would just like to check variable values at certain points throughout the SP execution. I have SSMS 2008, but when I try to use the debugger, I get an error that it can't debug SQL Server 2005 databases. And I can't use the Visual Studio debugger (by stepping into the SP via Server Explorer) because Remote Debugging is blocked by our firewall, and I'm rightfully not allowed to touch the firewall. So my question is how can I check variable values at certain points in the SP execution? Is there some way to output those values somewhere, perhaps along with some text?
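
    A common low-tech fallback (my sketch; the table, step labels, and variable names are invented for illustration): print or log values at the interesting points inside the procedure, either to the Messages tab or to a scratch table you can query afterwards.

        -- Inside the stored procedure, at the point you want to inspect:
        PRINT 'After lookup, @CustomerId = ' + CAST(@CustomerId AS varchar(20));

        -- Severity 0 plus WITH NOWAIT flushes the message immediately,
        -- which helps inside long-running loops.
        RAISERROR('Processed row %d', 0, 1, @RowNum) WITH NOWAIT;

        -- Or persist values so they can be inspected after the run:
        INSERT INTO dbo.DebugLog (LoggedAt, Step, Value)
        VALUES (GETDATE(), 'after lookup', CAST(@CustomerId AS varchar(50)));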

    Read the article

  • Duplicate System.Web.UI.AsyncPostBackTrigger Controls keep getting inserted automatically, causing P

    - by Albert
    I have an update panel with a number of [asp:AsyncPostBackTrigger...] controls, and everything was working fine. But now something keeps inserting duplicate async triggers, and instead of simply being [asp:AsyncPostBackTrigger...] controls they're [System.Web.UI.AsyncPostBackTrigger...] controls, and I get parser errors as a result. So I delete the duplicate triggers, and they get re-inserted within a few minutes, seemingly at random. Anyone know what's going on here?

    Read the article

  • Spring integration with RabbitMQ

    - by Albert
    We have built a solution based on file-based delivery using Spring Integration. This works fine, but we need to process many files. We are happy with Spring Integration, but we want to scale up. For this we'd like to use a messaging system such as RabbitMQ (or another solution). Does anybody have experience with this, and what's needed to get it working?

    Read the article

  • python: iif or (x ? a : b)

    - by Albert
    If Python supported the (x ? a : b) syntax from C/C++, I would write:

        print paid ? ("paid: " + str(paid) + " €") : "not paid"

    I really don't want an if-check and two independent prints here (the above is only an example; in my code it looks much more complicated, and it would really be stupid to have almost the same code twice). However, Python does not support this operator or any similar operator (afaik). What is the easiest/cleanest/most common way to do this? I have searched a bit and seen someone defining an iif(cond, iftrue, iffalse) function, inspired by Visual Basic. I wondered if I really have to add that code, and if/why there is no such basic function in the standard library.
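
    For the record, a sketch of the conditional expression Python has offered since 2.5 (written so it runs on Python 2 and 3; the paid value is a made-up example):

        # -*- coding: utf-8 -*-
        # Python's conditional expression: <value_if_true> if <condition> else <value_if_false>
        paid = 42
        print(("paid: " + str(paid) + " €") if paid else "not paid")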

    Read the article

  • Hibernate: Opinions on Composite PK vs Surrogate PK

    - by Albert Kam
    As I understand it, whenever I use @Id and @GeneratedValue on a Long field inside a JPA/Hibernate entity, I'm actually using a surrogate key, and I think this is a very nice way to define a primary key, considering my not-so-good experiences with composite primary keys, where: there is more than one business-value-column combination that becomes a unique PK; the composite PK values get duplicated across the detail tables; you cannot change the business values inside that composite PK. I know Hibernate can support both types of PK, but I'm left wondering because of previous chats with experienced colleagues, who said that a composite PK is easier to deal with when doing complex SQL queries and stored procedure processing. They went on to say that using surrogate keys complicates things when joining, and that there are several conditions where it's impossible to do some things with surrogate keys. I'm sorry I can't explain the details here, since I wasn't clear enough when they explained it; maybe I'll put more details next time. I'm currently starting a project and want to try out surrogate keys, since they don't get duplicated across tables and we can change the business-column values. And when some business-value combination must be unique, I can use something like:

        @Table(name = "MY_TABLE", uniqueConstraints = {
            @UniqueConstraint(columnNames = {"FIRST_NAME", "LAST_NAME"})  // name + lastName combination must be unique
        })

    But I'm still in doubt because of the previous discussion about composite keys. Could you share your experiences in this matter? Thank you!
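
    To make the setup being discussed concrete, a small sketch (entity and column names are invented) combining a generated surrogate id with a business-level unique constraint:

        import javax.persistence.*;

        @Entity
        @Table(name = "PERSON",
               uniqueConstraints = @UniqueConstraint(columnNames = {"FIRST_NAME", "LAST_NAME"}))
        public class Person {

            @Id
            @GeneratedValue            // surrogate primary key, carries no business meaning
            private Long id;

            @Column(name = "FIRST_NAME")
            private String firstName;

            @Column(name = "LAST_NAME")
            private String lastName;

            // getters/setters omitted
        }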

    Read the article

  • Clean URLs for images

    - by Albert
    I'm unable to get a working .htaccess that accepts clean URLs for loading images. For example, if a user types http://mysite.com/image/example it works perfectly, as my PHP processes and parses it. However, if the user types .../image/example.jpg it doesn't work. In that case I want to load the module with example.jpg as a parameter; I don't want to serve the image file at all! Thanks in advance.
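
    A sketch of the kind of mod_rewrite rule that usually handles this (the front controller index.php and its query parameters are my assumptions, not taken from the question): the key point is that URLs ending in an image extension must also be routed to PHP rather than served as files.

        RewriteEngine On

        # /image/example (no extension) goes to the PHP front controller.
        RewriteRule ^image/([^/.]+)$ index.php?module=image&name=$1 [L,QSA]

        # /image/example.jpg also goes to PHP, with the file name as a parameter,
        # so Apache never tries to serve the .jpg directly.
        RewriteRule ^image/([^/]+\.(?:jpe?g|png|gif))$ index.php?module=image&file=$1 [L,QSA]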

    Read the article

  • Inspect in memory hsqldb while debugging

    - by Albert
    We're using HSQLDB in-memory to run JUnit tests which operate against a database. The DB is set up before running each test via a Spring configuration. All works fine. Now, when a test fails, it can be convenient to inspect the values in the in-memory database. Is this possible? If so, how? Our URL is: jdbc.url=jdbc:hsqldb:mem:testdb;sql.enforce_strict_size=true The database is destroyed after each test, but while the debugger is paused the database should still be alive. I've tried connecting with the HSQLDB DatabaseManager. That works, but I don't see any tables or data. Any help is highly appreciated!
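
    One detail that often explains the empty view: a mem: database exists only inside the JVM that created it, so a DatabaseManager started as a separate process connects to a different, empty instance. A sketch (my code, not from the question) that opens the manager from within the paused test JVM instead:

        // Call this from the test (or evaluate it in the debugger) while the JVM
        // is paused; it opens HSQLDB's Swing manager against the same in-memory
        // instance the test populated.
        org.hsqldb.util.DatabaseManagerSwing.main(new String[] {
            "--url", "jdbc:hsqldb:mem:testdb",
            "--user", "sa",
            "--password", "",
            "--noexit"   // don't call System.exit() when the window is closed
        });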

    Read the article

  • SQL Server 2008 Table Maintenance - Rebuild, Reorganize, Update Stats, Check Integrity etc HELP!

    - by Albert
    I'm migrating a ~15GB database from SQL Server 2005 to a new server running SQL Server 2008, and along with that I need to create all the new maintenance plans. I can take care of all the backup stuff, but the table maintenance baffles me somewhat. Does anyone have any input on how often I should run (or how often you run) the following tasks?

        Check Database Integrity
        Rebuild Indexes
        Reorganize Indexes
        Update Statistics
        Shrink Database?

    Am I missing anything? Again, if you can share how often you run these tasks that would be great, and/or share any general information about your approach to table maintenance. Lastly, does it matter what order I run these tasks in (when setting up a job)?
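
    As context for the rebuild-vs-reorganize part, a sketch of the usual fragmentation-driven approach (thresholds follow the commonly cited 5%/30% guidance; the table name is a placeholder):

        -- Report index fragmentation in the current database.
        SELECT OBJECT_NAME(ips.object_id) AS table_name,
               i.name                     AS index_name,
               ips.avg_fragmentation_in_percent
        FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
        JOIN sys.indexes AS i
          ON i.object_id = ips.object_id AND i.index_id = ips.index_id
        WHERE ips.avg_fragmentation_in_percent > 5;

        -- Roughly: 5-30% fragmentation -> REORGANIZE, above 30% -> REBUILD,
        -- and refresh statistics afterwards for reorganized indexes.
        ALTER INDEX ALL ON dbo.MyTable REORGANIZE;
        ALTER INDEX ALL ON dbo.MyTable REBUILD;
        UPDATE STATISTICS dbo.MyTable;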

    Read the article

  • Missing Intellisense While Describing Custom Control Properties Declaratively

    - by Albert Bori
    So, I've been working on this project for a few days now, and have been unable to resolve the issue of getting intellisense support for my custom-defined inner properties for a user control (ascx, mind you). I have seen the solution to this (using server controls, .cs mind you) many times, spelled out in this article very well. Everything works for me while using ascx controls except intellisense. Here's the outline of my code:

        [PersistChildren(true)]
        [ParseChildren(typeof(BreadCrumbItem))]
        [ControlBuilder(typeof(BreadCrumbItem))]
        public partial class styledcontrols_buttons_BreadCrumb : System.Web.UI.UserControl
        {
            ...
            [PersistenceMode(PersistenceMode.InnerDefaultProperty)]
            public List<BreadCrumbItem> BreadCrumbItems
            {
                get { return _breadCrumbItems; }
                set { _breadCrumbItems = value; }
            }
            ...
            protected override void AddParsedSubObject(object obj)
            {
                base.AddParsedSubObject(obj);
                if (obj is BreadCrumbItem)
                    BreadCrumbItems.Add(obj as BreadCrumbItem);
            }
            ...
            public class BreadCrumbItem : ControlBuilder
            {
                public string Text { get; set; }
                public string NavigateURL { get; set; }

                public override Type GetChildControlType(string tagName, System.Collections.IDictionary attribs)
                {
                    if (String.Compare(tagName, "BreadCrumbItem", true) == 0)
                    {
                        return typeof(BreadCrumbItem);
                    }
                    return null;
                }
            }
        }

    Here's my markup (which works fine, just no intellisense on the child object declarations):

        <%@ Register src="../styledcontrols/buttons/BreadCrumb.ascx" tagname="BreadCrumb" tagprefix="uc1" %>
        ...
        <uc1:BreadCrumb ID="BreadCrumb1" runat="server" BreadCrumbTitleText="Current Page">
            <BreadCrumbItem Text="Home Page" NavigateURL="~/test/breadcrumbtest.aspx?iwentsomewhere=1" />
            <BreadCrumbItem Text="Secondary Page" NavigateURL="~/test/breadcrumbtest.aspx?iwentsomewhere=1" />
        </uc1:BreadCrumb>

    I think the issue lies with how the intellisense engine traverses supporting classes. All the working examples I see of this are not ascx, but Web Server Controls (cs, in a compiled assembly). If anyone could shed some light on how to accomplish this with ascx controls, I'd appreciate it.

    Read the article

  • compile Boost as static Universal binary lib

    - by Albert
    I want to have a static Universal binary lib of Boost (preferably the latest stable version, that is 1.43.0, or newer). I found many Google hits with similar problems and possible solutions; however, most of them seem outdated, and none of them really worked. Right now, I am trying:

        sudo ./bjam --toolset=darwin --link=static --threading=multi \
            --architecture=combined --address-model=32_64 \
            --macosx-version=10.4 --macosx-version-min=10.4 \
            install

    That compiles and installs fine. However, the produced binaries seem broken:

        az@ip245 47 (openlierox) %file /usr/local/lib/libboost_signals.a
        /usr/local/lib/libboost_signals.a: current ar archive random library
        az@ip245 49 (openlierox) %lipo -info /usr/local/lib/libboost_signals.a
        input file /usr/local/lib/libboost_signals.a is not a fat file
        Non-fat file: /usr/local/lib/libboost_signals.a is architecture: x86_64
        az@ip245 48 (openlierox) %otool -hv /usr/local/lib/libboost_signals.a
        Archive : /usr/local/lib/libboost_signals.a
        /usr/local/lib/libboost_signals.a(trackable.o):
        Mach header
              magic cputype cpusubtype caps filetype ncmds sizeofcmds flags
         MH_MAGIC_64 X86_64 ALL 0x00 OBJECT 3 1536 SUBSECTIONS_VIA_SYMBOLS
        /usr/local/lib/libboost_signals.a(connection.o):
        Mach header
              magic cputype cpusubtype caps filetype ncmds sizeofcmds flags
         MH_MAGIC_64 X86_64 ALL 0x00 OBJECT 3 1776 SUBSECTIONS_VIA_SYMBOLS
        /usr/local/lib/libboost_signals.a(named_slot_map.o):
        Mach header
              magic cputype cpusubtype caps filetype ncmds sizeofcmds flags
         MH_MAGIC_64 X86_64 ALL 0x00 OBJECT 3 1856 SUBSECTIONS_VIA_SYMBOLS
        /usr/local/lib/libboost_signals.a(signal_base.o):
        Mach header
              magic cputype cpusubtype caps filetype ncmds sizeofcmds flags
         MH_MAGIC_64 X86_64 ALL 0x00 OBJECT 3 1776 SUBSECTIONS_VIA_SYMBOLS
        /usr/local/lib/libboost_signals.a(slot.o):
        Mach header
              magic cputype cpusubtype caps filetype ncmds sizeofcmds flags
         MH_MAGIC_64 X86_64 ALL 0x00 OBJECT 3 1616 SUBSECTIONS_VIA_SYMBOLS

    Any suggestion how to get that correct?

    Read the article

  • SQL Server 2008 - Shrinking the Transaction Log - Any way to automate?

    - by Albert
    I went in and checked my transaction log the other day and it was something crazy like 15GB. I ran the following code:

        USE mydb
        GO
        BACKUP LOG mydb WITH TRUNCATE_ONLY
        GO
        DBCC SHRINKFILE(mydb_log, 8)
        GO

    This worked fine and shrank it down to 8MB...but the DB in question is a log shipping publisher, and the log is already back up to some 500MB and growing quickly. Is there any way to automate this log shrinking, outside of creating a custom "Execute T-SQL Statement Task" maintenance plan task and hooking it onto my log backup task? If that's the best way then fine...but I was just thinking that SQL Server would have a better way of dealing with this. I thought it was supposed to shrink automatically whenever you took a log backup, but that's not happening (perhaps because of my log shipping, I don't know). Here's my current backup plan:

        Full backups every night
        Transaction log backups once a day, late morning (maybe hook the log shrinking onto this...it doesn't need to be shrunk every day though)

    Or maybe I just run it once a week, after I run a full backup task? What do you all think?
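
    Worth noting for the move to SQL Server 2008: BACKUP LOG ... WITH TRUNCATE_ONLY was removed in 2008, so the script above will fail there. A hedged sketch of the usual substitute (the backup path is a placeholder; with log shipping the database must stay in FULL recovery, so the real cure is frequent log backups, with a shrink only as a one-off):

        -- SQL Server 2008: TRUNCATE_ONLY no longer exists. Take a real log backup
        -- (log shipping needs them anyway), then shrink once if the file ballooned.
        BACKUP LOG mydb TO DISK = N'D:\Backups\mydb_log.trn';
        GO
        DBCC SHRINKFILE (mydb_log, 8);
        GO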

    Read the article

  • JPA: Behaviour of add and remove operations on a lazily initialized collection?

    - by Albert Kam
    Hello, I'm currently trying out JPA 2 with Hibernate 3.6.x as the engine. I have a ReceivingGood entity that contains a List of ReceivingGoodDetail, with a bidirectional relation. The relevant code for each entity follows.

    ReceivingGood.java:

        @OneToMany(mappedBy = "receivingGood", targetEntity = ReceivingGoodDetail.class,
                   fetch = FetchType.LAZY, cascade = CascadeType.ALL)
        private List<ReceivingGoodDetail> details = new ArrayList<ReceivingGoodDetail>();

        public void addReceivingGoodDetail(ReceivingGoodDetail receivingGoodDetail) {
            receivingGoodDetail.setReceivingGood(this);
        }

        void internalAddReceivingGoodDetail(ReceivingGoodDetail receivingGoodDetail) {
            this.details.add(receivingGoodDetail);
        }

        public void removeReceivingGoodDetail(ReceivingGoodDetail receivingGoodDetail) {
            receivingGoodDetail.setReceivingGood(null);
        }

        void internalRemoveReceivingGoodDetail(ReceivingGoodDetail receivingGoodDetail) {
            this.details.remove(receivingGoodDetail);
        }

    ReceivingGoodDetail.java:

        @ManyToOne
        @JoinColumn(name = "receivinggood_id")
        private ReceivingGood receivingGood;

        public void setReceivingGood(ReceivingGood receivingGood) {
            if (this.receivingGood != null) {
                this.receivingGood.internalRemoveReceivingGoodDetail(this);
            }
            this.receivingGood = receivingGood;
            if (receivingGood != null) {
                receivingGood.internalAddReceivingGoodDetail(this);
            }
        }

    In my experiments with both of these entities, both adding a detail to the receivingGood's collection and removing a detail from it trigger a query that fills the whole collection before doing the add or remove. This assumption is based on the experiments I paste below. My concern is: is it OK that, to change only a few records in the collection, the engine has to query all of the details belonging to the collection? What if the collection had to be filled with 1000 records when I just want to edit a single record?

    Here are my experiments, with the output as the comment above each method (the long generated column lists are abbreviated):

        /*
        Hibernate: select receivingg0_.id as id9_14_, receivingg0_.creationDate as creation2_9_14_, ... too long
        Hibernate: select receivingg0_.id as id10_20_, receivingg0_.creationDate as creation2_10_20_, ... too long
        removing existing detail from lazy collection
        Hibernate: select details0_.receivinggood_id as receivi13_9_8_, details0_.id as id8_, ... too long
            from ReceivingGoodDetail details0_
            left outer join COMMON_USER user1_ on details0_.usercreate_id=user1_.id
            left outer join COMMON_USER user2_ on user1_.usercreate_id=user2_.id
            left outer join COMMON_USER user3_ on user2_.usermodify_id=user3_.id
            left outer join COMMON_USER user4_ on details0_.usermodify_id=user4_.id
            left outer join Product product5_ on details0_.product_id=product5_.id
            left outer join COMMON_USER user6_ on product5_.usercreate_id=user6_.id
            left outer join COMMON_USER user7_ on product5_.usermodify_id=user7_.id
            where details0_.receivinggood_id=?
        after removing try selecting the size : 4
        after removing, now flushing
        Hibernate: update ReceivingGood set creationDate=?, modificationDate=?, usercreate_id=?, usermodify_id=?, version=?, purchaseorder_id=?, supplier_id=?, transactionDate=?, transactionNumber=?, transactionType=?, transactionYearMonth=?, warehouse_id=? where id=? and version=?
        Hibernate: update ReceivingGoodDetail set creationDate=?, modificationDate=?, usercreate_id=?, usermodify_id=?, version=?, buyQuantity=?, buyUnit=?, internalQuantity=?, internalUnit=?, product_id=?, receivinggood_id=?, supplierLotNumber=? where id=? and version=?
        detail size : 4
        */
        public void removeFromLazyCollection() {
            String headerId = "3b373f6a-9cd1-4c9c-9d46-240de37f6b0f";
            ReceivingGood receivingGood = em.find(ReceivingGood.class, headerId);

            // get existing detail
            ReceivingGoodDetail detail = em.find(ReceivingGoodDetail.class, "323fb0e7-9bb2-48dc-bc07-5ff32f30e131");
            detail.setInternalUnit("MCB");

            System.out.println("removing existing detail from lazy collection");
            receivingGood.removeReceivingGoodDetail(detail);
            System.out.println("after removing try selecting the size : " + receivingGood.getDetails().size());

            System.out.println("after removing, now flushing");
            em.flush();
            System.out.println("detail size : " + receivingGood.getDetails().size());
        }

        /*
        Hibernate: select receivingg0_.id as id9_14_, receivingg0_.creationDate as creation2_9_14_, ... too long
        Hibernate: select receivingg0_.id as id10_20_, receivingg0_.creationDate as creation2_10_20_, ... too long
        adding existing detail into lazy collection
        Hibernate: select details0_.receivinggood_id as receivi13_9_8_, details0_.id as id8_, ... too long
            from ReceivingGoodDetail details0_
            left outer join COMMON_USER user1_ on details0_.usercreate_id=user1_.id
            left outer join COMMON_USER user2_ on user1_.usercreate_id=user2_.id
            left outer join COMMON_USER user3_ on user2_.usermodify_id=user3_.id
            left outer join COMMON_USER user4_ on details0_.usermodify_id=user4_.id
            left outer join Product product5_ on details0_.product_id=product5_.id
            left outer join COMMON_USER user6_ on product5_.usercreate_id=user6_.id
            left outer join COMMON_USER user7_ on product5_.usermodify_id=user7_.id
            where details0_.receivinggood_id=?
        after adding try selecting the size : 5
        after adding, now flushing
        Hibernate: update ReceivingGood set creationDate=?, modificationDate=?, usercreate_id=?, usermodify_id=?, version=?, purchaseorder_id=?, supplier_id=?, transactionDate=?, transactionNumber=?, transactionType=?, transactionYearMonth=?, warehouse_id=? where id=? and version=?
        detail size : 5
        */
        public void editLazyCollection() {
            String headerId = "3b373f6a-9cd1-4c9c-9d46-240de37f6b0f";
            ReceivingGood receivingGood = em.find(ReceivingGood.class, headerId);

            // get existing detail
            ReceivingGoodDetail detail = em.find(ReceivingGoodDetail.class, "323fb0e7-9bb2-48dc-bc07-5ff32f30e131");
            detail.setInternalUnit("MCB");

            System.out.println("adding existing detail into lazy collection");
            receivingGood.addReceivingGoodDetail(detail);
            System.out.println("after adding try selecting the size : " + receivingGood.getDetails().size());

            System.out.println("after adding, now flushing");
            em.flush();
            System.out.println("detail size : " + receivingGood.getDetails().size());
        }

    Please share your experience on this matter! Thank you!

    Read the article

  • How useful is Turing completeness? Are neural nets Turing complete?

    - by Albert
    While reading some papers about the Turing completeness of recurrent neural nets (for example: Turing computability with neural nets, Hava T. Siegelmann and Eduardo D. Sontag, 1991), I got the feeling that the proof given there was not really that practical. For example, the referenced paper needs a neural network whose neuron activity must have infinite exactness (to reliably represent any rational number). Other proofs need a neural network of infinite size. Clearly, that is not really practical. But I started to wonder whether it makes sense at all to ask for Turing completeness. By the strict definition, no computer system nowadays is Turing complete, because none of them will be able to simulate the infinite tape. Interestingly, programming language specifications most often leave it open whether they are Turing complete or not. It all boils down to the question of whether they will always be able to allocate more memory and whether the function call stack size is infinite. Most specifications don't really specify this. Of course, all available implementations are limited here, so all practical implementations of programming languages are not Turing complete. So what you can say is that all computer systems are just as powerful as finite state machines and no more. And that brings me to my question: how useful is the term "Turing complete" at all? And back to neural nets: for any practical implementation of a neural net (including our own brain), they will not be able to represent an infinite number of states, i.e. by the strict definition of Turing completeness, they are not Turing complete. So does the question whether neural nets are Turing complete make sense at all? The question whether they are as powerful as finite state machines was answered much earlier (1954 by Minsky; the answer, of course: yes) and also seems easier to answer. I.e., at least in theory, that was already the proof that they are as powerful as any computer.

    Read the article

  • Python subprocess.Popen

    - by Albert
    I have this code:

        #!/usr/bin/python -u
        localport = 9876
        import sys, re, os
        from subprocess import *

        tun = Popen(["./newtunnel", "22", str(localport)], stdout=PIPE, stderr=STDOUT)
        print "** Started tunnel, waiting to be ready ..."
        for l in tun.stdout:
            sys.stdout.write(l)
            if re.search("Waiting for connection", l):
                print "** Ready for SSH !"
                break

    The "./newtunnel" will not exit; it will constantly output more and more data to stdout. However, this code will not give any output and just keeps waiting on tun.stdout. When I kill the newtunnel process externally, it flushes all the data to tun.stdout. So it seems that I can't get any data from tun.stdout while it is still running. Why is that? How can I get the information? Note that the default bufsize for Popen is 0 (unbuffered). I can also specify bufsize=0, but that doesn't change anything.
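
    One plausible culprit (my sketch, not from the post): in Python 2, iterating over a file object uses an internal read-ahead buffer, so lines only arrive once that buffer fills; reading with readline() in a loop avoids it (any buffering done by the child's own stdout is a separate issue). A drop-in replacement for the loop above:

        # Read line by line without the file-iterator's read-ahead buffer.
        while True:
            l = tun.stdout.readline()
            if not l:                      # EOF: the child closed its stdout
                break
            sys.stdout.write(l)
            if re.search("Waiting for connection", l):
                print "** Ready for SSH !"
                break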

    Read the article

  • refactoring in iSeries (RPG), is it realistic

    - by albert green
    Implementing agile in projects requires the ability to do refactoring. It is not really a must, but code refactoring has proven to be a good engineering practice. In an agile (Scrum) project on the iSeries platform, which requires development (new code and modifications to legacy code) in RPG and RPG LE, is it possible to implement refactoring? If so, what are the techniques for doing it? If someone who has tried it could share their experience or just point to references, I would greatly appreciate it.

    Read the article

  • g++ on MacOSX doesn't work with -arch ppc64

    - by Albert
    I am trying to build a Universal binary on MacOSX with g++, but it doesn't really work. I have tried with this simple dummy code:

        #include <iostream>
        using namespace std;

        int main() {
            cout << "Hello" << endl;
        }

    This works fine:

        % g++ test.cpp -arch i386 -arch ppc -arch x86_64 -o test
        % file test
        test: Mach-O universal binary with 3 architectures
        test (for architecture i386):    Mach-O executable i386
        test (for architecture ppc7400): Mach-O executable ppc
        test (for architecture x86_64):  Mach-O 64-bit executable x86_64

    However, this does not:

        % g++ test.cpp -arch i386 -arch ppc -arch x86_64 -arch ppc64 -o test
        In file included from test.cpp:1:
        /usr/include/c++/4.2.1/iostream:44:28: error: bits/c++config.h: No such file or directory
        In file included from /usr/include/c++/4.2.1/ios:43,
                         from /usr/include/c++/4.2.1/ostream:45,
                         from /usr/include/c++/4.2.1/iostream:45,
                         from test.cpp:1:
        /usr/include/c++/4.2.1/iosfwd:45:29: error: bits/c++locale.h: No such file or directory
        /usr/include/c++/4.2.1/iosfwd:46:25: error: bits/c++io.h: No such file or directory
        In file included from /usr/include/c++/4.2.1/bits/ios_base.h:45,
                         from /usr/include/c++/4.2.1/ios:48,
                         from /usr/include/c++/4.2.1/ostream:45,
                         from /usr/include/c++/4.2.1/iostream:45,
                         from test.cpp:1:
        /usr/include/c++/4.2.1/ext/atomicity.h:39:23: error: bits/gthr.h: No such file or directory
        /usr/include/c++/4.2.1/ext/atomicity.h:40:30: error: bits/atomic_word.h: No such file or directory
        ...

    Any idea why that is? I have installed Xcode 3.2.2 with all the SDKs it comes with.

    Read the article

< Previous Page | 1 2 3 4 5 6 7  | Next Page >