Search Results

Search found 13710 results on 549 pages for 'relational model'.

  • How do I get Visual Studio build process to pass condition to library(external) project

    - by Billy Talented
    I have tried several solutions for this problem, enough to know that I do not know enough about MSBuild to do this elegantly, but I feel like there should be a method. I have a set of libraries for working with .NET projects. A few of the projects use System.Web.Mvc, which recently released version 2, and we are looking forward to the upgrade. Currently, sites which reference this library reference it directly by the project (csproj) on the developer's computer, not a built version of the library, so that changes and source code can easily be viewed when working with code from this library. This works quite well, and I would prefer not to switch to binary references (but will if that is the only solution).

    The problem is that, because of some of the functionality that was added onto the MVC1-based library (view engines, model binders, etc.), several of the sites relying on these libraries need to stay on MVC1 until we have fully evaluated and tested them on MVC2. I would prefer not to fork the library or keep two copies on each dev machine. So what I would like to do is set a property group value in the referencing web application and have it read by the above-mentioned library, with the caveat that when working directly on the library via its containing solution I can still control this via Configuration Manager by selecting a build type, with that property overriding the build behavior of the solution (i.e. 'Debug - MVC1' vs 'Debug - MVC2'). I have this working via:

      <Choose>
        <When Condition=" '$(Configuration)|$(Platform)' == 'Release - MVC2|AnyCPU' Or '$(Configuration)|$(Platform)' == 'Debug - MVC2|AnyCPU'">
          <ItemGroup>
            <Reference Include="System.Web.Mvc, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35, processorArchitecture=MSIL">
            </Reference>
            <Reference Include="Microsoft.Web.Mvc, Version=1.0.0.0, Culture=neutral, processorArchitecture=MSIL">
              <SpecificVersion>False</SpecificVersion>
              <HintPath>..\Dependancies\Web\MVC2\Microsoft.Web.Mvc.dll</HintPath>
            </Reference>
          </ItemGroup>
        </When>
        <Otherwise>
          <ItemGroup>
            <Reference Include="System.Web.Mvc, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35, processorArchitecture=MSIL">
              <SpecificVersion>False</SpecificVersion>
              <HintPath>..\Dependancies\Web\MVC\System.Web.Mvc.dll</HintPath>
            </Reference>
          </ItemGroup>
        </Otherwise>
      </Choose>

    The item I am struggling with is the cross-solution issue (solution TheWebsite references this project and needs to control which build property to use): I have not found a way to handle it that I consider a solid solution and that keeps the build within Visual Studio working as it has to date. Other bits: we are using VS2008, ReSharper, TeamCity for CI, and SVN for source control.

  • JPA thinks I'm deleting a detached object

    - by Steve
    So, I've got a DAO that I used to load and save my domain objects using JPA. I finally managed to get the transaction stuff working (with a bunch of help from the folks here...), now I've got another issue. In my test case, I call my DAO to load a domain object with a given id, check that it got loaded and then call the same DAO to delete the object I just loaded. When I do that I get the following: java.lang.IllegalArgumentException: Removing a detached instance mil.navy.ndms.conops.common.model.impl.jpa.Group#10 at org.hibernate.ejb.event.EJB3DeleteEventListener.performDetachedEntityDeletionCheck(EJB3DeleteEventListener.java:45) at org.hibernate.event.def.DefaultDeleteEventListener.onDelete(DefaultDeleteEventListener.java:108) at org.hibernate.event.def.DefaultDeleteEventListener.onDelete(DefaultDeleteEventListener.java:74) at org.hibernate.impl.SessionImpl.fireDelete(SessionImpl.java:794) at org.hibernate.impl.SessionImpl.delete(SessionImpl.java:772) at org.hibernate.ejb.AbstractEntityManagerImpl.remove(AbstractEntityManagerImpl.java:253) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:48) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37) at java.lang.reflect.Method.invoke(Method.java:600) at org.springframework.orm.jpa.SharedEntityManagerCreator$SharedEntityManagerInvocationHandler.invoke(SharedEntityManagerCreator.java:180) at $Proxy27.remove(Unknown Source) at mil.navy.ndms.conops.common.dao.impl.jpa.GroupDao.delete(GroupDao.java:499) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:48) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37) at java.lang.reflect.Method.invoke(Method.java:600) at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:304) at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:182) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:149) at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:106) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171) at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204) at $Proxy28.delete(Unknown Source) at mil.navy.ndms.conops.common.dao.impl.jpa.GroupDaoTest.testGroupDaoSave(GroupDaoTest.java:89) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:48) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37) at java.lang.reflect.Method.invoke(Method.java:600) at junit.framework.TestCase.runTest(TestCase.java:164) at junit.framework.TestCase.runBare(TestCase.java:130) at junit.framework.TestResult$1.protect(TestResult.java:106) at junit.framework.TestResult.runProtected(TestResult.java:124) at junit.framework.TestResult.run(TestResult.java:109) at junit.framework.TestCase.run(TestCase.java:120) at junit.framework.TestSuite.runTest(TestSuite.java:230) at junit.framework.TestSuite.run(TestSuite.java:225) at org.eclipse.jdt.internal.junit.runner.junit3.JUnit3TestReference.run(JUnit3TestReference.java:130) at 
org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:460) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:673) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:386) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:196) Now given that I'm using the same DAO instance, and I've not changed EntityManagers (unless Spring does so without letting me know), how can this be a detached object? My DAO code looks like this: public class GenericJPADao implements IWebDao, IDao, IDaoUtil { private static Logger logger = Logger.getLogger (GenericJPADao.class); protected Class voClass; @PersistenceContext(unitName = "CONOPS_PU") protected EntityManagerFactory emf; @PersistenceContext(unitName = "CONOPS_PU") protected EntityManager em; public GenericJPADao() { super ( ); ParameterizedType genericSuperclass = (ParameterizedType) getClass ( ).getGenericSuperclass ( ); this.voClass = (Class) genericSuperclass.getActualTypeArguments ( )[1]; } ... public void delete (INTFC modelObj, EntityManager em) { em.remove (modelObj); } @SuppressWarnings("unchecked") public INTFC findById (Long id) { return ((INTFC) em.find (voClass, id)); } } The test case code looks like: IGroup loadedGroup = dao.findById (group.getId ( )); assertNotNull (loadedGroup); assertEquals (group.getId ( ), loadedGroup.getId ( )); dao.delete (loadedGroup); // - This generates the above exception loadedGroup = dao.findById (group.getId ( )); assertNull(loadedGroup); Can anyone tell me what I'm doing wrong here? Thanks
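
    The "Removing a detached instance" message means that, by the time remove() runs, the Group returned by findById is no longer attached to the persistence context doing the delete (for example because each DAO call runs in its own transaction, so the context that loaded it has already closed). As a rough, hedged illustration only, and not a statement of what this DAO or Spring setup should do, a delete can be made tolerant of detached instances by merging the entity back into the current context first; the class name below is hypothetical:

      import javax.persistence.EntityManager;

      // Minimal sketch: delete an entity that may have become detached by
      // re-attaching it with merge() before calling remove().
      public class DetachedSafeDeleter {

          private final EntityManager em;

          public DetachedSafeDeleter(EntityManager em) {
              this.em = em;
          }

          public void delete(Object entity) {
              if (em.contains(entity)) {
                  // Still managed by the current persistence context.
                  em.remove(entity);
              } else {
                  // merge() returns a managed copy; remove() must target that copy,
                  // otherwise Hibernate reports "Removing a detached instance".
                  em.remove(em.merge(entity));
              }
          }
      }

    Whether merging is the right fix here depends on how the surrounding transaction is scoped; running the findById and the delete inside one transaction would be another way to keep the instance managed.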

  • ASP.NET MVC 2 - Saving child entities on form submit

    - by Justin
    Hey, I'm using ASP.NET MVC 2 and am struggling with saving child entities. I have an existing Invoice entity (which I create on a separate form) and then I have a LogHours view that I'd like to use to save InvoiceLog's, which are child entities of Invoice. Here's the view: <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<TothSolutions.Data.Invoice>" %> <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server"> Log Hours </asp:Content> <asp:Content ID="Content3" ContentPlaceHolderID="HeadContent" runat="server"> <script type="text/javascript"> $(document).ready(function () { $("#InvoiceLogs_0__Description").focus(); }); </script> </asp:Content> <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server"> <h2>Log Hours</h2> <% using (Html.BeginForm("SaveHours", "Invoices")) {%> <%: Html.ValidationSummary(true) %> <fieldset> <legend>Fields</legend> <table> <tr> <th>Date</th> <th>Description</th> <th>Hours</th> </tr> <% int index = 0; foreach (var log in Model.InvoiceLogs) { %> <tr> <td><%: log.LogDate.ToShortDateString() %></td> <td><%: Html.TextBox("InvoiceLogs[" + index + "].Description")%></td> <td><%: Html.TextBox("InvoiceLogs[" + index + "].Hours")%></td> <td>Hours</td> </tr> <% index++; } %> </table> <p> <%: Html.Hidden("InvoiceID") %> <%: Html.Hidden("CreateDate") %> <input type="submit" value="Save" /> </p> </fieldset> <% } %> <div> <%: Html.ActionLink("Back to List", "Index") %> </div> </asp:Content> And here's the controller code: //GET: /Secure/Invoices/LogHours/ public ActionResult LogHours(int id) { var invoice = DataContext.InvoiceData.Get(id); if (invoice == null) { throw new Exception("Invoice not found with id: " + id); } return View(invoice); } //POST: /Secure/Invoices/SaveHours/ [AcceptVerbs(HttpVerbs.Post)] public ActionResult SaveHours([Bind(Exclude = "InvoiceLogs")]Invoice invoice) { TryUpdateModel(invoice.InvoiceLogs, "InvoiceLogs"); invoice.UpdateDate = DateTime.Now; invoice.DeveloperID = DeveloperID; //attaching existing invoice. DataContext.InvoiceData.Attach(invoice); //save changes. DataContext.SaveChanges(); //redirect to invoice list. return RedirectToAction("Index"); } And the data access code: public static void Attach(Invoice invoice) { var i = new Invoice { InvoiceID = invoice.InvoiceID }; db.Invoices.Attach(i); db.Invoices.ApplyCurrentValues(invoice); } In the SaveHours action, it properly sets the values of the InvoiceLog entities after I call TryUpdateModel but when it does SaveChanges it doesn't update the database with the new values. Also, if you manually update the values of the InvoiceLog entries in the database and then go to this page it doesn't populate the textboxes so it's clearly not binding correctly. Thanks, Justin

  • Flex list control itemrenderer issue

    - by Jerry
    Hi Guys I am working on a small photo Gallery. I create a xml file and try to link it to my List control with itemrenderer. However, when I tried to save the file, I got access of undefined property data. Here is my code...and thanks a lot! <?xml version="1.0" encoding="utf-8"?> <s:Application xmlns:fx="http://ns.adobe.com/mxml/2009" xmlns:s="library://ns.adobe.com/flex/spark" xmlns:mx="library://ns.adobe.com/flex/mx" minWidth="955" minHeight="600"> <fx:Declarations> <fx:Model id="pictureXML" source="data/pictures.xml"/> <s:ArrayList id="pictureArray" source="{pictureXML.picture}"/> </fx:Declarations> <s:List id="pictureGrid" dataProvider="{pictureArray}" horizontalCenter="0" top="20"> <s:itemRenderer> <fx:Component> <s:HGroup> <mx:Image source="images/big/{data.source}" /> <s:Label text="{data.caption}"/> </s:HGroup> </fx:Component> </s:itemRenderer> </s:List> </s:Application> My xml <?xml version="1.0"?> <album> <picture> <source>city1.jpg </source> <caption>City View 1</caption> </picture> <picture> <source>city2.jpg </source> <caption>City View 2</caption> </picture> <picture> <source>city3.jpg </source> <caption>City View 3</caption> </picture> <picture> <source>city4.jpg </source> <caption>City View 4</caption> </picture> <picture> <source>city5.jpg </source> <caption>City View 5</caption> </picture> <picture> <source>city6.jpg </source> <caption>City View 6</caption> </picture> <picture> <source>city7.jpg </source> <caption>City View 7</caption> </picture> <picture> <source>city8.jpg </source> <caption>City View 8</caption> </picture> <picture> <source>city9.jpg </source> <caption>City View 9</caption> </picture> <picture> <source>city10.jpg </source> <caption>City View 10</caption> </picture> </album> I appreciate any helps!!!

  • Why do I get the error "Only antlib URIs can be located from the URI alone, not the URI" when trying to run Hibernate Tools in my build.xml?

    - by Casbah
    I'm trying to run hibernate tools in an ant build to generate ddl from my JPA annotations. Ant dies on the taskdef tag. I've tried with ant 1.7, 1.6.5, and 1.6 to no avail. I've tried both in eclipse and outside. I've tried including all the hbn jars in the hibernate-tools path and not. Note that I based my build file on this post: http://stackoverflow.com/questions/281890/hibernate-jpa-to-ddl-command-line-tools I'm running eclipse 3.4 with WTP 3.0.1 and MyEclipse 7.1 on Ubuntu 8. Build.xml: <project name="generateddl" default="generate-ddl"> <path id="hibernate-tools"> <pathelement location="../libraries/hibernate-tools/hibernate-tools.jar" /> <pathelement location="../libraries/hibernate-tools/bsh-2.0b1.jar" /> <pathelement location="../libraries/hibernate-tools/freemarker.jar" /> <pathelement location="../libraries/jtds/jtds-1.2.2.jar" /> <pathelement location="../libraries/hibernate-tools/jtidy-r8-20060801.jar" /> </path> <taskdef classname="org.hibernate.tool.ant.HibernateToolTask" classpathref="hibernate-tools"/> <target name="generate-ddl" description="Export schema to DDL file"> <!-- compile model classes before running hibernatetool --> <!-- task definition; project.class.path contains all necessary libs <taskdef name="hibernatetool" classname="org.hibernate.tool.ant.HibernateToolTask" classpathref="project.class.path" /> --> <hibernatetool destdir="sql"> <!-- check that directory exists --> <jpaconfiguration persistenceunit="default" /> <classpath> <dirset dir="WebRoot/WEB-INF/classes"> <include name="**/*"/> </dirset> </classpath> <hbm2ddl outputfilename="schemaexport.sql" format="true" export="false" drop="true" /> </hibernatetool> </target> Error message (ant -v): Apache Ant version 1.7.0 compiled on December 13 2006 Buildfile: /home/joe/workspace/bento/ant-generate-ddl.xml parsing buildfile /home/joe/workspace/bento/ant-generate-ddl.xml with URI = file:/home/joe/workspace/bento/ant-generate-ddl.xml Project base dir set to: /home/joe/workspace/bento [antlib:org.apache.tools.ant] Could not load definitions from resource org/apache/tools/ant/antlib.xml. It could not be found. BUILD FAILED /home/joe/workspace/bento/ant-generate-ddl.xml:12: Only antlib URIs can be located from the URI alone,not the URI at org.apache.tools.ant.taskdefs.Definer.execute(Definer.java:216) at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:288) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:616) at org.apache.tools.ant.dispatch.DispatchUtils.execute(DispatchUtils.java:105) at org.apache.tools.ant.Task.perform(Task.java:348) at org.apache.tools.ant.Target.execute(Target.java:357) at org.apache.tools.ant.helper.ProjectHelper2.parse(ProjectHelper2.java:140) at org.eclipse.ant.internal.ui.antsupport.InternalAntRunner.parseBuildFile(InternalAntRunner.java:191) at org.eclipse.ant.internal.ui.antsupport.InternalAntRunner.run(InternalAntRunner.java:400) at org.eclipse.ant.internal.ui.antsupport.InternalAntRunner.main(InternalAntRunner.java:137) Total time: 195 milliseconds

  • How to overcome shortcomings in reporting from EAV database?

    - by David Archer
    The major shortcomings of Entity-Attribute-Value database designs in SQL all seem to be related to querying and reporting on the data efficiently and quickly. Most of the information I have read on the subject warns against implementing EAV because of these problems, and because querying/reporting is common to almost all applications. I am currently designing a system where almost all of the fields necessary for data storage are not known at design/compile time and are defined by the end user of the system. EAV seems like a good fit for this requirement, but because of the problems I've read about I am hesitant to implement it, as there are also some pretty heavy reporting requirements for this system. I think I've come up with a way around this but would like to pose the question to the SO community.

    Given that a typical normalized (OLTP) database still isn't always the best option for running reports, a good practice seems to be having a "reporting" (OLAP) database to which the data from the normalized database is copied, indexed extensively, and possibly denormalized for easier querying. Could the same idea be used to work around the shortcomings of an EAV design? The main downside I see is the increased complexity of transferring the data from the EAV database to the reporting database, as you may end up having to alter the tables in the reporting database as new fields are defined in the EAV database. But that is hardly impossible and seems to be an acceptable tradeoff for the increased flexibility given by the EAV design. This downside also exists if I use a non-SQL data store (e.g. CouchDB or similar) for the main data storage, since all the standard reporting tools expect a SQL backend to query against. Do the issues with EAV systems mostly go away if you have a separate reporting database for querying?

    EDIT: Thanks for the comments so far. One important thing about the system I'm working on is that I'm really only talking about using EAV for one of the entities, not everything in the system. The whole gist of the system is to be able to pull data from multiple disparate sources that are not known ahead of time and crunch the data to come up with some "best known" data about a particular entity. So every "field" I'm dealing with is multi-valued, and I'm also required to track history for each. The normalized design for this ends up being one table per field, which makes querying it kind of painful anyway.
    Here are the table schemas and sample data I'm looking at (obviously changed from what I'm working on, but I think it illustrates the point well):

      EAV Tables

      Person
      ---------------------------------------------------------------
      Id    Name
      123   Joe Smith

      Person_Value
      ---------------------------------------------------------------
      PersonId   Source   Field         Value           EffectiveDate
      123        CIA      HomeAddress   123 Cherry Ln   2010-03-26
      123        DMV      HomeAddress   561 Stoney Rd   2010-02-15
      123        FBI      HomeAddress   676 Lancas Dr   2010-03-01

      Reporting Table

      Person_Denormalized
      -----------------------------------------------------------------------------------
      Id    Name        HomeAddress     HomeAddress_Confidence   HomeAddress_EffectiveDate
      123   Joe Smith   123 Cherry Ln   0.713                    2010-03-26

      Normalized Design

      Person
      ------------------------------------------------
      Id    Name
      123   Joe Smith

      Person_HomeAddress
      ------------------------------------------------
      PersonId   Source   Value           Effective Date
      123        CIA      123 Cherry Ln   2010-03-26
      123        DMV      561 Stoney Rd   2010-02-15
      123        FBI      676 Lancas Dr   2010-03-01

    The "Confidence" field here is generated using logic that cannot be expressed easily (if at all) using SQL, so my most common operation besides inserting new values will be pulling ALL data about a person for all fields so I can generate the record for the reporting table. This is actually easier in the EAV model, as I can do a single query. In the normalized design, I end up having to do one query per field to avoid a massive Cartesian product from joining them all together.

  • Loading a PyML multiclass classifier... why isn't this working?

    - by Michael Aaron Safyan
    This is a followup from "Save PyML.classifiers.multi.OneAgainstRest(SVM()) object?". I am using PyML for a computer vision project (pyimgattr), and have been having trouble storing/loading a multiclass classifier. When attempting to load one of the SVMs in a composite classifier, with loadSVM, I am getting: ValueError: invalid literal for float(): rest Note that this does not happen with the first classifier that I load, only with the second. What is causing this error, and what can I do to get around this so that I can properly load the classifier? Details To better understand the trouble I'm running into, you may want to look at pyimgattr.py (currently revision 11). I am invoking the program with "./pyimgattr.py train" which trains the classifier (invokes train on line 571, which trains the classifier with trainmulticlassclassifier on line 490 and saves it with storemulticlassclassifier on line 529), and then invoking the program with "./pyimgattr.py test" which loads the classifier in order to test it with the testing dataset (invokes test on line 628, which invokes loadmulticlassclassifier on line 549). The multiclass classifier consists of several one-against-rest SVMs which are saved individually. The loadmulticlassclassifier function loads these individually by calling loadSVM() on several different files. It is in this call to loadSVM (done indirectly in loadclassifier on line 517) that I get an error. The first of the one-against-rest classifiers loads successfully, but the second one does not. A transcript is as follows: $ ./pyimgattr.py test [INFO] pyimgattr -- Loading attributes from "classifiers/attributes.lst"... [INFO] pyimgattr -- Loading classnames from "classifiers/classnames.lst"... [INFO] pyimgattr -- Loading dataset "attribute_data/apascal_test.txt"... [INFO] pyimgattr -- Loaded dataset "attribute_data/apascal_test.txt". [INFO] pyimgattr -- Loading multiclass classifier from "classifiers/classnames_from_attributes"... [INFO] pyimgattr -- Constructing object into which to store loaded data... [INFO] pyimgattr -- Loading manifest data... [INFO] pyimgattr -- Loading classifier from "classifiers/classnames_from_attributes/aeroplane.svm".... scanned 100 patterns scanned 200 patterns read 100 patterns read 200 patterns {'50': 38, '60': 45, '61': 46, '62': 47, '49': 37, '52': 39, '53': 40, '24': 16, '25': 17, '26': 18, '27': 19, '20': 12, '21': 13, '22': 14, '23': 15, '46': 34, '47': 35, '28': 20, '29': 21, '40': 32, '41': 33, '1': 1, '0': 0, '3': 3, '2': 2, '5': 5, '4': 4, '7': 7, '6': 6, '8': 8, '58': 44, '39': 31, '38': 30, '15': 9, '48': 36, '16': 10, '19': 11, '32': 24, '31': 23, '30': 22, '37': 29, '36': 28, '35': 27, '34': 26, '33': 25, '55': 42, '54': 41, '57': 43} read 250 patterns in LinearSparseSVModel done LinearSparseSVModel constructed model [INFO] pyimgattr -- Loaded classifier from "classifiers/classnames_from_attributes/aeroplane.svm". [INFO] pyimgattr -- Loading classifier from "classifiers/classnames_from_attributes/bicycle.svm".... 
label at None delimiter , Traceback (most recent call last): File "./pyimgattr.py", line 797, in sys.exit(main(sys.argv)); File "./pyimgattr.py", line 782, in main return test(attributes_file,classnames_file,testing_annotations_file,testing_dataset_path,classifiers_path,logger); File "./pyimgattr.py", line 635, in test multiclass_classnames_from_attributes_classifier = loadmulticlassclassifier(classnames_from_attributes_folder,logger); File "./pyimgattr.py", line 529, in loadmulticlassclassifier classifiers.append(loadclassifier(os.path.join(filename,label+".svm"),logger)); File "./pyimgattr.py", line 502, in loadclassifier result=loadSVM(filename,datasetClass = SparseDataSet); File "/Library/Python/2.6/site-packages/PyML/classifiers/svm.py", line 328, in loadSVM data = datasetClass(fileName, **args) File "/Library/Python/2.6/site-packages/PyML/containers/vectorDatasets.py", line 224, in __init__ BaseVectorDataSet.__init__(self, arg, **args) File "/Library/Python/2.6/site-packages/PyML/containers/baseDatasets.py", line 214, in __init__ self.constructFromFile(arg, **args) File "/Library/Python/2.6/site-packages/PyML/containers/baseDatasets.py", line 243, in constructFromFile for x in parser : File "/Library/Python/2.6/site-packages/PyML/containers/parsers.py", line 426, in next x = [float(token) for token in tokens[self._first:self._last]] ValueError: invalid literal for float(): rest

  • Deserializing Metafile

    - by Kildareflare
    I have an application that works with Enhanced Metafiles. I am able to create them, save them to disk as .emf and load them again no problem. I do this by using the gdi32.dll methods and the DLLImport attribute. However, to enable Version Tolerant Serialization I want to save the metafile in an object along with other data. This essentially means that I need to serialize the metafile data as a byte array and then deserialize it again in order to reconstruct the metafile. The problem I have is that the deserialized data would appear to be corrupted in some way, since the method that I use to reconstruct the Metafile raises a "Parameter not valid exception". At the very least the pixel format and resolutions have changed. Code use is below. [DllImport("gdi32.dll")] public static extern uint GetEnhMetaFileBits(IntPtr hemf, uint cbBuffer, byte[] lpbBuffer); [DllImport("gdi32.dll")] public static extern IntPtr SetEnhMetaFileBits(uint cbBuffer, byte[] lpBuffer); [DllImport("gdi32.dll")] public static extern bool DeleteEnhMetaFile(IntPtr hemf); The application creates a metafile image and passes it to the method below. private byte[] ConvertMetaFileToByteArray(Image image) { byte[] dataArray = null; Metafile mf = (Metafile)image; IntPtr enhMetafileHandle = mf.GetHenhmetafile(); uint bufferSize = GetEnhMetaFileBits(enhMetafileHandle, 0, null); if (enhMetafileHandle != IntPtr.Zero) { dataArray = new byte[bufferSize]; GetEnhMetaFileBits(enhMetafileHandle, bufferSize, dataArray); } DeleteEnhMetaFile(enhMetafileHandle); return dataArray; } At this point the dataArray is inserted into an object and serialized using a BinaryFormatter. The saved file is then deserialized again using a BinaryFormatter and the dataArray retrieved from the object. The dataArray is then used to reconstruct the original Metafile using the following method. public static Image ConvertByteArrayToMetafile(byte[] data) { Metafile mf = null; try { IntPtr hemf = SetEnhMetaFileBits((uint)data.Length, data); mf = new Metafile(hemf, true); } catch (Exception ex) { System.Windows.Forms.MessageBox.Show(ex.Message); } return (Image)mf; } The reconstructed metafile is then saved saved to disk as a .emf (Model) at which point it can be accessed by the Presenter for display. private static void SaveFile(Image image, String filepath) { try { byte[] buffer = ConvertMetafileToByteArray(image); File.WriteAllBytes(filepath, buffer); //will overwrite file if it exists } catch (Exception ex) { System.Windows.Forms.MessageBox.Show(ex.Message); } } The problem is that the save to disk fails. If this same method is used to save the original Metafile before it is serialized everything is OK. So something is happening to the data during serialization/deserializtion. Indeed if I check the Metafile properties in the debugger I can see that the ImageFlags, PropertyID, resolution and pixelformats change. Original Format32bppRgb changes to Format32bppArgb Original Resolution 81 changes to 96 I've trawled though google and SO and this has helped me get this far but Im now stuck. Does any one have enough experience with Metafiles / serialization to help..? EDIT: If I serialize/deserialize the byte array directly (without embedding in another object) I get the same problem.

  • Rails & combo box change event: Help make this obtrusive javascript unobtrusive

    - by DJTripleThreat
    Ok so a friend of mine gave some help with a prototype/obtrusive solution to this but its not quite there. Also, I want to make this unobtrusive instead of using the observe_field function that rails gives me. I don't want to use prototype either because I'm more familiar with JQuery. Here's my problem: I have an Event that can have multiple ServiceTypes and a ServiceType can belong to many Events. A many-to-many relationship between these two exists as an OfferedService. When creating an event, I have a drop down with a list of TimeAllotments that are something like 10 minutes, 12 minutes, 15 minutes, 20 minutes, 30 minutes etc. When the user selects one of these choices, I want a div tag to be filled with a list of ServiceTypes that are associated with this TimeAllotment. So for example, the user selects "10 minutes" and then the div repopulates with services that last 10 minutes. Here is what I have so far: ... some erb code etc and then <fieldset> <legend><%= f.label :time_allotment, "Size of the Appointment Slots:" %></legend> <div> <span class="field-group"> <div> <!-- TimeAllotment is a tabless model which is why this is done like so... --> <%= select("event_service", "time_allotment", TimeAllotment.all.collect {|ta| [ta.title, ta.value]}, {:prompt => true}) %> </div> </span> </div> <div style="clear:both;"></div> Services: <div> <span class="field-group"> <!-- this div right here needs to be repopulated when the above select changes. --> <div id="services"> <% for service_type in ServiceType.all %> <div> <%= check_box_tag "event_service[service_type_ids][]", service_type.id, false %> <%=h service_type.title %> </div> <% end %> </div> </span> </div> <div class="clear"></div> </fieldset> ok so right now ALL of the services are there to be chosen from. I want them to change based on what is selected in the combobox event_service_time_allotment. Can someone help me get pointed in the right direction? I have looked at Ryan's rails casts for using JQuery but its not helpful because he deals with ajax calls for the create action. This would be for the new or edit action. I have a new.js.erb but it doesn't get loaded when calling the new action. I'm super lost as far as getting JQuery to work with my application. I think that if someone can just show me how to make an alert pop up when I change the combo box, and how to return a dataset using ajax the right now, I think I can figure out the rest. Thanks, I know this is super complicated so any helpful answers will get an upvote.

  • SVN: Release branch headaches, how to merge in website revisions as and when cleared to go live?

    - by Pete Duncanson
    I need a sanity check here if we can; any ideas on correcting/changing the following are very welcome! We've been getting ourselves in knots of late with our SVN and are trying to correct it by putting a trunk/release system in place. We have a large website that we develop on, and we store it all in SVN. Here's what we had in mind:

      - We have trunk and a release branch.
      - All work gets checked into trunk.
      - When a feature is deemed ready for the next release, it is merged into the Release branch.
      - We only have one release branch, and we just tag "Latest" when we do a push to live.
      - We hope to be able to get all the files changed from Latest to Head to give us a zip that we can upload (any ideas on an easy way to do this via scripting?).

    So we set all this up and were very happy with ourselves. Except it's not working, and here's why. We work on lots of different features/fixes/problems at once, and they don't all get checked in nicely feature-complete (but always working, at least). Then sometimes you have to wait for clients to sign off. As a result you end up with revisions which are "ready for live" scattered among ones which are "still being worked on" in trunk. That means the completed revisions are not getting merged in sequentially, but out of order. I thought SVN could handle this, clever little thing that it is, but apparently not. Here's an example:

      1. Pete changes some CSS to make a new button look pretty (Revision 1).
      2. Dave adds some CSS to the bottom of the same CSS file as Pete's, for a new feature (Revision 2).
      3. Dave's mod gets the nod, so he merges it into Release and commits it with a log message mentioning the revision number and bug tracking id.
      4. Pete adds more buttons to finish his mod, with no CSS changes this time (Revision 3).
      5. Pete then merges his mods (Revisions 1 and 3) into the Head of Release (which has Dave's merge in it), but this overwrites Dave's CSS additions, which now disappear completely.

    This leads to the site being broken and the Release branch being pretty much useless. So we tried some other ideas, like reverting Release back to "Latest" and then just merging in Revisions 1, 2 and 3 in order. This worked fine until we had Revision 4, which was not ready for live, and Revision 5, which was. Suddenly we are getting ourselves in knots again with exactly the same problem! OK, so take three: revert to Latest, merge in Revision 5, then do an update back to Head. Tree conflicts galore! So that's a no-no. I cracked in the end and built it all up manually, but it's not something I want to do regularly; ideally I want to script our deployment, but I can't while Release is in such a mess.

    HELP! What the heck are we doing wrong? I can't seem to find any solutions to this problem of wanting different, non-sequential revisions in Release. If it's not possible, that's fine, but then how the heck are we meant to get stuff live easily? We can't branch for every single change; the site takes 30+ minutes to check out, so it would take too long. Side note: we are using TortoiseSVN, so can we keep command-line examples to a minimum in any answers? Latest version of TSVN and SVN version 1.6, so we have the funky merge tracking etc.

    EDIT: An excellent blog post which deals with the dev/release cycle (although using Git, it is still relevant); I thought everyone would like to read it if they found this question interesting: http://nvie.com/git-model

    EDIT 2: I wrote a blog post on how to show which branch you are working on in your website, which others have asked me about (http://www.offroadcode.com/2010/5/14/which-svn-branch-are-you-working-on.aspx). Hope that helps. In the meantime we are looking at Kiln and hoping to make the switch next month (gulp!)

  • Matlab: Optimization by perturbing variable

    - by S_H
    My main script contains following code: %# Grid and model parameters nModel=50; nModel_want=1; nI_grid1=5; Nth=1; nRow.Scale1=5; nCol.Scale1=5; nRow.Scale2=5^2; nCol.Scale2=5^2; theta = 90; % degrees a_minor = 2; % range along minor direction a_major = 5; % range along major direction sill = var(reshape(Deff_matrix_NthModel,nCell.Scale1,1)); % variance of the coarse data matrix of size nRow.Scale1 X nCol.Scale1 %# Covariance computation % Scale 1 for ihRow = 1:nRow.Scale1 for ihCol = 1:nCol.Scale1 [cov.Scale1(ihRow,ihCol),heff.Scale1(ihRow,ihCol)] = general_CovModel(theta, ihCol, ihRow, a_minor, a_major, sill, 'Exp'); end end % Scale 2 for ihRow = 1:nRow.Scale2 for ihCol = 1:nCol.Scale2 [cov.Scale2(ihRow,ihCol),heff.Scale2(ihRow,ihCol)] = general_CovModel(theta, ihCol/(nCol.Scale2/nCol.Scale1), ihRow/(nRow.Scale2/nRow.Scale1), a_minor, a_major, sill/(nRow.Scale2*nCol.Scale2), 'Exp'); end end %# Scale-up of fine scale values by averaging [covAvg.Scale2,var_covAvg.Scale2,varNorm_covAvg.Scale2] = general_AverageProperty(nRow.Scale2/nRow.Scale1,nCol.Scale2/nCol.Scale1,1,nRow.Scale1,nCol.Scale1,1,cov.Scale2,1); I am using two functions, general_CovModel() and general_AverageProperty(), in my main script which are given as following: function [cov,h_eff] = general_CovModel(theta, hx, hy, a_minor, a_major, sill, mod_type) % mod_type should be in strings angle_rad = theta*(pi/180); % theta in degrees, angle_rad in radians R_theta = [sin(angle_rad) cos(angle_rad); -cos(angle_rad) sin(angle_rad)]; h = [hx; hy]; lambda = a_minor/a_major; D_lambda = [lambda 0; 0 1]; h_2prime = D_lambda*R_theta*h; h_eff = sqrt((h_2prime(1)^2)+(h_2prime(2)^2)); if strcmp(mod_type,'Sph')==1 || strcmp(mod_type,'sph') ==1 if h_eff<=a cov = sill - sill.*(1.5*(h_eff/a_minor)-0.5*((h_eff/a_minor)^3)); else cov = sill; end elseif strcmp(mod_type,'Exp')==1 || strcmp(mod_type,'exp') ==1 cov = sill-(sill.*(1-exp(-(3*h_eff)/a_minor))); elseif strcmp(mod_type,'Gauss')==1 || strcmp(mod_type,'gauss') ==1 cov = sill-(sill.*(1-exp(-((3*h_eff)^2/(a_minor^2))))); end and function [PropertyAvg,variance_PropertyAvg,NormVariance_PropertyAvg]=... general_AverageProperty(blocksize_row,blocksize_col,blocksize_t,... nUpscaledRow,nUpscaledCol,nUpscaledT,PropertyArray,omega) % This function computes average of a property and variance of that averaged % property using power averaging PropertyAvg=zeros(nUpscaledRow,nUpscaledCol,nUpscaledT); %# Average of property for k=1:nUpscaledT, for j=1:nUpscaledCol, for i=1:nUpscaledRow, sum=0; for a=1:blocksize_row, for b=1:blocksize_col, for c=1:blocksize_t, sum=sum+(PropertyArray((i-1)*blocksize_row+a,(j-1)*blocksize_col+b,(k-1)*blocksize_t+c).^omega); % add all the property values in 'blocksize_x','blocksize_y','blocksize_t' to one variable end end end PropertyAvg(i,j,k)=(sum/(blocksize_row*blocksize_col*blocksize_t)).^(1/omega); % take average of the summed property end end end %# Variance of averageed property variance_PropertyAvg=var(reshape(PropertyAvg,... nUpscaledRow*nUpscaledCol*nUpscaledT,1),1,1); %# Normalized variance of averageed property NormVariance_PropertyAvg=variance_PropertyAvg./(var(reshape(... PropertyArray,numel(PropertyArray),1),1,1)); Question: Using Matlab, I would like to optimize covAvg.Scale2 such that it matches closely with cov.Scale1 by perturbing/varying any (or all) of the following variables 1) a_minor 2) a_major 3) theta Thanks.

  • JSON Formatting with Jersey, Jackson, & json.org/java Parser using Curl Command

    - by socal_javaguy
    Using Java 6, Tomcat 7, Jersey 1.15, Jackson 2.0.6 (from FasterXml maven repo), & www.json.org parser, I am trying to pretty print the JSON String so it will look indented by the curl -X GET command line. I created a simple web service which has the following architecture: My POJOs (model classes): Family.java import javax.xml.bind.annotation.XmlRootElement; @XmlRootElement public class Family { private String father; private String mother; private List<Children> children; // Getter & Setters } Children.java import javax.xml.bind.annotation.XmlRootElement; @XmlRootElement public class Children { private String name; private String age; private String gender; // Getters & Setters } Using a Utility Class, I decided to hard code the POJOs as follows: public class FamilyUtil { public static Family getFamily() { Family family = new Family(); family.setFather("Joe"); family.setMother("Jennifer"); Children child = new Children(); child.setName("Jimmy"); child.setAge("12"); child.setGender("male"); List<Children> children = new ArrayList<Children>(); children.add(child); family.setChildren(children); return family; } } My web service: import java.io.IOException; import javax.ws.rs.GET; import javax.ws.rs.Path; import javax.ws.rs.PathParam; import javax.ws.rs.Produces; import javax.ws.rs.core.MediaType; import org.codehaus.jackson.JsonGenerationException; import org.codehaus.jackson.map.JsonMappingException; import org.codehaus.jackson.map.ObjectMapper; import org.codehaus.jettison.json.JSONException; import org.json.JSONObject; import org.json.JSONTokener; import com.myapp.controller.myappController; import com.myapp.resource.output.HostingSegmentOutput; import com.myapp.util.FamilyUtil; @Path("") public class MyWebService { @GET @Produces(MediaType.APPLICATION_JSON) public static String getFamily() throws IOException, JsonGenerationException, JsonMappingException, JSONException, org.json.JSONException { ObjectMapper mapper = new ObjectMapper(); String uglyJsonString = mapper.writeValueAsString(FamilyUtil.getFamily()); System.out.println(uglyJsonString); JSONTokener tokener = new JSONTokener(uglyJsonString); JSONObject finalResult = new JSONObject(tokener); return finalResult.toString(4); } } When I run this using: curl -X GET http://localhost:8080/mywebservice I get this in my Eclipse's console: {"father":"Joe","mother":"Jennifer","children":[{"name":"Jimmy","age":"12","gender":"male"}]} But from the curl command on the command line (this response is more important): "{\n \"mother\": \"Jennifer\",\n \"children\": [{\n \"age\": \"12\",\n \"name\": \"Jimmy\",\n \"gender\": \"male\"\n }],\n \"father\": \"Joe\"\n}" This is adding newline escape sequences and placing double quotes (but not indenting like it should it does have 4 spaces after the new line but its all in one line). Would appreciate it if someone could point me in the right direction.
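
    The quotes and "\n" escapes in the curl output suggest the returned String is being serialized a second time: a String returned from a resource method that produces application/json gets JSON-encoded by the provider, which wraps it in quotes and escapes its newlines. As a hedged sketch only (hypothetical class name, and assuming the FasterXML Jackson 2.x artifacts mentioned above are the ones actually on the classpath rather than the org.codehaus.jackson 1.x imports shown), one way to get indented output and send it verbatim is:

      import javax.ws.rs.GET;
      import javax.ws.rs.Path;
      import javax.ws.rs.Produces;
      import javax.ws.rs.core.MediaType;
      import javax.ws.rs.core.Response;

      import com.fasterxml.jackson.core.JsonProcessingException;
      import com.fasterxml.jackson.databind.ObjectMapper;
      import com.fasterxml.jackson.databind.SerializationFeature;

      // Sketch of a resource that pretty-prints with Jackson and returns the JSON
      // text as the response entity so it is not re-serialized (and re-escaped).
      @Path("/family")
      public class FamilyResource {

          private static final ObjectMapper MAPPER =
                  new ObjectMapper().enable(SerializationFeature.INDENT_OUTPUT);

          @GET
          @Produces(MediaType.APPLICATION_JSON)
          public Response getFamily() throws JsonProcessingException {
              // FamilyUtil.getFamily() is the hard-coded sample object from the question.
              String prettyJson = MAPPER.writeValueAsString(FamilyUtil.getFamily());
              return Response.ok(prettyJson, MediaType.APPLICATION_JSON).build();
          }
      }

    This drops the extra json.org round trip entirely; curl prints the body exactly as received, so the indentation should then show up on the command line.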

  • Placing component on Glass Pane

    - by Chris Lieb
    I have a subclass of JLabel that forms a component of my GUI. I have implemented the ability to drag and drop the component from one container to another, but without any visual effects. I want to have this JLabel follow the cursor during the drag of the item from one container to another. I figured that I could just create a glass pane and draw it on there. However, even after I add the component to the glass pane, set the component visible, and set the glass pane visible, and set the glass pane as opaque, I still so not see the component. I know the component works because I can add it to the content pane and have it show up. How do I add a component to the glass pane? package wpics509s10t7.view; import javax.swing.*; import wpics509s10t7.model.Tile; import java.awt.*; import java.awt.dnd.DragSource; import java.awt.event.AWTEventListener; import java.awt.event.MouseEvent; /** * GlassPane tutorial * "A well-behaved GlassPane" * http://weblogs.java.net/blog/alexfromsun/ * <p/> * This is the final version of the GlassPane * it is transparent for MouseEvents, * and respects underneath component's cursors by default, * it is also friedly for other users, * if someone adds a mouseListener to this GlassPane * or set a new cursor it will respect them * * @author Alexander Potochkin */ public class GlassPane extends JPanel implements AWTEventListener { private static final long serialVersionUID = 1L; private final JFrame frame; private TileView tv; // subclass of JLabel private Point point; private WordStealApp wsa; public GlassPane(JFrame frame, WordStealApp wsa) { super(null); this.wsa = wsa; this.frame = frame; setOpaque(true); setLayout(null); setVisible(true); composite = AlphaComposite.getInstance(AlphaComposite.SRC_OVER, 0.5f); } public void beginDrag(Tile t, Point p) { this.tv = new TileView(t, null, this.wsa, true); this.add(this.tv); System.out.println("Starting point: x=" + p.getX() + ",y=" + p.getY()); this.tv.setLocation((int)p.getX(), (int)p.getY()); this.tv.setVisible(true); } public void endDrag(Point p) { System.out.println("Ending point: x=" + p.getX() + ",y=" + p.getY()); this.remove(this.tv); this.tv.setVisible(false); this.tv = null; } public void eventDispatched(AWTEvent event) { if (event instanceof MouseEvent) { MouseEvent me = (MouseEvent) event; if (!SwingUtilities.isDescendingFrom(me.getComponent(), frame)) { return; } if (me.getID() == MouseEvent.MOUSE_EXITED && me.getComponent() == frame) { if (tv != null) { tv.setVisible(false); } point = null; } else { MouseEvent converted = SwingUtilities.convertMouseEvent(me.getComponent(), me, frame.getGlassPane()); point = converted.getPoint(); } repaint(); } } /** * If someone adds a mouseListener to the GlassPane or set a new cursor * we expect that he knows what he is doing * and return the super.contains(x, y) * otherwise we return false to respect the cursors * for the underneath components */ @Override public boolean contains(int x, int y) { if (getMouseListeners().length == 0 && getMouseMotionListeners().length == 0 && getMouseWheelListeners().length == 0 && getCursor() == Cursor.getPredefinedCursor(Cursor.DEFAULT_CURSOR)) { return false; } return super.contains(x, y); } }
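
    Two things are easy to miss with glass panes and are not visible in the code shown: the panel has to actually be installed on the frame with setGlassPane() (constructing it is not enough), and because this GlassPane uses a null layout, any child added to it must be given an explicit size or it stays at 0x0 and never paints. The following is a minimal, self-contained sketch with hypothetical names, not the project's TileView/WordStealApp classes, just to show the combination that makes a child visible on a glass pane:

      import java.awt.Color;
      import javax.swing.JFrame;
      import javax.swing.JLabel;
      import javax.swing.JPanel;
      import javax.swing.SwingUtilities;

      // Minimal glass-pane demo: install the pane, keep it non-opaque so the
      // content underneath shows through, and size the child manually because
      // the pane has no layout manager.
      public class GlassPaneDemo {
          public static void main(String[] args) {
              SwingUtilities.invokeLater(() -> {
                  JFrame frame = new JFrame("Glass pane demo");
                  frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                  frame.setSize(400, 300);

                  JPanel glass = new JPanel();
                  glass.setLayout(null);       // children must be positioned and sized by hand
                  glass.setOpaque(false);      // otherwise the pane's background hides the frame content

                  JLabel dragged = new JLabel("dragging...");
                  dragged.setForeground(Color.RED);
                  dragged.setSize(dragged.getPreferredSize()); // without this the label is 0x0 and invisible
                  dragged.setLocation(50, 50);
                  glass.add(dragged);

                  frame.setGlassPane(glass);   // install the pane on the frame
                  glass.setVisible(true);      // glass panes are hidden by default
                  frame.setVisible(true);
              });
          }
      }

    In the posted GlassPane, setOpaque(true) on a visible pane would also paint its background over the whole frame, so a drag-feedback layer is usually left non-opaque.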

  • Recommended ASP.NET Shared Hosting (USA)

    - by coffeeaddict
    OK, I have to admit I'm getting fed up with www.discountasp.net's pricing model, and this annoyance has built up over the past 8 years or so. I've been with them for years and absolutely love them on the technical side; however, it's getting ridiculously expensive for how little you get. Here's my scenario:

      1. I am running 2 SQL Server databases, which cost me $10 each per month, so that's $20/month for the two, and I only get 500 MB of disk space, which is horrible.
      2. I am paying $10/month just for the hosting itself, for which I only get 1 gig of disk space! I mean, come on!
      3. I am simply running 2 small apps (ScrewTurn Wiki & Subtext Blog)... so I don't really care if it's up 99% of the time or not; it's not worth paying a total of $300 just to keep these 2 apps running on discountasp.net.

    Anyone else feel the same? Yes, I know they have great support and probably have great servers running behind this, but in the end I really don't care as long as my site is up 95% of the time or better. Yes, the hosting toolset rocks, but I bet I can find a similar set somewhere else. I like how I can totally control IIS 7 at discountasp and can control my own app pool, etc. That's very powerful and essential. But does anyone have any good alternatives to discountasp that give me close to the same at a much more reasonable cost? I mean, http://www.m6.net/prices.aspx gives you 10 SQL databases for $7 and 200 gigs of disk space! I don't know about their tools or support, but just looking at those numbers and some other hosts I've seen, I feel that discountasp.net is way out of line. They don't even offer any purchasing discounts; for example, it would be nice if my 2nd SQL Server database were only $5/month instead of $10... stuff like this, to make it much more realistic and fair.

    Opinions (from people who have discountasp.net, people who have left them, or people who have another host they like)? But geez, $300 just to host a couple of DBs and lightweight open source apps? Not worth the price they are charging. I'm almost at a price point that would get me a decent dedicated server! I really don't care about support for beta ASP.NET frameworks; that's not a big deal to me. If you have alternative suggestions rather than your own experience with discountasp, I'd like to know how their toolset is. Do you have complete control over your DB in terms of adding users, and the same for the web app pool, etc.? Discountasp.net's control panel rocks. I don't want to lose the ability to at least control and add virtual directories, recycle my dedicated app pool myself, and back up my SQL database myself through tools, which is what discountasp does give you. I'd also want to know that the host at least offers the latest and greatest non-beta ASP.NET-related frameworks to its shared hosting customers.

  • Mystery Key Value Coding Key

    - by Stephen Furlani
    Hello, I'm attempting to load data from an undocumented API (OsiriX). Getting the NSManagedObject like this: NSManagedObject *itemStudy = [[BrowserController databaseOutline] itemAtRow: [[BrowserController databaseOutline] selectedRow]]; works just fine. But getting the NSManagedObject like this: seriesArray = [_context executeFetchRequest:request error:&error]; NSManagedObject *itemSeries = [seriesArray objectAtIndex:0]; Generates an error when I call [itemSeries valueForKey:@"type"] 2010-05-27 11:04:48.178 rcOsirix[27712:7b03] Exception: [<NSManagedObject 0xd30fd0> valueForUndefinedKey:]: the entity Series is not key value coding-compliant for the key "type". This confuses me thoroughly. If I print the KVC values for itemSeries I get this list: 2010-05-27 11:04:48.167 rcOsirix[27712:7b03] KVC comment 2010-05-27 11:04:48.168 rcOsirix[27712:7b03] KVC date 2010-05-27 11:04:48.168 rcOsirix[27712:7b03] KVC dateAdded 2010-05-27 11:04:48.169 rcOsirix[27712:7b03] KVC dateOpened 2010-05-27 11:04:48.169 rcOsirix[27712:7b03] KVC displayStyle 2010-05-27 11:04:48.170 rcOsirix[27712:7b03] KVC id 2010-05-27 11:04:48.170 rcOsirix[27712:7b03] KVC modality 2010-05-27 11:04:48.170 rcOsirix[27712:7b03] KVC name 2010-05-27 11:04:48.171 rcOsirix[27712:7b03] KVC numberOfImages 2010-05-27 11:04:48.171 rcOsirix[27712:7b03] KVC numberOfKeyImages 2010-05-27 11:04:48.171 rcOsirix[27712:7b03] KVC rotationAngle 2010-05-27 11:04:48.172 rcOsirix[27712:7b03] KVC scale 2010-05-27 11:04:48.172 rcOsirix[27712:7b03] KVC seriesDICOMUID 2010-05-27 11:04:48.173 rcOsirix[27712:7b03] KVC seriesDescription 2010-05-27 11:04:48.173 rcOsirix[27712:7b03] KVC seriesInstanceUID 2010-05-27 11:04:48.173 rcOsirix[27712:7b03] KVC seriesSOPClassUID 2010-05-27 11:04:48.174 rcOsirix[27712:7b03] KVC stateText 2010-05-27 11:04:48.174 rcOsirix[27712:7b03] KVC thumbnail 2010-05-27 11:04:48.174 rcOsirix[27712:7b03] KVC windowLevel 2010-05-27 11:04:48.175 rcOsirix[27712:7b03] KVC windowWidth 2010-05-27 11:04:48.175 rcOsirix[27712:7b03] KVC xFlipped 2010-05-27 11:04:48.176 rcOsirix[27712:7b03] KVC xOffset 2010-05-27 11:04:48.176 rcOsirix[27712:7b03] KVC yFlipped 2010-05-27 11:04:48.176 rcOsirix[27712:7b03] KVC yOffset 2010-05-27 11:04:48.177 rcOsirix[27712:7b03] KVC mountedVolume 2010-05-27 11:04:48.177 rcOsirix[27712:7b03] KVC study 2010-05-27 11:04:48.178 rcOsirix[27712:7b03] KVC images The KVC for itemStudy is this: 2010-05-27 10:46:40.336 OsiriX[27266:a0f] KVC accessionNumber 2010-05-27 10:46:40.336 OsiriX[27266:a0f] KVC comment 2010-05-27 10:46:40.336 OsiriX[27266:a0f] KVC date 2010-05-27 10:46:40.336 OsiriX[27266:a0f] KVC dateAdded 2010-05-27 10:46:40.336 OsiriX[27266:a0f] KVC dateOfBirth 2010-05-27 10:46:40.336 OsiriX[27266:a0f] KVC dateOpened 2010-05-27 10:46:40.337 OsiriX[27266:a0f] KVC dictateURL 2010-05-27 10:46:40.337 OsiriX[27266:a0f] KVC expanded 2010-05-27 10:46:40.337 OsiriX[27266:a0f] KVC hasDICOM 2010-05-27 10:46:40.337 OsiriX[27266:a0f] KVC id 2010-05-27 10:46:40.337 OsiriX[27266:a0f] KVC institutionName 2010-05-27 10:46:40.337 OsiriX[27266:a0f] KVC lockedStudy 2010-05-27 10:46:40.337 OsiriX[27266:a0f] KVC modality 2010-05-27 10:46:40.338 OsiriX[27266:a0f] KVC name 2010-05-27 10:46:40.338 OsiriX[27266:a0f] KVC numberOfImages 2010-05-27 10:46:40.338 OsiriX[27266:a0f] KVC patientID 2010-05-27 10:46:40.338 OsiriX[27266:a0f] KVC patientSex 2010-05-27 10:46:40.338 OsiriX[27266:a0f] KVC patientUID 2010-05-27 10:46:40.338 OsiriX[27266:a0f] KVC performingPhysician 2010-05-27 10:46:40.339 OsiriX[27266:a0f] KVC referringPhysician 
2010-05-27 10:46:40.339 OsiriX[27266:a0f] KVC reportURL 2010-05-27 10:46:40.339 OsiriX[27266:a0f] KVC stateText 2010-05-27 10:46:40.339 OsiriX[27266:a0f] KVC studyInstanceUID 2010-05-27 10:46:40.339 OsiriX[27266:a0f] KVC studyName 2010-05-27 10:46:40.339 OsiriX[27266:a0f] KVC windowsState 2010-05-27 10:46:40.339 OsiriX[27266:a0f] KVC albums 2010-05-27 10:46:40.340 OsiriX[27266:a0f] KVC series If I use code: NSDictionary *props = [[item entity] propertiesByName]; for (NSString *s in [props allKeys]) { NSLog(@"KVC %@", s); } Yet itemStudy throws no error if I call [itemStudy valueForKey:@"type"] when it should because there's no KVC for @"type"!!! Granted, the objects are different but neither of them contain the key @"type" and they both should throw errors, yet the Osirix code Tests for both conditions: if ([[item valueForKey:@"type"] isEqualToString:@"Series"]) { ... } if ([[item valueForKey:@"type"] isEqualToString:@"Study"]) { ... } And throws no errors. Yet when I load an NSManagedObject of the same exact model and entity @"Series" it throws the 'no key value' when passed into the conditions above. Am I missing something? Both the superentity and subentities of itemSeries and itemStudy are nil so they don't inherit from something that has KVC @"type". I'm totally at a loss as to explain what is going on. --- EDIT --- I know no one can explain what is going on... but maybe where to start looking? How would itemStudy have the extra KVC @"type" that doesn't show up in it's property list? Thank you for your assistance, -Stephen

  • Persistence classes in Qt

    - by zarzych
    Hi, I'm porting a medium-sized CRUD application from .Net to Qt and I'm looking for a pattern for creating persistence classes. In .Net I usually created abstract persistence class with basic methods (insert, update, delete, select) for example: public class DAOBase<T> { public T GetByPrimaryKey(object primaryKey) {...} public void DeleteByPrimaryKey(object primaryKey) {...} public List<T> GetByField(string fieldName, object value) {...} public void Insert(T dto) {...} public void Update(T dto) {...} } Then, I subclassed it for specific tables/DTOs and added attributes for DB table layout: [DBTable("note", "note_id", NpgsqlTypes.NpgsqlDbType.Integer)] [DbField("note_id", NpgsqlTypes.NpgsqlDbType.Integer, "NoteId")] [DbField("client_id", NpgsqlTypes.NpgsqlDbType.Integer, "ClientId")] [DbField("title", NpgsqlTypes.NpgsqlDbType.Text, "Title", "")] [DbField("body", NpgsqlTypes.NpgsqlDbType.Text, "Body", "")] [DbField("date_added", NpgsqlTypes.NpgsqlDbType.Date, "DateAdded")] class NoteDAO : DAOBase<NoteDTO> { } Thanks to .Net reflection system I was able to achieve heavy code reuse and easy creation of new ORMs. The simplest way to do this kind of stuff in Qt seems to be using model classes from QtSql module. Unfortunately, in my case they provide too abstract an interface. I need at least transactions support and control over individual commits which QSqlTableModel doesn't provide. Could you give me some hints about solving this problem using Qt or point me to some reference materials? Update: Based on Harald's clues I've implemented a solution that is quite similar to the .Net classes above. Now I have two classes. UniversalDAO that inherits QObject and deals with QObject DTOs using metatype system: class UniversalDAO : public QObject { Q_OBJECT public: UniversalDAO(QSqlDatabase dataBase, QObject *parent = 0); virtual ~UniversalDAO(); void insert(const QObject &dto); void update(const QObject &dto); void remove(const QObject &dto); void getByPrimaryKey(QObject &dto, const QVariant &key); }; And a generic SpecializedDAO that casts data obtained from UniversalDAO to appropriate type: template<class DTO> class SpecializedDAO { public: SpecializedDAO(UniversalDAO *universalDao) virtual ~SpecializedDAO() {} DTO defaultDto() const { return DTO; } void insert(DTO dto) { dao->insert(dto); } void update(DTO dto) { dao->update(dto); } void remove(DTO dto) { dao->remove(dto); } DTO getByPrimaryKey(const QVariant &key); }; Using the above, I declare the concrete DAO class as following: class ClientDAO : public QObject, public SpecializedDAO<ClientDTO> { Q_OBJECT public: ClientDAO(UniversalDAO *dao, QObject *parent = 0) : QObject(parent), SpecializedDAO<ClientDTO>(dao) {} }; From within ClientDAO I have to set some database information for UniversalDAO. That's where my implementation gets ugly because I do it like this: QMap<QString, QString> fieldMapper; fieldMapper["client_id"] = "clientId"; fieldMapper["name"] = "firstName"; /* ...all column <-> field pairs in here... */ dao->setFieldMapper(fieldMapper); dao->setTable("client"); dao->setPrimaryKey("client_id"); I do it in constructor so it's not visible at a first glance for someone browsing through the header. In .Net version it was easy to spot and understand. Do you have some ideas how I could make it better?

  • How to validate a Cyrillic email address in Rails 3.1?

    - by iKeler
    Let's say I had the email address like putin-crab@?????????.?? How to validate that address in rails 3.1? My Model(i use Mongoid): #encoding: utf-8 class User include Mongoid::Document field :email, :type => String validates :email, :presence => true, :format => { :with => RFC822::EMAIL } end For validations reqexp i use gem https://github.com/dim/rfc-822 in rails console (normal email): ruby-1.9.2-p290 :001 > usr = User.new( :email => "[email protected]" ) => #<User _id: 4ec627cf4934db7e4d000001, _type: nil, email: "[email protected]"> ruby-1.9.2-p290 :002 > usr.valid? => true in rails console (fu@#ing email): ruby-1.9.2-p290 :003 > usr = User.new( :email => "putin-crab@?????????.??" ) => #<User _id: 4ec627f44934db7e4d000002, _type: nil, email: "putin-crab@?????????.??"> ruby-1.9.2-p290 :004 > usr.valid? Encoding::CompatibilityError: incompatible encoding regexp match (ASCII-8BIT regexp with UTF-8 string) from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/activemodel-3.1.1/lib/active_model/validations/format.rb:9:in `=~' from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/activemodel-3.1.1/lib/active_model/validations/format.rb:9:in `!~' from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/activemodel-3.1.1/lib/active_model/validations/format.rb:9:in `validate_each' from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/activemodel-3.1.1/lib/active_model/validator.rb:153:in `block in validate' from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/activemodel-3.1.1/lib/active_model/validator.rb:150:in `each' from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/activemodel-3.1.1/lib/active_model/validator.rb:150:in `validate' from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/activesupport-3.1.1/lib/active_support/callbacks.rb:302:in `_callback_before_13' from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/activesupport-3.1.1/lib/active_support/callbacks.rb:404:in `_run_validate_callbacks' from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/activesupport-3.1.1/lib/active_support/callbacks.rb:81:in `run_callbacks' from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/mongoid-2.3.3/lib/mongoid/callbacks.rb:42:in `block in run_callbacks' from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/mongoid-2.3.3/lib/mongoid/callbacks.rb:67:in `call' from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/mongoid-2.3.3/lib/mongoid/callbacks.rb:67:in `run_cascading_callbacks' from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/mongoid-2.3.3/lib/mongoid/callbacks.rb:41:in `run_callbacks' from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/activemodel-3.1.1/lib/active_model/validations.rb:212:in `run_validations!' from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/activemodel-3.1.1/lib/active_model/validations/callbacks.rb:53:in `block in run_validations!' 
from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/activesupport-3.1.1/lib/active_support/callbacks.rb:390:in `_run_validation_callbacks' from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/activesupport-3.1.1/lib/active_support/callbacks.rb:81:in `run_callbacks' from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/mongoid-2.3.3/lib/mongoid/callbacks.rb:42:in `block in run_callbacks' from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/mongoid-2.3.3/lib/mongoid/callbacks.rb:67:in `call' from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/mongoid-2.3.3/lib/mongoid/callbacks.rb:67:in `run_cascading_callbacks' from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/mongoid-2.3.3/lib/mongoid/callbacks.rb:41:in `run_callbacks' from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/activemodel-3.1.1/lib/active_model/validations/callbacks.rb:53:in `run_validations!' from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/activemodel-3.1.1/lib/active_model/validations.rb:179:in `valid?' from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/mongoid-2.3.3/lib/mongoid/validations.rb:70:in `valid?' from (irb):4 from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/railties-3.1.1/lib/rails/commands/console.rb:45:in `start' from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/railties-3.1.1/lib/rails/commands/console.rb:8:in `start' from /home/username/.rvm/gems/ruby-1.9.2-p290@rail31/gems/railties-3.1.1/lib/rails/commands.rb:40:in `<top (required)>' from script/rails:6:in `require'
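
    The exception is exactly what the message says: the rfc-822 gem's pattern is an ASCII-8BIT regexp, and Ruby refuses to match it against a UTF-8 address. One workaround sketch is to validate with a UTF-8-compatible regexp instead; the UTF8_EMAIL constant is mine and the pattern is deliberately loose (roughly "something, an @, something containing a dot"), so tighten it to taste:

        # encoding: utf-8
        class User
          include Mongoid::Document
          field :email, :type => String

          # ASCII-only pattern compiled as UTF-8, so it can be matched against
          # internationalised addresses without Encoding::CompatibilityError.
          UTF8_EMAIL = /\A[^@\s]+@[^@\s]+\.[^@\s]+\z/u

          validates :email, :presence => true, :format => { :with => UTF8_EMAIL }
        end

    If the stricter RFC 822 grammar is a requirement, another option is to IDN-encode the domain part before validating (for example with a gem such as simpleidn) so the value handed to the gem's regexp is plain ASCII, though that changes what gets stored.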

    Read the article

  • Which option do you like the most?

    - by spderosso
    I need to show a list of movies ordered by the date they were registered in the system and the amount of comments users have made about them. E.g: Title | Register Date | # of comments Gladiator | 02-01-2010 | 30 Matrix | 01-02-2010 | 20 It is a web application that follows MVC. So, I have the object "Movie", there is a MovieManager interface that is used for retrieving/persisting data to the database and a servlet RecentlyAddedMovies.java that forwards to a .jsp to render the list. I will explain my problem by showing an example of how I am doing another similar task: Showing top ranked movies. Regarding the MovieManager: /** * Returns a collection of the x best-ranked movies * @param x the amount of movies to return * @return A collection of movies * @throws SQLException */ public Collection<Movie> getTopX(int x) throws SQLException; The servlet: public class Top5 extends HttpServlet{ @Override protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException { MovieManager movManager = JDBCMovieManager.getInstance(); try { req.setAttribute("movies", movManager.getTopX(5)); } catch (SQLException e) { // TODO Auto-generated catch block e.printStackTrace(); } req.getRequestDispatcher("listMovies.jsp").forward(req, resp); } } And the listMovies.jsp has is some place something like: <c:forEach var="movie" items="${movies}"> ... <c:out value="${movie.title}"/> ... </c:forEach> The difference between the top5 problem and the problem I am trying to solve is that with the top5 thing all the data I needed to show in the view where part of the "Movie" object, so the MovieManager simply returned a Collection of Movies. Now, the MovieManager must retrieve an ordered list of (Movie, # of comments). The Movie object has a collection of Comment associated but of course, it lacks an instance variable with the length of that collection. The following things came into my mind: a) Create a MovieView object or something like that, that is only used by the "View" and "Controller" layer of the MVC which is not part of the "Model" in the sense that is used only for easy retrieval and display of information. This MovieView object will have a title, registerDate and # of Comments so the thing would look something like this: /** * Returns a collection of the x most recently added movies * @param x the amount of movies to return * @return A collection of movies * @throws SQLException */ public Collection<Movie2> getXRecentlyAddedMovies(int x) throws SQLException; The .jsp: <c:forEach var="movie2" items="${movies2}"> ... <c:out value="${movie2.title}"/> <c:out value="${movie2.registerDate}"/> <c:out value="${movie2.noOfComments}"/> ... </c:forEach> b) Add to the Collection the movie has, empty Comments (there is no actual need to retrieve all the comment data from the database) so that the comment's collection.size() will show the number of comments that movie has. (This sounds horrible to me) c) Add to the "Movie" object an instance field noOfComments with the setter and getter that will help with this type of cases even if, looking only from the object point of view, it is a redundant field since it already has a Collection that is ".size()"able . d) Make the MovieManager retrieve some structure like a Map or a Collection of a two component array or something like that that will contain both a Movie object and the associated number of comments and with JSLT iterate over it and show it accordingly.
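
    For what it's worth, option (a) keeps the domain object clean and still costs a single query: a small read-only view class plus a GROUP BY in the MovieManager implementation. The sketch below assumes plain JDBC and invented table and column names (movie, comment, register_date, and a LIMIT clause), so adjust to the real schema:

        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.util.ArrayList;
        import java.util.Date;
        import java.util.List;

        // Read-only view object used only by the controller/JSP layer.
        public class RecentMovieView {
            private final String title;
            private final Date registerDate;
            private final int commentCount;

            public RecentMovieView(String title, Date registerDate, int commentCount) {
                this.title = title;
                this.registerDate = registerDate;
                this.commentCount = commentCount;
            }
            public String getTitle()      { return title; }
            public Date getRegisterDate() { return registerDate; }
            public int getCommentCount()  { return commentCount; }
        }

    and in the JDBCMovieManager implementation, reusing whatever Connection handling the other DAO methods use:

        public List<RecentMovieView> getRecentlyAddedMovies(int max) throws SQLException {
            String sql = "SELECT m.title, m.register_date, COUNT(c.id) AS comment_count "
                       + "FROM movie m LEFT JOIN comment c ON c.movie_id = m.id "
                       + "GROUP BY m.id, m.title, m.register_date "
                       + "ORDER BY m.register_date DESC LIMIT ?";
            List<RecentMovieView> result = new ArrayList<RecentMovieView>();
            PreparedStatement ps = connection.prepareStatement(sql);
            try {
                ps.setInt(1, max);
                ResultSet rs = ps.executeQuery();
                while (rs.next()) {
                    result.add(new RecentMovieView(rs.getString("title"),
                                                   rs.getTimestamp("register_date"),
                                                   rs.getInt("comment_count")));
                }
            } finally {
                ps.close();
            }
            return result;
        }

    The JSP then iterates over ${movies} exactly as in the top-5 case, reading title, registerDate and commentCount.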

    Read the article

  • How to reduce Entity Framework 4 query compile time?

    - by Rup
    Summary: We're having problems with EF4 query compilation times of 12+ seconds. Cached queries will only get us so far; are there any ways we can actually reduce the compilation time? Is there anything we might be doing wrong we can look for? Thanks! We have an EF4 model which is exposed over the WCF services. For each of our entity types we expose a method to fetch and return the whole entity for display / edit including a number of referenced child objects. For one particular entity we have to .Include() 31 tables / sub-tables to return all relevant data. Unfortunately this makes the EF query compilation prohibitively slow: it takes 12-15 seconds to compile and builds a 7,800-line, 300K query. This is the back-end of a web UI which will need to be snappier than that. Is there anything we can do to improve this? We can CompiledQuery.Compile this - that doesn't do any work until first use and so helps the second and subsequent executions but our customer is nervous that the first usage shouldn't be slow either. Similarly if the IIS app pool hosting the web service gets recycled we'll lose the cached plan, although we can increase lifetimes to minimise this. Also I can't see a way to precompile this ahead of time and / or to serialise out the EF compiled query cache (short of reflection tricks). The CompiledQuery object only contains a GUID reference into the cache so it's the cache we really care about. (Writing this out it occurs to me I can kick off something in the background from app_startup to execute all queries to get them compiled - is that safe?) However even if we do solve that problem, we build up our search queries dynamically with LINQ-to-Entities clauses based on which parameters we're searching on: I don't think the SQL generator does a good enough job that we can move all that logic into the SQL layer so I don't think we can pre-compile our search queries. This is less serious because the search data results use fewer tables and so it's only 3-4 seconds compile not 12-15 but the customer thinks that still won't really be acceptable to end-users. So we really need to reduce the query compilation time somehow. Any ideas? Profiling points to ELinqQueryState.GetExecutionPlan as the place to start and I have attempted to step into that but without the real .NET 4 source available I couldn't get very far, and the source generated by Reflector won't let me step into some functions or set breakpoints in them. The project was upgraded from .NET 3.5 so I have tried regenerating the EDMX from scratch in EF4 in case there was something wrong with it but that didn't help. I have tried the EFProf utility advertised here but it doesn't look like it would help with this. My large query crashes its data collector anyway. I have run the generated query through SQL performance tuning and it already has 100% index usage. I can't see anything wrong with the database that would cause the query generator problems. Is there something O(n^2) in the execution plan compiler - is breaking this down into blocks of separate data loads rather than all 32 tables at once likely to help? Setting EF to lazy-load didn't help. I've bought the pre-release O'Reilly Julie Lerman EF4 book but I can't find anything in there to help beyond 'compile your queries'. I don't understand why it's taking 12-15 seconds to generate a single select across 32 tables so I'm optimistic there's some scope for improvement! Thanks for any suggestions! 
We're running against SQL Server 2008, in case that matters, on XP / 7 / Server 2008 R2 with RTM VS2010.
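
    On the "kick off something from app_startup" idea: the sketch below is the usual shape of that warm-up. The query is compiled once into a static delegate, then executed on a background thread with a throwaway ObjectContext so nothing is shared with request threads. MyEntities, Customer and the Include paths are placeholders, and whether the warmed plan survives long enough still depends on app-pool recycling, as noted above:

        using System;
        using System.Data.Objects;
        using System.Linq;
        using System.Threading;

        public static class QueryWarmup
        {
            // Compiled once per AppDomain; nothing expensive happens until first execution.
            public static readonly Func<MyEntities, int, IQueryable<Customer>> FullCustomer =
                CompiledQuery.Compile((MyEntities ctx, int id) =>
                    ctx.Customers
                       .Include("Orders")
                       .Include("Orders.OrderLines")   // ...plus the remaining Include() paths
                       .Where(c => c.Id == id));

            // Call from Application_Start so the long compile happens off the request path.
            public static void Start()
            {
                ThreadPool.QueueUserWorkItem(delegate
                {
                    try
                    {
                        using (var ctx = new MyEntities())
                        {
                            // Any id will do; the point is forcing plan compilation and caching.
                            FullCustomer(ctx, -1).FirstOrDefault();
                        }
                    }
                    catch (Exception) { /* warm-up is best-effort */ }
                });
            }
        }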

    Read the article

  • java - BigDecimal

    - by Mk12
    I was trying to make my own class for currencies using longs, but apparently I should use BigDecimal (and then whenever I print it just add the $ sign before it). Could someone please get me started? What would be the best way to use BigDecimal for dollar currencies, like keeping it to exactly 2 decimal places for the cents, etc.? The API for BigDecimal is huge, and I don't know which methods to use. Also, BigDecimal has better precision, but isn't that all lost if it passes through a double? If I do new BigDecimal(24.99), how will it be different from using a double? Or should I use the constructor that takes a String instead? EDIT: I decided to use BigDecimal, and then use: private static final java.text.NumberFormat money = java.text.NumberFormat.getCurrencyInstance(); static { money.setRoundingMode(RoundingMode.HALF_EVEN); } and then, whenever I display the BigDecimals, to use money.format(theBigDecimal). Is this alright? Should I have the BigDecimal round it too? Because then it doesn't get rounded before it does another operation... if so, could you show me how? And how should I create the BigDecimals? new BigDecimal("24.99")? Well, after many comments to Vineet Reynolds (thanks for coming back and answering again and again), this is what I have decided. I use BigDecimals and a NumberFormat. Here is where I create the NumberFormat instance: private static final NumberFormat money; static { money = NumberFormat.getCurrencyInstance(Locale.CANADA); money.setRoundingMode(RoundingMode.HALF_EVEN); } Here is my BigDecimal: private final BigDecimal price; Whenever I want to display the price, or another BigDecimal that I got through calculations on price, I use money.format(price) to get the String. Whenever I want to store the price, or a calculation from price, in a database or in a field or anywhere, I use (for a field): myCalculatedResult = price.add(new BigDecimal("34.58")).setScale(2, RoundingMode.HALF_EVEN); ... but I'm thinking now maybe I should not have the NumberFormat round, but when I want to display do this: System.out.print(money.format(price.setScale(2, RoundingMode.HALF_EVEN))); That way the model and the values displayed in the view stay the same. I don't do price = price.setScale(2, RoundingMode.HALF_EVEN); because then it would always round to 2 decimal places and wouldn't be as precise in calculations. So it's all solved now, I guess. But is there any shortcut to typing calculatedResult.setScale(2, RoundingMode.HALF_EVEN) all the time? All I can think of is static importing HALF_EVEN... EDIT: I've changed my mind a bit. I think if I store a value, I won't round it unless I have no more operations to do with it, e.g. if it is the final total that will be charged to someone. I will only round things at the end, or whenever necessary, and I will still use NumberFormat for the currency formatting, but since I always want rounding for display, I made a static method for display: public static String moneyFormat(BigDecimal price) { return money.format(price.setScale(2, RoundingMode.HALF_EVEN)); } So values stored in variables won't be rounded, and I'll use that method to display prices.
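
    Pulling those pieces together, here is a compact sketch of the approach described above (the Money class name and the Locale.CANADA choice are just illustrative): build values from Strings, keep full precision through the arithmetic, and round with HALF_EVEN only at the edges, that is, for display and for final storage.

        import java.math.BigDecimal;
        import java.math.RoundingMode;
        import java.text.NumberFormat;
        import java.util.Locale;

        public final class Money {
            // NumberFormat is not thread-safe; fine for a sketch, synchronise or use
            // per-thread instances in real code.
            private static final NumberFormat FORMAT =
                    NumberFormat.getCurrencyInstance(Locale.CANADA);

            private Money() {}

            // "24.99" stays exactly 24.99; never let the value pass through a double.
            public static BigDecimal of(String value) {
                return new BigDecimal(value);
            }

            // Round only when the value leaves the model (final total, DB column, ...).
            public static BigDecimal rounded(BigDecimal amount) {
                return amount.setScale(2, RoundingMode.HALF_EVEN);
            }

            public static String display(BigDecimal amount) {
                return FORMAT.format(rounded(amount));
            }
        }

        // Usage:
        //   BigDecimal subtotal = Money.of("24.99").add(Money.of("34.58"));
        //   System.out.println(Money.display(subtotal));   // rounds only for output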

    Read the article

  • Complex Rails queries across multiple tables, unions, and will_paginate. Solved.

    - by uberllama
    Hi folks. I've been working on a complex "user feed" type of functionality for a while now, and after experimenting with various union plugins, hacking named scopes, and brute force, have arrived at a solution I'm happy with. S.O. has been hugely helpful for me, so I thought I'd post it here in hopes that it might help others and also to get feedback -- it's very possible that I worked on this so long that I walked down an unnecessarily complicated road. For the sake of my example, I'll use users, groups, and articles. A user can follow other users to get a feed of their articles. They can also join groups and get a feed of articles that have been added to those groups. What I needed was a combined, pageable feed of distinct articles from a user's contacts and groups. Let's begin. user.rb has_many :articles has_many :contacts has_many :contacted_users, :through => :contacts has_many :memberships has_many :groups, :through => :memberships contact.rb belongs_to :user belongs_to :contacted_user, :class_name => "User", :foreign_key => "contacted_user_id" article.rb belongs_to :user has_many :submissions has_many :groups, :through => :submissions group.rb has_many :memberships has_many :users, :through => :memberships has_many :submissions has_many :articles, :through => :submissions Those are the basic models that define my relationships. Now, I add two named scopes to the Article model so that I can get separate feeds of both contact articles and group articles should I desire. article.rb # Get all articles by user's contacts named_scope :by_contacts, lambda {|user| {:joins => "inner join contacts on articles.user_id = contacts.contacted_user_id", :conditions => ["articles.published = 1 and contacts.user_id = ?", user.id]} } # Get all articles in user's groups. This does an additional query to get the user's group IDs, then uses those in an IN clause named_scope :by_groups, lambda {|user| {:select => "DISTINCT articles.*", :joins => :submissions, :conditions => {:submissions => {:group_id => user.group_ids}}} } Now I have to create a method that will provide a UNION of these two feeds into one. Since I'm using Rails 2.3.5, I have to use the construct_finder_sql method to render a scope into its base sql. In Rails 3.0, I could use the to_sql method. user.rb def feed "(#{Article.by_groups(self).send(:construct_finder_sql,{})}) UNION (#{Article.by_contacts(self).send(:construct_finder_sql,{})})" end And finally, I can now call this method and paginate it from my controller using will_paginate's paginate_by_sql method. HomeController.rb @articles = Article.paginate_by_sql(current_user.feed, :page => 1) And we're done! It may seem simple now, but it was a lot of work getting there. Feedback is always appreciated. In particular, it would be great to get away from some of the raw sql hacking. Cheers.
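
    One small follow-up for anyone doing this on Rails 3: relations expose to_sql publicly, so the send(:construct_finder_sql, {}) private-API call disappears and feed shrinks to something like the sketch below (assuming the named scopes are ported to Rails 3 scopes):

        # user.rb (Rails 3 sketch)
        def feed
          group_sql   = Article.by_groups(self).to_sql
          contact_sql = Article.by_contacts(self).to_sql
          "(#{group_sql}) UNION (#{contact_sql})"
        end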

    Read the article

  • Compiling Ruby 1.9.1 hangs and fills swap!

    - by nfm
    I'm compiling Ruby 1.9.1-p376 under Ubuntu 8.04 server LTS (64-bit), by doing the following: $ ./configure $ make $ sudo make install ./configure works without complaints. make hangs indefinitely until all my RAM and swap is gone. It get stuck after the following output: compiling ripper make[1]: Entering directory `/tmp/ruby1.9.1/ruby-1.9.1-p376/ext/ripper' gcc -I. -I../../.ext/include/x86_64-linux -I../.././include -I../.././ext/ripper -I../.. -I../../. -DRUBY_EXTCONF_H=\"extconf.h\" -fPIC -O2 -g -Wall -Wno-parentheses -o ripper.o -c ripper.c If I run the gcc command by hand, with the -v argument to get verbose output, it hangs after the following: Using built-in specs. Target: x86_64-linux-gnu Configured with: ../src/configure -v --enable-languages=c,c++,fortran,objc,obj-c++,treelang --prefix=/usr --enable-shared --with-system-zlib --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --enable-nls --with-gxx-include-dir=/usr/include/c++/4.2 --program-suffix=-4.2 --enable-clocale=gnu --enable-libstdcxx-debug --enable-objc-gc --enable-mpfr --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu Thread model: posix gcc version 4.2.4 (Ubuntu 4.2.4-1ubuntu4) /usr/lib/gcc/x86_64-linux-gnu/4.2.4/cc1 -quiet -v -I. -I../../.ext/include/x86_64-linux -I../.././include -I../.././ext/ripper -I../.. -I../../. -DRUBY_EXTCONF_H="extconf.h" ripper.c -quiet -dumpbase ripper.c -mtune=generic -auxbase-strip ripper.o -g -O2 -Wall -Wno-parentheses -version -fPIC -fstack-protector -fstack-protector -o /tmp/ccRzHvYH.s ignoring nonexistent directory "/usr/local/include/x86_64-linux-gnu" ignoring nonexistent directory "/usr/lib/gcc/x86_64-linux-gnu/4.2.4/../../../../x86_64-linux-gnu/include" ignoring nonexistent directory "/usr/include/x86_64-linux-gnu" ignoring duplicate directory "../.././ext/ripper" ignoring duplicate directory "../../." #include "..." search starts here: #include <...> search starts here: . ../../.ext/include/x86_64-linux ../.././include ../.. /usr/local/include /usr/lib/gcc/x86_64-linux-gnu/4.2.4/include /usr/include End of search list. GNU C version 4.2.4 (Ubuntu 4.2.4-1ubuntu4) (x86_64-linux-gnu) compiled by GNU C version 4.2.4 (Ubuntu 4.2.4-1ubuntu4). GGC heuristics: --param ggc-min-expand=47 --param ggc-min-heapsize=32795 Compiler executable checksum: 6e11fa7ca85fc28646173a91f2be2ea3 I just compiled ruby on another computer for reference, and it took about 10 seconds to print the following output (after the above Compiler executable checksum line): COLLECT_GCC_OPTIONS='-v' '-I.' '-I../../.ext/include/i686-linux' '-I../.././include' '-I../.././ext/ripper' '-I../..' '-I../../.' '-DRUBY_EXTCONF_H="extconf.h"' '-D_FILE_OFFSET_BITS=64' '-fPIC' '-O2' '-g' '-Wall' '-Wno-parentheses' '-o' 'ripper.o' '-c' '-mtune=generic' '-march=i486' as -V -Qy -o ripper.o /tmp/cca4fa7R.s GNU assembler version 2.20 (i486-linux-gnu) using BFD version (GNU Binutils for Ubuntu) 2.20 COMPILER_PATH=/usr/lib/gcc/i486-linux-gnu/4.4.1/:/usr/lib/gcc/i486-linux-gnu/4.4.1/:/usr/lib/gcc/i486-linux-gnu/:/usr/lib/gcc/i486-linux-gnu/4.4.1/:/usr/lib/gcc/i486-linux-gnu/:/usr/lib/gcc/i486-linux-gnu/4.4.1/:/usr/lib/gcc/i486-linux-gnu/ LIBRARY_PATH=/usr/lib/gcc/i486-linux-gnu/4.4.1/:/usr/lib/gcc/i486-linux-gnu/4.4.1/:/usr/lib/gcc/i486-linux-gnu/4.4.1/../../../../lib/:/lib/../lib/:/usr/lib/../lib/:/usr/lib/gcc/i486-linux-gnu/4.4.1/../../../:/lib/:/usr/lib/ COLLECT_GCC_OPTIONS='-v' '-I.' 
'-I../../.ext/include/i686-linux' '-I../.././include' '-I../.././ext/ripper' '-I../..' '-I../../.' '-DRUBY_EXTCONF_H="extconf.h"' '-D_FILE_OFFSET_BITS=64' '-fPIC' '-O2' '-g' '-Wall' '-Wno-parentheses' '-o' 'ripper.o' '-c' '-mtune=generic' '-march=i486' I have absolutely no clue what could be going wrong here - any ideas where I should start?
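
    A couple of hedged things to try rather than a known fix: ripper.c is a huge Bison-generated file, and the gcc 4.2 optimiser is the usual suspect when a single translation unit eats all RAM. Capping the compiler's memory so it fails fast instead of swapping, and retrying just the stuck object at a lower optimisation level, should narrow it down (the CFLAGS override assumes the generated Makefile honours a command-line CFLAGS, which mkmf-generated Makefiles normally do):

        $ ulimit -v 1048576                      # ~1 GB cap: gcc errors out instead of thrashing swap
        $ cd ext/ripper
        $ make ripper.o CFLAGS="-O1 -g -Wall"    # retry the stuck object at -O1 (or -O0)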

    Read the article

  • Spring / Hibernate / JUnit - No Hibernate Session bound to Thread

    - by Marty Pitt
    Hi I'm trying to access the current hibernate session in a test case, and getting the following error: org.hibernate.HibernateException: No Hibernate Session bound to thread, and configuration does not allow creation of non-transactional one here at org.springframework.orm.hibernate3.SpringSessionContext.currentSession(SpringSessionContext.java:63) at org.hibernate.impl.SessionFactoryImpl.getCurrentSession(SessionFactoryImpl.java:574) I've clearly missed some sort of setup, but not sure what. Any help would be greatly appreciated. This is my first crack at Hibernate / Spring etc, and the learning curve is certainly steep! Regards Marty Code follows: The offending class: public class DbUnitUtil extends BaseDALTest { @Test public void exportDtd() throws Exception { Session session = sessionFactory.getCurrentSession(); session.beginTransaction(); Connection hsqldbConnection = session.connection(); IDatabaseConnection connection = new DatabaseConnection(hsqldbConnection); // write DTD file FlatDtdDataSet.write(connection.createDataSet(), new FileOutputStream("test.dtd")); } } Base class: @RunWith(SpringJUnit4ClassRunner.class) @ContextConfiguration(locations={"classpath:applicationContext.xml"}) public class BaseDALTest extends AbstractJUnit4SpringContextTests { public BaseDALTest() { super(); } @Resource protected SessionFactory sessionFactory; } applicationContext.xml: <?xml version="1.0" encoding="UTF-8"?> <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd"> <bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource"> <property name="driverClassName"> <value>org.hsqldb.jdbcDriver</value> </property> <property name="url"> <value>jdbc:hsqldb:mem:sample</value> </property> <property name="username"> <value>sa</value> </property> <property name="password"> <value></value> </property> </bean> <bean id="sessionFactory" class="com.foo.spring.AutoAnnotationSessionFactoryBean"> <property name="dataSource" ref="dataSource" /> <property name="entityPackages"> <list> <value>com.sample.model</value> </list> </property> <property name="schemaUpdate"> <value>true</value> </property> <property name="hibernateProperties"> <props> <prop key="hibernate.dialect">org.hibernate.dialect.HSQLDialect </prop> <prop key="hibernate.show_sql">true</prop> </props> </property> </bean> </beans>
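
    The piece that is usually missing from a setup like this is a transaction manager plus a transactional test context: getCurrentSession() with Spring's SpringSessionContext only hands back a Session inside a Spring-managed transaction, which is exactly what the error message is complaining about. A sketch of the two additions (the bean id "transactionManager" is the default name the test framework looks for):

        <!-- applicationContext.xml -->
        <bean id="transactionManager"
              class="org.springframework.orm.hibernate3.HibernateTransactionManager">
            <property name="sessionFactory" ref="sessionFactory" />
        </bean>

    and then let the TestContext framework open a transaction (and bind a Session) around each test:

        @RunWith(SpringJUnit4ClassRunner.class)
        @ContextConfiguration(locations = {"classpath:applicationContext.xml"})
        @Transactional   // org.springframework.transaction.annotation.Transactional
        public class BaseDALTest extends AbstractJUnit4SpringContextTests {
            @Resource
            protected SessionFactory sessionFactory;
        }

    With that in place the explicit session.beginTransaction() in the test becomes unnecessary, and each test rolls back by default.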

    Read the article

  • Post calls RedirectToAction but view specified in RedirectToAction is not rendered

    - by W4IK
    Here's some of the html <form id="frmSubmit" action="/Viewer" style="display:none;"> <div id="renderSubmit" class="renderReport"> <input type="hidden" name="reportYear" id="reportYear" value="" /> <input type="hidden" name="reportMonth" id="reportMonth" value="" /> <input type="hidden" name="propIds" id="propIds" value="" /> <input type="hidden" name="reportName" id="reportName" value="" /> <input type="hidden" name="reportYearFrom" id="reportYearFrom" value="" /> <input type="hidden" name="reportMonthFrom" id="reportMonthFrom" value="" /> <input type="hidden" name="reportYearTo" id="reportYearTo" value="" /> <input type="hidden" name="reportMonthTo" id="reportMonthTo" value="" /> </div> </form> a little further down the page <div id="reportList" class="renderReport"> <fieldset style="width:105%;"> <legend class="reportStepLegend">Step 3. <br /> Click a report name below to view a report</legend> <br /> <% foreach (ReportMetaData item in (ReportMetaDataContainer)ViewData.Model) { %> <div> <input id=<%=item.SSRSName%> type="button" class="reportLink" value="<%=item.DisplayName%>" /> </div> <%}%> </fieldset> </div> Here's the javascript that gets called when the button is clicked $('.reportLink').click(function() { if (CheckDateAndProps() === true) { $('#reportName').val(this.id); var formData = $("#frmSubmit").serializeArray(); $.post('Home/PostViewer/', formData); } }); Note...i did have the $.post like so earlier...but it didn't seem to make any difference $.post('Home/PostViewer/', formData, function(data) { alert(data.Result); }, "json"); Here's the controller code [AcceptVerbs(HttpVerbs.Post)] public ActionResult PostViewer(string reportYear, string reportMonth, string propIds, string reportName, string reportYearFrom, string reportMonthFrom, string reportYearTo, string reportMonthTo) { return RedirectToAction("Viewer"); } All is good in the world up to this point..I'm hitting the above method and all the values are populated. Here's the get ActionResult method [AcceptVerbs(HttpVerbs.Get)] public ActionResult Viewer(string reportYear, string reportMonth, string propIds, string reportName, string reportYearFrom, string reportMonthFrom, string reportYearTo, string reportMonthTo) { return View(); } I'm hitting this too...not seeing any values in the parameters...but that's just because I haven't passed them in yet...I don't think that's whats keeping the Viewer page from displaying.? Now...one would one expect the Viewer view to be rendered...right?...well all I see is the page that this was called from...it never displays the Viewer page???!!?!? Here's the routes from global.asax routes.MapRoute( "Viewer", // Route name "Home/Viewer", // URL with parameters new { controller = "Home", action = "Viewer" } // Parameter defaults ); routes.MapRoute( "Default", // Route name "{controller}/{action}/{id}", // URL with parameters new { controller = "Home", action = "Index", id = "" } // Parameter defaults ); I can browse directly to the page http://localhost:50083/Home/Viewer and when I do so I hit the get ActionResult method and the page renders just fine. Any help is greatly appreciated!
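
    That behaviour is expected with an AJAX post: the RedirectToAction goes back to jQuery's XMLHttpRequest, which silently follows it and hands the Viewer markup to a callback that never touches the page, so the browser window itself is never told to navigate. Since GET /Home/Viewer already works when typed by hand, one minimal sketch is to drop the $.post and let the browser issue that GET with the form fields in the query string:

        $('.reportLink').click(function () {
            if (CheckDateAndProps() !== true) return;
            $('#reportName').val(this.id);
            // Navigate instead of posting; the Viewer GET action reads the values
            // from the query string via its existing parameters.
            window.location.href = '/Home/Viewer?' + $('#frmSubmit').serialize();
        });

    If the POST has to stay, the usual alternative is to have PostViewer return Json(new { redirectTo = Url.Action("Viewer", new { reportYear, reportMonth, reportName }) }) and set window.location.href = data.redirectTo in the $.post success callback.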

    Read the article

  • Open XML SDK 2.0 - Split table to new power point slide when content flows off current slide

    - by amurra
    I have a bunch of data that I need to export from a website to a power point presentation and have been using Open XML SDK 2.0 to perform this task. I have a power point presentation that I am putting through Open XML SDK 2.0 Productivity Tool to generate the template code that I can use to recreate the export. On one of those slides I have a table and the requirement is to add data to that table and break that table across multiple slides if the table exceeds the bottom of the slide. The approach I have taken is to determine the height of the table and if it exceeds the height of the slide, move that new content into the next slide. I have read Bryan and Jones blog on adding repeating data to a power point slide, but my scenario is a little different. They use the following code: A.Table tbl = current.Slide.Descendants<A.Table>().First(); A.TableRow tr = new A.TableRow(); tr.Height = heightInEmu; tr.Append(CreateDrawingCell(imageRel + imageRelId)); tr.Append(CreateTextCell(category)); tr.Append(CreateTextCell(subcategory)); tr.Append(CreateTextCell(model)); tr.Append(CreateTextCell(price.ToString())); tbl.Append(tr); imageRelId++; This won't work for me since they know what height to set the table row to since it will be the height of the image, but when adding in different amounts of text I do not know the height ahead of time so I just set tr.Heightto a default value. Here is my attempt at figuring at the table height: A.Table tbl = tableSlide.Slide.Descendants<A.Table>().First(); A.TableRow tr = new A.TableRow(); tr.Height = 370840L; tr.Append(PowerPointUtilities.CreateTextCell("This"); tr.Append(PowerPointUtilities.CreateTextCell("is")); tr.Append(PowerPointUtilities.CreateTextCell("a")); tr.Append(PowerPointUtilities.CreateTextCell("test")); tr.Append(PowerPointUtilities.CreateTextCell("Test")); tbl.Append(tr); tableSlide.Slide.Save(); long tableHeight = PowerPointUtilities.TableHeight(tbl); Here are the helper methods: public static A.TableCell CreateTextCell(string text) { A.TableCell tableCell = new A.TableCell( new A.TextBody(new A.BodyProperties(), new A.Paragraph(new A.Run(new A.Text(text)))), new A.TableCellProperties()); return tableCell; } public static Int64Value TableHeight(A.Table table) { long height = 0; foreach (var row in table.Descendants<A.TableRow>() .Where(h => h.Height.HasValue)) { height += row.Height.Value; } return height; } This correctly adds the new table row to the existing table, but when I try and get the height of the table, it returns the original height and not the new height. The new height meaning the default height I initially set and not the height after a large amount of text has been inserted. It seems the height only gets readjusted when it is opened in power point. I have also tried accessing the height of the largest table cell in the row, but can't seem to find the right property to perform that task. My question is how do you determine the height of a dynamically added table row since it doesn't seem to update the height of the row until it is opened in power point? Any other ways to determine when to split content to another slide while using Open XML SDK 2.0? I'm open to any suggestion on a better approach someone might have taken since there isn't much documentation on this subject.
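
    As far as I can tell the stored Height is only an input to PowerPoint's layout pass; the actual row heights are recalculated when the file is opened, so reading row.Height back will always return whatever was written. The usual workaround is to estimate the rendered height yourself from text length, font size and column width, keep a running total, and start appending into a cloned slide once it passes the slide's usable height. A rough sketch follows; every numeric factor in it is an assumption to tune against your template, not an Open XML API value:

        using System;
        using System.Linq;
        using A = DocumentFormat.OpenXml.Drawing;

        public static class TableMeasurer
        {
            private const long EmuPerPoint = 12700;

            // Estimates the rendered height of a row by guessing how many lines the
            // longest cell will wrap to at the given column width and font size.
            public static long EstimateRowHeightEmu(A.TableRow row, long columnWidthEmu,
                                                    double fontPoints = 18.0,
                                                    double charWidthFactor = 0.6)
            {
                long lineHeightEmu = (long)(fontPoints * 1.2 * EmuPerPoint);  // ~20% leading
                int maxLines = 1;

                foreach (A.TableCell cell in row.Elements<A.TableCell>())
                {
                    string text = string.Concat(cell.Descendants<A.Text>().Select(t => t.Text));
                    double charWidthEmu = fontPoints * charWidthFactor * EmuPerPoint;
                    int charsPerLine = Math.Max(1, (int)(columnWidthEmu / charWidthEmu));
                    int lines = Math.Max(1, (int)Math.Ceiling(text.Length / (double)charsPerLine));
                    maxLines = Math.Max(maxLines, lines);
                }

                long declared = (row.Height != null && row.Height.HasValue) ? row.Height.Value : 0L;
                return Math.Max(declared, maxLines * lineHeightEmu);
            }
        }

    Summing EstimateRowHeightEmu over the rows about to be appended, plus the table's current vertical offset, and comparing that against the slide's content area (6858000 EMU for a standard 7.5-inch-tall slide, minus margins) tells you when to stop and continue in a cloned slide.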

    Read the article

< Previous Page | 526 527 528 529 530 531 532 533 534 535 536 537  | Next Page >