Search Results

Search found 9492 results on 380 pages for 'logic unit'.

  • serializing type definitions?

    - by Dave
    I'm not positive I'm going about this the right way. I've got a suite of applications that have varying types of output (custom-defined types). For example, I might have a type called Widget:

        Class Widget
            Public name As String
        End Class

    Throughout the course of operation, when a user experiences a certain condition, the application will take the instance of Widget that the user received, serialize it, and log it to the database, noting the name of the type.

    I have other applications that do something similar, but instead of dealing with Widget it could be some totally random other type with different attributes; again, I serialize the instance, log it to the db, and note the name of the type. I have maybe a half dozen different types and don't anticipate too many additional ones in the future.

    After all this is said and done, I have an admin interface that looks through these logs and lets the user view the contents of the data that has been logged. The admin app has a reference to all the types involved and, with some basic switch-case logic hinged on the name of the type, casts each entry back into its original type and passes it on to a handler with basic display logic that spits the data back out in a readable format (one display handler for each type).

    Now... all this is well and good... until one day my model changes. The Widget class has now deprecated the name attribute and added a bunch of other attributes. I will of course get type mismatches on the admin side when I try to reconstitute this data. I was wondering if there is some way, at runtime, I could reflect through my code, get a snapshot of the type definition at that precise moment, serialize it, and store it along with the data, so that I could somehow use it to reconstitute the data in the future?
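
    One direction worth exploring, sketched below as an assumption rather than a known-good recipe: reflection can capture the shape of a type (member names and type names) at log time, and that snapshot can be stored in its own column next to the serialized instance. Describe is a hypothetical helper, not part of any framework:

        Imports System.Reflection
        Imports System.Text

        Public Module TypeSnapshot
            ' Builds a simple "member:type" listing of a type's public surface,
            ' suitable for storing alongside each serialized instance.
            Public Function Describe(ByVal t As Type) As String
                Dim sb As New StringBuilder()
                sb.AppendLine(t.AssemblyQualifiedName)
                For Each f As FieldInfo In t.GetFields(BindingFlags.Public Or BindingFlags.Instance)
                    sb.AppendLine(f.Name & ":" & f.FieldType.FullName)
                Next
                For Each p As PropertyInfo In t.GetProperties(BindingFlags.Public Or BindingFlags.Instance)
                    sb.AppendLine(p.Name & ":" & p.PropertyType.FullName)
                Next
                Return sb.ToString()
            End Function
        End Module

    The admin side could then compare the stored snapshot against the current type before casting, and fall back to a generic name/value display when the two no longer match.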

  • HSQLdb permissions regarding OpenJPA

    - by wishi_
    Hi! I'm (still) having loads of issues with HSQLdb and OpenJPA:

        Exception in thread "main" <openjpa-1.2.0-r422266:683325 fatal store error>
        org.apache.openjpa.persistence.RollbackException: user lacks privilege or
        object not found: OPENJPA_SEQUENCE_TABLE
        {SELECT SEQUENCE_VALUE FROM PUBLIC.OPENJPA_SEQUENCE_TABLE WHERE ID = ?}
        [code=-5501, state=42501]
            at org.apache.openjpa.persistence.EntityManagerImpl.commit(EntityManagerImpl.java:523)
            at model_layer.EntityManagerHelper.commit(EntityManagerHelper.java:46)
            at HSQLdb_mvn_openJPA_autoTables.App.main(App.java:23)

    The HSQLdb instance is running as a server process, bound to port 9001 on my local machine. The user is SA. It's configured as follows:

        <persistence-unit name="HSQLdb_mvn_openJPA_autoTablesPU" transaction-type="RESOURCE_LOCAL">
          <provider>
            org.apache.openjpa.persistence.PersistenceProviderImpl
          </provider>
          <class>model_layer.Testobjekt</class>
          <class>model_layer.AbstractTestobjekt</class>
          <properties>
            <property name="openjpa.ConnectionUserName" value="SA" />
            <property name="openjpa.ConnectionPassword" value=""/>
            <property name="openjpa.ConnectionDriverName" value="org.hsqldb.jdbc.JDBCDriver" />
            <property name="openjpa.ConnectionURL" value="jdbc:hsqldb:hsql://localhost:9001/mydb" />
            <!--
            <property name="openjpa.jdbc.SynchronizeMappings" value="buildSchema(ForeignKeys=true)" />
            -->
          </properties>
        </persistence-unit>

    I can make a successful connection with my ORM layer, and I can create and connect to my EntityManager. However, each time I use EntityManagerHelper.commit() it fails with that error, which makes no sense to me. SA is the standard admin user I used to create the table; it should be able to persist into HSQLdb as this user.
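
    The error is about OpenJPA's own sequence table rather than any entity table, which suggests that table simply does not exist in this database. A hedged sketch of two possible fixes, with names taken from the error message: either re-enable the commented-out openjpa.jdbc.SynchronizeMappings property so OpenJPA builds its support tables itself, or create the sequence table by hand:

        -- Assumed from the error text: OpenJPA's default sequence table,
        -- which it consults on commit when generating IDs.
        CREATE TABLE PUBLIC.OPENJPA_SEQUENCE_TABLE (
            ID TINYINT NOT NULL PRIMARY KEY,
            SEQUENCE_VALUE BIGINT
        );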

  • Notifying view controller when subview touch events occur.

    - by Nebs
    I have a UIViewController whose view has a custom subview. This custom subview needs to track touch events and report swipe gestures. Currently I put touchesBegan, touchesMoved, touchesEnded and touchesCancelled in the subview class. With some extra logic I am able to detect swipe gestures and call my handleRightSwipe and handleLeftSwipe methods. So now when I swipe within the subview, it calls its local swipe-handling methods. This all works fine.

    But what I really need is for the handleRightSwipe and handleLeftSwipe methods to be in the view controller. I could leave them in the subview class, but then I'd have to bring in all the logic and data as well, and that kind of breaks the MVC idea.

    So my question is: is there a clean way to handle this? Essentially I want to keep my touch event methods in the subview so that they only trigger for that specific view, but I also want the view controller to be informed when these touch events (or in this case swipe gestures) occur. Any ideas? Thanks.
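
    The usual way to keep the tracking in the view and the responses in the controller is a delegate protocol. A minimal sketch, with hypothetical names throughout:

        @protocol SwipeViewDelegate <NSObject>
        - (void)swipeViewDidSwipeLeft:(UIView *)view;
        - (void)swipeViewDidSwipeRight:(UIView *)view;
        @end

        @interface SwipeView : UIView
        // assign, not retain: the controller owns the view, not vice versa
        @property (nonatomic, assign) id<SwipeViewDelegate> delegate;
        @end

        // Inside the subview's touch handling, once a swipe is recognized:
        //     [self.delegate swipeViewDidSwipeRight:self];

    The view controller adopts SwipeViewDelegate, sets itself as the subview's delegate, and keeps the handleLeftSwipe/handleRightSwipe logic (and its data) to itself.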

  • Why use short-circuit code?

    - by Tim Lytle
    Related questions: Benefits of using short-circuit evaluation; Why would a language NOT use short-circuit evaluation?; Can someone explain this line of code please? (Logic & Assignment operators)

    There are questions about the benefits of a language using short-circuit code, but I'm wondering what the benefits are for a programmer. Is it just that it can make code a little more concise? Or are there performance reasons?

    I'm not asking about situations where two entities need to be evaluated anyway, for example:

        if ($user->auth() AND $model->valid()) {
            $model->save();
        }

    To me the reasoning there is clear: since both need to be true, you can skip the more costly model validation if the user can't save the data. This also has a (to me) obvious purpose:

        if (is_string($userid) AND strlen($userid) > 10) {
            // do something
        }

    because it wouldn't be wise to call strlen() with a non-string value.

    What I'm wondering about is the use of short-circuit code when it doesn't affect any other statements. For example, from the Zend Application default index page:

        defined('APPLICATION_PATH')
            || define('APPLICATION_PATH', realpath(dirname(__FILE__) . '/../application'));

    This could have been:

        if (!defined('APPLICATION_PATH')) {
            define('APPLICATION_PATH', realpath(dirname(__FILE__) . '/../application'));
        }

    Or even as a single statement:

        if (!defined('APPLICATION_PATH'))
            define('APPLICATION_PATH', realpath(dirname(__FILE__) . '/../application'));

    So why use the short-circuit code? Just for the 'coolness' factor of using logic operators in place of control structures? To consolidate nested if statements? Because it's faster?

  • Cannot use READPAST in snapshot isolation mode

    - by Marcus
    I have a process, called from multiple threads, which does the following:

        1. Start a transaction.
        2. Select a unit of work from a work table by finding the next row
           where IsProcessed = 0, with hints (UPDLOCK, HOLDLOCK, READPAST).
        3. Process the unit of work (C# and SQL stored procedures).
        4. Commit the transaction.

    The idea is that a thread dips into the pool for the "next" piece of work and processes it; the locks ensure that a single piece of work is not processed twice (the order doesn't matter).

    All this has been working fine for months. Until today, that is, when I happened to realise that despite enabling snapshot isolation and making it the default at the database level, the actual transaction-creation code was manually setting an isolation level of ReadCommitted. I duly changed that to Snapshot, and of course immediately received the "You can only specify the READPAST lock in the READ COMMITTED or REPEATABLE READ" error message. Oops!

    The main reason for locking the row was to "mark" it in such a way that the mark would be removed when the transaction that applied it committed, and the lock seemed the best way to do this, since this table isn't read otherwise except by these threads. If I were to use the IsProcessed flag as the lock, then presumably I would need to do the update first and then select the row I just updated, but I would need to employ the NOLOCK hint to know whether any other thread had set the flag on a row. It all sounds a bit messy.

    The easiest option would be to abandon snapshot isolation altogether, but the design of step 3 requires it. Any bright ideas on the best way to resolve this problem?

    Thanks, Marcus
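
    One hedged possibility, sketched with assumed table and column names: SQL Server accepts READPAST when the individual statement runs under locking read-committed, and the READCOMMITTEDLOCK table hint forces exactly that for one table even while the session default is snapshot. That would let a single atomic statement claim the next row while step 3 keeps its snapshot semantics:

        UPDATE TOP (1) dbo.WorkTable WITH (UPDLOCK, READPAST, READCOMMITTEDLOCK)
        SET    IsProcessed = 1
        OUTPUT inserted.WorkId          -- hypothetical key column
        WHERE  IsProcessed = 0;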

  • iPhone UnitTesting UITextField value and otest error 133

    - by Justin Galzic
    Are UITextFields not meant to be part of the logic-test target and instead part of the application-test target? I have a factory class that is responsible for creating and returning an (iPhone) UITextField, and I'm trying to unit test it. It is part of my logic-test target, and when I try to build and run the tests, I get a build error:

        /Developer/Tools/RunPlatformUnitTests.include:451:0
        Test rig '/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator3.1.2.sdk/Developer/usr/bin/otest'
        exited abnormally with code 133 (it may have crashed).

    In the build window, this points to the following line in RunPlatformUnitTests.include:

        RPUTIFail ${LINENO} "Test rig '${TEST_RIG}' exited abnormally with code ${TEST_RIG_RESULT} (it may have crashed)."

    My unit test looks like this:

        #import <SenTestingKit/SenTestingKit.h>
        #import <UIKit/UIKit.h>

        // Test-subject headers.
        #import "TextFieldFactory.h"

        @interface TextFieldFactoryTests : SenTestCase {
        }
        @end

        @implementation TextFieldFactoryTests

        #pragma mark Test Setup/teardown
        - (void)setUp {
            NSLog(@"%@ setUp", self.name);
        }

        - (void)tearDown {
            NSLog(@"%@ tearDown", self.name);
        }

        #pragma mark Tests
        - (void)testUITextField_NotASecureField {
            NSLog(@"%@ start", self.name);
            UITextField *textField = [TextFieldFactory createTextField:YES];
            NSLog(@"%@ end", self.name);
        }

        @end

    The class I'm trying to test:

        // Header file
        #import <Foundation/Foundation.h>
        #import <UIKit/UIKit.h>

        @interface TextFieldFactory : NSObject {
        }

        +(UITextField *)createTextField:(BOOL)isSecureField;

        @end

        // Implementation file
        #import "TextFieldFactory.h"

        @implementation TextFieldFactory

        +(UITextField *)createTextField:(BOOL)isSecureField {
            // x, y, z, w are constants declared elsewhere
            UITextField *textField = [[[UITextField alloc]
                initWithFrame:CGRectMake(x, y, z, w)] autorelease];
            // some initialization code
            return textField;
        }

        @end

  • OrientationEventListener not working properly

    - by nixau
    Hi all, I need to handle orientation changes in my Android application. For this purpose I decided to use the OrientationEventListener convenience class, but its callback method shows somewhat strange behavior.

    My application starts in portrait mode and eventually switches to landscape. I have some custom code executing in the onOrientationChanged callback that provides additional UI-handling logic; it makes a few calls to findViewById. What is strange is that when switching back from landscape to portrait, onOrientationChanged is called twice, and what's even worse, the second call is dealing with a bad Context: findViewById starts returning null. These calls are made right from the main thread:

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            listener = new OrientationListener();
        }

        @Override
        protected void onResume() {
            super.onResume();
            // enabling listening
            listener.enable();
        }

        @Override
        protected void onPause() {
            super.onPause();
            // disabling listening
            listener.disable();
        }

    I've replicated the same behavior with a dummy Activity without any logic except the orientation handling. I initiate the orientation switch on the Android 2.2 emulator by pressing Ctrl+F11. What could be wrong?

    Update: the inner class that extends OrientationEventListener:

        private class OrientationListener extends OrientationEventListener {
            public OrientationListener() {
                super(getBaseContext());
            }

            @Override
            public void onOrientationChanged(int orientation) {
                toString();
            }
        }
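
    For what it's worth, OrientationEventListener reports raw sensor degrees, so multiple callbacks per physical rotation are normal. A minimal debouncing sketch (assuming only the coarse orientation matters) reduces the noise to one event per quadrant change:

        private int lastQuadrant = -1;

        @Override
        public void onOrientationChanged(int orientation) {
            if (orientation == OrientationEventListener.ORIENTATION_UNKNOWN) {
                return; // device is flat, no usable reading
            }
            int quadrant = ((orientation + 45) / 90) % 4; // 0, 1, 2, 3
            if (quadrant != lastQuadrant) {
                lastQuadrant = quadrant;
                // react to the new orientation here
            }
        }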

  • Writing files in App_Data causes TempData to be null

    - by RAMX
    I have a small ASP.NET MVC 1 web app that can store files and create directories in the App_Data directory. When the write operation succeeds, I add a message to TempData and do a RedirectToRoute. The problem is that TempData is null when the target action executes. If I write the files to a directory outside of the web application's root directory, TempData is not null and everything works correctly. Any ideas why writing in App_Data seems to clear TempData?

    Edit: if DRS.Logic.Repository.Manager.CreateFile(path, hpf, comment) writes into App_Data, TempData will be null in the action being redirected to. If it is a directory outside the web app root, it is fine. No exceptions are being thrown.

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Create(int id, string path, FormCollection form)
        {
            ViewData["path"] = path;
            ViewData["id"] = id;
            HttpPostedFileBase hpf;
            string comment = form["FileComment"];
            hpf = Request.Files["File"] as HttpPostedFileBase;
            if (hpf.ContentLength != 0)
            {
                DRS.Logic.Repository.Manager.CreateFile(path, hpf, comment);
                TempData["notification"] = "file was created";
                return RedirectToRoute(new
                {
                    controller = "File",
                    action = "ViewDetails",
                    id = id,
                    path = path + Path.GetFileName(hpf.FileName)
                });
            }
            else
            {
                TempData["notification"] = "No file were selected.";
                return View();
            }
        }
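
    A plausible explanation, offered as an assumption rather than a diagnosis: TempData is kept in session state, and ASP.NET's file-change monitoring watches the application root, so creating files and directories under App_Data can recycle the app domain mid-redirect, wiping in-process session along with TempData. Writing outside the web root would not trigger this, which matches the symptom. If that is the cause, an out-of-process session store survives the recycle; a minimal web.config sketch:

        <system.web>
          <sessionState mode="StateServer"
                        stateConnectionString="tcpip=localhost:42424" />
        </system.web>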

  • XAML Deserialization problem

    - by mrwayne
    Hi, I have a block of XAML which I'm trying to deserialize. For argument's sake, let's say it looks like this:

        <NS:SomeObject>
          <NS:SomeObject.SomeProperty>
            <NS:SomeDifferentObject SomeOtherProp="a value"/>
          </NS:SomeObject.SomeProperty>
        </NS:SomeObject>

    which I deserialize using the following code:

        XamlReader.Load(File.OpenRead(@"c:\SomeFile.xaml"))

    I have two solutions: one I use for unit testing, and another for my web application. In the unit-testing solution it deserializes fine and works as expected. However, when I try to deserialize in the other project I keep getting an exception like the following:

        'NameSpace.SomeObject' value cannot be assigned to property 'SomeProperty'
        of object 'NameSpace.SomeObject'. Object of type 'NameSpace.SomeObject'
        cannot be converted to type 'NameSpace.SomeObject'.

    It's as if it is getting confused or instantiating two different types of objects. Note: I do not have similarly named classes or any sort of namespace conflict. The same code executes fine in one solution and not the other, and the same project files are referenced in both. Please help!

  • NetBeans Profiler JUnit 4 problem

    - by Krishna K
    I have a unit test that takes 200 seconds to run. I am trying to use the NetBeans profiler to speed it up, but the profiler doesn't run the unit test. It just creates an object of the test and exits; it doesn't run the actual test methods or the @Before/@After methods. This is a Maven project with Surefire and JUnit 4. Partial output is below:

        Profiler Agent: Waiting for connection on port 5140, timeout 10 seconds (Protocol version: 9)
        Profiler Agent: Established local connection with the tool
        -------------------------------------------------------
         T E S T S
        -------------------------------------------------------
        Running com.cris.puzzle.solvers.SudokuSolverTest
        Tests run: 0, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.031 sec

        Results :

        Tests run: 0, Failures: 0, Errors: 0, Skipped: 0

        Profiler Agent: Connection with agent closed
        Profiler Agent: Connection with agent closed
        Profiler Agent: Initializing...
        Profiler Agent: Options: >C:/Program Files/NetBeans 6.8/profiler3/lib,5140,10<
        Profiler Agent: Initialized succesfully
        ------------------------------------------------------------------------
        BUILD SUCCESSFUL
        ------------------------------------------------------------------------
        Total time: 14 seconds

    Does anyone know how to make it work? Thank you.

  • simpletest - Why does setReturnValue() seem to change behaviour depending on if test is run in isolation?

    - by JW
    I am using SimpleTest version 1.0.1 for a unit test. I create a new mock object within a test method, and on it I do:

        $MockDbAdaptor->setReturnValue('query', 1);

    Now, when I run this as a standalone unit test, my tested object is happy to see 1 returned when query() is called on the mock db adaptor. However, when this exact same test is run as part of my 'all_tests' TestSuite, the test fails. This happens because a call to the mock's query() method does not appear to return any value, causing my test subject to complain and trigger an unexpected exception that fails the test.

    So the behaviour of setReturnValue() seems to change depending on whether the test is run in isolation or not. I can get it to work in both standalone and TestSuite contexts by using this instead:

        $MockDbAdaptor->setReturnValueAt(0, 'query', 1);

    So my immediate problem can be fixed, but it feels like a hack. If I create a new mock within a test method, why is the setReturnValue() behaviour affected by the context in which the test class instance is run? It feels like a bug.

  • How to inject a Session Bean into a Message Driven Bean?

    - by Hank
    Hi guys, I'm reasonably new to Java EE, so this might be stupid... bear with me please. :D

    I would like to inject a stateless session bean into a message-driven bean. Basically, the MDB gets a JMS message, then uses a session bean to perform the work. The session bean holds the business logic.

    Here's my session bean:

        @Stateless
        public class TestBean implements TestBeanRemote {

            public void doSomething() {
                // business logic goes here
            }
        }

    The matching interface:

        @Remote
        public interface TestBeanRemote {

            public void doSomething();
        }

    Here's my MDB:

        @MessageDriven(mappedName = "jms/mvs.TestController", activationConfig = {
            @ActivationConfigProperty(propertyName = "acknowledgeMode", propertyValue = "Auto-acknowledge"),
            @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue")
        })
        public class TestController implements MessageListener {

            @EJB
            private TestBean testBean;

            public TestController() {
            }

            public void onMessage(Message message) {
                testBean.doSomething();
            }
        }

    So far, not rocket science, right? Unfortunately, when deploying this to GlassFish v3 and sending a message to the appropriate JMS queue, I get errors saying that GlassFish is unable to locate the TestBean EJB:

        java.lang.IllegalStateException: Exception attempting to inject Remote ejb-ref
            name=mvs.test.TestController/testBean,Remote 3.x interface =mvs.test.TestBean,
            ejb-link=null,lookup=null,mappedName=,jndi-name=mvs.test.TestBean,refType=Session
            into class mvs.test.TestController
        Caused by: com.sun.enterprise.container.common.spi.util.InjectionException:
            Exception attempting to inject Remote ejb-ref name=mvs.test.TestController/testBean,
            Remote 3.x interface =mvs.test.TestBean,ejb-link=null,lookup=null,mappedName=,
            jndi-name=mvs.test.TestBean,refType=Session into class mvs.test.TestController
        Caused by: javax.naming.NamingException: Lookup failed for
            'java:comp/env/mvs.test.TestController/testBean' in SerialContext
            [Root exception is javax.naming.NamingException: Exception resolving Ejb for
            'Remote ejb-ref name=mvs.test.TestController/testBean,Remote 3.x
            interface =mvs.test.TestBean,ejb-link=null,lookup=null,mappedName=,
            jndi-name=mvs.test.TestBean,refType=Session'. Actual (possibly internal)
            Remote JNDI name used for lookup is 'mvs.test.TestBean#mvs.test.TestBean'
            [Root exception is javax.naming.NamingException: Lookup failed for
            'mvs.test.TestBean#mvs.test.TestBean' in SerialContext
            [Root exception is javax.naming.NameNotFoundException:
            mvs.test.TestBean#mvs.test.TestBean not found]]]

    So my questions are:

    - Is this the correct way of injecting a session bean into another bean (particularly a message-driven bean)?
    - Why is the naming lookup failing?

    Thanks for all your help! Cheers, Hank
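
    One detail that stands out: the @EJB field is typed as the bean class (TestBean), while the container binds the bean under its remote business interface. A hedged sketch of the usual shape of the injection point:

        // Inject by the business interface; the container resolves the
        // TestBean implementation behind TestBeanRemote.
        @EJB
        private TestBeanRemote testBean;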

  • Referring to a Tomcat JNDI datasource in persistence.xml

    - by binary_runner
    In server.xml I've defined a global resource (I'm using Tomcat 6):

        <GlobalNamingResources>
          <Resource name="jdbc/myds" auth="Container"
                    type="javax.sql.DataSource"
                    maxActive="10" maxIdle="3" maxWait="10000"
                    username="sa" password=""
                    driverClassName="org.h2.Driver"
                    url="jdbc:h2:~/.myds/data/db" />
        </GlobalNamingResources>

    I see in catalina.out that this is bound, so I suppose it's OK. In my web app I have the link to the datasource; I'm not sure it's right:

        <Context>
          <ResourceLink global='jdbc/myds' name='jdbc/myds' type="javax.sql.Datasource"/>
        </Context>

    And in the application there is persistence.xml:

        <persistence xmlns="http://java.sun.com/xml/ns/persistence"
                     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                     xsi:schemaLocation="http://java.sun.com/xml/ns/persistence
                         http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd"
                     version="2.0">
          <persistence-unit name="oam" transaction-type="RESOURCE_LOCAL">
            <provider>org.hibernate.ejb.HibernatePersistence</provider>
            <non-jta-data-source>jdbc/myds</non-jta-data-source>
            <!-- class definitions here, nothing else -->
            <properties>
              <property name="hibernate.dialect" value="org.hibernate.dialect.H2Dialect"/>
            </properties>
          </persistence-unit>
        </persistence>

    It should be OK, but most probably this or the ResourceLink definition is wrong, because I'm getting:

        javax.naming.NameNotFoundException: Name jdbc is not bound in this Context

    What's wrong, and why does this not work?
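
    Two things worth checking, both hedged guesses from the snippets above: the ResourceLink's type value (javax.sql.Datasource) is case-sensitive and would normally read javax.sql.DataSource, and inside a web app Tomcat exposes linked resources under the java:comp/env prefix, so the persistence unit usually needs the full path:

        <non-jta-data-source>java:comp/env/jdbc/myds</non-jta-data-source>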

  • How can I convert a projection that's not part of spatial_ref_sys?

    - by Summer
    Hi, I'm importing shapefiles into a Postgres+PostGIS database. Here's my usual procedure:

    * Find an srid in the spatial_ref_sys table where srtext appears to match the shapefile's .prj file
    * Upload the data into a new table using the shp2pgsql utility, specifying the srid using the -s flag
    * Add the new table to my main geometry table, and on the way convert to an srid of 4269 (the Census standard projection) using ST_Transform

    Unfortunately, the spatial_ref_sys table doesn't include Mississippi's standard state projection. The contents of the .prj file are as follows:

        PROJCS["mstm",
            GEOGCS["GCS_North_American_1983",
                DATUM["D_North_American_1983",
                    SPHEROID["GRS_1980", 6378137.0, 298.257222101]],
                PRIMEM["Greenwich", 0.0],
                UNIT["Degree", 0.0174532925199433]],
            PROJECTION["Transverse_Mercator"],
            PARAMETER["False_Easting", 500000.0],
            PARAMETER["False_Northing", 1300000.0],
            PARAMETER["Central_Meridian", -89.75],
            PARAMETER["Scale_Factor", 0.9998335],
            PARAMETER["Latitude_Of_Origin", 32.5],
            UNIT["Meter", 1.0]]

    I eventually found the ogr2ogr utility, and especially with the "peace and joy" promises, I decided to give it a try:

        ogr2ogr -update -f "PostgreSQL" PG:"Connection details" "File name.shp" -t_srs EPSG:4269 -nln Table_Name

    I am now getting the error "Terminating translation prematurely after failed translation of layer", which seems to indicate that ogr2ogr is not going to be the savior I imagined for getting arbitrary .prj files neatly into the 4269 projection. Any ideas about what to do?
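
    Since spatial_ref_sys is just a table, one hedged way forward is to register the custom projection under a private SRID and then use the normal shp2pgsql/ST_Transform workflow. The proj4 string below is a hand translation of the .prj parameters (Transverse Mercator on GRS80/NAD83) and should be double-checked before use:

        INSERT INTO spatial_ref_sys (srid, auth_name, auth_srid, proj4text, srtext)
        VALUES (
            990001,              -- any unused private SRID
            'custom', 990001,
            '+proj=tmerc +lat_0=32.5 +lon_0=-89.75 +k=0.9998335 +x_0=500000 +y_0=1300000 +datum=NAD83 +units=m +no_defs',
            'PROJCS["mstm", ...]'  -- paste the full PROJCS text from the .prj here
        );

    After that, shp2pgsql -s 990001 loads the data, and ST_Transform(geom, 4269) reprojects it as usual.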

  • GlassFish can't find persistence provider for EntityManager

    - by Xorty
    Hi, I am building a Spring MVC project (2.5). It runs on a GlassFish v3 server, and I am using Hibernate for ORM mapping from a Derby database. I am having trouble with deployment: GlassFish says "No Persistence provider for EntityManager named mvcspringPU".

    Here is how I create the EntityManagerFactory:

        emf = Persistence.createEntityManagerFactory("mvcspringPU");

    And here is my configuration file, persistence.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <persistence version="1.0" xmlns="http://java.sun.com/xml/ns/persistence"
                     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                     xsi:schemaLocation="http://java.sun.com/xml/ns/persistence
                         http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd">
          <persistence-unit name="mvcspringPU" transaction-type="JTA">
            <provider>org.hibernate.ejb.HibernatePersistence</provider>
            <jta-data-source>CarsDB</jta-data-source>
            <properties>
              <property name="hibernate.hbm2ddl.auto" value="update"/>
            </properties>
          </persistence-unit>
        </persistence>

    I am building with NetBeans 6.8, so things like build paths should be alright (generated by the IDE itself).
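
    One mismatch worth noting: Persistence.createEntityManagerFactory() bootstraps JPA outside the container, while the unit above declares JTA and a container-managed data source. A hedged sketch of a unit that matches the standalone bootstrap style; the Derby connection values are illustrative, and persistence.xml must sit in META-INF on the classpath:

        <persistence-unit name="mvcspringPU" transaction-type="RESOURCE_LOCAL">
          <provider>org.hibernate.ejb.HibernatePersistence</provider>
          <properties>
            <property name="hibernate.connection.driver_class" value="org.apache.derby.jdbc.ClientDriver"/>
            <property name="hibernate.connection.url" value="jdbc:derby://localhost:1527/carsdb"/>
            <property name="hibernate.connection.username" value="app"/>
            <property name="hibernate.connection.password" value="app"/>
            <property name="hibernate.hbm2ddl.auto" value="update"/>
          </properties>
        </persistence-unit>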

  • Where should 'CreateMap' statements go?

    - by jonathanconway
    I frequently use AutoMapper to map Model (Domain) objects to ViewModel objects, which are then consumed by my Views, in a Model/View/View-Model pattern. This involves many 'Mapper.CreateMap' statements, which all must be executed, but must only be executed once in the lifecycle of the application. Technically, then, I should keep them all in a static method somewhere, which gets called from my Application_Start() method (this is an ASP.NET MVC application). However, it seems wrong to group a lot of different mapping concerns together in one central location. Especially when mapping code gets complex and involves formatting and other logic. Is there a better way to organize the mapping code so that it's kept close to the ViewModel that it concerns? (I came up with one idea - having a 'CreateMappings' method on each ViewModel, and in the BaseViewModel, calling this method on instantiation. However, since the method should only be called once in the application lifecycle, it needs some additional logic to cache a list of ViewModel types for which the CreateMappings method has been called, and then only call it when necessary, for ViewModels that aren't in that list.)
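
    One commonly suggested arrangement, sketched here with hypothetical names: AutoMapper's Profile class groups related CreateMap calls so each ViewModel's mappings live next to the ViewModel itself, while Application_Start() only registers the profiles once:

        // EventMappingProfile.cs, kept alongside EventViewModel
        public class EventMappingProfile : Profile
        {
            protected override void Configure()
            {
                CreateMap<Event, EventViewModel>();
                // formatting and other mapping logic for this ViewModel goes here
            }
        }

        // Application_Start():
        // Mapper.Initialize(cfg => cfg.AddProfile<EventMappingProfile>());

    This gives the one-time registration a guaranteed single execution point, without the cached-type-list bookkeeping described above.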

  • Advice on optimizing speed for a Stored Procedure that uses Views

    - by Belliez
    Based on a previous question, and with a lot of help from Damir Sudarevic (thanks), I have the following SQL code, which works great but is very slow. Can anyone suggest how I can speed it up and optimise it? I am now using SQL Server Express 2008 (not 2005, as per my original question).

    What this code does is retrieve parameters and their associated values from several tables and rotate the result into a form that can be easily compared. It's great for one or two rows of data, but now I am testing with 100 rows, and running GetJobParameters takes over 7 minutes to complete. Any advice is gratefully accepted; thank you in advance.

        /***********************************************************************************************
        ** CREATE A VIEW (VIRTUAL TABLE) TO ALLOW EASIER RETRIEVAL OF PARAMETERS
        ************************************************************************************************/
        CREATE VIEW dbo.vParameters AS
        SELECT m.MachineID     AS [Machine ID]
              ,j.JobID         AS [Job ID]
              ,p.ParamID       AS [Param ID]
              ,t.ParamTypeID   AS [Param Type ID]
              ,m.Name          AS [Machine Name]
              ,j.Name          AS [Job Name]
              ,t.Name          AS [Param Type Name]
              ,t.JobDataType   AS [Job DataType]
              ,x.Value         AS [Measurement Value]
              ,x.Unit          AS [Unit]
              ,y.Value         AS [JobDataType]
        FROM dbo.Machines AS m
        JOIN dbo.JobFiles AS j ON j.MachineID = m.MachineID
        JOIN dbo.JobParams AS p ON p.JobFileID = j.JobID
        JOIN dbo.JobParamType AS t ON t.ParamTypeID = p.ParamTypeID
        LEFT JOIN dbo.JobMeasurement AS x ON x.ParamID = p.ParamID
        LEFT JOIN dbo.JobTrait AS y ON y.ParamID = p.ParamID
        GO

        -- Step 2
        CREATE VIEW dbo.vJobValues AS
        SELECT [Job Name]
              ,[Param Type Name]
              ,COALESCE(CAST([Measurement Value] AS varchar(50)), [JobDataType]) AS [Val]
        FROM dbo.vParameters
        GO

        /***********************************************************************************************
        ** GET JOB PARAMETERS FROM THE VIEW JUST CREATED
        ************************************************************************************************/
        CREATE PROCEDURE GetJobParameters AS

        -- Step 3
        DECLARE @Params TABLE (
             id int IDENTITY (1,1)
            ,ParamName varchar(50)
        );

        INSERT INTO @Params (ParamName)
        SELECT DISTINCT [Name] FROM dbo.JobParamType

        -- Step 4
        DECLARE @qw TABLE (
             id int IDENTITY (1,1)
            ,txt nchar(300)
        )

        INSERT INTO @qw (txt)
        SELECT 'SELECT'
        UNION
        SELECT '[Job Name]';

        INSERT INTO @qw (txt)
        SELECT ',MAX(CASE [Param Type Name] WHEN ''' + ParamName
               + ''' THEN Val ELSE NULL END) AS [' + ParamName + ']'
        FROM @Params
        ORDER BY id;

        INSERT INTO @qw (txt)
        SELECT 'FROM dbo.vJobValues'
        UNION
        SELECT 'GROUP BY [Job Name]'
        UNION
        SELECT 'ORDER BY [Job Name]';

        -- Step 5
        --SELECT txt FROM @qw
        DECLARE @sql_output VARCHAR (MAX)
        SET @sql_output = ''                 -- NULL + '' = NULL, so we need to have a seed
        SELECT @sql_output =                 -- string to avoid losing the first line.
            COALESCE (@sql_output + txt + char (10), '')
        FROM @qw

        EXEC (@sql_output)
        GO
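
    Without the actual query plan this is only a guess, but the generated pivot repeatedly scans the joined views, so covering indexes on the join and filter columns are the usual first step (index names and column choices below are assumptions based on the view definitions):

        CREATE INDEX IX_JobParams_JobFileID
            ON dbo.JobParams (JobFileID, ParamTypeID);

        CREATE INDEX IX_JobMeasurement_ParamID
            ON dbo.JobMeasurement (ParamID) INCLUDE (Value, Unit);

        CREATE INDEX IX_JobTrait_ParamID
            ON dbo.JobTrait (ParamID) INCLUDE (Value);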

  • VB.Net Custom Object Master-Detail Data Binding

    - by clawson
    Since beginning to use VB.Net some years ago, I have slowly become familiar with the data-binding features of .Net. However, I often find myself bewildered by its behavior, and instead of discovering the correct way it should work, I find some dirty workaround to suit my needs and continue on. Needless to say, my problems continue to arise.

    I am using custom objects as the data sources for my controls, and often for entire forms. I find it frustrating to separate business logic and the graphical interface (that could be a new question entirely). So for a lot of objects I generate a form which has the BindingSource for the object. When I create each form using the New constructor, I explicitly pass it the object to which it should be bound, and then set this passed object as the DataSource of the BindingSource. (That's a mouthful!)

    Now, the master object (say, bound to each form) often contains a list of objects which I'd like to have displayed in a DataGridView. I sometimes create and modify these child objects in their own form (again creating a databinding the same way as the master form), but when I add them to the list in the master object, the DataGridView won't update with the new items.

    So my question really has a few layers:

    1. How can I easily/efficiently/correctly update this DataGridView with the list of detail objects when I add them to the list of the master object?
    2. Is this approach to data binding good/viable?
    3. What's the best way to separate business logic from the graphical interface?

    Thanks for the help!
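
    On the first layer, a hedged sketch of the usual suspect: List(Of T) raises no change notifications, so a grid bound to it never hears about added items, whereas BindingList(Of T) raises ListChanged and the DataGridView refreshes itself. DetailObject and detailGrid are hypothetical names:

        Imports System.ComponentModel

        Public Class MasterObject
            Private _details As New BindingList(Of DetailObject)

            ' Expose the BindingList so bound grids receive ListChanged events.
            Public ReadOnly Property Details() As BindingList(Of DetailObject)
                Get
                    Return _details
                End Get
            End Property
        End Class

        ' Wiring:
        '   detailGrid.DataSource = master.Details
        '   master.Details.Add(newDetail)   ' the grid now shows the new row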

  • What are the best readings to start using WPF instead of WinForms?

    - by Ivan
    Keeping in mind what CannibalSmith once said - "All the answers are saying 'WPF is different'. That's a huge understatement. You not only have to learn lots of new stuff - you must forget everything you've learned from Forms. It's a completely new way of doing UI." - and having many years of experience with visual Windows desktop application development (VB6, Borland C++ Builder VCL, WinForms), which is hard to forget, how do I quickly move to developing well-formed WPF applications with Visual Studio?

    I don't need boozy-woozy graphics to give my app the look and feel of a Hollywood blockbuster or million-dollar pyjamas. I always loved the tidiness of standard Windows common controls and UI design guidelines, and I enjoyed them even more under Vista Glass Aero Graphite sauce. I am perfectly satisfied with WinForms, but I want my applications to be built from the most efficient and up-to-date standard technologies, and architected according to the most efficient and flexible patterns of today and tomorrow: leveraging interface-based integration and functionality reuse, and taking advantage of modern hardware and APIs to maximize performance, usability, reliability, maintainability, extensibility, and so on.

    I very much like the idea of separating view, logic and data: letting a view take full advantage of the platform (whether it runs as a web browser applet on a thin client or as a desktop application on a PC with the latest GPU); letting logic be reused, parallelized and seamlessly evolved; and storing data in a well-structured format in the right place.

    But... while moving from VB6 to Borland C++ Builder was very easy (no books or tutorials needed to just turn it on and start working, assuming I already knew C++), and moving from BCB to WinForms was just as seamless, it is not at all obvious to me how to do the same with WPF. So how do I best convert myself from a WinForms developer into a right-way-thinking-and-doing WPF developer?

  • ocunit testing on iPhone

    - by Magnus Poromaa
    Hi, I am trying to get OCUnit working in my project from Xcode. Since I also need to debug in the unit tests, I am using a script that automates the setup (see below). I just include it in the project under Resources and change the name to the .ocunit file I want it to run. The problem I get is that it can't find the bundle file and therefore exits with an error.

    Can anyone who has a clue about Xcode and Objective-C take a look at it and tell me what is wrong? Also, how am I supposed to produce the .ocunit file that I need to run? By setting up a new unit test target for the iPhone and adding tests to it, or...? Hope someone has a clue, since I just started iPhone development and need to get it up and running quickly.

    The AppleScript:

        -- The only customized value we need is the name of the test bundle
        tell me to activate
        tell application "Xcode"
            activate
            set thisProject to project of active project document
            tell thisProject
                set testBundleName to name of active target
                set unitTestExecutable to make new executable at end of executables
                set name of unitTestExecutable to testBundleName
                set path of unitTestExecutable to "/Applications/TextEdit.app"
                tell unitTestExecutable
                    -- Add a "-SenTest All" argument
                    make new launch argument with properties {active:true, name:"-SenTest All"}
                    -- Add the magic
                    set injectValue to "$(BUILT_PRODUCTS_DIR)/" & testBundleName & ".octest"
                    make new environment variable with properties {active:true, name:"XCInjectBundle", value:injectValue}
                    make new environment variable with properties {active:true, name:"XCInjectBundleInto", value:"/Applications/TextEdit.app/Contents/MacOS/TextEdit"}
                    make new environment variable with properties {active:true, name:"DYLD_INSERT_LIBRARIES", value:"$(DEVELOPER_LIBRARY_DIR)/PrivateFrameworks/DevToolsBundleInjection.framework/DevToolsBundleInjection"}
                    make new environment variable with properties {active:true, name:"DYLD_FALLBACK_FRAMEWORK_PATH", value:"$(DEVELOPER_LIBRARY_DIR)/Frameworks"}
                end tell
            end tell
        end tell

    Cheers, Magnus

  • Custom model validation of dependent properties using Data Annotations

    - by Darin Dimitrov
    Until now I've used the excellent FluentValidation library to validate my model classes. In web applications I use it in conjunction with the jquery.validate plugin to perform client-side validation as well. One drawback is that much of the validation logic is repeated on the client side and is no longer centralized in a single place. For this reason I'm looking for an alternative.

    There are many examples out there showing the usage of data annotations to perform model validation, and it looks very promising. One thing I couldn't find out is how to validate a property that depends on another property's value. Let's take for example the following model:

        public class Event
        {
            [Required]
            public DateTime? StartDate { get; set; }

            [Required]
            public DateTime? EndDate { get; set; }
        }

    I would like to ensure that EndDate is greater than StartDate. I could write a custom validation attribute extending ValidationAttribute in order to perform custom validation logic. Unfortunately I couldn't find a way to obtain the model instance:

        public class CustomValidationAttribute : ValidationAttribute
        {
            public override bool IsValid(object value)
            {
                // value represents the property value on which this attribute is applied,
                // but how to obtain the object instance to which this property belongs?
                return true;
            }
        }

    I found that the CustomValidationAttribute seems to do the job because it has a ValidationContext property that contains the object instance being validated. Unfortunately this attribute was only added in .NET 4.0. So my question is: can I achieve the same functionality in .NET 3.5 SP1?

    UPDATE: It seems that FluentValidation already supports client-side validation and metadata in ASP.NET MVC 2. Still, it would be good to know whether data annotations can be used to validate dependent properties.
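
    For the .NET 3.5 case, one hedged sketch: ValidationAttribute can be applied at the class level, in which case IsValid receives the whole model instance and can compare two properties. The attribute name and message below are illustrative:

        [AttributeUsage(AttributeTargets.Class)]
        public class DateRangeAttribute : ValidationAttribute
        {
            public override bool IsValid(object value)
            {
                var e = value as Event;
                if (e == null || !e.StartDate.HasValue || !e.EndDate.HasValue)
                    return true; // leave missing values to [Required]
                return e.EndDate > e.StartDate;
            }
        }

        // usage:
        // [DateRange(ErrorMessage = "End date must be after start date")]
        // public class Event { ... }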

  • Understanding how rpmbuild works

    - by ereOn
    Hi, for my work I have to create documentation on "How to create an RPM package on Red Hat 5". I'm used to Debian and its derivatives (Ubuntu, and so on), and thus to Debian packages (aka .deb files). It seems that the RPM logic is quite different from what I already know, and I am having some issues understanding the "RPM logic".

    From what I read, it seems that one needs to be root to create an RPM package. While I understand why root could be required to install a package, I still don't understand why elevated privileges should be needed just to create one. If I try to create an RPM package as a user, changing the buildroot, it fails on the %install step because I don't have permission to write files into /usr/bin. Fair enough, but... why does it want to copy my files into /usr/bin at this step?! I just want to create the package, not install it!

    I'm sure I'm missing something here. Is there anyone who could give me at least a basic understanding of how rpmbuild works, and why? Thank you very much!
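
    For reference, a sketch of the usual non-root setup on a Red Hat style system (paths assumed): the build tree moves under the user's home via ~/.rpmmacros, and %install then copies files into a scratch BuildRoot instead of the live /usr:

        # one-time setup of a per-user build tree
        mkdir -p ~/rpmbuild/{BUILD,RPMS,SOURCES,SPECS,SRPMS}
        echo '%_topdir %(echo $HOME)/rpmbuild' > ~/.rpmmacros

        # the spec should also declare a scratch buildroot, e.g.
        #   BuildRoot: %{_tmppath}/%{name}-%{version}-root
        rpmbuild -ba ~/rpmbuild/SPECS/mypackage.spec

    With that in place, %install populates the BuildRoot staging directory and packaging never needs to touch system paths, so root is only required to install the finished RPM.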

  • nHibernate Domain Model and Mapping Files in Separate Projects

    - by Blake Blackwell
    Is there a way to separate the domain objects and mapping files into two separate projects? I would like to create one project called MyCompany.MyProduct.Core that contains my domain model, and another called MyCompany.MyProduct.Data.Oracle that contains my Oracle data mappings. However, when I try to unit test this, I get the following error message: "Named query 'GetClients' not found."

    Here is my mapping file:

        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"
                           assembly="MyCompany.MyProduct.Core"
                           namespace="MyCompany.MyProduct.Core">
          <class name="MyCompany.MyProduct.Core.Client" table="MY_CLIENT" lazy="false">
            <id name="ClientId" column="ClientId"></id>
            <property name="ClientName" column="ClientName" />
            <loader query-ref="GetClients"/>
          </class>
          <sql-query name="GetClients" callable="true">
            <return class="Client" />
            call procedure MyPackage.GetClients(:int_SummitGroupId)
          </sql-query>
        </hibernate-mapping>

    Here is my unit test:

        try
        {
            var cfg = new Configuration();
            cfg.Configure();
            cfg.AddAssembly(typeof(Client).Assembly);

            ISessionFactory sessionFactory = cfg.BuildSessionFactory();
            IStatelessSession session = sessionFactory.OpenStatelessSession();

            IQuery query = session.GetNamedQuery("GetClients");
            query.SetParameter("int_SummitGroupId", 3173);
            IList<Client> clients = query.List<Client>();

            Assert.AreNotEqual(0, clients.Count);
        }
        catch (Exception ex)
        {
            throw ex;
        }

    I think I may be referencing the wrong assembly, because if I put the domain model object in the MyCompany.MyProduct.Data.Oracle project, it works. Only when I separate them into two projects do I run into this problem.
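
    A hedged guess based on the snippet: cfg.AddAssembly(typeof(Client).Assembly) registers the Core assembly, but if the .hbm.xml files are embedded in the Data.Oracle project, NHibernate never sees the named query. A sketch of the adjusted registration:

        // Point AddAssembly at the assembly that embeds the .hbm.xml resources
        // (each mapping file needs Build Action = Embedded Resource).
        cfg.AddAssembly("MyCompany.MyProduct.Data.Oracle");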

  • Trying to use authlogic-connect as a plugin in place of a gem - Server doesn't start

    - by Arkid
    I am trying to use authlogic-connect as a plugin in Rails 3 in place of a gem. I have made an entry in the Gemfile:

        gem "authlogic-connect", :require => "authlogic-connect", :path => "localgems"

    Now when I run bundle install, it runs fine. When I try to start the server, I get the error:

        Could not find gem 'authlogic-connect (>= 0, runtime)' in source at localgems.
        Source does not contain any versions of 'authlogic-connect (>= 0, runtime)'
        Try running `bundle install`.

    I have placed the unzipped gem, renamed to authlogic-connect, in the localgems folder. What is the problem? Here is what I get on using rails plugin install:

        arkidmitra$ rails plugin install git://github.com/viatropos/authlogic-connect.git
        Usage:
          rails new APP_PATH [options]

        Options:
              [--skip-gemfile]        # Don't create a Gemfile
          -d, [--database=DATABASE]   # Preconfigure for selected database (options: mysql/oracle/postgresql/sqlite3/frontbase/ibm_db)
                                      # Default: sqlite3
          -O, [--skip-active-record]  # Skip Active Record files
              [--dev]                 # Setup the application with Gemfile pointing to your Rails checkout
          -J, [--skip-prototype]      # Skip Prototype files
          -T, [--skip-test-unit]      # Skip Test::Unit files
          -G, [--skip-git]            # Skip Git ignores and keeps
          -r, [--ruby=PATH]           # Path to the Ruby binary of your choice
                                      # Default: /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby
          -m, [--template=TEMPLATE]   # Path to an application template (can be a filesystem path or URL)
          -b, [--builder=BUILDER]     # Path to an application builder (can be a filesystem path or URL)
              [--edge]                # Setup the application with Gemfile pointing to Rails repository

        Runtime options:
          -q, [--quiet]    # Supress status output
          -s, [--skip]     # Skip files that already exist
          -f, [--force]    # Overwrite files that already exist
          -p, [--pretend]  # Run but do not make any changes

        Rails options:
          -h, [--help]     # Show this help message and quit
          -v, [--version]  # Show Rails version number and quit

        Description:
            The 'rails new' command creates a new Rails application with a default
            directory structure and configuration at the path you specify.

        Example:
            rails new ~/Code/Ruby/weblog

            This generates a skeletal Rails installation in ~/Code/Ruby/weblog.
            See the README in the newly created application to get going.
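
    A hedged sketch of one likely cause: bundler's :path source expects to find the gem's own .gemspec in the directory it is pointed at, so the path usually has to name the gem's folder rather than its parent:

        # Gemfile
        gem "authlogic-connect", :path => "localgems/authlogic-connect"

    (The rails plugin install output above is a separate symptom: the rails executable did not recognize the command in this context, for instance when run outside the application directory, and fell back to printing the rails new help.)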

  • P6 Architecture - Register renaming aside, do the limited user registers result in more ops spent

    - by mrjoltcola
    I'm studying JIT design with regard to dynamic language VM implementation. I haven't done much assembly since the 8086/8088 days, just a little here or there, so be nice if I'm out of sorts.

    As I understand it, the x86 (IA-32) architecture still has the same basic limited register set today that it always did, but the internal register count has grown tremendously. These internal registers are not generally available, and are used with register renaming to achieve parallel pipelining of code that otherwise could not be parallelized. I understand this optimization pretty well, but my feeling is that while these optimizations help overall throughput and parallel algorithms, the limited architectural register set we are still stuck with results in more register-spilling overhead, such that if x86 had double or quadruple the registers available to us, there might be significantly fewer push/pop opcodes in a typical instruction stream. Or are there other processor optimizations that also optimize this away that I am unaware of?

    Basically, if I have a unit of code that has 4 registers to work with for integer work, but my unit has a dozen variables, I've got potentially a push/pop for every 2 or so instructions. Any references to studies, or better yet, personal experiences?
