Search Results

Search found 14402 results on 577 pages for 'interface builder'.

  • How to specify a parameter as part of every web service call?

    - by LES2
    Currently, each web service for our application has a user parameter that is added for every method. For example: @WebService public interface FooWebService { @WebMethod public Foo getFoo(@WebParam(name="alwaysHere",header=true,partName="alwaysHere") String user, @WebParam(name="fooId") Long fooId); @WebMethod public Result deletetFoo(@WebParam(name="alwaysHere",header=true,partName="alwaysHere") String user, @WebParam(name="fooId") Long fooId); // ... } There could be twenty methods in a service, each with the first parameter as user. And there could be twenty web services. We don't actually use the 'user' argument in the implementations - in fact, I don't know why it's there - but I wasn't involved in the design, and the person that put it there had a reason (I hope). Anyway, I'm trying to straighten out this Big Ball of Mud. I have already come a long way by wrapping the web services by a Spring proxy, which allows me to do some before-and-after processing in an interceptor (before there were at least 20 lines of copy-pasted boiler plate per method). I'm wondering if there's some kind of "message header" I can apply to the method or package and that can be accessed by some type of handler or something outside of each web service method. Thanks in advance for the advice, LES
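
    In case it helps to picture the "handler" route the question is asking about, here is a minimal, hedged sketch assuming plain JAX-WS: the handler class name, the header namespace URI and the "calling.user" context key are invented for illustration, and the handler would still need to be registered (via @HandlerChain or the existing Spring proxy).

    ```java
    // Hedged sketch only: reads the "alwaysHere" user header once for every inbound call,
    // so the individual @WebMethods no longer need the extra parameter.
    import java.util.Collections;
    import java.util.Iterator;
    import java.util.Set;
    import javax.xml.namespace.QName;
    import javax.xml.soap.SOAPElement;
    import javax.xml.soap.SOAPException;
    import javax.xml.soap.SOAPHeader;
    import javax.xml.ws.handler.MessageContext;
    import javax.xml.ws.handler.soap.SOAPHandler;
    import javax.xml.ws.handler.soap.SOAPMessageContext;

    public class UserHeaderHandler implements SOAPHandler<SOAPMessageContext> {

        private static final QName USER_HEADER = new QName("http://example.com/foo", "alwaysHere");

        public boolean handleMessage(SOAPMessageContext ctx) {
            boolean outbound = (Boolean) ctx.get(MessageContext.MESSAGE_OUTBOUND_PROPERTY);
            if (!outbound) {
                try {
                    SOAPHeader header = ctx.getMessage().getSOAPPart().getEnvelope().getHeader();
                    if (header != null) {
                        Iterator<?> it = header.getChildElements(USER_HEADER);
                        if (it.hasNext()) {
                            String user = ((SOAPElement) it.next()).getValue();
                            // make the value available to the endpoint or an interceptor without a parameter
                            ctx.put("calling.user", user);
                            ctx.setScope("calling.user", MessageContext.Scope.APPLICATION);
                        }
                    }
                } catch (SOAPException e) {
                    throw new RuntimeException(e);
                }
            }
            return true;
        }

        public boolean handleFault(SOAPMessageContext ctx) { return true; }

        public void close(MessageContext ctx) { }

        public Set<QName> getHeaders() { return Collections.singleton(USER_HEADER); }
    }
    ```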

    Read the article

  • How would you code a washing machine?

    - by Dan
    Imagine I have a class that represents a simple washing machine. It can perform the following operations in the following order: turn on - wash - centrifuge - turn off. I see two basic alternatives: A) I can have a class WashingMachine with methods turnOn(), wash(int minutes), centrifuge(int revs), turnOff(). The problem with this is that the interface says nothing about the correct order of operations. I can at best throw InvalidOperationException if the client tries to centrifuge before the machine was turned on. B) I can let the class itself take care of correct transitions and have a single method nextOperation(). The problem with this, on the other hand, is that the semantics are poor. The client will not know what will happen when he calls nextOperation(). Imagine you implement the centrifuge button’s click event so it calls nextOperation(). The user presses the centrifuge button after the machine was turned on and, oops, the machine starts to wash. I will probably need a few properties on my class to parameterize operations, or maybe a separate Program class with washLength and centrifugeRevs fields, but that is not really the problem. Which alternative is better? Or are there other, better alternatives that I have missed?
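
    For what it is worth, a hedged sketch of a third option the question brushes against: encode the legal order in the return types, so an out-of-order call does not even compile. All class and method names below are illustrative.

    ```java
    // Sketch: each step returns a small type that exposes only the next legal operation,
    // so turnOn().wash(..).centrifuge(..).turnOff() is the only order the compiler accepts.
    public class WashingMachine {

        public static OnMachine turnOn() {
            return new OnMachine();
        }

        public static class OnMachine {
            public WashedMachine wash(int minutes) {
                // ... run the wash cycle for the given minutes ...
                return new WashedMachine();
            }
        }

        public static class WashedMachine {
            public SpunMachine centrifuge(int revs) {
                // ... spin at the given revolutions ...
                return new SpunMachine();
            }
        }

        public static class SpunMachine {
            public void turnOff() {
                // ... power down ...
            }
        }
    }
    ```

    Usage then reads WashingMachine.turnOn().wash(30).centrifuge(1200).turnOff(); the trade-off is a few more types in exchange for the order being visible in the interface itself.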

    Read the article

  • Using nohup mysqldump from php script is inserting a '!' and breaking to a new line.

    - by Aglystas
    I'm trying to run a mysqldump from php using the nohup command to prevent the script from hanging. Here's the command (The database is mc6_erik_test, everything else is just a table list until you get to the end) exec("mysqldump -u root -pPassword -h vfmy1-dev.mountainmedia.com mc6_erik_test access_log admin affiliate affiliate_2_product authorized_ip category category_2_product claim_code claim_code_log country_exclude customer customer_2_subscription customer_account_log customer_address customer_bill customer_discount customer_ip customer_key email_bulk_log email_draft email_queue email_queue_log email_template endicia_log gift_wrap image_bulk_upload log mailing_list manufacturer merchant merchant_checkout merchant_ip merchant_ship merchant_ship_conf new_account_temp order_dest order_item order_item_2_dest order_item_2_package order_item_log order_item_registrant order_note order_package order_package_label orders package package_2_product pref product product_2_supplier product_also product_event_date product_image product_option product_related product_review product_review_helpful product_ship_disable report search_log subscription supplier temp_product transaction_account transactions wish_list wish_list_fill wish_list_item --opt --where='merchant_id=\'6\'' /tmp/sync_db_card_20100519105358.sql"); As you can see it's really long, because I have to specifically include only the tables I want to dump. The command works great from the command line, however when I run it through a web script towards the end the following is being used as the command... supplier temp_product transaction_account transactio! ns wish_list wish_list_fill wish_list_item --opt --where='merchant_id="6"' > /tmp/sync_db_card_20100519105358.sql So the table 'transactions' is being split by an exclamation point and newline. The rest of the command is exactly the same. And if I run this through the php-cli interface it doesn't happen only when I try running it via the webserver using nohup. I'm wondering if there is some inherit string length to using the exec command within a php script, or really if anyone has any general idea what is going on here.

    Read the article

  • nhibernate fluent repository pattern insert problem

    - by voam
    I am trying to use Fluent NHibernate and the repository pattern. I would like my business layer to not be knowledgeable of the data persistence layer. Ideally I would pass in an initialized domain object to the insert method of the repository and all would be well. Where I run into problems is if the object being passed in has a child object. For example, say I want to insert a new order for a customer, and the customer is a property of the order object. I would like to do something like this: Customer c = new Customer; c.CustomerId = 1; Order o = new Order; o.Customer = c; repository.InsertOrder(o); The problem is that using NHibernate the CustomerId field is only privately settable, so I cannot set it directly like this. So what I have ended up doing is have my repository expose a method Order InsertOrder(int customerId) where all the foreign keys get passed in as parameters. Somehow this just doesn't seem right. The other approach was to use the NHibernate session variable to load a customer object in my business model and then have the order passed in to the repository, but this defeats my persistence ignorance ideal. Should I throw this persistence ignorance out the window or am I missing something here? Thanks
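
    A hedged sketch of the second approach without losing persistence ignorance in the caller: the repository resolves the customer reference itself from the id. It is written against Java Hibernate, whose session.load() is the direct counterpart of NHibernate's session.Load<T>(id) - both hand back a lazy proxy built from just the identifier, without issuing a query - and Order/Customer here are bare placeholders rather than the poster's real model.

    ```java
    // Sketch: the repository turns a plain id into a reference; the caller never touches the session.
    import org.hibernate.Session;

    public class OrderRepository {

        public static class Customer { /* mapped elsewhere; placeholder */ }

        public static class Order {
            private Customer customer;
            public void setCustomer(Customer customer) { this.customer = customer; }
        }

        private final Session session;

        public OrderRepository(Session session) {
            this.session = session;
        }

        public void insertOrder(Order order, int customerId) {
            // No SELECT happens here: load() returns a proxy carrying only the identifier,
            // which is all the INSERT of the order needs for its foreign key.
            Customer reference = (Customer) session.load(Customer.class, customerId);
            order.setCustomer(reference);
            session.save(order);
        }
    }
    ```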

    Read the article

  • How to create a DataAccessLayer?

    - by NIGHIL DAS
    hi, i am creating a database applicatin in .Net. I am using a DataAccessLayer for communicating .net objects with database but i am not sure that this class is correct or not Can anyone cross check it and rectify any mistakes namespace IDataaccess { #region Collection Class public class SPParamCollection : List<SPParams> { } public class SPParamReturnCollection : List<SPParams> { } #endregion #region struct public struct SPParams { public string Name { get; set; } public object Value { get; set; } public ParameterDirection ParamDirection { get; set; } public SqlDbType Type { get; set; } public int Size { get; set; } public string TypeName { get; set; } // public string datatype; } #endregion /// <summary> /// Interface DataAccess Layer implimentation New version /// </summary> public interface IDataAccess { DataTable getDataUsingSP(string spName); DataTable getDataUsingSP(string spName, SPParamCollection spParamCollection); DataSet getDataSetUsingSP(string spName); DataSet getDataSetUsingSP(string spName, SPParamCollection spParamCollection); SqlDataReader getDataReaderUsingSP(string spName); SqlDataReader getDataReaderUsingSP(string spName, SPParamCollection spParamCollection); int executeSP(string spName); int executeSP(string spName, SPParamCollection spParamCollection, bool addExtraParmas); int executeSP(string spName, SPParamCollection spParamCollection); DataTable getDataUsingSqlQuery(string strSqlQuery); int executeSqlQuery(string strSqlQuery); SPParamReturnCollection executeSPReturnParam(string spName, SPParamReturnCollection spParamReturnCollection); SPParamReturnCollection executeSPReturnParam(string spName, SPParamCollection spParamCollection, SPParamReturnCollection spParamReturnCollection); SPParamReturnCollection executeSPReturnParam(string spName, SPParamCollection spParamCollection, SPParamReturnCollection spParamReturnCollection, bool addExtraParmas); int executeSPReturnParam(string spName, SPParamCollection spParamCollection, ref SPParamReturnCollection spParamReturnCollection); object getScalarUsingSP(string spName); object getScalarUsingSP(string spName, SPParamCollection spParamCollection); } } using IDataaccess; namespace Dataaccess { /// <summary> /// Class DataAccess Layer implimentation New version /// </summary> public class DataAccess : IDataaccess.IDataAccess { #region Public variables static string Strcon; DataSet dts = new DataSet(); public DataAccess() { Strcon = sReadConnectionString(); } private string sReadConnectionString() { try { //dts.ReadXml("C:\\cnn.config"); //Strcon = dts.Tables[0].Rows[0][0].ToString(); //System.Configuration.Configuration config = ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None); //Strcon = config.ConnectionStrings.ConnectionStrings["connectionString"].ConnectionString; // Add an Application Setting. 
//Strcon = "Data Source=192.168.50.103;Initial Catalog=erpDB;User ID=ipixerp1;Password=NogoXVc3"; Strcon = System.Configuration.ConfigurationManager.AppSettings["connection"]; //Strcon = System.Configuration.ConfigurationSettings.AppSettings[0].ToString(); } catch (Exception) { } return Strcon; } public SqlConnection connection; public SqlCommand cmd; public SqlDataAdapter adpt; public DataTable dt; public int intresult; public SqlDataReader sqdr; #endregion #region Public Methods public DataTable getDataUsingSP(string spName) { return getDataUsingSP(spName, null); } public DataTable getDataUsingSP(string spName, SPParamCollection spParamCollection) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); } cmd.CommandType = CommandType.StoredProcedure; cmd.CommandTimeout = 60; adpt = new SqlDataAdapter(cmd); dt = new DataTable(); adpt.Fill(dt); return (dt); } } } finally { connection.Close(); } } public DataSet getDataSetUsingSP(string spName) { return getDataSetUsingSP(spName, null); } public DataSet getDataSetUsingSP(string spName, SPParamCollection spParamCollection) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); } cmd.CommandType = CommandType.StoredProcedure; cmd.CommandTimeout = 60; adpt = new SqlDataAdapter(cmd); DataSet ds = new DataSet(); adpt.Fill(ds); return ds; } } } finally { connection.Close(); } } public SqlDataReader getDataReaderUsingSP(string spName) { return getDataReaderUsingSP(spName, null); } public SqlDataReader getDataReaderUsingSP(string spName, SPParamCollection spParamCollection) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); } cmd.CommandType = CommandType.StoredProcedure; cmd.CommandTimeout = 60; sqdr = cmd.ExecuteReader(); return (sqdr); } } } finally { connection.Close(); } } public int executeSP(string spName) { return executeSP(spName, null); } public int executeSP(string spName, SPParamCollection spParamCollection, bool addExtraParmas) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { SqlParameter par = new SqlParameter(spParamCollection[count].Name, spParamCollection[count].Value); if (addExtraParmas) { par.TypeName = spParamCollection[count].TypeName; par.SqlDbType = spParamCollection[count].Type; } cmd.Parameters.Add(par); } cmd.CommandType = CommandType.StoredProcedure; cmd.CommandTimeout = 60; return (cmd.ExecuteNonQuery()); } } } finally { connection.Close(); } } public int 
executeSP(string spName, SPParamCollection spParamCollection) { return executeSP(spName, spParamCollection, false); } public DataTable getDataUsingSqlQuery(string strSqlQuery) { try { using (connection = new SqlConnection(Strcon)) connection.Open(); { using (cmd = new SqlCommand(strSqlQuery, connection)) { cmd.CommandType = CommandType.Text; cmd.CommandTimeout = 60; adpt = new SqlDataAdapter(cmd); dt = new DataTable(); adpt.Fill(dt); return (dt); } } } finally { connection.Close(); } } public int executeSqlQuery(string strSqlQuery) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(strSqlQuery, connection)) { cmd.CommandType = CommandType.Text; cmd.CommandTimeout = 60; intresult = cmd.ExecuteNonQuery(); return (intresult); } } } finally { connection.Close(); } } public SPParamReturnCollection executeSPReturnParam(string spName, SPParamReturnCollection spParamReturnCollection) { return executeSPReturnParam(spName, null, spParamReturnCollection); } public int executeSPReturnParam() { return 0; } public int executeSPReturnParam(string spName, SPParamCollection spParamCollection, ref SPParamReturnCollection spParamReturnCollection) { try { SPParamReturnCollection spParamReturned = new SPParamReturnCollection(); using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); } cmd.CommandType = CommandType.StoredProcedure; foreach (SPParams paramReturn in spParamReturnCollection) { SqlParameter _parmReturn = new SqlParameter(paramReturn.Name, paramReturn.Size); _parmReturn.Direction = paramReturn.ParamDirection; if (paramReturn.Size > 0) _parmReturn.Size = paramReturn.Size; else _parmReturn.Size = 32; _parmReturn.SqlDbType = paramReturn.Type; cmd.Parameters.Add(_parmReturn); } cmd.CommandTimeout = 60; intresult = cmd.ExecuteNonQuery(); connection.Close(); //for (int i = 0; i < spParamReturnCollection.Count; i++) //{ // spParamReturned.Add(new SPParams // { // Name = spParamReturnCollection[i].Name, // Value = cmd.Parameters[spParamReturnCollection[i].Name].Value // }); //} } } return intresult; } finally { connection.Close(); } } public SPParamReturnCollection executeSPReturnParam(string spName, SPParamCollection spParamCollection, SPParamReturnCollection spParamReturnCollection) { return executeSPReturnParam(spName, spParamCollection, spParamReturnCollection, false); } public SPParamReturnCollection executeSPReturnParam(string spName, SPParamCollection spParamCollection, SPParamReturnCollection spParamReturnCollection, bool addExtraParmas) { try { SPParamReturnCollection spParamReturned = new SPParamReturnCollection(); using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { //cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); SqlParameter par = new SqlParameter(spParamCollection[count].Name, spParamCollection[count].Value); if (addExtraParmas) { par.TypeName = spParamCollection[count].TypeName; par.SqlDbType = spParamCollection[count].Type; } cmd.Parameters.Add(par); } cmd.CommandType = 
CommandType.StoredProcedure; foreach (SPParams paramReturn in spParamReturnCollection) { SqlParameter _parmReturn = new SqlParameter(paramReturn.Name, paramReturn.Value); _parmReturn.Direction = paramReturn.ParamDirection; if (paramReturn.Size > 0) _parmReturn.Size = paramReturn.Size; else _parmReturn.Size = 32; _parmReturn.SqlDbType = paramReturn.Type; cmd.Parameters.Add(_parmReturn); } cmd.CommandTimeout = 60; cmd.ExecuteNonQuery(); connection.Close(); for (int i = 0; i < spParamReturnCollection.Count; i++) { spParamReturned.Add(new SPParams { Name = spParamReturnCollection[i].Name, Value = cmd.Parameters[spParamReturnCollection[i].Name].Value }); } } } return spParamReturned; } catch (Exception ex) { return null; } finally { connection.Close(); } } public object getScalarUsingSP(string spName) { return getScalarUsingSP(spName, null); } public object getScalarUsingSP(string spName, SPParamCollection spParamCollection) { try { using (connection = new SqlConnection(Strcon)) { connection.Open(); using (cmd = new SqlCommand(spName, connection)) { int count, param = 0; if (spParamCollection == null) { param = -1; } else { param = spParamCollection.Count; } for (count = 0; count < param; count++) { cmd.Parameters.AddWithValue(spParamCollection[count].Name, spParamCollection[count].Value); cmd.CommandTimeout = 60; } cmd.CommandType = CommandType.StoredProcedure; return cmd.ExecuteScalar(); } } } finally { connection.Close(); cmd.Dispose(); } } #endregion } }

    Read the article

  • Cannot find Driver when using generic database bundle

    - by Marc
    I have a project that is built up from several OSGi bundles. One of them is a generic Database bundle that defines a DataSource that can be used throughout the project. The Spring bean definition of this service is: <osgi:service interface="javax.sql.DataSource"> <bean class="org.postgresql.ds.PGPoolingDataSource"> <property name="databaseName" value="xxx" /> <property name="serverName" value="xxx" /> <property name="user" value="xxx" /> <property name="password" value="xxx" /> </bean> </osgi:service> Now, when using this DataSource in a different bundle, we get an error: No suitable driver found for jdbc:postgresql://localhost/xxx I have tried the following to add the org.postgresql.Driver to the DriverManager: Instantiated an empty bean for that Driver in the Spring context, like this: <bean class="org.postgresql.Driver" /> Instantiated the Driver statically in one of the classes, like this: Class.forName("org.postgresql.Driver"); Added a file META-INF\services\java.sql.Driver with the content org.postgresql.Driver None of these solutions seems to help.

    Read the article

  • Which software for intranet CMS - Django or Joomla?

    - by zalun
    In my company we are thinking of moving from a wiki-style intranet to a more bespoke CMS solution. The natural choice would be Joomla, but we have a specific architecture. There are a few hundred people who will use the system. The system should be self-explanatory (easier than a wiki). We use a lot of tools: web tools, applications, and integrations within 3rd-party software. The common element that glues all of them together is our API. For example, for the intranet tools we use Django, but it's used without the ORM, kind of limited to templates and URLs - every application has corresponding methods within our API. We do not use the Django admin interface, because it is heavily dependent on the ORM. Because of that Joomla may be hard to integrate. Every employee should be able to edit most of the pages; authentication and privileges have to be managed by our API. How hard is it to make Joomla use a different authentication process? (extensions only - no hacks) If one knows Django better than Joomla, should Django be used?

    Read the article

  • Synchronizing screencasting (ffmpeg) and capturing from the webcam (OpenCV)

    - by lyuba
    As in my previous questions, I am trying to build a simple eye tracker. I decided to start with a Linux version (running Ubuntu). To complete this task one should organize screencasting and webcam capturing in such a way that frames from both streams exactly match each other and there is the same total number of frames in each of them. The screencasting fps fully depends on the camera's fps, so each time we get the image from the webcam we can potentially grab a screen frame and stay happy. However, all the tools for fast screencasting, like ffmpeg, for example, return the .avi file as the result and require the fps to be known before they start. On the other hand, tools like Java+Robot or ImageMagick seem to require around 20ms to return the .jpg screenshot, which is pretty slow for the task. But they may be requested right after each time the webcam frame is grabbed and provide the needed synchronization. So the sub-questions are: 1. Does the USB camera's frame rate vary during a single session? 2. Are there any tools which provide fast screencasting frame by frame? 3. Is there any way to make ffmpeg push a new frame to the .avi file only when the program initiates this request? For my task I may use either C++ or Java. I am, actually, an interface designer, not a driver programmer, and this task seems to be pretty low-level. I would be grateful for any suggestions and tips!

    Read the article

  • Why is my GUI unresponsive while a SwingWorker thread runs?

    - by Starchy
    Hello, I have a SwingWorker thread with an IOBound task which is totally locking up the interface while it runs. Swapping out the normal workload for a counter loop has the same result. The SwingWorker looks basically like this: public class BackupWorker extends SwingWorker<String, String> { private static String uname = null; private static String pass = null; private static String filename = null; static String status = null; BackupWorker (String uname, String pass, String filename) { this.uname = uname; this.pass = pass; this.filename = filename; } @Override protected String doInBackground() throws Exception { BackupObject bak = newBackupObject(uname,pass,filename); return "Done!"; } } The code that kicks it off lives in a class that extends JFrame: public void actionPerformed(ActionEvent event) { String cmd = event.getActionCommand(); if (BACKUP.equals(cmd)) { SwingUtilities.invokeLater(new Runnable() { public void run() { final StatusFrame statusFrame = new StatusFrame(); statusFrame.setVisible(true); SwingUtilities.invokeLater(new Runnable() { public void run () { statusFrame.beginBackup(uname,pass,filename); } }); } }); } } Here's the interesting part of StatusFrame: public void beginBackup(final String uname, final String pass, final String filename) { worker = new BackupWorker(uname, pass, filename); worker.execute(); try { System.out.println(worker.get()); } catch (InterruptedException e) { e.printStackTrace(); } catch (ExecutionException e) { e.printStackTrace(); } } } So far as I can see, everything "long-running" is handled by the worker, and everything that touches the GUI on the EDT. Have I tangled things up somewhere, or am I expecting too much of SwingWorker?
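
    One thing worth knowing here (stated as a general SwingWorker fact, not as a diagnosis of the posted code): SwingWorker.get() blocks the calling thread until doInBackground() has finished, so calling it on the EDT immediately after execute() keeps the GUI frozen for the whole run. A minimal sketch of the non-blocking shape, with the real backup work elided:

    ```java
    // Minimal sketch, not the poster's full classes: collect the result in done(), which runs
    // on the EDT only after doInBackground() has returned, instead of blocking on get().
    import java.util.concurrent.ExecutionException;
    import javax.swing.SwingWorker;

    public class BackupWorker extends SwingWorker<String, String> {

        @Override
        protected String doInBackground() throws Exception {
            // the long-running, IO-bound backup would happen here, off the EDT
            return "Done!";
        }

        @Override
        protected void done() {
            try {
                System.out.println(get());   // returns immediately; the work is already finished
            } catch (InterruptedException e) {
                e.printStackTrace();
            } catch (ExecutionException e) {
                e.printStackTrace();
            }
        }
    }
    ```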

    Read the article

  • Giving another object a NSManagedObject

    - by Wayfarer
    Alright, so I'm running into an issue with my code. What I have done is subclassed UIButton so I can give it some more information that pertains to my code. I have been able to create the buttons and they work great. Capiche. However, one of the things I want my subclass to hold is a reference to an NSManagedObject. I have this code in my header file: @interface ButtonSubclass : UIButton { NSManagedObjectContext *context; NSManagedObject *player; } @property (nonatomic, retain) NSManagedObject *player; @property (nonatomic, retain) NSManagedObjectContext *context; - (id)initWithFrame:(CGRect)frame andTitle:(NSString*)title; //- (void)setPlayer:(NSManagedObject *)aPlayer; @end As you can see, it has an instance variable for the NSManagedObject I want it to hold (as well as the Context). But for the life of me, I can't get it to hold that NSManagedObject. I run both the @synthesize methods in the Implementation file. @synthesize context; @synthesize player; So I'm not sure what I'm doing wrong. This is how I create my button: ButtonSubclass *playerButton = [[ButtonSubclass alloc] initWithFrame:frame andTitle:@"20"]; //works playerButton.context = self.context; //works playerButton.player = [players objectAtIndex:i]; //FAILS And I have initialized the players array earlier, where I get the objects. Another weird thing is that when it gets to this spot in the code, the app crashes (woot) and the console output stops. It doesn't give me any error or notification at all that the app has crashed. It just... stops. So I don't even know what the error is that is crashing the code, besides it has to do with that line up there setting the "player" variable. Thoughts and ideas? I'd love your wisdom!

    Read the article

  • UIImagePickerControllerDelegate Returns Blank "editingInfo" Dictionary Object

    - by Leachy Peachy
    Hi there, I have an iPhone app that calls upon the UIImagePickerController to offer folks a choice between selecting images via the camera or via their photo library on the phone. The problem is that sometimes, (Can't always get it to replicate.), the editingInfo dictionary object that is supposed to be returned by didFinishPickingImage delegate message, comes back blank or (null). Has anyone else seen this before? I am implementing the UIImagePickerControllerDelegate in my .h file and I am correctly implementing the two delegate methods: didFinishPickingImage and imagePickerControllerDidCancel. Any help would be greatly appreciated. Thank you in advance! Here is my code... my .h file: @interface AddPhotoController : UIViewController <UIImagePickerControllerDelegate, UINavigationControllerDelegate> { IBOutlet UIImageView *imageView; IBOutlet UIButton *snapNewPictureButton; IBOutlet UIButton *selectFromPhotoLibraryButton; } @property (nonatomic, retain) UIImageView *imageView; @property (nonatomic, retain) UIButton *snapNewPictureButton; @property (nonatomic, retain) UIButton * selectFromPhotoLibraryButton; my .m file: @implementation AddPhotoController @synthesize imageView, snapNewPictureButton, selectFromPhotoLibraryButton; - (IBAction)getCameraPicture:(id)sender { UIImagePickerController *picker = [[UIImagePickerController alloc] init]; picker.delegate = self; picker.sourceType = UIImagePickerControllerSourceTypeCamera; picker.allowsImageEditing = YES; [self presentModalViewController:picker animated:YES]; [picker release]; } - (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingImage:(UIImage *)image editingInfo:(NSDictionary *)editingInfo { NSLog(@"Image Meta Info.: %@",editingInfo); UIImage *selectedImage = image; imageView.image = selectedImage; self._havePictureData = YES; [self.useThisPhotoButton setEnabled:YES]; [picker dismissModalViewControllerAnimated:YES]; } - (void)imagePickerControllerDidCancel:(UIImagePickerController *)picker { [picker dismissModalViewControllerAnimated:YES]; }

    Read the article

  • Suggested (simple) approach for drawing large numbers of visual elements in WPF?

    - by Ender
    I'm writing an interface that features a large (~50000px width) "canvas"-type area that is used to display a lot of data in a fairly novel way. This involves lots of lines, rectangles, and text. The user can scroll around to explore the entire canvas. At the moment I'm just using a standard Canvas panel with various Shapes placed on it. This is nice and easy to do: construct a shape, assign some coordinates, and attach it to the Canvas. Unfortunately, it's pretty slow (to construct the children, not to do the actual rendering). I've looked into some alternatives, it's a bit intimidating. I don't need anything fancy - just the ability to efficiently construct and place objects in a coordinate plane. If all I get are lines, colored rectangles, and text, I'll be happy. Do I need Geometry instances inside of Geometry Groups inside of GeometryDrawings inside of some Panel container? Note: I'd like to include text and graphics (i.e. colored rectangles) in the same space, if possible.

    Read the article

  • Help creating custom iPhone Classes

    - by seanny94
    This should be a simple question, but I just can't seem to figure it out. I'm trying to create my own class which will provide a simpler way of playing short sounds using the AudioToolbox framework as provided by Apple. When I import these files into my project and attempt to utilize them, they just don't seem to work. I was hoping someone would shed some light on what I may be doing wrong here. simplesound.h #import <Foundation/Foundation.h> @interface simplesound : NSObject { IBOutlet UILabel *statusLabel; } @property(nonatomic, retain) UILabel *statusLabel; - (void)playSimple:(NSString *)url; @end simplesound.m #import "simplesound.h" @implementation simplesound @synthesize statusLabel; - (void)playSimple:(NSString *)url { if (url = @"vibrate") { AudioServicesPlaySystemSound(kSystemSoundID_Vibrate); statusLabel.text = @"VIBRATED!"; } else { NSString *paths = [[NSBundle mainBundle] resourcePath]; NSString *audioF1ile = [paths stringByAppendingPathComponent:url]; NSURL *audioURL = [NSURL fileURLWithPath:audioFile isDirectory:NO]; SystemSoundID mySSID; OSStatus error = AudioServicesCreateSystemSoundID ((CFURLRef)audioURL,&mySSID); AudioServicesAddSystemSoundCompletion(mySSID,NULL,NULL,simpleSoundDone,NULL); if (error) { statusLabel.text = [NSString stringWithFormat:@"Error: %d",error]; } else { AudioServicesPlaySystemSound(mySSID); } } static void simpleSoundDone (SystemSoundID mySSID, void *args) { AudioServicesDisposeSystemSoundID (mySSID); } } - (void)dealloc { [url release]; } @end Does anyone see what I'm trying to accomplish here? Does anyone know how to remedy this code that is supposedly wrong?

    Read the article

  • .NET Speech recognition plugin Runtime Error: Unhandled Exception. What could possibly cause it?

    - by manuel
    I'm writing a plugin (dll file) for speech recognition, and I'm creating a WinForm as its interface/dialog. When I run the plugin and click the 'Speak' to start the initialization, I get an unhandled exception. Here is a piece of the code: public ref class Dialog : public System::Windows::Forms::Form { public: SpeechRecognitionEngine^ sre; private: System::Void btnSpeak_Click(System::Object^ sender, System::EventArgs^ e) { Initialize(); } protected: void Initialize() { if (System::Threading::Thread::CurrentThread->GetApartmentState() != System::Threading::ApartmentState::STA) { throw gcnew InvalidOperationException("UI thread required"); } //create the recognition engine sre = gcnew SpeechRecognitionEngine(); //set our recognition engine to use the default audio device sre->SetInputToDefaultAudioDevice(); //create a new GrammarBuilder to specify which commands we want to use GrammarBuilder^ grammarBuilder = gcnew GrammarBuilder(); //append all the choices we want for commands. //we want to be able to move, stop, quit the game, and check for the cake. grammarBuilder->Append(gcnew Choices("play", "stop")); //create the Grammar from th GrammarBuilder Grammar^ customGrammar = gcnew Grammar(grammarBuilder); //unload any grammars from the recognition engine sre->UnloadAllGrammars(); //load our new Grammar sre->LoadGrammar(customGrammar); //add an event handler so we get events whenever the engine recognizes spoken commands sre->SpeechRecognized += gcnew EventHandler<SpeechRecognizedEventArgs^> (this, &Dialog::sre_SpeechRecognized); //set the recognition engine to keep running after recognizing a command. //if we had used RecognizeMode.Single, the engine would quite listening after //the first recognized command. sre->RecognizeAsync(RecognizeMode::Multiple); //this->init(); } void sre_SpeechRecognized(Object^ sender, SpeechRecognizedEventArgs^ e) { //simple check to see what the result of the recognition was if (e->Result->Text == "play") { MessageBox(plugin.hwndParent, L"play", 0, 0); } if (e->Result->Text == "stop") { MessageBox(plugin.hwndParent, L"stop", 0, 0); } } };

    Read the article

  • Problem using Lazy<T> from within a generic abstract class

    - by James Black
    I have a generic class that all my DAO classes derive from, which is defined below. I also have a base class for all my entities, but that is not generic. The method GetIdOrSave is going to be a different type than how I defined SabaAbstractDAO, as I am trying to get the primary key to fulfill the foreign key relationships, so this function goes out to either get the primary key or save the entity and then get the primary key. The last code snippet has a solution on how it will work if I get rid of the generic part, so I think this can be solved by using variance, but I can't figure out how to write an interface that will compile. public abstract class SabaAbstractDAO<T> { ... public K GetIdOrSave<K>(K item, Lazy<SabaAbstractDAO<BaseModel>> lazyitemdao) where K : BaseModel { if (item != null && item.Id.IsNull()) { var itemdao = lazyitemdao.Value; item.Id = itemdao.retrieveID(item); if (item.Id.IsNull()) { return itemdao.SaveData(item); } } return item; } } I am getting this error, when I try to compile: Argument 2: cannot convert from 'System.Lazy<ORNL.HRD.LMS.Dao.SabaCourseDAO>' to 'System.Lazy<ORNL.HRD.LMS.Dao.SabaAbstractDAO<ORNL.HRD.LMS.Models.BaseModel>>' I am trying to call it this way: GetIdOrSave(input.OfferingTemplate, new Lazy<SabaCourseDAO>( () => { return new SabaCourseDAO() { Dao = Dao }; }) ); If I change the definition to this, it works. public K GetIdOrSave<K>(K item, Lazy<SabaCourseDAO> lazyitemdao) where K : BaseModel { So, how can I get this to compile using variance (if needed) and generics, so I can have a very general method that will only work with BaseModel and AbstractDAO<BaseModel>? I expect I should only need to make the change in the method and perhaps abstract class definition, the usage should be fine.

    Read the article

  • How can I provide users with the functionality of the DBUnit DatabaseOperation methods from a web interface?

    - by reckoner
    I am currently updating a java-based web application which allows database developers to create stored procedure regression test suites for database testing. Currently, for test setup, execution and clean-up stages, the user is provided with text boxes where they are able to enter SQL code which is executed by the isql command. I would like to extend the application to use DB Unit’s DatabaseOperation methods to provide more ways to setup the state of the database than just SQL statements. The main reason for using Db Unit rather than just SQL statements is to be able to create and store xml and xls DataSets on a server where they can be associated with their test cases and used for data setup. My question is: How can I provide users with the functionality of the DBUnit DatabaseOperation methods from a web interface? I have considered: Creating a simple programming language and a parser to read some simple syntax involving the DB Unit method names which accept a parameter being the file location to an xml or xls DataSet. I was thinking of allowing the user to register the files they need with the web app which would catalogue them and provide each file with an identifier which could passed as a parameter to the methods in this simple programming language. Creating an XML DTD which provides the user with the ability to specify operations and parameters. If I went this approach, how can I execute the methods and their parameters that I parse from the XML document? Creating a table in the database which stores the method and a FK relation to a catalogued DataSet file, however I don’t think this would be good solution due to the fact that data entry would be tedious. Thanks for your help.
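
    A hedged sketch of the core step shared by the first two options, in case it helps: mapping a user-supplied operation name and a catalogued data-set file onto a DBUnit call. The file registry and the naming are the hypothetical part; DatabaseOperation, FlatXmlDataSet and XlsDataSet are standard DBUnit types.

    ```java
    // Sketch: resolve an operation name plus a catalogued data-set file to a DBUnit execution.
    import java.io.File;
    import java.io.FileInputStream;
    import java.sql.Connection;
    import org.dbunit.database.DatabaseConnection;
    import org.dbunit.database.IDatabaseConnection;
    import org.dbunit.dataset.IDataSet;
    import org.dbunit.dataset.excel.XlsDataSet;
    import org.dbunit.dataset.xml.FlatXmlDataSet;
    import org.dbunit.operation.DatabaseOperation;

    public class DbUnitOperationRunner {

        public void run(String operationName, File dataSetFile, Connection jdbc) throws Exception {
            IDatabaseConnection connection = new DatabaseConnection(jdbc);
            IDataSet dataSet = dataSetFile.getName().endsWith(".xls")
                    ? new XlsDataSet(new FileInputStream(dataSetFile))
                    : new FlatXmlDataSet(new FileInputStream(dataSetFile));

            DatabaseOperation operation = toOperation(operationName);
            operation.execute(connection, dataSet);
        }

        private DatabaseOperation toOperation(String name) {
            if ("CLEAN_INSERT".equalsIgnoreCase(name)) return DatabaseOperation.CLEAN_INSERT;
            if ("INSERT".equalsIgnoreCase(name))       return DatabaseOperation.INSERT;
            if ("UPDATE".equalsIgnoreCase(name))       return DatabaseOperation.UPDATE;
            if ("DELETE_ALL".equalsIgnoreCase(name))   return DatabaseOperation.DELETE_ALL;
            throw new IllegalArgumentException("Unknown operation: " + name);
        }
    }
    ```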

    Read the article

  • Do complex JOINs cause high coupling and maintenance problems?

    - by ashkan.kh.nazary
    Our project has ~40 tables with complex relations. A colleague believes in using long join queries, which forces me to learn about tables outside of my module, but I think I should not have to concern myself with tables not directly related to my module and should use data access functions (written by those responsible for other modules) when I need data from them. Let me clarify: I am responsible for the ContactVendor module, which enables the customers to contact the vendor and start a conversation about some specific product. The Products module has its own complex tables and relations with functions that encapsulate details (for example i18n, activation, product availability etc ...). Now I need to show the product title of some product related to some conversation between the vendor and customers. I may either write a long query that retrieves the product info along with the conversation stuff in one shot (which forces me to learn about the Product tables) OR I may pass the relevant product_id to the get_product_info(int) function. The first approach is obviously demanding and introduces many bad practices and things I normally consider faults in programming. The problem with the second approach seems to be the countless mini queries these access functions cause, and performance loss is a concern when a loop tries to fetch product titles for 100 products using functions that each perform a separate query. So I'm stuck between "don't code to the implementation, code to the interface" and performance. What is the right way of doing things? UPDATE: I'm especially concerned about possible future modifications to those tables outside of my module. What if the Products module decides to change the way it does things, or for some reason modifies the schema? It means some other modules would break or malfunction until the change is integrated into them. The usual ripple effect problem.
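
    A hedged sketch of the usual middle ground between the two options above, with purely illustrative names: keep the Products module's accessor functions (and the encapsulation they give), but add a batched variant so a listing of 100 conversations costs one extra query instead of 100.

    ```java
    // Sketch of a batched accessor contract that the Products module could expose.
    import java.util.Collection;
    import java.util.Map;

    public interface ProductTitles {

        /** Single lookup - fine for a detail page. */
        String getProductTitle(int productId);

        /** Batched lookup - what a listing of conversations should call, backed by one IN (...) query. */
        Map<Integer, String> getProductTitles(Collection<Integer> productIds);
    }
    ```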

    Read the article

  • How do I set up Linq to SQL and WCF

    - by Jisaak
    So I'm venturing out into the world of Linq and WCF web services and I can't seem to make the magic happen. I have a VERY basic WCF web service going and I can get my old SqlConnection calls to work and return a DataSet. But I can't/don't know how to get the Linq to SQL queries to work. I'm guessing it might be a permissions problem since I need to connect to the SQL Database with a specific set of credentials but I don't know how I can test if that is the issue. I've tried using both of these connection strings and neither seem to give me a different result. <add name="GeoDataConnectionString" connectionString="Data Source=SQLSERVER;Initial Catalog=GeoData;Integrated Security=True" providerName="System.Data.SqlClient" /> <add name="GeoDataConnectionString" connectionString="Data Source=SQLSERVER;Initial Catalog=GeoData;User ID=domain\userName; Password=blahblah; Trusted_Connection=true" providerName="System.Data.SqlClient" /> Here is the function in my service that does the query and I have the interface add the [OperationContract] public string GetCity(int cityId) { GeoDataContext db = new GeoDataContext(); var city = from c in db.Cities where c.CITY_ID == 30429 select c.DESCRIPTION; return city.ToString(); } The GeoData.dbml only has one simple table in it with a list of city id's and city names. I have also changed the "Serialization Mode" on the DataContext to "Unidirectional" which from what I've read needs to be done for WCF. When I run the service I get this as the return: SELECT [t0].[DESCRIPTION] FROM [dbo].[Cities] AS [t0] WHERE [t0].[CITY_ID] = @p0 Dang, so as I'm writing this I realize that maybe my query is all messed up?

    Read the article

  • Need help running Python app as service in Ubuntu with Upstart

    - by GreeenGuru
    I have written a logging application in Python that is meant to start at boot, but I've been unable to start the app with Ubuntu's Upstart init daemon. When run from the terminal with sudo /usr/local/greeenlog/main.pyw, the application works perfectly. Here is what I've tried for the Upstart job: /etc/init/greeenlog.conf # greeenlog description "I log stuff." start on startup stop on shutdown script exec /usr/local/greeenlog/main.pyw end script My application starts one child thread, in case that is important. I've tried the job with the expect fork stanza without any change in the results. I've also tried this with sudo and without the script statements (just a lone exec statement). In all cases, after boot, running status greeenlog returns greeenlog stop/waiting and running start greeenlog returns: start: Rejected send message, 1 matched rules; type="method_call", sender=":1.61" (uid=1000 pid=2496 comm="start) interface="com.ubuntu.Upstart0_6.Job" member="Start" error name="(unset)" requested_reply=0 destination="com.ubuntu.Upstart" (uid=0 pid=1 comm="/sbin/init")) Can anyone see what I'm doing wrong? I appreciate any help you can give. Thanks.

    Read the article

  • Excel 2010 64 bit can't create .net object

    - by aboes81
    I have a simple class library that I use in Excel. Here is a simplification of my class... using System; using System.Runtime.InteropServices; namespace SimpleLibrary { [ComVisible(true)] public interface ISixGenerator { int Six(); } public class SixGenerator : ISixGenerator { public int Six() { return 6; } } } In Excel 2007 I would create a macro enabled workbook and add a module with the following code: Public Function GetSix() Dim lib As SimpleLibrary.SixGenerator lib = New SimpleLibrary.SixGenerator Six = lib.Six End Function Then in Excel I could call the function GetSix() and it would return six. This no longer works in Excel 2010 64bit. I get a Run-time error '429': ActiveX component can't create object. I tried changing the platform target to x64 instead of Any CPU but then my code wouldn't compile unless I unchecked the Register for COM interop option, doing so makes it so my macro enable workbook cannot see SimpleLibrary.dll as it is no longer regsitered. Any ideas how I can use my library with Excel 2010 64 bit?

    Read the article

  • MSBuild script fails but produces no errors

    - by Kate
    I have a MSBuild script that I am executing through TeamCity. One of the tasks that is runs is from Xheo DeploxLX CodeVeil which obfuscates some DLLs. The task I am using is called VeilProject. I have run the CodeVeil Project through the interface manually and it works correctly, so I think I can safely assume that the actual obfuscate process is ok. This task used to take around 40 minutes and the rest of the MSBuild file executed perfectly and finished without errors. For some reason this task is now taking 1hr 20 minutes or so to execute. Once the VeilProject task is finished the output from the task says it completely successfully, however the MSBuild script fails at this point. I have a task directly after the VeilProject task and it does not get outputted. Using diagnostic output from MSBUild I can see the following: My questions are: Would it be possible that the MSBuild script has timed out? Once the task has completed it is after a certain timeout period so it just fails? Why would the build fail with no errors and no warnings? [05:39:06]: [Target "Obfuscate"] Finished. [05:39:06]: [Target "Obfuscate"] Saving exception map [05:49:21]: [Target "Obfuscate"] Ended at 11/05/2010 05:49:21, ~1 hour, 48 minutes, 6 seconds [05:49:22]: [Target "Obfuscate"] Done. [05:49:51]: MSBuild output: Ended at 11/05/2010 05:49:21, ~1 hour, 48 minutes, 6 seconds (TaskId:8) Done. (TaskId:8) Done executing task "VeilProject" -- FAILED. (TaskId:8) Done building target "Obfuscate" in project "AMK_Release.proj.teamcity.patch.tcprojx" -- FAILED.: (TargetId:12) Done Building Project "C:\Builds\Scripts\AMK_Release.proj.teamcity.patch.tcprojx" (All target(s)) -- FAILED. Project Performance Summary: 6535484 ms C:\Builds\Scripts\AMK_Release.proj.teamcity.patch.tcprojx 1 calls 6535484 ms All 1 calls Target Performance Summary: 156 ms PreClean 1 calls 266 ms SetBuildVersionNumber 1 calls 2406 ms CopyFiles 1 calls 6532391 ms Obfuscate 1 calls Task Performance Summary: 16 ms MakeDir 2 calls 31 ms TeamCitySetBuildNumber 1 calls 31 ms Message 1 calls 62 ms RemoveDir 2 calls 234 ms GetAssemblyIdentity 1 calls 2406 ms Copy 1 calls 6528047 ms VeilProject 1 calls Build FAILED. 0 Warning(s) 0 Error(s) Time Elapsed 01:48:57.46 [05:49:52]: Process exit code: 1 [05:49:55]: Build finished

    Read the article

  • Editing list properties using DataGridview

    - by toom
    Ok, I have my custom class: public class FileItem : INotifyPropertyChanged { int id=0; string value=""; public int Id { get { return id; } set { id = value; Changed("Id"); } } public string Value { get { return value; } set { this.value = value; Changed("Value"); } } public event PropertyChangedEventHandler PropertyChanged; void Changed(string name) { if (PropertyChanged != null) PropertyChanged(this, new PropertyChangedEventArgs(name)); } } public BindingList<FileItem> FilesystemEntries = new BindingList<FileItem>(); And I have DataGridView1 with its DataSource set to FilesystemEntries: binding.DataSource = FilesystemEntries; Already I can add and remove rows - these changes are reflected in the collection. However, Value and Id are not saved into the binding list when I change them in the DataGridView; id is always 0 and value is "". How can I make this work? Do I need to implement some interface on FileItem to allow editing properties? ReadOnly of the DGV is set to false, the same for all columns.

    Read the article

  • JBoss Clustered Service that sends emails from txt file

    - by michael lucas
    I need a little push in the right direction. Here's my problem: I have to create an ultra-reliable service that sends email messages to clients whose addresses are stored in a txt file on an FTP server. A single txt file may contain an unlimited number of entries. Most often the file contains about 300,000 entries. The service exposes an interface with just two simple methods: TaskHandle sendEmails(String ftpFilePath); ProcessStatus checkProcessStatus(TaskHandle taskHandle); The sendEmails() method returns a TaskHandle by which we can ask for the ProcessStatus. For such a service to be reliable, clustering is necessary. Processing a single txt file might take a long time. Restarting one node in a cluster should have no impact on sending emails. We use JBoss AS 4.2.0, which comes with a nice HASingletonController that ensures one instance of the service is running at a given time. But once a fail-over happens, the second service should continue working from where the first one stopped. How can I share state between nodes in a cluster in such a way that leaves no possibility of sending some emails twice?
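
    A hedged sketch of the "continue from where the first node stopped" idea, independent of JBoss: progress is persisted per task in shared storage (a database table, not node memory) and work is done in small committed batches, so a fail-over node resumes from the last saved position. ProgressDao and EmailSender are hypothetical stand-ins for the real persistence and sending code.

    ```java
    // Sketch: a resumable job that checkpoints its cursor after every batch.
    import java.util.List;

    public class ResumableEmailJob {

        public interface ProgressDao {
            long loadNextIndex(String taskHandle);            // 0 for a fresh task
            void saveNextIndex(String taskHandle, long next); // committed together with the batch
        }

        public interface EmailSender {
            void send(String address);
        }

        private final ProgressDao progress;
        private final EmailSender sender;

        public ResumableEmailJob(ProgressDao progress, EmailSender sender) {
            this.progress = progress;
            this.sender = sender;
        }

        public void run(String taskHandle, List<String> addresses) {
            long next = progress.loadNextIndex(taskHandle);   // a fail-over node picks up here
            final int batchSize = 100;
            while (next < addresses.size()) {
                long end = Math.min(next + batchSize, addresses.size());
                for (long i = next; i < end; i++) {
                    sender.send(addresses.get((int) i));
                }
                next = end;
                progress.saveNextIndex(taskHandle, next);     // duplicates limited to one batch at worst
            }
        }
    }
    ```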

    Read the article

  • What's wrong with this code to un-camel-case a string?

    - by omair iqbal
    Here is my attempt to solve the About.com Delphi challenge to un-camel-case a string. unit challenge1; interface uses Windows, Messages, SysUtils, Variants, Classes, Graphics, Controls, Forms, Dialogs, StdCtrls; type check = 65..90; TForm1 = class(TForm) Edit1: TEdit; Button1: TButton; procedure Button1Click(Sender: TObject); private { Private declarations } public { Public declarations } end; var Form1: TForm1; var s1,s2 :string; int : integer; implementation {$R *.dfm} procedure TForm1.Button1Click(Sender: TObject); var i: Integer; checks : set of check; begin s1 := edit1.Text; for i := 1 to 20 do begin int :=ord(s1[i]) ; if int in checks then insert(' ',s1,i-1); end; showmessage(s1); end; end. check is a set that contains capital letters so basically whenever a capital letter is encountered the insert function adds space before its encountered (inside the s1 string), but my code does nothing. ShowMessage just shows text as it was entered in Edit1. What have I done wrong?

    Read the article
