Search Results

Search found 10585 results on 424 pages for 'ui builder'.


  • Business Case for investing time developing Stubs and BizUnit Tests

    - by charlie.mott
    I was recently in a position where I had to justify why effort should be spent developing stubbed integration tests for BizTalk solutions. These tests are usually developed using the BizUnit framework. I assumed that most seasoned BizTalk developers would consider this best practice. Even though Microsoft suggests the use of BizUnit on MSDN, I've not found a single site listing the justifications for investing time writing stubs and BizUnit tests.

    Stubs

    Stubs should be developed to isolate your development team from external dependencies. This is described by Michael Stephenson here. Failing to do this can result in the following problems:
    - In contract-first scenarios, the external system interface will have been defined, but the interface may not have been set up or even developed yet for the BizTalk developers to work with.
    - By the time you open the target location to see the data BizTalk has sent, it may have been swept away.
    - If you are relying on the UI of the target system to see the data BizTalk has sent, what do you do if it fails to arrive? It may take time for the data to be processed, or it may be scheduled to be processed later.
    - Learning how to use the source/target systems and investigating where things go wrong in those systems will slow down the BizTalk development effort.
    - By the time the data is visible in a UI it may have undergone further transformations.
    - In larger development teams working together, do you all use the same source and target instances? How do you know which data was created by whose tests? How do you know which event log error messages are whose? Another developer may have "cleaned up" your data.
    - It is harder to write BizUnit tests that clean up the data/logs after each test run.
    - What if your B2B partners' source or target system cannot support the sort of testing you want to do? They may not even have a development or test instance that you can work with. Their single test instance may be used by the SIT/UAT teams.
    - There may be licensing costs for setting up an instance of the external system.
    The stubs I like to use are generic stubs that can accept/return any message type. Usually I need to create one per protocol. They should be driven by BizUnit steps to validate the data received and to select a response message (or an error response). Once built, they can be re-used for many integration tests and from project to project. I'm not saying that developers should never test against a real instance. Every so often, you still need to connect to real developer or test instances of the source and target endpoints/services. The interface developers may ask you to send them some data to see if everything still works, or you might want some messages sent to BizTalk to get confidence that everything still works beyond BizTalk.

    Tests

    Automated "stubbed integration tests" are usually built using the BizUnit framework. These facilitate testing of the entire integration process from source stub to target stub, and ensure that all of the BizTalk components are configured together correctly to meet the requirements. More fine-grained unit testing of individual BizTalk components is still encouraged, but BizUnit provides by far the easiest way to test some component types (e.g. orchestrations). Using BizUnit with the Behaviour Driven Development approach described by Mike Stephenson delivers the following benefits (source: http://biztalkbddsample.codeplex.com – Video 1):
    - Requirements can be easily defined using Given/When/Then.
    - Requirements are close to the code, so they are easier to manage as features and scenarios.
    - Requirements are defined in domain language.
    - The feature files can be used as part of the documentation.
    - The documentation is accurate to the build of code and can be published with a release.
    - The scenarios document behaviour effectively without being excessive.
    - The scenarios are maintained with the code.
    - There's an abstraction between the intention and implementation of tests, making them easier to understand.
    - The requirements drive the testing.
    These same tests can also be used to drive load testing, as described here.

    If you don't do this...

    If you don't follow the above "stubbed integration tests" approach, the developer will need to trigger the tests manually. This has the following risks:
    - Developers are unlikely to check all the scenarios, and all the expected conditions, each time.
    - After the developer leaves, these manual test steps may be lost. What test scenarios are there? What test messages did they use for each scenario?
    - There is no mechanism to prove adequate test coverage.
    A test team may attempt to automate integration test scenarios in a test environment by triggering tests from a source system UI. If this is a replacement for BizUnit tests, it carries the following risks:
    - It moves the tests downstream, so problems will be found later in the process.
    - Testers may not check all the expected conditions within the BizTalk infrastructure, such as event logs, suspended messages, etc.
    - These automated tests may also get in the way of manual tests run on these environments.

    Read the article

  • Design Pattern for building a Budget

    - by Scott
    So I've looked at the Builder pattern, abstract interfaces, other design patterns, etc., and I think I'm overthinking the simplicity behind what I'm trying to do, so I'm asking you guys for some help with either recommending a design pattern I should use, or an architecture style I'm not familiar with that fits my task. I have one model that represents a Budget in my code. At a high level, it looks like this:

      public class Budget
      {
          public int Id { get; set; }
          public List<MonthlySummary> Months { get; set; }
          public float SavingsPriority { get; set; }
          public float DebtPriority { get; set; }
          public List<Savings> SavingsCollection { get; set; }
          public UserProjectionParameters UserProjectionParameters { get; set; }
          public List<Debt> DebtCollection { get; set; }
          public string Name { get; set; }
          public List<Expense> Expenses { get; set; }
          public List<Income> IncomeCollection { get; set; }
          public bool AutoSave { get; set; }
          public decimal AutoSaveAmount { get; set; }
          public FundType AutoSaveType { get; set; }
          public decimal TotalExcess { get; set; }
          public decimal AccountMinimum { get; set; }
      }

    Going into more detail about the properties here shouldn't be necessary, but if you have any questions about them I will fill more out for you. Now, I'm trying to create code that builds one of these things based on a set of BudgetBuildParameters that the user will create and supply. There are going to be multiple types of these parameters. For example, on the site's homepage there will be an example section where you can quickly see what your numbers look like, so that would use a much simpler set of SampleBudgetBuildParameters than, say, after a user registers and wants to create a fully filled-out Budget using much more information in the DebtBudgetBuildParameters. A lot of these builds are going to use similar code for certain tasks, but some might also want to check the status of a user's DebtCollection when formulating a monthly spending report, whereas a Budget that only focuses on savings might not. I'd like to reduce code duplication (obviously) as much as possible, but every way I can think of to do this would require a base BudgetBuilderFactory to return the correct builder to the caller, and then creating, say, a SimpleBudgetBuilder that inherits from a BudgetBuilder, putting all duplicate code in the BudgetBuilder and letting the SimpleBudgetBuilder handle its own cases. The problem is that a lot of the unique cases are unique to only 2 of the 4 builders, so there would obviously still be duplicate code somewhere if I did that. Can anyone think of a better way to either explain a solution to this that may or may not be similar to mine, or a completely different pattern or way of thinking here? I really appreciate it.
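
    A minimal sketch of one way to factor this (illustrative only, with hypothetical names, written in Java although the question uses C#): keep the shared pipeline in an abstract base builder as a template method, expose the optional shared steps as protected helpers, and let each concrete builder override only the part that differs. A step that only two of the four builders need can then live once in the base and be called from exactly those two subclasses.

      import java.util.ArrayList;
      import java.util.List;

      // Placeholder types so the sketch stands alone; the real Budget and
      // parameter classes would come from the application's model.
      class Budget { List<String> lines = new ArrayList<>(); }
      class BudgetBuildParameters { }

      abstract class BudgetBuilder {
          protected final Budget budget = new Budget();

          // Template method: the shared build pipeline lives in one place.
          public final Budget build(BudgetBuildParameters params) {
              addIncome(params);
              addExpenses(params);
              applyExtras(params);          // each builder decides what "extras" means
              return budget;
          }

          protected void addIncome(BudgetBuildParameters params)   { /* shared logic */ }
          protected void addExpenses(BudgetBuildParameters params) { /* shared logic */ }

          // An optional step used by only some builders still lives here once,
          // so the builders that need it can call it without duplicating it.
          protected void checkDebtCollection(BudgetBuildParameters params) { /* shared logic */ }

          protected abstract void applyExtras(BudgetBuildParameters params);
      }

      class SimpleBudgetBuilder extends BudgetBuilder {
          @Override
          protected void applyExtras(BudgetBuildParameters params) {
              // the quick sample budget skips the debt check entirely
          }
      }

      class DebtBudgetBuilder extends BudgetBuilder {
          @Override
          protected void applyExtras(BudgetBuildParameters params) {
              checkDebtCollection(params);  // reuse the optional shared step
          }
      }

    A factory can still hand out the right builder, but the duplication question is then reduced to deciding which protected helpers each subclass calls, rather than copying code between sibling builders.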

    Read the article

  • Developing Schema Compare for Oracle (Part 4): Script Configuration

    - by Simon Cooper
    If you've had a chance to play around with the Schema Compare for Oracle beta, you may have come across this screen in the synchronization wizard. This screen is one of the few that, along with the project configuration form, doesn't come from SQL Compare. It was designed to solve a couple of issues that, although they aren't specific to Oracle, are much more of a problem there than on SQL Server: datatype conversions and NOT NULL columns.

    1. Datatype conversions

    SQL Server is generally quite forgiving when it comes to datatype conversions using ALTER TABLE. For example, you can convert from a VARCHAR to an INT using ALTER TABLE as long as all the character values are parsable as integers. Oracle, on the other hand, only allows ALTER TABLE conversions that don't change the internal data format. Essentially, every change that requires an actual datatype conversion has to be done using a rebuild with a conversion function. That's OK, as we can simply hard-code the various conversion functions for the valid datatype conversions and insert those into the rebuild SELECT list. However, as there always is with Oracle, there's a catch. Have a look at the NUMTODSINTERVAL function. As well as specifying the value (or column) to convert, you have to specify an interval_unit, which tells Oracle how to interpret the input number. We can't hard-code a default for this parameter, as it is entirely dependent on the user's data context! So, in order to convert NUMBER to INTERVAL DAY TO SECOND or INTERVAL YEAR TO MONTH, we need feedback from the user as to what to put in this parameter while we're generating the sync script - this requires a new step in the engine action/script generation to insert these values into the script, as well as new UI to allow the user to specify these values in a sensible fashion. In implementing the engine and UI infrastructure to allow this, it made much more sense to implement it for any rebuild datatype conversion, not just NUMBER to INTERVALs. For conversions which we can do, we pre-fill the 'value' box with the appropriate function from the documentation. The user can also type in arbitrary SQL expressions, which allows them to specify optional format parameters for the relevant conversion functions, or indeed call their own functions to convert between values that don't have a built-in conversion defined. As the value gets inserted as-is into the rebuild SELECT list, any expression that is valid in that context can be specified as the conversion value.

    2. NOT NULL columns

    Another problem that is solved by the new step in the sync wizard is adding a NOT NULL column to a table. If the table contains data (as most database tables do), you can't just add a NOT NULL column, as Oracle doesn't know what value to put in the new column for existing rows - the DDL statement will fail. There are actually three separate scenarios for this problem that have separate solutions within the engine:
    - Adding a NOT NULL column to a table without a rebuild. Here, the workaround is to add a column default with an appropriate value to the column you're adding: ALTER TABLE tbl1 ADD newcol NUMBER DEFAULT <value> NOT NULL; Note, however, there is something to bear in mind about this solution: once specified on a column, a default cannot be removed. To 'remove' a default from a column you change it to have a default of NULL, hence there's code in the engine to treat a NULL default the same as no default at all.
    - Adding a NOT NULL column to a table where a separate change forced a table rebuild. Fortunately, in this case, a column default is not required - we can simply insert the default value into the rebuild SELECT clause.
    - Changing an existing NULL column to NOT NULL. To implement this, we run an UPDATE command before the ALTER TABLE to change all the NULLs in the column to the required default value.
    For all three, we need some way of allowing the user to specify a default value to use instead of NULL; as this is essentially the same problem as datatype conversion (inserting values into the sync script), we can re-use the UI and engine implementation of datatype conversion values. We also provide the option to alter the new column to allow NULLs, or to ignore the problem completely. Note that the same (long-running) problem exists in SQL Compare, but it is much more of an issue in Oracle, as you cannot easily roll back executed DDL statements if the script fails at some point during execution. Furthermore, the engine of SQL Compare is far less conducive to inserting user-supplied values into the generated script. As we're writing the Schema Compare engine from scratch, we used what we learnt from the SQL Compare engine and designed it to be far more modular, which makes inserting procedures like this much easier.

    Read the article

  • Null Validation on EditText box in Alert Dialog - Android

    - by LordSnoutimus
    Hi, I am trying to add some text validation to an EditText field located within an AlertDialog box. It prompts a user to enter a name. I want to add some validation so that if what they have entered is blank or null, it does not do anything apart from creating a Toast saying error. So far I have:

      AlertDialog.Builder alert = new AlertDialog.Builder(this);
      alert.setTitle("Record New Track");
      alert.setMessage("Please Name Your Track:");

      // Set an EditText view to get user input
      final EditText trackName = new EditText(this);
      alert.setView(trackName);

      alert.setPositiveButton("Ok", new DialogInterface.OnClickListener() {
          public void onClick(DialogInterface dialog, int whichButton) {
              String textString = trackName.getText().toString(); // Converts the value of getText to a string.
              if (textString != null && textString.trim().length() == 0) {
                  Context context = getApplicationContext();
                  CharSequence error = "Please enter a track name" + textString;
                  int duration = Toast.LENGTH_LONG;
                  Toast toast = Toast.makeText(context, error, duration);
                  toast.show();
              } else {
                  SQLiteDatabase db = waypoints.getWritableDatabase();
                  ContentValues trackvalues = new ContentValues();
                  trackvalues.put(TRACK_NAME, textString);
                  trackvalues.put(TRACK_START_TIME, tracktimeidentifier);
                  insertid = db.insertOrThrow(TRACK_TABLE_NAME, null, trackvalues);
              }

    But this just closes the AlertDialog and then displays the Toast. I want the AlertDialog to still be on the screen. Thanks
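
    One common way to keep the dialog on screen when validation fails (a sketch, not the poster's code): register the positive button with the builder (a null listener is fine), create and show the dialog, and then attach a View.OnClickListener to that button. A listener attached this way does not auto-dismiss the dialog, so dismiss() is only called once the input passes validation. The `alert` builder, `trackName` field and database code are assumed from the snippet above.

      final AlertDialog dialog = alert.create();   // `alert` is the Builder from the question
      dialog.show();                               // getButton() returns null before show()
      dialog.getButton(AlertDialog.BUTTON_POSITIVE).setOnClickListener(new View.OnClickListener() {
          public void onClick(View v) {
              String textString = trackName.getText().toString();
              if (textString.trim().length() == 0) {
                  Toast.makeText(getApplicationContext(),
                          "Please enter a track name", Toast.LENGTH_LONG).show();
                  // no dismiss() here, so the dialog stays open
              } else {
                  // ... insert the track row exactly as in the else branch above ...
                  dialog.dismiss();                // close only once the input is valid
              }
          }
      });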

    Read the article

  • java.lang.ClassCastException: org.apache.xerces.jaxp.DocumentBuilderFactoryImpl while starting the w

    - by venkat
    Hi, as part of our application we are using Apache's Xerces JAXP parser. When we deploy the application on WebLogic 9.2, we get the following error:

      org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.apache.cxf.wsdl.WSDLManager' defined in class path resource [META-INF/cxf/cxf.xml]: Instantiation of bean failed; nested exception is org.springframework.beans.BeanInstantiationException: Could not instantiate bean class [org.apache.cxf.wsdl11.WSDLManagerImpl]: Constructor threw exception; nested exception is java.lang.ClassCastException: org.apache.xerces.jaxp.DocumentBuilderFactoryImpl

    As per our analysis, WebLogic is trying to load its own DocumentBuilderFactoryImpl, which is present in weblogic.jar, instead of Apache's Xerces. We tried the following to force WebLogic to load DocumentBuilderFactoryImpl from Xerces:
    i) We added the prefer-WEB-INF-classes setting (value true) to weblogic.xml, and we put the latest version of Xalan in the jre/lib/endorsed folder; this didn't resolve our problem.
    ii) We added entries in weblogic-application.xml: webapp.encoding.default set to UTF-8, and the packages javax.jws., org.apache.xerces. and org.apache.xerces.jaxp.*.
    iii) We also added the following entry in weblogic-application.xml: <parser-factory> <saxparser-factory>org.apache.xerces.jaxp.SAXParserFactoryImpl</saxparser-factory> <document-builder-factory>org.apache.xerces.jaxp.DocumentBuilderFactoryImpl</document-builder-factory>, with org.apache.xalan.processor.TransformerFactoryImpl as the transformer factory.
    iv) We added a jaxp.properties file (loading DocumentBuilderFactoryImpl from Xerces) to jre/lib and started the server. In this case, WebLogic didn't start.
    v) We then started the server first and copied the jaxp.properties file in at run time while the server was starting, but with no success.
    None of the above worked for us. Any help is highly appreciated. Thanks in advance, Venkat.

    Read the article

  • E2251 Ambiguous overloaded call to ....

    - by Eric M
    I inherited some Delphi components/code that currently compiles with C++ Builder 2007. I'm simply now trying to compile the components with C++ Builder RAD XE. I don't know Delphi (Object Pascal). Here are the versions of the 'Supports' functions that appear to be in conflict. Is there a compiler switch I can use to make RAD XE backward compatible? Or is there something I can do to these function calls to correct the ambiguous nature?

      {$IFNDEF DELPHI5}
      procedure FreeAndNil(var Obj);
      var
        Temp: TObject;
      begin
        Temp := TObject(Obj);
        Pointer(Obj) := nil;
        Temp.Free;
      end;

      function Supports(const Instance: IUnknown; const Intf: TGUID; out Inst): Boolean; overload;
      begin
        Result := (Instance <> nil) and (Instance.QueryInterface(Intf, Inst) = 0);
      end;

      function Supports(Instance: TObject; const Intf: TGUID; out Inst): Boolean; overload;
      var
        Unk: IUnknown;
      begin
        Result := (Instance <> nil) and Instance.GetInterface(IUnknown, Unk) and Supports(Unk, Intf, Inst);
      end;
      {$ENDIF}

      {$IFNDEF DELPHI6}
      function Supports(const Instance: TObject; const IID: TGUID): Boolean;
      var
        Temp: IUnknown;
      begin
        Result := Supports(Instance, IID, Temp);
      end;
      {$ENDIF}

    Read the article

  • Problem with cucumber

    - by sev
    I want to make a Rails app which requires a minimum of gems. I froze the gems into the app and tried to run cucumber's tests, and I got an error. Below is the sequence of my actions. What am I doing wrong?

      rails cucumber && cd cucumber
      rake rails:freeze:gems

    Add at the end of config/environments/test.rb:

      config.gem 'gherkin'
      config.gem 'cucumber-rails'
      config.gem 'database_cleaner'
      config.gem 'webrat'

      rake gems:unpack:dependencies RAILS_ENV=test
      rake gems:build RAILS_ENV=test
      rake gems RAILS_ENV=test

      [F] gherkin
      [F] trollop = 1.16.2
      [F] cucumber-rails
      [F] cucumber = 0.8.0
      [F] gherkin = 1.0.30
      [F] trollop = 1.16.2
      [F] term-ansicolor = 1.0.4
      [F] builder = 2.1.2
      [F] diff-lcs = 1.1.2
      [F] json_pure = 1.4.3
      [F] database_cleaner
      [F] webrat
      [F] nokogiri = 1.2.0
      [F] rack = 1.0
      [F] rack-test = 0.5.3
      [F] rack = 1.0

      script/generate cucumber
      rake db:migrate
      gem uninstall builder cucumber cucumber-rails diff-lcs gherkin json_pure nokogiri rack-test term-ansicolor trollop webrat
      rake cucumber

      /usr/bin/ruby1.8 -I "cucumber/vendor/gems/cucumber-0.8.0/lib:lib" "cucumber/vendor/gems/cucumber-0.8.0/bin/cucumber" --profile default
      /usr/lib/ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require': no such file to load -- gherkin (LoadError)
          from /usr/lib/ruby/1.8/rubygems/custom_require.rb:31:in `require'
          from cucumber/vendor/gems/cucumber-0.8.0/bin/../lib/cucumber/cli/main.rb:5
          from cucumber/vendor/gems/cucumber-0.8.0/bin/cucumber:5:in `require'
          from cucumber/vendor/gems/cucumber-0.8.0/bin/cucumber:5
      rake aborted!
      Command failed with status (1): [/usr/bin/ruby1.8 -I "cucumbe...] (See full trace by running task with --trace)

    Read the article

  • java.lang.VerifyError on method that worked a minute ago

    - by Travis
    Apologies in advance but I have never seen this error before and don't know what to include. I am using NetBeans and suddenly began getting this error:

      Exception in thread "AWT-EventQueue-0" java.lang.VerifyError: (class: market/CostOperations, method: <init> signature: ()V) Constructor must call super() or this()
          at Bluebuild.Main.refreshTables(Main.java:748)
          at Bluebuild.Main.formComponentShown(Main.java:649)
          at Bluebuild.Main.access$100(Main.java:28)
          at Bluebuild.Main$2.componentShown(Main.java:374)
          at java.awt.Component.processComponentEvent(Component.java:6095)
          at java.awt.Component.processEvent(Component.java:6043)
          at java.awt.Container.processEvent(Container.java:2041)
          at java.awt.Window.processEvent(Window.java:1836)
          at java.awt.Component.dispatchEventImpl(Component.java:4630)
          at java.awt.Container.dispatchEventImpl(Container.java:2099)
          at java.awt.Window.dispatchEventImpl(Window.java:2478)
          at java.awt.Component.dispatchEvent(Component.java:4460)
          at java.awt.EventQueue.dispatchEvent(EventQueue.java:599)
          at java.awt.EventDispatchThread.pumpOneEventForFilters(EventDispatchThread.java:269)
          at java.awt.EventDispatchThread.pumpEventsForFilter(EventDispatchThread.java:184)
          at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:174)
          at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:169)
          at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:161)
          at java.awt.EventDispatchThread.run(EventDispatchThread.java:122)

    I have not a clue what happened. I didn't even modify market/CostOperations. Here's the constructor though:

      public CostOperations() throws ParserConfigurationException, SAXException, IOException {
          // Open the xml file
          DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
          DocumentBuilder builder = factory.newDocumentBuilder();
          f = new File(dbName);
          doc = builder.parse(f);
          System.out.println(f.canWrite());

          // Create the XPath
          XPathFactory xpfactory = XPathFactory.newInstance();
          path = xpfactory.newXPath();
      }

    In Debug Mode I get this:

      debug:
      Have no FileObject for C:\Program Files (x86)\Java\jdk1.6.0_20\jre\lib\sunrsasign.jar
      Have no FileObject for C:\Program Files (x86)\Java\jdk1.6.0_20\jre\classes

    I just need to know what is causing the error and how to fix it. Thanks!

    Read the article

  • Crawler does not create custom crawled properties

    - by user173739
    These days I have been facing a very strange problem. I have a development environment with MOSS 2007 SP2 and WS 2008; I have search configured and everything works great. I started configuring a staging environment (MOSS 2007 SP2 with the June CU) and created a new farm and a new SSP. I deployed my changes with a package (wsp) and manually created site collections, sub webs, pages and so on. When the full crawl finishes, I see in the crawl log that all my pages have been successfully crawled, and when I use some test tools to query search, my pages are found. In the crawl log there are a few errors like http://mysite/sites/de/pages "The crawler could not communicate with the server. Check that the server is available and that the firewall access is configured correctly..", but all pages in this Pages library were indexed. The problem is that I use custom managed properties (mapped to custom crawled properties) in search queries, but the crawler didn't create crawled properties for all my new site columns. For example, for the site column IsAccent the crawler didn't create the crawled property ows_IsAccent. I'm sure that I created pages for the specific content type, and all my crawl categories have "Automatically discover new properties when a crawl takes place" checked. In Site Settings - Searchable columns I haven't got any column selected as NoCrawl. I tried to export my managed and crawled properties from the dev environment to the stage environment, but all my managed properties were empty; after that I recreated the SSP... the result was the same. I checked a specific page with tools like SharePoint Manager 2007 and the U2U CAML Query Builder 2007 to confirm that the content type is correct, and I can see the values of my custom site columns. Using U2U CAML Query Builder 2007 against a Pages library, in the Result tab I can see ows_IsAccent (my site column is IsAccent) and other site columns, but I can't find them in Crawled Properties. Any ideas?

    Read the article

  • Problem calling Request using RequestBuilder

    - by Tushar Ahirrao
    Hi, my code is:

      String url = "http://gd.geobytes.com/gd?after=-1&variables=GeobytesCountry,GeobytesCity";
      RequestBuilder builder = new RequestBuilder(RequestBuilder.GET, URL.encode(url));
      try {
          Request request = builder.sendRequest(null, new RequestCallback() {
              public void onError(Request request, Throwable exception) {
                  // Couldn't connect to server (could be timeout, SOP violation, etc.)
              }

              public void onResponseReceived(Request request, Response response) {
                  System.out.println(response.getText() + "Response");
                  if (200 == response.getStatusCode()) {
                      Window.alert(response.getText());
                  } else {
                      Window.alert(response.getText());
                  }
              }
          });
      } catch (RequestException e) {
          e.printStackTrace();
      }

    I receive the following error:

      com.google.gwt.http.client.RequestPermissionException: The URL http://gd.geobytes.com/gd?after=-1&variables=GeobytesCountry,GeobytesCity is invalid or violates the same-origin security restriction
          at com.google.gwt.http.client.RequestBuilder.doSend(RequestBuilder.java:378)
          at com.google.gwt.http.client.RequestBuilder.sendRequest(RequestBuilder.java:254)
          at com.ip.client.IpAddressTest.onModuleLoad(IpAddressTest.java:46)
          at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
          at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
          at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
          at java.lang.reflect.Method.invoke(Method.java:597)
          at com.google.gwt.dev.shell.ModuleSpace.onLoad(ModuleSpace.java:369)
          at com.google.gwt.dev.shell.OophmSessionHandler.loadModule(OophmSessionHandler.java:185)
          at com.google.gwt.dev.shell.BrowserChannelServer.processConnection(BrowserChannelServer.java:380)
          at com.google.gwt.dev.shell.BrowserChannelServer.run(BrowserChannelServer.java:222)
          at java.lang.Thread.run(Thread.java:619)
      Caused by: com.google.gwt.http.client.RequestException: (NS_ERROR_DOM_BAD_URI): Access to restricted URI denied
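
    The RequestPermissionException is the browser's same-origin policy, so no change to this client code will make a cross-domain XMLHttpRequest work. One common workaround is to call a small proxy on your own host and let it fetch the remote URL server-side, where the same-origin policy does not apply. The servlet below is a sketch under that assumption (the class name and mapping are made up; it simply relays the geobytes response):

      import java.io.InputStream;
      import java.io.OutputStream;
      import java.net.URL;
      import javax.servlet.http.HttpServlet;
      import javax.servlet.http.HttpServletRequest;
      import javax.servlet.http.HttpServletResponse;

      public class GeobytesProxyServlet extends HttpServlet {
          @Override
          protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws java.io.IOException {
              URL remote = new URL("http://gd.geobytes.com/gd?after=-1&variables=GeobytesCountry,GeobytesCity");
              try (InputStream in = remote.openStream(); OutputStream out = resp.getOutputStream()) {
                  byte[] buf = new byte[4096];
                  int n;
                  while ((n = in.read(buf)) != -1) {
                      out.write(buf, 0, n);   // relay the remote response verbatim
                  }
              }
          }
      }

    The GWT client would then point the RequestBuilder at a same-origin path (for example GWT.getModuleBaseURL() + "geobytes") mapped to this servlet in web.xml. JSONP is the other usual option when the remote service supports a callback parameter.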

    Read the article

  • Titanium won't run iPhone/Android Emulator

    - by BeOliveira
    I just installed the Titanium SDK (1.5.1) and all the Android SDKs. Also, I already have the iPhone SDK 4.2 installed. I downloaded KitchenSink and imported it into Titanium, but whenever I try to run it on the iPhone emulator I get this error:

      [INFO] One moment, building ...
      [INFO] Titanium SDK version: 1.5.1
      [INFO] iPhone Device family: iphone
      [INFO] iPhone SDK version: 4.0
      [INFO] Detected compiler plugin: ti.log/0.1
      [INFO] Compiler plugin loaded and working for ios
      [INFO] Performing clean build
      [INFO] Compiling localization files
      [INFO] Detected custom font: comic_zine_ot.otf
      [ERROR] Error: Traceback (most recent call last):
        File "/Library/Application Support/Titanium/mobilesdk/osx/1.5.1/iphone/builder.py", line 1003, in main
          execute_xcode("iphonesimulator%s" % iphone_version,["GCC_PREPROCESSOR_DEFINITIONS=LOG__ID=%s DEPLOYTYPE=development TI_DEVELOPMENT=1 DEBUG=1 TI_VERSION=%s" % (log_id,sdk_version)],False)
        File "/Library/Application Support/Titanium/mobilesdk/osx/1.5.1/iphone/builder.py", line 925, in execute_xcode
          output = run.run(args,False,False,o)
        File "/Library/Application Support/Titanium/mobilesdk/osx/1.5.1/iphone/run.py", line 31, in run
          sys.exit(rc)
      SystemExit: 1

    And for Android, it runs the OS but not the KitchenSink app. Here's the log:

      [INFO] Launching Android emulator...one moment
      [INFO] Building KitchenSink for Android ... one moment
      [INFO] plugin=/Library/Application Support/Titanium/plugins/ti.log/0.1/plugin.py
      [INFO] Detected compiler plugin: ti.log/0.1
      [INFO] Compiler plugin loaded and working for android
      [INFO] Titanium SDK version: 1.5.1 (12/16/10 16:25 16bbb92)
      [INFO] Waiting for the Android Emulator to become available
      [ERROR] Timed out waiting for android.process.acore
      [INFO] Copying project resources..
      [INFO] Detected tiapp.xml change, forcing full re-build...
      [INFO] Compiling Javascript Resources ...
      [INFO] Copying platform-specific files ...
      [INFO] Compiling localization files
      [INFO] Compiling Android Resources... This could take some time

    Any ideas on how to get Titanium to work?

    Read the article

  • java Processbuilder - exec a file which is not in path on OS X

    - by Jakob
    Okay, I'm trying to make ChucK available in exported Processing sketches, i.e. if I export an app from Processing, the ChucK VM binary will be executed from inside the app. So as a user of said app you don't need to worry about ChucK being in your path at all. Right now I'm generating and executing a bash script file, but this way I don't get any console output from ChucK back into Processing:

      #!/bin/bash
      cd "[to where the ChucK executable is located]"
      ./chuck --kill
      killall chuck # just to make sure
      ./chuck chuckScript1.ck chuckScriptn.ck

    then

      Process p = Runtime.getRuntime().exec("chmod 777 " + scriptPath);
      p = Runtime.getRuntime().exec(scriptPath);

    This works, but I want to run ChucK directly from Processing instead, and can't get it to execute:

      String chuckPath = "[folder in which the chuck executable is located]";
      ProcessBuilder builder = new ProcessBuilder(chuckPath + "/chuck", "test.ck");
      final Process process = builder.start();

      InputStream is = process.getInputStream();
      InputStreamReader isr = new InputStreamReader(is);
      BufferedReader br = new BufferedReader(isr);
      String line;
      while ((line = br.readLine()) != null)
          println(line);

      println("done chuckin'! exitValue: " + process.exitValue());

    Sorry if this is newbie style :D
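
    A sketch of one likely fix (an assumption about the problem, not a confirmed answer): when the ProcessBuilder is given "chuckPath/chuck" but no working directory, any relative paths (the .ck script, ChucK's own lookups) resolve against Processing's working directory rather than the ChucK folder, unlike the bash script, which starts with cd. Setting the working directory on the builder and merging stderr into stdout mirrors what the script did and keeps all of ChucK's console output in the one stream being read:

      import java.io.BufferedReader;
      import java.io.File;
      import java.io.IOException;
      import java.io.InputStreamReader;

      public class ChuckLauncher {
          public static void main(String[] args) throws IOException, InterruptedException {
              File chuckDir = new File("/path/to/chuck");              // assumed install location
              ProcessBuilder builder = new ProcessBuilder("./chuck", "test.ck");
              builder.directory(chuckDir);                             // like the `cd` in the bash script
              builder.redirectErrorStream(true);                       // ChucK logs to stderr too
              Process process = builder.start();

              BufferedReader br = new BufferedReader(new InputStreamReader(process.getInputStream()));
              String line;
              while ((line = br.readLine()) != null) {
                  System.out.println(line);                            // println(line) inside Processing
              }
              System.out.println("done chuckin'! exitValue: " + process.waitFor());
          }
      }

    Note that process.exitValue() throws if the process is still running; waitFor() blocks until it has finished and then returns the exit code.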

    Read the article

  • How do I get google protocol buffer messages over a socket connection without disconnecting the clie

    - by Dan
    Hi there, I'm attempting to send a .proto message from an iPhone application to a Java server via a socket connection. However, so far I'm running into an issue when it comes to the server receiving the data; it only seems to process it after the client connection has been terminated. This suggests that the data is getting sent, but the server is keeping its input stream open and waiting for more data. Would anyone know how I might go about solving this? The current code (or at least the relevant parts) is as follows:

    iPhone:

      Person *person = [[[[Person builder] setId:1] setName:@"Bob"] build];
      RequestWrapper *request = [[[RequestWrapper builder] setPerson:person] build];
      NSData *data = [request data];

      AsyncSocket *socket = [[AsyncSocket alloc] initWithDelegate:self];
      if (![socket connectToHost:@"192.168.0.6" onPort:6666 error:nil]){
          [self updateLabel:@"Problem connecting to socket!"];
      } else {
          [self updateLabel:@"Sending data to server..."];
          [socket writeData:data withTimeout:-1 tag:0];
          [self updateLabel:@"Data sent, disconnecting"];
          //[socket disconnect];
      }

    Java:

      try {
          RequestWrapper wrapper = RequestWrapper.parseFrom(socket.getInputStream());
          Person person = wrapper.getPerson();
          if (person != null) {
              System.out.println("Persons name is " + person.getName());
              socket.close();
          }

    On running this, it seems to hang on the line where the RequestWrapper is processing the input stream. I did try replacing the socket writeData method with

      [request writeToOutputStream:[socket getCFWriteStream]];

    which I thought might work; however, I get an error claiming that the "Protocol message contained an invalid tag (zero)". I'm fairly certain that it doesn't contain an invalid tag, as the message works when sending it via the writeData method. Any help on the matter would be greatly appreciated! Cheers! Dan (EDIT: I should mention, I am using the metasyntactic gpb code and the cocoaasyncsocket implementation)
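
    One way around the blocking parse (a sketch, not the poster's code): frame each message with an explicit length so the server knows exactly how many bytes to read, instead of relying on end-of-stream. Here the client would write a 4-byte big-endian length (e.g. a uint32_t in network byte order) before the serialized bytes; the server-side handler then becomes:

      // assumes: java.io.DataInputStream, java.net.Socket, and the generated RequestWrapper class
      DataInputStream in = new DataInputStream(socket.getInputStream());
      int length = in.readInt();          // 4-byte length prefix written by the client
      byte[] payload = new byte[length];
      in.readFully(payload);              // read exactly `length` bytes, no waiting for EOF
      RequestWrapper wrapper = RequestWrapper.parseFrom(payload);

    The Java protobuf runtime also offers built-in varint framing via writeDelimitedTo/parseDelimitedFrom, which does the same job if the Objective-C port in use exposes a matching delimited write.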

    Read the article

  • How to add and remove nested model fields dynamically using Haml and Formtastic

    - by Brightbyte8
    We've all seen the brilliant complex forms railscast where Ryan Bates explains how to dynamically add or remove nested objects within the parent object form using Javascript. Has anyone got any ideas about how these methods need to be modified so as to work with Haml Formtastic? To add some context, here's a simplified version of the problem I'm currently facing:

      # Teacher form (which has nested subject forms) [from my application]
      - semantic_form_for(@teacher) do |form|
        - form.inputs do
          = form.input :first_name
          = form.input :surname
          = form.input :city
        = render 'subject_fields', :form => form
        = link_to_add_fields "Add Subject", form, :subjects

      # Individual Subject form partial [from my application]
      - form.fields_for :subjects do |ff|
        #subject_field
          = ff.input :name
          = ff.input :exam
          = ff.input :level
          = ff.hidden_field :_destroy
          = link_to_remove_fields "Remove Subject", ff

      # Application Helper (straight from Railscasts)
      def link_to_remove_fields(name, f)
        f.hidden_field(:_destroy) + link_to_function(name, "remove_fields(this)")
      end

      def link_to_add_fields(name, f, association)
        new_object = f.object.class.reflect_on_association(association).klass.new
        fields = f.fields_for(association, new_object, :child_index => "new_#{association}") do |builder|
          render(association.to_s.singularize + "_fields", :f => builder)
        end
        link_to_function(name, h("add_fields(this, \"#{association}\", \"#{escape_javascript(fields)} \")"))
      end

      # Application.js (straight from Railscasts)
      function remove_fields(link) {
        $(link).previous("input[type=hidden]").value = "1";
        $(link).up(".fields").hide();
      }

      function add_fields(link, association, content) {
        var new_id = new Date().getTime();
        var regexp = new RegExp("new_" + association, "g")
        $(link).up().insert({ before: content.replace(regexp, new_id) });
      }

    The problem with this implementation seems to be with the javascript methods - the DOM tree of a Formtastic form differs greatly from a regular Rails form. I've seen this question asked online a few times but haven't come across an answer yet - now you know that help will be appreciated by more than just me! Jack

    Read the article

  • asp mvc unit test HttpContext.Current.Cache?

    - by Paul Creasey
    Here is the first part of my controller code:

      public class ControlMController : Controller
      {
          IControlMService _controlMservice;

          public IList<User> Users
          {
              get
              {
                  if (System.Web.HttpContext.Current.Cache["users"] == null)
                  {
                      System.Web.HttpContext.Current.Cache["users"] = _controlMservice.GetUsers();
                  }
                  return (IList<User>)System.Web.HttpContext.Current.Cache["users"];
              }
          }

          public ControlMController(IControlMService controlMservice)
          {
              this._controlMservice = controlMservice;
              var users = Users;
              ViewData["Users"] = users;
              ViewData["jqSelectUsers"] = string.Join(";", users.Select(x => x.UserID + ":" + x.Name).ToArray());
          }

    I'm trying to test it, and because I'm caching using the HttpContext, I'm struggling with null reference exceptions. I've tried using MvcContrib.TestHelper; here is my sample test:

      [TestMethod]
      public void EventDetails_Returns_view_with_correct_event()
      {
          var builder = new TestControllerBuilder();
          var controller = builder.CreateController<ControlMController>(
              new ControlMService(
                  new MockControlMRepository()
              ));

          var view = (controller.EventDetails(1) as ViewResult);
          Assert.AreEqual(1, (view.ViewData.Model as Event).EventId);
      }

    (I haven't quite got round to using DI for my tests!) I'm still getting the same null reference exception when the code hits the HttpContext:

      Error 1 TestCase 'SupportTool.Tests.Services.ControlM.ControlMControllerTests.EventDetails_Returns_view_with_correct_event'
      failed: System.NullReferenceException: Object reference not set to an instance of an object.
      at SupportTool.web.Controllers.ControlMController.get_Users()

    Any ideas?

    Read the article

  • What are the best readings to start using WPF instead of WinForms?

    - by Ivan
    Keeping in mind what CannibalSmith once said - "All the answers are saying "WPF is different". That's a huge understatement. You not only have to learn lots of new stuff - you must forget everything you've learned from Forms. It's a completely new way of doing UI." - and having many years of experience with visual Windows desktop application development (VB6, Borland C++ Builder VCL, WinForms), which is hard to forget, how do I quickly move to developing well-formed WPF applications with Visual Studio? I don't need boozy-woozy graphics to give my app the look and feel of a Hollywood blockbuster or a million-dollar pyjamas. I always loved the tidiness of standard Windows common controls and UI design guidelines, and I enjoyed them even more under Vista's Aero Glass with the Graphite theme. I am perfectly satisfied with WinForms, but I want my applications to be built from the most efficient and up-to-date standard technologies and architected according to the most efficient and flexible patterns of today and tomorrow, leveraging interface-based integration and functionality reuse, and to take all advantages of modern hardware and APIs to maximize performance, usability, reliability, maintainability, extensibility, etc. I very much like the idea of separating view, logic and data: letting a view take all advantages of the platform (whether it runs as a web browser applet on a thin client or as a desktop application on a PC with the latest GPU), letting logic be reused, parallelized and seamlessly evolved, and storing data in a well-structured format in the right place. But while moving from VB6 to Borland C++ Builder was very easy (no books or tutorials needed to turn it on and start working, assuming I already knew C++), and moving from BCB to WinForms was just as seamless, it is not at all obvious to me how to do the same with WPF. So how do I best convert myself from a WinForms developer into a right-way-thinking and right-way-working WPF developer?

    Read the article

  • Heroku and Refinerycms: Application failed to start ~ attachment_fu problem

    - by John Deely
    Ok, so I'm trying to get Refinerycms working with Heroku, and I'm new at all of this. I've set up an Amazon S3 account and added the keys and IDs to the amazon_s3.yml files. When launched on Heroku at gart.heroku.com I get the following error:

      App failed to start
      /disk1/home/slugs/141557_e8490b3_d5eb/mnt/vendor/plugins/attachment_fu/lib/technoweenie/attachment_fu/backends/s3_backend.rb:187:in `read': No such file or directory - /disk1/home/slugs/141557_e8490b3_d5eb/mnt/config/amazon_s3.yml (Errno::ENOENT)
          from /disk1/home/slugs/141557_e8490b3_d5eb/mnt/vendor/plugins/attachment_fu/lib/technoweenie/attachment_fu/backends/s3_backend.rb:187:in `included'
          from /disk1/home/slugs/141557_e8490b3_d5eb/mnt/vendor/plugins/attachment_fu/lib/technoweenie/attachment_fu.rb:123:in `include'
          from /disk1/home/slugs/141557_e8490b3_d5eb/mnt/vendor/plugins/attachment_fu/lib/technoweenie/attachment_fu.rb:123:in `has_attachment'
          from /disk1/home/slugs/141557_e8490b3_d5eb/mnt/app/models/image.rb:13
          from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
          from /usr/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
          from /usr/local/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:158:in `require'
          from /usr/local/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:265:in `require_or_load'
          ... 42 levels...
          from /usr/local/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/builder.rb:29:in `instance_eval'
          from /usr/local/lib/ruby/gems/1.8/gems/rack-1.0.1/lib/rack/builder.rb:29:in `initialize'
          from /home/heroku_rack/heroku.ru:1:in `new'
          from /home/heroku_rack/heroku.ru:1

    The s3_backend.rb line 187 contains:

      @@s3_config = @@s3_config = YAML.load(ERB.new(File.read(@@s3_config_path)).result)[RAILS_ENV].symbolize_keys

    Any help would be great!

    Read the article

  • android checkbox issue

    - by raqz
    I have this check box in an AlertDialog. When I try to check the state of the checkbox, the application force closes. Any idea why?

      LayoutInflater factory = LayoutInflater.from(NewActivity.this);
      final View textDisplayView = factory.inflate(R.layout.nearestlocs, null);

      final AlertDialog.Builder newAlert = new AlertDialog.Builder(NewActivity.this);
      newAlert.setView(textDisplayView);

      final CheckBox checkBoxLab = (CheckBox) findViewById(R.id.checkboxlab);

      newAlert.setPositiveButton("Display on Map", new DialogInterface.OnClickListener() {
          public void onClick(DialogInterface dialog, int whichButton) {
              if (checkBoxLab.isChecked()) {
                  libDisplayFlag = true;
              }

    Error log:

      03-13 08:01:58.273: ERROR/AndroidRuntime(6188): Uncaught handler: thread main exiting due to uncaught exception
      03-13 08:01:58.292: ERROR/AndroidRuntime(6188): java.lang.NullPointerException
      03-13 08:01:58.292: ERROR/AndroidRuntime(6188):     at com.isproj3.NewActivity$3.onClick(NewActivity.java:158)
      03-13 08:01:58.292: ERROR/AndroidRuntime(6188):     at com.android.internal.app.AlertController$ButtonHandler.handleMessage(AlertController.java:158)
      03-13 08:01:58.292: ERROR/AndroidRuntime(6188):     at android.os.Handler.dispatchMessage(Handler.java:99)

    XML:

      <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
          android:orientation="vertical"
          android:layout_width="fill_parent"
          android:layout_height="fill_parent">
          <LinearLayout
              android:orientation="horizontal"
              android:gravity="center"
              android:layout_width="fill_parent"
              android:layout_height="fill_parent"
              android:layout_weight="1">
              <CheckBox android:id="@+id/checkboxlib"
                  android:layout_width="wrap_content"
                  android:layout_height="wrap_content"
                  android:text="Library"
                  android:gravity="left"
                  android:textColor="#FF0000"
                  android:paddingBottom="5px"
                  android:textSize="07pt"
                  android:checked="true" />
              <TextView android:id="@+id/librarytext"
                  android:layout_width="wrap_content"
                  android:layout_height="wrap_content"
                  android:gravity="center"
                  android:paddingBottom="5px"
                  android:textSize="8pt" />
          </LinearLayout>
      </LinearLayout>
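
    The NullPointerException at NewActivity$3.onClick is consistent with findViewById returning null (a likely cause inferred from the posted XML, not a confirmed answer): the activity's own findViewById cannot see views inside the inflated dialog layout, and the layout declares the id checkboxlib while the code looks up checkboxlab. A sketch of the corrected lookup:

      // look the CheckBox up on the inflated dialog view, and use the id that
      // actually exists in nearestlocs.xml (@+id/checkboxlib)
      final CheckBox checkBoxLib = (CheckBox) textDisplayView.findViewById(R.id.checkboxlib);

      newAlert.setPositiveButton("Display on Map", new DialogInterface.OnClickListener() {
          public void onClick(DialogInterface dialog, int whichButton) {
              if (checkBoxLib.isChecked()) {   // no longer null, so no NPE
                  libDisplayFlag = true;
              }
          }
      });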

    Read the article

  • NSTableView binding problem

    - by Niklas Ottosson
    I have only just started with Xcode (v3.2.2) and Interface Builder and have run into a problem. Here is what I have done. I have made a class to be the data source of an NSTableView:

      @interface TimeObjectsDS : NSControl {
          IBOutlet NSTableView * idTableView;
          NSMutableArray * timeObjects;
      }

      @property (assign) NSMutableArray * timeObjects;
      @property (assign) NSTableView * idTableView;

      - (id) init;
      - (void) dealloc;
      - (void) addTimeObject: (TimeObj *)timeObject;
      - (int) count;

      // NSTableViewDataSource Protocol functions
      - (int)numberOfRowsInTableView:(NSTableView *)tableView;
      - (id)tableView:(NSTableView *)tableView objectValueForTableColumn:(NSTableColumn *)tableColumn row: (int)row;

    I have then bound my NSTableView in the view to this data source like so. I have also bound the view's NSTableView to the model's idTableView variable in Interface Builder, as seen above. In the init function I add an element to the mutable array. This is displayed correctly in the NSTableView when I run the application. However, when I add another element to the array (of the same type as in init) and try to call [idTableView reloadData] on the view, nothing happens. In fact, the model's idTableView is null. When printing the variable with NSLog(@"idTableView: %@", idTableView) I get "idTableView: (null)". I'm running out of ideas how to fix this. Any ideas what I could do to fix the binding?

    Read the article

  • Is SQLDataReader slower than using the command line utility sqlcmd?

    - by Andrew
    I was recently advocating to a colleague that we replace some C# code that uses the sqlcmd command line utility with a SqlDataReader. The old code uses:

      System.Diagnostics.ProcessStartInfo procStartInfo =
          new System.Diagnostics.ProcessStartInfo("cmd", "/c " + sqlCmd);

    where sqlCmd is something like:

      "sqlcmd -S " + serverName + " -y 0 -h-1 -Q " + "\"" + "USE [" + database + "]" + ";" + txtQuery.Text + "\"";

    The results are then parsed using regular expressions. I argued that using a SqlDataReader would be more in line with industry practice, easier to debug and maintain, and probably faster. However, the SqlDataReader approach is at least the same speed and quite possibly slower. I believe I'm doing everything correctly with SqlDataReader. The code is:

      using (SqlConnection connection = new SqlConnection())
      {
          try
          {
              SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder(connectionString);
              connection.ConnectionString = builder.ToString();
              SqlCommand command = new SqlCommand(queryString, connection);
              connection.Open();
              SqlDataReader reader = command.ExecuteReader();
              // do stuff w/ reader
              reader.Close();
          }
          catch (Exception ex)
          {
              outputMessage += (ex.Message);
          }
      }

    I've used System.Diagnostics.Stopwatch to time both approaches, and the command line utility (called from C# code) does seem faster (20-40%?). The SqlDataReader has the neat feature that when the same code is called again it's lightning fast, but for this application we don't anticipate that. I have already done some research on this problem. I note that the command line utility sqlcmd uses OLE DB technology to hit the database. Is that faster than ADO.NET? I'm really surprised, especially since the command line utility approach involves starting up a process. I really thought it would be slower. Any thoughts? Thanks, Dave

    Read the article

  • rails test.log is always empty

    - by Raiden
    All the log entries generated when running tests with 'rake' are written to my development.log instead of the test.log file. Do I have to explicitly enable logging for test in environments/test.config? (I'm using the 'turn' gem to format test output - can that cause an issue?) I'm running Rails 2.3.5, Ruby 1.8.7. I have all these gems installed for RAILS_ENV=test. Any help is appreciated.

      - [I] less
      - [I] treetop = 1.4.2
      - [I] polyglot = 0.2.5
      - [I] mutter = 0.4.2
      - [I] mysql
      - [I] authlogic
      - [R] activesupport
      - [I] turn
      - [I] ansi = 1.1.0
      - [I] facets = 2.8.0
      - [I] rspec = 1.2.0
      - [I] rspec-rails = 1.2.0
      - [I] rspec = 1.3.0
      - [R] rack = 1.0.0
      - [I] webrat = 0.4.3
      - [I] nokogiri = 1.2.0
      - [R] rack = 1.0
      - [I] rack-test = 0.5.3
      - [R] rack = 1.0
      - [I] cucumber = 0.2.2
      - [I] term-ansicolor = 1.0.4
      - [I] treetop = 1.4.2
      - [I] polyglot = 0.2.5
      - [I] polyglot = 0.2.9
      - [R] builder = 2.1.2
      - [I] diff-lcs = 1.1.2
      - [R] json_pure = 1.2.0
      - [I] cucumber-rails
      - [I] cucumber = 0.6.2
      - [I] term-ansicolor = 1.0.4
      - [I] treetop = 1.4.2
      - [I] polyglot = 0.2.5
      - [I] polyglot = 0.2.9
      - [R] builder = 2.1.2
      - [I] diff-lcs = 1.1.2
      - [R] json_pure = 1.2.0
      - [I] database_cleaner = 0.2.3
      - [I] launchy
      - [R] rake = 0.8.1
      - [I] configuration = 0.0.5
      - [I] faker
      - [I] populator
      - [R] flog = 2.1.0
      - [R] flay
      - [I] rcov
      - [I] reek
      - [R] ruby_parser ~ 2.0
      - [I] ruby2ruby ~ 1.2
      - [R] sexp_processor ~ 3.0
      - [R] ruby_parser ~ 2.0
      - [R] sexp_processor ~ 3.0
      - [I] roodi
      - [R] ruby_parser
      - [I] gruff
      - [I] rmagick
      - [I] ruby-prof
      - [R] jscruggs-metric_fu = 1.1.5
      - [I] factory_girl
      - [I] notahat-machinist

    Read the article

  • Gems install fine but don't show as installed under rake gems

    - by Josh Pinter
    I'll show you my output here:

      rake gems
      (in /Users/jp/Sites/central/trunk)
      - [F] authlogic
      - [R] activesupport
      - [F] builder
      - [F] formtastic
      - [R] activesupport >= 2.3.0
      - [R] actionpack >= 2.3.0
      - [ ] fastercsv

      I = Installed
      F = Frozen
      R = Framework (loaded before rails starts)

    Making sure fastercsv is installed:

      gem which fastercsv
      /usr/local/lib/ruby/gems/1.8/gems/fastercsv-1.5.3/lib/fastercsv.rb

    After installing through a variety of methods (only one is shown here):

      sudo rake gems:install
      (in /Users/jp/central/trunk)
      gem install fastercsv
      Successfully installed fastercsv-1.5.3
      1 gem installed
      Installing ri documentation for fastercsv-1.5.3...
      Installing RDoc documentation for fastercsv-1.5.3...

    And trying it again:

      rake gems
      (in /Users/jp/Sites/central/trunk)
      - [F] authlogic
      - [R] activesupport
      - [F] builder
      - [F] formtastic
      - [R] activesupport >= 2.3.0
      - [R] actionpack >= 2.3.0
      - [ ] fastercsv

      I = Installed
      F = Frozen
      R = Framework (loaded before rails starts)

    One thing to know is that I tried unpacking the gems, but if it doesn't think a gem is installed it can't unpack it. Another thing is that I really tried to figure this out. There are a bunch of people saying clean up the local gems in your user account, always install with sudo, etc., but I've tried all that. What would you guys do to fix this? Thanks many times over, Josh

    Read the article

  • Serialize XML child and keep namespaces in Java

    - by Guido García
    I have a Document object that models XML like this:

      <RootNode xmlns="http://a.com/a" xmlns:b="http://b.com/b">
        <Child />
      </RootNode>

    Using Java DOM, I need to get the <Child> node and serialize it to XML, but keeping the root node namespaces. This is what I currently have, but it does not serialize the namespaces:

      public static void main(String[] args) throws Exception {
          String xml = "<RootNode xmlns='http://a.com/a' xmlns:b='http://b.com/b'><Child /></RootNode>";

          DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();
          Document doc = builder.parse(new ByteArrayInputStream(xml.getBytes()));
          Node childNode = doc.getFirstChild().getFirstChild();

          // serialize to string
          StringWriter sw = new StringWriter();
          DOMSource domSource = new DOMSource(childNode);
          StreamResult streamResult = new StreamResult(sw);
          TransformerFactory tf = TransformerFactory.newInstance();
          Transformer serializer = tf.newTransformer();
          serializer.transform(domSource, streamResult);
          String serializedXML = sw.toString();

          System.out.println(serializedXML);
      }

    Current output:

      <?xml version="1.0" encoding="UTF-8"?>
      <Child/>

    Expected output:

      <?xml version="1.0" encoding="UTF-8"?>
      <Child xmlns='http://a.com/a' xmlns:b='http://b.com/b' />
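
    A sketch of one approach (with the assumption that the DocumentBuilderFactory is made namespace-aware via factory.setNamespaceAware(true) before parsing): copy the root element's xmlns declarations onto the child before serializing it, so the detached fragment still carries both declarations. This would slot into the main method above, just before the serialization block:

      // extra imports assumed: javax.xml.XMLConstants, org.w3c.dom.Attr,
      // org.w3c.dom.Element, org.w3c.dom.NamedNodeMap
      Element root = doc.getDocumentElement();
      Element child = (Element) root.getFirstChild();
      NamedNodeMap rootAttrs = root.getAttributes();
      for (int i = 0; i < rootAttrs.getLength(); i++) {
          Attr attr = (Attr) rootAttrs.item(i);
          if (XMLConstants.XMLNS_ATTRIBUTE_NS_URI.equals(attr.getNamespaceURI())) {
              // re-declare xmlns and xmlns:b on the child so they survive serialization
              child.setAttributeNS(XMLConstants.XMLNS_ATTRIBUTE_NS_URI, attr.getName(), attr.getValue());
          }
      }
      // then serialize `child` with the DOMSource/Transformer code as before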

    Read the article

  • Customising Flex Datagrid or alternative solutions

    - by Martin
    I'm currently building an application that presents tabular data (fetched from a web service) and have squirted it into a DataGrid, which seemed the most obvious way to present it on screen. I've now come across a few limitations in the DataGrid and wonder how I might move forward. As a relative newcomer to Flex development I'm a little lost. A few things I am wanting to do: the data is logically split into groups, and I would like to have subheadings in the grid whenever I move to a new group; I would also like to highlight individual cells based on their content relative to other values in the row, i.e. highlight the cell with the highest value in the row. Is this possible with the standard DataGrid? I'm actually using the try-before-you-buy version of Flex Builder at the moment, but I have ordered Flex Builder 3 Pro, which is on its way to me. I understand there is an 'Advanced DataGrid' control in this version - perhaps that will support some of what I wish to do? Alternatively, is there another way of building custom tabular data?

    Read the article

  • E_ACCESSDENIED on CoCreateInstance

    - by vucetica
    Here is a code snippet:

      #include "stdafx.h"
      #include <tchar.h>
      #include <windows.h>
      #include <dshow.h>
      #include <ExDisp.h>

      int _tmain(int argc, _TCHAR* argv[])
      {
          CoInitialize(NULL);

          HRESULT hr = S_OK;
          DWORD err = 0;

          // Try to create graph builder
          IGraphBuilder* pGraph = 0;
          hr = CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC_SERVER, IID_IGraphBuilder, (void**)&pGraph);
          err = GetLastError();
          // Here, hr is E_ACCESSDENIED
          // err is 5 (ERROR_ACCESS_DENIED)

          // Try to create capture graph builder (succeeds)
          ICaptureGraphBuilder2* pBuild = 0;
          hr = CoCreateInstance(CLSID_CaptureGraphBuilder2, NULL, CLSCTX_INPROC_SERVER, IID_ICaptureGraphBuilder2, (void **)&pBuild);
          err = GetLastError();
          // Here, hr is S_OK
          // err is 0 (ERROR_SUCCESS)

          // Try to create IWebBrowser (succeeds)
          IWebBrowser2* pBrowser = 0;
          hr = CoCreateInstance(CLSID_InternetExplorer, NULL, CLSCTX_LOCAL_SERVER, IID_IWebBrowser2, (LPVOID *)&pBrowser);
          err = GetLastError();
          // Here, hr is S_OK
          // err is 0 (ERROR_SUCCESS)

          return 0;
      }

    I'm trying to create an IFilterGraph, which fails with E_ACCESSDENIED. On the other hand, creating other DirectShow objects works OK, and the same goes for some other COM objects (I tried IWebBrowser2 as an example). Any idea what the problem can be? Thanks!

    Read the article
