Search Results

Search found 14257 results on 571 pages for 'intel atom developer prog'.


  • MSBuild: Add additional files to compile without altering the project file

    - by Craig Norton
    After looking around I can't find a simple answer to this problem. I am trying to create an MSBuild file to allow me to easily use SpecFlow and NUnit within Visual Studio 2010 express. The file below is not complete this is just a proof of concept and it needs to be made more generic. <Project DefaultTargets="Build" xmlns="http://schemas.microsoft.com/developer/msbuild/2003"> <PropertyGroup> <BuildDependsOn> BuildSolution; SpecFlow; BuildProject; NUnit; </BuildDependsOn> </PropertyGroup> <PropertyGroup> <Solution>C:\Users\Craig\Documents\My Dropbox\Cells\Cells.sln</Solution> <CSProject>C:\Users\Craig\Documents\My Dropbox\Cells\Configuration\Configuration.csproj</CSProject> <DLL>C:\Users\Craig\Documents\My Dropbox\Cells\Configuration\bin\Debug\Configuration.dll</DLL> </PropertyGroup> <Target Name="Build" DependsOnTargets="$(BuildDependsOn)"> </Target> <Target Name="BuildSolution"> <MSBuild Projects="$(Solution)" Properties="Configuration=Debug" /> </Target> <Target Name="SpecFlow"> <Exec Command="SpecFlow generateall $(CSProject)" /> </Target> <Target Name="BuildProject"> <MSBuild Projects="$(CSProject)" Properties="Configuration=Debug" /> </Target> <Target Name="NUnit"> <Exec Command='NUnit /run "$(DLL)"' /> </Target> The SpecFlow Task looks in the .csproj file and creates a SpecFlowFeature1.feature.cs. I need to include this file when building the .csproj so that NUnit can use it. I know I could modify (either directly or on a copy) the .csproj file to include the generated file but I'd prefer to avoid this. My question is: Is there a way to use the MSBuild Task to build the project file and tell it to include an additional file to include in the build? Thank you.
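
    One approach that may work without touching the .csproj, sketched here as an assumption: the stock C# targets import an optional file named by the CustomAfterMicrosoftCSharpTargets property, so extra Compile items can be injected per build. The file name ExtraFeatures.targets and the wildcard below are illustrative only.

        <!-- ExtraFeatures.targets (hypothetical file): pulls the generated SpecFlow code into the compile -->
        <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
          <ItemGroup>
            <Compile Include="**\*.feature.cs" />
          </ItemGroup>
        </Project>

    The BuildProject target would then pass the property through, for example:

        <MSBuild Projects="$(CSProject)"
                 Properties="Configuration=Debug;CustomAfterMicrosoftCSharpTargets=$(MSBuildProjectDirectory)\ExtraFeatures.targets" />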

    Read the article

  • Delphi SOAP Envelope and WCF

    - by Chris
    Hi all, I am working on a system that provides a SOAP interface. One of the systems that will use the interface is coded in Delphi 7. The web service is developed with WCF, basic HTTP binding, SOAP 1.1. If I use SOAP UI (Java), the service works properly, but Delphi seems to do special things here ;) This is how the message looks in SOAP UI: <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ser="http://services.xxx.de/xxx"> <soapenv:Header/> <soapenv:Body> <ser:GetCustomer> <!--Optional:--> <ser:GetCustomerRequest> <!-- this is a data contract --> <ser:Id>?</ser:Id> </ser:GetCustomerRequest> </ser:GetCustomer> </soapenv:Body> </soapenv:Envelope> I am not a Delphi developer, but I wrote a simple test client to see what is going wrong. This is what Delphi sends as a SOAP envelope: <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/"> <SOAP-ENV:Body SOAP-ENV:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/" xmlns:NS2="http://services.xxx.de/xxx"> <NS1:GetCustomer xmlns:NS1="http://services.xxx.de/xxx"> <GetCustomerRequest href="#1"/> </NS1:GetCustomer> <NS2:GetCustomerRequest id="1" xsi:type="NS2:GetCustomerRequest"> <Id xsi:type="xsd:int">253</Id> </NS2:GetCustomerRequest> </SOAP-ENV:Body> </SOAP-ENV:Envelope> WCF throws an error in German... ;) Es wurde das Endelement "Body" aus Namespace "http://schemas.xmlsoap.org/soap/envelope/" erwartet. Gefunden wurde "Element "NS2:GetCustomerRequest" aus Namespace "http://services.xxx.de/xxx"". Zeile 1, Position 599. Roughly translated: the end element "Body" from namespace "http://schemas.xmlsoap.org/soap/envelope/" was expected, but the element "NS2:GetCustomerRequest" from namespace "http://services.xxx.de/xxx" was found instead. Now my question is: can I somehow change the way Delphi creates the envelope? Or is there a way to make WCF accept such message formats? Any help is greatly appreciated!
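
    A commonly suggested direction, assuming the Delphi 7 client can be changed: Delphi defaults to rpc/encoded envelopes (the href="#1" multi-ref style above), while basicHttpBinding expects document/literal, and WCF has no supported binding for rpc/encoded. Registering the imported interface for document-style invocation is the usual fix; IGetCustomerService is a placeholder name, and the exact registry call and option constants should be checked against Delphi 7's InvokeRegistry unit.

        // sketch only - in the unit generated by the WSDL importer
        initialization
          InvRegistry.RegisterInvokeOptions(TypeInfo(IGetCustomerService), ioDocument);

    If the client cannot be changed, the alternative generally discussed is a translating intermediary in front of the WCF service rather than a WCF configuration switch.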

    Read the article

  • Inheritance - Could not initialize proxy - no Session.

    - by Ninu
    Hello, I'm a newbie developer and I really need help. I have just started using NHibernate in .NET, and while learning inheritance mapping I keep getting this error: Initializing[AP.Core.Domain.AccountPayable.APInvoice#API03/04/2010/001]-Could not initialize proxy - no Session. This is my XML mapping: <class xmlns="urn:nhibernate-mapping-2.2" mutable="true" name="AP.Core.Domain.AccountPayable.APAdjustment, AP.Core, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" table="APAdjustment"> <id name="AdjustmentNumber" type="System.String, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"> <column name="AdjustmentNumber" length="17" /> <generator class="assigned" /> </id> <property name="Amount" type="System.Decimal, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"> <column name="Amount" /> </property> <property name="TransactionDate" type="System.DateTime, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"> <column name="TransactionDate" /> </property> <many-to-one class="AP.Core.Domain.AccountPayable.APInvoice, AP.Core, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" lazy="proxy" name="PurchaseInvoice"> <column name="PurchaseInvoice_id" not-null="true" /> </many-to-one> <joined-subclass name="AP.Core.Domain.AccountPayable.APCreditAdjustment, AP.Core, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" lazy="true" table="APCreditAdjustment"> <key> <column name="APAdjustment_id" /> </key> </joined-subclass> </class> </hibernate-mapping> These are the classes in the inheritance hierarchy. Parent class -- public class APAdjustment { #region :FIELD private string adjustmentNumber; private decimal amount; private DateTime transactionDate; private APInvoice purchaseInvoice; Child class -- public class APCreditAdjustment : APAdjustment { public APCreditAdjustment(){ And this is my data access: public IList<APAdjustment> GetByNameAll() { ICriteria criteria = Nhibernatesession.CreateCriteria(typeof(APAdjustment)); return criteria.List<APAdjustment>() ; } My problem: loading the data into a GridView works, but with autogenerate="true" the "PurchaseInvoice" field goes missing, so I bind the columns manually, which also works; however, when I then edit a row in the GridView I get the error above (Could not initialize proxy - no Session). If I change the mapping to lazy="no-proxy" it still works at first, but editing again brings the error back, and if I comment out the many-to-one association everything works, but that is not what I want. Can anybody help me? :( Note: I almost forgot, I use Fluent NHibernate only to generate the *.xml mapping files and the database schema, so I am working against the XML NHibernate mappings, not Fluent NHibernate itself. :)
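
    A direction that may help, sketched with placeholder session wiring: the error means the PurchaseInvoice proxy is being touched after its ISession is gone, so either keep the session open while the grid binds and edits, or fetch the association eagerly so no proxy is left behind. SetFetchMode is standard NHibernate; the sessionFactory variable is assumed to exist.

        // minimal sketch: eager-fetch PurchaseInvoice so no lazy proxy outlives the session
        using (ISession session = sessionFactory.OpenSession())
        {
            IList<APAdjustment> adjustments = session
                .CreateCriteria(typeof(APAdjustment))
                .SetFetchMode("PurchaseInvoice", FetchMode.Eager) // join-fetch the many-to-one
                .List<APAdjustment>();

            // bind the grid here, while the data is fully initialized
        }

    The other common pattern is session-per-request (open the session when the page request starts, close it after rendering), which lets lazy="proxy" stay in the mapping.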

    Read the article

  • 'undefined method init for Mysql:Class'

    - by sscirrus
    I've been having problems with a MySQL Server installation that got messed up after a power outage. Configuration Intel i5 Mac running OS X 10.6.5 Ruby 1.9.2 installed Rails 3.0.1 installed MySQL Server (finally) installed and running I completely reinstalled MySQL, which deleted the local development/test/production databases. So, I have run create database development; in MySQL to get the dev database ready for a migration. Current Goal Run rake db:migrate to get my databases back again. (I cannot currently access my databases or Mysql at all from Rails.) Error Using the gem 'mysql', '2.8.1' and run rake db:migrate, I get the error: rake aborted! undefined method 'init' for Mysql:Class Stack Trace: /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/activerecord-3.0.1/lib/active_record/connection_adapters/mysql_adapter.rb:30:in 'mysql_connection' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/activerecord-3.0.1/lib/active_record/connection_adapters/abstract/connection_pool.rb:230:in 'new_connection' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/activerecord-3.0.1/lib/active_record/connection_adapters/abstract/connection_pool.rb:238:in 'checkout_new_connection' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/activerecord-3.0.1/lib/active_record/connection_adapters/abstract/connection_pool.rb:194:in 'block (2 levels) in checkout' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/activerecord-3.0.1/lib/active_record/connection_adapters/abstract/connection_pool.rb:190:in 'loop' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/activerecord-3.0.1/lib/active_record/connection_adapters/abstract/connection_pool.rb:190:in 'block in checkout' /Users/sscirrus/.rvm/rubies/ruby-1.9.2-p0/lib/ruby/1.9.1/monitor.rb:201:in 'mon_synchronize' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/activerecord-3.0.1/lib/active_record/connection_adapters/abstract/connection_pool.rb:189:in 'checkout' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/activerecord-3.0.1/lib/active_record/connection_adapters/abstract/connection_pool.rb:96:in 'connection' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/activerecord-3.0.1/lib/active_record/connection_adapters/abstract/connection_pool.rb:318:in 'retrieve_connection' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/activerecord-3.0.1/lib/active_record/connection_adapters/abstract/connection_specification.rb:97:in 'retrieve_connection' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/activerecord-3.0.1/lib/active_record/connection_adapters/abstract/connection_specification.rb:89:in 'connection' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/activerecord-3.0.1/lib/active_record/migration.rb:486:in 'initialize' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/activerecord-3.0.1/lib/active_record/migration.rb:433:in 'new' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/activerecord-3.0.1/lib/active_record/migration.rb:433:in 'up' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/activerecord-3.0.1/lib/active_record/migration.rb:415:in 'migrate' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/activerecord-3.0.1/lib/active_record/railties/databases.rake:142:in 'block (2 levels) in <top (required)>' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/lib/rake.rb:636:in 'call' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/lib/rake.rb:636:in 'block in execute' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/lib/rake.rb:631:in 'each' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/lib/rake.rb:631:in 'execute' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/lib/rake.rb:597:in 'block in 
invoke_with_call_chain' /Users/sscirrus/.rvm/rubies/ruby-1.9.2-p0/lib/ruby/1.9.1/monitor.rb:201:in 'mon_synchronize' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/lib/rake.rb:590:in 'invoke_with_call_chain' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/lib/rake.rb:583:in 'invoke' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/lib/rake.rb:2051:in 'invoke_task' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/lib/rake.rb:2029:in 'block (2 levels) in top_level' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/lib/rake.rb:2029:in 'each' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/lib/rake.rb:2029:in 'block in top_level' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/lib/rake.rb:2068:in 'standard_exception_handling' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/lib/rake.rb:2023:in 'top_level' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/lib/rake.rb:2001:in 'block in run' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/lib/rake.rb:2068:in 'standard_exception_handling' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/lib/rake.rb:1998:in 'run' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/gems/rake-0.8.7/bin/rake:31:in '<top (required)>' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/bin/rake:19:in 'load' /Users/sscirrus/.rvm/gems/ruby-1.9.2-p0/bin/rake:19:in '<main>'
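
    One workaround often suggested for this combination (Ruby 1.9.2 / Rails 3.0.x on OS X) is to switch from the old mysql gem to mysql2; the version pin below is an assumption and should be checked against the Rails 3.0.1 documentation.

        # Gemfile
        gem 'mysql2', '~> 0.2.6'   # the 0.2.x series is the one usually paired with Rails 3.0.x

        # config/database.yml
        development:
          adapter: mysql2
          database: development
          username: root
          password:
          host: localhost

    The alternative usually mentioned is rebuilding the original mysql gem against the 64-bit client libraries (for example installing it with ARCHFLAGS="-arch x86_64"), since a half-built native extension produces exactly this "undefined method" symptom.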

    Read the article

  • MSSQL: How to copy a file (pdf, doc, txt...) stored in a varbinary(max) field to a file in a CLR stored procedure

    - by user193655
    I ask this question as a follow-up to this question. A solution that uses bcp and xp_cmdshell, which is not my desired solution, has been posted here: stackoverflow.com/questions/828749/ms-sql-server-2005-write-varbinary-to-file-system (sorry, I cannot post a second hyperlink since my reputation is less than 10). I am new to C# (I am a Delphi developer), but I was able to create a simple CLR stored procedure by following a tutorial. My task is to move a file from the client file system to the server file system (the server is reached over a remote IP, so I cannot use a shared folder as the destination; this is why I need a CLR stored procedure). So I plan to: 1) from Delphi, store the file in a varbinary(max) column of a temporary table, and 2) call the CLR stored procedure to create a file at the desired path using the data contained in the varbinary(max) field. Imagine I need to move C:\MyFile.pdf to Z:\MyFile.pdf, where C: is a hard drive on the local system and Z: is a hard drive on the server. Could someone modify the (non-working) code below to make it work? Assume a table called MyTable with two fields: ID (int) and DATA (varbinary(max)); it makes no difference whether it is a real temporary table or just a table where I temporarily store the data. I would appreciate some exception handling (so that I can manage an "impossible to save file" condition), and I would like to be able to write a new file or overwrite an existing one. [Microsoft.SqlServer.Server.SqlProcedure] public static void VarbinaryToFile(int TableId) { using (SqlConnection connection = new SqlConnection("context connection=true")) { connection.Open(); SqlCommand command = new SqlCommand("select data from mytable where ID = @TableId", connection); command.Parameters.AddWithValue("@TableId", TableId); // This was the sample code I found to run a query //SqlContext.Pipe.ExecuteAndSend(command); // instead I need something like this (THIS IS META_SYNTAX!!!): SqlContext.Pipe.ResultAsStream.SaveToFile('z:\MyFile.pdf'); } } (One subquestion: is this approach correct, or is there a way to pass the data directly to the CLR stored procedure so I don't need a temp table?) If the answer is no, could you describe an approach that avoids the temp table? In other words, is there a better way than the one described above (temp table + stored procedure), such as passing the data stream directly from the client application to the CLR stored procedure? (My files can be any size, including very large.)
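
    A minimal sketch of what the procedure body could look like, assuming the destination path is passed in rather than hard-coded and that the assembly is deployed with EXTERNAL_ACCESS permission (required for file I/O). For very large files, a streaming read (SqlDataReader with CommandBehavior.SequentialAccess) would be preferable to loading the whole byte array.

        using System;
        using System.Data.SqlClient;
        using System.Data.SqlTypes;
        using System.IO;
        using Microsoft.SqlServer.Server;

        [Microsoft.SqlServer.Server.SqlProcedure]
        public static void VarbinaryToFile(int tableId, SqlString destinationPath)
        {
            try
            {
                using (SqlConnection connection = new SqlConnection("context connection=true"))
                {
                    connection.Open();
                    SqlCommand command = new SqlCommand(
                        "select data from mytable where ID = @TableId", connection);
                    command.Parameters.AddWithValue("@TableId", tableId);

                    // read the varbinary(max) column and write it out, overwriting any existing file
                    byte[] bytes = (byte[])command.ExecuteScalar();
                    File.WriteAllBytes(destinationPath.Value, bytes);
                }
            }
            catch (Exception ex)
            {
                // surface an "impossible to save file" condition to the caller
                SqlContext.Pipe.Send("Impossible to save file: " + ex.Message);
                throw;
            }
        }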

    Read the article

  • jQuery load Google Visualization API with AJAX

    - by Curro
    Hello. There is an issue that I cannot solve, I've been looking a lot in the internet but found nothing. I have this JavaScript that is used to do an Ajax request by PHP. When the request is done, it calls a function that uses the Google Visualization API to draw an annotatedtimeline to present the data. The script works great without AJAX, if I do everything inline it works great, but when I try to do it with AJAX it doesn't work!!! The error that I get is in the declaration of the "data" DataTable, in the Google Chrome Developer Tools I get a Uncaught TypeError: Cannot read property 'DataTable' of undefined. When the script gets to the error, everything on the page is cleared, it just shows a blank page. So I don't know how to make it work. Please help Thanks in advance $(document).ready(function(){ // Get TIER1Tickets $("#divTendency").addClass("loading"); $.ajax({ type: "POST", url: "getTIER1Tickets.php", data: "", success: function(html){ // Succesful, load visualization API and send data google.load('visualization', '1', {'packages': ['annotatedtimeline']}); google.setOnLoadCallback(drawData(html)); } }); }); function drawData(response){ $("#divTendency").removeClass("loading"); // Data comes from PHP like: <CSV ticket count for each day>*<CSV dates for ticket counts>*<total number of days counted> // So it has to be split first by * then by , var dataArray = response.split("*"); var dataTickets = dataArray[0]; var dataDates = dataArray[1]; var dataCount = dataArray[2]; // The comma separation now splits the ticket counts and the dates var dataTicketArray = dataTickets.split(","); var dataDatesArray = dataDates.split(","); // Visualization data var data = new google.visualization.DataTable(); data.addColumn('date', 'Date'); data.addColumn('number', 'Tickets'); data.addRows(dataCount); var dateSplit = new Array(); for(var i = 0 ; i < dataCount ; i++){ // Separating the data because must be entered as "new Date(YYYY,M,D)" dateSplit = dataDatesArray[i].split("-"); data.setValue(i, 0, new Date(dateSplit[2],dateSplit[1],dateSplit[0])); data.setValue(i, 1, parseInt(dataTicketArray[i])); } var annotatedtimeline = new google.visualization.AnnotatedTimeLine(document.getElementById('divTendency')); annotatedtimeline.draw(data, {displayAnnotations: true}); }
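
    Two details in the success handler look like the likely cause, and a hedged rework is sketched below: google.setOnLoadCallback(drawData(html)) calls drawData immediately and registers its return value, and google.load invoked after the page has finished loading generally needs its callback option instead of setOnLoadCallback.

        success: function(html){
            // pass a function, and use the dynamic-loading callback form of google.load
            google.load('visualization', '1', {
                'packages': ['annotatedtimeline'],
                'callback': function() { drawData(html); }
            });
        }

    Inside drawData, data.addRows(parseInt(dataCount, 10)) may also be needed, since the values produced by split() are strings rather than numbers.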

    Read the article

  • GoogleAppEngine: possible to disable FileUpload?

    - by James.Elsey
    Hi, When I deploy my application to GoogleAppEngine I keep getting the following error Uncaught exception from servlet java.lang.NoClassDefFoundError: java.io.FileOutputStream is a restricted class. Please see the Google App Engine developer's guide for more details. at com.google.apphosting.runtime.security.shared.stub.java.io.FileOutputStream.<clinit>(FileOutputStream.java) at org.apache.log4j.FileAppender.setFile(FileAppender.java:289) at org.apache.log4j.FileAppender.activateOptions(FileAppender.java:163) at org.apache.log4j.config.PropertySetter.activate(PropertySetter.java:256) at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:132) at org.apache.log4j.config.PropertySetter.setProperties(PropertySetter.java:96) at org.apache.log4j.PropertyConfigurator.parseAppender(PropertyConfigurator.java:654) at org.apache.log4j.PropertyConfigurator.parseCategory(PropertyConfigurator.java:612) at org.apache.log4j.PropertyConfigurator.configureRootCategory(PropertyConfigurator.java:509) at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:415) at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:441) at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:468) at org.apache.log4j.LogManager.<clinit>(LogManager.java:122) at org.slf4j.impl.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:73) at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:88) at org.apache.commons.logging.impl.SLF4JLogFactory.getInstance(SLF4JLogFactory.java:155) at org.apache.commons.logging.impl.SLF4JLogFactory.getInstance(SLF4JLogFactory.java:131) at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:685) at org.springframework.web.context.ContextLoader.<clinit>(ContextLoader.java:146) at org.springframework.web.context.ContextLoaderListener.createContextLoader(ContextLoaderListener.java:53) at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:44) at org.mortbay.jetty.handler.ContextHandler.startContext(ContextHandler.java:548) at org.mortbay.jetty.servlet.Context.startContext(Context.java:136) at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1250) at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:517) at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:467) at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50) at com.google.apphosting.runtime.jetty.AppVersionHandlerMap.createHandler(AppVersionHandlerMap.java:191) at com.google.apphosting.runtime.jetty.AppVersionHandlerMap.getHandler(AppVersionHandlerMap.java:168) at com.google.apphosting.runtime.jetty.JettyServletEngineAdapter.serviceRequest(JettyServletEngineAdapter.java:123) at com.google.apphosting.runtime.JavaRuntime.handleRequest(JavaRuntime.java:243) at com.google.apphosting.base.RuntimePb$EvaluationRuntime$6.handleBlockingRequest(RuntimePb.java:5485) at com.google.apphosting.base.RuntimePb$EvaluationRuntime$6.handleBlockingRequest(RuntimePb.java:5483) at com.google.net.rpc.impl.BlockingApplicationHandler.handleRequest(BlockingApplicationHandler.java:24) at com.google.net.rpc.impl.RpcUtil.runRpcInApplication(RpcUtil.java:398) at com.google.net.rpc.impl.Server$2.run(Server.java:852) at com.google.tracing.LocalTraceSpanRunnable.run(LocalTraceSpanRunnable.java:56) at com.google.tracing.LocalTraceSpanBuilder.internalContinueSpan(LocalTraceSpanBuilder.java:536) at 
com.google.net.rpc.impl.Server.startRpc(Server.java:807) at com.google.net.rpc.impl.Server.processRequest(Server.java:369) at com.google.net.rpc.impl.ServerConnection.messageReceived(ServerConnection.java:442) at com.google.net.rpc.impl.RpcConnection.parseMessages(RpcConnection.java:319) at com.google.net.rpc.impl.RpcConnection.dataReceived(RpcConnection.java:290) at com.google.net.async.Connection.handleReadEvent(Connection.java:474) at com.google.net.async.EventDispatcher.processNetworkEvents(EventDispatcher.java:831) at com.google.net.async.EventDispatcher.internalLoop(EventDispatcher.java:207) at com.google.net.async.EventDispatcher.loop(EventDispatcher.java:103) at com.google.net.rpc.RpcService.runUntilServerShutdown(RpcService.java:251) at com.google.apphosting.runtime.JavaRuntime$RpcRunnable.run(JavaRuntime.java:404) at java.lang.Thread.run(Unknown Source) I've checked the documentation and it suggests to create a FileUpload class, since I won't be uploading files/documents etc from my application, is this necessary? Is there a way to disable this functionality, or at least bypass this error? I have already provided implementation for a MultipartWrapperFactory.Class as that has been suggested from searching for this error Thanks
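
    The stack trace shows log4j trying to open a FileAppender during context startup, and App Engine's sandbox forbids writing to the file system, so a hedged first step is to configure logging to the console only; the appender name below is illustrative.

        # log4j.properties - assumes the default properties-file configuration is being used
        log4j.rootLogger=INFO, CONSOLE
        log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
        log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
        log4j.appender.CONSOLE.layout.ConversionPattern=%d %-5p [%c] %m%n

    With the file appender removed, nothing should touch java.io.FileOutputStream at startup, so no FileUpload class or other workaround should be required.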

    Read the article

  • Voice Recognition Connection problem

    - by user244190
    I,m trying to work through and test a Voice Recognition example based on the VoiceRecognition.java example at http://developer.android.com/resources/samples/ApiDemos/src/com/example/android/apis/app/VoiceRecognition.html but when click on the button to create the activity, I get a dialog that says Connection problem. My Manifest file is using the Internet Permission, and I understand it passes the to the Google Servers. Do I need to do anything else to use this. Code below UPDATE: Ok, I was able to replace my emulator image with one from HTC that appears to come with Google Voice Search, however now when I run from the emulator, i'm getting an Audio Problem message with Speak Again or Cancel buttons. It appears to make it back to the onActivityResult(), but the resultCode is 0. Here is the LogCat output: 03-07 20:21:25.396: INFO/ActivityManager(578): Starting activity: Intent { action=android.speech.action.RECOGNIZE_SPEECH comp={com.google.android.voicesearch/com.google.android.voicesearch.RecognitionActivity} (has extras) } 03-07 20:21:25.406: WARN/ActivityManager(578): Activity is launching as a new task, so cancelling activity result. 03-07 20:21:25.968: WARN/ActivityManager(578): Activity pause timeout for HistoryRecord{434f7850 {com.ikonicsoft.mileagegenie/com.ikonicsoft.mileagegenie.MileageGenie}} 03-07 20:21:26.206: WARN/AudioHardwareInterface(554): getInputBufferSize bad sampling rate: 16000 03-07 20:21:26.256: ERROR/AudioRecord(819): Recording parameters are not supported: sampleRate 16000, channelCount 1, format 1 03-07 20:21:26.696: INFO/ActivityManager(578): Displayed activity com.google.android.voicesearch/.RecognitionActivity: 1295 ms 03-07 20:21:29.890: DEBUG/dalvikvm(806): threadid=3: still suspended after undo (s=1 d=1) 03-07 20:21:29.896: INFO/dalvikvm(806): Uncaught exception thrown by finalizer (will be discarded): 03-07 20:21:29.896: INFO/dalvikvm(806): Ljava/lang/IllegalStateException;: Finalizing cursor android.database.sqlite.SQLiteCursor@435d3c50 on ml_trackdata that has not been deactivated or closed 03-07 20:21:29.896: INFO/dalvikvm(806): at android.database.sqlite.SQLiteCursor.finalize(SQLiteCursor.java:596) 03-07 20:21:29.896: INFO/dalvikvm(806): at dalvik.system.NativeStart.run(Native Method) 03-07 20:21:31.468: DEBUG/dalvikvm(806): threadid=5: still suspended after undo (s=1 d=1) 03-07 20:21:32.436: WARN/IInputConnectionWrapper(806): showStatusIcon on inactive InputConnection I,m still not sure why I,m getting the Connect problem on the Droid. I can use Voice Search ok. I also tried clearing the cache, and data as described in some posts, butstill not working?? /** * Fire an intent to start the speech recognition activity. */ private void startVoiceRecognitionActivity() { Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH); intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM); intent.putExtra(RecognizerIntent.EXTRA_PROMPT, "Speech recognition demo"); startActivityForResult(intent, VOICE_RECOGNITION_REQUEST_CODE); } /** * Handle the results from the recognition activity. 
*/ @Override protected void onActivityResult(int requestCode, int resultCode, Intent data) { if (requestCode == VOICE_RECOGNITION_REQUEST_CODE && resultCode == RESULT_OK) { // Fill the list view with the strings the recognizer thought it could have heard ArrayList<String> matches = data.getStringArrayListExtra( RecognizerIntent.EXTRA_RESULTS); mList.setAdapter(new ArrayAdapter<String>(this, android.R.layout.simple_list_item_1, matches)); } super.onActivityResult(requestCode, resultCode, data); }
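
    The "Connection problem" dialog usually just means no installed activity can service RecognizerIntent on that device or emulator image; the original ApiDemos sample guards against this before enabling the button, roughly as below (speakButton stands in for whatever button starts recognition).

        // check that some activity (e.g. Google Voice Search) can handle the recognition intent
        PackageManager pm = getPackageManager();
        List<ResolveInfo> activities = pm.queryIntentActivities(
                new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH), 0);
        if (activities.size() == 0) {
            speakButton.setEnabled(false);
            speakButton.setText("Recognizer not present");
        }

    The "Recording parameters are not supported: sampleRate 16000" line in the log suggests the emulator's audio input cannot feed the recognizer, so testing on a real device is generally recommended; a resultCode of 0 (RESULT_CANCELED) is the expected outcome when the recognition dialog is dismissed after such an error.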

    Read the article

  • PHP pages working slowly from time to time

    - by user1038179
    I have a VPS with a 2GB RAM limit and 8 CPU cores, and five sites on it (one is just for testing, with no visitors except me). All five sites are image galleries, similar to wallpaper sites. Last week I noticed a problem on one site (the main domain, used for the name servers and also the one with the most traffic). That site has two image galleries: an old static HTML gallery made a few years ago, and the main one powered by the ZENPhoto CMS. I also run the same gallery CMS on two other sites on the same VPS (one live, one just for testing), and the remaining two sites use a different PHP-driven gallery. The problem is that after some time (anywhere from 10 minutes to a few hours after an Apache restart), pages on the main site become very slow to load, or I get a 503 Service Temporarily Unavailable error, so the pages become unavailable. But only the part running the new CMS gallery is affected; the old static HTML part of the site keeps working fast and fine. The other two sites with the same CMS gallery, and the two with the different PHP-driven gallery, also keep working fast at the same time. I thought it must be something with the CMS on the main site, since the other sites work fine, but then I tried opening the contact and guestbook pages on the main site, which are PHP pages outside that CMS, and they do not load either, while the same contact PHP scripts work on the other sites at the same time. So when the site starts to hang, ONLY PHP-generated content stops working; as I said, the static pages keep working, and ONLY that one main site has the problem. Then I have to restart Apache; after the restart everything works fast again for a while, and then the PHP pages on the main site start slowing down again. If I do not restart Apache the slowness lasts for a while (minutes to hours, depending on traffic), during which PHP-driven content loads very slowly or is unavailable on that site; after some time everything starts working fast again for a while, and then it repeats. In high-traffic hours PHP content loads slowly or is unavailable; in low-traffic hours it is sometimes fast and sometimes a little slower than usual. Once again: only on the main site, and only the PHP-driven pages; static pages stay fast even in the busiest hours, and the other sites, even those with the same CMS, stay fast too. The site currently gets about 7000 unique visitors per day, but it worked fine even with 11,500 visitors per day, and the whole VPS gets about 17,000 visitors in total across all sites (about 3 pages per unique visitor).
When site start to slow down sometimes in apache status I can see something like this: mod_fcgid status: Total FastCGI processes: 37 Process: php5 (/usr/local/cpanel/cgi-sys/php5)Pid Active Idle Accesses State 11300 39 28 7 Working 11274 47 28 7 Working 11296 40 29 3 Working 11283 45 30 3 Working 11304 36 31 1 Working 11282 46 32 3 Working 11292 42 33 1 Working 11289 44 34 1 Working 11305 35 35 0 Working 11273 48 36 2 Working 11280 47 39 1 Working 10125 133 40 12 Exiting(communication error) 11294 41 41 1 Exiting(communication error) 11277 47 42 2 Exiting(communication error) 11291 43 43 1 Exiting(communication error) 10187 108 43 10 Exiting(communication error) 10209 95 44 7 Exiting(communication error) 10171 113 44 5 Exiting(communication error) 11275 47 47 1 Exiting(communication error) 10144 125 48 8 Exiting(communication error) 10086 149 48 20 Exiting(communication error) 10212 94 49 5 Exiting(communication error) 10158 118 49 5 Exiting(communication error) 10169 114 50 4 Exiting(communication error) 10105 141 50 16 Exiting(communication error) 10094 146 50 15 Exiting(communication error) 10115 139 51 17 Exiting(communication error) 10213 93 51 9 Exiting(communication error) 10197 103 51 7 Exiting(communication error) Process: php5 (/usr/local/cpanel/cgi-sys/php5)Pid Active Idle Accesses State 7983 1079 2 149 Ready 7979 1079 11 151 Ready Process: php5 (/usr/local/cpanel/cgi-sys/php5)Pid Active Idle Accesses State 7990 1066 0 57 Ready 8001 1031 64 35 Ready 7999 1032 94 29 Ready 8000 1031 91 36 Ready 8002 1029 34 52 Ready Process: php5 (/usr/local/cpanel/cgi-sys/php5)Pid Active Idle Accesses State 7991 1064 29 115 Ready When it is working nicly there is no lines with "Exiting(communication error)" Active and Idle are time active and time since last request, in seconds. Here are system info. Sysem info: Total processors: 8 Processor #1 Vendor GenuineIntel Name Intel(R) Xeon(R) CPU E5440 @ 2.83GHz Speed 88.320 MHz Cache 6144 KB All other seven are the same. System Information Linux vps.nnnnnnnnnnnnnnnnn.nnn 2.6.18-028stab099.3 #1 SMP Wed Mar 7 15:20:22 MSK 2012 x86_64 x86_64 x86_64 GNU/Linux Current Memory Usage total used free shared buffers cached Mem: 8388608 882164 7506444 0 0 0 -/+ buffers/cache: 882164 7506444 Swap: 0 0 0 Total: 8388608 882164 7506444 Current Disk Usage Filesystem Size Used Avail Use% Mounted on /dev/vzfs 100G 34G 67G 34% / none System Details: Running on: Apache/2.2.22 System info: (Unix) mod_ssl/2.2.22 OpenSSL/0.9.8e-fips-rhel5 DAV/2 mod_auth_passthrough/2.1 mod_bwlimited/1.4 FrontPage/5.0.2.2635 mod_fcgid/2.3.6 Powered by: PHP/5.3.10 Current Configuration Default PHP Version (.php files) 5 PHP 5 Handler fcgi PHP 4 Handler suphp Apache suEXEC on Apache Ruid2 off PHP 4 Handler suphp Apache suEXEC on Apache Configuration The following settings have been saved: fileetag: All keepalive: On keepalivetimeout: 3 maxclients: 150 maxkeepaliverequests: 10 maxrequestsperchild: 10000 maxspareservers: 10 minspareservers: 5 root_options: ExecCGI, FollowSymLinks, Includes, IncludesNOEXEC, Indexes, MultiViews, SymLinksIfOwnerMatch serverlimit: 256 serversignature: Off servertokens: Full sslciphersuite: ALL:!ADH:RC4+RSA:+HIGH:+MEDIUM:-LOW:-SSLv2:-EXP:!kEDH startservers: 5 timeout: 30 I hope, I explained my problem nicely. Any help would be nice.
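
    The "Exiting(communication error)" rows are FastCGI workers being torn down mid-request, which with mod_fcgid 2.3.6 is frequently a process-lifetime/timeout tuning issue on one busy vhost; the directives below use the Fcgid* spelling introduced in mod_fcgid 2.3 and the values are only illustrative starting points.

        # httpd.conf / fcgid.conf - illustrative values, to be adjusted to the site's load
        FcgidIOTimeout              120
        FcgidBusyTimeout            300
        FcgidIdleTimeout            120
        FcgidMaxRequestsPerProcess  500
        FcgidMaxProcessesPerClass   20

    It is also commonly advised to keep FcgidMaxRequestsPerProcess at or below PHP's own PHP_FCGI_MAX_REQUESTS so that PHP does not exit while mod_fcgid still expects the process to accept requests; whether that applies here depends on the cPanel FCGI wrapper in use.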

    Read the article

  • TeamCity MSBuild 4.0 Help

    - by ChrisKolenko
    Hi guys, I need some help with my MSBuild file i created a while ago. All i want to do is build the solution, publish a project inside the solution and than copy the files to a directory At the moment when i set Teamcity to .net 4 msbuild, msbuild 4.0 tools and for 86 i get an error stating error MSB4067: The element beneath element is unrecognized. <?xml version="1.0" encoding="utf-8" ?> <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" DefaultTargets="Run"> <Import Project="$(MSBuildExtensionsPath)\MSBuildCommunityTasks\MSBuild.Community.Tasks.Targets"/> <Import Project="$(MSBuildExtensionsPath)\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.targets"/> <PropertyGroup> <OutputFolder>$(OutputDir)</OutputFolder> <DeploymentFolder>$(DeploymentDir)</DeploymentFolder> <CompilationDebug /> <CustomErrorsMode /> <ContentEditorsEmail /> <AdministratorsEmail /> </PropertyGroup> <Target Name="Run"> <CallTarget Targets="Compile" /> <CallTarget Targets="Publish" /> <CallTarget Targets="Deploy" /> </Target> <Target Name="Clean"> <ItemGroup> <BinFiles Include="bin\*.*" /> </ItemGroup> <Delete Files="@(BinFiles)" /> </Target> <Target Name="Compile" DependsOnTargets="Clean"> <MSBuild Projects="WebCanvas.ZakisCatering.Website.sln" Properties="Configuration=Release"/> </Target> <Target Name="Publish"> <RemoveDir Directories="$(OutputFolder)" ContinueOnError="true" /> <MSBuild Projects="WebCanvas.ZakisCatering.Website\WebCanvas.ZakisCatering.Website.csproj" Targets="ResolveReferences;_CopyWebApplication" Properties="Configuration=Release;WebProjectOutputDir=$(OutputFolder);OutDir=$(WebProjectOutputDir)\" /> </Target> <Target Name="Deploy"> <RemoveDir Directories="$(DeploymentFolder)" ContinueOnError="true" /> <ItemGroup> <DeploymentFiles Include="$(OutputFolder)\**\*.*" /> </ItemGroup> <Copy SourceFiles="@(DeploymentFiles)" DestinationFolder="$(DeploymentFolder)\%(RecursiveDir)" /> </Target> </Project>
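
    The element name has been stripped out of the quoted MSB4067 message, but in scripts like this it is very often <ItemGroup> beneath <Target> (used in the Clean and Deploy targets), which only the 3.5/4.0 toolsets accept; a hedged first step is to pin the tools version on the project element and confirm the TeamCity runner really resolves the 4.0 toolset rather than 2.0.

        <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003"
                 ToolsVersion="4.0"
                 DefaultTargets="Run">

    If the error persists, the element named in the full message (visible in the TeamCity build log) identifies which import or task the selected toolset does not recognise.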

    Read the article

  • AVAudioRecorder Memory Leak

    - by Eric Ranschau
    I'm hoping someone out there can back me up on this... I've been working on an application that allows the end user to record a small audio file for later playback and am in the process of testing for memory leaks. I continue to very consistently run into a memory leak when the AVAudioRecorder's "stop" method attempts to close the audio file to which it's been recording. This really seems to be a leak in the framework itself, but if I'm being a bonehead you can tell me. To illustrate, I've worked up a stripped down test app that does nothing but start/stop a recording w/ the press of a button. For the sake of simplicty, everything happens in app. delegate as follows: @synthesize audioRecorder, button; @synthesize window; - (BOOL) application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions { // create compplete path to database NSString *tempPath = NSTemporaryDirectory(); NSString *audioFilePath = [tempPath stringByAppendingString:@"/customStatement.caf"]; // define audio file url NSURL *audioFileURL = [[NSURL alloc] initFileURLWithPath:audioFilePath]; // define audio recorder settings NSDictionary *settings = [[NSDictionary alloc] initWithObjectsAndKeys: [NSNumber numberWithInt:kAudioFormatAppleIMA4], AVFormatIDKey, [NSNumber numberWithInt:1], AVNumberOfChannelsKey, [NSNumber numberWithInt:AVAudioQualityLow], AVSampleRateConverterAudioQualityKey, [NSNumber numberWithFloat:44100], AVSampleRateKey, [NSNumber numberWithInt:8], AVLinearPCMBitDepthKey, nil ]; // define audio recorder audioRecorder = [[AVAudioRecorder alloc] initWithURL:audioFileURL settings:settings error:nil]; [audioRecorder setDelegate:self]; [audioRecorder setMeteringEnabled:YES]; [audioRecorder prepareToRecord]; // define record button button = [UIButton buttonWithType:UIButtonTypeRoundedRect]; [button addTarget:self action:@selector(handleTouch_recordButton) forControlEvents:UIControlEventTouchUpInside]; [button setFrame:CGRectMake(110.0, 217.5, 100.0, 45.0)]; [button setTitle:@"Record" forState:UIControlStateNormal]; [button setTitle:@"Stop" forState:UIControlStateSelected]; // configure the main view controller UIViewController *viewController = [[UIViewController alloc] init]; [viewController.view addSubview:button]; // add controllers to window [window addSubview:viewController.view]; [window makeKeyAndVisible]; // release [audioFileURL release]; [settings release]; [viewController release]; return YES; } - (void) handleTouch_recordButton { if ( ![button isSelected] ) { [button setSelected:YES]; [audioRecorder record]; } else { [button setSelected:NO]; [audioRecorder stop]; } } - (void) dealloc { [audioRecorder release]; [button release]; [window release]; [super dealloc]; } The stack trace from Instruments that shows pretty clearly that the "closeFile" method in the AVFoundation code is leaking...something. You can see a screen shot of the Instruments session here: Developer Forums: AVAudioRecorder Memory Leak Any thoughts would be greatly appreciated!

    Read the article

  • .NET Code Generation | Unable to create a T4 template in Visual Studio 2008

    - by cedar715
    I've the Visual Studio 2008 installed on my machine(licensed one). When I try to add a new .tt(say bar.tt) file to the project, the following code is generated: I've seen in a screencast, where in an empty .tt file should be opened and the developer enters the T4 code. Even if I remove the code and enter T4 code, am getting build errors. using System; using System.Collections.Generic; using System.ComponentModel; using System.Drawing; using System.Linq; using System.Reflection; using System.Windows.Forms; namespace Foobar { partial class bar : Form { public bar() { InitializeComponent(); this.Text = String.Format("About {0} {0}", AssemblyTitle); this.labelProductName.Text = AssemblyProduct; this.labelVersion.Text = String.Format("Version {0} {0}", AssemblyVersion); this.labelCopyright.Text = AssemblyCopyright; this.labelCompanyName.Text = AssemblyCompany; this.textBoxDescription.Text = AssemblyDescription; } #region Assembly Attribute Accessors public string AssemblyTitle { get { object[] attributes = Assembly.GetExecutingAssembly().GetCustomAttributes(typeof(AssemblyTitleAttribute), false); if(attributes.Length > 0) { AssemblyTitleAttribute titleAttribute = (AssemblyTitleAttribute)attributes[0]; if(titleAttribute.Title != "") { return titleAttribute.Title; } } return System.IO.Path.GetFileNameWithoutExtension(Assembly.GetExecutingAssembly().CodeBase); } } public string AssemblyVersion { get { return Assembly.GetExecutingAssembly().GetName().Version.ToString(); } } public string AssemblyDescription { get { object[] attributes = Assembly.GetExecutingAssembly().GetCustomAttributes(typeof(AssemblyDescriptionAttribute), false); if (attributes.Length == 0) { return ""; } return ((AssemblyDescriptionAttribute)attributes[0]).Description; } } public string AssemblyProduct { get { object[] attributes = Assembly.GetExecutingAssembly().GetCustomAttributes(typeof(AssemblyProductAttribute), false); if (attributes.Length == 0) { return ""; } return ((AssemblyProductAttribute)attributes[0]).Product; } } public string AssemblyCopyright { get { object[] attributes = Assembly.GetExecutingAssembly().GetCustomAttributes(typeof(AssemblyCopyrightAttribute), false); if (attributes.Length == 0) { return ""; } return ((AssemblyCopyrightAttribute)attributes[0]).Copyright; } } public string AssemblyCompany { get { object[] attributes = Assembly.GetExecutingAssembly().GetCustomAttributes(typeof(AssemblyCompanyAttribute), false); if (attributes.Length == 0) { return ""; } return ((AssemblyCompanyAttribute)attributes[0]).Company; } } #endregion } } EDIT: I didn't download any T4 software separately as I got to know that it already ships with Visual Studio 2008.
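
    For reference, a text template added through Add New Item > Text Template normally starts out almost empty; a minimal .tt that should transform cleanly looks roughly like this (the output extension is arbitrary), with the file's Custom Tool property set to TextTemplatingFileGenerator.

        <#@ template debug="false" hostspecific="false" language="C#" #>
        <#@ output extension=".txt" #>
        Generated on: <#= System.DateTime.Now #>

    The code shown above is the stock Windows Forms "About Box" item template rather than anything T4-related, which suggests the wrong item template (or a custom template registered under a similar name) was picked when the .tt file was added.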

    Read the article

  • C# Monte Carlo Incremental Risk Calculation optimisation, random numbers, parallel execution

    - by m3ntat
    My current task is to optimise a Monte Carlo Simulation that calculates Capital Adequacy figures by region for a set of Obligors. It is running about 10 x too slow for where it will need to be in production and number or daily runs required. Additionally the granularity of the result figures will need to be improved down to desk possibly book level at some stage, the code I've been given is basically a prototype which is used by business units in a semi production capacity. The application is currently single threaded so I'll need to make it multi-threaded, may look at System.Threading.ThreadPool or the Microsoft Parallel Extensions library but I'm constrained to .NET 2 on the server at this bank so I may have to consider this guy's port, http://www.codeproject.com/KB/cs/aforge_parallel.aspx. I am trying my best to get them to upgrade to .NET 3.5 SP1 but it's a major exercise in an organisation of this size and might not be possible in my contract time frames. I've profiled the application using the trial of dotTrace (http://www.jetbrains.com/profiler). What other good profilers exist? Free ones? A lot of the execution time is spent generating uniform random numbers and then translating this to a normally distributed random number. They are using a C# Mersenne twister implementation. I am not sure where they got it or if it's the best way to go about this (or best implementation) to generate the uniform random numbers. Then this is translated to a normally distributed version for use in the calculation (I haven't delved into the translation code yet). Also what is the experience using the following? http://quantlib.org http://www.qlnet.org (C# port of quantlib) or http://www.boost.org Any alternatives you know of? I'm a C# developer so would prefer C#, but a wrapper to C++ shouldn't be a problem, should it? Maybe even faster leveraging the C++ implementations. I am thinking some of these libraries will have the fastest method to directly generate normally distributed random numbers, without the translation step. Also they may have some other functions that will be helpful in the subsequent calculations. Also the computer this is on is a quad core Opteron 275, 8 GB memory but Windows Server 2003 Enterprise 32 bit. Should I advise them to upgrade to a 64 bit OS? Any links to articles supporting this decision would really be appreciated. Anyway, any advice and help you may have is really appreciated.
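
    On the random-number point, a hedged illustration: normal deviates can be produced directly from the uniform source with the polar (Marsaglia) variant of Box-Muller, which removes the separate translation step; the Random parameter below merely stands in for the existing Mersenne twister, and each worker thread would own its own seeded instance so no locking is needed.

        // polar Box-Muller: turns pairs of uniforms into pairs of standard normal deviates
        public sealed class NormalGenerator
        {
            private readonly Random uniform;   // stand-in for the Mersenne twister in use
            private double? spare;             // second deviate cached between calls

            public NormalGenerator(Random uniform) { this.uniform = uniform; }

            public double Next()
            {
                if (spare.HasValue) { double s = spare.Value; spare = null; return s; }
                double u, v, q;
                do
                {
                    u = 2.0 * uniform.NextDouble() - 1.0;
                    v = 2.0 * uniform.NextDouble() - 1.0;
                    q = u * u + v * v;
                } while (q == 0.0 || q >= 1.0);
                double factor = Math.Sqrt(-2.0 * Math.Log(q) / q);
                spare = v * factor;            // keep the second deviate for the next call
                return u * factor;
            }
        }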

    Read the article

  • Vs2010 MvcBuildViews Not firing

    - by Maslow
    This project in Vs2008 targeting .net 3.5 used to compile views. Vs2010 Targeting .net 4.0 the following view code is not picked up as an error, and I have not found anyway to listen to the mvcBuildview trace/debug output: <%{ %> A completely unmatched code block declaration is not being picked up, neither was a partial view inheriting from a non existent namespace/class. <PropertyGroup Condition=" '$(Configuration)|$(Platform)' == 'DebugWithBuildViews|AnyCPU' "> <!--<BaseIntermediateOutputPath>bin/intermediate</BaseIntermediateOutputPath>--> <!--<MvcBuildViews Condition=" '$(Configuration)' == 'DebugWithBuildViews' ">true</MvcBuildViews>--> <EnableUpdateable>false</EnableUpdateable> <MvcBuildViews>true</MvcBuildViews> <DebugSymbols>true</DebugSymbols> <OutputPath>bin</OutputPath> <DefineConstants>DEBUG;TRACE</DefineConstants> <DebugType>full</DebugType> <PlatformTarget>AnyCPU</PlatformTarget> <CodeAnalysisUseTypeNameInSuppression>true</CodeAnalysisUseTypeNameInSuppression> <CodeAnalysisModuleSuppressionsFile>GlobalSuppressions.cs</CodeAnalysisModuleSuppressionsFile> <ErrorReport>prompt</ErrorReport> <CodeAnalysisRuleSet>AllRules.ruleset</CodeAnalysisRuleSet> <RunCodeAnalysis>true</RunCodeAnalysis> </PropertyGroup> My BeforeBuild: <Target Name="BeforeBuild"> <WriteLinesToFile File="$(OutputPath)\env.config" Lines="$(Configuration)" Overwrite="true"> </WriteLinesToFile> My AfterBuild: <Target Name="AfterBuild" Condition="'$(MvcBuildViews)'=='true'"> <!--<BaseIntermediateOutputPath>[SomeKnownLocationIHaveAccessTo]</BaseIntermediateOutputPath>--> <Message Importance="high" Text="Precompiling views" /> <!--<AspNetCompiler VirtualPath="temp" PhysicalPath="$(ProjectDir)..\$(ProjectName)" />--> <!--<AspNetCompiler VirtualPath="temp" />--> <!--PhysicalPath="$(ProjectDir)\..\$(ProjectName)"--> I know the MvcBuildViews property is true because the Precompiling views message comes through. The compile is a success but it does not catch the view compilation errors. I have Vs2010 ultimate, vs 2008 developer+database edition on this machine. So either it compiles ignoring the errors with some combinations of the fixes I've tried, or it errors with Error 410 It is an error to use a section registered as allowDefinition='MachineToApplication' beyond application level. This error can be caused by a virtual directory not being configured as an application in IIS. web.config 100 The commented out sections are things I have tried Previously I have tried the fixes from these posts: Compile Views in Asp.net Mvc AllowDefinitionMachinetoApplicationError MvcBuildviews Issue Turning on MVC Build Views in 2010 TFS Johnny Coder
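
    One thing visible in the snippet itself: both AspNetCompiler invocations inside the AfterBuild target are commented out, so only the "Precompiling views" message runs and no view compilation (and therefore no view error) can occur. The stock VS2010 MVC template wires it up roughly as below; $(WebProjectOutputDir) is the property the template uses, so it may need adjusting here.

        <Target Name="MvcBuildViews" AfterTargets="AfterBuild" Condition="'$(MvcBuildViews)'=='true'">
          <Message Importance="high" Text="Precompiling views" />
          <AspNetCompiler VirtualPath="temp" PhysicalPath="$(WebProjectOutputDir)" />
        </Target>

    The allowDefinition='MachineToApplication' error is typically caused by stale web.config copies left under obj\ by an earlier build or publish, so clearing the obj folder (or redirecting BaseIntermediateOutputPath, as the commented-out line attempts) before precompiling is the usual companion fix.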

    Read the article

  • Using MongoDB with Ruby On Rails and the Mongomapper plugin

    - by Micke
    Hello, I am currently trying to learn Ruby on Rails (I am a long-time PHP developer), so I am building my own community-style site. I have come pretty far and have built the user models and such using MySQL. Then I heard of MongoDB, looked into it a bit more, and quite like it, so I have set it up and am using MongoMapper for the connection between Rails and MongoDB; I currently use it for the News page on the site. I also have a profile page for every user which includes their own guestbook, so other users can visit their profile and write a short message to them. My plan now is to switch the User models from MySQL to MongoDB. Here is how the models are currently set up. The user model: class User < ActiveRecord::Base has_one :guestbook, :class_name => "User::Guestbook" The Guestbook model: class User::Guestbook < ActiveRecord::Base belongs_to :user has_many :posts, :class_name => "User::Guestbook::Posts", :foreign_key => "user_id" And then the Guestbook posts model: class User::Guestbook::Posts < ActiveRecord::Base belongs_to :guestbook, :class_name => "User::Guestbook" I have divided it like this for my own convenience, but now that I am trying to migrate to MongoDB I do not know how to structure the collections. I would like to have one document per user and, inside it, a "column" holding all the guestbook entries, since MongoDB supports an EmbeddedDocument; that way I would have a single document per user instead of the three tables I currently need just to provide a guestbook. So my thought is to have it like this: The user model: class User include MongoMapper::Document one :guestbook, :class_name => "User::Guestbook" The Guestbook model: class User::Guestbook include MongoMapper::EmbeddedDocument belongs_to :user many :posts, :class_name => "User::Guestbook::Posts", :foreign_key => "user_id" And then the Guestbook posts model: class User::Guestbook::Posts include MongoMapper::EmbeddedDocument belongs_to :guestbook, :class_name => "User::Guestbook" But I can see one problem: when I just want to fetch user information such as a nickname and a birthdate, it will also have to fetch all of the user's guestbook posts. And if each user has, say, a thousand posts in the guestbook, that becomes a lot for the system to fetch. Or am I wrong? Do you think I should do it another way? Thanks in advance, and sorry if I am hard to understand; English is not my first language :)
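
    On that last concern, a hedged sketch: embedded documents are always loaded with their parent, so guestbook posts that can run into the thousands are usually better kept in their own collection that references the user; profile reads then stay small and posts can be fetched separately with :limit/:skip. The key names below are assumptions.

        # guestbook posts in their own collection, referencing the user
        class User
          include MongoMapper::Document
          key :nickname,  String
          key :birthdate, Date
          many :guestbook_posts, :class_name => "GuestbookPost", :foreign_key => :user_id
        end

        class GuestbookPost
          include MongoMapper::Document
          key :user_id,   ObjectId
          key :author_id, ObjectId
          key :body,      String
          timestamps!
          belongs_to :user
        end

    Embedding remains a good fit for small, bounded lists; it is the unbounded growth of a guestbook that usually pushes the posts into a separate collection.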

    Read the article

  • jBASE 4.1 Database Noobster Questions

    - by Steve Johnson
    I am a software developer with experience in C# and C++/.NET, along with SQL Server 2005/08, Oracle, and MySQL, but somehow I cannot get jBASE to work on a Windows XP SP3 machine. My goal is to set up user accounts, create a database on a jBASE installation, authenticate, and back up/restore a few tables via a C++ program; I do not want to rely on the built-in backup/restore tools of jBASE. I was able to install jBASE 4.1 along with all its accessories on my WinXP SP3 machine, run the jSlimserver and TEMENOUS server along with the licensing server, and add the license key. But what am I supposed to do after that? I have no idea. The docs and online help do not answer the simple question of how to create a database, and the Google search results from the jBASE site all lead to 404 pages! Could a jBASE expert guide me through the following steps: create a jBASE database; create users; authenticate as those users; connect to the database; create tables and insert data; and connect from a C++ or C# program to the jBASE DB to back up/restore tables. I know this is a lot to ask, but I just do not get the jBASE system and cannot get it to work on my machine. By the way, jdc and jexloree do not seem to do anything. I have checked and verified that the environment variables for jBASE are set up correctly, and there are no extra JRE or JDK installations on my system. Beyond that, only the licensing client, slim server, and TEMENOUS server seem to run and listen for connections; no other executable ever seems to work. A simple tutorial for achieving this objective would be highly appreciated, and if anyone can point out a mistake I have made or anything I should check, please do. I will be highly encouraged and obliged. Thanks, Steve

    Read the article

  • Crawling engine architecture - Java/ Perl integration

    - by Bigtwinz
    Hi all, I am looking to develop a management and administration solution around our web-crawling Perl scripts. Right now the scripts are saved in SVN and are manually kicked off by sysadmins/devs, and every time we need to retrieve data from new sources we have to create a ticket with business instructions and goals; as you can imagine, not an optimal solution. There are three consistent themes in this system: the retrieval of data has a "conceptual structure", for lack of a better phrase, i.e. the retrieval of information follows a particular path; we are only looking for very specific information, so we do not have to worry about extensive crawling for a while (think thousands to tens of thousands of pages rather than millions); and crawls are URL-based instead of site-based. As I move this alpha version toward a more production-level beta, I am looking to add automation and management of the data retrieval. Additionally, our other systems are Java (which I am more proficient in), and I would like to compartmentalize the Perl aspects so we do not have to lean heavily on outside help. I have evaluated the usual suspects (Nutch, Droids, etc.), but the time spent modifying those frameworks to suit our specific information retrieval cannot be justified. So I would like your thoughts on the following architecture: use Java as the interface for managing and executing the Perl scripts, use Java for configuration and data access, and stick with Perl for retrieval. An example use case: a data analyst delivers a requirement for crawling; a Perl developer creates the required script and uses this webapp to submit it (the script gets saved to the filesystem); the script then gets kicked off from the webapp with specific parameters... The webapp should be able to launch multiple instances of a Perl script to run multiple crawlers. So my questions are: what do you think of this design; how solid is the integration between Java and Perl, specifically calling Perl from Java; and has anyone used such a system where part of it is a Perl repository? The goal is really to avoid a pile of unorganized Perl scripts and to put some management and organization around our information retrieval. I also know I could use Perl for the web part of what we want, but as I mentioned I am trying to keep Perl focused; still, if that seems backwards, I am not averse to making it an all-Perl solution. Open to any and all suggestions and opinions. Thanks
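
    On the integration question, the loosest coupling is process-level: the Java webapp shells out to the Perl scripts and captures their output, which leaves the Perl side untouched. A minimal sketch follows; the script path and the --url argument are placeholders.

        import java.io.BufferedReader;
        import java.io.InputStreamReader;

        public class CrawlerLauncher {
            public static int runCrawl(String scriptPath, String targetUrl) throws Exception {
                ProcessBuilder pb = new ProcessBuilder("perl", scriptPath, "--url", targetUrl);
                pb.redirectErrorStream(true);               // merge stderr into stdout
                Process p = pb.start();
                BufferedReader out = new BufferedReader(new InputStreamReader(p.getInputStream()));
                String line;
                while ((line = out.readLine()) != null) {
                    System.out.println(line);               // or hand off to a logger / database
                }
                return p.waitFor();                         // exit code of the perl script
            }
        }

    Each invocation can be wrapped in a Runnable and submitted to an ExecutorService to get the multiple-crawler behaviour described above; tighter embedding (for example Inline::Java) is possible but adds coupling that this design is trying to avoid.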

    Read the article

  • firefox, jQuery ajax calls firing twice and never triggering success or error functions

    - by Adrian Adkison
    Hi, I am developing with the .NET framework, using jQuery 1.4.2 client side. When developing in Firefox version 3.6, every so often an one of the many ajax calls I make on the page will fire twice, the second will return successfully but will not trigger the success handler of the ajax call and the first never returns anything. So basically the data is all sent to the server and response is sent down but nothing happens with the response. Here is an example of the call I am making. It happens to any of the ajax calls, so there is not one particular that is causing the problem: $.ajax({ type:"POST", contentType : "application/json; charset=utf-8", data:"{}", dataType:"json", success:function(){ alert('success'); }, error:function(){ alert('error'); }, url:'/services.aspx/somemethod' }); }) From firebug, here are the headers of the first call which in firebug shows as never completely responding, meaning i see no response code and the loader gif in the firebug never goes away. Note:In firebug it usually says Response Header but for the first call this space is blank Server ASP.NET Development Server/9.0.0.0 X-AspNet-Version 2.0.50727 Content-Type application/json; charset=utf-8 Connection Close Request Headers Host mydomain.com User-Agent Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.3) Gecko/20100401Firefox/3.6.3 ( .NET CLR 3.5.30729) Accept application/json, text/javascript, */* Accept-Language en-us,en;q=0.5 Accept-Encoding gzip,deflate Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive 115 Connection keep-alive Content-Type application/json; charset=utf-8 X-Requested-With XMLHttpRequest Referer http://mydomain.com/mypage.aspx Here is the header from the second request which just appear to complete in firebug (i.e response is 200): Response Header Server ASP.NET Development Server/9.0.0.0 X-AspNet-Version 2.0.50727 Content-Type application/json; charset=utf-8 Connection Close Request Headers Host mydomain.com User-Agent Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 ( .NET CLR 3.5.30729) Accept text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language en-us,en;q=0.5 Accept-Encoding gzip,deflate Accept-Charset ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive 115 Connection keep-alive Content-Type application/json; charset=utf-8 Referer http://mydomain.com/mypage.aspx To summarize my question, why are two requests being made and why are neither of them triggering a success or error handler in the ajax call. I have seen this article about firefox 3.5+ and preflighted requests https://developer.mozilla.org/En/HTTP_access_control#Preflighted_requests In the article is says if a "POST" is made with any other content type than "application/x-www-form-urlencoded, multipart/form-data, or text/plain" than the request is pre-flighted. If this is the case, this should happen to all of my calls. Thanks

    Read the article

  • All downloads being interrupted

    - by Jake
    System: Windows 7 Professional 64bit. 8GB RAM, Intel i5-2400 CPU, +300GB free on the hard drive. AVG Internet Security 2012 (enabled & disabled, with firewall enabled and disabled - no effect for either). This computer is less than a year old. Network: This problem is occurring on a single computer on a network with multiple computers. The router is a Motorola Netopia 3347-02 (DSL Modem/Wireless Router combined). The computer is plugged in directly to the modem, other computers are using the wireless successfully. The router has been reset. The only thing odd about the connection between the router and computer is that it is configured to allow RDP through, so it is assigned a static IP by the router and port forwarding is enabled for port 3389. Also, though I doubt it matters, a second wireless router is active behind this router providing a second network that some computers in the area use without issues. Details: All downloads initiated on this specific computer eventually fail, this includes streaming from youtube, specialized downloads (itunes), downloads from websites, FTP downloads, etc. Failure occurs with all browsers, but in chrome this is the process it takes: 1) Download begins normally, 2) At some point between (observed) 7MBs and 229MBs the download stops progressing (at this point, if watching chrome's task manager, you can see the network activity for the downloading tab drop to 0kps), 3) for some time the download sits there still attempting to complete, but will eventually display "123,049,871/0 B, Interrupted" (where the number is whatever it actually got to). The file I am using to test this is a very large .zip file located on a server I control, but the problem seems to occur on any site. The amount downloaded is completely random, and seems to be more time-based than anything (if I start a download immediately after the last one fails, it tends to get further than the last one). Small files can get through for this reason, though they can fail as well. In a test where I simultaneously downloaded the same file via HTTP (chrome) and FTP (windows explorer), both downloads failed at the same instant, though explorer displayed "Connection timed out" several minutes before chrome finally showed the download as interrupted. 
Other things I have tried based on advice given to people with similar/identical problems: Setting my MTU to 1492 (as described here: http://blog.thecompwiz.com/2011/08/networking-issues.html) Disabling write caching to the hard drive storing the download on an external device successfully transmitted +1GB file from one computer on the same network to this computer disabling indexing in the folder the download was being stored in disabling all security software checked to make sure all drivers were up to date read about 50 accounts with nearly exact descriptions of what I'm experiencing, none of which had a solution given Running Processes: Image Name PID Session Name Session# Mem Usage ========================= ======== ================ =========== ============ System Idle Process 0 Services 0 24 K System 4 Services 0 104,836 K smss.exe 332 Services 0 1,276 K csrss.exe 764 Services 0 5,060 K wininit.exe 820 Services 0 4,748 K csrss.exe 844 Console 1 23,764 K services.exe 876 Services 0 11,856 K lsass.exe 892 Services 0 14,420 K lsm.exe 900 Services 0 7,820 K winlogon.exe 944 Console 1 7,716 K svchost.exe 428 Services 0 12,744 K svchost.exe 796 Services 0 12,240 K svchost.exe 1036 Services 0 22,372 K svchost.exe 1084 Services 0 174,132 K svchost.exe 1112 Services 0 56,144 K svchost.exe 1288 Services 0 18,640 K svchost.exe 1404 Services 0 29,616 K spoolsv.exe 1576 Services 0 25,924 K svchost.exe 1616 Services 0 12,788 K AppleMobileDeviceService. 1728 Services 0 9,796 K avgwdsvc.exe 1820 Services 0 8,268 K mDNSResponder.exe 1844 Services 0 5,832 K w3dbsmgr.exe 1108 Services 0 43,760 K QBCFMonitorService.exe 1336 Services 0 16,408 K svchost.exe 2404 Services 0 28,240 K taskhost.exe 3020 Console 1 12,372 K dwm.exe 2280 Console 1 5,968 K explorer.exe 2964 Console 1 152,476 K WUDFHost.exe 3316 Services 0 6,740 K svchost.exe 3408 Services 0 5,556 K RAVCpl64.exe 3684 Console 1 13,864 K igfxtray.exe 3700 Console 1 7,804 K hkcmd.exe 3772 Console 1 7,868 K igfxpers.exe 3788 Console 1 10,940 K sidebar.exe 3836 Console 1 84,400 K chrome.exe 3964 Console 1 19,640 K pptd40nt.exe 4068 Console 1 5,156 K acrotray.exe 3908 Console 1 14,676 K avgtray.exe 3872 Console 1 9,508 K jusched.exe 4076 Console 1 4,412 K iTunesHelper.exe 1532 Console 1 87,308 K SearchIndexer.exe 3492 Services 0 36,948 K iPodService.exe 4136 Services 0 7,944 K BrccMCtl.exe 4276 Console 1 18,132 K splwow64.exe 4380 Console 1 32,600 K qbupdate.exe 4836 Console 1 24,236 K svchost.exe 4288 Services 0 20,700 K wmpnetwk.exe 3112 Services 0 9,516 K FNPLicensingService.exe 5248 Services 0 5,852 K QBW32.EXE 5508 Console 1 127,068 K QBDBMgrN.exe 5600 Services 0 42,252 K EXCEL.EXE 2512 Console 1 99,100 K LMS.exe 3188 Services 0 5,616 K UNS.exe 1600 Services 0 7,308 K axlbridge.exe 5260 Console 1 5,132 K chrome.exe 5888 Console 1 200,336 K chrome.exe 3536 Console 1 26,076 K chrome.exe 1952 Console 1 20,168 K chrome.exe 4596 Console 1 24,696 K chrome.exe 4292 Console 1 48,096 K chrome.exe 2796 Console 1 23,520 K Acrobat.exe 1240 Console 1 87,252 K 123w.exe 4892 Console 1 22,728 K calc.exe 1700 Console 1 12,636 K chrome.exe 1328 Console 1 28,888 K chrome.exe 3696 Console 1 47,012 K rundll32.exe 6320 Console 1 7,104 K chrome.exe 4928 Console 1 44,248 K AVGIDSAgent.exe 260 Services 0 12,940 K avgfws.exe 6052 Services 0 26,912 K avgnsa.exe 5064 Services 0 2,496 K avgrsa.exe 3088 Services 0 2,200 K avgcsrva.exe 2596 Services 0 380 K avgcsrva.exe 6948 Services 0 408 K StikyNot.exe 452 Console 1 14,772 K chrome.exe 4580 Console 1 28,200 K chrome.exe 4016 
Console 1 57,756 K svchost.exe 7140 Services 0 4,500 K chrome.exe 6264 Console 1 56,824 K chrome.exe 7008 Console 1 56,896 K chrome.exe 2224 Console 1 38,032 K taskhost.exe 612 Console 1 7,228 K chrome.exe 6000 Console 1 10,928 K chrome.exe 2568 Console 1 43,052 K chrome.exe 272 Console 1 75,988 K chrome.exe 7328 Console 1 53,240 K PaprPort.exe 7976 Console 1 137,152 K pplinks.exe 7500 Console 1 14,052 K ppscanmg.exe 5744 Console 1 18,996 K taskeng.exe 7388 Console 1 6,308 K SearchProtocolHost.exe 8024 Services 0 8,804 K SearchFilterHost.exe 7232 Services 0 7,848 K chrome.exe 8016 Console 1 37,440 K cmd.exe 7692 Console 1 3,096 K conhost.exe 7516 Console 1 5,872 K tasklist.exe 8160 Console 1 5,772 K WmiPrvSE.exe 7684 Services 0 6,400 K Any help with this would be greatly appreciated, I've been beating my head against a wall over this all day. This computer serves dual purpose as the main company document server and the Owner's work computer, it's fairly important it be fully functional and I cannot figure this out.

    Read the article

  • Opinions on sensor / reading / alert database design

    - by Mark
    I've asked a few questions lately regarding database design, probably too many ;-) However I believe I'm slowly getting to the heart of the matter with my design and am slowly boiling it down. I'm still wrestling with a couple of decisions regarding how "alerts" are stored in the database. In this system, an alert is an entity that must be acknowledged, acted upon, etc. Initially I related readings to alerts like this (very cut down):

[Location]
    LocationId

[Sensor]
    SensorId
    LocationId
    UpperLimitValue
    LowerLimitValue

[SensorReading]
    SensorReadingId
    Value
    Status
    Timestamp

[SensorAlert]
    SensorAlertId

[SensorAlertReading]
    SensorAlertId
    SensorReadingId

The last table associates readings with the alert, because it is the reading that dictates whether the sensor is in alert or not. The problem with this design is that it allows readings from many sensors to be associated with a single alert - whereas each alert is for a single sensor only and should only have readings for that sensor associated with it (should I be bothered that the DB allows this, though?). I thought, to simplify things, why even bother with the SensorAlertReading table? Instead I could do this:

[Location]
    LocationId

[Sensor]
    SensorId
    LocationId

[SensorReading]
    SensorReadingId
    SensorId
    Value
    Status
    Timestamp

[SensorAlert]
    SensorAlertId
    SensorId
    Timestamp

[SensorAlertEnd]
    SensorAlertId
    Timestamp

Basically I'm not associating readings with the alert now - instead I just know that an alert was active between a start and end time for a particular sensor, and if I want to look up the readings for that alert I can. Obviously the downside is I no longer have any constraint stopping me deleting readings that occurred during the alert, but I'm not sure that the constraint is necessary. Now looking in from the outside as a developer / DBA, would that make you want to be sick, or does it seem reasonable? Is there perhaps another way of doing this that I may be missing? Thanks.

EDIT: Here's another idea - it works in a different way. It stores each sensor state change, going from normal to alert, in a table, and then readings are simply associated with a particular state. This seems to solve all the problems - what d'ya think? (The only thing I'm not sure about is calling the table "SensorState"; I can't help thinking there's a better name - maybe SensorReadingGroup?)

[Location]
    LocationId

[Sensor]
    SensorId
    LocationId

[SensorState]
    SensorStateId
    SensorId
    Timestamp
    Status
    IsInAlert

[SensorReading]
    SensorReadingId
    SensorStateId
    Value
    Timestamp

There must be an elegant solution to this!
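
To make the final idea concrete, here is a minimal DDL sketch of the SensorState design from the edit above. The column types, NOT NULL choices and the BIT flag are assumptions added purely for illustration:

-- Sketch of the "SensorState" proposal; types and nullability are assumed.
CREATE TABLE Location (
    LocationId      INT PRIMARY KEY
);

CREATE TABLE Sensor (
    SensorId        INT PRIMARY KEY,
    LocationId      INT NOT NULL REFERENCES Location (LocationId),
    UpperLimitValue DECIMAL(10,2),
    LowerLimitValue DECIMAL(10,2)
);

CREATE TABLE SensorState (
    SensorStateId   INT PRIMARY KEY,
    SensorId        INT NOT NULL REFERENCES Sensor (SensorId),
    Timestamp       DATETIME NOT NULL,
    Status          INT NOT NULL,
    IsInAlert       BIT NOT NULL
);

CREATE TABLE SensorReading (
    SensorReadingId INT PRIMARY KEY,
    SensorStateId   INT NOT NULL REFERENCES SensorState (SensorStateId),
    Value           DECIMAL(10,2) NOT NULL,
    Timestamp       DATETIME NOT NULL
);

Because a reading references exactly one SensorState row, and each state row references exactly one sensor, a reading can never be attached to an alert period for a different sensor - which was the original concern with the SensorAlertReading link table.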

    Read the article

  • Load-balancing between a Procurve switch and a server

    - by vlad
    Hello, I've been searching around the web for this problem I've been having. It's similar in a way to this question: How exactly & specifically does layer 3 LACP destination address hashing work? My setup is as follows: I have a central switch, a Procurve 2510G-24, image version Y.11.16. It's the center of a star topology; there are four switches connected to it via a single gigabit link each. Those switches service the users. On the central switch, I have a server with two gigabit interfaces that I want to bond together in order to achieve higher throughput, and two other servers that have single gigabit connections to the switch. The topology looks as follows:

 sw1   sw2   sw3   sw4
  |     |     |     |
 ---------------------
 |        sw0        |
 ---------------------
   ||     |      |
  srv1   srv2   srv3

The servers were running FreeBSD 8.1. On srv1 I set up a lagg interface using the LACP protocol, and on the switch I set up a trunk for the two ports using LACP as well. The switch showed that the server was an LACP partner, I could ping the server from another computer, and the server could ping other computers. If I unplugged one of the cables, the connection would keep working, so everything looked fine. Until I tested throughput: there was only one link used between srv1 and sw0. All testing was conducted with iperf, and load distribution was checked with systat -ifstat. I was looking to test the load balancing for both receive and send operations, as I want this server to be a file server. There were therefore two scenarios:

- iperf -s on srv1 and iperf -c on the other servers
- iperf -s on the other servers and iperf -c on srv1, connected to all the other servers

Every time only one link was used. If one cable was unplugged, the connections would keep going. However, once the cable was plugged back in, the load was not distributed. Each and every server is able to fill the gigabit link. In one-to-one test scenarios, iperf was reporting around 940Mbps. The CPU usage was around 20%, which means that the servers could withstand a doubling of the throughput. srv1 is a Dell PowerEdge SC1425 with onboard Intel 82541GI NICs (em driver on FreeBSD). After troubleshooting a previous problem with VLAN tagging on top of a lagg interface, it turned out that the em driver could not support this. So I figured that maybe something else was wrong with the em driver and/or the lagg stack, so I started up BackTrack 4 R2 on this same server; srv1 now uses Linux kernel 2.6.35.8. I set up a bonding interface bond0. The kernel module was loaded with option mode=4 in order to get LACP. The switch was happy with the link, I could ping to and from the server, and I could even put VLANs on top of the bonding interface. However, only half the problem was solved: if I used srv1 as a client to the other servers, iperf was reporting around 940Mbps for each connection, and bwm-ng showed, of course, a nice distribution of the load between the two NICs; if I ran the iperf server on srv1 and tried to connect with the other servers, there was no load balancing. I thought that maybe I was out of luck and the hashes for the two MAC addresses of the clients were the same, so I brought in two new servers and tested with the four of them at the same time, and still nothing changed. I tried disabling and re-enabling one of the links, and all that happened was that the traffic switched from one link to the other and back to the first again.
I also tried setting the trunk to "plain trunk mode" on the switch, and experimented with other bonding modes (roundrobin, xor, alb, tlb) but I never saw any traffic distribution. One interesting thing, though: one of the four switches is a Cisco 2950, image version 12.1(22)EA7. It has 48 10/100 ports and 2 gigabit uplinks. I have a server (call it srv4) with a 4 channel trunk connected to it (4x100), FreeBSD 8.0 release. The switch is connected to sw0 via gigabit. If I set up an iperf server on one of the servers connected to sw0 and a client on srv4, ALL 4 links are used, and iperf reports around 330Mbps. systat -ifstat shows all four interfaces are used. The cisco port-channel uses src-mac to balance the load. The HP should use both the source and destination according to the manual, so it should work as well. Could this mean there is some bug in the HP firmware? Am I doing something wrong?
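
For reference, here is a sketch of the Linux bonding configuration relevant to the transmit side of this question; the file location and interface names are assumptions, while mode, miimon and xmit_hash_policy are standard bonding driver parameters. Note that xmit_hash_policy only affects how srv1 spreads its own outgoing frames across the two slaves - the distribution of incoming traffic (the case where srv1 runs the iperf server) is decided entirely by the switch's own hashing.

# /etc/modprobe.d/bonding.conf (assumed location)
# mode=4                      -> 802.3ad / LACP
# miimon=100                  -> check link state every 100 ms
# xmit_hash_policy=layer3+4   -> hash outgoing frames on IP address + port,
#                                so several flows to the same peer can be
#                                spread over both slaves
options bonding mode=4 miimon=100 xmit_hash_policy=layer3+4

# Enslaving the two NICs (interface names and address are assumptions):
# modprobe bonding
# ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
# ifenslave bond0 eth0 eth1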

    Read the article

  • Bluetooth connect to a RS232 adapter in android

    - by ThePosey
    Hello All, I am trying to use the Bluetooth Chat sample API app that Google provides to connect to a Bluetooth RS232 adapter hooked up to another device. Here is the app for reference: http://developer.android.com/resources/samples/BluetoothChat/index.html And here is the spec sheet for the RS232 connector, just for reference: http://serialio.com/download/Docs/BlueSnap-guide-4.77%5FCommands.pdf Well, the problem is that when I go to connect to the device with mmSocket.connect(); (BluetoothSocket::connect()), I always get an IOException thrown by the connect method. When I do a toString on the exception I get "Service discovery failed". My question is mostly: what are the cases that would cause an IOException to get thrown in the connect method? I know those are in the source somewhere, but I don't know exactly how the Java layer that you write apps in interfaces with the C/C++ layer that contains the actual stack. I know that it uses the BlueZ Bluetooth stack, which is written in C/C++, but I am not sure how that ties into the Java layer, which is what I would think is throwing the exception. Any help on pointing me to where I can try to dissect this issue would be incredible. Also, just to note, I am able to pair with the RS232 adapter just fine, but I am never able to actually connect. Here is the logcat output for more reference:

I/ActivityManager( 1018): Displayed activity com.example.android.BluetoothChat/.DeviceListActivity: 326 ms (total 326 ms)
E/BluetoothService.cpp( 1018): stopDiscoveryNative: D-Bus error in StopDiscovery: org.bluez.Error.Failed (Invalid discovery session)
D/BluetoothChat( 1729): onActivityResult -1
D/BluetoothChatService( 1729): connect to: 00:06:66:03:0C:51
D/BluetoothChatService( 1729): setState() STATE_LISTEN - STATE_CONNECTING
E/BluetoothChat( 1729): + ON RESUME +
I/BluetoothChat( 1729): MESSAGE_STATE_CHANGE: STATE_CONNECTING
I/BluetoothChatService( 1729): BEGIN mConnectThread
E/BluetoothService.cpp( 1018): stopDiscoveryNative: D-Bus error in StopDiscovery: org.bluez.Error.Failed (Invalid discovery session)
E/BluetoothEventLoop.cpp( 1018): event_filter: Received signal org.bluez.Device:PropertyChanged from /org/bluez/1498/hci0/dev_00_06_66_03_0C_51
I/BluetoothChatService( 1729): CONNECTION FAIL TOSTRING: java.io.IOException: Service discovery failed
D/BluetoothChatService( 1729): setState() STATE_CONNECTING - STATE_LISTEN
D/BluetoothChatService( 1729): start
D/BluetoothChatService( 1729): setState() STATE_LISTEN - STATE_LISTEN
I/BluetoothChat( 1729): MESSAGE_STATE_CHANGE: STATE_LISTEN
V/BluetoothEventRedirector( 1080): Received android.bleutooth.device.action.UUID
I/NotificationService( 1018): enqueueToast pkg=com.example.android.BluetoothChat callback=android.app.ITransientNotification$Stub$Proxy@446327c8 duration=0
I/BluetoothChat( 1729): MESSAGE_STATE_CHANGE: STATE_LISTEN
E/BluetoothEventLoop.cpp( 1018): event_filter: Received signal org.bluez.Device:PropertyChanged from /org/bluez/1498/hci0/dev_00_06_66_03_0C_51
V/BluetoothEventRedirector( 1080): Received android.bleutooth.device.action.UUID

The device I'm trying to connect to is 00:06:66:03:0C:51, which I can scan for and apparently pair with just fine.
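
For context, a frequent trigger for "Service discovery failed" from BluetoothSocket.connect() is connecting while device discovery is still running, or requesting a UUID the remote device does not advertise; for an RS232/SPP adapter the well-known Serial Port Profile UUID is normally the one to ask for. Below is a minimal sketch of that pattern only - the class and variable names are assumptions and it is not the actual BluetoothChat sample code:

import java.io.IOException;
import java.util.UUID;
import android.bluetooth.BluetoothAdapter;
import android.bluetooth.BluetoothDevice;
import android.bluetooth.BluetoothSocket;

public class SerialConnector {
    // Well-known UUID for the Serial Port Profile (SPP), which serial
    // adapters such as the BlueSnap normally advertise.
    private static final UUID SPP_UUID =
            UUID.fromString("00001101-0000-1000-8000-00805F9B34FB");

    public BluetoothSocket openSerialSocket(BluetoothAdapter adapter,
                                            BluetoothDevice device)
            throws IOException {
        // Discovery is heavyweight and commonly makes connect() fail,
        // so cancel it before attempting the connection.
        adapter.cancelDiscovery();

        BluetoothSocket socket =
                device.createRfcommSocketToServiceRecord(SPP_UUID);
        socket.connect();   // blocks; throws IOException on failure
        return socket;
    }
}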

    Read the article

  • Automatic incremental SQL Script generation for incremental, nightly builds when using Team Build in

    - by Steve Johnson
    Hi all, hope that everybody here is OK. We are using VS 2008 as our development tool and TFS 2008 for version control as well as build automation. Some of our developers use DBPro for database changes and some use SQL Server Management Studio. I am trying to automate the build for a web application built using C# and VB.Net. Our scenario is such that we have a central database to which our web application connects. Whenever we supply our clients with new functionality or a bug fix, we supply them incremental builds. The SQL script is checked into source control for every incremental build, once the developers have made and tested their changes on our central DB server. I want to generate a differential script that can be run at the client as an incremental update script. Now, going about this is a problem. Sometimes our developers forget the database change-sets, and the script in source control is missing an SP or two. Also, sometimes we need to insert default data into some of the tables that have strict values and not test values. For example, for a table that contains the services provided by the panel, we add a new service name, signature, credentials, service address, etc. to the ServiceTable. Besides this, many other tables may have test data that is not needed. If we use DataCompare, it will generate a changeset for the required data (important for the client to enable certain services) and also for our test data that was added to the database as a result of our testing of the functionality or bug fix. Currently I am using SQLSchemaCompareTask (from the Visual Studio 2008 Team Database Professional Power Tools API) in the TFSBuild.proj file of the build definition for TFS 2008. Using SQLSchemaCompareTask, the generated script contains qualifiers like [dbo]. etc., which are not desired, as the script fails when run against SQL Server 2000 databases (some of our clients still use SQL Server 2000 as the back end of the application). Also, default data can't be generated by this process. To overcome this problem, I have to come up with a solution that can compare databases and generate a script automatically that does not have to be manually reviewed again before being sent to the client. Please suggest an effective methodology for such SQL script generation, and whether two different databases may be used or something else? Is there any toolkit or API that can enable build automation for SQL Server databases? Thank you all. Regards Steve

    Read the article

  • Silverlight ViewBase in separate assembly - possible?

    - by Mark
    I have all my views in a project inheriting from a ViewBase class that inherits from UserControl. In my XAML I reference it thus:

<f:ViewBase x:Class="Forte.UI.Modules.Configure.Views.AddNewEmployeeView"
    xmlns:f="clr-namespace:Forte.UI.Modules.Configure.Views"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"

It works fine. Now I have moved the ViewBase to another project (so I can reference it from multiple projects) so I reference it like:

<f:ViewBase x:Class="Forte.UI.Modules.Configure.Views.AddNewEmployeeView"
    xmlns:f="clr-namespace:Forte.UI.Modules.Common.Views;assembly=Forte.UI.Modules.Common"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"

This works fine when I run from the IDE, but when I build the same sln from MSBuild it gives a warning:

"H:\dev\ExternalCopy\Code\UI\Modules\Configure\Forte.UI.Modules.Configure.csproj" (default target) (10:12) - (ValidateXaml target) -
H:\dev\ExternalCopy\Code\UI\Modules\Configure\Views\AddNewEmployee\AddNewEmployeeView.xaml(1,2,1,2): warning : The tag 'ViewBase' does not exist in XML namespace 'clr-namespace:Forte.UI.Modules.Common.Views;assembly=Forte.UI.Modules.Common'.

Then it fails with:

"H:\dev\ExternalCopy\Code\UI\Modules\Configure\Forte.UI.Modules.Configure.csproj" (default target) (10:12) - (ValidateXaml target) -
C:\Program Files\MSBuild\Microsoft\Silverlight\v3.0\Microsoft.Silverlight.Common.targets(210,9): error MSB4018: The "ValidateXaml" task failed unexpectedly.
C:\Program Files\MSBuild\Microsoft\Silverlight\v3.0\Microsoft.Silverlight.Common.targets(210,9): error MSB4018: System.NullReferenceException: Object reference not set to an instance of an object.
C:\Program Files\MSBuild\Microsoft\Silverlight\v3.0\Microsoft.Silverlight.Common.targets(210,9): error MSB4018: at MS.MarkupCompiler.ValidationPass.ValidateXaml(String fileName, Assembly[] assemblies, Assembly callingAssembly, TaskLoggingHelper log, Boolean shouldThrow)
C:\Program Files\MSBuild\Microsoft\Silverlight\v3.0\Microsoft.Silverlight.Common.targets(210,9): error MSB4018: at Microsoft.Silverlight.Build.Tasks.ValidateXaml.XamlValidator.Execute()
C:\Program Files\MSBuild\Microsoft\Silverlight\v3.0\Microsoft.Silverlight.Common.targets(210,9): error MSB4018: at Microsoft.Silverlight.Build.Tasks.ValidateXaml.XamlValidator.Execute()
C:\Program Files\MSBuild\Microsoft\Silverlight\v3.0\Microsoft.Silverlight.Common.targets(210,9): error MSB4018: at Microsoft.Silverlight.Build.Tasks.ValidateXaml.Execute()
C:\Program Files\MSBuild\Microsoft\Silverlight\v3.0\Microsoft.Silverlight.Common.targets(210,9): error MSB4018: at Microsoft.Build.BuildEngine.TaskEngine.ExecuteInstantiatedTask(EngineProxy engineProxy, ItemBucket bucket, TaskExecutionMode howToExecuteTask, ITask task, Boolean& taskResult)

Any ideas what might be causing this behaviour? I am using Silverlight 3. Here is a cut-down version of the MSBuild file that fails to build the sln that builds fine in the IDE:

<?xml version="1.0" encoding="utf-8" ?>
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" DefaultTargets="Compile">
  <ItemGroup>
    <ProjectToBuild Include="..\UI\Forte.UI.sln">
      <Properties>Configuration=Debug</Properties>
    </ProjectToBuild>
  </ItemGroup>
  <Target Name="Compile">
    <MSBuild Projects="@(ProjectToBuild)"></MSBuild>
  </Target>
</Project>

Thanks for any help!

    Read the article

  • Oracle 9i Session Disconnections

    - by mlaverd
    [Cross-Posting from ServerFault] I am in a development environment, and our test Oracle 9i server has been misbehaving for a few days now. What happens is that our JDBC connections disconnect after a few successful connections. We got this box set up by our IT department and handed over to us. It is 'our problem', so options like 'ask your DBA' aren't going to help me. :( Our server is set up with 3 plain databases (one is the main dev db, the other is the 'experimental' dev db). We use the Oracle 10 ojdbc14.jar thin JDBC driver (because of some bug in the version 9 of the driver). We're using Hibernate to talk to the DB. The only thing that I can see that changed is that we now have more users connecting to the server: instead of one developer, we now have 3. With the Hibernate connection pools, I'm thinking that maybe we're hitting some limit? Does anyone have any idea what's going on? Here's the stack trace on the client:

Caused by: org.hibernate.exception.GenericJDBCException: could not execute query
    at org.hibernate.exception.SQLStateConverter.handledNonSpecificException(SQLStateConverter.java:126) [hibernate3.jar:na]
    at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:114) [hibernate3.jar:na]
    at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66) [hibernate3.jar:na]
    at org.hibernate.loader.Loader.doList(Loader.java:2235) [hibernate3.jar:na]
    at org.hibernate.loader.Loader.listIgnoreQueryCache(Loader.java:2129) [hibernate3.jar:na]
    at org.hibernate.loader.Loader.list(Loader.java:2124) [hibernate3.jar:na]
    at org.hibernate.loader.hql.QueryLoader.list(QueryLoader.java:401) [hibernate3.jar:na]
    at org.hibernate.hql.ast.QueryTranslatorImpl.list(QueryTranslatorImpl.java:363) [hibernate3.jar:na]
    at org.hibernate.engine.query.HQLQueryPlan.performList(HQLQueryPlan.java:196) [hibernate3.jar:na]
    at org.hibernate.impl.SessionImpl.list(SessionImpl.java:1149) [hibernate3.jar:na]
    at org.hibernate.impl.QueryImpl.list(QueryImpl.java:102) [hibernate3.jar:na]
    ...
Caused by: java.sql.SQLException: Io exception: Connection reset
    at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"]
    at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:146) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"]
    at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:255) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"]
    at oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:829) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"]
    at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1049) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"]
    at oracle.jdbc.driver.T4CPreparedStatement.executeMaybeDescribe(T4CPreparedStatement.java:854) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"]
    at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1154) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"]
    at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3370) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"]
    at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:3415) [ojdbc14.jar:Oracle JDBC Driver version - "10.2.0.4.0"]
    at org.hibernate.jdbc.AbstractBatcher.getResultSet(AbstractBatcher.java:208) [hibernate3.jar:na]
    at org.hibernate.loader.Loader.getResultSet(Loader.java:1812) [hibernate3.jar:na]
    at org.hibernate.loader.Loader.doQuery(Loader.java:697) [hibernate3.jar:na]
    at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:259) [hibernate3.jar:na]
    at org.hibernate.loader.Loader.doList(Loader.java:2232) [hibernate3.jar:na]
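
One pattern worth noting for this symptom (connections that work, then die with "Connection reset" once more users and more idle time are involved) is stale pooled connections rather than a hard session limit: Hibernate's built-in pool does not test idle connections, so a connection dropped by the server or by something in between is only discovered on the next use. A common mitigation is letting c3p0 manage the pool and test or retire idle connections. The property names below are the standard hibernate.c3p0.* settings from Hibernate 3; the values are assumptions:

# hibernate.properties - sketch only; the values are assumptions
hibernate.c3p0.min_size=5
hibernate.c3p0.max_size=20
# retire connections that have been idle for more than 300 seconds
hibernate.c3p0.timeout=300
hibernate.c3p0.max_statements=50
# test idle connections every 60 seconds so dead ones are evicted
# before Hibernate hands them out again
hibernate.c3p0.idle_test_period=60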

    Read the article

< Previous Page | 537 538 539 540 541 542 543 544 545 546 547 548  | Next Page >