Search Results


  • Strange Play Framework 2.2 exceptions after trying to add MySQL / slick

    - by Mike Cialowicz
    I'm working on a Play 2.2 application, and things have gone a bit south on me since I've tried adding my DB layer. Below are my build.sbt dependencies. As you can see I use mysql-connector-java and play-slick:

        libraryDependencies ++= Seq(
          jdbc,
          anorm,
          cache,
          "joda-time" % "joda-time" % "2.3",
          "mysql" % "mysql-connector-java" % "5.1.26",
          "com.typesafe.play" %% "play-slick" % "0.5.0.8",
          "com.aetrion.flickr" % "flickrapi" % "1.1"
        )

    My application.conf has some similarly simple DB stuff in it:

        db.default.url="jdbc:mysql://localhost/myDb"
        db.default.driver="com.mysql.jdbc.Driver"
        db.default.user="root"
        db.default.pass=""

    This is what it looks like when my Play server starts:

        [info] play - Listening for HTTP on /0:0:0:0:0:0:0:0:9000
        (Server started, use Ctrl+D to stop and go back to the console...)
        [info] Compiling 1 Scala source to C:\bbq\cats\in\space
        [info] play - database [default] connected at jdbc:mysql://localhost/myDb
        [info] play - Application started (Dev)

    So, it appears that Play can connect to the MySQL DB just fine (I think). However, I get this exception when I make any request to my server:

        [error] p.nettyException - Exception caught in Netty
        java.lang.NoSuchMethodError: akka.actor.ActorSystem.dispatcher()Lscala/concurrent/ExecutionContext;
            at play.core.Invoker$.<init>(Invoker.scala:24) ~[play_2.10.jar:2.2.0]
            at play.core.Invoker$.<clinit>(Invoker.scala) ~[play_2.10.jar:2.2.0]
            at play.api.libs.concurrent.Execution$Implicits$.defaultContext$lzycompute(Execution.scala:7) ~[play_2.10.jar:2.2.0]
            at play.api.libs.concurrent.Execution$Implicits$.defaultContext(Execution.scala:6) ~[play_2.10.jar:2.2.0]
            at play.api.libs.concurrent.Execution$.<init>(Execution.scala:10) ~[play_2.10.jar:2.2.0]
            at play.api.libs.concurrent.Execution$.<clinit>(Execution.scala) ~[play_2.10.jar:2.2.0]

    The odd thing is that the 2nd request (to the exact same URL, same controller, no changes) comes back with a different error:

        [error] p.nettyException - Exception caught in Netty
        java.lang.NoClassDefFoundError: Could not initialize class play.api.libs.concurrent.Execution$
            at play.core.server.netty.PlayDefaultUpstreamHandler.handleAction$1(PlayDefaultUpstreamHandler.scala:194) ~[play_2.10.jar:2.2.0]
            at play.core.server.netty.PlayDefaultUpstreamHandler.messageReceived(PlayDefaultUpstreamHandler.scala:169) ~[play_2.10.jar:2.2.0]
            at com.typesafe.netty.http.pipelining.HttpPipeliningHandler.messageReceived(HttpPipeliningHandler.java:62) ~[netty-http-pipelining.jar:na]
            at org.jboss.netty.handler.codec.http.HttpContentDecoder.messageReceived(HttpContentDecoder.java:108) ~[netty-3.6.5.Final.jar:na]
            at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296) ~[netty-3.6.5.Final.jar:na]
            at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:459) ~[netty-3.6.5.Final.jar:na]

    The URL / controller that I'm requesting just renders a static web page and doesn't do anything of any significance. It was working just fine before I started adding my DB layer. I'm rather stuck. Any help would be greatly appreciated, thanks. I'm using Scala 2.10.2, Play 2.2.0, and MySQL Server 5.6.14.0 (community edition).
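
    A hedged diagnostic sketch, not from the original post: a NoSuchMethodError on akka.actor.ActorSystem.dispatcher() usually means two different Akka versions ended up on the classpath, for example one pulled in transitively by play-slick and another by Play itself. Pinning a single version in build.sbt is one way to test that theory; the 2.2.0 below is an assumption and should match whatever Akka version Play 2.2.0 actually resolves:

        // Assumption: Play 2.2.0 expects Akka 2.2.x; adjust to the version Play resolves
        dependencyOverrides += "com.typesafe.akka" %% "akka-actor" % "2.2.0"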

  • ASP.NET MVC2: Getting textbox data from a view to a controller

    - by mr_plumley
    Hi, I'm having difficulty getting data from a textbox into a Controller. I've read about a few ways to accomplish this in Sanderson's book, Pro ASP.NET MVC Framework, but haven't had any success. Also, I've run across a few similar questions online, but haven't had any success there either. Seems like I'm missing something rather fundamental. Currently, I'm trying to use the action method parameters approach. Can someone point out where I'm going wrong or provide a simple example? Thanks in advance!

    Using Visual Studio 2008, ASP.NET MVC2 and C#: What I would like to do is take the data entered in the "Investigator" textbox and use it to filter investigators in the controller. I plan on doing this in the List method (which is already functional), however, I'm using the SearchResults method for debugging.

    Here's the textbox code from my view, SearchDetails:

        <h2>Search Details</h2>
        <% using (Html.BeginForm()) { %>
            <fieldset>
                <%= Html.ValidationSummary() %>
                <h4>Investigator</h4>
                <p>
                    <%= Html.TextBox("Investigator") %>
                    <%= Html.ActionLink("Search", "SearchResults") %>
                </p>
            </fieldset>
        <% } %>

    Here is the code from my controller, InvestigatorsController:

        private IInvestigatorsRepository investigatorsRepository;

        public InvestigatorsController(IInvestigatorsRepository investigatorsRepository)
        {
            //IoC:
            this.investigatorsRepository = investigatorsRepository;
        }

        public ActionResult List()
        {
            return View(investigatorsRepository.Investigators.ToList());
        }

        public ActionResult SearchDetails()
        {
            return View();
        }

        public ActionResult SearchResults(SearchCriteria search)
        {
            string test = search.Investigator;
            return View();
        }

    I have an Investigator class:

        [Table(Name = "INVESTIGATOR")]
        public class Investigator
        {
            [Column(IsPrimaryKey = true, IsDbGenerated = false, AutoSync = AutoSync.OnInsert)]
            public string INVESTID { get; set; }
            [Column]
            public string INVEST_FNAME { get; set; }
            [Column]
            public string INVEST_MNAME { get; set; }
            [Column]
            public string INVEST_LNAME { get; set; }
        }

    and created a SearchCriteria class to see if I could get MVC to push the search criteria data to it and grab it in the controller:

        public class SearchCriteria
        {
            public string Investigator { get; set; }
        }

    I'm not sure if project layout has anything to do with this either, but I'm using the 3 project approach suggested by Sanderson: DomainModel, Tests, and WebUI. The Investigator and SearchCriteria classes are in the DomainModel project and the other items mentioned here are in the WebUI project. Thanks again for any hints, tips, or simple examples!

    Mike
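
    A hedged note on the likely culprit, not from the original post: Html.ActionLink renders a plain GET hyperlink, so clicking "Search" navigates to SearchResults without submitting the form, and the textbox value never reaches the action parameter. A minimal sketch of the usual wiring, where the controller name "Investigators" is assumed from the class name above:

        <% using (Html.BeginForm("SearchResults", "Investigators")) { %>
            <%= Html.TextBox("Investigator") %>
            <input type="submit" value="Search" />
        <% } %>

    With a real form post, the default model binder should populate SearchCriteria.Investigator from the matching input name.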

  • Using MVC2 to update an Entity Framework v4 object with foreign keys fails

    - by jbjon
    With the following simple relational database structure: an Order has one or more OrderItems, and each OrderItem has one OrderItemStatus. Entity Framework v4 is used to communicate with the database and entities have been generated from this schema. The Entities connection happens to be called EnumTestEntities in the example.

    The trimmed down version of the Order Repository class looks like this:

        public class OrderRepository
        {
            private EnumTestEntities entities = new EnumTestEntities();

            // Query Methods
            public Order Get(int id)
            {
                return entities.Orders.SingleOrDefault(d => d.OrderID == id);
            }

            // Persistence
            public void Save()
            {
                entities.SaveChanges();
            }
        }

    An MVC2 app uses Entity Framework models to drive the views. I'm using the EditorFor feature of MVC2 to drive the Edit view. When it comes to POSTing back any changes to the model, the following code is called:

        [HttpPost]
        public ActionResult Edit(int id, FormCollection formValues)
        {
            // Get the current Order out of the database by ID
            Order order = orderRepository.Get(id);
            var orderItems = order.OrderItems;
            try
            {
                // Update the Order from the values posted from the View
                UpdateModel(order, "");
                // Without the ValueProvider suffix it does not attempt to update the order items
                UpdateModel(order.OrderItems, "OrderItems.OrderItems");
                // All the Save() does is call SaveChanges() on the database context
                orderRepository.Save();
                return RedirectToAction("Details", new { id = order.OrderID });
            }
            catch (Exception e)
            {
                return View(order); // Inserted while debugging
            }
        }

    The second call to UpdateModel has a ValueProvider suffix which matches the auto-generated HTML input name prefixes that MVC2 has generated for the foreign key collection of OrderItems within the View. The call to SaveChanges() on the database context after updating the OrderItems collection of an Order using UpdateModel generates the following exception:

        "The operation failed: The relationship could not be changed because one or more of the foreign-key properties is non-nullable. When a change is made to a relationship, the related foreign-key property is set to a null value. If the foreign-key does not support null values, a new relationship must be defined, the foreign-key property must be assigned another non-null value, or the unrelated object must be deleted."

    When debugging through this code, I can still see that the EntityKeys are not null and seem to be the same value as they should be. This still happens when you are not changing any of the extracted Order details from the database. Also the entity connection to the database doesn't change between the act of Getting and the SaveChanges, so it doesn't appear to be a Context issue either.

    Any ideas what might be causing this problem? I know EF4 has done work on foreign key properties, but can anyone shed any light on how to use EF4 and MVC2 to make things easy to update, rather than having to populate each property manually. I had hoped the simplicity of EditorFor and DisplayFor would also extend to Controllers updating data. Thanks
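
    A hedged workaround sketch, mine rather than the poster's: binding the collection object itself can make the model binder replace the collection, which EF4 reads as a relationship change; binding each existing OrderItem in place against its own indexed prefix avoids touching the relationship. The prefix format below is an assumption and must match the input names EditorFor actually rendered:

        // Assumption: the view renders inputs named like "OrderItems.OrderItems[0].Quantity"
        int index = 0;
        foreach (OrderItem item in order.OrderItems)
        {
            UpdateModel(item, string.Format("OrderItems.OrderItems[{0}]", index));
            index++;
        }
        orderRepository.Save();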

  • Automatically extracting inline XSD from WSDL into XSD file(s)

    - by Steven Geens
    I am using a third party Web Service whose definition and implementation are beyond my control. This web service will change in the future. The Web Service should be used to generate an XML file which contains some of the same data (represented by the same XSD types) as the Web Service plus some extra information generated by the program.

    My approach:

    1. Create my own XSD referring to the XSD definitions of the WSDL of the called web service. (This XSD also includes XSD types for the extra information, obviously.)
    2. Use a Java XML databinding framework (like ADB or JiBX) to generate the databinding classes from my own XSD file from step 1.
    3. Use a Java SOAP framework (like Axis2 or CXF) with the same databinding framework to generate the databinding classes from the WSDL. (This would enable me to use the objects retrieved by the web service directly in the generation of the XML.)

    The XSD types I am going to use in my own XSD file, but are defined in the WSDL, are subject to change. Whenever they change, I would like to automatically process the XSD and WSDL databinding again. (If the change is significant enough, this might trigger some development effort. (But usually not.))

    My problem: In step 1 I need an XSD referring to the same types as used by the Web Service. The WSDL is referring to another WSDL, which is referring to another WSDL, etc. Eventually there is a WSDL with the needed inline XSD types. As far as I know there is no way to directly reference the inline XSD types of a WSDL from an XSD. The approach I would think most viable is to include an extra step in the automatic processing (before the databinding) that extracts the inline XSD from the WSDL into other XSD file(s); a sketch of such a step follows below. These other XSD file(s) can then be referred to by my own XSD file.

    Things I'd like to avoid:

    - Manually copy-pasting the inline XSD into an XSD file (I am looking for an automatic process.)
    - Any manual steps. (Like determining the WSDL that contains the inline types manually. (The location of that WSDL does change as well.))
    - Using xsd:any in my own XSD. I would like my own XSD file to be correct.
    - Using a non-Java technology (like .NET)
    - Huge amounts of implementation (but hints on how you would implement such an extraction are welcome anyway)

    PS: I found some similar questions, but they all had responses like: WTH would you want to do that? That is the reason for my rather large background story.
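
    A minimal, hedged Java sketch of the extraction step mentioned above (mine; the file names are placeholders, it assumes the inline schemas sit under wsdl:types of one already-located WSDL, and following the chain of imported WSDLs plus copying namespace declarations from the wsdl:definitions element onto each extracted schema would still need handling):

        import java.io.File;
        import javax.xml.parsers.DocumentBuilderFactory;
        import javax.xml.transform.Transformer;
        import javax.xml.transform.TransformerFactory;
        import javax.xml.transform.dom.DOMSource;
        import javax.xml.transform.stream.StreamResult;
        import org.w3c.dom.Document;
        import org.w3c.dom.NodeList;

        public class InlineXsdExtractor {
            public static void main(String[] args) throws Exception {
                DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
                dbf.setNamespaceAware(true);
                Document wsdl = dbf.newDocumentBuilder().parse(new File(args[0]));
                // Grab every inline <xsd:schema> element (normally children of <wsdl:types>)
                NodeList schemas = wsdl.getElementsByTagNameNS(
                        "http://www.w3.org/2001/XMLSchema", "schema");
                Transformer t = TransformerFactory.newInstance().newTransformer();
                for (int i = 0; i < schemas.getLength(); i++) {
                    // Serialize each schema node into its own .xsd file
                    t.transform(new DOMSource(schemas.item(i)),
                                new StreamResult(new File("extracted-" + i + ".xsd")));
                }
            }
        }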

  • How to get BinarySecurityToken into the wcf soap request

    - by Mr Bell
    I need to sign my soap request to a 3rd party. They provided an example of what the call should look like, and I am trying, rather unsuccessfully, to make this call with WCF. I need to make a WCF soap call where the header contains BinarySecurityToken, Signature, and SecurityTokenReference. I have a certificate for signing, but I can't for the life of me figure out how to make this work. Here is the example they sent me (with some of the values omitted):

        <?xml version="1.0" encoding="UTF-8"?>
        <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
            xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"
            xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"
            xmlns:xsd="http://www.w3.org/2001/XMLSchema"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
          <soapenv:Header>
            <wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
              <wsse:BinarySecurityToken
                  EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary"
                  ValueType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3"
                  wsu:Id="SecurityToken-..omitted.."
                  xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">..omitted..</wsse:BinarySecurityToken>
              <ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
                <ds:SignedInfo>
                  <ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
                  <ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/>
                  <ds:Reference URI="#Body">
                    <ds:Transforms>
                      <ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
                    </ds:Transforms>
                    <ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
                    <ds:DigestValue>..omitted...</ds:DigestValue>
                  </ds:Reference>
                </ds:SignedInfo>
                <ds:SignatureValue>..omitted..</ds:SignatureValue>
                <ds:KeyInfo>
                  <wsse:SecurityTokenReference>
                    <wsse:Reference URI="#SecurityToken-..omitted.."
                        ValueType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-x509-token-profile-1.0#X509v3"/>
                  </wsse:SecurityTokenReference>
                </ds:KeyInfo>
              </ds:Signature>
            </wsse:Security>
          </soapenv:Header>
          <soapenv:Body wsu:Id="Body">
            <in0 xmlns="http://test.3rdParty.com">123</in0>
          </soapenv:Body>
        </soapenv:Envelope>
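
    A hedged starting point, my sketch rather than the third party's guidance: WS-Security signatures with an X.509 BinarySecurityToken are usually produced in WCF by a certificate-based security binding element rather than by hand-building the header. Something along these lines, where every version value is an assumption to be checked against the partner's policy:

        // Sketch: WS-Security message security with an X.509 token (assumptions noted above)
        var security = SecurityBindingElement.CreateMutualCertificateBindingElement(
            MessageSecurityVersion.WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10);
        var binding = new CustomBinding(
            security,
            new TextMessageEncodingBindingElement(MessageVersion.Soap11, System.Text.Encoding.UTF8),
            new HttpTransportBindingElement());
        // The signing certificate is then attached on the client, e.g.:
        // client.ClientCredentials.ClientCertificate.SetCertificate(
        //     StoreLocation.CurrentUser, StoreName.My, X509FindType.FindBySubjectName, "my-cert");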

  • How to troubleshoot a 'System.Management.Automation.CmdletInvocationException'

    - by JamesD
    Does anyone know how best to determine the specific underlying cause of this exception?

    Consider a WCF service that is supposed to use Powershell 2.0 remoting to execute MSBuild on remote machines. In both cases the scripting environments are being called in-process (via C# for Powershell and via Powershell for MSBuild), rather than 'shelling-out' - this was a specific design decision to avoid command-line hell as well as to enable passing actual objects into the Powershell script.

    The Powershell script that calls MSBuild is shown below:

        function Run-MSBuild
        {
            [System.Reflection.Assembly]::LoadWithPartialName("Microsoft.Build.Engine")

            $engine = New-Object Microsoft.Build.BuildEngine.Engine
            $engine.BinPath = "C:\Windows\Microsoft.NET\Framework\v3.5"

            $project = New-Object Microsoft.Build.BuildEngine.Project($engine, "3.5")
            $project.Load("deploy.targets")
            $project.InitialTargets = "DoStuff"

            #
            # Set some initial Properties & Items
            #

            # Optionally setup some loggers (have also tried it without any loggers)
            $consoleLogger = New-Object Microsoft.Build.BuildEngine.ConsoleLogger
            $engine.RegisterLogger($consoleLogger)
            $fileLogger = New-Object Microsoft.Build.BuildEngine.FileLogger
            $fileLogger.Parameters = "verbosity=diagnostic"
            $engine.RegisterLogger($fileLogger)

            # Run the build - this is the line that throws a CmdletInvocationException
            $result = $project.Build()

            $engine.Shutdown()
        }

    When running the above script from a PS command prompt it all works fine. However, as soon as the script is executed from C# it fails with the above exception. The C# code being used to call Powershell is shown below (remoting functionality removed for simplicity's sake):

        // Build the DTO object that will be passed to Powershell
        dto = SetupDTO()

        RunspaceConfiguration runspaceConfig = RunspaceConfiguration.Create();
        using (Runspace runspace = RunspaceFactory.CreateRunspace(runspaceConfig))
        {
            runspace.Open();
            IList errors;
            using (var scriptInvoker = new RunspaceInvoke(runspace))
            {
                // The Powershell script lives in a file that gets compiled as an embedded resource
                TextReader tr = new StreamReader(Assembly.GetExecutingAssembly().GetManifestResourceStream("MyScriptResource"));
                string script = tr.ReadToEnd();

                // Load the script into the Runspace
                scriptInvoker.Invoke(script);

                // Call the function defined in the script, passing the DTO as an input object
                var psResults = scriptInvoker.Invoke("$input | Run-MSBuild", dto, out errors);
            }
        }

    Assuming that the issue was related to MSBuild outputting something that the Powershell runspace can't cope with, I have also tried the following variations to the second .Invoke() call:

        var psResults = scriptInvoker.Invoke("$input | Run-MSBuild | Out-String", dto, out errors);
        var psResults = scriptInvoker.Invoke("$input | Run-MSBuild | Out-Null", dto, out errors);
        var psResults = scriptInvoker.Invoke("Run-MSBuild | Out-String");
        var psResults = scriptInvoker.Invoke("Run-MSBuild | Out-String");
        var psResults = scriptInvoker.Invoke("Run-MSBuild | Out-Null");

    I've also looked at using a custom PSHost (based on this sample: http://blogs.msdn.com/daiken/archive/2007/06/22/hosting-windows-powershell-sample-code.aspx), but during debugging I was unable to see any 'interesting' calls to it being made. Do the great and the good of Stackoverflow have any insight that might save my sanity?
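
    A hedged unwrapping sketch, mine rather than the poster's: CmdletInvocationException is largely a wrapper type, so the actionable detail usually lives in its ErrorRecord and InnerException; logging both tends to name the real failure inside the MSBuild call:

        try
        {
            var psResults = scriptInvoker.Invoke("$input | Run-MSBuild", dto, out errors);
        }
        catch (System.Management.Automation.CmdletInvocationException ex)
        {
            Console.WriteLine(ex.ErrorRecord.FullyQualifiedErrorId);          // pipeline-level id
            Console.WriteLine(ex.ErrorRecord.InvocationInfo.PositionMessage); // where in the script it failed
            Console.WriteLine(ex.InnerException);                             // what the MSBuild call actually threw
        }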

  • Wix XmlFile is keeping SqlDatabase from creating databases

    - by Grandpappy
    I've got a Wix project made up of several fragments. One of those fragments has the database components, another has a component that manipulates xml files. When I include the XmlFile element to manipulate a file, the databases defined by the SqlDatabase do not get created. If I comment out the XmlFile, then the databases do get created.

    Here are the two wix files with fragments that are being used.

    Database:

        <Wix xmlns="http://schemas.microsoft.com/wix/2006/wi"
             xmlns:sql="http://schemas.microsoft.com/wix/SqlExtension">
          <Fragment>
            <DirectoryRef Id="TARGETDIR">
              <Component Id="Database1Creation" Guid="GUID_HERE">
                <sql:SqlDatabase Id="Database1"
                                 Server="[DATABASE_SERVER]"
                                 Database="[DATABASE_NAME]"
                                 CreateOnInstall="yes"
                                 ConfirmOverwrite="no"
                                 DropOnUninstall="yes">
                </sql:SqlDatabase>
              </Component>
            </DirectoryRef>
          </Fragment>
        </Wix>

    Xml Manipulation:

        <Wix xmlns="http://schemas.microsoft.com/wix/2006/wi"
             xmlns:sql="http://schemas.microsoft.com/wix/SqlExtension"
             xmlns:util="http://schemas.microsoft.com/wix/UtilExtension">
          <Fragment>
            <Directory Id="TARGETDIR" Name="SourceDir">
              <Directory Id="ProgramFilesFolder">
                <Directory Id="INSTALLDIR" Name="MyDirectory">
                  <Component Id="ServiceExecutables" Guid="GUID_HERE">
                    <File Id="File1" Name="File1.xml" Source="Source/File1.xml" />
                    <util:XmlFile Id="UpdateFile1"
                                  File="[INSTALLDIR]File1.xml"
                                  Action="setValue"
                                  ElementPath="//SomeContainer/SomeElement"
                                  Value="[SOME_VALUE]" />
                  </Component>
                </Directory>
              </Directory>
            </Directory>
          </Fragment>
        </Wix>

    There are other things that are also installed, but they don't appear to have any influence on the issue (I've removed everything else and tested the install). When looking at the install logs when using XmlFile and when not, they are almost exact copies of each other, except that the SqlDatabase calls would be completely missing, and the XmlFile calls would be in their place.

    Is there a known bug here? Or am I doing something I shouldn't be? This isn't a killer for our app, since I can move the things I'm putting in the xml file into the registry, but I'd rather not do that. I am using Wix 3.5.
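
    One hedged triage step beyond the log comparison above (my suggestion, with deliberately loose search terms because the extensions' exact custom-action names vary): a full verbose log plus a substring search makes it explicit whether the SqlDatabase actions are absent from the sequence or merely re-ordered once XmlFile's actions are scheduled:

        msiexec /i MyInstaller.msi /l*v install.log
        findstr /i "XmlFile SqlDatabase" install.log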

  • PostgreSQL to Data-Warehouse: Best approach for near-real-time ETL / extraction of data

    - by belvoir
    Background: I have a PostgreSQL (v8.3) database that is heavily optimized for OLTP. I need to extract data from it on a semi real-time basis (someone is bound to ask what semi real-time means, and the answer is: as frequently as I reasonably can, but I will be pragmatic; as a benchmark let's say we are hoping for every 15min) and feed it into a data-warehouse.

    How much data? At peak times we are talking approx 80-100k rows per min hitting the OLTP side, off-peak this will drop significantly to 15-20k. The most frequently updated rows are ~64 bytes each, but there are various tables etc so the data is quite diverse and can range up to 4000 bytes per row. The OLTP is active 24x5.5.

    Best Solution? From what I can piece together the most practical solution is as follows:

    1. Create a TRIGGER to write all DML activity to a rotating CSV log file (a sketch of the trigger idea appears below)
    2. Perform whatever transformations are required
    3. Use the native DW data pump tool to efficiently pump the transformed CSV into the DW

    Why this approach?

    - TRIGGERS allow selective tables to be targeted rather than being system wide + output is configurable (i.e. into a CSV) and are relatively easy to write and deploy. SLONY uses a similar approach and overhead is acceptable
    - CSV easy and fast to transform
    - Easy to pump CSV into the DW

    Alternatives considered:

    - Using native logging (http://www.postgresql.org/docs/8.3/static/runtime-config-logging.html). Problem with this is it looked very verbose relative to what I needed and was a little trickier to parse and transform. However it could be faster as I presume there is less overhead compared to a TRIGGER. Certainly it would make the admin easier as it is system wide but again, I don't need some of the tables (some are used for persistent storage of JMS messages which I do not want to log)
    - Querying the data directly via an ETL tool such as Talend and pumping it into the DW ... problem is the OLTP schema would need tweaked to support this and that has many negative side-effects
    - Using a tweaked/hacked SLONY - SLONY does a good job of logging and migrating changes to a slave so the conceptual framework is there but the proposed solution just seems easier and cleaner
    - Using the WAL

    Has anyone done this before? Want to share your thoughts?
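
    To make step 1 concrete, a hedged PL/pgSQL sketch (mine; the orders table and dml_log staging table are assumptions, and the composite-to-text cast used for row_data may need replacing with explicit columns on some versions). Logging into a table and COPYing it out each cycle is often simpler to operate than having the trigger write CSV directly:

        CREATE TABLE dml_log (
            tbl       text        NOT NULL,
            op        text        NOT NULL,
            logged_at timestamptz NOT NULL DEFAULT now(),
            row_data  text        NOT NULL
        );

        CREATE OR REPLACE FUNCTION log_dml() RETURNS trigger AS $$
        BEGIN
            IF TG_OP = 'DELETE' THEN
                INSERT INTO dml_log(tbl, op, row_data)
                VALUES (TG_TABLE_NAME, TG_OP, OLD::text);
                RETURN OLD;
            ELSE
                INSERT INTO dml_log(tbl, op, row_data)
                VALUES (TG_TABLE_NAME, TG_OP, NEW::text);
                RETURN NEW;
            END IF;
        END;
        $$ LANGUAGE plpgsql;

        -- One trigger per table you actually want to capture
        CREATE TRIGGER orders_dml
            AFTER INSERT OR UPDATE OR DELETE ON orders
            FOR EACH ROW EXECUTE PROCEDURE log_dml();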

  • No endpoint mapping found for..., using SpringWS, JaxB Marshaller

    - by Saky
    I get this error:

        No endpoint mapping found for [SaajSoapMessage {http://mycompany/coolservice/specs}ChangePerson]

    Following is my ws config file:

        <bean class="org.springframework.ws.server.endpoint.mapping.PayloadRootAnnotationMethodEndpointMapping">
            <description>An endpoint mapping strategy that looks for @Endpoint and @PayloadRoot annotations.</description>
        </bean>

        <bean class="org.springframework.ws.server.endpoint.adapter.MarshallingMethodEndpointAdapter">
            <description>Enables the MessageDispatchServlet to invoke methods requiring OXM marshalling.</description>
            <constructor-arg ref="marshaller"/>
        </bean>

        <bean id="marshaller" class="org.springframework.oxm.jaxb.Jaxb2Marshaller">
            <property name="contextPaths">
                <list>
                    <value>org.company.xml.persons</value>
                    <value>org.company.xml.person_allextensions</value>
                    <value>generated</value>
                </list>
            </property>
        </bean>

        <bean id="persons" class="com.easy95.springws.wsdl.wsdl11.MultiPrefixWSDL11Definition">
            <property name="schemaCollection" ref="schemaCollection"/>
            <property name="portTypeName" value="persons"/>
            <property name="locationUri" value="/ws/personnelService/"/>
            <property name="targetNamespace" value="http://mycompany/coolservice/specs/definitions"/>
        </bean>

        <bean id="schemaCollection" class="org.springframework.xml.xsd.commons.CommonsXsdSchemaCollection">
            <property name="xsds">
                <list>
                    <value>/DataContract/Person-AllExtensions.xsd</value>
                    <value>/DataContract/Person.xsd</value>
                </list>
            </property>
            <property name="inline" value="true"/>
        </bean>

    I have then the following files:

        public interface MarshallingPersonService {

            public final static String NAMESPACE = "http://mycompany/coolservice/specs";
            public final static String CHANGE_PERSON = "ChangePerson";

            public RespondPersonType changeEquipment(ChangePersonType request);
        }

    and

        @Endpoint
        public class PersonEndPoint implements MarshallingPersonService {

            @PayloadRoot(localPart = CHANGE_PERSON, namespace = NAMESPACE)
            public RespondPersonType changePerson(ChangePersonType request) {
                System.out.println("Received a request, is request null? " +
                        (request == null ? "yes" : "no"));
                return null;
            }
        }

    I am pretty much new to WebServices, and not very comfortable with annotations. I am following a tutorial on setting up jaxb marshaller in springws. I would rather use xml mappings than annotations, although for now I am getting the error message.
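
    A hedged check from outside the post: PayloadRootAnnotationMethodEndpointMapping only inspects beans that are actually registered in the application context, so if nothing instantiates the @Endpoint class, no mapping exists and exactly this error appears. Assuming the endpoint lives in a package like org.company.ws (a made-up name for illustration), an explicit declaration in the same config file would be:

        <bean class="org.company.ws.PersonEndPoint"/>

    (or component-scanning that package, if the Spring context namespace is configured).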

  • Managing database connections in an Android Activity

    - by Daniel Lew
    I have an application with a ListActivity that uses a CursorAdapter as its adapter. The ListActivity opens the database and does the querying for the CursorAdapter, which is all well and good, but I am having issues with figuring out when to close both the Cursor and the SQLiteDatabase.

    The way things are handled right now, if the user finishes the activity, I close the database and the cursor. However, this still ends up with the DalvikVM warning me that I've left a database open - for example, if the user hits the "home" button (leaving the activity in the task's stack), rather than the "back" button. If I close them during pause and then re-query during resume, then I don't get any errors, but then a user cannot return to the list without it requerying (and thus losing the user's place in the list). By this I mean, the user can click on any item in the list and open a new activity based on it, but will often want to hit "back" afterwards and return to the same place on the list. If I requery, then I cannot return the user back to the correct spot.

    What is the proper way to handle this issue? I want the list to remain scrolled properly, but I don't want the VM to keep complaining about unclosed databases.

    Edit: Here's a general outline of how I handle the code at the moment:

        public class MyListActivity extends ListActivity {
            private Cursor mCursor;
            private CursorAdapter mAdapter;

            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                mAdapter = new MyCursorAdapter(this);
                setListAdapter(mAdapter);
            }

            protected void onPause() {
                super.onPause();
                if (isFinishing()) {
                    mCursor.close();
                }
            }

            protected void onDestroy() {
                super.onDestroy();
                mCursor.close();
            }

            private void updateQuery() {
                // If we had a cursor open before, close it.
                if (mCursor != null) {
                    mCursor.close();
                }
                MyDbHelper dbHelper = new MyDbHelper(this);
                SQLiteDatabase db = dbHelper.getReadableDatabase();
                mCursor = db.query(...);
                mAdapter.changeCursor(mCursor);
                db.close();
            }
        }

    updateQuery() can be called multiple times because the user can filter the results via menu items (I left this part out of the code, as the problem still occurs even if the user does no filtering). Again, the issue is that when I hit home I get leak errors. Yet, after going home, I can go back to the app and find my list again - cursor fully intact.
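
    A hedged restructuring sketch (mine; names taken from the code above, and the query arguments stay elided as in the original): keep one helper and one readable database for the activity's whole lifetime, close it only in onDestroy(), and let the activity manage the cursor, so the list position survives home-and-return without opening a fresh database per query:

        private MyDbHelper mDbHelper;
        private SQLiteDatabase mDb;

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            mDbHelper = new MyDbHelper(this);
            mDb = mDbHelper.getReadableDatabase(); // opened once for the activity's lifetime
            mAdapter = new MyCursorAdapter(this);
            setListAdapter(mAdapter);
        }

        private void updateQuery() {
            Cursor c = mDb.query(...);  // same query arguments as in the original
            startManagingCursor(c);     // the activity closes it at the right lifecycle point
            mAdapter.changeCursor(c);   // changeCursor() also releases the previous cursor
        }

        @Override
        protected void onDestroy() {
            super.onDestroy();
            mDb.close(); // single, final close point for the database
        }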

  • Focusable EditText inside ListView

    - by Joe
    I've spent about 6 hours on this so far, and been hitting nothing but roadblocks. The general premise is that there is some row in a ListView (whether it's generated by the adapter, or added as a header view) that contains an EditText widget and a Button. All I want to do is be able to use the jogball/arrows, to navigate the selector to individual items like normal, but when I get to a particular row -- even if I have to explicitly identify the row -- that has a focusable child, I want that child to take focus instead of indicating the position with the selector.

    I've tried many possibilities, and have so far had no luck.

    Layout:

        <ListView
            android:id="@android:id/list"
            android:layout_height="fill_parent"
            android:layout_width="fill_parent" />

    Header view:

        EditText view = new EditText(this);
        listView.addHeaderView(view, null, true);

    Assuming there are other items in the adapter, using the arrow keys will move the selection up/down in the list, as expected; but when getting to the header row, it is also displayed with the selector, and no way to focus into the EditText using the jogball. Note: tapping on the EditText will focus it at that point, however that relies on a touchscreen, which should not be a requirement.

    ListView apparently has two modes in this regard:

    1. setItemsCanFocus(true): selector is never displayed, but the EditText can get focus when using the arrows. Focus search algorithm is hard to predict, and no visual feedback (on any rows: having focusable children or not) on which item is selected, both of which can give the user an unexpected experience.
    2. setItemsCanFocus(false): selector is always drawn in non-touch-mode, and EditText can never get focus -- even if you tap on it. To make matters worse, calling editTextView.requestFocus() returns true, but in fact does not give the EditText focus.

    What I'm envisioning is basically a hybrid of 1 & 2, where rather than the list setting if all items are focusable or not, I want to set focusability for a single item in the list, so that the selector seamlessly transitions from selecting the entire row for non-focusable items, and traversing the focus tree for items that contain focusable children. Any takers?
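
    A hedged sketch of that hybrid (mine, not a confirmed solution): leave the list non-focusable by default, and flip setItemsCanFocus only while the selection sits on the row hosting the EditText; treating header position 0 as that row is an assumption here:

        listView.setOnItemSelectedListener(new AdapterView.OnItemSelectedListener() {
            @Override
            public void onItemSelected(AdapterView<?> parent, View view, int position, long id) {
                boolean editableRow = (position == 0); // assumption: header row holds the EditText
                listView.setItemsCanFocus(editableRow);
                if (editableRow && view != null) {
                    view.requestFocus(); // hand focus to the row's focusable child
                }
            }

            @Override
            public void onNothingSelected(AdapterView<?> parent) {
                listView.setItemsCanFocus(false);
            }
        });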

  • Does COM interop respect .NET AppDomain boundaries for assembly loading?

    - by Xiaofu
    Here's the core problem: I have a .NET application that is using COM interop in a separate AppDomain. The COM stuff seems to be loading assemblies back into the default domain, rather than the AppDomain from which the COM stuff is being called. What I want to know is: is this expected behaviour, or am I doing something wrong to cause these COM related assemblies to be loaded in the wrong AppDomain? Please see a more detailed description of the situation below...

    The application consists of 3 assemblies:

    - the main EXE, the entry point of the application.
    - common.dll, containing just an interface IController (in the IPlugin style)
    - controller.dll, containing a Controller class that implements IController and MarshalByRefObject. This class does all the work and uses COM interop to interact with another application.

    The relevant part of the main EXE looks like this:

        AppDomain controller_domain = AppDomain.CreateDomain("Controller Domain");
        IController c = (IController)controller_domain.CreateInstanceFromAndUnwrap("controller.dll", "MyNamespace.Controller");
        result = c.Run();
        AppDomain.Unload(controller_domain);

    The common.dll only contains these 2 things:

        public enum ControllerRunResult { FatalError, Finished, NonFatalError, NotRun }

        public interface IController
        {
            ControllerRunResult Run();
        }

    And the controller.dll contains this class (which also calls the COM interop stuff):

        public class Controller: IController, MarshalByRefObject

    When first running the application, Assembly.GetAssemblies() looks as expected, with common.dll being loaded in both AppDomains, and controller.dll only being loaded into the controller domain. After calling c.Run() however I see that assemblies related to the COM interop stuff have been loaded into the default AppDomain, and NOT in the AppDomain from which the COM interop is taking place. Why might this be occurring?

    And if you're interested, here's a bit of background: Originally this was a 1 AppDomain application. The COM stuff it interfaces with is a server API which is not stable over long periods of use. When a COMException (with no useful diagnostic information as to its cause) occurs from the COM stuff, the entire application has to be restarted before the COM connection will work again. Simply reconnecting to the COM app server results in immediate COM exceptions again. To cope with this I have tried to move the COM interop stuff into a separate AppDomain so that when the mystery COMExceptions occur I can unload the AppDomain in which it occurs, create a new one and start again, all without having to manually restart the application. That was the theory anyway...
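
    A hedged way to gather evidence (my addition): hook AssemblyLoad in both domains and log which one fires; that distinguishes "the interop assemblies genuinely load into the default domain" from "some controller code is accidentally executing there". The handler below has to be registered by code running inside each domain - for the controller domain, e.g. in the Controller constructor:

        AppDomain.CurrentDomain.AssemblyLoad += (sender, e) =>
            Console.WriteLine("[{0}] loaded {1}",
                AppDomain.CurrentDomain.FriendlyName,
                e.LoadedAssembly.FullName);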

  • Making mercurial subrepositories behave like subversion externals

    - by Emily Dickinson
    Hi guys,

    The FAQ, and hginit.com have been really useful for helping me make the transition from svn to hg. However, when it comes to using Hg's subrepository feature in the manner of subversion's externals, I've tried everything and cannot replicate the nice behavior of svn externals. Here's the simplest example of what I want to do:

    1. Init "lib" repository. This repository is never to be used as a standalone; it's always included by main repositories, as a sub-repository.
    2. Init one or more including repositories. To keep the example simple, I'll "init" a repository called "main".
    3. Have "main" include "lib" as a subrepository.

    Importantly -- AND HERE'S WHAT I CAN'T GET TO WORK: When I modify a file inside of "main/lib", and I push the modification, then that change gets pushed to the "lib" repository -- NOT to a copy inside of "main".

    Command lines speak louder than words. I've tried so many variations on this theme, but here's the gist. If someone can reply, in command lines, I'll be forever grateful!

    1. Init "lib" repository:

        $ cd /home/moi/hgrepos    ## Where I'm storing my hg repositories, on my main server
        $ hg init lib
        $ echo "foo" > lib/lib.txt
        $ hg add lib
        $ hg ci -A -m "Init lib" lib

    2. Init "main" repository, and include "lib" as a subrepos:

        $ cd /home/moi/hgrepos
        $ hg init main
        $ echo "foo" > main/main.txt
        $ hg add main
        $ cd main
        $ hg clone ../lib lib
        $ echo "lib=lib" > .hgsub
        $ hg ci -A -m "Init main" .

    This all works fine, but when I make a clone of the "main" repository, and make local modifications to files in "main/lib", and push them, the changes get pushed to "main/lib", NOT to "lib".

    IN COMMAND-LINE-ESE, THIS IS THE PROBLEM:

        $ cd /home/moi/hg-test
        $ hg clone ssh://[email protected]/hgrepos/lib lib
        $ hg clone ssh://[email protected]/hgrepos/main main
        $ cd main
        $ echo foo >> lib/lib.txt
        $ hg st
        M lib.txt
        $ hg com -m "Modified lib.txt, from inside the main repos" lib.txt
        $ hg push
        pushing to ssh://[email protected]/hgrepos/main/lib

    That last line of output from hg shows the problem. It shows that I've made a modification to a COPY of a file in lib, NOT to a file in the lib repository. If this were working as I'd like it to work, the push would be to hgrepos/lib, NOT to hgrepos/main/lib. I.e., I would see:

        $ hg push
        pushing to ssh://[email protected]/hgrepos/lib

    IF YOU CAN ANSWER THIS IN TERMS OF COMMAND LINES RATHER THAN IN ENGLISH, I WILL BE ETERNALLY GRATEFUL!

    Thank you in advance!

    Emily in Portland
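
    A hedged fix sketch (mine, untested against the exact setup above): the subrepo's push target comes from the path recorded in .hgsub, so the relative mapping lib=lib pins pushes to the nested main/lib clone. Pointing the mapping at the central repository instead should redirect the push:

        $ cd /home/moi/hgrepos/main
        $ echo "lib = ssh://[email protected]/hgrepos/lib" > .hgsub
        $ hg ci -m "Point the lib subrepo at the central lib repository"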

  • Get non-overlapping dates ranges for prices history data

    - by Anonymouse
    Hello,

    Let's assume that I have the following table:

        CREATE TABLE [dbo].[PricesHist](
            [Product] varchar NOT NULL,
            [Price] [float] NOT NULL,
            [StartDate] [datetime] NOT NULL,
            [EndDate] [datetime] NOT NULL
        )

        INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D2C00000000 AS DateTime), CAST(0x00009D2C00000000 AS DateTime))
        INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D2D00000000 AS DateTime), CAST(0x00009D2D00000000 AS DateTime))
        INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 2.5, CAST(0x00009D2E00000000 AS DateTime), CAST(0x00009D2E00000000 AS DateTime))
        INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D3000000000 AS DateTime), CAST(0x00009D3000000000 AS DateTime))
        INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D3100000000 AS DateTime), CAST(0x00009D3100000000 AS DateTime))
        INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D3400000000 AS DateTime), CAST(0x00009D3400000000 AS DateTime))
        INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 2.5, CAST(0x00009D3500000000 AS DateTime), CAST(0x00009D3500000000 AS DateTime))
        INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D3600000000 AS DateTime), CAST(0x00009D3600000000 AS DateTime))
        INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D3700000000 AS DateTime), CAST(0x00009D3700000000 AS DateTime))
        INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D3800000000 AS DateTime), CAST(0x00009D3800000000 AS DateTime))
        INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D3A00000000 AS DateTime), CAST(0x00009D3A00000000 AS DateTime))
        INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D3B00000000 AS DateTime), CAST(0x00009D3B00000000 AS DateTime))
        INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 2.5, CAST(0x00009D3C00000000 AS DateTime), CAST(0x00009D3C00000000 AS DateTime))
        INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D3D00000000 AS DateTime), CAST(0x00009D3D00000000 AS DateTime))
        INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D3E00000000 AS DateTime), CAST(0x00009D3E00000000 AS DateTime))
        INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D3F00000000 AS DateTime), CAST(0x00009D3F00000000 AS DateTime))
        INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D4100000000 AS DateTime), CAST(0x00009D4100000000 AS DateTime))
        INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D4200000000 AS DateTime), CAST(0x00009D4200000000 AS DateTime))
        INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 2.5, CAST(0x00009D4300000000 AS DateTime), CAST(0x00009D4300000000 AS DateTime))
        INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D4400000000 AS DateTime), CAST(0x00009D4400000000 AS DateTime))
        INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D4500000000 AS DateTime), CAST(0x00009D4500000000 AS DateTime))
        INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D4600000000 AS DateTime), CAST(0x00009D4600000000 AS DateTime))
        INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 4.9, CAST(0x00009D4800000000 AS DateTime), CAST(0x00009D4800000000 AS DateTime))
        INSERT [dbo].[PricesHist] ([Product], [Price], [StartDate], [EndDate]) VALUES (N'Apples', 2.5, CAST(0x00009D4A00000000 AS DateTime), CAST(0x00009D4A00000000 AS DateTime))

    As you can see, there are two prices on that month for Apples: 4.90 and 2.50. In order to tidy this table up, I need to get this information as a date range rather than a row per day as it currently is. I can obviously do this with Min and Max aggregates easily, but the ranges overlap and other business code expects non-overlapping ranges. I also tried to achieve this with self joins and row_number(), but without much success...

    Here is what I'm trying to achieve as the output:

        Product | StartDate   | EndDate     | Price
        -------------------------------------------
        Apples  | 01 Mar 2010 | 02 Mar 2010 | 4.90
        Apples  | 03 Mar 2010 | 03 Mar 2010 | 2.50
        Apples  | 05 Mar 2010 | 09 Mar 2010 | 4.90
        Apples  | 10 Mar 2010 | 10 Mar 2010 | 2.50
        Apples  | 11 Mar 2010 | 16 Mar 2010 | 4.90
        Apples  | 17 Mar 2010 | 17 Mar 2010 | 2.50
        Apples  | 18 Mar 2010 | 23 Mar 2010 | 4.90
        Apples  | 24 Mar 2010 | 24 Mar 2010 | 2.50
        Apples  | 25 Mar 2010 | 30 Mar 2010 | 4.90
        Apples  | 31 Mar 2010 | 31 Mar 2010 | 2.50

    What would please be the best approach to get this done? Thanks a lot in advance,
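
    A hedged gaps-and-islands sketch (an addition, not the poster's code): the difference between two ROW_NUMBER sequences is constant within each run of consecutive equal prices, which collapses the daily rows into exactly the kind of non-overlapping ranges shown above:

        SELECT Product,
               MIN(StartDate) AS StartDate,
               MAX(EndDate)   AS EndDate,
               Price
        FROM (SELECT Product, Price, StartDate, EndDate,
                     ROW_NUMBER() OVER (PARTITION BY Product ORDER BY StartDate)
                   - ROW_NUMBER() OVER (PARTITION BY Product, Price ORDER BY StartDate) AS grp
              FROM dbo.PricesHist) AS runs
        GROUP BY Product, Price, grp
        ORDER BY Product, MIN(StartDate);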

  • How can I define a clojure type that implements the servlet interface?

    - by Rob Lachlan
    I'm attempting to use deftype (from the bleeding-edge clojure 1.2 branch) to create a java class that implements the java Servlet interface. I would expect the code below to compile (even though it's not very useful).

        (ns foo
          [:import [javax.servlet Servlet ServletRequest ServletResponse]])

        (deftype servlet []
          javax.servlet.Servlet
          (service [this
                    #^javax.servlet.ServletRequest request
                    #^javax.servlet.ServletResponse response]
            nil))

    But it doesn't compile. The compiler produces the message:

        Mismatched return type: service, expected: void, had: java.lang.Object
          [Thrown class java.lang.IllegalArgumentException]

    Which doesn't make sense to me, because I'm returning nil. So the fact that the return type of the method is void shouldn't be a problem. For instance, for the java.util.Set interface:

        (deftype bar [#^Number n]
          java.util.Set
          (clear [this] nil))

    compiles without issue. So what am I doing wrong with the Servlet interface? To be clear: I know that the typical case is to subclass one of the servlet abstract classes rather than implement this interface directly, but it should still be possible to do this.

    Stack Trace: The stack trace for the (deftype servlet... is:

        Mismatched return type: service, expected: void, had: java.lang.Object
          [Thrown class java.lang.IllegalArgumentException]

        Restarts:
          0: [ABORT] Return to SLIME's top level.

        Backtrace:
          0: clojure.lang.Compiler$NewInstanceMethod.parse(Compiler.java:6461)
          1: clojure.lang.Compiler$NewInstanceExpr.build(Compiler.java:6119)
          2: clojure.lang.Compiler$NewInstanceExpr$DeftypeParser.parse(Compiler.java:6003)
          3: clojure.lang.Compiler.analyzeSeq(Compiler.java:5289)
          4: clojure.lang.Compiler.analyze(Compiler.java:5110)
          5: clojure.lang.Compiler.analyze(Compiler.java:5071)
          6: clojure.lang.Compiler.eval(Compiler.java:5347)
          7: clojure.lang.Compiler.eval(Compiler.java:5334)
          8: clojure.lang.Compiler.eval(Compiler.java:5311)
          9: clojure.core$eval__4350.invoke(core.clj:2364)
         10: swank.commands.basic$eval_region__673.invoke(basic.clj:40)
         11: swank.commands.basic$eval_region__673.invoke(basic.clj:31)
         12: swank.commands.basic$eval__686$listener_eval__687.invoke(basic.clj:54)
         13: clojure.lang.Var.invoke(Var.java:365)
         14: foo$eval__2285.invoke(NO_SOURCE_FILE)
         15: clojure.lang.Compiler.eval(Compiler.java:5343)
         16: clojure.lang.Compiler.eval(Compiler.java:5311)
         17: clojure.core$eval__4350.invoke(core.clj:2364)
         18: swank.core$eval_in_emacs_package__320.invoke(core.clj:59)
         19: swank.core$eval_for_emacs__383.invoke(core.clj:128)
         20: clojure.lang.Var.invoke(Var.java:373)
         21: clojure.lang.AFn.applyToHelper(AFn.java:169)
         22: clojure.lang.Var.applyTo(Var.java:482)
         23: clojure.core$apply__3776.invoke(core.clj:535)
         24: swank.core$eval_from_control__322.invoke(core.clj:66)
         25: swank.core$eval_loop__324.invoke(core.clj:71)
         26: swank.core$spawn_repl_thread__434$fn__464$fn__465.invoke(core.clj:183)
         27: clojure.lang.AFn.applyToHelper(AFn.java:159)
         28: clojure.lang.AFn.applyTo(AFn.java:151)
         29: clojure.core$apply__3776.invoke(core.clj:535)
         30: swank.core$spawn_repl_thread__434$fn__464.doInvoke(core.clj:180)
         31: clojure.lang.RestFn.invoke(RestFn.java:398)
         32: clojure.lang.AFn.run(AFn.java:24)
         33: java.lang.Thread.run(Thread.java:637)
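
    A hedged experiment worth trying (mine, not a confirmed fix): the working Set example carries no hints on the method arguments, and deftype can infer the full signature from the interface when the method name and arity are unambiguous - which service is for Servlet - so dropping the argument hints isolates whether the hints themselves trigger the mismatch:

        ;; Same deftype, hints removed; the signature is inferred from javax.servlet.Servlet
        (deftype servlet []
          javax.servlet.Servlet
          (service [this request response] nil))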

  • Different Azure blob streams when using .Net client vs. REST interface

    - by knightpfhor
    I have encountered an unusual difference in the way that the .Net client for Azure and the direct REST API bring back streams of binary data. If I use CloudBlob.DownloadToStream() vs. getting the response stream from the HTTP response, I get streams with the same length, but different content. Specifically the REST response seems to 0 out a series of bytes.

    I've discovered this issue because I'm trying to use the byte range feature for blobs which is currently not supported in the .Net client (if I'm wrong on this point and someone can point at where I can do this it might make the rest of this question irrelevant).

    If I upload a binary representation of the first 2k unicode characters with this code:

        Public Sub WriteFoo()
            Dim Blob As CloudBlob
            Dim Stream1 As MemoryStream
            Dim Container As CloudBlobContainer
            Dim Builder As StringBuilder
            Dim NextCharacter As String
            Dim Formatter As BinaryFormatter

            Container = CloudStorageAccount.DevelopmentStorageAccount.CreateCloudBlobClient.GetContainerReference("testcontainer")
            Container.CreateIfNotExist()
            Blob = Container.GetBlobReference("Foo")

            Stream1 = New MemoryStream()
            Builder = New Text.StringBuilder()
            For Index As Integer = 1 To 2000
                Select Case Index
                    Case Is <= 9
                        NextCharacter = ChrW(9)
                    Case Is <= 31
                        NextCharacter = Environment.NewLine
                    Case 127
                        NextCharacter = Environment.NewLine
                    Case Else
                        NextCharacter = ChrW(Index)
                End Select
                Builder.Append(NextCharacter)
            Next

            Formatter = New BinaryFormatter()
            Formatter.Serialize(Stream1, Builder.ToString())
            Stream1.Position = 0
            Blob.UploadFromStream(Stream1)
        End Sub

    Then try to access it with the following code:

        Public Sub ReadFoo()
            Dim Blob As CloudBlob
            Dim Request As System.Net.HttpWebRequest
            Dim Response As System.Net.WebResponse
            Dim ResponseSize As Integer
            Dim ResponseBuffer As Byte()
            Dim ResponseStream As Stream
            Dim Stream1 As MemoryStream
            Dim Stream2 As MemoryStream
            Dim Container As CloudBlobContainer
            Dim Byte1 As Integer
            Dim Byte2 As Integer

            Container = CloudStorageAccount.DevelopmentStorageAccount.CreateCloudBlobClient.GetContainerReference("testcontainer")
            Container.CreateIfNotExist()
            Blob = Container.GetBlobReference("Foo")

            Stream1 = New MemoryStream()
            Stream2 = New MemoryStream()
            Blob.DownloadToStream(Stream1)

            Request = DirectCast(System.Net.WebRequest.Create(Blob.Uri), System.Net.HttpWebRequest)
            Request.Headers.Add("x-ms-version", "2009-09-19")
            Request.Headers.Add("x-ms-range", String.Format("bytes={0}-{1}", 0, Integer.MaxValue))
            Blob.Container.ServiceClient.Credentials.SignRequest(Request)
            Response = Request.GetResponse()
            ResponseStream = Response.GetResponseStream()
            ResponseSize = CInt(Response.ContentLength)
            ReDim ResponseBuffer(ResponseSize - 1)
            ResponseStream.Read(ResponseBuffer, 0, ResponseSize)
            Stream2.Write(ResponseBuffer, 0, ResponseSize)

            Stream1.Position = 0
            Stream2.Position = 0
            If Stream1.Length <> Stream2.Length Then
                System.Diagnostics.Debug.WriteLine(String.Format("Streams a different length. 1: {0}. 2: {1}", Stream1.Length, Stream2.Length))
            Else
                While Stream1.Position < Stream1.Length
                    Byte1 = Stream1.ReadByte()
                    Byte2 = Stream2.ReadByte()
                    If Byte1 <> Byte2 Then
                        System.Diagnostics.Debug.WriteLine(String.Format("Streams differ at position {0}, 1: {1}. 2: {2}", Stream1.Position - 1, Byte1, Byte2))
                    End If
                End While
            End If
        End Sub

    Past a certain point all of the data in Stream2 (the data I've retrieved from the REST api) ends up being 0. To make matters even more confusing, when I reverse the order that I put the characters in the string, e.g. For Index As Integer = 2000 To 1 rather than For Index As Integer = 1 To 2000, it all works OK. Any help is much appreciated. My computer is sick of me swearing at it.
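
    A hedged observation from the .NET side, not in the original post: Stream.Read may return fewer bytes than requested, and network response streams frequently do; a single Read call leaves the tail of ResponseBuffer at its zero-initialized default, which matches the "all zeros past a certain point" symptom. The usual shape is a drain loop:

        ' Sketch: read until the buffer is full or the stream ends
        Dim totalRead As Integer = 0
        Dim bytesRead As Integer
        Do
            bytesRead = ResponseStream.Read(ResponseBuffer, totalRead, ResponseSize - totalRead)
            totalRead += bytesRead
        Loop While bytesRead > 0 AndAlso totalRead < ResponseSize
        Stream2.Write(ResponseBuffer, 0, totalRead)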

  • Secondary monitor bug: a problem in WPF or in the graphics driver?

    - by emddudley
    I have discovered a strange bug with my WPF application and I am trying to determine whether it is a problem with WPF or my graphics driver so that I can report it to the appropriate company. I have a Quadro FX 1700 with the latest drivers (197.54) on a Windows XP system, running a .NET 3.5 SP1 application. I have dual monitors, and when I maximize then minimize a child window of the main window on my primary monitor, the child window gets drawn on the secondary monitor as well. It appears in both places.

    I made a sample application (code is below) which induces this behavior:

    1. Start the application and ensure the main window is on your primary monitor.
    2. Double-click the main window. A green child window should appear.
    3. Click the green child window to maximize.
    4. Click the green child window to minimize.

    Can anyone else reproduce this problem? On my system the green child restores, but then it's drawn on both my primary and secondary monitors, rather than just the primary monitor.

    App.xaml:

        <Application xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                     x:Class="DualMonitorBug.App"
                     StartupUri="Shell.xaml" />

    App.xaml.cs:

        using System.Windows;

        namespace DualMonitorBug
        {
            public partial class App : Application { }
        }

    Shell.xaml:

        <Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                x:Class="DualMonitorBug.Shell"
                Title="Shell" Height="480" Width="640"
                MouseDoubleClick="ShowDialog" />

    Shell.xaml.cs:

        using System.Windows;
        using System.Windows.Input;

        namespace DualMonitorBug
        {
            public partial class Shell : Window
            {
                public Shell()
                {
                    InitializeComponent();
                }

                private void ShowDialog(object sender, MouseButtonEventArgs e)
                {
                    DialogWindow dialog = new DialogWindow();
                    dialog.Owner = this;
                    dialog.Show();
                }
            }
        }

    DialogWindow.xaml:

        <Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                x:Class="DualMonitorBug.DialogWindow"
                Title="Dialog Window" Height="240" Width="320"
                AllowsTransparency="True" Background="Green"
                MouseLeftButtonDown="ShowHideDialog"
                WindowStyle="None" />

    DialogWindow.xaml.cs:

        using System.Windows;
        using System.Windows.Input;

        namespace DualMonitorBug
        {
            public partial class DialogWindow : Window
            {
                public DialogWindow()
                {
                    InitializeComponent();
                }

                private void ShowHideDialog(object sender, MouseButtonEventArgs e)
                {
                    this.WindowState = (this.WindowState == WindowState.Normal)
                        ? WindowState.Maximized
                        : WindowState.Normal;
                }
            }
        }

  • Unwanted SDL_QUIT Event on mouseclick.

    - by Anthony Clever
    I'm having a slight problem with my SDL/Opengl code. Specifically, when I try to do something on a mousebuttondown event, the program sends an sdl_quit event to the stack, closing my application. I know this because I can make the program work (sans the ability to quit out of it :| ) by checking for SDL_QUIT during my event loop, and making it do nothing, rather than quitting the application. If anyone could help make my program work, while retaining the ability to, well, close it, it'd be much appreciated.

    Code attached below:

        #include "SDL/SDL.h"
        #include "SDL/SDL_opengl.h"

        void draw_polygon();
        void init();

        int main(int argc, char *argv[])
        {
            SDL_Event Event;
            int quit = 0;
            GLfloat color[] = { 0.0f, 0.0f, 0.0f };

            init();
            glColor3fv(color);
            glOrtho(0.0, 1.0, 0.0, 1.0, -1.0, 1.0);
            draw_polygon();

            while (!quit) {
                while (SDL_PollEvent(&Event)) {
                    switch (Event.type) {
                    case SDL_MOUSEBUTTONDOWN:
                        for (int i = 0; i <= sizeof(color); i++) {
                            color[i] += 0.1f;
                        }
                        glColor3fv(color);
                        draw_polygon();
                    case SDL_KEYDOWN:
                        switch (Event.key.keysym.sym) {
                        case SDLK_ESCAPE:
                            quit = 1;
                        default:
                            break;
                        }
                    default:
                        break;
                    }
                }
            }
            SDL_Quit();
            return 0;
        }

        void draw_polygon()
        {
            glBegin(GL_POLYGON);
            glVertex3f(0.25, 0.25, 0.0);
            glVertex3f(0.75, 0.25, 0.0);
            glVertex3f(0.75, 0.75, 0.0);
            glVertex3f(0.25, 0.75, 0.0);
            glEnd();
            SDL_GL_SwapBuffers();
        }

        void init()
        {
            SDL_Init(SDL_INIT_EVERYTHING);
            SDL_SetVideoMode(640, 480, 32, SDL_OPENGL);
            glClearColor(0.0, 0.0, 0.0, 0.0);
            glMatrixMode(GL_PROJECTION | GL_MODELVIEW);
            glLoadIdentity();
            glClear(GL_COLOR_BUFFER_BIT);
            SDL_WM_SetCaption("OpenGL Test", NULL);
        }

    If it matters in this case, I'm compiling via the included compiler with Visual C++ 2008 express.
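
    Two hedged things to rule out (my reading of the code above, not confirmed by the poster): the SDL_MOUSEBUTTONDOWN case has no break, so control falls through into the SDL_KEYDOWN case and interprets mouse-event bytes as a keysym; and sizeof(color) counts bytes (12 here), not elements (3), so the recolor loop writes past the array - out-of-bounds writes on the stack can corrupt neighboring variables such as quit or Event. A corrected shape of that case:

        case SDL_MOUSEBUTTONDOWN:
            /* iterate elements, not bytes */
            for (int i = 0; i < (int)(sizeof(color) / sizeof(color[0])); i++) {
                color[i] += 0.1f;
            }
            glColor3fv(color);
            draw_polygon();
            break; /* without this, execution falls into the SDL_KEYDOWN case */
        case SDL_KEYDOWN: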

  • Properly clean up excel interop objects revisited: Wrapper objects

    - by chiccodoro
    Hi all,

    - Excel 2007 Hangs When Closing via .NET
    - How to properly clean up Excel interop objects in C#
    - How to properly clean up interop objects in C#

    All of these struggle with the problem that C# does not release the Excel COM objects properly after using them. There are mainly two directions of working around this issue:

    1. Kill the Excel process when Excel is not used anymore.
    2. Take care to assign each COM object used explicitly to a variable and to Marshal.ReleaseComObject all of these.

    Some have stated that 2 is too tedious and there is always some uncertainty whether you forget to stick to this rule at some places in the code. Still 1 seems dirty and dangerous to me, also I could imagine that in an environment with restricted access killing processes is not allowed.

    So I've been thinking about solving 2 by creating another proxy object model which mimics the Excel object model (for me, it would suffice to implement the objects I actually need). The principle would look as follows:

    - Each Excel Interop class has its proxy which wraps an object of that class.
    - The proxy releases the COM object in its destructor.
    - The proxy mimics the interface of the Interop class (maybe by inheriting it).
    - Any methods that usually return another COM object return a proxy instead. The other methods simply delegate the implementation to the inner COM object.

    This is a rough sketch of the code:

        public class Application : Microsoft.Office.Interop.Excel.Application
        {
            private Microsoft.Office.Interop.Excel.Application innerApplication =
                new Microsoft.Office.Interop.Excel.Application();

            ~Application()
            {
                Marshal.ReleaseComObject(innerApplication);
            }

            public Workbooks Workbooks
            {
                get { return new Workbooks(innerApplication.Workbooks); }
            }
        }

        public class Workbooks
        {
            private Microsoft.Office.Interop.Excel.Workbooks innerWorkbooks;

            Workbooks(Microsoft.Office.Interop.Excel.Workbooks innerWorkbooks)
            {
                this.innerWorkbooks = innerWorkbooks;
            }

            ~Workbooks()
            {
                Marshal.ReleaseComObject(innerWorkbooks);
            }
        }

    My questions to you are in particular:

    - Who finds this a bad idea and why?
    - Who finds this a great idea? If so, why hasn't anybody implemented/published such a model yet? Just due to the effort, or am I missing a killing problem with that idea?
    - Is it impossible/bad/dangerous to do the ReleaseComObject in the destructor? (I've only seen proposals to put it in a Dispose() rather than in a destructor - why?)
    - If the approach makes sense, any suggestions to improve it?
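
    On the destructor question, a hedged sketch of why samples favor Dispose() (mine, not a definitive ruling): finalizers run on the GC's schedule and on the finalizer thread, so the release point is nondeterministic; making the wrapper IDisposable gives a deterministic release on the caller's thread and demotes the finalizer to a backstop:

        public sealed class ApplicationWrapper : IDisposable
        {
            private Microsoft.Office.Interop.Excel.Application inner =
                new Microsoft.Office.Interop.Excel.Application();

            public void Dispose()
            {
                if (inner != null)
                {
                    Marshal.ReleaseComObject(inner); // deterministic, on the caller's thread
                    inner = null;
                }
                GC.SuppressFinalize(this);
            }

            ~ApplicationWrapper()
            {
                Dispose(); // backstop only; the GC chooses the timing and thread
            }
        }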

  • Fluid CSS: floating column with max-width and overflow

    - by Ates Goral
    I'm using a fluid layout in the new theme that I'm working on for my blog. I often blog about code and include <pre> blocks within the posts. The float: left column for the content area has a max-width so that the column stops at a certain maximum width and can also be shrunk:

        +----------+   +------+
        | text     |   | text |
        |          |   |      |
        |          |   |      |
        |          |   |      |
        |          |   |      |
        +----------+   +------+
            max        shrunk

    What I want is for the <pre> elements to be wider than the text column so that I can fit 80-character-wrapped code without horizontal scroll bars. But I want the <pre> elements to overflow from the content area, without affecting its fluidity:

        +----------+      +------+
        | text     |      | text |
        |          |      |      |
        +----------+--+   +------+------+
        | code        |   | code        |
        +----------+--+   +------+------+
        |          |      |      |
        +----------+      +------+
            max           shrunk

    But, max-width stops being fluid once I insert the overhanging <pre> in there: the width of the column remains at the specified max-width even when I shrink the browser beyond that width. I've reproduced the issue with this bare-minimum scenario:

        <div style="float: left; max-width: 460px; border: 1px solid red">
            <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit</p>
            <pre style="max-width: 700px; border: 1px solid blue">
        function foo() {
            // Lorem ipsum dolor sit amet, consectetur adipisicing elit
        }
            </pre>
            <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit</p>
        </div>

    I noticed that doing either of the following brings back the fluidity:

    - Remove the <pre> (doh...)
    - Remove the float: left

    The workaround I'm currently using is to insert the <pre> elements into "breaks" in the post column, so that the widths of the post segments and the <pre> segments are managed mutually exclusively:

        +----------+        +------+
        | text     |        | text |
        +----------+        +------+
        +-------------+     +-------------+
        | code        |     | code        |
        +-------------+     +-------------+
        +----------+        +------+
        +----------+        +------+
            max             shrunk

    But this forces me to insert additional closing and opening <div> elements into the post markup which I'd rather keep semantically pristine. Admittedly, I don't have a full grasp of how the box model works with floats with overflowing content, so I don't understand why the combination of float: left on the container and the <pre> inside it cripple the max-width of the container. I'm observing the same problem on Firefox/Chrome/Safari/Opera. IE6 (the crazy one) seems happy all the time. This also doesn't seem dependent on quirks/standards mode.

    Update: I've done further testing to observe that max-width seems to get ignored when the element has a float: left. I glanced at the W3C box model chapter but couldn't immediately see an explicit mention of this behaviour. Any pointers?

    Read the article

  • How do you get Matlab to write the BOM (byte order markers) for UTF-16 text files?

    - by Richard Povinelli
    I am creating UTF-16 text files with Matlab, which I later read in using Java. In Matlab, I open a file called fileName and write to it as follows:

        fid = fopen(fileName, 'w', 'n', 'UTF16-LE');
        fprintf(fid, 'Some stuff.');

    In Java, I can read the text file using the following code:

        FileInputStream fileInputStream = new FileInputStream(fileName);
        Scanner scanner = new Scanner(fileInputStream, "UTF-16LE");
        String s = scanner.nextLine();

    Here is the hex output:

        Offset(h) 00 01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F 10 11 12 13
        00000000  73 00 6F 00 6D 00 65 00 20 00 73 00 74 00 75 00 66 00 66 00  s.o.m.e. .s.t.u.f.f.

    The above approach works fine. But I want to be able to write out the file as UTF-16 with a BOM, to give me more flexibility so that I don't have to worry about big or little endian. In Matlab, I've coded:

        fid = fopen(fileName, 'w', 'n', 'UTF16');
        fprintf(fid, 'Some stuff.');

    In Java, I change the code to:

        FileInputStream fileInputStream = new FileInputStream(fileName);
        Scanner scanner = new Scanner(fileInputStream, "UTF-16");
        String s = scanner.nextLine();

    In this case, the string s is garbled, because Matlab is not writing the BOM. I can get the Java code to work just fine if I add the BOM manually. With the BOM added, the following file works fine:

        Offset(h) 00 01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F 10 11 12 13 14 15
        00000000  FF FE 73 00 6F 00 6D 00 65 00 20 00 73 00 74 00 75 00 66 00 66 00  ÿþs.o.m.e. .s.t.u.f.f.

    How can I get Matlab to write out the BOM? I know I could write the BOM out separately, but I'd rather have Matlab do it automatically.

    Addendum: I selected the answer below from Amro because it exactly solves the question I posed. One key discovery for me was the difference between the Unicode Standard and a UTF (Unicode transformation format); see http://unicode.org/faq/utf_bom.html. The Unicode Standard provides unique identifiers (code points) for characters, while UTFs provide mappings of every code point "to a unique byte sequence." Since all but a handful of the characters I am using are in the first 128 code points, I'm going to switch to using UTF-8, as Romeo suggests. UTF-8 is supported by Matlab and Java, and for my application it will generate smaller text files. (The warning shown below also won't need to be suppressed.) I suppress the Matlab warning

        Warning: The encoding 'UTF-16LE' is not supported.

    with

        warning off MATLAB:iofun:UnsupportedEncoding;
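
    Until a Matlab-side fix is in place, one workaround is to patch the file after the fact. Below is a hedged sketch in Python (any language would do); the file name is hypothetical, and it assumes the file really was written as UTF-16LE without a BOM, as in the first hex dump:

        # Prepend the little-endian BOM (FF FE) to a BOM-less UTF-16LE file.
        BOM_LE = b"\xff\xfe"
        path = "matlab_output.txt"  # hypothetical file name

        with open(path, "rb") as f:
            data = f.read()

        if not data.startswith(BOM_LE):
            with open(path, "wb") as f:
                f.write(BOM_LE + data)  # Java's "UTF-16" charset can now detect endianness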

    Read the article

  • Multi-step Workflows: make Workflow A depend on results of Workflow B and/or Workflow C

    - by Joey
    I have been tasked with creating a Software Installation Approval section for our intranet. When a person requests that a particular piece of software be installed on their workstation, we need to get IT approval and then business approval. Once those are obtained, the software is to be installed. I am using SharePoint Designer to do this.

    I have List A, where the user enters the information on the requested software. Workflow A then creates a task in List B, which is then assigned to the IT approver. Workflow B works on List B on item creation, setting the due dates, titles, and other fields, and then pauses until the due date. The IT approver works with the business side and completes the task. Once the List B task is complete, the item in List A is marked as complete -- I have everything up to this point working fine.

    I want to make this more robust in two ways. As the only real option is to mark the List B task as "completed", which essentially means "approved", we have no way of really denying a request. What I want to add is the option to approve or deny a request through the task on List B. If it is approved, I want the item in List A to continue to show "In Progress" with a custom status of "Approved", and I want to create a new task for the software installation; once the installation task is marked as completed, the item in List A should show "Completed" with a status of "Installed". If it is denied, I want the item in List A to show "Completed" with a status of "Denied".

    The problem is, I'm not even sure where to start making these modifications. Creating and modifying the custom status fields isn't that big of an issue -- I have messed around with this and I'm fairly confident I can do it easily. My main concern is that I know I will need a Workflow C, but I don't know where or how to trigger it to get the results I need. I've managed to get Workflows A and B working fine, but anything beyond this is really pushing the limit of my knowledge. It's probably obvious that I am rather new to SharePoint workflows; I was very much thrust into this position and I am still feeling my way around. Thanks in advance for any help!

    Read the article

  • My neural network gets "stuck" while training. Is this normal?

    - by Vivin Paliath
    I'm training an XOR neural network via back-propagation, using stochastic gradient descent. The weights of the neural network are initialized to random values between -0.5 and 0.5. The neural network successfully trains itself around 80% of the time. However, sometimes it gets "stuck" while backpropagating; by "stuck", I mean that I start seeing a decreasing rate of error correction. For example, during a successful training run the total error decreases rather quickly as the network learns, like so:

        ...
        ...
        Total error for this training set: 0.0010008071327708653
        Total error for this training set: 0.001000750550254843
        Total error for this training set: 0.001000693973929822
        Total error for this training set: 0.0010006374037948094
        Total error for this training set: 0.0010005808398488103
        Total error for this training set: 0.0010005242820908169
        Total error for this training set: 0.0010004677305198344
        Total error for this training set: 0.0010004111851348654
        Total error for this training set: 0.0010003546459349181
        Total error for this training set: 0.0010002981129189812
        Total error for this training set: 0.0010002415860860656
        Total error for this training set: 0.0010001850654351723
        Total error for this training set: 0.001000128550965301
        Total error for this training set: 0.0010000720426754587
        Total error for this training set: 0.0010000155405646494
        Total error for this training set: 9.99959044631871E-4

        Testing trained XOR neural network
        0 XOR 0: 0.023956746649767453
        0 XOR 1: 0.9736079194769579
        1 XOR 0: 0.9735670067093437
        1 XOR 1: 0.045068688874314006

    However, when it gets stuck, the total error is still decreasing, but only at an ever-slower rate:

        ...
        ...
        Total error for this training set: 0.12325486644721295
        Total error for this training set: 0.12325486642503929
        Total error for this training set: 0.12325486640286581
        Total error for this training set: 0.12325486638069229
        Total error for this training set: 0.12325486635851894
        Total error for this training set: 0.12325486633634561
        Total error for this training set: 0.1232548663141723
        Total error for this training set: 0.12325486629199914
        Total error for this training set: 0.12325486626982587
        Total error for this training set: 0.1232548662476525
        Total error for this training set: 0.12325486622547954
        Total error for this training set: 0.12325486620330656
        Total error for this training set: 0.12325486618113349
        Total error for this training set: 0.12325486615896045
        Total error for this training set: 0.12325486613678775
        Total error for this training set: 0.12325486611461482
        Total error for this training set: 0.1232548660924418
        Total error for this training set: 0.12325486607026936
        Total error for this training set: 0.12325486604809655
        Total error for this training set: 0.12325486602592373
        Total error for this training set: 0.12325486600375107
        Total error for this training set: 0.12325486598157878
        Total error for this training set: 0.12325486595940628
        Total error for this training set: 0.1232548659372337
        Total error for this training set: 0.12325486591506139
        Total error for this training set: 0.12325486589288918
        Total error for this training set: 0.12325486587071677
        Total error for this training set: 0.12325486584854453

    While I was reading up on neural networks, I came across a discussion of local minima and global minima, and how neural networks don't really "know" which minimum they are supposed to be heading towards. Is my network getting stuck in a local minimum instead of the global minimum?
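
    For what it's worth, two common remedies for this symptom are random restarts (re-initialize the weights and train again) and adding a momentum term to the weight update, which helps the weights roll through plateaus and shallow local minima. Below is a hedged Python sketch of a momentum update only, not the poster's actual network; the function name and the learning-rate/momentum values are invented for the example:

        import numpy as np

        LEARNING_RATE = 0.5  # illustrative values, not tuned for XOR
        MOMENTUM = 0.9

        def momentum_update(weights, gradient, velocity):
            """One weight update with momentum: the running velocity keeps
            the weights moving through flat regions and shallow minima."""
            velocity = MOMENTUM * velocity - LEARNING_RATE * gradient
            return weights + velocity, velocity

        # Toy usage with a fake 2x2 weight matrix and a pretend gradient:
        w = np.random.uniform(-0.5, 0.5, (2, 2))
        v = np.zeros_like(w)
        g = np.array([[0.1, -0.2], [0.05, 0.0]])  # stand-in for a backprop gradient
        w, v = momentum_update(w, g, v)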

    Read the article

  • Sentiment analysis with NLTK python for sentences using sample data or webservice?

    - by Ke
    I am embarking upon an NLP project for sentiment analysis. I have successfully installed NLTK for Python (it seems like a great piece of software for this). However, I am having trouble understanding how it can be used to accomplish my task. Here is my task:

    1. I start with one long piece of data (let's say several hundred tweets on the subject of the UK election, from their web service).
    2. I would like to break this up into sentences (or info no longer than 100 or so characters) -- I guess I can just do this in Python?
    3. I then search through all the sentences for specific instances within each sentence, e.g. "David Cameron".
    4. I then check each sentence for positive/negative sentiment and count them accordingly.

    NB: I am not really worried too much about accuracy, because my data sets are large; I am also not worried too much about sarcasm.

    Here are the troubles I am having:

    1. All the data sets I can find, e.g. the corpus of movie-review data that comes with NLTK, aren't in web-service format. It looks like this data has had some processing done already. As far as I can see, the processing (by Stanford) was done with WEKA. Is it not possible for NLTK to do all this on its own? Here, all the data sets have already been organised into positive/negative, e.g. the polarity dataset http://www.cs.cornell.edu/People/pabo/movie-review-data/ -- how is this done? (To organise the sentences by sentiment, is it definitely WEKA, or something else?)
    2. I am not sure I understand why WEKA and NLTK would be used together. It seems like they do much the same thing. If I'm processing the data with WEKA first to find sentiment, why would I need NLTK? Is it possible to explain why this might be necessary?

    I have found a few scripts that get somewhat near this task, but all of them use the same pre-processed data. Is it not possible to process this data myself to find sentiment in sentences, rather than using the data samples given in the link?

    Any help is much appreciated and will save me much hair! Cheers, Ke
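
    On the "can NLTK do this on its own" question: for a rough positive/negative count, NLTK's built-in Naive Bayes classifier is enough, with no WEKA involved. Below is a hedged sketch of steps 3 and 4; the training tuples are toy stand-ins that would in practice come from a labelled corpus such as the movie-review data mentioned above:

        import nltk

        def features(sentence):
            # Bag-of-words features: presence of each lowercased token.
            return {word: True for word in sentence.lower().split()}

        # Toy labelled data; replace with sentences from a real labelled corpus.
        train = [
            (features("I love this, brilliant result"), "pos"),
            (features("great speech, very convincing"), "pos"),
            (features("terrible decision, a complete failure"), "neg"),
            (features("what a disaster, awful"), "neg"),
        ]

        classifier = nltk.NaiveBayesClassifier.train(train)

        sentence = "David Cameron gave a brilliant speech"
        if "david cameron" in sentence.lower():             # step 3: filter by instance
            print(classifier.classify(features(sentence)))  # step 4: "pos" or "neg"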

    Read the article

  • Mongodb performance on Windows

    - by Chris
    I've been researching the nosql options available for .NET lately, and MongoDB is emerging as a clear winner in terms of availability and support, so tonight I decided to give it a go. I downloaded version 1.2.4 (Windows x64 binary) from the mongodb site and ran it with the following options:

        C:\mongodb\bin>mkdir data
        C:\mongodb\bin>mongod -dbpath ./data --cpu --quiet

    I then loaded up the latest mongodb-csharp driver from http://github.com/samus/mongodb-csharp and immediately ran the benchmark program. Having heard about how "amazingly fast" MongoDB is, I was rather shocked at the poor benchmark performance:

        Starting Tests
        encode (small).........................................320000 00:00:00.0156250
        encode (medium)........................................80000  00:00:00.0625000
        encode (large).........................................1818   00:00:02.7500000
        decode (small).........................................320000 00:00:00.0156250
        decode (medium)........................................160000 00:00:00.0312500
        decode (large).........................................2370   00:00:02.1093750
        insert (small, no index)...............................2176   00:00:02.2968750
        insert (medium, no index)..............................2269   00:00:02.2031250
        insert (large, no index)...............................778    00:00:06.4218750
        insert (small, indexed)................................2051   00:00:02.4375000
        insert (medium, indexed)...............................2133   00:00:02.3437500
        insert (large, indexed)................................835    00:00:05.9843750
        batch insert (small, no index).........................53333  00:00:00.0937500
        batch insert (medium, no index)........................26666  00:00:00.1875000
        batch insert (large, no index).........................1114   00:00:04.4843750
        find_one (small, no index).............................350    00:00:14.2812500
        find_one (medium, no index)............................204    00:00:24.4687500
        find_one (large, no index).............................135    00:00:37.0156250
        find_one (small, indexed)..............................352    00:00:14.1718750
        find_one (medium, indexed).............................184    00:00:27.0937500
        find_one (large, indexed)..............................128    00:00:38.9062500
        find (small, no index).................................516    00:00:09.6718750
        find (medium, no index)................................316    00:00:15.7812500
        find (large, no index).................................216    00:00:23.0468750
        find (small, indexed)..................................532    00:00:09.3906250
        find (medium, indexed).................................346    00:00:14.4375000
        find (large, indexed)..................................212    00:00:23.5468750
        find range (small, indexed)............................440    00:00:11.3593750
        find range (medium, indexed)...........................294    00:00:16.9531250
        find range (large, indexed)............................199    00:00:25.0625000
        Press any key to continue...

    For starters, I can get better non-batch insert performance from SQL Server Express. What really struck me, however, was the slow performance of the find_nnnn queries. Why is retrieving data from MongoDB so slow? What am I missing?

    Edit: This was all on the local machine -- no network latency or anything. MongoDB's CPU usage ran at about 75% the entire time the test was running.

    Edit 2: I also ran a trace on the benchmark program and confirmed that 50% of the CPU time spent was waiting for MongoDB to return data, so it's not a performance issue with the C# driver.
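
    One pattern in the numbers above: batch inserts run roughly 20x faster than single inserts, which hints that per-operation round trips, not storage, dominate the cost. Here is a hedged sketch for reproducing that comparison yourself, written with Python's pymongo rather than the C# driver; the database and collection names are made up, and it assumes a local mongod on the default port:

        import time
        from pymongo import MongoClient

        coll = MongoClient("mongodb://localhost:27017").bench.docs
        coll.drop()
        docs = [{"i": i, "payload": "x" * 100} for i in range(5000)]

        start = time.perf_counter()
        for d in docs:
            coll.insert_one(dict(d))        # one round trip per document
        single = time.perf_counter() - start

        coll.drop()
        start = time.perf_counter()
        coll.insert_many([dict(d) for d in docs])  # one batched round trip
        batch = time.perf_counter() - start

        print(f"single inserts: {single:.2f}s, batch insert: {batch:.2f}s")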

    Read the article
