Search Results

  • The maximum message size quota for incoming messages (65536) has been exceeded.

    - by DaleyKD
    My WCF Service has an OperationContract that accepts, as a parameter, an array of objects. This can potentially be quite large. After looking for fixes for Bad Request: 400, I found the real reason: the maximum message size. I know this question has been asked before in MANY places. I've tried what everyone says: "Increase the sizes in the client and server config files." I have. It still doesn't work. My Service's web.config: <system.serviceModel> <services> <service name="myService"> <endpoint name="myEndpoint" address="" binding="basicHttpBinding" bindingConfiguration="myBinding" contract="Meisel.WCF.PDFDocs.IPDFDocsService" /> </service> </services> <bindings> <basicHttpBinding> <binding name="myBinding" closeTimeout="00:11:00" openTimeout="00:11:00" receiveTimeout="00:15:00" sendTimeout="00:15:00" maxBufferSize="2147483647" maxReceivedMessageSize="2147483647" maxBufferPoolSize="2147483647" transferMode="Buffered" allowCookies="false" bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard" messageEncoding="Text" textEncoding="utf-8" useDefaultWebProxy="true"> <readerQuotas maxDepth="2147483647" maxStringContentLength="2147483647" maxArrayLength="2147483647" maxBytesPerRead="2147483647" maxNameTableCharCount="2147483647" /> <security mode="None" /> </binding> </basicHttpBinding> </bindings> <behaviors> <serviceBehaviors> <behavior> <serviceMetadata httpGetEnabled="true" /> <serviceDebug includeExceptionDetailInFaults="true" /> <dataContractSerializer maxItemsInObjectGraph="2147483647" /> </behavior> </serviceBehaviors> </behaviors> <serviceHostingEnvironment multipleSiteBindingsEnabled="true" /> </system.serviceModel> My Client's app.config: <system.serviceModel> <bindings> <basicHttpBinding> <binding name="BasicHttpBinding_IPDFDocsService" closeTimeout="00:11:00" openTimeout="00:11:00" receiveTimeout="00:10:00" sendTimeout="00:11:00" allowCookies="false" bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard" maxBufferSize="2147483647" maxBufferPoolSize="2147483647" maxReceivedMessageSize="2147483647" messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered" useDefaultWebProxy="true"> <readerQuotas maxDepth="32" maxStringContentLength="2147483647" maxArrayLength="2147483647" maxBytesPerRead="2147483647" maxNameTableCharCount="2147483647" /> <security mode="None"> <transport clientCredentialType="None" proxyCredentialType="None" realm="" /> <message clientCredentialType="UserName" algorithmSuite="Default" /> </security> </binding> </basicHttpBinding> </bindings> <client> <endpoint address="http://localhost:8451/PDFDocsService.svc" behaviorConfiguration="MoreItemsInObjectGraph" binding="basicHttpBinding" bindingConfiguration="BasicHttpBinding_IPDFDocsService" contract="PDFDocsService.IPDFDocsService" name="BasicHttpBinding_IPDFDocsService" /> </client> <behaviors> <endpointBehaviors> <behavior name="MoreItemsInObjectGraph"> <dataContractSerializer maxItemsInObjectGraph="2147483647" /> </behavior> </endpointBehaviors> </behaviors> </system.serviceModel> What can I possibly be missing or doing wrong? It's as though the service is ignoring what I typed in the maxReceivedBufferSize. Thanks in advance, Kyle UPDATE Here are two other StackOverflow questions where they never received an answer, either: http://stackoverflow.com/questions/2880623/maxreceivedmessagesize-adjusted-but-still-getting-the-quotaexceedexception-with http://stackoverflow.com/questions/2569715/wcf-maxreceivedmessagesize-property-not-taking
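
    A guess worth ruling out before anything else (not a confirmed diagnosis): in the service's web.config the service name is "myService", but WCF matches that attribute against the fully qualified type name of the service implementation. If they differ, the whole <service> block, bindings included, is silently skipped; on .NET 4 (which the multipleSiteBindingsEnabled setting suggests), default endpoints then kick in with the default 65536 quotas, producing exactly this error. A sketch, assuming the implementation class is Meisel.WCF.PDFDocs.PDFDocsService (an invented name):

        <!-- Sketch: service/@name must be the fully-qualified implementation
             type, otherwise this block is ignored and default quotas apply.
             "Meisel.WCF.PDFDocs.PDFDocsService" is an assumed class name. -->
        <services>
          <service name="Meisel.WCF.PDFDocs.PDFDocsService">
            <endpoint name="myEndpoint" address="" binding="basicHttpBinding"
                      bindingConfiguration="myBinding"
                      contract="Meisel.WCF.PDFDocs.IPDFDocsService" />
          </service>
        </services>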

  • How can I anchor a textbox in WPF?

    - by csuporj
    I have a window with tabs. On one of the tabs, I have a layout like below. (In fact it is more complicated, I have 4 text boxes in a row, and I have more rows.) How can I make the 3rd textbox have the width of the label + the width of the text box above, that is, to have them properly aligned ? The problem is that WPF widens the 3rd textbox, when I type text into it. Using hardcoded numbers for the sizes defeats the whole purpose of WPF. I could do that way 10 times faster in Windows Forms than in WPF. Is there a better way, than using a grid for each set of consecutive small text boxes, having to skip the large ones from the grid, because putting them in messes up everything. <Window x:Class="WpfApplication1.MainWindow" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" Title="MainWindow" Height="350" Width="525"> <Window.Resources> <Style TargetType="{x:Type Label}"> <Setter Property="VerticalAlignment" Value="Center"/> </Style> <Style TargetType="{x:Type TextBox}"> <Setter Property="VerticalAlignment" Value="Center"/> <Setter Property="Margin" Value="3"/> </Style> <Style x:Key="SmallTextBox" TargetType="{x:Type TextBox}" BasedOn="{StaticResource {x:Type TextBox}}"> <Setter Property="Width" Value="50"/> </Style> </Window.Resources> <StackPanel VerticalAlignment="Top" HorizontalAlignment="Left" Width="{Binding ElementName=grid,Path=ActualWidth}" Grid.IsSharedSizeScope="True"> <Grid Name="grid" HorizontalAlignment="Left"> <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto" SharedSizeGroup="c1"/> <ColumnDefinition Width="Auto" SharedSizeGroup="c2"/> </Grid.ColumnDefinitions> <Grid.RowDefinitions> <RowDefinition Height="Auto"/> <RowDefinition Height="Auto"/> </Grid.RowDefinitions> <Label Content="Foo:"/> <TextBox Grid.Column="1" Style="{StaticResource SmallTextBox}"/> <Label Grid.Row="1" Content="Foobar:"/> <TextBox Grid.Row="1" Grid.Column="1" Style="{StaticResource SmallTextBox}"/> </Grid> <TextBox Grid.Row="1"/> <Grid Name="grid2" HorizontalAlignment="Left"> <Grid.ColumnDefinitions> <ColumnDefinition Width="Auto" SharedSizeGroup="c1"/> <ColumnDefinition Width="Auto" SharedSizeGroup="c2"/> </Grid.ColumnDefinitions> <Label Content="Bar:"/> <TextBox Grid.Column="1" Style="{StaticResource SmallTextBox}"/> </Grid> </StackPanel> </Window>
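
    One possible sketch, reusing the ActualWidth-binding trick the StackPanel above already uses: give the wide TextBox an explicit Width bound to the first grid's measured width. An explicit (bound) Width stops the size-to-content growth when text is typed, and the box ends up exactly as wide as label plus small text box. (The stray Grid.Row="1" on that standalone TextBox has no effect outside a grid and can be dropped.)

        <!-- Sketch: pin the wide box to the measured width of "grid" above,
             so it aligns with label + small box and cannot auto-grow. -->
        <TextBox HorizontalAlignment="Left"
                 Width="{Binding ElementName=grid, Path=ActualWidth}" />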

  • Is the Cloud ready for an Enterprise Java web application? Seeking JEE hosting advice.

    - by Jakub Holý
    Greetings to all the smart people around here! I'd like to ask whether it is feasible, or a good idea at all, to deploy a Java enterprise web application to a cloud such as Amazon EC2. More exactly, I'm looking for infrastructure options for an application that shall handle a few hundred users with long but neither CPU- nor memory-intensive sessions. I'm considering dedicated servers, virtual private servers (VPSs) and EC2. I've noticed that there is a project called JBoss Cloud, so people are working on enabling such a deployment; on the other hand, it doesn't seem to be mature yet, and I'm not sure that the cloud is ready for this kind of application, which differs from typical cloud-based applications like Twitter. Would you recommend deploying it to the cloud? What are the pros and cons? The application is a Java EE 5 web application whose main function is to enable users to compose their own customized Product by combining the available Parts. It uses stateless and stateful session beans and JPA for persistence of entities to an RDBMS, and fetches information about Parts from the company's inventory system via a web service. Aside from external users, it's also used by a few internal ones, who are authenticated against the company's LDAP. The application should handle around 300-400 concurrent users building their product, and should be reasonably scalable and available, though these qualities are only of medium importance at this stage. I've proposed an architecture consisting of a firewall (FW) and a load balancer supporting sticky sessions and https (in the cloud this would be replaced with EC2's Elastic Load Balancing service and a FW on the app servers; in a physical architecture the load balancer would be hardware), then two physical clustered application servers combined with web servers (so that if one fails, a user doesn't lose his/her long-built product), and finally a database server. The DB server would need a slave backup instance that can replace the master instance if it fails. This should provide reasonable availability and fault tolerance, and good scalability as long as a single RDBMS can keep up with the load, which should be OK for quite a while, because most of the operations are done in memory using a stateful bean and only occasionally stored in or retrieved from the DB, and the amount of data is low too. A problematic part could be the dependency on the remote inventory system web service, but with good caching of its outputs in the application it should be OK too. Unfortunately I have only a vague idea of the system resources (memory size, number and speed of CPUs/cores) that such an "average Java EE application" for a few hundred users needs. My rough and mostly unfounded estimate, based on actual Amazon offerings, is that 1.7 GB of RAM and a single 2-core "modern CPU" with a speed around 2.5 GHz (the High-CPU Medium Instance) should be sufficient for either of the two application servers (since we can handle higher load by provisioning more of them). Alternatively I would consider using the Large Instance (64-bit, 7.5 GB RAM, 2 cores at 1 GHz). So my question is whether such a deployment to the cloud is technically and financially feasible, or whether dedicated/VPS servers would be a better option, and whether there are real-world experiences with something similar. Thank you very much!
/Jakub Holy PS: I've found the "JBoss EAP in a Cloud" case study, which shows that it is possible to deploy a real-world Java EE application to the EC2 cloud, but unfortunately there are no details regarding topology, instance types, or anything :-(

  • ASP.NET MVC 2 Model encapsulated within ViewModel Validation

    - by Program.X
    I am trying to get validation to work in ASP.NET MVC 2, but without much success. I have a complex class containing a large number of fields. (Don't ask - this is one of those real-world situations best practices can't touch.) This would normally be my Model and is a LINQ-to-SQL generated class. Because this is generated code, I have created a MetaData class as per http://davidhayden.com/blog/dave/archive/2009/08/10/AspNetMvc20BuddyClassesMetadataType.aspx. public class ConsultantRegistrationMetadata { [DisplayName("Title")] [Required(ErrorMessage = "Title is required")] [StringLength(10, ErrorMessage = "Title cannot contain more than 10 characters")] string Title { get; set; } [Required(ErrorMessage = "Forename(s) is required")] [StringLength(128, ErrorMessage = "Forename(s) cannot contain more than 128 characters")] [DisplayName("Forename(s)")] string Forenames { get; set; } // ... I've attached this to the partial class of my generated class: [MetadataType(typeof(ConsultantRegistrationMetadata))] public partial class ConsultantRegistration { // ... Because my form is complex, it has a number of dependencies, such as SelectLists, etc., which I have encapsulated in a ViewModel pattern - and included the ConsultantRegistration model as a property: public class ConsultantRegistrationFormViewModel { public Data.ConsultantRegistration ConsultantRegistration { get; private set; } public SelectList Titles { get; private set; } public SelectList Countries { get; private set; } // ... So it is essentially ViewModel = Model. My View then has: <p> <%: Html.LabelFor(model => model.ConsultantRegistration.Title) %> <%: Html.DropDownListFor(model => model.ConsultantRegistration.Title, Model.Titles,"(select a Title)") %> <%: Html.ValidationMessage("Title","*") %> </p> <p> <%: Html.LabelFor(model => model.ConsultantRegistration.Forenames) %> <%: Html.TextBoxFor(model => model.ConsultantRegistration.Forenames) %> <%: Html.ValidationMessageFor(model=>model.ConsultantRegistration.Forenames) %> </p> The problem is, the validation attributes on the metadata class are having no effect. I tried doing it via an Interface, but that also had no effect. I'm beginning to think that the reason is that I am encapsulating my model within a ViewModel. My Controller (Create action) is as follows: [HttpPost] public ActionResult Create(Data.ConsultantRegistration consultantRegistration) { if (ModelState.IsValid) // this is always true - which is wrong!! { try { consultantRegistration = ConsultantRegistrationRepository.SaveConsultantRegistration(consultantRegistration); return RedirectToAction("Edit", new { id = consultantRegistration.ID, sectionIndex = 2 }); } catch (Exception ex) { ModelState.AddModelError("CreateException",ex); } } return View(new ConsultantRegistrationFormViewModel(consultantRegistration)); } As outlined in the comment, the ModelState.IsValid property always returns true, despite fields with the Validation annotations not being valid (Forenames being a key example). Am I missing something obvious, considering I am an MVC newbie? I'm after the mechanism demoed by Jon Galloway at http://www.asp.net/learn/mvc-videos/video-10082.aspx. (I am aware it is similar to http://stackoverflow.com/questions/1260562/asp-net-mvc-model-viewmodel-validation but that post seems to talk about xVal. I have no idea what that is, and suspect it is for MVC 1.)
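
    One detail worth checking before blaming the ViewModel wrapping (a guess from the snippet, not a confirmed diagnosis): the buddy-class properties above have no access modifier, so they are private, and the metadata provider only pairs public members of the buddy class with the generated properties. A minimal sketch of the corrected metadata class:

        // Sketch: buddy-class members must be public for MVC 2's metadata
        // provider to see them and apply the validation attributes.
        public class ConsultantRegistrationMetadata
        {
            [DisplayName("Title")]
            [Required(ErrorMessage = "Title is required")]
            [StringLength(10, ErrorMessage = "Title cannot contain more than 10 characters")]
            public string Title { get; set; }   // note: public

            [DisplayName("Forename(s)")]
            [Required(ErrorMessage = "Forename(s) is required")]
            [StringLength(128, ErrorMessage = "Forename(s) cannot contain more than 128 characters")]
            public string Forenames { get; set; }
        }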

  • Panning Image Using Javascript in ASP.Net Page Overflows Div

    - by Bob Mc
    I have an ASP.Net page that contains a <div> with an <img> tag within. The image is large so the <div> is set with a size of 500x500 pixels with overflow set to scroll. I'm attempting to add image panning using a click and drag on the image. However, every time I begin dragging the image escapes the boundary of the element and consumes the entire page. Here is example code that illustrates the problem: <%@ Page Language="VB" AutoEventWireup="false" CodeFile="Default.aspx.vb" Inherits="_Default" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head runat="server"> <title></title> </head> <body> <form id="form1" runat="server"> <div> <div id="divInnerDiv" style="overflow:scroll; width:500px; height:500px; cursor:move;"> <img id="imgTheImage" src="Page0001.jpg" border="0" /> </div> <script type="text/javascript"> document.onmousemove = mouseMove; document.onmouseup = mouseUp; var dragObject = null; var mouseOffset = null; function mouseCoords(ev){ if(ev.pageX || ev.pageY){ return {x:ev.pageX, y:ev.pageY}; } return { x:ev.clientX + document.body.scrollLeft - document.body.clientLeft, y:ev.clientY + document.body.scrollTop - document.body.clientTop }; } function getMouseOffset(target, ev){ ev = ev || window.event; var docPos = getPosition(target); var mousePos = mouseCoords(ev); return {x:mousePos.x - docPos.x, y:mousePos.y - docPos.y}; } function getPosition(e){ var left = 0; var top = 0; while (e.offsetParent){ left += e.offsetLeft; top += e.offsetTop; e = e.offsetParent; } left += e.offsetLeft; top += e.offsetTop; return {x:left, y:top}; } function mouseMove(ev) { ev = ev || window.event; var mousePos = mouseCoords(ev); if(dragObject){ dragObject.style.position = 'absolute'; dragObject.style.top = mousePos.y - mouseOffset.y; dragObject.style.left = mousePos.x - mouseOffset.x; return false; } } function mouseUp(){ dragObject = null; } function makeDraggable(item){ if(!item) return; item.onmousedown = function(ev){ dragObject = this; mouseOffset = getMouseOffset(this, ev); return false; } } makeDraggable(document.getElementById("imgTheImage")); </script> </div> </form> </body> </html> Also note that this HTML works fine in a non-ASP.Net page. Is there something in the ASP.Net Javascript that is causing this issue? Does anyone have a suggestion for panning a JPEG within a <div> block that will work in ASP.Net? I have tried using the jQueryUI library but the result is the same.
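
    A sketch of an alternative that avoids the runaway element entirely: since the <div> already has overflow:scroll, pan by adjusting its scrollLeft/scrollTop instead of absolutely positioning the <img> (the absolute positioning is what pops it out of the container). Only the two element ids from the markup above are assumed:

        // Sketch: pan the overflow:scroll div by its scroll offsets instead
        // of repositioning the image itself.
        var panel = document.getElementById("divInnerDiv");
        var dragging = false, lastX = 0, lastY = 0;

        document.getElementById("imgTheImage").onmousedown = function (ev) {
            ev = ev || window.event;
            dragging = true;
            lastX = ev.clientX; lastY = ev.clientY;
            return false; // suppress the browser's native image drag
        };
        document.onmousemove = function (ev) {
            ev = ev || window.event;
            if (!dragging) return;
            panel.scrollLeft -= ev.clientX - lastX; // drag right => pan left
            panel.scrollTop  -= ev.clientY - lastY;
            lastX = ev.clientX; lastY = ev.clientY;
            return false;
        };
        document.onmouseup = function () { dragging = false; };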

  • 3D game engine for networked world simulation / AI sandbox

    - by Martin
    More than 5 years ago I was playing with DirectSound and Direct3D and found it really exciting, although it took a lot of time to get good results with C++. I was a college student then. Now I have mostly enterprise development experience in C# and PHP, and I do it for a living. There is really no chance to earn money with serious game development in our country. Each day, more and more, I find that I miss something. So I decided to spend an hour or so each day programming for fun. My idea is to build a world simulation. I would like to begin with something simple: some human-like creatures that live their lives, like The Sims 3 but much simpler; just basic needs, basic animations, minimum graphic assets. I guess it won't be a city but just a large house, for a start. The idea is to have some kind of server application which stores the world data in a MySQL database, and some client applications: body-less AI bots which simulate movement and some interactions with the world and each other. But it wouldn't be fun without 3D. So there are also 3D clients: I can enter that virtual world and see the AI bots living. When a bot enters the visible area, it becomes material: it loads a mesh and animations, so I can see it. When I leave, the bots lose their 3D mesh bodies again, but their virtual life still continues. With time I hope to make it an expandable, scriptable sandbox to experiment with various AI algorithms and so on. But I don't intend to create a full-blown MMORPG :D I have looked at many possible things I would need (free and open source), and now I have to make a choice:
    OGRE3D + enet (or RakNet). Good old C++. But won't it slow me down so much that I won't have fun any more?
    CrystalSpace. Formally not a game engine, but very close to one. C++ again.
    MOgre (an OGRE3D wrapper for .NET) + lidgren (a networking library which is already used in some gaming projects). Good; I like C#, it is good for fast programming and can also be used for scripting.
    XNA seems to be just a framework, not an engine, so I really have doubts whether I should even look at XNA Game Studio :(
    Panda3D, a full game engine with positive feedback. I really like the idea of having the whole toolset in one package; it has good reviews as a beginner-friendly engine... if you know Python. On the C++ side, Panda3D has almost non-existent documentation. I have zero experience with Python, but I've heard it is easy to learn. And if it is fun and challenging, then I guess I would benefit from experience in one more programming language.
    Which of those would you suggest, not because of advanced features or good platform support, but mostly for fun, easy workflow and expandability, and so I can create and integrate all the components I need: the server with the database, the AI bots and a 3D client application?

  • SVN: Release branch headaches, how to merge in website revisions as and when cleared to go live?

    - by Pete Duncanson
    I need a sanity check here if we can; any ideas on correcting/changing the following are very welcome! We've been getting ourselves in knots of late with our SVN and are trying to correct it by putting a Trunk/Release system in place. We have a large website that we develop on, and we store it all in SVN. Here's what we had in mind: We have trunk and a release branch. All work gets checked into Trunk. When a feature is deemed ready for the next release it is merged into a Release branch. We only have one release branch and just tag "Latest" when we do a push to live. We hope to be able to get all the files changed from Latest to Head to give us a zip that we can upload (any ideas on an easy way to do this via scripting? One possible command is sketched at the end of this post). So we set all this up and were very happy with ourselves. Except it's not working, and here's why. We work on lots of different features/fixes/problems at once and they don't all get nicely checked in feature-complete (but always working, at least). Then sometimes you have to wait for Clients to sign off. As a result you end up with revisions which are "ready for live" being scattered among ones which are "still being worked on" in trunk. That means the completed revisions are not getting merged in sequentially but out of order. I thought SVN could handle this, clever little thing that it is, but apparently not. Here's an example:
    Pete changes some CSS to make a new button look pretty (Revision 1).
    Dave adds some CSS to the bottom of the same CSS file as Pete's for a new feature (Revision 2).
    Dave's mod gets the nod, so he merges it into Release and commits it with a log message mentioning the revision number and bug-tracking id.
    Pete adds more buttons to finish this mod; no CSS changes here though (Revision 3).
    Pete then merges his mods (Revisions 1 and 3) into the Head of Release (which has Dave's merge in it), but this overwrites Dave's CSS additions, which now disappear completely.
    This leads to the site being broken and the Release branch being pretty much useless. So we tried some other ideas, like reverting Release back to "Latest" and then just merging in Revisions 1, 2 and 3 in order. This worked fine until we had Revision 4, which was not ready for live, and Revision 5, which was. Suddenly we are getting ourselves in knots again with exactly the same problem! Ok, so take three. Revert to Latest, merge in Revision 5, then do an update back to Head. Tree conflicts galore! So that's a no-no. I cracked in the end and built it all up manually, but it's not something I want to do regularly; ideally I want to script our deployment, but I can't while Release is in such a mess. HELP! What the heck are we doing wrong? I can't seem to find any solutions to this problem of wanting different, non-sequential Revisions in Release. If it's not possible, that's fine, but how the heck are we meant to get stuff live easily? We can't branch for every single change; the site takes 30+ minutes to check out, it would take too long. Side note: we are using TortoiseSVN, so can we keep command-line examples to a minimum in any answers? Latest version of TSVN and SVN version 1.6, so we have the funky merge tracking etc. EDIT: An excellent blog post which deals with the dev/release cycle (it uses Git but is still relevant); I thought everyone would like to read it if they found this question interesting:
(http://nvie.com/git-model) EDIT 2: I wrote a blog post on how to show which branch you are working on in your website, which others have asked me about (http://www.offroadcode.com/2010/5/14/which-svn-branch-are-you-working-on.aspx). Hope that helps. In the meantime we are looking at Kiln and hoping to make the switch next month (gulp!)
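
    The scripting aside above, sketched as a single one-off command (the repository URLs are invented): svn diff --summarize prints the list of paths that differ between the Latest tag and the head of the release branch, which is exactly the file list the upload zip needs.

        svn diff --summarize http://svnserver/repo/tags/Latest http://svnserver/repo/branches/release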

  • Search engine solution for Django that actually works?

    - by prometheus
    The story so far: Decided to go with Xapian as search backend because it has all search-engine features I was looking for, knows about Unicode, stemming, has few dependencies and requires no bloated app-server installation on top of it. Tried Django and Haystack (plus xapian-haystack, the backend glue code to tie Haystack to Xapian) because it was advertised on quite some blogs as "working". Did not work. Neither django-haystack nor the xapian-haystack project provide a version combination that actually works together. MASTER from both projects yields an error from Xapian, so it's not stable at all. Haystack 1.0.1 and xapian-haystack 1.0.x/1.1.0 are not API-compatible. Plus, in a minimally working installation of Haystack 1.0.1 and xapian-haystack MASTER, any complex query yields zero results due to errors in either django-haystack or xapian-haystack (I double-verified this), maybe because the unit-tests actually test very simple cases, and no edge-cases at all. Tried Djapian. The source-code is riddled with spelling errors (mind you, in variable names, not comments), documentation is also riddled with ambiguities and outdated information that will never lead to a working installation. Not surprisingly, users rarely ask for features but how to get it working in the first place. Next on the plate: exploring Solr (installing a Java environment plus Tomcat gives me headaches, the machine is RAM- and CPU-constrained), or Lucene (slightly less headaches, but still). Before I proceed spending more time with a solution that might or might not work as advertised, I'd like to know: Did anyone ever get an actual, real-world search solution working in Django? I'm serious. I find it really frustrating reading about "large problems mostly solved", and then realizing that you will never get a working installation from the source-code because, actually, all bloggers dealing with those "mostly solved problems" never went past basic installation and copy-pasting the official tutorials. So here are the requirements: must be able to search for 10-100 terms in one query must handle + (term must be present) and - (term must not be present), AND/OR must handle arbitrary grouping (i.e. parentheses around AND/OR) must allow for Django-ORM filtering before or after fulltext-search (i.e. pre-/post-processing of results with the full set of filters that Django knows about) alternatively, there must be a facility to bulk-fetch the result set and transform it into a QuerySet should be light on the machine, so preferably no humongous JVM and Java-based app-server installation Is there anything out there that does this? I'm not interested in anecdotal evidence, or references to some blog posts that claim it should be working. I'd like to hear from someone who actually has a fully-functional setup working in the real world, under real conditions, with real queries. EDIT: Let me repeat again that I'm not so much interested in anecdotal evidence that someone, somewhere has a somewhat running installation working with unspecified properties. I already went there, I read all the blog posts, mailing lists, I contacted the authors, but when it came to actual implementation of real-world scenarios, nothing ever worked as advertised. 
Also, and a user below brought that point up as well, considering the TCO of any project, I'm definitely not interested in hearing that someone, somewhere was able to pull it off once a vendor parachuted in an unknown number of specialists to monkey-patch the whole installation with specific domain-knowledge that's documented nowhere. So, please, if you claim you have a working installation that actually satisfies minimum requirements for a full-fledged search (see requirements above), please provide the following so that we can all benefit from a search solution for Django that actually solves the problem: exact Linux distribution, release version, exact release version of Haystack (or equivalent) and release version of search backend, exact release version of the search engine publicly (!) available documentation how to set up all components exactly in the way that your installation was set up such that the minimal requirements above are met. Thank you.
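
    For what it's worth, the requirements list above maps fairly directly onto Xapian's own Python bindings, with the glue layers left out entirely. A sketch under stated assumptions (the index path, the model name, and the convention that each document stores its Django pk in value slot 0 are all invented):

        # Sketch: query Xapian directly and post-filter with the Django ORM.
        import xapian
        from myapp.models import Article  # hypothetical model

        def search(querystring, offset=0, limit=50):
            db = xapian.Database("/var/lib/xapian/articles")  # assumed path
            qp = xapian.QueryParser()
            qp.set_stemmer(xapian.Stem("english"))
            qp.set_stemming_strategy(xapian.QueryParser.STEM_SOME)
            # FLAG_BOOLEAN: AND/OR and parentheses; FLAG_LOVEHATE: +term/-term
            flags = (xapian.QueryParser.FLAG_BOOLEAN
                     | xapian.QueryParser.FLAG_LOVEHATE
                     | xapian.QueryParser.FLAG_PHRASE)
            query = qp.parse_query(querystring, flags)
            enquire = xapian.Enquire(db)
            enquire.set_query(query)
            mset = enquire.get_mset(offset, limit)
            # Back to a QuerySet, so the usual ORM filters apply afterwards.
            pks = [match.document.get_value(0) for match in mset]
            return Article.objects.filter(pk__in=pks)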

  • Use JAXB unmarshalling in Weblogic Server

    - by Leo
    Specifications: - Server: Weblogic 9.2, fixed by the customer. - Webservices defined by wsdl and xsd files fixed by the customer; no modifications allowed. Hi, in this project we need to develop a mail system. This must share common work with the webservice. We created a Bean that receives an auto-generated class from a non-root xsd element (not wsdl); this bean does the common work. The mail system receives an xml with elements defined in the xsd file, and we need to copy this element info into the wsdlc-generated classes. With these objects we can use the common bean. It is not possible to redirect the mail request to the webservice. We've looked for a way to do this with WL9.2 resources, but we haven't found anything. At the moment we've tried to use JAXB for this unmarshalling: JAXBContext c = JAXBContext.newInstance(new Class[]{WasteDCSType.class}); Unmarshaller u = c.createUnmarshaller(); WasteDCSType w = u.unmarshal(waste, WasteDCSType.class).getValue(); The waste variable is a DOM Element object. It isn't the root element, because the root isn't included in the XSD. First we needed to add a no-arg constructor to some autogenerated classes. No problem; we solved this and finally unmarshalled the xml without exceptions. But we had problems with the attributes. The unmarshalling didn't set the attributes: none of them, in any class; not simple attributes, not large or short enumeration attributes. There was no problem with xml elements of any type. We can't create the unmarshaller from a "context string" (the package name), because no ObjectFactory has been created by wsdlc. If we set the schema, no element descriptions are found and unmarshalling crashes. This is the build content: <taskdef name="jwsc" classname="weblogic.wsee.tools.anttasks.JwscTask" /> <taskdef name="wsdlc" classname="weblogic.wsee.tools.anttasks.WsdlcTask"/> <target name="generate-from-wsdl"> <wsdlc srcWsdl="${src.dir}/wsdls/e3s-environmentalMasterData.wsdl" destJwsDir="${src.dir}/webservices" destImplDir="${src.dir}/webservices" packageName="org.arc.eterws.generated" /> <wsdlc srcWsdl="${src.dir}/wsdls/e3s-waste.wsdl" destJwsDir="${src.dir}/webservices" destImplDir="${src.dir}/webservices" packageName="org.arc.eterws.generated" /> </target> <target name="webservices" description=""> <jwsc srcdir="${src.dir}/webservices" destdir="${dest.dir}" classpathref="wspath"> <module contextPath="E3S" name="webservices"> <jws file="org/arc/eterws/impl/IE3SEnvironmentalMasterDataImpl.java" compiledWsdl="${src.dir}/webservices/e3s-environmentalMasterData_wsdl.jar"/> <jws file="org/arc/eterws/impl/Ie3SWasteImpl.java" compiledWsdl="${src.dir}/webservices/e3s-waste_wsdl.jar"/> <descriptor file="${src.dir}/webservices/META-INF/web.xml"/> </module> </jwsc> </target> My questions are: How does Weblogic "unmarshall" the xml with JAX-RPC, and can we do the same with an xsd element? If yes, how can we do this? If not, is there any simple solution to this problem? If not, must we use XMLBeans, or regenerate the classes from the XSD with JAXB? What is the best solution? NOTE: There is not one single xsd but, in fact, a complex xsd structure.
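
    One way to see what the generated classes actually expect (a debugging sketch, not a fix; the setter name and QName are assumptions): populate an instance by hand, marshal it back out, and diff the result against the incoming mail xml. Mismatches in attribute names or namespaces, the usual reason JAXB silently leaves attributes unset, become visible immediately.

        import javax.xml.bind.JAXBContext;
        import javax.xml.bind.JAXBElement;
        import javax.xml.bind.Marshaller;
        import javax.xml.namespace.QName;

        // Round-trip probe: marshal a hand-filled WasteDCSType and compare
        // the output with the xml the mail system receives.
        WasteDCSType probe = new WasteDCSType();
        probe.setSomeAttribute("X"); // hypothetical setter for one attribute
        JAXBContext ctx = JAXBContext.newInstance(WasteDCSType.class);
        Marshaller m = ctx.createMarshaller();
        m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
        // Wrap manually, since the type has no root-element declaration:
        m.marshal(new JAXBElement<WasteDCSType>(
                new QName("urn:example:e3s", "waste"), // assumed QName
                WasteDCSType.class, probe),
            System.out);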

  • Android: OutOfMemoryError while uploading video - how best to chunk?

    - by AP257
    Hi all, I have the same problem as described here, but I will supply a few more details. While trying to upload a video in Android, I'm reading it into memory, and if the video is large I get an OutOfMemoryError. Here's my code: // get bytestream to upload videoByteArray = getBytesFromFile(cR, fileUriString); public static byte[] getBytesFromFile(ContentResolver cR, String fileUriString) throws IOException { Uri tempuri = Uri.parse(fileUriString); InputStream is = cR.openInputStream(tempuri); byte[] b3 = readBytes(is); is.close(); return b3; } public static byte[] readBytes(InputStream inputStream) throws IOException { ByteArrayOutputStream byteBuffer = new ByteArrayOutputStream(); // this is storage overwritten on each iteration with bytes int bufferSize = 1024; byte[] buffer = new byte[bufferSize]; int len = 0; while ((len = inputStream.read(buffer)) != -1) { byteBuffer.write(buffer, 0, len); } return byteBuffer.toByteArray(); } And here's the traceback (the error is thrown on the byteBuffer.write(buffer, 0, len) line): 04-08 11:56:20.456: ERROR/dalvikvm-heap(6088): Out of memory on a 16775184-byte allocation. 04-08 11:56:20.456: INFO/dalvikvm(6088): "IntentService[UploadService]" prio=5 tid=17 RUNNABLE 04-08 11:56:20.456: INFO/dalvikvm(6088): | group="main" sCount=0 dsCount=0 s=N obj=0x449a3cf0 self=0x38d410 04-08 11:56:20.456: INFO/dalvikvm(6088): | sysTid=6119 nice=0 sched=0/0 cgrp=default handle=4010416 04-08 11:56:20.456: INFO/dalvikvm(6088): at java.io.ByteArrayOutputStream.expand(ByteArrayOutputStream.java:~93) 04-08 11:56:20.456: INFO/dalvikvm(6088): at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:218) 04-08 11:56:20.456: INFO/dalvikvm(6088): at com.android.election2010.UploadService.readBytes(UploadService.java:199) 04-08 11:56:20.456: INFO/dalvikvm(6088): at com.android.election2010.UploadService.getBytesFromFile(UploadService.java:182) 04-08 11:56:20.456: INFO/dalvikvm(6088): at com.android.election2010.UploadService.doUploadinBackground(UploadService.java:118) 04-08 11:56:20.456: INFO/dalvikvm(6088): at com.android.election2010.UploadService.onHandleIntent(UploadService.java:85) 04-08 11:56:20.456: INFO/dalvikvm(6088): at android.app.IntentService$ServiceHandler.handleMessage(IntentService.java:30) 04-08 11:56:20.456: INFO/dalvikvm(6088): at android.os.Handler.dispatchMessage(Handler.java:99) 04-08 11:56:20.456: INFO/dalvikvm(6088): at android.os.Looper.loop(Looper.java:123) 04-08 11:56:20.456: INFO/dalvikvm(6088): at android.os.HandlerThread.run(HandlerThread.java:60) 04-08 11:56:20.467: WARN/dalvikvm(6088): threadid=17: thread exiting with uncaught exception (group=0x4001b180) 04-08 11:56:20.467: ERROR/AndroidRuntime(6088): Uncaught handler: thread IntentService[UploadService] exiting due to uncaught exception 04-08 11:56:20.467: ERROR/AndroidRuntime(6088): java.lang.OutOfMemoryError 04-08 11:56:20.467: ERROR/AndroidRuntime(6088): at java.io.ByteArrayOutputStream.expand(ByteArrayOutputStream.java:93) 04-08 11:56:20.467: ERROR/AndroidRuntime(6088): at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:218) 04-08 11:56:20.467: ERROR/AndroidRuntime(6088): at com.android.election2010.UploadService.readBytes(UploadService.java:199) 04-08 11:56:20.467: ERROR/AndroidRuntime(6088): at com.android.election2010.UploadService.getBytesFromFile(UploadService.java:182) 04-08 11:56:20.467: ERROR/AndroidRuntime(6088): at com.android.election2010.UploadService.doUploadinBackground(UploadService.java:118) 04-08 11:56:20.467: ERROR/AndroidRuntime(6088): at 
com.android.election2010.UploadService.onHandleIntent(UploadService.java:85) 04-08 11:56:20.467: ERROR/AndroidRuntime(6088): at android.app.IntentService$ServiceHandler.handleMessage(IntentService.java:30) 04-08 11:56:20.467: ERROR/AndroidRuntime(6088): at android.os.Handler.dispatchMessage(Handler.java:99) 04-08 11:56:20.467: ERROR/AndroidRuntime(6088): at android.os.Looper.loop(Looper.java:123) 04-08 11:56:20.467: ERROR/AndroidRuntime(6088): at android.os.HandlerThread.run(HandlerThread.java:60) 04-08 11:56:20.496: INFO/Process(4657): Sending signal. PID: 6088 SIG: 3 I guess that as @DroidIn suggests, I need to upload it in chunks. But (newbie question alert) does that mean that I should make multiple PostMethod requests, and glue the file together at the server end? Or can I load the bytestream into memory in chunks, and glue it together in the Android code? If anyone could give me a clue as to the best approach, I would be very grateful.
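
    On the chunking questions: neither gluing on the server nor manual multi-request chunking is required if the bytes are never buffered in the first place. A sketch using HttpURLConnection's chunked streaming mode (the upload URL is invented; cR and fileUriString are the variables from the code above). With Apache HttpClient, an InputStreamRequestEntity on the PostMethod achieves the same effect.

        // Sketch: stream the video straight from the ContentResolver to the
        // connection; chunked mode means the file is never in memory at once.
        HttpURLConnection conn =
                (HttpURLConnection) new URL(uploadUrlString).openConnection(); // assumed URL
        conn.setDoOutput(true);
        conn.setRequestMethod("POST");
        conn.setChunkedStreamingMode(8 * 1024); // 8 KB chunks on the wire
        InputStream in = cR.openInputStream(Uri.parse(fileUriString));
        OutputStream out = conn.getOutputStream();
        try {
            byte[] buffer = new byte[8 * 1024];
            int len;
            while ((len = in.read(buffer)) != -1) {
                out.write(buffer, 0, len); // copy one small chunk at a time
            }
            out.flush();
        } finally {
            out.close();
            in.close();
        }
        int status = conn.getResponseCode(); // single request, nothing to glue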

  • Placing Variables into an External Sheet

    - by Leslie Peer
    I'm trying to build an online D&D program which stores the character info in tables. My problem is that the game works just fine while you're playing, but as soon as you exit the game all variables are lost, which means you have to restart from scratch the next time you log on... So this is a two-fold question: one, what is the best type of external sheet to save it on... and two, how to access the sheet for saving and loading. Below are the variables:
    <SCRIPT>
    Name1="Tabor Bloomfield"; Name2="Sam Wrightfield"; Name3="Gavin Hartfild"; Name4="Gail Quickfoot"; Name5="Robert Gragorian"; Name6="Peter Shain";
    Class1="MagicUser"; Class2="Fighter"; Class3="Fighter"; Class4="Thief"; Class5="Cleric"; Class6="Fighter";
    Level1=23; Level2=1; Level3=1; Level4=2; Level5=2; Level6=1;
    Hpts1=145; Hpts2=14; Hpts3=13; Hpts4=8; Hpts5=12; Hpts6=15;
    Armor1="Robe of Protection +5"; Armor2="Splinted Armor"; Armor3="Chain Armor"; Armor4="Leather Armor"; Armor5="Chain Armor"; Armor6="Splinted Armor";
    Ac1a=5; Ac2a=3; Ac3a=3; Ac4a=4; Ac5a=2; Ac6a=3;
    Armor1b="Ring of Protection +5"; Armor2b="Small Shield"; Armor3b="Small Shield"; Armor4b="Wooden Shield"; Armor5b="Large Shield"; Armor6b="Small Shield";
    Ac1b=5; Ac2b=1; Ac3b=1; Ac4b=1; Ac5b=1; Ac6b=1;
    Str1=21; Str2=16; Str3=14; Str4=13; Str5=14; Str6=13;
    Int1=19; Int2=11; Int3=12; Int4=13; Int5=14; Int6=13;
    Wis1=18; Wis2=12; Wis3=14; Wis4=13; Wis5=14; Wis6=12;
    Dex1=19; Dex2=14; Dex3=13; Dex4=15; Dex5=14; Dex6=12;
    Con1=19; Con2=15; Con3=16; Con4=13; Con5=12; Con6=10;
    Chr1=21; Chr2=14; Chr3=13; Chr4=12; Chr5=14; Chr6=13;
    </SCRIPT>
    File name = "gamestats". Path = "trellian Webpage/droves E and F/gamestats". I have tried an html page, Javascript, and creating a separate table page and putting the variables into cells... but I'm at a loss as to how to arrive at a solution.
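
    One low-friction possibility (a sketch, not the only design): skip the external sheet entirely and persist the party in the browser's localStorage, which survives exiting the game. The save key and the handful of fields shown are just examples; a server-side table, as mentioned above, would be the next step if the data must follow the player across machines.

        // Sketch: save/load the character variables with localStorage.
        // Extend the object literal with the remaining stats the same way.
        function saveGame() {
            var party = {
                name1: Name1, class1: Class1, level1: Level1, hpts1: Hpts1
                // ... repeat for the other members and ability scores
            };
            localStorage.setItem("dndSave", JSON.stringify(party));
        }

        function loadGame() {
            var raw = localStorage.getItem("dndSave");
            if (!raw) return; // nothing saved yet
            var party = JSON.parse(raw);
            Name1 = party.name1; Class1 = party.class1;
            Level1 = party.level1; Hpts1 = party.hpts1;
            // ... repeat for the rest
        }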

  • How do I add a namespace attribute to an element in JAXB when marshalling?

    - by Ryan Elkins
    I'm working with eBay's LMS (Large Merchant Services) and kept running into the error: org.xml.sax.SAXException: SimpleDeserializer encountered a child element, which is NOT expected, in something it was trying to deserialize. After alot of trial and error I traced the problem down. It turns out this works: <?xml version="1.0" encoding="UTF-8"?> <BulkDataExchangeRequests xmlns="urn:ebay:apis:eBLBaseComponents"> <Header> <Version>583</Version> <SiteID>0</SiteID> </Header> <AddFixedPriceItemRequest xmlns="urn:ebay:apis:eBLBaseComponents"> while this (what I've been sending) doesn't: <?xml version="1.0" encoding="UTF-8"?> <BulkDataExchangeRequests xmlns="urn:ebay:apis:eBLBaseComponents"> <Header> <Version>583</Version> <SiteID>0</SiteID> </Header> <AddFixedPriceItemRequest> The difference is the xml namespace attribute on the AddFixedPriceItemRequest . All of my XML is currently being marshalled via JAXB and I'm not sure what is the best way to go about adding a second xmlns attribute to a different element in my file. So that's the question. How do I add an xmlns attribute to another element in JAXB? UPDATE: package ebay.apis.eblbasecomponents; import javax.xml.bind.annotation.XmlAccessType; import javax.xml.bind.annotation.XmlAccessorType; import javax.xml.bind.annotation.XmlElement; import javax.xml.bind.annotation.XmlType; @XmlAccessorType(XmlAccessType.FIELD) @XmlType(name = "AddFixedPriceItemRequestType", propOrder = { "item" }) public class AddFixedPriceItemRequestType extends AbstractRequestType { @XmlElement(name = "Item") protected ItemType item; public ItemType getItem() { return item; } public void setItem(ItemType value) { this.item = value; } } Added class definition by request. UPDATE 2: Edited the above class like so to no effect: @XmlAccessorType(XmlAccessType.FIELD) @XmlType(namespace = "urn:ebay:apis:eBLBaseComponents", name = "AddFixedPriceItemRequestType", propOrder = { "item" }) public class AddFixedPriceItemRequestType UPDATE 3: Here is a snippet of the BulkDataExchangeRequestsType class. I tried throwing a namespace="urn:ebay:apis:eBLBaseComponents" into the @XmlElement for AddFixedPriceItemRequest but it didn't do anything. @XmlAccessorType(XmlAccessType.FIELD) @XmlType(name = "BulkDataExchangeRequestsType", propOrder = { "header", "addFixedPriceItemRequest" }) public class BulkDataExchangeRequestsType { @XmlElement(name = "Header") protected MerchantDataRequestHeaderType header; @XmlElement(name = "AddFixedPriceItemRequest") protected List<AddFixedPriceItemRequestType> addFixedPriceItemRequest; UPDATE 4: Here's the hideous chunk of code that is updating the xml after marshalling for me. This is currently working although I'm not particulary proud of it. DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); dbf.setNamespaceAware(true); DocumentBuilder db = dbf.newDocumentBuilder(); Document doc = db.newDocument(); marshaller.marshal(request, doc); NodeList nodes = doc.getChildNodes(); nodes = nodes.item(0).getChildNodes(); for(int i = 0; i < nodes.getLength(); i++){ Node node = nodes.item(i); if (!node.getNodeName().equals("Header")){ ((Element)node).setAttribute("xmlns", "urn:ebay:apis:eBLBaseComponents"); } } Update 5: For anyone else that runs into this problem with eBay and wonders why - The reasoning behind this most likely has to do with how eBay is handling these requests. The BulkDataExchange probably takes the XML payload, breaks it up, and send the pieces out to the Merchant or Trading API. 
The inner pieces of the payload then do not have the namespace and the get the error. This is a guess on my part but I wouldn't be surprised if this is what was going on behind the scenes. Thanks for all the help everyone.
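
    For reference, the usual declarative alternative to the DOM surgery in Update 4 (sketched here on the assumption that a package-info.java can be added to the generated package without being overwritten on regeneration): a package-level @XmlSchema puts every element of the package in the namespace, so children marshal out qualified without per-element attributes.

        // Sketch: package-info.java in ebay.apis.eblbasecomponents.
        // With QUALIFIED form, AddFixedPriceItemRequest and its children all
        // marshal inside urn:ebay:apis:eBLBaseComponents automatically.
        @javax.xml.bind.annotation.XmlSchema(
            namespace = "urn:ebay:apis:eBLBaseComponents",
            elementFormDefault = javax.xml.bind.annotation.XmlNsForm.QUALIFIED)
        package ebay.apis.eblbasecomponents;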

  • XML::Parser output contains unwanted hash references

    - by seaworthy
    So I wrote a parser routine to take one xml file and reparse it into another one. I later modified this code to split a large xml file into many small xml files. I am having a problem with the output. Parsing works fine; the only thing is that the output also includes unwanted strings like HASH(0x19f9b58). I am not sure why, and I need a set of friendly eyes.
    use Encode;
    use XML::Parser;
    my $parser = XML::Parser->new( Handlers => {Start => \&handle_elem_start, End => \&handle_elem_end, Char => \&handle_char_data,});
    my $record;
    my $file = shift @ARGV;
    if( $file ) {$parser->parsefile( $file );}
    exit;
    sub handle_elem_start {
        my( $expat, $name, %atts ) = @_;
        if ($name eq 'articles'){$file="_data.xml";unlink($file);}
        $record .= "<";
        $record .= "$name";
        foreach my $key (keys %atts){$record .= " $key=\"$atts{$key}\"";}
        $record .= ">";
    }
    sub handle_char_data {
        my( $expat, $text ) = @_;
        $text = decode_utf8( $text );
        $record .= "$text";
    }
    sub handle_elem_end {
        my( $expat, $name ) = @_;
        $record .= "</$name>";
        if( $name eq 'article' ) {
            open (MYFILE, '>>'.$file);
            print MYFILE $record;
            close (MYFILE);
            print $record;
            $record = {};
        }
        return unless( $name eq 'article' );
    }
    Sample output: ... </article>HASH(0x19f9b40) <article doi="10.1103/PhysRevSeriesI.9.304"> <journal short="Phys. Rev. (Series I)" jcode="PRI">Physical Review (Series I)</journal> <volume>9</volume> <issue printdate="1899-11-00">5</issue> <fpage>304</fpage> <lpage>309</lpage> <seqno>1</seqno> <price></price><tocsec>Articles</tocsec> <arttype type="article"></arttype><doi>10.1103/PhysRevSeriesI.9.304</doi> <title>An Investigation of the Magnetic Qualities of Building Brick</title> <authgrp> <author><givenname>O.</givenname><middlename>A.</middlename><surname>Gage</surname></author> <author><givenname>H.</givenname><middlename>E.</middlename><surname>Lawrence</surname></author> </authgrp> <cpyrt> <cpyrtdate date="1899"></cpyrtdate><cpyrtholder>The American Physical Society</cpyrtholder> </cpyrt> </article>HASH(0x19f9b58) ... HASH strings are not wanted, please advise.
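
    A near-certain culprit, judging from the code and the sample output (each HASH(0x...) sits exactly where a new record begins): the reset line $record = {}; assigns an empty hash reference, and the next $record .= "<" stringifies that reference into the accumulator. A sketch of the fix:

        # Reset the accumulator to an empty string, not an empty hash ref.
        if ( $name eq 'article' ) {
            open( MYFILE, '>>' . $file );
            print MYFILE $record;
            close(MYFILE);
            print $record;
            $record = '';   # was: $record = {}; which stringifies as HASH(0x...)
        }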

  • Allocation algorithm help, using Python.

    - by Az
    Hi there, I've been working on this general allocation algorithm for students. The pseudocode for it (a Python implementation) is:
    for a student in a dictionary of students:
        for student's preference in a set of preferences (ordered from 1 to 10):
            let temp_project be the first preferred project
            check if temp_project is available
            if so, allocate it to them and make the project UNavailable to others
    Quite simply, this tries to allocate projects starting from the most preferred. The way it works: out of a set of, say, 100 projects, you list 10 you would want to do. So the 10th project wouldn't be the "least preferred overall" but rather the least preferred in your chosen set, which isn't so bad. Obviously, if it can't allocate a project, a student just reverts to the base case, which is an allocation of None with a rank of 11. What I'm doing is calculating the allocation "quality" based on a weighted sum of the ranks. So the lower the numbers (i.e. more highly preferred projects), the better the allocation quality (i.e. more students have highly preferred projects). That's basically what I've currently got. Simple, and it works. Now I'm working on this algorithm that tries to minimise the allocation weight locally (this pseudocode is a bit messy, sorry). The only reason this will probably work is because my "search space", as it is, isn't particularly large (just a very general, anecdotal observation, mind you). Since the project is specific to my Department, it has its own limits imposed: the number of students can't exceed 100 and the number of preferences won't exceed 10.
    for student in a dictionary/list/whatever of students, where i = 0:
        take the (i)th student and the (i+1)th student, and for their ranks:
            allocate the projects and set local_weighting to be sum(student_i.alloc_proj_rank, student_i+1.alloc_proj_rank)
            these are the cases:
            if local_weighting is 2 (i.e. both ranks are 1):
                then i += 1 and continue above
            if local_weighting is N > 2 (i.e. one or more ranks are greater than 1):
                let temp_local_weighting be N:
                    pick the student with the lowest rank, move him to his next rank, and reallocate the other student's project
                    after this, if temp_local_weighting is < N:
                        then allocate those projects to the students; move the student with the lowest rank to the next rank and reallocate the other
                        if temp_local_weighting < previous_temp_allocation:
                            let these be the new allocated projects; try moving for the lowest rank and reallocate the other
                    else:
                        if this weighting => previous_weighting:
                            let these be the allocated projects; i += 1 and move on for the rest of the students
    So, questions: This is sort of a modification of simulated annealing, but any sort of comments on this would be appreciated. How would I keep track of which student is (i) and which student is (i+1)? If my overall list of students is 100, then the thing would mess up on (i+1) = 101, since there is none. How can I circumvent that? Any immediate flaws that can be spotted?
    Extra info: My students dictionary is designed as such: students[student_id] = Student(student_id, student_name, alloc_proj, alloc_proj_rank, preferences) where preferences is in the form of a dictionary such that preferences[rank] = {project_id}
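
    For the second and third questions, a sketch (assuming the Student fields shown above): iterating over adjacent pairs with zip never produces an (i+1) past the end, because the shorter slice stops the pairing at the last complete pair.

        # Sketch: walk adjacent (i, i+1) student pairs without running off
        # the end of the list. zip() stops at the last complete pair.
        ordered = sorted(students.values(), key=lambda s: s.student_id)
        for student_a, student_b in zip(ordered, ordered[1:]):
            local_weighting = student_a.alloc_proj_rank + student_b.alloc_proj_rank
            if local_weighting == 2:
                continue  # both already hold rank-1 projects; nothing to improve
            # ... try the reallocation moves described above on this pair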

  • How to reduce Entity Framework 4 query compile time?

    - by Rup
    Summary: We're having problems with EF4 query compilation times of 12+ seconds. Cached queries will only get us so far; are there any ways we can actually reduce the compilation time? Is there anything we might be doing wrong we can look for? Thanks! We have an EF4 model which is exposed over the WCF services. For each of our entity types we expose a method to fetch and return the whole entity for display / edit including a number of referenced child objects. For one particular entity we have to .Include() 31 tables / sub-tables to return all relevant data. Unfortunately this makes the EF query compilation prohibitively slow: it takes 12-15 seconds to compile and builds a 7,800-line, 300K query. This is the back-end of a web UI which will need to be snappier than that. Is there anything we can do to improve this? We can CompiledQuery.Compile this - that doesn't do any work until first use and so helps the second and subsequent executions but our customer is nervous that the first usage shouldn't be slow either. Similarly if the IIS app pool hosting the web service gets recycled we'll lose the cached plan, although we can increase lifetimes to minimise this. Also I can't see a way to precompile this ahead of time and / or to serialise out the EF compiled query cache (short of reflection tricks). The CompiledQuery object only contains a GUID reference into the cache so it's the cache we really care about. (Writing this out it occurs to me I can kick off something in the background from app_startup to execute all queries to get them compiled - is that safe?) However even if we do solve that problem, we build up our search queries dynamically with LINQ-to-Entities clauses based on which parameters we're searching on: I don't think the SQL generator does a good enough job that we can move all that logic into the SQL layer so I don't think we can pre-compile our search queries. This is less serious because the search data results use fewer tables and so it's only 3-4 seconds compile not 12-15 but the customer thinks that still won't really be acceptable to end-users. So we really need to reduce the query compilation time somehow. Any ideas? Profiling points to ELinqQueryState.GetExecutionPlan as the place to start and I have attempted to step into that but without the real .NET 4 source available I couldn't get very far, and the source generated by Reflector won't let me step into some functions or set breakpoints in them. The project was upgraded from .NET 3.5 so I have tried regenerating the EDMX from scratch in EF4 in case there was something wrong with it but that didn't help. I have tried the EFProf utility advertised here but it doesn't look like it would help with this. My large query crashes its data collector anyway. I have run the generated query through SQL performance tuning and it already has 100% index usage. I can't see anything wrong with the database that would cause the query generator problems. Is there something O(n^2) in the execution plan compiler - is breaking this down into blocks of separate data loads rather than all 32 tables at once likely to help? Setting EF to lazy-load didn't help. I've bought the pre-release O'Reilly Julie Lerman EF4 book but I can't find anything in there to help beyond 'compile your queries'. I don't understand why it's taking 12-15 seconds to generate a single select across 32 tables so I'm optimistic there's some scope for improvement! Thanks for any suggestions! 
We're running against SQL Server 2008 in case that matters and XP / 7 / server 2008 R2 using RTM VS2010.
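
    On the "kick something off in the background from app_startup" idea: it is safe in the sense that the compiled-plan cache is a per-AppDomain static shared by all context instances, so a throwaway warm-up query benefits every later request; it just has to be redone after each pool recycle. A sketch (the context, entity set and include paths are invented):

        // Sketch for Global.asax: warm EF's query-plan cache off the request
        // path so no user pays the 12-15 s first-compile cost.
        protected void Application_Start()
        {
            System.Threading.ThreadPool.QueueUserWorkItem(delegate
            {
                try
                {
                    using (var ctx = new MyEntities()) // hypothetical context
                    {
                        // Touch the expensive 31-Include shape once; discard result.
                        var warm = ctx.Consultants
                                      .Include("Addresses")
                                      .Include("Documents") // ...and the other 29
                                      .FirstOrDefault();
                    }
                }
                catch { /* warm-up is best effort; never block startup */ }
            });
        }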

  • How can I handle parameterized queries in Drupal?

    - by Anthony Gatlin
    We have a client who is currently using Lotus Notes/Domino as their content management system and web server. For many reasons, we are recommending they sunset their Notes/Domino implementation and transition onto a more modern platform, such as Drupal. The client has several web applications which would be a natural fit for Drupal. However, I am unsure of the best way to implement one of them, and I am running into a knowledge barrier, so I wondered if any of you could fill in the gaps.

    Situation: the client has a Lotus Domino application which serves as a front-end for querying a large DB2 data store and returning a result set (generally in table form) to a user via the web. The application provides access to approximately 100 pre-defined queries, 50 of which are public and 50 of which are secured. Most of the queries accept some set of user-selected parameters as input, and the output is typically returned to users in a list (table) format; a limited number of result sets allow drill-down through the HTML table into detail records.

    The query parameters often involve database queries themselves. For example, a single query may pull a list of company divisions into a drop-down. Once a division is selected, a second drop-down is populated with the departments from that division, but perhaps only the departments which meet some special criteria, such as those having taken a loss within a specific time frame. Most queries have 2-4 parameters, with the average probably being 3.

    The application involves no data entry; none of the back-end data is ever modified by the web application. All access is purely based around querying data and viewing results. The queries change relatively infrequently, and the current system has been in place for approximately 10 years, with perhaps 10-20 query additions, modifications, or other changes in a given year. The client simply wants to change the presentation platform and absolutely does not want to re-do the 100 database queries. Once the project is implemented, the client wants their staff to take over and manage future changes. The client's staff have no background in Drupal or PHP but are somewhat willing to learn as necessary.

    How would you transition this into Drupal? My major knowledge void relates to how we would manage the query parameters and access the queries themselves. Here are a few specific questions, but feel free to chime in on any issue related to this implementation. Would we have to build 100 forms by hand, with each form containing the parameters for a given query? If so, how would we do this, and approximately how long would it take to build and configure each form? Is there a better way than manually building 100 forms? (I understand using CCK to enter data into custom content types, but since we aren't adding any nodes, I am a little stuck as to how this might work.) Would it be possible for the internal staff to learn to create these query parameter forms, even if they are unfamiliar with Drupal today, and would they be required to do any PHP programming? Finally, how would we take the query parameters from a form and execute a query against DB2? Would this require a custom module, and if so, one module total or one module per query? (Note: there is apparently a DB2 driver available for Drupal. See http://groups.drupal.org/node/5511.)

    Note: I am not looking for CMS recommendations other than Drupal, as Drupal nicely fits all of the client's other requirements and I hope to help them standardize on a single platform. Any assistance you can provide would be helpful. Thank you in advance for your help!
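    A minimal sketch of one possible shape for this, assuming Drupal 6's Form API and the PHP ibm_db2 extension; the module, function, table, and credential names here are all hypothetical:

        <?php
        // Form builder: one drop-down parameter plus a submit button.
        function mymodule_division_form(&$form_state) {
          $form['division'] = array(
            '#type' => 'select',
            '#title' => t('Division'),
            '#options' => mymodule_division_options(), // hypothetical helper that queries DB2 for options
          );
          $form['submit'] = array('#type' => 'submit', '#value' => t('Run query'));
          return $form;
        }

        // Submit handler: bind the chosen parameter into a prepared DB2 query.
        function mymodule_division_form_submit($form, &$form_state) {
          $conn = db2_connect('MYDB', 'db2user', 'db2pass');   // assumed credentials
          $stmt = db2_prepare($conn, 'SELECT * FROM departments WHERE division = ?'); // hypothetical table
          db2_execute($stmt, array($form_state['values']['division']));
          $rows = array();
          while ($row = db2_fetch_assoc($stmt)) {
            $rows[] = $row;
          }
          // theme('table', ...) would render $rows as the result table.
        }

    Since each query page is just a builder plus a submit handler, the 100 queries could plausibly live in a single module, with the SQL and parameter definitions kept as data rather than as 100 hand-written functions; that is a sketch under those assumptions, not a definitive design.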


  • WCF timed out waiting for System.Diagnostics.Process to finish

    - by Bartek
    Dear all, we have a WCF service deployed on Windows Server 2003 that handles file transfers. When a file is in Unix format, I convert it to DOS format in the initialization stage using System.Diagnostics.Process (with .WaitForExit()). The client calls the service like this:

        obj_DataSenderService = New DataSendClient()
        obj_DataSenderService.InnerChannel.OperationTimeout = New TimeSpan(0, System.Configuration.ConfigurationManager.AppSettings("DatasenderServiceOperationTimeout"), 0)
        str_DataSenderGUID = obj_DataSenderService.Initialize(xe_InitDetails.GetXMLNode)

    This works fine, but for large files the conversion takes more than 10 minutes and I get this exception:

        A first chance exception of type 'System.ServiceModel.CommunicationException' occurred in mscorlib.dll
        Additional information: The socket connection was aborted. This could be caused by an error processing your message or a receive timeout being exceeded by the remote host, or an underlying network resource issue. Local socket timeout was '00:59:59.8749992'.

    I tried configuring both the client:

        <system.serviceModel>
          <bindings>
            <netTcpBinding>
              <binding name="NetTcpBinding_IDataSend" closeTimeout="01:00:00"
                  openTimeout="01:00:00" receiveTimeout="01:00:00" sendTimeout="01:00:00"
                  transactionFlow="false" transferMode="Buffered"
                  transactionProtocol="OleTransactions"
                  hostNameComparisonMode="StrongWildcard" listenBacklog="10"
                  maxBufferPoolSize="524288" maxBufferSize="65536" maxConnections="10"
                  maxReceivedMessageSize="65536">
                <readerQuotas maxDepth="32" maxStringContentLength="8192"
                    maxArrayLength="16384" maxBytesPerRead="4096"
                    maxNameTableCharCount="16384" />
                <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" />
                <security mode="None">
                  <transport clientCredentialType="Windows" protectionLevel="EncryptAndSign" />
                  <message clientCredentialType="Windows" />
                </security>
              </binding>
            </netTcpBinding>
          </bindings>
          <client>
            <endpoint address="net.tcp://localhost:4000/DataSenderEndPoint"
                binding="netTcpBinding" bindingConfiguration="NetTcpBinding_IDataSend"
                contract="IDataSend" name="NetTcpBinding_IDataSend">
              <identity>
                <servicePrincipalName value="host/localhost" />
                <!--<servicePrincipalName value="host/axopwrapp01.Corp.Acxiom.net" />-->
              </identity>
            </endpoint>
          </client>
        </system.serviceModel>

    and the service:

        <system.serviceModel>
          <bindings>
            <netTcpBinding>
              <binding name="NetTcpBinding_IDataSend" closeTimeout="01:00:00"
                  openTimeout="01:00:00" receiveTimeout="01:00:00" sendTimeout="01:00:00"
                  transactionFlow="false" transferMode="Buffered"
                  transactionProtocol="OleTransactions"
                  hostNameComparisonMode="StrongWildcard" listenBacklog="10"
                  maxBufferPoolSize="524288" maxBufferSize="65536" maxConnections="10"
                  maxReceivedMessageSize="65536">
              </binding>
            </netTcpBinding>
          </bindings>
        </system.serviceModel>

    but without luck. In the Service Trace Viewer I can see:

        Close process timed out waiting for service dispatch to complete.

    with this stack trace:

        System.ServiceModel.ServiceChannelManager.CloseInput(TimeSpan timeout)
        System.ServiceModel.Dispatcher.InstanceContextManager.CloseInput(TimeSpan timeout)
        System.ServiceModel.ServiceHostBase.OnClose(TimeSpan timeout)
        System.ServiceModel.Channels.CommunicationObject.Close(TimeSpan timeout)
        System.ServiceModel.Channels.CommunicationObject.Close()
        DataSenderService.DataSender.OnStop()
        System.ServiceProcess.ServiceBase.DeferredStop()
        System.Runtime.Remoting.Messaging.StackBuilderSink._PrivateProcessMessage(IntPtr md, Object[] args, Object server, Int32 methodPtr, Boolean fExecuteInContext, Object[]& outArgs)
        System.Runtime.Remoting.Messaging.StackBuilderSink.PrivateProcessMessage(RuntimeMethodHandle md, Object[] args, Object server, Int32 methodPtr, Boolean fExecuteInContext, Object[]& outArgs)
        System.Runtime.Remoting.Messaging.StackBuilderSink.AsyncProcessMessage(IMessage msg, IMessageSink replySink)
        System.Runtime.Remoting.Proxies.AgileAsyncWorkerItem.DoAsyncCall()
        System.Runtime.Remoting.Proxies.AgileAsyncWorkerItem.ThreadPoolCallBack(Object o)
        System.Threading._ThreadPoolWaitCallback.WaitCallback_Context(Object state)
        System.Threading.ExecutionContext.runTryCode(Object userData)
        System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData)
        System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state)
        System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
        System.Threading._ThreadPoolWaitCallback.PerformWaitCallbackInternal(_ThreadPoolWaitCallback tpWaitCallBack)
        System.Threading._ThreadPoolWaitCallback.PerformWaitCallback(Object state)

    Many thanks, Bartek
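    One hedged observation worth a sketch: the service-side snippet above defines a binding named NetTcpBinding_IDataSend but never shows an endpoint referencing it. If the service host's endpoint does not name it via bindingConfiguration, WCF falls back to the default netTcpBinding, whose receiveTimeout is 10 minutes, which matches the point of failure. The service name below is taken from DataSenderService.DataSender.OnStop() in the stack trace; the contract name is assumed from the client config:

        <services>
          <!-- service name inferred from the stack trace; contract assumed from the client config -->
          <service name="DataSenderService.DataSender">
            <endpoint address="net.tcp://localhost:4000/DataSenderEndPoint"
                      binding="netTcpBinding"
                      bindingConfiguration="NetTcpBinding_IDataSend"
                      contract="IDataSend" />
          </service>
        </services>

    Even with matching timeouts, holding a channel open across a ten-minute conversion is fragile; an alternative worth considering is having Initialize start the conversion and return immediately, with the client polling for completion.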


  • matplotlib and python multithread file processing

    - by Napseis
    I have a large number of files to process. I have written a script that gets, sorts, and plots the data I want. So far, so good. I have tested it and it gives the desired result. Then I wanted to do this using multithreading. I have looked into the docs and examples on the internet, and using one thread in my program works fine. But when I use more, at some point I get random matplotlib errors, and I suspect some conflict there, even though I use a function with names for the plots, and I can't see where the problem could be. Here is the whole script; should you need more comments, I'll add them. Thank you.

        #!/usr/bin/python
        import matplotlib
        matplotlib.use('GTKAgg')
        import numpy as np
        from scipy.interpolate import griddata
        import matplotlib.pyplot as plt
        import matplotlib.colors as mcl
        from matplotlib import rc  # for latex
        import time as tm
        import sys
        import threading
        import Queue  # queue in 3.2 and Queue in 2.7!
        import pdb  # the debugger

        rc('text', usetex=True)  # for latex

        map = 0  # initialize the map index; used to index the array like this: array[map, [x, y]]
        time = np.zeros(1)  # an array to store the time
        middle_h = np.zeros((0, 3))  # x phi c, for the middle of the box

        current_file = open("single_void_cyl_periodic_phi_c_middle_h_out", 'r')
        for line in current_file:
            if line.startswith('# === time'):
                map += 1
                np.append(time, [float(line.strip('# === time '))])
            elif line.startswith('#'):
                pass
            else:
                v = np.fromstring(line, dtype=float, sep=' ')
                middle_h = np.vstack((middle_h, v[[1, 3, 4]]))
        current_file.close()
        middle_h = middle_h.reshape((map, -1, 3))  # 3d array: map, x, phi, c

        #####
        def load_and_plot():
            # will load a map file, and plot it along with the corresponding profile loaded before
            while not exit_flag:
                print("fetching work ...")
                # try:
                if not tasks_queue.empty():
                    map_index = tasks_queue.get()
                    print("----> working on map: %s" % map_index)
                    x, y, zp = np.loadtxt("single_void_cyl_growth_periodic_post_map_" + str(map_index),
                                          unpack=True, usecols=[1, 2, 3])
                    for i, el in enumerate(zp):
                        if el < 0.:
                            zp[i] = 0.
                    xv = np.unique(x)
                    yv = np.unique(y)
                    X, Y = np.meshgrid(xv, yv)
                    Z = griddata((x, y), zp, (X, Y), method='nearest')
                    figure = plt.figure(num=map_index, figsize=(14, 8))
                    ax1 = plt.subplot2grid((2, 2), (0, 0))
                    ax1.plot(middle_h[map_index, :, 0], middle_h[map_index, :, 1], '*b')
                    ax1.grid(True)
                    ax1.axis([-15, 15, 0, 1])
                    ax1.set_title('Profiles')
                    ax1.set_ylabel(r'$\phi$')
                    ax1.set_xlabel('x')
                    ax2 = plt.subplot2grid((2, 2), (1, 0))
                    ax2.plot(middle_h[map_index, :, 0], middle_h[map_index, :, 2], '*r')
                    ax2.grid(True)
                    ax2.axis([-15, 15, 0, 1])
                    ax2.set_ylabel('c')
                    ax2.set_xlabel('x')
                    ax3 = plt.subplot2grid((2, 2), (0, 1), rowspan=2, aspect='equal')
                    sub_contour = ax3.contourf(X, Y, Z, np.linspace(0, 1, 11), vmin=0.)
                    figure.colorbar(sub_contour, ax=ax3)
                    figure.savefig('single_void_cyl_' + str(map_index) + '.png')
                    plt.close(map_index)
                    tasks_queue.task_done()
                else:
                    print("nothing left to do, other threads finishing, sleeping 2 seconds...")
                    tm.sleep(2)
                # except:
                #     print("failed this time: %s" % map_index + ". Sleeping 2 seconds")
                #     tm.sleep(2)

        #####
        exit_flag = 0
        nb_threads = 2
        tasks_queue = Queue.Queue()
        threads_list = []
        jobs = list(range(map))  # each job is composed of a map
        print("inserting jobs in the queue...")
        for job in jobs:
            tasks_queue.put(job)
        print("done")
        # launch the threads
        for i in range(nb_threads):
            working_bee = threading.Thread(target=load_and_plot)
            working_bee.daemon = True
            print("starting thread " + str(i) + ' ...')
            threads_list.append(working_bee)
            working_bee.start()
        # wait for all tasks to be treated
        tasks_queue.join()
        # flip the flag, so the threads know it's time to stop
        exit_flag = 1
        for t in threads_list:
            print("waiting for threads %s to stop..." % t)
            t.join()
        print("all threads stopped")


  • Open XML SDK 2.0 - Split table to new PowerPoint slide when content flows off current slide

    - by amurra
    I have a bunch of data that I need to export from a website to a PowerPoint presentation and have been using the Open XML SDK 2.0 to perform this task. I have a presentation that I am putting through the Open XML SDK 2.0 Productivity Tool to generate the template code that I can use to recreate the export. On one of the slides I have a table, and the requirement is to add data to that table and break it across multiple slides if it exceeds the bottom of the slide. The approach I have taken is to determine the height of the table and, if it exceeds the height of the slide, move the new content onto the next slide. I have read Bryan and Jones' blog on adding repeating data to a PowerPoint slide, but my scenario is a little different. They use the following code:

        A.Table tbl = current.Slide.Descendants<A.Table>().First();
        A.TableRow tr = new A.TableRow();
        tr.Height = heightInEmu;
        tr.Append(CreateDrawingCell(imageRel + imageRelId));
        tr.Append(CreateTextCell(category));
        tr.Append(CreateTextCell(subcategory));
        tr.Append(CreateTextCell(model));
        tr.Append(CreateTextCell(price.ToString()));
        tbl.Append(tr);
        imageRelId++;

    This won't work for me, since they know what height to set the table row to (it will be the height of the image), but when adding in different amounts of text I do not know the height ahead of time, so I just set tr.Height to a default value. Here is my attempt at figuring out the table height:

        A.Table tbl = tableSlide.Slide.Descendants<A.Table>().First();
        A.TableRow tr = new A.TableRow();
        tr.Height = 370840L;
        tr.Append(PowerPointUtilities.CreateTextCell("This"));
        tr.Append(PowerPointUtilities.CreateTextCell("is"));
        tr.Append(PowerPointUtilities.CreateTextCell("a"));
        tr.Append(PowerPointUtilities.CreateTextCell("test"));
        tr.Append(PowerPointUtilities.CreateTextCell("Test"));
        tbl.Append(tr);
        tableSlide.Slide.Save();
        long tableHeight = PowerPointUtilities.TableHeight(tbl);

    Here are the helper methods:

        public static A.TableCell CreateTextCell(string text)
        {
            A.TableCell tableCell = new A.TableCell(
                new A.TextBody(new A.BodyProperties(),
                    new A.Paragraph(new A.Run(new A.Text(text)))),
                new A.TableCellProperties());
            return tableCell;
        }

        public static Int64Value TableHeight(A.Table table)
        {
            long height = 0;
            foreach (var row in table.Descendants<A.TableRow>()
                                     .Where(h => h.Height.HasValue))
            {
                height += row.Height.Value;
            }
            return height;
        }

    This correctly adds the new table row to the existing table, but when I try to get the height of the table, it returns the original height and not the new one; that is, it returns the default I initially set rather than the height after a large amount of text has been inserted. The height only seems to get readjusted when the file is opened in PowerPoint. I have also tried accessing the height of the largest table cell in the row, but can't find the right property for that. My question is: how do you determine the height of a dynamically added table row, given that the row height doesn't seem to update until the file is opened in PowerPoint? Are there other ways to determine when to split content to another slide while using the Open XML SDK 2.0? I'm open to any suggestions on a better approach, since there isn't much documentation on this subject.
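    This behaviour is expected in the sense that the Open XML SDK only reads and writes markup; it never runs PowerPoint's layout engine, so a row's Height stays whatever was last written until PowerPoint itself lays the table out. One workaround (a sketch, not an SDK facility) is to estimate the rendered text height yourself with GDI+ and keep a running total per slide; the font name, size, and cell width here are assumptions you would read from your template:

        using System.Drawing;

        public static class RowHeightEstimator
        {
            const long EmuPerInch = 914400L;

            // Rough estimate of the EMU height a string needs when wrapped
            // to a cell of the given width, using GDI+ text measurement.
            public static long EstimateRowHeightEmu(string text, float cellWidthInches)
            {
                using (var bmp = new Bitmap(1, 1))
                using (var g = Graphics.FromImage(bmp))
                using (var font = new Font("Calibri", 18f))  // assumed template font
                {
                    SizeF size = g.MeasureString(text, font,
                                                 (int)(cellWidthInches * g.DpiX));
                    return (long)(size.Height / g.DpiY * EmuPerInch);
                }
            }
        }

    Summing these estimates as you append rows gives a point at which to start a new slide, at the cost of the estimate drifting slightly from PowerPoint's own layout.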


  • reuse div container to load images

    - by user295927
    Hi all. What I would like to do: use a single container to load images. As it is now, I have eleven (11) containers in the HTML mark-up, each with its own div. Each container holds 4 images (2 images side by side, top and bottom), and when the link in the anchor tag is clicked, the div with the images fades in.

    A jQuery accordion is used for navigation; a typical example is below:

        <li>
          <a class="head" href="#">commercial/hospitality</a>
          <ul>
            <li><a href="#" projectName="project1" projectType="hospitality1"
                   image1="images/testImage1.jpg" image2="images/testImage2.jpg"
                   image3="images/testImage3.jpg" image4="images/testImage4.jpg">hospitality project number 1</a></li>
            <li><a href="#" projectName="project2" projectType="hospitality2"
                   image1="images/testImage1.jpg" image2="images/testImage2.jpg"
                   image3="images/testImage3.jpg" image4="images/testImage4.jpg">hospitality project number 2</a></li>
            <li><a href="#" projectName="project3" projectType="hospitality3"
                   image1="images/testImage1.jpg" image2="images/testImage2.jpg"
                   image3="images/testImage3.jpg" image4="images/testImage4.jpg">hospitality project number 3</a></li>
          </ul>
        </li>

    A typical div container used for image insertion; currently there are 11 of them:

        <div id="hospitality1" class="current">
          <div id="image1"><img src="images/testImage.jpg"/></div>
          <div id="image2"><img src="images/testImage.jpg"/></div>
          <div id="image3"><img src="images/testImage.jpg"/></div>
          <div id="image4"><img src="images/testImage.jpg"/></div>
        </div>

    Here is the code I am using at this point. It does work, but is there a better way to do this that re-uses only a single div container for loading the images?

        $(document).ready(function () {
            $('#navigation a').click(function (selected) {
                var projectType = $(this).attr("projectType"); // project type
                var projectName = $(this).attr("projectName"); // project name
                var image1 = $(this).attr("image1"); // anchor tag for image number 1
                var image2 = $(this).attr("image2"); // anchor tag for image number 2
                var image3 = $(this).attr("image3"); // anchor tag for image number 3
                var image4 = $(this).attr("image4"); // anchor tag for image number 4
                console.log(projectType); // returns type of project
                console.log(projectName); // returns name of project
                console.log(image1); // returns 1st image
                console.log(image2); // returns 2nd image
                console.log(image3); // returns 3rd image
                console.log(image4); // returns 4th image
                $(".current").hide(); // hides the previously selected image set
                $("#" + projectType).fadeIn("normal").addClass("current");
            });
        });

    As you can see, the mark-up is getting quite large. Any help is appreciated. ussteele
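    One way to cut this down to a single container (a sketch; #viewer is an assumed id for the one remaining div): build the four image divs from the clicked link's attributes instead of keeping 11 pre-filled containers in the page.

        $(document).ready(function () {
            $('#navigation a').click(function () {
                var $link = $(this);
                var html = '';
                // assemble the four image cells from the link's image1..image4 attributes
                for (var i = 1; i <= 4; i++) {
                    html += '<div><img src="' + $link.attr('image' + i) + '"/></div>';
                }
                // #viewer is the single assumed container replacing the 11 divs
                $('#viewer').hide().html(html).fadeIn('normal');
                return false; // keep the href="#" from scrolling the page
            });
        });

    With this, the 11 static containers and the projectType bookkeeping disappear; the markup holds one empty div and the links carry all the data.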


  • setTimeout in javascript not giving browser 'breathing room'

    - by C Bauer
    Alright, I thought I had this whole setTimeout thing figured out, but I seem to be horribly mistaken. I'm using excanvas and JavaScript to draw a map of my home state, but the drawing procedure chokes the browser. Right now I'm forced to pander to IE6 because I'm in a big organisation, which is probably a large part of the slowness. So what I thought I'd do is build a procedure called distributedDrawPolys (I'm probably using the wrong word there, so don't focus on the word "distributed") which basically pops polygons off of a global array in order to draw 50 of them at a time. This is the method that pushes the polygons onto the global array and runs the setTimeout:

        for (var x = 0; x < polygon.length; x++) {
            coordsObject.push(polygon[x]);
            fifty++;
            if (fifty > 49) {
                timeOutID = setTimeout(distributedDrawPolys, 5000);
                fifty = 0;
            }
        }

    I put an alert at the end of that method; it runs in practically a second. The distributed method looks like:

        function distributedDrawPolys() {
            if (coordsObject.length > 0) {
                for (x = 0; x < 50; x++) { // only do 50 polygons
                    var polygon = coordsObject.pop();
                    var coordinate = polygon.selectNodes("Coordinates/point");
                    var zip = polygon.selectNodes("ZipCode");
                    var rating = polygon.selectNodes("Score");
                    if (zip[0].text.indexOf("HH") == -1) {
                        var lastOriginCoord = [];
                        for (var y = 0; y < coordinate.length; y++) {
                            var point = coordinate[y];
                            latitude = shiftLat(point.getAttribute("lat"));
                            longitude = shiftLong(point.getAttribute("long"));
                            if (y == 0) {
                                lastOriginCoord[0] = point.getAttribute("long");
                                lastOriginCoord[1] = point.getAttribute("lat");
                            }
                            if (y == 1) {
                                beginPoly(longitude, latitude);
                            }
                            if (y > 0) {
                                if (translateLongToX(longitude) > 0 && translateLongToX(longitude) < 800 &&
                                        translateLatToY(latitude) > 0 && translateLatToY(latitude) < 600) {
                                    drawPolyPoint(longitude, latitude);
                                }
                            }
                        }
                        y = 0;
                        if (zip[0].text != targetZipCode) {
                            if (rating[0] != null) {
                                if (rating[0].text == "Excellent") {
                                    endPoly("rgb(0,153,0)");
                                } else if (rating[0].text == "Good") {
                                    endPoly("rgb(153,204,102)");
                                } else if (rating[0].text == "Average") {
                                    endPoly("rgb(255,255,153)");
                                }
                            } else {
                                endPoly("rgb(255,255,255)");
                            }
                        } else {
                            endPoly("rgb(255,0,0)");
                        }
                    }
                }
            }
        }

    I thought the setTimeout calls would let the site draw the polygons in groups, so users could interact with the page while it was still drawing. What am I doing wrong here?
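    The likely culprit: the push loop finishes almost instantly (hence the alert after a second), so every setTimeout it schedules fires at roughly the same moment, about 5 seconds later, and the batches then run back to back in one long block with no breathing room between them. A sketch of a self-rescheduling version, where drawPolygon is a hypothetical helper wrapping the per-polygon body above: draw one batch, then hand control back to the browser before queuing the next.

        function distributedDrawPolys() {
            var batch = coordsObject.splice(0, 50); // take the next 50 polygons
            for (var i = 0; i < batch.length; i++) {
                drawPolygon(batch[i]); // hypothetical helper wrapping the body above
            }
            if (coordsObject.length > 0) {
                // a short delay is enough; the point is yielding to the event loop
                timeOutID = setTimeout(distributedDrawPolys, 50);
            }
        }

        setTimeout(distributedDrawPolys, 0); // kick off after the current script ends

    This way only one timer is ever pending, and the browser gets a chance to repaint and process input between batches.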


  • Keyboard for programming

    - by exhuma
    This may seem like a bit of a tangential topic. It's not directly related to actual code, but it is important for our line of work nevertheless. Over the years, I've switched keyboards a few times, and all of them had slightly different key layouts. I'm not talking about the language/locale layout, but the physical layout! Why not the locale layout? Well, quite frankly, that's easy to change via software. I personally have a German keyboard but have it set to the UK layout. Why? It's quite hard to find different layouts in the shops where I live, and even ordering is not always easy. So that leaves me with Internet shops, but I prefer to "test" my keyboards before buying.

    The most notable changes are:

    - Mangled "home key block". I've seen this first on a Logitech keyboard, but it may have originated elsewhere.
    - Shape of the Enter key. I've seen three different cases so far: two lines high, wider at the top; two lines high, wider at the bottom; one line high.
    - Shape of the Backspace key. I've seen two types so far: one "character" wide and two "characters" wide.
    - OS keys. For Macs, you have the Option and Command buttons; for Windows, you have the Windows and context-menu buttons. Cherry even produced a Linux keyboard once (unfortunately I cannot find many details except news results). I assume a dedicated Linux keyboard would sport a Compose key and have SysRq always labelled as well (note that some standard layouts do this already).

    Obviously, all these differences mean that some keys have to be moved around the board a lot, which means that if you are used to one layout and have to work on another, you hit the wrong keys quite often. As it happens, this is much more annoying for programmers than for people who write texts, mainly because the keys which get moved around are special-character keys, often used in programming. Often these hardware layouts also depend indirectly on where you buy the keyboards. Honestly, I haven't seen a keyboard with a one-line Enter key in Germany or Luxembourg. I may just have missed it, but that's how it looks to me at least.

    A survey: I've seen some attempts at surveys in the style of "which keyboard is best for programming", but they all, in my opinion, are not using comparable sets. So I was wondering if it was possible to concoct a survey taking the above criteria into account, while ignoring key dimensions, as that would be a bit overkill I guess ;)

    From what I can see, there are the following types of physical layout:

    - Backspace: 2 characters wide; Enter: 2 lines, wider at the top
    - Backspace: 2 characters wide; Enter: 1 line
    - Backspace: 1 character wide; Enter: 2 lines, wider at the bottom

    Then there are the other possible permutations (home-key block, OS keys), which in total makes for quite a large list of categories. Now, I wonder: would anyone be interested in such a survey? I personally would, because I am looking for the perfect fit for me. If yes, then I could really use the help of anyone here to propose some models to include in the survey. Once I have some models for each category (I'd say at least 3 per category), I could go ahead and write up a survey, put it online and let it collect data for a while. What do you think?


  • In XSLT is it possible to use the value of an xpath expression in a call to a template using an par

    - by Cell
    I am performing an XSL transform, and in it I call a template with a param using the following code:

        <xsl:call-template name="GenerateColumns">
          <xsl:with-param name="curRow" select="$curRow"/>
          <xsl:with-param name="curCol" select="$curCol + 1"/>
        </xsl:call-template>

    This calls a template which outputs part of a table element in HTML. curRow and curCol are used to track which row and column we are in within the table, and gbl_maxCols is set to the number of columns in the HTML table (the template is cut down here, like the XML below):

        <xsl:template name="GenerateColumns">
          <xsl:param name="curRow"/>
          <xsl:param name="curCol"/>
          <xsl:if test="$curCol &lt;= $gbl_maxCols">
            <td>
              <xsl:attribute name="colspan">
                <xsl:value-of select="/page/column/@skipColumns"/>
              </xsl:attribute>
            </td>
            <!-- ... recursion for the next column elided ... -->
          </xsl:if>
        </xsl:template>

    The result of this template is a set of td elements; however, some of those elements (the ones with a skipColumns attribute greater than 1) span more than one column, and I need to skip that many columns on the next call to GenerateColumns. This works just as I would expect in the case where I simply increment the curCol param, but I have a case where I need to use the value from the XML attribute skipColumns in the math that calculates curCol. In the case above I iterate through all the columns, and this works for the majority of my use cases; in some cases, though, I need to skip over some of the columns, passing in that value from the XML attribute to calculate how many columns to skip. My naive first attempt was something like this:

        <xsl:call-template name="GenerateColumns">
          <xsl:with-param name="curRow" select="$curRow"/>
          <xsl:with-param name="curCol" select="$curCol + /page/column/@skipColumns"/>
        </xsl:call-template>

    Unfortunately this does not seem to work. Is there any way to use an attribute from an XML page in the calculation of the value of a param in XSL? My XML page is something like this (edited heavily, since the XML file is rather large):

        <page>
          <column name="blank" skipColumns="1"/>
          <column name="blank" skipColumns="1"/>
          <column name="test" skipColumns="3"/>
          <column name="blank" skipColumns="1"/>
          <column name="test2" skipColumns="6"/>
        </page>

    After all of this I would like to have a set of td elements like the following:

        <td></td><td></td><td colspan="3"></td><td></td><td colspan="6"></td>

    If I just iterate through the columns I instead end up with something like this, which gives me more td elements than I should have:

        <td></td><td></td><td colspan="3"></td><td></td><td colspan="6"></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td>

    Edited to provide more information.
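    A hedged diagnosis: in XSLT 1.0, using a node-set in arithmetic converts it via the first node in document order, so $curCol + /page/column/@skipColumns always adds the first column's skipColumns (1 here) no matter where you are in the table. Selecting the column that matches the current position should give the intended value; indexing the columns by $curCol is an assumption about how your columns map to table positions:

        <!-- assumes the document order of column elements matches table position -->
        <xsl:call-template name="GenerateColumns">
          <xsl:with-param name="curRow" select="$curRow"/>
          <xsl:with-param name="curCol"
                          select="$curCol + /page/column[position() = $curCol]/@skipColumns"/>
        </xsl:call-template>

    Equivalently, if the template is instantiated with a column element as the context node, select="$curCol + @skipColumns" reads the attribute of the current column directly.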


  • Scrolling a canvas as a shape you're moving approaches its edges

    - by Steven Sproat
    Hi, I develop a Python-based drawing program, Whyteboard. I have tools with which the user can create new shapes on the canvas, such as text/images/rectangles/circles/polygons. I also have a Select tool that allows the user to modify these shapes, for example moving a shape's position, resizing it, or editing a polygon's points' positions. I'm adding a new feature where moving or resizing a point near the canvas edge will automatically scroll the canvas. I think it's a good idea in terms of program usability, and it annoys me when other programs don't have this feature.

    I've made some good progress on coding this; below is some Python code to demonstrate what I'm doing. These functions demonstrate how some shapes calculate their "edges":

        def find_edges(self):
            """A line."""
            self.edges = {EDGE_TOP: min(self.y, self.y2), EDGE_RIGHT: max(self.x, self.x2),
                          EDGE_BOTTOM: max(self.y, self.y2), EDGE_LEFT: min(self.x, self.x2)}

        def find_edges(self):
            """An image."""
            self.edges = {EDGE_TOP: self.y, EDGE_RIGHT: self.x + self.image.GetWidth(),
                          EDGE_BOTTOM: self.y + self.image.GetHeight(), EDGE_LEFT: self.x}

        def find_edges(self):
            """Get the bounding rectangle for the polygon."""
            xmin = min(x for x, y in self.points)
            ymin = min(y for x, y in self.points)
            xmax = max(x for x, y in self.points)
            ymax = max(y for x, y in self.points)
            self.edges = {EDGE_TOP: ymin, EDGE_RIGHT: xmax, EDGE_BOTTOM: ymax, EDGE_LEFT: xmin}

    And here's the code I have so far to implement the scrolling when a shape nears the edge:

        def check_canvas_scroll(self, x, y, moving=False):
            """
            We check that the x/y coords are within 50px of the edge of the
            canvas and scroll the canvas accordingly. If the shape is being
            moved, we need to check specific edges of the shape (e.g. the
            left/right side of a rectangle).
            """
            size = self.board.GetClientSizeTuple()  # visible area of the canvas
            if not self.board.area > size:  # canvas is too small to need to scroll
                return
            start = self.board.GetViewStart()  # user's starting "viewport"
            scroll = (-1, -1)  # -1 means no change
            if moving:
                if self.shape.edges[EDGE_RIGHT] > start[0] + size[0] - 50:
                    scroll = (start[0] + 5, -1)
                if self.shape.edges[EDGE_BOTTOM] > start[1] + size[1] - 50:
                    scroll = (-1, start[1] + 5)
                # snip others
            else:
                if x > start[0] + size[0] - 50:
                    scroll = (start[0] + 5, -1)
                if y > start[1] + size[1] - 50:
                    scroll = (-1, start[1] + 5)
                # snip others
            self.board.Scroll(*scroll)

    This code actually works pretty well. If we're moving a shape, then we need to know its edges to calculate when they're coming close to the canvas edge. If we're resizing a single point, then we just use the x/y coords of that point to see if it's close to the edge. The problem I'm having is a little tricky to describe: basically, if you move a shape to the left and stop moving it, having positioned the shape within 50px of the canvas edge, then the next time you go to move the shape, the code that says "ok, is this shape close to the edge?" gets triggered, and the canvas scrolls to the left, even if you're moving the shape to the right. Can anyone think of how to stop this?

    I created a YouTube video to demonstrate the issue. At about 0:54, I move a polygon to the left of the canvas and position it there. The next time I move it, the canvas scrolls to the left even though I'm moving it right.

    Another thing I'd like to add, but am stuck on, is the scroll gaining momentum the longer a shape keeps scrolling, so that with a large canvas you're not moving a shape for ages at 5px a time when you need to cover a 2000px distance. Any suggestions there? Thanks all, and sorry for the super long question!
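    A sketch of one way out of the "parked near the edge" problem: make the scroll depend on the direction the pointer is actually moving, not just on where the shape sits. Here last_x/last_y and step are assumed attributes, initialised elsewhere (e.g. to the mouse-down position and 5); the momentum idea falls out of growing the step while scrolling continues.

        def check_canvas_scroll(self, x, y, moving=False):
            dx = x - self.last_x  # > 0 means the drag is heading right
            dy = y - self.last_y
            self.last_x, self.last_y = x, y
            size = self.board.GetClientSizeTuple()
            start = self.board.GetViewStart()
            scroll = [-1, -1]
            # only scroll toward the edge the drag is actually approaching
            if dx > 0 and self.shape.edges[EDGE_RIGHT] > start[0] + size[0] - 50:
                scroll[0] = start[0] + self.step
            elif dx < 0 and self.shape.edges[EDGE_LEFT] < start[0] + 50:
                scroll[0] = start[0] - self.step
            # ... the same two tests for dy against the top/bottom edges ...
            if scroll != [-1, -1]:
                self.step = min(self.step + 1, 25)  # gain momentum while scrolling
            else:
                self.step = 5  # reset once the shape leaves the edge zone
            self.board.Scroll(*scroll)

    Since the scroll only fires when the drag direction and the nearby edge agree, a shape parked by the left edge no longer yanks the view left when dragged right.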

