Search Results

Search found 10417 results on 417 pages for 'large'.


  • Search engine solution for Django that actually works?

    - by prometheus
    The story so far: Decided to go with Xapian as search backend because it has all search-engine features I was looking for, knows about Unicode, stemming, has few dependencies and requires no bloated app-server installation on top of it. Tried Django and Haystack (plus xapian-haystack, the backend glue code to tie Haystack to Xapian) because it was advertised on quite some blogs as "working". Did not work. Neither django-haystack nor the xapian-haystack project provide a version combination that actually works together. MASTER from both projects yields an error from Xapian, so it's not stable at all. Haystack 1.0.1 and xapian-haystack 1.0.x/1.1.0 are not API-compatible. Plus, in a minimally working installation of Haystack 1.0.1 and xapian-haystack MASTER, any complex query yields zero results due to errors in either django-haystack or xapian-haystack (I double-verified this), maybe because the unit-tests actually test very simple cases, and no edge-cases at all. Tried Djapian. The source-code is riddled with spelling errors (mind you, in variable names, not comments), documentation is also riddled with ambiguities and outdated information that will never lead to a working installation. Not surprisingly, users rarely ask for features but how to get it working in the first place. Next on the plate: exploring Solr (installing a Java environment plus Tomcat gives me headaches, the machine is RAM- and CPU-constrained), or Lucene (slightly less headaches, but still). Before I proceed spending more time with a solution that might or might not work as advertised, I'd like to know: Did anyone ever get an actual, real-world search solution working in Django? I'm serious. I find it really frustrating reading about "large problems mostly solved", and then realizing that you will never get a working installation from the source-code because, actually, all bloggers dealing with those "mostly solved problems" never went past basic installation and copy-pasting the official tutorials. So here are the requirements: must be able to search for 10-100 terms in one query must handle + (term must be present) and - (term must not be present), AND/OR must handle arbitrary grouping (i.e. parentheses around AND/OR) must allow for Django-ORM filtering before or after fulltext-search (i.e. pre-/post-processing of results with the full set of filters that Django knows about) alternatively, there must be a facility to bulk-fetch the result set and transform it into a QuerySet should be light on the machine, so preferably no humongous JVM and Java-based app-server installation Is there anything out there that does this? I'm not interested in anecdotal evidence, or references to some blog posts that claim it should be working. I'd like to hear from someone who actually has a fully-functional setup working in the real world, under real conditions, with real queries. EDIT: Let me repeat again that I'm not so much interested in anecdotal evidence that someone, somewhere has a somewhat running installation working with unspecified properties. I already went there, I read all the blog posts, mailing lists, I contacted the authors, but when it came to actual implementation of real-world scenarios, nothing ever worked as advertised. 
Also, and a user below brought that point up as well, considering the TCO of any project, I'm definitely not interested in hearing that someone, somewhere was able to pull it off once a vendor parachuted in an unknown number of specialists to monkey-patch the whole installation with specific domain-knowledge that's documented nowhere. So, please, if you claim you have a working installation that actually satisfies minimum requirements for a full-fledged search (see requirements above), please provide the following so that we can all benefit from a search solution for Django that actually solves the problem: exact Linux distribution, release version, exact release version of Haystack (or equivalent) and release version of search backend, exact release version of the search engine publicly (!) available documentation how to set up all components exactly in the way that your installation was set up such that the minimal requirements above are met. Thank you.
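    For what it's worth, the raw Xapian Python bindings already cover the query-language part of the requirements above (+/-, AND/OR, arbitrary parentheses) without any Haystack-style glue. The sketch below is only an illustration, not a tested production setup: it assumes an index where each document stores the Django primary key as its document data, and the model name is made up.

      import xapian
      from myapp.models import Article   # hypothetical model

      def search(querystring, db_path="/var/search/xapian.db"):
          db = xapian.Database(db_path)

          qp = xapian.QueryParser()
          qp.set_database(db)
          qp.set_stemmer(xapian.Stem("english"))
          qp.set_stemming_strategy(xapian.QueryParser.STEM_SOME)
          flags = (xapian.QueryParser.FLAG_BOOLEAN |    # AND / OR / parentheses
                   xapian.QueryParser.FLAG_LOVEHATE |   # +term / -term
                   xapian.QueryParser.FLAG_PHRASE)
          query = qp.parse_query(querystring, flags)

          enquire = xapian.Enquire(db)
          enquire.set_query(query)
          pks = [int(m.document.get_data()) for m in enquire.get_mset(0, 1000)]

          # hand the hits back to the ORM so normal Django filtering
          # can be applied before or after the fulltext step
          return Article.objects.filter(pk__in=pks)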

    Read the article

  • Use JAXB unmarshalling in Weblogic Server

    - by Leo
    Especifications: - Server: Weblogic 9.2 fixed by customer. - Webservices defined by wsdl and xsd files fixed by customer; not modifications allowed. Hi, In the project we need to develope a mail system. This must do common work with the webservice. We create a Bean who recieves an auto-generated class from non-root xsd element (not wsdl); this bean do this common work. The mail system recieves a xml with elements defined in xsd file and we need to drop this elements info to wsdlc generated classes. With this objects we can use this common bean. Is not possible to redirect the mail request to the webservice. We've looking for the code to do this with WL9.2 resources but we don't found anything. At the moment we've tried to use JAXB for this unmarshalling: JAXBContext c = JAXBContext.newInstance(new Class[]{WasteDCSType.class}); Unmarshaller u = c.createUnmarshaller(); WasteDCSType w = u.unmarshal(waste, WasteDCSType.class).getValue(); waste variable is an DOM Element object. It isn't the root element 'cause the root isn't included in XSD First we needed to add no-arg constructor in some autogenerated classes. No problem, we solved this and finally we unmarshalled the xml without error Exceptions. But we had problems with the attributes. The unmarshalling didn't set attributes; none of them in any class, not simple attributes, not large or short enumeration attributes. No problem with xml elements of any type. We can't create the unmarshaller from "context string" (the package name) 'cause not ObjectFactory has been create by wsldc. If we set the schema no element descriptions are founded and unmarshall crashes. This is the build content: <taskdef name="jwsc" classname="weblogic.wsee.tools.anttasks.JwscTask" /> <taskdef name="wsdlc" classname="weblogic.wsee.tools.anttasks.WsdlcTask"/> <target name="generate-from-wsdl"> <wsdlc srcWsdl="${src.dir}/wsdls/e3s-environmentalMasterData.wsdl" destJwsDir="${src.dir}/webservices" destImplDir="${src.dir}/webservices" packageName="org.arc.eterws.generated" /> <wsdlc srcWsdl="${src.dir}/wsdls/e3s-waste.wsdl" destJwsDir="${src.dir}/webservices" destImplDir="${src.dir}/webservices" packageName="org.arc.eterws.generated" /> </target> <target name="webservices" description=""> <jwsc srcdir="${src.dir}/webservices" destdir="${dest.dir}" classpathref="wspath"> <module contextPath="E3S" name="webservices"> <jws file="org/arc/eterws/impl/IE3SEnvironmentalMasterDataImpl.java" compiledWsdl="${src.dir}/webservices/e3s-environmentalMasterData_wsdl.jar"/> <jws file="org/arc/eterws/impl/Ie3SWasteImpl.java" compiledWsdl="${src.dir}/webservices/e3s-waste_wsdl.jar"/> <descriptor file="${src.dir}/webservices/META-INF/web.xml"/> </module> </jwsc> </target> My questions are: How Weblogic "unmarshall" the xml with the JAX-RPC tech and can we do the same with a xsd element? How can we do this if yes? If not, Exists any not complex solution to this problem? If not, must we use XMLBean tech. or regenerate the XSD with JAXB tech.? What is the best solution? NOTE: There are not one single xsd but a complex xsd structure in fact.
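    As a baseline outside of WebLogic, the plain-JAXB version of the DOM-element unmarshalling described above looks like the sketch below; the xsd path is a placeholder, and the optional setSchema() call is there because validation tends to turn silently-dropped attributes into explicit errors, which makes a namespace or attribute-form mismatch easier to pin down. This is only a sketch of the standard API, not a claim about how WebLogic's JAX-RPC runtime does it internally.

      import java.io.File;
      import javax.xml.XMLConstants;
      import javax.xml.bind.JAXBContext;
      import javax.xml.bind.JAXBElement;
      import javax.xml.bind.Unmarshaller;
      import javax.xml.transform.dom.DOMSource;
      import javax.xml.validation.Schema;
      import javax.xml.validation.SchemaFactory;
      import org.w3c.dom.Element;

      public final class WasteUnmarshaller {
          public static WasteDCSType unmarshal(Element waste) throws Exception {
              JAXBContext ctx = JAXBContext.newInstance(WasteDCSType.class);
              Unmarshaller u = ctx.createUnmarshaller();

              // optional: validate against the customer's xsd (placeholder path)
              Schema schema = SchemaFactory
                      .newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI)
                      .newSchema(new File("e3s-waste.xsd"));
              u.setSchema(schema);

              JAXBElement<WasteDCSType> root =
                      u.unmarshal(new DOMSource(waste), WasteDCSType.class);
              return root.getValue();
          }
      }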

    Read the article

  • Android: OutOfMemoryError while uploading video - how best to chunk?

    - by AP257
    Hi all, I have the same problem as described here, but I will supply a few more details. While trying to upload a video in Android, I'm reading it into memory, and if the video is large I get an OutOfMemoryError. Here's my code: // get bytestream to upload videoByteArray = getBytesFromFile(cR, fileUriString); public static byte[] getBytesFromFile(ContentResolver cR, String fileUriString) throws IOException { Uri tempuri = Uri.parse(fileUriString); InputStream is = cR.openInputStream(tempuri); byte[] b3 = readBytes(is); is.close(); return b3; } public static byte[] readBytes(InputStream inputStream) throws IOException { ByteArrayOutputStream byteBuffer = new ByteArrayOutputStream(); // this is storage overwritten on each iteration with bytes int bufferSize = 1024; byte[] buffer = new byte[bufferSize]; int len = 0; while ((len = inputStream.read(buffer)) != -1) { byteBuffer.write(buffer, 0, len); } return byteBuffer.toByteArray(); } And here's the traceback (the error is thrown on the byteBuffer.write(buffer, 0, len) line): 04-08 11:56:20.456: ERROR/dalvikvm-heap(6088): Out of memory on a 16775184-byte allocation. 04-08 11:56:20.456: INFO/dalvikvm(6088): "IntentService[UploadService]" prio=5 tid=17 RUNNABLE 04-08 11:56:20.456: INFO/dalvikvm(6088): | group="main" sCount=0 dsCount=0 s=N obj=0x449a3cf0 self=0x38d410 04-08 11:56:20.456: INFO/dalvikvm(6088): | sysTid=6119 nice=0 sched=0/0 cgrp=default handle=4010416 04-08 11:56:20.456: INFO/dalvikvm(6088): at java.io.ByteArrayOutputStream.expand(ByteArrayOutputStream.java:~93) 04-08 11:56:20.456: INFO/dalvikvm(6088): at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:218) 04-08 11:56:20.456: INFO/dalvikvm(6088): at com.android.election2010.UploadService.readBytes(UploadService.java:199) 04-08 11:56:20.456: INFO/dalvikvm(6088): at com.android.election2010.UploadService.getBytesFromFile(UploadService.java:182) 04-08 11:56:20.456: INFO/dalvikvm(6088): at com.android.election2010.UploadService.doUploadinBackground(UploadService.java:118) 04-08 11:56:20.456: INFO/dalvikvm(6088): at com.android.election2010.UploadService.onHandleIntent(UploadService.java:85) 04-08 11:56:20.456: INFO/dalvikvm(6088): at android.app.IntentService$ServiceHandler.handleMessage(IntentService.java:30) 04-08 11:56:20.456: INFO/dalvikvm(6088): at android.os.Handler.dispatchMessage(Handler.java:99) 04-08 11:56:20.456: INFO/dalvikvm(6088): at android.os.Looper.loop(Looper.java:123) 04-08 11:56:20.456: INFO/dalvikvm(6088): at android.os.HandlerThread.run(HandlerThread.java:60) 04-08 11:56:20.467: WARN/dalvikvm(6088): threadid=17: thread exiting with uncaught exception (group=0x4001b180) 04-08 11:56:20.467: ERROR/AndroidRuntime(6088): Uncaught handler: thread IntentService[UploadService] exiting due to uncaught exception 04-08 11:56:20.467: ERROR/AndroidRuntime(6088): java.lang.OutOfMemoryError 04-08 11:56:20.467: ERROR/AndroidRuntime(6088): at java.io.ByteArrayOutputStream.expand(ByteArrayOutputStream.java:93) 04-08 11:56:20.467: ERROR/AndroidRuntime(6088): at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:218) 04-08 11:56:20.467: ERROR/AndroidRuntime(6088): at com.android.election2010.UploadService.readBytes(UploadService.java:199) 04-08 11:56:20.467: ERROR/AndroidRuntime(6088): at com.android.election2010.UploadService.getBytesFromFile(UploadService.java:182) 04-08 11:56:20.467: ERROR/AndroidRuntime(6088): at com.android.election2010.UploadService.doUploadinBackground(UploadService.java:118) 04-08 11:56:20.467: ERROR/AndroidRuntime(6088): at 
com.android.election2010.UploadService.onHandleIntent(UploadService.java:85) 04-08 11:56:20.467: ERROR/AndroidRuntime(6088): at android.app.IntentService$ServiceHandler.handleMessage(IntentService.java:30) 04-08 11:56:20.467: ERROR/AndroidRuntime(6088): at android.os.Handler.dispatchMessage(Handler.java:99) 04-08 11:56:20.467: ERROR/AndroidRuntime(6088): at android.os.Looper.loop(Looper.java:123) 04-08 11:56:20.467: ERROR/AndroidRuntime(6088): at android.os.HandlerThread.run(HandlerThread.java:60) 04-08 11:56:20.496: INFO/Process(4657): Sending signal. PID: 6088 SIG: 3 I guess that as @DroidIn suggests, I need to upload it in chunks. But (newbie question alert) does that mean that I should make multiple PostMethod requests, and glue the file together at the server end? Or can I load the bytestream into memory in chunks, and glue it together in the Android code? If anyone could give me a clue as to the best approach, I would be very grateful.
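    As a sketch of the chunked approach on the client side (assumptions flagged in the comments: the upload URL is made up, and the server is assumed to accept a plain streamed POST body; if it needs multipart or per-chunk requests, the loop body changes but the idea stays the same): read from the ContentResolver stream into a small fixed buffer and write each chunk straight to the connection, so the full video never sits in memory and the ByteArrayOutputStream disappears entirely.

      import java.io.InputStream;
      import java.io.OutputStream;
      import java.net.HttpURLConnection;
      import java.net.URL;

      import android.content.ContentResolver;
      import android.net.Uri;

      public final class StreamingUploader {
          // uploadUrl is a placeholder; wire in the real endpoint
          public static int streamUpload(ContentResolver cR, String fileUriString,
                                         String uploadUrl) throws Exception {
              HttpURLConnection conn =
                      (HttpURLConnection) new URL(uploadUrl).openConnection();
              conn.setDoOutput(true);
              conn.setRequestMethod("POST");
              conn.setChunkedStreamingMode(8 * 1024);   // stream, don't buffer the whole body

              InputStream in = cR.openInputStream(Uri.parse(fileUriString));
              OutputStream out = conn.getOutputStream();
              byte[] buffer = new byte[8192];
              int len;
              while ((len = in.read(buffer)) != -1) {
                  out.write(buffer, 0, len);            // only ~8 KB in memory at a time
              }
              out.flush();
              out.close();
              in.close();

              int status = conn.getResponseCode();      // forces the request to complete
              conn.disconnect();
              return status;
          }
      }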

    Read the article

  • Placing Variables into an external Sheet

    - by Leslie Peer
    Trying to build an online D&D program which stores the character info into tables. My problem is that the game works just fine while you're playing, but as soon as you exit the game all variables are lost, which means you have to restart from scratch the next time you log on. So this is a two-fold question: one, what is the best type of external sheet to save the data on, and two, how do I access that sheet for saving and loading? Below are the variables:
      <SCRIPT>
      Name1="Tabor Bloomfield"; Name2="Sam Wrightfield"; Name3="Gavin Hartfild"; Name4="Gail Quickfoot"; Name5="Robert Gragorian"; Name6="Peter Shain";
      Class1="MagicUser"; Class2="Fighter"; Class3="Fighter"; Class4="Thief"; Class5="Cleric"; Class6="Fighter";
      Level1=23; Level2=1; Level3=1; Level4=2; Level5=2; Level6=1;
      Hpts1=145; Hpts2=14; Hpts3=13; Hpts4=8; Hpts5=12; Hpts6=15;
      Armor1="Robe of Protection +5"; Armor2="Splinted Armor"; Armor3="Chain Armor"; Armor4="Leather Armor"; Armor5="Chain Armor"; Armor6="Splinted Armor";
      Ac1a=5; Ac2a=3; Ac3a=3; Ac4a=4; Ac5a=2; Ac6a=3;
      Armor1b="Ring of Protection +5"; Armor2b="Small Shield"; Armor3b="Small Shield"; Armor4b="Wooden Shield"; Armor5b="Large Shield"; Armor6b="Small Shield";
      Ac1b=5; Ac2b=1; Ac3b=1; Ac4b=1; Ac5b=1; Ac6b=1;
      Str1=21; Str2=16; Str3=14; Str4=13; Str5=14; Str6=13;
      Int1=19; Int2=11; Int3=12; Int4=13; Int5=14; Int6=13;
      Wis1=18; Wis2=12; Wis3=14; Wis4=13; Wis5=14; Wis6=12;
      Dex1=19; Dex2=14; Dex3=13; Dex4=15; Dex5=14; Dex6=12;
      Con1=19; Con2=15; Con3=16; Con4=13; Con5=12; Con6=10;
      Chr1=21; Chr2=14; Chr3=13; Chr4=12; Chr5=14; Chr6=13;
      </SCRIPT>
    File name = "gamestats", path = "trellian Webpage/droves E and F/gamestats". I have tried an HTML page, JavaScript, and creating a separate table page and putting the variables into cells, but I am at a loss on how to arrive at a solution.
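    One browser-only way to keep the party between sessions, sketched below under the assumption that a purely client-side save is acceptable (anything that has to survive across machines or players needs a server-side store instead): hold the characters in one array of objects instead of Name1/Class1/... variables, and serialize the whole thing with JSON into localStorage. The "gamestats" key name just mirrors the file name above.

      <script>
      // one array of character objects instead of ~90 separate variables
      var party = [
        { name: "Tabor Bloomfield", cls: "MagicUser", level: 23, hp: 145,
          armor: "Robe of Protection +5", str: 21, intl: 19, wis: 18,
          dex: 19, con: 19, chr: 21 }
        // ... the other five characters in the same shape ...
      ];

      function saveGame() {
        // works in any reasonably modern browser that supports localStorage
        localStorage.setItem("gamestats", JSON.stringify(party));
      }

      function loadGame() {
        var saved = localStorage.getItem("gamestats");
        if (saved) {
          party = JSON.parse(saved);
        }
      }
      </script>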

    Read the article

  • XML::Parser output contains unwanted hash references

    - by seaworthy
    So I wrote a parser routine to take one xml file and reparse into another one. This code I later modified to split a large xml file into many small xml files. I am having a problem with an output. Parsing works fine the only thing output also includes unwanted strings like HASH(0x19f9b58), I am not sure why and need set of friendly eyes. use Encode; use XML::Parser; my $parser = XML::Parser->new( Handlers => {Start => \&handle_elem_start, End => \&handle_elem_end,Char => \&handle_char_data,}); my $record; my $file = shift @ARGV; if( $file ) {$parser->parsefile( $file );} exit; sub handle_elem_start { my( $expat, $name, %atts ) = @_; if ($name eq 'articles'){$file="_data.xml";unlink($file);} $record .= "<"; $record .= "$name"; foreach my $key (keys %atts){$record .= " $key=\"$atts{$key}\"";} $record .= ">"; } sub handle_char_data { my( $expat, $text ) = @_; $text = decode_utf8( $text ); $record .= "$text"; } sub handle_elem_end { my( $expat, $name ) = @_; $record .= "</$name>"; if( $name eq 'article' ) { open (MYFILE, '>>'.$file); print MYFILE $record; close (MYFILE); print $record; $record = {}; } return unless( $name eq 'article' ); } Sample output: ... </article>HASH(0x19f9b40) <article doi="10.1103/PhysRevSeriesI.9.304"> <journal short="Phys. Rev. (Series I)" jcode="PRI">Physical Review (Series I)</journal> <volume>9</volume> <issue printdate="1899-11-00">5</issue> <fpage>304</fpage> <lpage>309</lpage> <seqno>1</seqno> <price></price><tocsec>Articles</tocsec> <arttype type="article"></arttype><doi>10.1103/PhysRevSeriesI.9.304</doi> <title>An Investigation of the Magnetic Qualities of Building Brick</title> <authgrp> <author><givenname>O.</givenname><middlename>A.</middlename><surname>Gage</surname></author> <author><givenname>H.</givenname><middlename>E.</middlename><surname>Lawrence</surname></author> </authgrp> <cpyrt> <cpyrtdate date="1899"></cpyrtdate><cpyrtholder>The American Physical Society</cpyrtholder> </cpyrt> </article>HASH(0x19f9b58) ... HASH strings are not wanted, please advise.
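    One detail worth checking (an observation about the code above, offered as a likely cause rather than a verified fix): after an article is written out, the record is reset with $record = {};, which assigns a hash reference, and the next .= stringifies that reference, which is exactly what a stray HASH(0x19f9b58) between records looks like. A minimal sketch of the end handler with the reset done as a plain string:

      sub handle_elem_end {
          my ( $expat, $name ) = @_;
          $record .= "</$name>";
          if ( $name eq 'article' ) {
              open( MYFILE, '>>' . $file ) or die "cannot open $file: $!";
              print MYFILE $record;
              close(MYFILE);
              print $record;
              $record = "";    # reset to an empty string, not an empty hash ref
          }
      }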

    Read the article

  • How do I add a namespace attribute to an element in JAXB when marshalling?

    - by Ryan Elkins
    I'm working with eBay's LMS (Large Merchant Services) and kept running into the error: org.xml.sax.SAXException: SimpleDeserializer encountered a child element, which is NOT expected, in something it was trying to deserialize. After alot of trial and error I traced the problem down. It turns out this works: <?xml version="1.0" encoding="UTF-8"?> <BulkDataExchangeRequests xmlns="urn:ebay:apis:eBLBaseComponents"> <Header> <Version>583</Version> <SiteID>0</SiteID> </Header> <AddFixedPriceItemRequest xmlns="urn:ebay:apis:eBLBaseComponents"> while this (what I've been sending) doesn't: <?xml version="1.0" encoding="UTF-8"?> <BulkDataExchangeRequests xmlns="urn:ebay:apis:eBLBaseComponents"> <Header> <Version>583</Version> <SiteID>0</SiteID> </Header> <AddFixedPriceItemRequest> The difference is the xml namespace attribute on the AddFixedPriceItemRequest . All of my XML is currently being marshalled via JAXB and I'm not sure what is the best way to go about adding a second xmlns attribute to a different element in my file. So that's the question. How do I add an xmlns attribute to another element in JAXB? UPDATE: package ebay.apis.eblbasecomponents; import javax.xml.bind.annotation.XmlAccessType; import javax.xml.bind.annotation.XmlAccessorType; import javax.xml.bind.annotation.XmlElement; import javax.xml.bind.annotation.XmlType; @XmlAccessorType(XmlAccessType.FIELD) @XmlType(name = "AddFixedPriceItemRequestType", propOrder = { "item" }) public class AddFixedPriceItemRequestType extends AbstractRequestType { @XmlElement(name = "Item") protected ItemType item; public ItemType getItem() { return item; } public void setItem(ItemType value) { this.item = value; } } Added class definition by request. UPDATE 2: Edited the above class like so to no effect: @XmlAccessorType(XmlAccessType.FIELD) @XmlType(namespace = "urn:ebay:apis:eBLBaseComponents", name = "AddFixedPriceItemRequestType", propOrder = { "item" }) public class AddFixedPriceItemRequestType UPDATE 3: Here is a snippet of the BulkDataExchangeRequestsType class. I tried throwing a namespace="urn:ebay:apis:eBLBaseComponents" into the @XmlElement for AddFixedPriceItemRequest but it didn't do anything. @XmlAccessorType(XmlAccessType.FIELD) @XmlType(name = "BulkDataExchangeRequestsType", propOrder = { "header", "addFixedPriceItemRequest" }) public class BulkDataExchangeRequestsType { @XmlElement(name = "Header") protected MerchantDataRequestHeaderType header; @XmlElement(name = "AddFixedPriceItemRequest") protected List<AddFixedPriceItemRequestType> addFixedPriceItemRequest; UPDATE 4: Here's the hideous chunk of code that is updating the xml after marshalling for me. This is currently working although I'm not particulary proud of it. DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); dbf.setNamespaceAware(true); DocumentBuilder db = dbf.newDocumentBuilder(); Document doc = db.newDocument(); marshaller.marshal(request, doc); NodeList nodes = doc.getChildNodes(); nodes = nodes.item(0).getChildNodes(); for(int i = 0; i < nodes.getLength(); i++){ Node node = nodes.item(i); if (!node.getNodeName().equals("Header")){ ((Element)node).setAttribute("xmlns", "urn:ebay:apis:eBLBaseComponents"); } } Update 5: For anyone else that runs into this problem with eBay and wonders why - The reasoning behind this most likely has to do with how eBay is handling these requests. The BulkDataExchange probably takes the XML payload, breaks it up, and send the pieces out to the Merchant or Trading API. 
The inner pieces of the payload then do not have the namespace and the get the error. This is a guess on my part but I wouldn't be surprised if this is what was going on behind the scenes. Thanks for all the help everyone.
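    One JAXB-level mechanism that is worth trying before post-processing the DOM (sketched below, not verified against eBay's schemas): a package-info.java in the generated package with @XmlSchema sets the namespace and element qualification for everything in that package at marshalling time, so AddFixedPriceItemRequest comes out qualified without touching each @XmlElement. Note that wsdlc may overwrite this file on regeneration.

      // package-info.java in the generated package
      @javax.xml.bind.annotation.XmlSchema(
          namespace = "urn:ebay:apis:eBLBaseComponents",
          elementFormDefault = javax.xml.bind.annotation.XmlNsForm.QUALIFIED)
      package ebay.apis.eblbasecomponents;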

    Read the article

  • Allocation algorithm help, using Python.

    - by Az
    Hi there, I've been working on this general allocation algorithm for students. The pseudocode for it (a Python implementation) is: for a student in a dictionary of students: for student's preference in a set of preferences (ordered from 1 to 10): let temp_project be the first preferred project check if temp_project is available if so, allocate it to them and make the project UNavailable to others Quite simply this will try to allocate projects by starting from their most preferred. The way it works, out of a set of say 100 projects, you list 10 you would want to do. So the 10th project wouldn't be the "least preferred overall" but rather the least preferred in their chosen set, which isn't so bad. Obviously if it can't allocate a project, a student just reverts to the base case which is an allocation of None, with a rank of 11. What I'm doing is calculating the allocation "quality" based on a weighted sum of the ranks. So the lower the numbers (i.e. more highly preferred projects), the better the allocation quality (i.e. more students have highly preferred projects). That's basically what I've currently got. Simple and it works. Now I'm working on this algorithm that tries to minimise the allocation weight locally (this pseudocode is a bit messy, sorry). The only reason this will probably work is because my "search space" as it is, isn't particularly large (just a very general, anecdotal observation, mind you). Since the project is only specific to my Department, we have their own limits imposed. So the number of students can't exceed 100 and the number of preferences won't exceed 10. for student in a dictionary/list/whatever of students: where i = 0 take the (i)st student, (i+1)nd student for their ranks: allocate the projects and set local_weighting to be sum(student_i.alloc_proj_rank, student_i+1.alloc_proj_rank) these are the cases: if local_weighting is 2 (i.e. both ranks are 1): then i += 1 and and continue above if local weighting is = N>2 (i.e. one or more ranks are greater than 1): let temp_local_weighting be N: pick student with lowest rank and then move him to his next rank and pick the other student and reallocate his project after this if temp_local_weighting is < N: then allocate those projects to the students move student with lowest rank to the next rank and reallocate other if temp_local_weighting < previous_temp_allocation: let these be the new allocated projects try moving for the lowest rank and reallocate other else: if this weighting => previous_weighting let these be the allocated projects i += 1 and move on for the rest of the students So, questions: This is sort of a modification of simulated annealing, but any sort of comments on this would be appreciated. How would I keep track of which student is (i) and which student is (i+1) If my overall list of students is 100, then the thing would mess up on (i+1) = 101 since there is none. How can I circumvent that? Any immediate flaws that can be spotted? Extra info: My students dictionary is designed as such: students[student_id] = Student(student_id, student_name, alloc_proj, alloc_proj_rank, preferences) where preferences is in the form of a dictionary such that preferences[rank] = {project_id}
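    For reference, the greedy first pass in the pseudocode at the top corresponds to something like the sketch below (my own restatement, with the simplifying assumption that each rank maps to a single project id rather than a set); the commented zip() idiom at the end is one answer to the i / i+1 bookkeeping questions, since it naturally stops at the last adjacent pair instead of running off the end at student 101.

      def allocate(students, available_projects):
          """students: dict id -> Student; available_projects: set of project ids."""
          for student in students.values():
              student.alloc_proj, student.alloc_proj_rank = None, 11   # base case
              for rank in sorted(student.preferences):                 # 1 .. 10
                  project_id = student.preferences[rank]
                  if project_id in available_projects:
                      student.alloc_proj = project_id
                      student.alloc_proj_rank = rank
                      available_projects.discard(project_id)           # now unavailable
                      break

      def allocation_weight(students):
          # lower is better: sum of the ranks that were actually achieved
          return sum(s.alloc_proj_rank for s in students.values())

      # Adjacent pairs for the local-improvement pass, without an index overrun:
      #   student_list = list(students.values())
      #   for s_i, s_next in zip(student_list, student_list[1:]):
      #       local_weighting = s_i.alloc_proj_rank + s_next.alloc_proj_rank
      #       ... try the swaps described above on this pair ...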

    Read the article

  • How to reduce Entity Framework 4 query compile time?

    - by Rup
    Summary: We're having problems with EF4 query compilation times of 12+ seconds. Cached queries will only get us so far; are there any ways we can actually reduce the compilation time? Is there anything we might be doing wrong we can look for? Thanks! We have an EF4 model which is exposed over the WCF services. For each of our entity types we expose a method to fetch and return the whole entity for display / edit including a number of referenced child objects. For one particular entity we have to .Include() 31 tables / sub-tables to return all relevant data. Unfortunately this makes the EF query compilation prohibitively slow: it takes 12-15 seconds to compile and builds a 7,800-line, 300K query. This is the back-end of a web UI which will need to be snappier than that. Is there anything we can do to improve this? We can CompiledQuery.Compile this - that doesn't do any work until first use and so helps the second and subsequent executions but our customer is nervous that the first usage shouldn't be slow either. Similarly if the IIS app pool hosting the web service gets recycled we'll lose the cached plan, although we can increase lifetimes to minimise this. Also I can't see a way to precompile this ahead of time and / or to serialise out the EF compiled query cache (short of reflection tricks). The CompiledQuery object only contains a GUID reference into the cache so it's the cache we really care about. (Writing this out it occurs to me I can kick off something in the background from app_startup to execute all queries to get them compiled - is that safe?) However even if we do solve that problem, we build up our search queries dynamically with LINQ-to-Entities clauses based on which parameters we're searching on: I don't think the SQL generator does a good enough job that we can move all that logic into the SQL layer so I don't think we can pre-compile our search queries. This is less serious because the search data results use fewer tables and so it's only 3-4 seconds compile not 12-15 but the customer thinks that still won't really be acceptable to end-users. So we really need to reduce the query compilation time somehow. Any ideas? Profiling points to ELinqQueryState.GetExecutionPlan as the place to start and I have attempted to step into that but without the real .NET 4 source available I couldn't get very far, and the source generated by Reflector won't let me step into some functions or set breakpoints in them. The project was upgraded from .NET 3.5 so I have tried regenerating the EDMX from scratch in EF4 in case there was something wrong with it but that didn't help. I have tried the EFProf utility advertised here but it doesn't look like it would help with this. My large query crashes its data collector anyway. I have run the generated query through SQL performance tuning and it already has 100% index usage. I can't see anything wrong with the database that would cause the query generator problems. Is there something O(n^2) in the execution plan compiler - is breaking this down into blocks of separate data loads rather than all 32 tables at once likely to help? Setting EF to lazy-load didn't help. I've bought the pre-release O'Reilly Julie Lerman EF4 book but I can't find anything in there to help beyond 'compile your queries'. I don't understand why it's taking 12-15 seconds to generate a single select across 32 tables so I'm optimistic there's some scope for improvement! Thanks for any suggestions! 
We're running against SQL Server 2008 in case that matters and XP / 7 / server 2008 R2 using RTM VS2010.
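    On the "kick off something from app_startup" idea: the usual shape of a pre-compiled, warmed query is sketched below. The context and entity names (MyEntities, Orders) are placeholders, not taken from the real model, and this only helps the fixed "fetch whole entity" queries, not the dynamically composed search queries.

      using System;
      using System.Data.Objects;
      using System.Linq;

      public static class WarmQueries
      {
          public static readonly Func<MyEntities, int, IQueryable<Order>> OrderById =
              CompiledQuery.Compile((MyEntities ctx, int id) =>
                  ctx.Orders.Include("Customer").Where(o => o.Id == id));

          // Call once from Application_Start so the first real user does not pay
          // the 12-15 second plan-building cost (and again after an app-pool recycle).
          public static void WarmUp()
          {
              using (var ctx = new MyEntities())
              {
                  OrderById(ctx, -1).ToList();   // executes once, caching the compiled plan
              }
          }
      }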

    Read the article

  • How can I handle parameterized queries in Drupal?

    - by Anthony Gatlin
    We have a client who is currently using Lotus Notes/Domino as their content management system and web server. For many reasons, we are recommending they sunset their Notes/Domino implementation and transition onto a more modern platform--such as Drupal. The client has several web applications which would be a natural fit for Drupal. However, I am unsure of the best way to implement one of the web applications in Drupal. I am running into a knowledge barrier and wondered if any of you could fill in the gaps. Situation The client has a Lotus Domino application which serves as a front-end for querying a large DB2 data store and returning a result set (generally in table form) to a user via the web. The web application provides access to approximately 100 pre-defined queries--50 of which are public and 50 of which are secured. Most of the queries accept some set of user selected parameters as input. The output of the queries is typically returned to users in a list (table) format. A limited number of result sets allow drill-down through the HTML table into detail records. The query parameters often involve database queries themselves. For example, a single query may pull a list of company divisions into a drop-down. Once a division is selected, second drop-down with the departments from that division is populated--but perhaps only departments which meet some special criteria--such as those having taken a loss within a specific time frame. Most queries have 2-4 parameters with the average probably being 3. The application involves no data entry. None of the back-end data is ever modified by the web application. All access is purely based around querying data and viewing results. The queries change relatively infrequently, and the current system has been in place for approximately 10 years. There may be 10-20 query additions, modifications, or other changes in a given year. The client simply desires to change the presentation platform but absolutely does not want to re-do the 100 database queries. Once the project is implemented, the client wants their staff to take over and manage future changes. The client's staff have no background in Drupal or PHP but are somewhat willing to learn as necessary. How would you transition this into Drupal? My major knowledge void relates to how we would manage the query parameters and access the queries themselves. Here are a few specific questions but feel free to chime in on any issue related to this implementation. Would we have to build 100 forms by hand--with each form containing the parameters for a given query? If so, how would we do this? Approximately how long would it take to build/configure each of these forms? Is there a better way than manually building 100 forms? (I understand using CCK to enter data into custom content types but since we aren't adding any nodes, I am a little stuck as to how this might work.) Would it be possible for the internal staff to learn to create these query parameter forms--even if they are unfamiliar with Drupal today? Would they be required to do any PHP programming? How would we take the query parameters from a form and execute a query against DB2? Would this require a custom module? If so, would it require one module total or one module per query? (Note: There is apparently a DB2 driver available for Drupal. See http://groups.drupal.org/node/5511.) 
Note: I am not looking for CMS recommendations other than Drupal as Drupal nicely fits all of the client's other requirements, and I hope to help them standardize on a single platform. Any assistance you can provide would be helpful. Thank you in advance for your help!

    Read the article

  • WCF timed out waiting for System.Diagnostics.Process to finish

    - by Bartek
    Dear All, We have a WCF Service deployed on Windows Server 2003 that handles file transfers. When file is in Unix format, I am converting it to Dos format in the initialization stage using System.Diagnostics.Process (.WaitForExit()). Client calls the service: obj_DataSenderService = New DataSendClient() obj_DataSenderService.InnerChannel.OperationTimeout = New TimeSpan(0, System.Configuration.ConfigurationManager.AppSettings("DatasenderServiceOperationTimeout"), 0) str_DataSenderGUID = obj_DataSenderService.Initialize(xe_InitDetails.GetXMLNode) This works fine, however for large files the conversion takes more than 10 minutes and I am getting exception: A first chance exception of type 'System.ServiceModel.CommunicationException' occurred in mscorlib.dll Additional information: The socket connection was aborted. This could be caused by an error processing your message or a receive timeout being exceeded by the remote host, or an underlying network resource issue. Local socket timeout was '00:59:59.8749992'. I tried configuring both client: <system.serviceModel> <bindings> <netTcpBinding> <binding name="NetTcpBinding_IDataSend" closeTimeout="01:00:00" openTimeout="01:00:00" receiveTimeout="01:00:00" sendTimeout="01:00:00" transactionFlow="false" transferMode="Buffered" transactionProtocol="OleTransactions" hostNameComparisonMode="StrongWildcard" listenBacklog="10" maxBufferPoolSize="524288" maxBufferSize="65536" maxConnections="10" maxReceivedMessageSize="65536"> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384" maxBytesPerRead="4096" maxNameTableCharCount="16384" /> <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" /> <security mode="None"> <transport clientCredentialType="Windows" protectionLevel="EncryptAndSign" /> <message clientCredentialType="Windows" /> </security> </binding> </netTcpBinding> </bindings> <client> <endpoint address="net.tcp://localhost:4000/DataSenderEndPoint" binding="netTcpBinding" bindingConfiguration="NetTcpBinding_IDataSend" contract="IDataSend" name="NetTcpBinding_IDataSend"> <identity> <servicePrincipalName value="host/localhost" /> <!--<servicePrincipalName value="host/axopwrapp01.Corp.Acxiom.net" />--> </identity> </endpoint> </client> </system.serviceModel> And service: <system.serviceModel> <bindings> <netTcpBinding> <binding name="NetTcpBinding_IDataSend" closeTimeout="01:00:00" openTimeout="01:00:00" receiveTimeout="01:00:00" sendTimeout="01:00:00" transactionFlow="false" transferMode="Buffered" transactionProtocol="OleTransactions" hostNameComparisonMode="StrongWildcard" listenBacklog="10" maxBufferPoolSize="524288" maxBufferSize="65536" maxConnections="10" maxReceivedMessageSize="65536"> </binding> </netTcpBinding> </bindings> </system.serviceModel> but without luck. In the Service trace viewer I can see: Close process timed out waiting for service dispatch to complete. 
with stack trace: System.ServiceModel.ServiceChannelManager.CloseInput(TimeSpan timeout) System.ServiceModel.Dispatcher.InstanceContextManager.CloseInput(TimeSpan timeout) System.ServiceModel.ServiceHostBase.OnClose(TimeSpan timeout) System.ServiceModel.Channels.CommunicationObject.Close(TimeSpan timeout) System.ServiceModel.Channels.CommunicationObject.Close() DataSenderService.DataSender.OnStop() System.ServiceProcess.ServiceBase.DeferredStop() System.Runtime.Remoting.Messaging.StackBuilderSink._PrivateProcessMessage(IntPtr md, Object[] args, Object server, Int32 methodPtr, Boolean fExecuteInContext, Object[]& outArgs) System.Runtime.Remoting.Messaging.StackBuilderSink.PrivateProcessMessage(RuntimeMethodHandle md, Object[] args, Object server, Int32 methodPtr, Boolean fExecuteInContext, Object[]& outArgs) System.Runtime.Remoting.Messaging.StackBuilderSink.AsyncProcessMessage(IMessage msg, IMessageSink replySink) System.Runtime.Remoting.Proxies.AgileAsyncWorkerItem.DoAsyncCall() System.Runtime.Remoting.Proxies.AgileAsyncWorkerItem.ThreadPoolCallBack(Object o) System.Threading._ThreadPoolWaitCallback.WaitCallback_Context(Object state) System.Threading.ExecutionContext.runTryCode(Object userData) System.Runtime.CompilerServices.RuntimeHelpers.ExecuteCodeWithGuaranteedCleanup(TryCode code, CleanupCode backoutCode, Object userData) System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state) System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state) System.Threading._ThreadPoolWaitCallback.PerformWaitCallbackInternal(_ThreadPoolWaitCallback tpWaitCallBack) System.Threading._ThreadPoolWaitCallback.PerformWaitCallback(Object state) Many thanks Bartek
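    A side note on the initialization step itself (a sketch, not a fix for the channel timeouts): for large files the Unix-to-DOS conversion can be done in-process with streaming I/O instead of shelling out and blocking on Process.WaitForExit(), which at least removes the external process from the long-running path. Paths are placeholders.

      using System.IO;

      public static class LineEndingConverter
      {
          public static void UnixToDos(string sourcePath, string targetPath)
          {
              using (var reader = new StreamReader(sourcePath))
              using (var writer = new StreamWriter(targetPath))
              {
                  writer.NewLine = "\r\n";                    // DOS line endings
                  string line;
                  while ((line = reader.ReadLine()) != null)  // ReadLine accepts \n or \r\n
                  {
                      writer.WriteLine(line);
                  }
              }
          }
      }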

    Read the article

  • matplotlib and python multithread file processing

    - by Napseis
    I have a large number of files to process. I have written a script that get, sort and plot the datas I want. So far, so good. I have tested it and it gives the desired result. Then I wanted to do this using multithreading. I have looked into the doc and examples on the internet, and using one thread in my program works fine. But when I use more, at some point I get random matplotlib error, and I suspect some conflict there, even though I use a function with names for the plots, and iI can't see where the problem could be. Here is the whole script should you need more comment, i'll add them. Thank you. #!/usr/bin/python import matplotlib matplotlib.use('GTKAgg') import numpy as np from scipy.interpolate import griddata import matplotlib.pyplot as plt import matplotlib.colors as mcl from matplotlib import rc #for latex import time as tm import sys import threading import Queue #queue in 3.2 and Queue in 2.7 ! import pdb #the debugger rc('text', usetex=True)#for latex map=0 #initialize the map index. It will be use to index the array like this: array[map,[x,y]] time=np.zeros(1) #an array to store the time middle_h=np.zeros((0,3)) #x phi c #for the middle of the box current_file=open("single_void_cyl_periodic_phi_c_middle_h_out",'r') for line in current_file: if line.startswith('# === time'): map+=1 np.append(time,[float(line.strip('# === time '))]) elif line.startswith('#'): pass else: v=np.fromstring(line,dtype=float,sep=' ') middle_h=np.vstack( (middle_h,v[[1,3,4]]) ) current_file.close() middle_h=middle_h.reshape((map,-1,3)) #3d array: map, x, phi,c ##### def load_and_plot(): #will load a map file, and plot it along with the corresponding profile loaded before while not exit_flag: print("fecthing work ...") #try: if not tasks_queue.empty(): map_index=tasks_queue.get() print("----> working on map: %s" %map_index) x,y,zp=np.loadtxt("single_void_cyl_growth_periodic_post_map_"+str(map_index),unpack=True, usecols=[1, 2,3]) for i,el in enumerate(zp): if el<0.: zp[i]=0. xv=np.unique(x) yv=np.unique(y) X,Y= np.meshgrid(xv,yv) Z = griddata((x, y), zp, (X, Y),method='nearest') figure=plt.figure(num=map_index,figsize=(14, 8)) ax1=plt.subplot2grid((2,2),(0,0)) ax1.plot(middle_h[map_index,:,0],middle_h[map_index,:,1],'*b') ax1.grid(True) ax1.axis([-15, 15, 0, 1]) ax1.set_title('Profiles') ax1.set_ylabel(r'$\phi$') ax1.set_xlabel('x') ax2=plt.subplot2grid((2,2),(1,0)) ax2.plot(middle_h[map_index,:,0],middle_h[map_index,:,2],'*r') ax2.grid(True) ax2.axis([-15, 15, 0, 1]) ax2.set_ylabel('c') ax2.set_xlabel('x') ax3=plt.subplot2grid((2,2),(0,1),rowspan=2,aspect='equal') sub_contour=ax3.contourf(X,Y,Z,np.linspace(0,1,11),vmin=0.) figure.colorbar(sub_contour,ax=ax3) figure.savefig('single_void_cyl_'+str(map_index)+'.png') plt.close(map_index) tasks_queue.task_done() else: print("nothing left to do, other threads finishing,sleeping 2 seconds...") tm.sleep(2) # except: # print("failed this time: %s" %map_index+". 
Sleeping 2 seconds") # tm.sleep(2) ##### exit_flag=0 nb_threads=2 tasks_queue=Queue.Queue() threads_list=[] jobs=list(range(map)) #each job is composed of a map print("inserting jobs in the queue...") for job in jobs: tasks_queue.put(job) print("done") #launch the threads for i in range(nb_threads): working_bee=threading.Thread(target=load_and_plot) working_bee.daemon=True print("starting thread "+str(i)+' ...') threads_list.append(working_bee) working_bee.start() #wait for all tasks to be treated tasks_queue.join() #flip the flag, so the threads know it's time to stop exit_flag=1 for t in threads_list: print("waiting for threads %s to stop..."%t) t.join() print("all threads stopped")
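    For comparison, a process-based sketch of the same fan-out: pyplot keeps global state and is not thread-safe, so handing each map index to a worker process (rather than a thread) sidesteps the random errors entirely. The body of plot_map() is reduced to a stub here; the real loading and contouring code from load_and_plot() would move into it.

      import multiprocessing

      import matplotlib
      matplotlib.use('Agg')             # non-interactive backend for worker processes
      import matplotlib.pyplot as plt

      def plot_map(map_index):
          # per-map work: load the data for this index, draw, save, close
          fig = plt.figure(figsize=(14, 8))
          ax = fig.add_subplot(111)
          ax.set_title('map %d' % map_index)
          # ... the griddata / subplot2grid / contourf calls would go here ...
          fig.savefig('single_void_cyl_%d.png' % map_index)
          plt.close(fig)
          return map_index

      if __name__ == '__main__':
          pool = multiprocessing.Pool(processes=2)
          for done in pool.imap_unordered(plot_map, range(10)):   # range(map) in the script
              print("finished map %s" % done)
          pool.close()
          pool.join()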

    Read the article

  • Open XML SDK 2.0 - Split table to new power point slide when content flows off current slide

    - by amurra
    I have a bunch of data that I need to export from a website to a power point presentation and have been using Open XML SDK 2.0 to perform this task. I have a power point presentation that I am putting through Open XML SDK 2.0 Productivity Tool to generate the template code that I can use to recreate the export. On one of those slides I have a table and the requirement is to add data to that table and break that table across multiple slides if the table exceeds the bottom of the slide. The approach I have taken is to determine the height of the table and if it exceeds the height of the slide, move that new content into the next slide. I have read Bryan and Jones blog on adding repeating data to a power point slide, but my scenario is a little different. They use the following code: A.Table tbl = current.Slide.Descendants<A.Table>().First(); A.TableRow tr = new A.TableRow(); tr.Height = heightInEmu; tr.Append(CreateDrawingCell(imageRel + imageRelId)); tr.Append(CreateTextCell(category)); tr.Append(CreateTextCell(subcategory)); tr.Append(CreateTextCell(model)); tr.Append(CreateTextCell(price.ToString())); tbl.Append(tr); imageRelId++; This won't work for me since they know what height to set the table row to since it will be the height of the image, but when adding in different amounts of text I do not know the height ahead of time so I just set tr.Heightto a default value. Here is my attempt at figuring at the table height: A.Table tbl = tableSlide.Slide.Descendants<A.Table>().First(); A.TableRow tr = new A.TableRow(); tr.Height = 370840L; tr.Append(PowerPointUtilities.CreateTextCell("This"); tr.Append(PowerPointUtilities.CreateTextCell("is")); tr.Append(PowerPointUtilities.CreateTextCell("a")); tr.Append(PowerPointUtilities.CreateTextCell("test")); tr.Append(PowerPointUtilities.CreateTextCell("Test")); tbl.Append(tr); tableSlide.Slide.Save(); long tableHeight = PowerPointUtilities.TableHeight(tbl); Here are the helper methods: public static A.TableCell CreateTextCell(string text) { A.TableCell tableCell = new A.TableCell( new A.TextBody(new A.BodyProperties(), new A.Paragraph(new A.Run(new A.Text(text)))), new A.TableCellProperties()); return tableCell; } public static Int64Value TableHeight(A.Table table) { long height = 0; foreach (var row in table.Descendants<A.TableRow>() .Where(h => h.Height.HasValue)) { height += row.Height.Value; } return height; } This correctly adds the new table row to the existing table, but when I try and get the height of the table, it returns the original height and not the new height. The new height meaning the default height I initially set and not the height after a large amount of text has been inserted. It seems the height only gets readjusted when it is opened in power point. I have also tried accessing the height of the largest table cell in the row, but can't seem to find the right property to perform that task. My question is how do you determine the height of a dynamically added table row since it doesn't seem to update the height of the row until it is opened in power point? Any other ways to determine when to split content to another slide while using Open XML SDK 2.0? I'm open to any suggestion on a better approach someone might have taken since there isn't much documentation on this subject.
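    Since the SDK never recalculates row heights (PowerPoint does that when the file is opened), one pragmatic workaround is to estimate the height of each row from its text before appending it, and start a new slide once the running total exceeds the table area. The sketch below is a heuristic with made-up defaults (font size, characters per line), not values read from any template; only the EMU-per-point constant is fixed (914400 EMU per inch divided by 72 points).

      using System;

      public static class RowHeightEstimator
      {
          private const long EmuPerPoint = 12700;   // 914400 / 72

          public static long EstimateRowHeightEmu(string cellText,
                                                  int charsPerLine = 40,
                                                  double fontSizePt = 18,
                                                  double lineSpacing = 1.2)
          {
              int lines = Math.Max(1,
                  (int)Math.Ceiling((double)cellText.Length / charsPerLine));
              return (long)(lines * fontSizePt * lineSpacing * EmuPerPoint);
          }
      }

      // usage while filling the table:
      //   runningHeight += RowHeightEstimator.EstimateRowHeightEmu(longestCellText);
      //   if (runningHeight > availableTableHeightEmu) { /* start a new slide/table */ }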

    Read the article

  • reuse div container to load images

    - by user295927
    Hi All; What I would like to do: use a single container to load images. As it is now: I have eleven (11) containers in HTML mark-up each with its own div. each container holds 4 images (2 images side by side top and bottom) when link in anchor tag is clicked div with images fades in. /*-- Jquery accordion is used for navigation typical example is below --*/ <li> <a class="head" href="#">commercial/hospitality</a> <ul> <li><a href="#" projectName="project1" projectType="hospitality1" image1="images/testImage1.jpg" image2="images/testImage2.jpg" image3="images/testImage3.jpg" image4="images/testImage4.jpg">hospitality project number 1</a> </li> <li><a href="#" projectName="project2" projectType="hospitality2" image1="images/testImage1.jpg" image2="images/testImage2.jpg" image3="images/testImage3.jpg" image4="images/testImage4.jpg">hospitality project number 2</a> </li> <li><a href="#" projectName="project3" projectType="hospitality3" image1="images/testImage1.jpg" image2="images/testImage2.jpg" image3="images/testImage3.jpg" image4="images/testImage4.jpg">hospitality project number 3</a> </li> </ul> </li> Typical <div> container used for image insertion currently there are 11 of them: <div id="hospitality1" class="current"> <div id="image1"><img src="images/testImage.jpg"/></div> <div id="image2"><img src="images/testImage.jpg"/></div> <div id="image3"><img src="images/testImage.jpg"/></div> <div id="image4"><img src="images/testImage.jpg"/></div> </div> Here is the code I am using at this point, it does work, but is there a better way to do this that will only re-use a single div container for loading the images? $(document).ready(function(){ $('#navigation a').click(function (selected) { var projectType = $(this).attr("projectType"); //projectType var projectName = $(this).attr("projectName"); //projectName var image1 = $(this).attr("image1"); //anchor tag for image number 1 var image2 = $(this).attr("image2"); //anchor tag for image number 2 var image3 = $(this).attr("image3"); //anchor tag for image number 3 var image4 = $(this).attr("image4"); //anchor tag for image number 4 console.log(projectType); //returns type of project console.log(projectName); //returns name of project console.log(image1); //returns 1st image console.log(image2); //returns 2nd image console.log(image3); //returns 3rd image console.log(image4); //returns 4th image $(function() { $(".current").hide(); // hides previous selected image $("#" + projectType ).fadeIn("normal").addClass("current"); }); }); As you can, see the mark up getting quite large. Any help is appreciated. ussteele
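    A sketch of the single-container version (it assumes one extra empty div in the markup with a made-up id of project-images, and keeps the imageN attributes on the anchors exactly as they are now): on each click the container is emptied and refilled, so the eleven pre-built divs go away.

      <div id="project-images"></div>

      <script>
      $(document).ready(function () {
        $('#navigation a').click(function () {
          var $link = $(this);
          var $container = $('#project-images');

          $container.hide().empty();                  // one container, reused every time
          for (var i = 1; i <= 4; i++) {
            var src = $link.attr('image' + i);
            if (src) {
              $container.append(
                $('<div/>').addClass('image').append($('<img/>').attr('src', src))
              );
            }
          }
          $container.fadeIn('normal');
          return false;
        });
      });
      </script>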

    Read the article

  • setTimeout in javascript not giving browser 'breathing room'

    - by C Bauer
    Alright, I thought I had this whole setTimeout thing perfect but I seem to be horribly mistaken. I'm using excanvas and javascript to draw a map of my home state, however the drawing procedure chokes the browser. Right now I'm forced to pander to IE6 because I'm in a big organisation, which is probably a large part of the slowness. So what I thought I'd do is build a procedure called distributedDrawPolys (I'm probably using the wrong word there, so don't focus on the word distributed) which basically pops the polygons off of a global array in order to draw 50 of them at a time. This is the method that pushes the polygons on to the global array and runs the setTimeout: for (var x = 0; x < polygon.length; x++) { coordsObject.push(polygon[x]); fifty++; if (fifty > 49) { timeOutID = setTimeout(distributedDrawPolys, 5000); fifty = 0; } } I put an alert at the end of that method, it runs in practically a second. The distributed method looks like: function distributedDrawPolys() { if (coordsObject.length > 0) { for (x = 0; x < 50; x++) { //Only do 50 polygons var polygon = coordsObject.pop(); var coordinate = polygon.selectNodes("Coordinates/point"); var zip = polygon.selectNodes("ZipCode"); var rating = polygon.selectNodes("Score"); if (zip[0].text.indexOf("HH") == -1) { var lastOriginCoord = []; for (var y = 0; y < coordinate.length; y++) { var point = coordinate[y]; latitude = shiftLat(point.getAttribute("lat")); longitude = shiftLong(point.getAttribute("long")); if (y == 0) { lastOriginCoord[0] = point.getAttribute("long"); lastOriginCoord[1] = point.getAttribute("lat"); } if (y == 1) { beginPoly(longitude, latitude); } if (y > 0) { if (translateLongToX(longitude) > 0 && translateLongToX(longitude) < 800 && translateLatToY(latitude) > 0 && translateLatToY(latitude) < 600) { drawPolyPoint(longitude, latitude); } } } y = 0; if (zip[0].text != targetZipCode) { if (rating[0] != null) { if (rating[0].text == "Excellent") { endPoly("rgb(0,153,0)"); } else if (rating[0].text == "Good") { endPoly("rgb(153,204,102)"); } else if (rating[0].text == "Average") { endPoly("rgb(255,255,153)"); } } else { endPoly("rgb(255,255,255)"); } } else { endPoly("rgb(255,0,0)"); } } } } Ugh I don't know if that is properly formatted, I ended up with an extra bracket < So I thought the setTimeout method would allow the site to draw the polygons in groups so the users would be able to interact with the page while it was still drawing. What am I doing wrong here?
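    For what it's worth, the loop above schedules every batch against (roughly) the same 5-second deadline, so when the timers fire they all run back-to-back and the browser never gets a gap. The usual pattern, sketched below against the same globals, is to draw one batch and let that batch schedule the next one when it finishes:

      // draw the polygons 50 at a time; each batch yields to the browser
      // and schedules the following batch itself
      function drawNextBatch() {
          var remaining = 50;
          while (remaining-- > 0 && coordsObject.length > 0) {
              var polygon = coordsObject.pop();
              // ... the existing per-polygon drawing code goes here ...
          }
          if (coordsObject.length > 0) {
              setTimeout(drawNextBatch, 50);   // give the UI a moment, then continue
          }
      }

      // kick off once, after coordsObject has been filled:
      setTimeout(drawNextBatch, 0);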

    Read the article

  • Keyboard for programming

    - by exhuma
    This may seem a bit a tangential topic. It's not directly related to actual code, but is important for our line of work nevertheless. Over the years, I've switched keyboards a few times. All of them had slightly different key layouts. And I'm not talking about the language/locale layout, but the physical layout! Why not the locale layout? Well, quite frankly, that's easy to change via software. I personally have a German keyboard but have it set to the UK layout. Why? It's quite hard to find different layouts in the shops where I live. Even ordering is not always easy in the shops. So that leaves me with Internet shops. But I prefer to "test" my keyboards before buying. The most notable changes are: Mangled "Home Key Block" I've seen this first on a Logitech keyboard, but it may have originated elsewhere. Shape of the "Enter" key I've seen three different cases so far: Two lines high, wider at the top Two lines high, wider at the bottom One line high Shape of the Backspace button I've seen two types so far: One "character" wide Two "characters" wide OS Keys For Macs, you have the Option and Command buttons, for Windows you have the Windows and Context Menu buttons. Cherry even produced a Linux keyboard once (unfortunately I cannot find many details except news results). I assume a dedicated Linux keyboard would sport a Compose key and had the SysRq always labelled as well (note that some standard layouts do this already). Obviously... .. all these differences entail that some keys have to be moved around the board a lot. Which means, if you are used to one and have to work on another one, you happen to hit the wrong keys quite often. As it happens, this is much more annoying for programmers as it is for people who write texts. Mainly because the keys which are moved around are special character keys, often used in programming. Often these hardware layouts depend also indirectly on where you buy the keyboards. Honestly, I haven't seen a keyboard with a one-line "Enter" key in Germany, nor Luxembourg. I may just have missed it but that's how it looks to me at least. A survey I've seen some attempts at surveys in the style "which keyboard is best for programming". But they all - in my opinion - are not using comparable sets. So I was wondering if it was possible to concoct a survey taking the above criteria into account. But ignoring key dimensions that one would be a bit overkill I guess ;) From what I can see there are the following types of physical layout: Backspace: 2-characters wide Enter: 2-Lines, wider top Backspace: 2-characters wide Enter: 1-Line Backspace: 1-character wide Enter: 2-Lines, wider bottom Then there are the other possible permutations (home-key block, os-keys), which in total makes for quite a large list of categories. Now, I wonder... Would anyone be interested in such a survey? I personally would. Because I am looking for the perfect fit for me. If yes, then I could really use the help of anyone here to propose some models to include in the survey. Once I have some models for each category (I'd say at least 3 per category) I could go ahead and write up a survey, put it on-line and let the it collect data for a while. What do you think?

    Read the article

  • In XSLT is it possible to use the value of an xpath expression in a call to a template using an par

    - by Cell
    I am performing an xsl transform and in it I call a template with a param using the following code <xsl:call-template name="GenerateColumns"> <xsl:with-param name="curRow" select="$curRow"/> <xsl:with-param name="curCol" select="$curCol + 1"/> </xsl:call-template> This calls a template function which outputs part of a table element in HTML. The curRow and curCol are used to determine which row and column we are in the table. gbl_maxCols is set to the number of columns in an html table <xsl:template name="GenerateColumns"> <xsl:when test="$curCol &lt;= $gbl_maxCols"> <td> <xsl:attribute="colspan"> <xsl:value-of select="/page/column/@skipColumns"/> </xsl:attribute> </xsl:when> </xsl:template> The result of this function is a set of td elements, however some of these elements (those with a skipColumn attribute greater than 1 span more than 1 column, I need to skip this many columns with the next call to generateColumns. this works just like I would expect in the case where I simply increment the curCol param but I have a case where I need to use the value from the xml attribute skipColumns in the math to calculate the value for curCol. In the above case I iterate through all the columns and this works for the majority of my use cases. However in same cases I need to skip over some of the columns and need to pass in that value from the xml attribute to calculate how many columns I need to skip. My naive first attempt was something like this <xsl:call-template name="GenerateColumns"> <xsl:with-param name="curRow" select="$curRow"/> <xsl:with-param name="curCol" select="$curCol + /page/column/@skipColumns"/> </xsl:call-template> But unforutnately this does not seem to work. Is there any way to use an attribute from an xml page in the calculation for the value of a param in xsl. My xml page is something like this (edited heavily since the xml file is rather large) <page> <column name="blank" skipColumns="1"/> <column name="blank" skipColumns="1"/> <column name="test" skipColumns="3"/> <column name="blank" skipColumns="1"/> <column name="test2" skipColumns="6"/> </page> after all of this I would like to have a set of td elements like the following <td></td><td></td><td colSpan="3"></td><td></td><td colSpan="6"></td> if I just iterate through the columns I instead end up with something like this which gives me more td elements than I should have <td></td><td></td><td colSpan="3"></td><td></td><td colSpan="6"></td><td></td><td></td><td></td><td></td><td></td><td></td><td></td> Edited to provide more information
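    One observation, sketched against the sample XML rather than the real stylesheet: the expression $curCol + /page/column/@skipColumns is legal XPath, but an absolute path always converts to the value of its first matching node, so every call would add the first column's skipColumns. Either pass the relevant column node (or its attribute) into the template, or let the column elements drive the loop directly, as in this minimal version that produces the desired td list:

      <xsl:template match="page">
        <tr>
          <xsl:for-each select="column">
            <td>
              <xsl:if test="@skipColumns &gt; 1">
                <xsl:attribute name="colspan">
                  <xsl:value-of select="@skipColumns"/>
                </xsl:attribute>
              </xsl:if>
            </td>
          </xsl:for-each>
        </tr>
      </xsl:template>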

    Read the article

  • SQL Server 2005 Blocking Problem (ASYNC_NETWORK_IO)

    - by ivankolo
    I am responsible for a third-party application (no access to source) running on IIS and SQL Server 2005 (500 concurrent users, 1TB data, 8 IIS servers). We have recently started to see significant blocking on the database (after months of running this application in production with no problems). This occurs at random intervals during the day, approximately every 30 minutes, and affects between 20 and 100 sessions each time. All of the sessions eventually hit the application timeout and the sessions abort. The problem disappears and then gradually re-emerges. The SPID responsible for the blocking always has the following features: WAIT TYPE = ASYNC_NETWORK_IO The SQL being run is “(@claimid varchar(15))SELECT claimid, enrollid, status, orgclaimid, resubclaimid, primaryclaimid FROM claim WHERE primaryclaimid = @claimid AND primaryclaimid < claimid)”. This is relatively innocuous SQL that should only return one or two records, not a large dataset. NO OTHER SQL statements have been implicated in the blocking, only this SQL statement. This is parameterized SQL for which an execution plan is cached in sys.dm_exec_cached_plans. This SPID has an object-level S lock on the claim table, so all UPDATEs/INSERTs to the claim table are also blocked. HOST ID varies. Different web servers are responsible for the blocking sessions. E.g., sometimes we trace back to web server 1, sometimes web server 2. When we trace back to the web server implicated in the blocking, we see the following: There is always some sort of application-related error in the Event Log on the web server, linked to the Host ID and Host Process ID from the SQL session. The error messages vary, usually some sort of SystemOutofMemory. (These error messages seem to be similar to error messages that we have seen in the past without such dramatic consequences. We think this was happening before, but didn't lead to blocking. Why now?) No known problems with the network adapters on either the web servers or the SQL server. (In any event the record set returned by the offending query would be small.) Things ruled out: Indexes are regularly defragmented. Statistics regularly updated. Increased sample size of statistics on claim.primaryclaimid. Forced recompilation of the cached execution plan. Created a compound index with primaryclaimid, claimid. No networking problems. No known issues on the web server. No changes to application software on web servers. We hypothesize that the chain of events goes something like this: The web server process submits the SQL above. SQL Server executes the SQL, during which it acquires a lock on the claim table. The web server process gets an error and dies. The SQL Server session is hung waiting for the web server process to read the data set. SQL Server sessions that need to get X locks on parts of the claim table (anyone processing claims) are blocked by the lock on the claim table and remain blocked until they all hit the application timeout. Any suggestions for troubleshooting while waiting for the vendor's assistance would be most welcome. Is there a way to force SQL Server to lock at the row/page level for this particular SQL statement only? Is there a way to set a threshold on ASYNC_NETWORK_IO waits only?

    Read the article

  • Python: Memory usage and optimization when modifying lists

    - by xApple
    The problem My concern is the following: I am storing a relatively large dataset in a classic Python list, and in order to process the data I must iterate over the list several times, perform some operations on the elements, and often pop an item out of the list. It seems that deleting one item out of a Python list costs O(N), since Python has to copy all the items above the element at hand down one place. Furthermore, since the number of items to delete is approximately proportional to the number of elements in the list, this results in an O(N^2) algorithm. I am hoping to find a solution that is cost effective (time- and memory-wise). I have studied what I could find on the internet and have summarized my different options below. Which one is the best candidate? Keeping a local index: while processingdata: index = 0 while index < len(somelist): item = somelist[index] dosomestuff(item) if somecondition(item): del somelist[index] else: index += 1 This is the original solution I came up with. Not only is this not very elegant, but I am hoping there is a better way to do it that remains time and memory efficient. Walking the list backwards: while processingdata: for i in xrange(len(somelist) - 1, -1, -1): item = somelist[i] dosomestuff(item) if somecondition(item): somelist.pop(i) This avoids incrementing an index variable but ultimately has the same cost as the original version. It also breaks the logic of dosomestuff(item), which wants to process the items in the same order as they appear in the original list. Making a new list: while processingdata: for i, item in enumerate(somelist): dosomestuff(item) newlist = [] for item in somelist: if not somecondition(item): newlist.append(item) somelist = newlist gc.collect() This is a very naive strategy for eliminating elements from a list and requires lots of memory since an almost full copy of the list must be made. Using list comprehensions: while processingdata: for i, item in enumerate(somelist): dosomestuff(item) somelist[:] = [x for x in somelist if not somecondition(x)] This is very elegant but under the cover it walks the whole list one more time and must copy most of the elements in it. My intuition is that this operation probably costs more than the original del statement, at least memory-wise. Keep in mind that somelist can be huge and that any solution that will iterate through it only once per run will probably always win. Using the filter function: while processingdata: for i, item in enumerate(somelist): dosomestuff(item) somelist = filter(lambda x: not somecondition(x), somelist) This also creates a new list occupying lots of RAM. Using itertools' filter function: from itertools import ifilterfalse while processingdata: for item in ifilterfalse(somecondition, somelist): dosomestuff(item) This version of the filter call does not create a new list, but will not call dosomestuff on every item, breaking the logic of the algorithm. I am including this example only for the purpose of creating an exhaustive list. Moving items up the list while walking: while processingdata: index = 0 for item in somelist: dosomestuff(item) if not somecondition(item): somelist[index] = item index += 1 del somelist[index:] This is a subtle method that seems cost effective. I think it will move each item (or the pointer to each item?) exactly once, resulting in an O(N) algorithm. Finally, I hope Python will be intelligent enough to resize the list at the end without allocating memory for a new copy of the list. Not sure though.
Abandoning Python lists: class Doubly_Linked_List: def __init__(self): self.first = None self.last = None self.n = 0 def __len__(self): return self.n def __iter__(self): return DLLIter(self) def iterator(self): return self.__iter__() def append(self, x): x = DLLElement(x) x.next = None if self.last is None: x.prev = None self.last = x self.first = x self.n = 1 else: x.prev = self.last x.prev.next = x self.last = x self.n += 1 class DLLElement: def __init__(self, x): self.next = None self.data = x self.prev = None class DLLIter: etc... This type of object resembles a Python list in a limited way. However, deletion of an element is guaranteed O(1). I would not like to go here since this would require massive amounts of code refactoring almost everywhere.
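    A minimal sketch of the "moving items up the list while walking" option described above, wrapped as a reusable helper. The names dosomestuff and somecondition are the question's hypothetical callables (somecondition returning True means "remove this item"); nothing here is part of any existing library:

        def process_and_compact(somelist, dosomestuff, somecondition):
            # Visit every item exactly once, write survivors toward the front
            # of the list, then truncate the tail in a single slice deletion.
            write = 0
            for item in somelist:
                dosomestuff(item)
                if not somecondition(item):
                    somelist[write] = item
                    write += 1
            del somelist[write:]   # one O(k) truncation instead of many O(N) deletes
            return somelist

        # Example usage with made-up data: processing is a no-op, even numbers are removed.
        data = list(range(10))
        process_and_compact(data, lambda x: None, lambda x: x % 2 == 0)   # data is now [1, 3, 5, 7, 9]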

    Read the article

  • Scrolling a canvas as a shape you're moving approaches its edges

    - by Steven Sproat
    Hi, I develop a Python-based drawing program, Whyteboard. I have tools that let the user create new shapes on the canvas, such as text/images/rectangles/circles/polygons. I also have a Select tool that allows the user to modify these shapes - for example, moving a shape's position, resizing, or editing a polygon's points' positions. I'm adding in a new feature where moving or resizing a point near the canvas edge will automatically scroll the canvas. I think it's a good idea in terms of program usability, and it annoys me when other programs don't have this feature. I've made some good progress on coding this; below is some Python code to demonstrate what I'm doing. These functions demonstrate how some shapes calculate their "edges": def find_edges(self): """A line.""" self.edges = {EDGE_TOP: min(self.y, self.y2), EDGE_RIGHT: max(self.x, self.x2), EDGE_BOTTOM: max(self.y, self.y2), EDGE_LEFT: min(self.x, self.x2)} def find_edges(self): """An image""" self.edges = {EDGE_TOP: self.y, EDGE_RIGHT: self.x + self.image.GetWidth(), EDGE_BOTTOM: self.y + self.image.GetHeight(), EDGE_LEFT: self.x} def find_edges(self): """Get the bounding rectangle for the polygon""" xmin = min(x for x, y in self.points) ymin = min(y for x, y in self.points) xmax = max(x for x, y in self.points) ymax = max(y for x, y in self.points) self.edges = {EDGE_TOP: ymin, EDGE_RIGHT: xmax, EDGE_BOTTOM: ymax, EDGE_LEFT: xmin} And here's the code I have so far to implement the scrolling when a shape nears the edge: def check_canvas_scroll(self, x, y, moving=False): """ We check that the x/y coords are within 50px from the edge of the canvas and scroll the canvas accordingly. If the shape is being moved, we need to check specific edges of the shape (e.g. left/right side of rectangle) """ size = self.board.GetClientSizeTuple() # visible area of the canvas if not self.board.area > size: # canvas is too small to need to scroll return start = self.board.GetViewStart() # user's starting "viewport" scroll = (-1, -1) # -1 means no change if moving: if self.shape.edges[EDGE_RIGHT] > start[0] + size[0] - 50: scroll = (start[0] + 5, -1) if self.shape.edges[EDGE_BOTTOM] > start[1] + size[1] - 50: scroll = (-1, start[1] + 5) # snip others else: if x > start[0] + size[0] - 50: scroll = (start[0] + 5, -1) if y > start[1] + size[1] - 50: scroll = (-1, start[1] + 5) # snip others self.board.Scroll(*scroll) This code actually works pretty well. If we're moving a shape, then we need to know its edges to calculate when they're coming close to the canvas edge. If we're resizing just a single point, then we just use the x/y coords of that point to see if it's close to the edge. The problem I'm having is a little tricky to describe - basically, if you move a shape to the left and stop moving it while it's positioned within 50px of the canvas edge, then the next time you go to move the shape, the code that says "ok, is this shape close to the edge?" gets triggered, and the canvas scrolls to the left, even if you're moving the shape to the right. Can anyone think of how to stop this? I created a YouTube video to demonstrate the issue. At about 0:54, I move a polygon to the left of the canvas and position it there. The next time I move it, the canvas scrolls to the left even though I'm moving it to the right. Another thing I'd like to add, but am stuck on, is having the scroll gain momentum the longer a shape keeps scrolling - so that with a large canvas you're not moving a shape for ages, 5px at a time, when you need to cover a 2000px distance.
Any suggestions there? Thanks all - sorry for the super long question!
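    One way to attack the "scrolls the wrong way" problem, sketched under the assumption that the tool remembers the previous drag position (prev_x/prev_y below are hypothetical attributes updated on each motion event, not part of Whyteboard's actual code): only scroll toward an edge when the pointer is actually moving toward that edge, by combining the proximity test with the sign of the movement delta.

        def scroll_for_motion(x, y, prev_x, prev_y, start, size, step=5, margin=50):
            # Returns (x_scroll, y_scroll) using the same convention as the question:
            # -1 means "leave this axis unchanged".
            scroll_x, scroll_y = -1, -1
            if x > prev_x and x > start[0] + size[0] - margin:    # moving right, near right edge
                scroll_x = start[0] + step
            elif x < prev_x and x < start[0] + margin:            # moving left, near left edge
                scroll_x = max(start[0] - step, 0)
            if y > prev_y and y > start[1] + size[1] - margin:    # moving down, near bottom edge
                scroll_y = start[1] + step
            elif y < prev_y and y < start[1] + margin:            # moving up, near top edge
                scroll_y = max(start[1] - step, 0)
            return scroll_x, scroll_y

    For the momentum idea, step could be multiplied by a counter that grows while consecutive motion events keep triggering a scroll and resets to 1 as soon as one does not.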

    Read the article

  • [C#] Not enough memory or not enough handles?

    - by Nayan
    I am working on a large-scale project where a custom (pretty good and robust) framework has been provided and we have to use it for showing forms and views. There is an abstract class StrategyEditor (derived from some class in the framework) which is instantiated whenever a new StrategyForm is opened. StrategyForm (a customized window frame) contains StrategyEditor. StrategyEditor contains StrategyTab. StrategyTab contains StrategyCanvas. This is a small portion of the big class structure, to clarify that many objects will be created if one StrategyForm object is allocated in memory at run-time. My component owns all these classes mentioned above except StrategyForm, whose code is not in my control. Now, at run-time, the user opens up many strategy objects (which triggers creation of new StrategyForm objects). After creating approx. 44 strategy objects, we see that the USER OBJECT HANDLES (I'll use UOH from here onwards) created by the application reach about 20k+, while in the registry the default limit for handles is 10k. Read more about User Objects here. Testing on different machines made it clear that the number of strategy objects opened before the message pops up differs - on one machine it is 44, on another it can be 40. When we see the message pop up, it means that the application is going to respond slowly. It gets worse with a few more objects, and then creation of window frames and subsequent objects fails. We first thought that it was a not-enough-memory issue. But then reading more about new in C# helped in understanding that an exception would be thrown if the app ran out of memory. This is not a memory issue then, I feel (Task Manager also showed 1.5GB+ of available memory). M/C specs: Core 2 Duo 2GHz+, 4GB RAM, 80GB+ free disk space for page file, Virtual Memory set: 4000 - 6000. My questions: Q1. Does this look like a memory issue, and am I wrong that it is not? Q2. Does this point to exhaustion of free UOHs (as I'm thinking), which is resulting in failure to create window handles? Q3. How can we avoid loading up a StrategyEditor object (beyond a threshold, keeping an eye on the current usage of UOHs)? (We already know how to fetch the number of UOHs in use, so don't go there.) Keep in mind that the call to new StrategyForm() is outside the control of my component. Q4. I am a bit confused - what exactly are handles to user objects? Is MSDN talking about any object that we create, or only some specific objects like window handles, cursor handles, icon handles? Q5. What exactly causes a UOH to be used up? (Almost the same as Q4.) I would be really thankful to anyone who can give me some knowledgeable answers. Thanks much! :)

    Read the article

  • jquery-ui .draggable is not a function error

    - by niczoom
    I am getting the following error (using Firefox 3.5.9): $("#dragMe_" + myCount).draggable is not a function $("#dragMe_"+myCount).draggable({ containment: 'parent', axis: 'y' }); Line 231 http://www.liamharding.com/pgi/pgi.php Link to page in question: http://www.liamharding.com/pgi/pgi.php For example, click the two checkboxes 'R25 + R50 Random Walk', then click Show/Refresh Graphs. Two graphs should be displayed, both with draggable thin horizontal red lines. Re-open the options panel and de-select R50 Random Walk, then click Show/Refresh Graphs again: one graph is removed and the other updated. Now re-select R50 Random Walk and click Show/Refresh - the still-checked R25 graph gets updated OK, but the above error occurs and I can't figure out why. Initially, when displaying the first two graphs, it uses the same code and it works just fine. The error occurs on this line: //********* ERROR OCCURS HERE ********** $("#dragMe_"+myCount).draggable({ containment: 'parent', axis: 'y' }); Here is the code for the Show/Refresh Graphs.click() event: $("#btnShowGraphs").click(function(){ // Hide 'Options' panel (only if open AND an index is checked) if (IsOptionsPanelOpen && ($("#indexCheck:checked").length != 0)) {$('#optionImgDiv').click();}; var myCount = 0; var divIsNew = false; var gif_loader_small = '<div id="gif_loader_small"></div>'; var gif_loader_big = '<div id="gif_loader_big"></div>'; $("input:checkbox[id=indexCheck]").each(function() { if (this.checked) { // check for an existing wrapper div for the current forex item, using the current checkbox value (forex name) if ( $("#"+this.value).length == 0 ) { console.log("New 'graphContainer' div : "+this.value); divIsNew = true; // Create new divs for graph image, drag bar and heading var $structure = " \ <li id=\""+this.value+"\" class=\"graphContainer\"> \ <div id=\"dragMe_"+myCount+"\" class=\"dragMe\"></div> \ <div id=\"image_"+myCount+"\" class=\"image\"></div> \ <div id=\"heading_"+myCount+"\" class=\"heading\"></div> \ </li> \ "; $('#graphResults').append($structure); // Hide dragMe DIV $('#dragMe_'+myCount).hide(); // Make 'dragMe' draggable div //********* ERROR OCCURS HERE ********** $("#dragMe_"+myCount).draggable({ containment: 'parent', axis: 'y' }); } // Display small loading gif $(gif_loader_small).clone().appendTo( $(this).parent() ); // Display large circular loading gif var $loader = $(gif_loader_big); // add temporary css attributes onto existing graph divs as they need to be displayed differently if(!divIsNew){ console.log("Reposition existing 'gif_loader_big' div"); $loader = $(gif_loader_big).css({ "position" : "absolute", "top" : "35%", "opacity" : ".85"}); } // add newly styled big-loader-gif to index div $loader.clone().prependTo( $("#"+this.value) ); // Call function to fetch image using ajax get_graph(this, myCount, divIsNew); } else { // REMOVE 'graphContainer' DIVS NOT CHECKED // check for div existence if ( $("#"+this.value).length != 0 ) { console.log("DESTROY: #dragMe_"+myCount+", REMOVE: #"+this.value); // DESTROY draggable //$("#dragMe_"+myCount).draggable("destroy"); // remove div $("#"+this.value).remove(); } } // reset counters and other variables myCount++; divIsNew = false; console.log("Complete: "+this.value+", NEXT index"); }); });

    Read the article

  • What container type provides better (average) performance than std::map?

    - by Truncheon
    In the following example a std::map structure is filled with 26 values from A - Z (for key) and 0 - 26 for value. The time taken (on my system) to lookup the last entry (10000000 times) is roughly 250 ms for the vector, and 125 ms for the map. (I compiled using release mode, with O3 option turned on for g++ 4.4) But if for some odd reason I wanted better performance than the std::map, what data structures and functions would I need to consider using? I apologize if the answer seems obvious to you, but I haven't had much experience in the performance critical aspects of C++ programming. UPDATE: This example is rather trivial and hides the true complexity of what I'm trying to achieve. My real world project is a simple scripting language that uses a parser, data tree, and interpreter (instead of a VM stack system). I need to use some kind of data structure (perhaps map) to store the variables names created by script programmers. These are likely to be pretty randomly named, so I need a lookup method that can quickly find a particular key within a (probably) fairly large list of names. #include <ctime> #include <map> #include <vector> #include <iostream> struct mystruct { char key; int value; mystruct(char k = 0, int v = 0) : key(k), value(v) { } }; int find(const std::vector<mystruct>& ref, char key) { for (std::vector<mystruct>::const_iterator i = ref.begin(); i != ref.end(); ++i) if (i->key == key) return i->value; return -1; } int main() { std::map<char, int> mymap; std::vector<mystruct> myvec; for (int i = 'a'; i < 'a' + 26; ++i) { mymap[i] = i - 'a'; myvec.push_back(mystruct(i, i - 'a')); } int pre = clock(); for (int i = 0; i < 10000000; ++i) { find(myvec, 'z'); } std::cout << "linear scan: milli " << clock() - pre << "\n"; pre = clock(); for (int i = 0; i < 10000000; ++i) { mymap['z']; } std::cout << "map scan: milli " << clock() - pre << "\n"; return 0; }

    Read the article

  • What are good CLI tools for JSON?

    - by jasonmp85
    General Problem Though I may be diagnosing the root cause of an event, determining how many users it affected, or distilling timing logs in order to assess the performance and throughput impact of a recent code change, my tools stay the same: grep, awk, sed, tr, uniq, sort, zcat, tail, head, join, and split. To glue them all together, Unix gives us pipes, and for fancier filtering we have xargs. If these fail me, there's always perl -e. These tools are perfect for processing CSV files, tab-delimited files, log files with a predictable line format, or files with comma-separated key-value pairs. In other words, files where each line has next to no context. XML Analogues I recently needed to trawl through gigabytes of XML to build a histogram of usage by user. This was easy enough with the tools I had, but for more complicated queries the normal approaches break down. Say I have files with items like this: <foo user="me"> <baz key="zoidberg" value="squid" /> <baz key="leela" value="cyclops" /> <baz key="fry" value="rube" /> </foo> And let's say I want to produce a mapping from user to average number of <baz>s per <foo>. Processing line-by-line is no longer an option: I need to know which user's <foo> I'm currently inspecting so I know whose average to update. Any sort of Unix one-liner that accomplishes this task is likely to be inscrutable. Fortunately in XML-land, we have wonderful technologies like XPath, XQuery, and XSLT to help us. Previously, I had gotten accustomed to using the wonderful XML::XPath Perl module to accomplish queries like the one above, but after finding a TextMate plugin that could run an XPath expression against my current window, I stopped writing one-off Perl scripts to query XML. And I just found out about XMLStarlet, which is installing as I type this and which I look forward to using in the future. JSON Solutions? So this leads me to my question: are there any tools like this for JSON? It's only a matter of time before some investigation task requires me to do similar queries on JSON files, and without tools like XPath and XSLT, such a task will be a lot harder. If I had a bunch of JSON that looked like this: { "firstName": "Bender", "lastName": "Robot", "age": 200, "address": { "streetAddress": "123", "city": "New York", "state": "NY", "postalCode": "1729" }, "phoneNumber": [ { "type": "home", "number": "666 555-1234" }, { "type": "fax", "number": "666 555-4567" } ] } And wanted to find the average number of phone numbers each person had, I could do something like this with XPath: fn:avg(/fn:count(phoneNumber)) Questions Are there any command-line tools that can "query" JSON files in this way? If you have to process a bunch of JSON files on a Unix command line, what tools do you use? Heck, is there even work being done to make a query language like this for JSON? If you do use tools like this in your day-to-day work, what do you like/dislike about them? Are there any gotchas? I'm noticing more and more data serialization is being done using JSON, so processing tools like this will be crucial when analyzing large data dumps in the future. Language libraries for JSON are very strong and it's easy enough to write scripts to do this sort of processing, but to really let people play around with the data, shell tools are needed. Related Questions Grep and Sed Equivalent for XML Command Line Processing Is there a query language for JSON? JSONPath or other XPath like utility for JSON/Javascript; or Jquery JSON
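    Until a dedicated JSON query tool turns up, a small script over Python's standard json module can stand in for the XPath-style average above. This is only a sketch, and it assumes each file passed on the command line contains one JSON object shaped like the example record:

        import json
        import sys

        # Average number of phoneNumber entries per record, across all files given as arguments.
        counts = []
        for path in sys.argv[1:]:
            with open(path) as f:
                record = json.load(f)
            counts.append(len(record.get("phoneNumber", [])))
        if counts:
            print(sum(counts) / float(len(counts)))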

    Read the article

  • How to design a C / C++ library to be usable in many client languages?

    - by Brian Schimmel
    I'm planning to code a library that should be usable by a large number of people on a wide spectrum of platforms. What do I have to consider to design it right? To make this question more specific, there are four "subquestions" at the end. Choice of language Considering all the known requirements and details, I concluded that a library written in C or C++ was the way to go. I think the primary usage of my library will be in programs written in C, C++ and Java SE, but I can also think of reasons to use it from Java ME, PHP, .NET, Objective C, Python, Ruby, bash scripts, etc... Maybe I cannot target all of them, but if it's possible, I'll do it. Requirements It would be too much to describe the full purpose of my library here, but there are some aspects that might be important to this question: The library itself will start out small, but definitely will grow to enormous complexity, so it is not an option to maintain several versions in parallel. Most of the complexity will be hidden inside the library, though. The library will construct an object graph that is used heavily inside. Some clients of the library will only be interested in specific attributes of specific objects, while other clients must traverse the object graph in some way. Clients may change the objects, and the library must be notified thereof. The library may change the objects, and the client must be notified thereof, if it already has a handle to that object. The library must be multi-threaded, because it will maintain network connections to several other hosts. While some requests to the library may be handled synchronously, many of them will take too long and must be processed in the background, and notify the client on success (or failure). Of course, answers are welcome no matter if they address my specific requirements, or if they answer the question in a general way that matters to a wider audience! My assumptions, so far So here are some of my assumptions and conclusions, which I gathered in the past months: Internally I can use whatever I want, e.g. C++ with operator overloading, multiple inheritance, template meta programming... as long as there is a portable compiler which handles it (think of gcc / g++). But my interface has to be a clean C interface that does not involve name mangling. Also, I think my interface should only consist of functions, with basic/primitive data types (and maybe pointers) passed as parameters and return values. If I use pointers, I think I should only use them to pass them back to the library, not to operate directly on the referenced memory. For usage in a C++ application, I might also offer an object-oriented interface (which is also prone to name mangling, so the app must either use the same compiler, or include the library in source form). Is this also true for usage in C#? For usage in Java SE / Java EE, the Java Native Interface (JNI) applies. I have some basic knowledge about it, but I should definitely double-check it. Not all client languages handle multithreading well, so there should be a single thread talking to the client. For usage on Java ME, there is no such thing as JNI, but I might go with NestedVM. For usage in Bash scripts, there must be an executable with a command line interface. For the other client languages, I have no idea. For most client languages, it would be nice to have kind of an adapter interface written in that language.
I think there are tools to automatically generate this for Java and some others. For object-oriented languages, it might be possible to create an object-oriented adapter which hides the fact that the interface to the library is function based - but I don't know if it's worth the effort. Possible subquestions: Is this possible with manageable effort, or is it just too much portability? Are there any good books / websites about this kind of design criteria? Are any of my assumptions wrong? Which open source libraries are worth studying to learn from their design / interface / source? Meta: this question is rather long, do you see any way to split it into several smaller ones? (If you reply to this, do it as a comment, not as an answer)
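    To make the "clean C interface: functions, primitive types, and opaque pointers only" assumption concrete, here is a sketch of how such a library could be consumed from Python through ctypes. The library name and every mylib_* function below are hypothetical, invented purely for illustration and not part of any existing project:

        import ctypes

        lib = ctypes.CDLL("libmylib.so")   # hypothetical shared library built from the C interface

        # The client only ever receives an opaque handle and passes it back;
        # it never touches the memory behind it.
        lib.mylib_create.restype = ctypes.c_void_p
        lib.mylib_set_option.argtypes = [ctypes.c_void_p, ctypes.c_char_p, ctypes.c_int]
        lib.mylib_destroy.argtypes = [ctypes.c_void_p]

        handle = lib.mylib_create()
        lib.mylib_set_option(handle, b"timeout", 30)
        lib.mylib_destroy(handle)

    A thin object-oriented wrapper class around calls like these is essentially the per-language "adapter interface" mentioned above.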

    Read the article
