Search Results

Search found 11409 results on 457 pages for 'large teams'.

Page 369/457

  • Should I move big data blobs in JSON or in a separate binary connection?

    - by Amagrammer
    QUESTION: Is it better to send large data blobs in JSON for simplicity, or send them as binary data over a separate connection? If the former, can you offer tips on how to optimize the JSON to minimize size? If the latter, is it worth it to logically connect the JSON data to the binary data using an identifier that appears in both, e.g., as "data" : "<unique identifier>" in the JSON and with the first bytes of the data blob being <unique identifier>?

    CONTEXT: My iPhone application needs to receive JSON data over the 3G network. This means that I need to think seriously about efficiency of data transfer, as well as the load on the CPU. Most of the data transfers will be relatively small packets of text data for which JSON is a natural format and for which there is no point in worrying much about efficiency. However, some of the most critical transfers will be big blobs of binary data -- definitely at least 100 kilobytes of data, and possibly closer to 1 megabyte as customers accumulate a longer history with the product. (Note: I will be caching what I can on the iPhone itself, but the data still has to be transferred at least once.) It is NOT streaming data. I will probably use a third-party JSON SDK -- the one I am using during development is here. Thanks
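
    One possible shape for the identifier approach (a sketch only -- the field names "data_id" and the descriptor layout are illustrative, not from the question): the JSON side carries a small descriptor, and the binary stream opens with the same identifier as a fixed-length prefix.

        { "type" : "history_blob",
          "data_id" : "c0ffee01",
          "size_bytes" : 524288 }

    The binary connection would then send the 8-byte ASCII identifier "c0ffee01" followed by the raw payload, letting the client match the two without base64-encoding the blob into the JSON (base64 inflates the transfer by roughly 33%).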

  • Telephone Number to Geolocation UK

    - by David Toy
    Is there a service that provides latitude and longitude for UK phone numbers? For example:

        Query: 0141 574 xxx
        Returns: (55.8659829, -4.2602205) [Glasgow City Centre]

    Allow me to stress that I am not looking for reverse directory enquiries. I am more interested in 'local area' for things like weather by phone or "Where's my nearest Pizza Shop?" If this service doesn't exist, your suggestions on how to implement it or where to get data from would also be incredibly useful. I am aware that Ofcom provides a list of area codes with a place name [1] suitable for geolocation, but I have my concerns about resolution. I see this as a particular problem in smaller towns and rural areas where an area code will cover a large geographical area.

    Second example: Area Code: 01555, Ofcom: Lanark. However:

        01555 860xxx is Crossford (4 miles W of Lanark)
        01555 77xxxx is Carluke (5 miles NW)
        01555 89xxxx is Lesmahagow (5 miles SW)
        01555 840xxx is Carnwath (7 miles NE)

    Therefore 01555 covers roughly 80 sq miles. That's not particularly local.

    [1] Ofcom Area Code Tool: http://www.ofcom.org.uk/consumer/2009/09/telephone-area-codes-tool/
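
    If this ends up being built from prefix data, longest-prefix matching handles the mixed-resolution problem (a 01555 860 entry beats the bare 01555 entry when one exists). A minimal sketch, assuming a hypothetical prefix_table dict mapping dialling prefixes to coordinates:

        def locate(number, prefix_table):
            """Return the (lat, lon) for the longest matching dialling prefix."""
            digits = number.replace(" ", "")
            for length in range(len(digits), 0, -1):
                hit = prefix_table.get(digits[:length])
                if hit is not None:
                    return hit
            return None

        # Hypothetical data: finer prefixes win over the coarse area code.
        prefix_table = {
            "01555":    (55.6736, -3.7820),   # Lanark (area-code fallback)
            "01555860": (55.7300, -3.8300),   # Crossford
        }
        print(locate("01555 860123", prefix_table))  # -> Crossford coordinates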

  • Apply a recursive CTE on grouped table rows (SQL Server 2005).

    - by Evan V.
    Hi all, I have a table (ROOMUSAGE) containing the times people check in and out of rooms, grouped by PERSONKEY and ROOMKEY. It looks like this:

        PERSONKEY | ROOMKEY | CHECKIN         | CHECKOUT        | ROW
        ----------------------------------------------------------------
        1         | 8       | 13-4-2010 10:00 | 13-4-2010 11:00 | 1
        1         | 8       | 13-4-2010 08:00 | 13-4-2010 09:00 | 2
        1         | 1       | 13-4-2010 15:00 | 13-4-2010 16:00 | 1
        1         | 1       | 13-4-2010 14:00 | 13-4-2010 15:00 | 2
        1         | 1       | 13-4-2010 13:00 | 13-4-2010 14:00 | 3
        13        | 2       | 13-4-2010 15:00 | 13-4-2010 16:00 | 1
        13        | 2       | 13-4-2010 15:00 | 13-4-2010 16:00 | 2

    I want to select just the consecutive rows for each PERSONKEY, ROOMKEY grouping. So the desired resulting table is:

        PERSONKEY | ROOMKEY | CHECKIN         | CHECKOUT        | ROW
        ----------------------------------------------------------------
        1         | 8       | 13-4-2010 10:00 | 13-4-2010 11:00 | 1
        1         | 1       | 13-4-2010 15:00 | 13-4-2010 16:00 | 1
        1         | 1       | 13-4-2010 14:00 | 13-4-2010 15:00 | 2
        1         | 1       | 13-4-2010 13:00 | 13-4-2010 14:00 | 3
        13        | 2       | 13-4-2010 15:00 | 13-4-2010 16:00 | 1

    I want to avoid using cursors, so I thought I would use a recursive CTE. Here is what I came up with:

        ;with CTE (PERSONKEY, ROOMKEY, CHECKIN, CHECKOUT, ROW) as
        (select RU.PERSONKEY, RU.ROOMKEY, RU.CHECKIN, RU.CHECKOUT, RU.ROW
         from ROOMUSAGE RU
         where RU.ROW = 1
         union all
         select RU.PERSONKEY, RU.ROOMKEY, RU.CHECKIN, RU.CHECKOUT, RU.ROW
         from ROOMUSAGE RU
         inner join CTE on RU.ROWNUM = CTE.ROWNUM + 1
         where CTE.CHECKIN = RU.CHECKOUT
           and CTE.PERSONKEY = RU.PERSONKEY
           and CTE.ROOMKEY = RU.ROOMKEY)

    This worked OK for very small datasets (under 100 records) but it's unusable on large datasets. I'm thinking that I should somehow apply the CTE recursively on each PERSONKEY, ROOMKEY grouping in my ROOMUSAGE table, but I am not sure how to do that. Any help would be much appreciated, Cheers!
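
    One hedged observation: the recursive member joins on a ROWNUM column that does not appear in the table as described, and the join itself omits the grouping keys. A sketch of the same idea anchored on the (PERSONKEY, ROOMKEY, ROW) triple, assuming ROW restarts at 1 within each grouping and that an index on those three columns exists:

        ;with CTE as
        (
            -- anchor: the first (most recent) row of every grouping
            select PERSONKEY, ROOMKEY, CHECKIN, CHECKOUT, ROW
            from ROOMUSAGE
            where ROW = 1
            union all
            -- recurse: extend a chain only while the next row abuts the current one
            select RU.PERSONKEY, RU.ROOMKEY, RU.CHECKIN, RU.CHECKOUT, RU.ROW
            from ROOMUSAGE RU
            inner join CTE
                on  RU.PERSONKEY = CTE.PERSONKEY
                and RU.ROOMKEY   = CTE.ROOMKEY
                and RU.ROW       = CTE.ROW + 1
                and RU.CHECKOUT  = CTE.CHECKIN   -- consecutive in time
        )
        select PERSONKEY, ROOMKEY, CHECKIN, CHECKOUT, ROW
        from CTE
        order by PERSONKEY, ROOMKEY, ROW;

    Without a supporting index the recursive join rescans ROOMUSAGE once per level, which would explain the collapse on large datasets.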

  • How much business logic belongs in RIA services layer?

    - by jkohlhepp
    I have been experimenting recently with Silverlight, RIA Services, and Entity Framework using .NET 4.0. I'm trying to figure out if that stack makes sense for use in any of my upcoming projects. It certainly seems like these technologies can be very productive for developing applications, but I'm struggling to decide how an application on top of this stack should be architected.

    The main issue I have is that in most of the demos I've seen, most of the business logic ends up as DataAnnotations and custom validations in the RIA Services domain service class. This seems inappropriate to me. I view the domain service as basically a glorified web service that happens to make it easy to push information to the client. But most of what I've seen seems to orient the domain service as the main source of business logic in the application.

    So, my questions:

        1. What is the best location for business logic (rules, validations, behaviors, authorization) in an application using this stack?
        2. Are there any guidelines published at an architectural level for using this stack?

    My questions pertain to large, complex, and long-lived applications. Obviously for an application of only a few screens this is less of a concern.

    Edit: Another thing I meant to mention is that obviously you can make the domain service class stupid, but then you lose a lot of the automagic entity information (e.g. validations) being pushed to the client. And then if you lose that, is there any point to using RIA Services?

  • XSL check param length

    - by AdRock
    I need to check if a param has a value in it: if it has, do one line, otherwise do the other line. I've got it working in that I don't get errors, but it's not taking the right branch. Here is the call which passes the value in the XSL stylesheet:

        <xsl:call-template name="volunteer_role">
            <xsl:with-param name="volrole" select="volunteer/roles" />
        </xsl:call-template>

    and here is the template where there is a choice whether to take this branch or that branch, depending on whether the param is empty:

        <xsl:template name="volunteer_role">
            <xsl:param name="volrole" select="'Not Available'" />
            <div class="small bold">ROLES:</div>
            <div class="large">
                <xsl:choose>
                    <xsl:when test="string-length($volrole)!=0">
                        <xsl:value-of select="$volrole" />
                    </xsl:when>
                    <xsl:otherwise>
                        <xsl:text> </xsl:text>
                    </xsl:otherwise>
                </xsl:choose>
            </div>
        </xsl:template>
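
    One hedged guess at the cause: select="volunteer/roles" always passes something (an empty node-set converts to the empty string, while an element holding only whitespace still has a non-zero string-length), and the 'Not Available' default only applies when the param is omitted entirely. Testing the normalized value sidesteps both, as in this sketch:

        <xsl:choose>
            <xsl:when test="normalize-space($volrole) != ''">
                <xsl:value-of select="$volrole" />
            </xsl:when>
            <xsl:otherwise>
                <xsl:text>Not Available</xsl:text>
            </xsl:otherwise>
        </xsl:choose>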

  • Backing up my locally hosted rails apps in preparation for OS upgrade

    - by stephen murdoch
    I have some apps running on Heroku. I will be upgrading my OS in two weeks. The last time I upgraded though (6 months ago) I ran into some problems. Here's what I did:

        1. copied all my rails apps onto DVD
        2. upgraded OS
        3. transferred rails apps from DVD to new OS

    Then, after setting up new SSH keys, I tried to push to some of my Heroku apps and, whilst I can't remember the exact error message off-hand, it more or less amounted to "fatal exception the remote end hung up". So I know that I'm doing something wrong here.

    First of all, is there any need for me to be putting my Heroku-hosted rails apps onto DVD? Would I be better just pulling all my apps from their Heroku repos once I've done the upgrade? What do others do here? The reason I stuck them on DVD is because I tend to push a specific production branch to Heroku and sometimes omit large development files from it...

    Secondly, was this problem caused by SSH keys? Should I have backed up the old keys and transferred them from my old OS to my new one too, or is Heroku perfectly happy to let you change OS's like that? My solution in the end was to just create new Heroku apps and reassign the custom domain names in the Heroku add-ons menu... I never actually thought of pulling from the Heroku repos, as I tend to push a specific branch to Heroku and that branch doesn't always have all the development files in it...

    I realise that the error message I mentioned doesn't particularly help anyone, but I didn't think to remember it 6 months ago. Any advice would be appreciated.

    PS - when I say upgrade, I mean a full install of the new version with a full format of the HDD.
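
    A sketch of the key-registration route, assuming the standard heroku gem of the time (the app name is a placeholder): after the OS reinstall, register the newly generated public key and clone each app straight from its Heroku remote rather than restoring from DVD.

        # register the new machine's key with your Heroku account
        heroku keys:add ~/.ssh/id_rsa.pub

        # pull a full copy of the app's repository (replace myapp)
        git clone git@heroku.com:myapp.git

    The "remote end hung up" symptom is consistent with Heroku still holding only the old machine's key, so backing up ~/.ssh (or re-running keys:add) should be enough; the DVD step is then only needed for anything never pushed to the remote.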

  • 3 Servers, is this a cluster?

    - by Andy Barlow
    Hello. At the moment I have one Ubuntu server, 9.10, running with a simple Samba share, a mail server, DNS server and DHCP server. Mostly it's just there for file sharing and email. I also have 2 other servers that are exactly the same hardware and spec as the first, which have an rsync set up to retrieve the shared folders and back them up. However, if the first server goes down, all of our shares disappear along with our mail, and the system must be rebuilt. Also I tend to find that if people are downloading a large amount from the file server, no-one can access their emails - especially in the morning when everyone is signing in at once.

    Would it be more beneficial for me to have all 3 servers running the same services, doing the same thing, with some sort of cluster with load balancing? I'm not really sure where to begin looking, or how to go about such a setup where 3 servers are all identical, but perhaps one acts as the main load balancer? If someone can point me in the right direction, or if this simply sounds like one of those Enterprise Clouds that is now a default setup in Ubuntu Server 9.10+, then I'll go down that route. Cheers in advance. Andy

  • What is best practice (and implications) for packaging projects into JAR's?

    - by user245510
    What is considered best practice when deciding how to define the set of JARs for a project (for example a Swing GUI)? There are many possible groupings:

        JAR per layer (presentation, business, data)
        JAR per (significant?) GUI panel. For a significant system, this results in a large number of JARs, but the JARs are (should be) more re-usable - fine-grained granularity
        JAR per "project" (in the sense of an IDE project); "common.jar", "resources.jar", "gui.jar", etc

    I am an experienced developer; I know the mechanics of creating JARs, I'm just looking for wisdom on best practice. Personally, I like the idea of a JAR per component (e.g. a panel), as I am mad-keen on encapsulation, and the holy grail of re-use across projects. I am concerned, however, that on a practical, performance level, the JVM would struggle with class loading over dozens, maybe hundreds of small JARs. Each JAR would contain the GUI panel code plus the necessary resources (i.e. not centralised) so each panel can stand alone. Does anyone have wisdom to share?

  • image analysis and 64bit OS

    - by picciopiccio
    I developed a C# application that makes use of the Cognex vision library (VPro). My application is developed with Visual Studio 2008 Pro on a 32-bit Windows PC with 3GB of RAM. During the startup of the application I see that a large amount of memory is allocated. So far so good, but as I add more and more vision elaborations, the memory allocation increases and part of the application (only the Cognex OCX) stops working well. The rest of the application still works (working threads, com on socket...). I did whatever I could to save memory, but when the memory allocated is about 700MB I begin to have the problems. A note in the documentation of the Cognex library says that /LARGEADDRESSAWARE is not supported.

    Anyway, I'm thinking to try migrating my app to win64, but what do I have to do? Can I simply use a 64-bit processor and 64-bit Windows without recompiling my application - which would remain a 32-bit application - to take advantage of 64-bit? Or should I recompile my application? If I don't need to recompile my application, can I link it with the 64-bit Cognex library? If I have to recompile my application, is it possible to cross-compile it so that my development suite stays on a 32-bit PC? Every help will be very appreciated!! Thanks in advance

  • Generating a static website from a set of content data (possibly with webgen, webby or a similar tool)

    - by Darel
    My company (an engineering firm) is looking to redesign their website with some dynamic content. We have a nice portfolio of projects that we'd like to present on our site by category. To elaborate, I'd like to have a "Projects Category" menu, where you can choose a sub-project category (such as churches, schools, etc) which links to a page with images of all projects which have been tagged with that category attribute. Clicking on an image would then take you to a detailed page for that project. I have done a good bit of asp and jsp page development, but I've always worked on the front end in an enterprise environment - I've never built a production site from the back end. The advice I've gotten so far is that a full-blown CMS solution would be somewhat overkill, as we won't have a large hit count, and we'll be displaying a few hundred projects at most. One big-picture choice I appear to have - whether to dynamically generate the pages (with asp or jsp) or to use a tool to generate a set of static html pages. The tool would build the menus, project summary pages, and individual project pages based on a set of data I could provide (in the form of a database or text file.) I'm leaning towards trying to use a tool like webgen or webby to statically generate the site due to our current web hosting situation. Any thoughts on which approach is more appropriate? Is webgen or webby capable of doing what I am trying to do? Or can anyone recommend other web authoring tools better equipped to accomplish this? Thanks for any feedback!

  • Abort SAX parsing mid-document?

    - by CSharperWithJava
    I'm parsing a very simple XML schema with a SAX parser in Android. An example file would be

        <Lists>
            <List name="foo">
                <Note title="note 1" .../>
                <Note title="note 2" .../>
            </List>
            <List name="bar">
                <Note title="note 3" .../>
            </List>
        </Lists>

    The ... represents more note data as attributes that aren't important to the question. I use a SAX parser to parse the document and only implement the startElement and endElement methods of the HandlerBase to handle Note and List nodes. However, in some cases the files can be very large and take some time to process. I'd like to be able to abort the parsing process at any time (i.e. user presses cancel button). The best way I've come up with is to throw an exception from my startElement method when certain conditions are met (i.e. a boolean stopParsing is true). Is there a better way to do this? I've always used DOM-style parsers, so I don't fully understand the SAX parser. One final note: I'm running this on Android, so I will have the parser running on a worker thread to keep the UI responsive. If you know how I can kill the thread safely while the parser is running, that would answer my question as well.
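
    Throwing from the handler is in fact the usual way to stop a SAX parse early; the main refinement is a dedicated exception type, so a deliberate abort can be told apart from a real parse error. A minimal sketch using DefaultHandler (the SAX2 replacement for the deprecated HandlerBase); the volatile cancelled flag is an assumption about how the cancel button talks to the worker thread:

        import org.xml.sax.Attributes;
        import org.xml.sax.SAXException;
        import org.xml.sax.helpers.DefaultHandler;

        class ParseAbortedException extends SAXException {
            ParseAbortedException() { super("parsing cancelled"); }
        }

        class NoteHandler extends DefaultHandler {
            volatile boolean cancelled = false;  // set from the UI thread

            @Override
            public void startElement(String uri, String localName,
                                     String qName, Attributes attrs)
                    throws SAXException {
                if (cancelled) throw new ParseAbortedException();
                // ... handle List / Note elements ...
            }
        }

        // on the worker thread:
        // try {
        //     parser.parse(input, handler);
        // } catch (ParseAbortedException e) {
        //     // user cancelled: clean up quietly, let the thread end
        // }

    Letting the exception unwind the parse and then letting the run method return is also the safe way to end the worker thread, rather than killing it from outside.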

  • List of objects or parallel arrays of properties?

    - by Headcrab
    The question is, basically: what would be more preferable, both performance-wise and design-wise - to have a list of objects of a Python class or to have several lists of numerical properties?

    I am writing some sort of a scientific simulation which involves a rather large system of interacting particles. For simplicity, let's say we have a set of balls bouncing inside a box, so each ball has a number of numerical properties, like x-y-z-coordinates, diameter, mass, velocity vector and so on. How to store the system better? Two major options I can think of are:

        to make a class "Ball" with those properties and some methods, then store a list of objects of the class, e.g. [b1, b2, b3, ...bn, ...], where for each bn we can access bn.x, bn.y, bn.mass and so on;
        to make an array of numbers for each property, then for each i-th "ball" we can access its 'x' coordinate as xs[i], 'y' coordinate as ys[i], 'mass' as masses[i] and so on;

    To me it seems that the first option represents a better design. The second option looks somewhat uglier, but might be better in terms of performance, and it could be easier to use it with numpy and scipy, which I try to use as much as I can. I am still not sure if Python will be fast enough, so it may be necessary to rewrite it in C++ or something, after initial prototyping in Python. Would the choice of data representation be different for C/C++? What about a hybrid approach, e.g. Python with a C++ extension?
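
    For the numpy route, a structured array sits between the two options: per-ball records with named fields, but contiguous storage that vectorized updates can sweep over. A minimal sketch (the field names and time step are illustrative):

        import numpy as np

        N, dt = 1000, 0.01
        balls = np.zeros(N, dtype=[('pos', 'f8', 3),
                                   ('vel', 'f8', 3),
                                   ('mass', 'f8'),
                                   ('diameter', 'f8')])

        # one vectorized statement advances every ball at once
        balls['pos'] += balls['vel'] * dt

        # single-ball access still reads almost like an object
        print(balls[0]['mass'])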

  • Foreach is crashing script, but no errors reported.

    - by ILMV
    So I've created this Smarty function to get images from my flickr photostream using SimplePie... simple really, or so it should be. The problem I'm having is that the foreach will crash the script. This doesn't happen if I put an exit after the closing foreach, though of course then the rest of my script doesn't execute. The problem also completely subsides if I remove the foreach; I've tested it and it's not the contents of the foreach, but the loop itself. Error reporting is turned on but I don't get any errors, and I also tried messing with the memory_limit, with no luck. Anyone know why this foreach is killing my script? Thanks!

        function smarty_function_flickr ($params, &$smarty)
        {
            require_once('system/library/SimplePie/simplepie.inc');
            require_once('system/library/SimplePie/idn/idna_convert.class.php');

            $flickr = new flickr();

            /**
             * Set up SimplePie with all default values using shorthand syntax.
             */
            $feed = new SimplePie($params['feed'], 'system/library/SimplePie/cache', '600');
            $feed->handle_content_type();

            /**
             * What sizes should we use?
             * Choices: square, thumb, small, medium, large.
             */
            $thumb = 'square';
            $full = 'medium';

            $output = array();
            $counter = 0;

            // If I comment this foreach out the problem subsides;
            // I know it is not the code within the foreach
            foreach ($feed->get_items() as $item) {
                $url = $flickr->image_from_description($item->get_description());
                $output[$counter]['title'] = $item->get_title();
                $output[$counter]['image'] = $flickr->select_image($url, $full);
                $output[$counter]['thumb'] = $flickr->select_image($url, $thumb);
                $counter++;
            }

            // Set template variables and template
            $smarty->assign('flickr', $output);
            $smarty->display('forms/'.$params['template'].'.tpl');
        }

  • MVC design question for forms

    - by kenny99
    Hi, I'm developing an app which has a large amount of related form data to be handled. I'm using a MVC structure and all of the related data is represented in my models, along with the handling of data validation from form submissions. I'm looking for some advice on a good way to approach laying out my controllers - basically I will have a huge form which will be broken down into manageable categories (similar to a credit card app) where the user progresses through each stage/category filling out the answers. All of these form categories are related to the main relation/object, but not to each other. Does it make more sense to have each subform/category as a method in the main controller class (which will make that one controller fairly massive), or would it be better to break each category into a subclass of the main controller? It may be just for neatness that the second approach is better, but I'm struggling to see much of a difference between either creating a new method for each category (which communicates with the model and outputs errors/success) or creating a new controller to handle the same functionality. Thanks in advance for any guidance!

  • How to accurately parse smtp message status code (DSN)?

    - by Geo
    RFC 1893 claims that status codes will come in the format below (you can read more here). But our bounce management system is having a hard time parsing error status codes from bounce messages. We are able to get the raw message, but depending on the email server the code will come in different places. Is there any rule on how to parse this type of message to obtain better results? We are not looking for the 100% solution, but at least 80%.

    This document defines a new set of status codes to report mail system conditions. These status codes are intended to be used for media and language independent status reporting. They are not intended for system specific diagnostics. The syntax of the new status codes is defined as:

        status-code = class "." subject "." detail
        class = "2"/"4"/"5"
        subject = 1*3digit
        detail = 1*3digit

    White-space characters and comments are NOT allowed within a status-code. Each numeric sub-code within the status-code MUST be expressed without leading zero digits.

    The quote above from the RFC tells one thing, but then the text below, from a leading tool on bounce management, says something different. Where can I get a good source of standard status codes?

        Return Code  Description
        0    UNDETERMINED - (ie. Recipient Reply)
        10   HARD BOUNCE - (ie. User Unknown)
        20   SOFT BOUNCE - General
        21   SOFT BOUNCE - Dns Failure
        22   SOFT BOUNCE - Mailbox Full
        23   SOFT BOUNCE - Message Too Large
        30   BOUNCE - NO EMAIL ADDRESS. VERY RARE!
        40   GENERAL BOUNCE
        50   MAIL BLOCK - General
        51   MAIL BLOCK - Known Spammer
        52   MAIL BLOCK - Spam Detected
        53   MAIL BLOCK - Attachment Detected
        54   MAIL BLOCK - Relay Denied
        60   AUTO REPLY - (ie. Out Of Office)
        70   TRANSIENT BOUNCE
        80   SUBSCRIBE Request
        90   UNSUBSCRIBE/REMOVE Request
        100  CHALLENGE-RESPONSE
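
    A hedged sketch of the extraction step: prefer the explicit Status: field of a multipart/report DSN when present, then fall back to scanning the raw bounce for RFC 1893-shaped tokens. The regex follows the grammar above; treating the first match as authoritative is an assumption that holds for most, not all, servers (the vendor's 0-100 codes are that tool's own classification, applied after extraction, not something found in the message):

        import re

        # class "." subject "." detail, per the RFC 1893 grammar
        DSN_RE = re.compile(r'\b([245])\.(\d{1,3})\.(\d{1,3})\b')

        def extract_status(raw_message):
            """Return the first enhanced status code found, or None."""
            m = re.search(r'^Status:\s*([245]\.\d{1,3}\.\d{1,3})',
                          raw_message, re.MULTILINE | re.IGNORECASE)
            if m:
                return m.group(1)
            m = DSN_RE.search(raw_message)
            return m.group(0) if m else None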

  • What specific features of LabView are frustrating to you?

    - by Underflow
    Please bear with me: this isn't a language debate or a flame. It's a real request for opinions. Occasionally, I have to help educate a traditional text coder in how to think in LabVIEW (LV). Often during this process, I get to hear about how LV sucks. Rarely is this insight accompanied by rational observations other than "Language X is just so much better!". While this statement is satisfying to them, it doesn't help me understand what is frustrating them. So, for those of you with LabVIEW and text language experience, what specific things about LV drive you nuts?

    ------ Summaries -------

    Thanks for all the answers! Some of the issues are answered in the comments below, some exist on other sites, and some are just genuine problems with LV. In the spirit of the original question, I'm not going to try to answer all of these here: check LAVA [1] or NI's website, and you'll be pleasantly surprised at how many of these things can be overcome.

        Unintentional concurrency
        No access to traditional text manipulation tools
        Binary-only source code control
        Difficult to branch and merge
        Too many open windows
        Text has cleaner/clearer/more expressive syntax
        Clean coding requires a lot of time and manipulation
        Large, difficult to access API/palette system
        Mouse required
        File namespacing: no duplicate files with the same name in memory
        LV objects are natively by-value only
        Requires dev environment to view code
        Lack of zoom
        Slow startup
        Memory pig
        "Giant" code is difficult to work with
        UI lockup is easy to do
        Trackpads and LV don't mix well
        String manipulation is graphically bloated
        Limited UI customization
        "Hidden" primitives (yes, these exist)
        Lack of official metaprogramming capability (not for much longer, though)
        Lack of unicode support

    [1]: http://www.lavag.org LAVA

  • Matching a Repeating Sub Series using a Regular Expression with PowerShell

    - by Hinch
    I have a text file that lists the names of a large number of Excel spreadsheets, and the names of the files that are linked to from the spreadsheets. In simplified form it looks like this:

        "Parent File1.xls"
        Link: ChildFileA.xls
        Link: ChildFileB.xls
        "ParentFile2.xls"
        "ParentFile3.xls"
        Blah
        Link: ChildFileC.xls
        Link: ChildFileD.xls
        More Junk
        Link: ChildFileE.xls
        "Parent File4.xls"
        Link: ChildFileF.xls

    In this example, ParentFile1.xls has embedded links to ChildFileA.xls and ChildFileB.xls, ParentFile2.xls has no embedded links, and ParentFile3.xls has 3 embedded links. I am trying to write a regular expression in PowerShell that will parse the text file, producing output in the following form:

        ParentFile1.xls:ChildFileA.xls,ChildFileB.xls
        ParentFile3.xls:ChildFileC.xls,ChildFileD.xls,ChildFileE.xls
        etc

    The task is complicated by the fact that the text file contains a lot of junk between each of the lines, and a parent may not always have a child. Furthermore, a single file name may pass over multiple lines. However, it's not as bad as it sounds, as the parent and child file names are always clearly demarcated (the parent with quotes and the child with a prefix of "Link: "). The PowerShell code I've been using is as follows:

        $content = [string]::Join([environment]::NewLine, (Get-Content C:\Temp\text.txt))
        $regex = [regex]'(?im)\s*\"(.*)\r?\n?\s*(.*)\"[\s\S]*?Link: (.*)\r?\n?'
        $regex.Matches($content) | %{$_.Groups[1].Value + $_.Groups[2].Value + ":" + $_.Groups[3].Value}

    Using the example above, it outputs:

        ParentFile1.xls:ChildFileA.xls
        ParentFile2.xls""ParentFile3.xls:ChildFileC.xls
        ParentFile4.xls:ChildFileF.xls

    There are two issues. Firstly, the inclusion of the "" instead of a newline whenever a parent without a child is processed. And the second issue, which is the most important, is that only a single child is ever shown for each parent. I'm guessing I need to somehow recursively capture and display the multiple child links that exist for each parent, but I'm totally stumped as to how to do this with a regular expression. Any help would be greatly appreciated. The file contains hundreds of thousands of lines, and manual processing is not an option :)
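
    Since the grammar is effectively "a parent opens a section, links accumulate until the next parent", a single-pass state machine is usually easier than one recursive regex. A hedged sketch, assuming names do not wrap across lines (wrapped names would need the lines joined first, as in the original code):

        $links = New-Object 'System.Collections.Specialized.OrderedDictionary'
        $parent = $null

        Get-Content C:\Temp\text.txt | ForEach-Object {
            if ($_ -match '"([^"]+\.xls)"') {               # a quoted parent name
                $parent = $matches[1]
                $links[$parent] = @()
            }
            elseif ($parent -and $_ -match 'Link:\s*(\S+\.xls)') {
                $links[$parent] += $matches[1]              # a child of the current parent
            }
        }

        # emit only parents that actually have children
        $links.Keys | Where-Object { $links[$_].Count -gt 0 } |
            ForEach-Object { "{0}:{1}" -f $_, ($links[$_] -join ',') }

    This also handles the parent-without-child case for free, since childless parents are simply filtered out at the end.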

  • Unit test with live data

    - by Kurresmack
    Hey, I have googled this a little and didn't really find the answer I needed. I am working on a webpage in C# with MSSQL and LINQ for a customer. I want the users to be able to send messages to each other, so I unit test this with live data. The problem is that I now depend on having at least 2 users whose IDs I know. Furthermore, I have to clean up after myself. This leads to rather large unit tests that test a lot in one test. Let's say I would like to update a user. That would mean that I would have to create the user, update it, and then delete it. That is a lot of assertions in one unit test, and if it fails during the update I have to delete the user manually. If I did it any other way, without live data, I would not be able to know for sure that the data was present in the database after updating etc. What is the proper way to do this without having a test that tests a lot of functionality by itself?
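
    One common pattern for database-backed tests like this (a sketch, not a drop-in: the UserRepository type and its methods are placeholders for whatever the data layer looks like): wrap each test in a TransactionScope that is never completed, so every insert and update rolls back automatically, even when an assertion fails mid-test.

        using System.Transactions;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        public class UserTests
        {
            [TestMethod]
            public void UpdateUser_PersistsNewName()
            {
                using (var scope = new TransactionScope())
                {
                    // arrange: create a throwaway user inside the transaction
                    var repo = new UserRepository();          // placeholder type
                    var id = repo.Create("alice@example.com");

                    // act
                    repo.Rename(id, "Alice B.");

                    // assert: read back through the same connection
                    Assert.AreEqual("Alice B.", repo.Get(id).Name);

                    // no scope.Complete() call: everything rolls back here,
                    // so no manual cleanup is needed, pass or fail
                }
            }
        }

    The read-back must go through a connection enlisted in the same ambient transaction (LINQ to SQL does this by default), which also removes the dependency on pre-existing user IDs.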

  • Iterating through String word at a time in Python

    - by AlgoMan
    I have a string buffer of a huge text file, and I have to search for given words/phrases in the string buffer. What's the efficient way to do it? I tried using re module matches, but as I have a huge text corpus to search through, this is taking a large amount of time. Given a dictionary of words and phrases, I iterate through each file, read it into a string, search for all the words and phrases in the dictionary, and increment the count in the dictionary if the keys are found. One small optimization we thought of was to sort the dictionary of phrases/words from the maximum number of words to the lowest. Then we compare each word start position in the string buffer against the list, and if one phrase is found, we don't search for the other phrases (as it matched the longest phrase, which is what we want). Can someone suggest how to go word by word in the string buffer (iterate the string buffer word by word)? Also, is there any other optimization that can be done on this?
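
    A minimal sketch of word-at-a-time iteration with a longest-match-first check at each position (the phrase dictionary shown is illustrative; for very large dictionaries a trie-based multi-pattern matcher such as Aho-Corasick scales much better than per-position probing):

        counts = {"new york city": 0, "new york": 0, "york": 0}
        # probe longer phrases before their prefixes
        phrases = sorted(counts, key=lambda p: -len(p.split()))

        def count_phrases(buffer, phrases, counts):
            words = buffer.lower().split()
            i = 0
            while i < len(words):
                for p in phrases:
                    n = len(p.split())
                    if " ".join(words[i:i + n]) == p:
                        counts[p] += 1
                        i += n - 1   # skip the words consumed by the match
                        break
                i += 1
            return counts

        print(count_phrases("New York City has new york vibes", phrases, counts))
        # -> {'new york city': 1, 'new york': 1, 'york': 0}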

  • Is there a standard lexer/parser tool for Python?

    - by Salim Fadhley
    A volunteer job requires us to convert a large number of LaTeX documents into ePub format. It's a series of open-source fiction books which have so far been produced only on paper via a print-on-demand service. We'd like to be able to offer the books to users of book-reader devices (such as the Kindle) which require the ePub format for best results. Fortunately, ePub is a very simple format; however, there's no trivial way for LaTeX to produce the required XHTML output. We experimented with alternative LaTeX compilers (e.g. plastex), but in the end we figured that it would probably be a lot easier to simply write our own compiler which understands a tiny subset of the LaTeX language and compiles directly to XHTML / ePub. Previously I used a tool on Windows called GOLD. This allowed me to go directly from BNF grammars to a stub parser. It also allowed me to implement the parser in any language I liked (I'd choose Python). This product has to work on Linux, so I'm wondering if there's an equivalent toolchain that works as well under Ubuntu / Eclipse / Python. The idea is that we will take the grammar of TeX and just implement a teeny subset of it, but we do not want to spend a huge amount of time worrying about grammar and parsing. A parser generator would obviously save us a great deal of time. Sal

    UPDATE 1: Bonus marks for a solution with excellent documentation or tutorials.
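
    Since the subset is tiny, a pure-Python generator keeps the toolchain Linux-friendly; PLY (Python Lex-Yacc) is one well-documented candidate, pyparsing another. A minimal PLY lexer sketch for a LaTeX-ish token stream (the token set is illustrative only, not a real TeX grammar):

        import ply.lex as lex

        tokens = ('COMMAND', 'LBRACE', 'RBRACE', 'TEXT')

        t_COMMAND = r'\\[a-zA-Z]+'   # \emph, \chapter, ...
        t_LBRACE  = r'\{'
        t_RBRACE  = r'\}'
        t_TEXT    = r'[^\\{}]+'

        def t_error(t):
            t.lexer.skip(1)          # drop anything unrecognised

        lexer = lex.lex()
        lexer.input(r'\emph{hello} world')
        for tok in lexer:
            print(tok.type, tok.value)

    The matching ply.yacc half takes grammar rules as docstrings, which is about as close to the GOLD "BNF in, parser out" workflow as pure Python gets.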

  • Does overloading Grails static 'mapping' property to bolt on database objects violate DRY?

    - by mikesalera
    Does the Grails static 'mapping' property in domain classes violate DRY? Let's take a look at the canonical domain class:

        class Book {
            Long id
            String title
            String isbn
            Date published
            Author author

            static mapping = {
                id generator: 'hilo', params: [table: 'hi_value', column: 'next_value', max_lo: 100]
            }
        }

    or:

        class Book {
            ...
            static mapping = {
                id( generator: 'sequence', params: [sequence_name: "book_seq"] )
            }
        }

    And let us say, continuing this thought, that I have my Grails application working with HSQLDB or MySQL, but the IT department says I must use a commercial database package (written by a large corp in Redwood Shores, Calif.). Does this change leave my web application nobbled in development and test environments? MySQL supports autoincrement on a primary key column but does not support sequence objects, for example. Is there a cleaner way to implement this sort of 'only when in production mode' mapping without loading up the domain class?
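
    One hedged possibility (not verified against every GORM version): the mapping block is ordinary Groovy, so it can branch on grails.util.Environment, keeping a portable identity strategy outside production:

        import grails.util.Environment

        class Book {
            Long id
            String title

            static mapping = {
                if (Environment.current == Environment.PRODUCTION) {
                    id generator: 'sequence', params: [sequence_name: 'book_seq']
                }
                // dev/test fall through to the default (native) id strategy
            }
        }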

  • Using a version control system as a data backend

    - by JacobM
    I'm involved in a project that, among other things, involves storing edits and changes to a large hierarchical document (HTML-formatted text). We want to include versioning of textual changes and of structural changes. Currently we're maintaining the tree of document sections in a relational database, but as we start working on how to manage versioning of structural changes, it's clear that we're in danger of having to write a lot of the functionality that a version control system provides. We don't want to reinvent the wheel. Is it possible that we could use an existing version control system as the data store, at least for the document itself? Presumably we could do so by writing out new versions to the filesystem, and keeping that directory under version control (and programmatically doing commits and so forth) but it would be better if we could directly interact with the repository via code. The VCS that we are most familiar with is Subversion, but I'm not thrilled with how Subversion represents changes to the directory structure -- it would be nice if we could see that a particular revision included moving a section from Chapter 2 to Chapter 6, rather than just seeing a new version of the tree. This sounds more like the way a system like Mercurial handles changes to the structure. Any advice? Do VCS's have public APIs and so forth? The project is in Java (with Spring) if it matters.
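
    On the API question: for a Java/Spring project, pure-Java bindings do exist (JGit for Git, SVNKit for Subversion), so the repository can be driven from code rather than by shelling out. A sketch of programmatic commits with JGit's porcelain API, assuming a current JGit and a content directory to version (paths and message are illustrative):

        import java.io.File;
        import org.eclipse.jgit.api.Git;

        public class DocumentStore {
            public static void main(String[] args) throws Exception {
                // create (or open) a repository under the content directory
                Git git = Git.init().setDirectory(new File("/srv/docstore")).call();

                // ... application writes section files into /srv/docstore ...

                git.add().addFilepattern(".").call();
                git.commit()
                   .setMessage("Move section from Chapter 2 to Chapter 6")
                   .call();
            }
        }

    As for representing moves: Git detects renames at diff time rather than storing delete-plus-add pairs, which is closer to the "section moved from Chapter 2 to Chapter 6" history wanted here than Subversion's copy model, and similar in spirit to Mercurial's handling.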

  • Submitting R jobs using PBS

    - by Tony
    I am submitting a job using qsub that runs parallelized R. My intention is to have the R programme running on 4 cores rather than all 8. Here are some of my settings in the PBS file:

        #PBS -l nodes=1:ppn=4
        ....
        time R --no-save < program1.R > program1.log

    I am issuing the command "ta job_id" and I'm seeing that 4 cores are listed. However, the job occupies a large amount of memory (31944900k used vs 32949628k total). If I were to use 8 cores, the job would hang due to the memory limitation.

        top - 21:03:53 up 77 days, 11:54,  0 users,  load average: 3.99, 3.75, 3.37
        Tasks: 207 total,   5 running, 202 sleeping,   0 stopped,   0 zombie
        Cpu(s): 30.4%us,  1.6%sy,  0.0%ni, 66.8%id,  0.0%wa,  0.0%hi,  1.2%si,  0.0%st
        Mem:  32949628k total, 31944900k used,  1004728k free,   269812k buffers
        Swap:  2097136k total,     8360k used,  2088776k free,  6030856k cached

    Here is a snapshot when issuing the command ta job_id:

        PID   USER  PR  NI  VIRT   RES   SHR   S  %CPU  %MEM  TIME+    COMMAND
        1794  x     25  0   6247m  6.0g  1780  R  99.2  19.1  8:14.37  R
        1795  x     25  0   6332m  6.1g  1780  R  99.2  19.4  8:14.37  R
        1796  x     25  0   6242m  6.0g  1784  R  99.2  19.1  8:14.37  R
        1797  x     25  0   6322m  6.1g  1780  R  99.2  19.4  8:14.33  R
        1714  x     18  0   65932  1504  1248  S   0.0   0.0  0:00.00  bash
        1761  x     18  0   63840  1244  1052  S   0.0   0.0  0:00.00  20016.hpc
        1783  x     18  0   133m   7096  1128  S   0.0   0.0  0:00.00  python
        1786  x     18  0   137m   46m   2688  S   0.0   0.1  0:02.06  R

    How can I prevent other users from using the other 4 cores? I would like to somehow make it look as if my job is using 8 cores, with 4 of them idling. Could anyone kindly help me out on this? Can this be solved using PBS? Many thanks
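
    A hedged sketch of the usual workaround (exact resource names vary by site and PBS flavour, so local documentation wins): request the whole node, or all of its memory, while still launching only 4 R workers; the scheduler then has nothing left to place alongside the job.

        #PBS -l nodes=1:ppn=8      # reserve all 8 cores so no other job lands here
        #PBS -l mem=30gb           # or pin the memory instead, where supported

        time R --no-save < program1.R > program1.log   # still runs only 4 workers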

  • Using java2d user space measurements with TextLayout and LineBreakMeasurer

    - by Andrew Wheeler
    I have a Java2D image defined in user space (mm) to print an identity card. The transformation to pixels is done with an AffineTransform for the required DPI (screen or print). I want to wrap text across several lines, but the TextLayout does not respect user-space co-ordinates.

        private void drawParagraph(Graphics2D g2d, Rectangle2D area, String text) {
            LineBreakMeasurer lineMeasurer;
            AttributedString string = new AttributedString(text);
            AttributedCharacterIterator paragraph = string.getIterator();
            int paragraphStart = paragraph.getBeginIndex();
            int paragraphEnd = paragraph.getEndIndex();
            FontRenderContext frc = g2d.getFontRenderContext();
            lineMeasurer = new LineBreakMeasurer(paragraph, frc);

            float breakWidth = (float)area.getWidth();
            float drawPosY = (float)area.getY();
            float drawPosX = (float)area.getX();

            lineMeasurer.setPosition(paragraphStart);
            while (lineMeasurer.getPosition() < paragraphEnd) {
                TextLayout layout = lineMeasurer.nextLayout(breakWidth);
                drawPosY += layout.getAscent();
                layout.draw(g2d, drawPosX, drawPosY);
                drawPosY += layout.getDescent() + layout.getLeading();
            }
        }

    The above code determines font metrics using the user-space sizing of the Font, and they thus turn out rather large. The font size is calculated as the best vertical fit for the number of lines in an area, with the calculation as below:

        attr.put(TextAttribute.SIZE, (geTextArea().getHeight() / noOfLines - LINE_SPACING));

    When using g2d.drawString("Some text to display", x, y); the font size appears correct. Does anyone have a better way of doing text layout in user-space co-ords?
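
    One hedged guess at the mismatch: g2d.getFontRenderContext() carries the mm-to-pixel AffineTransform, so ascent/descent come back in device pixels while drawPosX/drawPosY stay in user-space mm. Measuring against an identity-transform context keeps everything in one space (a sketch; whether the antialiasing and fractional-metrics flags should mirror the rendering hints depends on the application):

        import java.awt.font.FontRenderContext;
        import java.awt.font.LineBreakMeasurer;

        // identity transform: metrics stay in user-space units (mm here),
        // while layout.draw(g2d, ...) still applies the device transform
        FontRenderContext frc = new FontRenderContext(null, true, true);
        LineBreakMeasurer lineMeasurer = new LineBreakMeasurer(paragraph, frc);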

  • Extending EF4 SQL Generation

    - by Basiclife
    Hi, we're using EF4 in a fairly large system and occasionally run into problems due to EF4 being unable to convert certain expressions into SQL. At present, we either need to do some fancy footwork (DB/code) or just accept the performance hit and allow the query to be executed in-memory. Needless to say, neither of these is ideal, and the hacks we've sometimes had to use reduce readability / maintainability.

    What we would ideally like is a way to extend the SQL generation capabilities of the EF4 SQL provider. Obviously there are some things, like .NET method calls, which will always have to be client-side, but some functionality, like date comparisons (e.g. "Group by weeks in Linq to Entities"), should be do-able. I've Googled, but perhaps I'm using the wrong terminology, as all I get is information about the new features of EF4 SQL generation. For such a flexible and extensible framework, I'd be surprised if this isn't possible. In my head, I'm imagining inheriting from the [SQL 2008] provider and extending it to handle additional expressions / similar in the expression tree it's given to convert to SQL. Any help/pointers appreciated. We're using VS2010 Ultimate, .NET 4 (non-client profile) and EF4. The app is in ASP.NET and is running in a 64-bit environment, in case it makes a difference.
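
    Short of replacing the provider, the built-in escape hatch covers many of the date cases: System.Data.Objects.SqlClient.SqlFunctions exposes canonical SQL functions that EF4 can translate. A sketch of the group-by-weeks example (the context and entity names are placeholders):

        using System.Data.Objects.SqlClient;

        var baseDate = new DateTime(2010, 1, 1);
        var ordersPerWeek =
            from o in context.Orders      // placeholder entity set
            group o by SqlFunctions.DateDiff("week", baseDate, o.OrderDate) into g
            select new { Week = g.Key, Count = g.Count() };

    SqlFunctions.DateDiff runs as DATEDIFF on the server, so the grouping happens in SQL rather than in memory. For anything beyond that list, model-defined functions (EdmFunctionAttribute) are the documented extension point before resorting to a custom provider.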
