Search Results

Search found 13608 results on 545 pages for 'performance dashboard'.


  • Could I do this blind relative to absolute path conversion (for perforce depot paths) better?

    - by wonderfulthunk
    I need to "blindly" (i.e. without access to the filesystem, in this case the source control server) convert some relative paths to absolute paths. So I'm playing with dotdots and indices. For those that are curious I have a log file produced by someone else's tool that sometimes outputs relative paths, and for performance reasons I don't want to access the source control server where the paths are located to check if they're valid and more easily convert them to their absolute path equivalents. I've gone through a number of (probably foolish) iterations trying to get it to work - mostly a few variations of iterating over the array of folders and trying delete_at(index) and delete_at(index-1) but my index kept incrementing while I was deleting elements of the array out from under myself, which didn't work for cases with multiple dotdots. Any tips on improving it in general or specifically the lack of non-consecutive dotdot support would be welcome. Currently this is working with my limited examples, but I think it could be improved. It can't handle non-consecutive '..' directories, and I am probably doing a lot of wasteful (and error-prone) things that I probably don't need to do because I'm a bit of a hack. I've found a lot of examples of converting other types of relative paths using other languages, but none of them seemed to fit my situation. These are my example paths that I need to convert, from: //depot/foo/../bar/single.c //depot/foo/docs/../../other/double.c //depot/foo/usr/bin/../../../else/more/triple.c to: //depot/bar/single.c //depot/other/double.c //depot/else/more/triple.c And my script: begin paths = File.open(ARGV[0]).readlines puts(paths) new_paths = Array.new paths.each { |path| folders = path.split('/') if ( folders.include?('..') ) num_dotdots = 0 first_dotdot = folders.index('..') last_dotdot = folders.rindex('..') folders.each { |item| if ( item == '..' ) num_dotdots += 1 end } if ( first_dotdot and ( num_dotdots > 0 ) ) # this might be redundant? folders.slice!(first_dotdot - num_dotdots..last_dotdot) # dependent on consecutive dotdots only end end folders.map! { |elem| if ( elem !~ /\n/ ) elem = elem + '/' else elem = elem end } new_paths << folders.to_s } puts(new_paths) end


  • java: libraries for immutable functional-style data structures

    - by Jason S
    This is very similar to another question (Functional Data Structures in Java), but the answers there are not particularly useful. I need to use immutable versions of the standard Java collections (e.g. HashMap / TreeMap / ArrayList / LinkedList / HashSet / TreeSet). By "immutable" I mean immutable in the functional sense (e.g. purely functional data structures), where updating operations on the data structure do not change the original data, but instead return a new instance of the same kind of data structure. Also, typically new and old instances of the data structure will share immutable data to be efficient in time and space. From what I can tell my options include:

    - Functional Java
    - Scala
    - Clojure

    but I'm not sure whether any of these are particularly appealing to me. I have a few requirements/desirements:

    - The collections in question should be usable directly in Java (with the appropriate libraries in the classpath). FJ would work for me; I'm not sure if I can use Scala's or Clojure's data structures in Java without having to use the compilers/interpreters from those languages and without having to write Scala or Clojure code.
    - Core operations on lists/maps/sets should be possible without having to create function objects with confusing syntaxes (FJ looks slightly iffy).
    - They should be efficient in time and space. I'm looking for a library which ideally has done some performance testing. FJ's TreeMap is based on a red-black tree; not sure how that rates.
    - Documentation / tutorials should be good enough so someone can get started quickly using the data structures. FJ fails on that front.

    Any suggestions?
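
    The structural sharing described above is easiest to see in the simplest persistent structure, a cons list. A tiny sketch in Python (not Java; purely to illustrate how "updates" return new instances that share the old data):

        # Sketch: a persistent (immutable) singly linked list.
        class Cons:
            __slots__ = ('head', 'tail')
            def __init__(self, head, tail):
                self.head = head
                self.tail = tail

        def prepend(lst, value):
            return Cons(value, lst)   # O(1): the old list becomes the tail, uncopied

        xs = prepend(prepend(None, 2), 1)   # the list [1, 2]
        ys = prepend(xs, 0)                 # the list [0, 1, 2]
        assert ys.tail is xs                # xs is untouched and shared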


  • Finding What You Need in R: function arguments/parameters from outside the function's package

    - by doug
    Often in R, there are a dozen functions scattered across as many packages, all of which have the same purpose but of course differ in accuracy, performance, theoretical rigor, and so on. How do you gather all of these in one place before you start your task?

    So for instance: the generic plot function. Setting secondary ticks is much easier (IMHO) using a function outside of the base package, minor.tick(nx=n, ny=n, tick.ratio=n), found in Hmisc. Of course, that doesn't show up in plot's docstring. Likewise, the data-input arguments to 'plot' can be supplied by an object returned from the function 'hexbin', again from a library outside of the base installation (where 'plot' resides). What would be great, obviously, is a programmatic way to gather these function arguments from the various libraries and put them in a single namespace.

    Edit (trying to re-state my example just above more clearly): the arguments to plot supplied in the base package for, e.g., setting the axis tick frequency are xaxp/yaxp; however, one can also set the axis tick frequency via a function outside of the base package, again, as in the minor.tick function from the Hmisc package, but you wouldn't know that just from looking at the plot method signature. Is there a meta function in R for this?

    So far, as I come across them, I've been manually gathering them in a TextMate 'snippet' (along with the attendant library imports). This isn't that difficult or time consuming, but I can only update my snippet as I find out about these additional arguments/parameters. Is there a canonical R way to do this, or at least an easier way?

    Just in case that wasn't clear, I am not talking about the case where multiple packages provide functions directed at the same statistic or view (e.g., 'boxplot' in the base package; 'boxplot.matrix' in gplots; and 'bplots' in Rlab). What I am talking about is the case in which the function name is the same across two or more packages.


  • Fastest way to clamp a real (fixed/floating point) value?

    - by Niklas
    Hi,

    Is there a more efficient way to clamp real numbers than using if statements or ternary operators? I want to do this both for doubles and for a 32-bit fixed-point implementation (16.16). I'm not asking for code that can handle both cases; they will be handled in separate functions.

    Obviously, I can do something like:

        double clampedA;
        double a = calculate();
        clampedA = a > MY_MAX ? MY_MAX : a;
        clampedA = clampedA < MY_MIN ? MY_MIN : clampedA;

    or

        double a = calculate();
        double clampedA = a;
        if (clampedA > MY_MAX)
            clampedA = MY_MAX;
        else if (clampedA < MY_MIN)
            clampedA = MY_MIN;

    The fixed-point version would use functions/macros for comparisons. This is done in a performance-critical part of the code, so I'm looking for as efficient a way to do it as possible (which I suspect would involve bit manipulation).

    EDIT: It has to be standard/portable C; platform-specific functionality is not of any interest here. Also, MY_MIN and MY_MAX are the same type as the value I want clamped (doubles in the examples above).
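
    For the fixed-point (integer) case, the bit-manipulation route the question hints at is the classic branchless min/max from the bit-twiddling folklore. A sketch in Python (the expressions translate directly to C for 16.16 values held in ints, though C code must mind signed-overflow rules); for doubles, note that modern compilers typically compile the plain ternary to branch-free minsd/maxsd anyway:

        # Sketch: branchless clamp via the classic XOR min/max trick.
        # -(cond) is 0 or -1 (all ones), so the AND selects one operand.
        def clamp_fixed(x, lo, hi):
            x = x ^ ((x ^ hi) & -(x > hi))   # min(x, hi)
            x = x ^ ((x ^ lo) & -(x < lo))   # max(x, lo)
            return x

        ONE = 1 << 16                                        # 1.0 in 16.16 fixed point
        print(clamp_fixed(5 * ONE, 0, 3 * ONE) == 3 * ONE)   # True
        print(clamp_fixed(-ONE, 0, 3 * ONE) == 0)            # True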


  • A question about the basics of how LINQ to SQL works

    - by Alex
    I just started learning LINQ to SQL, and so far I'm impressed with the ease of use and good performance. I used to think that a LINQ query like

        from Customer in DB.Customers
        where Customer.Age > 30
        select Customer

    would get all customers from the database ("SELECT * FROM Customers"), move them into a Customers array, and then search that array using .NET methods. That would be very inefficient: what if there are hundreds of thousands of customers in the database? Making such big SELECT queries would kill the web application.

    Now, after experiencing how fast LINQ to SQL actually is, I've started to suspect that when doing the query I just wrote, LINQ somehow converts it to a SQL query string

        SELECT * FROM Customers WHERE Age > 30

    and only runs the query when necessary. So my question is: am I right? And when is the query actually run?

    The reason I'm asking is not only because I want to understand how it works in order to build well-optimized applications, but because I came across the following problem. I have 2 tables; one of them is Books, the other has information on how many books were sold on certain days. My goal is to select books that had at least 50 sales/day in the past 10 days. It's done with this simple query:

        from Book in DB.Books
        where (from Sale in DB.Sales
               where Sale.SalesAmount >= 50
               and Sale.DateOfSale >= DateTime.Now.AddDays(-10)
               select Sale.BookID).Contains(Book.ID)
        select Book

    The point is, I have to use the checking part in several queries, so I decided to create an array with the IDs of all popular books:

        var popularBooksIDs = from Sale in DB.Sales
                              where Sale.SalesAmount >= 50
                              and Sale.DateOfSale >= DateTime.Now.AddDays(-10)
                              select Sale.BookID;

    BUT when I try to do the query now:

        from Book in DB.Books
        where popularBooksIDs.Contains(Book.ID)
        select Book

    it doesn't work! That's why I think we can't use these kinds of shortcuts in LINQ to SQL queries, like we can't use them in real SQL. We have to create straightforward queries, am I right?
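
    That intuition is right: a LINQ to SQL query is deferred. Building the query only builds an expression tree; it is translated to SQL and executed when the query is enumerated (foreach, ToList(), Count(), and so on). Python generators give a rough feel for the deferred part (an analogy only, not how LINQ is implemented):

        # Rough analogy: building a "query" does no work; iterating it does.
        def customers_older_than(customers, age):
            for c in customers:
                if c['age'] > age:
                    yield c   # produced lazily, one row at a time

        db = [{'name': 'Ann', 'age': 42}, {'name': 'Bob', 'age': 25}]
        query = customers_older_than(db, 30)   # nothing has run yet
        print(list(query))                     # "execution" happens here
        # [{'name': 'Ann', 'age': 42}]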


  • Lightweight HTTP application/server for static content

    - by PartlyCloudy
    Hi, I am in need of a scalable and performant HTTP application/server that will be used for static file serving/uploading, so I only need support for GET and PUT operations. However, there are a few extra features that I need:

    - Custom authentication: I need to check credentials against a database for each request, so I must be able to integrate proprietary database interaction.
    - Support for signed access keys: access to resources via PUT should be signed using a key like http://uri/?key=foo. The key contains information about the request, like md5(user + path + secret), which allows me to block unwanted requests. The application/server should allow me to check for this.
    - Performance: I'd like to avoid piping content as much as possible; otherwise the whole application could be implemented in a few lines of Perl/etc. as CGI.

    Perlbal (in webserver mode) looks nice; however, the single-threaded model does not fit with my database lookup, and it also does not support query strings. Lighttpd/Nginx/... have some modules for these tasks; however, it is not feasible to put everything together without ending up writing my own extensions/modules.

    So how would you solve this? Are there other lightweight webservers available for this? Should I implement an application inside of a webserver (i.e. CGI)? How can I avoid/speed up piping content between the webserver and my application?

    Thanks in advance!
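
    The signed-key check itself is small enough to sketch. Assuming the md5(user + path + secret) scheme described above (an HMAC would be the safer primitive in a new design), the validation could look like this in Python; SECRET and the function names are hypothetical:

        # Sketch: validate a signed access key of the form md5(user + path + secret).
        import hashlib, hmac

        SECRET = 'change-me'   # hypothetical shared secret

        def expected_key(user, path):
            return hashlib.md5((user + path + SECRET).encode()).hexdigest()

        def key_is_valid(user, path, presented):
            # compare_digest avoids leaking information via timing
            return hmac.compare_digest(expected_key(user, path), presented)

        good = expected_key('alice', '/uploads/a.png')
        print(key_is_valid('alice', '/uploads/a.png', good))    # True
        print(key_is_valid('alice', '/uploads/a.png', 'nope'))  # False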


  • Android OpenGL: releasing textures

    - by user1642418
    I have a bit of a problem. I am developing a game for Android, plus an engine, and I've got stuck: I am getting an OpenGL out-of-memory error, and either the app crashes or the phone hangs after loading a scene multiple times. For example: the app launches, shows the main menu, and the first level/scene is loaded. Then I go back to the main menu and repeat. It doesn't matter which scene I load; after 4-6 times the error occurs.

    Some background: each time a scene is loaded, all the resources are released, and upon the first frame render the needed resources get loaded. The performance is more or less OK. Note that I am calling the glDeleteTextures method, but I think it's not doing its job and releasing memory. The thing is, when I minimize the app and open it again the problem doesn't occur, even though almost the same things are executed; that way Android itself releases the memory.

    How do I release/get rid of unused textures properly? This happens on an HTC Desire HD (Ice Cream Sandwich 4.0.4). Other games work fine, so I bet this is not a problem in the ROM.
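
    One classic cause (an assumption here, since the loading code isn't shown): glDeleteTextures only has an effect on the thread that owns the GL context; called from any other thread it fails silently. A common pattern is to queue texture ids for deletion and drain the queue on the render thread. A minimal, language-agnostic sketch in Python, where gl_delete_textures stands in for the real GL call:

        # Sketch: defer texture deletion to the GL thread.
        import queue

        pending_deletes = queue.Queue()   # thread-safe

        def release_texture(tex_id):      # callable from any thread
            pending_deletes.put(tex_id)

        def on_draw_frame(gl_delete_textures):   # runs on the GL thread
            doomed = []
            while True:
                try:
                    doomed.append(pending_deletes.get_nowait())
                except queue.Empty:
                    break
            if doomed:
                gl_delete_textures(doomed)   # e.g. GLES20.glDeleteTextures on Android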


  • How can I tell if a byte array has already been compressed?

    - by MikeG
    Hi,

    Can I rely on the first few bytes of data compressed using System.IO.Compression.DeflateStream in .NET always being the same? These seem to always be the first bytes:

        237, 189, 7, 96, 28, 73, 150, 37, 38, 47, ...

    I'm assuming this is some kind of header, and I'd like to assume that this header is fixed and isn't going to change. Has anyone got any extra info about this?

    Background info (the reason I want to know this): I have a load of data in a database table that could do with being made smaller. I've decided I'm going to start compressing the data, and I'm not going to bother compressing the existing data. When the data gets into my .NET code it is a String. I'd like to be able to look at the first few bytes of the string and see if it has been compressed; if it has, then I need to decompress it.

    I was originally thinking I could convert the string to bytes and just try decompressing the data; then, if an exception happens, I could just assume it wasn't compressed. But I think checking the header bytes would give me much better performance.

    Many thanks, Mike G
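
    One caveat worth stating: .NET's DeflateStream emits raw DEFLATE data (RFC 1951), which has no magic header at all, so those bytes are just the compressed payload of that particular input and will differ for other inputs. The two robust options are to try decompressing and catch the failure, or, better, to prepend your own marker when you compress. A sketch of both in Python (MAGIC is a hypothetical marker; note that short runs of plain text can occasionally parse as valid deflate, which is why the marker is the safer test):

        # Sketch: detecting raw-deflate data without a real header.
        import zlib

        MAGIC = b'\x01CMP'   # hypothetical marker prepended at compression time

        def looks_compressed(data):
            if data.startswith(MAGIC):
                return True
            try:
                zlib.decompress(data, -15)   # -15 = raw deflate, headerless
                return True
            except zlib.error:
                return False

        co = zlib.compressobj(wbits=-15)   # raw deflate, like DeflateStream
        payload = MAGIC + co.compress(b'hello world') + co.flush()
        print(looks_compressed(payload))        # True (via the marker)
        print(looks_compressed(b'plain text'))  # False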


  • Best Functional Approach

    - by dbyrne
    I have some mutable Scala code that I am trying to rewrite in a more functional style. It is a fairly intricate piece of code, so I am trying to refactor it in pieces. My first thought was this:

        def iterate(count: Int, d: MyComplexType) = {
          // Generate next value n
          // Process n, causing some side effects
          return iterate(count - 1, n)
        }

    This didn't seem functional at all to me, since I still have side effects mixed throughout my code. My second thought was this:

        def generateStream(d: MyComplexType): Stream[MyComplexType] = {
          // Generate next value n
          return Stream.cons(n, generateStream(n))
        }

        for (n <- generateStream(initialValue).take(2000000)) {
          // process n, causing some side effects
        }

    This seemed like a better solution to me, because at least I've isolated my functional value-generation code from the mutable value-processing code. However, this is much less memory efficient, because I am generating a large list that I don't really need to store. This leaves me with 3 choices:

    - Write a tail-recursive function, bite the bullet, and refactor the value-processing code
    - Use a lazy list (this is not a memory-sensitive app, although it is performance sensitive)
    - Come up with a new approach

    I guess what I really want is a lazily evaluated sequence where I can discard the values after I've processed them. Any suggestions?
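
    A lazily evaluated sequence that doesn't hold onto processed values is Scala's Iterator rather than Stream: a Stream memoizes its cells, while an Iterator is single-pass and lets processed elements be collected (Iterator.iterate(initialValue)(nextValue) is the usual spelling). Python generators behave the same way; a sketch, with next_value as a hypothetical stand-in for the pure generation step:

        # Sketch: a generator yields values one at a time; once consumed,
        # each value can be garbage-collected, so no 2,000,000-element
        # sequence is ever resident in memory.
        from itertools import islice

        def next_value(v):          # stand-in for the real computation
            return v + 1

        def generate(initial):
            value = initial
            while True:
                value = next_value(value)
                yield value

        for n in islice(generate(0), 2_000_000):
            pass   # process n, causing side effects; n is then discarded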


  • On the search for my next great .Net Read

    - by user127954
    Just got done with "The art of unit testing". It was a great read and i think everyone should go buy a copy. With that said i think the next book I'm like to read would be a architecture / Design type book that would focus heavily on building your objects / software in such a way that it would be: Low Coupling High Cohesion Easily Maintainable / Extended Easy to test Easy to Navigate / Debug The above characteristcs are the most important ones but also maybe it would also include (but not necessary) designing for: Performance - Don't want to design a system at at the end find out its dog slow :) Scalability - Again don't want to design something at the end find out it won't scale. I'd also prefer (but not necessary again): Something newer - Architectural principles seem to gradually evolve / improve over time and id like something with current thinking. .Net as illustrating language - like i said above its not mandatory but since its what i use every day id prefer it to be in .net. Doesn't really matter if its in vb.net or c# Some of the topics that would be talked about its how to minimize dependencies and using interfaces throughout your solution rather than concrete classes. Maybe it would constract /compare some of the newest design principles like DDD, Repository Pattern, Ect... I already have "Clean Code" (don't know if its this type of book or not) and "Working effectively with legacy code" on my radar but id like to read a book based upon the topic i talked about above first. Is there such a book?


  • Developing a 2D Game for Windows Phone 8

    - by Vaccano
    I would like to develop a 2D game for Windows Phone 8. I am a professional application developer by day, and this seems like a fun hobby. But I have been disappointed trying to get going. It seems that 2D games (far and away the majority of games) do not have any real support: the Windows Phone makers did not include support for Direct2D, so unless you are planning to make a fully 3D app, you are out of luck.

    So, if you just want to make a nice 2D app, these are your choices:

    1. Write your game using XAML and C# (performance issues?)
    2. Write your game using Direct3D, but only draw on one plane.
    3. Use the DirectX Tool Kit found on CodePlex. It allows you to use the dying XNA framework's API for development.

    Number 3 seems the best for my game, but I hate to waste my time learning the XNA API when Microsoft has clearly stated that it is not going to be supported going forward. Number 2 would work, but 3D development is really hard, and I would rather not have to do all that to get the 2D effect. (Assuming Direct2D is easier; I have yet to look into that.) Number 1 seems the easiest, but I worry that my app will not run well if it is based on XAML rendering rather than DirectX.

    What is the suggested method from Microsoft? And who decided that 2D games were going to get shortchanged?


  • Creating collaborative whiteboard drawing application

    - by Steven Sproat
    I have my own drawing program in place, with a variety of "drawing tools" such as Pen, Eraser, Rectangle, Circle, Select, Text, etc. It's made with Python and wxPython. Each tool mentioned above is a class, and all of them have polymorphic methods such as left_down(), mouse_motion(), hit_test(), etc. The program manages a list of all drawn shapes: when a user has drawn a shape, it's added to the list. This is used to manage undo/redo operations too.

    So, I have a decent codebase that I can hook collaborative drawing into. Each shape could be changed to know its owner (the user who drew it) and to only allow delete/move/rescale operations to be performed on shapes owned by that person.

    I'm just wondering the best way to develop this. One person in the "session" will have to act as the server; I have no money to offer free central servers. Somehow users will need a way to connect to servers, meaning some kind of "discover servers" browser... or something. How do I broadcast changes made to the application? Drawing in realtime and broadcasting a message on each mouse motion event would be costly in terms of performance, and things get worse the more users there are at a given time (see the batching sketch below).

    Any ideas are welcome; I'm not too sure where to begin with developing this (or even how to test it).
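
    On the broadcast cost: a common approach is to coalesce mouse-motion points locally and flush them on a short interval, so the wire carries a few batched messages per second instead of one per pixel. A rough sketch, which fits naturally into the Pen tool's mouse_motion() since the app is already Python (hypothetical names; the 40 ms interval is a typical starting point, not a measured one):

        # Sketch: batch pen points and broadcast them at a fixed interval.
        import json, time

        class StrokeBatcher:
            def __init__(self, send, interval=0.04):   # send: callable taking bytes
                self.send = send
                self.interval = interval
                self.points = []
                self.last_flush = time.monotonic()

            def on_mouse_motion(self, x, y):
                self.points.append((x, y))
                if time.monotonic() - self.last_flush >= self.interval:
                    self.flush()

            def flush(self):   # also call once from left_up() to end the stroke
                if self.points:
                    self.send(json.dumps({'stroke': self.points}).encode())
                    self.points = []
                self.last_flush = time.monotonic()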


  • PHP: modifying and combining arrays

    - by Industrial
    Hi everyone,

    I have a bit of an array headache going on. The function does what I want, but since I am not yet well acquainted with PHP's array/looping functions, my question is whether there's any part of this function that could be improved from a performance perspective.

    I tried to be as complete as possible in my descriptions at each stage of the function, which, shortly described, prefixes all keys in an array, fills up eventual empty/non-valid keys with '', and removes the prefixes before returning the array:

        $var = myFunction( array('key1', 'key2', 'key3', '111') );

        function myFunction ($keys) {
            $prefix = 'prefix_';
            $keyCount = count($keys);

            // Prefix each key and remove old keys
            for ($i = 0; $i < $keyCount; $i++) {
                $keys[] = $prefix . $keys[$i];
                unset($keys[$i]);
            }
            // output: array('prefix_key1', 'prefix_key2', 'prefix_key3', 'prefix_111')

            // Get all keys from memcached. Only returns valid keys
            $items = $this->memcache->get($keys);
            // output: array('prefix_key1' => 'value1', 'prefix_key2' => 'value2', 'prefix_key3' => 'value3')
            // note: key 111 was not found in memcache.

            // Fill up eventual keys that are not valid/empty from memcache
            $return = $items + array_fill_keys($keys, '');
            // output: array('prefix_key1' => 'value1', 'prefix_key2' => 'value2',
            //               'prefix_key3' => 'value3', 'prefix_111' => '')

            // Remove the prefixes for each result before returning array to application
            foreach ($return as $k => $v) {
                $expl = explode($prefix, $k);
                $return[$expl[1]] = $v;
                unset($return[$k]);
            }
            // output: array('key1' => 'value1', 'key2' => 'value2', 'key3' => 'value3', '111' => '')

            return $return;
        }

    Thanks a lot!
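
    (In the PHP itself, substr($k, strlen($prefix)) would be a cheaper way to strip the prefix than explode().) The overall shape of the operation (prefix, bulk get, backfill, strip) also stays readable as comprehensions; a sketch of the same flow in Python, with a FakeCache standing in for the memcache client's bulk get:

        # Sketch: same prefix / bulk-get / backfill / strip flow in Python.
        def get_prefixed(cache, keys, prefix='prefix_'):
            prefixed = [prefix + k for k in keys]
            found = cache.get_multi(prefixed)   # {prefixed_key: value}
            return {k: found.get(prefix + k, '') for k in keys}

        class FakeCache:   # stand-in for the real memcache client
            def __init__(self, data):
                self.data = data
            def get_multi(self, keys):
                return {k: self.data[k] for k in keys if k in self.data}

        cache = FakeCache({'prefix_key1': 'value1', 'prefix_key2': 'value2'})
        print(get_prefixed(cache, ['key1', 'key2', '111']))
        # {'key1': 'value1', 'key2': 'value2', '111': ''}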


  • Saving tags into a database table in CakePHP

    - by Cameron
    I have the following setup for my CakePHP app:

        Posts
            id
            title
            content

        Topics
            id
            title

        Topic_Posts
            id
            topic_id
            post_id

    So basically I have a table of Topics (tags) that are all unique and have an id, and they can be attached to a post using the Topic_Posts join table. When a user creates a new post, they will fill in the topics by typing them into a textarea, separated by commas; these will then be saved into the Topics table if they do not already exist, and the references will be saved into the Topic_Posts table.

    I have the models set up like so:

    Post model:

        class Post extends AppModel {
            public $name = 'Post';
            public $hasAndBelongsToMany = array(
                'Topic' => array('with' => 'TopicPost')
            );
        }

    Topic model:

        class Topic extends AppModel {
            public $hasMany = array(
                'TopicPost'
            );
        }

    TopicPost model:

        class TopicPost extends AppModel {
            public $belongsTo = array(
                'Topic', 'Post'
            );
        }

    And for the new post method I have this so far:

        public function add() {
            if ($this->request->is('post')) {
                //$this->Post->create();
                if ($this->Post->saveAll($this->request->data)) {
                    // Redirect the user to the newly created post (pass the slug for performance)
                    $this->redirect(array(
                        'controller' => 'posts',
                        'action' => 'view',
                        'id' => $this->Post->id
                    ));
                } else {
                    $this->Session->setFlash('Server broke!');
                }
            }
        }

    As you can see, I have used saveAll, but how do I go about dealing with the Topic data?


  • Common "truisms" needing correction the most

    - by Charles Bretana
    In addition to "I never met a man I didn't like", Will Rogers had another great little ditty I've always remembered. It went: "It's not what you don't know that'll hurt you, it's what you do know that ain't so." We all know or subscribe to many IT "truisms" that mostly have a strong basis in fact, in something in our professional careers, something we learned from others, lessons learned the hard way by ourselves, or by others who came before us. Unfortuntely, as these truisms spread throughout the community, the details—why they came about and the caveats that affect when they apply—tend to not spread along with them. We all have a tendency to look for, and latch on to, small "rules" or principles that we can use to avoid doing a complete exhaustive analysis for every decision. But even though they are correct much of the time, when we sometimes misapply them, we pay a penalty that could be avoided by understooding the details behind them. For example, when user-defined functions were first introduced in SQL Server it became "common knowledge" within a year or so that they had extremely bad performance (because it required a re-compilation for each use) and should be avoided. This "trusim" still increases many database developers' aversion to using UDFs, even though Microsoft's introduction of InLine UDFs, which do not suffer from this issue at all, mitigates this issue substantially. In recent years I have run into numerous DBAs who still believe you should "never" use UDFs, because of this. What other common not-so-"trusims" do you know, which many developers believe, that are not quite as universally true as is commonly understood, and which the developer community would benefit from being better educated about? Please include why it was "true" to start off with, and under what circumstances it's not true. Limit responses to issues that are technical, where the "common" application of a "rule or principle" is in fact correct most of the time, or was correct back when it was first elucidated, but—in the edge cases, or because of not understanding the principle thoroughly, because technology has changed since it first spread, or applying the rule today without understanding the details behind the rule—can easily backfire or cause the opposite of the intended effect.


  • I'm annoyed with ASP.NET MVC action links. Is there something better in MVC 3?

    - by Jonathon Kresner
    After almost 3 years with MVC I'm scratching my head. Is it just me, or does the way we specify links in ASP.NET MVC suck?

        @Html.ActionLink("Log Off", "LogOff", "Account")

    In the previews for MVC 1 we had the funky generic action links, which gave us IntelliSense and compile checking, which I LOVED. I know they removed them because of performance issues and because you could not actually guarantee that the route would resolve all the time... However, the default way of doing it just doesn't make me feel safe enough in a big application.

    I've also used T4MVC with MVC 2; to be honest, I didn't really like it. It's not part of the MVC framework, and it's frustrating to develop with, especially with source control in big teams and continuous-integration builds. I guess I could also import MVC Futures and keep using the generic types (it's probably what I'll do).

    I'm just about to start a very big project and was wondering what other people are thinking. Is anyone else annoyed with the options, or has a new solution? It seems like action links are the most basic and frequently used feature; shouldn't there be a good out-of-the-box solution? We're just about to hit revision 3 of this framework.


  • Updating UI objects in windows forms

    - by P a u l
    Pre-.NET, I was using MFC, ON_UPDATE_COMMAND_UI, and the CCmdUI class to update the state of my Windows UI. From the older MFC/Win32 reference:

        Typically, menu items and toolbar buttons have more than one state. For example, a menu
        item is grayed (dimmed) if it is unavailable in the present context. Menu items can also
        be checked or unchecked. A toolbar button can also be disabled if unavailable, or it can
        be checked. Who updates the state of these items as program conditions change? Logically,
        if a menu item generates a command that is handled by, say, a document, it makes sense to
        have the document update the menu item. The document probably contains the information on
        which the update is based. If a command has multiple user-interface objects (perhaps a
        menu item and a toolbar button), both are routed to the same handler function. This
        encapsulates your user-interface update code for all of the equivalent user-interface
        objects in a single place. The framework provides a convenient interface for automatically
        updating user-interface objects. You can choose to do the updating in some other way, but
        the interface provided is efficient and easy to use.

    What is the guidance for .NET Windows Forms? I am using an Application.Idle handler in the main form but am not sure this is the best way to do this. About the time I put all my UI updates in the Idle event handler, my app started to show some performance problems, and I don't have the metrics to track this down yet. Not sure if it's related.


  • Return IQueryable<T> or List<T> from a Repository<T>?

    - by Danny Chen
    Currently I'm building a Windows application using SQLite. In the database there is a table, say User, and in my code there are a Repository<User> and a UserManager. I think it's a very common design. In the repository there is a List method:

        // Repository<User> class
        public List<User> List(where, orderby, topN parameters and etc)
        {
            // query and return
        }

    This brings a problem if I want to do something complex in UserManager.cs:

        // UserManager.cs
        public List<User> ListUsersWithBankAccounts()
        {
            var userRep = new UserRepository();
            var bankRep = new BankAccountRepository();
            var result = // do something complex, say "I want the users who live in NY
                         // and have at least two bank accounts in the system"
        }

    You can see that returning List<User> brings a performance issue, because the query is executed earlier than expected. Now I need to change it to something like an IQueryable<T>:

        // Repository<User> class
        public TableQuery<User> List(where, orderby, topN parameters and etc)
        {
            // query and return
        }

    TableQuery<T> is part of the SQLite driver and is roughly equivalent to IQueryable<T> in EF: it represents a query without executing it immediately. But now the problem is that UserManager.cs doesn't know what a TableQuery<T> is; I need to add a new reference and import namespaces like SQLite.Query in the business-layer project. It really gives a bad code feeling. Why should my business layer know the details of the database? Why should the business layer know what SQLite is? What's the correct design, then?
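
    One way out (a sketch of one option, not the one right answer): keep the deferred execution but hide it behind types the business layer owns, e.g. small criteria objects that only the repository knows how to translate into SQL. The idea in Python, with sqlite3 standing in for the SQLite driver and the names hypothetical:

        # Sketch: the business layer composes Criterion objects it owns;
        # only the repository translates them to SQL and touches sqlite3.
        import sqlite3
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Criterion:
            field: str
            op: str          # '=', '>', '>=', ...
            value: object

        class UserRepository:
            def __init__(self, conn):
                self.conn = conn   # sqlite3.Connection

            def list(self, *criteria):
                clause = ' AND '.join(f'{c.field} {c.op} ?' for c in criteria) or '1=1'
                # a cursor is lazy: rows are fetched as the caller iterates
                return self.conn.execute(f'SELECT * FROM users WHERE {clause}',
                                         [c.value for c in criteria])

        # Manager code builds criteria without importing the driver:
        #   repo.list(Criterion('city', '=', 'NY'), Criterion('accounts', '>=', 2))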


  • NHibernate + GridView + TargetInvocationException

    - by Scott
    For our grid views, we're setting the data source to a list of results from an NHibernate query. We're using lazy loading, so the objects are actually proxied... most of the time. In some instances the list will consist of types Student and Composition_Aop_Proxy_jklasjdkl31231, which implements the same members as the Student class. We've still got the session open, so the lazy loading would resolve fine if the GridView didn't throw an error about the different types in the grid view.

    Our current workaround is to clone the object, which results in fetching all of the data that can be lazily loaded, even though most of it won't be accessed... ever. This converts the proxy into an actual object, and the grid view is happy. The performance implications kind of scare me, as we're getting closer to rolling the code out as is. I've tried evicting the object after a save, which should ensure that everything is a proxy, but this doesn't seem like a good idea either. Does anyone have any suggestions/workarounds?


  • Generating jQuery 'rules' from the business model to the UI in ASP.NET MVC

    - by jim
    Hi all,

    I've had a good look around and am certain there's no matching question on SO, so here goes.

    Has anyone created a 'helper' method on their model that generates jQuery (or plain JavaScript) validation rules dynamically, based on the criteria/rules that are contained within the object and taken from a repository (i.e. DB)? What I'm thinking of is a discrete set of partial views (and associated models) that have rules at the business-logic level; rather than (or in combination with) validating the rules at postback, the same rules would be translated into tightly focussed jQuery methods that work identically at the client (js) and server (c#) levels. I can see performance benefits here. Also, the rule definitions could be created in a single place (in c#) and the jQuery generated off of that, thus allowing single edits to update both code streams.

    I appreciate that there would be limitations imposed by language-specific constraints, but the general principle could be quite interesting if used appropriately. I'm also aware that testability could be an issue when using two different language structures and hoping to achieve similar test outcomes, but those aside... any thoughts or experiences of similar out there??

    cheers
    jimi


  • Cython setup.py gives .o instead of .dll

    - by alok1974
    Hi, I am a newbie to Cython, so pardon me if I am missing something obvious here. I am trying to build C extensions to be used in Python for enhanced performance. I have an fc.py module with a bunch of functions, and I'm trying to generate a .dll through Cython using distutils, running on win64:

        c:\python26\python c:\cythontest\setup.py build_ext --inplace

    I have the distutils.cfg in C:\Python26\Lib\distutils. As required, distutils.cfg has the following config settings:

        [build]
        compiler = mingw32

    My setup.py looks like this:

        from distutils.core import setup
        from distutils.extension import Extension
        from Cython.Distutils import build_ext

        ext_modules = [Extension('fc', [r'C:\cythonTest\fc.pyx'])]

        setup(
            name = 'FC Extensions',
            cmdclass = {'build_ext': build_ext},
            ext_modules = ext_modules
        )

    I have the latest version of MinGW for amd64 win64 target/host builds, and the latest version of Cython for Python 2.6 on win64. Cython does give me an fc.c without errors, only a few warnings about type conversions, which I will handle once I have it working. Further, it produces fc.def and fc.o files instead of giving a .dll. I get no errors. I find in threads that it should create the .so or .dll automatically as required, which is not happening.


  • Database layout for an application with geocoding features using geokit

    - by vooD
    I'm developing a real estate web catalogue and want to geocode every ad using the geokit gem. My question is: what would be the best database layout, from a performance point of view, if I want to make searches by country, by city of the selected country, or by administrative area or nearest metro station of the selected city? Available countries, cities, administrative areas and metro stations should be defined by the administrator of the catalogue and must be validated by geocoding. I came up with a single table:

        create_table "geo_locations", :force => true do |t|
          t.integer "geo_location_id"     # parent geo location (e.g. country is the parent geo location of a city)
          t.string  "country", :null => false      # necessary for any geo location
          t.string  "city"                # not null for city geo locations and their children
          t.string  "administrative_area" # not null for administrative area geo locations and their children
          t.string  "thoroughfare_name"   # not null for metro station or street name geo locations and their children
          t.string  "premise_number"      # house number
          t.float   "lng", :null => false
          t.float   "lat", :null => false
          t.float   "bound_sw_lat", :null => false
          t.float   "bound_sw_lng", :null => false
          t.float   "bound_ne_lat", :null => false
          t.float   "bound_ne_lng", :null => false
          t.integer "mappable_id"
          t.string  "mappable_type"
          t.string  "type"                # country, city, administrative area, metro station or address
        end

    The final geo location type is address: it contains all the necessary information to put a marker for the real estate ad on the map. But I'm still stuck on the search functionality. Any help would be highly appreciated.


  • Android CursorAdapters, ListViews and background threads

    - by MattC
    This application I've been working on has databases with multiple megabytes of data to sift through. A lot of the activities are just ListViews descending through various levels of data within the databases until we reach "documents", which are just HTML to be pulled from the DB(s) and displayed on the phone.

    The issue I am having is that some of these activities need to be able to search through the databases by capturing keystrokes and re-running the query with a "like %blah%" in it. This works reasonably quickly except when the user first loads the data and when the user first enters a keystroke. I am using a ResourceCursorAdapter, and I am generating the cursor in a background thread, but in order to call listAdapter.changeCursor(), I have to use a Handler to post it to the main UI thread. This particular call is freezing the UI thread just long enough to bring up the dreaded ANR dialog.

    I'm curious how I can offload this to a background thread entirely, so the user interface remains responsive and we don't have ANR dialogs popping up.

    Just for full disclosure, I was originally returning an ArrayList of custom model objects and using an ArrayAdapter, but (understandably) the customer pointed out it was bad memory management, and I wasn't happy with the performance anyway. I'd really like to avoid a solution where I'm generating huge lists of objects and then calling listAdapter.notifyDataSetChanged()/notifyDataSetInvalidated().

    Here is the code in question:

        private Runnable filterDrugListRunnable = new Runnable() {
            public void run() {
                if (filterLock.tryLock() == false)
                    return;

                cur = ActivityUtils.getIndexItemCursor(DrugListActivity.this);

                if (cur == null || forceRefresh == true) {
                    cur = docDb.getItemCursor(selectedIndex.getIndexId(), filter);
                    ActivityUtils.setIndexItemCursor(DrugListActivity.this, cur);
                    forceRefresh = false;
                }

                updateHandler.post(new Runnable() {
                    public void run() {
                        listAdapter.changeCursor(cur);
                    }
                });

                filterLock.unlock();
                updateHandler.post(hideProgressRunnable);
                updateHandler.post(updateListRunnable);
            }
        };


  • What's so bad about building XML with string concatenation?

    - by wsanville
    In the thread "What's your favorite 'programmer ignorance' pet peeve?", the following answer appears, with a large number of upvotes:

        Programmers who build XML using string concatenation.

    My question is: why is building XML via string concatenation (such as with a StringBuilder in C#) bad? I've done this several times in the past, as it's sometimes the quickest way for me to get from point A to point B with the data structures/objects I'm working with. So far, I have come up with a few reasons why this isn't the greatest approach, but is there something I'm overlooking? Why should this be avoided?

    - Probably the biggest reason I can think of is that you need to escape your strings manually, and most programmers will forget this. It will work great for them when they test it, but then "randomly" their apps will fail when someone throws an & symbol into their input somewhere. OK, I'll buy this, but it's really easy to prevent the problem (SecurityElement.Escape, to name one way).
    - When I do this, I usually omit the XML declaration (i.e. <?xml version="1.0"?>). Is this harmful?
    - Performance penalties? If you stick with proper string concatenation (i.e. StringBuilder), is this anything to be concerned about? Presumably, a class like XmlWriter will also need to do a bit of string manipulation...
    - There are more elegant ways of generating XML, such as using XmlSerializer to automatically serialize/deserialize your classes. OK, sure, I agree. C# has a ton of useful classes for this, but sometimes I don't want to make a class for something really quick, like writing out a log file or something. Is this just me being lazy? If I am doing something "real", this is my preferred approach for dealing with XML.
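
    The escaping point is the crux, and it is easy to demonstrate. A quick illustration in Python (the same trap exists in any language): naive concatenation silently produces malformed XML as soon as the data contains markup characters, while a real XML builder escapes for you.

        # Sketch: why concatenation bites. The '&' and '<' corrupt the document.
        import xml.etree.ElementTree as ET
        from xml.sax.saxutils import escape

        name = 'Smith & Sons <LLC>'

        bad = '<company>' + name + '</company>'              # not well-formed XML
        ok_by_hand = '<company>' + escape(name) + '</company>'

        el = ET.Element('company')                           # a builder escapes for you
        el.text = name

        print(bad)
        print(ok_by_hand)
        print(ET.tostring(el, encoding='unicode'))
        # The last two both yield: <company>Smith &amp; Sons &lt;LLC&gt;</company>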


  • Get the first and last posts in a thread

    - by Grampa
    I am trying to code a forum website and I want to display a list of threads. Each thread should be accompanied by info about the first post (the "head" of the thread) as well as the last. My current database structure is the following: threads table: id - int, PK, not NULL, auto-increment name - varchar(255) posts table: id - int, PK, not NULL, auto-increment thread_id - FK for threads The tables have other fields as well, but they are not relevant for the query. I am interested in querying threads and somehow JOINing with posts so that I obtain both the first and last post for each thread in a single query (with no subqueries). So far I am able to do it using multiple queries, and I have defined the first post as being: SELECT * FROM threads t LEFT JOIN posts p ON t.id = p.thread_id ORDER BY p.id LIMIT 0, 1 The last post is pretty much the same except for ORDER BY id DESC. Now, I could select multiple threads with their first or last posts, by doing: SELECT * FROM threads t LEFT JOIN posts p ON t.id = p.thread_id ORDER BY p.id GROUP BY t.id But of course I can't get both at once, since I would need to sort both ASC and DESC at the same time. What is the solution here? Is it even possible to use a single query? Is there any way I could change the structure of my tables to facilitate this? If this is not doable, then what tips could you give me to improve the query performance in this particular situation?

