Search Results

Search found 520 results on 21 pages for 'porting'.


  • Cross Platform C library for GUI Apps?

    - by Moshe
    Free of charge, simple to learn/use, cross-platform C library for GUI apps? Am I looking for Qt? Bonus question: can I develop with said library/toolkit on Mac, then recompile on PC/Linux? Super bonus question: link to a tutorial and/or download of said library.

    (RE)EDIT: The truth is that I'm in the process of catching up on the C family (coming from web development - XHTML/PHP/MySQL) to learn iPhone development. I do understand that C is not C++ or Objective-C, but I want to keep the learning curve as gentle as possible. Not to get too off topic, but I am also on the lookout for good starter books and websites; I've found this so far. I'm trying to kill many birds with one stone here. I do understand that there are platform-specific extensions, but I will try to avoid those for porting purposes. The idea is that I want to write the code on one machine and just compile it three times (Mac/Win/Linux). If Objective-C will compile on Windows and Linux as well as OS X, then that's good. If I must use C++, that's also fine.

    EDIT: Link to Qt, please...

    Read the article

  • How to solve this problem with Python

    - by morpheous
    I am "porting" an application I wrote in C++ into Python. This is the current workflow: Application is started from the console Application parses CLI args Application reads an ini configuration file which specifies which plugins to load etc Application starts a timer Application iterates through each loaded plugin and orders them to start work. This spawns a new worker thread for the plugin The plugins carry out their work and when completed, they die When time interval (read from config file) is up, steps 5-7 is repeated iteratively Since I am new to Python (2 days and counting), the distinction between script, modules and packages are still a bit hazy to me, and I would like to seek advice from Pythonista as to how to implement the workflow described above, using Python as the programing language. In order to keep things simple, I have decided to leave out the time interval stuff out, and instead run the python script/scripts as a cron job instead. This is how I am thinking of approaching it: Encapsulate the whole application in a package which is executable (i.e. can be run from the command line with arguments. Write the plugins as modules (I think maybe its better to implement each module in a separate file?) I havent seen any examples of using threading in Python yet. Could someone provide a snippet of how I could spawn a thread to run a module. Also, I am not sure how to implement the concept of plugins in Python - any advice would be helpful - especially with a code snippet.

    Read the article

  • iOS 7: Best way to implement a textview that presents previous input but is easy to clear

    - by Frank R.
    I'm porting a Mac app to the iPhone and I've run into an unexpected problem. On the Mac there's a text field that is automatically pre-selected (= first responder) when a dialog shows up. The text field shows the text you entered in the field the last time, and the text is pre-selected so that if you just start typing it gets cleared away. If you want to edit the existing text instead, you just hit the forward or backward arrow. On the iPhone this behavior seems very hard to implement. The text view shows up with the old text and I can even get it to pre-select, but whatever I do the result is not quite right. When I use

        [aTextView setMarkedText:myText selectedRange:newRange];

    the text does show up as marked, and if I just start typing the old text goes away. However, there's no equivalent to the cursor keys on iOS, so I cannot NOT erase the text... which is hardly the point. What kind of iOS idiom would be appropriate for giving the option to either edit or overwrite existing text? Best regards, Frank
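
    One common idiom is to select the existing text when editing begins: typing then replaces it, while a tap still places the caret for editing. A sketch using the UITextInput selection API (the dispatch_async is an assumption; the selection sometimes does not stick when set synchronously inside the delegate callback):

        // In the text view's delegate:
        - (void)textViewDidBeginEditing:(UITextView *)textView
        {
            dispatch_async(dispatch_get_main_queue(), ^{
                UITextRange *all =
                    [textView textRangeFromPosition:textView.beginningOfDocument
                                         toPosition:textView.endOfDocument];
                textView.selectedTextRange = all;   // typing overwrites; a tap edits
            });
        }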

    Read the article

  • Hovering a div shows hidden div - jquery to prototype conversion

    - by phil
    I may get killed for this, but I have been trying for a few days to use Prototype to show a hidden div when hovering over another div. I have this working fine in jQuery, but I could use some help porting it over to Prototype. The code sample:

        <script type="text/javascript">
        $(document).ready(function(){
            $(".recent-question").hover(function(){
                $(this).find(".interact").fadeIn(2.0);
            }, function(){
                $(this).find(".interact").fadeOut(2.0);
            });
        });
        </script>

        <div class="recent-question">
            <img src="images/new/img-sample.gif" alt="" width="70" height="60" />
            <div class="question-text">
                <h3>Heading</h3>
                <p><a href="#">Yadda Yadda Yadda</a></p>
            </div>
            <div class="interact" style="display:none;">
                <ul>
                    <li><a href="#">Choice1</a></li>
                    <li><a href="#">Choice2</a></li>
                    <li><a href="#">Choice3</a></li>
                </ul>
            </div>
        </div>

    So basically, when I hover over a recent-question div I would like the div.interact to fade in (or at least appear at all). The above code is jQuery, but I am required to use Prototype for this project. Any help converting it would be greatly appreciated. Thanks!
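
    A rough Prototype equivalent might look like the following (a sketch, untested; it assumes script.aculo.us is loaded, since the Effect.* fades are not part of Prototype itself):

        document.observe('dom:loaded', function() {
            $$('.recent-question').each(function(question) {
                var interact = question.down('.interact');
                question.observe('mouseover', function() {
                    Effect.Appear(interact, { duration: 2.0 });
                });
                question.observe('mouseout', function() {
                    Effect.Fade(interact, { duration: 2.0 });
                });
            });
        });

    One caveat: unlike jQuery's hover(), mouseover/mouseout also fire when the pointer moves between child elements; Prototype 1.7 accepts the 'mouseenter'/'mouseleave' event names if that becomes a problem.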

    Read the article

  • .NET port with Java's Map, Set, HashMap

    - by Nikos Baxevanis
    I am porting Java code to .NET and I am stuck on the following lines, which behave unexpectedly in .NET. Java:

        Map<Set<State>, Set<State>> sets = new HashMap<Set<State>, Set<State>>();
        Set<State> p = new HashSet<State>();
        if (!sets.containsKey(p)) { ... }

    The equivalent .NET code could possibly be:

        IDictionary<HashSet<State>, HashSet<State>> sets =
            new Dictionary<HashSet<State>, HashSet<State>>();
        HashSet<State> p = new HashSet<State>();
        if (!sets.ContainsKey(p)) { /* (Add to a list). Always gets here in .NET (??) */ }

    However, the key comparison fails: the program thinks that "sets" never contains the key "p", which eventually results in an OutOfMemoryException. Perhaps I am missing something; object equality and identity might work differently between Java and .NET. I tried implementing IComparable and IEquatable in the State class, but the results were the same.

    Edit: What the code does is: if "sets" does not contain the key "p" (which is a HashSet), it adds "p" at the end of a LinkedList. The State class (Java) is a simple class defined as:

        public class State implements Comparable<State> {
            boolean accept;
            Set<Transition> transitions;
            int number;
            int id;
            static int next_id;

            public State() {
                resetTransitions();
                id = next_id++;
            }

            // ...

            public int compareTo(State s) { return s.id - id; }
            public boolean equals(Object obj) { return super.equals(obj); }
            public int hashCode() { return super.hashCode(); }
        }
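
    The likely culprit: Java's HashSet compares by contents (AbstractSet overrides equals/hashCode), while .NET's HashSet<T> uses reference equality when used as a dictionary key. A hedged sketch of one possible fix, passing the framework's content-based set comparer to the dictionary:

        var sets = new Dictionary<HashSet<State>, HashSet<State>>(
            HashSet<State>.CreateSetComparer());   // compares keys by their elements

        var p = new HashSet<State>();
        if (!sets.ContainsKey(p))
        {
            // now reached only when no key with the same *contents* exists
        }

    CreateSetComparer() compares elements with their default equality, which here matches the Java code's identity-based equals/hashCode on State.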

    Read the article

  • Is there a way to prevent SQL Server silently truncating data in local variables and stored procedure parameters?

    - by Luke Woodward
    I recently encountered an issue while porting an app to SQL Server. It turned out that this issue was caused by a stored procedure parameter being declared too short for the data being passed to it: the parameter was declared as VARCHAR(100) but in one case was being passed more than 100 characters of data. What surprised me was that SQL Server didn't report any errors or warnings -- it just silently truncated the data to 100 characters. The following SQLCMD session demonstrates this:

        create procedure WhereHasMyDataGone (@data varchar(5)) as
        begin
            print 'Your data is ''' + @data + '''.';
        end;
        go

        exec WhereHasMyDataGone '123456789';
        go

        Your data is '12345'.

    Local variables also exhibit the same behaviour:

        declare @s varchar(5) = '123456789';
        print @s;
        go

        12345

    Is there an option I can enable to have SQL Server report errors (or at least warnings) in such situations? Or should I just declare all local variables and stored procedure parameters as VARCHAR(MAX) or NVARCHAR(MAX)?
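
    For what it's worth, one defensive pattern (a coding convention rather than a server setting, sketched here) is to accept VARCHAR(MAX) and validate the length explicitly, since the engine itself raises no truncation error for parameters or local variables:

        create procedure WhereHasMyDataGone (@data varchar(max)) as
        begin
            if len(@data) > 5
            begin
                raiserror('@data must be at most 5 characters.', 16, 1);
                return;
            end;
            print 'Your data is ''' + @data + '''.';
        end;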

    Read the article

  • In Rails 3, how does one render HTML within a JSON response?

    - by ylg
    I'm porting an application from Merb 1.1 / 1.8.7 to Rails 3 (beta) / 1.9.1 that uses JSON responses containing HTML fragments: e.g., a JSON container specifying an update on a user record, together with the rendered markup for the updated user row. In Merb, since whatever a controller method returns is given to the client, one can put together a Hash, assign a rendered partial to one of the keys and return hash.to_json (though that certainly may not be the best way). In Rails, it seems that to get data back to the client one must use render, and render can only be called once, so rendering the hash to JSON won't work because of the partial render. From reading around, it seems one could put that data into a JSON .erb view file with an <%= render ... %> call for the partial inside, and render that. Is there a Rails way of solving this problem (returning JSON containing one or more HTML fragments) other than that? In Merb:

        only_provides :json
        ...
        self.status = 204 # or appropriate if not async
        return {
          'action' => 'update',
          'type'   => 'user',
          'id'     => @user.id,
          'html'   => partial('user_row', format: :html, user: @user)
        }.to_json

    In Rails?
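
    A hedged sketch of the usual Rails 3 approach: render_to_string builds a partial's HTML without committing the response, so the result can be embedded in a single render :json call (partial and action names mirror the Merb snippet above):

        def update
          # ... update @user ...
          render :json => {
            'action' => 'update',
            'type'   => 'user',
            'id'     => @user.id,
            'html'   => render_to_string(:partial => 'user_row',
                                         :locals  => { :user => @user })
          }
        end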

    Read the article

  • What is PHP like as a programming language?

    - by seanlinmt
    I am not really familiar with PHP, but I get the impression that it is like JavaScript (syntax-wise). What are the benefits of a dynamically typed language compared to a strongly typed language like C# or Java, and how would this help in the context of web development? What would make a dynamically typed language so attractive? Or does the popularity of PHP have more to do with it being free? Okay, I think I had better give a little more background to get more meaningful answers, because I do not want a flame war. I come from a C background and then moved into C# and Visual Studio. Having code completion, integration with an SQL database, huge existing class libraries and easy-to-access documentation, as well as new tools such as LINQ and ReSharper, was like heaven. I didn't enjoy JavaScript before jQuery, but now I love it as well. Recently, I ported a PHP project over to C#, and I used Zend to help me debug and understand more while porting - instead of maintaining two code streams. That also cut down on the cost of the server and maintenance. Getting into PHP would be nice. I think that Visual Studio has spoiled me - but then again, Eclipse is equally spoiling. It would be nice to have an answer from someone who has experience developing both under PHP and .NET.

    Read the article

  • Good C++ array class for dealing with large arrays of data in a fast and memory efficient way?

    - by Shane MacLaughlin
    Following on from a previous question relating to heap usage restrictions, I'm looking for a good standard C++ class for dealing with big arrays of data in a way that is both memory efficient and speed efficient. I had been allocating the array using a single malloc/HeapAlloc, but after multiple tries using various calls, I keep falling foul of heap fragmentation. So the conclusion I've come to, other than porting to 64-bit, is to use a mechanism that allows me to have a large array spanning multiple smaller memory fragments. I don't want an allocation per element, as that is very memory inefficient, so the plan is to write a class that overrides the [] operator and selects an appropriate element based on the index. Is there already a decent class out there to do this, or am I better off rolling my own? From my understanding, and some googling, a 32-bit Windows process should theoretically be able to address up to 2GB. Now assuming I have 2GB installed, and various other processes and services are hogging about 400MB, how much usable memory do you think my program can reasonably expect to get from the heap? I'm currently using various flavours of Visual C++.
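
    A minimal sketch of the override-operator[] idea (illustrative only; note that std::deque already provides exactly this kind of chunked, non-contiguous storage and is worth trying first):

        #include <cstddef>
        #include <vector>

        // Fixed-size array whose storage is split into independently allocated
        // blocks, so the heap never needs one huge contiguous region.
        template <typename T, std::size_t BlockSize = 65536>
        class ChunkedArray {
        public:
            explicit ChunkedArray(std::size_t size)
                : size_(size), blocks_((size + BlockSize - 1) / BlockSize)
            {
                for (std::size_t i = 0; i < blocks_.size(); ++i)
                    blocks_[i].resize(BlockSize);   // one modest allocation per block
            }

            // Map a flat index onto (block, offset within block).
            T& operator[](std::size_t i)
            {
                return blocks_[i / BlockSize][i % BlockSize];
            }

            std::size_t size() const { return size_; }

        private:
            std::size_t size_;
            std::vector< std::vector<T> > blocks_;
        };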

    Read the article

  • understanding and implementing Boids

    - by alphablender
    So I'm working on porting Boids to BrightScript, based on the pseudocode here: http://www.kfish.org/boids/pseudocode.html. I'm trying to understand the data structures involved; for example, is velocity a single value, or is it a 3D value, i.e. velocity = {x, y, z}? It seems as if the pseudocode mixes this up, since sometimes it has an equation that includes both vectors and single-value items:

        v1 = rule1(b)
        v2 = rule2(b)
        v3 = rule3(b)
        b.velocity = b.velocity + v1 + v2 + v3

    If velocity is a three-part value then this would make sense, but I'm not sure. So my first question is: is this the correct data structure for a single boid, based on the pseudocode on that page?

        boid = {position: {px:0, py:0, pz:0}, velocity: {x:0, y:0, z:0}, vector: {x:0, y:0, z:0}, pc: {x:0, y:0, z:0}, pv: {x:0, y:0, z:0}}

    where pc = perceived center and pv = perceived velocity. I've implemented vector_add, vector_sub, vector_div, and vector boolean functions. The reason I'm starting from this pseudocode is that I've not been able to find anything else as readable, but it still leaves me with lots of questions, as the data structures are not explicitly defined for each variable. (Edit) Here's a good example of what I'm talking about:

        IF |b.position - bJ.position| < 100 THEN

    If b.position and b[j].position are both 3D coordinates, how can they be considered "less than 100" unless they are < {100, 100, 100}? Maybe that is what I need to do here: use a vector comparison function?
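
    For what it's worth, |v| in that pseudocode denotes the Euclidean length (magnitude) of the vector, not a component-wise comparison. A BrightScript sketch, reusing the vector_sub mentioned above (vector keys x/y/z assumed):

        ' Length of a 3D vector: sqrt(x^2 + y^2 + z^2)
        function vector_length(v as object) as float
            return Sqr(v.x * v.x + v.y * v.y + v.z * v.z)
        end function

        ' |b.position - bj.position| < 100 then becomes:
        ' if vector_length(vector_sub(b.position, bj.position)) < 100 then ...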

    Read the article

  • Stack Exchange Notifier Chrome Extension [v1.2.9.3 released]

    - by Vladislav Tserman
    About: Stack Exchange Notifier is a handy extension for the Google Chrome browser that displays your current reputation and badges on Stack Exchange sites and notifies you of reputation changes. You will now get notified of comments on your own posts (questions and answers) and of any comments that refer to you by @username in a comment, even if you do not own the post (aka mentions). All Stack Exchange sites are supported.

    Access: Install the extension from the Google Chrome Extension Gallery.

    Platform: Google Chrome browser extension.

    Contact: Created by me (Vladislav Tserman). I'm available at: vladjan (at) gmail.com. Follow Stack Exchange Notifier on Twitter to get notified about news and updates: http://twitter.com/se_notifier

    Code: Written in Java with Google Web Toolkit under Eclipse Helios. Stack Exchange Notifier uses the Stack Exchange API and is powered by Google App Engine for Java.

    Changelog: I will be porting the extension away from the App Engine back-end due to some limitations; new versions of the extension will make direct calls to the Stack Exchange API right from your browser. Please do not expect new versions of the extension any time soon. Sorry. Read more about the limitations here http://stackapps.com/questions/1713 and here http://stackoverflow.com/questions/3949815. Currently, you may sometimes experience some issues using the extension, but most users will have no problems. You may notice too many errors in the logs, but there is nothing I can do about this now. Thanks for using my little app; thanks to all of you it still works in spite of many issues with the API.

    - Version 1.2.9.3 - Thursday, October 14, 2010 - Bug fix release (back-end improvements)
    - Version 1.2.9.2 - Thursday, October 07, 2010 - Bug fix release (a high rate of occasional API errors was noticed, so fixes were added to handle those that could be handled)
    - Version 1.2.9.1 - Tuesday, October 05, 2010 - Mostly a bug fix release with back-end performance improvements. You will now get notified of comments on your own posts (questions and answers) that are not older than 1 year, and of any comments that refer to you by @username in a comment, even if you do not own the post (aka mentions). This is an experimental feature; let me know if you like/need it. A new 'All sites' view displays all websites from the Stack Exchange network (part of a new feature that is not finished yet).
    - Version 1.2.9 - Saturday, September 25, 2010 - Fixes an issue where some users got an empty Account view. When hovering over @Username in the account view, the title now displays '@Username on @SiteName' to make the site name easy to understand.
    - Version 1.2.7 - Wednesday, September 22, 2010 - Fixed an issue with notifications; minor improvements.
    - Version 1.2.5 - Tuesday, September 21, 2010 - Fixed an issue where some characters in the response payload raised an exception when parsing to JSON.
    - v1.2.3 (Sunday, September 19, 2010) - Support for new OpenID providers was added (Yahoo, MyOpenID, AOL); UI improvements; several minor defects were fixed.
    - v1.2.2 (Thursday, September 16, 2010) - New types of notifications added: the extension now notifies you of comments that are directed at you. Comments are expandable, so clicking on a comment title will expand its height to accommodate all available text. UI and error-handling improvements.

    Future: The application is still in beta. I hope you're not having any problems, but if you are, please let me know. Leave your feedback and bug reports in the comments. I'm available at: vladjan (at) gmail.com. I'm working on adding new features. I want to hear from the users and incorporate as much feedback as possible into the extension. Any suggestions for improvements/features to add?

    Read the article

  • HP and Microsoft: That&rsquo;s What Friends Are For ?

    - by andrewbrust
    Today, HP pre-announced the second coming out for its recently acquired Palm webOS mobile operating system. I happen to think webOS is quite good, and when the Palm Pre first came out, I thought it a worthwhile phone. I was worried, though, that the platform would never attract the developer mindshare it needed to be competitive, and that turned out to be the case. But then HP acquired Palm and announced it would be revamping the webOS offering, not only on phones but also on tablets. It later announced that it would also use webOS as an embedded solution on HP printers. The timing of this came shortly after HP had announced it would be producing a "Slate" product running Windows 7. After the Palm deal, HP became vague about whether the Windows-powered slate would actually come out. They did, in fact, bring the Slate 500 to market, but by some accounts, they only built 5000 units.

    Another recent awkward moment between HP and Microsoft: HP withdrew itself from the Windows Home Server ecosystem. That one hurt, as they were the dominant OEM there. But Microsoft's decision to kill Drive Extender had driven away many parties, not just HP. On Wednesday, HP came out with their TouchPad and new phone models. Not a nice thing for Windows Phone 7, but other OEMs are taking a wait-and-see attitude there too, I suppose. There was one more zinger, though, and it was bigger: HP announced they'd be porting webOS to PCs.

    No Windows Phone 7? OK. No Windows Home Server? Whatcha gonna do? But no Windows 7 either? From HP? What comes after that, no ink and toner? Some people think Microsoft's been around too long to be relevant. But HP started out making oscilloscopes! The notion that HP is too cool for Windows school is a bit far-fetched. This is the company that bought EDS. This is the company that bought Compaq. And Compaq was the company that bought Digital Equipment Corporation. Somehow, I don't think the VT220 outclasses Windows PCs.

    What could possibly be going on? My sense is that HP wants to put webOS on PCs that also have Windows, and that people will buy because they have Windows. And for every one of those sold, HP gets to count, technically speaking, another webOS unit in the install base. webOS is really nice, as I said. But being good isn't good enough when you are trying to get market share. The number of units shipped matters. The question is whether counting PCs with webOS installed, but dormant, is helpful to HP's cause. It seems like a funny way to account for market share, and a strange way to treat a big partner in Redmond.

    Read the article

  • Pondering New Technology

    - by MOSSLover
    So I have been standing at the end of a fork for a year, peering around this corner and that, trying to figure out where I fit in. I was so enthusiastic and excited about Silverlight when it came out. It was this amazing, awesome technology that had really cool animation and a webcam/multimedia piece. I thought that if I put my money on Silverlight it would go somewhere; then HTML5 came out and the wind shifted. I realized times were changing.

    I have been working with web technologies since I was 15 years old, playing with HTML and JavaScript, and even CSS back when it first came out. In tech years, 15 years is forever. Things change so quickly and so often. So I guess the question is: where am I heading? The answer is mobile technology. For some reason I was resisting change, and I have no idea why. I guess I really wanted to see more than one player. I didn't quite feel that Microsoft was ready with Windows Phone 7. It was a great start, but it just didn't feel like they went all the way. Now with Windows 8 it feels like they are at version 2.0. It's like hitting Silverlight 2.0, where they finally added the .NET bits. The path is paved, but we don't know where it's leading. Then we had 3.0, 4.0, and 5.0 to mature the technology into what it is today (man, I'm hoping they are going to roll some of the cool bits into other tech if they don't exist).

    Anyway, I'm on board, but I'm not buying a Windows tablet just yet. I was hoping for a 7-inch screen from Apple around $300 or just above, and a 7-inch screen from the MS side around the same price. What I got was the Apple side, but nothing from Microsoft. I was pretty disappointed with the $500 market price on the RT version. I realized Microsoft is close, but not quite where Apple is today. Yes, the devices come with Office, but the sticker is just too much for a first-generation device. If you remember correctly, the first-generation iPad was quite expensive too, so I guess for a first-generation device $500 is pretty good.

    So I guess what I'm trying to say is that I am shifting my focus entirely away from Silverlight and more towards mobile. I will be doing a lot of postings on iOS, Windows 8, and Windows Phone 8 with SharePoint 2013. Since I don't have a tablet, and don't foresee myself buying one just yet, it might be mostly about the phones for right now. I want to do a bunch of testing on various devices of what needs to be done in apps on each device. I might add a bit on porting code from one to the other. I think it's going to be a lot of fun and make things flow a little better for me. In a way it's kind of like Star Trek VI, where they talk about the Undiscovered Country. I'm going to jump forward completely and see where I land.

    Technorati Tags: SharePoint 2013, Mobile, Windows 8, iOS

    Read the article

  • Silverlight Cream for February 17, 2011 -- #1048

    - by Dave Campbell
    In this Issue: Oren Gal, Andrea Boschin (-2-), Kevin Hoffman, Rudi Grobler (-2-, -3-), Michael Crump, Yochay Kiriaty, Peter Kuhn, Loek van den Ouweland, Jeremy Likness, Jesse Liberty, and WindowsPhoneGeek.

    Above the Fold:
    - Silverlight: "Multiple page printing in Silverlight4 - Part 2 - preview before printing" (Oren Gal)
    - WP7: "Windows Phone 7 Tombstoning with MVVM and Sterling" (Jeremy Likness)
    - XNA: "XNA for Silverlight developers: Part 4 - Animation (frame-based)" (Peter Kuhn)

    From SilverlightCream.com:
    - Multiple page printing in Silverlight4 - Part 2 - preview before printing: Oren Gal has part 2 of his Printing with Silverlight 4 series up, and this time he's putting up a preview... how cool is that?
    - Inject ApplicationServices with MEF reloaded: supporting recomposition: Andrea Boschin revisited his Inject ApplicationServices with MEF post because of feedback, and took it from the realm of an interesting example to a useful solution.
    - Windows Phone 7 - Part #5: Panorama and Pivot controls: Andrea Boschin also has part 5 of his WP7 series up at SilverlightShow... want a good demo of both the panorama and the pivot controls? Here it is, all in one tutorial.
    - WP7 for iPhone and Android Developers - Introduction to C#: This should be good... a 12-part series on SilverlightShow by Kevin Hoffman on porting your iPhone/Android app to WP7; this first part is an intro to C#.
    - Balls of Steel: Rudi Grobler discusses the upcoming (?) release of 'Duke Nukem Forever', and has a 'soundboard' for WP7 to celebrate the event... get your Duke Nukem on with these sounds!
    - Moonlight 4 (Preview) is here: Rudi Grobler also has a post up about Novell's release of Moonlight for Silverlight 4! Explanation and links in his post.
    - WP7 Podcasts: Rudi Grobler highlights two WP7 podcasts that are putting out good material... check them out if you haven't already.
    - Having Fun with Coding4Fun's Windows Phone 7 Controls: Michael Crump takes a look at his WP7 app and uses the Coding4Fun project toolset while doing so... getting the tools, setting them up, and consuming them.
    - Windows Phone Silverlight Application Faster Load Time: Yochay Kiriaty has a good long discussion up about how to get faster load times out of your WP7 apps... good useful external links throughout.
    - XNA for Silverlight developers: Part 4 - Animation (frame-based): Peter Kuhn's part 4 of his XNA for Silverlight devs series is up at SilverlightShow and is a great tutorial on frame-based animation.
    - Windows Phone SoundEffect clipping: Loek van den Ouweland has some good information about sound clips on WP7... the solutions aren't always code solutions. Good-to-know info.
    - Windows Phone 7 Tombstoning with MVVM and Sterling: Jeremy Likness discusses tombstoning via MVVM and Sterling... read how Sterling gives you a leg up on the Tombstone express.
    - Video: Reactive Phone Programming For Windows Phone 7: Fitting in nicely with his podcast on reactive programming, Jesse Liberty releases a video on Reactive Programming for WP7.
    - Talking about Data Binding in WP7 | Coding4fun TextBoxBinding helper in depth: WindowsPhoneGeek's latest post walks through WP7 data binding in detail with lots of good external links, then follows up with a discussion of the Coding4Fun binding helpers.

    Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream. Join me @ SilverlightCream | Phoenix Silverlight User Group.

    Technorati Tags: Silverlight, Silverlight 3, Silverlight 4, Windows Phone, MIX10

    Read the article

  • maintaining a growing, diverse codebase with continuous integration

    - by Nate
    I am in need of some help with the philosophy and design of a continuous integration setup. Our current CI setup uses buildbot. When I started out designing it, I inherited (well, not strictly, as I was involved in its design a year earlier) a bespoke CI builder that was tailored to run the entire build at once, overnight. After a while, we decided that this was insufficient and started exploring different CI frameworks, eventually choosing buildbot. One of my goals in transitioning to buildbot (besides getting to enjoy all the whiz-bang extras) was to overcome some of the inadequacies of our bespoke nightly builder.

    Humor me for a moment, and let me explain what I have inherited. The codebase for my company is almost 150 unique C++ Windows applications, each of which has dependencies on one or more of a dozen internal libraries (and many on 3rd-party libraries as well). Some of these libraries are interdependent and have depending applications that (while they have nothing to do with each other) have to be built with the same build of that library. Half of these applications and libraries are considered "legacy" and unportable, and must be built with several distinct configurations of the IBM compiler (for which I have written unique subclasses of Compile); the other half are built with Visual Studio. The code for each compiler is stored in two separate Visual SourceSafe repositories (which I am simply handling using a bunch of ShellCommands, as there is no support for VSS).

    Our original nightly builder simply took down the source for everything and built things in a certain order. There was no way to build only a single application, pick a revision, or group things. It would launch virtual machines to build a number of the applications. It wasn't very robust, it wasn't distributable, and it wasn't terribly extensible. I wanted to overcome all of these limitations in buildbot. The way I did this originally was to create entries for each of the applications we wanted to build (all 150-ish of them), then create triggered schedulers that could build various applications as groups, and then subsume those groups under an overall nightly build scheduler. These could run on dedicated slaves (no more virtual-machine chicanery), and if I wanted I could simply add new slaves. Now, if we want to do a full build out of schedule, it's one click, but we can also build just one application should we so desire.

    There are four weaknesses to this approach, however. The first is our source tree's complex web of dependencies. In order to simplify config maintenance, all builders are generated from a large dictionary; the dependencies are retrieved and built in a not-terribly-robust fashion (namely, keying off of certain things in my build-target dictionary). The second is that each build has between 15 and 21 build steps, which is hard to browse and look at in the web interface, and since there are around 150 columns, it takes forever to load (think 30 seconds to multiple minutes). The third is that we no longer have autodiscovery of build targets (although, as much as one of my coworkers harps on me about this, I don't see what it got us in the first place). Finally, the aforementioned coworker likes to constantly bring up the fact that we can no longer perform a full build on our local machine (though I never saw what that got us either, considering that it took three times as long as the distributed build; I think he is just irrationally afraid of ever breaking the build).

    Now, moving to new development, we are starting to use g++ and Subversion (not porting the old repository, mind you - just for the new stuff). We are also starting to do more unit testing ("more" might give the wrong picture... it's more like any) and integration testing (using Python). I'm having a hard time figuring out how to fit these into my existing configuration. So, where have I gone wrong philosophically here? How can I best proceed forward (with buildbot - it's the only piece of the puzzle I have license to work on) so that my configuration is actually maintainable? How do I address some of my design's weaknesses? What really works in terms of CI strategies for large, (possibly over-)complex codebases?
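
    For reference, a minimal sketch of the dictionary-driven builder generation described above (a master.cfg excerpt using the buildbot 0.8-era API; the target names, commands, and slave name are hypothetical, and transitive dependencies are deliberately not handled):

        from buildbot.process.factory import BuildFactory
        from buildbot.steps.shell import ShellCommand
        from buildbot.config import BuilderConfig

        BUILD_TARGETS = {
            # target:          (direct dependencies, build command)
            'liblegacy':       ((),              ['make', 'liblegacy']),
            'libui':           (('liblegacy',),  ['make', 'libui']),
            'app_frobnicate':  (('libui',),      ['make', 'app_frobnicate']),
        }

        def make_factory(target):
            """One builder per target; dependencies become explicit named steps."""
            deps, command = BUILD_TARGETS[target]
            f = BuildFactory()
            for dep in deps:    # build direct dependencies first, in declared order
                f.addStep(ShellCommand(name='build-%s' % dep,
                                       command=BUILD_TARGETS[dep][1]))
            f.addStep(ShellCommand(name='build-%s' % target, command=command))
            return f

        c['builders'] = [BuilderConfig(name=target, slavenames=['slave1'],
                                       factory=make_factory(target))
                         for target in BUILD_TARGETS]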

    Read the article

  • Launch 7:Windows Phone 7 Style Live Tiles On Android Mobiles

    - by Gopinath
    Android is a great mobile OS, but one thought lingers in the minds of a few Android owners: am I using a cheap iPhone? This is a valid thought for many low-end Android users, as their phones run sluggishly and the user interface of Android looks like an imitation of iOS. When it comes to Windows Phone 7 users, even though their operating system's features are not as great as iPhone/Android, it has a unique user interface: Windows Phone 7's interface is very intuitive and fresh, and its constantly updating Live Tiles show all the required information on the home screen. Android has the best mobile operating system features except for the UI, and Windows Phone 7 has an excellent user interface. How about porting the Windows Phone 7 Tiles interface to an Android? That would be great.

    The Launcher 7 app brings the best of the Windows Phone 7 look and feel to Android. Once the Launcher 7 app is installed and activated, it brings Live Tiles - constantly updating controls that show information - to the Android home screen. Apart from simple and smooth tiles, a handful of customization options are provided: users can change the colour of the tiles, add new tiles, and enable/disable transitions. The reviews on Android Market are on the positive side, with 4.4 stars from 10,000+ reviewers. Here are a few user reviews:

    1. "Does what it says. only issue for me is that the app drawer doesn't rotate. And I would like the UI to rotate when my KB is opened. HTC desire z" - Jonathan
    2. "Works great on atrix. Kudos to developers. Awesome. Though needs: Better notification bar. More stock images of tiles. Better fitting of widgets on tiles" - Manny
    3. "Looks really good like it much more than I thought I would runs real smooth running royal ginger 2.1" - Jay
    4. "Omg amazing i am definetly keeping it as my default best of android and windows" - Devon
    5. "Man! An update every week! Very very responsive developer!" - Andrew

    You can read more reviews on Android Market. There is no doubt that this application is receiving rave reviews. After scanning through the reviews for a while, a few complaints throw light on the negative side: the battery drains a bit faster, and low-end mobiles run a bit sluggish. The application is available in two versions - an ad-supported free version and a $1.41 ad-free version.

    Download Launcher 7 from Android Market

    This article titled, Launch 7: Windows Phone 7 Style Live Tiles On Android Mobiles, was originally published at Tech Dreams. Grab our rss feed or fan us on Facebook to get updates from us.

    Read the article

  • Review&ndash;Build Android and iOS apps in Visual Studio with Nomad

    - by Bill Osuch
    Nomad is a Visual Studio extension that allows you to build apps for both Android and iOS platforms in Visual Studio using HTML5. There is no need to switch between .NET, Java and Objective-C to target different platforms - write your code once in HTML5 and build for all common mobile platforms and tablets. You have access to the native hardware functions (such as camera and GPS) through the PhoneGap library, and UI libraries such as jQuery Mobile allow you to create an impressive UI with minimal work.

    Nomad is still in an early-access beta stage, so the documentation is a bit sparse. In fact, the only documentation is a simple series of steps on how to install the plug-in, set up a project, and build and deploy it. You're going to want to be at least a little familiar with the PhoneGap library and jQuery Mobile to really tap into the power of this.

    The sample project included with the download shows you just how simple it is to create projects in Visual Studio. The sample solution comes with an index.html file containing the HTML5 code, the Cordova (PhoneGap) library, the jQuery libraries, and a jQuery style sheet. The html file is pretty straightforward. If you haven't experimented with jQuery Mobile before, some of the attributes (such as data-role) might be new to you, but some quick Googling will fill in everything you need to know. The first part of the file builds a simple (but attractive) list with some links in it. The second part of the file is where things get interesting, as it taps into the PhoneGap library: for instance, it gets the geolocation position by calling position.coords.latitude and position.coords.longitude and then displays it in a simple span.

    Building is pretty simple, at least for Android (I'm not an iOS developer, so I didn't look at that feature) - just configure the display name, version number, and package ID. There's no need to specify the Android version; Nomad supports 2.2 and later. Enter these bits of information, click the new "Build for Android" button (not the regular Visual Studio Build link...) and you get a dialog box saying that your code is being built by their cloud build service (so no building while away from a WiFi signal, apparently). After a couple of minutes you wind up with a .apk file that can be copied over to your device. Applications built with Nomad for Android currently use a temporary certificate, so you can test the app on your devices but you cannot publish it in the Google Play Store (yet). And I love the "success" dialog box.

    Since Nomad is still in beta, no pricing plans have been announced yet, so I'll be curious to see if this becomes a cost-effective solution to mobile app development. If it is, I may even be tempted to spring for the $99 iOS membership fee! In the meantime, I plan to work on porting some of my apps over to it and seeing how they work. My only quibble at this time is the lack of a centralized documentation location - I'd like to at least see which (if any) features of jQuery and PhoneGap are limited or not supported. Also, some notes on targeting different Android screen sizes would be nice, but it's relatively easy to find jQuery examples out on the InterWebs. Oh well, trial and error!

    You can download the Nomad extension for Visual Studio by going to their web site: www.vsnomad.com.

    Technorati Tags: Android, Nomad
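
    For a feel of the code involved, here is a sketch of what the sample's geolocation section likely boils down to - the standard navigator.geolocation API, which PhoneGap exposes to the embedded web view (the element ID is illustrative):

        <span id="geo">waiting for position...</span>
        <script type="text/javascript">
        navigator.geolocation.getCurrentPosition(
            function (position) {      // success: show latitude, longitude
                document.getElementById('geo').innerHTML =
                    position.coords.latitude + ', ' + position.coords.longitude;
            },
            function (error) {         // failure: log the reason
                console.log('geolocation failed: ' + error.message);
            }
        );
        </script>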

    Read the article

  • Disneyland Inside Out on iPhone and Android

    - by Ryan Cain
    It's hard to believe October was the last time I was over here on my blog. Ironically, after getting the developer phone from Microsoft I have been knee-deep in iPhone programming, and for the past few weeks Android programming again. This time I've spent all my non-working hours programming a fun project for my "other" website, Disneyland Inside Out. Disneyland Inside Out, a vacation planning site for Disneyland in California, has been around in various forms since June 1996. It has always been a place for me to explore new technologies and learn about some of the new trends on the web. I recently migrated the site over to DotNetNuke and have been building out custom modules for DNN. I've also been hacking things together w/ the URLRewrite module in IIS 7.5 to provide strong SEO-optimized URLs. I can't say all that has really stuck within the DNN model of doing things, but it has worked pretty well.

    As part of my learning process, I spent most of the fall bringing Disneyland Inside Out to the iPhone. I will post more details on my development experiences later. But this project gave me a really great opportunity to get a good feel for Objective-C development. After 3 months I actually feel somewhat competent in the language and the iPhone SDK, instead of just floundering around getting things to work. The project also gave me a chance to play with some new frameworks on the iPhone and really dig into the Facebook SDK. I also dug into some of the Gowalla REST APIs as well. We've been live with the app in iTunes for just about 10 days now, and have been sitting in the top 200 free travel apps for the past few days. You can get more info and the direct iTunes download link on our site: Disneyland Inside Out for iPhone.

    Since launching the iPhone version I have gotten back into Android development, porting the Disneyland Inside Out app over to Android. As I said in my first review of iPhone vs. Android, coming from a managed-code background, Android is much easier to get going with. In just about 3 weeks total I will have about 85-90% of the functionality up and running in the Android app; that took probably 1.5-2x that time for iPhone. That isn't a totally fair comparison, as I am much more comfortable w/ Xcode and Objective-C today and can get some of the basic stuff done much faster than I could in the fall. Though I'd say some of the hardest code to debug is still the null-pointer issues on objects that were dealloc'd too early in Objective-C. This isn't too bad with NSZombieEnabled for synchronous code, but when you have a lot of async, which my app does, it can be hairy at times to track down exactly what was causing the issue.

    I will post more details later, as I am trying to wrap up a beta of the Android app today. But in the meantime, if you have an iPhone, iPod Touch or iPad, head on over to the site and take a look at my app.

    Read the article

  • Head in the Clouds

    - by Tony Davis
    We're just past the second anniversary of the launch of Windows Azure. A couple of years' experience with Azure in the industry has provided some obvious success stories, but has deflated some of the initial marketing hyperbole. As a general principle, Azure seems to work well in providing a service-oriented architecture for services in enterprises that suffer wide fluctuations in demand. Instead of being obliged to provide hardware sufficient for the occasional peaks in demand, one can hire capacity only when it is needed, and the cost of hosting an application is no longer a capital cost. It enables companies to avoid having to scale out hardware for peak periods only to see it underused for the rest of the time. A customer-facing application such as a concert ticketing system, which suffers high demand in short, predictable bursts of activity, is a great example of an application that would work well in Azure.

    However, moving existing applications to Azure isn't something to be done on impulse. Unless your application is .NET-based and consists of 'stateless' components that communicate via queues, you are probably in for a lot of redevelopment work. It makes most sense for IT departments who are already deep in this .NET mindset, and who also want 'grown-up' methods of staging, testing, and deployment. Azure fits well with this culture and offers, as a bonus, good Visual Studio integration. The most commonly stated barrier to porting these applications to Azure is the problem of reconciling the use of the cloud with legislation for data privacy and security. Putting databases in the cloud is a sticky issue for many and impossible for some, due to compliance and security issues, the need for direct control over data, and so on.

    In the face of feedback from the early adopters of Azure, Microsoft has broadened the architectural choices to cater for a wide range of requirements. As well as SQL Azure Database (SAD), Azure storage, and the unstructured 'BLOB and Entity-Attribute-Value' NoSQL storage alternative (which equates more closely with folders and files than a database), Windows Azure offers a wide range of storage options, including the use of services such as OData: developers who are programming for Windows Azure can simply choose the one most appropriate for their needs. Secondly, and crucially, the Windows Azure architecture allows you the freedom to produce hybrid applications, where only those parts that need cloud-based hosting are deployed to Azure, whereas those parts that must unavoidably be hosted in a corporate datacenter can stay there. By using a hybrid architecture, it will seldom, if ever, be necessary to move an entire application to the cloud along with personal and financial data. For example, we could port to Azure only those parts of our ticketing application that capture and process ticket orders; once an order is captured, the financial side can be processed in our own data center.

    In short, Windows Azure seems to be a very effective way of providing services that are subject to wide but predictable fluctuations in demand. Have you come to the same conclusions, or do you think I've got it wrong? If you've had experience with Azure, would you recommend it? It would be great to hear from you.

    Cheers, Tony.

    Read the article

  • Identity in .NET 4.5&ndash;Part 3: (Breaking) changes

    - by Your DisplayName here!
    I recently started porting a private build of Thinktecture.IdentityModel to .NET 4.5 and noticed a number of changes. The good news is that I can delete large parts of my library because many features are now in the box. Along the way I found some other nice additions:

    - ClaimsIdentity now has methods to query the claims collection, e.g. HasClaim(), FindFirst(), FindAll().
    - ClaimsPrincipal has those methods as well, but they work across all contained identities. Nice!
    - ClaimsPrincipal.Current retrieves the ClaimsPrincipal from Thread.CurrentPrincipal. Combined with the above changes, no casting is necessary anymore.
    - SecurityTokenHandler now has read and write methods that work directly with strings. This makes it much easier to deal with non-XML tokens like SWT or JWT.
    - A new session security token handler uses the ASP.NET machine key to protect the cookie. This makes it easier to get started in web-farm scenarios.
    - No need for a custom service host factory or the federation behavior anymore: WCF can be switched into "WIF mode" with the useIdentityConfiguration switch (odd name, though).
    - Tooling has become better, and the new test STS makes it very easy to get started.

    On the other hand - and that was kind of expected - to bring claims into the core framework, there are also some breaking changes for WIF code. If you want to migrate (and I would recommend that), most changes to your code are mechanical. The following is a brain dump of the changes I encountered:

    - The assembly Microsoft.IdentityModel is gone. The new functionality is now in mscorlib, System.IdentityModel(.Services) and System.ServiceModel. All the namespaces have changed as well.
    - No IClaimsPrincipal and IClaimsIdentity anymore.
    - The configuration section has been split into <system.identityModel /> and <system.identityModel.services />. The WCF configuration story has changed as well.
    - Claim.ClaimType is now Claim.Type.
    - ClaimCollection is now IEnumerable<Claim>.
    - IsSessionMode is now IsReferenceMode.
    - Bootstrap token handling is different now.
    - ClaimsPrincipalHttpModule is gone. This is not really needed anymore, apart from maybe claims transformation (see here).
    - Various factory methods on ClaimsPrincipal are gone (e.g. ClaimsPrincipal.CreateFromIdentity()).
    - SecurityTokenHandler.ValidateToken now returns a ReadOnlyCollection<ClaimsIdentity>.
    - Some lower-level helper classes are gone or internal now (e.g. KeyGenerator).
    - The WCF WS-Trust bindings are gone. I think this is a pity; they were *really* useful when doing work with WSTrustChannelFactory.

    Since WIF is part of the Windows operating system and also supported in future versions of .NET, there is no urgent need to migrate to the 4.5 claims model. But obviously, going forward, at some point you will want to make the move.
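
    A quick sketch of the new query helpers in System.Security.Claims (the claim types are chosen for illustration):

        using System.Security.Claims;

        // ClaimsPrincipal.Current wraps Thread.CurrentPrincipal - no cast needed.
        ClaimsPrincipal principal = ClaimsPrincipal.Current;

        if (principal.HasClaim(ClaimTypes.Role, "admin"))
        {
            Claim email = principal.FindFirst(ClaimTypes.Email);  // null if absent
            // FindAll(ClaimTypes.Role) would enumerate matches across
            // all contained identities.
        }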

    Read the article

  • 2010 Collaboration Summit Impressions

    - by Elena Zannoni
    It's a bit late, but there you have it anyway. April 14 to 16 I attended the Linux Foundation Collaboration Summit in SFO. I was running two tracks, one on tracing and one on tools. You can see the tracks and the slides here: http://events.linuxfoundation.org/events/collaboration-summit/slides

    I was pretty busy both days: Thursday with a whole-day tracing track, Friday with a half-day toolchain track. The sessions were well attended; the rooms were full, with people spilling into the hallways. Some new things were presented, like Kernelshark, by Steve Rostedt, a GUI (yes, believe it or not, a GUI) written in GTK. It is very nice, showing a timeline for traced kernel events, and you can zoom in and filter at will. It works on the latest kernels and requires some new things/fixes in GTK. I don't recall exactly what version of GTK, though. Dominique Toupin from Ericsson presented something about user requirements for tracing - mostly, though, about who's who in the embedded world, and Eclipse. Masami and Mathieu presented an update on their work; see their slides. The interesting thing to me was of course the new version of uprobes w/o underlying utrace, presented by Jim Keniston. At the end of the session we had a discussion about the future of utrace. Roland wasn't there, but Tom Tromey (also from Red Hat) collected the feedback. Basically we are at a standstill now that utrace has been rejected yet again. There wasn't much advice that anybody could give, except that, jokingly, we decided the only way in is to make it part of perf events. There needs to be another refactoring, but most of all, the "killer app" that would be enabled by utrace hasn't materialized yet. We think that having a good debugging story on Linux is enough of a killer app - for instance, allowing multiple tracers, and not relying on SIGCHLD, etc. I think this wasn't completely clear to the kernel community. Trying to achieve debugging via a gdb stub inside the kernel that interfaces to utrace and is controlled via the gdb remote protocol also lost its appeal (thankfully, since the gdb remote protocol is archaic). Somebody would have to be creative in how to submit utrace. It doesn't have to be called utrace (it was really a random choice, for lack of a letter that was not already used in front of the word "trace"). So basically, I think the ideas behind utrace are sound, and the necessity of a new interface is acknowledged. But I believe the integration/submission process with the kernel folks has to restart from scratch, clean slate. We'll see. There are many conferences and meetings coming up in the near future where things can be discussed further.

    On the second day, Friday, we had the tools talks. It was interesting to observe the more "kernel"-oriented people's behavior towards the gcc etc. community. The first talk was by Mark Mitchell, about gcc and its new plugin architecture. After that, Paolo talked about the new C++1x standard, which will be finalized in 2011. Many features are already implemented in the libstdc++ library and gcc, and are usable today. We had a few minutes (really, the half-day track was quite short) where Bradley Kuhn from the Software Freedom Law Center explained the GPLv3 exception for gcc (due to the new gcc plugin architecture and the availability of the intermediate results from the compilation, which is a new thing). I will not try to explain, but basically you cannot take the result of the preprocessing and then use that in your own proprietary compiler.

    After that, we had a talk by Ian Taylor about the new gold linker. One good thing in that area is that they are trying to make gold the new default linker (for instance, Fedora will use gold as the distro linker). However, gold is very different from binutils' old linker: it doesn't use a linker script, for instance. The kernel has been linked with gold many times as an exercise (the groundwork was done by Kris Van Hees), but this needs to be constantly tested/monitored because the kernel linker script is very complex and uses esoteric features (Wenji is now monitoring that each kernel RC can be built with gold). It was positive that people are now aware of gold and the need for it to be ported to more architectures. It seems that the porting is very easy, with little arch-dependent code. Finally, Tom Tromey presented on gdb and the Archer project. Archer is a development branch of gdb mostly done by Red Hat, where they are focusing on better C++ printing, C++ expression parsing, and plugins. The Archer work is merged regularly into the gdb mainline.

    In general it was a good conference. I did miss most of the first day, because that's when I flew in, but I caught a couple of talks. Nothing earth-shattering, except for Google giving each registered person a free Android phone. Yey.

    Read the article

  • Override an IOCTL Handler in PQOAL

    - by Kate Moss' Big Fan
    When porting or creating a BSP for a new platform, we often need to make changes to OEMIoControl or, more specifically, to a HAL IOCTL handler. Since Microsoft introduced PQOAL in CE 5.0, more and more BSPs leverage PQOAL to simplify the OAL, and we no longer define OEMIoControl directly. It is somewhat analogous to migrating from the pure Windows SDK to MFC: people start defining those MFC handlers and forget about WinMain and the big message loop. If you ever take a look at the interface between the OAL and the kernel, PUBLIC\COMMON\OAK\INC\oemglobal.h, pfnOEMIoctl is still there, just as the entry point of a Windows program has been WinMain since day one. (For those who may argue that pfnOEMIoctl is not OEMIoControl, I encourage you to dig into PRIVATE\WINCEOS\COREOS\NK\OEMMAIN\oemglobal.c, which initializes pfnOEMIoctl to OEMIoControl. The interface just splits the OAL and kernel, which are no longer linked into one executable file in CE 6; all of the function signatures are still identical.)

    So let's trace into PQOAL to see how it implements OEMIoControl and how we can override an IOCTL handler we are interested in. The first thing to know is the entry point (just like finding the WinMain in MFC): OEMIoControl is defined in PLATFORM\COMMON\SRC\COMMON\IOCTL\ioctl.c. Basically, it does nothing special but scan a pre-defined IOCTL table, g_oalIoCtlTable, and then execute the handler (the highlighted part below). Other than that, there is just error handling and the use of a critical section to serialize the function.

        BOOL OEMIoControl(
            DWORD code, VOID *pInBuffer, DWORD inSize, VOID *pOutBuffer, DWORD outSize,
            DWORD *pOutSize
        ) {
            BOOL rc = FALSE;
            UINT32 i;
        ...
            // Search the IOCTL table for the requested code.
            for (i = 0; g_oalIoCtlTable[i].pfnHandler != NULL; i++) {
                if (g_oalIoCtlTable[i].code == code) break;
            }

            // Indicate unsupported code
            if (g_oalIoCtlTable[i].pfnHandler == NULL) {
                NKSetLastError(ERROR_NOT_SUPPORTED);
                OALMSG(OAL_IOCTL, (
                    L"OEMIoControl: Unsupported Code 0x%x - device 0x%04x func %d\r\n",
                    code, code >> 16, (code >> 2)&0x0FFF
                ));
                goto cleanUp;
            }

            // Take critical section if required (after postinit & no flag)
            if (
                g_ioctlState.postInit &&
                (g_oalIoCtlTable[i].flags & OAL_IOCTL_FLAG_NOCS) == 0
            ) {
                // Take critical section
                EnterCriticalSection(&g_ioctlState.cs);
            }

            // Execute the handler
            rc = g_oalIoCtlTable[i].pfnHandler(
                code, pInBuffer, inSize, pOutBuffer, outSize, pOutSize
            );

            // Release critical section if it was taken above
            if (
                g_ioctlState.postInit &&
                (g_oalIoCtlTable[i].flags & OAL_IOCTL_FLAG_NOCS) == 0
            ) {
                // Release critical section
                LeaveCriticalSection(&g_ioctlState.cs);
            }

        cleanUp:
            OALMSG(OAL_IOCTL&&OAL_FUNC, (L"-OEMIoControl(rc = %d)\r\n", rc ));
            return rc;
        }

    Where is g_oalIoCtlTable? It is defined in your BSP. Let's use the DeviceEmulator BSP as an example. PLATFORM\DEVICEEMULATOR\SRC\OAL\OALLIB\ioctl.c defines the table as

        const OAL_IOCTL_HANDLER g_oalIoCtlTable[] = {
        #include "ioctl_tab.h"
        };

    That leads to PLATFORM\DEVICEEMULATOR\SRC\INC\ioctl_tab.h, which defines some of the IOCTL handlers; others are defined in oal_ioctl_tab.h under PLATFORM\COMMON\SRC\INC\. Finally, we have the full table body! (Just like tracing MFC: always jumping back and forth.)

    The format of the table is very straightforward: IOCTL code, flags, and handler function.

        // IOCTL CODE,                          Flags   Handler Function
        //------------------------------------------------------------------------------
        { IOCTL_HAL_INITREGISTRY,                   0,  OALIoCtlHalInitRegistry     },
        { IOCTL_HAL_INIT_RTC,                       0,  OALIoCtlHalInitRTC          },
        { IOCTL_HAL_REBOOT,                         0,  OALIoCtlHalReboot           },

    PQOAL scans through the table until it finds a matching IOCTL code, then invokes the handler function. Since it scans the table from the top, if we define TWO handlers with the same IOCTL code, the first one is always invoked, with no exception. Now back to PLATFORM\DEVICEEMULATOR\SRC\INC\ioctl_tab.h, with the following table:

        { IOCTL_HAL_INITREGISTRY,                   0,  OALIoCtlDeviceEmulatorHalInitRegistry     },
        ...
        #include <oal_ioctl_tab.h>

    Note that the IOCTL_HAL_INITREGISTRY handler is defined in both the BSP's local ioctl_tab.h and the common oal_ioctl_tab.h, but because the BSP's local handler comes before "#include <oal_ioctl_tab.h>", we know OALIoCtlDeviceEmulatorHalInitRegistry always gets called. In this example, the DeviceEmulator BSP overrides the IOCTL_HAL_INITREGISTRY handler from OALIoCtlHalInitRegistry to OALIoCtlDeviceEmulatorHalInitRegistry by manipulating the g_oalIoCtlTable table. (In some points of view, it is similar to the message map in MFC.)

    Please be aware: when you override an IOCTL handler in PQOAL, you may want to clone the original implementation into your BSP and change it to meet your needs. That is the recommended approach and saves you redundant work, but remember to rename the handler function (just as the DeviceEmulator changes the name of OALIoCtlHalInitRegistry to OALIoCtlDeviceEmulatorHalInitRegistry). If you don't change the name, the linker may not be happy (due to the name conflict), and, more importantly, by using a different handler name you can always redirect the handler back to the original one. (It is like the OOP concept of calling a function in the base class; still not so clear? I am going to show you soon!) OALIoCtlDeviceEmulatorHalInitRegistry sets up DeviceEmulator-specific registry settings and, in the end, if everything goes well, calls OALIoCtlHalInitRegistry (PLATFORM\COMMON\SRC\COMMON\IOCTL\reginit.c) to do the rest:

            if(fOk) {
                fOk = OALIoCtlHalInitRegistry(code, pInpBuffer, inpSize, pOutBuffer,
                    outSize, pOutSize);
            }

    Now you get the picture. Whenever you want to override an IOCTL handler that is implemented in PQOAL, just:

    1. Clone the handler function into your BSP as a template.
    2. Rename the handler function, and change the name in the IOCTL table header file that maps the IOCTL code to the function.
    3. Implement your IOCTL handler; whenever you need to redirect it back, just call the original handler function.

    It is the standard way of implementing a custom IOCTL, and the one most Microsoft developers prefer. The mapping of IOCTL routine to IOCTL code is platform specific - you control the header file that does that mapping.
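
    Putting it together, a hedged sketch of the override pattern (the platform name and handler body are hypothetical; the signature and the fallback call mirror the DeviceEmulator example above):

        // In your BSP's OAL library:
        BOOL OALIoCtlMyPlatformHalInitRegistry(
            UINT32 code, VOID *pInpBuffer, UINT32 inpSize,
            VOID *pOutBuffer, UINT32 outSize, UINT32 *pOutSize)
        {
            BOOL fOk = TRUE;

            // ... platform-specific registry setup goes here ...

            if (fOk) {
                // Redirect back to the stock PQOAL handler for the common work.
                fOk = OALIoCtlHalInitRegistry(code, pInpBuffer, inpSize,
                                              pOutBuffer, outSize, pOutSize);
            }
            return fOk;
        }

        // In your BSP's ioctl_tab.h, before the common include so it wins the scan:
        // { IOCTL_HAL_INITREGISTRY,  0,  OALIoCtlMyPlatformHalInitRegistry },
        // #include <oal_ioctl_tab.h>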


  • HLSL, Program pixel shader with different Texture2D downscaling algorithms

    - by Kaminari
    I'm trying to port some image interpolation algorithms into HLSL code; for now I've got:

float2 texSize;
float scale;
int method;

sampler TextureSampler : register(s0);

float4 PixelShader(float4 color : COLOR0, float2 texCoord : TEXCOORD0) : COLOR0
{
    float2 newTexSize = texSize * scale;
    float4 tex2;
    if (texCoord[0] * texSize[0] > newTexSize[0] || texCoord[1] * texSize[1] > newTexSize[1])
    {
        // Outside the scaled region: output transparent black.
        tex2 = float4(0, 0, 0, 0);
    }
    else
    {
        if (method == 0)
        {
            // Method 0: plain point sample at the unscaled coordinate.
            tex2 = tex2D(TextureSampler, float2(texCoord[0]/scale, texCoord[1]/scale));
        }
        else
        {
            // Method 1: average the eight neighbours around the unscaled coordinate.
            float2 step = float2(1/texSize[0], 1/texSize[1]);
            float4 px1 = tex2D(TextureSampler, float2(texCoord[0]/scale-step[0], texCoord[1]/scale-step[1]));
            float4 px2 = tex2D(TextureSampler, float2(texCoord[0]/scale        , texCoord[1]/scale-step[1]));
            float4 px3 = tex2D(TextureSampler, float2(texCoord[0]/scale+step[0], texCoord[1]/scale-step[1]));
            float4 px4 = tex2D(TextureSampler, float2(texCoord[0]/scale-step[0], texCoord[1]/scale        ));
            float4 px5 = tex2D(TextureSampler, float2(texCoord[0]/scale+step[0], texCoord[1]/scale        ));
            float4 px6 = tex2D(TextureSampler, float2(texCoord[0]/scale-step[0], texCoord[1]/scale+step[1]));
            float4 px7 = tex2D(TextureSampler, float2(texCoord[0]/scale        , texCoord[1]/scale+step[1]));
            float4 px8 = tex2D(TextureSampler, float2(texCoord[0]/scale+step[0], texCoord[1]/scale+step[1]));
            tex2 = (px1+px2+px3+px4+px5+px6+px7+px8)/8;
            tex2.a = 1;
        }
    }
    return tex2;
}

technique Resample
{
    pass Pass1
    {
        PixelShader = compile ps_2_0 PixelShader();
    }
}

    The problem is that programming a pixel shader requires a different approach, because we don't have control of the current position, only the 'inner' part of the actual loop over the pixels. I've been googling for about a whole day and found no open source library with scaling algorithms usable in such a loop. Is there such a library from which I could port some methods? I found http://www.codeproject.com/KB/GDI-plus/imgresizoutperfgdiplus.aspx but I really don't understand his approach to the problem, and porting it will be a pain in the ... Wikipedia gives a mathematical description. So my question is: where can I find an easy-to-port open source graphics library which includes simple scaling algorithms? Of course, if such a library even exists :)
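    For comparison, here is a plain C sketch of the kind of per-pixel loop the shader's second branch approximates: a hypothetical box-filter downscale for a grayscale image, not taken from any particular library.

/* Box-filter downscale by an integer factor n: each destination pixel
   is the average of the corresponding n*n block in the source.
   Grayscale, 8-bit; illustrative only. */
void box_downscale(const unsigned char *src, int src_w, int src_h,
                   unsigned char *dst, int n)
{
    int dst_w = src_w / n;
    int dst_h = src_h / n;
    for (int y = 0; y < dst_h; y++) {
        for (int x = 0; x < dst_w; x++) {
            int sum = 0;
            for (int j = 0; j < n; j++)        /* accumulate the n*n block */
                for (int i = 0; i < n; i++)
                    sum += src[(y * n + j) * src_w + (x * n + i)];
            dst[y * dst_w + x] = (unsigned char)(sum / (n * n));
        }
    }
}

    In HLSL the outer two loops disappear, since the GPU invokes the shader once per destination pixel, so only the inner accumulation has to be expressed. That is exactly the restructuring the question is about.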


  • XML: Process large data

    - by Atmocreations
    Hello. What XML parser do you recommend for the following purpose: the XML file (formatted, containing whitespace) is around 800 MB. It mostly contains three types of tag (let's call them n, w and r). They have an attribute called id which I'd have to search for, as fast as possible. Removing attributes I don't need could save around 30%, maybe a bit more. First part, for optimizing the second part: is there any good tool (command line, Linux and Windows if possible) to easily remove unused attributes in certain tags? I know that XSLT could be used. Or are there any easy alternatives? Also, I could split it into three files, one for each tag, to gain speed for later parsing... Speed is not too important for this preparation of the data; of course it would be nice if it took minutes rather than hours. Second part: once I have the data prepared, be it shortened or not, I should be able to search for the id attribute I was mentioning, this being time-critical. Estimates using wc -l tell me that there are around 3M n-tags and around 418K w-tags. The latter can contain up to approximately 20 subtags each; w-tags also contain some, but those would be stripped away. "All I have to do" is navigate between tags containing certain id attributes. Some tags have references to other ids, therefore giving me a tree, maybe even a graph. The original data is big (as mentioned), but the result set shouldn't be too big, as I only have to pick out certain elements. Now the question: what XML parsing library should I use for this kind of processing? I would use Java 6 in the first instance, with porting to BlackBerry in mind. Might it be useful to just create a flat file indexing the ids and pointing to an offset in the file (see the sketch below)? Is it even necessary to do the optimizations mentioned in the first part? Or are there parsers known to be just as fast on the original data? A little note: to test, I took the id on the very last line of the file and searched for it using grep. This took around a minute on a Core 2 Duo. What happens if the file grows even bigger, let's say 5 GB? I appreciate any notes or recommendations. Thank you all very much in advance, and regards
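    As a rough illustration of the flat-file offset index idea from the question, a single streaming pass over the file could record the byte offset of each id. Below is a hedged sketch in C; it assumes at most one literal id="..." per line and lines shorter than the buffer, which the "formatted, containing whitespace" file plausibly satisfies. In the Java 6 setting, the same idea would use a streaming (StAX/pull) parser instead.

#include <stdio.h>
#include <string.h>

/* Sketch: print each id value with the byte offset of its line, so a
   later lookup can fseek() straight to it instead of re-scanning 800 MB.
   Hypothetical helper - not from any particular XML library. */
int main(int argc, char **argv)
{
    if (argc < 2) return 1;
    FILE *f = fopen(argv[1], "rb");
    if (!f) return 1;

    char line[65536];
    for (;;) {
        long offset = ftell(f);            /* byte offset of the line start */
        if (!fgets(line, sizeof line, f)) break;
        const char *p = strstr(line, " id=\"");
        if (p) {
            const char *start = p + 5;     /* skip past ' id="' */
            const char *end = strchr(start, '"');
            if (end)
                printf("%.*s\t%ld\n", (int)(end - start), start, offset);
        }
    }
    fclose(f);
    return 0;
}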


  • OSX/Darwin unresolved symbols when linking functions from <math.h>

    - by tbone
    I'm in the process of porting a large'ish (~1M LOC) project from a Windows/Visual Studio environment to other platforms, the first of which happens to be Mac OS X. Originally the project was configured as Visual Studio solutions and projects, but now I'm using (the excellent) Premake (http://industriousone.com/premake) to generate project files for multiple platforms (VS, XCode, GMake). I configured, ported and built the first few projects without any significant problems, but having ported the math lib, I ran into this weird linking error that I haven't been able to resolve: any function used from math.h fails to link (causing unresolved symbols). For reference, I'm using Premake v4.2.1 to generate projects for XCode v3.2.1, which is building using gcc v4.2 for the x86_64 architecture. (All this on 64-bit Snow Leopard.) I've tried to persuade gcc to link and build everything against a 'known' SDK by adding -isysroot /Developer/SDKs/MacOSX10.6.sdk -mmacosx-version-min=10.6 to the build command line. Under normal circumstances, adding -lm should take care of this; however, on Darwin those math libs are included in libSystem, which, as far as I can tell, is implicitly linked by gcc/ld. I've tried creating a dummy project from within XCode which just runs: float f = log2(2.0)+log2f(3.f)+log1p(1.1)+log1pf(1.2f)+sin(8.0); std::cout << f << std::endl; and as expected, this builds just fine. However, if I put the same code inside the Premake generated project, all those math functions end up unresolved. Comparing the linking command from the 'native' XCode project with my generated XCode project, they look practically identical (except that my generated project links other libs as well). 'Native' project: /Developer/usr/bin/g++-4.2 -arch x86_64 -dynamiclib -isysroot /Developer/SDKs/MacOSX10.6.sdk -Lsomepath -Fsomepath -filelist somefile -install_name somename -mmacosx-version-min=10.6 -single_module -compatibility_version 1 -current_version 1 -o somename Generated project: /Developer/usr/bin/g++-4.2 -arch x86_64 -dynamiclib -Lsomepath -Fsomepath -filelist somefile -install_name somename -isysroot /Developer/SDKs/MacOSX10.6.sdk -mmacosx-version-min=10.6 somelib.a somelib2.a somelib.dylib somelib2.dylib -single_module -compatibility_version 1 -current_version 1 -o somename Any help or hints about how to proceed would be most appreciated. Are there any gcc flags or other tools that can help me resolve this?
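    For what it's worth, here is a standalone plain-C translation of the test snippet from the question, useful for isolating the toolchain from the project files. The file name and exact compile line are assumptions; on Darwin the math functions live in libSystem, so no explicit -lm should be needed:

#include <math.h>
#include <stdio.h>

/* Minimal link test: if this compiles and links, the toolchain is
   resolving the math symbols from libSystem as expected. */
int main(void)
{
    float f = log2(2.0) + log2f(3.f) + log1p(1.1) + log1pf(1.2f) + sin(8.0);
    printf("%f\n", f);
    return 0;
}

    gcc -arch x86_64 -isysroot /Developer/SDKs/MacOSX10.6.sdk \
        -mmacosx-version-min=10.6 mathtest.c -o mathtest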

