Search Results

Search found 520 results on 21 pages for 'porting'.

Page 17 of 21

  • What's all this fuss about?

    - by atch
    Hi guys, to start, I don't intend to upset anyone who uses or likes a language other than C++. I mention this because on another forum, every time I asked a question of a similar nature, I was almost always accused of trying to start a flame war. With that said, here is my question: I don't understand why the creators of Java and C# thought, and still think, that having a VM and compiling source code to bytecode instead of native code is any advantage in the long run. Why is compiling a function the first time it is invoked an advantage? And what is the story with "write once, run everywhere"? In theory, writing something once and running it everywhere sounds great, but I know for a fact that in practice it doesn't work out nearly so well; it is more like "write once, test everywhere". And why would I prefer something to be compiled at runtime instead of at compile time? If installation took even an hour longer but all compilation were done once and for all, with nothing compiled after that, I would prefer it. I don't really know how this works in the real world (I'm a student and have never worked in the IT business), but if I have a working program written in C++ for Windows and want to move it to another platform, wouldn't I just take my source code and compile it on the desired machine? In other words, isn't this really a problem of having a compiler that can compile the source code on different machines (as far as I'm concerned there is just one C++, and the source code will look identical on every machine)? And last but not least, how many programs are there that are really worth porting? I can personally think of three, maybe four.

    Read the article

  • UDK SettingsScene variable

    - by Ricket
    I am in the process of porting a script from UT3 to UDK. The script is for class ZOUIFrontEnd_MainMenu extends UTUIFrontEnd_MainMenu. I'm getting the following compiler error:

        C:\UDK\UDK-2010-03\Development\Src\FixIt\Classes\ZOUIFrontEnd_MainMenu.uc(18) : Error, Bad or missing expression in Call to 'OpenSceneByName', parameter 1

    The referenced line is as follows:

        OpenSceneByName(SettingsScene);

    Okay, so I figured the OpenSceneByName function changed in UDK. Tracing the path of inheritance all the way up to UDKUIScene.uc, I found the definition:

        function UIScene OpenSceneByName(string SceneToOpen, bool bSkipAnimation=false, optional delegate<OnSceneActivated> SceneDelegate=None)

    Assuming SettingsScene is a string, everything looks fine, right? So I figured I would find SettingsScene to make sure it's a string. A search for "SettingsScene" in all files turned up only one other line, UTGameReplicationInfo.uc line 334:

        UTPC.OpenUIScene(class'UTUIFrontEnd_MainMenu'.default.SettingsScene);

    But UTUIFrontEnd_MainMenu doesn't have a SettingsScene variable in it! Just to check, I changed my line of code to match:

        OpenSceneByName(class'UTUIFrontEnd_MainMenu'.default.SettingsScene);

    And indeed, I now get the following compile error:

        C:\UDK\UDK-2010-03\Development\Src\FixIt\Classes\ZOUIFrontEnd_MainMenu.uc(18) : Error, Unknown Property 'SettingsScene' in 'Class UTGame.UTUIFrontEnd_MainMenu'

    How does UTGameReplicationInfo.uc successfully compile? Where is the SettingsScene variable?

    Read the article

  • Rails Autocompletion Issue - Rails 1.2.3 to 2.3.5

    - by Grant Sayer
    I have an issue with Rails autocompletion in some code that I've inherited from an old Rails 1.2.3 project which I'm porting to Rails 2.3.5. The issue revolves around JavaScript execution within the auto_complete helper's :after_update_element. The scenario is: a user is presented with a popup form with a number of fields. As they enter text in the first field, the auto_complete AJAX call occurs, returning a result plus a series of other HTML data wrapped in <div>s, so that the after_update_element call can iterate over the other data and fill in the remaining fields. The issue lies with the extraction of the other fields, which works in IE but fails in Firefox. Here is the code:

        <%= text_field_with_auto_complete :item, :product_code,
              {:value => ""},
              {:size => 40, :class => "input-text", :tabindex => 6,
               :select => 'code',
               :with => "element.name + '=' + escape(element.value) + '&supplier_id=' + $('item_supplier_id').value",
               :after_update_element => "function (ele, value) {
                 $('item_supplier_id').value = Utilities.extract_value(value, 'supplier_id');
                 $('item_supplied_size').value = Utilities.extract_value(value, 'size')}"} %>

    The Utilities function is designed to grab the fields from the string of values and looks like:

        //
        // Extract a particular set of data from the autocomplete actions
        //
        Utilities.extract_value = function (value, className) {
            var result;
            var elements = document.getElementsByClassName(className, value);
            if (elements && elements.length == 1) {
                result = elements[0].innerHTML.unescapeHTML();
            }
            return result;
        };

    In Firefox, the value of result is undefined.

    Read the article

  • 48-bit bitwise operations in Javascript?

    - by randomhelp
    I've been given the task of porting Java's java.util.Random to JavaScript, and I've run across a huge performance hit/inaccuracy using bitwise operators in JavaScript on sufficiently large numbers. Some cursory research states that bitwise operators in JavaScript are inherently slow, because internally it appears that JavaScript casts all of its double values into signed 32-bit integers to do the bitwise operations (see https://developer.mozilla.org/En/Core_JavaScript_1.5_Reference/Operators/Bitwise_Operators for more on this). Because of this, I can't do a direct port of the Java random number generator, and I need to get the same numeric results as java.util.Random. Writing something like

        this.next = function(bits) {
            if (!bits) {
                bits = 48;
            }
            this.seed = (this.seed * 25214903917 + 11) & ((1 << 48) - 1);
            return this.seed >>> (48 - bits);
        };

    (which is an almost-direct port of the java.util.Random code) won't work properly, since JavaScript can't do bitwise operations on an integer that size. I've figured out that I can just make a seedable random number generator in 32-bit space using the Lehmer algorithm, but the trick is that I need to get the same values as I would with java.util.Random. What should I do to make a faster, functional port?
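
    One way to reproduce the 48-bit arithmetic without large bitwise operands is to hold the state as two 24-bit halves and update them with plain arithmetic, which stays exact in doubles below 2^53. The following is a rough, untested sketch of that idea (the Random48 name and the hi/lo split are purely for illustration); note that it skips java.util.Random's initial seed scramble in setSeed() and returns the top bits as an unsigned value rather than Java's signed 32-bit int, so those differences would still need handling.

        // Sketch: emulate java.util.Random's 48-bit LCG with two 24-bit halves.
        // All products below stay under 2^53, so they are exact in doubles.
        function Random48(seed) {
            // split a non-negative seed < 2^48 into hi/lo 24-bit halves
            this.seedHi = Math.floor(seed / 0x1000000) % 0x1000000;
            this.seedLo = seed % 0x1000000;
        }

        Random48.prototype.next = function (bits) {
            // multiplier 0x5DEECE66D (= 25214903917) split into 24-bit halves
            var mHi = 0x5DE, mLo = 0xECE66D, add = 0xB;

            // low 48 bits of (seed * m + add), computed without 32-bit truncation
            var loProd = this.seedLo * mLo + add;
            var carry  = Math.floor(loProd / 0x1000000);   // bits 24..47 of loProd
            var hiProd = this.seedHi * mLo + this.seedLo * mHi + carry;

            this.seedLo = loProd % 0x1000000;
            this.seedHi = hiProd % 0x1000000;

            // top `bits` bits of the 48-bit state, as an unsigned value
            var seed = this.seedHi * 0x1000000 + this.seedLo;
            return Math.floor(seed / Math.pow(2, 48 - bits));
        };

    Whether this ends up faster than a 32-bit Lehmer generator would need profiling, but it at least reproduces the 48-bit state sequence of the Java generator.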

    Read the article

  • General question about DirectShow.NET, DirectShow and Windows Media Format

    - by Paul Andrews
    I searched and googled for an answer but couldn't find one. Basically I'm developing a webcam/audio streaming application which should capture audio and video from a PC (USB webcam/microphone) and send them to a receiving server. What the server will do with that is another story, and phase two (which I'm skipping for now). I wrote some code using DirectShow and Windows Media Format, and it worked great for capturing audio/video and sending them to another client, but there's a major problem: latency. Everywhere on the internet everyone gave me the same answer: "sorry dude, but Media Format isn't for video conferencing; its codecs have too high latency". I thought I could get around the .wmv problems, but it seems that isn't possible, so that road ends here. Then I saw a few examples with DirectShow.NET which were faster for both audio and video. My question is: how come DirectShow.NET is faster and better for video/audio conferencing? Shouldn't it be just a .NET port of C++'s DirectShow? Am I missing something? I'm a bit confused at this point.

    Read the article

  • Touch friendly GUI in Windows Mobile

    - by vonolsson
    I'm porting an audio processing application written in C++ from Windows to Windows Mobile (version 5+). Basically what I need to port is the GUI. The application is quite complicated and the GUI will need to offer a lot of functionality. I would like to create a touch-friendly user interface that also looks good, which basically means that the standard WinMo controls are out of the window. I've looked at libraries such as Fluid and they look like something I would like to use. However, as I said, I'm developing in C++. Even though it would be possible to write only the GUI part in some .NET language, I'd rather not; my experience with .NET on Windows Mobile is that it doesn't work very well. Can anyone either suggest a C/C++ touch-friendly GUI library for Windows Mobile, or some kind of "best practices" document/how-to on using the standard Windows Mobile controls in a way that makes them touch friendly and also work and look good in later versions of Windows Mobile (in particular version 6.5)?

    Read the article

  • Entity Framework - Merging 2 physical tables into one "virtual" table problems...

    - by Keith Barrows
    I have been reading up on porting the ASP.NET Membership Provider to .NET 3.5 using LINQ & Entities. However, the DB model that every single sample shows is the newer model, while I've inherited a rather old model. The differences:

    - The User table is split into a pair of User & Membership tables.
    - All of the tables in the DB are prepended with aspnet_.
    - I have Lowered versions of some columns (UserName, Email, etc.).

    To work with this I have copied the properties from the Membership table into the User table (in the DB this is a 1<-1 relationship, not a 1<-0,1), and renamed aspnet_Applications to Application, aspnet_Profiles to Profile, aspnet_Users to User and aspnet_Roles to Role (see the linked full-size image of the model). Now I am running into one of two problems when I try to compile. Using the model in the image I get this error:

        Problem in Mapping Fragment starting at line 464: EntitySets 'UserSet' and 'aspnet_Membership' are both mapped to table 'aspnet_Membership'. Their Primary Keys may collide.

    If I delete the aspnet_Membership table from my model (to handle the above error) I then get:

        Problem in Mapping Fragment starting at line 384: Column aspnet_Membership.ApplicationId in table aspnet_Membership must be mapped: It has no default value and is not nullable.

    My ability to hand-edit the backing stores is not the best, and I don't want to just hack something in that may break other things. I am looking for suggestions, best practices, etc. to handle this. Note: moving the data tables themselves is not an option, as I cannot replace all the logic in the existing apps. I am building this EF provider for a new app; over the next 6 months the old app(s) will migrate bit-by-bit to the new structures.

    Read the article

  • User roles - why not store in session?

    - by Phil
    I'm porting an ASP.NET application to MVC and need to store two items relating to an authenticated user: a list of roles and a list of visible item IDs, to determine what the user can or cannot see. We used WSE with a web service in the past and it made things unbelievably complex and impossible to debug properly. Now that we're ditching the web service, I was looking forward to drastically simplifying the solution by simply storing these things in the session. A colleague suggested using the roles and membership providers, but on looking into this I've found a number of problems:

    a) it suffers from similar but different problems to WSE, in that it has to be used in a very constrained way, making it tricky even to write tests;
    b) the only caching option for the RolesProvider is based on cookies, which we've rejected on security grounds;
    c) it introduces no end of complications and extra unwanted baggage.

    All we want to do, in a nutshell, is store two string variables in a user's session, or something equivalent, in a secure way and refer to them when we need to. What seems to be a ten-minute job has so far taken several days of investigation, and to compound the problem we have now discovered that session IDs can apparently be faked; see http://blogs.sans.org/appsecstreetfighter/2009/06/14/session-attacks-and-aspnet-part-1/. I'm left thinking there is no easy way to do this very simple job, but I find that impossible to believe. Could anyone:

    a) provide simple information on how to make ASP.NET MVC sessions as secure as I always believed they were?
    b) suggest another simple way to store these two string variables for a logged-in user's roles etc. without replacing one complex nightmare with another as described above?

    Thank you.
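
    For reference, the "ten minute job" version really is just the built-in session dictionary; a minimal sketch might look like the following (controller, action, and key names here are invented for illustration), with the hardening questions raised above living in transport and configuration rather than in this code.

        using System.Web.Mvc;

        public class AccountController : Controller
        {
            public ActionResult LogOn(string userName, string password)
            {
                // ... authenticate the user first ...
                Session["Roles"] = "Admin,Editor";          // list of roles
                Session["VisibleItemIds"] = "3,17,42";      // visible item IDs
                return RedirectToAction("Index", "Home");
            }
        }

        public class ItemsController : Controller
        {
            public ActionResult Index()
            {
                // read the two strings back wherever they are needed
                var roles = (string)Session["Roles"];
                var visibleIds = (string)Session["VisibleItemIds"];
                // ... filter the view model using roles / visibleIds ...
                return View();
            }
        }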

    Read the article

  • How are you taking advantage of Multicore?

    - by tgamblin
    As someone in the world of HPC who came from the world of enterprise web development, I'm always curious to see how developers back in the "real world" are taking advantage of parallel computing. This is much more relevant now that all chips are going multicore, and it'll be even more relevant when there are thousands of cores on a chip instead of just a few. My questions are: How does this affect your software roadmap? I'm particularly interested in real stories about how multicore is affecting different software domains, so specify what kind of development you do in your answer (e.g. server-side, client-side apps, scientific computing, etc.). What are you doing with your existing code to take advantage of multicore machines, and what challenges have you faced? Are you using OpenMP, Erlang, Haskell, CUDA, TBB, UPC or something else? What do you plan to do as concurrency levels continue to increase, and how will you deal with hundreds or thousands of cores? If your domain doesn't easily benefit from parallel computation, then explaining why is interesting, too. Finally, I've framed this as a multicore question, but feel free to talk about other types of parallel computing. If you're porting part of your app to use MapReduce, or if MPI on large clusters is the paradigm for you, then definitely mention that, too. Update: If you do answer #5, mention whether you think things will change if there get to be more cores (100, 1000, etc.) than you can feed with available memory bandwidth (seeing as how bandwidth is getting smaller and smaller per core). Can you still use the remaining cores for your application?

    Read the article

  • Strange behavior due to wx.Frame.SetTitle

    - by Anurag Uniyal
    In a wxPython application which I am porting to Mac OS X, I set the title of the app frame every 500 ms in an update-UI event, and as a result all the panels and windows are refreshed. That seems strange to me and almost halts my application, which has many custom-drawn controls and screens. I wanted to know what the reason behind it could be; is this normal for Mac? Here is a self-contained script which replicates the scenario using timers. It keeps on printing "onPaint" every 500 ms because in the timer I set the title every 500 ms.

        import wx

        app = wx.PySimpleApp()
        frame = wx.Frame(None, title="BasePainter Test")
        painter = wx.Panel(frame)

        def onPaint(event):
            dc = wx.PaintDC(painter)
            print "onPaint"

        painter.Bind(wx.EVT_PAINT, onPaint)

        def loop():
            frame.SetTitle(frame.GetTitle())
            wx.CallLater(500, loop)

        loop()
        frame.Show(True)
        app.SetTopWindow(frame)
        app.MainLoop()

    My system details:

        >>> sys.version
        '2.5 (r25:51918, Sep 19 2006, 08:49:13) \n[GCC 4.0.1 (Apple Computer, Inc. build 5341)]'
        >>> wx.VERSION
        (2, 8, 10, 1, '')
        >>> os.uname()
        ('Darwin', 'agyeys-mac-mini.local', '9.8.0', 'Darwin Kernel Version 9.8.0: Wed Jul 15 16:55:01 PDT 2009; root:xnu-1228.15.4~1/RELEASE_I386', 'i386')

    Read the article

  • Silverlight HttpWebRequest.Create hangs inside async block

    - by jack2010
    I am trying to prototype an RPC call to a JBoss web server from Silverlight (4). I have written the code and it works in a console application, so I know that JBoss is responding to the web request. Porting it to Silverlight 4 is causing issues:

        let uri = new Uri(queryUrl)
        // this is the line that hangs
        let request : HttpWebRequest = downcast WebRequest.Create(uri)
        request.Method <- httpMethod
        request.ContentType <- contentType

    It may be a sandbox issue, as my Silverlight is being served off of my file system and the Uri is a reference to localhost, though I am not even getting an exception. Thoughts? Thx

    UPDATE 1: I created a new project, ported my code over, and now it is working; something must still be unstable with regard to the F#/Silverlight integration. I would still appreciate thoughts on debugging the "hanging" web create in the old model.

    UPDATE 2:

        let uri = Uri("http://localhost./portal/main?isSecure=IbongAdarnaNiFranciscoBalagtas")
        // this WebRequest.Create works fine
        let req : HttpWebRequest = downcast WebRequest.Create(uri)

        let Login = async {
            let uri = new Uri("http://localhost/portal/main?isSecure=IbongAdarnaNiFranciscoBalagtas")
            // code hangs on this WebRequest.Create
            let request : HttpWebRequest = downcast WebRequest.Create(uri)
            return request
        }

        Login |> Async.RunSynchronously

    I must be missing something; the async block works fine in the console app, so is it not allowed in the Silverlight app?

    Read the article

  • perl issuing os command with defined variables

    - by Vinnie Biros
    I am adding functionality to my scripts so that they can use Kerberos authentication to run automatically and use secure protocols when executing. I have this working for shell scripts, which do exactly what I want, but I am having trouble porting it to Perl for use within my Perl scripts, as I am new to Perl. Here is the working shell code whose behaviour I am trying to reproduce in Perl:

        #!/bin/sh
        ticketFileName=`basename $0-$$`        #set filename variable to name of script plus the PID
        krb5CacheLocation=/tmp/$ticketFileName        #set ticket cache location to /tmp + script name
        /usr/share/centrifydc/kerberos/bin/kinit -c $krb5CacheLocation -kt /root/.ssh/someaccount.keytab someaccount        #get TGT and specify ticket cache location on kinit
        export KRB5CCNAME=$krb5CacheLocation        #set the KRB5CCNAME variable to tell ssh where to look

    What I have attempted in Perl:

        #!/usr/bin/perl
        my $ticketFileName = `basename $0-$$`;
        my $krb5CacheLocation = '/tmp/'.$ticketFileName;
        `export KRB5CCNAME=$krb5CacheLocation`;
        `/usr/share/centrifydc/kerberos/bin/kinit -c $krb5CacheLocation -kt /root/.ssh/unixmap0000.keytab unixmap0000`;

    It seems it is not liking the variable that I am referencing in the OS command. Does anyone have any ideas or suggestions?
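
    Two things worth noting about the Perl attempt: backticks run each command in its own child shell, so `export KRB5CCNAME=...` cannot change the environment of the Perl process or of anything it runs later, and the backtick capture of basename leaves a trailing newline in $ticketFileName. A rough, untested sketch of the same flow expressed natively in Perl (file names taken from the original shell script) might look like this:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use File::Basename;

        # script name plus PID, built in Perl instead of capturing `basename`
        my $ticketFileName    = basename($0) . "-$$";
        my $krb5CacheLocation = "/tmp/$ticketFileName";

        # setting %ENV changes this process's environment, which child
        # processes (kinit, ssh, ...) then inherit
        $ENV{KRB5CCNAME} = $krb5CacheLocation;

        # get the TGT into the chosen cache; system() returns the exit status
        system('/usr/share/centrifydc/kerberos/bin/kinit',
               '-c',  $krb5CacheLocation,
               '-kt', '/root/.ssh/someaccount.keytab',
               'someaccount') == 0
            or die "kinit failed: $?";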

    Read the article

  • JVM/CLR Source-compatible Language Options

    - by Nathan Voxland
    I have an open source Java database migration tool (http://www.liquibase.org) which I am considering porting to .NET. The majority of the tool (at least from a complexity standpoint) is around logic like "if you are adding a primary key and the database is Oracle, use this SQL; if the database is MySQL, use this SQL; if the primary key is named and the database is Postgres, use this SQL". I could fork the Java codebase and convert it (manually and/or automatically), but as updates and bug fixes to the above logic come in, I do not want to have to apply them to both versions. What I would like to do is move all that logic into a form that can be compiled and used natively by both the Java and .NET versions. The code I am looking to convert does not contain any advanced library usage (JDBC, System.out, etc.) that would vary significantly from Java to .NET, so I don't think that will be an issue (at worst it can be designed around). So what I am looking for is:

    - a language in which I can code the common parts of my app and compile them into classes usable by the "standard" languages on the target platform;
    - no added runtime requirements on the system;
    - nothing so strange that it scares away potential contributors.

    I know Python and Ruby both have implementations for the JVM and CLR. How well do they fit my requirements? Has anyone been successful (or unsuccessful) using this technique for cross-platform applications? Are there any gotchas I need to worry about?

    Read the article

  • How to deal with recursive dependencies between static libraries using the binutils linker?

    - by Jack Lloyd
    I'm porting an existing system from Windows to Linux. The build is structured with multiple static libraries. I ran into a linking error where a symbol (defined in libA) could not be found in an object from libB. The linker line looked like:

        g++ test_obj.o -lA -lB -o test

    The problem, of course, is that by the time the linker finds it needs the symbol from libA, it has already passed libA by, and it does not rescan, so it simply errors out even though the symbol is there for the taking. My initial idea was to simply swap the link order (to -lB -lA) so that libA is scanned afterwards and any symbols missing from libB that are in libA are picked up. But then I found there is actually a recursive dependency between libA and libB! I'm assuming the Visual C++ linker handles this in some way (does it rescan by default?). Ways of dealing with this that I've considered:

    - Use shared objects. Unfortunately this is undesirable because it requires PIC compilation (this is performance-sensitive code and losing %ebx to hold the GOT would really hurt), and shared objects aren't needed.
    - Build one mega ar archive of all of the objects, avoiding the problem.
    - Restructure the code to avoid the recursive dependency (which is obviously the Right Thing to do, but I'm trying to do this port with minimal changes).

    Do you have other ideas for dealing with this? Is there some way I can convince the binutils linker to rescan libraries it has already looked at when it is missing a symbol?
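
    For what it's worth, GNU ld does have a switch aimed at exactly this situation: archives wrapped in --start-group/--end-group are searched repeatedly until no new undefined references can be resolved, at some cost in link time. Passing the options through g++ with -Wl, the link line from above would look something like:

        g++ test_obj.o -Wl,--start-group -lA -lB -Wl,--end-group -o test

    The cruder alternative of simply listing a library twice (e.g. -lA -lB -lA) also works for a single circular pair.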

    Read the article

  • Academic question: typename

    - by Arman
    Hi, recently I ran into a "simple problem" when porting code from VC++ to gcc/intel. The following code compiles without error on VC++:

        #include <vector>
        using std::vector;

        template <class T>
        void test_vec( std::vector<T> &vec)
        {
            typedef std::vector<T> M;
            /*==> add here typename*/ M::iterator ib=vec.begin(),ie=vec.end();
        };

        int main()
        {
            vector<double> x(100, 10);
            test_vec<double>(x);
            return 0;
        }

    With g++, however, we get some unclear errors:

        g++ t.cpp
        t.cpp: In function 'void test_vec(std::vector<T, std::allocator<_CharT> >&)':
        t.cpp:13: error: expected `;' before 'ie'
        t.cpp: In function 'void test_vec(std::vector<T, std::allocator<_CharT> >&) [with T = double]':
        t.cpp:18:   instantiated from here
        t.cpp:12: error: dependent-name 'std::M::iterator' is parsed as a non-type, but instantiation yields a type
        t.cpp:12: note: say 'typename std::M::iterator' if a type is meant

    If we add typename before iterator, the code compiles without problems. If it is possible to make a compiler that understands the code written in the more "natural" way, then it is unclear to me why we should have to add typename. Which rules of the C++ standard (if there are any) would be broken if we allowed all compilers to accept the code without "typename"? Kind regards, Arman.
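
    The rule in play here is the dependent-name rule: inside a template, a qualified name that depends on a template parameter (M::iterator, where M depends on T) is assumed to name a non-type unless it is prefixed with typename, because the same spelling could resolve to a type in one specialization and to a data member in another, and the template has to be parseable before T is known. VC++ has traditionally postponed much of this parsing until the template is instantiated, which is why it accepts the code without complaint. For reference, a corrected version of the function:

        #include <vector>

        template <class T>
        void test_vec(std::vector<T> &vec)
        {
            typedef std::vector<T> M;
            // M depends on the template parameter T, so the compiler must be
            // told explicitly that M::iterator names a type
            typename M::iterator ib = vec.begin(), ie = vec.end();
        }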

    Read the article

  • "Undefined symbols" linker error with simple template class

    - by intregus
    Been away from C++ for a few years and am getting a linker error from the following code:

    Gene.h:

        #ifndef GENE_H_INCLUDED
        #define GENE_H_INCLUDED

        template <typename T>
        class Gene
        {
        public:
            T getValue();
            void setValue(T value);
            void setRange(T min, T max);
        private:
            T value;
            T minValue;
            T maxValue;
        };

        #endif // GENE_H_INCLUDED

    Gene.cpp:

        #include "Gene.h"

        template <typename T>
        T Gene<T>::getValue()
        {
            return this->value;
        }

        template <typename T>
        void Gene<T>::setValue(T value)
        {
            if(value >= this->minValue && value <= this->minValue)
            {
                this->value = value;
            }
        }

        template <typename T>
        void Gene<T>::setRange(T min, T max)
        {
            this->minValue = min;
            this->maxValue = max;
        }

    Using Code::Blocks and GCC if it matters to anyone. Also, clearly porting some GA stuff to C++ for fun and practice.
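
    Assuming the undefined symbols are the Gene<T> member functions, this is the usual template-definition pitfall rather than anything toolchain-specific: the definitions in Gene.cpp are only turned into object code when they are instantiated for a concrete T, and nothing in Gene.cpp instantiates them, so any other translation unit that uses Gene<...> is left with unresolved references. The two common fixes are to keep the member definitions visible in the header (directly in Gene.h, or in a Gene.inl included at its end), or to keep Gene.cpp and add explicit instantiations for the types the program uses, sketched below with example types. (As a side note, setValue compares value against minValue twice, which looks like a typo for maxValue.)

        // At the end of Gene.cpp, after the member definitions: explicit
        // instantiations emit object code for the listed types. The types
        // here are examples; list whichever ones the rest of the program uses.
        template class Gene<int>;
        template class Gene<double>;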

    Read the article

  • Experiences with (free) embedded TCP / IP stacks?

    - by Dan
    Does anyone have especially good (or bad) experiences with any of the following embedded TCP/IP stacks?

    - uIP
    - lwIP
    - Bentham's TCP/IP Lean implementation
    - The TCP/IP stack from this book

    My needs are for a solid, easy-to-port stack. Code size isn't terribly important and performance is relatively important, but ease of use and porting is very important. The system will probably use an RTOS; that hasn't been decided, but in my experience most stacks can be used with or without an RTOS. Most likely the platform will be an ARM variant (ARM7 or CM3 in all likelihood). I'm not too concerned about bolting the stack to the Ethernet driver, so that isn't a big priority in the selection. I'm not terribly interested in extracting a stack out of an OS such as Linux, RTEMS, etc. I'm also not interested in commercial offerings such as Interniche, Micrium, etc. The stack doesn't need all sorts of bells and whistles, doesn't need IPv6, and I don't need any stuff on top of it (web servers, FTP servers, etc.). In fact it's possible that I'll only use UDP, although I can envision a couple of scenarios where TCP would be preferable. Experiences with other stacks I've missed are of course also very much of interest. Thanks for your time and input.

    Read the article

  • CFHost DNS Resolution - When is it OK to use synchronous API?

    - by Jasarien
    I went to the iPhone Developer Tech Talk a few months ago and asked one of the gurus there about the lack of NSHost on the iPhone. Some code I was porting to the iPhone made significant use of NSHost throughout its networking code. I was told that NSHost is on the iPhone, but it's private. I was also told that NSHost is a synchronous API and that I shouldn't use it anyway. (If anyone could elaborate on why it shouldn't be used, as a bonus, that'd be great.) I can see the caveats of using synchronous APIs on the main thread in that they'll block until complete, and that's never a good thing with network code, because there are so many factors that could cause the API to block the thread for a significant amount of time. My solution was to write a wrapper around CFHost's asynchronous resolution functions. The result works quite well, and I'm considering releasing it into the public domain. But my question is this: say my app only resolves a hostname once per run, during the connecting phase, and then caches it for the rest of the session. During the time it is resolving, a modal screen is shown telling the user "Connecting", with a nice spinner. Does it really matter whether or not the resolution is asynchronous? The user has to wait to connect anyway, and the resolution is only done on the first connection; subsequent connections use the cached result. When is it OK to be synchronous, and when should things be asynchronous?

    Read the article

  • Daemon program that uses select() inside infinite loop uses significantly more CPU when ported from Solaris 10 to Linux

    - by Jake
    I have a daemon app written in C which is currently running with no known issues on a Solaris 10 machine. I am in the process of porting it over to Linux. I have had to make minimal changes. During testing it passes all test cases, and there are no issues with its functionality. However, when I view its CPU usage when 'idle' on my Solaris machine, it is using around 0.03% CPU. On the virtual machine running Red Hat Enterprise Linux 4.8, that same process uses all available CPU (usually somewhere in the 90%+ range). My first thought was that something must be wrong with the event loop. The event loop is an infinite loop (while(1)) with a call to select(). The timeval is set up so that timeval.tv_sec = 0 and timeval.tv_usec = 1000. This seems reasonable enough for what the process is doing. As a test I bumped timeval.tv_sec to 1, and even after doing that I saw the same issue. Is there something I am missing about how select() works on Linux vs. Unix? Or does it work differently with an OS running on a virtual machine? Or maybe there is something else I am missing entirely? One more thing: I am not sure which version of VMware Server is being used; it was just updated about a month ago, though. Thanks.
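
    One Linux-specific detail worth checking, since it produces exactly this symptom: on Linux, select() modifies the struct timeval you pass in to reflect the time not slept, whereas Solaris leaves it untouched. If the timeval is initialized once outside the while(1), it decays to {0, 0} after the first timeout and every later select() returns immediately, spinning the CPU. A minimal sketch of the loop shape described, with the timeout re-armed inside the loop (the function and descriptor names are made up):

        #include <sys/select.h>
        #include <sys/time.h>

        void event_loop(int listen_fd)
        {
            while (1) {
                fd_set readfds;
                struct timeval tv;
                int ready;

                /* Re-arm these every iteration: Linux select() overwrites tv
                   with the remaining time, so a timeout set only once before
                   the loop would hit {0, 0} and make the loop spin. */
                FD_ZERO(&readfds);
                FD_SET(listen_fd, &readfds);
                tv.tv_sec  = 0;
                tv.tv_usec = 1000;   /* 1 ms, as in the original loop */

                ready = select(listen_fd + 1, &readfds, NULL, NULL, &tv);
                if (ready > 0 && FD_ISSET(listen_fd, &readfds)) {
                    /* ... handle the descriptor ... */
                }
            }
        }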

    Read the article

  • DDK/WDM developing problem ... driver won't load on x64 windows platform

    - by user295975
    Hi there! I am a beginner in the field of DDK/WDM driver development. I have a task which involves porting a virtual device driver from x86 to x64 (Intel). I got the source code, modified it a bit, and compiled it successfully with the DDK (build environments). But when I tried to load it on an ia64 Windows 7 machine, it didn't want to load. Then I tried some simple device driver examples from http://www.codeproject.com/KB/system/driverdev.aspx and from other links, but still the same problem. I heard on a forum that some of the libraries you link against are not compatible with the new machines, and it was suggested to link against other, similar libraries, but that still didn't work. When I build I use the "-cefw" command line parameters, as suggested. I do not have an *.inf file associated with the driver, but I'm copying it into system32/drivers and using WinObj to see whether it is loaded into memory after the next restart. I also tried this program (http://www.codeproject.com/KB/system/tdriver.aspx) to load the driver into memory, but that still didn't work for me. Please help me; I'm stuck on this and my deadline has already passed. I feel like I'm driving myself nuts here trying to discover what I'm doing wrong.

    Read the article

  • Why use Django on Google App Engine?

    - by Travis Bradshaw
    When researching Google App Engine (GAE), it's clear that using Django is wildly popular for developing in Python on GAE. I've been scouring the web to find information on the costs and benefits of using Django, to find out why it's so popular. While I've been able to find a wide variety of sources on how to run Django on GAE and the various methods of doing so, I haven't found any comparative analysis on why Django is preferable to using the webapp framework provided by Google. To be clear, it's immediately apparent why using Django on GAE is useful for developers with an existing skillset in Django (a majority of Python web developers, no doubt) or existing code in Django (where using GAE is more of a porting exercise). My team, however, is evaluating GAE for use on an all-new project and our existing experience is with TurboGears, not Django. It's been quite difficult to determine why Django is beneficial to a development team when the BigTable libraries have replaced Django's ORM, sessions and authentication are necessarily changed, and Django's templating (if desirable) is available without using the entire Django stack. Finally, it's clear that using Django does have the advantage of providing an "exit strategy" if we later wanted to move away from GAE and need a platform to target for the exodus. I'd be extremely appreciative for help in pointing out why using Django is better than using webapp on GAE. I'm also completely inexperienced with Django, so elaboration on smaller features and/or conveniences that work on GAE are also valuable to me. Thanks in advance for your time!

    Read the article

  • Convert MFC Doc/View to?

    - by Harvey
    My question will be hard to form, but to start: I have an MFC SDI app that I have worked on for an embarrassingly long time and that never seemed to fit the Doc/View architecture, i.e. there isn't anything useful in the Doc. It is multi-threaded, and I need to do more with threading, etc. I also dream about porting it to Linux and X Windows, but I know nothing about that programming environment as yet. Maybe Mac also. My question is where to go from here? I think I would like to convert from MFC Doc/View to straight Win API code with message loops and window procedures, etc., but the task seems huge. Does the Linux X Windows environment use a similar kind of message-loop, window-procedure architecture? Can I go part way, converting a little at a time without rendering my program unusable for long periods of work? What is the best book to read to help me move in that direction?

    Read the article

  • What is faster- Java or C# (Or good old C)?

    - by Rexsung
    I'm currently deciding on a platform on which to build a scientific computational product, and am deciding between C#, Java, and plain C with Intel's compiler on Core 2 Quad CPUs. It's mostly integer arithmetic. My benchmarks so far show Java and C are about on par with each other, and .NET/C# trails by about 5%; however, a number of my coworkers are claiming that .NET with the right optimizations will beat both of these given enough time for the JIT to do its work. I always assumed that the JIT would have done its job within a few minutes of the app starting (probably a few seconds in my case, as it's mostly tight loops), so I'm not sure whether to believe them. Can anyone shed any light on the situation? Would .NET beat Java? (Or am I best just sticking with C at this point?) The code is highly multithreaded and the data sets are several terabytes in size. Haskell, Erlang, etc. are not options in this case, as there is a significant quantity of existing legacy C code that will be ported to the new system, and porting C to Java/C# is a lot simpler than to Haskell or Erlang (unless of course those provide a significant speedup). Edit: We are considering moving to C# or Java because they may, in theory, be faster. Every percent we can shave off our processing time saves us tens of thousands of dollars per year. At this point we are just trying to evaluate whether C, Java, or C# would be faster.

    Read the article

  • No right-click event Firefox 3.6

    - by cdmckay
    I'm in the process of porting an app to JavaScript/CSS and it uses right-click. For some reason Firefox 3.6 for Windows isn't issuing a right-click event, but Chrome and IE do. Here's some test code. If you right-click #test then you get nothing in Firefox, but you get an alert under Chrome and IE.

        <html>
          <head>
            <title>Hi</title>
            <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script>
            <script type="text/javascript">
              $(function(){
                $("#test").get(0).oncontextmenu = function() { return false; };
                $("#test").mousedown(function() {
                  alert("hi");
                });
              });
            </script>
          </head>
          <body>
            <div id="test" style="background: red;">Hi</div>
          </body>
        </html>

    Why isn't the right-click event being generated in Firefox?

    Read the article

  • What are some choices to port existing Windows GUI app written in C to Linux?

    - by Warner Young
    I've been tasked with porting an existing Windows GUI app to Linux. Ideally, I'd like to do this so the same code base can be used to build either the Windows version or the Linux version. I'll be doing my work on Ubuntu 9.04. After searching around, it's unclear to me which tools are best suited to help me with this. A list of loose requirements would be:

    - The code is in C, not C++, and should compile to build both the Windows and Linux versions. Since it's existing code, and fairly large, converting to a managed language like .NET is out of the question for now.
    - I would prefer to use the same dialogs on both systems. On Windows, putting up a dialog is pretty simple: you build the dialog in the Resource Editor in Visual Studio, then call the DialogBox() API and handle the event messages. I would really like to find something that can do the equivalent on the Linux side.
    - It would also be nice to have a good IDE similar to Visual Studio.

    Any help or hints would be appreciated. Thanks,

    Read the article
