Search Results

Search found 4549 results on 182 pages for 'quickly'.

Page 145/182 | < Previous Page | 141 142 143 144 145 146 147 148 149 150 151 152  | Next Page >

  • Efficiently Serving Dynamic Content in Google App Engine

    - by awegawef
    My app on Google App Engine returns content items (just text) and comments on them. It works like this (pseudo-ish code):

        query: get keys of latest content   # query to datastore
        for each item in content:
            if item_dict in memcache:
                use item_dict
            else:
                build_item_dict(item)        # by fetching from datastore
                store item_dict in memcache
        send all item_dicts to template

    Sorry if the code isn't understandable. I get all of the content dictionaries and send them to the template, which uses them to create the webpage. My problem is that if the memcache has expired, then for each item I want to display I have to (1) look the item up in memcache, (2) since no memcache entry exists, fetch the item from the datastore, and (3) store the item in memcache. These calls build up quickly. I don't set an expiry time for the memcache entries, so this really only happens once in the morning, but the webpage then takes long enough to load (~1 sec) that the browser reports it as not existing. Normally my webpages take about 50ms to load. This approach works decently for frequent visits, but it has its flaws as shown above. How can I remedy this? The entries are dynamic enough that I don't think it would be in my best interest to cache my initial request. Thanks in advance
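
    One way to cut the per-item round trips (a minimal sketch, not the poster's code; it assumes the stock google.appengine.api.memcache client and a build_item_dict() helper like the one in the pseudocode above) is to batch the cache reads and writes, so a cold cache costs two memcache calls in total rather than two per item:

        from google.appengine.api import memcache

        def get_item_dicts(keys):
            # one RPC to fetch every cached dict at once
            cached = memcache.get_multi([str(k) for k in keys])
            missing = [k for k in keys if str(k) not in cached]
            if missing:
                # hypothetical helper: build the dicts for the cache misses
                fresh = dict((str(k), build_item_dict(k)) for k in missing)
                memcache.set_multi(fresh)   # one RPC to store them all
                cached.update(fresh)
            return [cached[str(k)] for k in keys]

    The datastore fetches inside build_item_dict() can usually be batched the same way (one get for all missing keys), which is where most of the cold-start second tends to go.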

    Read the article

  • Implicit Type Conversions in Reflection

    - by bradhe
    So I've written a quick bit of code to help quickly convert between business objects and view models. Not to pimp my own blog, but you can find the details here if you're interested or need to know. One issue I've run into is that I have a custom collection type, ProductCollection, and I need to turn it into a string[] on my model. Obviously, since there is no default implicit cast, I'm getting an exception in my contract converter. So I thought the following little bit of code should solve the problem:

        public static implicit operator string[](ProductCollection collection)
        {
            var list = new List<string>();
            foreach (var product in collection)
            {
                if (product.Id == null)
                {
                    list.Add(null);
                }
                else
                {
                    list.Add(product.Id.ToString());
                }
            }
            return list.ToArray();
        }

    However, it still fails with the same cast exception. I'm wondering if it has something to do with being in reflection? If so, is there anything I can do here? I'm open to architectural solutions, too!
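
    For context: a user-defined implicit operator is applied by the compiler at compile time only; runtime casts and reflection-based converters never consult it, which is consistent with the exception above. It does, however, compile down to a static method named op_Implicit, so a converter can look that method up itself. A minimal sketch of that idea (illustrative only, not the blog post's converter):

        using System;
        using System.Reflection;

        static object ConvertWithImplicitOperator(object value, Type targetType)
        {
            // user-defined implicit conversions are emitted as a static "op_Implicit" method
            MethodInfo op = value.GetType().GetMethod(
                "op_Implicit",
                BindingFlags.Public | BindingFlags.Static,
                null,
                new[] { value.GetType() },
                null);

            if (op != null && targetType.IsAssignableFrom(op.ReturnType))
                return op.Invoke(null, new[] { value });

            throw new InvalidCastException(
                "No implicit conversion from " + value.GetType() + " to " + targetType);
        }

    A fuller version would also check the target type for an op_Implicit that accepts the source type, since the operator may be declared on either side of the conversion.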

    Read the article

  • systematizing error codes for a web app in php?

    - by user151841
    I'm working on a class-based PHP web app. I have some places where objects are interacting, and I have certain situations where I'm using error codes to communicate to the end user -- typically when form values are missing or invalid. These are situations where exceptions are unwarranted (and I'm not sure I could avoid the situations with exceptions anyway). In one object, I have some 20 code numbers, each of which corresponds to a user-facing message and an admin/developer-facing message, so both parties know what's going on. Now that I've worked over the code several times, I find that it's difficult to quickly figure out which code numbers in the series I've already used, so I accidentally create conflicting code numbers. For instance, I just did that today with 12, 13, 14 and 15. How can I better organize this so I don't create conflicting error codes? Should I create one singleton class, errorCodes, that has a master list of all error codes for all classes, systematizing them across the whole web app? Or should each object have its own set of error codes, when appropriate, and I just keep a list in the commentary of the object, to use and update as I go along?
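
    One lightweight option (a sketch only; the class and constant names below are invented for illustration) is to stop referring to raw numbers in the code at all: give each class its own short block of named constants and keep the user-facing and developer-facing text right next to them, so checking for the next free number is a matter of reading one block.

        <?php
        class OrderFormErrors
        {
            const MISSING_EMAIL   = 12;
            const INVALID_EMAIL   = 13;
            const MISSING_ADDRESS = 14;

            public static function messages()
            {
                return array(
                    self::MISSING_EMAIL   => array('Please enter your email address.',
                                                   'email field was empty in the order form'),
                    self::INVALID_EMAIL   => array('That email address looks invalid.',
                                                   'email failed validation'),
                    self::MISSING_ADDRESS => array('Please enter a shipping address.',
                                                   'address field was empty in the order form'),
                );
            }
        }

    Whether the constants live in one central class or one per object is then mostly a question of how widely the codes need to be unique.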

    Read the article

  • [C#] Three System.Drawing methods manifest slow drawing or flickery: Solutions? or Other Options?

    - by Luke Mcneice
    Hi all, I am doing a little graphing via System.Drawing and I'm having a few problems. I'm holding data in a Queue and I'm drawing (graphing) that data onto three picture boxes. This method fills the picture box and then scrolls the graph across. So as not to draw on top of the previous drawings (and gradually look messier), I found two ways to draw the graph:

    1) Call plot.Clear(BACKGOUNDCOLOR) before the draw loop [block commented below], although this causes a flicker because of the time the drawing loop takes.

    2) Call plot.DrawLine(channelPen[5], j, 140, j, 0); just before each DrawLine [commented below], although this causes the drawing to start fine and then slow down very quickly to a crawl, as if a wait command had been placed before the draw command.

    Here is the code for reference:

        /*plotx.Clear(BACKGOUNDCOLOR)
        ploty.Clear(BACKGOUNDCOLOR)
        plotz.Clear(BACKGOUNDCOLOR)*/
        for (int j = 1; j < 599; j++)
        {
            if (j > RealTimeBuffer.Count - 1) break;
            QueueEntity past = RealTimeBuffer.ElementAt(j - 1);
            QueueEntity current = RealTimeBuffer.ElementAt(j);
            if (j == 1)
            {
                //plotx.DrawLine(channelPen[5], 0, 140, 0, 0);
                //ploty.DrawLine(channelPen[5], 0, 140, 0, 0);
                //plotz.DrawLine(channelPen[5], 0, 140, 0, 0);
            }
            //plotx.DrawLine(channelPen[5], j, 140, j, 0);
            plotx.DrawLine(channelPen[0], j - 1, (((past.accdata.X - 0x7FFF) / 256) + 64), j, (((current.accdata.X - 0x7FFF) / 256) + 64));
            //ploty.DrawLine(channelPen[5], j, 140, j, 0);
            ploty.DrawLine(channelPen[1], j - 1, (((past.accdata.Y - 0x7FFF) / 256) + 64), j, (((current.accdata.Y - 0x7FFF) / 256) + 64));
            //plotz.DrawLine(markerPen, j, 140, j, 0);
            plotz.DrawLine(channelPen[2], j - 1, (((past.accdata.Z - 0x7FFF) / 256) + 94), j, (((current.accdata.Z - 0x7FFF) / 256) + 94));
        }

    Are there any tricks to avoid these overheads? If not, would there be any other/better solutions?
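
    A common cure for both the flicker and the clear-per-column slowdown (sketched here with made-up control and data names, not the poster's code) is to render each frame into an off-screen Bitmap and hand the finished image to the PictureBox in one step, so nothing half-drawn ever reaches the screen:

        using System.Drawing;
        using System.Windows.Forms;

        void RenderFrame(PictureBox plotBox, int[] samples)
        {
            var backBuffer = new Bitmap(plotBox.Width, plotBox.Height);
            using (Graphics g = Graphics.FromImage(backBuffer))
            {
                g.Clear(Color.Black);                       // cheap: never visible
                for (int j = 1; j < samples.Length; j++)
                    g.DrawLine(Pens.Lime, j - 1, samples[j - 1], j, samples[j]);
            }
            Image old = plotBox.Image;
            plotBox.Image = backBuffer;                     // single on-screen update
            if (old != null) old.Dispose();                 // don't leak the previous frame
        }

    Drawing in a Paint handler on a double-buffered control achieves the same thing; the key point is that clearing and line drawing happen on a buffer, not directly on the visible surface.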

    Read the article

  • Methods to Analyze Sell Orders (in a video game)

    - by Travis
    I'm building a market manager for EVE, but this problem should be solvable without any prior knowledge of the game. Basically, sell orders in EVE look like this (screenshot omitted): a list of prices with the quantity available at each. All I'm really concerned with is picking a good price. The numbers the program would pick are sent out to our production team so they can buy the items to build things. Just like in a real market, the numbers we send off need to reflect the cheapest price that still offers a good quantity. Also, available quantities change over time, so we can't just pick the lowest listed price. For example, in the screencap we would set the sell price between 2,800.00 ISK and 3,000.00 ISK, even if only 500 units were needed. Even though the cheapest order would fit the quantity, it would probably disappear quickly. It would be better to set the price a tad higher so we don't underpay the production people. The question: it's easy enough to figure prices out just by looking at the data, but I can't figure out how to have a program [Java] do it. Different items sell in much smaller or larger quantities (i.e. 3 or 300 or 3000 items), so the program has to be able to adjust itself based on the offerings. I don't really want code as much as I do a way to think this through. Thanks in advance.
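
    One way to turn the eyeball heuristic into something a program can apply (a sketch only; the class and field names are invented, and the depth factor is a knob to tune, not a known-good value) is to walk the orders from cheapest to dearest and quote the first price at which the cumulative quantity covers several times what production actually needs, so one thin cheap order can't drag the quote down:

        import java.util.Comparator;
        import java.util.List;

        class PricePicker {
            // e.g. depthFactor = 3.0 means "enough depth to cover the need three times over"
            static double pickPrice(List<SellOrder> orders, long unitsNeeded, double depthFactor) {
                orders.sort(Comparator.comparingDouble(SellOrder::price));
                long target = (long) Math.ceil(unitsNeeded * depthFactor);
                long cumulative = 0;
                for (SellOrder o : orders) {
                    cumulative += o.quantity();
                    if (cumulative >= target) {
                        return o.price();    // cheapest price with enough volume behind it
                    }
                }
                // the whole market is thinner than the target (assumes at least one order)
                return orders.get(orders.size() - 1).price();
            }
        }

        record SellOrder(double price, long quantity) {}

    Because the target scales with unitsNeeded, the same rule adapts to items that trade in threes and items that trade in thousands.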

    Read the article

  • What constitutes a development environment, and how do you document it?

    - by Joel Coehoorn
    What items go into a software shop's development environment, how do you document it, and what processes do you follow to make changes? I'm thinking about this from the standpoint where I want to make it easier to bring new hires up to speed quickly by having all this on a checklist we follow when setting them up, and then, while I'm at it, making it easier for the new hires or existing team members to bring new powerful toolkits and ideas into the environment without disrupting things. I want to keep this platform agnostic, so even though I'm currently at a Microsoft shop where Visual Studio would be assumed, I'll go ahead and list compiler/IDE as one of the items. Here are some ideas for part 1 ([edit]: I'm keeping this updated based on the better suggestions):

        - Source control access
        - Issue/bug/project tracker
        - System documentation, or references to find the system documentation in source control or in a wiki, including:
            - build document/environment covered by this question
            - design documents / technical notes
        - Coding style guidelines
        - Deploy for review/testing/QA/staging/production procedures
        - Licensing details for your tools and your product
        - Team calendar, including the project schedule(s), deadlines, vacation time, and support/on-call schedule (if required)
        - Compiler/IDE
        - Compiler/IDE extensions (things like source control plugins or Visual Studio add-ins)
        - 3rd party SDKs/toolkits
        - Database connection and tools
        - Testing frameworks
        - Internal libraries
        - Communication tools (chat, wiki, etc.)
        - Static analysis tools (FxCop, FlawFinder, etc.)
        - Virtual machines (holding dev environment or for testing)
        - Specialized editors (modeling, XML, etc.)
        - Other tools

    What else goes in this list, and how do you document it and vet changes?

    Read the article

  • Fast search of a 2-dimensional array

    - by Tim
    I need a method of quickly searching a large 2-dimensional array. I extract the array from Excel, so one dimension represents the rows and the second the columns. I wish to obtain a list of the rows where the columns match certain criteria, and I need to know the row number (or index of the array). For example, if I extract a range from Excel, I may need to find all rows where column A = "dog" and column B = 7 and column J "a". I only know which columns and which values to find at run time, so I can't hard-code the column index. I could use a simple loop, but is this efficient? I need to run it several thousand times, searching for different criteria each time.

        For r As Integer = 0 To UBound(myArray, 0) - 1
            match = True
            For c = 0 To UBound(myArray, 1) - 1
                If Not doesValueMeetCriteria(myArray(r, c)) Then
                    match = False
                    Exit For
                End If
            Next
            If match Then addRowToMatchedRows(r)
        Next

    The doesValueMeetCriteria function is a simple function that checks the value of the array element against the query requirement, e.g. column A = "dog", etc. Is it more efficient to create a DataTable from the array and use the .Select method? Can I use LINQ in some way? Perhaps some form of dictionary or hashtable? Or is the simple loop the most efficient? Your suggestions are most welcome.
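
    If the same array really is queried thousands of times, one option worth sketching (illustrative only: keyColumn and the use of Object keys are assumptions, while myArray and doesValueMeetCriteria are the poster's own names) is to index the most selective column once into a dictionary, then run the per-column checks only over the candidate rows instead of every row:

        ' build once per extracted range
        Dim keyColumn As Integer = 0   ' e.g. column A
        Dim rowsByValue As New Dictionary(Of Object, List(Of Integer))
        For r As Integer = 0 To UBound(myArray, 0) - 1
            Dim key As Object = myArray(r, keyColumn)
            If Not rowsByValue.ContainsKey(key) Then
                rowsByValue(key) = New List(Of Integer)
            End If
            rowsByValue(key).Add(r)
        Next

        ' per query: only loop the rows whose keyed column already matches
        Dim candidates As List(Of Integer) = Nothing
        If rowsByValue.TryGetValue("dog", candidates) Then
            For Each r As Integer In candidates
                ' check the remaining columns with doesValueMeetCriteria as before
            Next
        End If

    A DataTable with .Select or a LINQ Where will read more cleanly, but both still scan every row per query; the dictionary lookup is what actually changes the amount of work done.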

    Read the article

  • DataSets to POCOs - an inquiry regarding DAL architecture

    - by alexsome
    Hello all, I have to develop a fairly large ASP.NET MVC project very quickly and I would like to get some opinions on my DAL design to make sure nothing will come back to bite me, since the BL is likely to get pretty complex. A bit of background: I am working with an Oracle backend, so the built-in LINQ to SQL is out; I also need to use production-level libraries, so the Oracle EF provider project is out; finally, I am unable to use any GPL or LGPL code (Apache, MS-PL, BSD are okay), so NHibernate/Castle Project are out. I would prefer, if at all possible, to avoid dishing out money, but I am more concerned about implementing the right solution. To summarize, these are my requirements:

        - Oracle backend
        - Rapid development
        - (L)GPL-free
        - Free

    I'm reasonably happy with DataSets, but I would benefit from using POCOs as an intermediary between DataSets and views. Who knows, maybe at some point another DAL solution will show up and I will get the time to switch it out (yeah, right). So, while I could use LINQ to convert my DataSets to IQueryable, I would like to have a generic solution so I don't have to write a custom query for each class. I'm tinkering with reflection right now, but in the meantime I have two questions:

        1) Are there any problems I overlooked with this solution?
        2) Are there any other approaches you would recommend to convert DataSets to POCOs?

    Thanks in advance.
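
    Since the goal is one generic reflection-based conversion instead of a custom query per class, here is a minimal sketch of that shape (illustrative only: it matches columns to writable properties by name and skips DBNull, and it deliberately leaves out type coercion, nested objects and caching of the PropertyInfo lookups):

        using System;
        using System.Collections.Generic;
        using System.Data;
        using System.Reflection;

        static class DataTableMapper
        {
            public static List<T> ToList<T>(DataTable table) where T : new()
            {
                var props = typeof(T).GetProperties(BindingFlags.Public | BindingFlags.Instance);
                var result = new List<T>();

                foreach (DataRow row in table.Rows)
                {
                    var item = new T();
                    foreach (PropertyInfo p in props)
                    {
                        if (!p.CanWrite || !table.Columns.Contains(p.Name)) continue;
                        object value = row[p.Name];
                        if (value != DBNull.Value)
                            p.SetValue(item, value, null);
                    }
                    result.Add(item);
                }
                return result;
            }
        }

    Usage would be something like DataTableMapper.ToList<Customer>(dataSet.Tables["Customers"]) for any POCO whose property names line up with the column names.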

    Read the article

  • Any useful suggestions to figure out where memory is being free'd in a Win32 process?

    - by LeopardSkinPillBoxHat
    An application I am working with is exhibiting the following behaviour:

        - During a particular high-memory operation, the memory usage of the process under Task Manager (Mem Usage stat) reaches a peak of approximately 2.5GB. (Note: a registry key has been set to allow this, as usually there is a maximum of 2GB for a process under 32-bit Windows.)
        - After the operation is complete, the process size slowly starts decreasing at a rate of 1MB per second.

    I am trying to figure out the easiest way to quickly determine who is freeing this memory, and where it is being free'd. I am having trouble attaching a memory profiler to my code, and I don't particularly want to override the new/delete operators to track the allocations/deallocations (IOW, I want to do this without re-compiling my code). Can anyone offer any useful suggestions of how I could do this via the Visual Studio debugger?

    Update: I should also mention that it's a multi-threaded application, so pausing the application and analysing the call stack through the debugger is not the most desirable option. I considered freezing different threads one at a time to see if the memory stops reducing, but I'm fairly certain this will cause the application to crash.

    Read the article

  • Building a custom Linux Live CD

    - by Mike Heinz
    Can anyone point me to a good tutorial on creating a bootable Linux CD from scratch? I need help with a fairly specialized problem: my firm sells an expansion card that requires custom firmware. Currently we use an extremely old live CD image of RH7.2 that we update with current firmware. Manufacturing puts the cards in a machine, boots off the CD, the CD writes the firmware, they power off and pull the cards. Because of this cycle, it's essential that the CD boot and shut down as quickly as possible. The problem is that with the next generation of cards, I have to update the CD to a 2.6 kernel. It's easy enough to acquire a pre-existing live CD - but those are all designed for showing off Linux on the desktop - which means they take forever to boot. Can anyone fix me up with a current How-To?

    Update: So, just as a final update for anyone reading this later - the tool I ended up using was "livecd-creator". My reason for choosing this tool was that it is available for RedHat-based distributions like CentOS, Fedora and RHEL - which are all distributions that my company supports already. In addition, while the project is very poorly documented, it is extremely customizable. I was able to create a minimal LiveCD and edit the boot sequence so that it booted directly into the firmware updater instead of a bash shell. The whole job would have only taken an hour or two if there had been a README explaining the configuration file!

    Read the article

  • Gdata JavaScript Authsub continues redirect

    - by Krustal
    I am using the JavaScript Google Data API and having issues getting the AuthSub script to work correctly. This is my script currently:

        google.load('gdata', '1');

        function getCookie(c_name) {
            if (document.cookie.length > 0) {
                c_start = document.cookie.indexOf(c_name + "=");
                if (c_start != -1) {
                    c_start = c_start + c_name.length + 1;
                    c_end = document.cookie.indexOf(";", c_start);
                    if (c_end == -1) c_end = document.cookie.length;
                    return unescape(document.cookie.substring(c_start, c_end));
                }
            }
            return "";
        }

        function main() {
            var scope = 'http://www.google.com/calendar/feeds/';
            if (!google.accounts.user.checkLogin(scope)) {
                google.accounts.user.login();
            } else {
                /*
                 * Retrieve all calendars
                 */
                // Create the calendar service object
                var calendarService = new google.gdata.calendar.CalendarService('GoogleInc-jsguide-1.0');

                // The default "allcalendars" feed is used to retrieve a list of all
                // calendars (primary, secondary and subscribed) of the logged-in user
                var feedUri = 'http://www.google.com/calendar/feeds/default/allcalendars/full';

                // The callback method that will be called when getAllCalendarsFeed() returns feed data
                var callback = function(result) {
                    // Obtain the array of CalendarEntry
                    var entries = result.feed.entry;
                    //for (var i = 0; i < entries.length; i++) {
                        var calendarEntry = entries[0];
                        var calendarTitle = calendarEntry.getTitle().getText();
                        alert('Calendar title = ' + calendarTitle);
                    //}
                }

                // Error handler to be invoked when getAllCalendarsFeed() produces an error
                var handleError = function(error) {
                    alert(error);
                }

                // Submit the request using the calendar service object
                calendarService.getAllCalendarsFeed(feedUri, callback, handleError);
            }
        }

        google.setOnLoadCallback(main);

    However, when I run this the page redirects me to the authentication page. After I authenticate it sends me back to my page and then quickly sends me back to the authentication page again. I've included alerts to check if the token is being set, and it doesn't seem to be working. Has anyone had this problem?
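
    For comparison (a sketch based on the Google Data JavaScript client samples, not a confirmed fix for this page), the login call in those samples is given the scope as well, so the token that comes back is tied to the feed being requested:

        var scope = 'http://www.google.com/calendar/feeds/';
        if (!google.accounts.user.checkLogin(scope)) {
            var token = google.accounts.user.login(scope);  // request a token for this scope
        }

    If checkLogin(scope) keeps returning false after the redirect, the branch above will keep bouncing back to the approval page, which matches the behaviour described.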

    Read the article

  • Implementation of MVC with SQLite and NSURLConnection, use cases?

    - by user324723
    I'm interested in knowing how others have implemented/designed database and web services in their iPhone apps and how they simplified it for the entire application. My application is dependent on these services and I can't figure out an efficient way to use them together, due to the (semi-)complexity of my requirements. My past attempts at combining them haven't been completely successful, or at least not optimal in my mind. I'm building a database-driven iPhone app that uses a relational database in SQLite and consumes web services based on missing content or user interaction. Like this hasn't been done before... right? Since I am using a relational database, any web service consumed requires normalization, parsing the result and persisting it to the database before it can be displayed in a table view controller. The application's UI consists of nested (nav controller) table views where a user can select a cell and be taken to the next table view, where it attempts to populate the table view's data source from the database. If nothing exists in the database then it will send a request via web services to download its content, thus download - parse - persist - query - display. Since the user has the ability to request a refresh of this data, it still requires the same process. Quickly describing what I've implemented and tried to run with:

    1st attempt - Used a singleton web service class that handled sending web service requests, parsing the result and returning it to the table view controller via delegate protocols. Once the controller received that data it would then be responsible for persisting it to the database and re-returning the result. I didn't like the idea of only preventing the case where the app delegate selector doesn't exist (released), causing the app to crash.

    2nd attempt - Used NSNotificationCenter for easy access to both database and web services, but later realized it was more complex due to adding and removing observers per view (which isn't advised anyway).

    Read the article

  • Programming exercises in Java inheritance for intern

    - by Tenner
    I work for a small software development team, working primarily in Java, for a very large company. Our new intern showed up sight-unseen (not uncommon in my company). He has some C++ experience but no Java. Worse, he's never worked with inheritance in C++. Our code has a great deal of abstraction and a heavy reliance on inheritance. We need to get him up to speed as quickly as possible. Of course the rest of the team is busy, and so we can't take the time out of our day to teach a one-student 200-level CS course. Instead, I'd like to give him an actual programming project to work on which highlights how classes, interfaces, method overrides, etc. work. I've had him look at Project Euler, but most of the solutions end up being procedural, and not object-oriented programs. Do any of you have any somewhat-straightforward (and relatively quick) projects which you would give to an intern in this situation? Or, any recent (or current) students have a school project they'd be willing to share? Anyone else had this experience?

    Read the article

  • Is there a FAST way to export and install an app on my phone, while signing it with my own keystore?

    - by Alexei Andreev
    So, I've downloaded my own application from the Market and installed it on my phone. Now, I am trying to install a temporary new version from Eclipse, but here is the message I get:

        Re-installation failed due to different application signatures.
        You must perform a full uninstall of the application. WARNING: This will remove the application data!
        Please execute 'adb uninstall com.applicationName' in a shell.
        Launch canceled!

    Now, I really really don't want to uninstall the application, because I will lose all my data. One solution I found is to export my application, creating a new .apk, and then install it via HTC Sync (probably a different program based on what phone you have). The problem is this takes a long time to do, since I need to enter the password for the keystore each time and then wait for HTC Sync. It's a pain in the ass! So the question is: is there a way to make Eclipse automatically use my keystore to sign the application (quickly and automatically)? Or perhaps to replace the debug keystore with my own? Or perhaps just tell it to remember the password, so I don't have to enter it every time...? Or some other way to solve this problem?

    Read the article

  • How would I go about sharing variables in a C++ class with Lua?

    - by Nicholas Flynt
    I'm fairly new to Lua. I've been working on trying to implement Lua scripting for logic in a game engine I'm putting together. I've had no trouble so far getting Lua up and running through the engine, and I'm able to call Lua functions from C and C functions from Lua. The way the engine works now, each Object class contains a set of variables that the engine can quickly iterate over to draw or process for physics. While game objects all need to access and manipulate these variables in order for the game engine itself to see any changes, they are free to create their own variables, and Lua is exceedingly flexible about this, so I don't foresee any issues.

    Anyway, currently the game-engine side of things is sitting in C land, and I really want it to stay there for performance reasons. So in an ideal world, when spawning a new game object, I'd need to be able to give Lua read/write access to this standard set of variables as part of the Lua object's base class, which its game logic could then proceed to run wild with. So far, I'm keeping two separate tables of objects in place -- Lua spawns a new game object which adds itself to a numerically indexed global table of objects, and then proceeds to call a C++ function, which creates a new GameObject class and registers the Lua index (an int) with the class. So far so good: C++ functions can now see the Lua object and easily perform operations or call functions in Lua land using dostring.

    What I need to do now is take the C++ variables, part of the GameObject class, and expose them to Lua, and this is where Google is failing me. I've encountered a very nice method here which details the process using tags, but I've read that this method is deprecated in favor of metatables. What is the ideal way to accomplish this? Is it worth the hassle of learning how to pass class definitions around using libBind or some equivalent method, or is there a simple way I can just register each variable (once, at spawn time) with the global Lua object? What's the "current" best way to do this, as of Lua 5.1.4?
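
    Since the post asks specifically about the metatable route under Lua 5.1, here is a bare-bones sketch of that approach (GameObject and its two fields are stand-ins, not the engine's real class): a userdata holds a pointer to the existing C++ object, and __index/__newindex metamethods route field reads and writes back into C++.

        #include <lua.hpp>
        #include <cstring>

        struct GameObject { float x, y; };

        static int gameobject_index(lua_State* L) {
            GameObject* obj = *static_cast<GameObject**>(lua_touserdata(L, 1));
            const char* key = luaL_checkstring(L, 2);
            if (std::strcmp(key, "x") == 0)      lua_pushnumber(L, obj->x);
            else if (std::strcmp(key, "y") == 0) lua_pushnumber(L, obj->y);
            else                                 lua_pushnil(L);
            return 1;
        }

        static int gameobject_newindex(lua_State* L) {
            GameObject* obj = *static_cast<GameObject**>(lua_touserdata(L, 1));
            const char* key = luaL_checkstring(L, 2);
            lua_Number value = luaL_checknumber(L, 3);
            if (std::strcmp(key, "x") == 0)      obj->x = static_cast<float>(value);
            else if (std::strcmp(key, "y") == 0) obj->y = static_cast<float>(value);
            return 0;
        }

        // Push a userdata that wraps an existing C++ object and wire up its metatable.
        void push_gameobject(lua_State* L, GameObject* obj) {
            GameObject** udata = static_cast<GameObject**>(lua_newuserdata(L, sizeof(GameObject*)));
            *udata = obj;
            if (luaL_newmetatable(L, "Engine.GameObject")) {   // created once, reused afterwards
                lua_pushcfunction(L, gameobject_index);
                lua_setfield(L, -2, "__index");
                lua_pushcfunction(L, gameobject_newindex);
                lua_setfield(L, -2, "__newindex");
            }
            lua_setmetatable(L, -2);
        }

    From Lua the object then reads and writes like a plain table (obj.x = obj.x + 1), while the authoritative values stay in the C++ struct the engine iterates over.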

    Read the article

  • Extracting function declarations from a PHP file

    - by byronh
    I'm looking to create an on-site API reference for my framework and application. Basically, say I have a class file model.class.php:

        class Model extends Object {

            ... code here ...

            // Separates results into pages.
            // Returns Paginator object.
            final public function paginate($perpage = 10) {
                ... more code here ...
            }
        }

    and I want to be able to generate a reference that my developers can refer to quickly in order to know which functions are available to be called. They only need to see the comments directly above each function and the declaration line. Something like this (similar to a C++ header file):

        // Separates results into pages.
        // Returns Paginator object.
        final public function paginate($perpage = 10);

    I've done some research and this answer looked pretty good (using Reflection classes), however, I'm not sure how I can keep the comments in. Any ideas?

    EDIT: Sorry, but I want to keep the current comment formatting. Myself and the people who are working on the code hate the verbosity associated with PHPDocumentor. Not to mention a comment-rewrite of the entire project would take years, so I want to preserve just the // comments in plaintext.
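
    Since ReflectionMethod::getDocComment() only returns /** */ docblocks, one workable sketch (assumptions: the // comments sit on the lines immediately above each declaration, the declaration fits on one line, and the class file has already been included) is to use Reflection for the method list and line numbers, then read the raw source for the comments and the signature:

        <?php
        function describeClass($className)
        {
            $class = new ReflectionClass($className);
            $lines = file($class->getFileName());
            $out   = array();

            foreach ($class->getMethods() as $method) {
                if ($method->getDeclaringClass()->getName() !== $className) {
                    continue;   // skip inherited methods
                }
                $start = $method->getStartLine() - 1;   // 0-based index of the declaration line
                $block = array(rtrim(trim($lines[$start]), "{ \t") . ';');

                // walk upwards, collecting the // comments immediately above the method
                for ($i = $start - 1; $i >= 0 && preg_match('~^\s*//~', $lines[$i]); $i--) {
                    array_unshift($block, trim($lines[$i]));
                }
                $out[] = implode("\n", $block);
            }
            return implode("\n\n", $out);
        }

        echo describeClass('Model');

    The output keeps the // comments exactly as written, which sidesteps the PHPDocumentor-style rewrite entirely.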

    Read the article

  • URLLoader.load() issue when using the same URLRequest

    - by Rudy
    Hello, I have an issue with my event listeners on the URLLoader, but this issue happens in IE, not in FF.

        public function getUploadURL():void {
            var request:URLRequest = new URLRequest();
            request.url = getPath();
            request.method = URLRequestMethod.GET;
            _loader = new URLLoader();
            _loader.dataFormat = URLLoaderDataFormat.TEXT;
            _loader.addEventListener(Event.COMPLETE, getBaseURL);
            _loader.load(request);
        }

        private function getBaseURL(event:Event):void {
            _loader.removeEventListener(Event.COMPLETE, getBaseURL);
        }

    The issue is that my getBaseURL gets executed automatically after I have executed the code at least once, but that is the case only in IE. What happens is: I call my getUploadURL and make sure the server sends an event that will result in an Event.COMPLETE, so getBaseURL gets executed and the listener is removed. If I call the getUploadURL method with the wrong path, I do not get an Event.COMPLETE but some other event, and getBaseURL should not be executed. That is the correct behavior in Firefox. In IE, it looks like the load() method does not actually call the server; it jumps directly to getBaseURL() for the Event.COMPLETE. I checked willTrigger() and hasEventListener() on _loader before assigning the new URLLoader, and it turns out the listener has been properly removed. I hope I make sense; I simplified my code. To sum up quickly: in Firefox it works well, but in IE the first call works while the second call won't really call the .load() method; it seems to use the previously stored result from the first call. I hope someone can please help me. Thank you, Rudy
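
    One detail worth checking, sketched as a possible (not confirmed) explanation: "it seems to use the previously stored result from the first call" is exactly how IE's aggressive caching of GET requests tends to look from the Flash plugin, and the usual workaround is to make every request URL unique (or have the server send no-cache headers):

        // cache-busting variant of the request setup above
        request.url = getPath() + "?nocache=" + new Date().getTime();
        request.method = URLRequestMethod.GET;

    If the symptom disappears with the unique URL, the listener code itself was never the problem.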

    Read the article

  • Using the <h2> as the title after sent?

    - by Delan Azabani
    Currently, I have a semi-dynamic system for my website's pages. head.php has all the tags before the content body, foot.php the tags after. Any page using the main theme will include head.php, then write the content, then output foot.php. Currently, to be able to set the title, I quickly set a variable $title before inclusion:

        <?php
        $title = 'Untitled document';
        include_once '../head.php';
        ?>
        <h2>Untitled document</h2>
        Content here...
        <?php include_once '../foot.php'; ?>

    so that in head.php...

        <title><?php echo $title; ?> | Delan Azabani</title>

    However, this seems kludgy as the title is, most of the time, the same as the content of the h2 tag. Is there a way I can get PHP to read the content of h2, track back and insert it, then send the whole thing at the end?
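
    One way to get that effect (a sketch of the idea, not the site's actual files) is output buffering: capture the page body first, pull the title out of its first <h2>, and only then emit head.php, the buffered body and foot.php in order:

        <?php
        ob_start();
        ?>
        <h2>Untitled document</h2>
        Content here...
        <?php
        $body = ob_get_clean();

        $title = 'Untitled document';
        if (preg_match('~<h2[^>]*>(.*?)</h2>~s', $body, $m)) {
            $title = trim(strip_tags($m[1]));
        }

        include_once '../head.php';   // head.php can keep echoing $title as it does now
        echo $body;
        include_once '../foot.php';

    Nothing is sent to the browser until ob_get_clean() has run, so the <title> in head.php can still be built from content that appears "later" in the file.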

    Read the article

  • How do I make this nested for loop, testing sums of cubes, more efficient?

    - by Brian J. Fink
    I'm trying to iterate through all the combinations of pairs of positive long integers in Java and testing the sum of their cubes to discover if it's a Fibonacci number. I'm currently doing this by using the value of the outer loop variable as the inner loop's upper limit, with the effect being that the outer loop runs a little slower each time. Initially it appeared to run very quickly--I was up to 10 digits within minutes. But now, after 2 full days of continuous execution, I'm only somewhere in the middle range of 15 digits. At this rate it may end up taking a whole year just to finish running this program. The code for the program is below:

        import java.lang.*;
        import java.math.*;

        public class FindFib {
            public static void main(String args[]) {
                long uLimit = 9223372036854775807L; //long maximum value
                BigDecimal PHI = new BigDecimal(1D + Math.sqrt(5D) / 2D); //Golden Ratio
                for (long a = 1; a <= uLimit; a++)      //Outer Loop, 1 to maximum
                    for (long b = 1; b <= a; b++)       //Inner Loop, 1 to current outer
                    {
                        //Cube the numbers and add
                        BigDecimal c = BigDecimal.valueOf(a).pow(3).add(BigDecimal.valueOf(b).pow(3));
                        System.out.print(c + " ");      //Output result
                        //Upper and lower limits of interval for Mobius test: [c*PHI-1/c, c*PHI+1/c]
                        BigDecimal d = c.multiply(PHI).subtract(BigDecimal.ONE.divide(c, BigDecimal.ROUND_HALF_UP)),
                                   e = c.multiply(PHI).add(BigDecimal.ONE.divide(c, BigDecimal.ROUND_HALF_UP));
                        //Mobius test: if integer in interval (floor values unequal) Fibonacci number!
                        if (d.toBigInteger().compareTo(e.toBigInteger()) != 0)
                            System.out.println();       //Line feed
                        else
                            System.out.print("\r");     //Carriage return instead
                    }
                //Display final message
                System.out.println("\rDone. ");
            }
        }

    Now, the use of BigDecimal and BigInteger was deliberate; I need them to get the necessary precision. Is there anything other than my variable types that I could change to gain better efficiency?
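
    Independent of the loop structure, the Fibonacci check itself can be made much cheaper. A sketch of the standard square test (not the poster's Mobius-interval approach): n is a Fibonacci number exactly when 5*n*n + 4 or 5*n*n - 4 is a perfect square, which needs only BigInteger arithmetic and no BigDecimal division per candidate. (BigInteger.sqrt() requires Java 9 or later.)

        import java.math.BigInteger;

        class FibTest {
            static final BigInteger FIVE = BigInteger.valueOf(5);
            static final BigInteger FOUR = BigInteger.valueOf(4);

            static boolean isFibonacci(BigInteger n) {
                BigInteger fiveNSquared = FIVE.multiply(n).multiply(n);
                return isPerfectSquare(fiveNSquared.add(FOUR))
                    || isPerfectSquare(fiveNSquared.subtract(FOUR));
            }

            static boolean isPerfectSquare(BigInteger x) {
                if (x.signum() < 0) return false;
                BigInteger root = x.sqrt();          // integer square root
                return root.multiply(root).equals(x);
            }
        }

    Dropping the System.out.print of every candidate sum would also help noticeably, since console output inside the inner loop costs far more than the arithmetic around it.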

    Read the article

  • What are good strategies for organizing single class per query service layer?

    - by KallDrexx
    Right now my Asp.net MVC application is structured as Controller - Services - Repositories. The services consist of aggregate root classes that contain methods. Each method is a specific operation that gets performed, such as retrieving a list of projects, adding a new project, or searching for a project etc. The problem with this is that my service classes are becoming really fat with a lot of methods. As of right now I am separating methods out into categories separated by #region tags, but this is quickly becoming out of control. I can definitely see it becoming hard to determine what functionality already exists and where modifications need to go. Since each method in the service classes are isolated and don't really interact with each other, they really could be more stand alone. After reading some articles, such as this, I am thinking of following the single query per class model, as it seems like a more organized solution. Instead of trying to figure out what class and method you need to call to perform an operation, you just have to figure out the class. My only reservation with the single query per class method is that I need some way to organize the 50+ classes I will end up with. Does anyone have any suggestions for strategies to best organize this type of pattern?
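
    For a concrete picture of the single-operation-per-class shape (all names below are illustrative, not from the project), the organisation usually comes from a tiny shared interface plus feature folders/namespaces doing the job the #region blocks do today:

        using System.Collections.Generic;

        public interface IQuery<TResult>
        {
            TResult Execute();
        }

        // stand-in domain types so the sketch compiles
        public class Project { public string Name { get; set; } }
        public interface IProjectRepository { IList<Project> GetAll(); }

        // lives in e.g. Features/Projects/GetProjectList.cs
        public class GetProjectList : IQuery<IList<Project>>
        {
            private readonly IProjectRepository _repository;

            public GetProjectList(IProjectRepository repository)
            {
                _repository = repository;
            }

            public IList<Project> Execute()
            {
                return _repository.GetAll();
            }
        }

    With 50+ such classes, a folder/namespace per feature (Projects, Search, and so on) is what keeps them findable, and each controller takes the specific query or command it needs via constructor injection rather than a fat service.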

    Read the article

  • What's the next steps for moving from appengine to full django?

    - by tomcritchlow
    Hey guys, I'm super new to programming and I've been using appengine to help me learn python and general coding. I'm getting better quickly and I'm loving it all the way :) Appengine was awesome for allowing me to just dive into writing my app and getting something live that works (see http://www.7bks.com/). But I'm realising that the longer I continue to learn on appengine the more I'm constraining myself and locking myself into a single system. I'd like to move to developing on full django (since django looks super cool!). What are my next steps?

    To give you a feel for my level of knowledge:

        - I'm not a unix user
        - I'm not familiar with command line controls (I still use appengine/python completely via the appengine SDK)
        - I've never programmed in anything other than python, anywhere other than appengine
        - I know the word SQL, but don't know what MySQL is really or how to use it

    So, specifically:

        - What are the skills I need to learn to get up and running with full django/python?
        - If I'm going to host somewhere else I suppose I'll need to learn some sysadmin type skills (maybe even unix?).
        - Is there anywhere that offers easy hosting (like appengine) but that supports django? I hear such great things about heroku I'm considering switching to RoR and going there.

    I appreciate that I'm likely not quite ready to move away from appengine just yet but I'm a fiercely passionate learner (http://www.7bks.com/blog/179001) and would love it if I knew all the steps I needed to learn so I could set about learning them. At the moment, I don't even know what the steps are I need to learn! Thank you very much. Sorry this isn't a specific programming question but I've looked around and haven't found a good how-to for someone of my level of experience and I think others would appreciate a good roadmap for the things we need to learn to get up and running. Thanks, Tom

    PS - if anyone is in London and fancies showing me the ropes in person that would be super awesome :)

    Read the article

  • JVM GC demote object to eden space?

    - by Kevin
    I'm guessing this isn't possible...but here goes. My understanding is that eden space is cheaper to collect than old gen space, especially when you start getting into very large heaps. Large heaps tend to come up with long running applications (server apps) and server apps a lot of the time want to use some kind of caches. Caches with some kind of eviction (LRU) tend to defeat some assumptions that GC makes (temporary objects die quickly). So cache evictions end up filling up old gen faster than you'd like and you end up with a more costly old gen collection. Now, it seems like this sort of thing could be avoided if java provided a way to mark a reference as about to die (delete keyword)? The difference between this and c++ is that the use is optional. And calling delete does not actually delete the object, but rather is a hint to the GC that it should demote the object back to Eden space (where it will be more easily collected). I'm guessing this feature doesn't exist, but, why not (is there a reason it's a bad idea)?
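
    There is no such demotion hint, and anything already promoted still has to wait for an old-generation collection. For the cache case specifically, a common compromise is to let the collector drive eviction itself via soft references, so the cache doesn't fight the GC's assumptions with its own LRU policy. A minimal sketch (illustrative, not a recommendation for any particular cache library):

        import java.lang.ref.SoftReference;
        import java.util.concurrent.ConcurrentHashMap;

        class SoftCache<K, V> {
            private final ConcurrentHashMap<K, SoftReference<V>> map = new ConcurrentHashMap<>();

            void put(K key, V value) {
                map.put(key, new SoftReference<>(value));
            }

            V get(K key) {
                SoftReference<V> ref = map.get(key);
                if (ref == null) return null;
                V value = ref.get();
                if (value == null) {
                    map.remove(key, ref);   // the value was collected; drop the stale entry
                }
                return value;
            }
        }

    It doesn't demote anything back to eden (nothing can), but it does stop the old generation from being forced to hold entries the cache no longer cares about.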

    Read the article

  • Delivering activity feed items in a moderately scalable way

    - by sotangochips
    The application I'm working on has an activity feed where each user can see their friends' activity (much like Facebook). I'm looking for a moderately scalable way to show a given user's activity stream on the fly. I say 'moderately' because I'm looking to do this with just a database (PostgreSQL) and maybe memcached. For instance, I want this solution to scale to 200k users, each with 100 friends. Currently, there is a master activity table that stores the rendered html for the given activity (Jim added a friend, George installed an application, etc.). This master activity table keeps the source user, the html, and a timestamp. Then, there's a separate ('join') table that simply keeps a pointer to the person who should see this activity in their friend feed, and a pointer to the object in the main activity table. So, if I have 100 friends and I do 3 activities, then the join table will grow to 300 items. Clearly this table will grow very quickly. It has the nice property, though, that fetching activity to show to a user takes a single (relatively) inexpensive query. The other option is to just keep the main activity table and query it by saying something like:

        select * from activity where source_user in (1, 2, 44, 2423, ... my friend list)

    This has the disadvantage that you're querying for users who may never be active, and as your friend list grows, this query can get slower and slower. I see the pros and the cons of both sides, but I'm wondering if some SO folks might help me weigh the options and suggest one way or the other. I'm also open to other solutions, though I'd like to keep it simple and not install something like CouchDB, etc. Many thanks!
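
    For what it's worth, the fan-out ('join' table) design stays cheap to read as long as it is indexed by recipient and time and old rows are pruned; a PostgreSQL sketch with invented table and column names:

        -- read path: one index-backed query per feed view
        CREATE INDEX activity_feed_recipient_idx
            ON activity_feed (recipient_id, created_at DESC);

        SELECT a.id, a.html, a.created_at
        FROM activity_feed f
        JOIN activity a ON a.id = f.activity_id
        WHERE f.recipient_id = $1
        ORDER BY f.created_at DESC
        LIMIT 50;

        -- write path grows one row per friend per activity, so prune what nobody pages back to
        DELETE FROM activity_feed
        WHERE created_at < now() - interval '90 days';

    If memcached is added later, caching just the rendered first page per user (invalidated when a new row lands in that user's feed) is usually enough on top of this.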

    Read the article

  • Interesting LinqToSql behaviour

    - by Ben Robinson
    We have a database table that stores the location of some wave files plus related metadata. There is a foreign key (employeeid) on the table that links to an employee table. However, not all wav files relate to an employee; for these records employeeid is null. We are using LINQ to SQL to access the database, and the query to pull out all non-employee-related wav file records is as follows:

        var results = from Wavs in db.WaveFiles
                      where Wavs.employeeid == null
                      select Wavs;

    Except this returns no records, despite the fact that there are records where employeeid is null. On profiling SQL Server I discovered the reason no records are returned is that LINQ to SQL turns it into SQL that looks very much like:

        SELECT Field1, Field2 //etc
        FROM WaveFiles
        WHERE 1=0

    Obviously this returns no rows. However, if I go into the DBML designer and remove the association and save, all of a sudden the exact same LINQ query turns into:

        SELECT Field1, Field2 //etc
        FROM WaveFiles
        WHERE EmployeeID IS NULL

    i.e. if there is an association then LINQ to SQL assumes that all records have a value for the foreign key (even though it is nullable and the property appears as a nullable int on the WaveFile entity) and as such deliberately constructs a WHERE clause that will return no records. Does anyone know if there is a way to keep the association in LINQ to SQL but stop this behaviour? A workaround I can think of quickly is to have a calculated field called IsSystemFile and set it to 1 if employeeid is null and 0 otherwise. However, this seems like a bit of a hack to work around strange behaviour of LINQ to SQL, and I would rather do something in the DBML file or define something on the foreign key constraint that will prevent this behaviour.

    Read the article

  • Building "isolated" and "automatically updated" caches (java.util.List) in Java.

    - by Aidos
    Hi Guys, I am trying to write a framework which contains a lot of short-lived caches created from a long-living cache. These short-lived caches need to be able to return their entire contents, which is a clone of the original long-living cache. Effectively what I am trying to build is a level of transaction isolation for the short-lived caches. The user should be able to modify the contents of the short-lived cache, but changes to the long-living cache should not be propagated through (there is also a case where the changes should be pushed through, depending on the cache type). I will do my best to try and explain:

        master-cache contains: [A,B,C,D,E,F]
        temporary-cache created with state [A,B,C,D,E,F]
        1) temporary-cache adds item G: [A,B,C,D,E,F]
        2) temporary-cache removes item B: [A,C,D,E,F]
        master-cache contains: [A,B,C,D,E,F]
        3) master-cache adds items [X,Y,Z]: [A,B,C,D,E,F,X,Y,Z]
        temporary-cache contains: [A,C,D,E,F]

    Things get even harder when the values in the items can change and shouldn't always be updated (so I can't even share the underlying object instances; I need to use clones). I have implemented the simple approach of just creating a new instance of the List using the standard Collection constructor on ArrayList, but when you get out to about 200,000 items the system just runs out of memory. I know the value of 200,000 is excessive to iterate, but I am trying to stress my code a bit. I had thought that it might be possible to somehow "proxy" the list, so the temporary-cache uses the master-cache and stores all of its changes (effectively a Memento for each change), however that quickly becomes a nightmare when you want to iterate the temporary-cache or retrieve an item at a specific index. Also, given that I want some modifications to the contents of the list to come through (depending on the type of the temporary-cache, whether it is "auto-update" or not), I get completely out of my depth. Any pointers to techniques or data-structures or just general concepts to try and research will be greatly appreciated. Cheers, Aidos
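
    One data-structure direction worth researching, sketched below with invented names (it is not the framework's code and it ignores the lazy per-element cloning the values would need): an overlay cache keeps a reference to the shared base list plus its own additions and removals, so creating a temporary cache is O(1) instead of an up-front copy of 200,000 entries, and the "auto-update" flavour falls out of whether snapshot() re-reads the live master or a frozen copy of it.

        import java.util.*;

        class OverlayCache<T> {
            private final List<T> base;                     // shared, treated as read-only here
            private final Set<T> removed = new HashSet<>();
            private final List<T> added = new ArrayList<>();

            OverlayCache(List<T> base) { this.base = base; }

            void add(T item)    { added.add(item); }
            void remove(T item) { if (!added.remove(item)) removed.add(item); }

            List<T> snapshot() {                            // materialise the current view on demand
                List<T> view = new ArrayList<>(base.size() + added.size());
                for (T item : base) {
                    if (!removed.contains(item)) view.add(item);
                }
                view.addAll(added);
                return view;
            }
        }

    Indexed access and iteration then work against the materialised snapshot, which only costs memory while a caller actually needs the full view.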

    Read the article
