Search Results

Search found 1274 results on 51 pages for 'pros'.


  • HTML5 game programming style

    - by fnx
    I am currently trying to learn JavaScript in the form of HTML5 games. The stuff I've done so far isn't too fancy since I'm still a beginner. My biggest concern so far has been that I don't really know what the best way to code is, since I don't know the pros and cons of different methods, nor have I found any good explanations of them. So far I've been using the worst (and probably easiest) method of all (I think), since I'm just starting out, for example like this:

        var canvas = document.getElementById("canvas");
        var ctx = canvas.getContext("2d");
        var width = 640;
        var height = 480;
        var player = new Player("pic.png", 100, 100, ...);
        // also some other global vars...

        function Player(imgSrc, x, y, ...) {
            this.sprite = new Image();
            this.sprite.src = imgSrc;
            this.x = x;
            this.y = y;
            ...
        }
        Player.prototype.update = function() {
            // blah blah...
        };
        Player.prototype.draw = function() {
            // yada yada...
        };

        function GameLoop() {
            player.update();
            player.draw();
            setTimeout(GameLoop, 1000/60);
        }

    However, I've seen a few examples on the internet that look interesting, but I don't know how to properly code in these styles, nor do I know if there are names for them. These might not be the best examples, but hopefully you'll get the point.

    1:

        Game = {
            variables: {
                width: 640,
                height: 480,
                stuff: value
            },
            init: function(args) {
                // some stuff here
            },
            update: function(args) {
                // some stuff here
            },
            draw: function(args) {
                // some stuff here
            },
        };
        // from http://codeincomplete.com/posts/2011/5/14/javascript_pong/

    2:

        function Game() {
            this.Initialize = function () {
            }
            this.LoadContent = function () {
                this.GameLoop = setInterval(this.RunGameLoop, this.DrawInterval);
            }
            this.RunGameLoop = function (game) {
                this.Update();
                this.Draw();
            }
            this.Update = function () {
                // update
            }
            this.Draw = function () {
                // draw game frame
            }
        }
        // from http://www.felinesoft.com/blog/index.php/2010/09/accelerated-game-programming-with-html5-and-canvas/

    3:

        var engine = {};
        engine.canvas = document.getElementById('canvas');
        engine.ctx = engine.canvas.getContext('2d');
        engine.map = {};
        engine.map.draw = function() {
            // draw map
        }
        engine.player = {};
        engine.player.draw = function() {
            // draw player
        }
        // from http://that-guy.net/articles/

    So I guess my questions are: Which is the most CPU-efficient, and is there any difference between these styles at runtime? Which one allows for easy expandability? Which one is the safest, or at least hardest to hack? Are there any good websites where stuff like this is explained? Or does it all come down to just personal preference? :)
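
    For reference, a minimal sketch of style 1 (a single Game object literal) driving the Player constructor from above, using requestAnimationFrame instead of setTimeout. It assumes Player.draw accepts a drawing context rather than using a global; the names are placeholders, not taken from any framework:

        var Game = {
            canvas: null,
            ctx: null,
            width: 640,
            height: 480,
            player: null,

            init: function () {
                this.canvas = document.getElementById("canvas");
                this.ctx = this.canvas.getContext("2d");
                this.player = new Player("pic.png", 100, 100);
                this.loop();                                   // kick off the main loop
            },

            update: function () {
                this.player.update();
            },

            draw: function () {
                this.ctx.clearRect(0, 0, this.width, this.height);
                this.player.draw(this.ctx);                    // assumes draw(ctx), not a global ctx
            },

            loop: function () {
                this.update();
                this.draw();
                // requestAnimationFrame lets the browser schedule the next frame
                window.requestAnimationFrame(this.loop.bind(this));
            }
        };

        window.onload = function () { Game.init(); };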

    Read the article

  • Is return-type-(only)-polymorphism in Haskell a good thing?

    - by dainichi
    One thing that I've never quite come to terms with in Haskell is how you can have polymorphic constants and functions whose return type cannot be determined by their input type, like class Foo a where foo :: Int -> a. Some of the reasons that I do not like this: Referential transparency: "In Haskell, given the same input, a function will always return the same output", but is that really true? read "3" returns 3 when used in an Int context, but throws an error when used in a, say, (Int,Int) context. Yes, you can argue that read is also taking a type parameter, but the implicitness of the type parameter makes it lose some of its beauty in my opinion. Monomorphism restriction: One of the most annoying things about Haskell. Correct me if I'm wrong, but the whole reason for the MR is that computation that looks shared might not be, because the type parameter is implicit. Type defaulting: Again, one of the most annoying things about Haskell. It happens e.g. if you pass the result of functions polymorphic in their output to functions polymorphic in their input. Again, correct me if I'm wrong, but this would not be necessary without functions whose return type cannot be determined by their input type (and polymorphic constants). So my question is (running the risk of being stamped as a "discussion question"): Would it be possible to create a Haskell-like language where the type checker disallows these kinds of definitions? If so, what would be the benefits/disadvantages of that restriction? I can see some immediate problems: If, say, 2 only had the type Integer, 2/3 wouldn't type check anymore with the current definition of /. But in this case, I think type classes with functional dependencies could come to the rescue (yes, I know that this is an extension). Furthermore, I think it is a lot more intuitive to have functions that can take different input types than to have functions that are restricted in their input types while we just pass polymorphic values to them. The typing of values like [] and Nothing seems to me like a tougher nut to crack. I haven't thought of a good way to handle them. I doubt I am the first person to have had thoughts like these. Does anybody have links to good discussions about this Haskell design decision and the pros/cons of it?

    Read the article

  • High-Powered Sites for low Cost

    - by HighAltitudeCoder
    Ahh, I am experiencing the intimidation of my very first post - visible to the whole world. Ok, here goes. This first post is nothing exceptional. It is simply a recommendation based (fittingly, I suppose) upon the job search you may be gearing up for. I find myself in this very situation right now. And I will take my own recommendation after posting this entry. Job-seekers: To the left you will notice two links under "Recommended Learning". I have found these links to be invaluable when it comes to re-tooling, re-familiarizing, or otherwise resharpening my skills when looking for that next job. Often, you will find job postings with the text (usually posted after a laborious list of qualifications indicating the company's desire to hire candidates who know what they are doing): "...Looking for a candidate who can hit the ground running...". The interesting thing about this to me is that I've encountered many individuals who, after speaking and working with them for some time, I've realized are perfectly capable of hitting the ground running - and FAST. But what if they speed off in the wrong direction? The next time you spearhead a major task in your job, ask yourself: Am I headed in the wrong direction? There are many ways to do this. In fact, I've found in this new field there are more tempting ways to steer your project in the wrong direction than there are good ones. I don't want to suggest that every one of my posts will fall into the "right direction" category; however, I do think a healthy dose of introspection about the pros and cons will always be beneficial before you set off. That said, allow me to expound on the previously mentioned links. These websites are invaluable. They demonstrate the capabilities of existing as well as new and upcoming tools available in several IDEs. I've viewed many tutorials in LearnVisualStudio.NET, and only one or two so far in TrainingSpot; however, I've been delighted by their simplicity and straightforward approach to proper usage of the particular tool or concept being discussed. They have not (so far in my experience) demonstrated ways of using the tools that become cumbersome, impractical, or error-prone. Each website has step-by-step videos that can be paused and replayed, and most importantly, they are done in real time. As the author is typing, the viewer gets to experience the coding experience from a first-person perspective, including syntax errors, unexpected behaviors, IDE setup idiosyncrasies, everything. A subtle value I've gained from these videos is that a certain degree of confusion and introspection is normal when working with new tools and exploring new paths. They (as well as your own experience) are not to be feared, but enjoyed. I highly recommend them. Good work, guys!

    Read the article

  • The type of programmer I want to be [closed]

    - by Aventinus_
    I'm an undergraduate software engineering student, although I've decided that pure programming is what I want to do for the rest of my life. The thing is that programming is a vast field, and although most of its aspects are extremely interesting, sooner or later I'll have to choose one (?) to focus on. I have several ideas for small projects I'd like to develop this summer, keeping in mind that this will gain me some experience and, in the best scenario, some cash. But the most important reason I'd like to develop something close to “professional” is to give myself direction on what I want to do as a programmer. One path is that of the Web Programmer. I enjoy PHP and MySQL, as well as HTML and CSS, although I don't really like ASP.NET. I can see myself writing web apps using the above technologies, as well as XML and JavaScript. I also have a neat idea for a Facebook app. The other path is that of the Desktop Programmer. This is a little more complicated because I really, really enjoy high-level languages such as Java and Python but not the low-level ones, such as C. I have used both Linux and Windows for the last 6 years and I like their latest DEs (meaning GNOME Shell and Metro). I can see myself writing desktop applications for both OSs as long as it means high-level programming. Ideally I'd like to be able to help with the development of GNOME. The last path that interests me is the path of the Smartphone Programmer. I have created some sample applications on Android and, thanks to Java, found it quite an interesting experience. I can also see myself as an independent smartphone developer. These 3 paths seem equally interesting at the moment, due to the shallowness of my experience, I guess. I know that I should spend time with all of them and then choose the right one for me, but I'd like to know what the pros and cons are in terms of learning curve, fun, job finding and of course financial rewards with each of these paths. I have a fair or basic understanding of the languages/technologies I described earlier, and this question will help me choose where to focus, at least for now.

    Read the article

  • How do you feel about being asked to code during an interview?

    - by Mystere Man
    I have seen a lot of comments about good interview questions and puzzles that potential developers are required to solve during the interview process. I have personally had several interviews in which the interviewer has asked me to write some piece of code or solve a problem during the interview, and I have always performed very poorly in these "tests". The reason is simple: as a developer who spends my days talking to computers, I find I have to prepare myself and "switch gears" to be in "interview mode". I prepare myself to make a good impression. When I'm programming, I'm very focused and am totally different from when I'm being "interpersonal". I just can't get into "the zone" when I'm also having to be a charming and witty potential employee. I feel that by asking a developer to prove his skills during an interview, all you're doing is finding out if they can code under pressure, and at the drop of a hat. It has almost no ability to determine how you would perform in a "real life" development situation. Maybe, if you're looking for someone who can code and chat at the same time, I can see how that would be beneficial. But I think you overlook potential candidates who simply do not perform well in such an artificial environment. While I appreciate that a potential employer wants to see what I can do, I don't think an interview is the place for such a test. I mean, suppose a job for an over-the-road trucker required that you drive while being interviewed. How does that really end well? So I'm curious as to what others think about such situations. Have you failed interviews because you were not in the right frame of mind? Have you failed to make a good interpersonal impression because you were too distracted trying to solve the problem? If you're a hiring manager, or someone who gives interviews, do you even think about such things? Is it really important that someone perform well in an interview? EDIT: To clarify, I'm not against testing applicants. My concern is about testing during an interview. See also: "What are the pros and cons for the employer of code questions during an interview?", which looks at this from the interviewer's point of view.

    Read the article

  • Ad-hoc reporting similar to Microstrategy/Pentaho - is OLAP really the only choice (is OLAP even sufficient)?

    - by TheBeefMightBeTough
    So I'm getting ready to develop an API in Java that will provide all dimensions, metrics, hierarchies, etc. to a user such that they can pick and choose what they want (say, e.g., dimensions of Location (a store) and Weekly, and the metric Product Sales $), provide their choices to the API, and have it spit out an object that contains the answer to their question (the object would probably be a set of cells). I don't even believe there will be much drill up/down. The data warehouse the API will interface with is in a standard form (FACT tables, dimensions, star schema format). My question is: is an OLAP framework such as Mondrian the only way to achieve something akin to ad-hoc reporting? I can envisage a really large Cube (or VirtualCube) that contains most of the dimensions and metrics the user could ever want, which would give the illusion of ad-hoc reporting. The problem is that there is a ton of setup to do (so much XML) to get the framework to work with the data. Further, it requires specific knowledge, such as MDX, and even more so learning the framework's peculiarities (the Mondrian API). Finally, I am not positive it will scale much better than simply making queries against a SQL database. OLAP to me feels like very old technology. Is performance really an issue anymore? The alternative I can think of would be dynamic SQL. If the existing tables in the data warehouse conform to a naming scheme (FACT_, DIM_, etc.), or if a very simple config file/database table existed that stored which tables are fact tables, which are dimensions, and what metrics are available, then couldn't the API read from that and assemble the appropriate SQL query? Would this necessarily be harder than learning MDX, Mondrian (or another OLAP framework), and creating all the cubes? In general, I feel that OLAP is at the same time too powerful (supports drill up/down, complex functions) and outdated, and I am reluctant to base my architecture on it. However, I am unsure whether the alternative(s), such as rolling my own ad-hoc reporting framework using dynamic SQL, would remove any complexity while still fulfilling requirements, both functional and non-functional (e.g., scalability; some FACT tables have many millions of rows). I also wonder about other techniques (e.g., Hive). Has anyone here tried to do ad-hoc reporting? Any advice? I expect this project to take a pretty long time (3 months min, but probably longer), so I just do not want to commit to an architecture without being absolutely sure of its pros and cons. Thanks so much.
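
    For what it's worth, a rough sketch of the dynamic-SQL idea, assuming a small hand-written config that maps logical dimensions and metrics to warehouse tables and columns. All table and column names below are made up, and it is shown in JavaScript only to keep the sketch short; the same shape translates directly to Java:

        // Hypothetical config describing the star schema.
        var schema = {
            fact: { table: "FACT_SALES", keys: { store: "store_id", week: "week_id" } },
            dimensions: {
                store: { table: "DIM_STORE", key: "store_id", label: "store_name" },
                week:  { table: "DIM_WEEK",  key: "week_id",  label: "week_label" }
            },
            metrics: {
                productSales: { column: "sales_amount", agg: "SUM" }
            }
        };

        // Assemble "SELECT <dim labels>, <aggregated metrics> FROM fact JOIN dims GROUP BY <dim labels>".
        function buildQuery(dims, metrics) {
            var selects = [], joins = [], groupBy = [];
            dims.forEach(function (d) {
                var dim = schema.dimensions[d];
                selects.push(dim.table + "." + dim.label);
                groupBy.push(dim.table + "." + dim.label);
                joins.push("JOIN " + dim.table + " ON " + dim.table + "." + dim.key +
                           " = " + schema.fact.table + "." + schema.fact.keys[d]);
            });
            metrics.forEach(function (m) {
                var met = schema.metrics[m];
                selects.push(met.agg + "(" + schema.fact.table + "." + met.column + ") AS " + m);
            });
            return "SELECT " + selects.join(", ") +
                   " FROM " + schema.fact.table + " " + joins.join(" ") +
                   " GROUP BY " + groupBy.join(", ");
        }

        // e.g. buildQuery(["store", "week"], ["productSales"])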

    Read the article

  • Development Approach: User Interface In or Domain Model Out?

    - by Berin Loritsch
    While I've never delivered anything using Smalltalk, my brief time playing with it has definitely left its mark. The only way to describe the experience is MVC the way it was meant to be. Essentially, all the heavy lifting for your application is done in the business objects (or domain model if you are so inclined). The standard controls are bound to the business objects in some way. For example, a text box is mapped to an object's field (the field itself is an object, so it's easy to do). A button would be mapped to a method. This is all done with a very simple and natural API. We don't have to think about binding objects, etc. It just works. Yet, in many newer languages and APIs you are forced to think from the outside in. First with C++ and MFC, and now with C# and WPF, Microsoft has gotten its developer world hooked on GUI builders where you build your application by implementing event handlers. Java Swing development isn't so different, only you are writing the code to instantiate the controls on the form yourself. For some projects, there may never even be a domain model--just event handlers. I've been in and around this model for most of my career. Each way forces you to think differently. With the Smalltalk approach, your domain is smart while your GUI is dumb. With the default Visual Studio approach, your GUI is smart while your domain model (if it exists) is rather anemic. Many developers that I work with see value in the Smalltalk approach, and try to shoehorn that approach into the Visual Studio environment. WPF has some dynamic binding features that make it possible, but there are limitations. Inevitably some code that belongs in the domain model ends up in the GUI classes. So, which way do you design/develop your code? Why? GUI first: user interaction is paramount. Domain first: I need to make sure the system is correct before we put a UI on it. There are pros and cons to either approach. Domain model fits in there with crystal cathedrals and pie in the sky. GUI fits in there with quick and dirty (sometimes really dirty). And for an added bonus: How do you make sure the code is maintainable?
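
    As a toy illustration of the "smart domain, dumb GUI" idea in plain JavaScript (the Account object and its field names are invented purely for the example, not taken from Smalltalk or WPF):

        // The domain object carries the behaviour and tells listeners when it changes.
        function Account(balance) {
            this.balance = balance;
            this.listeners = [];
        }
        Account.prototype.deposit = function (amount) {
            this.balance += amount;                           // business rule lives here
            this.listeners.forEach(function (fn) { fn(); });
        };
        Account.prototype.onChange = function (fn) {
            this.listeners.push(fn);
        };

        // The view is dumb: it only maps controls to the object's fields and methods.
        function bindAccountView(account, textEl, buttonEl) {
            var render = function () { textEl.textContent = account.balance; };
            account.onChange(render);
            buttonEl.addEventListener("click", function () { account.deposit(10); });
            render();
        }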

    Read the article

  • Should I use my own public API on my site (via JS)?

    - by newboyhun
    First of all, this question is rather different from other 'public API' questions like this one: Should a website use its own public API? Second, sorry for my English. You can find the question summarized at the bottom. What I want to achieve is a big website with a public API, so that people who like programming (like me) and like my website can replicate my website's data with a much better approach (of course with some restrictions). Almost everything could be used through the public API. Because of this, I was thinking about making the whole website AJAX-driven. There would be parts of the API which would be limited to my website (domain) only, like login and registration. There would be only an INTERFACE on the client side, which would use the public and private API to make this interface work. The website would be ONLY CLIENT SIDE; well, I mean, the website would only use AJAX to talk to the API. How do I imagine this? The website would be like a mobile application: the application only sends a request to a web server, which returns JSON; the application parses it and uses it to advance in the application (e.g. login). My thoughts: Pros: The whole website is built up by JavaScript, which means I don't need to transfer the HTML to the client, saving bandwidth (I hope). Anyone can use the data of my website to make their own cool things. (Is this a con or a pro? O_O) The public API is always in use, so I can see if there are any errors. Cons: Without JavaScript the website is unusable. Bad guys can easily load the server by requesting too much data (like 10,000 requests per second), but this can be countered by limiting it with some PHP code and logging. Probably much more work. So the question in a few words is: Should I build my website around my own API? Is it good to work only on the client side? Is this good for a big website? (E.g. Facebook; yeah, Facebook is a different story, but could it run with an 'architecture' like this?)
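
    A minimal sketch of what the "interface only" client side could look like, assuming a hypothetical JSON endpoint /api/items and using the modern fetch API (an XMLHttpRequest version would work the same way):

        // The page ships no data, only markup and this script; everything comes from the public API.
        function loadItems() {
            fetch("/api/items", { credentials: "same-origin" })   // the same API third parties can call
                .then(function (res) {
                    if (!res.ok) { throw new Error("API error " + res.status); }
                    return res.json();
                })
                .then(function (items) {
                    var list = document.getElementById("items");
                    list.innerHTML = "";
                    items.forEach(function (item) {
                        var li = document.createElement("li");
                        li.textContent = item.name;               // field name is hypothetical
                        list.appendChild(li);
                    });
                })
                .catch(function (err) {
                    console.error(err);                           // show a fallback message in real code
                });
        }

        document.addEventListener("DOMContentLoaded", loadItems);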

    Read the article

  • Reasons NOT to use JSF [closed]

    - by Vain Fellowman
    I am new to StackExchange, but I figured you would be able to help me. We're creating a new Java Enterprise application, replacing a legacy JSP solution. Due to many, many changes, the UI and parts of the business logic will be completely rethought and reimplemented. Our first thought was JSF, as it is the standard in Java EE. At first I had a good impression. But now I am trying to implement a functional prototype, and I have some really serious concerns about using it. First of all, it creates the worst, most cluttered, invalid pseudo-HTML/CSS/JS mix I've ever seen. It violates every single rule I learned in web development. Furthermore, it throws together what should never be so tightly coupled: layout, design, logic and communication with the server. I don't see how I would be able to extend this output comfortably, whether styling with CSS, adding UI candy (like configurable hot-keys, drag-and-drop widgets) or whatever. Secondly, it is way too complicated. Its complexity is outstanding. If you ask me, it's a poor abstraction of basic web technologies, crippled and useless in the end. What benefits do I get? None, if you think about it. Hundreds of components? I see tens of thousands of HTML/CSS snippets, tens of thousands of JavaScript snippets and thousands of jQuery plug-ins besides. It solves really many problems that we wouldn't have if we didn't use JSF, or the front-controller pattern at all. And lastly, I think we will have to start over in, say, 2 years. I don't see how I can implement all of our first GUI mock-up (besides, we have no JSF expert on our team). Maybe we could hack it together somehow. And then there will be more. I'm sure we could hack our hack. But at some point, we'll be stuck. Everything above the service tier is under JSF's control, and we will have to start over. My suggestion would be to implement a REST API using JAX-RS, then create an HTML5/JavaScript client with client-side MVC (or some flavor of MVC...). By the way, we will need the REST API anyway, as we are developing a partial Android front-end, too. I doubt that JSF is the best solution nowadays. As the Internet is evolving, I really don't see why we should use this 'rake'. Now, what are the pros/cons? How can I make my case for not using JSF? What are the strong points of using JSF over my suggestion?
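
    To make the suggested alternative concrete, a small sketch of the client side talking to a hypothetical JAX-RS resource at /api/orders; the endpoint, the fields, and the tiny hand-rolled model/view split are all assumptions for illustration, not an existing framework:

        // Model: plain functions over the REST API.
        var OrdersApi = {
            list: function () {
                return fetch("/api/orders").then(function (r) { return r.json(); });
            },
            create: function (order) {
                return fetch("/api/orders", {
                    method: "POST",
                    headers: { "Content-Type": "application/json" },
                    body: JSON.stringify(order)
                }).then(function (r) { return r.json(); });
            }
        };

        // View: renders whatever the model hands it; knows nothing about the server.
        function renderOrders(orders, tableEl) {               // tableEl is assumed to be a <table>
            tableEl.innerHTML = "";
            orders.forEach(function (o) {
                var row = tableEl.insertRow();
                row.insertCell().textContent = o.id;
                row.insertCell().textContent = o.customer;     // hypothetical fields
            });
        }

        // Controller: wires the two together; the same API would also serve the Android client.
        document.addEventListener("DOMContentLoaded", function () {
            var table = document.getElementById("orders");
            OrdersApi.list().then(function (orders) { renderOrders(orders, table); });
        });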

    Read the article

  • mdadm+zfs vs mdadm+lvm

    - by Alex
    This may be a naive question since I'm new to this and I cannot find any results about mdadm+zfs, but after some testing it seems it might work. The use case is a server with RAID6 for some data that is backed up somewhat infrequently. I think I'm well served by either ZFS or RAID6. The platform is Linux. Performance is secondary. So the two setups I am considering are: a RAID6 array plus regular LVM and ext4, or a RAID6 array plus ZFS (without redundancy). It's this second option that I don't see discussed at all. Why ZFS+RAID6? It's mainly because of the inability of ZFS to grow a raidz2 with new disks. You can replace disks with larger ones, I know, but not add another disk. You can accomplish 2-disk redundancy and ZFS disk growth using mdadm as the redundancy layer. Besides that main point (otherwise I could go directly to raidz2 without RAID under it), these are the pros/cons that I see for each option: ZFS has snapshots without preallocated space; LVM requires preallocation (this might no longer be true). ZFS has checksumming (very interested in this) and compression (nice bonus). LVM has online filesystem growth (ZFS can do it offline with export/mdadm --grow/import). LVM has encryption (ZFS-on-Linux does not); this is the only major con of this combo I see. I guess I could go RAID6+LVM+ZFS... seems too heavy, or not? So, to close with a proper question: 1) Is there anything that inherently discourages or precludes RAID6+ZFS? Does anyone have experience with a setup like this? 2) Are there possibilities for checksumming and compression that would make ZFS unnecessary (maintaining the possibility of filesystem growth)? Because the RAID6+LVM combo seems the sanctioned, tested way.

    Read the article

  • Macbook Pro 2010 Ethernet Jumbo Frame(9k MTU) Support?

    - by Troggy
    Has anyone been able to use jumbo frame support on their 2010 MacBook Pros? This is kind of negative news here, but I am seeing many reports that this is not available anymore due to Apple's choice of network card in the new machines. I cannot set my MTU above 1500 on my new 2010 MBP i7, but my old early-2008 MBP (Core 2) has the 9000 MTU setting available. Everything I have is set up to use jumbo frames, and I thought Apple kept that feature in their "pro" lineup. It sounds like the Mac Pro still has it. Did they decide to use a chipset that doesn't support it? I am trying to pin down some solid chipset numbers and the feature support. Maybe they just need to update the drivers? Is there some more official information about this feature? This might seem minor, but it is really frustrating if Apple removed this feature from their pro laptop line. From what I have read so far, it sounds like I am not alone in my frustrations with this. http://discussions.info.apple.com/message.jspa?messageID=12258067 http://discussions.apple.com/thread.jspa?messageID=12130158 Anyone have any experience or further knowledge about this issue ... beyond typing my question into Google and giving the top 5 results as answers? :)

    Read the article

  • Hosting a javascript api file for third party sites the way sharethis, uservoice, analytics do it.

    - by Dayson
    I'm preparing to launch a service soon which will provide third-party websites with a widget. The widget requires my JavaScript file in the website's code, exactly the same way services like analytics, UserVoice, ShareThis, GetClicky, etc. provide you with a JavaScript snippet to add to your page. Therefore, my JavaScript file is going to be hotlinked by tons of websites, which possibly receive a lot of requests too. I need advice/opinions on the following aspects: What's the right location for hosting this file? Should I use a sub-domain for it? I was thinking of something like http://api.myservice.com/js/foo.js. Remember, once websites start embedding this file, its location CANNOT change under any circumstances. Right now we can afford just one dedicated server, so I have minified my file, enabled gzip and plan to use some good cache-control headers through Apache. Also, in the near future when the requests pick up, I will use an HTTP proxy like Varnish. Is this a good plan for the near future? Should I be considering a CDN in the future (since we can't afford it now)? If so, how do I make sure we're prepared to migrate to it without breaking the service? Pros/cons of moving just this file to a CDN? Also, since it's just one JavaScript file (50 KB), is there any affordable CDN we could consider from the beginning? Any other word of advice I could use? Anything I shouldn't overlook at this stage which I would regret later? (Both in terms of the server and JavaScript/AJAX limitations.) Thanks in advance.
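
    For reference, the usual embed pattern for this kind of widget is an async loader snippet that customers paste into their pages, pointing at the stable URL from the question; the exact wording of the snippet is just one common pattern, not a requirement:

        // Loader that customers paste into their pages (inside a <script> tag).
        (function () {
            var s = document.createElement("script");
            s.src = "http://api.myservice.com/js/foo.js";   // the stable URL from above; must never change
            s.async = true;                                  // don't block the host page while it downloads
            var first = document.getElementsByTagName("script")[0];
            first.parentNode.insertBefore(s, first);
        })();

    On the caching side, one common compromise is a moderate max-age on the fixed foo.js itself (so updates still propagate) and far-future caching on anything versioned that it loads.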

    Read the article

  • Laptops with easy heat sink service?

    - by Niten
    Can you recommend a current laptop model with easy heat sink access – or better yet, a removable air intake filter – making it easy to periodically clean out the dust and lint that always packs up in these things? Every laptop I've owned has eventually overheated on account of a clogged heat sink. (I suppose it doesn't help that I have a cat who loves to hang out where I'm working, or that my laptop is almost always running.) One of the things I really love about my current system, a Dell Inspiron 1420n, is how easy it is to service its cooling system: whenever I notice the fan starting to work harder and the CPU temperature climbing higher than it should be, I merely have to unscrew a single panel from the bottom of the machine, clean out the heat sink, and then I'm good for another few months. Which current models of the "business laptop" variety offer similar easy cooling system service? I'm looking for something roughly along the lines of: 14- or 15-inch display Nehalem-based CPU Solid construction – magnesium chassis or better (like the Inspiron) TPM (for BitLocker) ideal, but not mandatory Docking adapter ideal, but not mandatory Good battery life For example, the ThinkPad T410 would have been my top choice, but it seems like it would be a serious chore to service its heat sink. For the current MacBook Pros it looks downright impossible. No matter how nice the laptop is in other respects, it'll be of no use to me when it's overheating. So, any suggestions? Thanks in advance... (I'm constantly surprised that customers and manufacturers don't pay more attention to this feature, at least in the business laptop subcategory. In the last couple months I've fixed two friends' laptops which were also overheating due to clogged cooling systems; clearly I'm not the only one affected by this.)

    Read the article

  • Need guidance on a Google Map application that has to show 250 000 polylines.

    - by lucian.jp
    I am looking for advice for an application I am developing that uses Google Maps.

    Summary: A user has a list of criteria for searching for a street segment that fulfills the criteria. The street segments will be colored with 3 colors showing those below average, average and above average. The user then clicks on a street segment to see an information window showing the properties of that specific segment; the segments not selected are hidden until he/she closes the window, at which point the other polylines become visible again. This looks quite like the Monopoly City Streets game Hasbro made some months ago, the differences being that I do not use Flash, I can't use OpenStreetMap because it doesn't list street segments (and if it did, the IDs wouldn't be the same anyway), and I do not have to show the Google sketch buildings over the map.

    Information: I have a database of street segments with IDs, polyline points and centroids. The database has 6,000,000 street segment records in it. To narrow the generated data a bit we focus on a city. The largest city we must show has 250,000 street segments, which means 250,000 polylines to show. Our longest polyline uses 9,600 characters, which is stored in two varchar(8000) columns in SQL Server 2008. We need to use API v3 because it is faster than API v2 and the application will be ported to iPhone. For now it's an ASP.NET 3.5 application with SQL Server 2008. Performance is a priority.

    Problems: Most of the demo projects that do this are made with API v2, so besides the tutorials on the Google API v3 reference page I have nothing to compare performance or technology choices against. There is no .NET wrapper for API v3 available yet. Generating 250,000 polylines creates a heavy file which takes time to transfer and parse. (I have found a demo of one polyline of 390,000 points. I think the encoder would be far less efficient with more polylines with fewer points, since there will be less rounding.) Since street segments are shown based on criteria, polylines must be created dynamically and a cache can't be used.

    Some thoughts:

    KML/KMZ. Pros: since it is a standard, we can easily load Bing Maps, Yahoo! Maps, Google Maps and Google Earth from the same KML file; the data generation would be the same. Cons: a LineString in KML cannot be an encoded polyline like the Google Maps API can handle, so it would probably be bigger and slower to display. Zipping a file of that size will take more processing time and require the client side to uncompress the data, and I am not quite sure, with 250,000 segments, how an iPhone would handle this and how a server would handle 40 users browsing at the same time.

    JavaScript file. Pros: a JavaScript file can use encoded polylines and would significantly reduce the file to transfer. Cons: I have to create my own stripped version of API v3 to add overlays, create polylines, etc. It is more complex than just creating a KML file and pointing to the source.

    GeoRSS: this option isn't suited to my needs, I think, but I could be wrong.

    MapServer: I saw some posts suggesting using MapServer to generate overlays. I'm not quite sure about the connection with our database and the performance it would give, plus it requires a plugin for generating KML. It seems to me that it wouldn't let me do better than creating my own KML or JavaScript file, and maintenance would be simpler without it.

    Monopoly City Streets: the game is now over, but for those who know what I am talking about, Monopoly City Streets showed at max zoom level only the streets whose centroid was inside the bounds of the window. Moving the map sent a request to the server for the new streets to show. While I think this was ingenious, I have no idea how to implement something similar. The only thing I thought of was to compare whether the longitude was inside the bounds of the map area, and the same for the latitude. While this could improve performance significantly at high zoom levels, it would give nothing when showing a whole city.

    Clustering: while clustering is awesome for markers, it seems we cannot cluster polylines. I would have liked something like MarkerClusterer for polylines, and to be able to cluster by my 3 polyline colors. This will probably stay a "would have been freaking awesome, but forget it".

    Arrow: in a future version I will have to show a direction for the polyline, with an arrow at the centroid. Loading an image or marker would only double my data, so creating a custom overlay will probably be my only option. I have found a demo of something similar to what I would like to achieve. Unfortunately the demo is very slow, but I only wish to show 1 arrow per polyline and not multiple arrows like the demo. This functionality will depend on the format of the data, since I don't think KML supports custom overlays.

    Criteria: while the application is done with ASP.NET 3.5, the iPhone port won't use the web to show the application and will be limited in screen size for selecting the criteria. This is why I was leaning toward a service or page that generates the file based on criteria passed in as parameters. The service would then generate the file I need to display the polylines on the map. I could also create an aspx page that does this. The aspx approach is better documented than the service approach; there should be a reason.

    Questions: Should I create a web service that returns the street segment file, or an aspx page that returns the file? Should I create a JavaScript file with encoded polylines, or a KML with longitude/latitude, given that the longest polyline has 9,600 characters and I have to render a maximum of 250,000 polylines? Or should I go with a MapServer that generates the overlay? Will I be able to display a simple arrow on the polyline in the next version? In the case of KML generation, is it faster to create the file manually with XDocument, XmlDocument or XmlWriter, or to just serialize the street segments to the stream? This is more of a brainstorming Stack Overflow question than an actual code problem. Any answer helping narrow the possibilities is as good as someone having all the knowledge to point me to a better choice.
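
    A rough sketch of the Monopoly-style idea with the v3 API: listen for the map's idle event, ask the server only for segments whose centroid falls inside the current bounds, and draw them as encoded polylines. The /segments endpoint and its JSON fields are hypothetical, fetch is used only for brevity, and decodePath requires loading the geometry library:

        var map = new google.maps.Map(document.getElementById("map"), {
            center: new google.maps.LatLng(45.5, -73.6),
            zoom: 15
        });
        var info = new google.maps.InfoWindow();
        var shown = [];                                   // polylines currently on the map

        google.maps.event.addListener(map, "idle", function () {
            var b = map.getBounds();
            if (!b || map.getZoom() < 14) { return; }     // only load at street-level zooms
            var ne = b.getNorthEast(), sw = b.getSouthWest();
            // Hypothetical endpoint: returns [{ encoded, color, props }, ...] for centroids in the box.
            fetch("/segments?n=" + ne.lat() + "&e=" + ne.lng() + "&s=" + sw.lat() + "&w=" + sw.lng())
                .then(function (r) { return r.json(); })
                .then(function (segments) {
                    shown.forEach(function (p) { p.setMap(null); });   // drop what is now off-screen
                    shown = segments.map(function (seg) {
                        var line = new google.maps.Polyline({
                            path: google.maps.geometry.encoding.decodePath(seg.encoded),
                            strokeColor: seg.color,        // one of the 3 below/average/above colors
                            strokeWeight: 3,
                            map: map
                        });
                        google.maps.event.addListener(line, "click", function (e) {
                            info.setContent(seg.props);    // preformatted string with the segment's properties
                            info.setPosition(e.latLng);
                            info.open(map);
                        });
                        return line;
                    });
                });
        });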

    Read the article

  • Remote server security: handling compiler tools

    - by Gonzolas
    Hello! I was wondering whether to remove compiler tools (gcc, make, ...) from a remote production server, mainly for security purposes. Background: The server runs a web application on Linux. Consider Apache jailed. Otherwise, only OpenSSHd faces the public network. Of course there is no compiler stuff within the jail, so this is about the actual OS outside of any jails. Here's my personal PRO/CON list (regarding removal) so far: PRO: I had been reading some suggestions to remove compiler tools in order to inhibit custom building of trojans etc. from within the host if an attacker attains unprivileged user permissions. CON: I can't live without Perl/Python, and a trojan/whatever could be written in a scripting language like that anyway, so why bother removing gcc et al. at all? There is a need to build new Linux kernels as well as some security tools from source directly on the server, because the server runs in 64-bit mode and (to my understanding) I can't (cross-)compile locally/elsewhere due to the lack of another 64-bit hardware system. OK, so here are my questions for you: (a) Is my PRO/CON assessment correct? (b) Do you know of other PROs / CONs to removing all compiler tools? Do they weigh in more? (c) Which binaries should I consider dangerous if the given PRO statement holds? Only gcc, or also make, or what else? Should I remove the entire software packages they come with? (d) Is it OK to just move those binaries to a root-only accessible directory when they are not needed? Or is there a gain in security if I "scp them in" every time? Thank you!

    Read the article

  • Recommendations for good FTP server for Win 2008 x64

    - by sfhtimssf1970
    I spent a bunch of time learning/configuring the "all new and better" FTP feature for IIS7. In my opinion, it still fails hard: In order to have multiple FTP sites on the same machine, you have to use host|user usernames (like domain.com|jason) for every account. Using IIS Manager auth doesn't seem to work at all. I'm sure I'm doing something wrong, but I can't figure out what the hell it is. I've read all the official articles on it and configured it a hundred different ways. It doesn't play well with passive connection types; that has to be disabled on the client in order for it to work. It doesn't have any way to allow one user to see multiple sites, no matter what binding they are connected to. For instance, if "jason" connects to ftp.domain.com, he should be able to see domain2.com and domain3.com without seeing domain4.com and domain5.com. It takes an act of God to set this up with IIS7. So I'm wanting to install a third-party FTP server instead. I've looked at both FileZilla and ZFTPServer. Does anyone know of any pros/cons of these? Any other recommendations?

    Read the article

  • How to send T.38 from a mac?

    - by Brian Postow
    I'm trying to set up a fax server on a Macintosh. I have HylaFAX, and we're going to use an internet FoIP fax provider (haven't decided who yet; that may be another question). The problem is how to get from HylaFAX to T.38. I know of two options, but I'm not sure how to decide between them: T38modem. Advantages: It's only one extra program, and I know that I can compile it for the Mac (well, at least I can get the H.323 version working on a Mac). Disadvantages: It is mostly undocumented and seems to be supported only by one guy in Russia. IAXModem/Asterisk. Advantages: It's well known and well supported. We can pay for support. It presumably does T.38 with SIP correctly, so we don't have to worry about it. Disadvantages: It's two separate programs. While I know how to get Asterisk on a Mac, I'm not sure about IAXModem (it's on SourceForge, and Linux-oriented, but compiling things for a Mac isn't always straightforward...). It's also mostly undocumented. Do these seem like an accurate listing of the pros/cons? Anyone have any other suggestions? Thanks.

    Read the article

  • Should one have a separate user account for work use? [closed]

    - by Tyler Wayne
    This question examines the practice of using a separate OS-level user account to divide work use from personal use (specifically, in a creative profession and on a personal computer). I recently left my in-the-flesh job to go to school, but I'm carrying on with the work remotely. I do all of my work on my laptop, and I currently have a separate user account called "Work" where I do exactly that. However, I'm now starting to question that practice. Because my hobby is the same as my job, I want to save notes of the things I learn while working. Because ideas come at any moment, I often want to throw something into my personal task manager's inbox and look at it again later. That task manager is well-suited to handle both the work and personal aspects of my life. Only my personal account has admin rights, but work sometimes requires me to install programs. My employer has no preference regarding my choice, so that is a non-issue. My work is essentially freelance web development, so advice given with that in mind will be much appreciated. Back up all opinion with some personal experience, please. Ideally, give a list of pros and cons and then name reasons for your position.

    Read the article

  • What are best practices on virtual lab/test bed architecture?

    - by WooYek
    I am currently preparing a new small virtual environment for development and testing with Windows Server + SQL Server + AD + SharePoint + Exchange + IIS (ASP.NET) + BizTalk + ?, for a small (up to 5) dev team. What are the pros and cons of different approaches, e.g. splitting things across different machines or packing everything together per machine? In your experience, what are the best practices I should follow in terms of architecture and the placement of the various systems/servers? What should be shared and what should be split per person? I would like to achieve some flexibility for the dev and testing process (so team members would not be stepping on each other's toes) and limit the administrative effort needed to propagate settings, integrate work items and revert changes when something breaks. It's not supposed to be an everyday development working environment, more a tier 2 developer testing environment, and not yet an integration or QA testing environment with a formal change process. IMO the two borderline solutions are: creating one all-inclusive machine for each dev team member, giving them freedom to manage it; or creating a shared environment managed by one person with a somewhat formalized change request process. What golden mean would you recommend, and why?

    Read the article

  • VMWare Host Not Responding - VMs are Disconnected (ESXi 3.5)

    - by EyeonTech
    I have inherited a VMware ESX 3i (3.5) environment. I am not fluent with VMware ESXi, so bear with me. Two days ago, when I opened Virtual Centre, one of the hosts showed up as Not Responding. The error I am seeing is: "Unable to acquire licenses because license source is unavailable: The license manager has not been started yet, the wrong port@host or license file is being used, or the port or hostname in the license file has been changed." What I have tried: I have stopped and started the license server within the application. When I do this, I am able to disconnect the host in Virtual Centre and reconnect. This brings the host back online, but a few minutes later the host goes back to a Not Responding state. I have also rebooted the server with the licensing software installed. While browsing Google, I have not been able to find any steps I feel comfortable performing, and I would like the opinion of the VMware pros. Where are the log files I should be looking at to correctly determine what is going on? Has anyone seen this behaviour, and how did you resolve it? Thank you.

    Read the article

  • Setup staging with multiple SVN

    - by Kapil Sharma
    We are a startup setting up new environments for a product to be released soon. The planned server structure and release flow are as shown in the image below. Ideally, there is a local server (or staging server, shown in green) in the local office, without a public IP address, and a production server (red) at Amazon EC2. Both the local and production servers have their own SVN copies. Management here wants to update the production server from the production SVN without giving developers (including freelancers/contract employees) access to it. So for developers, there is a local SVN on the local server. Another purpose of the local SVN is to keep a copy of the code on the local server, which is under our direct control. Although there are some technical concerns, like how the code on the local server will be updated from the local SVN and committed to the production SVN, the bigger question is: is that structure correct? The major requirement remains: don't give developers access to the production SVN. What are other possible options to achieve that? Another minor question, if suitable here: if the above structure is correct, is it possible for an SVN checkout to be updated from one SVN (the local SVN) but commit to another (the production SVN)? If yes, how? Edit: An answer has been accepted, but for the bounty I'm still looking for an answer to: Is that structure correct? What are its pros/cons? A technical solution is already provided by the accepted answer.

    Read the article

  • Hosting websites in our Workplace custom-built datacentre

    - by i.h4d35
    I'm faced with a unique learning opportunity at work at the moment. Due to the slowdown (amongst other reasons), the powers that be at my office have decided to abandon our shared hosting providers (both shared and dedicated hosting) and to host the websites at our office's datacentre. We're running 7 websites, where the average unique hits per day at the moment are about 900. We have 2 servers set aside for this - one is a Dell PowerEdge 1850 (2x Intel Xeon 3 GHz, 4 GB RAM, 73 GB HDD) and the other is an HP DL380 G3 (Intel Xeon 2.8 GHz, 6 GB RAM, 73 GB HDD). a) I would like to know the pros and cons of going ahead with this project. All the sites will be hosted on a single IP. In all probability, the OS is going to be CentOS. b) Do you think I should bring virtualization into this equation (KVM/Xen)? I was thinking in terms of separate instances for the DB server and the frontend, though I do not know if this is the best way to go. c) Should I be trying to use cloud stacks like OpenStack and try to make it look like websites hosted on some sort of public cloud? (Something I checked out here.) Here is something else I came across, which looks similar to what needs to be done at our office. About the websites: of the 7 websites, 4 are basic static websites which basically give a whole lot of information about a few local institutions. The remaining 3 are local product-based websites developed in PHP where end users can view products and order them online. I am trying to take this as a learning experience where I can learn to build something from scratch and save the company a little something in the process. The migration needs to be completed by Easter, so I guess that gives us some time (or am I being overly optimistic??). I am confused here and would appreciate all the help I can get. Thanks in advance.

    Read the article

  • Managing an application across multiple servers, or PXE vs cfEngine/Chef/Puppet

    - by matt
    We have an application that is running on a few boxes (5 or so, and the number will grow). The hardware is identical in all the machines, and ideally the software would be as well. I have been managing them by hand up until now and don't want to anymore (static IP addresses, disabling all unnecessary services, installing required packages...). Can anyone balance the pros and cons of the following options, or suggest something more intelligent? 1: Individually install CentOS on all the boxes and manage the configs with Chef/cfengine/Puppet. This would be good, as I have wanted an excuse to learn to use one of these applications, but I don't know if this is actually the best solution. 2: Make one box perfect and image it. Serve the image over PXE, and whenever I want to make modifications I can just reboot the boxes from a new image. How do cluster guys normally handle things like having MAC addresses in the /etc/sysconfig/network-scripts/ifcfg* files? We use InfiniBand as well, and it also refuses to start if the hwaddr is wrong. Can these be correctly generated at boot? I'm leaning towards the PXE solution, but I think monitoring with Munin or Nagios will be a little more complicated with this. Anyone have experience with this type of problem? All the servers have SSDs in them and are fast and powerful. Thanks, matt.

    Read the article

  • Redis as substitution for Memcache

    - by Boban P.
    We have a distributed web app, and for now, as the session handler, we use two separate instances of memcache in redundancy, so everything that is written to one memcache is also written to the other. Memcache is fairly easy to install, use, and maintain, but we have one problem: if one memcache fails, everything is fine; PHP communicates with the other instance, which has all the data (although half of the connections have a delay because they try the failed one first, wait a little, and then contact the other memcache). When the failed instance comes back to life, it starts up empty. If an established session requests data from that instance, the session fails and the user is logged out, and that happens to half of the users. So, we are thinking about switching to Redis for session handling, and maybe keeping memcache for caching only. My questions are: If we set up Redis instances as master-slave, and the master fails, can Sentinel promote the slave to be the new master, and when the old master comes back to life, will it stay a slave or not? Does Redis call malloc at startup to allocate part of the memory, like memcache or Varnish, or does it call malloc for every key inserted? And what are the pros and cons of that?
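
    On the failover question, the part that matters for the app is that clients discover the current master through Sentinel instead of hard-coding a host. A sketch using the Node ioredis client, purely to illustrate the shape (the hosts, the "mymaster" group name and the session key are assumptions; the PHP clients would need their own Sentinel-aware configuration, e.g. Predis' sentinel replication mode, which is an assumption to verify for your stack):

        var Redis = require("ioredis");       // ioredis resolves the master by asking the Sentinels

        var redis = new Redis({
            sentinels: [
                { host: "10.0.0.1", port: 26379 },
                { host: "10.0.0.2", port: 26379 }
            ],
            name: "mymaster"                  // the master group name from sentinel.conf
        });

        // Reads and writes go to whichever node Sentinel currently reports as master,
        // so a failover does not require changing application configuration.
        redis.set("session:abc123", JSON.stringify({ user: 42 }), "EX", 1800)
            .then(function () { return redis.get("session:abc123"); })
            .then(function (value) { console.log(value); });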

    Read the article

  • Is the APC BR700G UPS (or similar) compatible with Active PFC power supplies?

    - by David Zaslavsky
    I'm looking at getting a UPS for my home computer. So far the APC BR700G looks very promising, except for one thing: one of the reviews on Newegg says that this UPS does not work with a power supply with Active PFC. Pros: Unit looks great, built well, very heavy, was excited to use it. Cons: Didn't research enough - many newer power supplies like my corsair 750w (and yes dells and other mainstreamers sell them too) that I bought last year have a feature called active pfc (power factor corrected). The signal for this backup battery doesn't fully support that feature and can cause issues. You can find an article on APCs site if you search their user forums for PFC. And the power supply in my computer is, in fact, an Active PFC PSU. I've already found one answer on this site claiming that it's not an issue, that "most quality supplies these days have PFC and work just fine with a UPS." That disagrees with the review on Newegg. Can someone explain this discrepancy? Also, what is it exactly about a UPS that makes it incompatible with an Active PFC PSU? (if anything) Is there some way to tell based on the technical specifications, or do I just have to hunt for reviews online to avoid wasting my money? While any input would be appreciated, I would prefer to get an answer from someone with actual experience with similar UPS's and Active PFC power supplies, who can tell me whether it works or not.

    Read the article
