Search Results

Search found 1330 results on 54 pages for 'ish kumar'.

Page 42/54 | < Previous Page | 38 39 40 41 42 43 44 45 46 47 48 49  | Next Page >

  • Doubling the DPI with a shader?

    - by Mathias Lykkegaard Lorenzen
    I'm developing a game where the map is generated with Perlin noise, but on the CPU. I generate the noise onto a small texture and then stretch it out to the whole screen to simulate a map. The reason for generating the noise on the CPU is that I want it to look the same on all devices. Now, here's the end result (please ignore the bullets and the explosion in the picture). What matters is the background (the black/gray pixels) and the ground (the brown-ish pixels); they are rendered to the same texture through Perlin noise. However, this doesn't look very pretty. So I was wondering: would it be possible to double the amount of pixels using a shader, rounding the edges at the same time? In other words, improve the DPI. I'm using SharpDX with DirectX 11, through its Toolkit. Any help that leads me in the right direction (for instance through HLSL) would be a great help. Thanks in advance.

    Read the article

  • Cross platform development query

    - by Ian
    I'm a Microsoft developer mainly, but there are a couple of small-ish projects I'd like to fiddle with which would benefit from being cross platform. The platforms I want to target are: Windows, Linux, Mac, Android, and preferably iPhone and the web (running in a browser). I need 3D (around the level of support seen in something like Minecraft; I'm not writing Minecraft) and some networking. I'm pretty certain Java would work on all except iPhone. Looking at the "related questions" above, they offer up Qt (no browser or phone, AFAIK) and also HTML/CSS/JavaScript (3D? package for desktop?). The other alternative is to have separate versions for separate platforms, developed with some common code where possible. That option isn't something I know anything about. Does anyone have experience of this sort of conundrum? I figured here was better than SO, because I imagine there are compromises which extend beyond technical choice. Finally, this is not a commercial operation, so some of the very expensive cross-platform tools are out of the question unless they offer some sort of community edition. Thanks for your time.

    Read the article

  • Script/tool to import series of snapshots, each being a new revision, into Subversion, populating source tree?

    - by Rob
    I've developed code locally and taken a fairly regular snapshot whenever I reach a significant point in development, e.g. a working build. So I have a long-ish list of about 40 folders, each folder being a snapshot, in ascending YYYYMMDD date order, e.g.: 20100523 20100614 20100721 20100722 20100809 20100901 20101001 20101003 20101104 20101119 20101203 20101218 20110102. I'm looking for a script to import each of these snapshots as a new Subversion revision to the source tree, the end result being that the HEAD revision is the same as the last snapshot, and the other revisions are as numbered. Some other requirements: the HEAD revision should not be cumulative of the previous snapshots, i.e. files that appeared in older snapshots but which don't appear in later ones (e.g. due to refactoring) should not appear in the HEAD revision; meanwhile, there should be continuity between files that do persist between snapshots, so Subversion should know that there are previous versions of these files and not treat them as brand-new files within each revision. Some background about my aim: I need to formally revision-control this work rather than keep local private snapshot copies, and I plan to release it as open source, so version control is highly recommended. I am evaluating some of the current popular version control systems (Subversion and Git), BUT I definitely need a working solution in Subversion. I'm not looking to be persuaded to use one particular tool; I need a solution for each tool I am considering, as I would also like a solution in Git (I will post the same question separately for Git so separate camps of folks with expertise in Git and Subversion can give focused answers on one or the other). The same question but for Git: Script/tool to import series of snapshots, each being a new edition, into GIT, populating source tree? There is an outline answer for Subversion on stackoverflow.com, but not enough specifics about the script (what commands to use, code to check valid scenarios if necessary), i.e. a working script basically: http://stackoverflow.com/questions/2203818/is-there-anyway-to-import-xcode-snapshots-into-a-new-svn-repository
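
    One possible shape for such a script, as a minimal sketch only: it assumes the snapshot folders sit side by side in ./snapshots and that an empty repository has already been checked out into ./work (folder names and paths are placeholders, not from the question):

        #!/bin/bash
        # Sketch: paths below are placeholders, adjust to taste.
        # Walk the snapshots in date order and commit each one as a new revision.
        for snap in snapshots/20*/; do
            # Mirror the snapshot into the working copy; --delete drops files that
            # vanished between snapshots, so HEAD never becomes cumulative.
            rsync -a --delete --exclude='.svn' "$snap" work/
            (
                cd work
                svn add --force .                                       # pick up new files
                svn status | awk '/^!/ {print $2}' | xargs -r svn rm    # schedule vanished files for deletion
                svn commit -m "Import snapshot ${snap%/}"
            )
        done

    Because the same working copy is reused for every snapshot, files that persist keep their history, while rsync --delete plus the svn rm pass keeps HEAD from accumulating files that disappeared along the way. (Filenames containing spaces would need a more careful svn status parse.)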

    Read the article

  • Can't complete dropbox installation from behind proxy in Ubuntu 11.10

    - by Mark Jones
    Problem: My PC on campus sits behind a proxy (requiring authentication) and I can't set up Dropbox. I am convinced that this is a proxy issue as I can't set up Ubuntu One either (but I don't use Ubuntu One, so that is not a problem). I have looked at the Ubuntu One fix, but it seems to modify settings explicitly related to Ubuntu One. I can install the nautilus-dropbox package (compiled from source, from the .deb package from the website, and from the Software Centre), but once I click OK in the "Dropbox Installation" dialog box (prompting me to download the proprietary daemon) the installation just freezes with the OK button pressed. When I look at its process in System Monitor, its waiting channel is inet_wait_for_connect. I have set the following proxy directives so far: added mj22:**@proxy.waikato.ac.nz:80 to the network proxy settings under Network in Settings; added http_host and http_port variables under gconf-editor system/proxy; added 'host', 'authentication_password', 'authentication_user' and ticked 'user authentication' and 'use_http_proxy' under gconf-editor system/http_proxy; added export http_proxy="http://mj22:**@proxy.waikato.ac.nz:80/" to /etc/bash.bashrc; and added Acquire::http::proxy "http://mj22:**@proxy.waikato.ac.nz:80/"; to /etc/apt/apt.conf (which I imagine is what lets the Software Centre retrieve packages). Here ** is my password. I have also added the equivalent ftp and https lines for the above entries. I get the internet fine and the Software Centre can download packages, but that's it. Related issues: the Software Centre can't fetch reviews (but can download packages), and when trying to add an online account in GNOME 3 a dialog pops up with "Error getting a Request Token: Cannot connect to proxy (proxy.waikato.ac.nz)". Updates: After some time (10 mins-ish) Dropbox shows an error dialog that reads: "Trouble connecting to Dropbox servers. Maybe your internet connection is down, or you need to set your http_proxy environment variable." Is there a way I can see what environment variables are currently set?
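
    To answer the closing question, a generic shell sketch (nothing Dropbox-specific; PASSWORD is a placeholder): you can list what the current shell has set and export the proxy variables for that session like this:

        # show any proxy-related variables the shell currently has
        env | grep -i proxy

        # set them for this shell session only (PASSWORD is a placeholder)
        export http_proxy="http://mj22:PASSWORD@proxy.waikato.ac.nz:80/"
        export https_proxy="$http_proxy"

    Note that variables exported in a terminal are only seen by programs started from that terminal; a daemon launched from the desktop session will not pick them up unless they are set somewhere session-wide, such as /etc/environment.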

    Read the article

  • Xmonad Xsession

    - by AntLord
    My user level: noob-ish, so please bear with me. I'm running 12.04 LTS. I have installed and, to some extent, configured xmonad 0.10. The "automagically" created xsession for it works fine as it is, but when I log in it won't run a startup script I've created and "call from" /usr/share/xsessions/xmonad.desktop, if that's right. I've read pretty much all I could find about .xinitrc and .xsession; I tried that and it somehow messed up the other "sessions", if I'm explaining myself correctly. I had to run unity --reset to get the "main session" working again. Anyway, my question is: how do I autostart xmobar and set a desktop background after logging into xmonad's default xsession? I tried this script, start-xmonad:

        #!/bin/bash
        #
        # I only used one of the following each time I tried; none worked.
        # Also, do I really need the '&'? I know what they're for, but...
        nitrogen --restore &
        feh --bg-scale ~/Pictures/picture.png &

        # Then I want xmobar to start; again, do I need the '&'? I know it's for it to run
        # in the background, but I tried removing the '&' and xmonad still launched
        xmobar &

        # Finally, the only thing that seems to work in this script
        exec xmonad

    Yes, I made sure I did chmod +x ~/start-xmonad. The xmonad.desktop is:

        [Desktop Entry]
        Name=XMonad
        Encoding=UTF-8
        Comment=Lightweight tiling window manager
        Exec=/home/myusername/start-xmonad
        Icon=custom_xmonad_badge.png
        Type=XSession

    So, this didn't work, and now I'm here. Please help :s thanks
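
    One small debugging aid, offered only as a sketch (the log file name is arbitrary, not from the question): make the wrapper script log everything it does, so that after the next login you can read why the background or xmobar never started:

        #!/bin/bash
        # send everything this session script prints to a log we can read later
        # (log path is arbitrary)
        exec > "$HOME/.xsession-xmonad.log" 2>&1
        set -x

        feh --bg-scale "$HOME/Pictures/picture.png" &   # set the wallpaper; '&' keeps the script moving
        xmobar &                                        # status bar stays running in the background
        exec xmonad                                     # must come last; it never returns

    The '&' only matters in the sense that the script should not block on long-running programs like xmobar; the final exec xmonad replaces the script with the window manager, so anything placed after it would never run.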

    Read the article

  • Farmyard

    - by Richard Jones
    Moooooooo. For a while now we've been using Apple's enterprise app distribution mechanism. This allows you to have a user click on a URL on their iOS device, and it pulls down a new version of an enterprise app off of our servers. It's really nice; have a look at http://developer.apple.com/library/ios/#featuredarticles/FA_Wireless_Enterprise_App_Distribution/Introduction/Introduction.html I've embedded this into a check on application launch: a web service is called to detect whether a newer version of the software is available. It then calls the URL to the app and a new version is deployed. You can alert users that an app update is available by sending them a push notification (see the screenshot at the top). We send our push notifications out to users using a simple C# service. The fun part is this: you can instruct the push notification to play a sound (already embedded in the app). So our push notifications play a random farmyard noise, i.e. from a selection of cow.wav, dogbrk.wav, duck.wav, goose.wav, horse.wav, lamb.wav, monkey.wav (left field, I know), rooster.wav. Imagine my amusement at being able to periodically send out an update and watch our office (of about 60 people) turn into a farm for a few seconds. I've messed up a few times, with people being interrupted on customer conference calls, but people seem good-humoured about it (so far). Simple(ish) pleasures…

    Read the article

  • Script/tool to import series of snapshots, each being a new edition, into GIT, populating source tree?

    - by Rob
    I've developed code locally and taken a fairly regular snapshot whenever I reach a significant point in development, e.g. a working build. So I have a long-ish list of about 40 folders, each folder being a snapshot, in ascending YYYYMMDD date order, e.g.: 20100523 20100614 20100721 20100722 20100809 20100901 20101001 20101003 20101104 20101119 20101203 20101218 20110102. I'm looking for a script to import each of these snapshots into Git, the end result being that the latest code is the same as the last snapshot, and the other editions are accessible and are as numbered. Some other requirements: the latest edition should not be cumulative of the previous snapshots, i.e. files that appeared in older snapshots but which don't appear in later ones (e.g. due to refactoring) should not appear in the latest edition of the code; meanwhile, there should be continuity between files that do persist between snapshots, and I would like Git to know that there are previous editions of these files and not treat them as brand-new files within each edition. Some background about my aim: I need to formally revision-control this work rather than keep local private snapshot copies, and I plan to release it as open source, so version control is highly recommended. I am evaluating some of the current popular version control systems (Subversion and Git), BUT I definitely need a working solution in Git as well as Subversion. I'm not looking to be persuaded to use one particular tool; I need a solution for each tool I am considering (I have posted the same question separately for each tool so separate camps of folks with expertise in Git and Subversion can give focused answers on one or the other). The same but separate question for Subversion: Script/tool to import series of snapshots, each being a new revision, into Subversion, populating source tree?
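
    A minimal sketch of the Git side, under the same assumptions as the Subversion version (snapshot folders in ./snapshots; names and paths are placeholders):

        #!/bin/bash
        # Sketch: paths are placeholders. One commit per snapshot, oldest first.
        git init import
        cd import
        for snap in ../snapshots/20*/; do
            # --delete keeps the tree honest: files missing from a later snapshot
            # disappear from that commit onwards.
            rsync -a --delete --exclude='.git' "$snap" .
            git add -A .
            git commit -m "Snapshot $(basename "$snap")"
        done

    git add -A stages additions, modifications and deletions in one pass, and because the same tree is reused each time, files that persist between snapshots keep their history. If the commit dates should match the folder names, GIT_AUTHOR_DATE and GIT_COMMITTER_DATE can be set inside the loop before each commit.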

    Read the article

  • CMS for coding blog

    - by OrgnlDave
    I've got a server with a LAMP stack and such. I'd like to host a blog-type site (or, if there's a free place good for this, that would be cool!) that covers a variety of tutorials, interesting content, etc. There are tons of CMSes out there, but if you search for tips on ones that handle programming-type content well, you get tons of hits about web development. I'd like to know if anyone here has recommendations from actually using a CMS for this type of thing or, short of that, can recommend one not based on generalities like "Joomla! is great!" I'm looking for the least setup time possible. I'm proficient with CSS and I can design a color scheme, so that's not a big problem. As you can expect, attaching files, pictures, and syntax highlighting are musts (C/C++-ish is good). The ability to group posts, perhaps use tags, etc. would be cool too, but not necessary. As I'm writing this, it almost sounds like it'd be easier to custom-code a small PHP site myself.

    Read the article

  • Determine All SQL Server Table Sizes

    I'm doing some work to migrate and optimize a large-ish (40GB) SQL Server database at the moment. Moving such a database between data centers over the Internet is not without its challenges. In my case, virtually all of the size of the database is the result of one table, which has over 200M rows of data. To determine the size of this table on disk, you can run the sp_spaceused stored procedure, like so:

        EXEC sp_spaceused lq_ActivityLog

    This reports the row count, reserved space, data size, index size, and unused space for that one table. Of course, this only shows one table. If you have a lot of tables and need to know which ones are taking up the most space, it would be nice to run a query that lists all of the tables, ordered by the space they're taking up. Thanks to Mitchel Sellers (and Gregg Stark's CURSOR template) and a tiny bit of my own edits, now you can! Create the stored procedure below and call it to see a listing of all user tables in your database, ordered by their reserved space.

        -- Lists Space Used for all user tables
        CREATE PROCEDURE GetAllTableSizes
        AS
        DECLARE @TableName VARCHAR(100)

        DECLARE tableCursor CURSOR FORWARD_ONLY
        FOR select [name]
            from dbo.sysobjects
            where OBJECTPROPERTY(id, N'IsUserTable') = 1
        FOR READ ONLY

        CREATE TABLE #TempTable
        (
            tableName varchar(100),
            numberofRows varchar(100),
            reservedSize varchar(50),
            dataSize varchar(50),
            indexSize varchar(50),
            unusedSize varchar(50)
        )

        OPEN tableCursor
        WHILE (1=1)
        BEGIN
            FETCH NEXT FROM tableCursor INTO @TableName
            IF (@@FETCH_STATUS <> 0) BREAK;
            INSERT #TempTable EXEC sp_spaceused @TableName
        END
        CLOSE tableCursor
        DEALLOCATE tableCursor

        UPDATE #TempTable
        SET reservedSize = REPLACE(reservedSize, ' KB', '')

        SELECT tableName 'Table Name',
               numberofRows 'Total Rows',
               reservedSize 'Reserved KB',
               dataSize 'Data Size',
               indexSize 'Index Size',
               unusedSize 'Unused Size'
        FROM #TempTable
        ORDER BY CONVERT(bigint, reservedSize) DESC

        DROP TABLE #TempTable
        GO

    Read the article

  • Go/Obj-C style interfaces with ability to extend compiled objects after initial release

    - by Skrylar
    I have a conceptual model for an object system which combines Go/Obj-C interfaces/protocols with the ability to add virtual methods from any unit, not just the one which defines a class. The idea is to allow Ruby-ish open classes, so you can take a minimalist approach to library development and attach small pieces of functionality as they are actually needed by the whole program. The implementation involves a table of methods marked virtual in an RTTI table, which system functions are allowed to add to during module initialization. Upon typecasting an object to an interface, a Go-style lookup is done to create a vtable for that particular mapping and pass it off, so you can have performance comparable to C/C++. In this model, methods may be added afterwards which were not previously known, and these new methods allow newer interfaces to be satisfied. I like this idea because it seems like it would be very flexible (disregarding the potential for spaghetti code, which can happen with just about any model you use). By wrapping the system calls for binding methods up in a set of clean C-compatible calls, one would also be able to integrate code with shared libraries and retain a decent amount of performance (Go does not do shared linking, and Objective-C does a dynamic lookup on each call). Is there a valid use case for this model that would make it worth the extra background plumbing? As much as this Dylan-style extensibility would be nice to have access to, I can't quite come up with a use case that would justify the overhead other than "it could make some kinds of code more extensible in future scenarios."

    Read the article

  • When mapping the surface of a sphere with tiles, how might you deal with polar distortion?

    - by clweeks
    It's easy to deal with the way locations interact on a clean Cartesian grid; it's just vanilla math, and you can mostly ignore the geometry of the sphere's surface if you just truncate the poles or something. But I keep coming up with ideas for games where the polar space matters: geo-coded ARGs, global roguelikes, and so on. I want square(ish?) locations, reasonably representable by square tiles of the same size across the globe, anyway. This has to be a solved problem, right? What are the solutions? ETA: At the equator, and assuming that your square locations are reasonably small, it's close enough to true that you can get away with having one square in the rows north and south of the most equatorial row, and you could probably get away with that by just hand-waving the difference up to 45 degrees or so. But eventually, you need to have fewer squares in a pole-ward circumferential row. If I reduce the length of the row by one and offset the squares by 1/2, then they're just like hexes and it's relatively easy to do the coding to keep track of the connections. But as you get pole-ward, it gets more and more extreme. Projecting the surface of the world onto the surface of a cube is tempting, but I figured there must be more elegant solutions already in use. If I did the cube thing (not dissecting it further through geodesy), are there any pros and cons related to placing the pole at the center of a face or at the vertex of three sides?
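
    For a rough sense of how quickly the rows have to shrink (an approximation assuming near-square tiles of constant width), a circumferential row at latitude \varphi can hold about

        N(\varphi) \approx N_{\text{equator}} \cdot \cos\varphi

    tiles, so a row at 60 degrees already holds only half as many tiles as the equatorial row. That is why the drop-one-and-offset-by-1/2 trick stops being enough long before the poles, and why cube or other polyhedral projections start to look attractive.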

    Read the article

  • Javascript Canvas Drawing Efficiency

    - by jujumbura
    I have just recently started experimenting with game development in JavaScript/HTML5, and so far it has been going pretty well. I have a simple test scene running with some basic input handling and a hundred-ish drawImage() calls with a few transforms. This all runs great in Chrome, but unfortunately it already chugs in Firefox. I am using a very large canvas (1920 x 1080), but it doesn't seem like I should be hitting my limit already. So on that note, I was hoping to ask a few questions: 1) What exactly is done on the CPU vs. the GPU in terms of canvas and drawImage()? I'm afraid the answer is probably "it depends on the browser", but can anybody give me some rules of thumb? I naively imagined that each drawImage call results in a textured quad on the GPU, with the canvas effectively being a render target, but I'm wondering if I'm pretty far off base there... 2) I have seen posts here and there with people saying not to use the translate(), rotate(), and scale() functions when drawing on the canvas. Am I adding a lot of overhead just by adding a translate() call, as opposed to passing the x,y to drawImage()? Some people suggest using "translate3d", etc., which are CSS properties, but I'm not sure how to use them within a scene. Can they be used for animated sprites within a single canvas? 3) I have also seen a lot of posts with people mentioning that pre-building canvases and then re-using them is a lot faster than issuing all the individual draw calls again. I am guessing that my background should definitely be pre-built into a canvas, but how far should I take this? Should I maintain an individual canvas for each sprite, to cache all static image data when not animating? Thank you very much for your advice!

    Read the article

  • Why doesn't there seem to be any development in the field of 3D VR gear, especially with regard to gaming?

    - by neuviemeporte
    I remember that back around 1995 there was a big craze about VR in the media, and a whole bunch of (mostly mediocre) games labelled "virtual-reality-interactive-movie (...)" were published. If I recall correctly, the first 3D VR helmet was called VFX-1 and was sold bundled with Descent and a dedicated joystick. I never owned one, and I read just one review, which was mostly enthusiastic but pointed to some weak points, like the eyes getting tired after an hour or so of playing. Then the whole thing basically flickered down and died. I suppose the main reasons it wasn't successful were that the hardware of the day was not powerful enough, the VR gear's design wasn't refined enough to make it comfortable and natural to use, and the companies that made it failed to market it successfully. What I can't understand is why there isn't any development in the field today. There is some VR-ish hardware, mostly targeted at the consoles (Kinect, Wii Remote, TrackIR), but all projects aimed at creating a 3D head-mounted display system seem to be in early infancy; they appear once at a trade show somewhere and aren't heard of again. I think it could work great with head tracking in some of today's shooters, flight sims (TrackIR is nice, but the movement scale translation is awkward) and other games with an FPP point of view. Is there any technological reason why decent VR headgear can't be made today, or is it just that nobody really cares, or everyone is scared to repeat the '90s failure?

    Read the article

  • Why is Ubuntu One slow to sync in 11.10, either backup or any sub-folder contents?

    - by pst007x
    I have been trying to sync my 1.4GB Documents folder; it still hasn't finished, and it has been syncing for a month. The top level syncs (files and folders directly in the Documents folder), but the contents of sub-folders just hang. (I gave up and stopped syncing this folder.) However, I have tried using the backup facility in 11.10 to back up to Ubuntu One, and I upgraded my HDD space in Ubuntu One accordingly. It has been going now for 24 hours-ish and has only backed up what looks like a couple of percent. (By the way, what an excellent idea to back up to Ubuntu One, if only we could get it to actually work! :-o) The odd thing is I can sync to Dropbox within hours, rather than months. This is bad, and has been an issue since Ubuntu One's release. I have reported this problem and there were promises that this would be fixed in later releases, but it hasn't been. Canonical cannot help either... I have posted on several blogs; a lot of people have the same problem but no fixes. So do I use Dropbox or another service until it is sorted? As Ubuntu does not seem to see this as an issue, I think a fix will be a long time coming. (However, I love the potential of Ubuntu One and the integration with the OS.) Yes, my internet speeds are fine, etc... :-) No firewall (sudo ufw status: STATUS: INACTIVE), no proxy, etc. NB: I have raised this as a separate question from others posted here, because my question relates to Ubuntu 11.10, though I have commented elsewhere for help; plus my question also relates to deja-dup backup to Ubuntu One. Thanks
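
    For what it's worth, a quick sketch for telling a genuinely slow upload apart from a hung one, assuming the ubuntuone-client command-line tools are installed (check your version's man page for the exact flags):

        u1sdtool --status              # overall state of the sync daemon
        u1sdtool --current-transfers   # what, if anything, is being uploaded right now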

    Read the article

  • crippling repeating "pciehp card not present" notifications

    - by Nanne
    When using Ubuntu (12.04, both installed and on a live USB) I get a lot of these messages:

        pciehp 0000:00:1c.5:pcie04: Card not present on Slot(37)
        pciehp 0000:00:1c.5:pcie04: Card present on Slot(37)

    And by "a lot" I mean about 20 per second. This has a crippling effect, and I would like to get rid of it :) The computer is a Packard Bell EasyNote BG48-U-100 DC. A tip I picked up from a Fedora/Red Hat thread on this error was to look at lspci -vnn. I have pasted the part about "00:1c.5" here: http://pastebin.com/0sfsiqW2 For what good it may do, here is the lsmod of my machine: http://pastebin.com/DQZy1kAL From that first pastebin I conclude that it has to do with the module shpchp, which seems to me (aka: Google) to have something to do with ACPI. That's as far as I've come in dissecting this. Can anyone help me along further? What can I do, check, etc.? I did see this topic, but my intention is not to suppress the error message: I know how to do that (from that topic ;) ), but I'm looking for a real solution. Searching for the problem on the internet leads me to believe it is neither an Ubuntu-specific nor a Packard Bell-specific problem. If you google the problem it seems to be present with several other distribution/hardware combos as well, and it looks like the advice is to remove one of the drivers? I have no clue as to which driver I should look at, or what the effect of just removing it would be. I have also seen this topic, which is old-ish but describes my problem and is about a similar computer. The solution in that topic was to compile a new kernel using a Spanish guide, which seems a bit extreme to me, so I'm kind of hoping for a better solution than that.
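
    If you do end up trying the "remove one of the drivers" advice, one reversible experiment (a sketch only, at your own risk; it disables hotplug support for that driver rather than fixing the underlying ACPI issue) is:

        # see whether the hotplug modules are loaded
        lsmod | grep -E 'pciehp|shpchp'

        # unload for the current session only
        sudo modprobe -r shpchp

        # make it permanent (delete the file to undo)
        echo "blacklist shpchp" | sudo tee /etc/modprobe.d/blacklist-shpchp.conf

    Another experiment that gets suggested for this class of message is booting with the pcie_ports=compat kernel parameter, which stops the kernel from using native PCIe port services; whether either helps on this particular machine is something only testing will tell.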

    Read the article

  • Dealing with the node.js callback pyramid

    - by thecoop
    I've just started using node, and one thing I've quickly noticed is how quickly callbacks can build up to a silly level of indentation:

        doStuff(arg1, arg2, function(err, result) {
            doMoreStuff(arg3, arg4, function(err, result) {
                doEvenMoreStuff(arg5, arg6, function(err, result) {
                    omgHowDidIGetHere();
                });
            });
        });

    The official style guide says to put each callback in a separate function, but that seems overly restrictive on the use of closures, and making a single object declared in the top level available several layers down, as the object has to be passed through all the intermediate callbacks. Is it ok to use function scope to help here? Put all the callback functions that need access to a global-ish object inside a function that declares that object, so it goes into a closure?

        function topLevelFunction(globalishObject, callback) {
            function doMoreStuffImpl(err, result) {
                doMoreStuff(arg5, arg6, function(err, result) {
                    callback(null, globalishObject);
                });
            }
            doStuff(arg1, arg2, doMoreStuffImpl);
        }

    and so on for several more layers... Or are there frameworks etc to help reduce the levels of indentation without declaring a named function for every single callback? How do you deal with the callback pyramid?

    Read the article

  • Versioning APIs

    - by Sharon
    Suppose that you have a large project supported by an API base. The project also ships a public API that end(ish) users can use. Sometimes you need to make changes to the API base that supports your project. For example, you need to add a feature that needs an API change, a new method, or requires altering one of the objects, or the format of one of those objects, passed to or from the API. Assuming that you are also using these objects in your public API, the public objects will also change any time you do this, which is undesirable as your clients may rely on the API objects remaining identical for their parsing code to work (cough, C++ WSDL clients...). So one potential solution is to version the API. But when we say "version" the API, it sounds like this must also mean versioning the API objects as well as providing duplicate method calls for each changed method signature. So I would then have a plain old CLR object for each version of my API, which again seems undesirable. And even if I do this, I surely won't be building each object from scratch, as that would end up with vast amounts of duplicated code. Rather, the API is likely to extend the private objects we are using for our base API, but then we run into the same problem, because added properties would also be available in the public API when they are not supposed to be. So what is some sanity that is usually applied to this situation? I know many public services such as Git for Windows maintain a versioned API, but I'm having trouble imagining an architecture that supports this without vast amounts of duplicate code covering the various versioned methods and input/output objects. I'm aware that processes such as semantic versioning attempt to put some sanity on when public API breaks should occur. The problem is more that it seems like many or most changes require breaking the public API if the objects aren't more separated, but I don't see a good way to do that without duplicating code.

    Read the article

  • How to get started in coding for JBoss

    - by Mister IT Guru
    I have an idea on how to revamp our internal application, after having assessed the needs of the users, addressing their current issues, and the like. But I am not a coder. The last application I wrote was in college, in C (Java wasn't invented yet, ish!), and it was a booking system with the option to add on other modules, blah blah. I got an A, but I became a system administrator instead, more interested in designing and maintaining networks and infrastructure, but with the advent of virtualisation and Linux management tools such as Puppet I can now manage infrastructure in my sleep! Now I want to write code to put on my infrastructure, and I want to build.... a booking system! This is just to get experience, but I am at a loss as to where to start. Setting up the environment will take me about a day. Writing the spec, even how I want it to work, I already know, but as for actually coding in a decent manner, I can only guess. If anyone can recommend a book, website, blog, or Twitter person to follow, or just give advice on how to build a kick-butt basic JBoss app, then please: "I AM READY TO LEARN" :)

    Read the article

  • Dealing with the node callback pyramid

    - by thecoop
    I've just started using node, and one thing I've quickly noticed is how quickly callbacks can build up to a silly level of indentation:

        doStuff(arg1, arg2, function(err, result) {
            doMoreStuff(arg3, arg4, function(err, result) {
                doEvenMoreStuff(arg5, arg6, function(err, result) {
                    omgHowDidIGetHere();
                });
            });
        });

    The official style guide says to put each callback in a separate function, but that seems overly restrictive on the use of closures, and making a single object declared in the top level available several layers down, as the object has to be passed through all the intermediate callbacks. Is it ok to use function scope to help here? Put all the callback functions that need access to a global-ish object inside a function that declares that object, so it goes into a closure?

        function topLevelFunction(globalishObject, callback) {
            function doMoreStuffImpl(err, result) {
                doMoreStuff(arg5, arg6, function(err, result) {
                    callback(null, globalishObject);
                });
            }
            doStuff(arg1, arg2, doMoreStuffImpl);
        }

    and so on for several more layers... Or are there frameworks etc to help reduce the levels of indentation without declaring a named function for every single callback? How do you deal with the callback pyramid?

    Read the article

  • Creating a 2D perspective in 3D game

    - by Accatyyc
    I'm new to XNA and 3D game development in general. I'm creating a puzzle game kind of similar to Tetris, built with blocks. I decided to build the game in 3D since I can do some cool animations and transitions when using 3D blocks with physics etc. However, I really do want the game to look "2D". My blocks are made up of 3D models, but I don't want that to be visible when they're not animating. I have followed some XNA tutorials and set up my scene like this:

        this.view = Matrix.CreateLookAt(cameraPosition, Vector3.Zero, Vector3.Up);
        this.aspectRatio = graphics.GraphicsDevice.Viewport.AspectRatio;
        this.projection = Matrix.CreatePerspectiveFieldOfView(
            MathHelper.ToRadians(45.0f), aspectRatio, 1.0f, 10000.0f);

    ...and it gives me a very 3D-ish look. For example, the blocks in the center of the screen look exactly how I want them, but closer to the edges of the screen I can see the rotation and sides of them. My guess is that I'm not after a perspective field of view, but any help on which field of view/settings to use to get a "flat" look when the blocks aren't rotated would be great!

    Read the article

  • public_html permissions for local development

    - by maGz
    I know this question has popped up a couple of times, but I can't seem to find a definitive answer to my issue, so please bear with me. I have Ubuntu Server 12.04 set up in VirtualBox for PHP development and testing (Drupal plus other PHP sites using the Yii framework). My question is in three parts... 1) If I create a public_html folder under /home/myuser, do I need to give ownership of that folder to the Apache www-data group? If so, are there any specific permissions I should be setting? 755? (Btw, I am following this guide to create the public_html directory and set up multiple virtual hosts, one per site I create and test.) I previously had all of my sites under /var/www, but ran into massive permission-denied errors whenever I tried to sFTP to it, either through FileZilla or PhpStorm. This is what I had previously done:

        sudo chgrp www-data /var/www
        sudo chmod -R 775 /var/www
        sudo chmod -R g+s /var/www
        sudo usermod -G www-data [my_ftp_user]

    2) The second part of my question is this: if I create my PHP project and files in Windows through PhpStorm and then upload via sFTP, will permissions get affected? 3) Once I am satisfied with my developed project, would it be advisable to move and test it under /var/www to see how it would fare in a production-ish environment? I would really appreciate the help and advice here. I'm learning more as I go along, but dealing with Linux files and permissions is a bit of a new ballgame for me! Thank you
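
    As a starting point for part 1, a sketch only (paths and the user name follow the question; the exact modes depend on whether the PHP code itself needs to write into the tree):

        # your user keeps ownership; Apache's group may read
        sudo chown -R myuser:www-data /home/myuser/public_html

        # directories traversable, files readable
        find /home/myuser/public_html -type d -exec chmod 755 {} \;
        find /home/myuser/public_html -type f -exec chmod 644 {} \;

        # Apache also needs execute (search) permission on the home directory itself
        chmod o+x /home/myuser

    With this arrangement sFTP uploads keep working because the files stay owned by your own account; switch to 775/664 with the setgid bit on directories only if the web server itself has to write there.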

    Read the article

  • Do you think we will ever settle on a "standard" platform? [closed]

    - by GazTheDestroyer
    The recent explosion of phone platforms has depressed me (slightly), and made me wonder if we will ever reach any kind of standard for presentation. I don't mean language or IDE. Different languages have different strengths and I can see that there may always be a need for disparity, although I do note that languages are merging somewhat in functionality, with traditional imperative languages like C++ now supporting things like lambdas. What I'm really talking about is a common presentation mechanism. Before smartphones and tablets came along, the web seemed to be finally becoming a reasonable platform for presenting an application that was globally accessible, not just geographically but by platform too. Sure, there are still (sometimes infuriating) implementation differences and quirks, but if you wrote a decent site you knew it could be accessed on anything from a PC to a phone to a C64 running the right software. "Write Once, Run Anywhere" seemed to finally be becoming a reality. However, in the last few years we've seen an explosion of mobile operating systems, and the ubiquitous "app". A good site is no longer enough; you need a native "app", and of course we have a sudden massive disparity in OS, language, and APIs needed to write them as each battles for supremacy. It's kind of weird how the cycle of popularity goes. Mainframes with terminals - thin client. PC - thick client. Web browser - thin client. Phone app - thick(ish) client. I just wonder if you think there will ever be a global standard for clients, or whether the "shiny and different" cycle will always continue along with the battle of the tech du jour.

    Read the article

  • Learning to implement DIC in MVC

    - by Tom
    I am learning to apply DIC to an MVC project, so I have sketched this DDD-ish, DIC-ready-ish layout to the best of my understanding. I have read many blogs, articles and wikis over the last few days; however, I am not confident about implementing it correctly. Could you please demonstrate how to put these pieces into a DIC the proper way? I prefer Ninject or Windsor after all the reading, but any DIC will do as long as I can get the correct idea of how to do it. Web controller...

        public class AccountBriefingController
        {
            //create
            private IAccountServices accountServices { get; set; }

            public AccountBriefingController(IAccountServices accsrv)
            {
                accountServices = accsrv;
            }

            //do work
            public ActionResult AccountBriefing(string userid, int days)
            {
                //get days of transaction records for this user
                BriefingViewModel model = accountServices.GetBriefing(userid, days);
                return View(model);
            }
        }

    View model...

        public class BriefingViewModel
        {
            //from user repository
            public string UserId { get; set; }
            public string AccountNumber { get; set; }
            public string FirstName { get; set; }
            public string LastName { get; set; }

            //from account repository
            public string Credits { get; set; }
            public List<string> Transactions { get; set; }
        }

    Service layer...

        public interface IAccountServices
        {
            BriefingViewModel GetBriefing();
        }

        public class AccountServices
        {
            //create
            private IUserRepository userRepo { get; set; }
            private IAccountRepository accRepo { get; set; }

            public AccountServices(UserRepository ur, AccountRepository ar)
            {
                userRepo = ur;
                accRepo = ar;
            }

            //do work
            public BriefingViewModel GetBriefing(string userid, int days)
            {
                var model = new BriefingViewModel(); // <-- is it okay to new a model here??
                var user = userRepo.GetUser(userid);
                if (user != null)
                {
                    model.UserId = userid;
                    model.AccountNumber = user.AccountNumber;
                    model.FirstName = user.FirstName;
                    model.LastName = user.LastName;
                    //account records
                    model.Credits = accRepo.GetUserCredits(userid);
                    model.Transactions = accRepo.GetUserTransactions(userid, days);
                }
                return model;
            }
        }

    Domain layer and data models...

        public interface IUserRepository
        {
            UserDataModel GetUser(string userid);
        }

        public interface IAccountRepository
        {
            List<string> GetUserTransactions(string userid, int days);
            int GetUserCredits(string userid);
        }

        // Entity Framework DBContext goes under here

    Please point out if my implementation is wrong; e.g. I feel that new BriefingViewModel() in AccountServices.GetBriefing seems wrong, but I don't know how to fit this into the DIC. Thank you very much for your help!

    Read the article

  • 16-bit PNGs in Slick2D

    - by Neglected
    I'm working on a project and I'm using some 3rd-party sprites just to get it off the ground; recently I've run into a hitch. Slick2D doesn't seem to want to load my images; that is, it warns me that the images are the wrong bit-depth. All the images are in 16-bit PNG form (PNG is required for transparency). Is there any way I can disable the warning (being the bad-guy programmer; the console print for each individual load REALLY SLOWS DOWN loading), or is there another solution? I was thinking about converting all images (using ImageMagick) to .gif (with an alpha channel). Would there be any loss in quality between formats? EDIT: I tried using ImageMagick, but some of the sprites use pure black, so I can't do that without wrecking the image. EDIT2: Using "identify" on any of the images shows them as being 8-bit... but Slick2D won't load them. What the hell? D: EDIT3: Issue solved (ish). If you are googling this, just disable the Java PNG loader in Slick by sticking this somewhere in your code (like the main method):

        System.setProperty("org.newdawn.slick.pngloader", "false");

    Read the article

  • how to store and retrieve/generate UI?

    - by thindery
    I'm working on a site that will have hundreds, and eventually thousands, of paper products that users can customize online. Here is a very simple sample of what needs to be generated based on the product id: demo. This is a very simple version; I plan on replacing the text fields with prettier elements (like the slider on tab 3). I imagine most of this can be achieved via jQuery. So basically a product will have multiple pages (tabs), with multiple form elements on each page. I've never done a large-scale project like this before, and I am looking for ideas/suggestions for how I can store the info that needs to be retrieved to generate the UI for each product. For each product, I need to store how many pages there are, what form fields are on each page, and the order of the fields on the page, as well as default text values and form options (font size, etc.). Then, with all this info stored somewhere, I can have the web app retrieve it and generate the UI with text fields, sliders, and other jQuery-ish form enhancements for that particular product. Can anyone toss out some suggestions, links, blogs, or tutorials? I'm not really sure where to begin with this or what I need to start investigating. I have experience with PHP, MySQL, JavaScript, jQuery, HTML, CSS, and that is really about it. I'm open to learning (and would enjoy exploring) new frameworks, programming, etc. that will really get this web app working correctly, efficiently, and effectively. Maybe I should start looking into an MVC framework? Like I said, I really have no idea what the best approach is. Please let me know your suggestions!

    Read the article
