Search Results

Search found 12328 results on 494 pages for 'cool features'.


  • Continuous Deployment with an ASP.NET website?

    - by Amber Shah
    I have a website in C#/ASP.NET that is currently in development. When we are in production, I would like to do releases frequently over the course of the day as we fix bugs and add features (like this: http://toni.org/2010/05/19/in-praise-of-continuous-deployment-the-wordpress-com-story/). If you upload a new version of the site, or even change a single file, it kicks out the users that are currently logged in and makes them start over on any forms they were filling in. Is there a trick to doing deployments of .NET sites without interfering with users?
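
    One commonly suggested angle (a sketch, not a guaranteed fix): in-process session state dies on every app-domain recycle, so moving it out of process - here to SQL Server, with a placeholder server name - at least keeps users logged in across deployments:

        <system.web>
          <!-- out-of-process session state survives app-domain recycles on deploy;
               "dbserver" is a placeholder -->
          <sessionState mode="SQLServer"
                        sqlConnectionString="Data Source=dbserver;Integrated Security=SSPI"
                        timeout="20" />
        </system.web>

    The session database has to be prepared with the aspnet_regsql.exe tool first, and unsaved form fields still live in the page, so this preserves sessions rather than in-flight form contents.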

    Read the article

  • Why does Java have an interpreter and not a compiler?

    - by Galaxin
    I am a newbie to Java and was wondering why Java has an interpreter rather than a compiler. When moving from C++ to Java we come across the differences between the two, the compilation process being one of them. 1. A major difference between a compiler and an interpreter is that a compiler translates the whole program at once and reports all the errors together, whereas an interpreter works line by line. 2. A compiler also takes less time to process code than an interpreter. If Java was developed for more advanced and easier features and implementations, why was it restricted to an interpreter, given the facts above? Is there a special reason why this is so? If yes, what is it?
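
    For reference, the picture is actually a hybrid of both steps - a sketch of the standard toolchain (file names illustrative):

        javac Hello.java    # compile step: the whole file is checked at once,
                            #   all errors reported together (compiler behaviour)
        java Hello          # run step: the JVM loads Hello.class bytecode and
                            #   interprets it, JIT-compiling hot paths to native code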

    Read the article

  • Eclipse call hierarchy skips calls in undefined #ifdef regions

    - by stupakov
    Hi all, the "call hierarchy" and "declaration" features in Eclipse CDT omit results that exist in undefined (greyed-out) #ifdef regions. Example:

        void blah(void)
        {
        #ifndef ABC
            foo();
        #else           // line is greyed out
            bar();      // line is greyed out
        #endif          // line is greyed out
        }

    The call hierarchy for foo() will list blah() as a caller; the call hierarchy for bar() will not list blah(). I'm not expecting it to fully resolve which #ifdef blocks will get compiled; I simply would like it to return all calls/declarations of the function I'm searching for, regardless of the #ifdef blocks that surround it. Other IDEs such as SlickEdit are able to do this. Does anyone know of a way to get Eclipse to adopt this behavior? Thanks.

    Read the article

  • How to manage maintenance/bug-fix branches in Subversion when third-party installers are involved?

    - by Mike Spross
    We have a suite of related products written in VB6, with some C# and VB.NET projects, and all the source is kept in a single Subversion repository. We haven't been using branches in Subversion (although we do tag releases now), and simply do all development in trunk, creating new releases when the trunk is stable enough. This causes no end of grief when we release a new version, issues are found with it, and we have already begun working on new features or major changes to the trunk.

    In the past, we would address this in one of two ways, depending on the severity of the issues and how stable we thought the trunk was: hurry to stabilize the trunk, fix the issues, and release a maintenance update based on the HEAD revision (which had the side effect of releases that fixed the bugs but introduced new issues because of half-finished features or bugfixes in trunk); or make customers wait until the next official release, which is usually a few months away.

    We want to change our policies to better deal with this situation. I was considering creating a "maintenance branch" in Subversion whenever I tag an official release. New development would then continue in trunk, and I could periodically merge specific fixes from trunk into the maintenance branch, creating a maintenance release when enough fixes have accumulated, while we continue to work on the next major update in parallel. I know we could also keep a more stable trunk and create a branch for new updates instead, but keeping current development in trunk seems simpler to me.

    The major problem is that while we can easily branch the source code from a release tag and recompile it to get the binaries for that release, I'm not sure how to handle the setup and installer projects. We use QSetup to create all of our setup programs, and right now when we need to modify a setup project, we just edit the project file in place (all the setup projects, and any dependencies that we don't compile ourselves, are stored on a separate server, and we make sure to always compile the setup projects on that machine only). However, since we may add or remove files from the setup as our code changes, there is no guarantee that today's setup projects will work with yesterday's source code.

    I was going to put all the QSetup projects in Subversion to deal with this, but I see some problems with that approach. I want the creation of setup programs to be as automated as possible; at the very least, I want a separate build machine where I can build the release I want (grabbing the code from Subversion first), grab the setup project for that release from Subversion, recompile the setup, and then copy it to another place on the network for QA testing and eventual release to customers. However, when someone needs to change a setup project (to add a new dependency that trunk now requires, or to make other changes), there is a problem: if they treat it like a source file and check it out on their own machine to edit it, they won't be able to add files to the project unless they first copy the new files to the build machine (so they are available to other developers), then copy all the other dependencies from the build machine to their own machine, matching the folder structure exactly, because QSetup uses absolute paths for any files added to a setup project. This means installing a bunch of setup dependencies onto development machines, which seems messy (and could destabilize the development environment if someone accidentally runs the setup project on their machine).

    Also, how do we manage third-party dependencies? For example, if the current maintenance branch used MSXML 3.0 and the trunk now requires MSXML 4.0, we can't go back and create a maintenance release if we have already replaced the MSXML library on the build machine with the latest version (assuming both versions have the same filename). The only solution I can think of is to either put all the third-party dependencies in Subversion along with the source code, or to make sure we put different library versions in separate folders (e.g. C:\Setup\Dependencies\MSXML\v3.0 and C:\Setup\Dependencies\MSXML\v4.0). Is one way "better" or more common than the other? Are there any best practices for dealing with this situation? Basically, if we release v2.0 of our software, we want to be able to release v2.0.1, v2.0.2, and v2.0.3 while we work on v2.1, but the whole setup/installation project and setup dependency issue is making this more complicated than the typical "just create a branch in Subversion and recompile as needed" answer.
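
    For the branching mechanics themselves, a minimal Subversion sketch (the URLs and revision number are placeholders):

        # cut a maintenance branch from the release tag
        svn copy http://server/repo/tags/v2.0 http://server/repo/branches/v2.0.x \
            -m "Create v2.0.x maintenance branch"

        # later: cherry-pick a trunk bugfix (say r1234) into the branch
        svn checkout http://server/repo/branches/v2.0.x v2.0.x
        cd v2.0.x
        svn merge -c 1234 http://server/repo/trunk
        svn commit -m "Merge r1234 from trunk into v2.0.x"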

    Read the article

  • What is the point of heightmaps?

    - by Jake Petroules
    I've been pondering this question for a while now... many 3D engines support advanced terrain rendering using quadtrees, LOD... all the features you'd expect. But every engine I've seen loads height data from heightmaps: grayscale bitmaps. I just can't understand how this is useful - each point in a heightmap can have one of only 256 values. But what if you wanted to model Mt. Everest, with a precision of 1 meter or even finer? That's far outside a range of 256. Of course I understand that you can implement your own terrain format to achieve this, but I just can't see why heightmaps are so widely used despite this great limitation.
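
    For scale: Everest at 1 m precision needs roughly 8,848 distinct levels, so engines that care use 16-bit heightmaps (65,536 levels) and treat each sample as a normalized value rescaled into a world-space range, rather than as meters directly. A sketch of the decode step:

        #include <cstdint>

        // Heightmap samples are normalized, not meters: rescale into a world range.
        // 8-bit:  256 steps   -> ~34.7 m per step over 0..8848 m
        // 16-bit: 65536 steps -> ~0.135 m per step over the same range
        float decodeHeight(uint16_t raw, uint16_t maxRaw, float minH, float maxH)
        {
            return minH + (static_cast<float>(raw) / maxRaw) * (maxH - minH);
        }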

    Read the article

  • MATLAB: draw centroids

    - by Myx
    Hello - my main question is: given a feature centroid, how can I draw it in MATLAB? In more detail, I have an NxNx3 image (an RGB image) from which I take 4x4 blocks and compute a 6-dimensional feature vector for each block. I store these feature vectors in an Mx6 matrix, on which I run the kmeans function and obtain the centroids in a kx6 matrix, where k is the number of clusters and 6 is the number of features per block. How can I draw these cluster centers on my image in order to visualize whether the algorithm is performing the way I wish it to? If anyone has any other way/suggestions for how I can visualize the centroids on my image, I'd greatly appreciate it. Thank you.
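
    One possible visualization (a sketch; k and the block ordering are assumptions): since the centroids live in 6-D feature space rather than image space, color each 4x4 block by its assigned cluster instead of drawing the centroids themselves:

        % features: M-by-6, one row per 4x4 block; N is the assumed image size
        k = 5;                                   % assumed number of clusters
        [labels, centers] = kmeans(features, k); % labels: cluster index per block
        blockGrid = reshape(labels, N/4, N/4);   % column-major, must match block order
        imagesc(blockGrid);                      % one colour per cluster
        colormap(jet(k)); colorbar;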

    Read the article

  • StageWebView calling local JavaScript

    - by wangjl1110
    Friends, I'm currently developing a Flex mobile project, and I need to load local JavaScript using StageWebView. Like:

        var str:String = '<head>' +
            '<script src="myLocalJs.js"/>' +
            '</head><body>...</body>';
        var webView:StageWebView = new StageWebView();
        webView.loadString(str);

    Is there any way to load local JavaScript using StageWebView? I'm not expecting an answer like "there is a project called StageWebViewBridge", since it doesn't have all the features I need. Thanks!!
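
    One workaround to try with plain StageWebView (a sketch; the file layout is an assumption, and on some platforms the files may first need copying to applicationStorageDirectory): package the HTML and JS as files and use loadURL, so the relative <script src> can resolve:

        // assumes assets/page.html and assets/myLocalJs.js ship inside the app package
        var page:File = File.applicationDirectory.resolvePath("assets/page.html");
        var webView:StageWebView = new StageWebView();
        webView.stage = stage;
        webView.viewPort = new Rectangle(0, 0, stage.stageWidth, stage.stageHeight);
        webView.loadURL(page.url); // relative <script src="myLocalJs.js"> now resolves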

    Read the article

  • OpenLayers: Get resolution of map in a given projection (4326)

    - by David Pfeffer
    I'm using OpenLayers to display OpenStreetMap maps. (Though, I'd assume this should be general enough to work for any map product...) I'm displaying some very sophisticated vector overlays, and the amount and resolution of the features I'm returning from the server via GeoJSON to overlay has proven too much for many computers. What I'd like to do now instead is to only send data befitting the resolution of the current zoom. This should be relatively easy to do using the getResolution and calculateBounds methods on the Map object. calculateBounds returns a Bounds object that then can be transformed between the map projection and display projection. How do I transform the resolution into my desired projection (4326 in this case)?
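
    A sketch of one way to get degrees-per-pixel (OpenLayers 2 API): transform the view bounds into EPSG:4326 and divide the extent by the viewport size in pixels:

        var bounds = map.calculateBounds().clone();
        bounds.transform(map.getProjectionObject(),
                         new OpenLayers.Projection("EPSG:4326"));
        var size = map.getSize();
        var degPerPixelX = bounds.getWidth() / size.w;  // horizontal degrees per pixel
        var degPerPixelY = bounds.getHeight() / size.h; // vertical degrees per pixel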

    Read the article

  • MySQL & relational databases: How to handle sharding/splitting on the application level?

    - by Industrial
    Hi everybody, I have been thinking a bit about sharding tables, since partitioning cannot be done with foreign keys in a MySQL table. Maybe there's an option to switch to a different relational database that features both, but I don't see that as an option right now. So, the sharding idea seems like a pretty decent thing. But what's a good approach to doing this at the application level? I am guessing that a starting point would be to suffix tables with the max value of the primary key in each table, something like products_4000000, products_8000000 and products_12000000. The application would then check, with a simple if-statement, whether the requested id (PK) is smaller than four, eight or twelve million before making any actual database calls. So, is this a step in the right direction, or are we doing something really stupid?
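
    That if-chain can collapse into a small routing helper; a sketch (PHP chosen just for illustration, shard width taken from the question):

        // Map a product id to its shard table: ids 1..4,000,000 live in
        // products_4000000, 4,000,001..8,000,000 in products_8000000, and so on.
        function productTable($id, $shardWidth = 4000000) {
            $upperBound = (int)(ceil($id / $shardWidth) * $shardWidth);
            return "products_" . $upperBound;
        }

        // usage: $table = productTable(5123456);  // "products_8000000"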

    Read the article

  • Implementing communication protocols in C/C++

    - by MeThinks
    I am about to start implementing a proprietary communication protocol stack in software, but I'm not sure where to start. It is the kind of work I have not done before, and I am looking for help in terms of resources on best/recommended approaches. I will be using C/C++, and I am free to use libraries (BSD/Boost/Apache) but no GPL. I have used C++ extensively, so using the features of C++ is not a problem. The protocol stack has three layers and is already fully specified and formally verified, so all I need to do is implement and test it fully in the specified languages. I should also mention that the protocol is very simple but can run on different devices over a reliable physical transport layer. Any help with references/recommendations will be appreciated. I am willing to use a different language if only to help me understand how to implement them, but I will eventually have to resort to the language of choice.
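
    One common shape for a layered stack in C++ (a sketch, independent of the actual spec): give every layer the same interface and chain them, so each layer only adds and strips its own header:

        #include <cstdint>
        #include <vector>

        // Every layer exposes the same two directions: send() travels down the
        // stack, onReceive() travels up; each layer frames/deframes its own header.
        class Layer {
        public:
            virtual ~Layer() = default;
            void setLower(Layer* l) { lower_ = l; }
            void setUpper(Layer* u) { upper_ = u; }
            virtual void send(std::vector<uint8_t> pdu) = 0;      // add header, pass down
            virtual void onReceive(std::vector<uint8_t> pdu) = 0; // strip header, pass up
        protected:
            Layer* lower_ = nullptr;
            Layer* upper_ = nullptr;
        };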

    Read the article

  • ASP.NET Report Viewer - custom filter parameters

    - by Chris
    Hi all, for a data warehouse project I need to know about some best practices regarding custom report viewer filters/parameters. Usually I use the standard parameter features for reports, like multiple select boxes, check boxes, text boxes etc. But for the current project, some reports require more complex report parameters. E.g. a user wants to analyze some measures, and for that needs to set a filter on a specific address. There are over 100,000 addresses to choose from, so he has to have the ability to search for an address (full text). Since such features cannot be built with the standard parameters, I will have to create custom parameters within an ASPX page, which are then passed to the report viewer control. So my question is: are there any best practices for creating custom parameters? Has anyone had similar problems, and if so, how did you solve them?
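
    A sketch of the hand-off (control and parameter names are made up; SetParameters is the standard ReportViewer API): collect the value with your own ASPX search UI and pass the result to the report as an ordinary parameter from code-behind:

        using Microsoft.Reporting.WebForms;

        // selectedAddressId comes from the custom full-text search UI,
        // not from the report's built-in parameter prompt.
        protected void btnRunReport_Click(object sender, EventArgs e)
        {
            string selectedAddressId = hfAddressId.Value; // assumed hidden field
            ReportViewer1.ServerReport.SetParameters(new[] {
                new ReportParameter("AddressId", selectedAddressId)
            });
            ReportViewer1.ServerReport.Refresh();
        }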

    Read the article

  • Is Amazon SQS the right choice here? Rails performance issue.

    - by ole_berlin
    I'm close to releasing a Rails app with the common networking features (messaging, wall, etc.). I want to use some kind of background processing (most likely Bj) for off-loading tasks from the request/response cycle. This would happen when users invite friends via email to join, and for email notifications. I'm not sure if I should just drop these invites and notifications into my database using a model and then process them with a worker every x minutes, or if I should go with Amazon SQS, storing the messages and invites there and letting my worker retrieve them from Amazon SQS for processing (sending the invites/notifications). The Amazon approach would take load off my database, but I guess it is slower to retrieve messages from there. What do you think?
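
    For the database-backed variant, the worker loop can stay tiny; a sketch (model, mailer and column names are made up):

        # Run by a Bj/cron worker every few minutes. Invite is a hypothetical
        # ActiveRecord model with a 'state' column; InviteMailer is made up too.
        Invite.find_all_by_state("pending").each do |invite|
          InviteMailer.deliver_invitation(invite)
          invite.update_attribute(:state, "sent")
        end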

    Read the article

  • Recommendation for a jQuery table pager plugin?

    - by chobo2
    Hi, I was trying to use the pager plugin that comes with the tablesorter plugin, but I can't get it to work, as you can see from my previous post: http://stackoverflow.com/questions/2836680/need-help-with-jquery-tablesorter-pager-plugin. I've given up on that plugin, since no one seems to have a solution for making it work, and I need to get this in place soon. So now I am looking for a new one, but it must have the following features:

    1. Works on tables.
    2. Works on tables that have the tablesorter 2.0 plugin on them (so I don't want a pager plugin that comes with its own table sorter, since I don't want to change that; it should be a standalone pager plugin).
    3. Allows rows to be added to the table dynamically, and somehow updates the pager so the new row becomes part of the paging (see the sketch below).

    Thanks
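
    On point 3, whichever plugin is chosen will likely need a nudge after DOM changes; with tablesorter-based pagers the usual convention is (a sketch, event names as documented for tablesorter 2.0):

        // append the new row, then tell tablesorter to re-read the table body
        $("#myTable tbody").append("<tr><td>new</td><td>row</td></tr>");
        $("#myTable").trigger("update");
        // pager builds that keep their own row cache also want this event
        $("#myTable").trigger("appendCache");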

    Read the article

  • Finding a download box element with Capybara in a Cucumber test

    - by a5his
    Hi, I have a link that downloads a file. As I click the link, it displays a dialog box with "save" and "open" options and "Cancel" and "OK" buttons. I want to find the "OK" and "Cancel" buttons in a Cucumber test. I took help from the link below, but it didn't help much: "How to test a confirm dialog with Cucumber?"

        # feature
        And I want to click "OK"

        # step definition - the confirm stub must be in place before the
        # click that triggers the dialog fires
        Then /^I want to click "([^\"]*)"$/ do |option|
          retval = (option == "OK") ? "true" : "false"
          page.evaluate_script("window.confirm = function() { return #{retval}; }")
        end

    Read the article

  • Accessing the contents of a file in a web application without uploading

    - by UniCoder
    As far as I can tell, it is impossible to access the contents of files on the user's computer in a web application without first uploading them to the server and then re-downloading them to the user, unless some sort of plug-in is used (Flash, etc.). Ideally, the user would load the file directly into local storage, and scripts would then have a chance to process/display/validate/filter it without the user having to wait on an upload. Are there any features in upcoming web standards such as HTML5 that will allow this? If not, why has there been no effort to make this possible, and how can I work around it without getting stuck with plugins?
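
    This is what the HTML5 File API (FileReader) enables where supported; a sketch (the element id is illustrative):

        // Reads a user-selected file entirely client-side; nothing is uploaded.
        // markup assumed: <input type="file" id="picker">
        document.getElementById("picker").addEventListener("change", function (e) {
            var reader = new FileReader();
            reader.onload = function (ev) {
                var text = ev.target.result;     // file contents as a string
                console.log(text.slice(0, 200)); // validate/filter/display here
            };
            reader.readAsText(e.target.files[0]);
        });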

    Read the article

  • In Entity Framework we can use model first, DB first, or code first, but how can we create tables programmatically?

    - by AukI
    In Entity Framework we can use three approaches: model first, code first, and database first, but each of them requires a manual hand touch (creating the database, creating the model, or writing the POCO/entity class code) before proceeding to the next step (using EF in context). What if I want to create the database, tables and table relationships programmatically and still have the features of Entity Framework 4.3? To be more specific: following this example, http://support.microsoft.com/kb/307283, we can create a database, tables and everything using SQL commands, but then we don't get the advantages of Entity Framework. If we want those, what should we do?
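
    A sketch of the code-first route against EF 4.3, where the classes are the only "manual" part and the schema is created at run time:

        using System.Data.Entity;

        public class Product
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public virtual Category Category { get; set; } // FK inferred by EF
        }

        public class Category
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class ShopContext : DbContext
        {
            public DbSet<Product> Products { get; set; }
            public DbSet<Category> Categories { get; set; }
        }

        public static class Program
        {
            public static void Main()
            {
                using (var db = new ShopContext())
                {
                    // creates the database, tables and the FK from the model
                    db.Database.CreateIfNotExists();
                }
            }
        }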

    Read the article

  • Drupal: how to upgrade a running production website to a dev version?

    - by FractalizeR
    Can you help me understand how to do Drupal website deployment and development? Suppose I developed version 1.0 of the Berty&Frank website. I copied everything to their production server, and it is alive and kicking now. The site is already full of content and is growing. I am asked to add additional features to the website, and I am now experimenting with ways to implement them in a dev version: I am creating/deleting content types, filling the created nodes with demo data just to see how they look, etc. Now I have found the way I want, and I want to upgrade the production website to the same structure as my dev version. How do I do that? Is the only way to manually repeat every change I made in the dev version?
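
    One common Drupal pattern for structural changes (a sketch using the Drupal 6 schema API; the module and table names are made up) is to record each change as an update hook that production replays via update.php, instead of repeating clicks by hand:

        // mymodule.install (hypothetical module, Drupal 6 schema API)
        function mymodule_update_6001() {
          $ret = array();
          // replay a structural change the dev site gained, e.g. a new table
          db_create_table($ret, 'mymodule_widgets', array(
            'fields' => array(
              'id'    => array('type' => 'serial', 'not null' => TRUE),
              'title' => array('type' => 'varchar', 'length' => 255, 'not null' => TRUE),
            ),
            'primary key' => array('id'),
          ));
          return $ret;
        }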

    Read the article

  • Which .NET performance and/or memory profilers will allow me to profile a DLL?

    - by Eric
    I write a lot of .NET-based plug-ins for other programs, which are usually compiled as a DLL that is started by the native application. I've been using EQATEC's profiler, which works great, but now I would like something with more features, including the ability to profile memory usage. I tried out Red Gate's ANTS Profiler, but as far as I can see there is no way to profile a DLL; the only option is to profile an EXE. So my question is: what other profiling tools are available that will allow me to profile a single library DLL rather than an EXE? I'm assuming this would require injecting profiling code into the library, as EQATEC does?

    Read the article

  • How can I do web programming with Lisp or Scheme?

    - by Castro
    I usually write web apps in PHP, Ruby or Perl. I am starting to study Scheme and I want to try some web project with this language, but I can't find what the best environment for this is. I am looking for the following features: a simple way to get the request parameters (something like: get-get #key, get-post #key, get-cookie #key); MySQL access; HTML form generators, processing, validators, etc.; helpers for filtering user input (something like htmlentities, escaping variables for use in queries, etc.); FLOSS; and GNU/Linux friendly. So, thanks in advance for all replies.

    Read the article

  • Rolling back or re-creating the master branch in git?

    - by Matthew Savage
    I have a git repo which has a few branches: there's the master branch, which is our stable working version, and then there is a development/staging branch in which we're doing new work. Unfortunately it would appear that, without thinking, I was a bit overzealous with rebasing and have pulled all of the staging code into master over a period of time (about 80 commits... yes, I know: stupid, clumsy, poor code-man-ship, etc....). Because of this it is very hard for me to do minor fixes on the current version of our app (a Rails application) and push out the changes without also pushing out the 'staged' new features that we don't yet want to release. I am wondering if it is possible to do the following (see the sketch below): 1. Determine the last 'trunk' commit. 2. Take all commits from that point onward and move them into a separate branch, more or less rolling back the changes. 3. Start using the branches like they were made for. Unfortunately, I'm still continually learning about git, so I'm a bit confused about what to really do here. Thanks!
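
    A sketch of that recovery (the SHA is a placeholder; note that force-pushing rewrites published history, so collaborators have to reset their clones):

        git checkout master
        git branch staging                # keep everything currently on master
        git reset --hard <last-good-sha>  # rewind master to the last stable commit
        git push --force origin master    # publish the rewound master
        git push origin staging           # publish the new development branch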

    Read the article

  • How to stretch the image from one screen to two screens

    - by wxiiir
    I want to be able to stretch the image from one monitor across a second monitor, even if I have to use some software to do it. For example, I want the top half of the image that is now shown on my first monitor to occupy the whole second monitor, and the bottom half to occupy the whole first monitor, or the other way around; I don't really care, as long as it works. It would also be good to know how to do the same with the left and right halves. I know that image pixels would be twice the width or height this way, but I don't really care as long as it works; basically I want the picture to show on both monitors, but with the same pixels as before. I have an HD4870 and Windows 7, and the HD4000 family doesn't support having two monitors behave like one large monitor - only HD5000 and upwards does. That would solve my problem without any of the drawbacks, but it just can't be done (or maybe it can via software, but I'm just too tired of searching). A solution that makes almost any graphics card treat two monitors as one large one is the Matrox DualHead2Go, but that's just as expensive as a good HD5000 card, so it's not worth it. Thanks in advance.

    EDIT: I guess that nobody so far was able to fully comprehend my problem, even though it was very explicitly written, so I will elaborate some more. My HD4870 can drive two monitors, but some software, like games, won't run across both monitors, which sucks. There are ways to circumvent this problem; two of them are perfect, or almost perfect, but expensive, and the third would be a software solution. The first is an HD5000-family video card, which works fine with both monitors. The second is a Matrox DualHead2Go, which makes my HD4870 detect the two monitors as one large monitor. The third is software that makes my two displays be detected as one large display, then captures the output of the video card, splits the image, and renders the halves as 2D images to both monitors - or, simpler, but doubling the width or height of the output pixels: capture the output of the graphics card for one screen, split it in two, enlarge each half to fit a monitor, and output it to both monitors.

    P.S. By capturing the output of the video card I just mean making the video card process the image in a certain way. Making the video card detect two monitors as one large one via software may be impossible or impractical, but stretching the output as a 2D image from one monitor to both should be a walk in the park for some coders, so I would expect such a program to exist, or some widespread dual-monitor software to have such a function.

    Read the article

  • Finding the longest road in a Settlers of Catan game algorithmically

    - by Jay
    I'm writing a Settlers of Catan clone for a class. One of the extra credit features is automatically determining which player has the longest road. I've thought about it, and it seems like some slight variation on depth-first search could work, but I'm having trouble figuring out what to do with cycle detection, how to handle the joining of a player's two initial road networks, and a few other minutiae. How could I do this algorithmically? For those unfamiliar with the game, I'll try to describe the problem concisely and abstractly: I need to find the longest possible path in an undirected cyclic graph.
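
    Since roads are edges, the clean formulation is the longest trail (edges unique, vertices may repeat), which sidesteps the cycle worry, and Catan networks are small enough for brute force. A sketch (in Python for brevity):

        def longest_road(adj):
            """adj: dict mapping node -> set of neighbouring nodes, for one
            player's road network. Returns the longest trail's edge count.
            Exponential, but a player has at most 15 road pieces."""
            best = 0

            def dfs(node, used, length):
                nonlocal best
                best = max(best, length)
                for nxt in adj[node]:
                    edge = frozenset((node, nxt))
                    if edge not in used:          # each road segment used once
                        dfs(nxt, used | {edge}, length + 1)

            for start in adj:                     # also covers split networks
                dfs(start, frozenset(), 0)
            return best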

    Read the article

  • Alternative libraries for loading PNG images

    - by Robert
    My Java (J2SE) application reads a lot of PNG images from the web, and some of them use features, such as a transparency color for true-color images (the tRNS chunk), that Sun's/Oracle's PNGImageReader implementation simply ignores. Therefore the common approach of loading via ImageIO.read(...) does not work for me, as it relies on this incomplete PNGImageReader implementation. Does anybody know a PNG reader implementation that can read all forms of PNG images correctly - those with a color table or true color, and alpha transparency or a transparent color? As it is for a GPL project, it should be a non-commercial one that can be included in the app without licensing problems. Edit: Maybe this question was too specific, so let me rephrase it: who knows of alternative implementations and libraries that are able to load PNG files? I will then test the implementations for their ability to load some test PNG images. Edit 2: The end result has to be a BufferedImage.
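
    One Apache-licensed candidate for that test list is Apache Sanselan, which ships its own pure-Java PNG decoder; a sketch (whether it honours tRNS for the problem files is exactly what would need testing):

        import java.awt.image.BufferedImage;
        import java.io.File;
        import org.apache.sanselan.Sanselan;

        public class PngProbe {
            public static void main(String[] args) throws Exception {
                // decodes with Sanselan's own PNG reader instead of ImageIO's
                BufferedImage img = Sanselan.getBufferedImage(new File(args[0]));
                System.out.println(img.getWidth() + "x" + img.getHeight()
                        + ", hasAlpha=" + img.getColorModel().hasAlpha());
            }
        }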

    Read the article

  • C++ build systems

    - by flo
    I will soon start a new C++ project (it may have some C components as well) and I am looking for a modern, industrial-strength (i.e. non-beta) build system. The software will be created by several developers over 3-5 years and will run on Linux (Mac OS X and Windows might be supported later). I am looking for something that has better comprehensibility, ease of use and maintainability than e.g. make, but is still powerful enough to handle a complex project. Open source software is preferred. I have looked into Boost.Build, CMake, Maven and SCons so far, and liked features and concepts of all of them, but I'm lacking the experience to make a decision for a large project.
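
    For a sense of the comprehensibility axis, a minimal CMake example for a mixed C/C++ project (target and file names are placeholders):

        cmake_minimum_required(VERSION 2.8)
        project(MyProject C CXX)

        add_library(core src/core.cpp src/legacy.c)
        add_executable(myapp src/main.cpp)
        target_link_libraries(myapp core)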

    Read the article

  • Updating iPhone application behaviour

    - by Jim
    Hi, I developed a database-backed application for the iPhone (SQLite database). Now I want to update that application with more features, i.e. push an update for the same application. I am concerned about the user data when pushing the update, so my question is: will pushing an update clear all the data that is stored in the .sqlite file? If so, how do I push an application update without destroying the existing data in the database file? Please advise. Thanks, Jim.
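
    Relevant background: an app update replaces the application bundle but leaves the Documents directory intact, so the usual pattern is to keep the live database in Documents and copy a bundled seed database only on first launch; a sketch:

        // Keep the live database in Documents (preserved across updates);
        // copy the bundled seed database only if it is not there yet.
        NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                             NSUserDomainMask, YES);
        NSString *dbPath = [[paths objectAtIndex:0]
                               stringByAppendingPathComponent:@"app.sqlite"];
        NSFileManager *fm = [NSFileManager defaultManager];
        if (![fm fileExistsAtPath:dbPath]) {
            NSString *seed = [[NSBundle mainBundle] pathForResource:@"app"
                                                             ofType:@"sqlite"];
            [fm copyItemAtPath:seed toPath:dbPath error:NULL];
        }
        // later schema changes ship as ALTER TABLE migrations run at launch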

    Read the article
