Search Results

Search found 12328 results on 494 pages for 'cool features'.

Page 390/494

  • What are the most frustrating Python hacks to unwind, rewrite, etc.?

    - by Bialecki
    My impression of Python from the short time I've been developing with it is that it's incredibly powerful and flexible, but I can't help but feel that "with great power comes great responsibility." So while I've read numerous blog posts about simple and elegant Python snippets that solve a problem, I wonder if there are design patterns or abuses of Python language features that, once built into an application or library, make the code incredibly brittle and nearly impossible to refactor. So the question is basically: what are the most frustrating, but somewhat common, Python "hacks" or language-feature abuses that someone can introduce that will cause nightmares for future maintainers of that code?
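
    A canonical illustration of the kind of hack in question - a hypothetical sketch, not an example from any real codebase - is monkey-patching a shared module at import time, which silently changes behavior for every other caller in the process:

      # Hypothetical sketch: import-time monkey-patching of a stdlib module.
      # Merely importing this module changes json.dumps for the whole process,
      # exactly the kind of action-at-a-distance that makes code brittle and
      # hard to refactor.
      import json

      _real_dumps = json.dumps

      def _dumps_with_hidden_policy(obj, **kwargs):
          kwargs.setdefault("sort_keys", True)  # hidden global policy change
          return _real_dumps(obj, **kwargs)

      json.dumps = _dumps_with_hidden_policy  # every caller is now affected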

    Read the article

  • How, in general, can a web framework support the REST style?

    - by juro
    I would like to know, in general, what makes a web framework suitable for designing a RESTful app. One goal, for example, is to provide HTTP request routing, so that requests are automatically sent to the appropriate controllers. From an architectural point of view, web frameworks based on the MVC pattern seem more suitable for REST. What other features of web frameworks are helpful when building apps that satisfy the REST constraints? Is there any reason you consider certain languages (Python/Java) or web frameworks (Django/TurboGears/Jersey/Restlets/...) to be the most applicable ones?
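
    The routing idea the question mentions - mapping an HTTP method and path pattern to a handler - can be sketched in a few lines of framework-agnostic Python (illustrative only; all names are made up, not from any particular framework):

      # Minimal sketch of REST-style request routing.
      import re

      ROUTES = []

      def route(method, pattern):
          """Register a handler for an (HTTP method, path pattern) pair."""
          def register(handler):
              ROUTES.append((method, re.compile(pattern + r"$"), handler))
              return handler
          return register

      @route("GET", r"/articles/(?P<id>\d+)")
      def show_article(id):
          return "article %s" % id

      def dispatch(method, path):
          """Send a request to the first handler whose route matches."""
          for m, pat, handler in ROUTES:
              match = pat.match(path)
              if m == method and match:
                  return handler(**match.groupdict())
          return "404 Not Found"

      print(dispatch("GET", "/articles/42"))  # -> article 42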

    Read the article

  • Good code visualization / refactoring tools for C++?

    - by Paul D.
    I've found myself coming across a lot of reasonably large, complicated codebases at work recently which I've been asked to either review or refactor, or both. This can be extremely time-consuming when the code is highly concurrent, makes heavy use of templates (particularly static polymorphism), and has logic that depends on callbacks/signals/condition variables/etc. Are there any good visualization tools for C++ at all, and of those, are there any that actually play well with "advanced" C++ features? Anything would probably be better than my approach now, which is basically pen+paper or stepping through the debugger. The debugger method can be good for following a particular code path, but isn't great for seeing the big picture you really need when doing serious refactoring. EDIT: I should mention that Visual Studio plugins aren't going to be a lot of help to me, since our stuff is mostly Linux-only.

    Read the article

  • Is my TFS2010 backup/restore hosed?

    - by bwerks
    Hi all, I recently set up a sandbox TFS to test TFS-specific features without interfering with the production TFS. I'm glad I did this sooner rather than later: I hadn't been backing up the encryption key from SSRS, and upon restoring the reporting databases they remained inactive, requiring initialization that could only come from applying the encryption key. Said encryption key was lost when I nuked the partition after backing up the TFS databases. The only option I seemed to have was to delete the encrypted data. I'm fine with this, since there wasn't much in there to begin with; however, once they're deleted I'm not quite sure how to configure TFS to recognize a new installation of these services while using the restored versions of everything else. Unfortunately, the TFS help file doesn't seem to account for this state. Is there a way to essentially rebuild the reporting and analysis databases? Or are they gone forever?

    Read the article

  • Turbogears 2 vs Django - any advice on choosing replacement for Turbogears 1?

    - by michela
    I have been using TurboGears 1 for prototyping small sites for the last couple of years, and it is getting a little long in the tooth. Any suggestions on making the call between upgrading to TurboGears 2 and switching to something like Django? I'm torn between the familiarity of the TG community, who are pretty responsive and produce pretty good documentation, and the far larger community using Django. I am quite tempted by the built-in CMS features and the Google App Engine support. Any advice? Thanks .M.

    Read the article

  • How to manage maintenance/bug-fix branches in Subversion when third-party installers are involved?

    - by Mike Spross
    We have a suite of related products written in VB6, with some C# and VB.NET projects, and all the source is kept in a single Subversion repository. We haven't been using branches in Subversion (although we do tag releases now), and simply do all development in trunk, creating new releases when the trunk is stable enough. This causes no end of grief when we release a new version, issues are found with it, and we have already begun working on new features or major changes to the trunk. In the past, we would address this in one of two ways, depending on the severity of the issues and how stable we thought the trunk was:

    1. Hurry to stabilize the trunk, fix the issues, and then release a maintenance update based on the HEAD revision. This had the side effect of releases that fixed the bugs but introduced new issues, because of half-finished features or bugfixes that were in trunk.
    2. Make customers wait until the next official release, which is usually a few months away.

    We want to change our policies to better deal with this situation. I was considering creating a "maintenance branch" in Subversion whenever I tag an official release. New development would then continue in trunk, and I could periodically merge specific fixes from trunk into the maintenance branch, creating a maintenance release when enough fixes have accumulated, while we continue to work on the next major update in parallel. I know we could also keep a more stable trunk and create a branch for new updates instead, but keeping current development in trunk seems simpler to me.

    The major problem is that while we can easily branch the source code from a release tag and recompile it to get the binaries for that release, I'm not sure how to handle the setup and installer projects. We use QSetup to create all of our setup programs, and right now when we need to modify a setup project, we just edit the project file in place (all the setup projects, and any dependencies that we don't compile ourselves, are stored on a separate server, and we make sure to always compile the setup projects on that machine only). However, since we may add or remove files from the setup as our code changes, there is no guarantee that today's setup projects will work with yesterday's source code.

    I was going to put all the QSetup projects in Subversion to deal with this, but I see some problems with this approach. I want the creation of setup programs to be as automated as possible. At the very least, I want a separate build machine where I can build the release that I want (grabbing the code from Subversion first), grab the setup project for that release from Subversion, recompile the setup, and then copy the setup to another place on the network for QA testing and eventual release to customers. However, when someone needs to change a setup project (to add a new dependency that trunk now requires, or to make other changes), there is a problem: if they treat it like a source file and check it out on their own machine to edit it, they won't be able to add files to the project unless they first copy the files they need to add to the build machine (so they are available to other developers), then copy all the other dependencies from the build machine to their machine, making sure to match the folder structure exactly. The issue here is that QSetup uses absolute paths for any files added to a setup project. This means installing a bunch of setup dependencies onto development machines, which seems messy (and which could destabilize the development environment if someone accidentally runs the setup project on their machine).

    Also, how do we manage third-party dependencies? For example, if the current maintenance branch used MSXML 3.0 and the trunk now requires MSXML 4.0, we can't go back and create a maintenance release if we have already replaced the MSXML library on the build machine with the latest version (assuming both versions have the same filename). The only solutions I can think of are to either put all the third-party dependencies in Subversion along with the source code, or to make sure we put different library versions in separate folders (i.e. C:\Setup\Dependencies\MSXML\v3.0 and C:\Setup\Dependencies\MSXML\v4.0). Is one way "better" or more common than the other? Are there any best practices for dealing with this situation? Basically, if we release v2.0 of our software, we want to be able to release v2.0.1, v2.0.2, and v2.0.3 while we work on v2.1, but the whole setup/installation project and setup dependency issue is making this more complicated than the typical "just create a branch in Subversion and recompile as needed" answer.
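
    On the Subversion side itself, the proposed scheme is straightforward; a hedged sketch of the commands (repository paths and revision numbers are illustrative):

      # Create the maintenance branch when tagging the 2.0 release
      svn copy ^/trunk ^/branches/2.0.x -m "Maintenance branch for 2.0"

      # Later, cherry-pick a specific trunk fix into a working copy of 2.0.x
      svn merge -c 1234 ^/trunk .
      svn commit -m "Backport r1234 to the 2.0.x maintenance branch"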

    Read the article

  • Implementing communication protocols in C/C++

    - by MeThinks
    I am starting to implement a proprietary communication protocol stack in software, but I am not sure where to start. It is the kind of work I have not done before, and I am looking for help in terms of resources on best/recommended approaches. I will be using C/C++, and I am free to use libraries (BSD/Boost/Apache) but no GPL. I have used C++ extensively, so using the features of C++ is not a problem. The protocol stack has three layers and is already fully specified and formally verified, so all I need to do is implement and test it fully in the specified languages. I should also mention that the protocol is very simple, but can run on different devices over a reliable physical transport layer. Any help with references/recommendations will be appreciated. I am willing to look at a different language if only to help me understand how to implement them, but I will eventually have to resort to the language of choice.
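
    Since the question allows other languages as a learning aid, here is a minimal Python sketch of one common structure for a layered stack, where each layer frames payloads on the way down and unframes them on the way up (all names are illustrative, not from any library):

      # Illustrative sketch of a layered protocol stack.
      class Layer:
          def __init__(self, lower=None):
              self.lower = lower  # the layer beneath this one, if any

          def send(self, payload: bytes) -> bytes:
              framed = self.frame(payload)
              return self.lower.send(framed) if self.lower else framed

          def receive(self, data: bytes) -> bytes:
              raw = self.lower.receive(data) if self.lower else data
              return self.unframe(raw)

          def frame(self, payload: bytes) -> bytes:
              raise NotImplementedError

          def unframe(self, data: bytes) -> bytes:
              raise NotImplementedError

      class LengthPrefix(Layer):
          """Example layer: a 2-byte big-endian length prefix."""
          def frame(self, payload):
              return len(payload).to_bytes(2, "big") + payload

          def unframe(self, data):
              n = int.from_bytes(data[:2], "big")
              return data[2:2 + n]

      stack = LengthPrefix(lower=LengthPrefix())  # toy two-layer stack
      wire = stack.send(b"hello")
      assert stack.receive(wire) == b"hello"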

    Read the article

  • How to stretch the image from one screen to two screens

    - by wxiiir
    I want to be able to stretch the image from one monitor onto a second monitor, even if I have to use some software to do it. For example, I want the top half of the image that is now shown on my first monitor to occupy the whole second monitor, and the bottom half to occupy the whole first monitor, or the other way around; I don't really care, as long as it works. It would be cool to know how to do the same with the left half and right half. I know that image pixels would have twice the length or height this way, but I don't care about that as long as it works; basically, I want the image to show on both monitors, but with the same pixels as before.

    I have an HD 4870 and Windows 7, and the HD 4000 family doesn't support having two monitors behave like one large one; only HD 5000 and upwards does. That would solve my problem without any of the drawbacks, but it just can't be done (or maybe it can via software, but I'm just too tired of searching). A solution to make almost any graphics card treat two monitors as one large one is the Matrox DualHead2Go, but that's just as expensive as a good HD 5000 card, so it's not worth it. Thanks in advance.

    EDIT: I guess that nobody so far was able to fully comprehend my problem, which I thought was very explicitly written, so I will elaborate some more. My HD 4870 can drive two monitors, but some programs, like games, won't run across both monitors, which sucks. There are some ways to circumvent this problem; two of them are perfect or almost perfect but expensive, and the third would be a software solution. The first is an HD 5000-family video card, which will work just fine with both monitors. The second is a Matrox DualHead2Go, which makes my HD 4870 detect my two monitors as one large monitor. The third is software that makes my two displays be detected as one large display, then captures the output of the video card, splits the image, and renders the halves as 2D images to both monitors; or, simpler (but doubling the width or height of the output pixels), software that captures the one-screen output, splits it in two, enlarges each half to fit a monitor, and then outputs them to the monitors.

    P.S. By capturing the output of the video card, I just mean making the video card process the image in a certain way. Making the video card detect two monitors as one large one via software may be impossible or impractical, but stretching the output as a 2D image from one monitor onto both should be a walk in the park for some coders, so it seems likely that such a program exists, or that some widespread dual-monitor software has this function built in.

    Read the article

  • Eclipse call hierarchy skips calls in undefined #ifdef regions

    - by stupakov
    Hi all, the "call hierarchy" and "declaration" features in Eclipse CDT omit results that exist in undefined (greyed-out) #ifdef regions. Example:

      void blah(void)
      {
      #ifndef ABC
          foo();
      #else       // line is greyed out
          bar();  // line is greyed out
      #endif      // line is greyed out
      }

    The call hierarchy for foo() will list blah() as a caller; the call hierarchy for bar() will not list blah(). I'm not expecting it to do full resolution of which #define blocks will get compiled; I simply would like it to return all calls/declarations of the function I'm searching for, regardless of the #define blocks that surround it. Other IDEs such as SlickEdit are able to do this. Does anyone know of a way to get Eclipse to adopt this behavior? Thanks.

    Read the article

  • HTTP POSTed files automatically uploaded to root directory

    - by Baddie
    I just inherited an ASP.NET WebForms web application that I was tasked with refactoring. One of the features is a file upload, and while debugging I noticed that as soon as a file is posted to a certain page/handler, it is automatically uploaded to the root directory of the application. The file is then moved to the proper location. I can't seem to figure out what's causing this automatic upload of the file. Is there something I'm overlooking in ASP.NET WebForms that allows this to happen? Is it an IIS configuration or something?

    Read the article

  • MATLAB: draw centroids

    - by Myx
    Hello - my main question is: given a feature centroid, how can I draw it in MATLAB? In more detail, I have an NxNx3 image (an RGB image) from which I take 4x4 blocks and compute a 6-dimensional feature vector for each block. I store these feature vectors in an Mx6 matrix, on which I run the kmeans function and obtain the centroids in a kx6 matrix, where k is the number of clusters and 6 is the number of features for each block. How can I draw these cluster centers in my image in order to visualize whether the algorithm is performing the way I wish it to? Or if anyone has any other way/suggestions for how I can visualize the centroids on my image, I'd greatly appreciate it. Thank you.
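
    One way to visualize this - sketched below in Python/NumPy for illustration, though the approach translates directly to MATLAB (imshow, then hold on and plot with a marker) - is to map each centroid back to image space by finding the block whose feature vector is nearest to it, then marking that block's center pixel. All names here are illustrative:

      # Sketch: k-means centroids live in 6-D feature space, not pixel space,
      # so locate for each centroid the 4x4 block with the nearest features.
      import numpy as np

      def centroid_pixel_positions(features, centroids, blocks_per_row, block=4):
          """features: (M, 6) block features in row-major block order.
          centroids: (k, 6) centroids from k-means.
          Returns a (k, 2) array of (row, col) pixel centers to mark."""
          positions = []
          for c in centroids:
              i = np.argmin(((features - c) ** 2).sum(axis=1))  # nearest block
              r, q = divmod(i, blocks_per_row)
              positions.append((r * block + block // 2, q * block + block // 2))
          return np.array(positions)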

    Read the article

  • What is the point of heightmaps?

    - by Jake Petroules
    I've been pondering this question a while now... many 3D engines support advanced terrain rendering using quadtrees, LOD... all the features you'd expect. But every engine I've seen loads height data from heightmaps... grayscale bitmaps. I just can't understand how this is useful - each point in a heightmap can have one of only 256 values. But what if you wanted to model Mt. Everest, with detail of 1 meter or even finer? That's far outside the range of 256. Of course I understand that you can implement your own terrain format to achieve this, but I just can't see why heightmaps are so widely used despite their great limitations.
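
    For scale, the tradeoff works out as follows (a back-of-the-envelope sketch; it assumes the full 0-8848 m range of Everest is spread across the heightmap's representable levels):

      # Vertical resolution of a heightmap = elevation range / number of levels.
      everest_m = 8848  # meters of elevation range to represent
      for bits in (8, 16):
          levels = 2 ** bits
          print("%2d-bit: %6.2f m per step" % (bits, everest_m / levels))
      #  8-bit:  34.56 m per step  (far coarser than 1 m detail)
      # 16-bit:   0.14 m per step  (comfortably finer than 1 m)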

    Read the article

  • Rolling back or re-creating the master branch in git?

    - by Matthew Savage
    I have a git repo which has a few branches - there's the master branch, which is our stable working version, and then there is a development/staging branch where we're doing new work. Unfortunately, it would appear that without thinking I was a bit overzealous with rebasing, and have pulled all of the staging code into master over a period of time (about 80 commits... yes, I know: stupid, clumsy, poor code-man-ship, etc....). Because of this, it is very hard for me to make minor fixes to the current version of our app (a Rails application) and push out the changes without also pushing out the 'staged' new features which we don't yet want to release. I am wondering if it is possible to do the following:

    1. Determine the last 'trunk' commit.
    2. Take all commits from that point onward and move them into a separate branch, more or less rolling back the changes.
    3. Start using the branches the way they were meant to be used.

    Unfortunately, though, I'm still continually learning about git, so I'm a bit confused about what to really do here. Thanks!
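
    A hedged sketch of one way to do exactly that (the SHA is a placeholder for the last good commit, found with git log):

      # On master, first park everything done so far on a new branch...
      git branch staging
      # ...then roll master back to the last stable commit
      git reset --hard <last-good-sha>

    If master has already been pushed, rewriting it this way will also require a force push and coordination with anyone else pulling from it.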

    Read the article

  • OpenLayers: Get resolution of map in a given projection (4326)

    - by David Pfeffer
    I'm using OpenLayers to display OpenStreetMap maps. (Though I'd assume this should be general enough to work for any map product...) I'm displaying some very sophisticated vector overlays, and the amount and resolution of the features I'm returning from the server via GeoJSON have proven too much for many computers. What I'd like to do instead is send only data befitting the resolution of the current zoom. This should be relatively easy to do using the getResolution and calculateBounds methods on the Map object. calculateBounds returns a Bounds object that can then be transformed between the map projection and the display projection. How do I transform the resolution into my desired projection (4326 in this case)?
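
    The underlying math is just "transform two points that are one resolution apart and measure their separation." A hedged Python sketch with pyproj (assuming the map projection is the usual EPSG:3857 web mercator; the same idea works inside OpenLayers by transforming points or bounds):

      # Sketch: convert a map resolution (projected units per pixel) into
      # degrees per pixel at the map center. Assumes EPSG:3857 -> EPSG:4326.
      from pyproj import Transformer

      def resolution_in_degrees(res, center_x, center_y):
          to_4326 = Transformer.from_crs(3857, 4326, always_xy=True)
          lon0, _ = to_4326.transform(center_x, center_y)
          lon1, _ = to_4326.transform(center_x + res, center_y)
          return abs(lon1 - lon0)  # degrees of longitude per pixel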

    Read the article

  • Recommendation for a jQuery table pager plugin?

    - by chobo2
    Hi, I was trying to use the pager plugin that comes with the tablesorter plugin, but I can't get it to work, as you can see from my previous post: http://stackoverflow.com/questions/2836680/need-help-with-jquery-tablesorter-pager-plugin. I have given up on that plugin, as no one seems able to come up with a solution for making it work, and I kind of need to get this in place soon. So now I am looking for a new one, but it must have the following features:

    1. Works on tables.
    2. Works on tables that have the tablesorter 2.0 plugin on them (so I don't want a pager plugin that comes with its own table sorter, since I don't want to change that; it should be a standalone pager plugin).
    3. Allows rows to be added dynamically to the table, somehow updating the pager so the new row becomes part of the paging.

    Thanks

    Read the article

  • Finding the download dialog box element with Capybara in a Cucumber test

    - by a5his
    Hi, I have a link that downloads a file. As I click the link, it displays a dialog box with "save" and "open" options and "Cancel" and "OK" buttons. I want to find the "OK" and "Cancel" buttons in a Cucumber test. I took help from the link below, but it didn't help much: How to test a confirm dialog with Cucumber?

    Features code:

      And I want to click "OK"

    Steps code:

      Then /^I want to click "([^\"]*)"$/ do |option|
        retval = (option == "OK") ? "true" : "false"
        page.evaluate_script('window.confirm = function() { return true; }')
        page.click("OK")
      end

    Read the article

  • ASP.NET Report Viewer - custom filter parameters

    - by Chris
    Hi all, for a data warehouse project I need to know about some best practices regarding custom report-viewer filters/parameters. Usually I use the standard parameter features for reports, like multiple select boxes, checkboxes, text boxes, etc. But for the current project, some reports require more complex report parameters. E.g. a user wants to analyze some measures, and for that needs to set a filter on a specific address. There are over 100,000 addresses to choose from, so he has to be able to search for an address (full text). Since such features cannot be built with the standard parameters, I will have to create custom parameters within an ASPX page, which are then passed to the report viewer control. So my question is: are there any best practices on how to create custom parameters? Did anyone have similar problems, and if so, how did you solve them?

    Read the article

  • Is Amazon SQS the right choice here? Rails performance issue.

    - by ole_berlin
    I'm close to releasing a Rails app with the common networking features (messaging, wall, etc.). I want to use some kind of background processing (most likely Bj) for off-loading tasks from the request/response cycle. This would happen when users invite friends via email to join, and for email notifications. I'm not sure if I should just drop these invites and notifications into my database using a model and then process them with a worker every x minutes, or if I should go with Amazon SQS, storing the messages and invites there and letting my worker retrieve them from Amazon SQS for processing (sending the invites/notifications). The Amazon approach would take load off my database, but I guess retrieving messages from there is slower. What do you think?

    Read the article

  • Rails 3 Create method using nested resources?

    - by user1461119
    How can I clean this up using Rails 3 features? I have a post that belongs to a group and also to a user. The group and the user each has_many posts. I am using a nested resource:

      resources :groups do
        resources :posts
      end

      <%= form_for @post, :url => group_posts_path(params[:group_id]) do |f| %>
        ....
      <% end %>

      def create
        @group = Group.find(1)
        @post = @group.posts.build(params[:post])
        @post.user_id = current_user.id
        respond_to do |format|
          if @post.save
            .....
          end
        end
      end

    Thank you.

    Read the article

  • Drupal: how to upgrade a running production website to a dev version?

    - by FractalizeR
    Can you help me understand how to handle Drupal website deployment and development? Suppose I developed version 1.0 of the Berty&Frank website. I copied everything to their production server, and it is alive and kicking now. The site is already full of content and is growing. I am asked to add additional features to the website. I am now experimenting with how I can implement them in a dev version: I am creating/deleting content types, filling the created nodes with demo data just to see how they look, etc. Now I have found the way I want, and I want to upgrade the production website to the same structure as my dev version. How do I do that? Is the only way to manually repeat every change I made in the dev version?

    Read the article

  • Getting Older PHP Extensions

    - by maSnun
    Hello all, it happens that I need the GD, SQLite, and a few more extensions for PHP 5.1, but I can't find out where to get them. I am using WinBinder to develop some desktop applications for Windows with PHP. The minimal PHP 5.1 pack has the WinBinder extension only. I need other extensions for enhanced features like image editing and data storage. Can anybody help? I really need this very much. Thanks and regards, Masnun

    Read the article

  • How can I do web programming with Lisp or Scheme?

    - by Castro
    I usually write web apps in PHP, Ruby or Perl. I am starting to study Scheme, and I want to try some web project with this language, but I can't find what the best environment for this is. I am looking for the following features:

    1. A simple way of getting the request parameters (something like: get-get #key, get-post #key, get-cookie #key).
    2. MySQL access.
    3. HTML form generators, processing, validators, etc.
    4. Helpers for filtering user input data (something like htmlentities, escaping variables for use in queries, etc.).
    5. FLOSS, and GNU/Linux-friendly.

    So, thanks in advance for all replies.

    Read the article

  • Accessing the contents of a file in a web application without uploading

    - by UniCoder
    As far as I can tell, it is impossible to access the content of files on the user's computer in a web application without first uploading them to the server and then re-downloading them to the user, unless some sort of plug-in is used (Flash, etc.). Ideally, the user would "upload" the file directly to local storage, and scripts would then have a chance to process/display/validate/filter it without the user having to wait on an upload. Are there any features in upcoming web standards, such as HTML5, that will allow this? If not, why has there been no effort to make this possible, and how can I work around it without getting stuck with plugins?

    Read the article

  • Cooling Server Rack with Water? Sensible? Reuse energy for small installation?

    - by TomTom
    First - this is not a shopping question. It is not so much about concrete prices as about general feasibility; it makes no sense to go looking for a manufacturer if the approach is bad. I am moving my company to new offices in September, and among other things we will expand and consolidate our number-crunching cluster, which has so far lived in a data center. I have a nice room in the basement prepared now, and I am thinking about cooling. We will likely run up a power usage of around 10 kW by the end of the year. That is a LOT of heat, and cooling will be expensive.

    I am located in south Poland, close to the German border. This is an area where water is available relatively cheaply - "wasting water" is not a concern here. My situation is thus a lot different than, for example, in Spain ;)

    Physics tells me that to heat 1 liter of water by 1 degree I need 1 kcal, and 1 kWh is about 860 kcal (and we can assume close to 100% efficiency - water heat exchange is pretty efficient). That means 1 kWh can heat 860 liters by 1 degree. 10 kW and a 20-degree temperature rise would mean that per hour I need about 430 liters. That is about 7 liters per minute, and not THAT much ;) Over a month we are talking roughly 310 cubic meters. And even in summer, the long underground supply pipes keep the incoming water quite cool ;)

    Question: is such an approach feasible? Has anyone done it? We are talking about a 10 kW installation for now. And is it feasible to reuse that heat? The alternative is a decent cooling system that WILL draw around 2.5 kW while running. Dumping the heat into water would basically (a) give me a quite cold input compared to the outside air even in summer (i.e. a lower-temperature medium to dump the heat into) and (b) replace the need for the outside cooling loop (which may be problematic - if the air is 22 degrees, that is a LOT to fight against, whereas the water will be quite cold). I would also possibly save the investment in the outside part of the cooling circuit.

    Now, the second question: is there a feasible way to heat a house with that? ;) After all, brutally speaking, there is a LOT of energy in that water ;) If this is a bad idea, I stop here; if it is not, I start looking for suppliers. Maybe my math is wrong?
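
    Checking that arithmetic (a hedged back-of-the-envelope sketch; it assumes all 10 kW ends up in the water and a 20 K allowed temperature rise):

      # Required cooling-water flow: 1 kcal heats 1 L of water by 1 K,
      # and 1 kWh = 3.6 MJ / 4184 J/kcal ~= 860 kcal.
      power_kw = 10.0
      delta_t_k = 20.0
      kcal_per_kwh = 860.0
      liters_per_hour = power_kw * kcal_per_kwh / delta_t_k
      print("%.0f L/h = %.1f L/min" % (liters_per_hour, liters_per_hour / 60))
      print("%.0f m^3 per month" % (liters_per_hour * 24 * 30 / 1000))
      # -> 430 L/h = 7.2 L/min, about 310 m^3 per month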

    Read the article

  • Which .NET performance and/or memory profilers will allow me to profile a DLL?

    - by Eric
    I write a lot of .NET-based plug-ins for other programs, which are usually compiled as a DLL that is up to the native application to start up. I've been using EQATEC's profiler, which works great, but now I would like something with more features, including the ability to profile memory usage. I tried out Red Gate's ANTS Profiler, but as far as I can see there is no way to profile a DLL; the only option is to profile an EXE. So my question is: what other profiling tools are available that will allow me to profile a single library DLL rather than an EXE? I'm assuming this would require injecting profiling code into the library, as EQATEC does?

    Read the article
