Search Results


  • Named previously unnamed branch

    - by Jab
    It seems naming a previously unnamed branch doesn't really work out. It creates a nasty multiple-heads problem that I can't find a solution for. Here is the workflow: UserA starts working on a feature that they expect to be small, so they just start working (off the default branch). The change turns out to be a large project and will need multiple contributors. So UserA issues hg branch "Feature1" and continues working, committing locally as needed. UserA then pulls down the changes from the central repo so he can push.

    At this point, why does hg heads return 3 heads? It shows 2 for default and 1 for Feature1. The first head for default is the latest change by another user on the branch (irrelevant). The second default head is the commit prior to the hg branch "Feature1" commit. The central repository has rules enforced so that only 1 head per branch is allowed, so forcing a push isn't an option; the repo doesn't want multiple heads on the default branch. UserA should be able to push these changes so that other users can see the Feature1 branch and help out.

    I can't seem to find a way to "correct" this. I don't think I can rewrite the branch of the initial commits for the feature, before it was a named branch. I know the initial changes before the named branch are technically on the default branch, but does that mean they will be heads until that Feature1 branch is merged?

  • Easiest way to programmatically create email accounts for use by a web app?

    - by cadwag
    I have a web app that creates groups. Each group gets its own discussion board. I would like to add the feature of allowing users to send emails to their "group" within the web app to start a new discussion, or to reply to an email from the "group" to make a new post in an already ongoing discussion. For example, to start a new discussion a user would send:

        From: [email protected]
        To: [email protected]
        Subject: Hey guys! Meet up on Tuesday?
        Body: Yes? No?

    All members of the group would receive an email:

        From: [email protected]
        Subject: Hey guys! Meet up on Tuesday?
        Body: Yes? No?
        Reply-To: group1@ example.com

    And the app would start a new discussion with:

        Author: Bill Fake
        Subject: Hey guys! Meet up on Tuesday?
        Body: Yes? No?

    This is a pretty standard feature for Google Groups and other big sites. So how do us mere mortals go about implementing this? Is there an easy way? Or do I:

    1. Install postfix
    2. Write scripts to create new accounts for each new group
    3. Access the server periodically via POP3 (or IMAP?) to retrieve the email messages sent to each account
    4. Parse the messages for content

    If it's the latter, did I miss a step?

  • Which plugin framework to use for native C++/Win32

    - by Kerido
    Hi everybody. I have an extensible product that allows 3rd-party developers to extend it. The aspects that can be extended are documented and interfaces are provided in the SDK. Currently I'm using COM, and I'm getting pretty comfortable with it. I especially like the ability to provide interface versioning in a unified manner. I consider it to be a requirement because you never know what you're gonna need in the future. Just to be precise, here's an example. Let's suppose I have an interface representing a particular feature:

        class IFeature {
        public:
            virtual void DoFeatureTask() = 0;
        };

    Then, after the interface is already documented (and someone may have used it in plugin code), I realize I need more from this feature. Maybe there is an option I need to provide. I just define the second version:

        class IFeature2 {
        public:
            virtual void DoFeatureTask(int theOption) = 0;
        };

    I don't mean I intend to have lots of versions, but it just may happen. In COM, because every interface is associated with a GUID, I can query for a preferred implementation, determine its presence, and, finally, fall back to a legacy one. But after glancing through C++/COM-related questions, I noticed many recommendations against COM. So maybe it's not the best choice and I'm just too old-school. Can you advise on an alternative?
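
    A minimal sketch of the versioning fallback described above, under the assumption that IFeature and IFeature2 derive from IUnknown and have registered IIDs (the declarations below are placeholders standing in for the SDK's real headers): query for the newest interface first and fall back to the legacy one if it isn't implemented.

        #include <unknwn.h>

        // Placeholder declarations; in a real COM SDK these would come from the
        // generated interface headers and a GUID definition file.
        struct IFeature  : public IUnknown { virtual void DoFeatureTask() = 0; };
        struct IFeature2 : public IUnknown { virtual void DoFeatureTask(int theOption) = 0; };
        extern const IID IID_IFeature;
        extern const IID IID_IFeature2;

        void RunFeature(IUnknown *plugin)
        {
            IFeature2 *v2 = nullptr;
            if (SUCCEEDED(plugin->QueryInterface(IID_IFeature2,
                                                 reinterpret_cast<void **>(&v2)))) {
                v2->DoFeatureTask(/*theOption=*/1);   // prefer the newer interface
                v2->Release();
                return;
            }

            IFeature *v1 = nullptr;
            if (SUCCEEDED(plugin->QueryInterface(IID_IFeature,
                                                 reinterpret_cast<void **>(&v1)))) {
                v1->DoFeatureTask();                  // legacy fallback
                v1->Release();
            }
        }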

  • Writing an audio player in C#

    - by Malki
    Hi, I have a pretty cool idea for a very special media player. I like to think of this project as a mini-startup, since I don't yet know if my idea is practical. Anyway, before implementing my idea, I first need to be able to implement a simple audio player. My preferred language for this project is C#, simply because it's so easy to use, but any other object-oriented language would be fine too, I guess. I started out with no knowledge whatsoever about audio. My main goals right now are:

    1. Being able to play audio files - in as many formats as possible (sort of a VLC-type player, but audio only for now).
    2. Being able to analyze audio files - as in reading frequency, amplitude, volume, and other information about the audio. I think maybe a good idea here is to be able to analyze one file format (PCM?) and then temporarily convert any file I want to analyze to that format. This is in order to later implement a mechanism that compares songs and identifies similar songs to recommend to the user (this feature isn't part of my idea, but I figured that since it exists in many players nowadays, I need to have it too if I want to be able to compete with them).

    BTW - I currently don't have any knowledge about audio/wavelengths/frequencies and such, so I'd appreciate it if someone could point me in the right direction regarding this analysis feature. Maybe in the future I'd expand to playing video files as well, but for now I'm concentrating on audio.

    After searching the Internet for a while, I've come across LAME. The problem is, it's not C#, and I'm not sure how to use it. I know there is something called "interoperability" that is supposed to let me work with native DLL files through C#. Any information about that would be helpful as well. Any help would be much appreciated. Thanks, Malki :)

  • Do database engines other than SQL Server behave this way?

    - by Yishai
    I have a stored procedure that goes something like this (pseudocode):

        storedprocedure param1, param2, param3, param4
        begin
          if (param4 = 'Y')
          begin
            select * from SOME_VIEW order by somecolumn
          end
          else if (param1 is null)
          begin
            select * from SOME_VIEW
            where (param2 is null or param2 = SOME_VIEW.Somecolumn2)
              and (param3 is null or param3 = SOME_VIEW.SomeColumn3)
            order by somecolumn
          end
          else
            select somethingcompletelydifferent
        end

    All ran well for a long time. Suddenly, the query started running forever if param4 was 'Y'. Changing the code to this:

        storedprocedure param1, param2, param3, param4
        begin
          if (param4 = 'Y')
          begin
            set param2 = null
            set param3 = null
          end
          if (param1 is null)
          begin
            select * from SOME_VIEW
            where (param2 is null or param2 = SOME_VIEW.Somecolumn2)
              and (param3 is null or param3 = SOME_VIEW.SomeColumn3)
            order by somecolumn
          end
          else
            select somethingcompletelydifferent

    ...and it runs again within expected parameters (15 seconds or so for 40,000+ records). This is with SQL Server 2005. The gist of my question: is this particular "feature" specific to SQL Server, or is it common among RDBMSs in general, that:

    1. Queries that ran fine for two years just stop working as the data grows.
    2. The "new" execution plan destroys the ability of the database server to execute the query, even though a logically equivalent alternative runs just fine.

    This may seem like a rant against SQL Server, and I suppose to some degree it is, but I really do want to know whether others experience this kind of reality with Oracle, DB2 or any other RDBMS. Although I have some experience with others, I have only seen this kind of volume and complexity on SQL Server, so I'm curious whether others with large, complex databases have similar experiences in other products.

  • Why is "origin/HEAD" shown when running "git branch -r"?

    - by Ben Hamill
    When you run git branch -r, why the blazes does it list origin/HEAD? For example, there's a remote repo on GitHub, say, with two branches: master and awesome-feature. If I do git clone to grab it and then go into my new directory and list the branches, I see this:

        $ git branch -r
          origin/HEAD
          origin/master
          origin/awesome-feature

    Or whatever order it would be in (alpha? I'm faking this example to keep the identity of an innocent repo secret). So what's the HEAD business? Is it what the last person to push had their HEAD pointed at when they pushed? Won't that always be whatever it was they pushed? HEADs move around... why do I care what someone's HEAD pointed at on another machine? I'm just getting a handle on remote tracking and such, so this is one lingering confusion. Thanks!

    EDIT: I was under the impression that dedicated remote repos (like GitHub, where no one will ssh in and work on that code, but only pull or push, etc.) didn't and shouldn't have a HEAD because there was, basically, no working copy. Not so?

  • C++: calling non-member functions with the same syntax as member ones

    - by peoro
    One thing I'd like to do in C++ is to call non-member functions with the same syntax you call member functions:

        class A { };

        void f( A & this ) { /* ... */ }

        // ...

        A a;
        a.f(); // this is the same as f(a);

    Of course this could only work as long as:

    1. f is not virtual (since it cannot appear in A's virtual table);
    2. f doesn't need to access A's non-public members;
    3. f doesn't conflict with a function declared in A (A::f).

    I'd like such a syntax because in my opinion it would be quite comfortable and would push good habits:

    1. Calling str.strip() on a std::string (where strip is a function defined by the user) would sound a lot better than calling strip( str );.
    2. Most of the time (always?) classes provide some member functions which don't need to be members (i.e. are not virtual and don't use non-public members). This breaks encapsulation, but is the most practical thing to do (due to point 1).

    My question here is: what do you think of such a feature? Do you think it would be something nice, or something that would introduce more issues than the ones it aims to solve? Could it make sense to propose such a feature for the next standard (the one after C++0x)?

    Of course this is just a brief description of the idea; it is not complete; we'd probably need to explicitly mark a function with a special keyword to let it work like this, among other things.
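
    For reference, a compilable sketch of the free-function idiom the question builds on, as it works in current C++ (the strip implementation here is just an illustration); only the proposed str.strip() call syntax is missing today:

        #include <iostream>
        #include <string>

        // Trims leading and trailing whitespace without touching string internals.
        std::string strip( const std::string & s )
        {
            const std::string ws = " \t\r\n";
            const std::string::size_type first = s.find_first_not_of( ws );
            if ( first == std::string::npos )
                return "";
            const std::string::size_type last = s.find_last_not_of( ws );
            return s.substr( first, last - first + 1 );
        }

        int main()
        {
            std::string str = "  hello  ";
            std::cout << '[' << strip( str ) << "]\n"; // prints [hello]
            // str.strip() is what the proposed extension would additionally allow
        }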

  • How do I control clipping with non-opaque graphics items in Qt?

    - by JJacobsson
    I have a bunch of QGraphicsSvgItems in a QGraphicsScene that are drawn connected by QGraphicsLineItems. This shows a graph of a tree structure. What I want to do is provide a feature where everything but a selected sub-tree becomes transparent - a kind of "highlight this sub-tree" feature. That part was easy, but the results are ugly because now the lines can be seen through the semi-transparent SVGs. I am looking for some way to still clip the other QGraphicsItems in the scene to the SVG items, giving the effect that the SVGs are semi-transparent windows to the background. I know this code does not use SVGs, but I figure you can replace that yourself if you are so inclined.

        int main(int argc, char *argv[])
        {
            QApplication app(argc, argv);
            QGraphicsScene scene;

            for( int i = 0; i < 10; ++i ) {
                QGraphicsLineItem* line = new QGraphicsLineItem;
                line->setLine( i * 25.0 + 1.0, 0, i * 25.0 + 23.0, 0 );
                scene.addItem( line );
            }

            for( int i = 0; i < 11; ++i ) {
                QGraphicsEllipseItem* ellipse = new QGraphicsEllipseItem;
                ellipse->setRect( (i * 25.0) - 9.0, -9.0, 18.0, 18.0f );
                ellipse->setBrush( QBrush( Qt::green, Qt::SolidPattern ) );
                ellipse->setOpacity( 0.5 );
                scene.addItem( ellipse );
            }

            QGraphicsView view( &scene );
            view.show();
            return app.exec();
        }

    I would like the lines not to be seen behind the circles. I have tried fiddling with the depth buffer and the stencil buffer using OpenGL rendering, to no avail. How do I get the QGraphicsSvgItems (or QGraphicsEllipseItems in the example code) to still clip the lines even though they are semi-transparent?
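
    One pragmatic workaround, offered only as a sketch under the assumption that the scene background colour is known and uniform (it is not a true clipping solution): pre-blend the node colour with the background and paint the ellipses fully opaque. They still look 50% translucent against the background, but the lines underneath are covered rather than showing through.

        #include <QColor>

        // Blends 'top' over 'background' at the given opacity and returns an
        // opaque colour that looks like the translucent overlay would.
        QColor preBlend( const QColor &top, const QColor &background, qreal opacity )
        {
            const qreal inv = 1.0 - opacity;
            return QColor( int( opacity * top.red()   + inv * background.red() ),
                           int( opacity * top.green() + inv * background.green() ),
                           int( opacity * top.blue()  + inv * background.blue() ) );
        }

        // Usage in the loop above (replacing the brush/opacity lines):
        //   ellipse->setBrush( QBrush( preBlend( Qt::green, Qt::white, 0.5 ) ) );
        //   // and drop the ellipse->setOpacity( 0.5 ) call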

  • Java conditional compilation: how to prevent code chunks from being compiled?

    - by khachik
    My project requires Java 1.6 for compilation and running. Now I have a requirement to make it work with Java 1.5 (from the marketing side). I want to replace a method body (return type and arguments remain the same) to make it compile with Java 1.5 without errors.

    Details: I have a utility class called OS which encapsulates all OS-specific things. It has a method

        public static void openFile(java.io.File file) throws java.io.IOException {
            // open the file using java.awt.Desktop
            ...
        }

    to open files as with a double-click (the equivalent of the Windows start command or the Mac OS X open command). Since it cannot be compiled with Java 1.5, I want to exclude it during compilation and replace it with another method which calls run32dll for Windows or open for Mac OS X using Runtime.exec.

    Question: How can I do that? Can annotations help here?

    Note: I use Ant, and I could make two Java files, OS4J5.java and OS4J6.java, which contain the OS class with the desired code for Java 1.5 and 1.6, and copy one of them to OS.java before compiling (or, an ugly way, replace the content of OS.java conditionally depending on the Java version), but I don't want to do that if there is another way.

    Elaborating more: in C I could use #ifdef/#ifndef, in Python there is no compilation and I could check for a feature using hasattr or something else, and in Common Lisp I could use #+feature. Is there something similar for Java? I found this post but it doesn't seem to be helpful. Any help is greatly appreciated. kh.

  • Eclipse - starting with the Android SDK

    - by dontHaveName
    I want to start programming for Android. What I have:

    - Windows 7
    - Eclipse Classic 4.2
    - All the required files downloaded from http://developer.android.com/sdk/installing/adding-packages.html
    - The ADT Plugin

    I want to install the new ADT plugin. At first I tried to download it from http://dl-ssl.google.com/android/eclipse: I added the site, but when I selected it there was only "pending..." and nothing loaded (maybe an internet connection problem? I have selected Native connection in the preferences). After the pending state it wrote:

        Unable to connect to repository http://dl-ssl.google.com/android/eclipse/content.xml
        org.eclipse.equinox.p2.core.ProvidesException

    That's why I downloaded the ADT plugin instead. When I select the downloaded ADT plugin, its content loads (developer tools and NDK plugin), so I select everything and click Next. It loads and then writes this:

        Cannot complete the install because one or more required items could not be found.
        Software being installed: Android Development Tools 20.0.3.v201208082019-427395
        (com.android.ide.eclipse.adt.feature.group 20.0.3.v201208082019-427395)
        Missing requirement: Android Development Tools 20.0.3.v201208082019-427395
        (com.android.ide.eclipse.adt.feature.group 20.0.3.v201208082019-427395)
        requires 'org.eclipse.wst.sse.core 0.0.0' but it could not be found

    This "requires 'org.eclipse.wst.sse.core 0.0.0'" problem is described here: http://developer.android.com/resources/faq/troubleshooting.html#installeclipsecomponents, but there is a solution only for versions 3.3 and 3.4 (I have 4.2). I tried it anyway - I looked for updates but nothing was found. I really don't know where the problem could be. Thanks for any answer. (Sorry for my English.) I will send 1€ to somebody who can solve my problem ;) (I think all the problems are due to the internet connection, but I can't configure it.)

  • What different terms mean the same thing (or don't, but people think they do)?

    - by Matthew Jones
    One of the pitfalls I run into on a daily basis is customers saying one thing while meaning another. Usually, this is just due to a miscommunication somewhere, but occasionally they are, in fact, saying the same thing I am just using a different term. For example, one of my customers the other day mentioned a feature he called, "find as you type." Being a little confused, I asked him what he meant, and he described the feature in Google where, once you start typing a search query, Google suggests other, popular queries that match the letters you have typed. Click! He meant AutoComplete! He was not wrong, it is just that I had never heard that term before. In the spirit of reducing confusion, what terms can you think of that are different but mean, essentially, the same thing? Also, what terms do people think mean the same thing, but don't. Please differentiate between the two. Please only one set of terms per answer, so we can vote on the best ones.

  • How to remove erroneous dependency from tycho build?

    - by sfinnie
    Context: I have built an Eclipse update site using Tycho, but trying to install it into a target IDE fails. The update site builds fine; I can see it from a target Eclipse installation and select the feature for installation. However, the dependency check fails at the start of the install because it can't find a declared dependency (org.eclipselabs.xtext.utils.unittesting). This shouldn't be a dependency: it was erroneously included in the MANIFEST.MF of one of my Eclipse plugin projects.

    I removed the dependency from the manifest and ran mvn clean install. The build reported success. However, when I try to use the newly built update site, it still complains that the dependency on org.eclipselabs.xtext.utils.unittesting (a) exists and (b) can't be satisfied. So the question is: what else do I need to do to remove the dependency from the generated update site? Thanks for any pointers.

    PS: I know I could add the site for o.e.x.u.unittesting in the target Eclipse installation so that it can satisfy the dependency. However, I don't want to do that; it's not needed for the feature to work and I don't want other users to have to add an unnecessary dependency.

  • Code Contracts: Do we have to specify Contract.Requires(...) statements redundantly in delegating methods?

    - by herzmeister der welten
    I'm intending to use the new .NET 4 Code Contracts feature for future development. This made me wonder if we have to specify equivalent Contract.Requires(...) statements redundantly in a chain of methods. I think a code example is worth a thousand words:

        public bool CrushGodzilla(string weapon, int velocity)
        {
            Contract.Requires(weapon != null);
            // long code
            return false;
        }

        public bool CrushGodzilla(string weapon)
        {
            Contract.Requires(weapon != null);  // specify contract requirement here
                                                // as well???
            return this.CrushGodzilla(weapon, int.MaxValue);
        }

    For runtime checking it doesn't matter much, as we will eventually always hit the requirement check, and we will get an error if it fails. However, is it considered bad practice if we don't specify the contract requirement again in the second overload?

    Also, there will be the feature of compile-time checking, and possibly also design-time checking, of code contracts. It seems it's not yet available for C# in Visual Studio 2010, but I think there are some languages, like Spec#, that already do this. These engines will probably give us hints when we write code that calls such a method and our argument currently can or will be null. So I wonder whether these engines will always analyze a call chain until they find a method with a contract that is currently not satisfied?

    Furthermore, here I learned about the difference between Contract.Requires(...) and Contract.Assume(...). I suppose that difference also needs to be considered in the context of this question?

  • Breaking dependencies when you can't make changes to other files?

    - by codemuncher
    I'm doing some stealth agile development on a project. The lead programmer sees unit testing, refactoring, etc. as a waste of resources and there is no way to convince him otherwise. His philosophy is "if it ain't broke, don't fix it", and I understand his point of view. He's been working on the project for over a decade and knows the code inside and out. I'm not looking to debate development practices.

    I'm new to the project and I've been tasked with adding a new feature. I've worked on legacy projects before and used agile development practices with good results, but those teams were more receptive to the idea and weren't afraid of making changes to the code. I've been told I can use whatever development methodology I want, but I have to limit my changes to only those necessary to add the feature. I'm using TDD for the new classes I'm writing, but I keep running into roadblocks caused by the liberal use of global variables and the high coupling in the classes I need to interact with. Normally I'd start extracting interfaces for these classes and make their dependence on the global variables explicit by injecting them as constructor arguments or public properties. I could argue that the changes are necessary, but considering the lead never had to make them, I doubt he would see it my way.

    What techniques can I use to break these dependencies without ruffling the lead developer's feathers? I've made some headway using:

    - Extract Interface (for the new classes I'm creating)
    - Extend and override the wayward classes with test stubs (luckily most methods are public virtual)

    But these two can only get me so far.
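
    A minimal sketch of the extract-interface-plus-constructor-injection seam described above, written here in C++ since the question doesn't name a language (all identifiers are hypothetical): legacy code keeps using the global, while the new feature class depends only on a narrow interface that tests can stub.

        #include <string>

        std::string g_configPath = "C:\\app\\app.conf";   // stand-in for a legacy global

        class IConfigSource {                             // the extracted interface
        public:
            virtual ~IConfigSource() = default;
            virtual std::string configPath() const = 0;
        };

        // Production adapter: wraps the global, so legacy behaviour is untouched.
        class GlobalConfigSource : public IConfigSource {
        public:
            std::string configPath() const override { return g_configPath; }
        };

        // New feature code depends only on the interface, injected via the
        // constructor; a unit test passes in a stub instead of touching the global.
        class NewFeature {
        public:
            explicit NewFeature(const IConfigSource &config) : config_(config) {}
            bool isConfigured() const { return !config_.configPath().empty(); }
        private:
            const IConfigSource &config_;
        };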

  • How can I rewrite the history of a published git branch in multiple steps?

    - by Frerich Raabe
    I've got a git repository with two branches, master and amazing_new_feature. The latter branch contains the work on, well, an amazing new feature. A colleague and I are both working on the same repository, and the two of us commit to both branches. Now the work on the amazing new feature is finished, and a bit more than 100 commits have accumulated in the amazing_new_feature branch. I'd like to clean those commits up a bit (using git rebase -i) before merging the work into master. The issue we're facing is that it's quite a pain to rewrite/reorder all 100 commits in one go. Instead, what I'd like to do is:

    1. Rewrite/merge/reorder the first few commits in the amazing_new_feature branch and put the result into a dedicated branch which contains the 'cleaned up' history (say, an amazing_new_feature_ready_for_merge branch).
    2. Rebase the remaining amazing_new_feature branch on the amazing_new_feature_ready_for_merge branch.
    3. Repeat at 1.

    My idea is that at some point all the work from amazing_new_feature should be in amazing_new_feature_ready_for_merge, and then I can merge the latter into master. Is this a sensible approach, or are there better/easier/more fool-proof solutions to this problem? I'm especially scared about the second step of the above algorithm, since it means rebasing a published branch. IIRC that's a dangerous thing to do.

  • PHP IDE with Integrated Web Server

    - by seth
    Note: This is not another "What is the best PHP IDE?" question. I'm looking for a PHP IDE with a specific feature, namely an integrated / embedded (php enabled) web server; ideally with xdebug pre-bundled. I already know that Aptana 1.5 has this functionality (and some older versions of Zend Studio as well), but Aptana 1.5 hasn't been supported for quite some time and as we make the transition to PHP 5.3 and beyond, it's usefulness will diminish significantly. I've looked at some options including Eclipse PDT and NetBeans, but it seems every PHP IDE relies on a separate local/remote web server to actually interpret the code. I know installing a web server locally is fairly trivial, but this is for a classroom solution, where installing, configuring, and maintaining a web server on 1000 machines is simply not feasible. A remote server solution will also not work due to the need to use debugging functionality (xdebug currently requires a hardcoded IP for the debug client). This seems like such an obvious feature/plugin for a PHP IDE, but my research thus far has turned up no results.

  • Mercurial Subrepos, how to control which changeset I want to use for a subrepo?

    - by Lasse V. Karlsen
    I am reading up on subrepos and have been running some tests locally. It seems to work OK so far, but I have one question: how do I specify/control which changeset I want to use for a particular subrepo? For instance, let's say I have the following two projects:

        class library          application

        o  fourth commit       o  second commit, added a feature
        |                      |
        o  third commit        o  initial commit
        |
        | o  second commit
        |/
        o  initial commit

    Now, I want the class library as a subrepo of my application, but due to the immaturity of the longest branch (the one ending up as "fourth commit"), I want to temporarily use the "second commit" tip. How do I go about configuring that, assuming it is even possible?

    Here's a batch file that sets up the above two repos and adds the library as a subrepo. If you run the batch file, it will output:

        [C:\Temp] :test
        ...
        v4

    As you can see from that last line, it verifies the contents of the file in the class library, which is "v4" from the fourth commit. I'd like it to be "v2", and to persist as "v2" until I'm ready to pull down a newer version from the class library repository. Can anyone tell me if it is possible to do what I want, and if so, what I need to do in order to lock my subrepo to the right changeset?

    Batch file:

        @echo off
        if exist app rd /s /q app
        if exist lib rd /s /q lib
        if exist app-clone rd /s /q app-clone

        rem == app ==
        hg init app
        cd app
        echo program>main.txt
        hg add main.txt
        hg commit -m "initial commit"
        echo program+feature1>main.txt
        hg commit -m "second commit, added a feature"
        cd ..

        rem == lib ==
        hg init lib
        cd lib
        echo v1>lib.txt
        hg add lib.txt
        hg commit -m "initial commit"
        echo v2>lib.txt
        hg commit -m "second commit"
        hg update 0
        echo v3>lib.txt
        hg commit -m "third commit"
        echo v4>lib.txt
        hg commit -m "fourth commit"
        cd ..

        rem == subrepos ==
        cd app
        hg clone ..\lib lib
        echo lib = ..\lib >.hgsub
        hg add .hgsub
        hg commit -m "added subrepo"
        cd ..

        rem == clone ==
        hg clone app app-clone
        type app-clone\lib\lib.txt

  • Java: refactoring static constants

    - by akf
    We are in the process of refactoring some code. There is a feature that we have developed in one project that we would now like to use in other projects. We are extracting the foundation of this feature and making it a full-fledged project which can then be imported by its current project and others. This effort has been relatively straightforward, but we have one headache.

    When the framework in question was originally developed, we chose to keep a variety of constant values defined as static fields in a single class. Over time this list of static members grew. The class is used in very many places in our code. In our current refactoring, we will be elevating some of the members of this class to our new framework, but leaving others in place. Our headache is in extracting the foundation members of this class to be used in our new project and, more specifically, how we should address those extracted members in our existing code. We know that we can have our existing Constants class subclass the new project's Constants class, and it would inherit all of the parent's static members. This would allow us to effect the change without touching the code that uses these members to change the class name on the static reference. However, the tight coupling inherent in this choice doesn't feel right.

    Before:

        public class ConstantsA {
            public static final String CONSTANT1 = "constant.1";
            public static final String CONSTANT2 = "constant.2";
            public static final String CONSTANT3 = "constant.3";
        }

    After:

        public class ConstantsA extends ConstantsB {
            public static final String CONSTANT1 = "constant.1";
        }

        public class ConstantsB {
            public static final String CONSTANT2 = "constant.2";
            public static final String CONSTANT3 = "constant.3";
        }

    In our existing code branch, all of the above would be accessible in this manner: ConstantsA.CONSTANT2. I would like to solicit arguments about whether this is 'acceptable' and/or what the best practices are.

  • [OpenCV] What do the "left" and "right" values mean in the haar cascade xml files?

    - by user117046
    In OpenCV's Haar cascade files, what are the "left" and "right" values, and how do they relate to the "threshold" value? Thanks! Just for reference, here's the structure of the files:

        <haarcascade_frontalface_alt type_id="opencv-haar-classifier">
          <size>20 20</size>
          <stages>
            <_>
              <!-- stage 0 -->
              <trees>
                <_>
                  <!-- tree 0 -->
                  <_>
                    <!-- root node -->
                    <feature>
                      <rects>
                        <_>3 7 14 4 -1.</_>
                        <_>3 9 14 2 2.</_></rects>
                      <tilted>0</tilted></feature>
                    <threshold>4.0141958743333817e-003</threshold>
                    <left_val>0.0337941907346249</left_val>
                    <right_val>0.8378106951713562</right_val></_></_>
              <_>
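
    As a rough illustration of the general idea (a sketch, not OpenCV's actual source), each node is a decision stump: the computed Haar feature value is compared against the node's threshold, and the node contributes either its left value or its right value to the running stage sum:

        // Evaluates one decision stump from the cascade XML.
        // featureValue: the Haar feature response for the current detection window
        // (already scaled/normalised as the detector requires).
        double evaluateStump(double featureValue, double threshold,
                             double leftVal, double rightVal)
        {
            // Below the threshold -> the "left" outcome, otherwise the "right" one.
            return (featureValue < threshold) ? leftVal : rightVal;
        }

        // The stage then sums these values over all of its trees and compares the
        // total against the stage threshold to accept or reject the window.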

  • User Defined Conversions in C++

    - by wash
    Recently, I was browsing through my copy of the C++ Pocket Reference from O'Reilly Media, and I was surprised when I came across a brief section and example regarding user-defined conversions for user-defined types:

        #include <iostream>

        class account {
        private:
            double balance;
        public:
            account (double b) { balance = b; }
            operator double (void) { return balance; }
        };

        int main (void)
        {
            account acc(100.0);
            double balance = acc;
            std::cout << balance << std::endl;
            return 0;
        }

    I've been programming in C++ for a while, and this is the first time I've ever seen this sort of operator overloading. The book's description of this subject is somewhat brief, leaving me with a few unanswered questions about this feature:

    1. Is this a particularly obscure feature? As I said, I've been programming in C++ for a while and this is the first time I've ever come across it. I haven't had much luck finding more in-depth material on it.
    2. Is this relatively portable? (I'm compiling on GCC 4.1.)
    3. Can user-defined conversions to user-defined types be done? e.g. operator std::string () { /* code */ }
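
    On the last point, a small self-contained sketch (my own illustration, not the book's example) showing a conversion operator whose target is a class type such as std::string, alongside the built-in double conversion:

        #include <iostream>
        #include <sstream>
        #include <string>

        class account {
        public:
            explicit account (double b) : balance(b) {}

            // conversion to a built-in type
            operator double () const { return balance; }

            // conversion to a user-defined type
            operator std::string () const {
                std::ostringstream out;
                out << "balance: " << balance;
                return out.str();
            }

        private:
            double balance;
        };

        int main ()
        {
            account acc(100.0);
            double      amount = acc;   // uses operator double()
            std::string text   = acc;   // uses operator std::string()
            std::cout << amount << " / " << text << std::endl;
            return 0;
        }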

  • Can't install .NET application on client's PC

    - by Niraj Doshi
    Hello all, my client's PC runs Windows 7 Ultimate with the .NET Framework 4 Client Profile. I am unable to install my application, which was developed in VS2008. I tried uninstalling .NET Framework 4 from his PC and running the clean-up tool provided by Microsoft, but I am still unable to install it successfully; it gives Error 1001. I tried running the program as administrator. I also tried to turn on the .NET 3.5 feature from Add or Remove Programs. Thanks in advance. :)

    Edit: The error I get is shown here. Furthermore:

    - I have confirmed that it is a 32-bit processor, and I run the x86 release version of the setup.
    - The application is developed on a Windows 7 OS with .NET Framework 3.5.
    - I have installed this application on 7 PCs which have .NET 3.5 installed, running Windows XP, Vista and Windows 7, and all are working fine.
    - On the client's PC, when I try to install .NET 3.5 again, the installer starts but then disappears suddenly without doing anything.
    - I have tried turning on the .NET 3.5 framework feature from Control Panel > Programs and Features.
    - I have tried running the program as Administrator.
    - I have tried setting the application setup to Windows XP and Vista compatibility mode.

    But still the issue persists. Thanks :)

  • How to give new life to a five-year-old, simple but reliable PHP form?

    - by Sam
    Hi all. I have a script in PHP 5.2. I want to use a simple form. I found something a programmer made for me about 5 years ago. When I use it, PHP now outputs an error unless I set register_long_arrays = On; then it works fine. On the PHP website, however, it says:

        Warning: This feature has been DEPRECATED as of PHP 5.3.0. Relying on this
        feature is highly discouraged. It's recommended to turn them off, for
        performance reasons. Instead, use the superglobal arrays, like $_GET.

    Should I listen to PHP's warning, or just enable the option and keep using my old form happily? If the former, then how/where do I change this simple form so it does not rely on the deprecated setting? Your answer is much appreciated.

    form.htm

        <html><body>
        <form method="POST" action="form_sent.php">
        ...
        </form>
        </body></html>

    form_sent.php

        <html><body>
        <?php
        $email = $HTTP_POST_VARS[email];
        $mailto = "[email protected]";
        $mailsubj = "A Form was Sent from Website!";
        $mailhead = "From: $email\n";
        reset ($HTTP_POST_VARS);
        $mailbody = "Values submitted from web site form:\n";
        while (list($key, $val) = each ($HTTP_POST_VARS)) { $mailbody .= "$key : $val\n"; }
        if (!eregi("\n",$HTTP_POST_VARS[email])) {
            mail($mailto, $mailsubj, $mailbody, $mailhead);
        }
        ?>
        <b>Form Sent. Thank you.</b>
        </body></html>

  • How to detect that cookies are disabled in the browser with AngularJS

    - by user2943082
    I use AngularJS in my current project and am trying to implement a feature which detects whether cookies are disabled in the browser. I have tried to use the AngularJS module "ngCookies" to solve this. The main idea of the feature is to try to create a cookie, then check whether that cookie was created, and show a message if it wasn't. But it didn't work.

    Controller:

        someProject.controller('CookieCtrl', ['$scope', '$cookieStore', function($scope, $cookieStore) {
            $scope.areCookiesEnabled = false;
            $cookieStore.put("TestCookie", "TestCookieText");
            $scope.cookieValue = $cookieStore.get("TestCookie");
            if ($scope.someValue) {
                $cookieStore.remove("TestCookie");
                $scope.areCookiesEnabled = true;
            }
        }]);

    View:

        <div class="main" data-ng-controller="CookieCtrl">
            <div class="warning_message" data-ng-show="!areCookiesEnabled">
                <span data-ng-bind="areCookiesEnabled"></span>
            </div>
        </div>

    Can anybody tell me where my mistake is?

  • How do I manage multiple development branches in GIT?

    - by Ian
    I have 5 branches of one system - let's call them master, London, Birmingham, Manchester and demo. These differ only in a configuration file, and each has its own set of graphics files. When I do some development, I create a temp branch from master, named after the feature, and work on that. When ready to merge, I check out master and git merge feature to bring in my work. That appears to work just fine.

    Now I need to get my changes into the other branches, without losing the differences between them that are there already. How can I do that? I have been having no end of problems with Birmingham getting London's graphics, and with conflicts within the configuration file.

    When a branch is finally correct, I push it up to a depot, and pull each branch down to a Linux box for final testing. From there the release into production is done using rsync (set to ignore the .git repository itself). This phase also works just fine. I am the only developer at the moment, but I need to get the process solid before inviting assistance :)

  • Is "tip-of-the-day" good?

    - by Jonta
    Many programs (often large ones, like MS Office, The GIMP, Maxthon) have a feature called "tip-of-the-day". It explains a small part of the program, like this one in Maxthon: "You can hide/show the main menu bar by pressing Ctrl+F11". You can usually browse through them by clicking Next, and the other options provided are "Previous", "Close" and "Do not show at startup". I think I like the way Maxthon used to handle this: in the browser's status bar (down at the bottom usually, together with "Done", the progress bar, etc.), there would sometimes be a small hint or tip on what else you could do with it. As Joel Spolsky wrote in his article series "User Interface Design for Programmers", people don't like reading manuals. But we still want them to use the program, and the features they could benefit from, don't we? Therefore, I think it is useful to have such a feature, without the annoyance of the pop-up on startup. What do you think? Pop-up? Maxthon-style? No way?
