Search Results

Search found 7595 results on 304 pages for 'functionality'.

  • UIScrollView calling superview's layoutSubviews when scrolling?

    - by marchinram
    Hello, I added a UITableView as a subview to a custom UIView class I'm working on. However, I noticed that whenever I scroll the table, it calls my class's layoutSubviews. I'm pretty sure it's the UIScrollView that the table inherits from which is actually doing this, but I wanted to know: is there a way to disable this functionality, and if not, why is it happening? I don't understand why scrolling a scroll view requires its superview to lay out its subviews. Code:

        @implementation CustomView

        - (id)initWithFrame:(CGRect)frame {
            if ((self = [super initWithFrame:frame])) {
                self.clipsToBounds = YES;
                UITableView *tableView = [[UITableView alloc] initWithFrame:CGRectMake(0.0, 15.0, 436.0, 132.0)
                                                                      style:UITableViewStylePlain];
                tableView.dataSource = self;
                tableView.delegate = self;
                tableView.separatorStyle = UITableViewCellSeparatorStyleNone;
                tableView.backgroundColor = [UIColor clearColor];
                tableView.showsVerticalScrollIndicator = NO;
                tableView.contentInset = UIEdgeInsetsMake(kRowHeight, 0.0, kRowHeight, 0.0);
                tableView.tag = componentIndex;
                [self addSubview:tableView];
                [tableView release];
            }
            return self;
        }

        - (void)layoutSubviews {
            // This is called every time I scroll the table view
        }

        @end
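    There may be no supported way to stop the invalidation itself (the scroll view appears to mark its superview as needing layout while it scrolls), but layoutSubviews can be made cheap to re-enter. A minimal sketch, assuming the expensive work only depends on this view's own size; _lastLayoutSize is a hypothetical CGSize ivar added to CustomView:

        - (void)layoutSubviews {
            [super layoutSubviews];
            // Scrolling a child leaves our bounds.size alone, so skip the
            // expensive work unless our own geometry actually changed.
            if (CGSizeEqualToSize(self.bounds.size, _lastLayoutSize)) {
                return; // triggered by the child scroll view; nothing to do
            }
            _lastLayoutSize = self.bounds.size;
            // ... expensive layout work here ...
        }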

  • Can't write to physical drive in Win 7?

    - by matt
    I wrote a disk utility that allows you to erase whole physical drives. It uses the Windows file API, calling:

        destFile = CreateFile("\\.\PhysicalDrive1", GENERIC_WRITE,
                              FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                              OPEN_EXISTING, createflags, NULL);

    and then just calling WriteFile, making sure you write in multiples of the sector size, i.e. 512 bytes. This worked fine in the past, on XP and even on the Win7 RC; all you have to do is make sure you are running it as an administrator. But now that I have retail Win7 Professional, it doesn't work any more! The drives still open fine for writing, but calling WriteFile on the successfully opened drive now fails! Does anyone know why this might be? Could it have something to do with opening the drive with shared flags? This is what I have always done before, and it worked. Could it be that something is now sharing the drive and blocking the writes? Is there some way to properly "unmount" a drive, or at least the partitions on it, so that I get exclusive access to it? Some other tools that used to work don't any more either, but some do, like the WD Diagnostics' erase functionality; and after it has erased the drive, my tool then works on it too! This leads me to believe there is some "unmount" step I need to apply to the drive first, to free up permission to write to it. Any ideas?
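    If the culprit is Win7 refusing writes to disk regions owned by mounted volumes (a plausible but unverified guess), a lock-and-dismount pass over the disk's volumes should free things up. A sketch in plain Win32 C; FSCTL_LOCK_VOLUME and FSCTL_DISMOUNT_VOLUME are the documented control codes for this, and "X:" is a placeholder for whichever drive letter(s) sit on PhysicalDrive1:

        #include <windows.h>
        #include <winioctl.h>
        #include <stdio.h>

        int main(void)
        {
            DWORD bytes;
            /* Open the volume (not the physical drive) that occupies the disk. */
            HANDLE hVol = CreateFileA("\\\\.\\X:", GENERIC_READ | GENERIC_WRITE,
                                      FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                                      OPEN_EXISTING, 0, NULL);
            if (hVol == INVALID_HANDLE_VALUE) {
                printf("open failed: %lu\n", GetLastError());
                return 1;
            }
            /* Take exclusive ownership, then force the file system to let go. */
            if (!DeviceIoControl(hVol, FSCTL_LOCK_VOLUME, NULL, 0, NULL, 0, &bytes, NULL))
                printf("lock failed: %lu\n", GetLastError());
            if (!DeviceIoControl(hVol, FSCTL_DISMOUNT_VOLUME, NULL, 0, NULL, 0, &bytes, NULL))
                printf("dismount failed: %lu\n", GetLastError());

            /* ... now open \\.\PhysicalDrive1 and WriteFile as before,      */
            /* keeping hVol open so the lock is held for the whole erase ... */

            CloseHandle(hVol);  /* closing the handle releases the lock */
            return 0;
        }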

  • jQuery Autocomplete plugin (Jorn Zaefferer's) - how to dynamically change the list of displayed values

    - by Max Williams
    I'm using Jörn Zaefferer's Autocomplete jQuery plugin, http://bassistance.de/jquery-plugins/jquery-plugin-autocomplete/

    I have options set so it shows all the values when you click in the empty text field, a bit like a select, and the option is also set so that the user can only choose from the list of values used by the autocomplete (so it's kind of like a select, but with autocomplete functionality). I have two radio buttons below the text field which determine whether the user chooses from a long list or a short list of possible values, and I want to update the values used in the autocomplete when one of these radio buttons is clicked. Currently I'm doing this in a not very clever way, by calling autocomplete again on the same text field with the different array of values, but this creates a situation where both are active at once, and I can see the long list peeking out from behind the short list. What I need to do is either a) dynamically change the values used in the autocomplete, or b) remove (unbind?) the autocomplete from the text field before re-initialising it. Either of these would do, tbh, though option a) is kind of nicer. Any ideas, anyone? Here's my current code:

        function initSubjectLongShortList(field, short_values, long_values){
          $(".subject_short_long_list").change(function(){
            updateSubjectAutocomplete(field, short_values, long_values);
          });
          updateSubjectAutocomplete(field, short_values, long_values);
        }

        function updateSubjectAutocomplete(field, short_values, long_values){
          if($(".subject_short_long_list:checked").attr('id') == "subject_long_list"){
            initSubjectAutocomplete(field, long_values);
          } else {
            initSubjectAutocomplete(field, short_values);
          }
        }

        function initSubjectAutocomplete(field, values){
          jQuery(field).autocomplete(values, {
            minChars: 0, // make it appear as soon as we click in the field
            max: 2000,
            scrollHeight: 400,
            matchContains: true,
            selectFirst: false
          });
        }

    cheers, max
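    A sketch of option b), assuming the copy of the plugin in use is recent enough to register the unautocomplete() teardown method (later builds of Jörn's plugin do; worth checking the plugin source for it):

        function initSubjectAutocomplete(field, values){
          // tear down any previous instance first, so the old and new
          // result lists never coexist behind one another
          jQuery(field).unautocomplete();
          jQuery(field).autocomplete(values, {
            minChars: 0,
            max: 2000,
            scrollHeight: 400,
            matchContains: true,
            selectFirst: false
          });
        }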

  • Good practices: How to reuse .csproj and .sln files to create your MSBuild script for CI?

    - by Gishu
    What is the painless/maintainable way of using MSBuild as your build runner? (Forgive the length of this post.) I was just trying my hand at TeamCity (which I must say is awesome w.r.t. learning curve and out of the box functionality). I got an SVN + MSBuild + NUnit + NCover combo working. I was curious as to how moderate-to-large projects are using MSBuild; I've just pointed MSBuild at my main .sln file. I spent some time with NAnt some years ago, and I found MSBuild to be a bit obtuse; the docs are too dense/detailed for a beginner. MSBuild seems to have some special magic to handle .sln files. I tried my hand at writing a custom build script by hand, linking/including .csproj files in order (such that I could have custom pre/post build tasks), however it threw up (citing duplicate target imports). I'm assuming most devs wouldn't want to go messing around with MSBuild project files; they'd be making changes to the .csproj and .sln files. Is there some tool or MSBuild task that reverse-engineers a new script from an existing .sln and its .csproj files that I'm unaware of? If I'm using MSBuild just to do the compile step, I might as well use NAnt with an exec task that calls MSBuild to compile the solution. I have this nagging feeling that I'm missing something obvious. My end goal here is to have an MSBuild script which:

    - builds the solution, acting as a real build script instead of a mere compile step;
    - allows custom pre/post tasks (e.g. calling NUnit to run an NUnit project, which seems not yet supported via the TeamCity web UI);
    - stays out of the way of the developers making changes to the solution;
    - has no redundancy, i.e. doesn't require devs to make the same change in two places.
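    One common shape for this, sketched as an assumption rather than a known best practice: keep the .sln as the single source of project truth and wrap it in a thin outer .proj that owns only the CI-specific steps (file names and paths below are placeholders):

        <?xml version="1.0" encoding="utf-8"?>
        <Project DefaultTargets="CI" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

          <Target Name="Compile">
            <!-- Delegate compilation to the solution, so devs keep editing
                 only the .sln/.csproj files and nothing is duplicated here -->
            <MSBuild Projects="Main.sln" Targets="Build"
                     Properties="Configuration=Release" />
          </Target>

          <Target Name="Test" DependsOnTargets="Compile">
            <!-- Custom post-build step: run the NUnit project -->
            <Exec Command="nunit-console.exe Tests\Tests.nunit" />
          </Target>

          <Target Name="CI" DependsOnTargets="Compile;Test" />

        </Project>

    Pointing TeamCity at this wrapper instead of the .sln keeps the pre/post tasks in MSBuild without reverse-engineering anything from the solution.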

  • ASP MVC: Submitting a form with nested user controls

    - by Nigel
    I'm fairly new to ASP MVC, so go easy :). I have a form that contains a number of user controls (partial views, as in System.Web.Mvc.ViewUserControl), each with their own view models, and some of those user controls have nested user controls within them. I intended to reuse these user controls, so I built up the form using a hierarchy in this way, and I pass the form a parent view model that contains all the user controls' view models within it. For example:

        Parent Page (with form and ParentViewModel)
          --> ChildControl1 (uses ViewModel1, passed from the ParentViewModel.ViewModel1 property)
          --> ChildControl2 (uses ViewModel2, passed from the ParentViewModel.ViewModel2 property)
                --> ChildControl3 (uses ViewModel3, passed from the ViewModel2.ViewModel3 property)

    I hope this makes sense... My question is: how do I retrieve the view data when the form is submitted? It seems the view data cannot bind to the ParentViewModel:

        public string Save(ParentViewModel viewData)...

    as viewData.ViewModel1 and viewData.ViewModel2 are always null. Is there a way I can perform a custom binding? Ultimately I need the form to be able to cope with a dynamic number of user controls and perform an asynchronous submission without postback. I'll cross those bridges when I come to them, but I mention it now so any answer won't preclude this functionality. Many thanks.
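    The usual suspect here is field naming: the DefaultModelBinder only populates nested view models when each input's name carries its full property path from the parent model. A sketch (Title and SomeField are assumed property names):

        <%-- inside ChildControl1: prefix the field name with its path --%>
        <%= Html.TextBox("ViewModel1.Title") %>

        <%-- inside the doubly nested ChildControl3 --%>
        <%= Html.TextBox("ViewModel2.ViewModel3.SomeField") %>

    With names shaped like this, Save(ParentViewModel viewData) should arrive with the child models populated rather than null. The catch is that each partial then has to know where it sits in the hierarchy, which works against reuse; passing the prefix into the partial via its ViewDataDictionary is one common workaround.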

  • Long running stateful service in .NET

    - by Asaf R
    Hi, I need to create a service in .NET that maintains (inner) state in-memory, spawns multiple threads, and is generally long-running. There are a lot of options:

    - a good old Windows Service
    - Windows Communication Foundation (WCF)
    - Windows Workflow Foundation (WF)

    I really don't know which to choose. Most of the functionality is in a library used by this service, so the service itself is rather simple. On one hand, it's important that the service host is as close to "simply working" as possible, which excludes a plain Windows Service. On the other hand, it's important that the service is not taken down by the host just because there's no external activity, which makes WCF kind o' "scary". As for WF, its strongest selling point is the ability to create processes as, um..., workflows, which is something I don't need nor want. To sum it up, the plethora of Microsoft technologies has got me a bit confused. I'd appreciate help regarding the pros and cons of each solution (or others I've failed to mention) for the problem of a stateful, long-running service in .NET. Thanks, Asaf. P.S. I'm using .NET 4. EDIT: What I mean by the host "simply working" is, for example, that the service I create be reactivated if it crashes. I guess the reason for this question is that I've created Windows Services in the past (I think it was in plain C++ with the Win32 API), and I don't want to miss out on something simpler if there is such a thing. Thanks for all the replies thus far! Asaf.
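    On the "reactivated if it crashes" requirement: a plain Windows Service can get that from the Service Control Manager's recovery options, with no code involved. A sketch (MyService is a placeholder name):

        sc failure MyService reset= 86400 actions= restart/60000/restart/60000/restart/60000

    This asks the SCM to restart the service a minute after each crash, resetting the failure counter daily; the same settings live on the Recovery tab of the service's properties dialog, and an installer can apply them at deploy time.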

  • Source Control Checkin Comments at Top Of Source Files

    - by James Wiseman
    I've noticed a discrepancy with some source files in our system, whereby some contain source-control check-in comments and some do not. These comments are added automatically to the top of the file when it is checked in:

        * $Log: //vm1/Projects/Morpheus/Sleep.bdy-arc $
        --
        -- Rev 1.14   Apr 14 2009 15:32:52   John Smith
        -- Fixed bugs 2292 and 2230.

    This seems to have been quite prevalent in all the companies with which I have worked, but I must confess that I struggle to see the point. Generally the comments aren't that good, are often left by people who have long since departed, and even when they are of a high standard it is difficult to tie them to physical code changes. It also strikes me that you are physically changing the file that you are checking in. Now, this may not be such a problem with files that will be compiled, but it could be a disaster with others, e.g. JavaScript files. So really, my query is: what was the motivating concept behind providing this functionality in the first instance? Does anyone actually find these comments useful? Also, I would be curious to know if this is a feature that is commonly supported within source control systems. I am aware of it in PVCS, VSS and Subversion (Subversion keyword substitution), however I wonder if it is also available in some of the more popular DVCSs. Your help, as always, is much appreciated.
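    On the "commonly supported" part: Subversion's keyword substitution is opt-in per file and, notably, has no $Log$-style expanding history, only single-line keywords such as $Id$, $Revision$ and $Author$, enabled like so:

        svn propset svn:keywords "Id Revision Author" Sleep.bdy

    Among the DVCSs, Git deliberately ships only a minimal $Id$ substitution (the ident filter, switched on per pattern in .gitattributes) and Mercurial relegates keywords to an optional extension, so the growing-log-comment style appears to be fading out along with the older centralised tools.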

  • LaTeX, align alignment characters between align blocks

    - by ccook
    I would like to align two alignment characters between two align blocks, so that I can have some text in the middle of a derivation with the equations maintaining their horizontal alignment. For example, take the following excerpt of LaTeX using align:

        \begin{align*}
        \frac{\delta \phi}{\delta x_1} = {} &\frac{9}{8}\frac{\delta_1\phi}{\delta_1x_1}-\frac{1}{8}\frac{\delta_3\phi}{\delta_3x_1} \\
        & \frac{9}{8}\frac{1}{h_1}\left[\phi(x_1+h_1/2)-\phi(x_i-h_1/2)\right]-\frac{1}{8}\frac{1}{3h_1}\left[\phi(x_i+3h_1/2)-\phi(x_1-3h_1/2)\right]
        \end{align*}

    some text in the middle

        \begin{align*}
        & \frac{9}{8}\frac{1}{h_1}\left[\phi(x_1+h_1/2)-\phi(x_i-h_1/2)\right]-\frac{1}{8}\frac{1}{3h_1}\left[\phi(x_i+3h_1/2)-\phi(x_1-3h_1/2)\right]
        \end{align*}

    Ideally I would like the left of the equation in the second block to line up with that of the second equation in the first block. I could do a workaround by not having text in the middle; however, I would like this functionality. EDIT: I would like to have a good amount of text in between, say three to four lines, that lines up as normal paragraphs do. Adding text in the alignment block is the workaround I poorly alluded to.
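    For exactly this, amsmath provides \intertext (and the mathtools package a tighter-spaced \shortintertext): the align* environment is never closed, so every alignment point keeps lining up, while the text is set as a normal paragraph spanning the full line width. A sketch using the equations above:

        \begin{align*}
        \frac{\delta \phi}{\delta x_1} = {} &\frac{9}{8}\frac{\delta_1\phi}{\delta_1x_1}-\frac{1}{8}\frac{\delta_3\phi}{\delta_3x_1} \\
        \intertext{some text in the middle, typeset as an ordinary paragraph
        (it can run to three or four lines) while the alignment survives}
        & \frac{9}{8}\frac{1}{h_1}\left[\phi(x_1+h_1/2)-\phi(x_i-h_1/2)\right]-\frac{1}{8}\frac{1}{3h_1}\left[\phi(x_i+3h_1/2)-\phi(x_1-3h_1/2)\right]
        \end{align*}

    This needs \usepackage{amsmath}, which align* already requires anyway.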

  • Namespaces combined with TFS / Source Control explanation

    - by Christian
    As an ISV company, we are slowly running into the "structure your code" issue. We mainly develop using Visual Studio 2008 and 2010 RC, in C# and VB.NET. We have our own Team Foundation Server, and of course we use Source Control. When we started developing based on the .NET Framework, we also began using namespaces in a primitive way. With time we 'became more mature', I mean we learned to use the namespaces and we structured the code more and more, but only at solution scope. Now we have about 100 different projects and solutions in our source control. We have realized that many of our own classes are coded very redundantly; I mean, a Write2Log, GetExtensionFromFilename or similar function can be found between one and 20 times across all these projects and solutions. So my idea is: create one single kind of root folder in Source Control and start our own namespace-hierarchy structure below this root; let's name it CompanyName. A Write2Log class would then be found in CompanyName.System.Logging. Whenever we create a new solution or project and we need a log function, we will 'namespace' that solution and place it accordingly somewhere below the CompanyName root folder. To have the logging functionality, we then import (add) the existing project to the solution. Those 20+ projects/solutions with the Write2Log class can then be maintained in one single place. To my questions:

    - Is that a good idea, this philosophy of namespaces and source control?
    - There must be a good book explaining namespaces combined with source control, yes? Any hints/directions/tips?
    - How do you manage your 50+ projects?

  • managed beans as managed properties

    - by Sean
    I am using JSF 1.1 on WebSphere 6.1. I am building search functionality within an application and am having some issues. I've stripped out the extras and am left with the following four managed beans:

    - SearchController: controller bean, session scope
    - SearchResults: session scope (stores the results)
    - ProductSearch: session scope (stores the search conditions)
    - ResultsBacking: backing bean for the DataTable, used to determine which row was clicked, request scope

    The SearchController bean has the other three as managed properties; all except ResultsBacking are session-scoped. If there is only one item in the search results, I want to bring up that record directly. I call setFirst(0) on the data table in the ResultsBacking method (I want to reuse the existing method that handles which item was clicked, so this is called right after the setFirst). When I go to do another search, I get an IllegalArgumentException when calling getRowData on the data table. According to the API, this is thrown 'if now(sic) row data is available at the currently specified row index'. I'm confused as to why this happens: it works the first time but not the second. Do I need to reset the ResultsBacking on a new search to get rid of the old state?
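    Resetting the table's row state when a fresh search replaces the results is the first thing worth trying; a sketch, assuming ResultsBacking exposes the bound HtmlDataTable via a binding attribute (the names here are hypothetical):

        // In ResultsBacking, with binding="#{resultsBacking.table}" on the dataTable
        import javax.faces.component.html.HtmlDataTable;

        public class ResultsBacking {
            private HtmlDataTable table = new HtmlDataTable();

            /** Call this from SearchController before running a new search. */
            public void reset() {
                table.setFirst(0);     // undo the setFirst(0) positioning
                table.setRowIndex(-1); // -1 means "no current row"; clears stale state
            }

            public HtmlDataTable getTable() { return table; }
            public void setTable(HtmlDataTable table) { this.table = table; }
        }

    The stale row index left over from the single-result shortcut is then gone before the next getRowData call can trip over a shorter result list.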

  • Should a Perl constructor return an undef or a "invalid" object?

    - by DVK
    Question: what is considered to be "best practice", and why, for handling errors in a constructor? "Best practice" can be a quote from Schwartz, or "50% of CPAN modules use it", etc.; but I'm happy with a well-reasoned opinion from anyone, even if it explains why the common best practice is not really the best approach. As far as my own view of the topic goes (informed by software development in Perl for many years), I have seen three main approaches to error handling in a Perl module, listed from best to worst in my opinion:

    1. Construct an object and set an invalid flag (usually an "is_valid" method), often coupled with setting an error message via your class's error handling.

       Pros: allows for standard error handling (compared to other method calls), since you can use $obj->errors() type calls after a bad constructor just like after any other method call; allows additional info to be passed (e.g. one error, warnings, etc.); and allows lightweight "redo"/"fixme" functionality. In other words, if the object being constructed is very heavy, with many complex attributes that are 100% always OK, and the only reason it is not valid is that someone entered an incorrect date, you can simply do $obj->setDate() instead of paying the overhead of re-executing the entire constructor. This pattern is not always needed, but can be enormously useful in the right design.

       Cons: none that I'm aware of.

    2. Return undef.

       Cons: cannot achieve any of the pros of the first solution (per-object error messages outside of global variables, and the lightweight "fixme" capability for heavy objects).

    3. Die inside the constructor. Outside of some very narrow edge cases, I personally consider this an awful choice, for too many reasons to list in the margins of this question.

    UPDATE: Just to be clear, I consider the (otherwise very worthy and great) design of a very simple constructor that can't fail at all, plus a heavy initializer method where all the error checking occurs, to be merely a subset of either case #1 (if the initializer sets error flags) or case #3 (if the initializer dies) for the purposes of this question. Obviously, in choosing such a design you automatically reject option #2.
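    A bare-bones sketch of approach #1, with a hypothetical class and validation rule, just to make the shape concrete:

        package Widget;
        use strict;
        use warnings;

        sub new {
            my ($class, %args) = @_;
            my $self = bless { _errors => [] }, $class;
            $self->setDate($args{date});   # validation lives in the setter
            return $self;                  # always returns an object
        }

        # The lightweight "fixme": callers can repair just the bad attribute.
        sub setDate {
            my ($self, $date) = @_;
            if (defined $date && $date =~ /^\d{4}-\d{2}-\d{2}$/) {
                $self->{date}    = $date;
                $self->{_errors} = [ grep { $_ ne 'bad date' } @{ $self->{_errors} } ];
            }
            else {
                push @{ $self->{_errors} }, 'bad date';
            }
            return $self;
        }

        sub is_valid { return !@{ $_[0]{_errors} } }
        sub errors   { return @{ $_[0]{_errors} } }

        1;

    A caller can then repair a heavy object cheaply: $obj->setDate('2010-04-01') unless $obj->is_valid;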

  • Having trouble adding jQuery to Charisma

    - by kira423
    I am trying to add some jQuery to the Charisma admin panel and have been having nothing but trouble. I am trying to add it to the charisma.js file. This is what I am adding:

        // add multiple select / deselect functionality
        $("#selectall").click(function () {
            $('.checkbox').attr('checked', this.checked);
        });

        // if all checkboxes are selected, check the selectall checkbox,
        // and vice versa
        $(".checkbox").click(function(){
            if($(".checkbox").length == $(".checkbox:checked").length) {
                $("#selectall").attr("checked", "checked");
            } else {
                $("#selectall").removeAttr("checked");
            }
        });

    I have tried this code wrapped in the anonymous $(function(){ as well as without, and I have inserted it into both $(document).ready(function(){ and docReady(), as well as in the head of my page, but I am not really trained in jQuery, so I am a bit lost as to what I am doing wrong. My class and div tags are correct for the code, as I have checked them several times for misspellings. I am not sure what I am doing wrong. Is there a better check-all snippet I can use here, or am I just putting this all in the wrong place? UPDATE: I think the actual code may be working; I cannot tell. After I click the select-all box, it seems that I have to click the other boxes three times to get the check mark back into the box, so it seems like it is having trouble actually showing that the box is checked. This may be a problem with styling, but I don't know how to correct it.
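    The click-three-times symptom is the classic attribute-versus-property mismatch: on jQuery 1.6+, .attr('checked', ...) writes the HTML attribute while the rendered tick follows the DOM property. A sketch of the same handlers using .prop(), assuming the jQuery bundled with Charisma is 1.6 or newer:

        // select / deselect all rows from the header checkbox
        $("#selectall").click(function () {
            $('.checkbox').prop('checked', this.checked);
        });

        // keep the header checkbox in sync with the row checkboxes
        $(".checkbox").click(function () {
            $("#selectall").prop('checked',
                $(".checkbox").length === $(".checkbox:checked").length);
        });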

  • Broken flash movie player! allowFullScreen does not work with anything other than a wmode value of "window"

    - by lhnz
    I have a Flash player on a page which plays videos. I also have modal popups which need to be able to display over the top of the Flash player when they are opened, etc. I can't change either of these requirements, since they are part of the spec I have been given. Flash seems to ignore the z-indexes I set on it with CSS, and the modal popups will therefore only appear above the video player if I set the video player's wmode to opaque or transparent. However, if I do this, then the full screen functionality stops working correctly: when I un-fullscreen the video, it stays zoomed in. In short:

    - If you open a popup on an item page, or another page containing Flash, the popup should be displayed above it.
    - Flash ignores z-index values.
    - You can stop Flash ignoring z-index values by setting wmode to opaque or transparent, rather than the default, window.
    - This stops full screen from working correctly.

    Has anybody else faced this issue before? What can I do to fix it? I was thinking of recreating the video player with wmode=opaque whenever I open a modal popup, and then switching it back to wmode=window when the modal popup is closed, since this would mean that the popup should display above the player (as wmode=opaque) and full screen should work correctly (as wmode=window). However, this is not ideal at all: as well as being a hack, it would also mean that the video would stop playing if somebody clicked a button which opened a popup. Cheers!
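    For reference, wmode is a per-embed parameter; this sketch of the markup shows the knob being discussed (file names are placeholders, and the IE-specific classid/codebase attributes are omitted):

        <object type="application/x-shockwave-flash" data="player.swf"
                width="640" height="360">
          <param name="movie" value="player.swf" />
          <param name="wmode" value="opaque" />  <!-- or "transparent" / "window" -->
          <param name="allowFullScreen" value="true" />
        </object>

    The whole trade-off lives in that one value: "window" hands the rectangle to the OS (full screen behaves, but the player paints over every popup), while "opaque"/"transparent" composites the player into the page's stacking order.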

  • Windows Service Conundrum

    - by Paul Johnson
    All, I have a custom object which I have written using VB.NET (.NET 2.0). The object instantiates its own Threading.Timer object and carries out a number of background processes, including periodic interrogation of an Oracle database and delivery of emails via SMTP according to data detected in the database. The following is the code implemented in the Windows service class:

        Public Class IncidentManagerService

            'Fakes
            Private _fakeRepoFactory As IRepoFactory
            Private _incidentRepo As FakeIncidentRepo
            Private _incidentDefinitionRepo As FakeIncidentDefinitionRepo
            Private _incManager As IncidentManager.Session

            'Real
            Private _started As Boolean = False
            Private _repoFactory As New NHibernateRepoFactory
            Private _psalertsEventRepo As IPsalertsEventRepo = _repoFactory.GetPsalertsEventRepo()

            Protected Overrides Sub OnStart(ByVal args() As String)
                ' Add code here to start your service. This method should set things
                ' in motion so your service can do its work.
                If Not _started Then
                    Startup()
                    _started = True
                End If
            End Sub

            Protected Overrides Sub OnStop()
                'Tear down class variables in order to ensure the service stops cleanly
                _incManager.Dispose()
                _incidentDefinitionRepo = Nothing
                _incidentRepo = Nothing
                _fakeRepoFactory = Nothing
                _repoFactory = Nothing
            End Sub

            Private Sub Startup()
                Dim incidents As IList(Of Incident) = Nothing
                Dim incidentFactory As New IncidentFactory
                incidents = IncidentFactory.GetTwoFakeIncidents
                _repoFactory = New NHibernateRepoFactory
                _fakeRepoFactory = New FakeRepoFactory(incidents)
                _incidentRepo = _fakeRepoFactory.GetIncidentRepo
                _incidentDefinitionRepo = _fakeRepoFactory.GetIncidentDefinitionRepo
                'Start an incident manager session
                _incManager = New IncidentManager.Session(_incidentRepo, _incidentDefinitionRepo, _psalertsEventRepo)
                _incManager.Start()
            End Sub

        End Class

    After a little bit of experimentation I arrived at the above code in the OnStart method. All functionality passed testing when deployed from VS2005 on my development PC; however, when deployed on a true target machine, the service would not start and responds with the following message: "The service on local computer started and then stopped..." Am I going about this the correct way? If not, how can I best implement my incident manager within the confines of the Windows service class? It seems pointless to implement a timer for the IncidentManager because it already implements its own timer... Any assistance much appreciated. Kind regards, Paul J.
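    "Started and then stopped" almost always means an exception escaped during startup. A first diagnostic step, as a sketch: log the exception to the event log so the target machine names the real culprit (often a dependency or configuration that exists only on the dev box):

        Protected Overrides Sub OnStart(ByVal args() As String)
            Try
                If Not _started Then
                    Startup()
                    _started = True
                End If
            Catch ex As Exception
                ' ServiceBase exposes an EventLog instance for exactly this
                EventLog.WriteEntry("IncidentManagerService: " & ex.ToString(), _
                                    System.Diagnostics.EventLogEntryType.Error)
                Throw
            End Try
        End Sub

    One caveat: the field initializers (_repoFactory, _psalertsEventRepo) run before OnStart is ever called, so if one of those throws, this handler never sees it; moving that work inside Startup() keeps it within reach of the Try block.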

  • F# Active Pattern List.filter or equivalent

    - by akaphenom
    I have records of the following types:

        type tradeLeg = {
            id : int ;
            tradeId : int ;
            legActivity : LegActivityType ;
            actedOn : DateTime ;
            estimates : legComponents ;
            entryType : ShareOrDollarBased ;
            confirmedPrice : DollarsPerShare option ;
            actuals : legComponents option ;
        }

        type trade = {
            id : int ;
            securityId : int ;
            ricCode : string ;
            tradeActivity : TradeType ;
            enteredOn : DateTime ;
            closedOn : DateTime ;
            tradeLegs : tradeLeg list ;
        }

    Obviously the tradeLegs hang off a trade. A leg may be settled or unsettled (or unsettled but price-confirmed), thus I have defined the active pattern:

        let (|LegIsSettled|LegIsConfirmed|LegIsUnsettled|) (l: tradeLeg) =
            if Helper.exists l.actuals then LegIsSettled
            elif Helper.exists l.confirmedPrice then LegIsConfirmed
            else LegIsUnsettled

    and then, to determine if a trade is settled (based on all legs matching LegIsSettled):

        let (|TradeIsSettled|TradeIsUnsettled|) (t: trade) =
            if List.exists (fun l ->
                   match l with
                   | LegIsSettled -> false
                   | _ -> true) t.tradeLegs
            then TradeIsSettled
            else TradeIsUnsettled

    I can see some advantages of this use of active patterns; however, I would think there is a more efficient way to see if any item of a list matches (or doesn't match) an active pattern, without having to write a lambda expression specifically for it and use List.exists. The question is twofold:

    1. Is there a more concise way to express this?
    2. Is there a way to abstract the functionality / expression (fun l -> match l with | LegIsSettled -> false | _ -> true), such that

        let itemMatchesPattern pattern item =
            match item with
            | pattern -> true
            | _ -> false

    so I could write (as I am reusing this design pattern):

        let curriedItemMatchesPattern = itemMatchesPattern LegIsSettled
        if List.exists curriedItemMatchesPattern t.tradeLegs
        then TradeIsSettled
        else TradeIsUnsettled

    Thoughts?
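    A sketch of one answer: the active pattern is itself a function (this one has type tradeLeg -> Choice<unit, unit, unit>), so the predicate can be named once and reused, and List.forall states the "all legs settled" rule directly. As an aside, note that the List.exists version above returns TradeIsSettled exactly when some leg is not settled, which looks inverted:

        // reusable predicate built on the active pattern
        let legIsSettled leg =
            match leg with
            | LegIsSettled -> true
            | _ -> false

        let (|TradeIsSettled|TradeIsUnsettled|) (t: trade) =
            if List.forall legIsSettled t.tradeLegs
            then TradeIsSettled
            else TradeIsUnsettled

    The fully generic itemMatchesPattern is the hard part: inside a match expression, a lowercase identifier such as pattern is a fresh variable binding that matches everything, not a reference to the active pattern passed in, so the proposed helper won't behave the way it reads. Passing an ordinary predicate (like legIsSettled above), or matching on the Choice value returned by calling the pattern function directly, is the idiomatic route.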

  • WEBrick transparent proxy

    - by zzeroo
    Hi there, I have an absolutely simple proxy running:

        require 'webrick'
        require 'webrick/httpproxy'

        s = WEBrick::HTTPProxyServer.new(
          :Port => 8080,
          :RequestCallback => Proc.new { |req, res| puts req.request_line, req.raw_header }
        )

        # Shutdown functionality
        trap("INT") { s.shutdown }

        # run the beast
        s.start

    To my mind, this should not influence the communication in any way, but some sites no longer work. In particular, http://lastfm.de's embedded Flash players don't work. The log looks like:

        - -> http://ext.last.fm/2.0/?api%5Fsig=aa3e9ac9edf46ceb9a673cb76e61fef4&flashresponse=true&y=1269686332&streaming=true&playlistURL=lastfm%3A%2F%2Fplaylist%2Ftrack%2F42620245&fod=true&sk=ee93ae4f438767bf0183d26478610732&lang=de&api%5Fkey=da6ae1e99462ee22e81ac91ed39b43a4&method=playlist%2Efetch
        GET http://play.last.fm/preview/118270350.mp3 HTTP/1.1
        Host: play.last.fm
        User-Agent: Mozilla/5.0 (X11; U; Linux i686; de; rv:1.9.2) Gecko/20100308 Ubuntu/10.04 (lucid) Firefox/3.6
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: de,en-us;q=0.7,en;q=0.3
        Accept-Encoding: gzip,deflate
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Proxy-Connection: keep-alive
        Cookie: AnonWSSession=ee93ae4f438767bf0183d26478610732; AnonSession=cb8096e3b0d8ec9f4ffd6497a6d052d9-12bb36d49132e492bb309324d8a4100fc422b3be9c3add15ee90eae3190db5fc

        localhost - - [27/Mar/2010:11:38:52 CET] "GET http://www.lastfm.de/log/flashclient/minor/Track_Loading_Fail/Buffering_Timeout HTTP/1.1" 404 7593
        - -> http://www.lastfm.de/log/flashclient/minor/Track_Loading_Fail/Buffering_Timeout
        localhost - - [27/Mar/2010:11:38:52 CET] "GET http://play.last.fm/preview/118270350.mp3 HTTP/1.1" 302 0

    I need some hints as to why, or how, the communication is being disturbed.

  • SIMPLE PHP MVC Framework!

    - by Allen
    I need a simple and basic MVC example to get me started. I don't want to use any of the available packaged frameworks. I am in need of a simple example of a simple PHP MVC framework that would allow, at most, the basic creation of a simple multi-page site. I am asking for a simple example because I learn best from simple real-world examples. Big popular frameworks (such as CodeIgniter) are too much for me to even try to understand, and the other "simple" examples I have found are not well explained, or seem a little sketchy in general. I should add that most examples of simple MVC frameworks I see use mod_rewrite (for URL routing) or some other Apache-only method; I run PHP on IIS. I need to be able to understand a basic MVC framework, so that I could develop my own that would allow me to easily extend functionality with classes. I am at the point where I understand basic design patterns and MVC pretty well. I understand them in theory, but when it comes down to actually building a real-world, simple, well-designed MVC framework in PHP, I'm stuck. I would really appreciate some help! Edit: I just want to note that I am looking for a simple example that an experienced programmer could whip up in under an hour. I mean simple as in bare-bones simple. I don't want to use any huge frameworks; I am trying to roll my own. I need a decent SIMPLE example to get me going.
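    Nothing authoritative, but here is the kind of bare-bones skeleton that fits the under-an-hour bar. Everything below is hypothetical (it assumes a views/home.php template exists alongside it), and it routes via a ?route= query string instead of mod_rewrite, so it runs unchanged on IIS:

        <?php
        // index.php: a minimal front controller.
        // Request pages as index.php?route=home/index
        $route = isset($_GET['route']) ? $_GET['route'] : 'home/index';
        list($controller, $action) = array_pad(explode('/', $route), 2, 'index');

        // --- Model: plain data-access class ---
        class PageModel {
            public function title() { return 'Home'; }
        }

        // --- Controller: one class per resource, one method per action ---
        class HomeController {
            public function index() {
                $model = new PageModel();
                $this->render('home', array('title' => $model->title()));
            }
            protected function render($view, $data) {
                extract($data);                       // $title becomes visible to the view
                include __DIR__ . "/views/$view.php"; // View: a plain PHP template
            }
        }

        // --- Dispatcher ---
        $class = ucfirst($controller) . 'Controller';
        if (class_exists($class) && method_exists($class, $action)) {
            $obj = new $class();
            $obj->$action();
        } else {
            header('HTTP/1.0 404 Not Found');
            echo 'Page not found';
        }

    Extending functionality is then a matter of adding SomethingController classes (and, in a real version, autoloading them from a controllers/ folder instead of defining them inline).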

  • Choosing an installer product that is free and will download/install the .NET Framework

    - by Coder7862396
    I'm currently using the Visual Studio Installer (Setup Project) in Visual Studio 2010 as the installer for MyProgram. It has some quirky bugs and is not very customizable, so I would like to switch to another installer product. Here are my requirements:

    - Must be free (and licensed for commercial use).
    - Must install Windows Installer 3.1 and the .NET Framework 4.0 if the client doesn't have them; the installer should download them if they are not available locally.
    - The code for detecting the .NET Framework and downloading it must be written by Microsoft (I do not want to have to update hard-coded URLs and registry keys in the future). I know that the Windows SDK includes a setup bootstrapper that does this (C:\Program Files\Microsoft SDKs\Windows\v7.0A\Bootstrapper).
    - In the future, when .NET Framework 5 is released and MyProgram uses it, no installer code should need to be changed: the updated installer product should see that MyProgram now uses .NET Framework version 5 and install that.

    Here are my current choices:

    - Visual Studio Installer: automatically detects/downloads/installs Windows Installer and the .NET Framework using a bootstrapper Setup.exe (good!), but has limited/buggy functionality (uninstall shortcuts in the Start Menu cause empty folders to be left behind during uninstall, asking the user whether they want a desktop shortcut requires a lot of work, etc.).
    - NSIS: doesn't natively support the .NET Framework, so adding it as a prerequisite requires excessive coding, hard-coded URLs, etc.
    - Inno Setup: doesn't natively support the .NET Framework, so adding it as a prerequisite requires excessive coding, hard-coded URLs, etc.
    - WiX: steep learning curve... not sure if I want to spend weeks learning it only to find out that it has the same uninstall problem as the Visual Studio Installer (because they both use MSI files).
    - InstallShield LE 2010: downloading it requires me to set up a fake email account just to register for the download. Then, once it is installed, it has to contact the company's servers and transmit some private information before I'm even allowed to try the free version. This is the most insidious form of DRM there is, and I will not accept it.

  • Qt QFileDialog - native dialogs only with static functions?

    - by darron
    I'm trying to simply save a file. However, I need a filename entered without a suffix to automatically get a default suffix (which setDefaultSuffix() does), and I'd rather not completely lose the native save dialog just for this.

    exec() is not overloaded from QDialog, so it totally bypasses the native hook (ignoring the DontUseNativeDialog option even if it's false). If I disable the file-overwrite warning and append the default suffix myself after the function returns, then I'd be re-opening the dialog if the user did not want to overwrite... and that's just ugly.

    Is there some signal I can catch to quickly inject the default suffix if it's not there? I'm guessing not, since it's a native dialog. Is there something I'm doing wrong with the filter? I only have one filter choice; it should use that extension.

    This seems pretty lame. Launching the save dialog and simply typing "test" should never result in an extensionless file. "test.", yes. "test", no way. That'll really confuse the users when they hit Load and can't see the file they just saved. I guess the cross-platform part of Qt is giving me lowest-common-denominator file dialog functionality?
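    One workaround that keeps the native dialog, sketched under the assumption of a single *.dat filter: suppress the native overwrite prompt, append the suffix by hand, and redo the overwrite check yourself, so the dialog only reopens in the genuine name-clash case:

        #include <QFileDialog>
        #include <QMessageBox>
        #include <QFile>

        QString askSaveFileName(QWidget *parent)
        {
            for (;;) {
                QString name = QFileDialog::getSaveFileName(
                    parent, QObject::tr("Save File"), QString(),
                    QObject::tr("Data files (*.dat)"), 0,
                    QFileDialog::DontConfirmOverwrite);   // we re-check below
                if (name.isEmpty())
                    return name;                          // user cancelled
                if (!name.endsWith(".dat", Qt::CaseInsensitive))
                    name += ".dat";                       // the default-suffix step
                if (!QFile::exists(name)
                    || QMessageBox::question(parent, QObject::tr("Overwrite?"),
                           QObject::tr("%1 already exists. Overwrite it?").arg(name),
                           QMessageBox::Yes | QMessageBox::No) == QMessageBox::Yes)
                    return name;                          // accepted
                // otherwise loop and show the dialog again
            }
        }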

  • Problem with sfRemember cookie / sfGuard Remember me

    - by Tom
    I'm using Symfony 1.4 with Doctrine. Sorry if this is a silly question, but what exactly does one need to build on top of the sfDoctrineGuardPlugin to get the "remember me" functionality working? When I log in a user, the sfRemember cookie is created with the default 15-day lifetime, and the remember key is saved in the plugin's sf_guard_remember_key table. Without any tweaks to the plugin, the sfGuardSecurityUser signIn() method creates the cookie, but the signOut() method erases it, leaving no cookie unless you're logged in!

        Signin():
            sfContext::getInstance()->getResponse()->setCookie($remember_cookie, $key, time() + $expiration_age);

        Signout():
            sfContext::getInstance()->getResponse()->setCookie($remember_cookie, '', time() - $expiration_age);

    I can see that the database table saves the cookie key as a relation of sf_guard_user, but that's not much good if the cookie is gone... I'd be grateful if someone could tell me what I'm missing here, and ideally: if I prevent the signOut() method from removing the cookie, do I need to write code to read the cookie myself, or is this automated somewhere/somehow? I've got bog-standard Symfony 1.4 and sfDoctrineGuardPlugin installations. It all just seems totally wrong, and the documentation on this is non-existent. Any help would be appreciated.
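    On the "is it automated" half: if memory serves (worth verifying against the installed plugin version), the plugin ships a filter that reads the sfRemember cookie on each request and signs the user back in, and it only runs if registered in the application's filters.yml, roughly:

        # apps/frontend/config/filters.yml  (sketch; confirm the class name
        # against your copy of sfDoctrineGuardPlugin)
        rendering: ~
        security:  ~

        remember_me:
          class: sfGuardRememberMeFilter

        cache:     ~
        execution: ~

    Without such a filter, nothing ever reads the cookie, which would explain the silence. As for signOut() erasing it: that part is intentional, since an explicit logout is supposed to forget the user; the remember-me path only matters for sessions that expire on their own.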

  • How does VS 2005 provide history across all TFS Team Projects when tf.exe cannot?

    - by AakashM
    In Visual Studio 2005, in the TFS Source Control Explorer, there is a top-level node for the TFS server itself, with a child node for each Team Project. Right-clicking either the server node or the node for a Team Project gives a context menu on which there is a View History item. Selecting this gives you a History window showing the last 200 or so changesets, either for the specific Team Project chosen, or across all Team Projects. It is this history across all Team Projects that I am wondering about. The command-line tf.exe history command provides (as I understand it) basically the same functionality as is provided by the VS TFS Source Control plug-in, but I cannot work out how to get tf.exe history to report across all Team Projects. At a command line, supposing I have C:\ mapped as the root of my workspace, and Foo, Bar and Baz as Team Projects, I can do:

        C:\> tf history Foo /recursive /stopafter:200

    to get the last 200 changesets that affected Team Project Foo; or, from within a Team Project folder:

        C:\Bar> tf history *.* /recursive /stopafter:200

    which does the same thing for Team Project Bar; note that the wildcard *.* is allowed here. However, none of these work (each gives the error message shown):

        C:\> tf history /recursive /stopafter:200
        The history command takes exactly one item

        C:\> tf history *.* /recursive /stopafter:200
        Unable to determine the source control server

        C:\> tf history *.* /server:servername /recursive /stopafter:200
        Unable to determine the workspace

    I don't see an option in the docs for tf to specify a workspace; it seems to only want to determine it from the current folder. So what is VS 2005 doing? Is it internally doing a history on each Team Project in turn and then sticking the results together? Note also that I have tried the Power Tools; tfpt history from the command line gives exactly the same error messages seen here.
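    One thing worth trying (an educated guess, not verified against TFS 2005): satisfy the "exactly one item" rule with the server-side root path $/, which needs no workspace mapping to resolve:

        C:\> tf history "$/" /server:servername /recursive /stopafter:200

    Since $/ is the parent of every Team Project, a recursive history from there would be exactly the across-all-projects view; Visual Studio itself presumably asks for the same thing through the client API rather than running one query per project.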

  • In which order is model binding and validation done in ASP.NET MVC 2?

    - by Simon Bartlett
    I am using ASP.NET MVC 2, and am using a view-model-per-view approach. I am also using AutoMapper to map properties from my domain model to the view model. Take this example view model (with Required data-annotation attributes for validation purposes):

        public class BlogPost_ViewModel
        {
            public int Id { get; set; }

            [Required]
            public string Title { get; set; }

            [Required]
            public string Text { get; set; }
        }

    In the post editor view I am using a rich text editor (CKEditor). Because CKEditor is an HTML editor, I ideally need CKEditor to HTML-encode the user's input when the form is submitted, so that ASP.NET's input validation does not complain. This is not a problem, as CKEditor has this functionality built in; however, I need CKEditor's output decoded before mapping back to the domain object (via AutoMapper). I want to add a new property (to the view model above) to solve this, as follows:

        public string HTMLEncodedText
        {
            get { return HTMLEncode(Text); }
            set { Text = HTMLDecode(value); }
        }

    I can then bind this property to CKEditor in the view, but still use AutoMapper to map the Text property in the controller, all without having to turn input validation off. My question is: do you know how the model binding and validation process in ASP.NET MVC 2 works? Are all model properties bound before validation is carried out? Or does each individual property get validated as it is being set? I think that, ideally, for my idea to work, all properties need to be set before the model is validated.
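    For what it's worth, the HTMLEncode/HTMLDecode helpers sketched above map naturally onto System.Web's HttpUtility (an assumption about intent, but the calls themselves are standard):

        using System.Web;

        public string HTMLEncodedText
        {
            get { return HttpUtility.HtmlEncode(Text); }
            set { Text = HttpUtility.HtmlDecode(value); }
        }

    As for ordering: in MVC 2's DefaultModelBinder, properties are bound one by one (setters run per property, with per-property validation applied as each is bound), and model-level validation runs at the end via OnModelUpdated. Since the setter above rewrites Text before any later check can observe it, the scheme should hold together, though it is worth confirming against the binder's source for the exact MVC 2 build in use.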

  • Calling SubmitChanges on DataContext does not update database.

    - by drasto
    In a C# ASP.NET MVC application, I use LINQ to SQL to provide data for my application. I have a simple database schema (shown in a designer screenshot in the original post). In my controller class I reference this data context, called Model (as seen in the properties panel of the screenshot), like this:

        private Model model = new Model();

    I've got a table (list) of Series rendered on my page. It renders properly, and I was able to add delete functionality to delete a Series like this:

        public ActionResult Delete(int id)
        {
            model.Series.DeleteOnSubmit(model.Series.SingleOrDefault(s => s.ID == id));
            model.SubmitChanges();
            return RedirectToAction("Index");
        }

    where the appropriate action link looks like this:

        <%: Html.ActionLink("Delete", "Delete", new { id=item.ID })%>

    Create (implemented in a similar way) also works fine. However, Edit does not work. My edit looks like this:

        public ActionResult Edit(int id)
        {
            return View(model.Series.SingleOrDefault(s => s.ID == id));
        }

        [HttpPost]
        public ActionResult Edit(Series series)
        {
            if (ModelState.IsValid)
            {
                UpdateModel(series);
                series.Title = series.Title + " some string to ensure title has changed";
                model.SubmitChanges();
                return RedirectToAction("Index");
            }
            // (snippet ends here in the original post)
        }

    I have verified that my database has a primary key set up correctly. I debugged my application and found that everything works as expected until the line with model.SubmitChanges(); this command does not apply the changes to the Title property (or any other) against the database. Please help.
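    The likely cause: the Series arriving in the POST action is built by the model binder and is never attached to the DataContext, so its change tracker hears nothing. A sketch of the usual fix, reloading the attached entity and copying the posted values onto it:

        [HttpPost]
        public ActionResult Edit(int id, FormCollection form)
        {
            // Fetch the entity through the same DataContext so it is tracked
            var series = model.Series.SingleOrDefault(s => s.ID == id);
            if (series == null)
                return RedirectToAction("Index");

            UpdateModel(series);    // copy posted values onto the tracked object
            model.SubmitChanges();  // change tracking now has real deltas to flush
            return RedirectToAction("Index");
        }

    Alternatively, Table<T>.Attach(series, true) can enlist the detached object directly, but that route requires a timestamp/version column (or UpdateCheck tweaks) on the table.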

  • Accessing the selected element inside a templated TextBlock bound to a WPF ListBox

    - by black sensei
    Hello good people, I'm trying to achieve some functionality but I don't know how to start. I'm using VS 2008 SP1 and I'm consuming a web service which returns a collection (contactInfo[]) that I bind to a ListBox, with a little DataTemplate on it:

        <ListBox Margin="-146,-124,-143,-118.808" Name="contactListBox"
                 MaxHeight="240" MaxWidth="300" MinHeight="240" MinWidth="300">
            <ListBox.ItemTemplate>
                <DataTemplate>
                    <TextBlock>
                        <CheckBox Name="contactsCheck" Uid="{Binding fullName}" Checked="contacts_Checked" />
                        <Label Content="{Binding fullName}" FontSize="15" FontWeight="Bold"/>
                        <LineBreak/>
                        <Label Content="{Binding mobile}" FontSize="10" FontStyle="Italic" Foreground="DimGray" />
                        <Label Content="{Binding email}" FontStyle="Italic" FontSize="10" Foreground="DimGray"/>
                    </TextBlock>
                </DataTemplate>
            </ListBox.ItemTemplate>
        </ListBox>

    Everything works fine so far. When a checkbox is checked, I would like to access the information in the labels belonging to the same row, and append that information to a global variable, for each checkbox checked. My problem right now is that I don't know how to do that. Can anyone shed some light on how to do it? If you notice Checked="contacts_Checked", that's where I planned to perform the operations. Thanks for reading and helping out.
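    A sketch of contacts_Checked that gets at the row's data without touching the labels at all: inside a DataTemplate, every generated element's DataContext is the row item, so the sender CheckBox already carries the whole contactInfo object (the StringBuilder is a hypothetical stand-in for the "global variable"):

        // using System.Text; using System.Windows; using System.Windows.Controls;
        private readonly StringBuilder selectedContacts = new StringBuilder();

        private void contacts_Checked(object sender, RoutedEventArgs e)
        {
            var box = (CheckBox)sender;
            var contact = box.DataContext as contactInfo;  // the row's bound object
            if (contact != null)
            {
                // fullName, mobile and email live on the same object the
                // labels bind to, so no visual-tree walking is needed
                selectedContacts.Append(contact.fullName).Append("; ");
            }
        }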

  • Simple, fast SQL queries for flat files.

    - by plinehan
    Does anyone know of any tools that provide simple, fast queries of flat files using a SQL-like declarative query language? I'd rather not pay the overhead of loading the file into a DB, since the input data is typically thrown out almost immediately after the query is run. Consider the data file animals.txt:

        dog 15
        cat 20
        dog 10
        cat 30
        dog 5
        cat 40

    Suppose I want to extract the highest value for each unique animal. I would like to write something like:

        cat animals.txt | foo "select $1, max(convert($2 using decimal)) group by $1"

    I can get nearly the same result using sort:

        cat animals.txt | sort -t " " -k1,1 -k2,2nr

    And I can always drop into awk from there, but this all feels a bit awkward (couldn't resist) when a SQL-like language would seem to solve the problem so cleanly. I've considered writing a wrapper for SQLite that would automatically create a table based on the input data, and I've looked into using Hive in single-processor mode, but I can't help but feel this problem has been solved before. Am I missing something? Is this functionality already implemented by another standard tool? Halp!
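    For the record, the awk escape hatch for this particular query is a one-liner (whitespace-separated input assumed):

        awk '$2 > max[$1] { max[$1] = $2 } END { for (a in max) print a, max[a] }' animals.txt

    It prints the highest value per animal, at the cost of addressing columns as $1/$2 instead of anything resembling SQL, which is precisely the ergonomics gap the question is about.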
