Search Results

Search found 7418 results on 297 pages for 'argument passing'.


  • How can we change views in a UISplitViewController other than using the popover and selecting?

    - by wolverine
    I have built a sample app with UISplitViewController by studying the example they provide. I created three detail views and configured them to change by the default means: via the left/master view in landscape, and via the popover in portrait orientation. Now I am trying to move to another view (previous/next) from the current view using a left/right swipe in each view. To do that, I created a function in the RootViewController and copy-pasted into it the same code the popover uses for table-row selection in the RootViewController. I call this function from my current view's controller, passing the index of the view to be displayed next. The function is being called, but nothing happens. Please help me, or suggest any other way to do this that avoids such a convoluted step. Here is the function I use to change the view:

      - (void)rearrangeViews:(int)viewRow {
          UIViewController <SubstitutableDetailViewController> *detailViewController = nil;
          if (viewRow == 0) {
              DetailViewController *newDetailViewController = [[DetailViewController alloc] initWithNibName:@"DetailView" bundle:nil];
              detailViewController = newDetailViewController;
          }
          if (viewRow == 1) {
              SecondDetailViewController *newDetailViewController = [[SecondDetailViewController alloc] initWithNibName:@"SecondDetailView" bundle:nil];
              detailViewController = newDetailViewController;
          }
          if (viewRow == 2) {
              ThirdDetailViewController *newDetailViewController = [[ThirdDetailViewController alloc] initWithNibName:@"ThirdDetailView" bundle:nil];
              detailViewController = newDetailViewController;
          }

          // Update the split view controller's view controllers array.
          NSArray *viewControllers = [[NSArray alloc] initWithObjects:self.navigationController, detailViewController, nil];
          splitViewController.viewControllers = viewControllers;
          [viewControllers release];

          if (rootPopoverButtonItem != nil) {
              [detailViewController showRootPopoverButtonItem:self.rootPopoverButtonItem];
          }
          [detailViewController release];
      }

  • Using Silverlight for Views in ASP.Net MVC - a bad idea?

    - by bplus
    I'm currently writing a small application for use internally at my office. I started out teaching myself some MVC (I've been a C# dev for 3 years). One of the main requirements is editable grids, and I quickly realised that Silverlight (I have zero Silverlight experience) could be a big help here. I've managed to create a proof of concept where MVC and Silverlight talk back and forth by combining two techniques: creating a REST API using MVC, and MVC with Silverlight. I also got some help on Stack Overflow: silverlight-grids-mvc-http-post.

    Essentially all I'm doing is embedding a Silverlight object in a view, serializing the model data as JSON, and passing it to Silverlight (using init params written into the response). The Silverlight object can post data back to the controller as JSON. So far this seems like it could work quite well. However, I am a bit concerned that I could be painting myself into a corner with this approach: since I don't have much experience with either technology, I'm worried I'm going to get hit with something further down the line that I won't be able to work around. Has anybody else tried doing this? Any advice would be much appreciated!

  • asp.net mvc: What is the correct way to return html from controller to refresh select list?

    - by Mark Redman
    Hi, I am new to ASP.NET MVC, particularly ajax operations. I have a form with a jQuery dialog for adding items to a drop-down list. This posts to a controller action. If nothing is returned from the controller action (i.e. a void method), the page returns having updated the database, but obviously there is no change to the form. What would be the best practice for updating the drop-down list with the added id/value and selecting the item? I think my options are:

    1) Construct and return manually the html that makes up the new <select> tag. [This would be easy enough and would work, but it seems like I am missing something.]
    2) Use some kind of "helper" to construct the new html. [This seems to make sense.]
    3) Only return the id/value, then add it to the list and select the item. [This seems like overkill, considering the item needs to be placed in the correct order etc.]
    4) Use some kind of partial view. [Does this mean creating additional forms within ascx controls? I'm not sure how this would affect submitting the main form it sits on. Also, unless this is reusable by passing in parameters (not sure how that's done), maybe 2 is the option?]

    UPDATE: Having looked around a bit, it seems that generating html within the controller is not a good idea. I have seen other posts that render partial views to strings, which I guess is what I need, and it separates concerns (since the html bits are in the ascx). Any comments on whether that is good practice?

  • Undefined method `add' on a cucumber step that usually works.

    - by Josiah Kiehl
    I have a path defined:

      when /the admin home\s?page/
        "/admin/"

    I have a scenario that is passing:

      Scenario: Let admins see the admin homepage
        Given "pojo" is logged in
        And "pojo" is an "admin"
        And I am on the admin home page
        Then I should see "Hi there."

    And I have a scenario that is failing:

      Scenario: Review flagged photo
        Given "pojo" is logged in
        And "pojo" is an "admin"
        ...bunch of steps that create stuff in the database...
        And I am on the admin home page
        Then ... the rest of the steps

    The step that fails in the second one is "And I am on the admin home page", which passes just fine in the first scenario. Here's the error I get:

      And I am on the admin home page # features/step_definitions/web_steps.rb:18
        undefined method `add' for {}:Hash (NoMethodError)
        ./app/controllers/admin_controller.rb:13:in `index'
        ./app/controllers/admin_controller.rb:11:in `each'
        ./app/controllers/admin_controller.rb:11:in `index'
        /usr/lib/ruby/1.8/benchmark.rb:308:in `realtime'
        ./features/step_definitions/web_steps.rb:19:in `/^(?:|I )am on (.+)$/'
        features/admin.feature:52:in `And I am on the admin home page'

    This is very odd: why would the step be fine in the first case and not in the second, where the only difference is a bunch of steps that create records in the db?

    [edit] Here's the step that adds stuff to the database:

      Given /^there is a "([^\"]*)" with the following:$/ do |model, table|
        model.constantize.create!(table.rows_hash)
      end

  • Uploading to S3 using Curl

    - by Carl Crawley
    Hi All, I'm currently using cURL to upload a file from my server to S3, using AJAX to call the script. So I have the following:

      $fullfilepath = '/server/sitepath/files/' . $_POST['file'];
      $upload_url = 'https://' . $_POST['buckets'] . '.s3.amazonaws.com/';
      $params = array(
          'key' => $_POST['key'],
          'AWSAccessKeyId' => $_POST['AWSAccessKeyId'],
          'acl' => $_POST['acl'],
          'success_action_status' => $_POST['success_action_status'],
          'policy' => $_POST['policy'],
          'signature' => $_POST['signature'],
          'Content-Type' => $_POST['Content-Type'],
          'file' => "@$fullfilepath"
      );
      $ch = curl_init();
      curl_setopt($ch, CURLOPT_VERBOSE, 1);
      curl_setopt($ch, CURLOPT_URL, $upload_url);
      curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
      curl_setopt($ch, CURLOPT_POSTFIELDS, $params);
      $response = curl_exec($ch);
      curl_close($ch);
      echo $response;

    However, when it posts I get the following S3 error, and I'm unsure why, because I'm not passing JSON to it:

      <?xml version="1.0" encoding="UTF-8"?>
      <Error><Code>InvalidPolicyDocument</Code><Message>Invalid Policy: Invalid JSON.</Message><RequestId>B29469C6151BE0E8</RequestId><HostId>BFPk6W2kt1b6hTtx0mEq6dWdN/IhO0gNR5bct//7LAOwJxm1C3PrxS4RPv1blzJ8</HostId></Error>

    I've googled it for the last hour or so and can't seem to figure it out. If I change the order of the array fields, I get a different error, so I believe the order of the posted fields matters somehow. Any help would be much appreciated! C

  • Help me understand dynamic layouts in Sinatra

    - by thermans
    Help me understand this; I'm learning Sinatra (and Rails for that matter, er, and Ruby). Say I'm building a search app. The search form is laid out in one div, and the results will be laid out in another. The search form is rendered into its div by a previous view (maybe from a login form). I want to process the form params, perform the search, and render the results into the results div.

    If I have a single "yield" in the layout and render the divs from different views, the results div erases the search div when it renders. If I define the divs in the default layout and then just render the content, obviously the layout will be messed up: there would have to be two "yields", and I don't think Sinatra supports passing blocks in to yields.

    I tried foca's sinatra-content-for plugin, and that seems closer to what I need, but I can't figure out where to place the "yield_content" statements. If I have this haml in my layout:

      #search
        -# search form
        = yield_content :search
      #results
        -# search results
        = yield_content :results

    ... this in my search view:

      - content_for :search do
        %form{:method => "post"... etc.

    ... and this in the results view:

      - content_for :results do
        %table{:class => 'results'... etc.

    This sort of works, but when I render the results view, the search div is emptied out; I would like it to remain. Am I doing something wrong? How should I set this up?

  • Dependency Injection: How to maintain multiple configurations?

    - by Malax
    Hi StackOverflow, let's assume we've built a system with a DI framework which is working quite fine. This system currently uses JMS to "talk" to other systems not maintained by us. The majority of our customers like the JMS approach and use it according to our specification. The component which does all the messaging is injected with Spring into the rest of the application.

    Now we have a case where one customer cannot implement the JMS solution and wants to use another messaging technology. That's not a problem, because we can simply implement a messaging service using that technology and inject it into the rest of the application. But how are we supposed to handle the deployment and maintenance of the configuration? Since the application uses Spring, I could imagine checking in all the configurations I have for this application, and the system administrator could start the application passing the name of the DI XML file to specify which configuration should be loaded. But... it just doesn't feel right. Are there any solutions available for such cases? What are the best practices you use? I could even imagine more complex scenarios which contain more than one service substitution... Thanks a lot!
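
    A minimal sketch of the "pick the configuration at startup" idea, assuming Spring's ClassPathXmlApplicationContext; the MessagingService interface, the bean id, and the two XML file names are made up for illustration. The administrator selects the wiring with a JVM system property, so nothing about the code changes per customer:

      import org.springframework.context.ApplicationContext;
      import org.springframework.context.support.ClassPathXmlApplicationContext;

      public class Bootstrap {
          public static void main(String[] args) {
              // -Dmessaging.config=other-messaging-context.xml selects the
              // customer-specific wiring; the JMS wiring stays the default.
              String config = System.getProperty("messaging.config", "jms-messaging-context.xml");
              ApplicationContext context = new ClassPathXmlApplicationContext(config);
              MessagingService messaging = (MessagingService) context.getBean("messagingService");
              messaging.start();
          }
      }

    The variation is then confined to one file name chosen at deployment time; every checked-in XML file wires the same "messagingService" bean id to a different implementation.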

  • What is the best way to use Guice and JMock together?

    - by Yishai
    I have started using Guice to do some dependency injection on a project, primarily because I need to inject mocks (using JMock currently) a layer away from the unit test, which makes manual injection very awkward. My question is: what is the best approach for introducing a mock? What I currently do is make a new module in the unit test that satisfies the dependencies, binding them with a provider that looks like this:

      public class JMockProvider<T> implements Provider<T> {
          private T mock;

          public JMockProvider(T mock) {
              this.mock = mock;
          }

          public T get() {
              return mock;
          }
      }

    The mock is passed in the constructor, so a JMock setup might look like this:

      final CommunicationQueue queue = context.mock(CommunicationQueue.class);
      final TransactionRollBack trans = context.mock(TransactionRollBack.class);
      Injector injector = Guice.createInjector(new AbstractModule() {
          @Override
          protected void configure() {
              bind(CommunicationQueue.class).toProvider(new JMockProvider<QuickBooksCommunicationQueue>(queue));
              bind(TransactionRollBack.class).toProvider(new JMockProvider<TransactionRollBack>(trans));
          }
      });
      context.checking(new Expectations() {{
          oneOf(queue).retrieve(with(any(int.class)));
          will(returnValue(null));
          never(trans);
      }});
      injector.getInstance(RunResponse.class).processResponseImpl(-1);

    Is there a better way? I know that AtUnit attempts to address this problem, although I don't see how it auto-magically injects a mock that was created locally like the one above. I'm looking either for a compelling reason why AtUnit is the right answer here (other than its ability to swap DI and mocking frameworks without changing tests), or for a better solution to doing it by hand.
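
    As an aside, Guice can bind an already-constructed object directly with toInstance, which would make the custom provider unnecessary here. A hedged sketch of the same wiring, reusing CommunicationQueue, TransactionRollBack, and the JMock context from the question:

      import com.google.inject.AbstractModule;
      import com.google.inject.Guice;
      import com.google.inject.Injector;

      final CommunicationQueue queue = context.mock(CommunicationQueue.class);
      final TransactionRollBack trans = context.mock(TransactionRollBack.class);

      // Bind the pre-built mocks as instances; Guice hands them out wherever
      // the interfaces are injected, with no Provider subclass involved.
      Injector injector = Guice.createInjector(new AbstractModule() {
          @Override
          protected void configure() {
              bind(CommunicationQueue.class).toInstance(queue);
              bind(TransactionRollBack.class).toInstance(trans);
          }
      });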

  • Why does my Doctrine DBAL query return no results when quoted?

    - by braveterry
    I'm using the Doctrine DataBase Abstraction Layer (DBAL) to perform some queries. For some reason, when I quote a parameter before passing it to the query, I get back no rows; when I pass it unquoted, it works fine. Here's the relevant snippet of the code I'm using:

      public function get($game) {
          load::helper('doctrinehelper');
          $conn = doctrinehelper::getconnection();
          $statement = $conn->prepare('SELECT games.id as id, games.name as name,
                                              games.link_url, games.link_text,
                                              services.name as service_name, image_url
                                       FROM games, services
                                       WHERE games.name = ?
                                         AND services.key = games.service_key');
          $quotedGame = $conn->quote($game);
          load::helper('loghelper');
          $logger = loghelper::getLogger();
          $logger->debug("Quoted Game: $quotedGame");
          $logger->debug("Unquoted Game: $game");
          $statement->execute(array($quotedGame));
          $resultsArray = $statement->fetchAll();
          $logger->debug("Number of rows returned: " . count($resultsArray));
          return $resultsArray;
      }

    Here's what the log shows:

      01/01/11 17:00:13,269 [2112] DEBUG root - Quoted Game: 'Diablo II Lord of Destruction'
      01/01/11 17:00:13,269 [2112] DEBUG root - Unquoted Game: Diablo II Lord of Destruction
      01/01/11 17:00:13,270 [2112] DEBUG root - Number of rows returned: 0

    If I change this line:

      $statement->execute(array($quotedGame));

    to this:

      $statement->execute(array($game));

    I get this in the log:

      01/01/11 16:51:42,934 [2112] DEBUG root - Quoted Game: 'Diablo II Lord of Destruction'
      01/01/11 16:51:42,935 [2112] DEBUG root - Unquoted Game: Diablo II Lord of Destruction
      01/01/11 16:51:42,936 [2112] DEBUG root - Number of rows returned: 1

    Have I fat-fingered something?

  • Does Perl auto-vivify variables used as references in subroutine calls?

    - by FM
    I've declared 2010 to be the year of higher-order programming, so I'm learning Haskell. The introduction has a slick quicksort demo, and I thought, "Hey, that's easy to do in Perl". It turned out to be easier than I expected. Note that I don't have to worry about whether my partitions ($less and $more) are defined; normally you can't use an undefined value as an array reference.

      use strict;
      use warnings;
      use List::MoreUtils qw(part);

      my @data = (5,6,7,4,2,9,10,9,5,1);
      my @sorted = qsort(@data);
      print "@sorted\n";

      sub qsort {
          return unless @_;
          my $pivot = shift @_;
          my ($less, $more) = part { $_ < $pivot ? 0 : 1 } @_;
          # Works, even though $less and $more are sometimes undefined.
          return qsort(@$less), $pivot, qsort(@$more);
      }

    As best I can tell, Perl will auto-vivify a variable that you try to use as a reference, but only if you are passing it to a subroutine. For example, my call to foo() works, but not the attempted print:

      use Data::Dumper qw(Dumper);

      sub foo { print "Running foo(@_)\n" }

      my ($x);
      print Dumper($x);

      # Fatal: Can't use an undefined value as an ARRAY reference.
      # print @$x, "\n";

      # But this works.
      foo(@$x);

      # Auto-vivification: $x is now [].
      print Dumper($x);

    My questions: Am I understanding this behavior correctly? What is the explanation or reasoning behind why Perl does this? Is this behavior explained anywhere in the docs?

  • Is it possible to turn off vDSO on the glibc side?

    - by heroxbd
    I am aware that passing vdso=0 to the kernel can turn this feature off, and that the dynamic linker in glibc can automatically detect and use the vDSO feature from the kernel. Here is my problem: there is a RHEL 5.6 box (kernel 2.6.18-238.el5) at my institution where I only have normal user access, probably suffering from RHEL bug 673616. As I compile a toolchain of linux-headers-3.9/gcc-4.7.2/glibc-2.17/binutils-2.23 on top of it, the gcc bootstrap fails because cc1 from stage2 cannot be run:

      Program received signal SIGSEGV, Segmentation fault.
      0x00002aaaaaaca6eb in ?? ()
      (gdb) info sharedlibrary
      From                To                  Syms Read   Shared Object Library
      0x00002aaaaaaabba0  0x00002aaaaaac3249  Yes (*)     /home/benda/gnto/lib64/ld-linux-x86-64.so.2
      0x00002aaaaacd29b0  0x00002aaaaace2480  Yes (*)     /home/benda/gnto/usr/lib/libmpc.so.3
      0x00002aaaaaef2cd0  0x00002aaaaaf36c08  Yes (*)     /home/benda/gnto/usr/lib/libmpfr.so.4
      0x00002aaaab14f280  0x00002aaaab19b658  Yes (*)     /home/benda/gnto/usr/lib/libgmp.so.10
      0x00002aaaab3b3060  0x00002aaaab3b3b50  Yes (*)     /home/benda/gnto/lib/libdl.so.2
      0x00002aaaab5b87b0  0x00002aaaab5c4bb0  Yes (*)     /home/benda/gnto/usr/lib/libz.so.1
      0x00002aaaab7d0e70  0x00002aaaab80f62c  Yes (*)     /home/benda/gnto/lib/libm.so.6
      0x00002aaaaba70d40  0x00002aaaabb81aec  Yes (*)     /home/benda/gnto/lib/libc.so.6
      (*): Shared library is missing debugging information.

    A simple program segfaults the same way if compiled against glibc-2.17 with xgcc from stage1:

      #include <sys/time.h>
      #include <stdio.h>

      int main()
      {
          struct timeval tim;
          gettimeofday(&tim, NULL);
          return 0;
      }

    Both cc1 and the test program run fine as a normal user on another RHEL 5.5 box (kernel 2.6.18-194.26.1.el5) with gcc-4.7.2/glibc-2.17/binutils-2.23. I cannot simply upgrade the box to a newer RHEL version, nor can I turn vDSO off via sysctl or proc. The question is: is there a way to compile glibc so that it leaves vDSO off unconditionally?

  • Printing is not working in Tomcat when I start the server with services.msc (we cannot print from the client side)

    - by maya
    I am using JasperReports 1.3.1 to print reports, with Eclipse and Tomcat for development. When I run the application from Eclipse, the code below shows the list of printer devices and a print button; if I click the print button, the report prints on the selected device.

      PrintRequestAttributeSet printRequestAttributeSet = new HashPrintRequestAttributeSet();
      printRequestAttributeSet.add(MediaSizeName.ISO_A5);

      PrintServiceAttributeSet printServiceAttributeSet = new HashPrintServiceAttributeSet();

      JRPrintServiceExporter exporter = new JRPrintServiceExporter();
      exporter.setParameter(JRExporterParameter.JASPER_PRINT, jasperPrint);
      exporter.setParameter(JRPrintServiceExporterParameter.PRINT_REQUEST_ATTRIBUTE_SET, printRequestAttributeSet);
      exporter.setParameter(JRPrintServiceExporterParameter.PRINT_SERVICE_ATTRIBUTE_SET, printServiceAttributeSet);
      exporter.setParameter(JRPrintServiceExporterParameter.DISPLAY_PAGE_DIALOG, Boolean.FALSE);
      exporter.setParameter(JRPrintServiceExporterParameter.DISPLAY_PRINT_DIALOG, Boolean.TRUE);
      exporter.exportReport();

    Here I pass jasperPrint as a parameter, which I construct manually. This works well.

    My problem: I created a war file, put it in the Tomcat "Apache Software Foundation\Tomcat 6.0\webapps" directory, and started Tomcat with services.msc. Now the printer details are not displayed, and nothing prints. I added some logging and found that the code hangs at exporter.exportReport(); no code after this line executes. Please suggest how I can print from the client side using Jasper.
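
    One point worth noting: a Tomcat started through services.msc runs as a Windows service in a non-interactive session, so DISPLAY_PRINT_DIALOG has no desktop on which to show its dialog. Below is a hedged sketch of a dialog-free setup; JRPrintServiceExporterParameter.PRINT_SERVICE exists in later JasperReports releases, and its availability in 1.3.1, like the placeholder printer name, is an assumption:

      import javax.print.PrintService;
      import javax.print.PrintServiceLookup;

      // Pick an installed printer on the server by name instead of prompting.
      PrintService chosen = null;
      for (PrintService ps : PrintServiceLookup.lookupPrintServices(null, null)) {
          if (ps.getName().equalsIgnoreCase("OfficePrinterA5")) { // placeholder name
              chosen = ps;
          }
      }

      exporter.setParameter(JRPrintServiceExporterParameter.PRINT_SERVICE, chosen);
      exporter.setParameter(JRPrintServiceExporterParameter.DISPLAY_PAGE_DIALOG, Boolean.FALSE);
      exporter.setParameter(JRPrintServiceExporterParameter.DISPLAY_PRINT_DIALOG, Boolean.FALSE);
      exporter.exportReport();

    Note this prints on a printer the server can see; printing on the browser user's machine needs a client-side step instead, such as sending the user a PDF to print.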

  • WCF service consuming passively issued SAML token

    - by Neillyboy
    What is the best way to pass an existing SAML token from a website that is already authenticated via a passive STS? We have built an identity provider which issues passive claims to the website for authentication, and we have this working. Now we would like to add some WCF services into the mix, calling them from the context of the already-authenticated web application. Ideally we would just like to pass the SAML token on without doing anything to it (i.e. without adding new claims or re-signing). All of the examples I have seen require the ActAs STS implementation, but is this really necessary? It seems a bit bloated for what we want to achieve. I would have thought a simple implementation passing the bootstrap token into the channel, using the CreateChannelActingAs or CreateChannelWithIssuedToken mechanism (and setting ChannelFactory.Credentials.SupportInteractive = false) to call the WCF service with the correct binding (what would that be?), would have been enough. We are using the Fabrikam example code as a reference but, as I say, I think the ActAs functionality there is overkill for what we are trying to achieve.

  • libav/ffmpeg: avcodec_decode_video2() returns -1 when separating demultiplexing and decoding

    - by unbekannt
    I'm using libav (from a C++ program on Linux and Windows) to decode video streams from a file, which works fine for various formats like H264 and MPEG2 using avformat_open_input(), av_read_frame() and avcodec_decode_video2(). Now I have to separate demultiplexing and decoding: one class calls avformat_open_input() and av_read_frame() and then passes the AVPackets into a queue that is read by another class, where I use avcodec_alloc_context3() to get the AVCodecContext needed for avcodec_decode_video2(). I've tested this with an MPEG2 video stream and it works.

    Problems arise when I try to decode an H264 stream: avcodec_decode_video2() always returns -1 and outputs "no frame". I understand that additional data (SPS/PPS) is needed to decode this stream, so I've tried to replicate the original AVCodecContext from the demultiplexer in the decoder, but it won't work. Copying the content of the extradata field and setting all other values that differ from the defaults in the decoder still returns -1; using the same context (i.e. passing the pointer along) results in a crash. I also tried setting CODEC_FLAG2_CHUNKS; avcodec_decode_video2() then always returns packet.size - 3 (??) and frameFinished is never set to 1. In my opinion I have a general problem here that will arise whenever settings from the original AVCodecContext are needed to decode the AVPackets. I'd be grateful for any hints on how to solve this!

  • function objects versus function pointers

    - by kumar_m_kiran
    Hi All, I have two questions related to function objects and function pointers.

    Question 1: Reading about the different uses of the STL sort algorithm, I see that the third parameter can be a function object. Below is an example:

      class State
      {
      public:
          //...
          int population() const;
          float aveTempF() const;
          //...
      };

      struct PopLess : public std::binary_function<State, State, bool>
      {
          bool operator()(const State &a, const State &b) const
          {
              return popLess(a, b);
          }
      };

      sort(union, union + 50, PopLess());

    Now, how does the statement sort(union, union+50, PopLess()) work? PopLess() must be resolved into something like PopLess tempObject.operator(), which would be the same as executing operator() on a temporary object. I see this as passing the return value of the overloaded operation, i.e. bool (as in my example), to sort. So how does sort resolve the third parameter in this case?

    Question 2: Do we derive any particular advantage from using function objects versus function pointers? If we use the function pointer below instead, is there any disadvantage?

      inline bool popLess(const State &a, const State &b)
      {
          return a.population() < b.population();
      }

      std::sort(union, union + 50, popLess); // sort by population

    PS: Both of the above references (including the example) are from the book "C++ Common Knowledge: Essential Intermediate Programming" by Stephen C. Dewhurst. I was unable to decode the topic's content, so I have posted here for help. Thanks in advance.

  • Automating Hudson builds with ant throws a 403

    - by Christopher Dancy
    We have a Hudson server which deploys builds, and a few services that should be able to remotely tell Hudson to deploy a certain build. These services use ant. I'm trying to get this working, but I keep getting a 403 response when I supply a build number, like so:

      <ac:post to="http://hostname:8080/hudson/job/test_release_indexes/build?" verbose="true" wantresponse="true">
          <prop name="token" value="indexes"/>
          <prop name="BUILDNUMBER" value="0354"/>
      </ac:post>

    This throws the 403. I've also tried passing properties for the username and password, like so:

      <ac:post to="http://srulesre2:8080/hudson/job/test_dartmouth_indexes/build?" verbose="true" wantresponse="true">
          <prop name="token" value="indexes"/>
          <prop name="BUILDNUMBER" value="0354"/>
          <prop name="username" value="test"/>
          <prop name="password" value="test"/>
      </ac:post>

    I've tried a hundred different variations on username and password, like j_username and j_password, or user and pass, but nothing works; I keep getting the same 403. The username and password are valid, because I can log in manually with admin privileges. Any ideas would be great.
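
    A hedged observation: username/password properties in the POST body are not how Hudson authenticates remote requests; it generally expects HTTP Basic credentials (or a user API token) in the Authorization header, and an unauthenticated trigger is a common cause of a 403. A sketch of the same trigger in plain Java under that assumption, reusing the question's URL, token, and test:test credentials:

      import java.io.OutputStream;
      import java.net.HttpURLConnection;
      import java.net.URL;
      import java.util.Base64;

      public class TriggerBuild {
          public static void main(String[] args) throws Exception {
              URL url = new URL("http://hostname:8080/hudson/job/test_release_indexes/build");
              HttpURLConnection conn = (HttpURLConnection) url.openConnection();
              conn.setRequestMethod("POST");
              // Credentials go in the Authorization header, not in the form body.
              String auth = Base64.getEncoder().encodeToString("test:test".getBytes("UTF-8"));
              conn.setRequestProperty("Authorization", "Basic " + auth);
              conn.setDoOutput(true);
              try (OutputStream out = conn.getOutputStream()) {
                  out.write("token=indexes&BUILDNUMBER=0354".getBytes("UTF-8"));
              }
              System.out.println("HTTP " + conn.getResponseCode());
          }
      }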

  • Why does XPath.selectNodes(context) always use the whole document in JDOM?

    - by Simeon
    Hi, I'm trying to run the same query in several different contexts, but I always get the same result. This is an example xml:

      <root>
        <p>
          <r>
            <t>text</t>
          </r>
        </p>
        <t>text2</t>
      </root>

    This is what I'm doing:

      final XPath xpath = XPath.newInstance("//t");
      List<Element> result = xpath.selectNodes(thisIsThePelement);
      // and I've debugged it, it really is the <p> element

    I always get both <t> elements in the result list, but I need just the <t> inside the <p> element I'm passing to the XPath object. Any ideas would be of great help, thanks.
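
    For reference, a sketch of the usual explanation, assuming JDOM 1.x: an XPath starting with // is absolute and searches the whole document no matter which context node is passed, while .//t is evaluated relative to the context element given to selectNodes:

      import java.util.List;
      import org.jdom.Element;
      import org.jdom.xpath.XPath;

      // ".//t" restricts the search to descendants of the context node,
      // so only the <t> under the passed-in <p> element is returned.
      final XPath relative = XPath.newInstance(".//t");
      List<Element> result = (List<Element>) relative.selectNodes(thisIsThePelement);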

  • Running commands over ssh with Java

    - by Ichorus
    Scenario: I'd like to run commands on remote machines from a Java program over ssh (I am using OpenSSH on my development machine). I'd also like to make the ssh connection by passing the password, rather than setting up keys as I would with 'expect'.

    Problem: When trying to do the 'expect'-like password login, the Process created with ProcessBuilder cannot seem to see the password prompt. When running regular non-ssh commands (e.g. 'ls'), I can get the streams and interact with them just fine. I am combining standard error and standard out into one stream with redirectErrorStream(true), so I am not missing the prompt in standard error. When I run ssh with the '-v' option, I see all of the logging in the stream, but I do not see the prompt.

    This is my first time trying to use ProcessBuilder for something like this. I know it would be easier to use Python, Perl, or good ol' expect, but my boss wants to use what we get back (remote log files and running scripts) within an existing Java program, so I am kind of stuck. Thanks in advance for the help!
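
    A hedged note on why the prompt never shows up: OpenSSH reads the password from the controlling terminal (/dev/tty), not from the stdin/stdout pipe, so a ProcessBuilder child will never see it on any stream. The usual workaround is an ssh library that performs password authentication in-process. A sketch using the JSch library, with the host, user, password, and command as placeholder values:

      import com.jcraft.jsch.ChannelExec;
      import com.jcraft.jsch.JSch;
      import com.jcraft.jsch.Session;
      import java.io.BufferedReader;
      import java.io.InputStreamReader;

      public class RemoteCommand {
          public static void main(String[] args) throws Exception {
              Session session = new JSch().getSession("user", "remote.example.com", 22);
              session.setPassword("secret");                     // password auth, no key setup
              session.setConfig("StrictHostKeyChecking", "no");  // demo only: trusts unknown hosts
              session.connect();

              ChannelExec channel = (ChannelExec) session.openChannel("exec");
              channel.setCommand("ls -l /var/log");
              BufferedReader out = new BufferedReader(
                      new InputStreamReader(channel.getInputStream()));
              channel.connect();

              String line;
              while ((line = out.readLine()) != null) {
                  System.out.println(line); // remote command output
              }
              channel.disconnect();
              session.disconnect();
          }
      }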

  • Ruby on Rails: How to use a local variable in a collection_select

    - by mmacaulay
    I have a partial view which I'm passing a local variable into:

      <%= render :partial => "products/product_row", :locals => { :product => product } %>

    These are rows in a table, and I want each row to have a <select> for product categories:

      <%= collection_select(:product, :category_id, @current_user.categories, :id, :name,
                            options = { :prompt => "-- Select a category --" },
                            html_options = { :id => "", :class => "product_category" }) %>

    (Note: the id = "" is there because collection_select tries to give all these select elements the same id.)

    The problem is that I want product.category to be selected by default, and this doesn't work unless I have an instance variable @product. I can't set one in the controller, because this is a collection of products. One way I was able to get around this was to put this line just before the collection_select:

      <% @product = product %>

    But this seems very hacky, and would be a problem if I ever wanted an actual instance variable @product in the controller. I guess one workaround would be to name the instance variable something more specific, like @product_select_tmp, in hopes of not interfering with anything declared in the controller. That still seems very hacky, though, and I'd prefer a cleaner solution. Surely there must be a way to have collection_select use a local variable instead of an instance variable. Note that I've tried a few different ways of calling collection_select with no success:

      <%= collection_select(product, ...
      <%= collection_select('product', ...

    etc. Any help greatly appreciated!

  • Displaying tree path of record in SQL Server 2005

    - by jskiles1
    An example of my tree table is ([id] is an identity):

      [id], [parent_id], [path]
      1,    NULL,        1
      2,    1,           1-2
      3,    1,           1-3
      4,    3,           1-3-4

    My goal is to query quickly for multiple rows of this table and view the full path of each node from its root, through its superiors, down to itself. The ultimate question is: should I generate this path on insert and maintain it in its own column, or generate it at query time to save disk space? I guess it depends on whether this table is write-heavy or read-heavy. I've been contemplating several approaches to using the "path" characteristic of this parent/child relationship and I just can't seem to settle on one. The "path" is simply for display purposes and serves absolutely no purpose other than that. Here is what I have done to implement this "path":

      1. AFTER INSERT TRIGGER - requires passing a NULL path to the insert, then updating the path for the record at the inserted row's identity.
      2. INSTEAD OF INSERT TRIGGER - does not require the insert to pass a NULL path, but does require the trigger to insert with a NULL path and then update the path for the record at SCOPE_IDENTITY().
      3. STORED PROCEDURE - requires all inserts into this table to be done through the procedure implementing the trigger logic.
      4. VIEW - requires building the path in the view.

    Options 1 and 2 seem annoying if massive amounts of data are entered at once. Option 3 seems annoying because all inserts must go through the procedure in order to have a valid path populated. Options 1, 2, and 3 require maintaining a path column on the table. Option 4 removes all the limitations above, but requires the view to perform the path logic and requires using the view whenever a path is to be displayed. I have successfully implemented all of the above approaches, and I'm mainly looking for some advice: am I way off the mark here, or are any of the above acceptable? Each has its advantages and disadvantages.

  • Boost unit testing: memory reuse causes tests that should fail to pass

    - by Knyphe
    We have started using the Boost unit testing library on a large existing code base, and I have run into some trouble with unit tests incorrectly passing, seemingly due to the reuse of memory on the stack. Here is my situation:

      BOOST_AUTO_TEST_CASE(test_select_base_instantiation_default)
      {
          SelectBase selectBase();
          BOOST_CHECK_EQUAL( selectBase.getSelectType(), false );
          BOOST_CHECK_EQUAL( selectBase.getTypeName(), _T("") );
          BOOST_CHECK_EQUAL( selectBase.getEntityType(), -1 );
          BOOST_CHECK_EQUAL( selectBase.getDataPos(), -1 );
      }

      BOOST_AUTO_TEST_CASE(test_select_base_instantiation_params)
      {
          SelectBase selectBase(true, _T("abc"));
          BOOST_CHECK_EQUAL( selectBase.getSelectType(), false );
          BOOST_CHECK_EQUAL( selectBase.getTypeName(), _T("abc") );
          BOOST_CHECK_EQUAL( selectBase.getEntityType(), -1 );
          BOOST_CHECK_EQUAL( selectBase.getDataPos(), -1 );
      }

    The first test passed correctly, initializing all the variables. The constructor in the second unit test did not correctly set EntityType or DataPosition, but the unit test still passed. I was able to get it to fail by placing some variables on the stack in the second test, like so:

      BOOST_AUTO_TEST_CASE(test_select_base_instantiation_params)
      {
          int a, b;
          SelectBase selectBase(true, _T("abc"));
          BOOST_CHECK_EQUAL( selectBase.getSelectType(), false );
          BOOST_CHECK_EQUAL( selectBase.getTypeName(), _T("abc") );
          BOOST_CHECK_EQUAL( selectBase.getEntityType(), -1 );
          BOOST_CHECK_EQUAL( selectBase.getDataPos(), -1 );
      }

    If there is only one int, only the getDataPos CHECK_EQUAL fails, but if there are two, both EntityType and DataPos fail, so it seems pretty clear that this is an issue with the variables being created on the same stack memory or some such. Is there a good way to clear the memory between unit tests, or am I potentially using the library incorrectly or writing bad tests? Any help would be appreciated.

  • In IE8, jquery-ui's dialog sets the height of its contents to zero. How can I fix this?

    - by brahn
    I am using jQuery UI's dialog widget to render a modal dialog in my web application. I do this by passing the ID of the desired DOM element into the following function:

      var setupDialog = function (eltId) {
          $("#" + eltId).dialog({
              autoOpen: false,
              width: 610,
              minWidth: 610,
              height: 450,
              minHeight: 200,
              modal: true,
              resizable: false,
              draggable: false,
          });
      };

    Everything works just fine in Firefox, Safari, and Chrome. However, in IE 8, when the dialog is opened only the div.ui-dialog-titlebar is visible; the div.ui-dialog-content is not. The problem seems to be that while in the modern browsers the div.ui-dialog-content has a specific height set in its style, i.e. after opening the dialog the resulting HTML is:

      <div class="ui-dialog-content ui-widget-content" id="invite-friends-dialog"
           style="width: auto; min-height: 198px; height: 448px">...</div>

    in IE8 the height style attribute is set to zero, and the resulting HTML is:

      <div class="ui-dialog-content ui-widget-content" id="invite-friends-dialog"
           style="min-height: 0px; width: auto; height: 0px">...</div>

    What do I need to do to get the height (and min-height) style attributes set correctly?

  • How to append a row to a TableViewSection in Titanium?

    - by Mike Trpcic
    I'm developing an iPhone application in Titanium and need to append a row to a particular TableViewSection. I can't do this on page load, as it's done dynamically by the user throughout the lifecycle of the application. The documentation says that TableViewSection has an add method which takes two arguments, but I can't make it work. Here's my existing code:

      for (var i = 0; i <= product_count; i++) {
          productsTableViewSection.add(
              Ti.UI.createTableViewRow({
                  title: 'Testing...'
              })
          );
      }

    That passes just one argument, and it causes Titanium to die with an uncaught exception:

      2010-04-26 16:57:18.056 MyApplication[72765:207] *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Invalid update: invalid number of rows in section 2. The number of rows contained in an existing section after the update (2) must be equal to the number of rows contained in that section before the update (1), plus or minus the number of rows inserted or deleted from that section (0 inserted, 0 deleted).'
      2010-04-26 16:57:18.056 MyApplication[72765:207] Stack: (

    The exception suggests it did add the row, but it's not allowed to for some reason. Since the documentation says that TableViewSection takes a "view" and a "row", I tried the following:

      for (var i = 0; i <= product_count; i++) {
          productsTableViewSection.add(
              Ti.UI.createView({}),
              Ti.UI.createTableViewRow({
                  title: 'Testing...'
              })
          );
      }

    This doesn't throw the exception, but it gives a [WARN]:

      [WARN] Invalid type passed to function. expected: TiUIViewProxy, was: TiUITableViewRowProxy in -[TiUITableViewSectionProxy add:] (TiUITableViewSectionProxy.m:62)

    TableViewSections don't seem to support methods like appendRow or insertRow, so I don't know where else to go with this. I've looked through the KitchenSink app, but I could not find any examples of adding a row to a TableViewSection. Any help is appreciated.

  • Returning true or error message in Ruby

    - by seaneshbaugh
    I'm wondering if writing functions like this is considered good or bad form:

      def test(x)
        if x == 1
          return true
        else
          return "Error: x is not equal to one."
        end
      end

    And then to use it we do something like this:

      result = test(1)
      if result != true
        puts result
      end

      result = test(2)
      if result != true
        puts result
      end

    This displays the error message only for the second call to test. I'm considering this approach because, in a Rails project I'm working on, my controller code calls a model's instance methods, and if something goes wrong I want the model to return the error message to the controller; the controller takes that error message, puts it in the flash, and redirects. Kinda like this:

      def create
        @item = Item.new(params[:item])
        if !@item.nil?
          result = @item.save_image(params[:attachment][:file])
          if result != true
            flash[:notice] = result
            redirect_to(new_item_url) and return
          end
        end
        # and so on...
      end

    That way I'm not constructing the error messages in the controller, merely passing them along, because I really don't want the controller to be concerned with what the save_image method itself does, just whether or not it worked. It makes sense to me, but I'm curious whether this is considered a good or bad way of writing methods. Keep in mind I'm asking this in the most general sense, pertaining mostly to Ruby; it just happens that I'm doing this in a Rails project, and the actual logic of the controller really isn't my concern.

  • Setting an Excel Range with an Array using Python and comtypes?

    - by technomalogical
    Using comtypes to drive Excel from Python, it seems some magic is happening behind the scenes that is not converting tuples and lists to VARIANT types:

      # Range("C14:D21") has values.
      # Setting the Value on the Range with a VARIANT should work, but
      # a list or tuple does not seem to get converted properly.
      >>> from comtypes.client import CreateObject
      >>> xl = CreateObject("Excel.application")
      >>> xl.Workbooks.Open(r'C:\temp\my_file.xlsx')
      >>> xl.Visible = True
      >>> vals = tuple([(x, y) for x, y in zip('abcdefgh', xrange(8))])
      # creates:
      # (('a', 0), ('b', 1), ('c', 2), ('d', 3), ('e', 4), ('f', 5), ('g', 6), ('h', 7))
      >>> sheet = xl.Workbooks[1].Sheets["Sheet1"]
      >>> sheet.Range["C14","D21"].Value()
      (('foo',1),('foo',2),('foo',3),('foo',4),('foo',6),('foo',6),('foo',7),('foo',8))
      >>> sheet.Range["C14","D21"].Value[()] = vals
      # No error, but this blanks out the cells in the range.

    According to the comtypes docs:

      When you pass simple sequences (lists or tuples) as VARIANT parameters, the COM server will receive a VARIANT containing a SAFEARRAY of VARIANTs with the typecode VT_ARRAY | VT_VARIANT.

    This seems to be in line with what MSDN says about passing an array to a Range's Value. I also found a page showing something similar in C#. Can anybody tell me what I'm doing wrong?

    EDIT: I've come up with a simpler example that performs the same way (in that it does not work):

      >>> from comtypes.client import CreateObject
      >>> xl = CreateObject("Excel.application")
      >>> xl.Workbooks.Add()
      >>> sheet = xl.Workbooks[1].Sheets["Sheet1"]
      # At this point, I manually typed values into the range A1:B3.
      >>> sheet.Range("A1","B3").Value()
      ((u'AAA', 1.0), (u'BBB', 2.0), (u'CCC', 3.0))
      # Using a generator expression, per @Mike's comment;
      # however, this still blanks out my range :(
      >>> sheet.Range("A1","B3").Value[()] = [(x, y) for x, y in zip('xyz', xrange(3))]
