Search Results

Search found 20140 results on 806 pages for 'output formatting'.


  • DataContractJsonSerializer produces list of hashes instead of hash

    - by Jacques
    I would expect a Dictionary object of the form:

        Dictionary<string, string> dict = new Dictionary<string, string>()
        {
            { "blah", "bob" },
            { "blahagain", "bob" }
        };

    to serialize into JSON in the form of:

        { "blah": "bob", "blahagain": "bob" }

    NOT:

        [ { "key": "blah", "value": "bob" }, { "key": "blahagain", "value": "bob" } ]

    What is the reason for what appears to be a monstrosity of a generic attempt at serializing collections? The DataContractJsonSerializer uses the ISerializable interface to produce this thing. It seems as though somebody has taken the XML output from ISerializable and mangled this out of it. Is there a way to override the default serialization used by .NET here? Could I just derive from Dictionary and override the serialization methods? Posting to hear of any caveats or suggestions people might have.
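
    A minimal sketch of one way around this, assuming the project can target .NET 4.5 or later: DataContractJsonSerializerSettings exposes UseSimpleDictionaryFormat, which switches dictionaries to the flat JSON-object form. (On earlier frameworks, System.Web.Script.Serialization.JavaScriptSerializer emits the same flat form for Dictionary<string, string>.)

        // Sketch only: serialize a dictionary as a flat JSON object instead of
        // an array of key/value pairs (requires .NET 4.5+).
        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Runtime.Serialization.Json;
        using System.Text;

        class DictionaryJsonDemo
        {
            static void Main()
            {
                var dict = new Dictionary<string, string>
                {
                    { "blah", "bob" },
                    { "blahagain", "bob" }
                };

                var settings = new DataContractJsonSerializerSettings
                {
                    UseSimpleDictionaryFormat = true
                };
                var serializer = new DataContractJsonSerializer(dict.GetType(), settings);

                using (var stream = new MemoryStream())
                {
                    serializer.WriteObject(stream, dict);
                    // Prints: {"blah":"bob","blahagain":"bob"}
                    Console.WriteLine(Encoding.UTF8.GetString(stream.ToArray()));
                }
            }
        }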

    Read the article

  • How should one import large amounts of data for FIT/Fitnesse tests?

    - by Lachlan
    We have a scheduling engine with large amounts of test data to test all the scenarios, so test automation is critical. We're currently hoping to use FIT/Fitnesse. However, a single test has quite a large table of test data, so it doesn't fit very well into the mould of "two or three inputs, one or more outputs" that Fitnesse uses in its examples. Hopefully the other functionality of Fitnesse makes it worth using.

    I hear that there is a way to initialize an application for a FIT test with an Excel spreadsheet - not the Spreadsheet to Fitnesse function, mind you - but I haven't been able to find it so far. Once the whole spreadsheet is loaded into the application, and the application does its thing, we plan to compare either a number of output rows, or perhaps just the last row, to see if the test passes.

    The application is currently pulling test data from a database for manual tests, but writing to a database and then initializing from it is not preferred because of the performance impact. The application is written in C#.

    Read the article

  • Asp.net RenderControl method not rendering autopostback for dropdownlist

    - by Tanya
    Hi all, I am a bit confused as to why ASP.NET does not render the postback wiring for a DropDownList with AutoPostBack set to True when using the RenderControl method. For example:

        Dim sw As New IO.StringWriter
        Dim tw As New HtmlTextWriter(sw)
        Dim table As New Table
        table.Rows.Add(New TableRow)
        Dim tr As TableRow = table.Rows(0)
        tr.Cells.Add(New TableCell)
        Dim tc As TableCell = tr.Cells(0)
        Dim ddlMyValues As New DropDownList
        ddlMyValues.ID = "ddl1"
        ddlMyValues.Items.Add("Test1")
        ddlMyValues.Items.Add("Test2")
        ddlMyValues.Items.Add("Test3")
        ddlMyValues.AutoPostBack = True
        tc.Controls.Add(ddlMyValues)
        table.RenderControl(tw)
        Debug.WriteLine(sw.ToString)

    My output renders the drop-down list without the onchange="javascript:setTimeout('__doPostBack(\'ddl1\',\'\')', 0)" that is generated by ASP.NET when using the DropDownList normally. Is there a workaround for this?
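
    A hedged C# sketch of one common workaround: the __doPostBack wiring is only emitted when the control renders inside a real Page with a form and a full page lifecycle, so one approach is to host the control in a throwaway Page/HtmlForm and render it via Server.Execute instead of calling RenderControl directly.

        // Sketch only: render a control with its AutoPostBack wiring by running it
        // through a real page lifecycle. Assumes HttpContext.Current is available.
        using System.IO;
        using System.Web;
        using System.Web.UI;
        using System.Web.UI.HtmlControls;

        static class ControlRenderer
        {
            public static string RenderWithPostBack(Control control)
            {
                var page = new Page();
                var form = new HtmlForm();
                page.Controls.Add(form);
                form.Controls.Add(control);

                var writer = new StringWriter();
                // Runs the page lifecycle, so ClientScript emits __doPostBack for AutoPostBack controls.
                HttpContext.Current.Server.Execute(page, writer, false);
                return writer.ToString();
            }
        }

    Calling ControlRenderer.RenderWithPostBack(table) in place of table.RenderControl(tw) should then include the onchange attribute in the captured markup.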

    Read the article

  • The jsf2 h:outputText tag is not formatting the h:outputText with the MessageFormat

    - by Thiago Diniz
    The JSF2 h:outputText tag is not formatting the value with MessageFormat.

    My faces-config:

        <application>
            <resource-bundle>
                <base-name>Messages_pt_BR</base-name>
                <var>bundle</var>
            </resource-bundle>
        </application>

    My resource bundle:

        ...
        EventPageTitle=Event: {0}
        ...

    My JSF2 XHTML:

        <h:outputText value="#{bundle.EventPageTitle}">
            <f:param value="#{seuTicketEventController.selected.eventName}"/>
        </h:outputText>

    The output:

        Event: {0}

    Does anyone know how to solve this problem? I have searched everywhere but I can't find a solution!

    Read the article

  • Maintaining both sides of self-referential many-to-many relationship in Grails domain object

    - by Ali G
    I'm having some problems getting a many-to-many relationship working in Grails. Is there anything obviously wrong with the following:

        class Person {
            static hasMany = [friends: Person]
            static mappedBy = [friends: 'friends']
            String name
            List friends = []
            String toString() { return this.name }
        }

        class BootStrap {
            def init = { servletContext ->
                Person bob = new Person(name: 'bob').save()
                Person jaq = new Person(name: 'jaq').save()
                jaq.friends << bob
                println "Bob's friends: ${bob.friends}"
                println "Jaq's friends: ${jaq.friends}"
            }
        }

    I'd expect Bob to be friends with Jaq and vice-versa, but I get the following output at startup:

        Running Grails application..
        Bob's friends: []
        Jaq's friends: [Bob]

    (I'm using Grails 1.2.0)

    Read the article

  • Embedded views and resources (mvc)

    - by Steve Ward
    Hi, I've embedded several views in a library so that I can re-use them across projects, using this method, which works OK: http://www.wynia.org/wordpress/2008/12/aspnet-mvc-plugins/

    But one view uses a JavaScript file. I've tried marking this as an embedded resource, adding it to AssemblyInfo.cs, and then referencing the resource using:

        <%= ClientScript.GetWebResourceUrl(this.GetType(), "FullPath.FileName.js") %>

    This literally displays the following output in the view:

        WebResource.axd?d=nUxqfqAUQLabLU54W

    I think this is because I'm trying to refer to an embedded resource from an embedded resource. Help appreciated as I'm going round in circles.. Steve
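
    A hedged sketch of the usual checklist for GetWebResourceUrl problems (all names below are illustrative, not from the original post): the script's resource name must be the embedding assembly's default namespace plus the folder path, the assembly needs a WebResource attribute for that exact name, and the Type passed to GetWebResourceUrl must live in the assembly that embeds the script - this.GetType() inside a compiled view usually points at the web application's assembly instead.

        // Sketch only. "MyPluginLibrary" and "SomePluginType" are hypothetical -
        // substitute the plugin library's namespace and any public type it defines.
        using System.Web.UI;

        [assembly: WebResource("MyPluginLibrary.Scripts.FileName.js", "text/javascript")]

        namespace MyPluginLibrary
        {
            public class SomePluginType { }   // any public type defined in the plugin assembly works

            public static class EmbeddedScriptHelper
            {
                // Resolve the URL against a type defined in the plugin assembly, so that
                // WebResource.axd serves the resource from the right place.
                public static string ScriptUrl(Page page)
                {
                    return page.ClientScript.GetWebResourceUrl(
                        typeof(SomePluginType),
                        "MyPluginLibrary.Scripts.FileName.js");
                }
            }
        }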

    Read the article

  • Using chunked encoding in a POST request to an asmx web service on IIS 6 generates a 404

    - by user175869
    Hi, I'm using a CXF client to communicate with a .net web service running on IIS 6. This request (anonymised):

        POST /EngineWebService_v1/EngineWebService_v1.asmx HTTP/1.1
        Content-Type: text/xml; charset=UTF-8
        SOAPAction: "http://.../Report"
        Accept: */*
        User-Agent: Apache CXF 2.2.5
        Cache-Control: no-cache
        Pragma: no-cache
        Host: uat9.gtios.net
        Connection: keep-alive
        Transfer-Encoding: chunked

    followed by 7 chunks of 4089 bytes and one of 369 bytes, generates the following output after the first chunk has been sent:

        HTTP/1.1 404 Not Found
        Content-Length: 103
        Date: Wed, 10 Feb 2010 13:00:08 GMT
        Connection: Keep-Alive
        Content-Type: text/html

    Anyone know how to get IIS to accept chunked input for a POST? Thanks

    Read the article

  • Error when trying to create a faceted plot in ggplot2

    - by John Horton
    I am trying to make a faceted plot in ggplot2 of the coefficients on the regressors from two linear models with the same predictors. The data frame I constructed is this:

        > r.together
                   reg         coef        se      y
        1  (Intercept)  5.068608671 0.6990873 Labels
        2     goodTRUE  0.310575129 0.5228815 Labels
        3    indiaTRUE -1.196868662 0.5192330 Labels
        4    moneyTRUE -0.586451273 0.6011257 Labels
        5     maleTRUE -0.157618168 0.5332040 Labels
        6  (Intercept)  4.225580743 0.6010509 Bonus
        7     goodTRUE  1.272760149 0.4524954 Bonus
        8    indiaTRUE -0.829588862 0.4492838 Bonus
        9    moneyTRUE -0.003571476 0.5175601 Bonus
        10    maleTRUE  0.977011737 0.4602726 Bonus

    The "y" column is a label for the model, reg are the regressors, and coef and se are what you would think. I want to plot:

        g <- qplot(reg, coef, facets=.~y, data = r.together) + coord_flip()

    But when I try to display the plot, I get:

        > print(g)
        Error in names(df) <- output :
          'names' attribute [2] must be the same length as the vector [1]

    What's strange is that qplot(reg, coef, colour=y, data = r.together) + coord_flip() plots as you would expect.

    Read the article

  • boost's regex won't compile

    - by myeviltacos
    Hi everyone. I am using boost 1.45.0 on Ubuntu with Code::Blocks as my IDE, and I can't get basic_regex.hpp to compile. I'm pretty sure I set up boost correctly, because I can compile programs using boost::format without any errors. But I'm getting this annoying error, and I don't know how to get rid of it. The code that is provoking the error:

        boost::regex e("\"http:\\\\/\\\\/localhostr.com\\\\/files\\\\/.+?\"");

    Compiler output (GCC):

        obj/Debug/main.o In function `boost::basic_regex<char, boost::regex_traits<char, boost::cpp_regex_traits<char> > >::assign(char const*, char const*, unsigned int)'
        /home/neal/Documents/boost_1_45_0/boost/regex/v4/basic_regex.hpp|379| undefined reference to `boost::basic_regex<char, boost::regex_traits<char, boost::cpp_regex_traits<char> > >::do_assign(char const*, char const*, unsigned int)'|
        ||=== Build finished: 1 errors, 0 warnings ===|

    Did I miss a step when setting up boost, or should I downgrade to another version of boost?

    Read the article

  • Writing an Iron Python debugger

    - by Kragen
    As a learning exercise I'm writing myself a simple extension / plugin / macro framework using IronPython - I've gotten the basics working, but I'd like to add some basic debugging support to make my script editor easier to work with. I've been hunting around on the internet a bit and I've found a couple of good resources on writing managed debuggers (including Mike Stall's excellent .Net Debugging blog and the MSDN documentation on the CLR Debugging API). I understand that IronPython is essentially IL, however apart from that I'm a tad lost on how to get started. In particular:

    - Are there any significant differences between debugging a dynamic language (such as IronPython) and a static one (such as C#)?
    - Do I need to execute my script in a special way to get IronPython to output suitable debugging information?
    - Is debugging a script running inside the current process going to cause deadlocks, or does IronPython execute my script in a child process?
    - Am I better off looking into how to produce a simple C# debugger first to get the general idea?

    (I'm not interested in the GUI aspect of making a debugger for now - I've already got a pretty good idea of how this might work)
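
    On the second question, a hedged sketch: when hosting IronPython 2.x through the DLR hosting API, the engine can be created with the "Debug" option so that generated code carries debug information that a managed debugger can map back to the .py source. (The hosting calls below are from the standard IronPython hosting API; the script path is hypothetical.)

        // Sketch only: create a debuggable IronPython engine (IronPython 2.x hosting API).
        using System.Collections.Generic;
        using IronPython.Hosting;
        using Microsoft.Scripting.Hosting;

        class DebuggableScriptHost
        {
            static void Main()
            {
                // "Debug" asks the engine to emit debuggable code, so a managed
                // debugger can set breakpoints against the original script lines.
                var options = new Dictionary<string, object> { { "Debug", true } };
                ScriptEngine engine = Python.CreateEngine(options);
                engine.ExecuteFile("plugin.py");   // hypothetical script path
            }
        }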

    Read the article

  • Resizing JPEG image during decoding

    - by Thomas
    I'm working on a program that creates thumbnails of JPEG images on the fly. Now I was thinking: since a JPEG image is built from 8x8-pixel blocks (Wikipedia has a great explanation), would it be possible to skip part of the decoding? Let's say that my thumbnails are at least 8 times smaller than the original image. We could then map each 8x8 block in the input file to 1 pixel in the decoding output, by including only the constant term of the discrete cosine transform. Most of the image data can be discarded right away, and need not be processed. Moreover, the memory usage is reduced by a factor of 64. I don't want to implement this from scratch; that'll easily take a week. Is there any code out there that can do this? If not, is this because this approach isn't worthwhile, or simply because nobody has thought of it yet?
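
    Not the DC-coefficient trick itself, but a hedged C# sketch of one off-the-shelf route: WPF's WIC-based decoder can scale during decode when DecodePixelWidth is set before EndInit, which avoids materialising the full-resolution image just to throw most of it away. (Whether the codec skips the full IDCT internally is an implementation detail of the decoder.)

        // Sketch only: decode a JPEG directly at thumbnail size (WPF / System.Windows.Media.Imaging).
        using System;
        using System.Windows.Media.Imaging;

        static class ThumbnailLoader
        {
            public static BitmapSource Load(string path, int thumbnailWidth)
            {
                var bitmap = new BitmapImage();
                bitmap.BeginInit();
                bitmap.UriSource = new Uri(path, UriKind.RelativeOrAbsolute);
                bitmap.DecodePixelWidth = thumbnailWidth;        // e.g. originalWidth / 8
                bitmap.CacheOption = BitmapCacheOption.OnLoad;   // decode now, release the file
                bitmap.EndInit();
                bitmap.Freeze();
                return bitmap;
            }
        }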

    Read the article

  • Multithreading/Parallel Processing in PHP

    - by manyxcxi
    I have a PHP script that will generate a report using PHPExcel from data queried from a MySQL DB. Currently, it is linear in processing in that it gets the data back from MySQL, reads in the Excel template, writes the data to the template, then outputs it. I have optimized the code to the point that the data is only iterated over once, and there is very little processing done on the PHP side. The query returns hundreds of lines in less than .001 seconds, so it is running fast enough. After some timing I have found my bottlenecks to be (surprise, surprise) reading the template and writing the output.

    I would like to do this:

    - Spawn a thread/process to read the template
    - Spawn a thread/process to fetch the data
    - Return back to the parent thread - the parent thread will wait until both are complete
    - Proceed on as normal

    My main questions are: is this possible, and is it worth it? If yes to both, how would you tackle it? Also, it is PHP 5 on CentOS.

    Read the article

  • Qt, unit testing and mock objects.

    - by Eye of Hell
    Hello. The Qt framework has built-in support for testing via the QtTest package. Unfortunately, I didn't find any facilities in it that can assist in creating mock objects. Qt signals and slots offer a natural way to create unit-testing-friendly units with input (slots) and output (signals). But is there any easy way to test that calling a specified slot on an object will result in emitting the correct signals with the correct arguments? Of course I can manually create mock objects and connect them to the objects being tested, but it's a lot of code. Maybe there are some techniques that allow mock object creation to be somehow automated while unit-testing Qt-based applications?

    Read the article

  • ClassNotFoundException error in implementing Bayesian algorithm in Apache Mahout on Hadoop

    - by Shweta
    Hi, I have a problem in executing the Bayesian algorithm in Mahout. I built it with Maven and the job file is in the target directory. When run from the terminal using Hadoop, I'm getting the ClassNotFoundException error. What should be done?

        $HADOOP_HOME/bin/hadoop jar mahout-core-0.3-SNAPSHOT.job org.apache.mahout.classifier.bayes.mapreduce.bayes.bayesdriver -i test -o output

        Exception in thread "main" java.lang.ClassNotFoundException: org.apache.mahout.classifier.bayes.mapreduce.bayes.bayesdriver
            at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:252)
            at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320)
            at java.lang.Class.forName0(Native Method)
            at java.lang.Class.forName(Class.java:247)
            at org.apache.hadoop.util.RunJar.main(RunJar.java:149)

    Read the article

  • Html helper to show display name attribute without label

    - by Pedre
    I have this:

        [Display(Name = "Empresa")]
        public string Company { get; set; }

    In my aspx I have:

        <th><%: Html.LabelFor(model => model.Company) %></th>

    And this generates:

        <th><label for="Company">Empresa</label></th>

    Are there any html helper extensions to only show the display attribute without a label, only plain text? My desired output is this:

        <th>Empresa</th>

    Thanks!

    EDIT: I tried DisplayFor or DisplayTextFor as suggested, but they are not valid because they generate:

        <th>Amazon</th>

    They return the value of the property... I want the name from the Display attribute.
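
    A hedged sketch of one approach: newer MVC versions ship Html.DisplayNameFor for exactly this, and on older versions a small helper can read the name straight from ModelMetadata (the helper name below is made up for illustration).

        // Sketch only: render just the display name of a property, without a <label>.
        // Intended for MVC 2/3 where Html.DisplayNameFor is not available.
        using System;
        using System.Linq.Expressions;
        using System.Web;
        using System.Web.Mvc;

        public static class DisplayNameHelpers
        {
            public static MvcHtmlString DisplayNameOnlyFor<TModel, TValue>(
                this HtmlHelper<TModel> html,
                Expression<Func<TModel, TValue>> expression)
            {
                var metadata = ModelMetadata.FromLambdaExpression(expression, html.ViewData);
                // DisplayName comes from [Display(Name = ...)]; fall back to the property name.
                var name = metadata.DisplayName ?? metadata.PropertyName;
                return MvcHtmlString.Create(HttpUtility.HtmlEncode(name));
            }
        }

    Usage in the view would then be along the lines of <th><%: Html.DisplayNameOnlyFor(model => model.Company) %></th>.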

    Read the article

  • Silverlight: Why can't silverlight see my xaml file referenced in MergedDictionary?

    - by Sergio
    Hi there, I have a Silverlight application with a number of styles defined in a separate xaml file under the directory /Styles. My App.xaml looks like this:

        <Application.Resources>
            <ResourceDictionary>
                <ResourceDictionary.MergedDictionaries>
                    <ResourceDictionary Source="/Styles/Legend.xaml"/>
                </ResourceDictionary.MergedDictionaries>
            </ResourceDictionary>
        </Application.Resources>

    Legend.xaml has its Build Action set to Content and 'Do Not Copy' as the Copy to Output Directory setting. The message I'm getting in Blend is:

        An error occurred while finding the resource dictionary "/Styles/Legend.xaml"

    Thanks in advance!

    Read the article

  • Why does Perl's readdir() cache directory entries?

    - by Frank Straetz
    For some reason Perl keeps caching the directory entries I'm trying to read using readdir:

        opendir(SNIPPETS, $dir_snippets); # or die...
        while ( my $snippet = readdir(SNIPPETS) ) {
            print ">>>".$snippet."\n";
        }
        closedir(SNIPPETS);

    Since my directory contains two files, test.pl and test.man, I'm expecting the following output:

        .
        ..
        test.pl
        test.man

    Unfortunately Perl returns a lot of files that have since vanished, for example because I tried to rename them. After I move test.pl to test.yeah, Perl will return the following list:

        .
        ..
        test.pl
        test.yeah
        test.man

    What's the reason for this strange behaviour? The documentation for opendir, readdir and closedir doesn't mention some sort of caching mechanism. "ls -l" clearly lists only two files.

    Read the article

  • Integrate Cppcheck with Emacs

    - by N.N.
    Is it possible to integrate Cppcheck with Emacs in a more sophisticated way than simply calling the shell command on the current buffer? I would like Emacs to be able to parse Cppcheck's messages and treat them as messages from a compiler (similar to how compile works), such as using C-x ` to visit the targets of Cppcheck's messages. Here is some example output from Cppcheck:

        $ cppcheck --enable=all test.cpp
        Checking test.cpp...
        [test.cpp:4]: (error) Possible null pointer dereference: p - otherwise it is redundant to check if p is null at line 5
        [test.cpp:38]: (style) The scope of the variable 'i' can be reduced
        [test.cpp:38]: (style) Variable 'i' is assigned a value that is never used
        [test.cpp:23]: (error) Possible null pointer dereference: p
        [test.cpp:33]: (error) Possible null pointer dereference: p
        Checking usage of global functions..
        [test.cpp:36]: (style) The function 'f' is never used
        [test.cpp:1]: (style) The function 'f1' is never used
        [test.cpp:9]: (style) The function 'f2' is never used
        [test.cpp:26]: (style) The function 'f3' is never used

    Read the article

  • R strsplit and vectorization

    - by James
    When creating functions that use strsplit, vector inputs do not behave as desired, and sapply needs to be used. This is due to the list output that strsplit produces. Is there a way to vectorize the process - that is, so the function produces the correct element of the list for each of the elements of the input?

    For example, to count the lengths of words in a character vector:

        words <- c("a","quick","brown","fox")

        > length(strsplit(words,""))
        [1] 4
        # The number of words (length of the list)

        > length(strsplit(words,"")[[1]])
        [1] 1
        # The length of the first word only

        > sapply(words, function (x) length(strsplit(x,"")[[1]]))
            a quick brown   fox
            1     5     5     3
        # Success, but potentially very slow

    Ideally, something like length(strsplit(words,"")[[.]]) where . is interpreted as being the relevant part of the input vector.

    Read the article

  • Portal And Content - Content Integration - Best Practices

    - by Stefan Krantz
    Lately we have seen an increase in projects that have failed to achieve either user-friendly content integration or satisfactory performance. Our intention is to close any knowledge gap that our previous posts might have left you with, so this post will repeat some recommendations and reference back to older useful posts. Moreover, this post will help you understand, from the ground up, how to design, architect and implement business-enabled, responsive and performant portals with complex requirements on business-centric information publishing.

    Design the Information Model

    The key to successful portal deployments is information modeling. It is essential to understand the use case you are designing for, so here is a set of questions to ask yourself or your customer:

    - Who will own the content, IT or Business? Answer: Business
    - Who will publish the content, IT or Business? Answer: Business
    - Will there be multiple publishers? Answer: Yes
    - Are the publishers computer scientists? Answer: No
    - How often does the information change: daily, weekly, monthly? Answer: Daily, weekly

    If your answers match at least two of these, we strongly recommend you design your content with the following principles:

    - Divide your pages into logical sections, where each section is marked with its purpose
    - Assign capabilities to each section: does it contain text, images and formatting, and/or is it static and populated through other contextual information
    - Select the editor/design element type:
      - WYSIWYG - rich text
      - Plain Text - non-formatted text
      - Image - image object
      - Static List - static list of formatted information
      - Dynamic Data List - assembled information from multiple data files through a CMIS query

    The result of such a design map could look like the examples below. Based on the required elements in the third design column from the left, you then design a data model in WebCenter Content - Site Studio by creating a Region Definition structure matching your design requirements. For more information on how to create a Region Definition see the Region Definition Post (see instruction 7 for details). Each region definition can then be used to instantiate data files; a data file holds the actual data for each element in the region definition. Another way to look at this is to treat the region definition as an extension of the metadata model in WebCenter Content for each data file item.

    Design content templates

    With a solid, dependable information model we can now proceed to template creation and page design. This phase focuses on how to place the content sections from the region definition on the page via a Content Presenter template. Remember, by creating Content Presenter templates you leverage the latest and most integrated technology WebCenter has to offer. This phase is much easier since you already have the information model and design wireframes to base the logic on; however, there are still a few considerations to pay attention to:

    - Base the template on ADF and make exceptions to the markup only when required
    - Leverage ADF design components for tabs, accordions and other similar components; this way the design in the content-published areas will comply with other design areas based on custom ADF task flows
    - There is no performance impact when using metadata- or region-definition-based data
    - All data, regardless of type (metadata or XML data), can be accessed via the Content Presenter node

    See below for applied examples on how to access data:

    - Access a metadata property from the document: #{node.propertyMap['myProp'].value} - myProp in this example can be, for instance, dDocName, dDocTitle, xComments or any other available metadata
    - Access element data from the data file XML: #{node.propertyMap['[Region Definition Name]:[Element name]'].asTextHtml} - Region Definition Name is the region definition that the current data file instantiates, and Element name is the element value you would like to grab from the data file

    I recommend you read the following useful posts on the content template topic: CMIS queries and template creation (see instruction 9 for details) and Static List template rendering. For more information on templates: Single Item Content Template, Multi Item Content Template, Expression Language.

    Internationalization Considerations

    When integrating content assets via the Content Presenter, you by now probably understand that the content item/data file is wired to the page. What is also pretty common at this stage is that the content item/data file supports only one language, since it is not practical or business-friendly to mix languages in a complex structure. You are therefore left with a very common dilemma: building a complete new portal for each locale, which is not a good option. However, with a little information modeling and a clear naming convention this can be addressed. Basically, make sure that all content items/data files are named with a predictable naming convention, like "Content1_EN" for the English rendition and "Content1_ES" for the Spanish rendition. This way, through simple, non-complex customizations you will be able to dynamically switch the actual content item/data file just before rendering. By following the proposed approach you not only enable a simple mechanism for internationalized content, you also preserve the Content Presenter's support for business-accessible run-time publishing of information on existing and new pages. I recommend you read the following useful post on internationalization topics: Internationalize with Content Presenter.

    Integrate with Review & Approval processes

    Today, the review and approval functionality and configuration are based on WebCenter Content - Criteria Workflows. Criteria Workflows use the metadata of the checked-in document to evaluate whether the document is under any review/approval process. So, for instance, if a Criteria Workflow is configured to trigger on any document with Version = "2" or higher and Content Type "Instructions", any matching content item version will, on check-in, enter the workflow before being released for general access. A few things to consider when configuring Criteria Workflows:

    - Make sure not to trigger on version one for content items that are data files - if you trigger on version 1 you will not only approve an empty document, you will also have a Content Presenter pointing to a non-existing document, since the document will only be available after successful completion of the workflow
    - Approval workflows sometimes require more complex criteria; in that case the recommendation is that the metadata triggering such criteria is automatically populated, which can be achieved through many approaches, including Content Profiles
    - Criteria Workflows are configured and managed in the WebCenter Content Administration Applets, where you can configure one or more workflows

    Once Criteria Workflows are configured, the Content Presenter supports editors with the approval process directly inline in the portal's "Contribution mode". In addition to approve/reject and details of the task, the Content Presenter natively supports viewing the current and future version of the change being approved. See below for an example.

    Architectural recommendation

    - To support review & approval processes, minimize the number of data files per page
    - Each CMIS query can consume significant time depending on its complexity - minimize the number of CMIS queries per page
    - Use Content Presenter templates based on ADF - this way you minimize the design considerations and optimize the use of caching
    - Implement the page in as few data files as possible - this simplifies the publishing process, increases performance and simplifies the release process
    - Integrating named data files (nodes) or lists of named nodes into pages increases performance versus querying for data
    - Named data files (nodes) or lists of named nodes also enable business-centric page creation and publishing and reduce the need for IT department interaction

    Summary

    Just because one architectural decision solves a business problem doesn't mean it's the right one; when designing portals, all parts of the architecture have to be in harmony and not impact each other. For instance, the most technically complex solution is not always the best, since it will most likely defeat business accessibility, performance or both. The best approach is therefore to first design for simplicity that even a non-technical user can operate, then consider the performance impact, and finally look at the technology challenges these bring and work around them first with out-of-the-box features, and after that design and develop functions to complement the shortcomings.

    Read the article

  • Apache Commons Codec with Android: could not find method

    - by dqminh
    Today I tried including the apache.commons.codec package in my Android application and couldn't get it running. Android could not find methods from org.apache.commons.codec.binary.* and output the following errors in DDMS:

        01-12 08:41:48.161: ERROR/dalvikvm(457): Could not find method org.apache.commons.codec.binary.Base64.encodeBase64URLSafeString, referenced from method com.dqminh.app.util.Util.sendRequest
        01-12 08:41:48.161: WARN/dalvikvm(457): VFY: unable to resolve static method 10146: Lorg/apache/commons/codec/binary/Base64;.encodeBase64URLSafeString ([B)Ljava/lang/String;
        01-12 08:41:48.161: WARN/dalvikvm(457): VFY: rejecting opcode 0x71 at 0x0004

    Any clue on how to solve this problem? Thanks a lot.

    Read the article

  • getting firewatir to run on mac osx: jssh problems

    - by z3cko
    I am trying to get firewatir to run on Mac OS X Leopard. I have Firefox 3.6rc2 installed, but running the most simple script does not work:

        require 'rubygems'
        require 'firewatir'
        ff = FireWatir::Firefox.new
        ff.goto("http://mail.yahoo.com")

    I am getting the following error:

        /usr/local/lib/ruby/gems/1.8/gems/firewatir-1.6.5/lib/firewatir/firefox.rb:237:in `set_defaults': Unable to connect to machine : 127.0.0.1 on port 9997. Make sure that JSSh is properly installed and Firefox is running with '-jssh' option (Watir::Exception::UnableToStartJSShException)
            from /usr/local/lib/ruby/gems/1.8/gems/firewatir-1.6.5/lib/firewatir/firefox.rb:131:in `initialize'
            from ./watir-test.rb:12:in `new'
            from ./watir-test.rb:12

    Even when I am trying to start Firefox with the -jssh option, I get an error (although another one):

        /Applications/Firefox.app/Contents/MacOS/firefox-bin -jssh

    The error output in that case:

        /usr/local/lib/ruby/gems/1.8/gems/firewatir-1.6.5/lib/firewatir/firefox.rb:125:in `initialize': Firefox is running without -jssh (RuntimeError)

    Is there any tutorial or hint to get firewatir actually running on Mac OS X?

    Read the article

  • How to set target hosts in Fabric file

    - by ssc
    I want to use Fabric to deploy my web app code to development, staging and production servers. My fabfile:

        def deploy_2_dev():
            deploy('dev')

        def deploy_2_staging():
            deploy('staging')

        def deploy_2_prod():
            deploy('prod')

        def deploy(server):
            print 'env.hosts:', env.hosts
            env.hosts = [server]
            print 'env.hosts:', env.hosts

    Sample output:

        host:folder user$ fab deploy_2_dev
        env.hosts: []
        env.hosts: ['dev']
        No hosts found. Please specify (single) host string for connection:

    When I create a set_hosts() task as shown in the Fabric docs, env.hosts is set properly. However, this is not a viable option, neither is a decorator. Passing hosts on the command line would ultimately result in some kind of shell script that calls the fabfile; I would prefer having one single tool do the job properly.

    It says in the Fabric docs that 'env.hosts is simply a Python list object'. From my observations, this is simply not true. Can anyone explain what is going on here? How can I set the host to deploy to?

    Read the article

  • C#/.Net: What are the best practices for setting ResXResourceReader's basepath?

    - by raj.tiwari
    I am working on a Windows Forms application in C#/.NET. I want to use a resource file that contains translations of my strings. In my project in Visual Studio I have the following hierarchy:

        Project
            CS files ...
            Resources\
                resource.en-US.resx

    I am trying to read in the resource file as follows:

        m_ResourceReader = new ResXResourceReader("resources/resource.en-US.resx");

    When I run this project, Visual Studio seems to look for the resources folder in the bin/Debug output folder of my project. My questions are:

    - What is the right way to reference a resource file?
    - I would like my installer to place this resource file under my application's folder, under Program Files\MyApp\resources\resource.en-US.resx. What would be the way to make ResXResourceReader read it from that location?

    Thanks for your help. -Raj
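
    A minimal sketch of one common approach, assuming the .resx files are shipped alongside the executable (mark them Copy to Output Directory so they land in bin\Debug during development, and have the installer drop them in the same relative location): build the path from the application's base directory instead of the current working directory.

        // Sketch only: resolve the .resx path relative to wherever the application
        // is installed, rather than relying on the working directory.
        using System;
        using System.IO;
        using System.Resources;

        static class ResourceLoading
        {
            public static ResXResourceReader OpenResourceFile(string culture)
            {
                string baseDir = AppDomain.CurrentDomain.BaseDirectory;   // bin\Debug while developing, install folder after deployment
                string path = Path.Combine(baseDir, "resources", "resource." + culture + ".resx");
                return new ResXResourceReader(path);
            }
        }

    (The more idiomatic Windows Forms route is to let the project compile the .resx files into satellite assemblies and read strings through a ResourceManager, but the sketch above keeps the loose-file layout described in the question.)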

    Read the article

  • Embedded BI for ASP.NET

    - by Michael Shimmins
    Can anyone recommend a decent business intelligence & reporting application that can be integrated into an OEM application? Primary requirements are:

    - Administrators can define cubes/dimensions etc (or we - the OEM - can predefine some)
    - Report designers can easily inspect data by visually selecting dimensions, filters etc in an ad hoc way that is quickly output with little upfront investment
    - Report designers can design a report based on dimensions/filters and save their definition to be run as required
    - Report viewers can view the reports defined by the report designers

    All of this in .NET, so that it can be branded and integrated into our existing web application's look and feel. Permissions need to work off our user/groups system.

    I've found a few that look good, but they are all in Java, and I don't want to ask our clients to install ASP.NET for the app and then Java, Tomcat etc just for reporting. Thanks

    Read the article
