Search Results

Search found 11679 results on 468 pages for 'maven assembly plugin'.

Page 424/468 | < Previous Page | 420 421 422 423 424 425 426 427 428 429 430 431  | Next Page >

  • Programming a jQuery slider

    - by Mirage
    I want to program a jQuery slider myself rather than using a plugin, but I want to understand the basic idea first. For example, I have this markup:

      <ul>
        <li><div>content</div></li>
        <li><div>content</div></li>
        <li><div>content</div></li>
        <li><div>content</div></li>
        <li><div>content</div></li>
      </ul>

    I want to show only three items at a time, horizontally, with arrows at the left and right ends. I know jQuery basics, but I don't know what the steps should be. For example, when the right arrow is clicked, the leftmost div should slide out to the left and a new div should come in from the right. Any ideas on the sequence of what I need to do?
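    A minimal sketch of the basic mechanics, assuming (none of this is in the original markup) that the <ul> sits inside a fixed-width wrapper with overflow: hidden, that each <li> floats left with a known pixel width, and that the arrows are elements with ids #prev and #next: keep an index of the leftmost visible item and animate the list's left margin when an arrow is clicked.

      // Assumed setup: <div id="viewport"> wraps the <ul>; #prev / #next are the arrow links.
      var itemWidth = 100;                        // assumed fixed width per <li>, in px
      var visible   = 3;                          // items shown at once
      var total     = $('#viewport li').length;
      var index     = 0;                          // index of the leftmost visible item

      function slideTo(i) {
        // Clamp so we never scroll past the first or last group of items.
        index = Math.max(0, Math.min(i, total - visible));
        $('#viewport ul').animate({ marginLeft: -index * itemWidth }, 300);
      }

      $('#next').click(function () { slideTo(index + 1); return false; });
      $('#prev').click(function () { slideTo(index - 1); return false; });

    From there the remaining steps are mostly CSS: give the wrapper a width of visible * itemWidth and hide the overflow so only three items show at once.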

    Read the article

  • MVC View Model Intellisense / compile error

    - by Marty Trenouth
    I have one library with my ORM and am working with an MVC application. I have a problem where the pages won't compile because the views can't see the model's properties (which are inherited from lower-level base classes). The system throws a compile error saying that 'object' does not contain a definition for 'ID' and no extension method 'ID' accepting a first argument of type 'object' could be found (are you missing a using directive or an assembly reference?), implying that the view is not seeing the model. In the controller I have full access to the model, and I have checked the Inherits portion of the view to validate that the correct type is being passed.

    Controller:

      return View(new TeraViral_Blog());

    View:

      <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<com.models.TeraViral_Blog>" %>
      <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
          Index2
      </asp:Content>
      <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
          <h2>Index2</h2>
          <fieldset>
              <legend>Fields</legend>
              <p>
                  ID: <%= Html.Encode(Model.ID) %>
              </p>
          </fieldset>
      </asp:Content>

    Read the article

  • Build a Visual Studio Project without access to referenced dlls

    - by David Reis
    I have a project with a set of binary dependencies (assembly DLLs for which I do not have the source code). At runtime those dependencies must be pre-installed on the machine, and at compile time they are required in the source tree, e.g. in a lib folder. Since I'm also making the source code for this program available, I would like to enable a simple download-and-build experience. Unfortunately I cannot redistribute the DLLs, which complicates things, since VS won't link the project without access to the referenced DLLs. Is there any way to build and link this project in the absence of the real referenced DLLs? Maybe there's a way to tell VS to link against an auto-generated stub of the DLL, so that it can rebuild without the original? Maybe there's a third-party tool that will do this? Any clues or best practices in this area? I realize a person must have access to the DLLs to run the code, so it makes sense that they could add them to the build process; I'm just trying to save them the pain of collecting all the DLLs and placing them in the lib folder manually.

    Read the article

  • How can I run Ruby specs and/or tests in MacVim without locking up MacVim?

    - by Henry
    About 6 months ago I switched from TextMate to MacVim for all of my development work, which primarily consists of coding in Ruby, Ruby on Rails and JavaScript. With TextMate, whenever I needed to run a spec or a test, I could just hit command+R on the test or spec file and another window would open with the results displayed in the 'pretty' format. If the spec or test was a lengthy one, I could continue working with the codebase since the test/spec ran in a separate process/window. After the test ran, I could click through the results directly to the corresponding line in the spec file. Tim Pope's excellent rails.vim plugin comes very close to emulating this behavior within the MacVim environment. Running :Rake when the current buffer is a test or spec runs the file, then splits the buffer to display the results. You can navigate through the results and key through to the corresponding spot in the file. The problem with the rails.vim approach is that it locks up the MacVim window while the test runs. This can be an issue with big apps that might have a lot of setup/teardown built into the tests. Also, the visual red/green HTML results that TextMate displays (via --format pretty, I'm assuming) are a bit easier to scan than the split window. This guy came close about 18 months ago: http://cassiomarques.wordpress.com/2009/01/09/running-rspec-files-from-vim-showing-the-results-in-firefox/ His script worked with a bit of hacking, but the tests still ran within MacVim and locked up the current window. Any ideas on how to fully replicate the TextMate behavior described above in MacVim? Thanks!

    Read the article

  • Concatenate 2 text elements on a line with full-width border using CSS only

    - by Michael Horne
    Okay, I'm a newbie to CSS3, so please be gentle. ;-) I'm working with some WordPress code (the WooCommerce plugin, to be exact), and I'm trying to format a line in a sidebar so that two separate text items (one in an <a>, the other in a <span>) sit on the same line, spanning the full width of the column, with a bottom border. It currently looks something like this, except that the bottom border under each text does not go all the way across the enclosing sidebar box: http://www.dalluva.com/temp/browse-catalog.JPG (sorry, I'm new and can't post inline images yet). Here's the code fragment I'm trying to live with (i.e. I don't want to change it):

      <div class="widget">
        ...
        <ul class="product-categories">
          <li class="cat-item">
            <a href="http://localhost/dalluva/shop/product-category/books/">Books</a>
            <span class="count">(5)</span>
          </li>
        ...

    And here's the CSS I have now:

      .widget ul li a {
        border-bottom: 1px solid #e9e9e9;
        line-height: 1.0;
        padding: 5px 0 5px 22px;
        display: inline-block;
      }
      .widget ul li span {
        border-bottom: 1px solid #e9e9e9;
        line-height: 1.0;
        padding: 5px 0 5px 0;
        display: inline-block;
      }

    The output in the image above looks right for this CSS, but when I add width: 100% to the span rule, the span element wraps to the next line, like this: http://www.dalluva.com/temp/browse-catalog-2.JPG. I've played with white-space: nowrap, overflow: hidden, etc., but I can't find a way to keep both the <a> and the <span> text on the same line with the border extending the full width of the column. Any suggestions on getting the desired effect through CSS only? Thanks. Michael

    Read the article

  • How can I use jQuery Validate to create a popup with all form errors when the submit button is clicked?

    - by Larry
    I am using the jQuery Validation plugin for client-side form validation. In addition to the colorful styling on invalid form fields, my client requires that a popup message be shown. I only want to show this message when the submit button is clicked, because it would drive the user crazy otherwise. I tried the following code, but errorList is always empty. Does anyone know the correct way to do something like this?

      function popupFormErrors(formId) {
        var validator = $(formId).validate();
        var message = '';
        for (var i = 0; i < validator.errorList.length - 1; i++) {
          message += validator.errorList[i].message + '\n';
        }
        if (message.length > 0) {
          alert(message);
        }
      }

      $('#btn-form-submit').click(function () {
        $('#form-register').submit();
        popupFormErrors('#btn-form-submit');
        return false;
      });

      $('#form-register').validate({
        errorPlacement: function (error, element) { /* no room on page */ },
        highlight: function (element) {
          $(element).addClass('invalid-input');
        },
        unhighlight: function (element) {
          $(element).removeClass('invalid-input');
        },
        ...
      });

    Update: from the info in the accepted answer I came up with this.

      var submitClicked = false;

      $('#btn-form-submit').click(function () {
        submitClicked = true;
        $('#form-register').submit();
        return false;
      });

      $('#form-register').validate({
        errorPlacement: function (error, element) { /* no room on page */ },
        highlight: function (element) {
          $(element).addClass('invalid-input');
        },
        unhighlight: function (element) {
          $(element).removeClass('invalid-input');
        },
        showErrors: function (errorsObj) {
          this.defaultShowErrors();
          if (submitClicked) {
            submitClicked = false;
            ... create popup from errorsObj ...
          }
        }
        ...
      });
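    For the "... create popup from errorsObj ..." placeholder, here is a minimal sketch of one way to assemble the alert text. It assumes the plugin passes showErrors the error list as its second argument, with the same .message field that popupFormErrors reads above; treat the parameter names as illustrative.

      showErrors: function (errorMap, errorList) {
        this.defaultShowErrors();              // keep the normal highlight/unhighlight behaviour
        if (submitClicked) {
          submitClicked = false;
          var message = '';
          $.each(errorList, function (i, error) {
            message += error.message + '\n';   // same .message field used in popupFormErrors
          });
          if (message.length > 0) {
            alert(message);
          }
        }
      }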

    Read the article

  • Why does my Jabber bot only work if I'm debugging my Perl script?

    - by TheGNUGuy
    I am trying to make a Jabber bot from scratch and my script is acting funny. I was originally developing the bot on a remote CentOS box, but I have switched to a local Win7 machine. Right now I'm using ActiveState Perl, and I'm using Eclipse with the Perl plugin to run and debug the script. The funny behavior occurs when I run or debug the script: if I run the script under the debugger it works fine, meaning I can send messages to the bot and it can send messages to me. However, when I just execute the script normally, the bot sends the successful connection message, then disconnects from my Jabber server and the script ends. I'm a novice when it comes to Perl and I can't figure out what I'm doing wrong. My guess is it has something to do with the subroutines and sending the presence of the bot. (I know for sure that it has something to do with sending the bot's presence, because if the presence code is removed the script behaves as expected, except the bot doesn't appear to be online.) If anyone can help me with this that would be great. I originally had everything in one file but separated it into several while trying to figure out this problem. Here are the pastebin links to my source code:

      jabberBot.pl: http://pastebin.com/cVifv0mm
      chatRoutine.pm: http://pastebin.com/JXmMT7av
      trimSpaces.pm: http://pastebin.com/SkeuWtu1

    Thanks again for any help!

    Read the article

  • JavaScript tr click event with newly created rows

    - by yalechen
    I am very new to web development. I am currently using the tablesorter jQuery plugin to create a dynamic table where the user can add and delete rows. I am having trouble changing the background color of newly created rows when they are clicked; it works fine with rows that are hard-coded in the HTML. Here is the relevant code:

      $(document).ready(function () {
        $('table.tablesorter td').click(function (event) {
          $(this).parent('tr').toggleClass('rowclick');
          $(this).parent('tr').siblings().removeClass('rowclick');
        });
      });

    rowclick is a CSS class:

      table.tablesorter tbody tr.rowclick td {
        background-color: #8dbdd8;
      }

    I have tried adding the following to the JavaScript function that adds a new row:

      var createClickHandler = function (newrow) {
        return function (event) {
          //alert(newrow.cells[0].childNodes[0].data);
          newrow.toggleClass('rowclick');
          newrow.siblings().removeClass('rowclick');
        };
      };
      row.onclick = createClickHandler(row);

    The alert correctly displays the text in the first column of the row when I click the new row. However, my new rows do not respond to the CSS class. Anyone have any ideas? Thanks in advance.
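    One common way around this (a sketch, not taken from the question) is to bind a single delegated handler on the table, so that rows added later are covered automatically; on jQuery 1.7+ that is .on(), while older versions would use .delegate() or .live() instead.

      $(document).ready(function () {
        // One handler on the table catches clicks on any <td>, including
        // cells in rows that are appended after this code runs.
        $('table.tablesorter').on('click', 'td', function () {
          var row = $(this).closest('tr');
          row.toggleClass('rowclick');
          row.siblings().removeClass('rowclick');
        });
      });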

    Read the article

  • Problems solving oddly acting labels in IE7

    - by Qwibble
    Okay, so this is sort of a double question, so I'll split it into two.

    First part: in modern browsers the main bold labels sit above their corresponding form elements and align to the left, as expected. However, in IE7 they are randomly inset by 10-15px. I went through the developer tools and could find nothing to fix it. I've made sure all my margins and padding are reset, so I don't really understand =S Here's the page demo - link. Maybe some of you IE bug-fixing geniuses know what the problem is? =D

    Second part: again with labels, this time the inline ones next to the checkboxes and radio buttons. In modern browsers they sit beside the form elements as expected, but not in IE7, where they drop to a new line. I've tried floating, changing margins and everything else, but nothing gets them inline with the div.checker or div.radio that is created by the Uniform jQuery plugin. Here's the page demo - link.

    Sorry for troubling you with my IE7 problems, I know they aren't the most fun to solve. Hopefully someone has the patience to help. Matt

    Read the article

  • jQuery autocomplete problem

    - by heffaklump
    I'm using jQuery's Autocomplete plugin, but it doesn't autocomplete when I enter anything. Any ideas why it doesn't work? The basic example works, but not mine.

      var ppl = {"ppl":[{"name":"peterpeter", "work":"student"}, {"name":"piotr","work":"student"}]};

      var options = {
        matchContains: true, // So we can search inside the string too
        minChars: 2,         // this sets autocomplete to begin from X characters
        dataType: 'json',
        parse: function (data) {
          var parsed = [];
          data = data.ppl;
          for (var i = 0; i < data.length; i++) {
            parsed[parsed.length] = {
              data: data[i],        // the entire JSON entry
              value: data[i].name,  // the default display value
              result: data[i].name  // to populate the input element
            };
          }
          return parsed;
        },
        // To format the data returned by the autocompleter for display
        formatItem: function (item) {
          return item.name;
        }
      };

      $('#inputplace').autocomplete(ppl, options);

    OK, updated:

      <input type="text" id="inputplace" />

    So, when entering for example "peter" in the input field, no autocomplete suggestions appear. It should give "peterpeter", but nothing happens. And one more thing: using this example works perfectly.

      var data = "Core Selectors Attributes Traversing Manipulation CSS Events Effects Ajax Utilities".split(" ");
      $("#inputplace").autocomplete(data);
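    Since the plain-array form at the end does work, one thing worth trying (a sketch, not a confirmed fix) is to flatten the local JSON into the same kind of array of strings before handing it to the plugin, assuming the plugin accepts an options object alongside a local array the way it does with a URL:

      // Pull just the names out of the local JSON structure.
      var names = $.map(ppl.ppl, function (person) {
        return person.name;          // ["peterpeter", "piotr"]
      });

      // Same call shape as the basic example that already works.
      $('#inputplace').autocomplete(names, {
        matchContains: true,
        minChars: 2
      });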

    Read the article

  • NASM - Load code from USB Drive

    - by new123456
    Hello. Would any assembly gurus know the argument (register dl) that signifies the first USB drive? I'm working through a couple of NASM tutorials and would like to get a physical boot (I can get a clean one with qemu). This is the section of code that loads the "kernel" data from disk:

      loadkernel:
          mov si, LMSG    ;; 'Loading kernel',13,10,0
          call prints     ;; ex puts()
          mov dl, 0x00    ;; The disk to load from
          mov ah, 0x02    ;; Read operation
          mov al, 0x01    ;; Sectors to read
          mov ch, 0x00    ;; Track
          mov cl, 0x02    ;; Sector
          mov dh, 0x00    ;; Head
          mov bx, 0x2000  ;; Buffer end
          mov es, bx
          mov bx, 0x0000  ;; Buffer start
          int 0x13
          jc loadkernel

          mov ax, 0x2000
          mov ds, ax
          jmp 0x2000:0x00

    If it makes any difference, I'm running a stock Dell Inspiron 15 BIOS. Apparently, the correct value for me is 0x80: the BIOS loads the hard drives and labels them starting at 0x80, according to this answer, and my particular BIOS decides to present the USB drive as the first drive for some reason, so I can boot from there.

    Read the article

  • Zend_Auth and database session SaveHandler

    - by takeshin
    I have created a Zend_Auth adapter implementing Zend_Auth_Adapter_Interface (similar to Pádraic's adapter) and created a simple ACL plugin. Everything works fine with the default session handler. So far, so good. As a next step I created a custom session SaveHandler to persist session data in the database. My implementation is very similar to this one from parables-demo. Everything seems to be working: session data are properly saved to the database and session objects are serialized, but authentication does not work when I enable this custom SaveHandler. I have debugged the authentication, and everything works up until the next request, when the authentication data are lost. I suspected that it has something to do with the fact that I use $adapter->write($object) instead of $adapter->write($string), but the same happens with strings. I'm bootstrapping Zend_Application_Resource_Session in the first Bootstrap method, as early as possible. Does Zend_Auth need any extra configuration to persist data in the database? Why is the identity being lost?

    Read the article

  • Can I delay the keyup event for jQuery?

    - by Paul
    I'm using the Rotten Tomatoes movie API in conjunction with Twitter's typeahead plugin from Bootstrap 2.0. I've been able to integrate the API, but the issue I'm having is that the API gets called after every keyup event. This is all fine and dandy, but I would rather make the call after a small pause, allowing the user to type several characters first. Here is my current code, which calls the API on every keyup event:

      var autocomplete = $('#searchinput').typeahead()
        .on('keyup', function (ev) {
          ev.stopPropagation();
          ev.preventDefault();

          // filter out up/down, tab, enter, and escape keys
          if ($.inArray(ev.keyCode, [40, 38, 9, 13, 27]) === -1) {
            var self = $(this);

            // set typeahead source to empty
            self.data('typeahead').source = [];

            // active used so we aren't triggering duplicate keyup events
            if (!self.data('active') && self.val().length > 0) {
              self.data('active', true);

              // Do data request. Insert your own API logic here.
              $.getJSON("http://api.rottentomatoes.com/api/public/v1.0/movies.json?callback=?&apikey=MY_API_KEY&page_limit=5", {
                q: encodeURI($(this).val())
              }, function (data) {
                // set this to true when your callback executes
                self.data('active', true);

                // Filter out your own parameters. Populate them into an array,
                // since this is what typeahead's source requires
                var arr = [], i = 0;
                var movies = data.movies;
                $.each(movies, function (index, movie) {
                  arr[i] = movie.title;
                  i++;
                });

                // set your results into the typeahead's source
                self.data('typeahead').source = arr;

                // trigger keyup on the typeahead to make it search
                self.trigger('keyup');

                // All done, set to false to prepare for the next remote query.
                self.data('active', false);
              });
            }
          }
        });

    Is it possible to set a small delay and avoid calling the API after every keyup?
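    One way to add the pause (a sketch of the usual setTimeout/clearTimeout debounce; the 300 ms delay is an arbitrary choice) is to wrap the remote request so it only fires once the user has stopped typing for a moment:

      var debounceTimer = null;

      function debouncedLookup(lookupFn, delay) {
        // Cancel any pending call and schedule a fresh one.
        clearTimeout(debounceTimer);
        debounceTimer = setTimeout(lookupFn, delay);
      }

      // Inside the keyup handler, instead of calling $.getJSON directly:
      debouncedLookup(function () {
        // ... the existing $.getJSON request goes here ...
      }, 300);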

    Read the article

  • jQuery & Prototype Conflict

    - by DPereyra
    Hi, I am using the jQuery Autocomplete plugin on an HTML page where I also have an accordion menu that uses Prototype. They both work perfectly on their own, but when I try to use both components on a single page I get an error that I have not been able to understand:

      uncaught exception: [Exception... "Component returned failure code: 0x80004005 (NS_ERROR_FAILURE) [nsIDOMViewCSS.getComputedStyle]"
      nsresult: "0x80004005 (NS_ERROR_FAILURE)"
      location: "JS frame :: file:///C:/Documents and Settings/Administrator/Desktop/website/js/jquery-1.2.6.pack.js :: anonymous :: line 11"
      data: no]

    I found out the file conflicting with jQuery is 'effects.js', which is used by the accordion menu. I tried replacing this file with a newer version, but the newer version seems to break the accordion behavior. My guess is that the 'effects.js' file used in the accordion was modified to obtain the accordion demo output. I also tried the overriding method jQuery provides to avoid conflicts with other libraries, and that did not work. I obtained the accordion demo from the following site: http://www.stickmanlabs.com/accordion/ And the jQuery Autocomplete can be obtained from: http://docs.jquery.com/Plugins/Autocomplete#Setup Has anyone else experienced this issue? Thanks.
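    For reference, a minimal sketch of the no-conflict setup the question alludes to; jQuery.noConflict() is the documented mechanism, while the selector and URL here are just assumed placeholders. The idea is to hand the global $ back to Prototype and use a local alias for all jQuery code.

      // Load prototype.js / effects.js first, then jquery-1.2.6.pack.js, then this.
      var $j = jQuery.noConflict();   // give the global $ back to Prototype

      $j(function () {
        // jQuery code must use $j (or jQuery) instead of $ from here on.
        $j('#search-box').autocomplete('suggestions.php');
      });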

    Read the article

  • Tool to detect use/abuse of String.Concat (where StringBuilder should be used)

    - by Mark Rushakoff
    It's common knowledge that you shouldn't use a StringBuilder in place of a small number of concatenations:

      string s = "Hello";
      if (greetingWorld) {
          s += " World";
      }
      s += "!";

    However, in loops of a significant size, StringBuilder is the obvious choice:

      string s = "";
      foreach (var i in Enumerable.Range(1, 5000)) {
          s += i.ToString();
      }
      Console.WriteLine(s);

    Is there a tool that I can run on either raw C# source or a compiled assembly to identify where in the source code String.Concat is being called? (If you're not familiar, s += "foo" is mapped to String.Concat in the IL output.) Obviously, I can't realistically search through an entire project and evaluate every += to identify whether the lvalue is a string. Ideally, it would only point out calls inside a for/foreach loop, but I would even put up with all the false positives of noting every String.Concat. Also, I'm aware that there are some refactoring tools that will automatically refactor my code to use StringBuilder, but I am only interested in identifying the Concat usage at this point. I routinely run Gendarme and FxCop on my code, and neither of those tools identifies what I've described.

    Read the article

  • Why does C# not provide the C++ style 'friend' keyword?

    - by Ash
    The C++ friend keyword allows a class A to designate class B as its friend. This allows class B to access the private/protected members of class A. I've never read anything as to why this was left out of C# (and VB.NET). Most answers to this earlier StackOverflow question seem to be saying it is a useful part of C++ and there are good reasons to use it. In my experience I'd have to agree. Another question seems to me to really be asking how to do something similar to friend in a C# application. While the answers generally revolve around nested classes, it doesn't seem quite as elegant as using the friend keyword. The original Design Patterns book uses the friend keyword regularly throughout its examples. So in summary, why is friend missing from C#, and what is the "best practice" way (or ways) of simulating it in C#? (By the way, the "internal" keyword is not the same thing: it allows ALL classes within the entire assembly to access internal members, while friend allows you to give access to just one other class.)

    Read the article

  • What is the easiest way to add compression to WCF in Silverlight?

    - by caryden
    I have a Silverlight 2 Beta 2 application that accesses a WCF web service. Because of this, it currently can only use basicHttpBinding. The web service will return fairly large amounts of XML data. This seems fairly wasteful from a bandwidth standpoint, as the response, if zipped, would be smaller by a factor of 5 (I actually pasted the response into a txt file and zipped it). The request does have the "Accept-Encoding: gzip, deflate" header - is there any way to have the WCF service gzip (or otherwise compress) the response? I did find this link, but it sure seems a bit complex for functionality that should be handled out-of-the-box, IMHO.

    OK - at first I marked the solution using System.IO.Compression as the answer, as I could never "seem" to get IIS7 dynamic compression to work. Well, as it turns out, dynamic compression on IIS7 was working all along. It is just that Nikhil's Web Developer Helper plugin for IE did not show it working. My guess is that since Silverlight hands the web service call off to the browser, the browser handles it "under the covers" and Nikhil's tool never sees the compressed response. I was able to confirm this by using Fiddler, which monitors traffic outside the browser application. In Fiddler, the response was, in fact, gzip compressed! The other problem with the System.IO.Compression solution is that System.IO.Compression does not exist in the Silverlight CLR. So from my perspective, the EASIEST way to enable WCF compression in Silverlight is to enable dynamic compression in IIS7 and write no code at all.

    Read the article

  • Cloning an input type=file and setting the value

    - by jribeiro
    I know that it isn't possible to set the value of an input type="file", for security reasons... My problem is: I needed to style an input type="file", so what I did was use a button and hide the file input, like this:

      <a href="#" onclick="$('input[name=&quot;photo1&quot;]').click(); return false;" id="photo1-link"></a>
      <input type="file" name="photo1" class="fileInput jqtranformdone validate[required]" id="photo1" />

    This works great in all browsers except IE, which gives me an access-denied error when submitting through Ajax. I'm using the ajaxSubmit jQuery plugin (malsup.com/jquery/form/). So after reading for a while, I tried this:

      var photo1Val = $('#photo1').val();
      var clone1 = $('#photo1').clone().val(photo1Val);
      $('#photo1').remove();
      clone1.appendTo('form');
      console.log(photo1Val); // prints the right value C:/fakepath/blablabla.jpg
      $('form').ajaxSubmit(options);

    The problem is that after this the value of $('#photo1') is empty... Any ideas how to work around this? Thanks
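    A sketch of one commonly used workaround (an assumption about what IE will accept, not a confirmed fix): since a cloned file input never carries the selected file, keep the original element and move it into the form, leaving a fresh clone behind as the visible stand-in.

      var original = $('#photo1');

      // A clone keeps the markup but not the chosen file; use it only as the
      // visible placeholder that stays where the styled control lives.
      var placeholder = original.clone(false);
      original.after(placeholder);

      // Move the real input into the form (its value travels with the DOM node),
      // hide it, and submit.
      original.appendTo('form').hide();
      $('form').ajaxSubmit(options);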

    Read the article

  • Creating a tar file with checksums included

    - by wazoox
    Here's my problem: I need to archive a lot of data (up to 60 TB) consisting of big files (usually 30 to 40 GB each) into tar files. I would like to make checksums (md5, sha1, whatever) of these files before archiving; however, not reading every file twice (once for checksumming, once for tar'ing) is more or less a necessity to achieve very high archiving performance (LTO-4 wants 120 MB/s sustained, and the backup window is limited). So I'd need some way to read a file, feeding a checksumming tool on one side and building a tar to tape on the other side, something along the lines of:

      tar cf - files | tee tarfile.tar | md5sum -

    Except that I don't want the checksum of the whole archive (this sample shell code does just that) but a checksum for each individual file in the archive. I've studied the GNU tar, Pax and Star options. I've looked at the source of Archive::Tar. I see no obvious way to achieve this. It looks like I'll have to hand-build something in C or similar to achieve what I need. Perl/Python/etc. simply won't cut it performance-wise, and the various tar programs miss the necessary "plugin architecture". Does anyone know of an existing solution to this before I start code-churning?

    Read the article

  • Is there a way to control how pytest-xdist runs tests in parallel?

    - by superselector
    I have the following directory layout:

      runner.py
      lib/
      tests/
          testsuite1/
              testsuite1.py
          testsuite2/
              testsuite2.py
          testsuite3/
              testsuite3.py
          testsuite4/
              testsuite4.py

    The format of the testsuite*.py modules is as follows:

      import pytest

      class testsomething:
          def setup_class(self):
              ''' do some setup '''
              # Do some setup stuff here

          def teardown_class(self):
              ''' do some teardown '''
              # Do some teardown stuff here

          def test1(self):
              # Do some test1 related stuff

          def test2(self):
              # Do some test2 related stuff

          ...

          def test40(self):
              # Do some test40 related stuff

      if __name__ == '__main__':
          pytest.main(args=[os.path.abspath(__file__)])

    The problem I have is that I would like to execute the 'testsuites' in parallel, i.e. I want testsuite1, testsuite2, testsuite3 and testsuite4 to start execution in parallel, but individual tests within each testsuite need to be executed serially. When I use the 'xdist' plugin from py.test and kick off the tests using 'py.test -n 4', py.test gathers all the tests and randomly load-balances them among 4 workers. This leads to the 'setup_class' method being executed for every test within a 'testsuitex.py' module (which defeats my purpose; I want setup_class to be executed only once per class and the tests executed serially after that). Essentially, what I want the execution to look like is:

      worker1: executes all tests in testsuite1.py serially
      worker2: executes all tests in testsuite2.py serially
      worker3: executes all tests in testsuite3.py serially
      worker4: executes all tests in testsuite4.py serially

    while worker1, worker2, worker3 and worker4 all execute in parallel. Is there a way to achieve this with the 'pytest-xdist' framework? The only option I can think of is to kick off a separate process to execute each test suite individually within runner.py:

      def test_execute_func(testsuite_path):
          subprocess.process('py.test %s' % testsuite_path)

      if __name__ == '__main__':
          # Gather all the testsuite names
          for each testsuite:
              multiprocessing.Process(test_execute_func, (testsuite_path,))

    Read the article

  • Problem adding objects to a Hashtable

    - by daemonkid
    I am trying to call a class method dynamically, depending on a condition. This is how I am doing it: I have three classes that implement a single interface:

      interface IReadFile
      {
          string DoStuff();
      }

    The three classes A, B, C implement the interface above. I am trying to add them to a hashtable with the code below:

      _HashT.Add("a", new classA());
      _HashT.Add("b", new classB());
      _HashT.Add("c", new classC());

    This compiles fine, but gives a runtime error: {Object reference not set to an instance of an object.} I was planning to return the correct class as the interface type depending on a parameter that matches the key value. Say I send in "a": ClassA is returned as the interface type and the method is called.

      IReadFile Obj = (IReadFile)_HashT["a"].GetType();
      obj.DoStuff();

    How do I correct the part above where the objects need to be added to the hashtable? Or do I need to use a different approach? All the classes are in the same assembly and namespace. Thanks for your time.

    Read the article

  • Getting CS1502 compiler error in dev environment but not production

    - by nw
    When I try to run my ASP.NET app from my development environment I get the following error message:

      Compiler Error Message: CS1502: The best overloaded method match for 'mmars.Printing.printFunctions.SetPrintSummaryProperties(mmars.contextInfo, ref mmars.Printing.printObjSummary)' has some invalid arguments.

    When I publish and run on our production server I don't get this error. It seems to compile fine when I build from the build menu (in fact, if I change the second argument of the SetPrintSummaryProperties call below, I get a compiler error in Visual Studio), but now I've suddenly started getting this error message at runtime. So another question, in addition to getting rid of the error: why is the .NET development server even trying to do JIT compilation on my project if it is already compiled into a DLL?

      Printing.printObjSummary myPrintObj = new Printing.printObjSummary();
      Printing.printFunctions.SetPrintSummaryProperties(ci, ref myPrintObj);
      printObjects.Add(myPrintObj);

    This seems to have just suddenly appeared from nowhere today and it's extremely frustrating. Also, though there are no warnings at compile time, when I get redirected to the page with that first compilation error there are many warnings like the following:

      Warning: CS0436: The type 'mmars.MMARSSummaryDataItem' in 'c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\3dad423c\40569048\App_Code.b0rgpkzr.4.cs' conflicts with the imported type 'mmars.MMARSSummaryDataItem' in 'c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\3dad423c\40569048\assembly\dl3\7179c19a\345f948c_ece7ca01\mmars.DLL'. Using the type defined in 'c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files\root\3dad423c\40569048\App_Code.b0rgpkzr.4.cs'.

    What's the deal with that? Is the web server complaining about name conflicts between the source file and the DLL resulting from the source file?

    Read the article

  • What is the best practice in regards to building composite dtos off of an aggregate root with domain

    - by Chance
    I'm trying to figure out the best approach/practice for assembling a composite data transfer object off of an aggregate root, and would love to hear people's thoughts on this. For example, let's say I have a root that has a few domain objects as children. I want to assemble a specific view DTO, based on some business logic, that has either attributes or full DTOs of its objects. What I'm struggling with is figuring out where that assembly should happen.

    I can see it going on the domain object of the aggregate root, as there is some business logic associated with it. The benefit of this approach, from what I've deduced thus far, is that it should reduce the inevitable bleeding of business logic outside of the domain object. It also allows for private methods that take care of tasks that could become more complex with an external builder. The downside is that the domain object becomes much more entrenched in the application's workflow and represents much more than just the domain object. It could also become very large in the scenario where you need multiple composite DTOs.

    Alternatively, I could see it belonging to some form of transfer-object assembler where there is a builder for each domain object. The domain objects would still be responsible for GetDto() and UpdateFromDto(dto). Outside of that, the builder would handle the construction and deconstruction of composite DTOs. The downside is the one mentioned above: I fear this will easily lead to developers unfamiliar with DDD bleeding a ton of business logic into the assembler, which is what I desperately want to avoid.

    Any thoughts would be greatly appreciated.

    Read the article

  • Trouble using the term_extraction gem

    - by mathee
    I'm trying to install this gem: http://github.com/alexrabarts/term_extraction. It requires nokogiri, which I tried installing. I'm getting this as the output:

      >gem install nokogiri
      Successfully installed nokogiri-1.4.2.1-x86-mswin32
      1 gem installed
      Installing ri documentation for nokogiri-1.4.2.1-x86-mswin32...
      No definition for parse_memory
      No definition for parse_file
      No definition for parse_with
      No definition for get_options
      No definition for set_options
      Installing RDoc documentation for nokogiri-1.4.2.1-x86-mswin32...
      No definition for parse_memory
      No definition for parse_file
      No definition for parse_with
      No definition for get_options
      No definition for set_options

    I was able to install the term_extraction gem (per the README in the git repo):

      >gem install alexrabarts-term_extraction -s http://gems.github.com
      Successfully installed alexrabarts-term_extraction-0.1.4
      1 gem installed
      Installing ri documentation for alexrabarts-term_extraction-0.1.4...
      Installing RDoc documentation for alexrabarts-term_extraction-0.1.4...

    The issue is that when I try to test it out, I get an "uninitialized constant" error:

      ActionView::TemplateError (uninitialized constant ApplicationHelper::TermExtraction) on line #3 of app/views/questions/new.haml:
      1: %h1 New question
      2: -msg = "testing this context thing let's see what it gives me"
      3: -getTerms(msg)
      4: #new-question-form
      5: .box-background
      6: -form_for(@question) do |f|

      app/helpers/application_helper.rb:42:in `getTerms'
      app/views/questions/new.haml:3:in `_run_haml_app47views47questions47new46haml'
      haml (2.2.23) lib/haml/helpers/action_view_mods.rb:13:in `render'
      haml (2.2.23) lib/haml/helpers/action_view_mods.rb:13:in `render'
      app/controllers/questions_controller.rb:91:in `new'
      haml (2.2.23) lib/sass/plugin/rails.rb:20:in `process'

    Here is application_helper.rb:

      module ApplicationHelper
        def getTerms(context)
          yahoo = TermExtraction::Yahoo.new(:api_key => 'myAPIkey', :context => context)
        end
      end

    I'm not sure what the issue is. Any insight would be greatly appreciated!

    Read the article

  • Rails runner command not saving to cache

    - by mark
    Hi, I'm having a bit of a problem with a cron task generated by the Rails whenever plugin that should store remote data in the Rails cache for display. What I have is this schedule.rb:

      set :path, '/var/www/apps/tuexplore/current'

      every 1.hour do
        runner "Weather.cache_remote", :environment => :production
      end

    which calls this model:

      class Weather
        def self.cache_remote
          Rails.cache.write('weather_data', Net::HTTP.get_response(URI.parse(WEATHER_URL)).body)
        end
      end

    Calling whenever returns this:

      PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/home/deploy/.gem/ruby/1.8/bin
      0 * * * * /var/www/apps/tuexplore/current/script/runner -e production "Weather.cache_remote"

    This doesn't work. Calling the weather model method from a controller works fine, but I need to schedule it hourly. The cron task causes a "Cache write: weather_data" entry to appear in the production log, but the data isn't stored or output to the page. Additional information: I can log into the production console, run Weather.cache_remote, and then read the data from the Rails cache. I'd be really appreciative if someone could point out the error of my ways. If further explanation is needed, please ask. Thanks in advance for any pointers.

    Read the article
