Search Results

Search found 11697 results on 468 pages for 'requires sense of humor'.

Page 56 of 468

  • belongs_to with multiple models

    - by julie p
    Hi there! I am a Rails noob and have a question. I have a feed aggregator that is organized by this general concept: Feed Category (books, electronics, etc.), Feed Site Section (home page, books page, etc.), Feed (the feed itself), and Feed Entry. So:

        class Category < ActiveRecord::Base
          has_many :feeds
          has_many :feed_entries, :through => :feeds, :limit => 5
          validates_presence_of :name
          attr_accessible :name, :id
        end

        class Section < ActiveRecord::Base
          has_many :feeds
          has_many :feed_entries, :through => :feeds, :limit => 5
          attr_accessible :name, :id
        end

        class Feed < ActiveRecord::Base
          belongs_to :categories
          belongs_to :sections
          has_many :feed_entries
          validates_presence_of :name, :feed_url
          attr_accessible :name, :feed_url, :category_id, :section_id
        end

        class FeedEntry < ActiveRecord::Base
          belongs_to :feed
          belongs_to :category
          belongs_to :section
          validates_presence_of :title, :url
        end

    Make sense? Now, in my index page, I want to basically say: if you are in the Category Books, on the Home Page Section, give me the feed entries grouped by Feed. In my controller:

        def index
          @section = Section.find_by_name("Home Page")
          @books = Category.find_by_name("Books")
        end

    In my view:

        <%= render :partial => 'feed_list', :locals => {:feed_group => @books.feeds} -%>

    This partial will spit out the markup for each feed entry in the @books collection of Feeds. Now what I need to do is somehow combine the @books with the @section. I tried this:

        <%= render :partial => 'feed_list', :locals => {:feed_group => @books.feeds(:section_id => @section.id)} -%>

    But it isn't limiting by the section ID. I've confirmed the section ID by using the same code in the console. Make sense? Any advice? Thanks!
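
    A hedged aside (not from the original post): in Rails, the argument to an association reader like feeds is the force_reload flag, not a conditions hash, which is why the :section_id option is silently ignored. Note also that belongs_to takes a singular name, so Feed would normally declare belongs_to :category and belongs_to :section. A minimal Rails 2.x-era sketch of filtering the association explicitly, reusing the question's names:

        # Controller: feeds in the Books category that belong to the Home Page section.
        def index
          @section = Section.find_by_name("Home Page")
          @books   = Category.find_by_name("Books")
          @feeds   = @books.feeds.find(:all, :conditions => { :section_id => @section.id })
        end

        # View: pass the pre-filtered collection to the partial.
        # <%= render :partial => 'feed_list', :locals => {:feed_group => @feeds} -%>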

    Read the article

  • Hotkeys in webapps

    - by Johoo
    When creating webapps, are there any guidelines on which keys you can use for your own hotkeys without overriding too many of the browser's default hotkeys? For example, I might want a custom copy command for copying entire sets of data that only makes sense for my program, instead of just text. The logical combination for this would be Ctrl+C, but that would destroy the default copy hotkey for normal text.

    One solution I was thinking about is to only catch the hotkey when it "makes sense", but when you use some advanced custom selection it might be hard to differentiate whether your data is focused, whether text is selected, or both. Right now I am only using single keys as the hotkey, so just 'c' for the example above, and this seems to be what most other sites are doing too. The problem is that if you have text input this doesn't work so well. Is this the best solution?

    To clarify, I'm talking about advanced webapps that behave more like normal programs, not just websites presenting information (though I think these guidelines would be valid for both cases). So for the copy example it might not be a big deal if you can't copy the text in the menu, but when Ctrl+Tab, Alt+D or Ctrl+E doesn't work I would be really pissed, cough Flash cough.
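
    A hedged sketch of the "catch it only when it makes sense" branch (not from the original post; appHasDataSelection and copySelectedRows are hypothetical app functions):

        // Intercept Ctrl+C only when the app's own data selection is active and
        // no ordinary text is selected; otherwise fall through to the browser.
        document.addEventListener('keydown', function (e) {
          var textSelected = window.getSelection && String(window.getSelection()) !== '';
          if (e.ctrlKey && e.key === 'c' && appHasDataSelection() && !textSelected) {
            e.preventDefault();   // suppress the browser's default copy
            copySelectedRows();   // hypothetical app-specific copy
          }
        });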

    Read the article

  • Vertical crosshair repositioning on Google Map after map resize issue

    - by joe
    I use the following function to add a crosshair to my Google Map. It works great for horizontal resizing, but I can not figure out how to make it work for vertical. After resizing the map, it requires the user to either zoom in or out one level, upon which the crosshair snaps to center.

        var crosshairsSize = 17;

        GMap2.prototype.addCrosshairs = function () {
          var container = this.getContainer();
          if (this.crosshairs) { $(this.crosshairs).remove(); }
          var crosshairs = document.createElement("img");
          crosshairs.src = '../images/crosshair2.gif';
          crosshairs.style.width = crosshairsSize + 'px';
          crosshairs.style.height = crosshairsSize + 'px';
          crosshairs.style.border = '0';
          crosshairs.style.position = 'relative';
          crosshairs.style.top = ((parseInt(container.clientHeight) - crosshairsSize) / 2) + 'px';
          crosshairs.style.left = "0px"; // The map is centered so 0 will do
          crosshairs.style.zIndex = '500';
          container.appendChild(crosshairs);
          this.crosshairs = crosshairs;
          return crosshairs;
        };

    I use map.checkResize(); and the map is correct, it's just the crosshair that is off. I've tried firing a JavaScript zoom in one level but it doesn't work like zooming in with the mouse. It's only zooming in/out with the mouse scroll wheel too... clicking and dragging the zoom slider doesn't work, so somehow it's liking the mouse scroll wheel. Firing addCrosshairs(); after resize makes no difference. It places the new crosshair in the old spot and still requires a mouse scroll zoom.
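
    A hedged guess at a fix (not from the original post): because the crosshair is positioned relatively against content that has already flowed, re-adding it after a resize can compute the offset from a stale layout. Repositioning the existing element right after checkResize() sidesteps that:

        // Sketch: call this right after map.checkResize() instead of re-adding.
        GMap2.prototype.repositionCrosshairs = function () {
          if (!this.crosshairs) { return; }
          var container = this.getContainer();
          this.crosshairs.style.top =
            ((parseInt(container.clientHeight, 10) - crosshairsSize) / 2) + 'px';
        };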

    Read the article

  • Authorization and jQuery dialog problem

    - by bbrepols
    Hi, I have a little problem with a jQuery dialog for an action that requires a role. In my example, the user can click on a delete button and must confirm the action. In my controller, the Delete action requires a role; if the user is in the required role, the object is deleted. The problem: how to alert the user if

    * the element was deleted (redirect to the Index view)
    * there was an error (alert with the message)
    * he doesn't have the rights to delete (alert with the message)

    Before using the authorize filter, the delete action returned a JSON with a Boolean that indicates if there was an error, an URL to redirect to on success, and a message to alert on error. As I can't return a JSON from my filter, I created another method with the authorize filter that returns a partial view with the confirm content. If the user doesn't have the rights, the filter returns a partial view with an unauthorized exception content. The problem: how to distinguish which partial view was returned. When I create the dialog, I need to know, for the buttons function. Thanks!
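
    A hedged sketch of one way out (ASP.NET MVC assumed; names are made up): override the authorize attribute so AJAX requests get JSON back instead of a view, letting the client branch on a flag rather than guessing which partial it received:

        // Returns { success = false, message = ... } for rejected AJAX calls.
        public class AjaxAuthorizeAttribute : AuthorizeAttribute
        {
            protected override void HandleUnauthorizedRequest(AuthorizationContext filterContext)
            {
                if (filterContext.HttpContext.Request.IsAjaxRequest())
                {
                    filterContext.Result = new JsonResult
                    {
                        Data = new { success = false, message = "You are not allowed to delete this item." },
                        JsonRequestBehavior = JsonRequestBehavior.AllowGet
                    };
                }
                else
                {
                    base.HandleUnauthorizedRequest(filterContext);
                }
            }
        }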

    Read the article

  • Multiple threads modifying a collection in Java?

    - by posdef
    Hi, the project I am working on requires a whole bunch of queries against a database. In principle there are two types of queries I am using:

    I. Read from an Excel file, check a couple of parameters, and do a query for hits in the database. These hits are then registered as a series of custom classes. Any hit may (and most likely will) occur more than once, so this part of the code checks and updates the occurrence in a custom list implementation that extends ArrayList.

    II. For each hit found, do a detail query and parse the output, so that the classes created in (I) get detailed info.

    I figured I would use multiple threads to optimize time-wise. However, I can't really come up with a good way to solve the problem that occurs with the collection these items are stored in. To elaborate a little bit: throughout the execution, objects are supposed to be modified by both (I) and (II). I deliberately didn't c/p any code, as it would be big chunks of code to make any sense. I hope it makes some sense with the description above. Thanks,
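
    A hedged sketch (not from the original post) of one standard shape for this: keep the shared registry in a concurrent map so threads from both phases can record and enrich hits without explicit locking:

        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.atomic.AtomicInteger;

        // Hypothetical registry keyed by whatever identifies a hit.
        class HitRegistry {
            private final ConcurrentHashMap<String, AtomicInteger> occurrences =
                    new ConcurrentHashMap<String, AtomicInteger>();

            // Phase I: safe to call from many threads at once.
            void record(String hitId) {
                occurrences.putIfAbsent(hitId, new AtomicInteger(0));
                occurrences.get(hitId).incrementAndGet();
            }

            int count(String hitId) {
                AtomicInteger n = occurrences.get(hitId);
                return n == null ? 0 : n.get();
            }
        }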

    Read the article

  • Location redirect - is tracking possible?

    - by Gerald Ferreira
    Hi there, I was wondering if someone can help me. I have the following script that redirects users to an affiliate link when they click on a banner.

        <?php
        $targets = array(
            'site1' => 'http://www.site1.com/',
            'site2' => 'http://www.site2.com/',
            'site3' => 'http://www.site3.com/',
            'site4' => 'http://www.site4.com/',
        );

        if (isset($targets[$_GET['id']])) {
            header('Location: '.$targets[$_GET['id']]);
            exit;
        }
        ?>

    Is it possible to track when a user hits the banner, telling me the referer site as well as the IP address of the person clicking on the banner? Hmmmm, something like pixel tracking? I have tried to add an iframe that does the tracking but it creates an error. Hope it makes sense. Thanks!

    This is more or less how I would have done it in ASP:

        <%
        var Command1 = Server.CreateObject("ADODB.Command");
        Command1.ActiveConnection = MM_cs_stats_STRING;
        Command1.CommandText = "INSERT INTO stats.g_stats (g_stats_ip, g_stats_referer) VALUES (?, ? ) ";
        Command1.Parameters.Append(Command1.CreateParameter("varg_stats_ip", 200, 1, 20, (String(Request.ServerVariables("REMOTE_ADDR")) != "undefined" && String(Request.ServerVariables("REMOTE_ADDR")) != "") ? String(Request.ServerVariables("REMOTE_ADDR")) : String(Command1__varg_stats_ip)));
        Command1.Parameters.Append(Command1.CreateParameter("varg_stats_referer", 200, 1, 255, (String(Request.ServerVariables("HTTP_REFERER")) != "undefined" && String(Request.ServerVariables("HTTP_REFERER")) != "") ? String(Request.ServerVariables("HTTP_REFERER")) : String(Command1__varg_stats_referer)));
        Command1.CommandType = 1;
        Command1.CommandTimeout = 0;
        Command1.Prepared = true;
        Command1.Execute();
        %>

    I am not sure how to do it in PHP. Unfortunately for me, the hosting only supports PHP, so I am more or less clueless on how to do it. I was thinking if I can somehow call a picture I could do it with pixel tracking in another ASP page, on another server. Hope this makes better sense.
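
    A hedged sketch (not from the original post; the mysqli connection details and the table layout mirroring the ASP example are assumptions): log the hit server-side right before issuing the redirect, which avoids the iframe entirely:

        <?php
        // Log referer + IP, then redirect. Connection details are placeholders.
        $db = new mysqli('localhost', 'user', 'pass', 'stats');
        $ip = isset($_SERVER['REMOTE_ADDR']) ? $_SERVER['REMOTE_ADDR'] : '';
        $referer = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';

        $stmt = $db->prepare('INSERT INTO g_stats (g_stats_ip, g_stats_referer) VALUES (?, ?)');
        $stmt->bind_param('ss', $ip, $referer);
        $stmt->execute();

        if (isset($targets[$_GET['id']])) {
            header('Location: ' . $targets[$_GET['id']]);
            exit;
        }
        ?>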

    Read the article

  • Why overload true and false instead of defining bool operator?

    - by Joe Enos
    I've been reading about overloading true and false in C#, and I think I understand the basic difference between this and defining a bool operator. The example I see around is something like:

        public static bool operator true(Foo foo) { return (foo.PropA > 0); }
        public static bool operator false(Foo foo) { return (foo.PropA <= 0); }

    To me, this is the same as saying:

        public static implicit operator bool(Foo foo) { return (foo.PropA > 0); }

    The difference, as far as I can tell, is that by defining true and false separately, you can have an object that is both true and false, or neither true nor false:

        public static bool operator true(Foo foo) { return true; }
        public static bool operator false(Foo foo) { return true; }

        // or

        public static bool operator true(Foo foo) { return false; }
        public static bool operator false(Foo foo) { return false; }

    I'm sure there's a reason this is allowed, but I just can't think of what it is. To me, if you want an object to be able to be converted to true or false, a single bool operator makes the most sense. Can anyone give me a scenario where it makes sense to do it the other way? Thanks
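
    A hedged aside (the classic textbook answer, not from the original post): the paired operators exist for three-valued, SQL-style logic, where a value can be neither definitely true nor definitely false, and for enabling short-circuit && on user types (the compiler builds && from operator false plus operator &). A minimal sketch:

        // Three-valued boolean: True, False, or Unknown (e.g. from a NULL column).
        public struct SqlBool
        {
            private readonly int value; // 1 true, 0 false, -1 unknown
            private SqlBool(int v) { value = v; }

            public static readonly SqlBool True = new SqlBool(1);
            public static readonly SqlBool False = new SqlBool(0);
            public static readonly SqlBool Unknown = new SqlBool(-1);

            // `if (x)` asks operator true; Unknown answers no to both questions.
            public static bool operator true(SqlBool b) { return b.value == 1; }
            public static bool operator false(SqlBool b) { return b.value == 0; }

            // With operator false defined, `a && b` compiles to a short-circuiting
            // combination of operator false and operator &.
            public static SqlBool operator &(SqlBool a, SqlBool b)
            {
                if (a.value == 0 || b.value == 0) return False;
                if (a.value == 1 && b.value == 1) return True;
                return Unknown;
            }
        }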

    Read the article

  • How do I assert that two arbitrary type objects are equivalent, without requiring them to be equal?

    - by Tomas Lycken
    To accomplish this (but failing to do so) I'm reflecting over properties of an expected and actual object and making sure their values are equal. This works as expected as long as their properties are single objects, i.e. not lists, arrays, IEnumerable... If the property is a list of some sort, the test fails (on the Assert.AreEqual(...) inside the for loop).

        public void WithCorrectModel<TModelType>(TModelType expected, string error = "")
            where TModelType : class
        {
            var actual = _result.ViewData.Model as TModelType;
            Assert.IsNotNull(actual, error);
            Assert.IsInstanceOfType(actual, typeof(TModelType), error);

            foreach (var prop in typeof(TModelType).GetProperties())
            {
                Assert.AreEqual(prop.GetValue(expected, null), prop.GetValue(actual, null), error);
            }
        }

    If dealing with a list property, I would get the expected results if I instead used CollectionAssert.AreEquivalent(...), but that requires me to cast to ICollection, which in turn requires me to know the type listed, which I don't (want to). It also requires me to know which properties are list types, which I don't know how to. So, how should I assert that two objects of an arbitrary type are equivalent?

    Note: I specifically don't want to require them to be equal, since one comes from my tested object and one is built in my test class to have something to compare with.
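
    A hedged sketch of the loop body (not from the original post): detect enumerable properties at runtime and box their elements, so no element type needs to be known up front:

        using System.Collections;
        using System.Linq;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        foreach (var prop in typeof(TModelType).GetProperties())
        {
            var expectedValue = prop.GetValue(expected, null);
            var actualValue = prop.GetValue(actual, null);

            if (expectedValue is IEnumerable && !(expectedValue is string))
            {
                // List<object> satisfies CollectionAssert's ICollection parameter.
                CollectionAssert.AreEquivalent(
                    ((IEnumerable)expectedValue).Cast<object>().ToList(),
                    ((IEnumerable)actualValue).Cast<object>().ToList(),
                    error);
            }
            else
            {
                Assert.AreEqual(expectedValue, actualValue, error);
            }
        }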

    Read the article

  • Please help me with a Git workflow

    - by aaron carlino
    I'm an SVN user hoping to move to Git. I've been reading documentation and tutorials all day, and I still have unanswered questions. I don't know if this workflow will make sense, but here's my situation, and what I would like to get out of my workflow:

    - Multiple developers, all developing locally on their work stations
    - 3 versions of the website: Dev, Staging, Production

    Here's my dream: a developer works locally on his own branch, say "developer1", tests on his local machine, and commits his changes. Another developer can pull down those changes into his own branch (merge developer1 into developer2). When the work is ready to be seen by the public, I'd like to be able to "push" to Dev, Staging, or Production:

        git push origin staging

    or maybe:

        git merge developer1 staging

    I'm not sure. Like I said, I'm still new to it. Here are my main questions:

    - Do my websites (Dev, Staging, Production) have to be repositories? And do they have to be "bare" in order to be the recipients of new changes?
    - Do I want one repository or many, with several branches?
    - Does this even make sense, or am I on the wrong path?

    I've read a lot of tutorials, so I'm really hoping someone can just help me out with my specific situation. Thanks so much!
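
    A hedged sketch of one common arrangement (not from the original post; the paths are made up): a single bare "hub" repository that everyone pushes to, plus a non-bare working clone per environment that pulls the matching branch to deploy:

        # Shared hub: bare, because pushes into a checked-out branch are refused.
        git clone --bare /home/dev/project /srv/git/project.git

        # One working clone per environment.
        git clone /srv/git/project.git /var/www/staging
        cd /var/www/staging
        git checkout -b staging origin/staging   # track the staging branch

        # Deploying = updating that clone.
        git pull origin staging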

    Read the article

  • How useful is Turing completeness? Are neural nets Turing complete?

    - by Albert
    While reading some papers about the Turing completeness of recurrent neural nets (for example: "Turing computability with neural nets", Hava T. Siegelmann and Eduardo D. Sontag, 1991), I got the feeling that the proof given there was not really that practical. For example, the referenced paper needs a neural network whose neuron activity must be of infinite exactness (to reliably represent any rational number). Other proofs need a neural network of infinite size. Clearly, that is not really that practical.

    But I started to wonder now if it makes sense at all to ask for Turing completeness. By the strict definition, no computer system nowadays is Turing complete because none of them will be able to simulate the infinite tape. Interestingly, programming language specifications most often leave it open whether they are Turing complete or not. It all boils down to the question of whether they will always be able to allocate more memory and whether the function call stack size is infinite. Most specifications don't really specify this. Of course all available implementations are limited here, so all practical implementations of programming languages are not Turing complete. So, what you can say is that all computer systems are just equally powerful as finite state machines and not more.

    And that brings me to the question: how useful is the term Turing complete at all?

    And back to neural nets: for any practical implementation of a neural net (including our own brain), they will not be able to represent an infinite number of states, i.e. by the strict definition of Turing completeness, they are not Turing complete. So does the question of whether neural nets are Turing complete make sense at all? The question of whether they are as powerful as finite state machines was answered already much earlier (1954 by Minsky, the answer of course: yes) and also seems easier to answer. I.e., at least in theory, that was already the proof that they are as powerful as any computer.

    Read the article

  • Transitioning from FlexBuilder 3 to FlashBuilder 4 ... there and back again.

    - by Robusto
    It's growing pains time again. Some of our stuff requires FlashBuilder 4 and some still requires FlexBuilder 3. Both are installed OK, and no projects use both IDEs. The trouble is, when I go back to work on a FlexBuilder 3 project it takes freakin' forever to build and I get weird errors like these: [screenshot of an error dialog, not reproduced here, giving "java.lang.String" as the reason]

    This doesn't seem to cause any identifiable problems except to throw up a modal dialog at various points in the build process, forcing user interaction. But I do notice that memory fills up fast in FB3, and generally FB3 starts behaving strangely and ultimately quits once it gets up over 700MB.

    This is only a temporary bridge situation until we get all projects into FB4, but "temporary" could mean weeks if not months. Does anyone have any advice for how to get through this bridge period? Is there anything I can do to make these two IDEs work and play well together? Failing that, does anyone know what "java.lang.String" is the "reason" for the problem? Does Eclipse have a resource bundle somewhere that is getting corrupted when I go back and forth between the two?

    Read the article

  • ClassLoader exceptions being memoized

    - by Jim
    Hello, I am writing a classloader for a long-running server instance. If a user has not yet uploaded a class definition, I throw a ClassNotFoundException; seems reasonable. The problem is this: there are three classes (C1, C2, and C3). C1 depends on C2, C2 depends on C3. C1 and C2 are resolvable; C3 isn't (yet).

    1. C1 is loaded.
    2. C1 subsequently performs an action that requires C2, so C2 is loaded.
    3. C2 subsequently performs an action that requires C3, so the classloader attempts to load C3, but can't resolve it, and an exception is thrown.
    4. Now C3 is added to the classpath, and the process is restarted (starting from the originally-loaded C1).

    The issue is, C2 seems to remember that C3 couldn't be loaded, and doesn't bother asking the classloader to find the class... it just re-throws the memoized exception. Clearly I can't reload C1 or C2 because other classes may have linked to them (as C1 has already linked to C2). I tried throwing different types of errors, hoping the class might not memoize them. Unfortunately, no such luck. Is there a way to prevent the loaded class from binding to the exception? That is, I want the classloader to be allowed to keep trying if it didn't succeed the first time. Thanks!
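
    A hedged aside (not from the original post): the JVM specification requires a failed resolution to be recorded in the referencing class, so the memoization can't be switched off; the usual workaround is to make the classloader itself disposable and relink from scratch once the missing class appears. A minimal sketch with assumed names:

        import java.net.URL;
        import java.net.URLClassLoader;

        // classDir is wherever user-uploaded classes live; parent is the server's loader.
        URLClassLoader loader = new URLClassLoader(new URL[] { classDir.toURI().toURL() }, parent);
        Class<?> c1 = loader.loadClass("C1");      // links C1 -> C2, fails at C3

        // ...user uploads C3; discard the old loader and its poisoned link state...
        loader = new URLClassLoader(new URL[] { classDir.toURI().toURL() }, parent);
        Class<?> freshC1 = loader.loadClass("C1"); // C1 -> C2 -> C3 resolves cleanly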

    Read the article

  • Returning true or error message in Ruby

    - by seaneshbaugh
    I'm wondering if writing functions like this is considered good or bad form.

        def test(x)
          if x == 1
            return true
          else
            return "Error: x is not equal to one."
          end
        end

    And then to use it we do something like this:

        result = test(1)
        if result != true
          puts result
        end

        result = test(2)
        if result != true
          puts result
        end

    Which just displays the error message for the second call to test. I'm considering doing this because in a Rails project I'm working on, inside my controller code I make calls to a model's instance methods, and if something goes wrong I want the model to return the error message to the controller; the controller takes that error message, puts it in the flash, and redirects. Kinda like this:

        def create
          @item = Item.new(params[:item])
          if @item.valid?
            result = @item.save_image(params[:attachment][:file])
            if result != true
              flash[:notice] = result
              redirect_to(new_item_url) and return
            end
            # and so on...

    That way I'm not constructing the error messages in the controller, merely passing them along, because I really don't want the controller to be concerned with what the save_image method itself does, just whether or not it worked. It makes sense to me, but I'm curious as to whether or not this is considered a good or bad way of writing methods. Keep in mind I'm asking this in the most general sense pertaining mostly to Ruby; it just happens that I'm doing this in a Rails project. The actual logic of the controller really isn't my concern.
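
    A hedged alternative (not from the original post): the more conventional Rails shape is to return a plain boolean and hang the message on the model's errors collection, which keeps the API uniform with save and valid?:

        # In the model: report failure through errors, not the return value.
        def save_image(file)
          if file.blank?
            errors.add(:base, "Image could not be saved.")
            return false
          end
          # ... write the file ...
          true
        end

        # In the controller:
        unless @item.save_image(params[:attachment][:file])
          flash[:notice] = @item.errors.full_messages.to_sentence
          redirect_to(new_item_url) and return
        end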

    Read the article

  • SQL Server performance optimization by removing print statements

    - by AG
    We're going through a round of SQL Server stored procedure optimizations. The one recommendation we've found that clearly applies for us is SET NOCOUNT ON at the top of each procedure. (Yes, I've seen the posts that point out issues with this depending on what client objects you run the stored procedures from, but these are not issues for us.)

    So now I'm just trying to add in a bit of common sense. If the benefit of SET NOCOUNT ON is simply to reduce network traffic by some small amount every time, wouldn't it also make sense to turn off all the PRINT statements we have in the stored procedures that we only use for debugging? I can't see how it can hurt performance. OTOH, it's a bit of a hassle to implement due to the fact that some of the print statements are the only thing within else clauses, so you can't just always comment out the one line and be done.

    The change carries some amount of risk, so I don't want to do it if it isn't going to actually help. But I don't see eliminating print statements mentioned anywhere in articles on optimization. Is that because it is so obvious no one bothers to mention it?
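
    A hedged sketch (not from the original post) of a middle road: gate the PRINTs behind a parameter instead of deleting them, which avoids the empty-ELSE refactoring problem and keeps the diagnostics available:

        -- Diagnostics become opt-in; default behavior is silent.
        CREATE PROCEDURE dbo.DoWork
            @Debug bit = 0
        AS
        BEGIN
            SET NOCOUNT ON;

            IF @Debug = 1 PRINT 'DoWork: starting...';
            -- ... body ...
            IF @Debug = 1 PRINT 'DoWork: done.';
        END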

    Read the article

  • Using unset member variables within a class or struct

    - by Doug Kavendek
    It's pretty nice to catch some really obvious errors when using unset local variables or when accessing a class or struct's members directly prior to initializing them. In Visual Studio 2008 you get an "uninitialized local variable used" warning at compile time, and a run-time check failure at the point of access when debugging. However, if you access an uninitialized struct's member variable through one of its functions, you don't get any warnings or assertions. Obviously the easiest solution is don't do that, but nobody's perfect. For example:

        struct Test
        {
            float GetMember() const { return member; }
            float member;
        };

        Test test;
        float f1 = test.member;      // Raises warning, asserts in VS debugger at runtime
        float f2 = test.GetMember(); // No problem, just keeps on going

    This surprised me, but it makes some sense -- the compiler can't assume calling a function on an unused struct is an error, or how else would you initialize or construct it? And anything fancier quickly brings up so many other complications that it makes sense that it wouldn't bother classifying which functions are OK to call and when, especially just as a debugging help. I know I can set up my own assertions or error checking within the class itself, but that can complicate some simpler structs.

    Still, it would seem like within the context of the function call, wouldn't it know inside GetMember() that member wasn't initialized yet? I'm assuming it's not only relying on static compile-time deduction, given the Run-Time Check Failure #3 it raises during execution, so based on my current understanding it would seem reasonable for the same checks to apply. Is this just a limitation of this specific compiler/debugger (Visual Studio 2008), or more tied to how C++ works?
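
    A hedged sketch (not from the original post) of the do-it-yourself check mentioned above, kept cheap for simple structs by compiling it away in release builds:

        #include <cassert>
        #include <limits>

        struct Test
        {
        #ifndef NDEBUG
            // Debug builds poison the value so use-before-init is detectable.
            Test() : member(std::numeric_limits<float>::quiet_NaN()) {}
        #endif
            float GetMember() const
            {
                assert(member == member && "member used before initialization"); // NaN != NaN
                return member;
            }
            float member;
        };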

    Read the article

  • ApplicationPoolIdentity permissions on Temporary Asp.Net files

    - by Anton
    Hi all, at work I am struggling a bit with the following situation: we have a web application that runs on a Windows Server 2008 64-bit machine. The app's application pool is running under the ApplicationPoolIdentity and configured for .NET 2 and Classic pipeline mode. This works fine up to the moment that XML serialization requires creation of serializer assemblies, where MEF is being used to create a collection of known types. To remedy this I was hoping that granting the ApplicationPoolIdentity rights to the ASP.NET Temporary Files directory would be enough, but alas... What I did was run the following command from a cmd prompt:

        icacls "c:\windows\microsoft.net\framework64\v2.0.50727\Temporary ASP.NET Files" /grant "IIS AppPool\MyAppPool":(M)

    Obviously this did not work, otherwise you would not be reading this :) Strange thing is that whenever I grant the Users or, even more specific, the Authenticated Users group those permissions, it works. What's weird as well (in my eyes) is that before I started granting access, the ApplicationPoolIdentity was already a member of IIS_IUSRS, which does have Modify rights for the temporary ASP files directory.

    And now I'm left wondering why this situation requires Modify rights for the Authenticated Users group. I thought it could be because the app pool account was missing additional rights (googling for this returned some results, so I tried those), but granting the ApplicationPoolIdentity modification rights to the Windows\Temp directory and/or the application directory itself did not fix it. For now we have a workaround, but I hate that I don't know what is exactly going on here, so I was hoping any of you guys could shed some light on this. Thanx in advance!
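
    A hedged thing to rule out (an assumption, not from the original post): a /grant of (M) alone applies only to the directory itself; without inheritance flags, the per-application folders created later under Temporary ASP.NET Files never receive the grant. Adding object- and container-inherit makes the ACE flow down:

        rem (OI)(CI)M = object inherit + container inherit + modify
        icacls "c:\windows\microsoft.net\framework64\v2.0.50727\Temporary ASP.NET Files" /grant "IIS AppPool\MyAppPool":(OI)(CI)M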

    Read the article

  • How to get root as '/' with Kohana3, base_url and mod rewrite.

    - by Drew
    Hi all! I've only just started using Kohana (3 hours ago), and so far it's blown my socks off (and I'm wearing slippers, so that's quite impressive). Right now, I have a controller 'Controller_FrontPage' with associated views and models, and I'm trying to get it accessible from the root of my website (e.g. http://www.mysite.com/). If I edit the default controller in the bootstrap from:

        Route::set('default', '(<controller>(/<action>(/<id>)))')
            ->defaults(array(
                'controller' => 'welcome',
                'action'     => 'index',
            ));

    to 'controller' => '', I get an error, could not find controller_ (which makes sense), and if I change it to 'controller' => '/', I get an error, could not find controller_/ (which also makes sense). If I set 'controller' => 'FrontPage', everything works fine, but all my links (html::anchor(...)) point to http://www.mysite.com/FrontPage/*. Is there a way to have all the anchors point to http://www.mysite.com/*?
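
    A hedged sketch (not from the original post): keeping FrontPage as the default controller is right; the trick is to stop building anchors from the controller name, since an empty URI falls through to the default route anyway:

        // bootstrap.php: FrontPage answers at the site root.
        Route::set('default', '(<controller>(/<action>(/<id>)))')
            ->defaults(array(
                'controller' => 'frontpage',
                'action'     => 'index',
            ));

        // In a view: anchor against the root, not the controller name.
        echo html::anchor('', 'Home');   // -> http://www.mysite.com/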

    Read the article

  • Is my objective possible using WCF (and is it the right way to do things?)

    - by David
    I'm writing some software that modifies a Windows Server's configuration (things like MS-DNS, IIS, parts of the filesystem). My design has a server process that builds an in-memory object graph of the server configuration state and a client which requests this object graph. The server serializes the graph and sends it to the client (presumably using WCF); the client then makes changes to this graph and sends it back to the server. The server receives the graph and proceeds to make modifications to the server.

    However, I've learned that object-graph serialization in WCF isn't as simple as I first thought. My objects have a hierarchy, and many have parametrised constructors and immutable properties/fields. There are also numerous collections, arrays, and dictionaries. My understanding of WCF serialization is that it requires use of either the XmlSerializer or the DataContractSerializer, but DCS places restrictions on the design of my object graph (immutable data seems right out; it also requires parameterless constructors). I understand XmlSerializer lets me use my own classes provided they implement ISerializable and have the de-serializer constructor. That is fine by me.

    I spoke to a friend of mine about this, and he advocates going for a Data Transfer Object-only route, where I'd have to maintain a separate DataContract object graph for the transport of data and re-implement my server objects on the client. Another friend of mine said that because my service only has two operations ("GetServerConfiguration" and "PutServerConfiguration") it might be worthwhile just skipping WCF entirely and implementing my own server that uses sockets.

    So my questions are:

    1. Has anyone faced a similar problem before, and if so, are there better approaches?
    2. Is it wise to send an entire object graph to the client for processing? Should I instead break it down so that the client requests a part of the object graph as it needs it, and sends only bits that have changed (thus reducing concurrency-related risks)?
    3. If sending the object graph down is the right way, is WCF the right tool?
    4. And if WCF is right, what's the best way to get WCF to serialize my object graph?
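
    A hedged aside (not from the original post): DataContractSerializer is less restrictive than it first appears. It can set non-public members, it does not run constructors on deserialization (it creates uninitialized instances), and [DataContract(IsReference = true)] preserves shared references and cycles in a graph. A minimal sketch with assumed type names:

        using System.Collections.Generic;
        using System.Runtime.Serialization;

        [DataContract(IsReference = true)]
        public class DnsZone
        {
            [DataMember] private string name;   // private field: still serialized
            [DataMember] public List<DnsRecord> Records { get; private set; }

            public DnsZone(string name) { this.name = name; Records = new List<DnsRecord>(); }
            public string Name { get { return name; } }   // immutable to callers
        }

        [DataContract(IsReference = true)]
        public class DnsRecord
        {
            [DataMember] public string Host { get; private set; }
            public DnsRecord(string host) { Host = host; }
        }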

    Read the article

  • How am I able to create a List<T> containing a generic interface?

    - by Conrad Clark
    I have a List which must contain IInteract objects. But IInteract is a generic interface which requires 2 type arguments. My main idea is to iterate through a list of objects and "Interact" one with another if they didn't interact yet. So I have this object:

        List<IObject> WorldObjects = new List<IObject>();

    and this one:

        private List<IInteract> = new List<IInteract>();

    Except I can't compile the last line because IInteract requires 2 type arguments. But I don't know what the arguments are until I add them. I could add interactions between objects of type A and A... or objects of type B and C. I want to create "Interaction" classes which do something with the "acting" object and the "target" object, but I want them to be independent from the objects... so I could add an interaction between, for instance, "SuperUltraClass" and... an "integer". Am I using the wrong approach?
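
    A hedged sketch of the usual pattern (not from the original post): declare a non-generic base interface for the list, and let the generic interface extend it:

        using System.Collections.Generic;

        // Non-generic view: what the world's list stores.
        public interface IInteract
        {
            void Interact(object actor, object target);
        }

        // Typed view for implementers.
        public interface IInteract<TActor, TTarget> : IInteract
        {
            void Interact(TActor actor, TTarget target);
        }

        // Example implementation bridging the two:
        public class SuperUltraVsInt : IInteract<SuperUltraClass, int>
        {
            public void Interact(SuperUltraClass actor, int target) { /* ... */ }
            void IInteract.Interact(object actor, object target)
            {
                Interact((SuperUltraClass)actor, (int)target);
            }
        }

        // Now this compiles:
        // private List<IInteract> interactions = new List<IInteract>();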

    Read the article

  • getting data from tableviewcell to 2nd view

    - by chubsta
    I am very new to this and am trying to learn by creating a few little apps for myself. I have a navigation-based app where the user taps the row to select a film title; I then want the second view to show details of the film. Thanks to a very helpful person here I am getting the results of the row pressed as 'rowTitle', as follows:

        - (void)tableView:(UITableView *)tableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath {
            NSString *key = [keys objectAtIndex:indexPath.section];
            NSArray *nameSection = [names objectForKey:key];
            NSString *rowTitle = [nameSection objectAtIndex:indexPath.row];
            NSLog(@"rowTitle = %@", rowTitle);
            [tableView deselectRowAtIndexPath:indexPath animated:YES];
        }

    I am, however, struggling to make the data at 'rowTitle' available to the second view. Basically, if I can get the info -- for example rowTitle is "aliens2" -- I want to be able to add a new extension to the end of the string returned by 'rowTitle' in order to point to an image (if that makes sense) in the second view... something like:

        tempImageName = [** this is where the info from rowTitle needs to be, I suppose **];
        tempImageType = @".png";
        finalImageName = [tempImageName stringByAppendingString:tempImageType];

    Does this make sense to anyone? (Apologies if it doesn't -- I know what I want, but how to explain it is a little more awkward!) Thanks again for any help anyone can give (and any help as to formatting these questions would be useful too, obviously!)!
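
    A hedged sketch of the hand-off (not from the original post; DetailViewController and its properties are assumed names): push the detail view and give it the title, letting it build the image name itself:

        // In didSelectRowAtIndexPath:, after rowTitle is known.
        DetailViewController *detail =
            [[DetailViewController alloc] initWithNibName:@"DetailView" bundle:nil];
        detail.rowTitle = rowTitle;
        [self.navigationController pushViewController:detail animated:YES];
        [detail release];

        // In DetailViewController's viewDidLoad:
        NSString *finalImageName = [self.rowTitle stringByAppendingString:@".png"];
        self.imageView.image = [UIImage imageNamed:finalImageName];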

    Read the article

  • Checking for nil in view in Ruby on Rails

    - by seaneshbaugh
    I've been working with Rails for a while now, and one thing I find myself constantly doing is checking to see if some attribute or object is nil in my view code before I display it. I'm starting to wonder if this is always the best idea. My rationale so far has been that since my application(s) rely on user input, unexpected things can occur. If I've learned one thing from programming in general, it's that users inputting things the programmer didn't think of is one of the biggest sources of run-time errors. By checking for nil values I'm hoping to sidestep that and have my views gracefully handle the problem.

    The thing is, though, I typically (for various reasons) have similar nil or invalid value checks in either my model or controller code. I wouldn't call it code duplication in the strictest sense, but it just doesn't seem very DRY. If I've already checked for nil objects in my controller, is it okay if my view just assumes the object truly isn't nil? For attributes that can be nil that are displayed, it makes sense to me to check every time, but for the objects themselves I'm not sure what is the best practice. Here's a simplified but typical example of what I'm talking about:

    controller code

        def show
          @item = Item.find_by_id(params[:id])
          @folders = Folder.find(:all, :order => 'display_order')
          if @item == nil or @item.folder == nil
            redirect_to(root_url) and return
          end
        end

    view code

        <% if @item != nil %>
          display the item's attributes here
          <% if @item.folder != nil %>
            <%= link_to @item.folder.name, folder_path(@item.folder) %>
          <% end %>
        <% else %>
          Oops! Looks like something went horribly wrong!
        <% end %>

    Is this a good idea or is it just silly?
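
    A hedged aside (not from the original post): given the controller's redirect guarantees @item and @item.folder, the view checks above are dead code; where a nil genuinely can appear, Ruby's guard idioms keep it to one line:

        <%# The controller already redirected when @item or its folder was nil. %>
        <%= link_to @item.folder.name, folder_path(@item.folder) %>

        <%# Where nil really is possible, guard inline instead of nesting ifs: %>
        <%= link_to(@item.folder.name, folder_path(@item.folder)) if @item.folder %>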

    Read the article

  • Advantages of Using Linux as primary developer desktop

    - by Nick N
    I want to get some input on some of the advantages of why developers should and need to use Linux as their primary development desktop on a daily basis, as opposed to using Windows. This is particularly helpful when your Dev, QA, and Production environments are Linux. The current analogy that I keep coming back to is: if I build my demo car as a Ford Escort, but my project car is a Ford Mustang, it doesn't make sense at all. I'm currently at an IT department that allows dual boot with Windows and Linux; some run Linux while the vast majority use Windows.

    Here are several advantages that I've come up with since using Linux as a primary desktop:

    * The same exact operating system as Dev, QA, and Production
    * The same scripts (*.sh) instead of maintaining both *.bat and *.sh. Somewhat mitigated by using cygwin, but still a bit different.
    * Team learns simple commands such as: cd, ls, cat, top
    * Team learns advanced commands like: pkill, pgrep, chmod, su, sudo, ssh, scp
    * Full access to installs typically for Linux, such as RPM and DEB installs, just like the target environments.

    The list could go on and on, but I want to get some feedback on anything that I may have missed, or even any disadvantages (of course there are some). To me it makes sense to migrate an entire team over to using Linux, and to use VirtualBox running Windows XP VMs to test functional items that 95% of the world uses. There is a similar but slightly different thread going on as well.

    Read the article

  • What scenarios are possible where the VS C# compiler would not compile a reference of a reference?

    - by SuperKing
    Hello, I'm probably asking this question wrong (and that may be why Google isn't helping), but here goes: in Visual Studio I am compiling a C# project (let's call it Project A, the startup project) which has a reference to Project B. Project B has a reference to Project C, so when A gets built, the DLL for B gets placed in the bin directory of A, as does the DLL for C (because B requires C, and A requires B).

    However, I have apparently made some change recently so that the DLL for Project C does not go into the bin directory of Project A when rebuilding the solution. I have no idea what I've done to make this happen. I have not modified the setup of the solution itself, and I have only added additional references to the project files. Code-wise, I have commented out most of the actual code in Project B that references classes in Project C, but did not remove the reference from the project itself (I don't think this matters).

    I was told that perhaps the C# compiler was optimizing somehow so that it was not building Project C, but really I'm out of ideas. I would think someone has run into something similar before. Any thoughts? Thanks!
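
    A hedged aside (an assumption consistent with the symptoms, not from the original post): the build copies indirect references only when B's compiled output actually uses a type from C, so commenting out that code is exactly the kind of change that can stop C's DLL from being copied. A common workaround is to leave one real (if trivial) use of C in B:

        // Somewhere in Project B: forces a metadata reference to Project C,
        // so its DLL is copied into A's bin. Type name is hypothetical.
        internal static class ProjectCAnchor
        {
            public static readonly System.Type Anchor = typeof(ProjectC.SomeClass);
        }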

    Read the article

  • Screen information while Windows system is locked (.NET)

    - by Matt
    We have a nightly process that updates applications on a user's PC, and that requires bringing the application down and back up again (not looking to get into changing that process). The problem is that we are building a Windows AppBar on launch, which requires a valid screen, and when the system is locked there isn't one in the Screen class. So none of the visual effects are enabled and it shows up real ugly.

    The only way we currently have around this is to detect a locked screen and just spin and wait until the user unlocks the desktop, then continue launching. Leaving it down isn't an option, as this is a key part of our user's workflow, and they expect it to be up and running if they left it that way the night before. Any ideas?? I can't seem to find the display information anywhere, but it has to be stored off someplace, since the user is still logged in.

    The contents of the Screen.AllScreens array:

    When locked:

        Device Name    : DISPLAY
        Primary        : True
        Bits Per Pixel : 0
        Bounds         : {X=-1280,Y=0,Width=2560,Height=1024}
        Working Area   : {X=0,Y=0,Width=1280,Height=1024}

    When unlocked:

        Device Name    : \\.\DISPLAY1
        Primary        : True
        Bits Per Pixel : 32
        Bounds         : {X=0,Y=0,Width=1280,Height=1024}
        Working Area   : {X=0,Y=0,Width=1280,Height=994}

        Device Name    : \\.\DISPLAY2
        Primary        : False
        Bits Per Pixel : 32
        Bounds         : {X=-1280,Y=0,Width=1280,Height=1024}
        Working Area   : {X=-1280,Y=0,Width=1280,Height=964}
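
    A hedged sketch (not from the original post) of one avenue worth testing: EnumDisplaySettings with ENUM_REGISTRY_SETTINGS reads the mode stored for the device rather than the live session state, so it may return real geometry while the desktop is locked:

        using System.Runtime.InteropServices;

        [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi)]
        struct DEVMODE
        {
            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 32)] public string dmDeviceName;
            public short dmSpecVersion, dmDriverVersion, dmSize, dmDriverExtra;
            public int dmFields;
            public int dmPositionX, dmPositionY, dmDisplayOrientation, dmDisplayFixedOutput;
            public short dmColor, dmDuplex, dmYResolution, dmTTOption, dmCollate;
            [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 32)] public string dmFormName;
            public short dmLogPixels;
            public int dmBitsPerPel, dmPelsWidth, dmPelsHeight, dmDisplayFlags, dmDisplayFrequency;
        }

        static class DisplayInfo
        {
            const int ENUM_REGISTRY_SETTINGS = -2;

            [DllImport("user32.dll")]
            static extern bool EnumDisplaySettings(string deviceName, int modeNum, ref DEVMODE devMode);

            // e.g. RegistrySettings(@"\\.\DISPLAY1").dmPelsWidth
            public static DEVMODE RegistrySettings(string deviceName)
            {
                var dm = new DEVMODE();
                dm.dmSize = (short)Marshal.SizeOf(typeof(DEVMODE));
                EnumDisplaySettings(deviceName, ENUM_REGISTRY_SETTINGS, ref dm);
                return dm;
            }
        }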

    Read the article
