Search Results

Search found 11759 results on 471 pages for 'isolation level'.


  • What is the appropriate granularity in building a ViewModel?

    - by JasCav
    I am working on a new project and, after seeing some of the difficulties of previous projects that didn't provide enough separation of view from their models (specifically using MVC - the models and views began to bleed into each other a bit), I wanted to use MVVM. I understand the basic concept, and I'm excited to start using it. However, one thing escapes me a bit - what data should be contained in the ViewModel? For example, if I am creating a ViewModel that will encompass two pieces of data so they can be edited in a form, do I capture it like this:

        public class PersonAddressViewModel
        {
            public Person Person { get; set; }
            public Address Address { get; set; }
        }

    or like this:

        public class PersonAddressViewModel
        {
            public string FirstName { get; set; }
            public string LastName { get; set; }
            public string StreetName { get; set; }
            // ...etc
        }

    To me, the first feels more correct for what we're attempting to do. If we were doing finer-grained forms (maybe all we were capturing was FirstName, LastName, and StreetAddress) then it might make more sense to go down to that level. But I feel the first is correct since we're capturing ALL Person data and ALL Address data in the form. It seems like it doesn't make sense (and would be a lot of extra work) to split things apart like that. I'd appreciate any insight.
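
    For reference, a common middle ground is a ViewModel that composes the models internally but exposes only the flat members the form binds to. The sketch below is only an illustration, not the asker's code; the Person and Address member names are assumed from the question.

        // Sketch only: composes the models but exposes flat, bindable properties.
        // Person/Address member names (FirstName, StreetName, ...) are assumptions.
        public class PersonAddressViewModel
        {
            private readonly Person _person;
            private readonly Address _address;

            public PersonAddressViewModel(Person person, Address address)
            {
                _person = person;
                _address = address;
            }

            public string FirstName
            {
                get { return _person.FirstName; }
                set { _person.FirstName = value; }
            }

            public string StreetName
            {
                get { return _address.StreetName; }
                set { _address.StreetName = value; }
            }

            // ...remaining pass-through properties for the other fields
        }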

    Read the article

  • Etiquette: Version bump my fork of opensource project?

    - by Ross
    This question is about etiquette and open source projects. I have forked an application from GitHub and added two new features. The first feature has been requested frequently elsewhere. I have added it; the code and implementation are clean (I think). The second feature is more of a hack. It will be of use to others, but the implementation is a little dirty in usage and more so in code. I need the feature, but I don't have the skills to implement it properly or to a level that could be considered a worthwhile contribution to the main project. How should the versioning work? Do I just bump up my version numbers care-free and push to my master branch? It is annoying not knowing which version is running, modified or original, when both have the same version number. But won't it be confusing when, months later, my GitHub page shows the same version number as the original even though the two are actually completely different? (I have made pull requests etc., but that is not the context of my question.) The project I have forked uses Ruby's jeweler, so it has a versioning format of: "Jeweler tracks the version of your project. It assumes you will be using a version in the format x.y.z. x is the 'major' version, y is the 'minor' version, and z is the patch version." Is this standard for other projects/languages too? Are my changes patches? Thanks

    Read the article

  • Protocol specification in XML

    - by Mathijs
    Is there a way to specify a packet-based protocol in XML, so (de)serialization can happen automatically? The context is as follows. I have a device that communicates through a serial port. It sends and receives a byte stream consisting of 'packets'. A packet is a collection of elementary data types and (sometimes) other packets. Some elements of packets are conditional; their inclusion depends on earlier elements. I have a C# application that communicates with this device. Naturally, I don't want to work at the byte level throughout my application; I want to separate the protocol from my application code. Therefore I need to translate the byte stream into structures (classes). Currently I have implemented the protocol in C# by defining a class for each packet. These classes define the order and type of elements for each packet. Making class members conditional is difficult, so protocol information ends up in functions. I imagine XML that looks like this (note that my experience designing XML is limited):

        <packet>
          <field name="Author" type="int32" />
          <field name="Nickname" type="bytes" size="4">
            <condition type="range">
              <field>Author</field>
              <min>3</min>
              <max>6</max>
            </condition>
          </field>
        </packet>

    .NET has something called a 'binary serializer', but I don't think that's what I'm looking for. Is there a way to separate protocol and code, even if packets 'include' other packets and have conditional elements?
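
    For illustration, a definition like the one above can be interpreted at runtime with LINQ to XML driving a BinaryReader. The sketch below is an assumption about how such a reader might look - it handles only the int32/bytes types and the "range" condition shown in the question, and is not an existing library.

        // Minimal sketch: interprets the <packet> definition above while reading
        // from a BinaryReader. Only int32, fixed-size bytes and the "range"
        // condition are handled; everything here is illustrative, not a real API.
        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Xml.Linq;

        static class PacketReader
        {
            public static Dictionary<string, object> Read(XElement packet, BinaryReader reader)
            {
                var values = new Dictionary<string, object>();
                foreach (var field in packet.Elements("field"))
                {
                    var condition = field.Element("condition");
                    if (condition != null && !Holds(condition, values))
                        continue; // conditional field is absent in this packet

                    var name = (string)field.Attribute("name");
                    switch ((string)field.Attribute("type"))
                    {
                        case "int32":
                            values[name] = reader.ReadInt32();
                            break;
                        case "bytes":
                            values[name] = reader.ReadBytes((int)field.Attribute("size"));
                            break;
                        default:
                            throw new NotSupportedException("unknown field type");
                    }
                }
                return values;
            }

            static bool Holds(XElement condition, Dictionary<string, object> values)
            {
                // "range" condition: the named earlier field must lie within [min, max]
                var value = Convert.ToInt64(values[(string)condition.Element("field")]);
                return value >= (long)condition.Element("min")
                    && value <= (long)condition.Element("max");
            }
        }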

    Read the article

  • R: building a simple command line plotting tool/Capturing window close events

    - by user275455
    I am trying to use R within a script that will act as a simple command-line plot tool, i.e. the user pipes in a CSV file and gets a plot. I can get to R fine and get the plot to display through various temp-file machinations, but I have hit a roadblock: I cannot figure out how to keep R running until the user closes the window. If I plot and exit, the plot disappears immediately. If I plot and use some kind of infinite loop, the user cannot close the plot; he must exit with an interrupt, which I don't like. I see there is a getGraphicsEvent function, but it claims that the device (X11) is not supported. Anyway, it doesn't appear to actually support an onClose event, only onMouseDown. Any ideas on how to solve this? Edit: thanks to Dirk for the advice to check out the tk interface. Here is the test code that works:

        require(tcltk)
        library(tkrplot)
        ## function to display the plot, called by tkrplot and embedded in a window
        plotIt <- function() {
          plot(x=1:10, y=1:10)
        }
        ## create top-level window
        tt <- tktoplevel()
        ## variable to wait on like a condition variable, to be set by the event handler
        done <- tclVar(0)
        ## bind to the window destroy event, set the done variable when destroyed
        tkbind(tt, "<Destroy>", function() tclvalue(done) <- 1)
        ## have tkrplot embed the plot window, then realize it with tkgrid
        tkgrid(tkrplot(tt, plotIt))
        ## wait until done is true
        tkwait.variable(done)

    Read the article

  • importing symbols from python package into caller's namespace

    - by Paul C
    I have a little internal DSL written in a single Python file that has grown to a point where I would like to split the contents across a number of different directories + files. The new directory structure currently looks like this:

        dsl/
            __init__.py
            types/
                __init__.py
                type1.py
                type2.py

    and each type file contains a class (e.g. Type1). My problem is that I would like to keep the implementation of code that uses this DSL as simple as possible, something like:

        import dsl
        x = Type1()
        ...

    This means that all of the important symbols should be available directly in the user's namespace. I have tried updating the top-level __init__.py file to import the relevant symbols:

        from types.type1 import Type1
        from types.type2 import Type2
        ...
        print globals()

    The output shows that the symbols are imported correctly, but they still aren't present in the caller's code (the code that's doing the import dsl). I think that the problem is that the symbols are actually being imported into the 'dsl' namespace. How can I change this so that the classes are also directly available in the caller's namespace?

    Read the article

  • Re-authentication required for registered-path links (to ASP.NET site) coming to IE from PowerPoint

    - by Daniel Halsey
    We're using URL routing based on Phil Haack's example, with config modifications based on MSDN Library article #CC668202, to provide "shareable" links for an ASP.NET forms site, and have run into a strange issue: for users attempting to open links from PowerPoint presentations, and who have IE set as their default browser, using one of these links forces (forms-based) re-authentication, even in the same browser instance with a live session. What we know:

    - The session is still alive (the page returns information for the currently logged-in user; confirmed via debug watches).
    - This doesn't happen with other browsers (FF, Chrome) or with other programs (Notepad++) as the URL source.
    - We do not have a default path set, as this caused issues with root path handling at initial login.
    - This primarily happens with PowerPoint, but will also happen in Word and OCS.
    - On some machines, even after changing the default browser, Office apps will continue to use IE for these links, forcing this error. (A potential registry fix for this failed, but even if it had worked, we can't control default browser choice for our users.)

    We can't figure out if this is an Office oddity or is being caused by our decision to use app-level URL routing (rather than IIS rewriting). Has anyone else encountered this and found a solution?

    Read the article

  • Entity framework with Linq to Entities performance

    - by mare
    If I have a static method like this:

        public static string GetTicClassificationTitle(string classI, string classII, string classIII)
        {
            using (TicDatabaseEntities ticdb = new TicDatabaseEntities())
            {
                var result = from classes in ticdb.Classifications
                             where classes.ClassI == classI
                             where classes.ClassII == classII
                             where classes.ClassIII == classIII
                             select classes.Description;

                return result.FirstOrDefault();
            }
        }

    and use this method in various places, in foreach loops or just calling it numerous times, does it create and open a new connection every time? If so, how can I tackle this? Should I cache the results somewhere - in this case, should I cache the entire Classifications table in a memory cache and then run queries against that cached object? Or should I make the TicDatabaseEntities variable static and initialize it at class level? Should my class be static if it contains only static methods? Because right now it is not. Also, I've noticed that if I return result.First() instead of FirstOrDefault() and the query does not find a match, it throws an exception (with FirstOrDefault() there is no exception; it returns null). Thank you for the clarification.
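
    As an aside, one way to avoid a query per call is to load the table once and serve lookups from memory. The following is a minimal sketch of that idea, assuming the entity is named Classification and the table is small enough to hold in memory; it is not the poster's code.

        // Sketch: load Classifications once per process (thread-safe via Lazy<T>)
        // and answer lookups from the in-memory list. Entity/property names are
        // assumed from the question.
        using System;
        using System.Collections.Generic;
        using System.Linq;

        public static class ClassificationCache
        {
            private static readonly Lazy<List<Classification>> All =
                new Lazy<List<Classification>>(() =>
                {
                    using (var ticdb = new TicDatabaseEntities())
                    {
                        return ticdb.Classifications.ToList(); // single round trip
                    }
                });

            public static string GetTitle(string classI, string classII, string classIII)
            {
                return All.Value
                    .Where(c => c.ClassI == classI
                             && c.ClassII == classII
                             && c.ClassIII == classIII)
                    .Select(c => c.Description)
                    .FirstOrDefault();
            }
        }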

    Read the article

  • How can I implement a splay tree that performs the zig operation last, not first?

    - by Jakob
    For my Algorithms & Data Structures class, I've been tasked with implementing a splay tree in Haskell. My algorithm for the splay operation is as follows:

    - If the node to be splayed is the root, the unaltered tree is returned.
    - If the node to be splayed is one level from the root, a zig operation is performed and the resulting tree is returned.
    - If the node to be splayed is two or more levels from the root, a zig-zig or zig-zag operation is performed on the result of splaying the subtree starting at that node, and the resulting tree is returned.

    This is valid according to my teacher. However, the Wikipedia description of a splay tree says the zig step "will be done only as the last step in a splay operation", whereas in my algorithm it is the first step in a splay operation. I want to implement a splay tree that performs the zig operation last instead of first, but I'm not sure how it would best be done. It seems to me that such an algorithm would become more complex, seeing as how one needs to find the node to be splayed before it can be determined whether a zig operation should be performed or not. How can I implement this in Haskell (or some other functional language)?

    Read the article

  • trying to reenter IT field after a break of over 5 years

    - by josephj1989
    Hello. I have had some misfortune in life - I was unwell and had to stay out of work for an extended period of about 5 years. Before that I used to work as an Oracle/Oracle E-Business Suite consultant (I was charging very good contract rates). Now I am fully recovered and feeling sharper than ever, but there aren't many opportunities in my areas of expertise in a small market like New Zealand, and the long absence is no help either. So for the last 5 months I have been training myself in C#, ASP.NET, web technologies like HTML, jQuery and CSS, and also SQL Server. I had some previous experience with Java and VB.NET (a few months). I am fully confident of my abilities and believe I can hit the ground running given a chance. I used to be an expert with SQL and the C language, and these skills are portable to SQL Server and C#. Another problem I face is my age - I am over 50. What is your opinion - am I doing the right thing? Can I get back into an IT career? I am willing to start all over again at a junior level. I am really facing a crisis in my life.

    Read the article

  • PNG Textures not loading on HTC desire

    - by Matthew Tatum
    Hi, I'm developing a game for Android using OpenGL ES and have hit a problem: my game loads fine in the emulator (Windows XP and Vista, from Eclipse), and it also loads fine on a T-Mobile G2 (HTC Hero). However, when I load it on my new HTC Desire none of the textures appear to load correctly (or at all). I suspect the BitmapFactory.decode method, although I have no evidence that that is the problem. All of my textures are power-of-2, and JPG textures seem to load (although they don't look great quality), but anything that is GIF or PNG just doesn't load at all - except for a 2x2 red square which loads fine, and one texture that maps to a 3D object but seems to fill each triangle of the mesh with the nearest colour. This is my code for loading images:

        AssetManager am = androidContext.getAssets();
        BufferedInputStream is = null;
        try {
            is = new BufferedInputStream(am.open(fileName));
            Bitmap bitmap = BitmapFactory.decodeStream(is);
            GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
            bitmap.recycle();
        } catch (IOException e) {
            Logger.global.log(Level.SEVERE, e.getLocalizedMessage());
        } finally {
            try {
                is.close();
            } catch (Exception e) {
                // Ignore.
            }
        }

    Thanks

    Read the article

  • Are we in demand?

    - by dotnetdev
    I was made redundant at the end of November. This wasn't because I lacked the required skills (although I'm a youngster and, in career terms, a junior dev - though I knew a lot more than was called for in my job). Anyway, I was laid off due to the whole recession/credit-crunch thing going on. I worked for a small company, money got tight and I had to go. I haven't made a thread about this before, but I have seen threads about others being laid off and experiencing a similar fate. This leads me to the question: what is the job market like for developers? Are we in demand? I ask this question on a global level, but I live in London, UK (in case anyone comes across this thread from the same area). I am a .NET dev, but my secondary skill set is Flex (ActionScript too) and Java, which my personal portfolio is made with. I hope to be strong enough in this to do it commercially with a few more months of practice; then more jobs will be applicable to me. Unfortunately, I use agencies and sites like Jobserve/Monster.com, but no new jobs are ever posted on there, so once you have applied to all the relevant jobs, then what? What's more, a lot of companies are putting a freeze on recruitment. Thanks

    Read the article

  • How to limit TCP writes to a particular size and then block until the data is read

    - by ustulation
    {Qt 4.7.0, VS 2010} I have a server written in Qt and a 3rd-party client executable. The Qt-based server uses QTcpServer and QTcpSocket facilities (non-blocking). Going through articles on TCP, I understand the following: the original TCP specification defined the negotiable window size as a 16-bit value, giving a maximum of 65535 bytes, but implementations often use the RFC window-scale extension, which allows the sliding window to be scaled by bit-shifting to yield a maximum of 1 gigabyte. This is implementation-defined, and it could result in very different window sizes on the receiver and sender ends, as the server uses Qt facilities without hardcoding any window-size limit. The client first asks for all the information it can, based on the previous messages from the server, before handling the new (accumulating) incoming messages. So at some point the server receives a lot of messages, each asking for several MBs of data. The server processes these and puts the results into the send buffer. The client, however, is unable to handle the messages at the same pace, and it seems that the client's receive buffer is far smaller (65535 bytes, maybe) than the sender's transmit window. The messages thus accumulate at the sender's end until the sender's buffer is full too, after which TCP writes on the sender would block. This, however, does not happen, as the sender buffer is much larger; it instead manifests as increased memory consumption on the sender's end. To prevent this from happening, I used the Qt socket's waitForBytesWritten() with the timeout set to -1 for an infinite waiting period. From the behaviour I see, this blocks the thread writing TCP data until the data has actually been consumed by the receiver's window (which happens when earlier messages have been processed by the client at the application level). This has made memory consumption at the server end almost negligible. Is there a better alternative to this (in Qt) if I want to restrict the memory consumption at the server end to, say, x MB? Also, please point out if any of my understandings is incorrect.

    Read the article

  • How can I speed up an 1800-line PHP include? It's slowing my pageload down to 10sec/view

    - by somerandomguy
    I designed my code to put all important functions in a single PHP file that's now 1800 lines long. I call it in other PHP files--AJAX processors, for example--with a simple "require_once("codeBank.php")". I'm discovering that it takes about 10 seconds to load up all those functions, even though I have nothing more than a few global arrays and a bunch of other functions involved. The main AJAX processor code, for example, is taking 8 seconds just to do a simple syntax verification (whose operational function is stored in codeBank.php). When I comment out the require_once, my AJAX response time speeds up from 10sec to 40ms, so it's pretty clear that PHP's trying to do something with those 1800 lines of functions. That's even with APC installed, which is surprising. What should I do to get my code speed back to the sub-100ms level? Am I failing to get the cache's benefit somehow? Do I need to cut that single function bank file into different pieces? Are there other subtle things that I could be doing to screw up my response time? Or barring all that, what are some tools to dig further into which PHP operations are hitting speed bumps?

    Read the article

  • Will TFS 2010 support non-contiguous merging?

    - by steve_d
    I know that merging non-contiguous changesets at once may not be a good idea. However, there is at least one situation in which merging non-contiguous changesets is (probably) not going to break anything: when there are no intervening changes on the affected individual files. (At least, it wouldn't break any worse than would a series of cherry-picked merges, checked in each time; and at least this way you would discover breakage before checking in.) For instance, let's say you have a Main and a Development branch. They start out identical (e.g. after a release). They have two files, foo.cs and bar.cs. Alice makes a change in Development\foo.cs and checks it in as changeset #1001. Bob makes a change in Development\bar.cs and checks it in as #1002. Alice makes another change to Development\foo.cs and checks it in as #1003. Now we could in theory merge both changes #1001 and #1003 from dev to main in a single operation. If we try to merge at the branch level, dev-to-main, we will have to do it as two operations. In this simple, contrived example it's simple enough to merge the one file - but in the real world, where there would be many files involved, it's not so simple. Non-contiguous merging is one of the reasons given for why "merge by workitem" is not implemented in TFS.

    Read the article

  • Best (Java) book for understanding 'under the bonnet' for programming?

    - by Ben
    What would you say is the best book to buy to understand exactly how programming works under the hood, in order to increase performance? I've coded in assembly at university, I studied computer architecture, and I have obviously done high-level programming, but what I really don't understand is things like:

    - What is happening when I perform a cast?
    - What's the difference in performance if I declare something global as opposed to local?
    - How does the memory layout for an ArrayList compare with a Vector or LinkedList?
    - What's the overhead with pointers?
    - Are locks more efficient than using synchronized?
    - Would creating my own array using int[] be faster than using ArrayList?
    - What are the advantages/disadvantages of declaring a variable volatile?

    I have got a copy of Java Performance Tuning, but it doesn't go down very low, and it contains rather obvious things like suggesting a HashMap instead of an ArrayList as you can map the keys to memory addresses etc. I want something a bit more computer-science-y, linking the programming language to what happens with the assembler/hardware. The reason I'm asking is that I have an interview coming up for a job in high-frequency trading and everything has to be as efficient as possible, yet I can't remember every single possible efficiency saving, so I'd just like to learn the fundamentals. Thanks in advance

    Read the article

  • WPF app startup problems

    - by Dave
    My brain is all over the map trying to fully understand Unity right now, so I decided to just dive in and start adding it in a branch to see where it takes me. Surprisingly enough (or maybe not), I am stuck just getting my darn Application to load properly. It seems like the right way to do this is to override OnStartup in App.cs. I've removed the StartupUri from App.xaml so it doesn't create my GUI XAML. My App.cs now looks something like this:

        public partial class App : Application
        {
            private IUnityContainer container { get; set; }

            protected override void OnStartup(StartupEventArgs e)
            {
                container = new UnityContainer();
                GUI gui = new GUI();
                gui.Show();
            }

            protected override void OnExit(ExitEventArgs e)
            {
                container.Dispose();
                base.OnExit(e);
            }
        }

    The problem is that nothing happens when I start the app! I put a breakpoint at the container assignment, and it never gets hit. What am I missing? App.xaml is currently set to ApplicationDefinition, and I'd expect this to work because some sample Unity + WPF code I'm looking at (from Codeplex) does the exact same thing, except that it works! I've also started the app by single-stepping, and it eventually hits the first line in App.xaml. When I step into this line, that's when the app just starts "running", but I don't see anything (and my breakpoint isn't hit). If I do the exact same thing in the sample application, stepping into App.xaml puts me right into OnStartup, which is what I'd expect to happen. Argh! Is it a Bad Thing to just put the Unity construction in my GUI's Window_Loaded event handler? Does it really need to be at the App level?
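
    For comparison, here is a hedged sketch of a startup override that also calls base.OnStartup and resolves the window from the container. The Unity namespace and any registrations shown are assumptions for illustration, not taken from the sample project.

        // Sketch only: calls base.OnStartup and lets the container build the window.
        // The Unity namespace and the commented registration are assumptions.
        using System.Windows;
        using Microsoft.Practices.Unity;

        public partial class App : Application
        {
            private IUnityContainer container;

            protected override void OnStartup(StartupEventArgs e)
            {
                base.OnStartup(e); // let Application finish its own startup work first

                container = new UnityContainer();
                // container.RegisterType<IFoo, Foo>(); // hypothetical registrations go here

                var gui = container.Resolve<GUI>(); // assumes GUI's constructor is resolvable
                gui.Show();
            }

            protected override void OnExit(ExitEventArgs e)
            {
                container.Dispose();
                base.OnExit(e);
            }
        }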

    Read the article

  • Deterministic floating point and .NET

    - by code2code
    How can I guarantee that floating-point calculations in a .NET application (say in C#) always produce the same bit-exact result, especially when using different versions of .NET and running on different platforms (x86 vs x86_64)? Inaccuracies of floating-point operations do not matter. In Java I'd use strictfp. In C/C++ and other low-level languages this problem is essentially solved by accessing the FPU/SSE control registers, but that's probably not possible in .NET. Even with control of the FPU control register, the JIT of .NET will generate different code on different platforms. Something like HotSpot would be even worse in this case... Why do I need it? I'm thinking about writing a real-time strategy (RTS) game which depends heavily on fast floating-point math together with a lock-stepped simulation. Essentially I will only transmit user input across the network. This also applies to other games which implement replays by storing the user input. These are not options:

    - decimals (too slow)
    - fixed-point values (too slow and cumbersome when using sqrt, sin, cos, tan, atan...)
    - updating state across the network like an FPS: sending position information for hundreds or a few thousand units is not an option

    Any ideas?

    Read the article

  • Which is the event listener after doSave() in Symfony?

    - by fesja
    Hi, I've been looking at this event-listeners page, http://www.doctrine-project.org/documentation/manual/1_1/pl/event-listeners, and I'm not sure which listener I have to use to make a change after the doSave() method in BaseModelForm.class.php.

        // PlaceForm.class.php
        protected function doSave($con = null)
        {
            ...
            parent::doSave($con);
            ...
            // Only for new forms, insert place into the tree
            if ($this->object->level == null) {
                $parent = Place::getPlace($this->getValue('parent'), Language::getLang());
                ...
                $node = $this->object->getNode();
                $method = ($node->isValidNode() ? 'move' : 'insert') . 'AsFirstChildOf';
                $node->$method($parent); // calls $this->object->save internally
            }
            return;
        }

    What I want to do is make a custom slug with the ancestors' names of that new place. So if I am inserting "San Francisco", the slug would be "usa-california-san-francisco":

        public function postXXXXXX($event)
        {
            ...
            $event->getInvoker()->slug = $slug;
        }

    The problem is that I'm inserting a new object with no reference to its parent. After it's saved, I insert it into the tree, so I can't change the slug until then. I think a transaction listener could work, but I'm sure there is a better way I'm not seeing right now. Thanks!

    Read the article

  • At what point is it worth using a database?

    - by radix07
    I have a question relating to databases and at what point it is worth diving into one. I am primarily an embedded engineer, but I am writing an application using Qt to interface with our controller. We are at an odd point where we have enough data that it would be feasible to implement a database (around 700+ items and growing) to manage everything, but I am not sure it is worth the time right now. I have no problem implementing the GUI with files generated from Excel and parsed in, but it gets tedious and hard to track even with VBA scripts. I have been playing around with converting our data into something more manageable for the application side with Microsoft Access, and that seems to be working well. If that works out, I am only a step (or several) away from using an SQL database and using the Qt library to access and modify it. I don't have much experience managing data at this level and am curious what may be the best way to approach this. So what are some of the real benefits of using a database, if any, in this case? I realize much of this can be very application-specific, but some general ideas and suggestions on how to straddle the embedded/application programming line would be helpful. Thanks

    Read the article

  • I need to sort the facets that come back from SOLR by relevancy

    - by Pinguthepenguin
    I have within my SOLR index song objects which belong to a higher-level album object. An example is shown below:

        <song>
          <album title>Blood Sugar Sex Magic</album title>
          <song title>Under the Bridge</song title>
          <description>A sad song about junkies</description>
        </song>

    What I can do at the moment is create a facet on the album title, so that a search on songs will also show me which albums contain hits for that keyword. The default behaviour for SOLR is that the facets are shown in order from most hits to least. However, what I want to achieve is for the facet list to be sorted according to the relevancy of the top hit for that album. For example, a search on the word "sad" may show a facet with one hit for "Blood Sugar Sex Magic", and there may also be an album called "Sad Clown Songs" where there are 10 hits. "Sad Clown Songs" will show as the first facet, even though it may be that "Under the Bridge" comes up as the most relevant song. My question is: how can I get all the facets back, but have them ordered by the relevancy of the songs within them? If I would need to change or extend some underlying SOLR code, what would that be? Thanks in advance.

    Read the article

  • Are these saml request-response good enough?

    - by Ashwin
    I have set up single sign-on (SSO) for my services. All the services confirm the identity of the user using the IdP (identity provider); in my case I am also the IdP. In my SAML request, I have included the following:

    1. the level for which authentication is required
    2. the consumer URL
    3. the destination service URL
    4. the issuer

    Then I encrypt this message with the SP's (service provider) private key and then with the IdP's public key, and send the request. The IdP, on receiving the request, first decrypts with its own private key and then with the SP's public key. In the SAML response:

    1. destination URL
    2. issuer
    3. status of the response

    Is this good enough? Please give your suggestions.

    Read the article

  • Android opengl releasing textures

    - by user1642418
    I have a bit of a problem. I am developing a game for Android plus an engine, and I got stuck. I am getting an OpenGL out-of-memory error, and either the app crashes or the phone hangs after loading a scene multiple times. For example: the app launches, shows the main menu, and the first level/scene is loaded. Then I go back to the main menu and repeat. It doesn't matter which scene I load; after 4-6 times the error occurs. Some background: each time a scene is loaded, all the resources are released, and upon the first frame render the needed resources get loaded. The performance is more or less OK. Note that I am calling the glDeleteTextures method, but I think it is not doing its job and releasing memory. The thing is, when I minimize the app and open it again the problem doesn't occur, even though almost the same things are executed - this way Android releases the memory. How do I release/get rid of unused textures properly? This happens on an HTC Desire HD (Ice Cream Sandwich 4.0.4). Other games work fine, so I bet this is not a problem in the ROM.

    Read the article

  • RegEx expression or jQuery selector to NOT match "external" links in href

    - by TrueBlueAussie
    I have a jQuery plugin that overrides link behavior to allow Ajax loading of page content. Simple enough with a delegated event like $(document).on('click', 'a', function(){}), but I only want it to apply to links that are not like these ones (Ajax loading is not applicable to them, so links like these need to behave normally):

        target="_blank"     // New browser window
        href="#..."         // Bookmark link (page is already loaded).
        href="afs://..."    // AFS file access.
        href="cid://..."    // Content identifiers for MIME body part.
        href="file://..."   // Specifies the address of a file from the locally accessible drive.
        href="ftp://..."    // Uses Internet File Transfer Protocol (FTP) to retrieve a file.
        href="http://..."   // The most commonly used access method.
        href="https://..."  // Provides some level of security of transmission.
        href="mailto://..." // Opens an email program.
        href="mid://..."    // The message identifier for email.
        href="news://..."   // Usenet newsgroup.
        href="x-exec://..." // Executable program.
        href="http://AnythingNotHere.com" // External links

    Sample code:

        $(document).on('click', 'a:not([target="_blank"])', function(){
            var $this = $(this);
            if ('some additional check of href'){
                // Do ajax load and stop default behaviour
                return false;
            }
            // allow link to work normally
        });

    Q: Is there a way to easily detect all "local links" that would only navigate within the current website, excluding all the variations mentioned above? Note: this is for an MVC 5 Razor website, so absolute site URLs are unlikely to occur.

    Read the article

  • Rails nested attributes with a join model, where one of the models being joined is a new record

    - by gzuki
    I'm trying to build a grid in Rails for entering data. It has rows and columns, and rows and columns are joined by cells. In my view, I need the grid to be able to handle having 'new' rows and columns on the edge, so that if you type in them and then submit, they are automatically generated and their shared cells are connected to them correctly. I want to be able to do this without JS. Rails nested attributes fail to handle being mapped to both a new row and a new column; they can only do one or the other. The reason is that they are nested specifically in one of the two models, and whichever one they aren't nested in will have no id (since it doesn't exist yet), and when pushed through accepts_nested_attributes_for on the top-level Grid model, they will only be bound to the new object created for whatever they were nested in. How can I handle this? Do I have to override Rails' handling of nested attributes? My models look like this, by the way:

        class Grid < ActiveRecord::Base
          has_many :rows
          has_many :columns
          has_many :cells, :through => :rows
          accepts_nested_attributes_for :rows, :allow_destroy => true,
            :reject_if => lambda {|a| a[:description].blank? }
          accepts_nested_attributes_for :columns, :allow_destroy => true,
            :reject_if => lambda {|a| a[:description].blank? }
        end

        class Column < ActiveRecord::Base
          belongs_to :grid
          has_many :cells, :dependent => :destroy
          has_many :rows, :through => :grid
        end

        class Row < ActiveRecord::Base
          belongs_to :grid
          has_many :cells, :dependent => :destroy
          has_many :columns, :through => :grid
          accepts_nested_attributes_for :cells
        end

        class Cell < ActiveRecord::Base
          belongs_to :row
          belongs_to :column
          has_one :grid, :through => :row
        end

    Read the article

  • Pascal's Triangle by recursion

    - by Olpers
    Note: my class teacher gave me this question as an assignment. I am not asking you to do it for me, but please tell me how to do it with recursion. Binomial coefficients can be calculated using Pascal's triangle:

            1           n = 0
           1 1
          1 2 1
         1 3 3 1
        1 4 6 4 1       n = 4

    Each new level of the triangle has 1's on the ends; the interior numbers are the sums of the two numbers above them. Task: write a program that includes a recursive function to produce the list of binomial coefficients for the power n using the Pascal's triangle technique. For example:

        Input = 2    Output = 1 2 1
        Input = 4    Output = 1 4 6 4 1

    I have done this so far, but tell me how to do this with recursion:

        #include <stdio.h>

        int main()
        {
            int length, i, j, k;

            // Accepting length from user
            printf("Enter the length of Pascal's triangle : ");
            scanf("%d", &length);

            // Printing the triangle
            for (i = 1; i <= length; i++)
            {
                for (j = 1; j <= length - i; j++)
                    printf(" ");
                for (k = 1; k < i; k++)
                    printf("%d", k);
                for (k = i; k >= 1; k--)
                    printf("%d", k);
                printf("\n");
            }
            return 0;
        }

    Read the article
