Search Results


  • Tridion Installation

    - by Kevin Brydon
    I am currently upgrading an installation of Tridion from 5.3 to 2011, starting almost from scratch (aside from migrating the database) on brand new virtual servers. I just want to ask for some advice on my current server setup... a sanity check. All servers are running Windows Server 2008. The pages on our website are all classic ASP.

    Database: SQL Server cluster. The 5.3 database has been migrated using the DatabaseManager. This is pretty standard and works well (in test anyway).

    Content Manager: a single server to run the Content Manager and the Publisher. There are around 10 people using it at any one time, so it is not under a particularly heavy load.

    Content Data Store: filesystem located somewhere on the network. One directory for live and one for staging.

    Content Delivery: two servers (cd1 and cd2), each with the following server roles installed. cd1 writes to a filesystem content data store for the live website; cd2 writes to the content data store for the staging website.

    Presentation: two public-facing web servers (web1 and web2) serving both the live and staging websites. The web servers read directly from the content data store, as it's a filesystem. Each of the web servers has the Content Delivery Server installed so that I can use dynamic linking (and other features?).

    I've so far set up everything but the web servers. Any thoughts?

    Edit: Thanks to Ram S, who linked me to a decent walkthrough, upvoted. I suppose I should have posed some questions, as I didn't really ask a question. I guess I'm a little confused over the content delivery aspect. I have the Content Delivery split into two separate parts: cd1 and cd2 do the work of shifting information from the Content Manager to the staging/live web directories, while web1 and web2 do the work of serving the web pages to the outside world and interact with the content data store (filesystem). Is this a correct setup? I need some parts of Content Delivery on my web servers, right? Theoretically I could get rid of the cd1 and cd2 servers and use web1 and web2 to do the deployment, right? But I suspect this would put the web servers under unnecessary strain should there ever be a big publish. I've been reading the 2011 Installation Manual, Content Delivery section, and I'm finding it quite hard to get my head around!

  • How can I make VS2010 behave like VS2008 w/r/t indentation?

    - by Portman
    Situation: I have a plain text file where indentation is important:

        line 1
          line 1.1 (indented two spaces)
          line 1.2 (indented two spaces)
            line 1.2.3 (indented four spaces)

    In Visual Studio 2008, when I pressed Enter, the next line would also be indented four spaces. However, in Visual Studio 2010, when I press Enter, the next line is indented one tab.

    Question: does anybody know where, in the mountain of preferences under Tools > Options, I can return to the way that Visual Studio 2008 worked? Under Options > Text Editor > Plain Text > Tabs, I see the following: if I select "None", then I get no indentation when I move to the next line. If I select "Block", then I get tab indentation (even though the previous line uses spaces). In Visual Studio 2008, my indentation is set to "Block", and I get spaces. I have no idea what "Smart" indenting is, or why it is disabled.

  • latex tabular vertical alignment to top?

    - by shoosh
    I'm trying to create a simple tabular with two cells of text and two images below them, like so:

        \begin{tabular}[h]{ c | c}
        \emph{Normal} & \emph{Cone} \\
        \includegraphics[width=0.39\textwidth]{images/pipe1} &
        \includegraphics[width=0.61\textwidth]{images/pipe2}
        \end{tabular}

    The first image is shorter than the second, and I want it to be aligned at the top of the cell, but for some reason it gets aligned to the bottom of the cell instead. I've tried using the array package and doing this:

        \begin{tabular}[h]{ p{0.39\textwidth} | p{0.61\textwidth} }
        \emph{Normal} & \emph{Cone} \\
        \includegraphics[width=0.39\textwidth]{images/pipe1} &
        \includegraphics[width=0.61\textwidth]{images/pipe2}
        \end{tabular}

    But this doesn't change anything; the first image is still aligned to the bottom. Why is that? Could there be something else going on which forces the alignment to stick to the bottom?
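
    A minimal sketch of one commonly suggested workaround (an addition, not part of the original question): \raisebox can shift each image's reference point to its top edge, so the row's normal baseline alignment behaves like a top alignment.

        \begin{tabular}{ c | c }
          \emph{Normal} & \emph{Cone} \\
          % -\height puts the baseline at the top of each image box
          \raisebox{-\height}{\includegraphics[width=0.39\textwidth]{images/pipe1}} &
          \raisebox{-\height}{\includegraphics[width=0.61\textwidth]{images/pipe2}}
        \end{tabular}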

  • Windows 64-bit registry vs. 32-bit registry

    - by George2
    Hello everyone, I heard that on the Windows x64 architecture, in order to support running both x86 and x64 applications, there are two separate sets of the Windows registry -- one for x86 applications to access and the other for x64 applications to access? For example, if a COM component registers its CLSID in the x86 set of the registry, then an x64 application will never be able to access that COM component by CLSID, because x86 and x64 have different sets of the registry? So, my question is whether my understanding of the above sample is correct? I also want to get some more documents to learn about this topic, about the two different sets of registry on the x64 architecture. (I did some searching, but have not found any valuable information.) Thanks in advance, George
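
    A small C sketch of the mechanism in question (an addition; the CLSID is a placeholder): by default a 32-bit process is redirected to the 32-bit registry view (stored under Wow6432Node), but the KEY_WOW64_64KEY access flag lets it open the 64-bit view explicitly.

        #include <windows.h>
        #include <stdio.h>

        int main(void)
        {
            HKEY hKey;
            /* Open a CLSID key in the 64-bit view, even from a 32-bit process. */
            LONG rc = RegOpenKeyExW(HKEY_LOCAL_MACHINE,
                                    L"SOFTWARE\\Classes\\CLSID\\{00000000-0000-0000-0000-000000000000}",
                                    0,
                                    KEY_READ | KEY_WOW64_64KEY,  /* force the 64-bit view */
                                    &hKey);
            if (rc == ERROR_SUCCESS) {
                printf("key found in the 64-bit view\n");
                RegCloseKey(hKey);
            }
            return 0;
        }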

  • linear interpolation on 8bit microcontroller

    - by JB
    I need to do a linear interpolation over time between two values on an 8 bit PIC microcontroller (specifically a 16F627A, but that shouldn't matter) using PIC assembly language, although I'm looking for an algorithm here as much as actual code. I need to take an 8 bit starting value, an 8 bit ending value and a position between the two (currently represented as an 8 bit number 0-255, where 0 means the output should be the starting value and 255 means it should be the final value, but that can change if there is a better way to represent this) and calculate the interpolated value. Now the PIC doesn't have a divide instruction, so I could code up a general purpose divide routine and effectively calculate (B-A)*(x/255)+A at each step, but I feel there is probably a much better way to do this on a microcontroller than the way I'd do it on a PC in C++. Has anyone got any suggestions for implementing this efficiently on this hardware?
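
    A sketch of one shift-based approach, shown in C rather than PIC assembly (an addition to the question): widen the 0-255 position to 0-256 so the divide collapses into a right shift. On the PIC the 16-bit multiply would itself be a small shift-and-add loop. Mid-range values are off by a fraction of a step, but the endpoints are exact.

        #include <stdint.h>

        /* Linear interpolation: t = 0 returns a, t = 255 returns exactly b. */
        uint8_t lerp8(uint8_t a, uint8_t b, uint8_t t)
        {
            uint16_t scale = (uint16_t)t + (t >> 7);   /* widen 0..255 to 0..256 */
            if (b >= a)
                return (uint8_t)(a + (((uint16_t)(b - a) * scale) >> 8));
            else
                return (uint8_t)(a - (((uint16_t)(a - b) * scale) >> 8));
        }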

  • insert and modify a record in an entity using Core Data

    - by aminfar
    I tried to find the answer to my question on the internet, but I could not. I have a simple entity in Core Data that has a value attribute (an integer) and a date attribute. I want to define two methods in my .m file. The first method is the add method: it takes two arguments, an integer value (entered by the user in the UI) and a date (the current date by default), and inserts a record into the entity based on the arguments. The second method is an increment method: it uses the date as a key to find a record and then increments the integer value of that record. I don't know how to write these methods. (Assume that we have an Array Controller for the table in the xib file.)
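
    A rough Objective-C sketch of the two methods (an addition; the entity name Record, the attribute names, and the managedObjectContext property are all assumptions, not from the question):

        // Insert a new record with the given value and date.
        - (void)addValue:(NSInteger)value date:(NSDate *)date
        {
            NSManagedObject *rec = [NSEntityDescription
                insertNewObjectForEntityForName:@"Record"
                         inManagedObjectContext:self.managedObjectContext];
            [rec setValue:[NSNumber numberWithInteger:value] forKey:@"value"];
            [rec setValue:date forKey:@"date"];

            NSError *error = nil;
            [self.managedObjectContext save:&error];
        }

        // Find the record whose date matches and increment its value.
        - (void)incrementValueForDate:(NSDate *)date
        {
            NSFetchRequest *req = [[NSFetchRequest alloc] init];
            [req setEntity:[NSEntityDescription entityForName:@"Record"
                                       inManagedObjectContext:self.managedObjectContext]];
            [req setPredicate:[NSPredicate predicateWithFormat:@"date == %@", date]];

            NSError *error = nil;
            NSArray *hits = [self.managedObjectContext executeFetchRequest:req error:&error];
            if ([hits count] > 0) {
                NSManagedObject *rec = [hits objectAtIndex:0];
                NSNumber *old = [rec valueForKey:@"value"];
                [rec setValue:[NSNumber numberWithInteger:[old integerValue] + 1]
                       forKey:@"value"];
                [self.managedObjectContext save:&error];
            }
            [req release];
        }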

  • tinyMCE setup callback versus onAddEditor

    - by Matthew Manela
    When you initialize a tinyMCE editor, I have noticed two different ways to get called when the editor is created. One way is using the setup callback that is part of tinyMCE.init:

        tinyMCE.init({
            ...
            setup : function(ed) {
                // do things with editor ed
            }
        });

    The other way is to hook up to the onAddEditor event:

        tinyMCE.onAddEditor.add(function(mgr, ed) {
            // do things with editor ed
        });

    What are the differences between using these two methods? Is the editor in a different state in one versus the other? For example, are things not yet loaded if I try to access properties on the editor object? What are reasons to use one over the other?

  • Asynchronous sockets in C#

    - by IVlad
    I'm confused about the correct way of using asynchronous socket methods in C#. I will refer to these two articles to explain things and ask my questions: the MSDN article on asynchronous client sockets and the devarticles.com article on socket programming. My question is about the BeginReceive() method. The MSDN article uses these two functions to handle receiving data:

        private static void Receive(Socket client) {
            try {
                // Create the state object.
                StateObject state = new StateObject();
                state.workSocket = client;

                // Begin receiving the data from the remote device.
                client.BeginReceive(state.buffer, 0, StateObject.BufferSize, 0,
                    new AsyncCallback(ReceiveCallback), state);
            } catch (Exception e) {
                Console.WriteLine(e.ToString());
            }
        }

        private static void ReceiveCallback(IAsyncResult ar) {
            try {
                // Retrieve the state object and the client socket
                // from the asynchronous state object.
                StateObject state = (StateObject) ar.AsyncState;
                Socket client = state.workSocket;

                // Read data from the remote device.
                int bytesRead = client.EndReceive(ar);

                if (bytesRead > 0) {
                    // There might be more data, so store the data received so far.
                    state.sb.Append(Encoding.ASCII.GetString(state.buffer, 0, bytesRead));

                    // Get the rest of the data.
                    client.BeginReceive(state.buffer, 0, StateObject.BufferSize, 0,
                        new AsyncCallback(ReceiveCallback), state);
                } else {
                    // All the data has arrived; put it in response.
                    if (state.sb.Length > 1) {
                        response = state.sb.ToString();
                    }
                    // Signal that all bytes have been received.
                    receiveDone.Set();
                }
            } catch (Exception e) {
                Console.WriteLine(e.ToString());
            }
        }

    The devarticles.com tutorial, on the other hand, passes null for the last parameter of the BeginReceive method, and goes on to explain that the last parameter is useful when we're dealing with multiple sockets. Now my questions are:

    1. What is the point of passing a state to the BeginReceive method if we're only working with a single socket? Is it to avoid using a class field? It seems like there's little point in doing it, but maybe I'm missing something.

    2. How can the state parameter help when dealing with multiple sockets? If I'm calling client.BeginReceive(...), won't all the data be read from the client socket? The devarticles.com tutorial makes it sound like in this call:

        m_asynResult = m_socClient.BeginReceive(theSocPkt.dataBuffer, 0,
            theSocPkt.dataBuffer.Length, SocketFlags.None, pfnCallBack, theSocPkt);

    data will be read from the theSocPkt.thisSocket socket, instead of from the m_socClient socket. In their example the two are one and the same, but what happens if that is not the case? I just don't really see where that last argument is useful, or at least how it helps with multiple sockets. If I have multiple sockets, I still need to call BeginReceive on each of them, right?
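
    On question 2, a short sketch (not from either article) of how the state argument pays off with several sockets: one shared callback, one state object per connection, recovered via ar.AsyncState. Here connectedClients is a hypothetical collection, and StateObject/ReceiveCallback are the ones from the MSDN code above.

        foreach (Socket client in connectedClients)
        {
            StateObject state = new StateObject();
            state.workSocket = client;
            // Same callback for every socket; the state object tells
            // ReceiveCallback which connection this particular read belongs to.
            client.BeginReceive(state.buffer, 0, StateObject.BufferSize, SocketFlags.None,
                new AsyncCallback(ReceiveCallback), state);
        }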

  • CALayer flickers when drawing a path

    - by Alexey
    I am using a CALayer to display a path via the drawLayer:inContext: delegate method, which resides in the view controller of the view that the layer belongs to. Each time the user moves their finger on the screen, the path is updated and the layer is redrawn. However, the drawing doesn't keep up with the touches: there is always a slight lag in displaying the last two points of the path. It also flickers, but only while displaying the last two or three points again. If I just do the drawing in the view's drawRect, it works fine and the drawing is definitely fast enough. Does anyone know why it behaves like this? I suspect it is something to do with the layer buffering, but I couldn't find any documentation about it.
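
    One experiment worth trying (an assumption about the cause, not a confirmed fix): content changes on a standalone CALayer are implicitly animated, which can read as exactly this kind of lag and flicker, so disabling implicit actions around each update rules that out. pathLayer is a hypothetical name for the layer in question.

        [CATransaction begin];
        [CATransaction setDisableActions:YES];  // suppress the implicit fade
        [pathLayer setNeedsDisplay];            // redraw immediately
        [CATransaction commit];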

  • SQL Max Group By Query Help

    - by Hcabnettek
    Hi All, I have a quick question: how do I select the two values I need in one query? Currently I'm doing this, which works fine, but it's obviously running two queries when one should do the trick. I tried MAX(columnA) with GROUP BY columnB, but that returns multiple rows; I only want one row returned.

        DECLARE @biID bigint, @dtThreshold DateTime

        SELECT @biID = MAX(biID)
        FROM tbPricingCalculationCount WITH (NOLOCK)

        SELECT @dtThreshold = dtDateTime
        FROM tbPricingCalculationCount WITH (NOLOCK)
        WHERE biID = @biID

    I would like both those variables to be set correctly in one query. How can I do that? Thanks, ~ck
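
    One single-query option, sketched in T-SQL (an addition to the question): let TOP 1 with ORDER BY pick the row that holds MAX(biID) and assign both variables from that same row.

        SELECT TOP 1
               @biID        = biID,
               @dtThreshold = dtDateTime
        FROM   tbPricingCalculationCount WITH (NOLOCK)
        ORDER  BY biID DESC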

  • The relationship between OPC and DCOM

    - by typoknig
    Hi all, I am trying to grasp the link between OPC and DCOM. I have watched all four of the tutorials here, and I think I have a good feeling for what OPC is, but in one of the tutorials (the third one, 35 seconds in) the narrator states that OPC is based on DCOM, and I do not understand how the two are really linked. My confusion comes from a question my professor posed, in which he asked "How and where would you deploy OPC instead of DCOM, and vice versa?" His question makes it seem like the two are not as linked as my research suggests. I'm not looking for anyone to answer the question; I just want to know the relation between OPC and DCOM, and then I can figure the rest out. Specifically, I would like to know whether:

    1. one is always based on the other;
    2. one can always be deployed without the other.

  • Problems with makeObjectsPerformSelector inside and outside a class?

    - by QuakAttak
    A friend and I are creating a card game for the iPhone, and in these early days of the project I'm developing a Deck class and a Card class to keep up with the cards. I want to test the shuffle method of the Deck class, but I am not able to show the values of the cards in the Deck class instance. The Deck class has an NSArray of Card objects; Card has a method called displayCard that shows the value and suit using console output (printf or NSLog). In order to show what cards are in a Deck instance all at once, I am using [deck makeObjectsPerformSelector:@selector(displayCard)], where deck is the NSArray in the Deck class. Inside of the Deck class, nothing is displayed on the console output. But in a test file, it works just fine. Here's the test file that creates its own NSArray:

        #import <Foundation/Foundation.h>
        #import "card.h"

        int main (int argc, char** argv) {
            NSAutoreleasePool* pool = [[NSAutoreleasePool alloc] init];

            Card* two = [[Card alloc] initValue:2 withSuit:'d'];
            Card* three = [[Card alloc] initValue:3 withSuit:'h'];
            Card* four = [[Card alloc] initValue:4 withSuit:'c'];

            NSArray* deck = [NSArray arrayWithObjects:two,three,four,nil];

            //Ok, what if we release the objects in the array before they're used?
            //I don't think this will work...
            [two release];
            [three release];
            [four release];
            //Ok, it works... I wonder how...

            //Hmm... how will this work?
            [deck makeObjectsPerformSelector:@selector(displayCard)];
            //Yay! It works fine!

            [pool release];
            return 0;
        }

    This worked beautifully, so I created an initializer around this idea, creating 52 Card objects one at a time and adding them to the NSArray using deck = [deck arrayByAddingObject:newCard]. Is the real problem with how I'm using makeObjectsPerformSelector, or with something before/after it?

  • Automatic type conversion in Java?

    - by davr
    Is there a way to do automatic implicit type conversion in Java? For example, say I have two types, FooSet and BarSet, which are both representations of a Set. It is easy to convert between the types, so I have written two utility methods:

        /** Given a BarSet, returns a FooSet */
        public FooSet barTOfoo(BarSet input) { /* ... */ }

        /** Given a FooSet, returns a BarSet */
        public BarSet fooTObar(FooSet input) { /* ... */ }

    Now say there's a method like this that I want to call:

        public void doSomething(FooSet data) { /* .. */ }

    But all I have is a BarSet myBarSet... it means extra typing, like:

        doSomething(barTOfoo(myBarSet));

    Is there a way to tell the compiler that certain types can automatically be cast to other types? I know this is possible in C++ with overloading, but I can't find a way in Java. I want to just be able to type:

        doSomething(myBarSet);

    and have the compiler know to automatically call barTOfoo().
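
    Java has no user-defined implicit conversions, so one commonly used workaround (a sketch, not from the question) is an explicit overload that performs the conversion and delegates:

        /** Overload that accepts a BarSet and converts it before delegating. */
        public void doSomething(BarSet data) {
            doSomething(barTOfoo(data));
        }

    With this in place, doSomething(myBarSet) compiles, because overload resolution picks the BarSet version.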

  • "Could not register destruction callback" warn message leads to memory leaks?

    - by Séb
    Hello all, I'm in the exact same situation as this old question: http://stackoverflow.com/questions/2077558/warn-could-not-register-destruction-callback In short: I see a warning saying that a destruction callback could not be registered for some beans. My question is: since the beans whose destruction callback cannot be registered are two persistence beans, could this be the source of a memory leak? I am experiencing a leak in my app. Although the session timeout is set (to 30 minutes), my profiler shows me more instances of the Hibernate SessionImpl each time I run a thread dump. The number of new instances of SessionImpl is exactly the number of times I tried to log in between two thread dumps. Thanks for your help...

  • Does anyone really understand how HFSC scheduling in Linux/BSD works?

    - by Mecki
    I read the original SIGCOMM '97 PostScript paper about HFSC. It is very technical, but I understand the basic concept: instead of giving a linear service curve (as with pretty much every other scheduling algorithm), you can specify a convex or concave service curve, and thus it is possible to decouple bandwidth and delay. However, even though this paper mentions the two kinds of scheduling algorithms being used (real-time and link-share), it always only mentions ONE curve per scheduling class (the decoupling is done by specifying this curve; only one curve is needed for that).

    Now HFSC has been implemented for BSD (OpenBSD, FreeBSD, etc.) using the ALTQ scheduling framework, and it has been implemented for Linux using the TC scheduling framework (part of iproute2). Both implementations added two additional service curves that were NOT in the original paper: a real-time service curve and an upper-limit service curve. Again, please note that the original paper mentions two scheduling algorithms (real-time and link-share), but in that paper both work with one single service curve; there never were two independent service curves for either one, as you currently find in BSD and Linux. Even worse, some versions of ALTQ seem to add an additional queue priority to HFSC (there is no such thing as priority in the original paper either). I found several BSD HowTos mentioning this priority setting (even though the man page of the latest ALTQ release knows no such parameter for HFSC, so officially it does not even exist).

    This all makes HFSC scheduling even more complex than the algorithm described in the original paper, and there are tons of tutorials on the Internet that often contradict each other, one claiming the opposite of the other. This is probably the main reason why nobody really seems to understand how HFSC scheduling really works. Before I can ask my questions, we need a sample setup of some kind. I'll use a very simple one: a tree with a 512 kbit/s root, two inner classes A and B below it, and leaf classes A1/A2 under A and B1/B2 under B (sketched in tc syntax after the questions). Here are some questions I cannot answer because the tutorials contradict each other:

    1. What do I need a real-time curve for at all? Assuming A1, A2, B1, B2 are all 128 kbit/s link-share (no real-time curve for either one), then each of those will get 128 kbit/s if the root has 512 kbit/s to distribute (and A and B are both 256 kbit/s, of course), right? Why would I additionally give A1 and B1 a real-time curve with 128 kbit/s? What would this be good for? To give those two a higher priority? According to the original paper, I can give them a higher priority by using a curve; that's what HFSC is all about, after all. By giving both classes a curve of [256kbit/s 20ms 128kbit/s], both have twice the priority of A2 and B2 automatically (still only getting 128 kbit/s on average).

    2. Does the real-time bandwidth count towards the link-share bandwidth? E.g., if A1 and B1 both only have 64 kbit/s real-time and 64 kbit/s link-share bandwidth, does that mean once they are served 64 kbit/s via real-time, their link-share requirement is satisfied as well (they might get excess bandwidth, but let's ignore that for a second), or does that mean they get another 64 kbit/s via link-share? So does each class have a bandwidth "requirement" of real-time plus link-share? Or does a class only have a higher requirement than the real-time curve if the link-share curve is higher than the real-time curve (current link-share requirement equals specified link-share requirement minus real-time bandwidth already provided to this class)?

    3. Is the upper-limit curve applied to real-time as well, only to link-share, or maybe to both? Some tutorials say one way, some say the other. Some even claim upper-limit is the maximum for real-time bandwidth + link-share bandwidth. What is the truth?

    4. Assuming A2 and B2 are both 128 kbit/s, does it make any difference whether A1 and B1 are 128 kbit/s link-share only, or 64 kbit/s real-time and 128 kbit/s link-share, and if so, what difference?

    5. If I use the separate real-time curve to increase the priorities of classes, why would I need "curves" at all? Why is real-time not a flat value, and link-share also a flat value? Why are both curves? The need for curves is clear in the original paper, because there is only one attribute of that kind per class. But now, having three attributes (real-time, link-share, and upper-limit), what do I still need curves on each one for? Why would I want the curves' shapes (not average bandwidth, but their slopes) to be different for real-time and link-share traffic?

    6. According to the little documentation available, real-time curve values are totally ignored for inner classes (class A and B); they are only applied to leaf classes (A1, A2, B1, B2). If that is true, why does the ALTQ HFSC sample configuration (search for 3.3 Sample configuration) set real-time curves on inner classes and claim that those set the guaranteed rate of those inner classes? Isn't that completely pointless? (Note: pshare sets the link-share curve in ALTQ and grate the real-time curve; you can see this in the paragraph above the sample configuration.)

    7. Some tutorials say the sum of all real-time curves may not be higher than 80% of the line speed, others say it must not be higher than 70% of the line speed. Which one is right, or are they maybe both wrong?

    8. One tutorial said you shall forget all the theory. No matter how things really work (schedulers and bandwidth distribution), imagine the three curves according to the following "simplified mind model": real-time is the guaranteed bandwidth that this class will always get. Link-share is the bandwidth that this class wants to become fully satisfied, but satisfaction cannot be guaranteed. In case there is excess bandwidth, the class might even get offered more bandwidth than necessary to become satisfied, but it may never use more than upper-limit says. For all this to work, the sum of all real-time bandwidths may not be above xx% of the line speed (see the question above; the percentage varies). Question: is this more or less accurate, or a total misunderstanding of HFSC?

    9. And if the assumption above is really accurate, where is prioritization in that model? E.g., every class might have a real-time bandwidth (guaranteed), a link-share bandwidth (not guaranteed) and maybe an upper-limit, but still some classes have higher priority needs than other classes. In that case I must still prioritize somehow, even among the real-time traffic of those classes. Would I prioritize by the slope of the curves? And if so, which curve? The real-time curve? The link-share curve? The upper-limit curve? All of them? Would I give all of them the same slope, or each a different one, and how do I find out the right slope?

    I still haven't lost hope that there exists at least a handful of people in this world who really understood HFSC and are able to answer all these questions accurately. And doing so without contradicting each other in the answers would be really nice ;-)
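
    For concreteness, a sketch of the sample tree in Linux tc syntax (an addition: the interface name is an assumption, the rates mirror question 1 rather than a recommended configuration, and the syntax is not verified here):

        # root qdisc and a parent class covering the whole 512 kbit/s link
        tc qdisc add dev eth0 root handle 1: hfsc default 12
        tc class add dev eth0 parent 1:  classid 1:1  hfsc ls m2 512kbit ul m2 512kbit

        # inner classes A and B
        tc class add dev eth0 parent 1:1 classid 1:10 hfsc ls m2 256kbit
        tc class add dev eth0 parent 1:1 classid 1:20 hfsc ls m2 256kbit

        # leaves: A1 and B1 get the two-segment real-time curve
        # [256kbit/s 20ms 128kbit/s] from question 1, A2 and B2 link-share only
        tc class add dev eth0 parent 1:10 classid 1:11 hfsc rt m1 256kbit d 20ms m2 128kbit ls m2 128kbit
        tc class add dev eth0 parent 1:10 classid 1:12 hfsc ls m2 128kbit
        tc class add dev eth0 parent 1:20 classid 1:21 hfsc rt m1 256kbit d 20ms m2 128kbit ls m2 128kbit
        tc class add dev eth0 parent 1:20 classid 1:22 hfsc ls m2 128kbit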

  • Advanced SQL Data Compare through multiple tables

    - by podosta
    Hello, consider the situation below: two tables (A & B), in two environments (DEV & TEST), with records in those tables. If you look at the content of the tables, you see that the functional data are identical. I mean that except for the PK and FK values, the name Roger is still connected to Fruit & Vegetable.

    In the DEV environment:

        Table A
        1  Roger
        2  Kevin

        Table B (second field is the FK to table A)
        1  1  Fruit
        2  1  Vegetable
        3  2  Meat

    In the TEST environment:

        Table A
        4  Roger
        5  Kevin

        Table B (second field is the FK to table A)
        7  4  Fruit
        8  4  Vegetable
        9  5  Meat

    I'm looking for a SQL data compare tool which will tell me there is no difference in the above case, or, if there is a difference, will generate insert & update scripts in the right order (insert first into A, then B). Thanks a lot guys, Grégoire

  • Is it worth learning Perl 6?

    - by Andres
    I have the opportunity to take a two-day class on Perl 6 with the Rakudo compiler. I don't want to start a religious war, but is it worth my time? Is there any reason to believe that Perl 6 will be practical in the real world within the next two years? Does anyone currently use it effectively?

    Update: I took the class and learned a lot. However, after day 1 my mind was a bit overwhelmed. There are tons of cool ideas in Perl 6, and it will be neat to see which of them filter up to other languages. Overall the experience was a positive use of my time, though I wasn't able to absorb as much on the second day. If it were a three-day class it would have been unproductive, just because there is a limit to how much you can process in a short amount of time.

  • Entity Relationship Diagram - Relationship Strength?

    - by 01010011
    Hi, I am trying to figure out under what circumstances I should use weak (non-identifying) relationships, where the primary key of the related entity does not contain a primary key component of the parent entity, versus when I should use strong (identifying) relationships, where the primary key of the related entity contains a primary key component of the parent entity. For example, when designing an Entity Relationship Diagram, if I have two entities (e.g. book and purchaser), how do I know when to choose the solid crow's foot or the dashed crow's foot to connect the two entities? Any assistance will be appreciated. Thanks in advance.
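
    For illustration, a small SQL sketch of the structural difference between the two relationship types (an addition; table and column names are invented around the book/purchaser example):

        -- Weak (non-identifying): the child's PK stands alone;
        -- the parent's key appears only as a plain FK column.
        CREATE TABLE book (
            book_id      INT PRIMARY KEY,
            title        VARCHAR(100),
            purchaser_id INT REFERENCES purchaser(purchaser_id)
        );

        -- Strong (identifying): the parent's key is part of the child's PK,
        -- so a book_purchase row cannot exist without its purchaser.
        CREATE TABLE book_purchase (
            purchaser_id INT REFERENCES purchaser(purchaser_id),
            book_no      INT,
            PRIMARY KEY (purchaser_id, book_no)
        );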

  • Google Closure: Setting Input for AutoComplete Dynamically

    - by amuzed
    The Google Closure (GC) JavaScript library makes it very easy to create an AutoComplete UI, as this demo shows: http://closure-library.googlecode.com/svn/trunk/closure/goog/demos/autocomplete-basic.html . Basically, all we have to do is define an array and pass it on as one of the parameters. I'd like to be able to update the array dynamically and have the AutoComplete show the changes immediately. For example, if there are two arrays:

        list1 = ["One", "Two", "Three"];
        list2 = ["1", "2", "3"];

    and an AutoComplete has been initialized using list1:

        var suggest = new goog.ui.AutoComplete.Basic(
            list1, document.getElementById('input'), false);

    how can I update the existing AutoComplete (suggest) to use list2?

  • Reproduce PIPE functionality in IronPython

    - by Muppet Geoff
    Hi, I am hoping some genius out there can help me out with this... I am using sox to merge and resample a group of WAV files and pipe the output directly to the input of NeroAACEnc for encoding to AAC format. I originally ran the process in a script, which included:

        sox.exe d:\audio\1.wav d:\audio\2.wav d:\audio\3.wav -c 1 -r 22050 -t wav - | neroAacEnc.exe -q 0.5 -if - -of test.m4a

    This worked as expected. The '-' in the command line translates as 'pipe/redirect input/output (stdin/stdout)' -- so sox pipes to stdout, neroAacEnc reads from stdin, and the | joins them together. I then migrated the whole solution to Python, and the equivalent command became:

        from subprocess import call, Popen, PIPE

        runwav = Popen(['sox.exe', 'd:\audio\1.wav', 'd:\audio\2.wav', 'd:\audio\3.wav',
                        '-c', '1', '-r', '22050', '-t', 'wav', '-'],
                       shell=False, stdout=PIPE)
        runm4b = call(['neroAacEnc.exe', '-q', '0.5', '-if', '-', '-of', 'test.m4a'],
                      shell=False, stdin=runwav.stdout)

    This also worked like a charm, exactly as expected. Slightly more convoluted, but hey :) Well, now I have to move it to IronPython, and the subprocess module isn't available (the partial implementation that is available doesn't have Popen/PIPE support -- plus it seems silly to add a custom library when there is probably a native alternative). I should mention here that I opted for IronPython over C# because I am comfortable with Python now -- however, there is a chance of moving it again later to C#, and I am using IronPython to ease myself into it :) I have no C# or .NET experience. So far I have the following equivalent, which sets up the two processes:

        from System.Diagnostics import Process

        wav = Process()
        wav.StartInfo.UseShellExecute = False
        wav.StartInfo.RedirectStandardOutput = True
        wav.StartInfo.FileName = 'sox.exe'
        wav.StartInfo.Arguments = 'd:\audio\1.wav d:\audio\2.wav d:\audio\3.wav -c 1 -r 22050 -t wav -'
        wav.Start()

        m4b = Process()
        m4b.StartInfo.UseShellExecute = False
        m4b.StartInfo.RedirectStandardInput = True
        m4b.StartInfo.FileName = 'neroAacEnc.exe'
        m4b.StartInfo.Arguments = '-q 0.5 -if - -of test.m4a'
        m4b.Start()

    I know that these two processes start (I can see Nero and sox in the task manager), but what I can't figure out, for the life of me, is how to string the two output/input streams together, as with the previous two solutions. I have searched and searched, so I thought I'd ask! If anyone knows either (a) how to join the two streams with the same net result as the Python and command-line versions, or (b) a better way to achieve what I am trying to do, I would be extremely grateful! Many thanks in advance, Geoff

    P.S. A code sample based off the above would be awesome :) or a specific code example of a similar process that I can easily translate... this has broked my brayne.
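
    A sketch of one way to finish the job (an assumption, not a verified solution): pump bytes by hand from sox's redirected stdout into the encoder's redirected stdin, then close the input stream so neroAacEnc sees EOF. This continues from the two Process objects set up above.

        from System import Array, Byte

        buf = Array.CreateInstance(Byte, 4096)
        src = wav.StandardOutput.BaseStream
        dst = m4b.StandardInput.BaseStream

        while True:
            n = src.Read(buf, 0, buf.Length)
            if n == 0:            # sox has finished writing
                break
            dst.Write(buf, 0, n)

        dst.Close()               # EOF for the encoder
        m4b.WaitForExit()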

  • ActionController::MethodNotAllowed

    - by Lowgain
    I have a Rails model called 'audioclip'. I originally created a scaffold with a 'new' action, which I replaced with 'new_record' and 'new_upload', because there are two ways to attach audio to this model. Going to /audioclips/new_record doesn't work, because it takes 'new_record' as if it were an id. Instead of changing this, I was trying to just create '/record_clip' and '/upload_clip' paths. So in my routes.rb I have:

        map.record_clip '/record_clip', :controller => 'audioclips', :action => 'new_record'
        map.upload_clip '/upload_clip', :controller => 'audioclips', :action => 'new_upload'

    When I navigate to /record_clip, I get:

        ActionController::MethodNotAllowed
        Only get, head, post, put, and delete requests are allowed.

    I'm not extremely familiar with the inner workings of routing yet. What is the problem here? (If it helps, I have these two statements above map.resources :audioclips.)
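
    For comparison, a sketch of the resource-route alternative (Rails 2 syntax; an addition, not from the question): declaring the two actions as collection routes keeps /audioclips/new_record from being parsed as an id in the first place.

        # GET /audioclips/new_record and GET /audioclips/new_upload
        map.resources :audioclips,
                      :collection => { :new_record => :get, :new_upload => :get }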

  • Should strongly typed partial views on one page in asp.net mvc-2 have one combined view model?

    - by Kai
    Hi guys, I have a question about ASP.NET MVC 2 strongly typed partial views and view models. I was just wondering whether I can (or should) have two strongly typed partial views on one page without implementing a whole new view model for that page. For example, I have a page that displays profiles, but also has an inline form to add a quick contact. Each of these entities already has its own view model, i.e. I have a ProfileViewModel and a ContactViewModel. So my view needs two strongly typed partial views, one using an IEnumerable list of ProfileViewModels, and one using a ContactViewModel. Is it possible or desirable to avoid making a third view model, an 'IndexViewModel' for this page, which holds a list of ProfileViewModels and a ContactViewModel? Is not implementing this third view model bad practice, or is it tidier because it results in fewer view models? Thanks!
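
    For reference, the third view model being weighed here would only be a few lines -- a sketch in C#, with names as described in the question:

        public class IndexViewModel
        {
            public IEnumerable<ProfileViewModel> Profiles { get; set; }
            public ContactViewModel NewContact { get; set; }
        }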

  • Style Switcher & Text Resizer Combined?

    - by Stephen
    Hi there, I've come across various style switchers that allow you to change the stylesheet (i.e. light, dark, high contrast), and various text resizers that allow you to resize the text (usually with three A's: small, medium and large). However, I can't seem to find a single switcher/resizer that combines the two by allowing permutations of them, i.e. so the user can choose a dark background with small text, or a dark background with large text, etc. I can only seem to get this working where the user can choose one or the other (large text or high contrast, not a combination of the two). Any ideas on anything that may be suitable for this at all? Thanks, Stephen
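
    One simple way to get the permutations (a sketch, an addition to the question): keep the two preferences independent and combine them as CSS classes on the body element, with one set of stylesheet rules per class.

        // theme: 'light' | 'dark' | 'contrast'; size: 'small' | 'medium' | 'large'
        // (names are invented for illustration)
        function applyPreferences(theme, size) {
            document.body.className = 'theme-' + theme + ' text-' + size;
        }

        // e.g. dark background with large text:
        applyPreferences('dark', 'large');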

  • IBM WebSphere vs Oracle Fusion

    - by Hal
    I have been asked to research the advantages/disadvantages of the two application servers, but am new to the space and am having a terrible time finding an unbiased comparison of the two platforms. I understand that this is a broad question, and I hate that I can't give a very specific use case (other than that it will be an implementation in an organization without a full-time admin dedicated to management, and it will be running in a mixed environment against JD Edwards/Oracle and SQL Server). Does anyone know of any (recently published) content that does a reasonable comparison, or can anyone offer any insight into which might be the better choice and why? Any help would be greatly appreciated. Best regards, Hal
