Search Results

Search found 6690 results on 268 pages for 'worst practices'.

  • How many variables is too many when storing in $_SESSION?

    - by steve
    Hi - I'm looking for an idea of best practices here. I have a web-based application that has a number of hooks into other systems - let's say 5, and each of these 5 systems has a number of flags that determine different settings the user has selected in those systems, let's say 5 settings per system (so 5*5). I am storing the status of these settings in the user's session variables and was wondering whether that is a sufficient way of doing it? I'm learning PHP as I go along, so I'm not sure about any pitfalls this could run me into!
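
    Twenty-five scalar flags are nowhere near any practical session-size limit; the more common pitfall is scattering them across 25 loose $_SESSION keys. A minimal sketch of grouping them under a single nested array instead (the system and setting names here are hypothetical):

        <?php
        session_start();

        // One $_SESSION key holds all integration settings,
        // rather than 25 separate top-level entries.
        $_SESSION['integration_settings'] = array(
            'crm'     => array('notify' => true,  'sync' => false /* ... */),
            'billing' => array('notify' => false, 'sync' => true  /* ... */),
            // ... one entry per connected system
        );

        // Reading a flag back, with a fallback if it was never set:
        $notify = isset($_SESSION['integration_settings']['crm']['notify'])
            ? $_SESSION['integration_settings']['crm']['notify']
            : false;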

    Read the article

  • Using function arguments as local variables

    - by Rubys
    Something like this (yes, this doesn't deal with some edge cases - that's not the point):

        int CountDigits(int num)
        {
            int count = 1;
            while (num >= 10)
            {
                count++;
                num /= 10;
            }
            return count;
        }

    What's your opinion about this? That is, using function arguments as local variables. Both are placed on the stack, and performance is pretty much identical; I'm wondering about the best-practices aspects of this. I feel like an idiot when I add an additional and quite redundant line to that function consisting of int numCopy = num, however it does bug me. What do you think? Should this be avoided?

    Read the article

  • Modern way to validate POST-data in MVC 2

    - by zerkms
    There are a lot of articles devoted to working with data in MVC, and nothing about MVC 2. So my question is: what is the proper way to handle a POST-query and validate it? Assume we have 2 actions. Both of them operate on the same entity, but each action has its own separate set of object properties that should be bound automatically. For example: action "A" should bind only the "Name" property of the object, taken from the POST-request; action "B" should bind only the "Date" property of the object, taken from the POST-request. As far as I understand, we cannot use the Bind attribute in this case. So - what are the best practices in MVC 2 for handling POST-data and validating it?
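
    One common pattern (a sketch, not the only option - the model and action names here are illustrative) is to give each action its own small view model, so the default model binder can only ever bind the properties that action owns, and DataAnnotations handle the validation:

        // Assumes System.ComponentModel.DataAnnotations and System.Web.Mvc.
        public class EditNameModel
        {
            [Required]
            public string Name { get; set; }
        }

        public class EditDateModel
        {
            [Required]
            public DateTime? Date { get; set; }
        }

        [HttpPost]
        public ActionResult A(EditNameModel model)
        {
            if (!ModelState.IsValid)
                return View(model);              // redisplay with validation errors
            // map model.Name onto the entity and save...
            return RedirectToAction("Index");
        }

        [HttpPost]
        public ActionResult B(EditDateModel model)
        {
            if (!ModelState.IsValid)
                return View(model);
            // map model.Date onto the entity and save...
            return RedirectToAction("Index");
        }

    Since the binder never sees properties that aren't on the view model, no Bind attribute whitelisting is needed.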

    Read the article

  • How to create custom CSS "on the fly" based on account settings in a Django site?

    - by sdolan
    So I'm writing a Django-based website that allows users to select a color scheme through an administration interface. I already have middleware/context processors that link the current request (based on domain) to the account. My question is how to dynamically serve the CSS with the account's custom color scheme. I see two options: (1) add a CSS block to the base template that overrides the styles with variables passed in through a context processor, or (2) use a custom URL (e.g. "/static/dynamic/css//styles.css") that gets routed to a view that grabs all the necessary values and creates the CSS file. I'm content with either option, but was wondering if anyone else out there has dealt with similar problems and could give some insight as to "Best Practices".
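
    For the second option, a minimal sketch of the dynamic-CSS view (assuming the existing middleware has already attached the account to the request; the template and attribute names are illustrative, using Django 1.x-era APIs):

        # views.py
        from django.shortcuts import render_to_response

        def account_css(request):
            # request.account is assumed to be set by the existing middleware.
            # styles.css is a template full of {{ scheme.* }} placeholders.
            return render_to_response('styles.css',
                                      {'scheme': request.account.color_scheme},
                                      mimetype='text/css')

    Whichever option you pick, cache the result (or version the URL) so the scheme isn't recomputed on every page load.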

    Read the article

  • Converting a table-based layout into a div/CSS-based one

    - by nimo9367
    I'm supposed to rewrite the UI for a rather large web application. The thing is that the layout is completely based on tables, and if I could somehow semi-automatically convert the tables into divs it would save me a huge amount of time. What are the best practices when doing something like this? Is this a good idea at all? The application uses layout files (containing something similar to helpers) that are parsed into HTML at runtime, and the application itself also outputs HTML at specific places. So the work will consist of converting these helpers and the HTML-output code within the application.

    Read the article

  • Test Driven Development (TDD) with Rails

    - by macek
    I am looking for TDD resources that are specific to Rails. I've seen the Rails Guide: The Basics of Creating a Rails Plugin, which really spurred my interest in the topic. I have the Agile Development with Rails book and I see there's some testing-related information there. However, it seems like the author takes you through the steps of building the app, then adds testing afterward. This isn't really Test Driven Development. Ideally, I'd like a book on this, but a collection of other tutorials or articles would be great if such a book doesn't exist. Things I'd like to learn:

      - Primary goal: best practices
      - Unit testing
      - How to utilize fixtures
        - Possibly using existing development data in place of fixtures - what's the community standard here?
      - Writing tests for plugins
      - Testing with session data
        - User is logged in
        - User can access URL /foo/bar
      - Testing success of sending email

    Thanks for any help!

    Read the article

  • Model association changes in production environment, specifically converting a model to polymorphic?

    - by dustmoo
    Hi everyone, I was hoping I could get feedback on major changes to how a model works in an app that is already in production. In my case I have a model Record that has_many PhoneNumbers. Currently it is a typical has_many/belongs_to association, with a record having many PhoneNumbers. Of course, I now have a feature of adding temporary, user-generated records, and these records will have PhoneNumbers too. I 'could' just add the user_record_id to the PhoneNumber model, but wouldn't it be better for this to be a polymorphic association? And if so, if you change how a model associates, how in the heck would I update the production database without breaking everything? >.< Anyway, just looking for best practices in a situation like this. Thanks!
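
    For reference, a sketch of what the polymorphic conversion plus the data migration could look like (assuming conventional Rails 2-era names; adjust to the real schema):

        # Models: phone numbers now belong to any "callable" owner.
        class PhoneNumber < ActiveRecord::Base
          belongs_to :callable, :polymorphic => true
        end

        class Record < ActiveRecord::Base
          has_many :phone_numbers, :as => :callable
        end

        # Migration: add the polymorphic columns, backfill, then drop the old key.
        class MakePhoneNumbersPolymorphic < ActiveRecord::Migration
          def self.up
            add_column :phone_numbers, :callable_id, :integer
            add_column :phone_numbers, :callable_type, :string
            execute "UPDATE phone_numbers SET callable_id = record_id, callable_type = 'Record'"
            remove_column :phone_numbers, :record_id
          end

          def self.down
            add_column :phone_numbers, :record_id, :integer
            execute "UPDATE phone_numbers SET record_id = callable_id WHERE callable_type = 'Record'"
            remove_column :phone_numbers, :callable_id
            remove_column :phone_numbers, :callable_type
          end
        end

    Run it against a backup first; the backfill is the step that breaks things if any rows don't fit the assumption.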

    Read the article

  • What is a good rule for when to prepend members with 'this' (C#)?

    - by RichAmberale
    If I am accessing a member field, property, or method, I'm never sure when I should prepend it with 'this'. I am not asking about cases where it is required, like in the case where a local variable has the same name. I am talking about cases where the meaning is exactly the same. Which is more readable? Are there any standards, best practices, or rules of thumb I should be following? Should it just be consistent throughout a class, or an entire code base?

    Read the article

  • Ways to unit test OAuth for different services in Ruby?

    - by viatropos
    Are there any best practices for writing unit tests when, 90% of the time while building the OAuth connecting class, I need to actually be logged in to the remote service? I am building a rubygem that logs in to Twitter/Google/MySpace, etc., and the hardest part is making sure I have the settings right for that particular provider, and I would like to write tests for that. Is there a recommended way to do that? If I did mocks or stubs, I'd still have to spend that 90% of the time figuring out how to use the service, and would end up writing tests after the fact instead of before...
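
    One hedged middle ground: capture one real, known-good exchange per provider while you figure the service out, then replay it with a stubbing library such as WebMock so the suite itself never touches the network (the gem class below is hypothetical; the URL and body stand in for the recorded values):

        require 'test/unit'
        require 'webmock/test_unit'

        class TwitterOauthTest < Test::Unit::TestCase
          def test_request_token_is_parsed
            # Replay a previously captured response instead of logging in live.
            stub_request(:post, "https://api.twitter.com/oauth/request_token").
              to_return(:status => 200,
                        :body   => "oauth_token=abc&oauth_token_secret=def")

            token = MyGem::TwitterClient.new.fetch_request_token  # hypothetical API
            assert_equal "abc", token.token
          end
        end

    The VCR gem automates exactly this record-then-replay workflow, which fits the "explore live first, lock it down as a test afterward" reality you describe.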

    Read the article

  • Do you leave historical code commented out in classes that you update?

    - by 18Rabbit
    When you need to obsolete a section of code (say either the business rules changed, or the old system has been reworked to use a new framework or something) do you delete it from the file or do you comment it out and then put in the new functionality? If you comment it out, do you leave a note stating why it was removed and what it was originally intended to do? I ask mainly because I've done a lot of contract work for different places over the years and sometimes it's like excavating a tomb to find the actual code that is still being used. Why comment it out and leave it in the file if source control has a record of what used to be there? If you comment out a method do you also comment out/delete any methods that were exclusively used by that method? What do you think the best practices for this should be?

    Read the article

  • What is best practice as far as using perl-isms (idiomatic expressions) in Perl?

    - by DVK
    A couple of years back I participated in writing the best practices/coding style for our (fairly large and often Perl-using) company. It was done by a committee of "senior" Perl developers. As anything done by consensus, it had parts which everyone disagreed with. Duh. The part that rubbed people the wrong way the most was a strong recommendation NOT to use many Perlisms (loosely defined as code idioms not present in, say, C++ or Java), such as "Avoid using '... unless X;' constructs". The main rationale posited for rules like this one was that non-Perl developers would otherwise have a much harder time with the Perl code base. The assumption here, I guess, is that Perl code jockeys are a rarer breed overall - and among new hires to the company - than non-Perlers. I was wondering whether SO has any good arguments to support or reject this logic... it is mostly academic curiosity at this point, as the company's Perl coding standard is ossified and will never be revised again as far as I'm aware. P.S. Just to be clear, the question is in the context I noted - the answer for an all-Perl smaller development shop is obviously a resounding "use Perl to its maximum capability".

    Read the article

  • How to populate a generic list of objects in C# from a SQL database

    - by developr
    I am just learning ASP.NET/C# and trying to incorporate best practices into my applications. Everything that I read says to layer my applications into DAL, BLL, UI, etc. based on separation of concerns. Instead of passing DataTables around, I am thinking about using custom objects so that I am loosely coupled to my data layer and can take advantage of IntelliSense in VS. I assume these objects would be considered DTOs? First, where do these objects reside in my layers? BLL, DAL, other? Second, when populating from SQL, should I loop through a data reader to populate the list, or first fill a data table and then loop through the table to populate the list? I know you should close the database connection as soon as possible, but it seems like even more overhead to populate the data table and then loop through that for the list. Third, everything I see these days says use Linq2SQL. I am planning to learn Linq2SQL, but at this time I am working with a legacy database that doesn't have foreign keys set up, and I do not have the ability to fix that atm. Also, I want to learn more about C# before I start getting into ORM solutions like NHibernate. At the same time I don't want to type out all the connection and SQL plumbing for every query. Is it OK to use the Enterprise Library DAAB for now?
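
    On the second question: the reader can populate the list directly - no intermediate DataTable - and with using blocks the connection closes the moment the loop ends. A sketch (the DTO and table names are made up):

        using System.Collections.Generic;
        using System.Data.SqlClient;

        public class Customer    // simple DTO: state only, no behavior
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public static List<Customer> GetCustomers(string connectionString)
        {
            var customers = new List<Customer>();
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand("SELECT Id, Name FROM Customers", conn))
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        customers.Add(new Customer
                        {
                            Id = reader.GetInt32(0),
                            Name = reader.GetString(1)
                        });
                    }
                }
            }   // connection is closed/disposed here
            return customers;
        }

    A common layering puts the DTOs in a shared domain project that both the DAL and BLL reference, so neither layer depends on the other for its types.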

    Read the article

  • Logic inside an enum

    - by Vivin Paliath
    My colleagues and I were having a discussion regarding logic in enums. My personal preference is to not have any sort of logic in Java enums (although Java provides the ability to do that). The discussion in this case centered around having a convenience method inside the enum that returned a map:

        public enum PackageType {
            Letter("01", "Letter"),
            ..
            Tube("02", "Packaging Tube");

            private String packageCode;
            private String packageDescription;
            ..

            public static Map<String, String> toMap() {
                Map<String, String> map = new LinkedHashMap<String, String>();
                for (PackageType packageType : PackageType.values()) {
                    map.put(packageType.getPackageCode(),
                            packageType.getPackageDescription());
                }
                return map;
            }
        }

    My personal preference is to pull this out into a service. The argument for having the method inside the enum centered around convenience: the idea was that you don't have to go to a service to get it, but can query the enum directly. My argument centered around separation of concerns and abstracting any kind of logic out to a service. I didn't think "convenience" was a strong argument for putting this method inside an enum. From a best-practices perspective, which one is better? Or does it simply come down to a matter of personal preference and code style?

    Read the article

  • Help improving a simple assembly function

    - by MPelletier
    I just handed in this function in an assignment. It is done (hence no homework tag). But I would like to see how this can be improved. Essentially, the function sums the squares of all the integers between 1 and the given number, using the following formula: n(n+1)(2n+1)/6, where n is the maximum number. The function below is made to catch any overflow and return 0 should any occur.

        UInt32 sumSquares(const UInt32 number)
        {
            int result = 0;
            __asm {
                mov eax, number  // move number in eax
                mov edx, 2       // move 2 in edx
                mul edx          // multiply (2n)
                jo  end          // jump to end if overflow
                add eax, 1       // addition (2n+1)
                jo  end          // jump to end if overflow
                mov ecx, eax     // move (2n+1) in ecx
                mov ebx, number  // move number in ebx
                add ebx, 1       // addition (n+1)
                jo  end          // jump to end if overflow
                mov eax, number  // move number in eax for multiplication
                mul ebx          // multiply n(n+1)
                jo  end          // jump to end if overflow
                mul ecx          // multiply n(n+1)(2n+1)
                jo  end          // jump to end if overflow
                mov ebx, 6       // move 6 in ebx
                div ebx          // divide by 6, the result will be in eax
                mov result, eax  // move eax in result
            end:
            }
            return result;
        }

    Basically, I want to know what I can improve in there. In terms of best practices, mostly. One thing sounds obvious: a smarter overflow check (with a single check for whatever maximum input would cause an overflow).
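
    On that single check: mul sets the overflow flag when the product no longer fits in eax, so the first failure is the first n where n(n+1)(2n+1) exceeds 2^32 - 1. Assuming a 32-bit UInt32, that happens at n = 1290 (1290 * 1291 * 2581 is just over 2^32, while 1289 * 1290 * 2579 = 4,288,386,990 still fits). A C sketch with the same contract - return 0 wherever the assembly version would have overflowed:

        #include <stdint.h>

        /* Sum of squares 1..n via n(n+1)(2n+1)/6; returns 0 on overflow,
         * mirroring the assembly version's behavior. 1289 is the largest n
         * for which the 32-bit intermediate product still fits. */
        uint32_t sumSquares(uint32_t n)
        {
            if (n > 1289)
                return 0;                 /* the one overflow check */
            return (uint32_t)(n * (n + 1ULL) * (2 * n + 1) / 6);
        }

    In the assembly itself the same idea is a single "mov eax, number / cmp eax, 1289 / ja end" at the top, replacing all six jo checks.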

    Read the article

  • Database structure and source control - best practice

    - by Paddy
    Background: I came from several years working in a company where all the database objects were stored in source control, one file per object. We had a list of all the objects that was maintained when new items were added (to allow us to have scripts run in order and handle dependencies), and a VB script that ran to create one big script for running against the database. All the tables were 'create if not exists' and all the SPs etc. were drop-and-recreate. Up to the present: I am now working in a place where the database is the master and there is no source control for DB objects, but we do use Red Gate's tools for updating our production database (SQL Compare), which is very handy and requires little work.

    Question: How do you handle your DB objects? I like to have them under source control (and, as we're using Git, I'd like to be able to handle merge conflicts in the scripts, rather than the DB), but I'm going to be pressed to get past the ease of using SQL Compare to update the database. I don't really want us updating scripts in Git and then using SQL Compare to update the production database from our dev DB, as I'd rather have 'one version of the truth', but I don't really want to get into re-writing a custom bit of software to bundle the whole lot of scripts together. I think that Visual Studio Database Edition may do something similar to this, but I'm not sure if we will have the budget for it. I'm sure that this has been asked to death, but I can't find anything that seems to quite have the answer I'm looking for. Similar to this, but not quite the same: http://stackoverflow.com/questions/340614/what-are-the-best-practices-for-database-scripts-under-code-control

    Read the article

  • Proper way to implement IXmlSerializable?

    - by Greg
    Once a programmer decides to implement IXmlSerializable, what are the rules and best practices for implementing it? I've heard that GetSchema() should return null and ReadXml should move to the next element before returning. Are these true? And what about WriteXml: should it write a root element for the object or is it assumed that the root is already written? How should child objects be treated and written? Here's a sample of what I have now. I'll update it as I get good responses.

        public class Calendar : IEnumerable<Gvent>, IXmlSerializable
        {
            public XmlSchema GetSchema() { return null; }

            public void ReadXml(XmlReader reader)
            {
                if (reader.MoveToContent() == XmlNodeType.Element &&
                    reader.LocalName == "Calendar")
                {
                    _Name = reader["Name"];
                    _Enabled = Boolean.Parse(reader["Enabled"]);
                    _Color = Color.FromArgb(Int32.Parse(reader["Color"]));
                    if (reader.ReadToDescendant("Event"))
                    {
                        while (reader.MoveToContent() == XmlNodeType.Element &&
                               reader.LocalName == "Event")
                        {
                            var evt = new Event();
                            evt.ReadXml(reader);
                            _Events.Add(evt);
                        }
                    }
                    reader.Read();
                }
            }

            public void WriteXml(XmlWriter writer)
            {
                writer.WriteAttributeString("Name", _Name);
                writer.WriteAttributeString("Enabled", _Enabled.ToString());
                writer.WriteAttributeString("Color", _Color.ToArgb().ToString());
                foreach (var evt in _Events)
                {
                    writer.WriteStartElement("Event");
                    evt.WriteXml(writer);
                    writer.WriteEndElement();
                }
            }
        }

        public class Event : IXmlSerializable
        {
            public XmlSchema GetSchema() { return null; }

            public void ReadXml(XmlReader reader)
            {
                if (reader.MoveToContent() == XmlNodeType.Element &&
                    reader.LocalName == "Event")
                {
                    _Title = reader["Title"];
                    _Start = DateTime.FromBinary(Int64.Parse(reader["Start"]));
                    _Stop = DateTime.FromBinary(Int64.Parse(reader["Stop"]));
                    reader.Read();
                }
            }

            public void WriteXml(XmlWriter writer)
            {
                writer.WriteAttributeString("Title", _Title);
                writer.WriteAttributeString("Start", _Start.ToBinary().ToString());
                writer.WriteAttributeString("Stop", _Stop.ToBinary().ToString());
            }
        }

    Read the article

  • Nested dereferencing arrows in Perl: to omit or not to omit?

    - by DVK
    In Perl, when you have a nested data structure, it is permissible to omit dereferencing arrows at the second and deeper levels of nesting. In other words, the following two syntaxes are identical:

        my $hash_ref = { 1 => [ 11, 12, 13 ], 3 => [31, 32] };
        my $elem1 = $hash_ref->{1}->[1];
        my $elem2 = $hash_ref->{1}[1]; # exactly the same as above

    Now, my question is: is there a good reason to choose one style over the other? It seems to be a popular bone of stylistic contention (just on SO, I accidentally bumped into this and this in the space of 5 minutes). So far, none of the usual suspects says anything definitive: perldoc merely says "you are free to omit the pointer dereferencing arrow". Conway's "Perl Best Practices" says "whenever possible, dereference with arrows", but it appears to apply only to the context of dereferencing the main reference, not the optional arrows at the second level of nested data structures. "Mastering Perl for Bioinformatics" author James Tisdall doesn't give a very solid preference either: "The sharp-witted reader may have noticed that we seem to be omitting arrow operators between array subscripts. (After all, these are anonymous arrays of anonymous arrays of anonymous arrays, etc., so shouldn't they be written $array->[$i]->[$j]->[$k]?) Perl allows this; only the arrow operator between the variable name and the first array subscript is required. It makes things easier on the eyes and helps avoid carpal tunnel syndrome. On the other hand, you may prefer to keep the dereferencing arrows in place, to make it clear you are dealing with references. Your choice." Personally, I'm on the side of "always put arrows in, since it's more readable and obvious you're dealing with a reference".

    Read the article

  • Correct OOP design without getters?

    - by kane77
    I recently read that getters/setters are evil, and I have to say it makes sense. Yet when I started learning OOP, one of the first things I learned was "encapsulate your fields", so I learned to create a class, give it some fields, create getters and setters for them, and create a constructor where I initialize these fields. And every time some other class needs to manipulate this object (or, for instance, display it), I pass it the object and it manipulates it using getters/setters. I can see problems with this approach, but how do I do it right? For instance, displaying/rendering an object that is a "data" class - let's say Person, which has a name and date of birth. Should the class have a method for displaying the object, where some Renderer would be passed as an argument? Wouldn't that violate the principle that a class should have only one purpose (in this case, storing state), so it should not care about the presentation of this object? Can you suggest some good resources where best practices in OOP design are presented? I'm planning to start a project in my spare time and I want it to be my learning project in correct OOP design.
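
    For what it's worth, the usual "tell, don't ask" answer to the Person example looks something like this sketch (the names are illustrative): Person pushes its state to a renderer interface instead of exposing getters, so presentation stays out of Person while Person stays in control of its data.

        import java.time.LocalDate;

        // The presentation concern lives behind an interface...
        interface PersonRenderer {
            void render(String name, LocalDate dateOfBirth);
        }

        class Person {
            private final String name;
            private final LocalDate dateOfBirth;

            Person(String name, LocalDate dateOfBirth) {
                this.name = name;
                this.dateOfBirth = dateOfBirth;
            }

            // ...and Person hands its state to it, rather than
            // exposing getName()/getDateOfBirth() to the world.
            void renderOn(PersonRenderer renderer) {
                renderer.render(name, dateOfBirth);
            }
        }

    Whether that counts as "caring about presentation" is exactly the debate: Person only knows the interface, not any concrete renderer.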

    Read the article

  • Software development metrics and reporting

    - by David M
    I've had some interesting conversations recently about software development metrics, in particular how they can be used in a reasonably large organisation to help development teams work better. I know there have been Stack Overflow questions about which metrics are good to use - like this one - but my question is more about which metrics are useful to which stakeholders, and at what level of aggregation. As an example, my view is that code coverage is a useful metric in the following ways (and maybe others):

      - For a team's own internal use, when combined with other measurements.
      - For facilitating/enabling/mentoring teams, where it might be instructive when considered on a team-by-team basis as a trend (e.g. if teams A and B have coverage this month of 75 and 50, I'd be more concerned with team A than B if the previous month they'd had 80 and 40).
      - For senior management, when presented as an aggregated statistic across a number of teams or a whole department.

    But I don't think it's useful for senior management to see this on a team-by-team basis, as this encourages artificial attempts to bolster coverage with tests that merely exercise, rather than test, code. I'm in an organisation with a couple of levels in its management hierarchy, but where the vast majority of managers are technically minded and able (with many still getting their hands dirty). Some of the development teams are leading the way in driving towards agile development practices, but others lag, and there is now a serious mandate from the top for this to be the way the organisation works. A couple of us are starting a programme to encourage this. In this sort of an organisation, what sort of metrics do you think are useful, to whom, why, and at what level of aggregation? I don't want people to feel their performance is being assessed based on a metric that they can artificially influence; at the same time, the senior management are going to want some sort of evidence that progress is being made. What advice or caveats can you provide based on experience in your own organisations? EDIT: We definitely want to use metrics as a tool for organisational improvement, not as a tool for individual performance measurement.

    Read the article

  • REST API Best practice: How to accept as input a list of parameter values

    - by whatupwilly
    Hi All, we are launching a new REST API and I wanted some community input on best practices around how we should format input parameters. Right now, our API is very JSON-centric (it only returns JSON). The debate of whether we want/need to return XML is a separate issue. As our API output is JSON-centric, we have been going down a path where our inputs are a bit JSON-centric too, and I've been thinking that may be convenient for some but weird in general. For example, to get a few product details where multiple products can be pulled at once, we currently have:

        http://our.api.com/Product?id=["101404","7267261"]

    Should we simplify this as:

        http://our.api.com/Product?id=101404,7267261

    Or is having JSON input handy? More of a pain? We may want to accept both styles, but does that flexibility actually cause more confusion and headaches (maintainability, documentation, etc.)? A more complex case is when we want to offer more complex inputs, for example multiple filters on search:

        http://our.api.com/Search?term=pumas&filters={"productType":["Clothing","Bags"],"color":["Black","Red"]}

    We don't necessarily want to put the filter types (e.g. productType and color) as request parameter names like this:

        http://our.api.com/Search?term=pumas&productType=["Clothing","Bags"]&color=["Black","Red"]

    because we wanted to group all filter input together. In the end, does this really matter? It may well be that there are so many JSON utils out there that the input type just doesn't matter that much. I know our JavaScript clients making AJAX calls to the API may appreciate the JSON inputs to make their life easier. Thanks, Will

    Read the article

  • Why doesn't every class in the .Net framework have a corresponding interface?

    - by Thorsten Lorenz
    Since I started to develop in a test/behavior driven style, I have appreciated the ability to mock out every dependency. Since mocking frameworks like Moq work best when told to mock an interface, I now implement an interface for almost every class I create, because most likely I will have to mock it out in a test eventually. Well, programming to an interface is good practice anyway. At times, my classes take dependencies on .NET classes (e.g. FileSystemWatcher, DispatcherTimer). It would be great in that case to have an interface, so I could depend on an IDispatcherTimer instead, and be able to pass it a mock and simulate its behavior to see if my system under test reacts correctly. Unfortunately, neither of the above-mentioned classes implements such an interface, so I have to resort to creating adapters that do nothing but inherit from the original class and conform to an interface that I can then use. Here is such an adapter for the DispatcherTimer and the corresponding interface:

        using System;
        using System.Windows.Threading;

        public interface IDispatcherTimer
        {
            #region Events
            event EventHandler Tick;
            #endregion

            #region Properties
            Dispatcher Dispatcher { get; }
            TimeSpan Interval { get; set; }
            bool IsEnabled { get; set; }
            object Tag { get; set; }
            #endregion

            #region Public Methods
            void Start();
            void Stop();
            #endregion
        }

        /// <summary>
        /// Adapts the DispatcherTimer class to implement the <see cref="IDispatcherTimer"/> interface.
        /// </summary>
        public class DispatcherTimerAdapter : DispatcherTimer, IDispatcherTimer
        {
        }

    Although this is not the end of the world, I wonder why the .NET developers didn't take the minute to make their classes implement these interfaces from the get-go. It puzzles me especially since there is now a big push for good practices from inside Microsoft. Does anyone have any (maybe inside) information on why this contradiction exists?

    Read the article

  • Database design for invoices, invoice lines & revisions

    - by FreshCode
    I'm designing the 2nd major iteration of a relational database for a franchise's CRM (with lots of refactoring) and I need help on the best database design practices for storing job invoices and invoice lines, with a strong audit trail of any changes made to each invoice.

    Current schema:

        Invoices
            InvoiceId (int)             -- primary key
            JobId (int)
            StatusId (tinyint)          -- Pending, Paid or Deleted
            UserId (int)                -- auditing user
            Reference (nvarchar(256))   -- unique natural string key with invoice number
            Date (datetime)
            Comments (nvarchar(MAX))

        InvoiceLines
            LineId (int)                -- primary key
            InvoiceId (int)             -- related to Invoices above
            Quantity (decimal(9,4))
            Title (nvarchar(512))
            Comment (nvarchar(512))
            UnitPrice (smallmoney)

    Revision schema:

        InvoiceRevisions
            RevisionId (int)            -- primary key
            InvoiceId (int)
            JobId (int)
            StatusId (tinyint)          -- Pending, Paid or Deleted
            UserId (int)                -- auditing user
            Reference (nvarchar(256))   -- unique natural string key with invoice number
            Date (datetime)
            Total (smallmoney)

    Schema design considerations:

    1. Is it sensible to store an invoice's Paid or Pending status? All payments received for an invoice are stored in a Payments table (e.g. Cash, Credit Card, Cheque, Bank Deposit). Is it meaningful to store a "Paid" status in the Invoices table if all the income related to a given job's invoices can be inferred from the Payments table?

    2. How to keep track of invoice line item revisions? I can track revisions to an invoice by storing status changes along with the invoice total and the auditing user in an invoice revision table (see InvoiceRevisions above), but keeping track of an invoice line revision table feels hard to maintain. Thoughts?

    3. Tax: how should I incorporate sales tax (or 14% VAT in SA) when storing invoice data?
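
    On point 2, one common pattern (a sketch only, mirroring the table names above) is a line-revision table that snapshots every line each time its parent invoice gains a revision row, so any revision can be replayed in full:

        -- Snapshot of each invoice line, tied to the invoice-level revision row.
        CREATE TABLE InvoiceLineRevisions (
            LineRevisionId int IDENTITY PRIMARY KEY,
            RevisionId     int NOT NULL REFERENCES InvoiceRevisions (RevisionId),
            LineId         int NOT NULL,          -- the line this snapshot describes
            Quantity       decimal(9,4) NOT NULL,
            Title          nvarchar(512),
            Comment        nvarchar(512),
            UnitPrice      smallmoney NOT NULL
        );

        -- Capturing a revision then becomes one INSERT..SELECT:
        -- INSERT INTO InvoiceLineRevisions (RevisionId, LineId, Quantity, Title, Comment, UnitPrice)
        -- SELECT @RevisionId, LineId, Quantity, Title, Comment, UnitPrice
        -- FROM InvoiceLines WHERE InvoiceId = @InvoiceId;

    It is write-heavy but dead simple to maintain, which tends to matter more than storage for audit tables.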

    Read the article

  • SQL Server CLR stored procedures in data processing tasks - good or evil?

    - by Gart
    In short: is it a good design solution to implement most of the business logic in CLR stored procedures? I have read much about them recently, but I can't figure out when they should be used, what the best practices are, and whether they are good enough or not. For example, my business application needs to parse a large fixed-length text file, extract some numbers from each line in the file, apply some complex business rules to those numbers (involving regex matching, pattern matching against data from many tables in the database, and such), and as a result of this calculation update records in the database. There is also a GUI for the user to select the file, view the results, etc. This application seems to be a good candidate for the classic 3-tier architecture: the data layer would access the database; the logic layer would run as a WCF service and implement the business rules, interacting with the data layer; and the GUI layer would be the means of communication between the logic layer and the user. Now, thinking about this design, I can see that most of the business rules could instead be implemented in SQL CLR and stored in SQL Server: I might store all my raw data in the database, run the processing there, and get the results. I see some advantages and disadvantages to this solution:

    Pros:
      - The business logic runs close to the data, meaning less network traffic.
      - It can process all the data at once, possibly utilizing parallelism and an optimal execution plan.

    Cons:
      - Scattering of the business logic: some part is here, some part is there.
      - A questionable design solution that may encounter unknown problems.
      - It is difficult to implement a progress indicator for the processing task.

    I would like to hear all your opinions about SQL CLR. Does anybody use it in production? Are there any problems with such a design? Is it a good thing?

    Read the article

  • Reading component parameters and setting defaults

    - by donut
    I'm pulling my hair out over this because it should be simple, but I can't get it to do the right thing. What's the best-practice way of doing the following? I've created a custom component that extends <s:Label>, so the Label now has the additional properties color2, color3, and value, which can be set like this and are used in the skin:

        <s:CustomLabel text="Some text" color2="0x939202" color3="0x999999" value="4.5" />

    If I don't supply these parameters, I'd like some defaults to be set. So far I've had some success, but it doesn't work 100% of the time, which leads me to think that I'm not following best practices when setting those defaults. I do it now like this:

        [Bindable] private var myColor2:uint = 0x000000;
        [Bindable] private var myColor3:uint = 0x000000;
        [Bindable] private var myValue:Number = 10.0;

    Then in the init function, I do a variation of this to set the default:

        myValue = hostComponent.value;
        myValue = (hostComponent.value) ? hostComponent.value : 4.5;

    Sometimes it works, sometimes it doesn't, depending on the type of variable I'm trying to set. I eventually decided to read them as Strings and then convert them to the desired type, but it seems that this also works only half the time.
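
    A minimal sketch of the more conventional approach (assuming a Spark Label subclass; the property names match the question): declare the properties as public vars with their defaults right on the declaration, so an MXML attribute simply overrides the default and no init-time checks are needed.

        package components
        {
            import spark.components.Label;

            public class CustomLabel extends Label
            {
                // Defaults live on the declarations; color2="0x939202" in MXML
                // overrides them, and untouched properties keep their default.
                [Bindable] public var color2:uint = 0x000000;
                [Bindable] public var color3:uint = 0x000000;
                [Bindable] public var value:Number = 4.5;
            }
        }

    Note the ternary trap in the original: (hostComponent.value) is false when the supplied value is 0 or NaN, so a legitimate 0 gets silently replaced by the fallback - probably why it only works for some types.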

    Read the article
