Search Results

Search found 1122 results on 45 pages for 'expose'.


  • ASP.NET/C#: How to use a Subclassed Control on a Page?

    - by Bob Kaufman
    I've subclassed DropDownList to add functionality specific to my application: public class MyDropDownList : DropDownList { ... } ... then referenced it in Web.Config, which is where I figure things start to go wrong: <pages theme="Main"> <controls> <add tagPrefix="bob" tagName="MyDropDownList" src="~/Components/MyDropDownList.cs" /> </controls> </pages> my reference to it does not work: <tr><td>Category</td> <td><bob:MyDropDownList runat="server" ID="Category"... /> and my best clue is the compiler error message: "The file 'src' is not a valid [sic] here because it doesn't expose a type." I figure I'm misapplying knowledge of how to create a Web User Control here. What I want to be able to do is refer to this control on an ASP.NET page just like I would the parent DropDownList. Refactoring back into a Web User Control that contains a DropDownList is not desirable, because I want to apply a RequiredFieldValidator to it.
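
    A hedged sketch of the registration that normally works for a compiled control subclass (the namespace and assembly names below are illustrative, not taken from the question): tagName/src is for .ascx user controls, while a code-only class is registered by namespace and assembly.

        // ~/Components/MyDropDownList.cs
        using System.Web.UI.WebControls;

        namespace MyApp.Components
        {
            public class MyDropDownList : DropDownList
            {
                // application-specific behaviour goes here
            }
        }

        // Web.config -- register by namespace/assembly instead of tagName/src
        // (point the assembly attribute at wherever the class gets compiled):
        //
        //   <pages theme="Main">
        //     <controls>
        //       <add tagPrefix="bob" namespace="MyApp.Components" assembly="MyApp" />
        //     </controls>
        //   </pages>
        //
        // The markup then stays as in the question:
        //   <bob:MyDropDownList runat="server" ID="Category" />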

    Read the article

  • Vertex Buffers in OpenGL

    - by JB
    I'm making a small 3d graphics game/demo for personal learning. I know d3d9 and quite a bit about d3d11 but little about OpenGL at the moment, so I'm intending to abstract out the actual rendering of the graphics so that my scene graph and everything "above" it needs to know little about how to actually draw the graphics. I intend to make it work with d3d9, then add d3d11 support and finally OpenGL support, just as a learning exercise to learn about 3d graphics and abstraction. I don't know much about OpenGL at this point though, and don't want my abstract interface to expose anything that isn't simple to implement in OpenGL. Specifically I'm looking at vertex buffers. In d3d they are essentially an array of structures, but looking at the OpenGL interface the equivalent seems to be vertex arrays. However these seem to be organised rather differently: you need a separate array for vertices, one for normals, one for texture coordinates etc, and set them with glVertexPointer, glTexCoordPointer etc. I was hoping to be able to implement a VertexBuffer interface much like the DirectX one, but it looks like in d3d you have an array of structures and in OpenGL you need a separate array for each element, which makes an efficient common abstraction quite hard to find. Is there any way to use OpenGL in a similar way to DirectX? Or any suggestions on how to come up with a higher level abstraction that will work efficiently with both systems?

    Read the article

  • Regular input in ASP.NET

    - by coffeeaddict
    Here's an example of a regular standard HTML input for my radio button list:

        <label><input type="radio" name="rbRSelectionGroup" checked value="0" />None</label>
        <asp:Repeater ID="rptRsOptions" runat="server">
            <ItemTemplate>
                <div>
                    <label><input type="radio" name="rbRSelectionGroup" value='<%# ((RItem)Container.DataItem).Id %>' /><%# ((RItem)Container.DataItem).Name %></label>
                </div>
            </ItemTemplate>
        </asp:Repeater>

    I removed some stuff for this thread, one thing being that I put an R in place of a name that I do not want to expose here, so just an FYI. Now, I would assume that this should happen:

    1. The page loads the first time and the None radio button is checked/defaulted.
    2. I go and select a different radio button in the list.
    3. I do an F5 refresh in my browser.
    4. The None radio button is pre-selected again after it has come back from the refresh.

    But #4 is not happening. It's retaining the radio button that I selected in #2 and I don't know why. I mean, regular HTML is stateless, so what could be holding this value? I want this to act like a normal input. I know the question of "why not use an ASP.NET control" will come up. Well, there are a few reasons:

    1. The stupid RadioButtonList bug that everyone knows about.
    2. I just want to brush up more on standard input tags.
    3. We are not moving to MVC, so this is as close as I'll get, and it's OK because the rest of the team is on board with having mixed ASP.NET controls and standard HTML controls in our pages.

    Anyway, my main question here is that I'm surprised it's retaining the change in selection after postback.

    Read the article

  • How to make std::vector's operator[] do bounds checking in DEBUG but not in RELEASE

    - by Edison Gustavo Muenz
    I'm using Visual Studio 2008. I'm aware that std::vector has bounds checking with the at() function and has undefined behaviour if you try to access something using the operator [] incorrectly (out of range). I'm curious if it's possible to compile my program with bounds checking. This way the operator[] would use the at() function and throw a std::out_of_range whenever something is out of bounds. The release mode would be compiled without bounds checking for operator[], so the performance doesn't degrade. I started thinking about this because I'm migrating an app that was written using Borland C++ to Visual Studio, and in a small part of the code I have this (with i=0, j=1): v[i][j]; //v is a std::vector<std::vector<int> > The size of the vector 'v' is [0][1] (so element 0 of the vector has only one element). This is undefined behaviour, I know, but Borland is returning 0 here while VS is crashing. I like the crash better than returning 0, so if I can get more 'crashes' by the std::out_of_range exception being thrown, the migration would be completed faster (it would expose more of the bugs that Borland was hiding).

    Read the article

  • Returning Database Blobs in TurboGears 2.x / FCGI / Lighttpd extremely slow

    - by Tom
    Hey everyone, I am running a TG2 app on lighttpd via flup/fastcgi. We are reading images (~30kb each) from BlobFields in a MySQL database and return those images with a custom mime type via a controller method. Caching these images on the hard disk makes no sense because they change with every request; the only reason we cache them in the DB is that creating these images is quite expensive and the data used to create the images is also present in plain text on the website. Now to the problem itself: When returning such an image, things get extremely slow. The code runs totally fine on paster itself with no visible delay, but as soon as it's running via fcgi/lighttpd the described phenomenon happens. I profiled the method of my controller that returns my blob, and the entire method runs in a few milliseconds, but when "return" executes, the entire app hangs for roughly 10 seconds. We could not reproduce the same error with PHP on FCGI. This only seems to happen with TurboGears or Pylons. Here, for your consideration, is the piece of source code concerned: @expose(content_type=CUSTOM_CONTENT_TYPE) def return_img(self, img_id): """ Return a DB persisted image when requested """ img = model.Images.by_id(img_id) #get image from DB response.headers['content-type'] = 'image/png' return img.data # this causes the app to hang for 10 seconds

    Read the article

  • Is it possible for a function called from within an object to have access to that object's scope?

    - by Elliot Bonneville
    I can't think of a way to explain what I'm after better than I've done in the title, so I'll repeat it. Is it possible for an anonymous function called from within an object to have access to that object's scope? The following code block should explain what I'm trying to do better than I can: function myObj(testFunc) { this.testFunc = testFunc; this.Foo = function Foo(test) { this.test = test; this.saySomething = function(text) { alert(text); }; }; var Foo = this.Foo; this.testFunc.apply(this); } var test = new myObj(function() { var test = new Foo(); test.saySomething("Hello world"); }); When I run this, I get an error: "Foo is not defined." How do I ensure that Foo will be defined when I call the anonymous function? Here's a jsFiddle for further experimentation. Edit: I am aware of the fact that adding the line var Foo = this.Foo; to the anonymous function I pass in to my instance of myObj will make this work. However, I'd like to avoid having to expose the variable inside the anonymous function--do I have any other options?

    Read the article

  • Which frameworks (and associated languages) support class replacement?

    - by Alix
    Hi, I'm writing my master's thesis, which deals with AOP in .NET, among other things, and I mention the lack of support for replacing classes at load time as an important factor in the fact that there are currently no .NET AOP frameworks that perform true dynamic weaving -- not without imposing the requirement that woven classes must extend ContextBoundObject or MarshalByRefObject or expose all their semantics on an interface. You can however do this with the JVM thanks to ClassFileTransformer: You extend ClassFileTransformer. You subscribe to the class load event. On class load, you rewrite the class and replace it. All this is very well, but my project director has asked me, quite at the last minute, to give him a list of frameworks (and associated languages) that do / do not support class replacement. I really have no time to look for this now: I wouldn't feel comfortable just doing superficial research and potentially putting erroneous information in my thesis. So I ask you, oh almighty programming community, can you help out? Of course, I'm not asking you to research this yourselves. Simply, if you know for sure that a particular framework supports / doesn't support this, leave it as an answer. If you're not sure, please don't forget to point it out. Thanks so much!

    Read the article

  • Which is the better C# class design for dealing with read+write versus readonly

    - by DanM
    I'm contemplating two different class designs for handling a situation where some repositories are read-only while others are read-write. (I don't foresee any need for a write-only repository.) Class Design 1 -- provide all functionality in a base class, then expose applicable functionality publicly in subclasses: public abstract class RepositoryBase { protected virtual void SelectBase() { // implementation... } protected virtual void InsertBase() { // implementation... } protected virtual void UpdateBase() { // implementation... } protected virtual void DeleteBase() { // implementation... } } public class ReadOnlyRepository : RepositoryBase { public void Select() { SelectBase(); } } public class ReadWriteRepository : RepositoryBase { public void Select() { SelectBase(); } public void Insert() { InsertBase(); } public void Update() { UpdateBase(); } public void Delete() { DeleteBase(); } } Class Design 2 -- read-write class inherits from read-only class: public class ReadOnlyRepository { public void Select() { // implementation... } } public class ReadWriteRepository : ReadOnlyRepository { public void Insert() { // implementation... } public void Update() { // implementation... } public void Delete() { // implementation... } } Is one of these designs clearly stronger than the other? If so, which one and why? P.S. If this sounds like a homework question, it's not, but feel free to use it as one if you want :)
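
    For comparison, a minimal sketch of a third, interface-based shape (the generic entity parameter T and the method parameters are illustrative additions, not part of the question): callers that only need reads depend on the read-only interface, while a read-write repository implements both.

        public interface IReadOnlyRepository<T>
        {
            T Select(int id);
        }

        public interface IRepository<T> : IReadOnlyRepository<T>
        {
            void Insert(T item);
            void Update(T item);
            void Delete(T item);
        }

        // A read-write implementation satisfies both interfaces; a read-only
        // implementation only ever knows about Select.
        public class CustomerRepository : IRepository<Customer>
        {
            public Customer Select(int id) { /* query */ return null; }
            public void Insert(Customer item) { /* insert */ }
            public void Update(Customer item) { /* update */ }
            public void Delete(Customer item) { /* delete */ }
        }

        public class Customer { }

    Methods that should never write can then take IReadOnlyRepository<T> in their signatures, so the compiler enforces the read-only contract instead of the class hierarchy.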

    Read the article

  • Liskov Substitution and Composition

    - by FlySwat
    Let's say I have a class like this: public sealed class Foo { public void Bar() { // Do Bar Stuff } } And I want to extend it to add something beyond what an extension method could do... My only option is composition: public class SuperFoo { private Foo _internalFoo; public SuperFoo() { _internalFoo = new Foo(); } public void Bar() { _internalFoo.Bar(); } public void Baz() { // Do Baz Stuff } } While this works, it is a lot of work... however I still run into a problem: public void AcceptsAFoo(Foo a) I can pass in a Foo here, but not a SuperFoo, because C# has no idea that SuperFoo truly does qualify in the Liskov Substitution sense... This means that my extended class via composition is of very limited use. So, the only way to fix it is to hope that the original API designers left an interface lying around: public interface IFoo { void Bar(); } public sealed class Foo : IFoo { // etc } Now, I can implement IFoo on SuperFoo (which, since SuperFoo already exposes Foo's members, is just a matter of changing the class declaration). public class SuperFoo : IFoo And in the perfect world, the methods that consume Foo would consume IFoos: public void AcceptsAFoo(IFoo a) Now, C# understands the relationship between SuperFoo and Foo due to the common interface and all is well. The big problem is that .NET seals lots of classes that would occasionally be nice to extend, and they don't usually implement a common interface, so API methods that take a Foo would not accept a SuperFoo and you can't add an overload. So, for all the composition fans out there... How do you get around this limitation? The only thing I can think of is to expose the internal Foo publicly, so that you can pass it on occasion, but that seems messy.
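
    One workaround sometimes used with sealed classes is an implicit conversion that hands out the wrapped instance, so existing APIs that accept a Foo keep working (a sketch reusing the Foo/SuperFoo shapes above, not a general recommendation):

        public class SuperFoo
        {
            private readonly Foo _internalFoo = new Foo();

            public void Bar() { _internalFoo.Bar(); }
            public void Baz() { /* Do Baz Stuff */ }

            // Lets a SuperFoo be passed to AcceptsAFoo(Foo a): the conversion
            // hands over the wrapped Foo, so only Foo's members are visible
            // on the other side.
            public static implicit operator Foo(SuperFoo s)
            {
                return s._internalFoo;
            }
        }

    It is really just sugar over exposing the inner Foo, so the same caveat applies: whatever the callee does through that Foo reference bypasses anything SuperFoo adds.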

    Read the article

  • Which languages support class replacement?

    - by Alix
    Hi, I'm writing my master's thesis, which deals with AOP in .NET, among other things, and I mention the lack of support for replacing classes at load time as an important factor in the fact that there are currently no .NET AOP frameworks that perform true dynamic weaving -- not without imposing the requirement that woven classes must extend ContextBoundObject or MarshalByRefObject or expose all their semantics on an interface. You can however do this in Java thanks to ClassFileTransformer: You extend ClassFileTransformer. You subscribe to the class load event. On class load, you rewrite the class and replace it. All this is very well, but my project director has asked me, quite at the last minute, to give him a list of languages that do / do not support class replacement. I really have no time to look for this now: I wouldn't feel comfortable just doing superficial research and potentially putting erroneous information in my thesis. So I ask you, oh almighty programming community, can you help out? Of course, I'm not asking you to research this yourselves. Simply, if you know for sure that a particular language supports / doesn't support this, leave it as an answer. If you're not sure, please don't forget to point it out. Thanks so much!

    Read the article

  • Does adding to a method group count as using a variable?

    - by Vaccano
    I have the following code example taken from the code of a Form: protected void SomeMethod() { SomeOtherMethod(this.OnPaint); } private void SomeOtherMethod(Action<PaintEventArgs> onPaint) { onPaint += MyPaint; } protected void MyPaint(PaintEventArgs e) { // paint some stuff } The second method (SomeOtherMethod) has ReSharper complaining at me. It says of onPaint that "Value assigned is not used in any execution path". To my mind it was used, because I added a method to the list of methods called when a paint was done. But usually when ReSharper tells me something like this it is because I am not understanding some part of C#. Like maybe when the param goes out of scope the item I added to the list gets removed (or something like that). I thought I would ask here to see if anyone knows what ReSharper is trying to tell me. (Side note: I usually just override OnPaint. But I am trying to get OnPaint to call a method in another class. I don't want to expose that method publicly, so I thought I would pass in the OnPaint group and add to it.)
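
    A sketch of why ReSharper complains and one way around it (method names reused from the snippet above; returning the combined delegate is one illustration, not the only fix): delegates are immutable, so += inside the method only rebinds the local parameter and the caller never sees the addition.

        using System;
        using System.Windows.Forms;

        public class MyForm : Form
        {
            // "onPaint += MyPaint" builds a new combined delegate and assigns it
            // to the local parameter only -- the caller's delegate is untouched,
            // hence "value assigned is not used in any execution path".
            private Action<PaintEventArgs> SomeOtherMethod(Action<PaintEventArgs> onPaint)
            {
                onPaint += MyPaint;   // rebinds only this local parameter
                return onPaint;       // so hand the combined delegate back
            }

            protected void SomeMethod()
            {
                // the caller keeps the combined delegate and invokes
                // paintChain(e) from an OnPaint override or wherever painting happens
                Action<PaintEventArgs> paintChain = SomeOtherMethod(this.OnPaint);
            }

            protected void MyPaint(PaintEventArgs e)
            {
                // paint some stuff
            }
        }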

    Read the article

  • Threadsafe way of exposing keySet()

    - by Jake
    This must be a fairly common occurrence where I have a map and wish to thread-safely expose its key set: public class MyClass { Map<String,String> map = // ... public final Set<String> keys() { // returns key set } } Now, if my "map" is not thread-safe, this is not safe: public final Set<String> keys() { return map.keySet(); } And neither is: public final Set<String> keys() { return Collections.unmodifiableSet(map.keySet()); } So I need to create a copy, such as: public final Set<String> keys() { return new HashSet(map.keySet()); } However, this doesn't seem safe either, because that constructor traverses the elements of the parameter and add()s them. So while this copying is going on, a ConcurrentModificationException can happen. So then: public final Set<String> keys() { synchronized(map) { return new HashSet(map.keySet()); } } seems like the solution. Does this look right?

    Read the article

  • Should the code being tested compile to a DLL or an executable file?

    - by uriDium
    I have a solution with two projects: one for the production code and another for the unit tests. I did this as per the suggestions I got here from SO. I noticed that the Debug folder includes the production code in executable form. I used NUnit to run the tests after removing the executable and they all fail trying to find the executable, so it definitely is trying to find it. I then did a quick read to find out which is better, a DLL or an executable. It seems that a DLL is much faster, as DLLs share memory space with the caller, whereas communication between executables is slower. Unfortunately our production code needs to be an executable, so the unit tests will be slightly slower. I am not too worried about that. But the project does rely on code written in another library which is also in executable format at the moment. Should the projects that expose some sort of SDK rather be compiled to a DLL, and the projects that use the SDK be compiled to an executable?
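
    A common shape for this, sketched below with invented names: keep the logic in a class library that both the thin production executable and the NUnit project reference, so the tests never care which .exe ships.

        // MyApp.Core.dll -- all production logic lives here; this is what the
        // unit test project references.
        namespace MyApp.Core
        {
            public static class Calculator
            {
                public static int Add(int a, int b)
                {
                    return a + b;
                }
            }
        }

        // MyApp.exe -- a thin shell over the library.
        namespace MyApp
        {
            public static class Program
            {
                public static void Main()
                {
                    System.Console.WriteLine(MyApp.Core.Calculator.Add(1, 2));
                }
            }
        }

    That said, a test project can also reference an .exe assembly directly, so the split is mostly about keeping the executable thin rather than a hard requirement.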

    Read the article

  • Going "behind Hibernate's back" to update foreign key values without an associated entity

    - by Alex Cruise
    Updated: I wound up "solving" the problem by doing the opposite! I now have the entity reference field set as read-only (insertable=false updatable=false), and the foreign key field read-write. This means I need to take special care when saving new entities, but on querying, the entity properties get resolved for me. I have a bidirectional one-to-many association in my domain model, where I'm using JPA annotations and Hibernate as the persistence provider. It's pretty much your bog-standard parent/child configuration, with one difference being that I want to expose the parent's foreign key as a separate property of the child alongside the reference to a parent instance, like so: @Entity public class Child { @Id @GeneratedValue Long id; @Column(name="parent_id", insertable=false, updatable=false) private Long parentId; @ManyToOne(cascade=CascadeType.ALL) @JoinColumn(name="parent_id") private Parent parent; private long timestamp; } @Entity public class Parent { @Id @GeneratedValue Long id; @OrderBy("timestamp") @OneToMany(mappedBy="parent", cascade=CascadeType.ALL, fetch=FetchType.LAZY) private List<Child> children; } This works just fine most of the time, but there are many (legacy) cases when I'd like to put an invalid value in the parent_id column without having to create a bogus Parent first. Unfortunately, Hibernate won't save values assigned to the parentId field due to insertable=false, updatable=false, which it requires when the same column is mapped to multiple properties. Is there any nice way to "go behind Hibernate's back" and sneak values into that field without having to drop down to JDBC or implement an interceptor? Thanks!

    Read the article

  • Use date operations on a computed field filter in a view

    - by stephenhay
    I'd like to expose a filter in Views based on a Computed Field. This field should use the date operation "less than". Apparently, a date type is not available to computed fields (varchar is). However, using varchar results in Views treating the field as a string rather than as a date, without the operation I'm looking for. So I tried using a timestamp instead of a date, which allowed me to use an integer data type. The exposed view gives me a "less than" operation, but as it's still not a date, the user must type in a timestamp. Not handy. So I'm thinking the best way to do this is to create a new (hidden) CCK field which is a date field, and set the value of this field to the value of the Computed Field. I can then use the new CCK field for my exposed filter. But I'm not getting anywhere with this, as I'm unsure how to set another CCK field from a Computed Field. To return the value to the computed field itself, I'm using the standard: $node_field[0]['value'] = $newestdate; I would like to set the value of the new CCK field to $newestdate as well. Can anyone help?

    Read the article

  • Start with remoting or with WCF

    - by Sheldon
    Hi. I'm just starting with distributed application development. I need to create (all by myself) an enterprise application for document management. That application will run on an intranet (within the firewall; no internet access is required now, BUT it probably will be later). The application needs to manage images that will be stored within MySQL Server (as blobs), and those images will then be recovered by the app and eventually one or more of them will be converted to PDF. Performance is the most important non-functional requirement. I have a couple of doubts. What do you suggest I use, .NET Remoting or WCF over TCP/IP? (I think the second one is best, since the moment I need to expose the business logic over the internet I can just change the protocol.) Where do you suggest I do the transformation of the images to PDF files? I'm using iText. (I have thought about hosting the business logic in IIS and exposing it via WCF, with that business logic responsible for getting the images and transforming them to PDF, because IIS and the MySQL Server are on the same physical machine.) I ask about where to do the transformation because the app must be accessible from multiple devices, and, for example, for mobile devices the PDF may not be necessary. Thank you very much in advance.
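
    A minimal WCF sketch of the second option (the service and operation names here are invented for illustration): the contract stays the same whether it is exposed over net.tcp on the intranet today or an HTTP binding later, since only the endpoint configuration changes.

        using System.ServiceModel;

        [ServiceContract]
        public interface IDocumentService
        {
            // returns the stored image (or a generated PDF) as raw bytes
            [OperationContract]
            byte[] GetDocument(int documentId, bool asPdf);
        }

        public class DocumentService : IDocumentService
        {
            public byte[] GetDocument(int documentId, bool asPdf)
            {
                // fetch the blob from MySQL here and, if asPdf is true,
                // convert it with iText before returning
                return new byte[0];
            }
        }

        // Hosted in IIS (or self-hosted), the endpoint might use NetTcpBinding
        // now and an HTTP binding later -- a configuration change, not a code change.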

    Read the article

  • How would I go about sharing variables in a C++ class with Lua?

    - by Nicholas Flynt
    I'm fairly new to Lua; I've been working on trying to implement Lua scripting for logic in a Game Engine I'm putting together. I've had no trouble so far getting Lua up and running through the engine, and I'm able to call Lua functions from C and C functions from Lua. The way the engine works now, each Object class contains a set of variables that the engine can quickly iterate over to draw or process for physics. While game objects all need to access and manipulate these variables in order for the Game Engine itself to see any changes, they are free to create their own variables, and Lua is exceedingly flexible about this, so I don't foresee any issues. Anyway, currently the Game Engine side of things is sitting in C land, and I really want it to stay there for performance reasons. So in an ideal world, when spawning a new game object, I'd need to be able to give Lua read/write access to this standard set of variables as part of the Lua object's base class, which its game logic could then proceed to run wild with. So far, I'm keeping two separate tables of objects in place -- Lua spawns a new game object which adds itself to a numerically indexed global table of objects, and then proceeds to call a C++ function, which creates a new GameObject class and registers the Lua index (an int) with the class. So far so good; C++ functions can now see the Lua object and easily perform operations or call functions in Lua land using dostring. What I need to do now is take the C++ variables, part of the GameObject class, and expose them to Lua, and this is where Google is failing me. I've encountered a very nice method here which details the process using tags, but I've read that this method is deprecated in favor of metatables. What is the ideal way to accomplish this? Is it worth the hassle of learning how to pass class definitions around using libBind or some equivalent method, or is there a simple way I can just register each variable (once, at spawn time) with the global Lua object? What's the "current" best way to do this, as of Lua 5.1.4?

    Read the article

  • Cross-domain data access in JavaScript

    - by vit
    We have an ASP.Net application hosted on our network and exposed to a specific client. This client wants to be able to import data from their own server into our application. The data is retrieved with an HTTP request and is CSV formatted. The problem is that they do not want to expose their server to our network and are requesting the import to be done on the client side (all clients are from the same network as their server). So, what needs to be done is: They request an import page from our server The client script on the page issues a request to their server to get CSV formatted data The data is sent back to our application This is not a challenge when both servers are on the same domain: a simple hidden iframe or something similar will do the trick, but here what I'm getting is a cross-domain "access denied" error. They also refuse to change the data format to return JSON or XML formatted data. What I tried and learned so far is: Hidden iframe -- "access denied" XMLHttpRequest -- behaviour depends on the browser security settings: may work, may work while nagging a user with security warnings, or may not work at all Dynamic script tags -- would have worked if they could have returned data in JSON format IE client data binding -- the same "access denied" error Is there anything else I can try before giving up and saying that it will not be possible without exposing their server to our application, changing their data format or changing their browser security settings? (DNS trick is not an option, by the way).

    Read the article

  • Patterns / Solutions to complicated Feature Management

    - by yclian
    Hi all, My company develops a CDN / web-hosting solution. We have a middleware that serves as a business logic layer and exposes a web service for the front-end. I would like to seek a clean solution to feature management - there are uncertainties and ugly workarounds/solutions in the software of which the devs would say "when it happens or is broken, we will fix it". For example, here are the features that a web publisher can have: a sites limit, a bandwidth limit, and the SSL feature plus SSL configuration per site. If we downgrade a web publisher who has 10 sites down to 5 sites, we can choose not to suspend the remaining 5 sites, or we can prompt for suspension before the downgrade. For the bandwidth limit, the downgrade is easy: when the bandwidth check happens, if the publisher has exceeded it, we suspend his account. For the SSL feature, every SSL configuration is tied to a site, so what should happen to these configuration objects when the SSL feature is downgraded from enabled to disabled? As you can see, there are many different situations and different ways of handling them. I can make a system that examines the impacts and prompts the user to make changes before the downgrade/upgrade. Or a system that ignores the impacts and just upgrades/downgrades. Bad. Or a system designed in a way that the client code needs to be aware of the complex feature matrix (or I can expose a helper to the client code to check if a feature is not DEFUNCT). There can be many other ways; I am still thinking, but puzzled. I am wondering how you would tackle this issue, and whether there are any recommended patterns, books or software that you think I can refer to? Appreciate your help.
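
    A small sketch of the "expose a helper" idea mentioned above (the enum values and type names are illustrative): the middleware reports an explicit per-feature state, so client code asks one question instead of re-deriving the upgrade/downgrade matrix.

        public enum FeatureState
        {
            Enabled,    // feature active and within limits
            Suspended,  // temporarily over a limit (e.g. bandwidth exceeded)
            Defunct     // removed by a downgrade; configuration kept but inert
        }

        public interface IFeatureService
        {
            FeatureState GetState(int publisherId, string featureName);
        }

        // Client-side usage: guard SSL configuration screens, site creation, etc.
        public static class FeatureGuard
        {
            public static bool IsUsable(IFeatureService features, int publisherId, string featureName)
            {
                return features.GetState(publisherId, featureName) == FeatureState.Enabled;
            }
        }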

    Read the article

  • Typical Hadoop setup for remote job submission

    - by Artii
    So I am still a bit new to Hadoop and am currently in the process of setting up a small test cluster on Amazon AWS. My question relates to tips on structuring the cluster so it is possible to submit jobs from remote machines. Currently I have 5 machines: 4 are basically the Hadoop cluster with the NameNodes, YARN etc., and one machine is used as a manager machine (Cloudera Manager). I am going to describe my thinking process on the setup, and if anyone can chime in on the points I am not clear about, that would be great. I was thinking about what the best setup for a small cluster would be. So I decided to expose only the manager machine and probably use that to submit all the jobs through it. The other machines will see each other etc., but not be accessible from the outside world. I have a conceptual idea of how to do this, but I am not sure how to properly go about doing it, so if anyone could point me in the right direction that would be great. Also, another big point is that I want to be able to submit jobs to the cluster through the exposed machine from a client machine (which might be Windows). I am not so clear on this setup either. Do I need to have Hadoop installed on the machine in order to use the normal hadoop commands, and to write/submit jobs, say, from Eclipse or something similar? So to sum it up, my questions are: Is this an OK setup for a small test cluster? How can I go about using one exposed machine to submit/route jobs to the cluster, without having any of the Hadoop nodes on it? How do I set up a client machine to submit jobs to a remote cluster, and is there an example of how to do it on Windows? Also, are there any reasons not to use Windows as a client machine in this setup? Thanks, I would greatly appreciate any advice or help on this.

    Read the article

  • Avoiding mass propagation of properties and events for exposure to ViewModels.

    - by firoso
    I have an MVVM application I am developing that is to the point where I'm ready to start putting together a user interface (my client code is largely functional). I'm now running into the issue that I'm trying to get my application data to where I need it so that it can be consumed by the view model and then bound to the view. Unfortunately, it seems that I've either got a few structural oversights, or I'm just going to have to face the reality that I need to propagate events and raise an excessive number of them to notify view models that their properties have changed. Let me go into some examples of my issue: I have a class "Unit" contained in a class "Test", contained in a class "Session" contained in a class "TestManager" which is contained in "TestDataModel" which is utilized by "TestViewModel" which is databound to by my "TestView" .... WHOA. Now, consider that Unit (the bottom of the hierarchy) has a property called "Results" that is updated periodically. I want to expose that to my view model and then databind it to my view. Trouble is, the only way I can really think to do this is to propagate events WAY up the chain that say "I've been updated!" and then request the new value... This seems like an awful way to do this. Alternatively, I could register a static event and raise it, and have the appropriate "unit view model" grab the event and request the update. This SEEMS better... but... static events? Is that a taboo idea? Also, having an expression like: TestDataModel.TestManager.Session.Test.Unit.Results[i] seems REALLY gross to have on a view model. I know this all reeks of a bad design issue, but I can't figure out what I did wrong. Should I be using more singleton/container-controlled-lifetime type objects? Register object instances with static helper containers? Obviously these are hard questions to answer without being intimate with the existing structure, but if you've run into situations like this, what did you do to refactor? Should I just live with this, add mass events, and propagate them?
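
    One sketch of the "static event" route discussed above (the class and member names are made up for illustration): the deeply nested Unit publishes through a single aggregator and the matching view model subscribes directly, so nothing has to be re-raised through Test/Session/TestManager/TestDataModel.

        using System;

        public static class ResultsMessenger
        {
            // unit id + new result; a weak-event or messenger library could
            // replace this if subscriber lifetime/leaks become a concern
            public static event Action<string, double> ResultUpdated;

            public static void Publish(string unitId, double result)
            {
                var handler = ResultUpdated;
                if (handler != null)
                {
                    handler(unitId, result);
                }
            }
        }

        // Deep in the model, a Unit would call:
        //   ResultsMessenger.Publish(this.Id, newResult);
        // and the corresponding unit view model subscribes in its constructor:
        //   ResultsMessenger.ResultUpdated += OnResultUpdated;
        // remembering to unsubscribe when the view model is torn down.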

    Read the article

  • Connect Rails model to non-rails database

    - by the_snitch
    I'm creating a new web application (Rails 3 beta), pieces of which will access data from a legacy MySQL database that a current PHP application is using. I do not wish to modify the legacy DB schema; I just want to be able to read/write to it, with the Rails application also having its own database using ActiveRecord for the newer stuff. I'm using MySQL for the Rails app, so I have the adapter installed. What is the best way to do this? For example, I want contacts to come from the old database. Should I create a contacts controller and manually call SQL to get the variables for the views? Or should I create a Contact model, define attributes that match the fields in the database, and am I able to use it like Contact.mail_address to have it call "SELECT mailaddr FROM contacts WHERE id=Contact.id"? Sorry, I've never done much in Rails outside of the standard stuff that is documented well. I'm not sure what the best approach would be. Ideally, I want the contacts to be presented to my Rails application as natively as possible, so that I can expose them RESTfully for API access. Any suggestions and code examples would be much appreciated.

    Read the article

  • Dangers when deploying Flash/Flex UI test automation hooks to production?

    - by Merlyn Morgan-Graham
    I am interested in doing automated testing against a Flex based UI. I have found out that my best options for UI automation (due to being C# controllable, good licensing conditions, etc) all seem to require that I compile test hooks into my application. Because of this, I am thinking of recommending that these hooks be compiled into our build. I have found a few places on the net that recommend not deploying bits with this instrumentation enabled, and I'd like to know why. Is it a performance drain, or a security risk? If it is a security risk, can you explain how the attack surface is increased? I am not a Flash or Flex developer, though I have some experience with threat modeling. For reference, here's the tools I'm specifically considering: QTP Selenium-Flex API I am having problems finding all the warnings/suggestions I found last night, but here's an example that I can find: http://www.riatest.com/products/getting-started.html Warning! Automation enabled applications expose all properties of all GUI components. This makes them vulnerable to malicious use. Never make automation enabled application publicly available. Always restrict access to such applications and to RIATest Loader to trusted users only. Related question (how to do conditional compilation to insert/remove those hooks): Conditionally including Flex libraries (SWCs) in mxmlc/compc ant tasks

    Read the article

  • Memory management of objects returned by methods (iOS / Objective-C)

    - by iOSNewb
    I am learning Objective-C and iOS programming through the terrific iTunesU course posted by Stanford (http://www.stanford.edu/class/cs193p/cgi-bin/drupal/) Assignment 2 is to create a calculator with variable buttons. The chain of commands (e.g. 3+x-y) is stored in a NSMutableArray as "anExpression", and then we sub in random values for x and y based on an NSDictionary to get a solution. This part of the assignment is tripping me up: The final two [methods] “convert” anExpression to/from a property list: + (id)propertyListForExpression:(id)anExpression; + (id)expressionForPropertyList:(id)propertyList; You’ll remember from lecture that a property list is just any combination of NSArray, NSDictionary, NSString, NSNumber, etc., so why do we even need this method since anExpression is already a property list? (Since the expressions we build are NSMutableArrays that contain only NSString and NSNumber objects, they are, indeed, already property lists.) Well, because the caller of our API has no idea that anExpression is a property list. That’s an internal implementation detail we have chosen not to expose to callers. Even so, you may think, the implementation of these two methods is easy because anExpression is already a property list so we can just return the argument right back, right? Well, yes and no. The memory management on this one is a bit tricky. We’ll leave it up to you to figure out. Give it your best shot. Obviously, I am missing something with respect to memory management because I don't see why I can't just return the passed arguments right back. Thanks in advance for any answers!

    Read the article

  • c# - what approach can I use to extend a group of classes that include implemented methods? (see des

    - by Greg
    Hi, I want to create an extensible package that has Topology, Node & Relationship classes. The idea is that these base classes would have the various methods necessary for basic graph traversal etc. I would then like to be able to reuse this by extending the package. For example, the base requirements might have Relationship with a parentNode & childNode. Topology would have a List of Nodes and a List of Relationships, and methods like FindChildren(int depth). The usage would then be to extend these such that additional attributes for Node and Relationship could be added etc. QUESTION - What would be the best approach to package & expose the base-level classes/methods? (It's kind of like a custom collection, but with multiple facets.) Would the following concepts come into play: Interfaces - would it be a good idea to have ITopology, INode etc., or is this not required as the user would extend these classes anyway? Abstract Classes - would the base classes be abstract classes? Custom Generic Collection - would some approach using this concept assist (and how would this work with the 3 different classes)? Thanks.
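
    One possible shape, not the only one (a sketch; the generic type parameters are my addition to the Topology/Node/Relationship names from the question): abstract base classes with self-referencing type parameters let an extending package add attributes to its own Node/Relationship types while the base traversal methods still return those derived types.

        using System.Collections.Generic;

        public abstract class NodeBase<TNode, TRel>
            where TNode : NodeBase<TNode, TRel>
            where TRel : RelationshipBase<TNode, TRel>
        {
        }

        public abstract class RelationshipBase<TNode, TRel>
            where TNode : NodeBase<TNode, TRel>
            where TRel : RelationshipBase<TNode, TRel>
        {
            public TNode ParentNode { get; set; }
            public TNode ChildNode { get; set; }
        }

        public abstract class TopologyBase<TNode, TRel>
            where TNode : NodeBase<TNode, TRel>
            where TRel : RelationshipBase<TNode, TRel>
        {
            private readonly List<TNode> nodes = new List<TNode>();
            private readonly List<TRel> relationships = new List<TRel>();

            public List<TNode> Nodes { get { return nodes; } }
            public List<TRel> Relationships { get { return relationships; } }

            // Traversal lives in the base class; the actual walk over
            // Relationships is omitted in this sketch.
            public virtual IEnumerable<TNode> FindChildren(TNode start, int depth)
            {
                yield break;
            }
        }

        // An extending package then closes the generics over its own types:
        public class CityNode : NodeBase<CityNode, RoadRelationship> { public string Name { get; set; } }
        public class RoadRelationship : RelationshipBase<CityNode, RoadRelationship> { public double Length { get; set; } }
        public class RoadTopology : TopologyBase<CityNode, RoadRelationship> { }

    Interfaces such as ITopology or INode can still be layered on top for consumers that only need to read the graph, and the base classes being abstract keeps callers from instantiating the un-extended versions.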

    Read the article
