Search Results

Search found 9254 results on 371 pages for 'approach'.

  • How to access a flash object embedded via swfobject's embedSWF?

    - by Andree
    Hi there, I have to call an ActionScript method from JavaScript, but I have a problem accessing the Flash object itself. I embed the Flash file with the help of swfobject. Previously, when I used the static publishing approach, I could easily get the Flash object by calling these methods:

        swfobject.registerObject("flash_object", "9", "expressInstall.swf");
        var flash_object = swfobject.getObjectById("flash_object");

    For technical reasons, I now have to use the dynamic publishing approach (swfobject.embedSWF). But, as mentioned in the documentation, getObjectById can only be used with the static publishing approach. How can I access the Flash object now? Cheers, Andree

    Read the article

  • Attaching a static library to an iphone/ipad application

    - by Jack
    Hello, what is the best approach to include a static library in an application for iPhone or iPad? I could choose to:

    1. compile the library myself, supplying the right platform and building a library file with the ar utility, and then add it to the project as a framework, or
    2. include the library's .c/.h sources and compile them together with the application.

    The first approach seems simpler, because I won't have to manage all the specific settings of the library I want to include, but how can I create the library for both iPhone and iPad and allow Xcode to pick the right one when linking? The second approach seems more complex, since Xcode will have to compile both my application and the library (with different settings, I suppose). How should I proceed? I can easily add the sources of the lib, but I'll have to include the make scripts so Xcode can build them in the right way. Any suggestions on how to proceed? The library I'm trying to include is libssh. (I know that this library has already been compiled and tried successfully on iPhone.) Thanks in advance.

    Read the article

  • OpenLayers, Layers: Tiled vs. single tile

    - by Chau
    Each time we add a new layer to our OpenLayers-based website (data provided primarily by a GeoServer server), we discuss whether to use a single-tile or a tiled approach. Some of the parameters we evaluate are the following.

    Using the tiled approach we get:
    - slow but continuous buildup of the viewport
    - lots of small images
    - client-side caching possibilities
    - blocking of the loading pipeline (6 requests at a time)
    - a jerky feeling when navigating during load

    Using the single-tile approach we get:
    - a smoother feeling when navigating during load
    - a time delay before the layer is loaded
    - one large image for each layer
    - no caching of the single tile

    We do a lot of data editing in the layers, so a tile cache might not be that efficient. Are there any best practices when it comes to tiling? Progressing towards infinitely fast hardware and unlimited data connections, the discussion becomes irrelevant, but which configuration do you perceive as the most user-pleasing?

    Read the article

  • typesafe NotifyPropertyChanged using linq expressions

    - by bitbonk
    From Build Your Own MVVM I have the following code that gives us typesafe NotifyOfPropertyChange calls:

        public void NotifyOfPropertyChange<TProperty>(Expression<Func<TProperty>> property)
        {
            var lambda = (LambdaExpression)property;
            MemberExpression memberExpression;
            if (lambda.Body is UnaryExpression)
            {
                var unaryExpression = (UnaryExpression)lambda.Body;
                memberExpression = (MemberExpression)unaryExpression.Operand;
            }
            else
                memberExpression = (MemberExpression)lambda.Body;
            NotifyOfPropertyChange(memberExpression.Member.Name);
        }

    How does this approach compare, performance-wise, to the standard approach of passing simple strings? Sometimes I have properties that change at a very high frequency. Am I safe to use this typesafe approach? After some first tests it does seem to make a small difference. How much CPU and memory load does this approach potentially induce?
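
    One hedged way to keep the compile-time safety while avoiding expression parsing on every notification for hot properties is to resolve the name once and cache it in a static field; a minimal sketch, where PropertyName, CounterViewModel and Raise are invented names rather than part of the snippet above:

        using System;
        using System.ComponentModel;
        using System.Linq.Expressions;

        public static class PropertyName
        {
            // Resolves the member name from the expression; intended to be called once
            // and the resulting string cached, not on every change notification.
            public static string Of<TObj, TProp>(Expression<Func<TObj, TProp>> property)
            {
                var member = property.Body as MemberExpression
                             ?? (MemberExpression)((UnaryExpression)property.Body).Operand;
                return member.Member.Name;
            }
        }

        public class CounterViewModel : INotifyPropertyChanged
        {
            // The expression is walked exactly once, at type initialisation.
            private static readonly string CountName =
                PropertyName.Of((CounterViewModel vm) => vm.Count);

            private int count;
            public int Count
            {
                get { return count; }
                set { count = value; Raise(CountName); }   // hot path: plain string, no expression work
            }

            public event PropertyChangedEventHandler PropertyChanged;

            private void Raise(string name)
            {
                var handler = PropertyChanged;
                if (handler != null) handler(this, new PropertyChangedEventArgs(name));
            }
        }

    The refactoring safety is preserved (renaming Count updates the expression), while the per-change cost is the same as the plain-string approach.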

    Read the article

  • Managing Unique IDs in stateless (web) DB4O applications

    - by Phil.Wheeler
    I'm playing around with building a new web application using DB4O - piles of fun, and some really interesting stuff learned. The one thing I'm struggling with is DB4O's current lack of support for stateless applications (i.e. web apps, mostly) and the need for automatically generated IDs. There are a number of creative and interesting approaches that I've been able to find that hook into DB4O's events, use GUIDs rather than numeric IDs, or for whatever reason avoid using any system of IDs at all. While each approach has its merits, I'm wondering if the less elegant approach might equally be the best fit. Consider the following pseudo-code:

        If ID == 0 or null
            Set ID = (typeof(myObject)).Count
        myObject.Save

    It seems like such a blindingly simple approach that it's usually about here that I start thinking, "I've missed something really obvious". Have I?
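
    For illustration only, here is the pseudo-code spelled out in C# against a hypothetical IObjectStore abstraction (deliberately not the real db4o API, whose calls are not shown in the question); the comment flags the obvious caveat for a stateless web app:

        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical stand-in for the object database session.
        public interface IObjectStore
        {
            IEnumerable<T> QueryAll<T>();
            void Store(object item);
        }

        public class Widget
        {
            public long Id { get; set; }
            public string Name { get; set; }
        }

        public static class ObjectStoreExtensions
        {
            public static void SaveWithGeneratedId(this IObjectStore store, Widget item)
            {
                if (item.Id == 0)
                {
                    // Count-as-ID: trivially simple, but two concurrent requests can read the
                    // same count, and deleting objects later makes a count reusable, so some
                    // locking or a collision check is still needed in a stateless web app.
                    item.Id = store.QueryAll<Widget>().Count() + 1;
                }
                store.Store(item);
            }
        }

    Starting at Count + 1 keeps a freshly assigned ID from ever equalling the "unassigned" value 0.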

    Read the article

  • OAuth on iPhone: using Safari or UIWebView?

    - by athanhcong
    Hi all, when implementing OAuth on the iPhone, I have a dilemma: should I use Safari or a UIWebView to open the Twitter pages for user authentication? Here are some advantages and disadvantages of each:

    - Using a UIWebView: the disadvantage is that users have to enter their credentials inside our application, which carries a phishing risk. The advantage is that this approach does not quit our app.
    - Using Safari for the user to authenticate (this approach automatically calls back into our application): the advantage is that it is less risky; the disadvantage is that we have to quit our app.

    A good reference link about this: http://fireeagle.yahoo.net/developer/documentation/oauth_best_practice Which approach do you prefer? Any response is appreciated. Thanks.

    Read the article

  • Entity Framework: Data Centric vs. Object Centric

    - by Eric J.
    I'm having a look at Entity Framework, and everything I'm reading takes a data-centric approach to explaining EF. By that I mean that the fundamental relationships of the system are first defined in the database, and objects are generated that reflect those relationships. Examples:

    - Quickstart (Entity Framework)
    - Using Entity Framework entities as business objects?

    The EF documentation implies that it's not necessary to start from the database layer, e.g. "Developers can work with a consistent application object model that can be mapped to various storage schemas". When designing a new system (simplified version), I tend to first create a class model, then generate business objects from the model, code the business-layer pieces that can't be generated, and then worry about persistence (or rather work with a DBA and let him worry about the most efficient persistence strategy). That object-centric approach is well supported by ORM technologies such as (N)Hibernate. Is there a reasonable path to an object-centric approach with EF? Will I be swimming upstream going that route? Any good starting points?
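
    For comparison, the object-centric direction looks roughly like this in the later EF releases that added code-first/POCO support (Customer and ShopContext are invented illustration names, and whether this fits depends on the EF version available):

        using System.Data.Entity;   // EntityFramework "code first" (EF 4.1 and later)

        // Plain classes come first; no designer or database-generated partial classes.
        public class Customer
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class ShopContext : DbContext
        {
            public DbSet<Customer> Customers { get; set; }
        }

        // The storage schema is then mapped onto these classes by convention
        // (or with fluent configuration), rather than the classes being generated
        // from an existing database.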

    Read the article

  • Windows Azure Queues, WCF, MSMQ integration

    - by user104295
    Hi there, I have a scenario where I need a desktop console app to communicate with a Windows Azure Queue... the most important thing is that the message is received by the server eventually. Also, the desktop app may sometimes be disconnected from the Internet. In the traditional WCF+MSMQ approach you'd be able to send a message which would be cached in MSMQ until MSMQ could reach the server's MSMQ and deliver the message. What's the equivalent when Windows Azure is the server side? Is it possible to use the same approach, where MSMQ just communicates with a Windows Azure Queue rather than an MSMQ on a Windows Server? Maybe a Windows Azure Queue is the wrong approach? I have heard about something called a message buffer, but don't know what this is (yet!). Thanks for your help, Kris
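
    Not an answer on the Azure side, but a sketch of the store-and-forward behaviour MSMQ provides, reproduced on the client against a hypothetical IQueueSender abstraction (so no real Azure Storage API names are assumed): messages that fail to send while offline are parked on local disk and drained later.

        using System;
        using System.IO;

        // Hypothetical abstraction over whatever actually pushes to the Windows Azure Queue.
        public interface IQueueSender
        {
            void Send(string message);   // assumed to throw when the service is unreachable
        }

        public class BufferedQueueSender
        {
            private readonly IQueueSender inner;
            private readonly string bufferDirectory;

            public BufferedQueueSender(IQueueSender inner, string bufferDirectory)
            {
                this.inner = inner;
                this.bufferDirectory = bufferDirectory;
                Directory.CreateDirectory(bufferDirectory);
            }

            public void Send(string message)
            {
                try
                {
                    inner.Send(message);
                }
                catch (Exception)
                {
                    // Offline or transient failure: park the message locally,
                    // much like MSMQ's outgoing queue would.
                    File.WriteAllText(Path.Combine(bufferDirectory, Guid.NewGuid() + ".msg"), message);
                }
            }

            // Call periodically (e.g. from a timer) to drain anything buffered while offline.
            public void Flush()
            {
                foreach (var file in Directory.GetFiles(bufferDirectory, "*.msg"))
                {
                    try
                    {
                        inner.Send(File.ReadAllText(file));
                        File.Delete(file);
                    }
                    catch (Exception)
                    {
                        return;   // still offline; keep the remaining files and retry on the next flush
                    }
                }
            }
        }

    The concrete sender behind IQueueSender could wrap whichever Azure queue client is in use; the buffering logic does not depend on it.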

    Read the article

  • Aop, Unity, Interceptors and ASP.NET MVC Controller Action Methods

    - by Richard Ev
    Using log4net, we would like to log all calls to our ASP.NET MVC controller action methods. The logs should include information about any parameters that were passed to the controller. Rather than hand-coding this in each action method, we hope to use an AOP approach with interceptors via Unity. We already have this working with some other classes that use interfaces (using the InterfaceInterceptor). However, we're not sure how to proceed with our controllers. Should we rework them slightly to use an interface, or is there a simpler approach?

    Edit: The VirtualMethodInterceptor seems to be the correct approach, however using it results in the following exception:

        System.ArgumentNullException: Value cannot be null.
        Parameter name: str
           at System.Reflection.Emit.DynamicILGenerator.Emit(OpCode opcode, String str)
           at Microsoft.Practices.ObjectBuilder2.DynamicMethodConstructorStrategy.PreBuildUp(IBuilderContext context)
           at Microsoft.Practices.ObjectBuilder2.StrategyChain.ExecuteBuildUp(IBuilderContext context)
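
    For reference, virtual-method interception is normally wired up along these lines with Unity's interception extension (assuming Unity 2.0; HomeController and LoggingBehavior are placeholder names, and VirtualMethodInterceptor can only see methods declared virtual, so the action methods would need to be virtual):

        using System;
        using System.Collections.Generic;
        using Microsoft.Practices.Unity;
        using Microsoft.Practices.Unity.InterceptionExtension;

        // Behaviour that logs every intercepted call; swap Console for log4net in real use.
        public class LoggingBehavior : IInterceptionBehavior
        {
            public bool WillExecute
            {
                get { return true; }
            }

            public IEnumerable<Type> GetRequiredInterfaces()
            {
                return Type.EmptyTypes;
            }

            public IMethodReturn Invoke(IMethodInvocation input, GetNextInterceptionBehaviorDelegate getNext)
            {
                Console.WriteLine("Calling {0} with {1} argument(s)",
                                  input.MethodBase.Name, input.Arguments.Count);
                return getNext()(input, getNext);   // continue down the behaviour chain
            }
        }

        // Registration (placeholder controller type):
        //   var container = new UnityContainer();
        //   container.AddNewExtension<Interception>();
        //   container.RegisterType<HomeController>(
        //       new Interceptor<VirtualMethodInterceptor>(),
        //       new InterceptionBehavior<LoggingBehavior>());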

    Read the article

  • Using 'git pull' vs 'git checkout -f' for website deployment

    - by Michelle
    I've found two common approaches to automatically deploying website updates using a bare remote repo. The first requires that the repo is cloned into the document root of the webserver, and in the post-update hook a git pull is used:

        cd /srv/www/siteA/ || exit
        unset GIT_DIR
        git pull hub master

    The second approach adds a 'detached work tree' to the bare repository. The post-receive hook uses git checkout -f to replicate the repository's HEAD into the work directory, which is the webserver's document root, i.e.

        GIT_WORK_TREE=/srv/www/siteA/ git checkout -f

    The first approach has the advantage that changes made in the website's working directory can be committed and pushed back to the bare repo (however files should not be updated on the live server). The second approach has the advantage that the git directory is not within the document root, but this is easily solved using htaccess. Is one method objectively better than the other in terms of best practice? What other advantages and disadvantages am I missing?

    Read the article

  • Fibonacci using SBN in OISC in Machine Language

    - by velociraptor
    Hello, I want to generate the Fibonacci series using SBN on an OISC architecture. My initial approach is to implement it in assembly language first and then convert it to machine language. The first steps involve storing 0 and 1 in two registers, then subtracting 1 from 0, and repeatedly subtracting in the subsequent steps. Every time this will generate a negative number, and since it's negative, it branches off and fetches the absolute-value-finding instruction. Is my approach correct? My confusion is about the meaning of OISC. Correct me if I'm wrong: if I perform a subtraction and then find an absolute value, it means that I'm using two instructions every time. Or is it that in an OISC processor both these instructions are done at the same time, which would mean that my approach is correct? Please help. Thank you all.

    Read the article

  • testing .mobile mime format with capybara / rspec

    - by Chris Beck
    For detecting and responding to mobile user agents, I'm using Mime::Type.register_alias "text/html", :mobile, and I'm wondering what the best approach is to test this with Capybara. This article suggests setting up an iPhone driver with Capybara.register_driver :iphone do |app|: http://blog.plataformatec.com.br/2011/03/configuring-user-agents-with-capybara-selenium-webdriver/ But I'd like a more flexible approach where the mime type is set via the URL extension, e.g. localhost/index.mobile, and where I can do this: visit user_path(format: :mobile). Rails understands the extension and sets the format in the params hash, but how do I get the URL helper methods to add that as a file extension to all URLs?

    Read the article

  • How can I have a Visual Studio Report(.rdlc) with account information and also a chart in the same report?

    - by Paul Mendoza
    I'm working on a report in Visual Studio 2008 with its report tooling and I'm not sure how to approach this conceptually. I have a report I want to generate. At the top of the report will be a bunch of information about a customer of our site (name, address, phone). Below that will be a chart of the purchases that customer has made each month. My problem is that I want the content at the top of the page to use a query that selects from the Users table in my database, but then I need another query that gets all of the purchases grouped by month. One way I've thought of approaching this would be to place a subreport, containing only the chart, on the parent report; the parent report would have the details of the account. Is this the correct approach?

    Read the article

  • Parameter passing Vs Table Valued Parameters Vs XML to SQL 2008 from .Net Application

    - by Harryboy
    We are working on an ASP.NET project where there are three ways one can update data in the database when multiple rows need to be inserted or updated. Let's assume we need to update an employee's education details (which could be 1, 3, 5 or 10 records).

    Methods to update the data:
    1. Pass the values as parameters (the traditional approach); if there are 10 records then 10 round trips are required.
    2. Pass the data as XML and write logic inside your stored procedure to extract it from the XML and update the table (only a single round trip required).
    3. Use table-valued parameters (only a single round trip required).

    Note: the data is available as a List, so I need to convert it to XML or some other format if I need to pass it. There are a number of places in the entire application where we need to update data in bulk (or multiple records). I just need your suggestions on:
    - Which method will be faster (please mention if there are other overheads)?
    - Any manageability or testability concerns with any approach?
    - Any other bottleneck or issue with any of the approaches (serialization/deserialization concerns or limits on the size of the data being passed)?
    - Any other method you would suggest for the same operations?

    Thanks
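
    For the table-valued parameter option (method 3 above), the .NET side looks roughly like this; dbo.EducationTableType and dbo.UpdateEmployeeEducation are invented names, and the in-memory List is copied into a DataTable whose columns match the table type:

        using System.Collections.Generic;
        using System.Data;
        using System.Data.SqlClient;

        public class EducationRow { public int EmployeeId; public string Degree; }

        public static class EducationRepository
        {
            public static void Update(string connectionString, IEnumerable<EducationRow> rows)
            {
                // Shape the in-memory list into a DataTable matching the table type's columns.
                var table = new DataTable();
                table.Columns.Add("EmployeeId", typeof(int));
                table.Columns.Add("Degree", typeof(string));
                foreach (var r in rows)
                    table.Rows.Add(r.EmployeeId, r.Degree);

                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand("dbo.UpdateEmployeeEducation", conn))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    var p = cmd.Parameters.AddWithValue("@Education", table);
                    p.SqlDbType = SqlDbType.Structured;      // table-valued parameter
                    p.TypeName = "dbo.EducationTableType";
                    conn.Open();
                    cmd.ExecuteNonQuery();                   // all rows in a single round trip
                }
            }
        }

    Note that table-valued parameters exist only in SQL Server 2008 and later, so for older servers the parameter or XML options are the ones that apply.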

    Read the article

  • How to decouple an app's agile development from a database using BDUF?

    - by Rob Wells
    G'day, I was reading the article "Database as a Fortress" by Dan Chak from the excellent book "97 Things Every Software Architect Should Know" (sanitised Amazon link), which suggests that databases should not be designed using an agile approach. There's an SO question on agile approaches and databases, "Agile development and database changes", which has some excellent answers covering agile development approaches. In fact, one of the answers supplies a brilliant idea of what's needed for each update of the DB. ;-) But after reading Dan Chak's article, I am left wondering if an agile approach is really suitable for large-scale systems. This of course leads on to the question of how best to decouple an agile approach for the application from the BDUF database design it interacts with, without adding complicated translation layers to the final design. Any suggestions? cheers,

    Read the article

  • Bash or python for changing spacing in files

    - by Werner
    Hi, I have a set of 10000 files. In all of them, the second line looks like:

        AAA 3.429 3.84

    so there is just one space (a requirement) between AAA and the two other columns. The rest of the lines in each file are completely different and correspond to 10 columns of numbers. Randomly, in around 20% of the files, and due to some errors, one gets:

        BBB  3.429 3.84

    so now there are two spaces between the first and second columns. This is a big error, so I need to fix it, changing from 2 spaces to 1 in the files where the error occurs. The first approach I thought of was to write a bash script that, for each file, reads the 3 values of the second line and then prints them with just one space, doing this for all the files. I wonder what you think about this approach and whether you could suggest something better: bash, Python or some other approach. Thanks

    Read the article

  • migrate database from sybase to mysql

    - by jindalsyogesh
    I have been trying to migrate a database from Sybase to MySQL. This is my approach:

    1. Generate POJO classes from my Sybase database using Hibernate in Eclipse.
    2. Use these POJO classes to generate the schema in the MySQL database.
    3. Then somehow migrate the data from Sybase to MySQL.

    I guess this approach should work? Please let me know if there is a better or easier approach. The thing is, I am not even able to get the first step done. I added the Hibernate plugin in Eclipse from this link: http://download.jboss.org/jbosstools/updates/stable/ I added the Sybase jar file to my project classpath, then I added a Hibernate console configuration file, then a Hibernate configuration file, and then a Hibernate code generation configuration. When I try to run the code generation configuration, I get a java.lang.NullPointerException and I have no idea how to fix it. I searched a lot of forums and tried to Google it, but I was not able to find any solution. Can anybody tell me what mistake I am making here, or point me to a Hibernate tutorial for Eclipse?

    Read the article

  • What's the correct way to stop a background process on Mac OS X?

    - by mcsheffrey
    I have an application with two components: a desktop application that users interact with, and a background process that can be enabled from the desktop application. Once the background process is enabled, it will run as a user launch agent independently of the desktop app. However, what I'm wondering is what to do when the user disables the background process. At this point I want to stop the background process, but I'm not sure what the best approach is. The three options that I see are:

    1. Use the 'kill' command. Direct, but not reliable, and it just seems somewhat "wrong".
    2. Use an NSMachPort to send an exit request from the desktop app to the background process. This is the best approach I've thought of, but I've run into an implementation problem (I'll be posting this in a separate query) and I'd like to be sure that the approach is right before going much further.
    3. Something else???

    Thank you in advance for any help/insight that you can offer.

    Read the article

  • insert into several inheritance tables with OUTPUT - SQL Server 2005

    - by csetzkorn
    Hi, I have a bunch of items – for simplicity reasons – a flat table with unique names seeded via bulk insert:

        create table #items
        (
            ItemName NVARCHAR(255)
        )

    The database has this structure:

        create table Statements (
            Id INT IDENTITY NOT NULL,
            Version INT not null,
            FurtherDetails varchar(max) null,
            ProposalDateTime DATETIME null,
            UpdateDateTime DATETIME null,
            ProposerFk INT null,
            UpdaterFk INT null,
            primary key (Id)
        )

        create table Item (
            StatementFk INT not null,
            ItemName NVARCHAR(255) null,
            primary key (StatementFk)
        )

    Here Item is a child of Statement (inheritance). I would like to insert the items in #items using a set-based approach (avoiding triggers and loops). Can this be achieved with OUTPUT in my scenario? A 'loop based' approach is just too slow, where I use something like this:

        insert into Statements (Version, FurtherDetails, ProposalDateTime, UpdateDateTime, ProposerFk, UpdaterFk)
        VALUES (1, null, getdate(), getdate(), @user_id, @user_id)

    etc. This is a start for the OUTPUT-based approach – but I am not sure whether this would work in my case, as ItemName is only inserted into Item:

        insert into Statements (
            Version, FurtherDetails, ProposalDateTime, UpdateDateTime, ProposerFk, UpdaterFk
        )
        output inserted.Id
        ... ???

    Thanks. Best wishes, Christian

    Read the article

  • Usability: Save changes using "Apply" button or after every single change?

    - by mr.b
    I am interested in hearing the opinions and experiences of fellow developers on the topic of designing user interfaces, usability- AND maintainability-wise. A common approach is to allow users to tweak options and, after the form gets "dirty", enable an "Apply" button, so the user has the possibility to back out by pressing Cancel. This is the most common approach on the Windows platform (I believe the MS usability guidelines say to do so as well). Another way is to apply changes after every single change has been made to the options. For example, the user checks some checkbox, and the change is applied; the user changes the value of some text box, and the change is applied after the box loses focus, etc. You get the point. This approach is most common on Mac OS X. Regardless of my personal opinion (which is that Apple is better at usability, but the software I usually write targets Windows users), what do you people think?

    Read the article

  • Razor support of generic extension methods

    - by Brian
    Hello, with regard to the Razor view engine, say I want to render Html.TextBoxFor<SomeModel>(i => i.Name); it doesn't seem that the inline syntax works, as in:

        @Html.TextBoxFor<SomeModel>(i => i.Name)

    This doesn't seem to work because Razor interprets the generic as an HTML tag. I could use a code-block approach, but then what's the best way to output the content? The HTML string returned from this method - do I Response.Write it, or is there a syntax for it, or what's the approach? Thanks.

    Read the article

  • Where is the chink in Google Chrome's armor?

    - by kudlur
    While browsing with Chrome, I noticed that it responds extremely fast (in comparison with IE and Firefox on my laptop) in terms of rendering pages, including JavaScript-heavy sites like Gmail. This is what the Google book on Chrome has to say:

    - tabs are hosted in a process rather than a thread
    - JavaScript is compiled using the V8 engine as opposed to interpreted
    - a new virtual machine was introduced to support JavaScript-heavy apps
    - "hidden class transitions" and dynamic optimization are applied to speed things up
    - the inefficient "conservative garbage collection" scheme is replaced with a more precise garbage collection scheme
    - Chrome introduces its own task scheduler and memory manager to manage the browser environment

    All this sounds so familiar; Microsoft has been doing such things for a long time: the Windows OS, the C++ and C# compilers, the CLR, and so on. So why isn't Microsoft, or any other browser vendor, taking Chrome's approach? Is there a flaw in Chrome's approach? If not, is the rest of the browser vendor community caught unaware by Google's approach?

    Read the article

  • Count the number of ways in which a number 'A' can be broken into a sum of 'B' numbers such that all numbers are co-prime to 'C'

    - by rajneesh2k10
    I came across the solution of a problem which involves a dynamic-programming approach, solved using a three-dimensional matrix. The link to the actual problem is: http://community.topcoder.com/stat?c=problem_statement&pm=12189&rd=15177 The solution to this problem is here, under MuddyRoad2: http://apps.topcoder.com/wiki/display/tc/SRM+555 In the last paragraph of the explanation, the author describes a dynamic-programming approach to count the number of ways in which a number 'A' can be broken into a sum of 'B' numbers (not necessarily different), such that every number is coprime to 3 and the order in which these numbers appear does matter. I am not able to grasp that approach. Can anyone help me understand how DP is acting here? I can't understand what a state is here and how it is derived from the previous state.
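
    As a hedged illustration of just the DP state (a generic reformulation, not the editorial's exact recurrence): dp[b, s] counts the ordered ways to write s as a sum of b positive integers that are coprime to 3, and each new last part k extends a (b - 1)-part solution of s - k.

        public static class CoprimeSums
        {
            // Number of ordered B-tuples of positive integers, each coprime to 3, summing to A.
            public static long CountWays(int A, int B)
            {
                var dp = new long[B + 1, A + 1];
                dp[0, 0] = 1;                               // one way to make 0 with 0 parts
                for (int b = 1; b <= B; b++)
                    for (int s = 1; s <= A; s++)
                        for (int k = 1; k <= s; k++)        // k = candidate last part
                            if (k % 3 != 0)                 // keep only parts coprime to 3
                                dp[b, s] += dp[b - 1, s - k];
                return dp[B, A];
            }
        }

    For example, CountWays(4, 2) is 1, corresponding to the single ordered tuple (2, 2), since (1, 3) and (3, 1) are ruled out by the part 3.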

    Read the article

  • Should I be using abstract methods in this Python scenario?

    - by sfjedi
    I'm not sure my approach is good design and I'm hoping I can get a tip. I'm thinking somewhere along the lines of an abstract method, but in this case I want the method to be optional. This is how I'm doing it now...

        from pymel.core import *

        class A(object):
            def __init__(self, *args, **kwargs):
                if callable(self.createDrivers):
                    self._drivers = self.createDrivers(*args, **kwargs)
                    select(self._drivers)

        class B(A):
            def createDrivers(self, *args, **kwargs):
                c1 = circle(sweep=270)[0]
                c2 = circle(sweep=180)[0]
                return c1, c2

        b = B()

    In the above example, I'm just creating 2 circle arcs in PyMEL for Maya, but I fully intend on creating more subclasses that may or may not have a createDrivers method at all! So I want it to be optional and I'm wondering if my approach is—well, if my approach could be improved?

    Read the article

  • Is it a problem if I query SQL Server 2005 and 2000 again and again?

    - by learner
    The Windows app I am constructing is for very low-end machines (Celeron with at most 128 MB RAM). Of the following two approaches, which one is best? (I don't want the application to become a memory hog on low-end machines.)

    Approach one: query the database with Select GUID from Table1 where DateTime <= @givendate, which returns more than 300 thousand records (but only one field, i.e. GUID - 300 thousand GUIDs). Then run a loop to carry out the next step of this software based on each GUID.

    Approach two: query the database with Select Top 1 GUID from Table1 where DateTime <= @givendate again and again until all 300 thousand records are done. It returns only one GUID at a time, and I can then do my next operation.

    Which approach do you suggest will use the fewest memory resources? (Speed/performance is not the issue here.)
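
    For what it's worth, a third variation keeps approach one's single query but never materialises the 300 thousand GUIDs as a list: stream them with a SqlDataReader and handle each row as it arrives (table and column names taken from the question; the per-GUID processing is a placeholder).

        using System;
        using System.Data.SqlClient;

        public static class GuidProcessor
        {
            public static void ProcessSince(string connectionString, DateTime givenDate)
            {
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(
                    "SELECT GUID FROM Table1 WHERE DateTime <= @givendate", conn))
                {
                    cmd.Parameters.AddWithValue("@givendate", givenDate);
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())              // one row in memory at a time
                        {
                            Guid id = reader.GetGuid(0);
                            // ... next processing step for this GUID goes here ...
                        }
                    }
                }
            }
        }

    This only changes the memory profile of approach one; it still issues a single query, unlike the 300 thousand TOP 1 round trips of approach two.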

    Read the article
