Search Results

Search found 16054 results on 643 pages for 'reference architecture'.


  • Declarative Architectures in Infrastructure as a Service (IaaS)

    - by BuckWoody
    I deal with computing architectures by first laying out requirements, and then laying in any constraints for its success. Only then do I bring in computing elements to apply to the system. As an example, a requirement might be "world-wide availability" and a constraint might be "with less than 80ms response time and full HA" or something similar. Then I can choose from the best fit of technologies, which range from full-up on-premises computing to IaaS, PaaS or SaaS. I also deal in abstraction layers - on-premises systems are fully under your control; in IaaS the hardware is abstracted (but not the OS, scale, runtimes and so on); in PaaS the hardware and the OS are abstracted and you focus on code and data only; and in SaaS everything is abstracted - you merely purchase the function you want (like an e-mail server or some such) and simply use it.
    When you think about solutions this way, the architecture becomes the primary factor in your decision. It's problem-first architecting, and then laying in whatever technology or vendor best fixes the problem. To that end, most architects design a solution using a graphical tool (I use Visio) and then create documents that let the rest of the team (and business) know what is required. It's the template, or recipe, for the solution. This is extremely easy to do for SaaS - you merely point out what the needs are, research the vendor and present the findings (and bill) to the business. IT might not even be involved there. In PaaS it's not much more complicated - you use the same Application Lifecycle Management and design tools you always have for code, such as Visual Studio or some other process and toolset, and you can "stamp out" the application in multiple locations, update it and so on.
    IaaS is another story. Here you have multiple machines, operating systems, patches, virus scanning, run-times, scale-patterns, tools and much more that you have to deal with, since essentially it's just an in-house system being hosted by someone else. You can certainly automate builds of servers - we do this as technical professionals every day. From Windows to Linux, it's simple enough to create a "build script" that makes a system just like the one we made yesterday. What is more problematic is being able to tie those systems together in a coherent way (as a solution) and then stamp that out repeatedly, especially when you might want to deploy that solution on-premises, or in one cloud vendor or another.
    Lately I've been working with a company called RightScale that does exactly this. I'll point you to their site for more info, but the general idea is that you document out your intent for a set of servers, and it will deploy them to on-premises clouds, Windows Azure, and other cloud providers, all from the same script. In other words, it doesn't contain the images or anything like that - it contains the scripts to build them on-premises or on a cloud vendor like Microsoft. Using a tool like this, you combine the steps of designing a system (all the way down to passwords and accounts if you wish), and then the document drives the distribution and implementation of that intent. As time goes on and more and more companies implement solutions on various providers (perhaps for HA and DR), this becomes a compelling investigation. The RightScale information is here, if you want to investigate it further. Yes, there are other methods I've found, but most are tied to a single kind of cloud, and I'm not into vendor lock-in.
    Poppa Bear Level - Hands-on: Evaluate RightScale at no cost. Just bring your Windows Azure credentials and follow these tutorials:
    - Sign Up for Windows Azure
    - Add Windows Azure to a RightScale Account
    - Windows Azure Virtual Machines 3-tier Deployment
    Momma Bear Level - Just the right level... ;0)
    - Windows Azure Evaluation Guide - if you are new to Windows Azure Virtual Machines and new to RightScale, we recommend that you read the entire evaluation guide to gain a more complete understanding of the Windows Azure + RightScale solution.
    - Windows Azure Support Page @ support.rightscale.com - FAQs, tutorials, etc. for Windows Azure Virtual Machines (Work in Progress)
    Baby Bear Level - Marketing
    - Windows Azure Page @ www.rightscale.com - find overview information including solution briefs and presentation & demonstration videos
    - Scale and Automate Applications on Windows Azure Solution Brief - how RightScale makes Windows Azure Virtual Machines even better
    - SQL Server on Windows Azure Solution Brief - Run Highly Available SQL Server on Windows Azure Virtual Machines

    Read the article

  • High Availability for IaaS, PaaS and SaaS in the Cloud

    - by BuckWoody
    Outages, natural disasters and unforeseen events have proved that even in a distributed architecture, you need to plan for High Availability (HA). In this entry I'll explain a few considerations for HA within Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). In a separate post I'll talk more about Disaster Recovery (DR), since each paradigm has a different way to handle that.
    Planning for HA in IaaS: IaaS involves Virtual Machines - so in effect, an HA strategy here takes on many of the same characteristics as it would on-premises. The primary difference is that the vendor controls the hardware, so you need to verify what they do for things like local redundancy and so on from the hardware perspective. As far as what you can control and plan for, the primary factors fall into three areas: multiple instances, geographical dispersion and task-switching. In almost every cloud vendor I've studied, to ensure your application will be protected by any level of HA, you need to have at least two of the instances (VMs) running. This makes sense, but you might assume that the vendor just takes care of that for you - they don't. If a single VM goes down (for whatever reason), then access to it is lost. Depending on multiple factors, you might be able to recover the data, but you should assume that you can't. You should keep a sync to another location (perhaps the vendor's storage system in another geographic datacenter or to a local location) to ensure you can continue to serve your clients. You'll also need to host the same VMs in another geographical location. Everything from a vendor outage to a network path problem could prevent your users from reaching the system, so you need to have multiple locations to handle this. This means that you'll have to figure out how to manage state between the geos. If the system goes down in the middle of a transaction, you need to figure out what part of the process the system was in, and then re-create or transfer that state to the second set of systems. If you didn't write the software yourself, this is non-trivial. You'll also need a manual or automatic process to detect the failure and re-route the traffic to your secondary location. You could flip a DNS entry (if your application can tolerate that) or invoke another process to alias the first system to the second, such as load balancing and so on. There are many options, but all of them involve coding the state into the application layer. If you've simply moved a stateful application to VMs, you may not be able to easily implement an HA solution.
    Planning for HA in PaaS: Implementing HA in PaaS is a bit simpler, since it's built on the concept of stateless application deployment. Once again, you need at least two copies of each element in the solution (web roles, worker roles, etc.) to remain available in a single datacenter. Also, you need to deploy the application again in a separate geo, but the advantage here is that you could work out a "shared storage" model such that state is auto-balanced across the world. In fact, you don't have to maintain a "DR" site - the alternate location can be live and serving clients, and only take on extra load if the other site is not available. In Windows Azure, you can use the Traffic Manager service to route the requests as a type of auto-balancer. Even with these benefits, I recommend a second backup of storage in another geographic location. Storage is inexpensive, and that second copy can be used for not only HA but DR.
    Planning for HA in SaaS: In Software-as-a-Service (such as Office 365, or Hadoop in Windows Azure) you have far less control over the HA solution, although you still maintain the responsibility to ensure you have it. Since each SaaS is different, check with the vendor on the solution for HA - and make sure you understand what they do and what you are responsible for. They may have no HA for that solution, or pin it to a particular geo, or perhaps they have massive HA built in with automatic load balancing (which is often the case). All of these options (with the exception of SaaS) involve higher costs for the design. Do not sacrifice reliability for cost - that will always cost you more in the end. Build in the redundancy and HA at the very outset of the project - if you try to tack it on later in the process, the business will push back and potentially not implement HA. References: http://www.bing.com/search?q=windows+azure+High+Availability (each type of implementation is different, so I'm routing you to a search on the topic - look for the "Patterns and Practices" results for the area in Azure you're interested in)

    Read the article

  • SOA, Governance, and Drugs

    Why is IT governance important in service-oriented architecture (SOA)? IT governance provides a framework for making appropriate decisions based on company guidelines and accepted standards. This framework also outlines each stakeholder's responsibilities and authority when making important architectural or design decisions. Furthermore, this framework of governance defines parameters and constraints that are used to give context and perspective when making decisions. The use of governance as it applies to SOA ensures that specific design principles and patterns are used when developing and maintaining services. When governance is consistently applied across systems, the following benefits are achieved, according to Anne Thomas Manes (2010): Governance makes sure that services conform to standard interface patterns and common data modeling practices, and promotes the incorporation of existing system functionality by building on top of other available services across a system. Governance defines development standards based on proven design principles and patterns that promote reuse and composition. Governance provides developers a set of proven design principles, standards and practices that promote the reduction of dependencies between system components. By following these guidelines, individual components will be easier to maintain. For me personally, I am a fan of IT governance, and feel that it is a valuable part of any corporate IT department. However, how it is implemented can really affect the value of using IT governance. Companies need to find a way to ensure that governance does not become extreme in its policies and procedures. I know for me personally, I would really dislike working under a completely totalitarian or laissez-faire version of governance. Developers need to be able to be creative in their designs, and too much governance can really impede the design process and prevent the most optimal design from being developed. On the other hand, with no governance enforced, no standards will be followed and accepted design patterns will be ignored. I have personally had to spend a lot of time working in this particular scenario, and I have found that the concept of code reuse and composition is almost nonexistent. Based on this, too much time and money is wasted on redeveloping existing aspects of an application that already exist within the system as a whole. I think moving forward we will see a staggered form of IT governance, regardless of whether it is for SOA or IT in general. Depending on the size of a company and the size of its IT department, I can see IT governance as a layered approach, in which the top layer will be defined by enterprise architects that focus on abstract concepts pertaining to high-level design, general guidelines, acceptable best practices, and recommended design patterns. The next layer will be defined by solution architects or department managers that further expand on the abstracted guidelines defined by the enterprise architects. This layer will contain further definitions as to when various design patterns, coding standards, and best practices are to be applied based on the context of the solutions that are being developed by the department. The final layer will be defined by the system designer or a solutions architect assigned to a project, in that they will define what design patterns will be used in a solution, naming conventions, as well as outline how a system will function based on the best practices defined by the previous layers.
    This layered approach allows IT departments to be flexible, in that system designers have creative leeway in designing solutions to meet the needs of the business, but they must operate within the confines of the abstracted IT governance guidelines. A real-world example of this can be seen in the United States as it pertains to governance of the people: the US government defines rules and regulations in the abstract, and then the state governments take these guidelines and apply them based on the will of the people in each individual state. Furthermore, the county or city governments are the ones that actually enforce these rules based on how they are interpreted by the local community. To further define my example, the United States government defines that marijuana is illegal. Each individual state has the option to handle this regulation as it wishes, in that the state of Florida determines that all uses of the drug are illegal, but the state of California legally allows the use of marijuana for medicinal purposes only. Based on these accepted practices, each local government enforces these rules, in that a police officer will arrest anyone in the state of Florida for having this drug on them if they walk down the street, but in California, if a person has a medical prescription for the drug, they will not get arrested. REFERENCES: Thomas Manes, Anne. (2010). Understanding SOA Governance: http://www.soamag.com/I40/0610-2.php

    Read the article

  • How to change Fluent NHibernate reference column name on a HasMany relationship using IHasManyConvention

    - by snicker
    Currently I'm using Fluent NHibernate to generate my database schema, but I want the entities in a HasMany relationship to point to a different column for the reference. IE, this is what NHibernate will generate in the creation DDL: alter table `Pony` add index (Stable_ID), add constraint Ponies_Stable foreign key (Stable_Id) references `Stable` (Id); This is what I want to have: alter table `Pony` add index (Stable_ID), add constraint Ponies_Stable foreign key (Stable_Id) references `Stable` (EntityId); Where Stable.ID would be the primary key and Stable.EntityId is just another column that I set. I have a class already that looks like this: public class ForeignKeyReferenceConvention : IHasManyConvention { public void Apply(IOneToManyCollectionInstance instance) { instance.Cascade.All(); //What goes here so that I can change the reference column? } } What do I have to do to get the reference column to change?

    Read the article

  • Visual Studio 2010: adding a service reference to a 2008 generated wsdl

    - by Snake
    Doesn't produce an app.config. In my team there is a guy who has Visual Studio 2008; he created a web service. Then there is me, adding this web service to a console project. Adding the service reference goes without problems, but no valid app.config is generated. It's just an empty <configuration> </configuration>. When I disable 'reuse types' in my service reference it works, but then I get an ambiguity error. Is this a bug? I found http://stackoverflow.com/questions/2159107/visual-studio-does-not-generate-app-config-content-when-add-service-reference this one, but there is no solution there, so I thought I'd bump the problem up again. Thanks

    Read the article

  • Drupal - Grabbing and Looping NID of CCK Nodereference field

    - by GaxZE
    Hello, I can't seem to work out how I grab multiple nids of a node reference field. $node->field_name[0]['nid'] picks up the node id of the CCK node reference field; however, when that CCK node reference field has more than one value I get stuck! My PHP is a bit sketchy at the moment, so working with arrays and loops is being quite difficult! Here is my code: <?php foreach ($node->field_industry as $item) { ?>

    Read the article

  • Change Casing in WCF Service Reference

    - by Eric J.
    I'm creating a service reference to a web service written in Java. The generated classes now follow the Java casing convention used in the web service, for example class names are camelCase rather than PascalCase. Is there a way to get the desired casing from the service reference? CLARIFICATION: With WSE based services, one could modify the generated Reference.cs to provide .NET standard casing and use XmlElementAttribute to map to the Java naming presented by the external web service, like this: [System.Xml.Serialization.XmlElementAttribute("resultType", Form=System.Xml.Schema.XmlSchemaForm.Unqualified)] [System.Runtime.Serialization.DataMember] public virtual MyResultType ResultType { ... } Not terribly maintenance-friendly without writing custom code to either generate the proxy code or modify it after it's been generated. What I'm after is one or more options to present a WCF generated client proxy to calling applications using the .NET casing conventions, achieving the same as I did previously with WSE. Hopefully with less manual effort.

    Read the article

  • Does GC guarantee that cleared References are enqueued to ReferenceQueue in topological order?

    - by Dimitris Andreou
    Say there are two objects, A and B, and there is a pointer A.x --> B, and we create, say, WeakReferences to both A and B, with an associated ReferenceQueue. Assume that both A and B become unreachable. Intuitively B cannot be considered unreachable before A is. In such a case, do we somehow get a guarantee that the respective references will be enqueued in the intuitive (topological, when there are no cycles) order in the ReferenceQueue? I.e. ref(A) before ref(B). I don't know - what if the GC marked a bunch of objects as unreachable, and then enqueued them in no particular order? I was reviewing Finalizer.java of Guava and saw this snippet: private void cleanUp(Reference<?> reference) throws ShutDown { ... if (reference == frqReference) { /* * The client no longer has a reference to the * FinalizableReferenceQueue. We can stop. */ throw new ShutDown(); } frqReference is a PhantomReference to the used ReferenceQueue, so if this is GC'ed, no Finalizable{Weak, Soft, Phantom}References can be alive, since they reference the queue. So they have to be GC'ed before the queue itself can be GC'ed - but still, do we get the guarantee that these references will be enqueued to the ReferenceQueue in the order they get "garbage collected" (as if they get GC'ed one by one)? The code implies that there is some kind of guarantee, otherwise unprocessed references could theoretically remain in the queue. Thanks

    Read the article

  • How to structure game states in an entity/component-based system

    - by Eva
    I'm making a game designed with the entity-component paradigm that uses systems to communicate between components as explained here. I've reached the point in my development that I need to add game states (such as paused, playing, level start, round start, game over, etc.), but I'm not sure how to do it with my framework. I've looked at this code example on game states which everyone seems to reference, but I don't think it fits with my framework. It seems to have each state handling its own drawing and updating. My framework has a SystemManager that handles all the updating using systems. For example, here's my RenderingSystem class: public class RenderingSystem extends GameSystem { private GameView gameView_; /** * Constructor * Creates a new RenderingSystem. * @param gameManager The game manager. Used to get the game components. */ public RenderingSystem(GameManager gameManager) { super(gameManager); } /** * Method: registerGameView * Registers gameView into the RenderingSystem. * @param gameView The game view registered. */ public void registerGameView(GameView gameView) { gameView_ = gameView; } /** * Method: triggerRender * Adds a repaint call to the event queue for the dirty rectangle. */ public void triggerRender() { Rectangle dirtyRect = new Rectangle(); for (GameObject object : getRenderableObjects()) { GraphicsComponent graphicsComponent = object.getComponent(GraphicsComponent.class); dirtyRect.add(graphicsComponent.getDirtyRect()); } gameView_.repaint(dirtyRect); } /** * Method: renderGameView * Renders the game objects onto the game view. * @param g The graphics object that draws the game objects. */ public void renderGameView(Graphics g) { for (GameObject object : getRenderableObjects()) { GraphicsComponent graphicsComponent = object.getComponent(GraphicsComponent.class); if (!graphicsComponent.isVisible()) continue; GraphicsComponent.Shape shape = graphicsComponent.getShape(); BoundsComponent boundsComponent = object.getComponent(BoundsComponent.class); Rectangle bounds = boundsComponent.getBounds(); g.setColor(graphicsComponent.getColor()); if (shape == GraphicsComponent.Shape.RECTANGULAR) { g.fill3DRect(bounds.x, bounds.y, bounds.width, bounds.height, true); } else if (shape == GraphicsComponent.Shape.CIRCULAR) { g.fillOval(bounds.x, bounds.y, bounds.width, bounds.height); } } } /** * Method: getRenderableObjects * @return The renderable game objects. */ private HashSet<GameObject> getRenderableObjects() { return gameManager.getGameObjectManager().getRelevantObjects( getClass()); } } Also all the updating in my game is event-driven. I don't have a loop like theirs that simply updates everything at the same time. I like my framework because it makes it easy to add new GameObjects, but doesn't have the problems some component-based designs encounter when communicating between components. I would hate to chuck it just to get pause to work. Is there a way I can add game states to my game without removing the entity-component design? Does the game state example actually fit my framework, and I'm just missing something? EDIT: I might not have explained my framework well enough. My components are just data. If I was coding in C++, they'd probably be structs. Here's an example of one: public class BoundsComponent implements GameComponent { /** * The position of the game object. */ private Point pos_; /** * The size of the game object. 
*/ private Dimension size_; /** * Constructor * Creates a new BoundsComponent for a game object with initial position * initialPos and initial size initialSize. The position and size combine * to make up the bounds. * @param initialPos The initial position of the game object. * @param initialSize The initial size of the game object. */ public BoundsComponent(Point initialPos, Dimension initialSize) { pos_ = initialPos; size_ = initialSize; } /** * Method: getBounds * @return The bounds of the game object. */ public Rectangle getBounds() { return new Rectangle(pos_, size_); } /** * Method: setPos * Sets the position of the game object to newPos. * @param newPos The value to which the position of the game object is * set. */ public void setPos(Point newPos) { pos_ = newPos; } } My components do not communicate with each other. Systems handle inter-component communication. My systems also do not communicate with each other. They have separate functionality and can easily be kept separate. The MovementSystem doesn't need to know what the RenderingSystem is rendering to move the game objects correctly; it just need to set the right values on the components, so that when the RenderingSystem renders the game objects, it has accurate data. The game state could not be a system, because it needs to interact with the systems rather than the components. It's not setting data; it's determining which functions need to be called. A GameStateComponent wouldn't make sense because all the game objects share one game state. Components are what make up objects and each one is different for each different object. For example, the game objects cannot have the same bounds. They can have overlapping bounds, but if they share a BoundsComponent, they're really the same object. Hopefully, this explanation makes my framework less confusing.

    Read the article

  • TFS Build Server Cannot find Assembly Reference

    - by Steve Syfuhs
    I looked at this post http://stackoverflow.com/questions/547468/assembly-references-wont-resolve-properly-on-our-build-server but it didn't help the issue. I am (extremely) new to TFS, and just installed 2010 on a VM. I imported a project and got everything working-ish. I went to create a new build through Team Explorer, and set it up to build on each check-in. It builds locally just fine, but when it's built on check-in it dies on a 3rd-party assembly reference. The reference is not in the GAC, but part of the local references. There is only one 3rd-party DLL, and the projects only reference each other in the solution. I have a feeling I'm missing some important step with regards to TFS and references. Any ideas? EDIT: This is a test installation...there is nothing else installed on this box, with the exception of SQL and IIS.

    Read the article

  • Out of Memory on Update or Delete of Service Reference

    - by Kelly
    I have a Service Reference for a WCF project that has just over a hundred endpoints in my ServiceHost web.config. Every time I attempt to update or delete the Service Reference, it fails with an out of memory exception. I am running Vista Ultimate SP2 64-bit with 8GB RAM. I can work around it by going outside the project and deleting the Service References folder, then coming back in and re-adding the Reference. Is this the only workaround that you know of? Thanks!

    Read the article

  • HASH reference error with HTTP::Message::decodable

    - by scarba05
    Hi, I'm getting a "Can't use an undefined value as a HASH reference" error trying to call HTTP::Message::decodable() using Perl 5.10 / libwww installed on Debian Lenny OS using the aptitude package manager. I'm really stuck so would appreciate some help please. Here's the error: Can't use an undefined value as a HASH reference at (eval 2) line 1. at test.pl line 4 main::__ANON__('Can\'t use an undefined value as a HASH reference at (eval 2)...') called at (eval 2) line 1 HTTP::Message::__ANON__() called at test.pl line 6 Here's the code: use strict; use HTTP::Request::Common; use Carp; $SIG{ __DIE__ } = sub { Carp::confess( @_ ) }; print HTTP::Message::decodable();

    Read the article

  • WCF Service Library Reference in a Web Form (asp.net)

    - by Abu Hamzah
    I am not sure if this is the correct way of doing it, but I read that you are not supposed to have a WCF Service Library reference in your web form project; rather, you add endpoints to your web.config. Is that true? Here is what I have done: 1) create a WCF service library project 2) create a simple service called "MyService.svc" WebForm: 1) Create a web project 2) create a WCF Service and in it I have this code <%@ ServiceHost Language="C#" Service="WCFJQuery.ContactBLL.Implementation.ContactUs" Factory="System.ServiceModel.Activation.WebScriptServiceHostFactory" %> 3) right-click on the web project and "Add Reference" and add the MyService.dll reference from the WCF service library project. Is this how you are supposed to do it?

    Read the article

  • Web service reference location?

    - by Damien Dennehy
    I have a Visual Studio 2008 solution that's currently consisting of three projects: A DataFactory project for Business Logic/Data Access. A Web project consisting of the actual user interface, pages, controls, etc. A Web.Core project consisting of utility classes, etc. The application requires consuming a web service. Normally I'd add the service reference to the Web project, but I'm not sure if this is best practice or not. The following options are open to me: Add the reference to the Web project. Add the reference to the Web.Core project, and create a wrapper method that Web will call to consume the web service. Add a new project called Web.Services, and copy step 2. This project is expected to increase in size so I'm open to any suggestions.

    Read the article

  • Adding Service Reference to a WCF Service in Silverlight project defaulting to XmlSerialization for

    - by Shravan
    Hi, I am adding a WCF service reference in a Silverlight project, and it is generating code with XmlSerialization attributes for DataMembers rather than SOAP serialization. But if the same WCF service reference is added in an ASP.NET project, it generates code with SOAP serialization attributes. Can anybody let me know what could be the cause of this, and how I can force the reference to generate SOAP serialization? XmlSerialization - [System.CodeDom.Compiler.GeneratedCodeAttribute("System.Xml", "4.0.30319.1")] SOAP Serialization - [System.CodeDom.Compiler.GeneratedCodeAttribute("System.Runtime.Serialization", "4.0.0.0")] These are the attributes in the code generated for types, which I am looking at when saying it is using XmlSerialization/SOAP serialization.

    Read the article

  • Returning references while using shared_ptrs

    - by Goose Bumper
    Suppose I have a rather large class Matrix, and I've overloaded operator== to check for equality like so: bool operator==(Matrix &a, Matrix &b); Of course I'm passing the Matrix objects by reference because they are so large. Now I have a method Matrix::inverse() that returns a new Matrix object. Now I want to use the inverse directly in a comparison, like so: if (a.inverse()==b) { ... } The problem is, this means the inverse method needs to return a reference to a Matrix object. Two questions: Since I'm just using that reference in this one comparison, is this a memory leak? What happens if the object-to-be-returned in the inverse() method belongs to a boost::shared_ptr? As soon as the method exits, the shared_ptr is destroyed and the object is no longer valid. Is there a way to return a reference to an object that belongs to a shared_ptr?
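
    A minimal sketch of the usual way around this: have inverse() return the Matrix by value and take the operands of operator== by const reference, so the temporary from a.inverse() can bind to the parameter. This is illustrative only - the members and the stubbed-out inversion are placeholders, not the poster's actual class.

        #include <cstddef>
        #include <vector>

        class Matrix {
        public:
            explicit Matrix(std::size_t n) : n_(n), data_(n * n, 0.0) {}

            // Return the inverse by value: the caller gets its own object,
            // so nothing dangles and there is nothing to leak.
            Matrix inverse() const {
                Matrix result(n_);
                // ... compute the actual inverse into result ...
                return result;   // moved or elided, not copied element by element
            }

            // Compare by const reference: a temporary such as a.inverse()
            // can bind to a const reference, which it cannot do to Matrix&.
            friend bool operator==(const Matrix& a, const Matrix& b) {
                return a.n_ == b.n_ && a.data_ == b.data_;
            }

        private:
            std::size_t n_;
            std::vector<double> data_;
        };

        int main() {
            Matrix a(3), b(3);
            if (a.inverse() == b) {
                // the temporary returned by inverse() lives until the end
                // of the full expression, so the comparison is safe
            }
        }

    With move semantics and return-value optimization the by-value return does not copy the whole matrix, which is usually simpler than handing out references into a shared_ptr-owned object.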

    Read the article

  • Reference to a pointer question

    - by Yogesh Arora
    Please refer to the code below. In this code I am storing the const char* returned by test.c_str() into a reference. My question is: will data correctly refer to the contents of test? I am thinking that the pointer returned by test.c_str() will be a temporary, and if I bind it to a reference that reference will not be valid. Is my thinking correct? class RefPtrTest { std::string test; StoringClass storingClass; public: RefPtrTest(): test("hello"), storingClass(test.c_str()) { } }; where StoringClass is class StoringClass { const char*& data; public: StoringClass (const char*& input): data(input) { } };
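
    A minimal sketch of the safer layout the question is circling: store the pointer by value instead of keeping a reference to the temporary pointer that test.c_str() returns. The characters it points at remain valid for as long as the owning std::string is alive and unmodified. The class names follow the poster's; the getter and main are illustrative additions.

        #include <iostream>
        #include <string>

        class StoringClass {
            const char* data;            // a copy of the pointer, not a reference to it
        public:
            explicit StoringClass(const char* input) : data(input) {}
            const char* get() const { return data; }
        };

        class RefPtrTest {
            std::string test;            // declared first, so it outlives storingClass
            StoringClass storingClass;   // holds a pointer into test's internal buffer
        public:
            RefPtrTest() : test("hello"), storingClass(test.c_str()) {}
            void print() const { std::cout << storingClass.get() << '\n'; }
        };

        int main() {
            RefPtrTest t;
            t.print();   // prints "hello": test is still alive and unchanged
        }

    If test is later modified or destroyed, the stored pointer dangles, so this only works while the string member is left alone; copying the text into a std::string member sidesteps that entirely.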

    Read the article

  • C++0x rvalue references and temporaries

    - by Doug
    (I asked a variation of this question on comp.std.c++ but didn't get an answer.) Why does the call to f(arg) in this code call the const ref overload of f? void f(const std::string &); //less efficient void f(std::string &&); //more efficient void g(const char * arg) { f(arg); } My intuition says that the f(string &&) overload should be chosen, because arg needs to be converted to a temporary no matter what, and the temporary matches the rvalue reference better than the lvalue reference. This is not what happens in GCC and MSVC. In at least G++ and MSVC, any lvalue does not bind to an rvalue reference argument, even if there is an intermediate temporary created. Indeed, if the const ref overload isn't present, the compilers diagnose an error. However, writing f(arg + 0) or f(std::string(arg)) does choose the rvalue reference overload as you would expect. From my reading of the C++0x standard, it seems like the implicit conversion of a const char * to a string should be considered when considering if f(string &&) is viable, just as when passing a const lvalue ref arguments. Section 13.3 (overload resolution) doesn't differentiate between rvalue refs and const references in too many places. Also, it seems that the rule that prevents lvalues from binding to rvalue references (13.3.3.1.4/3) shouldn't apply if there's an intermediate temporary - after all, it's perfectly safe to move from the temporary. Is this: Me misreading/misunderstand the standard, where the implemented behavior is the intended behavior, and there's some good reason why my example should behave the way it does? A mistake that the compiler vendors have somehow all made? Or a mistake based on common implementation strategies? Or a mistake in e.g. GCC (where this lvalue/rvalue reference binding rule was first implemented), that was copied by other vendors? A defect in the standard, or an unintended consequence, or something that should be clarified?
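
    A self-contained probe of the overload set in the question, with the bodies stubbed out so the program reports which overload was selected. Building it with different compilers and standard revisions shows how each one resolves the three call forms discussed above; the printed labels are illustrative additions, not part of the original code.

        #include <iostream>
        #include <string>

        void f(const std::string&) { std::cout << "f(const std::string&)\n"; }
        void f(std::string&&)      { std::cout << "f(std::string&&)\n"; }

        void g(const char* arg) {
            f(arg);                // lvalue pointer, converted to a temporary std::string
            f(arg + 0);            // rvalue pointer, also converted to a temporary
            f(std::string(arg));   // explicit temporary: a prvalue std::string
        }

        int main() { g("hello"); }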

    Read the article

  • Creating reference movies for iPhone on the fly

    - by Mad Oxyn
    We are working on an online mobile video app. The videos we want to play on mobile phones are being generated by a server, as there can be dynamic content in the server (based on user input). Now for iPhone we would like to play the video in the best possible resolution based on the connection speed at the time of downloading the movie. This can be done using reference movies. However, because our videos are being generated on the fly, we need to generate this reference movie on the fly as well. Is there a way to generate reference movies on the fly on a Linux server using some command line tool, PHP or Java? Or on a DOS server maybe? Any help will be much appreciated.

    Read the article

  • Alternates to C++ Reference/Pointer Syntax

    - by Jon Purdy
    What languages other than C and C++ have explicit reference and pointer type qualifiers? People seem to be easily confused by the right-to-left reading order of types, where char*& is "a reference to a pointer to a character", or a "character-pointer reference"; do any languages with explicit references make use of a left-to-right reading order, such as &*char/ref ptr char? I'm working on a little language project, and legibility is one of my key concerns. It seems to me that this is one of those questions to which it's easy for a person but hard for a search engine to provide an answer. Thanks in advance!

    Read the article

  • When are referenced Assemblies loaded?

    - by Daniel
    I wrote a program that makes a reference to Microsoft.Web.Administration.dll, which is not present on Windows Server 2003. The program checks for the OS and does not reference the DLL if the OS is 2003. if(OSVersion == WindowsServer2003) //do the job without referencing Microsoft.Web.Administration else if(OSVersion == WindowsServer2008) //reference the Microsoft.Web.Administration.dll file When I tested this program on Windows Server 2003, an error occurred telling me it couldn't locate Microsoft.Web.Administration.dll. But when I separated the if-else block into two different methods as below, the error did not occur. if(OSVersion == WindowsServer2003) //do the job without referencing Microsoft.Web.Administration else if(OSVersion == WindowsServer2008) //DoIt2008Style(); So I wanted to know more about when referenced assemblies are loaded. Could you point me to some resources?

    Read the article

  • Plagued by multithreaded bugs

    - by koncurrency
    On my new team that I manage, the majority of our code is platform, TCP socket, and HTTP networking code. All C++. Most of it originated from other developers that have left the team. The current developers on the team are very smart, but mostly junior in terms of experience. Our biggest problem: multi-threaded concurrency bugs. Most of our class libraries are written to be asynchronous by use of some thread pool classes. Methods on the class libraries often enqueue long-running tasks onto the thread pool from one thread, and then the callback methods of that class get invoked on a different thread. As a result, we have a lot of edge-case bugs involving incorrect threading assumptions. This results in subtle bugs that go beyond just having critical sections and locks to guard against concurrency issues. What makes these problems even harder is that the attempts to fix them are often incorrect. Some mistakes I've observed the team attempting (or within the legacy code itself) include the following: Common mistake #1 - Fixing a concurrency issue by just putting a lock around the shared data, but forgetting about what happens when methods don't get called in an expected order. Here's a very simple example: void Foo::OnHttpRequestComplete(statuscode status) { m_pBar->DoSomethingImportant(status); } void Foo::Shutdown() { m_pBar->Cleanup(); delete m_pBar; m_pBar=nullptr; } So now we have a bug in which Shutdown could get called while OnHttpRequestComplete is executing. A tester finds the bug, captures the crash dump, and assigns the bug to a developer. He in turn fixes the bug like this. void Foo::OnHttpRequestComplete(statuscode status) { AutoLock lock(m_cs); m_pBar->DoSomethingImportant(status); } void Foo::Shutdown() { AutoLock lock(m_cs); m_pBar->Cleanup(); delete m_pBar; m_pBar=nullptr; } The above fix looks good until you realize there's an even more subtle edge case. What happens if Shutdown gets called before OnHttpRequestComplete gets called back? The real-world examples my team has are even more complex, and the edge cases are even harder to spot during the code review process. Common mistake #2 - Fixing deadlock issues by blindly exiting the lock, waiting for the other thread to finish, then re-entering the lock - but without handling the case that the object just got updated by the other thread! Common mistake #3 - Even though the objects are reference counted, the shutdown sequence "releases" its pointer, but forgets to wait for the thread that is still running to release its instance. As such, components are shut down cleanly, then spurious or late callbacks are invoked on an object in a state not expecting any more calls. There are other edge cases, but the bottom line is this: multithreaded programming is just plain hard, even for smart people. As I catch these mistakes, I spend time discussing the errors with each developer and working out a more appropriate fix. But I suspect they are often confused on how to solve each issue because of the enormous amount of legacy code that the "right" fix will involve touching. We're going to be shipping soon, and I'm sure the patches we're applying will hold for the upcoming release. Afterwards, we're going to have some time to improve the code base and refactor where needed. We won't have time to just re-write everything, and the majority of the code isn't all that bad. But I'm looking to refactor the code such that threading issues can be avoided altogether. One approach I am considering is this.
    For each significant platform feature, have a dedicated single thread onto which all events and network callbacks get marshalled, similar to COM apartment threading in Windows with use of a message loop. Long blocking operations could still get dispatched to a worker pool thread, but the completion callback is invoked on the component's thread. Components could possibly even share the same thread. Then all the class libraries running inside the thread can be written under the assumption of a single-threaded world. Before I go down that path, I am also very interested in whether there are other standard techniques or design patterns for dealing with multithreaded issues. And I have to emphasize - something beyond a book that describes the basics of mutexes and semaphores. What do you think? I am also interested in any other approaches to take towards a refactoring process, including any of the following: Literature or papers on design patterns around threads - something beyond an introduction to mutexes and semaphores. We don't need massive parallelism either, just ways to design an object model so as to handle asynchronous events from other threads correctly. Ways to diagram the threading of various components, so that it will be easy to study and evolve solutions for (that is, a UML equivalent for discussing threads across objects and classes). Educating your development team on the issues with multithreaded code. What would you do?
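
    A minimal sketch (C++11, all names illustrative) of the dedicated-thread idea described above: blocking work can finish on any pool thread, but its completion callback is posted back onto the component's own dispatcher thread, so the component's state is only ever touched from one thread and the Shutdown/OnHttpRequestComplete race becomes an ordinary sequential check.

        #include <condition_variable>
        #include <functional>
        #include <iostream>
        #include <mutex>
        #include <queue>
        #include <thread>

        // A single-threaded dispatcher: tasks posted from any thread run,
        // one at a time and in order, on the thread the dispatcher owns.
        class Dispatcher {
        public:
            Dispatcher() : worker_([this] { Run(); }) {}
            ~Dispatcher() { if (worker_.joinable()) Stop(); }

            void Post(std::function<void()> task) {
                std::lock_guard<std::mutex> lock(mutex_);
                queue_.push(std::move(task));
                cv_.notify_one();
            }

            void Stop() {                         // drain pending tasks, then stop
                Post([this] { done_ = true; });
                worker_.join();
            }

        private:
            void Run() {
                while (!done_) {
                    std::function<void()> task;
                    {
                        std::unique_lock<std::mutex> lock(mutex_);
                        cv_.wait(lock, [this] { return !queue_.empty(); });
                        task = std::move(queue_.front());
                        queue_.pop();
                    }
                    task();                       // runs on the dispatcher thread only
                }
            }

            std::mutex mutex_;
            std::condition_variable cv_;
            std::queue<std::function<void()>> queue_;
            bool done_ = false;                   // touched only on the dispatcher thread
            std::thread worker_;
        };

        // A component whose state is only touched on its dispatcher's thread.
        class Foo {
        public:
            explicit Foo(Dispatcher& d) : dispatcher_(d) {}

            // Called from a network/worker thread; marshal onto the component thread.
            void OnHttpRequestComplete(int status) {
                dispatcher_.Post([this, status] { HandleComplete(status); });
            }
            void Shutdown() {
                dispatcher_.Post([this] { shuttingDown_ = true; });
            }

        private:
            void HandleComplete(int status) {
                if (shuttingDown_) return;        // ordering is now a plain sequential check
                std::cout << "completed with status " << status << '\n';
            }

            Dispatcher& dispatcher_;
            bool shuttingDown_ = false;           // only read/written on the dispatcher thread
        };

        int main() {
            Dispatcher d;
            Foo foo(d);
            std::thread network([&] { foo.OnHttpRequestComplete(200); });
            foo.Shutdown();
            network.join();
            d.Stop();   // all queued work for foo runs before foo is destroyed
        }

    The same shape works with an existing thread-pool library by pinning each component to one strand or serial queue; the point is that the posted callbacks, not per-field locks, carry the synchronization.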

    Read the article

  • Getting Reference to Calling Activity from AsyncTask (NOT as an inner class)

    - by stormin986
    Is it at all possible, from within an AsyncTask that is NOT an inner class of the calling Activity class, to get a reference to the instance of Activity that initiated execution of the AsyncTask? I am aware of this thread, however it doesn't exactly address how to reference the calling Activity. Some suggest passing a reference to the Activity as a parameter to the AsyncTask constructor, however, it's reported that doing so will always result in a NullPointerException. So, I'm at a loss. My AsyncTask provides robust functionality, and I don't want to have to duplicate it as an inner class in every Activity that wants to use it. There must be an elegant solution.

    Read the article

  • How to reference filtered ng-repeat items from a controller in AngularJS

    - by ParkerDigital
    I have some JSON data that is rendered out via the ng-repeat directive, and the results are then filtered via some checkboxes/drop-downs, and some custom filter functions in my controller. I now want to add a function to my controller that is triggered by an 'ng-change' on some of the checkboxes, that can reference the current list of items in my 'ng-repeat'. I realise I can reference these values from a custom filter, for example $scope.filterProvider = function(item), but this function is then called for each and every item in the ng-repeat, which isn't what I want - I want the function to just be called each time a checkbox is checked/unchecked, and I need that function to be able to reference the items in my ng-repeat...does that make sense to anyone?! And if so, does anyone know how I can do that? Thanks :-)

    Read the article
