Search Results

Search found 9727 results on 390 pages for 'general concepts'.

Page 8/390

  • You Probably Already Have a “Private Cloud”

    - by BuckWoody
    I've mentioned before that I'm not a fan of the word "Cloud". It's too marketing-oriented, gimmicky and non-specific. A better term (in many cases) is "Distributed Computing": some or all of the computing functions are handled somewhere other than under your specific control. But there is a current use of the word "Cloud" that does not necessarily mean the computing is done somewhere else. In fact, it's a facet of Cloud Computing that is better termed "Utility Computing". This has to do with the provisioning of a computing resource: the setup, configuration, management, balancing and so on that is needed so that a user – who might actually be a developer – can do some computing work. To that person, the resource is just "there" and works like they expect, like the phone system or any other utility.

    The interesting thing is, you can do this yourself. In fact, you probably already have been, or are now. It's got a cool new trendy term – "Private Cloud" – but the fact is, if you have your setup automated, the HA and DR handled, balancing and performance tuning done, and a process wrapped around it all, you can call yourself a "Cloud Provider".

    A good example here is your e-mail system. Your users – pretty much your whole company – just log into e-mail and expect it to work. To them, you are the "Cloud" provider. On your side, the more you automate and provision the system, the more you act like one. Another example is a database server. In this case, the "end user" is usually the development team, or perhaps your SharePoint group and so on. The data professionals configure, monitor, tune and balance the system all the time; the more this is automated, the more they are acting like a Cloud Provider. Lots of companies help you do this in your own data center, from VMware to IBM and many others. Microsoft's offering here is based around System Center – a "cloud in a box" provisioning system that's actually pretty slick.

    The most difficult part of operating a Private Cloud is probably the scale factor. In the case of Windows and SQL Azure, we handle this in multiple ways – and we're happy to share how we do it. It's not magic, and the algorithms for balancing (like the one we started with, called Paxos) are well known. The key is the knowledge, infrastructure and people. Sure, you can do this yourself, and in many cases, such as top-secret or private systems, you probably should. But there are times when you should evaluate using Azure or other vendors, or even multiple vendors to spread your risk. All of this should be based on client need, not on what you already know how to do.

    So congrats on your new role as a "Cloud Provider". If you have an e-mail system or a database platform, you can put that right on your resume.


  • Map Library: Client-side or Server-side?

    - by Mahdi
    As I have already asked here, I have to implement a multi-platform map application. One option I have is Mapstraction, which uses JavaScript to implement the desired functionality. My question is: is there any reason or benefit to implementing such a library (let's say, the adapters) server-side (in my case, in PHP)? Since these maps are all based on JavaScript, there is a strong argument for writing the adapter in JavaScript as well, so that it is not dependent on PHP, Java, or .NET, for example. But is that all? I would like to hear your ideas and comments too. :)
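
    For what it's worth, here is a minimal sketch of the client-side (JavaScript) option being weighed above. The provider calls in the comments are illustrative, not exact API signatures; a server-side PHP adapter could only generate this JavaScript, not react to map events in the browser, which is one concrete argument for keeping the adapters client-side.

        // A minimal sketch of a client-side adapter layer (Mapstraction-style).
        class GoogleAdapter {
          constructor(providerMap) { this.map = providerMap; }
          addMarker(lat, lng) {
            // Delegate to the provider's own marker call here, e.g. something like:
            // new google.maps.Marker({ position: { lat, lng }, map: this.map });
          }
        }

        class LeafletAdapter {
          constructor(providerMap) { this.map = providerMap; }
          addMarker(lat, lng) {
            // e.g. L.marker([lat, lng]).addTo(this.map);
          }
        }

        // Calling code depends only on the shared interface, not the provider:
        function dropPin(adapter, lat, lng) {
          adapter.addMarker(lat, lng);
        }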


  • Breaking The Promise of Web Service Interoperability

    The promise of web service interoperability is achievable if certain technical and non-technical issues are dealt with properly. As the world gets smaller and smaller thanks to our growing global economy, the need for security is increasing. The use of security is vital in transferring data from one server to another. As new security standards and protocols are created, the environments of web service hosts and clients must be in sync so that they can communicate using the same standards and protocols. For example, if a new protocol X can only be implemented on computers built after 2010, then all computers built before 2010 will be unable to connect to any web service host that uses only this protocol in its security policy. If the host and client of a web service cannot communicate using a set of common standards and protocols, then the web service is not available to those clients, thus breaking the promise of interoperability.

    Another limiting factor for web services is governmental policies and regulations. I experienced this firsthand last year when I worked on a project that dealt with personally identifiable information (PII) about US and Canadian citizens. The Canadian government currently requires that any data pertaining to Canadian citizens be stored in Canada only. The issue we had was the fact that we are a US-based company that sometimes works with Canadian PII as part of a service we provide. Because we are a US-based company dealing with Canadian data, we had to place a file server inside the Canadian border in order to continue working for our Canadian customers.


  • First Experience with Web Services

    When I first started programming with Microsoft .NET (the 1.0 Framework) I had a strong desire to learn how search engines indexed web sites. At that time I was working as a search engine spammer, creating web pages to generate traffic for specific themes for various clients. One way I attempted to better understand .NET was to build a web spider that analyzed web pages on demand. An example of the spider is hosted at AddLinkz.com.

    After my spider was built I had no real idea what I could or should do with it until I found the MSN Search API. I used this web service to compare its results with my spider's, and I also used the API to feed my .NET web spider new URLs based on specific search terms. MSN's search API was very easy to use: I just had to request information by calling a web URL with parameters via a GET request, and the results were returned as XML. At that time all requests were limited to XML responses and a maximum of 1,000 results per query.

    Since then the entire API has gone through several reconstructions, rebrandings and new search services. Microsoft's new Bing API replaced the older MSN Search API and added several new search capabilities. These new features allow search data to be returned for web searches, image searches, news searches, and related search terms, to name a few.

    Bing API Version 2.0 SourceTypes:

    - Web: searches for web content (e.g. "sushi")
    - Image: searches for images on the web (e.g. "sushi")
    - News: searches news stories (e.g. "sushi")
    - InstantAnswer: searches Encarta online (e.g. "what is sushi", "convert 5 feet to meters", "x*5=7", "2 plus 2")
    - Spell: searches the Encarta dictionary for spelling suggestions
    - Phonebook: searches phonebook entries (e.g. "sushi in Los Angeles")
    - RelatedSearch: returns the query strings most similar to yours
    - Ad: returns advertisements to incorporate with the results

    I currently plan to start using the web search feature from the new Bing 2.0 API in an open source project related to exception management. Currently, it is still in the conception phase.
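
    As a sketch of the GET-plus-XML pattern described above – the endpoint and parameter names below are placeholders, not the exact contract of the long-retired API:

        // Illustrative only: a GET request with query parameters, XML back.
        const endpoint = "https://api.example.com/search"; // placeholder URL
        const params = new URLSearchParams({
          appId: "YOUR_APP_ID", // hypothetical credential parameter
          query: "sushi",
          sources: "Web"        // one of the SourceTypes listed above
        });

        fetch(endpoint + "?" + params.toString())
          .then((response) => response.text()) // the old API returned XML
          .then((xml) => console.log(xml))
          .catch((err) => console.error(err));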


  • Why do I need two Instances in Windows Azure?

    - by BuckWoody
    Windows Azure as a Platform as a Service (PaaS) means that there are various components you can use in it to solve a problem:

    - Compute "Roles" – computers running an OS and optionally IIS; you can have more than one "Instance" of a given Role
    - Storage – Blobs, Tables and Queues for storage
    - Other Services – things like the Service Bus, Azure Connection Services, SQL Azure and Caching

    It's important to understand that some of these services are stateless and others maintain state. Stateless means (at least in this case) that a system might disappear from one physical location and appear elsewhere. You can think of this as a cashier at the front of a store: if you're in line, a cashier might take his break, and another person might replace him. As long as the order proceeds, you as the customer aren't really affected except for the few seconds it takes to change them out. The cashier function in this example is stateless.

    The Compute Role Instances in Windows Azure are stateless. To upgrade hardware, because of a fault, or for many other reasons, a Compute Role's Instance might stop on one physical server, and another will pick it up. This is done through the controlling fabric that Windows Azure uses to manage the systems. It's important to note that storage in Azure does maintain state. Your data will not simply disappear; it is maintained – in fact, it's maintained three times in a single datacenter, and all those copies are replicated to another datacenter for safety. Going back to our example, storage is similar to the cash register itself: even though a cashier leaves, the record of your payment is maintained.

    So if a Compute Role Instance can disappear and re-appear, the things running on that first Instance would stop working. If you wrote your code in a stateless way, then another Role Instance simply re-starts that transaction and keeps working, just like the other cashier in the example. But if you only have one Instance of a Role, then when the Role Instance is re-started, or when you need to upgrade your own code, you can face downtime, since there's only one. That means you should deploy at least two of each Role Instance, not only for scale to handle load, but so that the first "cashier" has someone to replace them when they disappear. It's not just a good idea: to gain the Service Level Agreement (SLA) for uptime in Azure, it's a requirement. We point this out right in the Management Portal when you deploy the application.

    When you deploy a Role Instance you can also set the "Upgrade Domain". Placing Roles on separate Upgrade Domains means that you have continuous service whenever you upgrade (more on upgrades in another post). For the upgrade scenario you have four Roles total – one Web and one Worker running the "older" code, and one of each running the new code. With at least two Instances in each of those Roles, you're covered for both High Availability and upgrade paths.

    The take-away is this: always plan for forward-facing Roles to have at least two copies. For Worker Roles that do background processing, there are ways to architect around this number, but it does affect the SLA if you have only one.
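
    As a rough sketch of where that Instance count lives, the classic Azure service configuration sets it per Role. The service and role names below are made up, and the schema attributes are trimmed; the exact format varies by SDK version:

        <?xml version="1.0" encoding="utf-8"?>
        <!-- Sketch of a service configuration (.cscfg); names are hypothetical. -->
        <ServiceConfiguration serviceName="MyService">
          <Role name="MyWebRole">
            <!-- Two instances: one can be recycled or upgraded without downtime. -->
            <Instances count="2" />
          </Role>
          <Role name="MyWorkerRole">
            <Instances count="2" />
          </Role>
        </ServiceConfiguration>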


  • Software development, basics of design, conventions and scalability

    - by goce ribeski
    I need to improve my programming skills in order to achieve better scalability for the software I'm working on. The purpose is to learn the rules for adding new modules and features, so that when it comes to maintaining existing ones there is some concept behind it all. So I'm looking for a good book, tutorial or website where I can continue reading about this.

    Currently, what I know and what I do is:

    - design a relational database (3NF) and make a separate class for each table
    - put that in MVC
    - implement modular programming
    - ...write code and hope for the best...

    I presume the next things I need to learn more deeply are:

    - coding conventions (naming, commenting, ...) and organizing functions
    - building interfaces
    - organizing custom-made libraries and the APIs I'm using
    - documenting and team work

    Finally – though it shouldn't affect your answer – my job is PHP CodeIgniter developer.


  • Japanese Multiplication simulation - is a program actually capable of improving calculation speed?

    - by jt0dd
    On SuperUser, I asked a (possibly silly) question about processors using mathematical shortcuts, and I would like to look at the possibility of applying that concept in software. I'd like to write a simulation of Japanese multiplication to get benchmarks on large calculations using the shortcut vs. traditional CPU multiplication, and I'm curious whether it makes sense to try.

    My question: I'd like to know whether a software math shortcut, as described above, is actually a shortcut at all. This is a question of programming concept: by simulating Japanese multiplication, is a program actually capable of improving calculation speed, or am I doomed from the start? The answer to this question isn't required to determine whether the experiment will succeed, but rather whether it's logically possible for such a thing to occur in any program, using this concept as an example.

    My theory is that since addition is computed faster than multiplication, a simulation of Japanese multiplication may actually allow a program to multiply (large) numbers faster than the CPU arithmetic unit can. I think this would be a very interesting finding if it proves to be true. If, in the multiplication of numbers of any immense size, the shortcut were to calculate the result in fewer instructions (or faster) than traditional ALU multiplication, I would consider the experiment a success.
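
    For concreteness, a minimal benchmark sketch, assuming the "counting intersections" step of the line method is modeled as repeated addition per digit pair (one possible model among several):

        // Digit-pair products computed by repeated addition, mimicking the
        // counting of line intersections; carries handled as in the paper method.
        function japaneseMultiply(a, b) {
          const x = String(a).split("").map(Number);
          const y = String(b).split("").map(Number);
          const result = new Array(x.length + y.length).fill(0);
          for (let i = x.length - 1; i >= 0; i--) {
            for (let j = y.length - 1; j >= 0; j--) {
              let intersections = 0;
              for (let k = 0; k < x[i]; k++) intersections += y[j]; // "count" lines
              const pos = i + j + 1;
              const sum = result[pos] + intersections;
              result[pos] = sum % 10;
              result[pos - 1] += Math.floor(sum / 10);
            }
          }
          return Number(result.join(""));
        }

        // Crude timing; a JIT may optimize the native loop away, so treat the
        // numbers as indicative only.
        console.time("simulated");
        for (let n = 0; n < 100000; n++) japaneseMultiply(12345, 67890);
        console.timeEnd("simulated");

        console.time("native");
        let sink = 0;
        for (let n = 0; n < 100000; n++) sink += 12345 * 67890;
        console.timeEnd("native");

    On real hardware the native multiplier wins by a wide margin; the open question above is whether any software shortcut could change that, and a benchmark like this at least makes the comparison concrete.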


  • High Availability for IaaS, PaaS and SaaS in the Cloud

    - by BuckWoody
    Outages, natural disasters and unforeseen events have proved that even in a distributed architecture, you need to plan for High Availability (HA). In this entry I'll explain a few considerations for HA within Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). In a separate post I'll talk more about Disaster Recovery (DR), since each paradigm has a different way to handle that.

    Planning for HA in IaaS

    IaaS involves Virtual Machines, so in effect an HA strategy here takes on many of the same characteristics as it would on-premises. The primary difference is that the vendor controls the hardware, so you need to verify what they do for things like local redundancy from the hardware perspective. As far as what you can control and plan for, the primary factors fall into three areas: multiple instances, geographical dispersion and task-switching.

    In almost every cloud vendor I've studied, to ensure your application is protected by any level of HA, you need to have at least two Instances (VMs) running. This makes sense, but you might assume that the vendor just takes care of that for you – they don't. If a single VM goes down (for whatever reason), access to it is lost. Depending on multiple factors you might be able to recover the data, but you should assume that you can't. Keep a sync to another location (perhaps the vendor's storage system in another geographic datacenter, or a local location) to ensure you can continue to serve your clients.

    You'll also need to host the same VMs in another geographic location. Everything from a vendor outage to a network path problem could prevent your users from reaching the system, so you need multiple locations to handle this. That means you'll have to figure out how to manage state between the geos: if the system goes down in the middle of a transaction, you need to figure out what part of the process the system was in, and then re-create or transfer that state to the second set of systems. If you didn't write the software yourself, this is non-trivial. You'll also need a manual or automatic process to detect the failure and re-route the traffic to your secondary location. You could flip a DNS entry (if your application can tolerate that) or invoke another process to alias the first system to the second, such as load balancing and so on. There are many options, but all of them involve coding the state into the application layer. If you've simply moved a stateful application to VMs, you may not be able to easily implement an HA solution.

    Planning for HA in PaaS

    Implementing HA in PaaS is a bit simpler, since it's built on the concept of stateless application deployment. Once again, you need at least two copies of each element in the solution (web roles, worker roles, etc.) to remain available in a single datacenter. You also need to deploy the application again in a separate geo, but the advantage here is that you could work out a "shared storage" model such that state is auto-balanced across the world. In fact, you don't have to maintain a "DR" site: the alternate location can be live and serving clients, and only take on extra load if the other site is not available. In Windows Azure, you can use the Traffic Manager service to route the requests as a type of auto-balancer. Even with these benefits, I recommend a second backup of storage in another geographic location. Storage is inexpensive, and that second copy can be used for not only HA but DR.

    Planning for HA in SaaS

    In Software-as-a-Service (such as Office 365, or Hadoop in Windows Azure), you have far less control over the HA solution, although you still maintain the responsibility to ensure you have it. Since each SaaS is different, check with the vendor on the solution for HA, and make sure you understand what they do and what you are responsible for. They may have no HA for that solution, or pin it to a particular geo, or perhaps they have massive HA built in with automatic load balancing (which is often the case).

    All of these options (with the exception of SaaS) involve higher costs for the design. Do not sacrifice reliability for cost – that will always cost you more in the end. Build in the redundancy and HA at the very outset of the project; if you try to tack it on later in the process, the business will push back and potentially not implement HA.

    References: http://www.bing.com/search?q=windows+azure+High+Availability (each type of implementation is different, so I'm routing you to a search on the topic – look for the "Patterns and Practices" results for the area in Azure you're interested in)
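
    As a sketch of the "detect the failure and re-route" step from the IaaS section, assuming hypothetical endpoints and leaving the actual switch-over (DNS flip, load-balancer change) as a stub:

        // Minimal failover watchdog; the URL and the switch-over action are
        // placeholders, not a specific vendor's API.
        const PRIMARY_HEALTH_URL = "https://primary.example.com/health";
        const CHECK_INTERVAL_MS = 30 * 1000;
        let failedOver = false;

        async function isHealthy(url) {
          try {
            const res = await fetch(url, { signal: AbortSignal.timeout(5000) });
            return res.ok;
          } catch {
            return false;
          }
        }

        async function checkAndFailOver() {
          if (failedOver) return;
          if (!(await isHealthy(PRIMARY_HEALTH_URL))) {
            failedOver = true;
            // Flip a DNS entry or re-point the load balancer here; as noted
            // above, the application must tolerate losing in-flight state.
            console.log("Primary unhealthy - routing traffic to secondary site");
          }
        }

        setInterval(checkAndFailOver, CHECK_INTERVAL_MS);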


  • How 'docking' works, in simple words

    - by Shirish11
    I would like to understand the idea behind how 'docking' works in applications. I have worked with applications where components from individual forms are docked on a single main form to provide the necessary GUI, but I don't have any idea what's happening in the background. According to Wikipedia, a dock is a graphical user interface element that typically provides the user with a way of launching, switching between, and monitoring running programs or applications. Now I am a bit confused whether it is a component, an event, a property or something else. EDIT: The application was developed in Delphi on the Windows platform. There is something more in Delphi (manual dock and automatic dock).


  • Are non Turing-complete languages considered programming languages at all?

    - by user1598390
    Reading a recent question – Is it actually possible to have a 'useful' programming language that isn't Turing complete? – I've come to wonder whether non-Turing-complete programming languages are considered programming languages at all. Given that Turing-completeness requires things like variables to store values and control structures (for, while)... is a language that lacks these features considered a programming language?


  • What programming concept is used in the Nokia Lumia City Lens application? [closed]

    - by gowri
    I am totally impressed by the Nokia City Lens application. How does the Nokia Lumia City Lens app work? It detects shops, restaurants, etc. by scanning the visual environment. But how can it detect shops, or anything else, from visual information alone? You would need a 360-degree view to pin down a location, and you can't simply match visual information against stored data, because the visuals change proportionally with distance and angle. So how does this app match the location and retrieve the information? Can anyone explain what technology or algorithm is used in this app?


  • Need some concrete examples of user stories and tasks, and how they relate to functional and technical specifications

    - by gideon
    A little heads-up: I'm the only (lonely) dev building, planning and mocking my project as I go. I've come up with a preview release that does only the core aspects of the system, with good business value, and I've coded most of the UI as dirty, throwaway mockups over nicely abstracted and very minimal base code. In the end I know quite well what my clients want on the whole. I can't take agile-ish cowboying anymore, because I'm completely disorganized and have no paper plan, and since my clients are happy, things are getting more complex, with more features and ideas. So I started using and learning Agile and Scrum. Here are my problems:

    I know what a functional spec is (sample). Do all user stories and/or scenarios become part of the functional spec?

    I know what user stories and tasks are. Are these kinds of user stories? I don't see any business-value reason added to them. I made a mind map using FreeMind and ran into problems like this: Actor: Finance Manager. Can add a financial plan into the system, because... well, that's the point of it? What business-value reason do I add for things like this? Example: a user needs to be able to add a blog article (in the blogger app) because...?? It's the point of a blogger app; it centers around that feature.

    How do I get into the finer details and system definitions? Actor: Finance Manager. Action: adds a finance plan. This adding is a complicated process with lots of steps. Which user story will describe what a finance plan in the system is? I can put it in the functional spec under definitions, explaining what a finance plan is and how one adds it to the system, but how do I get to the backlog planning from there? Example: a blog article is mostly a textual document that can be written in rich text in the system. To add a blog article one must... But how do you create a backlog list and features out of this? Where are the user stories for what a blog article is and how one adds/removes it?

    Finally, I'm a little confused about the relation between functional specs and user stories. Will my spec contain user stories in it, with UI mockups? Will these user stories then branch out into tasks, which make up something like a technical specification? Example: EditorUser can add a blog article. Use XML to store blog articles. Add a form to add a blog. Add Windows Live Writer support. Those would be agile tasks, but would they also be part of, or form, the technical spec? Some concrete examples/answers to my questions would be nice! (A sketch of one such story follows.)
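
    As one concrete illustration – an editor's sketch, not taken from the original question – the common story template ties the actor, the action and the business value together, while the step-by-step "definition" detail lives in acceptance criteria and the technical work becomes backlog tasks:

        User story: As a Finance Manager, I want to add a financial plan
        so that my department's budget is tracked in one system of record.

        Acceptance criteria (where the step-by-step detail lives):
        - A plan has a name, a fiscal year and at least one budget line.
        - Saving an incomplete plan stores it as a draft.
        - A saved plan appears in the department's plan list.

        Tasks (technical; these feed the sprint backlog):
        - Create the financial-plan table/model.
        - Build the add/edit plan form.
        - Validate and persist drafts.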


  • What is the most unusual JavaScript concept you've ever seen?

    - by Cybrix
    Hi, I learned JavaScript at school, but since I work with it and study it every day, I've found very particular aspects of JavaScript that I didn't know about – aspects which at first were very hard for me to understand, but which I eventually found very useful and easy to implement. In the end, they give my code some kind of "beauty". An example I once saw (the original leaked result as an implicit global; a var fixes that):

        function getter(input) {
          var result = {
            foo1: 'bar1',
            foo2: 'bar2',
            foo3: 'bar3'
          }[input] || input || "default";
          return result;
        }

    Do you have other examples of particular uses you make of JavaScript? Thank you. PS: I use the term "particular use" because it might be unusual for a JavaScript beginner. I believe this question most likely belongs in the community wiki.


  • Inexpensive generation of hierarchical unique IDs

    - by romaninsh
    My application builds a hierarchical structure like this:

        root = {
          'id': 'root',
          'children': [
            { 'id': 'root_foo', 'children': [] },
            { 'id': 'root_foo2', 'children': [
              { 'id': 'root_foo2_bar', 'children': [] }
            ] }
          ]
        }

    In other words, it's a tree of nodes, where each node may have child elements and a unique identifier I call "id". When a new child is added, I need to generate a unique identifier for it, but I have two problems: the identifiers are getting too long, and adding many children gets slower, since I need to find the first available id.

    My requirement is that the name of a child X must be determined only from the state in its ancestors: when I re-generate a tree with the same contents, the IDs must be the same. In other words, when we have nodes A and B, creating a child in A must not affect the name given to the children of B.

    I know that one way to optimize would be to introduce a counter in each node and append it to the names, which would solve my performance issue but not the issue of long identifiers. Could you suggest an algorithm for quickly coming up with new IDs?
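
    For comparison, a minimal sketch of the counter approach with one tweak for length: encode the per-node counter in base 36 with a one-character separator, so each level adds only a couple of characters. The naming scheme is this editor's illustration, and it assumes children are created in the same order whenever the tree is re-generated:

        // Deterministic, short hierarchical IDs from per-node counters.
        function makeNode(parent) {
          const node = { children: [], nextChild: 0 };
          if (parent === undefined) {
            node.id = "r";
          } else {
            node.id = parent.id + "." + parent.nextChild.toString(36);
            parent.nextChild += 1; // O(1): no search for a free id
            parent.children.push(node);
          }
          return node;
        }

        const root = makeNode();
        const a = makeNode(root);  // id: "r.0"
        const b = makeNode(root);  // id: "r.1"
        const c = makeNode(b);     // id: "r.1.0"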


  • The how of a collision engine

    - by JXPheonix
    This is a very, very broad question: what is the general algorithm of how a collision engine works? No code in particular – just a general idea of how a collision engine does what it does, constantly refreshing the positions of objects and comparing them to other objects (see, I have the general gist of it here). A collision engine is basically an engine used in games (generally) so that when your player (call him Bob) moves into a wall, Bob stops; Bob does not walk through the wall. Collision engines also generally handle gravity in a game, and environmental things like that.
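
    As a sketch of the core test such engines run in a loop, here is the standard axis-aligned bounding-box (AABB) overlap check; real engines add a broad phase so they don't compare every pair of objects every frame:

        // Axis-aligned bounding boxes: { x, y, w, h }.
        function aabbOverlap(a, b) {
          return a.x < b.x + b.w && a.x + a.w > b.x &&
                 a.y < b.y + b.h && a.y + a.h > b.y;
        }

        // Naive per-frame step: try the move, cancel it on collision -
        // which is why Bob stops at the wall instead of walking through it.
        function step(bob, walls, dx, dy) {
          const moved = { x: bob.x + dx, y: bob.y + dy, w: bob.w, h: bob.h };
          for (const wall of walls) {
            if (aabbOverlap(moved, wall)) return; // blocked: keep old position
          }
          bob.x = moved.x;
          bob.y = moved.y;
        }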


  • NSTIC Next Steps

    - by Paul Laurent
    Today and tomorrow, we'll see our next steps in the National Strategy for Trusted Identities in Cyberspace (NSTIC) governance roadmap as NIST hosts the NSTIC Privacy Workshop at the MIT Media Lab in Cambridge, MA. I'm here live, but the proceedings are already underway and you can tune in remotely to the webcast here. Questions can also be lobbed in via the Tweetosphere at #NSTIC.


  • Why do we talk about the addresses and memory of variables in C?

    - by user2720323
    Why do we talk about the addresses and memory of variables in C, when in other languages (like Java, .NET, etc.) we do not talk about a variable's address or memory in a program – we just use the variables directly? Yet in the C language we constantly hear the words "address" and "memory". How can this be explained? My understanding is that C is a high-level language designed over assembly language, so C is a thin layer over assembly (in assembly language we use memory locations to store and track variables). In other languages these address- and memory-related details are wrapped by the language itself, so we never hear those words.


  • What would you call the concept behind CoffeeScript or Sass?

    - by MaG3Stican
    There is a rising trend in web development of creating new pseudo-languages to extend the functionality of JavaScript, CSS and HTML, given that those are static and their metamorphosis or evolution is painfully slow due to the variety of browser providers. I currently have a concept dilemma over how to categorize them for a book my employer asked me to write, as no one seems to have a name for these pseudo-languages. A tiny list of them:

    - JavaScript: LiveScript, Metalua, Uberscript, EmberScript
    - HTML: Razor, Java scriptlets
    - CSS: LESS, Sass

    I believe the concept of these pseudo-languages is quite different from that of a language or an extension of a language. First, they do not extend any functionality currently existing in HTML, CSS or JavaScript; they simply work around it. They also do not "compile" to an intermediate language; they are merely translated one-to-one to something that only then can be compiled. What would you call them?


  • 3-4 old computers = general purpose cluster?

    - by TheLQ
    I have three old computers lying around right now – a P2 at 800 MHz(?), an Intel Mobile at 1.6 GHz and an AMD Athlon XP 2000+ at 1.66 GHz – plus a P4 at 2.7 GHz that I might not use, all with 512 MB of RAM, and I am considering clustering them together for fun/knowledge. They would be running an undecided version of Linux, preferably Ubuntu-based.

    The issue is what I want to use it for: general computing and occasional video encoding, where by general computing I mean day-to-day tasks. However, I'm not sure whether every program started by a single X session will end up on the same machine, defeating the purpose of such a system. Will programs be split up, or will they all run on one machine? Second, assuming this runs 100BaseT Ethernet (I'm not sure the PCI slots could handle Gigabit), would the speed of running a program over the network be an issue? It seems that the constant requests for various things in RAM would be quite slow.

    And before you say "buy another computer!", that's not the point of this question. I'm asking whether it would be usable, not necessarily practical. And yes, I know this is going to be extremely power-consuming.


  • Array formats (JavaScript)

    - by João Melo
    I have a list of users with minions, something like this:

        User52: minion10, minion12
        User32: minion13, minion11

    I've been keeping them in an array where the array index is the id, like this:

        Users:
          [52] User
            minions:
              [10] minion
              [12] minion
          [32] User
            minions:
              [13] minion
              [11] minion

    so I can access them easily, like this: user[UserID].minions[MinionID] (e.g. user[32].minions[11]). But when I print it or send it as JSON I get something like this, because the arrays are sparse:

        {,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,minion,,,,,,,,,,,,,,minion}

    Should I keep using it like this, or should I change to something like this:

        User = function () {
          this.minions = ...;
          this.getMinion = function (value) {
            for (var m in this.minions) {
              if (this.minions[m].id == value) {
                return this.minions[m];
              }
            }
          };
        };

    and get it like this: user.getMinion(MinionID);

    Question: do I get better performance using a "short" array but looping every time I need a minion, or using "long" sparse arrays, with no loop, getting values directly by id?
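
    One middle ground – an editor's sketch, not from the original question – is to key a plain object by id instead of using sparse array indexes: lookup stays direct, and the JSON output stays compact because a plain object serializes only the keys that exist:

        const users = {};
        users[52] = { id: 52, minions: {} };
        users[52].minions[10] = { id: 10 };
        users[52].minions[12] = { id: 12 };

        const m = users[52].minions[12];      // direct access, no loop
        console.log(JSON.stringify(users));   // no runs of empty slots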


  • Legal Applications of Metamorphic Code

    - by V_P
    Firstly, I would like to state that I already understand the 'vx' applications of metamorphic code; I am not here to ask a question related to any of those topics, as that would be inappropriate in this context. I would like to know whether anyone has ever used 'metamorphic' code in practice for purposes other than those stated above and, if so, what the reasoning was for using that concept. In essence, I am trying to discover a purpose for this concept, if any, other than circumventing anti-virus scanners and the like.


  • What is a byte stream actually?

    - by user2720323
    Can anyone explain what a byte stream actually contains? Does it contain bytes (hex data), binary data, or English letters only? I am also confused about the term "raw data". If someone asked me to "reverse the 4-byte data", what should I assume the data is – hex code or binary code? Can anyone please clarify this for me? I have read many articles, in Java and in C, that use these words frequently, but I have never understood them clearly.
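
    A small sketch of the usual answer, assuming Node or a browser: a byte stream is just a sequence of 8-bit numbers (0-255), and "hex", "binary" and "text" are merely different ways of displaying the same bytes:

        // The same four bytes, shown three ways.
        const bytes = new Uint8Array([72, 105, 33, 10]);

        const hex = Array.from(bytes, (b) => b.toString(16).padStart(2, "0")).join(" ");
        // "48 69 21 0a"

        const bits = Array.from(bytes, (b) => b.toString(2).padStart(8, "0")).join(" ");
        // "01001000 01101001 00100001 00001010"

        const text = new TextDecoder().decode(bytes);
        // "Hi!\n" - a convention (here UTF-8) is what turns bytes into letters

        console.log(hex, bits, text);

    So "reverse the 4-byte data" means reversing the order of the bytes themselves; whether you then display the result as hex or binary is a separate choice.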


  • What are the most important concepts to understand for "fluency in developer English"?

    - by Edward Tanguay
    In April, I'm going to be giving a talk called "English 2.0 - Understanding the Language of Developers" to a group of English teachers. The purpose is, in two hours, to give them a quick background in key concepts so that they can better understand developer blogs and podcasts and are able to ask better questions when talking to developers. What do you think are the most important concepts to understand – concepts that developers take for granted but the general public is not familiar with? Here are a few ideas:

    - version control
    - abstractions
    - pub/sub
    - push vs. pull
    - debugging
    - modularity
    - three-tier architecture
    - class/object
    - "spaghetti code" vs. OOP
    - exception throwing
    - crowd sourcing
    - refactoring
    - the cloud
    - DRY (don't repeat yourself)
    - client/server
    - unit testing
    - designer/developer

