Search Results

Search found 3311 results on 133 pages for 'computing theory'.


  • The future of cloud computing? [closed]

    - by Vimvq1987
    As far as I know, cloud computing is growing rapidly: Amazon EC2, Google App Engine, Microsoft Windows Azure... But I can't imagine how cloud computing will change the world. Will cloud computing play the main role in the software industry? Will our data be stored in one place and then be accessible from anywhere? Will we no longer need powerful PCs because everything will be processed in the "cloud"? Thank you so much.

    Read the article

  • Cloud computing?

    - by Suraj
    I'm writing a report advising on future technologies that a manufacturing company could use. I've highlighted a number of advanced manufacturing technologies, such as CAD. However, I want to bring cloud computing into the report just to score some extra points, and I am not sure how one would bring cloud computing together with these advanced technologies. Basically, what would be the process of integrating these technologies into a cloud computing "environment"? Say the organisation buys a CAD package: how could they make use of cloud computing here?

    Read the article

  • Open source Distributed computing tool

    - by Prasenjit Chatterjee
    I want to set up distributed computing on my local area network, which consists of a bunch of PCs. Say for the time being each one has the same OS: Windows 7. Is there any open-source tool available so that I can share the resources of these PCs over the LAN and increase the speed of my applications and the available memory? I know that for a graphics-intensive application this is not very practical, because the LAN is much slower than a graphics processor. But I only want to share general applications, some basic software, programming-language IDEs, etc. Can anyone shed some light on it? Thanks in advance.

    Read the article

  • Cloud Computing - Multiple Physical Computers, One Logical Computer

    - by Koobz
    I know that you can set up multiple virtual machines per physical computer. I'm wondering if it's possible to make multiple physical computers behave as one logical unit. Fundamentally, the way I imagine it working is that you can throw 10 computers into a facility one day. You've got one client that requires the equivalent of two computers' worth, and 100 others that eat up the remaining 8. As demands change you're just reallocating logical resources; maybe the two-computer client now requires a third physical system. You just add it to the cloud, and don't worry about sharding the database or migrating data over to a new server. Can it work this way? If yes, why would anyone ever do things like partition their database servers anymore? Just add more computing resources. You scale horizontally with the hardware, but your server appears to scale vertically. There's no need to modify your application's infrastructure to support multiple databases, etc.

    Read the article

  • Cloud Computing - Multiple Physical Computers, One Logical Computer

    - by bundini
    I know that you can set up multiple virtual machines per physical computer. I'm wondering if it's possible to make multiple physical computers behave as one logical unit. Fundamentally, the way I imagine it working is that you can throw 10 computers into a facility one day. You've got one client that requires the equivalent of two computers' worth, and 100 others that eat up the remaining 8. As demands change you're just reallocating logical resources; maybe the two-computer client now requires a third physical system. You just add it to the cloud, and don't worry about sharding the database or migrating data over to a new server. Can it work this way? If yes, why would anyone ever do things like hand-partition their database servers anymore? Just add more computing resources. You scale horizontally with the hardware, but your server appears to scale vertically. There's no need to modify your application's supporting infrastructure to support multiple databases, etc.

    Read the article

  • Cloud computing with Eucalyptus?

    - by neolix
    Greetings! I want to set up a small cloud using our old 2-core server systems. We are new to cloud systems and have Googled around for this. We are looking to host VMs on top of them; if anyone has done this, please share documentation or a how-to. We have 50-plus servers which we are not using, each with 2 cores, 4 GB RAM and a 1 TB HDD. CentOS is my base OS, and we are looking to host Windows guests. Right now we can use these servers only with paravirtualization. Please excuse my English. Thanks.

    Read the article

  • Graph theory in python

    - by Dan
    I was wondering how people deal with graph theory in Python. How is a graph stored? Are there libraries for this? For example, how would I input a graph and then find its chromatic polynomial? Or its girth? Or the number of unique spanning trees? How about problems that involve edge weights, like travelling salesman problems? I don't need all of these answered; I'm just looking for a method or tool set that will help me approach and solve problems like this. Thanks, Dan
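
    For what it's worth, a common starting point in Python is the NetworkX library; the sketch below (an illustration, not an endorsement) builds a small weighted graph, asks a couple of basic questions of it, and counts spanning trees via Kirchhoff's matrix-tree theorem, since there is no single built-in call for that. Whether chromatic polynomial or girth helpers exist depends on the NetworkX version, so treat the exact API as something to check.

        # A minimal sketch using NetworkX (https://networkx.org) and NumPy.
        import networkx as nx
        import numpy as np

        G = nx.Graph()
        G.add_weighted_edges_from([
            ("a", "b", 1.0), ("b", "c", 2.0), ("c", "a", 2.5), ("c", "d", 1.2),
        ])

        print(G.number_of_nodes(), G.number_of_edges())
        print(nx.shortest_path(G, "a", "d", weight="weight"))   # weighted shortest path
        print(sorted(nx.minimum_spanning_tree(G).edges()))      # useful for weighted problems

        # Number of spanning trees via the matrix-tree theorem: any cofactor of
        # the (unweighted) graph Laplacian equals the spanning-tree count.
        L = nx.laplacian_matrix(G, weight=None).toarray().astype(float)
        minor = np.delete(np.delete(L, 0, axis=0), 0, axis=1)
        print(round(np.linalg.det(minor)))                      # -> 3 for this graph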

    Read the article

  • What is the relaxation condition in graph theory

    - by windopal
    Hi, I'm trying to understand the main concepts of graph theory and the algorithms within it. Most algorithms seem to contain a "Relaxation Condition", but I'm unsure what this is. Could someone explain it to me, please? An example of this is Dijkstra's algorithm; here is the pseudo-code:

        function Dijkstra(Graph, source):
            for each vertex v in Graph:            // Initializations
                dist[v] := infinity                // Unknown distance function from source to v
                previous[v] := undefined           // Previous node in optimal path from source
            dist[source] := 0                      // Distance from source to source
            Q := the set of all nodes in Graph     // All nodes in the graph are unoptimized - thus are in Q
            while Q is not empty:                  // The main loop
                u := vertex in Q with smallest dist[]
                if dist[u] = infinity:
                    break                          // all remaining vertices are inaccessible from source
                remove u from Q
                for each neighbor v of u:          // where v has not yet been removed from Q.
                    alt := dist[u] + dist_between(u, v)
                    if alt < dist[v]:              // Relax (u,v,a)
                        dist[v] := alt
                        previous[v] := u
            return dist[]

    Thanks
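
    For reference, here is a runnable Python version of the same algorithm (a sketch using a binary heap instead of the pseudo-code's linear scan for the minimum); the lines marked "relaxation" are the step the pseudo-code labels Relax (u,v,a): if going through u gives a shorter route to v, the distance estimate for v is "relaxed" (lowered) to that new value.

        import heapq

        def dijkstra(graph, source):
            # graph: dict mapping node -> list of (neighbour, edge_weight) pairs
            dist = {v: float("inf") for v in graph}
            previous = {v: None for v in graph}
            dist[source] = 0
            queue = [(0, source)]
            while queue:
                d, u = heapq.heappop(queue)
                if d > dist[u]:
                    continue                      # stale queue entry, skip it
                for v, w in graph[u]:
                    alt = dist[u] + w
                    if alt < dist[v]:             # relaxation test
                        dist[v] = alt             # relax: lower the estimate for v
                        previous[v] = u
                        heapq.heappush(queue, (alt, v))
            return dist, previous

        graph = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
        print(dijkstra(graph, "a")[0])            # {'a': 0, 'b': 1, 'c': 3}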

    Read the article

  • Set Theory and .NET

    - by MasterMax1313
    Recently I came across a situation where set theory and set math fit what I was doing to the letter (granted, there was an easier way to accomplish what I needed, i.e. LINQ, but I didn't think of that at the time). However, I didn't know of any generic set libraries. Granted, IEnumerables provide some set operations (Union, etc.), but nothing like Intersection or set comparison. Can anyone point out something that fits here? Something that implements set math using a generic type?
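
    For comparison only, this is the kind of generic set math being asked about, sketched with Python's built-in set type rather than a .NET library (so treat it purely as an illustration of the operations, not as an answer about the .NET ecosystem):

        a = {1, 2, 3, 4}
        b = {3, 4, 5}

        print(a | b)          # union                -> {1, 2, 3, 4, 5}
        print(a & b)          # intersection         -> {3, 4}
        print(a - b)          # difference           -> {1, 2}
        print(a ^ b)          # symmetric difference -> {1, 2, 5}
        print({3, 4} <= a)    # subset comparison    -> True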

    Read the article

  • Is there a theory for "transactional" sequences of failing and no-fail actions?

    - by Ross Bencina
    My question is about writing transaction-like functions that execute sequences of actions, some of which may fail. It is related to the general C++ principle "destructors can't throw", the no-fail property, and maybe also to multi-phase transactions or exception safety. However, I'm thinking about it in language-neutral terms. My concern is with correctly designing error handling in C++ functions that must be reliable. I would like to know what the concepts below are called so that I can learn more about them. I'm sorry that I can't ask the question more directly. Since I don't know this area I have provided an example to explain my question. The question is at the end. Here goes:

    Consider a sequence of steps or actions executed sequentially, where actions belong to one of two classes: those that always succeed, and those that may fail. In the examples below, S stands for an action that always succeeds (called "no-fail" in some settings), and F stands for an action that may fail (for example, it might fail to allocate memory or do I/O that could fail). Consider a sequence of actions (executed sequentially from left to right):

        S->S->S->S

    Since each action in the sequence above succeeds, the whole sequence succeeds. On the other hand, the following sequence may fail because the last action may fail:

        S->S->S->F

    So, claim: a sequence has the no-fail (S) property if and only if all of its actions are no-fail.

    Now, I'm interested in action sequences that form "atomic transactions", with "failure atomicity", i.e. where either the whole sequence completes successfully, or there is no effect. That is, if some action fails, the earlier ones must be rolled back. This requires that any successfully executed actions prior to a failing action must always be able to be rolled back. Consider the sequence:

        S->S->S->F
        S<-S<-S

    In the example above, the first row is the forward path of the transaction, and the second row contains the inverse actions (executed from right to left) that can be used to roll back if the final action in the top row fails. It seems to me that for a transaction to support failure atomicity, the following invariant must hold:

    Claim: To support failure atomicity (either completion or complete roll-back on failure), all actions preceding the latest failable (F) action on the forward path (marked * in the example below) must have no-fail (S) inverses. The following is an example of a sequence that supports failure atomicity:

                 *
        S->F->F->F
        S<-S<-S

    Further, if we want the transaction to be able to attempt cancellation mid-way through, but still guarantee either full completion or full rollback, then we need the following property:

    Claim: To support failure atomicity and cancellation mid-way through execution, in the face of errors in the inverse (cancellation) path, all actions following the earliest failable (F) inverse on the reverse path (marked *) must be no-fail (S).

        F->F->F->S->S
        S<-S<-F<-F
                 *

    I believe that these two conditions guarantee that an abortable/cancelable transaction will never get "stuck". My questions are: What is the study and theory of these properties called? Are my claims correct? And what else is there to know?

    UPDATE 1: Updated terminology: what I previously called "robustness" is called atomicity in the database literature.

    UPDATE 2: Added explicit reference to failure atomicity, which seems to be a thing.
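
    For illustration, here is a minimal sketch (in Python, with invented names) of the structure described above: each forward action that may fail (F) is registered together with an inverse that is assumed to be no-fail (S), and a failure part-way through rolls back everything already done, in reverse order.

        class Transaction:
            def __init__(self):
                self._undo = []          # inverses of the completed forward actions

            def do(self, forward, inverse=None):
                forward()                # an F (or S) action; may raise
                if inverse is not None:
                    self._undo.append(inverse)

            def rollback(self):
                for inverse in reversed(self._undo):
                    inverse()            # assumed no-fail, so rollback cannot get "stuck"
                self._undo.clear()

        def failing_action():            # stands in for an F action that fails
            raise IOError("disk full")

        state = []
        tx = Transaction()
        try:
            tx.do(lambda: state.append("a"), lambda: state.remove("a"))
            tx.do(lambda: state.append("b"), lambda: state.remove("b"))
            tx.do(failing_action)
        except IOError:
            tx.rollback()
        print(state)                     # [] -- the earlier actions were rolled back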

    Read the article

  • Configuration management in support of scientific computing

    - by Sharpie
    For the past few years I have been involved with developing and maintaining a system for forecasting near-shore waves. Our team has just received a significant grant for further development and as a result we are taking the opportunity to refactor many components of the old system. We will also be receiving a new server to run the model, so I am taking this opportunity to consider how we set up the system. Basically, the steps that need to happen are:

    - Some standard packages and libraries, such as compilers and databases, need to be downloaded and installed.
    - Some custom scientific models need to be downloaded and compiled from source, as they are not commonly provided as packages.
    - New users need to be created to manage the databases and run the models.
    - A suite of scripts that manage model-database interaction needs to be checked out from source code control and installed.
    - Crontabs need to be set up to run the scripts at regular intervals in order to generate forecasts.

    I have been pondering applying tools such as Puppet, Capistrano or Fabric to automate the above steps. It seems perfectly possible to implement most of the above functionality, except there are a couple of use cases that I am wondering about:

    - During my preliminary research, I have found few examples and little discussion on how to use these systems to abstract and automate the process of building custom components from source.
    - We may have to deploy on machines that are isolated from the Internet, i.e. all configuration and setup files will have to come in on a USB key that can be inserted into a terminal that can connect to the server that will run the models.

    I see this as an opportunity to learn a new tool that will help me automate my workflow, but I am unsure which tool I should start with. If any member of the community could suggest a tool that would support the above workflow and the issues specific to scientific computing, I would be very grateful. Our production server will be running Linux, but support for OS X would be a bonus as it would allow the development team to set up test installations outside of VirtualBox.
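
    Purely as a sketch of what the Fabric option might look like for these steps (the hostname, package names, URLs, user name and paths below are invented placeholders, and the commands assume a Debian-style server):

        # Rough provisioning sketch with Fabric 2.x (pip install fabric).
        from fabric import Connection

        def provision(host="forecast-server.example.org"):
            c = Connection(host)

            # 1. Standard packages and libraries (placeholder package names)
            c.sudo("apt-get install -y build-essential gfortran postgresql")

            # 2. A custom scientific model built from source (placeholder URL)
            c.run("curl -LO https://example.org/wave-model-1.2.tar.gz")
            c.run("tar xzf wave-model-1.2.tar.gz && make -C wave-model-1.2")
            c.sudo("make -C wave-model-1.2 install")

            # 3. A system user to own the databases and model runs
            c.sudo("useradd --system --create-home forecast")

            # 4. The model-database glue scripts from source control
            c.sudo("git clone https://example.org/forecast-scripts.git /opt/forecast")

            # 5. A crontab (kept locally in version control) to generate forecasts
            c.put("forecast.cron", "/tmp/forecast.cron")
            c.sudo("crontab -u forecast /tmp/forecast.cron")

        if __name__ == "__main__":
            provision()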

    Read the article

  • Distributed storage and computing

    - by Tim van Elteren
    Dear Serverfault community, after researching a number of distributed file systems for deployment in a production environment, with the main purpose of performing both batch and real-time distributed computing, I've identified the following candidates, selected mainly on maturity, license and support: Ceph, Lustre, GlusterFS, HDFS, FhGFS, MooseFS and XtreemFS. The key properties that our system should exhibit are:

    - open source and liberally licensed, yet production ready, i.e. a mature, reliable, community- and commercially-supported solution;
    - ability to run on commodity hardware, preferably designed for it;
    - high availability of the data, with the focus mostly on reads;
    - high scalability, so operation over multiple data centres, possibly on a global scale;
    - removal of single points of failure through replication and distribution of (meta-)data, i.e. fault tolerance.

    The sensitivity points that were identified, and resulted in the following questions, are:

    - Transparency to the processing layer / application with respect to data locality, e.g. knowing where data is physically located at the server level, mainly for resource allocation and fast, high-performance processing: how can this be accomplished? Do you know from experience which solutions provide this transparency, and to what extent?
    - POSIX compliance, or conformance, is mentioned on the wiki pages of most of the solutions listed above. The question here mainly is: how relevant is support for the POSIX standard? Hadoop, for example, isn't POSIX-compliant by design; what are the pros and cons?
    - What about the difference between synchronous and asynchronous operation of a distributed file system? Though a synchronous distributed file system is preferred for reliability, it also imposes certain limitations with respect to scalability. What would be, from your expertise, the way to go on this?

    I'm looking forward to your replies. Thanks in advance! :) With kind regards, Tim van Elteren

    Read the article

  • Big Data – Role of Cloud Computing in Big Data – Day 11 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned the importance of NewSQL. In this article we will understand the role of the cloud in the Big Data story.

    What is Cloud? Cloud has been the biggest buzzword around for the last few years. Everyone knows about the cloud and it is extremely well defined online. In this article we will discuss the cloud in the context of Big Data. Cloud computing is a method of providing shared computing resources to applications which require dynamic resources. These resources include applications, computing, storage, networking, development and various deployment platforms. The fundamental idea of cloud computing is that it shares pretty much all the resources and delivers them to end users as a service. Examples of Cloud Computing and Big Data are Google and Amazon.com; both have fantastic Big Data offerings with the help of the cloud. We will discuss this later in this blog post.

    There are two different cloud deployment models: 1) the Public Cloud and 2) the Private Cloud.

    Public Cloud: the cloud infrastructure built by commercial providers (Amazon, Rackspace, etc.) who create a highly scalable data center that hides the complex infrastructure from the consumer and provides various services.

    Private Cloud: the cloud infrastructure built by a single organization that manages a highly scalable data center internally.

    Here is a quick comparison between Public Cloud and Private Cloud from Wikipedia:

                        Public Cloud                        Private Cloud
        Initial cost    Typically zero                      Typically high
        Running cost    Unpredictable                       Unpredictable
        Customization   Impossible                          Possible
        Privacy         No (host has access to the data)    Yes
        Single sign-on  Impossible                          Possible
        Scaling up      Easy while within defined limits    Laborious but no limits

    Hybrid Cloud: the cloud infrastructure built from a composition of two or more clouds, such as a public and a private cloud. A hybrid cloud gives the best of both worlds as it combines multiple cloud deployment models together.

    Cloud and Big Data – Common Characteristics: There are many characteristics of cloud architecture and cloud computing which are also essentially important for Big Data. They overlap heavily, and in many places it simply makes sense to use the power of both architectures to build a highly scalable framework. Here is the list of the characteristics of cloud computing that are important for Big Data: Scalability, Elasticity, Ad-hoc Resource Pooling, Low Cost to Set Up Infrastructure, Pay on Use or Pay as you Go, Highly Available.

    Leading Big Data Cloud Providers: There are many players in the Big Data cloud, but we will list a few of the well-known ones here.

    Amazon: Amazon is arguably the most popular Infrastructure as a Service (IaaS) provider. The history of how Amazon started in this business is very interesting. They started out with a massive infrastructure to support their own business. Gradually they figured out that their own resources were underutilized most of the time. They decided to get the maximum out of the resources they had, and hence they launched their Amazon Elastic Compute Cloud (Amazon EC2) service in 2006. Their products have evolved a lot recently and it is now one of their primary businesses besides their retail selling. Amazon also offers Big Data services under Amazon Web Services. Here is the list of the included services:

    - Amazon Elastic MapReduce – processes very high volumes of data
    - Amazon DynamoDB – a fully managed NoSQL (Not Only SQL) database service
    - Amazon Simple Storage Service (S3) – a web-scale service designed to store and accommodate any amount of data
    - Amazon High Performance Computing – provides low-latency, tuned high performance computing clusters
    - Amazon Redshift – a petabyte-scale data warehousing service

    Google: Though Google is known for its search engine, we all know that it is much more than that.

    - Google Compute Engine – offers secure, flexible computing from energy-efficient data centers
    - Google BigQuery – allows SQL-like queries to run against large datasets
    - Google Prediction API – a cloud-based machine learning tool

    Other Players: Besides Amazon and Google we also have other players in the Big Data market. Microsoft is also attempting Big Data with the cloud with Microsoft Azure. Additionally, Rackspace and NASA together have initiated OpenStack. The goal of OpenStack is to provide a massively scaled, multitenant cloud that can run on any hardware.

    Things to Watch: Cloud-based solutions integrate very well with the Big Data story and are also very economical to implement. However, there are a few things one should be careful about when deploying Big Data on cloud solutions. Here is a list of a few things to watch: Data Integrity, Initial Cost, Recurring Cost, Performance, Data Access Security, Location, Compliance. Every company has a different approach to Big Data and different rules and regulations. Based on various factors, one can implement their own custom Big Data solution on a cloud.

    Tomorrow: In tomorrow’s blog post we will discuss various operational databases supporting Big Data.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Cloud Computing Forces Better Design Practices

    - by Herve Roggero
    Is cloud computing simply different from on-premises development, or is cloud computing actually forcing you to create better applications than you normally would? In other words, is cloud computing merely imposing different design principles, or forcing better design principles? A little while back I got into a discussion with a developer in which I was arguing that cloud computing, and specifically Windows Azure in his case, was forcing developers to adopt better design principles. His opinion was that cloud computing was not yielding better systems; just different systems. In this blog, I will argue that cloud computing does force developers to use better design practices, and hence build better applications.

    So the first thing to define, of course, is the word "better" in the context of application development. Looking at a few definitions online, better means "superior quality". As it relates to this discussion, then, I stipulate that cloud computing can yield higher quality applications in terms of scalability, everything else being equal.

    Before going further I also need to outline the difference between performance and scalability. Performance and scalability are two related concepts, but they don't mean the same thing. Scalability is the measure of system performance given various loads. So when developers design for performance, they usually give higher priority to a given load and tend to optimize for that load. When developers design for scalability, the actual performance at a given load is not as important; the ability to ensure reasonable performance regardless of the load becomes the objective. This can lead to very different design choices. For example, if your objective is to obtain the fastest response time possible for a service you are building, you may choose to implement a TCP connection that never closes until the client chooses to close the connection (in other words, a tightly coupled service from a connectivity standpoint), and on which a connection session is established for faster processing of the next request (like SQL Server or other database systems, for example). If your objective is to scale, you may implement a service that answers requests without keeping session state, so that server resources are released as quickly as possible, like a REST service for example. This alternate design would likely have a slower response time than the TCP service for any given load, but would continue to function at very large loads because of its inherently loosely coupled design. An example of a REST service is the NoSQL implementation in the Microsoft cloud called Azure Tables.

    Now, back to cloud computing… Cloud computing is designed to help you scale your applications, specifically when you use Platform as a Service (PaaS) offerings. However it's not automatic. You can design a tightly coupled TCP service as discussed above, and as you can imagine, it probably won't scale even if you place the service in the cloud, because it isn't using a connection pattern that will allow it to scale [note: I am not implying that all TCP systems do not scale; I am just illustrating the scalability concepts with an imaginary TCP service that isn't designed to scale, for the purpose of this discussion]. The other service, using REST, will have a better chance to scale because, by design, it minimizes resource consumption for individual requests and doesn't tie a client connection to a specific endpoint (which means you can easily deploy this service to hundreds of machines without much trouble, as long as your pockets are deep enough).

    The TCP and REST services discussed above are both valid designs; the TCP service is faster and the REST service scales better. So is it fair to say that one service is fundamentally better than the other? No; not unless you need to scale. And if you don't need to scale, then you don't need the cloud in the first place. However, it is interesting to note that if you do need to scale, then a loosely coupled system becomes a better design because it can almost always scale better than a tightly coupled system. And because most applications grow over time, with an increasing user base, new functional requirements, increased data and so forth, most applications eventually do need to scale. So in my humble opinion, I conclude that a loosely coupled system is not just different from a tightly coupled system; it is a better design, because it will stand the test of time. And in my book, if a system stands the test of time better than another, it is of superior quality. Because cloud computing demands loosely coupled systems so that its underlying service architecture can be leveraged, developers ultimately have no choice but to design loosely coupled systems for the cloud. And because loosely coupled systems are better… the cloud forces better design practices. My 2 cents.

    Read the article

  • Cloud Computing = Elasticity * Availability

    - by Herve Roggero
    What is cloud computing? Is hosting the same thing as cloud computing? Are you running a cloud if you already use virtual machines? What is the difference between Infrastructure as a Service (IaaS) and a cloud provider? And the list goes on… these questions keep coming up and all try to fundamentally explain what "cloud" means relative to other concepts. At the risk of oversimplification, answering these questions becomes simpler once you understand the primary foundations of cloud computing: Elasticity and Availability.

    Elasticity: The basic value proposition of cloud computing is to pay as you go, and to pay for what you use. This implies that an application can expand and contract on demand, across all its tiers (presentation layer, services, database, security…). This also implies that application components can grow independently from each other. So if you need more storage for your database, you should be able to grow that tier without affecting, reconfiguring or changing the other tiers. Basically, cloud applications behave like a sponge; when you add water to a sponge, it grows in size; in the application world, the more customers you add, the more it grows. Pure IaaS providers will provide certain benefits, specifically in terms of operating costs, but an IaaS provider will not help you in making your applications elastic; neither will virtual machines. The smallest elasticity unit of an IaaS provider and a virtual machine environment is a server (physical or virtual). While adding servers in a datacenter helps in achieving scale, it is hardly enough. The application has yet to use this hardware. If the process of adding computing resources is not transparent to the application, the application is not elastic. As you can see from the above description, designing for the cloud is not about more servers; it is about designing an application for elasticity regardless of the underlying server farm.

    Availability: The fact of the matter is that making applications highly available is hard. It requires highly specialized tools and trained staff. On top of that, it's expensive. Many companies are required to run multiple data centers due to high availability requirements. In some organizations, some data centers are simply on standby, waiting to be used in case of a failover. Other organizations are able to achieve a certain level of success with active/active data centers, in which all available data centers serve incoming user requests. While achieving high availability for services is relatively simple, establishing a highly available database farm is far more complex. In fact it is so complex that many companies establish yearly tests to validate failover procedures. To a certain degree, some IaaS providers can assist with complex disaster recovery planning and with setting up data centers that can achieve successful failover. However the burden is still on the corporation to manage and maintain such an environment, including regular hardware and software upgrades. Cloud computing, on the other hand, removes most of the disaster recovery requirements by hiding many of the underlying complexities.

    Cloud Providers: A cloud provider is an infrastructure provider offering additional tools to achieve application elasticity and availability that are not usually available on-premises. For example, Microsoft Azure provides a simple configuration screen that makes it possible to run 1 or 100 web sites by clicking a button or two on a screen (simplifying provisioning), and soon SQL Azure will offer Data Federation to allow database sharding (which allows you to scale the database tier seamlessly and automatically). Other cloud providers offer certain features that are not available on-premises as well, such as Amazon S3 (Simple Storage Service), which gives you virtually unlimited storage capabilities for simple data stores and is somewhat equivalent to the Microsoft Azure Table offering (a server-independent data storage model). Unlike IaaS providers, cloud providers give you the necessary tools to adopt elasticity as part of your application architecture. Some cloud providers offer built-in high availability that gets you out of the business of configuring clustered solutions or running multiple data centers. Some cloud providers will give you more control (which puts some of that burden back on the customers' shoulders) and others will tend to make high availability totally transparent. For example, SQL Azure provides high availability automatically, which would be very difficult (and very costly) to achieve on-premises. Keep in mind that each cloud provider has its strengths and weaknesses; some are better at achieving transparent scalability and server independence than others.

    Not for Everyone: Note however that it is up to you to leverage the elasticity capabilities of a cloud provider, as discussed previously; if you build a website that does not need to scale, for which elasticity is not important, then you can use a traditional host provider unless you also need high availability. Leveraging the technologies of cloud providers can be difficult and can become a journey for companies that build their solutions in a scale-up fashion. Cloud computing promises to address cost containment and scalability of applications with built-in high availability. If your application does not need to scale or you do not need high availability, then cloud computing may not be for you. In fact, you may pay a premium to run your applications with cloud providers due to the underlying technologies built specifically for scalability and availability requirements. And as such, the cloud is not for everyone.

    Consistent Customer Experience, Predictable Cost: With all its complexities, buzz and foggy definition, cloud computing boils down to a simple objective: consistent customer experience at a predictable cost. The objective of a cloud solution is to provide the same user experience to your last customer as to the first, while keeping your operating costs directly proportional to the number of customers you have. Making your applications elastic and highly available across all their tiers, with as much automation as possible, achieves the first objective of a consistent customer experience. And the ability to expand and contract the infrastructure footprint of your application dynamically achieves the cost containment objective.

    Herve Roggero is a SQL Azure MVP and co-author of Pro SQL Azure (Apress). He is the co-founder of Blue Syntax Consulting (www.bluesyntax.net), a company focusing on cloud computing technologies and helping customers understand and adopt them. For more information contact herve at hroggero @ bluesyntax.net .

    Read the article

  • Grid computing projects similar to NGrid (thread based)

    - by DivdeAndConquer
    Hello there, first time poster. This is a great place for reading about programming problems. I've been looking at some grid computing projects for .Net/Mono and stumbled upon NGrid. NGrid seems really appealing for grid computing because you simply pass threads to it and there is very little modification you have to make to your code. However, I see that NGrid (http://ngrid.sourceforge.net/?page=overview) is still at version 0.7 and hasn't been updated since May 2008. So, I'm wondering if there are any other grid computing projects that use a similar thread-passing architecture and if anyone has had success using NGrid.

    Read the article

  • Understanding: cloud-server, cloud-hosting, cloud-computing, the cloud

    - by Abel
    There's a lot of buzz about these subjects and there seems little consensus on the terms. Is that just me not understanding the subject, or is there a clear meaning for each of these terms? Are there more elaborate terms or descriptions that describe what a cloud provider has, is or offers? EDIT: rewritten question, apparently it was unclear, partially due to the bloat I added.

    Read the article

  • Color Theory: How to convert Munsell HVC to RGB/HSB/HSL

    - by Ian Boyd
    I'm looking at a document that describes the standard colors used in dentistry to describe the color of a tooth. They quote hue, value and chroma values, and indicate they are from the 1905 Munsell description of color:

    The system of colour notation developed by A. H. Munsell in 1905 identifies colour in terms of three attributes: HUE, VALUE (brightness) and CHROMA (saturation) [15].

    HUE (H): Munsell defined hue as the quality by which we distinguish one colour from another. He selected five principal colours: red, yellow, green, blue, and purple; and five intermediate colours: yellow-red, green-yellow, blue-green, purple-blue, and red-purple. These were placed around a colour circle at equal points, and the colours in between these points are a mixture of the two, in favour of the nearer point/colour (see Fig 1.).

    VALUE (V): This notation indicates the lightness or darkness of a colour in relation to a neutral grey scale, which extends from absolute black (value symbol 0) to absolute white (value symbol 10). This is essentially how 'bright' the colour is.

    CHROMA (C): This indicates the degree of divergence of a given hue from a neutral grey of the same value. The scale of chroma extends from 0 for a neutral grey to 10, 12, 14 or farther, depending upon the strength (saturation) of the sample to be evaluated.

    There are various systems for categorising colour; the Vita system is most commonly used in dentistry. This uses the letters A, B, C and D to notate the hue (colour) of the tooth. The chroma and value are both indicated by a value from 1 to 4, A1 being lighter than A4, but A4 being more saturated than A1. If placed in order of value, i.e. brightness, the order from brightest to darkest would be: A1, B1, B2, A2, A3, D2, C1, B3, D3, D4, A3.5, B4, C2, A4, C3, C4. The exact values of Hue, Value and Chroma for each of the shades are shown below (16).

    So my question is, can anyone convert Munsell HVC into RGB, HSB or HSL?

        Hue    Value (Brightness)    Chroma (Saturation)
        4.5    7.80                  1.7
        2.4    7.45                  2.6
        1.3    7.40                  2.9
        1.6    7.05                  3.2
        1.6    6.70                  3.1
        5.1    7.75                  1.6
        4.3    7.50                  2.2
        2.3    7.25                  3.2
        2.4    7.00                  3.2
        4.3    7.30                  1.6
        2.8    6.90                  2.3
        2.6    6.70                  2.3
        1.6    6.30                  2.9
        3.0    7.35                  1.8
        1.8    7.10                  2.3
        3.7    7.05                  2.4

    They say that Value (Brightness) varies from 0..10, which is fine, so I take 7.05 to mean 70.5%. But what is Hue measured in? I'm used to hue being measured in degrees (0..360), but the values I see would all be red, when they should be more yellow, or brown. Finally, it says that Chroma/Saturation can range from 0..10, or even higher, which makes it sound like an arbitrary scale. So can anyone convert Munsell HVC to HSB or HSL, or better yet, RGB?
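
    One hedged sketch of a conversion path, assuming the third-party colour-science package (pip install colour-science) and that it exposes munsell_colour_to_xyY, xyY_to_XYZ and XYZ_to_sRGB as in recent versions (worth checking against the installed API). Note also that the hue numbers in the table are incomplete as Munsell hues: a full Munsell specification needs the hue-letter page as well (tooth shades are usually quoted in the yellow-red region), so the "YR" prefix below is an assumption.

        import colour
        import numpy as np

        def munsell_to_srgb(hue, value, chroma, hue_letter="YR"):
            spec = f"{hue}{hue_letter} {value}/{chroma}"      # e.g. "4.5YR 7.8/1.7"
            xyY = colour.munsell_colour_to_xyY(spec)          # Munsell renotation -> CIE xyY
            XYZ = colour.xyY_to_XYZ(xyY)
            # Rough preview only: this skips the illuminant C -> D65 adaptation
            # that a strictly correct Munsell -> sRGB conversion would include.
            rgb = colour.XYZ_to_sRGB(XYZ)
            return np.clip(rgb, 0, 1) * 255                   # 0..255 sRGB values

        print(munsell_to_srgb(4.5, 7.80, 1.7))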

    Read the article

  • Web Site Serving, Cloud-Computing, oh, my

    - by Frank
    I'm planning a software-based service. To give it a bit of context (type of traffic), assume it is similar to Facebook in nature (with a little GitHub thrown in). I've been trying to understand my different hosting options. I've been using a shared host with GoDaddy for years just fine; I currently host a WordPress web site there and I've not had any problems. Quite frankly, they've taken good care of me. However, a shared hosting environment is limited in nature. For example, I can't do anything but host a web site there; I cannot run a Mercurial server, for instance. The last time I attempted to build a web application with the intention of eventually launching it via GoDaddy, I ran into all sorts of trouble because it was on shared hosting: assembly issues, etc. At the time, the cost and time sank my project. (The lack of direct access was also frustrating. To be fair to GoDaddy, this was over 3 years ago.) I've been looking at Rackspace or Amazon as a possible cloud solution, but they seem to provide just processing power and bandwidth (and an OS); from what I understand, I'd need to get Apache and MySQL working on my own. The way cloud hosting is priced, however, seems appealing. I figure my final option might be to use a virtual private host. I think this would be more flexible than a shared host but less scalable than a cloud-based server. So, I guess my question is: what is an appropriate solution for someone who intends to build a web application service? I figure that I need to establish a hosting environment now rather than later so I can plan to use the environment effectively. I'd prefer to be fairly economical to start out with. I really can't afford to pay $999 (or even $99) while I build up the site and get the core functionality online, but at the same time I'd like the selected environment to grow as needed. Thank you.

    Read the article

  • Theory of formal languages - Automaton

    - by dader51
    Hi everybody! I'm wondering about formal languages. I have a kind of parser: it reads an XML-like serialized tree structure and turns it into a multidimensional array. I figured out that I need at least three variables to achieve the job:

        $tree  = array();        // a new array
        $pTree = array(&$tree);  // a new array whose first element points to $tree
        $deep  = 0;

    plus the one containing the sentence split into words. My point is about the similarities between the algorithm being used and the different kinds of automata (state machines, Turing machines, stack machines, ...). The $words variable is the "tape" of the automaton, the tests/conditions of the algorithm are the transitions, $deep is the state and $tree is the output. I cannot figure out what $pTree is. So the question is: which automaton am I implicitly using here, and into which family of formal languages does it fit? And what about recursion?
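
    For illustration, the same parsing pattern can be sketched in Python, where the explicit stack plays the role the question assigns to $pTree (a trail of references into the tree being built) and the token list is the "tape" being read; this is only a sketch, not the original PHP:

        def parse(tokens):
            tree = []
            stack = [tree]                    # counterpart of $pTree; len(stack) - 1 acts like $deep
            for tok in tokens:
                if tok.startswith("</"):      # closing tag: pop back to the parent
                    stack.pop()
                elif tok.startswith("<"):     # opening tag: push a new child node
                    node = {"tag": tok.strip("<>"), "children": []}
                    stack[-1].append(node)
                    stack.append(node["children"])
                else:                         # plain text: attach to the current node
                    stack[-1].append(tok)
            return tree

        tokens = ["<a>", "<b>", "hello", "</b>", "world", "</a>"]
        print(parse(tokens))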

    Read the article

  • Theory: "Lexical Encoding"

    - by _ande_turner_
    I am using the term "Lexical Encoding" for lack of a better one. A Word is arguably the fundamental unit of communication, as opposed to a Letter. Unicode tries to assign a numeric value to each Letter of all known Alphabets. What is a Letter to one language is a Glyph to another. Unicode 5.1 currently assigns more than 100,000 values to these Glyphs. Of the approximately 180,000 Words being used in Modern English, it is said that with a vocabulary of about 2,000 Words you should be able to converse in general terms. A "Lexical Encoding" would encode each Word, not each Letter, and encapsulate them within a Sentence.

        // A simplified example of a "Lexical Encoding"
        String sentence = "How are you today?";
        int[] encoded = { 93, 22, 14, 330, QUERY };

    In this example each Token in the String is encoded as an Integer. The Encoding Scheme here simply assigns an int value based on a generalised statistical ranking of word usage, and assigns a constant to the question mark. Ultimately, though, a Word has both a Spelling and a Meaning. Any "Lexical Encoding" would preserve the meaning and intent of the Sentence as a whole, and not be language specific. An English sentence would be encoded into "...language-neutral atomic elements of meaning..." which could then be reconstituted into any language with a structured Syntactic Form and Grammatical Structure. What are other examples of "Lexical Encoding" techniques? If you are interested in where the word-usage statistics come from: http://www.wordcount.org
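
    A small sketch of the word-level encoding idea in Python; the vocabulary, its rank-style IDs and the QUERY marker are all invented for illustration:

        QUERY = 100000                          # sentence-level marker, like the "?" above
        vocab = {"how": 93, "are": 22, "you": 14, "today": 330}
        reverse = {v: k for k, v in vocab.items()}

        def encode(sentence):
            words = sentence.lower().rstrip("?.!").split()
            codes = [vocab[w] for w in words]
            if sentence.rstrip().endswith("?"):
                codes.append(QUERY)
            return codes

        def decode(codes):
            words = [reverse[c] for c in codes if c in reverse]
            suffix = "?" if QUERY in codes else "."
            return " ".join(words).capitalize() + suffix

        print(encode("How are you today?"))     # [93, 22, 14, 330, 100000]
        print(decode(encode("How are you today?")))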

    Read the article

  • Discrete problem of probability theory [closed]

    - by calejero
    A jury consists of 12 persons, each of whom has, before the trial starts, a probability of 0.4 of voting in favor of the defendant's innocence. During the trial, the lawyer has a probability of 0.6 of changing the mind of each juror who was biased against the accused. How likely is the defendant to be acquitted if he needs 10 votes in favor?
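
    Under one natural reading of the problem (a juror ends up voting to acquit either by starting out favourable, probability 0.4, or by being swayed by the lawyer, probability 0.6 x 0.6, with jurors independent), each juror votes to acquit with probability 0.76 and the answer is a binomial tail; a quick check of that interpretation:

        from math import comb

        p = 0.4 + 0.6 * 0.6            # P(one juror votes to acquit) = 0.76
        acquit = sum(comb(12, k) * p**k * (1 - p)**(12 - k) for k in range(10, 13))
        print(round(acquit, 4))        # about 0.42 under this interpretation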

    Read the article
