Search Results

Search found 2261 results on 91 pages for 'numerical computing'.

Page 17/91

  • Hosting and scaling of a Facebook application on the cloud?

    - by DhruvPathak
    We would be building a Facebook application in Django (Python), but are still not sure of where to host it economically, and with a good provision to scale in case the app goes viral. Some details about the app: i) It would be HTML based like a website, using Django as a framework. ii) 100K is the number of expected pageviews in a day, if the app is viral. iii) The users will not generate any media content; only some database data will be generated by them. It would be great if someone with more experience could give guidance on the following points: A) Hosting on Google App Engine or Amazon EC2 or some other cloud like Rackspace: preferable points found in App Engine were ease of deployment, cost effectiveness and easy scaling. For EC2: full hold of the virtual machine, and Amazon NoSQL and RDBMS database services in case we decide to use them. B) Does backend technology affect monthly cost? E.g. would the CPU and memory usage difference of Django over, for example, a PHP framework like CodeIgniter really make a remarkable difference in running costs? (Here is the article that triggered this thought process: http://journal.dedasys.com/2010/01/12/rough-estimates-of-the-dollar-cost-of-scaling-web-platforms-part-i#comments) C) Does something like Heroku, which provides additional services over Amazon EC2, prove to be better than raw cloud management? It is not that we are trying for premature scaling; we just want to have a good start so that we are ready to handle unpredicted growth and scale.

    Read the article

  • Any frameworks or libraries that let me run a large number of concurrent jobs on a schedule?

    - by Yoga
    Are there any high-level programming frameworks that let me run a large number of concurrent jobs on a schedule? E.g. I have 100K URLs that need their uptime checked every 5 minutes. I could certainly write a program to handle this, but then I need to handle concurrency, queuing, error handling, system throttling, job distribution, etc. Is there a framework that lets me focus only on the particular job (i.e. the ping task) while the system takes care of the scaling and error handling for me? I am open to any language.
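
    One framework in this space is Celery: its beat scheduler fires a periodic task, and the queue/worker machinery handles distribution, retries and throttling, so only the per-URL check has to be written. A minimal sketch, assuming a local Redis broker and the requests library; the URL loader and result sink are stand-ins:

        from celery import Celery
        import requests

        app = Celery('uptime', broker='redis://localhost:6379/0')  # assumed broker
        app.conf.beat_schedule = {
            'dispatch-every-5-min': {'task': 'tasks.dispatch', 'schedule': 300.0},
        }

        def load_urls():
            return ['http://example.com']  # stand-in for the 100K URLs

        @app.task(name='tasks.dispatch')
        def dispatch():
            # fan out one small task per URL; the broker queues them and
            # workers on any number of machines drain the queue
            for url in load_urls():
                check.delay(url)

        @app.task(name='tasks.check')
        def check(url):
            try:
                ok = requests.head(url, timeout=10).status_code < 500
            except requests.RequestException:
                ok = False
            print(url, ok)  # stand-in for recording the result

    Run with a worker plus the embedded beat scheduler, e.g. celery -A tasks worker -B (the module name tasks is an assumption).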

    Read the article

  • How to rewrite a TCP MMOG server designed to run on a single machine, in a distributed way?

    - by Dokkat
    I have an MMOG server running on C++, using Winsock. My server won't support more than 200 players. I had the idea of redesigning it to use multiple servers instead of one, so that, for example, each server could take care of a number of players, and, if one became too laggy, it could transfer responsibility for a player to another server. I'm not sure how to program consistent game logic like that, though. Are there techniques for this?
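
    A common starting point (a sketch of one technique, not a full answer) is to shard players across server processes by a stable key such as the zone they occupy, so each process owns a disjoint set of players and a handoff happens only when a player crosses a zone boundary. A minimal illustration of the ownership mapping, with made-up server addresses:

        import hashlib

        SERVERS = ['10.0.0.1:4000', '10.0.0.2:4000', '10.0.0.3:4000']  # made up

        def owner(zone_id: str) -> str:
            """Map a zone to the server that owns every player inside it."""
            h = int(hashlib.sha1(zone_id.encode()).hexdigest(), 16)
            return SERVERS[h % len(SERVERS)]

        # A handoff is then: the source server serializes the player's state,
        # sends it to owner(new_zone), and acknowledges the zone change to the
        # client only once the target server has loaded that state.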

    Read the article

  • Combine auto-syncing cloud and VCS

    - by ComFreek
    This question brought me to another one: is there any VCS, or tool for a VCS, which automatically backs up your source code between the last checkout and the current changes? I had the problem of losing uncommitted source code changes just one week ago. I did not want to commit yet because the changes were incomplete. But then an error while moving the data to a USB stick caused the data loss. That's the opposite of what a cloud service (like Google Drive, SkyDrive, Dropbox, ...) does: it tracks each change you make! Have you lost your data? That's no problem, because you have the latest version online. So what would a combined solution look like? It would offer the full functionality of a VCS, including auto-syncing of any intermediate changes between two commits/checkouts to a temporary online location.
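
    Nothing in the question names a tool, but the behaviour it asks for can be approximated: a file watcher that commits every save and pushes to a scratch branch on a remote. A rough sketch using the watchdog library and plain git commands; the remote name, branch name and the assumption that the repo sits on a throwaway branch are all mine:

        import subprocess
        import time
        from watchdog.observers import Observer
        from watchdog.events import FileSystemEventHandler

        class Snapshot(FileSystemEventHandler):
            def on_any_event(self, event):
                if '.git' in event.src_path:
                    return  # ignore our own commits, or we loop forever
                subprocess.run(['git', 'add', '-A'], cwd='.')
                subprocess.run(['git', 'commit', '-q', '-m', 'autosnapshot',
                                '--no-verify'], cwd='.')
                subprocess.run(['git', 'push', '-q', 'backup', 'HEAD:autosave'],
                               cwd='.')  # assumes a remote named 'backup'

        observer = Observer()
        observer.schedule(Snapshot(), '.', recursive=True)
        observer.start()
        try:
            while True:
                time.sleep(1)
        except KeyboardInterrupt:
            observer.stop()
        observer.join()

    Real history stays clean because only the autosave branch ever sees these snapshot commits.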

    Read the article

  • Use a custom value object or a Guid as an entity identifier in a distributed system?

    - by Kazark
    tl;dr I've been told that in domain-driven design, an identifier for an entity could be a custom value object, i.e. something other than Guid, string, int, etc. Can this really be advisable in a distributed system? Long version: I will invent a situation analogous to the one I am currently facing. Say I have a distributed system in which a central concept is an egg. The system allows you to order eggs and see spending reports and inventory-centric data such as quantity on hand, usage, valuation and what have you. There are a variety of services backing these behaviors. And say there is also another app which allows you to compose recipes that link to a particular egg type. Now egg type is broken down by species: ostrich, goose, duck, chicken, quail. This is fine and dandy because it means that users don't end up with ostrich eggs when they wanted quail eggs and whatnot. However, we've been getting complaints because jumbo chicken eggs are not even close to equivalent to small ones. The price is different, and they really aren't substitutable in recipes. And here we thought we were doing users a favor by not overwhelming them with too many options. Currently each of the services (say, OrderSubmitter, EggTypeDefiner, SpendingReportsGenerator, InventoryTracker, RecipeCreator, RecipeTracker, or whatever) identifies egg types with an industry-standard integer representation of the species (let's call it speciesCode). We realize we've goofed up because this change could affect every service. There are two basic proposed solutions: 1) Use a predefined identifier type like Guid as the eggTypeID throughout all the services, but make EggTypeDefiner the only service that knows that this maps to a speciesCode and eggSizeCode (and potentially to an isOrganic flag in the future, or whatever). 2) Use an EggTypeID value object which is a combination of speciesCode and eggSizeCode in every service. I've proposed the first solution because I'm hoping it better encapsulates the definition of what an egg type is in the EggTypeDefiner and will be more resilient to changes, say if some people now want to differentiate eggs by whether or not they are "organic". The second solution is being suggested by people who understand DDD better than I do, in the hope that less enrichment and lookup will be necessary that way, with the justification that in DDD using a value object as an ID is fine. Also, they are saying that EggTypeDefiner is not a domain and EggType is not an entity, and as such it should not have a Guid for an ID. However, I'm not sure the second solution is viable. This "value object" is going to have to be serialized into JSON and URLs for GET requests and used with a variety of technologies (C#, JavaScript...), which breaks encapsulation and thus removes any behavior of the identifier value object (is either of the fields optional? etc.). Is this a case where we want to avoid something that would normally be fine in DDD because we are trying to do DDD in a distributed fashion? Summary: Can it be a good idea to use a custom value object as an identifier in a distributed system (solution #2)?
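
    For concreteness, the value object of solution #2 might look like the following sketch (field names are taken from the question; the token encoding is an assumption). It also shows the serialization seam the asker worries about: any service that parses the token has silently re-learned the ID's internal structure.

        from dataclasses import dataclass

        @dataclass(frozen=True)  # value object: immutable, compared by value
        class EggTypeID:
            species_code: int
            egg_size_code: int

            def to_token(self) -> str:
                """Flatten to a URL/JSON-safe token, e.g. '4-2'."""
                return f'{self.species_code}-{self.egg_size_code}'

            @classmethod
            def from_token(cls, token: str) -> 'EggTypeID':
                species, size = token.split('-')
                return cls(int(species), int(size))

        # round-trips across a service boundary, but every consumer that calls
        # from_token now depends on the two-field structure of the identifier
        assert EggTypeID.from_token(EggTypeID(4, 2).to_token()) == EggTypeID(4, 2)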

    Read the article

  • Do you think I should integrate something like AWS or another cloud service from the initial phases of my project?

    - by Kareem Ergawy
    Do you think I should integrate something like AWS or another cloud service from the initial phases of my project, or should I work on the front and back end components first and integrate AWS later? I am starting to work on a mobile service. From day one, I wish to make sure that my service will be scalable and able to handle large loads of requests. This is my first time architecting a large-scale system from the beginning, so I can't decide what is best.

    Read the article

  • Efficient algorithm for Virtual Machine (VM) consolidation in the cloud

    - by devansh dalal
    PROBLEM: We have N physical machines (PMs), each with RAM Ri and CPU Ci, and a set of currently scheduled VMs, each with RAM requirement ri and CPU requirement ci respectively. Moving (migrating) any VM from one PM to another has an associated cost which depends on its RAM ri. A PM with no VMs is shut down to save power. Our target is to minimize the weighted sum of (N, migration cost) by migrating some VMs, i.e. to minimize the number of working PMs while not degrading the service level through excessive migrations. My approach: the brute-force approach is to choose the minimum-loaded PM and try to fit its VMs onto other PMs with the First Fit Decreasing algorithm; alternatively, we can select victim PMs and target PMs based on their load level and shut down victims where possible by moving their VMs to targets. I tried this greedy approach on the data of Baadal (the IIT-D cloud), but it isn't giving promising results. I have also tried to study ant colony optimization for dynamic VM consolidation but was unable to understand much of it. I used these links: http://dumas.ccsd.cnrs.fr/docs/00/72/52/15/PDF/Esnault.pdf http://hal.archives-ouvertes.fr/docs/00/72/38/56/PDF/RR-8032.pdf Would anyone please clarify the solution or suggest a new approach/resource for better performance? I am basically searching for the algorithms, not the physical optimizations, and I know that many commercial organizations provide these solutions, but I just wanted to know more about the underlying algorithms. Thanks in advance.
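
    For reference, the First Fit Decreasing step described above can be sketched as follows. The single-resource (RAM only) load model and the equal, normalized capacities are simplifications of the question's two-dimensional setting, and migration cost is not yet weighed in:

        import copy

        def consolidate_once(pms, capacity=1.0):
            """Try to empty the least-loaded PM via First Fit Decreasing.

            pms: dict pm_id -> list of VM RAM demands (normalized to capacity).
            Returns the new placement, or the original one if draining fails.
            """
            trial = copy.deepcopy(pms)
            victim = min(trial, key=lambda p: sum(trial[p]))
            targets = [p for p in trial if p != victim]
            for vm in sorted(trial[victim], reverse=True):   # decreasing size
                for p in targets:                            # first fit
                    if sum(trial[p]) + vm <= capacity:
                        trial[p].append(vm)
                        break
                else:
                    return pms    # victim cannot be fully drained; keep original
            trial[victim] = []    # victim PM can now be powered off
            return trial

        print(consolidate_once({'pm1': [0.2, 0.1], 'pm2': [0.5], 'pm3': [0.4]}))

    Repeating this until no victim can be drained gives the greedy baseline the question says underperforms; the cited papers treat the same loop as a starting point for metaheuristics.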

    Read the article

  • What is a lightweight lock in distributed shared memory systems?

    - by Kutluhan Metin
    I started reading Tanenbaum's Distributed Systems book a while ago. I read about two-phase locking and timestamp reordering in the transactions chapter. While taking a deeper look via Google, I came across lightweight transactions/lightweight transactional memory. But I couldn't find any good explanation or implementation. So what is lightweight transactional memory? What are the benefits of lightweight locks? And how can I implement them?

    Read the article

  • Where do you earn more money (Autonomous Systems vs Distributed Systems)? [closed]

    - by Puckl
    I am interested in both topics and I can choose between them for my computer science master's. I think the distributed systems master's focuses more on software technologies, and the autonomous systems master's is focused on robotics and machine learning. Do you get good jobs in the field of machine learning without a Ph.D.? I guess there are more jobs available in the software-tech world; is this right? Where do you earn more money? (It is not the only criterion, but it matters.)

    Read the article

  • Storing Projects on Google Drive (Cloud)

    - by JamesKraw
    I've started using Google Drive for my cloud needs and backing up pretty much everything. I've got the app installed so it auto-syncs all my content in most things. My question is this: I am currently coding for iOS (although this applies to any coding project) and am split on storing my project files on Google Drive while using sync. My theory is that if I used it, I'd never have to worry about system crashes or code lost before backups; but if I do use it, it will be syncing a lot, and I thought there might be problems with it detecting changes and trying to sync, for example, halfway through compiling. Bandwidth isn't an issue as I have a fast connection and an unlimited monthly allowance. Has anyone ever used this, or similar cloud-based syncing (Dropbox etc.), for this, and knows whether it works or not, or whether there are any potential problems?

    Read the article

  • Computing complex math equations in Python

    - by dassouki
    Are there any libraries or techniques that simplify computing equations? Take the following two examples:

        F = B * { [ a * b * sumOf(A / B, for all i) ] / [ sumOf(c * d * j) ] }

    where F = cost from i to j, and B, a, b, c, d, j are all vectors in the format [[zone_i, zone_j, cost_of_i_to_j], ...]. This should produce a vector F = [[1, 2, F_1_2], ..., [i, j, F_i_j]].

        T_ij = [ P_i * A_j * F_i_j ] / [ sumOf(A_j * F_i_j, for j = 1 to n) ]

    where n is the number of zones, T is the vector [[1, 2, A_1_2, P_1_2], ..., [i, j, A_i_j, P_i_j]], and F is the vector [[1, 2, F_1_2], ..., [i, j, F_i_j]]; P_i would be the sum of all P_i_j for all j, and A_j would be the sum of all P_j for all i. I'm not sure what I'm looking for, but perhaps a parser for these equations, or methods to deal with multiple multiplications and products between vectors? To calculate some of the factors, for example A_j, this is what I use:

        from collections import defaultdict

        A_j_dict = defaultdict(float)
        for A_item in TG:
            A_j_dict[A_item[1]] += A_item[3]

    Although this works fine, I really feel that it is a brute-force/hacky method and unmaintainable if we want to add more variables or parameters. Are there any math equation parsers you'd recommend? Side note: these equations are used to model travel. Currently I use Excel to solve a lot of these equations, and I find that process daunting. I'd rather move to Python, where it pulls the data directly from our database (Postgres) and outputs the results into the database. All that is figured out. I'm just struggling with evaluating the equations themselves. Thanks :)
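
    The second formula is the classic singly-constrained gravity model of trip distribution, and it collapses to a couple of array operations once the zone-pair lists are pivoted into matrices. A sketch with numpy, assuming zones indexed 0..n-1 and example data for P, A and F:

        import numpy as np

        n = 3                                   # number of zones (example)
        P = np.array([100.0, 200.0, 50.0])      # productions per origin zone i
        A = np.array([80.0, 120.0, 150.0])      # attractions per destination j
        F = np.random.rand(n, n)                # friction factors F[i, j] (example)

        weights = A * F                          # A_j * F_ij, broadcast over rows
        T = P[:, None] * weights / weights.sum(axis=1, keepdims=True)

        # row i of T distributes P_i over destinations: each row sums to P_i
        assert np.allclose(T.sum(axis=1), P)

    Writing each factor as a whole array keeps the code close to the algebra, which is usually more maintainable than an equation parser.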

    Read the article

  • NTRU Pseudo-code for computing Polynomial Inverses

    - by Neville
    Hello all. I was wondering if anyone could tell me how to implement line 45 of the following pseudo-code.

        Require: the polynomial to invert a(x), N, and q.
        1:  k = 0
        2:  b = 1
        3:  c = 0
        4:  f = a
        5:  g = 0 {Steps 5-7 set g(x) = x^N - 1.}
        6:  g[0] = -1
        7:  g[N] = 1
        8:  loop
        9:    while f[0] = 0 do
        10:     for i = 1 to N do
        11:       f[i - 1] = f[i] {f(x) = f(x)/x}
        12:       c[N + 1 - i] = c[N - i] {c(x) = c(x) * x}
        13:     end for
        14:     f[N] = 0
        15:     c[0] = 0
        16:     k = k + 1
        17:   end while
        18:   if deg(f) = 0 then
        19:     goto Step 32
        20:   end if
        21:   if deg(f) < deg(g) then
        22:     temp = f {Exchange f and g}
        23:     f = g
        24:     g = temp
        25:     temp = b {Exchange b and c}
        26:     b = c
        27:     c = temp
        28:   end if
        29:   f = f XOR g
        30:   b = b XOR c
        31: end loop
        32: j = 0
        33: k = k mod N
        34: for i = N - 1 downto 0 do
        35:   j = i - k
        36:   if j < 0 then
        37:     j = j + N
        38:   end if
        39:   Fq[j] = b[i]
        40: end for
        41: v = 2
        42: while v < q do
        43:   v = v * 2
        44:   StarMultiply(a; Fq; temp; N; v)
        45:   temp = 2 - temp mod v
        46:   StarMultiply(Fq; temp; Fq; N; v)
        47: end while
        48: for i = N - 1 downto 0 do
        49:   if Fq[i] < 0 then
        50:     Fq[i] = Fq[i] + q
        51:   end if
        52: end for
        53: {Inverse Poly Fq returns the inverse polynomial, Fq, through the argument list.}

    The function StarMultiply returns a polynomial (array) stored in the variable temp. Basically temp is a polynomial (I'm representing it as an array) and v is an integer (say 4 or 8), so what exactly does temp = 2 - temp mod v equate to in normal language? How should I implement that line in my code? Can someone give me an example? The above algorithm is for computing inverse polynomials for NTRUEncrypt key generation. The pseudo-code can be found on page 28 of this document. Thanks in advance.
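
    One reading of line 45, sketched in Python under the question's representation (an array of coefficients, index 0 being the constant term): 2 - temp is plain polynomial subtraction, so every coefficient of temp is negated, 2 is added to the constant coefficient, and each result is reduced mod v. Lines 44-46 together then compute Fq = Fq * (2 - a * Fq) mod v, the standard Newton-style iteration for lifting an inverse from mod 2 up to mod q.

        def two_minus_poly_mod(temp, v):
            """Return the coefficient array of 2 - temp(x), reduced mod v.

            temp: list of integer coefficients, temp[0] is the constant term.
            v:    the current modulus, a small power of two (4, 8, ...).
            """
            out = [(-c) % v for c in temp]   # -temp(x), coefficient-wise mod v
            out[0] = (out[0] + 2) % v        # add the constant polynomial 2
            return out

        print(two_minus_poly_mod([3, 1, 0, 5], 8))   # [7, 7, 0, 3]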

    Read the article

  • SAP moves into cloud computing and presents the first application of its new "On-Demand" range

    SAP moves into cloud computing and presents the first application of its new generation of "On-Demand" applications. This year, SAP's participation in the prestigious CeBIT trade fair centers on promoting the new generation of its "On-Demand" solutions, combining the power of cloud computing with the flexibility of pay-per-use SaaS pricing. To meet the expectations of companies that today seek to optimize their business processes and adapt them to their lines of business without reinvesting in their information systems, SAP introduces a new range of On-Demand solutions, integrated with the SAP Business Suite.

    Read the article

  • Opening of the Cloud Computing section: the resources you need to understand and use the "Cloud"

    Hello everyone, the Cloud Computing section has just been launched at http://cloud-computing.developpez.com. This section will carry news and all the resources needed to understand, use, and develop for and with the "Cloud" (from Windows Azure to Google Apps, via Salesforce and HPC servers). If you have ideas for tutorials, articles, source code, or Q&A for upcoming FAQs, don't hesitate to let us know. Best regards, Gordon...

    Read the article

  • Using logarithms to normalize a vector to avoid overflow

    - by muscicapa
    http://stackoverflow.com/questions/2293762/problem-with-arithmetic-using-logarithms-to-avoid-numerical-underflow-take-2 Having seen the above, and having seen softmax normalization, I was trying to normalize a vector while avoiding overflow. That is, for (x1, x2, x3, x4, ..., xn), the normalized form for me has a sum of squares equal to 1.0. So what I thought of doing is s = (2*log(x1) + 2*log(x2) + ... + 2*log(xn)) / 2, so the factor of two can be taken off, and finally the normalized vector is exp(log(x1) - s), ..., exp(log(xn) - s). But I am evidently doing something wrong here; what?
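
    For what it's worth, the log detour isn't needed here, and it hides the actual bug: a sum of logs is the log of a product, not of a sum, so the s above is not log(sum of squares). Overflow can instead be avoided by dividing through by the largest magnitude before squaring. A sketch, assuming only that the vector is nonzero:

        import math

        def normalize(x):
            """Scale x so its sum of squares is 1, without overflow in x[i]**2."""
            m = max(abs(v) for v in x)       # assumes x has a nonzero entry
            scaled = [v / m for v in x]      # every entry now lies in [-1, 1]
            norm = math.sqrt(sum(v * v for v in scaled))
            return [v / norm for v in scaled]

        print(normalize([3e200, 4e200]))  # [0.6, 0.8]; naive squaring overflows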

    Read the article

  • What are the memory-management capabilities of MySQL + JDBC (in light of autonomic computing)?

    - by Adel
    I'm interested in implementing some kind of autonomic-computing functionality using MySQL. By autonomic computing I mean roughly some failsafe abilities, whereby the application appears to be at least slightly "intelligent". For reference, the main parts of autonomic computing we'd like are the "self-configuring" and "self-healing" features (the other two, "self-optimizing" and "self-protecting", are too abstract/futuristic for us at this time). So, for example, if we have a sample Java application that utilizes a MySQL database, we might want to automatically restart the MySQL database if it takes up too much memory. Or maybe we want the ability to dynamically adjust the database memory as needed. So, for example, when we start the application the database begins with a 56 MB buffer; but then as we insert so many rows, we want it to automatically jump up to 512 MB, then to 1024, up to a maximum of 4096 MB. Does all of the above suggest that MySQL is too "weak" for the task? Do you suggest using Oracle Database? My professor believes that by using Java we can basically make up for any memory-management deficiencies that MySQL has in relation to Oracle DB. I'm new to MySQL, but have experience with Oracle. If all of the above sounds wishy-washy, it is because I'm still fleshing it out. Thanks

    Read the article

  • C or Ada for engineering computations?

    - by yCalleecharan
    Hi, as an engineer I currently use C to write programs dealing with numerical methods. I like C as it's very fast. I don't want to move to C++, and I have been reading a bit about Ada, which has some very good sides. I believe that much of the software in big industries has been, or more correctly was, written in Ada. I would like to know how C compares with Ada. Is Ada as fast as C? I understand that no language is perfect, but I would like to know if Ada was designed for scientific computing. Thanks a lot...

    Read the article

  • Computing MD5SUM of large files in C#

    - by spkhaira
    I am using the following code to compute the MD5SUM of a file:

        byte[] b = System.IO.File.ReadAllBytes(file);
        string sum = BitConverter.ToString(new MD5CryptoServiceProvider().ComputeHash(b));

    This works fine normally, but if I encounter a large file (~1GB), e.g. an ISO image or a DVD VOB file, I get an Out of Memory exception. However, I am able to compute the MD5SUM in Cygwin for the same file in about 10 seconds. Please suggest how I can get this to work for big files in my program. Thanks
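
    The usual cure is to hash the file in fixed-size chunks instead of reading it whole; in .NET, ComputeHash also accepts a Stream, which does this internally. The chunked idea, sketched here in Python:

        import hashlib

        def md5sum(path, chunk_size=1 << 20):
            """MD5 of an arbitrarily large file using constant memory."""
            h = hashlib.md5()
            with open(path, 'rb') as f:
                while True:
                    chunk = f.read(chunk_size)   # 1 MiB at a time
                    if not chunk:
                        break
                    h.update(chunk)
            return h.hexdigest()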

    Read the article

  • Running a Python script where dependencies are not available: distributed computing

    - by sadhu_
    Hi, I have access to a grid (running Condor) that would (potentially) allow me to very substantially reduce how long my NLTK-based NLP tasks take. Unfortunately, I don't have root access on the cluster, so I cannot install new packages, only run whatever is available on the Linux boxes. Python is of course available, but NLTK isn't. I was wondering, however, if there might be a way around this somehow? Is there a way I can still distribute the task in a self-contained 'package' of some sort? Thanks for your help
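
    One root-free route (a sketch; the directory names are arbitrary) is to install the dependencies into a local folder with pip's --target option, ship that folder with the Condor job, and put it on sys.path before importing. This works as-is for pure-Python packages; compiled extensions would have to match the cluster's architecture and Python version.

        # On a machine where pip is available:
        #     pip install --target=deps nltk
        # then ship the resulting 'deps' directory alongside this script.

        import os
        import sys

        HERE = os.path.dirname(os.path.abspath(__file__))
        sys.path.insert(0, os.path.join(HERE, 'deps'))  # bundled packages win

        import nltk   # now resolved from the shipped directory
        print(nltk.__file__)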

    Read the article

  • computing "node closure" of graph with removal

    - by Fakrudeen
    Given a directed graph, the goal is to combine each chosen node with the nodes it points to, and come up with the minimum number of these [let's give them a name] super nodes. The catch is that once you combine nodes, you can't use those nodes again [the first node as well as all the combined nodes, that is, all the members of one super node]. The greedy approach would be to pick the node with maximum out-degree, combine that node with the nodes it points to, and remove all of them, repeating each time with the nodes not yet removed from the graph. The greedy is O(V), but it won't necessarily output the minimum number of super nodes. So what is the best algorithm to do this?
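
    For reference, the greedy baseline reads roughly as below (adjacency as a dict of successor sets). It is a sketch of the heuristic, not the optimal algorithm the question asks for; the problem has the flavor of set cover / dominating set, which hints that an efficient exact algorithm is unlikely.

        def greedy_super_nodes(adj):
            """adj: dict node -> set of successors. Returns a list of super nodes."""
            remaining = set(adj)
            super_nodes = []
            while remaining:
                # pick the live node covering the most live nodes (itself + successors)
                best = max(remaining, key=lambda u: len((adj[u] | {u}) & remaining))
                group = (adj[best] | {best}) & remaining
                super_nodes.append(group)
                remaining -= group   # combined nodes can't be used again
            return super_nodes

        print(greedy_super_nodes({1: {2, 3}, 2: {3}, 3: set(), 4: {1}}))
        # -> [{1, 2, 3}, {4}]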

    Read the article

  • Computing orientation of a square and displaying an object with the same orientation

    - by Robin
    Hi, I wrote an application which detects a square within an image. To give you a good understanding of how an image containing such a square, in this case a marker, could look like: What I get after the detection are the coordinates of the four corners of my marker. Now I don't know how to display an object on my marker. The object should have the same rotation/angle/direction as the marker. Are there any papers on how to achieve that, or any algorithms I can use that have proved to be pretty solid/working? It doesn't need to be a working solution; it could be a simple description of how to achieve it, or something similar. If you point me at a library, it should work under Linux; Windows is not needed but would be great in case I need to port the application at some point. I already looked at the ARToolKit, but they use camera parameter files and more complex matrices, while I only have the four corner points and a single image instead of a whole video/camera stream.
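
    With four corner points and no camera calibration, the planar case is handled by a homography: map the overlay's own square onto the detected corners and warp it into the scene. A sketch with OpenCV (which runs on Linux and Windows); the corner values and file names are made up, and the corners must be listed in a consistent order (here: top-left, top-right, bottom-right, bottom-left):

        import numpy as np
        import cv2

        # detected marker corners in the image (made-up values)
        corners = np.float32([[120, 80], [360, 95], [340, 300], [110, 280]])

        overlay = cv2.imread('overlay.png')          # hypothetical overlay image
        h, w = overlay.shape[:2]
        square = np.float32([[0, 0], [w, 0], [w, h], [0, h]])

        # homography from the overlay plane to the image plane
        H = cv2.getPerspectiveTransform(square, corners)

        scene = cv2.imread('scene.png')              # hypothetical input image
        warped = cv2.warpPerspective(overlay, H, (scene.shape[1], scene.shape[0]))

        # paste the warped overlay wherever it has content
        mask = warped.any(axis=2)
        scene[mask] = warped[mask]
        cv2.imwrite('out.png', scene)

    For a full 3D object rather than a flat overlay, the same four correspondences feed cv2.solvePnP, but that does require (at least approximate) camera intrinsics.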

    Read the article

  • computing z-scores for 2D matrices in scipy/numpy in Python

    - by user248237
    How can I compute the z-score for matrices in Python? Suppose I have the array: a = array([[1, 2, 3], [30, 35, 36], [2000, 6000, 8000]]) and I want to compute the z-score for each row. The solution I came up with is: array([zs(item) for item in a]), where zs is in scipy.stats.stats. Is there a better, built-in, vectorized way to do this? Also, is it always good to z-score numbers before using hierarchical clustering with euclidean or seuclidean distance? Can anyone discuss the relative advantages/disadvantages? Thanks.
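
    scipy does ship a vectorized form: scipy.stats.zscore accepts an axis argument, so the per-row loop collapses to a single call:

        import numpy as np
        from scipy.stats import zscore

        a = np.array([[   1,    2,    3],
                      [  30,   35,   36],
                      [2000, 6000, 8000]], dtype=float)

        print(zscore(a, axis=1))   # z-score each row in one vectorized call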

    Read the article
