Search Results

Search found 3000 results on 120 pages for 'computing capacity'.

  • How to manage many mobile device users on the server side?

    - by Rami
    I built a social Android application in which users can see other users around them by GPS location. At the beginning things went well, as I had a low number of users, but now that the number is growing (about 1,500, plus 100 every day) it has revealed a major problem in my design. In my Google App Engine servlet I have a static HashMap that holds all the user profile objects, currently 1,500, and this number will increase as more users register. Why am I doing it? Every user who requests the users around him compares his GPS position with the other users' and checks whether they are within his 10 km radius. This happens every five minutes on average, so I can't fetch the users from the db every time, because the GAE read/write operation quota would tear me apart. The problem with this design? As the number of users increases, the HashMap turns to null every 4-6 hours. I think this interval is getting shorter, but I'm not sure. I'm working around it by reloading the users from the db every time I detect that the map has become null, but this causes a 30-second denial of service for my users, so I'm looking for a better solution. I'm guessing that it happens because of the size of the HashMap. Am I right? I have been advised to use a spatial database, but that would mean I can't work with GAE any more, and that I'd need to rebuild my whole server and lose my existing DB. Is there something I can do with the existing tools? Thanks.
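
    A likely explanation for the map "turning to null": App Engine instances are recycled at any time and do not share static memory, so anything held in a static field disappears whenever an instance restarts or a request lands on a fresh instance. A minimal sketch of one workaround, shown in Python for brevity (App Engine's Java memcache API is analogous; the UserProfile model class and cache key are hypothetical):

        from google.appengine.api import memcache

        CACHE_KEY = "all_profiles"   # hypothetical cache key

        def get_profiles():
            profiles = memcache.get(CACHE_KEY)
            if profiles is None:
                # Cache miss (cold start, eviction, or a brand-new instance):
                # reload once from the datastore and share the result with
                # every instance via memcache.
                profiles = UserProfile.all().fetch(10000)     # hypothetical model class
                memcache.set(CACHE_KEY, profiles, time=300)   # refresh every 5 minutes
            return profiles

    Memcache entries can still be evicted, but a miss then costs one datastore read instead of a 30-second outage.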

    Read the article

  • Creating a Corporate Data Hub

    - by BuckWoody
    The Windows Azure Marketplace has a rich assortment of data and software offerings for you to use – a type of Software as a Service (SaaS) for IT workers, not necessarily for end users. Among those offerings is the "Data Hub" – a codename for a project that ironically actually does what the codename says. In many of our organizations, we have multiple data quality issues. Finding data is one problem, but finding it just once is often a bigger problem. Lots of departments and even individuals have stored the same data more than once, and in some cases made changes to one of the copies. It's difficult to know which location or version of the data is authoritative. Then there's the problem of accessing the data. It's fairly straightforward to publish a database, share, or other location internally to store the data. But then you have to figure out who owns it, how it is controlled, and pass out the various connection strings to those who want to use it. And then you need to figure out how to let folks access the internal data externally – bringing up all kinds of security issues. Finally, in many cases our user community wants us to combine data from internal sources with external data, raising the security, connection-string, and discovery issues all over again. Enter the Data Hub. This is an online offering, where you assign an administrator and data stewards. You import the data into the service, and it's available to you – and only you and your organization, if you wish. The basic steps for this service are to set up the portal for your company, assign administrators and permissions, and then assign data areas and import data into them. From there you make them discoverable, and then you have multiple options by which you or your users can access that data. You're then able, if you wish, to combine that data with other data in one location. So how does all that work? What about security? Is it really that easy? And can you really move the data definition off to the Subject Matter Experts (SMEs) who know the particular data stack better than the IT team does? Well, nothing good is easy – but using the Data Hub is actually pretty simple. I'll give you a link in a moment where you can sign up and try this yourself. Once you sign up, you assign an administrator. From there you'll create data areas, and then use a simple interface to bring the data in. All of this is done in a portal interface – nothing to install, configure, update or manage. After the data is entered and you've assigned metadata to describe it, your users have multiple options to access it. They can simply use the portal – which has powerful visualizations you can use on any platform, even mobile phones or tablets. Your users can also hit the data with Excel – which gives them ultimate flexibility for display, all while using an authoritative, single reference for the data. Since the service is online, they can do this wherever they are – given the proper authentication and permissions. You can also hit the service with simple API calls, like this one from C#: http://msdn.microsoft.com/en-us/library/hh921924  You can make HTTP calls instead of code, and the data can even be exposed as an OData feed. As you can see, there are a lot of options. You can check out the offering here: http://www.microsoft.com/en-us/sqlazurelabs/labs/data-hub.aspx and you can read the documentation here: http://msdn.microsoft.com/en-us/library/hh921938
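
    As a concrete illustration of the HTTP/OData access path mentioned above, here is a minimal sketch of pulling rows from a Data Hub feed; the feed URL and credentials are hypothetical, but the $filter/$top/$format query options are standard OData:

        import requests

        FEED = "https://contoso.datahub.example/odata/Sales"   # hypothetical endpoint

        resp = requests.get(
            FEED,
            params={"$filter": "Region eq 'West'", "$top": "10", "$format": "json"},
            auth=("steward@contoso.com", "account-key"),       # hypothetical credentials
        )
        resp.raise_for_status()
        for row in resp.json()["d"]["results"]:   # OData v2 JSON envelope
            print(row)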

    Read the article

  • What kind of hosting do I need?

    - by Robert Smith
    I migrated this question from Server Fault. Hopefully this is the appropriate place. I have been trying to answer this question but I haven't found a specific answer for my situation. As I want to pay only for what I need, I thought I could get a good answer here. I have a custom-made forum (rather than a built-in forum like the ones you can find in plugins, e.g. WP-Forum, or phpBB-type software) in Django. I don't want to use Apache and mod_wsgi because it's usually very memory-hungry and I can't afford a big server. I prefer a combination of nginx and gunicorn, which I think is very efficient (maybe you can also tell me what you think about that). I'm expecting to receive 10,000 to 20,000 visits each month with 15,000 to 30,000 page impressions. I have reviewed some cloud services like Amazon EC2 or Rackspace and other more traditional services (Linode). This site won't use videos or big images and I certainly don't need a huge amount of bandwidth (200 GB would definitely be too much). I need shell access, so shared hosting is out of the question. What do I need to run a website like that without problems? What about RAM? Would 256 MB be enough (that's the amount of RAM offered by small instances on Amazon and Rackspace)? Do you know of any alternatives to those I mentioned? If you need more information to provide a useful answer, please don't hesitate to ask. By the way, I was told that Linode is not all that different from Amazon EC2, but this website is supposed to work 24/7, so I can't take advantage of Linode's flexibility regarding creating and deleting instances. Thanks in advance.
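
    For reference, a minimal sketch of the nginx + gunicorn pairing described above, with hypothetical paths and names; gunicorn runs the Django app on a local port, and nginx proxies to it while serving static files itself:

        # Start the Django app under gunicorn (2 workers fit comfortably in a
        # small instance for this traffic level):
        #   gunicorn --workers 2 --bind 127.0.0.1:8000 myproject.wsgi:application
        server {
            listen 80;
            server_name example.com;

            location /static/ {
                alias /srv/myproject/static/;     # nginx serves static files itself
            }

            location / {
                proxy_pass http://127.0.0.1:8000;                 # hand off to gunicorn
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }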

    Read the article

  • Hosting and scaling a Facebook application in the cloud?

    - by DhruvPathak
    We would be building a Facebook application in Django (Python), but are still not sure where to host it economically, with a good provision to scale in case the app goes viral. Some details about the app: (i) it would be HTML-based like a website, using Django as the framework; (ii) 100K pageviews per day are expected if the app goes viral; (iii) users will not generate any media content, only some database data. It would be great if someone with more experience could offer guidance on the following points: A) Hosting on Google App Engine, Amazon EC2, or some other cloud like Rackspace: the points in App Engine's favor were ease of deployment, cost effectiveness, and easy scaling. For EC2: full control of the virtual machine, plus Amazon's NoSQL and RDBMS database services in case we decide to use them. B) Does the backend technology affect monthly cost? E.g., would the CPU and memory usage difference between Django and, for example, a PHP framework like CodeIgniter really make a remarkable difference in running costs? (Here is the article that triggered this thought process: http://journal.dedasys.com/2010/01/12/rough-estimates-of-the-dollar-cost-of-scaling-web-platforms-part-i#comments) C) Does something like Heroku, which provides additional services on top of Amazon EC2, prove to be better than raw cloud management? It is not that we are attempting premature scaling; we just want a good start so that we are ready to handle unpredicted growth and scale.

    Read the article

  • Any frameworks or libraries that allow me to run a large number of concurrent jobs on a schedule?

    - by Yoga
    Are there any high-level programming frameworks that allow me to run a large number of concurrent jobs on a schedule? E.g., I have 100K URLs and need to check their uptime every 5 minutes. I can certainly write a program to handle this, but then I need to handle concurrency, queuing, error handling, system throttling, job distribution, etc. Is there a framework that lets me focus only on the particular job (i.e., the ping task) while the system takes care of scaling and error handling for me? I am open to any language.
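
    As one concrete possibility, here is a minimal sketch using Celery (a Python task queue) with its beat scheduler; the broker URL, module name ("uptime"), and URL source file are hypothetical. Celery supplies the queuing, distribution, retries, and throttling, leaving only the ping task to write:

        from celery import Celery
        import urllib.request

        app = Celery("uptime", broker="redis://localhost:6379/0")   # hypothetical broker

        @app.task(rate_limit="1000/m")   # throttling is configuration, not code
        def check_url(url):
            try:
                urllib.request.urlopen(url, timeout=10)
                return url, "up"
            except Exception:
                return url, "down"

        @app.task
        def enqueue_all():
            with open("urls.txt") as f:           # hypothetical source of the 100K URLs
                for url in f:
                    check_url.delay(url.strip())  # fan out as independent tasks

        # The beat scheduler re-enqueues the whole batch every five minutes.
        app.conf.beat_schedule = {
            "ping-all": {"task": "uptime.enqueue_all", "schedule": 300.0},
        }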

    Read the article

  • How to rewrite a TCP MMOG server designed to run on a single machine, in a distributed way?

    - by Dokkat
    I have an MMOG server running on C++, using winsockets. My server won't support more than 200 players. I had the idea of redesigning it to use multiple servers instead of one, so that, for example, each server could take care of a number of players and, if things got too laggy, could transfer responsibility for a player to another server. I'm not sure how to program consistent game logic like that, though. Are there techniques for this?
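
    One common technique is zone-based sharding: partition the world into zones, pin each zone to a server, and hand a player off when they cross a boundary. A minimal sketch, in Python for brevity (the C++ structure would mirror it), with all names hypothetical:

        ZONE_SIZE = 1000                      # world units per square zone
        SERVERS = ["game1:7000", "game2:7000", "game3:7000"]

        def zone_of(x, y):
            return int(x // ZONE_SIZE), int(y // ZONE_SIZE)

        def server_for(zone):
            # Stable zone-to-server mapping; a real system would use a lookup
            # table so zones can be rebalanced, rather than a bare hash.
            return SERVERS[hash(zone) % len(SERVERS)]

        def on_player_moved(player, x, y):
            new_zone = zone_of(x, y)
            if new_zone != player.zone:
                target = server_for(new_zone)
                if target != player.server:
                    # Serialize the player's state and transfer authority; the
                    # old server forwards traffic until the client reconnects.
                    handoff(player, target)   # hypothetical transfer routine
                player.zone = new_zone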

    Read the article

  • Combine auto-syncing cloud and VCS

    - by ComFreek
    This question brought me to another one: is there any VCS, or tool for a VCS, which automatically backs up your source code between the last checkout and the current changes? I lost uncommitted source code changes just one week ago. I did not want to commit yet because the changes were incomplete. But then an error while moving the data to a USB stick caused the data loss. That's the opposite of what a cloud service (like Google Drive, SkyDrive, Dropbox, ...) does: it tracks each change you make! Have you lost your data? No problem, because you have the latest version online. So what would a combined solution look like? It would offer the full functionality of a VCS, including auto-syncing of any intermediate changes between two commits/checkouts to a temporary online location.
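
    In the meantime, the gap can be approximated with git alone: `git stash create` records the working tree as a dangling commit without touching HEAD, the index, or the stash list, and that commit can be pushed to a backup ref. A minimal sketch (the ref name is hypothetical), run from cron or a loop:

        import subprocess
        import time

        def snapshot():
            # 'git stash create' records the working tree as a dangling commit
            # without modifying HEAD, the index, or the stash list.
            sha = subprocess.run(["git", "stash", "create", "autosave"],
                                 capture_output=True, text=True).stdout.strip()
            if sha:   # empty output means there was nothing to snapshot
                subprocess.run(["git", "push", "origin",
                                "%s:refs/backups/autosave" % sha])

        while True:
            snapshot()
            time.sleep(300)   # every five minutes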

    Read the article

  • Use a custom value object or a Guid as an entity identifier in a distributed system?

    - by Kazark
    tl;dr I've been told that in domain-driven design, an identifier for an entity could be a custom value object, i.e. something other than a Guid, string, int, etc. Can this really be advisable in a distributed system? Long version: I will invent a situation analogous to the one I am currently facing. Say I have a distributed system in which a central concept is an egg. The system allows you to order eggs and see spending reports and inventory-centric data such as quantity on hand, usage, valuation, and what have you. There are a variety of services backing these behaviors. And say there is also another app which allows you to compose recipes that link to a particular egg type. Now egg type is broken down by species – ostrich, goose, duck, chicken, quail. This is fine and dandy because it means that users don't end up with ostrich eggs when they wanted quail eggs and whatnot. However, we've been getting complaints because jumbo chicken eggs are not even close to equivalent to small ones. The price is different, and they really aren't substitutable in recipes. And here we thought we were doing users a favor by not overwhelming them with too many options. Currently each of the services (say, OrderSubmitter, EggTypeDefiner, SpendingReportsGenerator, InventoryTracker, RecipeCreator, RecipeTracker, or whatever) identifies egg types with an industry-standard integer representing the species (let's call it speciesCode). We realize we've goofed up because this change could affect every service. There are two basic proposed solutions: 1) Use a predefined identifier type like Guid as the eggTypeID throughout all the services, but make EggTypeDefiner the only service that knows that this maps to a speciesCode and eggSizeCode (and potentially to an isOrganic flag in the future, or whatever). 2) Use an EggTypeID value object which is a combination of speciesCode and eggSizeCode in every service. I've proposed the first solution because I'm hoping it better encapsulates the definition of what an egg type is in the EggTypeDefiner and will be more resilient to changes, say if some people now want to differentiate eggs by whether or not they are "organic". The second solution is being suggested by some people who understand DDD better than I do, in the hope that less enrichment and lookup will be necessary that way, with the justification that in DDD using a value object as an ID is fine. Also, they say that EggTypeDefiner is not a domain and EggType is not an entity, and as such it should not have a Guid for an ID. However, I'm not sure the second solution is viable. This "value object" is going to have to be serialized into JSON and URLs for GET requests and used with a variety of technologies (C#, JavaScript...), which breaks encapsulation and thus removes any behavior of the identifier value object (is either of the fields optional? etc.). Is this a case where we want to avoid something that would normally be fine in DDD because we are trying to do DDD in a distributed fashion? Summary: can it be a good idea to use a custom value object as an identifier in a distributed system (solution #2)?
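
    For concreteness, a minimal sketch of what solution #2's identifier might look like, in Python with field names taken from the question; the canonical string form is what would travel in URLs and JSON, and every service would have to agree on exactly that encoding:

        from dataclasses import dataclass

        @dataclass(frozen=True)   # frozen => immutable and hashable, like a value object
        class EggTypeID:
            species_code: int
            egg_size_code: int

            def __str__(self):
                # Canonical wire format, e.g. "4-2"; every consumer (C#,
                # JavaScript, ...) must agree on this, which is exactly the
                # encapsulation-leak concern raised above.
                return "%d-%d" % (self.species_code, self.egg_size_code)

            @classmethod
            def parse(cls, raw):
                species, size = raw.split("-")
                return cls(int(species), int(size))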

    Read the article

  • Efficient algorithm for Virtual Machine (VM) Consolidation in the Cloud

    - by devansh dalal
    PROBLEM: We have N physical machines (PMs), each with RAM Ri and CPU Ci, and a set of currently scheduled VMs, each with RAM requirement ri and CPU requirement ci respectively. Moving (migrating) any VM from one PM to another has an associated cost which depends on its RAM ri. A PM with no VMs is shut down to save power. Our target is to minimize the weighted sum of (N, migration cost) by migrating some VMs, i.e. minimize the number of working PMs while not degrading the service level through excessive migrations. My approach: the brute-force approach is to choose the minimum-loaded PM and try to fit its VMs onto other PMs with the First Fit Decreasing algorithm; alternatively we can select victim PMs and target PMs based on their load level and shut down victims where possible by moving their VMs to targets. I tried this greedy approach on data from Baadal (the IIT-D cloud), but it isn't giving promising results. I have also tried to study ant colony optimization for dynamic VM consolidation but was unable to understand very much. I used these links: http://dumas.ccsd.cnrs.fr/docs/00/72/52/15/PDF/Esnault.pdf http://hal.archives-ouvertes.fr/docs/00/72/38/56/PDF/RR-8032.pdf Would anyone please clarify the solution or suggest a new approach/resources for better performance? I am basically searching for the algorithms, not the physical optimizations, and I also know that many commercial organizations provide these solutions; I just want to know more about the underlying algorithms. Thanks in advance.
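
    For reference, a minimal sketch of the First Fit Decreasing heuristic mentioned above, packing by RAM only for brevity (capacity units are arbitrary):

        def first_fit_decreasing(vm_rams, pm_capacity):
            """Pack VM RAM demands onto as few PMs as possible."""
            pms = []   # each entry: [remaining_capacity, [assigned vm sizes]]
            for ram in sorted(vm_rams, reverse=True):   # largest VMs first
                for pm in pms:
                    if pm[0] >= ram:      # first PM with enough room wins
                        pm[0] -= ram
                        pm[1].append(ram)
                        break
                else:                     # nothing fits: power on a new PM
                    pms.append([pm_capacity - ram, [ram]])
            return [assigned for _, assigned in pms]

        print(first_fit_decreasing([4, 2, 7, 1, 3, 5], 8))   # e.g. 8 GB hosts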

    Read the article

  • Do you think I should integrate something like AWS or another cloud service from the initial phases of my project?

    - by Kareem Ergawy
    Do you think I should integrate something like AWS or another cloud service from the initial phases of my project, or should I work on the front- and back-end components first and integrate AWS later? I am starting to work on a mobile service. From day one, I want to make sure that my service will be scalable and able to handle large loads of requests. This is my first time architecting a large-scale system from the beginning, so I can't decide what is best.

    Read the article

  • What is a lightweight lock in distributed shared memory systems?

    - by Kutluhan Metin
    I started reading Tanenbaum's Distributed Systems book a while ago. I read about two-phase locking and timestamp reordering in the transactions chapter. While digging deeper via Google I came across lightweight transactions/lightweight transactional memory, but I couldn't find any good explanation or implementation. So what is lightweight transactional memory? What are the benefits of lightweight locks? And how can I implement them?
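
    To make "lightweight" concrete: such locks are acquired with an atomic compare-and-swap entirely in user space, avoiding a kernel call when uncontended. A minimal sketch in Python, where a threading.Lock merely emulates the hardware CAS instruction a real implementation would use:

        import threading
        import time

        class AtomicFlag:
            """Emulates a hardware compare-and-swap on a single integer."""
            def __init__(self):
                self._guard = threading.Lock()   # stands in for CPU atomicity
                self.value = 0

            def compare_and_swap(self, expected, new):
                with self._guard:
                    if self.value == expected:
                        self.value = new
                        return True
                    return False

        class SpinLock:
            def __init__(self):
                self._flag = AtomicFlag()

            def acquire(self):
                # Spin in user space: no system call, no scheduler hand-off.
                # That is what makes the lock "lightweight" -- cheap when
                # uncontended, wasteful under heavy contention.
                while not self._flag.compare_and_swap(0, 1):
                    time.sleep(0)   # yield so we don't burn the whole core

            def release(self):
                self._flag.compare_and_swap(1, 0)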

    Read the article

  • Where do you earn more money (Autonomous Systems vs Distributed Systems)? [closed]

    - by Puckl
    I am interested in both topics and I can choose between them for my computer science master's. I think the distributed systems master's focuses more on software technologies, while the autonomous systems master's is focused on robotics and machine learning. Do you get good jobs in the field of machine learning without a Ph.D.? I guess there are more jobs available in the software-tech world; is this right? Where do you earn more money? (It is not the only criterion, but it matters.)

    Read the article

  • Storing Projects on Google Drive (Cloud)

    - by JamesKraw
    I've started using Google Drive for my cloud needs and backing up pretty much everything. I've got the app installed, so it auto-syncs all my content in most things. My question is this: I am currently coding for iOS (although this applies to any coding project) and am split on storing my project files on Google Drive while using sync. My theory is that if I use it, I'll never have to worry about system crashes or code lost before backups; but if I do use it, it will be syncing a lot, and I thought there might be problems with it detecting changes and trying to sync, for example, halfway through compiling. Bandwidth isn't an issue, as I have a fast connection and an unlimited monthly allowance. Has anyone ever used this, or similar cloud-based syncing (Dropbox, etc.), for this purpose, and do you know whether it works and whether there are any potential problems?

    Read the article

  • Computing complex math equations in Python

    - by dassouki
    Are there any libraries or techniques that simplify computing equations? Take the following two examples: F = B * { [ a * b * sumOf(A / B, for all i) ] / [ sumOf(c * d * j) ] }, where F = cost from i to j, and B, a, b, c, d, j are all vectors in the format [[zone_i, zone_j, cost_of_i_to_j], ...]. This should produce a vector F = [[1, 2, F_1_2], ..., [i, j, F_i_j]]. Second: T_i_j = [ P_i * A_i * F_i_j ] / [ sumOf(A_j * F_i_j), for j = 1 to j = n ], where n is the number of zones, T = vector [[1, 2, A_1_2, P_1_2], ..., [i, j, A_i_j, P_i_j]], and F = vector [[1, 2, F_1_2], ..., [i, j, F_i_j]]; P_i is the sum of all P_i_j over all j, and A_j is the sum of all P_j over all i. I'm not sure what I'm looking for, but perhaps a parser for these equations, or methods for dealing with multiple multiplications and products between vectors? To calculate some of the factors, for example A_j, this is what I use:

        from collections import defaultdict
        A_j_dict = defaultdict(float)
        for A_item in TG:
            A_j_dict[A_item[1]] += A_item[3]

    Although this works fine, it really feels like a brute-force hack and would be unmaintainable if we wanted to add more variables or parameters. Are there any math equation parsers you'd recommend? Side note: these equations are used to model travel. Currently I use Excel to solve a lot of them, and I find that process daunting. I'd rather move to Python, pulling the data directly from our database (Postgres) and writing the results back to the database. All of that is figured out; I'm just struggling with evaluating the equations themselves. Thanks :)
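
    The second equation is the classic gravity-model trip-distribution formula, and vectorizing it removes most of the bookkeeping. A minimal numpy sketch, assuming the zone-to-zone data has first been pivoted into n x n matrices (the numerator is taken as P_i * A_j * F_i_j, the standard form):

        import numpy as np

        n = 4                             # number of zones
        rng = np.random.default_rng(0)    # stand-in for data pulled from Postgres
        F = rng.random((n, n))            # friction factors F_i_j
        A = rng.random(n)                 # attractions A_j
        P = rng.random(n)                 # productions P_i

        denom = (F * A).sum(axis=1, keepdims=True)   # sum over j of A_j * F_i_j
        T = P[:, None] * A[None, :] * F / denom      # trips from i to j
        print(T.round(3))                            # each row i sums to P_i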

    Read the article

  • NTRU Pseudo-code for computing Polynomial Inverses

    - by Neville
    Hello all. I was wondering if anyone could tell me how to implement line 45 of the following pseudo-code. Require: the polynomial to invert a(x), N, and q.

        1: k = 0
        2: b = 1
        3: c = 0
        4: f = a
        5: g = 0 {Steps 5-7 set g(x) = x^N - 1.}
        6: g[0] = -1
        7: g[N] = 1
        8: loop
        9:   while f[0] = 0 do
        10:    for i = 1 to N do
        11:      f[i - 1] = f[i] {f(x) = f(x)/x}
        12:      c[N + 1 - i] = c[N - i] {c(x) = c(x) * x}
        13:    end for
        14:    f[N] = 0
        15:    c[0] = 0
        16:    k = k + 1
        17:  end while
        18:  if deg(f) = 0 then
        19:    goto Step 32
        20:  end if
        21:  if deg(f) < deg(g) then
        22:    temp = f {Exchange f and g}
        23:    f = g
        24:    g = temp
        25:    temp = b {Exchange b and c}
        26:    b = c
        27:    c = temp
        28:  end if
        29:  f = f XOR g
        30:  b = b XOR c
        31: end loop
        32: j = 0
        33: k = k mod N
        34: for i = N - 1 downto 0 do
        35:   j = i - k
        36:   if j < 0 then
        37:     j = j + N
        38:   end if
        39:   Fq[j] = b[i]
        40: end for
        41: v = 2
        42: while v < q do
        43:   v = v * 2
        44:   StarMultiply(a; Fq; temp; N; v)
        45:   temp = 2 - temp mod v
        46:   StarMultiply(Fq; temp; Fq; N; v)
        47: end while
        48: for i = N - 1 downto 0 do
        49:   if Fq[i] < 0 then
        50:     Fq[i] = Fq[i] + q
        51:   end if
        52: end for
        53: {Inverse Poly Fq returns the inverse polynomial, Fq, through the argument list.}

    The function StarMultiply returns a polynomial (array) stored in the variable temp. Basically temp is a polynomial (I'm representing it as an array) and v is an integer (say 4 or 8), so what exactly does temp = 2 - temp mod v equate to in normal language? How should I implement that line in my code? Can someone give me an example? The above algorithm is for computing inverse polynomials for NTRUEncrypt key generation. The pseudo-code can be found on page 28 of this document. Thanks in advance.
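
    In plain terms, line 45 subtracts the polynomial temp from the constant polynomial 2 and reduces every coefficient modulo v (this is the Newton-iteration step Fq <- Fq * (2 - a*Fq) mod v). A minimal sketch, with temp as a coefficient array whose index 0 is the constant term:

        def two_minus_poly_mod(temp, v):
            # 2 - temp(x), coefficient-wise, every coefficient reduced mod v.
            out = [(-c) % v for c in temp]   # negate every coefficient mod v
            out[0] = (out[0] + 2) % v        # then add the constant 2
            return out

        print(two_minus_poly_mod([1, 3, 0, 2], 4))   # -> [1, 1, 0, 2]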

    Read the article

  • SAP enters Cloud Computing and presents the first application of its new "On-Demand" range

    SAP enters Cloud Computing and presents the first application of its new generation of "On-Demand" applications. This year, SAP's presence at the prestigious CeBIT trade show centers on promoting the new generation of its "On-Demand" solutions, combining the power of cloud computing with the flexibility of pay-per-use SaaS pricing. To meet the expectations of companies that want to optimize their business processes and adapt them to their lines of business without reinvesting in their information systems, SAP is introducing a new range of On-Demand solutions, integrated into the SAP Business Suite.

    Read the article

  • Opening of the Cloud Computing section, with the resources needed to understand and use the "Cloud"

    Hello everyone, the Cloud Computing section has just been launched at http://cloud-computing.developpez.com. This section will carry news and all the resources needed to understand, use, and develop for and with the "Cloud" (from Windows Azure to Google Apps, by way of Salesforce and HPC servers). If you have ideas for tutorials, articles, source code, or Q&A for upcoming FAQs, don't hesitate to let us know. Best regards, Gordon...

    Read the article

  • Netbook performs hard shutdown without warning on low battery power

    - by Steve Kroon
    My Asus EEE netbook performs a hard shutdown when it reaches low battery power, without giving any warning - i.e. the power just goes off, without any shutdown process. I can't find anything in the syslog, and no error messages are printed before it happens. I've had this problem on previous (K)Ubuntu versions, and hoped updating to Ubuntu Precise would help resolve the issue, but it hasn't. The option in the Power application for "when power is critically low" is currently blank - the only options are a (grayed-out) hibernate and "Power off". I have re-installed indicator-power to no effect. The time remaining reported by acpi is unstable, as is the time remaining reported by gnome-power-statistics. (For example, running acpi twice in succession, I got 2h16min and then 3h21min remaining. These sorts of jumps in the remaining time are also in the gnome-power-statistics graphs.) It might be possible to write a script to give me advance warning (as per @RanRag's comment below; a sketch follows after the battery details), but I would prefer to isolate why I don't get a critical battery notification from the system before this happens, so that I can take action as appropriate (suspend/shutdown/plug in power) when I get a notification. Some additional information on the battery:

        kroon@minia:~$ upower -i /org/freedesktop/UPower/devices/battery_BAT0
          native-path:          /sys/devices/LNXSYSTM:00/device:00/PNP0A08:00/PNP0C0A:00/power_supply/BAT0
          vendor:               ASUS
          model:                1005P
          power supply:         yes
          updated:              Fri Aug 17 07:31:23 2012 (9 seconds ago)
          has history:          yes
          has statistics:       yes
          battery
            present:             yes
            rechargeable:        yes
            state:               charging
            energy:              33.966 Wh
            energy-empty:        0 Wh
            energy-full:         34.9272 Wh
            energy-full-design:  47.52 Wh
            energy-rate:         3.7692 W
            voltage:             12.61 V
            time to full:        15.3 minutes
            percentage:          97.248%
            capacity:            73.5%
            technology:          lithium-ion
          History (charge):
            1345181483  97.248  charging
            1345181453  97.155  charging
            1345181423  97.062  charging
            1345181393  96.970  charging
          History (rate):
            1345181483  3.769   charging
            1345181453  3.899   charging
            1345181423  4.061   charging
            1345181393  4.201   charging

        kroon@minia:~$ cat /proc/acpi/battery/BAT0/state
          present:                 yes
          capacity state:          ok
          charging state:          charging
          present rate:            332 mA
          remaining capacity:      3149 mAh
          present voltage:         12612 mV

        kroon@minia:~$ cat /proc/acpi/battery/BAT0/info
          present:                 yes
          design capacity:         4400 mAh
          last full capacity:      3209 mAh
          battery technology:      rechargeable
          design voltage:          10800 mV
          design capacity warning: 10 mAh
          design capacity low:     5 mAh
          cycle count:             0
          capacity granularity 1:  44 mAh
          capacity granularity 2:  44 mAh
          model number:            1005P
          serial number:
          battery type:            LION
          OEM info:                ASUS
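
    On the stop-gap script idea mentioned above, a minimal sketch (threshold and polling interval are arbitrary) that polls upower and raises a desktop notification when charge drops below a threshold:

        import subprocess
        import time

        THRESHOLD = 10   # warn below this percentage

        while True:
            # upower --dump includes a "percentage: NN%" line per battery.
            out = subprocess.run(["upower", "--dump"],
                                 capture_output=True, text=True).stdout
            for line in out.splitlines():
                if "percentage" in line:
                    pct = float(line.split(":")[1].strip().rstrip("%"))
                    if pct < THRESHOLD:
                        subprocess.run(["notify-send", "-u", "critical",
                                        "Battery low", "%.0f%% remaining" % pct])
                    break   # only consider the first battery reported
            time.sleep(60)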

    Read the article

  • What happens to the storage capacity when I uninstall Ubuntu?

    - by shole1202
    I used the Wubi installer for Ubuntu 12.04. After having trouble getting the operating system to boot, I tried uninstalling it with Wubi. From 'My Computer' (in Windows 7), I noticed the maximum capacity of my hard drive drop from 256 GB to 238 GB. I have tried some methods with the command prompt to locate the missing storage, but Windows now only recognizes 238 GB on the disk instead of the original 256. Is there any way to recover that storage?

    Read the article

  • What are the memory-management capabilities of MySQL + JDBC (in light of autonomic computing)?

    - by Adel
    I'm interested in implementing some kind of autonomic-computing functionality using MySQL. By autonomic computing I mean roughly some failsafe abilities, whereby the application appears to be at least slightly "intelligent". For reference, the main parts of autonomic computing we'd like are the "self-configuring" and "self-healing" features (the other two, "self-optimizing" and "self-protecting", are too abstract/futuristic for us at this time). So, for example, if we have a sample Java application that utilizes a MySQL database, we might want to automatically restart the MySQL database if it takes up too much memory. Or maybe we want the ability to dynamically adjust the database memory as needed. So, for example, when we start the application the database begins with a 56 MB buffer; but then, as we insert so many rows, we want it to automatically jump up to 512 MB, then to 1024, up to a maximum of 4096 MB. Does all of the above suggest that MySQL is too "weak" for the task? Would you suggest using Oracle Database? My professor believes that by using Java we can basically make up for any memory-management deficiencies MySQL has relative to Oracle DB. I'm new to MySQL, but have experience with Oracle. If all of the above sounds wishy-washy, it's because I'm still fleshing it out. Thanks.
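
    On the dynamic-memory point: since MySQL 5.7.5 the InnoDB buffer pool can be resized at runtime with SET GLOBAL innodb_buffer_pool_size (earlier versions require a restart). A minimal sketch of the escalation ladder described above, using the mysql-connector-python driver with hypothetical credentials; the account needs the SUPER privilege:

        import mysql.connector

        STEPS_MB = [56, 512, 1024, 2048, 4096]   # the ladder from the question

        def grow_buffer_pool(conn):
            cur = conn.cursor()
            cur.execute("SELECT @@innodb_buffer_pool_size")
            current_mb = cur.fetchone()[0] // (1024 * 1024)
            for step in STEPS_MB:
                if step > current_mb:
                    # Takes effect without a restart on MySQL >= 5.7.5.
                    cur.execute("SET GLOBAL innodb_buffer_pool_size = %s",
                                (step * 1024 * 1024,))
                    break

        conn = mysql.connector.connect(host="localhost", user="admin",   # hypothetical
                                       password="secret", database="app")
        grow_buffer_pool(conn)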

    Read the article
