Search Results

Search found 3521 results on 141 pages for 'parallel computing'.

Page 26/141 | < Previous Page | 22 23 24 25 26 27 28 29 30 31 32 33  | Next Page >

  • How to manage many mobile device users at server side?

    - by Rami
    I built a social Android application in which users can see other users around them by GPS location. At the beginning things went well, as I had a low number of users, but now that I have an increasing number of users (about 1500, plus 100 more every day) a major problem in my design has been revealed. In my Google App Engine servlet I have a static HashMap that holds all the user profile objects, currently 1500, and this number will increase as more users register. Why am I doing it? Every user who requests the users around him compares his GPS position with those of the other users and checks whether they are within his 10 km radius. This happens every five minutes on average. Consequently, I can't fetch the users from the db every time, because the GAE read/write operation quota would tear me apart. The problem with this design? As the number of users increases, the HashMap turns to null every 4-6 hours. I think this interval is getting shorter, but I'm not sure. I'm fixing this by reloading the users from the db every time I detect that it has become null, but this causes a 30-second denial of service for my users, so I'm looking for a better solution. I'm guessing that it happens because of the size of the HashMap. Am I right? I have been advised to use a spatial database, but that means I can't work with GAE any more, and it means I need to build my big server all over again and lose my existing DB. Is there something I can do with the existing tools? Thanks.
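
    For reference, the 10 km test described above is typically a great-circle (haversine) distance check. A minimal sketch, in Python for illustration (the function and variable names are mine, not from the original app):

        import math

        EARTH_RADIUS_KM = 6371.0

        def within_radius(lat1, lon1, lat2, lon2, radius_km=10.0):
            """Haversine distance test between two (lat, lon) points in degrees."""
            dlat = math.radians(lat2 - lat1)
            dlon = math.radians(lon2 - lon1)
            a = (math.sin(dlat / 2) ** 2
                 + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
                 * math.sin(dlon / 2) ** 2)
            return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a)) <= radius_km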

    Read the article

  • Creating a Corporate Data Hub

    - by BuckWoody
    The Windows Azure Marketplace has a rich assortment of data and software offerings for you to use – a type of Software as a Service (SaaS) for IT workers, not necessarily for end-users. Among those offerings is the “Data Hub” – a codename for a project that, ironically, actually does what the codename says.

    In many of our organizations, we have multiple data quality issues. Finding data is one problem, but finding it just once is often a bigger problem. Lots of departments and even individuals have stored the same data more than once, and in some cases made changes to one of the copies. It’s difficult to know which location or version of the data is authoritative. Then there’s the problem of accessing the data. It’s fairly straightforward to publish a database, share or other location internally to store the data. But then you have to figure out who owns it, how it is controlled, and pass out the various connection strings to those who want to use it. And then you need to figure out how to let folks access the internal data externally – bringing up all kinds of security issues. Finally, in many cases our user community wants us to combine data from the internal sources with external data, bringing up the security, connection-string, and exploration issues all over again.

    Enter the Data Hub. This is an online offering, where you assign an administrator and data stewards. You import the data into the service, and it’s available to you – and only you and your organization, if you wish. The basic steps for this service are to set up the portal for your company, assign administrators and permissions, and then assign data areas and import data into them. From there you make them discoverable, and then you have multiple options through which you or your users can access that data. You’re then able, if you wish, to combine that data with other data in one location.

    So how does all that work? What about security? Is it really that easy? And can you really move the data definition off to the Subject Matter Experts (SMEs) who know the particular data stack better than the IT team does? Well, nothing good is easy – but using the Data Hub is actually pretty simple. I’ll give you a link in a moment where you can sign up and try this yourself. Once you sign up, you assign an administrator. From there you’ll create data areas, and then use a simple interface to bring the data in. All of this is done in a portal interface – nothing to install, configure, update or manage.

    After the data is in, and you’ve assigned metadata to describe it, your users have multiple options to access it. They can simply use the portal – which actually has powerful visualizations you can use on any platform, even mobile phones or tablets. Your users can also hit the data with Excel – which gives them ultimate flexibility for display, all while using an authoritative, single reference for the data. Since the service is online, they can do this wherever they are – given the proper authentication and permissions. You can also hit the service with simple API calls, like this one from C#: http://msdn.microsoft.com/en-us/library/hh921924. You can make HTTP calls instead of code, and the data can even be exposed as an OData feed.

    As you can see, there are a lot of options. You can check out the offering here: http://www.microsoft.com/en-us/sqlazurelabs/labs/data-hub.aspx and you can read the documentation here: http://msdn.microsoft.com/en-us/library/hh921938
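
    For a flavor of what consuming such a feed looks like, here is a minimal sketch in Python (the endpoint URL and entity name are placeholders of mine, not the actual Data Hub API):

        import json
        import urllib.request

        # Hypothetical Data Hub feed exposed as OData, requested as JSON.
        url = "https://example-datahub.cloudapp.net/odata/Customers?$top=10&$format=json"

        req = urllib.request.Request(url, headers={"Accept": "application/json"})
        with urllib.request.urlopen(req) as resp:
            feed = json.load(resp)

        # OData v2-style payloads wrap the rows in a "d" envelope.
        for row in feed.get("d", {}).get("results", []):
            print(row)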

    Read the article

  • What kind of hosting do I need?

    - by Robert Smith
    I migrated this question from Server Fault; hopefully this is the appropriate place. I have been trying to answer this question but I haven't found a specific answer to my situation. As I want to pay only for what I need, I thought I could get a good answer here. I have a custom-made forum (rather than a built-in forum like the ones you can find in plugins, e.g. WP-Forum, or phpBB-type software) in Django. I don't want to use Apache and mod_wsgi because it's usually very memory-hungry and I can't afford a big server. I prefer a combination of nginx and gunicorn, which I think is very efficient (maybe you can also tell me what you think about that). I'm expecting to receive 10,000 to 20,000 visits each month with 15,000 to 30,000 page impressions. I have reviewed some cloud services like Amazon EC2 or Rackspace and other more traditional services (Linode). This site won't use videos or big images and I certainly don't need a huge amount of bandwidth (200 GB would definitely be too much). I need shell access, so shared hosting is out of the question. What do I need to run a website like that without problems? What about RAM? Would 256 MB be enough (that's the amount of RAM offered by small instances at Amazon and Rackspace)? Do you know of any alternatives to those I mentioned? If you need more information to provide a useful answer, please don't hesitate to ask. By the way, I was told that Linode is not all that different from Amazon EC2, but this website is supposed to work 24/7, so I can't take advantage of Linode's flexibility regarding creating and deleting instances. Thanks in advance.
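
    To put those traffic figures in perspective, a back-of-the-envelope rate calculation (a rough sketch; the figures come from the question, while the even-spread and 10x-peak assumptions are mine):

        # Rough load estimate from the question's upper bound.
        pageviews_per_month = 30_000
        seconds_per_month = 30 * 24 * 3600

        avg_rps = pageviews_per_month / seconds_per_month
        peak_rps = avg_rps * 10  # assumed generous peak factor

        print(f"average: {avg_rps:.4f} req/s, assumed peak: {peak_rps:.2f} req/s")
        # -> average: 0.0116 req/s, assumed peak: 0.12 req/s

    Even at the assumed peak this is well under one request per second, which is why a small 256 MB instance running nginx and gunicorn is a plausible starting point.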

    Read the article

  • Hosting and scaling a Facebook application in the cloud?

    - by DhruvPathak
    We would be building a Facebook application in Django (Python), but we are still not sure where to host it economically, and with a good provision to scale in case the app goes viral. Some details about the app: (i) it would be HTML based, like a website, using Django as a framework; (ii) 100K is the number of expected pageviews in a day if the app goes viral; (iii) the users will not generate any media content, only some database data. It would be great if someone with more experience could give guidance on the following points: A) Hosting on Google App Engine or Amazon EC2 or some other cloud like Rackspace: preferable points found in App Engine were ease of deployment, cost effectiveness and easy scaling; for EC2: full control of the virtual machine, and Amazon NoSQL and RDBMS database services in case we decide to use them. B) Does backend technology affect monthly cost? E.g., would the CPU and memory usage difference of Django over, for example, a PHP framework like CodeIgniter really make a remarkable difference in running costs? (Here is the article that triggered this thought process: http://journal.dedasys.com/2010/01/12/rough-estimates-of-the-dollar-cost-of-scaling-web-platforms-part-i#comments) C) Does something like Heroku, which provides additional services over Amazon EC2, prove to be better than raw cloud management? It is not that we are trying for premature scaling; we just want a good start so that we are ready to handle unpredicted growth and scale.

    Read the article

  • How to rewrite a TCP MMOG server designed to run on a single machine, in a distributed way?

    - by Dokkat
    I have an MMOG server running on C++, using winsockets. My server won't support more than 200 players. I had the idea of redesigning it so that it uses multiple servers instead of one, so that, for example, each server could take care of a number of players, and, if things got too laggy, it could transfer responsibility for a player to another server. I'm not sure how to program consistent game logic like that, though. Are there techniques for this?

    Read the article


  • Combine auto-syncing cloud and VCS

    - by ComFreek
    This question brought me to another question: is there any VCS, or tool for a VCS, which automatically backs up your source code between the last checkout and the current changes? I had the problem of losing uncommitted source code changes just one week ago. I did not want to commit yet because the changes were incomplete. But then an error when moving the data to a USB stick caused the data loss. That's the opposite of what a cloud service (like Google Drive, SkyDrive, Dropbox, ...) does: it tracks each change you make! Have you lost your data? That's no problem, because you have the latest version online. So what would a combined solution look like? It would offer the full functionality of a VCS, including auto-syncing of any intermediate changes between two commits/checkouts to a temporary online location.
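
    One way to approximate such a combined solution with plain git is a watcher that periodically snapshots the dirty working tree onto a side branch (which could be pushed to an online remote), without ever touching the real commits. A minimal sketch in Python; the branch name and interval are my assumptions:

        import subprocess
        import time

        BACKUP_BRANCH = "auto-backup"  # hypothetical snapshot branch
        INTERVAL_SECONDS = 300         # snapshot every 5 minutes (assumption)

        def snapshot():
            """Record the current working tree as a commit, without moving HEAD."""
            # `git stash create` builds a commit object from the dirty tree
            # and prints its SHA, leaving the working tree untouched.
            sha = subprocess.run(
                ["git", "stash", "create", "auto snapshot"],
                capture_output=True, text=True, check=True,
            ).stdout.strip()
            if sha:  # empty output means there was nothing to snapshot
                subprocess.run(
                    ["git", "update-ref", "refs/heads/" + BACKUP_BRANCH, sha],
                    check=True,
                )
                # To get the "cloud" half, push the branch to a remote:
                # subprocess.run(["git", "push", "-f", "origin", BACKUP_BRANCH], check=True)

        while True:
            snapshot()
            time.sleep(INTERVAL_SECONDS)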

    Read the article

  • Use a custom value object or a Guid as an entity identifier in a distributed system?

    - by Kazark
    tl;dr: I've been told that in domain-driven design, an identifier for an entity could be a custom value object, i.e. something other than Guid, string, int, etc. Can this really be advisable in a distributed system?

    Long version: I will invent a situation analogous to the one I am currently facing. Say I have a distributed system in which a central concept is an egg. The system allows you to order eggs and see spending reports and inventory-centric data such as quantity on hand, usage, valuation and what have you. There are a variety of services backing these behaviors. And say there is also another app which allows you to compose recipes that link to a particular egg type.

    Now egg type is broken down by species—ostrich, goose, duck, chicken, quail. This is fine and dandy because it means that users don't end up with ostrich eggs when they wanted quail eggs and whatnot. However, we've been getting complaints because jumbo chicken eggs are not even close to equivalent to small ones. The price is different, and they really aren't substitutable in recipes. And here we thought we were doing users a favor by not overwhelming them with too many options. Currently each of the services (say, OrderSubmitter, EggTypeDefiner, SpendingReportsGenerator, InventoryTracker, RecipeCreator, RecipeTracker, or whatever) is identifying egg types with an industry-standard integer representing the species (let's call it speciesCode). We realize we've goofed up, because this change could affect every service.

    There are two basic proposed solutions: (1) use a predefined identifier type like Guid as the eggTypeID throughout all the services, but make EggTypeDefiner the only service that knows that this maps to a speciesCode and eggSizeCode (and potentially to an isOrganic flag in the future, or whatever); (2) use an EggTypeID value object which is a combination of speciesCode and eggSizeCode in every service.

    I've proposed the first solution because I'm hoping it better encapsulates the definition of what an egg type is in the EggTypeDefiner and will be more resilient to changes, say if some people now want to differentiate eggs by whether or not they are "organic". The second solution is being suggested by some people who understand DDD better than I do, in the hope that less enrichment and lookup will be necessary that way, with the justification that in DDD using a value object as an ID is fine. Also, they are saying that EggTypeDefiner is not a domain and EggType is not an entity and as such should not have a Guid for an ID. However, I'm not sure the second solution is viable. This "value object" is going to have to be serialized into JSON and URLs for GET requests and used with a variety of technologies (C#, JavaScript...), which breaks encapsulation and thus removes any behavior of the identifier value object (is either of the fields optional? etc.). Is this a case where we want to avoid something that would normally be fine in DDD because we are trying to do DDD in a distributed fashion?

    Summary: Can it be a good idea to use a custom value object as an identifier in a distributed system (solution #2)?
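
    For concreteness, a minimal sketch of the composite identifier in solution #2 as an immutable value object (in Python for illustration; the field names follow the post, while the wire format and delimiter are my assumptions):

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class EggTypeId:
            """Composite identifier from the post: species + size as one value object."""
            species_code: int
            egg_size_code: int

            def __str__(self) -> str:
                # One possible wire format for URLs/JSON; the delimiter is an assumption.
                return f"{self.species_code}-{self.egg_size_code}"

            @classmethod
            def parse(cls, raw: str) -> "EggTypeId":
                species, size = raw.split("-")
                return cls(int(species), int(size))

        # Round trip: the serialized form is what leaks into URLs and JSON payloads,
        # which is exactly the encapsulation concern the post raises.
        assert EggTypeId.parse(str(EggTypeId(3, 2))) == EggTypeId(3, 2)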

    Read the article

  • Asynchronously returning hierarchical data using the .NET TPL... what should my return object "look" like?

    - by makerofthings7
    I want to use the .NET TPL to asynchronously do a DIR /S and search each subdirectory on a hard drive, and I want to search for a word in each file... what should my API look like? In this scenario I know that each subdirectory will have 0..10000 files or 0..10000 directories. I know the tree is unbalanced and I want to return data (in relation to its position in the hierarchy) as soon as it's available. I am interested in getting data as quickly as possible, but I also want to update that result if "better" data is found ("better" meaning closer to the root of C:). I may also be interested in finding all matches in relation to their position in the hierarchy (akin to a report).

    Question: How should I return data to my caller? My first guess is that I need a shared object that will maintain the current "status" of the traversal (started | notstarted | complete), and I might base it on System.Collections.Concurrent. Another idea that I'm considering is the consumer/producer pattern (which the concurrent collections can handle), however I'm not sure what the objects should "look" like.

    Optional logical constraint: the API doesn't have to address this, but in my "real world" design, if a directory has files, then only one file will ever contain the word I'm looking for. If someone were to literally do a DIR /S as described above, then they would need to account for more than one matching file per subdirectory.

    More information: I'm using Azure Tables to store a hierarchy of data using these TPL extension methods. A "node" is a table. Not only does each node in the hierarchy have a relation to any number of nodes, but it's possible for each node to have a reciprocal link back to any other node. This may have issues with recursion, but I'm addressing that with a shared object in my recursion loop. Note that each "node" also has the ability to store local data unique to that node. It is this information that I'm searching for. In other words, I'm searching for a specific fixed RowKey in a hierarchy of nodes. When I search for the fixed RowKey in the hierarchy I'm interested in getting the results FAST (first node found), but prefer data that is "closer" to the starting point of the hierarchy. Since many nodes may have the particular RowKey I'm interested in, sometimes I may want a report of ALL the nodes that contain this RowKey.
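
    Language aside, the producer/consumer shape being considered here can be sketched compactly. A rough illustration in Python (not TPL; all names are mine): producers walk the tree and push (depth, path) matches onto a queue, and a consumer reports each match while tracking the one closest to the root:

        import os
        import queue
        import threading

        results = queue.Queue()  # holds (depth, path) tuples

        def search(root, word, depth=0):
            """Producer: walk one level, recurse into subdirectories, report matches."""
            try:
                entries = list(os.scandir(root))
            except OSError:
                return
            for entry in entries:
                if entry.is_dir(follow_symlinks=False):
                    search(entry.path, word, depth + 1)  # could be its own task
                elif entry.is_file(follow_symlinks=False):
                    try:
                        with open(entry.path, errors="ignore") as f:
                            if word in f.read():
                                results.put((depth, entry.path))
                    except OSError:
                        pass

        def consume():
            """Consumer: report every match, flagging ones closer to the root."""
            best = None
            while True:
                depth, path = results.get()
                if best is None or depth < best:
                    best = depth
                    print("better match (depth %d): %s" % (depth, path))

        threading.Thread(target=consume, daemon=True).start()
        search("C:/", "needle")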

    Read the article

  • Do you think I should integrate something like AWS or another cloud service from the initial phases of my project?

    - by Kareem Ergawy
    Do you think I should integrate something like AWS or another cloud service from the initial phases of my project, or should I be working on the front-end and back-end components first and integrate AWS later? I am starting to work on a mobile service. From day one, I wish to make sure that my service will be scalable and able to handle large loads of requests. This is my first time architecting a large-scale system from the beginning, so I can't decide what is best.

    Read the article

  • Efficient algorithm for Virtual Machine (VM) consolidation in the cloud

    - by devansh dalal
    PROBLEM: We have N physical machines (PMs), each with RAM Ri and CPU Ci, and a set of currently scheduled VMs, each with RAM requirement ri and CPU requirement ci respectively. Moving (migrating) any VM from one PM to another has an associated cost, which depends on its RAM ri. A PM with no VMs is shut down to save power. Our target is to minimize the weighted sum of (N, migration cost) by migrating some VMs, i.e. to minimize the number of working PMs while not degrading the service level through excessive migrations.

    My approach: The brute-force approach is to choose the minimum-loaded PM and try to fit its VMs onto the other PMs using the First Fit Decreasing algorithm; alternatively, we can select victim PMs and target PMs based on their load level and shut down victims where possible by moving their VMs to targets. I tried this greedy approach on the data of Baadal (the IIT-D cloud), but it isn't giving promising results. I have also tried to study ant colony optimization for dynamic VM consolidation but was unable to understand very much. I used these links: http://dumas.ccsd.cnrs.fr/docs/00/72/52/15/PDF/Esnault.pdf http://hal.archives-ouvertes.fr/docs/00/72/38/56/PDF/RR-8032.pdf

    Would anyone please clarify the solution or suggest any new approaches/resources for better performance? I am basically searching for the algorithms, not the physical optimizations, and I also know that many commercial organizations have provided these solutions, but I just want to know more about the underlying algorithms. Thanks in advance.
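
    For reference, the First Fit Decreasing baseline mentioned above can be sketched in a few lines. A minimal one-dimensional version in Python (RAM only; the real problem is at least two-dimensional with CPU, and migration costs are ignored here):

        def first_fit_decreasing(vm_rams, pm_free_ram):
            """Place each VM (sorted large-to-small) into the first PM that fits.

            vm_rams:     RAM demands of the VMs being evacuated.
            pm_free_ram: free RAM per target PM (mutated in place).
            Returns a list of (vm_ram, pm_index), or None if some VM doesn't fit.
            """
            placement = []
            for ram in sorted(vm_rams, reverse=True):
                for i, free in enumerate(pm_free_ram):
                    if ram <= free:
                        pm_free_ram[i] -= ram
                        placement.append((ram, i))
                        break
                else:
                    return None  # this PM cannot be fully evacuated
            return placement

        # Example: evacuate a lightly loaded PM hosting VMs of 4, 2 and 1 GB
        # into two PMs with 5 GB and 3 GB free.
        print(first_fit_decreasing([4, 2, 1], [5, 3]))  # [(4, 0), (2, 1), (1, 0)]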

    Read the article

  • What is a lightweight lock in distributed shared memory systems?

    - by Kutluhan Metin
    I started reading Tanenbaum's Distributed Systems book a while ago. I read about two-phase locking and timestamp reordering in the transactions chapter. While digging deeper via Google, I heard of lightweight transactions/lightweight transactional memory. But I couldn't find any good explanation or implementation. So what is lightweight transactional memory? What are the benefits of lightweight locks? And how can I implement them?

    Read the article

  • Where do you earn more money (Autonomous Systems vs Distributed Systems)? [closed]

    - by Puckl
    I am interested in both topics and I can choose between them for my computer science master's. I think the distributed systems master's focuses more on software technologies, and the autonomous systems master's is focused on robotics and machine learning. Do you get good jobs in the field of machine learning without a Ph.D.? I guess there are more jobs available in the software-tech world; is this right? Where do you earn more money? (It is not the only criterion, but it matters.)

    Read the article

  • Storing Projects on Google Drive (Cloud)

    - by JamesKraw
    I've started using Google Drive for my cloud needs and backing up pretty much everything. I've got the app installed, so it auto-syncs all my content in most things. My question is this: I am currently coding for iOS (although this applies to any coding project) and am split on storing my project files on Google Drive while using sync. My theory is that if I did use it, I'd never have to worry about system crashes or code lost before backups; but if I do use it, it will be syncing a lot, and I thought there might be problems with it detecting changes and trying to sync, for example, halfway through compiling. Bandwidth isn't an issue, as I have a fast connection and an unlimited monthly allowance. Has anyone ever used this, or similar cloud-based syncing (Dropbox etc.), for this, and do you know whether it works or not, or whether there are any potential problems?

    Read the article

  • Parallelize code using CUDA [migrated]

    - by user878944
    If I have code which takes a struct variable as input and manipulates its elements, how can I parallelize this using CUDA?

        void BackpropagateLayer(NET* Net, LAYER* Upper, LAYER* Lower)
        {
            INT  i, j;
            REAL Out, Err;

            for (i = 1; i <= Lower->Units; i++) {
                Out = Lower->Output[i];
                Err = 0;
                for (j = 1; j <= Upper->Units; j++) {
                    Err += Upper->Weight[j][i] * Upper->Error[j];
                }
                Lower->Error[i] = Net->Gain * Out * (1 - Out) * Err;
            }
        }

    where NET and LAYER are structs defined as:

        typedef struct {                /* A LAYER OF A NET:                    */
            INT    Units;               /* - number of units in this layer      */
            REAL*  Output;              /* - output of ith unit                 */
            REAL*  Error;               /* - error term of ith unit             */
            REAL** Weight;              /* - connection weights to ith unit     */
            REAL** WeightSave;          /* - saved weights for stopped training */
            REAL** dWeight;             /* - last weight deltas for momentum    */
        } LAYER;

        typedef struct {                /* A NET:                               */
            LAYER** Layer;              /* - layers of this net                 */
            LAYER*  InputLayer;         /* - input layer                        */
            LAYER*  OutputLayer;        /* - output layer                       */
            REAL    Alpha;              /* - momentum factor                    */
            REAL    Eta;                /* - learning rate                      */
            REAL    Gain;               /* - gain of sigmoid function           */
            REAL    Error;              /* - total net error                    */
        } NET;

    What I could think of is to first convert the 2D Weight array into 1D, and then send it to the kernel to take the product, or just use the cuBLAS library. Any suggestions?

    Read the article

  • Dividing sections inside an omp parallel for : OpenMP

    - by Sayan Ghosh
    Hi, I have a situation like:

        #pragma omp parallel for private(i, j, k, val, p, l)
        for (i = 0; i < num1; i++) {
            for (j = 0; j < num2; j++) {
                for (k = 0; k < num3; k++) {
                    val = m[i + j*somenum + k*2];
                    if (val != 0)
                        for (l = start; l <= end; l++) {
                            someFunctionThatWritesIntoGlobalArray(
                                (i + l), j, k,
                                someFunctionThatGetsValueFromAnotherArray((i + l), j, k) * val);
                        }
                }
            }
            for (p = 0; p < num4; p++) {
                m[p] = 0;
            }
        }

    Thanks for reading, phew! Well, I am noticing a very minor difference in the results (0.999967 [omp] against 1 [serial]) when I use the above (which is 3 times faster) against the serial implementation. Now I know I am making a mistake here... especially as the coupling between the loops is evident. Is it possible to parallelize this using omp sections? I tried some options, like making shared(p) {doing this, I got correct values, as in the serial form}, but there was no speedup then. Any general advice on handling OpenMP pragmas over a slew of for loops would also be great for me!

    Read the article

  • Parallel scroll textarea and webpage with jquery

    - by Roger Rogers
    This is both a conceptual and a how-to question: in wiki formatting, or non-WYSIWYG editor scenarios, you typically have a textarea for content entry and then an ancillary preview pane to show the results, just like Stack Overflow. This works fairly well, except with larger amounts of text, such as full-page wikis, etc. I have a concept that I'd like critical feedback/advice on. Envision a two-pane layout, with the preview content on the left side, taking up ~2/3 of the page, and the textarea on the right side, taking up ~1/3 of the page. The textarea would float, so it remains in view even if the user scrolls the browser window. Furthermore, if the user scrolls the textarea content, supposing it has exceeded the textarea's frame size, the page would scroll so that the content presently showing in the textarea syncs/is parallel with the content showing in the browser window. I'm imagining a wiki scenario, where going back and forth between markup and preview is frustrating. I'm curious what others think; is there anything out there like this? Any suggestions on how to attack this functionality (ideally using jQuery)? Thanks

    Read the article

  • Computing complex math equations in Python

    - by dassouki
    Are there any libraries or techniques that simplify computing equations? Take the following two examples:

        F = B * { [ a * b * sumOf(A / B, for all i) ] / [ sumOf(c * d * j) ] }

    where F is the cost from i to j, and B, a, b, c, d, j are all vectors in the format [[zone_i, zone_j, cost_of_i_to_j], ...]. This should produce a vector F = [[1, 2, F_1_2], ..., [i, j, F_i_j]].

        T_ij = [ P_i * A_i * F_i_j ] / [ sumOf(A_j * F_i_j), for j = 1 to n ]

    where n is the number of zones, T is a vector [[1, 2, A_1_2, P_1_2], ..., [i, j, A_i_j, P_i_j]], and F is a vector [[1, 2, F_1_2], ..., [i, j, F_i_j]]; so P_i would be the sum of all P_i_j over all j, and A_j would be the sum of all P_j over all i.

    I'm not sure what I'm looking for, but perhaps a parser for these equations, or methods to deal with multiple multiplications and products between vectors? To calculate some of the factors, for example A_j, this is what I use:

        from collections import defaultdict

        A_j_dict = defaultdict(float)
        for A_item in TG:
            A_j_dict[A_item[1]] += A_item[3]

    Although this works fine, I really feel that it is a brute-force/hacking method and unmaintainable in case we want to add more variables or parameters. Are there any math equation parsers you'd recommend?

    Side note: these equations are used to model travel. Currently I use Excel to solve a lot of these equations, and I find that process daunting. I'd rather move to Python, where it pulls the data directly from our database (Postgres) and outputs the results into the database. All that is figured out; I'm just struggling with evaluating the equations themselves. Thanks :)
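
    Since the second equation has the shape of a trip-distribution (gravity-model) computation, one route is to vectorize it with NumPy instead of parsing equation strings. A minimal sketch, assuming the [i, j, value] triples have already been pivoted into dense matrices, and written with A_j in the numerator following the usual gravity-model form (the random inputs are placeholders):

        import numpy as np

        # Hypothetical dense inputs: P[i] productions, A[j] attractions,
        # F[i, j] friction factors, for n zones.
        n = 4
        rng = np.random.default_rng(0)
        P = rng.uniform(10, 100, size=n)
        A = rng.uniform(10, 100, size=n)
        F = rng.uniform(0.1, 1.0, size=(n, n))

        # T[i, j] = P[i] * A[j] * F[i, j] / sum_k(A[k] * F[i, k])
        denom = (A * F).sum(axis=1, keepdims=True)   # row sums of A_j * F_ij
        T = P[:, None] * A[None, :] * F / denom

        # Each row of T distributes P[i] across destinations j:
        assert np.allclose(T.sum(axis=1), P)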

    Read the article

  • php mysql parallel array checkboxes

    - by gramware
    I have an array of checkboxes that I edit at once to set a 'tinyint' field. The problem comes in when I uncheck a checkbox and post the values to MySQL: since the form posts an array of checkboxes and another, parallel array of values to edit, unchecking a checkbox results in the 0 value being omitted from $_POST, and hence the checkbox array will be shorter, by the number of unchecked boxes in the form, than the array of records to be edited. Here is the submit code:

        while ($row = mysql_fetch_array($result)) {
            $checked = ($row['active'] == 1) ? 'checked="checked"' : '';
            ...
            echo "<input type='hidden' name='TrID[]' value='$TrID'>";
            echo "<input type='checkbox' name='active[]' value='{$row['active']}' $checked>";
            ...
        }

    and the processing PHP script:

        $userid  = $_POST['TrID'];
        $checked = $_POST['active'];
        $i = 0;
        foreach ($userid as $usid) {
            $check = ($checked[$i] == 1) ? 1 : 0;
            $qry1 = "UPDATE `epapers`.`clientelle` SET `active` = '$check'
                     WHERE `clientelle`.`user_id` = '$usid'";
            $result = mysql_query($qry1);
            $i++;
        }

    Read the article

  • Building a new cluster for mathematical calculations (Win/Lin)

    - by Muhammad Farhan
    I would like to build a new cluster to perform heavy mathematical calculations in Matlab and Abaqus. One of my friends told me that distributed computing is way faster than parallel computing on a single machine, which seemed to be confirmed by a bit of reading on the internet. However, I have never built a cluster before. The current workstation I own: Dell Precision T5400, 2 x Intel Xeon 2.5 GHz, 16 GB RAM (2 GB x 8), 1 x Western Digital 1 TB HDD 7200 rpm, 1 x Nvidia Quadro FX4600 768 MB GPU, 1 x 870 W PSU, OS: Windows 7 Ultimate 64-bit. 2nd WS: I can buy another workstation with a configuration similar to the one I own. I am not bothered about the OS; I am willing to cluster with either Windows or Linux. However, my software is compatible with 64-bit Windows only. Please help me set up a cluster. Thank you.

    Read the article

  • NTRU Pseudo-code for computing Polynomial Inverses

    - by Neville
    Hello all. I was wondering if anyone could tell me how to implement line 45 of the following pseudo-code.

        Require: the polynomial to invert a(x), N, and q.
         1: k = 0
         2: b = 1
         3: c = 0
         4: f = a
         5: g = 0              {Steps 5-7 set g(x) = x^N - 1.}
         6: g[0] = -1
         7: g[N] = 1
         8: loop
         9:   while f[0] = 0 do
        10:     for i = 1 to N do
        11:       f[i - 1] = f[i]            {f(x) = f(x)/x}
        12:       c[N + 1 - i] = c[N - i]    {c(x) = c(x) * x}
        13:     end for
        14:     f[N] = 0
        15:     c[0] = 0
        16:     k = k + 1
        17:   end while
        18:   if deg(f) = 0 then
        19:     goto Step 32
        20:   end if
        21:   if deg(f) < deg(g) then
        22:     temp = f    {Exchange f and g}
        23:     f = g
        24:     g = temp
        25:     temp = b    {Exchange b and c}
        26:     b = c
        27:     c = temp
        28:   end if
        29:   f = f XOR g
        30:   b = b XOR c
        31: end loop
        32: j = 0
        33: k = k mod N
        34: for i = N - 1 downto 0 do
        35:   j = i - k
        36:   if j < 0 then
        37:     j = j + N
        38:   end if
        39:   Fq[j] = b[i]
        40: end for
        41: v = 2
        42: while v < q do
        43:   v = v * 2
        44:   StarMultiply(a; Fq; temp; N; v)
        45:   temp = 2 - temp mod v
        46:   StarMultiply(Fq; temp; Fq; N; v)
        47: end while
        48: for i = N - 1 downto 0 do
        49:   if Fq[i] < 0 then
        50:     Fq[i] = Fq[i] + q
        51:   end if
        52: end for
        53: {InversePolyFq returns the inverse polynomial, Fq, through the argument list.}

    The function StarMultiply returns a polynomial (an array) through the variable temp. Basically, temp is a polynomial (I'm representing it as an array) and v is an integer (say 4 or 8), so what exactly does temp = 2 - temp mod v equate to in normal language? How should I implement that line in my code? Can someone give me an example? The above algorithm is for computing inverse polynomials for NTRUEncrypt key generation. The pseudo-code can be found on page 28 of this document. Thanks in advance.
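
    For what it's worth, lines 44-46 have the shape of the usual Hensel-lifting step Fq <- Fq * (2 - a*Fq) mod v, which would make line 45 a coefficient-wise operation where 2 is the constant polynomial 2. A small sketch of that reading in Python (an interpretation, not an authoritative answer for this exact paper):

        def two_minus_poly_mod(temp, v):
            """Compute (2 - temp) mod v coefficient-wise, for temp a coefficient array.

            The constant '2' only touches coefficient 0; every other coefficient
            is simply negated, and everything is reduced modulo v.
            """
            out = [(-c) % v for c in temp]
            out[0] = (2 - temp[0]) % v
            return out

        # Example with v = 4: (2 - (3 + 2x + x^2)) mod 4 -> 3 + 2x + 3x^2
        print(two_minus_poly_mod([3, 2, 1], 4))  # [3, 2, 3]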

    Read the article

  • Using the AsyncTask class for a parallel operation and displaying a progress bar

    - by Kumar
    I am displaying a progress bar using the AsyncTask class, and simultaneously, in a parallel operation, I want to retrieve a string array from a function of another class that takes some time to return it. The problem is that when I place the function call in doInBackground() of the AsyncTask class, it gives an error in doInBackground with the message that you can't change the UI in doInBackground. Therefore, I placed the function call in the onPostExecute() method of the AsyncTask class. That does not give an error, but after the progress bar has reached 100%, the screen goes black and takes some time to start the new activity. How can I display the progress bar and make the function call simultaneously? Please help; here is the code:

        package com.integrated.mpr;

        import android.app.Activity;
        import android.app.ProgressDialog;
        import android.content.Intent;
        import android.os.AsyncTask;
        import android.os.Bundle;
        import android.os.Handler;
        import android.view.View;
        import android.view.View.OnClickListener;
        import android.widget.Button;

        public class Progess extends Activity implements OnClickListener {
            static String[] display = new String[Choose.n];
            Button bprogress;

            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.progress);
                bprogress = (Button) findViewById(R.id.bProgress);
                bprogress.setOnClickListener(this);
            }

            @Override
            public void onClick(View v) {
                switch (v.getId()) {
                    case R.id.bProgress:
                        String x = "abc";
                        new loadSomeStuff().execute(x);
                        break;
                }
            }

            public class loadSomeStuff extends AsyncTask<String, Integer, String> {
                ProgressDialog dialog;

                protected void onPreExecute() {
                    dialog = new ProgressDialog(Progess.this);
                    dialog.setProgressStyle(ProgressDialog.STYLE_HORIZONTAL);
                    dialog.setMax(100);
                    dialog.show();
                }

                @Override
                protected String doInBackground(String... arg0) {
                    for (int i = 0; i < 40; i++) {
                        publishProgress(5);
                        try {
                            Thread.sleep(1000);
                        } catch (InterruptedException e) {
                            e.printStackTrace();
                        }
                    }
                    dialog.dismiss();
                    String y = "abc";
                    return y;
                }

                protected void onProgressUpdate(Integer... progress) {
                    dialog.incrementProgressBy(progress[0]);
                }

                protected void onPostExecute(String result) {
                    display = new Logic().finaldata();
                    Intent openList = new Intent("com.integrated.mpr.SENSITIVELIST");
                    startActivity(openList);
                }
            }
        }

    Read the article

  • Trying to run multiple HTTP requests in parallel, but being limited by Windows (registry)

    - by Nailuj
    I'm developing an application (WinForms, C#, .NET 4.0) where I access a lookup functionality from a 3rd party through a simple HTTP request: I call a URL with a parameter, and in return I get a small string with the result of the lookup. Simple enough. The challenge, however, is that I have to do lots of these lookups (a couple of thousand), and I would like to limit the time needed. Therefore I would like to run the requests in parallel (say 10-20). I use a ThreadPool to do this, and the short version of my code looks like this:

        public void startAsyncLookup(Action<LookupResult> returnLookupResult)
        {
            this.returnLookupResult = returnLookupResult;
            foreach (string number in numbersToLookup)
            {
                ThreadPool.QueueUserWorkItem(lookupNumber, number);
            }
        }

        public void lookupNumber(Object threadContext)
        {
            string numberToLookup = (string)threadContext;
            string url = @"http://some.url.com/?number=" + numberToLookup;
            WebClient webClient = new WebClient();
            Stream responseData = webClient.OpenRead(url);
            LookupResult lookupResult = parseLookupResult(responseData);
            returnLookupResult(lookupResult);
        }

    I fill up numbersToLookup (a List<String>) from another place, call startAsyncLookup, and provide it with a callback function returnLookupResult to return each result. This works, but I found that I'm not getting the throughput I want.

    Initially I thought it might be the 3rd party having a poor system on their end, but I excluded this by trying to run the same code from two different machines at the same time. Each of the two took as long as one did alone, so I could rule that out. A colleague then tipped me off that this might be a limitation in Windows. I googled a bit, and found amongst others this post saying that by default Windows limits the number of simultaneous requests to the same web server to 4 for HTTP 1.0 and to 2 for HTTP 1.1 (for HTTP 1.1 this is actually according to the specification (RFC 2068)).

    The same post also provided a way to increase these limits: by adding two registry values to [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings] (MaxConnectionsPerServer and MaxConnectionsPer1_0Server), I could control this myself. So, I tried this (set both to 20), restarted my computer, and ran my program again. Sadly though, it didn't seem to help at all. I also kept an eye on the Resource Monitor (see screen shot) while running my batch lookup, and I noticed that my application (the one with the title blacked out) was still only using two TCP connections.

    So, the question is: why isn't this working? Is the post I linked to using the wrong registry values? Is this perhaps not possible to "hack" in Windows any longer (I'm on Windows 7)? Any ideas would be highly appreciated :) And just in case anyone should wonder, I have also tried different settings for MaxThreads on the ThreadPool (everything from 10 to 100), and this didn't seem to affect my throughput at all, so the problem shouldn't be there either.

    Read the article

  • SAP moves into cloud computing and presents the first application of its new "On-Demand" range

    SAP moves into cloud computing, and presents the first application of its new generation of "On-Demand" applications. This year, SAP's presence at the prestigious CeBIT trade fair is built around promoting the new generation of its "On-Demand" solutions, combining the power of cloud computing with the flexibility of pay-per-use in SaaS mode. To meet the expectations of companies that today are looking to optimize their business processes and adapt them to their lines of business without reinvesting in their information systems, SAP is introducing a new range of On-Demand solutions, integrated into the SAP Business Suite software suite.

    Read the article

  • Ikoula launches a new dedicated server: the Green GPU offers "192 CUDA Parallel Processor Cores" to graphic design professionals

    Ikoula launches a new dedicated server: the Green GPU offers "192 CUDA Parallel Processor Cores" to graphic design professionals. The French hosting provider Ikoula is offering for rent a new dedicated server that includes a professional graphics card, or GPU. The Nvidia Quadro 2000D is the card chosen for the launch of this new dedicated server offering. The Quadro 2000D is built around the core of Nvidia's Fermi technology and offers 192 CUDA Parallel Processor Cores, all accompanied by...

    Read the article
