Search Results

Search found 9545 results on 382 pages for 'least privilege'.

  • Can anyone give me tips on how to solve this using graphs in C or Java?

    - by peiska
    Can anyone give me tips on how to solve this using graphs in C or Java? I have a rectangular sector that I have to escape, and I have energy that decreases with every step I take in the area. I have to output the single possible solution, the one that uses the least number of steps. If there are at least two exits with the same number of steps, (X1, Y1) and (X2, Y2), then choose the first if X1 < X2, or if X1 = X2 and Y1 < Y2. The position (1, 1) corresponds to the upper left corner. Examples: This is one sector, and I start with 40 energy at position (3, 3): 12 11 12 11 3 12 12 12 11 11 12 2 1 13 11 11 12 2 13 2 14 10 11 13 3 2 1 12 10 11 13 13 11 12 13 12 12 11 13 11 13 12 13 12 12 11 11 11 11 13 13 10 10 13 11 12 The best solution to exit the sector is position (5, 1); the remaining energy is 12 and I need 8 steps to leave the area. For this sector I start with 8 energy at position (3, 4): 4 3 3 2 2 3 2 2 5 2 2 2 3 3 2 1 2 2 3 2 2 4 3 3 2 2 4 1 3 1 4 3 2 3 1 2 2 3 3 0 3 4 And for this one there is no way out, because all the energy is lost.
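
    One way to attack this is a breadth-first search over the grid: BFS finds the minimum number of steps by construction, and the remaining energy can be carried along in each state. The sketch below is in Python just to show the structure; it assumes each step costs the value of the cell you move onto, that you escape by stepping past the border, and that ties are broken on the coordinates of the last cell inside the grid - one possible reading of the rules, not a definitive solution.

        from collections import deque

        def escape(grid, start, energy):
            rows, cols = len(grid), len(grid[0])
            sr, sc = start[0] - 1, start[1] - 1            # positions in the problem are 1-based
            best = {(sr, sc): energy}                      # best remaining energy seen per cell
            queue = deque([(sr, sc, energy, 0)])           # (row, col, energy left, steps taken)
            answer = None                                  # (steps, exit row, exit col, energy left)
            while queue:
                r, c, e, steps = queue.popleft()
                if answer is not None and steps + 1 > answer[0]:
                    break                                  # BFS order: no shorter exit can appear now
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    nr, nc = r + dr, c + dc
                    if not (0 <= nr < rows and 0 <= nc < cols):
                        cand = (steps + 1, r + 1, c + 1, e)   # stepping off the board: candidate exit
                        if answer is None or cand[:3] < answer[:3]:
                            answer = cand                  # fewer steps first, then smaller coordinates
                        continue
                    ne = e - grid[nr][nc]                  # assumed cost rule: pay the target cell
                    if ne > 0 and best.get((nr, nc), -1) < ne:
                        best[(nr, nc)] = ne
                        queue.append((nr, nc, ne, steps + 1))
            return answer                                  # None means there is no way out

    Under those assumptions, escape(rows, (3, 3), 40) on the first example should report an exit after 8 steps; if the cost rule in the real assignment differs, only the ne = ... line needs to change.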

    Read the article

  • How to keep your unit tests simple and isolated and still guarantee DDD invariants?

    - by ian31
    DDD recommends that the domain objects should be in a valid state at any time. Aggregate roots are responsible for guaranteeing the invariants, and Factories for assembling objects with all the required parts so that they are initialized in a valid state. However, this seems to complicate the task of creating simple, isolated unit tests a lot. Let's assume we have a BookRepository that contains Books. A Book has: an Author, a Category, and a list of Bookstores you can find the book in. These are required attributes: a book has to have an author, a category and at least one bookstore you can buy the book from. There's likely to be a BookFactory since it is quite a complex object, and the Factory will initialize the Book with at least all the mentioned attributes. Now we want to unit test a method of the BookRepository that returns all the Books. To test if the method returns the books, we have to set up a test context (the Arrange step in AAA terms) where some Books are already in the Repository. If the only tool at our disposal to create Book objects is the Factory, the unit test now also uses and is dependent on the Factory and indirectly on Category, Author and Store, since we need those objects to build up a Book and then place it in the test context. Would you consider this a dependency in the same way that in a Service unit test we would be dependent on, say, a Repository that the Service would call? How would you solve the problem of having to re-create a whole cluster of objects in order to be able to test a simple thing? How would you break that dependency and get rid of all these attributes we don't need in our test? By using mocks or stubs? If you mock up things a Repository contains, what kind of mocks/stubs would you use, as opposed to when you mock up something the object under test talks to or consumes?
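
    One common way to cut that dependency is a test data builder that lives with the tests: it wraps the Factory once, supplies valid defaults for every required attribute, and lets each test override only the field it cares about, so the invariants stay enforced in a single place and the test body stays small. A minimal Python-flavoured sketch of the idea - every class here is a stand-in invented for illustration, not part of the question's codebase:

        from dataclasses import dataclass
        from typing import List

        @dataclass
        class Book:                          # stand-in for the real aggregate
            author: str
            category: str
            stores: List[str]

        class InMemoryBookRepository:        # stand-in for the repository under test
            def __init__(self):
                self._books = []
            def add(self, book):
                self._books.append(book)
            def all_books(self):
                return list(self._books)

        class BookBuilder:
            """Test-only helper: builds Books that satisfy the invariants by default."""
            def __init__(self):
                self._author = "Default Author"
                self._category = "Default Category"
                self._stores = ["Default Store"]
            def written_by(self, author):
                self._author = author
                return self
            def build(self):
                # In production code this would delegate to the real BookFactory,
                # so the invariants stay enforced in exactly one place.
                return Book(self._author, self._category, list(self._stores))

        def test_returns_all_books():
            repository = InMemoryBookRepository()
            repository.add(BookBuilder().build())
            repository.add(BookBuilder().written_by("Someone Else").build())
            assert len(repository.all_books()) == 2

    Whether the builder calls the real Factory or a lighter stand-in is exactly the stub-versus-mock trade-off the question raises; the builder at least keeps that decision in one spot.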

    Read the article

  • How can I get rid of 'ORA-01489: result of string concatenation is too long' in this query?

    - by core_pro
    This query gets the dominating sets in a network. So, for example, given a network A<----->B B<----->C B<----->D C<----->E D<----->C D<----->E F<----->E it returns B,E B,F A,E but it doesn't work for large data because I'm using string methods in my result. I have been trying to remove the string methods and return a view or something, but to no avail. With t as (select 'A' as per1, 'B' as per2 from dual union all select 'B','C' from dual union all select 'B','D' from dual union all select 'C','B' from dual union all select 'C','E' from dual union all select 'D','C' from dual union all select 'D','E' from dual union all select 'E','C' from dual union all select 'E','D' from dual union all select 'F','E' from dual) ,t2 as (select distinct least(per1, per2) as per1, greatest(per1, per2) as per2 from t union select distinct greatest(per1, per2) as per1, least(per1, per2) as per1 from t) ,t3 as (select per1, per2, row_number() over (partition by per1 order by per2) as rn from t2) ,people as (select per, row_number() over (order by per) rn from (select distinct per1 as per from t union select distinct per2 from t) ) ,comb as (select sys_connect_by_path(per,',')||',' as p from people connect by rn > prior rn ) ,find as (select p, per2, count(*) over (partition by p) as cnt from ( select distinct comb.p, t3.per2 from comb, t3 where instr(comb.p, ','||t3.per1||',') > 0 or instr(comb.p, ','||t3.per2||',') > 0 ) ) ,rnk as (select p, rank() over (order by length(p)) as rnk from find where cnt = (select count(*) from people) order by rnk ) select distinct trim(',' from p) as p from rnk where rnk.rnk = 1

    Read the article

  • Signable, streamable, "readable" archive format?

    - by alexvoda
    Is there any archive format that offers the following: be digitally sign-able with a digital certificate from a trusted source like Verisign - for preventing changes to the file (I am not referring to read-only; in case the file was changed it should no longer verify as signed, telling the user this is not the original file) be stream-able - be able to be opened even if not all of the content has been transferred (also not strictly linearly) be "readable" - be able to read the data without extracting to a temporary folder (AFAIK if you open a file in a zip archive it is extracted first, and this stays true even for zip-based formats like OOXML. This is not what I want) be portable - support on at least Windows, Linux and Mac OS X is a must, or at least future support be free of patents - be open source - also preferably a license that allows commercial use (as far as I know the GPL is a share-alike licence, so it doesn't allow commercial use; BSD, on the other hand, allows it) Note: Though it may come in handy eventually, I can not think right now of a scenario that would require both point 1 and point 2 simultaneously. Or let's leave it at: be able to check the signature only when the whole file has been downloaded. I am not interested in: being able to be compressed being supported on legacy systems Does any existing archive format fit this description (tar evolutions like DAR and pax come to mind)? If there is, are there programming libraries available for the above mentioned OSs? If not, would it be hard to create such a thing? EDIT: clarified point 5 EDIT 2: added a note to clarify points 1 and 2 P.S.: This is my first question on StackOverflow

    Read the article

  • Using load instead of other I/O commands

    - by Amadou
    How can I modify this program to use the load -ascii command to read the (x,y) pairs? n=0; sum_x = 0; sum_y = 0; sum_x2 = 0; sum_xy = 0; disp('This program performs a least-squares fit of an'); disp('input data set to a straight line. Enter the name'); disp('of the file containing the input (x,y) pairs: '); filename = input(' ','s'); [fid,msg] = fopen(filename,'rt'); if fid<0 disp(msg); else [in,count]=fscanf(fid, '%g %g',2); while ~feof(fid) x=in(1); y=in(2); n=n+1; sum_x=sum_x+x; sum_y=sum_y+y; sum_x2=sum_x2+x.^2; sum_xy=sum_xy+x*y; [in,count] = fscanf(fid, '%f',[1 2]); end fclose(fid); x_bar = sum_x / n; y_bar = sum_y / n; slope = (sum_xy - sum_x*y_bar) / (sum_x2 - sum_x*x_bar); y_int = y_bar - slope * x_bar; fprintf('Regression coefficients for the least-squares line:\n'); fprintf('Slope (m) =%12.3f\n',slope); fprintf('Intercept (b) =%12.3f\n',y_int); fprintf('No of points =%12d\n',n); end
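
    The gist of switching to load is that the whole two-column file comes in as one matrix, so the fscanf loop disappears and the sums become vector operations. MATLAB aside, the same shape of solution in Python/NumPy looks roughly like this (the file name and the two-column layout are assumed):

        import numpy as np

        data = np.loadtxt("input.txt")      # one call replaces the whole fscanf loop; columns are x, y
        x, y = data[:, 0], data[:, 1]
        n = len(x)

        # Same least-squares formulas as in the program above, just vectorized.
        slope = (np.sum(x * y) - np.sum(x) * y.mean()) / (np.sum(x * x) - np.sum(x) * x.mean())
        intercept = y.mean() - slope * x.mean()

        print(f"Slope (m)     = {slope:12.3f}")
        print(f"Intercept (b) = {intercept:12.3f}")
        print(f"No of points  = {n:12d}")

    In MATLAB the equivalent first step would be data = load(filename, '-ascii') followed by x = data(:,1); y = data(:,2); with the rest of the fitting code unchanged.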

    Read the article

  • How to convert many thousands of lines of VBScript to C#?

    - by Ross Patterson
    I have a collection of about 10,000 small VBScript programs (50-100 lines each) and a small collection of larger ones, and I'm looking for a way to convert them to C# without resorting to by-hand transliteration. The programs are automated test cases for a web application, written for HP/Mercury's QuickTest Pro, and I'm trying to turn them into test cases for Selenium. Luckily, the tests appear to be well-written, using a library of building blocks and idioms (the larger programs), so the test cases actually resemble a domain-specific language more than they do VBScript, and the QTP-ness is well-buried inside the libraries. Ideally, what I'm searching for is a tool that can do the syntactic transformation from VBScript to C# for both the dsl-ish test cases and also the more complicated building-block libraries. That would leave me with a manual cleanup of the libraries, and probably very little work on the test cases. If I could find a VBScript-to-VB.NET translator, I'd take that also, as I suspect I could compile the VB.NET and then de-compile to C# using .NET Reflector or something similar. Plan B is to write a translator of my own for the test cases, since they're in a very straight-line style, but it wouldn't help with the libraries. Any suggestions? I haven't written a compiler in at least 15 years, and while I haven't forgotten how, I'm not looking forward to it - least of all for VBScript!

    Read the article

  • OO model for NSXMLParser when delegate is not self

    - by richard
    Hi, I am struggling with the correct design for the delegates of NSXMLParser. In order to build my table of Foos, I need to make two types of webservice calls; one for the whole table and one for each row. It's essentially a master-query then detail-query, except the master-query-result-xml doesn't return enough information, so I then need to query the detail for each row. I'm not dealing with enormous amounts of data. Anyway - previously I've just used NSXMLParser *parser = [[NSXMLParser alloc]init]; [parser setDelegate:self]; [parser parse]; and implemented all the appropriate delegate methods in whatever class I'm in. In an attempt at cleanliness, I've now created two separate delegate classes and done something like: NSXMLParser *xp = [[NSXMLParser alloc]init]; MyMasterXMLParserDelegate *masterParserDelegate = [[MyMasterXMLParserDelegate alloc]init]; [xp setDelegate:masterParserDelegate]; [xp parse]; In addition to being cleaner (in my opinion, at least), it also means each of the -parser:didStartElement implementations doesn't spend most of its time trying to figure out which xml it is parsing. So now the real crux of the problem. Before I split out the delegates, I had, in the main class that was also implementing the delegate methods, a class-level NSMutableArray that I would just put my objects-created-from-xml in when -parser:didEndElement found the 'end' of each record. Now that the delegates are in separate classes, I can't figure out how to have the -parser:didEndElement in the 'detail' delegate class "return" the created object to the calling class. At least, not in a clean OO way. I'm sure I could do it with all sorts of nasty class methods. Does the question make sense? Thanks.
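
    One option that keeps the delegate classes self-contained is to hand the detail delegate a completion callback (or a shared mutable array) when you create it, and have its -parser:didEndElement: call that instead of writing into a field of the calling class. Stripped of the Cocoa specifics, the shape is just this - all names below are invented for illustration:

        class DetailParserDelegate:
            """Stands in for the detail-query NSXMLParser delegate."""
            def __init__(self, on_record_parsed):
                self._on_record_parsed = on_record_parsed   # supplied by whoever created the delegate
                self._current = None

            def did_start_element(self, name):
                if name == "record":
                    self._current = {}

            def did_end_element(self, name):
                if name == "record":
                    self._on_record_parsed(self._current)   # hand the finished object back
                    self._current = None

        records = []
        delegate = DetailParserDelegate(records.append)     # the caller decides where results go

    In Objective-C the same idea is usually a block property on the delegate, or an NSMutableArray passed into its initializer and read back after [xp parse] returns.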

    Read the article

  • Reversible numerical calculations in Prolog

    - by user8472
    While reading SICP I came across logic programming chapter 4.4. Then I started looking into the Prolog programming language and tried to understand some simple assignments in Prolog. I found that Prolog seems to have troubles with numerical calculations. Here is the computation of a factorial in standard Prolog: f(0, 1). f(A, B) :- A > 0, C is A-1, f(C, D), B is A*D. The issues I find is that I need to introduce two auxiliary variables (C and D), a new syntax (is) and that the problem is non-reversible (i.e., f(5,X) works as expected, but f(X,120) does not). Naively, I expect that at the very least C is A-1, f(C, D) above may be replaced by f(A-1,D), but even that does not work. My question is: Why do I need to do this extra "stuff" in numerical calculations but not in other queries? I do understand (and SICP is quite clear about it) that in general information on "what to do" is insufficient to answer the question of "how to do it". So the declarative knowledge in (at least some) math problems is insufficient to actually solve these problems. But that begs the next question: How does this extra "stuff" in Prolog help me to restrict the formulation to just those problems where "what to do" is sufficient to answer "how to do it"?

    Read the article

  • How can I optimize the SELECT statement running on an Oracle database?

    - by Elvis Lou
    I have a SELECT statement in Oracle: SELECT COUNT(DISTINCT ds1.endpoint_msisdn) multiple30, dss1.service, dss1.endpoint_provisioning_id, dss1.company_scope, Nvl(x.subscription_status, dss1.subscription_status) subscription_status FROM daily_summary ds1 join daily_summary ds2 ON ds1.endpoint_msisdn = ds2.endpoint_msisdn, daily_summary_static dss1, daily_summary_static dss2, (SELECT NULL subscription_status FROM dual UNION ALL SELECT -2 subscription_status FROM dual) x WHERE ds1.summary_ts >= To_date('10-04-2012', 'dd-mm-yyyy') - 30 AND ds1.summary_ts <= To_date('10-04-2012', 'dd-mm-yyyy') AND dss1.last_active >= To_date('10-04-2012', 'dd-mm-yyyy') - 30 AND dss1.last_active <= To_date('10-04-2012', 'dd-mm-yyyy') AND dss2.last_active >= To_date('10-04-2012', 'dd-mm-yyyy') - 30 AND dss2.last_active <= To_date('10-04-2012', 'dd-mm-yyyy') AND dss1.service <> dss2.service AND ( dss1.company_scope = 2 OR dss1.company_scope = 5 ) AND ( dss2.company_scope = 2 OR dss2.company_scope = 5 ) AND dss1.company_scope = dss2.company_scope AND ds1.endpoint_noc_id = dss1.endpoint_noc_id AND ds1.endpoint_host_id = dss1.endpoint_host_id AND ds1.endpoint_instance_id = dss1.endpoint_instance_id AND ds2.endpoint_noc_id = dss2.endpoint_noc_id AND ds2.endpoint_host_id = dss2.endpoint_host_id AND ds2.endpoint_instance_id = dss2.endpoint_instance_id AND dss1.endpoint_provisioning_id = dss2.endpoint_provisioning_id AND Least(1, ds1.total_actions) = 1 AND Least(1, ds2.total_actions) = 1 GROUP BY dss1.service, dss1.endpoint_provisioning_id, dss1.company_scope, Nvl(x.subscription_status, dss1.subscription_status); This query took about 26 minutes to return in my environment, but if I remove the section: dss1.last_active >= to_date('10-04-2012','dd-mm-yyyy') - 30 AND dss1.last_active <= to_date('10-04-2012','dd-mm-yyyy') AND dss2.last_active >= to_date('10-04-2012','dd-mm-yyyy') - 30 AND dss2.last_active <= to_date('10-04-2012','dd-mm-yyyy') AND it only took 20 seconds to run. We have an index on the column last_active; I don't know why this section slows down the performance so much. Any ideas?

    Read the article

  • PHP regex for password validation

    - by Fabio Anselmo
    I'm not getting the desired effect from a script. I want the password to contain A-Z, a-z, 0-9, and special chars: A-Z, a-z, at least 2 digits (0-9), at least 2 special chars, string length >= 8. So I want to force the user to use at least 2 digits and at least 2 special chars. OK, my script works, but it forces me to use the digits or special chars back to back. I don't want that. E.g. the password testABC55$$ is valid - but I don't want that. Instead I want test$ABC5#8 to be valid. So basically the digits/special chars can be the same or different - but they should be allowed to be split up in the string. PHP CODE: $uppercase = preg_match('#[A-Z]#', $password); $lowercase = preg_match('#[a-z]#', $password); $number = preg_match('#[0-9]#', $password); $special = preg_match('#[\W]{2,}#', $password); $length = strlen($password) >= 8; if(!$uppercase || !$lowercase || !$number || !$special || !$length) { $errorpw = 'Bad Password';
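
    The back-to-back requirement comes from the {2,} quantifier: [\W]{2,} only matches two special characters in a row, and the digit check only ever asks for one digit. Counting matches instead of requiring a run removes both problems. Here is the same set of checks sketched in Python purely to show the counting idea (the exact character classes are whatever your policy actually requires):

        import re

        def is_valid(password):
            return (
                len(password) >= 8
                and re.search(r"[A-Z]", password) is not None
                and re.search(r"[a-z]", password) is not None
                and len(re.findall(r"[0-9]", password)) >= 2   # at least 2 digits, anywhere
                and len(re.findall(r"\W", password)) >= 2      # at least 2 specials, anywhere
            )

        print(is_valid("test$ABC5#8"))   # True  - the digits and specials are split up
        print(is_valid("testABC55$$"))   # True  - adjacent still counts, it just isn't required
        print(is_valid("testABCD5$"))    # False - only one digit and one special

    In PHP the counting equivalent is preg_match_all, which returns the number of matches, in place of preg_match for the digit and special-character rules.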

    Read the article

  • Splitting a set of objects into several subsets of 'similar' objects

    - by doublep
    Suppose I have a set of objects, S. There is an algorithm f that, given a set S, builds a certain data structure D on it: f(S) = D. If S is large and/or contains vastly different objects, D becomes large, to the point of being unusable (i.e. not fitting in allotted memory). To overcome this, I split S into several non-intersecting subsets: S = S1 + S2 + ... + Sn and build Di for each subset. Using n structures is less efficient than using one, but at least this way I can fit into memory constraints. Since the size of f(S) grows faster than S itself, the combined size of the Di is much less than the size of D. However, it is still desirable to reduce n, i.e. the number of subsets, or to reduce the combined size of the Di. For this, I need to split S in such a way that each Si contains "similar" objects, because then f will produce a smaller output structure if the input objects are "similar enough" to each other. The problem is that while "similarity" of objects in S and the size of f(S) do correlate, there is no way to compute the latter other than just evaluating f(S), and f is not quite fast. The algorithm I currently have is to iteratively add each next object from S to one of the Si, so that this results in the least possible (at this stage) increase in the combined Di size: for x in S: i = such i that size(f(Si + {x})) - size(f(Si)) is min Si = Si + {x} This gives practically useful results, but certainly pretty far from the optimum (i.e. the minimal possible combined size). Also, this is slow. To speed things up somewhat, I compute size(f(Si + {x})) - size(f(Si)) only for those i where x is "similar enough" to the objects already in Si. Is there any standard approach to such kinds of problems? I know of the branch and bound family of algorithms, but it cannot be applied here because it would be prohibitively slow. My guess is that it is simply not possible to compute the optimal distribution of S into the Si in reasonable time. But is there some common iterative-improvement algorithm?
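
    For reference, the greedy loop described above looks like this when written out; f and a size function are passed in as callables, candidate subsets are only re-evaluated when the object passes the "similar enough" test, and the current sizes are cached so f is called once per candidate rather than twice:

        def greedy_partition(objects, f, size, similar_enough):
            subsets, cached_sizes = [], []
            for x in objects:
                best_i, best_increase = None, None
                for i, subset in enumerate(subsets):
                    if not similar_enough(x, subset):
                        continue                                  # skip candidates unlikely to help
                    increase = size(f(subset + [x])) - cached_sizes[i]
                    if best_increase is None or increase < best_increase:
                        best_i, best_increase = i, increase
                if best_i is None:                                # nothing similar enough: open a new subset
                    subsets.append([x])
                    cached_sizes.append(size(f([x])))
                else:
                    subsets[best_i].append(x)
                    cached_sizes[best_i] = size(f(subsets[best_i]))
            return subsets

    A common iterative-improvement step on top of this is local search: repeatedly pick an object, tentatively move it to the subset where it causes the smallest size increase, and keep the move only if the combined size actually drops.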

    Read the article

  • How can I use SQL to select duplicate records, along with counts of related items?

    - by mipadi
    I know the title of this question is a bit confusing, so bear with me. :) I have a (MySQL) database with a Person record. A Person also has a slug field. Unfortunately, slug fields are not unique. There are a number of duplicate records, i.e., the records have different IDs but the same first name, last name, and slug. A Person may also have 0 or more associated articles, blog entries, and podcast episodes. If that's confusing, here's a diagram of the structure: I would like to produce a list of records that match these criteria: duplicate records (i.e., same slug field) for people who also have at least 1 article, blog entry, or podcast episode. I have a SQL query that will list all records with the same slug fields: SELECT id, first_name, last_name, slug, COUNT(slug) AS person_records FROM people_person GROUP BY slug HAVING (COUNT(slug) > 1) ORDER BY last_name, first_name, id; But this includes records for people that may not have at least 1 article, blog entry, or podcast. Can I tweak this to fit the second criterion?

    Read the article

  • How do the major C# DI/IoC frameworks compare?

    - by Slomojo
    At the risk of stepping into holy war territory, what are the strengths and weaknesses of these popular DI/IoC frameworks, and could one easily be considered the best? Ninject, Unity, Castle.Windsor, Autofac, StructureMap. Are there any other DI/IoC frameworks for C# that I haven't listed here? In the context of my use case (I'm building a client WPF app and a WCF/SQL services infrastructure), ease of use (especially in terms of clear and concise syntax), consistent documentation, good community support and performance are all important factors in my choice. Update: The resources and duplicate questions cited appear to be out of date; can someone with knowledge of all these frameworks come forward and provide some real insight? I realise that most opinion on this subject is likely to be biased, but I am hoping that someone has taken the time to study all these frameworks and has at least a generally objective comparison. I am quite willing to make my own investigations if this hasn't been done before, but I assumed this was something at least a few people had done already. Second Update: If you do have experience with more than one DI/IoC container, please rank and summarise the pros and cons of those - thank you. This isn't an exercise in discovering all the obscure little containers that people have made; I'm looking for comparisons between the popular (and active) frameworks.

    Read the article

  • Java programming

    - by Baiba
    OK, I have this version of the code; please tell me what I need to do to get the program to output the index number of the column with the fewest zeros. class Uzd{ public static void main(String args[]){ int mas[][]= {{3,4,7,5,0}, {4,5,3,0,1}, {8,2,4,0,3}, {7,0,2,0,1}, {0,0,1,3,0}}; int nul_mas[] = new int[5]; int nul=0; for(int j=0;j<5;j++){ nul=0; for(int i=0;i<5;i++){ if(mas[i][j]==0){ nul++; } } nul_mas[j]=nul; } for(int i=0;i<5;i++){ for(int j=0;j<5;j++){ System.out.print(mas[i][j]); } System.out.println(); } System.out.println(); System.out.println("///zeros in each column///"); for(int i=0;i<5;i++){System.out.print(nul_mas[i]);} System.out.println(); }} After running it shows: 34750 45301 82403 70201 00130 ///zeros in each column/// 12032 But I don't need the count for each column; I need to get the index of the column in which there are the fewest zeros! In this situation it is column number 2!
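
    To go from the per-column counts to the answer you only need one more pass that remembers the index of the smallest count; the same loop translates line for line into Java. A quick sketch of the idea (zero-based index, matching the array):

        counts = [1, 2, 0, 3, 2]          # nul_mas as computed by the program above

        best = 0
        for j in range(1, len(counts)):
            if counts[j] < counts[best]:  # strict '<' keeps the first column on ties
                best = j

        print("Column with the fewest zeros:", best)   # prints 2 for this matrix

    In the Java program this would be one extra loop over nul_mas after the counts are filled in, followed by a single System.out.println of the resulting index.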

    Read the article

  • Automating Excel 2010 using F#

    - by Clive Norman
    I have been searching for a FAQ to tell me how to open an Excel Workbook/Worksheet and also how to save the file once I have finished. I notice that in most FAQs and all the books I have purchased on F#, one is shown how to create a new Workbook/Worksheet but never how to either open or save it. Being a newbie to F#, I would very much appreciate it if anyone could kindly provide me with either an answer or perhaps a few pointers? Update As for why F# and not C# or VB? I am pleased to say that in spite of being a newbie (with the exception of Forth, VBA & Excel 2003, 2007 & 2010 and Visual Basic) I can do this in both VB, VBA & C#, and since I've been retired on medical grounds, with plenty of time unfortunately on my hands, I like to continually set myself challenges to keep my little grey cells active, and being a sucker for trying new languages....well! F# is now an integral part of Visual Studio 2010, so I thought - why not. Consider this - if we are not willing to use or at least try new languages - I would always wonder if I might have preferred it to VBA, VB, C# ..... and if you look at it from another point of view, if no one is going to use it - why create it in the first place? I suppose you can say if cave men hadn't experimented and made fire by rubbing two sticks together - where would we be now and would matches have been invented? Although a complete answer would be good, I prefer a few pointers, to keep my challenge going. And last but not least - thank you for taking the trouble to respond!

    Read the article

  • jQuery $.ajax calls success handler when request fails because of browser reloading

    - by Martin
    I have the following code: $.ajax({ type: "POST", url: url, data: sendable, dataType: "json", success: function(data) { if(customprocessfunc) customprocessfunc(data); }, error: function(XMLHttpRequest, textStatus, errorThrown){ // error handler here } }); I have a timer which makes AJAX requests often. If I do not receive anything in 'data', I show an error message to the user - it means something went wrong on the server. The problem is when the user reloads the page while the AJAX call is in progress. I can see in Firebug that the AJAX call fails (the URL is colored red and no HTTP status is displayed), so I expect that jQuery will stop the request or at least go to the error handler. But it goes to the success handler and passes null in the 'data' variable. As a result, when the user reloads the page, sometimes he can see my big red message about an unknown error (because data is null). Is there any way to make jQuery abort the request on page reload, or at least not call my success function? I have no way to know in the success handler why the data is null - did it come back empty from the server, or was the call aborted because of the reload?

    Read the article

  • To implement a remote desktop sharing solution

    - by Cameigons
    Hi, I'm in the planning/modeling phase of developing a remote desktop sharing solution, which must be web browser based. In other words: a user will be able to see and interact with someone's remote desktop using his web browser. Everything the user who wants to share his desktop will need, besides his browser, is to install an add-in, which he's going to be prompted about when necessary. The add-in installation process must be as simple and transparent as possible to the user (similar to AdobeConnectNow, in case anyone's acquainted with it). The add-in is required since (AFAIK) no browser technology allows desktop control from an app running within the browser alone. The user can share his desktop with lots of people at the same time, but concede desktop control to only one of them at a time (makes no sense being otherwise). Project requirements: All technology employed must be open-source license compatible. Both front ends are going to be in Flash (browser). Must work on Linux, Windows XP (and later) and Mac OS X. Must work at least with IE7 (and later) and Firefox 3.0 (and later). At the very least, once the sharer's stream hits the server from where it'll be broadcast, from then on it must be broadcast in FLV (so I'm wondering whether to do the encoding on the client's machine (the one sharing the desktop) or send it in some other format to the server and encode it there). Performance and scalability are important: it must be able to handle hundreds of dozens of users (one desktop sharer, the rest viewers). We'll definitely be using Red5. My doubts mostly concern implementing the desktop publisher side (add-in and streamer): 1) Are you aware of other projects that I could look into for ideas? (I'm aware of bigbluebutton.org and code.google.com/p/openmeetings) 2) Should I base it on VNC? 3) Bearing in mind the need to have it working cross-platform, what language should I go with? (My team is very used to Java and I have some knowledge of C/C++, but anything goes really.) 4) Any other advice is appreciated.

    Read the article

  • Approach for caching data from data logger

    - by filip-fku
    Greetings, I've been working on a C#.NET app that interacts with a data logger. The user can query and obtain logs for a specified time period, and view plots of the data. Typically a new data log is created every minute and stores a measurement for a few parameters. To get meaningful information out of the logger, a reasonable number of logs need to be acquired - data for at least a few days. The hardware interface is a UART to USB module on the device, which restricts transfers to a maximum of about 30 logs/second. This becomes quite slow when reading in the data acquired over a number of days/weeks. What I would like to do is improve the perceived performance for the user. I realize that with the hardware speed limitation the user will have to wait for the full download cycle at least the first time they acquire a larger set of data. My goal is to cache all data seen by the app, so that it can be obtained faster if ever requested again. The approach I have been considering is to use a light database, like SqlServerCe, that can store the data logs as they are received. I am then hoping to first search the cache prior to querying a device for logs. The cache would be updated with any logs obtained by the request that were not already cached. Finally my question - would you consider this to be a good approach? Are there any better alternatives you can think of? I've tried to search SO and Google for reinforcement of the idea, but I mostly run into discussions of web request/content caching. Thanks for any feedback!
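
    The plan sounds reasonable - it is essentially a read-through cache keyed by timestamp, and the part worth getting right is the "which logs are missing" step, so a request only hits the slow UART link for the gap rather than re-downloading the whole range. A rough sketch of that flow, using SQLite purely to illustrate (the table layout and the read_logs_from_device call are made up; SQL Server CE would play the same role):

        import sqlite3

        def open_cache(path):
            db = sqlite3.connect(path)
            db.execute("CREATE TABLE IF NOT EXISTS logs (ts INTEGER PRIMARY KEY, value REAL)")
            return db

        def get_logs(db, read_logs_from_device, start_ts, end_ts):
            # 1. Fetch only what the cache does not have yet (this assumes the cache is
            #    contiguous up to its newest timestamp - good enough for a sketch).
            newest = db.execute("SELECT MAX(ts) FROM logs").fetchone()[0]
            fetch_from = start_ts if newest is None else max(start_ts, newest + 1)
            if fetch_from <= end_ts:
                fresh = read_logs_from_device(fetch_from, end_ts)   # the slow device path
                db.executemany("INSERT OR IGNORE INTO logs (ts, value) VALUES (?, ?)", fresh)
                db.commit()

            # 2. Serve the whole requested window from the cache.
            return db.execute(
                "SELECT ts, value FROM logs WHERE ts BETWEEN ? AND ? ORDER BY ts",
                (start_ts, end_ts)).fetchall()

    Tracking gaps explicitly (a small table of covered time ranges) would also handle the case where older history was never downloaded, but the overall shape stays the same.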

    Read the article

  • Overriding form submit based on counting elements with jquery.each

    - by MrGrigg
    I am probably going about this all wrong, but here's what I'm trying to do: I have a form that has approximately 50 select boxes that are generated dynamically based on some database info. I do not have control over the IDs of the text boxes, but I can add a class to each of them. Before the form is submitted, the user needs to select at least one item from the select box, but no more than four. I'm a little bit sleepy, and I'm unfamiliar with jQuery overall, but I'm trying to override $("form").submit, and here's what I'm doing. Any advice or suggestions are greatly appreciated. $("form").submit(function() { $('.sportsCoachedValidation').each(function() { if ($('.sportsCoachedValidation :selected').text() != 'N/A') { sportsSelected++ } }); if (sportsSelected >= 1 && sportsSelected <= 4) { return true; } else if (sportsSelected > 4) { alert('You can only coach up to four sports.'); sportsSelected = 0; return false; } else { alert('Please select at least one coached sport.'); sportsSelected = 0; return false; } });

    Read the article

  • How to programmatically create a node in Drupal 8?

    - by chapka
    I'm designing a new module in Drupal 8. It's a long-term project that won't be going public for a few months at least, so I'm using it as a way to figure out what's new. In this module, I want to be able to programmatically create nodes. In Drupal 7, I would do this by creating the object, then calling "node_submit" and "node_save". These functions no longer exist in Drupal 8. Instead, according to the documentation, "Modules and scripts may programmatically submit nodes using the usual form API pattern." I'm at a loss. What does this mean? I've used Form API to create forms in Drupal 7, but I don't get what the docs are saying here. What I'm looking to do is programmatically create at least one and possibly multiple new nodes, based on information not taken directly from a user-presented form. I need to be able to: 1) Specify the content type 2) Specify the URL path 3) Set any other necessary variables that would previously have been handled by the now-obsolete node_object_prepare() 4) Commit the new node object I would prefer to be able to do this in an independent, highly abstracted function not tied to a specific block or form. So what am I missing?

    Read the article

  • Do the 'up to date' guarantees provided by final field in Java's memory model extend to indirect ref

    - by mattbh
    The Java language spec defines semantics of final fields in section 17.5: The usage model for final fields is a simple one. Set the final fields for an object in that object's constructor. Do not write a reference to the object being constructed in a place where another thread can see it before the object's constructor is finished. If this is followed, then when the object is seen by another thread, that thread will always see the correctly constructed version of that object's final fields. It will also see versions of any object or array referenced by those final fields that are at least as up-to-date as the final fields are. My question is - does the 'up-to-date' guarantee extend to the contents of nested arrays, and nested objects? An example scenario: Thread A constructs a HashMap of ArrayLists, then assigns the HashMap to final field 'myFinal' in an instance of class 'MyClass' Thread B sees a (non-synchronized) reference to the MyClass instance and reads 'myFinal', and accesses and reads the contents of one of the ArrayLists In this scenario, are the members of the ArrayList as seen by Thread B guaranteed to be at least as up to date as they were when MyClass's constructor completed?

    Read the article

  • Design for fastest page download

    - by mexxican
    I have a file with millions of URLs/IPs and have to write a program to download the pages really fast. The connection rate should be at least 6000/s and the file download rate at least 2000/s, with an average file size of 15 KB. The network bandwidth is 1 Gbps. My approach so far has been: Creating 600 socket threads, each having 60 sockets and using WSAEventSelect to wait for data to read. As soon as a file download is complete, add that memory address (of the downloaded file) to a pipeline (a simple vector) and fire another request. When the total download is more than 50 MB among all socket threads, write all the files downloaded to the disk and free the memory. So far, this approach has not been very successful: the rate I could hit never shot beyond 2900 connections/s, and the downloaded data rate was even less. Can somebody suggest an alternative approach which could give me better stats? Also, I am working on a Windows Server 2008 machine with 8 GB of memory. Also, do we need to hack the kernel so that we can use more threads and memory? Currently I can create a max. of 1500 threads, and memory usage doesn't go beyond 2 GB [which technically should be much more, as this is a 64-bit machine]. And IOCP is out of the question as I have no experience in it so far and have to fix this application today. Thanks guys!

    Read the article

  • Reading and writing in parallel

    - by Malfist
    I want to be able to read and write a large file in parallel, or if not in parallel, at least in blocks so that I don't use up so much memory. This is my current code: // Define memory stream which will be used to hold encrypted data. MemoryStream memoryStream = new MemoryStream(); // Define cryptographic stream (always use Write mode for encryption). CryptoStream cryptoStream = new CryptoStream(memoryStream, encryptor, CryptoStreamMode.Write); //start encrypting using (BinaryReader reader = new BinaryReader(File.Open(fileIn, FileMode.Open))) { byte[] buffer = new byte[1024 * 1024]; int read = 0; do { read = reader.Read(buffer, 0, buffer.Length); cryptoStream.Write(buffer, 0, read); } while (read == buffer.Length); } // Finish encrypting. cryptoStream.FlushFinalBlock(); // Convert our encrypted data from a memory stream into a byte array. //byte[] cipherTextBytes = memoryStream.ToArray(); //write our memory stream to a file memoryStream.Position = 0; using (BinaryWriter writer = new BinaryWriter(File.Open(fileOut, FileMode.Create))) { byte[] buffer = new byte[1024 * 1024]; int read = 0; do { read = memoryStream.Read(buffer, 0, buffer.Length); writer.Write(buffer, 0, read); } while (read == buffer.Length); } // Close both streams. memoryStream.Close(); cryptoStream.Close(); As you can see, it reads the entire file into memory, encrypts it, then writes it out. If I happen to be encrypting files that are very large (2GB+) it tends not to work, or at the very least, consumes ~97% of my memory. How could I do it in a more effective manner?
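
    The memory pressure comes from routing everything through the MemoryStream: the entire ciphertext sits in RAM before any of it reaches the disk. The usual fix in .NET is to open the output FileStream first and build the CryptoStream on top of it, so each block read from the input file is encrypted and written straight out. Structurally it is the same chunked copy sketched below (Python used only to show the shape of the loop; encrypt_block stands in for whatever transform the stream applies):

        CHUNK = 1024 * 1024   # 1 MiB per block, so memory use stays flat regardless of file size

        def process_in_blocks(src_path, dst_path, encrypt_block):
            with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
                while True:
                    block = src.read(CHUNK)
                    if not block:
                        break
                    dst.write(encrypt_block(block))   # each block goes straight to disk

    Reading and writing can also be overlapped (one thread filling a queue of blocks, another draining it), but chunking alone is what removes the 2 GB+ memory footprint.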

    Read the article

  • Filtering string in Python

    - by Ecce_Homo
    I am making an algorithm for checking a string (an e-mail address) - it should report something like "E-mail address is valid" - but there are rules. The first part of the e-mail has to be a string of 1-8 characters (it can contain letters, numbers, underscore [ _ ]... all the parts that an e-mail contains), and after the @ the second part of the e-mail has to be a string of 1-12 characters (also containing only legal characters), and it has to end with the top-level domain .com. EDIT email = raw_input ("Enter the e-mail address:") length = len (email) if length > 20: print "Address is too long" elif length < 5: print "Address is too short" if not email.endswith (".com"): print "Address doesn't contain correct domain ending" first_part = len (splitting[0]) second_part = len(splitting[1]) account = splitting[0] domain = splitting[1] for c in account: if c not in "abcdefghijklmopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_.": print "Invalid char", "->", c,"<-", "in account name of e-mail" for c in domain: if c not in "abcdefghijklmopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_.": print "Invalid char", "->", c,"<-", "in domain of e-mail" if first_part == 0: print "You need at least 1 character before the @" elif first_part> 8: print "The first part is too long" if second_part == 4: print "You need at least 1 character after the @" elif second_part> 16: print "The second part is too long" else: # if everything is fine return this print "E-mail address is valid" EDIT: After reporting what is wrong with the input, I now need to make Python recognize a valid address and return "E-mail address is valid". This is the best I can do with my knowledge... and we can't use regular expressions; the teacher said we are going to learn them later.
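
    Two things worth noticing in the code above: splitting is never defined (it was presumably meant to be email.split("@")), and the allowed-character string is missing the letter n. A sketch of how the checks could fit together without regular expressions, under those assumptions and with the length limits read as 1-8 characters before the @ and 1-12 before .com:

        ALLOWED = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_."

        def check_email(email):
            if email.count("@") != 1:
                return ["Address must contain exactly one @"]
            account, domain = email.split("@")
            errors = []
            if not 1 <= len(account) <= 8:
                errors.append("The first part must be 1-8 characters")
            if not domain.endswith(".com"):
                errors.append("Address doesn't contain correct domain ending")
            elif not 1 <= len(domain) - len(".com") <= 12:
                errors.append("The second part must be 1-12 characters before .com")
            for c in account + domain:
                if c not in ALLOWED:
                    errors.append("Invalid char -> %s <-" % c)
            return errors or ["E-mail address is valid"]

        print(check_email("abc_def1@test.com"))          # ['E-mail address is valid']
        print(check_email("this_is_too_long@x#y.com"))   # reports the length problem and the bad char

    Collecting the problems in a list and returning the success message only when the list is empty also avoids the bug in the original, where "E-mail address is valid" hangs off the else of the very last check and can therefore be printed even if earlier checks failed.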

    Read the article
