Search Results

Search found 24266 results on 971 pages for 'api design'.

  • Codechef practice question help needed - find trailing zeros in a factorial

    - by manugupt1
    I have been working on this for 24 hours now, trying to optimize it. The question is how to find the number of trailing zeroes in the factorial of a number, where the numbers can be up to 10000000 and there are up to 10 million test cases, in about 8 secs. The code is as follows:

        #include <iostream>
        using namespace std;

        int count5(int a) {
            int b = 0;
            for (int i = a; i > 0; i = i / 5) {
                if (i % 15625 == 0) { b = b + 6; i = i / 15625; }
                if (i % 3125 == 0)  { b = b + 5; i = i / 3125; }
                if (i % 625 == 0)   { b = b + 4; i = i / 625; }
                if (i % 125 == 0)   { b = b + 3; i = i / 125; }
                if (i % 25 == 0)    { b = b + 2; i = i / 25; }
                if (i % 5 == 0)     { b++; }
                else break;
            }
            return b;
        }

        int main() {
            int l;
            int n = 0;
            cin >> l;                     // number of test cases taken as input
            int *T = new int[l];
            for (int i = 0; i < l; i++)
                cin >> T[i];              // one number per test case
            for (int i = 0; i < l; i++) {
                n = 0;
                for (int j = 5; j <= T[i]; j = j + 5) {
                    n += count5(j);       // count the factors of 5 in each multiple of 5
                }
                cout << n << endl;        // trailing zeroes printed for this case
            }
            delete[] T;
        }

    Please help me by suggesting a new approach, or suggesting some modifications to this one.
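
    For context, the standard way to count the trailing zeroes of n! is Legendre's formula: count the multiples of 5, 25, 125, ... up to n, which answers each query in O(log n) instead of looping over every multiple of 5. A minimal sketch in Python; the same few lines port directly to C++, and together with fast input reading this is the usual route to fitting 10 million queries into the time limit:

        def trailing_zeros(n):
            """Trailing zeros of n!: total factors of 5 among 1..n."""
            count, p = 0, 5
            while p <= n:
                count += n // p   # multiples of 5, then 25, then 125, ...
                p *= 5
            return count

        assert trailing_zeros(100) == 24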

  • GRAPH PROBLEM: find an algorithm to determine the shortest path from one point to another in a rectangular maze

    - by newba
    I'm getting such a headache trying to elaborate an appropriate algorithm to go from a START position to an EXIT position in a maze. For what it's worth, the maze is rectangular, max size 500x500 and, in theory, is resolvable by DFS with some branch and bound techniques...

        10
        3 4
        7 6
        3 3 1 2 2 1 0
        2 2 2 4 2 2 5
        2 2 1 3 0 2 2
        2 2 1 3 3 4 2
        3 4 4 3 1 1 3
        1 2 2 4 2 2 1

        Output: 5 1 4 2

    Explanation: our agent loses energy every time he takes a step, and he can only move UP, DOWN, LEFT and RIGHT. Also, if the agent arrives with a remaining energy of zero or less, he dies, so we print something like "Impossible". So, in the input, 10 is the agent's initial energy, 3 4 is the START position (i.e. column 3, line 4) and we have a 7x6 maze. Think of this as a kind of labyrinth in which I want to find the exit that leaves the agent with the best remaining energy (shortest path). In case there are paths which lead to the same remaining energy, we choose the one which has the smaller number of steps, of course. I need to know if a DFS on a 500x500 maze in the worst case is feasible with these limitations, and how to do it, storing the remaining energy at each step and the number of steps taken so far. The output means the agent arrived with remaining energy = 5 at the exit position 1 4 in 2 steps. If we look carefully, in this maze it's also possible to exit at position 3 1 (column 3, row 1) with the same energy but in 3 steps, so we choose the better one. With this in mind, can someone help me with some code or pseudo-code? I have trouble working this out with a 2D array and how to store the remaining energy, the path (or number of steps taken)...
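
    Since every step has a non-negative cost, this is really a shortest-path problem rather than one that needs plain DFS, and Dijkstra over the grid cells is comfortably feasible at 500x500 (about 250,000 nodes): compute the minimum energy spent to reach every cell, tie-breaking on steps, then pick the best border cell. A rough sketch in Python, assuming each visited cell, including the start, costs its value; that assumption reproduces the sample output (exit 1 4, energy 5, 2 steps):

        import heapq

        def best_exit(energy, start, grid):
            """Best exit as (remaining_energy, (col, row), steps), or None
            ("Impossible") if no border cell is reachable alive. Assumes every
            visited cell, the start included, costs its value, and that any
            border cell is an exit -- both read off the sample above."""
            rows, cols = len(grid), len(grid[0])
            sc, sr = start                       # input gives (column, row), 1-based
            sr, sc = sr - 1, sc - 1
            INF = (1 << 30, 1 << 30)
            best = {}                            # (row, col) -> (cost, steps)
            pq = [(grid[sr][sc], 0, sr, sc)]     # (energy spent, steps, row, col)
            while pq:
                cost, steps, r, c = heapq.heappop(pq)
                if best.get((r, c), INF) <= (cost, steps):
                    continue                     # already settled with a better pair
                best[(r, c)] = (cost, steps)
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols:
                        heapq.heappush(pq, (cost + grid[nr][nc], steps + 1, nr, nc))
            exits = [(cost, steps, r, c) for (r, c), (cost, steps) in best.items()
                     if r in (0, rows - 1) or c in (0, cols - 1)]
            if not exits:
                return None
            cost, steps, r, c = min(exits)       # min cost first, then fewest steps
            left = energy - cost
            return (left, (c + 1, r + 1), steps) if left > 0 else None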

  • Is O_NONBLOCK being set a property of the file descriptor or underlying file?

    - by Daniel Trebbien
    From what I have been reading on The Open Group website on fcntl, open, read, and write, I get the impression that whether O_NONBLOCK is set on a file descriptor, and hence whether non-blocking I/O is used with the descriptor, should be a property of that file descriptor rather than the underlying file. Being a property of the file descriptor means, for example, that if I duplicate a file descriptor or open another descriptor to the same file, then I can use blocking I/O with one and non-blocking I/O with the other. Experimenting with a FIFO, however, it appears that it is not possible to have a blocking I/O descriptor and non-blocking I/O descriptor to the FIFO simultaneously (so whether O_NONBLOCK is set is a property of the underlying file [the FIFO]):

        #include <errno.h>
        #include <fcntl.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>

        int main(int argc, char **argv)
        {
            int fds[2];
            if (pipe(fds) == -1) {
                fprintf(stderr, "`pipe` failed.\n");
                return EXIT_FAILURE;
            }

            int fd0_dup = dup(fds[0]);
            if (fd0_dup <= STDERR_FILENO) {
                fprintf(stderr, "Failed to duplicate the read end\n");
                return EXIT_FAILURE;
            }
            if (fds[0] == fd0_dup) {
                fprintf(stderr, "`fds[0]` should not equal `fd0_dup`.\n");
                return EXIT_FAILURE;
            }

            if ((fcntl(fds[0], F_GETFL) & O_NONBLOCK)) {
                fprintf(stderr, "`fds[0]` should not have `O_NONBLOCK` set.\n");
                return EXIT_FAILURE;
            }
            if (fcntl(fd0_dup, F_SETFL, fcntl(fd0_dup, F_GETFL) | O_NONBLOCK) == -1) {
                fprintf(stderr, "Failed to set `O_NONBLOCK` on `fd0_dup`\n");
                return EXIT_FAILURE;
            }
            if ((fcntl(fds[0], F_GETFL) & O_NONBLOCK)) {
                fprintf(stderr, "`fds[0]` should still have `O_NONBLOCK` unset.\n");
                return EXIT_FAILURE; // RETURNS HERE
            }

            char buf[1];
            if (read(fd0_dup, buf, 1) != -1) {
                fprintf(stderr, "Expected `read` on `fd0_dup` to fail immediately\n");
                return EXIT_FAILURE;
            } else if (errno != EAGAIN) {
                fprintf(stderr, "Expected `errno` to be `EAGAIN`\n");
                return EXIT_FAILURE;
            }
            return EXIT_SUCCESS;
        }

    This leaves me thinking: is it ever possible to have a non-blocking I/O descriptor and blocking I/O descriptor to the same file, and if so, does it depend on the type of file (regular file, FIFO, block special file, character special file, socket, etc.)?
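
    For what it's worth, the experiment is quick to reproduce with Python's fcntl module, and it shows the same behavior: a flag set through one descriptor of a dup'ed pair is visible through the other, which is consistent with POSIX putting the file status flags on the shared open file description rather than on each descriptor number. A separate open() of the same named FIFO, by contrast, creates a fresh open file description with its own status flags, which is how a blocking and a non-blocking reader can coexist on the same file. A minimal sketch:

        import fcntl, os

        r, w = os.pipe()
        r_dup = os.dup(r)              # shares r's open file description

        flags = fcntl.fcntl(r_dup, fcntl.F_GETFL)
        fcntl.fcntl(r_dup, fcntl.F_SETFL, flags | os.O_NONBLOCK)

        # True: F_SETFL acted on the open file description both descriptors share
        print(bool(fcntl.fcntl(r, fcntl.F_GETFL) & os.O_NONBLOCK))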

  • Aren't Information Expert / Tell Don't Ask at odds with Single Responsibility Principle?

    - by moffdub
    It is probably just me, which is why I'm asking the question. Information Expert, Tell Don't Ask, and SRP are often mentioned together as best practices. But I think they are at odds. Here is what I'm talking about.

    Code that favors SRP but violates Tell Don't Ask / Info Expert:

        Customer bob = ...;
        // TransferObjectFactory has to use Customer's accessors to do its work,
        // violates Tell Don't Ask
        CustomerDTO dto = TransferObjectFactory.createFrom(bob);

    Code that favors Tell Don't Ask / Info Expert but violates SRP:

        Customer bob = ...;
        // Now Customer is doing more than just representing the domain concept of Customer,
        // violates SRP
        CustomerDTO dto = bob.toDTO();

    If they are indeed at odds, that's a vindication of my OCD. Otherwise, please fill me in on how these practices can co-exist peacefully. Thank you.

    Edit: someone wants a definition of the terms:
    Information Expert: objects that have the data needed for an operation should host the operation.
    Tell Don't Ask: don't ask objects for data in order to do work; tell the objects to do the work.
    Single Responsibility Principle: each object should have a narrowly defined responsibility.

  • Flex: Push the Button

    - by Rachel
    For what real-time scenarios or use cases should one choose the Flex technology? What real-time problems have you solved using Flex? What real-time problems have you faced because of using Flex, and what was your workaround for that use case?

  • Perl vs Python: implementation of algorithms to deal with advanced data structures

    - by user350571
    I'm learning Perl, and every time I search for Perl stuff on the internet I get some random page with people saying that Perl should die because code written in it looks like a lesson in steganography. Then they say that Python is clean and stuff like that. Now, I know that those comparisons are always stupid and made by fellows who feel that languages are an extension of their boring personality, so let me ask instead: can you give me the implementation of a widely known algorithm to deal with a data structure like red-black trees in both languages so I can compare?
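
    To make the comparison concrete on the Python side, here is a rough sketch of insertion into a left-leaning red-black tree (Sedgewick's variant, which reduces the rebalancing to three cases); a Perl version would be a near line-for-line transliteration using a small node class or hashes:

        RED, BLACK = True, False

        class Node:
            def __init__(self, key):
                self.key, self.left, self.right, self.color = key, None, None, RED

        def is_red(h):
            return h is not None and h.color == RED

        def rotate_left(h):
            x = h.right
            h.right, x.left = x.left, h
            x.color, h.color = h.color, RED
            return x

        def rotate_right(h):
            x = h.left
            h.left, x.right = x.right, h
            x.color, h.color = h.color, RED
            return x

        def insert(h, key):
            if h is None:
                return Node(key)                  # new nodes start out red
            if key < h.key:
                h.left = insert(h.left, key)
            elif key > h.key:
                h.right = insert(h.right, key)
            # restore the left-leaning invariants on the way back up
            if is_red(h.right) and not is_red(h.left):
                h = rotate_left(h)
            if is_red(h.left) and is_red(h.left.left):
                h = rotate_right(h)
            if is_red(h.left) and is_red(h.right):
                h.color, h.left.color, h.right.color = RED, BLACK, BLACK
            return h

        root = None
        for k in [5, 2, 8, 1, 9, 3]:
            root = insert(root, k)
            root.color = BLACK                    # the root is always black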

  • How do we greatly optimize our MySQL database (or replace it) when using joins?

    - by jkaz
    Hi there, this is the first time I'm approaching an extremely high-volume situation. This is an ad server based on MySQL. However, the query that is used incorporates a lot of JOINs and is generally just slow. (This is Rails ActiveRecord, btw.)

        sel = Ads.find(:all,
          :select => '*',
          :joins => "JOIN campaigns ON ads.campaign_id = campaigns.id
                     JOIN users ON campaigns.user_id = users.id
                     LEFT JOIN countries ON countries.campaign_id = campaigns.id
                     LEFT JOIN keywords ON keywords.campaign_id = campaigns.id",
          :conditions => [flashstr + "keywords.word = ? AND ads.format = ? AND
                          campaigns.cenabled = 1 AND (countries.country IS NULL OR
                          countries.country = ?) AND ads.enabled = 1 AND
                          campaigns.dailyenabled = 1 AND users.uenabled = 1",
                          kw, format, viewer['country'][0]],
          :order => order,
          :limit => limit)

    My questions: Is there an alternative database to MySQL that has JOIN support but is much faster? (I know there's Postgres; still evaluating it.) Otherwise, would firing up a MySQL instance, loading a local database into memory, and re-loading that every 5 minutes help? Otherwise, is there any way I could switch this entire operation to Redis or Cassandra, and somehow change the JOIN behavior to match the (non-JOIN-able) nature of NoSQL? Thank you!
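
    On the "load into memory and re-load every 5 minutes" option: one lightweight variant is to keep the join results in process memory behind a small TTL cache keyed by the lookup parameters, so the database sees at most one heavy JOIN per key per window. A hedged sketch of the idea, where run_ad_query and the key shape are hypothetical stand-ins:

        import time

        class TTLCache:
            """Cache query results for ttl seconds; one slow query per key per window."""
            def __init__(self, ttl=300):
                self.ttl, self.store = ttl, {}

            def get(self, key, fetch):
                hit = self.store.get(key)
                if hit and time.time() - hit[0] < self.ttl:
                    return hit[1]
                value = fetch()                    # run the expensive JOIN once
                self.store[key] = (time.time(), value)
                return value

        ads_cache = TTLCache(ttl=300)
        # ads = ads_cache.get((kw, format, country),
        #                     lambda: run_ad_query(kw, format, country))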

  • Testing system where App-level and Request-level IoC containers exist

    - by Bobby
    My team is in the process of developing a system where we're using Unity as our IoC container, and to provide NHibernate ISessions (units of work) over each HTTP request, we're using Unity's ChildContainer feature to create a child container for each request and sticking the ISession in there. We arrived at this approach after trying others (including defining per-request lifetimes in the container, but there are issues there) and are now trying to decide on a unit testing strategy.

    Right now, the application-level container itself lives in the HttpApplication, and the request container lives in HttpContext.Current. Obviously, neither exists during testing. The pain increases because we decided to use service location from our domain layer to "lazily" resolve dependencies from the container, so now we have more components wanting to talk to the container. We are also using MSTest, which presents some concurrency dilemmas during testing as well.

    So we're wondering: what do the bright folks out there in the SO community do to tackle this predicament? How does one set up an application that, during "real" runtime, relies on HTTP objects to hold the containers, but during testing has the flexibility to build up and tear down the containers consistently, and have the service location bits get to those precise containers? I hope the question is clear. Thanks!
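
    One shape that tends to make this testable is to hide "where the containers live" behind a provider abstraction: the production implementation reads HttpApplication and HttpContext.Current, while tests install an implementation backed by plain fields that setup/teardown can build and dispose deterministically, and the service-location calls go through the provider rather than the HTTP objects. A minimal sketch of the idea, with Python standing in for the C# and every name hypothetical:

        class ContainerProvider(object):
            """The one place that knows where the containers are stored."""
            def app_container(self):     raise NotImplementedError
            def request_container(self): raise NotImplementedError

        class TestContainerProvider(ContainerProvider):
            """Tests own the container lifetimes explicitly; no HTTP objects needed."""
            def __init__(self, app, request):
                self._app, self._request = app, request
            def app_container(self):     return self._app
            def request_container(self): return self._request

        current_provider = None   # set once at startup, or per-test in setup/teardown

        def resolve(service_type):
            # domain-layer service location goes through the provider,
            # never through HttpContext directly
            return current_provider.request_container().resolve(service_type)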

  • Criteria SpatialRestrictions.IsWithinDistance NHibernate.Spatial

    - by idjones82
    Has anyone implemented this, or know if it would be difficult to implement this/have any pointers?

        public static SpatialRelationCriterion IsWithinDistance(string propertyName, object anotherGeometry, double distance)
        {
            // TODO: Implement
            throw new NotImplementedException();
        }

    from NHibernate.Spatial.Criterion.SpatialRestrictions. I can use "where NHSP.Distance(PROPERTY, :point)" in HQL, but I want to combine this query with my existing Criteria query. For the moment I'm creating a rough polygon and using:

        criteria.Add(SpatialRestrictions.Intersects("PROPERTY", myPolygon));

  • Circular Dependency Solution

    - by gfoley
    Our current project has run into a circular dependency issue. Our business logic assembly is using classes and static methods from our SharedLibrary assembly. The SharedLibrary contains a whole bunch of helper functions, such as a SQL reader class, enumerators, global variables, error handling, logging and validation. The SharedLibrary needs access to the business objects, but the business objects need access to SharedLibrary. The old developers solved this obvious code smell by replicating the functionality of the business objects in the shared library (very anti-DRY). I've spent a day now trying to read about my options to solve this, but I'm hitting a dead end. I'm open to the idea of an architecture redesign, but only as a last resort. So how can I have a shared helper library which can access the business objects, with the business objects still accessing the shared helper library?
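
    The textbook way out of this, short of a full redesign, is dependency inversion: the shared library stops referencing the business assembly and instead declares the small interface it needs from the things it logs or validates, the business objects implement that interface, and only the application's composition root references both assemblies. A minimal sketch of the shape, with Python standing in for C# and all names hypothetical:

        from abc import ABC, abstractmethod

        # --- SharedLibrary: depends only on its own abstraction ---
        class Auditable(ABC):
            @abstractmethod
            def audit_summary(self) -> str: ...

        def log_error(source: Auditable, message: str):
            # the helper works against the interface, never a concrete business type
            print(f"[{source.audit_summary()}] {message}")

        # --- BusinessLogic: references SharedLibrary (one direction only) ---
        class Customer(Auditable):
            def __init__(self, name):
                self.name = name
            def audit_summary(self) -> str:
                return f"Customer:{self.name}"

        log_error(Customer("ACME"), "credit check failed")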

  • Teradata equivalent of persisted computed column (in SQL Server)

    - by Cade Roux
    We have a few tables with persisted computed columns in SQL Server. Is there an equivalent of this in Teradata? And if so, what is the syntax, and are there any limitations? The particular computed columns I am looking at conform some account numbers by removing leading zeros; an index is also created on this conformed account number:

        ACCT_NUM_std AS ISNULL(
            CONVERT(varchar(39),
                SUBSTRING(LTRIM(RTRIM([ACCT_NUM])),
                    PATINDEX('%[^0]%', LTRIM(RTRIM([ACCT_NUM])) + '.'),
                    LEN(LTRIM(RTRIM([ACCT_NUM])))
                )
            ), ''
        ) PERSISTED

    With the Teradata TRIM function, the trimming part would be a little simpler:

        ACCT_NUM_std AS COALESCE(
            CAST(TRIM(LEADING '0' FROM TRIM(BOTH FROM ACCT_NUM)) AS varchar(39)),
            ''
        )

    I guess I could just make this a normal column and put the code to standardize the account numbers in all the processes which insert into the table. We used the computed column to keep the standardization code in one place.

  • O(log n) algorithm for computing rank of union of two sorted lists?

    - by Eternal Learner
    Given two sorted lists, each containing n real numbers, is there an O(log n) time algorithm to compute the element of rank i (where i corresponds to the index in increasing order) in the union of the two lists, assuming the elements of the two lists are distinct? I can think of using a Merge procedure to merge the 2 lists and then find the A[i] element in constant time, but the Merge would take O(n) time. How do we solve it in O(log n) time?
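
    This is the classic "k-th smallest of two sorted arrays" selection problem, and the usual trick is to binary-search both lists at once: compare the two middle elements and, depending on how many elements precede them combined, discard half of one list each round, so the problem shrinks geometrically. A sketch in Python (1-indexed rank; the slices copy, so an index-based version is needed for the true O(log n) bound):

        def kth(a, b, k):
            """k-th smallest (1-indexed) of the union of sorted lists a and b."""
            if not a:
                return b[k - 1]
            if not b:
                return a[k - 1]
            ia, ib = len(a) // 2, len(b) // 2
            ma, mb = a[ia], b[ib]
            if ia + ib < k - 1:
                # fewer than k-1 elements precede the two middles, so the smaller
                # middle and everything before it sit strictly below rank k
                if ma > mb:
                    return kth(a, b[ib + 1:], k - ib - 1)
                else:
                    return kth(a[ia + 1:], b, k - ia - 1)
            else:
                # at least k-1 elements sit at or before the two middles, so the
                # larger middle cannot be the k-th: drop it and its right half
                if ma > mb:
                    return kth(a[:ia], b, k)
                else:
                    return kth(a, b[:ib], k)

        assert kth([1, 3, 5], [2, 4, 6], 4) == 4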

  • GORM ID generation and belongsTo association ?

    - by fabien-barbier
    I have two domains:

        class CodeSetDetail {
            String id
            String codeSummaryId

            static hasMany = [codes: CodeSummary]

            static constraints = {
                id(unique: true, blank: false)
            }

            static mapping = {
                version false
                id column: 'code_set_detail_id', generator: 'assigned'
            }
        }

    and:

        class CodeSummary {
            String id
            String codeClass
            String name
            String accession

            static belongsTo = [codeSetDetail: CodeSetDetail]

            static constraints = {
                id(unique: true, blank: false)
            }

            static mapping = {
                version false
                id column: 'code_summary_id', generator: 'assigned'
            }
        }

    I get two tables with these columns:

        code_set_detail:
            code_set_detail_id
            code_summary_id

        code_summary:
            code_summary_id
            code_set_detail_id (should not exist)
            code_class
            name
            accession

    I would like to link the code_set_detail and code_summary tables by 'code_summary_id' (and not by 'code_set_detail_id'). Note: 'code_summary_id' is defined as a column in the code_set_detail table, and as the primary key of the code_summary table. To sum up, I would like to define 'code_summary_id' as the primary key of the code_summary table, and map 'code_summary_id' in the code_set_detail table. How do I define a primary key in a table, and also map this key to another table?

  • Using Facebook Connect auth.RevokeAuthorization in ASP.NET

    - by Focus
    I'm confused as to how revoking authorization works in the ASP.NET Toolkit. I've tried issuing the following:

        ConnectSession connect = new ConnectSession(FacebookHelper.ApiKey(), FacebookHelper.SecretKey());
        Auth x = new Auth(fbSession);
        x.RevokeAuthorization();

    But I get an object reference error during the RevokeAuthorization call. Here's the call definition. Any idea what I'm doing wrong?

  • Unit of Work Pattern in .Net

    - by Jazza
    Does anyone have any concrete examples of a simple Unit of Work pattern in C# or Visual Basic that would handle the following scenario? I'm writing a WinForms application in which a customer can have multiple addresses associated with it. The user can add, edit and delete addresses belonging to the customer before the customer is saved back to the database. Therefore, at the time of saving, all the original addresses need to be deleted from the database and the new addresses added in a single transaction.
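
    For reference, the core of the pattern (as catalogued in Fowler's Patterns of Enterprise Application Architecture) is an object that records new, changed and removed entities during the editing session, then plays the whole set back inside one transaction at save time. A bare-bones sketch of that registration/commit shape, in Python for brevity since the C#/VB version differs only in syntax; the entity insert/update/delete methods and the connection's context-manager transaction (as in sqlite3) are assumptions:

        class UnitOfWork:
            def __init__(self, conn):
                self.conn = conn
                self.new, self.dirty, self.removed = [], [], []

            def register_new(self, entity):     self.new.append(entity)
            def register_dirty(self, entity):   self.dirty.append(entity)
            def register_removed(self, entity): self.removed.append(entity)

            def commit(self):
                """Flush every pending change in a single transaction."""
                with self.conn:                    # commit on success, rollback on error
                    for e in self.removed: e.delete(self.conn)
                    for e in self.new:     e.insert(self.conn)
                    for e in self.dirty:   e.update(self.conn)

        # The address-editing scenario: nothing hits the database until commit().
        #   uow = UnitOfWork(db)
        #   for a in customer.original_addresses: uow.register_removed(a)
        #   for a in customer.edited_addresses:   uow.register_new(a)
        #   uow.commit()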

  • Retrieve position of a google maps v3 marker to Qt in a desktop app with QtWebKit

    - by nelas
    I'm building a Qt app with Python where you can point and click at a (Google) map and get the coordinates of the location. The map is shown through a QWebView loading a simple HTML page, and the user can create markers by clicking. Screenshot of the widget after clicking on the map: [screenshot] However, I'm having trouble retrieving the just-clicked location coordinates back into Qt (so that I can use them as variables and, for example, show them in the QLineEdits in the top-left corner above, as the current location of the marker). This is the relevant part of the HTML file:

        <script type="text/javascript">
            var map;

            function initialize() {
                var local = new google.maps.LatLng(-23.4, -40.3);
                var myOptions = {
                    zoom: 5,
                    center: local,
                    mapTypeId: google.maps.MapTypeId.ROADMAP
                }
                map = new google.maps.Map(document.getElementById("map_canvas"), myOptions);
                google.maps.event.addListener(map, 'rightclick', function(event) {
                    placeMarker(event.latLng);
                });
            }

            function placeMarker(location) {
                var clickedLocation = new google.maps.LatLng(location);
                var marker = new google.maps.Marker({
                    position: location,
                    map: map
                });
                map.setCenter(location);
            }

            function dummyTxt() {
                return 'This works.';
            }
        </script>

    I've been trying with evaluateJavaScript, but was not able to retrieve the coordinates. I tried to create a function to access the position with marker.getPosition(), but with no luck. The dummy below works though:

        newplace = QWebView.page().mainFrame().evaluateJavaScript(QString('dummyTxt()'))
        >>> print newplace.toString()
        This works.

    Any suggestions on how to get the coordinates back to Qt?
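
    One thing worth noting is that marker in placeMarker is a local variable, so no later evaluateJavaScript call can reach it; keeping a reference on the page and returning a plain string tends to work, since QtWebKit can marshal strings back but not google.maps.LatLng objects. A hedged sketch of the two halves, where lastMarker and lastMarkerPos are made-up names:

        # On the page, keep the marker reachable and expose a string getter, e.g.:
        #   var lastMarker;
        #   function placeMarker(location) { ... lastMarker = new google.maps.Marker(...); ... }
        #   function lastMarkerPos() {
        #       return lastMarker.getPosition().lat() + ',' + lastMarker.getPosition().lng();
        #   }

        # On the Qt side, evaluate the getter and parse the string:
        newplace = QWebView.page().mainFrame().evaluateJavaScript(QString('lastMarkerPos()'))
        lat, lng = [float(v) for v in unicode(newplace.toString()).split(',')]
        latEdit.setText(str(lat))   # latEdit/lngEdit: the QLineEdits in the corner
        lngEdit.setText(str(lng))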

  • Historical / auditable database

    - by Mark
    Hi all, this question is related to the schema that can be found in one of my other questions here. Basically, in my database I store users, locations, sensors, amongst other things. All of these things are editable in the system by users, and deletable. However, when an item is edited or deleted I need to store the old data; I need to be able to see what the data was before the change.

    There are also non-editable items in the database, such as "readings". They are more of a log, really. Readings are logged against sensors, because each reading is for a particular sensor. If I generate a report of readings, I need to be able to see what the attributes of a location or sensor were at the time of the reading. Basically, I should be able to reconstruct the data for any point in time.

    Now, I've done this before and got it working well by adding the following columns to each editable table:

        valid_from
        valid_to
        edited_by

    If valid_to = 9999-12-31 23:59:59, then that's the current record. If valid_to equals valid_from, then the record is deleted. However, I was never happy with the triggers I needed to use to enforce foreign key consistency.

    I can possibly avoid triggers by using an extension to PostgreSQL. This provides a column type called "period" which allows you to store a period of time between two dates, and then allows you to add CHECK constraints to prevent overlapping periods. That might be an answer.

    I am wondering, though, if there is another way. I've seen people mention using special historical tables, but I don't really like the thought of maintaining 2 tables for almost every 1 table (though it still might be a possibility). Maybe I could cut down my initial implementation to not bother checking the consistency of records that aren't "current", i.e. only bother to check constraints on records where valid_to is 9999-12-31 23:59:59. After all, the people who use historical tables do not seem to have constraint checks on those tables (for the same reason, you'd need triggers). Does anyone have any thoughts about this?

    PS: the title also mentions auditable database. In the previous system I mentioned, there is always the edited_by field. This allowed all changes to be tracked so we could always see who changed a record. Not sure how much difference that might make. Thanks.

  • Google's Oauth for Installed apps vs. Oauth for Web Apps

    - by burgerguy
    So I'm having trouble understanding something... If you do OAuth for Web Apps, you register your site with a callback URL and get a unique consumer secret key. But once you've obtained an OAuth for Web Apps token, you don't have to generate OAuth calls to the Google server from your registered domain. I regularly use my key and token from scripts running via an Apache server at localhost on my laptop, and Google never says "you're not sending this request from the registered domain." It just sends me the data. Now, as I understand it, if you do OAuth for Installed Apps, you use "anonymous" instead of a secret key you got from Google. I've been thinking of just using the OAuth for Web Apps auth method, then passing that token to an installed app that has my secret code embedded in its innards. The worry is that the code could be discovered by bad people. But which is more secure: making them work for the secret code, or letting them default to anonymous? What really goes bad if the "secret" is discovered, when the alternative is using "anonymous" as the secret?

  • Is it possible to create static classes in PHP (like in C#)?

    - by aleemb
    I want to create a static class in PHP and have it behave like it does in C#, so that the constructor is automatically called on the first call to the class and no instantiation is required. Something of this sort...

        static class Hello {
            private static $greeting = 'Hello';

            private __construct() {
                $greeting .= ' There!';
            }

            public static greet() {
                echo $greeting;
            }
        }

        Hello::greet(); // Hello There!

  • handling long running large transactions with perl dbi

    - by 1stdayonthejob
    I've got a large transaction comprising getting lots of data from database A, doing some manipulations with this data, then inserting the manipulated data into database B. I've only got permission to select in database A, but I can create tables and insert/update etc. in database B. The manipulation and insertion part is written in Perl and already in use for loading data into database B from other data sources, so all that's required is to get the necessary data from database A and use it to initialize the Perl classes.

    How can I go about doing this so I can easily track back and pick up from where the error happened if any error occurs during the manipulation or insertion procedures (database disconnection, problems with class initialization because of invalid values, hard disk failure etc.)? Doing the transaction in one go doesn't seem like a good option because the amount of data from database A means it would take at least a day or 2 for data manipulation and insertion into database B.

    The data from database A can be grouped into around 1000 groups using unique keys, with each key containing 1000s of rows. One way I thought I could do it is to write a script that commits per group, meaning I've got to track which group has already been inserted into database B. The only ways I can think of to track the progress of which groups have been processed are a log file or a table in database B.

    A second way I thought could work is to dump all the necessary fields needed for loading the classes for manipulation and insertion into a flatfile, then read the file to initialize the classes and insert into database B. This also means that I've got to do some logging, but it should narrow things down to the exact row in the flatfile if any error occurs. The script will look something like this:

        use strict;
        use warnings;
        use DBI;

        # connect to database A
        my $dbh = DBI->connect('dbi:oracle:my_db', $user, $password,
                               { RaiseError => 1, AutoCommit => 0 });

        # statement to get data based on the group unique key
        my $sth = $dbh->prepare($my_sql);

        my @groups; # I have a list of this already

        open my $fh, '>>', 'my_logfile' or die "can't open logfile $!";

        eval {
            foreach my $g (@groups) {
                # subroutine to check if the group has already been processed,
                # either from the log file or from a database table
                next if is_processed($g);

                $sth->execute($g);
                my $data = $sth->fetchall_arrayref;

                # manipulate $data, then use it to load perl classes for
                # insertion into database B
                # .
                # .
                # .

                print $fh "$g\n";
            }
        };
        if ($@) {
            $dbh->rollback;
            die "something wrong...rollback";
        }

    So if any errors do occur, I can just run this script again and it should skip the groups or rows that have been processed and continue. Both these methods are just variations on the same theme, and both require going back to where I've been tracking my progress (in a table or file), skipping what's been committed to database B and processing the remaining data.

    I'm sure there's a better way of doing this but am struggling to think of other solutions. Is there another way of handling large transactions between databases that require data manipulation between getting data out of one and inserting into another? The process doesn't need to be all in Perl, as long as I can reuse the Perl classes for manipulating and inserting the data into the database.

  • I cannot grok MVC: what is it, and what is it not?

    - by Hao
    I cannot grok what MVC is. What mindset or programming model should I acquire so the MVC stuff can instantly "lightbulb" in my head? If not instantly, what simple programs/projects should I try first so I can apply the neat things MVC brings to programming?

    OOP is intuitive and easier; objects are all around us, and the benefits of code reuse using the OOP paradigm instantly click with anyone. You can probably talk to anybody about OOP in a few minutes, run through some examples, and they would get it. While OOP somehow raises the intuitiveness of programming, MVC seems to do the opposite. I'm getting negative thoughts that some future employers (or even clients) would look down on me for not using MVC technology. I probably get the skinnable aspect of MVC, but when I try to apply it to my own project, I don't know where to start.

    And also, some programmers have diverging views on how to accomplish MVC properly. Take, for instance, this from Jeff's post about MVC: "The view is simply how you lay the data out, how it is displayed." If you want a subset of some data, for example, my opinion is that that is a responsibility of the model. So maybe some programmers use MVC, but somehow inadvertently use the View or the Controller to extract a subset of data. Why can't we have a definitive definition of what MVC is and how to accomplish it properly? And also, when I search for MVC .NET programs, most results apply to web programs, not desktop apps, which intrigues me further. My guess is that MVC is most advantageous for web apps; there's not much of a problem with intermixed view (HTML) and controller (program code) in desktop apps.
