Search Results

Search found 2628 results on 106 pages for 'lexical analysis'.

Page 89/106

  • I have two choices of Master's classes this fall. Which is more useful?

    - by ahplummer
    (For background purposes and context): I am a Software Engineer, and I currently manage other Software Engineers. I wear two hats right now: one as a programmer and one as a 'team lead'. In that regard, I've started going back to school to get my Master's degree with an emphasis in Computer Science. I already have a Bachelor's in Computer Science and have been working in the field for about 13 years. Our primary development environment is Windows, writing in .NET, Delphi, and SQL Server. Choice #1: CST 798 DATA VISUALIZATION. Course description: basically, this is a course on the "Processing" language: http://processing.org/ Choice #2: CST 711 INFORMATICS. Course description (from the catalog): Informatics is the science of the use and processing of data, information, and knowledge. This course covers a variety of applied issues from information technology and information management, at a variety of levels ranging from simple data entry, to the creation, design and implementation of new information systems, to the development of models. Topics include basic information representation, processing, searching, and organization; evaluation and analysis of information; Internet-based information access tools; and the ethics and economics of information sharing.

    Read the article

  • Encryption puzzle / How to create a PassStub for a Remote Assistance ticket

    - by Jon Clegg
    I am trying to create a ticket for Remote Assistance. Part of that requires creating a PassStub parameter. From the documentation (http://msdn.microsoft.com/en-us/library/cc240115(PROT.10).aspx): PassStub: The encrypted novice computer's password string. When the Remote Assistance Connection String is sent as a file over e-mail, to provide additional security, a password is used. <16> In part 16 they detail how to create a PassStub: in Windows XP and Windows Server 2003, when a password is used, it is encrypted using the PROV_RSA_FULL predefined cryptographic provider with MD5 hashing and CALG_RC4, the RC4 stream encryption algorithm. The PassStub looks like this in the file: PassStub="LK#6Lh*gCmNDpj". If you want to generate one yourself, run msra.exe in Vista or the Remote Assistance tool in WinXP. The documentation says this stub is the result of the function CryptEncrypt, with the key derived from the password and encrypted with the session id (those are also in the ticket file). The problem is that CryptEncrypt produces a binary output far larger than the 15-byte PassStub. Also, the PassStub isn't encoded in any way I've seen before. Some interesting things about the PassStub encoding: after doing statistical analysis, the 3rd char is always one of !#$&()+-=@^. The only symbols seen everywhere are * and _; otherwise the valid characters are 0-9, a-z, A-Z. There are a total of 75 valid characters, and the stub is always 15 bytes. Running msra.exe with the same password always generates a different PassStub, indicating that it is not a direct hash but includes the rasessionid, as they say. Another idea I've had is that it is not the direct result of CryptEncrypt, but a result of including the rasessionid in the MD5 hash. In MS-RA (http://msdn.microsoft.com/en-us/library/cc240013(PROT.10).aspx), the "PassStub Novice" is simply hex-encoded, and looks to be the right length. The problem is I have no idea how to get from any hash to something that looks like the PassStub.

    Read the article

  • ECM (Niche Vs Mass Market)

    - by Luj Reyes
    Hi everyone, I recently started a little company with a couple of guys. Ours is the typical startup: a lot of ideas, dreams, talent and work hours :P. Our initial business plan was to develop a DM (Document Manager) with several features found in DropBox and other tools, but with a big differentiator. Then this Business Guy joined the team (I must say that several of us could be called 'Business Guys', but we are mainly hackers; he is just another 'Networking Guy'), and along with him came a market analysis for a DM aimed at a very specific and narrow niche. We have many reasons to believe his market study, and the idea is the classic "The market is X million, so if we grab a 10%...", and the market really is there to grab, because all the big providers deemed it too small and fled. Let's say the market is 5 million USD and demands very specific features. If we decide to go for this niche product, we face a sales cycle of about 7 months, and the main goal of this revenue is to fund more ambitious projects. (Institutional VC is out of the question if you want to keep more than marginal ownership of your company in my country.) The only overlap between the niche and the mass-market product features is the ability to store documents; everything else requires that we focus all of our efforts on one or the other. I've studied a lot about the differences between mass and niche markets, but I want to hear from people with actual experience. So everything comes down to this: if you have a really “saleable” idea, what is the right thing to do: go for the niche, or go for the big prize and target primarily the mass market? Thanks for your input.

    Read the article

  • Writing a catch block with cleanup operations in Java

    - by kedarmhaswade
    I was not able to find any advice on catch blocks in Java that involve cleanup operations which can themselves throw exceptions. The classic example is stream.close(), which we usually call in the finally clause; if that throws, we either ignore it by wrapping it in a try-catch block or declare it to be rethrown. But in general, how do I handle cases like:

        public void doIt() throws ApiException { // ApiException is my "higher level" exception
            try {
                doLower();
            } catch (Exception le) {
                doCleanup(); // this throws too, and I can't communicate it to the caller
                throw new ApiException(le);
            }
        }

    I could do:

        catch (Exception le) {
            try {
                doCleanup();
            } catch (Exception e) {
                // ignore? log?
            }
            throw new ApiException(le); // I must throw le
        }

    But that means I will have to do some log analysis to understand why cleanup failed. If I did:

        catch (Exception le) {
            try {
                doCleanup();
            } catch (Exception e) {
                throw new ApiException(e);
            }
        }

    it results in losing the le that got me into the catch block in the first place. What are some of the idioms people use here? Declare the lower-level exceptions in the throws clause? Ignore the exceptions during the cleanup operation?
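
    In Java 7 and later, try-with-resources and Throwable.addSuppressed were added for exactly this dilemma; in older Java the common idiom is to log and swallow the cleanup failure, as in the second variant above. For intuition, here is how the same situation plays out in Python, where exception chaining is built in, so neither failure is lost; do_lower, do_cleanup and ApiException are hypothetical stand-ins for the methods above:

        class ApiException(Exception):
            pass

        def do_lower():
            raise ValueError("lower-level failure")

        def do_cleanup():
            raise OSError("cleanup failed too")

        def do_it():
            try:
                do_lower()
            except Exception as le:
                do_cleanup()                    # if this raises, 'le' rides along as __context__
                raise ApiException(le) from le  # otherwise wrap and rethrow the original

        do_it()  # the printed traceback shows the OSError *and* the chained ValueError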

    Read the article

  • An unusual type signature

    - by Travis Brown
    In Monads for natural language semantics, Chung-Chieh Shan shows how monads can be used to give a nicely uniform restatement of the standard accounts of some different kinds of natural language phenomena (interrogatives, focus, intensionality, and quantification). He defines two composition operations, A_M and A'_M, that are useful for this purpose. The first is simply ap. In the powerset monad, ap is non-deterministic function application, which is useful for handling the semantics of interrogatives; in the reader monad it corresponds to the usual analysis of extensional composition; etc. This makes sense. The secondary composition operation, however, has a type signature that just looks bizarre to me:

        (<?>) :: (Monad m) => m (m a -> b) -> m a -> m b

    (Shan calls it A'_M, but I'll call it <?> here.) The definition is what you'd expect from the types; it corresponds pretty closely to ap:

        g <?> x = g >>= \h -> return $ h x

    I think I can understand how this does what it's supposed to in the context of the paper (handle question-taking verbs for interrogatives, serve as intensional composition, etc.). What it does isn't terribly complicated, but it's a bit odd to see it play such a central role here, since it's not an idiom I've seen in Haskell before. Nothing useful comes up on Hoogle for either m (m a -> b) -> m a -> m b or m (a -> b) -> a -> m b. Does this look familiar to anyone from other contexts? Have you ever written this function?
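
    Specializing <?> to a single monad can make it feel less exotic. In the list (powerset) monad, return is a singleton and >>= is concatMap, so g >>= \h -> return (h x) reads "apply every nondeterministic function to the whole nondeterministic argument". A Python sketch of just that one instance, not of the general signature:

        # g: a list of functions, each consuming the *entire* list of arguments x
        def ap_prime(g, x):
            return [h(x) for h in g]

        print(ap_prime([sum, max, len], [1, 2, 3]))   # [6, 3, 3]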

    Read the article

  • How to reliably replace a library-defined error handler with my own?

    - by sharptooth
    On certain error cases ATL invokes AtlThrow(), which is implemented as ATL::AtlThrowImpl(), which in turn throws CAtlException. The latter is not very good: CAtlException is not even derived from std::exception, and since we use our own exception hierarchy, we would now have to catch CAtlException separately here and there, which is a lot of extra code and error-prone. It looks like it is possible to replace ATL::AtlThrowImpl() with my own handler: define _ATL_CUSTOM_THROW and define AtlThrow() to be the custom handler before including atlbase.h, and ATL will call the custom handler. But it's not so easy. Some of the ATL code doesn't ship as source; it comes compiled as a library, either static or dynamic. We use the static one, atls.lib. And it is compiled in such a way that it has ATL::AtlThrowImpl() inside, along with some code calling it. I used a static analysis tool, and it clearly shows that there are paths on which the old default handler is called. To make sure, I even tried to "reimplement" ATL::AtlThrowImpl() in my code; now the linker says it sees two definitions of ATL::AtlThrowImpl(), which I suppose confirms that there's another implementation that can be called by some code. How can I handle this? How do I replace the default handler completely and ensure that the default handler is never called?

    Read the article

  • Plotting a 3D graph in MATLAB

    - by lytheone
    Hello, I am a beginner currently using MATLAB to do some data analysis. I have a text file whose first row is a header formatted as follows: time;wave height 1;wave height 2;... The columns run up to wave height 19, and there are 4000 rows in total. Data in the first column is time in seconds; from the 2nd column onwards it is wave-height elevation in meters. I would like MATLAB to plot a 3D graph with time on the x axis, wave-height number on the y axis, and the corresponding wave elevation on the z axis. For example, if the data in column 2, row 10 is, let's say, 8 m, that corresponds to wave height 1 and the time in column 1, row 10. I have tried the following:

        clear;
        filename = 'abc.daf';
        path = 'C:\D';
        a = dlmread([path '\' filename], ' ', 2, 1);
        [nrows, ncols] = size(a);
        t = a(1:nrows, 1);   % time from the text file
        for i = (1:20), j = (2:21); end
        wi = a(:, j);
        for k = (2:4000), l = k; end
        r = a(l, :);

    But every time I try to plot them, the wi part works fine, while for r = a(l,:) the plot only gives me the last time step's data; I want all the data in the file to be plotted. Is there a way to do that? I am sorry it is a bit confusing, but I will be very thankful if anyone can help me out. Thank you!
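
    For what it's worth, here is a sketch of the intended surface plot in Python with numpy and matplotlib (MATLAB's meshgrid plus surf is the direct analogue), assuming the file really is semicolon-delimited with one header row; the filename is taken from the question:

        import numpy as np
        import matplotlib.pyplot as plt

        a = np.loadtxt('abc.daf', delimiter=';', skiprows=1)  # col 0 = time, cols 1..19 = gauges
        t = a[:, 0]
        gauge = np.arange(1, a.shape[1])          # wave-height numbers 1..19
        T, G = np.meshgrid(t, gauge)
        Z = a[:, 1:].T                            # elevation in meters, one row per gauge

        ax = plt.figure().add_subplot(projection='3d')   # needs a reasonably recent matplotlib
        ax.plot_surface(T, G, Z)
        ax.set_xlabel('time (s)')
        ax.set_ylabel('wave height number')
        ax.set_zlabel('elevation (m)')
        plt.show()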

    Read the article

  • Memory usage in Flash / Flex / AS3

    - by ggambett
    I'm having some trouble with memory management in a Flash app. Memory usage grows quite a bit, and I've tracked it down to the way I load assets. I embed several raster images in a class Embedded, like this:

        [Embed(source="/home/gabriel/text_hard.jpg")]
        public static var ASSET_text_hard_DOT_jpg : Class;

    I then instance the assets this way:

        var pClass : Class = Embedded[sResource] as Class;
        return new pClass() as Bitmap;

    At this point, memory usage goes up, which is perfectly normal. However, nulling all the references to the object doesn't free the memory. Based on this behavior, it looks like the Flash player is creating an instance of the class the first time I request it, but never ever releases it: not when the references go away, not on calling System.gc(), not with the double LocalConnection trick, nor when calling dispose() on the BitmapData objects. Of course, this is very undesirable: memory usage would grow until everything in the SWFs is instanced, regardless of whether I stopped using some asset long ago. Is my analysis correct? Can anything be done to fix this?

    Read the article

  • Reconstructing trees from a "fingerprint"

    - by awshepard
    I've done my SO and Google research and haven't found anyone who has tackled this before, or at least anyone who has written about it. My question is: given a "universal" tree of arbitrary height, with each node able to have an arbitrary number of branches, is there a way to uniquely (and efficiently) "fingerprint" arbitrary sub-trees starting from the "universal" tree's root, such that given the universal tree and a tree's fingerprint, I can reconstruct the original tree? For instance, I have a "universal" tree (forgive my poor illustrations) representing my universe of possibilities:

                  Root
           / / /  |  \ \ ... \
          O O O   O   O O     O    (Level 1)
         /|\ /|\ ............. \   (Level 2)
        etc.

    I also have tree A, a rooted subtree of my universe:

           Root
          / /|\ \
         O O O O O
        /
        Etc.

    Is there a way to "fingerprint" the tree, so that given that fingerprint and the universal tree, I could reconstruct A? I'm thinking something along the lines of a hash, a compression, or perhaps a functional/declarative construction? Big-O analysis (in time or space) is a plus. As a for-instance, a nested expression like {{(Root)},{(1),(2),(3)},{(2,3),(1),(4,5)}...}, representing the actual nodes present at each level in the tree, is probably valid, but can it be done more efficiently?
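
    One simple scheme that meets the reconstruction requirement (though not necessarily the most compact one): record, for each kept node in preorder, the tuple of its child indices that are also kept. Given the universal tree, replaying those tuples rebuilds A exactly, and space is one tuple per kept node. A Python sketch using a hypothetical node shape where 'children' is a list and None marks a pruned branch:

        def fingerprint(node):
            """Preorder list of kept-child index tuples for the subtree."""
            kept = tuple(i for i, c in enumerate(node['children']) if c is not None)
            out = [kept]
            for i in kept:
                out.extend(fingerprint(node['children'][i]))
            return out

        def reconstruct(universal, fp):
            """Replay the tuples against the universal tree, pruning everything else."""
            it = iter(fp)
            def walk(u):
                kept = next(it)
                return {'children': [walk(c) if i in kept else None
                                     for i, c in enumerate(u['children'])]}
            return walk(universal)

        u = {'children': [{'children': []}, {'children': []}]}  # tiny universe: root, two leaves
        a = {'children': [{'children': []}, None]}              # subtree A keeps only child 0
        assert reconstruct(u, fingerprint(a)) == a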

    Read the article

  • How to identify ideas and concepts in a given text

    - by Nick
    I'm working on a project at the moment where it would be really useful to be able to detect when a certain topic/idea is mentioned in a body of text. For instance, if the text contained: Maybe if you tell me a little more about who Mr Balzac is, that would help. It would also be useful if I could have a description of his appearance, or even better a photograph? It'd be great to be able to detect that the person has asked for a photograph of Mr Balzac. I could take a really naïve approach and just look for the word "photo" or "photograph", but this would obviously be no good if they wrote something like: Please, never send me a photo of Mr Balzac. Does anyone know where to start with this? Is it even possible? I've looked into things like nltk, but I've yet to find an example of someone doing something similar and am still not entirely sure what this kind of analysis is called. Any help that can get me off the ground would be great. Thanks!
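
    What's being asked for is usually filed under information extraction or intent detection, and a robust answer needs a syntactic parse (nltk and spaCy can both produce one). Purely as a starting point, here is the naive keyword approach upgraded with a crude negation window; it already separates the two example sentences above, though it would be easy to fool:

        import re

        NEGATORS = {'never', 'not', 'no', "don't"}

        def asks_for_photo(text):
            """Toy intent check: a 'photo*' token with no negator shortly before it."""
            tokens = re.findall(r"[\w']+", text.lower())
            for i, tok in enumerate(tokens):
                if tok.startswith('photo') and not NEGATORS & set(tokens[max(0, i - 4):i]):
                    return True
            return False

        print(asks_for_photo("It would be great to have a photograph of Mr Balzac."))  # True
        print(asks_for_photo("Please, never send me a photo of Mr Balzac."))           # False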

    Read the article

  • Oracle: why does creating a trigger fail when there is a field called timestamp?

    - by Omar Kooheji
    I've just wasted the past two hours of my life trying to create a table with an auto-incrementing primary key based on this tutorial. The tutorial is great; the issue I've been encountering is that the CREATE TRIGGER fails if I have a column of type TIMESTAMP and a column called timestamp in the same table... Why doesn't Oracle flag this as an issue when I create the table? Here is the sequence of commands I enter. Creating the table:

        CREATE TABLE myTable (
            id NUMBER PRIMARY KEY,
            field1 TIMESTAMP(6),
            timeStamp NUMBER
        );

    Creating the sequence:

        CREATE SEQUENCE test_sequence START WITH 1 INCREMENT BY 1;

    Creating the trigger:

        CREATE OR REPLACE TRIGGER test_trigger
        BEFORE INSERT ON myTable
        REFERENCING NEW AS NEW
        FOR EACH ROW
        BEGIN
            SELECT test_sequence.nextval INTO :NEW.ID FROM dual;
        END;
        /

    Here is the error message I get:

        ORA-06552: PL/SQL: Compilation unit analysis terminated
        ORA-06553: PLS-320: the declaration of the type of this expression is incomplete or malformed

    Any combination that does not have the two lines with the word "timestamp" in them works fine. I would have thought the syntax would be enough to differentiate between the keyword and a column name. As I've said, I don't understand why the table is created fine but Oracle falls over when I try to create the trigger... CLARIFICATION: I know that the issue is that there is a column called timestamp, which may or may not be a keyword. My issue is why it barfed when I tried to create a trigger and not when I created the table; I would have at least expected a warning. That said, having used Oracle for a few hours, it seems a lot less verbose in its error reporting, maybe just because I'm using the express version. If this is a bug in Oracle, how would someone who doesn't have a support contract go about reporting it? I'm just playing around with the express version because I have to migrate some code from MySQL to Oracle.

    Read the article

  • What is a good programming language/environment for Linux database applications?

    - by Dkellygb
    I could use some advice on my move from the Windows world to Linux. For my business, I have used VB6 and Microsoft Access, with both Access databases and SQL Server, in the past. The easy-to-use forms, report writers and programming language were perfect for CRUD apps and analysis for our small hotel/restaurant business. After using Linux at home for some time, I would like to convert our small business. Our server is already a Linux box running Samba. I am happy with the OpenOffice.org applications instead of Microsoft Office. The only thing holding me back is a desktop database application in which I can develop the forms and reports we require. Base does not seem to be up to the job yet, from my experience. I would like something like VB.NET with Visual Studio (Express), but I would like to avoid Mono; I just don't see the point of it. (You can correct me if I'm wrong.) A good collection of forms, controls and a good report writer would be ideal. I have looked at web-based stuff like Ruby on Rails, but I think a web server for our 5-PC network is overkill. I don't mind running a proper database on our Ubuntu 9.10 server. I may have exposed a few prejudices above, but my mind is open. Any thoughts?

    Read the article

  • Which key value store is the most promising/stable?

    - by Mike Trpcic
    I'm looking to start using a key/value store for some side projects (mostly as a learning experience), but so many have popped up in the recent past that I've got no idea where to begin. Just listing from memory, I can think of:

        - CouchDB
        - MongoDB
        - Riak
        - Redis
        - Tokyo Cabinet
        - Berkeley DB
        - Cassandra
        - MemcacheDB

    And I'm sure that there are more out there that have slipped through my search efforts. With all the information out there, it's hard to find solid comparisons between all of the competitors. My criteria and questions are:

        - (Most important) Which do you recommend, and why?
        - Which one is the fastest?
        - Which one is the most stable?
        - Which one is the easiest to set up and install?
        - Which ones have bindings for Python and/or Ruby?

    Edit: So far it looks like Redis is the best solution, but that's only because I've gotten one solid response (from ardsrk). I'm looking for more answers like his, because they point me in the direction of useful, quantitative information. Which key-value store do you use, and why? Edit 2: If anyone has experience with CouchDB, Riak, or MongoDB, I'd love to hear your experiences with them (and even more so if you can offer a comparative analysis of several of them).
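
    Independent of which store wins, it helps to have a feel for how small the key/value API surface is. A minimal session with the redis-py client against a default local server (the key names are made up):

        import redis

        r = redis.Redis(host='localhost', port=6379, db=0)
        r.set('user:42:name', 'mike')
        r.expire('user:42:name', 3600)   # keys can carry a TTL
        print(r.get('user:42:name'))     # b'mike' -- values come back as bytes
        r.incr('page:hits')              # atomic counters are a typical Redis use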

    Read the article

  • How to perform Linq select new with datetime in SQL 2008

    - by kd7iwp
    In our C# code I recently changed a line inside a linq-to-sql select new query from:

        OrderDate = (p.OrderDate.HasValue ? p.OrderDate.Value.Year.ToString() + "-" +
                     p.OrderDate.Value.Month.ToString() + "-" +
                     p.OrderDate.Value.Day.ToString() : "")

    to:

        OrderDate = (p.OrderDate.HasValue ? p.OrderDate.Value.ToString("yyyy-mm-dd") : "")

    The change makes the line smaller and cleaner, and it works fine with our SQL 2008 database in our development environment. However, when the code was deployed to our production environment, which uses SQL 2005, I received an exception stating: Nullable Type must have a value. For further analysis I copied (p.OrderDate.HasValue ? p.OrderDate.Value.ToString("yyyy-mm-dd") : "") into a string (outside of a Linq statement) and had no problems at all, so it only causes an issue inside my Linq. Is this problem just something to do with SQL 2005 using different date formats than SQL 2008? Here's more of the Linq:

        dt = FilteredOrders.Where(x => x != null).Select(p => new {
                 Order = p.OrderId,
                 link = "/order/" + p.OrderId.ToString(),
                 StudentId = (p.PersonId.HasValue ? p.PersonId.Value : 0),
                 FirstName = p.IdentifierAccount.Person.FirstName,
                 LastName = p.IdentifierAccount.Person.LastName,
                 DeliverBy = p.DeliverBy,
                 OrderDate = p.OrderDate.HasValue ? p.OrderDate.Value.Date.ToString("yyyy-mm-dd") : ""
             }).ToDataTable();

    This is selecting from a List of Order objects. The FilteredOrders list comes from another linq-to-sql query, and I call .AsEnumerable on it before giving it to this particular select new query. Doing this in regular code works fine:

        if (o.OrderDate.HasValue)
            tempString += " " + o.OrderDate.Value.Date.ToString("yyyy-mm-dd");

    Read the article

  • What is the relation between programming and mathematics?

    - by Math Grad
    Programmers seem to think that their work is quite mathematical. I understand this when you try to optimize something for performance, or find the most efficient algorithm, etc. But it seems patently false when you look at a billing application for a shop, or a systems software riddled with I/O calls. So what is it exactly? Is computation and the associated programming really mathematical? Here I have in mind particularly the words of the philosopher Schopenhauer: That arithmetic is the basest of all mental activities is proved by the fact that it is the only one that can be accomplished by means of a machine. Take, for instance, the reckoning machines that are so commonly used in England at the present time, and solely for the sake of convenience. But all analysis finitorum et infinitorum is fundamentally based on calculation. Therefore we may gauge the “profound sense of the mathematician,” of whom Lichtenberg has made fun, in that he says: “These so-called professors of mathematics have taken advantage of the ingenuousness of other people, have attained the credit of possessing profound sense, which strongly resembles the theologians’ profound sense of their own holiness.” I lifted the above quote from here. It seems that programmers are doing precisely the sort of mechanized base mental activity the grand old man is contemptuous of. So what exactly is the deal? Is programming really the "good" kind of mathematics, or just the baser type, or altogether something else, just meant for business and not to be confused with a pure discipline?

    Read the article

  • How to reuse results with a schema for end-of-day stock data

    - by Vishalrix
    I am creating a database schema to be used for technical analysis, like top volume gainers, top price gainers, etc. I have checked answers to questions here, like the design question. Having taken the hint from boe100's answer there, I have a schema modeled pretty much on it, thus:

        Symbol - char(6)          // primary
        Date   - date             // primary
        Open   - decimal(18, 4)
        High   - decimal(18, 4)
        Low    - decimal(18, 4)
        Close  - decimal(18, 4)
        Volume - int

    Right now this table, containing end-of-day (EOD) data, will be about 3 million rows for 3 years. Later, when I get/need more data, it could be 20 million rows. The front end will be asking requests like "give me the top price gainers on date X over Y days". That request is one of the simpler ones, and as such is not too costly, time-wise, I assume. But a request like "give me top volume gainers for the last 10 days, with the previous 100 days acting as baseline" could prove 10-100 times costlier. The result of such a request would be a float which signifies how many times the volume has grown, etc. One option I have is adding a column for each such result, and if the user asks for volume gain in 10 days over 20 days, that would require yet another one. The total of such columns could easily cross 100, especially if I start storing other results as well, like MACD-10, MACD-100, each of which will require its own column. Is this a feasible solution? Another option is keeping the results in cached HTML files and presenting them to the user. I don't have much experience in web development, so to me it looks messy, but I could be wrong (ofc!). Is that an option too? Let me add that I am/will be using mod_perl to present the response to the user, with much of the work on the MySQL database being done using Perl. I would like to have a response time of 1-2 seconds.
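
    One alternative to a column per indicator is a single long results table keyed by (symbol, date, metric), filled by a nightly batch job; new indicators then become new rows rather than schema changes. A sketch of the shape, with sqlite3 standing in for MySQL and a made-up metric name:

        import sqlite3

        con = sqlite3.connect(':memory:')    # stand-in for the MySQL EOD database
        con.execute("""CREATE TABLE indicator (
            symbol TEXT, date TEXT, metric TEXT, value REAL,
            PRIMARY KEY (symbol, date, metric))""")
        # nightly batch inserts precomputed results, e.g. volume gain 10d vs 100d baseline
        con.execute("INSERT INTO indicator VALUES ('ACME', '2010-04-01', 'vol_gain_10_100', 3.7)")
        top = con.execute("""SELECT symbol, value FROM indicator
                             WHERE date = ? AND metric = ?
                             ORDER BY value DESC LIMIT 10""",
                          ('2010-04-01', 'vol_gain_10_100')).fetchall()
        print(top)   # [('ACME', 3.7)]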

    Read the article

  • mysql subselect alternative

    - by Arnold
    Hi, let's say I am analyzing how high school sports records affect school attendance. I have a table in which each row corresponds to a high school basketball game. Each game has an away team id and a home team id (FKs to a "team" table), a home score, an away score, and a date. I am writing a query that matches attendance with this season's basketball games. My sample output will be (#_students_missed_class, day_of_game, home_team, away_team, home_team_wins_this_season, away_team_wins_this_season). I now want to add how each team did the previous season to my analysis. I have their previous season stored in the game table, so I should be able to accomplish that with a subselect. So in my main select statement I add the subselect:

        SELECT COUNT(*)
        FROM game_table
        WHERE game_table.date BETWEEN 'start of previous season' AND 'end of previous season'
          AND ((game_table.home_team = team_table.id AND game_table.home_score > game_table.away_score)
            OR (game_table.away_team = team_table.id AND game_table.away_score > game_table.home_score))

    In this case team_table.id refers to the id of the home team, so I now have all their wins calculated from the previous year. This calculation shouldn't be time or resource intensive, yet EXPLAIN shows ALL in the type field, no key being used, and the query times out. I'm not sure how I can write a more efficient query with a subselect. It seems preposterously inefficient to have to write 4 of these queries (for home wins, home losses, away wins, away losses). I am sure this could be more lucid; I'll absolutely add color tomorrow if anyone has questions.
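
    The usual cure for a correlated subselect like this is to compute every team's wins once in a derived table and join it back on the team id, so the work is one pass over game_table instead of one subquery per output row. A self-contained sketch of the shape (sqlite3 so it runs anywhere; columns trimmed, ties glossed over):

        import sqlite3

        con = sqlite3.connect(':memory:')
        con.executescript("""
        CREATE TABLE team (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE game (home_team INT, away_team INT,
                           home_score INT, away_score INT, date TEXT);
        INSERT INTO team VALUES (1, 'Lions'), (2, 'Hawks');
        INSERT INTO game VALUES (1, 2, 60, 50, '2009-11-01'),
                                (2, 1, 40, 55, '2009-12-01');
        """)
        # one pass over game computes every team's wins; the join replaces the per-row subselect
        print(con.execute("""
            SELECT t.name, COALESCE(w.wins, 0) AS wins
            FROM team t
            LEFT JOIN (SELECT CASE WHEN home_score > away_score
                                   THEN home_team ELSE away_team END AS winner,
                              COUNT(*) AS wins
                       FROM game
                       WHERE date BETWEEN '2009-09-01' AND '2010-03-31'
                       GROUP BY winner) w ON w.winner = t.id
        """).fetchall())   # [('Lions', 2), ('Hawks', 0)]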

    Read the article

  • Asymptotic runtime of list-to-tree function

    - by Deestan
    I have a merge function which takes time O(log n) to combine two trees into one, and a listToTree function which converts an initial list of elements to singleton trees and repeatedly calls merge on each successive pair of trees until only one tree remains. Function signatures and relevant implementations are as follows:

        merge :: Tree a -> Tree a -> Tree a   -- O(log n), where n is the size of the input trees
        singleton :: a -> Tree a              -- O(1)
        empty :: Tree a                       -- O(1)

        listToTree :: [a] -> Tree a           -- supposedly O(n)
        listToTree = listToTreeR . (map singleton)

        listToTreeR :: [Tree a] -> Tree a
        listToTreeR []     = empty
        listToTreeR (x:[]) = x
        listToTreeR xs     = listToTreeR (mergePairs xs)

        mergePairs :: [Tree a] -> [Tree a]
        mergePairs []       = []
        mergePairs (x:[])   = [x]
        mergePairs (x:y:xs) = merge x y : mergePairs xs

    This is a slightly simplified version of exercise 3.3 in Purely Functional Data Structures by Chris Okasaki. According to the exercise, I shall now show that listToTree takes O(n) time. Which I can't. :-( There are trivially ceil(log n) recursive calls to listToTreeR, meaning ceil(log n) calls to mergePairs. The running time of mergePairs depends on the length of the list and the sizes of the trees: the length of the list is 2^h-1, and the sizes of the trees are log(n/(2^h)), where h=log n is the first recursive step and h=1 is the last recursive step. Each call to mergePairs thus takes time (2^h-1) * log(n/(2^h)). I'm having trouble taking this analysis any further. Can anyone give me a hint in the right direction?
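
    A hint on finishing it: index the passes by k instead, so that pass k performs n/2^k merges of trees of size 2^(k-1), each costing O(k). The weighted sum is then bounded by a constant times n:

        T(n) = \sum_{k=1}^{\lceil \log n \rceil} \frac{n}{2^k} \, O(k)
             = O\!\left( n \sum_{k \ge 1} \frac{k}{2^k} \right)
             = O(n), \qquad \text{since } \sum_{k \ge 1} \frac{k}{2^k} = 2 .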

    Read the article

  • Interesting questions related to lighttpd on Amazon EC2

    - by terence410
    This problem appeared today and I have no idea what is going on. Please share your ideas. I have 1 EC2 DB server (MySQL + NFS file sharing + memcached) and 3 EC2 web servers (lighttpd) which mount the NFS folders from the DB server. Everything had been going smoothly for months, but suddenly there is an interesting phenomenon: every 8 to 10 minutes, PHP files become unreachable. This lasts about 1 minute and then everything goes back to normal. Normal files like .html are unaffected. All servers have the same problem at exactly the same time. I have spent one whole day analyzing the cause. Finally, I found that when the problem appears, the file-descriptor count of lighttpd suddenly increases a lot. I used ls /proc/1234/fd | wc -l to check the number of fds: it is around 250 in normal times, but when the problem appears it rises to 1500 and then drops back to normal. It sounds funny, right? Do you have any idea what's going on? (Attached: the CPU graph of one of the web servers.)

    Read the article

  • Access Violation in std::pair

    - by sameer karjatkar
    I have an application which is trying to populate a pair. Out of nowhere the application crashes. The WinDbg analysis of the crash dump suggests:

        PRIMARY_PROBLEM_CLASS: INVALID_POINTER_READ
        DEFAULT_BUCKET_ID: INVALID_POINTER_READ
        STACK_TEXT:
        0389f1dc EPFilter32!std::vector<std::pair<unsigned int,unsigned int>,std::allocator<std::pair<unsigned int,unsigned int> > >::size+0xc
        INVALID_POINTER_READ_c0000005_Test.DLL!std::vector_std::pair_unsigned_int,unsigned_int_,std::allocator_std::pair_unsigned_int,unsigned_int_____::size

    Here is the code snippet where it fails:

        for (unsigned i1 = 0; i1 < size1; ++i1) {
            for (unsigned i2 = 0; i2 < size2; ++i2) {
                const branch_info& b1 = en1.m_branches[i1]; // exception here: crash
                const branch_info& b2 = en2.m_branches[i2];
            }
        }

    where branch_info is std::pair<unsigned int, unsigned int> and en1.m_branches[i1] fetches a pair value.

    Read the article

  • MySQL table data transformation -- how can I dis-aggregate MySQL time data?

    - by lighthouse65
    We are coding for a MySQL data-warehousing application that stores descriptive data (the User ID, Work ID, Machine ID, and Start and End Time columns in the first table below) associated with time and production-quantity data (the Output and Time columns) upon which aggregate (SUM, COUNT, AVG) functions are applied. We now wish to dis-aggregate the time data for another type of analysis. Our current data table design:

        +---------+---------+------------+---------------------+---------------------+--------+------+
        | User ID | Work ID | Machine ID | Event Start Time    | Event End Time      | Output | Time |
        +---------+---------+------------+---------------------+---------------------+--------+------+
        | 080025  | ABC123  | M01        | 2008-01-24 16:19:15 | 2008-01-24 16:34:45 |   2120 |  930 |
        +---------+---------+------------+---------------------+---------------------+--------+------+

    The dis-aggregation we would like to do is to transform the table content to a granularity of minutes, rather than the current production-event granularity ("Event Start Time" and "Event End Time"). The reprocessing of the existing table row would look like:

        +---------+---------+------------+-------------------+--------+
        | User ID | Work ID | Machine ID | Production Minute | Output |
        +---------+---------+------------+-------------------+--------+
        | 080025  | ABC123  | M01        | 2010-01-24 16:19  |    133 |
        | 080025  | ABC123  | M01        | 2010-01-24 16:20  |    133 |
        | 080025  | ABC123  | M01        | 2010-01-24 16:21  |    133 |
        | 080025  | ABC123  | M01        | 2010-01-24 16:22  |    133 |
        | 080025  | ABC123  | M01        | 2010-01-24 16:23  |    133 |
        | 080025  | ABC123  | M01        | 2010-01-24 16:24  |    133 |
        | 080025  | ABC123  | M01        | 2010-01-24 16:25  |    133 |
        | 080025  | ABC123  | M01        | 2010-01-24 16:26  |    133 |
        | 080025  | ABC123  | M01        | 2010-01-24 16:27  |    133 |
        | 080025  | ABC123  | M01        | 2010-01-24 16:28  |    133 |
        | 080025  | ABC123  | M01        | 2010-01-24 16:29  |    133 |
        | 080025  | ABC123  | M01        | 2010-01-24 16:30  |    133 |
        | 080025  | ABC123  | M01        | 2010-01-24 16:31  |    133 |
        | 080025  | ABC123  | M01        | 2010-01-24 16:32  |    133 |
        | 080025  | ABC123  | M01        | 2010-01-24 16:33  |    133 |
        | 080025  | ABC123  | M01        | 2010-01-24 16:34  |    133 |
        +---------+---------+------------+-------------------+--------+

    So the reprocessing takes an existing row of data created at the granularity of a production event and changes the granularity to minutes, eliminating the now-redundant Event End Time and Time columns as it does so. It assumes a constant rate of production and divides Output by the difference in minutes plus one to populate the new table's Output column. I know this can be done in code... but can it be done entirely in a MySQL insert statement (or otherwise entirely in MySQL)? I am thinking of an INSERT ... INTO construction but keep getting stuck. An additional complexity is that there are hundreds of machines to include in the operation, so there will be multiple rows (one for each machine) for each minute of the day. Any ideas would be much appreciated. Thanks.
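
    Entirely inside SQL, the standard trick for multiplying one row into many is a "numbers" (tally) table: join each event against the integers 0 .. duration-in-minutes and compute each minute with date arithmetic. A self-contained sketch of the shape using sqlite3 (MySQL spells its date functions differently, but the INSERT ... SELECT join is the same pattern):

        import sqlite3

        con = sqlite3.connect(':memory:')
        con.executescript("""
        CREATE TABLE event (user_id TEXT, work_id TEXT, machine_id TEXT,
                            start TEXT, end TEXT, output INT);
        INSERT INTO event VALUES ('080025', 'ABC123', 'M01',
                                  '2008-01-24 16:19:15', '2008-01-24 16:34:45', 2120);
        CREATE TABLE numbers (n INT);   -- 0..N, built once, reused everywhere
        """)
        con.executemany("INSERT INTO numbers VALUES (?)", [(i,) for i in range(1440)])
        per_minute = """
            SELECT e.user_id, e.work_id, e.machine_id,
                   strftime('%Y-%m-%d %H:%M', e.start, '+' || n.n || ' minutes') AS minute,
                   e.output / ((strftime('%s', e.end) - strftime('%s', e.start)) / 60 + 1) AS output
            FROM event e
            JOIN numbers n
              ON n.n <= (strftime('%s', e.end) - strftime('%s', e.start)) / 60
        """
        for row in con.execute(per_minute):
            print(row)   # 16 rows, 16:19 .. 16:34, output 132 each (integer division)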

    Read the article

  • Which version of Grady Booch's OOA/D book should I buy?

    - by jackj
    Grady Booch's "Object-Oriented Analysis and Design with Applications" is available brand new in both the 2nd edition (1993) and the 3rd edition (2007), while many used copies of both editions are available. Here are my concerns:

    1) The 2nd edition uses C++: given that I just finished reading my first two C++ books (Accelerated C++ and C++ Primer), I guess practical tips can only help, so the 2nd edition is probably best (I think the 3rd edition has absolutely no code). On the other hand, the C++ books I read insist on the importance of using standard C++, whereas Booch's 2nd edition was published before the 1998 standard.

    2) The 2nd edition is shorter (608 pages vs. 720), so I guess it will be slightly easier to get through.

    3) The 3rd edition uses UML 2.0, whereas the 2nd edition is pre-UML. Some reviews say the notation in the 2nd edition is close enough to UML that it doesn't matter, but I don't know if I should be worrying about this or not.

    4) The 2nd edition is available in good-shape used copies for considerably less than what the 3rd one goes for.

    Given all the above factors, do you think I should buy the 2nd or the 3rd edition? Recommendations on other books are also welcome, but I would prefer it if whoever answers has read at least one of the versions of Booch's book (preferably both!). I have already bought, but not read, GoF and Riel's books. I also know that I should practice a lot with real-life code. Thanks.

    Read the article

  • Hadoop/MapReduce: Reading and writing classes generated from DDL

    - by Dave
    Hi, can someone walk me through the basic workflow of reading and writing data with classes generated from DDL? I have defined some struct-like records using DDL. For example:

        class Customer {
            ustring FirstName;
            ustring LastName;
            ustring CardNo;
            long LastPurchase;
        }

    I've compiled this to get a Customer class and included it in my project. I can easily see how to use this as input and output for mappers and reducers (the generated class implements Writable), but not how to read and write it to file. The JavaDoc for the org.apache.hadoop.record package talks about serializing these records in binary, CSV or XML format. How do I actually do that? Say my reducer produces IntWritable keys and Customer values. What OutputFormat do I use to write the result in CSV format? What InputFormat would I use to read the resulting files in later, if I wanted to perform analysis over them?

    Read the article

  • Generating easy-to-remember random identifiers

    - by Carl Seleborg
    Hi all, as all developers do, we constantly deal with some kind of identifiers as part of our daily work. Most of the time, it's about bugs or support tickets. Our software, upon detecting a bug, creates a package whose name is formatted from a timestamp and a version number, which is a cheap way of creating reasonably unique identifiers to avoid mixing packages up. Example: "Bug Report 20101214 174856 6.4b2". My brain just isn't that good at remembering numbers. What I would love to have is a simple way of generating alphanumeric identifiers that are easy to remember. Examples would be "azil3", "ulmops", "fel2way", etc. I just made these up, but they are much easier to recognize when you see many of them at once. I know of algorithms that perform trigram analysis on text (say, you feed them a whole book in German) and that can generate strings that look and feel like German words. This requires lots of data, though, which makes it slightly less suitable for embedding in an application just for this purpose. Do you know of anything else? Thanks! Carl
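
    A lighter-weight cousin of trigram generation, needing no corpus at all, is to alternate consonants and vowels: the output is pronounceable and surprisingly memorable, and a trailing digit restores some of the lost entropy. A sketch:

        import random

        CONSONANTS = 'bdfglmnprstvz'
        VOWELS = 'aeiou'

        def memorable_id(syllables=3, rng=random):
            """Pronounceable identifier: consonant-vowel pairs plus one digit."""
            word = ''.join(rng.choice(CONSONANTS) + rng.choice(VOWELS)
                           for _ in range(syllables))
            return word + str(rng.randrange(10))

        print([memorable_id() for _ in range(5)])  # e.g. ['zelupo3', 'tarimu7', ...]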

    Read the article
