Search Results

Search found 5330 results on 214 pages for 'geek history'.


  • Help with dynamic-wind and call/cc

    - by josh
    I am having some trouble understanding the behavior of the following Scheme program:

        (define c
          (dynamic-wind
            (lambda () (display 'IN) (newline))
            (lambda () (call/cc (lambda (k) (display 'X) (newline) k)))
            (lambda () (display 'OUT) (newline))))

    As I understand it, c will be bound to the continuation created right before (display 'X). But using c seems to modify c itself! The define above prints IN, X, and OUT, as I expected:

        IN
        X
        OUT

    And c is a procedure:

        #;2> c
        #<procedure (a9869 . results1678)>

    Now, I would expect that when it is called again, X would be printed, but it is not!

        #;3> (c)
        IN
        OUT

    And now c is not a procedure anymore, so a second invocation of (c) won't work:

        #;4> c
        ;; the REPL doesn't answer this, so there are no values returned
        #;5> (c)
        Error: call of non-procedure: #<unspecified>

        Call history:
        <syntax>  (c)
        <eval>    (c)  <--

    I was expecting each invocation of (c) to do the same thing: print IN, X, and OUT. What am I missing?

    Read the article

  • Job Opportunities

    - by James
    I have a few questions about my job opportunities, and I'd appreciate it if people could give me some feedback on what I should have in front of me. I am graduating from the University of Wisconsin-La Crosse this December with a degree in CS and a math minor. I have a cumulative GPA of 3.84 and a major GPA of 4.0 right now (though I still have many classes in front of me). I already have a degree from the U of Minnesota (History, 3.69 GPA) and have worked in the business world for 3+ years (working for a small company in the baseball world, doing some computer programming, statistical research, operations work, technical writing, etc.). I know Java and C well and am also comfortable with Perl, and I should have a good grasp of SQL by graduation. I am looking to get a nice programming job (and am open to moving). Does anyone have advice on things I should learn? Also, I would like to know what everyone thinks about my chances of landing a decent job (I realize that is subjective), and any ideas on the salary I should be looking for (say I am working in a metropolitan area). Thanks.

    Read the article

  • AI: Determining what tests to run to get the most useful data

    - by Sai Emrys
    This is for http://cssfingerprint.com

    I have a system (see the about page on the site for details) where:

    - I need to output a ranked list, with confidences, of categories that match a particular feature vector
    - the binary feature vectors are a list of site IDs and whether this session detected a hit
    - feature vectors are, for a given categorization, somewhat noisy (sites will decay out of history, and people will visit sites they don't normally visit)
    - categories are a large, non-closed set (user IDs)
    - my total feature space is approximately 50 million items (URLs)
    - for any given test, I can only query approx. 0.2% of that space
    - I can only make the decision of what to query, based on results so far, ~10-30 times, and must do so in <~100ms (though I can take much longer to do post-processing, relevant aggregation, etc.)
    - getting the AI's probability ranking of categories based on results so far is mildly expensive; ideally the decision will depend mostly on a few cheap SQL queries
    - I have training data that can say authoritatively that any two feature vectors are the same category, but not that they are different (people sometimes forget their codes and use new ones, thereby making a new user ID)

    I need an algorithm to determine what features (sites) are most likely to have a high ROI to query (i.e. to better discriminate between plausible-so-far categories [users], and to increase certainty that it's any given one). This needs to balance exploitation (test based on prior test data) and exploration (test stuff that hasn't been tested enough to find out how it performs). There's another question that deals with a priori ranking; this one is specifically about a posteriori ranking based on results gathered so far.

    Right now, I have little enough data that I can just always test everything that anyone else has ever gotten a hit for, but eventually that won't be the case, at which point this problem will need to be solved. I imagine that this is a fairly standard problem in AI - having a cheap heuristic for what expensive queries to make - but it wasn't covered in my AI class, so I don't actually know whether there's a standard answer. So relevant reading that's not too math-heavy would be helpful, as well as suggestions for particular algorithms. What's a good way to approach this problem?
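
    For what it's worth, the exploitation/exploration trade-off described above is exactly what multi-armed bandit index policies address. A minimal sketch of a UCB-style ranking (in Python; the per-site hit/trial aggregates are assumed to come from the cheap SQL queries mentioned above, and the field names are made up for illustration):

        import math

        def rank_sites_ucb(sites, total_tests):
            # sites: iterable of dicts like {'id': ..., 'hits': ..., 'trials': ...},
            # i.e. per-site aggregates of past query results.
            # Returns site IDs ranked by an Upper Confidence Bound score that
            # balances the observed hit rate (exploitation) against the
            # uncertainty of rarely-queried sites (exploration).
            scored = []
            for s in sites:
                if s['trials'] == 0:
                    score = float('inf')  # always try an untested site once
                else:
                    mean = s['hits'] / s['trials']
                    bonus = math.sqrt(2 * math.log(total_tests) / s['trials'])
                    score = mean + bonus
                scored.append((score, s['id']))
            scored.sort(reverse=True)
            return [site_id for _, site_id in scored]

    Querying the top ~0.2% of this ranking each round is one standard way to trade off the two goals; "multi-armed bandit" and "UCB1" are good search terms for the underlying theory.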

    Read the article

  • Making that move from junior > mid level

    - by dotnetdev
    Hi, Before I start, I know there is another thread about this very issue (http://stackoverflow.com/questions/2352874/moving-from-junior-developer-to-mid-level). I am in the very same situation, but of course every person and every company/employment history is different. In my current company, I have not done one piece of coding from start to finish under the oversight of my manager and a Project Manager managing the work, deadlines, etc. I am basically an odd-jobs type of guy. The coding I do is on the side of whatever boring spreadsheet or Word document I have to write. It's very illogical that you're a coder and you're doing it in secret. In another job, which I had for 3 months (I was made redundant), 1 year's experience was required, perhaps because I was the sole developer. It wasn't too hard, but I was solely responsible, and I learnt a lot from that. I had 2 other 3-month jobs (contracts), so in total I have been working for 1 year and 9 months. I have now found a job which I'm in the last stage for, which needs 3 years of .NET experience and 2 years of SharePoint. How can I know if I am ready for this job? My current job has been going on for 1 year, but it doesn't mean squat apart from explaining how I have spent my time. It does not tell me what level I am at (apart from the huge skills gap I have opened up against my peers because I practise at home). So: 1 year of doing nothing at work, but 1 year of doing loads at home. In fact, in 1 week off I do more at home than I have done in the company since I started. How can I know if I am ready for such a job? I am generally very confident given all I've achieved in coding, but I have no idea what a job requiring this sort of experience entails (what day-to-day problems I would be facing). Is there any advice on how to handle this transition? Thanks

    Read the article

  • Which Database to choose?

    - by Sundar
    I have the following criteria:

    - The database should be protected with a username and password. It should not be possible to copy the database file and use it elsewhere, as you can with MS Access.
    - There will be no central database server. Each machine will run its own database server locally, and the user will initiate synchronization. The concept is inspired by distributed version control systems like Git, so it should have good replication support. Strong consistency is not needed; users will synchronize each other's databases when they need to.
    - In case of conflicts, it should be possible to find the conflict and present it (from the application) to the user for fixing.
    - Revisions of data, if available, would be good, e.g. the entire history of changes to an invoice.

    I explored document-oriented databases and am inclined towards them, but I don't know which one to choose. The database is small; it will not reach even 1 GB in the next few years (say 3 years). Please feel free to suggest any database which you think might be suitable. Any pointers are highly appreciated. Thanks in advance.

    Read the article

  • Setup.exe files downloading without cab files over poor connections

    - by Colin
    We have customers who are trying to download a setup.exe file over mobile connections that appear to be very slow. They have reported that when they click on the downloaded setup.exe, the install wizard starts up, but part way through the wizard they get an error message indicating that a cab file is corrupt or missing. They couriered a problem tablet to us, and we downloaded the file without a problem, but I could replicate the problem by using https to download the file (https is normally used to access the rest of the site, although it is not necessary for the download). When I did this, the downloaded file was 2.8 MB; it should be 8 MB. I don't think that https is the root cause of the problem, because I can see the download link in the browser history using http, so I know the customer tried to download using http. I think the issue is that the poor connection is preventing a complete download, but the browser is acting as if the download is complete. Is there a way to ensure the file is downloaded fully, or not at all? Why does the browser not indicate that the download is incomplete?
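
    One belt-and-braces option, independent of whatever the browser does, is to publish the expected size and checksum next to the download link and verify the file before running it. A minimal sketch of the check (in Python; the size and hash values are placeholders, not the real ones for this installer):

        import hashlib

        EXPECTED_SIZE = 8_388_608                  # placeholder: published size of setup.exe in bytes
        EXPECTED_SHA256 = "<published checksum>"   # placeholder

        def download_is_complete(path):
            # Compare the file's size and SHA-256 digest against the
            # published values; a truncated download fails both tests.
            digest = hashlib.sha256()
            size = 0
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(65536), b""):
                    size += len(chunk)
                    digest.update(chunk)
            return size == EXPECTED_SIZE and digest.hexdigest() == EXPECTED_SHA256

    The same idea is why many download pages list an MD5/SHA checksum beside each file; a self-extracting installer can also carry its own payload checksum and refuse to run when it doesn't match.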

    Read the article

  • TeamCity and pending Git merge branch commit keeps build with failed tests

    - by Vladimir
    We use TeamCity for continuous integration and Git for source control. Generally it works pretty well: it is convenient, modern, and gives us quick feedback when tests fail. There is a strange behavior related to the specifics of Git merges. Here are the steps of the case:

    1. First developer pulls from the master repo.
    2. Second developer pulls from the master repo.
    3. First developer makes commit A locally.
    4. Second developer makes commit B locally.
    5. Second developer pushes commit B.
    6. First developer wants to push commit A but is unable to, because he has to pull commit B first.
    7. First developer pulls from the remote repository.
    8. First developer pushes commit A and the generated merge commit.

    The history of commits in the master repo is now the following:

        B      (second developer)
        A      (first developer)
        merge  (first developer)

    Now let's assume that the second developer fixed some failing tests in his commit B. What TeamCity does is the following:

    1. Commit B arrives - TeamCity runs build #1, with all tests passing.
    2. Commit A arrives - TeamCity runs build #2 (without commit B), and the test bar becomes red!

    TeamCity decides that the pending "merge branch" commit doesn't contain any changes (no new files) - but it actually does contain the merge of commit B - so TeamCity doesn't run a new build that would make the tests green again. So there are two problems here:

    1. In our case, the failed tests come back in the second build (commit A).
    2. TeamCity doesn't run a new build to make the tests green again.

    Does anybody know how to fix both of these problems? I'd prefer a reasonable general approach.

    Read the article

  • Should I store generated code in source control

    - by Ron Harlev
    This is a debate I'm taking part in. I would like to get more opinions and points of view. We have some classes that are generated at build time to handle DB operations (in this specific case with SubSonic, but I don't think that is very important for the question). The generation is set up as a pre-build step in Visual Studio, so every time a developer (or the official build process) runs a build, these classes are generated and then compiled into the project. Now some people are claiming that having these classes saved in source control could cause confusion, in case the code you get doesn't match what would have been generated in your own environment. I would like to have a way to trace back the history of the code, even if it is usually treated as a black box. Any arguments or counter-arguments?

    UPDATE: I asked this question because I really believed there was one definitive answer. Looking at all the responses, I can say with a high level of certainty that there is no such answer. The decision should be made based on more than one parameter. Reading the answers below provides a very good guideline to the types of questions you should be asking yourself when having to decide on this issue. I won't select an accepted answer at this point, for the reasons mentioned above.

    Read the article

  • How to migrate project from RCS to git? (SOLVED)

    - by Norman Ramsey
    I have a 20-year-old project that I would like to migrate from RCS to git, without losing the history. All web pages suggest that the One True Path is through CVS. But after an hour of Googling and trying different scripts, I have yet to find anything that successfully converts my RCS project tree to CVS. I'm hoping the good people at Stack Overflow will know what actually works, as opposed to what is claimed to work and doesn't. (I searched Stack Overflow using both the native SO search and a Google search, but if there's a helpful answer in the database, I missed it.)

    UPDATE: The rcs-fast-export tool at http://git.oblomov.eu/rcs-fast-export was repaired on 14 April 2010, and that version seems to work for me. This tool converts straight to git with no intermediate CVS. Thanks Giuseppe and Jakub!!!

    Things that did not work, that I still remember:

    - The rcs-to-cvs script that ships in the contrib directory of the CVS sources
    - The rcs-fast-export tool at http://git.oblomov.eu/rcs-fast-export, in versions before 14 April 2010
    - The rcs2cvs script found in a document called "CVS-RCS HOW-TO Document for Linux"

    Read the article

  • How can UISearchDisplayController autorelease cause crash in a different view controller?

    - by Tofrizer
    Hi, I have two view controllers, A and B. From A, I navigate to view controller B as follows:

        // in AViewController
        - (void)navigateToB {
            BViewController *bViewController = [[BViewController alloc] initWithNibName:@"BView" bundle:nil];
            bViewController.bProperty1 = SOME_STRING_CONSTANT;
            bViewController.title = @"A TITLE OF A VC's CHOOSING";
            [self.navigationController pushViewController:bViewController animated:YES];
            [bViewController release]; // <----- releasing 0x406c1e0
        }

    In BViewController, the property bProperty1 is declared with copy as below (note that B also contains a UITableView and other properties):

        @property (nonatomic, copy) NSString *bProperty1;

    Everything appeared to work fine when navigating back and forth between A and B. That is, until I added a UISearchDisplayController to the table view contained in BViewController. Now when I navigate out of B, back to A, the app crashes. The stack trace shows what looks like the search display controller being autoreleased at the time of the crash:

        #0 0x009663a7 in ___forwarding___
        #1 0x009426c2 in __forwarding_prep_0___
        #2 0x018c8539 in -[UISearchDisplayController _destroyManagedTableView]
        #3 0x018c8ea4 in -[UISearchDisplayController dealloc]
        #4 0x00285ce5 in NSPopAutoreleasePool

    NSZombies shows:

        -[BViewController respondsToSelector:]: message sent to deallocated instance 0x406c1e0

    And the malloc history on this points to the bViewController already released in A's navigateToB method above:

        Call [2] [arg=132]: thread_a065e720 |start ... <snip> ..._sendActionsForEvents:withEvent:] |
        -[UIControl sendAction:to:forEvent:] | -[UIApplication sendAction:to:from:forEvent:] |
        -[AViewController navigateToB] | +[NSObject alloc] | +[NSObject allocWithZone:] |
        _internal_class_createInstance | _internal_class_createInstanceFromZone | calloc | malloc_zone_calloc

    Can someone please give me any ideas on what is happening here? In the navigateToB method, once the bViewController is released (after pushViewController), that should be it for bViewController. Nothing else even knows about it, as it is local to the navigateToB method block and it has been released. When navigating from B back to A, nothing is invoked in viewDidLoad, viewWillAppear, etc. that will re-enter navigateToB. It looks like somehow the search display controller has a reference to something in my AViewController, and as it is autoreleased it is taking this "something" down with it, but I cannot understand how this is possible, especially as I'm using copy to pass data between A and B. I'm going potty over this. I'm sure this is my mistake somewhere, and so I turn to you, Stack Overflow legends, for any words of wisdom or advice on how to resolve this. Many Thanks.

    Read the article

  • Pass a hidden jqGrid value when editing on ASP.Net MVC

    - by taylonr
    I have a jqGrid in an ASP.NET MVC application. The grid is defined as:

        $("#list").jqGrid({
            url: '<%= Url.Action("History", "Farrier", new { id = ViewData["horseId"] }) %>',
            editurl: '/Farrier/Add',
            datatype: 'json',
            mtype: 'GET',
            colNames: ['horseId', 'date', 'notes'],
            colModel: [
                { name: 'horseId', index: 'horseId', width: 250, align: 'left',
                  editable: false, editrules: { edithidden: true }, hidden: true },
                { name: 'date', index: 'farrierDate', width: 250, align: 'left', editable: true },
                { name: 'notes', index: 'farrierNotes', width: 100, align: 'left', editable: true }
            ],
            pager: jQuery('#pager'),
            rowNum: 5,
            rowList: [5, 10, 20, 50],
            sortname: 'farrierDate',
            sortorder: "DESC",
            viewrecords: true
        });

    What I want to be able to do is add a row to the grid where the horseId is either (a) not displayed or (b) greyed out, but is still passed to the controller when saving. The way it's set up, this grid will only ever have one horseId at a time (it will live on a horse's property page). The only time I've gotten anything to work is when I made the column editable, but that opens it up for the user to modify the ID, which isn't a good idea. So is there some way I can set this value before submitting the data? The ID does exist as a variable on this page, if that helps any (and I've checked that it isn't null). Thanks

    Read the article

  • Are local variables in Fortran 77 static or stack dynamic?

    - by mm2887
    For my programming languages class, one homework problem asks:

        Are local variables in FORTRAN static or stack dynamic? Are local variables that are INITIALIZED to a default value static or stack dynamic? Show me some code with an explanation to back up your answer. Hint: The easiest way to check this is to have your program test the history sensitivity of a subprogram. Look at what happens when you initialize the local variable to a value and when you don't. You may need to call more than one subprogram to lock in your answer with confidence.

    I wrote a few subroutines that:

    - create a variable
    - print the variable
    - initialize the variable to a value
    - print the variable again

    Each successive call to the subroutine prints out the same random value for the variable when it is uninitialized, and then it prints out the initialized value. What is this random value when the variable is uninitialized? Does this mean Fortran uses the same memory location for each call to the subroutine, or does it dynamically create space and initialize the variable randomly? My second subroutine also creates a variable, but then calls the first subroutine. The result is the same, except the random number printed for the uninitialized variable is different. I am very confused. Please help! Thank you so much.

    Read the article

  • Should I move big data blobs in JSON or in separate binary connection?

    - by Amagrammer
    QUESTION: Is it better to send large data blobs in JSON for simplicity, or to send them as binary data over a separate connection? If the former, can you offer tips on how to optimize the JSON to minimize size? If the latter, is it worth it to logically connect the JSON data to the binary data using an identifier that appears in both, e.g. as "data": "<unique identifier>" in the JSON, with the first bytes of the data blob being the same <unique identifier>?

    CONTEXT: My iPhone application needs to receive JSON data over the 3G network. This means that I need to think seriously about the efficiency of data transfer, as well as the load on the CPU. Most of the data transfers will be relatively small packets of text data, for which JSON is a natural format and for which there is no point in worrying much about efficiency. However, some of the most critical transfers will be big blobs of binary data - definitely at least 100 kilobytes of data, and possibly closer to 1 megabyte as customers accumulate a longer history with the product. (Note: I will be caching what I can on the iPhone itself, but the data still has to be transferred at least once.) It is NOT streaming data. I will probably use a third-party JSON SDK - the one I am using during development is here. Thanks
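
    To make the identifier idea concrete, here is a minimal sketch (in Python rather than Objective-C, just to show the framing; the envelope format is invented for illustration) of pairing a JSON message with a binary frame via a shared ID:

        import json
        import uuid

        def make_envelope(metadata, blob):
            # Tag the JSON metadata and the binary blob with the same 16-byte
            # identifier, so the receiver can pair them even if they arrive on
            # separate connections and out of order.
            blob_id = uuid.uuid4()
            meta = dict(metadata, data=blob_id.hex)   # JSON side: "data": "<id>"
            frame = blob_id.bytes + blob              # binary side: ID prefix + payload
            return json.dumps(meta), frame

        def blob_id_of(frame):
            # The first 16 bytes of a binary frame are its identifier.
            return frame[:16].hex()

    The receiver keeps a small table keyed by the identifier and completes an entry once both halves have arrived; this keeps the JSON small (a 32-character ID instead of a base64 blob that would inflate the payload by roughly a third).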

    Read the article

  • Database Design Question regarding duplicate information.

    - by galford13x
    I have a database that contains a history of product sales. For example, the following table:

        CREATE TABLE SalesHistoryTable (
            OrderID,   -- order number, unique across all orders
            ProductID, -- key to look up product info in another table
            Price,     -- price of the product per unit at the time of the order
            Quantity,  -- quantity of the product for the order
            Total,     -- total cost of the order for the product (Price * Quantity)
            Date,      -- date of the order
            StoreID,   -- the store that created the order
            PRIMARY KEY (OrderID));

    The table will eventually have millions of transactions. From this, profiles can be created for products in different geographical regions (based on the StoreID). Creating these profiles can be very time-consuming as a database query. For example:

        SELECT ProductID, StoreID,
               SUM(Total) AS Total,
               SUM(Quantity) AS QTY,
               SUM(Total)/SUM(Quantity) AS AvgPrice
        FROM SalesHistoryTable
        GROUP BY ProductID, StoreID;

    The above query could be used to get information on products for any particular store. You could then determine which store has sold the most and made the most money, and which on average sells for the most/least. This would be very costly to run as a normal query at any time. What are some design decisions that would allow these types of queries to run faster, assuming storage size isn't an issue? For example, I could create another table with duplicate information:

        StoreID (Key), ProductID, TotalCost, QTY, AvgPrice

    and provide a trigger so that when a new order is received, the entry for that store is updated in the new table. The cost of the update is almost nothing. What should be considered in the above scenario?
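
    As a runnable sketch of that trigger idea (SQLite syntax for brevity, with simplified column types; the summary-table and trigger names are invented), driven from Python:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE SalesHistoryTable (
                OrderID INTEGER PRIMARY KEY, ProductID INTEGER, Price REAL,
                Quantity INTEGER, Total REAL, Date TEXT, StoreID INTEGER);

            -- One row per (store, product); AvgPrice is derived on read,
            -- so it never goes stale.
            CREATE TABLE SalesSummary (
                StoreID INTEGER, ProductID INTEGER,
                TotalCost REAL, QTY INTEGER,
                PRIMARY KEY (StoreID, ProductID));

            -- Fold each new order into the running aggregates.
            CREATE TRIGGER fold_order AFTER INSERT ON SalesHistoryTable
            BEGIN
                INSERT OR IGNORE INTO SalesSummary VALUES (NEW.StoreID, NEW.ProductID, 0, 0);
                UPDATE SalesSummary
                   SET TotalCost = TotalCost + NEW.Total,
                       QTY = QTY + NEW.Quantity
                 WHERE StoreID = NEW.StoreID AND ProductID = NEW.ProductID;
            END;
        """)

        conn.execute("INSERT INTO SalesHistoryTable VALUES (1, 7, 2.5, 4, 10.0, '2010-05-01', 3)")
        print(conn.execute(
            "SELECT StoreID, ProductID, TotalCost, QTY, TotalCost/QTY AS AvgPrice "
            "FROM SalesSummary").fetchall())

    The read-side query then becomes a primary-key lookup instead of a million-row GROUP BY; the trade-offs to weigh are the extra write cost per order and keeping the aggregates correct under updates and deletes, not just inserts.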

    Read the article

  • Ideas for a rudimentary software licensing implementation

    - by Ross
    I'm trying to decide how to implement a very basic licensing solution for some software I wrote. The software will run on my (hypothetical) clients' machines, the idea being that the software will immediately quit (with a friendly message) if the client is running it on more than n machines (n being the number of licenses they have purchased). Additionally, the clients are non-tech-savvy, to the point where "basic" is good enough. Here is my current design, but given that I have little to no experience with this topic, I wanted to ask SO before I started any development on it:

    - A remote server hosts a MySQL database with a table containing two columns: client key and license quantity.
    - The client-side application connects to the MySQL database on startup, offering its client key, which I've put into a properties file packaged into the distribution (I would create a new distribution for each new client).
    - Chances are, I'll need a second table to store validation history, so that with some short logic the software can decide if it can be run on a given machine (maybe a sliding window of n machines using the software per 24 hours).
    - If the software cannot establish a connection to the MySQL database, or decides that it's over the n allowed machines per day, it closes.
    - The connection info for the remote server hosting the MySQL database would be hard-coded into the app? (That sounds like a bad idea, but otherwise they could point it to some other always-validates-to-success server.)

    I think that about covers my initial design. The intent is that while it certainly isn't foolproof, I think I've made it at least somewhat difficult to create an easily sharable cracking solution. Also, I can easily adjust the license amount for a given client/key pair. I've got to figure this has been done a million times before, so tell me about a better solution that's just as simple to implement and provides the same (low) amount of security. In the event that external libraries are used, I prefer Java, as that's what the software has been written in.
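
    To make the sliding-window rule concrete, here is a sketch of the server-side decision (in Python with SQLite for brevity - the same logic ports directly to a Java servlet over MySQL; the table and column names are invented for illustration):

        import sqlite3
        from datetime import datetime, timedelta

        def may_run(conn, client_key, machine_id):
            # Allow the run if this machine is already inside the key's
            # 24-hour window, or if the window still has a free seat.
            row = conn.execute(
                "SELECT licenses FROM clients WHERE client_key = ?",
                (client_key,)).fetchone()
            if row is None:
                return False  # unknown key
            (licenses,) = row
            cutoff = (datetime.utcnow() - timedelta(hours=24)).isoformat()
            machines = {m for (m,) in conn.execute(
                "SELECT DISTINCT machine_id FROM validations "
                "WHERE client_key = ? AND seen_at >= ?",
                (client_key, cutoff))}
            if machine_id in machines or len(machines) < licenses:
                conn.execute(
                    "INSERT INTO validations (client_key, machine_id, seen_at) "
                    "VALUES (?, ?, ?)",
                    (client_key, machine_id, datetime.utcnow().isoformat()))
                return True
            return False  # all n seats used by other machines in the window

    One design note: putting this check behind a small web service you control, rather than exposing the MySQL port directly, sidesteps both the hard-coded connection string and the point-it-at-a-fake-server attack, since the response can be signed with a key baked into the app.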

    Read the article

  • git workflow incorporating many, but not all commits from many forks

    - by becomingGuru
    I have a git repo. It has been forked several times, and many independent commits have been made on top of it - all normal, like what happens in many GitHub-hosted projects. Now, what exact workflow should I follow if I want to see all those commits individually and apply the ones I like? The workflow I followed, which is not optimal, was to create a branch named after the GitHub username, merge the changes into my master, and manually undo any changes from the commits I don't need (there were not many, so it worked). What I want is the ability to see all commits from the different forks individually, and to cherry-pick and apply them on top of my master. What is the workflow for that? And what GUI (gitk?) lets me see all the different individual commits? I realize that merge should be a primary part of the workflow rather than cherry-pick, since cherry-picking creates a different commit (from git's point of view). Even rebasing others' changes on top of mine might not preserve the history on the graph to indicate that it is their commits I have rebased. So then, how do I ignore just a few commits out of a lot of them? I think GitHub should have an "apply this commit on top of my master" button in their graph after each commit node, so I could just pull it after doing all that.

    Read the article

  • Adding changes from one Mercurial repository to another

    - by Patrik Hägne
    When changing the VCS for my project FakeItEasy from SVN to Mercurial on Google Code, I was a bit too eager (I'm funny like that). What I did was just check the latest version out of SVN and then commit that checkout as the first revision of the new Mercurial repo. This obviously has the effect that all history is lost. Later, when getting a bit better accustomed to Mercurial, I realized that there is such a thing as a "convert extension" that allows you to convert an SVN repo into a Mercurial repo. Now what I want to do is convert the old SVN repo and then import all changesets from the currently existing Mercurial repo into this converted repo, except the very first Mercurial commit. I've converted the SVN repo to a local Mercurial repo, but now I'm stuck. I thought I'd be able to use the convert extension to bring the current Mercurial repository into the converted one, with a splice map removing the first commit, but I cannot seem to get this to work. I've also tried to just use convert without a splice map to get all changesets from the current Mercurial repo into the converted one, and then rebase the second revision of the current repo onto the last commit from the old SVN repository, but I can't get that to work either. To make this clearer, let's say I have these two repositories:

        A: revA1-revA2
        B: revB1-revB2-revB3    (where revB1 is actually a copy of revA2)

    Now I want to combine these two into a new repository containing:

        C: revA1-revA2-revB2-revB3

    Read the article

  • How to create database records for a financial year

    - by David A Gibson
    Hello, I need to create entities (specifically contracts) in a database table that are associated with a financial year. These contracts will be applied to projects. Any contract variation will be recorded by creating a new contract record for the same financial year, but the original will remain associated with the project as history. The projects can last several years, and so at any one time a project will have a live contract record for each year, as well as any number of historic contracts for that year. All of which is incidental, but I'm trying to provide some context. If the contracts were for the calendar year, it would be easy: I'd just store the year, either as a date field with the 1st of January or as a plain integer. However, the contracts run for financial years, and I don't know how to approach this. I don't want a separate table containing the financial years, as I don't want the users to have to maintain it. I don't want to store the financial year as a string ("2009/2010"), as this is not ideal for sorting/extracting the data. Any ideas would be helpful; my best so far is to have the starting and ending years in 2 columns and just "KNOW" that the start is April of the starting year, etc. Thanks
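
    A common refinement of that last idea is to store only the starting year as an integer and derive everything else, since the ending year and the boundary dates follow from it. A minimal sketch (in Python, assuming an April-to-March financial year as described above):

        from datetime import date

        FY_START_MONTH = 4  # assumption: financial year runs April..March

        def financial_year(d):
            # Identify a financial year by the calendar year it starts in:
            # 2009 means 2009-04-01 .. 2010-03-31.
            return d.year if d.month >= FY_START_MONTH else d.year - 1

        def fy_label(start_year):
            # "2009/2010" for display only - the stored integer sorts correctly.
            return "{}/{}".format(start_year, start_year + 1)

        assert financial_year(date(2010, 3, 31)) == 2009
        assert financial_year(date(2010, 4, 1)) == 2010

    Storing the single integer sorts and ranges cleanly in SQL, needs no user-maintained lookup table, and keeps the April boundary in exactly one place in the code.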

    Read the article

  • modifying ajax returned data before showing

    - by Nick
    I'm processing a subscription form with jQuery/Ajax and need to display the results with the success function (they differ depending on whether the email already exists in the database). But the trick is that I do not need the h2 and the first p tag. How can I show only div#alertmsg and the second p tag? I've tried removing the unnecessary elements with the method described here, but it didn't work. Thanks in advance. Here is my code:

        var dataString = $('#subscribe').serialize();
        $.ajax({
            type: "POST",
            url: "/templates/willrock/pommo/user/process.php",
            data: dataString,
            success: function(data) {
                var result = $(data);
                $('#success').append(result);
            }
        });

    Here is the data returned:

        <h2>Subscription Review</h2>
        <p><a href="url" onClick="history.back(); return false;"><img src="/templates/willrock/pommo/themes/shared/images/icons/back.png" alt="back icon" class="navimage" /> Back to Subscription Form</a></p>
        <div id="alertmsg" class="error">
            <ul>
                <li>Email address already exists. Duplicates are not allowed.</li>
            </ul>
        </div>
        <p><a href="login.php">Update your records</a></p>

    Read the article

  • Compressibility Example

    - by user285726
    From my algorithms textbook:

        The annual county horse race is bringing in three thoroughbreds who have never competed against one another. Excited, you study their past 200 races and summarize these as probability distributions over four outcomes: first ("first place"), second, third, and other.

            Outcome   Aurora   Whirlwind   Phantasm
            first     0.15     0.30        0.20
            second    0.10     0.05        0.30
            third     0.70     0.25        0.30
            other     0.05     0.40        0.20

        Which horse is the most predictable? One quantitative approach to this question is to look at compressibility. Write down the history of each horse as a string of 200 values (first, second, third, other). The total number of bits needed to encode these track-record strings can then be computed using Huffman's algorithm. This works out to 290 bits for Aurora, 380 for Whirlwind, and 420 for Phantasm (check it!). Aurora has the shortest encoding and is therefore in a strong sense the most predictable.

    How did they get 420 for Phantasm? I keep getting 400 bits, as follows: combine first and other into a node of weight 0.4, combine second and third into a node of weight 0.6; every outcome then gets a 2-bit code, and 2 bits x 200 races = 400 bits. Is there something I've misunderstood about the Huffman encoding algorithm?

    Textbook available here: http://www.cs.berkeley.edu/~vazirani/algorithms.html (page 156).
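
    For what it's worth, here is the quick sketch I used to check the arithmetic (Python; it uses the standard identity that the expected code length equals the sum of the weights of the internal nodes created by the merges):

        import heapq

        def huffman_bits(probs, n_symbols=200):
            # Each heap entry is (subtree weight, sum of internal-node
            # weights inside that subtree). Merging two subtrees creates
            # a new internal node of weight p1 + p2.
            heap = [(p, 0.0) for p in probs]
            heapq.heapify(heap)
            while len(heap) > 1:
                p1, c1 = heapq.heappop(heap)
                p2, c2 = heapq.heappop(heap)
                heapq.heappush(heap, (p1 + p2, c1 + c2 + p1 + p2))
            return heap[0][1] * n_symbols

        print(huffman_bits([0.15, 0.10, 0.70, 0.05]))  # Aurora:    290 bits (up to float rounding)
        print(huffman_bits([0.30, 0.05, 0.25, 0.40]))  # Whirlwind: 380 bits
        print(huffman_bits([0.20, 0.30, 0.30, 0.20]))  # Phantasm:  400 bits, not 420

    It reproduces the book's 290 and 380 but gives 400 for Phantasm, which matches my hand calculation above.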

    Read the article

  • subtotals in columns using reshape2 in R

    - by user1043144
    I have spent some time now learning reshape2 and plyr, but I still do not get it. This time I have a problem with (a) subtotals and (b) passing different aggregate functions. Here is an example using data from the excellent tutorial on mrdwab's blog, http://news.mrdwab.com/

        # libraries
        library(plyr)
        library(reshape2)

        # get data and add a few more variables
        book.sales = read.csv("http://news.mrdwab.com/data-booksales")
        book.sales$Stock = book.sales$Quantity + 10
        book.sales$SubjCat[(book.sales$Subject == 'Economics') |
                           (book.sales$Subject == 'Management')] <- '1_EconSciences'
        book.sales$SubjCat[book.sales$Subject %in%
            c('Anthropology', 'Politics', 'Sociology')] <- '2_SocSciences'
        book.sales$SubjCat[book.sales$Subject %in%
            c('Communication', 'Fiction', 'History', 'Research', 'Statistics')] <- '3_other'

        # to get to my starting data frame (close to the project I am working on)
        book.sales1 <- ddply(book.sales,
            c('Region', 'Representative', 'SubjCat', 'Subject', 'Publisher'),
            summarize,
            Stock = sum(Stock),
            Sold = sum(Quantity),
            Ratio = round((100 * sum(Quantity) / sum(Stock)), digits = 1))

        # melt it
        m.book.sales = melt(data = book.sales1,
            id.vars = c('Region', 'Representative', 'SubjCat', 'Subject', 'Publisher'),
            measured.vars = c('Stock', 'Sold', 'Ratio'))

        # cast it
        Tab1 <- dcast(data = m.book.sales,
            formula = Region + Representative ~ Publisher + variable,
            fun.aggregate = sum,
            margins = c('Region', 'Representative'))

    Now my questions:

    1. I have been able to add the subtotals in rows. But is it also possible to add margins in the columns - for example, the total sold across all publishers?
    2. There is a problem with the "Ratio" columns: how can I get the mean instead of the sum for this variable?

    P.S. I have seen some examples using reshape. Would you recommend using it instead of reshape2 (which seems not to include the functionality of those two functions)?

    Read the article

  • Starting with versioning mysql schemata without overkill. Good solutions?

    - by tharkun
    I've arrived at the point where I realise that I must start versioning my database schemata and changes. I have consequently read the existing posts on SO about that topic, but I'm not sure how to proceed. I'm basically a one-man company, and not long ago I didn't even use version control for my code. I'm on a Windows environment, using Aptana (IDE) and SVN (with Tortoise), and I work on PHP/MySQL projects. What is an efficient and sufficient (no overkill) way to version my database schemata? I do have a freelancer or two on some projects, but I don't expect a lot of branching and merging going on. So basically I would like to keep track of schema versions alongside my code revisions.

    [edit] Momentary solution: for the moment I have decided I will just make a schema dump, plus one with the necessary initial data, whenever I'm going to commit a tag (stable version). That seems to be just enough for me at the current stage. [/edit]

    [edit2] Plus, I'm now also using a third file called increments.sql, where I put all the changes with dates, etc., to make it easy to trace the change history in one file. From time to time I integrate the changes into the other two files and empty increments.sql. [/edit2]

    Read the article

  • fitnesse test framework, arbitrary properties for test and queries/test runs based on them?

    - by Marcel
    Hi, our testers have the requirement to store multiple properties for a test that are not present in the "properties" - e.g. they want to store priority, a description (not in the wiki page itself), and so on - and they don't want to use the tagging mechanism. Is there a way to store any kind of new XML node in the properties.xml for a test? These properties should then be used to:

    - query the fields via the search screen
    - run tests based on the SuiteResponder, e.g. ?suite=xxx&TAGx=abc&TAGy=cde
    - be returned by the ?properties responder
    - appear in the test history of the test run

    In essence, they want to store any kind of "meta" information in the properties.xml and work with it in all kinds of ways: search, run, etc. Does anybody here know if there is already something available in that direction? If not, I think we will have to "pimp" these features into FitNesse to make our testers happy. Thanks a lot, any help appreciated. Marcel

    PS: I've also posted the question in the Yahoo FitNesse group.

    Read the article

  • what libraries or platforms should I use to build web apps that provide real-time, asynchronous data synchronization

    - by Daniel Sterling
    This is a less a question with a simple, practical answer and more a question to foster discussion on the real-time data exchange topic. I'll begin with an example: Google Wave is, at its core, a real-time asynchronous data synchronization engine. Wave supports (or plans to support) concurrent (real-time) document collaboration, disconnected (offline) document editing, conflict resolution, document history and playback with attribution, and server federation. A core part of Wave is the Operational Transformation engine: http://www.waveprotocol.org/whitepapers/operational-transform The OT engine manages document state. Changes between clients are merged and each client has a sane and consistent view of the document at all times; the final document is eventually consistent between all connected clients. My question is: is this system abstract or general enough to be used as a library or generic framework upon which to build web apps that synchronize real-time, asynchronous state in each client? Is the Wave protocol directly used by any current web applications (besides Google's client)? Would it make sense to directly use it for generic state synchronization in a web app? What other existing libraries or frameworks would you consider using when building such a web app? How much code in such an app might be domain-specific logic vs generic state synchronization logic? Or, put another way, how leaky might the state synchronization abstractions be? Comments and discussion welcomed!

    Read the article

  • fastest SCM tool available for Embedded software development

    - by wrapperm
    Hi All, In my company we are presently using Rational ClearCase as the software configuration management tool for our embedded software development. The software is basically for automobiles, to be specific for engines (I don't think this information really matters). But I find ClearCase to be very slow in performing everyday activities (accessing files, branching, and labelling), in addition to which there are various other limitations. We have recently decided to look into free and open-source distributed version control systems which can handle our large projects with speed and efficiency. The tool should be a full-fledged repository with complete history and full revision-tracking capabilities, not dependent on network access or a central server. Branching and merging should be fast and easy to do, and it should support multi-site development. With these requirements in mind, we have come up with some of the tools that are presently available on the market: Git, Mercurial, Bazaar, Subversion, CVS, Perforce, and Visual SourceSafe. I need everybody's help in finding an appropriate SCM tool which meets the above requirements. Thanking you in advance, Rahamath.

    Read the article
