Search Results

Search found 5908 results on 237 pages for 'cody short'.

Page 175/237

  • improve my code for collapsing a list of data.frames

    - by romunov
    Dear StackOverFlowers (flowers in short), I have a list of data.frames (walk.sample) that I would like to collapse into a single (giant) data.frame. While collapsing, I would like to mark (by adding another column) which rows came from which element of the list. This is what I've got so far. This is the data.frame that needs to be collapsed/stacked:

        > walk.sample
        [[1]]
             walker        x         y
        1073      3 228.8756 -726.9198
        1086      3 226.7393 -722.5561
        1081      3 219.8005 -728.3990
        1089      3 225.2239 -727.7422
        1032      3 233.1753 -731.5526

        [[2]]
             walker        x         y
        1008      3 205.9104 -775.7488
        1022      3 208.3638 -723.8616
        1072      3 233.8807 -718.0974
        1064      3 217.0028 -689.7917
        1026      3 234.1824 -723.7423

        [[3]]
        [1] 3

        [[4]]
             walker        x         y
        546       2 629.9041  831.0852
        524       2 627.8698  873.3774
        578       2 572.3312  838.7587
        513       2 633.0598  871.7559
        538       2 636.3088  836.6325
        1079      3 206.3683 -729.6257
        1095      3 239.9884 -748.2637
        1005      3 197.2960 -780.4704
        1045      3 245.1900 -694.3566
        1026      3 234.1824 -723.7423

    I have written a function that adds a column denoting which element the rows came from, then appends them to an existing data.frame:

        collapseToDataFrame <- function(x) { # collapse list to a dataframe with a twist
            walk.df <- data.frame()
            for (i in 1:length(x)) {
                n.rows <- nrow(x[[i]])
                if (length(x[[i]]) > 1) {
                    temp.df <- cbind(x[[i]], rep(i, n.rows))
                    names(temp.df) <- c("walker", "x", "y", "session")
                    walk.df <- rbind(walk.df, temp.df)
                } else {
                    cat("Empty list", "\n")
                }
            }
            return(walk.df)
        }

        > collapseToDataFrame(walk.sample)
        Empty list
        Empty list
             walker         x          y session
        3         1 -604.5055 -123.18759       1
        60        1 -562.0078  -61.24912       1
        84        1 -594.4661  -57.20730       1
        9         1 -604.2893 -110.09168       1
        43        1 -632.2491  -54.52548       1
        1028      3  240.3905 -724.67284       1
        1040      3  232.5545 -681.61225       1
        1073      3  228.8756 -726.91980       1
        1091      3  209.0373 -740.96173       1
        1036      3  248.7123 -694.47380       1

    I'm curious whether this can be done more elegantly, perhaps with do.call() or some other more generic function?
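    For illustration, a do.call()-based version along the lines the question asks about - a sketch that assumes, as the loop above does, that non-data.frame elements such as walk.sample[[3]] should simply be skipped:

        # Keep only the data.frame elements, tag each with its source index,
        # then stack everything in a single rbind call.
        collapseToDataFrame2 <- function(x) {
            keep <- which(sapply(x, is.data.frame))
            tagged <- lapply(keep, function(i) cbind(x[[i]], session = i))
            do.call(rbind, tagged)
        }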

  • Creating a Custom Design-Time Environment

    - by Charlie
    Hello all, My question is related to the design-time support of WPF. From MSDN I read:

        The WPF Designer provides a framework and a public API which you can use to implement custom adorners, tools, property editors, and designers.

    But the vast majority of the examples I have found are trivial, and do not illustrate much concerning the creation of a customized designer in an existing WPF application. We have migrated our application from Windows Forms to WPF over the past year, and the next step will be to take an existing WinForms Panel designer and rewrite it in WPF. Suffice it to say that this will be a huge project. But I don't even know where to begin.

    I am wondering if any of you have had similar experiences writing a customized designer for a WPF application, and what it was like. Even better, if you could compare and contrast the functionality between the WinForms designer and the WPF designer, or explain the transition from the former to the latter, that would be helpful. If you know of any simple examples that demonstrate a customized design environment (with custom controls, etc.) that would be extremely beneficial. All in all, I am just wondering if many people have undertaken this yet, and what their results have been.

    EDIT: To clarify, yes, I am talking about hosting a WPF designer. It appears that this may not even be possible, which is a huge setback. Here is a screenshot of our current WinForms designer. As you can see, it is used to create customized user interfaces. You can drag custom controls onto it and design them, then put the panel into a "run mode" in which all of the controls become functional. Short of spending months writing our designer, would this be possible in WPF? What about .NET 4.0 and VS2010? Will those add any designer functionality?

  • Using jstl tags in a dynamically created div

    - by George
    I want to be able to show some data based on criteria the user enters in a text field. I can easily take this data, process the form post, and show the data on another page. However, I want to be able to do it all on the same page - they click the button, and a new div shows up with the information. This doesn't seem too complicated, but I want to use jstl tags to format the data like:

        <c:forEach items="${model.data}" var="d">
          <tr>
            <td><fmt:formatDate type="date" dateStyle="short" timeStyle="default" value="${d.reportDate}" /></td>
            <td><c:out value="${d.cardType}"/></td>
          </tr>
        </c:forEach>

    If jstl tags are processed when the page loads, can I use that in this new div? Can I update it via a javascript (using prototype) function to display the proper data? Will I be able to do the same thing if they change the criteria and click the submit button again?
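    Since JSTL runs on the server when a page (or fragment) is rendered, the usual pattern is to keep the JSTL in a separate JSP fragment and fetch it with Ajax on each click. A sketch using Prototype, where the fragment URL and div id are hypothetical names:

        // Ask the server to render the JSTL fragment with the new criteria,
        // then replace the div's contents with the returned HTML. This can
        // be repeated every time the user resubmits the form.
        function refreshReport(criteria) {
            new Ajax.Updater('reportDiv', '/reports/fragment.jsp', {
                method: 'post',
                parameters: { criteria: criteria }
            });
        }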

  • What is the recommended coding style for PowerShell?

    - by stej
    Is there any recommended coding style for writing PowerShell scripts? It's not about how to structure the code (how many functions, whether to use a module, ...). It's about 'how to write the code so that it is readable'. In programming languages there are some recommended coding styles (what to indent, how to indent - spaces/tabs, where to make a new line, where to put braces, ...), but I haven't seen any suggestion for PowerShell. What I'm particularly interested in:

    How to write parameters:

        function New-XYZItem ( [string] $ItemName , [scriptblock] $definition ) { ...

    (I see that it's more like 'V1' syntax) or

        function New-PSClass {
            param([string] $ClassName
                 ,[scriptblock] $definition )...

    or (why add an empty attribute?)

        function New-PSClass {
            param([Parameter()][string] $ClassName
                 ,[Parameter()][scriptblock] $definition )...

    or (other formatting I saw, maybe in Jaykul's code)

        function New-PSClass {
            param(
                [Parameter()]
                [string]
                $ClassName
            ,
                [Parameter()]
                [scriptblock]
                $definition
            )...

    or ..?

    How to write a complex pipeline:

        Get-SomeData -param1 abc -param2 xyz | % {
            $temp1 = $_
            1..100 | % { Process-somehow $temp1 $_ }
        } | % { Process-Again $_ } | Sort-Object -desc

    or (name of cmdlet on a new line):

        Get-SomeData -param1 abc -param2 xyz |
            % { $temp1 = $_
                1..100 | % { Process-somehow $temp1 $_ } } |
            % { Process-Again $_ } |
            Sort-Object -desc |

    And what if there are -begin, -process and -end params? How to make it the most readable?

        Get-SomeData -param1 abc -param2 xyz |
            % -begin { init } -process { Process-somehow2 ... } -end { Process-somehow3 ... } |
            % -begin { } ....

    or

        Get-SomeData -param1 abc -param2 xyz |
            % `
                -begin { init } `
                -process { Process-somehow2 ... } `
                -end { Process-somehow3 ... } |
            % -begin { } ....

    The indentation is important here, and which element is put on a new line matters as well. I have covered only the questions that come to my mind very frequently. There are some others, but I'd like to keep this SO question 'short'. Any other suggestions are welcome.

  • SQL Exception: "Impersonate Session Security Context" cannot be called in this batch because a simul

    - by kasey
    When opening a connection to SQL Server 2005 from our web app, we occasionally see this error:

        "Impersonate Session Security Context" cannot be called in this batch because a simultaneous batch has called it.

    We use MARS and connection pooling. The exception originates from the following piece of code:

        protected SqlConnection Open() {
            SqlConnection connection = new SqlConnection();
            connection.ConnectionString = m_ConnectionString;
            if (connection != null) {
                try {
                    connection.Open();
                    if (m_ExecuteAsUserName != null) {
                        string sql = Format("EXECUTE AS LOGIN = {0};", m_ExecuteAsUserName);
                        ExecuteCommand(connection, sql);
                    }
                } catch (Exception exception) {
                    connection.Close();
                    connection = null;
                }
            }
            return connection;
        }

    I found an MS Connect article which suggests that the error is caused when a previous command has not yet terminated before the EXECUTE AS LOGIN command is sent. Yet how can this be if the connection has only just been opened? Could this be something to do with connection pooling interacting strangely with MARS?

    UPDATE: For the short term we have implemented a workaround by clearing out the connection pool whenever this happens, to get rid of the bad connection, as it otherwise keeps getting handed back to various users. (Not too bad, as this only happens a couple of times a day.) But if anyone has any further ideas, we are still looking out for a real solution...
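    For what it's worth, the pool-clearing workaround mentioned in the UPDATE maps onto a real API; a minimal sketch of how it might look in the catch block above (when and whether to retry afterwards is an assumption):

        catch (Exception exception) {
            // Evict every pooled connection that shares this connection string,
            // so the broken physical connection is not handed out again.
            SqlConnection.ClearPool(connection);
            connection.Close();
            connection = null;
        }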

  • Why is my Android app camera preview running out of memory on my AVD?

    - by Bryan
    I have yet to try this on an actual device, but expect similar results. Anyway, long story short, whenever I run my app on the emulator, it crashes due to an out of memory exception. My code really is essentially the same as the camera preview API demo from Google, which runs perfectly fine. The only file in the app (that I created/use) is as below:

        package berbst.musicReader;

        import java.io.IOException;
        import android.app.Activity;
        import android.content.Context;
        import android.hardware.Camera;
        import android.os.Bundle;
        import android.view.SurfaceHolder;
        import android.view.SurfaceView;

        /*********************************
         * Music Reader v.0001
         * Still VERY under construction.
         * @author Bryan
         *********************************/
        public class MusicReader extends Activity {
            private MainScreen main;

            @Override // Begin activity
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                main = new MainScreen(this);
                setContentView(main);
            }

            class MainScreen extends SurfaceView implements SurfaceHolder.Callback {
                SurfaceHolder sHolder;
                Camera cam;

                MainScreen(Context context) {
                    super(context);
                    // Set up SurfaceHolder
                    sHolder = getHolder();
                    sHolder.addCallback(this);
                    sHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
                }

                public void surfaceCreated(SurfaceHolder holder) {
                    // Open the camera and start viewing
                    cam = Camera.open();
                    try {
                        cam.setPreviewDisplay(holder);
                    } catch (IOException exception) {
                        cam.release();
                        cam = null;
                    }
                }

                public void surfaceDestroyed(SurfaceHolder holder) {
                    // Kill all our crap with the surface
                    cam.stopPreview();
                    cam.release();
                    cam = null;
                }

                public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
                    // Modify parameters to match size.
                    Camera.Parameters params = cam.getParameters();
                    params.setPreviewSize(w, h);
                    cam.setParameters(params);
                    cam.startPreview();
                }
            }
        }
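    One thing worth ruling out (a hedged guess, since no crash log is shown): surfaceChanged() feeds the raw surface dimensions to setPreviewSize(), but the camera only accepts sizes from its supported list. A sketch of a safer surfaceChanged(), using getSupportedPreviewSizes() (available from API level 5):

        public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
            Camera.Parameters params = cam.getParameters();
            java.util.List<Camera.Size> sizes = params.getSupportedPreviewSizes();
            Camera.Size best = sizes.get(0);
            for (Camera.Size s : sizes) {
                // Pick the supported size whose area is closest to the surface's.
                if (Math.abs(s.width * s.height - w * h)
                        < Math.abs(best.width * best.height - w * h)) {
                    best = s;
                }
            }
            params.setPreviewSize(best.width, best.height);
            cam.setParameters(params);
            cam.startPreview();
        }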

  • Loop through all cells in Xceed DataGrid for WPF?

    - by ewall
    I am changing the background color of the cells when the user has made an edit. I would like to return all cells to their normal colors when the changes are saved (or reverted). It's easy enough to set the cell's original background color (as stored in the parent row). But I can't figure out how to loop through all the cells in the table to reset them.

    I found an article in the Xceed Knowledge Base called "How to iterate through the grid's rows"... which you would think would be perfect, right? Wrong; the properties (or methods) like .DataRows, .FixedHeaderRows, etc. mentioned in the article are from an older/defunct Xceed product. This forum thread recommends using the DataGrid's .Items property, which in my case returns a collection of System.Data.DataRowViews... but I can't find any way to cast that (or any of its related elements) up to the Xceed.Wpf.DataGrid.DataCells I need to change the background color.

    In short, how do I loop through the rows and cells so I can reset the background property?
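    Absent a dedicated Xceed API for this, one generic fallback is to walk the WPF visual tree and reset every realized DataCell. A sketch only: with UI virtualization, cells scrolled out of view may not exist in the tree, and the exact property to clear is an assumption:

        using System.Collections.Generic;
        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Media;
        using Xceed.Wpf.DataGrid;

        // Recursively collect all DataCell elements under the grid control.
        static IEnumerable<DataCell> FindCells(DependencyObject root)
        {
            for (int i = 0; i < VisualTreeHelper.GetChildrenCount(root); i++)
            {
                DependencyObject child = VisualTreeHelper.GetChild(root, i);
                DataCell cell = child as DataCell;
                if (cell != null)
                    yield return cell;
                foreach (DataCell nested in FindCells(child))
                    yield return nested;
            }
        }

        // Usage: clear the locally-set background so the styled value returns.
        // foreach (DataCell cell in FindCells(myDataGridControl))
        //     cell.ClearValue(Control.BackgroundProperty);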

  • Hidden features of Perl?

    - by Adam Bellaire
    What are some really useful but esoteric language features in Perl that you've actually been able to employ to do useful work?

    Guidelines:
    - Try to limit answers to the Perl core and not CPAN
    - Please give an example and a short description

    Hidden Features also found in other languages' Hidden Features (these are all from Corion's answer):
    - C#: Duff's Device; Portability and Standardness; Quotes for whitespace-delimited lists and strings; Aliasable namespaces
    - Java: Static Initializers
    - JavaScript: Functions are first-class citizens; Block scope and closure; Calling methods and accessors indirectly through a variable
    - Ruby: Defining methods through code
    - PHP: Pervasive online documentation; Magic methods; Symbolic references
    - Python: One-line value swapping; Ability to replace even core functions with your own functionality

    Other Hidden Features:

    Operators:
    - The bool quasi-operator
    - The flip-flop operator (also used for list construction)
    - The ++ and unary - operators work on strings
    - The repetition operator
    - The spaceship operator
    - The || operator (and // operator) to select from a set of choices
    - The diamond operator
    - Special cases of the m// operator
    - The tilde-tilde "operator"

    Quoting constructs:
    - The qw operator
    - Letters can be used as quote delimiters in q{}-like constructs
    - Quoting mechanisms

    Syntax and Names:
    - There can be a space after a sigil
    - You can give subs numeric names with symbolic references
    - Legal trailing commas
    - Grouped Integer Literals
    - Hash slices
    - Populating keys of a hash from an array

    Modules, Pragmas, and command-line options:
    - use strict and use warnings
    - Taint checking
    - Esoteric use of -n and -p
    - CPAN
    - overload::constant
    - The IO::Handle module
    - Safe compartments
    - Attributes

    Variables:
    - Autovivification
    - The $[ variable
    - tie
    - Dynamic Scoping
    - Variable swapping with a single statement

    Loops and flow control:
    - Magic goto
    - for on a single variable
    - The continue clause
    - Desperation mode

    Regular expressions:
    - The \G anchor
    - (?{}) and (??{}) in regexes

    Other features:
    - The debugger
    - Special code blocks such as BEGIN, CHECK, and END
    - The DATA block
    - New Block Operations
    - Source Filters
    - Signal Hooks
    - map (twice)
    - Wrapping built-in functions
    - The eof function
    - The dbmopen function
    - Turning warnings into errors

    Other tricks, and meta-answers:
    - cat files, decompressing gzips if needed
    - Perl Tips

    See Also:
    - Hidden features of C
    - Hidden features of C#
    - Hidden features of C++
    - Hidden features of Java
    - Hidden features of JavaScript
    - Hidden features of Ruby
    - Hidden features of PHP
    - Hidden features of Python
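    In the spirit of the "give an example" guideline, a small sketch illustrating three items from the list above (qw, autovivification, and the flip-flop operator); this is illustrative, not from the original post:

        use strict;
        use warnings;

        my @words = qw(alpha beta gamma);   # qw: whitespace-delimited word list

        my %count;
        $count{perl}{hidden}++;             # autovivification builds the nested hash

        while (<DATA>) {
            print if /^BEGIN/ .. /^END/;    # flip-flop selects the BEGIN..END range
        }
        __DATA__
        skipped
        BEGIN block
        kept line
        END block
        skipped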

  • MS SQL Database with clustered GUID PKs - switch clustered index or switch to sequential (comb) GUID

    - by Eyvind
    We have a database in which all the PKs are GUIDs, and most of the PKs are also the clustered index for the table. We know that this is bad (due to the random nature of GUIDs). So, it seems there are basically two options here (short of throwing out GUIDs as PKs altogether, which we cannot do, at least not at this time):

    1. We could change the GUID generation algorithm to e.g. the one that NHibernate uses, as detailed in this post, or
    2. we could, for the tables that are under the heaviest use, change to a different clustered index, e.g. an IDENTITY column, and keep the "random" GUIDs as PKs.

    Is it possible to give any general recommendations in such a scenario? The application in question has 500+ tables, the largest one presently at about 1.5 million rows, a few tables around 500,000 rows, and the rest significantly lower (most of them well below 10K). Furthermore, the application is installed at several customer sites already, so we have to take any possible negative effects for existing customers into consideration. Thanks!
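    For concreteness, a sketch of option 2 in T-SQL, with hypothetical table and column names (SQL Server 2005's NEWSEQUENTIALID() is the built-in route to option 1, though it only works as a column default):

        -- The nonclustered PK keeps the existing GUID keys (and FKs) intact,
        -- while a narrow, ever-increasing IDENTITY column takes the clustered index.
        CREATE TABLE dbo.Orders (
            OrderId  UNIQUEIDENTIFIER NOT NULL
                     CONSTRAINT DF_Orders_OrderId DEFAULT NEWID()
                     CONSTRAINT PK_Orders PRIMARY KEY NONCLUSTERED,
            OrderSeq INT IDENTITY(1,1) NOT NULL,
            Payload  NVARCHAR(100) NULL
        );
        CREATE UNIQUE CLUSTERED INDEX IX_Orders_OrderSeq ON dbo.Orders (OrderSeq);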

  • Using a Visual Sourcesafe 2005 database with VB6 still launches VSS 6.0d

    - by John Galt
    I know all versions of VSS have many horror stories, and I feel I will escape to a better source control mechanism someday, but in the short term I am just trying to do a little cleanup and would like your advice on this issue:

    Objective - consolidate old VB6 source code in a "new" VSS 2005 database (currently all these old projects are checked in to an "old" VSS 6.0d database); eventually, eliminate the "old" VSS.

    Progress so far - The new VSS 2005 database now contains a mixture of projects. Some are using Visual Studio 2008, some use VStudio 2005, and the more recently added ones are the above-mentioned VB6 projects. Individually all these projects and "solutions" come up OK; I can check in / check out, launch SourceSafe, view differences, etc. But all the VB6 projects now in a VSS 2005 database launch VSS 6.0d when asked, rather than VSS 2005.

    Is this normal and just something to cope with until I get to some better non-VSS approach? Or can VB6 be reconfigured somehow to launch VSS 2005 when I click Tools-SourceSafe-Run SourceSafe? I seem to recall VSS 6.0d got "integrated" into VB6 by way of the "Add-In Manager". At this point, the development PC with VB6 installed has both VSS 2005 and VSS 6.0d clients installed.

  • ClickOnce permissions

    - by stephenfalken
    We recently updated our main website. This included creating a new directory to hold the new site; then, some of the existing subdirectories needed to be copied over. Some of the virtual directories below the main site are ClickOnce publishing locations. These have been 100% successful publishing locations for 3 years now. We would update the application in Visual Studio and then publish... painless.

    Very long story short, since we've copied the directories to the new main site location on disk, all of our ClickOnce sites except one will no longer publish. They all fail with an error saying "you are not authorized to perform the current operation". This is immediately after we set permissions to Full Control for my domain user group. I've checked everything I know how to check as far as permissions go and made the non-working ones' permissions match the one that does work, but no joy. We had problems on Friday and I fixed the one site that does work, but I can't remember how I fixed it; all I remember is that it took a long time screwing with it to make it work.

    Could there be some arcane setting in IIS that has been omitted? Is there a simple list of things to check anywhere on the Net? ClickOnce information is scattered among 50,000 URLs and I haven't been able to figure it out again. Thanks

  • Using Essential Use Cases to design a UI-centric Application

    - by Bruno Brant
    Hello all, I'm beginning a new project (oh, how I love the fresh taste of a new project!) and we are just starting to design it. In short: the application is a UI that will enable users to model an execution flow (a Visio-like drag & drop interface). So our greatest concern is usability and features that will help the users model the execution flow fast and clearly.

    Our established methodology makes extensive use of Use Cases in order to create a harmonious view of the application between the programmers and users. This is a business concern, really: I'd prefer to use an Agile method with User Stories rather than Use Cases, but we need to define a clear scope to sell the product to our clients. However, Use Cases have a number of flaws, most of which are related to the fact that they include technical details, like UI, etc., as can be seen here. But, since we can't use User Stories and a fully interactive design, I've decided that we compromise: I will be using Essential Use Cases in order to hide those details.

    Now I have another problem: it's essential (no pun intended) to have a clear description of UI interaction, so, how should I document it? In other words, how do I specify an application through the use of Essential Use Cases where the UI interaction is vital to it? I can see some alternatives:

    - Abandon the use of Use Cases since they don't correctly represent the problem
    - Do not include interface descriptions in the use cases, but create other documentation (Story Boards) and link them to the Essential Use Cases
    - Include the UI interaction description in the Essential Use Cases, since they are part of the business rules in the perspective of the users and the application itself

  • ServerIdentity memory leak with IHttpAsyncHandler

    - by Anton
    I have a .NET web application that consists of a single HTTP handler class that implements IHttpAsyncHandler. All requests to this handler are handled asynchronously, though some requests are short-lived and some are long-lived (nothing over a few seconds). The problem is that memory consumption grows over time as requests are handled. All profiling results point to an unbounded growth of String objects held by instances of System.Runtime.Remoting.ServerIdentity. Every String value is different, but they all look similar to:

        /dd41c00e_1566_4702_b660_c81cdea18a43/vigefresi5pfv8n0ekddg57z_1154.rem

    There is nothing in my application that uses ServerIdentity directly, and unless I am mistaken, the ServerIdentity instances are proportional to the number of incoming requests. If this is an internal .NET structure, it looks like the CLR is not cleaning up after itself. What could be causing the leak?

    UPDATE: A little less than half of the String objects are being held by System.Runtime.Remoting. The remaining String objects are being held by System.Runtime.Serialization and look similar to:

        +1sgess5rjcrgbmp3kqr6bmv_3474.rem

    Also, the problem only seems to occur when lots of simultaneous HTTP web requests arrive.
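    As background for anyone chasing the same symptom: URIs of that *.rem shape are minted by .NET Remoting whenever a MarshalByRefObject is marshaled, and the ServerIdentity entry (with its URI string) lives until the object is disconnected or its lifetime lease expires. A small sketch of that mechanism, to show where such strings come from (not a claim about this particular app's root cause):

        using System;
        using System.Runtime.Remoting;

        class Service : MarshalByRefObject { }

        class Demo
        {
            static void Main()
            {
                Service svc = new Service();
                // Marshal() registers the object and generates an automatic URI
                // of the familiar "/guid/xxxx_nnn.rem" form.
                string uri = RemotingServices.Marshal(svc).URI;
                Console.WriteLine(uri);
                // Disconnect() releases the identity entry immediately,
                // instead of waiting for the lease to expire.
                RemotingServices.Disconnect(svc);
            }
        }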

  • Named pipe blocking with user nobody

    - by dnagirl
    I have 2 short scripts. The first, an awk script, processes a large file and prints to a named pipe 'myfifo.dat'. The second, a Perl script, runs a LOAD DATA LOCAL INFILE 'myfifo.dat'... command. Both of these scripts work when run locally like so:

        lee.awk big.file &
        lee.pl

    However, when I call these scripts from a PHP webpage, the named pipe blocks:

        $awk = "/path/to/lee.awk {$_FILES['uploadfile']['tmp_name']} &";
        $sql = "/path/to/lee.pl";
        if (!exec($awk, $return, $err))
            throw new ZException(print_r($err, true)); // blocks here
        if (!exec($sql, $return, $err))
            throw new ZException(print_r($err, true));

    If I modify the awk and Perl scripts so that they write and read to a normal file, everything works fine from PHP. The permissions on the fifo and the normal file are 666 (for testing purposes). These operations run much more quickly through a named pipe, so I'd prefer to use one. Any ideas how to unblock it?

    ps. In case you're wondering why I'm going to all this aggravation, see this SO question.
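    One detail worth checking here: PHP's exec() waits until it can collect the command's output, so even with a trailing '&' the first call hangs unless the command's output is redirected away. A sketch of that fix, reusing the paths from the question:

        // Redirecting stdout/stderr lets exec() return immediately, so the
        // fifo writer and the fifo reader can run at the same time.
        $awk = "/path/to/lee.awk {$_FILES['uploadfile']['tmp_name']} > /dev/null 2>&1 &";
        exec($awk);                      // returns right away now
        exec("/path/to/lee.pl", $out);   // reader opens the fifo, unblocking the writer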

  • Base 128 or 256 Encoding for the Binary Lexical Octet Adhoc Transport Protocol?

    - by Randolpho
    I'm in the process of implementing a network driver for the Binary Lexical Octet Adhoc Transport (BLOAT) protocols in the hopes of replacing the TCP/UDP/IP stack with a much more flexible XML structure. BLOAT is detailed in RFC 3252, so if you're unfamiliar with the protocol I highly recommend you read the entire RFC before providing any comments. Don't worry, it's short and sweet; you might even enjoy it.

    Anyway, my problem is this: BLOAT requires that the payload be Base64 encoded, which doesn't make sense to me. I mean, sure, it's the internet standard for binary payloads, but there are better, more efficient encodings available: Base128 and Base256, for example (Base64 carries only 6 bits of payload per output byte, a 33% size expansion). That the RFC requires Base64 and doesn't allow for any other payload encoding really bothers me. To that end, I'm considering a small optional change to the protocol. Embrace and extend, right?

    Anyway, I'd like to modify the payload element to accept an encoding attribute, which can extend the encoding to Base128 or Base256, or even to other encodings I can't conceive of at the moment. If the encoding attribute isn't present, Base64 would be assumed.

    So my question is this: should I? I mean, BLOAT is an accepted standard, even if it isn't exactly omnipresent. If I make this change, will there be compatibility issues? I don't foresee any, but perhaps you, oh great Stack Overflow Community, can? If I do implement this change, should I contact the original RFC author? Should I offer a supplemental RFC?

  • Natural language processing - Ideas for beginner's projects

    - by Microkernel
    Hi guys, I am a beginner in NLP and NLTK. I am very interested in NLP and hence joined a weekend course on AI at a local institution, which requires me to do a project to complete the course, and I decided to do it in NLP. The problem is, the instructor is not good at all for this course (according to me she is just a charlatan) (or she may not be very interested in teaching, as this is her last batch here, after which the institute is going to send her out). So I am stuck in a situation where I have to finish this project within one to one and a half months, but as a newcomer to the field I am finding it very difficult to comprehend the things required to decide on a project. (Also, as I am working full time, I am not finding enough time to dedicate to this.)

    I considered using the NLTK toolkit in Python for the project, for the following reasons:

    1. Python is famous for ease of use, rapid prototyping and a very active community (considering the very short span of time I have, and as I am a C programmer by profession, I need a language that I can learn fast and that is simple to use).
    2. NLTK has good reviews, extensive documentation and a very active community.

    So the problem is what project I should take up, so that I can learn something and will be able to finish the project in time. (I know almost nothing in NLP, don't even know what exactly a corpus is... :( )

    So, please suggest me some topics that I should consider for the project. Regards, MicroKernel :)
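    For orientation, a tiny NLTK starter along the lines such projects usually begin with - a sketch assuming the Brown corpus has already been fetched with nltk.download() (a corpus is simply a prepared body of text):

        # Count word frequencies over a slice of a standard corpus.
        import nltk
        from nltk.corpus import brown

        words = [w.lower() for w in brown.words()[:5000]]
        freq = nltk.FreqDist(words)
        print(freq.most_common(10))   # the ten most frequent tokens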

  • Perl XML SAX parser emulating XML::Simple record for record

    - by DVK
    Short question summary: I am looking for a fast XML parser (most likely a wrapper around some standard SAX parser) which will produce per-record data structures 100% identical to those produced by XML::Simple.

    Details: We have a large code infrastructure which depends on processing records one-by-one and expects the record to be a data structure in the format produced by XML::Simple, since it has always used XML::Simple since the early Jurassic era. An example simple XML is:

        <root>
          <rec><f1>v1</f1><f2>v2</f2></rec>
          <rec><f1>v1b</f1><f2>v2b</f2></rec>
          <rec><f1>v1c</f1><f2>v2c</f2></rec>
        </root>

    And example rough code is:

        sub process_record {
            my ($obj, $record_hash) = @_;
            # do_stuff
        }

        my $records = XML::Simple->XMLin(@args)->{root};
        foreach my $record (@$records) { $obj->process_record($record) };

    As everyone knows, XML::Simple is, well, simple. And more importantly, it is very slow and a memory hog - due to being a DOM parser and needing to build/store 100% of the data in memory. So, it's not the best tool for parsing an XML file consisting of a large number of small records record-by-record. However, re-writing the entire code (which consists of a large number of "process_record"-like methods) to work with a standard SAX parser seems like a big task not worth the resources, even at the cost of living with XML::Simple.

    What I'm looking for is an existing module, probably based on a SAX parser (or anything fast with a small memory footprint), which can be used to produce $record hashrefs one by one based on the XML pictured above, that can be passed to $obj->process_record($record) and be 100% identical to what XML::Simple's hashrefs would have been. I don't care much what the interface of the new module is - e.g. whether I need to call next_record() or give it a callback coderef accepting a record.
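    One existing module that comes close to this spec is XML::Twig: it parses incrementally, lets you purge what has already been handled, and its simplify() method is explicitly modeled on XML::Simple's output (whether it is 100% identical under your XMLin options, e.g. ForceArray/KeyAttr, would need verifying). A sketch against the XML above:

        use XML::Twig;

        my $twig = XML::Twig->new(
            twig_handlers => {
                rec => sub {
                    my ($t, $elt) = @_;
                    my $record = $elt->simplify;   # XML::Simple-style hashref
                    $obj->process_record($record);
                    $t->purge;                     # free everything parsed so far
                },
            },
        );
        $twig->parsefile('records.xml');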

  • In Mercurial, what is the exact step that Peter or me has to do so that he gets back the rolled back

    - by Jian Lin
    The short question is: if I hg rollback, how does Peter get my rolled-back version if he cloned from me? What are the exact steps he or I have to do or type?

    This is related to http://stackoverflow.com/questions/3034793/in-mercurial-when-peter-hg-clone-me-and-i-commit-and-he-pull-and-update-he-g

    The details: After the following steps, Mary has 7 and Peter has 11. My repository is 7. What are the exact steps Peter or I have to do or type SO THAT PETER GETS 7 back?

        F:\>mkdir hgme
        F:\>cd hgme
        F:\hgme>hg init
        F:\hgme>echo the code is 7 > code.txt
        F:\hgme>hg add code.txt
        F:\hgme>hg commit -m "this is version 1"
        F:\hgme>cd ..
        F:\>hg clone hgme hgpeter
        updating to branch default
        1 files updated, 0 files merged, 0 files removed, 0 files unresolved
        F:\>cd hgpeter
        F:\hgpeter>type code.txt
        the code is 7
        F:\hgpeter>cd ..
        F:\>cd hgme
        F:\hgme>notepad code.txt
        [now i change 7 to 11]
        F:\hgme>hg commit -m "this is version 2"
        F:\hgme>cd ..
        F:\>cd hgpeter
        F:\hgpeter>hg pull
        pulling from f:\hgme
        searching for changes
        adding changesets
        adding manifests
        adding file changes
        added 1 changesets with 1 changes to 1 files
        (run 'hg update' to get a working copy)
        F:\hgpeter>hg update
        1 files updated, 0 files merged, 0 files removed, 0 files unresolved
        F:\hgpeter>type code.txt
        the code is 11
        F:\hgpeter>cd ..
        F:\>cd hgme
        F:\hgme>hg rollback
        rolling back last transaction
        F:\hgme>cd ..
        F:\>hg clone hgme hgmary
        updating to branch default
        1 files updated, 0 files merged, 0 files removed, 0 files unresolved
        F:\>cd hgmary
        F:\hgmary>type code.txt
        the code is 7
        F:\hgmary>cd ..
        F:\>cd hgpeter
        F:\hgpeter>hg pull
        pulling from f:\hgme
        searching for changes
        no changes found
        F:\hgpeter>hg update
        0 files updated, 0 files merged, 0 files removed, 0 files unresolved
        F:\hgpeter>type code.txt
        the code is 11
        F:\hgpeter>
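    A hedged sketch of the two usual answers: rollback only rewinds the local repository, and hg pull never removes changesets, so Peter has to drop revision 1 himself - either by re-cloning up to the wanted revision, or by stripping it in place (strip ships with the mq extension, which must be enabled in Mercurial.ini/.hgrc):

        :: Option 1: make a fresh clone containing only revision 0 ("the code is 7")
        F:\>hg clone -r 0 hgme hgpeter2

        :: Option 2: remove revision 1 in place with mq's strip command
        F:\hgpeter>hg strip 1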

  • Business Logic Layer Pattern on Rails? MVCL

    - by Fabiano PS
    That is a broad question, and I appreciate no short/dumb answers like: "Oh, that is the model's job, this question is retarded (period)".

    PROBLEM: Where I work, people spent over 2 years creating a system for managing the manufacturing process on demand in the most simplified yet broad way possible, involving selling, buying, and assembly. The system is coded on Ruby on Rails. The app has been changed lots of times, and the result is a mess of callbacks (some are called several times), 200+ models, and fat controllers: total bad.

    The QUESTION is: is there a gem, or a pattern, designed to handle the logic of a large Rails app? The logic would be able to fully talk to models (whose only concern would be data-format handling and validation). What I EXPECT is to reduce complexity in the various controllers, and move hard-to-track callbacks into files whose responsibility is to handle one business operation's logic. In some cases there is the need to wait for a response; in others, validation of the input is enough and a background process would take place. For example:

    -- Sell some products (need to wait for the operation to finish)

        1. Set up a View able to get the products input
        2. Controller gets the product list input by the employee and calls the logic:

        Logic::ExecuteWithResponse('sell', 'products',
            :prods    => @product_list_with_qtt,
            :when     => @date,
            :employee => current_user()
        )

    This logic would handle the buying order, assembly order, machine schedule, warehouse reservation, and others. Bear in mind that a callback on SalesOrder is not enough, since it depends on where it is called (no field for that), depends on the class of the user, among other things not visible to the model; also, in some cases it would take too long for the model to process.
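    One commonly suggested shape for this (a hedged sketch, not an established gem): plain-Ruby service objects, one class per business operation, so controllers and model callbacks shrink. All names below are hypothetical, and SalesOrder's associations are assumptions:

        # app/services/sell_products.rb -- one business operation per class
        class SellProducts
          def initialize(products, date, employee)
            @products, @date, @employee = products, date, employee
          end

          # Validates the input, then coordinates the models in one transaction.
          def call
            raise ArgumentError, "no products given" if @products.empty?
            ActiveRecord::Base.transaction do
              order = SalesOrder.create!(:employee => @employee, :due_on => @date)
              @products.each do |prod, qty|
                order.lines.create!(:product => prod, :quantity => qty)
              end
              order
            end
          end
        end

        # In the controller:
        #   order = SellProducts.new(params[:prods], params[:when], current_user).call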

  • Long running stats process - thoughts on language choice?

    - by Josh
    I am on a LAMP stack for a website I am managing. There is a need to roll up usage statistics (a variety of things related to our desktop product), and I initially tackled the problem with PHP (being that I had a bunch of classes to work with the data already). All worked well on my dev box, which was using 5.3.

    Long story short, 5.1 memory management seems to suck a lot worse, and I've had to do a lot of fooling around to get the long-term roll-up scripts to run in a fixed memory space. Our server guys are unwilling to upgrade PHP at this time. I've since moved my dev server back to 5.1 so I don't run into this problem again...

    For mining MySQL databases to roll up statistics for different periods and resolutions, potentially running as a process that does this all the time in the future (as opposed to on a cron schedule), what language choice do you recommend? I was looking at Python (I know it more or less), Java (don't know it that well), or sticking it out with PHP (know it quite well). Thanks for any suggestions. Josh

  • Implementation review for a MVC.NET app with custom membership

    - by mrjoltcola
    I'd like to hear if anyone sees any problems with how I implemented the security in this Oracle based MVC.NET app, either security issues, concurrency issues or scalability issues. First, I implemented a CustomOracleMembershipProvider to handle the database interface to the membership store. I implemented a custom Principal named User which implements IPrincipal, and it has a hashtable of Roles. I also created a separate class named AuthCache which has a simple cache for User objects. Its purpose is simple to avoid return trips to the database, while decoupling the caching from either the web layer or the data layer. (So I can share the cache between MVC.NET, WCF, etc.) The MVC.NET stock MembershipService uses the CustomOracleMembershipProvider (configured in web.config), and both MembershipService and FormsService share access to the singleton AuthCache. My AccountController.LogOn() method: 1) Validates the user via the MembershipService.Validate() method, also loads the roles into the User.Roles container and then caches the User in AuthCache. 2) Signs the user into the Web context via FormsService.SignIn() which accesses the AuthCache (not the database) to get the User, sets HttpContext.Current.User to the cached User Principal. In global.asax.cs, Application_AuthenticateRequest() is implemented. It decrypts the FormsAuthenticationTicket, accesses the AuthCache by the ticket.Name (Username) and sets the Principal by setting Context.User = user from the AuthCache. So in short, all these classes share the AuthCache, and I have, for thread synchronization, a lock() in the cache store method. No lock in the read method. The custom membership provider doesn't know about the cache, the MembershipService doesn't know about any HttpContext (so could be used outside of a web app), and the FormsService doesn't use any custom methods besides accessing the AuthCache to set the Context.User for the initial login, so it isn't dependent on a specific membership provider. The main thing I see now is that the AuthCache will be sharing a User object if a user logs in from multiple sessions. So I may have to change the key from just UserId to something else (maybe using something in the FormsAuthenticationTicket for the key?).
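    On the two concerns flagged above (unsynchronized reads, and one shared User across sessions), a hedged sketch of an AuthCache variant - all names are guesses at the described design, and User is the app's custom Principal:

        using System.Collections.Generic;
        using System.Threading;

        public sealed class AuthCache
        {
            private static readonly AuthCache instance = new AuthCache();
            public static AuthCache Instance { get { return instance; } }

            // Keyed by a session-specific value (e.g. the ticket name) rather
            // than bare UserId, so concurrent logins get separate entries.
            private readonly Dictionary<string, User> users = new Dictionary<string, User>();
            private readonly ReaderWriterLockSlim sync = new ReaderWriterLockSlim();

            public void Store(string ticketName, User user)
            {
                sync.EnterWriteLock();
                try { users[ticketName] = user; }
                finally { sync.ExitWriteLock(); }
            }

            public User Get(string ticketName)
            {
                sync.EnterReadLock();     // reads are synchronized too
                try
                {
                    User user;
                    return users.TryGetValue(ticketName, out user) ? user : null;
                }
                finally { sync.ExitReadLock(); }
            }
        }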

    Read the article

  • Subset a data.frame by list and apply function on each part, by rows

    - by aL3xa
    This may seem like a typical plyr problem, but I have something different in mind. Here's the function that I want to optimize (skip the for loop):

        # dummy data
        set.seed(1985)
        lst <- list(a=1:10, b=11:15, c=16:20)
        m <- matrix(round(runif(200, 1, 7)), 10)
        m <- as.data.frame(m)

        dfsub <- function(dt, lst, fun) {
            # check whether dt is a data.frame
            stopifnot(is.data.frame(dt))
            # check if vectors in lst are "whole"/integer
            # vector elements should be column indexes
            is.wholenumber <- function(x, tol = .Machine$double.eps^0.5) abs(x - round(x)) < tol
            # fail if any non-integers in list
            idx <- rapply(lst, is.wholenumber)
            stopifnot(idx)
            # check for list length
            stopifnot(ncol(dt) == length(idx))
            # subset the data
            subs <- list()
            for (i in 1:length(lst)) {
                # apply function on each part, by row
                subs[[i]] <- apply(dt[, lst[[i]]], 1, fun)
            }
            # preserve names
            names(subs) <- names(lst)
            # convert to data.frame
            subs <- as.data.frame(subs)
            # guess what =)
            return(subs)
        }

    And now a short demonstration... actually, I'm about to explain what I primarily intended to do. I wanted to subset a data.frame by vectors gathered in a list object. Since this is part of code from a function that accompanies data manipulation in psychological research, you can consider m as the results from a personality questionnaire (10 subjects, 20 vars). Vectors in the list hold column indexes that define questionnaire subscales (e.g. personality traits). Each subscale is defined by several items (columns in the data.frame). If we presuppose that the score on each subscale is nothing more than the sum (or some other function) of row values (results on that part of the questionnaire for each subject), you could run:

        > dfsub(m, lst, sum)
            a  b  c
        1  46 20 24
        2  41 24 21
        3  41 13 12
        4  37 14 18
        5  57 18 25
        6  27 18 18
        7  28 17 20
        8  31 18 23
        9  38 14 15
        10 41 14 22

    I took a glance at this function and I must admit that this little loop isn't spoiling the code at all... BUT, if there's an easier/more efficient way of doing this, please let me know!
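    For the record, a loop-free version of the core of dfsub() (a sketch that leaves the argument checks aside): sapply() maps over the subscales, and each resulting column holds one subscale's per-subject scores.

        dfsub2 <- function(dt, lst, fun) {
            as.data.frame(sapply(lst, function(cols) apply(dt[, cols], 1, fun)))
        }

        dfsub2(m, lst, sum)   # same a/b/c table as above
        # For fun = sum specifically, rowSums(dt[, cols]) would be faster still.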

  • CakePHP, CodeIgniter or Rails for multi-user Tumblr clone?

    - by Jordan
    I'm about to start building a tumblr clone that handles multiple users (so premade clones like Gelato won't cut it) and I'm not sure which framework I'd like to build this in. Right now, I'm only intending to build a prototype. Something I can get a dozen friends on to test the concept and grow to maybe a couple hundred users to prove the market, so I'm not worried about long-term scale.

    My biggest concern right now is quick deployment. I'd like to get from zero to signups in as short a time as possible, with as little customization to the framework of choice as possible. I have experience with PHP, but not Ruby. However, I don't think the learning curve would be too steep, so I'm not ruling out Rails. I just want the framework that is most appropriate for a system like a multi-user tumblr clone, so that I can build it with as little hassle, and as quickly, as possible.

    If anyone has experience with a similar project, or with these frameworks, and can offer an insightful perspective, I'd be very appreciative. Thanks for taking the time to read. Cheers, ~Jordan Feldstein

  • Using ThreadPool.QueueUserWorkItem in ASP.NET in a high traffic scenario

    - by Michael Hart
    I've always been under the impression that using the ThreadPool for (let's say non-critical) short-lived background tasks was considered best practice, even in ASP.NET, but then I came across this article that seems to suggest otherwise - the argument being that you should leave the ThreadPool to deal with ASP.NET-related requests.

    So here's how I've been doing small asynchronous tasks so far:

        ThreadPool.QueueUserWorkItem(s => PostLog(logEvent))

    And the article is suggesting instead to create a thread explicitly, similar to:

        new Thread(() => PostLog(logEvent)) { IsBackground = true }.Start()

    The first method has the advantage of being managed and bounded, but there's the potential (if the article is correct) that the background tasks are then vying for threads with ASP.NET request handlers. The second method frees up the ThreadPool, but at the cost of being unbounded and thus potentially using up too many resources.

    So my question is, is the advice in the article correct? If your site was getting so much traffic that your ThreadPool was getting full, then is it better to go out-of-band, or would a full ThreadPool imply that you're getting to the limit of your resources anyway, in which case you shouldn't be trying to start your own threads?

    Clarification: I'm just asking in the scope of small non-critical asynchronous tasks (e.g. remote logging), not expensive work items that would require a separate process (in these cases I agree you'll need a more robust solution).
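    A third option sometimes suggested for exactly this trade-off (a sketch, not from the article; names are hypothetical): one dedicated background thread draining a queue, so logging neither competes for ThreadPool threads nor spawns a thread per event.

        using System;
        using System.Collections.Generic;
        using System.Threading;

        public sealed class BackgroundQueue
        {
            private readonly Queue<Action> work = new Queue<Action>();
            private readonly object gate = new object();

            public BackgroundQueue()
            {
                // Single long-lived consumer; background so it won't block shutdown.
                var t = new Thread(Drain) { IsBackground = true };
                t.Start();
            }

            public void Enqueue(Action task)
            {
                lock (gate)
                {
                    work.Enqueue(task);
                    Monitor.Pulse(gate);   // wake the consumer
                }
            }

            private void Drain()
            {
                while (true)
                {
                    Action task;
                    lock (gate)
                    {
                        while (work.Count == 0) Monitor.Wait(gate);
                        task = work.Dequeue();
                    }
                    try { task(); } catch { /* swallow-and-continue policy is an assumption */ }
                }
            }
        }

        // Usage: queue.Enqueue(() => PostLog(logEvent));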

  • Collision Attacks, Message Digests and a Possible solution

    - by Dominar
    I've been doing some preliminary research in the area of message digests. Specifically collision attacks of cryptographic hash functions such as MD5 and SHA-1, such as the Postscript example and X.509 certificate duplicate. From what I can tell in the case of the postscript attack, specific data was generated and embedded within the header of the postscript (which is ignored during rendering) which brought about the internal state of the md5 to a state such that the modified wording of the document would lead to a final MD equivalent to the original. The X.509 took a similar approach where by data was injected within the comment/whitespace of the certificate. Ok so here is my question, and I can't seem to find anyone asking this question: Why isn't the length of ONLY the data being consumed added as a final block to the MD calculation? In the case of X.509 - Why is the whitespace and comments being taken into account as part of the MD? Wouldn't a simple processes such as one of the following be enough to resolve the proposed collision attacks: MD(M + |M|) = xyz MD(M + |M| + |M| * magicseed_0 +...+ |M| * magicseed_n) = xyz where : M : is the message |M| : size of the message MD : is the message digest function (eg: md5, sha, whirlpool etc) xyz : is the acutal message digest value for the message M magicseed_{i}: Is a set random values generated with seed based on the internal-state prior to the size being added. This technqiue should work, as to date all such collision attacks rely on adding more data to the original message. In short, the level of difficulty involved in generating a collision message such that: It not only generates the same MD But is also comprehensible/parsible/compliant and is also the same size as the original message, is immensely difficult if not near impossible. Has this approach ever been discussed? Any links to papers etc would be nice.
