Search Results

Search found 1325 results on 53 pages for 'factor'.


  • Smooth animation using MatrixTransform?

    - by Mattias Konradsson
    I'm trying to do a Matrix animation where I both scale and transpose a canvas at the same time. The only approach I found was using a MatrixTransform and MatrixAnimationUsingKeyFrames. Since there doesn't seem to be any interpolation for matrices built in (only for path/rotate), it seems the only choice is to build the interpolation and DiscreteMatrixKeyFrames yourself. I did a basic implementation of this, but it isn't exactly smooth and I'm not sure if this is the best way or how to handle frame rates etc. Anyone have suggestions for improvement? Here's the code:

        MatrixAnimationUsingKeyFrames anim = new MatrixAnimationUsingKeyFrames();
        int duration = 1;
        anim.KeyFrames = Interpolate(new Point(0, 0), centerPoint, 1, factor, 100, duration);
        this.matrixTransform.BeginAnimation(MatrixTransform.MatrixProperty, anim, HandoffBehavior.Compose);

        public MatrixKeyFrameCollection Interpolate(Point startPoint, Point endPoint, double startScale,
                                                    double endScale, double framerate, double duration)
        {
            MatrixKeyFrameCollection keyframes = new MatrixKeyFrameCollection();
            double steps = duration * framerate;
            double milliSeconds = 1000 / framerate;
            double timeCounter = 0;

            double diffX = Math.Abs(startPoint.X - endPoint.X);
            double xStep = diffX / steps;
            double diffY = Math.Abs(startPoint.Y - endPoint.Y);
            double yStep = diffY / steps;
            double diffScale = Math.Abs(startScale - endScale);
            double scaleStep = diffScale / steps;

            if (endPoint.Y < startPoint.Y) { yStep = -yStep; }
            if (endPoint.X < startPoint.X) { xStep = -xStep; }
            if (endScale < startScale) { scaleStep = -scaleStep; }

            Point currentPoint = new Point();
            double currentScale = startScale;
            for (int i = 0; i < steps; i++)
            {
                keyframes.Add(new DiscreteMatrixKeyFrame(
                    new Matrix(currentScale, 0, 0, currentScale, currentPoint.X, currentPoint.Y),
                    KeyTime.FromTimeSpan(TimeSpan.FromMilliseconds(timeCounter))));
                currentPoint.X += xStep;
                currentPoint.Y += yStep;
                currentScale += scaleStep;
                timeCounter += milliSeconds;
            }
            keyframes.Add(new DiscreteMatrixKeyFrame(
                new Matrix(endScale, 0, 0, endScale, endPoint.X, endPoint.Y),
                KeyTime.FromTimeSpan(TimeSpan.FromMilliseconds(0))));
            return keyframes;
        }
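
    A possible alternative, sketched under the assumption that the goal is a combined scale-plus-translate: WPF's DoubleAnimation interpolates smoothly on its own, so animating a ScaleTransform and a TranslateTransform inside a TransformGroup avoids hand-built matrix key frames entirely. Here canvas, factor and centerPoint stand in for the objects from the question.

        // Hedged sketch: animate ScaleTransform/TranslateTransform instead of the matrix.
        // 'canvas', 'factor' and 'centerPoint' are placeholders for the question's objects.
        var scale = new ScaleTransform(1, 1);
        var translate = new TranslateTransform(0, 0);
        var group = new TransformGroup();
        group.Children.Add(scale);
        group.Children.Add(translate);
        canvas.RenderTransform = group;

        var duration = new Duration(TimeSpan.FromSeconds(1));
        var scaleAnim = new DoubleAnimation(1, factor, duration);   // built-in linear interpolation
        scale.BeginAnimation(ScaleTransform.ScaleXProperty, scaleAnim);
        scale.BeginAnimation(ScaleTransform.ScaleYProperty, scaleAnim);
        translate.BeginAnimation(TranslateTransform.XProperty, new DoubleAnimation(0, centerPoint.X, duration));
        translate.BeginAnimation(TranslateTransform.YProperty, new DoubleAnimation(0, centerPoint.Y, duration));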

    Read the article

  • Plane projection and scale causing blurring in Silverlight

    - by Andy
    Ok, so I've tried to make an application which relies on images being scaled by an individual factor. These images can then be turned over by an animation working on the PlaneProjection rotation. The problem comes up when an image is both scaled and rotated: for some reason it starts blurring, whereas a non-scaled image doesn't blur. Also, if you look at the example image in the original post (top is scaled and rotated, bottom is only rotated), the projection of the top one doesn't even seem right. It's too horizontal. This is the code for the test app:

        <UserControl x:Class="SilverlightApplication1.Page"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            Width="400" Height="300">
            <Canvas x:Name="LayoutRoot" Background="White">
                <Border Canvas.Top="25" Canvas.Left="50">
                    <Border.RenderTransform>
                        <TransformGroup>
                            <ScaleTransform ScaleX="3" ScaleY="3" />
                        </TransformGroup>
                    </Border.RenderTransform>
                    <Border.Projection>
                        <PlaneProjection RotationY="45"/>
                    </Border.Projection>
                    <Image Source="bw-test-pattern.jpg" Width="50" Height="40"/>
                </Border>
                <Border Canvas.Top="150" Canvas.Left="50">
                    <Border.RenderTransform>
                        <TransformGroup>
                            <ScaleTransform ScaleX="1" ScaleY="1" />
                        </TransformGroup>
                    </Border.RenderTransform>
                    <Border.Projection>
                        <PlaneProjection RotationY="45"/>
                    </Border.Projection>
                    <Image Source="bw-test-pattern.jpg" Width="150" Height="120"/>
                </Border>
            </Canvas>
        </UserControl>

    So if anyone could possibly shed any light on why this may be happening, I'd very much appreciate it. Suggestions also welcome! :)

    Read the article

  • Classic ASP vs. ASP.NET encryption options

    - by harrije
    I'm working on a web site where the new pages are ASP.NET and the legacy pages are Classic ASP. Being new to development in the Windows environment, I've been studying the latest technology, i.e. .NET, and I become like a deer in headlights whenever legacy issues come up regarding COM objects. Security on the website is an abomination, but I've easily encrypted the connectionStrings in the web.config file per http://www.4guysfromrolla.com/articles/021506-1.aspx based on DPAPI machine mode. I understand this approach is not the most secure, but it's better than nothing, which is what it was for the ASP.NET pages.

    Now I question how to do similar encryption for the connection strings used by the Classic ASP pages. A complicating factor is that the web site is hosted where I do not have admin permissions or even command line access, just FTP. Moreover, I want to avoid managing the key. My research has found:

    1. DPAPI with COM interop. Seems like this should already be available, but the only thing I could find discussing this is CryptoUtility (see http://msdn.microsoft.com/en-us/magazine/cc163884.aspx), which is not installed on the hosting server.
    2. There are plenty of other third party COM objects, e.g. Crypto from Dalun Software http://www.dalun.com, but these aren't on the hosted server either, and they look to me to require you to do some kind of key management.
    3. There is CAPICOM on the hosted server, but M$ has deprecated it and many report it is not the easiest to use. It is not clear to me whether I can avoid key management with CAPICOM similar to using DPAPI for ASP.NET. If anyone happens to know, please clue me in.
    4. I could write a web service in ASP.NET and have the Classic ASP pages use it to get the decrypted connection strings and then store those in an application variable (a rough sketch of this follows below). I would not need to use SSL since I could use localhost and nothing would be sent over the internet. In the simplest form I could implement what someone termed a poor man's version based on a simple XML stream; however, I was really looking to avoid any development, since I find it hard to believe there is not a simple solution for Classic ASP like there is for ASP.NET.

    Maybe I'm missing some options... Recommendations are requested...
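
    To make the web-service option concrete, here is a minimal, hedged C# sketch of an ASP.NET handler that answers only same-machine requests and returns the connection string that ASP.NET has already decrypted from the DPAPI-protected web.config section. The handler and connection-string names are invented for illustration, and the .ashx/web.config registration is omitted.

        using System.Configuration;
        using System.Web;

        // Hypothetical handler (e.g. GetConnStr.ashx) consumed by the Classic ASP pages via localhost.
        public class GetConnStr : IHttpHandler
        {
            public void ProcessRequest(HttpContext context)
            {
                // Only answer requests coming from the same machine.
                if (!context.Request.IsLocal)
                {
                    context.Response.StatusCode = 403;
                    return;
                }

                // The protected <connectionStrings> section is decrypted transparently by ASP.NET.
                string cs = ConfigurationManager.ConnectionStrings["MainDb"].ConnectionString;
                context.Response.ContentType = "text/plain";
                context.Response.Write(cs);
            }

            public bool IsReusable { get { return true; } }
        }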

    Read the article

  • How to better create stacked bar graphs with multiple variables from ggplot2?

    - by deoksu
    I often have to make stacked barplots to compare variables, and because I do all my stats in R, I prefer to do all my graphics in R with ggplot2. I would like to learn how to do two things:

    First, I would like to be able to add proper percentage tick marks for each variable rather than tick marks by count. Counts would be confusing, which is why I take out the axis labels completely.

    Second, there must be a simpler way to reorganize my data to make this happen. It seems like the sort of thing I should be able to do natively in ggplot2 with plyr, but the documentation for plyr is not very clear (and I have read both the ggplot2 book and the online plyr documentation).

    My best graph looks like this (image omitted); the R code I use to get it is the following:

        library(epicalc)

        ### recode the variables to factors ###
        recode(c(int_newcoun, int_newneigh, int_neweur, int_newusa, int_neweco, int_newit,
                 int_newen, int_newsp, int_newhr, int_newlit, int_newent, int_newrel,
                 int_newhth, int_bapo, int_wopo, int_eupo, int_educ),
               c(1,2,3,4,5,6,7,8,9, NA),
               c('Very Interested','Somewhat Interested','Not Very Interested',
                 'Not At All interested',NA,NA,NA,NA,NA,NA))

        ### Combine recoded variables to a common vector
        Interest1 <- c(int_newcoun, int_newneigh, int_neweur, int_newusa, int_neweco, int_newit,
                       int_newen, int_newsp, int_newhr, int_newlit, int_newent, int_newrel,
                       int_newhth, int_bapo, int_wopo, int_eupo, int_educ)

        ### Create a second vector to label the first vector by original variable ###
        a1 <- rep("News about Bangladesh", length(int_newcoun))
        a2 <- rep("Neighboring Countries", length(int_newneigh))
        [...]
        a17 <- rep("Education", length(int_educ))
        Interest2 <- c(a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13, a14, a15, a16, a17)

        ### Create a Weighting vector of the proper length ###
        Interest.weight <- rep(weight, 17)

        ### Make and save a new data frame from the three vectors ###
        Interest.df <- cbind(Interest1, Interest2, Interest.weight)
        Interest.df <- as.data.frame(Interest.df)
        write.csv(Interest.df, 'C:\\Documents and Settings\\[name]\\Desktop\\Sweave\\InterestBangladesh.csv')

        ### Sort the factor levels to display properly ###
        Interest.df$Interest1 <- relevel(Interest$Interest1, ref='Not Very Interested')
        Interest.df$Interest1 <- relevel(Interest$Interest1, ref='Somewhat Interested')
        Interest.df$Interest1 <- relevel(Interest$Interest1, ref='Very Interested')
        Interest.df$Interest2 <- relevel(Interest$Interest2, ref='News about Bangladesh')
        Interest.df$Interest2 <- relevel(Interest$Interest2, ref='Education')
        [...]
        Interest.df$Interest2 <- relevel(Interest$Interest2, ref='European Politics')
        detach(Interest)
        attach(Interest)

        ### Finally create the graph in ggplot2 ###
        library(ggplot2)
        p <- ggplot(Interest, aes(Interest2, ..count..))
        p <- p + geom_bar((aes(weight=Interest.weight, fill=Interest1)))
        p <- p + coord_flip()
        p <- p + scale_y_continuous("", breaks=NA)
        p <- p + scale_fill_manual(value = rev(brewer.pal(5, "Purples")))
        p
        update_labels(p, list(fill='', x='', y=''))

    I'd very much appreciate any tips, tricks or hints. Thanks.

    Read the article

  • Efficiency of Java "Double Brace Initialization"?

    - by Jim Ferrans
    In Hidden Features of Java the top answer mentions Double Brace Initialization, with a very enticing syntax:

        Set<String> flavors = new HashSet<String>() {{
            add("vanilla");
            add("strawberry");
            add("chocolate");
            add("butter pecan");
        }};

    This idiom creates an anonymous inner class with just an instance initializer in it, which "can use any [...] methods in the containing scope".

    Main question: Is this as inefficient as it sounds? Should its use be limited to one-off initializations? (And of course showing off!)

    Second question: The new HashSet must be the "this" used in the instance initializer ... can anyone shed light on the mechanism?

    Third question: Is this idiom too obscure to use in production code?

    Summary: Very, very nice answers, thanks everyone. On question (3), people felt the syntax should be clear (though I'd recommend an occasional comment, especially if your code will pass on to developers who may not be familiar with it). On question (1), the generated code should run quickly. The extra .class files do cause jar file clutter, and slow program startup slightly (thanks to coobird for measuring that). Thilo pointed out that garbage collection can be affected, and the memory cost for the extra loaded classes may be a factor in some cases. Question (2) turned out to be most interesting to me. If I understand the answers, what's happening in DBI is that the anonymous inner class extends the class of the object being constructed by the new operator, and hence has a "this" value referencing the instance being constructed. Very neat.

    Overall, DBI strikes me as something of an intellectual curiosity. Coobird and others point out you can achieve the same effect with Arrays.asList, varargs methods, Google Collections, and the proposed Java 7 Collection literals. Newer JVM languages like Scala, JRuby, and Groovy also offer concise notations for list construction, and interoperate well with Java. Given that DBI clutters up the classpath, slows down class loading a bit, and makes the code a tad more obscure, I'd probably shy away from it. However, I plan to spring this on a friend who's just gotten his SCJP and loves good natured jousts about Java semantics! ;-) Thanks everyone!

    Read the article

  • Resizing an image using mouse dragging (C#)

    - by Gaax
    Hi all. I'm having some trouble resizing an image just by dragging the mouse. I found an average resize method and now am trying to modify it to use the mouse instead of given values. The way I'm doing it makes sense to me, but maybe you guys can give me some better ideas. I'm basically using the distance between the current location of the mouse and the previous location of the mouse as the scaling factor. If the distance between the current mouse location and the center of the image is less than the distance between the previous mouse location and the center of the image, then the image gets smaller, and vice-versa. With the code below I'm getting an ArgumentException (invalid parameter) when creating the new bitmap with the new height and width, and I really don't understand why... any ideas?

        private static Image resizeImage(Image imgToResize, System.Drawing.Point prevMouseLoc, System.Drawing.Point currentMouseLoc)
        {
            int sourceWidth = imgToResize.Width;
            int sourceHeight = imgToResize.Height;
            float dCurrCent = 0; // Distance between current mouse location and the center of the image
            float dPrevCent = 0; // Distance between previous mouse location and the center of the image
            float dCurrPrev = 0; // Distance between current mouse location and the previous mouse location
            int sign = 1;
            System.Drawing.Point imgCenter = new System.Drawing.Point();
            float nPercent = 0;

            imgCenter.X = imgToResize.Width / 2;
            imgCenter.Y = imgToResize.Height / 2;

            // Calculating the distance between the current mouse location and the center of the image
            dCurrCent = (float)Math.Sqrt(Math.Pow(currentMouseLoc.X - imgCenter.X, 2) + Math.Pow(currentMouseLoc.Y - imgCenter.Y, 2));

            // Calculating the distance between the previous mouse location and the center of the image
            dPrevCent = (float)Math.Sqrt(Math.Pow(prevMouseLoc.X - imgCenter.X, 2) + Math.Pow(prevMouseLoc.Y - imgCenter.Y, 2));

            // Calculating the sign value
            if (dCurrCent >= dPrevCent) { sign = 1; }
            else { sign = -1; }

            nPercent = sign * (float)Math.Sqrt(Math.Pow(currentMouseLoc.X - prevMouseLoc.X, 2) + Math.Pow(currentMouseLoc.Y - prevMouseLoc.Y, 2));

            int destWidth = (int)(sourceWidth * nPercent);
            int destHeight = (int)(sourceHeight * nPercent);

            Bitmap b = new Bitmap(destWidth, destHeight); // exception thrown here
            Graphics g = Graphics.FromImage((Image)b);
            g.InterpolationMode = InterpolationMode.HighQualityBicubic;
            g.DrawImage(imgToResize, 0, 0, destWidth, destHeight);
            g.Dispose();

            return (Image)b;
        }
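
    A hedged guess at the exception plus one way around it: Bitmap's constructor rejects zero or negative dimensions, and nPercent above is a raw pixel distance (0 when the mouse barely moves, negative when shrinking) rather than a ratio, so destWidth/destHeight can easily come out invalid. The sketch below reuses the names from the method above and scales by the ratio of the two center distances instead; it is an illustration, not the only fix.

        // Sketch only: derive a relative scale factor instead of an absolute distance.
        float ratio = dPrevCent > 0 ? dCurrCent / dPrevCent : 1.0f;

        int destWidth  = Math.Max(1, (int)(sourceWidth * ratio));
        int destHeight = Math.Max(1, (int)(sourceHeight * ratio));

        Bitmap b = new Bitmap(destWidth, destHeight); // dimensions now always >= 1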

    Read the article

  • Is this a situation where Qt Model/View architecture is not useful?

    - by csmithmaui
    Hi, I am writing a GUI-based application where I read a string of values from the serial port every few seconds, and I need to display most of the values in some type of graphical indicator (I was thinking of QProgressBar, maybe) that displays the range and the value. Some of the other data that I parse from the string are the date and fault codes. Also, the data is hierarchical.

    I wanted to use the model/view architecture of Qt because I have been interested in MVC stuff for a while but have never quite wrapped my brain around how to implement it very well. As of now, I have subclassed QAbstractItemModel, and in the model I read the serial port and wrap the items parsed from the string in a tree data structure. I can view all of the data in a QTreeView with no issues. I have also begun to subclass QAbstractItemView to build my custom view with all of the graphical indicators and such.

    This is where I am getting stuck. It seems to me that in order to design a view that knows how to display my custom model, the view needs to know exactly how all of the data in the model is organized. Doesn't that defeat the purpose of Model/View? The QTreeView I tested the model with basically just displays the model as it is set up in the tree structure, but I don't want to do that because the data is not all of the same type. Is the type of data, or the way you would like to present it to the user, a determining factor in whether or not you should use this architecture? I always assumed it was just always better to design in an MVC style.

    It seems to me like it might have been better to just subclass QWidget and then read in from the serial port and update all of the subwidgets (graphical indicators, labels, etc...) from the subclass. Essentially, do everything in one class. Can anybody who understands this issue explain either what I am missing or why I shouldn't be doing it this way? As of now I am a little confused. Thanks so much for any help!

    Read the article

  • How to optimize my PostgreSQL DB for prefix search?

    - by asmaier
    I have a table called "nodes" with roughly 1.7 million rows in my PostgreSQL db:

        =# \d nodes
               Table "public.nodes"
         Column |          Type          | Modifiers
        --------+------------------------+-----------
         id     | integer                | not null
         title  | character varying(256) |
         score  | double precision       |
        Indexes:
            "nodes_pkey" PRIMARY KEY, btree (id)

    I want to use information from that table for autocompletion of a search field, showing the user a list of the ten titles having the highest score fitting his input. So I used this query (here searching for all titles starting with "s"):

        =# explain analyze select title,score from nodes where title ilike 's%' order by score desc;
                                                               QUERY PLAN
        -----------------------------------------------------------------------------------------------------------------------
         Sort  (cost=64177.92..64581.38 rows=161385 width=25) (actual time=4930.334..5047.321 rows=161264 loops=1)
           Sort Key: score
           Sort Method:  external merge  Disk: 5712kB
           ->  Seq Scan on nodes  (cost=0.00..46630.50 rows=161385 width=25) (actual time=0.611..4464.413 rows=161264 loops=1)
                 Filter: ((title)::text ~~* 's%'::text)
         Total runtime: 5260.791 ms
        (6 rows)

    This was much too slow to use for autocomplete. With some information from Using PostgreSQL in Web 2.0 Applications I was able to improve that with a special index:

        =# create index title_idx on nodes using btree(lower(title) text_pattern_ops);
        =# explain analyze select title,score from nodes where lower(title) like lower('s%') order by score desc limit 10;
                                                                         QUERY PLAN
        ------------------------------------------------------------------------------------------------------------------------------------------
         Limit  (cost=18122.41..18122.43 rows=10 width=25) (actual time=1324.703..1324.708 rows=10 loops=1)
           ->  Sort  (cost=18122.41..18144.60 rows=8876 width=25) (actual time=1324.700..1324.702 rows=10 loops=1)
                 Sort Key: score
                 Sort Method:  top-N heapsort  Memory: 17kB
                 ->  Bitmap Heap Scan on nodes  (cost=243.53..17930.60 rows=8876 width=25) (actual time=96.124..1227.203 rows=161264 loops=1)
                       Filter: (lower((title)::text) ~~ 's%'::text)
                       ->  Bitmap Index Scan on title_idx  (cost=0.00..241.31 rows=8876 width=0) (actual time=90.059..90.059 rows=161264 loops=1)
                             Index Cond: ((lower((title)::text) ~>=~ 's'::text) AND (lower((title)::text) ~<~ 't'::text))
         Total runtime: 1325.085 ms
        (9 rows)

    So this gave me a speedup of a factor of 4. But can this be further improved? What if I want to use '%s%' instead of 's%'? Do I have any chance of getting decent performance with PostgreSQL in that case, too? Or should I rather try a different solution (Lucene? Sphinx?) for implementing my autocomplete feature?

    Read the article

  • Error at lapack cgesv when matrix is not singular

    - by Jan Malec
    This is my first post. I usually ask classmates for help, but they have a lot of work now and I'm too desperate to figure this out on my own :). I am working on a project for school and I have come to a point where I need to solve a system of linear equations with complex numbers. I have decided to call the LAPACK routine "cgesv" from C++. I use the C++ complex library to work with complex numbers. The problem is, when I call the routine, I get error code "2". From the LAPACK documentation:

        INFO is INTEGER
          = 0:  successful exit
          < 0:  if INFO = -i, the i-th argument had an illegal value
          > 0:  if INFO = i, U(i,i) is exactly zero. The factorization has been
                completed, but the factor U is exactly singular, so the solution
                could not be computed.

    Therefore, the element U(2,2) should be zero, but it is not. This is how I declare the function:

        void cgesv_( int* N, int* NRHS, std::complex* A, int* lda, int* ipiv,
                     std::complex* B, int* ldb, int* INFO );

    This is how I use it:

        int *IPIV = new int[NA];
        int INFO, NRHS = 1;
        std::complex<double> *aMatrix = new std::complex<double>[NA*NA];
        for(int i=0; i<NA; i++){
            for(int j=0; j<NA; j++){
                aMatrix[j*NA+i] = A[i][j];
            }
        }
        cgesv_( &NA, &NRHS, aMatrix, &NA, IPIV, B, &NB, &INFO );

    And this is what the matrix looks like:

        (1,-160.85)      (0,0.000306796)  (0,-0)            (0,-0)           (0,-0)
        (0,0.000306796)  (1,-40.213)      (0,0.000306796)   (0,-0)           (0,-0)
        (0,-0)           (0,0.000306796)  (1,-0.000613592)  (0,0.000306796)  (0,-0)
        (0,-0)           (0,-0)           (0,0.000306796)   (1,-40.213)      (0,0.000306796)
        (0,-0)           (0,-0)           (0,-0)            (0,0.000306796)  (1,-160.85)

    I had to split the matrix columns, otherwise it did not format correctly. My first suspicion was that complex is not parsed correctly, but I have used LAPACK functions with complex numbers this way before. Any ideas?

    Read the article

  • log4j performance

    - by Bob
    Hi, I'm developing a web app, and I'd like to log some information to help me improve and observe the app (I'm using Tomcat 6). First I thought I would use StringBuilders, append the logs to them, and have a task persist them into the database every 2 minutes or so, because I was worried about the performance of the out-of-the-box logging systems. Then I ran some tests, especially with log4j. Here is my code:

    Main.java

        public static void main(String[] args) {
            Thread[] threads = new Thread[LoggerThread.threadsNumber];
            for(int i = 0; i < LoggerThread.threadsNumber; ++i){
                threads[i] = new Thread(new LoggerThread("name - " + i));
            }
            LoggerThread.startTimestamp = System.currentTimeMillis();
            for(int i = 0; i < LoggerThread.threadsNumber; ++i){
                threads[i].start();
            }
        }

    LoggerThread.java

        public class LoggerThread implements Runnable{

            public static int threadsNumber = 10;
            public static long startTimestamp;
            private static int counter = 0;
            private String name;

            public LoggerThread(String name) {
                this.name = name;
            }

            private Logger log = Logger.getLogger(this.getClass());

            @Override
            public void run() {
                for(int i=0; i<10000; ++i){
                    log.info(name + ": " + i);
                    if(i == 9999){
                        int c = increaseCounter();
                        if(c == threadsNumber){
                            System.out.println("Elapsed time: " + (System.currentTimeMillis() - startTimestamp));
                        }
                    }
                }
            }

            private synchronized int increaseCounter(){
                return ++counter;
            }
        }

    log4j.properties

        log4j.logger.main.LoggerThread=debug, f
        log4j.appender.f=org.apache.log4j.RollingFileAppender
        log4j.appender.f.layout=org.apache.log4j.PatternLayout
        log4j.appender.f.layout.ConversionPattern=%d{ABSOLUTE} %5p %c{1}:%L - %m%n
        log4j.appender.f.File=c:/logs/logging.log
        log4j.appender.f.MaxFileSize=15000KB
        log4j.appender.f.MaxBackupIndex=50

    I think this is a very common configuration for log4j. First I used log4j 1.2.14, then I realized there was a newer version, so I switched to 1.2.16. Here are the figures (all in milliseconds):

        LoggerThread.threadsNumber = 10
          1.2.14: 4235, 4267, 4328, 4282
          1.2.16: 2780, 2781, 2797, 2781

        LoggerThread.threadsNumber = 100
          1.2.14: 41312, 41014, 42251
          1.2.16: 25606, 25729, 25922

    I think this is very fast. Don't forget that in every cycle the run method doesn't just log into the file: it also has to concatenate strings (name + ": " + i) and check an if test (i == 9999). When threadsNumber is 10, there are 100,000 loggings, if tests and concatenations. When it is 100, there are 1,000,000 loggings, if tests and concatenations. (I've read somewhere that the JVM uses StringBuilder's append for concatenation, not simple concatenation.)

    Did I miss something? Am I doing something wrong? Did I forget any factor that could decrease the performance? If these figures are correct, I think I don't have to worry about log4j's performance even if I log heavily, do I?

    Read the article

  • The Math of a Jump in a 2D game.

    - by ONi
    I have been all around with this question and I can't find the correct answer! So, behold, the description of my question: I'm working in J2ME. I have my game loop that does the following:

        public void run() {
            Graphics g = this.getGraphics();
            while (running) {
                long diff = System.currentTimeMillis() - lastLoop;
                lastLoop = System.currentTimeMillis();
                input();
                this.level.doLogic();
                render(g, diff);
                try {
                    Thread.sleep(10);
                } catch (InterruptedException e) {
                    stop(e);
                }
            }
        }

    So it's just a basic game loop. The doLogic() function calls all the logic functions of the characters in the scene, and render(g, diff) calls the animChar function of every character on scene. Following this, the animChar function in the Character class sets up everything on the screen like this:

        protected void animChar(long diff) {
            this.checkGravity();
            this.move((int) ((diff * this.dx) / 1000), (int) ((diff * this.dy) / 1000));
            if (this.acumFrame > this.framerate) {
                this.nextFrame();
                this.acumFrame = 0;
            } else {
                this.acumFrame += diff;
            }
        }

    This ensures that everything moves according to the time the machine takes to go from cycle to cycle (remember it's a phone, not a gaming rig). I'm sure it's not the most efficient way to achieve this behavior, so I'm totally open to criticism of my programming skills in the comments, but here is my problem: when I make a character jump, I set its dy to a negative value, say -200, and set the boolean jumping to true; that makes the character go up. Then I have this function called checkGravity() that ensures that everything that goes up has to come down. checkGravity also checks for the character being over platforms, so I will strip it down a little for the sake of your time:

        public void checkGravity() {
            if (this.jumping) {
                this.jumpSpeed += 10;
                if (this.jumpSpeed > 0) {
                    this.jumping = false;
                    this.falling = true;
                }
                this.dy = this.jumpSpeed;
            }
            if (this.falling) {
                this.jumpSpeed += 10;
                if (this.jumpSpeed > 200) this.jumpSpeed = 200;
                this.dy = this.jumpSpeed;
                if (this.collidesWithPlatform()) {
                    this.falling = false;
                    this.standing = true;
                    this.jumping = false;
                    this.jumpSpeed = 0;
                    this.dy = this.jumpSpeed;
                }
            }
        }

    So, the problem is that this function updates dy regardless of diff, making the characters fly like Superman on slow machines, and I have no idea how to apply the diff factor so that when a character is jumping, its speed decreases in proportion to the game speed. Can anyone help me fix this issue, or give me pointers on how to do a 2D jump in J2ME the right way? Thank you very much for your time.
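
    A possible direction (my own sketch, not code from the question): treat gravity as an acceleration per second and scale it by diff, exactly the way animChar already scales dx and dy. The snippet below is written in C-family syntax that reads the same in J2ME; the gravityPerSecond name and any value passed for it are assumptions to tune.

        // Sketch only: frame-rate independent gravity step.
        // gravityPerSecond plays the role of the hard-coded "+= 10", expressed per second.
        static int ApplyGravity(int jumpSpeed, long diffMillis, int gravityPerSecond) {
            return jumpSpeed + (int) ((gravityPerSecond * diffMillis) / 1000);
        }

    checkGravity would then take diff as a parameter, replace this.jumpSpeed += 10 with something like this.jumpSpeed = ApplyGravity(this.jumpSpeed, diff, 600), and keep dy = jumpSpeed as before, so fast and slow phones accelerate the character at the same rate per second.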

    Read the article

  • General ORM design question

    - by Calvin
    Suppose you have 2 classes, Person and Rabbit. A person can do a number of things to a rabbit: s/he can either feed it, buy it and become its owner, or give it away. A rabbit can have no owner or at most 1 owner at a time. And if it is not fed for a while, it may die.

        class Person {
            void Feed(Rabbit r);
            void Buy(Rabbit r);
            void Giveaway(Person p, Rabbit r);
            Rabbit[] rabbits;
        }

        class Rabbit {
            bool IsAlive();
            Person owner;
        }

    There are a couple of observations from the domain model:

    1. Person and Rabbit can have references to each other.
    2. Any action on one object can also change the state of the other object.
    3. Even if no explicit actions are invoked, there can still be a change of state in the objects (e.g. a Rabbit can starve to death, which causes it to be removed from the Person.rabbits array).

    As far as DDD is concerned, I think the correct approach is to synchronize all calls that may change the states in the domain model. For instance, if a Person buys a Rabbit, s/he would need to acquire a lock on the Person to change the rabbits array AND also a lock on the Rabbit to change its owner before releasing the first one (see the sketch below). This would prevent a race condition where 2 Persons claim to be the owner of the little Rabbit.

    The other approach is to let the database handle all these synchronizations. Whoever makes the first call wins, but then the DB needs to have some kind of business logic to figure out if it is a valid transaction (e.g. if a Rabbit already has an owner, it cannot change its owner unless the Person gives it away).

    There are pros and cons to either approach, and I'd expect the "best" solution to be somewhere in-between. How would you do it in real life? What's your take and experience? Also, is it a valid concern that there can be another race condition where the domain model has committed its change but it is not yet fully committed in the database? And for the 3rd observation (i.e. state change due to the time factor), how will you do it?
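
    A minimal C# sketch of the in-memory locking idea, under the assumption that every operation takes the two locks in the same global order (Person first, then Rabbit; operations involving two Persons would need a stable tie-break such as an Id). The Owner property, the List<Rabbit>, and locking on the objects themselves are illustrative shortcuts, not a recommendation.

        using System.Collections.Generic;

        public class Rabbit
        {
            public Person Owner { get; set; }
        }

        public class Person
        {
            private readonly List<Rabbit> rabbits = new List<Rabbit>();

            public void Buy(Rabbit r)
            {
                lock (this)      // first lock: the buying Person
                lock (r)         // second lock: the Rabbit; same order everywhere avoids deadlock
                {
                    if (r.Owner == null)   // first buyer wins; a concurrent buyer sees an owner
                    {
                        r.Owner = this;
                        rabbits.Add(r);
                    }
                    // (dedicated private lock objects are safer than locking on this/r,
                    //  but they would make the sketch longer)
                }
            }
        }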

    Read the article

  • Which programming language to choose? (for a specific problem/domain, details inside)

    - by Bijan
    I am building a trading portfolio management system that is responsible for production, optimization, and simulation of non-high-frequency trading portfolios (dealing with 1-min or 3-min bars of data, not tick data). I plan on employing Amazon web services to take on the entire load of the application. I have four choices that I am considering as the language:

    a) Java
    b) C++
    c) C#
    d) Python

    Here is the scope of the extremes of the project. This isn't how it will be, maybe ever, but it's within the scope of the requirements:

    - Weekly simulation of 10,000,000 trading systems. (Each trading system is expected to have its own data mining methods, including feature selection algorithms which are extremely computationally expensive. Imagine 500-5000 features using wrappers. These are not run often by any means, but it's still a consideration.)
    - Real-time production of a portfolio with 100,000 trading strategies.
    - Taking in 1-min or 3-min data from every stock/futures market around the globe (approx 100,000).
    - Portfolio optimization of portfolios with up to 100,000 strategies (a rather intensive algorithm).

    Speed is a concern, but I believe that Java can handle the load. I just want to make sure that Java CAN handle the above requirements comfortably. I don't want to do the project in C++, but I will if it's required. The reason C# is on there is because I thought it was a good alternative to Java, even though I don't like Windows at all and would prefer Java if all things are the same. Python - I've read some things on PyPy and Psyco that claim Python can be optimized with JIT compiling to run at near C-like speeds... That's pretty much the only reason it is on this list, besides the fact that Python is a great language and would probably be the most enjoyable language to code in, which is not a factor at all for this project, but a perk.

    To sum up:

    - real-time production
    - weekly simulations of a large number of systems
    - weekly/monthly optimizations of portfolios
    - large numbers of connections to collect data from

    There is no dealing with millisecond or even second-based trades. The only consideration is whether Java can deal with this kind of load when spread out over the necessary number of EC2 servers. Thank you guys so much for your wisdom.

    Read the article

  • Setting custom UITableViewCells height

    - by Vijayeta
    I am using a custom UITableViewCell which has some labels, buttons and image views to be displayed. There is one label in the cell whose text is an NSString object, and the length of the string can vary, so I cannot set a constant height for the cell in UITableView's heightForRowAtIndexPath: method. The cell's height depends on the label's height, which can be determined using NSString's sizeWithFont: method. I tried using it, but it looks like I'm going wrong somewhere. Can anyone help me out with this? Adding the code used in initializing the cell:

        if (self = [super initWithFrame:frame reuseIdentifier:reuseIdentifier]) {
            self.selectionStyle = UITableViewCellSelectionStyleNone;

            UIImage *image = [UIImage imageNamed:@"dot.png"];
            imageView = [[UIImageView alloc] initWithImage:image];
            imageView.frame = CGRectMake(45.0, 10.0, 10, 10);

            headingTxt = [[UILabel alloc] initWithFrame:CGRectMake(60.0, 0.0, 150.0, post_hdg_ht)];
            [headingTxt setContentMode:UIViewContentModeCenter];
            headingTxt.text = postData.user_f_name;
            headingTxt.font = [UIFont boldSystemFontOfSize:13];
            headingTxt.textAlignment = UITextAlignmentLeft;
            headingTxt.textColor = [UIColor blackColor];

            dateTxt = [[UILabel alloc] initWithFrame:CGRectMake(55.0, 23.0, 150.0, post_date_ht)];
            dateTxt.text = postData.created_dtm;
            dateTxt.font = [UIFont italicSystemFontOfSize:11];
            dateTxt.textAlignment = UITextAlignmentLeft;
            dateTxt.textColor = [UIColor grayColor];

            NSString *text1 = postData.post_body;
            NSLog(@"text length = %d", [text1 length]);

            CGRect bounds = [UIScreen mainScreen].bounds;
            CGFloat tableViewWidth;
            CGFloat width = 0;
            tableViewWidth = bounds.size.width / 2;
            width = tableViewWidth - 40; // fudge factor
            //CGSize textSize = {width, 20000.0f}; // width and height of text area
            CGSize textSize = {245.0, 20000.0f};   // width and height of text area
            CGSize size1 = [text1 sizeWithFont:[UIFont systemFontOfSize:11.0f]
                             constrainedToSize:textSize
                                 lineBreakMode:UILineBreakModeWordWrap];
            CGFloat ht = MAX(size1.height, 28);

            textView = [[UILabel alloc] initWithFrame:CGRectMake(55.0, 42.0, 245.0, ht)];
            textView.text = postData.post_body;
            textView.font = [UIFont systemFontOfSize:11];
            textView.textAlignment = UITextAlignmentLeft;
            textView.textColor = [UIColor blackColor];
            textView.lineBreakMode = UILineBreakModeWordWrap;
            textView.numberOfLines = 3;
            textView.autoresizesSubviews = YES;

            [self.contentView addSubview:imageView];
            [self.contentView addSubview:textView];
            [self.contentView addSubview:webView];
            [self.contentView addSubview:dateTxt];
            [self.contentView addSubview:headingTxt];
            [self.contentView sizeToFit];

            [imageView release];
            [textView release];
            [webView release];
            [dateTxt release];
            [headingTxt release];
        }

    The line textView = [[UILabel alloc] initWithFrame:CGRectMake(55.0, 42.0, 245.0, ht)]; creates the label whose height and width are going wrong.

    Read the article

  • Performing full screen grab in windows

    - by Steven Lu
    I am working on an idea that involves getting a full capture of the screen including windows and apps, analyzing it, and then drawing items back onto the screen as an overlay. I want to learn image processing techniques, and I could get lots of data to work with if I can directly access the Windows screen. I could use this to build automation tools the likes of which have never been seen before. More on that later.

    I have full screen capture working for the most part:

        HWND hwind = GetDesktopWindow();
        HDC hdc = GetDC(hwind);
        int resx = GetSystemMetrics(SM_CXSCREEN);
        int resy = GetSystemMetrics(SM_CYSCREEN);
        int BitsPerPixel = GetDeviceCaps(hdc, BITSPIXEL);
        HDC hdc2 = CreateCompatibleDC(hdc);

        BITMAPINFO info;
        info.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
        info.bmiHeader.biWidth = resx;
        info.bmiHeader.biHeight = resy;
        info.bmiHeader.biPlanes = 1;
        info.bmiHeader.biBitCount = BitsPerPixel;
        info.bmiHeader.biCompression = BI_RGB;

        void *data;
        hbitmap = CreateDIBSection(hdc2, &info, DIB_RGB_COLORS, (void**)&data, 0, 0);
        SelectObject(hdc2, hbitmap);

    Once this is done, I can call this repeatedly:

        BitBlt(hdc2, 0, 0, resx, resy, hdc, 0, 0, SRCCOPY);

    The cleanup code (I have no idea if this is correct):

        DeleteObject(hbitmap);
        ReleaseDC(hwind, hdc);
        if (hdc2) {
            DeleteDC(hdc2);
        }

    Every time BitBlt is called it grabs the screen and saves it in memory I can access through data. Performance is somewhat satisfactory: BitBlt executes in 50 milliseconds (sometimes as low as 33 ms) at 1920x1200x32.

    What surprises me is that when I switch the display mode to 16 bit, 1920x1200x16, either through my graphics settings beforehand or by using ChangeDisplaySettings, I get a massively improved screen grab time between 1 ms and 2 ms, which cannot be explained by the factor-of-two reduction in bit depth. Using CreateDIBSection (as above) offers a significant speedup when in 16-bit mode, compared to setting up with CreateCompatibleBitmap (6-7 ms/frame).

    Does anybody know why dropping to 16 bit causes such a speed increase? Is there any hope of grabbing 32 bit at such speeds? If not for the color depth, then for not forcing a change of screen buffer modes and the awful flickering.

    Read the article

  • Generate lags R

    - by Btibert3
    Hi All, I hope this is basic; just need a nudge in the right direction. I have read in a database table from MS Access into a data frame using RODBC. Here is a basic structure of what I read in:

        PRODID  PROD  Year  Week  QTY  SALES  INVOICE

    Here is the structure:

        str(data)
        'data.frame':   8270 obs. of  7 variables:
         $ PRODID  : int  20001 20001 20001 100001 100001 100001 100001 100001 100001 100001 ...
         $ PROD    : Factor w/ 1239 levels "1% 20qt Box",..: 335 335 335 128 128 128 128 128 128 128 ...
         $ Year    : int  2010 2010 2010 2009 2009 2009 2009 2009 2009 2010 ...
         $ Week    : int  12 18 19 14 15 16 17 18 19 9 ...
         $ QTY     : num  1 1 0 135 300 270 300 270 315 315 ...
         $ SALES   : num  15.5 0 -13.9 243 540 ...
         $ INVOICES: num  1 1 2 5 11 11 10 11 11 12 ...

    Here are the top few rows:

        head(data, n=10)
           PRODID           PROD Year Week QTY  SALES INVOICES
        1   20001      Dolie 12" 2010   12   1  15.46        1
        2   20001      Dolie 12" 2010   18   1   0.00        1
        3   20001      Dolie 12" 2010   19   0 -13.88        2
        4  100001 Cage Free Eggs 2009   14 135 243.00        5
        5  100001 Cage Free Eggs 2009   15 300 540.00       11
        6  100001 Cage Free Eggs 2009   16 270 486.00       11
        7  100001 Cage Free Eggs 2009   17 300 540.00       10
        8  100001 Cage Free Eggs 2009   18 270 486.00       11
        9  100001 Cage Free Eggs 2009   19 315 567.00       11
        10 100001 Cage Free Eggs 2010    9 315 569.25       12

    I simply want to generate lags for QTY, SALES, INVOICE for each product, but I am not sure where to start. I know R is great with time series, but I am not sure where to start. I have two questions:

    1. I have the raw invoice data but have aggregated it for reporting purposes. Would it be easier if I didn't aggregate the data?
    2. Regardless of aggregation or not, what functions will I need to loop over each product and generate the lags as I need them?

    In short, I want to loop over a set of records, calculate lags for a product (if possible), append the lags (as they apply) to the current record for each product, and write the results back to a table in my database for my reporting software to use. Any help you can provide will be greatly appreciated! Many thanks in advance, Brock

    Read the article

  • How to move a kinect skeleton to another position

    - by Ewerton
    I am working on an extension method to move a skeleton to a desired position in the Kinect field of view. My code receives a skeleton to be moved and the target ("destiny") position. I calculate the distance between the received skeleton's hip center and the target position to find how much to move, then I iterate over the joints applying this factor. My code currently looks like this:

        public static Skeleton MoveTo(this Skeleton skToBeMoved, Vector4 destiny)
        {
            Joint newJoint = new Joint();

            /// Based on the HipCenter (I don't know if it is reliable; it seems it is.)
            float howMuchMoveToX = Math.Abs(skToBeMoved.Joints[JointType.HipCenter].Position.X - destiny.X);
            float howMuchMoveToY = Math.Abs(skToBeMoved.Joints[JointType.HipCenter].Position.Y - destiny.Y);
            float howMuchMoveToZ = Math.Abs(skToBeMoved.Joints[JointType.HipCenter].Position.Z - destiny.Z);

            float howMuchToMultiply = 1;

            // Iterate over the 20 Joints
            foreach (JointType item in Enum.GetValues(typeof(JointType)))
            {
                newJoint = skToBeMoved.Joints[item];

                // This adjustment tries to keep skToBeMoved in the desired position
                if (newJoint.Position.X < 0)
                    howMuchToMultiply = 1;  // if the point is in a negative position, carry it to a "more positive" position
                else
                    howMuchToMultiply = -1; // if the point is in a positive position, carry it to a "more negative" position

                // applying the new values to the joint
                SkeletonPoint pos = new SkeletonPoint()
                {
                    X = newJoint.Position.X + (howMuchMoveToX * howMuchToMultiply),
                    Y = newJoint.Position.Y, // * (float)whatToMultiplyY,
                    Z = newJoint.Position.Z, // * (float)whatToMultiplyZ
                };
                newJoint.Position = pos;
                skToBeMoved.Joints[item] = newJoint;

                //if (skToBeMoved.Joints[JointType.HipCenter].Position.X < 0)
                //{
                //    if (item == JointType.HandLeft)
                //    {
                //        if (skToBeMoved.Joints[item].Position.X > 0)
                //        {
                //        }
                //    }
                //}
            }

            return skToBeMoved;
        }

    Actually, only the X position is considered. Now, THE PROBLEM: if I stand in a negative position and move my hand to a positive position, I get strange behavior (illustrated by an image in the original post). To reproduce this behaviour you could use this code:

        using (SkeletonFrame frame = e.OpenSkeletonFrame())
        {
            if (frame == null)
                return new Skeleton();

            if (skeletons == null || skeletons.Length != frame.SkeletonArrayLength)
            {
                skeletons = new Skeleton[frame.SkeletonArrayLength];
            }
            frame.CopySkeletonDataTo(skeletons);

            Skeleton skeletonToTest = skeletons.Where(s => s.TrackingState == SkeletonTrackingState.Tracked).FirstOrDefault();

            Vector4 newPosition = new Vector4();
            newPosition.X = -0.03412333f;
            newPosition.Y = 0.0407479f;
            newPosition.Z = 1.927342f;
            newPosition.W = 0; // ignored

            skeletonToTest.MoveTo(newPosition);
        }

    I know this is simple math, but I can't figure out why this is happening. Any help will be appreciated.
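
    For comparison, here is a hedged sketch of a simpler variant that avoids the per-joint sign flipping: translate every joint by the same signed offset between the hip center and the target, so the skeleton keeps its shape. It only reuses the Kinect types already present in the question's code; whether this matches the intended behaviour is an assumption.

        // Sketch only: move the whole skeleton by one rigid offset (no Math.Abs, no sign flip).
        public static Skeleton MoveTo(this Skeleton skToBeMoved, Vector4 destiny)
        {
            SkeletonPoint hip = skToBeMoved.Joints[JointType.HipCenter].Position;
            float dx = destiny.X - hip.X;   // signed offsets
            float dy = destiny.Y - hip.Y;
            float dz = destiny.Z - hip.Z;

            foreach (JointType type in Enum.GetValues(typeof(JointType)))
            {
                Joint joint = skToBeMoved.Joints[type];
                SkeletonPoint p = joint.Position;
                joint.Position = new SkeletonPoint { X = p.X + dx, Y = p.Y + dy, Z = p.Z + dz };
                skToBeMoved.Joints[type] = joint;
            }
            return skToBeMoved;
        }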

    Read the article

  • Calculate minimum moves to solve a puzzle

    - by Luke
    I'm in the process of creating a game where the user will be presented with 2 sets of colored tiles. In order to ensure that the puzzle is solvable, I start with one set, copy it to a second set, then swap tiles from one set to another. Currently (and this is where my issue lies), the number of swaps is determined by the level the user is playing - 1 swap for level 1, 2 swaps for level 2, etc. This same number of swaps is used as a goal in the game. The user must complete the puzzle by swapping a tile from one set to the other to make the 2 sets match (by color). The order of the tiles in the (user) solved puzzle doesn't matter as long as the 2 sets match.

    The problem I have is that as the number of swaps I used to generate the puzzle approaches the number of tiles in each set, the puzzle becomes easier to solve. Basically, you can just drag from one set in whatever order you need for the second set and solve the puzzle with plenty of moves left.

    What I am looking to do is, after I finish building the puzzle, calculate the minimum number of moves required to solve the puzzle. Again, this is almost always less than the number of swaps used to create the puzzle, especially as the number of swaps approaches the number of tiles in each set. My goal is to calculate the best case scenario and then give the user a "fudge factor" (i.e. 1.2 times the minimum number of moves). Solving the puzzle in under this number of moves will result in passing the level.

    A little background as to how I currently have the game configured:

    - Levels 1 to 10: 9 tiles in each set. 5 different color tiles.
    - Levels 11 to 20: 12 tiles in each set. 7 different color tiles.
    - Levels 21 to 25: 15 tiles in each set. 10 different color tiles.
    - Swapping within a set is not allowed.
    - For each level, there will be at least 2 tiles of a given color (one for each set in the solved puzzle).

    Is there any type of algorithm anyone could recommend to calculate the minimum number of moves to solve a given puzzle?
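
    One way to compute it (my own reasoning, not taken from the question): since order within a set doesn't matter, only the per-color counts do, and an optimal swap removes exactly one surplus tile from each set, so the minimum number of moves works out to half of the summed positive per-color differences between the two sets. A hedged C# sketch, assuming colors are represented as ints and the puzzle is solvable (every color's total count is even):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class PuzzleMath
        {
            // Minimum swaps needed so that both sets hold the same multiset of colors.
            public static int MinimumSwaps(IEnumerable<int> set1, IEnumerable<int> set2)
            {
                var count1 = set1.GroupBy(c => c).ToDictionary(g => g.Key, g => g.Count());
                var count2 = set2.GroupBy(c => c).ToDictionary(g => g.Key, g => g.Count());

                // For every color, count how many tiles set 1 holds beyond what set 2 holds;
                // half of that total is the number of tiles that must leave set 1.
                int surplus = 0;
                foreach (var kv in count1)
                {
                    int other;
                    count2.TryGetValue(kv.Key, out other);
                    surplus += Math.Max(0, kv.Value - other);
                }
                return surplus / 2;
            }
        }

    Multiplying this result by the "fudge factor" from the question (e.g. 1.2, rounded up) would then give the pass threshold for a level.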

    Read the article

  • Combining FileStream and MemoryStream to avoid disk accesses/paging while receiving gigabytes of data?

    - by w128
    I'm receiving a file as a stream of byte[] data packets (total size isn't known in advance) that I need to store somewhere before processing it immediately after it's been received (I can't do the processing on the fly). The total received file size can vary from as small as 10 KB to over 4 GB.

    One option for storing the received data is to use a MemoryStream, i.e. a sequence of MemoryStream.Write(bufferReceived, 0, count) calls to store the received packets. This is very simple, but obviously will result in an out-of-memory exception for large files.

    An alternative option is to use a FileStream, i.e. FileStream.Write(bufferReceived, 0, count). This way, no out-of-memory exceptions will occur, but what I'm unsure about is bad performance due to disk writes (which I don't want to occur as long as plenty of memory is still available) - I'd like to avoid disk access as much as possible, but I don't know of a way to control this.

    I did some testing and most of the time, there seems to be little performance difference between, say, 10,000 consecutive calls of MemoryStream.Write() vs FileStream.Write(), but a lot seems to depend on buffer size and the total amount of data in question (i.e. the number of writes). Obviously, MemoryStream size reallocation is also a factor.

    Does it make sense to use a combination of MemoryStream and FileStream, i.e. write to the memory stream by default, but once the total amount of data received is over e.g. 500 MB, write it to a FileStream; then read in chunks from both streams for processing the received data (first process 500 MB from the MemoryStream, dispose it, then read from the FileStream)? A rough sketch of this is below.

    Another solution is to use a custom memory stream implementation that doesn't require continuous address space for internal array allocation (i.e. a linked list of memory streams); this way, at least on 64-bit environments, out-of-memory exceptions should no longer be an issue. Con: extra work, more room for mistakes.

    So how do FileStream vs MemoryStream reads/writes behave in terms of disk access and memory caching, i.e. the data size/performance balance? I would expect that as long as enough RAM is available, FileStream would internally read/write from memory (cache) anyway, and virtual memory would take care of the rest. But I don't know how often FileStream will explicitly access a disk when being written to. Any help would be appreciated.
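
    For what it's worth, a minimal sketch of that spill-over idea; the 500 MB threshold, the temp-file location and the class shape are assumptions for illustration, not a recommendation or an existing API.

        using System;
        using System.IO;

        // Buffers writes in memory and silently switches to a temp file once a limit is hit.
        class SpillOverBuffer : IDisposable
        {
            private const long MemoryLimit = 500L * 1024 * 1024;   // 500 MB, as in the question
            private MemoryStream memory = new MemoryStream();
            private FileStream file;

            public void Write(byte[] buffer, int count)
            {
                if (file == null && memory.Length + count > MemoryLimit)
                {
                    // Spill everything received so far to disk and keep writing there.
                    file = new FileStream(Path.GetTempFileName(), FileMode.Create,
                                          FileAccess.ReadWrite, FileShare.None, 81920);
                    memory.Position = 0;
                    memory.CopyTo(file);
                    memory.Dispose();
                    memory = null;
                }

                if (file != null) file.Write(buffer, 0, count);
                else              memory.Write(buffer, 0, count);
            }

            // The finished data ends up either fully in memory or fully in the temp file.
            public Stream Finish()
            {
                Stream result = (Stream)file ?? memory;
                result.Position = 0;
                return result;
            }

            public void Dispose()
            {
                if (file != null) file.Dispose();
                if (memory != null) memory.Dispose();
            }
        }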

    Read the article

  • NSNotifications vs delegate for multiple instances of same protocol

    - by Brent Traut
    I could use some architectural advice. I've run into the following problem a few times now and I've never found a truly elegant way to solve it. The issue, described at the highest level possible: I have a parent class that would like to act as the delegate for multiple children (all using the same protocol), but when the children call methods on the parent, the parent no longer knows which child is making the call. I would like to use loose coupling (delegates/protocols or notifications) rather than direct calls. I don't need multiple handlers, so notifications seem like they might be overkill.

    To illustrate the problem, let me try a super-simplified example: I start with a parent view controller (and corresponding view). I create three child views and insert each of them into the parent view. I would like the parent view controller to be notified whenever the user touches one of the children. There are a few options to notify the parent:

    1. Define a protocol. The parent implements the protocol and sets itself as the delegate to each of the children. When the user touches a child view, its view controller calls its delegate (the parent). In this case, the parent is notified that a view was touched, but it doesn't know which one. Not good enough.
    2. Same as #1, but define the methods in the protocol to also pass some sort of identifier. When the child tells its delegate that it was touched, it also passes a pointer to itself. This way, the parent knows exactly which view was touched. It just seems really strange for an object to pass a reference to itself (see the sketch below).
    3. Use NSNotifications. The parent defines a separate method for each of the three children and then subscribes to the "viewWasTouched" notification for each of the three children as the notification sender. The children don't need to attach themselves to the user dictionary, but they do need to send the notification with a pointer to themselves as the sender.
    4. Same as #3, but rather than using separate methods, the parent could just use one with a switch case or other branching along with the notification's sender to determine which path to take.
    5. Create multiple man-in-the-middle classes that act as the delegates to the child views and then call methods on the parent either with a pointer to the child or with some other differentiating factor. This approach doesn't seem scalable.

    Are any of these approaches considered best practice? I can't say for sure, but it feels like I'm missing something more obvious/elegant.
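
    For comparison only (different platform, same shape): option 2 above is essentially the standard sender argument used by .NET events, where every callback carries a reference to the child that raised it, and that is considered normal rather than strange. A minimal C# sketch of the pattern, with made-up names:

        using System;

        class ChildView
        {
            public string Name { get; set; }

            // The child exposes an event and passes itself as the sender when it fires.
            public event EventHandler Touched;

            public void OnTouched()
            {
                var handler = Touched;
                if (handler != null) handler(this, EventArgs.Empty);
            }
        }

        class ParentController
        {
            public void Attach(ChildView child)
            {
                child.Touched += (sender, e) =>
                {
                    var touched = (ChildView)sender;   // the parent knows exactly which child
                    Console.WriteLine(touched.Name + " was touched");
                };
            }
        }

    UIKit's own delegate protocols follow the same convention, with the calling object (e.g. the table view) as the first parameter of each delegate method.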

    Read the article

  • synchronized in java - Proper use

    - by ZoharYosef
    I'm building a simple program to be used by multiple processes (threads). My question is more about understanding: when do I have to use the reserved word synchronized? Do I need to use it on every method that affects the member variables? I know I can put it on any method that is not static, but I want to understand more. Thank you! Here is the code:

        public class Container {

            // *** data members ***
            public static final int INIT_SIZE = 10; // the first (init) size of the set.
            public static final int RESCALE = 10;   // the re-scale factor of this set.
            private int _sp = 0;
            public Object[] _data;

            /************ Constructors ************/
            public Container() {
                _sp = 0;
                _data = new Object[INIT_SIZE];
            }

            public Container(Container other) { // copy constructor
                this();
                for (int i = 0; i < other.size(); i++)
                    this.add(other.at(i));
            }

            /** return true if this collection is empty, else return false. */
            public synchronized boolean isEmpty() { return _sp == 0; }

            /** add an Object to this set */
            public synchronized void add(Object p) {
                if (_sp == _data.length)
                    rescale(RESCALE);
                _data[_sp] = p; // shallow copy semantics.
                _sp++;
            }

            /** returns the actual amount of Objects contained in this collection */
            public synchronized int size() { return _sp; }

            /** returns true if this container contains an element which is equal to ob */
            public synchronized boolean isMember(Object ob) {
                return get(ob) != -1;
            }

            /** return the index of the first object which equals ob, if none returns -1 */
            public synchronized int get(Object ob) {
                int ans = -1;
                for (int i = 0; i < size(); i = i + 1)
                    if (at(i).equals(ob))
                        return i;
                return ans;
            }

            /** returns the element located at the ind place in this container (null if out of range) */
            public synchronized Object at(int p) {
                if (p >= 0 && p < size())
                    return _data[p];
                else
                    return null;
            }
        }

    Read the article

  • Is 4-5 years the “Midlife Crisis” for a programming career?

    - by Jeffrey
    I’ve been programming C# professionally for a bit over 4 years now. For the past 4 years I’ve worked for a few small/medium companies ranging from “web/ads agencies” and small industry-specific software shops to a small startup. I've been mainly doing "business apps" that involve using high-level (garbage-collected) programming languages, and my overall experience is that all of the work I’ve done could have been more professional. A lot of things were done incorrectly (in a rush), mainly due to the cost factor: people always wanted something “now” and with the smallest amount of spendable money. I kept thinking that maybe if I could work for a bigger company, or a company better suited to programmers, or somewhere with the money and time to really build something longer-term and more maintainable, I might have enjoyed my career more.

    I’ve never had a “mentor” who guided me through my 4-year career. I am pretty much a blog/Google/self-taught programmer, other than my bachelor's IT degree. I’ve also observed another issue: most of the so-called “senior” programmers in “my working environment” are really not that senior skill-wise. They are “senior” only because they’ve been programmers for a long time, but the code they write or the decisions they make are absolutely rubbish! They don't want to learn, they don't want to be better; they just want to get paid and do what they've been told to do, which makes sense, and most of us are like that. Maybe that’s why they are where they are now. But I don’t want to become like them; I want to be better.

    I’ve reached a mental state where I no longer intend to be a programmer for my future career. I have started to think maybe there are better things out there to work on. The more blogs I read and the more “best practices” I’ve tried, the more I feel I am drifting away from “my reality”. But I am not a great programmer; otherwise I don't think I would be where I am now.

    I think 4-5 years is a stage that can be a step forward career-wise or a step out of where you are. I just wanted to hear what others have to say about what I’ve mentioned above, whether you’ve experienced a similar situation in your past programming career, and how you dealt with it. Thanks.

    Read the article

  • Architectural Design for a Data-Driven Silverlight WP7 app

    - by Rosarch
    I have a Silverlight Windows Phone 7 app that pulls data from a public API. I find myself doing much of the same thing over and over again:

    1. In the UI, set a loading message or loading progress bar in place of where the content is
    2. Get the content, which may be already in memory, cached in isolated file storage, or require an HTTP request
    3. If the content can not be acquired (no network connection, etc.), display an error message
    4. If the content is acquired, display it in the UI
    5. Keep the content in main memory for subsequent queries

    The content that is displayed to the user can be taken directly from a data source, such as an ObservableCollection, or it may be a query on a data source.

    I would like to factor out this repetitive process into a framework where ideally only the following needs to be specified:

    - Where to display the content in the UI
    - The UI elements to show while loading, on failure, and on success
    - The URI of the HTTP request
    - How to parse the HTTP response into the data structure that will be kept in memory
    - The location of the file in isolated storage, if it exists
    - How to parse the file contents into the data structure that will be kept in memory

    It may sound like a lot, but two strings, three FrameworkElements, and two methods is less than the overhead that I currently have. Also, this needs to work however the data is maintained in memory, and it needs to work for direct collections and for queries on those collections. My questions are:

    - Has something like this already been implemented?
    - Are my thoughts about the topic above fundamentally wrong in some way?

    Here is a design I'm thinking of (a rough sketch of the Model half follows below). There are two components, a View and a Model.

    The View is given the FrameworkElements for loading, failure, and success. It is also given a reference to the corresponding Model. The View is a UserControl that is placed somewhere in the UI.

    The Model is a class that is given the URI for the data, a method of how to parse the data, and optionally a filename and how to parse the file. It is responsible for retrieving the data and notifying the View whenever the current status (loading/fail/success) changes. If the data downloaded from the network is different from the cache, the network data takes precedence. When the app closes or is tombstoned, the model writes the data to the cache.

    How does that sound?
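
    To make the Model half concrete, here is a rough, hedged sketch. Every name is invented for illustration (this is not an existing framework), and the isolated-storage part is left as TODOs; the View would subscribe to StateChanged and swap in whichever of its three FrameworkElements matches State.

        using System;
        using System.Net;

        public enum LoadState { Loading, Failed, Loaded }

        // One generic "Model" per resource: knows the URI, cache file name and parser.
        public class CachedResource<T>
        {
            private readonly Uri uri;
            private readonly string cacheFile;
            private readonly Func<string, T> parse;

            public LoadState State { get; private set; }
            public T Data { get; private set; }
            public event EventHandler StateChanged;

            public CachedResource(Uri uri, string cacheFile, Func<string, T> parse)
            {
                this.uri = uri;
                this.cacheFile = cacheFile;
                this.parse = parse;
            }

            public void Load()
            {
                SetState(LoadState.Loading);
                var client = new WebClient();
                client.DownloadStringCompleted += (s, e) =>
                {
                    if (e.Error == null)
                    {
                        Data = parse(e.Result);
                        // TODO: write e.Result to isolated storage (cacheFile) here
                        SetState(LoadState.Loaded);
                    }
                    else
                    {
                        // TODO: fall back to the cached copy before reporting failure
                        SetState(LoadState.Failed);
                    }
                };
                client.DownloadStringAsync(uri);
            }

            private void SetState(LoadState state)
            {
                State = state;
                var handler = StateChanged;
                if (handler != null) handler(this, EventArgs.Empty);
            }
        }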

    Read the article

  • Please Describe Your Struggles with Minimizing Use of Global Variables

    - by MetaHyperBolic
    Most of the programs I write are relatively flowchartable processes, with a defined start and hoped-for end. The problems themselves can be complex but do not readily lean towards central use of objects and event-driven programming. Often, I am simply churning through great varied batches of text data to produce different text data. Only occasionally do I need to create a class: as an example, to track warnings, errors, and debugging messages, I created a class (Problems) with one instantiation (myErr), which I believe to be an example of the Singleton design pattern. As a further factor, my colleagues are more old-school (procedural) than I am and are unacquainted with object-oriented programming, so I am loath to create things they could not puzzle through.

    And yet I hear, again and again, how even the Singleton design pattern is really an anti-pattern and ought to be avoided because Global Variables Are Bad. Minor functions need few arguments passed to them and have no need to know of configuration (unchanging) or program state (changing) -- I agree. However, the functions in the middle of the chain, which primarily control program flow, need a large number of configuration variables and some program state variables. I believe passing a dozen or more arguments along to a function is a "solution," but hardly an attractive one. I could, of course, cram variables into a single hash/dict/associative array, but that seems like cheating.

    For instance, when connecting to Active Directory to make a new account, I need such configuration variables as an administrative username, password, a target OU, some default groups, a domain, etc. I would have to pass those arguments down through a variety of functions which would not even use them, merely shuffle them off down a chain which would eventually lead to the function that actually needs them. I would at least declare the configuration variables to be constant, to protect them, but my language of choice these days (Python) provides no simple manner to do this, though recipes do exist as workarounds.

    Numerous Stack Overflow questions have hit on the why? of the badness and the requisite shunning, but do not often mention tips on living with this quasi-religious restriction. How have you resolved, or at least made peace with, the issue of global variables and program state? Where have you made compromises? What have your tricks been, aside from shoving around flocks of arguments to functions?

    Read the article

  • FFmpeg extract clip - stream frame rate differs from container frame rate (x264, aac)

    - by fideli
    Summary

    H.264 video seems to have a really high frame rate that requires a scaling factor to be applied to the duration of video that I'm trying to extract (900x lower).

    Body

    I'm trying to extract a clip from a movie that I have in MP4 format (created using Handbrake). After trying mencoder and VLC, I decided to give FFmpeg a shot since it was the least troublesome when it came to copying the codecs. That is, compared to mencoder and VLC, the resulting file was still playable in QuickTime (I know about Perian, etc.; I'm just trying to learn how all this works). Anyway, my command was as follows:

        ffmpeg -ss 01:15:51 -t 00:05:59 -i outofsight.mp4 \
            -acodec copy -vcodec copy clip.mp4

    During the copy, the following comes up:

        Seems stream 0 codec frame rate differs from container frame rate: 45000.00 (45000/1) -> 25.00 (25/1)
        Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'outofsight.mp4':
          Duration: 01:57:42.10, start: 0.000000, bitrate: 830 kb/s
            Stream #0.0(und): Video: h264, yuv420p, 720x384, 25 tbr, 22500 tbn, 45k tbc
            Stream #0.1(eng): Audio: aac, 48000 Hz, stereo, s16
        Output #0, mp4, to 'out.mp4':
            Stream #0.0(und): Video: libx264, yuv420p, 720x384, q=2-31, 90k tbn, 22500 tbc
            Stream #0.1(eng): Audio: libfaac, 48000 Hz, stereo, s16
        Stream mapping:
          Stream #0.0 -> #0.0
          Stream #0.1 -> #0.1
        Press [q] to stop encoding
        frame= 2591 fps=2349 q=-1.0 size=    8144kB time=101.60 bitrate= 656.7kbits/s
        …

    Instead of a 5:59 duration clip, I get the entire rest of the movie. So, to test this, I ran the ffmpeg command with -t 00:00:01. What I got was exactly a 15:00 minute clip. So I did some black box engineering and decided to scale my -t option by calculating what value to enter, given that 1 second was interpreted as 900 s. For my desired 359 s clip, I calculated 0.399 s, and so my ffmpeg command became:

        ffmpeg -ss 01:15.51 -t 00:00:00.399 -i outofsight.mp4 \
            -acodec copy -vcodec copy clip.mp4

    This works, but I have no idea why the duration is scaled by 900. Investigating further, each ffmpeg run has the line:

        Seems stream 0 codec frame rate differs from container frame rate: 45000.00 (45000/1) -> 25.00 (25/1)

    45000/25 = 1800. Must be a relation somewhere. Somehow, the obscenely high frame rate is causing issues with the timing. How is that frame rate so high? The best part about this is that the resulting clip.mp4 has the exact same feature (due to the copied video codec), and taking further clips from this needs the same scaling for the -t duration option. Therefore, I've made it available for anyone willing to check this out.

    Appendix

    The preamble for ffmpeg on my system (built using the MacPorts ffmpeg port):

        FFmpeg version 0.5, Copyright (c) 2000-2009 Fabrice Bellard, et al.
          configuration: --prefix=/opt/local --disable-vhook --enable-gpl --enable-postproc --enable-swscale --enable-avfilter --enable-avfilter-lavf --enable-libmp3lame --enable-libvorbis --enable-libtheora --enable-libdirac --enable-libschroedinger --enable-libfaac --enable-libfaad --enable-libxvid --enable-libx264 --mandir=/opt/local/share/man --enable-shared --enable-pthreads --cc=/usr/bin/gcc-4.2 --arch=x86_64
          libavutil     49.15. 0 / 49.15. 0
          libavcodec    52.20. 0 / 52.20. 0
          libavformat   52.31. 0 / 52.31. 0
          libavdevice   52. 1. 0 / 52. 1. 0
          libavfilter    1. 4. 0 /  1. 4. 0
          libswscale     1. 7. 1 /  1. 7. 1
          libpostproc   51. 2. 0 / 51. 2. 0
          built on Jan  4 2010 21:51:51, gcc: 4.2.1 (Apple Inc. build 5646) (dot 1)

    Read the article
