Search Results

Search found 6950 results on 278 pages for 'moving average'.


  • Summary statistics in visual basic

    - by ben
    Below I am trying to write a script whose goal is to calculate some summary statistics for a few different columns of numbers. I have gotten some help on it up to the "Need help below" mark, but beyond that I am flabbergasted as to how to calculate the simple stats (sum, mean, standard deviation, coefficient of variation). I know VB has functions for these stats, which I have included in my code, but I guess I need to do some extra declaring or something. Advice much appreciated. Thanks.

        Sub TOAinput()
            Const n As Integer = 648
            Dim stratum(n), hybrid(n), acres(n), hhsz(n), offinc(n)
            Dim s1 As Integer
            Dim s2 As Integer
            Dim i As Integer

            For i = 1 To n
                stratum(i) = Worksheets("hhid level").Cells(i + 1, 2).Value
            Next i

            s1 = 0
            s2 = 0
            For i = 1 To n
                If stratum(i) = 1 Then
                    s1 = s1 + 1
                Else
                    s2 = s2 + 1
                End If
            Next i

            Dim acres1(), hhsz1(), offinc1(), acres2(), hhsz2(), offinc2()
            ReDim acres1(s1), hhsz1(s1), offinc1(s1), acres2(s2), hhsz2(s2), offinc2(s2)

            'data infiles: acres, hh size, off-farm income
            For i = 1 To n
                acres(i) = Worksheets("hhid level").Cells(i + 1, 4).Value
                hhsz(i) = Worksheets("hhid level").Cells(i + 1, 5).Value
                offinc(i) = Worksheets("hhid level").Cells(i + 1, 6).Value
            Next i

            s1 = 0
            s2 = 0
            For i = 1 To n
                If stratum(i) = 1 Then
                    s1 = s1 + 1
                    acres1(s1) = acres(i)
                    hhsz1(s1) = hhsz(i)
                    offinc1(s1) = offinc(i)
                Else
                    s2 = s2 + 1
                    acres2(s2) = acres(i)
                    hhsz2(s2) = hhsz(i)
                    offinc2(s2) = offinc(i)
                End If
            Next i

            '****************************
            'Need help below
            '****************************
            'Sum, Average and StDev are worksheet functions, not VBA keywords,
            'so they have to be called through Application.WorksheetFunction:
            Dim sumac1, sumac2, mhhsz1, mhhsz2, cvhhsz1, cvhhsz2
            With Application.WorksheetFunction
                sumac1 = .Sum(acres1)
                sumac2 = .Sum(acres2)
                mhhsz1 = .Average(hhsz1)
                mhhsz2 = .Average(hhsz2)
                cvhhsz1 = .StDev(hhsz1) / .Average(hhsz1)
                cvhhsz2 = .StDev(hhsz2) / .Average(hhsz2)
            End With
        End Sub


  • HTG Explains: Should You Build Your Own PC?

    - by Chris Hoffman
    There was a time when every geek seemed to build their own PC. While the masses bought eMachines and Compaqs, geeks built their own more powerful and reliable desktop machines for less. But does this still make sense? Building your own PC still offers as much flexibility in component choice as it ever did, but prebuilt computers are available at extremely competitive prices. Building your own PC will no longer save you money in most cases.

    The Rise of Laptops

    It’s impossible to look at the decline of geeks building their own PCs without considering the rise of laptops. There was a time when everyone seemed to use desktops — laptops were more expensive and significantly slower in day-to-day tasks. With the diminishing importance of computing power — nearly every modern computer has more than enough power to surf the web and use typical programs like Microsoft Office without any trouble — and the rise of laptop availability at nearly every price point, most people are buying laptops instead of desktops. And, if you’re buying a laptop, you can’t really build your own. You can’t just buy a laptop case and start plugging components into it — even if you could, you would end up with an extremely bulky device. Ultimately, to consider building your own desktop PC, you have to actually want a desktop PC. Most people are better served by laptops.

    Benefits to PC Building

    The two main reasons to build your own PC have been component choice and saving money. Building your own PC allows you to choose all the specific components you want rather than have them chosen for you. You get to choose everything, including the PC’s case and cooling system. Want a huge case with room for a fancy water-cooling system? You probably want to build your own PC. In the past, this often allowed you to save money — you could get better deals by buying the components yourself and combining them, avoiding the PC manufacturer markup. You’d often even end up with better components — you could pick up a more powerful CPU that was easier to overclock and choose more reliable components so you wouldn’t have to put up with an unstable eMachine that crashed every day. PCs you build yourself are also likely more upgradable — a prebuilt PC may have a sealed case and be constructed in such a way as to discourage you from tampering with the insides, while swapping components in and out is generally easier with a computer you’ve built on your own. If you want to upgrade your CPU or replace your graphics card, that’s a definite benefit.

    Downsides to Building Your Own PC

    It’s important to remember there are downsides to building your own PC, too. For one thing, it’s just more work — sure, if you know what you’re doing, building your own PC isn’t that hard. Even for a geek, though, researching the best components, price-matching, waiting for them all to arrive, and building the PC just takes longer. Warranty is a more pernicious problem. If you buy a prebuilt PC and it starts malfunctioning, you can contact the computer’s manufacturer and have them deal with it. You don’t need to worry about what’s wrong. If you build your own PC and it starts malfunctioning, you have to diagnose the problem yourself. What’s malfunctioning: the motherboard, CPU, RAM, graphics card, or power supply? Each component has a separate warranty through its manufacturer, so you’ll have to determine which component is malfunctioning before you can send it off for replacement.

    Should You Still Build Your Own PC?
    Let’s say you do want a desktop and are willing to consider building your own PC. First, bear in mind that PC manufacturers are buying in bulk and getting a better deal on each component. They also have to pay much less for a Windows license than the $120 or so it would cost you to buy your own. This is all going to wipe out the cost savings you’ll see — all told, you’ll probably spend more money building your own average desktop PC than you would picking one up from Amazon or the local electronics store. If you’re an average PC user who uses your desktop for typical things, there’s no money to be saved by building your own PC. But maybe you’re looking for something higher end. Perhaps you want a high-end gaming PC with the fastest graphics card and CPU available. Perhaps you want to pick out each individual component and choose the exact components for your gaming rig. In this case, building your own PC may be a good option. As you start to look at more expensive, high-end PCs, you may start to see a price gap — but you may not. Let’s say you wanted to blow thousands of dollars on a gaming PC. If you’re looking at spending this kind of money, it would be worth comparing the cost of individual components versus a prebuilt gaming system. Still, the actual prices may surprise you. For example, if you wanted to upgrade Dell’s $2293 Alienware Aurora to include a second NVIDIA GeForce GTX 780 graphics card, you’d pay an additional $600 on Alienware’s website. The same graphics card costs $650 on Amazon or Newegg, so you’d be spending more money building the system yourself. Why? Dell’s Alienware gets bulk discounts you can’t get — and this is Alienware, which was once regarded as selling ridiculously overpriced gaming PCs to people who wouldn’t build their own. Building your own PC still allows you the most freedom when choosing and combining components, but this is only valuable to a small niche of gamers and professional users — most people, even average gamers, would be fine going with a prebuilt system. If you’re an average person or even an average gamer, you’ll likely find that it’s cheaper to purchase a prebuilt PC rather than assemble your own. Even at the very high end, components may be more expensive separately than they are in a prebuilt PC. Enthusiasts who want to choose all the individual components for their dream gaming PC and want maximum flexibility may want to build their own PCs. Even then, building your own PC these days is more about flexibility and component choice than it is about saving money. In summary, you probably shouldn’t build your own PC. If you’re an enthusiast, you may want to — but only a small minority of people would actually benefit from building their own systems. Feel free to compare prices, but you may be surprised which is cheaper. Image Credit: Richard Jones on Flickr, elPadawan on Flickr, Richard Jones on Flickr


  • C#/.NET Little Wonders: Interlocked CompareExchange()

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here. Two posts ago, I discussed the Interlocked Add(), Increment(), and Decrement() methods (here) for adding and subtracting values in a thread-safe, lightweight manner.  Then, last post I talked about the Interlocked Read() and Exchange() methods (here) for safely and efficiently reading and setting 32 or 64 bit values (or references).  This week, we’ll round out the discussion by talking about the Interlocked CompareExchange() method and how it can be put to use to exchange a value if the current value is what you expected it to be.

    Dirty reads can lead to bad results

    Many of the uses of Interlocked that we’ve explored so far have centered around either reading, setting, or adding values.  But what happens if you want to do something more complex, such as setting a value based on the previous value in some manner? Perhaps you were creating an application that reads a current balance, applies a deposit, and then saves the new modified balance, where of course you’d want that to happen atomically.  If you read the balance, then go to save the new balance, and in between the previous balance has already changed, you’ll have an issue!  Think about it: if we read the current balance as $400, and we are applying a new deposit of $50.75, but meanwhile someone else deposits $200 and sets the total to $600, and then we write a total of $450.75, we’ve lost $200! Now, certainly for int and long values we can use Interlocked.Add() to handle these cases, and it works well for that.  But what if we want to work with doubles, for example?  Let’s say we wanted to add the numbers from 0 to 99,999 in parallel.  We could do this by spawning several parallel tasks to continuously add to a total:

        double total = 0;

        Parallel.For(0, 100000, next =>
        {
            total += next;
        });

    Were this run on one thread using a standard for loop, we’d expect an answer of 4,999,950,000 (the sum of all numbers from 0 to 99,999).  But when we run this in parallel as written above, we’ll likely get something far off.  The result of one of my runs, for example, was 1,281,880,740.  That is way off!  If this were banking software we’d be in big trouble with our clients.  So what happened?  The += operator is not atomic: it will read in the current value, add the result, then store it back into the total.  At any point in all of this, another thread could read a “dirty” current total and accidentally “skip” our add.   So, to clean this up, we could use a lock to guarantee concurrency:

        double total = 0.0;
        object locker = new object();

        Parallel.For(0, 100000, next =>
        {
            lock (locker)
            {
                total += next;
            }
        });

    Which will give us the correct result of 4,999,950,000.  One thing to note is that locking can be heavy, especially if the operation being locked over is trivial, or the life of the lock is a high percentage of the work being performed concurrently.  In the case above, the lock consumes pretty much all of the time of each parallel task – and the task being locked on is relatively trivial. Now, let me put in a disclaimer here before we go further: For most uses, lock is more than sufficient for your needs, and is often the simplest solution!    So, if lock is sufficient for most needs, why would we ever consider another solution?
    The problem with locking is that it can suspend execution of your thread while it waits for the signal that the lock is free.  Moreover, if the operation being locked over is trivial, the lock can add a very high level of overhead.  This is why things like Interlocked.Increment() perform so well: instead of locking just to perform an increment, we perform the increment with an atomic, lockless method. As with all things performance related, it’s important to profile before jumping to the conclusion that you should optimize everything in your path.  If your profiling shows that locking is causing a high level of waiting in your application, then it’s time to consider lighter alternatives such as Interlocked.

    CompareExchange() – Exchange existing value if equal some value

    So let’s look at how we could use CompareExchange() to solve our problem above.  The general syntax of CompareExchange() is:

        T CompareExchange<T>(ref T location, T newValue, T expectedValue)

    If the value in location == expectedValue, then newValue is exchanged.  Either way, the value in location (before the exchange) is returned. Actually, CompareExchange() is not one method, but a family of overloaded methods that can take int, long, float, double, pointers, or references.  It cannot take other value types (that is, you can’t CompareExchange() two DateTime instances directly).  Also keep in mind that the version that takes any reference type (the generic overload) only checks for reference equality; it does not call any overridden Equals(). So how does this help us?  Well, we can grab the current total, and exchange the new value if total hasn’t changed.  That would look like this:

        // grab the snapshot
        double current = total;

        // if the total hasn’t changed since I grabbed the snapshot, then
        // set it to the new total
        Interlocked.CompareExchange(ref total, current + next, current);

    So what the code above says is: if the amount in total (1st arg) is the same as the amount in current (3rd arg), then set total to current + next (2nd arg).  This check-and-exchange pair is atomic (and thus thread-safe). This works if total is the same as our snapshot in current, but the problem is: what happens if they aren’t the same?  Well, we know that in either case we will get the previous value of total (before the exchange) back as a result.  Thus, we can test this against our snapshot to see if it was the value we expected:

        // if the value returned is != current, then our snapshot must be out of date,
        // which means we didn't (and shouldn't) apply current + next
        if (Interlocked.CompareExchange(ref total, current + next, current) != current)
        {
            // ooops, total was not equal to our snapshot in current, what should we do???
        }

    So what do we do if we fail?  That’s up to you and the problem you are trying to solve.  It’s possible you would decide to abort the whole transaction, or perhaps do a lightweight spin and try again.  Let’s try that:

        double current = total;

        // make first attempt...
        if (Interlocked.CompareExchange(ref total, current + next, current) != current)
        {
            // if we fail, go into a spin wait, spin, and try again until we succeed
            var spinner = new SpinWait();

            do
            {
                spinner.SpinOnce();
                current = total;
            }
            while (Interlocked.CompareExchange(ref total, current + next, current) != current);
        }

    This is not trivial code, but it illustrates a possible use of CompareExchange().
    What we are doing is first checking to see if we succeed on the first try, and if so, great!  If not, we create a SpinWait and then repeat the process of SpinOnce(), grab a fresh snapshot, and repeat until CompareExchange() succeeds.  You may wonder why we don’t go straight to a simple do-while here; the reason is that it’s more efficient to create the SpinWait only once we absolutely know we need one. Though not as simple (or maintainable) as a simple lock, this will perform better in many situations.  Comparing an unlocked (and wrong) version, a version using lock, and the Interlocked version of the code, we get the following average times for multiple iterations of adding the sum of 100,000 numbers:

        Unlocked money average time: 2.1 ms
        Locked money average time: 5.1 ms
        Interlocked money average time: 3 ms

    So the Interlocked.CompareExchange(), while heavier to code, came in lighter than the lock, offering a good compromise of safety and performance when we need to reduce contention.

    CompareExchange() - it’s not just for adding stuff…

    So that was one simple use of CompareExchange() in the context of adding double values -- which meant we couldn’t have used the simpler Interlocked.Add() -- but it has other uses as well. If you think about it, this really works anytime you want to create something new based on a current value without using a full lock.  For example, you could use it to create a simple lazy instantiation implementation.  In this case, we want to set the lazy instance only if the previous value was null:

        public static class Lazy<T> where T : class, new()
        {
            private static T _instance;

            public static T Instance
            {
                get
                {
                    // if current is null, we need to create new instance
                    if (_instance == null)
                    {
                        // attempt create, it will only set if previous was null
                        Interlocked.CompareExchange(ref _instance, new T(), (T)null);
                    }

                    return _instance;
                }
            }
        }

    So, if _instance == null, this will create a new T() and attempt to exchange it with _instance.  If _instance is not null, then it does nothing and we discard the new T() we created. This is a way to create lazy instances of a type where we are more concerned about locking overhead than creating an accidental duplicate which is not used.  In fact, the BCL implementation of Lazy<T> offers a similar thread-safety choice for Publication thread safety, where it will not guarantee only one instance was created, but it will guarantee that all readers get the same instance.  Another possible use would be in concurrent collections.  Let’s say, for example, that you are creating your own brand new super stack that uses a linked list paradigm and is “lock free”.  We could use Interlocked.CompareExchange() to be able to do a lockless Push() which could be more efficient in multi-threaded applications where several threads are pushing and popping on the stack concurrently. Yes, there are already concurrent collections in the BCL (in .NET 4.0 as part of the TPL), but it’s a fun exercise!
    So let’s assume we have a node like this:

        public sealed class Node<T>
        {
            // the data for this node
            public T Data { get; set; }

            // the link to the next instance
            internal Node<T> Next { get; set; }
        }

    Then, perhaps, our stack’s Push() operation might look something like:

        public sealed class SuperStack<T>
        {
            private volatile Node<T> _head;

            public void Push(T value)
            {
                var newNode = new Node<T> { Data = value, Next = _head };

                if (Interlocked.CompareExchange(ref _head, newNode, newNode.Next) != newNode.Next)
                {
                    var spinner = new SpinWait();

                    do
                    {
                        spinner.SpinOnce();
                        newNode.Next = _head;
                    }
                    while (Interlocked.CompareExchange(ref _head, newNode, newNode.Next) != newNode.Next);
                }
            }

            // ...
        }

    Notice a similar paradigm here as with adding our doubles before.  What we are doing is creating the new Node with the data to push, and with a Next value being the original node referenced by _head.  This will create our stack behavior (LIFO – Last In, First Out).  Now, we have to set _head to refer to the newNode, but we must first make sure it hasn’t changed! So we check to see if _head has the same value we saved in our snapshot as newNode.Next, and if so, we set _head to newNode.  This is all done atomically, and the result is _head’s original value; as long as the original value was what we assumed it was with newNode.Next, then we are good and we set it without a lock!  If not, we SpinWait and try again. Once again, this is much lighter than locking in highly parallelized code with lots of contention.  If I compare the method above with a similar class using lock, I get the following results for pushing 100,000 items:

        Locked SuperStack average time: 6 ms
        Interlocked SuperStack average time: 4.5 ms

    So, once again, we can get more efficient than a lock, though there is the cost of added code complexity.  Fortunately for you, most of the concurrent collections you’d ever need are already created for you in the System.Collections.Concurrent namespace (here) – for more information, see my Little Wonders – The Concurrent Collections Part 1 (here), Part 2 (here), and Part 3 (here).

    Summary

    We’ve seen before how the Interlocked class can be used to safely and efficiently add, increment, decrement, read, and exchange values in a multi-threaded environment.  In addition to these, Interlocked CompareExchange() can be used to perform more complex logic without the need of a lock when lock contention is a concern. The added efficiency, though, comes at the cost of more complex code.  As such, the standard lock is often sufficient for most thread-safety needs.  But if profiling indicates you spend a lot of time waiting for locks, or if you just need a lock for something simple such as an increment, decrement, read, exchange, etc., then consider using the Interlocked class’s methods to reduce wait. Technorati Tags: C#,CSharp,.NET,Little Wonders,Interlocked,CompareExchange,threading,concurrency
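
    A natural companion to the Push() above (a sketch in the same spirit; it is not part of the original post): Pop() follows the identical pattern, snapshotting _head and only swinging it to head.Next if nothing has changed in between. This slots into the SuperStack<T> class as written:

        public T Pop()
        {
            var spinner = new SpinWait();
            while (true)
            {
                var head = _head;   // snapshot the current top
                if (head == null)
                    throw new InvalidOperationException("Stack is empty.");

                // detach the node only if _head is still the snapshot we read
                if (Interlocked.CompareExchange(ref _head, head.Next, head) == head)
                    return head.Data;

                spinner.SpinOnce(); // lost the race; re-read and try again
            }
        }

    (Because the nodes are garbage-collected and never reused, the classic ABA hazard of lock-free stacks is largely a non-issue in this sketch.)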


  • How to monitor the total number of SQL Server logins

    - by Shiraz Bhaiji
    We have an SQL Server 2005 instance that is the backend of a web application. The application is partly SharePoint and partly web services accessing the database via Entity Framework. In Performance Monitor I am seeing an average of ca. 60 SQL logins per second (max 170), but the average number of logouts is less than 1. Where can I see the total number of SQL Server logins? Does anyone have an idea what could be causing this?
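
    One way to read those same counters programmatically (a sketch, not a definitive answer; it assumes a default SQL Server instance, whose counter category is "SQLServer:General Statistics" — named instances expose it as "MSSQL$InstanceName:General Statistics"):

        using System;
        using System.Diagnostics;
        using System.Threading;

        class LoginMonitor
        {
            static void Main()
            {
                using (var logins = new PerformanceCounter("SQLServer:General Statistics", "Logins/sec"))
                using (var logouts = new PerformanceCounter("SQLServer:General Statistics", "Logouts/sec"))
                using (var connections = new PerformanceCounter("SQLServer:General Statistics", "User Connections"))
                {
                    // Rate counters need two samples to produce a meaningful value.
                    logins.NextValue();
                    logouts.NextValue();
                    Thread.Sleep(1000);

                    Console.WriteLine("Logins/sec:  {0}", logins.NextValue());
                    Console.WriteLine("Logouts/sec: {0}", logouts.NextValue());
                    Console.WriteLine("User connections: {0}", connections.NextValue());
                }
            }
        }

    There is no single "total logins" counter here; to get a running total you would sample Logins/sec over time, or enable login auditing on the server itself.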


  • My internet speed became slow at night

    - by FrozenKing
    My internet plan is 512 kbps unlimited, and I get an average speed of 64 kbps. At night I used to get a speed of 112 kbps, but recently my night speed has become the same as the daytime speed. As I understand it, there is usually less traffic at night, so I should be getting the good speeds I got before. Because of that good night speed, I do my downloading and uploading at night, and my combined download + upload per month is 60 or 70 GB. Could it be that my ISP is putting a restriction on my downloads and uploads? I am confused.


  • BitTorrent Myth

    - by Moon .
    In the BitTorrent statistics there is a field "Total Ratio", which is the ratio between total downloads and uploads. I have heard that this ratio affects BitTorrent's performance: if the ratio is good, the BitTorrent network provides you service at a higher priority, and if the ratio is low (fewer uploads), it serves you at average or below-average priority. Is there anything like that?


  • custom video icon for a single video file in windows 7 file explorer

    - by MrBrody
    Recently I found a video on the net (an .mp4 file), and when I had it on my computer with Windows 7, I noticed its thumbnail was not the average Windows 7 video thumbnail (which looks like a piece of video film with a random picture from the movie), but a custom thumbnail! Looking in the file properties did not help me find the correct button to change the thumbnail... so I just wonder how he did it! Here is a picture: left: the custom thumbnail, right: the average thumbnail...


  • Possible reasons for high CPU load of taskmgr.exe process on VM?

    - by mjn
    On a VMware virtual machine which has severe performance problems, I can see a constant average of 20+ percent CPU load for the TASKMGR.EXE (Task Manager) process. The apps running on this server have lower load, around 4 to 10 percent on average. The VM is running Windows Server 2003 Standard with 3.75 GB of assigned RAM. I suspect that the Task Manager CPU load has something to do with other VM instances on the VMware server, but I could not see a similar value on internal ESXi systems (the problematic VM runs in the customer's IT environment).


  • Incorrect value for sum of two NSIntegers

    - by Antonio
    Hi everybody: I'm sure I'm missing something and the answer is very simple, but I can't seem to understand why this is happening. I'm trying to take an average of dates:

        NSInteger runningSum = 0;
        NSInteger count = 0;
        for (EventoData *event in self.events) {
            NSDate *dateFromString = [[NSDate alloc] init];
            if (event.date != nil) {
                dateFromString = [dateFormatter dateFromString:event.date];
                runningSum += (NSInteger)[dateFromString timeIntervalSince1970];
                count += 1;
            }
        }
        if (count > 0) {
            NSLog(@"average is: %@", [NSDate dateWithTimeIntervalSince1970:(NSInteger)((CGFloat)runningSum / count)]);
        }

    Everything seems to work OK, except for runningSum += (NSInteger)[dateFromString timeIntervalSince1970], which gives an incorrect result. If I put a breakpoint when taking the average of two equal dates (2009-10-10, for example, which is a time interval of 1255125600), runningSum is -1784716096 instead of the expected 2510251200. I've tried using NSNumber and I get the same result. Can anybody point me in the right direction? Thanks! Antonio
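
    For what it's worth, the reported value is exactly what 32-bit signed wraparound produces: on a 32-bit target NSInteger is a 32-bit integer, and 1,255,125,600 + 1,255,125,600 = 2,510,251,200 exceeds the 32-bit maximum of 2,147,483,647. A quick check of the arithmetic (sketched in C# rather than Objective-C):

        using System;

        class OverflowDemo
        {
            static void Main()
            {
                long sum64 = 1255125600L + 1255125600L; // 2,510,251,200 fits in 64 bits
                int sum32 = unchecked((int)sum64);      // wraps modulo 2^32 in 32 bits

                Console.WriteLine(sum64); // 2510251200
                Console.WriteLine(sum32); // -1784716096, the value seen above
            }
        }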


  • C# Memoization of functions with arbitrary number of arguments

    - by Lirik
    I'm trying to create a memoization interface for functions with an arbitrary number of arguments, but I'm failing miserably. The first thing I tried is to define an interface for a function which gets memoized automatically upon execution:

        class EMAFunction : IFunction
        {
            Dictionary<List<object>, List<object>> map;

            class EMAComparer : IEqualityComparer<List<object>>
            {
                private int _multiplier = 97;

                public bool Equals(List<object> a, List<object> b)
                {
                    List<object> aVals = (List<object>)a[0];
                    int aPeriod = (int)a[1];
                    List<object> bVals = (List<object>)b[0];
                    int bPeriod = (int)b[1];
                    return (aVals.Count == bVals.Count) && (aPeriod == bPeriod);
                }

                public int GetHashCode(List<object> obj)
                {
                    // Don't compute hash code on null object.
                    if (obj == null)
                    {
                        return 0;
                    }

                    // Get length.
                    int length = obj.Count;
                    List<object> vals = (List<object>)obj[0];
                    int period = (int)obj[1];
                    return (_multiplier * vals.GetHashCode() * period.GetHashCode()) + length;
                }
            }

            public EMAFunction()
            {
                NumParams = 2;
                Name = "EMA";
                map = new Dictionary<List<object>, List<object>>(new EMAComparer());
            }

            #region IFunction Members

            public int NumParams { get; set; }
            public string Name { get; set; }

            public object Execute(List<object> parameters)
            {
                if (parameters.Count != NumParams)
                    throw new ArgumentException("The num params doesn't match!");

                if (!map.ContainsKey(parameters))
                {
                    //map.Add(parameters,
                    List<double> values = new List<double>();
                    List<object> asObj = (List<object>)parameters[0];
                    foreach (object val in asObj)
                    {
                        values.Add((double)val);
                    }
                    int period = (int)parameters[1];
                    asObj.Clear();
                    List<double> ema = TechFunctions.ExponentialMovingAverage(values, period);
                    foreach (double val in ema)
                    {
                        asObj.Add(val);
                    }
                    map.Add(parameters, asObj);
                }

                return map[parameters];
            }

            public void ClearMap()
            {
                map.Clear();
            }

            #endregion
        }

    Here are my tests of the function:

        private void MemoizeTest()
        {
            DataSet dataSet = DataLoader.LoadData(DataLoader.DataSource.FROM_WEB, 1024);
            List<String> labels = dataSet.DataLabels;
            Stopwatch sw = new Stopwatch();
            IFunction emaFunc = new EMAFunction();

            List<object> parameters = new List<object>();
            int numRuns = 1000;
            long sumTicks = 0;
            parameters.Add(dataSet.GetValues("open"));
            parameters.Add(12);

            // First call
            for (int i = 0; i < numRuns; ++i)
            {
                emaFunc.ClearMap(); // remove any memoization mappings
                sw.Start();
                emaFunc.Execute(parameters);
                sw.Stop();
                sumTicks += sw.ElapsedTicks;
            }
            Console.WriteLine("Average ticks not-memoized " + (sumTicks / numRuns));

            sumTicks = 0;
            // Repeat call
            for (int i = 0; i < numRuns; ++i)
            {
                sw.Start();
                emaFunc.Execute(parameters);
                sw.Stop();
                sumTicks += sw.ElapsedTicks;
            }
            Console.WriteLine("Average ticks memoized " + (sumTicks / numRuns));
        }

    The performance is confusing me... I expected the memoized function to be faster, but it didn't work out that way:

        Average ticks not-memoized 106,182
        Average ticks memoized 198,854

    I tried doubling the data instances to 2048, but the results were about the same:

        Average ticks not-memoized 232,579
        Average ticks memoized 446,280

    I did notice that it was correctly finding the parameters in the map and going directly to the map, but the performance was still slow... I'm open to troubleshooting help with this example, or if you have a better solution to the problem then please let me know what it is.
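
    One detail worth flagging in the test harness above (an observation, not from the original question): the Stopwatch is never reset. Start() on an already-used Stopwatch resumes accumulating, so ElapsedTicks grows across iterations and across the two loops, and every sample in the second (memoized) loop inherits the entire elapsed time of the first loop, which by itself would make memoization look slower. A sketch of an isolated-sample timing pattern:

        using System;
        using System.Diagnostics;

        static class Timing
        {
            // Average ticks per call, with each sample timed from zero.
            static long AverageTicks(Action action, int numRuns)
            {
                var sw = new Stopwatch();
                long sumTicks = 0;
                for (int i = 0; i < numRuns; ++i)
                {
                    sw.Restart();   // Reset() + Start(); on pre-.NET 4, call both explicitly
                    action();
                    sw.Stop();
                    sumTicks += sw.ElapsedTicks;
                }
                return sumTicks / numRuns;
            }

            static void Main()
            {
                Console.WriteLine(AverageTicks(() => Math.Sqrt(42.0), 1000));
            }
        }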


  • add Constraint on database with trigger

    - by Am1rr3zA
    Hi, I have 3 tables (Student, Course, and student_course_choose, which has a grade field). I defined a view on these 3 tables that gets me the average grade of each student. I want to have a constraint (implemented with a trigger) on this view (or on whichever table needs it) to limit each student's average to between 13 and 18. I read somewhere that I must use a FOR EACH STATEMENT trigger (instead of FOR EACH ROW), because when I decrease one grade of a particular student so that his/her average temporarily drops below 13, I shouldn't get an error if I later increase the grade of another of his/her courses in the same statement. How must I write this trigger? (I want to implement aprh for testing the trigger.) Note: I can write it in SQL Server, Oracle or MySQL; no difference to me.


  • Copy/move EBS volume from one Region to another

    - by Gnanam
    Background of our setup: We've hosted our web-based application in the Amazon EC2 US East (Virginia) Region. Our instance is based on a Linux distribution (CentOS) and the AMI is S3-backed. One EBS volume (400 GB) is attached to this instance. Question: We've planned to migrate our deployment to the US West (N. California) Region. From the AWS docs, I understood that for moving an AMI there is a command-line tool available, ec2-migrate-bundle, but for moving an EBS volume across Regions there is currently no tool available. I'm looking for the easiest and/or fastest way of copying/moving an EBS volume from one Region to another. Also, are there any hidden risks involved during and/or after the migration? Experts' ideas/suggestions/recommendations on this are highly appreciated.


  • How do I handle a low job offer for an entry level position?

    - by user229269
    Hi guys! I recently graduated with an MS in CS, and I am excited because I just received a job offer from a company I really like for an entry-level software engineer position. The thing is, although salary is not my priority and I care far more about gaining experience, their offer is unfortunately way below what I expected. After doing some research I realized that, compared to the average salary range for entry-level software engineering positions in my area (one of the most expensive areas in the US), supposedly [X - Y]$ (where X is the lowest average and Y the highest), their offer is 20% below X! I wouldn't have a problem accepting an offer around X, but this one is even lower than the lowest. Can I counter-offer X, which is 20% more than what they offered me but at the same time is the minimum average? -- And mind you, I didn't even take into consideration the fact that I hold an MS degree, which in many cases yields 5-10% more pay.


  • how to solve nested list programs [closed]

    - by riya
    Write a function to get the most popular car that accepts car details as input and returns the most popular car's name along with its average rating. Each element of the car-details list is a sublist that provides the following information about a car: (a) name of the car, (b) car price, (c) list of ratings obtained by the car from various agencies. In case two cars have the same average rating, the car with the lesser price qualifies as the most popular car. Here's my solution:

        (define-struct cardetails (name price ratings))
        (define car1 (make-cardetails "toyota" 123 '(1 2 3)))
        (define car2 (make-cardetails "santro" 321 '(2 2 3)))
        (define car3 (make-cardetails "toyota" 100 '(1 2 3)))
        (define cardetailslist (list car1 car2 car3))
        (let loop ((count 0))
          (let ((len (length cardetailslist)))
            (if (< count len)
                (string-ref (string-ref n) 0))))

    Now please tell me how to find the maximum average and display the car name. It's not a homework question; tomorrow is my test, and we have not been taught this concept in class although it is very important from the test point of view.
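
    The selection rule itself is small once the data is in a structure; a sketch of the logic in C# (not Scheme, purely to illustrate the rule; the Car record and sample data are hypothetical):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        record Car(string Name, decimal Price, int[] Ratings);

        class MostPopular
        {
            static void Main()
            {
                var cars = new List<Car>
                {
                    new("toyota", 123m, new[] { 1, 2, 3 }),
                    new("santro", 321m, new[] { 2, 2, 3 }),
                    new("toyota", 100m, new[] { 1, 2, 3 }),
                };

                // Highest average rating wins; ties go to the cheaper car.
                var winner = cars
                    .OrderByDescending(c => c.Ratings.Average())
                    .ThenBy(c => c.Price)
                    .First();

                Console.WriteLine($"{winner.Name}: {winner.Ratings.Average():F2}");
            }
        }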


  • How can I copy files to an external drive and verify their integrity in OS X?

    - by jedavis
    I'm moving large amounts of data from one external drive to another, larger one. The files are important, and the smaller drives need to be cleared and reused (HD camera). Is there some utility for moving files and verifying their integrity? I've been using this command find . -type f -exec md5 '{}' \; > md5list.txt in the terminal to create a list of MD5s for each file, then using diff to compare the two lists. However, I am moving 320 GB at a time, which takes a while by itself, and computing the checksums takes another hour or so. It would be much more efficient to do this on the fly, during the copy. I'm just hoping someone has already written the software...
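
    The "on the fly" idea is easy to sketch: hash the bytes as they stream through the copy, so the checksum costs no extra read pass. An illustration in C# (hypothetical paths; note it hashes the source stream as it is copied, so a fully paranoid check would still re-read the destination once):

        using System;
        using System.IO;
        using System.Security.Cryptography;

        class CopyWithChecksum
        {
            // Copies src to dst, returning the MD5 of the bytes copied.
            static string Copy(string src, string dst)
            {
                using (var md5 = MD5.Create())
                using (var input = File.OpenRead(src))
                using (var output = File.Create(dst))
                {
                    var buffer = new byte[1 << 20]; // 1 MiB chunks
                    int read;
                    while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        md5.TransformBlock(buffer, 0, read, null, 0);
                        output.Write(buffer, 0, read);
                    }
                    md5.TransformFinalBlock(buffer, 0, 0);
                    return BitConverter.ToString(md5.Hash).Replace("-", "");
                }
            }

            static void Main(string[] args)
            {
                Console.WriteLine(Copy(args[0], args[1]));
            }
        }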


  • Either .each do or .all isn't working how I think it should

    - by user1299656
    So whenever someone rates a shop, I want the Shop model to calculate its new average rating and store that in the database (instead of calculating the average every time someone looks at it). So I wrote the segment of code that follows, and it doesn't work. The loop always iterates exactly once, no matter how many shop_ratings exist in the database that have the shop's id as their shop_id. I played around with it a bit and found that every time a new rating is submitted the function is called successfully, but it only runs the loop once and sets the average to what the first rating was. I don't know if the "query" that sets the ratings variable is wrong or if it's the loop that's wrong.

        class Shop < ActiveRecord::Base
          has_many :shop_ratings
          attr_accessible :name, :latitude, :longitude
          validates_presence_of :name
          validates_presence_of :latitude
          validates_presence_of :longitude

          def distance_to(lat, long)
            return (self.longitude - long) + (self.latitude - lat)
          end

          def find_average
            total = 0
            count = 0
            ratings = ShopRating.all(:conditions => {:shop_id => id})
            ratings.each do |submission|
              total = total + submission.rating
              count = count + 1
            end
            update_attribute :average_rating, total/count
          end
        end


  • Handling a binary operation that makes sense only for part of a hierarchy.

    - by usersmarvin_
    I have a hierarchy, which I'll simplify greatly, of implementations of an interface Value. Assume that I have two implementations, NumberValue and StringValue. There is an average operation which only makes sense for NumberValue, with the signature NumberValue average(NumberValue numberValue) { ... }. At some point, after creating such variables and using them in various collections, I need to average a collection which I know contains only NumberValues. There are three possible ways of doing this, I think (the third is sketched after this entry):
    1. Very complicated generic signatures which preserve the type info at compile time (what I'm doing now, and it results in hard-to-maintain code).
    2. Moving the operation up to the Value level, throwing an UnsupportedOperationException for StringValue and casting for NumberValue.
    3. Casting at the point where I know for sure that I have a NumberValue, using slightly less complicated generics to ensure this.
    Does anybody have any better ideas, or a recommendation on OOP best practices?
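
    For what it's worth, a sketch of option 3 in C# (the question reads as Java, but the idea carries over): keep the unsafe step in exactly one place by filtering the heterogeneous collection down to the numeric subtype, then fold with the type-safe operation.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        interface IValue { }

        sealed class StringValue : IValue
        {
            public string Text { get; set; }
        }

        sealed class NumberValue : IValue
        {
            public double Value { get; set; }

            // the binary operation that only makes sense for numbers
            public NumberValue Average(NumberValue other) =>
                new NumberValue { Value = (Value + other.Value) / 2.0 };
        }

        class Program
        {
            static void Main()
            {
                var mixed = new List<IValue>
                {
                    new NumberValue { Value = 2 },
                    new NumberValue { Value = 4 },
                    new StringValue { Text = "not a number" },
                };

                // OfType<T> isolates the cast; the fold applies the pairwise
                // operation (note a pairwise fold of a binary average is not a
                // true arithmetic mean for three or more values).
                var result = mixed.OfType<NumberValue>()
                                  .Aggregate((a, b) => a.Average(b));

                Console.WriteLine(result.Value); // 3
            }
        }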


  • OOP PHP simple question

    - by Tristan
    Hello, I'm new to OOP in PHP. Does this seem correct?

        class whatever {
            function Maths() {
                $this->sql->query($requete);
                $i = 0;
                while ($val = mysql_fetch_array($this)) {
                    $tab[i][average] = $val['average'];
                    $tab[i][randomData] = $val['sum'];
                    $i = $i + 1;
                }
                return $tab;
            }
        }

    I want to access the data contained in the array:

        $foo = new whatever();
        $foo->Maths();
        for ($i; $i <= endOfTheArray; i++) {
            echo Maths->tab[i][average];
            echo Maths->tab[i][randomData];
        }

    Thank you ;)


  • How do I know if I need to backup locally stored emails?

    - by Sometimes
    I am moving a friend's website and emails from the current server to a new one. I don't have much experience with migrating email, and in the past when moving servers all the emails have disappeared from the user's local inbox, e.g. in MS Outlook. To make my question clearer: how do I know if I have to back up the emails before moving servers? I know they are sometimes stored locally and sometimes not. And how do I know whether the emails will remain on the user's machine once I move the information from server to server?


  • Where to declare variable? C#

    - by user1303781
    I am trying to make an average function: Total adds them up, then Total is divided by n, the number of entries. No matter where I put double Total;, I get an error message. In this example I get "Use of unassigned local variable 'Total'". If I put the declaration before the comment, both references show up as errors. I'm sure it's something simple...

        namespace frmAssignment3
        {
            class StatisticalFunctions
            {
                public static class Statistics
                {
                    //public static double Average(List<MachineData.MachineRecord> argMachineDataList)
                    public static double Average(List<double> argMachineDataList)
                    {
                        double Total;
                        int n;
                        for (n = 1; n <= argMachineDataList.Count; n++)
                        {
                            Total = argMachineDataList[n];
                        }
                        return Total / n;
                    }

                    public static double StDevSample(List<MachineData.MachineRecord> argMachineDataList)
                    {
                        return -1;
                    }
                }
            }
        }
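
    A minimal corrected sketch of the Average method (an illustration, not necessarily the assignment's required shape): the compiler error disappears once Total is initialized, and two further bugs surface in the same loop: = should be +=, and List indexing runs from 0 to Count - 1, not 1 to Count.

        using System;
        using System.Collections.Generic;

        static class Statistics
        {
            public static double Average(List<double> values)
            {
                if (values.Count == 0)
                    throw new ArgumentException("Cannot average an empty list.");

                double total = 0;                      // initialized: fixes CS0165
                for (int i = 0; i < values.Count; i++) // 0-based indexing
                {
                    total += values[i];                // accumulate, don't overwrite
                }
                return total / values.Count;
            }

            static void Main()
            {
                Console.WriteLine(Average(new List<double> { 1, 2, 3, 4 })); // 2.5
            }
        }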


  • How can I move a polygon edge 1 unit away from the center?

    - by Stephen
    Let's say I have a polygon class that is represented by a list of vector classes as vertices, like so:

        var Vector = function(x, y) {
                this.x = x;
                this.y = y;
            },
            Polygon = function(vectors) {
                this.vertices = vectors;
            };

    Now I make a polygon (in this case, a square) like so:

        var poly = new Polygon([
            new Vector(2, 2),
            new Vector(5, 2),
            new Vector(5, 5),
            new Vector(2, 5)
        ]);

    So, the top edge would be [poly.vertices[0], poly.vertices[1]]. I need to stretch this polygon by moving each edge away from the center of the polygon by one unit, along that edge's normal. The following example shows the first edge, the top, moved one unit up: The final polygon should look like this new one:

        var finalPoly = new Polygon([
            new Vector(1, 1),
            new Vector(6, 1),
            new Vector(6, 6),
            new Vector(1, 6)
        ]);

    It is important that I iterate, moving one edge at a time, because I will be doing some collision tests after moving each edge. Here is what I tried so far (simplified for clarity), which fails triumphantly:

        for (var i = 0; i < vertices.length; i++) {
            var a = vertices[i],
                b = vertices[i + 1] || vertices[0]; // in case of final vertex
            var ax = a.x, ay = a.y, bx = b.x, by = b.y;

            // get some new perpendicular vectors
            var a2 = new Vector(-ay, ax),
                b2 = new Vector(-by, bx);

            // make into unit vectors
            a2.convertToUnitVector();
            b2.convertToUnitVector();

            // add the new vectors to the original ones
            a.add(a2);
            b.add(b2);

            // the rest of the code, collision tests, etc.
        }

    This makes my polygon start slowly rotating and sliding to the left, instead of what I need. Finally, the example shows a square, but the polygons in question could be anything. They will always be convex, and always with vertices in clockwise order.
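
    A sketch of the usual fix (written in C# here, though the question is JavaScript): the perpendicular must be taken of the edge vector b - a, not of the position vectors a and b as in the attempt above. For clockwise winding with y increasing downward, the outward unit normal of an edge (dx, dy) is (dy, -dx) normalized; adding it to both endpoints moves the edge out one unit, and each corner ends up displaced by its two adjacent edges, which reproduces the (1,1)...(6,6) square expected above.

        using System;

        struct Vec
        {
            public double X, Y;
            public Vec(double x, double y) { X = x; Y = y; }
            public static Vec operator +(Vec a, Vec b) => new Vec(a.X + b.X, a.Y + b.Y);
            public static Vec operator -(Vec a, Vec b) => new Vec(a.X - b.X, a.Y - b.Y);
        }

        class PolygonOffset
        {
            // Moves edge (i, i+1) one unit along its outward normal,
            // assuming clockwise winding with y increasing downward.
            static void PushEdgeOut(Vec[] v, int i)
            {
                int j = (i + 1) % v.Length;
                Vec edge = v[j] - v[i];
                double len = Math.Sqrt(edge.X * edge.X + edge.Y * edge.Y);
                var normal = new Vec(edge.Y / len, -edge.X / len); // outward unit normal
                v[i] += normal;
                v[j] += normal;
            }

            static void Main()
            {
                var square = new[] { new Vec(2, 2), new Vec(5, 2), new Vec(5, 5), new Vec(2, 5) };
                for (int i = 0; i < square.Length; i++)
                    PushEdgeOut(square, i); // collision tests could run after each edge here

                foreach (var p in square)
                    Console.WriteLine("(" + p.X + ", " + p.Y + ")"); // (1, 1) (6, 1) (6, 6) (1, 6)
            }
        }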


  • Strange 3D game engine camera with X,Y,Zoom instead of X,Y,Z

    - by Jenko
    I'm using a 3D game engine that uses a 4x4 matrix to modify the camera projection, in this format:

        r r r x
        r r r y
        r r r z
        - - - zoom

    Strangely, though, the camera does not respond to the Z translation parameter, so you're forced to use X, Y, Zoom to move the camera around. Technically this is plausible for isometric-style games such as Age of Empires III. But this is a 3D engine, so why would they have designed the camera to ignore Z and respond only to zoom? Am I missing something here? I've tried every method of setting the camera and it really seems to ignore Z. So currently I have to resort to moving the main object in the scene graph instead of moving the camera in relation to the objects. My question: Do you have any idea why the engine would use such a scheme? Is it common? Why? Or does it seem like I'm missing something and the SetProjection(Matrix) function is broken and somehow ignores the Z translation in the matrix? (unlikely, but possible) Anyhow, what are the workarounds? Is moving objects around the only way? Edit: I'm sorry I cannot reveal much about the engine because we're in a binding contract. It's a locally developed engine (Australia) written in managed C#, used for data visualizations. Edit: The default mode of the engine is orthographic, although I've switched it into perspective mode. It's probably more effective to use X, Y, Zoom in orthographic mode, but I need to use perspective mode to render everyday objects as well.


  • 2D Rectangle Collision Response with Multiple Rectangles

    - by Justin Skiles
    Similar to: Collision rectangle response. I have a level made up of tiles where the edges of the level are made up of collidable rectangles. The player's collision box is represented by a rectangle as well. The player can move in 8 directions. The player's velocity is equal in the X and Y directions and constant. Each update, I am checking the player's collision against all tiles that are a certain distance away. When the player collides with a rectangle, I am finding the intersection depth and resolving along the shallower axis, followed by the other axis. This resolution happens for both axes simultaneously. See below for two examples of situations where I am having trouble.

    Moving up-left against the left wall

    In the scenario below, the player is colliding with two tiles. The tile intersection depth is equal on both axes for the top tile and more shallow in the X axis for the middle tile. Because the player is moving up the wall, the player should slide in an upward direction along the wall. This works properly as long as the rectangle with the shallower depth is evaluated first. If the equal-intersection-depth rectangle is evaluated first, there is a chance the player becomes stuck.

    Moving up-left against the top wall

    Here is an identical scenario, with the exception that the collision is with the top wall. The same problem occurs at the corners when intersection depth is equal for both axes. I guess my overall question is: how can I ensure that collision response occurs on tiles that have non-equal intersection depth before tiles that have equal intersection depth, in order to get around the weirdness that occurs at these corners? Sean's answer in the linked question was good, but his solution required having different velocity components in a certain direction. My situation has equal velocities, so there's no good way to tell which direction to resolve at corners. I hope I have made my explanation clear.
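
    One plausible way to get exactly that ordering (a sketch of the idea, not a canonical solution): gather every overlapping tile first, compute both penetration depths per tile, and resolve in descending order of |depthX - depthY|, so ambiguous corner tiles (equal depths) are handled last, after the wall tiles have already slid the player. Re-testing each remaining overlap before resolving it avoids double corrections.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        struct Overlap
        {
            public int TileIndex;
            public float DepthX, DepthY; // positive penetration depth on each axis
        }

        static class CollisionOrdering
        {
            // Tiles with a clearly shallow axis (walls) resolve before
            // corner tiles where DepthX == DepthY.
            public static IEnumerable<Overlap> InResolutionOrder(IEnumerable<Overlap> overlaps)
            {
                return overlaps.OrderByDescending(o => Math.Abs(o.DepthX - o.DepthY));
            }
        }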


  • Handling early/late/dropped packets for interpolation in a 3D multiplayer game

    - by Ben Cracknell
    I'm working on a multiplayer game that, for the purposes of this question, is most similar to Team Fortress. Each network data packet will contain the 3D position of the target moving object. (This object could be another player.) The packets are sent at a fixed interval, and linear interpolation will be used to smooth the transition between packets. Under normal circumstances, interpolation will occur between the second-to-last packet and the last packet received. The linear interpolation algorithm is the same as in this post: Interpolating positions in a multiplayer game. I have the same issue as in that post, but the answers don't seem like they will work in my situation. Consider the following scenario:
    1. Normal packet timing; everything is okay.
    2. The next expected packet is late. That's okay, we'll just extrapolate based on previous positions.
    3. The late packet eventually arrives with corrections to our extrapolation. Now what do we do with its information?
    The answers on the above post suggest we should just interpolate to this new packet's position, but that would not work at all. If we have already extrapolated past that point in time, moving back would cause rubber-banding. The issue is similar in the case of an early or dropped packet. So I believe what I am looking for is some way to smoothly deal with new information in an ongoing interpolation/extrapolation process. Since I might be moving on to quadratic or even cubic interpolation, it would be great if the same solution could be applied to those as well.
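
    A sketch of one common answer (sometimes called smooth correction or projective blending; an illustration, not the only approach): never snap the displayed position to late or corrected data. Keep a separately simulated position that always follows the newest authoritative packets, and each frame bleed the visible error between the two away gradually, so a late packet bends the displayed path instead of rubber-banding it.

        using System;

        struct Vec3
        {
            public float X, Y, Z;
            public Vec3(float x, float y, float z) { X = x; Y = y; Z = z; }
            public static Vec3 Lerp(Vec3 a, Vec3 b, float t) =>
                new Vec3(a.X + (b.X - a.X) * t, a.Y + (b.Y - a.Y) * t, a.Z + (b.Z - a.Z) * t);
        }

        class RemotePlayer
        {
            public Vec3 Displayed; // what we render
            public Vec3 Simulated; // interpolated/extrapolated from the latest packets

            // Call once per frame, after updating Simulated from network data.
            public void Update(float dt, float correctionRate /* e.g. ~10 per second */)
            {
                // framerate-independent exponential blend toward the authoritative path
                float t = 1f - (float)Math.Exp(-correctionRate * dt);
                Displayed = Vec3.Lerp(Displayed, Simulated, t);
            }
        }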


  • Data structure for bubble shooter game

    - by SundayMonday
    I'm starting to make a bubble shooter game for a mobile OS. Assume this is just the basic mechanic: three or more same-color bubbles that touch pop, and all bubbles that are separated from their group fall/pop. What data structures are common for storing the bubbles? I've considered using an undirected, connected graph where each node is a bubble. This seems like it could help answer the question "which bubbles (if any) should fall now?" after some arbitrary bubbles are popped and the corresponding nodes are removed from the graph. I think the answer is that all bubbles that were just disconnected from the graph should fall. However, the graph approach might be overkill, so I'm not sure. Another consideration for the data structure is collision detection. Perhaps being able to grab a list of neighboring bubbles in constant time for a particular "bubble slot" is useful. So the collision detection would be something like "moving bubble is closest to slot ij, neighbors of slot ij are bubbles a,b,c, moving bubble is sufficiently close to bubble b, hence moving bubble should come to rest in slot ij". A game like this could probably be made with a relatively crude grid structure as the primary data structure. However, it seems like answering "which bubbles (if any) should fall now?" would be trickier with this data structure.
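
    For the "which bubbles fall" part there is a standard grid answer that doesn't require maintaining an explicit graph (a sketch, assuming a plain rectangular grid; real bubble shooters typically use a hex-offset grid with six neighbors, which only changes the neighbor list): after each pop, flood-fill from every bubble in the top row, and anything the fill never reaches is disconnected and falls.

        using System.Collections.Generic;

        static class BubbleGrid
        {
            // grid[row, col]: 0 = empty, otherwise a color id.
            // Returns the cells no longer connected to the top row.
            public static List<(int Row, int Col)> FindFloaters(int[,] grid)
            {
                int rows = grid.GetLength(0), cols = grid.GetLength(1);
                var anchored = new bool[rows, cols];
                var stack = new Stack<(int R, int C)>();

                for (int c = 0; c < cols; c++) // the top row anchors everything
                    if (grid[0, c] != 0) { anchored[0, c] = true; stack.Push((0, c)); }

                var dirs = new[] { (1, 0), (-1, 0), (0, 1), (0, -1) };
                while (stack.Count > 0)
                {
                    var (r, c) = stack.Pop();
                    foreach (var (dr, dc) in dirs)
                    {
                        int nr = r + dr, nc = c + dc;
                        if (nr >= 0 && nr < rows && nc >= 0 && nc < cols
                            && grid[nr, nc] != 0 && !anchored[nr, nc])
                        {
                            anchored[nr, nc] = true;
                            stack.Push((nr, nc));
                        }
                    }
                }

                var floaters = new List<(int Row, int Col)>();
                for (int r = 0; r < rows; r++)
                    for (int c = 0; c < cols; c++)
                        if (grid[r, c] != 0 && !anchored[r, c])
                            floaters.Add((r, c));
                return floaters;
            }
        }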

