Search Results

Search found 2347 results on 94 pages for 'slightly frustrated'.


  • Monitoring the Application alongside SQL Server

    - by Tony Davis
    Sometimes, on Simple-Talk, it takes a while to spot strange and unexpected patterns of user activity, or small bugs. For example, one morning we spotted that an article’s comment count had leapt to 1485, but that only four were displayed. With some rooting around in Google Analytics, and the endlessly annoying Community Server admin interface, we were able to work out that a few days previously the article had been subject to a spam attack, and that the comment count was for some reason including both accepted and unaccepted comments (which in turn uncovered a bug in the SQL). This sort of incident made us a lot keener on monitoring Simple-Talk website usage more effectively.

    However, the metrics we wanted are troublesome, because they are far too specific for Google Analytics to measure, and the SQL Server backend doesn’t keep sufficient information to enable us to plot trends. The latter could provide, for example, the total number of comments made on, or votes cast for, articles, over all time, but not the number that occur by hour over a set time. We lacked a baseline, in other words. We couldn’t alter the database, as it is a bought-in package, and we had neither the resources nor the inclination to build in dedicated application monitoring. Possibly, we could have investigated a third-party tool to do the job; but then it occurred to us that we were already using a monitoring tool (SQL Monitor) to keep an eye on the database. It stored data, made graphs and sent alerts. Could we get it to monitor some aspects of the application as well?

    Of course, SQL Monitor’s single purpose is to check and monitor SQL Server, over time, rather than to monitor applications that use SQL Server. However, how different is the business of gathering and plotting SQL Server wait stats from gathering and plotting various aspects of user activity on the site? Not a lot, it turns out. The latest version allows us to write our own custom monitoring scripts, meaning that we could now monitor any metric in the application that returns an integer. It took little time to write a simple SQL query that collects basic metrics of the total number of subscribers, votes cast, comments made, or views of articles, over time. The SQL Monitor database polls Simple-Talk every second or so in order to get the latest totals, and can then store and plot this information, or even correlate SQL Server usage to application usage.

    You can see the live data by visiting monitor.red-gate.com. Click the "Analysis" tab, and select one of the "Simple-talk:" entries in the "Show" box and an appropriate date range (e.g. last 30 days). It’s nascent, and we’re still working on it, but it’s already given us more confidence that we’ll quickly spot trends, bugs, or bursts of ‘abnormal’ activity. If there is a sudden rise in comments, we get an alert, and if it’s due to a spam attack, we can moderate or ban the perpetrator very quickly.

    We’ve often argued that a tool should perform a single job well rather than turn into a Swiss-army knife, but ironically we’ve rather appreciated being able to make best use of what’s there anyway for a slightly different purpose. Is this a good or common practice? What do you think? Cheers, Tony.

  • P90X or How I Stopped Worrying and Love Exercise

    - by Matt Christian
    Last Wednesday, after many UPS delivery failures, I received P90X in the mail. P90X is a series of DVDs and a nutrition guide you use to shed pounds and gain muscle. Odds are you've seen the infomercial on TV at some point if you watch a little tube now and again. I started last Thursday and am still standing to tell this tale.

    At its core, P90X is a 12-DVD set of exercise videos. Each video is a different workout routine that typically lasts around an hour (some up to 1 1/2 hours). Every day you are supposed to do one of the workouts, and the workout differs every day (sometimes you may repeat a shorter 6-minute workout dedicated to abs twice a week). There are different 'programs' focused on different areas: for weight loss you do the Lean Program; for standard weight loss and muscle gain, the Regular Program; and for those hardcore health-nuts, the Insane Program (which consists of two one-hour-long workouts per day). Each program has a different set of workouts per week, which you repeat for 3 weeks, followed by a 'Relaxation Week', which is essentially a slightly different order. After the month of workouts is over, you've finished 1 phase out of 3. P90X takes 90 days, split into 3 phases (1 phase per month). Every phase has a different workout order which is also focused on different areas (weight loss, muscle gain, etc.). With the DVDs you also get a small glossy book of about 100 pages detailing the different workouts and the different programs, as well as a sample workout to see if you're even ready to start P90X.

    The second part of P90X, which can also be considered the 'core' (actually the other half of the core), is the nutrition guide that is included. The nutrition guide is a book similar to the one that defines the exercises (about 100 glossy pages), though it details the foods you should eat, the amounts, and a number of healthy (and tasty!) recipes. The guide is split up into 3 phases as well, promoting high protein and low carbs/dairy during Phase 1, and levelling off through to Phase 3, where you have a relatively balanced amount of every food group.

    So after 1 week, where am I? I've stuck quite close to the nutrition guide (there isn't 'diet food' in here, people; it's ACTUALLY food) and done my exercise every day. I think a lot of the first week is getting into the whole idea and learning the moves performed on the DVD. Have I lost weight? No. Do I feel some definition already starting to poke out? Absolutely (no pun intended).

    Tony Horton (the 51-year-old hulk that runs the whole thing) is very fun to listen to and work along with, and the 'diet' really isn't too hard to follow unless all you eat is carbs. I've tried the gym thing and could not get motivated enough to continue going. P90X is the first time I've ached from a workout BEFORE starting my next workout. For anyone interested, Google 'P90X' or 'BeachBody' to find out more information about this awesome program!

  • Prepping the Raspberry Pi for Java Excellence (part 1)

    - by HecklerMark
    I've only recently been able to begin working seriously with my first Raspberry Pi, received months ago but hastily shelved in preparation for JavaOne. The Raspberry Pi and other diminutive computing platforms offer a glimpse of the potential of what is often referred to as the embedded space, the "Internet of Things" (IoT), or Machine to Machine (M2M) computing. I have a few different configurations I want to use for multiple Raspberry Pis, but for each of them, I'll need to perform the following common steps to prepare them for their various tasks:

      1. Load an OS onto an SD card
      2. Get the Pi connected to the network
      3. Load a JDK

    I've been very happy to see good friend and JFXtras teammate Gerrit Grunwald document how to do these things on his blog (link to article here - check it out!), but I ran into some issues configuring wi-fi that caused me some needless grief. Not knowing if any of the pitfalls were caused by my slightly-older version of the Pi, and not being able to find anything specific online to help me get past it, I kept chipping away at it until I broke through. The purpose of this post is to (hopefully) help someone else recognize the same issues if/when they encounter them and work past them quickly.

    There is a great resource page here that covers several ways to get the OS on an SD card, but here is what I did (on a Mac):

      1. Plug the SD card into a reader on/in the Mac
      2. Format it (FAT32)
      3. Unmount it (diskutil unmountDisk diskn, where n is the disk number representing the SD card)
      4. Transfer the disk image for Debian to the SD card (dd if=2012-08-08-wheezy-armel.img of=/dev/diskn bs=1m)
      5. Eject the card from the Mac (diskutil eject diskn)

    There are other ways, but this is fairly quick and painless, especially after you do it several times. Yes, I had to do that dance repeatedly (minus formatting) due to the wi-fi issues, as it kept killing the ability of the Pi to boot. You should be able to dramatically reduce the number of OS loads you do, though, if you do a few things with regard to your wi-fi.

    Firstly, I strongly recommend you purchase the Edimax EW-7811Un wi-fi adapter. This adapter/chipset has been proven with the Raspberry Pi, it's tiny, and it's cheap. Avoid unnecessary aggravation and buy this one!

    Secondly, visit this page for a script and instructions regarding how to configure your new wi-fi adapter with your Pi. Here is the rub, though: there is a missing step. At least for my combination of Pi version, OS version, and uncanny gift of timing and luck, there was. :-) Here is the sequence of steps I used to make the magic happen:

      1. Plug your newly-minted SD card (with OS) into your Pi and connect a network cable (for internet connectivity).
      2. Boot your Pi. On the first boot, do the following things:
         - Opt to have it use all space on the SD card (will require a reboot eventually)
         - Disable overscan
         - Set your timezone
         - Enable the ssh server
         - Update raspi-config
      3. Reboot your Pi. This will reconfigure the SD card to use all space (see above).
      4. After you log in (UID: pi, password: raspberry), upgrade your OS. This was the missing step for me that put a merciful end to the repeated SD card re-imaging and made the wi-fi configuration trivial. To do so, just type sudo apt-get upgrade and give it several minutes to complete. Pour yourself a cup of coffee and congratulate yourself on the time you've just saved. ;-)
      5. With the OS upgrade finished, now you can follow Mr. Engman's directions (to the letter, please see link above), download his script, and let it work its magic.

    One aside: I plugged the little power-sipping Edimax directly into the Pi and it worked perfectly. No powered hub needed, at least in my configuration.

    To recap, that OS upgrade (at least at this point, with this combination of OS/drivers/Pi version) is absolutely essential for a smooth experience. Miss that step, and you're in for hours of "fun". Save yourself! I'll pick up next time with more of the Java side of the RasPi configuration, but as they say, you have to cross the moat to get into the castle. Hopefully, this will help you do just that. Until next time! All the best, Mark

  • Secret Agent Man

    - by Bil Simser
    Just a quick one this morning as we all get started in the week. Something that comes into play (sometimes in a big way) is the user agent string your browser gives off. For example, using the User-Agent field in the request header, you can determine what browser the user is running and act accordingly.

    Internet Explorer 9 modified the UA string slightly, so just in case you're looking for it, here are the user agent strings for IE9 (in various modes):

      Internet Explorer 9 Mode:
        Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0)

      Internet Explorer 8 Mode:
        Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; MS-RTC LM 8; InfoPath.3; .NET4.0C; .NET4.0E; Zune 4.7)

      Internet Explorer 7 Mode:
        Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; MS-RTC LM 8; InfoPath.3; .NET4.0C; .NET4.0E; Zune 4.7)

      Internet Explorer 9 (Compatibility Mode):
        Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; MS-RTC LM 8; InfoPath.3; .NET4.0C; .NET4.0E; Zune 4.7)

    A couple of things to note here:

      - This was from a 64-bit Windows 7 client, so that might account for the WOW64 in the agent string (I don't have a 32-bit client to test from).
      - Various applications and platforms add to the UA string, just like they did in previous IE releases. For example, you can see I have various .NET versions installed as well as Zune. You can take advantage of this by querying the UA string for capabilities and presenting options accordingly to the end user.

    As applications will continue to add to and modify this string, you'll want to query the string for parts, not the entire string. For example, if you want to detect whether you're coming from IE running on a Windows Phone 7, just look for "iemobile" in the user agent string. Happy hacking!
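    For illustration, here is a minimal sketch (C#, inside an ASP.NET page or handler) of that kind of partial-match check; the two layout helpers are hypothetical placeholders, the substring tests are the point:

      // Query the UA string for parts, not the whole string (a sketch, not
      // code from the original post). Request is the current HttpRequest.
      string ua = Request.UserAgent ?? string.Empty;

      if (ua.IndexOf("iemobile", StringComparison.OrdinalIgnoreCase) >= 0)
      {
          // IE running on a Windows Phone 7
          ShowMobileLayout();    // hypothetical helper
      }
      else if (ua.Contains("Trident/5.0"))
      {
          // IE9 rendering engine, regardless of document mode
          ShowStandardLayout();  // hypothetical helper
      }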

  • Entity Framework 4, WCF & Lazy Loading Tip

    - by Dane Morgridge
    If you are doing any work with Entity Framework and custom WCF services in EFv1, everything works great. As soon as you jump to EFv4, you may find yourself getting odd errors that you can’t seem to catch. The problem almost always has something to do with the new lazy loading feature in Entity Framework 4. With Entity Framework 1, you didn’t have lazy loading, so this problem didn’t surface.

    Assume I have a Person entity and an Address entity where there is a one-to-many relationship between Person and Address (Person has many Addresses). In Entity Framework 1 (or in EFv4 with lazy loading turned off), I would have to load the Address data by hand by using either the Include or the Load method:

      var people = context.People.Include("Addresses");

    or

      people.Addresses.Load();

    Lazy loading kicks in the first time the Person.Addresses collection is accessed:

      var people = context.People.ToList();

      // only person data is currently in memory

      foreach (var person in people)
      {
          // EF determines that no Address data has been loaded and lazy loads
          int count = person.Addresses.Count();
      }

    Lazy loading has the useful (and sometimes not so useful) feature of fetching data when requested. It can make your life easier or it can make it a big pain. So what does this have to do with WCF? One word: serialization.

    When you need to pass data over the wire with WCF, the data contract is serialized into either XML or binary depending on the binding you are using. Well, if I am using lazy loading, the Person entity gets serialized and during that process, the Addresses collection is accessed. When that happens, the Address data is lazy loaded. Then the Address is serialized, and the Person property is accessed, and then also serialized, and then the Addresses collection is accessed. Now the second time through, lazy loading doesn’t kick in, but you can see the infinite loop caused by this process. This is a problem with any serialization, but I personally found it trying to use WCF.

    The fix for this is simply to turn off lazy loading. This can be done on each call by using context options:

      context.ContextOptions.LazyLoadingEnabled = false;

    Turning lazy loading off will now allow your classes to be serialized properly. Note, this is if you are using the standard Entity Framework classes. If you are using POCO, you will have to do something slightly different. With POCO, the Entity Framework will create proxy classes by default that allow things like lazy loading to work with POCO. This basically creates a proxy object that is a full Entity Framework object that sits between the context and the POCO object. When using POCO with WCF (or any serialization), just turning off lazy loading doesn’t cut it. You have to turn off proxy creation to ensure that your classes will serialize properly:

      context.ContextOptions.ProxyCreationEnabled = false;

    The nice thing is that you can do this on a call-by-call basis. If you use a new context for each set of operations (which you should), then you can turn either lazy loading or proxy creation on and off as needed.
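    As a minimal sketch of how this might look inside a WCF service operation (the context and entity names here are illustrative, not from the original post):

      // Hypothetical WCF operation returning entities for serialization.
      // Disabling lazy loading (and proxy creation, for POCO) before querying
      // stops the serializer from walking the object graph in circles.
      public List<Person> GetPeopleWithAddresses()
      {
          using (var context = new PeopleEntities())
          {
              context.ContextOptions.LazyLoadingEnabled = false;
              context.ContextOptions.ProxyCreationEnabled = false;

              // Eager-load exactly the data we want to send over the wire
              return context.People.Include("Addresses").ToList();
          }
      }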

  • How to inhibit suspend temporarily?

    - by Zorn
    I have searched around a bit for this and can't seem to find anything helpful. I have my PC running Ubuntu 12.10 set up to suspend after 30 minutes of inactivity. I don't want to change that; it works great most of the time. What I do want to do is disable the automatic suspend if a particular application is running. How can I do this?

    The closest thing I've found so far is to add a shell script in /usr/lib/pm-utils/sleep.d which checks if the application is running and returns 1 to indicate that suspend should be prevented. But it looks like the system then gives up on suspending automatically, instead of trying again after another 30 minutes. (As far as I can tell, if I move the mouse, that restarts the timer again.) It's quite likely the application will finish after a couple of hours, and I'd rather my PC then suspended automatically if I'm not using it at that point. (So I don't want to add a call to pm-suspend when the application finishes.) Is this possible? Any advice would be appreciated. Cheers.

    EDIT: As I noted in one of the comments below, what I actually wanted was to inhibit suspend when my PC was serving files over NFS; I just wanted to focus on the "suspend" part of the question because I already had an idea how to solve the NFS part. Using the 'xdotool' idea given in one of the answers, I have come up with the following script, which I run from cron every few minutes. It's not ideal because it stops the screensaver kicking in as well, but it does work. I need to have a look at why 'caffeine' doesn't correctly re-enable suspend later on; then I could probably do better. Anyway, this does seem to work, so I'm including it here in case anyone else is interested.

      #!/bin/bash

      # If the output of this function changes between two successive runs of this
      # script, we inhibit auto-suspend.
      function check_activity() {
          /usr/sbin/nfsstat --server --list
      }

      # Prevent the automatic suspend from kicking in.
      function inhibit_suspend() {
          # Slightly jiggle the mouse pointer about; we do a small step and
          # reverse step to try to stop this being annoying to anyone using the
          # PC. TODO: This isn't ideal; apart from being a bit hacky, it stops
          # the screensaver kicking in as well, when all we want is to stop
          # the PC suspending. Can 'caffeine' help?
          export DISPLAY=:0.0
          xdotool mousemove_relative --sync -- 1 1
          xdotool mousemove_relative --sync -- -1 -1
      }

      LOG="$HOME/log/nfs-suspend-blocker.log"
      ACTIVITYFILE1="$HOME/tmp/nfs-suspend-blocker.current"
      ACTIVITYFILE2="$HOME/tmp/nfs-suspend-blocker.previous"

      echo "Started run at $(date)" >> "$LOG"

      if [ ! -f "$ACTIVITYFILE1" ]; then
          check_activity > "$ACTIVITYFILE1"
          exit 0
      fi

      /bin/mv "$ACTIVITYFILE1" "$ACTIVITYFILE2"
      check_activity > "$ACTIVITYFILE1"

      if cmp --quiet "$ACTIVITYFILE1" "$ACTIVITYFILE2"; then
          echo "No activity detected since last run" >> "$LOG"
      else
          echo "Activity detected since last run; inhibiting suspend" >> "$LOG"
          inhibit_suspend
      fi

  • Merge sort versus quick sort performance

    - by Giorgio
    I have implemented merge sort and quick sort using C (GCC 4.4.3 on Ubuntu 10.04 running on a 4 GB RAM laptop with an Intel DUO CPU at 2GHz) and I wanted to compare the performance of the two algorithms. The prototypes of the sorting functions are:

      void merge_sort(const char **lines, int start, int end);
      void quick_sort(const char **lines, int start, int end);

    i.e. both take an array of pointers to strings and sort the elements with index i: start <= i <= end.

    I have produced some files containing random strings with length on average 4.5 characters. The test files range from 100 lines to 10000000 lines. I was a bit surprised by the results because, even though I know that merge sort has complexity O(n log(n)) while quick sort is O(n^2) in the worst case, I have often read that on average quick sort should be as fast as merge sort. However, my results are the following:

      - Up to 10000 strings, both algorithms perform equally well. For 10000 strings, both require about 0.007 seconds.
      - For 100000 strings, merge sort is slightly faster with 0.095 s against 0.121 s.
      - For 1000000 strings, merge sort takes 1.287 s against 5.233 s of quick sort.
      - For 5000000 strings, merge sort takes 7.582 s against 118.240 s of quick sort.
      - For 10000000 strings, merge sort takes 16.305 s against 1202.918 s of quick sort.

    So my question is: are my results as expected, meaning that quick sort is comparable in speed to merge sort for small inputs but, as the size of the input data grows, the fact that its worst-case complexity is quadratic will become evident?

    Here is a sketch of what I did. In the merge sort implementation, the partitioning consists in calling merge sort recursively, i.e.

      merge_sort(lines, start, (start + end) / 2);
      merge_sort(lines, 1 + (start + end) / 2, end);

    Merging of the two sorted sub-arrays is performed by reading the data from the array lines and writing it to a global temporary array of pointers (this global array is allocated only once). After each merge the pointers are copied back to the original array. So the strings are stored once but I need twice as much memory for the pointers.

    For quick sort, the partition function chooses the last element of the array to sort as the pivot and scans the previous elements in one loop. After it has produced a partition of the type

      start ... {elements <= pivot} ... pivotIndex ... {elements > pivot} ... end

    it calls itself recursively:

      quick_sort(lines, start, pivotIndex - 1);
      quick_sort(lines, pivotIndex + 1, end);

    Note that this quick sort implementation sorts the array in place and does not require additional memory, therefore it is more memory-efficient than the merge sort implementation.

    So my question is: is there a better way to implement quick sort that is worthwhile trying out? If I improve the quick sort implementation and perform more tests on different data sets (computing the average of the running times on different data sets), can I expect better performance of quick sort with respect to merge sort?

    EDIT: Thank you for your answers. My implementation is in-place and is based on the pseudo-code I have found on Wikipedia in Section In-place version:

      function partition(array, 'left', 'right', 'pivotIndex')

    where I choose the last element in the range to be sorted as the pivot, i.e. pivotIndex := right. I have checked the code over and over again and it seems correct to me. In order to rule out the case that I am using the wrong implementation, I have uploaded the source code on github (in case you would like to take a look at it).

    Your answers seem to suggest that I am using the wrong test data. I will look into it and try out different test data sets. I will report as soon as I have some results.
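    For what it's worth, one standard improvement worth trying is median-of-three pivot selection, which avoids the quadratic worst case that a fixed last-element pivot hits on already-sorted or nearly-sorted ranges; with many duplicate keys (likely with random strings averaging 4.5 characters), a three-way "fat pivot" partition is the other common fix. A minimal sketch of median-of-three, shown in C# for brevity (the same idea carries straight over to the C implementation; this is a common technique, not the poster's code):

      // Order the first/middle/last elements, then use the median as the pivot.
      // Assumes the range holds at least three elements.
      static int MedianOfThreePivot(string[] lines, int start, int end)
      {
          int mid = start + (end - start) / 2;
          if (string.CompareOrdinal(lines[mid], lines[start]) < 0) Swap(lines, mid, start);
          if (string.CompareOrdinal(lines[end], lines[start]) < 0) Swap(lines, end, start);
          if (string.CompareOrdinal(lines[end], lines[mid]) < 0) Swap(lines, end, mid);

          // lines[mid] now holds the median; park it next to the end so the
          // existing last-element partition scheme can be reused unchanged.
          Swap(lines, mid, end - 1);
          return end - 1;
      }

      static void Swap(string[] a, int i, int j)
      {
          string tmp = a[i]; a[i] = a[j]; a[j] = tmp;
      }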

  • Would using a self-signed SSL certificate be appropriate in this scenario?

    - by Kevin Y
    Now I realize this topic has been discussed in a few questions before (specifically this one), but I'm still a little confused about the implications of using a self-signed certificate, and how I would be affected by doing so in this case. After reading various sources, I'm still a little confused about the exact details of using one.

      "The biggest problem with a self-signed certificate is a man-in-the-middle attack. Even if you are 100% sure that you are on the correct website and you completely trust the site (your email server for example), you could have someone intercept the connection and present you with their own self-signed certificate. You would think that you are using a secure connection with your email server but you are really using a secure connection to an attacker's email server." – SSL Shopper

    So somebody could switch out my self-signed certificate with their own, and I wouldn't be able to detect it? The way this site phrases it, it makes it sound worse to install a self-signed certificate than to leave your site without a certificate at all.

      "Self-signed certificates cannot (by nature) be revoked, which may allow an attacker who has already gained access to monitor and inject data into a connection to spoof an identity if a private key has been compromised. CAs on the other hand have the ability to revoke a compromised certificate if alerted, which prevents its further use." – Wikipedia

    Does this mean that the only way someone could switch out their own certificate for mine is for them to find out the private key? I suppose this is more secure, but I'm still slightly confused about what exactly results from using a self-signed certificate. Is the only issue that obnoxious security warning that pops up in your browser when directed to the site, or is there more to it?

    Now in my case, I want to add an SSL certificate to a minuscule Wordpress blog I run that I don't expect anyone else will read anytime soon; I mainly started it to get into the habit of blogging, and to learn more about the process of administrating a site (e.g. what to do in situations like this one). Whenever I go to the login page and there's an HTTP:// instead of HTTPS://, I cringe a little. Submitting my password feels like I'm shouting my password out loud with hundreds of people listening. I don't plan on adding any other authors to the site, so I am the only person who would ever need to log in. This isn't a site I'm trying to get page views from, or one that handles e-commerce or any sensitive info like that; it handles simply my username and password to log in with. One of the concerns (that I've gathered so far) of a self-signed certificate is that non-technical users might be scared by the security warning, but this would not be an issue in my case.

    TL;DR: If scaring visitors away isn't a concern (which it isn't in my case), is it acceptable to use a self-signed certificate for the purpose of encrypting my Wordpress blog's password, or are there added security issues I should be aware of? Essentially, I'm wondering whether adding a self-signed certificate will be safer than leaving my login page the way it is now, or if it adds the potential for more security breaches than leaving it sans-SSL.

  • C# 4 Named Parameters for Overload Resolution

    - by Steve Michelotti
    C# 4 is getting a new feature called named parameters. Although this is a stand-alone feature, it is often used in conjunction with optional parameters. Last week when I was giving a presentation on C# 4, I got a question on a scenario regarding overload resolution that I had not considered before, which yielded interesting results. Before going any further, a little background first. Named parameters is a well-documented feature that works like this: suppose you have a method defined like this:

      void DoWork(int num, string message = "Hello")
      {
          Console.WriteLine("Inside DoWork() - num: {0}, message: {1}", num, message);
      }

    This enables you to call the method with any of these:

      DoWork(21);
      DoWork(num: 21);
      DoWork(21, "abc");
      DoWork(num: 21, message: "abc");

    and the corresponding results will be:

      Inside DoWork() - num: 21, message: Hello
      Inside DoWork() - num: 21, message: Hello
      Inside DoWork() - num: 21, message: abc
      Inside DoWork() - num: 21, message: abc

    This is all pretty straightforward and well documented. What is slightly more interesting is how resolution is handled with method overloads. Suppose we had a second overload for DoWork() that looked like this:

      void DoWork(object num)
      {
          Console.WriteLine("Inside second overload: " + num);
      }

    The first rule applied for method overload resolution in this case is that it looks for the most strongly-typed match first. Hence, since the second overload has System.Object as the parameter rather than Int32, this second overload will never be called for any of the 4 method calls above. But suppose the method overload looked like this:

      void DoWork(int num)
      {
          Console.WriteLine("Inside second overload: " + num);
      }

    In this case, both overloads have the first parameter as Int32, so they both fulfill the first rule equally. In this case the overload with the optional parameters will be ignored if the parameters are not specified. Therefore, the same 4 method calls from above would result in:

      Inside second overload: 21
      Inside second overload: 21
      Inside DoWork() - num: 21, message: abc
      Inside DoWork() - num: 21, message: abc

    Even all this is pretty well documented. However, we can now consider the very interesting scenario I was presented with. The question was what happens if you change the parameter name in one of the overloads. For example, what happens if you change the parameter *name* for the second overload like this:

      void DoWork(int num2)
      {
          Console.WriteLine("Inside second overload: " + num2);
      }

    In this case, the first 2 method calls will yield *different* results:

      DoWork(21);
      DoWork(num: 21);

    results in:

      Inside second overload: 21
      Inside DoWork() - num: 21, message: Hello

    We know the first method call will go to the second overload because of normal method overload resolution rules, which ignore the optional parameters. But for the second call, even though all the same rules apply, the compiler will allow you to specify a named parameter which, in effect, overrides the typical rules and directs the call to the first overload. Keep in mind this would only work if the method overloads had different parameter names for the same types (which in itself is weird). But it is a situation I had not considered before, and it is one in which you should be aware of the rules that the C# 4 compiler applies.
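    A natural follow-on experiment (my own addition, not from the original post): once the parameter names differ, named arguments let you steer the call to either overload explicitly:

      DoWork(num: 21);   // first overload:  Inside DoWork() - num: 21, message: Hello
      DoWork(num2: 21);  // second overload: Inside second overload: 21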

  • Ubuntu 12.04 // Likewise Open // Unable to ever authenticate AD users

    - by Rob
    So: Ubuntu 12.04, with the latest Likewise from the beyondtrust website. It joins the domain fine, gets proper information from lw-get-status, can use lw-find-user-by-name to retrieve/locate users, and can use lw-enum-users to get all users.

    Attempting to log in with an AD user via SSH generates the following errors in the auth.log file:

      Nov 28 19:15:45 hostname sshd[2745]: PAM unable to dlopen(pam_winbind.so): /lib/security/pam_winbind.so: cannot open shared object file: No such file or directory
      Nov 28 19:15:45 hostname sshd[2745]: PAM adding faulty module: pam_winbind.so
      Nov 28 19:15:51 hostname sshd[2745]: error: PAM: Authentication service cannot retrieve authentication info for DOMAIN\\user.name from remote.hostname
      Nov 28 19:16:06 hostname sshd[2745]: Connection closed by 10.1.1.84 [preauth]

    Attempting to log in via LightDM itself generates similar errors in the auth.log file:

      Nov 28 19:19:29 hostname lightdm: PAM unable to dlopen(pam_winbind.so): /lib/security/pam_winbind.so: cannot open shared object file: No such file or directory
      Nov 28 19:19:29 hostname lightdm: PAM adding faulty module: pam_winbind.so
      Nov 28 19:19:47 hostname lightdm: pam_succeed_if(lightdm:auth): requirement "user ingroup nopasswdlogin" not met by user "DOMAIN\user.name"
      Nov 28 19:19:52 hostname lightdm: [lsass-pam] [module:pam_lsass]pam_sm_authenticate error [login:DOMAIN\user.name][error code:40022]
      Nov 28 19:19:54 hostname lightdm: PAM unable to dlopen(pam_winbind.so): /lib/security/pam_winbind.so: cannot open shared object file: No such file or directory
      Nov 28 19:19:54 hostname lightdm: PAM adding faulty module: pam_winbind.so

    Attempting to log in via a console on the system itself generates slightly different errors:

      Nov 28 19:31:09 hostname login[997]: PAM unable to dlopen(pam_winbind.so): /lib/security/pam_winbind.so: cannot open shared object file: No such file or directory
      Nov 28 19:31:09 hostname login[997]: PAM adding faulty module: pam_winbind.so
      Nov 28 19:31:11 hostname login[997]: [lsass-pam] [module:pam_lsass]pam_sm_authenticate error [login:DOMAIN\user.name][error code:40022]
      Nov 28 19:31:14 hostname login[997]: FAILED LOGIN (1) on '/dev/tty2' FOR 'DOMAIN\user.name', Authentication service cannot retrieve authentication info
      Nov 28 19:31:31 hostname login[997]: FAILED LOGIN (2) on '/dev/tty2' FOR 'DOMAIN\user.name', Authentication service cannot retrieve authentication info

    I am baffled. The errors obviously are correct; the file /lib/security/pam_winbind.so does not exist. If it's a dependency/required, surely it should be part of the package? I've installed/reinstalled, I've used the downloaded package from the beyondtrust website, I've used the repository; nothing seems to work, and every method of installing this application generates the same errors for me.

    UPDATE: Hrmm, I thought Likewise didn't use native winbind but its own modules. Installing winbind from apt-get uninstalls pbis-open (Likewise) and generates failures when installing if pbis-open is installed first. Uninstalled winbind, reinstalled pbis-open, same issue as above. The file pam_winbind.so does not exist in that location.

      Setting up pbis-open-legacy (7.0.1.918) ...
      Installing Packages was successful
      This computer is joined to DOMAIN.LOCAL
      New libraries and configurations have been installed for PAM and NSS.

    Clearly it thinks it has installed it, but it hasn't. It may be a legacy issue with the previous attempt to configure domain integration manually with winbind.

    Does anyone have a working likewise-open installation, and does the /etc/nsswitch.conf include references to winbind? Or do the /etc/pam.d/common-account or /etc/pam.d/common-password files reference pam_winbind.so? I'm unsure if those entries are just legacy or set up by Likewise.

    UPDATE 2: A complete reinstall of the OS fixed it and it worked seamlessly, like it was meant to, and those 2 PAM files did NOT include entries for pam_winbind.so, so that was the underlying problem. Thanks for the assist.

  • Bitmask data insertions in SSDT Post-Deployment scripts

    - by jamiet
    On my current project we are using SQL Server Data Tools (SSDT) to manage our database schema, and one of the tasks we need to do often is insert data into that schema once deployed; the typical method employed to do this is to leverage Post-Deployment scripts, and that is exactly what we are doing. Our requirement is a little different though: our data is split up into various buckets that we need to selectively deploy on a case-by-case basis. I was going to use a SQLCMD variable for each bucket (defaulted to some value other than “Yes”) to define whether it should be deployed or not, so we could use something like this in our Post-Deployment script:

      IF ($(DeployBucket1Flag) = 'Yes')
      BEGIN
          :r .\Bucket1.data.sql
      END
      IF ($(DeployBucket2Flag) = 'Yes')
      BEGIN
          :r .\Bucket2.data.sql
      END
      IF ($(DeployBucket3Flag) = 'Yes')
      BEGIN
          :r .\Bucket3.data.sql
      END

    That works fine and is, I’m sure, a very common technique for doing this. It is, however, slightly ugly because we have to litter our deployment with various SQLCMD variables. My colleague James Rowland-Jones (whom I’m sure many of you know) suggested another technique – bitmasks. I won’t go into detail about how this works (James has already done that at Using a Bitmask - a practical example), but I’ll summarise by saying that you can deploy different combinations of the buckets simply by supplying a different numerical value for a single SQLCMD variable. Each bit of that value’s binary representation signifies whether a particular bucket should be deployed or not. This is better demonstrated using the following simple script (which can be easily leveraged inside your Post-Deployment scripts):

      /*
      $(DeployData) is a SQLCMD variable that would, if you were using this in SSDT,
      be declared in the SQLCMD variables section of your project file. It should
      contain a numerical value, defaulted to 0. In this example I have declared it
      using a :setvar statement. Test the effect of different values by changing the
      :setvar statement accordingly.

      Examples:
        :setvar DeployData 1   will deploy bucket 1
        :setvar DeployData 2   will deploy bucket 2
        :setvar DeployData 3   will deploy buckets 1 & 2
        :setvar DeployData 6   will deploy buckets 2 & 3
        :setvar DeployData 31  will deploy buckets 1, 2, 3, 4 & 5
      */
      :setvar DeployData 0

      DECLARE @bitmask VARBINARY(MAX) = CONVERT(VARBINARY, $(DeployData));

      IF (@bitmask & 1 = 1)
      BEGIN
          PRINT 'Bucket 1 insertions';
      END
      IF (@bitmask & 2 = 2)
      BEGIN
          PRINT 'Bucket 2 insertions';
      END
      IF (@bitmask & 4 = 4)
      BEGIN
          PRINT 'Bucket 3 insertions';
      END
      IF (@bitmask & 8 = 8)
      BEGIN
          PRINT 'Bucket 4 insertions';
      END
      IF (@bitmask & 16 = 16)
      BEGIN
          PRINT 'Bucket 5 insertions';
      END

    As an example of running this using DeployData=6: the binary representation of 6 is 110. The second and third least significant bits of that binary number are set to 1, and hence buckets 2 and 3 are “activated”. Hope that makes sense and is useful to some of you!

    @Jamiet

    P.S. I used the awesome HTML Copy feature of Visual Studio’s Productivity Power Tools in order to format the T-SQL code above for this blog post.

  • Flash Actionscript 3.0 Game Projectile Creation

    - by Christian Basar
    I have been creating a side-scrolling ActionScript 3.0 game. In this game I want the Player to be able to shoot blow darts as weapons. I had some trouble getting the darts to be created in the right place (in front of the player), but eventually got it working with some help from this page (please look at it for background information on this problem): http://stackoverflow.com/questions/8031553/flash-actionscript-3-0-game-projectile-creation

    I got the darts to be created in the right place (near the player) and a 'movePlayerDarts()' function moves them. But I actually have a new problem. When the player moves after firing a dart, the dart tries to follow him! If the player jumps, the dart rises up. If the player moves to the left, the dart moves slightly to the left. Obviously, there is some code somewhere which is telling the darts to follow the player. I do not see how, unless the 'playerDartContainer' has something to do with that. But the container is always at position (0,0) and it does not move. Also, as a test I traced a dart's 'y' coordinate within the constantly-running 'movePlayerDarts()' function. As you can see, that function constantly moves the dart down the y axis by increasing its y-coordinate value. But when I jump, the 'y' coordinate being traced is never reduced, even though the dart clearly looks like it's rising! If anybody has any suggestions, I'd appreciate them!

    Here is the code I use to create the darts:

      // This function creates a dart
      public function createDart():void
      {
          if (playerDartContainer.numChildren <= 4)
          {
              // Play dart shooting sound
              sndDartShootIns.play();
              // Create a new 'PlayerDart' object
              playerDart = new PlayerDart();

              // Set the new dart's initial position and direction depending on the player's direction
              // Player's facing right
              if (player.scaleX == 1)
              {
                  // Create dart in front of player's dart gun
                  playerDart.x = player.x + 12;
                  playerDart.y = player.y - 85;
                  // Dart faces right, too
                  playerDart.scaleX = 1;
              }
              // Player's facing left
              else if (player.scaleX == -1)
              {
                  // Create dart in front of player's dart gun
                  playerDart.x = player.x - 12;
                  playerDart.y = player.y - 85;
                  // Dart faces left, too
                  playerDart.scaleX = -1;
              }
              playerDartContainer.addChild(playerDart);
          }
      } // End of 'createDart()' function

    This code is the EnterFrameHandler for the player darts:

      // In every frame, call 'movePlayerDarts()' to move the darts within the 'playerDartContainer'
      public function playerDartEnterFrameHandler(event:Event):void
      {
          // Only move the Player's darts if their container has at least one dart within
          if (playerDartContainer.numChildren > 0)
          {
              movePlayerDarts();
          }
      }

    And finally, this is the code that actually moves all of the player's darts:

      // Move all of the Player's darts
      public function movePlayerDarts():void
      {
          for (var pdIns:int = 0; pdIns < playerDartContainer.numChildren; pdIns++)
          {
              // Set the Player Dart 'instance' variable to equal the current PlayerDart
              playerDartIns = PlayerDart(playerDartContainer.getChildAt(pdIns));
              // Move the current dart in the direction it was shot. The dart's 'x-scale'
              // factor is multiplied by its speed (5) to move the dart in its correct
              // direction. If the 'x-scale' factor is -1, the dart is pointing left (as
              // seen in the 'createDart()' function. (-1 * 5 = -5), so the dart will go
              // to the left at a rate of 5. The opposite is true for the right-ward facing darts
              playerDartIns.x += playerDartIns.scaleX * 1;
              // Make gravity work on the dart
              playerDartIns.y += 0.7;
              //playerDartIns.y += 1;

              // What if the dart hits the ground?
              if (HitTest.intersects(playerDartIns, floor, this))
              {
                  playerDartContainer.removeChild(playerDartIns);
              }
              //trace("Dart x: " + playerDartIns.x);
              trace("Dart y: " + playerDartIns.y);
          }
      }

  • Use Case Actors - Primary versus Secondary

    - by Dave Burke
    The Unified Modeling Language (UML) (1) defines an Actor (from UseCases) as:

      "An actor specifies a role played by a user or any other system that interacts with the subject."

    In Alistair Cockburn’s book “Writing Effective Use Cases” (2), Actors are further defined as follows:

      Primary Actor: The primary actor of a use case is the stakeholder that calls on the system to deliver one of its services. It has a goal with respect to the system – one that can be satisfied by its operation. The primary actor is often, but not always, the actor who triggers the use case.

      Supporting Actors: A supporting actor in a use case is an external actor that provides a service to the system under design. It might be a high-speed printer, a web service, or humans that have to do some research and get back to us.

    In a 2006 article (3), Cockburn refined the definitions slightly to read:

      Primary Actors: The Actor(s) using the system to achieve a goal. The Use Case documents the interactions between the system and the actors to achieve the goal of the primary actor.

      Secondary Actors: Actors that the system needs assistance from to achieve the primary actor’s goal.

    Finally, the Oracle Unified Method (OUM) concurs with the UML definition of Actors, along with Cockburn’s refinement, but OUM also includes the following: secondary actors may or may not have goals that they expect to be satisfied by the use case; the primary actor always has a goal, and the use case exists to satisfy the primary actor.

    Now that we are on the same “page”, let’s consider two examples.

    A bank loan officer wants to review a loan application from a customer, and part of the process involves a real-time credit rating check:

      Use Case Name: Review Loan Application
      Primary Actor: Loan Officer
      Secondary Actors: Credit Rating System

    A Human Resources manager wants to change the job code of an employee, and as part of the process, automatically notify several other departments within the company of the change:

      Use Case Name: Maintain Job Code
      Primary Actor: Human Resources Manager
      Secondary Actors: None

    The first example is quite straightforward; we need to define the Secondary Actor because without the “Credit Rating System” we cannot successfully complete the Use Case. In other words, the goal of the Primary Actor is to successfully complete the Loan Application, but they need the explicit “help” of the Secondary Actor (Credit Rating System) to achieve this goal.

    The second example is where people sometimes get confused. Within OUM we would not include the “other departments” as Secondary Actors, and therefore would not include them on the Use Case diagram, for the following reasons:

      - The other departments are not required for the successful completion of the Use Case.
      - We are not expecting any response from the other departments (at least within the bounds of the Use Case under discussion).

    Having said that, within the detail of the Use Case Specification Main Success Scenario, we would include something like: “The system sends a notification to the related department heads (ref. Business Rule BR101)”.

    Now let’s consider one final example. A Procurement Manager wants to place a “bid” for some goods using an On-Line Trading Community (a B2B version of eBay):

      Use Case Name: Create Bid
      Primary Actor: Procurement Manager
      Secondary Actors: On-Line Trading Community

    You might wonder why the Trading Community is listed as a Secondary Actor; i.e. if all we are going to do is place a bid for a specific quantity of goods at a given price and send that off to the Trading Community, then why would the Trading Community need to “assist” in that Use Case? Well, once again, it comes back to the “User Experience” and how we want to optimize that when we think about our Use Case, and ultimately, when the developer comes to assembling some code. In this final example, the Procurement Manager cannot successfully complete the “Create Bid” Use Case until they receive an affirmative confirmation back from the Trading Community that the Bid has been accepted. Therefore, the Trading Community must become a Secondary Actor and be referenced both on the Use Case diagram and in the Use Case Specification.

    Any astute readers who are wondering about the “single sitting” rule will have to wait for a follow-up Blog entry to find out how that consideration can be factored in!!! Happy Use Case writing!

    (1) OMG Unified Modeling Language™ (OMG UML), Superstructure Version 2.4.1
    (2) Cockburn, A., 2000, Writing Effective Use Cases, Addison-Wesley Professional; Edition 1
    (3) Cockburn, A., 2006, “Use case fundamentals”, viewed 20th March 2012, http://alistair.cockburn.us/Use+case+fundamentals

  • .Net Reflector 6.5 EAP now available

    - by CliveT
    With the release of CLR 4 being so close, we’ve been working hard on getting the new C# and VB language features implemented inside Reflector. The work isn’t complete yet, but we have some of the features working. Most importantly, there are going to be changes to the Reflector object model, and we thought it would be useful for people to see the changes and have an opportunity to comment on them.

    Before going any further, we should tell you what the EAP contains that’s different from the released version. A number of bugs have been fixed, mainly bugs that were raised via the forum. This is slightly offset by the fact that this EAP hasn’t had a whole lot of testing, and there may have been new bugs introduced during the development work we’ve been doing.

    The C# language writer has been changed to display in and out co- and contra-variance markers on interfaces and delegates, and to display default values for optional parameters in method definitions. We also concisely display values passed by reference into COM calls. However, we do not change callsites to display calls using named parameters; this looks like hard work to get right.

    The forthcoming version of the C# language introduces dynamic types and dynamic calls. The new version of Reflector should display a dynamic call rather than the generated C#:

      dynamic target = MyTestObject();
      target.Hello("Mum");

    We have a few bugs in this area where we are not casting to dynamic when necessary. These have been fixed on a branch and should make their way into the next EAP.

    To support the dynamic features, we’ve added the types IDynamicMethodReferenceExpression, IDynamicPropertyIndexerExpression, and IDynamicPropertyReferenceExpression to the object model. These types, based on the versions without “Dynamic” in the name, reflect the fact that we don’t have full information about the method that is going to be called, but only have its name (as a string). These interfaces are going to change – in an internal version, they have been extended to include information about which parameter positions use runtime types and which use compile-time types. There’s also an interface, IDynamicVariableDeclaration, that can be used to determine if a particular variable is used at dynamic call sites as a target.

    A couple of these language changes have also been added to the Visual Basic language writer. The new features are exposed only when the optimization level is set to .NET 4. When the level is set this high, the other standard language writers will simply display a message to say that they do not handle such an optimization level.

    Reflector Pro now has 4.0 as an optional compilation target, and we have done some work to get the pdb generation right for these new features.

    The EAP version of Reflector no longer installs the add-in on startup. The first time you run the EAP, it displays the integration options dialog. You can use the checkboxes to select the versions of Visual Studio into which you want to install the EAP version. Note that you can only have one version of Reflector Pro installed in Visual Studio; if you install into a Visual Studio that has another version installed, the previous version will be removed.

    Please try it out and send your feedback to the EAP forum.

  • Why are my Unity procedural animations jerky?

    - by Phoenix Perry
    I'm working in Unity and getting some crazy weird motion behavior. I have a plane and I'm moving it. It's ever so slightly getting about 1 pixel bigger and smaller. It looks like it's kind of getting squeezed sideways by a pixel. I'm moving a plane by cos and sin so it will spin on the x and z axes. If the planes are moving at Time.time, everything is fine. However, if I put in a slower speed multiplier, I get an amazingly weird jerk in my animation. I get it with or without the lerp. How do I fix it? I want it to move very slowly. Is there some sort of invisible grid in Unity? Some sort of minimum motion per frame? I put a visual sample of the behavior here. Here's the relevant code:

      public void spin()
      {
          for (int i = 0; i < numPlanes; i++)
          {
              GameObject g = planes[i] as GameObject;

              // alt method
              //currentRotation += speed * Time.deltaTime * 100;
              //rotation.eulerAngles = new Vector3(0, currentRotation, 0);
              //g.transform.position = rotation * rotationRadius;

              // sine method
              g.GetComponent<PlaneSetup>().pos.x = g.GetComponent<PlaneSetup>().radiusX * (Mathf.Cos((Time.time * speed) + g.GetComponent<PlaneSetup>().startAngle));
              g.GetComponent<PlaneSetup>().pos.z = g.GetComponent<PlaneSetup>().radius * Mathf.Sin((Time.time * speed) + g.GetComponent<PlaneSetup>().startAngle);
              g.GetComponent<PlaneSetup>().pos.y = g.GetComponent<Transform>().position.y;

              // offset
              g.GetComponent<PlaneSetup>().pos.z += 20;

              g.GetComponent<PlaneSetup>().posLerp.x = Mathf.Lerp(g.transform.position.x, g.GetComponent<PlaneSetup>().pos.x, .5f);
              g.GetComponent<PlaneSetup>().posLerp.z = Mathf.Lerp(g.transform.position.z, g.GetComponent<PlaneSetup>().pos.z, .5f);
              g.GetComponent<PlaneSetup>().posLerp.y = g.GetComponent<Transform>().position.y;

              g.transform.position = g.GetComponent<PlaneSetup>().posLerp;
          }
          Invoke("spin", 0.0f);
      }

    The full code is on github. There is literally nothing else going on. I've turned off all other game objects so it's only the 40 planes with a Texture2D shader. I removed it from Invoke and tried it in Update -- still happens. With a set frame rate or not, the same problem occurs. Tested it in FixedUpdate. Same issue. The script on the individual plane doesn't even have an update function in it. The data on it could functionally live in a struct. I'm getting between 90 and 123 fps. Going to investigate and test further. I put this in an Invoke function to see if I could get around it just occurring in Update. There are no physics on these shapes. It's a straight procedural animation. Limited it to 1 plane - still happens. Thoughts? Removed the shader - still happening.
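    For comparison, a common pattern for slow procedural motion is to accumulate the angle yourself with Time.deltaTime in Update() and cache the component lookups; the sketch below is one possible experiment, not a confirmed fix (field names such as radiusX, radius and startAngle mirror the question's code):

      // Drive the orbit from an accumulated angle instead of Time.time, and
      // fetch PlaneSetup once per plane per frame. Assumes the same fields
      // (planes, numPlanes, speed) as the question's script.
      private float angle;

      void Update()
      {
          angle += speed * Time.deltaTime;  // frame-rate independent increment
          for (int i = 0; i < numPlanes; i++)
          {
              GameObject g = planes[i] as GameObject;
              PlaneSetup setup = g.GetComponent<PlaneSetup>();  // cached lookup

              Vector3 p = g.transform.position;
              p.x = setup.radiusX * Mathf.Cos(angle + setup.startAngle);
              p.z = setup.radius * Mathf.Sin(angle + setup.startAngle) + 20f;  // same z offset
              g.transform.position = p;
          }
      }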

  • Subterranean IL: Custom modifiers

    - by Simon Cooper
    In IL, volatile is an instruction prefix used to set a memory barrier at that instruction. However, in C#, volatile is applied to a field to indicate that all accesses on that field should be prefixed with volatile. As I mentioned in my previous post, this means that the field definition needs to store this information somehow, as such a field could be accessed from another assembly. However, IL does not have a concept of a 'volatile field'. How is this information stored?

    Attributes

    The standard way of solving this is to apply a VolatileAttribute or similar to the field; this extra metadata notifies the C# compiler that all loads and stores to that field should use the volatile prefix. However, there is a problem with this approach, namely, the .NET C++ compiler. C++ allows methods to be overloaded using properties, like volatile or const, on the parameters; this is perfectly legal C++:

      public ref class VolatileMethods {
          void Method(int *i) {}
          void Method(volatile int *i) {}
      }

    If volatile was specified using a custom attribute, then the VolatileMethods class wouldn't be compilable to IL, as there is nothing to differentiate the two methods from each other. This is where custom modifiers come in.

    Custom modifiers

    Custom modifiers are similar to custom attributes, but instead of being applied to an IL element separately to its declaration, they are embedded within the field or parameter's type signature itself. The VolatileMethods class would be compiled to the following IL:

      .class public VolatileMethods {
          .method public instance void Method(int32* i) {}
          .method public instance void Method(
              int32 modreq([mscorlib]System.Runtime.CompilerServices.IsVolatile)* i) {}
      }

    The modreq([mscorlib]System.Runtime.CompilerServices.IsVolatile) is the custom modifier. This adds a TypeDef or TypeRef token to the signature of the field or parameter, and even though they are mostly ignored by the CLR when it's executing the program, this allows methods and fields to be overloaded in ways that wouldn't be allowed using attributes. Because the modifiers are part of the signature, they need to be fully specified when calling such a method in IL:

      call instance void Method(
          int32 modreq([mscorlib]System.Runtime.CompilerServices.IsVolatile)*)

    There are two ways of applying modifiers; modreq specifies required modifiers (like IsVolatile), and modopt specifies optional modifiers that can be ignored by compilers (like IsLong or IsConst). The types specified as the modifier argument are simple placeholders; if you have a look at the definitions of IsVolatile and IsLong they are completely empty. They exist solely to be referenced by a modifier. Custom modifiers are used extensively by the C++ compiler to specify concepts that aren't expressible in IL, but still need to be taken into account when calling method overloads.

    C++ and C#

    That's all very well and good, but how does this affect C#? Well, the C++ compiler uses modreq(IsVolatile) to specify volatility on both method parameters and fields, as it would be slightly odd to have the same concept represented using a modifier or an attribute depending on what it was applied to. Once you've compiled your C++ project, it can then be referenced and used from C#, so the C# compiler has to recognise the modreq(IsVolatile) custom modifier applied to fields, and vice versa.

    So, even though you can't overload fields or parameters with volatile using C#, volatile needs to be expressed using a custom modifier rather than an attribute to guarantee correct interoperability and behaviour with any C++ dlls that happen to come along.

    Next up: a closer look at attributes, and how certain attributes compile in unexpected ways.
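    To make the C# side concrete, here is a one-line illustration of my own (with a comment showing roughly the field signature the C# compiler emits for it):

      // C# declaration
      public volatile int counter;

      // compiles to a field whose signature carries the custom modifier:
      // .field public int32 modreq([mscorlib]System.Runtime.CompilerServices.IsVolatile) counter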

  • What Counts For a DBA: Imagination

    - by drsql
    "Imagination…One little spark, of inspiration… is at the heart, of all creation." – From the song "One Little Spark", by the Sherman Brothers I have a confession to make. Despite my great enthusiasm for databases and programming, it occurs to me that every database system I've ever worked on has been, in terms of its inputs and outputs, downright dull. Most have been glorified e-spreadsheets, many replacing manual systems built on actual spreadsheets. I've created a lot of database-driven software whose main job was to "count stuff"; phone calls, web visitors, payments, donations, pieces of equipment and so on. Sometimes, instead of counting stuff, the database recorded values from other stuff, such as data from sensors or networking devices. Yee hah! So how do we, as DBAs, maintain high standards and high spirits when we realize that so much of our work would fail to raise the pulse of even the most easily excitable soul? The answer lies in our imagination. To understand what I mean by this, consider a role that, in terms of its output, offers an extreme counterpoint to that of the DBA: the Disney Imagineer. Their job is to design Disney's Theme Parks, of which I'm a huge fan. To me this has always seemed like a fascinating and exciting job. What must an Imagineer do, every day, to inspire the feats of creativity that are so clearly evident in those spectacular rides and shows? Here, if ever there was one, is a role where "dull moments" must be rare indeed, surely? I wanted to find out, and so parted with a considerable sum of money for my wife and I to have lunch with one; I reasoned that if I found one small way to apply their secrets to my own career, it would be money well spent. Early in the conversation with our Imagineer (Cindy Cote), the job did indeed sound magical. However, as talk turned to management meetings, budget-wrangling and insane deadlines, I came to the strange realization that, in fact, her job was a lot more like mine than I would ever have guessed. Much like databases, all those spectacular Disney rides bring with them a vast array of complex plumbing, lighting, safety features, and all manner of other "boring bits", kept well out of sight of the end user, but vital for creating the desired experience; and, of course, it is these "boring bits" that take up much of the Imagineer's time. Naturally, there is still a vital part of their job that is spent testing out new ideas, putting themselves in the place of a park visitor, from a 9-year-old boy to a 90-year-old grandmother, and trying to imagine what experiences they'd like to have. It is these small, but vital, sparks of imagination and creativity that have the biggest impact. The real feat of a successful Imagineer is clearly to never to lose sight of this fact, in among all the rote tasks. It is the same for a DBA. Not matter how seemingly dull is the task at hand, try to put yourself in the shoes of the end user, and imagine how your input will affect the experience he or she will have with the database you're building, and how that may affect the world beyond the bits stored in your database. Then, despite the inevitable rush to be "done", find time to go the extra mile and hone the design so that it delivers something as close to that imagined experience as you can get. OK, our output still can't and won't reach the same spectacular heights as the "Journey into The Imagination" ride at EPCOT Theme Park in Orlando, where I first heard "One Little Spark". 
However, our imaginative sparks and efforts can, and will, make a difference to the user who now feels slightly more at home with a database application, or to the manager holding a report presented with enough clarity to drive an interesting decision or two. They are small victories, but worth having, and appreciated, or at least that's how I imagine it.

    Read the article

  • How to create array with unique sprites? in cocos2d iphone

    - by prakash s
I wrote the code below. It displays only one sprite (a red bubble) over and over, moving down the screen, but I actually want to display a different sprite (a different coloured bubble) each time. I have added a number of .png images (different coloured bubbles) to my project's resource folder; here I use only 3.png, but I need to display all of the *.png images, and I don't know how to do this. Please help me. Thank you. Here is the code:

-(void)addTarget {
    CCSprite *target = [CCSprite spriteWithFile:@"3.png" rect:CGRectMake(0, 0, 256, 256)];
    CGSize winSize = [[CCDirector sharedDirector] winSize];
    int minY = target.contentSize.height / 2;
    int maxY = winSize.height - target.contentSize.height / 2;
    int rangeY = maxY - minY;
    int actualY = (arc4random() % rangeY) + minY;

    // Create the target slightly off-screen along the right edge,
    // and at a random position along the Y axis as calculated above
    target.position = ccp(winSize.width + (target.contentSize.width / 2), actualY);
    [self addChild:target];

    // Determine speed of the target
    int minDuration = 4;
    int maxDuration = 12;
    int rangeDuration = maxDuration - minDuration;
    int actualDuration = (arc4random() % rangeDuration) + minDuration;

    // Create the actions
    id actionMove = [CCMoveTo actionWithDuration:actualDuration
                                        position:ccp(-target.contentSize.width / 2, actualY)];
    id actionMoveDone = [CCCallFuncN actionWithTarget:self
                                             selector:@selector(spriteMoveFinished:)];
    [target runAction:[CCSequence actions:actionMove, actionMoveDone, nil]];

    // Add to targets array
    target.tag = 2;
    [_targets addObject:target];
}

-(void)gameLogic:(ccTime)dt {
    [self addTarget];
}

-(id)init {
    if ((self = [super initWithColor:ccc4(255, 255, 255, 255)])) {
        // Enable touch events
        self.isTouchEnabled = YES;
        // Initialize arrays
        _targets = [[NSMutableArray alloc] init];
        _projectiles = [[NSMutableArray alloc] init];
        // Get the dimensions of the window for calculation purposes
        CGSize winSize = [[CCDirector sharedDirector] winSize];
        [self schedule:@selector(gameLogic:) interval:1.0];
        [self schedule:@selector(update:)];
    }
    return self;
}

- (void)update:(ccTime)dt {
    NSMutableArray *projectilesToDelete = [[NSMutableArray alloc] init];
    for (CCSprite *projectile in _projectiles) {
        CGRect projectileRect = CGRectMake(projectile.position.x - (projectile.contentSize.width / 2),
                                           projectile.position.y - (projectile.contentSize.height / 2),
                                           projectile.contentSize.width,
                                           projectile.contentSize.height);
        NSMutableArray *targetsToDelete = [[NSMutableArray alloc] init];
        for (CCSprite *target in _targets) {
            CGRect targetRect = CGRectMake(target.position.x - (target.contentSize.width / 2),
                                           target.position.y - (target.contentSize.height / 2),
                                           target.contentSize.width,
                                           target.contentSize.height);
            if (CGRectIntersectsRect(projectileRect, targetRect)) {
                [targetsToDelete addObject:target];
            }
        }
        for (CCSprite *target in targetsToDelete) {
            [_targets removeObject:target];
            [self removeChild:target cleanup:YES];
            _projectilesDestroyed++;
            if (_projectilesDestroyed > 30) {
                //GameOverScene *gameOverScene = [GameOverScene node];
                //[gameOverScene.layer.label setString:@"You Win!"];
                //[[CCDirector sharedDirector] replaceScene:gameOverScene];
            }
        }
        if (targetsToDelete.count > 0) {
            [projectilesToDelete addObject:projectile];
        }
        [targetsToDelete release];
    }
    for (CCSprite *projectile in projectilesToDelete) {
        [_projectiles removeObject:projectile];
        [self removeChild:projectile cleanup:YES];
    }
    [projectilesToDelete release];
}
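What I imagine I need is something like the following (an untested sketch: it assumes my bubble images are named 1.png, 2.png, … up to kBubbleImageCount.png, and kBubbleImageCount is a placeholder for however many images I actually have):

// Untested sketch: pick a random bubble image each time addTarget runs.
// Assumes the resource folder contains images named 1.png ... kBubbleImageCount.png.
static const int kBubbleImageCount = 5; // assumed total number of bubble images
int imageIndex = (arc4random() % kBubbleImageCount) + 1;
NSString *fileName = [NSString stringWithFormat:@"%d.png", imageIndex];
CCSprite *target = [CCSprite spriteWithFile:fileName];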

    Read the article

  • Normalisation and 'Anima notitia copia' (Soul of the Database)

    - by Phil Factor
(A Guest Editorial for Simple-Talk) The other day, I was staring at the sys.syslanguages table in SQL Server with slightly-raised eyebrows. I'd just been reading Chris Date's interesting book 'SQL and Relational Theory'. He'd made the point that you're not necessarily doing relational database operations by using a SQL Database product. The same general point was recently made by Dino Esposito about ASP.NET MVC. The use of ASP.NET MVC doesn't guarantee you a good application design: it merely makes it possible to test it. The way I'd describe the sentiment in both cases is 'you can hit someone over the head with a frying-pan but you can't call it cooking'. SQL enables you to create relational databases. However, even if it smells bad, it is no crime to do hideously un-relational things with a SQL database, just so long as it's necessary, you can tell the difference, and you're aware of the risks and implications. Naturally, I've never knowingly created a database that Codd would have frowned at, but around the edges are interfaces and data feeds I've written that have caused hissy fits amongst the Normalisation fundamentalists. Part of the problem for those who agonise about such things is the misinterpretation of Atomicity. An atomic value is one for which, in the strange virtual universe you are creating in your database, you don't have any interest in any of its component parts. If you aren't interested in the electrons, neutrinos, muons, or taus, then an atom is ..er.. atomic. In the same way, if you are passed a JSON string or XML, and required to store it in a database, then all you need to do is to ask yourself, in your role as Anima notitia copia (Soul of the Database), 'have I any interest in the contents of this item of information?'. If the answer is 'No!', or 'nequequam!' (by no means!), then it is an atomic value, however complex it may be. After all, you would never have the urge to store the pixels of images individually, under the misguided idea that these are the atomic values, would you? I would, of course, ask the 'Anima notitia copia' rather than the application developers, since there may be more than one application, and the application developers may be designing the application in the absence of full domain knowledge ('or by the seat of the pants', as the technical term used to be). If, on the other hand, the answer is 'sure, and we want to index the XML column', then we may be in for some heavy XML-shredding sessions to get to store the 'atomic' values and ensure future harmony as the application develops. I went back to looking at the sys.syslanguages table. It has a months column with the months in a delimited list: January,February,March,April,May,June,July,August,September,October,November,December This is an ordered list. Wicked? I seem to remember that this value, like shortmonths and days, is treated as a 'thing'. It is merely passed off to an external C++ routine in order to format a date in a particular language, and never accessed directly within the database. As far as the database is concerned, it is an atomic value. There is more to normalisation than meets the eye.
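You can check this on any SQL Server instance; the whole list comes back as a single value:

SELECT name, months
FROM sys.syslanguages
WHERE name = 'us_english';
-- 'months' is returned as one comma-delimited string of all twelve names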

    Read the article

  • What is the right way to process inconsistent data files?

    - by Tahabi
I'm working at a company that uses Excel files to store product data, specifically, test results from products before they are shipped out. There are a few thousand spreadsheets with anywhere from 50-100 relevant data points per file. Over the years, the schema for the spreadsheets has changed significantly, but not unidirectionally - in the sense that changes often get reverted and then re-added in the space of a few dozen to a few hundred files. My project is to convert about 8000 of these spreadsheets into a database that can be queried. I'm using MongoDB to deal with the inconsistency in the data, and Python. My question is, what is the "right" or canonical way to deal with the huge variance in my source files? I've written a data structure which stores the data I want for the latest template, which will be the final template used going forward, but that only helps for a few hundred files historically. Brute-forcing a solution would mean writing similar data structures for each version/template - which means potentially writing hundreds of schemas with dozens of fields each. This seems very inefficient, especially when sometimes a change in the template is as little as moving a single line of data one row down or splitting what used to be one data field into two data fields. A slightly more elegant solution I have in mind would be writing schemas for all the variants I can find for pre-defined groups in the source files, and then writing a function to match a particular series of files with a series of variants that matches that set of files. This is because, more often than not, most of the file will remain consistent over a long period, marred only by one or two errant sections; but within that period, which section is the inconsistent one varies. For example, say a file has four sections with three data fields, which is represented by four Python dictionaries with three keys each. For files 7000-7250, sections 1-3 will be consistent, but section 4 will be shifted one row down. For files 7251-7500, 1-3 are consistent, section 4 is one row down, but a section five appears. For files 7501-7635, sections 1 and 3 will be consistent, but section 2 will have five data fields instead of three, section five disappears, and section 4 is still shifted down one row. For files 7636-7800, section 1 is consistent, section 4 gets shifted back up, section 2 returns to three cells, but section 3 is removed entirely. Files 7800-8000 have everything in order. The proposed function would take the file number and match it to a dictionary representing the data mappings for different variants of each section. For example, a section_four_variants dictionary might have two members, one for the shifted-down version, and one for the normal version, a section_two_variants might have three and five field members, etc. The script would then read the matchings, load the correct mapping, extract the data, and insert it into the database. Is this an accepted/right way to go about solving this problem? Should I structure things differently? I don't know what to search Google for either to see what other solutions might be, though I believe the problem lies in the domain of ETL processing. I also have no formal CS training aside from what I've taught myself over the years. If this is not the right forum for this question, please tell me where to move it, if at all. Any help is most appreciated. Thank you.
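To make that concrete, here's a minimal Python sketch of the kind of mapping and matching function I have in mind (all the names and ranges are illustrative, not my real schemas):

# Minimal sketch of the variant-matching idea; names and ranges are illustrative.
section_four_variants = {
    "normal":  {"fields": ["a", "b", "c"], "row_offset": 0},
    "shifted": {"fields": ["a", "b", "c"], "row_offset": 1},
}

# Map file-number ranges to the section variants they use.
VARIANT_MAP = [
    (range(7000, 7251), {"section_4": "shifted"}),
    (range(7251, 7501), {"section_4": "shifted", "section_5": "present"}),
    (range(7501, 7636), {"section_2": "five_fields", "section_4": "shifted"}),
]

def variants_for(file_number):
    """Return the section-variant overrides that apply to a given file."""
    for file_range, overrides in VARIANT_MAP:
        if file_number in file_range:
            return overrides
    return {}  # no overrides: the file follows the latest template

print(variants_for(7300))  # {'section_4': 'shifted', 'section_5': 'present'}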

    Read the article

  • C#: A "Dumbed-Down" C++?

    - by James Michael Hare
I was spending a lovely day this last weekend watching my sons play outside in one of the better weekends we've had here in Saint Louis for quite some time, and whilst watching them and making sure no limbs were broken or eyes poked out with sticks and other various potential injuries, I was perusing (in the correct sense of the word) this month's MSDN magazine to get a sense of the latest VS2010 features in both the IDE and in languages. When I got to the back pages, I saw a wonderful article by David S. Platt entitled "In Praise of Dumbing Down" (msdn.microsoft.com/en-us/magazine/ee336129.aspx). The title captivated me and I read it and found myself agreeing with it completely, especially as it related to my first post on divorcing C++ as my favorite language. Unfortunately, as Mr. Platt mentions, the term dumbing-down has negative connotations, but it is really and truly a good thing. You are, in essence, taking something that is extremely complex and reducing it to something that is much easier to use and far less error prone. Adding safeties to power tools and anti-kick mechanisms to chainsaws are in some sense "dumbing them down" for the common user -- but that also makes them safer and more accessible for the common user. This was exactly my point with C++ and C#. I did not mean to imply that C++ was not a useful or good language, but that, in a very high percentage of cases, it is too complex and error prone for the job at hand. Choosing the correct programming language for a job is a lot like choosing any other tool for a task. For example: if I want to dig a French drain in my lawn, I can attempt to use a huge tractor-like backhoe and the job would be done far quicker than if I were to dig it by hand. I can't deny that the backhoe has the raw power and speed to perform. But you also cannot deny that my chances of injury, or of severing utility lines or other resources, climb at an exponential rate inverse to the amount of training I may have on that machinery. Is C++ a powerful tool? Oh yes, and it's great for those tasks where speed and performance are paramount. But for most of us, it's the wrong tool. And keep in mind, I say this even though I have 17 years of experience in using it and feel myself highly adept in utilizing its features, both in the standard libraries, the STL, and in supplemental libraries such as Boost -- which, although they greatly help with adding powerful features quickly, do very little to curb the relative dangers of the language. So, you may say, the fault is in the developer: if the developer had higher skills, or if we only hired C++ experts, this would not be an issue. Now, I will concede there is some truth to this. Obviously, the more highly skilled the C++ developers you hire, the better the chance they will produce highly performant and error-free code. However, what good is that to the average developer who cannot afford a full stable of C++ experts? That's my point with C#: it's like a kinder, gentler C++. It gives you nearly the same speed, and in many ways even more power than C++, and it gives you a much softer cushion for novices to fall against if they code less-than-optimally. A bug is a bug, of course, in any language, but C# does a good job of hiding and taking on the task of handling almost all of the resource issues that make C++ so tricky. For my money, C# is much more maintainable, more feature-rich, only slightly behind in performance, faster to market, and -- last but not least -- safer and easier to use.
That's why, where I work, I much prefer to see the developers moving to C#.  The quantity of bugs is much lower, and we don't need to hire "experts" to achieve the same results since the language itself handles those resource pitfalls so prevalent in poorly written C++ code.  C++ will still have its place in the world, and I'm sure I'll still use it now and again where it is truly the correct tool for the job, but for nearly every other project C# is a wonderfully "dumbed-down" version of C++ -- in the very best sense -- and to me, that's the smart choice.
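To illustrate the kind of cushion I mean, consider how little ceremony deterministic cleanup takes in C# (a trivial sketch; the file name is just for illustration):

using System;
using System.IO;

class Example
{
    static void Main()
    {
        // Even if ReadToEnd throws, the using block guarantees the file
        // handle is released; equivalent C++ demands RAII discipline or
        // manual try/catch cleanup to be as safe.
        using (var reader = new StreamReader("results.txt"))
        {
            Console.WriteLine(reader.ReadToEnd().Length);
        }
    }
}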

    Read the article

  • MaxTotalSizeInBytes - Blind spots in Usage file and Web Analytics Reports

    - by Gino Abraham
Originally posted on: http://geekswithblogs.net/GinoAbraham/archive/2013/10/28/maxtotalsizeinbytes---blind-spots-in-usage-file-and-web-analytics.aspx http://blogs.msdn.com/b/sharepoint_strategery/archive/2012/04/16/usage-file-and-web-analytics-reports-with-blind-spots.aspx In my previous post (Troubleshooting SharePoint 2010 Web Analytics), I referenced a problem that can occur when exceeding the daily partition size for the LoggingDB, which generates the ULS message "[Partition] has exceeded the max bytes". Below, I wanted to provide some additional info on this particular issue and help identify some options if this occurs. As an aside, this post only applies if you are missing portions of Usage data - think blind spots on intermittent days or user activity regularly sparse for the afternoon/evening. If this fits your scenario - read on. But if Usage logs are outright missing, go check out my Troubleshooting post first.

Background on the problem: The LoggingDB database has a default maximum size of ~6GB. However, SharePoint evenly splits this total size into fixed-sized logical partitions – and the number of partitions is defined by the number of days to retain Usage data (by default 14 days). In this case, 14 partitions would be created to account for the 14 days of retention. If the retention were halved to 7 days, the LoggingDB would be split into 7 corresponding partitions at twice the size. In other words, the partition size is generally defined as [max size for DB] / [number of retention days]. Going back to the default scenario, the "max size" for the LoggingDB is 6200000000 bytes (~6GB) and the retention period is 14 days. Using our formula, this would be [~6GB] / [14 days], which equates to 444858368 bytes (~425MB) per partition per day. Again, if the retention were halved to 7 days (which halves the number of partitions), the resulting partition size becomes [~6GB] / [7 days], or ~850MB per partition. From my experience, when the partition size for any given day is exceeded, the usage logging for the remainder of the day is essentially thrown away because SharePoint won't allow any more to be written to that day's partition. The only clue that this is occurring (beyond truncated usage data) is an error such as the following that gets reported in the ULS:

04/08/2012 09:30:04.78  OWSTIMER.EXE (0x1E24)  0x2C98  SharePoint Foundation  Health  i0m6  High  Table RequestUsage_Partition12 has 444858368 bytes that has exceeded the max bytes 444858368

It's also worth noting that the exact bytes reported (e.g. '444858368' above) may vary slightly among farms. For example, you may instead see 445226812, 439123456, or something else in the ballpark. The exact number itself doesn't matter; this error message indicates that the reported usage has exceeded the partition size for the given day.

What it means: The error itself is easy to miss, which can lead to substantial gaps in the reporting data (your mileage may vary) if not identified. At this point, I can only advise to periodically check the ULS logs for this message. Down the road, I plan to explore if [Developing a Custom Health Rule] could be leveraged to identify the issue (if you've ever built Custom Health Rules, I'd be interested to hear about your experiences).
Overcoming this issue also poses a challenge, with workaround options including:

Lower the retention. Because the partition size is generally defined as [max size] / [number of retention days], the first option is to lower the number of days to retain the data – the lower the retention, the lower the divisor and thus a bigger partition. For example, halving the retention from 14 to 7 days would halve the number of partitions, but double the partition size to ~850MB (e.g. [6200000000 bytes] / [7 days] = ~850MB partitions). Lowering it to 2 days would result in two ~3GB partitions… and so on.

Recreate the LoggingDB with an increased size. The property MaxTotalSizeInBytes is exposed by OM code for the SPUsageDefinition object and can be updated with the example PowerShell snippet below. However, updating this value has no immediate impact because this size only applies when creating a LoggingDB. Therefore, you must create a new LoggingDB for the Usage Service Application. The gotcha: this effectively deletes all prior Usage data, because the Usage Service Application can only have a single LoggingDB. Here is an example snippet to update the "Page Requests" Usage Definition:

$def = Get-SPUsageDefinition -Identity "page requests"
$def.MaxTotalSizeInBytes = 12400000000
$def.Update()

Create a new Logging database and attach it to the Usage Service Application using the following command:

Get-SPUsageApplication | Set-SPUsageApplication -DatabaseServer <dbServer> -DatabaseName <newDBname>

Updated (5/10/2012): Once the new database has been created, you can confirm the setting has truly taken by running the following SQL query (be sure to replace the database name in the following query with the name provided in the PowerShell above):

SELECT * FROM [WSS_UsageApplication].[dbo].[Configuration] WITH (NOLOCK)
WHERE ConfigName LIKE 'Max Total Bytes - RequestUsage'
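If you want to sanity-check the partition arithmetic against your own farm's settings, here is a quick PowerShell sketch (the values shown are the defaults discussed above):

# Sanity-check the partition-size arithmetic; values are the defaults above.
$maxTotalBytes = 6200000000   # default LoggingDB max size (~6GB)
$retentionDays = 14           # default Usage data retention
$partitionBytes = [math]::Floor($maxTotalBytes / $retentionDays)
"{0:N0} bytes (~{1:N0} MB) per daily partition" -f $partitionBytes, [math]::Round($partitionBytes / 1MB)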

    Read the article

  • Automated Error Reporting = More Robust Software

    - by Laila
I would like to tell you how to revolutionize your software development process </marketing hyperbole> On a more serious note, we (Red Gate's .NET Development team) recently rolled a new tool into our development process which has made our lives dramatically easier AND improved the quality of our software, and I (& one of our developers, Alex Davies) just wanted to take a quick moment to share the love. I work with a development team that takes pride in what they ship, so we take software testing rather seriously. For every development project we run, we allocate at least one software tester for every two developers, and we never ship software without first shipping early access releases and betas to get user feedback. And therein lies the challenge - encouraging users to provide consistent, useful feedback is a headache, but without that feedback, improving the software is… tricky. Until fairly recently, we used the standard (if long-winded) approach of receiving bug reports of variable quality via email or through our support forums. If that didn't give us enough information to reproduce the problem - which was most of the time - we had to enter into a time-consuming to-and-fro conversation with the end-user, to scrape together the data we needed to work out where the problem lay. As I'm sure you're aware, this is painfully slow. To the delight of the team, we recently got to work with SmartAssembly, which lets us embed automated exception and error reporting into our software with very little pain, and we decided to do a little dogfooding. As a result, we've made a really handy (if perhaps slightly obvious) discovery: as soon as we release a beta, or indeed any release of software, we now get tonnes of customer feedback through automated error reports. Making this process easier for our users has dramatically increased the amount (and quality) of feedback we get. From their point of view, they get an experience similar to Microsoft's error reporting, and the process is essentially idiot-proof. From our side of things, we can now react much faster to the information we get, fixing the bugs and shipping a new-and-improved release, which our users rather appreciate. Smiles and hugs all round. Even more so because, as we use SmartAssembly's Automated Error Reporting, we get to avoid having to spend weeks building an exception reporting mechanism. It takes just a few minutes to add reporting to a project, and we get a bunch of useful information back, like a stack trace and the values of all the local variables, which we can use to fix bugs. Happily, "Automated Error Reporting = More Robust Software" can actually be read two ways: we've found that we not only ship higher quality software, but we also release within a shorter time. We can ship stable software that our users are happy to upgrade to, and we then bask in the glory of lots of positive customer feedback. Once we'd started working with SmartAssembly, we were curious to know how widespread error reporting was as a practice. Our product manager ran a survey in autumn last year, and found that 40% of software developers never really considered deploying error reporting. Considering how we've now got plenty of experience on the subject, one of our dev guys, Alex Davies, thought we should share what we've learnt, and he's kindly offered to host a webinar on delivering robust software with Automated Error Reporting.
Drawing on our own in-house development experiences, he'll cover how to add error reporting to your program, how to actually use the error reports to fix bugs (don't snigger, not everyone's as bright as you), how to customize the error report dialog that your users see, and how to automatically get log files from your users' machine. The webinar will take place on Jan 25th (that's next week). It's free to attend, but you'll still need to register to hear Alex's dulcet tones.
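For the curious, the bare-bones shape of a catch-all hook in .NET looks something like the sketch below. It uses only the standard AppDomain event, not SmartAssembly's API - the tool packages up far more (the stack trace, local variable values, a report dialog) without you writing any of this:

using System;

static class CrashReporter
{
    static void Main()
    {
        // Catch any exception that would otherwise terminate the process
        // silently, and hand it to some reporting routine.
        AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
        {
            var ex = (Exception)e.ExceptionObject;
            Console.Error.WriteLine("Unhandled exception: " + ex);
        };
        throw new InvalidOperationException("demo crash");
    }
}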

    Read the article

  • Tyrus 1.8

    - by Pavel Bucek
Another version of Tyrus, the reference implementation of JSR 356 – Java API for WebSocket – is out! The complete list of fixes and features is below, but let me describe some of the new features in more detail. All information presented here is also available in the Tyrus documentation. What's new? First to mention: the JSR 356 Maintenance Review Ballot is over and the change proposed for the 1.1 release was accepted. More details about the changes in the API can be found in this article. The important part is that Tyrus 1.8 implements this API, meaning you can use Lambda expressions and some features of Nashorn without the need for any workarounds. Almost all other features are related to client-side support, which was significantly improved in this release. Firstly – I have to admit that the Tyrus client contained a security issue: SSL hostname verification was not performed when connecting to "wss" endpoints. This was fixed as part of TYRUS-339 and resulted in some changes in the client configuration API. Now you can control whether hostname verification should be performed (SslEngineConfigurator#setHostnameVerificationEnabled(boolean)) or even set your own HostnameVerifier (please use carefully): #setHostnameVerifier(…). A detailed description can be found in the Host verification chapter. Another related enhancement is support for HTTP Basic and Digest authentication schemes. The Tyrus client now enables users to provide credentials, and the underlying implementation will take care of everything else. Our implementation is strictly non-pre-emptive, so the login information is always sent as a response to a 401 HTTP status code. If Basic and Digest are not good enough and there is a need to use some custom scheme, or something which is not yet supported in Tyrus, a custom Authenticator can be registered and the authentication part of the handshake process will be handled by it. Please see the Client HTTP Authentication chapter in the user guide for more details. There are other features, like fine-grained thread-pool configuration for the JDK client container, built-in HTTP redirect support, and some reshuffling related to unifying the location of client configuration classes and property definitions – every property should now be part of the ClientProperties class. All new features are described in the user guide – in the chapter Tyrus proprietary configuration. Update – Tyrus 1.8.1: There was another, slightly late-reported issue related to running in environments with SecurityManager enabled, so this version fixes that. Other noteworthy fixes are TYRUS-355 and TYRUS-361; the first one is about an incorrect thread factory used for the shared container timeout, which resulted in the JVM waiting for that thread and not exiting as it should. The other issue enables relative URIs in the Location header when using the redirect feature.
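For instance, wiring up the new hostname-verification switch on the client looks roughly like this (a sketch based on the classes named above; check the Tyrus documentation for the exact API in your version):

import org.glassfish.tyrus.client.ClientManager;
import org.glassfish.tyrus.client.ClientProperties;
import org.glassfish.tyrus.client.SslContextConfigurator;
import org.glassfish.tyrus.client.SslEngineConfigurator;

// Rough sketch: configure SSL for a wss:// endpoint with the Tyrus client.
SslContextConfigurator sslContext = new SslContextConfigurator();
sslContext.retrieve(System.getProperties());
SslEngineConfigurator ssl = new SslEngineConfigurator(sslContext, true, false, false);
ssl.setHostnameVerificationEnabled(true); // the new switch; true is the safe default

ClientManager client = ClientManager.createClient();
client.getProperties().put(ClientProperties.SSL_ENGINE_CONFIGURATOR, ssl);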
Links: Tyrus homepage · mailing list · JIRA

Complete list of changes:

Bug
[TYRUS-333] – Multiple endpoints on one client
[TYRUS-334] – When connection is closed by a peer, periodic heartbeat pong is not stopped
[TYRUS-336] – ReaderBuffer.getNextChars() keeps blocking a server thread after client has closed the session
[TYRUS-338] – JDK client SSL filter needs better synchronization during handshake phase
[TYRUS-339] – SSL hostname verification is missing
[TYRUS-340] – Test PathParamTest are not stable with JDK client
[TYRUS-341] – A control frame inside a stream of continuation frames is treated as the part of the stream
[TYRUS-343] – ControlFrameInDataStreamTest does not pass on GF
[TYRUS-345] – NPE is thrown, when shared container timeout property in JDK client is not set
[TYRUS-346] – IllegalStateException is thrown, when using proxy in JDK client
[TYRUS-347] – Introduce better synchronization in JDK client thread pool
[TYRUS-348] – When a client and server close connection simultaneously, JDK client throws NPE
[TYRUS-356] – Tyrus cannot determine the connection port for a wss URL
[TYRUS-357] – Exception thrown in MessageHandler#OnMessage is not caught in @OnError method
[TYRUS-359] – Client based on Java 7 Asynchronous IO makes application unexitable

Improvement
[TYRUS-328] – JDK 1.7 AIO Client container – threads – (setting threadpool, limits, …)
[TYRUS-332] – Consolidate shared client properties into one file.
[TYRUS-337] – Create an SSL version of Basic Servlet test

New Feature
[TYRUS-228] – Add client support for HTTP Basic/Digest

Task
[TYRUS-330] – create/run tests/servlet/basic via wss
[TYRUS-335] – [clustering] – introduce RemoteSession and expose them via separate method (not include remote sessions in the getOpenSessions())
[TYRUS-344] – Introduce Client support for HTTP Redirect

    Read the article

  • Shadows shimmer when camera moves

    - by Chad Layton
I've implemented shadow maps in my simple block engine as an exercise. I'm using one directional light and using the view volume to create the shadow matrices. I'm experiencing some problems with the shadows shimmering when the camera moves, and I'd like to know if it's an issue with my implementation or just an issue with basic/naive shadow mapping itself. Here's a video: http://www.youtube.com/watch?v=vyprATt5BBg&feature=youtu.be Here's the code I use to create the shadow matrices. The commented-out code is my original attempt to perfectly fit the view frustum. You can also see my attempt to clamp movement to texels in the shadow map, which didn't seem to make any difference. Then I tried using a bounding sphere instead, also to no apparent effect.

public void CreateViewProjectionTransformsToFit(Camera camera, out Matrix viewTransform, out Matrix projectionTransform, out Vector3 position)
{
    BoundingSphere cameraViewFrustumBoundingSphere = BoundingSphere.CreateFromFrustum(camera.ViewFrustum);
    float lightNearPlaneDistance = 1.0f;
    Vector3 lookAt = cameraViewFrustumBoundingSphere.Center;
    float distanceFromLookAt = cameraViewFrustumBoundingSphere.Radius + lightNearPlaneDistance;
    Vector3 directionFromLookAt = -Direction * distanceFromLookAt;
    position = lookAt + directionFromLookAt;
    viewTransform = Matrix.CreateLookAt(position, lookAt, Vector3.Up);
    float lightFarPlaneDistance = distanceFromLookAt + cameraViewFrustumBoundingSphere.Radius;
    float diameter = cameraViewFrustumBoundingSphere.Radius * 2.0f;
    Matrix.CreateOrthographic(diameter, diameter, lightNearPlaneDistance, lightFarPlaneDistance, out projectionTransform);

    //Vector3 cameraViewFrustumCentroid = camera.ViewFrustum.GetCentroid();
    //position = cameraViewFrustumCentroid - (Direction * (camera.FarPlaneDistance - camera.NearPlaneDistance));
    //viewTransform = Matrix.CreateLookAt(position, cameraViewFrustumCentroid, Up);
    //Vector3[] cameraViewFrustumCornersWS = camera.ViewFrustum.GetCorners();
    //Vector3[] cameraViewFrustumCornersLS = new Vector3[8];
    //Vector3.Transform(cameraViewFrustumCornersWS, ref viewTransform, cameraViewFrustumCornersLS);
    //Vector3 min = cameraViewFrustumCornersLS[0];
    //Vector3 max = cameraViewFrustumCornersLS[0];
    //for (int i = 1; i < 8; i++)
    //{
    //    min = Vector3.Min(min, cameraViewFrustumCornersLS[i]);
    //    max = Vector3.Max(max, cameraViewFrustumCornersLS[i]);
    //}
    //// Clamp to nearest texel
    //float texelSize = 1.0f / Renderer.ShadowMapSize;
    //min.X -= min.X % texelSize;
    //min.Y -= min.Y % texelSize;
    //min.Z -= min.Z % texelSize;
    //max.X -= max.X % texelSize;
    //max.Y -= max.Y % texelSize;
    //max.Z -= max.Z % texelSize;
    //// We just use an orthographic projection matrix. The sun is so far away that its rays are essentially parallel.
    //Matrix.CreateOrthographicOffCenter(min.X, max.X, min.Y, max.Y, -max.Z, -min.Z, out projectionTransform);
}

And here's the relevant part of the shader:

if (CastShadows)
{
    float4 positionLightCS = mul(float4(position, 1.0f), LightViewProj);
    float2 texCoord = clipSpaceToScreen(positionLightCS) + 0.5f / ShadowMapSize;
    float shadowMapDepth = tex2D(ShadowMapSampler, texCoord).r;
    float distanceToLight = length(LightPosition - position);
    float bias = 0.2f;
    if (shadowMapDepth < (distanceToLight - bias))
    {
        return float4(0.0f, 0.0f, 0.0f, 0.0f);
    }
}

The shimmer is slightly better if I drastically reduce the view volume, but I think that's mostly just because the texels become smaller and it's harder to notice them flickering back and forth.
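For reference, the texel-snapping variant I've been experimenting with looks roughly like this (untested sketch; it rounds the light-space look-at point to whole shadow-map texels so the projection can't slide by sub-texel amounts between frames):

// Untested sketch: snap the light-space look-at point to texel-sized
// world-unit increments before rebuilding the view transform. Assumes the
// square orthographic volume of width `diameter` maps onto a shadow map
// of Renderer.ShadowMapSize texels.
float worldUnitsPerTexel = diameter / Renderer.ShadowMapSize;
Vector3 lookAtLS = Vector3.Transform(lookAt, viewTransform);
lookAtLS.X -= lookAtLS.X % worldUnitsPerTexel;
lookAtLS.Y -= lookAtLS.Y % worldUnitsPerTexel;
lookAt = Vector3.Transform(lookAtLS, Matrix.Invert(viewTransform));
viewTransform = Matrix.CreateLookAt(lookAt + directionFromLookAt, lookAt, Vector3.Up);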
I'd appreciate any insight; I'd very much like to understand what's going on before I try other techniques.

    Read the article

< Previous Page | 61 62 63 64 65 66 67 68 69 70 71 72  | Next Page >