Search Results

Search found 1433 results on 58 pages for 'the crazy chimp'.

Page 35/58 | < Previous Page | 31 32 33 34 35 36 37 38 39 40 41 42  | Next Page >

  • New SQL Code Deployment Book and Damn I Need to Blog More

    - by Rodney
    Select datediff(d, '02/19/2009', getdate())

    The value returned from the above SELECT statement is 398, and that is the number of days since my last blog post. As I was formulating my apology for this hiatus from blogging, it dawned on me that I also do not twitter (sorry, tweet), and as apologies beget apologies, I then realized that instead of catching up on the backlog of blogs, I should write a book about what I have been most focused on in the past year, one month and 3 days. That focus is my day job, which, of course, most of us have. And that day job we share is why most of us read blogs, tweets, articles and even books in the first place. So my focus for the past year has been SQL code deployments and all that entails. I am fortunate that Redgate has agreed to entertain my crazy notion of writing an entire book about this subject, which I have tentatively titled, "The Sound and the Fury". Wait... that is not right. Oh yes, a title more befitting a technical tome but with as much profundity: "Standardizing SQL Server Code Deployments - A Redgate Guide". The great American novel must wait a few more years. As I begin this journey, I am inviting you to assist me in the discovery process and even be interviewed and included in the book itself. How do you do deployments in your company? Do you have a documented process or no process? Do you do code review or cross your fingers? Do you work for a small company or a Fortune 100 company? Government regulations or garage? It does not matter to me. I am not here to judge. I have worked for both kinds of companies myself and have seen many things that you can relate to. If you would like to participate and are one of the 3 people still reading this blog after 398 days, please fill out my survey and let's get started: http://www.surveymonkey.com/s/RRG86RH

    Read the article

  • 24 Hours of PASS coming up soon!

    - by Rob Farley
    Massive thanks to all the people that have been shouting about this event already. I’ve seen quite a number of blog posts about it, and rather than listing some and missing others, please assume I’ve noticed your blog and accept my thanks. But in case this is all news to you – the next 24 Hours of PASS event is less than a fortnight away (Sep 20/21)! And there’s lots of info about it at http://www.sqlpass.org/24hours/fall2012/  (Don’t ask why it’s “Fall 2012”. Apparently that’s what this time of year is called in at least two countries. I would call it “Spring”, personally, but do appreciate that it’s “Autumn” in the Northern Hemisphere...) Yes, I blogged about it on the PASS blog a few weeks ago, but haven’t got around to writing about it here yet. As always, 24HOP is going to have some amazing content. But it’s going to be pointing at the larger event, which is now less than two months away. That’s right, this 24HOP is the Summit 2012 Preview event. Most of the precon speakers are going to be represented, as are half-day session presenters, quite a few of the Spotlight presenters and some of the Microsoft speakers too. When you look down the list of sessions at http://www.sqlpass.org/24hours/fall2012/SessionsbySchedule.aspx, you’ll find yourself wondering how you can fit them all in. Luckily, that’s not my problem. For me, it’s just about making sure that you can get to hear these people present, and get a taste for the amazing time that you’ll have if you can come to the Summit. I see this 24HOP as the kind of thing that will just drive you crazy if you can’t get to the Summit. There will be so much great content, and every one of these presenters will be delivering even more than this at the Summit itself. If you tune into Jason Strate’s 24HOP session on the Plan Cache and are impressed – well, you can get to a longer session by him on that same topic at the Summit. And the same goes for all of them. If you’re anything like me, you’ll find yourself looking at the Summit schedule, wishing you could get to several presentations for every time slot. So get yourself registered for 24HOP and help yourself make that decision. And if you can’t go to the Summit, tune in anyway. You’ll still learn a lot, and you might just be able to help persuade someone to send you to the Summit after all (before the price goes up after Sep 30).

    Read the article

  • How do I install Citrix receiver?

    - by krondor
    Has anyone managed to get the Citrix Receiver client working in 64-bit Natty (11.04)? It seems libmotif4 won't install multi-arch (32-bit and 64-bit libraries). I also see crazy dependency errors despite the libraries being present. Here is what I received initially when trying to install icaclient.deb from Citrix:

        sudo dpkg -i Downloads/icaclient_11.100_i386.patched.deb
        dpkg: error processing Downloads/icaclient_11.100_i386.patched.deb (--install):
         package architecture (i386) does not match system (amd64)
        Errors were encountered while processing:
         Downloads/icaclient_11.100_i386.patched.deb

    I then installed the 32-bit libraries:

        sudo apt-get install ia32-libs ia32-libs-gtk

    Then I noticed libmotif4 (a dependency of the Citrix client) wasn't present, so I installed the 64-bit version:

        sudo apt-get install libmotif4

    I then tried to force the 32-bit version:

        sudo dpkg --force-architecture -i Downloads/libmotif4_2.3.3-5_i386.deb
        dpkg: warning: overriding problem because --force enabled:
         package architecture (i386) does not match system (amd64)
        Selecting previously deselected package libmotif4:i386.
        dpkg: error processing Downloads/libmotif4_2.3.3-5_i386.deb (--install):
         libmotif4:i386 2.3.3-5 (Multi-Arch: no) is not co-installable with
         libmotif4:amd64 2.3.3-5ubuntu1 (Multi-Arch: no) which is currently installed

    So I uninstalled the 64-bit version and tried to install the 32-bit version. This worked, but when I attempt to install Citrix I enter dependency hell:

        sudo dpkg --force-architecture -i Downloads/icaclient_11.100_i386.patched.deb
        dpkg: warning: overriding problem because --force enabled:
         package architecture (i386) does not match system (amd64)
        Selecting previously deselected package icaclient:i386.
        (Reading database ... 183036 files and directories currently installed.)
        Unpacking icaclient:i386 (from .../icaclient_11.100_i386.patched.deb) ...
        dpkg: dependency problems prevent configuration of icaclient:i386:
         icaclient:i386 depends on libc6 (>= 2.3).
         icaclient:i386 depends on libice6 (>= 1:1.0.0).
         icaclient:i386 depends on libsm6.
         icaclient:i386 depends on libx11-6.
         icaclient:i386 depends on libxaw7.
         icaclient:i386 depends on libxext6.
         icaclient:i386 depends on libxmu6.
         icaclient:i386 depends on libxp6.
         icaclient:i386 depends on libxpm4.
         icaclient:i386 depends on libxt6.
        dpkg: error processing icaclient:i386 (--install):
         dependency problems - leaving unconfigured
        Errors were encountered while processing:
         icaclient:i386

    So in the state it's in, there are no icaclient binaries installed, only the docs. It is complaining about libraries that are indeed present (libc6, both 64-bit and 32-bit). libmotif4 is installed only as 32-bit and won't install alongside the 64-bit libmotif4; this is the error when you try to install the 64-bit package alongside the 32-bit instance:

        libmotif4:amd64 2.3.3-5 (Multi-Arch: no) is not co-installable with
        libmotif4:i386 2.3.3-5ubuntu1 (Multi-Arch: no) which is currently installed

    Any tips?
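    One workaround that was commonly passed around at the time (a sketch only; the file names follow the question above, and overriding the architecture check like this is entirely at your own risk) is to repack the i386 .deb so dpkg stops objecting to the architecture:

        # unpack the package payload and its control files
        dpkg-deb -x icaclient_11.100_i386.patched.deb ica_tmp
        dpkg-deb -e icaclient_11.100_i386.patched.deb ica_tmp/DEBIAN
        # mark the package architecture-independent so dpkg accepts it on amd64
        sed -i 's/^Architecture: i386/Architecture: all/' ica_tmp/DEBIAN/control
        # rebuild and install the repacked package
        dpkg-deb -b ica_tmp icaclient_11.100_all.deb
        sudo dpkg -i icaclient_11.100_all.deb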

    Read the article

  • XNA Moddable Game - Architecture Design and Reflection

    - by David K
    I've decided to embark on an XNA moddable game project of a simple rogue style. For all purposes of this question, I'm not going to be using a scripting engine, but rather will allow modders to directly compile assemblies that are loaded by the game at run time. I know about the security problems this may raise. So in order to expose the moddable content, I have gone about creating a generic project in XNA called MyModel. This contains a number of interfaces that all inherit from IPlugin, such as IGameSystem, IRenderingSystem, IHud, IInputSystem, etc. Then I've created another project called MyRogueModel. This references the MyModel project, and holds interfaces such as IMonster, IPlayer, IDungeonGenerator, IInventorySystem. More rogue-specific interfaces, but again, all interfaces in this project inherit from IPlugin. Then finally, I've created another project called MyRogueGame, which references both the MyModel and MyRogueModel projects. This project will be the game that you run and play. Here I have put the actual implementations of the Monster, DungeonGenerator, InputSystem and RenderingSystem classes. This project will also scan the mods directory at run time, load any IPlugins it finds using reflection, and override anything it finds from the default. For example, if it finds a new implementation of the DungeonGenerator, it will use that one instead. Now my question is: in order to get this far, I effectively have 2 projects that contain nothing but interfaces... which seems a little... strange? For people to create mods for the game, I would give them both the MyModel and MyRogueModel assemblies to reference. I'm not sure whether this is the right way to do it, but my reasoning goes as follows: if I write 1 input system, I can use it in any game I write. If I create 3 rogue-like games, and a modder writes 1 rendering system, that modder could use the rendering system for all 3 games, because it all comes from the MyModel project. I come from a more web-based C# role, so having empty interface projects doesn't seem wrong, it's just something I haven't done before. Before I embark on something that might be crazy, I'd just like to know whether this is a foolish idea and whether there's a better (or established) design principle I should be following?
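    For what it's worth, the reflection side of this is small; a minimal sketch of the loader described above (the directory name and the lack of error handling are illustrative assumptions, and IPlugin is the interface from MyModel):

        // scan a mods directory and instantiate every concrete IPlugin implementation
        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Linq;
        using System.Reflection;

        public static class ModLoader
        {
            public static List<IPlugin> LoadPlugins(string modDirectory)
            {
                var plugins = new List<IPlugin>();
                foreach (string file in Directory.GetFiles(modDirectory, "*.dll"))
                {
                    Assembly assembly = Assembly.LoadFrom(file);
                    var pluginTypes = assembly.GetTypes().Where(t =>
                        typeof(IPlugin).IsAssignableFrom(t) && !t.IsAbstract && !t.IsInterface);
                    foreach (Type type in pluginTypes)
                        plugins.Add((IPlugin)Activator.CreateInstance(type));
                }
                return plugins;
            }
        }

    Plugins loaded this way would then be matched against the defaults by interface (e.g. an IDungeonGenerator found in a mod assembly replaces the built-in one).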

    Read the article

  • Media server...serving files...............for a limited time only

    - by Craig
    I’m new to Ubuntu and am seeking help with a media server I have built. I have a couple of HTPCs in my house running XBMC. I wanted to build one for the family room doing double duty as an HTPC and a media server to share movies, TV shows, music, etc. on my Windows network. So using some spare old parts I had lying around, I decided to go CRAZY and build my first Linux box. I used Ubuntu because it seemed to be the most user-friendly variant, especially for people that are new to Linux. I had to do a few things to get the media files shared properly on my network:

    1. Made sure my two media drives auto-mount every time I boot the computer by editing the “fstab” file – “sudo nano /etc/fstab”
    2. Installed Samba – “sudo apt-get install samba”
    3. Set a password for Samba – “sudo smbpasswd –a USERNAME”
    4. Edited the Samba configuration file to make sure the computer was in my network's workgroup – “sudo nano /etc/samba/smb.conf”
    5. In the file manager (not sure if that’s the right name for it), I right-clicked my media folders and set the sharing and permissions. The sharing was done without guest access, and permissions were set to: Owner, Group, and Others – Access: Create and Delete Files.
    6. Adjusted the Power Management settings to never put the system into sleep mode.

    I checked to see if I had access to the files from a Windows 7 machine and I did (Woo Hoo!). But when I tried to play any of my video files from the Windows machine (using VLC media player), they would only play for about 2-5 minutes and then they would stop with an error message saying that the file could not be accessed (Booo...). I tried playing some files through XBMC running in Windows and they worked for a bit longer (about 10-15 mins), but they also stopped playing. I installed the Linux version of XBMC on the server and played the files locally with no problems. It doesn’t seem to be an issue with the files themselves; it seems to be a sharing problem on my network. So my questions to the Ubuntu gurus out there are: Did I miss adding/editing something in the Samba configuration file? Did I use the right method to share my media files (file manager vs. using the terminal)? Is it possible for the computer to still go to sleep without the screen going black (does that even make sense?). Are there any special settings in Ubuntu that I should be using since this computer is acting as a media server (is there a media server mode?...!...?). Any help on this matter would be greatly appreciated. Thanks
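    For comparison, a share defined directly in /etc/samba/smb.conf (rather than through the file manager) would look something like this; the share name, path and user below are placeholders, not values from the setup above:

        [media]
            path = /media/movies
            browseable = yes
            read only = no
            guest ok = no
            valid users = USERNAME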

    Read the article

  • Loading levels from .txt or .XML for XNA

    - by Dave Voyles
    I'm attempting to add multiple levels to my pong game. I'd like to simply exchange a few elements with each level, nothing crazy: just the background texture, the color of the AI paddle (the one on the right side), and the music. It seems that the best way to go about this is by utilizing the StreamReader to read and write the files from XML. If there is a better or more efficient alternative, then I'm all for it. In looking over the XNA Starter Platformer Kit provided by MS, it seems that they've done it in this manner as well. I'm perplexed by a few things, however, namely parts within the Level class which aren't commented:

        /// <summary>
        /// Iterates over every tile in the structure file and loads its
        /// appearance and behavior. This method also validates that the
        /// file is well-formed with a player start point, exit, etc.
        /// </summary>
        /// <param name="fileStream">
        /// A stream containing the tile data.
        /// </param>
        private void LoadTiles(Stream fileStream)
        {
            // Load the level and ensure all of the lines are the same length.
            int width;
            List<string> lines = new List<string>();
            using (StreamReader reader = new StreamReader(fileStream))
            {
                string line = reader.ReadLine();
                width = line.Length;
                while (line != null)
                {
                    lines.Add(line);
                    if (line.Length != width)
                        throw new Exception(String.Format(
                            "The length of line {0} is different from all preceeding lines.",
                            lines.Count));
                    line = reader.ReadLine();
                }
            }

    What does width = line.Length mean exactly? I mean, I know how it reads the line, but what difference does it make if one line is longer than any of the others? Finally, their levels are simply text files that look like this:

        ....................
        ....................
        ....................
        ....................
        ....................
        ....................
        ....................
        .........GGG........
        .........###........
        ....................
        ....GGG.......GGG...
        ....###.......###...
        ....................
        .1................X.
        ####################

    It can't be that easy..... Can it?

    Read the article

  • What if I can't make my unit test fail in "Red, Green, Refactor" of TDD?

    - by Joshua Harris
    So let's say that I have a test:

        @Test
        public void MoveY_MoveZero_DoesNotMove() {
            Point p = new Point(50.0, 50.0);
            p.MoveY(0.0);
            Assert.assertEquals(50.0, p.Y, 0.0);
        }

    This test then causes me to create the class Point:

        public class Point {
            double X;
            double Y;

            public Point(double x, double y) {
                X = x;
                Y = y;
            }

            public void MoveY(double yDisplace) {
                throw new UnsupportedOperationException();
            }
        }

    Ok. It fails. Good. Then I remove the exception and I get green. Great, but of course I need to test if it changes value. So I write a test that calls p.MoveY(10.0) and checks if p.Y is equal to 60.0. It fails, so then I change the function to look like so:

        public void MoveY(double yDisplace) {
            Y += yDisplace;
        }

    Great, now I have green again and I can move on. I've tested not moving and moving in the positive direction, so naturally I should test a negative value. The only problem with this test is that if I wrote the test correctly, then it doesn't fail at first. That means that I didn't fit the principle of "Red, Green, Refactor." Of course, this is a first-world problem of TDD, but getting a fail at first is helpful in that it shows that your test can fail. Otherwise this seemingly innocent test that is just passing for incorrect reasons could fail later because it was written wrong. That might not be a problem if it happened 5 minutes later, but what if it happens to the poor sap that inherited your code two years later? What he knows is that MoveY does not work with negative values, because that is what the test is telling him. But it really could work and just be a bug in the test. I don't think that would happen in this particular case because the code sample is so simple, but if it were a large, complicated system that might not be the case. It seems crazy to say that I want to fail my tests, but that is an important step in TDD, for good reasons.
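    For concreteness, the negative-value test being described would look something like the following, and, as written, it goes green on its very first run:

        @Test
        public void MoveY_MoveNegative_MovesDown() {
            Point p = new Point(50.0, 50.0);
            p.MoveY(-10.0);
            Assert.assertEquals(40.0, p.Y, 0.0);
        }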

    Read the article

  • Career Change Need Advice: Professional Web Developer

    - by bikedorkseattle
    I'm hoping to get some advice here on the steps I should take to make a career change into professional web development. I've been working in cancer research for the last 14 years and I need a change. The job market is terrible, the pay is worse, and despite what one would think, the atmosphere is generally un-collegial, even in your own group. Venture funding never returned after the dot-com bust, and with the 3 to 5 wars our country is now in, NIH funding is only going to get worse. I know things are not going to get better for my field, sadly, and I know I need to move on. For probably just as long, I have fiddled around with web development; I even run a fairly popular site with close to 1 million pageviews/month that pulls in a decent income, but not one stable enough to live off of right now. My skills are OK for being self-taught. I enjoy the fast-paced nature of the web, the tools the community creates, and how eager people are to help and share knowledge; it's what science should be. I have been trying to find an entry-level developer job doing standard HTML/CSS/PHP/MySQL/JS/jQuery type work. A good 50%+ of the jobs want someone with a CS degree, and most want 5 years' experience. Having no professional experience and no formal education, I know I'm at a huge disadvantage. I am now considering my options on how to move forward professionally. The way I see it, I have basically 3 options:

    1. Build up my portfolio of work as much as I can and continue to learn as much as I can on my own. Try to contribute to some open source project when time allows. Network like crazy and go to meetups. Be confident and pray a lot in private. OR
    2. While doing the above, do some certification programs in PHP and Java, possibly others. Get a Zend Certification. OR
    3. Spend a few years getting a CS degree while doing 1.

    I've already done the work-full-time-while-going-to-school thing and it doesn't excite me one bit. I didn't have the greatest college experience and am not too eager to return, but I have a family to feed. Is the degree really necessary, or is it more of a rite-of-passage type thing in most instances? I appreciate everyone's input. Thanks for taking the time to respond.

    Read the article

  • How to handle interruptions in developer work without losing concentration? [closed]

    - by tomaszs
    I have worked as a developer for some years now. It is often considered antisocial work, mainly because you need to spend so much time programming. I've always been the kind of developer who likes to cut off any sources of distraction and spend several hours on a project, because that way I (as I hope) finish it faster. There are also other, more social kinds of developers, who can chat, read, or watch movies while developing; they don't hesitate to be interrupted at any time and come back to the project without any problem. For me, any distraction is a source of frustration, because I need to spend substantial time loading my mind with all the info about the project and concentrating back on the tasks. I always thought it was better to work this way because the project gets completed faster, but it makes some things difficult: it's hard to chat with someone who has some important info, because you are a bit frustrated when you know you are losing your Zen. And sometimes it's more important to chat with someone than to keep your Zen. In most other kinds of work, the ability to multitask is very important, but as a developer, and as a person, it's also very important to stay social. And I see now that the problem of concentration makes it difficult to make the right choice: the cost of maintaining concentration is sometimes just so damn high! So is it just me having such poor concentration skills that any interruption is a big deal? Maybe my memory is so bad that I don't remember all the issues of a project for long? Or maybe I develop projects in a fashion that requires me to keep so much info in my mind just to be able to start working with the code? Or should I just accept that being more social will make me finish projects slower, in a way I personally consider less than 100% productive? Is that just a normal thing I should accept, living like anyone else who has demanding work, without assuming that programming is different from any other job, and stop making a fuss about the whole concentration thing? This is a question for mid-to-pro developers; I think you have faced the same dilemma in your life. I would be glad if you could help me take the right road here, because it has been driving me, and I suppose the people I work with, crazy for years.

    Read the article

  • Upgrading 10.04LTS -> 10.10 using custom sources

    - by Boatzart
    I'm trying to upgrade to 10.10 from 10.04 LTS using a custom sources.list file that points to an unofficial mirror.* The mirror does have maverick, but I get the following output when upgrading:

        boatzart@somecomputer: > sudo do-release-upgrade
        Checking for a new ubuntu release
        Done Upgrade tool signature
        Done Upgrade tool
        Done downloading
        extracting 'maverick.tar.gz'
        authenticate 'maverick.tar.gz' against 'maverick.tar.gz.gpg'
        tar: Removing leading `/' from member names
        Reading cache
        Checking package manager
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Building data structures... Done
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Building data structures... Done
        Updating repository information
        WARNING: Failed to read mirror file
        No valid mirror found
        While scanning your repository information no mirror entry for the
        upgrade was found. This can happen if you run a internal mirror or
        if the mirror information is out of date.
        Do you want to rewrite your 'sources.list' file anyway? If you choose
        'Yes' here it will update all 'lucid' to 'maverick' entries.
        If you select 'No' the upgrade will cancel.
        Continue [yN] y
        WARNING: Failed to read mirror file
        96% [Working]
        Checking package manager
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Building data structures... Done
        Calculating the changes
        Calculating the changes
        Could not calculate the upgrade
        An unresolvable problem occurred while calculating the upgrade:
        The package 'update-manager-kde' is marked for removal but it is in
        the removal blacklist.
        This can be caused by:
        * Upgrading to a pre-release version of Ubuntu
        * Running the current pre-release version of Ubuntu
        * Unofficial software packages not provided by Ubuntu
        If none of this applies, then please report this bug against the
        'update-manager' package and include the files in /var/log/dist-upgrade/
        in the bug report.
        Restoring original system state
        Aborting
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Building data structures... Done

    Here is the relevant section from /var/log/dist-upgrade/main.log:

        2010-11-18 14:05:52,117 DEBUG The package 'update-manager-kde' is marked for removal but it's in the removal blacklist
        2010-11-18 14:05:52,136 ERROR Dist-upgrade failed: 'The package 'update-manager-kde' is marked for removal but it is in the removal blacklist.'
        2010-11-18 14:05:52,136 DEBUG abort called

    * I'm located inside USC, and for some crazy reason any sustained downloads to anywhere outside the university are throttled down to 5 kbps inside my lab. Because of this I need to use the following sources.list:

        deb http://mirrors.usc.edu/pub/linux/distributions/ubuntu/ lucid main restricted universe multiverse
        deb http://mirrors.usc.edu/pub/linux/distributions/ubuntu/ lucid-updates main restricted universe multiverse
        deb http://mirrors.usc.edu/pub/linux/distributions/ubuntu/ lucid-backports main restricted universe multiverse
        deb http://mirrors.usc.edu/pub/linux/distributions/ubuntu/ lucid-security main restricted universe multiverse

    I've tried adding four more entries to the sources.list with s/lucid/maverick/, but that didn't help. Does anyone know how to fix this? Thanks!
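    One avenue suggested directly by the error text (an untested guess, not a confirmed fix): since the calculation aborts specifically on update-manager-kde being marked for removal, removing that package yourself before upgrading may let the calculation get past this point:

        sudo apt-get remove update-manager-kde
        sudo do-release-upgrade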

    Read the article

  • Following a moving sprite

    - by iQue
    I'm trying to get my enemies to follow the main character of my (2D) game, but for some reason the game starts lagging like crazy when I do it the way I want to, and the following part doesn't work 100% either: just 1 of my 24 enemies comes all the way to my sprite; the other 23 move towards it but stop at a certain point. Might be a poor explanation, but I don't know how else to put it. Code for moving my enemies:

        private int enemyX() {
            int x = 0;
            for (int i = 0; i < enemies.size(); i++) {
                // pointerPosition is the position of my main sprite.
                if (controls.pointerPosition.x > enemies.get(i).getX()) {
                    x = 5;
                } else {
                    x = -5;
                }
                Log.d(TAG, "happyX HERE: " + controls.pointerPosition.x);
                Log.d(TAG, "enemyX HERE: " + enemies.get(i).getX());
            }
            return x;
        }

        private int enemyY() {
            int y = 0;
            for (int i = 0; i < enemies.size(); i++) {
                if (controls.pointerPosition.y > enemies.get(i).getY()) {
                    y = 5;
                } else {
                    y = -5;
                }
            }
            return y;
        }

    I send it to the update method in my Enemy class:

        private void drawEnemy(Canvas canvas) {
            addEnemies(); // a method where I add enemies to my ArrayList, no parameters except bitmap.
            for (int i = 0; i < enemies.size(); i++) {
                enemies.get(i).update(enemyX(), enemyY());
            }
            for (int i = 0; i < enemies.size(); i++) {
                enemies.get(i).draw(canvas);
            }
        }

    and finally, the update method itself, located in my Enemy class:

        public void update(int velX, int velY) {
            x += velX; // sets x before I draw
            y += velY; // sets y before I draw
            currentFrame = ++currentFrame % BMP_COLUMNS;
        }

    So can anyone figure out why it starts lagging so much and how I can fix it? Thanks for your time!
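    For reference, a minimal per-enemy sketch (method names follow the snippets above; note that enemyX()/enemyY() each loop over the whole list yet return a single value, so every enemy ends up with the velocity computed for the last one, and each call costs a full pass over the list):

        // compute each enemy's velocity from its own position, once per frame
        private void updateEnemies() {
            for (int i = 0; i < enemies.size(); i++) {
                int velX = controls.pointerPosition.x > enemies.get(i).getX() ? 5 : -5;
                int velY = controls.pointerPosition.y > enemies.get(i).getY() ? 5 : -5;
                enemies.get(i).update(velX, velY);
            }
        }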

    Read the article

  • Layout of mathematical views (iOS)

    - by William Jockusch
    I am trying to figure out the right way to encapsulate graphical information about mathematical objects. It is not simple. For example, a matrix can include square brackets around its entries, or not. Some things carry down to sub-objects -- for example, a matrix might track the font size to be used by its entries. Similarly, the font color and the background color would carry down to the entries. Other things do not carry down. For example, the entries of the matrix do not need to know whether or not the matrix has those square brackets. Based on all of the above, I need to calculate sizes for everything, then frames. All of this can depend on the properties stored above. The size of a matrix depends on the sizes of its entries, and also on whether or not it has those brackets. What I am having a hard time with is not the individual ways to calculate sensible frames for this or that. It is the overall organizational structure of the whole thing. How can I keep track of it all without going crazy? One particular obstacle is worth mentioning -- for reasons I don't want to go into here, I need to calculate the sizes and frames for everything before I instantiate any actual views. So, for example, if I have a Matrix object, I need to calculate its size before I make a MatrixView. If I have an equation, I need to calculate the size of the view for the equation before I create the actual view. So I clearly need separate objects for those calculations. But I can't figure out a sensible class structure for those objects. If I put them all into a single class, I get some advantages because copying then becomes easy. But I also end up with a bloated class that contains info that is irrelevant for some objects -- such as whether or not to include those brackets around the matrix. But if I use a lot of different classes, copying properties becomes a real pain. If it matters, this is all in Objective-C, for an iOS environment. Any pointers would be greatly appreciated.
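    To make the discussion concrete, here is one shape such a pre-view layout object could take; an illustrative sketch only, where every name is invented and the two style dictionaries are just one way to separate inherited properties from local ones:

        #import <UIKit/UIKit.h>

        // a lightweight layout node, measured bottom-up before any UIView exists
        @interface MathLayoutNode : NSObject
        @property (nonatomic) CGSize size;                  // computed from children
        @property (nonatomic) CGRect frame;                 // assigned by the parent
        @property (nonatomic, strong) NSArray *children;
        @property (nonatomic, strong) NSDictionary *inheritedStyle; // font size, colors
        @property (nonatomic, strong) NSDictionary *localStyle;     // e.g. matrix brackets
        - (void)measure;  // fills in size, recursing into children first
        - (void)layout;   // positions children within the computed frame
        @end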

    Read the article

  • Best Advice Ever: Learn By Helping Others

    - by Argenis
    I remember when back in 2001 my friend and former SQL Server MVP Carlos Eduardo Rojas was busy earning his MVP street-cred in the NNTP forums, aka Newsgroups. I always thought he was playing the Sheriff trying to put some order in a Wild Wild West town by trying to understand what these people were asking. He spent a lot of time doing this stuff – and I thought it was just plain crazy. After all, he was doing it for free. What was he gaining from all of that work? It was not until the advent of Twitter and #SQLHelp that I realized the real gain behind helping others. Forget about the glory and the laurels of others thanking you (and thinking you’re the best thing ever – ha!), or whatever award with whatever three letter acronym might be given to you. It’s about what you learn in the process of helping others. See, when you teach something, it’s usually at a fixed date and time, and on a specific topic. But helping others with their issues or general questions is something that goes on 24x7, on whatever topic under the sun. Just go look at sites like DBA.StackExchange.com, or the SQLServerCentral forums. It’s questions coming in literally non-stop from all corners of the world. And yet a lot of people are willing to help you, regardless of who you are, where you come from, or what time of day it is. And in my case, this process of helping others usually leads to me learning something new. Especially in those cases where the question isn’t really something I’m good at. The delicate part comes when you’re ready to give an answer, but you’re not sure. Often times I’ll try to validate with Internet searches and what have you. Often times I’ll throw in a question mark at the end of the answer, so as not to look authoritative, but rather suggestive. But as time passes by, you get more and more comfortable with that topic. And that’s the real gain. I have done this for many years now on #SQLHelp, which is my preferred vehicle for providing assistance. I cannot tell you how much I’ve learned from it. By helping others, by watching others help. It’s all knowledge and experience you gain… and you might not be getting all that in your day job today. Such a thing, my dear reader, is invaluable. It’s what will differentiate yours amongst a pack of resumes. It’s what will get you places. Take it from me - a guy who, like you, knew nothing about SQL Server.

    Read the article

  • Access Control Service v2: Registering Web Identities in your Applications [code]

    - by Your DisplayName here!
    You can download the full solution here. The relevant parts in the sample are:

    Configuration

    I use the standard WIF configuration with passive redirect. This kicks in automatically whenever authorization fails in the application (e.g. when the user tries to get to an area that requires authentication or needs registration).

    Checking and transforming incoming claims

    In the claims authentication manager we have to deal with two situations: users that are authenticated but not registered, and registered (and authenticated) users. Registered users will have claims that come from the application domain; the claims of unregistered users come directly from ACS and get passed through. In both cases a claim for the unique user identifier will be generated. The high-level logic is as follows:

        public override IClaimsPrincipal Authenticate(
            string resourceName, IClaimsPrincipal incomingPrincipal)
        {
            // do nothing if anonymous request
            if (!incomingPrincipal.Identity.IsAuthenticated)
            {
                return base.Authenticate(resourceName, incomingPrincipal);
            }

            string uniqueId = GetUniqueId(incomingPrincipal);

            // check if user is registered
            RegisterModel data;
            if (Repository.TryGetRegisteredUser(uniqueId, out data))
            {
                return CreateRegisteredUserPrincipal(uniqueId, data);
            }

            // authenticated by ACS, but not registered
            // create unique id claim
            incomingPrincipal.Identities[0].Claims.Add(
                new Claim(Constants.ClaimTypes.Id, uniqueId));

            return incomingPrincipal;
        }

    User registration

    The registration page is handled by a controller with the [Authorize] attribute. That means you need to authenticate before you can register (crazy, eh? ;). The controller then fetches some claims from the identity provider (if available) to pre-fill form fields. After successful registration, the user is stored in the local data store and a new session token gets issued. This effectively replaces the ACS claims with application-defined claims without requiring the user to re-sign-in.

    Authorization

    All pages that should be reachable only by registered users check for a special application-defined claim that only registered users have. You can nicely wrap that in a custom attribute in MVC:

        [RegisteredUsersOnly]
        public ActionResult Registered()
        {
            return View();
        }

    HTH

    Read the article

  • Adding JavaScript to your code dependent upon conditions

    - by DavidMadden
    You might be in an environment where your code is source controlled and where you might have build options for different environments. I recently encountered this where the same code, built with different configurations, would have the website at a different URL. If you are working with ASP.NET, as I am, you will have to do something a bit crazy but worthwhile. If someone has a more efficient solution, please share.

    Here is what I came up with to make sure the client-side script was placed into the HEAD element for the Google Analytics script. GA wants to be last in the HEAD element, so if you are adding others in Page_Load you should add theirs last. The settings object below is an instance of a class that holds information I collect; you could read from different sources depending on where you stored your unique ID for Google Analytics.

        *** This has been formatted to fit this screen. ***

        if (!IsPostBack)
        {
            if (!String.IsNullOrEmpty(settings.GoogleAnalyticsID))
            {
                string str = @"//<![CDATA[
        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', '" + settings.GoogleAnalyticsID + @"']);
        _gaq.push(['_trackPageview']);
        (function () {
            var ga = document.createElement('script');
            ga.type = 'text/javascript';
            ga.async = true;
            ga.src = ('https:' == document.location.protocol
                ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
            var s = document.getElementsByTagName('script')[0];
            s.parentNode.insertBefore(ga, s);
        })();
        //]]>";

                System.Web.UI.HtmlControls.HtmlGenericControl si =
                    new System.Web.UI.HtmlControls.HtmlGenericControl();
                si.TagName = "script";
                si.Attributes.Add("type", @"text/javascript");
                si.InnerHtml = str;
                this.Page.Header.Controls.Add(si);
            }
        }

    The code above prevents the script from being added if the request is a PostBack, or if the ID could not be read because something caused the settings to be lost by accident. If you have a larger function to declare, you can use a StringBuilder to assemble the lines. This is the most compact I was willing to go while keeping it readable.

    Read the article

  • Crash when using datablocks

    - by scorcher24
    I have really thoroughly searched the net and could not find any solution for this, so I ask for help here. Anyway, I have this datablock in datablocks.cs:

        datablock t2dSceneObjectDatablock(EnemyShipConfig)
        {
            canSaveDynamicFields = "1";
            Layer = "3";
            size = "64 64";
            CollisionActiveSend = "1";
            CollisionActiveReceive = "1";
            CollisionCallback = true;
            CollisionLayers = "3";
            CollisionDetectionMode = "POLYGON";
            CollisionPolyList = "0.00 -0.791 0.756 0.751 -0.746 0.732";
            UsesPhysics = "0";
            Rotation = "-90";
            WorldLimitMode = "KILL";
            WorldLimitMax = "880 360";
            WorldLimitMin = "-765 -436";
            minFireRate = "2000";
            maxFireRate = "1200";
            laserSpeed = "800";
            minSpeed = "100";
            maxSpeed = "150";
        };

    This is an exact reproduction of an object that I have manually edited in the editor. So far, I had just used clone() to get as many enemies as I need while the spawn point was out of sight. It is an R-Type-style shooter, so I need a variable number of enemies. Since clone() spams my log, I decided to use datablocks, since they are also more flexible. That's what I get when I use clone():

        Con::execute - 0 has no namespace: onRemoveFromScene

    However, once spawning begins, my game freezes and crashes:

        function SpawnEnemy()
        {
            //%spawnedEnemy = EnemyShipMaster.clone(true);
            %spawnedEnemy = new t2dStaticSprite()
            {
                class = "EnemyShip";
                sceneGraph = $global_sceneGraph;
                datablock = "EnemyShipConfig";
                imageMap = "starshipImageMap";
                layer = 3;
            };

            %speed = getRandom(%spawnedEnemy.minSpeed, %spawnedEnemy.maxSpeed);
            %y = getRandom(-320, 320);

            // Set Properties
            %spawnedEnemy.setPositionX(700);
            %spawnedEnemy.setPositionY(%y);
            %spawnedEnemy.setVisible(true);
            %spawnedEnemy.setLinearVelocityX( -%speed );
            %spawnedEnemy.setTimerOn( getRandom( %spawnedEnemy.maxFireRate, %spawnedEnemy.minFireRate ) );
        }

        // Definition of $global_sceneGraph from game.cs:
        $global_sceneGraph = sceneWindow2D.loadLevel(%level);

    As I said, it works fine when I use clone() (which is commented out here), but my log gets spammed. I really hope someone can shed some light on this for me; it is driving me crazy.

    Read the article

  • Simulate 'Shock absorption' with tire rubber in PhysX (2.8.x)

    - by Mungoid
    This is a kinda tricky question, and I fear there is no easy enough solution, but I figured I'd hit SE up before giving up on it and just doing what I can. The machine I am working on has no suspension, shocks, or springs of any sort in real life, so you would think that when it drives over bumps it would shake like crazy, but because its tires (6 of them) are quite large, they seem to absorb a lot of shock from the bumps. Part of this is because the machine is around 30k lbs and it just smashes/compresses any bumps in the ground down (this is another issue I'm still working on), and the other part is that the tires seem to have a lot of flex to them, with a lot of air as well. So my current task is to simulate shock absorption in PhysX without visibly separating the tires from the spindle/axle. I have been messing with all kinds of NxMaterial, NxSpring, and joint settings, and have had no luck getting this to work. The main problem is that the spindle attached to the tire is directly in the center, and the axle is basically solidly attached to the chassis, so if I give it any spring or suspension travel, the spindle on the tires will move upwards or downwards, looking very odd because it is no longer in the center of the tire. I tried giving it a higher restitution, but that just makes it bouncy without any shock absorption. Another avenue I am exploring is to actively smooth the terrain in front of the tires, so that before it hits a bumpy patch, that patch is smoothed and it doesn't bounce. The only issue with this is that it is pretty expensive to do with 6 tires, high tessellation of the terrain, and other complex things going on at the same time in this simulation. I am still working on this, but I am hoping to mix and match a few different aspects to get the best possible outcome. This is a bit of a complex issue, so I'm not expecting anyone to have a definitive answer, just hoping someone may think of something I haven't =-) -Side note: Yes, I know PhysX 2.8.x is quite outdated, but we have to stick with it for this implementation. We are in the process of moving to another physics engine, but it is out of scope to apply that engine to this project.
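    For what it's worth, the behavior being chased is essentially a spring-damper acting along each wheel's vertical axis; a generic sketch of that force (plain math, not a PhysX 2.8 API call, and the constants would need hand-tuning against the ~30k lb chassis):

        // spring-damper ("shock") force along the wheel's vertical axis
        // offset:   compression of the virtual spring (m)
        // velocity: compression rate (m/s)
        // k: stiffness (N/m), c: damping (N*s/m); c near 2*sqrt(k*m) is critical
        float shockForce(float offset, float velocity, float k, float c)
        {
            return -k * offset - c * velocity;
        }

    Applied as an external force at each spindle, something like this can fake suspension travel while the wheel mesh stays visually centered.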

    Read the article

  • Problem with DualBooting Ubuntu 13.10 and Win7

    - by VinArrow
    This is my first post here on AskUbuntu, though not my first time using it. I wanted to install Ubuntu 13.10 on my PC to have all my work stuff there and leave Win7 for gaming. So I did my research on how to dual boot when you already have Win7 installed. Here are the steps I took:

    1. Used Disk Management on Win7 and shrunk that partition, leaving 80GB free for Ubuntu.
    2. Made a bootable pendrive following the instructions on Ubuntu's website.
    3. During the installation steps there was supposed to be an "Install alongside Win7" option, but there wasn't, so I chose "Something else". Everything was fine and I was able to install Ubuntu with no problem on my unallocated 80GB partition (76GB Ubuntu + 4GB swap).
    4. There was a prompt for me to restart my PC, and so I did, expecting to see the dual boot screen (GRUB, right?).

    Now, when I restarted my PC, GRUB never showed up and it booted straight into Windows. Then I did some more research and found out that that could happen. So I tried three things:

    1. Plugged in the bootable pendrive again and selected "Try Ubuntu without installing". Then I followed some instructions found here (How can I repair grub? (How to get Ubuntu back after installing Windows?)) and I could chroot into my Ubuntu install just fine. Repaired GRUB as instructed in that link, restarted the PC and booted straight into Win7 again.
    2. Again used the bootable pendrive to "Try Ubuntu..." and used the Boot-Repair tool (recommended repair). Again, it booted straight into Win7.
    3. Lastly, I installed EasyBCD on my Win7 and made a new entry for Ubuntu (Linux/BSD). When I rebooted the PC, there was the option to choose between Win7 and Linux; I chose Linux and it didn't work, taking me straight to a command-line-like environment that read "Minimum bash like scripting" or something, as if I didn't have a Linux OS installed.

    So I thought I'd try to repair my Ubuntu install. And during the "Installation method" step there was the choice to install alongside Ubuntu 13.10! And that right there drove me crazy. Here is a screenshot of GParted showing how things are set up now: http://imageshack.us/f/801/77u3.png/ Notice on the left-hand side how I can access my installation files just fine. sdb1 - Win7 reserved space, sdb2 - Win7 OS, sdb3 - 76GB Ubuntu install, sdb5 - 4GB swap area. Does anyone know why my Ubuntu 13.10 is not being recognized, and what I should do to get it working? Thanks, and sorry for the long read and bad English! (BIOS = legacy)
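    For reference, the manual GRUB reinstall people usually attempt from the live session looks roughly like this; a sketch only, under the assumption (from the GParted screenshot above) that sdb3 is the Ubuntu root and the BIOS boots from /dev/sdb, so double-check the device names before running anything:

        # from "Try Ubuntu" on the live pendrive
        sudo mount /dev/sdb3 /mnt
        sudo grub-install --boot-directory=/mnt/boot /dev/sdb
        # then, after successfully rebooting into Ubuntu, regenerate the menu
        sudo update-grub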

    Read the article

  • Agile team with no dedicated Tester members. Insane or efficient?

    - by MetaFight
    I'm a software developer. I've been thinking a lot about the efficiency of the software testers I've worked with so far in my career. In fact, I've been thinking a lot about the software tester's role in general and have reached a potentially contentious conclusion: non-developer software testers are less efficient at software testing than developers. Now, before everyone gets upset, hear me out. This isn't mere opinion. Software testing and software development both require a lot of skills in common:

    - Problem solving
    - Thinking about corner cases
    - Analytical skills
    - The ability to define clear and concise step-by-step scenarios

    What developers have in addition to this is the ability to automate their tests. Yes, I know non-dev testers can automate their tests too, but that often then becomes a test maintenance issue. Because automating UI tests is essentially programming, non-dev members encounter all the same difficulties software developers encounter: copy-pasta, lack of code reusability/maintainability, etc. So, I was wondering: why not replace all non-dev roles with developer roles? Developers have the skills required to perform software testing tasks, and they have the skills to automate tests and keep them maintainable. Would the following work? Hire a bunch of developers and split them into 2 roles:

    - Software developers
    - Software developers doing testing (some manual, mostly automated by writing integration tests, unit tests, etc.)
    - Software developers doing application support. (I've removed this as it is probably a separate question altogether)

    And, in our case, since we're doing Agile development, rotate the roles every sprint or two. Also, if at all possible, try to have people spend their developer stints and testing stints on different projects. Ideally you would want to reduce the turnover rate per rotation. So maybe you could have 2 groups and make sure the rotation cycles of the groups are staggered. So, for example, if each rotation was two sprints long, the two groups would have their rotations 1 sprint apart. That way there's only a 50% turnover rate per sprint. Am I crazy, or could this work? (Obviously a key component to this working is that all devs want to be in the 3 roles. Let's assume I'm starting a new company and I can hire these ideal people.) Edit: I've removed the phrase "QA", as apparently we are using it incorrectly where I work.

    Read the article

  • Down to the Wire - Yet More Solaris Things to See at OpenWorld (and JavaOne!)

    - by Larry Wake
    San Francisco is bracing for the annual invasion. The airport's jammed, the tweets are flying, and the numbers are crazy: more than 50,000 attendees and 2,500+ sessions, taking over Moscone Convention Center, two streets, Union Square, and seemingly every hotel in town (98,000 hotel room nights). So yeah, it's busy. And it's not just OpenWorld--we've also got JavaOne, MySQL Connect, and four other sub-events going on as well. Speaking of JavaOne, you can find Solaris-related activity there, too -- I've highlighted one hands-on lab below. Here's a last pre-event roundup of activities for consideration; enjoy the show(s)! (Remember, Schedule Builder is your friend; use it with the session numbers below to register.)

    Monday, October 1st:

    3:15 PM - General Session: Accelerate Your Business with the Oracle Hardware Advantage (GEN9691, Moscone North Hall D)
    John Fowler, head of Oracle's Systems organization, will talk about Oracle hardware technology and how it's co-engineered with other key technologies, including Oracle Solaris.

    Tuesday, October 2nd:

    10:15 AM - Building an IaaS Platform with SPARC, Oracle Solaris 11, and Oracle VM Server for SPARC (CON4431, Moscone South 270)
    Get the birds-eye lowdown (whatever that means) on how U.S. Cellular built its Infrastructure as a Service (IaaS) cloud delivery platform with Oracle's SPARC T4 servers, Oracle Solaris 11, Oracle Solaris Cluster 4, and Oracle VM Server for SPARC. The session covers the high-level design, the business case made, implementation details, and lessons learned.

    11:45 AM - Oracle Solaris 11 Panel: Insights and Directions from Oracle Solaris Core Engineering (CON8790, Moscone South 252)
    This has been one of the livelier Solaris-related sessions in years past (and I'm not saying that just because I get to moderate it this year). A panel of core engineers responsible for a wide range of key Solaris technologies will talk about some of the interesting work they've been doing -- but mostly we keep time open for the panel to take questions from attendees, because that's the fun part.

    Wednesday, October 3rd:

    10:00 AM - Tracing Your Java Application Tuning on Oracle Solaris with DTrace (HOL10214, Hilton San Francisco, Franciscan A/B/C/D)
    This JavaOne hands-on lab will show how to use the DTrace framework to dynamically trace your Java applications on Oracle Solaris and uncover new tuning opportunities.

    Thursday, October 4th:

    12:45 PM - Oracle Solaris 11: Optimized for Oracle Database, Oracle WebLogic Server, and Java (CON8800, Moscone South 252)
    Explore how Oracle Solaris 11 has been built to be the best platform for the cloud and enterprise applications, with built-in optimizations to improve performance and deliver unique functionality with Oracle Database, Oracle WebLogic Server, and Java.

    Read the article

  • QNTC and Windows Server 2008 R2

    - by Ben
    I am having a really hard time getting an iSeries (AS/400) machine talking to my new Windows Server 2008 R2 box using the QNTC file system on the iSeries. I had similar problems getting it to talk to a Windows Server 2003 machine initially, but enabling the local Guest account on the 2003 box solved that one. No such luck with the new 2008 box. When I do a WRKLNK /QNTC/SVR01 on the iSeries (which should show share listings, and does on any 2003 boxes), all I get is (Cannot find object to match specified name.). I know the iSeries likes the same username and password on the remote server, but unfortunately for us this is not the case. Anyhow, it does currently work with different username/password combinations on a 2003 box. To try and get the wretched things talking, I have made the 2008 server pretty open, but the iSeries will not see shares on it. I have enabled the local Guest account, turned Windows Firewall off, and set the share permissions so Everyone has full control, but to no avail. I read something on the internet about the iSeries only being able to handle NTLM authentication (and I understand that by default Server 2008 R2 only uses NTLMv2 and has NTLM disabled), so I made a special group policy for the server and tweaked all the Group Policy settings under Computer Configuration\Policies\Windows Settings\Security Settings\Local Policies\Security Options, but the iSeries STILL won't see it. We have a team of programmers who do all the system administration of the iSeries, but they are stumped for ideas on their side, and I'm stumped for ideas on my side. This is driving me crazy now, and if anybody has managed to get an iSeries to talk to Windows Server 2008 R2 using QNTC I would be very appreciative of any suggestions, be it Windows-side settings, iSeries settings, or even IBM PTFs that might patch anything. The iSeries is running V5R4 and I have *SECOFR privileges on it, if that helps. One final (most important!) note - the programmers think it's my system being tricky, and I think it's theirs - please prove me right :)
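    For anyone comparing notes: the policy tied to the NTLM-only behavior mentioned above is "Network security: LAN Manager authentication level". A quick way to check what the 2008 R2 box is actually configured for (a sketch; if the registry value is absent, the OS default applies):

        # read the NTLM compatibility level from the registry (0-5; lower
        # levels still accept classic NTLM responses)
        Get-ItemProperty "HKLM:\SYSTEM\CurrentControlSet\Control\Lsa" |
            Select-Object LmCompatibilityLevel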

    Read the article

  • SSTP BPDU with bad TLV and macflap -- info please

    - by Adeodatus
    Hi All, I'm slowly locking down the network I've inherited, and MAC flapping has been a problem in the past, with customers doing all kinds of crazy things. That's changing, but I am now encountering this error:

        Dec 30 18:31:31 10.50.1.50 1565: 001567: Dec 30 18:31:30: %SW_MATM-4-MACFLAP_NOTIF: Host xxxx.xxxx.f681 in vlan 1 is flapping between port Gi0/5 and port Gi0/48
        Dec 30 18:43:28 10.50.1.50 1566: 001568: Dec 30 18:43:26: %SPANTREE-2-RECV_BAD_TLV: Received SSTP BPDU with bad TLV on GigabitEthernet0/5 VLAN1.
        Dec 30 18:48:18 10.50.1.50 1567: 001569: .Dec 30 18:48:17: %SPANTREE-2-RECV_BAD_TLV: Received SSTP BPDU with bad TLV on GigabitEthernet0/5 VLAN1.

    Unfortunately, that MAC address is the MAC of our core router, the only link to the internet, on port Gi0/48. On the other end of Gi0/5, I have about 50 bridged customer machines connected through a series of managed and unmanaged L2 switches. Yes, on VLAN 1 too... like I said, working on changing this slowly. In the meantime, it has me quite baffled as to how to deal with this and track down the customer or switch that is the problem. What else could be going on with these messages? The bad TLV is a new one for me. Any ideas? Thank you, and Happy New Year to you all!!
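    One knob that may help while tracking this down (a judgment call, not a definitive fix: bpdufilter silently drops BPDUs instead of err-disabling the port, and it is only sane if Gi0/5 truly faces customer gear whose BPDUs you never want to honor):

        ! Cisco IOS: keep customer-side devices from injecting spanning-tree BPDUs
        interface GigabitEthernet0/5
         spanning-tree bpdufilter enable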

    Read the article

  • directory with 980MB meta data, millions of files, how to delete it? (ext3)

    - by Alexandre
    Hello, so I'm stuck with this directory:

        drwxrwxrwx 2 dan users 980M 2010-12-22 18:38 sessions2

    The directory's contents are small - just millions of tiny little files. I want to wipe it from the filesystem but have been unable to. My first tries were:

        find sessions2 -type f -delete

    and

        find sessions2 -type f -print0 | xargs -0 rm -f

    but I had to stop because both caused escalating memory usage. At one point it was using 65% of the system's memory. So I thought (no doubt incorrectly) that it had to do with the fact that dir_index was enabled on the system. Perhaps find was trying to read the entire index into memory? So I did this (foolishly):

        tune2fs -O^dir_index /dev/xxx

    Alright, so that should do it. Ran the find command above again and... same thing. Crazy memory usage. I hurriedly ran tune2fs -Odir_index /dev/xxx to re-enable dir_index, and ran to Server Fault! 2 questions:

    1) How do I get rid of this directory on my live system? I don't care how long it takes, as long as it uses little memory and little CPU. By the way, using nice find ... I was able to reduce CPU usage, so my problem right now is only memory usage.

    2) I disabled dir_index for about 20 minutes. No doubt new files were written to the filesystem in the meanwhile. I re-enabled dir_index. Does that mean the system will not find the files that were written before dir_index was re-enabled, since their filenames will be missing from the old indexes? If so, and I know these new files aren't important, can I keep the old indexes? If not, how do I rebuild the indexes? Can it be done on a live system? Thanks!
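    For question 1, one low-memory approach that is often suggested for exactly this situation (a sketch; in practice rsync tends to keep memory flat here because it walks and unlinks entries incrementally):

        # sync an empty directory over the full one; rsync deletes as it goes
        mkdir /tmp/empty
        nice rsync -a --delete /tmp/empty/ sessions2/
        rmdir sessions2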

    Read the article

  • What exactly is a Mobile mouse? + Mouse Recommendation

    - by chobo2
    I am really disappointed with Logitech. My first cordless wireless mouse was from them and it lasted like 5 years. So I decided to get another one from them: http://www.futureshop.ca/catalog/proddetail.asp?logon=&langid=EN&sku_id=0665000FS10099373&catid= And this mouse sucks bad. After 6 months it broke. I returned it under warranty and got a new one, and now, 4-6 months later, it is on the verge of breaking again... You pay like $50 for this mouse and it lasts like 6 months; that's sad. I just lost faith in Logitech mice, as I remember my bro also had a Logitech mouse that broke after like 6 months. He then bought another Logitech mouse (different model) that has been working for maybe 2 years (and no signs of breaking), but I am not crazy about that mouse (I don't like the 2 buttons by the wheel) and I'm not sure if they even sell it (maybe they got rid of it because it lasts too long): http://www.tigerdirect.ca/applications/SearchTools/item-details.asp?EdpNo=1578495&CatId=1285 So I am looking at a Microsoft mouse: http://www.futureshop.ca/catalog/proddetail.asp?logon=&langid=EN&sku_id=0665000FS10125565&catid= I am looking at this one, but I am not sure what they mean by "mobile mouse". I think that is what MS calls notebook mice, so I am not sure if this would be a good mouse to get for a desktop. I see it uses a micro USB receiver, but I am not sure if the mouse itself is smaller than a standard mouse. Almost all the mice I looked at at futureshop.ca or Staples are labeled notebook or mobile mice, so I'm not sure which mice would be right for me. I don't want a corded one, though. I really liked the LX6 design a lot, but it can't last more than 6 months. Thanks

    Read the article

  • Authentication on Exchange using EWS managed API

    - by Jacob Proffitt
    I'm having a weird issue with the Exchange Web Services. The operation I'm attempting is pretty simple—pull a user's calendar items for the current week on our internal website. When testing locally, the EWS Managed API pulls the calendar information just fine. When deployed to the web server (using integrated Windows authentication), it chokes. My trace is telling me that access is denied in the Exchange call. Initially, I thought this was a double-hop NTLM permissions issue, but it turns out that the service actually works for some internal users, but not for most. The only thing I can find that the functioning users have in common is that they are BlackBerry users, and I surmise that their Exchange permissions are set up differently. Or are their Active Directory accounts set up differently? I don't know, and it's driving me crazy. I surmise that the BlackBerry app runs some scripts when a user is added to the application, but I'm completely unfamiliar with what may be going on behind the scenes there. So: is there a way to duplicate the permissions those users enjoy (either AD or Exchange permissions)? And/or how exactly does one fix the double-hop credentials situation?
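    For anyone reproducing this, a sketch of the usual way to take the double-hop out of the equation: authenticate to EWS with one explicit service account and impersonate the target mailbox (the account names below are placeholders, and the service account needs impersonation rights in Exchange):

        // EWS Managed API: explicit credentials plus impersonation, instead of
        // flowing the website's delegated Windows identity to Exchange
        using Microsoft.Exchange.WebServices.Data;

        ExchangeService service = new ExchangeService(ExchangeVersion.Exchange2007_SP1);
        service.Credentials = new WebCredentials("svc-ews", "password", "DOMAIN");
        service.AutodiscoverUrl("user@example.com");
        service.ImpersonatedUserId =
            new ImpersonatedUserId(ConnectingIdType.SmtpAddress, "user@example.com");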

    Read the article
