Search Results

Search found 17953 results on 719 pages for 'someone like you'.


  • SQL SERVER – Not Possible – Delete From Multiple Table – Update Multiple Table in Single Statement

    - by pinaldave
    There are two questions which I get every single day, multiple times. In my Gmail, I have created a standard canned reply for them. Let us see the questions here. "I want to delete from multiple tables in a single statement – how will I do it?" and "I want to update multiple tables in a single statement – how will I do it?"

    The answer is – no, you cannot, and you should not. SQL Server does not support deleting or updating two tables in a single statement. If you want to delete or update two different tables, you will want to write two different delete or update statements. A single combined statement would have many issues – from the consistency of the data to the SQL syntax itself.

    Now here is the real reason for this blog post – yesterday I was asked this question again and I replied with my canned answer saying it is not possible and it should not be implemented that way. In response to my reply I was pointed to my own blog post, where the user suggested that I had previously said this is possible, with a demo example. Let us go over my conversation – you may find it interesting. Let us call the user DJ.

    DJ: Pinal, can we delete multiple tables in a single statement or with a single delete statement?
    Pinal: No, you cannot and you should not.
    DJ: Oh okay, if that is the case, why do you suggest doing that?
    Pinal: (baffled) I am not suggesting that. I am rather suggesting that it is not possible and it should not be possible.
    DJ: Hmm… but in that case why did you blog about it earlier?
    Pinal: (What?) No, I did not. I am pretty confident.
    DJ: Well, I am confident as well. You did.
    Pinal: In that case, it is my word against your word. Isn't it?
    DJ: I have proof. Do you want to see that you suggested it is possible?
    Pinal: Yes, I will be delighted to.
    (After 10 minutes)
    DJ: Here are not one but two of your blog posts which talk about it – SQL SERVER – Curious Case of Disappearing Rows – ON UPDATE CASCADE and ON DELETE CASCADE – Part 1 of 2, and SQL SERVER – Curious Case of Disappearing Rows – ON UPDATE CASCADE and ON DELETE CASCADE – T-SQL Example – Part 2 of 2.
    Pinal: Oh!
    DJ: I knew I was correct.
    Pinal: Well, oh man, I did not mean there what you mean here.
    DJ: I do not understand – can you explain it further?
    Pinal: Here we go. The example in those blog posts is an example of cascading delete or cascading update. I think you may want to understand the concept of foreign keys and cascading update/delete. The concept of cascading exists to maintain data integrity. If primary key rows get updated or deleted, the update or delete is reflected in the foreign key table to maintain key integrity and data consistency (a short T-SQL sketch of this follows at the end of this post). SQL Server follows ANSI Entry SQL with regard to referential integrity between primary key and foreign key columns, which requires the inserting, updating, and deleting of data in related tables to be restricted to values that preserve the integrity. This is an altogether different concept from deleting from multiple tables in a single statement. When I hear that someone wants to delete or update multiple tables in a single statement, what I assume is something very similar to the following: DELETE/UPDATE Table 1 (cols) Table 2 (cols) VALUES … which is not a valid statement/syntax, nor is it ANSI standard.

    I guess, after this discussion with DJ, I realized I need to do a blog post so I can add the link to this blog post in my canned answer. Well, it was a fun conversation with DJ and I hope the message is very clear now.
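
    To make the distinction concrete, here is a minimal T-SQL sketch of the cascading behavior the two linked posts actually demonstrate. The table names are hypothetical, made up for illustration: a single DELETE runs against the parent table only, and the child rows disappear because of the foreign key's cascade rule, not because DELETE touched two tables.

        CREATE TABLE dbo.Products
        (
            ProductID INT PRIMARY KEY,
            Name      NVARCHAR(50)
        );

        CREATE TABLE dbo.ProductDetails
        (
            DetailID  INT PRIMARY KEY,
            ProductID INT NOT NULL
                REFERENCES dbo.Products (ProductID)
                ON UPDATE CASCADE
                ON DELETE CASCADE,
            Detail    NVARCHAR(200)
        );

        INSERT dbo.Products       VALUES (1, N'Widget');
        INSERT dbo.ProductDetails VALUES (10, 1, N'Blue, large');

        -- One statement, one table: the child row is removed by the
        -- cascade rule, not by a multi-table DELETE.
        DELETE FROM dbo.Products
        WHERE ProductID = 1;

        SELECT COUNT(*) FROM dbo.ProductDetails;  -- returns 0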
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Joins, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology


  • Adding a Network Loopback Adapter to Windows 8

    - by Greg Low
    I have to say that I continue to be frustrated with finding out how to do things in Windows 8. Here's another one, and it's recorded so it might help someone else. I've also documented what I tried so that if anyone from the product group ever reads this, they'll understand how I searched for it and might try to make it easier.

    I wanted to add a network loopback adapter, to have a fixed IP address to work with when using an "internal" network with Hyper-V. (The fact that I even need to do this is also painful. I don't know why Hyper-V can't make it as easy to work with host system folders, etc. as I can with VirtualPC, VirtualBox, etc. but that's a topic for another day.)

    In the end, what I needed was a known IP address on the same network that my guest OS was using, via the internal network (which allows connectivity from the host OS to/from guest OSs).

    I started by looking in the network adapters area, but there is no "add" functionality there. Realising that this was likely to be another unexpected challenge, I resorted to searching for info on doing this. I found KB article 2777200 entitled "Installing the Microsoft Loopback Adapter in Windows 8 and Windows Server 2012". Aha, I thought, that's what I'd need. It describes the symptom as "You are trying to install the Microsoft Loopback Adapter, but are unable to find it." and that certainly sounded like me. There's a certain irony in documenting that something's hard to find instead of making it easier to find. Anyway, you'd hope that in that article they'd then provide a step by step example of how to do it, but what they supply is this:

    "The Microsoft Loopback Adapter was renamed in Windows 8 and Windows Server 2012. The new name is "Microsoft KM-TEST Loopback Adapter". When using the Add Hardware Wizard to manually add a network adapter, choose Manufacturer "Microsoft" and choose network adapter "Microsoft KM-TEST Loopback Adapter"."

    The trick with this, of course, is finding the "Add Hardware Wizard". In Control Panel -> Hardware and Sound, there are options to "Add a device" and for "Device Manager". I tried the "Add a device" wizard (seemed logical to me) but after that wizard tries its best, it just tells you that there isn't any hardware that it thinks it needs to install. It offers a link for when you can't find what you're looking for, but that leads to a generic help page that tells you how to do things like turning on your printer.

    In Device Manager, I checked the options in the program menus, and nothing useful was present. I even tried right-clicking "Network adapters", hoping that would lead to an option to add one, also to no avail.

    So back to the search engine I went, to try to find out where the "Add Hardware Wizard" is. Turns out I was in the right place in Device Manager, but I needed to right-click the computer's name, and choose "Add Legacy Hardware". No doubt that hasn't changed location lately, but it's a while since I needed to add one, so I'd forgotten. Regardless, I'm left wondering why it couldn't be in the menu as well.

    Anyway, for a step by step list, you need to do the following:

    1. From Control Panel, select "Device Manager" under the "Devices and Printers" section of the "Hardware and Sound" tab.
    2. Right-click the name of the computer at the top of the tree, and choose "Add Legacy Hardware".
    3. In the "Welcome to the Add Hardware Wizard" window, click Next.
    4. In the "The wizard can help you install other hardware" window, choose the "Install the hardware that I manually select from a list" option and click Next.
    5. In the "The wizard did not find any new hardware on your computer" window, click Next.
    6. In the "From the list below, select the type of hardware you are installing" window, select "Network Adapters" from the list, and click Next.
    7. In the "Select Network Adapter" window, from the Manufacturer list choose Microsoft, then in the Network Adapter list choose "Microsoft KM-TEST Loopback Adapter", then click Next.
    8. In the "The wizard is ready to install your hardware" window, click Next.
    9. In the "Completing the Add Hardware Wizard" window, click Finish.

    Then you need to continue to set the IP address, etc.

    10. Back in Control Panel, select the "Network and Internet" tab, then click "View Network Status and Tasks".
    11. In the "View your basic network information and set up connections" window, click "Change adapter settings".
    12. Right-click the new adapter that has been added (find it in the list by checking the device name of "Microsoft KM-TEST Loopback Adapter"), and click Properties.


  • VB Myth - Case Insensitivity is Awesome!

    - by Damon
    I was reading Andy Brown's article "10 Reasons Why Visual Basic is Better than C#" and the first claim is that VB is superior because of case insensitivity. I think the reasons he outlines are basically as follows:

    - Your fingers get tired finding the shift key (e.g. typing PascalCase and camelCase members)
    - You are much more likely to make mistakes while typing names
    - When you accidentally leave caps lock on, it really matters

    These three arguments culminate in the conclusion: "It doesn't matter if you disagree with everything else in this article: case-sensitivity alone is sufficient reason to ditch C#!" Righto.

    I've been using Visual Basic since version 5.0, and I wrote a book about ASP.NET in Visual Basic, so I want everyone to know I'm definitely not a VB.NET hater. I converted to C# because it was the language of preference at the places I've worked, so I'm used to both languages. And I love me some case sensitivity. So first, let's debunk the claims.

    First, your fingers do not get tired of finding the shift key unless you are writing code in Notepad and compiling everything on the command line. Visual Studio pretty much takes away the need to use the shift key at all. For the most part, any programmer worth a damn doesn't have to type more than about 3-5 characters of any variable or method name before IntelliSense kicks in to help. VB or C#, if you are not using the tab key for autocomplete then you are typing too much anyway, regardless of whether the shift key is involved or not. Also, you've got to be a pretty hard-core candy ass if you're complaining at the end of the day that your little fingers are hurting from hitting the shift key.

    Second, I cannot logically refute the fact that if there are more stringent rules about casing, it will lead to more mistakes. As such, know that you will be more prone to mistakes in C#. However, let's talk about the magnitude of the problem. If you are using IntelliSense then you have auto-correction built in, so you probably won't have much of a problem in the first place. If you manage to bypass IntelliSense and type something wrong, you are normally presented immediately with a red squiggly to let you know something is amiss. Normally, a person would look at the problem, figure out what the heck went wrong, and then avoid that problem again in the future. Granted, I have met people who seem to lack this capability, but their problem is deeper than a decision between VB.NET and C#. So let's make sure that we're all on the same page about this problem: if you have two teams of developers, one that uses VB.NET and one that uses C#, do not expect to see the VB.NET team drinking beer at the end of the project in festive revelry while the C# team is crying over what the hell to do because their code is riddled with case-sensitivity problems that nobody can resolve.

    Lastly, if you leave your caps lock key on, turn it off. Really, what kind of ass-hat would write an entire VB.NET application ENTIRELY IN CAPS? I happen to be a fan of case sensitivity because it encourages precision and uniformity. The last thing I need is a code base that looks like it was ransacked by LeEt HacKors wHo Can uSe wHateVer cASe tHey wanT. I mean really, if you saw someone write this:

        PuBLIc Sub MyMethod
        .
        End Sub

    And upon asking them why BL was upper case, they responded "Oh, I accidentally hit the shift key there. Fortunately for me VB.NET is a case insensitive language, so I saved a couple of keystrokes by leaving it in there."

    Or if you saw:

        PUBLIC SUB ANOTHERMETHOD
        .
        END SUB

    And the response to why it was uppercased was "Yeah, I accidentally had caps lock on. Fortunately for me VB.NET doesn't care. Really dodged a bullet there; glad I wasn't using C#."

    Would you not think that a bit ridiculous? If you want to convince C# developers that C# sucks, go for it. But the case insensitivity argument is crap.


  • How to make creating viewmodels at runtime less painful

    - by Mr Happy
    I apologize for the long question; it reads a bit as a rant, but I promise it's not! I've summarized my question(s) below.

    In the MVC world, things are straightforward. The Model has state, the View shows the Model, and the Controller does stuff to/with the Model (basically); a Controller has no state. To do stuff, the Controller has some dependencies on web services, a repository, the lot. When you instantiate a Controller, you care about supplying those dependencies, nothing else. When you execute an action (a method on the Controller), you use those dependencies to retrieve or update the Model or call some other domain service. If there's any context, say a user wants to see the details of a particular item, you pass the Id of that item as a parameter to the Action. Nowhere in the Controller is there any reference to any state. So far so good.

    Enter MVVM. I love WPF, I love data binding. I love frameworks that make data binding to ViewModels even easier (using Caliburn Micro a.t.m.). I feel things are less straightforward in this world, though. Let's do the exercise again: the Model has state, the View shows the ViewModel, and the ViewModel does stuff to/with the Model (basically); a ViewModel does have state! (To clarify: maybe it delegates all the properties to one or more Models, but that means it must have a reference to the Model one way or another, which is state in itself.) To do stuff, the ViewModel has some dependencies on web services, a repository, the lot. When you instantiate a ViewModel, you care about supplying those dependencies, but also the state. And this, ladies and gentlemen, annoys me to no end.

    Whenever you need to instantiate a ProductDetailsViewModel from the ProductSearchViewModel (from which you called the ProductSearchWebService, which in turn returned IEnumerable<ProductDTO>, everybody still with me?), you can do one of these things:

    - Call new ProductDetailsViewModel(productDTO, _shoppingCartWebService /* dependency */);. This is bad: imagine 3 more dependencies; this means the ProductSearchViewModel needs to take on those dependencies as well. Also, changing the constructor is painful.
    - Call _myInjectedProductDetailsViewModelFactory.Create().Initialize(productDTO);. The factory is just a Func; they are easily generated by most IoC frameworks. I think this is bad because Init methods are a leaky abstraction. You also can't use the readonly keyword for fields that are set in the Init method. I'm sure there are a few more reasons.
    - Call _myInjectedProductDetailsViewModelAbstractFactory.Create(productDTO);. So... this is the pattern (abstract factory) that is usually recommended for this type of problem (a sketch of it follows at the end of this question). I thought it was genius since it satisfies my craving for static typing, until I actually started using it. The amount of boilerplate code is, I think, too much (you know, apart from the ridiculous variable names I get to use). For each ViewModel that needs runtime parameters you get two extra files (factory interface and implementation), and you need to type the non-runtime dependencies about 4 extra times. And each time the dependencies change, you get to change the factory as well. It feels like I don't even use a DI container anymore. (I think Castle Windsor has some kind of solution for this [with its own drawbacks, correct me if I'm wrong].)
    - Do something with anonymous types or a dictionary. I like my static typing.

    So, yeah: mixing state and behavior in this way creates a problem which doesn't exist at all in MVC. And I feel like there currently isn't a really adequate solution for this problem. Now I'd like to observe some things:

    - People actually use MVVM. So they either don't care about all of the above, or they have some brilliant other solution.
    - I haven't found an in-depth example of MVVM with WPF. For example, the NDDD-sample project immensely helped me understand some DDD concepts. I'd really like it if someone could point me in the direction of something similar for MVVM/WPF.
    - Maybe I'm doing MVVM all wrong and I should turn my design upside down. Maybe I shouldn't have this problem at all. Well, I know other people have asked the same question, so I think I'm not the only one.

    To summarize:

    - Am I correct to conclude that having the ViewModel be an integration point for both state and behavior is the reason for some difficulties with the MVVM pattern as a whole?
    - Is using the abstract factory pattern the only/best way to instantiate a ViewModel in a statically typed way?
    - Is there something like an in-depth reference implementation available?
    - Is having a lot of ViewModels with both state/behavior a design smell?
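
    For readers who haven't met the third option, here is a minimal C# sketch of the abstract factory approach being described. All the type names (ProductDto, IShoppingCartService, and the factory pair) are hypothetical stand-ins, not from any real codebase:

        // Runtime state passed to the ViewModel (hypothetical DTO).
        public class ProductDto
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        // A non-runtime dependency the ViewModel needs (hypothetical service).
        public interface IShoppingCartService
        {
            void Add(ProductDto product);
        }

        public class ProductDetailsViewModel
        {
            private readonly ProductDto _product;                 // runtime state
            private readonly IShoppingCartService _shoppingCart;  // injected dependency

            public ProductDetailsViewModel(ProductDto product, IShoppingCartService shoppingCart)
            {
                _product = product;
                _shoppingCart = shoppingCart;
            }
        }

        // The boilerplate pair: one interface and one implementation per ViewModel.
        public interface IProductDetailsViewModelFactory
        {
            ProductDetailsViewModel Create(ProductDto product);
        }

        public class ProductDetailsViewModelFactory : IProductDetailsViewModelFactory
        {
            private readonly IShoppingCartService _shoppingCart;

            // The container injects the non-runtime dependencies once, here.
            public ProductDetailsViewModelFactory(IShoppingCartService shoppingCart)
            {
                _shoppingCart = shoppingCart;
            }

            // Callers supply only the runtime state; readonly fields stay possible.
            public ProductDetailsViewModel Create(ProductDto product)
            {
                return new ProductDetailsViewModel(product, _shoppingCart);
            }
        }

    The upside is static typing end to end; the downside is exactly the boilerplate complained about above: every new dependency has to be threaded through the factory as well.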


  • More Quick Interview Tips

    - by Ajarn Mark Caldwell
    In the last couple of years I have conducted a lot of interviews for application and database developers for my company, and I can tell you that the little things can mean a lot. Here are a few quick tips to help you make a good first impression.

    A year ago I gave you my #1 interview tip: Do some basic research! And a year later, I am still stunned by how few technical people do the most basic of research. I can only guess that it is because it is so ingrained in our psyche that technical competence is everything (see How to Manage Technical Employees for more on this idea) that we forget or ignore the importance of soft skills and the art of the interview. Or maybe it is because we have heard the stories of the uber-geek who has zero personal skills but still makes a fortune working for Microsoft. Well, here's another quick tip: you're probably not as good as he is, and a large number of companies actually run small to medium sized teams and can't really afford to have a social outcast in the group. In a small team, everyone has to get along well, and that's an important part of what I'm evaluating during the interview process.

    My #2 tip is to act alive! I typically conduct screening interviews by phone before I bring someone in for an in-person. I don't care how laid-back you are or if you have a "quiet personality": when we are talking, ACT like you are happy I called and you are interested in getting the job. If you sound like you are bored to death and would be perfectly happy to never work again, I am perfectly happy to help you attain that goal, and I'll move on to the next candidate.

    And closely related to #2, perhaps we'll call it #2.1, is this tip: when I call you on the phone for the interview, don't answer your phone by just saying, "Hello". You know that the odds are about 999-to-1 that it is me calling for the interview, because we have specifically arranged this time slot for the call. And you can see on the caller ID that it is not one of your buddies calling, so identify yourself. Don't make me question whether I dialed the right number. Answer your phone with a, "Hello, this is ___<your full name preferred, but at least your first name>___." And when I say, "Hi, <your name>, this is Mark from <my company>" it would be really nice to hear you say, "Hi, Mark, I have been expecting your call." This sets the perfect tone for our conversation. I know I have the right person; you are professional enough and interested enough in the job or contract to remember your appointments; and now we can move on to a little intro segment and get on with the reason for our call.

    As crazy as it sounds, I've actually had phone interviews that went like this:

    <Ring…>
    You: "Hello?"
    Me: "Hi, this is Mark from _______"
    You: "Yeah?"
    Me: "Is this <your name>?"
    You: "Yeah."
    Me: "I had this time in my calendar for us to talk… were you expecting my call?"
    You: "Oh, yeah, sure…"

    I used to be nice and would try to go ahead with the interview even after this bad start, thinking I was giving the candidate the benefit of the doubt… a second chance… but more often than not it was a struggle, and 10 minutes into what was supposed to be a 45-minute call, I'm looking for a way to hang up without being rude myself. It never worked out. I never brought that person in for an in-person interview, much less offered them the job or contract. Who knows, maybe they were some sort of wunderkind that we missed out on. What I know is that they would never fit in with the rest of the team, and around here that is absolutely critical.

    So, in conclusion… Act alive! Identify yourself! And do at least the most basic of research.


  • Declarative Architectures in Infrastructure as a Service (IaaS)

    - by BuckWoody
    I deal with computing architectures by first laying out requirements, and then laying in any constraints for their success. Only then do I bring in computing elements to apply to the system. As an example, a requirement might be "world-wide availability" and a constraint might be "with less than 80ms response time and full HA" or something similar. Then I can choose from the best fit of technologies, which range from full-up on-premises computing to IaaS, PaaS or SaaS.

    I also deal in abstraction layers: on-premises systems are fully under your control; in IaaS the hardware is abstracted (but not the OS, scale, runtimes and so on); in PaaS the hardware and the OS are abstracted and you focus on code and data only; and in SaaS everything is abstracted – you merely purchase the function you want (like an e-mail server or some such) and simply use it.

    When you think about solutions this way, the architecture moves to the primary factor in your decision. It's problem-first architecting: laying in whatever technology or vendor best fixes the problem. To that end, most architects design a solution using a graphical tool (I use Visio) and then create documents that let the rest of the team (and business) know what is required. It's the template, or recipe, for the solution.

    This is extremely easy to do for SaaS – you merely point out what the needs are, research the vendor and present the findings (and bill) to the business. IT might not even be involved there. In PaaS it's not much more complicated – you use the same Application Lifecycle Management and design tools you always have for code, such as Visual Studio or some other process and toolset, and you can "stamp out" the application in multiple locations, update it and so on.

    IaaS is another story. Here you have multiple machines, operating systems, patches, virus scanning, runtimes, scale patterns, tools and much more to deal with, since essentially it's just an in-house system being hosted by someone else. You can certainly automate builds of servers – we do this as technical professionals every day. From Windows to Linux, it's simple enough to create a "build script" that makes a system just like the one we made yesterday. What is more problematic is being able to tie those systems together in a coherent way (as a solution) and then stamp that out repeatedly, especially when you might want to deploy that solution on-premises, or in one cloud vendor or another.

    Lately I've been working with a company called RightScale that does exactly this. I'll point you to their site for more info, but the general idea is that you document your intent for a set of servers, and it will deploy them to on-premises clouds, Windows Azure, and other cloud providers, all from the same script. In other words, it doesn't contain the images or anything like that – it contains the scripts to build them on-premises or on a cloud vendor like Microsoft. Using a tool like this, you combine the steps of designing a system (all the way down to passwords and accounts if you wish), and then the document drives the distribution and implementation of that intent. As time goes on and more and more companies implement solutions on various providers (perhaps for HA and DR), this becomes a compelling investigation. The RightScale information is here, if you want to investigate it further. Yes, there are other methods I've found, but most are tied to a single kind of cloud, and I'm not into vendor lock-in.
    Poppa Bear Level - Hands-on: Evaluate RightScale at no cost. Just bring your Windows Azure credentials and follow these tutorials:
    - Sign Up for Windows Azure
    - Add Windows Azure to a RightScale Account
    - Windows Azure Virtual Machines 3-tier Deployment

    Momma Bear Level - Just the right level... ;0)
    - Windows Azure Evaluation Guide - if you are new to Windows Azure Virtual Machines and new to RightScale, we recommend that you read the entire evaluation guide to gain a more complete understanding of the Windows Azure + RightScale solution.
    - Windows Azure Support Page @ support.rightscale.com - FAQs, tutorials, etc. for Windows Azure Virtual Machines (Work in Progress)

    Baby Bear Level - Marketing:
    - Windows Azure Page @ www.rightscale.com - find overview information including solution briefs and presentation & demonstration videos
    - Scale and Automate Applications on Windows Azure Solution Brief - how RightScale makes Windows Azure Virtual Machines even better
    - SQL Server on Windows Azure Solution Brief - Run Highly Available SQL Server on Windows Azure Virtual Machines


  • Come meet our Interns in Dublin

    - by klaudia.drulis
    The Oracle Worldwide Product Translation Group (WPTG) provides solutions for all Oracle product and content translation requirements. WPTG is a global organisation with its headquarters in Ireland and employees in Oracle offices worldwide. WPTG offers expertise in fields such as process engineering, tools development, linguistic quality, terminology, global product release, and financial and vendor management. WPTG provides translation solutions for over 40 languages, including Asia Pacific, European, American and Middle Eastern languages.

    WPTG first introduced an intern program over 10 years ago, and it has become a key component of our team's structure. The majority of interns are sourced from a Computer Science related course; these interns join the engineering team. Others are sourced from business courses and work within the business / project management area. The intern program allows us to maintain ties with current course curriculum and brings fresh energy and perspective into our organisation. Four of the full time staff working in Dublin today joined us originally as interns and were subsequently offered permanent positions.

    Come meet some of our 2010 interns. Come and see what Darragh, Anthony, Caoimhe, James and Artemij thought about working within the WPTG at Oracle:

    Darragh: "Oracle has been a fun, challenging work placement for me. From day one I was treated as a full member of staff; this was both comforting and a little bit scary. The responsibilities stack up, but I found I was able to keep on top of everything and even make improvements to how we handle a few things, thanks to a great team and a very supportive manager. There's a very positive atmosphere in work that's really conducive to getting a lot of work done. Ideas seem to be the central hub in my line of business, so all of my ideas and innovations were greeted with enthusiasm. Oracle has given me a fantastic opportunity and I urge you to grab it with both hands; you'll find that you're with a set of like minded people from all walks of life that make work both interesting and fun. Even when the pressure is on, you know that you can always get help and advice from someone nearby. My last word of advice is don't be afraid to stick your neck out. Everyone here is willing to learn, try something new and innovate; your voice will be heard and, who knows, you could end up having a large impact on Oracle and your career."

    Anthony: "I had a great experience working with Oracle. From day one I was treated like a full member of staff with responsibilities of my own. I found that the more I put into the work, the more I got out of the experience. Volunteering and being willing to face challenges have made this a more exciting placement. I am given a lot of leeway to do my own projects, and so I've found that I am really enjoying my time here."

    Caoimhe: "I am currently spending my year of placement within the Release Management team in the WPTG. My main role is to handle the finance process of all translation projects under 100k, which includes creating workspecs and POs, sending out kits, dealing with vendor queries and handling the invoicing and payment part. I am really enjoying my time here at Oracle. Everyone is very open and friendly and willing to help you out with any questions you may have. I would definitely be interested in returning to Oracle after I graduate!"

    James: "I am currently on a 12 month placement with Oracle, working as part of the Worldwide Product Translation Group in Business Management. The Business Management team provides a global view on WPTG's vendor and business strategy and is an interface into WPTG for new business. The team works together to support the external translation partner network. My role is to support the Business Management team and also to work on various projects when the need arises. This involves working with translation vendors and with other Oracle employees worldwide. I am really enjoying my time working for Oracle. At times it can be challenging, but also very rewarding. I would recommend that any student wanting to undertake a placement year apply to Oracle. I made some great friends and I will never forget my time in Dublin."

    Artemij: "From working within Oracle, I have truly understood what a "career path" is, and what opportunities a large corporation like Oracle can offer. Without any illusions, the work itself is exciting and sometimes challenging; it tests your ability to handle pressure, to make decisions and take responsibility, to learn quickly and cooperate efficiently in order to solve a problem. I have learned a lot about myself: what I am good at, and where and what I can do better. My placement at Oracle has allowed me to get a clearer picture of what I want, and which door I am going to open after college."

    If you have any questions related to this article feel free to contact [email protected]. You can find our job opportunities via http://campus.oracle.com


  • Skype support and dead parrot sketches

    - by Greg Low
    We as an industry have a lot to answer for. One thing is the level of support provided for users. Here's the conversation I'm having with Skype right now. It feels like being in a Monty Python skit. I was really hoping that when Microsoft purchased Skype that things might improve.

    8:33:44 PM greglow: Initial Question/Comment: Subscriptions (Unlimited Country, Unlimited Europe, Unlimited Region, Unlimited World)
    8:33:44 PM System: Thank you for contacting Skype Customer Support!
    8:33:49 PM System: Donna Faye has joined this session!
    8:33:50 PM System: Connected with Donna Faye
    8:33:50 PM System: Thank you for contacting Skype Customer Support!
    8:33:54 PM System: Please hold for the next available Live Support Agent.
    8:33:59 PM Donna Faye: Hello! Welcome to Skype Live Support! My name is Donna L. How may I help you?
    8:34:09 PM greglow: I've got a subscription (skypename greglow)
    8:34:21 PM greglow: It's set to auto-renew
    8:34:45 PM greglow: Why have I been getting messages about my Skype number expiring, if it's set to renew automatically
    8:34:54 PM greglow: and has recently renewed automatically
    8:35:09 PM Donna Faye: Thank you for the information.
    8:35:29 PM Donna Faye: To better assist you may I know your Skype name, please?
    8:35:37 PM greglow: greglow
    8:35:44 PM Donna Faye: Thank you.
    8:36:14 PM Donna Faye: I understand your concern that you receive an email notification stating that your calling subscription will about to expire. Am I right?
    8:36:43 PM greglow: No, I got an email telling me that one of my Skype numbers was going to expire
    8:37:08 PM greglow: When I went to the website, it said it was part of the subscription and the subscription was set to auto-renew
    8:37:18 PM greglow: The subscription auto-renewed on Oct XXth
    8:37:24 PM greglow: But the number still expired
    8:37:43 PM greglow: When I logged on today, it said that the number had expired and that I had 52 days left to reactivate it
    8:37:58 PM greglow: I did that but am trying to understand why that happens at all
    8:38:36 PM greglow: Why do you expire numbers that are part of auto-renewing subscriptions?
    8:39:49 PM Donna Faye: Upon checking your account, your calling subscription link to three Online Numbers.
    8:39:55 PM greglow: correct
    8:40:10 PM greglow: but until an hour or so ago, one of them had expired. why?
    8:40:54 PM Donna Faye: Let me explain to you that now Online Number are detach to a calling subscription unlike before it was attach to the calling subscription so therefore each Online Number will recur individually.
    8:41:29 PM Donna Faye: Upon checking your account it shows that all your Online Number will expire on October XX, 2013
    8:41:44 PM Donna Faye: If you don't want to be charged again for you can cancel it anytime.
    8:41:59 PM greglow: I have set it to charge automatically
    8:42:06 PM greglow: so why do the numbers expire?
    8:42:32 PM Donna Faye: No they are not expired they are all active.
    8:42:33 PM greglow: if there is an automatic payment set up, why does anything expire?
    8:42:52 PM greglow: yes, but one of them was expired, and I only fixed it a few hours ago
    8:43:07 PM greglow: I'm trying to understand why it expired
    8:44:54 PM Donna Faye: The email you got just notify you that you should have good enough funding source of your calling subscription and Online Number in order for this to recur.
    8:45:12 PM greglow: Agreed. I did that but the number still expired. Why?
    8:45:38 PM Donna Faye: However do not worry on this hence all Online Number are active and not expired.
    8:45:41 PM greglow: When I logged on today it said that the number had expired and that I had 52 days left to reactivate it. Why did that happen?
    8:46:14 PM Donna Faye: May I know the Online Number you are pertaining, please?
    greglow: The number that had expired was (07) XXXX XXXX (Australia)
    8:49:19 PM Donna Faye: Okay, do not worry on this Online Number because this will expire on October XX, 2013.
    8:49:38 PM Donna Faye: Thank you for all the information.
    8:49:46 PM greglow: I understand that it won't expire now until next year. I'm trying to understand why it stopped working this time
    8:49:51 PM greglow: so I can avoid that next time
    8:50:39 PM Donna Faye: What do yo stop working, you mean that Online Number is unable to be reach out?
    8:50:59 PM Donna Faye: *what do you mean by stop working
    8:51:18 PM greglow: The Skype number +61 7 XXXX XXXX stopped working because it expired
    8:51:32 PM Donna Faye: No, this into expired.
    8:51:34 PM greglow: I'm trying to understand why it expired
    8:51:50 PM greglow: Sorry I don't follow "this into expired"
    8:52:39 PM Donna Faye: Let me inform you that this Online Number is not expired.
    8:52:54 PM Donna Faye: Please disregard the notification you see on your Skype account.
    8:53:02 PM greglow: Sorry, this is getting very frustrating. I already told you I fixed it today. I want to know why it happened.
    8:53:20 PM Donna Faye: Hence I can rest assured you that this is active until October XX, 2013.
    8:53:21 PM greglow: So I can avoid it happening again, on this account, or on our other accounts
    8:53:34 PM greglow: Can I speak to someone who really understands this please?
    Donna Faye: Skype send a notification to Skype users for their Skype product in order for them to be aware the expiration of your product however your Online Number still active on your account hence when your calling subscription Unlimited World recur your Online Number also recur.
    8:57:59 PM Donna Faye: I do apologize for this matter, please allow me to resolve this issue for you.
    9:02:48 PM greglow: My question is still the same: if it's set to auto-pay, why did it expire?
    9:03:50 PM greglow: Why did it stop working? Why did I have to "reactivate" it?
    9:04:08 PM greglow: Why didn't it just keep working if it's set to auto-pay?
    9:04:30 PM greglow: This isn't a difficult concept
    9:04:34 PM Donna Faye: Let me clarify to you that this is not the really expired what is emphasizing on this is when you don't have good enough funding source when you calling subscription recur the Online Number will possibly expired but seem that you have a good funding source then this will continue until 2013.
    9:05:10 PM greglow: When I logged on today, it was not working, and your website said it had expired and that I had 52 days left to reactivate it
    9:05:17 PM greglow: Why?

    and so on, and so on, and so on....


  • SQLAuthority News – A Conversation with an Old Friend – Sri Sridharan

    - by pinaldave
    Sri Sridharan is my old friend and we often talk on GTalk. The subject varies from life in India/USA, movies, music, and of course SQL. We have our differences when we talk about food or movies, but we always agree when we talk about SQL. Yesterday while chatting with him we talked about SQLPASS and the conversation lasted for a long time. Here is the conversation between us on GTalk. I have removed a few of the personal exchanges and formatted it into paragraphs, as GTalk often loses formatting.

    Pinal: Sri, congrats on running for the PASS Board of Directors again. You were so close last year. What made you decide to run again this year?

    Sri: Thank you, Pinal, for your leadership in the PASS India community and all the things you do out there. After coming so close last year, there was no doubt in my mind that I would run again. I was truly humbled by the support I got from the community. Growing up in India for over 25 years, you are brought up in a very competitive part of the world. Right from the pressure of staying at the top of the class from kindergarten to your graduation, to the relentless push from your parents about studying and getting good grades (and nothing else matters), you land up essentially living in a pressure cooker. To survive that relentless pressure, you need to have a thick skin and the ability to stand up for who you really are and what you want to accomplish, and in the process stay true to those values. I am striving for a greater cause: to make PASS an organization that can help people with their SQL Server careers, to make PASS relevant to its chapter members, to make PASS an organization that every SQL professional in the world wants to be connected with. Just because I did not get elected or appointed last year does not mean that these causes are not worth fighting for. Giving up upon failing the first time is simply not in me. If I did that, what message would I send to those who voted for me? What message would I send to my kids?

    Pinal: As someone who has such strong roots in India, what can the Indian PASS community expect from you?

    Sri: First of all, I think fostering regional leadership is something PASS must encourage as part of its global growth plan. For PASS global, being able to understand all the issues in a region of the world and make sound decisions will be a tough thing to do on a continuous basis. I expect people like you, chapter leaders, regional mentors, and MVPs of the region to start playing a bigger role in shaping the next generation of PASS. That is something I said in my campaign and I still stand by it. I would like to see growth in the number of chapters in India. The current count does not truly represent the full potential of that region. I was pretty thrilled to see the Bangalore SQLSaturday happen early this year, and I would like to see more SQLSaturday events, at least in the major metro cities. I know the issues in India are very different from the rest of the world, so the formula needs to be tweaked a little for it to work better there. Once the SQLSaturday model is vetted out, maybe there could be enough justification to have SQLRally India. PASS needs to have a premier SQL event in that region. Going to the USA or Europe, for that matter, is incredibly hard due to VISA issues etc. So this could be a case of PASS coming closer to where the community is.

    Pinal: What portfolio would you take on if you are elected to the PASS Board?

    Sri: There are some very strong folks on the PASS Board today. The President discusses the portfolios with the group and makes the final call on the portfolios. I am also a fan of having a team associated with the portfolios. In that case, one person is the primary for a portfolio but secondary on a couple of other portfolios. This way people on the board have a direct vested interest in a few portfolios. Personally, I know I would do these portfolios good justice – Chapters, Global Growth and Events (SQLSat, SQLRally). I would also try to see if we can get a director to focus on Volunteers. To me that is very critical for growth in the international regions.

    Pinal: This is an interesting conversation with you, Sri. I have known you a long time, but this is indeed inspiring to many. India is a big country and we appreciate your thoughts.

    Sri: Thank you very much for taking the time to chat with me today. Cheers.

    There are pretty strong candidates for the SQLPASS Board elections this year. I know all of them in person, and honestly it is going to be extremely difficult not to vote for everybody. I am indeed in a crunch right now over how to pick one over another. Though the choice is difficult, I encourage you to vote for them. I am extremely confident that the new board of directors will all have the same goal – a better SQL Server community.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, DBA, PostADay, SQL, SQL Authority, SQL PASS, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology


  • Your Next IT Job

    - by BuckWoody
    Some data professionals have worked (and plan to work) in the same place for a long time. In organizations large and small, the turnover rate just isn't that high. This has not been my experience. About every 3-5 years I've changed either roles or companies. That might be due to the IT environment or my personality (or a mix of the two), but the point is that I've had many roles and worked for many companies large and small throughout my 27+ years in IT. At one point this might have been a detriment – a prospective employer looks at the resume and says "it seems you've moved around quite a bit." But I haven't found that to be the case all the time – in fact, in some cases the variety of jobs I've held has been an asset, because I've seen what works (and doesn't) in other environments, which can save time and money.

    So if you're in the first camp – great! Stay where you are, and continue doing the work you love. But if you're in the second, then this post might be useful.

    If you are planning on making a change, or perhaps you've hit a wall at your current location, you might start looking around for a better paying job – and there's nothing wrong with that. We all try to make our lives better, and for some that involves more money. Money, however, isn't always the primary motivator. I've gone to another job that didn't have as many benefits, or had the same salary as the job I was leaving, in order to gain more experience, get a better work/life balance and so on. It's a mix of factors that only you know about.

    So I thought I would lay out a few advantages and disadvantages of the shops I've worked at. This post isn't aimed at a single employer, but represents a mix of what I've experienced, and of course the opinions here are my own. You will most certainly have a different take – if so, please post a response! I also won't mention a specific industry – I've worked everywhere from medical firms, legal offices, retail, billing centers, manufacturing and government, even to NASA. I'm focusing here mostly on size and composition. And I'm making some very broad generalizations here – I am fully aware that a small company might have great benefits and a large company might allow a lot of role flexibility. Your mileage may vary – and again, post those comments!

    Small Company

    To me a "small company" means around 100 people or less – sometimes a lot less. These can be really fun, frustrating places to work.

    Advantages: a great deal of flexibility, a wide range of roles (often at the same time), a large degree of responsibility, immediate feedback, close relationships with co-workers, working directly with your customer.

    Disadvantages: too much responsibility, little work/life balance, an immature political structure, few (if any) benefits. If the business is family-owned, they can easily violate work/life boundaries.

    Medium Sized Company

    In my experience, the next size company I would work for involves from a few hundred people to around five thousand.

    Advantages: good mobility – fairly easy to get promoted, acceptable benefits, more defined responsibilities, better work/life balance, a balanced load for expertise, and still an organizational structure that is fairly simple to understand.

    Disadvantages: pay is not always highest, rapid changes in structure as the organization grows, a transient workforce. You may not be given the opportunity to work with another technology if someone already "owns" it. Politics are painful at this level as people try to learn how to do it.

    Large Company

    When you get into the tens of thousands of folks employed around the world, you're in a large company.

    Advantages: lots of room to move around – sometimes you can work (as I have) multiple jobs through the years and yet stay at the same company, building time for benefits, very defined roles, trained managers (yes, I know some of them are still awful – trust me – I DO know that), higher-end benefits, long careers possible, discounts at retailers and other "soft" benefits, prestige. For some, a higher level of politics (done professionally) is a good thing.

    Disadvantages: you could become another faceless name in the crowd, it might not allow a great deal of flexibility, and large organizational changes might take away any control you have over your career. I've also seen large layoffs happen, and good people get let go while "dead weight" is retained. For some, a higher level of politics is distasteful.

    So what are your experiences? Share with the group!


  • Content Query Web Part and the Yes/No Field

    - by Bil Simser
    The Content Query Web Part (CQWP) is a pretty powerful beast. It allows you to do multiple site queries and aggregate the results. This is great for rolling up content and doing some summary type reporting. Here's a trick to remember about Yes/No fields and using the CQWP. If you're building a news style site and want to aggregate, say, all the announcements that people tag a certain way up onto the home page, this might be a solution.

    First we need to allow a way for users of all our sites to mark an announcement for inclusion on our Intranet home page. We'll do this by just modifying the Announcement content type and adding a Yes/No field to it. There are alternate ways of doing this, like building a new Announcement type or stapling a feature to all sites to add our column, but this is pretty low impact and only affects our current site collection, so let's go with it for now, okay? You can berate me in the comments about the proper way I should have done this part.

    Go to the Site Settings for the site collection and click on Site Content Types under the Galleries. This takes you to the gallery for this site and all subsites. Scroll down until you see the List Content Types and click on Announcements. Now we're modifying the Announcement content type, which affects all those announcement lists that are created by default if you're building sites using the Team Site template (or creating a new Announcements list on any site, for that matter).

    Click on "Add from new site column" under the Column list. This will allow us to create a new Yes/No field that users will see in Announcement items. This field will allow the user to flag the announcement for inclusion on the home page. Feel free to modify the fields as you see fit for your environment; this is just an example.

    Now that we've added the column to our Announcements content type, we can go into any site that has an announcement list, modify an announcement and flag it to be included on our home page. See the new Featured column? That was the result of modifying our Announcements content type on this site collection.

    Now we can move onto the dirty part: displaying it in a CQWP on the home page. And here is where the fun begins (and the head scratching should end). On our home page we want to drop a Content Query Web Part and aggregate any announcement that's been flagged as Featured by the users (we could also add a filter to handle Expires so we don't show old content, so go ahead and do that if you want).

    First add a CQWP to the page, then modify the settings for the web part. In the first section, Query, we want the List Type to be set to Announcements and the Content type to be Announcement. Click Apply and you'll see the results display all Announcements from any site in the site collection. I have five team sites created, each with a unique announcement added to them.

    Now comes the filtering. We don't want to include every announcement, only the ones users flag using that Featured column we added. At first blush you might scroll down to the Additional Filters part of the Query options and set the Featured column to be equal to Yes. This seems correct, doesn't it? After all, the column is a Yes/No column, and looking at an announcement in the site, it displays the field as Yes or No. However, after applying the filter, the rollup comes back backwards: I have the announcements from Team Site 1 and Team Site 4 flagged as Featured, and they are exactly the ones missing from the results. Huh? It's BACKWARDS!

    Let's confirm that. Go back in, change the Additional Filters section from Yes to No, hit Apply, and you get the same result as before. Wait a minute. Shouldn't I see Team Site 1 and 4 if the logic is backwards? Why am I seeing the same thing as before? What gives…

    For whatever reason, unknown to me, a Yes/No field (even though it displays as such) really uses 1 and 0 behind the scenes. Yeah, someone was stuck on using integer values for booleans when they wrote SharePoint (probably after a long night of whiteboarding ways to mess with developers' heads) and came up with this. The solution is pretty simple but not very discoverable: set the filter value for the Featured column to 1 instead of Yes, and it will filter the items marked as Featured correctly (a CAML sketch of this follows at the end of this post).

    This kind of solution could also be extended and enhanced. Here are a few suggestions and ideas:

    - Modify the ItemStyle.xsl file to add a new style for this aggregation which would include the first few paragraphs of the body (or perhaps add another field to the content type called Excerpt or Summary and display that instead)
    - Add an Image column to the Announcement content type to include a picture field and display it in the summary
    - Add a Category choice field (Employee News, Current Events, Headlines, etc.) and add multiple CQWPs to the home page, filtering each one on a different category

    I know some may find this topic old and dusty, but I didn't see a lot out there specifically on filtering Yes/No fields, and the whole 1/0 trick was a little wonky, so I figured a few pictures would help walk through overcoming yet another SharePoint weirdness. With a little work and some creative juices you can easily use the power of aggregation and the CQWP to build a news site from content on your team sites.
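
    For readers who prefer to see it in markup: the CQWP's additional filter ultimately becomes a CAML comparison, and a Yes/No column stores Boolean 1/0 values rather than the display text. A hedged sketch of what the working filter corresponds to (the Featured field name comes from the column added above; treat the exact CAML as illustrative):

        <Where>
          <Eq>
            <FieldRef Name="Featured" />
            <!-- Yes/No columns store 1 (Yes) or 0 (No), not "Yes"/"No" -->
            <Value Type="Boolean">1</Value>
          </Eq>
        </Where>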


  • WPF Databinding- Part 2 of 3

    - by Shervin Shakibi
    This is a follow up to my previous post, WPF Databinding – Not Your Father's Databinding, Part 1 of 3. You can download the source code here: http://ssccinc.com/wpfdatabinding.zip

    Example 04

    In this example we demonstrate the use of default properties and also binding to an instance of an object which is part of a collection bound to its container. This is actually not as complicated as it sounds. First of all, let's take a look at our Employee class. Notice we have overridden the ToString method, which will return the employee's first name, last name and employee number in parentheses:

        public override string ToString()
        {
            return String.Format("{0} {1} ({2})", FirstName, LastName, EmployeeNumber);
        }

    In our XAML we have set the ItemsSource of the list box to just "Binding", and the Grid that contains it has its DataContext set to a collection of our Employee objects:

        DataContext="{StaticResource myEmployeeList}">
        …
        <ListBox Name="employeeListBox" ItemsSource="{Binding }" Grid.Row="0" />

    The ToString method for each instance will get executed, and the list box displays the formatted name for each employee. If we did not have a ToString override, the list box would fall back to displaying the type name instead.

    Now let's take a look at the grid that will display the details when someone clicks on an item. The Grid has the following DataContext:

        DataContext="{Binding ElementName=employeeListBox,
                      Path=SelectedItem}">

    which means it's bound to a specific instance of the Employee object. And within the grid we have textboxes that are bound to different properties of our class:

        <TextBox Grid.Row="0" Grid.Column="1" Text="{Binding Path=FirstName}" />
        <TextBox Grid.Row="1" Grid.Column="1" Text="{Binding Path=LastName}" />
        <TextBox Grid.Row="2" Grid.Column="1" Text="{Binding Path=Title}" />
        <TextBox Grid.Row="3" Grid.Column="1" Text="{Binding Path=Department}" />

    Example 05

    This project demonstrates use of the ObservableCollection and the INotifyPropertyChanged interface. Let's take a look at Employee.cs first; notice it implements the INotifyPropertyChanged interface. Now scroll down and notice that for each setter there is a call to the OnPropertyChanged method, which fires the event notifying that the value of that specific property has been changed. Next, look at EmployeeList.cs; notice it is an ObservableCollection. Go ahead and set the startup project to Example 05, then run it. Click on "Add a new employee" and the new employee should appear in the list box.

    Example 06

    This is a great example of IValueConverter. It's actually a two for one deal; like most of my presentation demos I found this by "Binging" (formerly known as g---ing), but unfortunately now I can't find the original author to give him or her the credit they deserve. Before we look at the code, let's run the app and look at the finished product. Put in 0 in Celsius and you should see the Fahrenheit textbox displaying 32 degrees (I know this is calculating correctly from my elementary school science class); also note the color changed to blue. Now put in 100 in Celsius, which should give us 212 Fahrenheit, but now the color is red, indicating it is hot. And finally put in 75 Fahrenheit and you should see 23.88 for Celsius, and the color now should be black.

    Basically, IValueConverter allows different types to be bound; I'm sure you have had problems in the past trying to bind to Date values. First look at FahrenheitToCelciusConverter.cs; notice it implements IValueConverter. IValueConverter has two methods, Convert and ConvertBack. In each method we have the code for converting Fahrenheit to Celsius and vice versa (a sketch of such a converter appears at the end of this post).

    In our XAML, we set a reference in our Window.Resources section, and for txtCelsius we set the path to txtFahrenheit and the converter to an instance of our FahrenheitToCelciusConverter. There is no need to repeat this for txtFahrenheit, since we have both a Convert and a ConvertBack:

        Text="{Binding UpdateSourceTrigger=PropertyChanged,
               Path=Text, ElementName=txtFahrenheit,
               Converter={StaticResource myTemperatureConverter}}"

    As mentioned earlier, this is a twofer demo. In the second demo, we are converting a double value to a brush. Let's take a look at TemperatureToColorConverter: notice that in our Convert method, if the value is less than our cold temperature threshold we return a blue brush, and if it is higher than our hot temperature threshold we return a red brush. Since we don't have to convert a brush back to a double value in our example, ConvertBack is not implemented.

    Take time and go through these three examples, and I hope you have a better understanding of databinding, ObservableCollection and IValueConverter. In the next blog posting we will talk about ValidationRule, DataTemplates and DataTemplate triggers.
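
    As a companion to the description above, here is a minimal sketch of what a converter like FahrenheitToCelciusConverter typically looks like. The IValueConverter signatures are the real WPF ones; the method bodies and formatting choices are assumptions, since the downloadable sample is the authoritative version:

        using System;
        using System.Globalization;
        using System.Windows.Data;

        public class FahrenheitToCelciusConverter : IValueConverter
        {
            // Convert: Fahrenheit text (from txtFahrenheit) -> Celsius text.
            public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
            {
                double fahrenheit;
                if (double.TryParse(value as string, NumberStyles.Any, culture, out fahrenheit))
                {
                    return ((fahrenheit - 32) * 5.0 / 9.0).ToString("0.##", culture);
                }
                return string.Empty;
            }

            // ConvertBack: Celsius text -> Fahrenheit text, so txtFahrenheit
            // updates when the user types into txtCelsius.
            public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
            {
                double celsius;
                if (double.TryParse(value as string, NumberStyles.Any, culture, out celsius))
                {
                    return (celsius * 9.0 / 5.0 + 32).ToString("0.##", culture);
                }
                return string.Empty;
            }
        }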

    Read the article

  • My Red Gate Experience

    - by Colin Rothwell
    I’m Colin, and I’ve been an intern working with Mike in publishing on Simple-Talk and SQLServerCentral for the past ten weeks. I’ve mostly been working “behind the scenes”, making improvements to the spam filtering, along with various other small tweaks. When I arrived at Red Gate, one of the first things Mike asked me was what I wanted to get out of the internship. It wasn’t a question I’d given a great deal of thought to, but my immediate response was the same as almost anybody’s: to support my growing family. Well, ok, not quite that, but money was certainly a motivator, along with simply making sure that I didn’t get bored over the summer. Three months is a long time to fill, and many of my friends end up getting bored, or worse, knitting obsessively. With the arrogance which seems fairly common among Cambridge people, I wasn’t expecting to really learn much here! In my mind, the part of the year where I am at Uni is the part where I learn things, whilst Red Gate would be an opportunity to apply what I’d learnt. Thankfully, the opposite is true: I’ve learnt a lot during my time here, and there has been a definite positive impact on the way I write code. The first thing I’ve really learnt is that test-driven development is, in general, a sensible way of working. Before coming, I didn’t really get it: how could you test something you hadn’t yet written? It didn’t make sense! My problem was seeing a test as having to test all the behaviour of a given function. Writing tests which test the bare minimum possible and building them up is a really good way of crystallising the direction the code needs to grow in, and ensures you never attempt to write too much code at a time. One really good experience of this was early on in my internship when Mike and I were working on the query used to list active authors: I’d written something which I thought would do the trick, but by starting again using TDD we grew something which revealed that there were several subtle mistakes in the query I’d written. I’ve also been awakened to the value of pair programming. Whilst I could sort of see the point before coming, I also thought that it was impossible that two people would ever get more done at the same computer than if they were working separately. I still think that this is true for projects with pieces that developers can easily work on independently, and with developers who both know the codebase, but I’ve found that pair programming can be really good for learning a code base, and for building up small projects to the point where you can start working on separate components, as well as solving particularly difficult problems. Later on in my internship, for my Down Tools Week project, I was working on adding Python support to Glimpse. Another intern and I pair programmed the entire project, using ping-pong pair programming as much as possible. One bonus that this brought which I wasn’t expecting was that I found myself less prone to distraction: with someone else peering over my shoulder, I didn’t have the ever-present temptation to open gmail, or facebook, or yammer, or twitter, or hacker news, or reddit, and so on, and so forth. I’m quite proud of this project: I think it’s some of the best code I’ve written. I’ve also been really won over to the value of descriptive variable names. In my pre-Red Gate life, as a lone-ranger style cowboy programmer, I’d developed a tendency towards laziness in variable names, sometimes abbreviating or, worse, using acronyms.
    I’ve swiftly realised that this is a bad idea when working with a team: saving a few keystrokes is inevitably not worth it when it comes to reading code again in the future. Longer names also mean you can do away with a majority of comments. I appreciate that if you’ve come up with an O(n*log n) algorithm for something which seemed O(n^2), you probably want to explain how it works, but explaining what a variable name means is a big no-no: it’s so very easy to change the behaviour of the code whilst forgetting about the comments. Whilst at Red Gate, I took the opportunity to attend a code retreat, which really helped me to solidify all the things I’d learnt. To be completely free of any existing code base really lets you focus on best practices and think about how you write code. If you get a chance to go on a similar event, I’d highly recommend it! Cycling to Red Gate, I’ve also become much better at fitting inner tubes: if you’re struggling to get the tube out, or re-fit the tire, letting a bit of air out usually helps. I’ve also become quite a bit better at foosball and will miss having a foosball table! I’d like to finish off by saying thank you to everyone at Red Gate for having me. I’ve really enjoyed working with, and learning from, the team that brings you this web site. If you meet any of them, buy them a drink!

    Read the article

  • 2D Platformer Collision Handling

    - by defender-zone
    Hello, everyone! I am trying to create a 2D platformer (Mario-type) game and I am having some issues with handling collisions properly. I am writing this game in C++, using SDL for input, image loading, font loading, etcetera. I am also using OpenGL via the FreeGLUT library in conjunction with SDL to display graphics. My method of collision detection is AABB (Axis-Aligned Bounding Box), which is really all I need to start with. What I need is an easy way to both detect which side the collision occurred on and handle the collisions properly. So, basically, if the player collides with the top of the platform, reposition him to the top; if there is a collision at the sides, reposition the player back to the side of the object; if there is a collision at the bottom, reposition the player under the platform. I have tried many different ways of doing this, such as trying to find the penetration depth and repositioning the player backwards by the penetration depth. Sadly, nothing I've tried seems to work correctly. Player movement ends up being very glitchy and repositions the player when I don't want it to. Part of the reason is probably that this is something so simple and I'm over-thinking it. If anyone thinks they can help, please take a look at the code below and help me try to improve on this if you can. I would like to refrain from using a library to handle this (as I want to learn on my own) or something like SAT (Separating Axis Theorem) if at all possible. Thank you in advance for your help!

    void world1Level1CollisionDetection()
    {
        for (int i = 0; i < blocks; i++)
        {
            if (de2dCheckCollision(ball, block[i], 0.0f, 0.0f) == true)
            {
                int up = 0;
                int left = 0;
                int right = 0;
                int down = 0;
                if (ball.coords[0] < block[i].coords[0] && block[i].coords[0] < ball.coords[2] && ball.coords[2] < block[i].coords[2])
                {
                    left = 1;
                }
                if (block[i].coords[0] < ball.coords[0] && ball.coords[0] < block[i].coords[2] && block[i].coords[2] < ball.coords[2])
                {
                    right = 1;
                }
                if (ball.coords[1] < block[i].coords[1] && block[i].coords[1] < ball.coords[3] && ball.coords[3] < block[i].coords[3])
                {
                    up = 1;
                }
                if (block[i].coords[1] < ball.coords[1] && ball.coords[1] < block[i].coords[3] && block[i].coords[3] < ball.coords[3])
                {
                    down = 1;
                }
                cout << left << ", " << right << ", " << up << ", " << down << ", " << endl;
                if (left == 1)
                {
                    ball.coords[0] = block[i].coords[0] - 16.0f;
                    ball.coords[2] = block[i].coords[0] - 0.0f;
                }
                if (right == 1)
                {
                    ball.coords[0] = block[i].coords[2] + 0.0f;
                    ball.coords[2] = block[i].coords[2] + 16.0f;
                }
                if (down == 1)
                {
                    ball.coords[1] = block[i].coords[3] + 0.0f;
                    ball.coords[3] = block[i].coords[3] + 16.0f;
                }
                if (up == 1)
                {
                    ball.yspeed = 0.0f;
                    ball.gravity = 0.0f;
                    ball.coords[1] = block[i].coords[1] - 16.0f;
                    ball.coords[3] = block[i].coords[1] - 0.0f;
                }
            }
            if (de2dCheckCollision(ball, block[i], 0.0f, 0.0f) == false)
            {
                ball.gravity = -0.5f;
            }
        }
    }

    To explain what some of this code means: the blocks variable is an integer storing the number of blocks, or platforms. I am checking all of the blocks using a for loop, and the index the loop is currently on is the integer i. The coordinate system might seem a little weird, so it's worth explaining: coords[0] represents the x position (left) of the object (where it starts on the x axis); coords[1] represents the y position (top) of the object (where it starts on the y axis); coords[2] represents coords[0] plus the width of the object (right); coords[3] represents coords[1] plus the height of the object (bottom).
    de2dCheckCollision performs AABB collision detection. Up is negative y and down is positive y, as it is in most games. Hopefully I have provided enough information for someone to help me successfully. If there is something I left out that might be crucial, let me know and I'll provide the necessary information. Finally, for anyone who can help, providing code would be very helpful and much appreciated. Thank you again for your help!
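    As a point of comparison with the flag-based approach above, a common way to avoid glitchy repositioning is to resolve along the single axis with the smallest overlap instead of testing each side independently. The sketch below illustrates that idea; it is written in C# for illustration, uses hypothetical Left/Top/Right/Bottom fields standing in for coords[0..3], and is a sketch of the technique rather than a drop-in patch for the code above:

    using System;

    // Axis-aligned box described by its edges, mirroring coords[0..3] above
    // (y grows downward, matching the question's convention).
    struct Box
    {
        public float Left, Top, Right, Bottom;
    }

    static class Collision
    {
        // Pushes 'player' out of 'block' along the axis of least penetration.
        public static void Resolve(ref Box player, Box block)
        {
            float overlapX = Math.Min(player.Right, block.Right) - Math.Max(player.Left, block.Left);
            float overlapY = Math.Min(player.Bottom, block.Bottom) - Math.Max(player.Top, block.Top);

            if (overlapX <= 0 || overlapY <= 0)
                return; // boxes do not intersect, nothing to resolve

            if (overlapX < overlapY)
            {
                // Horizontal push: direction depends on which side the player is on.
                float shift = (player.Left < block.Left) ? -overlapX : overlapX;
                player.Left += shift;
                player.Right += shift;
            }
            else
            {
                // Vertical push: negative shift means landing on top of the block.
                float shift = (player.Top < block.Top) ? -overlapY : overlapY;
                player.Top += shift;
                player.Bottom += shift;
            }
        }
    }

    Resolving one axis at a time avoids the situation in the code above where two side flags fire in the same frame and the player gets repositioned twice.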

    Read the article

  • FAQ: Highlight GridView Row on Click and Retain Selected Row on Postback

    - by Vincent Maverick Durano
    A couple of months ago I wrote a simple demo about “Highlighting GridView Row on MouseOver”. I’ve noticed many members in the forums (http://forums.asp.net) are asking how to highlight a row in a GridView and retain the selected row across postbacks, so I’ve decided to write this post to demonstrate how to implement it, as a reference to others who might need it. In this demo I am going to use a combination of plain JavaScript and jQuery to do the client-side manipulation. I presume that you already know how to bind the grid with data, because I will not include the code for populating the GridView here. For binding the GridView you can refer to this post: Binding GridView with Data the ADO.Net way, or this one: GridView Custom Paging with LINQ. To get started, let’s implement the highlighting of a GridView row on row click and retain the selected row on postback. For simplicity I set up the page like this:

    <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
    <h2>You have selected Row: (<asp:Label ID="Label1" runat="server" />)</h2>
    <asp:HiddenField ID="hfCurrentRowIndex" runat="server"></asp:HiddenField>
    <asp:HiddenField ID="hfParentContainer" runat="server"></asp:HiddenField>
    <asp:Button ID="Button1" runat="server" onclick="Button1_Click" Text="Trigger Postback" />
    <asp:GridView ID="grdCustomer" runat="server" AutoGenerateColumns="false" onrowdatabound="grdCustomer_RowDataBound">
        <Columns>
            <asp:BoundField DataField="Company" HeaderText="Company" />
            <asp:BoundField DataField="Name" HeaderText="Name" />
            <asp:BoundField DataField="Title" HeaderText="Title" />
            <asp:BoundField DataField="Address" HeaderText="Address" />
        </Columns>
    </asp:GridView>
    </asp:Content>

    Note: Since the action is done at the client side, when we do a postback (like clicking on a button) the page will be re-created and you will lose the highlighted row. This is normal, because the server doesn't know anything about the client/browser unless you do something to notify it that something has changed. To persist the selection we will use HiddenField controls to store the data, so that when the page posts back we can reference the values from there.
    Now here are the JavaScript functions below:

    <asp:content id="Content1" runat="server" contentplaceholderid="HeadContent">
    <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4/jquery.min.js" type="text/javascript"></script>
    <script type="text/javascript">

        var prevRowIndex;

        function ChangeRowColor(row, rowIndex) {
            var parent = document.getElementById(row);
            var currentRowIndex = parseInt(rowIndex) + 1;

            if (prevRowIndex == currentRowIndex) {
                return;
            }
            else if (prevRowIndex != null) {
                parent.rows[prevRowIndex].style.backgroundColor = "#FFFFFF";
            }

            parent.rows[currentRowIndex].style.backgroundColor = "#FFFFD6";

            prevRowIndex = currentRowIndex;

            $('#<%= Label1.ClientID %>').text(currentRowIndex);

            $('#<%= hfParentContainer.ClientID %>').val(row);
            $('#<%= hfCurrentRowIndex.ClientID %>').val(rowIndex);
        }

        $(function () {
            RetainSelectedRow();
        });

        function RetainSelectedRow() {
            var parent = $('#<%= hfParentContainer.ClientID %>').val();
            var currentIndex = $('#<%= hfCurrentRowIndex.ClientID %>').val();
            if (parent != null) {
                ChangeRowColor(parent, currentIndex);
            }
        }

    </script>
    </asp:content>

    The ChangeRowColor() function sets the background color of the selected row. It is also where we store the parent container and rowIndex values in the HiddenFields. The $(function(){}); is shorthand for the jQuery document.ready event. This event fires once the page is posted back to the server, which is why we call the RetainSelectedRow() function there. The RetainSelectedRow() function reads the currently selected values stored in the HiddenFields and passes them to the ChangeRowColor() function to re-apply the highlight. Finally, here's the code-behind part:

    protected void grdCustomer_RowDataBound(object sender, GridViewRowEventArgs e)
    {
        if (e.Row.RowType == DataControlRowType.DataRow)
        {
            e.Row.Attributes.Add("onclick", string.Format("ChangeRowColor('{0}','{1}');", e.Row.ClientID, e.Row.RowIndex));
        }
    }

    The code above is responsible for attaching the JavaScript onclick event to each row, calling the ChangeRowColor() function and passing e.Row.ClientID and e.Row.RowIndex to it. Here's the sample output below: That's it! I hope someone finds this post useful! Technorati Tags: jQuery,GridView,JavaScript,TipTricks

    Read the article

  • Master Page: Dynamically Adding Rows in ASP Table on Button Click event

    - by Vincent Maverick Durano
    In my previous post here, I wrote an example that demonstrates how to generate table rows dynamically using an ASP Table on click of a Button control. Based on some comments on my previous example and in the forums, readers wanted to implement it within a master page. Unfortunately the code in my previous example doesn't work in a master page, for the following main reasons:

    1. The Table is dynamically added within the Form tag, and so the TextBox controls will not be generated correctly in the page.
    2. The data will not be retained across postbacks, because the SetPreviousData() method is looking for the Table element within the Page and not on the MasterPage.
    3. The Request.Form key value should be set correctly, since all controls within the master page are prefixed with the naming container ID to prevent duplicate IDs in the final rendered HTML. For example, the TextBox control with the ID TextBoxRow will end up with the ID ctl00$MainBody$TextBoxRow.

    In order for the previous example to work within a master page, we have to correct those three main reasons above, and this post will guide you through doing so. Suppose we have this content page declaration below:

    <asp:Content ID="Content1" ContentPlaceHolderID="MainHead" Runat="Server"> </asp:Content>
    <asp:Content ID="Content2" ContentPlaceHolderID="MainBody" Runat="Server">
    <asp:PlaceHolder ID="PlaceHolder1" runat="server">
    <asp:Button ID="BTNAdd" runat="server" Text="Add New Row" OnClick="BTNAdd_Click" />
    </asp:PlaceHolder>
    </asp:Content>

    As you notice, I've added a PlaceHolder control within the MainBody ContentPlaceHolder. This is because we are going to generate the Table in the PlaceHolder instead of generating it within the Form element. Now since issue #1 is already corrected, let's proceed to the code-behind part.
    Here are the full code blocks below:

    using System;
    using System.Web.UI;
    using System.Web.UI.WebControls;

    public partial class DynamicControlDemo : System.Web.UI.Page
    {
        private int numOfRows = 1;

        protected void Page_Load(object sender, EventArgs e)
        {
            // Generate the rows on initial load
            if (!Page.IsPostBack)
            {
                GenerateTable(numOfRows);
            }
        }

        protected void BTNAdd_Click(object sender, EventArgs e)
        {
            if (ViewState["RowsCount"] != null)
            {
                numOfRows = Convert.ToInt32(ViewState["RowsCount"].ToString());
                GenerateTable(numOfRows);
            }
        }

        private void SetPreviousData(int rowsCount, int colsCount)
        {
            Table table = (Table)this.Page.Master.FindControl("MainBody").FindControl("Table1"); // ****
            if (table != null)
            {
                for (int i = 0; i < rowsCount; i++)
                {
                    for (int j = 0; j < colsCount; j++)
                    {
                        // Extracting the dynamic controls from the table
                        TextBox tb = (TextBox)table.Rows[i].Cells[j].FindControl("TextBoxRow_" + i + "Col_" + j);
                        // Use the Request object for getting the previous data of the dynamic textbox
                        tb.Text = Request.Form["ctl00$MainBody$TextBoxRow_" + i + "Col_" + j]; // *****
                    }
                }
            }
        }

        private void GenerateTable(int rowsCount)
        {
            // Create the Table and add it to the page
            Table table = new Table();
            table.ID = "Table1";
            PlaceHolder1.Controls.Add(table); // ******

            // The number of columns to be generated
            const int colsCount = 3; // You can change the value of 3 based on your requirements

            // Now iterate through the table and add your controls
            for (int i = 0; i < rowsCount; i++)
            {
                TableRow row = new TableRow();
                for (int j = 0; j < colsCount; j++)
                {
                    TableCell cell = new TableCell();
                    TextBox tb = new TextBox();

                    // Set a unique ID for each TextBox added
                    tb.ID = "TextBoxRow_" + i + "Col_" + j;
                    // Add the control to the TableCell
                    cell.Controls.Add(tb);
                    // Add the TableCell to the TableRow
                    row.Cells.Add(cell);
                }
                // And finally, add the TableRow to the Table
                table.Rows.Add(row);
            }

            // Set previous data on postbacks
            SetPreviousData(rowsCount, colsCount);

            // Store the current rows count in ViewState
            rowsCount++;
            ViewState["RowsCount"] = rowsCount;
        }
    }

    As you observed, the code is pretty much similar to the previous example except for the lines marked with asterisks above. That's it! I hope someone finds this post useful! Technorati Tags: Dynamic Controls,ASP.NET,C#,Master Page

    Read the article

  • Nothing drawing on screen OpenGL with GLSL

    - by codemonkey
    I hate to be asking this kind of question here, but I am at a complete loss as to what is going wrong, so please bear with me. I am trying to render a single cube (voxel) in the center of the screen, through OpenGL with GLSL on Mac I begin by setting up everything using glut glutInit(&argc, argv); glutInitDisplayMode(GLUT_RGBA|GLUT_ALPHA|GLUT_DOUBLE|GLUT_DEPTH); glutInitWindowSize(DEFAULT_WINDOW_WIDTH, DEFAULT_WINDOW_HEIGHT); glutCreateWindow("Cubez-OSX"); glutReshapeFunc(reshape); glutDisplayFunc(render); glutIdleFunc(idle); _electricSheepEngine=new ElectricSheepEngine(DEFAULT_WINDOW_WIDTH, DEFAULT_WINDOW_HEIGHT); _electricSheepEngine->initWorld(); glutMainLoop(); Then inside the engine init camera & projection matrices: cameraPosition=glm::vec3(2,2,2); cameraTarget=glm::vec3(0,0,0); cameraUp=glm::vec3(0,0,1); glm::vec3 cameraDirection=glm::normalize(cameraPosition-cameraTarget); cameraRight=glm::cross(cameraDirection, cameraUp); cameraRight.z=0; view=glm::lookAt(cameraPosition, cameraTarget, cameraUp); lensAngle=45.0f; aspectRatio=1.0*(windowWidth/windowHeight); nearClippingPlane=0.1f; farClippingPlane=100.0f; projection=glm::perspective(lensAngle, aspectRatio, nearClippingPlane, farClippingPlane); then init shaders and check compilation and bound attributes & uniforms to be correctly bound (my previous question) These are my two shaders, vertex: #version 120 attribute vec3 position; attribute vec3 inColor; uniform mat4 mvp; varying vec3 fragColor; void main(void){ fragColor = inColor; gl_Position = mvp * vec4(position, 1.0); } and fragment: #version 120 varying vec3 fragColor; void main(void) { gl_FragColor = vec4(fragColor,1.0); } init the cube: setPosition(glm::vec3(0,0,0)); struct voxelData data[]={ //front face {{-1.0, -1.0, 1.0}, {0.0, 0.0, 1.0}}, {{ 1.0, -1.0, 1.0}, {0.0, 1.0, 1.0}}, {{ 1.0, 1.0, 1.0}, {0.0, 0.0, 1.0}}, {{-1.0, 1.0, 1.0}, {0.0, 1.0, 1.0}}, //back face {{-1.0, -1.0, -1.0}, {0.0, 0.0, 1.0}}, {{ 1.0, -1.0, -1.0}, {0.0, 1.0, 1.0}}, {{ 1.0, 1.0, -1.0}, {0.0, 0.0, 1.0}}, {{-1.0, 1.0, -1.0}, {0.0, 1.0, 1.0}} }; glGenBuffers(1, &modelVerticesBufferObject); glBindBuffer(GL_ARRAY_BUFFER, modelVerticesBufferObject); glBufferData(GL_ARRAY_BUFFER, sizeof(data), data, GL_STATIC_DRAW); glBindBuffer(GL_ARRAY_BUFFER, 0); const GLubyte indices[] = { // Front 0, 1, 2, 2, 3, 0, // Back 4, 6, 5, 4, 7, 6, // Left 2, 7, 3, 7, 6, 2, // Right 0, 4, 1, 4, 1, 5, // Top 6, 2, 1, 1, 6, 5, // Bottom 0, 3, 7, 0, 7, 4 }; glGenBuffers(1, &modelFacesBufferObject); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, modelFacesBufferObject); glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW); glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0); and then the render call: glClearColor(0.52, 0.8, 0.97, 1.0); glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); glEnable(GL_DEPTH_TEST); //use the shader glUseProgram(shaderProgram); //enable attributes in program glEnableVertexAttribArray(shaderAttribute_position); glEnableVertexAttribArray(shaderAttribute_color); //model matrix using model position vector glm::mat4 mvp=projection*view*voxel->getModelMatrix(); glUniformMatrix4fv(shaderAttribute_mvp, 1, GL_FALSE, glm::value_ptr(mvp)); glBindBuffer(GL_ARRAY_BUFFER, voxel->modelVerticesBufferObject); glVertexAttribPointer(shaderAttribute_position, // attribute 3, // number of elements per vertex, here (x,y) GL_FLOAT, // the type of each element GL_FALSE, // take our values as-is sizeof(struct voxelData), // coord every (sizeof) elements 0 // offset of first element ); glBindBuffer(GL_ARRAY_BUFFER, 
voxel->modelVerticesBufferObject); glVertexAttribPointer(shaderAttribute_color, // attribute 3, // number of colour elements per vertex, here (x,y) GL_FLOAT, // the type of each element GL_FALSE, // take our values as-is sizeof(struct voxelData), // coord every (sizeof) elements (GLvoid *)(offsetof(struct voxelData, color3D)) // offset of colour data ); //draw the model by going through its elements array glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, voxel->modelFacesBufferObject); int bufferSize; glGetBufferParameteriv(GL_ELEMENT_ARRAY_BUFFER, GL_BUFFER_SIZE, &bufferSize); glDrawElements(GL_TRIANGLES, bufferSize/sizeof(GLushort), GL_UNSIGNED_SHORT, 0); //close up the attribute in program, no more need glDisableVertexAttribArray(shaderAttribute_position); glDisableVertexAttribArray(shaderAttribute_color); but on screen all I get is the clear color :$ I generate my model matrix using: modelMatrix=glm::translate(glm::mat4(1.0), position); which in debug turns out to be for the position of (0,0,0): |1, 0, 0, 0| |0, 1, 0, 0| |0, 0, 1, 0| |0, 0, 0, 1| Sorry for such a question, I know it is annoying to look at someone's code, but I promise I have tried to debug around and figure it out as much as I can, and can't come to a solution Help a noob please? EDIT: Full source here, if anyone wants

    Read the article

  • How do I resolve this exercise on C++? [closed]

    - by user40630
    (Card Shuffling and Dealing) Create a program to shuffle and deal a deck of cards. The program should consist of class Card, class DeckOfCards and a driver program. Class Card should provide: a) Data members face and suit of type int. b) A constructor that receives two ints representing the face and suit and uses them to initialize the data members. c) Two static arrays of strings representing the faces and suits. d) A toString function that returns the Card as a string in the form “face of suit.” You can use the + operator to concatenate strings. Class DeckOfCards should contain: a) A vector of Cards named deck to store the Cards. b) An integer currentCard representing the next card to deal. c) A default constructor that initializes the Cards in the deck. The constructor should use vector function push_back to add each Card to the end of the vector after the Card is created and initialized. This should be done for each of the 52 Cards in the deck. d) A shuffle function that shuffles the Cards in the deck. The shuffle algorithm should iterate through the vector of Cards. For each Card, randomly select another Card in the deck and swap the two Cards. e) A dealCard function that returns the next Card object from the deck. f) A moreCards function that returns a bool value indicating whether there are more Cards to deal. The driver program should create a DeckOfCards object, shuffle the cards, then deal the 52 cards. The above is the exercise I'm trying to solve. I'd very much appreciate it if someone could solve it and explain it to me. The main idea of the program is quite simple. What I don't get is how to build the constructor for the class DeckOfCards and how to generate the 52 cards of the deck with different suits and faces. Until now I've managed to do this:

    #include <iostream>
    #include <vector>
    using namespace std;

    /*
     *
     */
    /*
    a) Data members face and suit of type int.
    b) A constructor that receives two ints representing the face and suit and uses them to initialize the data members.
    c) Two static arrays of strings representing the faces and suits.
    d) A toString function that returns the Card as a string in the form “face of suit.” You can use the + operator to concatenate strings.
    */
    class Card
    {
    public:
        Card(int, int);
        string toString();
    private:
        int suit, face;
        static string faceNames[13];
        static string suitNames[4];
    };

    string Card::faceNames[13] = {"Ace","Two","Three","Four","Five","Six","Seven","Eight","Nine","Ten","Queen","Jack","King"};
    string Card::suitNames[4] = {"Diamonds","Clubs","Hearts","Spades"};

    string Card::toString()
    {
        return faceNames[face]+" of "+suitNames[suit];
    }

    Card::Card(int f, int s)
        : face(f), suit(s)
    {
    }

    /*
    Class DeckOfCards should contain:
    a) A vector of Cards named deck to store the Cards.
    b) An integer currentCard representing the next card to deal.
    c) A default constructor that initializes the Cards in the deck. The constructor should use vector function push_back to add each Card to the end of the vector after the Card is created and initialized. This should be done for each of the 52 Cards in the deck.
    d) A shuffle function that shuffles the Cards in the deck. The shuffle algorithm should iterate through the vector of Cards. For each Card, randomly select another Card in the deck and swap the two Cards.
    e) A dealCard function that returns the next Card object from the deck.
    f) A moreCards function that returns a bool value indicating whether there are more Cards to deal.
    */
    class DeckOfCards
    {
    public:
        DeckOfCards();
        void shuffleCards();
        Card dealCard();
        bool moreCards();
    private:
        vector<Card> deck(52);
        int currentCard;
    };

    int main(int argc, char** argv)
    {
        return 0;
    }

    DeckOfCards::DeckOfCards()
    {
        // I'm stuck here. I have no idea of what to take out of here.
        // I still don't fully get the idea of a class inside a class, and that's turning out to be a problem.
        // I try to find a way to set the suits and faces members of the class Card but I can't figure out how.
        for(int i=0; i<deck.size(); i++)
        {
            deck[i] //....There is no function to set them. They must be set when initialized. But how??
        }
    }

    For easier reading: http://pastebin.com/pJeXMH0f
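    For readers following along: the two things the question is stuck on, initializing 52 cards and shuffling them, come down to a pair of nested loops over suits and faces plus the swap-based shuffle the exercise describes. A minimal sketch of those steps is below; it is written in C# for illustration (the exercise itself requires C++), and the names mirror the exercise text rather than any official solution:

    using System;
    using System.Collections.Generic;

    class Card
    {
        private readonly int face; // index into FaceNames
        private readonly int suit; // index into SuitNames

        private static readonly string[] FaceNames =
        {
            "Ace", "Two", "Three", "Four", "Five", "Six", "Seven",
            "Eight", "Nine", "Ten", "Jack", "Queen", "King"
        };
        private static readonly string[] SuitNames =
        {
            "Diamonds", "Clubs", "Hearts", "Spades"
        };

        public Card(int face, int suit)
        {
            this.face = face;
            this.suit = suit;
        }

        public override string ToString()
        {
            return FaceNames[face] + " of " + SuitNames[suit];
        }
    }

    class DeckOfCards
    {
        private readonly List<Card> deck = new List<Card>();
        private int currentCard = 0;
        private static readonly Random rng = new Random();

        public DeckOfCards()
        {
            // One card per (suit, face) pair gives all 52 cards.
            for (int suit = 0; suit < 4; suit++)
                for (int face = 0; face < 13; face++)
                    deck.Add(new Card(face, suit));
        }

        public void Shuffle()
        {
            // For each position, swap with a randomly chosen position,
            // exactly as the exercise describes. (A classic Fisher-Yates
            // shuffle would instead draw from the unshuffled remainder.)
            for (int i = 0; i < deck.Count; i++)
            {
                int j = rng.Next(deck.Count);
                Card temp = deck[i];
                deck[i] = deck[j];
                deck[j] = temp;
            }
        }

        public bool MoreCards() { return currentCard < deck.Count; }

        public Card DealCard() { return deck[currentCard++]; }
    }

    class Program
    {
        static void Main()
        {
            var deck = new DeckOfCards();
            deck.Shuffle();
            while (deck.MoreCards())
                Console.WriteLine(deck.DealCard());
        }
    }

    The same shape carries straight back to C++: the member would be declared as vector<Card> deck; and sized by push_back in the constructor rather than in the declaration, which is what the deck(52) line above trips over, and the nested loops and swap loop translate line for line.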

    Read the article

  • Installing Ubuntu 12.04.1 x64 with Fake RAID 1 [SOLVED]

    - by Arkadius
    I had: Software: Dual boot with Windows XP Ubuntu 10.04 LTS x32 Hardware Fake RAID 1 (mirroring) with 2x1 TB: Partition 1 - Windows Partition 2 - SWAP Partition 3 - / (root) Partition 4 - Extended Partition 5 - /home Partition 6 - /data arek@domek:/var/log/installer$ sudo fdisk -l Disk /dev/sda: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000de1b9 Device Boot Start End Blocks Id System /dev/sda1 * 63 524297339 262148638+ 7 HPFS/NTFS/exFAT /dev/sda2 524297340 528506369 2104515 82 Linux swap / Solaris /dev/sda3 528506370 570468149 20980890 83 Linux /dev/sda4 570468150 1953118439 691325145 5 Extended /dev/sda5 570468213 675340469 52436128+ 83 Linux /dev/sda6 675340533 1953118439 638888953+ 83 Linux Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000de1b9 Device Boot Start End Blocks Id System /dev/sdb1 * 63 524297339 262148638+ 7 HPFS/NTFS/exFAT /dev/sdb2 524297340 528506369 2104515 82 Linux swap / Solaris /dev/sdb3 528506370 570468149 20980890 83 Linux /dev/sdb4 570468150 1953118439 691325145 5 Extended /dev/sdb5 570468213 675340469 52436128+ 83 Linux /dev/sdb6 675340533 1953118439 638888953+ 83 Linux arek@domek:/var/log/installer$ ls -l /dev/mapper/ total 0 crw------- 1 root root 10, 236 Oct 7 20:17 control lrwxrwxrwx 1 root root 7 Oct 7 20:17 pdc_jhjbcaha -> ../dm-0 lrwxrwxrwx 1 root root 7 Oct 7 20:17 pdc_jhjbcaha1 -> ../dm-1 lrwxrwxrwx 1 root root 7 Oct 7 20:17 pdc_jhjbcaha2 -> ../dm-2 lrwxrwxrwx 1 root root 7 Oct 7 20:17 pdc_jhjbcaha3 -> ../dm-3 lrwxrwxrwx 1 root root 7 Oct 7 20:17 pdc_jhjbcaha4 -> ../dm-4 lrwxrwxrwx 1 root root 7 Oct 7 20:17 pdc_jhjbcaha5 -> ../dm-5 lrwxrwxrwx 1 root root 7 Oct 7 20:17 pdc_jhjbcaha6 -> ../dm-6 I wanted to upgrade from 10.04 x32 to 12.04 x64 using FRESH installation. So, run installation of Ubuntu 12.04.1 x64 LTS using alternate CD. During the installation I selected manual partitioning and to: - Use and Format / (root) - Use and Format SWAP - Use and Keep data on /home - Use and Keep data on /data After I clicked "Continue" I get error creating and formatting SWAP partition. I go to terminal with Alt + F2 (?) and hit enter. I discovered that there was visible RAID as only disk with NO partitions. Something like this: arek@domek:/var/log/installer$ ls -l /dev/mapper/ lrwxrwxrwx 1 root root 7 Oct 7 20:17 /dev/mapper/pdc_jhjbcaha -> ../dm-0 arek@domek:/var/log/installer$ ls -l /dev/dm* brw-rw---- 1 root disk 252, 0 Oct 7 20:17 /dev/dm-0 So I switched to log console Alt+F3 (?) 
and saw errors like below: Oct 7 14:02:45 check-missing-firmware: /dev/.udev/firmware-missing does not exist, skipping Oct 7 14:02:45 check-missing-firmware: /run/udev/firmware-missing does not exist, skipping Oct 7 14:02:45 check-missing-firmware: no missing firmware in /dev/.udev/firmware-missing /run/udev/firmware-missing Oct 7 14:02:45 anna-install: Installing dmraid-udeb Oct 7 14:02:45 anna[12599]: DEBUG: retrieving dmraid-udeb 1.0.0.rc16-4.1ubuntu8 Oct 7 14:02:49 anna[12599]: DEBUG: retrieving libdmraid1.0.0.rc16-udeb 1.0.0.rc16-4.1ubuntu8 Oct 7 14:02:49 anna[12599]: DEBUG: retrieving kpartx-udeb 0.4.9-3ubuntu5 Oct 7 14:02:49 disk-detect: Serial ATA RAID disk(s) detected. Oct 7 14:02:55 disk-detect: Enabling dmraid support. Oct 7 14:02:55 disk-detect: RAID set "pdc_jhjbcaha" was activated Oct 7 14:02:55 HERE --> dmraid-activate: ERROR: Cannot retrieve RAID set information for pdc_jhjbcaha Oct 7 14:02:56 check-missing-firmware: /dev/.udev/firmware-missing does not exist, skipping Oct 7 14:02:56 check-missing-firmware: /run/udev/firmware-missing does not exist, skipping Oct 7 14:02:56 check-missing-firmware: no missing firmware in /dev/.udev/firmware-missing /run/udev/firmware-missing Oct 7 14:02:57 main-menu[428]: DEBUG: resolver (libnewt0.52): package doesn't exist (ignored) Oct 7 14:02:57 main-menu[428]: DEBUG: resolver (ext2-modules): package doesn't exist (ignored) Oct 7 14:02:57 main-menu[428]: INFO: Menu item 'partman-base' selected Oct 7 14:02:57 kernel: [ 316.512999] NTFS driver 2.1.30 [Flags: R/O MODULE]. Oct 7 14:02:57 kernel: [ 316.523221] Btrfs loaded Oct 7 14:02:57 kernel: [ 316.534781] JFS: nTxBlock = 8192, nTxLock = 65536 Oct 7 14:02:57 kernel: [ 316.554749] SGI XFS with ACLs, security attributes, realtime, large block/inode numbers, no debug enabled Oct 7 14:02:57 kernel: [ 316.555336] SGI XFS Quota Management subsystem Oct 7 14:02:58 md-devices: mdadm: No arrays found in config file or automatically Oct 7 14:02:58 partman: No matching physical volumes found Oct 7 14:02:58 partman: No volume groups found Oct 7 14:02:58 partman: Reading all physical volumes. This may take a while... Oct 7 14:02:58 partman-lvm: No volume groups found Oct 7 14:02:58 partman: Error running 'tune2fs -l /dev/mapper/pdc_jhjbcaha' Oct 7 14:02:58 partman: Error running 'tune2fs -l /dev/mapper/pdc_jhjbcaha' Oct 7 14:02:58 partman: Error running 'tune2fs -l /dev/mapper/pdc_jhjbcaha' Oct 7 14:06:11 HERE --> partman: mkswap: can't open '/dev/mapper/pdc_jhjbcaha2': No such file or directory Oct 7 14:07:28 init: starting pid 401, tty '/dev/tty2': '-/bin/sh' Oct 7 14:15:00 net/hw-detect.hotplug: Detected hotpluggable network interface eth0 Oct 7 14:15:00 net/hw-detect.hotplug: Detected hotpluggable network interface lo As You can see there are 2 errors Oct 7 14:02:55 dmraid-activate: ERROR: Cannot retrieve RAID set information for pdc_jhjbcaha and Oct 7 14:06:11 partman: mkswap: can't open '/dev/mapper/pdc_jhjbcaha2': No such file or directory I looked in the internet and try to run command "dmraid -ay" and get something like that: dmraid -ay /dev/mapper/pdc_jhjbcaha -> Already activated /dev/mapper/pdc_jhjbcaha1 -> Successfully activated /dev/mapper/pdc_jhjbcaha2 -> Successfully activated /dev/mapper/pdc_jhjbcaha3 -> Successfully activated /dev/mapper/pdc_jhjbcaha4 -> Successfully activated /dev/mapper/pdc_jhjbcaha5 -> Successfully activated /dev/mapper/pdc_jhjbcaha6 -> Successfully activated Then I returned to installer with Alt+F1 (?) and click "Return" to return to partitioning menu. 
    I did NOT change anything, just selected "Continue" again, and everything went smoothly. I hope this will help someone. arkadius

    Read the article

  • Books are Dead! Long Live the Books!

    - by smisner
    We live in interesting times with regard to the availability of technical material. We have lots of free written material online in the form of vendor documentation, forums, blogs, and Twitter. And we have written material that we can buy in the form of books, magazines, and training materials. Online videos and training – some free and some not free – are also an option. All of these formats are useful for one need or another. As an author, I pay particular attention to the demand for books, and for now I see no reason to stop authoring books. I assure you that I don’t get rich from the effort, and fortunately that is not my motivation. As someone who likes to refer to books frequently, I am still a big believer in books and have evidence from book sales that there are others like me. If I can do my part to help others learn about the technologies I work with, I will continue to produce content in a variety of formats, including books. (You can view a list of all of my books on the Publications page of my site and my online training videos at Pluralsight.) As a consumer of technical information, I prefer books because a book typically can get into a topic much more deeply than a blog post, and can provide more context than vendor documentation. It comes with a table of contents and a (hopefully accurate) index that helps me zero in on a topic of interest, and of course I can use the Search feature in digital form. Some people suggest that technology books are outdated as soon as they get published. I guess it depends on where you are with technology. Not everyone is able to upgrade to the latest and greatest version at release. I do assume, however, that the SQL Server 7.0 titles in my library have little value for me now, but I’m certain that the minute I discard the book, I’m going to want it for some reason! Meanwhile, as electronic books overtake physical books in sales, my husband is grateful that I can continue to build my collection digitally rather than physically, as the books have a way of taking over significant square footage in our house! Blog posts, on the other hand, are useful for describing the scenarios that come up in real-life implementations that wouldn’t fit neatly into a book. As many years as I have been working with the Microsoft BI stack, I still run into new problems that require creative thinking. Likewise, people who work with BI and other technologies that I use share what they learn through their blogs. Internet search engines help us find information in blogs that simply isn’t available anywhere else. Another great thing about blogs is the connection to community and the dialog that can ensue between people with common interests. With the trend towards electronic formats for books, I imagine that we’ll see books continue to adapt to incorporate different forms of media and better ways to keep the information current. At the moment, I wish I had a better way to help readers with my last two Reporting Services books. In the case of the Microsoft® SQL Server™ 2005 Reporting Services Step by Step book, I have heard from many readers having problems with the sample database that shipped on CD – either the database was missing or it was corrupt. So I’ve provided a copy of the database on my site for download from http://datainspirations.com/uploads/rs2005sbsDW.zip.
Then for the Microsoft® SQL Server™ 2008 Reporting Services Step by Step book, we decided to avoid the database problem by using the AdventureWorks2008 samples that Microsoft published on Codeplex (although code samples are still available on CD). We had this silly idea that the URL for the download would remain constant, but it seems that expectation was ill-founded. Currently, the sample database is found at http://msftdbprodsamples.codeplex.com/releases/view/37109 but I have no idea how long that will remain valid. My latest books (#9 and #10 which are milestones I never anticipated), Building Integrated Business Intelligence Solutions with SQL Server 2008 R2 and Office 2010 (McGraw Hill, 2011) and Business Intelligence in Microsoft SharePoint 2010 (Microsoft Press, 2011), will not ship with a CD, but will provide all code samples for download at a site maintained by the respective publishers. I expect that the URLs for the downloads for the book will remain valid, but there are lots of references to other sites that can change or disappear over time. Does that mean authors shouldn’t make reference to such sites? Personally, I think the benefits to be gained from including links are greater than the risks of the links becoming invalid at some point. Do you think the time for technology books has come to an end? Is the delivery of books in electronic format enough to keep them alive? If technological barriers were no object, what would make a book more valuable to you than other formats through which you can obtain information?

    Read the article

  • Application Development: Python or Java (or PHP)

    - by luckysmack
    I'm looking to get into application development, such as Facebook or Android apps and games. I am doing this for fun and to learn. Once my skills are up to par I would like to have some side income from the apps, but I'm not banking on living off that (just so you know where I'm coming from and know what my end goals are). Currently I know and am familiar with PHP and frameworks such as CakePHP and Yii. However, I have been wanting to learn another language to broaden my horizons and to become a better developer. So I have narrowed it down to two languages: Python and Java (I can already hear people cringing at the difference in the languages I have chosen, but I have some reasons). Python: closer to PHP than Java. Cross-platform portability. Also great as a general scripting language, and it has many file-system-level benefits that PHP does not. Cleaner syntax, readability, and the list goes on. Python will work great for cross-platform apps, can be run on many OS's, and is supported by Facebook for app development. But there is no support on Android (for full-fledged apps). Java: a much more strongly typed language, with a very robust community and corporate backing. Knowing Java is also good for personal marketability in the enterprise, if you're into that. The main benefit here is that Java can write apps natively for Android, and the apps can be ported to web versions to play on Facebook. So while I have seen many developers prefer Java of the two, Java has this significant advantage, where I can market my apps in both markets and in the future build more potential income. But like I said, it is for fun. While money isn't the goal, it would still be nice. PHP: I'm putting this here because I know it already, and I'm sure a case could be made for it. It obviously works great for Facebook, but like Python it does not do so well on Android. While it's mostly the realm of 'application development' that appeals to me, I do find Android apps fairly interesting and something that has a ton of potential too. But then again Facebook has a ton more users, and the apps can also potentially be more immersive (desktop vs. mobile). So this is why I'm kinda stuck on what route to choose: Python for Facebook and web apps, with likely faster development-to-production times, or Java, which can be developed for any of the platforms to make apps. Side note: I'm not really trying to get into 3D development, mostly 2D. And I also want to make an app with real-time play (websockets, etc.). Someone mentioned Node.js to me for that, but Python seems to be more globally versatile for my goals. So, to anyone that does Facebook or Android development in either language: what do you suggest? Any input is valuable and I do appreciate it. And sorry for being long-winded. EDIT: As mentioned in one of the answers, my primary goal is gaming. Although I do have some plans for non-gaming apps, such as general web-based and desktop-based ones, gaming is my main goal, with the possibility of income. EDIT: Another consideration could be Jython: writing Python code which is compiled to Java bytecode. This might make it possible to do Android apps using Python. I could be wrong though; I'm still looking into it.
    Update 1-26-11: I recently acquired a new job which required me to learn .NET using C#. I'm sure some of you are cringing already, but I really like the whole system and how it all works together between desktop and web development. But, as I am still very interested in Python, after some research I have decided I will learn Python as well as the IronPython implementation for .NET. And (again: I know...) since .NET is mostly a Windows thing and not as cross-compatible as I'd like, I will be learning Mono, which is a cross-platform implementation of .NET, where I can use what I learn at work using C# and what I want to learn: Python/IronPython. So while learning and writing C#/.NET at work, I will be learning Python - Mono - IronPython for what I want to do personally. And the benefit of them all being very closely related will help me out a lot, I think. What do you guys think? I almost feel like that should be another question, but there's not much of a question. Either way, you guys gave very helpful input.

    Read the article

  • Building a plug-in for Windows Live Writer

    - by mbcrump
    This tutorial will show you how to build a plug-in for Windows Live Writer. Windows Live Writer is a blogging tool that Microsoft provides for free. It includes an open API for .NET developers to create custom plug-ins. In this tutorial, I will show you how easy it is to build one. Open VS2008 or VS2010 and create a new project. Set the target framework to 2.0, the Application Type to Class Library, and give it a name. In this tutorial, we are going to create a plug-in that generates a Twitter message with your blog post name and a TinyUrl link to the blog post. It will do all of this automatically after you publish your post. Once we have a new project created, we need to set up the references. Add a reference to the WindowsLive.Writer.Api.dll located in the C:\Program Files (x86)\Windows Live\Writer\ folder, if you are using the x64 version of Windows. You will also need to add references to System.Windows.Forms and System.Web from the .NET tab. Once that is complete, add your "using" statements so that it looks like what's shown below:

    Live Writer Plug-In "Using"
    using System;
    using System.Collections.Generic;
    using System.Text;
    using WindowsLive.Writer.Api;
    using System.Web;

    Now we are going to set up some build events to make it easier to test our custom class. Go into the Properties of your project, select Build Events, click to edit the Post-build event, and copy/paste the following line:

    XCOPY /D /Y /R "$(TargetPath)" "C:\Program Files (x86)\Windows Live\Writer\Plugins\"

    Your screen should look like the one pictured below: Next, we are going to launch an external program on debug. Click the Debug tab and enter:

    C:\Program Files (x86)\Windows Live\Writer\WindowsLiveWriter.exe

    Your screen should look like the one pictured below: Now we have a blank project and we need to add some code. We start with adding the attributes for the Live Writer plug-in. Before we get started creating the attributes, we need to create a GUID. This GUID will uniquely identify our plug-in. To create a GUID, follow these steps in VS2008/2010: click Tools from the VS menu -> Create GUID. It will generate a GUID like the one listed below:

    GUID <Guid("56ED8A2C-F216-420D-91A1-F7541495DBDA")>

    We only want what's inside the quotes, so your final product should be: "56ED8A2C-F216-420D-91A1-F7541495DBDA". Go ahead and paste this snippet into your class just above the public class.

    Live Writer Plug-In Attributes
    [WriterPlugin("56ED8A2C-F216-420D-91A1-F7541495DBDA",
       "Generate Twitter Message",
       Description = "After your new post has been published, this plug-in will attempt to generate a Twitter status message with the Title and TinyUrl link.",
       HasEditableOptions = false,
       Name = "Generate Twitter Message",
       PublisherUrl = "http://michaelcrump.net")]
    [InsertableContentSource("Generate Twitter Message")]

    So far, it should look like the following: Next, we need to implement the PublishNotificationHook class and override OnPostPublish. I'm not going to dive into what the code is doing, as you should be able to follow pretty easily. The code below is the entire code used in the project.
    PublishNotificationHook
    public class Class1 : PublishNotificationHook
    {
        public override void OnPostPublish(System.Windows.Forms.IWin32Window dialogOwner, IProperties properties, IPublishingContext publishingContext, bool publish)
        {
            if (!publish) return;

            if (string.IsNullOrEmpty(publishingContext.PostInfo.Permalink))
            {
                PluginDiagnostics.LogError("Live Tweet didn't execute, due to blank permalink");
            }
            else
            {
                var strBlogName = HttpUtility.UrlEncode("#blogged : " + publishingContext.PostInfo.Title); // Blog post title
                var strUrlFinal = getTinyUrl(publishingContext.PostInfo.Permalink); // Blog permalink URL converted to TinyURL
                System.Diagnostics.Process.Start("http://twitter.com/home?status=" + strBlogName + strUrlFinal);
            }
        }

    We are going to go ahead and create a method to generate the short URL (TinyURL):

        private static string getTinyUrl(string url)
        {
            var cmpUrl = System.Globalization.CultureInfo.InvariantCulture.CompareInfo;
            if (!cmpUrl.IsPrefix(url, "http://tinyurl.com"))
            {
                var address = "http://tinyurl.com/api-create.php?url=" + url;
                var client = new System.Net.WebClient();
                return (client.DownloadString(address));
            }
            return (url);
        }
    }

    Go ahead and build your project; it should copy the .DLL into the Windows Live Writer plugin directory. If it did not, you will want to check your configuration. Once that is complete, open Windows Live Writer and select Tools -> Options -> Plug-ins and enable the plug-in that you just created. Your screen should look like the one pictured below: Go ahead and click OK and publish your blog post. You should get a pop-up with the following: Hit OK and it should open Twitter and either ask for a login or fill in your status as shown below: That should do it. You can do so many other things with the API; I suggest that if you want to build something really useful, you consult the MSDN pages. This plug-in that I created was perfect for what I needed, and I hope someone finds it useful.

    Read the article

  • "Yes, but that's niche."

    - by Geertjan
    JavaOne 2012 has come to an end, though it feels like it hasn't even started yet! What happened? Time is a weird thing. Too many things to report on. James Gosling's appearance at the JavaOne community keynote was seen by everyone I talked to (which is quite a lot of people) as the highlight of the conference. It was interesting that the software from Liquid Robotics, the Duke's Choice Award winner that James Gosling is now part of and came to talk about, is a Swing application that uses the WorldWind libraries. It was also interesting that James Gosling pointed out to the conference: "There are things you can't do using HTML." That brings me to the wonderful counter argument to the above, which I spend my time running into a lot: "Yes, but that's niche." It's a killer argument, i.e., it kills all discussions completely in one fell swoop. Kind of like when you're talking about someone and then this sentence drops into the conversation: "Yes, but she's got cancer now." Here's one implementation of "Yes, but that's niche": Person A: All applications are moving to the web, tablet, and mobile phone. That's especially true now with HTML5, which is going to wipe away everything everywhere and all applications are going to be browser based. Person B: What about air traffic control applications? Will they run on mobile phones too? And do you see defence applications running in a browser? Don't you agree that there are multiple scenarios imaginable where the Java desktop is the optimal platform for running applications? Person A: Yes, but that's niche. Here's another implementation, though it contradicts the above [despite often being used by the same people], since JavaFX is a Java desktop technology: Person A: Swing is dead. Everyone is going to be using purely JavaFX and nothing else. Person B: Does JavaFX have a docking framework and a module system? Does it have a plugin system? These are some of the absolutely basic requirements of Java desktop software once you get to high end systems, e.g., banks, defence force, oil/gas services. Those kinds of applications need a web browser, and so they love the JavaFX WebView component, and they also love the animated JavaFX charting components. But they need so much more than that, i.e., an application framework. Aren't there requirements that JavaFX isn't meeting, since it is a UI toolkit, just like Swing is a UI toolkit, and what they have in common is their lack, natively, of any kind of application framework? Don't people need more than a single window and a monolithic application structure? Person A: Yes, but that's niche. In other words, anything that doesn't fit within the currently dominant philosophy is "niche", for no other reason than that it doesn't fit within the currently dominant philosophy... regardless of the actual needs of real developers. Saying "Yes, but that's niche" kills the discussion completely, because it relegates one side of the conversation to the arcane and irrelevant corners of the universe. You're kind of like Cobol now, as soon as "Yes, but that's niche" is said. What's worst about "Yes, but that's niche" is that it doesn't enter into any discussion about user requirements, i.e., it implies that so few people need this particular solution that we don't even need to talk about them anymore. Note, of course, that I'm not referring specifically or generically to anyone or anything in particular.
    I'm just reporting conversations I picked up on as I was scurrying around the Hilton's corridors while looking for the location of my next presentation over the past few days. It does, however, mean that there were people thinking "Yes, but that's niche" while listening to James Gosling pointing out that HTML is not the be-all and end-all of absolutely everything. And so this all leaves me wondering: How many applications must be part of a niche for the niche to no longer be a niche? And what if there are multiple small niches that have the same requirements? Don't all those small niches together form a larger whole, one that should be taken seriously, i.e., a whole that is not a niche?

    Read the article

  • AppHarbor - Azure Done Right AKA Heroku for .NET

    - by Robz / Fervent Coder
    Easy and Instant deployments and instant scale for .NET? Awhile back a few of us were looking at Ruby Gems as the answer to package management for .NET. The gems platform supported the concept of DLLs as packages although some changes would have needed to happen to have long term use for the entire community. From that we formed a partnership with some folks at Microsoft to make v2 into something that would meet wider adoption across the community, which people now call NuGet. So now we have the concept of package management. What comes next? Heroku Instant deployments and instant scaling. Stupid simple API. This is Heroku. It doesn’t sound like much, but when you think of how fast you can go from an idea to having someone else tinker with it, you can start to see its power. In literally seconds you can be looking at your rails application deployed and online. Then when you are ready to scale, you can do that. This is power. Some may call this “cloud-computing” or PaaS (Platform as a Service). I first ran into Heroku back in July when I met Nick of RubyGems.org. At the time there was no alternative in the .NET-o-sphere. I don’t count Windows Azure, mostly because it is not simple and I don’t believe there is a free version. Heroku itself would not lend itself well to .NET due to the nature of platforms and each language’s specific needs (solution stack).  So I tucked the idea in the back of my head and moved on. AppHarbor Enters The Scene I’m not sure when I first heard about AppHarbor as a possible .NET version of Heroku. It may have been in November, but I didn’t actually try it until January. I was instantly hooked. AppHarbor is awesome! It still has a ways to go to be considered Heroku for .NET, but it already has a growing community. I created a video series (at the bottom of this post) that really highlights how fast you can get a product onto the web and really shows the power and simplicity of AppHarbor. Deploying is as simple as a git/hg push to appharbor. From there they build your code, run any unit tests you have and deploy it if everything succeeds. The screen on the right shows a simple and elegant UI to getting things done. The folks at AppHarbor graciously gave me a limited number of invites to hand out. If you are itching to try AppHarbor then navigate to: https://appharbor.com/account/new?inviteCode=ferventcoder  After playing with it, send feedback if you want more features. Go vote up two features I want that will make it more like Heroku. Disclaimer: I am in no way affiliated with AppHarbor and have not received any funds or favors from anyone at AppHarbor. I just think it is awesome and I want others to know about it. From Zero To Deployed in 15 Minutes (Or Less) Now I have a challenge for you. I created a video series showing how fast I could go from nothing to a deployed application. It could have been from Zero to Deployed in Less than 5 minutes, but I wanted to show you the tools a little more and give you an opportunity to beat my time. And that’s the challenge. Beat my time and show it in a video response. The video series is below (at least one of the videos has to be watched on YouTube). The person with the best time by March 15th @ 11:59PM CST will receive a prize. 
Ground rules: .NET Application with a valid database connection Start from Zero Deployed with AppHarbor or an alternative A timer displayed in the video that runs during the entire process Video response published on YouTube or acceptable alternative Video(s) must be published by March 15th at 11:59PM CST. Either post the link here as a comment or on YouTube as a response (also by 11:59PM CST March 15th) From Zero To Deployed In 15 Minutes (Or Less) Part 1 From Zero To Deployed In 15 Minutes (Or Less) Part 2 From Zero To Deployed In 15 Minutes (Or Less) Part 3

    Read the article

  • Sending Outlook Invites

    - by Daniel Moth
    Sending an Outlook invite for a meeting (also referred to as S+ in Microsoft) is a simple thing to get right if you just run the quick mental check below, which is driven by visual cues in the Outlook UI. I know that some folks don't do this often or are new to Outlook, so if you know one of those folks, share this blog post with them, and if they read nothing else, ask them to read step 7.

1. Add on the To line the folks that you want to be at the meeting.

2. Indicate optional invitees. Click on the "To" button to bring up the dialog that lets you move folks to be Optional (you can also do this from the Scheduling Assistant).

3. Set the Reminder according to the attendee that has to travel the most. 5 minutes is the minimum.

4. Use the Response Options and uncheck "Request Response" if your event is going ahead regardless of who can make it, i.e. if everyone is optional. Don't force every recipient to make an extra click; instead make the extra click yourself - you are the organizer.

5. Add a good subject. Make the subject such that just by reading it folks know what the meeting is about, e.g. "Review…", "Finalize…", "XYZ sync up". If this is only between two people, what is commonly referred to as a one-to-one, the subject would be something like "MyName/YourName 1:1". Write the subject in such a way that when the recipients see it on their calendar among all the other items, they know what the meeting is about without having to see the location, recipients, or any other information about the invite.

6. Add a location, typically a meeting room. If recipients are from different buildings, schedule it where the folks that are doing the other folks a favor live; otherwise schedule it wherever the fewest people will have to travel. If you send me an invite to come to your building, and there are more of us than of you, you are silently sending me the message that you are doing me a favor, so if you don't want to do that, include a note of why this is in your building, e.g. "Sorry, we are slammed with back-to-back meetings today, so hope you can come over to our building". If this is in someone's office, the location would be something like "Moth's office (7/666)", where in parentheses you see the office location. If some folks are remote in another building/country, or if you know you picked a time which wasn't free for everyone, add an online option (click the Lync Meeting button).

7. Add a date and time. This MUST be at a time that is showing on the recipients' calendars as FREE or at worst TENTATIVE; you can check that on the Scheduling Assistant. The reality is that this is not always possible, so in that case you MUST say something about it in the invite body, e.g. "Sorry, I can see X has a conflict, but I cannot find a better slot", or "With so many of us there are some conflicts and I cannot find a better slot, so hope this works", or "Apologies, but due to Y we must have this meeting at this time and I know there are some conflicts; hope you can make it anyway". When you do that, I had better not be able to find a better slot myself for all of us, and of course when you do that you have implicitly designated the busy folks as optional.

8. Finally, the body of the invite. This has the agenda of the meeting and, if applicable, the courtesy apologies due to messing up steps 6 & 7. It should not be the introduction to the meeting; in other words, the recipients should not be surprised when they see the invite and go to the body to read it. Notifying them of the meeting takes place via a separate email, where you explain the purpose and give them a heads up that you'll be sending an invite. That separate email is also your chance to attach documents; don't attach them as part of the invite.

TIP: If you have already sent mail about the meeting, you can go to your Sent folder, select the message, and click the "Meeting" button (Ctrl+Alt+R). This will populate the body with the necessary background, auto-select the mandatory and optional attendees as per the TO/CC lines, and produce a subject that may be good enough already (or you can tweak it).

Long to write, but very quick to remember and enforce, since most of it is common sense and the checklist is driven by the visual cues in the UI you use to send the invite. Comments about this post by Daniel Moth welcome at the original blog.
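As a companion to the checklist above, here is a hypothetical sketch of the same fields driven programmatically through Outlook's COM object model from PowerShell. It assumes a Windows machine with Outlook installed and configured; the addresses, subject, location, and time are all placeholder values, not anything from the post.

    # Create a meeting request (an appointment whose MeetingStatus is olMeeting).
    $outlook = New-Object -ComObject Outlook.Application
    $invite  = $outlook.CreateItem(1)            # 1 = olAppointmentItem
    $invite.MeetingStatus = 1                    # 1 = olMeeting

    # Steps 1 & 2: required and optional invitees (placeholder addresses).
    $req = $invite.Recipients.Add("alice@example.com")
    $req.Type = 1                                # 1 = olRequired
    $opt = $invite.Recipients.Add("bob@example.com")
    $opt.Type = 2                                # 2 = olOptional

    # Steps 3 & 4: reminder and response options.
    $invite.ReminderMinutesBeforeStart = 5       # 5 minutes is the minimum
    $invite.ResponseRequested = $false           # everyone optional, so no response requested

    # Steps 5, 6 & 7: subject, location, date and time.
    $invite.Subject  = "MyName/YourName 1:1"
    $invite.Location = "Moth's office (7/666)"
    $invite.Start    = Get-Date "2011-03-10 14:00"
    $invite.Duration = 30                        # minutes

    # Step 8: the body carries the agenda, not the introduction.
    $invite.Body = "Agenda: review the draft."
    $invite.Send()

None of this replaces the judgment in the checklist; it just shows that every step above maps to a concrete field or option on the invite itself.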

    Read the article
