Search Results

Search found 1702 results on 69 pages for 'guy mograbi'.

  • Issues with Verizon's "Network Extender" device talking on my home network.

    - by Logan
    I recently switched my phone service to Verizon from AT&T, and I get somewhat spotty service in my house. I called them and they sent me a "network extender" device for free. It's a femtocell that connects to my home network. The directions that come with it are very dumbed down: basically, connect it to your router and put it near a window (it needs a GPS signal to verify it's operating within the correct area). The problem I'm having is that the network light on it stays red. The troubleshooting information that came with it says this means a bad network connection.

    It's connected through an ASUS router running DD-WRT. No other device on my network has a problem with it, including a Western Digital WD Live box, my wife's cell phone and mine (via wifi), a Wii, and an Xbox. If I connect the device directly to my cable modem, the light goes blue (which means good) and it starts working. So this tells me it's definitely a configuration issue with my router. Verizon basically washed their hands of me once I connected it to my cable modem, and told me it's a router issue and to try a different router. Because normal people just have extra routers lying around their houses...

    When I connect it to the router, I can watch the DHCP clients list on the status page, and the MAC of the network extender quickly fills up the client list, grabbing every available DHCP address. It's like it grabs an address, can't connect to the internet, releases it, grabs another, then another, then another. So in the DHCP server settings I assigned a static IP to its MAC. That stopped the address-grabbing, but it's still not working. I found the ports I needed to open on Verizon's website and opened them in the port forwarding config on my router. That didn't help either. So I tried setting the network extender's IP as the DMZ address on the router. Still no good.

    I called Verizon back and got the tech to write up a report, which he passed on to a "senior network tech" who called me back a few hours ago. This guy told me that while an ASUS router isn't listed as a supported device, he's not really sure why it's not working. He suggested restoring the stock ASUS firmware and trying again. I have a very hard time believing it's DD-WRT doing this, since every other device works just fine with it. But it's also not the network extender, since it works fine when connected directly to the modem. At this point I'm out of ideas, and the next step is to restore the stock firmware on my router and then go to Walmart for a Linksys WRT54G to try. Is there anything else I could try before going that drastic?

    Cliffs:
    - Network extender won't work behind the router; works when connected directly to the cable modem.
    - Extender goes nuts when allowed to pick its own DHCP address; I had to assign it a static IP.
    - Won't work with the correct ports forwarded to it.
    - Won't work with a DMZ address.
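
    In case it helps anyone reproduce this, here's roughly what the two workarounds look like from a DD-WRT root shell instead of the web UI. The LAN address, WAN interface name, and port number below are placeholders, not my real values; the actual ports are the ones listed on Verizon's site (femtocells typically tunnel over IPsec, so UDP 500/4500 are usually among them):

        # Placeholders: extender pinned at 192.168.1.50, WAN interface vlan2.
        EXT_IP=192.168.1.50
        WAN_IF=vlan2

        # Forward one required UDP port to the extender (repeat per port):
        iptables -t nat -A PREROUTING -i $WAN_IF -p udp --dport 4500 \
            -j DNAT --to-destination $EXT_IP
        iptables -A FORWARD -p udp -d $EXT_IP --dport 4500 -j ACCEPT

        # If tcpdump is installed on the router, watch whether the extender's
        # IPsec traffic actually leaves via the WAN:
        tcpdump -i $WAN_IF -n udp port 500 or udp port 4500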

    Read the article

  • Source-control 'wet-work'?

    - by Phil Factor
    When a design or creative work is flawed beyond remedy, it is often best to destroy it and start again. The other day, I lost the code to a long and intricate SQL batch I was working on. I'd thought it was impossible, but it happened. With all the technology around that is designed to prevent this occurring, this sort of accident has become a rare event. If it weren't for a deranged laptop, and my distraction, the code wouldn't have been lost this time. As always, I sighed, had a soothing cup of tea, and typed it all in again.

    The new code I hastily tapped in was much better: I'd held in my head the essence of how the code should work rather than the details: I now knew for certain the start point, the end, and how it should be achieved. Instantly the detritus of half-baked thoughts fell away and I was able to write logical code that performed better. Because I could work so quickly, I was able to hold the details of all the columns and variables in my head, and the dynamics of the flow of data. It was, in fact, easier and quicker to start from scratch than to tidy up and refactor the existing code with its inevitable fumbling and half-baked ideas. What a shame that technology is now so good that developers rarely experience the cleansing shock of losing their code and having to rewrite it from scratch. If you've never accidentally lost your code, then it is worth doing it deliberately once for the experience.

    Creative people have, until technology mistakenly prevented it, torn up their drafts or sketches, thrown them in the bin, and started again from scratch. Leonardo's obsessive reworking of the Mona Lisa was renowned because it was so unusual: most artists have been utterly ruthless in destroying work that didn't quite make it. Authors are particularly keen on writing afresh, and the results are generally positive. Lawrence of Arabia actually lost the entire 250,000-word manuscript of 'The Seven Pillars of Wisdom' by accidentally leaving it on a train at Reading station, before rewriting a much better version. Now, any writer or artist is seduced by technology into altering or refining their work rather than casting it dramatically into the bin, or setting light to it on a bonfire, and rewriting it from the blank page. It is easy to pick away at a flawed work, but the real creative process is far more brutal.

    Once, many years ago, whilst running a software house that supplied commercial software to local businesses, I'd been supervising an accounting system for a farming cooperative. No packaged system met their needs, and it was all hand-cut code. For us, it represented a breakthrough, as it was for a government organisation, and success would guarantee more contracts. As you've probably guessed, the code got mangled in a disk crash just a week before the deadline for delivery, and the many backups all proved to be entirely corrupted by a faulty tape drive. There were some fragments left on individual machines, but they were all of different versions. The developers were in despair. Strangely, I managed to re-write the bulk of a three-month project in a manic and caffeine-soaked weekend. Sure, that elegant universally-applicable input-form routine wasn't quite so elegant, but it didn't really need to be, as we knew what forms it had to support. Yes, the code lacked architectural elegance and reusability. By dawn on Monday, the application passed its integration tests.
    The developers rose to the occasion after I'd collapsed, and tidied up what I'd done, though they were reproachful that some of the style and elegance had gone out of the application. By the delivery date, we were able to install it. It was a smaller, faster application than the beta they'd seen, and the user interface had a new, rather Spartan, appearance that we swore was done to conform to the latest in user-interface guidelines. (We switched to Helvetica to look more 'Bauhaus'.) The client was so delighted that he forgave the new bugs that had crept in. I still have the disk that crashed, up in the attic.

    In IT, we have had mixed experiences with complete rewrites. Lotus 1-2-3 never really recovered from a complete rewrite from assembler into C, Borland made the same mistake with Arago and Quattro Pro, and Netscape's complete rewrite of their Navigator 4 browser was a white-knuckle ride. In all cases, the decision to rewrite was a result of extreme circumstances where no other course of action seemed possible; the rewrite didn't come out of the blue. I prefer to remember the rewrite of Minix by a young Linus Torvalds, or the rewrite of BitKeeper by a slightly older Linus. The rewrite of CP/M didn't do too badly either, did it? Come to think of it, the guy who decided to rewrite the windowing system of the Xerox Star never regretted the decision.

    I'll agree that one should often resist calls for a rewrite. One of the worst habits of the more inexperienced programmer is to denigrate whatever code he or she inherits, and then call loudly for a complete rewrite, buoyed up by the mistaken belief that they can do better. This, however, is a different psychological phenomenon, more related to the idea of some motorcyclists that they are operating on infinite lives, or of the occasional squaddie that if he charges the machine-guns determinedly enough all will be well. Grim experience brings out the humility in any experienced programmer. I'm referring to quite different circumstances here. Where a team knows the requirements perfectly, is of one mind on methodology and coding standards, and already has a solution, then what is wrong with considering a complete rewrite? Rewrites are so painful in the early stages, until that point where one realises the payoff, that even I quail at the thought. One needs a natural disaster to push one over the edge.

    The trouble is that source-control systems, and disaster-recovery systems, are just too good nowadays. If I were to lose this draft of this very blog post, I know I'd rewrite it much better. However, if you read this, you'll know I didn't have the nerve to delete it and start again. There was a time when one prayed that unreliable hardware would deliver you from an unmaintainable mess of a codebase, but now technology has made us almost entirely immune to such a merciful act of God. An old friend of mine with long experience in the software industry has long had the idea of the 'source-control wet-work', where one hires a malicious hacker in some wild eastern country to hack into one's own source-control system and destroy all trace of the source to an application. Alas, backup systems are just too good to make this any more than a pipedream. Somehow, it would be difficult to promote the idea. As an alternative, could one construct a source-control system that, on running all the code-quality metrics, would systematically destroy all trace of source code that failed the quality test?
    Alas, I can't see many managers buying into the idea. Reading the full story of the near-loss of Toy Story 2 set me thinking. It turned out that the lucky restoration of the code wasn't the happy ending one first imagined it to be, because they eventually came to the conclusion that the plot was fundamentally flawed and it all had to be rewritten anyway. Was this an early case of the 'source-control wet-job'? It is very hard nowadays to do a rapid U-turn in a development project because we are far too prone to cling to our existing source code.

    Read the article

  • Agile Like Jazz

    - by Jeff Certain
    (I've been sitting on this for a week or so now, thinking that it needed to be tightened up a bit to make it less rambling. Since that's clearly not going to happen, reader beware!)

    I had the privilege of spending around 90 minutes last night sitting and listening to Sonny Rollins play a concert at the Disney Center in LA. If you don't know who Sonny Rollins is, I don't know how to explain the experience; if you know who he is, I don't need to. Suffice it to say that he has been recording professionally for over 50 years and helped create an entire genre of music. A true master by any definition.

    One of the most intriguing aspects of a concert like this, however, is watching the master step aside and let the rest of the musicians play. Not just play their parts, but really play... letting them take over the spotlight, strut their stuff, and soak up enthusiastic applause from the crowd. Maybe a lot of it has to do with the fact that Sonny Rollins has been doing this for more than half a century. Maybe it has something to do with a kind of patience you learn when you're on the far side of 80 – and the man can still blow a mean sax for 90 minutes without stopping! Maybe it has to do with the fact that he was out there for the love of the music and the love of the show, not because he had anything to prove to anyone and, I like to think, not for the money. Perhaps it had more to do with the fact that, when you're at that level of mastery, the other musicians are going to be good. Really good. Whatever the reasons, there was an incredible freedom on that stage – the ability to improvise, for each musician to showcase their own specialization and skills, and then come back to the common theme, back to being on the same page, as it were. All this took place in the same venue that is home to the L.A. Phil. Somehow, I can't ever see the same kind of free-wheeling improvisation happening in that context. And, since I'm a geek, I started thinking about agility.

    Rollins has put together a quintet that reflects his own particular style and past. No upright bass or piano for Rollins – drums, bongos, electric guitar and bass guitar along with his sax. It's not about the mix of instruments, though. Other trios, quartets, and sextets use different mixes; New Orleans jazz tends towards trombones instead of sax, and some prefer cornet or trumpet. But no matter what the choice of instruments, size matters.

    Team sizes are something I've been thinking about for a while. We're on a quest to rethink how our teams are organized. They just feel too big, too unwieldy. In fact, they really don't feel like teams at all. Most of the time, they feel more like collections of people who happen to report to the same manager. I attribute this to a couple of factors. One is over-specialization; we have a tendency to have people work in silos. Although the teams are product-focused, within them our developers are both generalists and specialists. On the one hand, we expect them to be able to build an entire vertical slice of the application; on the other hand, each developer tends to be responsible for his or her own vertical slice. As a result, developers often work on their own piece of the puzzle, in isolation. This feels like working on a jigsaw in a group – each person taking a set of colors and piecing them together to reveal a portion of the overall picture. But what inevitably happens when you go to meld all those pieces together? Inevitably, you have some sections that are too big to move easily.
    These sections end up falling apart under their own weight as you try to move them. Not only that, but there are other challenges – figuring out where that section fits, and how to tie it into the rest of the puzzle. Often, this is when you find a few pieces need to be added – these pieces are "glue," if you will.

    The other issue that arises is the overhead of maintaining communications in a team. My mother, who worked in IT for around 30 years, once told me that 20% per team member is a good rule of thumb for maintaining communication. While this is only a rule of thumb, it implies that any team over about six people is going to become less agile simply because of the communications burden.

    Teams of ten or twelve seem to fall into the philharmonic organizational model. Complicated pieces of music requiring dozens of players to all be on the same page require a much different model than the jazz quintet. There's much less room for improvisation, originality or freedom. (There are probably orchestral musicians who will take exception to this characterization; I'm calling it like I see it from the cheap seats.) And there's one guy up front who is running the show, whose job is to keep all of those dozens of players on the same page, to facilitate communications. Somehow, the orchestral model doesn't feel much like a self-organizing team, either. The first violin may be the best violinist in the orchestra, but they don't get to perform free-wheeling solos. I've never heard of an orchestra getting together for a jam session.

    But I have heard of teams that organize their work based on the developers available, rather than organizing the developers based on the work required. I have heard of teams where desired functionality is deferred – or worse yet, schedules are missed – because one critical person doesn't have any bandwidth available. I've heard of teams where people simply don't have the big picture, because there is too much communication overhead for everyone to be aware of everything that is happening on a project.

    I once heard Paul Rayner say something to the effect of "you have a process that is perfectly designed to give you exactly the results you have." Given a choice, I want a process that's much more like jazz than orchestral music. I want a process that doesn't burden me with lots of forms and checkboxes and stuff. Give me the simplest, most lightweight process that will work – and a smaller team of the best developers I can find. This seems like the kind of process that will get the kind of result I want to be part of.
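
    The arithmetic backs up that rule of thumb: with n people on a team, the number of pairwise communication channels is n(n-1)/2. A quintet has 5 × 4 / 2 = 10 channels to keep in tune; a team of twelve has 12 × 11 / 2 = 66. Roughly doubling the headcount more than sextuples the conversations that have to stay in sync.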

    Read the article

  • Conducting Effective Web Meetings

    - by BuckWoody
    There are several forms of corporate communication. From immediate, rich communications like phones and IM to historical transactions like e-mail, there are a lot of ways to get information to one or more people. From time to time, it's even useful to have a meeting. (This is where a witty picture of a guy sleeping in a meeting goes. I won't bother actually putting one here; you're already envisioning it in your mind.)

    Most meetings are pointless and a complete waste of time. This is the fault, completely and solely, of the organizer. It's because he or she hasn't thought things through enough to consider alternate forms of information passing. Here's the criterion for a good meeting - whether in person or over the web:

    100% of the content of a meeting should require the participation of 100% of the attendees for 100% of the time.

    It doesn't get any simpler than that. If a meeting doesn't meet that criterion for someone, then don't invite that person to that meeting. If you're just conveying information and no one needs to interact with it immediately (like telling you something that modifies the message), then send an e-mail. If you're a manager and you need to get status from lots of people, pick up the phone. If you need a quick answer, use IM.

    I once had a high-level manager who called frequent meetings. His real need was status updates on various processes, so 50 of us would sit in a room while he asked each one of us questions. He believed this larger meeting helped us "cross-pollinate ideas". In fact, it was a complete waste of time for almost everyone, except in the one or two moments when they interacted with him. So I wrote some code for a Palm Pilot (which was a kind of smartphone, but with no phone and no real graphics - this was in the days when we had just discovered fire and the wheel, although the order of those things is still in debate) that took an average of the salaries of the people in the room (I guessed at it) and ran a timer which multiplied the number of people against the salaries. I left that running in plain sight for him, and when he asked about it, I explained how much the meetings were really costing the company. We had far fewer meetings after.

    Meetings are now web-enabled. I believe that's largely a good thing, since it saves on travel time and allows more people to participate, but I think the rule above still holds. And in fact, there are some other rules that you should follow to have a great meeting - and fewer of them.

    Be Clear About the Goal

    This is important in any meeting, but all of us have probably gotten an invite with a web link and an ambiguous title. Then you get to the meeting, and it's a 500-level deep-dive on something everyone expects you to know. This is unfair to the "expert" and to the participants. I always tell people who invite me to a meeting that I will be as detailed as I can - but the more detail they can give me about the questions, the more detailed I can be in my responses. Granted, there are times when you don't know what you don't know, but the more you can say about the topic the better. There's another point here: you should have a clearly defined "win" for the meeting. When the meeting is over and everyone goes back to work, what do you expect them to do with the information? Have that clearly defined in your head, and in the meeting invite.

    Understand the Technology

    There are several web-meeting clients out there. I use them all, since I meet with clients all over the world.
    They all work differently, so I take a few moments to read up on each client and find out how I can use its tools properly. I do this with the technology I use for everything else, and it's important to understand it if the meeting is to be a success. If you're running the meeting, know the tools. I don't care if you like the tools or not; learn them anyway. Don't waste everyone else's time just because you're too bitter/snarky/lazy to spend a few minutes reading. Check your phone or mic. Check your video size. Install (and learn to use) ZoomIt (http://technet.microsoft.com/en-us/sysinternals/bb897434.aspx). Format your slides or screen or output correctly. Learn to use the voting features of the meeting software, and especially its whiteboard features. Figure out how multiple monitors work. Try a quick meeting with someone to test all this. Do this *before* you invite lots of other people to your meeting.

    Use a WebCam

    I'm not a pretty man. I have a face fit for radio. But after attending a meeting with clients where one Microsoft person used a webcam and another did not, I'm convinced that people pay more attention when a face is involved. There are tons of studies around this, or you can take my word for it, but toss a shirt on over those pajamas and turn the webcam on.

    Set Up Early

    Whether you're attending or leading the meeting, don't wait until the scheduled time to sign on. I can almost guarantee that a 10:00 meeting will actually start at 10:10 because the participants or leader are just now installing the web client at 10:00. Sign on early, go on mute, and then wait for everyone to arrive.

    Mute When Not Talking

    No one wants to hear your screaming offspring / yappy dog / other cubicle conversations / car wind noise (are you driving in a desert storm or something?) while the person leading the meeting is trying to talk. I use the Lync software from Microsoft for my meetings; I mute everyone by default, and then tell them to un-mute to talk to the group.

    Share Collateral

    If you have a PowerPoint deck, mail it out in case you have a tech failure. If you have a document, share it as an attachment to the meeting. Don't make people ask you for the information - that's why you're there to begin with. Even better, send it out early. "But," you say, "then no one will come to the meeting if they have the deck first!" Uhm, then don't have a meeting. Send out the deck and a quick e-mail and let everyone get on with their productive day.

    Set Actions At the Meeting

    A meeting should have some sort of outcome (see point one). That means there are actions to take, a follow-up, or some deliverable. Otherwise, it's an e-mail. At the meeting, decide who will do what, when things are needed, and so on. And avoid, if at all possible, setting up another meeting, unless absolutely necessary.

    So there you have it. Whether it's on-premises or on the web, meetings are a necessary evil and should be treated that way. Like politicians, you should have as few of them as are necessary to keep the roads paved and public libraries open.
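
    Incidentally, the Palm Pilot cost-timer trick from earlier is easy to recreate today. Here's a throwaway shell sketch of the same idea; the attendee count and hourly rate are made-up defaults, not figures from the original story:

        #!/bin/sh
        # Rough meeting-cost ticker: attendees x average hourly rate x elapsed time.
        ATTENDEES=${1:-50}    # people in the meeting
        RATE=${2:-75}         # assumed average cost per person per hour, in dollars
        START=$(date +%s)
        while sleep 10; do
            NOW=$(date +%s)
            COST=$(( ATTENDEES * RATE * (NOW - START) / 3600 ))
            printf '\rThis meeting has cost roughly $%d so far' "$COST"
        done

    Leave it running in plain sight, as the story suggests.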

    Read the article

  • How I Work: Staying Productive Whilst Traveling

    - by BuckWoody
    I travel a lot. Not like some folks who are gone every week, mind you, although in the last month I've been to Cambridge, UK; Anchorage, AK; San Jose, CA; Copenhagen, DK; and Boston, MA; and I'm currently en route to Anaheim, CA. While this many places in a month is a bit unusual for me, I would say I travel frequently. I've traveled for most of my 28+ years in IT, and at one time was a consultant traveling weekly.

    With that much time away from my primary work location, I have to find ways to stay productive. Some might say "just rest - take a nap!" - but I'm not able to do that. For one thing, I'm a very light sleeper and I've never slept on a plane - not even on a 30+ hour trip to New Zealand in business class - so that just isn't an option. I'm also not always on the plane, of course. There's the hotel, the taxi/bus/train, the airport, and then all of that again when I arrive. Since my regular jobs have many demands, I have to get work done.

    Note: No, I'm not always focused on work. I need downtime just like everyone else. Sometimes I just think, watch a movie or listen to tunes - and I give myself permission to do that anytime - sometimes for the whole trip. I have too few heartbeats left in life to focus only on work - it's just not that important, and neither am I. Some of these tasks are letters to friends and family, or other personal things. What I'm talking about here is a plan, not some task list I have to follow. When I get to the location I'm traveling to, I always build in as much time as I can to enjoy the sights and the people I'm with. I would find traveling a waste if not for that.

    The Unrealistic Expectation

    As I would evaluate the trip I was taking - say a 6-8 hour flight - I would expect to get 10-12 hours of work done. After all, there's the time at the airport, the taxi and so on, and then of course the time in the air with all of the room, power, internet and everything else I needed to get my work done. I would pile up tasks at home, pack my bags, and head happily to the magical land of the TSA. Right. On return from the trip, I had accomplished little, had more e-mails and other work piled up, and I was tired, hungry, and unorganized. This had to change. So I decided to do three things: segment my work, set realistic expectations, and plan accordingly.

    Segmenting By Available Resources

    The first task was to decide what kind of work I could do in each location - if any. I found that I was dependent on a few things to get work done, such as power, the Internet, and a place to sit down. Before I fly, I take some time at home to segment all of the work I'd like to accomplish while away into these areas, and print that out on paper, which goes in my suit-coat pocket along with a mechanical pencil. I print my tickets, and I'm all set for the adventure ahead. Then I simply do each kind of work whenever I'm in that situation.

    No Power

    There are certain times when I don't have power available. Not only that, but I might not even be able to use most of my electronics. So I now schedule as many phone calls as I can for the taxi/bus/train rides and the airports. I have a paper notebook (Moleskine, of course) and a pencil, and I print out any notes or numbers I need prior to the trip. Once I'm airborne or at the airport, I work on my laptop. I check and respond to e-mails, create slides, write code, do architecture, whatever I can. If I can't use any electronics, or once the power runs out, I schedule time for reading.
    I can read at the airport or anywhere, actually, even in flight or on any other transport. I "read with a pencil", meaning I take a lot of notes, which I like to put in OneNote; but since in most cases I don't have power, I use the Moleskine for that. Speaking of which, sometimes as I'm thinking I come up with new topics, ideas, blog posts, or things to teach in my classes. Once again I take out the notebook and write them down. All of these notes get a check-mark when I get back to the office and transfer the writing to OneNote. I've tried those "smart pens" and so on to automate this, but it just never works out. Pencil and paper are just fine.

    As I mentioned, sometimes I just need to think. I'll do nothing, and let my mind wander, thinking of nothing in particular, or of some math problem or science question I'm interested in. My only issue with this is that I communicate to think, and I don't want to drive people crazy by being that guy who won't shut up, so I think in a different way.

    Power, but no Internet or Phone

    If I have power but no Internet or phone, I focus on the laptop and the tablet as before, and I also recharge my other gadgets.

    Power, Internet, Phone and a Place to Work

    At first I thought that when I arrived at the hotel or event I could get the same amount of work done that I do at the office. Not so. There are simply too many distractions, things you need, or other issues that prevent it. Of course, I can work on any device, read, think, write or whatever, but I am simply not as productive as I am in my home office. So I plan for about 25-50% as much work getting done in this environment as I think I could really do. I've done some measurements, and this holds true almost every time. The key is that I reset my expectations (and my co-workers' expectations as well) that this is the case. I use Out-Of-Office notices to let people know that I'm just not going to be 100% available at this time - it's hard for everyone, but it's more honest and realistic, and I'd rather they know that - and that I realize it - than let them think I'm totally available. Because I'm not - I'm traveling. I don't tend to put in too much detail, because after all I don't necessarily want to let people know when I'm not home :) but I do think it's important to let the people who depend on me know that I'll get back to them later.

    I hope this helps you think through your own methodology of staying productive when you travel. Or perhaps you just go offline and don't worry about any of this - good for you! That's completely valid as well.

    (Oh, and yes, I wrote this at 35K feet, on Alaska Airlines, on a trip. :) Practice what you preach, Buck.)

    Read the article

  • All my sites are 403 but the server is running. Errors on startup

    - by Craig
    We gave a contractor access to install a firewall, and somehow while he was doing it he fracked something up. Everything went offline about 24 hours ago, and we are effectively out of business until I solve this; the person who messed things up is not returning calls. I found a few errors. First, I'm not a server guy - I can look at log files, and normally everything runs fine. All services are running according to 1&1 server monitoring, and mail is being delivered just fine. The whole thing was offline until I (probably stupidly) updated the kernel from 6.2 to 6.3 this morning, and I got everything back except HTTP access. All the domains (~200 of them) are returning a 403 error, and nothing is recorded in the access log.

    On every restart I see this error in the messages log file:

        init: Failed to spawn ttyS0 main process: unable to execute: No such file or directory

    and a little later these:

        kernel: WARNING: at kernel/sched.c:5914 thread_return+0x232/0x79d() (Not tainted)
        kernel: Hardware name: X9SCL/X9SCM
        kernel: Modules linked in: xt_iprange iptable_filter ip_tables ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables ipv6 ext4 jbd2 serio_raw i2c_i801 i2c_core sg iTCO_wdt iTCO_vendor_support e1000e ext3 jbd mbcache raid1 sd_mod crc_t10dif ahci dm_mirror dm_region_hash dm_log dm_mod [last unloaded: scsi_wait_scan]
        kernel: Pid: 367, comm: md3_raid1 Not tainted 2.6.32-220.2.1.el6.x86_64 #1
        kernel: Call Trace:
        kernel: [<ffffffff81069997>] ? warn_slowpath_common+0x87/0xc0
        kernel: [<ffffffff810699ea>] ? warn_slowpath_null+0x1a/0x20
        kernel: [<ffffffff814eccc5>] ? thread_return+0x232/0x79d
        kernel: [<ffffffff8126a4d9>] ? cpumask_next_and+0x29/0x50
        kernel: [<ffffffff813e9c05>] ? md_super_wait+0x55/0x90
        kernel: [<ffffffff81090a10>] ? autoremove_wake_function+0x0/0x40
        kernel: [<ffffffff813ebf46>] ? md_update_sb+0x206/0x3f0
        kernel: [<ffffffff813ee922>] ? md_check_recovery+0x3f2/0x6d0
        kernel: [<ffffffffa005b129>] ? raid1d+0x49/0x1050 [raid1]
        kernel: [<ffffffff814ed985>] ? schedule_timeout+0x215/0x2e0
        kernel: [<ffffffff814ef447>] ? _spin_unlock_irqrestore+0x17/0x20
        kernel: [<ffffffff813eb336>] ? md_thread+0x116/0x150
        kernel: [<ffffffff81090a10>] ? autoremove_wake_function+0x0/0x40
        kernel: [<ffffffff813eb220>] ? md_thread+0x0/0x150
        kernel: [<ffffffff810906a6>] ? kthread+0x96/0xa0
        kernel: [<ffffffff8100c14a>] ? child_rip+0xa/0x20
        kernel: [<ffffffff81090610>] ? kthread+0x0/0xa0
        kernel: [<ffffffff8100c140>] ? child_rip+0x0/0x20

    And something is wrong with named/BIND, resulting in the same error for all domains:

        zone DOMAINEXAMPLE.com/IN: loading from master file DOMAINEXAMPLE.com failed: file not found
        zone DOMAINEXAMPLE.com/IN: not loaded due to errors.
        _default/DOMAINEXAMPLE.com/IN: file not found

    I'm pretty sure this is not enough information to solve the problem, but I'm willing to engage someone who can work this out for me. Any help would be greatly appreciated.
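
    For reference, these are the sort of checks that narrow a problem like this down. The paths are stock CentOS 6 defaults and may differ on a 1&1 image; the zone name is the same placeholder used above:

        # Apache: a 403 with an empty access log usually means the request was
        # rejected by configuration, permissions, or the new firewall.
        httpd -S                              # sanity-check the vhost definitions
        tail -n 50 /var/log/httpd/error_log   # the error log normally names the denied path
        ls -ld /var/www/html                  # docroots must be readable by the apache user
        iptables -L -n --line-numbers         # see what the contractor's firewall actually blocks

        # BIND: "file not found" means named cannot locate the zone files.
        named-checkconf -z /etc/named.conf    # loads every zone and reports missing files
        ls -l /var/named/                     # confirm the zone files exist and are readable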

    Read the article

  • Blogging locally and globally–my experience

    - by DigiMortal
    At the Baltic MVP Summit 2011 there was a discussion about having two blogs - one for a local and another for a global audience - and how to publish once-written information in both. There are many ways to optimize your blogging activities if you have more than one audience, and here you can find my experiences, best practices and advice on this topic.

    My two blogs

    I have two working blogs: this one here, and a technology and programming blog for the local market. My local blog is almost five years old, which makes it one of the oldest company blogs in Estonia. It is still active and I write there as much as I have time for. This blog here has been active since September 2007, so it is about 3.5 years old right now. Both of these blogs are my major hits in my MVP career, and they have very good web statistics too.

    My local blog

    My local blog is about programming, the web and technology. It has a much wider target audience than this blog here. For example, in my local blog I also write about local events, cool new concept phones, and different sites providing interesting services. But local readers can also find postings there about how to solve one programming problem or another, and postings about the Microsoft technologies I am playing with. So far my local blog has a lot of readers for a country as small as Estonia. That blog has brought me a lot of cool contacts, and I have had many interesting discussions there about different technical topics.

    Why did I start this blog? Living in a small country is different from living in a big country. In a small country you have fewer people and therefore a smaller audience, so you have to cover more than one technical topic to find enough readers. At the same time you are still interested in your main topics, and you want to reach more people who share those interests with you. Practically, one day you will outgrow the local market and go global. This is how this blog was born. Was it worth creating, promoting and messing with? Every second I have put into this blog has been worth it. Thanks to this blog I have found good new friends, and without them I think it would be more boring to work on different problems and solutions.

    Defining target audiences

    One thing you should always do when you have more than one blog is define your target audiences. If you are just a technomaniac interested in sharing your stuff, making some new friends and having something to write on your MVP nomination form, then you don't have to go through a complex targeting process. You can do it the simple way, just as effectively. Here is how I defined the target audiences for my blogs:

    local blog - the reader of my local blog is an IT professional, software developer, technology innovator, or just someone who is interested in technology;

    this blog - the reader of this blog is an experienced professional software developer who works with Microsoft technologies, or a developer who is open-minded and open to new technologies and interesting solutions to development problems.

    You can see how the local blog - due to a small market with fewer people - has a wider audience definition, while this blog is heavily targeted at Microsoft technologies and specifically at software development. On the practical side these decisions have also worked out well, I think, because it is very hard to build up a popular general IT blog. At the global level it is better to target a specific niche and find readers who are professionals in your favorite topics.
    Thanks to this blog I have found new friends who are professional developers, and I am very happy about all the discussions I have had with them.

    Publishing content to different blogs

    My local blog and this blog have some overlapping topics, like .NET, databases and SEO. Due to this overlap there is a question: when I write a posting for my local blog, should I publish the same thing on my global blog? And if I write something for my global blog, should I publish it on my local blog as well? It really depends on the definition of your target audiences. If they match, then of course it is a good idea to translate your post and publish it on the other blog too. But if you have different audiences, then you may need to modify your posting before publishing it. The questions you have to answer are:

    Is the target audience interested in this topic?

    Is the target audience expecting a more specific and deeper handling of this topic, or a more general one?

    Is the problem you are discussing relevant for the target audience or not?

    Answer these questions and then make your decision. If you need to modify your original posting, take some time and do it. Provide quality to all your readers, because they will respect you if you respect them.

    Cross-posting and referencing

    It is tempting to save the time that preparing a blog post takes: once you are done with a posting on one blog, it may seem like a good idea to make a short posting on the other blog with a reference to the first one, where the topic is discussed at length. Well, don't do it - all your readers expect good quality content from you, and jumping from one blog post to another is disturbing for them. Of course, there is the problem of differences between target audiences. You may have a wider target audience, and some people may be interested in a more specific handling of the topic. In that case, feel free to refer to the blog you write in English. This does not work well in the opposite direction, because almost all my global blog readers understand English but not Estonian. Estonian is a complex language, and online translation tools make very poor translations from it. This is why I don't even plan to publish postings here that refer to my local blog for more information. I keep these two blogs as two different worlds, and if there is a posting that fits both blogs well, I write it for one blog and then answer the three questions above before posting the same thing to the other one.

    Conclusion

    Growing out of your local market is nothing mysterious if you live in a small country. As it is harder to find people there who are interested in the same topics as you, sooner or later you will start finding new contacts in the global audience. The global audience is bigger, and to be visible there you must provide high-quality content. It is something you learn over time, and you will learn something new every day when you post to your global blog. You may ask: if a global blog is a much more complex thing to do, is it worth doing at all? My answer is: yes, do it for sure. It is not easy when you start, but if you work on your global blog and improve it over time, you will get over all the obstacles pretty soon. Just don't forget one thing - content is king, and your readers expect high quality from you.

    Read the article

  • How to implement smooth flocking

    - by Craig
    I'm working on a simple survival game: avoid the big guy and chase the small guys to stay alive for as long as possible. I have taken the chase-and-evade example from MSDN and drawn 20 mice on the screen. I want the small guys to flock when they aren't evading. They are doing this, but it isn't as smooth as I would like it to be. How do I make the movement smoother? It's very jittery. Below is what I have going at the moment; the flocking code is within the if statement, for when it isn't set to evading. Any help would be greatly appreciated! :)

        namespace ChaseAndEvade
        {
            class MouseSprite
            {
                public enum MouseAiState
                {
                    // evading the cat
                    Evading,
                    // the mouse can't see the "cat", and it's wandering around.
                    Wander
                }

                // how fast can the mouse move?
                public float MaxMouseSpeed = 4.5f;

                // and how fast can it turn?
                public float MouseTurnSpeed = 0.20f;

                // MouseEvadeDistance controls the distance at which the mouse will flee from
                // cat. If the mouse is further than "MouseEvadeDistance" pixels away, he will
                // consider himself safe.
                public float MouseEvadeDistance = 100.0f;

                // this constant is similar to TankHysteresis. The value is larger than the
                // tank's hysteresis value because the mouse is faster than the tank: with a
                // higher velocity, small fluctuations are much more visible.
                public float MouseHysteresis = 60.0f;

                public Texture2D mouseTexture;
                public Vector2 mouseTextureCenter;
                public Vector2 mousePosition;
                public MouseAiState mouseState = MouseAiState.Wander;
                public float mouseOrientation;
                public Vector2 mouseWanderDirection;

                int separationImpact = 4;
                int cohesionImpact = 6;
                int alignmentImpact = 2;
                int sensorDistance = 50;

                public void UpdateMouse(Vector2 position, MouseSprite[] mice, int numberMice, int index)
                {
                    Vector2 catPosition = position;
                    int enemies = numberMice;

                    // first, calculate how far away the mouse is from the cat, and use that
                    // information to decide how to behave. If they are too close, the mouse
                    // will switch to "active" mode - fleeing. if they are far apart, the mouse
                    // will switch to "idle" mode, where it roams around the screen.
                    // we use a hysteresis constant in the decision making process, as described
                    // in the accompanying doc file.
                    float distanceFromCat = Vector2.Distance(mousePosition, catPosition);

                    // the cat is a safe distance away, so the mouse should idle:
                    if (distanceFromCat > MouseEvadeDistance + MouseHysteresis)
                    {
                        mouseState = MouseAiState.Wander;
                    }
                    // the cat is too close; the mouse should run:
                    else if (distanceFromCat < MouseEvadeDistance - MouseHysteresis)
                    {
                        mouseState = MouseAiState.Evading;
                    }
                    // if neither of those if blocks hit, we are in the "hysteresis" range,
                    // and the mouse will continue doing whatever it is doing now.

                    // the mouse will move at a different speed depending on what state it
                    // is in. when idle it won't move at full speed, but when actively evading
                    // it will move as fast as it can. this variable is used to track which
                    // speed the mouse should be moving.
                    float currentMouseSpeed;

                    // the second step of the Update is to change the mouse's orientation based
                    // on its current state.
                    if (mouseState == MouseAiState.Evading)
                    {
                        // If the mouse is "active," it is trying to evade the cat. The evasion
                        // behavior is accomplished by using the TurnToFace function to turn
                        // towards a point on a straight line facing away from the cat. In other
                        // words, if the cat is point A, and the mouse is point B, the "seek
                        // point" is C.
                        //     C
                        //   B
                        // A
                        Vector2 seekPosition = 2 * mousePosition - catPosition;

                        // Use the TurnToFace function, which we introduced in the AI Series 1:
                        // Aiming sample, to turn the mouse towards the seekPosition. Now when
                        // the mouse moves forward, it'll be trying to move in a straight line
                        // away from the cat.
                        mouseOrientation = ChaseAndEvadeGame.TurnToFace(mousePosition,
                            seekPosition, mouseOrientation, MouseTurnSpeed);

                        // set currentMouseSpeed to MaxMouseSpeed - the mouse should run as fast
                        // as it can.
                        currentMouseSpeed = MaxMouseSpeed;
                    }
                    else
                    {
                        // if the mouse isn't trying to evade the cat, it should just meander
                        // around the screen. we'll use the Wander function, which the mouse and
                        // tank share, to accomplish this. mouseWanderDirection and
                        // mouseOrientation are passed by ref so that the wander function can
                        // modify them. for more information on ref parameters, see
                        // http://msdn2.microsoft.com/en-us/library/14akc2c7(VS.80).aspx
                        ChaseAndEvadeGame.Wander(mousePosition, ref mouseWanderDirection,
                            ref mouseOrientation, MouseTurnSpeed);

                        // if the mouse is wandering, it should only move at 25% of its maximum
                        // speed.
                        currentMouseSpeed = .25f * MaxMouseSpeed;

                        Vector2 separate = Vector2.Zero;
                        Vector2 moveCloser = Vector2.Zero;
                        Vector2 moveAligned = Vector2.Zero;

                        // What the AI does when it sees other AIs
                        for (int j = 0; j < enemies; j++)
                        {
                            if (index != j)
                            {
                                // Calculate a vector towards another AI
                                Vector2 separation = mice[index].mousePosition - mice[j].mousePosition;

                                // Only react if other AI is within a certain distance
                                if ((separation.Length() < this.sensorDistance) & (separation.Length() > 0))
                                {
                                    moveAligned += mice[j].mouseWanderDirection;
                                    float distance = Math.Abs(separation.Length());
                                    if (distance == 0) distance = 1;
                                    moveCloser += mice[j].mousePosition;
                                    separation.Normalize();
                                    separate += separation / distance;
                                }
                            }
                        }

                        if (moveAligned.LengthSquared() != 0)
                        {
                            moveAligned.Normalize();
                        }

                        if (moveCloser.LengthSquared() != 0)
                        {
                            moveCloser.Normalize();
                        }

                        moveCloser /= enemies;

                        mice[index].mousePosition += (separate * separationImpact) +
                            (moveCloser * cohesionImpact) + (moveAligned * alignmentImpact);
                    }

                    // The final step is to move the mouse forward based on its current
                    // orientation. First, we construct a "heading" vector from the orientation
                    // angle. To do this, we'll use Cosine and Sine to tell us the x and y
                    // components of the heading vector. See the accompanying doc for more
                    // information.
                    Vector2 heading = new Vector2(
                        (float)Math.Cos(mouseOrientation), (float)Math.Sin(mouseOrientation));

                    // by multiplying the heading and speed, we can get a velocity vector. the
                    // velocity vector is then added to the mouse's current position, moving him
                    // forward.
                    mousePosition += heading * currentMouseSpeed;
                }
            }
        }

    Read the article

  • Installing XP through USB-flash disc

    - by Crazy Buddy
    I don't know whether this should be asked here, so pardon me if not. It's probably specific to my laptop, and a contradiction to this question asked already here... I tried to format my government-provided laptop (no CD drive); I thought those IT guys were proving they're too smart! I have the Windows XP CD right now. I didn't want to stick with the home-made OS from our government, so I used another laptop to prepare a USB stick, formatted the government machine, and tried to install XP (I didn't have the money to invest in Windows 7 or 8).

    Case 1: First, I let WinSetupFromUSB 1.0 beta 8 prepare the flash disk. To my surprise, the XP text-mode setup screen appeared. Using its first part, I formatted my laptop. It copied files, entered the next part, and completed the installation. I started my PC for the first time and the XP splash screen appeared. Suddenly, a blue screen flashed and disappeared (too fast for me to read it). The machine rebooted and arrived at the "Start Windows Normally" screen, and it happens again and again - like an infinite loop :-)

    Case 2: Next, I used Rufus 1.2.0 to transfer the files to my flash drive, and it screwed everything up. Even if I boot from the flash drive, it arrives at the same "Start Windows Normally" screen, with no sign that the flash drive was even inserted. Then I recognized that it simply copies everything to the flash disk.

    Case 3: Then I started over with Novicorp WinToFlash (giving utmost priority to this site). I booted from the disk and entered the first part, text mode. Some lines started running, like "Press F6 if you...". The last thing I saw was "Setup is starting Windows...", then a blue screen appeared, like this captured one. I suspect it is the same screen that flashes again and again in the first case. Man, I'm dead.

    Case 4: For the sake of my last hope, I used WinSetupFromUSB 0.1.1. I was shocked to arrive at a screen that said something like "GRUB4DOS", with options like command line, reboot, halt, and \find menu.lst - and when I go into those "find" options, I see "Error: 15 - File not found". Googling provided some commands to mount the SETUPLDR.BIN file in the GRUB thing, which also proved unsuccessful...

    Some sites say that factory reset uses only some function keys. A guy said it's F11 for Lenovo. Screw him; it's all a waste of time. But I think SE can help me out. Are our government IT guys doing this to me? Are they soooo smart that they can spark a blue screen in front of me to freak me out? Any suggestions or new (useful) USB-transfer tools would be appreciated. It's very urgent, so it'd be better if you guys pay some attention to debugging and help me out. Thanks for your time, guys :-)

    Read the article

  • Installing .NET application on IIS 7.5 issues

    - by Juw
Really need some help here; I am at a loss. I am trying to install a web service that some other guy wrote in .NET, and I have only a basic understanding of IIS. The web service works just fine on my dev computer, but now I'm trying to move it to a production server and bad things happen. The web service was located in C:\inetpub\wwwroot\ on the dev server, but on this production server it has to live in D:\services\. I have managed to create the application on the production server and everything seems fine and dandy, but when I click "Test Settings" during the initial setup I get an "Invalid application path" error. I can just close that and still finish the install. But when I try to access the web service at http://myserver.com/webservice/GetData, nothing happens: just a blank page, and when I check the response headers, a 500 error. I don't know what is going on here or where the problem is. I'll post the config file here in the hope that someone notices something odd. Thanks in advance!

EDIT: The config file is from my dev server. I just copied it to my production server... but that obviously didn't work :-)

UPDATE: I noticed that my dev server runs its application pool on .NET 4 in "classic" mode, while on the production server it was .NET 4 in "integrated" mode. So I changed it to "classic". I still get a blank page, but checking the log produced this:

    2012-10-03 14:57:00 ip removed GET /boo/GetData - 80 - ip removed Mozilla/5.0+(Windows+NT+6.1;+WOW64;+rv:15.0)+Gecko/20100101+Firefox/15.0.1 404 2 1260 203

    <?xml version="1.0" encoding="UTF-8"?>
    <configuration>
      <system.web>
        <identity impersonate="true" /> <!-- Impersonate NT AUTHORITY/IUSR -->
        <compilation targetFramework="4.0">
          <assemblies>
            <add assembly="System.Data.Entity, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b7735c561131e089" />
          </assemblies>
        </compilation>
        <pages controlRenderingCompatibilityVersion="3.5" clientIDMode="AutoID" />
      </system.web>
      <system.webServer>
        <modules runAllManagedModulesForAllRequests="true" />
        <httpErrors existingResponse="PassThrough" />
        <httpProtocol>
          <customHeaders>
            <add name="Access-Control-Allow-Origin" value="*" />
          </customHeaders>
        </httpProtocol>
        <directoryBrowse enabled="false" />
      </system.webServer>
      <system.serviceModel>
        <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
        <standardEndpoints>
          <webHttpEndpoint>
            <!-- Configure the WCF REST service base address via the global.asax.cs file
                 and the default endpoint via the attributes on the <standardEndpoint>
                 element below -->
            <standardEndpoint name="" helpEnabled="true" automaticFormatSelectionEnabled="true" />
          </webHttpEndpoint>
        </standardEndpoints>
      </system.serviceModel>
      <connectionStrings>
        <add name="Entities" connectionString="metadata=res://*/DataModel.csdl|res://*/DataModel.ssdl|res://*/DataModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;data source=someip;initial catalog=db_90;User ID=user1;Password=access2;multipleactiveresultsets=True;App=EntityFramework&quot;" providerName="System.Data.EntityClient" />
      </connectionStrings>
    </configuration>
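One more thing worth comparing, since the dev box works and the production box doesn't (this is a guess, not a confirmed fix): the application pool configuration itself. With the built-in appcmd tool, the pool's runtime version and pipeline mode can be read and set from an elevated prompt; "MyAppPool" below is a placeholder for the real pool name:

    %windir%\system32\inetsrv\appcmd list apppool
    %windir%\system32\inetsrv\appcmd set apppool "MyAppPool" /managedRuntimeVersion:v4.0 /managedPipelineMode:Classic

That at least rules out a pool mismatch between the two servers.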

    Read the article

  • Setting environment variables in OS X

    - by Percival Ulysses
Despite the warning that questions that can be answered are preferred, this question is more a request for comments. I apologize for this, but I feel it is valuable nonetheless.

The problem of setting up environment variables so that they are available to GUI applications has been around since the dawn of Mac OS X. The solution with ~/.MacOSX/environment.plist never satisfied me, because it was not reliable and bash-style globbing wasn't available. Another solution is the use of login hooks with a suitable shell script, but these are deprecated. The Apple-approved way to get the functionality of login hooks is the use of Launch Agents. I wrote a launch agent, located in /Library/LaunchAgents/:

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
        <key>Label</key>
        <string>user.conf.launchd</string>
        <key>Program</key>
        <string>/Users/Shared/conflaunchd.sh</string>
        <key>ProgramArguments</key>
        <array>
            <string>~/.conf.launchd</string>
        </array>
        <key>EnableGlobbing</key>
        <true/>
        <key>RunAtLoad</key>
        <true/>
        <key>LimitLoadToSessionType</key>
        <array>
            <string>Aqua</string>
            <string>StandardIO</string>
        </array>
    </dict>
    </plist>

The real work is done in the shell script /Users/Shared/conflaunchd.sh, which reads ~/.conf.launchd and feeds it to launchctl:

    #! /bin/bash

    #filename="$1"
    filename="$HOME/.conf.launchd"

    if [ ! -r "$filename" ]; then
        exit
    fi

    eval $(/usr/libexec/path_helper -s)

    while read line; do
        # skip lines that only contain whitespace or a comment
        if [ ! -n "$line" -o `expr "$line" : '#'` -gt 0 ]; then continue; fi
        eval launchctl $line
    done <"$filename"

    exit 0

Notice the call to path_helper to get PATH set up right. Finally, ~/.conf.launchd looks like this:

    setenv PATH ~/Applications:"${PATH}"
    setenv TEXINPUTS .:~/Documents/texmf//:
    setenv BIBINPUTS .:~/Documents/texmf/bibtex//:
    setenv BSTINPUTS .:~/Documents/texmf/bibtex//:
    # Locale
    setenv LANG en_US.UTF-8

These are launchctl commands; see its manpage for further information. It works fine for me (I should mention that I'm still a Snow Leopard guy): GUI applications such as TeXstudio can see my local texmf tree.

Things that can be improved:

- The shell script has a #filename="$1" in it. This is not accidental: the file name should be fed to the script by the launch agent as an argument, but that doesn't work.
- It is possible to put the script into the launch agent itself.
- I am not sure how secure this solution is, as it uses eval with user-provided strings.

It should be mentioned that Apple intended a somewhat similar approach by letting you put stuff in ~/launchd.conf, but it is currently unsupported as of this date and OS (see the manpage of launchd.conf). I guess things like globbing would not work as they do in this proposal. Finally, I would mention the sources I used as information on Launch Agents, but StackExchange doesn't let me [1], [2], [3]. Again, I am sorry that this is not a real question; I still hope it is useful.
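If you want to verify that the agent actually ran after logging in, here is a quick sanity check from Terminal (a sketch - these are standard launchctl subcommands, at least on Snow Leopard):

    launchctl list | grep user.conf.launchd
    launchctl getenv PATH

The first line should show the agent's label; the second should echo PATH with ~/Applications expanded at the front.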

    Read the article

  • Can spliting an access database cause printer and reporting issues?

    - by leeand00
We have a setup in which our users log into an Access database using MS Access 2003 over an RDP connection. The users log in to their own machines first with a roaming profile, then click an RDP connection file on the desktop and log in to the remote server, via RDP, where they use MS Access as the shell; they don't have access to any explorer.exe features such as the Start menu. The database they log into is really an application: it provides functionality for entering data, querying data, and running reports via form-based menus.

It all worked pretty well until we split the database as it was nearing 2 GB in size. We moved the payroll data out into a separate partition, a database with the same name in a different folder, both of them on the server. Only two tables were moved into this new database partition, and they were re-linked as external tables in the new partition.

While everything appears to be working fine data-wise after the split, there's a new issue when our users log in via RDP and attempt to run reports: often the report will not display, and instead the user sees an error about the click event of the form. At first I didn't even know it was printer-related, as we didn't really change anything related to the printers as far as I knew. Confused about the error, I talked to the guy who previously worked here and who was in charge of splitting the database, and he told me to tell the users to set their default printers (on their local machines, not on the server) to the "printer" Microsoft XPS Document Writer, which isn't a physical printer at all. This allowed the users to display their reports, but if they want to print a report on a physical printer they have to go to the File menu and select Print; clicking the print icon on the toolbar takes them to a Save As... dialog, as you'd expect when Microsoft XPS Document Writer is the default printer.

It's easy to tell whether a user is having the problem: a quick mouseover of the printer icon yields a tooltip of (none) when they cannot view their reports, and a tooltip of Microsoft XPS Document Writer when they can. If the user's default printer on their local machine is anything other than Microsoft XPS Document Writer, then (none) is always displayed when they RDP to the database. The RDP settings are set up to transfer the local printer to the server.

Telling the users to do this to print has been more of a band-aid until we find a better solution - and an explanation as to why splitting a database would prevent users from printing, or even viewing, Access database reports. Which is why I'm here asking this question. Also of note: all the printers on the network now show up on the server, so when users do click File -> Print to print a report on a physical printer, they have to dig through a huge list of printers to find theirs in the dropdown. So the little band-aid fix we have is not ideal. Previously, only the printers on the user's local machine were listed, not every printer on the network. My co-worker thinks this has something to do with permissions; I personally think it has to do with roaming profiles and Group Policies, which is what I've been reading up on. I really don't know how to fix this or how it is related to splitting the database.
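One low-risk thing I plan to double-check (an assumption on my part, since I haven't confirmed how the connection files were built): whether the .rdp file on each user's desktop still requests printer redirection. The relevant line in a .rdp file is:

    redirectprinters:i:1

If that line is missing or set to 0, the session would only offer server-side printers, which at least matches part of what we're seeing.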

    Read the article

  • My 2009 MacBook Logic board failed - options to proceed and how difficult?

    - by user181061
Scannerz just gave my MacBook logic board a big fat F! I upgraded from Snow Leopard to Mountain Lion about 3 weeks ago. The system was running short of memory, so I upgraded the RAM, and it ran fine for about 2 weeks. Yesterday the thing started acting erratic: a lot of spinning beach balls, delays, and then some errors saying files couldn't be read from or written to the drive. I figured the drive was going, because the system is over 3 years old. I ran Scannerz on it and it indicated a lot of errors and irregularities. I rescanned in cursory mode, and none of them were repeatable; they just showed up all over the place, in different regions of the scan. I went through the docs, and they implied either an I/O cable is bad, a connection is damaged, or the logic board is bad.

I tossed on my backup of Snow Leopard, which I'd cloned from the original hard drive, because I figured Mountain Lion was to blame, and booted from the USB drive with the clone on it. It wasn't Mountain Lion. I performed scans on every single port, and errors and irregularities that couldn't be repeated showed up on every one of them. Then, for kicks, I put a CD into the CD player. Scannerz doesn't test optical drives, but I figured surely that would work. No, it won't: more spinning beach balls and messages telling me it can't be read. It was working fine 3 days ago.

I know a lot of people don't like MacBooks, but mine's been great - at least until now. It's a mid-2009 MacBook, and in my opinion it's a complete waste to toss this system. The display is too good, the keyboard works great, and it still looks good; plus this type of MacBook still has a FireWire 400 port, which I use for Time Machine backups. I've tried reseating the RAM; it didn't do anything. I shut the system down, put in the old RAM, booted into Snow Leopard, and the problems persist.

Here are my questions:

- The Scannerz documentation says something about the AirPort card not being seated properly, but on iFixit it's apparent - at least I think it's apparent - that this isn't a slot-type AirPort card that the user can easily install or remove. If the cables or connections to the AirPort card are bad, could they be causing this problem? How about any other connections that can be intermittent, failing, or erratic?
- Are there any resets I could do to get rid of this?
- For anyone who has replaced a logic board on a MacBook: if this really is the culprit, are there any gotchas I need to be aware of?

As an FYI, I replaced the hard drive in an old 500 MHz iBook a long time ago, and I replaced the drive in a 1.33 GHz PowerBook about 6 years ago. You have to be careful, but using the info on web sites like iFixit it's not that hard - time consuming, but not that hard. The Intel-based MacBooks look like they're easier to service than either of those. I'm thinking about getting a unit off of eBay that matches mine but has something else wrong with it, like a busted display. I REFUSE to buy a new system. A guy at my office has a 2007 Mac Pro and can't upgrade to Mountain Lion because his system is "obsoleted." That's ridiculous: if you pay nearly $7,500 for a system, it shouldn't be trash just because Apple decides they don't have enough money (sorry for the soapbox, but it's true, IMO!). Any input is appreciated.

    Read the article

  • How to use BCDEdit to dual boot Windows installations?

    - by Ian Boyd
What are the bcdedit commands necessary to set up dual boot between different installations of Windows? [5]

Background

I recently installed Windows 8 onto a separate hard drive [1]. Now that Windows 8 is installed, I want to dual-boot back to Windows 7. I have my two [2] hard drives: [screenshot] So you can see that I have my two disks, with the partitions containing Windows:

    Windows 7: \\PhysicalDisk0 (partition 0 [3])
    Windows 8: \\PhysicalDisk2 (partition 1)

What I'm trying to figure out is how to use bcdedit to instruct the thing that boots Windows that there is another Windows installation out there. Running bcdedit now shows the current configuration:

    C:\WINDOWS\system32>bcdedit

    Windows Boot Manager
    --------------------
    identifier              {bootmgr}
    device                  partition=\Device\HarddiskVolume2
    description             Windows Boot Manager
    locale                  en-US
    inherit                 {globalsettings}
    integrityservices       Enable
    default                 {current}
    resumeobject            {ce153eb7-3786-11e2-87c0-e740e123299f}
    displayorder            {current}
    toolsdisplayorder       {memdiag}
    timeout                 30

    Windows Boot Loader
    -------------------
    identifier              {current}
    device                  partition=C:
    path                    \WINDOWS\system32\winload.exe
    description             Windows 8
    locale                  en-US
    inherit                 {bootloadersettings}
    recoverysequence        {ce153eb9-3786-11e2-87c0-e740e123299f}
    integrityservices       Enable
    recoveryenabled         Yes
    allowedinmemorysettings 0x15000075
    osdevice                partition=C:
    systemroot              \WINDOWS
    resumeobject            {ce153eb7-3786-11e2-87c0-e740e123299f}
    nx                      OptIn
    bootmenupolicy          Standard
    hypervisorlaunchtype    Auto

I cannot find any documentation on the difference between Windows Boot Manager and Windows Boot Loader.

Documentation

There is some documentation on bcdedit:

    - Technet: Command Line Reference - Bcdedit
    - Technet: Windows Automated Installation Kit - BCDEdit Command Line Options
    - Whitepaper - BCDEdit Commands for Boot Environment (Word document)

But they don't explain how to edit the binary boot configuration data.

If I had to guess, I would think that a Windows Boot Manager instructs the BIOS what program it should run, and that program gives the user a set of boot choices. That leaves Windows Boot Loader to be a particular boot choice, representing a particular installation of Windows. If that is the case, I would need to create a new Windows Boot Loader entry. This means I might want to use the /create parameter:

    /create    Creates a new boot entry:
               bcdedit [/store filename] /create [id] /d description
                       [/application apptype | /inherit [apptype] |
                        /inherit DEVICE | /device]

So I assume a syntax of:

    >bcdedit /create /d "The old Windows 7" /application osloader

where the application can be one of the following types:

    Apptype      Description
    ==========   ===========================
    BOOTSECTOR   The boot sector application
    OSLOADER     The Windows boot loader
    RESUME       A resume application

Unfortunately, the only documentation about osloader is "The Windows boot loader". I don't see how that can differentiate between Windows 8 on one hard drive and Windows 7 on another.

The other possible parameter when you /create a boot loader is:

    >bcdedit /create /D "Windows Vista" /device "The Quick Brown Fox"

Unfortunately the documentation is missing for /device:

    /device    Optional. If id is not set to a well-known identifier, the option
               that is used to specify the new boot entry as an additional device
               options entry.

Since I did not set id to a well-known identifier, I must set /device to "the option that is used to specify the new boot entry as an additional device options entry". I know all those words; they're all English. But I have no idea what it is saying; those words in that order seem nonsensical.
So I'm somewhat stymied. I don't want to be like Dan Stolts from Microsoft:

    I found no content that was particularly helpful when I hosed my machine by playing
    with BCDEdit. This post would have been ok if there was much more detail, especially
    on the /set command OSDevice, etc. So once I got my machine fixed, I documented the
    solution and the information is here....

I mean, if a Microsoft guy can't even figure out how to use BCDEdit to edit his BCD, then what chance do I have?

Bonus Reading

    - BCDEdit Command-Line Options
    - Bcdedit
    - Server 2008 R2 or Windows 7 System Will NOT Boot After Making Changes To Boot Manager Using BCDEdit
    - Visual BCD Editor [4]
    - Windows 7 and Windows 8 RTM Dual Boot Setup

Footnotes

[1] Since the Windows 8 installer would have damaged my Windows 7 install, I decided to unplug my "main" hard drive during the install. Which is a long-winded explanation of why the Windows 8 installer didn't detect the existing Windows 7 install; normally the installer would have automatically created the required entries for dual-boot. Not that the reason I'm asking the question is important.

[2] Really there are three drives, but the third is just bulk storage. The existence of a third hard drive is irrelevant to the question; I only mention it in case someone wants to know why the screenshot shows three hard drives when I only mention two.

[3] I arbitrarily started numbering partitions at "zero"; that's not to imply that partitions are numbered starting at zero. I only mention partitions because I don't see how any boot loader could do its job without knowing which partition, and which folder, an installation of Windows is located in.

[4] I'm asking about BCDEdit. I tried Visual BCD Editor. It seems to be a visual BCD editor: that is to say, it's a GUI, but it still uses the same terminology as BCDEdit and requires the same knowledge that BCD doesn't document.

[5] For simplicity's sake, we'll assume that all installations of Windows I want to dual-boot between are Windows Vista or later, making them all compatible with BCDEdit and the binary boot loader. The alternative would require delving into the intricacies of the old ntldr. Nor am I asking about dual-booting to Linux, or how to boot a Virtual Hard Drive (VHD) image - just modern versions of Windows on existing hard drives in the same machine.

Note: You can ignore everything after the word Background. It's all pointless exposition to satisfy some people's need for "research effort" before they'll consider being helpful. Some people have even been known to summarily close questions unless there is research effort. Some people have been known to close questions if there is too much research effort. Some people close questions when I put in the note saying they can ignore everything after the Background, out of spite. Some people are just grumpy.
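For what it's worth, the recipe most often cited for exactly this scenario sidesteps /create entirely: copy the working Windows 8 entry and repoint it at the Windows 7 partition. A sketch (the drive letter D: is an assumption - substitute whatever letter Windows 8 assigns to the Windows 7 partition, and use the GUID that /copy prints):

    C:\>bcdedit /copy {current} /d "Windows 7"
    The entry was successfully copied to {guid}.

    C:\>bcdedit /set {guid} device partition=D:
    C:\>bcdedit /set {guid} osdevice partition=D:

Since both installations keep winload.exe in \Windows\system32, the copied path and systemroot values can stay as they are.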

    Read the article

  • Performance required to improve Windows Experience Index?

    - by Ian Boyd
Is there a guide to the metrics required to obtain a certain Windows Experience Index? A Microsoft guy said in January 2009:

    On the matter of transparency, it is indeed our plan to disclose in great detail how
    the scores are calculated, what the tests attempt to measure, why, and how they map
    to realistic scenarios and usage patterns.

Has that amount of transparency happened? Is there a TechNet article somewhere?

Suppose my score is limited by my Memory subscore of 5.9. A naive person would suggest: buy faster RAM. Which is wrong, of course. From the Windows help:

    If your computer has a 64-bit central processing unit (CPU) and 4 gigabytes (GB) or
    less random access memory (RAM), then the Memory (RAM) subscore for your computer
    will have a maximum of 5.9.

You can buy the fastest, overclocked, liquid-cooled, DDR5 RAM on the planet and you'll still have a maximum Memory subscore of 5.9. So in general, the knee-jerk advice "buy better stuff" is not helpful. What I am looking for is the attributes required to achieve a certain score, or to move beyond a current limitation. The information I've been able to compile so far, chiefly from three Windows blog entries and an article:

    Memory subscore

    Score     Conditions
    =======   ================================================================
    1.0       < 256 MB
    2.0       < 500 MB
    2.9       <= 512 MB
    3.5       < 704 MB
    3.9       < 944 MB
    4.5       <= 1.5 GB
    5.9       < 4.0 GB - 64 MB on a 64-bit OS; Windows Vista highest score
    7.9       Windows 7 highest score

    Graphics subscore

    Score     Conditions
    =======   ================================================================
    1.0       doesn't support DX9
    1.9       doesn't support WDDM
    4.9       does not support Pixel Shader 3.0
    5.9       doesn't support DX10 or WDDM 1.1; Windows Vista highest score
    7.9       Windows 7 highest score

    Gaming graphics subscore

    Score     Result
    =======   ================================================================
    1.0       doesn't support D3D
    2.0       supports D3D9, DX9 and WDDM
    5.9       doesn't support DX10 or WDDM 1.1; Windows Vista highest score
    6.0-6.9   good framerates (e.g. 40-50 fps) at normal resolutions (e.g. 1280x1024)
    7.0-7.9   even higher framerates at even higher resolutions
    7.9       Windows 7 highest score

    Processor subscore

    Score     Conditions
    =======   ================================================================
    5.9       Windows Vista highest score
    6.0-6.9   many quad-core processors will be able to score in the high 6 to low 7 range
    7.0+      8-core systems will be able to approach 7.9
    7.9       Windows 7 highest score

    Primary hard disk subscore (note)

    Score     Conditions
    =======   ================================================================
    1.9       limit for pathological drives that stop responding when writes are pending
    2.0       limit for pathological drives that stop responding when writes are pending
    2.9       limit for pathological drives that stop responding when writes are pending
    3.0       limit for pathological drives that stop responding when writes are pending
    5.9       highest you're likely to see without an SSD; Windows Vista highest score
    7.9       Windows 7 highest score

Bonus Chatter

You can find your detailed WEI test results in C:\Windows\Performance\WinSAT\DataStore, e.g.
2011-11-06 01.00.19.482 Disk.Assessment (Recent).WinSAT.xml:

    <WinSAT>
        <WinSPR>
            <DiskScore>5.9</DiskScore>
        </WinSPR>
        <Metrics>
            <DiskMetrics>
                <AvgThroughput units="MB/s" score="6.4" ioSize="65536" kind="Sequential Read">89.95188</AvgThroughput>
                <AvgThroughput units="MB/s" score="4.0" ioSize="16384" kind="Random Read">1.58000</AvgThroughput>
                <Responsiveness Reason="UnableToAssess" Kind="Cap">TRUE</Responsiveness>
            </DiskMetrics>
        </Metrics>
    </WinSAT>

Pre-emptive snarky comment: "WEI is useless, it has no relation to reality." Fine - then how do I increase my hard drive's random I/O throughput?

Update - Amount of memory limits rating

Some people don't believe Microsoft's statement that having less than 4 GB of RAM on a 64-bit edition of Windows limits the rating to 5.9. From xxx.Formal.Assessment (Recent).WinSAT.xml:

    <WinSPR>
        <LimitsApplied>
            <MemoryScore>
                <LimitApplied Friendly="Physical memory available to the OS is less than 4.0GB-64MB on a 64-bit OS : limit mem score to 5.9" Relation="LT">4227858432</LimitApplied>
            </MemoryScore>
        </LimitsApplied>
    </WinSPR>

References

    - Windows Vista Team Blog: Windows Experience Index: An In-Depth Look
    - Understand and improve your computer's performance in Windows Vista
    - Engineering Windows 7 Blog: Engineering the Windows 7 "Windows Experience Index"
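Side note: the assessment behind those XML files can be rerun at will from an elevated command prompt, which regenerates the results in the DataStore folder:

    winsat formal

Individual subtests exist too (winsat disk, winsat mem, and so on), so the disk numbers above can be reproduced without rerunning the whole suite.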

    Read the article

  • Remote host: can tracert, can telnet, can*not* browse: what gives?

    - by MacThePenguin
One of the customers of the company I work for has made a change to their Internet connection, and now we can't connect to them any more from our LAN. To help me troubleshoot this issue, the network guy on the customer's site has configured their firewall so that an HTTPS connection to their public IP address is open to any IP. I should be able to put https://<customer's IP> in my browser and get a web page. Well, it works from every network I've tried (even from my smartphone) - just not from my company's LAN.

I thought it might be an issue with our firewall (though I checked its rules and it allows outbound TCP port 443 to anywhere), so I connected a PC directly to our provider's network connection, bypassing our firewall completely, and it still didn't work (everything else worked). So I asked our Internet provider's customer service for help, and they asked me to do a tracert to the customer's IP. The tracert is successful: the final hop shown in the output is the host I want to reach. So they said there's no problem. :(

I also tried telnet <customer's IP> 443 and that works as well: I get a blank page with the cursor blinking (trying another random port gives me an error message, as it should). Still, from any browser on any PC in my LAN I can't open that URL. I checked the network traffic with Wireshark: I see the packets going through and answers coming back, though the packets I see passing are far fewer than when I successfully connect to another HTTPS website. See the attached screenshot; I had to blur the IPs - the longer string is my PC's local IP address, the shorter one is the customer's public IP. I don't know what else to try. This is the only IP doing this... Any idea what I could try to find a solution to this issue? Thanks, and let me know if you need further details.

Edit: when I say "it doesn't work", I mean the page doesn't open; the browser keeps loading for a long time and eventually shows an error saying that the page cannot be opened. I'm not in my office now so I can't paste the exact message, but it's the usual message you get when the browser reaches its timeout. When I say "it works", I mean the browser loads and shows a web page (it's the logon page for the customer's firewall admin interface, so there's the firewall brand's logo and fields to enter a user ID and a password).

Update 13/09/2012: tried again to connect to the customer's network through our Internet connection without a firewall. This is what I did:

- Ran a Kubuntu 12.04 live distro on a spare laptop;
- Updated all the packages I could and installed Wireshark;
- Attached it to my LAN and verified that I couldn't open https://<customer's IP>;
- Verified that the Wireshark trace for this attempt was the same as the one I've already posted;
- Verified that I could connect to another customer's host using rdesktop (it worked);
- Tried to rdesktop to <customer's IP>; here's the output:

    kubuntu@kubuntu:/etc$ rdesktop <customer's IP>
    Autoselected keyboard map en-us
    ERROR: recv: Connection reset by peer

- Disconnected the laptop from the LAN;
- Disconnected the firewall from the Extranet connection and connected the laptop instead;
- Set its network configuration so that I could access the Internet;
- Verified that I could connect to other websites over HTTP and HTTPS, and over RDP to other customers' hosts - it all worked as expected;
- Verified that I could still traceroute to <customer's IP>: I could;
- Verified that I still couldn't open https://<customer's IP> (same exact result as before);
- Checked the Wireshark trace for this attempt and noticed a different behaviour: I could see packets going out to the customer's IP, but no replies at all;
- Tried to run rdesktop again, with a slightly different result:

    kubuntu@kubuntu:/etc/network$ rdesktop <customer's IP>
    Autoselected keyboard map en-us
    ERROR: <customer's IP>: unable to connect

- Finally gave up, put everything back as it was before, turned off the laptop, and lost the Wireshark traces I had saved. :( I still remember them very well, though. :)

Can you get anything out of it? Thank you very much.

Update 12/09/2012 n.2: I followed the suggestion by MadHatter in the comments. From inside the firewall, this is what I get:

    user@ubuntu-mantis:~$ openssl s_client -connect <customer's IP>:443
    CONNECTED(00000003)

If I then type GET / the output pauses for several seconds and then I get:

    write:errno=104

I'm going to try the same, but bypassing the firewall, as soon as I can. Thanks.

Update 12/09/2012 n.3: So, I think ISA Server is altering the results of my tests... I tried installing Wireshark directly on the firewall and monitoring the packets on the Extranet network card. When the destination is the customer's IP, whatever service I try to connect to (HTTPS, RDP or SAProuter), I can only see outbound packets and no response packets whatsoever from their side. It looks like ISA Server is "faking" the remote server's replies; that's why I get a connection using telnet or the OpenSSL client. This is the Wireshark trace from inside our LAN: [screenshot] But this is the trace on the Extranet network card: [screenshot] This makes a bit more sense... I'll send this info to the customer's tech and see if he can make anything out of it. Thanks to all who took the time to read my question and post suggestions. I'll update this post again.
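For anyone who wants to reproduce the browser test from a command line (a sketch - I haven't run the curl variant myself; -v prints the handshake, -k skips certificate validation):

    curl -vk https://<customer's IP>/
    openssl s_client -connect <customer's IP>:443 -state

The -state option makes s_client print each SSL handshake step, which should show exactly where the connection stalls.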

    Read the article

  • My D-Link's Ethernet bridge downlink just got 10-30x slower?

    - by Jay Levitt
TL;DR: I unplugged my network to move my desk, and now downloading via my DIR-655's Ethernet LAN bridge is 10-30x slower than via the Ethernet switch it's plugged into.

Background

My network is:

    SMC cable modem <-> Cisco firewall <-> Netgear switch <-> D-Link WiFi†
        SMC8014           ASA-5505         GS608v2 gigE      DIR-655 rev A3 gigE

†The DIR-655 is used as an access point, not a router (although what D-Link calls an access point, I'd call a bridge). The "WAN" port is unused; the Netgear connects to the built-in 4-port Ethernet LAN switch, inside the built-in router/firewall.

Endpoints:

- MacBook Pro 17" mid-2010
- iPhone 4S
- Fedora 12 Linux server running a reasonably fast dual-Athlon X2, VelociRaptors, etc.

All cables are <10 feet, mostly CAT-5e, some CAT-6, all premade. All WiFi endpoints are within three feet of the D-Link. Yesterday I unplugged and rearranged stuff, and now connecting via the D-Link - even through the wired switch, right next to the incoming network cable - is 30x slower than connecting directly to the Netgear switch, on both my MacBook and iPhone.

How I'm measuring "slower"

I'm mostly using http://speedtest.net, which of course only really measures broadband speeds. I've also installed http://www.speedtest.net/mini.php on my local server, but can't test the iPhone with that.

Results

Speedtest.net, closest server, over Comcast business-class:

    CONFIG                        | PING (ms) | DOWN (Mbps) | UP (Mbps)
    Mac <-> Ethernet <-> Netgear  |     9     |    31.6     |    6.8
    Mac <-> Ethernet <-> D-Link   |     8     |     4.1     |    6.0
    Mac <-> WiFi <-> D-Link       |     9     |     1.4     |    2.9
    iPhone <-> WiFi <-> D-Link    |    67     |     0.4     |    1.6

Speedtest Mini on the Linux PC:

    CONFIG                        | DOWN (Mbps) | UP (Mbps)
    Mac <-> Ethernet <-> Netgear  |    97.2     |   76.9
    Mac <-> Ethernet <-> D-Link   |     8.2     |   24.2
    Mac <-> WiFi <-> D-Link       |     1.0     |    8.6

Slow typing in SSH:

    Mac <-> Ethernet <-> Netgear <-> Linux PC: smooth
    Mac <-> Ethernet <-> D-Link <-> Linux PC: choppy

Note that D-Link upload speeds are normal on broadband, slower locally (but I'd believe that's a D-Link limitation), and always faster than the downloads! Since ssh is choppy with just slow typing, I don't believe it's a throttling-type problem either; that's not a lot of bandwidth.

What I've tried:

- Swapping all "good" and "bad" cables
- Re-plugging the "bad" cable from the D-Link into the Netgear and watching it become the "good" cable
- Pulling cables away from power lines
- Verifying that the Mac auto-detects the D-Link as gigE
- Trying to verify the link speed of the D-Link <-> Netgear connection (the firmware doesn't report it)
- Verifying that the D-Link sees no TX/RX errors or collisions
- Using different Ethernet ports on both the Netgear and the D-Link
- Resetting the D-Link to factory settings
- Upgrading the D-Link firmware from 1.21 to 1.35NA (2010/11/12, the latest)
- Rebooting everything at least once
- On the Mac, disabling Wi-Fi during the Ethernet tests and unplugging Ethernet during the Wi-Fi tests
- Using iStumbler to verify that the D-Link isn't picking overloaded Wi-Fi channels (usually just 1-5 neighbors on my and adjacent channels; average for my apartment building)
- Verifying that the only client connected to the Wi-Fi was the iPhone
- Verifying that nothing was being chatty on my network, according to the WISH log
- Enabling and disabling all sorts of D-Link settings, including forcing WAN auto-detect to gigE

So. I don't mind buying a new access point - I wouldn't mind having a dual-link network - but as a guy who's been networking since gated v4 was a drastic rewrite, and who often used physical sniffers in the days before Wireshark, I'm baffled. I hate being baffled.
What could I possibly have changed that would result in this? How can I measure it? All I can think of is a static zap—thick carpet, socks, HVAC—but I didn't feel one, and does that really happen anymore? Can I test if it's Ethernet vs. TCP layer slowness? I'm not familiar with modern network utilities; it's hard to Google without hitting "Q: Why is my network slow? A: Is your microwave on?" If I don't get an answer here, will someone big and powerful help me migrate it to serverfault without getting screamed back here? In the words of Inigo Montoya, "I must know." Don't get all Dread Pirate Roberts on me.
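One concrete way to settle my own "Ethernet vs. TCP layer" question, offered here as a sketch (iperf would need installing on both ends, and the host name below is made up):

    # on the Fedora server
    iperf -s

    # on the MacBook
    iperf -c fedorabox.local -t 30

If the D-Link path still tops out around 8 Mbps while the Netgear path runs near gigabit speed, the slowdown is below TCP; if iperf is fast both ways, it's something higher up.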

    Read the article

  • Windows 7 ipv4 autoconfiguration - cannot connect to internet

    - by GuiccoPiano
I get my Internet connection from a guy (let's call him my service provider henceforth). He provides Internet connections to many students here in my hostel. My PC gets a private IP through his DHCP server. Now, when I switch on my WiFi, my PC gets a private IP as it should, and I can connect to the Internet just fine. But when I connect my LAN cable, my PC gets an "Autoconfiguration IPv4 address" of 169.254.110.154 (Preferred) and I cannot connect to the Internet. Here is the ipconfig /all output for the Ethernet port:

    Ethernet adapter Local Area Connection:

       Connection-specific DNS Suffix  . :
       Description . . . . . . . . . . . : Marvell Yukon 88E8059 PCI-E Gigabit Ethernet Controller
       Physical Address. . . . . . . . . : <<MAC DISPLAYED HERE>>
       DHCP Enabled. . . . . . . . . . . : Yes
       Autoconfiguration Enabled . . . . : Yes
       Link-local IPv6 Address . . . . . : fe80::5054:a347:7d06:6e9a%11(Preferred)
       Autoconfiguration IPv4 Address. . : 169.254.110.154(Preferred)
       Subnet Mask . . . . . . . . . . . : 255.255.0.0
       Default Gateway . . . . . . . . . :
       DHCPv6 IAID . . . . . . . . . . . : 285222078
       DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-14-50-AC-68-54-42-49-EE-52-16
       DNS Servers . . . . . . . . . . . : fec0:0:0:ffff::1%1
                                           fec0:0:0:ffff::2%1
                                           fec0:0:0:ffff::3%1
       NetBIOS over Tcpip. . . . . . . . : Enabled

I also tried the following, from a command prompt started as admin:

    netsh winsock reset
    netsh interface ipv4 reset
    netsh interface ipv6 reset

and then restarted the computer. None of this worked. Any ideas on how to solve the problem?
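Edit - one more thing worth trying (a sketch; the adapter name must match the one shown by ipconfig): force the wired adapter back onto DHCP and renew, from an elevated prompt:

    netsh interface ip set address "Local Area Connection" dhcp
    netsh interface ip set dns "Local Area Connection" dhcp
    ipconfig /release "Local Area Connection"
    ipconfig /renew "Local Area Connection"

A 169.254.x.x address just means no DHCP answer arrived on that interface, so it's also worth confirming with the provider that his DHCP server actually listens on the wired segment.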

    Read the article

  • System Center 2012 Service Manager change request status stuck at new

    - by Chuck Herrington
The guy that built and set up this system left rather abruptly, and I've taken over. My current issues: I have several change requests that are stuck at New; they do not move to Pending or In Progress. Also, the system is not sending emails when incidents get assigned to people - and this used to work. I have done a lot of searching, and the usual fix of stopping and restarting the System Center services does not help. Can anyone give me any ideas of where else to look?

Update: From all the searching I had done, it seemed like I was at the point of re-installing. My initial installation of SCSM 2012 was on a machine that was upgraded from SCSM 2010 and also hosted SCCM 2007 and WSUS. We decided to give it a fresh start by installing a second instance of the SCSM server on a brand-new 2008 R2 server, then promoting the new server to workflow master using the procedures outlined in the article "Dealing with Multiple Management Servers". I've gotten to the point where both the old and the new server are up and the new server has been promoted. I had hoped to suddenly get spammed by emails as the workflows took off, but no such luck. Once all the clients are reconfigured to point to the new server, we still plan to decommission the old server, but at this point it seems that the problem is in the database. Short of any other input from the community, my next plan is to install a 180-day trial on a test server, complete with a separate database, so that I can do a side-by-side comparison between a completely fresh install and what I have now, and see if I can find any differences. While that install is running, I also plan to investigate the event logs to see if anything in there can shed some light on what is happening on the new server.

Update 2: I've now got a test SCSM server up with a completely fresh install, including the database, and it is able to transition change requests from New to In Progress. I'm attempting to find differences between the two. Stay tuned!

Update 3: Looking through the event log on the new SCSM machine, I discovered:

    Log Name:      Operations Manager
    Source:        OpsMgr Root Connector
    Date:          10/9/2013 3:48:18 PM
    Event ID:      28000
    Task Category: None
    Level:         Warning
    Keywords:      Classic
    User:          N/A
    Computer:      scsm02
    Description:
    The Root connector received an exception from the SDK Service while submitting
    task status: Cannot set availability on a health service that doesn't exist.

This led me to "Event ID 28000 logged after installing secondary server for System Center 2012 Service Manager SP1". I contacted MS to obtain the hotfix. BIG warning here: it turns out the hotfix is not so "hot". In order to apply it, you have to uninstall and then reinstall using the files they supply. :( This is where I am at now ...
At the moment the only things I can think to do are either a) nuke the entire system, including the database, and start over, losing all of our data in the process, or b) contact MS (which is probably going to cost us a butt-load of money and time, only for them to advise us to do the same thing). Maybe more ideas will come after coffee ...

No answers came after coffee. Attempting to contact MS. I managed to get to their first line of defense and gave them our SA number, and someone is supposed to call me back. I am trying to log into my incident on their site to update my ticket with the link to this thread, but when I click on the link in the email they sent me it goes to a "Sorry, the page you requested is not available" page ... Linux is looking better and better all the time.
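(For anyone who lands here with the same problem: the "stop and restart the System Center services" advice above means these three services on the management server. The display names below are from SCSM 2012 - verify them in services.msc before scripting anything, since they can differ by version:

    net stop "System Center Management Configuration"
    net stop "System Center Data Access Service"
    net stop "System Center Management"
    net start "System Center Management"
    net start "System Center Data Access Service"
    net start "System Center Management Configuration"

As noted above, this didn't fix my stuck change requests, but it's the first thing support will ask about.)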

    Read the article

  • Mauritius Software Craftsmanship Community

There we go! I finally managed to push myself forward and pick up an old - actually far too old - idea that I've been carrying around since I arrived here in Mauritius more than six years ago. I'm talking about a community for all kinds of ICT-connected people.

In the past (back in Germany), I used to be involved in various community activities. For example, I was part of the Microsoft Community Leader/Influencer Program (CLIP) in Germany thanks to an FAQ on Visual FoxPro - Active FoxPro Pages (AFP), to be more precise. Then in 2003/2004 I approached the person responsible for the dFPUG user group in Speyer in order to assist him in organising monthly user group meetings. Well, he handed over management completely, and attended our meetings regularly.

Why did it take you so long?

Well, I don't want to bother you with the details, but the short version is that I was too busy with either work (building up new companies) or private life (I got married and we have two lovely children, eh, 'monsters') - or both. But now is the time to look for new fields, given that I've gained some spare time. My businesses are up and running, the kids are in school, and I am finally in a position where I can commit myself to community activities again. And I love doing that!

Why a new user group?

Good question... and 'easy' to answer. Back in 2007 I did my usual research, eh, Google searches, to see whether there were existing user groups in Mauritius, and in which fields of interest. And yes, there are! If I recall correctly, there are communities for PHP, Drupal, Python (just recently), Oracle, and Linux (of which there even used to be two). But... either they don't exist anymore, they are dormant, or, frankly speaking, they have only a faint heartbeat. And yes, I went to meetings of the Linux User Group Meta (Mauritius) back in 2010/2011 and again just recently. I really like the setup and the way the LUGM is organised. It's just that I have a slightly different point of view on how a user group or community should organise itself and how it should approach future members. Don't get me wrong: I'm not criticising others who are doing a very good job; I'm only saying that I'd like to do it differently. The last meeting of the LUGM was awesome; read my feedback about it.

Ok, so what's up with the 'Mauritius Software Craftsmanship Community', or MSCC for short?

As I've already written in my article 'Communities - The importance of exchange and discussion', I think it is essential in the world of IT to stay 'connected' with a good number of other people in the same field. There is so much dynamism and so much daily news that it is almost impossible to keep track of it all. The MSCC is going to provide a common platform to exchange experience and share knowledge. You might be a newbie who wants to know what to expect working as a software developer, or as a database administrator, or maybe as an IT systems administrator; or an experienced geek who loves to share ideas or solutions you implemented to solve a specific problem; or the business (or HR) guy looking for 'fresh' blood to reinforce your existing team. Or... you're just interested and you'd like to communicate with like-minded people.

[Photo: Meetup of 26.06.2013 @ L'arabica - of course there are laptops around. Free WiFi, power outlets, coffee, code and Linux in one go.]

The MSCC is technology-agnostic and spans an umbrella over every kind of technology - simply because you can't ignore other technologies any more in a connected IT world like ours.
A front-end developer for iOS applications should have the chance to connect with a Python back-end coder, and eventually with a DBA for MySQL or PostgreSQL, and exchange experience. Furthermore, I'm a huge fan of cross-platform development, and it is very pleasant to have pure web developers - with all that HTML5, CSS3, JavaScript and JS-library stuff - and passionate C# or Java coders at the same table. This diversity of knowledge can support you and boost your own skills. And last but not least, there are projects and open positions 'flying' around... People might like to hear others' opinions about an employer, or get new impulses on how to tackle an issue at their workplace, etc. This is about community. And that's how I see the MSCC in general: free of any limitations, be it by programming language or technology. Having the chance to exchange experience and to discuss certain aspects of technology saves you time and money, and it's a pleasure to enjoy - compared to dusty books and remote online resources, it's human!

Organising meetups (meetings, get-togethers, gatherings - you name it!)

As of writing this article, the MSCC meets every Wednesday for the weekly 'Code & Coffee' session at various locations (suggestions are welcome!) in Mauritius. This might change eventually, but especially at the beginning I think it is very important to create awareness in the Mauritian IT world. Yes, we are here! Come and join us! ;-)

The MSCC's main online presence is at Meetup.com, because it allows me to handle the organisation of events and meeting appointments very easily, and any member can see who else is involved, so an exchange of contacts is possible at any time. In combination with the other entities (G+ Communities, FB Pages or Groups), I advertise and manage all future activities here: Mauritius Software Craftsmanship Community.

    This is a community for those who care about and are proud of what they do. For
    those developers, regardless of how experienced they are, who want to improve and
    master their craft. This is a community for those who believe that being average
    is just not good enough.

I know there are not many 'craftsmen' yet, but it's a start... Let's see how it looks by the end of the year. There are free smartphone apps for Android and iOS from Meetup.com that let you keep track of meetings and stay informed about the latest updates. And last but not least, there is a Trello workspace to collect and share ideas and provide downloads of slides, etc. Trello is also available as a free smartphone app.

Sharing is caring!

As mentioned, the #MSCC is present in various social media networks in order to reach as many people as possible here in Mauritius. Here is an overview of the current networks:

- Twitter - latest updates and quickies
- Google+ - community channel
- Facebook - community page
- LinkedIn - community group
- Trello - collaboration workspace to share and develop ideas

Hopefully this covers the majority of computer-related people in Mauritius. Please spread the word about the #MSCC among your colleagues, your friends and other interested 'geeks'.

Your future looks bright

Running and participating in a user group or any kind of community usually provides quite a number of advantages for everyone involved.
On the one hand, it is a real joy for me to organise appointments and get in touch with people who might be interested in presenting a little demo of their projects, or of recent problems they had to tackle; on the other hand, there are lots of companies that have support programs or sponsorships especially tailored for user groups. At the moment I already have a couple of gimmicks that I would like to hand out in small contests or raffles during one of the upcoming meetings, and, as said, companies provide all kinds of goodies, books free of charge, and sometimes even licenses for communities. Meeting other software developers or IT guys also opens up your view of the local market, and there might be interesting projects or job offers available, too. A community like the Mauritius Software Craftsmanship Community is great for freelancers, the self-employed, students and, of course, employees.

Meetings will be organised on a regular basis, and I'm open to all kinds of suggestions from you. Please leave a comment here in the blog or join the conversations in the social networks mentioned above. Let's get this community up and running, my fellow Mauritians!

Recent updates

The MSCC is now officially participating in the O'Reilly UK user group program, and we are allowed to request review copies of recent titles. Additionally, we have a discount code for any books or ebooks you might like to order on shop.oreilly.com. More applications for user group sponsorship programs are pending, and I'm looking forward to a couple of announcements very soon.

And... we need some kind of 'corporate identity'. Over at the MSCC website there is a call for action (or better, a contest with prizes) to create a unique design for the MSCC. This would include a decent colour palette, a logo, graphical banners for Meetup, Google+, Facebook, LinkedIn, etc., and of course badges for our craftsmen to add to their personal blogs and websites. Please spread the word and contribute. Thanks!

    Read the article

  • MVP Summit 2011 summary and thoughts: The "I hope I don't cross a line and lose my MVP status" post

    - by George Clingerman
I've been wanting to write this post summarizing my thoughts about the MVP summit, but I've been dragging my feet since it's a very difficult one to write. However, seeing Andy (http://forums.create.msdn.com/forums/t/77625.aspx), Catalin (http://www.catalinzima.com/2011/03/mvp-summit-2011/) and Chris (http://geekswithblogs.net/cwilliams/archive/2011/03/07/144229.aspx) post about it has encouraged me to finally take the plunge. I'm going to have to write carefully, though, because I'll be dancing around a ton of NDA minefields as well as walking the tightrope of not sending the wrong message or having people read too much into what I'm saying. I want to note that most of what I'm about to say is just based on my observations; these are not thoughts Microsoft has asked me to pass along, and they're not things I heard Microsoft say. It's just me sharing what I think after going to the MVP summit.

Let's start off with a short imaginary question-and-answer session.

    Have the App Hub forums and XBLIG management by Microsoft been rather poor? Yes.
    Do I think we're going to see changes to that overnight? No.
    Will it continue to look bad from the outside? Somewhat.

Confusing, right? Well, that's kind of how things are right now. Lots of confusion.

XNA is doing AWESOME. Like, really, really awesome. As a result of that awesomeness, XNA is on three major platforms: Xbox 360, WP7 and PC. This means that internally Microsoft is really excited about and invested in the technology. That's fantastic for XNA and really should show you the future the framework has. It's here to stay.

So why are Xbox LIVE Indie Game developers feeling so much pain? The ironic thing is that the pain is being caused by the success of XNA. When XNA was just a small thing, there was more freedom and more focus. It was just us and them. We were an only child. Now our family has grown, and everyone wants some time with XNA. This gets XNA pulled in all directions, and as it moves onto new platforms it plays catch-up, trying to bring those platforms up to speed with where Xbox LIVE Indie Games has grown. Forums, documentation, educational content: they all need to be there, because Xbox LIVE Indie Games has all of that and more.

Along with the catch-up in features/documentation/awesomeness, there's the catch-up that the people on the team have to play. New platforms and new areas of development mean new players, and those new folks don't have the history of being around from the beginning. This leads, at times, to a lack of understanding of just how important some things are, because they seem so small and insignificant (Rich Text defaulting for new forum profiles is one thing that jumps to mind). If you're not aware that the forums have become more than just basic Q&A, if you're not aware that they're the central hub of a very active community, then you don't understand why that small change should be prioritized over something else. New people have to get caught up and figure out how to make a framework and a central forum site work for everyone they now serve.

So yeah, a lot of our pain this last year has simply been that XNA is doing well and XBLIG is doing well, so the focus shifted to catching other things up. It hurts when a parent seems to have no time for you because they're spending so much time with your new baby brother. Growing pains: all families, and in our case our product family, experience them to some degree.
I think as WP7 matures we'll see the team figure out how to give everyone the right amount of attention. While we're talking about our growing pains, it is also important to note (although it's not really an excuse) that the Xbox LIVE Arcade developers complain about many of the same things we do. If you paid attention to the talks and information coming out of GDC 2011, most of the XBLA guys were saying things that sounded eerily similar to what the XBLIG developers are saying (Scott Nichols from GayGamer.net noticed: http://twitter.com/#!/NaviFairyGG/status/43540379206811650). Does this mean we should just accept the status quo, since we're being treated exactly the same? No way. However, it DOES show that the way we're being treated is no indication of the stability and future of the platform; it's just Microsoft dropping the communication ball on two playing fields. We're not alone, and we're not even being treated worse. Not great, but in a weird way also a very good sign.

Now on to a few tidbits I think I CAN share from the summit (I'm really crossing my fingers that I'm not stepping over some NDA line I shouldn't be).

First, I discovered that the XBLIG user base is bigger than I had personally estimated. I won't give the exact numbers (although we did beg Microsoft to release some of them, so maybe someday?), but it was much larger than my original guesstimates, and I was pleasantly surprised. Maybe some of you had the right number when you were guessing, but I know mine was much too low. And even MORE importantly, the number of users/shoppers is growing at a steady pace as well. Our market is growing! That was fantastic news and really something I had to share.

On to the community manager discussion. It was mentioned. I was mentioned. I blushed. Nothing more to report there, other than that the blush in my cheeks was a light crimson color. If I ever see a job description posted for that position, I have a resume waiting in the wings. I can't deny that I think it would be my dream job...

...so after I finished blushing, the MVPs did make it very, very clear that the communication has to improve. Community manager or not, the single biggest pain point in the Xbox LIVE Indie Game community has been a lack of communication. I have seen dramatic improvement in the team responding to MVPs, and I'm even seeing more communication from them on the forums, so I'm hoping that's a long-term change. I really think they understood the issue; the problem remains how to open that communication channel in a sustainable way. I think they'll get it figured out, and hopefully sooner rather than later.

During the summit, you may have seen me tweeting about how I was "that guy" (http://twitter.com/#!/clingermangw/status/42740432471470081). You may also have noticed that Andy and Catalin both mentioned me in their summit write-ups. I may have come on a bit strong while I was there... gone a little out of character for myself. I've been agitated for a while with the way things have been, and I've been listening to you guys and hearing your agitation too. I'm also watching some really awesome indie game developers look elsewhere and leave the platform. Some of them we might not have been able to keep even with changes, but others are only leaving because of perceptions and a lack of communication from Microsoft. And that pisses me off. And I let Microsoft know that I was pissed off. You made your list, and I took that list and verbalized it.
I verbalized the hell out of it. [It was actually mentioned that I'm a lot nicer on the forums and in email than I am in person... I felt bad about that, but I couldn't stay silent.] Hopefully it did something, guys; I really did try hard to get the message across. Along with my agitation, I also brought some pride. I mentioned several things in person to the team that I was particularly proud of: people in the community who are doing an awesome job, the re-launch of XboxIndies that was going on that week, and even gamers like Steven Hurdle (http://writingsofmassdeduction.com/), who has purchased one XBLIG every day for over 100 days now. The community is freaking rocking it, and I made sure to highlight that.

So in conclusion, I'd just like to say: hang in there (you know, like that picture of the cat). If you've been worried about investing in Xbox LIVE Indie Games because you think it's on shaky ground - it's not. Dream Build Play being about the Xbox 360 should have helped a little to point that out. The team is really scrambling to figure things out and make improvements all around. There are quite a few new gals and guys, and it's going to take them time to catch up, and there are a lot of constantly shifting priorities. We all have one toy, one team, and we're fighting for time with it. I think as WP7 matures we'll see the team figuring out how to give everyone the right amount of attention.

It's also time for the community to continue spreading our wings and going out on our own more often. The Indie Game Winter Uprising was a fantastic example of that: we took things into our own hands, it got noticed, and Microsoft got behind it. They do every time we stand up and do something (look at how many Microsoft employees tweeted or wrote about the re-launch of XboxIndies.com, or the support I've gotten from them for my weekly XNA Notes). XNA is here to stay; it's time for us to stop being scared of that and figure out how to make our own games the successes they should be. There's definitely a list of things that need to be fixed and things that should be improved, and I think we should keep vocal about that with Microsoft. Keep it short, focused and prioritized. There are also a lot of things we can do ourselves while we're waiting on them to fix and change things - lots of ways we can compensate for particular weaknesses in the channel, the kind of stuff we can step up and do ourselves. Do it on our own, you know, the way indies always do. And I'm really looking forward to watching us do just that.

    Read the article

  • Do’s and Don’ts Building SharePoint Applications

    - by Bil Simser
SharePoint is a great platform for building quick LOB applications. Simple things from employee time trackers to server and software inventory to full blown Help Desks can be crafted up using SharePoint from just customizing Lists. No programming necessary. However there are a few tricks I’ve painfully learned over the years that you can use for your own solutions.

DO

What’s In A Name?

When you create a new list, column, or view you’ll commonly name it something like “Expense Reports”. However this has the ugly effect of creating a url to the list as “Expense%20Reports”. Or worse, an internal field name of “Expense_x0020_Reports”, which is not only cryptic but hard to remember when you’re trying to find the column by internal name. While “Expense Reports 2011” is user friendly, “ExpenseReports2011” is not (unless you’re a programmer). So that’s not the solution. Well, not entirely. Instead, when you create your column or list or view, use the scrunched-up name (I can’t think of the technical term for it right now): “ExpenseReports2011”, “WomenAtTheOfficeThatAreMen” or “KoalaMeatIsGoodWhenBroiled”. After you’ve created it, go back and change the name to the friendlier “Silly Expense Reports That Nobody Reads”. The original internal name will be the url- and code-friendly one without spaces, while the one used on data entry forms and view headers will be the human version.

Smart Columns

When building a view, include columns that make sense. By default when you add a column the “Add to default view” box is checked. Resist the urge to be lazy and leave it checked. Uncheck that puppy and decide consciously what columns should be included in the view. Pick columns that make sense for what the user is trying to do. This means you have to talk to the user. Yes, I know. That can be trying at times and even painful. Go ahead, talk to them. You might learn something. Find out what’s important to them and why. If they’re doing something repetitively as part of their job, try to make their life easier by including what’s most important to them. Do they really need to see the Created *and* Modified date of a document, or do they just need the title and author? You’ll only find out after talking to them (or getting them drunk in a bar and leaving them in the back alley handcuffed to a garbage bin, don’t ask).

Gotta Keep It Separated

Hey, views are there for a reason. Use them. While “All Items” is a fine way to present a list of, well, all items, it’s hardly sufficient to present a list of servers built before the Y2K bug hit. You’ll be scrolling the list for hours, finally arriving at page 387 of 12,591 and cursing that SharePoint guy for convincing you that putting your hardware into a list would be of any use to anyone. Next to collecting the data, presenting it is just as important. Views are often overlooked and many times ignored or misused. They’re the way you can slice and dice the data up so that you’re not trying to consume 3,000 years of human evolution on a single web page. Remember views can be filtered, so feel free to create a view for each status, or one for each operating system, or one for each species of Information Worker you might be putting in that list or document library. Not only will it reduce the number of items someone sees at one time, it’ll also make the information that much more relevant. Also remember that each view is a separate page. Use it in navigation by creating a menu on the Quick Launch to each view.
The discoverability of the Views menu isn’t overly obvious, and if you violate the rule of columns (see Horizontal Scrolling below) the view menu doesn’t even show up until you shuffle the scroll bar to the left. Navigation links, big giant buttons, a screaming flashing “CLICK ME NOW” will help your users find their way.

Sort It!

Views are great, so we’re building nice, rich views for the user. Awesomesauce. However, sort is not very discoverable by the user. For example, when you’re looking at a view how do you know if it’s ascending or descending, and what is it sorted on? Maybe it’s sorted using two fields, so what’s that all about? Help your users by letting them know the information they’re looking at is sorted. Maybe you name the view something appropriate like “Bogus Expense Claims Sorted By Deadbeats”. If you use the naming strategy, just make sure you keep the name consistent with the description. In the previous example there had better be a Deadbeat column so I can see the sort in action. Having a “Loser” column, while equally correct, is a little obtuse to the average Information Worker. Remember, they usually don’t use acronyms and even if they knew how to, it’s not immediately obvious to them that’s what you’re trying to convey. Another option is to simply drop a Content Editor Web Part above the list and explain exactly the view they’re looking at. Each view is its own page, so one CEWP won’t be used across the board. Be descriptive in what the user is seeing but try to keep it brief. Dumping the first chapter of I, Claudius might be informative to the data but can gobble up screen real estate and miss the point of having the list.

DO NOT

Useless Attachments

The attachments column is, in a word, useless. For the most part. Sure, it indicates there’s an attachment on the list item, but in the grand scheme of things that’s not overly informative. Maybe it is, and by all means, if it makes sense to you, include it. Colour it. Make it shine and stand out like the Return of Clippy on every SharePoint list. Without it being functional it can be boring. EndUserSharePoint.com has an article to make the son of Clippy that much more useful, so feel free to head over and check out this blog post by Paul Grenier on the task (warning, code ahead! Danger, Will Robinson!). In any case, I would suggest you remove it from your views. Again, if it’s important then include it, but consider the jQuery solution above to make it functional. It’s added by default to views and one of the things that people forget to clean up.

Horizontal Scrolling

Screen real estate is premium, so building a list that contains 8,000 columns and stretches horizontally across 15 screens probably isn’t the most user friendly experience. Most users can’t figure out how to scroll vertically, let alone horizontally, so don’t make it even more confusing for them. Take the Steve Krug approach in your view designs and try not to make the user think. Again, views are your friend. Consider splitting up the data into views where one view contains 10 columns and the other view contains the other 10. Okay, maybe your information doesn’t work that way, but humans can only process 7 pieces of data at a time, 10 at most (then their heads explode and you don’t want to clean that mess up, especially on a Friday night before the big dance). It drives me batshit crazy when I see a view with 80 columns of data. I often ask the user “So what do you do with all this information?”
The response is usually “With this data [the first 10 columns] I decide if I’m going to fire everyone, and with this data [the next 10 columns] I decide if I’m going to set the building on fire and collect the insurance”. It’s at that point I show them how to create two new views, “People Who Are About To Get The Axe” and “Beach Time For The Executives”. Again, talk to your users and try to reason with them on cutting down the number of columns they see at once.

Vertical Scrolling

Another big faux pas I find is the use of multi-line comment fields in views. It’s not so bad when you have a statement like this in your view: “I really like, oh my god, thought I was going to scream when I saw this turtle then I decided what I was going to have for dinner and frankly I hate having to work late so when I was talking to the customer I thought, oh my god, what if the customer has turtles and then it appeared to me that I really was hungry so I'm going to have lunch now.” It’s fine if that’s the only column along with two or three others, but once you slap those 20 columns of data into the list, the comment field wraps and forms a new multi-page novel that takes up your entire screen. Do everyone a favour and just avoid adding the column to views. Train the user to just click through to the item if they need to see the contents.

Duplicate Information

Duplication is never good. Views are great as you can group data together. For example, create a view of project status reports grouped by author. Then you can see which project manager is being a dip and not submitting their report. However, if you group by author do you really need the Created By field as well in the view? Or if the view is grouped by Project then Author, do you need both? Horizontal real estate is always at a premium, so try not to clutter up the view with duplicate data like this. Oh yeah, if you’re scratching your head saying “But Bil, if I don’t include the Project name in the view and I have a lot of items then how do I know which one I’m looking at?”, that’s a hint that your grouping is too vague or you have too much data in the view based on that criteria. Filter it down a notch, create some views, and try to keep the group down to a single screen where you can see the group header at the top of the page. Again, it’s just managing the information you have.

Redundant, See Redundant

This partially relates to duplicate information and smart columns, but basically remember not to include the obvious in a view. Remember, don’t make me think. If you’ve gone to the trouble (and it was a lot of trouble, wasn’t it?) to create separate views of your data by creating “September Zombie Brain Sales”, “October Zombie Brain Sales”, etc., then please, for the love of all that is holy, do not include the Month and Product columns in your view. Similarly, if you create a “My” view of anything (“My Favourite Brands of Spandex”, “My Co-Workers I Find The Urge To Disinfect”) then again, do not include the owner or author field (or whatever field you use to identify “My”). That’s just silly.

Hope that helps! Happy customizing!

    Read the article

  • Exam 70-480 Study Material: Programming in HTML5 with JavaScript and CSS3

    - by Stacy Vicknair
Here’s a list of sources of information for the different elements that comprise the 70-480 exam:

General Resources
http://www.w3schools.com (As pointed out in David Pallmann’s blog, some of this content is unverified, but it is a decent source of information. For more about when it isn’t decent, see http://www.w3fools.com)
http://www.bloggedbychris.com/2012/09/19/microsoft-exam-70-480-study-guide/ (A guy who did a lot of what I did already; sadly I found this halfway through finishing my resources list. This list is expertly put together, so I would recommend checking it out.)
http://davidpallmann.blogspot.com/2012/08/microsoft-certification-exam-70-480.html
http://pluralsight.com/training/Courses (Yes, this isn’t free, but if you look at the course listing there is an entire section on HTML5, CSS3 and JavaScript. You can always try the trial!)

Some of the links I put below will overlap with the other resources above, but I tried to find explanations that looked beneficial to me on links outside those already mentioned.

Test Breakdown

Implement and Manipulate Document Structures and Objects (24%)

Create the document structure.
This objective may include but is not limited to: structure the UI by using semantic markup, including for search engines and screen readers (Section, Article, Nav, Header, Footer, and Aside); create a layout container in HTML
http://www.w3schools.com/html/html5_new_elements.asp

Write code that interacts with UI controls.
This objective may include but is not limited to: programmatically add and modify HTML elements; implement media controls; implement HTML5 canvas and SVG graphics
http://www.w3schools.com/html/html5_canvas.asp
http://www.w3schools.com/html/html5_svg.asp

Apply styling to HTML elements programmatically.
This objective may include but is not limited to: change the location of an element; apply a transform; show and hide elements

Implement HTML5 APIs.
This objective may include but is not limited to: implement storage APIs, AppCache API, and Geolocation API
http://www.w3schools.com/html/html5_geolocation.asp
http://www.w3schools.com/html/html5_webstorage.asp
http://www.w3schools.com/html/html5_app_cache.asp

Establish the scope of objects and variables.
This objective may include but is not limited to: define the lifetime of variables; keep objects out of the global namespace; use the “this” keyword to reference an object that fired an event; scope variables locally and globally
http://robertnyman.com/2008/10/09/explaining-javascript-scope-and-closures/
http://www.quirksmode.org/js/this.html

Create and implement objects and methods.
This objective may include but is not limited to: implement native objects; create custom objects and custom properties for native objects using prototypes and functions; inherit from an object; implement native methods and create custom methods
http://www.javascriptkit.com/javatutors/object.shtml
http://www.crockford.com/javascript/inheritance.html
http://stackoverflow.com/questions/1635116/javascript-class-method-vs-class-prototype-method
http://www.javascriptkit.com/javatutors/proto.shtml

Implement Program Flow (25%)

Implement program flow.
This objective may include but is not limited to: iterate across collections and array items; manage program decisions by using switch statements, if/then, and operators; evaluate expressions
http://www.javascriptkit.com/jsref/looping.shtml
http://www.javascriptkit.com/javatutors/varshort.shtml
http://www.javascriptkit.com/javatutors/switch.shtml

Raise and handle an event.
This objective may include but is not limited to: handle common events exposed by DOM (OnBlur, OnFocus, OnClick); declare and handle bubbled events; handle an event by using an anonymous function
http://dev.w3.org/2006/webapi/DOM-Level-3-Events/html/DOM3-Events.html
http://javascript.info/tutorial/bubbling-and-capturing

Implement exception handling.
This objective may include but is not limited to: set and respond to error codes; throw an exception; request for null checks; implement try-catch-finally blocks
http://www.javascriptkit.com/javatutors/trycatch.shtml

Implement a callback.
This objective may include but is not limited to: receive messages from the HTML5 WebSocket API; use jQuery to make an AJAX call; wire up an event; implement a callback by using anonymous functions; handle the “this” pointer
http://www.w3.org/TR/2011/WD-websockets-20110419/
http://www.html5rocks.com/en/tutorials/websockets/basics/
http://api.jquery.com/jQuery.ajax/

Create a web worker process.
This objective may include but is not limited to: start and stop a web worker; pass data to a web worker; configure timeouts and intervals on the web worker; register an event listener for the web worker; limitations of a web worker
https://developer.mozilla.org/en-US/docs/DOM/Using_web_workers
http://www.html5rocks.com/en/tutorials/workers/basics/

Access and Secure Data (26%)

Validate user input by using HTML5 elements.
This objective may include but is not limited to: choose the appropriate controls based on requirements; implement HTML input types and content attributes (for example, required) to collect user input
http://diveintohtml5.info/forms.html

Validate user input by using JavaScript.
This objective may include but is not limited to: evaluate a regular expression to validate the input format; validate that you are getting the right kind of data type by using built-in functions; prevent code injection
http://www.regular-expressions.info/javascript.html
http://msdn.microsoft.com/en-us/library/66ztdbe6(v=vs.94).aspx
https://developer.mozilla.org/en-US/docs/JavaScript/Reference/Operators/typeof
http://blog.stackoverflow.com/2008/06/safe-html-and-xss/
http://stackoverflow.com/questions/942011/how-to-prevent-javascript-injection-attacks-within-user-generated-html

Consume data.
This objective may include but is not limited to: consume JSON and XML data; retrieve data by using web services; load data or get data from other sources by using XMLHTTPRequest
http://www.erichynds.com/jquery/working-with-xml-jquery-and-javascript/
http://www.webdevstuff.com/86/javascript-xmlhttprequest-object.html
http://www.json.org/
http://stackoverflow.com/questions/4935632/how-to-parse-json-in-javascript

Serialize, deserialize, and transmit data.
This objective may include but is not limited to: binary data; text data (JSON, XML); implement the jQuery serialize method; Form.Submit; parse data; send data by using XMLHTTPRequest; sanitize input by using URI/form encoding
http://api.jquery.com/serialize/
http://www.javascript-coder.com/javascript-form/javascript-form-submit.phtml
http://stackoverflow.com/questions/327685/is-there-a-way-to-read-binary-data-into-javascript
https://developer.mozilla.org/en-US/docs/JavaScript/Reference/Global_Objects/encodeURI

Use CSS3 in Applications (25%)

Style HTML text properties.
This objective may include but is not limited to: apply styles to text appearance (color, bold, italics); apply styles to text font (WOFF and @font-face, size); apply styles to text alignment, spacing, and indentation; apply styles to text hyphenation; apply styles for a text drop shadow
http://www.w3schools.com/css/css_text.asp
http://www.w3schools.com/css/css_font.asp
http://nicewebtype.com/notes/2009/10/30/how-to-use-css-font-face/
http://webdesign.about.com/od/beginningcss/p/aacss5text.htm
http://www.w3.org/TR/css3-text/
http://www.css3.info/preview/box-shadow/

Style HTML box properties.
This objective may include but is not limited to: apply styles to alter appearance attributes (size, border and rounding border corners, outline, padding, margin); apply styles to alter graphic effects (transparency, opacity, background image, gradients, shadow, clipping); apply styles to establish and change an element’s position (static, relative, absolute, fixed)
http://net.tutsplus.com/tutorials/html-css-techniques/10-css3-properties-you-need-to-be-familiar-with/
http://www.w3schools.com/css/css_image_transparency.asp
http://www.w3schools.com/cssref/pr_background-image.asp
http://ie.microsoft.com/testdrive/graphics/cssgradientbackgroundmaker/default.html
http://www.w3.org/TR/CSS21/visufx.html
http://www.barelyfitz.com/screencast/html-training/css/positioning/
http://davidwalsh.name/css-fixed-position

Create a flexible content layout.
This objective may include but is not limited to: implement a layout using a flexible box model; implement a layout using multi-column; implement a layout using position floating and exclusions; implement a layout using grid alignment; implement a layout using regions, grouping, and nesting
http://www.html5rocks.com/en/tutorials/flexbox/quick/
http://www.css3.info/preview/multi-column-layout/
http://msdn.microsoft.com/en-us/library/ie/hh673558(v=vs.85).aspx
http://dev.w3.org/csswg/css3-grid-layout/
http://dev.w3.org/csswg/css3-regions/

Create an animated and adaptive UI.
This objective may include but is not limited to: animate objects by applying CSS transitions; apply 3-D and 2-D transformations; adjust UI based on media queries (device adaptations for output formats, displays, and representations); hide or disable controls
http://www.bloggedbychris.com/2012/09/19/microsoft-exam-70-480-study-guide/

Find elements by using CSS selectors and jQuery.
This objective may include but is not limited to: choose the correct selector to reference an element; define element, style, and attribute selectors; find elements by using pseudo-elements and pseudo-classes (for example, :before, :first-line, :first-letter, :target, :lang, :checked, :first-child)
http://www.bloggedbychris.com/2012/09/19/microsoft-exam-70-480-study-guide/

Structure a CSS file by using CSS selectors.
This objective may include but is not limited to: reference elements correctly; implement inheritance; override inheritance by using !important; style an element based on pseudo-elements and pseudo-classes (for example, :before, :first-line, :first-letter, :target, :lang, :checked, :first-child)
http://www.bloggedbychris.com/2012/09/19/microsoft-exam-70-480-study-guide/

Technorati Tags: 70-480, CSS3, HTML5, HTML, CSS, JavaScript, Certification

    Read the article

  • Migrating from SQL Trace to Extended Events

    - by extended_events
In SQL Server codenamed “Denali” we are moving our diagnostic tracing capabilities forward by building a system on top of Extended Events. With every new system you face the specter of migration, which is always a bit of a hassle. I’m obviously motivated to see everyone move their diagnostic tracing systems over to the new Extended Events based system, so I wanted to make sure we lowered the bar for the migration process to help ease your trials.

In my initial post on Denali CTP 1 I described a couple of tables that we created that will help map the existing SQL Trace Event Classes to the equivalent Extended Events events. In this post I’ll describe the tables in a bit more detail, explain the relationship between the SQL Trace objects (Event Class & Column) and Extended Events objects (Events & Actions), and at the end provide some sample code for a managed stored procedure that will take an existing SQL Trace session (e.g. a trace that you can see in sys.traces) and convert it into event session DDL.

Can you relate?

In some ways, SQL Trace and Extended Events are kind of like the Standard and Metric measuring systems in the United States. If you spend too much time trying to figure out how to convert between the two it will probably make your head hurt. It’s often better to just use the new system without trying to translate between the two. That said, people like to relate new things to the things they’re comfortable with, so, with some trepidation, I will now explain how these two systems are related to each other.

First, some terms… SQL Trace is made up of Event Classes and Columns. The Event Class occurs as the result of some activity in the database engine; for example, SQL:BatchCompleted fires when a batch has completed executing on the server. Each Event Class can have any number of Columns associated with it, and those Columns contain the data that is interesting about the Event Class, such as the duration or database name.

In Extended Events we have objects named Events, EventData fields and Actions. The Event (some people call this an xEvent but I’ll stick with Event) is equivalent to the Event Class in SQL Trace since it is the thing that occurs as the result of some activity taking place in the server. An EventData field (from now on I’ll just refer to these as fields) is a piece of information that is highly correlated with the event and is always included as part of the schema of an Event. An Action is something that can be associated with any Event and it will cause some additional “action” to occur whenever the parent Event occurs. Actions can do a number of different things; for example, there are Actions that collect additional data and Actions that take memory dumps.

When mapping SQL Trace onto Extended Events, Columns are covered by a combination of both fields and Actions. Knowing exactly where a Column is covered by a field and where it is covered by an Action is a bit of an art, so we created the mapping tables to make you an Artist without the years of practice.

Let me draw you a map.

Event Mapping

The table dbo.trace_xe_event_map exists in the master database with the following structure:

Column_name      Type
trace_event_id   smallint
package_name     nvarchar
xe_event_name    nvarchar

By joining this table to sys.trace_events using trace_event_id and to sys.dm_xe_objects using xe_event_name, you can get a fair amount of information about how Event Classes are related to Events. The most basic query this lends itself to is to match an Event Class with the corresponding Event.
SELECT
    t.trace_event_id,
    t.name [event_class],
    e.package_name,
    e.xe_event_name
FROM sys.trace_events t
INNER JOIN dbo.trace_xe_event_map e
    ON t.trace_event_id = e.trace_event_id

There are a couple things you’ll notice as you peruse the output of this query:

For the most part, the names of Events are fairly close to the original Event Class; e.g. SP:CacheMiss == sp_cache_miss, and so on.

We’ve mostly stuck to a one-to-one mapping between Event Classes and Events, but there are a few cases where we have combined when it made sense. For example, Data File Auto Grow, Log File Auto Grow, Data File Auto Shrink & Log File Auto Shrink are now all covered by a single event named database_file_size_change. This just seemed like a “smarter” implementation for this type of event; you can get all the same information from this single event (grow/shrink, Data/Log, Auto/Manual growth) without having multiple different events. You can use Predicates if you want to limit the output to just one of the original Event Class measures (there’s a small sketch of this after the Action Mapping section below).

There are some Event Classes that did not make the cut and were not migrated. These fall into two categories. There were a few Event Classes that had been deprecated, or that just did not make sense, so we didn’t migrate them. (You won’t find an Event related to mounting a tape – sorry.) The second class is bigger; with rare exception, we did not migrate any of the Event Classes that were related to Security Auditing using SQL Trace. We introduced the SQL Audit feature in SQL Server 2008 and that will be the compliance and auditing feature going forward. Doing this is a very deliberate decision to support separation of duties for DBAs. There are separate permissions required for SQL Audit and Extended Events tracing, so you can assign these tasks to different people if you choose. (If you’re wondering, the permission for Extended Events is ALTER ANY EVENT SESSION, which is covered by CONTROL SERVER.)

Action Mapping

The table dbo.trace_xe_action_map exists in the master database with the following structure:

Column_name       Type
trace_column_id   smallint
package_name      nvarchar
xe_action_name    nvarchar

You can find more details by joining this to sys.trace_columns on the trace_column_id field.

SELECT
    c.trace_column_id,
    c.name [column_name],
    a.package_name,
    a.xe_action_name
FROM sys.trace_columns c
INNER JOIN dbo.trace_xe_action_map a
    ON c.trace_column_id = a.trace_column_id

If you examine this list, you’ll notice that there are relatively few Actions that map to SQL Trace Columns given the number of Columns that exist. This is not because we forgot to migrate all the Columns, but because much of the data for individual Event Classes is included as part of the EventData fields of the equivalent Events, so there is no need to specify them as Actions.
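To see how an Event, a couple of Actions, and a Predicate come together in event session DDL, here is a minimal sketch built around the database_file_size_change event mentioned above. It is illustrative only: the session name, action list, target, and database_id value are my own inventions, and exact object names can vary between builds.

-- Minimal illustrative event session: one Event, two Actions, one Predicate.
CREATE EVENT SESSION [file_size_changes] ON SERVER
ADD EVENT sqlserver.database_file_size_change(
    ACTION (sqlserver.client_app_name, sqlserver.session_id)
    WHERE (sqlserver.database_id = 5))  -- predicate narrows the combined event
ADD TARGET package0.ring_buffer;
GO

The WHERE predicate does the narrowing that used to be implicit in having four separate Auto Grow/Auto Shrink Event Classes, while the ACTION list pulls in the kind of per-event context that SQL Trace exposed as Columns.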
Putting it all together

If you’ve spent a bunch of time figuring out the inner workings of SQL Trace, and who hasn’t, then you probably know that the typical set of Columns you find associated with any given Event Class in SQL Profiler is not fixed, but is determined by the contents of the table sys.trace_event_bindings. We’ve used this table along with the mapping tables to produce a list of Event + Action combinations that duplicate the SQL Profiler Event Class definitions using the following query, which you can also find in the Books Online topic How To: View the Extended Events Equivalents to SQL Trace Event Classes.

USE MASTER;
GO
SELECT DISTINCT
   tb.trace_event_id,
   te.name AS 'Event Class',
   em.package_name AS 'Package',
   em.xe_event_name AS 'XEvent Name',
   tb.trace_column_id,
   tc.name AS 'SQL Trace Column',
   am.xe_action_name as 'Extended Events action'
FROM (sys.trace_events te
   LEFT OUTER JOIN dbo.trace_xe_event_map em
      ON te.trace_event_id = em.trace_event_id)
LEFT OUTER JOIN sys.trace_event_bindings tb
   ON em.trace_event_id = tb.trace_event_id
LEFT OUTER JOIN sys.trace_columns tc
   ON tb.trace_column_id = tc.trace_column_id
LEFT OUTER JOIN dbo.trace_xe_action_map am
   ON tc.trace_column_id = am.trace_column_id
ORDER BY te.name, tc.name

As you might imagine, it’s also possible to map an existing trace definition to the equivalent event session by judicious use of fn_trace_geteventinfo joined with the two mapping tables. This query extracts the list of Events and Actions equivalent to the trace with ID = 1, which is most likely the Default Trace. You can find this query, along with a set of other queries and steps required to migrate your existing traces over to Extended Events, in the Books Online topic How to: Convert an Existing SQL Trace Script to an Extended Events Session.

USE MASTER;
GO
DECLARE @trace_id int
SET @trace_id = 1
SELECT DISTINCT el.eventid, em.package_name, em.xe_event_name AS 'event'
   , el.columnid, ec.xe_action_name AS 'action'
FROM (sys.fn_trace_geteventinfo(@trace_id) AS el
   LEFT OUTER JOIN dbo.trace_xe_event_map AS em
      ON el.eventid = em.trace_event_id)
LEFT OUTER JOIN dbo.trace_xe_action_map AS ec
   ON el.columnid = ec.trace_column_id
WHERE em.xe_event_name IS NOT NULL AND ec.xe_action_name IS NOT NULL

You’ll notice in the output that the list doesn’t include any of the security audit Event Classes; as I wrote earlier, those were not migrated.

But wait…there’s more!

If this were an infomercial there’d be some obnoxious guy next to me blogging “Well Mike…that’s pretty neat, but I’m sure you can do more. Can’t you make it even easier to migrate from SQL Trace?” Needless to say, I’d blog back, in an overly excited way, “You bet I can, obnoxious blogger side-kick!”

What I’ve got for you here is an Extended Events Team Blog only special – this tool will not be sold in any store; it’s a special offer for those of you reading the blog. I’ve wrapped all the logic of pulling the configuration information out of an existing trace and building the Extended Events DDL statement into a handy, dandy CLR stored procedure. Once you load the assembly and register the procedure, you just supply the trace id (from sys.traces) and provide a name for the event session. Run the procedure and out pops the DDL required to create an equivalent session. Any aspects of the trace that could not be duplicated are included in comments within the DDL output. This procedure does not actually create the event session – you need to copy the DDL out of the message tab and put it into a new query window to do that. It also requires an existing trace (but it doesn’t have to be running) to evaluate; there is no functionality to parse T-SQL scripts. I’m not going to spend a bunch of time explaining the code here – the code is pretty well commented and hopefully easy to follow. If not, you can always post comments or hit the feedback button to send us some mail.
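To give a sense of the workflow, here is a hypothetical run against trace id 1. The procedure name and parameters come from the install script below; the session name is made up, and the commented output is only a rough illustration of the shape of the generated DDL, not actual procedure output.

-- Print (but do not create) the event session DDL for trace id 1.
EXEC CreateEventSessionFromTrace
    @trace_id = 1,
    @session_name = N'MigratedDefaultTrace';

-- The Messages tab should then contain something shaped like:
--   CREATE EVENT SESSION [MigratedDefaultTrace] ON SERVER
--   ADD EVENT sqlserver.sql_batch_completed(
--       ACTION (sqlserver.client_app_name, sqlserver.session_id))
--   ...
-- which you copy into a new query window and run to create the session.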
Sample code: TraceToExtendedEventDDL

Installing the procedure

Just in case you’re not familiar with installing CLR procedures…once you’ve compiled the assembly you can load it using a script like this:

-- Context to master
USE master
GO
-- Create the assembly from a shared location.
CREATE ASSEMBLY TraceToXESessionConverter
FROM 'C:\Temp\TraceToXEventSessionConverter.dll'
WITH PERMISSION_SET = SAFE
GO
-- Create a stored procedure from the assembly.
CREATE PROCEDURE CreateEventSessionFromTrace
@trace_id int, @session_name nvarchar(max)
AS
EXTERNAL NAME TraceToXESessionConverter.StoredProcedures.ConvertTraceToExtendedEvent
GO

Enjoy!

-Mike

    Read the article
