Search Results

Search found 6524 results on 261 pages for 'the ever kid'.

Page 40 of 261

  • See the Lord of the Rings Epic from the Perspective of Mordor [eBook]

    - by ETC
    Much like the wildly popular book “Wicked” mixed up the good/bad dichotomy in the Wizard of Oz, “The Last Ring-Bearer” shows us Mordor’s take on the Lord of the Rings. The work of a Russian paleontologist, Kirill Yeskov, “The Last Ring-Bearer” frames the conflict in the Lord of the Rings from the perspective of the citizens of Mordor. Salon magazine offers this summary, as part of their larger review: In Yeskov’s retelling, the wizard Gandalf is a war-monger intent on crushing the scientific and technological initiative of Mordor and its southern allies because science “destroys the harmony of the world and dries up the souls of men!” He’s in cahoots with the elves, who aim to become “masters of the world,” and turn Middle-earth into a “bad copy” of their magical homeland across the sea. Barad-dur, also known as the Dark Tower and Sauron’s citadel, is, by contrast, described as “that amazing city of alchemists and poets, mechanics and astronomers, philosophers and physicians, the heart of the only civilization in Middle-earth to bet on rational knowledge and bravely pitch its barely adolescent technology against ancient magic.” Hit up the link below to grab a PDF of the official English translation of Yeskov’s work. The Last Ring-Bearer [via Salon]

    Read the article

  • How to Create Geo-Reminders in Android with GeoNote

    - by Zainul Franciscus
    Unlike most reminders, which prompt you to do a certain action at a desired time, GeoNote gives you reminders when you enter a location. If you’re a big fan of location-based services, then GeoNote is the perfect reminder app for you. Image by Menino.Us GeoNote is one of the few geo-reminder applications available on the market for free. Its simple interface lets you create to-do lists quickly. Just click the “Add Location” button to add your first note.

    Read the article

  • Is knowledge of hacking mechanisms required for an MMO?

    - by Gabe
    Say I was planning, in the future (not now! There is a lot I need to learn first), to participate in a group project that was going to make a massively multiplayer online game (MMO), and my job would be the networking portion. I'm not that familiar with network programming (I've read a very basic book on PHP and MySQL, and I messed around a bit with WAMP). In the course of my studying of PHP and MySQL, should I look into hacking? Hacking as in port scanning, router hacking, etc. In MMOs people are always trying to cheat, with bots and such, but the worst scenario would be having someone hack the databases. This is just my conception of it; I really don't know. I do, however, understand networking fairly well, like subnetting/ports/IPs (local/global), etc. In your professional opinion (if you understand the topic, enlighten me), should I learn about these things in order to counter the possibility of this happening? Also, out of the things I mentioned (port scanning, router hacking), is there anything else that pertains to hacking that I should look into? I'm not too familiar with the malicious/security aspects of networking. And a note: I'm not some kid trying to learn how to hack. I just want to learn as much as possible before I go to college, and I really need to know whether I need to study this or not.
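
    As one concrete illustration of the defensive side of that database concern (added here as an example, not something from the question): a very common way game databases get compromised in practice is SQL injection through string-built queries, and the standard counter is parameterized queries. A minimal, hedged sketch in C# (the table and column names are made up; the same idea exists in PHP as PDO/mysqli prepared statements):

        // Hedged illustration only: pass user input as a parameter, never splice it into the SQL string.
        using System.Data.SqlClient;

        public static class PlayerLookup
        {
            public static string GetGroupName(SqlConnection conn, string emailAddress)
            {
                using (var cmd = new SqlCommand(
                    "select groupname from users where emailaddress = @email", conn))
                {
                    // The driver sends @email as data, so a crafted address cannot alter the query.
                    cmd.Parameters.AddWithValue("@email", emailAddress);
                    return cmd.ExecuteScalar() as string;
                }
            }
        }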

    Read the article

  • DVDs and CDs not recognized

    - by Larry Doyle
    I have Ubuntu 12.04 LTS installed on my computer (Sony Vaio VGN-NS205N). It reads some CDs and DVDs just fine, but many others it will not recognize at all, or will only recognize after opening and closing the CD tray many times. It just shows the CD/DVD drive as empty and won't show that anything is on there at all. I have a few DVDs (from the same publisher) that work every time. This comes up most often when trying to show my kid Bob the Builder, etc. Some data CDs I would like to work with also have the same issue. It's not a hardware or defective disc problem, since all these will play in Windows on the same machine. It's just really annoying to have to switch operating systems to play videos or get data off discs. In all the searches I have done, people recommend installing Medibuntu and other repositories. I have, and they are all up to date. No change. Any help that works would be greatly appreciated.

    Read the article

  • 2D/Isometric map algorithm

    - by Icarus Cocksson
    First of all, I don't have much experience in game development, but I do have experience in development. I know how to make a map, but I don't know whether my solution is a normal one or a hacky one. I don't want to waste my time coding things only to realise they're utterly crap and lose my motivation. Let's imagine the following map (2D - top view - a square): X: 0 to 500, Y: 0 to 500. My character currently stands at X:250, Y:400, somewhere near the centre and 100px above the bottom, and I can control him with my keyboard buttons. The LEFT button does X--, the UP button does Y--, etc. This one is kid's play. I'm asking this because I know there are some engines that automate this task. For example, games like Diablo 3 use an engine. You can pretty much drag and drop a rock onto the map, and it is automatically placed there - making the player unable to pass through it/detecting the collision. But what does the engine actually do in the background? Does it generate a map like mine, place a rock at the center, and check it like this?

        unmovableObjects = array('50,50'); //we placed a rock at 50,50 location
        if(Map.hasUnmovableObject(CurrentPlayerX, CurrentPlayerY)) {
            //unable to move
        } else {
            //able to move
        }

    My question is: is this how 2D/isometric maps are generated, or is there different and more complex logic behind them?
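
    For comparison, here is a minimal sketch of that same check in C# (all names are made up for illustration). Engines differ, but a tile-based collision test often boils down to looking up a cell in a 2D grid of "blocked" flags before applying a move:

        // Hypothetical tile map: a grid of flags marking cells occupied by unmovable objects.
        public class TileMap
        {
            private readonly bool[,] blocked;

            public TileMap(int width, int height)
            {
                blocked = new bool[width, height];
            }

            public void PlaceRock(int x, int y)
            {
                blocked[x, y] = true; // e.g. PlaceRock(50, 50)
            }

            public bool CanMoveTo(int x, int y)
            {
                // Out of bounds counts as blocked; otherwise consult the grid.
                if (x < 0 || y < 0 || x >= blocked.GetLength(0) || y >= blocked.GetLength(1))
                    return false;
                return !blocked[x, y];
            }
        }

        // Usage: only apply the key press if the target cell is free.
        // if (map.CanMoveTo(playerX - 1, playerY)) { playerX--; }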

    Read the article

  • Activate Your Monitor via Motion Trigger

    - by ETC
    Most people are in the habit of jiggling their mouse or tapping their keyboard when they want to wake their monitor. This clever electronics hack adds a sensor to your computer for motion-based monitor activation. At the DIY and electronics blog Radio Etcetera they tackled an interesting project and shared the build guide. Their local volunteer fire department needed a monitor on for quick information checks but they didn’t need it on all the time and they didn’t want to have to walk over and activate the monitor when they needed it. The solution involved hacking a simple infrared security sensor and wiring it via USB to send a mouse command when motion is detected in the room. Fire fighter walks in, monitor turns on and displays information; fire fighter leaves and the monitor goes back to sleep. Hit up the link below to see additional photos, schematics, and the complete build guide. Motion Activated PC Monitor [Radio Etcetera via Hack A Day]

    Read the article

  • 50 Years After The Jetsons

    - by Jason Fitzpatrick
    The Jetsons, the future-oriented animated cartoon series from the 1960s, turned 50 this week. The Smithsonian takes a look at what the show meant, then and now. At the Smithsonian blog Paleofuture, Matt Novak looks back at the last 50 years and the impact that The Jetsons had. He writes: It’s important to remember that today’s political, social and business leaders were pretty much watching “The Jetsons” on repeat during their most impressionable years. People are often shocked to learn that “The Jetsons” lasted just one season during its original run in 1962-63 and wasn’t revived until 1985. Essentially every kid in America (and many internationally) saw the series on constant repeat during Saturday morning cartoons throughout the 1960s, ’70s and ’80s. Everyone (including my own mom) seems to ask me, “How could it have been around for only 24 episodes? Did I really just watch those same episodes over and over again?” Yes, yes you did. But it’s just a cartoon, right? So what if today’s political and social elite saw “The Jetsons” a lot? Thanks in large part to the Jetsons, there’s a sense of betrayal that is pervasive in American culture today about the future that never arrived. We’re all familiar with the rallying cries of the angry retrofuturist: Where’s my jetpack!?! Where’s my flying car!?! Where’s my robot maid?!? “The Jetsons” and everything they represented were seen by so many not as a possible future, but a promise of one. Hit up the link below for the full article–prepare to be surprised at just how few episodes of the show were ever animated and aired.

    Read the article

  • HTG Explains: What Are Computer Algorithms and How Do They Work?

    - by YatriTrivedi
    Unless you’re into math or programming, the word “algorithm” might be Greek to you, but it’s one of the building blocks of everything you’re using to read this article. Here’s a quick explanation of what they are and how they work. Disclaimer: I’m not a math or computer science teacher, so not all of the terms I use are technical. That’s because I’m trying to explain everything in plain English for people who aren’t quite comfortable with math. That being said, there is some math involved, and that’s unavoidable. Math geeks, feel free to correct or better explain in the comments, but please, keep it simple for the mathematically disinclined among us. Image by Ian Ruotsala

    Read the article

  • Ten Classic Electronic Toys and Their Modern Equivalents

    - by Jason Fitzpatrick
    Whether you’re looking to relive the toy exploits of your youth or pass your love of tinkering and electronics on to the younger generation, this list highlights ten great electronic toys of yesteryear and their modern equivalents. Courtesy of Wired’s Geek Dad, the description for the all-in-one electronics kit seen here: What it was: Arthur C. Clarke has said that any sufficiently advanced technology is indistinguishable from magic. As a kid in the midst of an increasing technological revolution, electronics were at the heart of that. Learning electronics was made easy through the Science Fair Electronic Project Kits found at RadioShack. Through the project guides, kids could construct various ‘experiments’ by attaching wires to terminal springs that make circuits. The terminal springs would wire in components such as LED segment lights, photo sensors, resistors, diodes, etc. While it was fun getting the projects to work, the manuals lacked in-depth explanation of what was happening in the circuit to produce the project’s result. Why it was awesome: First, it was a simple buy for parents. Everything you needed to get your child interested in electronics was right in the kit. You didn’t need to breadboard or solder. I remember a distinct feeling of accomplishment making a high-water alarm or a light-sensor game with the realization that the bundles of wires springing up from the kit were actually doing something! Modern equivalent: You can still pick up variations of the 100-in-1 kits, but their popular replacement seems to be Snap Circuits by Elenco. All of the components are mounted on a plastic base with a contact on either end, and they interconnect with each other and with the plastic base that projects can be mounted to. Each component also has the electrical diagram symbol for that component drawn on it, so it can help you learn to read schematics. For that reason alone, I like these better.

    Read the article

  • Masters vs. PhD - long [closed]

    - by Sterling
    I'm 21 years old and a first year master's computer science student. Whether or not to continue with my PhD has been plaguing me for the past few months. I can't stop thinking about it and am extremely torn on the issue. I have read http://www.cs.unc.edu/~azuma/hitch4.html and many, many other masters vs phd articles on the web. Unfortunately, I have not yet come to a conclusion. I was hoping that I could post my ideas about the issue on here in hopes to 1) get some extra insight on the issue and 2) make sure that I am correct in my assumptions. Hopefully having people who have experience in the respective fields can tell me if I am wrong so I don't make my decision based on false ideas. Okay, to get this topic out of the way - money. Money isn't the most important thing to me, but it is still important. It's always been a goal of mine to make 6 figures, but I realize that will probably take me a long time with either path. According to most online salary calculating sites, the average starting salary for a software engineer is ~60-70k. The PhD program here is 5 years, so that's about 300k I am missing out on by not going into the workforce with a masters. I have only ever had ~1k at one time in my life so 300k is something I can't even really accurately imagine. I know that I wouldn't have at once obviously, but just to know I would be earning that is kinda crazy to me. I feel like I would be living quite comfortably by the time I'm 30 years old (but risk being too content too soon). I would definitely love to have at least a few years of my 20s to spend with that kind of money before I have a family to spend it all on. I haven't grown up very financially stable so it would be so nice to just spend some money…get a nice car, buy a new guitar or two, eat some good food, and just be financially comfortable. I have always felt like I deserved to make good money in my life, even as a kid growing up, and I just want to have it be a reality. I know that either path I take will make good money by the time I'm ~40-45 years old, but I guess I'm just sick of not making money and am getting impatient about it. However, a big idea pushing me towards a PhD is that I feel the masters path would give me a feeling of selling out if I have the capability to solve real questions in the computer science world. (pretty straight-forward - not much to elaborate on, but this is a big deal) Now onto other aspects of the decision. I originally got into computer science because of programming. I started in high school and knew very soon that it was what I wanted to do for a career. I feel like getting a masters and being a software engineer in the industry gives me much more time to program in my career. In research, I feel like I would spend more time reading, writing, trying to get grant money, etc than I would coding. A guy I work with in the lab just recently published a paper. He showed it to me and I was shocked by it. The first two pages was littered with equations and formulas. Then the next page or so was followed by more equations and formulas that he derived from the previous ones. That was his work - breaking down and creating all of these formulas for robotic arm movement. And whenever I read computer science papers, they all seem to follow this pattern. I always pictured myself coding all day long…not proving equations and things of that nature. I know that's only one part of computer science research, but that part bores me. 
A couple cons on each side - Phd - I don't really enjoy writing or feel like I'm that great at technical writing. Whenever I'm in groups to make something, I'm always the one who does the large majority of the work and then give it to my team members to write up a report. Presenting is different though - I don't mind presenting at all as long as I have a good grasp on what I am presenting. But writing papers seems like such a chore to me. And because of this, the "publish or perish" phrase really turns me off from research. Another bad thing - I feel like if I am doing research, most of it would be done alone. I work best in small groups. I like to have at least one person to bounce ideas off of when I am brainstorming. The idea of being a part of some small elite group to build things sounds ideal to me. So being able to work in small groups for the majority of my career is a definite plus. I don't feel like I can get this doing research. Masters - I read a lot online that most people come in as engineers and eventually move into management positions. As of now, I don't see myself wanting to be a part of management. Lets say my company wanted to make some new product or system - I would get much more pride, enjoyment, and overall satisfaction to say "I made this" rather than "I managed a group of people that made this." I want to be a big part of the development process. I want to make things. I think it would be great to be more specialized than other people. I would rather know everything about something than something about everything. I always have been that way - was a great pitcher during my baseball years, but not so good at everything else, great at certain classes in school, but not so good at others, etc. To think that my career would be the same way sounds okay to me. Getting a PhD would point me in this direction. It would be great to be some guy who is someone that people look towards and come to ask for help because of being such an important contributor to a very specific field, such as artificial neural networks or robotic haptic perception. From what I gather about the software industry, being specialized can be a very bad thing because of the speed of the new technology. I When it comes to being employed, I have pretty conservative views. I don't want to change companies every 5 years. Maybe this is something everyone wishes, but I would love to just be an important person in one company for 10+ (maybe 20-25+ if I'm lucky!) years if the working conditions were acceptable. I feel like that is more possible as a PhD though, being a professor or researcher. The more I read about people in the software industry, the more it seems like most software engineers bounce from company to company at rapid paces. Some even work like a hired gun from project to project which is NOT what I want AT ALL. But finding a place to make great and important software would be great if that actually happens in the real world. I'm a very competitive person. I thrive on competition. I don't really know why, but I have always been that way even as a kid growing up. Competition always gave me a reason to practice that little extra every night, always push my limits, etc. It seems to me like there is no competition in the research world. It seems like everyone is very relaxed as long as research is being conducted. The only competition is if someone is researching the same thing as you and its whoever can finish and publish first (but everyone seems to careful to check that circumstance). 
    The only noticeable competition to me is just with yourself and your own discipline. I like the idea that in the industry there is real competition between companies to put out the best product or be put out of business. I feel like this would constantly be pushing me to be better at what I do. One thing that is really pushing me towards a PhD is the lifetime of the things you make. I feel like if you make something truly innovative in the industry…just some really great new application or system…there is a shelf-life of about 5-10 years before someone just does it faster and more efficiently. But with research work, you could create an idea or algorithm that lasts for decades. For instance, the A* search algorithm was described in 1968 and is still widely used today. That is amazing to me. In the words of Palahniuk, "The goal isn't to live forever, it's to create something that will." Above anything, I just want to do something that matters. I want my work to help and progress society. Seriously, if I'm stuck programming GUIs for the next 40 years…I might shoot myself in the face. But then again, I hate the idea that less than 1% of the population will come into contact with my work and even fewer will understand its importance. So if anything I have said is false, then please inform me. If you think I come across as a masters or a PhD person, inform me. If you want to give me some extra insight or add on to any point I made, please do. Thank you so much to anyone for any help.

    Read the article

  • Is it OK to reoccupy my old GitHub username to protect repository redirections?

    - by Idan Arye
    I'm considering changing my GitHub username from the old alias I was using as a kid to my real name. I'm concerned about my repository URLs. GitHub will redirect the old URLs, but if someone creates a new account using my old username and creates a repository with the same name as one of my repositories, the URL redirection will break and the URL will lead to their repository, not mine. Now, this is understandable, and GitHub recommends not counting on the redirects in the long term and updating all the remotes, but I'm concerned about some Vim plugins I'm hosting on GitHub. It's a common practice to manage Vim plugins with Git (either as separate repositories or as submodules), and if one of the plugins' remotes breaks you'll get error messages when you try to batch-update all your plugins (it happened to me once...). It's not that hard to solve, and the chances that it'll happen are slim, but I would still like to avoid causing trouble for the users of my plugins... To prevent this, I'm thinking of creating a new account with my old username. That way I can avoid the risk of someone else taking my old username and breaking the redirects of my old repositories. While researching this approach I found GitHub's Name Squatting Policy. According to that policy, GitHub can delete or rename inactive accounts. To my understanding, they do this to prevent cybersquatting, but surely this isn't the case with my fake account - I'm not holding someone else's name in an attempt to sell it to them, I'm merely occupying a name I was using in order to protect my old URLs... So, is it acceptable to go with this plan and create a fake account with my old username?

    Read the article

  • What are the options for retraining formally as a software engineer?

    - by Matt Harrison
    I'm a self-taught programmer. I have a good undergraduate degree in Architecture (building, not software). I was always a science/maths kid and got consistently good grades in these subjects. However, I became indecisive at undergraduate level and switched between Physics, Chemistry, Art and finally stuck with Architecture, mainly out of the desperate need to finish any degree. As soon as I graduated, I ditched architecture and started writing code again professionally. I've been a programmer now for 3 years and I've progressed very quickly. I'm ambitious and I want to work for the top companies in this field at some point, and I've realised I need a Computer Science education to be taken seriously (based on job ads for the big tech firms). I've applied for a few MSc programs in Computer Science but they've all rejected me because of my BA. It's just not an option for me to quit my job and go back and do another 3-year undergraduate degree in CS. I know I can study at this level because I've read most of the books on the reading lists for CS courses in the UK that I can find, and I have this knowledge now; it's just that I can't prove it on an application form. What options are available to me?

    Read the article

  • Bust Out these 13 Spooky Games for Halloween

    - by Jason Fitzpatrick
    Looking for a suitably spooky board game for Halloween? This roundup includes everything from the light-hearted to the dark and challenging. Over at GeekDad they’ve rounded up 13 horror/Halloween themed board games to help you fill your holiday board gaming quota. The list includes classics like the in-depth and atmospheric Arkham Horror to more recent and kid-friendly GeistesBlitz. Although we think their list is rock solid and includes some great titles, we’re disappointed to see that Witch of Salem, a great lighter-weight alternative to Arkham Horror for those nights you want some cooperative H.P. Lovecraft inspired play without the massive setup and game length, didn’t make the list. To check out the full GeekDad list hit up the link below, for more boardgame-centric reading we highly recommend the excellent board gaming site BoardgameGeek. 13 Spooky Board Games for Your Halloween Game Night [GeekDad]

    Read the article

  • How to Manage and Use LVM (Logical Volume Management) in Ubuntu

    - by Justin Garrison
    In our previous article we told you what LVM is and what you may want to use it for, and today we are going to walk you through some of the key management tools of LVM so you will be confident when setting up or expanding your installation. As stated before, LVM is an abstraction layer between your operating system and physical hard drives. What that means is that the drives and partitions your operating system sees are no longer tied directly to the physical hard drives and partitions they reside on. Rather, the drives and partitions that your operating system sees can be any number of separate hard drives pooled together or in a software RAID.

    Read the article

  • multple inner joins 3 or more crashes mysql server 5.1.30 opensolaris

    - by user331849
    When doing a simple query on 4 inner-joined tables, the server crashes with the output below appearing in the MySQL .err file, e.g.:

        select * from table1
        inner join table2 on table1.a = table2.a and table1.b = table2.b
        inner join table3 on table2.a = table3.a and table2.c = table3.c
        inner join table4 on table3.a = table4.a and table3.d = table4.d

    If I remove one of the tables it executes fine. Likewise, if I remove a different table, it executes fine. All tables have been checked anyway, so this would suggest that it is not a problem specifically with one of the tables. mysql.err trace:

        100503 18:13:19 - mysqld got signal 11 ;
        This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware.
        We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail.
        key_buffer_size=1572864000
        read_buffer_size=2097152
        max_used_connections=11
        max_threads=151
        threads_connected=10
        It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 2155437 K bytes of memory
        Hope that's ok; if not, decrease some variables in the equation.
        thd: 0x72febda8
        Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong...
        stack_bottom = fe07efb0 thread_stack 0x40000
        Trying to get some variables. Some pointers may be invalid and cause the dump to abort...
        thd->query at be1021f0 = explain select * from business inner join timetable on business.id = timetable.business_id inner join timetableentry on timetable.business_id = timetableentry.business_id and timetable.kid = timetableentry.parent inner join staff on timetable.business_id = staff.business_id and timetable.staff_person = staff.kid where business.id = '3050bb04fda41df64a9c1c149150026c'
        thd->thread_id=9
        thd->killed=NOT_KILLED
        The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains information that should help you find out what is causing the crash.
        100503 18:13:19 mysqld_safe mysqld restarted
        100503 18:13:20 InnoDB: Failed to set DIRECTIO_ON on file ./ibdata1: OPEN: Inappropriate ioctl for device, continuing anyway
        100503 18:13:20 InnoDB: Failed to set DIRECTIO_ON on file ./ibdata1: OPEN: Inappropriate ioctl for device, continuing anyway
        InnoDB: The log sequence number in ibdata files does not match
        InnoDB: the log sequence number in the ib_logfiles!
        100503 18:13:20 InnoDB: Database was not shut down normally!
        InnoDB: Starting crash recovery.
        InnoDB: Reading tablespace information from the .ibd files...
        InnoDB: Restoring possible half-written data pages from the doublewrite
        InnoDB: buffer...
        InnoDB: Last MySQL binlog file position 0 2731, file name ./mysql-bin.000093
        100503 18:13:20 InnoDB: Started; log sequence number 0 2650338426
        100503 18:13:20 [Note] Recovering after a crash using mysql-bin
        100503 18:13:20 [Note] Starting crash recovery...
        100503 18:13:20 [Note] Crash recovery finished.
    This is on OpenSolaris (SunOS 5.11 snv_111b i86pc i386 i86pc), MySQL 5.1.30. Here is a snippet from the my.cnf file:

        key_buffer = 1500M
        max_allowed_packet = 1M
        thread_stack = 256K
        thread_cache_size = 8
        sort_buffer_size = 2M
        read_buffer_size = 2M
        read_rnd_buffer_size = 8M
        table_cache = 512
        tmp_table_size = 400M
        max_heap_table_size = 64M
        query_cache_limit = 20M
        query_cache_size = 200M

    Is this a bug or a configuration issue?
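
    As a quick sanity check, the numbers in the crash log can be plugged into the formula the server prints (a rough back-of-the-envelope calculation using only the values reported above):

        key_buffer_size                           = 1,572,864,000 bytes ≈ 1,536,000 K
        (read_buffer + sort_buffer) * max_threads = (2,097,152 + 2,097,152) * 151 ≈ 618,496 K
        total                                     ≈ 2,154,496 K

    which is in line with the 2,155,437 K the log reports, i.e. roughly 2 GB of worst-case memory use.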

    Read the article

  • Cannot install ia32-lib package

    - by A British Person
    I have several programs that require 32-bit packages (pointing to the ia32-libs package). However, when I try to install it, this happens:

        spirit@ubuntu:~$ sudo apt-get install ia32-libs
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation:
        The following packages have unmet dependencies:
         ia32-libs : Depends: ia32-libs-multiarch but it is not installable
        E: Unable to correct problems, you have held broken packages.

    No big whoop, packages die all the time. I tried a month later, however, and I still got this error; trying to install the specific package produces this:

        spirit@ubuntu:~$ sudo apt-get install ia32-libs-multiarch
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Package ia32-libs-multiarch is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source
        E: Package 'ia32-libs-multiarch' has no installation candidate

    I am no Linux whizz-kid, but this seems to mean that the package doesn't exist. I searched for Skype in the software centre (I was told this installs the 32-bit packages) and it does not appear there, and the download from their website produces an error about - funnily enough - missing 32-bit packages. Anyone who helps me will get a medal from the gods with the weight of a thousand planets. Just don't wear it for god's sake.

    Read the article

  • Sweden Windows Azure Group Meeting in November &amp; Fast with Windows Azure Competition

    - by Alan Smith
    SWAG November Meeting There will be a Sweden Windows Azure Group (SWAG) meeting in Stockholm on Monday 19th November. Chris Klug will be presenting a session on Windows Azure Mobile Services, and I will be presenting a session on Web Site Authentication with Social Identity Providers. Active Solution have been kind enough to host the event, and will be providing food and refreshments. The registration link is here: http://swag14.eventbrite.com If you would like to join SWAG the link is here: http://swagmembership.eventbrite.com Fast with Windows Azure Competition I’ve entered a 3-minute video of rendering a 3D animation using 256 Windows Azure worker roles in the “Fast with Windows Azure” competition. This is the last week of voting, so it would be great if you could check out the video and vote for it if you like it. I have not driven a car for about 15 years, so if I win you can expect a hilarious summary of the track day in Vegas. My preparation for the day would be to play Project Gotham Racing for a weekend, and watch a lot of Top Gear. My video is “Rapid Massive On-Demand Scalability Makes Me Fast!”. The link is here: http://www.meetwindowsazure.com/fast/

    Read the article

  • Best CMS for review-type sites

    - by Pru
    Is there an ideal CMS for making a review site? By review site, I mean something like a restaurant review site where each entry belongs to different major categories like Cuisine and City. Then users can browse and filter by each, or by a combination (Chinese food in Los Angeles, with suggestions of other Chinese restaurants in LA, etc.). Furthermore, I'd want it to support other fields like price, parking, kid-friendliness, etc., and to let users filter by those criteria as well. I've been told that with a combination of custom taxonomies, plug-ins and many clever little queries, WordPress 3.x can handle this. But I'm having a heck of a time with it once I get into the nitty gritty, and that's where I find the community support is lacking. The sort of stuff you'd think would work in WP, like making one parent category for Cuisine and one for City, doesn't really work once you get further in and start trying to pull it all together. Then you find these blog posts where people say, "This example shows that one could create a huge movie review site using custom taxonomies..." but when you go and try it you hit all sorts of challenges and oddities that point a big long finger at WordPress being, in fact, a blogging platform. The best I came up with was one category for the cuisine and one tag for the city, then I created a couple of custom tag-like taxonomies for the other features. It's quite a mess to try to figure out how to assemble all of that into a natural, intuitive site. I expect a few versions down the road WP will be able to do these sorts of sites out of the box. So I thought I'd take a step back before I run back into the WordPress fray and find out if maybe there is another platform better suited to this sort of relational content site. Directory scripts in some ways offer many of the features I'm looking for, but I need something more flexible and, hopefully, interactive (comments, reviews). I'm especially looking for feedback from people who've crafted sites like this. Thanks!

    Read the article

  • Problem with understanding how to start

    - by Coolface
    Okay, this might be a little off-topic but I'll try anyway. Sorry to bother you. I've been working as a sysadmin for at least 5 years now and I quite enjoy the IT field in general. Somehow I was never much interested in programming, but I've always wanted to learn something at least easy and for personal use. As a sysadmin I need scripting skills, so I learned shell scripting without much trouble; I also tried to learn Pascal, Delphi and BASIC over time, and most recently Python. Well, my problem is that when I try to learn programming I just can't apply what I learn from the books to the real world. What I mean is, I understand there are data structures, algorithms, variables, libs, if-then logic, etc., but I just can't understand how to apply these things when I want to do real things. Say I want to do something as simple as parse a web page, find a word on it and write it to a file: I draw a quick algorithm (get the web page, find the word, write to a file), and on paper everything looks simple, but when I get to the coding I'm stuck pretty much from the start. I try to read the code of real programs, but that's just totally confusing, especially big parts with many classes, so I quickly lose the trail of what the code does. I think I just lack some fundamentals needed to see the big picture, but I don't really know what they might be. Or maybe I just don't have a passion for programming at all... My best bet was shell scripting, where I have really no problem writing complex scripts, but that just isn't enough. Recently I read around 5 or 6 Python books, because everyone says it's so easy even a kid can code something, but still not much luck; Python is good and easy, but I can't make anything harder than procedural-style code like in bash for easy things, and when I want harder things I'm still stuck. In college I was also not a math and tech guy and liked to study mostly non-tech stuff like economics and psychology; maybe that's my problem? Anyway, any advice would be greatly appreciated.
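
    For what it's worth, the web-page example above maps almost one-to-one onto code. Here is a minimal, hedged sketch in C# (the URL, the word and the output file are all made up; the same three steps translate directly to Python or bash):

        using System;
        using System.IO;
        using System.Net;

        class FindWordInPage
        {
            static void Main()
            {
                // 1. Get the web page (hypothetical URL).
                string html;
                using (var client = new WebClient())
                {
                    html = client.DownloadString("http://example.com/");
                }

                // 2. Find a word on it (hypothetical word).
                bool found = html.Contains("example");

                // 3. Write the result to a file (hypothetical path).
                File.WriteAllText("result.txt", found ? "found" : "not found");
            }
        }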

    Read the article

  • Web Services c#, Sending notifications Clients

    - by Diode
    I want to send a notification (say, a string) to subscribers (subscriber IP addresses are in a database on the server side) by calling another method. Whenever I call that method, an error occurs.

        [WebMethod]
        public string GetGroupPath(string emailAddress, string password, string ipAddress)
        {
            //SqlDataAdapter dbadapter = null;
            DataSet returnDS = new DataSet();
            string groupName = null;
            string groupPath = null;
            SqlConnection dbconn = new SqlConnection("Server = localhost;Database=server;User ID = admin;Password = password;Trusted_Connection=false;");
            dbconn.Open();
            SqlCommand cmd = new SqlCommand();
            string getGroupName = "select users.groupname from users where emailaddress = "+"'"+ emailAddress+"'"+ " and "+ "users.password = " + "'" +password+"'";
            cmd.CommandText = getGroupName;
            cmd.Connection = dbconn;
            SqlDataReader reader = null;
            try
            {
                reader = cmd.ExecuteReader();
                while (reader.Read())
                {
                    groupName = reader["groupname"].ToString();
                }
            }
            catch (Exception)
            {
                groupPath = "Invalied";
            }
            dbconn.Close();
            dbconn.Open();
            if (groupName != null)
            {
                string getPath = "select groups.grouppath from groups where groupname = " + "'" + groupName + "'";
                cmd.CommandText = getPath;
                cmd.Connection = dbconn;
                try
                {
                    reader = cmd.ExecuteReader();
                    while (reader.Read())
                    {
                        groupPath = reader["grouppath"].ToString();
                    }
                }
                catch
                {
                    groupPath = "Invalied";
                }
            }
            else
                groupPath = "Invalied";
            dbconn.Close();
            if (groupPath != "Invalied")
            {
                dbconn.Open();
                string getPath = "update users set users.ipaddress = "+"'"+ipAddress+"'"+" where users.emailaddress = " + "'" + emailAddress + "'";
                cmd.CommandText = getPath;
                cmd.Connection = dbconn;
                cmd.ExecuteNonQuery();
                dbconn.Close();
            }
            NotifyUsers();
            //NotifyUsers nu = new NotifyUsers();
            //List<string> ipList = new List<string>();
            //ipList.Add("192.168.56.1");
            //nu.Notify();
            return groupPath;
        }

        private void NotifyUsers()
        {
            Socket sock = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
            byte[] ipb = Encoding.ASCII.GetBytes("255.255.255.255");
            IPAddress ipAddress = new IPAddress(ipb);
            IPEndPoint endPoint = new IPEndPoint(ipAddress, 15000);
            string notification = "new_update";
            byte[] sendBuffer = Encoding.ASCII.GetBytes(notification);
            sock.SendTo(sendBuffer, endPoint);
            sock.Close();
        }

    This is basically what has to be done. On the server side I have a listening thread that gets a notification when the server sends data (assume for now that the database contains the client IP addresses). Whenever I call the web method it gives an error "invalid IPAddress" at the line byte[] ipb = Encoding.ASCII.GetBytes("255.255.255.255"); Thank you :) Since this is my first ever post, please be kind enough to give me better feedback too :) Thanks
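
    One possible explanation, offered as a hedged guess rather than a confirmed answer: Encoding.ASCII.GetBytes("255.255.255.255") returns a 15-byte array (one byte per character), while the IPAddress(byte[]) constructor expects a 4-byte IPv4 or 16-byte IPv6 address, which would produce exactly this kind of "invalid IPAddress" error. A minimal sketch of NotifyUsers using IPAddress.Broadcast instead (and enabling broadcast on the socket) might look like this:

        private void NotifyUsers()
        {
            using (var sock = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp))
            {
                // Needed for sending to the limited broadcast address on most stacks.
                sock.EnableBroadcast = true;

                // IPAddress.Broadcast is 255.255.255.255; IPAddress.Parse("255.255.255.255") would also work.
                var endPoint = new IPEndPoint(IPAddress.Broadcast, 15000);

                byte[] sendBuffer = Encoding.ASCII.GetBytes("new_update");
                sock.SendTo(sendBuffer, endPoint);
            }
        }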

    Read the article

  • NINE Questions with Michelle Juett

    - by NINEQuestions
    Michelle Juett is one of the more interesting people I know, even though we’ve never met face to face. She’s part artist, part techie and all cool. We “met” via my good buddy George Clingerman and have plotting to take over the world, errr… I mean “collaborating” ever since. If you happen to live in the Seattle area, you can catch her and her work at Sakura Con on April 2-4, 2010 and various other gamer and art cons throughout the year. You can also find her on Twitter as @Shelldragon. Now that you know a little bit, I’ll let her tell you the rest of the story in these NINE Questions: 1. Where are you from? I was born in Clearwater, Florida. I like to tell people I'm from the Bermuda Triangle, it just makes explaining myself so much easier. My family moved to Washington when I was 5 and I've been in the Pacific Northwest ever since. We like to QQ about the rain but we really love the green trees and clean water. 2. What do you do? I fight evil by moonlight and win love by daylight.. or something like that.  I’ve been in quality assurance for games during the day since January 2008 and an artist for life. I currently work in QA for a really awesome game company in Bellevue.  At home, I work on personal digital art, making game assets as well as other random freelance projects as they pop up. 3. How did you get to where you are now? I'm still not where I want to be but I'm getting closer. The biggest piece of advice I can give is to work hard and never settle for the minimum required. I tend to overwork myself but I've never regretted it. You can want something really bad but if you aren't willing to work for it, then you can't expect it to just happen. I've always drawn and had an unhealthy love for video games that I was told I’d grow out of.  I knew I would not ‘grow out’ of games and that real adults make them and I could too. After I graduated, in searching for jobs, I discovered game testing. I figured this would be a good way to get my foot in the door and start networking. I’ve worked with consoles, websites and now, PC games.  I stuck with my journey, although it has been a rocky one, daylighting as a tester and moonlighting as an artist. I'm still on that journey but I wouldn't have it any other way. Test has given me a perspective that is difficult, if not impossible, to obtain any other way. It gives an unconditional respect for other hard working testers and an insight into creative problem solving. 4. So video game testing probably sounds WAY cooler than the reality. What's it like? What's a given day for you? Game testers don't get a lot of respect because of their stigmas and the fact most people don't actually know what we do.  People hear about the opening and closing disc trays all day. Many places do treat their testers like numbers. It all depends on where you work and how awesome your company is. I've had to deal with a lot of bad work situations to get to a really good one. QA exists to ensure the game is as flawless and enjoyable as it can be by the time it has to leave the nest and go out into the world. This includes everything obvious: “can I beat the level and save the princess?” to the more obscure: ‘What happens when I lose internet connection while trying to save right before falling into a pit to my death while holding the jump key then my cat pulls out my memory card and hides it in her litter box?” On the dev side, for developers, testers can be very scary people. Especially when the test team is not in house and you can’t see each other’s faces.  
I've seen both sides. We don't mean to hurt your feelings. We really DO love you and want your game to be the best it can be! It can be some serious tough love. 5. You are also an accomplished artist. Got any major projects right now you'd like to talk about? LOL, I don't know if I’d say I'm an accomplished artist just yet. I’m still a long way from where I want to be. I figure that’s what makes you grow though: the desire to never stop improving. I like QA but I want to be a full time artist. I was lucky enough to register for a table at Sakura Con in the 11 second window that the tables sold out. As such, I’ll be selling my wares in the Artist Alley April 2-4th. Part of preparing for this is actually making the art to be sold there. Anime is a fun pass time but I don’t draw a whole lot of it so I’m making up for lost time. As I seem to enjoy burying myself in work, I’m an art lead for a secret project that’s so secret I might be killed tonight for even mentioning it. I also take on various freelance projects and do what I can to help out indie games. I discovered the XNA community a year and a half ago and developed a love for Indies when I was writing a weekly newsletter on XBLA news. I’m a little late to the party but I find myself in a unique position where I am an artist and also have technical skills in games. While not programmer myself, I have a lot of game sense and experience. I hope to make some awesome happen. Lastly, I have an ongoing web comic Shell’s Angels) that tends to get neglected when I get busy. I still love drawing comics and keep a little book with me to sketch down ideas as they pop into my head. I may pick it back up again as a larger project sometime in the future. 6. Can you talk about any of the other freelance projects you're doing or are you sworn to secrecy on those too? We wouldn't want a team of game developer ninjas to take you out or anything. All my projects are currently 2d. I have personal projects such as the ongoing comic as well as a graphic novel I've been picking at here and there. My main focus until April is Sakura Con, Sakura Con, Sakura Con.  I see it as a great way to get exposure and convention experience. I found out I love conventions a couple years ago and I want to get more involved in them. 7. As an artist, what is your weapon of choice? What do you use to get most of your stuff done? I am a Photoshop Hero and I have the hoodie to prove it. (http://www.pennyarcademerch.com/pah090011.html) I've dabbled in other paint programs but I always gravitate back to Photoshop. She is my one true love. I'd like to learn programs like Flash or Anime Studio when I get a bit more time because of their animation abilities. I've worked on frame by frame animation forever but I would love to learn 2d rigging. Still, nothing can compare to a simple sketchpad and a pencil. I always have one on me in case I come across or think of something interesting and can't get to a computer. If the Courier ever comes to exist it will be an ideal weapon for me. 8. You did some videos too, depicting the art creation process. What was the motivation behind those? The creative process is just as important as the final product, if not more so.  I've always loved watching speed paint videos and wanted to try it out myself. Turns out it's a lot of work and time but it's definitely fun to go back and rewatch them. Art isn't always the end result and is more often the process itself. 9. Got any interesting tattoos? Designed any for yourself or other people? 
Not yet, but not for lack of desire. I've toiled over what and where for years. Last year, I finally decided the back of my shoulders would be the place. Like anything permanent, I want it to have meaning. I thought of somehow incorporating games but I couldn't find something I felt would stand the test of time even with all the classic sprite games. I'm very picky so we'll see if I can get something solid decided. Come see me at Sakura Con April 2 -4!!!

    Read the article

  • PASS Summit Feedback

    - by Rob Farley
    PASS Feedback came in last week. I also saw my dentist for some fillings... At the PASS Summit this year, I delivered a couple of regular sessions and a Lightning Talk. People told me they enjoyed it, but when the rankings came out, they showed that I didn’t score particularly well. Brent Ozar was keen to discuss it with me. Brent: PASS speaker feedback is out. You did two sessions and a Lightning Talk. How did you go? Rob: Not so well actually, thanks for asking. Brent: Ha! Sorry. Of course you know that's why I wanted to discuss this with you. I was in one of your sessions at SQLBits in the UK a month before PASS, and I thought you rocked. You've got a really good and distinctive delivery style.  Then I noticed your talks were ranked in the bottom quarter of the Summit ratings and wanted to discuss it. Rob: Yeah, I know. You did ask me if we could do this...  I should explain – my presentation style is not the stereotypical IT conference one. I throw in jokes, and try to engage the audience thoroughly. I find many talks amazingly dry, and I guess I try to buck that trend. I also run training courses, and find that I get a lot of feedback from people thanking me for keeping things interesting. That said, I also get feedback criticising me for my style, and that’s basically what’s happened here. For the rest of this discussion, let’s focus on my talk about the Incredible Shrinking Execution Plan, which I considered to be my main talk. Brent: I thought that session title was the very best one at the entire Summit, and I had it on my recommended sessions list.  In four words, you managed to sum up the topic and your sense of humor.  I read that and immediately thought, "People need to be in this session," and then it didn't score well.  Tell me about your scores. Rob: The questions on the feedback form covered the usefulness of the information, the speaker’s presentation skills, their knowledge of the subject, how well the session was described, the amount of time allocated, and the quality of the presentation materials. Brent: Presentation materials? But you don’t do slides.  Did they rate your thong? Rob: No-one saw my flip-flops in this talk, Brent. I created a script in Management Studio, and published that afterwards, but I think people will have scored that question based on the lack of slides. I wasn’t expecting to do particularly well on that one. That was the only section that didn’t have 5/5 as the most popular score. Brent: See, that sucks, because cookbook-style scripts are often some of my favorites.  Adam Machanic's Service Broker workbench series helped me immensely when I was prepping for the MCM.  As an attendee, I'd rather have a commented script than a slide deck.  So how did you rank so low? Rob: When I look at the scores that you got (based on your blog post), you got very few scores below 3 – people that felt strong enough about your talk to post a negative score. In my scores, between 5% and 10% were below 3 (except on the question about whether I knew my stuff – I guess I came as knowledgeable). Brent: Wow – so quite a few people really didn’t like your talk then? Rob: Yeah. Mind you, based on the comments, some people really loved it. I’d like to think that there would be a certain portion of the room who may have rated the talk as one of the best of the conference. Some of my comments included “amazing!”, “Best presentation so far!”, “Wow, best session yet”, “fantastic” and “Outstanding!”. 
I think lots of talks can be “Great”, but not so many talks can be “Outstanding” without the word losing its meaning. One wrote “Pretty amazing presentation, considering it was completely extemporaneous.” Brent: Extemporaneous, eh? Rob: Yeah. I guess they don’t realise how much preparation goes into coming across as unprepared. In many ways it’s much easier to give a written speech than to deliver a presentation without slides as a prompt. Brent: That delivery style, the really relaxed, casual, college-professor approach was one of the things I really liked about your presentation at SQLbits.  As somebody who presents a lot, I "get" it - I know how hard it is to come off as relaxed and comfortable with your own material.  It's like improv done by jazz players and comedians - if you've never tried it, you don't realize how hard it is.  People also don't realize how hard it is to make a tough subject fun. Rob: Yeah well... There will be people writing comments on this post that say I wasn't trying to make the subject fun, and that I was making it all about me. Sometimes the style works, sometimes it doesn't. Most of the comments mentioned the fact that I tell jokes, some in a nice way, but some not so much (and it wasn't just a PASS thing - that's the mix of feedback I generally get). One comment at PASS was: “great stand up comedian - not what I'm looking for at pass”, and there were certainly a few that said “too many jokes”. I’m not trying to do stand-up – jokes are my way of engaging with the audience while I demonstrate some of the amazing things that the Query Optimizer can do if you write your queries the right way. Some people didn’t think it was technical enough, but I’ve also had some people tell me that the concepts I’m explaining are deep and profound. Brent: To me, that's a hallmark of a great explanation - when someone says, "But of course it has to work that way - how could it work any other way?  It seems so simple and logical."  Well, sure it does when it's explained correctly, but now pick up any number of thick SQL Server books and try to understand the Redundant Joins concept.  I guarantee it'll take more than 45 minutes. Rob: Some people in my audiences realise that, but definitely not everyone. There's only so much you can tell someone that something is profound. Generally it's something that they either have an epiphany on or not. I like to lull my audience into knowing what's going on, and do something that surprises them. Gain their trust, build a rapport, and then show them the deeper truth of what just happened. Brent: So you've learned your lesson about presentation scores, right?  From here on out, you're going to be dry, humorless, and all your presentations will consist of you reading bullet points off the screen. Rob: No Brent, I’m not. I'm also not going to suggest that most presentations at PASS are like that. No-one tries to present like that. There's a big space to occupy between what "dry and humourless" and me. My difference is to focus on the relationship I have with the crowd, rather than focussing on delivering the perfect session. I want to see people smiling and know they're relaxed. I think most presenters focus on the material, which is completely reasonable and safe. I remember once hearing someone talking about product creation. They talked about mediocrity. They said that one of the worst things that people can ever say about your product is that it’s “good”. What you want is for 10% of the world to love it enough to want to buy it. 
If 10% of the world gave me a dollar, I'd have more money than I could ever use (assuming it wasn't the SAME dollar they were giving me, I guess).
Brent: It's the Raving Fans theory. It's better to have a small number of raving customers than a large number of almost-but-not-really customers who don't care that much about your product or service. I know exactly how you feel - when I got survey feedback from my Quest video presentation, where I was dressed up in a Richard Simmons costume, some of the attendees said I was unprofessional and distracting. Some of the attendees couldn't get enough and Photoshopped all kinds of stuff into the screen captures. On the whole, I probably didn't score that well, and I'm fine with that. It sucks to look at the scores though - do those lower scores bother you?
Rob: Of course they do. It hurts deeply. I open myself up and give presentations in a very personal way. All presenters do that, and we all feel the pain of negative feedback. I hate coming 146th and 162nd out of 185, but I have to acknowledge that many sessions did worse still. Plus, once I feel the wounds have healed, I'll be able to remember that there are people in the world who rave about my presentation style, and figure that people will hopefully talk about me. One day maybe those people who don't like my presentation style will stay away, and I might be able to score better. You don't pay to hear country music if you prefer western... Lots of people find chili too spicy, but it's still a popular food.
Brent: But don't you want to appeal to everyone?
Rob: I do, but I don't want to be lukewarm as in Revelation 3:16. I'd rather disgust and be discussed. Well, maybe not 'disgust', but I don't want to conform. Conformity just isn't the same any more. I'm not sure I've ever been one to do that. I try not to offend, but I definitely like to be different.
Brent: Count me among your raving fans, sir. Where can we see you next?
Rob: Considering I live in Adelaide in Australia, I'm not about to appear at anyone's local SQL Saturday. I'm still trying to plan which events I'll get to in 2011. I've submitted abstracts for TechEd North America, but won't hold my breath. I'm also considering the SQLBits conferences in the UK in April, PASS in October, and I'm sure I'll do some LiveMeeting presentations for user groups. Online, people can download some of my recent SQLBits presentations at http://bit.ly/RFSarg and http://bit.ly/Simplification, and they can download a 5-minute MP3 of my Lightning Talk at http://www.lobsterpot.com.au/files/Collation.mp3, in which I try to explain the idea behind collation, using thongs as an example.
Brent: I was in the audience for http://bit.ly/RFSarg. That was a great presentation.
Rob: Thanks, Brent. Now where's my dollar?

    Read the article

  • How I Record Screencasts

    - by Daniel Moth
I get asked this a lot, so here is my brain dump on the topic.
What
A screencast is just a demo that you present to yourself while recording the screen. As such, my advice for clearing your screen for demo purposes and setting up Visual Studio still applies here (adjusting for the fact that I wrote those blog posts when I was running Vista and VS2008, not Windows 8 and VS2012). To see examples of screencasts, watch any of my screencasts on channel9.
Why
If you are a technical presenter, think of when you get the best reactions from a developer audience in your sessions: when you are doing demos, of course. Imagine if you could package those alone and share them with folks to watch over and over. If you have ever gone through a tutorial trying to recreate steps to explore a feature, think how much more helpful it would be if you could watch a video and follow along. Think of how many folks you "touch" with a conference presentation, and how many more you can reach with a shorter online recording of the demo. If you invest so much of your time in the first type of activity, isn't the second type of activity also worth an investment? Fact: if you are able to record a screencast of a demo, you will be much better prepared to deliver it in person. In fact, lately I force myself to make a screencast of any demo I need to present live at an upcoming event. It is also a great backup - if for whatever reason something fails (software, network, etc.) during an attempt at a live demo, you can just play the recorded video for the live audience. There are other reasons (e.g. internal sharing of the latest implemented feature), but the context above is the one within which I create most of my screencasts.
Software & Hardware
I use Camtasia from TechSmith, version 7.1.1. Microsoft has a variety of options for capturing the screen to video, but I have been using this software for so long now that I have not invested time to explore alternatives… I also use whatever cheapo headset is near me, but sometimes I get complaints from folks about the audio, so now I try to remember to use "the good headset". I do not use a web camera as I am not a huge fan of PIP.
Preparation
First you have to know your technology and demo. Once you think you know it, write down the outline and major steps of the demo. Keep it short: 5-20 minutes max. I break that rule sometimes, but try not to. The longer the video is, the more chance that people will not have the patience to sit through it, and the larger the downloaded WMV file ends up being. Run your demo a few times, timing yourself each time, to ensure that you have the planned timing correct, but also to make sure that you are comfortable with what you are going to demo. Unlike with a live audience, there is no live reaction/feedback to steer you, so it can be a bit unnerving at first. It can also lead you to babble too much, so try extra hard to be succinct when demoing/screencasting on your own. TIP: Before recording, hide your desktop/taskbar clock if it is showing.
Recording
To record, you start the Camtasia Recorder tool and configure the settings through its menus: from the Capture menu, choose custom size or full screen (I try to use full screen, and remember to lower the resolution of your screen to as low as possible, e.g. 1024x768 or 1360x768 or something like that); from the Tools -> Options dialog, you can choose whether to record audio and set the volume level; the Effects menu I typically leave untouched, but you should explore and experiment to your liking, e.g.
how the mouse pointer is captured, and whether there should be a delay for the recording when you start it. Once you've configured these settings, typically you just launch this tool and hit the F9 key to start recording.
TIP: As you record, if you ever start to "lose your way", hit F9 again to pause recording, regroup your thoughts and flow, and then hit F9 again to resume. Finally, hit F10 to stop recording. At that point the video starts playing for you in the recorder. This is where you can preview the video to see that you are happy with it before saving. If you are happy, hit the Save As menu to choose where you want to save the video.
TIP: If you've really lost your way to the extent where you'll need to do some editing, hit F10 to stop recording, save the video and then record some more - you'll be able to stitch the videos together later, and this will make it easier for you to delete the parts where you messed up.
TIP: Before you commit to recording the whole demo, every time you should record 5 seconds and preview them to ensure that you are capturing the screen the way you want to and that your audio is still correctly configured and at the right level. Trust me, you do not want to be recording 15 minutes only to find out that you messed up the configuration somewhere.
Editing
To edit the video you launch another Camtasia app, Camtasia Studio:
1. File -> New Project.
2. File -> Save Project and choose a location.
3. File -> Import Media and choose the video(s) you saved earlier. This adds them to the area at the top/middle, but not to the timeline at the bottom.
4. Right-click on the video and choose Add to timeline. It will prompt you for the Editing dimensions; I always choose Recording Dimensions.
5. Do whatever edits you want to do for this video, then add the next video if you have one to stitch, and repeat.
In terms of edits there are many options. The simplest is to do nothing, which is the option I chose when I first started doing these in 2006. Nowadays, I typically cut out pieces that I don't like, lower/mute the audio in other areas, and speed up the video in some areas. A full tutorial on how to do this is beyond the scope of this blog post, but your starting point is to select portions on the timeline and then open the Edit menu at the very top (tip: the context menu doesn't have all the options). You can spend hours editing a recording, so don't lose track of time! When you are done editing, save again, and you are now ready to Produce.
Producing
Production is specific to where you will publish. I've only ever published on channel9, so for that I do the following:
1. File -> Produce and share. This opens a wizard dialog.
2. In the dropdown choose Custom production settings.
3. Hit Next and then choose WMV.
4. Hit Next and keep the default of Camtasia Studio Best Quality and File Size (recommended).
5. Hit Next and choose Editing dimensions video size.
6. Hit Next, hit Options and you get a dialog. Enter a Title on the project tab, and then on the author tab enter the Creator and Homepage. Hit OK.
7. Hit Next. Hit Next again.
8. Enter a video file name in the Production name textbox and then hit Finish.
Now do other stuff while you wait for the video to be produced and you hear it playing. After the video is produced, watch it to ensure it was produced correctly (e.g. sometimes you get mouse issues), and then you are ready to publish it.
Publishing
Follow the instructions of the place where you are going to publish.
If you are MSFT internal and want to publish on channel9, contact those folks so they can share their instructions (if you don't know who they are, ping me and I'll connect you, but they are easy to find in the GAL). For me this involves using a tool to point to the video, choosing a file name (again), choosing an image from the video to display when it is not playing, choosing what output formats I want, and then later, on a webpage, adding tags, a description, and a title. That's all folks, have fun! Comments about this post by Daniel Moth are welcome at the original blog.

    Read the article

Microsoft’s new technical computing initiative

    - by Randy Walker
I made a mental note about this earlier in the year: Microsoft literally buys computers by the truckload. From what I understand, it's typical practice amongst large software vendors. You plug a few wires in, you test it, and you instantly have mega tera tera flops (don't hold me to that number). Microsoft has been plugging away at its cloud services (named Azure), which, for the layman, means Microsoft runs your software on their computers, and as demand increases you can allocate more computing power on the fly.
With this in mind, it doesn't surprise me that I was recently sent an executive email concerning Microsoft's new technical computing initiative. I find it to be a great marketing idea with actual substance behind it. From a programmer's academic perspective, in college we dreamed about this type of processing power; it has decades of computer science theory behind it.
A copy of the email I received follows (note that I almost deleted it, thinking it was spam due to its length):
We don't often think about how complex life really is. Take the relatively simple task of commuting to and from work: it is, in fact, a complicated interplay of variables such as weather, train delays, accidents, traffic patterns, road construction, etc. You can, however, take steps to shorten your commute - using a good, predictive understanding of a few of these variables. In fact, you probably are already taking these inputs and instinctively building a predictive model that you act on daily to get to your destination more quickly.
Now, when we apply the same method to very complex tasks, this modeling approach becomes much more challenging. Recent world events clearly demonstrated our inability to process vast amounts of information and variables that would have helped to more accurately predict the behavior of global financial markets or the occurrence and impact of a volcano eruption in Iceland. To make sense of issues like these, researchers, engineers and analysts create computer models of the almost infinite number of possible interactions in complex systems. But they need increasingly more sophisticated computer models to better understand how the world behaves and to make fact-based predictions about the future. And to do this, it requires a tremendous amount of computing power to process and examine the massive data deluge from cameras, digital sensors and precision instruments of all kinds. This is the key to creating more accurate and realistic models that expose the hidden meaning of data, which gives us the kind of insight we need to solve a myriad of challenges.
We have made great strides in our ability to build these kinds of computer models, and yet they are still too difficult, expensive and time-consuming to manage. Today, even the most complicated data-rich simulations cannot fully capture all of the intricacies and dependencies of the systems they are trying to model. That is why, across the scientific and engineering world, it is so hard to say with any certainty when or where the next volcano will erupt and what flight patterns it might affect, or to more accurately predict something like a global flu pandemic. So far, we just cannot collect, correlate and compute enough data to create an accurate forecast of the real world. But this is about to change. Innovations in technology are transforming our ability to measure, monitor and model how the world behaves.
The implication for scientific research is profound, and it will transform the way we tackle global challenges like health care and climate change. It will also have a huge impact on engineering and business, delivering breakthroughs that could lead to the creation of new products, new businesses and even new industries.
Because you are a subscriber to executive e-mails from Microsoft, I want you to be the first to know about a new effort focused specifically on empowering millions of the world's smartest problem solvers. Today, I am happy to introduce Microsoft's Technical Computing initiative. Our goal is to unleash the power of pervasive, accurate, real-time modeling to help people and organizations achieve their objectives and realize their potential. We are bringing together some of the brightest minds in the technical computing community across industry, academia and science at www.modelingtheworld.com to discuss trends, challenges and shared opportunities. New advances provide the foundation for tools and applications that will make technical computing more affordable and accessible where mathematical and computational principles are applied to solve practical problems. One day soon, complicated tasks like building a sophisticated computer model that would typically take a team of advanced software programmers months to build and days to run will be accomplished in a single afternoon by a scientist, engineer or analyst working at the PC on their desktop. And as technology continues to advance, these models will become more complete and accurate in the way they represent the world. This will speed our ability to test new ideas, improve processes and advance our understanding of systems.
Our technical computing initiative reflects the best of Microsoft's heritage. Ever since Bill Gates articulated the then far-fetched vision of "a computer on every desktop" in the early 1980s, Microsoft has been at the forefront of expanding the power and reach of computing to benefit the world. As someone who worked closely with Bill for many years at Microsoft, I am happy to share with you that the passion behind that vision is fully alive at Microsoft and is carried out in the creation of our new Technical Computing group.
Enabling more people to make better predictions
We have seen the impact of making greater computing power more available firsthand through our investments in high performance computing (HPC) over the past five years. Scientists, engineers and analysts in organizations of all sizes and sectors are finding that using distributed computational power creates societal impact, fuels scientific breakthroughs and delivers competitive advantages. For example, we have seen remarkable results from some of our current customers:
- Malaria strikes 300,000 to 500,000 people around the world each year. To help in the effort to eradicate malaria worldwide, scientists at Intellectual Ventures use software that simulates how the disease spreads and would respond to prevention and control methods, such as vaccines and the use of bed nets. Technical computing allows researchers to model more detailed parameters for more accurate results and receive those results in less than an hour, rather than waiting a full day.
- Aerospace engineering firm a.i. solutions, Inc. needed a more powerful computing platform to keep up with the increasingly complex computational needs of its customers: NASA, the Department of Defense and other government agencies planning space flights.
To meet that need, it adopted technical computing. Now, a.i. solutions can produce detailed predictions and analysis of the flight dynamics of a given spacecraft, from optimal launch times and orbit determination to attitude control and navigation, up to eight times faster. This enables them to avoid mistakes in any areas that can cause a space mission to fail and potentially result in the loss of life and millions of dollars.
- Western & Southern Financial Group faced the challenge of running ever larger and more complex actuarial models as its number of policyholders and products grew and regulatory requirements changed. The company chose an actuarial solution that runs on technical computing technology. The solution is easy for the company's IT staff to manage and adjust to meet business needs. The new solution helps the company reduce modeling time by up to 99 percent - letting the team fine-tune its models for more accurate product pricing and financial projections.
Our Technical Computing direction
Collaborating closely with partners across industry and academia, we must now extend the reach of technical computing even further to help predictive modelers and data explorers make faster, more accurate predictions. As we build the Technical Computing initiative, we will invest in three core areas:
- Technical computing to the cloud: Microsoft will play a leading role in bringing technical computing power to scientists, engineers and analysts through the cloud. Existing high-performance computing users will benefit from the ability to augment their on-premises systems with cloud resources that enable 'just-in-time' processing. This platform will help ensure processing resources are available whenever they are needed – reliably, consistently and quickly.
- Simplify parallel development: Today, computers are shipping with more processing power than ever, including multiple cores, but most modern software only uses a small amount of the available processing power. Parallel programs are extremely difficult to write, test and troubleshoot. However, a consistent model for parallel programming can help more developers unlock the tremendous power in today's modern computers and enable a new generation of technical computing. We are delivering new tools to automate and simplify writing software through parallel processing from the desktop... to the cluster... to the cloud.
- Develop powerful new technical computing tools and applications: We know scientists, engineers and analysts are pushing common tools (i.e., spreadsheets and databases) to the limits with complex, data-intensive models. They need easy access to more computing power and simplified tools to increase the speed of their work. We are building a platform to do this. Our development efforts will yield new, easy-to-use tools and applications that automate data acquisition, modeling, simulation, visualization, workflow and collaboration. This will allow them to spend more time on their work and less time wrestling with complicated technology.
Thinking bigger
There is so much left to be discovered and so many questions yet to be answered in the fascinating world around us. We believe the technical computing community will show us that we have not seen anything yet. Imagine just some of the breakthroughs this community could make possible:
- Better predictions to help improve the understanding of pandemics, contagion and global health trends.
- Climate change models that predict environmental, economic and human impact, accessible in real-time during key discussions and debates.
- More accurate prediction of natural disasters and their impact to develop more effective emergency response plans.
With an ambitious charter in hand, this new team is ready to build on our progress to date and execute Microsoft's technical computing vision over the months and years ahead. We will steadily invest in the right technologies, tools and talent, and work to bring together the technical computing community. I invite you to visit www.modelingtheworld.com today. We welcome your ideas and feedback. I look forward to making this journey with you and others who want to answer the world's biggest questions, discover solutions to problems that seem impossible and uncover a host of new opportunities to change the world we live in for the better.
Bob
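The email above doesn't tie its "simplify parallel development" theme to any particular product, so here is a minimal, hedged sketch of the general idea it describes – spreading independent model runs across CPU cores without hand-managing threads or locks – using only Python's standard library rather than any Microsoft tooling. The simulate function and the number of scenarios are made up purely for illustration.

```python
from concurrent.futures import ProcessPoolExecutor

def simulate(scenario_id: int) -> float:
    """Stand-in for one run of a compute-heavy model (illustrative only)."""
    total = 0.0
    for step in range(1_000_000):
        total += (scenario_id * step) % 7
    return total

if __name__ == "__main__":
    scenarios = range(16)
    # A pool of worker processes spreads independent runs across CPU cores;
    # the caller never manages threads, locks, or scheduling by hand.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(simulate, scenarios))
    print(f"completed {len(results)} runs")
```

The same pattern – submit independent units of work to a pool and collect the results – is, conceptually, what the "desktop... to the cluster... to the cloud" idea generalizes: only the scheduler behind the pool changes.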

    Read the article

  • Introducing Oracle Multitenant

    - by OracleMultitenant
The First Database Designed for the Cloud
Today Oracle announced the general availability (GA) of Oracle Database 12c, the first database designed for the Cloud. Oracle Multitenant, new with Oracle Database 12c, is a key component of this – a new architecture for consolidating databases and simplifying operations in the Cloud. With this, the inaugural post in the Multitenant blog, my goal is to start the conversation about Oracle Multitenant. We are very proud of this new architecture, which we view as a major advance for Oracle. Customers, partners and analysts who have had previews are very excited about its capabilities and its flexibility. This high-level review of Oracle Multitenant will touch on our design considerations and how we re-architected our database for the cloud. I'll briefly describe our new multitenant architecture and explain its key benefits. Finally I'll mention some of the major use cases we see for Oracle Multitenant.
Industry Trends
We always start by talking to our customers about the pressures and challenges they're facing and what trends they're seeing in the industry. Some things don't change. They face the same pressures and the same requirements as ever: pressure to do more with less; to be faster, leaner, cheaper, and deliver services 24/7. Big companies have achieved scale. Now they want to realize economies of scale. As ever, DBAs are faced with the challenges of patching and upgrading large numbers of databases, and provisioning new ones. Requirements are familiar: performance, scalability, reliability and high availability are non-negotiable. They need ever more security in this threatening climate. There's no time to stop and retool with new applications. What's new are the trends: the techniques to use to respond to these pressures within the constraints of the requirements. With the advent of cloud computing and the availability of massively powerful servers – even engineered systems such as Exadata – our customers want to consolidate many applications onto fewer, larger servers. There's a move to standardized services – even self-service.
Consolidation
Consolidation is not new; companies have tried various approaches to consolidating databases in the cloud. One approach is to partition a powerful server between several virtual machines, one per application. A downside of this is that you have the resource and management overheads of OS and RDBMS per VM – that is, per application. Another is that you have replaced physical sprawl with virtual sprawl, and virtual sprawl is still expensive to manage. In the dedicated database model, we have a single physical server supporting multiple databases, one per application. So there's a shared OS overhead, but the RDBMS process and memory overheads are replicated per application. Let's think about our traditional Oracle Database architecture. Every time we create a database, be it a production database, a development or a test database, what do we do?
We create a set of files, we allocate a bunch of memory for managing the data, and we kick off a series of background processes. This is replicated for every one of the databases that we create. As more and more databases are fired up, these replicated overheads quickly consume the available server resources, and this limits the number of applications we can run on any given server. In Oracle Database 11g and earlier, the highest degree of consolidation could be achieved by what we call schema consolidation. In this model we have one big server with one big database. Individual applications are installed in separate schemas or table-owners. Database overheads are shared between all applications, which affords maximum consolidation. The shortcomings are that application changes are often required. There is no tenant isolation. One bad apple can spoil the whole batch.
New Architecture & Benefits
In Oracle Database 12c, we have a new multitenant architecture, featuring pluggable databases. This delivers all the resource utilization advantages of schema consolidation with none of the downsides. There are two parts to the term "pluggable database": "pluggable", which is new, and "database", which is familiar. Before we get to the exciting new stuff, let's discuss what hasn't changed. A pluggable database is a fully functional Oracle database. It's not watered down in any way. From the perspective of an application or an end user it hasn't changed at all. This is very important because it means that no application changes are required to adopt this new architecture. There are many thousands of applications built on Oracle databases, and they are all ready to run on Oracle Multitenant. So we have these self-contained pluggable databases (PDBs), and as their name suggests, they are plugged into a multitenant container database (CDB). The CDB behaves as a single database from the operations point of view. Very much as we had with the schema consolidation model, we only have a single set of Oracle background processes and a single, shared database memory requirement. This gives us very high consolidation density, which affords maximum reduction in capital expenses (CapEx). By performing management operations at the CDB level – "managing many as one" – we can achieve great reductions in operating expenses (OpEx) as well, but we retain granular control where appropriate. Furthermore, the "pluggability" capability gives us portability, and this adds a tremendous amount of agility. We can simply unplug a PDB from one CDB and plug it into another CDB, for example to move it from one SLA tier to another. I'll explore all these new capabilities in much more detail in a future posting.
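To make the "no application changes" point concrete, here is a minimal sketch – not taken from the post – of an application connecting to a pluggable database with the python-oracledb driver (the successor to cx_Oracle). The host, port, credentials and service name are placeholders; the assumption being illustrated is simply that a PDB is reached through an ordinary Oracle service name, so the application code looks exactly as it would against any other Oracle database.

```python
# A sketch under assumptions: python-oracledb installed, and a PDB whose
# service name is "sales_pdb" reachable at dbhost.example.com:1521.
# All names and credentials below are placeholders, not values from the post.
import oracledb

# The application targets the PDB's service name exactly as it would any
# other Oracle database service -- no application code changes required.
connection = oracledb.connect(
    user="app_user",
    password="app_password",
    dsn="dbhost.example.com:1521/sales_pdb",
)

with connection.cursor() as cursor:
    cursor.execute("SELECT sysdate FROM dual")
    print(cursor.fetchone())

connection.close()
```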
Use Cases
We can identify a number of use cases for Oracle Multitenant. Here are a few of the major ones:
- Development / Testing – where individual engineers need rapid provisioning and recycling of private copies of a few "master test databases"
- Consolidation – of disparate applications using fewer, more powerful servers
- Software as a Service – deploying separate copies of identical applications to individual tenants
- Database as a Service – typically self-service provisioning of databases on the private cloud
- Application Distribution from ISV / Installation by Customer – eliminating many typical installation steps (create schema, import seed data, import application code PL/SQL…) – just plug in a PDB!
- High volume data distribution – literally via disk drives in envelopes distributed by truck! – distribution of things like GIS or MDM master databases
- …various others!
Benefits
Previous approaches to consolidation have involved a trade-off between reductions in capital expenses (CapEx) and operating expenses (OpEx), and they've usually come at the expense of agility. With Oracle Multitenant you can have your cake and eat it:
- Minimize CapEx – more applications per server
- Minimize OpEx – manage many as one; standardized procedures and services; rapid provisioning
- Maximize agility – cloning for development and testing; portability through pluggability; scalability with RAC
- Ease of adoption – applications run unchanged; it's a pure deployment choice. Neither the database backend nor the application needs to be changed.
In future postings I'll explore various aspects in more detail. However, if you feel compelled to devour everything you can about Oracle Multitenant this very minute, have no fear. Visit the Multitenant page on OTN and explore the various resources we have available there. Among these, Oracle Distinguished Product Manager Bryn Llewellyn has written an excellent, thorough, and exhaustively detailed white paper about Oracle Multitenant, which is available here.
Follow me
I tweet @OraclePDB #OracleMultitenant

    Read the article

< Previous Page | 36 37 38 39 40 41 42 43 44 45 46 47  | Next Page >