Search Results

Search found 5014 results on 201 pages for 'amazon kindle fire'.


  • SQL SERVER – Clustered Index and Primary Key – Contest Win Joes 2 Pros Combo (USD 198) – Day 3 of 5

    - by pinaldave
    In August 2011 we ran a contest where every day we gave away one book, for an entire month. The contest was extremely successful: lots of people participated and lots of books were given away. I have received many questions asking whether we are doing something similar this month. Absolutely. Instead of running a month-long contest, we are doing something more interesting: we are giving away a gift worth USD 198 every day this week. Each day we are giving away the Joes 2 Pros 5-volume (BOOK) SQL 2008 Development Certification Training Kit, one copy in India and one in the USA, two giveaways in total (worth USD 198). All the gifts are sponsored by Koenig Training Solutions and Joes 2 Pros. The books are available here: Amazon | Flipkart | Indiaplaza

    How to Win: Read the question, read the hints, and answer the quiz in the contact form in the following format: your answer to the question, plus the name of your country (the contest is open to USA and India residents only). Two winners will be randomly selected and announced on August 20th.

    Question of the Day: Which of the following datatypes is usually NOT the best choice for a Primary Key and Clustered Index? a) INT b) BIGINT c) GUID d) SMALLINT

    Query Hints: BIG HINT POST. The clustered index is the placement order of a table's records in memory pages. When you insert new records, each record is inserted into the memory page in the order it belongs. In the figure below we see another new record (Major Disarray) being inserted, in sequence, between Jonny and Rick. Since there is no room in this memory page, some records need to shift around. The page split occurs when Irene's record moves to the second page. Page splits are considered very bad for performance, and there are a number of techniques to reduce, or even eliminate, the risk of page splits. You can create a clustered index on the table on any field you choose. Sometimes SQL Server will create a clustered index for you. Often the field holding the Primary Key makes a great candidate for the clustered index. (A short T-SQL sketch at the end of this post shows how to observe the effect yourself.)

    Additional Hints: I have previously discussed various concepts from SQL Server Joes 2 Pros Volume 3:
    - SQL Joes 2 Pros Development Series – All about SQL Statistics
    - SQL Joes 2 Pros Development Series – Introduction to Page Split
    - SQL Joes 2 Pros Development Series – The Clustered Index – Simple Understanding
    - SQL Joes 2 Pros Development Series – Geography Data Type – Calculating Distance Between Two Points on the Earth
    - SQL Joes 2 Pros Development Series – Sparse Data and Space Used by Sparse Data
    - SQL Joes 2 Pros Development Series – System and Time Data Types
    - SQL Joes 2 Pros Development Series – Data Row Space Usage and NULL Storage

    Next Step: Answer the quiz in the contact form in the format above (the contest is open for USA and India). Bonus Winner: Leave a comment with your favorite article from the "additional hints" section and you may be eligible for a surprise gift; there is no country restriction for this bonus contest. Do mention why you liked that particular blog post, and I will announce its winner along with the main contest.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
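    For readers who want to see the hint in action, below is a minimal T-SQL sketch (the table names and row count are illustrative, not from the original post). It contrasts an ever-increasing INT clustering key with a random GUID clustering key, then compares fragmentation, which is the visible result of page splits:

        -- Two demo tables: an ever-increasing INT key vs. a random GUID key.
        CREATE TABLE dbo.IntKeyed  (Id INT IDENTITY(1,1) PRIMARY KEY CLUSTERED,
                                    Payload CHAR(200) NOT NULL DEFAULT '');
        CREATE TABLE dbo.GuidKeyed (Id UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID() PRIMARY KEY CLUSTERED,
                                    Payload CHAR(200) NOT NULL DEFAULT '');

        -- Load the same number of rows into each table.
        INSERT INTO dbo.IntKeyed  (Payload)
        SELECT TOP (10000) '' FROM sys.all_columns a CROSS JOIN sys.all_columns b;
        INSERT INTO dbo.GuidKeyed (Payload)
        SELECT TOP (10000) '' FROM sys.all_columns a CROSS JOIN sys.all_columns b;

        -- Random GUIDs land in the middle of the index and force page splits,
        -- which shows up as much higher fragmentation and a larger page count.
        SELECT OBJECT_NAME(ps.object_id) AS table_name,
               ps.avg_fragmentation_in_percent,
               ps.page_count
        FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
        WHERE ps.object_id IN (OBJECT_ID('dbo.IntKeyed'), OBJECT_ID('dbo.GuidKeyed'))
          AND ps.index_id = 1;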


  • The Latest News About SAP

    - by jmorourke
    Like many professionals, I get a lot of my news from the Google e-mail alerts I've set up to keep track of key industry trends and competitive news. In the past few weeks, I've been getting a number of news alerts about SAP. Below are a few recent examples:

    Warm weather cuts short US maple sugaring season - by Toby Talbot, AP
    MILWAUKEE - Temperatures in Wisconsin had already hit the high 60s when Gretchen Grape and her family began tapping their 850 maple trees. They had waited for the state's ceremonial tapping to kick off the maple sugaring season. It was moved up five days, but that didn't make much difference. For Grape, the typically month-long season ended nine days later. The SAP had stopped flowing in a record-setting heat wave, and the 5-quart collection bags that in a good year fill in a day were still half empty. Instead of their usual 300 gallons of syrup, her family had about 40. Maple syrup producers across the North have had their season cut short by unusually warm weather. While those with expensive, modern vacuum systems say they've been able to suck a decent amount of sap from their trees, producers like Grape, who still rely on traditional taps and buckets, have seen their year ruined. "It's frustrating," said the 69-year-old retiree from Holcombe, Wis. "You put in the same amount of work, equipment, investment, and then all of a sudden, boom, you have no SAP."

    Home & Garden: Too-Early Spring Means Sugaring Woes - by Georgeanne Davis for The Free Press
    Over this past weekend, forsythia and daffodils were blooming in the southern parts of the state as temperatures climbed to 85 degrees, and trees began budding out, putting an end to this year's maple syrup production even as the state celebrated Maine Maple Sunday. Maple sugaring needs cold nights and warm days to induce SAP flows. Once the trees begin budding, SAP can still flow, but the SAP is bitter and has an off taste. Many farmers and dairymen count on sugaring for extra income, so the abbreviated season is a real financial loss for them, akin to the shortened shrimping season's effect on Maine lobstermen.

    SAP season comes to a sugary Sunday finale - Kennebec Journal, March 26th, 2012
    Rebecca Manthey stood out in the rain at the entrance of Old Fort Western, keeping watch over a cast-iron kettle of boiling SAP hooked to a tripod over a wood fire. Manthey and the rest of the Old Fort Western staff -- decked out in 18th-century attire -- joined sugar houses across the state in observance of Maine Maple Sunday. The annual event is sponsored by the Department of Agriculture and the Maine Maple Producers Association. She said the rain hadn't kept people from coming to enjoy all the events at the fort surrounding the production of maple syrup. "In the 18th century, you would be boiling SAP in the woods, so I would be in the woods," Manthey explained to the families who circled around her. "People spent weeks and weeks in the woods. You don't want to cook it too fast or it would burn. When it looks like the right consistency then you send it (into the kitchen) to be made into sugar." Manthey said she enjoyed portraying an 18th-century woman, even in the rain, which didn't seem to bother visitors either. There was a steady stream of families touring the fort and enjoying the maple syrup demonstrations.

    I hope you enjoy these updates on SAP - Happy April Fool's Day!


  • Free tools versus paid tools.

    - by Dennis Vroegop
    We live in a strange world. Information should be free. Tools should be free. Software should be free (and I mean free as in free beer, not as in free speech). Of course, since I make my living (and pay my mortgage) by writing software, I tend to disagree. Or rather: I want to get paid for the things I do in the daytime. Next to that, I also spend time on projects I feel are valuable for the community, which I do for free. The reason I can do that is because I get paid enough in the daytime to afford that time. It gives me a good feeling, I help others, and it's fun to do. But the baseline is: I get paid to write software. I am sure this goes for a lot of other developers. We get paid for what we do during the daytime and spend our free time giving back.

    So why does everyone always make a fuss when a company suddenly starts to charge for software? To me, this seems like a very reasonable decision. Companies need money: they have staff to pay, buildings to rent, coffee to buy, etc. None of this comes free, so it makes sense that they charge their customers for the things they produce. I know there's a very big open source market out there, where companies give away (parts of) their software and get revenue out of the services they provide. But this doesn't work if your product doesn't need services. If you build a great tool that is very easy to use, and you give it away for free, you won't get any money by selling services that no user of your tool really needs. So what do you do? You charge money for your tool. It's either that or stop developing the tool and turn to other, more profitable projects. Like it or not, that's simple economics at work: you have something other people want, so you charge them for it.

    This week it was announced that what I believe is the most used tool for .NET developers (besides Visual Studio, of course), namely Red Gate's .NET Reflector, will stop being a free tool. They will charge you $35 for the next version. Suddenly Twitter was on fire and everyone was mad about it. But why? The tool is downloaded by so many developers that it must be valuable to them. I know of no serious .NET developer who hasn't got it on his or her machine. So apparently the tool gives them something they need. Why, then, do they expect it to be free? There are developers out there maintaining and extending the tool, building new and better versions of it. And the price? $35 doesn't seem much. If I think of the time the tool has saved me, the 35 dollars would be earned back in a day. If by spending this amount of money I can rely on great software that helps me do my job better and faster, I have no problem spending it. I know that there is a great team behind it (the Red Gate tools are a must-have when developing SQL systems, for instance), and I do believe they are within their rights to charge for it.

    So... there you have it. This is, of course, my opinion. You may think otherwise. Please let me know in the comments what you think!

    Technorati tags: redgate, reflector, opensource


  • The Evolution of Television and Home Entertainment

    - by Bill Evjen
    This is a group that is focused on entertainment in the aviation industry. I am attending their conference for the first time, as it relates to my job at Swank Motion Pictures and what we do for our various markets. I will post my notes here.

    The Evolution of Television and Home Entertainment, by Patrick Cosson, Veebeam

    TV has been the center of living rooms for some time; conversations and culture evolve around the TV. The way we consume this content has been changing dramatically. After TV, we had the MTV revolution: it created shorter attention spans and made us more materialistic, narcissistic, and not easily impressed. Then we came to the Internet. The amount of content has expanded; it contains a ton of user-generated content and provides filtering, organization, and distribution. We now have a problem: we are in the age of digital excess. We can access whatever we want, and in conjunction with this, we are mobile. The challenge we have now is curation.

    The trends we see:
    - a rapid shift from scheduled to on-demand consumption
    - a move to Internet protocols from cable
    - rapid fragmentation of media
    - a transition from the TV set to a variety of screens
    - social connections becoming mediators and amplifiers

    TiVo - the shift to on demand: It is driven by a time crunch and provides personal experiences. Once old consumption habits are changed, there is no way back! Experience shows that people load up content and then bring it with them on planes, to hotels, etc.

    Rapid fragmentation of media sources: Many new professional content sources and channels, the rise of digital distribution, and the rise of user-generated content contribute to the wealth of content sources and abundant choice. Netflix, BBC iPlayer, Hulu, Pandora, iTunes, Amazon Video, Vudu, Voddler, Spotify (these companies didn't exist 5 years ago). People now expect this kind of consumption, and people are now thinking about how to deliver all these tools.

    Transition from the TV set to multi-screens: The TV screen has traditionally been the dominant consumption screen for TV and video. Now the PC, game consoles, and various mobile devices are rapidly becoming common video devices. Multi-screens are now the norm.

    Social connections becoming key mediators: Social networking enablers, which increasingly funnel traffic on the web, will become an integral part of the discovery, consumption, and sharing model for television. The revolution will be broadcast on Facebook and Twitter.

    There is business disruption: a lot of new entrants, rapid internationalization, increasing competition from existing media players, and a fragmenting audience base.

    Web browser: freedom to access any site; the fight over the walled garden. Most devices are not powerful enough to support a full browser, but the PC will always be present in the living room, with a wireless link between PC and TV: output 1080p, plays anything, secure.

    Key players and their challenges. Services: internet media is increasingly interconnected with social media and publicly shared UGC; content delivery is moving to IPTV; rights management issues are creating silos and hindering a great user experience and growth. Devices: devices are becoming people's windows into all kinds of media from all kinds of sources; there won't be a consolidation of the device landscape, rather the opposite; finding the right niche makes the most sense.

    We are moving to an on-demand, streaming world. People want full access to anything.


  • SQL SERVER – Select the Most Optimal Backup Methods for Server

    - by pinaldave
    Backup and Restore are very interesting concepts, and one should be very familiar with them if dealing with a production database. One never knows when a natural disaster or user error will surface, and the first thing everybody wants is to get back to a point in time when things were all fine. In this article I have attempted to answer a few of the common questions related to backup methodology.

    How to Select a SQL Server Backup Type: In order to select a proper SQL Server backup type, a SQL Server administrator needs to clearly understand the difference between the major backup types. Since a picture is worth a thousand words, let me offer it to you below.

    Select a Recovery Model First: The very first question you should ask yourself is: can I afford to lose at least a little (15 min, 1 hour, 1 day) worth of data? Resist the temptation to save it all, as that comes with overhead; the majority of businesses outside finance can actually afford to lose a bit of data. If your answer is "yes, I can afford to lose some data", select the SIMPLE (default) recovery model in the properties of your database; otherwise you need to select the FULL recovery model. An additional advantage of the FULL recovery model is that it allows you to restore the data to a specific point in time, rather than only to the last backup time as in the SIMPLE recovery model, but that exceeds the scope of this article.

    Backups in the SIMPLE Recovery Model: In the SIMPLE recovery model you can choose to do just Full backups, or Full + Differential.

    Full Backup: This is the simplest type of backup. It contains all the information needed to restore the database and should be your first choice. It is often sufficient for small databases, but note that it makes a big impact on the performance of your database.

    Full + Differential Backup: After a Full backup, a Differential backup picks up all of the changes since the last Full backup. This means that if you made Full, Diff, Diff backups, the last Diff backup contains all of the changes and you don't need the previous Differential backup. A Differential backup is obviously smaller and carries less performance overhead.

    Backups in the FULL Recovery Model: In the FULL recovery model you can select Full + Transaction Log, or Full + Differential + Transaction Log backups. You have to create Transaction Log backups, because that is when the log is truncated; otherwise your transaction log will grow uncontrollably.

    Full + Transaction Log Backup: You always need to perform a Full backup first, then a series of Transaction Log backups. Note that (in contrast to Differential backups) you need ALL the transaction log backups since the last Full or Diff backup to properly restore. Transaction log backups have the smallest performance overhead and can be performed often.

    Full + Differential + Transaction Log Backup: If you want to ease the performance overhead on your server, you can replace some of the Full backups in the previous scenario with Differentials. Your restore scenario would start from the Full, then the last Differential, then all of the remaining transaction log backups (a sketch of this pattern appears at the end of this post).

    Typical Backup Scenarios: You may say, "Well, it is all nice – give me the examples now". As you may already know, my favorite SQL backup software is SQLBackupAndFTP. If you go to the Advanced Backup Schedule form in this program and click the "Load a typical backup plan..." link, it will give you scenarios that I think are quite common – see the image below.

    The Simplest Way to Schedule SQL Backups: I hate to repeat myself, but backup scheduling in SQL Agent leaves a lot to be desired. I do not know a simpler way to schedule your SQL Server backups than in SQLBackupAndFTP – see the image below. The whole backup schedule, with compression, encryption and upload to a Network Folder / HDD / NAS Drive / FTP / Dropbox / Google Drive / Amazon S3, takes just a few minutes to set up – see my previous post for the review.

    Final Words: This post offered an explanation of the major backup types only. For more complicated scenarios, or to research other options, go to MSDN as usual.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
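    As a companion to the scenarios above, here is a minimal T-SQL sketch of the Full + Differential + Transaction Log pattern (the database name MyDb and the file paths are hypothetical):

        -- Requires the FULL recovery model for the log backups to make sense.
        BACKUP DATABASE MyDb TO DISK = N'D:\Backup\MyDb_full.bak' WITH INIT;          -- e.g. weekly full
        BACKUP DATABASE MyDb TO DISK = N'D:\Backup\MyDb_diff.bak' WITH DIFFERENTIAL;  -- e.g. nightly differential
        BACKUP LOG      MyDb TO DISK = N'D:\Backup\MyDb_log.trn';                     -- frequent log backups

        -- Restore order: last full, then last differential, then ALL log backups since, in sequence.
        RESTORE DATABASE MyDb FROM DISK = N'D:\Backup\MyDb_full.bak' WITH NORECOVERY;
        RESTORE DATABASE MyDb FROM DISK = N'D:\Backup\MyDb_diff.bak' WITH NORECOVERY;
        RESTORE LOG      MyDb FROM DISK = N'D:\Backup\MyDb_log.trn'  WITH RECOVERY;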


  • jqGrid - customizing the multi-select option (restricting to a single selection and adding custom events)

    - by Renso
    Goal: Use the jqGrid to enable selection of rows with a checkbox - which is easy to set up in the jqGrid - while only allowing a single row to be selectable at a time, and adding events based on whether the row was selected or de-selected.

    Environment: jQuery 1.4.4; jqGrid 3.4.4a

    Issue: The jqGrid does not support an option to restrict the multi-select to only allow a single selection. You may ask: why bother with the multi-select checkbox function if you only want to allow the selection of a single row? Good question. As an example, you may want to reserve the selection of a row itself to trigger one kind of event, and use the checkbox multi-select to handle a different kind of event; in other words, when I select the row I want something entirely different to happen than when I check off the checkbox for that row. Also, the setSelection method of the jqGrid is a toggle and has no support for determining whether the checkbox has already been selected or not, so it will simply act as a switch - which it is designed to do - but with no way out of the box to only check the box (as in, not de-select) rather than toggle. Furthermore, getGridParam('selrow') does not indicate whether the row was selected or de-selected, which seems a bit strange and is the main reason for this blog post.

    Solution: How this will act: when you check off a multi-select checkbox in the grid and then select another row by checking off that row's multi-select checkbox - I'm not talking here about clicking on the row, but about using the grid's multi-select checkbox - it will de-select the previous selection so that you are always left with only a single selection. Furthermore, once you select or de-select a multi-select checkbox, an event fires that is determined by whether the row was selected or de-selected, not merely clicked on.

    Implementation (this of course is only a partial code snippet):

        multiselect: true,
        multiboxonly: true,
        onSelectRow: function (rowId) {
            // 'item' is the jqGrid's selector; 'lastsel' is declared in an enclosing scope.
            var gridSelRow = $(item).getGridParam('selrow');
            // All rows whose multi-select checkbox is currently checked.
            var s = $(item).getGridParam('selarrrow');
            if (!s || !s[0]) {
                // Nothing is checked any more: clear everything and hide the details pane.
                $(item).resetSelection();
                $('#productLineDetails').fadeOut();
                lastsel = null;
                return;
            }
            // Was this row just checked (selected) or un-checked (de-selected)?
            var selected = $.inArray(rowId, s) != -1;
            if (selected) {
                $('#productLineDetails').show();
            }
            else {
                $('#productLineDetails').fadeOut();
            }
            if (rowId && rowId !== lastsel && selected) {
                // Populate the form from the grid, then de-select the previously checked row.
                $(item).GridToForm(gridSelRow, '#productLineDetails');
                if (lastsel) $(item).setSelection(lastsel, false);
            }
            lastsel = rowId;
        },

    In the example code above, the "item" variable holds the id (selector) of the jqGrid. The following two settings ensure that the jqGrid adds the new column to select rows with a checkbox and also does not allow selection by clicking on the row, forcing the user to click on the multi-select checkbox to select the row:

        multiselect: true,
        multiboxonly: true,

    Unfortunately the getGridParam('selrow') call only returns the row whose checkbox was clicked, NOT whether it was selected or de-selected, but it retrieves the row id, which is what we need. The following piece gets all rows that have been selected so far, as in have a checked multi-select checkbox:

        var s = $(item).getGridParam('selarrrow');

    Now determine whether the checkbox the user just clicked was selected or de-selected:

        var selected = $.inArray(rowId, s) != -1;

    If it was selected, show the "#productLineDetails" container; if not, hide it away. The final instruction populates a form with the grid data using the built-in GridToForm method (just mentioned here as an example) ONLY if the row has been selected and NOT de-selected, and, more importantly, de-selects any other multi-select checkbox that may have been selected:

        if (rowId && rowId !== lastsel && selected) {
            $(item).GridToForm(gridSelRow, '#productLineDetails');
            if (lastsel) $(item).setSelection(lastsel, false);
        }


  • 2013 Predictions for Retail

    - by David Dorf
    It's that time of year to roll out the predictions for next year. I can't say I've really nailed it in the past, but feel free to look back at my 2012, 2011, and 2010 predictions. I'm not expecting anything earth-shattering this year; just the continued maturation of several technologies that are finally taking hold.

    1. Next-day delivery -- Amazon finally decided it wasn't worth fighting state taxes and instead decided to place distribution centers everywhere so they can potentially offer next-day deliveries. Not to be outdone, Walmart is looking to leverage its huge physical presence to offer the same. Clubs like ShopRunner are pushing delivery barriers as well, so the norm is shifting to free shipping in a few days or relatively cheap shipping overnight. Retailers need to be thinking about how to ship from physical stores.

    2. Bring your own device -- Earlier this year Intuit bought AisleBuyer, a mobile self-checkout start-up, at least somewhat validating the BYOD approach. Grocery stores, especially in Europe, have been supporting in-aisle self-scanning for a while, and I'm betting it will find a home in certain verticals in the US too. There's also the BYOD concept for employees. Some retailers are considering issuing mobile devices at hiring, alongside the shirt and name-tag. Employees become responsible for the hardware until they leave.

    3. TV shopping -- Will Apple finally release a TV product in 2013? Who knows? But the industry isn't standing still. Companies like QVC and HSN are already successfully combining the TV and online experiences for shopping. Comcast is partnering with TiVo to allow viewers to interact with ads, with PayPal handling payment. This will be a slow maturation, but expect TVs to get smarter and eventually become a new selling channel (pun intended) for retailers.

    4. Privacy backlash -- It only takes one big incident to stir the public, and I'm betting we have one in 2013. Facebook, Google, or Apple will test the boundaries of what the public is willing to accept. It could involve a retailer using geo-location technology, or possibly video analytics. And as is always the case, the offender will apologize, temporarily remove the technology, and wait 2-3 years for it to be generally accepted. Privacy is a moving target.

    5. More NFC -- I've come to the conclusion that adoption of any banking technology is going to be slow. It was slow for credit cards, ATMs, and online billpay, so why should it be any different for NFC? Maybe, just maybe, the iPhone 5S will have an NFC chip, but we're not going to see mainstream uptake for years. Next year we'll continue to see incremental improvements from Isis, Google, and PayPal and a plethora of new startups, but don't toss your magstripe cards just yet.

    6. In-store location -- The technologies for tracking people inside stores are really improving. Retailers can track people using video cameras, infrared, and the WiFi radios in mobile phones. We're getting closer to the point where accuracy could be down to a shelf-facing, which will help retailers understand how people shop, where they spend time, and what displays attract them. Expect CPG companies to get involved and partner with retailers, since the data benefits both parties. Consumers will benefit by being directed right to the products they seek. (In 2013 ARTS is forming a workteam to develop new standards in this area.)

    7. M&A -- Looking back at 2012, there were some really big deals involving IBM, Oracle, JDA, and NCR, and I expect that trend will likely continue as vendors add assets to bolster their portfolios. Many retailers are due for an IT transformation to support anywhere, anytime shoppers, and one-stop vendors can minimize complexity and costs.

    Predictions from other sources: Independent Retailer, Stores Magazine, IDC Insights, Mobile Commerce Daily


  • ArchBeat Link-o-Rama for 2012-06-26

    - by Bob Rhubart
    Software Architecture for High Availability in the Cloud | Brian Jimerson
    Brian Jimerson looks at the paradigm shifts from machine-based architectures to cloud-based architectures when designing fault tolerance, and how enterprise applications need to be engineered to ensure the highest level of availability in the cloud.

    SOA, Cloud & Service Technology Symposium 2012 London - Special Oracle Discount
    Registration is now open for one of the premier SOA, Cloud, and Service Technology events. Once again, the Oracle community is well represented in the session schedule. And now you can save on registration with a special Oracle discount code.

    Progress 4GL and DB to Oracle and cloud | Tom Laszewski
    "Getting from client/server based 4GLs and databases where the 4GL is tightly linked to the database to Oracle and the cloud is not easy," says cloud migration expert Tom Laszewski. "The least risky and expensive option...is to use the Progress OpenEdge DataServer for Oracle."

    Embrace 'big data' now or fall behind the competition, analyst warns | TechTarget
    TechTarget's Mark Brunelli's story says, in essence, that Big Data is not your father's Business Intelligence.

    Calculating the Size (in Bytes and MB) of an Oracle Coherence Cache | Ricardo Ferreira
    Ferreira illustrates a programmatic way to use the Oracle Coherence API to calculate the total size of a specific cache that resides in the data grid.

    WebCenter Portal Tutorial Part 7: Integrating Discussions and Link service | Yannick Ongena
    The latest chapter in Oracle ACE Yannick Ongena's ongoing series.

    How to Setup JDeveloper workspace for ADF Fusion Applications to run Business Component Tester? | Jack Desai
    Helpful technical tips from yet another member of the Oracle Fusion Middleware Architecture Team.

    Big Data for the Enterprise; Software Architecture for High Availability in the Cloud; Why Cloud Computing is a Paradigm Shift - And Why It Isn't
    This week on the OTN Solution Architect Homepage, along with an updated events list and this week's list of selected community blog posts.

    Worst Practices for Big Data | Dain Hansen
    Dain Hansen shares some insight on what NOT to do if you want to capitalize on Big Data.

    Free Virtual Developer Day - Oracle Fusion Development | Grant Ronald
    "The online conference will include seminars, hands-on lab and live chats with our technical staff including me!" says Grant Ronald. "And the best bit, it doesn't cost you a single penny. It's free and available right on your desktop."

    Penguin is Getting Ready for Oracle OpenWorld 2012 | Zeynep Koch
    Linux fan? Check out Zeynep Koch's post for a list of Linux-based sessions at Oracle OpenWorld 2012 in San Francisco.

    Amazon Web Services (AWS) Autoscaling | Frank Munz
    "Autoscaling on AWS can only be configured with lengthy commands from the command line but not from the web based AWS console," says Frank Munz. "Getting all the parameters right can be tricky." He demonstrates one easy example in this video.

    Oracle Fusion Applications Design Patterns Now Available For Developers | Ultan O'Broin
    "These Oracle Fusion Applications UX Design Patterns, or blueprints, enable Oracle applications developers and system implementers everywhere to leverage professional usability insight," says O'Broin.

    How Much Data Is Created Every Minute? [INFOGRAPHIC] | Mashable
    Explaining what the "Big" in Big Data really means -- and it's more than a little mind-boggling.

    Thought for the Day
    "Real, though miniature, Turing Tests are happening all the time, every day, whenever a person puts up with stupid computer software." — Jaron Lanier
    Source: SoftwareQuotes.com


  • Nest reinvents smoke detectors. Introduces smart and talking smoke detector that keeps quiet when you wave

    - by Gopinath
    Nest, the leading smart thermostat maker, has introduced a new smart home device today: Nest Protect, a smart, talking smoke & carbon monoxide detector that quiets down when you wave your hand.

    Less annoyance and more intelligence
    Smoke detectors have been around for hundreds of years and play a major role in providing safety from fire accidents at home. But the technology of these devices is stale, and there has been no major innovation for the past several years. With the introduction of Nest Protect, the landscape of smoke detectors is all set to change, just like how the Nest thermostat redefined the industry two years ago. Nest Protect is internet enabled and equipped with motion- and smoke-detection sensors, so that when it starts beeping you can silence it by waving your hand instead of doing circus feats to turn off the alarm. Everyone who cooks in a home equipped with a smoke detector knows how annoying it is to turn off a sensitive smoke detector that goes off quite often. Apart from addressing the annoyances of a regular smoke detector, Nest Protect has talking capabilities. It can alert users with clear & actionable instructions when it detects a danger. Instead of harsh beeps it actually speaks to you, so you know what is happening. It will tell you what smoke it has detected and in which room it was detected. Multiple Nest Protects installed in a home can communicate with each other. Let's say there is smoke in the bedroom: the Nest Protect installed in the bedroom shares this information with all the Nest Protects installed in the home, and your kitchen device can alert you that there is smoke in the bedroom.

    There is an app for that
    The internet-enabled Nest Protect has an app to view its status and various alerts. When the Protect is running on low battery, it alerts you to replace it soon. If there is smoke at home and you are away, you will get message alerts. The app works on all major smartphones as well as tablets.

    Auto shut-down of gas furnaces/heaters on smoke
    Apart from forming a network with the other Nest Protect devices installed at home, they can also communicate with a Nest Thermostat if one is installed. When carbon monoxide is detected, it can shut off your gas furnace automatically. Also, with the help of its motion detectors, it improves the Nest Thermostat's auto-away functionality.

    It looks elegant, and costs a lot more than a regular smoke detector
    Just like the Nest Thermostat, Nest Protect is elegant and adorable. You just fall in love with it the moment you see it. It's another masterpiece from the designer of Apple's iPod. All is good with the Nest Protect, except the price!! It costs a whopping $129, which is almost 4 times more expensive than the best-selling conventional smoke detectors available at $30. A single-bedroom apartment would require at least 3 detectors, and it costs around $390 to install Nest Protects, compared to the $90 required for conventional smoke detectors. Though the Nest Thermostat is an expensive one compared to conventional thermostats, it offered great savings through its intelligent auto-away feature, and users were able to see returns on their investments. If Nest Protect can also provide a good return on investment, then it will be very successful.


  • How can I draw an arrow at the edge of the screen pointing to an object that is off screen?

    - by Adam Henderson
    I want to do what is described in this topic: http://www.allegro.cc/forums/print-thread/283220

    I have attempted a variety of the methods mentioned there. First I tried to use the method described by Carrus85: "Just take the ratio of the two triangle hypotenuses (doesn't matter which triangle you use for the other; I suggest point 1 and point 2 as the distance you calculate). This will give you the aspect ratio percentage of the triangle in the corner from the larger triangle. Then you simply multiply deltax by that value to get the x-coordinate offset, and deltay by that value to get the y-coordinate offset." But I could not find a way to calculate how far the object is away from the edge of the screen.

    I then tried using ray casting (which I have never done before), suggested by 23yrold3yrold: "Fire a ray from the center of the screen to the offscreen object. Calculate where on the rectangle the ray intersects. There's your coordinates." I first calculated the hypotenuse of the triangle formed by the differences in the x and y positions of the two points. I used this to create a unit vector along that line. I stepped along that vector until either the x coordinate or the y coordinate was off the screen. The two current x and y values then form the x and y of the arrow.

    Here is the code for my ray-casting method (written in C++ and Allegro 5):

        void renderArrows(Object* i)
        {
            // Centre of the off-screen object (world coordinates).
            float x1 = i->getX() + (i->getWidth() / 2);
            float y1 = i->getY() + (i->getHeight() / 2);
            // Centre of the screen (world coordinates).
            float x2 = screenCentreX;
            float y2 = screenCentreY;

            float dx = x2 - x1;
            float dy = y2 - y1;
            float hypotSquared = (dx * dx) + (dy * dy);
            float hypot = sqrt(hypotSquared);

            // Unit vector along the line between the two points.
            // Note: (unitX, unitY) points from the object towards the centre, so adding
            // it to a ray that starts at the centre steps away from the object.
            float unitX = dx / hypot;
            float unitY = dy / hypot;

            // Start the ray at the screen centre (converted to screen coordinates).
            float rayX = x2 - view->getViewportX();
            float rayY = y2 - view->getViewportY();
            float arrowX = 0;
            float arrowY = 0;

            bool posFound = false;
            while (posFound == false)
            {
                rayX += unitX;
                rayY += unitY;
                if (rayX <= 0 || rayX >= screenWidth ||
                    rayY <= 0 || rayY >= screenHeight)
                {
                    arrowX = rayX;
                    arrowY = rayY;
                    posFound = true;
                }
            }
            al_draw_bitmap(sprite, arrowX - spriteWidth, arrowY - spriteHeight, 0);
        }

    This was relatively successful. Arrows are displayed in the bottom-right section of the screen when objects are located above and to the left of the screen, as if the locations where the arrows are drawn have been rotated 180 degrees around the center of the screen. I assumed this was because when I calculated the hypotenuse of the triangle, it would always be positive regardless of whether the difference in x or the difference in y is negative.

    Thinking about it, ray casting does not seem like a good way of solving the problem (due to the fact that it involves using sqrt() and a large loop). Any help finding a suitable solution would be greatly appreciated. Thanks, Adam


  • Day 5 - Tada! My Game Menu Screen Graphics

    - by dapostolov
    So, tonight I took some time to mash up some graphics for my game menu screen. My artistic talent sucks... but here goes nothing... voila, my menu screen!!

    The Menu Screen
    The screen above is displaying 4 sprites, even though it looks like maybe 7... I guess one of the first things for me to test in the future is... is it more memory efficient (and better for the frame rate) to draw one big background image, OR to paint the screen black and place each sprite in a set location? To display the 4 sprites above, I borrowed my code from yesterday... I know, tacky, but... I wanted to see it, feel it. Do you feel it? FEEL IT! (homer voice & shakes fist) Note: the menu items won't scale properly as it stands with this code; well, pretty much they do nothing except look pretty...

    Paint.NET & Google Fun
    So how did I create that image above? Well, to create the background and the 3 menu items I used Paint.NET. Basically, I scoured Google Images for: a stone doorway, a stone pillar, an old book, a wizard's hat, and... that's pretty much it! I'll let you type in those searches and see if you can locate the images I used. I know, bad developer... but I figured since I modified the images considerably, it doesn't count... well, for a personal project it shouldn't count... *shrug* Anyhow, I extracted each key asset I wanted from each image and applied lots of matting, blurring, color changes, glow effects and such. Then, using my vivid imagination, I placed / composed each of the layered assets into the mashed-up "scene" above. Pretty cool, eh? Hey, did you know the cool mist effect is actually a fire rendition in Paint.NET? I set it to black & white with opacity set next to nothing. I'm also very proud of the yellow "light" in the stone doorway. I drew that in and then applied Gaussian blur to it to give it the effect of light creeping out around the door and into the room... heheh.

    So did I achieve the dark, mysterious ritual feel as I stated in my design doc? I think I had a great stab at it! Maybe down the road I can get a real artist to crank out some quality graphics for the game... =)

    So, What's Next?
    Well, I don't have that animated brazier yet... however, I thought it would be even cooler if I could get that door pulsing with that yellow light, and it would be extremely cool to have the smoke / mist moving across the screen! Make the creative ideas stop!! (clutches head) haha! I'm having great fun working on this project =) I recommend others give something like this a try; it's really fulfilling.

    OK. Tomorrow... I think I'm going to start creating some game / menu objects as per the design doc, maybe even get a custom mouse cursor up on the screen and handle a couple of mouse events, and lastly, maybe a feature to toggle a framerate display... D.


  • Production Access Denied! Who caused this rule anyways?

    - by Matt Watson
    One of the biggest challenges for most developers is getting access to production servers. In small dev teams of fewer than about 5 people, everyone usually has access. Then you hire developer #6, he messes something up in production... and now nobody has access. That is how it always starts in small dev teams. I think just about every rule of life gets created this way: one person messes it up for the rest of us, and rules are then put in place to try and prevent it from happening again.

    Breaking the rules is in our nature. In this case it is for good cause and a necessity: to support our applications and troubleshoot problems as they arise. So how do developers typically break the rules? Some create their own mechanisms to collect log files off servers so they can see them. Expensive log management programs can collect log files, but log files alone are not enough. Centralizing where important errors are logged is common. Some lucky developers are given production server access by the IT operations team out of necessity. Wait. That's not fair to all developers and knowingly breaks the company rule! When customers complain or the system is down, the rules go out the window.

    Commonly, lead developers get production access because they are ultimately responsible for supporting the application and may be the only people who know how to fix it. The problem with giving only lead developers production access is that it doesn't scale from a support standpoint. Those key employees become the go-to people for solving application problems, but they also become a bottleneck. They end up spending up to half of their time every day helping resolve application defects, performance problems, or whatever the fire of the day is. This is actually the last thing you want your lead developers doing. They should be working on something more strategic, like major enhancements to the product. Having production access can actually be a curse if you are the guy stuck hunting down log files all day.

    Application defects are good tasks for junior developers. They can usually handle figuring out simple application problems. But nothing is worse than being a junior developer who can't figure out those problems while the backlog of them grows and grows. Some of them require production server access to verify a deployment was done correctly, verify config settings, view log files, or maybe just restart an application. Since the junior developers don't have access, they end up bugging the developers who do have access, or they track down a system admin to help. It can take hours or days to see server information that would take seconds or minutes if they had access of their own. It is very frustrating to the developer trying to solve the problem, to the system admin being forced to help, and most importantly to your customers, who are not happy about the situation. This process is terribly inefficient.

    Production database access is also important for solving application problems, but it presents a lot of risk if developers are given access. They could see data they shouldn't. They could accidentally write queries that update data or delete data, or merely select every record from every table and bring your database to its knees. Since most of the applications we create are data driven, it can be very difficult to track down application bugs without access to the production databases.

    Besides it being against the rules, why don't all developers have access? Most of the time it comes down to security, change control, lack of training, and other valid reasons. Developers have been known to tinker with different settings to try and solve a problem and, in the process, forget what they changed and make the problem worse. So it is a double-edged sword: don't give them access and fixing bugs is more difficult, or give them access and risk having more bugs or major outages created!

    Matt Watson
    Founder, CEO
    Stackify
    Agile Support for Agile Developers


  • BizTalk Server 2013 beta on Windows 8 (with Visual Studio 2012, SQL Server 2012 & ESB Toolkit 2.2)

    - by Vishal
    Hello BizTalkers,

    Finally, Microsoft has released the beta version of BizTalk Server 2010 R2, and now it's called BizTalk Server 2013. I had tried the BTS 2010 R2 CTP version on a Windows Azure VM, and I was particularly excited about the RESTful services support and ESB being fully integrated into BizTalk. I didn't get the chance to test it much, given the Azure & VM running cost associated. Anyway, I was waiting for this announcement, and I was very glad that Microsoft finally released the on-premises one. Check out what's new in BizTalk Server 2013.

    Officially, Microsoft says that BizTalk Server 2013 "beta" is not supported on Windows 8, but I was curious to try it out. Below is my installation and configuration experience.

    Virtual machine configuration:
    - VMware Workstation 9.0
    - Windows 8 Enterprise x64
    - SQL Server 2012
    - Visual Studio 2012 Ultimate
    - BizTalk Server 2013 beta
    - Windows 8 machine name: WIN8
    - Local administrator account name: Admin

    First I installed Windows 8 Enterprise on VMware Workstation 9.0 and updated the OS. Since Windows 8 was itself a new release, luckily there weren't many updates to perform. Next I installed Visual Studio 2012 Ultimate, which was a straightforward installation. Then I installed SQL Server 2012: select "New SQL Server stand-alone installation" and follow the steps as shown in the screenshot below. Once the installation finished, I fired up SQL Server Management Studio and tried connecting. When Management Studio initially opened, I wondered why Visual Studio 2010 had opened when I tried opening SQL Management Studio; well, they made the interface look alike. Cool, I like it.

    Next was the real deal: download BizTalk Server 2013 and unzip it to a particular folder. Double-click Setup.exe and follow the steps in the screenshots to install Microsoft BizTalk Server 2013 beta. I selected all the normal artifacts and also all the artifacts under Additional Software. So far so good. Next, launch BizTalk Server Configuration; I used the Basic configuration, as shown in the screenshot below. I didn't expect this, but "voila": successful on the first shot. Still, I suspected something might have gone wrong, so I fired up the BizTalk Server Administration Console, and that too came up just fine. Still not convinced, I created a simple messaging application, message in -> message out, and that too worked just fine. Finally I was convinced that BizTalk Server 2013 did work on Windows 8.

    The next step was to install the ESB Toolkit 2.2, which is now integrated with BizTalk Server and does not come as a separate standalone installation file. Again run the BizTalk Setup.exe from the unzipped folder and install Microsoft ESB Toolkit. Next, unlike the other components, the ESB Configuration does not open up by itself, so go to "Windows 8's so-called Start" (I could not resist writing this) and open the ESB Toolkit Configuration wizard. The screenshots below show the configuration I used; you can also find it on MSDN here. Finally, after the ESB configuration, I opened the Admin Console and checked the two ESB applications deployed. Cool.

    This concludes my experience of installing and configuring BizTalk Server 2013 beta & ESB Toolkit 2.2 on Windows 8. I will try to keep writing about BizTalk Server 2013 and its use with RESTful services etc.

    Thanks,
    Vishal Mody


  • ArchBeat Link-o-Rama for 2012-06-29

    - by Bob Rhubart
    Backward-compatible vs. forward-compatible: a tale of two clouds | William Vambenepe
    "There is the Cloud that provides value by requiring as few changes as possible. And there is the Cloud that provides value by raising the abstraction and operation level," says William Vambenepe. "The backward-compatible Cloud versus the forward-compatible Cloud." Vambenepe was a panelist on the recent ArchBeat podcast Public, Private, and Hybrid Clouds.

    Andrejus Baranovskis's Blog: ADF 11g PS5 Application with Customized BPM Worklist Task Flow (MDS Seeded Customization)
    Oracle ACE Director Andrejus Baranovskis investigates "how you can customize a standard BPM Task Flow through MDS Seeded customization."

    Oracle OpenWorld 2012 Music Festival
    If, after a day spent in sessions at Oracle OpenWorld, you want nothing more than to head back to your hotel for a quiet evening spent responding to email, please ignore the rest of this message. Because every night from Sept 30 to Oct 4 the streets of San Francisco will pulsate with music from a vast array of bands representing more musical styles than a single human brain can comprehend. It's the first ever Oracle Music Festival, baby, 7:00pm to 1:00am every night. Are those emails that important...?

    Resource Kit: Oracle Exadata - includes demos, videos, product datasheets, and technical white papers
    This free resource kit includes several customer case study videos, two 3D product demos, several product datasheets, and three technical architecture white papers. Registration is required for those who don't already have a free Oracle.com membership account.

    Some execs contemplate making 'Bring Your Own Device' mandatory | ZDNet
    "Companies and agencies are recognizing that individual employees are doing a better job of handling and managing their devices than their harried and overworked IT departments - who need to focus on bigger priorities, such as analytics and cloud," says ZDNet SOA blogger Joe McKendrick.

    Podcast Show Notes: Public, Private, and Hybrid Clouds
    All three parts of this discussion are now available. Featuring a panel of leading Oracle cloud computing experts, including Dr. James Baty, Mark T. Nelson, Ajay Srivastava, and William Vambenepe, the discussion covers an overview of the various flavors of cloud computing, the importance of standards, why cloud computing is a paradigm shift (and why it isn't), and advice on what architects need to know to take advantage of the cloud. And for those who prefer reading to listening, a complete transcript is also available.

    Amazon AMIs and Oracle VM templates (Cloud Migrations)
    Cloud migration expert Tom Laszewski shares an objective comparison of these two resources.

    IOUC Blogs: Read the latest news on the global user group community - June 2012!
    The June 2012 edition of "Are You a Member Yet?", the quarterly newsletter about Oracle user group communities around the world.

    Webcast: Introducing Identity Management 11g R2 - July 19
    Date: Thursday, July 19, 2012. Time: 10am PT / 1pm ET. Please join Oracle and customer executives for the launch of Oracle Identity Management 11g R2, the breakthrough technology that dramatically expands the reach of identity management to cloud and mobile environments.

    Thought for the Day
    "The most important single aspect of software development is to be clear about what you are trying to build." — Bjarne Stroustrup
    Source: SoftwareQuotes.com


  • SQL SERVER – Last Two Days to Get FREE Book – Joes 2 Pros Certification 70-433

    - by pinaldave
    Earlier this week we announced that we would be giving away a FREE SQL Wait Stats book to everybody who gets the SQL Server Joes 2 Pros Combo Kit. We had a fantastic, overwhelming response to the offer. We knew there would be a great response, but we want to honestly say thank you to all of you for making it happen. Rick and I want to express our special thanks to all of you who are reading our books.

    The offer is still on, and there are two more days to avail of it. To everybody who buys our best-selling combo kit, we will send our other most popular book, SQL Wait Stats. Please read all the details of the offer here.

    The books are great resources for anyone who wants to learn SQL Server from the fundamentals and eventually go down the certification path of 70-433. Exam 70-433 covers the following important subjects, and the book teaches them from the fundamentals. Whether or not you are taking the exam, this book is for every SQL developer who wants to learn the subject from the ground up:

    - Create and alter tables.
    - Create and alter views.
    - Create and alter indexes.
    - Create and modify constraints.
    - Implement data types.
    - Implement partitioning solutions.
    - Create and alter stored procedures.
    - Create and alter user-defined functions (UDFs).
    - Create and alter DML triggers.
    - Create and alter DDL triggers.
    - Create and deploy CLR-based objects.
    - Implement error handling.
    - Manage transactions.
    - Query data by using SELECT statements.
    - Modify data by using INSERT, UPDATE, and DELETE statements.
    - Return data by using the OUTPUT clause.
    - Modify data by using MERGE statements.
    - Implement aggregate queries.
    - Combine datasets (INTERSECT, EXCEPT).
    - Implement subqueries.
    - Implement CTE (common table expression) queries.
    - Apply ranking functions.
    - Control execution plans.
    - Manage international considerations.
    - Integrate Database Mail.
    - Implement full-text search.
    - Implement scripts by using Windows PowerShell and SQL Server Management Objects (SMOs).
    - Implement Service Broker solutions.
    - Track data changes (data capture).
    - Retrieve relational data as XML.
    - Transform XML data into relational data.
    - Manage XML data.
    - Capture execution plans.
    - Collect output from the Database Engine Tuning Advisor.
    - Collect information from system metadata.

    (A small sketch combining two of these topics, CTEs and ranking functions, appears at the end of this post.)

    Availability of Book: USA - Amazon | India - Flipkart | Indiaplaza

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
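    As a small taste of the material, here is a sketch combining two of the topics listed above, CTEs and ranking functions (the table dbo.Orders and its columns are hypothetical):

        -- Find each customer's single largest order.
        WITH RankedOrders AS
        (
            SELECT CustomerId, OrderId, OrderTotal,
                   ROW_NUMBER() OVER (PARTITION BY CustomerId
                                      ORDER BY OrderTotal DESC) AS rn
            FROM dbo.Orders
        )
        SELECT CustomerId, OrderId, OrderTotal
        FROM RankedOrders
        WHERE rn = 1;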


  • October in Review

    - by Richard Bingham
    With OpenWorld over, October was time to get back to serious work for everyone, including the Fusion Applications Developer Relations team. Don't forget the OpenWorld content is still available, including presentation downloads, for a limited period of time, so be sure to grab anything you found useful or take another scan for anything you might have missed. Of all the announcements, the continued evolution of the Oracle Cloud services for extending and integrating with Fusion Applications is increasing in popularity, and certainly the Cloud Marketplace is something we're becoming involved in. More details to follow.

    Fusion Concepts
    Last week Vik from our team started the new "Fusion Concepts" series of articles, providing those new to Fusion Applications with an explanation of the architectural basics, with the aim of reducing the learning curve and laying the platform for more efficient and effective development. The series began with an insightful first post on the different schemas that exist in the Fusion Applications database. Look out for upcoming posts on multi-lingual entities, profile options, look-ups and more.

    New Learning Resources
    Our YouTube channel continued to expand with more 'how to' videos on using Page Composer, extending the Simplified UI (aka FUSE), and integrating BI reports and analytics. Also, the Oracle Learning Library is now well established as a central resource for knowledge, with thousands of tutorials, videos, and documents. Of particular note are the great new extensibility-related videos added by the CRM Product Management team, including more on the ever-expanding capabilities of Application Composer. To see some examples of these, search using the keyword 'customization' or the product 'Sales Cloud'. Finally, on learning resources, as Oliver mentioned, the Oracle Press book on Fusion Applications Customization and Extensibility is now available for pre-order on Amazon (due out 1st Jan).

    Out And About
    October also saw us attend the annual Apps Conference held by the UK Oracle User Group in London. Interestingly, there was an Applications Transformation stream of sessions and content that included Fusion Applications, with all the latest in the Oracle Applications evolution, as always focused around the three tenets of social, mobile, and cloud. Read more in Richard's post-event write-up.

    Other teams around Oracle have also been busy. Angelo from the Platform Technical Services group has done quite a bit of work using web services with Fusion SaaS and has published many interesting findings on his blog. It's definitely recommended reading if you are working on any related integration projects. The middleware-for-applications group has built a new tool called "AppAdvantage", offering an online assessment of your use of Fusion Middleware technologies with Oracle Applications. As the popularity of integrating cloud applications with on-premises systems continues to grow, leveraging existing middleware technologies (and licenses) to support the integration solution is likely to be of paramount importance. Similarly, the "Build Enterprise Application Extensions with Ease" section of the related webpage has AppsUX director Killian Evers speaking about customization using the composer tools. Both are useful resources for those just getting started with a move to Fusion Applications.

    The Oracle A-Team, specialists in middleware technical architecture, always publish superb content via their 'chronicles' site, now with a substantial amount specifically related to Fusion Applications. Click on the Fusion Applications menu at the top right of their homepage to see more. Of particular note last month was an article on customizing the timeout pop-up message shown to inactive users, providing design-time insight and easy-to-follow steps. Finally, if you're looking at using Oracle Middleware and Cloud to tailor and extend your applications, you may also be interested in this new blog post on the roadmap for Oracle SOA and the latest on-demand Cloud Development webcast.

    Read the article

  • How to display a hierarchical skill tree in php

    - by user3587554
    If I have skill data set up in a tree format (where earlier skills are prerequisites for later ones), how would I display it as a tree, using PHP? The parent would be on top and have 3 children. Each of these children can then have one more child, so its parent would be directly above it. I'm having trouble figuring out how to add the root element in the middle of the top div, and the child of the children below each child of the root. I'm not looking for code, but an explanation of how to do it. My data in array form is this:

    Array
    (
        [1] => Array
            (
                [id] => 1
                [title] => Jutsu
                [description] => Skill that makes you awesomer at using ninjutsu
                [tiers] => 1
                [prereq] =>
                [image] => images/skills/jutsu.png
                [children] => Array
                    (
                        [2] => Array
                            (
                                [id] => 2
                                [title] => fireball
                                [description] => Increase your damage with fire jutsu and weapons
                                [tiers] => 5
                                [prereq] => 1
                                [image] => images/skills/fireball.png
                                [children] => Array
                                    (
                                        [5] => Array
                                            (
                                                [id] => 5
                                                [title] => pin point
                                                [description] => Increases jutsu accuracy
                                                [tiers] => 5
                                                [prereq] => 2
                                                [image] => images/skills/pinpoint.png
                                            )
                                    )
                            )
                        [3] => Array
                            (
                                [id] => 3
                                [title] => synergy
                                [description] => Reduce the amount of chakra needed to use ninjutsu
                                [tiers] => 1
                                [prereq] => 1
                                [image] => images/skills/synergy.png
                            )
                        [4] => Array
                            (
                                [id] => 4
                                [title] => ebb & flow
                                [description] => Increase the damage of water jutsu, water weapons, and reduce the damage of jutsu and weapons that use water element
                                [tiers] => 5
                                [prereq] => 1
                                [image] => images/skills/ebbandflow.png
                                [children] => Array
                                    (
                                        [6] => Array
                                            (
                                                [id] => 6
                                                [title] => IQ
                                                [description] => Decrease the time it takes to learn a jutsu
                                                [tiers] => 5
                                                [prereq] => 4
                                                [image] => images/skills/iq.png
                                            )
                                    )
                            )
                    )
            )
    )

    An example would be this demo image minus the hover stuff.

    Read the article

  • How views are changing in future versions of SQL

    - by Rob Farley
    April is here, and this weekend, SQL v11.0 (previously known as Denali, now known as SQL Server 2012) reaches general availability. And so I thought I'd share some news about what's coming next. I didn't hear this at the MVP Summit earlier this year (where there was lots of NDA information given, but I didn't go), so I think I'm free to share it. I've written before about CTEs being query-scoped views. Well, the actual story goes a bit further, and will continue to develop in future versions. A CTE is like a "temporary temporary view", scoped to a single query. Given that globally-scoped temporary objects use a two-hash naming style, and session-scoped (or 'local') temporary objects a one-hash naming style, this query-scoped temporary object uses a cunning zero-hash naming style. We see this implied in Books Online in the CREATE TABLE page, but as we know, temporary views are not yet supported in SQL Server. However, in a breakaway from ANSI-SQL, Microsoft is moving towards consistency with their naming. We know that a CTE is a "common table expression" – this is proving to be more strategic than you may have appreciated. Within the Microsoft product group, the term "Table Expression" is far more widely used than just for CTEs. Anything that can be used in a FROM clause is referred to as a Table Expression, so long as it doesn't actually store data (which would make it a Table, rather than a Table Expression). You can see this is not just restricted to the product group by doing an internet search for how the term is used without 'common'. In the past, Books Online has referred to a view as a "virtual table" (but notice that there is no SQL 2012 version of this page). However, it was generally decided that "virtual table" was a poor name because it wasn't completely accurate, and it's typically accepted that mixing virtualisation and SQL is frowned upon. The page I linked to says "or stored query", which is slightly better, but when the SQL 2012 version of that page is actually published, the line will be changed to read: "A view is a stored table expression (STE)". This change will be the first of many. During the SQL 2012 R2 release, the keyword VIEW will become deprecated (this will be SQL v11 SP1.5). Three versions later, in SQL 14.5, you will need to be in compatibility mode 140 to allow "CREATE VIEW" to work. Also consistent with Microsoft's deprecation policy, the execution of any query that refers to an object created as a view (rather than with the new "CREATE STE") will cause a Deprecation Event to fire. This will all be in preparation for the introduction of Single-Column Table Expressions (to be introduced in SQL 17.3 SP6), which will finally shut up those people waiting for a decent implementation of Inline Scalar Functions. And of course, CTEs are "Common" because the Table Expression definition needs to be repeated over and over throughout a stored procedure. ...or so I think I heard at some point. Oh, and congratulations to all the new MVPs on this April 1st. @rob_farley
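
    The April 1st riff aside, the hash-count naming observation is real. A minimal T-SQL sketch of the three scopes (the object names here are ours, purely illustrative):

        -- Global temporary table: two hashes, visible to all sessions until dropped
        CREATE TABLE ##GlobalScratch (Id INT);

        -- Local temporary table: one hash, visible only to the current session
        CREATE TABLE #LocalScratch (Id INT);

        -- CTE: zero hashes, scoped to the single statement that follows it
        WITH QueryScratch AS
        (
            SELECT 1 AS Id
        )
        SELECT Id FROM QueryScratch;

        -- The CTE no longer exists here; referencing QueryScratch again would fail.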

    Read the article

  • Open a File Browser From Your Current Command Prompt/Terminal Directory

    - by The Geek
    Ever been doing some work at the command line when you realized… it would be a lot easier if you could just use the mouse for this task? One command later, you'll have a window open at the same place you're at. This same tip works in more than one operating system, so we'll detail how to do it in every way we know how.

    Open a File Browser in Windows
    We've actually covered this before when we told you how to open an Explorer window from the command prompt's current directory, but we'll briefly review: just type the following command into your command prompt:

        explorer .

    Note: you could actually just type "start ." instead. You'll then see a file browsing window set to the same directory you were previously at. And yes, this screenshot is from Vista, but it works the same in every version of Windows. If that wasn't good enough, you should really read how you can navigate in the File Open/Save dialogs with just the keyboard – now that's a Stupid Geek Trick!

    Open a File Browser in Linux
    For this exercise, we're going to assume that you're using GNOME under a Linux flavor like Ubuntu, because that's the most common. From your terminal window, just type in the following command:

        nautilus .

    And the next thing you know, you'll have a file browser window open at the current location. You'll see some type of error message at the prompt, but you can pretty much ignore that. You can also use "gnome-open ." if you want.

    Open Finder in Mac OS X
    All the Mac computers in this office are running Linux, so we haven't had a chance to verify, but you should be able to use the following command on OS X to open Finder in the current terminal location:

        open .

    Open Dolphin on Linux KDE4

        dolphin .

    Got any extra tips to help out your fellow readers? How do you do the same thing in KDE3? What about OS X? Leave your savvy advice in the comments, and maybe we'll update the article. Or not. Either way, it'll help somebody!

    Read the article

  • Big Data – ClustrixDB – Extreme Scale SQL Database with Real-time Analytics, Releases Software Download – NewSQL

    - by Pinal Dave
    There are so many things to learn and so little time to learn them in, so we need to be selective about what we learn. I believe I know quite a lot about SQL, but I still do not know what lies around SQL. I have recently started to learn about NewSQL. If you wonder what NewSQL is, I encourage you to read my blog post Big Data – Buzz Words: What is NewSQL – Day 10 of 21. NewSQL databases are quickly becoming popular – providing the scale of NoSQL along with SQL features and transactions. As part of learning about NewSQL databases, I have recently started to learn about ClustrixDB. ClustrixDB has been the most mature NewSQL database, used by some of the largest internet sites in the world for over 3 years, with extensive SQL support. In addition to scale, it provides fast real-time analytics by bringing massively parallel processing (MPP), previously available only in warehousing databases, to the transactional database. The reason I am particularly intrigued by ClustrixDB is their recent announcement on Oct 31. ClustrixDB was previously only available as an appliance, but with the software release on Oct 31, everyone can use it. It is now available as forever-free for up to 12 cores with community support, and there is a 45-day trial for unlimited cluster sizes. With the forever-free offer, I am indeed interested in ClustrixDB now. I know that a few of the leading eCommerce sites in the world use it for their transactional database. Here are a few of the details I have quickly noted about ClustrixDB. It allows users to:

    - Scale by simply adding nodes to the cluster with a single command
    - Run billions of transactions a day
    - Run fast real-time analytics
    - Achieve high availability with recovery from node failure
    - Manage itself
    - Easily migrate from MySQL, as it is nearly plug-and-play compatible; use MySQL drivers, tools and replication

    While I was going through the documentation, I realized that ClustrixDB also has extensive support for SQL features, including complex queries involving joins on a dozen or more tables, aggregates, sorts, and sub-queries. It also supports stored procedures, triggers, foreign keys, partitioned and temporary tables, and fully online schema changes. It is indeed a very mature product and SQL solution. As Clustrix sounds like a very promising solution, I decided to dig a bit deeper to understand who the current customers of Clustrix are, as they have existed in the industry for quite a few years. Their client list is indeed very interesting, and here is my quick research about them.

    - Twoo.com – Europe's largest social discovery (dating) site runs 4.4 billion transactions a day, with table sizes over a terabyte, on a 168-core cluster.
    - EngageBDR – Top 3 in the online advertising category, uses ClustrixDB to serve 6.9 billion ads a day through a real-time bidding platform. Their reports went from 4 hours to 15 seconds.
    - NoMoreRack – Top 2 fastest growing e-commerce company in the US, used ClustrixDB for high availability and fast growth through the Amazon cloud.
    - MakeMyTrip – India's leading travel site runs on ClustrixDB with two clusters running as multi-master in Chennai and Bangalore.

    Many enterprises such as AOL, CSC, Rakuten, and Symantec use ClustrixDB when their applications need scale. I must accept that I am impressed with the information I have learned so far, and now is the time to get some hands-on experience with their product. I want to learn this technology so that in the future, when the topic is NewSQL, I know what I am talking about.
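
    To make the "extensive SQL support" claim concrete, here is the general style of query described above: a multi-table join with an aggregate, a sort, and a sub-query. This is a sketch using hypothetical tables, written in plain MySQL-compatible SQL rather than anything ClustrixDB-specific; per the compatibility claim, a query like this should run unchanged:

        -- Lifetime value per customer in one region, using a join across
        -- three tables, an aggregate, a sub-query, and a sort
        SELECT c.customer_name,
               SUM(o.order_total) AS lifetime_value
        FROM customers AS c
        INNER JOIN orders AS o ON o.customer_id = c.customer_id
        INNER JOIN regions AS r ON r.region_id = c.region_id
        WHERE r.region_name = 'APAC'
          AND o.order_date >= (SELECT MIN(order_date) FROM orders)
        GROUP BY c.customer_name
        ORDER BY lifetime_value DESC;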
Read more on why Clustrix believes ClustrixDB might be the right database for you. Download ClustrixDB with me today and install it on your machine, so that when we discuss its technical aspects in the future, we are all on the same page. The software can be downloaded here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Big Data, MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: Clustrix

    Read the article

  • The Raspberry Pi JavaFX In-Car System (Part 3)

    - by speakjava
    Having established communication between a laptop and the ELM327, it's now time to bring in the Raspberry Pi. One of the nice things about the Raspberry Pi is the simplicity of its power supply.  All we need is 5V at about 700mA, which in a car is as simple as using a USB cigarette lighter adapter (which is handily rated at 1A).  My car has two cigarette lighter sockets (despite being specified with the non-smoking package and therefore no actual cigarette lighter): one in the centre console and one in the rear load area.  This was convenient, as my idea is to mount the Raspberry Pi in the back to minimise the disruption to the very clean design of the Audi interior.

    The first task was to get the Raspberry Pi to communicate using Wi-Fi with the ELM327.  Initially I tried a cheap Wi-Fi dongle from Amazon, but I could not get this working with my home Wi-Fi network since it just would not handle the WPA security no matter what I did.  I upgraded to a Wi Pi from Farnell and this works very well.

    The ELM327 uses ad-hoc networking, which is point-to-point communication.  Rather than using a wireless router, each connecting device has its own assigned IP address (which needs to be on the same subnet) and uses the same ESSID.  The ELM327's settings are fixed: an IP address of 192.168.0.10 and the ESSID "Wifi327".  To configure Raspbian Linux to use these settings we need to modify the /etc/network/interfaces file.  After some searching of the web and a few false starts, here are the settings I came up with:

        auto lo eth0 wlan0

        iface lo inet loopback

        # Wired interface with a static address on the home network
        iface eth0 inet static
            address 10.0.0.13
            gateway 10.0.0.254
            netmask 255.255.255.0

        # Wireless interface joining the ELM327's ad-hoc network
        iface wlan0 inet static
            address 192.168.0.1
            netmask 255.255.255.0
            wireless-essid Wifi327
            wireless-mode ad-hoc

    After rebooting, iwconfig wlan0 reported that the Wi-Fi settings were correct.  However, ifconfig showed no assigned IP address.  If I configured the IP address manually using ifconfig wlan0 192.168.0.1 netmask 255.255.255.0 then everything was fine and I was able to happily ping the IP address of the ELM327.  I tried numerous variations on the interfaces file, but nothing I did would get me an IP address on wlan0 when the machine booted.  Eventually I decided that this was a pointless thing to spend more time on, so I put a script in /etc/init.d and registered it with update-rc.d.  All the script does (currently) is execute the ifconfig line, and now, having installed the telnet package, I am able to telnet to the ELM327 via the Raspberry Pi.  Not nice, but it works.

    Here's a picture of the Raspberry Pi in the car for testing. In the next part we'll look at running the Java code on the Raspberry Pi to collect data from the car systems.

    Read the article

  • Asynchrony in C# 5 (Part I)

    - by javarg
    I've been playing around with the new Async CTP preview available for download from Microsoft. It's amazing how language trends are influencing the evolution of Microsoft's development platform. More effort is being put in at the language level today than in previous versions of .NET. In this post series I'll review some major features contained in this release: asynchronous functions, TPL Dataflow, and the task-based asynchronous pattern.

    Part I: Asynchronous Functions
    Asynchronous functions are a means of expressing asynchronous operations. This kind of function must return void or Task/Task<T> (functions returning void let us implement fire-and-forget asynchronous operations). The two new keywords introduced are async and await.

    async: marks a function as asynchronous, indicating that some part of its execution may take place some time later (after the method call has returned). Thus, all async functions must include some kind of asynchronous operation. This keyword on its own does not make a function asynchronous though; its nature depends on its implementation.

    await: allows us to define operations inside a function that will be awaited for continuation (more on this later).

    Async/Await sample:

        async void ShowDateTimeAsync()
        {
            while (true)
            {
                // Ask the external service for the current time, without blocking
                var client = new ServiceReference1.Service1Client();
                var dt = await client.GetDateTimeTaskAsync();
                Console.WriteLine("Current DateTime is: {0}", dt);
                // Wait a second before refreshing (TaskEx is the CTP helper type)
                await TaskEx.Delay(1000);
            }
        }

    The previous sample is a typical usage scenario for these new features. Suppose we query some external web service to get data (in this case the current DateTime) and we do so at regular intervals in order to refresh the user's UI. Note the async and await keywords working together. The ShowDateTimeAsync method indicates its asynchronous nature to the caller using the keyword async (meaning it may complete after returning control to its caller). The await keyword indicates that the flow control of the method will continue executing asynchronously after client.GetDateTimeTaskAsync returns. The latter is the most important thing to understand about how this actually works: the flow control of the method will be reconstructed after any awaited asynchronous operation completes. This reconstruction of flow control is the real magic behind the scenes, and it is done by the C#/VB compilers. Note how we didn't use any of the existing async patterns, and we've defined the method very much like a synchronous one. Now compare the previous snippet with the traditional approach:

    Traditional async sample:

        void ComplicatedShowDateTime()
        {
            var client = new ServiceReference1.Service1Client();
            client.GetDateTimeCompleted += (s, e) =>
            {
                Console.WriteLine("Current DateTime is: {0}", e.Result);
                // Chain the next call from inside the completion handler to "loop"
                client.GetDateTimeAsync();
            };
            client.GetDateTimeAsync();
        }

    The previous implementation is somewhat similar to the first one shown, but more complicated. Note how the while loop is implemented as a chained callback to the same method (client.GetDateTimeAsync) inside the event handler (please do not do this in your own application; this is just an example). How does it work? Using a state workflow (actually a jump table), the compiler expands our code and creates the necessary steps to execute it, resuming pending operations after each asynchronous one. The intention of the new async/await pattern is to let us think and code as we normally do when designing an algorithm. It also allows us to preserve the logical flow control of the program (without using any tricky coding patterns to accomplish this). The compiler will then create the necessary workflow to execute operations as they happen in time.

    Read the article

  • Retrofit Certification

    - by Bill Evjen
    Impact of Regulations on Cabin Systems Installation
    John Courtright, Structural Integrity Engineering

    There is "heightened" FAA attention to technical issues related to IFE and Wi-Fi systems installations: the Aging Aircraft Safety Rule (EWIS and Damage Tolerance Analysis), issue papers, and testing, testing, testing. The challenge is to maximize flight safety while minimizing costs. Airworthiness Directives (ADs) play a role in the design of many IFE systems and all antenna systems; the goal is safety AND cost-effective maintenance intervals and inspection techniques.

    The STC Process Briefly Stated
    There are Type Certifications (TC) and Supplemental Type Certifications (STC). The STC process is driven by a Project Specific Certification Plan (PSCP), managed by the FAA Aircraft Certification Office (ACO), covering the type of project (electrical/mechanical systems or structural), the specific type of aircraft being modified, the schedule, and the design and installation location.

    What does the STC Plan (PSCP) cover?
    - System description – what does the system do?
    - System qualification – are the components qualified?
    - Certification requirements – what FARs are applicable?
    - Installation detail – what is being modified?
    - Prototype installation – what is new?
    - Functional Hazard Assessment (FHA) – is it safe?
    - EZAP-EWIS requirements – any aging aircraft issues?
    - Certification data – how is compliance achieved?
    - Delegation and FAA involvement – who is doing the work?
    - Proposed certification schedule – when is the installation?
    - Certification documentation – what the FAA expects to see

    Cabin Systems Certification Concerns
    In addition to meeting the requirements of DO-160, cabin system certification needs to address power management. Generally, IFE and Wi-Fi systems are classified as "non-essential equipment" from a certification viewpoint: they are connected to "non-essential" power buses, and it must be possible to shed IFE and Wi-Fi systems in a smoke/fire event or other electrical emergency (FAA Policy 00-111-160). The FAA is more relaxed with Wi-Fi testing; it used to be that you had to have 150 seats with laptops running Wi-Fi, but now it is down to around 50. There are also aging aircraft concerns, both electrical and structural, plus issue papers addressing technical concerns involving "Structural Certification Criteria for Large Antenna Installations" and antenna "Vibration/Buffeting Compliance Criteria".

    DO-160: Environmental Test Procedures
    DO-160, "Environmental Conditions and Test Procedures for Airborne Equipment", issued by RTCA, provides guidance to equipment manufacturers as to testing requirements: temperature (-40C to +55C), vibration and shock, contaminant susceptibility (fluids and dust), and electro-magnetic interference. Cabin systems are generally classified as "non-essential". Swissair 111 crashed (in part) due to non-standard wiring practices.

    EWIS Design Implications
    Installation design must take EWIS requirements into account. This generally means: aircraft surveys are needed to identify proper wire routing; existing wiring diagrams must be verified as correct; primary/secondary/tertiary bus locations must be identified; proper separation of wire bundles must be verified, including the required separation from the fuel quantity indicator system (FQIS) to prevent fuel tank ignition; and an Enhanced Zonal Analysis Procedure (EZAP) must be performed. EZAP was developed by the Aging Transport Systems Rulemaking Advisory Committee (ATSRAC) and is the method for analyzing airplane zones with an emphasis on evaluating wiring systems and the existence of combustibles in the cabin.

    Certification Considerations for Wi-Fi Systems
    Electrical: all existing DO-160 testing is required, along with issue papers and onboard EMI testing (is there any interference with aircraft systems when multiple Wi-Fi users are logged on?). Vibration/buffeting compliance criteria: what is the effect of the antenna on aircraft flight characteristics? Structural certification criteria: what are the stress loads on the aircraft at the antenna location, and what is the impact on maintenance inspection criteria for the airline? Damage tolerance analysis is required. The goal is to minimize maintenance inspection intervals.

    Read the article

  • Starting over and new to Ubuntu

    - by 2funnyyone
    We have been having repeated problems with our internet service, and with Windows XP & SP3 (users and permissions) I see no need for them. I started with computers long before Windows. Ever since SP3 came out in 2009 I have had nothing but problems. I have lost so many computers to viruses and trojans, we just stack them up. We are with Qwest/CenturyLink, which is using advertising servers, and I think that is causing the problem. All the computers are networked together, which is not how I set them up. I believe CenturyLink is networking them through the assignment of a domain for our home. This causes all the computers to crash twice. This is getting expensive. We tried buying new hard drives, but they reinfect within hours of connecting to the internet. I also believe the modem, router and all computers are infected. I put ComboFix on this one and that is the only reason we are still online with this laptop. I am afraid to install new equipment because my partner and I are on SSDI and this costs a lot. I go to school at UOP and have had to run off a flash drive and reboot this laptop to recovery every other day or so this past month.

    The new plan: we are getting ready to install new equipment but are afraid of reinfecting again, and need help installing it. The plan is to keep our current internet service from Qwest/now CenturyLink. The list of new equipment, in order:

    - CenturyLink wireless modem: ZyXEL PK5000Z with 4 direct-connect Ethernet ports
    - Dell Optiplex 210L (used auction purchase), 2 GB RAM, 80 GB hard drive, Ubuntu 11.10 operating system
    - Wireless D-Link router WBR-1310 with 4 direct-connect Ethernet ports
    - A purchased Dell OEM disk for repairing or reinstalling the Windows XP Professional operating system (2 roommates as well); all infected computers are Dell desktops or laptops with XP Pro
    - Also purchasing Ubuntu 12.04 for 3 computers; we like the way it runs but are still learning it

    Questions:
    1] How do we fdisk the infected computers without infecting the new system? We have DOS disks, but none of the machines has a floppy disk drive. We do have a new floppy disk drive and USB adapter we purchased from Amazon.
    2] We are thinking of Avast internet security because of the boot scan. We want all software loaded before reconnecting, and we can manually load our internet provider information. We purchased StopZilla ($100 for 5 computers), but are not sure that is what we need. We also need to know how to set up port security and which services we will need, so we are safe when we go back on the internet. We are really lost at this part.
    3] We want to connect the reloaded fdisk systems to the router as a public connection with no sharing. We do not want to network all the computers.
    4] We want parental/ownership control from the Ubuntu system for the internet connection (children and friends). Do we restrict at the modem and/or the router?

    Any help would be a blessing. I do not want to go it alone on this anymore.

    Read the article

  • We have our standards, and we need them

    - by Tony Davis
    The presenter suddenly broke off. He was midway through his section on how to apply to the relational database the Continuous Delivery techniques that allowed for rapid-fire rounds of development and refactoring, while always retaining a "production-ready" state. He sighed deeply and then launched into an astonishing diatribe against Database Administrators, with much of his frustration directed toward Oracle DBAs in particular. In broad strokes, he painted the picture of a brave new deployment philosophy being frustratingly shackled by the relational database, and especially by the attitudes of the guardians of these databases. DBAs, he said, shunned change and "still favored tools I'd have been embarrassed to use in the '80's". DBAs, Oracle DBAs especially, were more attached to their vendor than to their employer, since the former was the primary source of their career longevity and spectacular remuneration. He contended that someone could produce the best IDE or tool in the world for Oracle DBAs and yet none of them would give a stuff, unless it happened to come from the "mother ship". I sat blinking in astonishment at the speaker's vehemence, and glanced around nervously. Nobody in the audience disagreed, and a few nodded in assent. Although the primary target of the outburst was the Oracle DBA, it made me wonder. Are we who work with SQL Server database professionals, or merely SQL Server fanbois? Do DBAs, in general, have an image problem? Is it a good career move to be seen holding onto a particular product by the whites of our knuckles, to the exclusion of all else? If we seek a broad, open-minded knowledge of our chosen technology, the database, and are blessed with merely mortal powers of learning, then we like standards. Vendors of RDBMSs generally don't conform to standards by instinct, but by customer demand. Microsoft has made great strides to adopt the international SQL standards where possible, thanks to considerable lobbying by the community. The implementation of window functions is a great example. There is still work to do, though. SQL Server, for example, has an unusable version of the Information Schema. One cast-iron rule of any RDBMS is that we must be able to query the metadata using the same language that we use to query the data, i.e. SQL, and we do this by running queries against the INFORMATION_SCHEMA views. Developers who've attempted to apply a standard query that works on MySQL, or some other database, but doesn't produce the expected results on SQL Server, are advised to shun the standards-based approach in favor of the vendor-specific one, using the catalog views. The argument behind this is sound and well-documented, and of course we all use those catalog views out of necessity. And yet, as database professionals committed to supporting the best databases for the business, whatever they are now and in the future, surely our hearts should sink somewhat when we advocate a vendor-specific approach to a developer struggling with something as simple as writing a guard clause. They sink further when we read notes in the Microsoft documentation informing us that we shouldn't rely on INFORMATION_SCHEMA to reliably identify the schema of an object, in SQL Server!
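
    As a sketch of the guard clause in question (the table name dbo.Invoices is hypothetical, purely for illustration), here are the two approaches side by side:

        -- Standards-based guard clause: portable across RDBMSs, yet the SQL Server
        -- documentation warns against relying on INFORMATION_SCHEMA for an object's schema
        IF EXISTS (SELECT 1
                   FROM INFORMATION_SCHEMA.TABLES
                   WHERE TABLE_SCHEMA = 'dbo'
                     AND TABLE_NAME = 'Invoices')
            DROP TABLE dbo.Invoices;

        -- Vendor-specific guard clause using a catalog function: reliable on
        -- SQL Server, but not portable to MySQL or any other RDBMS
        IF OBJECT_ID(N'dbo.Invoices', N'U') IS NOT NULL
            DROP TABLE dbo.Invoices;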

    Read the article

< Previous Page | 166 167 168 169 170 171 172 173 174 175 176 177  | Next Page >