Search Results

Search found 39290 results on 1572 pages for 'even numbers'.


  • As a person getting into mobile development, what's the best mobile platform in terms of profitability? [closed]

    - by Kyle Loman
    I realize this question can range very far, so I'd love to hear any and all opinions on it. I'll be honest and say that I have been thinking of this mostly in terms of which platform is most profitable. I know how that may sound, but it is one of my main sticking points. I realize that I'm not guaranteed a single cent and that success is never guaranteed, but I'm going into this with the thought of making something out of it, both financially and for my own interest.

    I know that iOS gets a lot of attention on this front, but Android commands a lot more market share. However, I know there are drawbacks to Android too, whether in the actual development process and programming (though I've heard conflicting reports on this, such as how easy or difficult it is to handle different screen resolutions across devices) or in the app ecosystem being flooded. But iOS's app ecosystem has been described as too saturated and harder to compete in for that reason. Since Windows Phone has fewer apps than both of those two, might that be the best place to start, in order to be closer to the ground floor of the store and be noticed more? Less saturation = better chances of sales or of differentiating? Something like the gold rush during the first years of the iOS App Store (not exactly, but at least in concept)? Would it be that, despite fewer users on the platform, there's more exposure due to less competition, so that may translate to better success at sales? Plus, I know MS is in it for the long haul, so I'm not too fearful of something like WebOS going away. Obviously RIM isn't very popular nowadays, but I read a recent article that says BlackBerry actually has the apps that make the most money - any thoughts on that? http://gigaom.com/mobile/which-mobile-oss-apps-make-most-money-surprise-its-blackberry/ Again, this is all I've heard or known about, so if there's anything to add or correct here, please do.

    In addition, this has actually affected my next personal phone upgrade. I'm eligible for a carrier discount now and I've had my eye on the iPhone 5. However, the Lumia 920 is the one I'm holding out for, and I'm open to trying an Android, but I'm not sure I can wait that long for any new Nexus or even the Razr HD. Even the new Lumia in November is making me antsy - I'm so close to just getting an iPhone 5. The reason this affects my phone choice is that I'd want to be able to carry the apps I write with me, so that I can pull my phone out to show people without having to carry around a second device to do so. That's why I'd like my personal phone to match the main platform I'm developing for. Of course, I will likely expand to other platforms if I have any decent success, but the one I target now would serve well as my personal phone, which I could use as a marketing tool, in a sense, showing people my apps if the opportunity presents itself.

    So what's the best mobile platform to choose, especially in regard to being the most lucrative? As said previously, this will influence my personal phone choice greatly. Thanks in advance, and I hope this isn't taken the wrong way - I understand there are trade-offs and other factors that may balance this out, but making some revenue is key among them.

    For some background, I have done software development and know programming-language concepts, so I'm not entirely new to this, and I do understand that familiarity with these concepts translates across a variety of languages. I'm just having difficulty choosing my first main mobile platform based on the criteria I've outlined above.

    Read the article

  • PowerShell: Read Excel to Create Inserts

    - by BuckWoody
    I'm writing a series of articles on how to migrate "departmental" data into SQL Server. I also hold workshops on the entire process - from discovering that the data exists, to the modeling process, to designing the Extract, Transform and Load (ETL) process. Finally, I write about (and teach) a few methods for actually moving the data. One of those options is to use PowerShell. There are a lot of ways to do it even within that choice, but the one I show reads two columns from the spreadsheet and outputs statements that would insert the data using a stored procedure. Of course, you could rewrite this to emit INSERT statements, write out to a text file for bcp, or even use a database connection in the script to move the data directly from Excel into SQL Server.

    This snippet won't run on your system, of course - it assumes a Microsoft Office Excel 2007 spreadsheet located at c:\temp called VendorList.xlsx, and it looks for a tab in that spreadsheet called Vendors. The statement that does the writing uses just one column: Vendor Code. Here's the breakdown of what I'm doing: in the first block, I connect to Microsoft Office Excel. That connection string is specific to Excel 2007, so if you need a different version you'll need to look that up. In the second block I set up a selection from the entire spreadsheet based on that tab. Note that if you're only after certain data you shouldn't get the whole spreadsheet - that's just good practice. In the next block I create the text I want, inserting the Vendor Code field as I go. Finally, I close the connection. Enjoy!

        $ExcelConnection = New-Object -com "ADODB.Connection"
        $ExcelFile = "c:\temp\VendorList.xlsx"
        $ExcelConnection.Open("Provider=Microsoft.ACE.OLEDB.12.0;Data Source=$ExcelFile;Extended Properties=Excel 12.0;")

        $strQuery = "Select * from [Vendors$]"
        $ExcelRecordSet = $ExcelConnection.Execute($strQuery)

        do {
            Write-Host "EXEC sp_InsertVendors '" $ExcelRecordSet.Fields.Item("Vendor Code").Value "'"
            $ExcelRecordSet.MoveNext()
        } Until ($ExcelRecordSet.EOF)

        $ExcelConnection.Close()

    Script Disclaimer, for people who need to be told this sort of thing: Never trust any script, including those that you find here, until you understand exactly what it does and how it will act on your systems. Always check the script on a test system or Virtual Machine, not a production system. All scripts on this site are performed by a professional stunt driver on a closed course. Your mileage may vary. Void where prohibited. Offer good for a limited time only. Keep out of reach of small children. Do not operate heavy machinery while using this script. If you experience blurry vision, indigestion or diarrhea during the operation of this script, see a physician immediately.
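
    As a side note not from the original post: the same two-column transformation also ports to other scripting languages. Below is a minimal Python sketch of the idea; it assumes the openpyxl package and the same hypothetical file path, Vendors tab, Vendor Code column, and sp_InsertVendors procedure used in the example above, so treat it as an illustration rather than a drop-in replacement.

        # Sketch only: emit EXEC statements from an Excel worksheet.
        # Assumes openpyxl is installed and the spreadsheet layout (file path,
        # "Vendors" tab, "Vendor Code" header) matches the example above.
        from openpyxl import load_workbook

        wb = load_workbook(r"c:\temp\VendorList.xlsx", read_only=True)
        ws = wb["Vendors"]

        rows = ws.iter_rows(values_only=True)
        header = next(rows)                           # first row holds the column names
        vendor_code_col = header.index("Vendor Code")

        for row in rows:
            code = row[vendor_code_col]
            if code is not None:
                # Double up single quotes so the generated T-SQL stays valid.
                safe = str(code).replace("'", "''")
                print(f"EXEC sp_InsertVendors '{safe}'")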

    Read the article

  • SQLAuthority News – Advantages of Distance Learning

    - by Pinal Dave
    Distance education is extremely popular - almost overnight, it seems. Almost everyone has taken an online course, knows someone who has, or is considering joining an online school. There are many advantages and disadvantages to attending an online school - but the same can be said of attending a physical school! Let's take a look at the top reasons to use distance education.

    1) Flexibility. Physical universities are usually willing to make some concessions to students - like night classes, study hours, and online networks. However, nothing is going to beat the flexibility of distance education. You can attend classes and take notes anytime, anywhere, wearing anything you'd like!

    2) Affordability. We don't need hard numbers to understand how expensive a university can be. Students are taking on more and more debt just to get an education, and many of those fees pay for room, board, and facilities. Distance education cuts out all of these costs and makes attending school much more affordable for the average student.

    3) Try before you buy. Did you know that the average college student changes his or her major 10 times before graduating? You can imagine that this kind of indecision plays a huge part in WHEN you graduate - not being able to make up your mind can cost you big bucks if you have to stay in school for extra years! Distance education allows you to take different classes from a wide range of disciplines. Do you want to study forensic science or English literature? Now you don't have to pay for classes you can't afford just to find out.

    4) Pace yourself. Some students struggle in a traditional classroom setting - classes can be taught too fast or too slow, or there are too many distractions. Distance education allows mature students to set the pace themselves. They can rewatch lectures they didn't catch the first time, or move through classes quickly if they are already familiar with the material - cutting out the chance of burning out or getting bored.

    5) Lifelong learning. Maybe you already have a degree but would like to learn more about your field, a related field, or even something completely unrelated - just because you are curious! Distance education allows you to learn whatever you want, whenever you want (and yes, wearing anything you'd like!).

    6) Attend whatever college you want. Because of the popularity of distance education, physical campuses are getting in on the game by offering online courses - often just uploaded versions of classes already taught on their campus. Ever wanted to attend Harvard, but knew you couldn't get in? Take a class online! Of course, you probably should not claim to have a Harvard degree, but Ivy League colleges are prestigious because they are the best in their fields - take advantage of the best by taking an online course!

    I am a big believer in continuing education, whether it is online courses, returning to school, or even taking informal classes online. Distance education can be a great way to accomplish these goals and become a lifelong learner. My friends at provides training through virtual classrooms for students who want to avoid travelling. Distance learning courses allow IT aspirants to connect with trainers using the internet. I encourage everyone to check it out!

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, T SQL, Technology

    Read the article

  • SQL Server Date Comparison Functions

    - by HighAltitudeCoder
    A few months ago, I found myself working with a repetitive cursor that looped until the data had been manipulated enough times that it was finally correct. The cursor was heavily dependent upon dates, every time requiring the earlier of two (or several) dates in one stored procedure, while requiring the later of two dates in another stored procedure. In short, what I needed was a function that would allow me to perform the following evaluation:

        WHERE MAX(Date1, Date2) < @SomeDate

    The problem is, the MAX() function in SQL Server does not perform this functionality. So I set out to put these functions together. They are titled EarlierOf() and LaterOf().

        /**********************************************************
          EarlierOf.sql

          Return the earlier of two DATETIME variables.

          Parameter 1: DATETIME1
          Parameter 2: DATETIME2

          Works for a variety of DATETIME or NULL values. Even though
          comparisons with NULL are actually indeterminate, we know
          conceptually that NULL is not earlier or later than any other
          date provided.

          SYNTAX:
          SELECT dbo.EarlierOf('1/1/2000','12/1/2009')
          SELECT dbo.EarlierOf('2009-12-01 00:00:00.000','2009-12-01 00:00:00.521')
          SELECT dbo.EarlierOf('11/15/2000',NULL)
          SELECT dbo.EarlierOf(NULL,'1/15/2004')
          SELECT dbo.EarlierOf(NULL,NULL)
        **********************************************************/
        USE AdventureWorks
        GO

        IF EXISTS
              (SELECT *
               FROM sysobjects
               WHERE name = 'EarlierOf'
               AND xtype = 'FN')
        BEGIN
              DROP FUNCTION EarlierOf
        END
        GO

        CREATE FUNCTION EarlierOf
        (
              @Date1 DATETIME,
              @Date2 DATETIME
        )
        RETURNS DATETIME
        AS
        BEGIN
              DECLARE @ReturnDate DATETIME

              IF (@Date1 IS NULL AND @Date2 IS NULL)
              BEGIN
                    SET @ReturnDate = NULL
                    GOTO EndOfFunction
              END
              ELSE IF (@Date1 IS NULL AND @Date2 IS NOT NULL)
              BEGIN
                    SET @ReturnDate = @Date2
                    GOTO EndOfFunction
              END
              ELSE IF (@Date1 IS NOT NULL AND @Date2 IS NULL)
              BEGIN
                    SET @ReturnDate = @Date1
                    GOTO EndOfFunction
              END
              ELSE
              BEGIN
                    SET @ReturnDate = @Date1
                    IF @Date2 < @Date1
                          SET @ReturnDate = @Date2
                    GOTO EndOfFunction
              END

              EndOfFunction:
              RETURN @ReturnDate
        END -- End Function
        GO

        ---- Set Permissions
        --GRANT SELECT ON EarlierOf TO UserRole1
        --GRANT SELECT ON EarlierOf TO UserRole2
        --GO

    The inverse of this function is only slightly different.

        /**********************************************************
          LaterOf.sql

          Return the later of two DATETIME variables.

          Parameter 1: DATETIME1
          Parameter 2: DATETIME2

          Works for a variety of DATETIME or NULL values. Even though
          comparisons with NULL are actually indeterminate, we know
          conceptually that NULL is not earlier or later than any other
          date provided.

          SYNTAX:
          SELECT dbo.LaterOf('1/1/2000','12/1/2009')
          SELECT dbo.LaterOf('2009-12-01 00:00:00.000','2009-12-01 00:00:00.521')
          SELECT dbo.LaterOf('11/15/2000',NULL)
          SELECT dbo.LaterOf(NULL,'1/15/2004')
          SELECT dbo.LaterOf(NULL,NULL)
        **********************************************************/
        USE AdventureWorks
        GO

        IF EXISTS
              (SELECT *
               FROM sysobjects
               WHERE name = 'LaterOf'
               AND xtype = 'FN')
        BEGIN
              DROP FUNCTION LaterOf
        END
        GO

        CREATE FUNCTION LaterOf
        (
              @Date1 DATETIME,
              @Date2 DATETIME
        )
        RETURNS DATETIME
        AS
        BEGIN
              DECLARE @ReturnDate DATETIME

              IF (@Date1 IS NULL AND @Date2 IS NULL)
              BEGIN
                    SET @ReturnDate = NULL
                    GOTO EndOfFunction
              END
              ELSE IF (@Date1 IS NULL AND @Date2 IS NOT NULL)
              BEGIN
                    SET @ReturnDate = @Date2
                    GOTO EndOfFunction
              END
              ELSE IF (@Date1 IS NOT NULL AND @Date2 IS NULL)
              BEGIN
                    SET @ReturnDate = @Date1
                    GOTO EndOfFunction
              END
              ELSE
              BEGIN
                    SET @ReturnDate = @Date1
                    IF @Date2 > @Date1
                          SET @ReturnDate = @Date2
                    GOTO EndOfFunction
              END

              EndOfFunction:
              RETURN @ReturnDate
        END -- End Function
        GO

        ---- Set Permissions
        --GRANT SELECT ON LaterOf TO UserRole1
        --GRANT SELECT ON LaterOf TO UserRole2
        --GO

    The interesting thing about this function is its simplicity and the built-in NULL handling. It's interesting because it seems like something that does this should already exist in SQL Server. From a different vantage point, if you create this functionality and it is easy to use (ideally, intuitively self-explanatory), you have made a successful contribution. Interesting is good. Self-explanatory, or intuitive, is FAR better. Happy coding! Graeme
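
    As an aside not from the original post, the NULL-handling decision table is easy to sanity-check outside SQL Server. Here is a tiny Python sketch of the same rules, with None standing in for NULL; it is an illustration of the logic only, not a substitute for the T-SQL functions.

        # Illustration only: the same earlier-of/later-of rules, with None playing
        # the role of NULL. A lone None never "wins"; two Nones return None.
        from datetime import datetime

        def earlier_of(d1, d2):
            if d1 is None:
                return d2              # covers (None, d2) and (None, None)
            if d2 is None:
                return d1
            return d1 if d1 <= d2 else d2

        def later_of(d1, d2):
            if d1 is None:
                return d2
            if d2 is None:
                return d1
            return d1 if d1 >= d2 else d2

        print(earlier_of(datetime(2000, 1, 1), datetime(2009, 12, 1)))  # 2000-01-01
        print(later_of(None, datetime(2004, 1, 15)))                    # 2004-01-15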

    Read the article

  • Data Auditor by Example

    - by Jinjin.Wang
    OWB has a Data Auditors node under Oracle Module in the Projects Navigator. What is a data auditor, and how do you use it? I will give an introduction to data auditors and show their usage by example.

    A data auditor is an important tool for ensuring that data quality levels meet business requirements. It validates data against a set of data rules to determine which records comply and which do not. It gathers statistical metrics on how well the data in a system complies with a rule by auditing and recording how many errors occur against the audited table. Data auditors are typically scheduled for regular execution as part of a process flow, to monitor the quality of the data in an operational environment such as a data warehouse or ERP system - either immediately after updates like data loads, or at regular intervals.

    How do you use a data auditor to monitor data quality? Only objects with data rules can be monitored, so the first step is to define data rules according to business requirements and apply them to the objects you want to monitor. The objects can be tables, views, materialized views, and external tables. Second, create a data auditor containing those objects. Optionally, you can configure the data auditor and set physical deployment parameters for it, which are used when it runs. Then deploy and run the data auditor, either manually or as part of a process flow. After execution, the data auditor sets several output values, and records that do not comply with the data rules it contains are written to error tables.

    Here is an example. We have two tables, DEPARTMENTS and EMPLOYEES (pic-1 and pic-2; the original post also links to the DDL and data), imported into OWB. We want to gather statistical metrics on how well the data in these two tables satisfies the following requirements:

    a. Values of the EMPLOYEES.EMPLOYEE_ID attribute are three-digit numbers.
    b. Valid values for EMPLOYEES.JOB_ID are IT_PROG, SA_REP, SH_CLERK, PU_CLERK, and ST_CLERK.
    c. EMPLOYEES.EMPLOYEE_ID is related to DEPARTMENTS.MANAGER_ID.

    Pic-1: EMPLOYEES
    Pic-2: DEPARTMENTS

    1. To determine legal data within EMPLOYEES, and legal relationships between data in different columns of the two tables, we first define data rules based on the three requirements and apply them to the tables. a. The first requirement is about the patterns an attribute is allowed to conform to. We create a Domain Pattern List data rule, EMPLOYEE_PATTERN_RULE. The pattern is defined in Oracle Database regular expression syntax as ^([0-9]{3})$. Apply data rule EMPLOYEE_PATTERN_RULE to table EMPLOYEES. (A quick way to try out the pattern and domain rules themselves is sketched below.)
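
    As a side illustration not from the original post, the first two rules can be prototyped outside OWB before being formalized as data rules. The sketch below uses made-up sample rows; the rule definitions (the three-digit pattern and the JOB_ID domain) are the ones listed above.

        # Illustration only: prototype the two EMPLOYEES rules with made-up rows
        # before defining them as OWB data rules. Sample values are hypothetical.
        import re

        EMPLOYEE_ID_PATTERN = re.compile(r"^([0-9]{3})$")   # three-digit numbers
        VALID_JOB_IDS = {"IT_PROG", "SA_REP", "SH_CLERK", "PU_CLERK", "ST_CLERK"}

        sample_rows = [
            {"EMPLOYEE_ID": "101", "JOB_ID": "IT_PROG"},     # complies with both rules
            {"EMPLOYEE_ID": "1001", "JOB_ID": "SA_REP"},     # fails the pattern rule
            {"EMPLOYEE_ID": "102", "JOB_ID": "AD_PRES"},     # fails the domain rule
        ]

        for row in sample_rows:
            ok_pattern = bool(EMPLOYEE_ID_PATTERN.match(row["EMPLOYEE_ID"]))
            ok_domain = row["JOB_ID"] in VALID_JOB_IDS
            print(row, "pattern:", ok_pattern, "job domain:", ok_domain)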

    Read the article

  • Ask HTG: How Can I Check the Age of My Windows Installation?

    - by Jason Fitzpatrick
    Curious about when you installed Windows and how long you’ve been chugging along without a system refresh? Read on as we show you a simple way to see how long-in-the-tooth your Windows installation is. Dear How-To Geek, It feels like it has been forever since I installed Windows 7 and I’m starting to wonder if some of the performance issues I’m experiencing have something to do with how long ago it was installed. It isn’t crashing or anything horrible, mind you, it just feels slower than it used to and I’m wondering if I should reinstall it to wipe the slate clean. Is there a simple way to determine the original installation date of Windows on its host machine? Sincerely, Worried in Windows Although you only intended to ask one question, you actually asked two. Your direct question is an easy one to answer (how to check the Windows installation date). The indirect question is, however, a little trickier (if you need to reinstall Windows to get a performance boost). Let’s start off with the easy one: how to check your installation date. Windows includes a handy little application just for the purposes of pulling up system information like the installation date, among other things. Open the Start Menu and type cmd in the run box (or, alternatively, press WinKey+R to pull up the run dialog and enter the same command). At the command prompt, type systeminfo.exe Give the application a moment to run; it takes around 15-20 seconds to gather all the data. You’ll most likely need to scroll back up in the console window to find the section at the top that lists operating system stats. What you care about is Original Install Date: We’ve been running the machine we tested the command on since August 23 2009. For the curious, that’s one month and a day after the initial public release of Windows 7 (after we were done playing with early test releases and spent a month mucking around in the guts of Windows 7 to report on features and flaws, we ran a new clean installation and kept on trucking). Now, you might be asking yourself: Why haven’t they reinstalled Windows in all that time? Haven’t things slowed down? Haven’t they upgraded hardware? The truth of the matter is, in most cases there’s no need to completely wipe your computer and start from scratch to resolve issues with Windows and, if you don’t bog your system down with unnecessary and poorly written software, things keep humming along. In fact, we even migrated this machine from a traditional mechanical hard drive to a newer solid-state drive back in 2011. Even though we’ve tested piles of software since then, the machine is still rather clean because 99% of that testing happened in a virtual machine. That’s not just a trick for technology bloggers, either, virtualizing is a handy trick for anyone who wants to run a rock solid base OS and avoid the bog-down-and-then-refresh cycle that can plague a heavily used machine. So while it might be the case that you’ve been running Windows 7 for years and heavy software installation and use has bogged your system down to the point a refresh is in order, we’d strongly suggest reading over the following How-To Geek guides to see if you can’t wrangle the machine into shape without a total wipe (and, if you can’t, at least you’ll be in a better position to keep the refreshed machine light and zippy): HTG Explains: Do You Really Need to Regularly Reinstall Windows? 
    PC Cleaning Apps are a Scam: Here's Why (and How to Speed Up Your PC)
    The Best Tips for Speeding Up Your Windows PC
    Beginner Geek: How to Reinstall Windows on Your Computer
    Everything You Need to Know About Refreshing and Resetting Your Windows 8 PC

    Armed with a little knowledge, you too can keep a computer humming along until the next iteration of Windows comes along (and beyond) without the hassle of reinstalling Windows and all your apps.
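
    If you would rather not scroll through the console output, the same field can be pulled out programmatically. This is a hedged sketch, not part of the article: it runs systeminfo.exe on Windows and prints just the "Original Install Date" line mentioned above.

        # Sketch only (Windows): run systeminfo.exe and print the
        # "Original Install Date" line instead of scrolling the whole report.
        import subprocess

        output = subprocess.run(
            ["systeminfo"], capture_output=True, text=True, check=True
        ).stdout

        for line in output.splitlines():
            if line.strip().startswith("Original Install Date"):
                print(line.strip())
                break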

    Read the article

  • Three Buckets of Knowledge

    - by BuckWoody
    As I learn more and more about SQL Server every day, I divide up my information into three "buckets":

    Concepts

    In the first bucket are the general concepts about the topic. What is it? What does it do (or sometimes, what is it supposed to do)? How does one operation flow to another? For this information I use books, magazine articles and, believe it or not, Wikipedia. I don't always trust that last source, but I do use it to see how others lay out their thoughts around a concept. I really like graphical charts that show me the process flow if I can get them, and this is an ideal place for a good presentation. In fact, this may be the only real use for a presentation - I'll explain what I mean in a moment.

    Reference

    The references for a topic include things like Transact-SQL (T-SQL) syntax, or the screen layout of a panel - things like that. Think dictionary. The only reference I trust for this information is Books Online - presentations are fine, but we're talking about a dictionary. Ever go to a movie that just reads through a dictionary? Me neither. But I have gone to presentations where people try to include tons of reference material in their slides. Even if you give me the presentation material later, it's not really a searchable, readable medium.

    How To

    A how-to for me is an example, or even better, a tutorial about an example. Whatever it is shows me a practical use for the concepts and, of course, involves the syntax. The important thing here is that you need to be able to separate the example the person is showing you from the stuff you need to know. I can't tell you how many times folks have told me, "Well, sure, if yours is red then that works. But mine is blue." And I have to explain, "Then use 'blue' for the search word here." You get the idea. No one will do your work for you - the examples are meant as a teaching tool only. I accept that, learn what I can, and then run off to create my own thing. You might think a how-to works well in a presentation, and it does, for the most part. For a complex example or tutorial, I still prefer the printed word (electronic if possible) so that I can go over the example multiple times, skip around and so on.

    The order here isn't actually that important. Most of the time I start with a concept, look at an example, and then read the reference material. But sometimes I look up an example, read a little of the concepts and then check the reference. The only thing I try to enforce is to read something from each of them. It's dangerous to base your work on any single example, reference or concept.

    Read the article

  • June 23, 1983: First Successful Test of the Domain Name System [Geek History]

    - by Jason Fitzpatrick
    Nearly 30 years ago the first Domain Name System (DNS) was tested, and it changed the way we interacted with the internet. Nearly impossible to remember numeric addresses became easy to remember names.

    Without DNS you'd be browsing a web where numbered addresses pointed to numbered addresses. Google, for example, would look like http://209.85.148.105/ in your browser window. That's assuming, of course, that a numbers-based web ever gained enough traction to be popular enough to spawn a search giant like Google. How did this shift occur, and what did we have before DNS? From Wikipedia:

    The practice of using a name as a simpler, more memorable abstraction of a host's numerical address on a network dates back to the ARPANET era. Before the DNS was invented in 1983, each computer on the network retrieved a file called HOSTS.TXT from a computer at SRI. The HOSTS.TXT file mapped names to numerical addresses. A hosts file still exists on most modern operating systems by default and generally contains a mapping of the IP address 127.0.0.1 to "localhost". Many operating systems use name resolution logic that allows the administrator to configure selection priorities for available name resolution methods.

    The rapid growth of the network made a centrally maintained, hand-crafted HOSTS.TXT file unsustainable; it became necessary to implement a more scalable system capable of automatically disseminating the requisite information. At the request of Jon Postel, Paul Mockapetris invented the Domain Name System in 1983 and wrote the first implementation. The original specifications were published by the Internet Engineering Task Force in RFC 882 and RFC 883, which were superseded in November 1987 by RFC 1034 and RFC 1035. Several additional Requests for Comments have proposed various extensions to the core DNS protocols.

    Over the years it has been refined, but the core of the system is essentially the same. When you type "google.com" into your web browser, a DNS server is used to resolve that host name to the IP address of 209.85.148.105 - making the web human-friendly in the process.

    Domain Name System History [Wikipedia via Wired]
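
    To watch that resolution step happen from code, here is a tiny sketch (an editorial addition, not from the article). The address it prints is whatever your resolver returns today, not necessarily the 209.85.148.105 mentioned above.

        # Illustration only: ask the system resolver (which in turn uses DNS)
        # to turn a host name into an IP address - the job HOSTS.TXT used to do.
        import socket

        for name in ("google.com", "www.wikipedia.org"):
            print(name, "->", socket.gethostbyname(name))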

    Read the article

  • Ti Launchpad

    - by raysmithequip
    Just thought I would get a couple of notes up here for reference to anyone that is interested...it is now Feb 2011 and I have not been posting here enough to remember this blog.

    Back in Nov 2010 I ordered the TI LaunchPad MSP430. It is a little target board kit replete with a mini USB cable, two very inexpensive programmable MCUs, a couple of pin headers, a couple of on-board LEDs, a SPI connector, some on-board jumpers and two programmable micro switches....all for less than $5.00...INCLUDING SHIPPING!!....not bad when the Arduinos are running around $20.00 for the target board, ATmega328 and cable off of eBay...I won't even mention the Microchip PIC right now. Naw, for $5.00 the TI LaunchPad kit is about the cheapest fun around...if-uns your a geek that is...

    Well, the LaunchPad was backordered for almost two months - it came like Xmas eve in fact...I had almost forgotten it!! And really, it was way late and not my idea of an Xmas present for myself. That would have been the web expressions 4 I bought a few weeks back. With all the holidays, I did not even look at it till last week; in fact I passed the wrapped board around at my local ham club meeting during points of personal privilege....some ohs and ahhs but mostly duhs...I actually ordered it to avoid downloading the huge Code Composer Studio 4 (CCS) that was supposed to be included on the CD. No CD. I had already downloaded IAR, another programming IDE for these little micro bugs.

    In my spare time I toyed with IAR and the LaunchPad board, but after about two days of playing delete-the-driver with Windows I decided to just download CCS 4, the code-limited version, and give that a shot......CCS 4 is a good rewrite from the earlier versions; it is based on Eclipse as an IDE and includes the drivers for the MSP430 target board I received in the kit. Once installed, I quickly configured the debugger for the target chip, which was already plugged into the DIP socket at the factory - msp430G2131 from the drop-down list - and clicked OK...I was in!!

    CCS 4 is full of bells and whistles compared to the IAR, which I would have preferred for its simplicity. But Code Composer Studio really does have it all!!..the code-limited version is free, and of all things will give you a JavaScript editor box. The whole layout in debugger mode reminds me of any modern programmer IDE...I mean sure, give me TeX anytime, but you simply must admire all the boxes and options included in the GUI. It was a simple matter to check the assembly code in the flash and RAM memory that came preloaded for the LaunchPad kit.

    Assembly. I am right now looking for my old assembly textbooks...sure, I remember how to use mov and add etc., but a couple of the commands are a little more than vague anymore. Still, these little MCUs are about 50 cents each and might just work in a couple of projects I have lined up for the near future. I may document the code here. Luckily, I plan to write the code in C++ for the main project, but if it has to be assembly, no prob.

    For reference, the program that came already on the 2131 in the kit was a temperature indicator that alternately flashed red and green LEDs and changed the intensity of either depending on whether the temp was rising or falling...neat. Neat enough that it might be worthwhile banging out a little GUI in Windows 7 to test the new user device system calls, maybe put a temp gauge widget up on the desktop...just to keep from getting bored.

    If you see some assembly code on this blog, you know I was doing something with one of the many MCUs out there.....that's all for now, more to follow...a bit later, of course.

    Read the article

  • Puppet: Getting Started On Windows

    - by Robz / Fervent Coder
    Originally posted on: http://geekswithblogs.net/robz/archive/2014/08/07/puppet-getting-started-on-windows.aspx

    Now that we've talked a little about Puppet, let's see how easy it is to get started.

    Install Puppet

    Let's get Puppet installed. There are two ways to do that:

    With Chocolatey: open an administrative/elevated command shell and type:

        choco install puppet

    Or download and install Puppet manually - http://puppetlabs.com/misc/download-options

    Run Puppet

    Let's make pasting into a console window work with Control + V (like it should):

        choco install wincommandpaste

    If you have a cmd.exe command shell open (and Chocolatey installed), type:

        RefreshEnv

    The previous command will refresh your environment variables, ala Chocolatey v0.9.8.24+. If you were running PowerShell, there isn't yet a RefreshEnv for you (one is coming though!). If you have to restart your CLI (command line interface) session, or you installed Puppet manually, open an administrative/elevated command shell and type:

        puppet resource user

    Output should look similar to a few of these:

        user { 'Administrator':
          ensure  => 'present',
          comment => 'Built-in account for administering the computer/domain',
          groups  => ['Administrators'],
          uid     => 'S-1-5-21-some-numbers-yo-500',
        }

    Let's create a user:

        puppet apply -e "user {'bobbytables_123': ensure => present, groups => ['Users'], }"

    Relevant output should look like:

        Notice: /Stage[main]/Main/User[bobbytables_123]/ensure: created

    Run the 'puppet resource user' command again. Note the user we created is there! Let's clean up after ourselves and remove that user we just created:

        puppet apply -e "user {'bobbytables_123': ensure => absent, }"

    Relevant output should look like:

        Notice: /Stage[main]/Main/User[bobbytables_123]/ensure: removed

    Run the 'puppet resource user' command one last time. Note we just removed a user!

    Conclusion

    You just did some configuration management / system administration. Welcome to the new world of awesome! Puppet is super easy to get started with. This is a taste so you can start seeing the power of automation and where you can go with it. We haven't talked about resources, manifests (scripts), best practices and all of that yet. Next we are going to start to get into more extensive things with Puppet. Next time we'll walk through getting a Vagrant environment up and running. That way we can do some crazier stuff and, when we are done, we can just clean it up quickly.

    Read the article

  • Oracle Fusion Supply Chain Management (SCM) Designs May Improve End User Productivity

    - by Applications User Experience
    By Applications User Experience on March 10, 2011
    Michele Molnar, Senior Usability Engineer, Applications User Experience

    The Challenge: The SCM User Experience team, in close collaboration with product management and strategy, completely redesigned the user experience for Oracle Fusion applications. One of the goals of this redesign was to increase end user productivity by applying design patterns and guidelines and incorporating findings from extensive usability research. But a question remained: How do we know that the Oracle Fusion designs will actually increase end user productivity?

    The Test: To answer this question, the SCM usability engineers compared Oracle Fusion designs to their corresponding existing Oracle applications using the workflow time analysis method. The workflow time analysis method breaks tasks into a sequence of operators. By applying standard time estimates for all of the operators in the task, an estimate of the overall task time can be calculated (a toy version of this arithmetic is sketched at the end of this post). The workflow time analysis method has recently been adopted by the Applications User Experience group for use in predicting end user productivity. Using this method, a design can be tested and refined as needed to improve productivity even before the design is coded.

    For the study, we selected some of our recent designs for Oracle Fusion Product Information Management (PIM). The designs encompassed tasks performed by Product Managers to create, manage, and define products for their organization. (See Figure 1 for an example.) In applying this method, the SCM usability engineers collaborated with Product Management to compare the new Oracle Fusion Applications designs against Oracle's existing applications. Together, we performed the following activities:

    - Identified the five most frequently performed tasks
    - Created detailed task scenarios that provided the context for each task
    - Conducted task walkthroughs
    - Analyzed and documented the steps and flow required to complete each task
    - Applied standard time estimates to the operators in each task to estimate the overall task completion time

    Figure 1. The interactions on each Oracle Fusion Product Information Management screen were documented, as indicated by the red highlighting. The task scenario and script provided the context for each task.

    The Results: The workflow time analysis method predicted that the Oracle Fusion Applications designs would result in productivity gains in each task, ranging from 8% to 62%, with an overall productivity gain of 43%. All other factors being equal, the new designs should enable these tasks to be completed in about half the time it takes with existing Oracle Applications. Further analysis revealed that these performance gains would be achieved by reducing the number of clicks and screens needed to complete the tasks.

    Conclusions: Using the workflow time analysis method, we can expect the Oracle Fusion Applications redesign to succeed in improving end user productivity. The workflow time analysis method appears to be an effective and efficient tool for testing, refining, and retesting designs to optimize productivity. It does not replace usability testing with end users, but it can be used as an early predictor of design productivity even before designs are coded. We are planning to conduct usability tests later in the development cycle to compare actual end user data with the workflow time analysis results. Such results can potentially be used to validate the productivity improvement predictions.
Used together, the workflow time analysis method and usability testing will enable us to continue creating, evaluating, and delivering Oracle Fusion designs that exceed the expectations of our end users, both in the quality of the user experience and in productivity. (For more information about studying productivity, refer to the Measuring User Productivity blog.)
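
    For readers curious what "breaking a task into operators and applying standard time estimates" looks like in practice, here is a toy sketch. It is an editorial illustration, not Oracle's model: the operator names, per-operator times, and task sequences are invented placeholders, not the standard estimates used in the actual study.

        # Toy illustration of workflow time analysis: sum per-operator time
        # estimates for each design and compare. All numbers are hypothetical.
        OPERATOR_SECONDS = {"click": 1.2, "type_field": 4.0, "read_screen": 3.0, "navigate": 2.5}

        existing_design = ["navigate", "read_screen", "click", "type_field"] * 5   # more screens/clicks
        fusion_design = ["navigate", "read_screen", "type_field"] * 3              # fewer steps

        def task_time(operators):
            """Estimated task completion time: the sum of the operator estimates."""
            return sum(OPERATOR_SECONDS[op] for op in operators)

        old, new = task_time(existing_design), task_time(fusion_design)
        gain = (old - new) / old * 100
        print(f"existing: {old:.1f}s  fusion: {new:.1f}s  predicted productivity gain: {gain:.0f}%")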

    Read the article

  • Acceptance tests done first...how can this be accomplished?

    - by Crazy Eddie
    The basic gist of most Agile methods is that a feature is not "done" until it's been developed, tested, and in many cases released. This is supposed to happen in quick turnaround chunks of time such as "Sprints" in the Scrum process. A common part of Agile is also TDD, which states that tests are done first.

    My team works on a GUI program that does a lot of specific drawing and such. In order to provide tests, the testing team needs to be able to work with something that at least attempts to perform the things they are trying to test. We've found no way around this problem. I can very much see where they are coming from, because if I were trying to write software that targeted some basically mysterious interface I'd have a very hard time. Although we have behavior fairly well specified, the exact process of interacting with various UI elements when it comes to automation seems to be too unique to a feature to allow testers to write automated scripts to drive something that does not exist. Even if we could, a lot of things end up turning up later as having been missing from the specification.

    One thing we considered doing was having the testers write test "scripts" that are more like a set of steps that must be performed, as described from a use-case perspective, so that they can be "automated" by a human being. These could then be performed by the developer(s) writing the feature and/or verified by someone else. When the testers later get an opportunity, they automate the "script", mainly for regression purposes. This didn't end up catching on in the team, though.

    The testing part of the team is actually falling behind us by quite a margin. This is one reason why the apparent extra time of developing a "script" for a human being to perform just did not happen... they're under a crunch to keep up with us developers. If we waited for them, we'd get nothing done. It's not their fault, really; they're a bottleneck, but they're doing what they should be and working as fast as possible. The process itself seems to be set up against them. Very often we end up having to go back a month or more in what we've done to fix bugs that the testers have finally gotten to checking. It's an ugly truth that I'd like to do something about.

    So what do other teams do to solve this fail cascade? How can we get testers ahead of us, and how can we make it so that there's actually time for them to write tests for the features we do in a sprint without making us sit and twiddle our thumbs in the meantime? As it's currently going, getting a feature "done", using Agile definitions, would mean developers work for one week, then testers work the second week, with developers hopefully able to fix all the bugs they come up with in the last couple of days. That's just not going to happen, even if I agreed it was a reasonable solution. I need better ideas...

    Read the article

  • History of Mobile Technology

    - by David Dorf
    Over the last ten years, mobile phones have gone through several incremental technology leaps that have added capabilities that impact the retail industry. I've listed the six major ones below, along with their long-lasting impact.

    1. Location. In the US, the FCC required mobile phones to implement E911 (emergency calls) by 2006, requiring the caller to be located to within 300 meters. Back in 2000, GPS was opened up for civilian use, and by 2004 Qualcomm had figured out how to use GPS in mobile phones. So mobile operators moved from cell tower triangulation to GPS, principally for E911. But then lots of other uses became apparent, especially navigation. The earliest mobile apps from retailers made it easy to find nearby stores, and companies are looking at ways to use WiFi triangulation inside stores.

    2. Computer Vision. In 1997 Philippe Kahn shared a photo of his newborn using a mobile phone, thus launching the popularity of instant visual communications. Over the years the quality of the cameras got better, reaching the point where barcodes could be read around 2008. That's when Occipital came on the scene with their Red Laser application, which was eventually acquired by eBay. This opened up the ability for consumers to easily price-compare inside stores. Other interesting apps included Tesco's Wine Finder and Amazon's Price Checker, both allowing products to be identified by picture.

    3. Augmented Reality. Once the mobile phone had GPS, a video camera, and compass functionality, it was suddenly possible to overlay digital information on the screen in real time. Yelp, which was using GPS to find nearby merchants, created a backdoor called Monocle on the iPhone that showed nearby merchants overlaid on the video camera view. Today AR apps are mostly used by retailers for marketing, like Moosejaw's app that undresses models in their catalog.

    4. Geo-Fencing. So if we're able to track the location of a mobile phone, why not use that context to offer timely information? My first experience with geo-fencing came courtesy of North Face, the outdoor enthusiast store. When a mobile phone enters a predetermined area, like near a store, a text message is sent to the phone with an offer or useful information. Of course retailers can geo-fence their competitors as well and find out which customers aren't so loyal.

    5. Digital Wallet. Mobile payments leverage different technologies such as NFC, QR codes, Bluetooth, and SMS to facilitate communication between the consumer's phone and the retailer's point-of-sale. The key here is the potential to consolidate loyalty cards, coupons, and bank cards into the mobile phone and enable faster checkout. Nobody does this better than Starbucks today, but McDonald's and Dunkin' Donuts aren't far behind. Google, Isis, PayPal, Square, and MCX are all vying for leadership in this area. If NFC does finally take off, it will be leveraged by retailers in more places than just the POS.

    6. Voice Response. Mobile phones have had the ability to interpret simple voice commands for a while, but Google and Amazon were the first to use voice to allow searches for products. Allowing searches by text, barcode, and voice makes it easy to comparison shop in the aisles. Walmart even uses voice to build shopping lists, and if the Siri API is ever opened up we could see lots more innovation in this area.

    Read the article

  • Why Is Hibernation Still Used?

    - by Jason Fitzpatrick
    With the increased prevalence of fast solid-state hard drives, why do we still have system hibernation?

    Today's Question & Answer session comes to us courtesy of SuperUser - a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    The Question

    SuperUser reader Moses wants to know why he should use hibernate on a desktop machine:

    I've never quite understood the original purpose of the Hibernation power state in Windows. I understand how it works, what processes take place, and what happens when you boot back up from Hibernate, but I've never truly understood why it's used. With today's technology, most notably with SSDs, RAM and CPUs becoming faster and faster, a cold boot on a clean/efficient Windows installation can be pretty fast (for some people, mere seconds from pushing the power button). Standby is even faster, sometimes instantaneous. Even SATA drives from 5-6 years ago can accomplish these fast boot times. Hibernation seems pointless to me [on desktop computers] when modern technology is considered, but perhaps there are applications that I'm not considering. What was the original purpose behind hibernation, and why do people still use it?

    Quite a few people use hibernate, so what is Moses missing in the big picture?

    The Answer

    SuperUser contributor Vignesh4304 writes:

    Normally, hibernate mode saves your computer's memory - including, for example, open documents and running applications - to your hard disk and shuts down the computer; it uses zero power. Once the computer is powered back on, it will resume everything where you left off. You can use this mode if you won't be using the laptop/desktop for an extended period of time and you don't want to close your documents. Simple usage and purpose: save electric power and resume your documents. In simple terms: you will sleep, but your memories are still present. Why it's used: let me describe one sample scenario. Imagine your battery is low on power in your laptop, and you are working on important projects on your machine. You can switch to hibernate mode - it will result in your documents being saved, and when you power on, the actual state of the application gets restored. Its main usage is like an emergency shutdown with an auto-resume of your documents.

    MagicAndre1981 highlights the reason we use hibernate every day:

    Because it saves the status of all running programs. I leave all my programs open and can resume working the next day very easily. Doing a real boot would require starting all programs again, loading all the same files into those programs, getting to the same place that I was at before, and putting all my windows in exactly the same place. Hibernating saves a lot of work pulling these things back up again.

    It's not unusual to find computers around the office here that have been hibernated day in and day out for months without an actual full system shutdown and restart. It's enormously convenient to freeze your work space at the exact moment you stopped working and to turn right around and resume there the next morning.

    Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.

    Read the article

  • 3D terrain map with Hexagon Grids (XNA)

    - by Rob
    I'm working on a hobby project (I'm a web/backend developer by day) and I want to create a 3D tile (terrain) engine. I'm using XNA, but I can use MonoGame, OpenGL, or straight DirectX, so the answer does not have to be XNA specific. I'm looking for some high-level advice on how to approach this problem. I know about creating height maps and such - there are thousands of references out there on the net for that - this is a bit more specific. What I'm concerned with is the approach to get a 3D hexagon tile grid out of my terrain (since the terrain, and all 3D objects, are basically triangles).

    The first approach I thought about is to basically draw the triangles on the screen in the following order (blue numbers) to give me the triangles for the terrain (black triangles) and then make hexes out of the triangles (red hex): http://screencast.com/t/ebrH2g5V This approach seems complicated to me since I'm basically having to draw 4 different types of triangles.

    The next approach I thought of was to use the existing triangles like I did for a square grid and get my hexes from 6 triangles, as follows: http://screencast.com/t/w9b7qKzVJtb8 This seems like the easier approach to me since there are only 2 types of triangles (I would have to play with the heights and widths to get a "perfect" hexagon, but the idea is the same).

    So I'm looking for:

    1) Any suggestions on which approach I should take, and why.
    2) How I would translate mouse position to a hexagon grid position (especially when moving the camera around). For example, in the second image, if the mouse pointer were the green circle, how would I determine to highlight that hexagon and then translate that into grid coordinates (assuming it is 0,0)?
    3) Any references, articles, books, etc. - to get me going in the right direction.

    Note: I've done hex grids and mouse-to-grid coordinate conversion before in 2D; I'm looking for some pointers on how to do the same in 3D. The result I would like to achieve is something similar to the following: http://www.youtube.com/watch?v=Ri92YkyC3fw (sorry about the YouTube link, but it will only let me post 2 links in this post... same rep problem I mention below...)

    Thanks for any help!

    P.S. Sorry for not posting the images inline, I apparently don't have enough rep on this stack exchange site.
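
    Editorial aside, not part of the question: question 2 usually reduces to "unproject the mouse ray onto the ground plane, then convert that 2D point to a hex cell". Below is a hedged sketch of just the 2D point-to-hex step, using pointy-top hexagons and axial coordinates; the hex size (center-to-corner distance) and the sample point are assumptions, and the 3D unprojection step is out of scope here.

        # Sketch only: convert a 2D point (e.g. where the mouse ray hits the
        # ground plane) into axial hex-grid coordinates for pointy-top hexagons.
        # "size" is the assumed center-to-corner distance of a hex.
        import math

        def point_to_hex(x, y, size):
            q = (math.sqrt(3) / 3 * x - 1 / 3 * y) / size
            r = (2 / 3 * y) / size
            return cube_round(q, r)

        def cube_round(q, r):
            # Round fractional axial coords via cube coordinates (x + y + z = 0),
            # then fix the component with the largest rounding error.
            x, z = q, r
            y = -x - z
            rx, ry, rz = round(x), round(y), round(z)
            dx, dy, dz = abs(rx - x), abs(ry - y), abs(rz - z)
            if dx > dy and dx > dz:
                rx = -ry - rz
            elif dy > dz:
                ry = -rx - rz
            else:
                rz = -rx - ry
            return rx, rz          # axial (q, r)

        print(point_to_hex(42.0, 17.5, size=10.0))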

    Read the article

  • 11.10 desktop alerts (volume change and terminal bell) stopped working but all other audio still works

    - by FlabbergastedPickle
    All,

    My sound works just fine in the 11.10 64-bit install on my HP dm1-4050 Sandy Bridge notebook (audio works in Banshee, Flash, games, the browser, Thunderbird email notification, etc.), but the core desktop notifications do not work (e.g. pressing Tab in a terminal where there is more than one option should trigger a terminal bell, and changing volume using the volume keys should be accompanied by the supporting "quack" that the volume app makes). I've intentionally disabled the login sound as explained here on Ask Ubuntu, but even enabling it back makes no difference. These notifications did work before just fine, and I am not sure when they actually stopped working, but it must've been fairly recently.

    The only things I did were trying to install some PPA edge xorg drivers for my Intel card (a separate issue), which I reverted with ppa-purge once I discovered they did not improve anything. The other thing I did was check volume settings with alsamixer and run alsactl store for the soundcard after I did some experimenting with volume settings for PCM (on my laptop, PCM at 100% crackles, so I had to lower it and make PulseAudio ignore its setting, as per Ask Ubuntu's page). That said, neither of these should have any bearing on the said notifications, since the volume is up and sound clearly works everywhere else but the core desktop events.

    The system-ready drum sound when Ubuntu boots and the user reaches the login screen also does not work. The guest login behaves exactly the same as mine: audio works (including the login sound, since I've not disabled it for the guest account), but no quacks when changing the volume and no terminal bell sounds...

    I've tried copying the Ubuntu sounds to /usr/share/sounds/ as suggested on Ask Ubuntu, and that did not work. I also tried using dconf-editor to check the sound theme settings and tried both freedesktop (which is what it was set to) and ubuntu, as suggested on Ask Ubuntu. This did not work either.

    I tried purging the ~/.pulse folder and the /tmp/*pulse* entries, rebooting and restarting pulseaudio with the -D flag. Audio came back on and behaved just fine in all aspects (one can adjust volume levels, play music, games, in-browser sound, and other app alerts) except for the system-ready drum sound at the login screen and any system event (terminal bell and volume-change quack). It is interesting that the quack sound works inside System Settings - Sound when adjusting levels there, but not when the volume is changed via the top bar's volume control... I do recall that at one point yesterday, when I was restarting pulseaudio, the quacks that accompany volume changes did start working, but I have no idea what caused that. This was also when I first realized those alerts were not working. After rebooting it was gone again.

    I did compile my own 3.0.14-rt31 kernel a little while ago, as instructed on one of the wikis for the 11.10 rt kernel. Everything works as before except for the said sound alerts. I am not sure whether this began happening when I started using the rt kernel, though, and yesterday's momentary ability to hear those quacks while changing the volume makes me believe the kernel is not responsible for this problem.

    One more thing I can think of is that I used the alsoft-conf tool to configure buffering for OpenAL (due to TA Spring's choppy audio) and changed the default audio device in there to ALSA. I also tried reverting it to PulseAudio as the only allowed output, but the bottom part of the Backend tab always reverts to ALSA even when I select PulseAudio (PulseAudio does remain the only active choice on top). This, however, once again does not make any sense in terms of preventing desktop audio alerts when everything else, including OpenAL games, plays sound just fine...

    So, there you have it, as verbose as I could make it :-). I've tried all I could find on this issue and have had no luck so far... Any ideas?

    Read the article

  • Social Search: Looking for Love

    - by Mike Stiles
    For marketers and enterprise executives who have placed a higher priority on (and allocated bigger budgets to) search over social, it might be time to notice yet another shift that's well underway. Social is search.

    Search marketing was always more of an internal slam-dunk than other digital initiatives. Even a C-suite that understood little about the new technology world knew it's a good thing when people are able to find you. Google was the new Yellow Pages - only with Google, you could get your listing first without naming yourself "AAAA Plumbing." There were wizards out there who could give your business prominence in front of people who were specifically looking for what you offered. Other search giants like Bing also came along to offer such ideal matchmaking possibilities.

    But what if the consumer isn't using a search engine to find what they're looking for? And what if the search engines started altering their algorithms so that search placement manipulation was more difficult? Both of those things have started to happen. Experian Hitwise's numbers show that visits to the major search engines in the UK dropped 100 million through August. Search engines are far from dead, or even challenged. But more and more, the public is discovering the sites and brands they need through advice they get via social, not search.

    You'll find the worlds of social and search increasingly co-mingling as well. Search behemoths Google and Bing are including Facebook and Google+ in their engines. Meanwhile, Facebook and Twitter have done some integration of global web search into their platforms.

    So what makes social such a worthwhile search entity for brands? First and foremost, the consumer has demonstrated a behavior of acting on recommendations from social connections. A cry in the wilderness like, "Anybody know any good catering companies?" will usually yield a link (and an endorsement) from a friend, such as "Yeah, check out Just-Cheese-Balls Catering." There's no such human-driven force/influence behind the big search engines. Facebook's Mark Zuckerberg and others call it "Friend Mining." It is, in essence, searching for answers from friends' experiences as opposed to faceless code. And Facebook has all of those friends' experiences already stored as data.

    eMarketer says search is an $18 billion business, and investors are really into it. So it's no shock Facebook is ready to leverage its social graph into relevant search.

    What do you do about all this as a brand? For one thing, it's going to lead to some interesting paid marketing opportunities around the corner, including Sponsored Stories bought against certain queries, inserting deals into search results, capitalizing on social search results on mobile, etc. Apart from that, it might be time to stop mentally separating social and search in your strategic planning and budgeting. Courting your fans on social will cumulatively add up to more valuable, personally endorsed recommendations for your company when a consumer conducts a search on social. Fail to foster those relationships, fail to engage, fail to provide knock-em-dead customer service, fail to wow them with your actual products and services... and you'll wind up with the visibility you deserve in social search results.

    Read the article

  • No external microphone Acer AO722

    - by Leeghwater
    The ACER AO722 comes with an external mic input, and this input is not recognised by Alsa mixer or Sound (in System Settings). There are various comments on this problem, but no real solutions. For example External Mic not working but Internal Mic works on an Acer Aspiron AO722. Using the internal mic is not an option, as I need to use Skype professionally. I have tried everything in alsamixer (accessible through the Terminal Ctrl+Alt+t, command: alsamixer) and in Sound (under System Settings). I have also installed Pulseaudio, but to no avail. The headset is working normally under Skype in Windows. My AO722 came with Windows 7 on it, so I have installed Skype there too. My headset has separate connectors for ears and mic, and these go into the respective output and input on the right side of the laptop.
    This location: http://bernaerts.dyndns.org/linux/202-ubuntu-acer-ao722 sounds like an effective solution, but it is for Ubuntu Natty 11.04. The solution suggested sounds drastic to me: replace the kernel 2.6.38-13 with version 2.6.38-12. I use Ubuntu 12.04, and my kernel is 3.2.0-30-generic-pae. Question: could I try this solution with Ubuntu 12.04? Is this a risky thing to do?
    I have found a hardware workaround for this problem. The audio output seems to be a combi output that also carries a microphone connection. I have made an adapter for this output, using a 4-contact 3.5 mm audio jack plug. To this plug I have soldered two female (common stereo) connectors, one for the ears and one for the mic of my headset. The 4-contact jack, which goes into the laptop (into the audio OUTput), is wired as follows: tip = hot audio right; first sleeve after the tip = hot audio left; second sleeve = common earth (for both ears and microphone); third sleeve = microphone signal input. In the connector I could buy, the third sleeve is not so much a sleeve as part of the metal base of the connector; normally you would expect this one to be connected to earth. But connecting the mic signal to it works. Maybe ready-made adapters of this kind, and even headsets with a combi jack, can simply be purchased; I didn't check.
    When I plug in the 4-contact jack, Sound and Alsamixer immediately recognise an external microphone (even if no mic is connected to the adapter). In Sound, under the Input tab, 'Settings for internal microphone' changes into 'Settings for microphone'. The microphone comes through loud and clear; however, there is a constant noise in the background. Others have reported this too. If I disconnect the external mic from the adapter, or short-circuit the external microphone, the noise gets less but does not disappear. Therefore, it is not background noise from the room; it comes from the computer itself. However, if you talk directly into the microphone of the headset, the noise level is acceptable for VoIP.
    The headset of my Nokia C1 mobile phone comes with a 4-contact combi 3.5 mm jack plug. However, this one works (ear and mic) with the AO722 only if not inserted fully. Possibly the wiring of this headset jack is different. I cannot find detailed specs of the AO722, and don't know whether the audio 'output' was actually designed as a combi input/output. I have seen that at least one other AO model has a combi connector only. In any case, I do not believe that connecting your headset in this way will harm your computer.
    I would still appreciate a software solution. This must be possible, because the proper microphone input connector works under MS Windows.

    Read the article

  • Talking JavaOne with Rock Star Kirk Pepperdine

    - by Janice J. Heiss
    Kirk Pepperdine is not only a JavaOne Rock Star but also a Java Champion and a highly regarded expert in Java performance tuning who works as a consultant, educator, and author. He is the principal consultant at Kodewerk Ltd. He speaks frequently at conferences and co-authored the Ant Developer's Handbook. In the rapidly shifting world of information technology, Pepperdine, as much as anyone, keeps up with what's happening with Java performance tuning. Pepperdine will participate in the following sessions:
    CON5405 - Are Your Garbage Collection Logs Speaking to You?
    BOF6540 - Java Champions and JUG Leaders Meet Oracle Executives (with Jeff Genender, Mattias Karlsson, Henrik Stahl, Georges Saab)
    HOL6500 - Finding and Solving Java Deadlocks (with Heinz Kabutz, Ellen Kraffmiller, Martijn Verburg, Jeff Genender, and Henri Tremblay)
    I asked him what technological changes need to be taken into account in performance tuning. “The volume of data we're dealing with just seems to be getting bigger and bigger all the time,” observed Pepperdine. “A couple of years ago you'd never think of needing a heap that was 64g, but today there are deployments where the heap has grown to 256g and tomorrow there are plans for heaps that are even larger. Dealing with all that data simply requires more horse power and some very specialized techniques. In some cases, teams are trying to push hardware to the breaking point. Under those conditions, you need to be very clever just to get things to work -- let alone to get them to be fast. We are very quickly moving from a world where everything happens in a transaction to one where if you were to even consider using a transaction, you've lost.”
    When asked about the greatest misconceptions about performance tuning that he currently encounters, he said, “If you have a performance problem, you should start looking at code at the very least and for that extra step, whip out an execution profiler. I'm not going to say that I never use execution profilers or look at code. What I will say is that execution profilers are effective for a small subset of performance problems and code is literally the last thing you should look at.”
    And what is the most exciting thing happening in the world of Java today? “Interesting question because so many people would say that nothing exciting is happening in Java. Some might be disappointed that a few features have slipped in terms of scheduling. But I'd disagree with the first group and I'm not so concerned about the slippage because I still see a lot of exciting things happening. First, lambda will finally be with us and with lambda will come better ways.”
    For JavaOne, he is proctoring for Heinz Kabutz's lab. “I'm actually looking forward to that more than I am to my own talk,” he remarked. “Heinz will be the third non-Sun/Oracle employee to present a lab and the first since Oracle began hosting JavaOne. He's got a great message. He's spent a ton of time making sure things are going to work, and we've got a great team of proctors to help out. After that, getting my talk done, the Java Champions panel session and then kicking back and just meeting up and talking to some Java heads.”
    Finally, what should Java developers know that they currently do not know? “‘Write Once, Run Everywhere’ is a great slogan and Java has come closer to that dream than any other technology stack that I've used. That said, different hardware bits work differently and as hard as we try, the JVM can't hide all the differences. Plus, if we are to get good performance we need to work with our hardware and not against it. All this implies that Java developers need to know more about the hardware they are deploying to.”
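    As a side note for readers heading to the deadlock lab: the JDK itself can report deadlocks at runtime through the standard java.lang.management API. The sketch below is not taken from the HOL6500 materials; it is just a minimal, self-contained example (the class name DeadlockCheck is invented for illustration) of how such a check might look.

        // DeadlockCheck.java — minimal sketch of runtime deadlock detection
        // using the standard java.lang.management API.
        import java.lang.management.ManagementFactory;
        import java.lang.management.ThreadInfo;
        import java.lang.management.ThreadMXBean;

        public class DeadlockCheck {
            public static void main(String[] args) {
                ThreadMXBean threads = ManagementFactory.getThreadMXBean();
                // Returns the IDs of threads deadlocked on monitors or
                // ownable synchronizers, or null if none are found.
                long[] ids = threads.findDeadlockedThreads();
                if (ids == null) {
                    System.out.println("No deadlocked threads detected.");
                    return;
                }
                // Print a short snapshot for each deadlocked thread.
                for (ThreadInfo info : threads.getThreadInfo(ids, 10)) {
                    System.out.println(info.getThreadName()
                            + " is blocked on " + info.getLockName()
                            + " held by " + info.getLockOwnerName());
                }
            }
        }

    Running something like this from a monitoring thread gives the same basic information that a jstack thread dump shows interactively.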

    Read the article

  • SQL Saturday 43 (Redmond, WA) Review

    - by BuckWoody
    Last Saturday (June 12th) we held a “SQL Saturday” (more about those here) event in Redmond, Washington. The event was held at the Microsoft campus, at the Mixer in our new location called the “Commons”. This is a mall-like area that we have on campus, and the Mixer is a large building with lots of meeting rooms, so it made a perfect location for the event. There was a sign to help you find the parking, and once there, another sign showed how to get to the building. Since it’s a secure facility, Greg Larsen and crew had a person manning the door so that even late arrivals could get in. We had about 400 sign up for the event, and a little over 300 attend (official numbers later). I think we would have had a lot more, but the sun was out – and you just can’t underestimate the effect of that here in the Pacific Northwest. We joke a lot about not seeing the sun much, but when a day like what we had on Saturday comes around, and on a weekend at that, you’d cancel your wedding to go outside to play in the sun. And your spouse would agree with you for doing it.
    We had some top-notch speakers, including Clifford Dibble and Kalen Delaney. The food was great, we had multiple sponsors (including Confio, who seems to be at all of these) and the attendees were from all over the professional spectrum, from developers to BI to DBAs. Everyone I saw was very engaged, and when I visited room-to-room I saw almost no one in the halls – everyone was in the sessions. I also saw a much larger Microsoft presence this year, especially from Dan Jones’ team. I had a great turnout at my session, and yes, I was wearing an Oracle staff shirt. I did that because I wanted to show that the session I gave on “SQL Server for the Oracle DBA” was non-marketing – I couldn’t exactly bash Oracle wearing their colors!
    These events are amazing. I can’t emphasize enough how much I appreciate the volunteers and how much work they put into these events, and to you for coming. If you’re reading this and you haven’t attended one yet, definitely find out if there is one in your area – and if not, start one. It’s a lot of work, but it’s totally worth it.

    Read the article

  • Good Customer Service Example

    - by MightyZot
    Here’s another good customer service example for you! My wife purchased a Galaxy last week and she loves the phone.  She asked me to add it to our AT&T Microcell last night. I purchased the AT&T Microcell a couple of years ago, because cell signal out where I live sucks! Since microcells are managed on the AT&T web site, I went to the site and tried to sign in. Naturally, having not managed that microcell in a couple of years…and much to my chagrin…I discovered that I didn’t know my password OR my user ID. So, I decided to call and see if I could get my account reset that late in the day (we’re talking last night, so it was well after 7pm.) I called the technical support line, touched the appropriate numbers to navigate to microcell support, turned on my speaker phone, and prepared for the long wait. After about 45 seconds I was delighted to hear “Jeffrey” break in and ask what he could help me with. I explained that I have not managed my microcell for some time and had forgotten the user name and password.  “No problem”, he replied, and he asked me for the line I used to register the microcell. After confirming the last four digits of my IMEI number, he asked me for my wife’s number. I gave him my wife’s number and he said, “I’ve taken care of it Mr Pope. Just have her reboot her phone and you should see your microcell.” We rebooted her phone, it connected to the microcell, and voila, she was online! “Is there anything else I can help you with while I’ve got you on the line”, he said. “Nope”, I replied. “Ok, have a great night.” What made this a great customer service experience for me was that “Jeffrey” didn’t stop at giving me my user account and password, which I would probably forget anyway after setting up my wife’s new phone. Instead, he solved the real problem for me – adding my wife’s new phone to my microcell. Great job Jeffrey!

    Read the article

  • What is the rationale behind snazzy Window Managers/Composers?

    - by Emanuele
    This is more of a generic question, based on trying out Window Managers like Awesome, Mate and others. To me it looks like other Window Managers such as Gnome3 and/or Unity are heavy and pointless. I do understand that having all the composited UIs is more pleasant for the eye, but apart from that, what are the other major benefits? To give an example, when I run the game Heroes of Newerth (using nVidia drivers) under:
    Unity: the FPS drops sharply
    Gnome3: FPS is ok, but X and other processes use 15~20% of CPU and quite some additional memory
    Awesome: FPS is ok, and other processes use very little memory and CPU
    Below are some numbers regarding what I'm saying (please note my system is 64 bit, AMD Phenom II X4, 8 GB RAM, and nVidia 470 GTX, SSD disk). All data is sorted by mem usage (watch -d -n 10 "ps -e -o pcpu,pmem,pid,user,cmd --sort=-pmem | head -20"); again note that the CPU time of ./hon-x86_64 might differ because I can't take the snapshots of the system at exactly the same time.
    Awesome:
    %CPU %MEM PID USER CMD
    91.8 21.6 3579 ema ./hon-x86_64
    2.4 0.9 3223 root /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch
    1.6 0.4 2600 ema /usr/lib/erlang/erts-5.8.5/bin/beam.smp -Bd -K true -A 4 -- -root /usr/lib/erlang -progname erl -- -home /home/ema -- -noshell -noinp
    0.3 0.2 3602 ema gnome-terminal
    0.0 0.2 2698 ema /usr/bin/python /usr/lib/desktopcouch/desktopcouch-service
    Gnome3:
    %CPU %MEM PID USER CMD
    82.7 21.0 5528 ema ./hon-x86_64
    17.7 1.7 5315 ema /usr/bin/gnome-shell
    5.8 1.2 5062 root /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch
    1.0 0.4 5657 ema /usr/bin/python /usr/lib/ubuntuone-client/ubuntuone-syncdaemon
    0.7 0.3 5331 ema nautilus -n
    1.6 0.3 2600 ema /usr/lib/erlang/erts-5.8.5/bin/beam.smp -Bd -K true -A 4 -- -root /usr/lib/erlang -progname erl -- -home /home/ema -- -
    0.9 0.2 5451 ema gnome-terminal
    0.1 0.2 5400 ema /usr/bin/python /usr/lib/desktopcouch/desktopcouch-service
    Unity 3D:
    %CPU %MEM PID USER CMD
    87.2 21.1 6554 ema ./hon-x86_64
    10.7 2.6 6105 ema compiz
    17.8 1.1 5842 root /usr/bin/X :0 -auth /var/run/lightdm/root/:0 -nolisten tcp vt7 -novtswitch
    1.3 0.9 6672 root /usr/bin/python /usr/sbin/aptd
    0.4 0.4 6606 ema /usr/bin/python /usr/lib/ubuntuone-client/ubuntuone-syncdaemon
    0.5 0.3 6115 ema nautilus -n
    1.5 0.3 2600 ema /usr/lib/erlang/erts-5.8.5/bin/beam.smp -Bd -K true -A 4 -- -root /usr/lib/erlang -progname erl -- -home /home/ema -- -noshell -noinput -sasl errl
    0.3 0.2 6180 ema /usr/lib/unity/unity-panel-service
    So my point is, what's the rationale behind going towards such heavy WMs/Composers?

    Read the article

  • PASS Summit 2012: keynote and Mobile BI announcements #sqlpass

    - by Marco Russo (SQLBI)
    Today at PASS Summit 2012 there were several announcements during the keynote. Moreover, other news was not highlighted in the keynote but is equally important, if not more so, for the BI community. Let’s start from the big news in the keynote (other details on the SQL Server Blog):
    Hekaton: this is the codename for the in-memory OLTP technology that will appear (I suppose) in the next release of the SQL Server relational engine. The improvement in performance and scalability is impressive and it enables new scenarios. I’m curious to see whether it can also be used to improve ETL performance and how it differs from using SSD technology.
    Updates on Columnstore: in the next major release of SQL Server, columnstore indexes will be updatable and it will be possible to create a clustered Columnstore index. This is really great news for near real-time reporting needs!
    Polybase: in 2013 SQL Server 2012 Parallel Data Warehouse (PDW) will debut, and it will include the Polybase technology. With Polybase, a single T-SQL query will run across relational data and Hadoop data. A single query language for both. Sounds really interesting for using BigData in a more integrated way with existing relational databases. And, of course, for loading a data warehouse using BigData, which is the ultimate goal that we BI Pros all have, right?
    SQL Server 2012 SP1: Service Pack 1 for SQL Server 2012 is available now and it enables the use of PowerPivot for SharePoint and Power View on a SharePoint 2013 installation with Excel 2013.
    Power View works with Multidimensional cubes: the long-awaited feature of being able to use Power View with Multidimensional cubes was shown by Amir Netz in an amazing demonstration during the keynote. The interesting thing is that the data model behind it was based on a many-to-many relationship (something that is not fully supported by Power View with Tabular models). Another interesting aspect is that it is Analysis Services 2012 that supports DAX queries run on a Multidimensional model, enabling the use of any future tool generating DAX queries on top of a Multidimensional model. There is still no info about availability, but this is *not* included in SQL Server 2012 SP1.
    So what about Mobile BI? Well, even if it was not announced during the keynote, there is a dedicated session on this topic and there is very important news in this area:
    iOS, Android and Microsoft mobile platforms: the commitment is to get data exploration and visualization capabilities working by June 2013. This should impact at least Power View and SharePoint/Excel Services. This is the type of UI experience we are all waiting for, in order to satisfy the requests coming from users and customers. The important news here is that native applications will be available for both iOS and Windows 8, so it seems that Android will be supported initially only through the web. Unfortunately we haven’t seen any demo, so it’s not clear what the offline navigation experience will be (and whether there will be one). But at least we know that Microsoft is working on native applications in this area. I’m not too surprised that HTML5 is not the magic bullet for all the platforms. The next PASS Business Analytics conference in 2013 seems a good place to see this in action, even if I hope we don’t have to wait another six months before seeing some demo of native BI applications on mobile platforms!
Viewing Reporting Services reports on iPad is supported starting with SQL Server 2012 SP1, which has been released today. This is another good reason to install SP1 on SQL Server 2012. If you are at PASS Summit 2012, come and join me, Alberto Ferrari and Chris Webb at our book signing event tomorrow, Thursday 8 2012, at the bookstore between 12:00pm and 12:30pm, or follow one of our sessions!

    Read the article

  • Going by the eBook

    - by Tony Davis
    The book and magazine publishing world is rapidly going digital, and the industry is faced with making drastic changes to its ways of doing business. The sudden take-up of digital readers by the book-buying public has surprised even the most technologically savvy of the industry. Printed books just aren't selling like they did. In contrast, eBooks are doing well. The ePub file format is the standard around which all publishers are converging. ePub is a standard for formatting book content, so that it can be reflowed for various devices, with their widely differing screen sizes, and can be read offline. If you unzip an ePub file, you'll find familiar formats such as XML, XHTML and CSS. This is both a blessing and a curse. Whilst it is good to be able to use familiar technologies that have been developed to a level of considerable sophistication, it doesn't get us all the way to producing a viable publication. XHTML is a page-description language, not a book-description language, as we soon found out during our initial experiments, when trying to specify headers, footers, indexes and chaptering. As a result, it is difficult to predict how any particular eBook application will decide to render a book. There isn't even a consensus as to how the cover image is specified.
    All of this is awkward for the publisher. Each book must be created and revised in a form from which can be generated a whole range of 'printed media', from print books, to Mobi for Kindles, ePub for most tablets and smartphones, HTML for excerpted chapters on websites, and a plethora of other formats for other eBook readers, each with its own idiosyncrasies. In theory, if we can get our content into a clean, semantic XML form, such as DocBook, we can, from there, after every revision, perform a series of relatively simple XSLT transformations to output anything from an HTML article, to an ePub file for reading on an iPad, to an ICML file (an XML-based file format supported by the InDesign tool), ready for print publication. As always, however, the task looks bigger the closer you get to the detail.
    On the way to the utopian world of an XML-based book format that encompasses all the diverse requirements of the different publication media, ePub looks like a reasonable format to adopt. Its forthcoming support for HTML 5 and CSS 3, with ePub 3.0, means that features such as widow-and-orphan controls, multi-column flow and multi-media graphics can be incorporated into eBooks. This starts to make it possible to build an "app-like" experience into the eBook and to free publishers to think of putting content before container; to think of what content is required, be it graphical, textual or audio, from the point of view of the user, rather than what's possible in a given, traditional book "container". In the meantime, there is a gap between what publishers require and what current technology can provide and, of course, building this app-like experience is far from plain sailing. Real portability between devices is still a big challenge, and achieving the sort of wizardry seen in the likes of Theodore Gray's "Elements" eBook will require some serious device-specific programming skills. Cheers, Tony.
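    To make the XSLT step above a little more concrete, here is a minimal sketch using the JDK's standard javax.xml.transform API. The file names (book.xml, docbook-to-xhtml.xsl, book.xhtml) are hypothetical placeholders; a real pipeline would typically drive the official DocBook XSL stylesheets from a build tool rather than a hand-rolled class, but the mechanics are the same.

        // XsltSketch.java — one step of the transform pipeline described above.
        import java.io.File;
        import javax.xml.transform.Transformer;
        import javax.xml.transform.TransformerFactory;
        import javax.xml.transform.stream.StreamResult;
        import javax.xml.transform.stream.StreamSource;

        public class XsltSketch {
            public static void main(String[] args) throws Exception {
                TransformerFactory factory = TransformerFactory.newInstance();
                // Compile the stylesheet once; it can be reused on every revision.
                Transformer transformer = factory.newTransformer(
                        new StreamSource(new File("docbook-to-xhtml.xsl")));
                // Apply it to the semantic source to produce one output format.
                transformer.transform(
                        new StreamSource(new File("book.xml")),
                        new StreamResult(new File("book.xhtml")));
            }
        }

    Swapping in a different stylesheet and result target yields the other outputs (ePub XHTML, excerpted HTML, ICML), which is precisely the appeal of keeping the master content in a single semantic XML form.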

    Read the article

  • Efficiently separating Read/Compute/Write steps for concurrent processing of entities in Entity/Component systems

    - by TravisG
    Setup
    I have an entity-component architecture where Entities can have a set of attributes (which are pure data with no behavior) and there exist systems that run the entity logic and act on that data. Essentially, in somewhat pseudo-code:
    Entity {
        id;
        map<id_type, Attribute> attributes;
    }
    System {
        update();
        vector<Entity> entities;
    }
    A system that just moves along all entities at a constant rate might be:
    MovementSystem extends System {
        update() {
            for each entity in entities
                position = entity.attributes["position"];
                position += vec3(1,1,1);
        }
    }
    Essentially, I'm trying to parallelise update() as efficiently as possible. This can be done by running entire systems in parallel, or by giving each update() of one system a couple of components so that different threads can execute the update of the same system, but for a different subset of entities registered with that system.
    Problem
    In reality, these systems sometimes require that entities interact with (read/write data from/to) each other, sometimes within the same system (e.g. an AI system that reads state from other entities surrounding the currently processed entity), but sometimes between different systems that depend on each other (e.g. a movement system that requires data from a system that processes user input). Now, when trying to parallelize the update phases of entity/component systems, the phase in which data (components/attributes) from entities is read and used to compute something, and the phase where the modified data is written back to entities, need to be separated in order to avoid data races. Otherwise the only way (not taking into account just "critical section"ing everything) to avoid them is to serialize the parts of the update process that depend on other parts. This seems ugly.
    To me it would seem more elegant to be able to (ideally) have all processing running in parallel, where a system may read data from all entities as it wishes, but doesn't write modifications to that data back until some later point. The fact that this is even possible rests on the assumption that modification write-backs are usually very small in complexity and don't cost much performance, whereas computations are very expensive (relatively). So the overhead added by a delayed-write phase might be evened out by more efficient updating of entities (by having threads work a greater percentage of the time instead of waiting).
    A concrete example of this might be a system that updates physics. The system needs to both read and write a lot of data to and from entities. Optimally, there would be a system in place where all available threads update a subset of all entities registered with the physics system. In the case of the physics system this isn't trivially possible because of race conditions. So without a workaround, we would have to find other systems to run in parallel (which don't modify the same data as the physics system), otherwise the remaining threads are waiting and wasting time. However, that has disadvantages:
    Practically, the L3 cache is pretty much always better utilized when updating a large system with multiple threads, as opposed to multiple systems at once which all act on different sets of data.
    Finding and assembling other systems to run in parallel can be extremely time consuming to design well enough to optimize performance.
    Sometimes, it might not even be possible at all, because a system just depends on data that is touched by all other systems.
    Solution?
    In my thinking, a possible solution would be a system where reading/updating and writing of data are separated, so that in one expensive phase, systems only read data and compute what they need to compute, and then in a separate, performance-wise cheap write phase, attributes of entities that needed to be modified are finally written back to the entities.
    The Question
    How might such a system be implemented to achieve optimal performance, as well as making programmer life easier? What are the implementation details of such a system and what might have to be changed in the existing EC-architecture to accommodate this solution?
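    One way to picture the read/compute/write split described in the question is to have each system fill a private write buffer during the parallel compute phase and apply it in a short, serialized write phase. The sketch below is in Java rather than the question's C++-style pseudocode, every class and method name in it is invented for illustration, and it is a minimal single-threaded skeleton of the idea, not a full parallel implementation.

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // Deferred-write update: systems only read shared state while computing,
        // and buffer their modifications for a later, cheap write phase.
        class Entity {
            final int id;
            final Map<String, float[]> attributes = new HashMap<>();
            Entity(int id) { this.id = id; }
        }

        class MovementSystem {
            private final List<Entity> entities;
            private final List<Runnable> writeBuffer = new ArrayList<>();

            MovementSystem(List<Entity> entities) { this.entities = entities; }

            // Read/compute phase: safe to run concurrently with other systems'
            // compute phases, because it only reads entity data and fills a
            // private buffer of pending writes.
            void compute() {
                for (Entity e : entities) {
                    float[] pos = e.attributes.get("position");
                    final float[] newPos = { pos[0] + 1, pos[1] + 1, pos[2] + 1 };
                    writeBuffer.add(() -> e.attributes.put("position", newPos));
                }
            }

            // Write phase: applied single-threaded (or behind a barrier) after
            // all systems have finished computing, so no data races occur.
            void applyWrites() {
                for (Runnable w : writeBuffer) w.run();
                writeBuffer.clear();
            }
        }

        public class DeferredWriteDemo {
            public static void main(String[] args) {
                List<Entity> world = new ArrayList<>();
                Entity e = new Entity(1);
                e.attributes.put("position", new float[] { 0, 0, 0 });
                world.add(e);

                MovementSystem movement = new MovementSystem(world);
                movement.compute();      // parallelizable phase
                movement.applyWrites();  // cheap, serialized phase

                float[] p = e.attributes.get("position");
                System.out.printf("position = (%f, %f, %f)%n", p[0], p[1], p[2]);
            }
        }

    A common alternative with the same effect is to double-buffer the components themselves: systems read from the previous frame's copy and write to the next frame's copy, and the two buffers are swapped once per update.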

    Read the article
