Search Results

Search found 869 results on 35 pages for 'brad johnson'.

Page 2/35

  • Delphi - ListView Question

    - by Brad
    Is there a ListView (ListBox) or similar component that lets me easily drop another component into a specific column (multiple columns)? For example a checkbox, button, or drop-down list, or all of the above. (It would be nice to be able to sort via the header as well.) If not, does anyone know of a resource on how to custom-draw something like this? Thanks -Brad
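
    (A minimal VCL sketch of what the standard TListView already gives you: checkboxes in the first column and header-click sorting. Embedding buttons or drop-downs in cells still needs custom drawing or a third-party grid; FSortColumn is assumed to be an Integer field on the form.)

      procedure TForm1.FormCreate(Sender: TObject);
      begin
        ListView1.ViewStyle := vsReport;  // multi-column report mode
        ListView1.Checkboxes := True;     // checkbox on each row's first column
      end;

      procedure TForm1.ListView1ColumnClick(Sender: TObject; Column: TListColumn);
      begin
        FSortColumn := Column.Index;      // remember which header was clicked
        ListView1.AlphaSort;              // fires OnCompare below
      end;

      procedure TForm1.ListView1Compare(Sender: TObject; Item1, Item2: TListItem;
        Data: Integer; var Compare: Integer);
      begin
        if FSortColumn = 0 then
          Compare := CompareText(Item1.Caption, Item2.Caption)
        else
          Compare := CompareText(Item1.SubItems[FSortColumn - 1],
                                 Item2.SubItems[FSortColumn - 1]);
      end;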


  • Open the default browser with a POST in Delphi

    - by Brad
    I know that in Delphi you can open the default browser with:

      ShellExecute(Self.WindowHandle, 'open', 'www.website.com', nil, nil, SW_SHOWNORMAL);

    but I'm wanting to know if there is a way to automatically POST data in the newly opened browser window, OR auto-fill the login data (even in Firefox, Safari, etc.). Thanks -Brad
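
    (One common workaround, sketched under the assumption that posting through an intermediate page is acceptable: write a temporary HTML file containing a hidden form that submits itself on load, then open that file in the default browser. The URL and field names below are made up.)

      uses Windows, Classes, SysUtils, ShellAPI;

      procedure PostViaDefaultBrowser;
      var
        Page: TStringList;
        FileName: string;
      begin
        Page := TStringList.Create;
        try
          Page.Add('<html><body onload="document.forms[0].submit()">');
          Page.Add('<form action="http://www.website.com/login" method="post">');
          Page.Add('<input type="hidden" name="user" value="brad">');    // hypothetical field
          Page.Add('<input type="hidden" name="pass" value="secret">');  // hypothetical field
          Page.Add('</form></body></html>');
          FileName := ExtractFilePath(ParamStr(0)) + 'autopost.html';
          Page.SaveToFile(FileName);  // note: credentials end up on disk in the clear
        finally
          Page.Free;
        end;
        ShellExecute(0, 'open', PChar(FileName), nil, nil, SW_SHOWNORMAL);
      end;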


  • Delphi - MySQL Best Data aware components to use

    - by Brad
    I need my application to connect to my web server's MySQL database. What is the best option for this, preferably a data-aware component? I tried Zeos 7, but I keep getting the error "SQL error: Client does not support authentication protocol requested by server; consider upgrading MySQL client" and have not been able to fix it. Thanks -Brad
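
    (That particular error usually means the client library only speaks the pre-4.1 password protocol. If replacing libmysql.dll with a newer client library isn't an option, the documented server-side workaround is to reset the account to the old hash format; the user and host below are placeholders.)

      -- run on the MySQL server as a privileged user
      SET PASSWORD FOR 'appuser'@'%' = OLD_PASSWORD('your-password');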


  • Silverlight Cream for March 26, 2010 -- #821

    - by Dave Campbell
    In this Issue: Max Paulousky, Christian Schormann, John Papa, Phani Raj, David Anson (-2-, -3-), Brad Abrams (-2-), and Jeff Wilcox (-2-, -3-).

    Shoutouts: Jeff Wilcox posted his material from MIX and some preview TestFramework bits: Unit Testing Silverlight & Windows Phone Applications – talk now online. At MIX10, Jeff Wilcox demo'd an app called "Peppermint"... here's the bleeding-edge demo: "Peppermint" MIX demo sources. Erik Mork and Co. have put out their weekly This Week In Silverlight 3.25.2010. Brad Abrams has all his materials posted for his MIX10 session Mix2010: Search Engine Optimization (SEO) for Microsoft Silverlight, including a play-by-play of the demo and all source. Do you use Rooler? Well, you should! Watch a video Pete Brown did with Pete Blois on Expression Blend, Windows Phone, and Rooler. Interested in Silverlight and XNA for WP7? Me too! Michael Klucher has a post outlining the two: Silverlight and XNA Framework Game Development and Compatibility.

    From SilverlightCream.com:

      • Modularity in Silverlight Applications - An Issue With ModuleInitializeException
        Max Paulousky has a truly ugly error trace listed by way of not having a reference listed, and the obvious simple solution. Next time he'll talk about the difficult situations.

      • Using SketchFlow to Prototype for Windows Phone
        Christian Schormann has a tutorial up on using Expression Blend to develop for WP7... who better than Christian for that task?

      • Silverlight TV 18: WCF RIA Services Validation
        John Papa held forth with Nikhil Kothari on WCF RIA Services and validation just prior to MIX10; the episode was posted yesterday.

      • Building SL3 applications using OData client Library with VS 2010 RC
        Phani Raj walks through building an OData consumer in SL3, the first problem you're going to hit, and the easy solution to it.

      • Tip: When creating a DependencyProperty, follow the handy convention of "wrapper+register+static+virtual"
        David Anson has a couple more of his 'Tips' up... this first is about DependencyProperties again; having a good foundation for all your DependencyProperties is a great way to avoid problems.

      • Tip: Do not assign DependencyProperty values in a constructor; it prevents users from overriding them
        In the next post, David Anson talks about not assigning DependencyProperty values in a constructor and gives one of the two ways to get around doing so.

      • Tip: Set DependencyProperty default values in a class's default style if it's more convenient
        In his latest post, David Anson gives the second way to avoid setting a DependencyProperty value in the constructor.

      • Silverlight 4 + RIA Services - Ready for Business: Search Engine Optimization (SEO)
        Brad Abrams adds SEO to the tutorial series he's doing. He begins with his PDC09 session material on the subject and then takes off on a great detailed tutorial, all with source.

      • Silverlight 4 + RIA Services - Ready for Business: Localizing Business Application
        Brad Abrams then discusses localization and Silverlight in another detailed tutorial with all code included.

      • Silverlight Toolkit and the Windows Phone: WrapPanel, and a few others
        Jeff Wilcox has a few WP7 posts I'm going to push today. This first is from earlier this week and is about using the Toolkit in WP7; better than that, he includes the bits you need if all you want is the WrapPanel.

      • Data binding user settings in Windows Phone applications
        In the next one, from yesterday, Jeff Wilcox demonstrates saving some user info in Isolated Storage to improve the user experience, and shares all the necessary plumbing files and other external links as well.

      • Displaying 2D QR barcodes in Windows Phone applications
        In a post from today, Jeff Wilcox ported his Silverlight 2D QR Barcode app from last year to WP7... just very cool... get the source and display your Microsoft Tag.

    Stay in the 'Light!


  • How do I fix the issue causing "incomplete startup packet" log messages when trying to implement replication in PostgreSQL?

    - by colour me brad
    I've got two cloud servers running Ubuntu 13.04 and PostgreSQL 9.2. I've primarily used this blog post to aid me in setting things up. However, to do the initial database dump to the slave I'm using the pg_start_backup/pg_stop_backup strategy used in this other blog post. I've read through the docs and Postgres wikis as well. I ran into several problems I was able to solve, but I can't get past this wretched "the database is starting up" failure. I'm not sure if seeing "cp: cannot stat '/var/lib/postgresql/9.2/archive/00000001000000000000003A': No such file or directory" after "consistent recovery state reached" is normal or the first sign of a problem. The searching I've done on "the database is starting up" and "incomplete startup packet" tells me that something is sending empty TCP packets to the slave. The only thing that even knows about the slave is the master, so I'm not sure why it's sending empty packets... Has anyone worked with this and have an idea what might be going wrong? The postgres log on the slave looks like so:

      2013-08-26 13:01:38 CDT LOG: entering standby mode
      2013-08-26 13:01:38 CDT LOG: restored log file "000000010000000000000039" from archive
      2013-08-26 13:01:38 CDT LOG: incomplete startup packet
      2013-08-26 13:01:39 CDT LOG: redo starts at 0/39000020
      2013-08-26 13:01:39 CDT LOG: consistent recovery state reached at 0/390000E0
      cp: cannot stat '/var/lib/postgresql/9.2/archive/00000001000000000000003A': No such file or directory
      2013-08-26 13:01:39 CDT LOG: streaming replication successfully connected to primary
      2013-08-26 13:01:39 CDT FATAL: the database system is starting up
      2013-08-26 13:01:39 CDT FATAL: the database system is starting up
      2013-08-26 13:01:40 CDT FATAL: the database system is starting up
      2013-08-26 13:01:40 CDT FATAL: the database system is starting up
      2013-08-26 13:01:41 CDT FATAL: the database system is starting up
      2013-08-26 13:01:42 CDT FATAL: the database system is starting up
      2013-08-26 13:01:42 CDT FATAL: the database system is starting up
      2013-08-26 13:01:43 CDT FATAL: the database system is starting up
      2013-08-26 13:01:43 CDT FATAL: the database system is starting up
      2013-08-26 13:01:44 CDT FATAL: the database system is starting up
      2013-08-26 13:01:44 CDT FATAL: the database system is starting up
      2013-08-26 13:01:44 CDT LOG: incomplete startup packet
      2013-08-26 13:03:27 CDT FATAL: the database system is starting up
      2013-08-26 13:03:27 CDT FATAL: the database system is starting up
      2013-08-26 13:03:30 CDT FATAL: the database system is starting up
      2013-08-26 13:03:30 CDT FATAL: the database system is starting up

    thanks! brad
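
    (For reference, a typical PostgreSQL 9.2 standby recovery.conf looks roughly like the sketch below; host, user, and paths are placeholders. The cp error above is generally harmless: the restore_command simply found no more archived WAL to replay, after which the standby switched to streaming, exactly as the next log line shows.)

      # recovery.conf on the slave -- illustrative values only
      standby_mode = 'on'
      primary_conninfo = 'host=master.example.com port=5432 user=replicator'
      restore_command = 'cp /var/lib/postgresql/9.2/archive/%f "%p"'
      trigger_file = '/tmp/postgresql.trigger'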


  • Restoring a failed SBS 2000 box...

    - by Brad Pears
    Hi there, I read a post where you had mentioned you have had a PERC card blow up on you in an SBS box... I've got a similar situation where one of my RAID drives failed, and then the power supply failed before I could replace the drive... I then replaced the power supply and the failed drive and reconfigured the RAID array. I had a recent full backup of my Win2K SBS's C: drive stored on my Symantec Backup Exec server, so I installed Win2K Server on the C: partition and then, once I had that up and running, installed the Backup Exec agent so as to do a restore of the entire C: drive including system state. This all worked just fine, until I had to reboot. I received an "incorrect drive configuration" error and then it hangs. I figure that likely makes sense, because I think my RAID array is configured slightly differently now, in that the partitions may be sized ever so slightly differently than they were before. Is there a way I can just restore from my backup BUT maybe exclude some of the registry and hidden boot files it wants to restore, so that it boots with the configuration now active on that machine, not the pre-blow-up configuration files? I also read a post that indicated you might have to install the exact same service pack etc. before attempting a restore, but that does not make sense to me, being as the entire C: drive contents are going to be overwritten by the restore anyway. The basic OS install is just to be able to get the Backup Exec agent installed. I can't understand why one would need to install the exact same SP level. Can you shed some light on what I might be able to do to get this thing up and running? Thanks, Brad


  • Using EigenObjectRecognizer

    - by Meko
    Hi. I am trying to do facial recognition using Emgu CV. Could I do it using EigenObjectRecognizer? Also, can someone explain its usage? Because even when there is no matching photo, it still returns a value. Here is an example from the Internet:

      Image<Gray, Byte>[] trainingImages = new Image<Gray, Byte>[5];
      trainingImages[0] = new Image<Gray, byte>("brad.jpg");
      trainingImages[1] = new Image<Gray, byte>("david.jpg");
      trainingImages[2] = new Image<Gray, byte>("foof.jpg");
      trainingImages[3] = new Image<Gray, byte>("irfan.jpg");
      trainingImages[4] = new Image<Gray, byte>("joel.jpg");

      String[] labels = new String[] { "Brad", "David", "Foof", "Irfan", "Joel" };

      MCvTermCriteria termCrit = new MCvTermCriteria(16, 0.001);

      EigenObjectRecognizer recognizer = new EigenObjectRecognizer(
          trainingImages, labels, 5000, ref termCrit);

      Image<Gray, Byte> testImage = new Image<Gray, Byte>("brad_test.jpg");
      String label = recognizer.Recognize(testImage);
      Console.Write(label);

    It returns "Brad". But if I change the photo in testImage it also returns some name, or even "Brad". Is this method good for face recognition? Or is there any better method?
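
    (The third constructor argument is an eigen-distance threshold. With a loose value such as 5000, Recognize returns the nearest training label no matter how poor the match; tighten it and unknown faces come back as an empty string instead. A sketch, where 2000 is only a starting point to tune against your own images:)

      MCvTermCriteria termCrit = new MCvTermCriteria(16, 0.001);
      EigenObjectRecognizer recognizer = new EigenObjectRecognizer(
          trainingImages, labels, 2000, ref termCrit);  // tighter threshold

      String label = recognizer.Recognize(testImage);
      if (label == String.Empty)
          Console.Write("No match");   // face was not close to any training image
      else
          Console.Write(label);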


  • Open World 2012

    - by jeffrey.waterman
    For those of you fortunate enough to be attending this year's Oracle OpenWorld, here is a session I recommend carving time out of your hectic schedule to attend:

    Public Sector General Session (session ID#: GEN8536)
    Wednesday, October 3, 10:15 a.m.–11:15 a.m., Westin San Francisco, Metropolitan III Room

    Speakers: Mark Johnson, SVP Oracle Public Sector; Peter Doolan, CTO Oracle Public Sector; Robert Livingston, founding partner of the Livingston Group and former member of the US Congress.

    Join Mark Johnson for an update on Oracle in government. Mark will be joined by Peter Doolan and Robert Livingston to discuss current topics facing governments and how Oracle can help organizations achieve their goals.

    I'll be posting more interesting sessions as I peruse the conference agenda over the next week or so. If you see an interesting session, please feel free to share your suggestions in the comments section.


  • How do I combine static and dynamic DHCP leases on a Cisco router?

    - by Brad
    Basically, what I need is super similar to the unanswered Cisco forum question below: https://supportforums.cisco.com/message/3139749#3139749

    I have a Cisco 850 Series router. I have configured a DHCP pool for the 10.0.0.0/24 network. I have excluded 10.0.0.1 - 10.0.0.99 from the DHCP pool. I want to add a static DHCP pool for stuff and I want DHCP to statically assign them the addresses of my choice below 100. Actually, I don't care what addresses I statically assign. They can be anything in the pool for all I care, I just want it to work.

    Why are you doing this? Just statically assign the IPs on the devices! I don't want to do this because I have some laptop users. They could obviously only use that static IP here. This isn't a problem if they could be bothered to change any location setting or something. They can't. So it HAS to be DHCP. It also has to be static IPs because I need to forward ports to them. I know, I know, this is weird but it's an apartment LAN/WLAN so this isn't exactly a typical use case.

    Relevant sections of config below:

      ip dhcp excluded-address 10.0.0.1 10.0.0.99
      !
      ip dhcp pool Internal-net
         import all
         network 10.0.0.0 255.255.255.0
         default-router 10.0.0.1
         domain-name 1770.local
         lease 7
      !
      ip dhcp pool static-pool
         import all
         origin file flash://staticmap
         default-router 10.0.0.1
         domain-name 1770.local

    Contents of staticmap:

      *time* Aug 5 2010 09:00 AM
      *version* 2
      !IP address     Type    Hardware address    Lease expiration
      10.0.0.100/24   1       001f.5b3e.d50a      Infinite
      *end*

    You can see here I was trying addresses outside the excluded-address range to see if that would make any difference. My testing machine's MAC:

      mainframe:~ brad$ ifconfig en1
      en1: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
              ether 00:1f:5b:3e:d5:0a

    What shows up in the DHCP binding table:

      basestar#show ip dhcp binding
      Bindings from all pools not associated with VRF:
      IP address      Client-ID/          Lease expiration      Type
                      Hardware address/
                      User name
      10.0.0.112      0100.1f5b.3ed5.0a   Aug 12 2010 10:06 AM  Automatic

    What's up with the funny looking MAC in the DHCP binding table?? Is what I'm trying to accomplish basically impossible? Am I going about this the wrong way? All I want to to be able to do is port forward some ports to specific devices. The way I would do this with a consumer router is to do what I'm trying to do here; assign static DHCP to those devices then configure PAT for ports on those addresses.
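
    (Worth noting: the "funny looking MAC" is the DHCP client identifier, which for Ethernet is the MAC address prefixed with 01. IOS also supports manual bindings as one-host pools keyed on that identifier, which is a common way to get fixed addresses out of DHCP; a sketch for the laptop above, with an arbitrary pool name and address:)

      ip dhcp pool STATIC-MAINFRAME
         host 10.0.0.50 255.255.255.0
         client-identifier 0100.1f5b.3ed5.0a
         default-router 10.0.0.1
         domain-name 1770.local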


  • FTP server questions

    - by Brad
    I'm currently trying to set up a home FTP server using Debian and proftpd, and I've run into a problem that has me confused. I have most things set up already, I believe, but I cannot access my FTP server using my external IP. I've forwarded the correct port on my router and I've checked http://www.yougetsignal.com/tools/open-ports/ to be sure that it is, in fact, open. I've used telnet locally on my server to check that the port accepts connections. I am able to use FTP via LAN. But I still cannot access anything externally. I'm thinking that there's still some router configuration to be done in order to fix this, such as routing all connections on my FTP port to my server via the internal IP, but I can't find any option on my router to do this. Is this a necessary step? There is an option to use DMZ hosting, but I'd rather avoid it if possible. I can provide additional information as requested; please let me know any information that you think could help at all. Thanks. -Brad

    PS - I have a Telus Actiontec modem/router.

    Update: Trying my FTP server out at work worked! I guess I did set it up correctly after all. What is confusing me, though, is why the server no longer allows me to connect locally. That seems very weird to me. Also, I don't really understand why I am denied outright if I attempt to connect from the same network using the external address. I'll look into it more when I get home, but thank you guys for your help.

    Update 2: I found the problem with not being able to connect locally anymore. I was setting the masquerade address to my external IP, and for some reason that was causing it to hang on MLSD when I connected using my LAN address. I've removed the masquerade address and I'm going to check whether I need it at work tomorrow. I'll update this page if I find anything.
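
    (For anyone hitting the same wall, the two proftpd directives in play are sketched below with placeholder values; the passive port range must also be forwarded on the router, and as noted above a MasqueradeAddress can confuse clients connecting from inside the LAN.)

      # /etc/proftpd/proftpd.conf -- illustrative values
      MasqueradeAddress  203.0.113.45     # external IP to advertise in PASV replies
      PassivePorts       49152 49172      # forward this range on the router too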


  • Firebird Insert Distinct Data Using ZeosLib and Delphi

    - by Brad
    I'm using Zeos 7 and Delphi 2009, and I want to check whether a value is already in the database under a specific field before I post the data to the database. Example: field KEYWORD has values Cheese, Mouse, Trap; tblkeywordKEYWORD.Value = Cheese. What is wrong with the following? And is there a better way?

      zQueryKeyword.SQL.Add('IF NOT EXISTS(Select KEYWORD from KEYWORDLIST ='''+
        tblkeywordKEYWORD.Value+''')INSERT into KEYWORDLIST(KEYWORD) VALUES ('''+
        tblkeywordKEYWORD.Value+'''))');
      zQueryKeyword.ExecSql;

    I tried using a unique constraint in IBExpert, but it gives the following error:

      Invalid insert or update value(s): object columns are constrained - no 2 table rows can have duplicate column values.
      attempt to store duplicate value (visible to active transactions) in unique index "UNQ1_KEYWORDLIST".

    Thanks for any help -Brad
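
    (Firebird 2.1 has a single-statement answer for exactly this pattern: UPDATE OR INSERT ... MATCHING. Combined with a parameter, it also removes the quoting problems in the concatenated SQL above; a sketch:)

      zQueryKeyword.SQL.Text :=
        'UPDATE OR INSERT INTO KEYWORDLIST (KEYWORD) ' +
        'VALUES (:kw) MATCHING (KEYWORD)';
      zQueryKeyword.ParamByName('kw').AsString := tblkeywordKEYWORD.Value;
      zQueryKeyword.ExecSQL;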


  • Scrollable div on iPhone without using 2 fingers?

    - by Brad Parks
    Hi there. I've got a UIWebView embedded in my iPhone app, and I'd like to keep a locked header and footer DIV on the page at all times, with a scrollable center DIV. I know that I could do this using a header/footer that are UIView controls, but I want the header and footer to be HTML divs, as a pure HTML/JS/CSS solution will be easier to port to Android/Palm Pre/Adobe AIR, which is going to be on my todo list relatively soon. I can do this using techniques like the one mentioned here: http://defunc.com/blog/?p=94 But this requires that the user use 2 fingers to scroll the div, which is not satisfactory to me... Any suggestions on how to do this? Thanks, Brad
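
    (One pure-JS approach is to drive the div's scrollTop from touch events yourself, which restores one-finger scrolling inside an overflow area; a sketch, where the element id is made up:)

      var el = document.getElementById('content');  // the scrollable center div
      var startY = 0, startTop = 0;
      el.addEventListener('touchstart', function (e) {
        startY = e.touches[0].pageY;
        startTop = el.scrollTop;
      }, false);
      el.addEventListener('touchmove', function (e) {
        e.preventDefault();  // keep the whole page from panning
        el.scrollTop = startTop + (startY - e.touches[0].pageY);
      }, false);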


  • WordPress rewrite image path

    - by Brad
    I've got a website that is running WordPress. It has several pictures that I am retrieving from a datafeed. The images from the datafeed are at locations like:

      http://image4.example.com/640/examples/example.jpg
      http://image4.example.com/640/example.jpg

    The image4 and 640 parts of the location can change. I want to rewrite the image URLs so that they appear to come from my website. I've tried:

      RewriteCond %{HTTP_HOST} !^image4.example.com$
      RewriteRule ^([^/]+)$ http://image4.mywebsite.com/$1 [L,R=301]

    but it doesn't work. I don't know much about mod_rewrite. Any help would be appreciated, and no, I'm not hijacking the images; I have permission to use them and the bandwidth. Thanks -Brad
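
    (A plain rewrite can only redirect; to actually serve the remote file under your own hostname you need the [P] proxy flag, which in turn requires mod_proxy to be enabled. A sketch with placeholder hostnames, not a tested rule set:)

      RewriteEngine On
      RewriteCond %{HTTP_HOST} ^images\.mywebsite\.com$ [NC]
      RewriteRule ^(.*)$ http://image4.example.com/$1 [P,L]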


  • MySQL: get data from 2 tables (need help)

    - by quangtruong1985
    Assume that I have 2 tables: members and orders (MySQL).

      Members:
        id | name
        1  | Lee
        2  | brad

      Orders:
        id | member_id | status (1: paid, 2: unpaid) | total
        1  | 1         | 1                           | 1000000
        2  | 1         | 1                           | 1500000
        3  | 1         | 2                           | 1300000
        4  | 2         | 1                           | 3000000
        5  | 2         | 2                           | 3500000
        6  | 2         | 2                           | 3300000

    I have a SQL query:

      SELECT m.name, COUNT(o.id) as number_of_order, SUM(o.total) as total2
      FROM orders o
      LEFT JOIN members m ON o.member_id=m.id
      GROUP BY o.member_id

    which gives me this:

      name | number_of_order | total2
      Lee  | 3               | 3800000
      brad | 3               | 9800000

    All that I want is something like this:

      name | number_of_order | total2  | Paid Unpaid | Paid    Unpaid
      ----------------------------------------------------------------
      Lee  | 3               | 3800000 | 2    1      | 2500000 1300000
      brad | 3               | 9800000 | 1    2      | 3000000 6800000

    How to make a query that can give me that result? Thanks for your time!
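
    (Conditional aggregation produces exactly that shape; in MySQL, SUM over a boolean expression counts the rows where it is true. A sketch against the tables above:)

      SELECT m.name,
             COUNT(o.id)                       AS number_of_order,
             SUM(o.total)                      AS total2,
             SUM(o.status = 1)                 AS paid_orders,
             SUM(o.status = 2)                 AS unpaid_orders,
             SUM(IF(o.status = 1, o.total, 0)) AS paid_total,
             SUM(IF(o.status = 2, o.total, 0)) AS unpaid_total
      FROM orders o
      LEFT JOIN members m ON o.member_id = m.id
      GROUP BY o.member_id;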


  • .Net friendly, local, key-value pair, replicatable datastore

    - by Brad Mathews
    I am looking for a key/value-type datastore with very specific requirements. Does anyone know of anything that will work?

    1. It needs to be a component of some sort. No additional installation needed.
    2. The datastore needs to be on the local hard drive.
    3. I am using VB.Net for a desktop app running Windows XP through 7, so it needs to be callable from that environment.
    4. It needs to be replicable. If I have four copies of my app running on the network, each local copy of the datastore needs to replicate with the others, as close to real time as possible.

    The first three are easy; I can do that with ADO.Net out of the box. The last one, replication, is the one I do not have an answer to. Does such an animal exist?

    Thanks, Brad


  • Launching my deployed app gives a "has stopped working" error. How do I debug this?

    - by Brad Mathews
    I deployed a VB.Net app, and when I run it I get "AppName has stopped working" / "Windows is checking for a solution to the problem" along with a Cancel button under Windows 7. Under XP I only get the option to send the error report to Microsoft or not. There is no apparent way to hook into a debugger, and I am not getting any exception data. I have put MsgBoxes at the very start of my code and they are not hit, so it is failing before any of my code even executes. I have checked all the dependencies I can think of. I developed the app with VS2008 on Windows 7 and am deploying to Windows 7 and WinXP. I need some advice: how do I debug this? Thanks, Brad
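
    (One way to see the exception before Windows Error Reporting swallows it: handle the VB application framework's UnhandledException event and dump the details. Note it will not fire while a debugger is attached, and if the crash happens before the framework even starts, e.g. a missing assembly, the Windows Application event log is the place to look instead. A sketch for ApplicationEvents.vb:)

      Namespace My
          Partial Friend Class MyApplication
              Private Sub MyApplication_UnhandledException(ByVal sender As Object,
                      ByVal e As Microsoft.VisualBasic.ApplicationServices.UnhandledExceptionEventArgs) _
                      Handles Me.UnhandledException
                  ' log and show whatever killed the app
                  System.IO.File.WriteAllText("crash.log", e.Exception.ToString())
                  MsgBox(e.Exception.ToString())
                  e.ExitApplication = True
              End Sub
          End Class
      End Namespace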


  • Making a window pop in and out of the edge of the screen

    - by Brad
    I'm trying to re-write an application I have for Windows in Objective-C for my Mac, and I want to be able to do something like Mac's hot corners. If I move my mouse to the left side of the screen it will make a window visible, if I move it outside of the window location the window will hide again. (window would be pushed up to the left side of screen). Does anyone know where I can find some demo code (or reference) on how to do this, or at least how to tell where the mouse is at, even if the current application is not on top. (not sure how to word this, too used to Windows world). Thank you -Brad
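
    (On Mac OS X 10.6 and later, one route is a global mouse-moved monitor, which fires even when your app is not frontmost; slidePanel below is a hypothetical NSWindow outlet, and the 2-pixel edge zone is arbitrary.)

      [NSEvent addGlobalMonitorForEventsMatchingMask:NSMouseMovedMask
                                             handler:^(NSEvent *event) {
          NSPoint p = [NSEvent mouseLocation];        // global screen coordinates
          if (p.x <= 2.0) {
              [slidePanel makeKeyAndOrderFront:nil];  // mouse hit the left edge
          } else if (!NSPointInRect(p, [slidePanel frame])) {
              [slidePanel orderOut:nil];              // mouse wandered off the panel
          }
      }];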


  • SQL Filter Multiple Tables Data

    - by Brad
    If it matters, I'm using a Firebird 2.1 database. I have three tables: one with keywords, one with negative keywords, and one with required keywords. I need to be able to filter the data so the output has just the keywords that meet the stipulation of not being in the negative keyword list; and IF there are any required words, then the results must contain those keywords. The tables are very similar; the field in each table that I would be matching against is called keyword. I don't know SQL very well at all. I'm guessing it would be something like:

      SELECT keyword FROM keywordstable
      WHERE keyword IN requiredkeywordstable
      AND NOT IN negativekeywordstable

    Just a side note: the required keywords table could be empty, which would mean there are no required keywords. Any help would be appreciated. -Brad
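
    (In SQL, that stipulation translates to a NOT EXISTS against the negatives plus an "either the required table is empty, or this keyword is in it" test; a sketch using the field names above:)

      SELECT k.keyword
      FROM keywordstable k
      WHERE NOT EXISTS (SELECT 1 FROM negativekeywordstable n
                        WHERE n.keyword = k.keyword)
        AND (NOT EXISTS (SELECT 1 FROM requiredkeywordstable r)
             OR EXISTS (SELECT 1 FROM requiredkeywordstable r
                        WHERE r.keyword = k.keyword));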


  • PowerShell Replace Regex

    - by Brad
    I have a Select-String which is searching an IIS log for a particular string and returning the 2 lines above and one line below. So results look like this:

        2012-06-15 18:26:09 98.138.206.39 OutboundConnectionResponse SMTPSVC1 WEB10 - 25 - - 220+mta1083.sbc.mail.ne1.yahoo.com+ESMTP+YSmtp+service+ready 0 0 60 0 218 SMTP - - - -
        2012-06-15 18:26:09 98.138.206.39 OutboundConnectionCommand SMTPSVC1 WEB10 - 25 EHLO - WEB10.DOMAINCOM 0 0 4 0 218 SMTP - - - -
      > 2012-06-15 18:26:09 74.125.244.10 OutboundConnectionResponse SMTPSVC1 WEB10 - 25 - - 550+IP+Authorization+check+failed+-+psmtp 0 0 41 0 218 SMTP - - - -
        2012-06-15 18:26:09 74.125.244.10 OutboundConnectionCommand SMTPSVC1 WEB10 - 25 RSET - - 0 0 4 0 218 SMTP - - - -

    Note the third line begins with ">", denoting that's the line that Select-String matched on. I am trying to do a -replace on that line to wrap it in <font color="red">...</font>, but my replace doesn't seem to work. Here's my code:

      $results = $results -replace "(^> )(.*)$", "<font color='red'>$1</font>"

    Can any PowerShell regex gurus out there tell me why my regular expression isn't matching? Thanks, Brad
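
    (A guess without seeing the full script: Select-String emits MatchInfo objects rather than strings, so ^ and $ may not anchor against the lines you see on screen. Flattening the results to plain text lines first sidesteps that; a sketch:)

      $results = $results | Out-String -Stream | ForEach-Object {
          $_ -replace '^(> .*)$', '<font color="red">$1</font>'
      }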


  • Python If Statement Defaults to an elif

    - by Brad Carvalho
    Not sure why my code is defaulting to this elif, but it's never getting to the else statement, even going as far as throwing index-out-of-bounds errors in the last elif. Please disregard my non-use of regex; it wasn't allowed for this homework assignment. The problem is the last elif before the else statement. Cheers, Brad

      if item == '':
          print ("%s\n" % item).rstrip('\n')
      elif item.startswith('MOVE') and not item.startswith('MOVEI'):
          print 'Found MOVE'
      elif item.startswith('MOVEI'):
          print 'Found MOVEI'
      elif item.startswith('BGT'):
          print 'Found BGT'
      elif item.startswith('ADD'):
          print 'Found ADD'
      elif item.startswith('INC'):
          print 'Found INC'
      elif item.startswith('SUB'):
          print 'Found SUB'
      elif item.startswith('DEC'):
          print 'Found DEC'
      elif item.startswith('MUL'):
          print 'Found MUL'
      elif item.startswith('DIV'):
          print 'Found DIV'
      elif item.startswith('BEQ'):
          print 'Found BEQ'
      elif item.startswith('BLT'):
          print 'Found BLT'
      elif item.startswith('BR'):
          print 'Found BR'
      elif item.startswith('END'):
          print 'Found END'
      elif item.find(':') and item[(item.find(':') - 1)].isalpha():
          print 'Mya have found a label'
      else:
          print 'Not sure what I found'
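
    (The last elif is a str.find pitfall: find returns -1 when ':' is absent, -1 is truthy, and item[-1 - 1] then silently indexes from the end of the string, or raises IndexError on very short lines. Testing the result explicitly fixes it; a sketch in the same Python 2 style:)

      pos = item.find(':')
      if pos > 0 and item[pos - 1].isalpha():
          print 'May have found a label'
      else:
          print 'Not sure what I found'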


  • How do I handle an incoming email and send a reply using Google App Engine?

    - by zjm1126
    I read the docs and know how to send an email using GAE:

      from google.appengine.api import mail

      mail.send_mail(sender="[email protected]",
                     to="Albert Johnson <[email protected]>",
                     subject="Your account has been approved",
                     body="""
      Dear Albert:

      Your example.com account has been approved. You can now visit
      http://www.example.com/ and sign in using your Google Account to
      access new features. Please let us know if you have any questions.

      The example.com Team
      """)

    But how do I detect an incoming email and then do something with it? Thanks
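
    (Incoming mail works the other way around: enable the inbound mail service and App Engine POSTs each message to /_ah/mail/<address>, which the Python runtime lets you handle with InboundMailHandler. A sketch; the app ID and reply text are placeholders.)

      # app.yaml needs:
      #   inbound_services:
      #   - mail
      #   handlers:
      #   - url: /_ah/mail/.+
      #     script: handle_incoming.py
      from google.appengine.api import mail
      from google.appengine.ext import webapp
      from google.appengine.ext.webapp.mail_handlers import InboundMailHandler
      from google.appengine.ext.webapp.util import run_wsgi_app

      class ReplyHandler(InboundMailHandler):
          def receive(self, message):
              # message.sender / message.subject come from the incoming mail
              mail.send_mail(sender='[email protected]',
                             to=message.sender,
                             subject='Re: ' + message.subject,
                             body='Thanks - we received your message.')

      run_wsgi_app(webapp.WSGIApplication([ReplyHandler.mapping()]))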


  • Chicago SQL Saturday

    - by Johnm
    This past Saturday, April 17, 2010, I journeyed north to the great city of Chicago for some SQL Server fun, learning and fellowship. The Chicago edition of this grassroots phenomenon was the 31st scheduled SQL Saturday since the program's birth in late 2007. The Chicago SQL Saturday consisted of four tracks with eight sessions each and was a very energetic and fast-paced day for the 300 or so SQL Server enthusiasts in attendance.

    The speaker lineup included national notables such as Kevin Kline, Brent Ozar, and Brad McGehee. My hometown of Indianapolis was well represented in the speaker lineup with Arie Jones, Aaron King and Derek Comingore.

    The day began with a very humorous keynote by Kevin Kline and Brent Ozar, who emphasized the importance of community events such as SQL Saturday and the monthly user group meetings. They also brilliantly included the impact that getting involved in the SQL community through social media can have on your professional career.

    My approach to the day was to try to experience as much of the event as I could, so there were very few sessions that I attended for their full duration. I leaped from session to session like a bumblebee, gleaning bits of nectar from each session. Amid these leaps I took the opportunity to briefly chat with some of the in-the-queue speakers as well as other attendees who wandered the hallways. I especially enjoyed a great discussion with Devin Knight about his plans regarding the upcoming Jacksonville SQL Saturday, as well as an interesting SQL interpretation of the Iron Chef, which I think would catch on like wildfire.

    There were two sessions that stood out as exceptional, so much so that I could not pull myself away:

    Kevin Kline presented on "SQL Server Internals and Architecture". This session could have been classified as one that is intended for the beginner. Kevin even personally warned me of such as I entered the room. I am a believer in revisiting the basics regardless of the level of your mastery, so I entered into this session in that spirit. It was a very clear and precise presentation, masterfully illustrated and demonstrated.

    Brad McGehee presented on "How and When to Use Indexed Views". This was a topic that I was recently exploring and was considering for use in an integration project. Brad effectively communicated the complexity of this feature and what is involved in gaining its full benefit. It was clear at the conclusion of this session that it was not the right feature for my specific needs.

    Overall, the event was a great success. The use of volunteers, from an attendee's perspective, was masterful. The only recommendation that I would have for the next Chicago SQL Saturday would be to include more time in between sessions to permit some level of networking among the attendees, one-on-one questions for speakers and visits to the sponsor booths.

    Congratulations to Wendy Pastrick, Ted Krueger, and Aaron Lowe for their efforts and a very successful SQL Saturday!


  • How Mary Meeker’s Latest Findings May Make You Re-Imagine Commerce

    - by Brenna Johnson-Oracle
    Today, Mary Meeker released her highly anticipated annual "Internet Trends" presentation for 2014. All 164 slides are jam-packed with pretty much everything you need to know about the state of the Internet. And as luck would have it, Oracle is staying ahead of these trends (but we'll talk about that later). There were a few surprises, some stats to solidify what you likely already know, and Meeker's novel observations about where we are all going. What interested me the most is not only how people are engaging in their personal lives, but how they engage with brands.

    As you could probably predict, Internet usage growth is slowing while tablet user and mobile data traffic growth continue their meteoric rise around the globe, with tremendous growth in underpenetrated markets like China, India, Brazil and Indonesia. Now hold the "Internet is dead" comments. Keep in mind there's still plenty of room to grow, and a multiscreen model is Meeker's vision for our future. Despite 1.5x YOY growth for mobile traffic, mobile still only makes up about 23% of all traffic today. With tablet shipments easily outpacing figures for PCs even at their height (in 2007), mobile will only continue on its path, but won't be everything to everyone. Mobile won't replace every touchpoint; it's just created our shorter attention spans and demand for simpler, more personal experiences. As Meeker points out, TVs, tablets, PCs, and smartphones are used for different activities at present, but lines will blur (for example, 84% of smartphone owners use their device while watching TV).

    Day-to-day activities are being re-imagined through simple, beautiful user experiences. It seems like every day I discover a new way a brand/site/app made the most mundane or mounting task enjoyable and frictionless, and I'm not alone. Meeker points out that the evolution of how we do everything, from how we communicate, get information, use money, meet someone, get places, order a meal, and consume media, is all done through new user interfaces that make day-to-day tasks simpler. This movement has caused just about everyone's patience for a poor UX to take a nosedive. And it's not just the digital user experience; technology is making a lot of people's offline lives easier, and less expensive. Today 47% of online shopping utilizes free shipping, nearly half. And Meeker predicts same-day local delivery will be the "next big thing" (and you can take a guess on who will own that).

    Content, community and commerce create the "Internet Trifecta." Meeker pointed out that when content, communities and commerce occur in a single experience, it's embraced by consumers, which translates to big dollars for brands. The magic happens when consumers can get inspired, research, and buy in a single experience. As the buying cycle has changed and touchpoints (Web, mobile, social, store) are no longer tied to "roles" or steps in the customer journey, brands must make all experiences (content and commerce) available in a single, adaptable experience. (We at Oracle Commerce have a lot to say on this topic – stay tuned!)

    And in what Meeker calls the "biggest re-imagination of all": consumers enabled with smartphones and sensors are creating troves of findable and sharable data, which she says is in the early stages but growing rapidly. She notes that the transparency and patterns of consumers with this hardware (FYI: there are up to 10 sensors embedded in smartphones now) have created a Big Data treasure chest to be mined to improve business and the life of the consumer. The opportunities are endless.

    So what does it all mean for a company doing business online? Start thinking about how you can:

    Re-imagine your experience. Not your online experience and your mobile experience and your social experience: your overall experience. When consumers can research, buy, and advocate from anywhere (and their attention spans are at an all-time low), channels don't exist. Enable simple and beautiful interactions informed by all of the online and offline data you leverage across your enterprise. Ethically leverage the endless supply of data (user-generated content, clicks, purchases, in-store behavior, social activity) to make experiences more beautiful, more accurate, and more personalized (not to mention more lucrative for you).

    Re-imagine content and commerce. Content and commerce must co-exist in a single destination where shoppers can get inspired, explore, research, share, and purchase in a collective experience. Think of how you can deliver an experience where all types of experiences (brand stories and commerce) adapt to every customer need. (Look for more on this topic coming soon.)

    Re-imagine your reach. Look to Meeker's findings to see how the global appetite for digital experiences is growing, but under-served in many places (e.g., India, Mexico, Indonesia, Brazil, the Philippines). Growing your online business to a new geography doesn't have to mean starting from scratch or having an entirely new team manage the new endeavor. Expand using what you've already built in a multisite framework, with global language support. And of course, make sure it's optimized for mobile!

    Re-imagine the possible. After every Meeker report, I'm always left with the thought "we are just at the beginning." Every day there is more data, more possibilities, more online consumers, and more opportunities to use the latest technology to get closer to your customers and be more successful. There's a lot going on in our Product Development and Product Innovations groups to automate innovation for our customers, so that they can continue to stay ahead of these trends without disrupting their business. Check out a recent interview with our Innovations Team on some of these new possibilities.

    Staying on track despite the seemingly endless possibilities out there is the hard part. Prioritizing where you will focus based on your unique brand promise, customers and goals is what you do best. To learn how Oracle Commerce can help your business achieve your goals, check out oracle.com/commerce. Check out Meeker's entire report here.


  • Heaps of Trouble?

    - by Paul White NZ
    If you’re not already a regular reader of Brad Schulz’s blog, you’re missing out on some great material.  In his latest entry, he is tasked with optimizing a query run against tables that have no indexes at all.  The problem is, predictably, that performance is not very good.  The catch is that we are not allowed to create any indexes (or even new statistics) as part of our optimization efforts. In this post, I’m going to look at the problem from a slightly different angle, and present an alternative solution to the one Brad found.  Inevitably, there’s going to be some overlap between our entries, and while you don’t necessarily need to read Brad’s post before this one, I do strongly recommend that you read it at some stage; he covers some important points that I won’t cover again here. The Example We’ll use data from the AdventureWorks database, copied to temporary unindexed tables.  A script to create these structures is shown below: CREATE TABLE #Custs ( CustomerID INTEGER NOT NULL, TerritoryID INTEGER NULL, CustomerType NCHAR(1) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL, ); GO CREATE TABLE #Prods ( ProductMainID INTEGER NOT NULL, ProductSubID INTEGER NOT NULL, ProductSubSubID INTEGER NOT NULL, Name NVARCHAR(50) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL, ); GO CREATE TABLE #OrdHeader ( SalesOrderID INTEGER NOT NULL, OrderDate DATETIME NOT NULL, SalesOrderNumber NVARCHAR(25) COLLATE SQL_Latin1_General_CP1_CI_AI NOT NULL, CustomerID INTEGER NOT NULL, ); GO CREATE TABLE #OrdDetail ( SalesOrderID INTEGER NOT NULL, OrderQty SMALLINT NOT NULL, LineTotal NUMERIC(38,6) NOT NULL, ProductMainID INTEGER NOT NULL, ProductSubID INTEGER NOT NULL, ProductSubSubID INTEGER NOT NULL, ); GO INSERT #Custs ( CustomerID, TerritoryID, CustomerType ) SELECT C.CustomerID, C.TerritoryID, C.CustomerType FROM AdventureWorks.Sales.Customer C WITH (TABLOCK); GO INSERT #Prods ( ProductMainID, ProductSubID, ProductSubSubID, Name ) SELECT P.ProductID, P.ProductID, P.ProductID, P.Name FROM AdventureWorks.Production.Product P WITH (TABLOCK); GO INSERT #OrdHeader ( SalesOrderID, OrderDate, SalesOrderNumber, CustomerID ) SELECT H.SalesOrderID, H.OrderDate, H.SalesOrderNumber, H.CustomerID FROM AdventureWorks.Sales.SalesOrderHeader H WITH (TABLOCK); GO INSERT #OrdDetail ( SalesOrderID, OrderQty, LineTotal, ProductMainID, ProductSubID, ProductSubSubID ) SELECT D.SalesOrderID, D.OrderQty, D.LineTotal, D.ProductID, D.ProductID, D.ProductID FROM AdventureWorks.Sales.SalesOrderDetail D WITH (TABLOCK); The query itself is a simple join of the four tables: SELECT P.ProductMainID AS PID, P.Name, D.OrderQty, H.SalesOrderNumber, H.OrderDate, C.TerritoryID FROM #Prods P JOIN #OrdDetail D ON P.ProductMainID = D.ProductMainID AND P.ProductSubID = D.ProductSubID AND P.ProductSubSubID = D.ProductSubSubID JOIN #OrdHeader H ON D.SalesOrderID = H.SalesOrderID JOIN #Custs C ON H.CustomerID = C.CustomerID ORDER BY P.ProductMainID ASC OPTION (RECOMPILE, MAXDOP 1); Remember that these tables have no indexes at all, and only the single-column sampled statistics SQL Server automatically creates (assuming default settings).  The estimated query plan produced for the test query looks like this (click to enlarge): The Problem The problem here is one of cardinality estimation – the number of rows SQL Server expects to find at each step of the plan.  The lack of indexes and useful statistical information means that SQL Server does not have the information it needs to make a good estimate.  
    Every join in the plan shown above estimates that it will produce just a single row as output. Brad covers the factors that lead to the low estimates in his post.

    In reality, the join between the #Prods and #OrdDetail tables will produce 121,317 rows. It should not surprise you that this has rather dire consequences for the remainder of the query plan. In particular, it makes a nonsense of the optimizer's decision to use Nested Loops to join to the two remaining tables. Instead of scanning the #OrdHeader and #Custs tables once (as it expected), it has to perform 121,317 full scans of each. The query takes somewhere in the region of twenty minutes to run to completion on my development machine.

    A Solution

    At this point, you may be thinking the same thing I was: if we really are stuck with no indexes, the best we can do is to use hash joins everywhere.

    We can force the exclusive use of hash joins in several ways, the two most common being join and query hints. A join hint means writing the query using the INNER HASH JOIN syntax; using a query hint involves adding OPTION (HASH JOIN) at the bottom of the query. The difference is that using join hints also forces the order of the join, whereas the query hint gives the optimizer freedom to reorder the joins at its discretion.

    Adding the OPTION (HASH JOIN) hint results in this estimated plan:

    That produces the correct output in around seven seconds, which is quite an improvement! As a purely practical matter, and given the rigid rules of the environment we find ourselves in, we might leave things there. (We can improve the hashing solution a bit – I'll come back to that later on.)

    Faster Nested Loops

    It might surprise you to hear that we can beat the performance of the hash join solution shown above using nested loops joins exclusively, and without breaking the rules we have been set.

    The key to this part is to realize that a condition like (A = B) can be expressed as (A <= B) AND (A >= B). Armed with this tremendous new insight, we can rewrite the join predicates like so:

      SELECT P.ProductMainID AS PID, P.Name, D.OrderQty,
             H.SalesOrderNumber, H.OrderDate, C.TerritoryID
      FROM #OrdDetail D
      JOIN #OrdHeader H
          ON D.SalesOrderID >= H.SalesOrderID
          AND D.SalesOrderID <= H.SalesOrderID
      JOIN #Custs C
          ON H.CustomerID >= C.CustomerID
          AND H.CustomerID <= C.CustomerID
      JOIN #Prods P
          ON P.ProductMainID >= D.ProductMainID
          AND P.ProductMainID <= D.ProductMainID
          AND P.ProductSubID = D.ProductSubID
          AND P.ProductSubSubID = D.ProductSubSubID
      ORDER BY D.ProductMainID
      OPTION (RECOMPILE, LOOP JOIN, MAXDOP 1, FORCE ORDER);

    I've also added LOOP JOIN and FORCE ORDER query hints to ensure that only nested loops joins are used, and that the tables are joined in the order they appear. The new estimated execution plan is:

    This new query runs in under 2 seconds.

    Why Is It Faster?

    The main reason for the improvement is the appearance of the eager Index Spools, which are also known as index-on-the-fly spools. If you read my Inside The Optimiser series you might be interested to know that the rule responsible is called JoinToIndexOnTheFly.

    An eager index spool consumes all rows from the table it sits above, and builds an index suitable for the join to seek on. Taking the index spool above the #Custs table as an example, it reads all the CustomerID and TerritoryID values with a single scan of the table, and builds an index keyed on CustomerID. The term 'eager' means that the spool consumes all of its input rows when it starts up.
    The index is built in a work table in tempdb, has no associated statistics, and only exists until the query finishes executing. The result is that each unindexed table is only scanned once, and just for the columns necessary to build the temporary index. From that point on, every execution of the inner side of the join is answered by a seek on the temporary index, not the base table.

    A second optimization is that the sort on ProductMainID (required by the ORDER BY clause) is performed early, on just the rows coming from the #OrdDetail table. The optimizer has a good estimate for the number of rows it needs to sort at that stage: it is just the cardinality of the table itself. The accuracy of the estimate there is important because it helps determine the memory grant given to the sort operation. Nested loops join preserves the order of rows on its outer input, so sorting early is safe. (Hash joins do not preserve order in this way, of course.)

    The extra lazy spool on the #Prods branch is a further optimization that avoids executing the seek on the temporary index if the value being joined (the 'outer reference') hasn't changed from the last row received on the outer input. It takes advantage of the fact that rows are still sorted on ProductMainID, so if duplicates exist, they will arrive at the join operator one after the other.

    The optimizer is quite conservative about introducing index spools into a plan, because creating and dropping a temporary index is a relatively expensive operation. Its presence in a plan is often an indication that a useful index is missing.

    I want to stress that I rewrote the query in this way primarily as an educational exercise – I can't imagine having to do something so horrible to a production system.

    Improving the Hash Join

    I promised I would return to the solution that uses hash joins. You might be puzzled that SQL Server can create three new indexes (and perform all those nested loops iterations) faster than it can perform three hash joins. The answer, again, is down to the poor information available to the optimizer. Let's look at the hash join plan again:

    Two of the hash joins have single-row estimates on their build inputs. SQL Server fixes the amount of memory available for the hash table based on this cardinality estimate, so at run time the hash join very quickly runs out of memory.

    This results in the join spilling hash buckets to disk, and any rows from the probe input that hash to the spilled buckets also get written to disk. The join process then continues, and may again run out of memory. This is a recursive process, which may eventually result in SQL Server resorting to a bailout join algorithm, which is guaranteed to complete eventually, but may be very slow. The data sizes in the example tables are not large enough to force a hash bailout, but it does result in multiple levels of hash recursion. You can see this for yourself by tracing the Hash Warning event using the Profiler tool.

    The final sort in the plan also suffers from a similar problem: it receives very little memory and has to perform multiple sort passes, saving intermediate runs to disk (the Sort Warnings Profiler event can be used to confirm this). Notice also that because hash joins don't preserve sort order, the sort cannot be pushed down the plan toward the #OrdDetail table, as in the nested loops plan.

    Ok, so now we understand the problems, what can we do to fix it?
    We can address the hash spilling by forcing a different order for the joins:

      SELECT P.ProductMainID AS PID, P.Name, D.OrderQty,
             H.SalesOrderNumber, H.OrderDate, C.TerritoryID
      FROM #Prods P
      JOIN #Custs C
          JOIN #OrdHeader H
              ON H.CustomerID = C.CustomerID
          JOIN #OrdDetail D
              ON D.SalesOrderID = H.SalesOrderID
          ON P.ProductMainID = D.ProductMainID
          AND P.ProductSubID = D.ProductSubID
          AND P.ProductSubSubID = D.ProductSubSubID
      ORDER BY D.ProductMainID
      OPTION (MAXDOP 1, HASH JOIN, FORCE ORDER);

    With this plan, each of the inputs to the hash joins has a good estimate, and no hash recursion occurs. The final sort still suffers from the one-row estimate problem, and we get a single-pass sort warning as it writes rows to disk. Even so, the query runs to completion in three or four seconds. That's around half the time of the previous hashing solution, but still not as fast as the nested loops trickery.

    Final Thoughts

    SQL Server's optimizer makes cost-based decisions, so it is vital to provide it with accurate information. We can't really blame the performance problems highlighted here on anything other than the decision to use completely unindexed tables, and not to allow the creation of additional statistics.

    I should probably stress that the nested loops solution shown above is not one I would normally contemplate in the real world. It's there primarily for its educational and entertainment value. I might perhaps use it to demonstrate to the sceptical that SQL Server itself is crying out for an index.

    Be sure to read Brad's original post for more details. My grateful thanks to him for granting permission to reuse some of his material.

    Paul White
    Email: [email protected]
    Twitter: @PaulWhiteNZ

