Search Results

Search found 2860 results on 115 pages for 'andrew smith'.

  • Are there any specific workflows or design patterns that are commonly used to create large functional programming applications?

    - by Andrew
    I have been exploring Clojure for a while now, although I haven't used it on any nontrivial projects. Basically, I have just been getting comfortable with the syntax and some of the idioms. Coming from an OOP background, with Clojure being the first functional language that I have looked very much into, I'm naturally not as comfortable with the functional way of doing things. That said, are there any specific workflows or design patterns that are common when creating large functional applications? I'd really like to start using functional programming "for real", but I'm afraid that with my current lack of expertise, it would result in an epic fail. The "Gang of Four" is such a standard for OO programmers, but is there anything similar that is more directed at the functional paradigm? Most of the resources that I have found have great programming nuggets, but they don't step back to give a broader, more architectural look.

  • Track sales and commission with third-party tool

    - by Andrew
    I have a clothing website where I link to various clothing retailers. I have reached an agreement with one of the retailers whereby they will pay a commission to us for every sale they make from traffic that was referred by our site. I need a mechanism for tracking how much commission should be paid to us that requires as little implementation work as possible on their side. We both have Google Analytics. Option 1: They record a goal in their GA account whenever someone makes a purchase on their site. They see how many completed goals are marked as referral traffic from our site and calculate commission accordingly. The problem with this is that the whole process of calculating and paying commission will be manual. They will need to frequently check how many sales were generated by referral traffic from our site, and we will probably have to chase them for commission payments. Also - since we won't have access to their GA data - we will need to trust that they report all sales accurately. Option 2: Sign them up to an affiliate network like Commission Junction or Google's Affiliate Network, and connect to them through this network. The problem with this solution is that it seems too heavyweight; ideally we don't want to ask a retailer to go through a whole sign-up process just to deal with us and pay us commission. I am assuming that there must be some lightweight service that tracks the number of sales by one site and pays commission accordingly to the other site, where the sign-up and installation procedure is simple and fast.

  • How do I get mlocate to only index certain directories?

    - by Andrew Ferrier
    I'd like to use mlocate on my Ubuntu server, but only to index certain directories (e.g. /home and /data, but not everything under /). However, mlocate's standard configuration works the opposite way: you specify the paths you want excluded (with PRUNE_PATHS). Is there any easy way to achieve this, or any similar utility that will do what I want? (Note: it should maintain an index like mlocate does, so find is not acceptable, for example.) Thanks.
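
    One possible approach, sketched here as a suggestion rather than a tested recipe (the database paths are illustrative): mlocate's updatedb accepts -U to root the scan at a specific directory and -o to write to a separate database file, so you can build one database per directory you care about and point locate at just those databases.

        # Build one database per directory to index (paths are examples)
        sudo updatedb -U /home -o /var/lib/mlocate/home.db
        sudo updatedb -U /data -o /var/lib/mlocate/data.db

        # Search only those databases
        locate -d /var/lib/mlocate/home.db -d /var/lib/mlocate/data.db somefile

    A cron job running the two updatedb lines would keep these indexes as fresh as the stock daily job keeps the default one.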

  • Ubuntu 12.10 running slow

    - by andrew
    I pasted my syslog in the hope that someone can spot trouble that might need attention. The system is running slower than I would expect; opening apps and web pages just takes forever. http://paste.ubuntu.com/1303211/
    System specs (from the kernel log): Oct 24 12:42:55 ubuntu kernel: [ 1.369735] powernow-k8: Found 1 AMD V140 Processor (1 cpu cores) (version 2.20.00)
    http://en.wikipedia.org/wiki/List_of_AMD_Phenom_microprocessors#.22Champlain.22_.2845_nm.2C_Single-core.29_2

  • How to become an expert in Python, PHP and JavaScript? [closed]

    - by Andrew Alexander
    So I've been programming for about 9ish months now, and I've taught myself some Python, some PHP and some JavaScript. I want to become better at these languages - I can hack something out, but a lot of things like OOP, using lists in the most effective ways, etc., are lost on me. What are the best ways to become an "expert" programmer? Does it depend on the nuances of the language, or is it more general? Is there any math I should be studying alongside it? Obviously a lot depends on what you want to do with it - so far I've mostly done small-scale internal applications as well as web programming. How do I find out about good program design?

  • Best options for freelance or part-time programming? [closed]

    - by Andrew
    I apologize in advance if this is an inappropriate question for this SE. A few years back I was all set to study computer science and get a job programming, but I went a totally different route and went into healthcare. I currently work as a paramedic on a rotating 24/48 schedule, so I have two days off for every day I work, and a decent bit of downtime on the days that I do work. I've been looking at ways to earn some extra money with all that spare time, and was wondering if it'd be worth the effort to try and find a part-time/freelance gig. I know HTML/CSS and PHP, and I'm pretty familiar with Python and Ruby (and Rails). Anyway, I was hoping that someone could point me in the right direction as to what "skill set" would give me the best chance to land a part-time/freelance gig. I realize this is a rather open-ended question, but any direction is appreciated.

  • How to edit files in a terminal with vim?

    - by andrew.46
    It is not always possible to edit and create files outside of a terminal, so I would like answers to the basics of manipulating files in a terminal using vim under Ubuntu Linux. Specific questions I have are: How can I open text files for editing? How can I save the file? How can I save the file with a different name? How can I leave the file without saving the changes? What settings are best in my configuration file, and where is this file? How do I set the colors for vim? How do I show line numbers in vim, and can this be toggled? A lot of questions here, but I believe these questions and their answers should cover basic vim usage...
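
    For reference, a minimal quick-reference sketch of the commands involved (standard vim behaviour, not drawn from the original thread's answers):

        vim filename          # open a file for editing from the shell (or :e filename inside vim)
        :w                    # save the file
        :w othername          # save under a different name (:saveas othername also switches to it)
        :q!                   # leave the file without saving changes
        :set number           # show line numbers; :set number! toggles them
        :colorscheme desert   # pick a colour scheme; syntax on enables syntax colouring

    Per-user settings live in ~/.vimrc, so a tiny configuration file answering the questions above could contain just three lines: syntax on, set number, and colorscheme desert.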

  • Speaking at SQL Saturday #146

    - by Andrew Kelly
      For any of you up in the New England area who are looking for some good, free SQL Server training, you may want to check out the SQL Saturday this fall in southern NH. More specifically, the event will be held in Nashua, NH on October 20th, 2012. There is a wonderful cast of speakers, including myself (shameless plug), with a wide range of topics, among which I am sure everyone can find a few they are interested in. I hope to see some familiar faces from my old stomping ground and...(read more)

  • As a programmer, what's the most valuable non-English (human) language to learn?

    - by Andrew M
    I was thinking that with my developer skills, learning new languages like French, German etc. might be easier for me now. I could set up the verbs as objects in Python and use dir(verb) to find its methods, tenses and stuff ;-) But seriously, if you're a professional developer, in my case in the UK, what's the best foreign language to learn from an employment perspective? I'm thinking: Hindi - if all our programming jobs are getting outsourced to India, might as well position yourself to be the on-site, go-between guy. Mandarin - if the Chinese become the pre-eminent economy, the new USA, in ten or twenty years, then speaking their language would open up a huge market to you. Russian - maybe another major up-and-comer, but already closer to Western standards. More IT-sector growth here than anywhere else in the coming years? Japanese - drivers of global technology; being able to speak their language could give you a big competitive advantage over other Westerners. But I'm just guessing/musing with all these points. If you have an opinion, or even better, some evidence, I'd like to hear it. If the programming thing falls through, then at least it'll make for more interesting holidays.

  • What program has this icon?

    - by Andrew M
    I noticed a screenshot that included this icon in the sidebar - does anyone know what it's from? It looks cool! It looks vaguely reminiscent of Chrome, but different enough that it could be a coincidence. I've tried putting it into a Google image search, but without any luck. The original source was this Ubuntu help page on Unity launchers, which includes the original, larger screenshot. For context: it's not important, I'm just curious.

  • SQL SERVER – Shrinking Database is Bad – Increases Fragmentation – Reduces Performance

    - by pinaldave
    Earlier, I had written two articles related to shrinking databases, covering why shrinking a database is not good: SQL SERVER – SHRINKDATABASE For Every Database in the SQL Server, and SQL SERVER – What the Business Says Is Not What the Business Wants. I received many comments on why database shrinking is bad. Today we will go over a very interesting example that I have created for the same. Here are the quick steps of the example:

    1. Create a test database
    2. Create two tables and populate them with data
    3. Check the size of both the tables - the size of the database is very low
    4. Check the fragmentation of one table - fragmentation will be very low
    5. Truncate the other table
    6. Check the size of the table, then check the fragmentation of the one table - fragmentation will be very low
    7. SHRINK the database
    8. Check the size of the table, then check the fragmentation of the one table - fragmentation will be very HIGH
    9. REBUILD the index on one table
    10. Check the size of the table - the size of the database is very HIGH - then check the fragmentation of the one table - fragmentation will be very low

    Here is the script for the same.

        USE MASTER
        GO
        CREATE DATABASE ShrinkIsBed
        GO
        USE ShrinkIsBed
        GO
        -- Name of the Database and Size
        SELECT name, (size*8) Size_KB
        FROM sys.database_files
        GO
        -- Create FirstTable
        CREATE TABLE FirstTable (ID INT, FirstName VARCHAR(100), LastName VARCHAR(100), City VARCHAR(100))
        GO
        -- Create Clustered Index on ID
        CREATE CLUSTERED INDEX [IX_FirstTable_ID] ON FirstTable ([ID] ASC) ON [PRIMARY]
        GO
        -- Create SecondTable
        CREATE TABLE SecondTable (ID INT, FirstName VARCHAR(100), LastName VARCHAR(100), City VARCHAR(100))
        GO
        -- Create Clustered Index on ID
        CREATE CLUSTERED INDEX [IX_SecondTable_ID] ON SecondTable ([ID] ASC) ON [PRIMARY]
        GO
        -- Insert One Hundred Thousand Records
        INSERT INTO FirstTable (ID, FirstName, LastName, City)
        SELECT TOP 100000
            ROW_NUMBER() OVER (ORDER BY a.name) RowID,
            'Bob',
            CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END,
            CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York'
                 WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino'
                 WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles'
                 ELSE 'Houston' END
        FROM sys.all_objects a
        CROSS JOIN sys.all_objects b
        GO
        -- Name of the Database and Size
        SELECT name, (size*8) Size_KB
        FROM sys.database_files
        GO
        -- Insert One Hundred Thousand Records
        INSERT INTO SecondTable (ID, FirstName, LastName, City)
        SELECT TOP 100000
            ROW_NUMBER() OVER (ORDER BY a.name) RowID,
            'Bob',
            CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END,
            CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York'
                 WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino'
                 WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles'
                 ELSE 'Houston' END
        FROM sys.all_objects a
        CROSS JOIN sys.all_objects b
        GO
        -- Name of the Database and Size
        SELECT name, (size*8) Size_KB
        FROM sys.database_files
        GO
        -- Check Fragmentations in the database
        SELECT avg_fragmentation_in_percent, fragment_count
        FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
        GO

    Let us check the table size and fragmentation. Now let us TRUNCATE the table and check the size and fragmentation.
        -- TRUNCATE the SecondTable
        TRUNCATE TABLE SecondTable
        GO
        -- Name of the Database and Size
        SELECT name, (size*8) Size_KB
        FROM sys.database_files
        GO
        -- Check Fragmentations in the database
        SELECT avg_fragmentation_in_percent, fragment_count
        FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
        GO

    You can clearly see that after TRUNCATE, the size of the database is not reduced; it is still the same as before the TRUNCATE operation. After shrinking the database (DBCC SHRINKDATABASE, shown in the script further below), we were able to reduce the size of the database. If you notice the fragmentation, though, it is considerably high. The major problem with the shrink operation is that it increases fragmentation of the database to a very high value. Higher fragmentation reduces the performance of the database, as reading from that particular table becomes very expensive. One of the ways to reduce the fragmentation is to rebuild the index. Let us rebuild the index and observe fragmentation and database size.

        -- Rebuild Index on SecondTable
        ALTER INDEX IX_SecondTable_ID ON SecondTable REBUILD
        GO
        -- Name of the Database and Size
        SELECT name, (size*8) Size_KB
        FROM sys.database_files
        GO
        -- Check Fragmentations in the database
        SELECT avg_fragmentation_in_percent, fragment_count
        FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
        GO

    You can notice that after rebuilding, fragmentation reduces to a very low value (almost the same as the original value); however, the database size increases way beyond the original. Before rebuilding, the size of the database was 5 MB; after rebuilding, it is around 20 MB. A regular index rebuild takes place in the same user database where the index lives, and it usually increases the size of that database. Look at the irony of shrinking the database.
    One person shrinks the database to gain space (thinking it will help performance), which leads to an increase in fragmentation (reducing performance). To reduce the fragmentation, one rebuilds the index, which makes the database grow way beyond its original size (before shrinking). Well, by shrinking, one usually did not gain what he was looking for. Rebuilding the index is not the best suggestion either, as that will make the database grow again. I have always remembered the excellent post from Paul Randal on why shrinking the database is bad, and I suggest everyone read it for accuracy and interesting conversation. Let us run the following script, where we shrink the database and then REORGANIZE.

        -- Name of the Database and Size
        SELECT name, (size*8) Size_KB
        FROM sys.database_files
        GO
        -- Check Fragmentations in the database
        SELECT avg_fragmentation_in_percent, fragment_count
        FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
        GO
        -- Shrink the Database
        DBCC SHRINKDATABASE (ShrinkIsBed);
        GO
        -- Name of the Database and Size
        SELECT name, (size*8) Size_KB
        FROM sys.database_files
        GO
        -- Check Fragmentations in the database
        SELECT avg_fragmentation_in_percent, fragment_count
        FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
        GO
        -- Reorganize Index on SecondTable
        ALTER INDEX IX_SecondTable_ID ON SecondTable REORGANIZE
        GO
        -- Name of the Database and Size
        SELECT name, (size*8) Size_KB
        FROM sys.database_files
        GO
        -- Check Fragmentations in the database
        SELECT avg_fragmentation_in_percent, fragment_count
        FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED')
        GO

    You can see that REORGANIZE does not increase the size of the database, nor does it remove the fragmentation. Again, I in no way suggest that REORGANIZE is the solution here; this is purely an observation using a demo. Read the blog post of Paul Randal. The following script will clean up the database:

        -- Clean up
        USE MASTER
        GO
        ALTER DATABASE ShrinkIsBed SET SINGLE_USER WITH ROLLBACK IMMEDIATE
        GO
        DROP DATABASE ShrinkIsBed
        GO

    There are a few valid cases for shrinking a database as well, but they are not covered in this blog post; we will cover that area some other time in the future. Additionally, one can rebuild indexes in tempdb as well, and we will also talk about that in the future. Brent has written a good summary blog post as well. Are you shrinking your database? Well, when are you going to stop shrinking it? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

  • Routed Command Question

    - by Andrew
    I'd like to implement a custom command to capture a Backspace key gesture inside of a textbox, but I don't know how. I wrote a test program in order to understand what's going on, but the behaviour of the program is rather confusing. Basically, I just need to be able to handle the Backspace key gesture via WPF commands while keyboard focus is in the textbox, and without disrupting the normal behaviour of the Backspace key within the textbox. Here's the XAML for the main window and the corresponding code-behind, too (note that I created a second command for the Enter key, just to compare its behaviour to that of the Backspace key):

        <Window x:Class="WpfApplication1.Window1"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                Title="Window1" Height="300" Width="300">
            <Grid>
                <TextBox Margin="44,54,44,128" Name="textBox1" />
            </Grid>
        </Window>

    And here's the corresponding code-behind:

        using System.Windows;
        using System.Windows.Input;

        namespace WpfApplication1
        {
            /// <summary>
            /// Interaction logic for EntryListView.xaml
            /// </summary>
            public partial class Window1 : Window
            {
                public static RoutedCommand EnterCommand = new RoutedCommand();
                public static RoutedCommand BackspaceCommand = new RoutedCommand();

                public Window1()
                {
                    InitializeComponent();

                    CommandBinding cb1 = new CommandBinding(EnterCommand, EnterExecuted, EnterCanExecute);
                    CommandBinding cb2 = new CommandBinding(BackspaceCommand, BackspaceExecuted, BackspaceCanExecute);
                    this.CommandBindings.Add(cb1);
                    this.CommandBindings.Add(cb2);

                    KeyGesture kg1 = new KeyGesture(Key.Enter);
                    KeyGesture kg2 = new KeyGesture(Key.Back);
                    InputBinding ib1 = new InputBinding(EnterCommand, kg1);
                    InputBinding ib2 = new InputBinding(BackspaceCommand, kg2);
                    this.InputBindings.Add(ib1);
                    this.InputBindings.Add(ib2);
                }

                #region Command Handlers

                private void EnterCanExecute(object sender, CanExecuteRoutedEventArgs e)
                {
                    MessageBox.Show("Inside EnterCanExecute Method.");
                    e.CanExecute = true;
                }

                private void EnterExecuted(object sender, ExecutedRoutedEventArgs e)
                {
                    MessageBox.Show("Inside EnterExecuted Method.");
                    e.Handled = true;
                }

                private void BackspaceCanExecute(object sender, CanExecuteRoutedEventArgs e)
                {
                    MessageBox.Show("Inside BackspaceCanExecute Method.");
                    e.Handled = true;
                }

                private void BackspaceExecuted(object sender, ExecutedRoutedEventArgs e)
                {
                    MessageBox.Show("Inside BackspaceExecuted Method.");
                    e.Handled = true;
                }

                #endregion Command Handlers
            }
        }

    Any help would be very much appreciated. Thanks! Andrew
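
    One detail worth a look, offered as a hedged sketch rather than the thread's confirmed answer: a routed command stays disabled unless its CanExecute handler sets e.CanExecute = true, and BackspaceCanExecute above sets only e.Handled, so BackspaceExecuted can never fire. CanExecuteRoutedEventArgs also exposes ContinueRouting, which controls whether the triggering key event keeps routing (so the TextBox can still perform its normal Backspace behaviour):

        private void BackspaceCanExecute(object sender, CanExecuteRoutedEventArgs e)
        {
            e.CanExecute = true;       // enable the command so BackspaceExecuted runs
            e.ContinueRouting = true;  // let Backspace keep routing to the TextBox as well
        }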

  • Multi-Threading Question Concerning WPF

    - by Andrew
    Hello, I'm a newbie to threading, and I don't really know how to code a particular task. I would like to handle a mouse click event on a window that will kick off a while loop in a separate thread. This thread, which is distinct from the UI thread, should call a function in the while loop which updates a label on the window being serviced by the UI thread. The while loop should stop running when the left mouse button is no longer being pressed. All the loop does is increment a counter, and then repeatedly call the function which displays the updated value in the window. The code for the window and all of the threading is given below (I keep getting some error about STA threading, but don't know where to put the attribute). Also, I'm hoping to use this solution, if it ever works, in another project that makes asynchronous calls elsewhere to a service via WCF, so I was hoping not to make any application-wide special configurations, since I'm really new to multi-threading and am quite worried about breaking other code in a larger program... Here's what I have:

        <Window x:Class="WpfApplication2.MainWindow"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                xmlns:local="clr-namespace:WpfApplication2"
                Name="MyMainWindow" Title="MainWindow" Width="200" Height="150"
                PreviewMouseLeftButtonDown="MyMainWindow_PreviewMouseLeftButtonDown">
            <Label Height="28" Name="CounterLbl" />
        </Window>

    And here's the code-behind:

        using System.Windows;
        using System.Windows.Input;
        using System.Threading;

        namespace WpfApplication2
        {
            /// <summary>
            /// Interaction logic for MainWindow.xaml
            /// </summary>
            public partial class MainWindow : Window
            {
                private int counter = 0;

                public MainWindow()
                {
                    InitializeComponent();
                }

                private delegate void EmptyDelegate();

                private void MyMainWindow_PreviewMouseLeftButtonDown(object sender, MouseButtonEventArgs e)
                {
                    Thread counterThread = new Thread(new ThreadStart(MyThread));
                    counterThread.Start();
                }

                private void MyThread()
                {
                    while (Mouse.LeftButton == MouseButtonState.Pressed)
                    {
                        counter++;
                        Dispatcher.Invoke(new EmptyDelegate(UpdateLabelContents), null);
                    }
                }

                private void UpdateLabelContents()
                {
                    CounterLbl.Content = counter.ToString();
                }
            }
        }

    Anyways, multi-threading is really new to me, and I don't have any experience implementing it, so any thoughts or suggestions are welcome! Thanks, Andrew
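
    A hedged guess at the STA error mentioned above (a sketch, not the thread's accepted answer): Mouse.LeftButton is tied to the UI thread, so polling it from the worker thread throws. One alternative is to flip a plain flag from mouse handlers on the UI thread and let the loop read that flag instead; the Up handler below is hypothetical and would need wiring in the XAML like the Down handler:

        private volatile bool leftButtonDown;

        private void MyMainWindow_PreviewMouseLeftButtonDown(object sender, MouseButtonEventArgs e)
        {
            leftButtonDown = true;
            new Thread(MyThread).Start();
        }

        // Hypothetical handler; add PreviewMouseLeftButtonUp="..." on the Window
        private void MyMainWindow_PreviewMouseLeftButtonUp(object sender, MouseButtonEventArgs e)
        {
            leftButtonDown = false;
        }

        private void MyThread()
        {
            while (leftButtonDown)   // safe to read from any thread
            {
                counter++;
                Dispatcher.Invoke(new EmptyDelegate(UpdateLabelContents));
            }
        }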

  • Data from a few MySQL tables, sorted by time ASC

    - by Andrew
    In the database I've a few tables named aaa_9xxx, aaa_9yyy, aaa_9zzz. I want to find all data with a specified DATE and show it with the TIME ASC. First, I must find the tables in the database:

        $STH_1a = $DBH->query("SELECT table_name FROM information_schema.tables WHERE table_name LIKE 'aaa\_9%' ");
        foreach($STH_1a as $row) {
            $table_name_s1[] = $row['table_name'];
        }

    Second, I must find the data with a concrete date and show it with TIME ASC:

        foreach($table_name_s1 as $table_name_1) {
            $STH_1a2 = $DBH->query("SELECT * FROM `$table_name_1` WHERE date = '2011-11-11' ORDER BY time ASC ");
            while ($row = $STH_1a2->fetch(PDO::FETCH_ASSOC)) {
                echo " ".$table_name_1."-".$row['time']."-".$row['ei_name']." <br>";
            }
        }

    ...but it shows the data sorted by table name first, then by TIME ASC. I need all this data (from all tables) sorted by TIME ASC. Thank you dev-null-dweller, Andrew Stubbs and Jaison Erick for your help. I tested the Erick solution:

        foreach($STH_1a as $row) {
            $stmts[] = sprintf('SELECT * FROM %s WHERE date="%s"', $row['table_name'], '2011-11-11');
        }
        $stmt = implode("\nUNION\n", $stmts);
        $stmt .= "\nORDER BY time ASC";

        $STH_1a2 = $DBH->query($stmt);
        while ($row_1a2 = $STH_1a2->fetch(PDO::FETCH_ASSOC)) {
            echo " ".$row['table_name']."-".$row_1a2['time']."-".$row_1a2['ei_name']." <br>";
        }

    It's working, but I had a problem with 'table_name' - it was always the LAST table name. And here is the final solution with all fixes; thanks all for your help :))

        foreach($STH_1a as $row) {
            $stmts[] = sprintf("SELECT *, '%s' AS table_name FROM %s WHERE date='%s'", $row['table_name'], $row['table_name'], '2011-11-11');
        }
        $stmt = implode("\nUNION\n", $stmts);
        $stmt .= "\nORDER BY time ASC";

        $STH_1a2 = $DBH->query($stmt);
        while ($row_1a2 = $STH_1a2->fetch(PDO::FETCH_ASSOC)) {
            echo " ".$row_1a2['table_name']."-".$row_1a2['time']."-".$row_1a2['ei_name']." <br>";
        }

  • PASS Summit 2011 – Part II

    - by Tara Kizer
    I arrived in Seattle last Monday afternoon to attend PASS Summit 2011.  I had really wanted to attend Gail Shaw’s (blog|twitter) and Grant Fritchey’s (blog|twitter) pre-conference seminar “All About Execution Plans” on Monday, but that would have meant flying out on Sunday which I couldn’t do.  On Tuesday, I attended Allan Hirt’s (blog|twitter) pre-conference seminar entitled “A Deep Dive into AlwaysOn: Failover Clustering and Availability Groups”.  Allan is a great speaker, and his seminar was packed with demos and information about AlwaysOn in SQL Server 2012.  Unfortunately, I have lost my notes from this seminar and the presentation materials are only available on the pre-con DVD.  Hmpf!
    On Wednesday, I attended Gail Shaw’s “Bad Plan! Sit!”, Andrew Kelly’s (blog|twitter) “SQL 2008 Query Statistics”, Dan Jones’ (blog|twitter) “Improving your PowerShell Productivity”, and Brent Ozar’s (blog|twitter) “BLITZ! The SQL – More One Hour SQL Server Takeovers”.  In Gail’s session, she went over how to fix bad plans and bad query patterns.  Update your stale statistics!
    How to fix bad plans:
    - Use local variables – the optimizer can’t sniff them, so it’ll optimize for an “average” value
    - Use RECOMPILE (at the query or stored procedure level) – CPU hit
    - OPTIMIZE FOR hint – most common value you’ll pass
    How to fix bad query patterns:
    - Don’t use them – ha!
    - Catch-all queries: use dynamic SQL, or OPTION (RECOMPILE)
    - Multiple execution paths: split into multiple stored procedures, or OPTION (RECOMPILE)
    - Modifying parameter values: use local variables, split into outer and inner procedures, or OPTION (RECOMPILE)
    A hedged T-SQL sketch of the hint bullets follows this list. She also went into “last resort” and “very last resort” options, but those are risky unless you know what you are doing; for the average Joe, she wouldn’t recommend these. Examples are query hints and plan guides.
    While I enjoyed Andrew’s session, I didn’t take any notes as it was familiar material.  Andrew is a great speaker though, and I’d highly recommend attending his sessions in the future.
    Next up was Dan’s PowerShell session.  I need to look into profiles, manifests, function modules, and function import scripts more, as I just didn’t quite grasp these concepts.  I am attending a PowerShell training class at the end of November, so maybe that’ll help clear it up.  I really enjoyed the Excel integration demo - it was very cool watching PowerShell build the spreadsheet in real-time.  I must look into this more!  On a side note, I am jealous of Dan’s hair.  Fabulous hair!
    Brent’s session showed us how to quickly gather information about a server that you will be taking over database administration duties for.  He wrote a script to do a fast health check and then later wrapped it into a stored procedure, sp_Blitz.  I can’t wait to use this at my work, even on systems where I’ve been the primary DBA for years; maybe there’s something I’ve overlooked.  We are using EPM to help standardize our environment and uncover problems, but sp_Blitz will definitely still help us out.  He even provides a cloud-based update feature, sp_BlitzUpdate, for sp_Blitz so you don’t have to constantly update it when he makes a change.  I think I’ll utilize his update code for some other challenges that we face at my work.
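
    To make the hint bullets above concrete, a minimal T-SQL sketch (the table and column names are hypothetical, not from the session notes):

        DECLARE @Status INT;
        SET @Status = 1;

        -- Recompile at the statement level so the plan fits the actual value
        SELECT OrderID, CustomerID
        FROM dbo.Orders            -- hypothetical table
        WHERE OrderStatus = @Status
        OPTION (RECOMPILE);

        -- Or: always build the plan for the most common value you'll pass
        SELECT OrderID, CustomerID
        FROM dbo.Orders
        WHERE OrderStatus = @Status
        OPTION (OPTIMIZE FOR (@Status = 1));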

  • Announcing the Winnipeg VS.NET 2012 Community Launch Event!

    - by D'Arcy Lussier
    Back in May 2010 the local Winnipeg technical community got together and put on a launch event for VS.NET 2010. That event was such a good time that we’re doing it again this year for the VS.NET 2012 launch! On December 6th, the Winnipeg .NET User Group is hosting a full day VS.NET 2012 Community Launch Event at the Imax theatre in Portage Place! We have 4 sessions planned covering dev tools, ALM/TFS, web development, and cloud development, presented by Dylan Smith, Tyler Doerksen, and myself. You can get all the details and register on our Eventbrite site: http://wpgvsnet2012launch.eventbrite.ca/
    I’ve included the details below as well for convenience:
    Winnipeg VS.NET 2012 Community Launch Event
    Join us for a full day of sessions highlighting the new features and capabilities of Visual Studio .NET 2012 and the .NET 4.5 Framework! Hosted by the Winnipeg .NET User Group, this community event is FREE thanks to the generous support from our event sponsors: Imaginet, Online Business Systems, and Prairie Developer Conference.
    Event Details
    When: Thursday, December 6th from 8:00 AM - 4:00 PM
    Where: IMAX Theatre, Portage Place
    Cost: *FREE!*
    Agenda
    8:00 - 9:00 Continental Breakfast and Registration
    9:00 - 9:15 Welcome
    9:15 - 10:30 End-To-End Application Lifecycle Management with TFS 2012
    10:30 - 10:45 Break
    10:45 - 12:00 Improving Developer Productivity with Visual Studio 2012
    12:00 - 1:00 Lunch Break (Lunch Not Provided)
    1:00 - 2:15 Web Development in Visual Studio 2012 and .NET 4.5
    2:15 - 2:30 Break
    2:30 - 3:45 Microsoft Cloud Development with Azure and Visual Studio 2012
    3:45 - 4:00 Prizes and Thanks
    Session Abstracts
    End-To-End Application Lifecycle Management with TFS 2012 - Dylan Smith, Imaginet
    In this session we'll walk through the application development lifecycle from end-to-end and see how some of the new capabilities in TFS 2012 help streamline the software delivery process. There are some exciting new capabilities around Agile Project Management, Gathering Feedback, Code Reviews, Unit Testing, Version Control, Storyboarding, etc. During this session we’ll follow a fictional software development team through the process of planning, developing, testing, and deployment, focusing on where the new functionality in VS/TFS 2012 fits in to make teams more effective.
    Improving Developer Productivity with Visual Studio 2012 - Dylan Smith, Imaginet
    Microsoft Visual Studio 2012 enables developers to take full advantage of the capability of Windows using the skills and technologies developers already know and love to deliver exceptional and compelling apps. Whether working individually or in a small, medium or large development team, Visual Studio 2012 sets a new standard for development tools, helping teams deliver superior results for their customers that help set them apart from their competitors. In this session we’ll walk through new features in Visual Studio 2012, specifically focusing on how these improve developer productivity.
    Web Development in Visual Studio 2012 and .NET 4.5 - D’Arcy Lussier, Online Business Systems
    It’s an exciting time to be a web developer in the Microsoft ecosystem! The launch of Visual Studio 2012 and .NET 4.5 brings new tooling and features, and the ASP.NET team is continually releasing updates for MVC, SignalR, Web API, and other platform features. In this session we’ll take a tour of the new features and technologies available for Microsoft web developers here in 2012!
    Microsoft Cloud Development with Azure and Visual Studio 2012 - Tyler Doerksen, Imaginet
    Microsoft’s public cloud platform is nearing its third year of public availability, supporting web site/service hosting, storage, relational databases, virtual machines, virtual networks and much more. Windows Azure provides both power and flexibility. But to capture this power you need to have the right tools! This session will demonstrate the primary ways you can harness Windows Azure with the .NET platform. We’ll explain cloud service development, packaging, deployment, and testing, and show how Visual Studio 2012 with the Windows Azure SDK and other Microsoft tools can be used to develop for and manage Windows Azure. Harness the power of the cloud from the comfort of Visual Studio 2012!

  • Design pattern to keep track of UITableView rows' correspondence to underlying data in constant time.

    - by DenNukem
    When my model changes, I want to animate changes in a UITableView by inserting/deleting rows. For that I need to know the ordinal of the given row (so I can construct the NSIndexPath), which I find hard to do in better-than-linear time. For example, consider that I have a list of addressbook entries which are manually sorted by the user, i.e. there is no ordering "key" that represents the sort order. There is also a corresponding UITableView that shows one row per addressbook entry. When the UITableView queries the datasource, I query the NSMutableArray populated with my entries and return the required data in constant time for each row. However, if there is a change in the underlying model, I am getting a notification like "Joe Smith, id #123 has been removed". Now I have a dilemma. A naive approach would be to scan the array, determine the index at which Joe Smith is, and then ask the UITableView to remove that precise row from the view, also removing it from the array. However, the scan will take linear time to finish. Now, I could have an NSDictionary which allows me to find Joe Smith in constant time, but that doesn't do me a lot of good because I still need to find his ordinal index within the array in order to instruct the UITableView to remove that row, which is again a linear search. I could further decide to store each object's ordinal inside the object itself to make the lookup constant, but it will become outdated after the first such update, as all subsequent index values will have changed due to the removal of an object. So what is the correct design pattern to accurately reflect model changes in the UITableView in constant (or at least logarithmic) time?
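
    For illustration only, a sketch (in Swift for brevity, with hypothetical names; not from the original question): one common compromise keeps a dictionary from record ID to row index and rebuilds it once per batch of mutations, making each lookup O(1) at the cost of O(n) per batch; a genuinely logarithmic answer needs an order-statistic tree.

        import Foundation

        struct Entry { let id: Int }   // hypothetical addressbook record

        final class EntryStore {
            private(set) var entries: [Entry] = []
            private var rowByID: [Int: Int] = [:]   // id -> current row

            // O(n), but run once per batch of changes instead of per lookup
            private func reindex() {
                rowByID.removeAll(keepingCapacity: true)
                for (row, entry) in entries.enumerated() { rowByID[entry.id] = row }
            }

            // Turns "id #123 was removed" into the IndexPath the table needs
            func remove(id: Int) -> IndexPath? {
                guard let row = rowByID[id] else { return nil }
                entries.remove(at: row)
                reindex()
                return IndexPath(row: row, section: 0)
            }
        }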

  • ASP.NET Ajax - Asynch request has separate session???

    - by Marcus King
    We are writing a search application that saves the search criteria to session state and executes the search inside of an ASP.NET UpdatePanel. When we execute multiple searches in succession, the 2nd or 3rd search will sometimes return results from the first set of search criteria. Example: for our first search we do a lookup on "John Smith" - John Smith results are displayed. For the second search we do a lookup on "Bob Jones" - John Smith results are displayed. We save all of the search criteria in session state, as I said, and read it from session state inside of the Ajax request to format the DB query. When we put breakpoints in VS, everything behaves as normal, but without them we get the original search criteria and results. My guess is that, because the criteria are saved in session, the Ajax request somehow gets its own session and saves the criteria to that, and then retrieves the criteria from that session every time, while the non-async requests are able to see when the criteria are modified and save the changes to state accordingly; because they are from two different sessions, there is a disparity between what is saved and what is read.
    EDIT: To elaborate more, there was a suggestion of appending the search criteria to the query string, which normally is good practice, and I agree that's how it should be done, but given our requirements I don't see it as viable. They want the user to fill out the input controls and hit search with no page reload; the only thing they see is a progress indicator on the page, and they still have the ability to navigate and use other features on the current page. If I were to add criteria to the query string, I would have to do another request, causing the whole page to load, which, depending on the search criteria, can take a really long time. This is why we are using an Ajax call to perform the search and why we aren't causing another full page request..... I hope this clarifies the situation.

  • Lucene complex structure search

    - by archer
    Basically I have a pretty simple database that I'd like to index with Lucene. The domains are:

        // Person domain
        class Person {
            Set<Pair> keys;
        }

        // Pair domain
        class Pair {
            KeyItem keyItem;
            String value;
        }

        // KeyItem domain, name is a unique field within the DB (!!)
        class KeyItem {
            String name;
        }

    I've tens of millions of profiles and hundreds of millions of Pairs; however, since most of the KeyItems' "name" fields are duplicates, there are only a few dozen KeyItem instances. I came up with that structure to save on KeyItem instances. Basically any profile with any fields could be saved into that structure. Let's say we have a profile with these properties:
    - name: Andrew Morton
    - education: University of New South Wales
    - country: Australia
    - occupation: Linux programmer
    To store it, we'll have a single Profile instance, 4 KeyItem instances (name, education, country and occupation), and 4 Pair instances with the values "Andrew Morton", "University of New South Wales", "Australia" and "Linux Programmer". All other profiles will reference (all or some of) the same instances of KeyItem: name, education, country and occupation. My question is, how do I index all of that so I can search for a Profile by particular values of KeyItem::name and Pair::value? Ideally I'd like this kind of query to work: name:Andrew* AND occupation:Linux*. Should I create a custom Indexer and Searcher? Or could I use standard ones and just map KeyItem and Pair as Lucene components somehow?
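
    One plausible mapping, sketched against the classic Lucene 3.x field API (treat it as an assumption to check against the Lucene version in use): flatten each Person into a single Document with one field per Pair, using the KeyItem name as the field name, after which the standard QueryParser already understands queries like name:Andrew* AND occupation:Linux*.

        import org.apache.lucene.document.Document;
        import org.apache.lucene.document.Field;
        import org.apache.lucene.index.IndexWriter;

        void index(Person person, IndexWriter writer) throws Exception {
            Document doc = new Document();
            for (Pair pair : person.keys) {
                // Field name comes from the shared KeyItem, value from the Pair;
                // the few dozen distinct KeyItem names become the index's fields.
                doc.add(new Field(pair.keyItem.name, pair.value,
                                  Field.Store.NO, Field.Index.ANALYZED));
            }
            writer.addDocument(doc);
        }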

  • C++ help with getline function with ifstream

    - by John
    So I am writing a program that deals with reading in and writing out to a file. I use the getline() function because some of the lines in the text file may contain multiple elements. I've never had a problem with getline until now. Here's what I've got. The text file looks like this:

        John Smith                      // Client name
        1234 Hollow Lane, Chicago, IL   // Address
        123-45-6789                     // SSN
        Walmart                         // Employer
        58000                           // Income
        2                               // Number of accounts the client has
        1111                            // Account Number
        2222                            // Account Number

    And the code:

        ifstream inFile("ClientInfo.txt");
        if(inFile.fail())
        {
            cout << "Problem opening file.";
        }
        else
        {
            string name, address, ssn, employer;
            double income;
            int numOfAccount;

            getline(inFile, name);
            getline(inFile, address);
            // I'll stop here because I know this is where it fails.

    When I debugged this code, I found that name == "John" instead of name == "John Smith", and address == "Smith", and so on. Am I doing something wrong? Any help would be much appreciated.
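
    A hedged observation (the real cause isn't visible in the excerpt): word-by-word results like these are the classic symptom of the stream having been read with >> somewhere, since >> stops at whitespace, whereas the getline calls shown would read whole lines. A self-contained sketch of reading this format (assuming the // annotations are notes, not file contents), including the ignore() needed wherever >> and getline are mixed:

        #include <fstream>
        #include <iostream>
        #include <limits>
        #include <string>

        int main() {
            std::ifstream inFile("ClientInfo.txt");
            if (!inFile) {
                std::cout << "Problem opening file.";
                return 1;
            }

            std::string name, address, ssn, employer;
            double income = 0.0;
            int numOfAccounts = 0;

            std::getline(inFile, name);     // whole line: "John Smith"
            std::getline(inFile, address);  // whole line: "1234 Hollow Lane, Chicago, IL"
            std::getline(inFile, ssn);
            std::getline(inFile, employer);

            inFile >> income >> numOfAccounts;
            // >> leaves the trailing newline behind; skip it before the next getline
            inFile.ignore(std::numeric_limits<std::streamsize>::max(), '\n');

            for (int i = 0; i < numOfAccounts; ++i) {
                std::string account;
                std::getline(inFile, account);
            }
            return 0;
        }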

  • Changing FileNames using RegEx and Recursion

    - by yeahumok
    Hello, I'm trying to rename files that my program lists as having "illegal characters" for a SharePoint file import. The illegal characters I am referring to are: ~ # % & * { } / \ | : < > ? - ". What I'm trying to do is recurse through the drive, gather up a list of filenames and then, through regular expressions, pick out file names from the list and try to replace the invalid characters in the actual filenames themselves. Anybody have any idea how to do this? So far I have this (please remember, I'm a complete n00b to this stuff):

        class Program
        {
            static void Main(string[] args)
            {
                string[] files = Directory.GetFiles(@"C:\Documents and Settings\bob.smith\Desktop\~Test Folder for [SharePoint] %testing", "*.*", SearchOption.AllDirectories);
                foreach (string file in files)
                {
                    Console.Write(file + "\r\n");
                }
                Console.WriteLine("Press any key to continue...");
                Console.ReadKey(true);

                string pattern = " *[\\~#%&*{}/:<>?|\"-]+ *";
                string replacement = " ";
                Regex regEx = new Regex(pattern);

                string[] fileDrive = Directory.GetFiles(@"C:\Documents and Settings\bob.smith\Desktop\~Test Folder for [SharePoint] %testing", "*.*", SearchOption.AllDirectories);
                StreamWriter sw = new StreamWriter(@"C:\Documents and Settings\bob.smith\Desktop\~Test Folder for [SharePoint] %testing\File_Renames.txt");
                foreach (string fileNames in fileDrive)
                {
                    string sanitized = regEx.Replace(fileNames, replacement);
                    sw.Write(sanitized + "\r\n");
                }
                sw.Close();
            }
        }

    So what I need to figure out is how to recursively search for these invalid chars and replace them in the actual filenames themselves. Anybody have any ideas?
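
    A possible sketch of the missing rename step (the regex and root path are taken from the question above; treat the rest as illustrative rather than tested against SharePoint's exact rules): sanitize only the file-name portion of each path, then File.Move the file onto the cleaned name.

        using System.IO;
        using System.Text.RegularExpressions;

        class RenameSketch
        {
            static void Main()
            {
                string root = @"C:\Documents and Settings\bob.smith\Desktop\~Test Folder for [SharePoint] %testing";
                Regex regEx = new Regex(" *[\\~#%&*{}/:<>?|\"-]+ *");

                foreach (string file in Directory.GetFiles(root, "*.*", SearchOption.AllDirectories))
                {
                    string dir = Path.GetDirectoryName(file);
                    string cleaned = regEx.Replace(Path.GetFileName(file), " ").Trim();
                    string target = Path.Combine(dir, cleaned);
                    if (target != file && !File.Exists(target))
                        File.Move(file, target);   // rename in place
                }
                // Folders with illegal characters would need the same treatment
                // (deepest first, via Directory.Move) before a SharePoint import.
            }
        }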

  • SQL Server, how to join a table in a "rotated" format (returning columns instead of rows)?

    - by Joshua Carmody
    Sorry for the lame title; my descriptive skills are poor today. In a nutshell, I have a query similar to the following:

        SELECT P.LAST_NAME, P.FIRST_NAME, D.DEMO_GROUP
        FROM PERSON P
        JOIN PERSON_DEMOGRAPHIC PD ON PD.PERSON_ID = P.PERSON_ID
        JOIN DEMOGRAPHIC D ON D.DEMOGRAPHIC_ID = PD.DEMOGRAPHIC_ID

    This returns output like this:

        LAST_NAME    FIRST_NAME   DEMO_GROUP
        -------------------------------------
        Johnson      Bob          Male
        Smith        Jane         Female
        Smith        Jane         Teacher
        Beeblebrox   Zaphod       Male
        Beeblebrox   Zaphod       Alien
        Beeblebrox   Zaphod       Politician

    I would prefer the output be similar to the following:

        LAST_NAME    FIRST_NAME   Male   Female   Teacher   Alien   Politician
        -----------------------------------------------------------------------
        Johnson      Bob          1      0        0         0       0
        Smith        Jane         0      1        1         0       0
        Beeblebrox   Zaphod       1      0        0         1       1

    The number of rows in the DEMOGRAPHIC table varies, so I can't say with certainty how many columns I need; the query needs to be flexible. Yes, it would be trivial to do this in code. But this query is one piece of a complicated set of stored procedures, views, and reporting services, many of which are outside my sphere of influence, so I need to produce this output inside the database to avoid breaking the system. Any ideas? This is MS SQL Server 2005, by the way. Thanks.
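
    A hedged sketch of one common shape for this on SQL Server 2005, built from the tables above (COUNT makes absent combinations come out as 0): assemble the column list from DEMOGRAPHIC, then run the PIVOT dynamically.

        DECLARE @cols NVARCHAR(MAX), @sql NVARCHAR(MAX);

        -- Comma-separated, bracket-quoted list of demographic groups
        SELECT @cols = STUFF((SELECT ',' + QUOTENAME(DEMO_GROUP)
                              FROM DEMOGRAPHIC
                              FOR XML PATH('')), 1, 1, '');

        SET @sql = N'SELECT LAST_NAME, FIRST_NAME, ' + @cols + N'
        FROM (SELECT P.LAST_NAME, P.FIRST_NAME, D.DEMO_GROUP, 1 AS HIT
              FROM PERSON P
              JOIN PERSON_DEMOGRAPHIC PD ON PD.PERSON_ID = P.PERSON_ID
              JOIN DEMOGRAPHIC D ON D.DEMOGRAPHIC_ID = PD.DEMOGRAPHIC_ID) SRC
        PIVOT (COUNT(HIT) FOR DEMO_GROUP IN (' + @cols + N')) PV;';

        EXEC sp_executesql @sql;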

  • Filter entities that match all pairs

    - by Jon
    I have an entity (let's say Person) with a set of arbitrary attributes with a known subset of values. I need to search for all of these entities that match all my filter conditions. For example, my table structures look like this:

        Person:
        id | name
        1  | John Doe
        2  | Jane Roe
        3  | John Smith

        Attribute:
        id | attr_name
        1  | Sex
        2  | Eye Color

        ValidValue:
        id | attr_id | value_name
        1  | 1       | Male
        2  | 1       | Female
        3  | 2       | Blue
        4  | 2       | Green
        5  | 2       | Brown

        PersonAttributes:
        id | person_id | attr_id | value_id
        1  | 1         | 1       | 1
        2  | 1         | 2       | 3
        3  | 2         | 1       | 2
        4  | 2         | 2       | 4
        5  | 3         | 1       | 1
        6  | 3         | 2       | 4

    In JPA, I have entities built for all of these tables. What I'd like to do is perform a search for all entities matching a given set of attribute-value pairs. For instance, I'd like to be able to find all males (John Doe and John Smith), all people with green eyes (Jane Roe or John Smith), or all females with green eyes (Jane Roe). I see that I can already take advantage of the fact that I only really need to match on value_id, since that's already unique and tied to the attr_id. But where can I go from there?
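
    In plain SQL this is the classic "relational division" shape, shown here as a hedged sketch using the sample rows above (it would still need translating into JPQL or the Criteria API): join to PersonAttributes once, keep only the wanted value_ids, and demand that the count of distinct matches equals the number of conditions.

        -- Females with green eyes: value_id 2 (Female) and value_id 4 (Green)
        SELECT p.id, p.name
        FROM Person p
        JOIN PersonAttributes pa ON pa.person_id = p.id
        WHERE pa.value_id IN (2, 4)
        GROUP BY p.id, p.name
        HAVING COUNT(DISTINCT pa.value_id) = 2;   -- must match ALL listed values

    This leans on the observation in the question that value_id alone identifies an attribute-value pair.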

  • SQL View with Data from two tables

    - by Alex
    Hello! I can't seem to crack this - I have two tables (Persons and Companies), and I'm trying to create a view that:
    1) shows all persons
    2) also returns each company by itself once, regardless of how many persons are related to it
    3) orders by name across both tables
    To clarify, some sample data:

        (Table: Companies)
        Id  Name
        1   Banana
        2   ABC Inc.
        3   Microsoft
        4   Bigwig

        (Table: Persons)
        Id  Name        RelatedCompanyId
        1   Joe Smith   3
        2   Justin
        3   Paul Rudd   4
        4   Anjolie
        5   Dustin      4

    The output I'm looking for is something like this:

        Name        PersonName   CompanyName   RelatedCompanyId
        ABC Inc.    NULL         ABC Inc.      NULL
        Anjolie     Anjolie      NULL          NULL
        Banana      NULL         Banana        NULL
        Bigwig      NULL         Bigwig        NULL
        Dustin      Dustin       Bigwig        4
        Joe Smith   Joe Smith    Microsoft     3
        Justin      Justin       NULL          NULL
        Microsoft   NULL         Microsoft     NULL
        Paul Rudd   Paul Rudd    Bigwig        4

    As you can see, the new "Name" column is ordered across both tables (the company names appear correctly in between the person names), and each company appears exactly once, regardless of how many people are related to it. Can this even be done in SQL?! P.S. I'm trying to create a view so I can use this later for easy data retrieval and fulltext indexing, and to make the programming side simpler by just querying the view.
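
    A hedged sketch of one way to get exactly that shape in standard SQL (note that a view doesn't guarantee row order, so the ORDER BY belongs on the query that reads it):

        CREATE VIEW PersonsAndCompanies AS
            -- One row per person, with their company (if any)
            SELECT p.Name AS Name,
                   p.Name AS PersonName,
                   c.Name AS CompanyName,
                   p.RelatedCompanyId
            FROM Persons p
            LEFT JOIN Companies c ON c.Id = p.RelatedCompanyId
            UNION ALL
            -- Plus each company exactly once, regardless of related persons
            SELECT c.Name, NULL, c.Name, NULL
            FROM Companies c;

        -- Order across both kinds of rows at query time
        SELECT * FROM PersonsAndCompanies ORDER BY Name;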

  • How can I 'transpose' my data using SQL and remove duplicates at the same time?

    - by Remnant
    I have the following data structure in my database:

        LastName   FirstName   CourseName
        John       Day         Pricing
        John       Day         Marketing
        John       Day         Finance
        Lisa       Smith       Marketing
        Lisa       Smith       Finance
        etc...

    The data shows employees within a business and which courses they have shown a preference to attend. The number of courses per employee will vary (i.e. as above, John has 3 courses and Lisa 2). I need to take this data from the database and pass it to a webpage view (ASP.NET MVC). I would like the data that comes out of my database to match the view as much as possible, and want to transform the data using SQL so that it looks like the following:

        LastName   FirstName   Course1     Course2     Course3
        John       Day         Pricing     Marketing   Finance
        Lisa       Smith       Marketing   Finance

    Any thoughts on how this may be achieved? Note: one of the reasons I am trying this approach is that the original data structure does not easily lend itself to being iterated over using the typical MVC syntax:

        <% foreach (var item in Model.courseData) { %>

    Because of the duplication of names in the original data, I would end up with lots of conditionals in my View, which I would like to avoid. I have tried transforming the data using C# in my ViewModel but have found it tough going, and feel that I could lighten the workload by leveraging SQL before I return the data. Thanks.
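
    A hedged sketch of one way to do this on SQL Server (the table name is hypothetical; since the course count varies, the CASE list would really need to be built dynamically - it is fixed at three here for illustration): number each person's courses, then fold the numbered rows into columns.

        SELECT LastName, FirstName,
               MAX(CASE WHEN rn = 1 THEN CourseName END) AS Course1,
               MAX(CASE WHEN rn = 2 THEN CourseName END) AS Course2,
               MAX(CASE WHEN rn = 3 THEN CourseName END) AS Course3
        FROM (SELECT LastName, FirstName, CourseName,
                     ROW_NUMBER() OVER (PARTITION BY LastName, FirstName
                                        ORDER BY CourseName) AS rn
              FROM dbo.EmployeeCourses   -- hypothetical table
             ) AS numbered
        GROUP BY LastName, FirstName;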
