Search Results

Search found 1341 results on 54 pages for 'funny ha ha'.


  • SQL SERVER – Powershell – Importing CSV File Into Database – Video

    - by pinaldave
    Laerte Junior is my very dear friend and Powershell Expert. On my request he has agreed to share Powershell knowledge with us. Laerte Junior is a SQL Server MVP and, through his technology blog and simple-talk articles, an active member of the Microsoft community in Brasil. He is a skilled Principal Database Architect, Developer, and Administrator, specializing in SQL Server and Powershell Programming with over 8 years of hands-on experience. He holds a degree in Computer Science, has been awarded a number of certifications (including MCDBA), and is an expert in SQL Server 2000 / SQL Server 2005 / SQL Server 2008 technologies. Let us read the blog post in his own words. I was reading an excellent post from my great friend Pinal about loading data from CSV files, SQL SERVER – Importing CSV File Into Database – SQL in Sixty Seconds #018 – Video, to SQL Server and was honored to write another guest post on SQL Authority about the magic of the PowerShell. The biggest stuff in TechEd NA this year was PowerShell. Fellows, if you still don't know about it, it is better to run. Remember that The Core Servers to SQL Server are the future and consequently the Shell. You don't want to be out of this, right? Let's see some PowerShell Magic now. To start our tour, first we need to download these two functions from Powershell and SQL Server Master Jedi Chad Miller: Out-DataTable and Write-DataTable. Save them in a module and add it to your profile. In my case, the module is called functions.psm1. To have some data to play with, I created 10 csv files with the same content. I just put the SQL Server Errorlog into a csv file and created 10 copies of it. #Just create a CSV with data to import, using the SQL Error Log: [reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") $ServerInstance = New-Object ("Microsoft.SqlServer.Management.Smo.Server") $Env:Computername $ServerInstance.ReadErrorLog() | Export-Csv -Path "c:\SQLAuthority\ErrorLog.csv" -NoTypeInformation for ($Count = 1; $Count -le 10; $Count++) { Copy-Item "c:\SQLAuthority\ErrorLog.csv" "c:\SQLAuthority\ErrorLog$($Count).csv" } Now in my path c:\sqlauthority, I have 10 csv files. Now it is time to create a table. In my case, the SQL Server is called R2D2 and the Database is SQLServerRepository and the table is CSV_SQLAuthority. CREATE TABLE [dbo].[CSV_SQLAuthority]( [LogDate] [datetime] NULL, [Processinfo] [varchar](20) NULL, [Text] [varchar](MAX) NULL ) Let's play a little bit. I want to import all csv files from the path into the table synchronously: #Importing synchronously $DataImport = Import-Csv -Path (Get-ChildItem "c:\SQLAuthority\*.csv") $DataTable = Out-DataTable -InputObject $DataImport Write-DataTable -ServerInstance R2D2 -Database SQLServerRepository -TableName CSV_SQLAuthority -Data $DataTable Very cool, right? Let's do it asynchronously and in the background using PowerShell Jobs: #If you want to do it all asynchronously Start-Job -Name 'ImportingAsynchronously' -InitializationScript { Ipmo Functions -Force -DisableNameChecking } -ScriptBlock { $DataImport = Import-Csv -Path (Get-ChildItem "c:\SQLAuthority\*.csv"); $DataTable = Out-DataTable -InputObject $DataImport; Write-DataTable -ServerInstance "R2D2" -Database "SQLServerRepository" -TableName "CSV_SQLAuthority" -Data $DataTable } Oh, but what if I have csv files that are large in size and I want to import each one asynchronously?
    In this case, this is what should be done: Get-ChildItem "c:\SQLAuthority\*.csv" | % { Start-Job -Name "$($_)" -InitializationScript { Ipmo Functions -Force -DisableNameChecking } -ScriptBlock { $DataImport = Import-Csv -Path $args[0]; $DataTable = Out-DataTable -InputObject $DataImport; Write-DataTable -ServerInstance "R2D2" -Database "SQLServerRepository" -TableName "CSV_SQLAuthority" -Data $DataTable } -ArgumentList $_.FullName } How cool is that? Let's make the funny stuff now: let's schedule it as a SQL Server Agent Job. If you are using SQL Server 2012, you can use the PowerShell Job Step. Otherwise you need to use a CmdExec job step calling PowerShell.exe. We will use the second option. First, create a ps1 file called ImportCSV.ps1 with the script above and save it in a path. In my case, it is in c:\temp\automation. Just add these lines at the end: Get-ChildItem "c:\SQLAuthority\*.csv" | % { Start-Job -Name "$($_)" -InitializationScript { Ipmo Functions -Force -DisableNameChecking } -ScriptBlock { $DataImport = Import-Csv -Path $args[0]; $DataTable = Out-DataTable -InputObject $DataImport; Write-DataTable -ServerInstance "R2D2" -Database "SQLServerRepository" -TableName "CSV_SQLAuthority" -Data $DataTable } -ArgumentList $_.FullName } Get-Job | Wait-Job | Out-Null Remove-Job -State Completed Why? See my post Dooh PowerShell Trick – Running Scripts That has Posh Jobs on a SQL Agent Job. Remember, this trick is for ALL scripts that use PowerShell Jobs with any kind of scheduling tool (SQL Server Agent, Windows Scheduler). Create a Job called ImportCSV and a step called Step_ImportCSV and choose CmdExec. Then you just need to schedule or run it. I did a short video (with matching good background music) and you can see it at: That's it guys. C'mon, join me in the #PowerShellLifeStyle. You will love it. If you want to check what we can do with PowerShell and SQL Server, don't miss Laerte Junior's LiveMeeting on July 18. You can find more information at: LiveMeeting VC PowerShell PASS – Troubleshooting SQL Server With PowerShell – English Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology, Video Tagged: Powershell
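    As a side note for C# readers: assuming (as I believe Chad Miller's functions do) that the rows end up in a DataTable pushed through SqlBulkCopy, a rough C# equivalent of one of the loads above could look like the sketch below. This is an illustration only - the connection string, file path and naive comma-split CSV parser are placeholders matching the example above, not code from the post:

    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.IO;

    class CsvBulkLoader
    {
        static void Main()
        {
            // Illustrative values only - adjust to your own environment.
            const string connectionString =
                "Server=R2D2;Database=SQLServerRepository;Integrated Security=true";
            const string csvPath = @"c:\SQLAuthority\ErrorLog.csv";

            // In-memory table mirroring dbo.CSV_SQLAuthority.
            var table = new DataTable();
            table.Columns.Add("LogDate", typeof(DateTime));
            table.Columns.Add("Processinfo", typeof(string));
            table.Columns.Add("Text", typeof(string));

            // Naive CSV parsing (no embedded-comma handling) - enough for a sketch.
            foreach (var line in File.ReadLines(csvPath))
            {
                var fields = line.Split(',');
                if (fields.Length < 3) continue;
                DateTime logDate;
                if (!DateTime.TryParse(fields[0].Trim('"'), out logDate)) continue; // also skips the header row
                table.Rows.Add(logDate, fields[1].Trim('"'), fields[2].Trim('"'));
            }

            // Push all rows to the target table in one bulk operation.
            using (var connection = new SqlConnection(connectionString))
            using (var bulkCopy = new SqlBulkCopy(connection))
            {
                connection.Open();
                bulkCopy.DestinationTableName = "dbo.CSV_SQLAuthority";
                bulkCopy.WriteToServer(table);
            }
        }
    }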

    Read the article

  • Gene Hunt Says:

    - by BizTalk Visionary
    "She's as nervous as a very small nun at a penguin shoot"   "He's got fingers in more pies than a leper on a cookery course" "You so much as belch out of line and I'll have your scrotum on a barbed wire plate" "Let's go play slappyface" "your surrounded by armed barstewards" “Right, get out and find this murdering scum right now!” [pause] “Scratch that, we start 9am sharp tomorrow, it's beer-o-clock.” "So then Cartwright, you're such a good Detective.... Go and Detect me a packet of Garibaldies" "You're not the one who is going to have to knit himself a new arsehole after 25 years of aggressive male love in prison" “A dream for me is Diana Dors and a bottle of chip fat." “A dream for me is Diana Dors and a bottle of chip fat." “They reckon you've got concussion - but personally, I couldn't give a tart's furry cup if half your brains are falling out. Don't ever waltz into my kingdom playing king of the jungle.” “You great... soft... sissy... girlie... nancy... french... bender... Man-United supporting POOF!!” “Drugs eh? What's the point. They make you forget, make you talk funny, make you see things that aren't there. My old grandma got all of that for free when she had a stroke.” “He's Dead! It's quite serious!” “Fanny in the flat...Nice Work” “SoopaDoopa” “Tits in a Jumper!” “Drop your weapons! You are surrounded by armed bastards!” “It's 1973, almost dinnertime. I'm 'avin 'oops!” “Trust the Gene Genie!” “I wanna hump Britt Ekland...What're we gonna do...!” “Was that 'E' and you don't know the rest?! or you going 'Eeee, I Dunno'” “Good Girl! Prostate probe and no jelly. “ “Give over, it's nothing like Spain!” “I'll come over your houses and stamp on all your toys!” “The Wizard will sort it out. It's cos of the wonderful things he does” “Cartwright can jump up and down on his knackers!” “It's not a windup love, he really thinks like this!” “Women! You can't say two words to them” “I was thinking, maybe, a Berni Inn!” “If I wanted a bollocking for drinking too much...!” “Shhhh...hear that...that's the sound of this case being closed! “Chicken!? In a basket!?” “Seems a large quantity of cocaine...” “You probably thought he kept his cock in his keks!” “The tail-end of Rays demotion speech!” “Stephen Warren is gay!?” “You're a smart boy, use your initiative!” “Don't be such a Jessie!” “I find the idea of a bird brushing her teeth...!” “Never been tempted to the Magic talcum powder?” “Make sure she's got nice tits!” “You're more likely to find an ostrich with a plum up it's arse!” “Drink this lot under the table and have a pint on the way home!” “Never be a female Prime Minister!” “Pub? Pub! pub!.....Pub!” “Thou shalt not suck off rent boys!” “The number for the special clinic is on the notice board!” “If me uncle had tits, would he be me auntie!” “Got your vicars in a twist!” “We Done?!” “Your mates got balls...If they were any bigger he'd need a wheelbarrow!” “The Ending - from 'I want to go home' to the end music.”

    Read the article

  • Microsoft Sql Server driver for Nodejs - Part 2

    - by chanderdhall
    Nodejs, Sql server and Json response with Rest This post is part 2 of the Microsoft Sql Server driver for Node.js series. In this post we will look at the JSON responses from the Microsoft Sql Server driver for Node.js. Pre-requisites: If you have read Part 1 of the series, you should be good. We will be using a framework for Rest within Nodejs - Restify, but that would need no prior learning. Restify: Restify is a simple node module for building RESTful services. It is slimmer than Express. Express is a complete module that has all that you need to create a full-blown browser app. However, Restify does not have the additional overhead of templating, rendering, etc. that would be needed if your app has views. So, as the name suggests, it's an awesome framework for building RESTful services and is very light-weight. Set up - You can continue with the same directory or project structure we had in the previous post, or can start a new one. Install restify using npm and you are good to go. npm install restify Go to Server.js and include Restify in your solution. Then create the server object using restify.createServer() - SLICK - ha? var restify = require('restify'); var server = restify.createServer(); server.listen(8080, function () { console.log('%s listening at %s', server.name, server.url); }); Then make sure you provide a port for the Server to listen at. The callback function is optional but helps you for debugging purposes. Once you are done, save the file and then go to the command prompt and hit 'node server.js' and you should see the following:   To test the server, go to your browser and type the address 'http://localhost:8080/' and oops, you will see an error.   Why is that? - Well, because we haven't defined any routes. Let's go ahead and create a route. To begin with I'd like to return whatever is typed in the url after my name and the following code should do it. server.get('/ChanderDhall/:name', function respond(req, res, next) { res.end("hello " + req.params.name + "") }); You can also avoid writing callbacks inline. Something like this. function respond(req, res, next) { res.end("Chander Dhall " + req.params.name + ""); } server.get('/hello/:name', respond); Now if you go ahead and type http://localhost:8080/ChanderDhall/LovesNode you will get the response 'Chander Dhall loves node'. NOTE: Make sure your url has the right case as it's case-sensitive. You could have also typed it in as 'server.get('/chanderdhall/:name', respond);' Stored procedure: We've talked a lot about Restify now, but keep in mind the post is about being able to use Sql server with Node and return JSON. To see this in action, let's go ahead and create another route to a list of Employees from a stored procedure. server.get('/Employees', Employees); The following code will return a JSON response.  function Employees(req, res, next) { res.header("Content-Type", "application/json"); //Need to specify the Content-Type which is //JSON in our case. sql.open(conn_str, function (err, conn) { if (err) { //Logs an error console.log("Error opening the database connection!"); return; } console.log("before query!"); conn.queryRaw("exec sp_GetEmployees", function (err, results) { if (err) { //Connection is open but an error occurs while executing the query … What else can be done? Maybe create a formatter or maybe even come up with a hypermedia type, but that may upset some pragmatists. Well, that's going to be a totally different discussion and is really not part of this series.
Summary: We've discussed how to execute a stored procedure using Microsoft Sql Server driver for Node. Also, we have discussed how to format and send out a clean JSON to the app calling this API.  

    Read the article

  • Selling Visual Studio ALM

    - by Tarun Arora
    Introduction As a consultant I have been selling Application Lifecycle Management services using Visual Studio and Team Foundation Server. I’ve been contacted various times by friends working in organization telling me that ALM processes in their company were benchmarked when dinosaurs walked the earth. Most of these individuals already know the great features Microsoft ALM tools offer and are keen to start a conversation with the CIO but don’t exactly know where to start. It is very important how you engage in your first conversation, if you start the conversation with ‘There is this great tooling from Microsoft which offers amazing features to boost developer productivity, … ‘ from experience I can tell you the reply from your CIO would be ‘I already know! Our existing landscape has a combination of bleeding edge open source and cutting edge licensed tools which already cover these features quite well, more over Microsoft products have a high licensing cost associated to them.’ You will always find it harder to sell by feature, the trick is to highlight the gap in the existing processes & tools and then highlight the impact of these gaps to the overall development processes, by now you would have captured enough attention to show off how the ALM tooling offered by Microsoft not only fills those gaps but offers great value adds to take their development practices to the next level. Rangers ALM Assessment Guide Image 1 – Welcome! First look at the Rangers ALM assessment guide Most organization already have some processes in place to cover aspects of ALM. How do you go about proving that there isn’t enough cover in place? This is where Visual Studio ALM Rangers ALM Assessment guide can help. The ALM assessment guide is really a tool that helps you gather information about Development practices and processes within a customer's environment. Several questionnaires are used to identify the current state of individual development lifecycle areas and decide on a desired state for those processes. It also presents guidance and roll-up summaries to help with recommendations moving forward. The ALM Rangers assessment guide can be downloaded from here. Image 2 – ALM Assessment guide divided into different functions of SDLC The assessment guide is divided into different functions of Software Development Lifecycle (listed below), this gives you the ability to access how mature the company is in different areas of SDLC. Architecture & Design Requirement Engineering & UX Development Software Configuration Management Governance Deployment & Operations Testing & Quality Assurance Project Planning & Management Each section has a set of questions, fill in the assessment by selecting “Never/Sometimes/Always” from the Answer column in the question sheets.  Each answer has weightage to the overall score. Each question has a link next to it, clicking the link takes you to the Reference sheet which gives you more details about the question along with a reason for “why you need to ask this question?”, “other ways to phrase the question” and “what to expect as an answer from the customer”. The trick is to engage the customer in a discussion. You need to probe a lot, listen to the customer and have a discussion with several team members, preferably without management to ensure that you receive candid feedback. 
This reminds me of a funny incident when during an ALM review a customer told me that they have a sophisticated semi-automated application deployment process, further discussions revealed that deployment actually involved 72 manual configuration steps per production node. Such observations can be recorded in the Issue Brainstorming worksheet for further consideration later. It is also worth mentioning the different levels of ALM maturity to the customer. By default the desired state of ALM maturity is set to Standard, it is possible to set a desired state by area, you should strive for Advanced or Dynamic, it always helps by explaining the classification and advantages. Image 3 – ALM levels by description The ALM assessment guide helps you arrive at a quantitative measure of the company’s ALM maturity. The resultant graph plotted on a spider’s web shows you the company’s current state of ALM maturity and the desired state of ALM maturity. Further since the results are classified by area you can immediately spot the areas where the customer needs immediate help. Image 4 – The spiders web! The red cross icons are areas shouting out for immediate attention, the yellow exclamation icons are areas that need improvement. These icons are calculated on the difference between the Current State of ALM maturity VS the Desired state of ALM maturity. Image 5 – Results by area Conclusion To conclude the Rangers ALM assessment guide gives you the ability to, Measure the customer’s current ALM maturity level Understand the ALM maturity level the customer desires to achieve Capture a healthy list of issues the customer wants to brainstorm further Now What’s next…? Download and get started with the Rangers ALM Assessment Guide. If you have successfully captured the above listed three pieces of information you are in a great state to make recommendations on the identified areas highlighting the benefits that Visual Studio ALM tools would offer. In the next post I will be covering how to take the ALM assessment results as the base to actually convert your recommendation into a sell.  Remember to subscribe to http://feeds.feedburner.com/TarunArora. I would love to hear your feedback! If you have any recommendations on things that I should consider or any questions or feedback, feel free to leave a comment. *** A special thanks goes out to fellow ranges Willy, Ethem and Philip for reviewing the blog post and providing valuable feedback. ***

    Read the article

  • Do Not Optimize Without Measuring

    - by Alois Kraus
    Recently I had to do some performance work which included reading a lot of code. It is fascinating with what ideas people come up to solve a problem. Especially when there is no problem. When you look at other peoples code you will not be able to tell if it is well performing or not by reading it. You need to execute it with some sort of tracing or even better under a profiler. The first rule of the performance club is not to think and then to optimize but to measure, think and then optimize. The second rule is to do this do this in a loop to prevent slipping in bad things for too long into your code base. If you skip for some reason the measure step and optimize directly it is like changing the wave function in quantum mechanics. This has no observable effect in our world since it does represent only a probability distribution of all possible values. In quantum mechanics you need to let the wave function collapse to a single value. A collapsed wave function has therefore not many but one distinct value. This is what we physicists call a measurement. If you optimize your application without measuring it you are just changing the probability distribution of your potential performance values. Which performance your application actually has is still unknown. You only know that it will be within a specific range with a certain probability. As usual there are unlikely values within your distribution like a startup time of 20 minutes which should only happen once in 100 000 years. 100 000 years are a very short time when the first customer tries your heavily distributed networking application to run over a slow WIFI network… What is the point of this? Every programmer/architect has a mental performance model in his head. A model has always a set of explicit preconditions and a lot more implicit assumptions baked into it. When the model is good it will help you to think of good designs but it can also be the source of problems. In real world systems not all assumptions of your performance model (implicit or explicit) hold true any longer. The only way to connect your performance model and the real world is to measure it. In the WIFI example the model did assume a low latency high bandwidth LAN connection. If this assumption becomes wrong the system did have a drastic change in startup time. Lets look at a example. Lets assume we want to cache some expensive UI resource like fonts objects. For this undertaking we do create a Cache class with the UI themes we want to support. Since Fonts are expensive objects we do create it on demand the first time the theme is requested. A simple example of a Theme cache might look like this: using System; using System.Collections.Generic; using System.Drawing; struct Theme { public Color Color; public Font Font; } static class ThemeCache { static Dictionary<string, Theme> _Cache = new Dictionary<string, Theme> { {"Default", new Theme { Color = Color.AliceBlue }}, {"Theme12", new Theme { Color = Color.Aqua }}, }; public static Theme Get(string theme) { Theme cached = _Cache[theme]; if (cached.Font == null) { Console.WriteLine("Creating new font"); cached.Font = new Font("Arial", 8); } return cached; } } class Program { static void Main(string[] args) { Theme item = ThemeCache.Get("Theme12"); item = ThemeCache.Get("Theme12"); } } This cache does create font objects only once since on first retrieve of the Theme object the font is added to the Theme object. When we let the application run it should print “Creating new font” only once. Right? Wrong! 
    The vigilant readers have spotted the issue already. The creator of this cache class wanted to get maximum performance. So he decided that the Theme object should be a value type (struct) to not put too much pressure on the garbage collector. The code Theme cached = _Cache[theme]; if (cached.Font == null) { Console.WriteLine("Creating new font"); cached.Font = new Font("Arial", 8); } does work with a copy of the value stored in the dictionary. This means we do mutate a copy of the Theme object and return it to our caller. But the original Theme object in the dictionary will always have null for the Font field! The solution is to change the declaration of struct Theme to class Theme or to update the theme object in the dictionary. Our cache, as it currently stands, is actually a non-caching cache. The funny thing was that I found this out with a profiler by looking at which objects were finalized. I found way too many font objects being finalized. After a bit of debugging I found that the allocation source for Font objects was this cache. Since this cache had been there for years, it means that the cache was never needed (I found no perf issue due to the creation of font objects), the cache was never profiled to see if it did bring any performance gain, and to make the cache beneficial it would need to be accessed much more often. That was the story of the non-caching cache. Next time I will write something about measuring.
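    To make the fix concrete, here is a small sketch of the corrected cache following the first suggestion above (declaring Theme as a class so the dictionary holds a reference and the lazily created font is retained); it is adapted from the snippet above, not code from the original post:

    using System;
    using System.Collections.Generic;
    using System.Drawing;

    // Theme is now a reference type, so mutating the instance retrieved from
    // the dictionary also updates the instance the dictionary holds.
    class Theme
    {
        public Color Color;
        public Font Font;
    }

    static class ThemeCache
    {
        static Dictionary<string, Theme> _Cache = new Dictionary<string, Theme>
        {
            { "Default", new Theme { Color = Color.AliceBlue } },
            { "Theme12", new Theme { Color = Color.Aqua } },
        };

        public static Theme Get(string theme)
        {
            Theme cached = _Cache[theme];
            if (cached.Font == null)
            {
                Console.WriteLine("Creating new font");
                cached.Font = new Font("Arial", 8); // created once, then reused
            }
            return cached;
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            // "Creating new font" is now printed only once.
            Theme item = ThemeCache.Get("Theme12");
            item = ThemeCache.Get("Theme12");
        }
    }

    With the reference type, the mutation inside Get is visible to every later lookup, so the font really is created only on the first request.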

    Read the article

  • SQL SERVER – TechEd India 2012 – Content, Speakers and a Lots of Fun

    - by pinaldave
    TechEd is one event which every developer and IT professional looks forward to attending. It is an opportunity of a lifetime and no matter how many times one gets a chance to engage with it, it is never enough. I still remember every single moment of every TechEd I have attended so far. We are less than 100 hours away from the TechEd India 2012 event. This event is the one must-attend event for every Technology Enthusiast. For the fourth time in a row I am going to attend this event and I am as excited as the first time. There are going to be two very solid SQL Server tracks this time and I will be attending both tracks end to end. Here is my view on each of the 10 sessions. Each session is carefully crafted and leading experts from the industry will present them. Day 1, March 21, 2012 T-SQL Rediscovered with SQL Server 2012 – This session is going to bring out some of the lesser known enhancements introduced with SQL Server 2012. When I learned that Jacob Sebastian is going to do this session my reaction was DEMO, DEMO and DEMO! Jacob spends hours and hours of his time preparing his session and this will be one of those sessions that I am confident will be delivered over and over throughout the next many events. Catapult your data with SQL Server 2012 Integration Services – Praveen is an expert storyteller and one of the wizards when it comes to SQL Server and business intelligence. He is surely going to mesmerize you with some interesting insights on SSIS performance too. Processing Big Data with SQL Server 2012 and Hadoop – There are three sessions on Big Data at TechEd India 2012. Stephen is going to deliver one of the sessions. Watching Stephen present is always a joy and quite entertaining. He shares knowledge with his typical humor which captures one's attention. I wrote about what is BIG DATA in a blog post. SQL Server Misconceptions and Resolutions – I will be presenting this Session along with Vinod Kumar. READ MORE HERE. Securing with ContainedDB in SQL Server 2012 – Pranab is an expert when it comes to SQL Server and Security. I have seen him presenting and he is indeed very pleasant to watch. A dry subject like security, he makes much livelier. A Contained Database is a database which contains all the necessary settings and metadata, making the database easily portable to another server. This database will contain all the necessary details and will not have to depend on any server where it is installed for anything. You can take this database and move it to another server without having any worries. Day 3, March 23, 2012 Peeling SQL Server like an Onion: Internals Demystified – Vinod Kumar has been writing about this extensively on his other blog. In a recent conversation he suggested that he will be creating very exclusive content for this presentation. I have known Vinod for a long time and have worked with him on many community activities. I am going to pay special attention to the details. I know Vinod has a few give-aways planned for attending the session, if only he shares them with us. Speed Up – Parallel Processes and unparalleled Performance – Performance tuning is my favorite subject. I will be discussing the effect of parallelism on performance in this session. Hear me out, there will be lots of quiz questions during this session and if you get the answers correct – you can win some really cool goodies – I Promise! READ MORE HERE. Keep your database available – AlwaysOn – Balmukund is like an army man.
He is always ready to show and prove that he has coolest toys in terms of SQL Server and he knows how to keep them running AlwaysON. Availability groups, Listener, Clustering, Failover, Read-Only replica etc all will be demo’ed in this session. This is really heavy but very interesting content not to be missed. Lesser known facts about SQL Server Backup and Restore – Amit Banerjee – this name is known internationally for solving SQL Server problems in 140 characters. He has already blogged about this and this topic is going to be interesting. A successful restore strategy for applications is as good as their last good known backup. I have few difficult questions to ask to Amit and I am very sure that his unique style will entertain people. By the way, his one of the slide may give few in audience a funny heart attack. Top 5 reasons why you want SQL Server 2012 BI – Praveen plans to take a tour of some of the BI enhancements introduced in the new version. Business Insights with SQL Server is a critical building block and this version of SQL Server is no exception. For the matter of the fact, when I saw the demos he was going to show during this session, I felt like that I wish I can set up all of this on my machine. If you miss this session – you will miss one of the most informative session of the day. Also TechEd India 2012 has a Live streaming of some content and this can be watched here. The TechEd Team is planning to have some really good exclusive content in this channel as well. If you spot me, just do not hesitate to come by me and introduce yourself, I want to remember you! Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLServer, T SQL, Technology Tagged: TechEd, TechEdIn

    Read the article

  • Supporting users if they're not on your site

    - by Roger Hart
    Have a look at this Read Write Web article, specifically the paragraph in bold and the comments. Have a wry chuckle, or maybe weep for the future of humanity - your call. Then pause, and worry about information architecture. The short story: Read Write Web bumps up the Google rankings for "Facebook login" at the same time as Facebook makes UI changes, and a few hundred users get confused and leave comments on Read Write Web complaining about not being able to log in to their Facebook accounts.* Blindly clicking the first Google result is not a navigation behaviour I'd anticipated for folks visiting big names sites like Facebook. But then, I use Launchy and don't know where any of my files are, depend on Firefox auto-complete, view Facebook through my IM client, and don't need a map to find my backside with both hands. Not all our users behave in the same way, which means not all of our architecture is within our control, and people can get to your content in all sorts of ways. Even if the Read Write Web episode is a prank of some kind (there are, after all, plenty of folks who enjoy orchestrated trolling) it's still a useful reminder. Your users may take paths through and to your content you cannot control, and they are unlikely to deconstruct their assumptions along the way. I guess the meaningful question is: can you still support those users? If they get to you from Google instead of your front door, does what they find still make sense? Does your information architecture still work if your guests come in through the bathroom window? Ok, so here they broke into the house next door - you can't be expected to deal with that. But the rest is well worth thinking about. Other off-site interaction It's rarely going to be as funny as the comments at Read Write Web, but your users are going to do, say, and read things they think of as being about you and your products, in places you don't control. That's good. If you pay attention to it, you get data. Your users get a better experience. There are easy wins, too. Blogs, forums, social media &c. People may look for and find help with your product on blogs and forums, on Twitter, and what have you. They may learn about your brand in the same way. That's fine, it's an interaction you can be part of. It's time-consuming, certainly, but you have the option. You won't get a blogger to incorporate your site navigation just in case your users end up there, but you can be there when they do. Again, Anne Gentle, Gordon McLean and others have covered this in more depth than I could. Direct contact Sales people, customer care, support, they all talk to people. Are they sending links to your content? if so, which bits? Do they know about all of it? Do they have the content they need to support them - messaging that funnels sales, FAQ that are realistically frequent, detailed examples of things people want to do, that kind of thing. Are they sending links because users can't find the good stuff? Are they sending précis of your content, or re-writes, or brand new stuff? If so, does that mean your content isn't up to scratch, or that you've got content missing? Direct sales/care/support interactions are enormously valuable, and can help you know what content your users find useful. You can't have a table of contents or a "See also" in a phonecall, but your content strategy can support more interactions than browsing. *Passing observation about Facebook. For plenty if folks, it is  the internet. 
Its services are simple versions of what a lot of people use the internet for, and they're aggregated into one stop. Flickr, Vimeo, Wordpress, Twitter, LinkedIn, and all sorts of games, have Facebook doppelgangers that are not only friendlier to entry-level users, they're right there, behind only one layer of authentication. As such, it could own a lot of interaction convention. Heavy users may well not be tech-savvy, and be quite change averse. That doesn't make this episode not dumb, but I'm happy to go easy on 'em.

    Read the article

  • Scripting out Contained Database Users

    - by Argenis
      Today’s blog post comes from a Twitter thread on which @SQLSoldier, @sqlstudent144 and @SQLTaiob were discussing the internals of contained database users. Unless you have been living under a rock, you’ve heard about the concept of contained users within a SQL Server database (hit the link if you have not). In this article I’d like to show you that you can, indeed, script out contained database users and recreate them on another database, as either contained users or as good old fashioned logins/server principals as well. Why would this be useful? Well, because you would not need to know the password for the user in order to recreate it on another instance. I know there is a limited number of scenarios where this would be necessary, but nonetheless I figured I’d throw this blog post to show how it can be done. A more obscure use case: with the password hash (which I’m about to show you how to obtain) you could also crack the password using a utility like hashcat, as highlighted on this SQLServerCentral article. The Investigation SQL Server uses System Base Tables to save the password hashes of logins and contained database users. For logins it uses sys.sysxlgns, whereas for contained database users it leverages sys.sysowners. I’ll show you what I do to figure this stuff out: I create a login/contained user, and then I immediately browse the transaction log with, for example, fn_dblog. It’s pretty obvious that only two base tables touched by the operation are sys.sysxlgns, and also sys.sysprivs – the latter is used to track permissions. If I connect to the DAC on my instance, I can query for the password hash of this login I’ve just created. A few interesting things about this hash. This was taken on my laptop, and I happen to be running SQL Server 2014 RTM CU2, which is the latest public build of SQL Server 2014 as of time of writing. In 2008 R2 and prior versions (back to 2000), the password hashes would start with 0x0100. The reason why this changed is because starting with SQL Server 2012 password hashes are kept using a SHA512 algorithm, as opposed to SHA-1 (used since 2000) or Snefru (used in 6.5 and 7.0). SHA-1 is nowadays deemed unsafe and is very easy to crack. For regular SQL logins, this information is exposed through the sys.sql_logins catalog view, so there is really no need to connect to the DAC to grab an SID/password hash pair. For contained database users, there is (currently) no method of obtaining SID or password hashes without connecting to the DAC. If we create a contained database user, this is what we get from the transaction log: Note that the System Base Table used in this case is sys.sysowners. sys.sysprivs is used as well, and again this is to track permissions. To query sys.sysowners, you would have to connect to the DAC, as I mentioned previously. And this is what you would get: There are other ways to figure out what SQL Server uses under the hood to store contained database user password hashes, like looking at the execution plan for a query to sys.dm_db_uncontained_entities (Thanks, Robert Davis!) SIDs, Logins, Contained Users, and Why You Care…Or Not. One of the reasons behind the existence of Contained Users was the concept of portability of databases: it is really painful to maintain Server Principals (Logins) synced across most shared-nothing SQL Server HA/DR technologies (Mirroring, Availability Groups, and Log Shipping). 
    Often times you would need the Security Identifier (SID) of these logins to match across instances, and that meant that you had to fetch whatever SID was assigned to the login on the principal instance so you could recreate it on a secondary. With contained users you normally wouldn't care about SIDs, as the users are always available (and synced, as long as synchronization takes place) across instances. Now you might be presented with some particular requirement that specifies that SIDs be synced between logins on certain instances and contained database users on other databases. How would you go about creating a contained database user with a specific SID? The answer is that you can't do it directly, but there's a little trick that allows you to do it. Create a login with a specified SID and password hash, create a user for that server principal on a partially contained database, then migrate that user to contained using the system stored procedure sp_migrate_user_to_contained, then drop the login. CREATE LOGIN <login_name> WITH PASSWORD = <password_hash> HASHED, SID = <sid>; GO USE <partially_contained_db>; GO CREATE USER <user_name> FROM LOGIN <login_name>; GO EXEC sp_migrate_user_to_contained @username = <user_name>, @rename = N'keep_name', @disablelogin = N'disable_login'; GO DROP LOGIN <login_name>; GO Here's how this skeleton looks in action: And now I have a contained user with a specified SID and password hash. In my example above, I renamed the user after migrating it to contained so that it is, hopefully, easier to understand. Enjoy!

    Read the article

  • Vacations on Rodrigues 2014

    And now something completely different compared to the usual technical or community related articles here on this blog. Yes, this time I'm writing some lines on my (and my family's) activities during our long weekend stay on Rodrigues. So, please bear with me, it's eventually a bit more personal... Grab a soda, some popcorn and a cosy place to continue to read. Special promotions during school holidays Originally, our children started to ask more frequently about going on the plane again. Obviously, after their aunty from Germany was around during May, they were really eager to travel again. So, we decided that it might be a great opportunity to book some vacations during their school holidays. And just in time the local hotels and hotel groups started to advertise their special promotions for citizens and residents. After collecting multiple brochures over several days, we got attracted by various hotel packages on Rodrigues - most interestingly the expenses for the stay and flight ticket were less compared to other resorts here on the main island. As we have been to Rodrigues already back in 2008, we followed up on this idea and got in touch with a couple travel agencies. Well, I have to report that you should be really careful about the promotions from some of them. We had a very negative experience with Shamal Travel Agency in Quatre Bornes regarding their adverts and the actual price levels and age definition for children. Please, stay away from them if you are interested in transparent cost and services. Anyway, after some arrangements with two other close families we managed to confirm our stay at the Cotton Bay Hotel in Rodrigues. Given the fact that we already stayed there, and the hotel has been renovated recently, and it is under new management all looked very promising and relaxed for our vacation. Counting the days... As we already booked in July our children were counting down the days. And it got more interesting as soon as they were on school holidays finally. Well, the day arrived and waking them up at 2:30 hrs wasn't a problem after all. Quite the opposite it was fascinating for us parents to watch them waiting for the transport and later on during the airport transfer. Despite the early hours both didn't fall asleep and it was all so exciting. We are taking the plane! Well organised by the Cotton Bay Hotel Honestly, it was a breeze and a smooth ride during our stay at the hotel. From the airport transfer, the cleanliness of our bungalow, the organisation of our day trips, and the SPA - all very well and enjoyable. The children had great fun, and although it was a bit too windy to plunge into the pool they had a lot of fun with other activities on the beach and at the Kid's Club. Oh, and we had our private petting zoo with cows, sheep and goats just close to the terrace. Some of us went to check out the SPA facilities and I have to admit that the services regarding Hammam and Sauna are better than at some other hotels in Mauritius. I don't know after how many months or years I was once again enjoying a very hot sauna. Little draw-back but nothing to worry about... There is no cold water or at least ice cubes to cool down the body, but hey there was a nice breeze coming over the hills.
Some day trips to mention Based on a friend's recommendation we walked to a "restaurant" called Chez Solange & Robert. Hahaha, restaurant is widely stretched in this case, as we enjoyed a great BBQ with fresh lobster, whole fish, and pieces of chicken breast in an open cottage. Just some wooden structure covered with dried palm leaves on the roof - island feeling pure! The other day we went to the Giant Tortoise & Cave Reserve Francois Leguat to observe the giant Aldabra turtles and to visit the Grande Caverne. The biggest limestone cave on the island. Compared to our last visit this was a novelty after checking out the Caverne Partate. The formations of stalactites and stalagmites are very impressive and imaginative. Our guide had lots of funny terms and despite the low light conditions the kids had a great time wandering around on the narrow wooden paths and stairs. And last but not least, we decided to check out the Tyrodrig zip lines... Everyone was allowed to join the trip through the air, and our little ones stayed close to our field guides. But finally went on their own on the very last traversal. Puuuh, it was astounishing to glide over the valley, and for sure something to repeat next time. Impressions of our vacation on Rodrigues 2014   Next stay has been discussed already Oh yes, Rodrigues baby! We are going to come again! Tentative dates have been discussed already and now it's up to us to earn enough our next holiday on that wonderful remote piece of paradise. Eventually, a little bit longer than this time. We'll see...

    Read the article

  • New Replication, Optimizer and High Availability features in MySQL 5.6.5!

    - by Rob Young
    As the Product Manager for the MySQL database it is always great to announce when the MySQL Engineering team delivers another great product release. As a field DBA and developer it is even better when that release contains improvements and innovation that I know will help those currently using MySQL for apps that range from modest intranet sites to the most highly trafficked web sites on the web. That said, it is my pleasure to take my hat off to MySQL Engineering for today's release of the MySQL 5.6.5 Development Milestone Release ("DMR"). The new highlighted features in MySQL 5.6.5 are discussed here: New Self-Healing Replication Clusters – The 5.6.5 DMR improves MySQL Replication by adding Global Transaction Ids and automated utilities for self-healing Replication clusters. Prior to 5.6.5 this has been somewhat of a pain point for MySQL users, with most developing custom solutions or looking to costly, complex third-party solutions for these capabilities. With 5.6.5 these shackles are all but removed by a solution that is included with the GPL version of the database and supporting GPL tools. You can learn all about the details of the great, problem-solving Replication features in MySQL 5.6 in Mat Keep's Developer Zone article. New Replication Administration and Failover Utilities – As mentioned above, the new Replication features, Global Transaction Ids specifically, are now supported by a set of automated GPL utilities that leverage the new GTIDs to provide administration and manual or auto failover to the most up-to-date slave (that is the default, but user configurable if needed) in the event of a master failure. The new utilities, along with links to Engineering related blogs, are discussed in detail in the DevZone Article noted above. Better Query Optimization and Throughput – The MySQL Optimizer team continues to amaze with the latest round of improvements in 5.6.5. Along with much refactoring of the legacy code base, the Optimizer team has improved complex query optimization and throughput by adding these functional improvements: Subquery Optimizations - Subqueries are now included in the Optimizer path for runtime optimization. Better throughput of nested queries enables application developers to simplify and consolidate multiple queries and result sets into a single unit of work. Optimizer now uses CURRENT_TIMESTAMP as default for DATETIME columns - For simplification, this eliminates the need for application developers to assign this value when a column of this type is blank by default. Optimizations for Range based queries - Optimizer now uses ready statistics vs Index based scans for queries with multiple range values. Optimizations for queries using filesort and ORDER BY - Optimization criteria/decision on execution method is now made at the optimization vs parsing stage. Print EXPLAIN in JSON format for hierarchical readability and Enterprise tool consumption. You can learn the details about these new features as well as all of the Optimizer-based improvements in MySQL 5.6 by following the Optimizer team blog. You can download and try the MySQL 5.6.5 DMR here (look under "Development Releases"). Please let us know what you think! The new HA utilities for Replication Administration and Failover are available as part of the MySQL Workbench Community Edition, which you can download here. Also New in MySQL Labs – As has become our tradition when announcing DMRs we also like to provide "Early Access" development features to the MySQL Community via the MySQL Labs.
    Today is no exception as we are also releasing the following to Labs for you to download, try and let us know your thoughts on where we need to improve: InnoDB Online Operations – MySQL 5.6 now provides Online ADD Index, FK Drop and Online Column RENAME. These operations are non-blocking and will continue to evolve in future DMRs. You can learn the grainy details by following John Russell's blog. InnoDB data access via Memcached API ("NotOnlySQL") - Improved refresh of an earlier feature release – Similar to Cluster 7.2, MySQL 5.6 provides direct NotOnlySQL access to InnoDB data via the familiar Memcached API. This provides the ultimate in flexibility for developers who need fast, simple key/value access and complex query support commingled within their applications. Improved Transactional Performance, Scale – The InnoDB Engineering team has once again under-promised and over-delivered in the area of improved performance and scale. These improvements are also included in the aggregated Spring 2012 labs release: InnoDB CPU cache performance improvements for modern, multi-core/CPU systems show great promise, with internal tests showing 2x throughput improvement for read-only activity and 6x throughput improvement for SELECT range; Read/Write benchmarks are in progress. More details on the above are available here. You can download all of the above in an aggregated "InnoDB 2012 Spring Labs Release" binary from the MySQL Labs. You can also learn more about these improvements and about related fixes to mysys mutex and hash sort by checking out the InnoDB team blog. MySQL 5.6.5 is another installment in what we believe will be the best release of the MySQL database ever. It also serves as a shining example of how the MySQL Engineering team at Oracle leads in MySQL innovation. You can get the overall Oracle message on the MySQL 5.6.5 DMR and Early Access labs features here. As always, thanks for your continued support of MySQL, the #1 open source database on the planet!

    Read the article

  • SQL SERVER – Beginning New Weekly Series – Memory Lane – #002

    - by pinaldave
    Here is the list of curetted articles of SQLAuthority.com across all these years. Instead of just listing all the articles I have selected a few of my most favorite articles and have listed them here with additional notes below it. Let me know which one of the following is your favorite article from memory lane. 2006 Query to Find ByteSize of All the Tables in Database This was my second blog post and today I do not remember what was the business need which has made me build this query. It was built for SQL Server 2000 and it will not directly run on SQL Server 2005 or later version now. It measured the byte size of the tables in the database. This can be done in many different ways as well for example SP_HELPDB as well SP_HELP. I wish to build similar script in 2005 and later version. 2007 This week I had completed my – 1 Year (365 blogs) and very first 1 Million Views. I was pretty excited at that time with this new achievement. SQL SERVER Versions, CodeNames, Year of Release When I started with SQL Server I did not know all the names correctly for each version and I often used to get confused with this. However, as time passed by I started to remember all the codename as well. In this blog post I have not included SQL Server 2012′s code name as it was not released at the time. SQL Server 2012′s code name is Denali. Here is the question for you – anyone know what is the internal name of the SQL Server’s next version? Searching String in Stored Procedure I have already started to work with 2005 by this time and I was personally converting each of my stored procedures to SQL Server 2005 compatible. As we were upgrading from SQL Server 2000 to SQL Server 2005 we had to search each of the stored procedures and make sure that we remove incompatible code from it. For example, syscolumns of SQL Server 2000 was now being replaced by sys.columns of SQL Server 2005. This stored procedure was pretty helpful at that time. Later on I build few additional versions of the same stored procedure. Version 1: This version finds the Stored Procedures related to Table Version 2: This is specific version which works with SQL Server 2005 and later version 2008 Clear Drop Down List of Recent Connection From SQL Server Management Studio It happens to all of us when we connected to some remote client server and we never ever have to connect to it again. However, it keeps on bothering us that the name shows up in the list all the time. In this blog post I covered a quick tip about how we can remove the same. I also wrote a small article about How to Check Database Integrity for all Databases and there was a funny question from a reader requesting T-SQL code to refresh databases. 2009 Stored Procedure are Compiled on First Run – SP is taking Longer to Run First Time A myth is quite prevailing in the industry that Stored Procedures are pre-compiled and they should always run faster. It is not true. Stored procedures are compiled on very first execution of it and that is the reason why it takes longer when it executes first time. In this blog post I had a great time discussing the same concept. If you do not agree with it, you are welcome to read this blog post. Removing Key Lookup – Seek Predicate – Predicate – An Interesting Observation Related to Datatypes Performance Tuning is an interesting concept and my personal favorite one. In many blog posts I have described how to do performance tuning and how to improve the performance of the queries. 
    In this quick tip I have explained how one can remove the Key Lookup and improve performance. Here are very relevant articles on this subject: Article 1 | Article 2 | Article 3 2010 Recycle Error Log – Create New Log file without a Server Restart During one of the consulting assignments I noticed a DBA restarting the server to create a new log file. This is absolutely not necessary and restarting the server might have many other negative impacts. There is a stored procedure, sp_cycle_errorlog, which can do the same task efficiently and properly. Have you ever used this SP or feature? Additionally I had a great time presenting on SQL Server Best Practices at the SharePoint Conference. 2011 SSMS 2012 Reset Keyboard Shortcuts to Default It is very much possible that we mix up various SQL Server shortcuts and at times we feel like resetting them to default. In SQL Server 2012 it is not easy to do; there is a process to follow and I enjoyed blogging about it. Fundamentals of Columnstore Index Columnstore index was introduced in SQL Server 2012 and has been a very popular subject. It increases the speed of the server dramatically and can be an extremely useful feature for data warehousing. However, updating the columnstore index is not as simple as a simple UPDATE statement. Read in a detailed blog post about how Update works with Columnstore Index. Additionally, you can watch a Quick Video on this subject. SQL Server 2012 New Features I had decided to explore SQL Server 2012 features last year and went through pretty much every single concept introduced in separate blog posts. Here are a few blog posts where I describe how SQL Server 2012 functions work. Introduction to CUME_DIST – Analytic Functions Introduction to FIRST_VALUE and LAST_VALUE – Analytic Functions OVER clause with FIRST_VALUE and LAST_VALUE – Analytic Functions I indeed enjoyed writing about SQL Server 2012 functions last year. Have you gone through all the new features which are introduced in SQL Server 2012? If not, it is still not too late to go through them. Reference: Pinal Dave (http://blog.sqlauthority.com)   Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Is it Hard to Write a Blog?

    - by Joe Mayo
    Responding to a tweet I received, asking if I found it hard to write a blog and keep it interesting. This is one of the situations where a 140 character response doesn’t do a question justice. There’s a lot to think about between the subjects of writing, subject matter, and entertainment.  Here’s my take on each of these three topics: There’s all types of writing you can do with various degrees of difficulty. If you’re writing a book and you have a gazillion editors bleeding over your every utterance, then the task becomes harder because you’re second-guessing yourself, not knowing whose opinion will be violated. However, if you’re communicating in a public forum, not too many people care about the grammar as much as whether what you have to say is correct.  For a blog, I would say it’s somewhere in-between.  Right now, I’m using Windows Live Writer, which gives me a few advantages to just typing into the blog editor, such as spelling correction and the ability to save my work and resume later.  Overall, writing is one of those things that you just need to get used to.  It’s an essential skill for developers because you need to document your work, depending on what your definition of proper documentation is, and communicate with other developers via various communications mediums. Not begin good (or not thinking that you’re good) shouldn’t hold you back.  Like most things in life, practice will improve your skill.  So, push away that inner voice that keeps you from moving forward and just do it. A good grasp on the subject matter you’re writing about helps.  However, don’t let a lack of knowledge stop you from writing about something. I recall reading something a while back by a developer who didn’t know a technology but wrote about their experience in learning it. They ended up learning more by expressing their thoughts in writing. If you look around out many blogs today, there are many items written by developers learning what they’re writing about.  So, whether you are sure or unsure, you can still write – just be honest with yourself and your readers about what you’re writing. Also, don’t be afraid to have a different opinion or worry if someone will disagree.  I’ll freely admit that it took a while for me to become accustomed to being criticized. Take the good with the bad and use the bad to make yourself better. Guaranteed, someone will disagree with one or more parts of what I’ve written here or think they have a better approach. No problem, more power to them, and whatever constructive comments they have will be a benefit to me in the future; Otherwise, to h*ll with them. :)  Every time you get knocked down, get right back up, dust the dirt off your backside, and keep moving forward.  You’ll learn in time how to align a subject with your own presentation of the material. Entertainment could be hard or could be natural, depending on the personality of yourself and your target audience. It’s even more challenging because you can say something you think is funny and someone will be offended. In fact, there are a lot of things that you shouldn’t say in the name of a joke, but I won’t mention any of them here for want of not offending anyone. Of course, I probably offended someone by saying that and there is probably an organization somewhere in the world out to get me now. I’m probably not the best person to be giving you advice on entertaining an audience.  I mean, every time I try to tell a joke on Twitter 10 people unfriend me. Okay, maybe 15, but you get my point. 
One thing you might be interested in knowing is that it’s not too hard for one technical person to entertain other technical people, especially when the subject is of interest.  It’s the excitement in each sentence and passion in each paragraph that will keep another developer entertained and interested in what you have to say. Not everyone will like what you’ve written, but the important part is to find your own voice, and it’s likely that there is at least one person in some corner of the world who likes what you have to say, even if it’s your mom and she doesn’t understand a single word you write. :)   If I could leave you with one final thought: just do it, and don’t let anyone or anything hold you back.   Joe

    Read the article

  • Modifying Service URLs with LINQ to Twitter

    - by Joe Mayo
    It’s funny that two posts so close together speak about flexibility with the LINQ to Twitter provider.  There are certain things you know from experience about when to make software more flexible and when to save time.  This is another one of those times when I got lucky and made the right choice up front. I’m talking about the ability to switch URLs. It only makes sense that Twitter should begin versioning their API as it matures.  In fact, most of the entire API has moved to the v1 URL at “https://api.twitter.com/1/”, except for search and trends.  Recently, Twitter introduced the available and local trends, but hung them off the new v1, and left the rest of the trends API on the old URL. To implement this, I muscled my way into the expression tree during CreateRequestProcessor to figure out which trend I was dealing with; perhaps not elegant, but the code is in the right place and that’s what factories are for.  Anyway, the point is that I wouldn’t have to do this kind of stuff (as much fun as it is) if Twitter would have more consistency. Having gone to Chirp last week and seen the evolution of the API, it looks like my wish is coming true.  …now if they would just get their stuff together on the mess they made with geo-location and places… but again, that’s all transparent if you’re using LINQ to Twitter because I pulled all of that together in a consistent way so that you don’t have to. Normally, when Twitter makes a change, code breaks and I have to scramble to get the fixes in place.  This time, in the case of a URL change, the adjustment is easy and no-one has to wait for me.  Essentially, all you need to do is change the URL passed to the TwitterContext constructor.  Here’s an example of instantiating a TwitterContext now: using (var twitterCtx = new TwitterContext(auth, "https://api.twitter.com/1/", "https://search.twitter.com/")) The third constructor parameter is the SearchUrl, which is used for the Search and Trend APIs. You probably know what’s coming next; another constructor call, but with the SearchUrl parameter set to the new URL as follows: using (var twitterCtx = new TwitterContext(auth, "https://api.twitter.com/1/", "https://api.twitter.com/1/")) One consequence of setting the URL this way is that you set the URL for both Trends and Search.  Since Search is still using the old URL, this is going to break Search queries. You could always instantiate a special TwitterContext instance for Search queries, with the old URL set (sketched below). Alternatively, you can use the TwitterContext’s SearchUrl property. Here’s an example: twitterCtx.SearchUrl = "https://api.twitter.com/1/"; var trends = (from trend in twitterCtx.Trends where trend.Type == TrendType.Daily && trend.Date == DateTime.Now.AddDays(-2).Date select trend) .ToList(); Notice how I set the SearchUrl property just-in-time for the query. This allows you to target the URL for each specific query. Whichever way you prefer to configure the URL, it’s your choice. So, now you know how to set the URL to be used for Trend queries and how to prevent whacking your Search queries. I’ll be updating the Trend API to use the same URL as all other APIs soon, so the only API left to use the SearchUrl will be Search, but for the short term, it’s Trends and Search. Until I make this change, you’ll have a viable work-around by setting the URL yourself, as explained above. 
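If you go the dedicated-instance route, a minimal sketch (reusing the same auth object and the Trend query shown above) might look like this:

    // Sketch: one context keeps Search on the old URL, a second one points Trends
    // (and everything else) at the new v1 URL. "auth" is the same authorization
    // object used in the examples above.
    using (var searchCtx = new TwitterContext(auth, "https://api.twitter.com/1/", "https://search.twitter.com/"))
    using (var trendCtx = new TwitterContext(auth, "https://api.twitter.com/1/", "https://api.twitter.com/1/"))
    {
        // Trend queries run against the v1 SearchUrl...
        var trends =
            (from trend in trendCtx.Trends
             where trend.Type == TrendType.Daily &&
                   trend.Date == DateTime.Now.AddDays(-2).Date
             select trend)
            .ToList();

        // ...while any Search queries continue to go through searchCtx,
        // which still points at search.twitter.com.
    }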
These were the Search and Trend URLs, but you might be curious about the second parameter of the TwitterContext constructor; that’s the URL for all other APIs (the BaseUrl), except for Trend and Search. Similarly, you can use the TwitterContext’s BaseUrl property to set the BaseUrl. Setting the BaseUrl can be useful when communicating with other services. In addition to Twitter changing URLs, the Twitter API has been adopted by other companies, such as Identi.ca, Tumblr, and WordPress.  This capability lets you use LINQ to Twitter with any of these services.  This is a testament to the success of the Twitter API and its popularity. No doubt we’ll have hills and valleys to traverse as the Twitter API matures, but hopefully there will be enough flexibility in LINQ to Twitter to make these changes as transparent as possible for you. @JoeMayo

    Read the article

  • Solaris 11 Live CD-based installation

    - by AndrasF
    Instead of the two installation procedures promised in the previous part, in this post I have to deal exclusively with the Live CD variant. I had not expected that presenting even this one would take more than 50 screen captures, so I had to change my earlier plan. The Solaris 11 Live CD installation primarily targets the needs of desktop users and is supported exclusively on x86 machines (even though SPARC systems also have graphics cards - e.g. the T4-1). The process can be split into two parts: first the guest machine is created in a VirtualBox environment, and this is followed by installing Solaris 11 on the virtual machine. HCL and helper tools (DDT, DDU) Before installing the Solaris operating system, it is worth checking whether our physical system is supported. The already mentioned hardware compatibility list (HCL) is well suited for this, as are the following two utilities: Device Detection Tool Device Driver Utility Both applications can survey the hardware components of our system and check their driver coverage. The difference between them is that running the DDT requires Java, while the DDU requires Solaris. The latter will be mentioned briefly during the installation. Where to download the installation media Unless we do a network installation (*), we need installation media, which can be downloaded from the page below. It is worth downloading all three files as well as the so-called repository medium containing the packages (the last item in the following list): sol-11-1111-live-x86.iso sol-11-1111-text-x86.iso sol-11-1111-ai-x86.iso sol-11-1111-repo-full.iso The first three variants are also available in bootable USB format - in that case the file names end in usb instead of iso. A short note on what each kit is for can be found in the previous blog post (link). If we want to install on a SPARC system, we will need the files containing 'sparc' instead of 'x86'. (*) - it is also possible to perform the installation over the network by booting from the AI media. This matters when the target machine has no PXE (Preboot Execution Environment) boot support. Configuring VirtualBox Without using a separate physical machine, Solaris 11 can also be run as a guest in a virtual environment. VirtualBox offers a convenient way to do this. The installer or package matching our host (Windows, Unix, Linux) - currently version 4.1.16 is the latest - and the user manual, which also covers the installation, can be downloaded from the product page. After a successful installation, the following steps take us to the new virtual machine: 1. After starting VBox, the main window shows our existing virtual machines (Sol11demo, Sol11u1b07, Sol11.1B16, Sun_ZFS_Storage_7000) and the main properties of the currently selected one (Sol11demo): name, memory size, list of virtual storage devices, etc. 2. Clicking the New button starts the wizard that creates the virtual machine. 3. Next we have to name the guest machine and select the operating system type (with a descriptive name VirtualBox can pick the operating system family itself, and we only have to set the version): enter Solaris11 as the name and choose the 64-bit variant (provided that our host supports it). 4. 
For installation and the first steps, 1536MB of memory is perfectly adequate (this can be changed later as requirements evolve). 5. Like its physical counterparts, no virtual machine can exist without a hard disk (in this case a virtual disk). We can use an already existing one (a file containing a virtual disk), or we can create a brand new instance. Let's stay with the latter (Create new hard disk)! 6. Of the possible formats - for simplicity's sake - let's go with the offered default type (VDI - VirtualBox Disk Image). 7. During creation the virtual disk can be allocated in one go (Fixed size) or dynamically in several steps (Dynamically allocated). The first variant puts much less load on the system, while the advantage of the second is that it saves space. Choose the fixed-size variant. 8. Now only a single piece of data is unknown to VirtualBox: the size of the virtual disk to be created. An 8GB area is suitable here to start getting acquainted. 9. If every setting was entered correctly, pressing the Create button starts the creation of the virtual disk. 10. Depending on the data entered, this operation finishes in a few minutes. 11. After a similar confirmation (pressing the Create button), the creation of the requested virtual machine begins as well. 12. After it completes successfully, the new virtual machine immediately appears in the list on the left of the main window, among the available virtual machines. The blog post is being updated continuously... the remaining content of this part will appear on the page soon.

    Read the article

  • Some mail details about Orange Mauritius

    Being an internet service provider is not easy after all for a lot of companies. Luckily, there are quite a few good international operators in this world. For example Orange Mauritius aka Mauritius Telecom aka Wanadoo(?) aka MyT here in Mauritius. The local circumstances give them a quasi-monopoly position on fixed lines for telephony and therefore on cable-based DSL internet connectivity. So far, not bad but as usual... the details. Just for the record, I am only using the services of Orange for mobile, but friends and customers are bound, eh stuck, with other services of Orange Mauritius. And usually, being the IT guy, they get in touch with me to complain about problems or to ask questions on either their ADSL / MyT connection, mail services or whatever. Most of those issues are user-related and easy to solve by tweaking the configuration of their computer a little bit, but sometimes it gets weird. Using Orange ADSL... somewhere else Now, let's imagine we have been an Orange ADSL customer for ages and we are using their mail services with our very own mail address like "[email protected]". We configured our mail client like Thunderbird, Outlook Express, Outlook or Windows Mail as publicly described, and we are able to receive and send emails like a champion. No problems at all, the world is green. Did I mention that we have a laptop? Ok, let's take our movable piece of information technology and visit a friend here on the island. Not surprisingly, he is also a customer of Orange, so we can read and answer emails. But Orange is not the only internet service provider, and one day we happen to hang out with someone that uses Emtel via WiMAX or UMTS... And the fun starts... We can still receive and read emails from our Orange mail account and the IT world is still bright, but try to send mails to someone outside the domain "@intnet.mu" or "@orange.mu". Your mail client will refuse to send mail with SMTP message 5.1.0 "blah not allowed". First guess: there is a problem with the mail client, maybe the configuration magically changed overnight. But no, it is still working at home... So, there is for sure a problem with the guy's internet connection. At least, it is his fault not to have Orange internet services, so it cannot work properly... The Orange Mail FAQ After some more frustration we finally check out the Orange Mail FAQ to see whether this (obviously?) common problem has been described already. Sorry, but those FAQ entries are even more confusing, as it is not really clear how to handle this scenario. Best of all, most of the entries still refer to servers in the domain "intnet.mu". I mean, Orange will disable those systems in favour of the domain "orange.mu" in the near future and does not amend their FAQs. Come on, guys! Ok, settings for POP3 are there. Hm, what about the secure version POP3S? No signs at all... Even changing your mail client to use password encryption with STARTTLS is not allowed at all. Use "bow.intnet.mu" for incoming mail... Ahhh, pretty obvious host name. I mean, at least something like pop.intnet.mu or pop3.intnet.mu would have been more accurate. Funniest of all, the hostname "pop.orange.mu" is accessible and serves your mail account. Alright, let's check whether SMTP options for authentication, or alternatives like POP-before-SMTP, or any other well-known and established mechanism for sending emails, are described anywhere. I guess that spotting a whale or shark in Mauritian waters would be easier. 
Trial and error on the SMTP settings reveals that neither STARTTLS nor any other connection/password encryption is available. Using SSL/TLS on SMTP only reveals that there is no service answering your request. Calling customer service So, we have to bite into the bitter apple, get in touch with Orange customer service, explain our case and ask for advice. After some hiccups, we finally manage to get hold of someone competent in mail services and we receive the golden spoon of mail configuration made by Orange Mauritius: SMTP hostname: smtpauth.intnet.mu And the world of IT is surprisingly green again. Customer satisfaction? Dear Orange Mauritius, what's the problem with this information? Are you scared of mail spammers? Why isn't this case covered in your FAQs? Ok, talking about your FAQs - simply said: they are badly outdated! They tell you to configure your mail client to use server names in the domain intnet.mu but to specify your account username with orange.mu as the domain part. And yet there are servers available on the domain orange.mu after all. So, why don't you provide current information like this: POP3 server name: pop.orange.mu, SMTP server name: smtp.orange.mu, SMTP authenticated: smtpauth.orange.mu. It's not difficult, is it? In my humble opinion not really, and you would provide clean, consistent and up-to-date information for your customers. This would produce less frustration and therefore less traffic on your customer service lines. Which, after all, would improve the total user experience and satisfaction level on both sides. Without knowing these facts, now imagine you take your laptop abroad and have to use other internet service providers to be able to be online... Calling your customer service would be unnecessarily expensive!
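For the programmers among us, here is a minimal C# sketch of the working settings. Only the smtpauth.intnet.mu host name comes from the support call; the port, the account name and the password are placeholders you have to replace with your own values:

    using System.Net;
    using System.Net.Mail;

    class OrangeSmtpDemo
    {
        static void Main()
        {
            // smtpauth.intnet.mu is the host name customer service finally revealed.
            // Port 25 and the credentials below are assumptions - use your own account.
            // No STARTTLS/SSL is enabled because the server does not offer it.
            var client = new SmtpClient("smtpauth.intnet.mu", 25);
            client.Credentials = new NetworkCredential("your.name@orange.mu", "your-password");

            using (var message = new MailMessage(
                "your.name@orange.mu",            // from: your Orange mail address
                "someone@example.com",            // to: a recipient outside intnet.mu/orange.mu
                "Test via smtpauth.intnet.mu",
                "Sending outside the Orange network finally works."))
            {
                client.Send(message);
            }
        }
    }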

    Read the article

  • How to Achieve Real-Time Data Protection and Availability....For Real

    - by JoeMeeks
    There is a class of business and mission critical applications where downtime or data loss have substantial negative impact on revenue, customer service, reputation, cost, etc. Because the Oracle Database is used extensively to provide reliable performance and availability for this class of application, it also provides an integrated set of capabilities for real-time data protection and availability. Active Data Guard, depicted in the figure below, is the cornerstone for accomplishing these objectives because it provides the absolute best real-time data protection and availability for the Oracle Database. This is a bold statement, but it is supported by the facts. It isn’t so much that alternative solutions are bad, it’s just that their architectures prevent them from achieving the same levels of data protection, availability, simplicity, and asset utilization provided by Active Data Guard. Let’s explore further. Backups are the most popular method used to protect data and are an essential best practice for every database. Not surprisingly, Oracle Recovery Manager (RMAN) is one of the most commonly used features of the Oracle Database. But comparing Active Data Guard to backups is like comparing apples to motorcycles. Active Data Guard uses a hot (open read-only), synchronized copy of the production database to provide real-time data protection and HA. In contrast, a restore from backup takes time and often has many moving parts - people, processes, software and systems – that can create a level of uncertainty during an outage that critical applications can’t afford. This is why backups play a secondary role for your most critical databases by complementing real-time solutions that can provide both data protection and availability. Before Data Guard, enterprises used storage remote-mirroring for real-time data protection and availability. Remote-mirroring is a sophisticated storage technology promoted as a generic infrastructure solution that makes a simple promise – whatever is written to a primary volume will also be written to the mirrored volume at a remote site. Keeping this promise is also what causes data loss and downtime when the data written to primary volumes is corrupt – the same corruption is faithfully mirrored to the remote volume making both copies unusable. This happens because remote-mirroring is a generic process. It has no  intrinsic knowledge of Oracle data structures to enable advanced protection, nor can it perform independent Oracle validation BEFORE changes are applied to the remote copy. There is also nothing to prevent human error (e.g. a storage admin accidentally deleting critical files) from also impacting the remote mirrored copy. Remote-mirroring tricks users by creating a false impression that there are two separate copies of the Oracle Database. In truth; while remote-mirroring maintains two copies of the data on different volumes, both are part of a single closely coupled system. Not only will remote-mirroring propagate corruptions and administrative errors, but the changes applied to the mirrored volume are a result of the same Oracle code path that applied the change to the source volume. There is no isolation, either from a storage mirroring perspective or from an Oracle software perspective.  Bottom line, storage remote-mirroring lacks both the smarts and isolation level necessary to provide true data protection. 
Active Data Guard offers much more than storage remote-mirroring when your objective is protecting your enterprise from downtime and data loss. Like remote-mirroring, an Active Data Guard replica is an exact block for block copy of the primary. Unlike remote-mirroring, an Active Data Guard replica is NOT a tightly coupled copy of the source volumes - it is a completely independent Oracle Database. Active Data Guard’s inherent knowledge of Oracle data block and redo structures enables a separate Oracle Database using a different Oracle code path than the primary to use the full complement of Oracle data validation methods before changes are applied to the synchronized copy. These include: physical check sum, logical intra-block checking, lost write validation, and automatic block repair. The figure below illustrates the stark difference between the knowledge that remote-mirroring can discern from an Oracle data block and what Active Data Guard can discern. An Active Data Guard standby also provides a range of additional services enabled by the fact that it is a running Oracle Database - not just a mirrored copy of data files. An Active Data Guard standby database can be open read-only while it is synchronizing with the primary. This enables read-only workloads to be offloaded from the primary system and run on the active standby - boosting performance by utilizing all assets. An Active Data Guard standby can also be used to implement many types of system and database maintenance in rolling fashion. Maintenance and upgrades are first implemented on the standby while production runs unaffected at the primary. After the primary and standby are synchronized and all changes have been validated, the production workload is quickly switched to the standby. The only downtime is the time required for user connections to transfer from one system to the next. These capabilities further expand the expectations of availability offered by a data protection solution beyond what is possible to do using storage remote-mirroring. So don’t be fooled by appearances.  Storage remote-mirroring and Active Data Guard replication may look similar on the surface - but the devil is in the details. Only Active Data Guard has the smarts, the isolation, and the simplicity, to provide the best data protection and availability for the Oracle Database. Stay tuned for future blog posts that dive into the many differences between storage remote-mirroring and Active Data Guard along the dimensions of data protection, data availability, cost, asset utilization and return on investment. For additional information on Active Data Guard, see: Active Data Guard Technical White Paper Active Data Guard vs Storage Remote-Mirroring Active Data Guard Home Page on the Oracle Technology Network

    Read the article

  • How to build a great relationship with your colleagues

    - by Maria Sandu
    When you start a new job, you worry about your performance, about being able to do what the manager asks you to do, but you also worry about the relations with your colleagues. How will you get along with them? What if they don’t like you? Have you ever felt you’re „the new guy” and your colleagues already have their own way of talking to each other, their own jokes? It’s a common feeling and can actually become stressful. I am Norbert, Middleware Presales Intern in Hungary and I’ve been working within Oracle for only 1 month. Joining such a big company has been a challenge from many perspectives. One of them was adapting to the environment and getting to know all my colleagues. You know it’s quite difficult to introduce yourself, to try to liaise with them and find some common topics, so I felt very lucky and comfortable when my manager introduced me to all of my colleagues. It was easier to settle in and we basically had a starting point for our discussions. We started to talk about what my position means, how many years they’ve been with Oracle, other Oracle related topics, but also more personal stuff like what they do after work. Having this opportunity of talking with all of them helped me introduce myself in a proper way and actually I told them many things about myself. Networking wasn’t my best skill, but these first days were really helpful from a networking point of view. What else can you do to get along with your colleagues? A second thing I consider really helpful in networking is asking work-related questions. For instance, when you don’t know how to do something or don’t understand it, asking one of your colleagues will also help you make a connection with him, and you can easily continue the discussion with some other topics which are more personal. It’s a very effective strategy, and in a company like Oracle people are very willing to help you with your tasks and perform at a high level. If you see your colleagues going to lunch, you should join them. It will help you become part of their community, find out what’s new in their lives, and, step by step, take part in their conversations and stay up to date with the hot topics they talk about. One other opportunity for becoming part of your colleagues’ community is the internal events. Subscribing to the local free time activities mailing list is very useful for finding out when they’re going out for a drink or attending all sorts of events. For instance, this is how I found out about a party within Oracle that most of the employees here attend. It’s a wonderful opportunity for chatting and making a stronger connection with some of them. How important is attending these events? Think about how much time you spend at work. 
You’d like to enjoy your work and the environment, so getting along with your colleagues is a nice thing to have. I recently attended a corporate party whose purpose was to facilitate the interaction and communication between employees. It was a real success and we had a lot of fun, especially because it was a costume party. All the fancy dresses and funny clothes we wore made the atmosphere really enjoyable. It was easy to liaise with colleagues with whom I had never interacted before. There was a friendly spirit among us, chatting about personal stuff and about various pleasant things. Working in an international company is not an easy thing because you interact with many people and they have different styles, but all these opportunities for informal interaction are a good way to adapt to the new working environment.

    Read the article

  • In Social Relationship Management, the Spirit is Willing, but Execution is Weak

    - by Mike Stiles
    In our final talk in this series with Aberdeen’s Trip Kucera, we wanted to find out if enterprise organizations are actually doing anything about what they’re learning around the importance of communicating via social and using social listening for a deeper understanding of customers and prospects. We found out that if your brand is lagging behind, you’re not alone. Spotlight: How was Aberdeen able to find out if companies are putting their money where their mouth is when it comes to implementing social across the enterprise? Trip: One way to think about the relative challenges a business has in a given area is to look at the gap between “say” and “do.” The first of those words reveals the brand’s priorities, while the second reveals their ability to execute on those priorities. In Aberdeen’s research, we capture this by asking firms to rank the value of a set of activities from one on the low end to five on the high end. We then ask them to rank their ability to execute those same activities, again on a one to five, not effective to highly effective scale. Spotlight: And once you get their self-assessments, what is it you’re looking for? Trip: There are two things we’re looking for in this analysis. The first is we want to be able to identify the widest gaps between perception of value and execution. This suggests impediments to adoption or simply a high level of challenge, be it technical or otherwise. It may also suggest areas where we can expect future investment and innovation. Spotlight: So the biggest potential pain points surface, places where they know something is critical but also know they aren’t doing much about it. What’s the second thing you look for? Trip: The second thing we want to do is look at specific areas in which high-performing companies, the Leaders, are out-executing the Followers. This points to the business impact of these activities since Leaders are defined by a set of business performance metrics. Put another way, we’re correlating adoption of specific business competencies with performance, looking for what high-performers do differently. Spotlight: Ah ha, that tells us what steps the winners are taking that are making them winners. So what did you find out? Trip: Generally speaking, we see something of a glass curtain when it comes to the social relationship management execution gap. There isn’t a single social media activity in which more than 50% of respondents indicated effectiveness, which would be a 4 or 5 on that 1-5 scale. This despite the fact that 70% of firms indicate that generating positive social media mentions is valuable or very valuable, a 4 or 5 on our 1-5 scale. Spotlight: Well at least they get points for being honest. The verdict they’re giving themselves is that they just aren’t cutting it in these highly critical social development areas. Trip: And the widest gap is around directly engaging with customers and/or prospects on social networks, which 69% of firms rated as valuable but only 34% of companies say they are executing well. Perhaps even more interesting is that these two are interdependent since you’re most likely to generate goodwill on social through happy, engaged customers. This data also suggests that social is largely being used as a broadcast channel rather than for one-to-one engagement. As we’ve discussed previously, social is an inherently personal media. 
Spotlight: And if they’re still using it as a broadcast channel, that shows they still fail to understand the root of social and see it as just another outlet for their ads and push-messaging. That’s depressing. Trip: A second way to evaluate this data is by using Aberdeen’s performance benchmarking. The story is a bit different, but consistent in its own way. The first thing we notice is that Leaders are more effective in their execution of several key social relationship management capabilities, namely generating positive mentions and engaging with “influencers” and customers. Based on the fact that Aberdeen uses a broad set of performance metrics to rank the respondents as either “Leaders” (top 35% in weighted performance) or “Followers” (bottom 65% in weighted performance), from website conversion to annual revenue growth, we can then correlate high social effectiveness with company performance. We can also connect the specific social capabilities used by Leaders with effectiveness. We spoke about a few of those key capabilities last time and also discuss them in a new report: Social Powers Activate: Engineering Social Engagement to Win the Hidden Sales Cycle. Spotlight: What all that tells me is there are rewards for making the effort and getting it right. That’s how you become a Leader. Trip: But there’s another part of the story, which is that overall effectiveness, even among Leaders, is muted. There’s just one activity in which more than a majority of Leaders cite high effectiveness, that being the generation of positive buzz. While 80% of Leaders indicate “directly engaging with customers” through social media channels is valuable, the highest rated activity among Leaders, only 42% say they’re effective. This gap even among Leaders shows the challenges still involved in effective social relationship management. @mikestiles Photo: stock.xchng

    Read the article

  • Gamification at OOW

    - by erikanollwebb
    Last week was Oracle OpenWorld, and for those of you not in tech or downtown San Francisco, that might not mean a whole lot.  However, if you are familiar with it, Oracle OpenWorld is our premier customer event.  This year, more than 50,000 people attended.  It's not a good week to visit San Francisco on vacation because Oracle customers take over all the hotels in town!  It was crazy, but a lot of fun and it's a great opportunity for the Apps UX group to do customer research with a range of customers.  This year, more than 100+ customers and partners took the time to team up with our UX experts and provide feedback on new designs and ideas. Over three days,  UX teams conducted 8  one-on-one user feedback sessions, 4 focus groups and 7 surveys. In addition, we conducted a voice capture activity and were able to collect close to 70 speech samples at the lab and DEMOgrounds. This was a great opportunity for us to do some testing on some specific gamification concepts with a set of business analysts.  We pulled in 8 folks for a focus group on gamification concepts and whether they thought those would work for their teams. To get ready for this, my designer extraordinaire, Andrea Cantú, flew into town and we spent almost a week locked in a room together brainstorming design ideas.  We killed a few trees trying to get all of our concepts and other examples together in the process, but in the end, we put together a whole series of examples of how you might gamify an Oracle app (in this case, CRM).  Andrea is a genius for this kind of thing and the comps she created looked great.  Here's a picture of her hard at work!  We also had the good fortune to have my boss, Laurie Pattison and my usability contractor, Shobana Subramanian there to note take and observe as well.  Here's a few shots of us, hard at work preparing for the day (or checking out something on Laurie's iPhone...) To start things off, we gave an overview of gamification and I talked about what it's used for.  Then we gave the participants a scenario about our sales person and what we were trying to get her to do. It was a great opportunity to highlight what our business goals might be and why we might want to add game mechanics.  It was also a good way to get them thinking about how that might work for them in their environments and workplaces. There were some surprises for the day.  We asked how many of them were already familiar with the concept of gamification--only two people had heard of it and only one was using game mechanics in his work.  That's in contrast to a survey we just ran internally with folks in a dev org where almost 50% of about 450 respondents had heard of gamification.  As we discussed the ways game mechanics could be used, it became clear that many of the folks had seen some game mechanics in action but didn't know that's what they were.  We also noticed that the folks in this group felt that if they were trying to sell the concept in their orgs, they wouldn't call it gamification.  That's not a huge surprise to me--they said what we've heard in the past, that gamification does not seem like a serious term for enterprise software.  They said they'd sell it with the goals--as a means to increase behaviors by rewarding users for activities.  It's a funny problem.  The word puts some folks off, but at the same time, I haven't seen another one word description that quite captures the range of things that "gamification" can cover.  
My guess is that the more mainstream the term becomes, the more desensitized we’ll become to the idea that it’s trivializing enterprise software in some way.  Still, it was interesting to note that this group still felt that they would not take this concept to their bosses or teams and call it “gamification”.  They focused on the goals, and how we could incentivize desired behaviors with game mechanics.  As I have already stated in other posts, I feel like my org is more receptive to discussing how this is just a more transparent type of usability and user experience method than to talking about gamification.  That’s the argument they said they would use. All in all, it was a good session.  I love getting to talk to customers, present ideas and concepts, and get their feedback and input.  It’s the type of thing that really helps drive our designs and keeps us grounded in what our customers need/want.  We’re already planning where to get more feedback opportunities in the coming months. 

    Read the article

  • Thread.Interrupt Is Evil

    - by Alois Kraus
    Recently I have found an interesting issue with Thread.Interrupt during application shutdown. Some application was crashing once a week and we had not really a clue what was the issue. Since it happened not very often it was left as is until we have got some memory dumps during the crash. A memory dump usually means WindDbg which I really like to use (I know I am one of the very few fans of it).  After a quick analysis I did find that the main thread already had exited and the thread with the crash was stuck in a Monitor.Wait. Strange Indeed. Running the application a few thousand times under the debugger would potentially not have shown me what the reason was so I decided to what I call constructive debugging. I did create a simple Console application project and try to simulate the exact circumstances when the crash did happen from the information I have via memory dump and source code reading. The thread that was  crashing was actually MS code from an old version of the Microsoft Caching Application Block. From reading the code I could conclude that the main thread did call the Dispose method on the CacheManger class which did call Thread.Interrupt on the cache scavenger thread which was just waiting for work to do. My first version of the repro looked like this   static void Main(string[] args) { Thread t = new Thread(ThreadFunc) { IsBackground = true, Name = "Test Thread" }; t.Start(); Console.WriteLine("Interrupt Thread"); t.Interrupt(); } static void ThreadFunc() { while (true) { object value = Dequeue(); // block until unblocked or awaken via ThreadInterruptedException } } static object WaitObject = new object(); static object Dequeue() { object lret = "got value"; try { lock (WaitObject) { } } catch (ThreadInterruptedException) { Console.WriteLine("Got ThreadInterruptException"); lret = null; } return lret; } I do start a background thread and call Thread.Interrupt on it and then directly let the application terminate. The thread in the meantime does plenty of Monitor.Enter/Leave calls to simulate work on it. This first version did not crash. So I need to dig deeper. From the memory dump I did know that the finalizer thread was doing just some critical finalizers which were closing file handles. Ok lets add some long running finalizers to the sample. class FinalizableObject : CriticalFinalizerObject { ~FinalizableObject() { Console.WriteLine("Hi we are waiting to finalize now and block the finalizer thread for 5s."); Thread.Sleep(5000); } } class Program { static void Main(string[] args) { FinalizableObject fin = new FinalizableObject(); Thread t = new Thread(ThreadFunc) { IsBackground = true, Name = "Test Thread" }; t.Start(); Console.WriteLine("Interrupt Thread"); t.Interrupt(); GC.KeepAlive(fin); // prevent finalizing it too early // After leaving main the other thread is woken up via Thread.Abort // while we are finalizing. This causes a stackoverflow in the CLR ThreadAbortException handling at this time. } With this changed Main method and a blocking critical finalizer I did get my crash just like the real application. The funny thing is that this is actually a CLR bug. When the main method is left the CLR does suspend all threads except the finalizer thread and declares all objects as garbage. After the normal finalizers were called the critical finalizers are executed to e.g. free OS handles (usually). Remember that I did call Thread.Interrupt as one of the last methods in the Main method. 
The Interrupt method is actually asynchronous and wakes a thread up by throwing a ThreadInterruptedException only once, unlike Thread.Abort, which rethrows its exception when an exception handling clause is left. It seems that the CLR does not expect a frozen thread to wake up again while the critical finalizers are executed. While trying to raise a ThreadInterruptedException, the CLR goes down with a stack overflow. Oops, not so nice. Why nobody has noticed this for years is my next question. As it turned out, this error only happens on the CLR for .NET 4.0 (x86 and x64). It does not show up in earlier or later versions of the CLR. I have reported this issue on Connect here, but so far it has not been confirmed as a CLR bug. But I would be surprised if my console application were to blame for a stack overflow in my test thread in a Monitor.Wait call. What is the moral of this story? Thread.Abort is evil, but Thread.Interrupt is too. It is so evil that even the CLR of .NET 4.0 contains a race condition during CLR shutdown. When the CLR gurus can get it wrong, the chances are high that you will get it wrong too when you use these constructs. If you do not believe me, see what Patrick Smacchia blogs about Thread.Abort and List.Sort. Not only the CLR creators can get it wrong. The BCL writers sometimes have a hard time with correct exception handling as well. If you tell me that you use Thread.Abort frequently and never had problems with it, I suspect that you have not looked deep enough into your application to find such sporadic errors.
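If you want a scavenger-style wake-up without Thread.Interrupt at all, a cooperative signal does the same job. The following is only a sketch of the general pattern, not the Caching Application Block code:

    using System;
    using System.Collections.Generic;
    using System.Threading;

    class Scavenger
    {
        readonly object syncRoot = new object();
        readonly Queue<object> work = new Queue<object>();
        bool shuttingDown;

        public void Enqueue(object item)
        {
            lock (syncRoot)
            {
                work.Enqueue(item);
                Monitor.Pulse(syncRoot);   // wake the worker because there is new work
            }
        }

        public void Shutdown()
        {
            lock (syncRoot)
            {
                shuttingDown = true;
                Monitor.Pulse(syncRoot);   // wake the worker so it can exit cleanly
            }
        }

        public void Run()
        {
            while (true)
            {
                object item;
                lock (syncRoot)
                {
                    // Block in Monitor.Wait instead of being interrupted from outside.
                    while (work.Count == 0 && !shuttingDown)
                    {
                        Monitor.Wait(syncRoot);
                    }
                    if (work.Count == 0) return;   // shutting down and nothing left to do
                    item = work.Dequeue();
                }
                Console.WriteLine("Processing {0}", item);
            }
        }
    }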

    Read the article

  • Accelerating 2d object collision with other objects

    - by Silent Cave
    Making my very first attempt at game programming with SDL/OpenGL. So I made an object Actor witch can move in all four sides with acceleration. And there are bunch of other rectangles to collide to. the image Movement and collision detection alghorythms work just fine by itself, but when combined to prevent the green rectangle from crossing black rectangles, it gives me a kind of funny resault. Let me show you the code first: from Actor.h class Actor{ public: SDL_Rect * dim; alphaColor * col; float speed; float xlGrav, xrGrav, yuGrav, ydGrav; float acceleration; bool left,right,up,down; Actor(SDL_Rect * dim,alphaColor * col, float speed, float acceleration); bool colides(const SDL_Rect & rect); bool check_for_collisions(const std::vector<SDL_Rect*> & gameObjects ); }; from actor.cpp bool Actor::colides(const SDL_Rect & rect){ if (dim->x + dim->w < rect.x) return false; if (dim->x > rect.x + rect.w) return false; if (dim->y + dim->h < rect.y) return false; if (dim->y > rect.y + rect.h) return false; return true; } movement logic from main.cpp if (actor->left){ if(actor->xlGrav < actor->speed){ actor->xlGrav += actor->speed*actor->acceleration; }else actor->xlGrav = actor->speed; actor->dim->x -= actor->xlGrav; if(actor->check_for_collisions(gameObjects)){ actor->dim->x += actor->xlGrav; actor->xlGrav = 0; } } if (!actor->left){ if(actor->xlGrav - actor->speed*actor->acceleration > 0){ actor->xlGrav -= actor->speed*actor->acceleration; }else actor->xlGrav = 0; actor->dim->x -= actor->xlGrav; if(actor->check_for_collisions(gameObjects)){ actor->dim->x += actor->xlGrav; actor->xlGrav = 0; } } if (actor->right){ if(actor->xrGrav < actor->speed){ actor->xrGrav += actor->speed*actor->acceleration; }else actor->xrGrav = actor->speed; actor->dim->x += actor->xrGrav; if(actor->check_for_collisions(gameObjects)){ actor->dim->x -= actor->xrGrav; actor->xrGrav = 0; } } if (!actor->right){ if(actor->xrGrav - actor->speed*actor->acceleration > 0){ actor->xrGrav -= actor->speed*actor->acceleration; }else actor->xrGrav = 0; actor->dim->x += actor->xrGrav; if(actor->check_for_collisions(gameObjects)){ actor->dim->x -= actor->xrGrav; actor->xrGrav = 0; } } if (actor->up){ if(actor->yuGrav < actor->speed){ actor->yuGrav += actor->speed*actor->acceleration; }else actor->yuGrav = actor->speed; actor->dim->y -= actor->yuGrav; if(actor->check_for_collisions(gameObjects)){ actor->dim->y += actor->yuGrav; actor->yuGrav = 0; } } if (!actor->up){ if(actor->yuGrav - actor->speed*actor->acceleration > 0){ actor->yuGrav -= actor->speed*actor->acceleration; }else actor->yuGrav = 0; actor->dim->y -= actor->yuGrav; if(actor->check_for_collisions(gameObjects)){ actor->dim->y += actor->yuGrav; actor->yuGrav = 0; } } if (actor->down){ if(actor->ydGrav < actor->speed){ actor->ydGrav += actor->speed*actor->acceleration; }else actor->ydGrav = actor->speed; actor->dim->y += actor->ydGrav; if(actor->check_for_collisions(gameObjects)){ actor->dim->y -= actor->ydGrav; actor->ydGrav = 0; } } if (!actor->down){ if(actor->ydGrav - actor->speed*actor->acceleration > 0){ actor->ydGrav -= actor->speed*actor->acceleration; }else actor->ydGrav = 0; actor->dim->y += actor->ydGrav; if(actor->check_for_collisions(gameObjects)){ actor->dim->y -= actor->ydGrav; actor->ydGrav = 0; } } So, if the green box approaches an obstacle from up or left, everything goes as planned - object stops, and it's acceleration drops to zero. 
But if it comes from the bottom or the right, it enters the obstacle's inner space and starts to dance strangely; I'd rather say it moves as if the controls were inverted. What am I failing to see?

    Read the article

  • Existential CAML - does an item exist?

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved More CAML and existence. In “SharePoint List Issues” and “Passing the CAML thru the EY of the NEEDL we saw how to use CAML to return a subset of a list and also how to check the existence of lists, fields, defaults, and values.   Here is a general function that may be used to get a subset of a list by comparing a “text” type field to a given value.  The function is pretty smart. It can be used to check existence or to return a collection of items that may be further processed. It handles non existing fields and replaces them with the ubiquitous “Title”, but only once!  /// Build an SPQuery that returns a selected set of columns from a List /// titleField must be a "Text" type field /// When the titleField parameter is empty ("") "Title" is assumed /// When the title parameter is empty ("") All is assumed /// When the columnNames parameter is null, the query returns all the fields /// When the rowLimit parameter is 0, the query return all the items. /// with a non-zero, the query returns at most rowLimits /// /// usage: to check if an item titled "Blah" exists in your list, do: /// colNames = {"Title"} /// col = GetListItemColumnByTitle(myList, "", "Blah", colNames, 1) /// Check the col.Count. if > 0 the item exists and is in the collection private static SPListItemCollection GetListItemColumnByTitle(SPList list, string titleField, string title, string[] columnNames, uint rowLimit) {   try   {     char QT = Convert.ToChar((int)34);     SPQuery query = new SPQuery();     if (title != "")     {       string tf = titleField;       if (titleField == "") tf = "Title";       tf = CAMLThisName(list, tf, "Title");        StringBuilder titleQuery = new StringBuilder  ("<Where><Eq><FieldRef Name=");       titleQuery.Append(QT);       titleQuery.Append(tf);       titleQuery.Append(QT);       titleQuery.Append("/><Value Type=");       titleQuery.Append(QT);       titleQuery.Append("Text");       titleQuery.Append(QT);       titleQuery.Append(">");       titleQuery.Append(title);       titleQuery.Append("</Value></Eq></Where>");       query.Query = titleQuery.ToString();     }     if (columnNames.Length != 0)     {       StringBuilder sb = new StringBuilder("");       bool TitleAlreadyIncluded = false;       foreach (string columnName in columnNames)       {         string tst = CAMLThisName(list, columnName, "Title");         //Allow Title only once         if (tst != "Title" || !TitleAlreadyIncluded)         {           sb.Append("<FieldRef Name=");           sb.Append(QT);           sb.Append(tst);           sb.Append(QT);           sb.Append("/>");           if (tst == "Title") TitleAlreadyIncluded = true;         }       }       query.ViewFields = sb.ToString();     }     if (rowLimit > 0)     {        query.RowLimit = rowLimit;     }     SPListItemCollection col = list.GetItems(query);     return col;   }   catch (Exception ex)   {     //Console.WriteLine("GetListItemColumnByTitle" + ex.ToString());     //sw.WriteLine("GetListItemColumnByTitle" + ex.ToString());     return null;   } } Here I called it for a list in which “Author” (it is the internal name for “Created”) and “Blah” do not exist. The list of column names is:  string[] columnNames = {"Test Column1", "Title", "Author", "Allow Multiple Ratings", "Blah"};  So if I use this call, I get all the items for which “01-STD MIL_some” has the value of 1. the fields returned are: “Test Column1”, “Title”, and “Allow Multiple Ratings”. 
Because “Title” was already included and the default for non-existing fields is “Title”, it was not replicated for the 2 non-existing fields.  SPListItemCollection col = GetListItemColumnByTitle(masterList, "01-STD MIL_some", "1", columnNames, 0); The following call checks if there are any items where “01-STD MIL_some” has the value of “1”. Note that I limited the number of returned items to 1.  SPListItemCollection col = GetListItemColumnByTitle(masterList, "01-STD MIL_some", "1", columnNames, 1); The code also uses the CAMLThisName function, which checks for the existence of a field and returns its InternalName. This is yet another useful function that I use again and again.  /// <summary> /// return a field's internal name (CAMLName)  /// or the "default" name that you passed. /// To check existence pass "" or some funny name like "mud in your eye" /// </summary> public static string CAMLThisName(SPList list, string name, string def) {   String CAMLName = def;   SPField fld = GetFieldByName(list, name);   if (fld != null)   {      CAMLName = fld.InternalName;   }   return CAMLName; } That’s all folks?!
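Putting it together, a minimal existence check might look like the sketch below. The site URL and list name are placeholders, and GetListItemColumnByTitle (together with GetFieldByName) is assumed to be in scope as shown above:

    // Sketch: does an item titled "Blah" exist in the list?
    using (SPSite site = new SPSite("http://server/sites/demo"))
    using (SPWeb web = site.OpenWeb())
    {
        SPList list = web.Lists["My List"];
        string[] colNames = { "Title" };

        SPListItemCollection col = GetListItemColumnByTitle(list, "", "Blah", colNames, 1);
        bool exists = (col != null && col.Count > 0);
        Console.WriteLine(exists ? "Item exists" : "Item not found");
    }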

    Read the article

  • What Counts for A DBA - Logic

    - by drsql
    "There are 10 kinds of people in the world. Those who will always wonder why there are only two items in my list and those who will figured it out the first time they saw this very old joke."  Those readers who will give up immediately and get frustrated with me for not explaining it to them are not likely going to be great technical professionals of any sort, much less a programmer or administrator who will be constantly dealing with the common failures that make up a DBA's day.  Many of these people will stare at this like a dog staring at a traffic signal and still have no more idea of how to decipher the riddle. Without explanation they will give up, call the joke "stupid" and, feeling quite superior, walk away indignantly to their job likely flipping patties of meat-by-product. As a data professional or any programmer who has strayed  to this very data-oriented blog, you would, if you are worth your weight in air, either have recognized immediately what was going on, or felt a bit ignorant.  Your friends are chuckling over the joke, but why is it funny? Unfortunately you left your smartphone at home on the dresser because you were up late last night programming and were running late to work (again), so you will either have to fake a laugh or figure it out.  Digging through the joke, you figure out that the word "two" is the most important part, since initially the joke mentioned 10. Hmm, why did they spell out two, but not ten? Maybe 10 could be interpreted a different way?  As a DBA, this sort of logic comes into play every day, and sometimes it doesn't involve nerdy riddles or Star Wars folklore.  When you turn on your computer and get the dreaded blue screen of death, you don't immediately cry to the help desk and sit on your thumbs and whine about not being able to work. Do that and your co-workers will question your nerd-hood; I know I certainly would. You figure out the problem, and when you have it narrowed down, you call the help desk and tell them what the problem is, usually having to explain that yes, you did in fact try to reboot before calling.  Of course, sometimes humility does come in to play when you reach the end of your abilities, but the ‘end of abilities’ is not something any of us recognize readily. It is handy to have the ability to use logic to solve uncommon problems: It becomes especially useful when you are trying to solve a data-related problem such as a query performance issue, and the way that you approach things will tell your coworkers a great deal about your abilities.  The novice is likely to immediately take the approach of  trying to add more indexes or blaming the hardware. As you become more and more experienced, it becomes increasingly obvious that performance issues are a very complex topic. A query may be slow for a myriad of reasons, from concurrency issues, a poor query plan because of a parameter value (like parameter sniffing,) poor coding standards, or just because it is a complex query that is going to be slow sometimes. Some queries that you will deal with may have twenty joins and hundreds of search criteria, and it can take a lot of thought to determine what is going on.  You can usually figure out the problem to almost any query by using basic knowledge of how joins and queries work, together with the help of such things as the query plan, profiler or monitoring tools.  It is not unlikely that it can take a full day’s work to understand some queries, breaking them down into smaller queries to find a very tiny problem. 
Not every time will you actually find the problem, and it is part of the process to occasionally admit that the problem is random, and everything works fine now.  Sometimes, it is necessary to realize that a problem is outside of your current knowledge, and admit temporary defeat: you can, at least, narrow down the source of the problem by looking logically at all of the possible solutions. By doing this, you can satisfy your curiosity and learn more about what the actual problem was. For example, in the joke, had you never been exposed to the concept of binary numbers, there is no way you could have known that binary 10 equals decimal 2, but you could have logically come to the conclusion that 10 must not mean ten in the context of the joke, and at that point you are that much closer to getting the joke and at least won't feel so ignorant.
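If you want to let the machine spoil the punch line for you, one call in C# will do it:

    using System;

    class BinaryJoke
    {
        static void Main()
        {
            // Parse "10" as a base-2 number: this prints 2, the other kind of people.
            Console.WriteLine(Convert.ToInt32("10", 2));
        }
    }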

    Read the article

  • Styling ASP.NET MVC Error Messages

    - by MightyZot
    Originally posted on: http://geekswithblogs.net/MightyZot/archive/2013/11/11/styling-asp.net-mvc-error-messages.aspxOff the cuff, it may look like you’re stuck with the presentation of your error messages (model errors) in ASP.NET MVC. That’s not the case, though. You actually have quite a number of options with regard to styling those boogers. Like many of the helpers in MVC, the Html.ValidationMessageFor helper has multiple prototypes. One of those prototypes lets you pass a dictionary, or anonymous object, representing attribute values for the resulting markup. @Html.ValidationMessageFor( m => Model.Whatever, null, new { @class = “my-error” }) By passing the htmlAttributes parameter, which is the last parameter in the call to the prototype of Html.ValidationMessageFor shown above, I can style the resulting markup by associating styles to the my-error css class.  When you run your MVC project and view the source, you’ll notice that MVC adds the class field-validation-valid or field-validation-error to a span created by the helper. You could actually just style those classes instead of adding your own…it’s really up to you. Now, what if you wanted to move that error message around? Maybe you want to put that error message in a box or a callout. How do you do that? When I first started using MVC, it didn’t occur to me that the Html.ValidationMessageFor helper just spits out a little bit of markup. I wanted to put the error messages in boxes with white backgrounds, our site originally had a black background, and show a little nib on the side to make them look like callouts or conversation bubbles. Not realizing how much freedom there is in the styling and markup, and after reading someone else’s post, I created my own version of the ValidationMessageFor helper that took out the span and replaced it with divs. I styled the divs to produce the effect of a popup box and had a lot of trouble with sizing and such. That’s a really silly and unnecessary way to solve this problem. If you want to move your error messages around, all you have to do is move the helper. MVC doesn’t appear to care where you put it, which makes total sense when you think about it. Html.ValidationMessageFor is just spitting out a little markup using a little bit of reflection on the name you’re passing it. All you’ve got to do to style it the way you want it is to put it in whatever markup you desire. Take a look at this, for example… <div class=”my-anchor”>@Html.ValidationMessageFor( m => Model.Whatever )</div> @Html.TextBoxFor(m => Model.Whatever) Now, given that bit of HTML, consider the following CSS… <style> .my-anchor { position:relative; } .field-validation-error {    background-color:white;    border-radius:4px;    border: solid 1px #333;    display: block;    position: absolute;    top:0; right:0; left:0;    text-align:right; } </style> The my-anchor class establishes an anchor for the absolutely positioned error message. Now you can move the error message wherever you want it relative to the anchor. Using css3, there are some other tricks. For example, you can use the :not(:empty) selector to select the span and apply styles based upon whether or not the span has text in it. Keep it simple, though. Moving your elements around using absolute positioning may cause you issues on devices with screens smaller than your standard laptop or PC. While looking for something else recently, I saw someone asking how to style the output for Html.ValidationSummary.  
Html.ValidationSummary is the helper that will spit out a list of property errors, general model errors, or both. Html.ValidationSummary spits out fairly simple markup as well, so you can use the techniques described above with it, too. The resulting markup is an unordered list of error messages (<ul><li></li></ul>) that carries the class validation-summary-errors.

In the forum question, the user was asking how to hide the error summary when there are no errors. Their errors were in a red box, and they didn't want to show an empty red box when there aren't any errors. Obviously, you can use CSS3 selectors to apply different styles to the list depending on whether or not it's empty; however, that's not supported in all browsers. Well, it just so happens that the unordered list carries the class validation-summary-valid when the list is empty. The Html.ValidationSummary helper renders a visible div containing one invisible list item even when there are no errors, so you can style the whole div with "display:none" when the validation-summary-valid class is applied and make it visible when the validation-summary-errors class is applied.

Or, if you don't like that solution, which I like quite well, you can also check the model state for errors with something like this:

int errors = ViewData.ModelState.Sum(ms => ms.Value.Errors.Count);

That'll give you a count of the errors that have been added to ModelState. You can check that and conditionally include markup in your page if you want to. The choice is yours. Obviously, doing most everything you can with styles increases the flexibility of the presentation of your solution, so I recommend going that route when you can.

That picture of the fat guy jumping has nothing to do with the article. That's just a picture of me on the roof and I thought it was funny. Doesn't every post need a picture?
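Getting back to the summary styling for a second: if you go the pure-CSS route, a minimal sketch of that hide/show trick might look like the following. Only the two class names come from MVC itself; the red-box styling is just a stand-in for the forum poster's look.

<style>
/* Rendered while the model is valid: MVC puts validation-summary-valid on the summary */
.validation-summary-valid { display: none; }

/* Once errors exist, the class becomes validation-summary-errors */
.validation-summary-errors {
    display: block;
    background-color: #c00;   /* placeholder for the poster's red box */
    color: white;
    border: solid 1px #900;
    border-radius: 4px;
    padding: 8px;
}
</style>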
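And if you'd rather not emit the box at all when there's nothing to report, a small Razor sketch using that error count could look like this (error-box is an illustrative class name, not something from the post):

@if (ViewData.ModelState.Sum(ms => ms.Value.Errors.Count) > 0)
{
    <div class="error-box">
        @Html.ValidationSummary()
    </div>
}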

    Read the article

  • Migrating from PostgreSQL to Oracle RAC

    - by Kumiko Fujita
1. Background

The existing system used OSS PostgreSQL as its database, accessed from the application (AP) servers, and had been in operation for roughly ten years. Operational concerns on the PostgreSQL side, Vacuum maintenance in particular, together with the need to keep the service running 24 hours a day for a population on the order of 3,500 users, led to the decision to move to a commercial DBMS. The migration target is Oracle Database 11gR2: roughly 500 GB of data, using the Partitioning option of Oracle Database Enterprise Edition, on SAN storage in an Active/Standby HA configuration.

2. Dealing with the differences

2.1. Data types

PostgreSQL and Oracle Database data types do not map one to one; TEXT columns in particular have no direct equivalent. Data was exported from PostgreSQL to csv files and loaded into Oracle Database with SQL*Loader, with the work split across Windows and Linux machines. One point to watch is that PostgreSQL distinguishes NULL from the empty string '', whereas Oracle Database treats '' as NULL, so the data needs to be checked for empty strings during the migration. The type mapping used was:

Category    PostgreSQL          Oracle Database
Character   CHAR(n)             CHAR(n), CLOB
            VARCHAR(n)          VARCHAR2(n), CLOB
            TEXT                CLOB
Numeric     NUMERIC             NUMBER
            INTEGER             NUMBER
            SMALLINT            NUMBER
            BIGINT              NUMBER
            REAL                NUMBER
            DOUBLE PRECISION    NUMBER
Date/time   DATE                DATE
            TIMESTAMP           TIMESTAMP
Binary      Bytea               BLOB
            LOB                 BFILE/SecureFiles
Other       OID                 ROWID

2.2. SQL differences

SQL that relies on PostgreSQL-specific syntax has to be rewritten. The most common case is paging: PostgreSQL's LIMIT and OFFSET have no direct Oracle Database equivalent and are rewritten with ROWNUM (column, table, and sort-key names below are placeholders):

/* PostgreSQL: LIMIT, OFFSET */
SELECT <columns>
  FROM <table>
 ORDER BY <sort_key>
 LIMIT 2 OFFSET 5;

/* Oracle Database: ROWNUM */
SELECT <columns>
  FROM (SELECT <columns>, ROWNUM line_no
          FROM (SELECT <columns>
                  FROM <table>
                 ORDER BY <sort_key>))
 WHERE line_no BETWEEN 6 AND 7;

Queries also need to be reviewed against the Oracle Database optimizer, since WHERE clauses that performed acceptably on PostgreSQL may behave differently after the move.

3. Migration work

The approach draws on an ITpro article on database migration.

3.1. Data migration steps

The data migration itself was organised into four numbered steps.
For the bulk of the data, tables were exported to csv and loaded into Oracle Database with SQL*Loader rather than with individual INSERT statements.

Beyond the data itself, what has to be tested is not just differences in SQL syntax but differences in how the two DBMSs behave. PostgreSQL and Oracle Database both use MVCC and both default to the Read Committed isolation level, but the details of locking and read consistency differ, so the application needs to be verified against the new DBMS under its real workload.

4. Summary

Migrating from PostgreSQL to Oracle Database is not simply a matter of converting data types and SQL. The differences in how each DBMS behaves and is operated have to be identified, tested, and planned for as part of the migration project.
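The NULL-versus-empty-string difference called out in section 2.1 is easy to trip over, so here is a minimal sketch of the behaviour (the table and column names are illustrative, not from the original article):

/* PostgreSQL: '' is a real value, distinct from NULL */
CREATE TABLE note_check (id integer, note varchar(10));
INSERT INTO note_check VALUES (1, '');
SELECT count(*) FROM note_check WHERE note = '';     -- 1
SELECT count(*) FROM note_check WHERE note IS NULL;  -- 0

/* Oracle Database: a zero-length string in VARCHAR2 is stored as NULL */
CREATE TABLE note_check (id NUMBER, note VARCHAR2(10));
INSERT INTO note_check VALUES (1, '');
SELECT count(*) FROM note_check WHERE note = '';     -- 0 ('' compares as NULL)
SELECT count(*) FROM note_check WHERE note IS NULL;  -- 1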

    Read the article

< Previous Page | 43 44 45 46 47 48 49 50 51 52 53 54  | Next Page >