Search Results


  • Issues using gmail with google apps and external domain

    - by Jonathan Kelly
    I have recently tried to use Gmail through Google Apps as my main email client, but I'm experiencing a few different problems. I am managing the domain (conjunktiondesign.co.uk) through 123reg.co.uk, but it is hosted through fasthosts.co.uk. I transferred the domain to 123reg because fasthosts did not allow me to change the MX records myself. I followed the Google Apps setup instructions step by step and changed the MX records as directed. My email was then working perfectly, but my website was down and I was getting the following error: "The dnsserver returned: No DNS records". I have a friend using the same setup as me (i.e. an externally hosted domain and Google Apps mail), so I changed my 123reg settings to match his (as both his email and website were working perfectly). I pointed my name servers at fasthosts rather than 123reg, added an A record called '@' pointing to fasthosts' IP address, and created another A record called 'www' pointing to the same IP. After I did this, my website worked almost immediately, but I have since realised that my email is now down; I have not received anything since Saturday. I am a web designer and would consider myself fairly tech savvy, but I have no idea about A records, CNAMEs and all the things I have been messing about with! What I ultimately need is to get my email and website working at the same time, rather than one being down while the other is OK; I seem able to get only one or the other working. I have now changed the name servers back to 123reg in an attempt to get my email back, as it is more important than my website at this stage. Any help is much appreciated. Thanks.
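    The usual shape of the fix is to keep both record sets in the same zone rather than switching name servers: MX records at the root pointing at Google's mail servers, and A records pointing at the web host. A minimal sketch of such a zone, using a placeholder IP (203.0.113.10) for the fasthosts server and Google's published Apps MX hosts of the time:

        ; hypothetical zone fragment for conjunktiondesign.co.uk
        @    IN  A   203.0.113.10               ; web root -> fasthosts server
        www  IN  A   203.0.113.10               ; www -> same server
        @    IN  MX  1  ASPMX.L.GOOGLE.COM.     ; mail -> Google Apps
        @    IN  MX  5  ALT1.ASPMX.L.GOOGLE.COM.
        @    IN  MX  5  ALT2.ASPMX.L.GOOGLE.COM.

    Changing the name servers replaces the entire zone at once, which is why fixing one service kept breaking the other; editing A and MX records within a single zone avoids that trade-off.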


  • How do I calculate the cost of printing a given page?

    - by Alenanno
    I have seen questions like How much does a square inch of ink cost and How much more will a high-dpi image cost to print?, but mine is asking neither about a specific case nor about how much something costs, as that would depend on the toner, for example. Rather, I was wondering how I should go about calculating the cost of printing a given page. Note that "given page" should be seen as a sort of x, i.e. the answer should be applicable in any case; I'd like this question to provide a good reference for those who want to calculate this cost. What should be taken into consideration? The cost of a single page (the paper only) is easy to check, since you divide the cost of the whole package by the number of pages in the package itself. But how do I calculate the cost of the ink/toner? Which could translate to: how do I calculate the ink density1 for a given printer? I know it depends on the quality of the printer itself, the type, the quality of the image being printed, the very nature of what I'm going to print, etc. But again, the focus of my question is not on the variables of this case, but rather the constants, hoping the math analogy works for this case too. 1: Total amount of ink in one area of the page.
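    One common starting point (an approximation, not a standard answer): cartridges are rated for a page yield at 5% coverage (ISO/IEC 19752 for mono toner), so the per-page ink cost can be scaled by the page's estimated coverage. A minimal sketch with made-up figures:

        // All numbers below are hypothetical placeholders; substitute your own
        using System;

        class PrintCost
        {
            static void Main()
            {
                decimal paperCostPerSheet = 4.00m / 500; // ream price / sheets per ream
                decimal cartridgePrice    = 60.00m;      // replacement cartridge price
                int     ratedYieldPages   = 2000;        // rated pages at 5% coverage
                decimal estimatedCoverage = 0.10m;       // this page's ink density (10%)

                // Scale ink cost by how much denser this page is than the rated 5%
                decimal inkCostPerPage = (cartridgePrice / ratedYieldPages)
                                         * (estimatedCoverage / 0.05m);
                decimal costPerPage = paperCostPerSheet + inkCostPerPage;

                Console.WriteLine("Cost per page: {0:0.0000}", costPerPage); // ~0.068
            }
        }

    The coverage estimate is the hard constant to pin down; everything else falls out of the cartridge price and rated yield.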


  • This web site needs a different Google Maps API key. A new key can be generated at http://code.googl

    - by MJI
    Apologies in advance if this is the wrong place to post. I tried searching for this issue, and all that seemed to come up were questions posted by people who had this issue with their web pages; I couldn't find questions related to this issue from a layperson's perspective. I'm not a developer. I have no domain, nor do I wish to have one at this time. Rather, I'm just a regular person who likes to upload photos to some photo-related sites. My uploading process constantly gets interrupted by one of these annoying API errors. I get it at least twice: once when I click the page to upload, and again right after the photo has uploaded. It also pops up if I go to edit or delete a photo. This interrupts my browsing experience until I click OK. I just want a fix for the annoyance without having to register for a key; I tried before and it required a web domain. I'd rather not have to create a domain and go through such hoops just to fix this. Is there a solution for this problem that doesn't require registration? Another thing to note: I have used two computers. One has the message pop up and the other doesn't. What is different about the two computers?


  • Synchronize Active Directory to Database

    - by Tommy Jakobsen
    We are in a situation where we would like to offer our customers the ability to manage their users themselves. It is around 300 customers with up to a total of 10,000 users. Besides creating, updating and removing users, they will very often read information about users for statistics and other useful purposes. All this functionality should be available from an intranet web page (.NET Framework 4) that the users will access through Citrix or similar. Now, the problem is that we would really like the users not to query AD directly for each request, but rather have them hit a database that is synchronized with AD. It would be sufficient to run this synchronization a few times each day (maybe every 5 hours). When they create a user, it should not be available right away, but reviewed and then created within two days (the next step would be to remove this manual review, but that's out of scope for this question). What do you think about this synchronization of AD? Does anyone have any experience with it, and is it something that is done in other organizations where you have lots of requests which are better handled by a database than by AD (I presume)? Are there any techniques out there for writing such a script that synchronizes AD with database tables? My primary concern is the groups/members relations, which can be rather complicated. Or is there software that synchronizes AD with a database? Any comments will be much appreciated. Thank you.
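    For the scripting half, a minimal sketch of the read side in C# (not a production sync job; the LDAP path, the attribute list, and what you do with each result are all assumptions):

        using System;
        using System.DirectoryServices; // add a reference to System.DirectoryServices

        class AdDump
        {
            static void Main()
            {
                using (var root = new DirectoryEntry("LDAP://DC=example,DC=com"))
                using (var searcher = new DirectorySearcher(
                    root, "(&(objectCategory=person)(objectClass=user))"))
                {
                    searcher.PageSize = 500; // page results so large OUs come back fully
                    searcher.PropertiesToLoad.AddRange(
                        new[] { "sAMAccountName", "displayName", "memberOf", "whenChanged" });

                    foreach (SearchResult user in searcher.FindAll())
                    {
                        // Upsert into your own users/groups tables here instead of printing;
                        // "memberOf" carries the group DNs for the membership relation.
                        Console.WriteLine(user.Properties["sAMAccountName"][0]);
                    }
                }
            }
        }

    The whenChanged attribute makes incremental runs possible (only touch rows newer than the last sync), and reading memberOf per user is one way to flatten the groups/members relation into a join table.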


  • Does migrating 2 domain controllers between 2 datacentres require both virtual machines to be shut down at the same time?

    - by Imagineer
    I was attempting to migrate 2 virtual machines that are domain controllers between 2 datacentres running ESX 3.5 and ESX 4.1. I was advised to shut down both domain controllers at the same time during the migration process, to avoid USN rollback and other replication issues. The following are the steps that I was planning to perform:

    1. Shut down both DCs.
    2. Copy both VMs' files across to the new datacentre using Veeam FastSCP (connecting to both vCenters through IP address instead of hostname).
    3. Power them up at the new datacentre.
    4. Configure network interface/DNS/DHCP for both DCs in the new datacentre.

    I chose Veeam FastSCP rather than VMware Standalone Converter because it copies rather than converts. Someone also suggested that I use a backup-and-restore app like Veeam Backup and Replication. Sounds like a simple job, but after shutting down both DCs, the transfer rate using FastSCP was so slow it registered only 1KB/s, as opposed to the normal 1MB/s (or more). When that transfer attempt failed, I tried to cold clone both DCs, which resulted in both ESX hosts getting disconnected. I tried troubleshooting by referring to this - VMware KB - Diagnosing an ESX Server that is Disconnected or Not Responding in VirtualCenter. It seems that DNS being down was the cause of all the unusual occurrences: the moment I powered up the DCs via the VMware console command, the ESX hosts were able to connect to vCenter again. How can I avoid such a pitfall again? Am I doing it correctly? Any help would be greatly appreciated! Thank you.


  • Is it possible for the Subversion Apache module to serve html files with an html content-type without using the svn:mime-type property?

    - by Martin Pain
    I am aware that if you set the svn:mime-type Subversion property on a .html file to text/html, then when viewing the file in a browser through the Subversion module in Apache httpd it will be served with a Content-Type: text/html header, enabling the browser to render it as HTML rather than plain text. However, I am looking for a way to do this without using the svn:mime-type property. I'm aware that you can configure your svn client to automatically add the property - this is not what I want, as I do not want to have to ensure all users have these settings. I'm also aware that I could create a pre-commit hook that rejects the commit if the properties are not set, in order to force users to set the property - I might fall back to that, but I'm looking for something less intrusive. I'm also aware that I could use a post-commit hook to add the properties automatically on the server side. I'd rather not do that (as users then have to update immediately after their commit, and it's not trivial to write) - I'm looking for a better alternative. Perhaps something with rewrite rules in the Apache server?
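    One avenue to experiment with - unverified against mod_dav_svn, which normally derives the Content-Type from svn:mime-type and defaults to text/plain - is letting mod_mime type responses by the extension on the request path. A sketch, with repository paths as placeholders:

        <Location /svn>
            DAV svn
            SVNPath /var/svn/repo
            # Ask mod_mime to look at the request path's extension
            ModMimeUsePathInfo On
            AddType text/html .html .htm
        </Location>

    Whether these directives actually override the module's own header is worth testing before ruling out the hook-based options.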


  • How can I move linked Word/Excel files without breaking the links under Windows 7?

    - by DOUG NEEDHAM
    I currently operate under Windows XP and have multiple links between my Word and Excel files. I have to upgrade to Windows 7. When the .doc and .xls files are converted to .docm and .xlsm, respectively, the links no longer work. The Word document is still attempting to point back to the old .xls file rather than the new file. Also, creating new links between Word and Excel within Office 2010 doesn't seem to work. I create the new link, switch it from "Auto" to "Manual" and everything works fine. But when I copy the files to another folder, the Word document is still trying to link to the file in the previous folder rather than the new folder. This always worked in Windows XP. I've been using linked Word/Excel documents for 10+ years and have never really had a problem. I'm very careful to maintain Word and Excel filenames when moving the files to a new folder. The process has always been to 1.) move the files, 2.) update the links, 3.) rename the files, and 4.) update the links again. It's my understanding that under Windows XP, links between Word and Excel are relative. But under Windows 7 (and Office 2010?), those same links become fixed.


  • Subdomain is preventing my search results from rising as expected in page rank

    - by culov
    My problem is that I have a site which requires a dedicated page for every city I choose to support. Early on, I decided to use subdomains rather than paths directly after my domain (i.e. I used la.truxmap.com rather than truxmap.com/la). I realize now that this was a major mistake, because Google seems to treat la.truxmap.com as a completely different site from ny.truxmap.com. So, for instance, if I search "la food truck map" my site will be near the top; however, if I search "nyc food truck map" I'm nowhere in sight, because ny.truxmap.com wouldn't be very high in the page rank by itself, and it doesn't get the boost that it ought to be getting from the better-known la.truxmap.com. So a mistake I made a year ago is now haunting my page rank. I'd like to know the most painless way of resolving my dilemma. I have received so much press at la.truxmap.com that I can't just kill the site, but could I redirect all requests at la.truxmap.com to truxmap.com/la, and do the same for all supported cities, without trashing the current, satisfactory page rank results I'm getting from la.truxmap.com? EDIT: I left out some critical information. I am using Google Apps to manage my domain (that is, to add the subdomains) and Google App Engine to host my site. Thus, Google Apps provides a simple mechanism to mask truxmap.appspot.com (the App Engine domain) as la.truxmap.com, but I don't see how I can mask it as truxmap.com/la. If I can get this done, then I can just 301 redirect la.truxmap.com to truxmap.com/la as suggested below. Thanks so much!
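    For reference, the 301 that preserves ranking signals is just a response header pair. However it is produced (an App Engine handler, a front-end proxy), each old URL should answer like this, with the path carried across (values here are illustrative):

        HTTP/1.1 301 Moved Permanently
        Location: http://truxmap.com/la/

    Search engines treat a 301 as "the page has moved for good" and, over time, transfer most of the old URL's standing to the target, which is why it is the standard remedy for exactly this restructuring.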


  • Ubuntu 10.10 - disaster - what other Linux for a beginner?

    - by A-ha
    Guys, I've tried to install Ubuntu (desktop and netbook editions) on my laptop, and unfortunately I have to say that despite the fact that the installation process is supposed to be easy, I couldn't finish installing this system - it didn't detect my keyboard, or rather lost my keyboard, as soon as I tried to switch the pad on/off on my laptop. After I discovered that, I started all over again (this time without touching my laptop's pad during installation) and yes, eventually it got to the end of the installation. Unfortunately, when I tried to switch my pad (sometimes I just do not want to use a mouse) the whole system froze. So I had to restart it with the power button, and this time I didn't touch the pad at all, plugged in a mouse and tried to rearrange the taskbars according to my liking (all taskbars on the top side of the screen and auto-hide on), and I gave up. It is so unfinished that I just can't be bothered to use it. I would like to have one Linux system on my machine, so I started googling, and most of the links are to either Ubuntu (which I just do not want to touch for now), SUSE, or commercial versions of Linux. I do not really mind paying for something (and having this experience with Ubuntu, I'd rather pay and have something professional than get it free and discover that it's unusable). So could someone please provide a short list of Linux distros which would be appropriate for a beginner? I don't mind paying for it; I just want it to be a professional product.


  • Relative path incorrect in the view layer when hosting a rails3 app in a subdirectory using passenger and apache

    - by Saifis
    I want to host multiple Rails apps on a single server using sub-directories, and have encountered some relative path problems. I have made a symbolic link to the app's public directory and placed it in the /var/www/html directory:

        /var/www/html/
            /test_app (symbolic link to the public folder of test_app)

    and set up Apache like so:

        LoadModule passenger_module /usr/local/lib/ruby/gems/1.9.1/gems/passenger-3.0.12/ext/apache2/mod_passenger.so
        PassengerRoot /usr/local/lib/ruby/gems/1.9.1/gems/passenger-3.0.12
        PassengerRuby /usr/local/bin/ruby

        <VirtualHost *:80>
            ServerName test.com
            DocumentRoot /var/www/html
            Options Indexes FollowSymLinks -MultiViews
            RailsBaseURI /test_app
            </Location>
        </VirtualHost>

    The links in the app itself work just fine; all the links acknowledge the test_app/ directory. However, when it comes to showing images from the public directory in the view, the relative path goes wrong. Say I have /system/files/1/aaa.png; it goes looking for it in /var/www/html/system/files/1/aaa.png rather than /var/www/html/test_app/system/files/1/aaa.png. As far as I understand, this is an Apache setting problem rather than something to be done in Rails; if possible I would prefer to have it contained in the conf file of Apache rather than having to alter the code.
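    A sketch of one Apache-side fix that matches that preference, under the assumption that those /system URLs always belong to this one app: alias the path onto the sub-application's public folder so the absolute URLs resolve without code changes.

        # Paths are placeholders; adjust to the real public/system location
        Alias /system /var/www/html/test_app/system
        <Directory /var/www/html/test_app/system>
            Options FollowSymLinks
            Order allow,deny      # Apache 2.2 syntax; 2.4 uses "Require all granted"
            Allow from all
        </Directory>

    This only works while a single app owns /system; if several sub-apps store uploads under the same prefix, each would need a distinct URL prefix instead.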


  • Exim: how to route local mail to another address

    - by kheraud
    I have set up an Exim4 server on my Debian wheezy box. This mail server only sends mail originating from localhost; the purpose is sending mail for my website. I have cron tasks and other services generating mail for the root user. These mails are no longer stored in /var/mail as before, but are sent by Exim to [email protected]. I am trying to make Exim send mail for root to [email protected] rather than [email protected]. I tried adding a .forward file in /root with [email protected] as its content. I also tried changing /etc/aliases with root: [email protected]. The fact is that routing works for root@localhost but not for root, which is resolved as [email protected]. I tested how routing is resolved with exim -bt:

        root@srv02:~# exim -bt root@localhost
        R: system_aliases for root@localhost
        R: dnslookup for [email protected]
        [email protected] <-- root@localhost
          router = dnslookup, transport = remote_smtp
          host gmail-smtp-in.l.google.com      [173.194.67.27]  MX=5
          host alt1.gmail-smtp-in.l.google.com [74.125.143.27]  MX=10
          host alt2.gmail-smtp-in.l.google.com [74.125.25.27]   MX=20
          host alt3.gmail-smtp-in.l.google.com [173.194.64.27]  MX=30
          host alt4.gmail-smtp-in.l.google.com [74.125.142.27]  MX=40

        root@srv02:~# exim -bt root
        R: dnslookup for [email protected]
        [email protected]
          router = dnslookup, transport = remote_smtp
          host aspmx.l.google.com      [173.194.78.27]  MX=1
          host alt1.aspmx.l.google.com [74.125.143.27]  MX=5
          host alt2.aspmx.l.google.com [74.125.25.27]   MX=5
          host alt4.aspmx.l.google.com [74.125.142.27]  MX=10
          host alt3.aspmx.l.google.com [173.194.64.27]  MX=10

    I bet this is a matter of how my server is configured (rather than how Exim is configured). But to understand it well, I would like a solution for both: how do I get root resolved as root@localhost? And how do I get [email protected] routed to [email protected]?
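    On Debian's Exim packaging, a common reason the alias is bypassed is that the domain root gets qualified with is not in Exim's local-domain list, so the dnslookup router matches before system_aliases is ever consulted. A sketch of the usual fix, with placeholder names (the real domain and target address are redacted above, so everything here is an assumption):

        # /etc/exim4/update-exim4.conf.conf
        dc_other_hostnames='example.com'   # treat your own domain as local

        # /etc/aliases
        root: someone@example.net

        # regenerate the config and reload:
        update-exim4.conf && invoke-rc.d exim4 reload

    Note the trade-off: listing the domain as local makes Exim handle every address at that domain itself, which suits a box that only originates mail but is wrong if other addresses at the domain must still route out to Google. As for why bare root qualifies to the domain in the first place: on Debian the qualification domain comes from /etc/mailname, so that file is the other thing worth checking.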


  • Revamping an old and unstable office IT-solution using Windows Server and OpenVPN

    - by cmbrnt
    I've been given the cumbersome task of totally redoing the IT infrastructure for a customer's office. They are currently running Windows XP all over, with one computer acting as a file server with no control over which users have access to which files, and so on. To top it off, this file server also functions as a workstation, which means it gets rebooted every time the user notices some sluggish behavior or experiences problems with flash games. To say the least, this isn't working for them. Now - I've got a very slim budget, but I need to set up a new server, and I wish to run Windows Server 2008 on it. I also need the ability to access the network remotely via VPN. Would it be a good idea to install VMware ESXi 4.1 onto the new server, and then run Windows Server 2008 as well as a separate Debian install for OpenVPN on it? I don't like the idea of the Domain Controller for the future AD also running a VPN server, because of stability issues when something goes to hell with either of them. There will be no redundancy, though. However, I'm not sure if there is something to gain by installing a VPN solution on the Windows Server itself when it comes to accessing file shares on the network via VPN. I don't know how to enable users logging in via the VPN to access the remote files, since they will be accessing the network from their own home computers (which is indeed a really bad idea, but this is what I've got to work with). They won't be logged in to the Windows domain, but rather their home workgroups. I need to be able to grant access to files in certain directories based on the logged-in AD user, but not every computer will necessarily be configured to log into the domain. I'm not sure how to explain this in a good way, but I'd be happy to clarify if something's not clear. Any help would be great, because I've got a feeling that I can't do this without introducing a bunch of costly new rules in their IT solution. I'd rather leave that untouched and go on my merry way to the next assignment.
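    If the separate Debian VM route is taken, the file-share half is mostly routing and DNS. A minimal server-side sketch, with every address a placeholder: push the office subnet and the DC's DNS to VPN clients so shares resolve by name.

        # /etc/openvpn/server.conf (fragment; addresses are assumptions)
        server 10.8.0.0 255.255.255.0            # VPN client address pool
        push "route 192.168.1.0 255.255.255.0"   # office LAN behind the VPN
        push "dhcp-option DNS 192.168.1.10"      # the DC / DNS server
        push "dhcp-option DOMAIN office.local"   # AD DNS suffix

    Per-user access on the share side can then still be enforced by NTFS permissions at the file server: non-domain-joined home machines simply get prompted for domain credentials when they open \\server\share, so the AD user identity survives even though the workstation is in a workgroup.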


  • sub application and virtual directory file permissions

    - by Zeus
    I have a website set up in IIS7, exampledomain.com. Under the application exampledomain.com lives a sub-application, cms. In a rather convoluted way, we have content in our cms system in this sub-app, under cms\content\{generatedfoldername}. So to access an image in this content, the full URL would be http://www.exampledomain.com/cms/cms/content/{generatedfoldername}/image.jpg (yes, cms twice...), and this works just fine. Now, we have a virtual directory under the parent website, called stuff, which points at the content of the cms. So I should be able to get to the image using the URL http://www.exampledomain.com/stuff/{generatedfoldername}/image.jpg. Unfortunately this gives a server 500 error: "There is a problem with the resource you are looking for, and it cannot be displayed." While you do have to log into the cms system to access any of the admin pages within, I don't think the image files are protected by login, or else the first example URL wouldn't work, right? Also, it's a server 500 error rather than a 403. I'm sure I must be missing something obvious here: will the virtual directory use the permissions defined in the parent application, or in the sub-application to which it is pointing? Or are there some other permissions I may have missed? Sorry, that was a bit long; thanks for reading all the way down here! (I also must point out that I'm pretty new to the server management stuff.) Edit: also, we have <location path="." inheritInChildApplications="false"> specified in the web.config of the parent app, so it's hopefully not the issue described in this config file hierarchy article.


  • Mod_pagespeed, Varnish and Apache cache issues after new code pushes

    - by WerkkreW
    I have a rather strange issue. In my environment we are running a load-balanced cluster of 8 Apache servers with a master-master MySQL backend. In front of Apache we have Varnish as the cache layer. We have been running Apache mod_pagespeed for several weeks now, and for the most part it has been working great. The issue arises when we do fresh code updates from Git and any/all of the JS/CSS assets change. Basically the problem appears to be twofold. One: after the code push we generally take the opportunity to flush Varnish, restart Apache, and restart Varnish. In doing this, all of the mod_pagespeed combined/minified files are cleared out, ensuring that all of the new JS/CSS assets are fresh. The problem is, upon doing this, the file names that mod_pagespeed creates change, but the old files appear to still be cached client-side for many people, leading to very unexpected results. However, if we do not restart Apache, the changes to the files may or may not appear client-side due to the cached minified assets. The simple solution is to disable mod_pagespeed; however, I would rather not do that, as it has made a fairly large impact on performance. I feel as if there must be a better way to deal with the inconsistencies in cache between the client and server, to prevent people having to go to great lengths or perform a large number of page refreshes to see a working page. I can provide configuration snippets if anyone needs them. If you would like to inspect the site, source, headers, or anything, try the following addresses: http://wellplayed.org http://wellplayed.org/tv Thanks in advance!
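    A sketch of a gentler deploy step, assuming the default file-cache layout (the path is whatever ModPagespeedFileCachePath is set to): mod_pagespeed watches a cache.flush file and treats entries older than its mtime as stale, so no Apache restart is needed.

        # invalidate mod_pagespeed's cache without restarting Apache
        touch /var/cache/mod_pagespeed/cache.flush

        # then drop Varnish's copies of HTML that embeds the old .pagespeed. URLs
        varnishadm "ban req.url ~ ."    # Varnish 3.x syntax

    Because the rewritten resource URLs are content-hashed, HTML that is re-optimized after the flush references new asset names, so clients fetch fresh files instead of serving stale cached copies against old markup.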



  • How to use non-free drivers during debian install

    - by blokeley
    I'm trying to install Debian stable using UNetbootin. The install process fails with "network autoconfiguration failed", probably due to the ethernet driver not working. My Lenovo U350 has a Broadcom BCM57780, which does not seem to be supported out of the box: there are various bug reports here, here and here, but I don't know if the fix has made it into Debian (6) stable. One discussion says that you have to use an ethernet driver from the firmware-linux-nonfree package. I'm not sure that this is correct, because the BCM57780 is not in the list of drivers in firmware-linux-nonfree. The specific question tree is:

    - Is the BCM57780 supported in Debian stable?
    - If so, what could be wrong? Should I install Debian unstable instead?
    - If not, do I need to use firmware-linux-nonfree during installation and, if so, how do I do this?

    Please note: I've used Ubuntu and Debian loads in the past, but please post line-by-line guidance rather than some cryptic abbreviation of any instructions. Thanks in advance for any help.

    Updates:

    - Debian stable with non-free drivers did not work.
    - Debian unstable (free drivers only) did not work.
    - Tried loading firmware-iwlwifi_0.28_all.deb from another USB stick to get wireless working rather than the BCM57780. The .deb file was found, but the network configuration still failed!

    That's it, I'm giving up. Unfortunately I'll use Ubuntu, even though the Unity user interface will be very unstable for the next couple of years :(
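    For the "how do I do this" part, the documented Debian approach (worth verifying against the install guide for your release) is to offer the firmware on a second USB stick: debian-installer scans removable media for a /firmware directory when it detects missing firmware. A sketch, with mount points as placeholders:

        # on another machine, with the stick mounted at /media/usb
        mkdir -p /media/usb/firmware
        cp firmware-linux-nonfree_*_all.deb /media/usb/firmware/
        # loose .bin firmware files in the same directory also work

    Keep the stick inserted during the install; the installer prompts to load from it when it hits the missing-firmware step.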


  • Anyone recommend a program to print multiple HTML files at once for end users?

    - by Keith Bentrup
    I have some clients with multiple HTML files in folders that are occasionally updated and printed. They would like to be able to print them all at once without having to open each one. I typically do this with a quick command for myself, but I'm unaware of any freeware to do it. After a Google search I'm not finding one, so I'm hoping someone can help. I'd rather not use a script to do this, for various security/ease-of-use/familiarity reasons; I'd rather be able to just point to a simple program they can download and use on their Windows desktops. Does anyone know of one, or of some other easy solution? Maybe I'm overlooking the obvious. If anyone's curious, this is what I do for myself (not for my clients):

        for %h in (*.html) do type "%h" >> all.htm

    then open all.htm and print. If I need a page break after each doc, I just search and replace in all.htm, changing </body> to <p style="page-break-after:always">&nbsp;</p></body>. It's quick and simple, but too unfamiliar for them. Thanks!
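    For what it's worth, the one-liner extends naturally into a small .bat file that inserts the page break while concatenating, so the search-and-replace step disappears (a sketch of the same approach, not a polished tool):

        @echo off
        if exist all.htm del all.htm
        for %%h in (*.html) do (
          type "%%h" >> all.htm
          rem append a page break after each document
          echo ^<p style="page-break-after:always"^>^&nbsp;^</p^> >> all.htm
        )
        start all.htm

    It still isn't the double-clickable program the clients want, but it cuts the manual steps down to one.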


  • Slow queries in Rails - not sure if my indexes are being used.

    - by Max Williams
    I'm doing a quite complicated find with lots of includes, which Rails is splitting into a sequence of discrete queries rather than doing a single big join. The queries are really slow - my dataset isn't massive, with none of the tables having more than a few thousand records. I have indexed all of the fields which are examined in the queries, but I'm worried that the indexes aren't helping for some reason: I installed a plugin called "query_reviewer" which looks at the queries used to build a page and lists problems with them. This states that indexes AREN'T being used, and it features the results of calling 'explain' on the query, which lists various problems. Here's an example find call:

        Question.paginate(:all, {:page=>1,
          :include=>[:answers, :quizzes, :subject, {:taggings=>:tag}, {:gradings=>[:age_group, :difficulty]}],
          :conditions=>["((questions.subject_id = ?) or (questions.subject_id = ? and tags.name = ?))", "1", 19, "English"],
          :order=>"subjects.name, (gradings.difficulty_id is null), gradings.age_group_id, gradings.difficulty_id",
          :per_page=>30})

    And here are the generated SQL queries:

        SELECT DISTINCT `questions`.id
        FROM `questions`
        LEFT OUTER JOIN `taggings` ON `taggings`.taggable_id = `questions`.id AND `taggings`.taggable_type = 'Question'
        LEFT OUTER JOIN `tags` ON `tags`.id = `taggings`.tag_id
        LEFT OUTER JOIN `subjects` ON `subjects`.id = `questions`.subject_id
        LEFT OUTER JOIN `gradings` ON gradings.question_id = questions.id
        WHERE (((questions.subject_id = '1') or (questions.subject_id = 19 and tags.name = 'English')))
        ORDER BY subjects.name, (gradings.difficulty_id is null), gradings.age_group_id, gradings.difficulty_id
        LIMIT 0, 30

        SELECT `questions`.`id` AS t0_r0 <..etc...>
        FROM `questions`
        LEFT OUTER JOIN `answers` ON answers.question_id = questions.id
        LEFT OUTER JOIN `quiz_questions` ON (`questions`.`id` = `quiz_questions`.`question_id`)
        LEFT OUTER JOIN `quizzes` ON (`quizzes`.`id` = `quiz_questions`.`quiz_id`)
        LEFT OUTER JOIN `subjects` ON `subjects`.id = `questions`.subject_id
        LEFT OUTER JOIN `taggings` ON `taggings`.taggable_id = `questions`.id AND `taggings`.taggable_type = 'Question'
        LEFT OUTER JOIN `tags` ON `tags`.id = `taggings`.tag_id
        LEFT OUTER JOIN `gradings` ON gradings.question_id = questions.id
        LEFT OUTER JOIN `age_groups` ON `age_groups`.id = `gradings`.age_group_id
        LEFT OUTER JOIN `difficulties` ON `difficulties`.id = `gradings`.difficulty_id
        WHERE (((questions.subject_id = '1') or (questions.subject_id = 19 and tags.name = 'English')))
          AND `questions`.id IN (602, 634, 666, 698, 730, 762, 613, 645, 677, 709, 741, 592, 624, 656, 688, 720, 752, 603, 635, 667, 699, 731, 763, 614, 646, 678, 710, 742, 593, 625)
        ORDER BY subjects.name, (gradings.difficulty_id is null), gradings.age_group_id, gradings.difficulty_id

        SELECT count(DISTINCT `questions`.id) AS count_all
        FROM `questions`
        LEFT OUTER JOIN `answers` ON answers.question_id = questions.id
        LEFT OUTER JOIN `quiz_questions` ON (`questions`.`id` = `quiz_questions`.`question_id`)
        LEFT OUTER JOIN `quizzes` ON (`quizzes`.`id` = `quiz_questions`.`quiz_id`)
        LEFT OUTER JOIN `subjects` ON `subjects`.id = `questions`.subject_id
        LEFT OUTER JOIN `taggings` ON `taggings`.taggable_id = `questions`.id AND `taggings`.taggable_type = 'Question'
        LEFT OUTER JOIN `tags` ON `tags`.id = `taggings`.tag_id
        LEFT OUTER JOIN `gradings` ON gradings.question_id = questions.id
        LEFT OUTER JOIN `age_groups` ON `age_groups`.id = `gradings`.age_group_id
        LEFT OUTER JOIN `difficulties` ON `difficulties`.id = `gradings`.difficulty_id
        WHERE (((questions.subject_id = '1') or (questions.subject_id = 19 and tags.name = 'English')))

    Actually, looking at these all nicely formatted here, there's a crazy amount of joining going on. This can't be optimal, surely. Anyway, it looks like I have two questions.

    1) I have an index on each of the ids and foreign key fields referred to here. The second of the above queries is the slowest, and calling explain on it (directly in MySQL) gives me the following:

        | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
        | 1 | SIMPLE | questions | range | PRIMARY,index_questions_on_subject_id | PRIMARY | 4 | NULL | 30 | Using where; Using temporary; Using filesort |
        | 1 | SIMPLE | answers | ref | index_answers_on_question_id | index_answers_on_question_id | 5 | millionaire_development.questions.id | 2 | |
        | 1 | SIMPLE | quiz_questions | ref | index_quiz_questions_on_question_id | index_quiz_questions_on_question_id | 5 | millionaire_development.questions.id | 1 | |
        | 1 | SIMPLE | quizzes | eq_ref | PRIMARY | PRIMARY | 4 | millionaire_development.quiz_questions.quiz_id | 1 | |
        | 1 | SIMPLE | subjects | eq_ref | PRIMARY | PRIMARY | 4 | millionaire_development.questions.subject_id | 1 | |
        | 1 | SIMPLE | taggings | ref | index_taggings_on_taggable_id_and_taggable_type,index_taggings_on_taggable_type | index_taggings_on_taggable_id_and_taggable_type | 263 | millionaire_development.questions.id,const | 1 | |
        | 1 | SIMPLE | tags | eq_ref | PRIMARY | PRIMARY | 4 | millionaire_development.taggings.tag_id | 1 | Using where |
        | 1 | SIMPLE | gradings | ref | index_gradings_on_question_id | index_gradings_on_question_id | 5 | millionaire_development.questions.id | 2 | |
        | 1 | SIMPLE | age_groups | eq_ref | PRIMARY | PRIMARY | 4 | millionaire_development.gradings.age_group_id | 1 | |
        | 1 | SIMPLE | difficulties | eq_ref | PRIMARY | PRIMARY | 4 | millionaire_development.gradings.difficulty_id | 1 | |

    The query_reviewer plugin has this to say about it - it lists several problems:

    Table questions: Using temporary table, Long key length (263), Using filesort. MySQL must do an extra pass to find out how to retrieve the rows in sorted order. To resolve the query, MySQL needs to create a temporary table to hold the result. The key used for the index was rather long, potentially affecting indices in memory.

    2) It looks like Rails isn't splitting this find up in a very optimal way. Is it, do you think? Am I better off doing several find queries manually rather than one big combined one?

    Grateful for any advice, max


  • Making Sense of ASP.NET Paths

    - by Rick Strahl
    ASP.NET includes quite a plethora of properties to retrieve path information about the current request, control and application. There's a ton of information available about paths on the Request object, some of it appearing to overlap and some of it buried several levels down, and it can be confusing to find just the right path that you are looking for. To keep things straight I thought it a good idea to summarize the path options along with descriptions and example paths. I wrote a post about this a long time ago in 2004, and I find myself frequently going back to that page to quickly figure out which path I'm looking for in processing the current URL. Apparently a lot of people must be doing the same, because the original post is the second most visited on this blog even to this date, to the tune of nearly 500 hits per day. So, I decided to update and expand a bit on the original post with a little more information and clarification based on the original comments.

    Request Object Paths Available

    Here's a list of the path-related properties on the Request object (and the Page object), each with a description and an example value. Assume a path like http://www.west-wind.com/webstore/admin/paths.aspx for the paths below, where webstore is the name of the virtual.

    ApplicationPath
        Returns the web root-relative logical path to the virtual root of this app.
        /webstore/

    PhysicalApplicationPath
        Returns the local file system path of the virtual root for this app.
        c:\inetpub\wwwroot\webstore

    PhysicalPath
        Returns the local file system path to the current script or path.
        c:\inetpub\wwwroot\webstore\admin\paths.aspx

    Path, FilePath, CurrentExecutionFilePath
        All of these return the full root-relative logical path to the script page, including path and script name. CurrentExecutionFilePath will return the 'current' request path after a Transfer/Execute call, while FilePath will always return the original request's path.
        /webstore/admin/paths.aspx

    AppRelativeCurrentExecutionFilePath
        Returns an ASP.NET root-relative virtual path to the script or path for the current request. If in a Transfer/Execute call, the transferred path is returned.
        ~/admin/paths.aspx

    PathInfo
        Returns any extra path following the script name (the ExtraPathInfo portion in the example below); string.Empty if no PathInfo is available.
        /webstore/admin/paths.aspx/ExtraPathInfo

    RawUrl
        Returns the full root-relative URL including querystring and extra path, as a string.
        /webstore/admin/paths.aspx?sku=wwhelp40

    Url
        Returns a fully qualified URL including querystring and extra path. Note this is a Uri instance rather than a string.
        http://www.west-wind.com/webstore/admin/paths.aspx?sku=wwhelp40

    UrlReferrer
        The fully qualified URL of the page that sent the request. This is also a Uri instance, and this value is null if the page was accessed directly by typing into the address bar, or if the client did not send a Referrer HTTP header.
        http://www.west-wind.com/webstore/default.aspx?Info

    Control.TemplateSourceDirectory
        Returns the logical path to the folder of the page, master or user control on which it is called. This is useful if you need to know the path to a Page or control from within the control. For non-file controls this returns the Page path.
        /webstore/admin/

    As you can see, there's a ton of information available for each of the three common path formats:

    - Physical Path is an OS-style path that points to a path or file on disk.
    - Logical Path is a Web path that is relative to the Web server's root. It includes the virtual plus the application-relative path.
    - ~/ (Root-relative) Path is an ASP.NET-specific path that includes ~/ to indicate the virtual root Web path.

    ASP.NET can convert virtual paths into either logical paths using Control.ResolveUrl(), or physical paths using Server.MapPath(). Root-relative paths are useful for specifying portable URLs that don't rely on relative directory structures, and are very useful from within control or component code. You should be able to get any necessary format from ASP.NET from just about any path or script using these mechanisms.

    ~/ Root Relative Paths and ResolveUrl() and ResolveClientUrl()

    ASP.NET supports root-relative virtual path syntax in most of its URL properties in Web Forms, so you can easily specify a root-relative path in a control rather than a location-relative path:

        <asp:Image runat="server" ID="imgHelp" ImageUrl="~/images/help.gif" />

    ASP.NET internally resolves this URL by using ResolveUrl("~/images/help.gif") to arrive at the root-relative URL of /webstore/images/help.gif, using Request.ApplicationPath as the base path to replace the ~. By convention, any custom Web controls should also use ResolveUrl() on URL properties to provide the same functionality. In your own code you can use Page.ResolveUrl() or Control.ResolveUrl() to accomplish the same thing:

        string imgPath = this.ResolveUrl("~/images/help.gif");
        imgHelp.ImageUrl = imgPath;

    Unfortunately ResolveUrl() is limited to WebForm pages, so if you're in an HttpHandler or Module it's not available. ASP.NET MVC has its own more generic version of ResolveUrl in Url.Content, which is part of the UrlHelper class:

        <script src="<%= Url.Content("~/scripts/new.js") %>" type="text/javascript"></script>

    In ASP.NET MVC this sort of syntax is actually even more crucial than in WebForms, due to the fact that views are not referencing specific pages but rather are often path based, which can lead to variations on how a particular view is referenced.

    In a Module or Handler, Control.ResolveUrl() unfortunately is not available, which in retrospect seems like an odd design choice - URL resolution really should happen on a Request basis, not as part of the Page framework. Luckily you can also rely on the static VirtualPathUtility class:

        string path = VirtualPathUtility.ToAbsolute("~/admin/paths.aspx");

    VirtualPathUtility also has many other quite useful methods for dealing with paths and converting between the various kinds of paths supported. One thing to watch out for is that ToAbsolute() will throw an exception if a query string is provided, and it doesn't work on fully qualified URLs. I wrote about this topic, with a custom solution that works with fully qualified URLs and query strings, here (check the comments for some interesting discussions too). Similar to ResolveUrl() is ResolveClientUrl(), which creates a fully qualified HTTP path that includes the protocol and domain name. It's rare that this full resolution is needed, but it can be useful in some scenarios.

    Mapping Virtual Paths to Physical Paths with Server.MapPath()

    If you need to map root-relative or current-folder-relative URLs to physical paths, you can use HttpContext.Current.Server.MapPath(). Inside of a Page you can do the following:

        string physicalPath = Server.MapPath("~/scripts/ww.jquery.js");

    MapPath is pretty flexible, and it understands both ASP.NET-style virtual paths and plain relative paths, so the following also works:

        string physicalPath = Server.MapPath("scripts/silverlight.js");

    as well as dot-relative syntax:

        string physicalPath = Server.MapPath("../scripts/jquery.js");

    Once you have the physical path you can perform standard System.IO Path and File operations on the file. Remember, with physical paths and IO or copy operations you need to make sure you have permissions to access files and folders based on the Web server user account that is active (NETWORK SERVICE or ASPNET, typically). Note that Server.MapPath will not map up beyond the virtual root of the application, for security reasons.

    Server and Host Information

    Between these settings you can get all the information you may need to figure out where you are and to build a new URL if necessary. If you need to build a URL completely from scratch, you can get access to information about the server you are accessing:

    SERVER_NAME
        The name of the domain or the IP address.
        www.west-wind.com or 127.0.0.1

    SERVER_PORT
        The port that the request runs under.
        80

    SERVER_PORT_SECURE
        Determines whether https: was used.
        0 or 1

    APPL_MD_PATH
        ADSI DirectoryServices path to the virtual root directory. Note that LM typically doesn't work for ADSI access, so you should replace it with LOCALHOST or the machine's NetBios name.
        /LM/W3SVC/1/ROOT/webstore

    Request.Url and Uri Parsing

    If you still need more control over the current request URL, or you need to create new URLs from an existing one, the current Request.Url Uri property offers a lot of control. Using the Uri class and UriBuilder makes it easy to retrieve parts of a URL and create new URLs based on an existing URL. The UriBuilder class is the preferred way to create URLs - much preferable over creating URIs via string concatenation.

    Scheme
        The URL scheme or protocol prefix.
        http or https

    Port
        The port, if specifically specified.

    DnsSafeHost
        The domain name or local host NetBios machine name.
        www.west-wind.com or rasnote

    LocalPath
        The full path of the URL, including script name and extra PathInfo.
        /webstore/admin/paths.aspx

    Query
        The query string, if any.
        ?id=1

    The Uri class itself is great for retrieving Uri parts, but most of the properties are read-only. If you need to modify a URL, you can use the UriBuilder class to load up an existing URL and modify it to create a new one. Here are a few common operations I've needed to do to get specific URLs:

    Convert the Request URL to an SSL/HTTPS link

    For example, taking the current request URL and converting it to a secure URL can be done like this:

        UriBuilder build = new UriBuilder(Request.Url);
        build.Scheme = "https";
        build.Port = -1; // don't inject port
        Uri newUri = build.Uri;
        string newUrl = build.ToString();

    Retrieve the fully qualified URL without a QueryString

    AFAIK, there's no native routine to retrieve the current request URL without the query string. It's easy to do with UriBuilder, however:

        UriBuilder builder = new UriBuilder(Request.Url);
        builder.Query = "";
        string logicalPathWithoutQuery = builder.ToString();

    What else?

    I took a look through the old post's comments and addressed as many of the questions and comments that came up in there. With a few small and silly exceptions, this updated post handles most of these. But I'm sure there are more things that could go in here. What else would be useful to put onto this post so it serves as a nice all-in-one place to go for path references? If you think of something, leave a comment and I'll try to update the post with it in the future.

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET.


  • What You Need to Know About Windows 8.1

    - by Chris Hoffman
    Windows 8.1 is available to everyone starting today, October 19. The latest version of Windows improves on Windows 8 in every way. It's a big upgrade, whether you use the desktop or the new touch-optimized interface. The latest version of Windows has been dubbed "an apology" by some - it's definitely more at home on a desktop PC than Windows 8 was. However, it also offers a more fleshed-out and mature tablet experience.

    How to Get Windows 8.1

    For Windows 8 users, Windows 8.1 is completely free. It will be available as a download from the Windows Store - that's the "Store" app in the Modern, tiled interface. Assuming upgrading to the final version works just like upgrading to the preview version, you'll likely see a "Get Windows 8.1" pop-up that will take you to the Windows Store and guide you through the download process. You'll also be able to download ISO images of Windows 8.1, so you can perform a clean install to upgrade. On any new computer, you can just install Windows 8.1 without going through Windows 8. New computers will start to ship with Windows 8.1, and boxed copies of Windows 8 will be replaced by boxed copies of Windows 8.1. If you're using Windows 7 or a previous version of Windows, the update won't be free. Getting Windows 8.1 will cost you the same amount as a full copy of Windows 8 - $120 for the standard version. If you're an average Windows 7 user, you're likely better off waiting until you buy a new PC with Windows 8.1 included rather than spending this amount of money to upgrade.

    Improvements for Desktop Users

    Some have dubbed Windows 8.1 "an apology" from Microsoft, although you certainly won't see Microsoft referring to it this way. Either way, Steven Sinofsky, who presided over Windows 8's development, left the company shortly after Windows 8 was released. Coincidentally, Windows 8.1 contains many features that Steven Sinofsky and Microsoft previously refused to implement. Windows 8.1 offers the following big improvements for desktop users:

    - Boot to Desktop: You can now log in directly to the desktop, skipping the tiled interface entirely.
    - Disable Top-Left and Top-Right Hot Corners: The app switcher and charms bar won't appear when you move your mouse to the top-left or top-right corners of the screen if you enable this option. No more intrusions into the desktop.
    - The Start Button Returns: Windows 8.1 brings back an always-present Start button on the desktop taskbar, dramatically improving discoverability for new Windows 8 users and providing a bigger mouse target for remote desktops and virtual machines. Crucially, the Start menu isn't back - clicking this button opens the full-screen Modern interface. Start menu replacements will continue to function on Windows 8.1, offering more traditional Start menus.
    - Show All Apps By Default: Luckily, you can hide the Start screen and its tiles almost entirely. Windows 8.1 can be configured to show a full-screen list of all your installed apps when you click the Start button, with desktop apps prioritized. The only real difference is that the Start menu is now a full-screen interface.
    - Shut Down or Restart From Start Button: You can now right-click the Start button to access Shut down, Restart, and other power options in just as many clicks as you could on Windows 7.
    - Shared Start Screen and Desktop Backgrounds: Windows 8 limited you to just a few Steven Sinofsky-approved background images for your Start screen, but Windows 8.1 allows you to use your desktop background on the Start screen. This can make the transition between the Start screen and desktop much less jarring; the tiles or shortcuts appear to be floating above the desktop rather than off in their own separate universe.
    - Unified Search: Unified search is back, so you can start typing and search your programs, settings, and files all at once - no more awkwardly clicking between different categories when trying to open a Control Panel screen or search for a file.

    These all add up to a big improvement when using Windows 8.1 on the desktop. Microsoft is being much more flexible - the Start menu is full screen, but Microsoft has relented on so many other things, and you'd never have to see a tile if you didn't want to. For more information, read our guide to optimizing Windows 8.1 for a desktop PC. These are just the improvements specifically for desktop users. Windows 8.1 includes other useful features for everyone, such as deep SkyDrive integration that allows you to store your files in the cloud without installing any additional sync programs.

    Improvements for Touch Users

    If you have a Windows 8 or Windows RT tablet, or another touch-based device on which you use the interface formerly known as Metro, you'll see many other noticeable improvements. Windows 8's new interface was half-baked when it launched, but it's now much more capable and mature.

    - App Updates: Windows 8's included apps were extremely limited in many cases. For example, Internet Explorer 10 could only display ten tabs at a time, and the Mail app was a barren experience devoid of features. In Windows 8.1, some apps - like Xbox Music - have been redesigned from scratch; Internet Explorer allows you to display a tab bar on-screen all the time, while apps like Mail have accumulated quite a few useful features. The Windows Store app has been entirely redesigned and is less awkward to browse.
    - Snap Improvements: Windows 8's Snap feature was a toy, allowing you to snap one app to a small sidebar at one side of your screen while another app consumed most of your screen. Windows 8.1 allows you to snap two apps side-by-side, seeing each app's full interface at once. On larger displays, you can even snap three or four apps at once. Windows 8's ability to use multiple apps at once on a tablet is compelling and unmatched by iPads and Android tablets. You can also snap two instances of the same app side-by-side - to view two web pages at once, for example.
    - More Comprehensive PC Settings: Windows 8.1 offers a more comprehensive PC Settings app, allowing you to change most system settings in a touch-optimized interface. You shouldn't have to use the desktop Control Panel on a tablet anymore - or at least not as often.
    - Touch-Optimized File Browsing: Microsoft's SkyDrive app allows you to browse files on your local PC, finally offering a built-in, touch-optimized way to manage files without using the desktop.
    - Help & Tips: Windows 8.1 includes a Help+Tips app that will help guide new users through its new interface, something Microsoft stubbornly refused to add during development.

    There's still no "Modern" version of the Microsoft Office apps (aside from OneNote), so you'll still have to use the desktop Office apps on tablets. It's not perfect, but the Modern interface doesn't feel anywhere near as immature anymore. Read our in-depth look at the ways Microsoft's Modern interface, formerly known as Metro, is improved in Windows 8.1 for more information. In summary, Windows 8.1 is what Windows 8 should have been.
All of these improvements are on top of the many great desktop features, security improvements, and all-around battery life and performance optimizations that appeared in Windows 8. If you’re still using Windows 7 and are happy with it, there’s probably no reason to race out and buy a copy of Windows 8.1 at the rather high price of $120. But, if you’re using Windows 8, it’s a big upgrade no matter what you’re doing. If you buy a new PC and it comes with Windows 8.1, you’re getting a much more flexible and comfortable experience. If you’re holding off on buying a new computer because you don’t want Windows 8, give Windows 8.1 a try — yes, it’s different, but Microsoft has compromised on the desktop while making a lot of improvements to the new interface. You just might find that Windows 8.1 is now a worthwhile upgrade, even if you only want to use the desktop.     


  • PASS Summit Feedback

    - by Rob Farley
PASS feedback came in last week. I also saw my dentist for some fillings... At the PASS Summit this year, I delivered a couple of regular sessions and a Lightning Talk. People told me they enjoyed them, but when the rankings came out, they showed that I didn't score particularly well. Brent Ozar was keen to discuss it with me.

Brent: PASS speaker feedback is out. You did two sessions and a Lightning Talk. How did you go?

Rob: Not so well actually, thanks for asking.

Brent: Ha! Sorry. Of course you know that's why I wanted to discuss this with you. I was in one of your sessions at SQLBits in the UK a month before PASS, and I thought you rocked. You've got a really good and distinctive delivery style. Then I noticed your talks were ranked in the bottom quarter of the Summit ratings and wanted to discuss it.

Rob: Yeah, I know. You did ask me if we could do this... I should explain – my presentation style is not the stereotypical IT conference one. I throw in jokes, and try to engage the audience thoroughly. I find many talks amazingly dry, and I guess I try to buck that trend. I also run training courses, and find that I get a lot of feedback from people thanking me for keeping things interesting. That said, I also get feedback criticising me for my style, and that's basically what's happened here. For the rest of this discussion, let's focus on my talk about the Incredible Shrinking Execution Plan, which I considered to be my main talk.

Brent: I thought that session title was the very best one at the entire Summit, and I had it on my recommended sessions list. In four words, you managed to sum up the topic and your sense of humor. I read that and immediately thought, "People need to be in this session," and then it didn't score well. Tell me about your scores.

Rob: The questions on the feedback form covered the usefulness of the information, the speaker's presentation skills, their knowledge of the subject, how well the session was described, the amount of time allocated, and the quality of the presentation materials.

Brent: Presentation materials? But you don't do slides. Did they rate your thong?

Rob: No-one saw my flip-flops in this talk, Brent. I created a script in Management Studio, and published that afterwards, but I think people will have scored that question based on the lack of slides. I wasn't expecting to do particularly well on that one. That was the only section that didn't have 5/5 as the most popular score.

Brent: See, that sucks, because cookbook-style scripts are often some of my favorites. Adam Machanic's Service Broker workbench series helped me immensely when I was prepping for the MCM. As an attendee, I'd rather have a commented script than a slide deck. So how did you rank so low?

Rob: When I look at the scores that you got (based on your blog post), you got very few scores below 3 – people that felt strongly enough about your talk to post a negative score. In my scores, between 5% and 10% were below 3 (except on the question about whether I knew my stuff – I guess I came across as knowledgeable).

Brent: Wow – so quite a few people really didn't like your talk then?

Rob: Yeah. Mind you, based on the comments, some people really loved it. I'd like to think that there would be a certain portion of the room who may have rated the talk as one of the best of the conference. Some of my comments included "amazing!", "Best presentation so far!", "Wow, best session yet", "fantastic" and "Outstanding!". I think lots of talks can be "Great", but not so many talks can be "Outstanding" without the word losing its meaning. One wrote "Pretty amazing presentation, considering it was completely extemporaneous."

Brent: Extemporaneous, eh?

Rob: Yeah. I guess they don't realise how much preparation goes into coming across as unprepared. In many ways it's much easier to give a written speech than to deliver a presentation without slides as a prompt.

Brent: That delivery style, the really relaxed, casual, college-professor approach, was one of the things I really liked about your presentation at SQLBits. As somebody who presents a lot, I "get" it - I know how hard it is to come off as relaxed and comfortable with your own material. It's like improv done by jazz players and comedians - if you've never tried it, you don't realize how hard it is. People also don't realize how hard it is to make a tough subject fun.

Rob: Yeah well... There will be people writing comments on this post that say I wasn't trying to make the subject fun, and that I was making it all about me. Sometimes the style works, sometimes it doesn't. Most of the comments mentioned the fact that I tell jokes, some in a nice way, but some not so much (and it wasn't just a PASS thing - that's the mix of feedback I generally get). One comment at PASS was: "great stand up comedian - not what I'm looking for at pass", and there were certainly a few that said "too many jokes". I'm not trying to do stand-up – jokes are my way of engaging with the audience while I demonstrate some of the amazing things that the Query Optimizer can do if you write your queries the right way. Some people didn't think it was technical enough, but I've also had some people tell me that the concepts I'm explaining are deep and profound.

Brent: To me, that's a hallmark of a great explanation - when someone says, "But of course it has to work that way - how could it work any other way? It seems so simple and logical." Well, sure it does when it's explained correctly, but now pick up any number of thick SQL Server books and try to understand the Redundant Joins concept. I guarantee it'll take more than 45 minutes.

Rob: Some people in my audiences realise that, but definitely not everyone. There's only so much you can tell someone that something is profound. Generally it's something that they either have an epiphany on or not. I like to lull my audience into knowing what's going on, and do something that surprises them. Gain their trust, build a rapport, and then show them the deeper truth of what just happened.

Brent: So you've learned your lesson about presentation scores, right? From here on out, you're going to be dry, humorless, and all your presentations will consist of you reading bullet points off the screen.

Rob: No Brent, I'm not. I'm also not going to suggest that most presentations at PASS are like that. No-one tries to present like that. There's a big space to occupy between "dry and humourless" and me. My difference is to focus on the relationship I have with the crowd, rather than focussing on delivering the perfect session. I want to see people smiling and know they're relaxed. I think most presenters focus on the material, which is completely reasonable and safe. I remember once hearing someone talking about product creation. They talked about mediocrity. They said that one of the worst things that people can ever say about your product is that it's "good". What you want is for 10% of the world to love it enough to want to buy it. If 10% of the world gave me a dollar, I'd have more money than I could ever use (assuming it wasn't the SAME dollar they were giving me, I guess).

Brent: It's the Raving Fans theory. It's better to have a small number of raving customers than a large number of almost-but-not-really customers who don't care that much about your product or service. I know exactly how you feel - when I got survey feedback from my Quest video presentation where I was dressed up in a Richard Simmons costume, some of the attendees said I was unprofessional and distracting. Some of the attendees couldn't get enough and Photoshopped all kinds of stuff into the screen captures. On the whole, I probably didn't score that well, and I'm fine with that. It sucks to look at the scores though - do those lower scores bother you?

Rob: Of course they do. It hurts deeply. I open myself up and give presentations in a very personal way. All presenters do that, and we all feel the pain of negative feedback. I hate coming 146th and 162nd out of 185, but have to acknowledge that many sessions did worse still. Plus, once I feel the wounds have healed, I'll be able to remember that there are people in the world that rave about my presentation style, and figure that people will hopefully talk about me. One day maybe those people that don't like my presentation style will stay away and I might be able to score better. You don't pay to hear country music if you prefer western... Lots of people find chili too spicy, but it's still a popular food.

Brent: But don't you want to appeal to everyone?

Rob: I do, but I don't want to be lukewarm as in Revelation 3:16. I'd rather disgust and be discussed. Well, maybe not 'disgust', but I don't want to conform. Conformity just isn't my thing; I'm not sure I've ever been one to do that. I try not to offend, but definitely like to be different.

Brent: Count me among your raving fans, sir. Where can we see you next?

Rob: Considering I live in Adelaide in Australia, I'm not about to appear at anyone's local SQL Saturday. I'm still trying to plan which events I'll get to in 2011. I've submitted abstracts for TechEd North America, but won't hold my breath. I'm also considering the SQLBits conferences in the UK in April, PASS in October, and I'm sure I'll do some LiveMeeting presentations for user groups. Online, people can download some of my recent SQLBits presentations at http://bit.ly/RFSarg and http://bit.ly/Simplification. And they can download a 5-minute MP3 of my Lightning Talk at http://www.lobsterpot.com.au/files/Collation.mp3, in which I try to explain the idea behind collation, using thongs as an example.

Brent: I was in the audience for http://bit.ly/RFSarg. That was a great presentation.

Rob: Thanks, Brent. Now where's my dollar?

    Read the article

  • MVC Automatic Menu

    - by Nuri Halperin
An ex-colleague of mine used to call his SQL script generator "Super-Scriptmatic 2000". It impressed our then-boss little, but was fun to say and use. We called every batch job and script "something 2000" from that day on. I'm tempted to call this one Menu-Matic 2000, except it's waaaay past 2000. Oh well.

The problem: I'm developing a bunch of stuff in MVC. There's no PM to generate mounds of requirements and there's no UX architect to create wireframes. During development, things change. Specifically, actions get renamed, moved from controller x to y, etc. As the site grows, it becomes a major pain to keep a static menu up to date, because the links change. The HtmlHelper doesn't live up to its name and provides little help. How do I keep this growing list of pesky little forgotten actions reined in?

The general plan is:
- Decorate every action you want as a menu item with a custom attribute
- Reflect out all menu items into a structure at load time
- Render the menu as CSS-friendly <ul><li> HTML

The MvcMenuItemAttribute decorates an action, designating it to be included as a menu item:

    [AttributeUsage(AttributeTargets.Method, AllowMultiple = true)]
    public class MvcMenuItemAttribute : Attribute
    {
        public string MenuText { get; set; }
        public int Order { get; set; }
        public string ParentLink { get; set; }
        internal string Controller { get; set; }
        internal string Action { get; set; }

        #region ctor
        public MvcMenuItemAttribute(string menuText) : this(menuText, 0) { }

        public MvcMenuItemAttribute(string menuText, int order)
        {
            MenuText = menuText;
            Order = order;
        }

        internal string Link
        {
            get { return string.Format("/{0}/{1}", Controller, this.Action); }
        }

        internal MvcMenuItemAttribute ParentItem { get; set; }
        #endregion
    }

The MenuText allows overriding the text displayed on the menu. The Order allows the items to be ordered. The ParentLink allows you to make this item a child of another menu item. An example action could then be decorated thusly: [MvcMenuItem("Tracks", Order = 20, ParentLink = "/Session/Index")]. All pretty straightforward, methinks (a fuller controller sketch appears below).

The challenge with menu hierarchy becomes fairly apparent when you try to render a menu and highlight the "current" item or render a breadcrumb control. Both encounter an ambiguity if you allow a data source to have more than one menu item with the same URL link. The issue is that there is no great way to tell which link a person clicked. Using the referring URL will fail if a user bookmarked the page. Using some extra query string to disambiguate duplicate URLs essentially changes the links, and also adds a chance of collision with other query parameters. Besides, that smells. The stock ASP.NET sitemap provider simply disallows duplicate URLs. I decided not to, and simply pick the first one encountered as the "current". Although it doesn't solve the issue completely – one might say they wanted the second of the two links to be "current" – it allows one to include a link twice (home->deals and products->deals etc.), and the logic of deciding "current" is easy enough to explain to the customer.
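To make the decoration concrete, here is a hypothetical controller sketch; the controller and action names are invented for illustration, and only the [MvcMenuItem] usage comes from the post:

    // Hypothetical controller showing how a two-level menu falls out of the attribute.
    public class SessionController : Controller
    {
        [MvcMenuItem("Sessions", Order = 10)]
        public ActionResult Index() { return View(); }

        // ParentLink matches the /Controller/Action link of the parent item,
        // so "Tracks" renders as a child of "Sessions" in the <ul> hierarchy.
        [MvcMenuItem("Tracks", Order = 20, ParentLink = "/Session/Index")]
        public ActionResult Tracks() { return View(); }
    }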
Now that we got that out of the way, let's build the menu data structure:

    public static List<MvcMenuItemAttribute> ListMenuItems(Assembly assembly)
    {
        var result = new List<MvcMenuItemAttribute>();
        foreach (var type in assembly.GetTypes())
        {
            if (!type.IsSubclassOf(typeof(Controller)))
            {
                continue;
            }
            foreach (var method in type.GetMethods())
            {
                var items = method.GetCustomAttributes(typeof(MvcMenuItemAttribute), false) as MvcMenuItemAttribute[];
                if (items == null)
                {
                    continue;
                }
                foreach (var item in items)
                {
                    if (String.IsNullOrEmpty(item.Controller))
                    {
                        item.Controller = type.Name.Substring(0, type.Name.Length - "Controller".Length);
                    }
                    if (String.IsNullOrEmpty(item.Action))
                    {
                        item.Action = method.Name;
                    }
                    result.Add(item);
                }
            }
        }
        return result.OrderBy(i => i.Order).ToList();
    }

Using reflection, the ListMenuItems method takes an assembly (you will hand it your MVC web assembly) and generates a list of menu items. It digs up all the types, and for each one that is an MVC Controller, digs up the methods. Methods decorated with the MvcMenuItemAttribute get plucked and added to the output list. Again, pretty simple. To make the structure hierarchical, a LINQ expression matches up all the items to their parent:

    public static void RegisterMenuItems(List<MvcMenuItemAttribute> items)
    {
        _MenuItems = items;
        _MenuItems.ForEach(i => i.ParentItem =
            items.FirstOrDefault(p => String.Equals(p.Link, i.ParentLink, StringComparison.InvariantCultureIgnoreCase)));
    }

The _MenuItems is simply an internal list to keep things around for later rendering. Finally, to package the menu building for easy consumption:

    public static void RegisterMenuItems(Type mvcApplicationType)
    {
        RegisterMenuItems(ListMenuItems(Assembly.GetAssembly(mvcApplicationType)));
    }

To bring this puppy home, a call in Global.asax.cs Application_Start() registers the menu. Notice the ugliness of reflection is tucked away from the innocent developer. All they have to do is call RegisterMenuItems() and pass in the type of the application. When you use the new project template, global.asax declares a class public class MvcApplication : HttpApplication, and that is why the Register call passes in that type.

    protected void Application_Start()
    {
        AreaRegistration.RegisterAllAreas();
        RegisterRoutes(RouteTable.Routes);

        MvcMenu.RegisterMenuItems(typeof(MvcApplication));
    }

What else is left to do? Oh, right, render!
    public static void ShowMenu(this TextWriter output)
    {
        var writer = new HtmlTextWriter(output);
        renderHierarchy(writer, _MenuItems, null);
    }

    public static void ShowBreadCrumb(this TextWriter output, Uri currentUri)
    {
        var writer = new HtmlTextWriter(output);
        string currentLink = "/" + currentUri.GetComponents(UriComponents.Path, UriFormat.Unescaped);

        var menuItem = _MenuItems.FirstOrDefault(m => m.Link.Equals(currentLink, StringComparison.CurrentCultureIgnoreCase));
        if (menuItem != null)
        {
            renderBreadCrumb(writer, _MenuItems, menuItem);
        }
    }

    private static void renderBreadCrumb(HtmlTextWriter writer, List<MvcMenuItemAttribute> menuItems, MvcMenuItemAttribute current)
    {
        if (current == null)
        {
            return;
        }
        var parent = current.ParentItem;
        renderBreadCrumb(writer, menuItems, parent);
        writer.Write(current.MenuText);
        writer.Write(" / ");
    }

    static void renderHierarchy(HtmlTextWriter writer, List<MvcMenuItemAttribute> hierarchy, MvcMenuItemAttribute root)
    {
        if (!hierarchy.Any(i => i.ParentItem == root)) return;

        writer.RenderBeginTag(HtmlTextWriterTag.Ul);
        foreach (var current in hierarchy.Where(element => element.ParentItem == root).OrderBy(i => i.Order))
        {
            if (ItemFilter == null || ItemFilter(current))
            {
                writer.RenderBeginTag(HtmlTextWriterTag.Li);
                writer.AddAttribute(HtmlTextWriterAttribute.Href, current.Link);
                writer.AddAttribute(HtmlTextWriterAttribute.Alt, current.MenuText);
                writer.RenderBeginTag(HtmlTextWriterTag.A);
                writer.WriteEncodedText(current.MenuText);
                writer.RenderEndTag(); // link
                renderHierarchy(writer, hierarchy, current);
                writer.RenderEndTag(); // li
            }
        }
        writer.RenderEndTag(); // ul
    }

The ShowMenu method renders the menu out to the provided TextWriter. In previous posts I've discussed my partiality to using the well-debugged, time-tested HtmlTextWriter to render HTML rather than writing out angled brackets by hand. In addition, writing out using the actual writer on the actual stream rather than generating string and byte intermediaries (yes, StringBuilder being no exception) disturbs me.

To render the hierarchical menu, the recursive renderHierarchy() is used. You may notice that an ItemFilter is called before rendering each item. I figured that at some point one might want to exclude certain items from the menu based on security role or context or something. That delegate is the hook for such a future feature (a sketch follows below).

To render a breadcrumb, recursion is used again, this time simply to unwind the parent hierarchy from the leaf node, rendering on the return from the recursion rather than on the way down. I guess I was stuck in LISP that day... recursion is fun though.

Now all that is left is some usage! Open your Site.Master or wherever you'd like to place a menu or breadcrumb, and plant one of these calls:

    <% MvcMenu.ShowBreadCrumb(this.Writer, Request.Url); %>

to show a breadcrumb trail (notice the lack of "=" after <%, and the semicolon).

    <% MvcMenu.ShowMenu(Writer); %>

to show the menu.

As mentioned before, the HTML output is nested <ul><li> tags, which should make it easy to style using CSS to produce anything from a static horizontal or vertical menu to dynamic drop-downs.

This has been quite a fun little implementation and I was pleased that the code size remained low. The main crux was figuring out how to pass parent information from the attribute to the hierarchy builder, because attributes have restricted parameter types. Once I settled on that implementation, the rest falls into place quite easily.
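As a sketch of how that ItemFilter hook might be wired up (assuming ItemFilter is exposed as a public static Func<MvcMenuItemAttribute, bool> on the MvcMenu class, which the post implies but doesn't show):

    // Hypothetical wiring, e.g. in Application_Start: hide menu items that live
    // under /Admin from users who aren't in the Admin role. Evaluated per item
    // at render time; requires a reference to System.Web for HttpContext.
    MvcMenu.ItemFilter = item =>
        !item.Link.StartsWith("/Admin", StringComparison.OrdinalIgnoreCase) ||
        HttpContext.Current.User.IsInRole("Admin");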

    Read the article

  • Making Sense of ASP.NET Paths

    - by Renso
ASP.NET includes quite a plethora of properties to retrieve path information about the current request, control and application. There's a ton of information available about paths on the Request object, some of it appearing to overlap and some of it buried several levels down, and it can be confusing to find just the right path that you are looking for. To keep things straight I thought it a good idea to summarize the path options along with descriptions and example paths. I wrote a post about this a long time ago in 2004 and I find myself frequently going back to that page to quickly figure out which path I'm looking for in processing the current URL. Apparently a lot of people must be doing the same, because the original post is still the second most visited on this blog, to the tune of nearly 500 hits per day. So, I decided to update and expand a bit on the original post with a little more information and clarification based on the original comments.

Request Object Paths Available

Here's a list of the path-related properties on the Request object (and the Page object). Assume a URL like http://www.west-wind.com/webstore/admin/paths.aspx for the examples below, where webstore is the name of the virtual.

- ApplicationPath: the web root-relative logical path to the virtual root of this app. Example: /webstore/
- PhysicalApplicationPath: the local file system path of the virtual root for this app. Example: c:\inetpub\wwwroot\webstore
- PhysicalPath: the local file system path to the current script or path. Example: c:\inetpub\wwwroot\webstore\admin\paths.aspx
- Path, FilePath, CurrentExecutionFilePath: all of these return the full root-relative logical path to the script page, including path and script name. CurrentExecutionFilePath returns the "current" request path after a Transfer/Execute call, while FilePath always returns the original request's path. Example: /webstore/admin/paths.aspx
- AppRelativeCurrentExecutionFilePath: an ASP.NET root-relative virtual path to the script for the current request. If in a Transfer/Execute call, the transferred path is returned. Example: ~/admin/paths.aspx
- PathInfo: any extra path following the script name (the ExtraPathInfo portion in the example); string.Empty if no extra path is available. Example: /webstore/admin/paths.aspx/ExtraPathInfo
- RawUrl: the full root-relative URL including query string and extra path, as a string. Example: /webstore/admin/paths.aspx?sku=wwhelp40
- Url: a fully qualified URL including query string and extra path. Note this is a Uri instance rather than a string. Example: http://www.west-wind.com/webstore/admin/paths.aspx?sku=wwhelp40
- UrlReferrer: the fully qualified URL of the page that sent the request. This is also a Uri instance, and the value is null if the page was accessed directly, by typing into the address bar, or by a client that doesn't send an HTTP Referer header. Example: http://www.west-wind.com/webstore/default.aspx?Info
- Control.TemplateSourceDirectory: the logical path to the folder of the page, master or user control on which it is called. This is useful if you need to know the path to a Page or control from within the control. For non-file controls this returns the Page path. Example: /webstore/admin/

As you can see, there's a ton of information available there for each of the three common path formats:

- Physical Path: an OS-style path that points to a folder or file on disk.
- Logical Path: a Web path that is relative to the Web server's root. It includes the virtual plus the application-relative path.
- ~/ (Root-relative) Path: an ASP.NET-specific path that uses ~/ to indicate the virtual root Web path.
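As a quick way to see these values for a concrete request, a throwaway diagnostic snippet along these lines can help (a sketch, not from the original post; drop it into a Web Forms page's code-behind):

    // Minimal sketch: dump the common Request path properties for the current request.
    protected void Page_Load(object sender, EventArgs e)
    {
        Response.ContentType = "text/plain";
        Response.Write("ApplicationPath: " + Request.ApplicationPath + "\n");
        Response.Write("PhysicalApplicationPath: " + Request.PhysicalApplicationPath + "\n");
        Response.Write("PhysicalPath: " + Request.PhysicalPath + "\n");
        Response.Write("FilePath: " + Request.FilePath + "\n");
        Response.Write("CurrentExecutionFilePath: " + Request.CurrentExecutionFilePath + "\n");
        Response.Write("AppRelativeCurrentExecutionFilePath: " + Request.AppRelativeCurrentExecutionFilePath + "\n");
        Response.Write("PathInfo: " + Request.PathInfo + "\n");
        Response.Write("RawUrl: " + Request.RawUrl + "\n");
        Response.Write("Url: " + Request.Url + "\n");
        Response.Write("TemplateSourceDirectory: " + this.TemplateSourceDirectory + "\n");
    }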
ASP.NET can convert virtual paths into either logical paths using Control.ResolveUrl(), or physical paths using Server.MapPath(). Root-relative paths are useful for specifying portable URLs that don't rely on relative directory structures, and are very useful from within control or component code. You should be able to get any necessary format from ASP.NET from just about any path or script using these mechanisms.

~/ Root Relative Paths and ResolveUrl() and ResolveClientUrl()

ASP.NET supports root-relative virtual path syntax in most of its URL properties in Web Forms. So you can easily specify a root-relative path in a control rather than a location-relative path:

    <asp:Image runat="server" ID="imgHelp" ImageUrl="~/images/help.gif" />

ASP.NET internally resolves this URL by using ResolveUrl("~/images/help.gif") to arrive at the root-relative URL of /webstore/images/help.gif, which uses Request.ApplicationPath as the base path to replace the ~. By convention, any custom Web controls also should use ResolveUrl() on URL properties to provide the same functionality. In your own code you can use Page.ResolveUrl() or Control.ResolveUrl() to accomplish the same thing:

    string imgPath = this.ResolveUrl("~/images/help.gif");
    imgHelp.ImageUrl = imgPath;

Unfortunately ResolveUrl() is limited to Web Forms pages, so if you're in an HttpHandler or Module it's not available. ASP.NET MVC also has its own more generic version of ResolveUrl in Url.Content:

    <script src="<%= Url.Content("~/scripts/new.js") %>" type="text/javascript"></script>

which is part of the UrlHelper class. In ASP.NET MVC the above sort of syntax is actually even more crucial than in Web Forms, due to the fact that views are not referencing specific pages but rather are often path-based, which can lead to variations in how a particular view is referenced. Similar to ResolveUrl() is ResolveClientUrl(), which creates a fully qualified HTTP path that includes the protocol and domain name. It's rare that this full resolution is needed but it can be useful in some scenarios.

In Module or Handler code, Control.ResolveUrl() unfortunately is not available, which in retrospect seems like an odd design choice – URL resolution really should happen on a Request basis, not as part of the Page framework. Luckily you can also rely on the static VirtualPathUtility class:

    string path = VirtualPathUtility.ToAbsolute("~/admin/paths.aspx");

VirtualPathUtility also has many other quite useful methods for dealing with paths and converting between the various kinds of paths supported. One thing to watch out for is that ToAbsolute() will throw an exception if a query string is provided, and it doesn't work on fully qualified URLs. I wrote about this topic with a custom solution that works with fully qualified URLs and query strings here (check the comments for some interesting discussions too).
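Since ToAbsolute() balks at query strings, one workaround is to split the query off, resolve, and reattach. This is a minimal sketch in the spirit of the custom solution mentioned above, not the original code (requires System.Web):

    // Sketch: resolve a ~/ path that may carry a query string.
    // VirtualPathUtility.ToAbsolute() throws if a query string is present,
    // so strip it first and append it back afterwards.
    public static string ResolveServerUrl(string virtualPath)
    {
        if (string.IsNullOrEmpty(virtualPath) || !virtualPath.StartsWith("~"))
            return virtualPath;

        string query = string.Empty;
        int idx = virtualPath.IndexOf('?');
        if (idx >= 0)
        {
            query = virtualPath.Substring(idx);          // keep the '?'
            virtualPath = virtualPath.Substring(0, idx);
        }
        return VirtualPathUtility.ToAbsolute(virtualPath) + query;
    }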
Mapping Virtual Paths to Physical Paths with Server.MapPath()

If you need to map root-relative or current-folder-relative URLs to physical paths, you can use HttpContext.Current.Server.MapPath(). Inside of a Page you can do the following:

    string physicalPath = Server.MapPath("~/scripts/ww.jquery.js");

MapPath is pretty flexible and it understands both ASP.NET-style virtual paths as well as plain relative paths, so the following also works:

    string physicalPath = Server.MapPath("scripts/silverlight.js");

as well as dot-relative syntax:

    string physicalPath = Server.MapPath("../scripts/jquery.js");

Once you have the physical path you can perform standard System.IO Path and File operations on the file. Remember, with physical paths and IO or copy operations you need to make sure you have permissions to access files and folders based on the Web server user account that is active (NETWORK SERVICE or ASPNET, typically). Note that Server.MapPath will not map up beyond the virtual root of the application, for security reasons.

Server and Host Information

Between these settings you can get all the information you may need to figure out where you are at and to build a new URL if necessary. If you need to build a URL completely from scratch, you can get access to information about the server you are accessing:

- SERVER_NAME: the domain name or IP address. Example: www.west-wind.com or 127.0.0.1
- SERVER_PORT: the port that the request runs under. Example: 80
- SERVER_PORT_SECURE: determines whether https: was used. Example: 0 or 1
- APPL_MD_PATH: the ADSI DirectoryServices path to the virtual root directory. Note that LM typically doesn't work for ADSI access, so you should replace that with LOCALHOST or the machine's NetBIOS name. Example: /LM/W3SVC/1/ROOT/webstore

Request.Url and Uri Parsing

If you still need more control over the current request URL, or you need to create new URLs from an existing one, the current Request.Url Uri property offers a lot of control. Using the Uri class and UriBuilder makes it easy to retrieve parts of a URL and create new URLs based on an existing URL. The UriBuilder class is the preferred way to create URLs – much preferable over creating URIs via string concatenation.

- Scheme: the URL scheme or protocol prefix. Example: http or https
- Port: the port, if specifically specified.
- DnsSafeHost: the domain name or local host NetBIOS machine name. Example: www.west-wind.com or rasnote
- LocalPath: the full path of the URL including script name and extra PathInfo. Example: /webstore/admin/paths.aspx
- Query: the query string, if any. Example: ?id=1

The Uri class itself is great for retrieving Uri parts, but most of the properties are read-only. If you need to modify a URL, you can use the UriBuilder class to load up an existing URL and modify it to create a new one. Here are a few common operations I've needed to do to get specific URLs:

Convert the Request URL to an SSL/HTTPS link. For example, to take the current request URL and convert it to a secure URL:

    UriBuilder build = new UriBuilder(Request.Url);
    build.Scheme = "https";
    build.Port = -1;  // don't inject a port into the URL
    Uri newUri = build.Uri;
    string newUrl = build.ToString();

Retrieve the fully qualified URL without a query string. AFAIK, there's no native routine to retrieve the current request URL without the query string. It's easy to do with UriBuilder however:

    UriBuilder builder = new UriBuilder(Request.Url);
    builder.Query = "";
    string logicalPathWithoutQuery = builder.ToString();
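Putting the server variables and UriBuilder together, a fully qualified URL for the current request could be assembled from scratch along these lines (a sketch for illustration; in most cases Request.Url already gives you this):

    // Sketch: rebuild the current request's URL from server variables.
    string host = Request.ServerVariables["SERVER_NAME"];
    int port = int.Parse(Request.ServerVariables["SERVER_PORT"]);
    bool secure = Request.ServerVariables["SERVER_PORT_SECURE"] == "1";

    UriBuilder builder = new UriBuilder(secure ? "https" : "http", host, port, Request.FilePath);
    builder.Query = Request.QueryString.ToString();
    Uri fullUrl = builder.Uri;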

    Read the article

  • Book Review - Programming Windows Azure by Sriram Krishnan

    - by BuckWoody
As part of my professional development, I've created a list of books to read throughout the year, starting in June of 2011. This is a review of the first one, called Programming Windows Azure by Sriram Krishnan. You can find my entire list of books I'm reading for my career here: http://blogs.msdn.com/b/buckwoody/archive/2011/06/07/head-in-the-clouds-eyes-on-the-books.aspx

Why I Chose This Book: As part of my learning style, I try to read multiple books about a single subject. I've found that at least 3 books are necessary to get the right amount of information to me. This is a "technical" work, meaning that it deals with technology and not business, writing or other facets of my career. I'll have a mix of all of those as I read along. I chose this work in addition to others I've read since it covers everything from an introduction to more advanced topics in a single book. It also has some practical examples of actually working with the product, particularly on storage. Although it's dated, many examples still translate. I also saw that it had pretty good reviews.

What I learned: I learned a great deal about storage, and picked up many useful code snippets. I do think that there could have been more of a focus on the Application Fabric - but of course that wasn't as mature a feature when this book was written. I learned some great architecture examples, and in one section I learned more about encryption. In that example, however, I would rather have seen the examples go the other way - the book focused on moving data from on-premise to Azure storage in an encrypted fashion. Using the Application Fabric, I would rather see sensitive data left on-premise in a hybrid fashion, with the Azure application connecting to it. Even so, the examples were very useful. If you're looking for a good "starter" Azure book, this is a good choice. I also recommend the last chapter as a quick read for a DBA, or Database Administrator. It's not very long, but useful. Note that the limits described are incorrect - which is one of the dangers of reading a book about any cloud offering. The services offered are updated so quickly that the information is in constant danger of being "stale". Even so, I found this a useful book, which I believe will help me work with Azure better.

Raw Notes: I take notes as I read, calling that process "reading with a pencil". I find that when I do that I pay attention better, and record some things that I need to know later. I'll take these notes, categorize them into a OneNote notebook that I synchronize in my Live.com account, and that way I can search them from anywhere. I can even read them on the web, since Live.com has a OneNote program built in. Note that these are the raw notes, so they might not make a lot of sense out of context - I include them here so you can watch my thought process.

Programming Windows Azure by Sriram Krishnan:

- Learning about how to select applications suitable for Distributed Technology.
- Application Fabric gets the least attention; probably because it was newer at the time.
- Very clear (Chapter One). Good foundation. Background and history, but not too much.
- I normally arrange my descriptions differently, starting with the use-cases and moving to physicality, but this difference helps me.
- Interesting that I am reading this using Safari Books Online, which uses many of these concepts.
- Taught me some new aspects of a Hypervisor - very low-level information about the Azure Fabric (not to be confused with the Application Fabric feature) (Chapter Two).
- Good detail of what is included in the SDK. Even more is available now. CS = Cloud Service (Chapter 3).
- Place storage info in the configuration file, since it can be streamed in-line with a running app. Ditto for logging, and keep separate configs for staging and testing. Easy switch-in and switch-out. (Chapter 4)
- There are two Runtime APIs, one external and one internal. Realizing how powerful this paradigm really is.
- Some places seem light, and to drop off, but perhaps that's best.
- The Managing API is not charged, which is nice. I don't often think about the price until it comes to an actual deployment. (Chapter 5)
- Csmanage is something I want to dig into deeper. The API requires package moves to Blob storage first, so it needs a URL. A csmanage equivalent can be written in Unix scripting using openssl.
- Upgrades are possible, and you use the upgradeDomainCount attribute in the Service-Definition.csdef file.
- Always use a low-privileged account to test on the dev fabric, since Windows Azure runs in partial trust. Full trust is available, but can be dangerous and must be well thought out. (Chapter 6)
- Learned how to run full CMD commands in a web window - not that you would ever do that, but it was an interesting view into those links. This leads to a discussion on hosting other runtimes (such as Java or PHP) in Windows Azure. I got an expanded view on this process, although this is where the book shows its age a little. Books can be a problem for Cloud Computing for this reason - things just change too quickly.
- Windows Azure storage is not eventually consistent - it is instantly consistent with multi-phase commit. The plumbing for this is internal; you're not required to code for it. (Chapter 7)
- The REST API makes the service interoperable, hybrid, and consistent across code architectures. Nicely done.
- Use affinity groups to keep data and code together.
- Side note: e-book readers need a common "notes" feature.
- There's a decent quick description of REST in this chapter.
- Learned about the CloudDrive code - a PowerShell sample that mounts Blob storage as a local provider. Works against the Dev fabric by default, can be switched to an Account.
- Good treatment in the storage chapters on the differences between using Dev storage and Azure storage. These can be mitigated.
- "No, blobs are not of any size or number" - not a good statement. (Chapter 8)
- Blob storage is probably Azure's closest play to Infrastructure as a Service (IaaS).
- Blob change operations must be authenticated, even when public.
- The chapters on storage are pretty in-depth.
- Queue messages are base-64 encoded. (Chapter 9)
- The visibility timeout ensures processing of a message in a disconnected system.
- Order is not guaranteed for a message, so if you need that, set an increasing number in the queue mechanism.
- While Queues are accessible via REST, they are not public and are secured by default.
- Interesting - the header for a queue request includes an estimated count. This can be useful to create more worker roles in a dynamic system.
- Each Entity (row) in the Azure Table service is atomic - all or nothing. (Chapter 10)
- An entity can have up to 255 properties.
- Use "ID" for the class to indicate the key value, or use the [DataServiceKey] attribute.
- LINQ makes working with the Azure Table Service much easier, although interop is certainly possible.
- Good description of the process of selecting the Partition and Row Key.
- When checking for continuation tokens for pagination, include logic that falls out of the check in case you are at the last page.
- On deleting a storage object, it is instantly unavailable; however, a background process is dispatched to perform the physical deletion. So if you want to re-create a storage object with the same name, add retry logic into the code.
- Interesting approach to deleting an index entity without having to read it first - create a local entity with the same keys and apply it to the Azure system regardless of change-state.
- Although the "Indexes" description is a little vague, it's interesting to see a Folding and Stemming discussion a la the Porter Stemming Algorithm. (Chapter 11) The chapter presents a better discussion of indexes (at least inverted indexes) later on.
- Great treatment for DBAs in Chapter 11. We need to work on getting secondary indexes in Table storage.
- There is a limited form of transactions called "Entity Group Transactions" that, although they have conditions, makes a transactional system more possible.
- Concurrency also becomes an issue, but it is handled well if you're using Data Services in .NET. It watches the ETag and allows you to take action appropriately.
- I do not recommend using Azure as a location for secure backups. In fact, I would rather have seen the examples in Chapter 12 go the other way, showing how data could be brought back to a local store as a DR or HA strategy. Good information on cryptography and so on even so. The chapter seems out of place, and should be combined with the Blob chapter.
- Chapter 13 on SQL Azure is dated, although the base concepts are OK.
- Nice example of simple ADO.NET access to a SQL Azure (or any SQL Server, really) database - a minimal sketch of that kind of access follows below.
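For reference, here is a minimal ADO.NET sketch of the kind of access the book demonstrates (the server, database, credentials and table are placeholders, not the book's code):

    using System;
    using System.Data.SqlClient;

    class SqlAzureDemo
    {
        static void Main()
        {
            // Plain ADO.NET against SQL Azure (or any SQL Server).
            // Connection string values below are placeholders.
            string connStr = "Server=tcp:myserver.database.windows.net;" +
                             "Database=mydb;User ID=user@myserver;Password=...;Encrypt=True;";

            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Employees", conn))
            {
                conn.Open();
                int count = (int)cmd.ExecuteScalar();
                Console.WriteLine(count);
            }
        }
    }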

    Read the article

  • SQL SERVER – Import CSV into Database – Transferring File Content into a Database Table using CSVexpress

    - by pinaldave
One of the most common data integration tasks I run into is a desire to move data from a file into a database table. Generally the user is familiar with his data, the structure of the file, and the database table, but is unfamiliar with data integration tools and therefore views this task as something that is difficult. What these users really need is a point-and-click approach that minimizes the learning curve for the data integration tool. This is what CSVexpress (www.CSVexpress.com) is all about! It is based on expressor Studio, a data integration tool I've been reviewing over the last several months.

With CSVexpress, moving data between data sources can be as simple as providing the database connection details, describing the structure of the incoming and outgoing data and then connecting two pre-programmed operators. There's no need to learn the intricacies of the data integration tool or to write code. Let's look at an example.

Suppose I have a comma separated value data file with data similar to the following, which is a listing of terminated employees that includes their hiring and termination date, department, job description, and final salary.

    EMP_ID,STRT_DATE,END_DATE,JOB_ID,DEPT_ID,SALARY
    102,13-JAN-93,24-JUL-98 17:00,Programmer,60,"$85,000"
    101,21-SEP-89,27-OCT-93 17:00,Account Representative,110,"$65,000"
    103,28-OCT-93,15-MAR-97 17:00,Account Manager,110,"$75,000"
    304,17-FEB-96,19-DEC-99 17:00,Marketing,20,"$45,000"
    333,24-MAR-98,31-DEC-99 17:00,Data Entry Clerk,50,"$35,000"
    100,17-SEP-87,17-JUN-93 17:00,Administrative Assistant,90,"$40,000"
    334,24-MAR-98,31-DEC-98 17:00,Sales Representative,80,"$40,000"
    400,01-JAN-99,31-DEC-99 17:00,Sales Manager,80,"$55,000"

Notice the concise format used for the date values, the fact that the termination date includes both date and time information, and that the salary is clearly identified as money by the dollar sign and digit grouping. In moving this data to a database table I want to express the dates using a format that includes the century, since it's obvious that this listing could include employees who left the company in both the 20th and 21st centuries, and I want the salary to be stored as a decimal value without the currency symbol and grouping character (a plain-code illustration of these conversions follows below). Most data integration tools would require coding within a transformation operation to effect these changes, but not expressor Studio. Directives for these modifications are included in the description of the incoming data.

Besides starting the expressor Studio tool and opening a project, the first step is to create connection artifacts, which describe to expressor where data is stored. For this example, two connection artifacts are required: a file connection, which encapsulates the file system location of my file; and a database connection, which encapsulates the database connection information. With expressor Studio, I use wizards to create these artifacts.

First click New Connection > File Connection in the Home tab of expressor Studio's ribbon bar, which starts the File Connection wizard. In the first window, I enter the path to the directory that contains the input file. Note that the file connection artifact only specifies the file system location, not the name of the file. Then I click Next and enter a meaningful name for this connection artifact; clicking Finish closes the wizard and saves the artifact.
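As an aside, for readers who want to see what those date and currency conversions look like in plain code, here is a minimal C# sketch (not part of expressor Studio or CSVexpress, purely an illustration of the century inference and currency parsing described above):

    using System;
    using System.Globalization;

    class ConversionDemo
    {
        static void Main()
        {
            // Two-digit years: TwoDigitYearMax = 2001 maps years after 01 to the
            // 1900s and years 00-01 to the 2000s, mirroring the Y01 directive.
            var culture = (CultureInfo)CultureInfo.InvariantCulture.Clone();
            culture.DateTimeFormat.Calendar.TwoDigitYearMax = 2001;

            DateTime start = DateTime.ParseExact("13-JAN-93", "dd-MMM-yy", culture);
            DateTime end = DateTime.ParseExact("24-JUL-98 17:00", "dd-MMM-yy HH:mm", culture);

            // NumberStyles.Currency strips the dollar sign and digit grouping.
            decimal salary = decimal.Parse("$85,000",
                NumberStyles.Currency, CultureInfo.GetCultureInfo("en-US"));

            Console.WriteLine("{0:yyyy-MM-dd} {1:yyyy-MM-dd HH:mm} {2}", start, end, salary);
        }
    }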
To create the Database Connection artifact, I must know the location of, or instance name of, the target database and have the credentials of an account with sufficient privileges to write to the target table. To use expressor Studio's features to the fullest, this account should also have the authority to create a table.

I click New Connection > Database Connection in the Home tab of expressor Studio's ribbon bar, which starts the Database Connection wizard. expressor Studio includes high-performance drivers for many relational database management systems, so I can simply make a selection from the "Supplied database drivers" drop-down control. If my desired RDBMS isn't listed, I can optionally use an existing ODBC DSN by selecting the "Existing DSN" radio button. In the following window, I enter the connection details. With Microsoft SQL Server, I may choose to use Windows Authentication rather than account credentials. After clicking Next, I enter a meaningful name for this connection artifact, and clicking Finish closes the wizard and saves the artifact.

Now I create a schema artifact, which describes the structure of the file data. When expressor reads a file, all data fields are typed as strings. In some use cases this may be exactly what is needed and there is no need to edit the schema artifact. But in this example, editing the schema artifact will be used to specify how the data should be transformed; that is, reformat the dates to include century designations, change the employee and department IDs to integers, and convert the salary to a decimal value.

Again a wizard is used to create the schema artifact. I click New Schema > Delimited Schema in the Home tab of expressor Studio's ribbon bar, which starts the Delimited Schema wizard. In the first window, I click Get Data from File, which then displays a listing of the file connections in the project. When I click on the file connection I previously created, a browse window opens to this file system location; I then select the file and click Open, which imports 10 lines from the file into the wizard. I now view the file's content and confirm that the appropriate delimiter characters are selected in the "Field Delimiter" and "Record Delimiter" drop-down controls; then I click Next.

Since the input file includes a header row, I can easily indicate that fields in the file should be identified through the corresponding header value by clicking "Set All Names from Selected Row." Alternatively, I could enter a different identifier into the Field Details > Name text box. I click Next and enter a meaningful name for this schema artifact; clicking Finish closes the wizard and saves the artifact.

Now I open the schema artifact in the schema editor. When I first view the schema's content, I note that the types of all attributes in the Semantic Type (the right-hand panel) are strings and that the attribute names are the same as the field names in the data file. To change an attribute's name and type, I highlight the attribute and click Edit in the Attributes grouping on the Schema > Edit tab of the editor's ribbon bar. This opens the Edit Attribute window; I can change the attribute name and select the desired type from the "Data type" drop-down control. In this example, I change the name of each attribute to the name of the corresponding database table column (EmployeeID, StartingDate, TerminationDate, JobDescription, DepartmentID, and FinalSalary).

Then for the EmployeeID and DepartmentID attributes, I select Integer as the data type; for the StartingDate and TerminationDate attributes, I select Datetime as the data type; and for the FinalSalary attribute, I select the Decimal type.

But I can do much more in the schema editor. For the datetime attributes, I can set a constraint that ensures that the data adheres to some predetermined specifications: a starting date must be later than January 1, 1980 (the date on which the company began operations) and a termination date must be earlier than 11:59 PM on December 31, 1999. I simply select the appropriate constraint and enter the value (1980-01-01 00:00 as the starting date and 1999-12-31 11:59 as the termination date).

As a last step in setting up these datetime conversions, I edit the mapping, describing the format of each datetime type in the source file. I highlight the mapping line for the StartingDate attribute and click Edit Mapping in the Mappings grouping on the Schema > Edit tab of the editor's ribbon bar. This opens the Edit Mapping window, in which I either enter, or select, a format that describes how the datetime values are represented in the file. Note the use of Y01 as the syntax for the year. This syntax is the indicator to expressor Studio to derive the century by setting any year later than 01 to the 20th century and any year before 01 to the 21st century. As each datetime value is read from the file, the year values are transformed into century and year values. For the TerminationDate attribute, my format also indicates that the datetime value includes hours and minutes.

And now to the Salary attribute. I open its mapping and in the Edit Mapping window select the Currency tab and the "Use currency" check box. This indicates that the file data will include the dollar sign (or, in Europe, the Pound or Euro sign), which should be removed. And on the Grouping tab, I select the "Use grouping" checkbox and enter 3 into the "Group size" text box, a comma into the "Grouping character" text box, and a decimal point into the "Decimal separator" text box. These entries allow the string to be properly converted into a decimal value.

By making these entries into the schema that describes my input file, I've specified how I want the data transformed prior to writing to the database table and completely removed the requirement for coding within the data integration application itself.

Assembling the data integration application is simple. Onto the canvas I drag the Read File and Write Table operators, connecting the output of the Read File operator to the input of the Write Table operator. Next, I select the Read File operator and its Properties panel opens on the right-hand side of expressor Studio. For each property, I can select an appropriate entry from the corresponding drop-down control. Clicking on the button to the right of the "File name" text box opens the file system location specified in the file connection artifact, allowing me to select the appropriate input file. I also indicate that the first row in the file, the header row, should be skipped, and that any record that fails one of the datetime constraints should be skipped.

I then select the Write Table operator and in its Properties panel specify the database connection, normal for the "Mode," and the "Truncate" and "Create Missing Table" options. If my target table does not yet exist, expressor will create the table using the information encapsulated in the schema artifact assigned to the operator.

The last task needed to complete the application is to create the schema artifact used by the Write Table operator. This is extremely easy, as another wizard is capable of using the schema artifact assigned to the Read File operator to create a schema artifact for the Write Table operator. In the Write Table Properties panel, I click the drop-down control to the right of the "Schema" property and select "New Table Schema from Upstream Output…" from the drop-down menu.

The wizard first displays the table description, and in its second screen asks me to select the database connection artifact that specifies the RDBMS in which the target table will exist. The wizard then connects to the RDBMS and retrieves a list of database schemas from which I make a selection. The fourth screen gives me the opportunity to fine-tune the table's description. In this example, I set the width of the JobDescription column to a maximum of 40 characters and select money as the type of the FinalSalary column. I also provide the name for the table.

This completes development of the application. The entire application was created through the use of wizards, and the required data transformations were specified through simple constraints and specifications rather than through coding. To develop this application, I only needed a basic understanding of expressor Studio, a level of expertise that can be gained by working through a few introductory tutorials. expressor Studio is as close to a point-and-click data integration tool as one could want, and I urge you to try this product if you have a need to move data between files or from files to database tables.

Check out CSVexpress in more detail. It offers a few basic video tutorials and a preview of expressor Studio 3.5, which will support the reading and writing of data into Salesforce.com.

Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article
