Search Results

Search found 393 results on 16 pages for 'nightmare intmain'.

Page 13 of 16

  • Design pattern to integrate Rails with a Comet server

    - by empire29
    I have a Ruby on Rails (2.3.5) application and an APE (Ajax Push Engine) server. When records are created within the Rails application, I need to push the new record out on the applicable channels to the APE server. Records can be created in the Rails app by the traditional path through the controller's create action, or by several EventMachine processes that constantly monitor various input streams and create records when they see data that meets certain criteria. It seems to me that the best/right place to put the code that pushes the data out to the APE server (which in turn pushes it out to the clients) is in the model's after_create hook, since not all record creations will flow through the controller's create action. The final caveat is that I want to push a piece of formatted HTML out to the APE server (rather than a JSON representation of the data). The reasons I want to do this are: 1) I already have logic to produce the desired layout in existing partials, and 2) I don't want to create a JavaScript implementation of the partials (JavaScript that takes a JSON object and creates all the HTML around it for presentation), which would quickly become a maintenance nightmare. The problem is that this would require "rendering" partials from within the model, which I'm having trouble doing anyway because they don't seem to have access to helpers when rendered in this manner. Anyhow - just wondering what the right way to organize all of this is. Thanks

    Read the article

  • PHP file outside doc root needs files outside and inside the document root

    - by jax
    I have a library of classes, all interrelated. Some files are inside the document root and some are outside it, using the <Directory> and Alias features in httpd.conf. Assume I have 3 files: webroot.php (inside the document root), alias_directory.php (inside a folder outside the doc root) and alias_directory2.php (inside a **different** folder outside the doc root). If alias_directory2.php needs both webroot.php and alias_directory.php, this does not work (remember, alias_directory.php and alias_directory2.php are not in the same location):

        require_once $_SERVER['DOCUMENT_ROOT'].'/webroot.php';         //(ok)
        require_once $_SERVER['DOCUMENT_ROOT'].'/alias_directory.php'; //(not ok)

    It fails because alias_directory.php is not in the doc root. Similarly:

        require_once $_SERVER['DOCUMENT_ROOT'].'/webroot.php';  //(ok)
        require_once dirname(__FILE__).'/alias_directory.php';  //(not ok)

    The problem here is that dirname(__FILE__) returns the path of alias_directory2.php, not alias_directory.php. This works:

        require_once $_SERVER['DOCUMENT_ROOT'].'/webroot.php';       //(ok)
        require_once '/full/path/to/directory/alias_directory.php';  //(ok)

    but it is very nasty and becomes a maintenance nightmare if I decide to move my library to another location. How do I solve this problem? It seems that I need a way to resolve an Alias folder properly.

    Read the article

  • Looking for a 'pick a-or-b' voting system script

    - by user324455
    Apologies: this is my first time on Stack Overflow and I'm starting with a question and seeking advice. Sorry. Caveats: I know HTML and CSS pretty well. JavaScript and PHP are not completely alien, but I'm really pretty basic with those. That said, I'm pretty sharp and willing to search for explanations independently. OK, so my question is this: I want to create a site with a voting system very much like the one on kittenwar.com - the page loads 2 random images from a database of some sort and you click on the one you want to 'win'; a ranked-pairs kind of deal. Then there is a leaderboard of the images with the highest win-loss ratio. There also needs to be an uploader so people can upload their own images and have them go into an approval workflow, and from there into the database that feeds the voting. I tried a pre-made solution ('photo battle') but found it was completely standalone, so trying to integrate it or change any of the options was a nightmare, plus it was buggy. I'm sure there has to be a relatively easy way to do this, right? Ideally I'd like to build my site in Joomla and integrate this functionality somehow. I'd be very grateful for any advice on this. Thanks, Tom
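    A rough sketch of the kind of MySQL schema and queries this boils down to (the table and column names are made up, and the approval workflow is reduced to a single flag):

        -- Hypothetical MySQL schema for a pick-a-or-b voting site.
        CREATE TABLE images (
            id       INT AUTO_INCREMENT PRIMARY KEY,
            filename VARCHAR(255) NOT NULL,
            approved TINYINT(1) NOT NULL DEFAULT 0,  -- set to 1 once through the approval workflow
            wins     INT NOT NULL DEFAULT 0,
            losses   INT NOT NULL DEFAULT 0
        );

        -- Pick two random approved images to battle (fine for a modest table size).
        SELECT id, filename FROM images WHERE approved = 1 ORDER BY RAND() LIMIT 2;

        -- Record a vote; the winner and loser ids come from the form submission.
        UPDATE images SET wins   = wins   + 1 WHERE id = 17; -- winner
        UPDATE images SET losses = losses + 1 WHERE id = 42; -- loser

        -- Leaderboard by win-loss ratio, for images that have battled at least once.
        SELECT id, filename, wins, losses, wins / (wins + losses) AS win_ratio
        FROM images
        WHERE wins + losses > 0
        ORDER BY win_ratio DESC
        LIMIT 20;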

    Read the article

  • MySQL to SQL Server ODBC Connector?

    - by Scott C.
    My boss wants the data in the MySQL DBs used for our website to be "linked and synced" with a financial server that keeps its DB in SQL Server. Even though I have no idea how to accomplish this, it sounds like an absolute nightmare, especially since the MySQL DB is most likely going to be hosted in the cloud and not on a machine next to the financial server. Any ideas how to accomplish this (within reason)? Also, his big thing is that he wants to pull up the data from any record a user enters and, using data pulled from that, do all sorts of calculations in ANOTHER program that (apparently) stores its data in SQL Server. Thinking of all the data I might have to convert makes me very uneasy. Please tell me an ODBC connector eliminates complicated junk like this. I'm trying to talk him into just having MySQL do a nightly dump into a CSV file or something and using that (rather than a connector) to update the SQL Server DBs. I guess I'm just not that comfortable with a server and/or program I have no say over being connected DIRECTLY to the MySQL DB for the website. If there's no good answer for this, can anyone offer a suggestion as to what I can say to talk him out of it? (I'm a low-level IT guy with a decent grasp of programming, but I'm no expert - should I try to push this off to a seasoned IT pro?) Thanks in advance.
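    If the nightly-dump route wins, a minimal sketch of the MySQL side might look like the following (table, column and file names are hypothetical; SELECT ... INTO OUTFILE needs the FILE privilege and writes on the database server's own filesystem, and the SQL Server side would load the file with BULK INSERT or an SSIS package):

        -- Hypothetical nightly export of new/changed rows for the SQL Server side to import.
        SELECT id, customer_ref, amount, updated_at
        INTO OUTFILE '/var/exports/website_orders.csv'
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        LINES TERMINATED BY '\n'
        FROM orders
        WHERE updated_at >= CURDATE() - INTERVAL 1 DAY;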

    Read the article

  • Does Microsoft Access use the PK fields for anything?

    - by chrismay
    OK, this is going to sound strange, but I have inherited an app that is an Access front end with a SQL Server back end. I am in the process of writing a new front end for it, but we need to continue using the Access front end for a while even after we deploy the new one, for reasons I won't go into. So both the existing Access app and my new app will need to be able to access and work with the data. The problem is that the database design is a nightmare. For example, some simple parent-child table relationships have 4- and 5-part composite primary keys. I would REALLY like to remove these PKs and replace them with unique constraints or similar, and add a new identity column called ID to each of these tables. If I change the PKs and FKs on these tables to more manageable fields, will the Access app have problems? What I mean is: does Access use the metadata from the tables (PK and FK info) in such a way that changing them would break the app?
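    For reference, a minimal T-SQL sketch of the kind of change being proposed - the table, column and constraint names here are hypothetical:

        -- Hypothetical: replace a 4-part composite PK with an identity PK plus a unique constraint.
        -- Any foreign keys referencing the old PK must be dropped and re-created against the new key first.
        ALTER TABLE dbo.OrderDetail ADD ID INT IDENTITY(1,1) NOT NULL;

        ALTER TABLE dbo.OrderDetail DROP CONSTRAINT PK_OrderDetail;            -- the old composite PK
        ALTER TABLE dbo.OrderDetail ADD CONSTRAINT PK_OrderDetail_ID PRIMARY KEY (ID);

        -- Keep the old composite key enforced so the Access app still sees the same uniqueness rules.
        ALTER TABLE dbo.OrderDetail
            ADD CONSTRAINT UQ_OrderDetail_Composite UNIQUE (OrderID, LineNumber, ProductID, WarehouseID);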

    Read the article

  • Should I have one dll or multiple for Business Logic?

    - by Brian
    In my situation, my company services many types of customers. Almost every customer requires their own business logic. Of course, there will be a base layer that all business logic inherits from. However, I'm going back and forth on how to architect this - either one DLL for all customers or one DLL for each. My biggest point of contention is upgrading the software. We have about 12 data-entry personnel who work with 20 companies, and it's critical that they have little downtime. My concern is that if I deploy everything in one DLL, I could introduce a bug in company A's logic while only intending to update company B's. I believe I could reduce that risk if each company's logic had its own DLL; then I could deploy company B's update without harming company A's. I will be the only one supporting this. That said, it also seems like a nightmare to manage 20 different DLLs - and that's for the BLL alone. I also need to create a View layer and a ViewModel layer, so potentially I could have 20 (companies) * 3 (layers) = 60 DLLs. Thank you.

    Read the article

  • I read 3 pages of a JQuery book and here's my reaction and question

    - by George
    My reaction to jQuery's flexible "selectors" is probably rooted in this experience: I once managed a project where a developer built a web page that let users provide very flexible search parameters for a search screen, using dynamic SQL string building based on the user's specified parameters. The resulting queries were usually very complicated and involved joins to many tables. One of the options was to choose from one of 3 choices, and depending on the user's selection the resulting SQL would need to query a different set of database columns. For example, if option "A" were selected, the database columns queried would be prefixed with "A_"; if option "B" were selected, the columns queried would be prefixed with "B_", and so on. The developer chose to write the complete SQL assuming that the user selected, for example, option "A", and therefore first constructed SQL of this type:

        SQL = "SELECT A_COL1, A_COL2, A_COL3 FROM TABLE ..."

    and then, after constructing one of a million possible variations on the Query From Hell, did something like this:

        If UserOption = "B" Then
            SQL = SQL.Replace("A_", "B_") 'replace everywhere
        End If

    He insisted that this was the easiest way to code it, and while I understood that, I was concerned about the maintenance of this code. You see, it worked for a while, but as the search options grew and the database columns evolved, the various "replace one small substring with another small substring" operations had unexpected consequences when applied to an evolving database and new search options. My feeling is that code should be written, as much as possible, so that you can add to it without fear of breaking what is already there. I feel a better approach, though a bit more work, would have been to write a function that returns the appropriate target column based on a common base name and the user-selected option. OK, so what does this have to do with jQuery selectors? Are the ultra-flexible jQuery selectors kind of like performing a "replace all" on a SQL string - handy as hell, but potentially creating a maintenance nightmare?
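    As an illustration of the "map the option to columns in one place" idea, here is a hedged T-SQL sketch (the table and column names are invented) that resolves the prefixed columns with CASE instead of rewriting the SQL text:

        -- Hypothetical: resolve the prefixed columns once, via CASE, instead of string-replacing the SQL.
        DECLARE @UserOption CHAR(1) = 'B';

        SELECT
            CASE @UserOption WHEN 'A' THEN A_COL1 WHEN 'B' THEN B_COL1 WHEN 'C' THEN C_COL1 END AS COL1,
            CASE @UserOption WHEN 'A' THEN A_COL2 WHEN 'B' THEN B_COL2 WHEN 'C' THEN C_COL2 END AS COL2,
            CASE @UserOption WHEN 'A' THEN A_COL3 WHEN 'B' THEN B_COL3 WHEN 'C' THEN C_COL3 END AS COL3
        FROM dbo.SomeTable;

    Adding a new option then means touching one mapping, not hoping a global substring replace lands only where it should.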

    Read the article

  • What's the best approach for getting into VS2010, C# 4, and WPF if my background is in C++/MFC

    - by Canacourse
    All my past programming experience has been in C++ on VS2003/8, mostly service-based and completely self-taught. Two years ago I had to create my first real GUI app and (foolishly) chose MFC. I got the app working, but it took a long time and was a bit of a nightmare to learn MFC (and its many shortcomings), and I ended up with a reliable, workable app that was difficult to change or extend. Again I have to create another GUI app, more complex than the first; again it will be created from scratch and will only ever be used on Windows. I had put off learning C# for a long time, but not wishing to revisit MFC, I have decided that the new application will be birthed in VS2010 with WPF 4 as the midwife. I'm trying to avoid the several expensive (time-wise) mistakes I made previously. I'm looking for good books/tutorials on the current versions of C# 4 and WPF 4, and also general advice on the best approach. The application will do several things, one of them being persisting info in a SQL DB, so I'm thinking LINQ for that? Please chip in...

    Read the article

  • How to construct objects based on XML code?

    - by the_drow
    I have XML files that are representations of a portion of HTML code. Those XML files also have widget declarations. Example XML file:

        <message id="msg">
          <p>
            <Widget name="foo" type="SomeComplexWidget" attribute="value">
              inner text here, sets another attribute or inserts another widget into the tree if needed...
            </Widget>
          </p>
        </message>

    I have a main Widget class that all of my widgets inherit from. The question is how I should create them. Here are my options: (1) Create a compile-time tool that parses the XML file and generates the code needed to bind the widgets to the corresponding objects. Advantages: no extra run-time overhead imposed on the system; it's easy to bind setters. Disadvantages: adds another step to the build chain; hard to maintain, since every widget in the system must be added to the parser; requires macros to bind the widgets; complex code. (2) Find a method to register all widgets into a factory automatically. Advantages: all of the binding is done completely automatically; easier to maintain than option 1, as every new widget only needs to call a WidgetFactory method that registers it. Disadvantages: no idea how to bind setters without introducing a maintainability nightmare; adds memory and run-time overhead; complex code. Which do you think is better? Can you suggest a better solution?

    Read the article

  • How do you get the solution directory in C# (VS 2008) in code?

    - by IsaacB
    Hi, I've got an annoying problem here. I have an NHibernate/Forms application I'm working on through SVN. I made some of my own controls, but when I drag and drop those (or view some form designers where I have already dragged and dropped them) onto some of my other controls, Visual Studio decides it needs to execute some of the code I wrote, including the part that looks for hibernate.cfg.xml. I have no idea why this is, but (sometimes!) when it executes the code during form load or drag and drop, it switches the current directory to C:\program files\vs 9.0\common7\ide, and then NHibernate throws an exception that it can't find hibernate.cfg.xml, because I'm searching for it via a relative path. Now, I don't want to hard-code the location of hibernate.cfg.xml, or just copy hibernate.cfg.xml to the IDE directory (which would work). I want a solution that gets the solution directory even while the current directory is common7\ide - something that will let someone view my forms in the designer on a fresh checkout to an arbitrary directory on an arbitrary machine. And no, I'm not about to load the controls in code: I have so many controls within controls that it is a nightmare to line everything up without the designer. I tried a pre-build event that writes a file containing the solution directory, but of course how can I find that file from common7\ide? All the project files need to be in the solution directory because of SVN. Thanks for your help, guys - I've already spent a few hours fiddling with this in vain.

    Read the article

  • Replacing mysql user authentication with openid

    - by David
    So, I'm working with a really old system which uses a person's MySQL database credentials to authenticate to a web site (the database was originally only accessed from the command line, but is now accessed from a PHP front end). For internal reasons (and to preserve each user's history), I have to leave the old authentication intact. I've been charged with adding OpenID authentication to this system. Somehow I need to be able to retrieve a user's MySQL username and password upon logging into the site through OpenID (using the Zend Framework, by the way). I've thought of simply requiring registration at the first login, where the user must provide their MySQL credentials, but I'd rather not store the password in plain text. I've also considered blanking everyone's MySQL passwords and just setting each user's MySQL username manually (rather than having the user provide it, since they could provide any username). This is turning into a security nightmare. Does anyone have any suggestions for alternatives? This is running on a Linux server, by the way. Also, I can't use MySQL pluggable authentication because the MySQL version is 5.0 (pluggable authentication requires MySQL 5.5), and no, I can't update it.
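    One possible compromise, sketched here only as an assumption-laden illustration (the table and column names are made up, and keeping the encryption key out of the database - e.g. supplied per-request by the PHP layer - is the hard part), is to store the MySQL credential encrypted with MySQL's AES functions rather than in plain text:

        -- Hypothetical mapping table: OpenID identity -> encrypted MySQL credential.
        CREATE TABLE openid_accounts (
            openid_identity VARCHAR(255) PRIMARY KEY,
            mysql_username  VARCHAR(64)    NOT NULL,
            mysql_password  VARBINARY(255) NOT NULL   -- AES_ENCRYPT output is binary
        );

        -- @app_key would be supplied by the application at connection time, never stored in the database.
        -- SET @app_key = '...';

        -- Store the credential at first-login registration time.
        INSERT INTO openid_accounts (openid_identity, mysql_username, mysql_password)
        VALUES ('https://openid.example.org/user', 'dbuser42', AES_ENCRYPT('their-password', @app_key));

        -- Retrieve it again when the user logs in through OpenID.
        SELECT mysql_username, AES_DECRYPT(mysql_password, @app_key) AS mysql_password
        FROM openid_accounts
        WHERE openid_identity = 'https://openid.example.org/user';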

    Read the article

  • vSphere Client vCenter Template Customization Specification Using Windows Sysprep Unattended Answer XML File

    - by Brian
    I'm trying to set up a vSphere Client vCenter v5.0.0 Build 455964 Template Customization Specification using a Windows Sysprep unattended answer XML file for Win2008R2. However, I didn't know how Sysprep worked before attempting this, so it has been a time-consuming nightmare (even after reviewing the VMware vSphere ESXi 5 documentation)! I think I've figured out what I'm supposed to be doing, but it's still not working. The biggest problem at this point is that the vSphere Client vCenter Customization Specification IP address information does not stick when I load a Sysprep XML file with just 1 basic setting! This can only be a bug. Here is the process I'm using (Windows - vSphere Client): install the Windows OS, install VM Tools, customize Windows (GPOs can be used to do this after deployment), install applications (GPOs can be used to do this after deployment too), shut down the VM, convert the VM to a template, and create a custom Windows Sysprep XML answer file with the desired customizations.

    Then, in View > Management > Customization Specifications Manager, create a "New" specification; for "Target Virtual Machine OS" select Windows and check "Use Custom Sysprep Answer File" (this ADDS: Custom Sysprep File; KEEPS: Network (IP), Operating System Options (SID, Sysprep /generalize); REPLACES: Registration Information of Owner Name & Organization, Computer Name, Windows License (Key), Administrator Password, Time Zone, Run Once, Workgroup or Domain). Name it "VMwareCS-OS####R#x32/64w/Sysprep-TEST" (CS = Customization Specification), set the Description to "Created YYYY/MM/DD by FLast", Next; import a Sysprep answer file from a secure location, Next; custom settings, Next; click the "..." box to the right of "Use DHCP", set "Use the following IP settings:", fill out the first 2 octets of the "IP Address", set appropriate values for the other 2-3 fields, set the DNS server addresses, OK, Next; check "Generate New Security ID (SID)" - always, as the template is likely a domain-member computer so it can be updated occasionally - Next, Finish.

    Then, in View > Inventory > VMs and Templates, right-click the previously completed template and choose "Deploy Virtual Machine from this Template": provide the new OS name (max 15 characters), select the inventory location, Next; select the Host/Cluster (wait for validation to succeed), Next; select the Resource Pool (wait for validation to succeed), Next; select the Storage location, Next; check "Power on this virtual machine after creation", select "Customize using an existing customization specification", select the desired specification, select "Use the Customization Wizard to temporarily adjust the specification before deployment", Next, Next; custom settings?, Next; check "Generate New Security ID (SID)" - always, for the same reason - Next, Finish, Finish. I know a community member named "brian" (http://serverfault.com/users/25904/brian) has worked with this scenario before, but I couldn't figure out how to contact him directly, so Brian, if you see this message, could you provide some information to help? Thanks, Brian

    Read the article

  • How to shutdown VMware Fusion virtual machine on host shutdown

    - by Nikksno
    I have a Mac mini running Mavericks Server. I installed the Atmail server + webmail VM (a Linux CentOS distribution) in VMware Fusion Professional 6 with the VMware Tools add-on. It works flawlessly. I've set it to start on boot and that works very reliably. However, I've been looking for a way to also safely and gracefully shut it down whenever OS X shuts down, for whatever reason. The Mac is connected to a UPS and configured to perform an automatic shutdown when the battery starts running low, so that's no additional problem. The first thing I did was go into Fusion's prefs and select "Power off the vm" when closing it. However, I noticed that for some arcane reason closing the VM window would actually forcibly power off the VM, so then I found a post that showed me how to change the default power options, and I managed to have the VM cleanly shut down when closing its window or quitting Fusion altogether. At this point I was hoping the problem was solved, but as it turns out, upon invoking system shutdown OS X doesn't wait for the VM to shut down and terminates Fusion before it has a chance to do so. I then started looking for a way to automate shutting down the guest OS via some advanced setting, but had no luck. That's when I found a command to shut the VM down - vmrun - and it worked. The only thing left was to find a way to execute this script on OS X shutdown, giving it a little time to power off completely. However, this turned out to be a nightmare: I spent hours looking through several ways to do this with Startup Items, rc.shutdown, cron, launchd, etc., but none of them worked the way I had configured them. I have to say that I found very limited information on using launchd to execute a shutdown script, and I know it's the latest thing in the OS X world, so I'm hoping someone out there will be able to help me with this. I still think this is an extremely basic feature to ask for, and I was really surprised to find so little documentation on so many different aspects of this problem. Is Fusion too basic an application for this? I really hope someone can help. Thank you very much in advance.

    Read the article

  • How can I centralise MySQL data between 3 or more geographically separate servers?

    - by Andy Castles
    To explain the background to the question: we have a home-grown PHP application (for running online language-learning courses) running on a Linux server and using MySQL on localhost for saving user data (e.g. results of tests taken, marks for submitted work, time spent on different pages in the courses, etc.). As we have students in different geographic locations, we currently have 3 virtual servers hosted close to those locations (Spain, UK and Hong Kong), and users are added to the server closest to them (they access via different URLs, e.g. europe.domain.com, uk.domain.com and asia.domain.com). This works, but it is an administrative nightmare, as we have to remember which server a particular user is on, and users can only connect to one server. We would like to somehow centralise the information so that all users are visible on any of the servers and users could connect to any of the 3 servers. The question is what method we should use to implement this. It must be an issue that lots of people have encountered, but I haven't found anything conclusive after a fair bit of Googling around. The closest things to solutions I have seen are: something like master-master replication, but I have read so many posts suggesting that this is not a good idea, as things like auto_increment fields can break; and circular replication, which sounded perfect, but to quote from O'Reilly's High Performance MySQL, "In general, rings are brittle and best avoided". We're not against rewriting code in the application to make it work with whatever solution is required, but I am not sure whether replication is the correct thing to use. Thanks, Andy. P.S. I should add that we experimented with writes to a central database combined with reads from a local database, but the response time between the different servers for writing was pretty bad, and it's also important that written data is available immediately for reading, so if replication is too slow this could cause out-of-date data to be returned. Edit: I have been thinking about writing my own rudimentary replication script. It would involve giving each user a server ID to say which is their "home server"; e.g. users in Asia would be marked as having the Hong Kong server as their own server. Then the replication scripts (a PHP script set to run as a cron job reasonably frequently, e.g. every 15 minutes or so) would run independently on each of the servers in the system. They would go through the database and distribute any information about users whose "home server" is the server the script is running on to all of the other databases in the system. They would also need to pull in new information added to any of the other databases in the system where the "home server" flag is the server where the script is running. I would need to work out the details and build in the logic to deal with conflicts, but I think it would be possible. However, I wanted to make sure that there is not already a correct solution for this out there, as it seems like a problem that many people must have come across.
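    A minimal SQL sketch of the "home server" idea from the edit (the table and column names are hypothetical; the cron script itself and the conflict handling are not shown):

        -- Hypothetical: tag every user with a home server and track when rows change.
        ALTER TABLE users
            ADD COLUMN home_server VARCHAR(16) NOT NULL DEFAULT 'uk',
            ADD COLUMN updated_at  TIMESTAMP   NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP;

        -- On each server, the cron script would select the locally-owned rows changed since its last run
        -- and push them to the other two servers.
        SELECT *
        FROM users
        WHERE home_server = 'hk'                    -- the server the script is running on
          AND updated_at  > '2014-01-01 00:00:00';  -- last successful sync time, recorded by the script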

    Read the article

  • Is there a way to permanently arrange 2 displays under XP?

    - by rumtscho
    When I am at home, on a business trip, or in a meeting, I use my laptop in the usual way. When I get to work, I put it on the docking station and boot it with the lid closed. The image appears on the two displays connected to the docking station: on the left, an old monitor connected over VGA; on the right, a big widescreen connected over DVI. Obviously, the video card considers the DVI the primary output and the VGA the secondary one, so Windows always shows the widescreen on the left and the old FSC monitor on the right. So when I want to move the mouse pointer from the (physically) left display to the (physically) right display, I have to move it from right to left, which is a usability nightmare. Of course, I can just drag one display over the other in the display properties, and then everything is as it should be. The catch: Windows remembers this only as long as it has the two displays. Every time it runs on the laptop display, it forgets the setting. Physically switching the monitors isn't an option, for ergonomic reasons. I prefer to run the more important applications on the bigger screen with the better colour space, and the shape of my desk forces me to sit off-centre, so the more important applications should be shown on the right display. Just switching the video ports doesn't help either: when I connect the big monitor over VGA, image quality deteriorates visibly. So what I do now is this: every time I bring the laptop to my desk, I boot it, wait the whole 7 minutes of XP booting, syncing network drives, etc., then fire up the display properties, switch to the last tab, drag the widescreen display to the right, and close. Only then can I start working. Does someone have a better idea? The laptop is a Dell Latitude 630 with Windows XP SP3. It has an nVidia graphics card (not an onboard chip).

    Read the article

  • Automatic layout of manual network mapping

    - by Paul
    So I have a small business network, mainly consisting of two routed layer-2 domains, with a total of ca. 100 devices spread over ca. 2000 m² of production and office space. Typical problems to solve using the graph would be: over what (cable) path is a PC connected to the server? What devices should I expect to find connected to a given switch port? I want to generate a graph of the physical network topology: nodes are endpoint devices, switch ports, wall outlets, patch panel ports, etc.; edges are cable connections. Ideally, edges (or segments) that pass through the same bundle could be grouped. I would also like to augment the graph with automatically gathered data (monitoring state, MAC address, switch port to MAC entries to build up parts of the map). At the moment I use graphviz for this inside a Confluence wiki, like this:

        layout = "neato"
        overlap = scale
        subgraph {
            rankdir = "TB"
            subgraph cluster_r1pf1 {
                r1pf1 [label="{ Rack 1 PF 1 | { <p1>P1 | <p2>P2 | <p3>P3} }", shape=record]
            }
            subgraph cluster_switch1 {
                switch1 [label="{ Rack 1 Switch 1 | { <p1> P1 | <p1> P1 | <p3> P3} }", shape=record]
            }
            r1pf1:p1 -> switch1:p1

    (obviously there are dozens of entries omitted here). The problem is: I have a hard time getting graphviz to generate a bearable layout. Edges overlap so badly that you can't read the diagram any more. The question is: what other tools (be they interactive, like Visio or OmniGraffle, or I/O-oriented, like graphviz) exist that would allow easily versionable (as in: operates on a text file) documentation that is both machine- and human-readable and editable? Why not OmniGraffle or Visio? Well, we don't have Macs, and Visio is not available at the moment; to buy it I would need good arguments, and automation would be one of them, but last time I looked, versioning Visio files or even thinking about automating them was a nightmare. Related: "Network Mapping Tools" basically asks the same thing, with a focus on generating the complete graph automatically (but without the need to document cabling connections); "Recommendations for automatic computer inventory" brings up links to "all-in-one" solutions.

    Read the article

  • Need advice on which PCI SATA Controller Card to Purchase

    - by Matt1776
    I have a major issue with the build of a machine I am trying to get up and running. My goal is to create a file server that will service my software development, personal media storage and streaming/media server needs, as well as provide a strong platform for backing up all this data in a routine, cron-job-oriented, German-efficiency sort of way. The issue is a simple one: all my drives are SATA drives and my motherboard controller only has 4 ports. Solving it has proven to be an unmitigated nightmare. I would like advice on the purchase of a 4-port internal SATA / 2-port external eSATA PCI SATA controller card with the following features and/or advantages: It must function - if I plug it in and attach drives, I expect my system to still make it to the operating system login screen. It must function on CentOS, and I mean it must function WELL and with MINIMAL hassle; if hassle is unavoidable, there shall be CLEAR-CUT and EASY-TO-FOLLOW instructions on how to install drivers and other supporting software. I do not need nor want fakeRAID - I will be setting up any RAID configurations from within the operating system. Now, if I am able to find such a mythical device, I would be eternally grateful to whomever is able to point me in the right direction, a direction which I assume will be paved with yellow bricks. I am prepared to pay a considerable sum of money (as SATA controller cards go), so paying anywhere between 60 and 120 dollars will not be an issue whatsoever. Does such a magical device exist? The following link shows an "example" of the type of thing I am looking for; however, I have no way of verifying that once I plug this baby in and attach the drives my system will still continue to function, or that once I've made it to the OS I will be able to install whatever drivers or software I need to make it work with relative ease. It doesn't have to be dog-shit simple, but it cannot involve kernels or brain surgery. http://www.amazon.com/gp/product/B00552PLN4/ref=pd_lpo_k2_dp_sr_1?pf_rd_p=486539851&pf_rd_s=lpo-top-stripe-1&pf_rd_t=201&pf_rd_i=B003GSGMPU&pf_rd_m=ATVPDKIKX0DER&pf_rd_r=1HJG60XTZFJ48Z173HKY So does anyone have a suggestion regarding the subject I am asking about - PCI SATA controller cards? It would help if you've had experience with the component before; that is, after all, why I am asking here - for those who have experience that I do not have. Bear in mind that this is for a home setup and that I do not have a company credit card. I have a budget with a 'relative' upper limit of about $150.00.

    Read the article

  • SQL SERVER – PAGEIOLATCH_DT, PAGEIOLATCH_EX, PAGEIOLATCH_KP, PAGEIOLATCH_SH, PAGEIOLATCH_UP – Wait Type – Day 9 of 28

    - by pinaldave
    It is very easy to say that you should replace your hardware because it is not up to the mark; in reality, that is very difficult to implement. It is really hard to convince an infrastructure team to change any hardware just because it is not performing at its best. I had a nightmare related to this in a dealing with an infrastructure team when I suggested that they replace their faulty hardware, because they initially would not accept that their hardware was at fault. It is really easy to say "Trust me, I am correct", but it is equally important to put some logical reasoning behind that statement. PAGEIOLATCH_XX is the kind of wait stat that we would like to blame directly on the underlying subsystem, and of course, most of the time that is correct - the underlying subsystem usually is the problem.

    From Books On-Line: PAGEIOLATCH_DT occurs when a task is waiting on a latch for a buffer that is in an I/O request, with the latch request in Destroy mode; long waits may indicate problems with the disk subsystem. PAGEIOLATCH_EX is the same with the latch request in Exclusive mode, PAGEIOLATCH_KP in Keep mode, PAGEIOLATCH_SH in Shared mode and PAGEIOLATCH_UP in Update mode; in every case, long waits may indicate problems with the disk subsystem.

    PAGEIOLATCH_XX explanation: simply put, this wait type occurs when a task is waiting for data to move from disk into the buffer cache. Reducing PAGEIOLATCH_XX waits: just like any other wait type, this is a very challenging and interesting subject to resolve. Here are a few things you can experiment with. Improve your IO subsystem speed (read the first paragraph of this article if you have not already - I repeat that such a step is far easier to say than to actually implement). This kind of wait can also happen due to memory pressure or other memory issues, so, putting aside the question of a faulty IO subsystem, this wait type warrants proper analysis of the memory counters; if for any reason memory is not optimal and is unable to receive the IO data, that alone can create this kind of wait. Proper placement of files is very important: check the file system for proper placement - LDF and MDF on separate drives, TempDB on a separate drive, hot-spot tables on a separate filegroup (and on a separate disk), etc. Check the file statistics and see if there are high IO Read and IO Write stalls (SQL SERVER – Get File Statistics Using fn_virtualfilestats). It is very possible that there are no proper indexes on the system and there are lots of table scans and heap scans; creating proper indexes can reduce the IO bandwidth considerably, and if SQL Server can use an appropriate covering index instead of the clustered index, it can significantly reduce CPU, memory and IO load (considering that the covering index has far fewer columns than the clustered table, and all the other "it depends" conditions).

    You can refer to the two articles below, previously written by me, that talk about how to optimize indexes: Create Missing Indexes and Drop Unused Indexes. Updating statistics can help the Query Optimizer render an optimal plan, directly or indirectly; I have seen that updating statistics with a full scan (again, if your database is huge and you cannot do this - never mind!) can give the optimizer the information it needs to produce an efficient plan. Memory-related Perfmon counters worth checking: SQLServer: Memory Manager\Memory Grants Pending (consistently higher than 0-2 is a concern); SQLServer: Memory Manager\Memory Grants Outstanding (consistently high values - benchmark); SQLServer: Buffer Manager\Buffer Cache Hit Ratio (higher is better, greater than 90% for a usually smooth-running system); SQLServer: Buffer Manager\Page Life Expectancy (consistently lower than 300 seconds is a concern); Memory: Available MBytes (information only); Memory: Page Faults/sec (benchmark only); Memory: Pages/sec (benchmark only). Disk-related Perfmon counters worth checking: Average Disk sec/Read (consistently higher than 4-8 milliseconds is not good); Average Disk sec/Write (consistently higher than 4-8 milliseconds is not good); Average Disk Read/Write Queue Length (consistently higher than the benchmark is not good). Note: the information presented here is from my experience and I do not claim it to be definitive; I suggest reading Books On-Line for further clarification. All of the discussion of wait stats in this blog is generic and varies from system to system; it is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
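    As a starting point, the following queries show whether PAGEIOLATCH waits dominate on an instance and where the IO stalls are; they use only the documented DMV and function mentioned above, and no thresholds are implied:

        -- Top PAGEIOLATCH waits on the instance since the last restart (or wait-stats clear).
        SELECT TOP (10) wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
        FROM sys.dm_os_wait_stats
        WHERE wait_type LIKE 'PAGEIOLATCH%'
        ORDER BY wait_time_ms DESC;

        -- IO stalls per database file; NULL, NULL means all databases and all files.
        SELECT DB_NAME(DbId) AS database_name, FileId, NumberReads, IoStallReadMS, NumberWrites, IoStallWriteMS
        FROM sys.fn_virtualfilestats(NULL, NULL);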

    Read the article

  • Dynamic Data Connections

    - by Tim Dexter
    I have had a long-running email thread going with Dan and David over at Valspar. They have built some impressive connectivity between their in-house apps and BIP using web services. The crux of their problem has been that they have multiple databases that need the same report executed against them. Not such an unusual request, as I have spoken to two customers in the last month with the same situation. Of course, you could create a report against each data connection and just run or call the appropriate report. That's not too bad if you have two or three data connections, but with more than that it becomes a maintenance nightmare, having to update queries or layouts. Ideally you want just a single report definition on the BIP server, with the connection set dynamically at runtime based on the user or the system the user is in. A quick bit of digging, and help from Shinji on the development team, and I had an answer. Rather embarrassingly, the solution has been around since the Oct 2010 rollup patch last year. Still, I grabbed the latest Jan 2011 patch - check out Note 797057.1 for the latest available patches. Once it was installed, I used the best web service testing tool I have yet to come across - SoapUI. Just point it at the WSDL and you can check out the available services and their parameters, and then test them too. The XML packet has a new dynamic data source entry: you can set your own custom JDBC connection or just specify an existing data source name that's defined on the server.

        <pub:runReport>
          <pub:reportRequest>
            <pub:attributeFormat>xml</pub:attributeFormat>
            <pub:attributeTemplate>0</pub:attributeTemplate>
            <pub:byPassCache>true</pub:byPassCache>
            <pub:dynamicDataSource>
              <pub:JDBCDataSource>
                <pub:JDBCDriverClass></pub:JDBCDriverClass>
                <pub:JDBCDriverType></pub:JDBCDriverType>
                <pub:JDBCPassword></pub:JDBCPassword>
                <pub:JDBCURL></pub:JDBCURL>
                <pub:JDBCUserName></pub:JDBCUserName>
                <pub:dataSourceName>Conn1</pub:dataSourceName>
              </pub:JDBCDataSource>
            </pub:dynamicDataSource>
            <pub:reportAbsolutePath>/Test/Employee Report/Employee Report.xdo</pub:reportAbsolutePath>
          </pub:reportRequest>
          <pub:userID>Administrator</pub:userID>
          <pub:password>Administrator</pub:password>
        </pub:runReport>

    So I have Conn1 and Conn2 defined, which are connections to different databases. I can just flip the name, make the WS call and get the appropriate dataset in my report. Just as an example, here's my web service call code in Java - just a case of bringing the BIP Java libs into my Java project.

        publicReportServiceService = new PublicReportServiceService();
        PublicReportService publicReportService = publicReportServiceService.getPublicReportService_v11();
        String userID = "Administrator";
        String password = "Administrator";
        ReportRequest rr = new ReportRequest();
        rr.setAttributeFormat("xml");
        rr.setAttributeTemplate("1");
        rr.setByPassCache(true);
        rr.setReportAbsolutePath("/Test/Employee Report/Employee Report.xdo");
        rr.setReportOutputPath("c:\\temp\\output.xml");
        BIPDataSource bipds = new BIPDataSource();
        JDBCDataSource jds = new JDBCDataSource();
        jds.setDataSourceName("Conn1");
        bipds.setJDBCDataSource(jds);
        rr.setDynamicDataSource(bipds);
        try {
            publicReportService.runReport(rr, userID, password);
        } catch (InvalidParametersException e) {
            e.printStackTrace();
        } catch (AccessDeniedException e) {
            e.printStackTrace();
        } catch (OperationFailedException e) {
            e.printStackTrace();
        }

    Note, I'm no Java whiz kid, or whizzy old bloke, at least not unless I've had a coffee. JDeveloper has a nice feature where you point it at the WSDL and it creates everything needed to support your calling code. A couple of things to remember: 1. When you call the service, remember to set the bypass-the-cache option; forget it and much scratching of your head and taking my name in vain will ensue. 2. My demo actually hit the same database but used two users: one accessed the base tables, the other accessed views with the same names. For far too long I thought the connection swapping was not working - I was getting the same results for both users - until I realized I was specifying the schema name for the table/view in my query, e.g. select * from EMP.EMPLOYEES. So remember to use a generic query that depends entirely on the connection. It's a neat feature if you want to be able to switch connections and only define a single report and call it remotely. Now, if you want the connection to be set dynamically based on the user when the report is run via the user interface, that's going to be more tricky ... need to think about that one!

    Read the article

  • Big Data – Evolution of Big Data – Day 3 of 21

    - by Pinal Dave
    In yesterday's blog post we answered what Big Data is. Today we will look at why and how the evolution to Big Data happened. Though the answer is very simple, I would like to tell it in the form of a history lesson.

    Data in flat files: in the early days data was stored in flat files, and there was no structure to them. If any data had to be retrieved from a flat file, it was a project in itself. There was no way to retrieve data efficiently, and data integrity was just a term that was discussed, without any modeling or structure around it. Databases residing in flat files had more issues than we would like to discuss in today's world; it was more like a nightmare whenever any data processing was involved in an application. Though the applications developed at that time were not that advanced either, the need for data was always there, and there was always a need for proper data management.

    Edgar F Codd and the 12 rules: Edgar Frank Codd was a British computer scientist who, while working for IBM, invented the relational model for database management, the theoretical basis for relational databases. He presented 12 rules for the relational database, and suddenly the chaotic world of the database started to see discipline. The relational database was a promised land for all the users of unstructured databases: it brought relationships between data as well as improved performance for data retrieval. The database world saw an immediate and major transformation, and pretty much every vendor and database user started to adopt relational database models.

    Relational Database Management Systems: after Edgar F Codd proposed his 12 rules, many different vendors started to build applications and tools to support relationships between data. This was indeed a learning curve for the many developers who had never worked with data modeling before. However, as time passed, pretty much everybody accepted the relational model and started to evolve products that perform at their best within the boundaries of the RDBMS concepts. This was the best era for databases, and it gave the world both extreme experts and some of the best products. The Entity-Relationship model also evolved at the same time: in software engineering, an entity-relationship model (ER model) is a data model for describing a database in an abstract way.

    Enormous data growth: everything was going fine with the RDBMS in the database world. As there were no major challenges, the adoption of RDBMS applications and tools was pretty much universal, and at times there was a race to make the developer's life easier with RDBMS management tools. Due to their extreme popularity and ease of use, pretty much all data was stored in RDBMS systems. New-age applications were built, and social media took the world by storm. Every organization felt pressure to provide the best experience for its users based on the data it had, and while all this was going on, data kept growing in pretty much every organization and application.

    Data warehousing: the enormous data growth then presented a big challenge for organizations that wanted to build intelligent systems based on that data and provide a near-real-time, superior user experience to their customers. Various organizations immediately started building data warehousing solutions where the data could be stored and processed.

    The trend of business intelligence became an everyday need. Data was received from the transaction systems and processed overnight to build intelligent reports from it. Though this is a great solution, it has its own set of challenges: the relational database model and data warehousing concepts are built with traditional relational modeling in mind, and they still face many challenges when unstructured data is present.

    Interesting challenge: every organization had expertise in managing structured data, but the world had already moved to unstructured data. There was intelligence in the videos, photos, SMS, text, social media messages and various other data sources. All of this now needed to be brought onto a single platform, as a uniform system that does what businesses need. The way we do business has also changed: there was a time when users only got the features the technology supported; now users ask for features, and technology is built to support them. The need for real-time intelligence from fast-moving data streams is becoming a necessity. A large amount (Volume) of a wide variety (Variety) of high-speed data (Velocity): these are the properties of the data. The traditional database system has limits in resolving the challenges this new kind of data presents; hence the need for Big Data science. We need innovation in how we handle and manage data, and creative ways to capture data and present it to users. Big Data is reality!

    Tomorrow: in tomorrow's blog post we will discuss the basics of Big Data architecture. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • The Krewe App Post-Mortem

    - by Chris Gardner
    Originally posted on: http://geekswithblogs.net/freestylecoding/archive/2014/05/23/the-krewe-app-post-mortem.aspx

    Now that TechEd has come and gone, I thought I would use this opportunity to do a little post-mortem on The Krewe app. It is one thing to test the app at home; it is a completely different animal to see how it responds in the environment TechEd creates. At a future time, I will list all the things that I would like to change with the app, and I will find some good way to get community feedback. I want to break all this down screen by screen, starting with the screen I got right.

    The first of these is the events calendar. This is the one screen that, to you guys, just worked. However, there was an issue here: when I wrote v1 last year, I was lazy and placed everything in CST. This caused problems with the achievements, which I will explain later. Furthermore, the event locations were not check-in locations, which created another problem with the achievements. Next, we get to the Twitter page. For what this page does, it works great. For those that don't know, I have an Azure Worker Role that polls Twitter pretty close to the rate limit. I cache the results in my database and serve them upon request. This gives me great control over the content; I just have to remember to flush past tweets after a period, to limit database growth. The next screen is the check-in screen. This screen has been the bane of my existence since I first created the thing. Last year, I used a background task to check people out of locations after they traveled. This year, I removed the background task in favor of a foursquare model: you are checked out after 3 hours or when you check in to some other location. This seemed to work well, until those pesky achievements came into the mix. Again, more on this later.

    Next, I want to address the Connect and Connections screens together. I wanted to use some of the capabilities of the phone, and NFC seemed a natural choice. From this came the gamification aspects of the app. Since we are, fundamentally, a networking organization, I wanted to encourage people to actually network. Users could make and share a profile, similar to a virtual business card. I just had to figure out how to get people to use the feature. Why not just give someone a business card? Thus, the achievements were born. This was such a good idea. It would have been a great idea if I had come up with it about two months earlier... When I came up with these ideas, I had about 2 weeks to implement them. Version 1 of the app was, basically, a pure consumption app: we provided data and centralized it. With version 2, the app became a much more interactive experience, and the API was not ready for this change in such a short period of time.

    Most of this became apparent when I started implementing the achievements. The achievements based on a count or a specific person were fairly easy. The problem came with tying them to locations and events. That took some true SQL kung fu, and it also exposed the rookie mistake of putting CST, not UTC, in the database. Once I got all of that cleaned up, I had to find a way to get the achievement system to talk to the phone. I knew I needed to be able to dynamically add achievements, as I wouldn't know the precise location of some things until I got to Houston. I wanted the server to approve the achievements, which, unfortunately, required a decent data connection. Some achievements required GPS levels of location accuracy in areas with only network triangulation. All of this became a huge nightmare. My flagship feature was based on some silly assumptions. Still, I managed to get 31 people to earn the first achievement (Make 1 Connection), and quite a few of those managed to reach the higher levels. Soon, I will post a list of the features and changes that need to happen to the API. This includes things like proper objects for communication, geo-fencing, and caching. However, that is for another day.

    Read the article

  • Remote Debug Windows Azure Cloud Service

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/11/02/remote-debug-windows-azure-cloud-service.aspx

    On the 22nd of October Microsoft announced the new Windows Azure SDK 2.2. It introduced a lot of cool features, but the one that impressed people most is remote debug support for Windows Azure Cloud Services (a.k.a. WACS).

    Live debug is a nightmare for cloud applications: when we develop against the public cloud, debugging might be the most difficult task, especially after the application has been deployed. In order to minimize the debugging effort, Microsoft provided a local emulator for cloud services and storage when the Windows Azure platform was announced. By using the local emulator, developers can run their application on a local machine with almost the same behavior as on Windows Azure, and it can be debugged easily and quickly. But once we deploy our application to Azure, we have to debug using logs and the diagnostic monitor, which is very inefficient. Visual Studio 2012 introduced a new feature named "anonymous remote debug", which allows any workstation under any user to attach to the remote process. This is less secure than authenticated remote debug, but much easier and simpler to use. Now, with Windows Azure SDK 2.2, we can attach to our application on Windows Azure from our local machine, and it's very easy.

    How to use the remote debugger: first, let's create a new Windows Azure Cloud project in Visual Studio, select ASP.NET Web Role, and create an ASP.NET WebForm application. Then right-click on the cloud project and select "Publish". In the publish dialog we need to make sure the application will be built in debug mode, since a .NET assembly cannot be debugged in release mode. I enabled Remote Desktop, as I will log into the virtual machine later in this post; it is NOT necessary for remote debug. Then select the "Advanced Settings" tab and make sure "Enable Remote Debugger for all roles" is checked. In WACS, a cloud service can have one or more roles, and each role can have one or more instances. The remote debugger will be enabled for all roles and all instances if we check this; currently there is no way to specify which role(s) and which instance(s) to enable. Finally, click the "Publish" button. In the Windows Azure activity window in Visual Studio we can find some information about the remote debugger. Attaching to the remote process is easy: open the "Server Explorer" window in Visual Studio, expand the "Cloud Services" node, find the cloud service, role and instance we have just published and want to debug, right-click on the instance and select "Attach Debugger". After a while (depending on how fast our Internet connection to the Windows Azure data center is), Visual Studio switches to debug mode. Let's add a breakpoint in the default web page's form-load function and refresh the page in the browser to see what happens. We can see that our application stops at the breakpoint; the call stack, watch windows and so on are all available. Now let's hit F5 to continue, and back in the browser we find the page rendered successfully.

    What's under the hood: the remote debugger is a WACS plugin. When we check "Enable Remote Debugger" in the publish dialog, Visual Studio adds two cloud configuration settings to the CSCFG file. Since they are appended at deployment time, we cannot find them in our project's CSCFG file, but if we open the publish package we can find them. At the same time, Visual Studio generates a certificate and includes it in the package for the remote debugger. If we go to the Azure management portal we will find a certificate under our application which was created and uploaded by the remote debugger plugin. Since I enabled Remote Desktop, there are two certificates in the screenshot below; the other one is for the remote debugger. When our application is deployed, the Windows Azure system opens the related ports for the remote debugger; as below, you can see there are two new ports opened on my application. Finally, in our WACS virtual machine, the Windows Azure system copies the remote debug component matching the version of Visual Studio we are using, and starts it. Our application can then be debugged remotely through the Visual Studio remote debugger. Below is the task manager on the virtual machine of my WACS application.

    Summary: in this post I demonstrated one of the features introduced in Windows Azure SDK 2.2, the remote debugger. It allows us to attach to our application on a Windows Azure virtual machine from a local machine once it has been deployed. The remote debugger is powerful and easy to use, but it brings more security risk, and since it is only available for debug builds, performance will be worse than a release build. Hence we should only use this feature for staging tests and bug fixes (publishing a beta version to the Azure staging slot), rather than for production. Hope this helps, Shaun. All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • What Counts For a DBA: Simplicity

    - by Louis Davidson
    Too many computer processes do an apparently simple task in a bizarrely complex way. They remind me of this strip by one of my favorite artists: Rube Goldberg. In order to keep the boss from knowing one was late, a process is devised whereby the cuckoo clock kisses a live cuckoo bird, who then pulls a string, which triggers a hat flinging, which in turn lands on a rod that removes a typewriter cover…and so on. We rely on creating automated processes to keep on top of tasks. DBAs have a lot of tasks to perform: backups, performance tuning, data movement, system monitoring, and of course, avoiding being noticed.  Every day, there are many steps to perform to maintain the database infrastructure, including: checking physical structures, re-indexing tables where needed, backing up the databases, checking those backups, running the ETL, and preparing the daily reports and yes, all of these processes have to complete before you can call it a day, and probably before many others have started that same day. Some of these tasks are just naturally complicated on their own. Other tasks become complicated because the database architecture is excessively rigid, and we often discover during “production testing” that certain processes need to be changed because the written requirements barely resembled the actual customer requirements.   Then, with no time to change that rigid structure, we are forced to heap layer upon layer of code onto the problematic processes. Instead of a slight table change and a new index, we end up with 4 new ETL processes, 20 temp tables, 30 extra queries, and 1000 lines of SQL code.  Report writers then need to build reports and make magical numbers appear from those toxic data structures that are overly complex and probably filled with inconsistent data. What starts out as a collection of fairly simple tasks turns into a Goldbergian nightmare of daily processes that are likely to cause your dinner to be interrupted by the smartphone doing the vibration dance that signifies trouble at the mill. So what to do? Well, if it is at all possible, simplify the problem by either going into the code and refactoring the complex code to simple, or taking all of the processes and simplifying them into small, independent, easily-tested steps.  The former approach usually requires an agreement on changing underlying structures that requires countless mind-numbing meetings; while the latter can generally be done to any complex process without the same frustration or anger, though it will still leave you with lots of steps to complete, the ability to test each step independently will definitely increase the quality of the overall process (and with each step reporting status back, finding an actual problem within the process will be definitely less unpleasant.) We all know the principle behind simplifying a sequence of processes because we learned it in math classes in our early years of attending school, starting with elementary school. In my 4 years (ok, 9 years) of undergraduate work, I remember pretty much one thing from my many math classes that I apply daily to my career as a data architect, data programmer, and as an occasional indentured DBA: “show your work”. This process of showing your work was my first lesson in simplification. Each step in the process was in fact, far simpler than the entire process.  
    When you were working through an equation that took both sides of 4 sheets of paper, showing your work was important because the teacher could see every step, judge it, and mark it accordingly. So often I would make an error in the first few lines of a problem, which meant that the rest of the work was actually moving me closer to a very wrong answer, no matter how correct the math was in the subsequent steps. Yet, when I got my grade back, I would sometimes be pleasantly surprised: I passed, yet missed every problem on the test. But why? While I got the fact that 1+1=2 wrong in every problem, the teacher could see that I was using the right process. A computer process is very similar. We take complex processes, show our work by storing intermediate values, and test each step independently. When a process has 100 steps, each step becomes a simple step that is tested and verified, such that there will be 100 places where data is stored, validated, and can be checked off as complete. If you get step 1 of 100 wrong, you can fix it and be confident (assuming you tested the other steps better than the one you had to repair) that the rest of the process works. If you have 100 steps and store the state of the process exactly once, the resulting testable chunk of code will be far more complex, and finding the error will require checking all 100 steps as one; it would usually be easier to find a specific needle in a stack of similarly shaped needles. The goal is to strive for simplicity either in the solution, or at least by simplifying every process down to as many independent, testable, simple tasks as possible. For the tasks that really can’t be done completely independently, at a minimum break them down into simpler steps that can be tested independently. Like working out long division problems by hand, have each step of the larger problem verified and tested.
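    To make the idea concrete, here is a minimal sketch, in C#, of what “showing your work” can look like in an automated process: each step is a small, independently testable unit that records its outcome before the next one runs. The step names and the trivially-true checks are placeholders for illustration only, not part of the original post.

    using System;
    using System.Collections.Generic;

    // One recorded outcome per step: the "shown work" for the process.
    public record StepResult(string Step, bool Succeeded, string Detail);

    public static class NightlyMaintenance
    {
        // Each entry is one small, independently testable unit of work.
        // The bodies are placeholders; real checks would go in their place.
        private static readonly (string Name, Func<bool> Run)[] Steps =
        {
            ("CheckPhysicalStructures", () => true),
            ("ReindexTables",           () => true),
            ("BackupDatabases",         () => true),
            ("VerifyBackups",           () => true),
            ("RunEtl",                  () => true),
            ("BuildDailyReports",       () => true)
        };

        public static IReadOnlyList<StepResult> Run()
        {
            var log = new List<StepResult>();
            foreach (var (name, run) in Steps)
            {
                bool ok;
                try
                {
                    ok = run();
                }
                catch (Exception ex)
                {
                    // A failure is recorded against exactly one small step.
                    log.Add(new StepResult(name, false, ex.Message));
                    break;
                }
                log.Add(new StepResult(name, ok, ok ? "completed" : "failed"));
                if (!ok) break;
            }
            return log;
        }
    }

    Because every step reports status back, a failed run points at one small, repairable unit rather than at the whole Goldbergian chain.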

    Read the article

  • Coping with infrastructure upgrades

    - by Fatherjack
    A common topic for questions on SQL Server forums is how to plan and implement upgrades to SQL Server: moving from old to new hardware, or moving from one version of SQL Server to another. There are other circumstances where upgrades of other systems affect SQL Server DBAs. For example, where I work at the moment there is a Microsoft Exchange (email) server upgrade in progress. It is being handled by a different team so I’m not wholly sure of the details, but we are in a situation where there are currently two Exchange email servers: the old one and the new one. Users’ mailboxes are being transferred in a planned process, but as we approach the old server being turned off we also have to make sure that our SQL Servers get updated to use the new SMTP server for all of the SQL Agent notifications, SSIS packages and so on. My servers have a number of profiles so that various jobs can send emails on behalf of various departments and different systems. This means there are lots of places where the old server name needs to be replaced by the new one. Anyone who has set up DBMail and enjoyed the click-tastic odyssey of screens to create Profiles and Accounts and so on and so forth ought to seek some professional help, in my opinion. It’s a nightmare of back-and-forth settings changes and it stinks. I wasn’t looking forward to heading into this mess of a UI and changing the old Exchange server name for the new one on all my SQL instances for all of the accounts I have set up. So I did what any Englishman with a shed would do: I decided to take it apart and see if I could fix it another way. I took a guess that we would be working in MSDB, and Books Online was remarkably helpful; amongst a lot of information it told me about a couple of procedures that can be used to interrogate DBMail settings.

    USE [msdb] -- It's where all the good stuff is kept
    GO
    EXEC dbo.sysmail_help_profile_sp;
    EXEC dbo.sysmail_help_account_sp;

    Both of these procedures take the same optional parameters, an ID and a Name. If you provide an ID or a name then the results you get back are for that specific Profile or Account; otherwise you get details of all Profiles and Accounts on the server you are connected to. As you can see, the Account has the SMTP server information in the servername column. We want to change that value to NewSMTP.Contoso.com. Now, it appears that the procedure we are looking at gets its data from the sysmail_account and sysmail_server tables; you can get the results the stored procedure provides if you run the code below.

    SELECT  [account_id] ,
            [name] ,
            [description] ,
            [email_address] ,
            [display_name] ,
            [replyto_address] ,
            [last_mod_datetime] ,
            [last_mod_user]
    FROM    dbo.sysmail_account AS sa;

    SELECT  [account_id] ,
            [servertype] ,
            [servername] ,
            [port] ,
            [username] ,
            [credential_id] ,
            [use_default_credentials] ,
            [enable_ssl] ,
            [flags] ,
            [last_mod_datetime] ,
            [last_mod_user] ,
            [timeout]
    FROM    dbo.sysmail_server AS sms;

    Now, we have no real idea how these tables are linked, or whether making an update directly to one or other of them will do what we want or entirely cripple our ability to send email from SQL Server, so we won't touch those tables with any UPDATE TSQL. So, back to Books Online, and we find sysmail_update_account_sp. It's exactly what we need. The examples in BOL take the form of having every parameter explicitly defined.
    Not wanting to totally obliterate the existing values by not passing values in all of the parameters, I set to writing some code to gather the existing data from the tables, rewrite the SMTP server name, and then execute the resulting TSQL.

    IF OBJECT_ID('tempdb..#sysmailprofiles') IS NOT NULL
        DROP TABLE #sysmailprofiles;
    GO
    CREATE TABLE #sysmailprofiles
        (
          account_id INT ,
          [name] VARCHAR(50) ,
          [description] VARCHAR(500) ,
          email_address VARCHAR(500) ,
          display_name VARCHAR(500) ,
          replyto_address VARCHAR(500) ,
          servertype VARCHAR(10) ,
          servername VARCHAR(100) ,
          port INT ,
          username VARCHAR(100) ,
          use_default_credentials VARCHAR(1) ,
          ENABLE_ssl VARCHAR(1)
        );

    INSERT  [#sysmailprofiles]
            ( [account_id] ,
              [name] ,
              [description] ,
              [email_address] ,
              [display_name] ,
              [replyto_address] ,
              [servertype] ,
              [servername] ,
              [port] ,
              [username] ,
              [use_default_credentials] ,
              [ENABLE_ssl]
            )
            EXEC [dbo].[sysmail_help_account_sp];

    DECLARE @TSQL NVARCHAR(1000);

    SELECT TOP 1
            @TSQL = 'EXEC [dbo].[sysmail_update_account_sp] @account_id = '
            + CAST([s].[account_id] AS VARCHAR(20))
            + ', @account_name = ''' + [s].[name] + ''''
            + ', @email_address = N''' + [s].[email_address] + ''''
            + ', @display_name = N''' + [s].[display_name] + ''''
            + ', @replyto_address = N''' + [s].[replyto_address] + ''''
            + ', @description = N''' + [s].[description] + ''''
            + ', @mailserver_name = ''NEWSMTP.contoso.com'''
            + ', @mailserver_type = ' + [s].[servertype]
            + ', @port = ' + CAST([s].[port] AS VARCHAR(20))
            + ', @username = ' + COALESCE([s].[username], '''''')
            + ', @use_default_credentials = ' + CAST([s].[use_default_credentials] AS VARCHAR(1))
            + ', @enable_ssl = ' + [s].[ENABLE_ssl]
    FROM    [#sysmailprofiles] AS s
    WHERE   [s].[servername] = 'SMTP.Contoso.com';

    SELECT  @TSQL;

    EXEC [sys].[sp_executesql] @TSQL;

    This worked well for me, and testing the email function with EXEC dbo.sp_send_dbmail afterwards showed that the settings were indeed using our new Exchange server. It was only later, in writing this blog, that I tried running the sysmail_update_account_sp procedure with only the SMTP server name parameter value specified. Despite what Books Online might intimate, you can do this, and only the values for the parameters you specify get changed. If a parameter is not specified in the execution of the procedure then its value remains unchanged. This renders most of the above script unnecessary, as I could have simply specified the account_id that I want to amend and the new value for the parameter I want to update.

    EXEC sysmail_update_account_sp @account_id = 1, @mailserver_name = 'NEWSMTP.Contoso.com';

    This wasn't going to be the main point of this post; it was meant to describe how to capture values from a stored procedure and use them in dynamic TSQL, but instead we are here, (re)learning that Books Online is a little flawed in places. It is a fantastic resource for anyone working with SQL Server, but the reader must adopt an enquiring frame of mind and use a little curiosity to try simple variations on the examples in order to fully understand the code they are working with. I think the author(s) of this part of Books Online missed an opportunity to include a third example, with fewer than all parameters specified, to hint that this approach exists.

    Read the article

  • Is MVVM pointless?

    - by joebeazelman
    Is orthodox MVVM implementation pointless? I am creating a new application, and I considered Windows Forms and WPF. I chose WPF because it's future-proof and offers lots of flexibility. There is less code, and it's easier to make significant changes to your UI using XAML. Since the choice for WPF was obvious, I figured that I may as well go all the way by using MVVM as my application architecture, since it offers blendability, separation of concerns and unit testability. Theoretically, it seems beautiful, like the holy grail of UI programming. This brief adventure, however, has turned into a real headache. As expected in practice, I’m finding that I’ve traded one problem for another. I tend to be an obsessive programmer in that I want to do things the right way so that I can get the right results and possibly become a better programmer. The MVVM pattern just flunked my test on productivity and has turned into a big yucky hack! The clear case in point is adding support for a modal dialog box. The correct way is to put up a dialog box and tie it to a view model. Getting this to work is difficult. In order to benefit from the MVVM pattern, you have to distribute code in several places throughout the layers of your application. You also have to use esoteric programming constructs like templates and lambda expressions. Stuff that makes you stare at the screen scratching your head. This makes maintenance and debugging a nightmare waiting to happen, as I recently discovered. I had an about box working fine until I got an exception the second time I invoked it, saying that it couldn’t show the dialog box again once it was closed. I had to add an event handler for the close functionality to the dialog window, another one in the IDialogView implementation of it, and finally another in the IDialogViewModel. I thought MVVM would save us from such extravagant hackery! There are several folks out there with competing solutions to this problem, and they are all hacks that don’t provide a clean, easily reusable, elegant solution. Most of the MVVM toolkits gloss over dialogs, and when they do address them, they are just alert boxes that don’t require custom interfaces or view models. I’m planning on giving up on the MVVM pattern, at least the orthodox implementation of it. What do you think? Has it been worth the trouble for you, if you had any? Am I just an incompetent programmer, or is MVVM not what it's hyped up to be?
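    For context, the exception described above (cannot show the dialog box again once it is closed) is a WPF behaviour rather than an MVVM one: a Window instance can only be shown once after it has been closed, so a fresh window has to be created per request. One common, if unofficial, pattern for keeping that plumbing out of the view model is a small dialog service owned by the composition root. The C# sketch below is a minimal, assumption-laden illustration of that idea, not the poster's code; the Content/DataTemplate wiring and the AboutViewModel name are hypothetical.

    using System.Windows;

    // A minimal dialog-service sketch: the view model depends only on this
    // abstraction, so it stays unit-testable and never touches a Window.
    public interface IDialogService
    {
        bool? ShowDialog(object viewModel);
    }

    public class WpfDialogService : IDialogService
    {
        public bool? ShowDialog(object viewModel)
        {
            // A closed Window cannot be reopened, so build a new one on every call.
            var window = new Window
            {
                Content = viewModel, // assumes a DataTemplate keyed on the view model type supplies the view
                Owner = Application.Current.MainWindow,
                SizeToContent = SizeToContent.WidthAndHeight,
                WindowStartupLocation = WindowStartupLocation.CenterOwner
            };
            return window.ShowDialog();
        }
    }

    // Usage from a view model (AboutViewModel is a hypothetical placeholder):
    //     _dialogService.ShowDialog(new AboutViewModel());

    Whether this counts as “orthodox” MVVM is debatable, but it keeps the window handling in one replaceable class rather than spread across IDialogView, IDialogViewModel and code-behind event handlers.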

    Read the article

< Previous Page | 9 10 11 12 13 14 15 16  | Next Page >