Search Results

Search found 3935 results on 158 pages for 'extended procedures'.


  • Remote location's status "Disconnected", how to keep it connected (OK)? [net-use-command]

    - by AZ
    I have a script that keeps running and, under some scenarios, it needs to contact a server (\\us-sign). From time to time, if this server goes uncontacted for a while, the next time my script needs it, it asks for my credentials. What I found is that, after this happens, net use shows the server as disconnected. If I type "net use \\us-sign", it doesn't ask for a user name or a password, which makes me believe my credentials for that server are still valid; nonetheless, its status remains "Disconnected". This script is supposed to help us automate some procedures, but having to keep watch over it in case it asks for credentials kind of defeats the purpose. How can I keep its status "OK" no matter how long the server goes without being contacted?

    Read the article

  • Entering the user's name in a URL for Chrome through Group Policy

    - by Automate Everything
    I am managing a Windows Server 2008 R2 server, with several Windows 7 machines, and we have recently deployed Google Chrome using Group Policy. We also have a locally hosted intranet for storing procedures, forms, and so on, as well as reports that pull directly from our databases. I am trying to put the user's name in the startup URL for Chrome, so that when they open Chrome at the beginning of the day, it can pull a list of items from the database that contains their username. The report works, and I have it using a drop down right now, but I would like to be able to put their username in the URL as a GET variable instead. Does anybody know how I would go about doing that for Chrome? I tried putting ${user_name} in the URL, and I tried putting %username% in the URL, but that didn't translate to anything. Is there some way to escape it so that it gets translated by the system into a username? Any help would be greatly appreciated.

    Read the article

  • Need help sending mass emails [closed]

    - by Jose
    Possible Duplicate: Prevent mail being marked as spam
    I have an Ubuntu server that I want to use to send out several thousand emails, each containing a generic header and a unique PDF attachment containing an invoice. What would be the best way to accomplish this? Also, are there any other programs I need to install, such as anti-virus / verification tools? I would like to know what the normal procedures for this are, as I would prefer that the emails are not sent to the junk folders of the clients receiving them. Any tips would be appreciated. Thanks

    Read the article

  • Accessing Oracle DB through SQL Server using OPENROWSET

    - by Ken Paul
    I'm trying to access a large Oracle database through SQL Server using OPENROWSET in client-side Javascript, and not having much luck. Here are the particulars:
    - A SQL Server view that accesses the Oracle database using OPENROWSET works perfectly, so I know I have valid connection string parameters.
    - However, the new requirement is for extremely dynamic Oracle queries that depend on client-side selections, and I haven't been able to get dynamic (or even parameterized) Oracle queries to work from SQL Server views or stored procedures.
    - Client-side access to the SQL Server database works perfectly with dynamic and parameterized queries.
    - I cannot count on clients having any Oracle client software. Therefore, access to the Oracle database has to be through the SQL Server database, using views, stored procedures, or dynamic queries using OPENROWSET.
    - Because the SQL Server database is on a shared server, I'm not allowed to use globally-linked databases.
    My idea was to define a function that would take my own version of a parameterized Oracle query, make the parameter substitutions, wrap the query in an OPENROWSET, and execute it in SQL Server, returning the resulting recordset. Here's sample code:
        // db is a global variable containing an ADODB.Connection opened to the SQL Server DB
        // rs is a global variable containing an ADODB.Recordset
        . . .
        ss = "SELECT myfield FROM mytable WHERE {param0} ORDER BY myfield;";
        OracleQuery(ss,["somefield='" + somevalue + "'"]);
        . . .
        function OracleQuery(sql,params) {
            var s = sql;
            var i;
            for (i = 0; i < params.length; i++)
                s = s.replace("{param" + i + "}",params[i]);
            var e = "SELECT * FROM OPENROWSET('MSDAORA','(connect-string-values)';" +
                    "'user';'pass','" + s.split("'").join("''") + "') q";
            try {
                rs.Open("EXEC ('" + e.split("'").join("''") + "')",db);
            } catch (eobj) {
                alert("SQL ERROR: " + eobj.description + "\nSQL: " + e);
            }
        }
    The SQL error that I'm getting is "Ad hoc access to OLE DB provider 'MSDAORA' has been denied. You must access this provider through a linked server.", which makes no sense to me. The Microsoft explanation for this error relates to a registry setting (DisallowAdhocAccess). This is set correctly on my PC, but surely this relates to the DB server and not the client PC, and I would expect that the setting there is correct since the view mentioned above works. One alternative that I've tried is to eliminate the enclosing EXEC in the Open statement:
        rs.Open(e,db);
    but this generates the same error. I also tried putting the OPENROWSET in a stored procedure. This works perfectly when executed from within SQL Server Management Studio, but fails with the same error message when the stored procedure is called from Javascript. Is what I'm trying to do possible? If so, can you recommend how to fix my code? Or is a completely different approach necessary? Any hints or related information will be welcome. Thanks in advance.
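    One way to sketch the server-side half of this approach is a stored procedure that splices the already-substituted Oracle text into an OPENROWSET call and runs it as dynamic SQL. This is only an illustrative outline, not a tested fix for the question above: the procedure name, connect string and credentials are placeholders, and the server must permit ad hoc distributed queries for OPENROWSET to be callable at all.
        -- Hypothetical wrapper: @OracleSql is the Oracle query text after parameter substitution.
        CREATE PROCEDURE dbo.usp_RunOracleQuery
            @OracleSql nvarchar(max)
        AS
        BEGIN
            DECLARE @sql nvarchar(max);

            -- OPENROWSET only accepts string literals, so the pass-through query has to be
            -- spliced into the statement; single quotes are doubled to keep the literal valid.
            SET @sql = N'SELECT * FROM OPENROWSET(''MSDAORA'',
                            ''(connect-string-values)'';''user'';''pass'',
                            ''' + REPLACE(@OracleSql, N'''', N'''''') + N''') q';

            EXEC sys.sp_executesql @sql;
        END
        GO
    The client would then pass the finished Oracle statement as a single parameter instead of building the OPENROWSET text in Javascript.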

    Read the article

  • What git branching models actually work - the final question

    - by UncleCJ
    In our company we have successfully deployed git and we are currently using a simple trunk/release/hotfixes branching model. However, this has its problems, and I have some key points of confusion in the community which would be awesome to have answered here. Maybe my hopes for an Alexander stroke are too great; quite possibly I'll decompose this question into more manageable issues, but here's my first shot.
    Workflows / branching models - below are the three main descriptions of this I have seen, but they are partially contradicting each other or don't go far enough to sort out the subsequent issues we've run into (as described below). Thus our team so far defaults to not-so-great solutions. Are you doing something better?
    - gitworkflows(7) Manual Page
    - (nvie) A successful Git branching model
    - (reinh) A Git Workflow for Agile Teams
    Merging vs rebasing (tangled vs sequential history) - the bids on this are as confusing as it gets. Should one pull --rebase or wait with merging back to the mainline until your task is finished? Personally I lean towards merging, since this preserves a visual illustration of on which base a task was started and finished, and I even prefer merge --no-ff for this purpose. It has other drawbacks, however. Also many haven't realized the useful property of merging - that it isn't commutative (merging a topic branch into master does not mean merging master into the topic branch).
    I am looking for a natural workflow - sometimes mistakes happen because our procedures don't capture a specific situation with simple rules. For example, a fix needed for earlier releases should of course be based sufficiently downstream to be possible to merge upstream into all branches necessary (is the usage of these terms clear enough?). However, it happens that a fix makes it into the master before the developer realizes it should have been placed further downstream, and if that is already pushed (even worse, merged or something based on it) then the option remaining is cherry-picking, with its associated perils... What simple rules like these do you use? Also included in this is the awkwardness of one topic branch necessarily excluding other topic branches (assuming they are branched from a common baseline). Developers don't want to finish a feature to start another one, feeling like the code they just wrote is not there anymore.
    How to avoid creating merge conflicts (due to cherry-pick)? What seems like a sure way to create a merge conflict is to cherry-pick between branches; can they never be merged again? Would applying the same commit in revert (how to do this?) in either branch possibly solve this situation? This is one reason I do not dare to push for a largely merge-based workflow.
    How to decompose into topical branches? - We realize that it would be awesome to assemble a finished integration from topic branches, but often the work by our developers is not clearly defined (sometimes as simple as "poking around"), and if some code has already gone into a "misc" topic, it can not be taken out of there again, according to the question above?
    How do you work with defining/approving/graduating/releasing your topic branches? Proper procedures like code review and graduating would of course be lovely, but we simply cannot keep things untangled enough to manage this - any suggestions? Integration branches, illustration please?
    Vote and comment as much as you'd like, I'll try to keep the issue page clear and informative enough. Thanks!
    Below is a list of related topics on stackoverflow I have checked out:
    - What are some good strategies to allow deployed applications to be hotfixable?
    - Workflow description for git usage for in-house development
    - Git workflow for corporate Linux kernel development
    - How do you maintain development code and production code? (thanks for this PDF!)
    - git releases management
    - Git Cherry-pick vs Merge Workflow
    - How to cherry-pick multiple commits
    - How do you merge selective files with git-merge?
    - How to cherry pick a range of commits and merge into another branch
    - ReinH Git Workflow
    - git workflow for making modifications you’ll never push back to origin
    - Cherry-pick a merge
    - Proper Git workflow for combined OS and Private code?
    - Maintaining Project with Git
    - Why cant Git merge file changes with a modified parent/master.
    - Git branching / rebasing good practices
    - When will "git pull --rebase" get me in to trouble?

    Read the article

  • What's the most unsound program you've had to maintain?

    - by Robert Rossney
    I periodically am called upon to do maintenance work on a system that was built by a real rocket surgeon. There's so much wrong with it that it's hard to know where to start. No, wait, I'll start at the beginning: in the early days of the project, the designer was told that the system would need to scale, and he'd read that a source of scalability problems was traffic between the application and database servers, so he made sure to minimize this traffic. How? By putting all of the application logic in SQL Server stored procedures. Seriously.
    The great bulk of the application functions by the HTML front end formulating XML messages. When the middle tier receives an XML message, it uses the document element's tag name as the name of the stored procedure it should call, and calls the SP, passing it the entire XML message as a parameter. It takes the XML message that the SP returns and returns it directly back to the front end. There is no other logic in the application tier. (There was some code in the middle tier to validate the incoming XML messages against a library of schemas. But I removed it, after ascertaining that 1) only a small handful of messages had corresponding schemas, 2) the messages didn't actually conform to these schemas, and 3) after validating the messages, if any errors were encountered, the method discarded them. "This fuse box is a real time-saver - it comes from the factory with pennies pre-installed!")
    I've seen software that does the wrong thing before. Lots of it. I've written quite a bit. But I've never seen anything like the steely-eyed determination to do the wrong thing, at every possible turn, that's embodied in the design and programming of this system. Well, at least he went with what he knew, right? Um. Apparently, what he knew was Access. And he didn't really understand Access. Or databases. Here's a common pattern in this code:
        SELECT @TestCodeID FROM TestCode WHERE TestCode = @TestCode
        SELECT @CountryID FROM Country WHERE CountryAbbr = @CountryAbbr
        SELECT Invoice.*, TestCode.*, Country.*
        FROM Invoice
        JOIN TestCode ON Invoice.TestCodeID = TestCode.ID
        JOIN Country ON Invoice.CountryID = Country.ID
        WHERE Invoice.TestCodeID = @TestCodeID
        AND Invoice.CountryID = @CountryID
    Okay, fine. You don't trust the query optimizer either. But how about this? (Originally, I was going to post this in What's the best comment in source code you have ever encountered? but I realized that there was so much more to write about than just this one comment, and things just got out of hand.) At the end of many of the utility stored procedures, you'll see code that looks like the following:
        -- Fix NULLs
        SET @TargetValue = ISNULL(@TargetValue, -9999)
    Yes, that code is doing exactly what you can't allow yourself to believe it's doing lest you be driven mad. If the variable contains a NULL, he's alerting the caller by changing its value to -9999. Here's how this number is commonly used:
        -- Get target value
        EXEC ap_GetTargetValue @Param1, @Param2, OUTPUT @TargetValue
        -- Check target value for NULL value
        IF @TargetValue = -9999
        ...
    Really. For another dimension of this system, see the article on thedailywtf.com entitled I Think I'll Call Them "Transactions". I'm not making any of this up. I swear. I'm often reminded, when I work on this system, of Wolfgang Pauli's famous response to a student: "That isn't right. It isn't even wrong." This can't really be the very worst program ever. It's definitely the worst one I've worked

    Read the article

  • Session memory – who’s this guy named Max and what’s he doing with my memory?

    - by extended_events
    SQL Server MVP Jonathan Kehayias (blog) emailed me a question last week when he noticed that the total memory used by the buffers for an event session was larger than the value he specified for the MAX_MEMORY option in the CREATE EVENT SESSION DDL. The answer here seems like an excellent subject for me to kick off my new “401 – Internals” tag that identifies posts where I pull back the curtains a bit and let you peek into what’s going on inside the extended events engine.
    In a previous post (Option Trading: Getting the most out of the event session options) I explained that we use a set of buffers to store the event data before we write the event data to asynchronous targets. The MAX_MEMORY along with the MEMORY_PARTITION_MODE defines how big each buffer will be. Theoretically, that means that I can predict the size of each buffer using the following formula:
        max memory / # of buffers = buffer size
    If it was that simple I wouldn’t be writing this post.
    I’ll take “boundary” for 64K, Alex
    For a number of reasons that are beyond the scope of this blog, we create event buffers in 64K chunks. The result of this is that the buffer size indicated by the formula above is rounded up to the next 64K boundary and that is the size used to create the buffers. If you think visually, this means that the graph of your max_memory option compared to the actual buffer size that results will look like a set of stairs rather than a smooth line. You can see this behavior by looking at the output of dm_xe_sessions, specifically the fields related to the buffer sizes, over a range of different memory inputs:
    Note: This test was run on a 2 core machine using per_cpu partitioning which results in 5 buffers. (See my previous post referenced above for the math behind buffer count.)
        input_memory_kb  total_regular_buffers  regular_buffer_size  total_buffer_size
        637              5                      130867               654335
        638              5                      130867               654335
        639              5                      130867               654335
        640              5                      196403               982015
        641              5                      196403               982015
        642              5                      196403               982015
    This is just a segment of the results that shows one of the “jumps” between the buffer boundary at 639 KB and 640 KB. You can verify the size boundary by doing the math on the regular_buffer_size field, which is returned in bytes:
        196403 – 130867 = 65536 bytes
        65536 / 1024 = 64 KB
    The relationship between the input for max_memory and when the regular_buffer_size is going to jump from one 64K boundary to the next is going to change based on the number of buffers being created. The number of buffers is dependent on the partition mode you choose. If you choose any partition mode other than NONE, the number of buffers will depend on your hardware configuration. (Again, see the earlier post referenced above.) With the default partition mode of none, you always get three buffers, regardless of machine configuration, so I generated a “range table” for max_memory settings between 1 KB and 4096 KB as an example.
        start_memory_range_kb  end_memory_range_kb  total_regular_buffers  regular_buffer_size  total_buffer_size
        1                      191                  NULL                   NULL                 NULL
        192                    383                  3                      130867               392601
        384                    575                  3                      196403               589209
        576                    767                  3                      261939               785817
        768                    959                  3                      327475               982425
        960                    1151                 3                      393011               1179033
        1152                   1343                 3                      458547               1375641
        1344                   1535                 3                      524083               1572249
        1536                   1727                 3                      589619               1768857
        1728                   1919                 3                      655155               1965465
        1920                   2111                 3                      720691               2162073
        2112                   2303                 3                      786227               2358681
        2304                   2495                 3                      851763               2555289
        2496                   2687                 3                      917299               2751897
        2688                   2879                 3                      982835               2948505
        2880                   3071                 3                      1048371              3145113
        3072                   3263                 3                      1113907              3341721
        3264                   3455                 3                      1179443              3538329
        3456                   3647                 3                      1244979              3734937
        3648                   3839                 3                      1310515              3931545
        3840                   4031                 3                      1376051              4128153
        4032                   4096                 3                      1441587              4324761
    As you can see, there are 21 “steps” within this range, and max_memory values below 192 KB fall below the 64K per buffer limit so they generate an error when you attempt to specify them.
    Max approximates True as memory approaches 64K
    The upshot of this is that the max_memory option does not imply a contract for the maximum memory that will be used for the session buffers (those of you who read Take it to the Max (and beyond) know that max_memory is really only referring to the event session buffer memory) but is more of an estimate of total buffer size to the nearest higher multiple of 64K times the number of buffers you have. The maximum delta between your initial max_memory setting and the true total buffer size occurs right after you break through a 64K boundary; for example, if you set max_memory = 576 KB (see the green line in the table), your actual buffer size will be closer to 767 KB in a non-partitioned event session. You get “stepped up” for every 191 KB block of initial max_memory, which isn’t likely to cause a problem for most machines.
    Things get more interesting when you consider a partitioned event session on a computer that has a large number of logical CPUs or NUMA nodes. Since each buffer gets “stepped up” when you break a boundary, the delta can get much larger because it’s multiplied by the number of buffers. For example, a machine with 64 logical CPUs will have 160 buffers using per_cpu partitioning, or if you have 8 NUMA nodes configured on that machine you would have 24 buffers when using per_node. If you’ve just broken through a 64K boundary and get “stepped up” to the next buffer size, you’ll end up with a total buffer size approximately 10240 KB and 1536 KB respectively (64K * # of buffers) larger than the max_memory value you might think you’re getting. Using per_cpu partitioning on a large machine has the most impact because of the large number of buffers created.
    If the amount of memory being used by your system within these ranges is important to you then this is something worth paying attention to and considering when you configure your event sessions. The DMV dm_xe_sessions is the tool to use to identify the exact buffer size for your sessions. In addition to the regular buffers (read: event session buffers) you’ll also see the details for large buffers if you have configured MAX_EVENT_SIZE. The “buffer steps” for any given hardware configuration should be static within each partition mode, so if you want to have a handy reference available when you configure your event sessions you can use the following code to generate a range table similar to the one above that is applicable for your specific machine and chosen partition mode.
        DECLARE @buf_size_output table (input_memory_kb bigint, total_regular_buffers bigint, regular_buffer_size bigint, total_buffer_size bigint)
        DECLARE @buf_size int, @part_mode varchar(8)
        SET @buf_size = 1 -- Set to the beginning of your max_memory range (KB)
        SET @part_mode = 'per_cpu' -- Set to the partition mode for the table you want to generate
        WHILE @buf_size <= 4096 -- Set to the end of your max_memory range (KB)
        BEGIN
            BEGIN TRY
                IF EXISTS (SELECT * from sys.server_event_sessions WHERE name = 'buffer_size_test')
                    DROP EVENT SESSION buffer_size_test ON SERVER
                DECLARE @session nvarchar(max)
                SET @session = 'create event session buffer_size_test on server
                                add event sql_statement_completed
                                add target ring_buffer
                                with (max_memory = ' + CAST(@buf_size as nvarchar(4)) + ' KB, memory_partition_mode = ' + @part_mode + ')'
                EXEC sp_executesql @session
                SET @session = 'alter event session buffer_size_test on server
                                state = start'
                EXEC sp_executesql @session
                INSERT @buf_size_output (input_memory_kb, total_regular_buffers, regular_buffer_size, total_buffer_size)
                    SELECT @buf_size, total_regular_buffers, regular_buffer_size, total_buffer_size
                    FROM sys.dm_xe_sessions WHERE name = 'buffer_size_test'
            END TRY
            BEGIN CATCH
                INSERT @buf_size_output (input_memory_kb)
                    SELECT @buf_size
            END CATCH
            SET @buf_size = @buf_size + 1
        END
        DROP EVENT SESSION buffer_size_test ON SERVER
        SELECT MIN(input_memory_kb) start_memory_range_kb, MAX(input_memory_kb) end_memory_range_kb,
               total_regular_buffers, regular_buffer_size, total_buffer_size
        FROM @buf_size_output
        GROUP BY total_regular_buffers, regular_buffer_size, total_buffer_size
    Thanks to Jonathan for an interesting question and a chance to explore some of the details of Extended Event internals. - Mike

    Read the article

  • Don’t learn SSDT, learn about your databases instead

    - by jamiet
    Last Thursday I presented my session “Introduction to SSDT” at the SQL Supper event held at the offices of 7 Digital (loved the samosas, guys). I did my usual spiel, tour of the IDE, connected development, declarative database development yadda yadda yadda… and at the end asked if there were any questions. One gentleman in attendance (sorry, can’t remember your name) raised his hand and stated that by attempting to evangelise all of the features I’d missed the single biggest benefit of SSDT: that it can tell you stuff about your database that you didn’t already know. I realised that he was dead right.
    SSDT allows you to import your whole database schema into a new project and it will instantly give you a list of errors and/or warnings pertaining to the objects in your database. Invalid references (e.g. a long-forgotten stored procedure that refers to a non-existent column), unnecessary 3-part naming, incorrect case usage, syntax errors… it’ll tell you about all of ‘em! Turn on static code analysis (this article shows you how) and you’ll learn even more, such as any stored procedures that begin with “sp_”, WHERE clauses that will kill performance, use of @@IDENTITY instead of SCOPE_IDENTITY(), use of deprecated syntax, implicit casts etc…. the list goes on and on.
    I urge you to download and install SSDT (takes a few minutes, it’s free and you don’t need SQL Server or Visual Studio pre-installed), start a new project: right-click on your new project and import from your database: and see what happens: You may be surprised what you discover. Let me know in the comments below what results you get, total number of objects, number of errors/warnings, I’d be interested to know! @Jamiet
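    To give a flavour of the sort of issue those analysis rules flag, here is a small illustrative snippet of my own (the dbo.Orders table is hypothetical, not something from the session) showing why the @@IDENTITY warning exists:
        -- Insert a row into a table that has an IDENTITY column.
        INSERT INTO dbo.Orders (CustomerName) VALUES (N'Contoso');

        SELECT @@IDENTITY       AS last_identity_any_scope,   -- can pick up a value generated by a trigger on another table
               SCOPE_IDENTITY() AS last_identity_this_scope;  -- limited to the current scope, so generally the safer choice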

    Read the article

  • 30 seconds from File|New to a new CRUD Silverlight application with Teleriks new LINQ Implementation

    Last month Telerik released its new LINQ implementation and last week we released the new Data Services Wizard for Telerik OpenAccess, which supports both traditional OpenAccess entities and the new LINQ implementation. I will walk you through the process where you can connect to a database, add a new domain model, wrap it in a new WCF Data Services (Astoria) service, and add a CRUD-enabled Silverlight application. All in 30 seconds!
    Step 1: Build your Domain Model (20 seconds)
    Open Visual Studio 2010 RTM (or 2008) and add a new ASP.NET project. Right click on the project and select Add|New Item and choose Telerik OpenAccess Domain Model from the item template list. The Visual Entity Designer wizard comes up. Select the database server you are using in the first screen (SQL Server, Oracle, SQL Azure, MySQL, etc) and then also build your database connection string. Next select the tables, views, and stored procedures you want ...

    Read the article

  • SQLAuthority News – Download Whitepaper – A Case Study on “Hekaton” against RPM – SQL Server 2014 CTP1

    - by Pinal Dave
    In this new world of social media, apps and mobile devices, we are all now getting impatient. Automatic updates have spoiled a few of our habits. When a new feature is released everybody wants to immediately adopt the feature and start using it. Though this is true in the world of apps and smart phones, it is still not possible in the developer’s world. When new features are around, before we start using them, we need to spend quite a lot of time to understand and test them. Once we are sold on the feature we refer the feature to our manager and eventually the entire organization makes decisions on upgrading to use the new feature.
    Similarly, when the new feature of In-Memory OLTP was announced, pretty much every SQL Server DBA wanted to implement that on their server. Though the implementation of the feature is not hard, it is not that easy either. One has to do proper research about their own environment and workload before implementing this feature. Microsoft has recently released a Case Study on the In-Memory OLTP feature. Here is the abstract from the white paper itself:
    I/O latch can cause session delays that impact application performance. This white paper describes the procedures and common I/O latch issues when migrating to Hekaton in SQL Server 2014. It also includes challenges that occurred during the migration and the performance analysis at different stages.
    If you are going to implement an In-Memory OLTP database, this is a good case study to refer to. Download the white paper from here.
    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, SQLAuthority News, T SQL
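    For anyone who has not yet seen what the feature looks like in DDL, this is a minimal illustrative sketch of a memory-optimized table (my own example, not taken from the white paper); it assumes the database already has a MEMORY_OPTIMIZED_DATA filegroup, and the table name and bucket count are placeholders:
        -- Hypothetical memory-optimized table for SQL Server 2014 In-Memory OLTP.
        CREATE TABLE dbo.ShoppingCart
        (
            CartId   INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
            Payload  NVARCHAR(1000) NULL
        )
        WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);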

    Read the article

  • SQL SERVER – Extending SQL Azure with Azure worker role – Guest Post by Paras Doshi

    - by pinaldave
    This is a guest post by Paras Doshi. Paras Doshi is a research intern at SolidQ.com and a Microsoft Student Partner. He is currently working in the domain of SQL Azure.
    SQL Azure is nothing but a SQL Server in the cloud. SQL Azure provides benefits such as on demand rapid provisioning, cost-effective scalability, high availability and reduced management overhead. To see an introduction on SQL Azure, check out the post by Pinal here.
    In this article, we are going to discuss how to extend SQL Azure with the Azure worker role. In other words, we will attempt to write custom code and host it in the Azure worker role; the aim is to add some features that are not available with SQL Azure currently or features that need to be customized for flexibility. This way we extend the SQL Azure capability by building some solutions that run on Azure as worker roles. To understand the Azure worker role, think of it as a windows service in the cloud. The Azure worker role can perform background processes, and to handle processes such as synchronization and backup, it becomes our ideal tool.
    First, we will focus on writing worker role code that synchronizes SQL Azure databases. Before we do so, let’s see some scenarios in which synchronization between SQL Azure databases is beneficial:
    - Scaling out access over multiple databases enables us to handle workload efficiently
    - As of now, a SQL Azure database can be hosted in one of any six datacenters. By synchronizing databases located in different data centers, one can extend the data by enabling access to geographically distributed data
    Let us see some scenarios in which SQL Server to SQL Azure database synchronization is beneficial:
    - To backup the SQL Azure database on local infrastructure
    - Rather than investing in local infrastructure for increased workloads, such workloads could be handled by the cloud
    - Ability to extend data to different datacenters located across the world to enable efficient data access from remote locations
    Now, let us develop a cloud-based app that synchronizes SQL Azure databases. For an introduction to developing cloud based apps, click here. Now, in this article, I aim to provide a bird’s eye view of what code that synchronizes SQL Azure databases looks like and then list resources that can help you develop the solution from scratch.
    Now, if you newly add a worker role to the cloud-based project, this is how the code will look. (Note: I have added comments to the skeleton code to point out the modifications that will be required in the code to carry out the SQL Azure synchronization. Note the placement of the Setup() and Sync() functions.) Click here (http://parasdoshi1989.files.wordpress.com/2011/06/code-snippet-1-for-extending-sql-azure-with-azure-worker-role1.pdf )
    Enabling SQL Azure databases synchronization through the sync framework is a two-step process. In the first step, the database is provisioned and the sync framework creates tracking tables, stored procedures, triggers, and tables to store metadata to enable synchronization. This is a one time step. The code for the same is put in the setup() function which is called once when the worker role starts. Now, the second step is continuous (or on demand) synchronization of SQL Azure databases by propagating changes between databases. This is done on a continuous basis by calling the sync() function in the while loop. The code logic to synchronize changes between SQL Azure databases should be put in the sync() function. Discussing the coding part step by step is out of the scope of this article.
Therefore, let me suggest you a resource, which is given here. Also, note that before you start developing the code, you will need to install SYNC framework 2.1 SDK (download here). Further, you will reference some libraries before you start coding. Details regarding the same are available in the article that I just pointed to. You will be charged for data transfers if the databases are not in the same datacenter. For pricing information, go here Currently, a tool named DATA SYNC, which is built on top of sync framework, is available in CTP that allows SQL Azure <-> SQL server and SQL Azure <-> SQL Azure synchronization (without writing single line of code); however, in some cases, the custom code shown in this blogpost provides flexibility that is not available with Data SYNC. For instance, filtering is not supported in the SQL Azure DATA SYNC CTP2; if you wish to have such a functionality now, then you have the option of developing a custom code using SYNC Framework. Now, this code can be easily extended to synchronize at some schedule. Let us say we want the databases to get synchronized every day at 10:00 pm. This is what the code will look like now: (http://parasdoshi1989.files.wordpress.com/2011/06/code-snippet-2-for-extending-sql-azure-with-azure-worker-role.pdf) Don’t you think that by writing such a code, we are imitating the functionality provided by the SQL server agent for a SQL server? Think about it. We are scheduling our administrative task by writing custom code – in other words, we have developed a “Light weight SQL server agent for SQL Azure!” Since the SQL server agent is not currently available in cloud, we have developed a solution that enables us to schedule tasks, and thus we have extended SQL Azure with the Azure worker role! Now if you wish to track jobs, you can do so by storing this data in SQL Azure (or Azure tables). The reason is that Windows Azure is a stateless platform, and we will need to store the state of the job ourselves and the choice that you have is SQL Azure or Azure tables. Note that this solution requires custom code and also it is not UI driven; however, for now, it can act as a temporary solution until SQL server agent is made available in the cloud. Moreover, this solution does not encompass functionalities that a SQL server agent provides, but it does open up an interesting avenue to schedule some of the tasks such as backup and synchronization of SQL Azure databases by writing some custom code in the Azure worker role. Now, let us see one more possibility – i.e., running BCP through a worker role in Azure-hosted services and then uploading the backup files either locally or on blobs. If you upload it locally, then consider the data transfer cost. If you upload it to blobs residing in the same datacenter, then no transfer cost applies but the cost on blob size applies. So, before choosing the option, you need to evaluate your preferences keeping the cost associated with each option in mind. In this article, I have shown that Azure worker role solution could be developed to synchronize SQL Azure databases. Moreover, a light-weight SQL server agent for SQL Azure can be developed. Also we discussed the possibility of running BCP through a worker role in Azure-hosted services for backing up our precious SQL Azure data. Thus, we can extend SQL Azure with the Azure worker role. But remember: you will be charged for running Azure worker roles. 
So at the end of the day, you need to ask – am I willing to build a custom code and pay money to achieve this functionality? I hope you found this blog post interesting. If you have any questions/feedback, you can comment below or you can mail me at Paras[at]student-partners[dot]com Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Azure, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Profile of Scott L Newman

    - by Ratman21
    To: Whom It May Concern
    From: Scott L Newman
    Date: 4/23/2010
    Re: Profile
    Who is he, what can he do? Two very good questions.
    #1. I am an Information Technology professional with 20+ years of experience (hold on, don’t hit delete yet!) who is not over the hill (I am on top of it) and still knows how to do (and can still do) that thing called work!
    #2. A can-do attitude that does not allow problems to sit unfixed.
    I have a broad range of skills, including:
    - Certified CompTIA A+, Security+ and Network+ Technician
    - 2.5 years (NOC) network experience on a large Cisco-based WAN – UK to Austria
    - 20 years experience in MIS/DP – yes, I can do IBM mainframes and Tandem NonStops too
    - 18 years experience as technical Help Desk support – panicking users, no problem
    - 18 years experience with PC/server based systems, intranet and internet systems
    - 10+ years experience with Microsoft Office, Windows XP and Data Network Fundamentals (YES, I do Windows)
    - Strong troubleshooting skills for software, hardware and circuit issues (and I can tell you what kind of horrors I had to face on all of them)
    - Very experienced in working with customers on problems – again, panicking users, no problem
    - Working experience with Remote Access (VPN/SecurID) – I didn’t just study them, I worked on/with them
    - Skilled in getting info for and creating documentation for operation procedures (I do not just wait for someone to give it to me, I go out and get it. Waiting for info on working applications is, well, dumb)
    - Multiple software languages (hey, I have done some programming)
    And much more experience in “IT” (mortgage, stocks and financial information systems experience, and I have worked “IT” in a hospital). I can multitask, and I also have the ability to adapt to change and learn quickly. (I was once put in charge of a system that I had not worked with for over two years. Talk about having to relearn and adapt to changes fast. But I did it.)
    The summary is that I know what to do, how to keep things going and how to fix it when it breaks.
    Scott L. Newman
    Confidential

    Read the article

  • Merging Waterfall and Agile – Getting the Worst of Both Worlds

    - by Nick Harrison
    Many people have seen and appreciate the elegance and practicality of agile methodologies. Sadly, there is still not widespread adoption. There is still push back from many directions and from many different sources:
    - Some people don't understand how it is supposed to work.
    - Some people don't believe that it could possibly work.
    - Some people mistakenly believe that it is just code for a lazy project team trying to wiggle out of structure.
    - Some people mistakenly believe that it can work only with a very small, highly trained team.
    - Some people are afraid of the control that they feel they will be losing.
    I have seen some people try to merge agile and waterfall hoping to achieve the best of both worlds. Unfortunately, the reality is that you end up with the worst of both worlds. And they both can get pretty bad.
    Another Sad Reality
    Some people, in an effort to get buy-in for following an agile methodology, have attempted to merge these two practices. Sometimes this may stem from trying to assuage individuals' fears of losing relevance. Sometimes it may be to meet contractual obligations or to fulfill regulatory requirements. Sometimes they may not know better. These two approaches to software development cannot coexist on the same project.
    Let's review the main tenets of the Agile Manifesto:
    - Individuals and interactions over processes and tools
    - Working software over comprehensive documentation
    - Customer collaboration over contract negotiation
    - Responding to change over following a plan
    That is, while there is value in the items on the right, we value the items on the left more.
    Meanwhile, the main tenets of the Waterfall Approach could be summarized as:
    - Processes and procedures over individuals
    - Comprehensive documentation proves that the software works
    - Well defined contracts and negotiations protect the customer relationship
    - If the plan is made right, there should be no change
    Merging these two approaches will always end badly.

    Read the article

  • PASS 13 Dispatches: Memory Optimized = On

    - by Tony Davis
    I'm at the PASS Summit in Charlotte for the Day 1 keynote by Quentin Clarke, Corporate VP of the data platform group at Microsoft. He's talking about how SQL Server 2014 is “pushing boundaries”, and first up is SQL Server 2014's In-Memory OLTP technology (former codename “hekaton”). It is a feature that provokes a lot of interest and for good reason as, without any need for application rewrites or hardware updates, it can enable us to ensure that an application can find in memory most or all of the data it needs, and can lead to huge improvements in processing times. A good recent hekaton use cases article talks about applications that need a “Shock Absorber” when either spikes or just a high rate of incoming workload (including data in ETL scenarios) become a primary bottleneck.
    To get a really deep look at this technology, I would check out David DeWitt's summit keynote tomorrow (it will be live streamed). Other than that, to get started I'd recommend Kalen Delaney's whitepaper. She offers a lot of insight into how it works and how to start to define memory-optimized tables and natively compiled stored procedures. These memory-optimized tables use completely optimistic multi-version concurrency control – no waiting on locks! After that, Tom LaRock has compiled a useful set of links to drill deeper, and includes one to Microsoft's AMR tool to help you gauge the tables that might benefit most. Tony.

    Read the article

  • Quick Outline: Navigating Your PL/SQL Packages in Oracle SQL Developer

    - by thatjeffsmith
    If you’re browsing your packages using the Connections panel, you have a nice tree navigator to click around your packages and your variables, procedures, and functions. Click, click, click all day long, click, click, click while I sing this song…
    But what if you drill into your PL/SQL source from the worksheet and don’t have the tree expanded? Let’s say you’re working on your script, something like - Hmm, what goes next again? So I need to reacquaint myself with just what my beer package requires, so I’m going to drill into it by doing a DESCRIBE (via SHIFT+F4), and now I have the package open. The package is open but the tree hasn’t auto-expanded. Please don’t tell me I have to do the click-click-click thing in the tree!?!
    Just Open the Quick Outline Panel
    Do you see it? Just right click in the procedure editor – select ‘Quick Outline’ in the context menu, and voila! The navigational power of the tree, without needing to drill down the tree itself. If I want to drill into my procedure declaration, just click on said procedure name in the Quick Outline panel. This works for both package specs and bodies. Technically you can use this for stand-alone procedures and functions, but the real power is demonstrated for packages.

    Read the article

  • Data Quality Through Data Governance

    Data Quality Governance
    Data quality is very important to every organization; bad data costs an organization time, money, and resources, all of which could be prevented if the proper governance were put in place.
    Data Governance Program Criteria:
    - Support from Executive Management and all Business Units
    - Data Stewardship Program
    - Cross Functional Team of Data Stewards
    - Data Governance Committee
    - Quality Structured Data
    It should go without saying, but any successful project in today’s business world must get buy-in from executive management and all stakeholders involved with the project. If management does not fully support a project because they do not see that it is in their and the company’s best interest, then they will remove/eliminate funding, resources and allocated time to work on the project. In essence they can render a project dead until it is officially killed by the business. In addition, buy-in from stakeholders is also very important because they can cause delays and increased spending of time, money and resources if they do not support a project.
    Data Stewardship programs are administered by a data steward manager whose primary focus is to support, train and manage a cross functional data stewards team. A cross functional team of data stewards, pulled from various departments, acts to ensure that all systems work together so that an organization’s goals are achieved. Typically, data stewards are subject matter experts that act as mediators between their respective departments and IT.
    Data Quality Procedures
    Data Governance Committees are composed of data stewards, upper management, IT leadership and various subject matter experts, depending on the company. The primary goal of this committee is to define strategic goals, coordinate activities, set data standards and offer data guidelines for the business.
    Data Quality Policies
    In 1997, Claudia Imhoff defined a Data Stewardship’s responsibility as to approve business naming standards, develop consistent data definitions, determine data aliases, develop standard calculations and derivations, document the business rules of the corporation, monitor the quality of the data in the data warehouse, define security requirements, and so forth. She further explains that data stewards are responsible for creating and enforcing policies on issues including, but not limited to, the following:
    - Resolving Data Integration Issues
    - Determining Data Security
    - Documenting Data Definitions, Calculations, Summarizations, etc.
    - Maintaining/Updating Business Rules
    - Analyzing and Improving Data Quality

    Read the article

  • Web workflow solution - how should I approach the design?

    - by Tom Pickles
    We've been tasked with creating a web based workflow tool to track change management. It has a single workflow with multiple synchronous tasks for the most part, but branches out at a point into tasks running in parallel which meet up later on. There will be all sorts of people using the application, and all of them will need to see their outstanding tasks for each change, but only theirs, not others'. There will also be a high level group of people who oversee all changes, so they need to see everything. They will need to see tasks which have not been done in the specified time, who's responsible, etc. The data will be persisted to a SQL database. It'll all be put together using .Net.
    I've been trying to learn and implement OOP into my designs of late, but I'm wondering if this is moot in this instance as it may be better to have the business logic for this in stored procedures in the DB. I could use POCOs, a front-end layer and a data access layer for the web application and just use it as a mechanism for CRUD actions on the DB, then use SPs fired in the DB to apply the business rules. On the other hand, I could use an object oriented design within the web app, but as the data in the app is stateless, is this a bad idea? I could try and model out the whole application into a class structure, implementing interfaces, base classes and all that good stuff. So I would create a change class, which contained a list of task classes/types, which defined each task, and implement an ITask interface etc. Put end-user types into the tasks to identify who should be doing what task. Then apply all the business logic in the respective class methods etc.
    What approach do you guys think I should be using for this solution?
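    For what it's worth, a rough relational sketch of the entities described above (my own illustration, with hypothetical table and column names) might look like this; it keeps the "show me my outstanding tasks" query simple regardless of where the business rules end up living:
        -- Hypothetical change/task schema for the workflow described in the question.
        CREATE TABLE dbo.Change
        (
            ChangeId INT IDENTITY PRIMARY KEY,
            Title    NVARCHAR(200) NOT NULL
        );

        CREATE TABLE dbo.Task
        (
            TaskId        INT IDENTITY PRIMARY KEY,
            ChangeId      INT NOT NULL REFERENCES dbo.Change (ChangeId),
            AssignedTo    NVARCHAR(100) NOT NULL,   -- user or end-user type responsible for the task
            DueDate       DATETIME NULL,
            CompletedDate DATETIME NULL
        );

        -- Outstanding tasks for a single user:
        SELECT t.TaskId, c.Title, t.DueDate
        FROM dbo.Task AS t
        JOIN dbo.Change AS c ON c.ChangeId = t.ChangeId
        WHERE t.AssignedTo = N'tom'
          AND t.CompletedDate IS NULL;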

    Read the article

  • AMR's 2010 Supply Chain Top 25 Report: Early Predictions

    - by [email protected]
    On April 6th, AMR's Debra Hoffman and Kevin O'Marah presented their annual 'Top 25 Supply Chain' predictions. For supply chain professionals, it was a 'must-hear' event, especially with the new focus on both operational excellence and innovation excellence. Most people think of R&D as the primary driver for innovation, but in today's 'new normal' firms need to constantly review, evaluate and update their workflow procedures and business processes to maintain a sharp blade on the leading edge. Having the right tools in place to be able to monitor supply chain effectiveness becomes paramount to firms as they compete in the global marketplace. Organizations need user-friendly and role-based dashboards with early alerts to contextualize activities and post the best options for managers to make better and more informed decisions.
    The 2009 winners were: 1. Apple, 2. Dell, 3. P&G, 4. IBM, 5. Cisco, 6. Nokia, 7. Walmart, 8. Samsung, 9. PepsiCo, 10. Toyota, 11. Schlumberger, 12. J&J, 13. Coke, 14. Nike, 15. Tesco, 16. Disney, 17. HP, 18. TI, 19. Lockheed Martin, 20. Colgate, 21. Best Buy, 22. Unilever, 23. Publix, 24. Sony Ericsson, 25. Intel

    Read the article

  • links for 2011-03-02

    - by Bob Rhubart
    Oracle Technology Network Architect Day: Denver Registration is now open. Sessions will cover IT Optimization and consolidation, cloud computing, the evolving role of enterprise IT, and more. (tags: oracle otn entarch event denver) SOA Suite Integration: Part 2: A basic BPEL process (The Shorten Spot) The latest post in Anthony's Shorten's series about SOA Suite integration with Oracle Utilities Application Framework. (tags: oracle otn soa bpel soasuite) ADF: How to create web service based ADF pages The first in promised series of three posts on the topic by Marianne Horsch. (tags: oracle soa webservices adf) David Butler: MDM Poised for Growth (Oracle Master Data Management) David says: "Businesses are talking about the need to fix master data before they can successfully move forward on SOA initiatives. And the growing demands for compliance continue to be a major driver." (tags: oracle otn mdm) Cloud governance is about more than security | The Pervasive Data Center - CNET News Legal and regulatory procedures, transparency, service levels, indemnification, and more are all part of a broader governance landscape that requires IT to work closely with business users. Read this blog post by Gordon Haff on The Pervasive Data Center. (tags: ping.fm) Senthilkumar Rajendran's Blog: Horizontal Scaling OBIEE 11g (tags: ping.fm) InfoQ: Searching Without Objectives Kenneth O. Stanley considers that innovation is stifled when we are strictly following a high goal, and we would progress more when we are inclined to discovery rather than following an objective. (tags: ping.fm) InfoQ: Brownfield Software - Industrial Waste or Business Fertilizer? Josh Graham addresses 10 myths related to working on legacy software, attempting to prove that one can make good use of legacy code without having to rewrite the entire thing. (tags: ping.fm)

    Read the article

  • Spooling in SQL execution plans

    - by Rob Farley
    Sewing has never been my thing. I barely even know the terminology, and when discussing this with American friends, I even found out that half the words that Americans use are different to the words that English and Australian people use. That said – let’s talk about spools! In particular, the Spool operators that you find in some SQL execution plans. This post is for T-SQL Tuesday, hosted this month by me! I’ve chosen to write about spools because they seem to get a bad rap (even in my song I used the line “There’s spooling from a CTE, they’ve got recursion needlessly”). I figured it was worth covering some of what spools are about, and hopefully explain why they are remarkably necessary, and generally very useful. If you have a look at the Books Online page about Plan Operators, at http://msdn.microsoft.com/en-us/library/ms191158.aspx, and do a search for the word ‘spool’, you’ll notice it says there are 46 matches. 46! Yeah, that’s what I thought too... Spooling is mentioned in several operators: Eager Spool, Lazy Spool, Index Spool (sometimes called a Nonclustered Index Spool), Row Count Spool, Spool, Table Spool, and Window Spool (oh, and Cache, which is a special kind of spool for a single row, but as it isn’t used in SQL 2012, I won’t describe it any further here). Spool, Table Spool, Index Spool, Window Spool and Row Count Spool are all physical operators, whereas Eager Spool and Lazy Spool are logical operators, describing the way that the other spools work. For example, you might see a Table Spool which is either Eager or Lazy. A Window Spool can actually act as both, as I’ll mention in a moment. In sewing, cotton is put onto a spool to make it more useful. You might buy it in bulk on a cone, but if you’re going to be using a sewing machine, then you quite probably want to have it on a spool or bobbin, which allows it to be used in a more effective way. This is the picture that I want you to think about in relation to your data. I’m sure you use spools every time you use your sewing machine. I know I do. I can’t think of a time when I’ve got out my sewing machine to do some sewing and haven’t used a spool. However, I often run SQL queries that don’t use spools. You see, the data that is consumed by my query is typically in a useful state without a spool. It’s like I can just sew with my cotton despite it not being on a spool! Many of my favourite features in T-SQL do like to use spools though. This looks like a very similar query to before, but includes an OVER clause to return a column telling me the number of rows in my data set. I’ll describe what’s going on in a few paragraphs’ time. So what does a Spool operator actually do? The spool operator consumes a set of data, and stores it in a temporary structure, in the tempdb database. This structure is typically either a Table (ie, a heap), or an Index (ie, a b-tree). If no data is actually needed from it, then it could also be a Row Count spool, which only stores the number of rows that the spool operator consumes. A Window Spool is another option if the data being consumed is tightly linked to windows of data, such as when the ROWS/RANGE clause of the OVER clause is being used. You could maybe think about the type of spool being like whether the cotton is going onto a small bobbin to fit in the base of the sewing machine, or whether it’s a larger spool for the top. A Table or Index Spool is either Eager or Lazy in nature. 
    Eager and Lazy are Logical operators, which talk more about the behaviour, rather than the physical operation. If I’m sewing, I can either be all enthusiastic and get all my cotton onto the spool before I start, or I can do it as I need it. “Lazy” might not be the best word to describe a person – in the SQL world it describes the idea of either fetching all the rows to build up the whole spool when the operator is called (Eager), or populating the spool only as it’s needed (Lazy). Window Spools are both physical and logical. They’re eager on a per-window basis, but lazy between windows.
    And when is it needed? The way I see it, spools are needed for two reasons.
    1 – When data is going to be needed AGAIN.
    2 – When data needs to be kept away from the original source.
    If you’re someone that writes long stored procedures, you are probably quite aware of the second scenario. I see plenty of stored procedures being written this way – where the query writer populates a temporary table, so that they can make updates to it without risking the original table. SQL does this too. Imagine I’m updating my contact list, and some of my changes move data to later in the book. If I’m not careful, I might update the same row a second time (or even enter an infinite loop, updating it over and over). A spool can make sure that I don’t, by using a copy of the data. This problem is known as the Halloween Effect (not because it’s spooky, but because it was discovered in late October one year). As I’m sure you can imagine, the kind of spool you’d need to protect against the Halloween Effect would be eager, because if you’re only handling one row at a time, then you’re not providing the protection... An eager spool will block the flow of data, waiting until it has fetched all the data before serving it up to the operator that called it. In the query below I’m forcing the Query Optimizer to use an index which would be upset if the Name column values got changed, and we see that before any data is fetched, a spool is created to load the data into. This doesn’t stop the index being maintained, but it does mean that the index is protected from the changes that are being done.
    There are plenty of times, though, when you need data repeatedly. Consider the query I put above. A simple join, but then counting the number of rows that came through. The way that this has executed (be it ideal or not), is to ask that a Table Spool be populated. That’s the Table Spool operator on the top row. That spool can produce the same set of rows repeatedly. This is the behaviour that we see in the bottom half of the plan. In the bottom half of the plan, we see that a join is being done between the rows that are being sourced from the spool – one being aggregated and one not – producing the columns that we need for the query.
    Table v Index
    When considering whether to use a Table Spool or an Index Spool, the question that the Query Optimizer needs to answer is whether there is sufficient benefit to storing the data in a b-tree. The idea of having data in indexes is great, but of course there is a cost to maintaining them. Here we’re creating a temporary structure for data, and there is a cost associated with populating each row into its correct position according to a b-tree, as opposed to simply adding it to the end of the list of rows in a heap. Using a b-tree could even result in page-splits as the b-tree is populated, so there had better be a reason to use that kind of structure.
That all depends on how the data is going to be used in other parts of the plan. If you’ve ever thought that you could use a temporary index for a particular query, well this is it – and the Query Optimizer can do that if it thinks it’s worthwhile. It’s worth noting that just because a Spool is populated using an Index Spool, it can still be fetched using a Table Spool. The details about whether or not a Spool used as a source shows as a Table Spool or an Index Spool is more about whether a Seek predicate is used, rather than on the underlying structure. Recursive CTE I’ve already shown you an example of spooling when the OVER clause is used. You might see them being used whenever you have data that is needed multiple times, and CTEs are quite common here. With the definition of a set of data described in a CTE, if the query writer is leveraging this by referring to the CTE multiple times, and there’s no simplification to be leveraged, a spool could theoretically be used to avoid reapplying the CTE’s logic. Annoyingly, this doesn’t happen. Consider this query, which really looks like it’s using the same data twice. I’m creating a set of data (which is completely deterministic, by the way), and then joining it back to itself. There seems to be no reason why it shouldn’t use a spool for the set described by the CTE, but it doesn’t. On the other hand, if we don’t pull as many columns back, we might see a very different plan. You see, CTEs, like all sub-queries, are simplified out to figure out the best way of executing the whole query. My example is somewhat contrived, and although there are plenty of cases when it’s nice to give the Query Optimizer hints about how to execute queries, it usually doesn’t do a bad job, even without spooling (and you can always use a temporary table). When recursion is used, though, spooling should be expected. Consider what we’re asking for in a recursive CTE. We’re telling the system to construct a set of data using an initial query, and then use set as a source for another query, piping this back into the same set and back around. It’s very much a spool. The analogy of cotton is long gone here, as the idea of having a continual loop of cotton feeding onto a spool and off again doesn’t quite fit, but that’s what we have here. Data is being fed onto the spool, and getting pulled out a second time when the spool is used as a source. (This query is running on AdventureWorks, which has a ManagerID column in HumanResources.Employee, not AdventureWorks2012) The Index Spool operator is sucking rows into it – lazily. It has to be lazy, because at the start, there’s only one row to be had. However, as rows get populated onto the spool, the Table Spool operator on the right can return rows when asked, ending up with more rows (potentially) getting back onto the spool, ready for the next round. (The Assert operator is merely checking to see if we’ve reached the MAXRECURSION point – it vanishes if you use OPTION (MAXRECURSION 0), which you can try yourself if you like). Spools are useful. Don’t lose sight of that. Every time you use temporary tables or table variables in a stored procedure, you’re essentially doing the same – don’t get upset at the Query Optimizer for doing so, even if you think the spool looks like an expensive part of the query. I hope you’re enjoying this T-SQL Tuesday. Why not head over to my post that is hosting it this month to read about some other plan operators? 
At some point I’ll write a summary post – once I have, you should find a comment below pointing at it. @rob_farley


  • SQL SERVER – Select Columns from Stored Procedure Resultset

    - by Pinal Dave
    It is fun to go back to basics often. Here is one classic question: “How to select columns from Stored Procedure Resultset?” Though stored procedures were introduced many years ago, the question of retrieving columns from a stored procedure’s result set is still very popular with beginners. Let us see the solution in quick steps.

    First we will create a sample stored procedure.

        CREATE PROCEDURE SampleSP
        AS
        SELECT 1 AS Col1, 2 AS Col2
        UNION
        SELECT 11, 22
        GO

    Now we will create a table where we will temporarily store the result set of the stored procedure. We will use INSERT INTO ... EXEC to retrieve the values and insert them into the temporary table.

        CREATE TABLE #TempTable (Col1 INT, Col2 INT)
        GO
        INSERT INTO #TempTable
        EXEC SampleSP
        GO

    Next we will retrieve our data from the temporary table.

        SELECT *
        FROM #TempTable
        GO

    Finally we will clean up all the objects which we have created.

        DROP TABLE #TempTable
        DROP PROCEDURE SampleSP
        GO

    Let me know if you want me to share more such back-to-basics tips. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Stored Procedure, SQL Tips and Tricks, T SQL
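    As a small follow-up to the steps above: once the result set has landed in the temporary table, picking out just the columns (or rows) you are interested in is ordinary T-SQL – for example, running this before the final clean-up step:

        SELECT Col1
        FROM #TempTable
        WHERE Col2 > 2
        GO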


  • Thoughts on the Nomination Committee Campaign 2014

    - by Testas
    Congratulations to Erin, Andy and Allen on making the Nomination Committee for 2014. As Mark Broadbent (@retracement) stated in his tweet, there’s a great set of individuals for the Nom Com, and I could not agree more. I know Erin and Allen, and I know how much value they will bring to the process. I don’t know Andy as well, but I am sure he will do a great job and I hope I can meet him at PASS soon. The final candidate appointed by the PASS board is Rick Bolesta, who brings a wealth of experience to the process. I also want to take the opportunity to thank all who have voted – not just for me, but for all the candidates during the election. Your contribution is greatly appreciated. Would I apply for the Nom Com again? Yes I would. My first election experience has been a learning experience in itself, so I accept the result and look forward to applying next year.

    Moving on from this, I do want to express my opinion about the lack of international representation in the election process. One of the tweets that I saw after the result was from Adam Machanic (@AdamMachanic), who commented on the lack of international members on the Nom Com. If truth be told, I was disappointed – when the candidate list was released – that for the second time in recent elections there was a lack of international candidates on the candidate list. It feels as though only Brits and Americans partake in such elections. This is a real shame, and I can’t help thinking why this is the case. Hugo Kornelis (@Hugo_Kornelis) wrote a blog here to express his thoughts, and he did raise some valid points. I don’t know why there is an absence of international candidates. I know that the team at PASS are looking to improve the situation, so I do not want to give the impression that PASS are doing nothing. For reference, please see Bill Graziano’s article here to see how PASS are addressing the situation. There is a clear direction to change the rules within PASS to give greater inclusion of international members. In addition to this, I wanted to explore a couple of potential approaches to address the situation. I am not saying that they are the right answer, but when I see challenges, I like to bring potential solutions to the table.

    1. Use the PASS mission statement to define tactical objectives that engage community leaders in the election process. If you are not familiar with the PASS mission statement, let me provide it here as laid out on the PASS website: “Empower data professionals who leverage Microsoft technologies to connect, share, and learn through networking, knowledge sharing, and peer-based learning”. PASS fulfil this mission statement regularly, whether you attend SQL Saturday, SQLRally, or the SQLPASS and BA conferences themselves. The biggest value of PASS is the ability to bring our profession together, and the 24 hour hop allows you to learn from the comfort of your own office/home. This mission should be extended to define tactical objectives that bring greater networking and knowledge sharing between PASS Chapter Leaders/Regional Mentors and PASS HQ. It should help educate the leaders about the opportunities of elections and how leaders can become involved. I know PASS engage with Chapter Leaders on a regular basis to discuss community matters for the benefit of PASS members. How could this be achieved? Perhaps PASS could hold a quarterly virtual meeting that specifically looks at helping leaders become more involved with the election process.
    2. Evolve the Global Growth Strategy into a Global Engagement Strategy. One of the remits of the PASS board over the last couple of years is the Global Growth strategy. This has been very successful, as we have seen the massive growth of events across the world, and for that I congratulate the board. Perhaps the time is now right to look at solidifying this success through a Global Engagement Strategy that starts with the collaboration of Chapter Leaders, Regional Mentors and Evangelists in their respective countries or regions. The engagement strategy should look at increasing collaboration between community leaders for the benefit of their respective communities. It should also provide a channel for encouraging leaders to put themselves forward for the elections. How could this be achieved? In the UK, there has been a big growth in PASS Chapters and SQL Server events that was approaching saturation point. The introduction of the Community Engagement Day – channelled through the SQLBits conference – has enabled Chapter Leaders to collaborate, connect and share with PASS, sponsors and Microsoft. It also provides the ability for Chapter Leaders to speak directly to the PASS representatives from PASS HQ, which brings with it the ability for PASS community evangelists to communicate PASS objectives. It has also been the event where we have found out about Chapter Leaders putting themselves forward for elections, and/or encouraged them to do so. People like encouragement and validation when going for something like an election, and being able to discuss this with peers at a dedicated event provides a useful platform. PASS has the people in place already to facilitate such an event. Regional Mentors could potentially help organise such events on an annual basis, with PASS HQ providing a room/Lync access for the event to take place. It would be really good if a PASS HQ representative could attend in person as well.

    3. Restrict candidates to serving only a limited number of terms. A frequent comment I saw on social networking was that the elections can be seen by some as a popularity contest. Perhaps by limiting the number of terms that an individual can serve on either the Nom Com or the BoD, other candidates may be encouraged to be more actively involved in the PASS election process. I don’t think that the current byelaws deal with this particular suggestion. I also saw a couple of tweets stating that more active community members did not apply for the Nom Com. I struggled to understand how the authors of those tweets measured “more active”; it just further solidified the subjective nature of elections. In the absence of changes to how candidates are put forward for the elections, a restriction of terms enables the opportunity to be extended to others. How could this be achieved? Set a resolution that is put to a community vote as to the viability of such a solution. For example, the questions for the vote could be: Should individuals on the Nom Com and BoD be limited to a certain number of terms? Yes/No. What is the maximum number of terms a candidate could serve? It would be simple to execute such a vote, and the community would have an opportunity to have a say in an important aspect of the PASS organisation. And if the change is successful, then add it as a byelaw.

    So there are some of my thoughts. I am not saying they are right or wrong.
    But I do hope that there is a concerted effort to encourage more candidates from other reaches of the globe to become involved with future elections. It would be good to hear your thoughts. Thanks, Chris


  • Remote Data connection in iphone app

    - by Tariq- iPHONE Programmer
    Hello, I am working on a social networking iPhone app which requires a remote data connection, so I hired a PHP developer to provide me with RESTful services. But when I started working with him, he argued that he will not create stored procedures or web services. Instead, he suggested that I pass the query as a parameter. For example, if I have to call a Search service, he told me to send a POST request with 3 parameters: Query="select * from users", username=abd and password = 123. I think there is no such architecture for accessing remote data. He then said it is possible through socket programming, and I am 100% sure this is not an appropriate way to access remote data. It is simply illogical – thousands of iPhone applications use REST/SOAP services to make remote data connections – yet he simply declined to provide RESTful services. Please, my sincere request to all developers is to post your own views here, so that I can show that developer that these are the views of developers worldwide.
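    For contrast, here is a minimal sketch – all of the names are hypothetical – of the kind of parameterized stored procedure the poster is asking for, so that the app (via a RESTful endpoint) sends only a search term rather than raw SQL:

        CREATE PROCEDURE dbo.SearchUsers
            @SearchTerm nvarchar(100)
        AS
        BEGIN
            SET NOCOUNT ON;
            -- the client never sends SQL text; it only supplies the search term
            SELECT UserID, UserName, DisplayName
            FROM dbo.Users
            WHERE UserName LIKE '%' + @SearchTerm + '%';
        END
        GO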


  • How to keep word document, html and pdf documentation aligned

    - by dendini
    Is there a way to write documentation in a WYSIWYG editor which can then be exported to HTML, Word and PDF, keeping the copies synchronized? This documentation is mostly technical notes and some contextual help for various pieces of software, so it must contain images and some styling; it is not programmer's documentation (API or function listings), for which a program like Javadoc or Doxygen would probably be the best choice. For example, how do companies with hundreds of different software lines and thousands of programmers deal with this? I have several candidate solutions, but they all seem lacking in some respect:
    LaTeX/TeX: very good PDF and HTML export, but not very user friendly and no full-blown WYSIWYG editor available.
    LibreOffice/OpenOffice: full-blown WYSIWYG editor, however the HTML export is not so good (the exported HTML needs manual editing and then has to be maintained separately).
    MediaWiki or any other wiki: documentation could be kept in wikitext format, so HTML is generated automatically, and PDF export is quite good with the many available plugins. However, this again requires some training for the staff and a server to be set up.
    Notice I'm not asking for software A vs software B; I'm asking for general advice, how big companies handle documentation procedures, and yes, some software product names if available.


  • HTML5 Development for Dummies

    - by Geertjan
    What's HTML5 all about and what does it actually mean, concretely, to develop HTML5 applications? NetBeans IDE 7.3 provides something called "Project Easel", which is a bundling of HTML5-related tools into a coherent toolset. Within a matter of hours, you'll know everything you need to know about what all this is about if you follow the steps below.
    1. Get A Solid Overview. Start by viewing this screencast from JavaOne 2012 (click the media link on the right side once you've clicked the link below; a downloadable MP4 file is also available there): https://oracleus.activeevents.com/connect/sessionDetail.ww?SESSION_ID=4038 That is an awesome way to get you in the right mindframe for what HTML5 is and how it fits into the programming world, together with a very cool and entertaining demo, presented by JB Brock. He starts with about three slides and then does a super awesome demo that puts you into the picture very quickly.
    2. Understand How HTML5 Relates To Java EE. Now here's a very cool follow-up to the above, again demo-driven (click the media links on the right side once you've clicked the link below): https://oracleus.activeevents.com/connect/sessionDetail.ww?SESSION_ID=4737 David Konecny takes the Affable Bean project created via the NetBeans E-commerce Tutorial and creates an HTML5 front end for it! I.e., you are shown how HTML5 can provide a different front end, as an alternative to JSF. Why would you do that? Well, that's explained in David's session, as well as in JB Brock's session, i.e., choose the right technology for the right situation. Sometimes HTML5 might make sense, other times JSF might make sense.
    3. Follow The NetBeans Screencasts. To revise and firm up everything you've learned from the above two JavaOne sessions, watch two screencasts by Ken Ganfield: part 1, Getting Started with HTML5, and part 2, Working with JavaScript in HTML5 Applications. In particular, you'll learn how NetBeans IDE provides tools to thoroughly cover the needs of HTML5 developers.
    Having taken the above three steps, you now have a thorough background, together with an understanding of the tools and procedures needed for creating your own HTML5 applications.

